\begin{document} \title{Galois Theory -- a first course} \begin{pspicture}(0,0)(14,6) \rput(6.5,3){\BoxedEPSF{dodecahedron2.eps scaled 250}} \end{pspicture} \tableofcontents \section*{Introductory Note}\label{preface} \label{`preface.page'} These notes are a self-contained introduction to Galois theory, designed for the student who has done a first course in abstract algebra. To not clutter up the theorems too much, I have made some restrictions in generality. For example, all rings are with 1; all ideals are principal; all fields are perfect -- in fact, extensions of $\Q$ or of finite fields; consequently all field extensions are separable; and so on. This won't be to everyone's taste. The following prerequisites are assumed, although there are reminders: the basics of linear algebra, particularly the span and independence of a set of vectors; the idea of a basis and hence the dimension of a vector space. In group theory the fundamentals up to Lagrange's theorem and the first isomorphism theorem. In ring and field theory the definitions and some examples, but probably not much else. There are many books on linear algebra and group theory for beginners. My personal favourite is: \begin{biblist} \bib{MR965514}{book}{ author={Armstrong, M. A.}, title={Groups and symmetry}, series={Undergraduate Texts in Mathematics}, publisher={Springer-Verlag, New York}, date={1988}, pages={xii+186}, isbn={0-387-96675-7}, review={\MR{965514}}, doi={10.1007/978-1-4757-4034-9}, } \end{biblist} Most of the results and proofs are standard and can be found in any book on Galois theory, but I am particularly indebted to the book of Joseph Rotman: \begin{biblist} \bib{Rotman90}{book}{ author={Rotman, Joseph}, title={Galois theory}, series={Universitext}, publisher={Springer-Verlag, New York}, date={1990}, pages={xii+108}, isbn={0-387-97305-2}, review={\MR{1064318}}, doi={10.1007/978-1-4684-0367-1}, } \end{biblist} In particular the proofs I give of Theorems C and E, the Fundamental Theorem of Algebra and the Theorem of Abel-Ruffini, are Rotman's proofs with some elaboration added. The statements (although not the proofs) of Theorems F and G are also his. The figure depicting the $(a,b)$-plane at the end of Section \ref{solving.equations} is redrawn from the Mathematica poster \emph{Solving the Quintic}. \subsection*{The Cover} The cover shows a Cayley graph for the smallest non-Abelian simple group -- the alternating group $A_5$. We will see that the simplicity of this group means there is no formula for the roots of the polynomial $x^5-4x+2$, using only the ingredients $$ \frac{a}{b}\in\Q, +,-,\times,\div,\sqrt[2]{},\sqrt[3]{},\sqrt[4]{},\sqrt[5]{},\ldots $$ Therefore, there can be no formula for the solutions of a quintic equation $$ax^5+bx^4+cx^3+dx^2+ex+f=0$$ that works for all possible $a,b,c,d,e,f\in\ams{C}}\def\Q{\ams{Q}$. \begin{figure} \caption{The Cayley graph for the smallest non-Abelian simple group, the alternating group $A_5$, with respect to $\sigma=(1,2,3,4,5)$ -- the blue edges -- and $\tau=(1,2)(3,4)$ -- the black edges.} \label{fig:the_cover:Cayley_graphA5} \end{figure} A Cayley graph is a picture of the multiplication in the group. Let $\sigma=(1,2,3,4,5)$. Each blue pentagonal face can be oriented anti-clockwise when you look at it from the outside of the ball. Crossing a blue edge anti-clockwise corresponds to $\sigma$ and crossing in the reverse direction (clockwise) corresponds to $\sigma^{-1}$. Crossing a black edge in either direction corresponds to the element $\tau=(1,2)(3,4)$.
The vertices correspond to the 60 elements of $A_5$ -- the front ones are marked, with the identity element in the center. If a path $\gamma$ starts at the vertex corresponding to $\mu_1\in A_5$ and finishes at $\mu_2\in A_5$, then reading the $\sigma$ and $\tau$ labels off $\gamma$ as you travel along it gives $\mu_1\gamma=\mu_2$. For example, the red path gives $(1,2,3,4,5)\cdot\sigma\tau\sigma^{2}\tau\sigma^{-2}\tau\sigma=(2,5)(3,4)$. It is a curious coincidence that the Cayley graph of the smallest non-Abelian simple group is the shape of the simplest known pure form of carbon -- Buckminsterfullerene $C_{60}$. \section{What is Galois Theory?} \label{lect1} A quadratic equation $ax^2+bx+c=0$ has exactly two -- possibly repeated -- solutions in the complex numbers. There is a formula for them that appears in the ninth century book {\em Hisab al-jabr w'al-muqabala}\footnote{\emph{al-jabr\/}, hence ``algebra''.}, by the Persian mathematician Abu Abd-Allah ibn Musa al'Khwarizmi. In modern notation it says: $$ x=\frac{-b\pm\kern-2pt\sqrt{b^2-4ac}}{2a}. $$ Less familiar maybe, $ax^3+bx^2+cx+d=0$ has three $\ams{C}}\def\Q{\ams{Q}$-solutions, and they too can be expressed algebraically using Cardano's formula. One solution turns out to be \begin{equation*} \begin{split} -\frac{b}{3a} &+\sqrt[3]{-\frac{1}{2}\biggl(\frac{2b^3}{27a^3}-\frac{bc}{a^2}+\frac{d}{a}\biggr) +\sqrt{\frac{1}{4} \biggl(\frac{2b^3}{27a^3}-\frac{bc}{a^2}+\frac{d}{a}\biggr)^2 +\frac{1}{27}\biggl(\frac{c}{a}-\frac{b^2}{3a^2}\biggr)^3}}\\ &+\sqrt[3]{-\frac{1}{2}\biggl(\frac{2b^3}{27a^3}-\frac{bc}{a^2}+\frac{d}{a}\biggr) -\sqrt{\frac{1}{4} \biggl(\frac{2b^3}{27a^3}-\frac{bc}{a^2}+\frac{d}{a}\biggr)^2 +\frac{1}{27}\biggl(\frac{c}{a}-\frac{b^2}{3a^2}\biggr)^3}}, \end{split} \end{equation*} and the other two have similar expressions. There is an even more complicated formula, attributed to Descartes, for the roots of a quartic polynomial equation. What is kind of miraculous is not that the solutions exist, but that they can always be expressed in terms of the coefficients and the basic algebraic operations, $$ +,-,\times,\div,\sqrt{},\sqrt[3]{},\sqrt[4]{},\sqrt[5]{},\ldots $$ By the turn of the 19th century, no equivalent formula for the solutions to a quintic (degree five) polynomial equation had materialised, and it was Abel who had the crucial realisation: {\em no such formula exists}. Such a statement can be interpreted in a number of ways. Does it mean that there are always algebraic expressions for the roots of quintic polynomials, but their form is too complex for one {\em single\/} formula to describe all the possibilities? It would then be necessary to have a number of formulas, maybe even infinitely many. The reality turns out to be far worse: there are specific polynomials, such as $x^5-4x+2$, whose solutions cannot be expressed algebraically in any way whatsoever. A few years later, \'{E}variste Galois started thinking about the deeper problem: {\em why\/} don't these formulae exist? Thus, Galois theory was originally motivated by the desire to understand, in a much more precise way, the solutions to polynomial equations. Galois' idea was this: study the solutions by studying their ``symmetries''. Nowadays, when we hear the word symmetry, we normally think of group theory. To reach his conclusions, Galois kind of invented group theory along the way. In studying the symmetries of the solutions to a polynomial, Galois theory establishes a link between these two areas of mathematics. We illustrate the idea, in a somewhat loose manner, with an example.
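As a quick warm-up, and to meet the number at the heart of that example, it is worth checking Cardano's formula once on an equation whose answer we already know. For $x^3-2=0$ we have $a=1$, $b=c=0$ and $d=-2$, so that $\frac{2b^3}{27a^3}-\frac{bc}{a^2}+\frac{d}{a}=-2$ and $\frac{c}{a}-\frac{b^2}{3a^2}=0$, and the solution displayed above collapses to $$ \sqrt[3]{1+\sqrt{1}}+\sqrt[3]{1-\sqrt{1}}=\sqrt[3]{2}+0=\sqrt[3]{2}, $$ the real cube root of $2$ -- reassuring, if hardly the fastest route to it.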
\subsection{The symmetries of the solutions to $x^3-2=0$.} \paragraph{\hspace*{-0.3cm}} \parshape=2 0pt\hsize 0pt.75\hsize We work in $\ams{C}}\def\Q{\ams{Q}$. Let $\alpha$ be the real cube root of $2$, ie: $\alpha=\sqrt[3]{2}\in\R$ and $\omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}.$ Note that $\omega$ is a cube root of $1$, and so $\omega^3=1$. \vadjust{ \smash{\lower 72pt \llap{ \begin{pspicture}(0,0)(3,2) \rput(1,0.25){ \uput[0]{270}(-.1,2){\pstriangle[fillstyle=solid,fillcolor=lightgray](1,0)(2,1.73)} \rput(2,1){$\alpha$} \rput(-0.2,-0.2){$\alpha\omega^2$} \rput(-0.20,2.2){$\alpha\omega$} \rput(-0.7,1){${\red s}$}\rput(1.3,2.2){${\red t}$} \psline[linecolor=red](-0.5,1)(1.75,1) \psline[linecolor=red](0.08,0)(1.26,2) } \end{pspicture} }}}\ignorespaces \parshape=7 0pt.75\hsize 0pt.75\hsize 0pt.75\hsize 0pt.75\hsize 0pt.75\hsize 0pt.75\hsize 0pt\hsize The three solutions to $x^3-2=0$ (or {\em roots\/} of $x^3-2$) are the complex numbers $\alpha,\alpha\omega$ and $\alpha\omega^2$, forming the vertices of the equilateral triangle shown. The triangle has what we might call ``geometric symmetries'': three reflections, a counter-clockwise rotation through $\frac{1}{3}$ of a turn, a counter-clockwise rotation through $\frac{2}{3}$ of a turn and a counter-clockwise rotation through $\frac{3}{3}$ of a turn $=$ the identity symmetry. Notice for now that if $s$ and $t$ are the reflections in the lines shown, the geometrical symmetries are $s$, $t$, $tst$, $ts$, $(ts)^2$ and $(ts)^3=\text{id}$ (read these expressions from right to left). The symmetries referred to in the preamble are not so much geometric as ``number theoretic''. It will take a little explaining before we see what this means. \begin{definition}[field -- version ${\mathbf 1}$] \label{def_field1} A field is a set $F$ with two operations, called, purely for convenience, $+$ and $\times$, such that for any $a,b,c\in F$, \begin{enumerate} \item $a+b$ and $a\times b$ ($=ab$ from now on) are uniquely defined elements of $F$, \item $a+(b+c)=(a+b)+c$, \item $a+b=b+a$, \item there is an element $0\in F$ such that $0+a=a$, \item for any $a\in F$ there is an element $-a\in F$ with $(-a)+a=0$, \item $a(bc)=(ab)c$, \item $ab=ba$, \item there is an element $1\in F\setminus\{0\}$ with $1\times a=a$, \item for any $a\not= 0\in F$ there is an $a^{-1}\in F$ with $aa^{-1}=1$, \item $a(b+c)=ab+ac$. \end{enumerate} \end{definition} A field is just a set of things that you can add, subtract, multiply and divide so that the ``usual'' rules of algebra are satisfied. Familiar examples of fields are $\Q$, $\R$ and $\ams{C}}\def\Q{\ams{Q}$; familiar non-examples of fields are $\ams{Z}}\def\E{\ams{E}$, polynomials and matrices (you cannot in general divide integers, polynomials and matrices to get integers, polynomials or matrices). \paragraph{\hspace*{-0.3cm}} A {\em subfield\/} of a field $F$ is a subset that also forms a field under the same $+$ and $\times$. Thus, $\Q$ is a subfield of $\R$ which is in turn a subfield of $\ams{C}}\def\Q{\ams{Q}$, and so on. On the other hand, $\Q\cup\{\kern-2pt\sqrt{2}\}$ is not a subfield of $\R$: it is a subset but axiom 1 fails, as both $1$ and $\kern-2pt\sqrt{2}$ are elements but $1+\kern-2pt\sqrt{2}$ is not. \begin{definition} \label{def:section0_defn10} If $F$ is a subfield of the complex numbers $\ams{C}}\def\Q{\ams{Q}$ and $\beta\in\ams{C}}\def\Q{\ams{Q}$, then $F(\beta)$ is the smallest subfield of $\ams{C}}\def\Q{\ams{Q}$ that contains both $F$ and the number $\beta$.
\end{definition} What do we mean by smallest? That there is no other field $F'$ having the same properties as $F(\beta)$ which is smaller, ie: no $F'$ with $F\subset F'\text{ and }\beta\in F'\text{ too,}$ but $F'$ properly $\subset F(\beta)$. It is usually more useful to say it the other way around: \begin{equation}\label{eq1}\text{If $F'$ is a subfield } \text{ that also contains }F\text{ and }\beta, \text{ then $F'$ contains }F(\beta)\text{ too}\tag{*}.\end{equation} Loosely speaking, $F(\beta)$ is all the complex numbers we get by adding, subtracting, multiplying and dividing the elements of $F$ and $\beta$ together in all possible ways. The construction of Definition \ref{def:section0_defn10} can be continued: write $F(\beta,\gamma)$ for the smallest subfield of $\ams{C}}\def\Q{\ams{Q}$ containing $F$ and the numbers $\beta$ and $\gamma$, and so on. \paragraph{\hspace*{-0.3cm}} To illustrate with some trivial examples, $\R(\text{i})$ can be shown to be all of $\ams{C}}\def\Q{\ams{Q}$: it must contain all expressions of the form $b\text{i}$ for $b\in\R$, and hence all expressions of the form $a+b\text{i}$ with $a,b\in\R$, and this accounts for all the complex numbers; $\Q(2)$ is equally clearly just $\Q$ back again. Slightly less trivially, $\Q(\kern-2pt\sqrt{2})$, the smallest subfield of $\ams{C}}\def\Q{\ams{Q}$ containing all the rational numbers and $\kern-2pt\sqrt{2}$, is a field that is strictly bigger than $\Q$ (eg: it contains $\kern-2pt\sqrt{2}$) but is much, much smaller than all of $\R$. \begin{vexercise} Show that $\kern-2pt\sqrt{3}\not\in\Q(\kern-2pt\sqrt{2})$. \end{vexercise} \paragraph{\hspace*{-0.3cm}} Returning to the symmetries of the solutions to $x^3-2=0$, we look at the field $\Q(\alpha,\omega)$, where $\alpha=\sqrt[3]{2}\in\R\text{ and } \omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i},$ as before. Since $\Q(\alpha,\omega)$ is by definition a field, and fields are closed under $+$ and $\times$, we have $$ \alpha\in\Q(\alpha,\omega)\text{ and }\omega\in\Q(\alpha,\omega)\Rightarrow \alpha\times\omega=\alpha\omega,\alpha\times\omega\times\omega=\alpha\omega^2\in \Q(\alpha,\omega)\text{ too.} $$ So, $\Q(\alpha,\omega)$ contains all the solutions to the equation $x^3-2=0$. On the other hand: \begin{vexercise} Show that $\Q(\alpha,\omega)$ has ``just enough'' numbers to solve the equation $x^3-2=0$. More precisely, $\Q(\alpha,\omega)$ is the {\em smallest\/} subfield of $\ams{C}}\def\Q{\ams{Q}$ that contains all the solutions to this equation. ({\em hint\/}: you may find it useful to do Exercise \ref{ex_lect1.0} first). \end{vexercise} \paragraph{\hspace*{-0.3cm}} A very loose definition of a symmetry of the solutions of $x^3-2=0$ is that it is a ``rearrangement" of $\Q(\alpha,\omega)$ that does not disturb (or is compatible with) the $+$ and $\times$. To see an example, consider the two fields $\Q(\alpha,\omega)$ and $\Q(\alpha,\omega^2)$. Despite first appearances they are actually the same: certainly $$ \alpha,\omega\in\Q(\alpha,\omega)\Rightarrow \alpha,\omega^2\in\Q(\alpha,\omega). $$ But $\Q(\alpha,\omega^2)$ is the smallest field containing $\Q,\alpha$ and $\omega^2$, so by (*), $$ \Q(\alpha,\omega^2)\subseteq\Q(\alpha,\omega). $$ Conversely, $$\alpha,\omega^2\times\omega^2=\omega^4=\omega\in\Q(\alpha,\omega^2)\Rightarrow \Q(\alpha,\omega)\subseteq\Q(\alpha,\omega^2). $$ Remember that $\omega^3=1$ so $\omega^4=\omega$. Thus $\Q(\alpha,\omega)$ and $\Q(\alpha,\omega^2)$ are indeed the same. 
In fact, we should think of $\Q(\alpha,\omega)$ and $\Q(\alpha,\omega^2)$ as two different ways of looking at the same field, or more suggestively, the same field viewed from two different angles. When we hear the phrase, ``the same field viewed from two different angles'', it suggests that there is a symmetry that moves the field from one point of view to the other. In the case above, there should be a symmetry of the field $\Q(\alpha,\omega)$ that puts it into the form $\Q(\alpha,\omega^2)$. Surely this symmetry should send $$ \alpha\mapsto\alpha,\text{ and }\omega\mapsto\omega^2. $$ We haven't yet defined what we mean by ``is compatible with the $+$ and $\times$''. It will turn out to mean that if $\alpha$ and $\omega$ are sent to $\alpha$ and $\omega^2$ respectively, then $\alpha\times\omega$ should go to $\alpha\times\omega^2$; similarly $\alpha\times\omega\times\omega$ should go to $\alpha\times\omega^2\times\omega^2=\alpha\omega^4=\alpha\omega$, and so on. The symmetry thus moves the vertices of the equilateral triangle determined by the roots in the same way that the reflection $s$ of the triangle does (see Figure \ref{fig:figure2}). \begin{figure} \caption{The symmetry $\Q(\alpha,\omega)=\Q(\alpha,\omega^2)$ (\emph{left}) and the symmetry $\Q(\alpha\omega,\omega^2)=\Q(\alpha,\omega)$ (\emph{right}) of the equation $x^3-2=0$.} \label{fig:figure2} \end{figure} (This compatibility also means that it would have made no sense to have the symmetry send $\alpha\mapsto\omega^2$ and $\omega\mapsto\alpha$. A symmetry should not fundamentally change the algebra of the field, so that if an element like $\omega$ cubes to give $1$, then its image under the symmetry should too: but $\alpha$ {\em doesn't\/} cube to give $1$.) \paragraph{\hspace*{-0.3cm}} In exactly the same way, we can consider the fields $\Q(\alpha\omega,\omega^2)$ and $\Q(\alpha,\omega)$. We have $$ \alpha,\omega\in\Q(\alpha,\omega)\Rightarrow \omega^2,\alpha\omega\in\Q(\alpha,\omega)\Rightarrow \Q(\alpha\omega,\omega^2)\subseteq\Q(\alpha,\omega); $$ and conversely, $\alpha\omega,\omega^2\in\Q(\alpha\omega,\omega^2)\Rightarrow \alpha\omega\omega^2=\alpha\omega^3=\alpha\in\Q(\alpha\omega,\omega^2)$, and hence also $$\alpha^{-1}\alpha\omega=\omega\in\Q(\alpha\omega,\omega^2) \Rightarrow\Q(\alpha,\omega)\subseteq\Q(\alpha\omega,\omega^2). $$ Thus, $\Q(\alpha,\omega)$ and $\Q(\alpha\omega,\omega^2)$ are the same field, and we can define another symmetry that sends $$ \alpha\mapsto\alpha\omega,\text{ and }\omega\mapsto\omega^2. $$ To be compatible with the $+$ and $\times$, $$ \alpha\times\omega\mapsto\alpha\omega\times\omega^2=\alpha\omega^3=\alpha,\text{ and }\alpha\times\omega\times\omega\mapsto \alpha\omega\times\omega^2\times\omega^2=\alpha\omega^5=\alpha\omega^2. $$ So the symmetry is like the reflection $t$ of the triangle (see Figure \ref{fig:figure2}). Finally, if we have two symmetries of the solutions to some equation, we would like their composition to be a symmetry too. So if the symmetries $s$ and $t$ of the original triangle are to be considered, so should $tst,st,(st)^2$ and $(st)^3=1$. \paragraph{\hspace*{-0.3cm}} The symmetries of the solutions to $x^3-2=0$ include all the geometrical symmetries of the equilateral triangle. We will see later that any symmetry of the solutions is uniquely determined as a permutation of the solutions. Since there are $3!=6$ of these, we have accounted for all of them. So the solutions to $x^3-2=0$ have as their symmetries precisely the geometrical symmetries of the equilateral triangle.
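For bookkeeping, here are all six at once, listed -- granting for now facts that will only be proved later, namely that a symmetry is pinned down by what it does to $\alpha$ and $\omega$ -- by their effect on these two numbers: $$ \alpha\mapsto\alpha\omega^i\ (i=0,1,2)\quad\text{ and }\quad\omega\mapsto\omega^j\ (j=1,2), $$ with any of the three choices for $\alpha$ pairing with either of the two choices for $\omega$, giving $3\times 2=3!=6$ symmetries in all.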
\paragraph{\hspace*{-0.3cm}} If this was always the case, things would be a little disappointing: Galois theory would just be the study of the ``shapes" formed by the roots of polynomials, and the symmetries of those shapes. It would be a branch of planar geometry. Fortunately, if we look at the solutions to $x^5-2=0$, given in Figure \ref{fig:figure2a}, then something quite different happens. Exercise \ref{ex_lect1.1a} shows you how to find these expressions for the roots. \begin{figure} \caption{The solutions in $\ams{C}}\def\Q{\ams{Q}$ to the equation $x^5-2=0$.} \label{fig:figure2a} \end{figure} A pentagon has 10 geometric symmetries, and you can check that all arise as symmetries of the roots of $x^5-2$ using the same reasoning as in the previous example. But this reasoning also gives a symmetry that moves the vertices of the pentagon according to: $$ \begin{pspicture}(0,-.5)(3,3) \pspolygon[fillstyle=solid,fillcolor=lightgray](0,.45)(1.53,0)(2.5,1.3)(1.53,2.61)(0,2.15) \psbezier[linecolor=red]{->}(2.85,1.1)(3.85,0.3)(3.85,2.3)(2.85,1.5) \psline[linecolor=red]{->}(1.45,2.5)(0.05,.5) \psline[linecolor=red]{->}(1.45,.1)(.05,2.05) \pscurve[linecolor=red]{->}(.15,.5)(1,.9)(1.5,.1) \pscurve[linecolor=red]{->}(.15,2.1)(.95,1.73)(1.55,2.5) \rput(2.7,1.3){$\alpha$}\rput(1.5,2.8){$\alpha\omega$} \rput(-.4,.6){$\alpha\omega^3$} \rput(-.4,2.15){$\alpha\omega^2$} \rput(1.5,-.2){$\alpha\omega^4$} \end{pspicture} $$ This is not a geometrical symmetry -- if it was, it would be pretty disastrous for the poor pentagon. Later we will see that for $p>2$ a prime number, the solutions to $x^p-2=0$ have $p(p-1)$ symmetries. While agreeing with the six obtained for $x^3-2=0$, it gives twenty for $x^5-2=0$. In fact, it was a bit of a fluke that all the number theoretic symmetries were also geometric ones for $x^3-2=0$. A $p$-gon has $2p$ geometrical symmetries and $2p\leq p(p-1)$ with equality only when $p=3$. \subsection*{Further Exercises for Section \thesection} \begin{vexercise}\label{ex1.-1} Show that the picture on the left of Figure \ref{fig:figure3} depicts a symmetry of the solutions to $x^3-1=0$, but the one on the right does not. \begin{figure} \caption{A symmetry (\emph{left}) and non-symmetry (\emph{right}) of the equation $x^3-1=0$ from Exercise \ref{ex1.-1}.} \label{fig:figure3} \end{figure} \end{vexercise} \begin{vexercise}\label{ex_lect1.1a} You already know that the $3$-rd roots of 1 are $1$ and ${\displaystyle -\frac{1}{2}\pm\frac{\sqrt{3}}{2}\text{i}}$. What about the $p$-th roots for higher primes? \begin{enumerate} \item If $\omega\not= 1$ is a $5$-th root of $1$, it satisfies $\omega^4+\omega^3+\omega^2+\omega+1=0$. Let $u=\omega+\omega^{-1}$. Find a quadratic polynomial satisfied by $u$, and solve it to obtain $u$. \item Find another quadratic satisfied this time by $\omega$, with {\em coefficients involving\/} $u$, and solve it to find explicit expressions for the four primitive $5$-th roots of 1. \item Repeat the process with the $7$-th roots of $1$. \end{enumerate} \noindent\emph{factoid\/}: the $n$-th roots of 1 can be expressed in terms of field operations and extraction of pure roots of rationals for any $n$. The details -- which are a little complicated -- were completed by the work of Gauss and Galois. \end{vexercise} \begin{vexercise}\label{ex_lect1.0} Let $F$ be a field such that the element $$ \underbrace{1+1+\cdots +1}_{n\text{ times}}\not= 0, $$ for any $n>0$. Arguing intuitively, show that $F$ contains a copy of the rational numbers $\Q$ (see also Section \ref{lect4}).
\end{vexercise} \begin{vexercise}\label{ex_lect1.1b} Let $\alpha=\sqrt[6]{5}\in\R$ and ${\displaystyle \omega=\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}}$. Show that $\Q(\alpha,\omega)$, $\Q(\alpha\omega^2,\omega^5)$ and $\Q(\alpha\omega^4,\omega^5)$ are all the same field. \end{vexercise} \begin{vexercise}\label{ex_lect1.1} \hspace{1em} \begin{enumerate} \item Show that there is a symmetry of the solutions to $x^5-2=0$ that moves the vertices of the pentagon according to: $$ \begin{pspicture}(0,-.5)(3,3) \pspolygon[fillstyle=solid,fillcolor=lightgray](0,.45)(1.53,0)(2.5,1.3)(1.53,2.61)(0,2.15) \psbezier[linecolor=red]{->}(2.85,1.1)(3.85,0.3)(3.85,2.3)(2.85,1.5) \psline[linecolor=red]{->}(1.45,2.5)(0.05,.5) \psline[linecolor=red]{->}(1.45,.1)(.05,2.05) \pscurve[linecolor=red]{->}(.15,.5)(1,.9)(1.5,.1) \pscurve[linecolor=red]{->}(.15,2.1)(.95,1.73)(1.55,2.5) \rput(2.7,1.3){$\alpha$}\rput(1.5,2.8){$\alpha\omega$} \rput(-.4,.6){$\alpha\omega^3$} \rput(-.4,2.15){$\alpha\omega^2$} \rput(1.5,-.2){$\alpha\omega^4$} \end{pspicture} $$ $\text{where }\alpha=\sqrt[5]{2}, \text{ and }\omega^5=1, \omega\in\ams{C}}\def\Q{\ams{Q}.$ \item Show that the solutions in $\ams{C}}\def\Q{\ams{Q}$ to the equation $x^6-5=0$ have $12$ symmetries. \end{enumerate} \end{vexercise} \section{Rings I: Polynomials}\label{lect2} \paragraph{\hspace*{-0.3cm}} There are a number of basic facts about polynomials that we will need. Suppose $F$ is a field ($\Q,\R$ or $\ams{C}}\def\Q{\ams{Q}$ will do for now). A \emph{polynomial over $F$\/} is an expression of the form $$ f=a_0+a_1x+\cdots +a_nx^n, $$ where the $a_i\in F$ and $x$ is a ``formal symbol" (sometimes called an indeterminate). We don't tend to think of $x$ as a variable -- it is purely an object on which to perform algebraic manipulations. Denote the set of all polynomials over $F$ by $F[x]$. If $a_n\not= 0$, then $n$ is called the {\em degree\/} of $f$, written $\deg(f)$. If the leading coefficient $a_n=1$, then $f$ is {\em monic\/}. (The degree of a non-zero constant polynomial is thus $0$, but to streamline some statements define $\deg(0)=-\infty$, where $-\infty<n$ for all $n\in\ams{Z}}\def\E{\ams{E}$. The arithmetic of degrees is just the arithmetic of non-negative integers, except we decree that $-\infty+n=-\infty$. A polynomial $f$ is \emph{constant\/} if $\deg f\leq 0$, and \emph{non-constant\/} otherwise). \paragraph{\hspace*{-0.3cm}} We can add and multiply elements of $F[x]$ in the usual way: $$ \text{if }f=\sum_{i=0}^n a_ix^i\text{ and }g=\sum_{i=0}^m b_ix^i, $$ then, \begin{equation}\label{polyops} f+g=\sum_{i=0}^{\text{max}(m,n)}(a_i+b_i)x^i\text{ and }fg= \sum_{k=0}^{m+n} c_k x^k\text{ where }c_k=\sum_{i+j=k}a_ib_j. \end{equation} That is, $c_k=a_0b_k+a_1b_{k-1}+\cdots +a_kb_0$. The arithmetic of the coefficients (ie: how to work out $a_i+b_i, a_ib_j$ and so on) is just that of the field $F$. \begin{vexercise} \hspace{1em} Convince yourself that this multiplication is really just the ``expanding brackets" multiplication of polynomials that you know so well.
\end{vexercise} \paragraph{\hspace*{-0.3cm}} The polynomials $F[x]$ together with this addition form an example of an Abelian group: \begin{definition}[Abelian group] An Abelian group is a set $G$ endowed with an operation $(f,g)\mapsto f+g$ such that for all $f,g,h\in G$: \begin{enumerate} \item $f+g$ is a uniquely defined element of $G$ (closure); \item $f+(g+h)=(f+g)+h$ (associativity); \item there is a $0\in G$ such that $0+f=f=f+0$ (identity); \item for any $f\in G$ there is an element $-f\in G$ with $f+(-f)=0=(-f)+f$ (inverses); \item $f+g=g+f$ (commutativity). \end{enumerate} \end{definition} We will see more general kinds of groups in Section \ref{groups.stuff}, where we will write the operation as juxtaposition. In an Abelian group however, it is customary to write the operation as addition, as we have done above. In $F[x]$ the identity $0$ is the zero polynomial, and the inverse of $f$ is $$ -\biggl(\sum_{i=0}^n a_ix^i\biggr)=\sum_{i=0}^n (-a_i)x^i. $$ (To see that $F[x]$ forms an Abelian group, we have $f+g=g+f$ exactly when $a_i+b_i=b_i+a_i$ for all $i$. But the coefficients of our polynomials come from the field $F$, and addition is always commutative in a field.) \paragraph{\hspace*{-0.3cm}} If we want to include the multiplication, we need the formal concept of a ring: \begin{definition}[ring] A ring is a set $R$ endowed with two operations $(a,b)\mapsto a+b$ and $a\times b$ such that for all $a,b,c\in R$, \begin{enumerate} \item $R$ is an Abelian group under $+$; \item for any $a,b\in R$, $a\times b$ is a uniquely determined element of $R$ (closure of $\times$); \item $a\times(b\times c)=(a\times b)\times c$ (associativity of $\times$); \item there is a $1\in R$ such that $1\times a=a=a\times 1$ (identity of $\times$); \item $a\times(b+ c)=(a\times b)+ (a\times c)$ and $(b+c)\times a=(b\times a) +(c\times a)$ (the distributive law). \end{enumerate} \end{definition} Loosely, a ring is a set on which you can {\em add\/} ($+$), {\em subtract\/} (the inverse of $+$ in the Abelian group) and {\em multiply\/} ($\times$), but {\em not\/} necessarily divide (there is no inverse axiom for $\times$). Here are some well known examples of rings: $$ \ams{Z}}\def\E{\ams{E},F[x]\text{ for $F$ a field}, \ams{Z}}\def\E{\ams{E}_n\text{ and }M_n(F), $$ where $\ams{Z}}\def\E{\ams{E}_n$ is addition and multiplication of integers modulo $n$ and $M_n(F)$ are the $n\times n$ matrices, with entries from $F$, together with the usual addition and multiplication of matrices. A ring is {\em commutative\/} if the second operation $\times$ is commutative: $a\times b=b\times a$ for all $a,b$. \begin{vexercise} \hspace{1em} \begin{enumerate} \item Show that $fg=gf$ for polynomials $f,g\in F[x]$, hence $F[x]$ is a commutative ring. \item Show that $\ams{Z}}\def\E{\ams{E}$ and $\ams{Z}}\def\E{\ams{E}_n$ are commutative rings, but $M_n(F)$ is not for {\em any\/} field $F$ if $n\geq 2$. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} The observation that $\ams{Z}}\def\E{\ams{E}$ and $F[x]$ are both commutative rings is not just some vacuous formalism. A concrete way of putting it is this: at a very fundamental level, integers and polynomials share the same algebraic properties. When we work with polynomials, we need to be able to add and multiply the coefficients of the polynomials in a way that doesn't produce any nasty surprises--in other words, the coefficients have to satisfy the basic rules of algebra that we all know and love.
But these basic rules of algebra can be found among the axioms of a ring. Thus, to work with polynomials successfully, all we need is that the coefficients come from a ring. This observation means that for a ring $R$, we can form the set of all polynomials with coefficients from $R$ and add and multiply them together as we did above. In fact, we are just repeating what we did above, but are replacing the field $F$ with a ring $R$. In practice, rather than allowing our coefficients to come from an arbitrary ring, we take $R$ to be commutative. This leads to, \begin{definition} Let $R[x]$ be the set of all polynomials with coefficients from some commutative ring $R$, together with the $+$ and $\times$ defined at (\ref{polyops}). \end{definition} \begin{vexercise} \hspace{1em} \begin{enumerate} \item Show that $R[x]$ forms a ring. \item Since $R[x]$ forms a ring, we can consider polynomials with coefficients from $R[x]$: take a new variable, say $y$, and consider $R[x][y]$. Show that this is just the set of polynomials in two variables $x$ and $y$ together with the `obvious' $+$ and $\times$. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} A commutative ring $R$ is called an {\em integral domain\/} iff for any $a,b\in R$ with $a\times b=0$, we have $a=0$, or $b=0$ or both. Clearly $\ams{Z}}\def\E{\ams{E}$ is an integral domain. \begin{vexercise} \hspace{1em} \begin{enumerate} \item Show that any field $F$ is an integral domain. \item For what values of $n$ is $\ams{Z}}\def\E{\ams{E}_n$ an integral domain? \end{enumerate} \end{vexercise} \begin{lemma}\label{sect2lemma1} Let $f,g\in R[x]$ for $R$ an integral domain. Then \begin{enumerate} \item $\deg(fg)=\deg(f)+\deg(g)$. \item $R[x]$ is an integral domain. \end{enumerate} \end{lemma} The second part means that given polynomials $f$ and $g$ (with coefficients from an integral domain), we have $fg=0\Rightarrow f=0$ or $g=0$. You have been implicitly using this fact when you solve polynomial equations by factorising them. \begin{proof} We have $$ fg= \sum_{k=0}^{m+n} c_k x^k\text{ where }c_k=\sum_{i+j=k}a_ib_j, $$ so in particular $c_{m+n}=a_nb_m\not= 0$ as $R$ is an integral domain. Thus $\deg(fg)\geq m+n$ and since the reverse inequality is obvious, we have part (1) of the Lemma. Part (2) now follows immediately since $fg=0\Rightarrow\deg(fg)=-\infty\Rightarrow \deg f+\deg g=-\infty$, which can only happen if at least one of $f$ or $g$ has degree $=-\infty$ (see the convention $\deg(0)=-\infty$ at the start of this section). \qed \end{proof} All your life you have been happily adding the degrees of polynomials when you multiply them. But as Lemma \ref{sect2lemma1} shows, {\em this is only possible when the coefficients of the polynomial come from an integral domain\/}. For example, $\ams{Z}}\def\E{\ams{E}_6$, the integers under addition and multiplication modulo $6$, is a ring that is not an integral domain (as $2\times 3=0$ for example), and sure enough, $$ (3x+1)(2x+1)=5x+1, $$ where all of this is happening in $\ams{Z}}\def\E{\ams{E}_6[x]$. \paragraph{\hspace*{-0.3cm}} Although we cannot necessarily divide two polynomials and get another polynomial, we {\em can\/} divide up to a possible ``error term", or, as it is more commonly called, a remainder. \begin{theoremA} Suppose $f$ and $g$ are elements of $R[x]$ where the leading coefficient of $g$ has a multiplicative inverse in the ring $R$. Then there exist $q$ and $r$ in $R[x]$ (quotient and remainder) such that $$ f=qg+r, $$ where the degree of $r$ is $<$ the degree of $g$.
\end{theoremA} When $R$ is a field (where you may be more used to doing long division) all the non-zero coefficients of a polynomial have multiplicative inverses (as they lie in a field) so the condition on $g$ becomes $g\not= 0$. \begin{proof} For all $q\in R[x]$, consider those polynomials of the form $f-gq$ and choose one, say $r$, of smallest degree. Let $d=\deg r$ and $m=\deg g$. We claim that $d<m$. This will give the result, as the $r$ chosen has the form $r=f-gq$ for some $q$, giving $f=gq+r$. Suppose that $d\geq m$ and consider $$ \bar{r}=(r_d)(g_m^{-1})x^{(d-m)}g, $$ a polynomial since $d-m\geq 0$, where $r_d$ and $g_m$ denote the leading coefficients of $r$ and $g$. Notice also that we have used the fact that the leading coefficient of $g$ has a multiplicative inverse. The leading term of $\bar{r}$ is $r_dx^d$, which is also the leading term of $r$. Thus, $r-\bar{r}$ has degree $<d$. But $r-\bar{r}=f-gq-r_{d}g_{m}^{-1}x^{d-m}g$ by definition, which equals $f-g(q+r_{d}g_{m}^{-1}x^{d-m})=f-g\bar{q}$, say. Thus $r-\bar{r}$ has the form $f-g\bar{q}$ too, but with smaller degree than $r$, which was of minimal degree amongst all polynomials of this form--this is our desired contradiction. \qed \end{proof} \begin{vexercise} \hspace{1em} \begin{enumerate} \item If $R$ is an integral domain, show that the quotient and remainder are unique. \item Show that the quotient and remainder are not unique when you divide polynomials in $\ams{Z}}\def\E{\ams{E}_6[x]$. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} Other familiar concepts from $\ams{Z}}\def\E{\ams{E}$ are those of divisors, common divisors and greatest common divisors. Since we need no more algebra to define these notions than given by the axioms for a ring, these concepts carry pretty much straight over to polynomial rings. We will state these in the setting of polynomials from $F[x]$ for $F$ a field. \begin{definition} For $f,g\in F[x]$, we say that $f$ divides $g$ iff $g=fh$ for some $h\in F[x]$. Write $f\,|\, g$. \end{definition} \begin{definition} Let $f,g\in F[x]$. Suppose that $d$ is a polynomial satisfying \begin{enumerate} \item $d$ is a common divisor of $f$ and $g$, ie: $d\,|\, f$ and $d\,|\, g$; \item if $c$ is a polynomial with $c\,|\, f$ and $c\,|\, g$ then $c\,|\, d$; \item $d$ is monic. \end{enumerate} Then $d$ is called (the) greatest common divisor of $f$ and $g$. \end{definition} As with the division algorithm, we have tweaked the definition from $\ams{Z}}\def\E{\ams{E}$ to make it work in $F[x]$. The reason is that we want {\em the\/} gcd to be unique. In $\ams{Z}}\def\E{\ams{E}$ you ensure this by insisting that all gcd's are positive; in $F[x]$ we insist they are monic. \paragraph{\hspace*{-0.3cm}} $x^2-1$ and $2x^3-2x^2-4x\in\Q[x]$ have greatest common divisor $x+1$: it is certainly a common divisor as $x^2-1=(x+1)(x-1)$ and $2x^3-2x^2-4x=2x(x+1)(x-2)$. From the two factorisations, any other common divisor must have the form $\lambda$ or $\lambda(x+1)$ for some $\lambda\not=0\in\Q$, and so divides $x+1$. \paragraph{\hspace*{-0.3cm}} The key result on gcd's is: \begin{theorem}\label{gcd} Any two $f,g\in F[x]$ have a greatest common divisor $d$. Moreover, there are $a_0,b_0\in F[x]$ such that $$ d=a_0f+b_0g. $$ \end{theorem} Compare this with $\ams{Z}}\def\E{\ams{E}$! You can replace $F[x]$ by $\ams{Z}}\def\E{\ams{E}$ in the following proof to get the corresponding fact for the integers. \begin{proof} Consider the set $I=\{af+bg\,|\,a,b\in F[x]\}$. Let $d\in I$ be a monic polynomial with minimal degree. Then $d\in I$ gives that $d=a_0f+b_0g$ for some $a_0,b_0\in F[x]$.
We claim that $d$ is the gcd of $f$ and $g$. The following two basic facts are easy to verify: \begin{enumerate} \item The set $I$ is a subgroup of the Abelian group $F[x]$--exercise. \item If $u\in I$ and $w\in F[x]$ then $uw\in I$, since $wu=w(af+bg)=(wa)f+(wb)g\in I$. \end{enumerate} Consider now the set $P=\{hd\,|\,h\in F[x]\}$. Since $d\in I$ and by the second observation above, $hd\in I$, and we have $P\subseteq I$. Conversely, if $u\in I$ then by the division algorithm, $u=qd+r$ where $r=0$ or $\deg(r)<\deg(d)$. Now, $r=u-qd$ and $d\in I$, so $qd\in I$ by (2). But $u\in I$ and $qd\in I$ so $u-dq=r\in I$ by (1) above. Thus, if $\deg(r)<\deg(d)$ we would have a contradiction to the degree of $d$ being minimal, and so we must have $r=0$, giving $u=qd$. This means that any element of $I$ is a multiple of $d$, so $I\subseteq P$. Now that we know that $I$ is just the set of all multiples of $d$, and since letting $a=1,b=0$ or $a=0,b=1$ gives that $f,g\in I$, we have that $d$ is a common divisor of $f$ and $g$. Finally, if $d'$ is another common divisor, then $f=u_1d'$ and $g=u_2d'$, and since $d=a_0f+b_0g$, we have $d=a_0u_1d'+b_0u_2d'=d'(a_0u_1+b_0u_2)$ giving $d'\,|\, d$. Thus $d$ is indeed the greatest common divisor. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} Here is another fundamental concept: \begin{definition}[Ring homomorphism] \label{def_hom} Let $R$ and $S$ be rings. A mapping $\varphi:R\rightarrow S$ is called a ring homomorphism if and only if for all $a,b\in R$, \begin{enumerate} \item $\varphi(a+b)=\varphi(a)+\varphi(b)$; \item $\varphi(ab)=\varphi(a)\varphi(b)$; \item $\varphi(1_R)=1_S$ (where $1_R$ is the multiplicative identity in $R$ and $1_S$ the multiplicative identity in $S$). \end{enumerate} \end{definition} The reason we need the last item but not $\varphi(0)=0$ is because $\varphi(0)=\varphi(0+0)=\varphi(0)+\varphi(0)$, and since $S$ is a group under addition, we can cancel (using the existence of inverses under addition!) to get $\varphi(0)=0$. We can't do this to get $\varphi(1)=1$ as we don't have inverses under multiplication. You should think of a homomorphism as being like an ``algebraic analogy", or a way of transferring algebraic properties; the algebra in the image of $\varphi$ is analogous to the algebra of $R$. \paragraph{\hspace*{-0.3cm}} We will have more to say about general homomorphisms later; for now we satisfy ourselves with an example: let $R[x]$ be a ring of polynomials over a commutative ring $R$, and let $c\in R$. Define a mapping $\varepsilon_c:R[x]\rightarrow R$ by $$ \varepsilon_c(f)=f(c)\stackrel{\text{def}}{=} a_0+a_1c+\cdots+a_nc^n. $$ ie: substitute $c$ into $f$. This is a ring homomorphism from $R[x]$ to $R$, called the {\em evaluation at $c$ homomorphism\/}: to see this, certainly $\varepsilon_c(1)=1$, and I'll leave $\varepsilon_c(f+g)=\varepsilon_c(f)+\varepsilon_c(g)$ to you. Now, $$ \varepsilon_c(fg)= \varepsilon_c\biggl(\sum_{k=0}^{m+n} d_k x^k\biggr) =\sum_{k=0}^{m+n} d_k c^k\text{ where }d_k=\sum_{i+j=k}a_ib_j. $$ But $\sum_{k=0}^{m+n} d_k c^k=\biggl(\sum_{i=0}^n a_ic^i\biggr)\biggl( \sum_{j=0}^m b_jc^j\biggr)=\varepsilon_c(f)\varepsilon_c(g)$ and we are done. One consequence of $\varepsilon_c$ being a homomorphism is that given a factorisation of a polynomial, say $f=gh$, we have $\varepsilon_c(f)=\varepsilon_c(g)\varepsilon_c(h)$, ie: if we substitute $c$ into $f$ we get the same answer as when we substitute into $g$ and $h$ and multiply the answers.
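\paragraph{\hspace*{-0.3cm}} As a small concrete check of this last consequence, take $f=x^2-1=(x-1)(x+1)$ in $\Q[x]$ and $c=3$: then $$ \varepsilon_3(f)=3^2-1=8\text{ and } \varepsilon_3(x-1)\varepsilon_3(x+1)=2\times 4=8, $$ exactly as the homomorphism property predicts.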
\subsection*{Further Exercises for Section \thesection} \begin{vexercise} Let $f,g,h$ be polynomials with $f=gh$. If $f$ and $g$ have coefficients in the field $F$, show that $h$ is also a polynomial over $F$. \end{vexercise} \begin{vexercise}\label{ex2.2} Let $\sigma:R\rightarrow S$ be a homomorphism of (commutative) rings. Define $\sigma^*:R[x]\rightarrow S[x]$ by $$ \sigma^*:\sum_i a_ix^i\mapsto \sum_i \sigma(a_i)x^i. $$ Show that $\sigma^*$ is a homomorphism. \end{vexercise} \begin{vexercise}\label{ex2.3} Let $R$ be a commutative ring and define $\partial:R[x]\rightarrow R[x]$ by $$ \partial:\sum_{k=0}^n a_kx^k\mapsto \sum_{k=1}^{n} (ka_k)x^{k-1}\text{ and } \partial(a)=0, $$ for any constant $a$. (Ring a bell?) Show that $\partial(f+g)=\partial(f)+\partial(g)$ and $\partial(fg)=\partial(f)g+f\partial(g)$. The map $\partial$ is called the {\em formal derivative\/}. \end{vexercise} \begin{vexercise}\label{ex2.4} Let $p$ be a fixed polynomial in the ring $F[x]$ and consider the map $\varepsilon_p:F[x]\rightarrow F[x]$ given by $f(x)\mapsto f(p(x))$. Show that $\varepsilon _p$ is a homomorphism. (The homomorphism $\varepsilon_p$ is a generalisation of the evaluation at $\lambda$ homomorphism $\varepsilon_\lambda$.) \end{vexercise} \section{Roots and Irreducibility}\label{lect3} \paragraph{\hspace*{-0.3cm}} The early material in this section is familiar for polynomials with real coefficients. The point is that these results are still true for polynomials with coefficients coming from an arbitrary field $F$, and quite often, for polynomials with coefficients from a ring $R$. Let $$ f=a_0+a_1x+\cdots +a_nx^n $$ be a polynomial in $R[x]$ for $R$ a ring. We say that $c\in R$ is a {\em root\/} of $f$ if $$ f(c)=a_0+a_1c+\cdots+a_nc^n=0\text{ in }R. $$ As a trivial example, the polynomial $x^2+1$ is in all three rings $\Q[x],\R[x]$ and $\ams{C}}\def\Q{\ams{Q}[x]$. It has no roots in either $\Q$ or $\R$, but two in $\ams{C}}\def\Q{\ams{Q}$. \paragraph{\hspace*{-0.3cm}} We start with a familiar result: \begin{factthm} An element $c\in R$ is a root of $f$ if and only if $f=(x-c)g$ for some $g\in R[x]$. \end{factthm} In English, $c$ is a root precisely when $x-c$ is a factor. \begin{proof} This is an illustration of the power of the division algorithm, Theorem A. Suppose that $f$ has the form $(x-c)g$ for some $g\in R[x]$. Then $$ f(c)=(c-c)g(c)=0\cdot g(c)=0, $$ so that $c$ is indeed a root (notice we used that $\varepsilon_c$ is a homomorphism, ie: that $\varepsilon_c(f)=\varepsilon_c(x-c)\varepsilon_c(g)$). On the other hand, by the division algorithm, we can divide $f$ by the polynomial $x-c$ to get, $$ f=(x-c)g+a, $$ where $a\in R$ (we can use the division algorithm, as the leading coefficient of $x-c$, being $1$, has an inverse in $R$). Since $f(c)=0$, we must also have $(c-c)g(c)+a=0$, hence $a=0$. Thus $f=(x-c)g$ as required. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} Here is another familiar result that is reassuringly true for polynomials over (almost) any ring. \begin{theorem}\label{degree.number.of.roots} Let $f\in R[x]$ be a non-zero polynomial with coefficients from the integral domain $R$. Then $f$ has at most $\deg(f)$ roots in $R$. \end{theorem} \begin{proof} We use induction on the degree, which is $\geq 0$ since $f$ is non-zero. If $\deg(f)=0$ then $f=\mu$ a nonzero constant in $R$, which clearly has no roots, so the result holds. Assume $\deg(f)\geq 1$ and that the result is true for any polynomial of degree $<\deg(f)$. If $f$ has no roots in $R$ then we are done.
Otherwise, $f$ has a root $c\in R$ and $$ f=(x-c)g, $$ for some $g\in R[x]$ by the Factor Theorem. Moreover, as $R$ is an integral domain, $f(a)=0$ iff either $a-c=0$ or $g(a)=0$, so the roots of $f$ are $c$, together with the roots of $g$. Since the degree of $g$ must be $\deg(f)-1$ (by Lemma \ref{sect2lemma1}, again using the fact that $R$ is an integral domain), it has at most $\deg(f)-1$ roots by the inductive hypothesis, and these combined with $c$ give at most $\deg(f)$ roots for $f$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} A cherished fact such as Theorem \ref{degree.number.of.roots} will not hold if the coefficients do not come from an integral domain. For instance, if $R=\ams{Z}}\def\E{\ams{E}_6$, then the quadratic polynomial $(x-1)(x-2)=x^2+3x+2$ has roots $1,2,4$ and $5$ in $\ams{Z}}\def\E{\ams{E}_6$. \begin{vexercise}\label{ex3.20} A polynomial like $x^2+2x+1=(x+1)^2$ has $-1$ as a {\em repeated root\/}. Its derivative, in the sense of calculus, is $2(x+1)$, which also has $-1$ as a root. In general, and in light of the Factor Theorem, call $c\in F$ a repeated root of $f$ iff $f=(x-c)^kg$ for some $k>1$. \begin{enumerate} \item Using the formal derivative $\partial$ (see Exercise \ref{ex2.3}), show that $c$ is a repeated root of $f$ if and only if $c$ is a root of both $f$ and $\partial(f)$. \item Show that the roots of $f$ are distinct if and only if $\gcd(f,\partial(f))=1$. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} For reasons that will become clearer later, a very important role is played by polynomials that cannot be ``factorised". \begin{definition}[irreducible polynomial over ${\mathbf F}$] Let $F$ be a field and $f\in F[x]$ a non-constant polynomial. A non-trivial factorisation of $f$ is an expression of the form $f=gh$, where $g,h\in F[x]$ and $\deg g,\deg h\geq 1$ (equivalently, $\deg g,\deg h<\deg f$). Call $f$ reducible over $F$ iff it has a non-trivial factorisation, and irreducible over $F$ otherwise. \end{definition} Thus, a polynomial over a field $F$ is irreducible precisely when it cannot be written as a product of non-constant polynomials. Put another way, $f\in F[x]$ is irreducible precisely when it is divisible only by the non-zero constants $c\in F$ and the constant multiples $cf$ of itself. \begin{aside} For polynomials over a ring the definition is slightly more complicated: let $f\in R[x]$ a non-constant polynomial with coefficients from the ring $R$. A {\em non-trivial factorisation\/} of $f$ is an expression of the form $f=gh$, where $g,h\in R[x]$ and either, \begin{enumerate} \item $\deg g,\deg h\geq 1$, or \item if either $g$ or $h$ is a constant $\lambda\in R$, then $\lambda$ has {\em no\/} multiplicative inverse in $R$. \end{enumerate} Say $f$ is {\em reducible\/} over $R$ iff it has a non-trivial factorisation, and {\em irreducible\/} over $R$ otherwise. If $R=F$ a field, then the second possibility never arises, as every non-zero element of $F$ has a multiplicative inverse. As an example, $3x+3=3(x+1)$ is a non-trivial factorisation in $\ams{Z}}\def\E{\ams{E}[x]$ but a trivial one in $\Q[x]$. \end{aside} \paragraph{\hspace*{-0.3cm}} The ``over $F$" that follows reducible or irreducible is crucial; polynomials are never absolutely reducible or irreducible. For example $x^2+1$ is irreducible over $\R$ but reducible over $\ams{C}}\def\Q{\ams{Q}$. There is one exception to the previous sentence: a linear polynomial $f=ax+b\in F[x]$ is irreducible over any field $F$.
If $f=gh$ then since $\deg f=1$, we cannot have both $\deg(g), \deg(h)\geq 1$, for then $\deg(gh)=\deg(g)+\deg(h)\geq 1+1=2$, a contradiction. Thus, one of $g$ or $h$ must be a constant with $f$ thus irreducible over $F$. \begin{vexercise}\label{ex3.1} \begin{enumerate} \item Let $F$ be a field and $a\in F$. Show that $f$ is an irreducible polynomial over $F$ if and only if $a f$ is irreducible over $F$ for any $a\not= 0$. \item Show that if $f(x+a)$ is irreducible over $F$ then $f(x)$ is too. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} There is the famous: \begin{fundthmalg} Any non-constant $f\in\ams{C}}\def\Q{\ams{Q}[x]$ has a root in $\ams{C}}\def\Q{\ams{Q}$. \end{fundthmalg} So if $f\in\ams{C}}\def\Q{\ams{Q}[x]$ has $\deg f\geq 2$, then $f$ has a root in $\ams{C}}\def\Q{\ams{Q}$, hence a linear factor over $\ams{C}}\def\Q{\ams{Q}$, hence is reducible over $\ams{C}}\def\Q{\ams{Q}$. Thus, the only irreducible polynomials over $\ams{C}}\def\Q{\ams{Q}$ are the linear ones. \begin{vexercise}\label{ex3.21} Show that if $f$ is irreducible over $\R$ then $f$ is either linear or quadratic. \end{vexercise} \paragraph{\hspace*{-0.3cm}} A common mistake is to equate having no roots in $F$ with being irreducible over $F$. But: \begin{description} \item[--] {\em A polynomial can be irreducible over $F$ and still have roots in $F$\/}: we saw above that a linear polynomial $ax+b$ is always irreducible, and yet has a root in $F$, namely $-b/a$. It is true though that if a polynomial $f$ has degree $\geq 2$ and a root in $F$, then by the factor theorem it has a linear factor and so is reducible. Thus, if $\deg(f)\geq 2$ and $f$ is irreducible over $F$, then $f$ has no roots in $F$. \item[--]{\em A polynomial can have no roots in $F$ but not be irreducible over $F$\/}: the polynomial $x^4+2x^2+1=(x^2+1)^2$ is reducible over $\Q$, but with roots $\pm\text{i}\not\in\Q$. \end{description} \paragraph{\hspace*{-0.3cm}} There is no general method for deciding if a polynomial over an arbitrary field $F$ is irreducible. The best we can hope for is an ever expanding list of techniques, of which the first is: \begin{proposition}\label{irr1} Let $F$ be a field and $f\in F[x]$ be a polynomial of degree $\leq 3$. If $f$ has no roots in $F$ then it is irreducible over $F$. \end{proposition} \begin{proof} Arguing by the contrapositive, if $f$ is reducible then $f=gh$ with $\deg g,\deg h\geq 1$. Since $\deg g+\deg h=\deg f\leq 3$, we must have for $g$ say, that $\deg g=1$. Thus $f=(ax+b)h$ and $f$ has the root $-b/a$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} For another, possibly familiar, example of a field: let $p$ be a prime and $\ams{F}}\def\K{\ams{K}_p$ the set $\{0,1,\ldots,p-1\}$. Define addition and multiplication on this set to be addition and multiplication of integers modulo $p$. You can verify that $\ams{F}}\def\K{\ams{K}_p$ is a field by directly checking the axioms. The only tricky one is the existence of inverses under multiplication: to show this use the gcd theorem from Section \ref{lect2}, but for $\ams{Z}}\def\E{\ams{E}$ rather than polynomials. \begin{vexercise} Show that a field $F$ is an integral domain. Hence show that if $n$ is {\em not} prime, then the addition and multiplication of integers modulo $n$ is {\em not\/} a field.
\end{vexercise} Arithmetic modulo $n$, for the various $n$, thus gives the sequence $$ \ams{F}_2,\ams{F}_3,\ams{Z}_4,\ams{F}_5,\ams{Z}_6,\ams{F}_7,\ams{Z}_8,\ams{Z}_9,\ams{Z}_{10},\ams{F}_{11},\ldots $$ of fields $\ams{F}_p$ for $p$ a prime, and rings $\ams{Z}_n$ for $n$ composite. In Section \ref{fields2} we will see that there are fields $\ams{F}_4,\ams{F}_8$ and $\ams{F}_9$ of orders $4,8$ and $9$, but these fields are not $\ams{Z}_4,\ams{Z}_8$ or $\ams{Z}_9$. They are something quite different. \paragraph{\hspace*{-0.3cm}} \label{irreducible_polynomials:paragraph50} Consider polynomials with coefficients from $\ams{F}_2$, ie: the ring $\ams{F}_2[x]$, and in particular, the polynomial $$ f=x^4+x+1\in\ams{F}_2[x]. $$ Now $0^4+0+1\not= 0\not=1^4+1+1$, so $f$ has no roots in $\ams{F}_2$. This doesn't mean that $f$ is irreducible over $\ams{F}_2$, but certainly any factorisation of $f$ over $\ams{F}_2$, if there is one, must be as a product of two quadratics. Moreover, these quadratics must themselves be irreducible over $\ams{F}_2$, for if not, they would factor into linear factors and the Factor Theorem would then give roots of $f$. There are only four quadratics over $\ams{F}_2$: $$ x^2,x^2+1,x^2+x\text{ and }x^2+x+1, $$ with $x^2=xx$, $x^2+1=(x+1)^2$ and $x^2+x=x(x+1)$. You might have to stare at the second of these factorisations for a moment. By Proposition \ref{irr1}, $x^2+x+1$ is irreducible over $\ams{F}_2$, as it has no roots there. Thus, any factorisation of $f$ into irreducible quadratics must be of the form, $$ (x^2+x+1)(x^2+x+1). $$ But, $f$ {\em doesn't\/} factorise this way -- just expand the brackets. Thus $f$ is irreducible over $\ams{F}_2$. \paragraph{\hspace*{-0.3cm}} The most important field for the Galois theory of these notes is the rationals $\Q$. Consequently, determining the irreducibility of polynomials over $\Q$ will be of great importance to us. The first useful test for irreducibility over $\Q$ has the following main ingredient: to see if a polynomial can be factorised over $\Q$ it suffices to see whether it can be factorised over $\ams{Z}$. First we recall Exercise \ref{ex2.2}, which is used a number of times in these notes so is worth placing in a, \begin{lemma}\label{lemma3.20} Let $\sigma:R\rightarrow S$ be a homomorphism of rings. Define $\sigma^*:R[x]\rightarrow S[x]$ by $$ \sigma^*:\sum_i a_ix^i\mapsto \sum_i \sigma(a_i)x^i. $$ Then $\sigma^*$ is a homomorphism. \end{lemma} \begin{lemma}[Gauss] Let $f$ be a polynomial with integer coefficients. Then $f$ can be factorised non-trivially as a product of polynomials with integer coefficients if and only if it can be factorised non-trivially as a product of polynomials with rational coefficients. \end{lemma} \begin{proof} If the polynomial can be written as a product of $\ams{Z}$-polynomials then it clearly can as a product of $\Q$-polynomials, as integers are rational. Suppose on the other hand that $f=gh$ in $\Q[x]$ is a non-trivial factorisation. By multiplying through by a common multiple $m$ of the denominators of the coefficients of $g$ we get a polynomial $g_1=m g$ with $\ams{Z}$-coefficients.
Similarly we have $h_1=nh\in\ams{Z}[x]$ and so \begin{equation}\label{gauss.lemmaeq} mnf=g_1h_1\in\ams{Z}[x]. \end{equation} Now let $p$ be a prime dividing $mn$, and consider the homomorphism $\sigma:\ams{Z}\rightarrow \ams{F}_p$ given by $\sigma(k)=k\mod p$. Then by the lemma above, the map $\sigma^*:\ams{Z}[x]\rightarrow \ams{F}_p[x]$ given by $$ \sigma^*:\sum_i a_ix^i\mapsto \sum_i \sigma(a_i)x^i, $$ is a homomorphism. Applying the homomorphism to (\ref{gauss.lemmaeq}) gives $0=\sigma^*(g_1)\sigma^*(h_1)$ in $\ams{F}_p[x]$, as $mn\equiv 0\mod p$. As the ring $\ams{F}_p[x]$ is an integral domain the only way that this can happen is if one of the polynomials is equal to the zero polynomial in $\ams{F}_p[x]$, ie: one of the original polynomials, say $g_1$, has all of its coefficients divisible by $p$. Thus we have $g_1=pg_2$ with $g_2\in\ams{Z}[x]$, and (\ref{gauss.lemmaeq}) becomes $$ \frac{mn}{p}f=g_2h_1. $$ Working our way through all the prime factors of $mn$ in this way, we can remove the factor of $mn$ from (\ref{gauss.lemmaeq}) and obtain a factorisation of $f$ into polynomials with $\ams{Z}$-coefficients. \qed \end{proof} So to determine whether a polynomial with $\ams{Z}$-coefficients is irreducible over $\Q$, you need only check that it has no non-trivial factorisations with all the coefficients integers. \begin{eisenstein} Let $$ f=c_nx^n+\cdots +c_1x+c_0, $$ be a polynomial with integer coefficients. If there is a prime $p$ that divides all the $c_i$ for $i<n$, does not divide $c_n$, and such that $p^2$ does not divide $c_0$, then $f$ is irreducible over $\Q$. \end{eisenstein} \begin{proof} By virtue of the previous discussion, we need only show that under the conditions stated, there is no factorisation of $f$ using integer coefficients. Suppose otherwise, ie: $f=gh$ with $$ g=a_rx^r+\cdots +a_0\text{ and }h=b_sx^s+\cdots +b_0, $$ and the $a_i,b_i\in\ams{Z}$ (with $r,s\geq 1$, the factorisation being non-trivial). Expanding $gh$ and equating coefficients, $$ \begin{tabular}{l} $c_0=a_0b_0$\\ $c_1=a_0b_1+a_1b_0$\\ \hspace*{2em}$\vdots$\\ $c_i=a_0b_i+a_1b_{i-1}+\cdots+a_ib_0$\\ \hspace*{2em}$\vdots$\\ $c_n=a_rb_s$.\\ \end{tabular} $$ By hypothesis, $p\,|\, c_0$. Write both $a_0$ and $b_0$ as a product of primes, so if $p\,|\, c_0$, ie: $p\,|\, a_0b_0$, then $p$ must be one of the primes in this factorisation, hence divides one of $a_0$ or $b_0$. Thus, either $p\,|\, a_0$ or $p\,|\, b_0$, {\em but not both\/} (for then $p^2$ would divide $c_0$). Assume that it is $p\,|\, a_0$ that we have. Next, $p\,|\, c_1$, and this coupled with $p\,|\, a_0$ gives $p\,|\, c_1-a_0b_1=a_1b_0$ (if instead we had assumed $p\,|\, b_0$, the argument proceeds symmetrically with the roles of $g$ and $h$ exchanged). Again, $p$ must divide one of these last two factors, and since we've already decided that it doesn't divide $b_0$, it must be $a_1$ that it divides. Continuing in this manner, we get that $p$ divides all the coefficients of $g$, and in particular, $a_r$. But then $p$ divides $a_rb_s=c_n$, the contradiction we were after. \qed \end{proof} The proof above is a good example of the way mathematics is sometimes created. You start with as few assumptions as possible (in this case that $p$ divides some of the coefficients of $f$) and proceed towards some sort of conclusion, imposing extra conditions as and when you need them. In this way the statement of the theorem writes itself.
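\paragraph{\hspace*{-0.3cm}} Computationally, Eisenstein's criterion is just a finite divisibility check on the coefficients. The following is a minimal sketch (in Python; the function name, the convention of listing coefficients from the constant term up, and the test polynomial $x^3-2$ are ours, not part of the notes):
\begin{verbatim}
def eisenstein(coeffs, p):
    # coeffs = [c_0, c_1, ..., c_n] with integer entries
    c0, cn = coeffs[0], coeffs[-1]
    return (all(c % p == 0 for c in coeffs[:-1])  # p | c_i for all i < n
            and cn % p != 0                       # p does not divide c_n
            and c0 % (p * p) != 0)                # p^2 does not divide c_0

# x^3 - 2 with the prime p = 2:
print(eisenstein([-2, 0, 0, 1], 2))   # True, so x^3 - 2 is irreducible over Q
\end{verbatim}
Note that the check is one-sided: a polynomial may well be irreducible over $\Q$ without any prime satisfying the hypotheses.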
\paragraph{\hspace*{-0.3cm}} For example $$ x^5+5x^4-5x^3+10x^2+25x-35, $$ is irreducible over $\Q$: apply Eisenstein with $p=5$, which divides every coefficient except the leading one, while $p^2=25$ does not divide the constant term $-35$. Even less obviously $$ x^n-p, $$ is irreducible over $\Q$ for any prime $p$ (Eisenstein again, at the same prime $p$). Thus, we can find polynomials over $\Q$ of arbitrarily large degree that are irreducible, in contrast to the situation for polynomials over $\R$ or $\ams{C}$. \paragraph{\hspace*{-0.3cm}} Another useful tool arises when a polynomial has coefficients from a ring $R$ and there is a homomorphism from $R$ to some field $F$. If the homomorphism is applied to all the coefficients of the polynomial (turning it from a polynomial with $R$-coefficients into a polynomial with $F$-coefficients) then a reducible polynomial cannot turn into an irreducible one: \begin{reduction}\label{reduction} Let $R$ be an integral domain, $F$ a field and $\sigma:R\rightarrow F$ a ring homomorphism. Let $\sigma^*:R[x]\rightarrow F[x]$ be the homomorphism of Lemma \ref{lemma3.20}. Moreover, let $f\in R[x]$ be such that \begin{enumerate} \item $\deg\sigma^*(f)=\deg(f)$, and \item $\sigma^*(f)$ is irreducible over $F$. \end{enumerate} Then $f$ cannot be written as a product $f=gh$ with $g,h\in R[x]$ and $\deg g,\deg h<\deg f$. \end{reduction} Although it is stated in some generality, the reduction test is very useful for determining the irreducibility of polynomials over $\Q$. As an example, take $R=\ams{Z}$, $F=\ams{F}_5$ and $f=8x^3-6x-1\in\ams{Z}[x]$. For $\sigma$, take reduction modulo $5$, ie: $\sigma(n)=n\text{ mod }5$. It is not hard to show that $\sigma$ is a homomorphism. Since $\sigma(8)=3$, $\sigma(-6)=4$ and $\sigma(-1)=4$, we get $$ \sigma^*(f)=3x^3+4x+4\in\ams{F}_5[x]. $$ The degree has not changed, and by substituting the five elements of $\ams{F}_5$ into $\sigma^*(f)$, one can see that it has no roots in $\ams{F}_5$. Since the polynomial is a cubic, it must therefore be irreducible over $\ams{F}_5$. Thus, by the reduction test, $8x^3-6x-1$ cannot be written as a product of smaller degree polynomials with $\ams{Z}$-coefficients, and Gauss' lemma then gives that it is irreducible over $\Q$. $\ams{F}_5$ was chosen because with $\ams{F}_2$ condition (i) fails; with $\ams{F}_3$ condition (ii) fails. \begin{proof} Suppose on the contrary that $f=gh$ with $\deg g,\deg h<\deg f$. Then $\sigma^*(f)=\sigma^*(gh)=\sigma^*(g)\sigma^*(h)$, the last part because $\sigma^*$ is a homomorphism. Now $\sigma^*(f)$ is irreducible, so the only way it can factorise like this is if one of the factors, $\sigma^*(g)$ say, is a constant, hence $\deg \sigma^*(g)=0$. Then $$ \deg f=\deg\sigma^*(f)=\deg\sigma^*(g)\sigma^*(h)=\deg\sigma^*(g)+\deg\sigma^*(h)=\deg\sigma^*(h)\leq \deg h<\deg f, $$ a contradiction. (We have $\deg\sigma^*(h)\leq \deg h$, rather than equality necessarily, because the homomorphism $\sigma$ may send some of the coefficients of $h$ -- including the leading one -- to $0\in F$.) \qed \end{proof} \paragraph{\hspace*{-0.3cm}} We've already observed the similarity between polynomials and integers. One thing we know about integers is that they can be written uniquely as products of primes. We might hope that something similar is true for polynomials, and it is in certain situations. For the next few results, we deal only with polynomials $f\in F[x]$ for $F$ a field (although they are true in more generality).
\begin{lemma}\label{lem_ufd} \begin{enumerate} \item If $\gcd(f,g)=1$ and $f\,|\, gh$ then $f\,|\, h$. \item If $f$ is irreducible and monic, then for any $g$ monic with $g\,|\, f$ we have either $g=1$ or $g=f$. \item If $g$ is irreducible and monic and $g$ does not divide $f$, then $\gcd(g,f)=1$. \item If $g$ is irreducible and monic and $g\,|\, f_1f_2\ldots f_n$ then $g\,|\, f_i$ for some $i$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Since $\gcd(f,g)=1$ there are $a,b\in F[x]$ such that $1=af+bg$, hence $h=afh+bgh$. We have that $f\,|\, bgh$ by assumption, and it clearly divides $afh$, hence it divides $afh+bgh=h$ also. \item If $g$ divides $f$ and $f$ is irreducible, then by definition $g$ must be either a constant or a constant multiple of $f$. But $f$ is monic, so $g=1$ or $g=f$ are the only possibilities. \item The $\gcd$ of $f$ and $g$ is certainly a divisor of $g$, and hence by irreducibility must be either a constant, or a constant times $g$. As $g$ is also monic, the gcd must in fact be either $1$ or $g$ itself, and since $g$ does not divide $f$ it cannot be $g$, so must be $1$. \item Proceed by induction, with the first step for $n=1$ being immediate. Since $g\,|\, f_1f_2\ldots f_n=(f_1f_2\ldots f_{n-1})f_n$, we either have $g\,|\, f_n$, in which case we are finished, or not, in which case $\gcd(g,f_n)=1$ by part (3). But then part (1) gives that $g\,|\, f_1f_2\ldots f_{n-1}$, and the inductive hypothesis kicks in.\qed \end{enumerate} \end{proof} The best way of summarising the lemma is this: monic irreducible polynomials are like the ``prime numbers'' of $F[x]$. \paragraph{\hspace*{-0.3cm}} Just as any integer can be decomposed uniquely as a product of primes, so too can any polynomial as a product of irreducible polynomials: \begin{ufd}\label{ufd} Every polynomial in $F[x]$ can be written in the form $$ c p_1p_2\ldots p_r, $$ where $c$ is a constant and the $p_i$ are monic irreducible polynomials in $F[x]$. Moreover, if $a q_1q_2\ldots q_s$ is another such factorisation, with $a$ a constant and the $q_j$ monic and irreducible, then $r=s$, $c=a$ and the $q_j$ are just a rearrangement of the $p_i$. \end{ufd} The last part says that the factorisation is unique, except for the order you write down the factors. \begin{proof} To get the factorisation just keep factorising reducible polynomials until all the factors are irreducible -- the process stops, as degrees strictly decrease. At the end, pull out the coefficient of the leading term in each factor, and place them all at the front. For uniqueness, suppose that $$ c p_1p_2\ldots p_r=a q_1q_2\ldots q_s. $$ Then $p_r$ divides $a q_1q_2\ldots q_s$, which by Lemma \ref{lem_ufd} part (4) means that $p_r\,|\, q_i$ for some $i$. Reorder the $q$'s so that it is $p_r\,|\, q_s$ that in fact we have. Since both $p_r$ and $q_s$ are monic, irreducible, and hence non-constant, $p_r=q_s$, which leaves us with $$ c p_1p_2\ldots p_{r-1}=a q_1q_2\ldots q_{s-1}. $$ This gives $r=s$ straight away: if say $s>r$, then repetition of the above leads to $c=a q_1q_2\ldots q_{s-r}$, which is absurd, as consideration of degrees gives different answers for each side. Similarly if $r>s$. But then we also have that the $p$'s are just a rearrangement of the $q$'s, and canceling down to $c p_1=a q_1$, that $c=a$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} It is worth repeating that everything depends on the ambient field $F$, even the uniqueness of the decomposition.
For example, $x^4-4$ decomposes as, $$\begin{tabular}{l} $(x^2+2)(x^2-2)\text{ in }\Q[x]$,\\ $(x^2+2)(x-\kern-2pt\sqrt{2})(x+\kern-2pt\sqrt{2})\text{ in }\R[x]\text{ and }$\\ $(x-\kern-2pt\sqrt{2}\text{i})(x+\kern-2pt\sqrt{2}\text{i})(x-\kern-2pt\sqrt{2})(x+\kern-2pt\sqrt{2})\text{ in }\ams{C}[x]$.\\ \end{tabular}$$ To illustrate how unique factorisation can be used to determine irreducibility, we have in $\ams{C}[x]$ that, $$ x^2+2=(x-\kern-2pt\sqrt{2}\text{i})(x+\kern-2pt\sqrt{2}\text{i}). $$ Since the factors on the right are not in $\R[x]$ this polynomial ought to be irreducible over $\R$. To make this more precise, any factorisation in $\R[x]$ would be of the form $$ x^2+2=(x-c_1)(x-c_2) $$ with the $c_i\in\R$. But this would be a factorisation in $\ams{C}[x]$ too, and there is only one such by unique factorisation. This forces the $c_i$ to be $\kern-2pt\sqrt{2}\text{i}$ and $-\kern-2pt\sqrt{2}\text{i}$, contradicting $c_i\in\R$. Hence $x^2+2$ is indeed irreducible over $\R$. Similarly, $x^2-2$ is irreducible over $\Q$. \begin{vexercise} Formulate the example above into a general Theorem. \end{vexercise} \subsection*{Further Exercises for Section \thesection} \begin{vexercise} Prove that if a polynomial equation has all its coefficients in $\ams{C}$ then it must have all its roots in $\ams{C}$. \end{vexercise} \begin{vexercise}\label{ex_lect2.1} \hspace{1em}\begin{enumerate} \item Let $f=a_nx^n+a_{n-1}x^{n-1}+\cdots +a_1x+a_0$ be a polynomial in $\R[x]$, that is, all the $a_i\in\R$. Show that complex roots of $f$ occur in conjugate pairs, ie: $\zeta\in\ams{C}$ is a root of $f$ if and only if $\bar{\zeta}$ is. \item Find an example of a polynomial in $\ams{C}[x]$ for which part (a) is not true. \end{enumerate} \end{vexercise} \begin{vexercise}\label{ex_lect2.2} \hspace{1em}\begin{enumerate} \item Let $m,n$ and $k$ be integers with $m$ and $n$ relatively prime (ie: $\gcd(m,n)=1$). Show that if $m$ divides $nk$ then $m$ must divide $k$ ({\em hint}: there are two methods here. One is to use Lemma \ref{lem_ufd}, but in $\ams{Z}$. The other is to use the fact that any integer can be written uniquely as a product of primes. Do this for $m$ and $n$, and ask yourself what it means for this factorisation that $m$ and $n$ are relatively prime). \item Show that if $m/n$ is a root of $a_0+a_1x+\cdots+a_rx^r$, $a_i\in\ams{Z}$, where $m$ and $n$ are relatively prime integers, then $m\,|\, a_0$ and $n\,|\, a_r$. \item Deduce that if $a_r=1$ then $m/n$ is in fact an integer. \end{enumerate} \noindent\emph{moral}: If a monic polynomial with integer coefficients has a rational root $m/n$, then this rational number is in fact an integer. \end{vexercise} \begin{vexercise} If $m\in\ams{Z}$ is not a perfect square, show that $x^2-m$ is irreducible over $\Q$ ({\em note}: it is {\em not\/} enough merely to assume that under these conditions $\kern-2pt\sqrt{m}$ is not a rational number -- that too needs proof). \end{vexercise} \begin{vexercise} Find the greatest common divisor of $f(x)=x^3-6x^2+x+4$ and $g(x)=x^5-6x+1$ ({\em hint}: look at linear factors of $f(x)$).
\end{vexercise} \begin{vexercise}\label{ex_lect3.1} Determine which of the following polynomials are irreducible over the stated field: \begin{enumerate} \item $1+x^8$ over $\R$; \item $1+x^2+x^4+x^6+x^8+x^{10}$ over $\Q$ (\emph{hint}: let $y=x^2$ and factorise $y^n-1$); \item $x^4+15x^3+7$ over $\R$ (\emph{hint}: use the intermediate value theorem from analysis); \item $x^{n+1}+(n+2)!\,x^n+\cdots+(i+2)!\,x^i+\cdots+3!\,x+2!$ over $\Q$. \item $x^2+1$ over $\ams{F}_7$. \item Let $\ams{F}$ be the field of order 8 from Section \ref{lect4}, and let $\ams{F}[X]$ be polynomials with coefficients from $\ams{F}$ and indeterminate $X$. Is $X^3+(\alpha^2+\alpha)X+(\alpha^2+\alpha+1)$ irreducible over $\ams{F}$? \item $a_4x^4+a_3x^3+a_2x^2+a_1x+a_0$ over $\Q$ where the $a_i\in\ams{Z}$; $a_3,a_2$ are even and $a_4,a_1,a_0$ are odd. \end{enumerate} \end{vexercise} \begin{vexercise}\label{ex3.3} If $p$ is prime, show that $p$ divides ${\displaystyle \binom{p}{i}}$ for $0<i<p$. Show that $p$ divides ${\displaystyle \binom{p^n}{i}}$ for $n\geq 1$ and $0<i<p^n$. \end{vexercise} \begin{vexercise} Show that $$ x^{p-1}+px^{p-2}+\cdots+ \binom{p}{i} x^{p-i-1}+\cdots+p, $$ is irreducible over $\Q$. \end{vexercise} \begin{vexercise}\label{ex_lect3.2} A complex number $\omega$ is an $n$-th {\em root of unity\/} if $\omega^n=1$. It is a {\em primitive\/} $n$-th root of unity if $\omega^n=1$, but $\omega^r\not=1$ for any $0<r<n$. So for example, $\pm 1,\pm \text{i}$ are the 4-th roots of 1, but only $\pm \text{i}$ are primitive 4-th roots. Convince yourself that for any $n$, $$\omega=\cos\frac{2\pi}{n}+\text{i}\sin\frac{2\pi}{n}$$ is an $n$-th root of $1$. In fact, the other $n$-th roots are $\omega^2,\ldots,\omega^n=1$. \begin{enumerate} \item Show that if $\omega$ is a {\em primitive\/} $n$-th root of $1$ then $\omega$ is a root of the polynomial \begin{equation}\label{eq1} x^{n-1}+x^{n-2}+\cdots+x+1. \end{equation} \item Show that for (\ref{eq1}) to be irreducible over $\Q$, $n$ cannot be even. \item Show that a polynomial $f(x)$ is irreducible over a field $F$ if $f(x+1)$ is irreducible over $F$. \item Finally, if $$\Phi_p(x)=x^{p-1}+x^{p-2}+\cdots+x+1$$ for $p$ a prime number, show that $\Phi_p(x+1)$ is irreducible over $\Q$, and hence $\Phi_p(x)$ is too ({\em hint\/}: consider $x^p-1$ and use the binomial theorem, Exercise \ref{ex3.3} and Eisenstein). \end{enumerate} The polynomial $\Phi_p(x)$ is called the {\em $p$-th cyclotomic polynomial\/}. \end{vexercise} \section{Fields I: Basics, Extensions and Concrete Examples} \label{lect4} This course studies the solutions to polynomial equations. Questions about these solutions can be restated as questions about fields. It is to these that we now turn. \paragraph{\hspace*{-0.3cm}} We recalled the definition of a field in Section \ref{lect1}; we can restate it as: \begin{definition}[field -- version ${\mathbf 2}$] \label{def_field2} A field is a set $F$ with two operations, $+$ and $\times$, such that \begin{enumerate} \item $F$ is an Abelian group under $+$; \item $F\setminus\{0\}$ is an Abelian group under $\times$; \item the two operations are linked by the distributive law. \end{enumerate} \end{definition} The two groups are called the {\em additive\/} and {\em multiplicative\/} groups of the field.
In particular, we will write $F^*$ to denote the multiplicative group (ie: $F^*$ is the group with elements $F\setminus\{0\}$ and operation the multiplication from the field). Even more succinctly, \begin{definition}[field -- version ${\mathbf 3}$] \label{def_field3} A field is a set $F$ with two operations, $+$ and $\times$, such that \begin{enumerate} \item $F$ is a commutative ring under $+$ and $\times$; \item for any $a\in F\setminus\{0\}$ there is an $a^{-1}\in F$ with $a\times a^{-1}=1=a^{-1}\times a$. \end{enumerate} \end{definition} In particular a field is a special kind of ring. \paragraph{\hspace*{-0.3cm}} More concepts from the first lecture that can now be properly defined are: \begin{definition}[extensions of fields] Let $F$ and $E$ be fields with $F$ a subfield of $E$. We call $E$ an extension of $F$. If $\beta\in E$, we write $F(\beta)$, as in Section \ref{lect1}, for the smallest subfield of $E$ containing both $F$ and $\beta$ (so in particular $F(\beta)$ is an extension of $F$). In general, if $\beta_1,\ldots,\beta_k\in E$, define $F(\beta_1,\ldots,\beta_k)= F(\beta_1,\ldots,\beta_{k-1})(\beta_k)$. \end{definition} The standard notation for an extension is to write $E/F$, but in these notes we will use the more concrete $F\subseteq E$, being mindful that this means $F$ is a subfield of $E$, and not just a subset. We say that $\beta$ is {\em adjoined\/} to $F$ to obtain $F(\beta)$. The last bit of the definition says that to adjoin several elements to a field you adjoin them one at a time. The notation seems to adjoin them in a particular order, but the order turns out not to matter. If we have an extension $F\subseteq E$ and there is a $\beta\in E$ such that $E=F(\beta)$, then we call $E$ a {\em simple extension\/} of $F$. \paragraph{\hspace*{-0.3cm}} $\R$ is an extension of $\Q$; $\ams{C}$ is an extension of $\R$, and so on. Any field is an extension of itself! \paragraph{\hspace*{-0.3cm}}\label{para4.4} Let $\ams{F}_2$ be the field of integers with arithmetic modulo $2$. Let $\alpha$ be an ``abstract symbol'' that can be multiplied so that it has the following property: $\alpha\times\alpha\times\alpha=\alpha^3=\alpha+1$ (a bit like decreeing that the imaginary $i$ squares to give $-1$). Let $$\ams{F}=\{a+b\alpha+c\alpha^2\,|\,a,b,c\in\ams{F}_2\},$$ and define addition on $\ams{F}$ by: $(a_1+b_1\alpha+c_1\alpha^2)+(a_2+b_2\alpha+c_2\alpha^2)= (a_1+a_2)+(b_1+b_2)\alpha+(c_1+c_2)\alpha^2$, where the addition of coefficients happens in $\ams{F}_2$. For multiplication, ``expand'' the expression $(a_1+b_1\alpha+c_1\alpha^2)(a_2+b_2\alpha+c_2\alpha^2)$ like you would a polynomial with $\alpha$ the indeterminate, so that $\alpha\alpha\alpha=\alpha^3$, the coefficients are dealt with using the arithmetic from $\ams{F}_2$, and so on. Replace any $\alpha^3$ (or $\alpha^4=\alpha\,\alpha^3$) that results using the rule $\alpha^3=\alpha+1$. For example, $$ (1+\alpha+\alpha^2)+(\alpha+\alpha^2)=1\text{ and } (1+\alpha+\alpha^2)(\alpha+\alpha^2)=\alpha+\alpha^4=\alpha+\alpha(\alpha+1)=\alpha^2. $$ It turns out that $\ams{F}$ forms a field with this addition and multiplication -- see Exercise \ref{ex_tables}. Taking those elements of $\ams{F}$ with $b=c=0$ we obtain an (isomorphic) copy of $\ams{F}_2$ inside of $\ams{F}$, and so we have an extension of $\ams{F}_2$ that contains $8$ elements.
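\paragraph{\hspace*{-0.3cm}} The arithmetic just described is entirely mechanical, and a short sketch makes this plain (Python; the encoding of $a+b\alpha+c\alpha^2$ as a tuple $(a,b,c)$ and the function names are ours):
\begin{verbatim}
def add(u, v):                       # coefficientwise addition in F_2
    return tuple((x + y) % 2 for x, y in zip(u, v))

def mul(u, v):
    prod = [0] * 5                   # expand: a polynomial in alpha of degree <= 4
    for i, x in enumerate(u):
        for j, y in enumerate(v):
            prod[i + j] ^= x & y
    for k in (4, 3):                 # alpha^k = alpha^(k-2) + alpha^(k-3),
        if prod[k]:                  # which follows from alpha^3 = alpha + 1
            prod[k - 2] ^= 1
            prod[k - 3] ^= 1
            prod[k] = 0
    return tuple(prod[:3])

print(add((1, 1, 1), (0, 1, 1)))     # (1, 0, 0): the sum is 1
print(mul((1, 1, 1), (0, 1, 1)))     # (0, 0, 1): the product is alpha^2
\end{verbatim}
Both outputs agree with the example computed by hand above.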
\paragraph{\hspace*{-0.3cm}} $\Q(\kern-2pt\sqrt{2})$ is a simple extension of $\Q$ while $\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})$ would appear not to be. But consider $\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$: certainly $\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3}\in\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})$, and so $\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})\subset \Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})$. On the other hand, $$ (\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})^3=11\kern-2pt\sqrt{2}+9\kern-2pt\sqrt{3}, $$ as is readily checked using the Binomial Theorem. Since $(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})^3\in \Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$, we get $$ (11\kern-2pt\sqrt{2}+9\kern-2pt\sqrt{3})-9(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})\in\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})\Rightarrow 2\kern-2pt\sqrt{2}\in\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3}). $$ And so $\kern-2pt\sqrt{2}\in\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$ as ${\displaystyle \frac{1}{2}}$ is there too. Similarly it can be shown that $\kern-2pt\sqrt{3}\in\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$ and hence $\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})\subset\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$. So $$\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})=\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$$ {\em is\/} a simple extension! \paragraph{\hspace*{-0.3cm}} What do the elements of $\Q(\kern-2pt\sqrt{2})$ actually look like? Later we will answer this question in general, but for now we give an ad-hoc answer. Firstly $\kern-2pt\sqrt{2}$ and any $b\in\Q$ are in $\Q(\kern-2pt\sqrt{2})$ by definition. Since fields are closed under $\times$, any number of the form $b\kern-2pt\sqrt{2}\in \Q(\kern-2pt\sqrt{2})$. Similarly, fields are closed under $+$, so any $a+b\kern-2pt\sqrt{2}\in\Q(\kern-2pt\sqrt{2})$ for $a\in\Q$. Thus, the set $$ \ams{F}=\{a+b\kern-2pt\sqrt{2}\,|\,a,b\in\Q\}\subseteq\Q(\kern-2pt\sqrt{2}). $$ But $\ams{F}$ is a field in its own right using the usual addition and multiplication of complex numbers. This is easily checked from the axioms; for instance, the inverse of $a+b\kern-2pt\sqrt{2}$ (with $a,b$ not both zero) can be calculated: $$ \frac{1}{a+b\kern-2pt\sqrt{2}}\times\frac{a-b\kern-2pt\sqrt{2}}{a-b\kern-2pt\sqrt{2}}=\frac{a-b\kern-2pt\sqrt{2}}{a^2-2b^2}= \frac{a}{a^2-2b^2}-\frac{b}{a^2-2b^2}\kern-2pt\sqrt{2}\in\ams{F}, $$ (note that $a^2-2b^2\not=0$ unless $a=b=0$: otherwise $\kern-2pt\sqrt{2}=\pm a/b$ would be rational) and you can check the other axioms for yourself. We also have $\Q\subset\ams{F}$ (letting $b=0$) and $\kern-2pt\sqrt{2}\in\ams{F}$ (letting $a=0,b=1$). Since $\Q(\kern-2pt\sqrt{2})$ is the smallest field having these two properties, we have $\Q(\kern-2pt\sqrt{2})\subseteq\ams{F}$. Thus, $$ \Q(\kern-2pt\sqrt{2})=\ams{F}=\{a+b\kern-2pt\sqrt{2}\,|\,a,b\in\Q\}. $$ \begin{vexercise} Let $\alpha$ be a complex number such that $\alpha^3=2$ and consider the set $$ \ams{F}=\{a_0+a_1\alpha+a_2\alpha^2\,|\,a_i\in\Q\} $$ \begin{enumerate} \item By row reducing the matrix, $$ \left(\begin{array}{cccc} a_0&2a_2&2a_1&1\\ a_1&a_0&2a_2&0\\ a_2&a_1&a_0&0\\ \end{array}\right) $$ find an element of $\ams{F}$ that is the inverse under multiplication of $a_0+a_1\alpha+a_2\alpha^2$. \item Show that $\ams{F}$ is a field, hence $\Q(\alpha)=\ams{F}$.
\end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}}\label{para4.20} The previous exercise shows that the following two fields have the form, $$ \Q(\sqrt[3]{2})=\{a+b\sqrt[3]{2}+c\sqrt[3]{2}^2\,|\,a,b,c\in\Q\}\text{ and } \Q(\beta)=\{a+b\beta+c\beta^2\,|\,a,b,c\in\Q\}, $$ where $$ \beta=\sqrt[3]{2}\biggl(-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}\biggr)\in\ams{C}. $$ These two fields are different: the first is completely contained in $\R$, but the second contains $\beta$, which is obviously complex but not real. Hold that thought. \begin{definition}[ring isomorphism] A bijective homomorphism of rings $\varphi:R\rightarrow S$ is called an isomorphism. \end{definition} \paragraph{\hspace*{-0.3cm}} A silly but instructive example is given by the Roman ring, whose elements are $$ \{\ldots,-V,-IV,-III,-II,-I,0,I,II,III,IV,V,\ldots\}, $$ and with addition and multiplication $IX+IV=XIII$ and $IX\times VI =LIV,\text{ etc}\ldots$ Obviously the ring is isomorphic to $\ams{Z}$, and it is this idea of a trivial relabelling that is captured by an isomorphism -- two rings are isomorphic if they are really the same, just written in different languages. But we place a huge emphasis on the way things are labelled. The two fields of the previous paragraph are a good example, for, $$ \Q(\sqrt[3]{2})\text{{ and }} \Q\biggl(\sqrt[3]{2}\biggl(-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}\biggr)\biggr)\text{{ are isomorphic}} $$ (we will see why in Section \ref{fields2}). To illustrate how we might now come unstuck, suppose we were to formulate the following, \begin{bogusdefn} A subfield of $\ams{C}$ is called real if and only if it is contained in $\R$. \end{bogusdefn} So $\Q(\sqrt[3]{2})$ is a real field, but ${\displaystyle \Q\biggl( \sqrt[3]{2}\biggl(-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}\biggr)\biggr)}$ is not. But they are the same field! A definition should not depend on the way the elements are labelled. We will resolve this problem in Section \ref{fields2} by thinking about fields in a more abstract way. \paragraph{\hspace*{-0.3cm}} In the remainder of this section we introduce a few more concepts associated with fields. It is well known that $\kern-2pt\sqrt{2}$ and $\pi$ are both irrational real numbers. Nevertheless, from an algebraic point of view, $\kern-2pt\sqrt{2}$ is slightly more tractable than $\pi$, as it is a root of the very simple polynomial $x^2-2$, whereas there is no non-zero polynomial with integer coefficients having $\pi$ as a root (this is not obvious). \begin{definition}[algebraic element] Let $F\subseteq E$ be an extension of fields and $\alpha\in E$. Call $\alpha$ algebraic over $F$ if and only if $$ a_0+a_1\alpha+a_2\alpha^2+\cdots+a_n\alpha^n=0, $$ for some $a_0,a_1,\ldots,a_n\in F$, not all zero. \end{definition} In other words, $\alpha$ is a root of the non-zero polynomial $f=a_0+a_1x+a_2x^2+\cdots+a_nx^n$ in $F[x]$. If $\alpha$ is not algebraic, ie: not the root of any non-zero polynomial with $F$-coefficients, then we say that it is {\em transcendental\/} over $F$. \paragraph{\hspace*{-0.3cm}} Some simple examples: $$ \kern-2pt\sqrt{2}, \frac{1+\sqrt{5}}{2}\text{ and }\sqrt[5]{\sqrt{2}+5\sqrt[3]{3}}, $$ are algebraic over $\Q$, whereas $\pi$ and $e$ are transcendental over $\Q$; $\pi$ is algebraic over $\Q(\pi)$. \paragraph{\hspace*{-0.3cm}} A field can contain many subfields: $\ams{C}$ contains $\Q(\kern-2pt\sqrt{2}),\R,\ldots$. It also contains $\Q$, but no subfields that are smaller than this.
Indeed, any subfield of $\ams{C}$ contains $\Q$, so the rationals are the smallest subfield of the complex numbers. \begin{definition}[prime subfield] \label{definition:prime_subfield} The prime subfield of a field $F$ is the intersection of all the subfields of $F$. \end{definition} In particular the prime subfield is contained in every subfield of $F$. \begin{vexercise}\label{ex4.2} Consider the field of rational numbers $\Q$ or the finite field $\ams{F}_p$ having $p$ elements. Show that neither of these fields contains a proper subfield (hint: for $\ams{F}_p$, consider the additive group and use Lagrange's Theorem from Section \ref{groups.stuff}. For $\Q$, any subfield must contain $1$; show that it must then be all of $\Q$). \end{vexercise} The prime subfield must contain $1$, hence any expression of the form $1+1+\cdots+1$ for any number of summands. If no such expression equals $0$ then we have infinitely many distinct such elements, and their inverses under addition, hence a copy of $\ams{Z}$ in $F$. Otherwise, if $n$ is the smallest number of summands for which such an expression equals $0$, then the elements $$ 1,1+1,1+1+1,\ldots,\underbrace{1+1+\cdots+1}_{n\text{ times}}=0, $$ form a copy of $\ams{Z}_n$ inside of $F$. These comments can be made precise as in the following exercise. It looks ahead a little, requiring the first isomorphism theorem for rings in Section \ref{lect5}. \begin{vexercise} Let $F$ be a field and define a map $\ams{Z}\rightarrow F$ by $$ n\mapsto \left\{\begin{array}{l}0,\text{ if }n=0,\\ 1+\cdots+1, (n\text{ times), if }n>0\\ -1-\cdots-1, (n\text{ times), if }n<0. \end{array}\right. $$ Show that the map is a ring homomorphism. If the kernel consists of just $\{0\}$, then show that $F$ contains $\ams{Z}$ as a subring. Otherwise, let $n$ be the smallest positive integer contained in the kernel, and show that $F$ contains $\ams{Z}_n$ as a subring. As $F$ is a field, hence an integral domain, show that we must have $n=p$ a prime in this situation. \end{vexercise} Thus any field contains a subring isomorphic to $\ams{Z}$ or to $\ams{Z}_p$ for some prime $p$. But the ring $\ams{Z}_p$ is the field $\ams{F}_p$, and we saw in Exercise \ref{ex4.2} that $\ams{F}_p$ contains no proper subfields. The conclusion is that in the second case the prime subfield is $\ams{F}_p$. In the first case, $\ams{Z}$ is not a field, but each non-zero $n$ in this copy of $\ams{Z}$ has an inverse $1/n$ in $F$, and the product of this with any $m$ gives an element $m/n\in F$. The set of all such elements obtained is a copy of $\Q$ inside $F$. \begin{vexercise}\label{ex4.3} Make these loose statements precise: let $F$ be a field and $R$ a subring of $F$ with $\varphi:\ams{Z}\rightarrow R$ an isomorphism of rings (this is what we mean when we say that $F$ contains a copy of $\ams{Z}$). Show that this can be extended to an isomorphism $\widehat{\varphi}:\Q\rightarrow F' \subseteq F$ with $\widehat{\varphi}|_{\ams{Z}}=\varphi$. \end{vexercise} \paragraph{\hspace*{-0.3cm}} Putting it together: the prime subfield of a field is isomorphic either to the rationals $\Q$ or to the finite field $\ams{F}_p$ for some prime $p$.
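\paragraph{\hspace*{-0.3cm}} The count of summands in the argument above is easy to experiment with. A throwaway sketch (Python; the function name is ours), run on the rings $\ams{Z}_n$, returns the smallest number of summands $1+1+\cdots+1$ equal to $0$:
\begin{verbatim}
def summands_to_zero(n):
    # smallest k with 1 + 1 + ... + 1 (k summands) = 0 in Z_n
    total, k = 1 % n, 1
    while total != 0:
        total = (total + 1) % n
        k += 1
    return k

print([summands_to_zero(n) for n in (2, 3, 4, 7, 12)])   # [2, 3, 4, 7, 12]
\end{verbatim}
In $\ams{Z}_n$ the answer is of course $n$ itself; the exercises above show that in a field this number, when it exists, is forced to be prime.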
Define the {\em characteristic\/} of a field to be $0$ if the prime subfield is $\Q$, or $p$ if the prime subfield is $\ams{F}_p$. Thus fields like $\Q,\R$ and $\ams{C}$ have characteristic zero, and indeed, any field of characteristic zero must be infinite. Fields like $\ams{F}_2,\ams{F}_3,\ldots$ and the field $\ams{F}$ of order $8$ given above have characteristic $2,3$ and $2$ respectively. \begin{vexercise} Show that a field $F$ has characteristic $p>0$ if and only if $p$ is the smallest number of summands such that the expression $1+1+\cdots+1$ is equal to $0$. Show that $F$ has characteristic $0$ if and only if no such expression is equal to $0$. \end{vexercise} Thus, all fields of characteristic $0$ are infinite, and the only examples we know so far of fields of characteristic $p>0$ are finite. It is not true though that a field of characteristic $p>0$ must be finite. We give some examples of infinite fields of characteristic $p>0$ below. \begin{vexercise}\label{ex4.30} Suppose that $f$ is an irreducible polynomial over a field $F$ of characteristic $0$. Recalling Exercise \ref{ex3.20}, show that the roots of $f$ in any extension $E$ of $F$ are distinct. \end{vexercise} \paragraph{\hspace*{-0.3cm}} It turns out that we can construct $\Q$ abstractly from $\ams{Z}$, without having to first position it inside another field: consider the set $$ \ams{F}=\{(a,b)\,|\,a,b\in\ams{Z},b\not= 0,\mbox{ where }(a,b)=(c,d)\mbox{ iff }ad=bc\} $$ ie: ordered pairs of integers with two ordered pairs $(a,b)$ and $(c,d)$ being the same if $ad=bc$. \begin{aside} These loose statements are made precise by defining an equivalence relation on the set of ordered pairs $\ams{Z}\times(\ams{Z}\setminus\{0\})$ by $(a,b)\sim(c,d)$ if and only if $ad=bc$. The elements of $\ams{F}$ are then the equivalence classes under this relation. \end{aside} Define addition and multiplication on $\ams{F}$ by: $$ (a,b)+(c,d)=(ad+bc,bd)\mbox{ and }(a,b)(c,d)=(ac,bd). $$ \begin{vexercise} \hspace{1em} \begin{enumerate} \item Show that these definitions are well-defined, ie: if $(a,b)=(a',b')$ and $(c,d)= (c',d')$, then $(a,b)+(c,d)=(a',b')+(c',d')$ and $(a,b)(c,d)=(a',b')(c',d')$. \item Show that $\ams{F}$ is a field. \item Define a map $\varphi:\ams{F}\rightarrow \Q$ by $\varphi(a,b)=a/b$. Show that the map is well defined (ie: if $(a,b)=(a',b')$ then $\varphi(a,b)=\varphi(a',b')$) and that $\varphi$ is an isomorphism. \end{enumerate} \end{vexercise} This construction can be generalised as the following Exercise shows: \begin{vexercise}\label{ex4.20} Repeat the construction above with $\ams{Z}$ replaced by an arbitrary integral domain $R$. \end{vexercise} The resulting field is called the {\em field of fractions of $R$}. The field of fractions construction provides some interesting examples of fields, possibly new in the reader's experience. Let $F[x]$ be the ring of polynomials with $F$-coefficients, where $F$ is any field. The field of fractions of this integral domain has elements of the form $f(x)/g(x)$ for $f$ and $g$ polynomials, $g\not=0$; in other words, rational functions with $F$-coefficients. The field is denoted $F(x)$ and is called the {\em field of rational functions over $F$\/}.
\begin{description} \item[--] {\em An infinite field of characteristic $p$:\/} if $\ams{F}_p$ is the finite field of order $p$, then the field of rational functions $\ams{F}_p(x)$ is infinite, as it contains all the polynomials over $\ams{F}_p$. But $1$ still adds to itself only $p$ times to give $0$, hence the field has characteristic $p$. \item[--] {\em A field properly containing the complex numbers:\/} $\ams{C}$ is properly contained in the field of rational functions $\ams{C}(x)$. \end{description} \subsection*{Further Exercises for Section \thesection} \begin{vexercise} Let $\ams{F}$ be the set of all matrices of the form $\left[\begin{array}{cc}a&b\\2b&a\\\end{array}\right]$ where $a,b$ are in the field $\ams{F}_5$. Define addition and multiplication to be the usual addition and multiplication of matrices (and also the addition and multiplication in $\ams{F}_5$). Show that $\ams{F}$ is a field. How many elements does it have? \end{vexercise} \begin{vexercise}\label{ex_tables} Let $\ams{F}_2$ be the field of integers modulo $2$, and $\alpha$ be an ``abstract symbol'' that can be multiplied so that it has the following property: $\alpha\times\alpha\times\alpha=\alpha^3=\alpha+1$ (a bit like decreeing that the imaginary $i$ squares to give $-1$). Let $$\ams{F}=\{a+b\alpha+c\alpha^2\,|\,a,b,c\in\ams{F}_2\},$$ and define addition on $\ams{F}$ by: $(a_1+b_1\alpha+c_1\alpha^2)+(a_2+b_2\alpha+c_2\alpha^2)= (a_1+a_2)+(b_1+b_2)\alpha+(c_1+c_2)\alpha^2$, where the addition of coefficients happens in $\ams{F}_2$. For multiplication, ``expand'' the expression $(a_1+b_1\alpha+c_1\alpha^2)(a_2+b_2\alpha+c_2\alpha^2)$ like you would a polynomial with $\alpha$ the indeterminate, the coefficients are dealt with using the arithmetic from $\ams{F}_2$, and so on. Replace any $\alpha^3$ that results using the rule above. \begin{enumerate} \item Write down all the elements of $\ams{F}$. \item Write out the addition and multiplication tables for $\ams{F}$ (ie: the tables with rows and columns indexed by the elements of $\ams{F}$, with the entry in the $i$-th row and $j$-th column the sum/product of the $i$-th and $j$-th elements of the field). Hence show that $\ams{F}$ is a field (you can assume that the addition and multiplication are associative as well as the distributive law, as these are a bit tedious to verify!) Using your tables, find the inverses (under multiplication) of the elements $1+\alpha$ and $1+\alpha+\alpha^2$, ie: find $$ \frac{1}{1+\alpha}\mbox{ and }\frac{1}{1+\alpha+\alpha^2}\mbox{ in }\ams{F}. $$ \item Is the extension $\ams{F}_2\subset \ams{F}$ a simple one? \end{enumerate} \end{vexercise} \begin{vexercise} Take the set $\ams{F}$ of the previous exercise, and define addition/multiplication in the same way except that the rule for simplification is now $\alpha^3=\alpha^2+\alpha+1$. Show that in this case you {\em don't\/} get a field. \end{vexercise} \begin{vexercise} Verify the claim in lectures that the set $\ams{F}=\{a+b\kern-2pt\sqrt{2}\,|\,a,b\in\Q\}$ is a subfield of $\ams{C}$.
\end{vexercise} \begin{vexercise} Verify the claim in lectures that $\Q(\sqrt[3]{2})=\{a+b(\sqrt[3]{2})+ c(\sqrt[3]{2})^2\,|\,a,b,c\in\Q\}$. \end{vexercise} \begin{vexercise} Find a complex number $\alpha$ such that $\Q(\kern-2pt\sqrt{2},i)=\Q(\alpha)$. \end{vexercise} \begin{vexercise} Is ${\Q}(\kern-2pt\sqrt{2}, \kern-2pt\sqrt{3}, \kern-2pt\sqrt{7})$ a simple extension of ${\Q}(\kern-2pt\sqrt{2}, \kern-2pt\sqrt{3})$, ${\Q}(\kern-2pt\sqrt{2})$ or even of $\Q$? \end{vexercise} \begin{vexercise} Let $\nabla$ be an ``abstract symbol'' that has the following property: $\nabla^2=-\nabla-1$ (a bit like $i$ squaring to give $-1$). Let $$ \ams{F}=\{a+b\nabla\,|\,a,b\in\R\}, $$ and define an addition on $\ams{F}$ by: $(a_1+b_1\nabla)+(a_2+b_2\nabla)=(a_1+a_2)+(b_1+b_2)\nabla$. For multiplication, expand the expression $(a_1+b_1\nabla)(a_2+b_2\nabla)$ normally (treating $\nabla$ like an indeterminate, so that $\nabla\nabla=\nabla^2$, and so on), and replace the resulting $\nabla^2$ using the rule above. Show that $\ams{F}$ is a field, and is just the complex numbers $\ams{C}$. Do exactly the same thing, but with symbol $\triangle$ satisfying $\triangle^2=\kern-2pt\sqrt{2}\triangle-\sqrt[3]{5}$. Show that you {\em still\/} get the complex numbers. \end{vexercise} \section{Rings II: Quotients} \label{lect5} In the last section we saw the need to think about fields more abstractly. This section introduces the machinery we need to do this. \paragraph{\hspace*{-0.3cm}} A subset $I$ of a ring $R$ is an \emph{ideal\/} if and only if $I$ is a subgroup of the abelian group $(R,+)$ and for any $s\in R$ we have $sI=\{sa\,|\,a\in I\}\subseteq I$ and likewise $Is\subseteq I$. In the rings that most interest us, ideals turn out to have a very simple form: \begin{proposition} \label{fields2:all_ideals_are_principle} Let $I$ be an ideal in $F[x]$ for $F$ a field. Then there is a polynomial $f\in F[x]$ such that $I=\{fg\,|\,g\in F[x]\}$. \end{proposition} An ideal in a ring of polynomials over a field thus consists of all the multiples of some fixed polynomial. For $f$ the polynomial given in the Proposition, write $\langle f\rangle$ for the ideal that it gives, ie: $\langle f\rangle=\{fg\,|\,g\in F[x]\}$, and call $f$ a \emph{generator\/} of the ideal. \begin{proof} If $I=\{0\}$ (which is an ideal!) then we have $I=\langle 0\rangle$, and so the result holds. Otherwise, $I$ contains non-zero polynomials. Choose $f$ to be one of minimal degree $\geq 0$. Then $fg\in I$ for every $g\in F[x]$, as $I$ is an ideal, giving $\langle f\rangle\subseteq I$. Conversely, if $h\in I$ then dividing $h$ by $f$ gives $h=qf+r$. As $qI\subseteq I$ we have $qf\in I$, hence $h-qf\in I$, as $I$ is a subgroup under $+$. Thus $r\in I$, and as $\deg\,r<\deg\,f$ we are only saved from a contradiction if $\deg\,r<0$; that is, if $r=0$. Thus $h=qf\in\langle f\rangle$ and so $I\subseteq \langle f\rangle$. \qed \end{proof} To emphasise that from now on, all our ideals will have this special form, we restate the definition: \begin{definition}[ideals of polynomial rings over a field] An ideal in $F[x]$ is a set of the form $$\langle f\rangle=\{fg\,|\,g\in F[x]\}$$ for $f$ some fixed polynomial. \end{definition} \begin{vexercise} \label{fields2:Exercise20} \begin{enumerate} \item Show that $\langle f\rangle=\langle h\rangle$ if and only if $h=c f$ for some non-zero constant $c\in F$. Similarly, $\langle h\rangle=F[x]$ if and only if $h=c$ for some non-zero constant $c$. \emph{Moral}: generators are not unique.
\item Let $I\subset\ams{Z}[x]$ consist of those polynomials having even constant term. Show that $I$ is an ideal but $I\not=\langle f\rangle$ for any $f\in\ams{Z}[x]$. \emph{Moral}: ideals in $R[x]$ for $R$ a commutative ring need not have the special form of Proposition \ref{fields2:all_ideals_are_principle}. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} In any ring there are the trivial ideals $\langle 0\rangle=\{0\}$ (which we have met already in the proof of Proposition \ref{fields2:all_ideals_are_principle}) and $\langle 1\rangle=R$. \begin{vexercise}\label{ex5.1} \hspace{1em} \begin{enumerate} \item Show that the only ideals in a field $F$ are the two trivial ones (hint: use the property of ideals mentioned at the end of the last paragraph). \item If $R$ is a commutative ring whose only ideals are $\{0\}$ and $R$, then show that $R$ is a field. \item Show that in the non-commutative ring $M_n(F)$ of $n\times n$ matrices with entries from the field $F$ there are only the two trivial ideals, but that $M_n(F)$ is not a field for $n\geq 2$. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} For another example of an ideal, consider the ring $\Q[x]$, the number $\kern-2pt\sqrt{2}\in\R$, and the evaluation homomorphism $\varepsilon_{\sqrt{2}}:\Q[x]\rightarrow\R$ given by $$ \varepsilon_{\sqrt{2}}(a_nx^n+\cdots+a_0)=a_n(\kern-2pt\sqrt{2})^n+\cdots+a_0. $$ (see Section \ref{lect2}). Let $I$ be the set of all polynomials in $\Q[x]$ that are sent to $0\in\R$ by this map. Certainly $x^2-2\in I$ (as $\kern-2pt\sqrt{2}^2-2=0$). If $f=(x^2-2)g\in\Q[x]$, then $$ \varepsilon_{\sqrt{2}}(f)=\varepsilon_{\sqrt{2}}(x^2-2)\varepsilon_{\sqrt{2}}(g)=0\times\varepsilon_{\sqrt{2}}(g)=0, $$ using the fact that $\varepsilon_{\sqrt{2}}$ is a homomorphism. Thus $f\in I$, and so $\langle x^2-2\rangle\subseteq I$. Conversely, if $h$ is sent to $0$ by $\varepsilon_{\sqrt{2}}$, ie: $h\in I$, we can divide it by $x^2-2$ using the division algorithm, $$ h=(x^2-2)q+r, $$ where $\deg r<2$, so that $r=ax+b$ for some $a,b\in \Q$. But since $\varepsilon_{\sqrt{2}}(h)=0$ we have $$ (\kern-2pt\sqrt{2}^2-2)q(\kern-2pt\sqrt{2})+r(\kern-2pt\sqrt{2})=0\Rightarrow r(\kern-2pt\sqrt{2})=0\Rightarrow a\kern-2pt\sqrt{2}+b=0. $$ If $a\not= 0$, then $\kern-2pt\sqrt{2}\in\Q$ as $a,b\in\Q$, which is plainly nonsense. Thus $a=0$, hence $b=0$ too, so that $r=0$, and hence $h=(x^2-2)q\in\langle x^2-2\rangle$, and we get that $I\subseteq \langle x^2-2\rangle$. The conclusion is that the set of polynomials in $\Q[x]$ sent to zero by the evaluation homomorphism $\varepsilon_{\sqrt{2}}$ is an ideal. \paragraph{\hspace*{-0.3cm}} This always happens: if $R,S$ are rings and $\varphi:R\rightarrow S$ a ring homomorphism, then the {\em kernel\/} of $\varphi$, denoted $\text{ker}\,\varphi$, is the set of all elements of $R$ sent to $0\in S$ by $\varphi$, ie: $$ \text{ker}\,\varphi=\{r\in R\,|\,\varphi(r)=0\in S\}. $$ \begin{proposition} If $F$ is a field and $S$ a ring then the kernel of a homomorphism $\varphi:F[x]\rightarrow S$ is an ideal. \end{proposition} \begin{proof} The proof is very similar to the previous example. If $\text{ker}\,\varphi=\{0\}$ then it is the ideal $\langle 0\rangle$ and we are done. Otherwise, to get a polynomial that plays the role of $x^2-2$, choose $g\in\text{ker}\,\varphi$, non-zero, of smallest degree. We claim that $\text{ker}\,\varphi=\langle g\rangle$, for which we need to show that these two sets are mutually contained within each other.
On the one hand, if $pg\in\langle g\rangle$ then $$ \varphi(pg)=\varphi(p)\varphi(g)=\varphi(p)\times 0=0, $$ since $g\in\text{ker}\,\varphi$. Thus, $\langle g\rangle\subseteq\text{ker}\,\varphi$. On the other hand, let $f\in\text{ker}\,\varphi$ and use the division algorithm to divide it by $g$, $$ f=qg+r, $$ where $\deg r<\deg g$. Now, $r=f-qg\Rightarrow \varphi(r)=\varphi(f-qg)=\varphi(f)-\varphi(q)\varphi(g)=0-\varphi(q)\times 0=0$, since both $f,g\in\text{ker}\,\varphi$. Thus, $r$ is also in the kernel of $\varphi$. If $r$ were a non-zero polynomial, then we would have a contradiction, because $\deg r<\deg g$, but $g$ was chosen from $\text{ker}\,\varphi$ to have smallest degree. Thus we must have $r=0$, hence $f=qg\in\langle g\rangle$, ie: $\text{ker}\,\varphi\subseteq\langle g\rangle$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} Let $\langle f\rangle\subset F[x]$ be an ideal and $g\in F[x]$ any polynomial. The set $$ g+\langle f\rangle=\{g+h\,|\,h\in\langle f\rangle\}, $$ is called the {\em coset of $\langle f\rangle$ with representative $g$\/} (or the coset of $\langle f\rangle$ {\em determined\/} by $g$). \paragraph{\hspace*{-0.3cm}} As an example, consider the ideal $\langle x\rangle$ in $\ams{F}_2[x]$. Thus $\langle x\rangle$ is the set of all multiples of $x$, which is the same as the polynomials in $\ams{F}_2[x]$ that have no constant term. What are the cosets of $\langle x\rangle$? Let $g$ be any polynomial and consider the coset $g+\langle x\rangle$. The only possibilities are that $g$ has no constant term, or it does, in which case this term is $1$ (we are in $\ams{F}_2[x]$). If $g$ has no constant term, then $$ g+\langle x\rangle=\langle x\rangle. $$ For, $g$ added to a polynomial with no constant term is another polynomial with no constant term, ie: $g+\langle x\rangle\subseteq \langle x\rangle$. On the other hand, if $f\in\langle x\rangle$ is any polynomial with no constant term, then $f-g\in\langle x\rangle$, so $f=g+(f-g)\in g+\langle x\rangle$, ie: $\langle x\rangle\subseteq g+\langle x\rangle$. If $g$ does have a constant term, you can convince yourself in exactly the same way that, $$ g+\langle x\rangle=1+\langle x\rangle. $$ Thus, there are only two cosets of $\langle x\rangle$ in $\ams{F}_2[x]$, namely the ideal $\langle x\rangle$ itself and $1+\langle x\rangle$. Notice that these two cosets are completely disjoint, but every polynomial is in one of them. \paragraph{\hspace*{-0.3cm}} Here are some basic properties of cosets: \begin{figure} \caption{Two different names for the same coset (\emph{left}) and a prohibited situation (\emph{right}).} \label{fig:figure4} \end{figure} \begin{description} \item[--] \emph{Every polynomial $g$ is in some coset of $\langle f\rangle$}: for $g=g+0\times f\in g+\langle f\rangle$. \item[--] \emph{For any $q$, we have $qf+\langle f\rangle=\langle f\rangle$}: so multiples of $f$ get ``absorbed'' into the ideal $\langle f\rangle$. \item[--] \emph{The following three things are equivalent: (i). $g_1$ and $g_2$ lie in the same coset of $\langle f\rangle$; (ii). $g_1+\langle f\rangle=g_2+\langle f\rangle$; (iii).
$g_1$ and $g_2$ differ by a multiple of $f$}: to see this: (iii) $\Rightarrow$ (ii) if $g_1-g_2=pf$ then $g_1=g_2+pf$, so that $g_1+\langle f\rangle=g_2+pf+\langle f\rangle=g_2+\langle f\rangle$; (ii) $\Rightarrow$ (i) since $g_1\in g_1+\langle f\rangle$ and $g_2\in g_2+\langle f\rangle$, and these cosets are equal, we have that $g_1,g_2$ lie in the same coset; (i) $\Rightarrow$ (iii) if $g_1$ and $g_2$ lie in the same coset, ie: $g_1,g_2\in h+\langle f\rangle$, then each $g_i=h+p_if\Rightarrow g_1-g_2=(p_1-p_2)f$. It can be summarised by saying that $g_1$ and $g_2$ lie in the same coset if and only if this coset has the two different names, $g_1+\langle f\rangle$ and $g_2+\langle f\rangle$, as in the left of Figure \ref{fig:figure4}. \item[--] \emph{The situation on the right of Figure \ref{fig:figure4} never happens, where distinct cosets have non-empty intersection}: if the two cosets pictured are called respectively $g_1+\langle f\rangle$ and $g_2+\langle f\rangle$, then $h$ is in both and so differs from $g_1$ and $g_2$ by multiples of $f$, ie: $g_1-h=p_1f$ and $h-g_2=p_2f$, so that $g_1-g_2=(p_1+p_2)f$. Since $g_1$ and $g_2$ differ by a multiple of $f$, we have $g_1+\langle f\rangle= g_2+\langle f\rangle$. \end{description} Thus, the cosets of an ideal partition the ring. \paragraph{\hspace*{-0.3cm}} As an example let $x^2-2\in\Q[x]$ and consider the ideal $$ \langle x^2-2\rangle=\{p(x^2-2)\,|\,p\in\Q[x]\}. $$ $(x^3-2x+15)+\langle x^2-2\rangle$ is then a coset, but it is not written in the nicest possible form. If we divide $x^3-2x+15$ by $x^2-2$: $$ x^3-2x+15=x(x^2-2)+15, $$ we see that $x^3-2x+15$ and $15$ differ by a multiple of $x^2-2$, so that $$ (x^3-2x+15)+\langle x^2-2\rangle=15+\langle x^2-2\rangle. $$ \paragraph{\hspace*{-0.3cm}} If we look again at the ideal $\langle x\rangle$ in $\ams{F}_2[x]$, there were only two cosets, $$ \langle x\rangle=0+\langle x\rangle \text{ and } 1+\langle x\rangle, $$ corresponding to the polynomials with constant term $0$ and the polynomials with constant term $1$. We could try ``adding'' and ``multiplying'' these two cosets together according to, $$ (0+\langle x\rangle)+(0+\langle x\rangle)= 0+\langle x\rangle,\quad (1+\langle x\rangle)+(0+\langle x\rangle) =1+\langle x\rangle,\quad (1+\langle x\rangle)+(1+\langle x\rangle)=0+\langle x\rangle, $$ and so on, where all we have done is to add the representatives of the cosets together using the addition from $\ams{F}_2$. Similarly for multiplying the cosets. This looks like $\ams{F}_2$, but with $0+\langle x\rangle$ and $1+\langle x\rangle$ replacing $0$ and $1$. \paragraph{\hspace*{-0.3cm}} Again this always happens. Let $\langle f\rangle$ be an ideal in $F[x]$, and define an addition and multiplication of cosets of $\langle f\rangle$ by, $$ (g_1+\langle f\rangle)+(g_2+\langle f\rangle)=(g_1+g_2)+\langle f\rangle\text{ and } (g_1+\langle f\rangle)(g_2+\langle f\rangle)=(g_1g_2)+\langle f\rangle, $$ where the addition and multiplication of the $g_i$'s is happening in $F[x]$. \begin{theorem} The set of cosets $F[x]/\langle f\rangle$ together with the $+$ and $\times$ above is a ring. \end{theorem} Call this the {\em quotient ring\/} of $F[x]$ by the ideal $\langle f\rangle$.
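Concretely, coset arithmetic is just arithmetic on representatives followed by a reduction step: add or multiply the representatives in $F[x]$, then replace the result by its remainder on division by $f$. Here is a minimal sketch (Python, over $\R$-coefficients stored constant term first; all names are ours) that checks the product worked by hand in the $\R[x]/\langle x^2+1\rangle$ example below:
\begin{verbatim}
def poly_mul(u, v):
    out = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += a * b
    return out

def poly_rem(g, f):                  # remainder of g on division by f
    g = g[:]
    while len(g) >= len(f) and any(g):
        c, shift = g[-1] / f[-1], len(g) - len(f)
        for i, a in enumerate(f):
            g[shift + i] -= c * a
        while g and abs(g[-1]) < 1e-12:
            g.pop()                  # discard the now-zero leading term
    return g

f = [1, 0, 1]                        # the ideal generator x^2 + 1
print(poly_rem(poly_mul([1, 1], [-3, 2]), f))   # [-5.0, -1.0], ie: -x - 5
\end{verbatim}
So $(x+1)(2x-3)$ reduces to $-x-5$ modulo $x^2+1$, in agreement with the coset computation that follows.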
All our rings have a ``zero'', a ``one'', and so on, and for the quotient ring these are, $$ \begin{tabular}{cc}\hline element of a ring & corresponding element in $F[x]/\langle f\rangle$\\\hline $a$&$g+\langle f\rangle$\\ $-a$&$(-g)+\langle f\rangle$\\ $0$&$0+\langle f\rangle=\langle f\rangle$\\ $1$&$1+\langle f\rangle$\\\hline \end{tabular} $$ \begin{vexercise} To prove this theorem: \begin{enumerate} \item Show that the addition of cosets is {\em well defined\/}, ie: if $g_i'+\langle f\rangle =g_i+\langle f\rangle$, then $$ (g_1'+g_2')+\langle f\rangle=(g_1+g_2)+\langle f\rangle. $$ \item Similarly, show that the multiplication is well defined. \item Now verify the axioms for a ring. \end{enumerate} \end{vexercise} \paragraph{\hspace*{-0.3cm}} Let $x^2+1\in\R[x]$, and consider the ideal $\langle x^2+1\rangle$. We want to see what the quotient $\R[x]/\langle x^2+1\rangle$ looks like. First, any coset can be put into a nice form: for example, $$ x^4+x^2+x+1+\langle x^2+1\rangle=x^2(x^2+1)+(x+1)+\langle x^2+1\rangle, $$ where we have divided $x^4+x^2+x+1$ by $x^2+1$ using the division algorithm. But $$ x^2(x^2+1)+(x+1)+\langle x^2+1\rangle=x+1+\langle x^2+1\rangle, $$ as the multiple of $x^2+1$ gets absorbed into the ideal. In fact, for any $g\in\R[x]$ we can make this argument: $$ g+\langle x^2+1\rangle=q(x^2+1)+(ax+b)+\langle x^2+1\rangle=ax+b+\langle x^2+1\rangle, $$ for some $a,b\in\R$, so the set of cosets can be written as $$ \R[x]/\langle x^2+1\rangle=\{ax+b+\langle x^2+1\rangle\,|\,a,b\in\R\}. $$ Now take two elements of the quotient, say $(x+1)+\langle x^2+1\rangle$ and $(2x-3)+\langle x^2+1\rangle$, and add/multiply them together: $$ \biggl\{(x+1)+\langle x^2+1\rangle\biggr\}+\biggl\{(2x-3)+\langle x^2+1\rangle\biggr\} =3x-2+\langle x^2+1\rangle, $$ and \begin{equation*} \begin{split} \biggl\{(x+1)+\langle x^2+1\rangle\biggr\}\times \biggl\{(2x-3)+\langle x^2+1\rangle\biggr\}&=(2x^2-x-3)+ \langle x^2+1\rangle\\ &=2(x^2+1)+(-x-5)+\langle x^2+1\rangle\\ &=-x-5+\langle x^2+1\rangle. \end{split} \end{equation*} Now ``squint your eyes'', so that $ax+b+\langle x^2+1\rangle$ becomes the complex number $a\text{i}+b\in\ams{C}$. Then $$ (\text{i}+1)+(2\text{i}-3)=3\text{i}-2\text{ and }(\text{i}+1)(2\text{i}-3)=-\text{i}-5. $$ The addition and multiplication of cosets in $\R[x]/\langle x^2+1\rangle$ looks exactly like the addition and multiplication of complex numbers! \paragraph{\hspace*{-0.3cm}} To see what quotient rings look like we use: \begin{isomthm} Let $\varphi:F[x]\rightarrow S$ be a ring homomorphism with kernel $\langle f\rangle$. Then the map $g+\langle f \rangle\mapsto\varphi(g)$ is an isomorphism $$ F[x]/\langle f\rangle\rightarrow\text{Im}\,\varphi\subseteq S. $$ \end{isomthm} \paragraph{\hspace*{-0.3cm}} In the example above take $F=\R$ and $S=\ams{C}$. Let the homomorphism $\varphi$ be the evaluation at $\text{i}$ homomorphism, $$ \varepsilon_i:\biggl(\sum a_kx^k\biggr)\mapsto \sum a_k(\text{i})^k. $$ In exactly the same way as an earlier example, you can show that $$ \text{ker}\,\varepsilon_i=\langle x^2+1\rangle. $$ On the other hand, if $a\text{i}+b\in\ams{C}$, then $a\text{i}+b=\varepsilon_i(ax+b)$, so the image of the homomorphism $\varepsilon_i$ is all of $\ams{C}$. Feeding this into the first isomorphism theorem gives, $$ \R[x]/\langle x^2+1\rangle\cong \ams{C}. $$ \subsection*{Further Exercises for Section \thesection} \begin{vexercise} Let $\phi=(1+\sqrt{5})/2$ (in fact the {\it Golden Number\/}).
\begin{enumerate}
\item Show that the kernel of the evaluation map $\epsilon_{\phi}:\Q[x]\rightarrow\ams{C}}\def\Q{\ams{Q}$ (given by $\epsilon_{\phi}(f)=f(\phi)$) is the ideal $\langle x^2-x-1\rangle$.
\item Show that $\Q(\phi)=\{a+b\phi\mid a,b\in\Q\}$.
\item Show that $\Q(\phi)$ is the image in $\ams{C}}\def\Q{\ams{Q}$ of the map $\epsilon_{\phi}$.
\end{enumerate}
\end{vexercise}
\begin{vexercise}\label{ex5.3}
Going back to the general case of an ideal $I$ in a ring $R$, consider the map $\eta: R\rightarrow R/I$ given by,
$$
\eta(r)=r+I,
$$
sending an element of $R$ to the coset of $I$ determined by it.
\begin{enumerate}
\item Show that $\eta$ is a homomorphism.
\item Show that if $J$ is an ideal in $R$ containing $I$ then $\eta(J)$ is an ideal of $R/I$.
\item Show that if $J'$ is an ideal of $R/I$ then there is an ideal $J$ of $R$, containing $I$, such that $\eta(J)=J'$.
\item Show that in this way, $\eta$ is a bijection between the ideals of $R$ containing $I$ and the ideals of $R/I$.
\end{enumerate}
\end{vexercise}
\section{Fields II: Constructions and More Examples}\label{fields2}
\paragraph{\hspace*{-0.3cm}} A proper ideal $\langle f\rangle$ of $F[x]$ is {\em maximal\/} if and only if the only ideals of $F[x]$ containing $\langle f\rangle$ are $\langle f\rangle$ itself and the whole ring $F[x]$, ie:
$$
\langle f\rangle\subseteq I\subseteq F[x],
$$
with $I$ an ideal implies that $I=\langle f\rangle$ or $I=F[x]$.
\paragraph{\hspace*{-0.3cm}} The main result of this section is,
\begin{theoremB}
The quotient ring $F[x]/\langle f\rangle$ is a field if and only if $\langle f\rangle$ is a maximal ideal.
\end{theoremB}
\begin{proof}
By Exercise \ref{ex5.1}, a commutative ring $R$ is a field if and only if the only ideals of $R$ are the trivial one $\{0\}$ and the whole ring $R$. Thus the quotient $F[x]/\langle f\rangle$ is a field if and only if its only ideals are the trivial one $\langle f\rangle$ and the whole ring $F[x]/\langle f\rangle$. By Exercise \ref{ex5.3}, there is a one to one correspondence between the ideals of the quotient $F[x]/\langle f\rangle$ and the ideals of $F[x]$ that contain $\langle f\rangle$. Thus $F[x]/\langle f\rangle$ has only the two trivial ideals precisely when there are only two ideals of $F[x]$ containing $\langle f\rangle$, namely $\langle f\rangle$ and $F[x]=\langle 1\rangle$, which is the same as saying that $\langle f\rangle$ is maximal.
\qed
\end{proof}
\paragraph{\hspace*{-0.3cm}} Suppose now that $f$ is an irreducible polynomial over $F$, and let $\langle f\rangle \subseteq I\subseteq F[x]$ with $I$ an ideal. Then $I=\langle h\rangle$ for some $h$, since every ideal of $F[x]$ is principal, hence $\langle f \rangle\subseteq \langle h\rangle$, and so $h$ divides $f$. Since $f$ is irreducible this means that $h$ must be either a non-zero constant $c\in F$ or $c f$, so that the ideal $I$ is either $\langle c\rangle$ or $\langle c f\rangle$. But $\langle c f\rangle$ is just the same as the ideal $\langle f\rangle$. On the other hand, any polynomial $g$ can be written as a multiple of $c$, just by setting $g=c(c^{-1}g)$, and so $\langle c\rangle=F[x]$. Thus if $f$ is an irreducible polynomial then the ideal $\langle f\rangle$ is maximal. Conversely, if $\langle f\rangle$ is maximal and $h$ divides $f$, then $\langle f\rangle\subseteq \langle h\rangle$, so that by maximality $\langle h\rangle=\langle f\rangle$ or $\langle h\rangle=F[x]$. By Exercise \ref{fields2:Exercise20} we have $h=c$ a constant, or $h=c f$, and so $f$ is irreducible over $F$.
Thus, the ideal $\langle f\rangle$ is maximal precisely when $f$ is irreducible. \begin{corollary} $F[x]/\langle f\rangle$ is a field if and only if $f$ is an irreducible polynomial over $F$. \end{corollary} \paragraph{\hspace*{-0.3cm}} The polynomial $x^2+1$ is irreducible over the reals $\R$, so the quotient ring $\R[x]/\langle x^2+1\rangle$ is a field. \paragraph{\hspace*{-0.3cm}} The polynomial $x^2-2x+2$ has roots $1\pm \text{i}$, hence is irreducible over $\R$, giving the field, $$ \R[x]/\langle x^2-2x+2\rangle. $$ Consider the evaluation map $\varepsilon_{1+\text{i}}:\R[x]\rightarrow \ams{C}}\def\Q{\ams{Q}$ given as usual by $\varepsilon_{1+\text{i}}(f)=f(1+\text{i})$. In exactly the same way as the example for $\varepsilon_{\sqrt{2}}$ in Section \ref{lect5}, one can show that $\text{ker}\,\varepsilon_{1+\text{i}}=\langle x^2-2x+2\rangle$. Moreover, $a+b\text{i}=\varepsilon_{1+\text{i}}(a-b+bx)$ so that the evaluation map is onto $\ams{C}}\def\Q{\ams{Q}$. Thus, by the first isomorphism theorem we get that, $$ \R[x]/\langle x^2-2x+2\rangle\cong\ams{C}}\def\Q{\ams{Q}. $$ What this means is that we can construct the complex numbers in the following (slightly non-standard) way: start with the reals $\R$, and define a new symbol, $\nabla$ say, which satisfies the algebraic property, $$ \nabla^2=2\nabla-2. $$ Now consider all expressions of the form $c+d\nabla$ for $c,d\in\R$. Add and multiply two such expressions together as follows: \begin{equation*} \begin{split} (c_1+d_1\nabla)+(c_2+d_2\nabla)&=(c_1+c_2)+(d_1+d_2)\nabla\\ (c_1+d_1\nabla)(c_2+d_2\nabla)&=c_1c_2+(c_1d_2+d_1c_2)\nabla +d_1d_2\nabla^2\\ &=c_1c_2+(c_1d_2+d_1c_2)\nabla +d_1d_2(2\nabla-2)\\ &=(c_1c_2-2d_1d_2)+(c_1d_2+d_1c_2+2d_1d_2)\nabla.\\ \end{split} \end{equation*} \begin{vexercise} By solving the equations $cx-2dy=1$ and $cy+dx+2dy=0$ for $x$ and $y$ in terms of $c$ and $d$, find the inverse of the element $c+d\nabla$. \end{vexercise} \begin{vexercise} According to Exercise \ref{ex3.21}, if $f$ is irreducible over $\R$ then $f$ must be either quadratic or linear. Suppose that $f=ax^2+bx+c$ is an irreducible quadratic over $\R$. Show that the field $\R[x]/\langle ax^2+bx+c\rangle\cong\ams{C}}\def\Q{\ams{Q}$. \end{vexercise} \paragraph{\hspace*{-0.3cm}} The next few paragraphs illustrate the construction for finite fields, using a field of order four as a running example. In the process of doing the example in \ref{irreducible_polynomials:paragraph50} we saw that the only irreducible quadratic over the field $\ams{F}}\def\K{\ams{K}_2$ is $x^2+x+1$. Thus the quotient $$ \ams{F}}\def\K{\ams{K}_2[x]/\langle x^2+x+1\rangle, $$ is a field. Each of its elements is a coset of the form $g+\langle x^2+x+1\rangle$. Use the division algorithm, dividing $g$ by $x^2+x+1$, to get $$ g+\langle x^2+x+1\rangle=q(x^2+x+1)+r+\langle x^2+x+1\rangle=r+\langle x^2+x+1\rangle, $$ where the remainder $r$ is of the form $ax+b$, for $a,b\in\ams{F}}\def\K{\ams{K}_2$. Thus every element of the field has the form $ax+b+\langle x^2+x+1\rangle$, of which there are at most $4$ possibilities ($2$ choices for $a$ and $2$ choices for $b$). Indeed these $4$ are distinct, for if $$ a_1x+b_1+\langle x^2+x+1\rangle=a_2x+b_2+\langle x^2+x+1\rangle $$ then, \begin{equation*} \begin{split} (a_1-a_2)x&+(b_1-b_2)+\langle x^2+x+1\rangle\\ &=\langle x^2+x+1\rangle \Leftrightarrow (a_1-a_2)x+(b_1-b_2)\in\langle x^2+x+1\rangle. 
\end{split}
\end{equation*}
Since the non-zero elements of the ideal are multiples of a degree two polynomial, they have degrees that are at least two. Thus the only way the linear polynomial can be an element of the ideal is if it is the zero polynomial. In particular, $a_1-a_2=b_1-b_2 =0$, so the two cosets are the same. The quotient ring is thus a field having the four elements:
$$
\ams{F}}\def\K{\ams{K}_4=\{ax+b+\langle x^2+x+1\rangle\,|\,a,b\in\ams{F}}\def\K{\ams{K}_2\}.
$$
\paragraph{\hspace*{-0.3cm}} Generalising the example of the field of order $4$ above, if $\ams{F}}\def\K{\ams{K}_p$ is the finite field with $p$ elements and $f\in\ams{F}}\def\K{\ams{K}_p[x]$ is an irreducible polynomial of degree $d$, then the quotient $\ams{F}}\def\K{\ams{K}_p[x]/\langle f \rangle$ is a field containing elements of the form,
$$
a_{d-1} x^{d-1}+\cdots+a_0+\langle f\rangle,
$$
where the $a_i\in\ams{F}}\def\K{\ams{K}_p$. Any two such are distinct by exactly the same argument as above, so we have a field $\ams{F}}\def\K{\ams{K}_q$ with exactly $q=p^d$ elements.
\paragraph{\hspace*{-0.3cm}} Returning to the general situation of a quotient $F[x]/\langle f\rangle$ by an irreducible polynomial $f$, the resulting field contains a copy of the original field $F$, obtained by considering the cosets $a+\langle f\rangle$ for $a\in F$.
\begin{vexercise}
Show that the map $a\mapsto a+\langle f\rangle$ is an injective homomorphism $F\rightarrow F[x]/\langle f\rangle$, and so $F$ is isomorphic to its image in $F[x]/\langle f\rangle$.
\end{vexercise}
Blurring the distinction between the original $F$ and this copy inside $F[x]/\langle f\rangle$, we get that $F\subset F[x]/\langle f\rangle$ is an extension of fields.
\paragraph{\hspace*{-0.3cm}} Back to the field $\ams{F}}\def\K{\ams{K}_4$ of order $4$ and a more convenient notation. Let
$$
\alpha=x+\langle x^2+x+1\rangle
$$
and write $a\in\ams{F}}\def\K{\ams{K}_2$ for the coset $a+\langle x^2+x+1\rangle$ as in the previous paragraph. Addition and multiplication of cosets gives:
$$
ax+b+\langle x^2+x+1\rangle =(a+\langle x^2+x+1\rangle)(x+\langle x^2+x+1\rangle)+(b+\langle x^2+x+1\rangle) =a\alpha+b.
$$
So we now have that $\ams{F}}\def\K{\ams{K}_4=\{a\alpha+b\,|\,a,b\in\ams{F}}\def\K{\ams{K}_2\}=\{0,1,\alpha,\alpha+1\}$. But we also have the coset property $f+\langle f\rangle=\langle f\rangle$, which for $f=x^2+x+1$ translates into
$$
(x+\langle x^2+x+1\rangle)^2+(x+\langle x^2+x+1\rangle) + (1+\langle x^2+x+1\rangle)=\langle x^2+x+1\rangle,
$$
or, $\alpha^2+\alpha+1=0$. Our field is now $\ams{F}}\def\K{\ams{K}_4=\{0,1,\alpha,\alpha+1\}$, together with the ``rule'' $\alpha^2=\alpha+1$. At the risk of labouring the point, here are the multiplication tables for the field $\ams{F}}\def\K{\ams{K}_4$ and the ring $\ams{Z}}\def\E{\ams{E}_4$:
$$
\begin{tabular}{c|cccc} $\ams{F}}\def\K{\ams{K}_4$&$0$&$1$&$\alpha$&$\alpha+1$\\ \hline $0$&$0$&$0$&$0$&$0$\\ $1$&$0$&$1$&$\alpha$&$\alpha+1$\\ $\alpha$&$0$&$\alpha$&$\alpha+1$&$1$\\ $\alpha+1$&$0$&$\alpha+1$&$1$&$\alpha$\\ \end{tabular}
\quad \quad
\begin{tabular}{c|cccc} $\ams{Z}}\def\E{\ams{E}_4$&$0$&$1$&$2$&$3$\\ \hline $0$&$0$&$0$&$0$&$0$\\ $1$&$0$&$1$&$2$&$3$\\ $2$&$0$&$2$&$0$&$2$\\ $3$&$0$&$3$&$2$&$1$\\ \end{tabular}
$$
$1$ appears in every non-zero row of the $\ams{F}}\def\K{\ams{K}_4$ table -- so every non-zero element has an inverse -- but does not appear in every non-zero row of $\ams{Z}}\def\E{\ams{E}_4$.
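\begin{aside}
The tables above can also be generated mechanically. Here is a minimal sketch in Python (an illustration only; the encoding of elements as pairs and the helper names are ours): an element $ax+b$ of $\ams{F}}\def\K{\ams{K}_4$ is stored as the pair $(a,b)$ of integers mod $2$, and multiplication uses the rule $x^2=x+1$.
\begin{verbatim}
def mult(u, v):
    # (a1 x + b1)(a2 x + b2) = a1 a2 x^2 + (a1 b2 + a2 b1) x + b1 b2,
    # and x^2 + x + 1 = 0 gives the reduction x^2 = x + 1 over F_2:
    (a1, b1), (a2, b2) = u, v
    return ((a1 * b2 + a2 * b1 + a1 * a2) % 2, (b1 * b2 + a1 * a2) % 2)

name = {(0, 0): "0", (0, 1): "1", (1, 0): "a", (1, 1): "a+1"}  # a is alpha
elts = [(0, 0), (0, 1), (1, 0), (1, 1)]
for u in elts:
    print([name[mult(u, v)] for v in elts])
\end{verbatim}
Running this reproduces the $\ams{F}}\def\K{\ams{K}_4$ table row by row; in particular every non-zero row contains a $1$.
\end{aside}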
\paragraph{\hspace*{-0.3cm}} In general, when $f\in\ams{F}}\def\K{\ams{K}_p[x]$ is irreducible of degree $d$, we let $\alpha=x+\langle f\rangle$ and replace $\ams{F}}\def\K{\ams{K}_p$ by its copy in $\ams{F}}\def\K{\ams{K}_p[x]/\langle f\rangle$ (ie: identify $a\in\ams{F}}\def\K{\ams{K}_p$ with $a+\langle f\rangle\in\ams{F}}\def\K{\ams{K}_p[x]/\langle f\rangle$). This gives,
$$
\ams{F}}\def\K{\ams{K}_p[x]/\langle f \rangle=\{a_{d-1} \alpha^{d-1}+\cdots+a_0\,|\,a_i\in\ams{F}}\def\K{\ams{K}_p\},
$$
where two such expressions are added and multiplied like ``polynomials'' in $\alpha$. If $f=b_{d}x^{d}+\cdots+b_1x+b_0$, then since $f+\langle f\rangle=\langle f \rangle$, we have the ``rule''\/ $b_{d}\alpha^{d}+\cdots+b_1\alpha+b_0=0$, which allows us to remove any powers of $\alpha$ of degree $d$ or higher that occur in such expressions. The element $\alpha$ is called a {\em generator\/} for the field.
\paragraph{\hspace*{-0.3cm}} The polynomial $x^3+x+1$ is irreducible over the field $\ams{F}}\def\K{\ams{K}_2$ (it is a cubic and has no roots) so that
$$
\ams{F}}\def\K{\ams{K}_2[x]/\langle x^3+x+1\rangle,
$$
is a field with $2^3=8$ elements of the form $\ams{F}}\def\K{\ams{K}=\{a+b\alpha+c\alpha^2\,|\,a,b,c\in\ams{F}}\def\K{\ams{K}_2\}$ subject to the rule $\alpha^3+\alpha+1=0$, ie: $\alpha^3=\alpha+1$. This is the field $\ams{F}}\def\K{\ams{K}$ of order $8$ from Section \ref{lect4}.
\begin{vexercise}
Explicitly construct fields with exactly:
$$
1. \,\, 125\text{ elements}\qquad 2. \,\, 49\text{ elements}\qquad 3. \,\, 81\text{ elements}\qquad 4. \,\, 243\text{ elements}\qquad
$$
(By explicitly I mean give a general description of the elements and any algebraic rules that are needed for adding and multiplying them together.)
\end{vexercise}
\paragraph{\hspace*{-0.3cm}} To explicitly construct a field of order $p^d$ with $d>3$ is harder -- finding irreducible polynomials of degree bigger than a cubic is not straightforward, as the example in \ref{irreducible_polynomials:paragraph50} shows. One solution is to create the field in a series of steps (or extensions), each of which only involves quadratics or cubics. We do this for a field of order $729=3^6$. As $3^6=(3^2)^3$, we first create a field of order $3^2$, and then extend this using a cubic. Consider the polynomial $f=x^2+x+2\in\ams{F}}\def\K{\ams{K}_3[x]$. Substituting the three elements of $\ams{F}}\def\K{\ams{K}_3$ into $f$ gives
$$
0^2+0+2=2,\quad 1^2+1+2=1\text{ and }2^2+2+2=2,
$$
so that $f$ has no roots in $\ams{F}}\def\K{\ams{K}_3$. As $f$ is quadratic it is irreducible over the field $\ams{F}}\def\K{\ams{K}_3$, and so $\ams{F}}\def\K{\ams{K}_9=\ams{F}}\def\K{\ams{K}_3[x]/\langle x^2+x+2\rangle$ is a field of order $3^2$. Let $\alpha=x+\langle x^2+x+2\rangle$ in $\ams{F}}\def\K{\ams{K}_9$ be a generator so that the elements have the form $a+b\alpha$ with $a,b\in\ams{F}}\def\K{\ams{K}_3$ and multiplication satisfying the rule $\alpha^2+\alpha+2=0$, or equivalently $\alpha^2=2\alpha+1$ ($-1=2$ and $-2=1$ in $\ams{F}}\def\K{\ams{K}_3$). Now let $X$ be a new variable, and consider the polynomials $\ams{F}}\def\K{\ams{K}_9[X]$ over $\ams{F}}\def\K{\ams{K}_9$ in this new variable. In particular the polynomial:
\begin{equation}
\label{eq:8}
g=X^3+(2\alpha+1)X+1.
\end{equation}
As $g$ is a cubic, it will be irreducible over $\ams{F}}\def\K{\ams{K}_9$ precisely when it has no roots in this field, which can be verified as usual by straight substitution of the nine elements of $\ams{F}}\def\K{\ams{K}_9$.
For example:
\begin{equation*}
\begin{split}
g(2\alpha+1)&=(2\alpha+1)^3+(2\alpha+1)(2\alpha+1)+1=2\alpha^3+1+\alpha^2+\alpha+1+1\\
&=2\alpha(2\alpha+1)+\alpha^2+\alpha\\
&=\alpha^2+2\alpha+\alpha^2+\alpha=\alpha+2
\end{split}
\end{equation*}
and the others are similar. We have used an energy-saving device in these computations:
\begin{vexercise}
\label{ex9.1}
If $a,b\in F$, a field of characteristic $p>0$, then $(a+b)^p=a^p+b^p$ (hint: Exercise \ref{ex3.3}).
\end{vexercise}
Thus the polynomial $g$ in (\ref{eq:8}) is irreducible over $\ams{F}}\def\K{\ams{K}_9$, and we have a field:
$$
\ams{F}}\def\K{\ams{K}_9[X]/\langle X^3+(2\alpha+1)X+1\rangle
$$
of order $9^3=3^6=729$, called $\ams{F}}\def\K{\ams{K}_{729}$. The elements have the form,
$$
A_0+A_1\beta+A_2\beta^2,
$$
where the $A_i\in\ams{F}}\def\K{\ams{K}_9$ and $\beta=X+\langle g\rangle$ is a generator. Multiplication is given by the rule $\beta^3=(\alpha+2)\beta+2$. Replacing the $A_i$ by the earlier description of $\ams{F}}\def\K{\ams{K}_9$ in terms of the generator $\alpha$ gives elements:
$$
a_0+a_1\beta+a_2\beta^2+a_3\alpha+a_4\alpha\beta+a_5\alpha\beta^2,
$$
with the $a_i\in\ams{F}}\def\K{\ams{K}_3$, subject to the rules $\alpha^2=2\alpha+1$ and $\beta^3=(\alpha+2)\beta+2$.
\begin{vexercise}
\hspace{1em}\begin{enumerate}
\item Construct a field $\ams{F}}\def\K{\ams{K}_8$ with 8 elements by showing that $x^3+x+1$ is irreducible over $\ams{F}}\def\K{\ams{K}_2$.
\item Find a cubic polynomial that is irreducible in $\ams{F}}\def\K{\ams{K}_8[x]$ (hint: refer to Exercise \ref{ex_lect3.1}).
\item Hence, or otherwise, construct a field with $2^9=512$ elements.
\end{enumerate}
\end{vexercise}
\begin{vexercise}
Explicitly construct fields with exactly:
$$
1. \,\, 64\text{ elements}\qquad 2. \,\, \text{\emph{challenge:}}\,\,4096\text{ elements}\qquad
$$
\end{vexercise}
\paragraph{\hspace*{-0.3cm}} Theorem B and its Corollary solve the problem that we encountered in Section \ref{lect4} where the fields
$$
\Q(\sqrt[3]{2})\text{ and } \Q\biggl(\frac{-\sqrt[3]{2}+\sqrt[3]{2}\kern-2pt\sqrt{3}\text{i}}{2}\biggr)=\Q(\beta)
$$
were different but isomorphic. The polynomial $x^3-2$ is irreducible over $\Q$, either by Eisenstein, or by observing that it is a cubic whose roots do not lie in $\Q$. Thus
$$
\Q[x]/\langle x^3-2\rangle,
$$
is an extension field of $\Q$. Consider the two evaluation homomorphisms $\varepsilon_{\sqrt[3]{2}}:\Q[x]\rightarrow \ams{C}}\def\Q{\ams{Q}$ and $\varepsilon_{\beta}:\Q[x]\rightarrow \ams{C}}\def\Q{\ams{Q}$. Since, and this is the key bit,
$$
\sqrt[3]{2}\text{ and } \beta=\frac{-\sqrt[3]{2}+\sqrt[3]{2}\kern-2pt\sqrt{3}\text{i}}{2}
$$
are both roots of the polynomial $x^3-2$, we can show in a similar manner to examples at the end of Section \ref{lect5} that $\text{ker}\,\varepsilon_{\sqrt[3]{2}}=\langle x^3-2\rangle = \text{ker}\,\varepsilon_{\beta}$.
Thus,
$$
\begin{pspicture}(0,0)(12,3)
\rput(0,.5){ \rput(6,2){$\Q[x]/\langle x^3-2\rangle$} \rput(2,2){$\Q[x]/\text{ker}\,\varepsilon_{\sqrt[3]{2}}$} \rput(10,2){$\Q[x]/\text{ker}\,\varepsilon_{\beta}$} \psline[linewidth=.1mm]{->}(4.8,2)(3.2,2) \psline[linewidth=.1mm]{->}(7.2,2)(8.8,2) \rput(4,2.2){$=$} \rput(8,2.2){$=$} }
\rput(2.2,1.5){$\cong$} \rput(9.8,1.5){$\cong$}
\rput(0,.5){ \rput(6,1){{\red $1^{\text{st}}$ Isomorphism Theorem}} \psline[linewidth=.1mm,linecolor=red]{->}(4,1)(2.6,1) \psline[linewidth=.1mm,linecolor=red]{->}(8,1)(9.4,1) }
\rput(2,.5){$\text{Im}\,\varepsilon_{\sqrt[3]{2}}$} \rput(10,.5){$\text{Im}\,\varepsilon_{\beta}$}
\psline[linewidth=.1mm]{->}(2,2.2)(2,.8) \psline[linewidth=.1mm]{->}(10,2.2)(10,.8)
\rput(13,1.5){(*)}
\end{pspicture}
$$
To find the image of $\varepsilon_{\sqrt[3]{2}}$ write a $g\in\Q[x]$ as $g=q(x^3-2)+(a+bx+cx^2)$ so that
\begin{equation*}
\begin{split}
\varepsilon_{\sqrt[3]{2}}(g)&= \varepsilon_{\sqrt[3]{2}}(q(x^3-2)+(a+bx+cx^2))\\
&= \varepsilon_{\sqrt[3]{2}}(q)\varepsilon_{\sqrt[3]{2}}(x^3-2)+\varepsilon_{\sqrt[3]{2}}(a+bx+cx^2)\\
&= \varepsilon_{\sqrt[3]{2}}(q).0+\varepsilon_{\sqrt[3]{2}}(a+bx+cx^2) =a+b\sqrt[3]{2}+c(\sqrt[3]{2})^2.
\end{split}
\end{equation*}
Hence $\text{Im}\,\varepsilon_{\sqrt[3]{2}}\subseteq\{a+b\sqrt[3]{2}+c(\sqrt[3]{2})^2\in\ams{C}}\def\Q{\ams{Q}\,|\,a,b,c\in\Q\} =\Q(\sqrt[3]{2})$. On the other hand $a+b\sqrt[3]{2}+c(\sqrt[3]{2})^2$ is the image of $a+bx+cx^2$ and so $\text{Im}\,\varepsilon_{\sqrt[3]{2}}=\Q(\sqrt[3]{2})$. Similarly $\text{Im}\,\varepsilon_\beta=\Q(\beta)$. Filling this information into the diagram (*) above gives the claimed isomorphism between $\Q(\sqrt[3]{2})$ and $\Q(\beta)$:
$$
\begin{pspicture}(0,0)(12,3)
\rput(0,.5){ \rput(6,2){$\Q[x]/\langle x^3-2\rangle$} \rput(2,2){$\Q[x]/\text{ker}\,\varepsilon_{\sqrt[3]{2}}$} \rput(10,2){$\Q[x]/\text{ker}\,\varepsilon_{\beta}$} \psline[linewidth=.1mm]{->}(4.8,2)(3.2,2) \psline[linewidth=.1mm]{->}(7.2,2)(8.8,2) \rput(4,2.2){$=$} \rput(8,2.2){$=$} }
\rput(2.2,1.5){$\cong$} \rput(9.8,1.5){$\cong$}
\rput(0,.5){ }
\psline[linewidth=.1mm,linecolor=red]{->}(6,1.75)(6,2.1)
\rput(6,1.5){{\red abstract field}}
\rput(2,.5){$\Q(\sqrt[3]{2})$} \rput(10,.5){$\Q(\beta)$}
\psline[linewidth=.1mm]{->}(2,2.2)(2,.8) \psline[linewidth=.1mm]{->}(10,2.2)(10,.8)
\rput(6,.5){{\red concrete versions in $\ams{C}}\def\Q{\ams{Q}$}}
\psline[linewidth=.1mm,linecolor=red]{<-}(2.75,.5)(4.2,.5) \psline[linewidth=.1mm,linecolor=red]{->}(7.8,.5)(9.5,.5)
\end{pspicture}
$$
\paragraph{\hspace*{-0.3cm}} In algebraic number theory a field $\Q[x]/\langle f\rangle$, for $f$ an irreducible polynomial over $\Q$, is called a \emph{number field}. If $\{\beta_1,\ldots,\beta_n\}$ are the roots of $f$, then we have $n$ mutually isomorphic fields $\Q(\beta_1),\ldots, \Q(\beta_n)$ inside $\ams{C}}\def\Q{\ams{Q}$. The isomorphisms from $\Q[x]/\langle f\rangle$ to each of these are called the {\em Galois monomorphisms\/} of the number field.
\paragraph{\hspace*{-0.3cm}} Returning to a general field:
\begin{kronecker}
\label{kronecker}
Let $f$ be a polynomial in $F[x]$. Then there is an extension field of $F$ containing a root of $f$.
\end{kronecker}
\begin{proof}
If $f$ is not irreducible over $F$, then factorise as $f=gh$ with $g$ irreducible over $F$ and proceed as below but with $g$ instead of $f$. The result will be an extension field containing a root of $g$, and hence of $f$. Thus we may suppose that $f$ is irreducible over $F$ and $f=a_nx^n+a_{n-1}x^{n-1}+\cdots +a_1x+a_0$ with the $a_i\in F$.
Replace $F$ by its isomorphic copy in the quotient $F[x]/\langle f\rangle$, so that instead of $a_i$, we write $a_i+\langle f\rangle$, ie,
$$
f=(a_n+\langle f\rangle)x^n+(a_{n-1}+\langle f\rangle)x^{n-1}+\cdots +(a_1+\langle f\rangle)x+(a_0+\langle f\rangle).
$$
Consider the field $E=F[x]/\langle f\rangle$ which is an extension of $F$ and the element $\mu=x+\langle f\rangle\in E$. If we substitute $\mu$ into the polynomial then we perform all our arithmetic in $E$, ie: we perform the arithmetic of cosets, and the zero of this field is the coset $\langle f\rangle$:
\begin{equation*}
\begin{split}
f(\mu) &=f(x+\langle f\rangle)\\
&= (a_n+\langle f\rangle)(x+\langle f\rangle)^n+(a_{n-1}+\langle f\rangle)(x+\langle f\rangle)^{n-1} +\cdots +(a_1+\langle f\rangle)(x+\langle f\rangle)+(a_0+\langle f\rangle)\\
&= (a_nx^n+\langle f\rangle)+(a_{n-1}x^{n-1}+\langle f\rangle)+\cdots +(a_1x+\langle f\rangle)+(a_0+\langle f\rangle)\\
&= (a_nx^n+a_{n-1}x^{n-1}+\cdots +a_1x+a_0)+\langle f\rangle=f+\langle f\rangle=\langle f\rangle=0.
\end{split}
\end{equation*}
ie: for $\mu=x+\langle f\rangle\in E$ we have $f(\mu)=0$.
\qed
\end{proof}
\begin{corollary}
\label{kronecker:corollary}
Let $f$ be a polynomial in $F[x]$. Then there is an extension field of $F$ that contains all the roots of $f$.
\end{corollary}
\begin{proof}
Repeat the process described in the proof of Kronecker's Theorem at most $\deg f$ times, splitting off a new root at each stage, until a field containing all the roots of $f$ is obtained.
\qed
\end{proof}
\subsection*{Further Exercises for Section \thesection}
\begin{vexercise}
Show that $x^4+x^3+x^2+x+1$ is irreducible over $\ams{F}}\def\K{\ams{K}_3$. How many elements does the resulting extension of $\ams{F}}\def\K{\ams{K}_3$ have?
\end{vexercise}
\begin{vexercise}
As linear polynomials are always irreducible, show that the field $F[x]/\langle ax+b\rangle$ is isomorphic to $F$.
\end{vexercise}
\begin{vexercise}\label{ex6.1}
\hspace{1em}\begin{enumerate}
\item Show that $1+2x+x^3\in\ams{F}}\def\K{\ams{K}_3[x]$ is irreducible and hence that $\ams{F}}\def\K{\ams{K}=\ams{F}}\def\K{\ams{K}_3[x]/\langle 1+2x+x^3\rangle$ is a field.
\item Show that every coset can be written uniquely in the form $(a+bx+cx^2)+\langle 1+2x+x^3\rangle$ with $a,b,c\in\ams{F}}\def\K{\ams{K}_3$.
\item Deduce that the field $\ams{F}}\def\K{\ams{K}$ has exactly 27 elements.
\end{enumerate}
\end{vexercise}
\begin{vexercise}
Find an irreducible polynomial $f(x)$ in $\ams{F}}\def\K{\ams{K}_5[x]$ of degree $2$. Show that $\ams{F}}\def\K{\ams{K}_5[x]/\langle f(x)\rangle$ is a field with $25$ elements.
\end{vexercise}
\begin{vexercise}\label{ex8.3}
\hspace{1em}\begin{enumerate}
\item Show that the polynomial $x^3-3x+6$ is irreducible over $\Q$.
\item Hence, or otherwise, if
$$
\alpha=\sqrt[3]{2\kern-2pt\sqrt{2}-3},\beta=-\sqrt[3]{2\kern-2pt\sqrt{2}+3}\mbox{ and }\omega=-\frac{1}{2}+ \frac{\sqrt{3}}{2}\text{i},
$$
prove that
\begin{enumerate}
\item the fields $\Q(\alpha+\beta)$ and $\Q(\omega\alpha+\overline{\omega}\beta)$ are distinct (that is, their elements are different), but,
\item $\Q(\alpha+\beta)$ and $\Q(\omega\alpha+\overline{\omega}\beta)$ are isomorphic (You can assume that $\omega\alpha+\overline{\omega}\beta$ is not a real number.)
\end{enumerate}
\end{enumerate}
\end{vexercise}
\section{Ruler and Compass Constructions I}
\label{ruler.compass}
If you are a farmer in Babylon around 2500 BC, how do you subdivide your land into plots? You survey it, of course.
The most basic surveying instruments are wooden pegs and rope, with which you can do two very basic things: two pegs can be set a distance apart and the rope stretched taut between them; also, one of the pegs can be kept stationary and you can take the path traced by the other as you walk around keeping the rope stretched tight. In other words, you can draw a line through two points or you can draw a circle centered at one point and passing through another. \paragraph{\hspace*{-0.3cm}} Instead of the Euphrates river valley, we work in the complex plane $\ams{C}}\def\Q{\ams{Q}$. We are thus able, given two numbers $z,w\in\ams{C}}\def\Q{\ams{Q}$, to draw a line through them using a straight edge, or to place one end of a compass at $z$, and draw the circle passing through $w$: $$ \begin{pspicture}(0,0)(10,3.25) \rput(5,1.5){\BoxedEPSF{galois7.1.eps scaled 1500}} \rput(1.35,2.15){$z$} \rput(2.95,1.45){$w$} \rput(7,1.5){$z$} \rput(9.1,2.5){$w$} \end{pspicture} $$ Neither of these operations involves any ``measuring''. There are no units on the ruler and we don't know the radius of the circle. \paragraph{\hspace*{-0.3cm}} With these two constructions we call a complex number $z$ {\em constructible\/} iff there is a sequence of numbers $$ 0,1,\text{i}=z_1,z_2,\ldots,z_n=z, $$ with $z_j$ obtained from earlier numbers in the sequence in one of the following three ways: $$ \begin{pspicture}(0,0)(14,3) \rput(1,0){ \rput(2,1.34){\BoxedEPSF{galois7.6a.eps scaled 1500}} \rput(2.1,1.6){$z_j$} \rput(1.1,.7){$z_p$}\rput(2.8,1.9){$z_q$} \rput(1.2,2){$z_r$}\rput(2.9,.6){$z_s$} \rput(2,.2){(i)} } \rput(1,0){ \rput(6,1.5){\BoxedEPSF{galois7.6b.eps scaled 1500}} \rput(6.1,2.3){$z_j$} \rput(5,2){$z_p$}\rput(6.9,2.4){$z_q$} \rput(6.4,.8){$z_r$}\rput(5.1,1.2){$z_s$} \rput(6.2,.2){(ii)} } \rput(1,0){ \rput(10,1.58){\BoxedEPSF{galois7.6c2.eps scaled 1500}} \rput(9.8,2.3){$z_j$} \rput(10.4,.8){$z_p$}\rput(11.3,1.7){$z_q$} \rput(9.5,1){$z_r$}\rput(8.7,1.8){$z_s$} \rput(10,.2){(iii)} } \end{pspicture} $$ In these pictures, $p,q,r$ and $s$ are all $<j$. We are given $0,1,\text{i}$ ``for free'', so they are indisputably constructible. The reasoning is this: if you stand in a plane (without coordinates) then your position can be taken as $0$; declare a direction to be the real axis and a distance along it to be length $1$; construct the perpendicular bisector of the segment from $-1$ to $1$ (as in the next paragraph) and measure a unit distance along this new axis (in either direction) to get $\text{i}$. \paragraph{\hspace*{-0.3cm}} In addition to the two basic moves there are others that follow immediately from them. For example, we can construct the perpendicular bisector of a segment $AB$ as in Figure \ref{fig:constructions:figure50}. \begin{figure} \caption{Constructing the perpendicular bisector of a segment.} \label{fig:constructions:figure50} \end{figure} \begin{figure} \caption{Bisecting an angle.} \label{fig:constructions:figure60} \end{figure} To explain these pictures (and the rest): a ray, centered at some point and tracing out a dotted circle is the compass. If the ray is marked $r$ -- as in the first two pictures above -- this means that in passing from the first picture to the second, the setting on the compass is kept the same. It does not mean that we know the setting. The construction works for the following reason: let $S$ be the set of points in $\ams{C}}\def\Q{\ams{Q}$ that are an equal distance from both $A$ and $B$. 
After a moment's thought, you see that this must be the perpendicular bisector of the line segment $\overline{AB}$ that we are constructing. Lines are determined by any two of their points, so if we can find two points equidistant from $A$ and $B$, and we draw a line through them, this must be the set $S$ that we want (and hence the perpendicular bisector). But the intersections of the two circular arcs are clearly equidistant from $A$ and $B$, so we are done.
\paragraph{\hspace*{-0.3cm}} As well as bisecting segments, we can bisect angles, ie: if two lines meet in some angle we can construct a third line meeting these in angles that are each half the original one -- see Figure \ref{fig:constructions:figure60}. Remember: none of the angles in this picture can be measured. Nevertheless, the two new angles are half the old one.
\paragraph{\hspace*{-0.3cm}} Given a line and a point $P$ not on it, we can construct a new line passing through $P$ and perpendicular to the line, as in Figure \ref{fig:constructions:figure70}. This is called ``dropping a perpendicular from a point to a line''.
\begin{figure}
\caption{Dropping a perpendicular from a point to a line.}
\label{fig:constructions:figure70}
\end{figure}
\paragraph{\hspace*{-0.3cm}} Given a line $\ell$ and a point $P$ not on it we can construct a new line through $P$ parallel to $\ell$ -- see Figure \ref{fig:figure5}. Some explanation for this one: the first step is to drop a perpendicular from $P$ to the line $\ell$, meeting it at the new point $Q$. Next, set your compass to the distance from $P$ to $Q$, and transfer this distance along the line to some point, drawing a semicircle that meets $\ell$ at the points $A$ and $B$. Construct the perpendicular bisector of the segment from $A$ to $B$, which meets the semicircle at the new point $R$. Finally, draw a line through the points $P$ and $R$.
\begin{figure}
\caption{Constructing a line through a point $P$ and parallel to another line $\ell$.}
\label{fig:figure5}
\end{figure}
\begin{figure}\label{fig:constructions:figure80}
\end{figure}
\paragraph{\hspace*{-0.3cm}} Figure \ref{fig:constructions:figure80} shows some basic examples of constructible numbers. It is less clear how to construct $\frac{27}{129}$, or the golden ratio:
$$
\phi=\frac{1+\sqrt{5}}{2}.
$$
But these numbers {\em are\/} constructible, and the reason is the first non-trivial fact about constructible numbers: they can be added, subtracted, multiplied and divided. Defining $\mathcal C$ to be the set of constructible numbers in $\ams{C}}\def\Q{\ams{Q}$, we have,
\begin{theoremC}
$\mathcal C$ is a subfield\footnote{In principle you can now throw away your calculator and instead use ruler and compass! To compute $\cos x$ for a constructible number $x$, for example, construct as many terms of the Taylor series,
$$
\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots
$$
as you need (your calculator only ever gives you approximations anyway).} of $\ams{C}}\def\Q{\ams{Q}$.
\end{theoremC}
\begin{proof}
We show first that the \emph{real\/} constructible numbers form a subfield of the reals, ie: that $\mathcal C\cap\R$ is a subfield of $\R$, for which we need to show that if $a,b\in \mathcal C\cap \R$ then so too are $a+b, -a, ab$ and $1/a$ (the last when $a\neq 0$).
\begin{enumerate}
\item {\em $\mathcal C\cap \R$ is closed under $+$ and $-$}: The picture on the left of Figure \ref{fig:figure20} shows that if $a\in\mathcal C\cap \R$ then so is $-a$.
Similarly, the two on the right of Figure \ref{fig:figure20} give $a,b\in \mathcal C\cap\R\Rightarrow a+b\in \mathcal C\cap\R$. (In these pictures $a$ and $b$ are $>0$. You can draw the other cases yourself).
\begin{figure}
\caption{$\mathcal C\cap \R$ is closed under $+$ \emph{(right)} and $-$ \emph{(left)}.}
\label{fig:figure20}
\end{figure}
\item {\em $\mathcal C\cap \R$ is closed under $\times$}: as can be seen by following through the steps in Figure \ref{fig:figure6}. Seeing that the construction works involves studying the pair of similar triangles shown in red.
\begin{figure}
\caption{$\mathcal C\cap \R$ is closed under $\times$.}
\label{fig:figure6}
\end{figure}
\item {\em $\mathcal C\cap \R$ is closed under $\div$}: is just the previous construction backwards -- see Figure \ref{fig:figure30}.
\begin{figure}
\caption{$\mathcal C\cap \R$ is closed under $\div$}
\label{fig:figure30}
\end{figure}
\end{enumerate}
Now to the complex constructible numbers. Observe that $z\in \mathcal C$ precisely when $\text{Re}\, z$ and $\text{Im}\, z$ are in $\mathcal C\cap\R$. For, if $z\in \mathcal C$ then dropping perpendiculars to the real and imaginary axes gives the numbers $\text{Re}\, z$ and $\text{Im}\, z\cdot\text{i}$, the second of which can be transferred to the real axis by drawing the circle centered at $0$ passing through $\text{Im}\, z \cdot\text{i}$. On the other hand, if we have $\text{Re}\, z$ and $\text{Im}\, z$ on the real axis, then we have $\text{Im}\, z \cdot\text{i}$ too, and constructing a line through $\text{Re}\, z$ parallel to the imaginary axis and a line through $\text{Im}\, z \cdot\text{i}$ parallel to the real axis gives $z$. Suppose then that $z,w\in \mathcal C$ are constructible complex numbers: we show that $z+w,-z,zw$ and $1/z$ are also constructible. We have:
\begin{equation*}
\begin{split}
z+w&=(\text{Re}\, z+\text{Re}\, w)+(\text{Im}\, z+ \text{Im}\, w)\text{i}\\
-z&=-\text{Re}\, z-\text{Im}\, z\cdot\text{i}\\
zw&=(\text{Re}\, z\,\text{Re}\, w-\text{Im}\, z\, \text{Im}\, w)+(\text{Re}\, z\,\text{Im}\, w+\text{Im}\, z\,\text{Re}\, w)\text{i}\\
\frac{1}{z}&=\frac{\text{Re}\, z}{(\text{Re}\, z)^2+(\text{Im}\, z)^2}-\frac{\text{Im}\, z}{(\text{Re}\, z)^2+(\text{Im}\, z)^2}\,\text{i},
\end{split}
\end{equation*}
so that for example, $z,w\in \mathcal C\Rightarrow \text{Re}\, z, \text{Im}\, z, \text{Re}\, w, \text{Im}\, w\in \mathcal C\cap\R\Rightarrow \text{Re}\, z+\text{Re}\, w, \text{Im}\, z+\text{Im}\, w\in \mathcal C\cap\R\Rightarrow \text{Re}\, (z+w), \text{Im}\, (z+w)\in \mathcal C\cap\R\Rightarrow z+w\in \mathcal C$, and the others are similar.
\qed
\end{proof}
\begin{corollary}
Any rational number is constructible.
\end{corollary}
\begin{proofs}
\begin{description}
\item[\emph{Brute force}:] use the example of the construction of $3$ to show that $\ams{Z}}\def\E{\ams{E}\subset\mathcal C$; that $\mathcal C\cap\R$ is closed under $\times$ and $\div$ then gives $\Q\subset\mathcal C$.
\item[\emph{Slightly slicker}:] by Exercise \ref{ex_lect1.0}, any subfield of $\ams{C}}\def\Q{\ams{Q}$ contains $\Q$.
\end{description}
\qed
\end{proofs}
\paragraph{\hspace*{-0.3cm}} Not only can we perform the four basic arithmetic operations with constructible numbers, but we can construct square roots too:
\begin{figure}
\caption{Constructing $\kern-2pt\sqrt{a}$ for $a\in\R$.}
\label{fig:figure7}
\end{figure}
\begin{theorem}
If $z\in \mathcal C$ then $\kern-2pt\sqrt{z}\in \mathcal C$.
\end{theorem}
\begin{proof}
We can construct the square root of any positive real number $a\in\R$ as in Figure \ref{fig:figure7}. As an Exercise, show that in the red picture in Figure \ref{fig:figure7}, the length $x=\kern-2pt\sqrt{a}$. Next, the square root of any complex number can be constructed as in Figure \ref{fig:figure40}, where we have used the construction of real square roots in the second step.
\qed
\end{proof}
\begin{figure}
\caption{Constructing $\kern-2pt\sqrt{z}$ for $z\in\ams{C}$.}
\label{fig:figure40}
\end{figure}
\subsection{Constructing angles and polygons}
\paragraph{\hspace*{-0.3cm}} We say that an angle can be constructed when we can construct two lines intersecting in that angle.
\begin{vexercise}\label{ex7.20}
\hspace{1em}
\begin{enumerate}
\item Show that we can always assume that one of the lines giving an angle is the positive real axis.
\item Show that an {\em angle\/} $\theta$ can be constructed if and only if the {\em number\/} $\cos\theta$ can be constructed. Do the same for $\sin\theta$ and $\tan\theta$.
\end{enumerate}
\end{vexercise}
\begin{vexercise}\label{ex7.30}
Show that if $\varphi,\theta$ are constructible angles then so are $\varphi+\theta$ and $\varphi-\theta$.
\end{vexercise}
\paragraph{\hspace*{-0.3cm}} A {\em regular $n$-sided polygon\/} or {\em regular $n$-gon\/} is a polygon in $\ams{C}}\def\Q{\ams{Q}$ with $n$ sides of equal length and $n$ interior angles of equal size.
\begin{vexercise}\label{ex7.40}
Show that a regular $n$-gon can be constructed centered at $0\in\ams{C}}\def\Q{\ams{Q}$ if and only if the angle $\frac{2\pi}{n}$ can be constructed. Show that a regular $n$-gon can be constructed centered at $0\in\ams{C}}\def\Q{\ams{Q}$ if and only if the complex number
$$
z=\cos\frac{2\pi}{n}+\text{i}\sin\frac{2\pi}{n},
$$
can be constructed.
\end{vexercise}
\begin{vexercise}\label{ex7.50}
Show that if an $n$-gon and an $m$-gon can be constructed for $n$ and $m$ relatively prime, then so can an $mn$-gon (hint: use the $\ams{Z}}\def\E{\ams{E}$-version of Theorem \ref{gcd}).
\end{vexercise}
\paragraph{\hspace*{-0.3cm}} For what $n$ can you construct a regular $n$-gon? It makes sense to consider first the $p$-gons for $p$ a prime. The complete answer even to this question will not be revealed until Section \ref{galois.corresapps}. It turns out that the $p$-gons that can be constructed are {\em extremely rare\/}. Nevertheless, the first two (odd) primes do work:
\begin{vexercise}\label{ex7.60}
Show that a regular $3$-gon, ie: an equilateral triangle, can be constructed with any side length. Using Exercises \ref{ex_lect1.1a} and \ref{ex7.40}, show that a regular $5$-gon can also be constructed.
\end{vexercise}
\paragraph{\hspace*{-0.3cm}} Here is a proof that a regular $17$-gon is constructible. Gauss proved the remarkable identity of Figure \ref{fig:gauss}, which is still found in trigonometric tables. Thus the number $\cos\pi/17$ can be constructed, as this expression involves only integers, the four field operations and square roots, all of which are operations we can perform with a ruler and compass. Hence, by Exercise \ref{ex7.20}(2) the angle $\pi/17$ can be constructed and so adding it to itself (Exercise \ref{ex7.30}) gives the angle $2\pi/17$. Now apply Exercise \ref{ex7.40} to get the $17$-gon.
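\begin{aside}
The hint in Exercise \ref{ex7.50} can be made concrete: if $\gcd(m,n)=1$ then the extended Euclidean algorithm produces integers $a,b$ with $am+bn=1$, whence
$$
a\cdot\frac{2\pi}{n}+b\cdot\frac{2\pi}{m}=\frac{2\pi(am+bn)}{mn}=\frac{2\pi}{mn},
$$
so the $mn$-gon angle is a sum and difference of constructible angles (Exercise \ref{ex7.30}). A quick numerical sketch in Python (an illustration only, assuming nothing beyond the standard library):
\begin{verbatim}
import math

def ext_gcd(m, n):
    """Return (g, a, b) with a*m + b*n = g = gcd(m, n)."""
    if n == 0:
        return (m, 1, 0)
    g, a, b = ext_gcd(n, m % n)
    return (g, b, a - (m // n) * b)

m, n = 3, 5
g, a, b = ext_gcd(m, n)                 # here 2*3 + (-1)*5 = 1
angle = a * (2 * math.pi / n) + b * (2 * math.pi / m)
print(g, a, b, math.isclose(angle, 2 * math.pi / (m * n)))   # 1 2 -1 True
\end{verbatim}
For example, twice the angle of the regular pentagon minus the angle of the equilateral triangle is the angle of the regular $15$-gon.
\end{aside}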
\begin{figure}
\caption{A proof that the $17$-gon is constructible.}
\label{fig:gauss}
\end{figure}
\subsection*{Further Exercises for Section \thesection}
\begin{vexercise}\label{ex7.70}
Using the fact that the constructible numbers include $\Q$, show that any given line segment can be trisected.
\end{vexercise}
\begin{vexercise}\label{ex7.80}
Show that if you can construct a regular $n$-sided polygon, then you can also construct a regular $2^kn$-sided polygon for any $k\geq1$.
\end{vexercise}
\begin{vexercise}\label{ex7.90}
Show that $\cos\theta$ is constructible if and only if $\sin\theta$ is.
\end{vexercise}
\begin{vexercise}\label{ex7.100}
If $a,b$ and $c$ are constructible numbers (ie: in $\mathcal C$), show that the roots of the quadratic equation $ax^2+bx+c=0$ are also constructible.
\end{vexercise}
\section{Vector Spaces I: Dimensions}
\label{lect7}
Having met rings and fields we introduce our third algebraic object: vector spaces.
\begin{definition}[vector space]
A vector space over a field $F$ is a set $V$, whose elements are called vectors, together with two operations: addition $u,v\mapsto u+v$ of vectors and scalar multiplication $\lambda,v\mapsto \lambda v$ of a vector by an element (or scalar) $\lambda$ of the field $F$, such that:
\begin{enumerate}
\item $(u+v)+w = u+(v+w)$, for all $u,v,w\in V$.
\item There exists a zero vector $0\in V$ such that $v+0=v=0+v$ for all $v\in V$,
\item Every $v\in V$ has a negative $-v$ such that $v+(-v)=0=-v+v$.
\item $u+v=v+u$, for all $u,v\in V$.
\item $\lambda(u+v) = \lambda u + \lambda v$, for all $u,v\in V$ and $\lambda\in F$.
\item $(\lambda+\mu)v = \lambda v+\mu v$, for all $\lambda,\mu\in F$ and $v\in V$.
\item $\lambda(\mu v) = (\lambda\mu)v$, for all $\lambda,\mu\in F$ and $v\in V$.
\item $1 v = v$ for all $v\in V$.
\end{enumerate}
\end{definition}
\begin{aside}
Alternatively, $V$ forms an Abelian group under $+$ (these are the first four axioms) together with a scalar multiplication that satisfies the last four axioms.
\end{aside}
\paragraph{\hspace*{-0.3cm}} A {\em homomorphism\/} of vector spaces is a map $\varphi: V_1\rightarrow V_2$ such that $\varphi(u+v)=\varphi(u)+\varphi(v)$ and $\varphi(\lambda v)=\lambda\varphi(v)$ for all $u,v\in V$ and $\lambda\in F$. (Homomorphisms of vector spaces are more commonly called linear maps.) A bijective homomorphism is an \emph{isomorphism\/}.
\paragraph{\hspace*{-0.3cm}} The set $\R^2$ of $2\times 1$ column vectors is the motivating example of a vector space over $\R$ under the normal addition and scalar multiplication of vectors. Alternatively, the complex numbers $\ams{C}}\def\Q{\ams{Q}$ form a vector space over $\R$, and these two spaces are isomorphic via the map:
$$
\varphi:\left[\begin{array}{c} a\\ b\\ \end{array}\right] \mapsto a+bi.
$$
\paragraph{\hspace*{-0.3cm}} \parshape=3 0pt\hsize 0pt.7\hsize 0pt.7\hsize The complex numbers are a vector space {\em over themselves\/}: addition of complex numbers gives an Abelian group and now we can scalar multiply a complex number by another one, using the usual multiplication of complex numbers.
\vadjust{ \smash{\lower 65pt \llap{ \begin{pspicture}(0,0)(4,2.5) \rput(0,.1){ \rput(2,1.25){\BoxedEPSF{cube.eps scaled 250}} \rput(2.5,-0.5){${\scriptstyle 000}$} \rput(0.5,0.4){${\scriptstyle 100}$} \rput(3.65,1){${\scriptstyle 001}$} \rput(2.5,1.7){${\scriptstyle 010}$} \rput(0.25,2.1){${\scriptstyle 110}$} \rput(3.85,2.5){${\scriptstyle 011}$} \rput(1.75,3.1){${\scriptstyle 111}$} } \end{pspicture} }}}\ignorespaces
\paragraph{\hspace*{-0.3cm}} \parshape=6 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt\hsize A vector space over a finite field: consider the set of $3$-tuples with coordinates from the field $\ams{F}}\def\K{\ams{K}_2$ (so each coordinate is either $0$ or $1$) and add two such coordinate-wise, using the addition from $\ams{F}}\def\K{\ams{K}_2$. Scalar multiply a tuple coordinate-wise using the multiplication from $\ams{F}}\def\K{\ams{K}_2$. As there are only two possibilities for each coordinate and three coordinates in total, we get a total of $2^3=8$ vectors in this space. They can be arranged around the vertices of a cube as shown, where $abc$ is the vector with the three coordinates $a,b,c\in\ams{F}}\def\K{\ams{K}_2$.
\paragraph{\hspace*{-0.3cm}} We saw in Section \ref{lect4} that the field $\Q(\kern-2pt\sqrt{2})$ has elements the $a+b\kern-2pt\sqrt{2}$ with $a,b\in\Q$. The identification,
$$
\begin{pspicture}(12,1) \rput(-1.5,-.5){ \rput(-1,0){ \rput(6.1,1){$\leftrightarrow$} \rput(5,1){$a+b\kern-2pt\sqrt{2}$} \rput(7,1){$\left[\begin{array}{c} a\\ b\\ \end{array}\right]$} \psline[linewidth=.1mm]{->}(8.5,1.2)(7.5,1.2) \psline[linewidth=.1mm]{->}(8.5,.8)(7.5,.8) } \rput(0,.05){ \rput(9.65,1.2){coordinate in ``$1$ direction''} \rput(9.8,.8){coordinate in ``$\kern-2pt\sqrt{2}$ direction''} } }
\end{pspicture}
$$
is an isomorphism with the vector space $\Q^2$ of $2\times 1$ $\Q$-column vectors with the addition $(a+b\kern-2pt\sqrt{2})+(c+d\kern-2pt\sqrt{2})=(a+c)+(b+d)\kern-2pt\sqrt{2}$ corresponding to,
$$
\left[\begin{array}{c} a\\ b\\ \end{array}\right] + \left[\begin{array}{c} c\\ d\\ \end{array}\right] = \left[\begin{array}{c} a+c\\ b+d\\ \end{array}\right],
$$
and scalar multiplication $c(a+b\kern-2pt\sqrt{2})=ac+bc\kern-2pt\sqrt{2}$ corresponding to:
$$
c\left[\begin{array}{c} a\\ b\\ \end{array}\right] = \left[\begin{array}{c} ac\\ bc\\ \end{array}\right].
$$
\paragraph{\hspace*{-0.3cm}} The polynomial $x^3-2$ is irreducible over $\Q$ so the quotient ring $\Q[x]/\langle x^3-2\rangle$ is a field with elements the $(a+bx+cx^2)+\langle x^3-2\rangle$ for $a,b,c\in\Q$. It is a $\Q$-vector space, isomorphic to $\Q^3$ via
$$
\begin{pspicture}(12,1.5) \rput(2.75,.75){$(a+bx+cx^2)+\langle x^3-2\rangle \leftrightarrow \left[\begin{array}{c} a\\ b\\ c\\ \end{array}\right]$ } \rput(-1.3,0.45){ \rput(-.8,-.5){ \psline[linewidth=.1mm]{->}(8.5,1.2)(7.5,1.2) \psline[linewidth=.1mm]{->}(8.5,.8)(7.5,.8) \psline[linewidth=.1mm]{->}(8.5,.4)(7.5,.4) } \rput(1,-.475){ \rput(9.7,1.2){coordinate in ``$1+\langle x^3-2\rangle$ direction''} \rput(9.8,.8){coordinate in ``$x+\langle x^3-2\rangle$ direction''} \rput(9.8,.4){coordinate in ``$x^2+\langle x^3-2\rangle$ direction''} }}
\end{pspicture}
$$
(Check for yourself that the addition and scalar multiplications match up).
\paragraph{\hspace*{-0.3cm}} The previous two examples are special cases of the following: if $F\subseteq E$ is an extension of fields then $E$ is a vector space over $F$. The ``vectors'' are the elements of $E$ and the ``scalars'' are the elements of $F$.
Addition of vectors is just the addition of elements in $E$, and to scalar multiply a $v\in E$ by a $\lambda\in F$, form the product $\lambda v$ using the multiplication of the field $E$. The first four axioms for a vector space hold because of the addition in the field $E$, and the second four because of the multiplication.
\begin{definition}[span and independence]
If $v_1,\ldots,v_n\in V$ are vectors in a vector space $V$, then a vector of the form
$$
\alpha_1v_1+\ldots +\alpha_nv_n,
$$
for $\alpha_1,\ldots,\alpha_n\in F$, is called a linear combination of the $v_1,\ldots,v_n$. The linear span of $\{v_j:j\in J\}$, where $J$ is not necessarily finite, is the set of all linear combinations of vectors from the set:
$$
\text{span} \{v_j:j\in J\} = \{\alpha_1v_{j_1}+\cdots+\alpha_kv_{j_k}: \alpha_i\in F\}.
$$
Say $\{v_j:j\in J\}$ {\em span\/} $V$ when $V=\text{span}\{v_j:j\in J\}$. A set of vectors $v_1,\ldots,v_n\in V$ is linearly dependent if and only if there exist scalars $\alpha_1,\ldots,\alpha_n$, not all zero, such that
$$
\alpha_1v_1+\ldots +\alpha_nv_n = 0,
$$
and linearly independent otherwise, ie: $\alpha_1v_1+\ldots +\alpha_nv_n = 0$ implies that the $\alpha_i$ are all $0$.
\end{definition}
\paragraph{\hspace*{-0.3cm}} In the examples above, the complex numbers $\ams{C}}\def\Q{\ams{Q}$ are spanned, as a vector space over $\R$, by $\{1,\text{i}\}$, and indeed by any two non-zero complex numbers that are not scalar multiples of each other. As a vector space over $\ams{C}}\def\Q{\ams{Q}$, the complex numbers are spanned by {\em one\/} element: any $\zeta\in\ams{C}}\def\Q{\ams{Q}$ can be written as $\zeta\times 1$ for example, so every element is a \emph{complex\/} scalar multiple of $1$. Indeed, $\ams{C}}\def\Q{\ams{Q}$ is spanned as a complex vector space by any single one of its non-zero elements.
\begin{definition}[basis]
A basis for $V$ is a set of vectors $\{v_j:j\in J\}$, with $J$ a not necessarily finite index set, that span $V$, and such that every finite set of $v_j$'s is linearly independent.
\end{definition}
It can be proved that there is a 1-1 correspondence between the elements of any two bases for a vector space $V$. When $V$ has a finite basis the \emph{dimension\/} of $V$ is defined to be the number of elements in a basis; otherwise $V$ is \emph{infinite dimensional}.
\paragraph{\hspace*{-0.3cm}} Thus $\ams{C}}\def\Q{\ams{Q}$ is $2$-dimensional as a vector space over $\R$ but $1$-dimensional as a vector space over $\ams{C}}\def\Q{\ams{Q}$. We will see later in this section that $\ams{C}}\def\Q{\ams{Q}$ is {\em infinite\/} dimensional as a vector space over $\Q$. With the other examples above, $\Q(\kern-2pt\sqrt{2})$ is $2$-dimensional over $\Q$ with basis $\{1,\kern-2pt\sqrt{2}\}$ and $\Q[x]/\langle x^3-2\rangle$ is $3$-dimensional over $\Q$ with basis the cosets
$$
1+\langle x^3-2\rangle,x+\langle x^3-2\rangle\text{ and }x^2+\langle x^3-2\rangle.
$$
In Exercise \ref{exam00_4} in Section \ref{galois.correspondence}, we will see that if $\alpha=\sqrt[4]{2}$, then $\Q(\alpha,\text{i})$ is a $2$-dimensional space over $\Q(\alpha)$ or $\Q(\alpha\text{i})$ or even $\Q((1+\text{i})\alpha)$; a $4$-dimensional space over $\Q(\text{i})$ or $\Q(\text{i}\alpha^2)$, and an $8$-dimensional space over $\Q$ (and these are almost, but not quite, all the possibilities; see the exercise for the full story).
\begin{definition}[degree of an extension]
Let $F\subseteq E$ be an extension of fields.
Consider $E$ as a vector space over $F$, and define the degree of the extension to be the dimension of this vector space, denoted $[E:F]$. Call $F\subseteq E$ a finite extension if the degree is finite.
\end{definition}
\paragraph{\hspace*{-0.3cm}} The extensions $\Q\subset\Q(\sqrt{2})$ and $\Q\subset\Q[x]/\langle x^3-2\rangle$ have degrees $2$ and $3$.
\paragraph{\hspace*{-0.3cm}} It is no coincidence that the degree of extensions of the form $F\subseteq F[x]/\langle f\rangle$ turns out to be the same as the degree of the polynomial $f$:
\begin{theorem}\label{degree_extension}
Let $f$ be an irreducible polynomial in $F[x]$ of degree $d$. Then the extension,
$$
F\subseteq F[x]/\langle f\rangle,
$$
has degree $d$.
\end{theorem}
Hence the name degree!
\begin{proof}
Replace, as usual, the field $F$ by its copy in $F[x]/\langle f\rangle$, so that $\lambda\in F$ becomes $\lambda+\langle f\rangle\in F[x]/\langle f\rangle$. Consider the set of cosets,
$$
B=\{1+\langle f\rangle,x+\langle f\rangle,x^2+\langle f\rangle,\ldots, x^{d-1}+\langle f\rangle\}.
$$
Then we claim that $B$ is a basis for $F[x]/\langle f\rangle$ over $F$, for which we have to show that it spans the vector space and is linearly independent. To see that it spans, consider a typical element, which has the form,
$$
g+\langle f\rangle=(qf+r)+\langle f\rangle=r+\langle f\rangle=(a_0+a_1x+\cdots +a_{d-1}x^{d-1})+\langle f\rangle,
$$
using the division algorithm and basic properties of cosets. This in turn gives,
$$
(a_0+a_1x+\cdots +a_{d-1}x^{d-1})+\langle f\rangle= (a_0+\langle f\rangle)(1+\langle f\rangle)+(a_1+\langle f\rangle)(x+\langle f\rangle)+\cdots + (a_{d-1}+\langle f\rangle)(x^{d-1}+\langle f\rangle),
$$
where the last is an $F$-linear combination of the elements of $B$. Thus this set spans the space. For linear independence, suppose we have an $F$-linear combination of the elements of $B$ giving zero, ie:
$$
(b_0+\langle f\rangle)(1+\langle f\rangle)+(b_1+\langle f\rangle)(x+\langle f\rangle)+\cdots + (b_{d-1}+\langle f\rangle)(x^{d-1}+\langle f\rangle)=\langle f\rangle,
$$
remembering that the zero of the field $F[x]/\langle f\rangle$ is the coset $0+\langle f\rangle=\langle f\rangle$. Multiplying and adding all the cosets on the left hand side gives,
$$
(b_0+b_1x+\cdots+b_{d-1}x^{d-1})+\langle f\rangle=\langle f\rangle,
$$
so that $b_0+b_1x+\cdots+b_{d-1}x^{d-1}\in\langle f\rangle$ (using another basic property of cosets). The elements of $\langle f\rangle$, being multiples of $f$, must have degree at least $d$, except for the zero polynomial. On the other hand $b_0+b_1x+\cdots+b_{d-1}x^{d-1}$ has degree $\leq d-1$. Thus it must be the zero polynomial, giving that all the $b_i$ are zero, hence all the $b_i+\langle f\rangle$ are $0$, and that the set $B$ is linearly independent over $F$ as claimed.
\qed
\end{proof}
\paragraph{\hspace*{-0.3cm}} What is the degree of the extension $\Q\subset\Q(\pi)$? If it were finite, say $[\Q(\pi):\Q]=d$, then any collection of more than $d$ elements would be linearly dependent. In particular, the $d+1$ elements,
$$
1,\pi,\pi^2,\ldots,\pi^d,
$$
would be dependent, so that $a_0+a_1\pi+a_2\pi^2+\ldots+a_d\pi^d=0$ for some $a_0,a_1,\ldots,a_d\in\Q$, not all zero, hence $\pi$ would be a root of the polynomial $a_0+a_1x+a_2x^2+\ldots+a_dx^d$. But this contradicts the fact that $\pi$ is transcendental over $\Q$. Thus, the degree of the extension is infinite.
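\begin{aside}
For comparison, when a number \emph{is} algebraic the predicted dependence among its powers is easy to exhibit. Take $\alpha=\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3}$: squaring twice by hand gives $\alpha^2=5+2\kern-2pt\sqrt{6}$ and $\alpha^4=49+20\kern-2pt\sqrt{6}$, so that $\alpha^4-10\alpha^2+1=0$, a dependence among $1,\alpha,\alpha^2,\alpha^3,\alpha^4$. A one-line floating point sanity check in Python (an illustration only):
\begin{verbatim}
import math
alpha = math.sqrt(2) + math.sqrt(3)
print(alpha**4 - 10 * alpha**2 + 1)   # of order 1e-13, ie: zero up to rounding
\end{verbatim}
No such relation, of any degree, exists for $\pi$.
\end{aside}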
\paragraph{\hspace*{-0.3cm}} In fact this is always true:
\begin{proposition}\label{finite.givesalgebraic}
Let $F\subseteq E$ and $\alpha\in E$. If the degree of the extension $F\subseteq F(\alpha)$ is finite, then $\alpha$ is algebraic over $F$.
\end{proposition}
\begin{proof}
The proof is very similar to the example above. Suppose that the extension $F\subseteq F(\alpha)$ has degree $n$, so that any collection of $n+1$ elements of $F(\alpha)$ must be linearly dependent. In particular the $n+1$ elements
$$
1,\alpha,\alpha^2,\ldots,\alpha^n
$$
are dependent over $F$, so that there are $a_0,a_1,\ldots, a_n$ in $F$, not all zero, with
$$
a_0+a_1\alpha+\cdots+a_n\alpha^n=0,
$$
and hence $\alpha$ is algebraic over $F$ as claimed.
\qed
\end{proof}
Thus, any field $E$ that contains transcendentals over $F$ will be infinite dimensional as a vector space over $F$. In particular, $\R$ and $\ams{C}}\def\Q{\ams{Q}$ are infinite dimensional over $\Q$.
\paragraph{\hspace*{-0.3cm}} The converse to Proposition \ref{finite.givesalgebraic} is partly true, as we summarise now in an important result:
\begin{theoremD}\label{thmD}
Let $F\subseteq E$ and $\alpha\in E$ be algebraic over $F$. Then,
\begin{enumerate}
\item There is a unique polynomial $f\in F[x]$ that is monic, irreducible over $F$, and has $\alpha$ as a root.
\item The field $F(\alpha)$ is isomorphic to the quotient $F[x]/\langle f\rangle$.
\item If $\deg f=d$, then the extension $F\subseteq F(\alpha)$ has degree $d$ with basis $\{1,\alpha,\alpha^2,\ldots,\alpha^{d-1}\}$, and so,
$$
F(\alpha)=\{a_0+a_1\alpha+a_2\alpha^2+\cdots+a_{d-1}\alpha^{d-1}\,|\,a_0,\ldots,a_{d-1}\in F\}.
$$
\end{enumerate}
\end{theoremD}
\begin{proof}
Hopefully most of the proof will be recognisable from the specific examples we have discussed already. As $\alpha$ is algebraic over $F$ there is at least one $F$-polynomial having $\alpha$ as a root. Choose $f'$ to be a non-zero one having smallest degree. This polynomial must then be irreducible over $F$, for if not, we have $f'=gh$ with $\deg(g),\deg(h)<\deg(f')$, and $\alpha$ must be a root of one of $g$ or $h$, contradicting the original choice of $f'$. Divide through by the leading coefficient of $f'$, to get $f$, a monic, irreducible (by Exercise \ref{ex3.1}) $F$-polynomial, having $\alpha$ as a root. If $f_1,f_2$ are monic polynomials of this smallest degree having $\alpha$ as a root, then $f_1-f_2$ has degree strictly less than either (the leading terms cancel) and still has $\alpha$ as a root, so the only possibility is that $f_1-f_2$ is zero, hence $f$ is unique. Consider the evaluation homomorphism $\varepsilon_\alpha:F[x]\rightarrow E$ defined as usual by $\varepsilon_\alpha(g)=g(\alpha)$. To show that the kernel of this homomorphism is the ideal $\langle f\rangle$ is completely analogous to the example at the beginning of Section \ref{lect5}: clearly $\langle f\rangle$ is contained in the kernel, as any multiple of $f$ must evaluate to zero when $\alpha$ is substituted into it. On the other hand, if $h$ is in the kernel of $\varepsilon_\alpha$, then by the division algorithm,
$$
h=qf+r,
$$
with $\deg(r)<\deg(f)$. Taking the $\varepsilon_\alpha$ image of both sides gives $0=\varepsilon_\alpha(h)=\varepsilon_\alpha(qf)+\varepsilon_\alpha(r)=\varepsilon_\alpha(r)$, so that $r$ has $\alpha$ as a root. As $f$ is minimal with this property, we must have that $r=0$, so that $h=qf$, ie: $h$ is in the ideal $\langle f\rangle$, and so the kernel is contained in this ideal. Thus, $\text{ker}\,\varepsilon_\alpha=\langle f\rangle$.
In particular we have an isomorphism $\widehat{\varepsilon_\alpha}:F[x]/\langle f\rangle\rightarrow \text{Im}\,\varepsilon_\alpha\subset E$, given by,
$$
\widehat{\varepsilon_\alpha}(g+\langle f \rangle)=\varepsilon_{\alpha}(g)=g(\alpha),
$$
with $F[x]/\langle f\rangle$ a field as $f$ is irreducible over $F$. Thus, $\text{Im}\,\varepsilon_\alpha$ is a subfield of $E$. Clearly, both the element $\alpha$ ($\varepsilon_\alpha(x)=\alpha$) and the field $F$ ($\varepsilon_\alpha(c)=c$) are contained in $\text{Im}\,\varepsilon_\alpha$, hence $F(\alpha)$ is too as $\text{Im}\,\varepsilon_\alpha$ is a subfield of $E$, and $F(\alpha)$ is the smallest one enjoying these two properties. Conversely, if $g=\sum a_i x^i\in F[x]$ then $\varepsilon_\alpha(g)=\sum a_i\alpha^i$, which is an element of $F(\alpha)$ as fields are closed under sums and products. Hence $\text{Im}\,\varepsilon_\alpha\subseteq F(\alpha)$ and so these two are the same. Thus $\widehat{\varepsilon_\alpha}$ is an isomorphism between $F[x]/\langle f\rangle$ and $F(\alpha)$. The final part follows immediately from Theorem \ref{degree_extension}, where we showed that the set of cosets
$$
\{1+\langle f\rangle,x+\langle f\rangle,x^2+\langle f\rangle,\ldots, x^{d-1}+\langle f\rangle\},
$$
formed a basis for $F[x]/\langle f\rangle$ over $F$. Their images under $\widehat{\varepsilon_\alpha}$, namely $\{1,\alpha,\alpha^2,\ldots,\alpha^{d-1}\}$, must then form a basis for $F(\alpha)$ over $F$.\qed
\end{proof}
The proof of Theorem D shows that the polynomial $f$ has the smallest degree of any non-zero polynomial in $F[x]$ having $\alpha$ as a root.
\begin{definition}[minimum polynomial]
The polynomial $f$ of Theorem D is called the minimum polynomial of $\alpha$ over $F$.
\end{definition}
\paragraph{\hspace*{-0.3cm}} An important property of the minimum polynomial is that it divides {\em any\/} other $F$-polynomial that has $\alpha$ as a root: for suppose that $g$ is such an $F$-polynomial. By unique factorisation in $F[x]$, we can decompose $g$ as
$$
g=\lambda f_1f_2\ldots f_k,
$$
where the $f_i$ are monic and irreducible over $F$. Being a root of $g$, the element $\alpha$ must be a root of one of the $f_i$. By uniqueness, this $f_i$ must be the minimum polynomial of $\alpha$ over $F$, which therefore divides $g$.
\paragraph{\hspace*{-0.3cm}} The last part of Theorem D tells us that to find the degree of a simple extension $F\subseteq F(\alpha)$, you find the degree of the minimum polynomial over $F$ of $\alpha$. How do you find this polynomial? It's simple: guess! A sensible first guess is a monic polynomial with $F$-coefficients that has $\alpha$ as a root. If your guess is also irreducible, then you have guessed right (uniqueness). The only thing that can go wrong is if your guess is not irreducible. Your next guess should then be a factor of your first guess. In this way, the search for minimum polynomials is ``no harder'' than determining irreducibility.
\paragraph{\hspace*{-0.3cm}} As an example consider the minimum polynomial over $\Q$ of the $p$-th root of $1$,
$$
\cos\frac{2\pi}{p}+\text{i}\sin\frac{2\pi}{p},
$$
for $p$ a prime. Your first guess is $x^p-1$, which satisfies all the criteria bar irreducibility as $x-1$ is a factor. Factorising gives:
$$
x^p-1=(x-1)\Phi_p(x),
$$
for $\Phi_p$ the $p$-th cyclotomic polynomial, and this was shown to be irreducible over $\Q$ in Exercise \ref{ex_lect3.2}.
\paragraph{\hspace*{-0.3cm}} How does one find the degree of extensions $F\subseteq F(\alpha_1,\ldots,\alpha_k)$ that are not necessarily simple?
Such extensions are a sequence of simple extensions. If we can find the degrees of each of these simple extensions, all we need is a way to patch the answers together: \begin{towerlaw} Let $F\subseteq E\subseteq L$ be a sequence or ``tower'' of extensions. If both of the intermediate extensions $F\subseteq E$ and $E\subseteq L$ are of finite degree, then $F\subseteq L$ is too, with $$ [L:F]=[L:E][E:F]. $$ \end{towerlaw} \paragraph{\hspace*{-0.3cm}} Before the proof we consider the example $\Q\subset \Q(\sqrt[3]{2},\text{i})$, which is a sequence of two simple extensions: $$ \Q\subset \Q(\sqrt[3]{2})\subset \Q(\sqrt[3]{2},\text{i}). $$ We can use Theorem D to find the degrees of each individual simple extension. Firstly, the minimum polynomial over $\Q$ of $\sqrt[3]{2}$ is $x^3-2$, for this polynomial is monic in $\Q[x]$ with $\sqrt[3]{2}$ as a root and irreducible over $\Q$ by Eisenstein (using $p=2$). Thus the extension $\Q\subset\Q(\sqrt[3]{2})$ has degree $\deg(x^3-2)=3$ and $\{1,\sqrt[3]{2},(\sqrt[3]{2})^2\}$ is a basis for $\Q(\sqrt[3]{2})$ over $\Q$. Now let $\ams{F}=\Q(\sqrt[3]{2})$ so that the second extension is $\ams{F}\subset \ams{F}(\text{i})$, where the minimum polynomial of $\text{i}$ over $\ams{F}$ is $x^2+1$: it is monic in $\ams{F}[x]$ with $\text{i}$ as a root, and irreducible over $\ams{F}$ as its two roots $\pm\text{i}$ are not in $\ams{F}$ (as $\ams{F}\subset \R$). Thus Theorem D again gives that $\ams{F}\subset \ams{F}(\text{i})$ has degree $2$ with $\{1,\text{i}\}$ a basis for $\ams{F}(\text{i})$ over $\ams{F}$. Now consider the elements, $$ \{1,\sqrt[3]{2},(\sqrt[3]{2})^2,\text{i},\sqrt[3]{2}\text{i},(\sqrt[3]{2})^2\text{i}\}, $$ obtained by multiplying the two bases together. The claim is that they form a basis for $\Q(\sqrt[3]{2},\text{i})=\ams{F}(\text{i})$ over $\Q$: we need to show that the $\Q$-span of these six gives every element of $\Q(\sqrt[3]{2},\text{i})$ and that they are linearly independent over $\Q$. For the first, let $x$ be an arbitrary element of $\Q(\sqrt[3]{2},\text{i})=\ams{F}(\text{i})$. As $\{1,\text{i}\}$ is a basis for $\ams{F}(\text{i})$ over $\ams{F}$, we can express $x$ as an $\ams{F}$-linear combination, $$ x=a+b\text{i},\qquad a,b\in\ams{F}. $$ As $\{1,\sqrt[3]{2},(\sqrt[3]{2})^2\}$ is a basis for $\ams{F}$ over $\Q$, both $a$ and $b$ can be expressed as $\Q$-linear combinations, $$ a=a_0+a_1\sqrt[3]{2}+a_2(\sqrt[3]{2})^2,\qquad b=b_0+b_1\sqrt[3]{2}+b_2(\sqrt[3]{2})^2, $$ with the $a_i,b_i\in\Q$. This gives, $$ x=a_0+a_1\sqrt[3]{2}+a_2(\sqrt[3]{2})^2+b_0\text{i}+b_1\sqrt[3]{2}\text{i}+b_2(\sqrt[3]{2})^2\text{i}, $$ a $\Q$-linear combination for $x$ as required. Suppose now: $$ a_0+a_1\sqrt[3]{2}+a_2(\sqrt[3]{2})^2+ b_0\text{i}+b_1\sqrt[3]{2}\text{i}+b_2(\sqrt[3]{2})^2\text{i}=0, $$ with the $a_i,b_i\in\Q$. Gathering together real and imaginary parts: $$ (a_0+a_1\sqrt[3]{2}+a_2(\sqrt[3]{2})^2)+(b_0+b_1\sqrt[3]{2}+b_2(\sqrt[3]{2})^2)\text{i}= a+b\text{i}=0, $$ for $a$ and $b$ now elements of $\ams{F}$. As $\{1,\text{i}\}$ are independent over $\ams{F}$ the coefficients in this last expression are zero, ie: $a=b=0$.
This gives: $$ a_0+a_1\sqrt[3]{2}+a_2(\sqrt[3]{2})^2=0=b_0+b_1\sqrt[3]{2}+b_2(\sqrt[3]{2})^2, $$ and as $\{1,\sqrt[3]{2},(\sqrt[3]{2})^2\}$ are independent over $\Q$ the coefficients in these two expressions are also zero, ie: $a_0=a_1=a_2=b_0= b_1=b_2=0$. The six elements are thus independent and form a basis as claimed. \paragraph{\hspace*{-0.3cm}} The proof of the tower law is completely analogous to the example above: \begin{prooftower} Let $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$ be a basis for $E$ as an $F$-vector space and $\{\beta_1,\beta_2,\ldots,\beta_m\}$ a basis for $L$ as an $E$-vector space, both containing a finite number of elements as these extensions are finite by assumption. We show that the $mn=[L:E][E:F]$ elements $$ \{\alpha_i\,\beta_j\},\qquad 1\leq i\leq n,\quad 1\leq j\leq m, $$ form a basis for the $F$-vector space $L$, thus giving the result. Working ``backwards'' as in the example above, if $x$ is an element of $L$ we can express it as an $E$-linear combination of the $\{\beta_1,\ldots,\beta_m\}$: $$ x=\sum_{i=1}^m a_i\,\beta_i, $$ where, as they are elements of $E$, each of the $a_i$ can be expressed as $F$-linear combinations of the $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$: $$ a_i=\sum_{j=1}^n b_{ij}\alpha_j\Rightarrow x=\sum_{i=1}^m\sum_{j=1}^n b_{ij}\alpha_j\,\beta_i. $$ Thus the elements $\{\alpha_i\,\beta_j\}$ span the field $L$. If we have $$ \sum_{i=1}^m\sum_{j=1}^n b_{ij}\alpha_j\,\beta_i=0, $$ with the $b_{ij}\in F$, we can collect together all the $\beta_1$ terms, all the $\beta_2$ terms, and so on (much as we took real and imaginary parts in the example), to obtain an $E$-linear combination $$ \biggl(\sum_{j=1}^n b_{1j}\alpha_j\biggr)\,\beta_1+ \biggl(\sum_{j=1}^n b_{2j}\alpha_j\biggr)\,\beta_2+\cdots+ \biggl(\sum_{j=1}^n b_{mj}\alpha_j\biggr)\,\beta_m=0. $$ The independence of the $\beta_i$ over $E$ forces all the coefficients to be zero: $$ \biggl(\sum_{j=1}^n b_{1j}\alpha_j\biggr)=\cdots=\biggl(\sum_{j=1}^n b_{mj}\alpha_j\biggr)=0, $$ and the independence of the $\alpha_j$ over $F$ forces all the coefficients in each of these to be zero too, ie: $b_{ij}=0$ for all $i,j$. The $\{\alpha_i\,\beta_j\}$ are thus independent. \qed \end{prooftower} \paragraph{\hspace*{-0.3cm}} \label{vector:spacesI:minimum:polynomial} We find the minimum polynomial over $\Q$ of $\alpha+\omega$, where $\alpha=\sqrt[3]{2}$ and $\omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}$, a primitive complex cube root of $1$. Following the recipe in the proof of Theorem \ref{finite.simple} (or just brute force) gives $\Q(\alpha,\omega)=\Q(\alpha+\omega)$ with $[\Q(\alpha+\omega):\Q]=[\Q(\alpha,\omega):\Q]=6$ by the Tower law. So we are after a degree $6$ polynomial. Indeed, it suffices to find a monic degree $6$ polynomial $g$ over $\Q$ having $\alpha+\omega$ as a root, since the minimum polynomial must then divide $g$, hence be $g$. Writing $\beta=\alpha+\omega$ we thus require $a,b,c,d,e,f\in\Q$ such that \begin{equation} \label{eq:1} \beta^6+a\beta^5+b\beta^4+c\beta^3+d\beta^2+e\beta+f=0. \end{equation} Now compute the powers of $\beta$ and write the answers in terms of the basis $\{1,\alpha,\alpha^2,\omega,\alpha\omega,\alpha^2\omega\}$ for $\Q(\alpha,\omega)$ over $\Q$ given by the tower law. For example, $$ \beta^3=\alpha^3+3\alpha^2\omega+3\alpha\omega^2+\omega^3=3\alpha^2\omega-3\alpha\omega-3\alpha+3, $$ and the others are similar using the facts $\alpha^3=2,\omega^3=1$ and $\omega^2=-\omega-1$.
Substituting the results into (\ref{eq:1}) and collecting terms gives a linear combination of the basis vectors $\{1,\alpha,\alpha^2,\omega,\alpha\omega,\alpha^2\omega\}$ equal to $0$. Independence means the coefficients must be zero, so we get a linear system of equations in the variables $a,\ldots,f$. Solving these gives $a=3,b=6,c=3,d=0,e=f=9$ and hence the minimum polynomial $$ x^6+3x^5+6x^4+3x^3+9x+9. $$ \subsection*{Further Exercises for Section \thesection} \begin{vexercise}\label{ex8.5} \hspace{1em}\begin{enumerate} \item Show that if $F\subseteq L$ are fields with $[L:F]=1$ then $L=F$. \item Let $F\subseteq L\subseteq E$ be fields with $[E:F]=[L:F]$. Show that $E=L$. \end{enumerate} \end{vexercise} \begin{vexercise}\label{ex8.6} Let $\ams{F}={\Q}(a)$, where $a^3=2$. Express $(1+a)^{-1}$ and $(a^4+1)(a^2+1)^{-1}$ in the form $ba^2+ca+d$, where $b,c,d$ are in ${\Q}$. \end{vexercise} \begin{vexercise}\label{ex8.7} Let $\alpha = \root{3}\of{5}$. Express the following elements of ${\Q}(\alpha)$ as polynomials of degree at most 2 in $\alpha$ (with coefficients in ${\Q}$): $$ 1. \,\, 1/\alpha \qquad 2. \,\, \alpha^5 - \alpha^6 \qquad 3. \,\, \alpha/(\alpha^2+1) \qquad $$ \end{vexercise} \begin{vexercise}\label{ex8.8} Find the minimum polynomial over ${\Q}$ of $\alpha = \kern-2pt\sqrt{2}+ \kern-2pt\sqrt{-2}$. Show that the following are elements of the field $\Q(\alpha)$ and express them as polynomials in $\alpha$ (with coefficients in ${\Q}$) of degree at most 3: $$ 1. \,\, \kern-2pt\sqrt{2}\qquad 2. \,\, \kern-2pt\sqrt{-2}\qquad 3. \,\, i\qquad 4. \,\, \alpha^5 + 4\alpha + 3\qquad 5. \,\, 1/\alpha\qquad 6. \,\, (2\alpha + 3)/(\alpha^2 + 2\alpha + 2)\qquad $$ \end{vexercise} \begin{vexercise}\label{ex8.9} Find the minimum polynomials over ${\Q}$ of the following numbers: $$ 1. \,\, 1+\text{i}\qquad 2. \,\, \root{3}\of{7}\qquad 3. \,\, \root{4}\of{5}\qquad 4. \,\, \kern-2pt\sqrt{2}+\text{i}\qquad 5. \,\, \kern-2pt\sqrt{2} + \root{3}\of {3}\qquad $$ \end{vexercise} \begin{vexercise}\label{ex8.10} Find the minimum polynomial over $\Q$ of the following: $$ 1. \,\, \kern-2pt\sqrt{7}\qquad 2. \,\, (\kern-2pt\sqrt{11}+3)/2\qquad 3. \,\, (\text{i}\kern-2pt\sqrt{3}-1)/2\qquad $$ \end{vexercise} \begin{vexercise}\label{ex8.11} For each of the following fields $L$ and $F$, find $[L:F]$ and compute a basis for $L$ over $F$. \begin{enumerate} \item $L = {\Q}(\kern-2pt\sqrt{2},\root 3\of{2})$, $F = {\Q}$; \item $L = {\Q}(\root 4\of{2},\text{i})$, $F = {\Q}(\text{i})$; \item $L = {\Q}(\xi)$, $F = {\Q}$, where $\xi$ is a primitive complex 7th root of unity; \item $L = {\Q}(\text{i},\kern-2pt\sqrt{3},\omega)$, $F = {\Q}$, where $\omega$ is a primitive complex cube root of unity. \end{enumerate} \end{vexercise} \begin{vexercise}\label{ex8.12} Let $a=e^{\pi i/4}$. Find $[F(a):F]$ when $F={\R}$ and when $F={\Q}$. \end{vexercise} \section{Fields III: Splitting Fields and Finite Fields} \label{fields3} \subsection{Splitting Fields} \paragraph{\hspace*{-0.3cm}} In Section \ref{lect1} we encountered fields containing ``just enough'' numbers to solve some polynomial equation. We now make this more precise. Let $f$ be a polynomial with $F$-coefficients. We say that $f$ {\em splits\/} in an extension $F\subseteq E$ when we can factorise $$ f=a\prod_{i=1}^{\deg f} (x-\alpha_i), $$ (for $a\in F$ the leading coefficient of $f$) in the polynomial ring $E[x]$. Thus $f$ splits in $E$ precisely when $E$ contains all the roots $\{\alpha_1,\alpha_2,\ldots,$ $\alpha_{\deg f}\}$ of $f$.
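\paragraph{\hspace*{-0.3cm}} As a quick illustration of the definition, the polynomial $x^2-2\in\Q[x]$ splits in $\Q(\kern-2pt\sqrt{2})$, since in $\Q(\kern-2pt\sqrt{2})[x]$ we have the factorisation $$ x^2-2=(x-\kern-2pt\sqrt{2})(x+\kern-2pt\sqrt{2}), $$ and for the same reason it splits in the larger fields $\R$ and $\ams{C}$.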
There will in general be many such extension fields -- we are after the smallest one. By Kronecker's theorem (more accurately, Corollary \ref{kronecker:corollary}) there is an extension $F\subseteq K$ such that $K$ contains all the roots of $f$. If these roots are $\alpha_1,\alpha_2,\ldots,\alpha_d\in K$, then let $E=F(\alpha_1,\alpha_2,\ldots,\alpha_d)$. \begin{definition}[splitting field of a polynomial] The field extension $F\subseteq E$ constructed in this way is called a splitting field of $f$ over $F$. \end{definition} \begin{vexercise} Show that $E$ is a splitting field of the polynomial $f$ over $F$ if and only if $f$ splits in $E$ but not in any proper subfield of $E$ containing $F$ (so in this sense, $E$ is the {\em smallest\/} field containing $F$ and all the roots). \end{vexercise} \paragraph{\hspace*{-0.3cm}} The splitting field of $x^2+1$ over $\Q$ is $\Q(\text{i})$. The splitting field of $x^2+1$ over $\R$ is $\ams{C}$. \paragraph{\hspace*{-0.3cm}} Our example from Section \ref{lect1} again: the polynomial $x^3-2$ has roots $\alpha,\alpha\omega,\alpha\omega^2$ where $\alpha=\sqrt[3]{2}\in\R$ and $$ \omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}. $$ Thus a splitting field for $x^3-2$ over $\Q$ is given by $\Q(\alpha,\alpha\omega,\alpha\omega^2)$, which is the same thing as $\Q(\alpha,\omega)$. \begin{aside} In Section \ref{galois.groups} we will prove (Theorem G) that an isomorphism of a field to itself $\sigma:F\rightarrow F$ can always be extended to an isomorphism $\widehat{\sigma}:E_1\rightarrow E_2$ where $E_1$ is a splitting field of some polynomial $f$ over $F$ and $E_2$ is another splitting field of this polynomial. Thus, any two splitting fields of a polynomial over $F$ are isomorphic. \end{aside} \begin{vexercise} \begin{enumerate} \item Let $f=ax^2+bx+c\in\Q[x]$ and $\Delta=b^2-4ac$. Show that the splitting field of $f$ over $\Q$ is $\Q(\kern-2pt\sqrt{\Delta})$. \item Let $f=(x-\alpha)(x-\beta)\in\Q[x]$ and $D=(\alpha-\beta)^2$. Show that the splitting field of $f$ over $\Q$ is $\Q(\kern-2pt\sqrt{D})$. Show that the splitting field is $\Q(\alpha)=\Q(\beta)$. \end{enumerate} \end{vexercise} \subsection{Finite Fields} The construction of Section \ref{fields2} produced explicit examples of fields having order $p^d$ for $p$ a prime. We now show that any finite field must have order $p^d$ for some prime $p$ and $d>0$, and that there is exactly one such field up to isomorphism. \paragraph{\hspace*{-0.3cm}} Recall from Definition \ref{definition:prime_subfield} that the prime subfield of a field $F$ is the intersection of all the subfields of $F$. It is isomorphic to $\ams{F}_p$ for some $p$ or to $\Q$. In particular, the prime subfield of a finite field $F$ must be isomorphic to $\ams{F}_p$. Using the ideas from Section \ref{lect7}, we have an extension of fields $\ams{F}_p\subseteq F$ and hence the finite field $F$ forms a vector space over the field $\ams{F}_p$. This space must be finite dimensional (for $F$ to be finite), so each element of $F$ can be written uniquely as a linear combination, $$ a_1\alpha_1+a_2\alpha_2+\cdots+a_d\alpha_d, $$ of some basis vectors $\alpha_1,\alpha_2,\ldots,\alpha_d$ with the $a_i\in\ams{F}_p$. In particular there are $p$ choices for each $a_i$, and the choices are independent, giving $p^d$ elements of $F$ in total. Thus a finite field has $p^d$ elements for some prime $p$.
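\paragraph{\hspace*{-0.3cm}} As a quick sanity check of this count, consider the quotient $\ams{F}_3[x]/\langle x^2+1\rangle$. The polynomial $x^2+1$ has no roots in $\ams{F}_3$ (substitute $0,1$ and $2$ in turn), so being a quadratic it is irreducible over $\ams{F}_3$, and the quotient is a field. Writing $\alpha=x+\langle x^2+1\rangle$, its elements are the linear combinations $$ a_0+a_1\alpha,\qquad a_0,a_1\in\ams{F}_3, $$ with three independent choices for each coefficient, giving a field of order $3^2=9$.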
\paragraph{\hspace*{-0.3cm}} Here is an extended example that shows the converse, ie: constructs a field with $q=p^d$ elements for any prime $p$ and positive integer $d$. Consider the polynomial $x^q-x$ over the field $\ams{F}_p$ of $p$ elements. Let $L$ be an extension of the field $\ams{F}_p$ containing all the roots of the polynomial, as guaranteed us by the Corollary to Kronecker's Theorem. In Exercise \ref{ex3.20} we used the formal derivative to see whether a polynomial has distinct roots. We have $\partial(x^q-x) =qx^{q-1}-1=p^d x^{p^d-1}-1=-1$ as $p^d=0$ in $\ams{F}_p$. The constant polynomial $-1$ has no roots in $L$, and so the original polynomial $x^q-x$ has no repeated roots in $L$ by Exercise \ref{ex3.20}. In fact, the $p^d$ distinct roots of $x^q-x$ form a subfield of $L$, and this is the field of order $p^d$ that we seek. To show this, let $a,c$ be roots (so that $a^q=a$ and $c^q=c$). We show that $-a,a+c, ac$ and (when $a\not=0$) $a^{-1}$ are also roots. Firstly, $(-a)^q-(-a)=(-1)^qa^q+a$. If $p=2$, then $-1=1$ in $\ams{F}_2$, so that $(-1)^qa^q+a=a^q+a=a+a=2a=0$. Otherwise $p$ is odd so that $(-1)^q=-1$ and $(-1)^qa^q+a=-a^q+a=-a+a=0$. In either case, $-a$ is a root of the polynomial $x^q-x$. Next, $$ (a+c)^q=\sum_{i=0}^{q} \binom{q}{i} a^ic^{q-i}=a^q+c^q+p(\text{other terms}), $$ as $p$ divides the binomial coefficient when $0<i<q$ by Exercise \ref{ex3.3}. Thus $(a+c)^q=a^q+c^q$. (Compare this with Exercise \ref{ex9.1}.) Substituting $a+c$ into $x^q-x$ gives $$ (a+c)^q-(a+c)=a^q+c^q-a-c=0, $$ using $a^q=a$ and $c^q=c$. Thus $a+c$ is also a root of the polynomial. The product $(ac)^q-ac=a^qc^q-ac=ac-ac=0$. Finally, for $a\not=0$, $(a^{-1})^q-(a^{-1})=(a^q)^{-1}-(a^{-1})=a^{-1}-a^{-1}=0$. In both cases we have used $a^q=a$. Thus the $q=p^d$ roots of the polynomial form a subfield of $L$ as claimed, and we have constructed a field with this many elements. \paragraph{\hspace*{-0.3cm}} Looking back at this example, $L$ was an extension of $\ams{F}_p$ containing the roots of the polynomial $x^q-x$. In particular, if these roots are $\{a_1,\ldots,a_q\}$, then $\ams{F}_p(a_1,\ldots,a_q)$ is the splitting field over $\ams{F}_p$ of the polynomial. In the example we constructed the subfield $\ams{F}$ of $L$ consisting of the roots of $x^q-x$. As any subfield contains $\ams{F}_p$, we have $\ams{F}_p(a_1,\ldots,a_q) \subseteq \ams{F}$, whereas $\ams{F}=\{a_1,\ldots,a_q\}$ so that $\ams{F}\subseteq \ams{F}_p(a_1,\ldots,a_q)$. Hence the field we constructed in the example was the splitting field over $\ams{F}_p$ of the polynomial $x^q-x$. If $F$ is now an arbitrary field with $q$ elements, then it has prime subfield $\ams{F}_p$. Moreover, as the multiplicative group of $F$ has order $q-1$, by Lagrange's Theorem (see Section \ref{groups.stuff}), every non-zero element of $F$ satisfies $x^{q-1}=1$, hence every element of $F$ is a root of the $\ams{F}_p$-polynomial $x^q-x$. Thus, a finite field of order $q$ is the splitting field over $\ams{F}_p$ of the polynomial $x^q-x$, and by the uniqueness of such, any two fields of order $q$ are isomorphic. \paragraph{\hspace*{-0.3cm}} We finish with a fact about finite fields that will prove useful later on.
Remember that a field is, among other things, two groups spliced together in a compatible way: the elements form a group under addition (the {\em additive group\/}) and the non-zero elements form a group under multiplication (the {\em multiplicative group\/}). Looking at the complex numbers as an example, we can find a number of finite subgroups of the multiplicative group $\ams{C}^*$ of $\ams{C}$ by considering roots of $1$. For any $n$, the powers of the $n$-th root of $1$, $$ \omega=\cos\frac{2\pi}{n}+\text{i}\sin\frac{2\pi}{n}, $$ form a subgroup of $\ams{C}^*$ of order $n$. Moreover, this subgroup is cyclic. \begin{proposition} Let $F$ be any field and $G$ a finite subgroup of the multiplicative group $F^*$ of $F$. Then $G$ is a cyclic group. \end{proposition} In particular, if $F$ is a finite field, then the multiplicative group $F^*$ of $F$ is finite, hence cyclic. \begin{proof} By Exercise \ref{ex11.2a} (note that $G$ is Abelian, being a subgroup of the Abelian group $F^*$) there is an element $g\in G$ whose order $m$ is the least common multiple of all the orders of elements of $G$. Thus, any element $h\in G$ satisfies $h^m=1$. Hence every element of the group is a root of $x^m-1$, and since this polynomial has at most $m$ roots in $F$, the order of $G$ must be $\leq m$. As $g\in G$ has order $m$ its powers must exhaust the whole group, hence $G$ is cyclic. \qed \end{proof} \subsection{Algebraically closed fields} \paragraph{\hspace*{-0.3cm}} In the first part of this section we dealt with fields in which a particular polynomial of interest split into linear factors. There are fields like the complex numbers in which {\em any\/} polynomial splits. A field $F$ is said to be {\em algebraically closed\/} if and only if every (non-constant) polynomial over $F$ splits in $F$. \paragraph{\hspace*{-0.3cm}} If $F$ is algebraically closed and $\alpha$ is algebraic over $F$ then there is a polynomial with $F$-coefficients having $\alpha$ as a root. As $F$ is algebraically closed, this polynomial splits in $F$, so that in particular $\alpha$ is in $F$. This explains the terminology: an algebraically closed field is {\em closed\/} with respect to the taking of algebraic elements. Contrast this with fields like $\Q$, over which there are algebraic elements like $\kern-2pt\sqrt{2}$ that are not contained in $\Q$. \begin{vexercise} Show that the following are equivalent: \begin{enumerate} \item $F$ is algebraically closed; \item every non-constant polynomial over $F$ has a root in $F$; \item the irreducible polynomials over $F$ are precisely the linear ones; \item if $F\subseteq E$ is a finite extension then $E=F$. \end{enumerate} \end{vexercise} \begin{theorem} Every field $F$ is contained in an algebraically closed one. \end{theorem} \begin{sketchproof} The full proof is beyond the scope of these notes, although the technical difficulties are not algebraic (or even number theoretical) but set theoretical. If the field $F$ is finite or countably infinite, the proof sort of goes as follows: there are countably many polynomials over a countable field, so take the union of all the splitting fields of these polynomials. Note that for a finite field, this is an infinite union, so even an algebraically closed field containing a finite field must be infinite. \qed \end{sketchproof} \subsection{Simple extensions} \paragraph{\hspace*{-0.3cm}} We saw in Section \ref{lect4} that the extension $\Q\subset\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})$ is, despite appearances, simple.
The fact that the extension is finite turns out to be enough to see that it is simple: \begin{theorem}\label{finite.simple} Let $F\subset E$ be a finite extension such that the roots of any irreducible polynomial over $E$ are distinct. Then $E$ is a simple extension of $F$, ie: $E=F(\alpha)$ for some $\alpha\in E$. \end{theorem} The following proof is for the case that $F$ is infinite. \begin{proof} Let $\{\alpha_1,\alpha_2,\ldots,\alpha_k\}$ be a basis for $E$ over $F$ and consider the field $F_1=F(\alpha_3,\ldots,\alpha_k)$, so that $E=F_1(\alpha_1,\alpha_2)$. We will show that $F_1(\alpha_1,\alpha_2)$ is a simple extension of $F_1$, ie: that $F_1(\alpha_1,\alpha_2)=F_1(\theta)$ for some $\theta\in E$. Thus $E=F(\alpha_1,\alpha_2,\ldots,\alpha_k) =F(\theta,\alpha_3,\ldots,\alpha_k)$, and so by repeatedly applying this procedure, $E$ is a simple extension of $F$. Let $f_1,f_2$ be the minimum polynomials over $F_1$ of $\alpha_1$ and $\alpha_2$, and let $L$ be an algebraically closed field containing the field $F$. As the $\alpha_i$ are algebraic over $F$, we have that the fields $F_1$ and $E$ are contained in $L$ too. In particular the polynomials $f_1$ and $f_2$ split in $L$, $$ f_1=\prod_{i=1}^{\deg f_1} (x-\beta_i),\qquad f_2=\prod_{i=1}^{\deg f_2} (x-\delta_i), $$ with $\beta_1=\alpha_1$ and $\delta_1=\alpha_2$. As the roots of these polynomials are distinct we have that $\beta_i\not=\beta_j$ and $\delta_i\not=\delta_j$ for all $i\not= j$. For any $i$ and any $j\not= 1$, the equation $\beta_i+x\delta_j=\beta_1+x\delta_1$ has precisely one solution, namely $$ x=\frac{\beta_i-\beta_1}{\delta_1-\delta_j}. $$ As there are only finitely many such equations and infinitely many elements of $F_1$, there must be a $c\in F_1$ which is a solution to {\em none\/} of them, ie: such that, $$ \beta_i+c\delta_j\not=\beta_1+c\delta_1 $$ for any $i$ and any $j\not= 1$. Let $\theta=\beta_1+c\delta_1=\alpha_1+c\alpha_2$. We show that $F_1(\alpha_1,\alpha_2)=F_1(\theta)=F_1(\alpha_1+c\alpha_2)$. Clearly $\alpha_1+c\alpha_2\in F_1(\alpha_1,\alpha_2)$ so that $F_1(\alpha_1+c\alpha_2)\subseteq F_1(\alpha_1,\alpha_2)$. We will show that $\alpha_2\in F_1(\alpha_1+c\alpha_2)=F_1(\theta)$, for then $\alpha_1+c\alpha_2-c\alpha_2=\alpha_1\in F_1(\alpha_1+c\alpha_2)$, and so $F_1(\alpha_1,\alpha_2) \subseteq F_1(\alpha_1+c\alpha_2)$. We have $0=f_1(\alpha_1)=f_1(\theta-c\alpha_2)$, so if we let $r(t)\in F_1(\theta)[t]$ be given by $r(t)=f_1(\theta-ct)$, then we have that $\alpha_2$ is a root of both $r(t)$ and $f_2(x)$. If $\gamma$ is another common root of $r$ and $f_2$, then $\gamma$ is one of the $\delta_j$, and $\theta-c\gamma$ (being a root of $f_1$) is one of the $\beta_i$, so that, $$ \gamma=\delta_j\text{ and }\theta-c\gamma=\beta_i \Rightarrow \beta_i+c\delta_j=\beta_1+c\delta_1, $$ a contradiction. Thus $r$ and $f_2$ have just the single common root $\alpha_2$. Let $h$ be the minimum polynomial of $\alpha_2$ over $F_1(\theta)$, so that $h$ divides both $r$ and $f_2$ (recall that the minimum polynomial divides any other polynomial having $\alpha_2$ as a root). This means that $h$ must have degree one, for a higher degree would give more than one common root for $r$ and $f_2$. Thus $h=t+b$ for some $b\in F_1(\theta)$. As $h(\alpha_2)=0$ we thus get that $\alpha_2=-b$ and so $\alpha_2\in F_1(\theta)$ as required. \qed \end{proof} The theorem is true for finite extensions of finite fields -- even without the condition on the roots of the polynomials -- but we omit the proof here.
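\paragraph{\hspace*{-0.3cm}} To see the idea of the proof in a concrete case, take $\alpha_1=\kern-2pt\sqrt{2}$, $\alpha_2=\kern-2pt\sqrt{3}$ and $c=1$, so that $\theta=\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3}$. Expanding gives $$ \theta^3=11\kern-2pt\sqrt{2}+9\kern-2pt\sqrt{3}, $$ so that $\kern-2pt\sqrt{2}=\frac{1}{2}(\theta^3-9\theta)$ and $\kern-2pt\sqrt{3}=\frac{1}{2}(11\theta-\theta^3)$ both lie in $\Q(\theta)$. Thus $\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{3})=\Q(\kern-2pt\sqrt{2}+\kern-2pt\sqrt{3})$, recovering the simplicity of this extension noted at the end of the previous section.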
We saw in Exercise \ref{ex4.30} that irreducible polynomials over fields of characteristic $0$ have distinct roots. Thus any finite extension of a field of characteristic zero is simple. For example, if $\alpha_1,\ldots,\alpha_k$ are algebraic over $\Q$, then $\Q(\alpha_1,\ldots,\alpha_k)=\Q(\theta)$ for some $\theta\in\ams{C}$. \section{Ruler and Compass Constructions II} \label{ruler.compass2} \paragraph{\hspace*{-0.3cm}} We can completely describe the complex numbers that are constructible: \begin{theoremE}\label{thmE} The number $z\in\ams{C}$ is constructible if and only if there exists a sequence of field extensions, $$ \Q=K_0\subseteq K_1\subseteq K_2\subseteq\cdots\subseteq K_n, $$ such that $\Q(z)$ is a subfield of $K_n$, and each $K_i$ is an extension of $K_{i-1}$ of degree at most $2$. \end{theoremE} The idea, which can be a little obscured by the details, is that points on a line have a linear relationship with the two points determining the line, and points on a circle have a quadratic relationship with the two points determining the circle. \begin{proof} We prove the ``only if'' part first. Recall that $z$ is constructible if and only if there is a sequence of numbers $$ 0,1,\text{i}=z_1,z_2,\ldots,z_n=z, $$ with $z_i$ obtained from earlier numbers in the sequence in one of the three forms shown in Figure \ref{fig:constructions2:fig1}, where $p,q,r,s\in\{1,2,\ldots,i-1\}$. Let $K_i$ be the field $\Q(z_1,\ldots,z_{i})$, so we have a tower of extensions: $$ \Q\subseteq K_1\subseteq K_2\subseteq\cdots\subseteq K_n. $$ We will simultaneously show the following two things by induction: \begin{itemize} \item Each of the fields $K_i$ is closed under conjugation, ie: if $z\in K_i$ then $\bar{z}\in K_i$, and \item the degree of each extension $K_{i-1}\subseteq K_i$ is at most two. \end{itemize} The first of these is a technical convenience, the main point of which is illustrated by Exercise \ref{ex10.1} following the proof. \begin{figure}\label{fig:constructions2:fig1} \end{figure} Firstly, $K_1=\Q(\text{i})=\{a+b\text{i}\,:\,a,b\in\Q\}$ is certainly closed under conjugation and $[K_1:\Q]=[\Q(\text{i}):\Q]=2$ as the minimum polynomial of $\text{i}$ over $\Q$ is $x^2+1$. Now fix $i$ and suppose that $K_{i-1}$ is closed under conjugation with $K_i=K_{i-1}(z_i)$. \vspace*{1em} \noindent (i). Suppose that $z_i$ is obtained as in case (i) of Figure \ref{fig:constructions2:fig1}. The Cartesian equation for one of the lines is $y=m_1x+c_1$, passing through the points $z_p,z_q$, with $z_p,z_q\in K_{i-1}$. As $K_{i-1}$ is closed under conjugation, Exercise \ref{ex10.1} gives that the real and imaginary parts of $z_p$ and $z_q$ are in $K_{i-1}$. Thus, $$ \begin{pspicture}(0,0)(14,1) \rput(-.75,0){ \rput(0.75,0){ \rput(3,.2){$\text{Im}\, z_p=m_1\text{Re}\, z_p+c_1$} \rput(3,.8){$\text{Im}\, z_q=m_1\text{Re}\, z_q+c_1$} \rput(4.75,.5){$\left.\begin{array}{c} \vrule width 0 mm height 10 mm depth 0 pt\end{array}\right\}$} \rput(5.2,.5){$\Rightarrow$} } \rput(9.8,.5){$m_1={\displaystyle \frac{\text{Im}\, z_p-\text{Im}\, z_q}{\text{Re}\, z_p-\text{Re}\, z_q}} \text{ and } c_1=\text{Im}\, z_p-m_1\text{Re}\, z_p$} } \end{pspicture} $$ so that $m_1,c_1\in K_{i-1}$. (If the line is vertical with equation $x=c_1$ we get $c_1=\text{Re}\, z_p\in K_{i-1}$). If the equation of the other line is $y=m_2x+c_2$, we similarly get $m_2,c_2\in K_{i-1}$.
As $z_i$ lies on both these lines we have $$ \begin{pspicture}(0,0)(14,1) \rput(2.25,0){ \rput(-1,0){ \rput(3,.2){$\text{Im}\, z_i=m_1\text{Re}\, z_i+c_1$} \rput(3,.8){$\text{Im}\, z_i=m_2\text{Re}\, z_i+c_2$} \rput(4.75,.5){$\left.\begin{array}{c} \vrule width 0 mm height 10 mm depth 0 pt\end{array}\right\}$} } \rput(6.25,.5){with $m_1,m_2,c_1,c_2\in K_{i-1}$} } \end{pspicture} $$ hence $$ \begin{pspicture}(0,0)(14,1) \rput(-3,0){ \rput(10,.5){$\text{Re}\, z_i={\displaystyle \frac{c_2-c_1}{m_1-m_2}} \text{ and } \text{Im}\, z_i={\displaystyle \frac{m_1(c_2-c_1)}{m_1-m_2}}+c_1$} } \end{pspicture} $$ must lie in $K_{i-1}$ too. As $K_{i-1}$ is closed under conjugation we get $z_i\in K_{i-1}$ too, so in fact $K_i=K_{i-1}(z_i)=K_{i-1}$. Thus the degree of the extension $K_{i-1}\subseteq K_i$ (being $1$) is certainly $\leq 2$. Moreover, $K_i=K_{i-1}$ is closed under conjugation as $K_{i-1}$ is. \vspace*{1em} \noindent (ii). Suppose $z_i$ arises as in case (ii) with the line having equation $y=mx+c$ and the circle having equation $(x-\text{Re}\, z_s)^2+(y-\text{Im}\, z_s)^2=r^2$, where $r^2=(\text{Re}\, z_r-\text{Re}\, z_s)^2+(\text{Im}\, z_r-\text{Im}\, z_s)^2$. As before, $m,c\in K_{i-1}$; moreover, $z_r,z_s\in K_{i-1}$, hence $r^2\in K_{i-1}$. As $z_i$ lies on the line we have $\text{Im}\, z_i=m\text{Re}\, z_i+c$, and as it lies on the circle we have $$ (\text{Re}\, z_i-\text{Re}\, z_s)^2+(m\text{Re}\, z_i+c-\text{Im}\, z_s)^2=r^2. $$ Thus the polynomial $(x-\text{Re}\, z_s)^2+(mx+c-\text{Im}\, z_s)^2-r^2$ is a quadratic with $K_{i-1}$ coefficients and having $\text{Re}\, z_i$ as a root. The minimum polynomial of $\text{Re}\, z_i$ over $K_{i-1}$ thus has degree at most $2$, giving $$ [K_{i-1}(\text{Re}\, z_i):K_{i-1}]\leq 2 $$ by Theorem D. In fact, $\text{Im}\, z_i\in K_{i-1}(\text{Re}\, z_i)$ as well, since $\text{Im}\, z_i=m\text{Re}\, z_i+c$. Thus $z_i$ itself is in $K_{i-1}(\text{Re}\, z_i)$, as $\text{i}$ also is, and we have the sequence, $$ K_{i-1}\subseteq K_i=K_{i-1}(z_i)\subseteq K_{i-1}(\text{Re}\, z_i), $$ giving that the degree of the extension $K_{i-1}\subseteq K_i$ is also $\leq 2$ by the Tower Law. Finally, we show that the field $K_i$ is closed under conjugation, for which we can assume that $[K_i:K_{i-1}]=2$ -- it is trivially the case if the degree is one. Now, $K_i=K_{i-1}(z_i)=K_{i-1}(\text{Re}\, z_i)$, so in particular $z_i$ and $\text{Re}\, z_i$ are in $K_i$, hence $$ \text{Im}\, z_i=\frac{z_i-\text{Re}\, z_i}{\text{i}} $$ is too. The result is that $\text{Re}\, z_i-\text{Im}\, z_i\cdot\text{i}=\bar{z_i}$ is in $K_i$ too. A general element of $K_i$ has the form $a+bz_i$ with $a,b\in K_{i-1}$, whose conjugate $\bar{a}+\bar{b}\bar{z_i}$ is thus also in $K_i$. \vspace*{1em} \noindent (iii). If $z_i$ arises as in case (iii), then as it lies on both circles we have $$ (\text{Re}\, z_i-\text{Re}\, z_s)^2+(\text{Im}\, z_i-\text{Im}\, z_s)^2=r^2\text{ and } (\text{Re}\, z_i-\text{Re}\, z_p)^2+(\text{Im}\, z_i-\text{Im}\, z_p)^2=s^2, $$ with both $r^2$ and $s^2$ in $K_{i-1}$ for the same reason as in case (ii). Expanding both expressions gives terms of the form $\text{Re}\, z_i^2+\text{Im}\, z_i^2$, and equating leads to, \begin{equation*} \begin{split} \text{Im}\, z_i=\frac{\beta_1}{\alpha}\text{Re}\, z_i+\frac{\beta_2}{\alpha},&\text{ where } \alpha=2(\text{Im}\, z_s-\text{Im}\, z_p), \beta_1=2(\text{Re}\, z_p-\text{Re}\, z_s)\\ &\text{ and } \beta_2=\text{Re}\, z_s^2+\text{Im}\, z_s^2-(\text{Re}\, z_p^2+\text{Im}\, z_p^2)+s^2-r^2.
\end{split} \end{equation*} Combining this $K_{i-1}$-expression for $\text{Im}\, z_i$ with the first of the two circle equations above puts us into a similar situation to case (ii), from which the result follows in the same way. \vspace*{1em} Now for the ``if'' part, which is mercifully shorter. Suppose we have a tower of fields $\Q=K_0\subseteq K_1\subseteq K_2\subseteq\cdots\subseteq K_n,$ with $\Q(z)$ in $K_n$, hence $z\in K_n$. We can assume that $z\not\in K_{n-1}$ (otherwise stop one step earlier!) and so we have $$ K_{n-1}\subseteq K_{n-1}(z)\subseteq K_n $$ where $z\not\in K_{n-1}$ gives $[K_{n-1}(z):K_{n-1}]\geq 2$. On the other hand $[K_n:K_{n-1}]\leq 2$ so by the tower law we have $[K_{n-1}(z):K_{n-1}]=[K_n:K_{n-1}]$ and hence $K_n=K_{n-1}(z)$ with $[K_{n-1}(z):K_{n-1}]=2$. The minimum polynomial of $z$ over $K_{n-1}$ thus has the form $x^2+bx+c$, with $b,c\in K_{n-1}$, so that $z$ is one of, $$ \frac{-b\pm\sqrt{b^2-4c}}{2} $$ either of which can be constructed from $1,2,4,b,c\in K_{n-1}$, using the arithmetical and square root constructions of Section \ref{ruler.compass}. But in the same way $b,c$ can be constructed from elements of $K_{n-2}$, and so on, giving that $z$ is indeed constructible. \qed \end{proof} \begin{vexercise}\label{ex10.1} Let $K$ be a field such that $\Q(\text{i})\subseteq K\subseteq \ams{C},$ and suppose that $K$ is closed under conjugation. Show that $z\in K$ if and only if the real and imaginary parts of $z$ are in $K$. \end{vexercise} \paragraph{\hspace*{-0.3cm}} It is much easier to use the ``only if'' part of the Theorem, which shows when numbers {\em cannot\/} be constructed, so we restate this part as a separate, \begin{corollary} \label{corollary:construcitble_necessary} If $z\in\ams{C}$ is constructible then the degree of the extension $\Q\subseteq \Q(z)$ must be a power of two. \end{corollary} \begin{proof} If $z$ is constructible then we have the tower of extensions as given in Theorem E, with $z\in K_n$. Thus we have the sequence of extensions $\Q\subseteq \Q(z)\subseteq K_n$, which by the tower law gives, $$ [K_n:\Q]=[K_n:\Q(z)][\Q(z):\Q]. $$ Thus $[\Q(z):\Q]$ divides $[K_n:\Q]$, which is a power of two, so $[\Q(z):\Q]$ must also be a power of two. \qed \end{proof} To use the ``if'' part to show that numbers {\em can\/} be constructed, by finding a tower of fields as in Theorem E, is a little harder. We will need to know more about the fields sandwiched between $\Q$ and $\Q(z)$ before we can do this. The Galois Correspondence in Section \ref{galois.correspondence} will give us the control we need. \paragraph{\hspace*{-0.3cm}} The Corollary is only stated in one direction. The converse is {\em not true\/}. \paragraph{\hspace*{-0.3cm}} \label{constructions2:pgons} A regular $p$-gon, for $p$ a prime, can be constructed, by Exercise \ref{ex7.40}, precisely when the complex number $z=\cos(2\pi/p)+\text{i}\sin(2\pi/p)$ can be constructed. By Exercise \ref{ex_lect3.2}, the minimum polynomial of $z$ over $\Q$ is the $p$-th cyclotomic polynomial, $$ \Phi_p(x)=x^{p-1}+x^{p-2}+\cdots+x+1. $$ The degree of the extension $\Q\subseteq \Q(z)$ is thus $p-1$, so $p-1$ must be a power of two if the $p$-gon is to be constructed, i.e. $$ p=2^n+1. $$ Actually, even more can be said.
If $m$ is odd, the polynomial $x^m+1$ has $-1$ as a root, and so can be factorised as $x^m+1=(x+1)(x^{m-1}-x^{m-2}+x^{m-3}-\cdots-x+1).$ Thus if $n=mk$ for $m$ odd and $m>1$, we have $$ 2^n+1=(2^k)^m+1=(2^k+1)((2^k)^{m-1}-(2^k)^{m-2}+(2^k)^{m-3}-\cdots-(2^k)+1), $$ a non-trivial factorisation, giving that $2^n+1$ cannot be prime if $n$ has an odd divisor greater than $1$; i.e. $2^n+1$ can only be prime if $n$ itself is a power of two. Thus for a $p$-gon to be constructible, we must have that $p$ is a prime number of the form $$ p=2^{2^t}+1, $$ a so-called {\em Fermat prime\/}. Such primes are extremely rare: the only ones $<10^{900}$ are $$ 3,5,17,257\text{ and }65537. $$ We will see in Section \ref{galois.corresapps} that the converse is true: if $p$ is a Fermat prime, then a regular $p$-gon \emph{can\/} be constructed. \paragraph{\hspace*{-0.3cm}} A square plot of land can always be doubled in area using a ruler and compass: $$ \begin{pspicture}(0,0)(14,2.5) \rput(0,-0.25){ \rput(7,1.5){\BoxedEPSF{galois9.1.eps scaled 1000}} \rput(7.4,0.3){$(t,0)$}\rput(5.6,2){$(0,t)$} \rput(8.9,2.5){$(\kern-2pt\sqrt{2}t,\kern-2pt\sqrt{2}t)$} } \end{pspicture} $$ Set the compass to the side length $t$ of the plot. As $\kern-2pt\sqrt{2}$ is a constructible number, we can construct the point with coordinates $(\kern-2pt\sqrt{2}t, \kern-2pt\sqrt{2}t)$, hence doubling the area. \paragraph{\hspace*{-0.3cm}} Is there a similar procedure for a cube? Suppose the original cube has side length $1$, so that the task is to produce a new cube of {\em volume\/} $2$. If this could be accomplished via a ruler and compass construction, then by setting the compass to the side length of the new cube, we would have constructed $\sqrt[3]{2}$. But the minimum polynomial over $\Q$ of $\sqrt[3]{2}$ is clearly $x^3-2$, with the extension $\Q\subset\Q(\sqrt[3]{2})$ thus having degree three. Such a construction cannot therefore be possible. \paragraph{\hspace*{-0.3cm}} The subset $\Box^n$ of $\R^n$ given by $$ \Box^n=\{x\in \R^n\,|\,|x_i|\leq \frac{t}{2}\text{ for all }i\} $$ is an $n$-dimensional cube of side length $t$ having volume $t^n$. In particular, in $4$-dimensions we have the hypercube: $$ \begin{pspicture}(0,0)(14,6) \rput(7,3){\BoxedEPSF{8cell.eps scaled 700}} \end{pspicture} $$ The vertices can be placed on the $3$-sphere $S^3$ in $\R^4$. Stereographically projecting $S^3$ to $\R^3$ gives the picture above. This object can be doubled in volume with ruler and compass because the point with coordinates $(\sqrt[4]{2}t,\sqrt[4]{2}t,\sqrt[4]{2}t,\sqrt[4]{2}t)$ can be constructed. \paragraph{\hspace*{-0.3cm}} One of our fundamental constructions was the bisection of an angle. It is natural to ask if there is a construction that {\em trisects\/} an angle. Certainly there are particular angles that can be trisected: if the angle $\phi$ is constructible for example, then the angle $3\phi$ can be trisected. The angle $\pi/3$ however cannot be trisected. We will see this by showing that the angle $\pi/9$ cannot be constructed. \begin{vexercise}\label{ex10.2} Evaluate the complex number $(\cos\phi+\text{i}\sin\phi)^3$ in two different ways: using the binomial theorem and De Moivre's theorem. By equating real parts, deduce that $$ \cos3\phi=4\cos^3\phi-3\cos\phi. $$ Derive similar expressions for $\cos5\phi$ and $\cos7\phi$.
\end{vexercise} Exercise \ref{ex7.40} gives that the angle $\pi/9$ is constructible precisely when the number $\cos\pi/9$ can be constructed, for which it is necessary in turn that the degree of the extension $\Q\subseteq \Q(\cos\pi/9)$ be a power of two. Exercise \ref{ex10.2} with $\phi=\pi/9$ gives $$ \cos\frac{\pi}{3}=4\cos^3\frac{\pi}{9}-3\cos\frac{\pi}{9}, \text{ hence, } 1=8\cos^3\frac{\pi}{9}-6\cos\frac{\pi}{9}. $$ Thus, if $u=2\cos(\pi/9)$, then $u^3-3u-1=0$. This polynomial is irreducible over $\Q$ by the reduction test (with $p=2$) so it is the minimum polynomial over $\Q$ of $2\cos(\pi/9)$. The extension $\Q\subset \Q(2\cos(\pi/9))=\Q(\cos(\pi/9))$ thus has degree three, and so the angle $\pi/9$ cannot be constructed. We will be able to say more about which angles of the form $\pi/n$ can be constructed in Section \ref{galois.corresapps}. \begin{vexercise}\label{ex10.50} \hspace{1em}\begin{enumerate} \item Can an angle of $40^\circ$ be constructed? \item Assuming $72^\circ$ is constructible, what about $24^\circ$ and $8^\circ$? \item Can $72^\circ$ be constructed? (\emph{hint}: Section \ref{lect1}) \end{enumerate} \end{vexercise} \subsection*{Further Exercises for Section \thesection} \begin{vexercise}\label{platonic_volume} The octahedron, dodecahedron and icosahedron are three of the five Platonic solids (the other two are the tetrahedron and the cube). See Figure \ref{fig:constructions2:fig30}. The volume of each is given by the formula in Figure \ref{fig:constructions2:fig30}, where $x$ is the length of any edge. Show that in each case, there is no general method, using a ruler and compass, to construct a new solid from a given one, and having {\em twice\/} the volume. \end{vexercise} \begin{figure} \caption{The octahedron, dodecahedron and icosahedron, and their volumes.} \label{fig:constructions2:fig30} \end{figure} \begin{vexercise} Let $S_O,S_D$ and $S_I$ be the surface areas of the three Platonic solids of Exercise \ref{platonic_volume}. If, $$ S_O=2x^2\kern-2pt\sqrt{3},\qquad S_D=3x^2\kern-2pt\sqrt{5(5+2\kern-2pt\sqrt{5})}\text{ and } S_I=5x^2\kern-2pt\sqrt{3}, $$ determine whether or not a solid can be constructed from a given one with twice the surface area. \end{vexercise} \begin{vexercise}\label{angle_quinsect} \begin{enumerate} \item Using the identity $\cos 5\theta=16\cos^5\theta-20\cos^3\theta+5\cos\theta$, show that it is impossible, using a ruler and compass, to {\em quinsect\/} (that is, divide into $5$ equal parts) any angle $\psi$ that satisfies $$ \cos\psi=\frac{5}{6}. $$ \item Using the identity $\cos7\theta=64\cos^7\theta-112\cos^5\theta+56\cos^3\theta-7\cos\theta$, show that it is impossible, using ruler and compass, to {\em septsect\/} (that is, divide into {\em seven\/} equal parts) any angle $\varphi$ such that $$ \cos\varphi=\frac{7}{8}. $$ \end{enumerate} \end{vexercise} \section{Groups I: Soluble Groups and Simple Groups} \label{groups.stuff} This section contains miscellaneous but important reminders from group theory. Not all our groups will be Abelian, so we return to writing the group operation as juxtaposition and writing ``$\text{id}$'' for the group identity. \paragraph{\hspace*{-0.3cm}} A {\em permutation\/} of a set $X$ is a bijection $X\rightarrow X$. Usually we are interested in the case where $X$ is finite, say $X=\{1,2,\ldots,n\}$, so a permutation is just a rearrangement of these numbers.
Permutations are most compactly written using cycle notation $$ (a_{11},a_{12},\ldots,a_{1n_1})(a_{21},a_{22},\ldots,a_{2n_2})\ldots(a_{k1},a_{k2}, \ldots,a_{kn_k}) $$ where the $a_{ij}$ are elements of $\{1,2,\ldots,n\}$. Each $(b_1,b_2,\ldots,b_k)$ means that the $b_i$ are permuted in a cycle: $$ \begin{pspicture}(0,0)(4,4) \rput(2,2){\BoxedEPSF{galois11.1b.eps scaled 1000}} \rput(2.1,3.5){$b_1$} \rput(3.5,2.35){$b_2$} \rput(2.75,0.7){$b_3$} \rput(0.5,2.4){$b_k$} \end{pspicture} $$ Cycles are composed from right to left, eg: $(1,2)(1,2,4,3)(1,3)(2,4)=(1,2,3)$. In this way a permutation can be written as a product of disjoint cycles. The set of all permutations of $X$ forms a group under composition of bijections called the {\em symmetric group\/} $S_{\kern-.3mm X}$, or $S_{\kern-.3mm n}$ if $X=\{1,2,\ldots,n\}$. \paragraph{\hspace*{-0.3cm}} A permutation where just two things are interchanged, and everything else is left fixed, is called a {\em transposition\/} or \emph{swap\/} $(a,b)$. Any permutation can be written as a composition of transpositions, for example: $$ (1,2,3)=(1,3)(1,2)=(1,2)(2,3) \text{ and } (a_{1},a_{2},\ldots,a_{k})=(a_1,a_k)(a_1,a_{k-1})\ldots (a_1,a_3)(a_1,a_2). $$ There will be many such expressions, but they all involve an even number of transpositions or all involve an odd number of them. We can thus call a permutation {\em even\/} if it can be decomposed into an even number of transpositions, and {\em odd\/} otherwise. The even permutations in $S_{\kern-.3mm n}$ form a subgroup called the {\em Alternating group\/} $A_n$. \begin{vexercise}\label{ex11.0} Show that $A_n$ is indeed a group comprising exactly half of the elements of $S_{\kern-.3mm n}$. Show that the odd elements in $S_{\kern-.3mm n}$ {\em do not\/} form a subgroup. \end{vexercise} \begin{vexercise}\label{ex11.2} Recall that the {\em order\/} of an element $g$ of a group $G$ is the least $n$ such that $g^n=\text{id}$. Show that if $g,h$ are elements such that $gh=hg$ then $(gh)^k=g^kh^k$ for any integer $k$. Show also that if the order of $g$ is $n$ and the order of $h$ is $m$ with $\gcd(n,m)=1$, then the order of $gh$ is the lowest common multiple of $n$ and $m$. \end{vexercise} \begin{vexercise}\label{ex11.2a} Let $G$ be a finite Abelian group, and let $1=m_1,m_2,\ldots,m_\ell$ be a list of all the possible orders of elements of $G$. Show that there exists an element whose order is the lowest common multiple of the $m_i$ [\emph{hint}: let $g_i$ be an element of order $m_i$ and use Exercise \ref{ex11.2} to show that there are $k_1,\ldots,k_\ell$ with $g_1^{k_1}\cdots g_\ell^{k_\ell}$ the element we seek]. \end{vexercise} \paragraph{\hspace*{-0.3cm}} If $G$ is a group and $\{g_1,g_2,\ldots,g_n\}$ are elements of $G$, then we say that the $g_i$ {\em generate\/} $G$ when every element $g\in G$ can be obtained as a product $$ g=g_{i_1}^{\pm 1}g_{i_2}^{\pm 1}\ldots g_{i_k}^{\pm 1}, $$ of the $g_i$ and their inverses. Write $G=\langle g_1,g_2,\ldots,g_n\rangle$. \paragraph{\hspace*{-0.3cm}} We find generators for the symmetric and alternating groups. We have already seen that the transpositions $(a,b)$ generate $S_{\kern-.3mm n}$, for any permutation can be written as a product $$ (a_{1},a_{2},\ldots,a_{k})=(a_1,a_k) (a_1,a_{k-1})\ldots (a_1,a_3)(a_1,a_2).
$$ The transpositions $(a,b)$ can in turn be expressed in terms of just some of them: when $a<b$ we have $$ (a,b)=(a,a+1)(a+1,a+2)\ldots(b-2,b-1)(b-1,b)\ldots(a+1,a+2)(a,a+1) $$ as can be seen by considering the picture: $$ \begin{pspicture}(0,0)(14,3) \rput(7,1.5){\BoxedEPSF{galois11.2b.eps scaled 1000}} \rput(3,1.5){$a$} \rput(4.5,1.5){$a+1$} \rput(6,1.5){$a+2$} \rput(7.5,1.5){$b-2$} \rput(9,1.5){$b-1$} \rput(10.5,1.5){$b$} \end{pspicture} $$ and doing the swaps in the order indicated. Any number strictly in between $a$ and $b$ moves one place to the right and then one place to the left, with net effect that it remains stationary. The number $a$ is moved to $b$ by the top swaps, but then stays there. Similarly $b$ stays put for all but the last of the top swaps and then is moved to $a$ by the bottom swaps. Any permutation can thus be written as a product of swaps of the form $(a,a+1)$. Even these transpositions can be further reduced, by transferring $a$ and $a+1$ to the points $1$ and $2$, swapping $1$ and $2$ and transferring the answer back to $a$ and $a+1$. Indeed, if $\tau=(1,2,\ldots,n)$ then doing the permutations in the order indicated in the picture: $$ \begin{pspicture}(0,0)(14,3) \rput(7,1.5){\BoxedEPSF{galois11.3c.eps scaled 1000}} \rput(3.2,1.75){$1$}\rput(4.65,1.75){$2$} \rput(9.35,1.75){$a$}\rput(10.8,1.75){$a+1$} \rput(0,-0.5){ \pscircle[linecolor=white,fillstyle=solid,fillcolor=red](7,3){.25} \rput(7,3){{\white{\bf 1}}} } \rput(-3,-1.5){ \pscircle[linecolor=white,fillstyle=solid,fillcolor=red](7,3){.25} \rput(7,3){{\white{\bf 2}}} } \rput(0,-2.5){ \pscircle[linecolor=white,fillstyle=solid,fillcolor=red](7,3){.25} \rput(7,3){{\white{\bf 3}}} } \end{pspicture} $$ shows that $(a,a+1)=\tau^{a-1}(1,2)\tau^{1-a}$. The conclusion is that $S_{\kern-.3mm n}$ is generated by just two permutations, namely $(1,2)$ and $(1,2,\ldots,n)$. \begin{vexercise} Show that the Alternating group is generated by the permutations of the form $(a,b,c)$. Show that just the $3$-cycles of the form $(1,2,a)$ will suffice. \end{vexercise} \paragraph{\hspace*{-0.3cm}} Lagrange's theorem says that if $G$ is a finite group and $H$ a subgroup of $G$, then the order $|H|$ of $H$ divides the order $|G|$ of $G$. The converse, that if $k$ divides the order of the group then the group has a subgroup of order $k$, is false. \begin{vexercise} By considering the Alternating group $A_4$, justify this statement. \end{vexercise} \begin{vexercise}\label{cyclicgroup.subgroups} Show that if $G$ is a cyclic group, then the converse to Lagrange's theorem {\em is\/} true, ie: if $G$ has order $n$ and $k$ divides $n$ then $G$ has a subgroup of order $k$. \end{vexercise} \begin{vexercise} Use Lagrange's Theorem to show that if a group $G$ has order a prime number $p$, then $G$ is isomorphic to a cyclic group. Thus any two groups of order $p$ are isomorphic. \end{vexercise} There are partial converses to Lagrange's Theorem: \begin{theorem}[Cauchy] Let $G$ be a finite group and $p$ a prime dividing the order of $G$. Then $G$ has a subgroup of order $p$. \end{theorem} Indeed, one can show that $G$ contains an element $g$ of order $p$, with the subgroup being the elements $\{g,g^2,\ldots,g^p=\text{id}\}$. \begin{theorem}[Sylow's 1st] Let $G$ be a finite group of order $p^k m$, where $p$ does not divide $m$. Then $G$ has a subgroup of order $p^k$. \end{theorem} \paragraph{\hspace*{-0.3cm}} It will be useful to consider all the subgroups of a group at once, rather than just one at a time.
\begin{definition}[lattice of subgroups] The subgroup lattice is a diagram depicting all the subgroups of $G$ and the inclusions between them. If $H_1,H_2$ are subgroups of $G$ with $H_1\subseteq H_2$ they appear in the diagram like so: $$ \begin{pspicture}(0,0)(2,2) \rput(1,.3){$H_1$} \rput(1,1.7){$H_2$} \psline(1,.6)(1,1.4) \end{pspicture} $$ At the very base of the diagram is the trivial subgroup $\{\text{id}\}$ and at the apex is the other trivial subgroup, namely $G$ itself. Denote the lattice by $\mathcal L(G)$. \end{definition} \parshape=4 0pt\hsize 0pt\hsize 0pt.75\hsize 0pt.75\hsize For example, the group of symmetries of an equilateral triangle has elements $$ \{\text{id},r,r^2,s,rs,r^2s\} $$ where $r$ is a rotation counter-clockwise through $\frac{1}{3}$ of a turn (we called it $ts$ in Section \ref{lect1}) and $s$ is the reflection in the horizontal axis. \vadjust{ \smash{\lower 30pt \llap{ \begin{pspicture}(0,0)(3,2) \rput(1,0.25){ \uput[0]{270}(-.1,2){\pstriangle[fillstyle=solid,fillcolor=lightgray](1,0)(2,1.73)} \rput(-0.7,1){${\red s}$}\rput(1.3,2.3){${\red r}$} \psline[linecolor=red](-0.5,1)(2,1) \rput{-120}(-0.1,1.75){\pscurve[linecolor=red]{<-}(-0.5,0)(-1,1)(-0.5,2)} } \end{pspicture} }}}\ignorespaces \parshape=6 0pt.75\hsize 0pt.75\hsize 0pt.75\hsize 0pt\hsize 0pt\hsize 0pt\hsize The subgroup lattice $\mathcal L(G)$ is on the left in Figure \ref{fig:groups1:subgroup_lattices}. I'll leave you to see that they are all subgroups, so it remains to see that we have all of them. Suppose first that $H$ is a subgroup containing $r$. Then it must contain all the powers $\{\text{id},r,r^2\}$ of $r$, and so $3\leq |H|\leq 6$. By Lagrange's Theorem $|H|$ divides $6$, so we have $|H|=3$ or $6$, giving that $H$ must be $\{\text{id},r,r^2\}$ or all of $G$. This describes all the subgroups that contain $r$, and the same argument -- and conclusion -- applies to the subgroups containing $r^2$. This leaves the subgroups containing one of the reflections $s,rs,r^2s$ but not $r$ or $r^2$. If $H$ is a subgroup containing $s$, then as it also contains $\text{id}$, and by Lagrange, it must have order $2,3$ or $6$. The first possibility gives $H=\{\text{id},s\}$ and the last gives $H=G$. On the other hand, to have order $3$, the subgroup $H$ must also contain one of $rs$ or $r^2s$. In the first case it also contains $rss= r$, a contradiction. Similarly $H$ cannot contain $r^2s$, so there is no subgroup $H$ containing $s$ apart from $\{\text{id},s\}$ and $G$ itself. Similarly for subgroups containing $rs$ or $r^2s$. Thus the lattice $\mathcal L(G)$ is indeed as shown in Figure \ref{fig:groups1:subgroup_lattices}. The right part of Figure \ref{fig:groups1:subgroup_lattices} gives the subgroup lattice of the symmetry group of a square. I'll leave the details to you. \begin{figure} \caption{Subgroup lattices of the group of symmetries of a triangle \emph{(left)} and square \emph{(right)}.} \label{fig:groups1:subgroup_lattices} \end{figure} \paragraph{\hspace*{-0.3cm}} If $G$ is a finite group and $$ \{\text{id}\}=H_0\lhd H_1\lhd \cdots \lhd H_{n-1}\lhd H_n=G, $$ is a nested sequence of subgroups with each $H_i$ normal in $H_{i+1}$ and the quotients $$ H_1/H_0, H_2/H_1,\ldots,H_n/H_{n-1} $$ Abelian, then $G$ is said to be {\em soluble\/}. \paragraph{\hspace*{-0.3cm}} \label{groups1:abelian_are_soluble} If $G$ is an Abelian group, then we have the sequence $$ \{\text{id}\}\lhd G, $$ with the single quotient $G/\{\text{id}\}\cong G$, an Abelian group. Thus Abelian groups are soluble. 
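\paragraph{\hspace*{-0.3cm}} For a non-Abelian example, consider $S_{\kern-.3mm 4}$ and let $V=\{\text{id},(1,2)(3,4),(1,3)(2,4),(1,4)(2,3)\}$. I'll leave you to check that $V$ is a subgroup, normal in $A_4$ (indeed in all of $S_{\kern-.3mm 4}$), while $A_4$ is normal in $S_{\kern-.3mm 4}$ as it has exactly two cosets there (Exercise \ref{ex11.1}). In the sequence $$ \{\text{id}\}\lhd V\lhd A_4\lhd S_{\kern-.3mm 4}, $$ the quotients have orders $4,3$ and $2$: the first is isomorphic to the Abelian group $V$, and the other two are cyclic of prime order, hence Abelian. Thus $S_{\kern-.3mm 4}$ is soluble.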
\paragraph{\hspace*{-0.3cm}} \label{groups1:dihedral_are_soluble} For another example let $G$ be the symmetries, both rotations and reflections, of a regular $n$-gon in the plane. In the sequence: $$ \{\text{id}\}\lhd \{\text{rotations}\}\lhd G $$ the normality of the subgroup of rotations in $G$ follows from the fact that the rotations comprise half of all the symmetries and Exercise \ref{ex11.1}. Moreover, the rotations are isomorphic to the cyclic group $\ams{Z}_n$, and so the quotients in this sequence are $$ \{\text{rotations}\}/\{\text{id}\}\cong \{\text{rotations}\}\cong\ams{Z}_n\text{ and } G/\{\text{rotations}\}\cong\ams{Z}_2, $$ both Abelian groups. \begin{vexercise}\label{subgroups.solublegroups1} It turns out, although for slightly technical reasons, that a subgroup of a soluble group is also soluble. This exercise and the next demonstrate why. Let $G$ be a group, $H$ a subgroup and $N$ a normal subgroup. Let $$ NH=\{nh\,|\,n\in N,h\in H\}. $$ \begin{enumerate} \item Define a map $\varphi: H\rightarrow NH/N$ by $\varphi(h)=Nh$. Show that $\varphi$ is an onto homomorphism with kernel $N\cap H$. \item Use the first isomorphism theorem for groups to deduce that $H/H\cap N$ is isomorphic to $NH/N$. \end{enumerate} (This is called the {\em second isomorphism\/} or {\em diamond isomorphism\/} theorem. Why diamond? Draw a picture of all the subgroups involved--the theorem says that the quotients along two ``sides'' of the diamond are isomorphic). \end{vexercise} \begin{vexercise}\label{subgroups.solublegroups2} Let $G$ be a soluble group via the series, $$ \{\text{id}\}=H_0\lhd H_1\lhd \cdots \lhd H_{n-1}\lhd H_n=G, $$ and let $K$ be a subgroup of $G$. Show that $$ \{\text{id}\}=H_0\cap K\lhd H_1\cap K\lhd \cdots \lhd H_{n-1}\cap K\lhd H_n\cap K=K, $$ is a series with Abelian quotients for $K$, and hence $K$ is also a soluble group. \end{vexercise} \paragraph{\hspace*{-0.3cm}} The antithesis of the soluble groups is provided by the {\em simple\/} ones: groups $G$ whose only normal subgroups are the trivial subgroup $\{\text{id}\}$ and the whole group $G$. Whenever we have a normal subgroup we can form a quotient. A group is thus simple when its only quotients are itself $G/\{\text{id}\}\cong G$ and the trivial group $G/G\cong \{\text{id}\}$. Thus simple groups are analogous to prime numbers: integers whose only quotients are themselves $p/1=p$ and $p/p=1$. If $G$ is non-Abelian and simple, then $G$ {\em cannot\/} be soluble. For, the only sequence of normal subgroups that $G$ can have is $$ \{\text{id}\}\lhd G, $$ and as $G$ is non-Abelian the quotient $G/\{\text{id}\}\cong G$ of this sequence is non-Abelian. Thus, non-Abelian simple groups provide a ready source of non-soluble groups. \begin{table} \begin{center} \begin{tabular}{ll} \hline Symbol & Name\\ \hline &\\ $\ams{Z}_p$ & cyclic\\ $A_n$ & alternating\\ &\\ \hline notes: $p$ is a prime;\\ $n\not= 1,2,4$\\ \end{tabular} \caption{The first two families of simple groups}\label{simple_groups1} \end{center} \end{table} \paragraph{\hspace*{-0.3cm}} Amazingly, there is a complete list of the finite simple groups, compiled over approximately 150 years. The list is contained in Tables \ref{simple_groups1}-\ref{simple_groups3}. \begin{vexercise} Show that if $p$ is a prime number then the cyclic group $\ams{Z}_p$ has no non-trivial subgroups whatsoever, and so is a simple group.
\end{vexercise} \paragraph{\hspace*{-0.3cm}} In Table \ref{simple_groups1} we see that the Alternating groups $A_n$ are simple for $n\not= 1,2$ or $4$. For $n\geq 5$ these groups are also non-Abelian, so they are not soluble, and as any subgroup of a soluble group is soluble, any group containing such an Alternating group will also not be soluble. Thus, the symmetric groups $S_{\kern-.3mm n}$ are not soluble for $n\geq 5$. (Note that $A_3\cong\ams{Z}_3$ is simple but Abelian, hence soluble, as are $S_{\kern-.3mm 3}$ and $S_{\kern-.3mm 4}$.) \begin{table} \begin{center} \begin{tabular}{lll} \hline Symbol & Name & Discovered \\ \hline &&\\ $\text{PSL}_n\ams{F}_q$ & projective & 1870\\ $\text{PSP}_{2n}\ams{F}_q$ & symplectic & 1870\\ $\text{P}\Omega^+_{2n}$ & orthogonal & 1870\\ $\text{P}\Omega_{2n+1}$ & orthogonal & 1870\\ $E_6(q)$ & Chevalley & 1955\\ $E_7(q)$ & Chevalley & 1955 \\ $E_8(q)$ & Chevalley & 1955 \\ $F_4(q)$ & Chevalley & 1955 \\ $G_2(q)$ & Chevalley & 1955 \\ $^2 A_n(q^2)=\text{PSU}_n\ams{F}_{q^2}$ & unitary or twisted Chevalley & 1870 \\ $^2D_n(q^2)=\text{P}\Omega^-_{2n}$ & orthogonal or twisted Chevalley & 1870 \\ $^2E_6(q^2)$ & twisted Chevalley & c. 1960 \\ $^3D_4(q^3)$ & twisted Chevalley & c. 1960 \\ $^2B_2(2^{2e+1})$ & Suzuki & 1960 \\ $^2G_2(3^{2e+1})$ & Ree & 1961 \\ $^2F_4(2^{2e+1})$ & Ree & 1961 \\ &&\\ \hline notes: $n$ and $e$ are $\in\ams{Z}$&There are some restrictions on $n$&\\ $q$ is a prime power;&and $q$, left off here for clarity.&\\ \end{tabular} \caption{The simple groups of Lie type}\label{simple_groups2} \end{center} \end{table} \paragraph{\hspace*{-0.3cm}} Tables \ref{simple_groups2} and \ref{simple_groups3} list the really interesting simple groups. The groups of Lie type are roughly speaking groups of matrices whose entries come from finite fields. We have already seen that if $q=p^n$ is a prime power, then there is a field $\ams{F}_q$ with $q=p^n$ elements. The group $\text{SL}_n\ams{F}_q$ consists of the $n\times n$ matrices with entries from this field having determinant $1$, under the usual matrix multiplication. This group is not simple, as the subgroup of scalar matrices $$ N=\{\lambda I_n\,|\,\lambda\in\ams{F}_q,\,\lambda^n=1\}, $$ is a normal subgroup. But it turns out that the quotient group, $$ \text{SL}_n\ams{F}_q/N, $$ is a simple group. It is denoted $\text{PSL}_n\ams{F}_q$, and called the $n$-dimensional projective special linear group over $\ams{F}_q$. The remaining groups in Table \ref{simple_groups2} come from more complicated constructions. Table \ref{simple_groups3} lists groups that don't fall into any of the other categories. For this reason they are called the ``sporadic'' simple groups. They arise from various -- often quite complicated -- constructions that are beyond the reach of these notes. The most interesting of them is the largest one -- the Monster simple group (which actually contains quite a few of the others as subgroups). In any case, the simple groups in Tables \ref{simple_groups2} and \ref{simple_groups3} are all non-Abelian, hence provide more examples of non-soluble groups. \begin{table} \begin{center} \begin{tabular}{llll} \hline Symbol & Name & Discovered & Order\\ \hline &&&\\ 1. \makebox[0pt][l]{{\em First generation of the Happy Family\/}}.\\ $M_{11}$ & Mathieu & 1861 & $2^4\,3^2\,5\,11$\\ $M_{12}$ & Mathieu & 1861 & $2^6\,3^3\,5\,11$\\ $M_{22}$ & Mathieu & 1873 & $2^7\,3^2\,5\,7\,11$\\ $M_{23}$ & Mathieu & 1873 & $2^7\,3^2\,5\,7\,11\,23$\\ $M_{24}$ & Mathieu & 1873 & $2^{10}\,3^3\,5\,7\,11\,23$\\ 2.
\makebox[0pt][l]{{\em Second generation of the Happy Family\/}}.\\ HJ & Hall-Janko & 1968 & $2^7\,3^3\,5^2\,7$\\ HiS & Higman-Sims & 1968 & $2^9\,3^2\,5^3\,7\,11$\\ McL & McLaughlin & 1969 & $2^7\,3^6\,5^3\,7\,11$\\ Suz & Suzuki & 1969 & $2^{13}\,3^7\,5^2\,7\,11\,13$\\ $Co_1$ & Conway & 1969 & $2^{21}\,3^9\,5^4\,7^2\,11\,13\,23$\\ $Co_2$ & Conway & 1969? & $2^{18}\,3^6\,5^3\,7\,11\,23$\\ $Co_3$ & Conway & 1969? & $2^{10}\,3^7\,5^3\,7\,11\,23$\\ 3. \makebox[0pt][l]{{\em Third generation of the Happy Family\/}}.\\ He & Held & 1968 & $2^{10}\,3^3\,5^2\,7^3\,17$\\ $Fi_{22}$ & Fischer & 1968 & $2^{17}\,3^9\,5^2\,7\,11\,13$\\ $Fi_{23}$ & Fischer & 1968 & $2^{18}\,3^{13}\,5^2\,7\,11\,13\,17\,23$\\ $Fi_{24}$ & Fischer & 1968 & $2^{21}\,3^{16}\,5^2\,7^3\,11\,13\,17\,23\,29$\\ $F_5$ & Harada-Norton & 1973 & $2^{14}\,3^6\,5^6\,7\,11\,19$\\ $F_3$ & Thompson & 1973 & $2^{15}\,3^{10}\,5^3\,7^2\,13\,19\,31$\\ $F_2$ & Fischer or ``Baby Monster'' & 1973 & $2^{41}\,3^{13}\,5^6\,7^2\,11\,13\,17\,19\,23\,31\,47$\\ $\ams{M}$ & Fischer-Griess or ``Friendly Giant'' or ``Monster'' & 1973 & $\approx 8\times 10^{53}$\\ 4. {\em The Pariahs\/}.\\ $J_1$ & Janko & 1965 & $2^3\,3\,5\,7\,11\,19$\\ $J_3$ & Janko & 1968 & $2^7\,3^5\,5\,17\,19$\\ $J_4$ & Janko & 1975 & $2^{21}\,3^3\,5\,7\,11^3\,23\,29\,31\,37\,43$\\ Ly & Lyons & 1969 & $2^8\,3^7\,5^6\,7\,11\,31\,37\,67$\\ Ru & Rudvalis & 1972 & $2^{14}\,3^3\,5^3\,7\,13\,29$\\ O'N & O'Nan & 1973 & $2^9\,3^4\,5\,7^3\,11\,19\,31$\\ \hline \end{tabular} \caption{The sporadic simple groups}\label{simple_groups3} \end{center} \end{table} \subsection*{Further Exercises for Section \thesection} \begin{vexercise} Show that any subgroup of an Abelian group is normal. \end{vexercise} \begin{vexercise} Let $n$ be a positive integer that is not prime. Show that the cyclic group $\ams{Z}_n$ is not simple. \end{vexercise} \begin{vexercise} Show that $A_2$ and $A_4$ are not simple groups, but $A_3$ is. \end{vexercise} \begin{vexercise}\label{ex11.1} Let $G$ be a group and $H$ a subgroup such that $H$ has exactly two cosets in $G$. Let $C_2$ be the group with elements $\{-1,1\}$ and operation the usual multiplication. Define a map $f:G\rightarrow C_2$ by $$ f(g)=\left\{ \begin{array}{ll} 1&g\in H\\ -1&g\not\in H \end{array}\right. $$ Show that $f$ is a homomorphism. Deduce that $H$ is a normal subgroup. \end{vexercise} \begin{vexercise} Consider the group of symmetries (rotations and reflections) of a regular $n$-sided polygon for $n\geq 3$. Show that this is not a simple group. \end{vexercise} \begin{vexercise} Show that $S_{\kern-.3mm 2}$ is simple but $S_{\kern-.3mm n}$ is not for $n\geq 3$. Show that $A_n$ has no subgroups of index $2$ for $n\geq 5$. \end{vexercise} \begin{vexercise} Show that if $G$ is Abelian and simple then it is cyclic. Deduce that if $G$ is simple and not isomorphic to $\ams{Z}_p$ then $G$ is non-Abelian. \end{vexercise} \begin{vexercise} For each of the following groups $G$, draw the subgroup lattice $\mathcal L(G)$: \begin{enumerate} \item $G=$ the group of symmetries of a pentagon or hexagon. \item $G=$ the cyclic group $\{1,g,g^2,\ldots,g^{n-1}\}$ where $g^n=1$. \end{enumerate} \end{vexercise} \section{Groups II: Symmetries of Fields} \label{galois.groups} We are finally able to bring symmetry into the solutions of polynomial equations.
\begin{definition}[automorphism or symmetry of a field] An automorphism of a field $F$ is an isomorphism $\sigma:F\rightarrow F$, ie: a bijective map from $F$ to $F$ such that $\sigma(a+b)=\sigma(a)+\sigma(b)$ and $\sigma(ab)=\sigma(a)\sigma(b)$ for all $a,b\in F$. \end{definition} We remarked in Section \ref{lect4} that an automorphism is a relabeling of the elements using different symbols but keeping the algebra the same. So it is a way of picking the field up and placing it back down without changing the way it essentially looks. \begin{vexercise} Show that if $\sigma$ is an automorphism of the field $F$ then $\sigma(0)=0$ and $\sigma(1)=1$. \end{vexercise} \paragraph{\hspace*{-0.3cm}} A familiar example is complex conjugation: $\sigma:z\mapsto \overline{z}$ is an automorphism of $\ams{C}$, since $$ \overline{z+w}=\overline{z}+\overline{w}\text{ and }\overline{zw}=\overline{z}\,\overline{w}, $$ with conjugation a bijection $\ams{C}\rightarrow \ams{C}$. This symmetry captures the idea that from an algebraic point of view, we could have just as easily adjoined $-\text{i}$ to $\R$, rather than $\text{i}$, to obtain the complex numbers -- they look the same upside down as right side up! We will see at the end of this section that if a non-trivial automorphism of $\ams{C}$ fixes pointwise the real numbers, then it must be complex conjugation. If we drop the requirement that $\R$ be fixed then there are many more possibilities: if we only insist that $\sigma$ fix $\Q$ pointwise then there are infinitely many. \begin{vexercise} \label{exercise:groups2:conjugation} Let $f\in\Q[x]$ with roots $\alpha_1,\ldots,\alpha_d\in\ams{C}$. Show that complex conjugation $z\mapsto\overline{z}$ is an automorphism of the splitting field $\Q(\alpha_1,\ldots,\alpha_d)$. Is it always non-trivial? \end{vexercise} \begin{vexercise} Show that $a+b\text{i} \mapsto -a+b\text{i}$ is \emph{not\/} an automorphism of $\ams{C}$. Show that if $\ell$ is a line through $0$ in $\ams{C}$, then reflecting in $\ell$ is an automorphism only when $\ell$ is the real axis. \end{vexercise} \paragraph{\hspace*{-0.3cm}} We saw in Section \ref{lect4} that every field $F$ has a prime subfield isomorphic to either $\ams{F}_p$ or $\Q$. The elements have the form: $$ \frac{\overbrace{1+1+\cdots +1}^{m\text{ times}}} {\underbrace{1+1+\cdots +1}_{n\text{ times}}}. $$ If $\sigma:F\rightarrow F$ is an automorphism of $F$ then \begin{equation*} \begin{split} \sigma\biggl(\frac{\overbrace{1+1+\cdots +1}^{m\text{ times}}} {\underbrace{1+1+\cdots +1}_{n\text{ times}}}\biggr) &= \sigma(\overbrace{1+1+\cdots +1}^{m}) \sigma\biggl(\frac{1} {\underbrace{1+1+\cdots +1}_{n}}\biggr)\\ &= (\overbrace{\sigma(1)+\sigma(1)+\cdots +\sigma(1)}^{m}) \biggl(\frac{1} {\underbrace{\sigma(1)+\sigma(1)+\cdots +\sigma(1)}_{n}}\biggr) = \frac{\overbrace{1+1+\cdots +1}^{m\text{ times}}} {\underbrace{1+1+\cdots +1}_{n\text{ times}}}. \end{split} \end{equation*} The elements of the prime subfield are thus fixed pointwise by the automorphism $\sigma$. \paragraph{\hspace*{-0.3cm}} This example suggests that we should think about symmetries in a relative way. As symmetries normally arrange themselves into groups we define: \begin{definition}[Galois group of an extension] Let $F\subseteq E$ be an extension of fields.
The automorphisms of the field $E$ that fix pointwise the elements of $F$ form a group under composition, called the Galois group of $E$ over $F$, and denoted $\text{Gal}(E/F)$. \end{definition} An element $\sigma$ of $\text{Gal}(E/F)$ thus has the property that $\sigma(a)=a$ for all $a\in F$. \begin{vexercise} For $F\subset E$ fields, show that the set of automorphisms $\text{Gal}(E/F)$ of $E$ that fix $F$ pointwise does indeed form a group under composition. \end{vexercise} \paragraph{\hspace*{-0.3cm}} Consider the field $\Q(\kern-2pt\sqrt{2},\text{i})$. The tower law gives the basis $\{1,\kern-2pt\sqrt{2},\text{i},\kern-2pt\sqrt{2}\text{i}\}$ over $\Q$, so the elements are $$ \Q(\kern-2pt\sqrt{2},\text{i})=\{a+b\kern-2pt\sqrt{2}+c\text{i}+d\kern-2pt\sqrt{2}\text{i}\,|\,a,b,c,d\in\Q\}. $$ If $\sigma\in\text{Gal}(\Q(\kern-2pt\sqrt{2},\text{i})/\Q)$ then \begin{align*} \sigma(a+b\kern-2pt\sqrt{2}+c\text{i}+d\kern-2pt\sqrt{2}\text{i}) &=\sigma(a)+\sigma(b)\sigma(\kern-2pt\sqrt{2})+\sigma(c)\sigma(\text{i}) +\sigma(d)\sigma(\kern-2pt\sqrt{2}\text{i})\\ &=a+b\sigma(\kern-2pt\sqrt{2})+c\sigma(\text{i})+d\sigma(\kern-2pt\sqrt{2}\text{i}) \end{align*} as an element of $\text{Gal}(\Q(\kern-2pt\sqrt{2},\text{i})/\Q)$ fixes rational numbers by definition. Thus $\sigma$ is completely determined by its effect on the basis $\{1,\kern-2pt\sqrt{2},\text{i},\kern-2pt\sqrt{2}\text{i}\}$: once their images are known, then $\sigma$ is known. (This is no surprise. If $F\subseteq E$ is an extension then $E$ is, among other things, a vector space over $F$ and $\sigma\in\text{Gal}(E/F)$ a linear map of vector spaces $E\rightarrow E$, hence completely determined by its effect on a basis.) We can say more: we have $\sigma(1)=1$ and $\sigma(\kern-2pt\sqrt{2}\text{i})=\sigma(\kern-2pt\sqrt{2})\sigma(\text{i})$. Thus $\sigma$ is completely determined by its effect on $\kern-2pt\sqrt{2}$ and $\text{i}$, the elements adjoined to obtain $\Q(\kern-2pt\sqrt{2},\text{i})$. \paragraph{\hspace*{-0.3cm}} This is a general fact: if $F\subseteq F(\alpha_1,\alpha_2,\ldots,\alpha_k)=E$ and $\sigma\in\text{Gal}(E/F)$, then $\sigma$ is completely determined by its effect on $\alpha_1,\ldots,\alpha_k$. For, if $\{\beta_1,\ldots,\beta_n\}$ is a basis for $E$ over $F$, then $\sigma$ is completely determined by its effect on the $\beta_i$. The proof of the tower law gives $$ \beta_i=\alpha_1^{i_1}\alpha_2^{i_2}\ldots \alpha_k^{i_k}, $$ a product of powers of the $\alpha_j$'s, so that $\sigma(\beta_i)=\sigma(\alpha_1)^{i_1}\sigma(\alpha_2)^{i_2}\ldots \sigma(\alpha_k)^{i_k}$ is in turn determined by the $\sigma(\alpha_j)$'s. \paragraph{\hspace*{-0.3cm}} The structure of Galois groups can sometimes be determined via ad-hoc arguments, at least in very simple cases. For example, let $\omega$ be a primitive cube root of $1$, $$ \omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}, $$ and consider the extension $\Q\subset\Q(\omega)$.
\vadjust{ \smash{\lower 72pt \llap{ \begin{pspicture}(0,0)(2.5,2) \rput(-0.25,1.2){ \uput[0]{270}(-.1,2){\pstriangle[fillstyle=solid,fillcolor=lightgray](1,0)(2,1.73)} \psbezier[linecolor=red]{->}(2.1,0.8)(3.1,0)(3.1,2)(2.1,1.2) \rput(-.1,0){\psbezier[linecolor=red]{<->}(-.4,0.1)(-1,.5)(-1,1.5)(-.4,1.9)} \rput(2,1){$1$} \rput(-.2,0.1){$\omega^2$} \rput(-.2,1.9){$\omega$} \rput(0.75,-0.5){\red $\sigma(\omega)=\omega^2=\overline{\omega}$} } \end{pspicture} }}}\ignorespaces \parshape=6 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt.7\hsize 0pt\hsize Although $\omega$ is a root of $x^3-1$, this is reducible over $\Q$ ($1$ is also a root) and the minimum polynomial of $\omega$ over $\Q$ is in fact $x^2+x+1$ by Exercise \ref{ex_lect3.2}. By Theorem D, the field $\Q(\omega)=\{a+b\omega\,|\,a,b\in\Q\}$, so that $\Q(\omega)$ is $2$-dimensional over $\Q$ with basis $\{1,\omega\}$. Let $\sigma\in\text{Gal}(\Q(\omega)/\Q)$, whose effect is completely determined by where it sends $\omega$. Suppose $\sigma(\omega)=a+b\omega$ for some $a,b\in\Q$ to be determined. We have $\sigma(\omega^3)=\sigma(1)=1$, but also $$ \sigma(\omega^3)=\sigma(\omega)^3=(a+b\omega)^3=(a^3+b^3-3ab^2)+(3a^2b-3ab^2)\omega $$ with the last bit using $\omega^2=-\omega-1$. As $\{1,\omega\}$ are independent over $\Q$, the elements of $\Q(\omega)$ have unique expressions as linear combinations of these two basis elements. We can therefore ``equate the $1$ and $\omega$ parts'' in these two expressions for $\sigma(\omega^3)$: $$ 1=\sigma(\omega^3)=(a^3+b^3-3ab^2)+(3a^2b-3ab^2)\omega,\text{ so that } a^3+b^3-3ab^2=1\text{ and }3a^2b-3ab^2=0. $$ Solving these equations (in $\Q$!) gives three solutions $a=0,b=1$ and $a=1,b=0$ and $a=-1,b=-1$, corresponding to $\sigma(\omega)=\omega$ and $\sigma(\omega)=1$ and $\sigma(\omega)=-1-\omega=\omega^2$. The second one is impossible as $\sigma$ is a bijection and we already have $\sigma(1)=1$. The first one is the identity map and the third $\sigma(\omega)=\omega^2=\overline{\omega}$ is complex conjugation (and shown in the figure above), giving $\text{Gal}(\Q(\omega)/\Q)=\{\text{id},\sigma:z\mapsto\overline{z}\}$ a group of order two. (Now revisit Exercise \ref{ex1.-1}). \begin{vexercise} \label{galois_groups_exercise50} $\Q(\omega)$ is also spanned, as a vector space, by $\{1,\omega,\omega^2\}$, so that every element has an expression of the form $a+b\omega+c\omega^2$ for some $a,b,c\in\Q$. In particular $\overline{\omega}$ can be written as both $\omega^2$ and as $-1-\omega$. ``Equating the $1$ and the $\omega$ and the $\omega^2$ parts'' gives $0=-1$ and $1=0$. What has gone wrong? \end{vexercise} \paragraph{\hspace*{-0.3cm}} Our first tool for unpicking the structure of Galois groups is: \begin{theoremF} Let $F,K$ be fields, $\tau:F\rightarrow K$ an isomorphism and $\tau^*:F[x]\rightarrow K[x]$ the ring homomorphism given by $\tau^*:\sum a_ix^i\mapsto\sum\tau(a_i)x^i$. If $\alpha$ is algebraic over $F$, then $\tau$ extends to an isomorphism $\sigma:F(\alpha)\rightarrow K(\beta)$ with $\sigma(\alpha)=\beta$ if and only if $\beta$ is a root of $\tau^*f$, where $f$ is the minimum polynomial of $\alpha$ over $F$. \end{theoremF} The elements $\alpha$ and $\beta$ are assumed to lie in some extensions $F\subseteq E_1, K\subseteq E_2$; when we say that $\tau$ extends to $\sigma$ we mean that the restriction of $\sigma$ to $F$ is $\tau$. The theorem seems technical, but has an intuitive meaning. Suppose we have $F=K$ and $\tau$ is the identity isomorphism, hence $\tau^*$ is also the identity. 
Then we have an extension $\sigma:F(\alpha)\rightarrow F(\beta)$ precisely when $\beta$ is a root of the minimum polynomial $f$ of $\alpha$ over $F$. We can say even more: if $\beta$ is an element of $F(\alpha)$, then $F(\beta)\subseteq F(\alpha)$; as an $F$-vector space $F(\beta)$ is $(\deg f)$-dimensional over $F$ as $\alpha$ and $\beta$ have the same minimum polynomial over $F$. As $F(\alpha)$ has the same dimension we get $F(\beta)=F(\alpha)$. Thus $\sigma$ is an isomorphism of $F(\alpha)\rightarrow F(\alpha)$ fixing $F$ pointwise, and so an element of the Galois group $\text{Gal}(F(\alpha)/F)$. Here is everything we know about Galois groups so far: \begin{corollary} \label{galois_groups:Extension_corollary} Let $\alpha$ be algebraic over $F$ with minimum polynomial $f$ over $F$. Then $\sigma:F(\alpha)\rightarrow F(\alpha)$ is an element of the Galois group $\text{Gal}(F(\alpha)/F)$ if and only if $\sigma(\alpha)=\beta$ where $\beta$ is a root of $f$ that is contained in $F(\alpha)$. \end{corollary} The elements of the Galois group thus permute those roots of the minimum polynomial that are contained in $F(\alpha)$. There are slick proofs of the Extension theorem; ours is not going to be one of them. But it does make things nice and concrete. The elements of $F(\alpha)$ are polynomials in $\alpha$, so the simplest way to define $\sigma$ is \begin{equation} \label{eq:3} \sigma:a_m\alpha^m+\cdots+a_1\alpha+a_0 \mapsto \tau(a_m)\,\beta^m+\cdots+\tau(a_1)\,\beta+\tau(a_0). \end{equation} The complication is that the same element will have many such polynomial expressions; for example $\overline{\omega}\in\Q(\omega)$ can be written both as $\omega^2$ and $-1-\omega$ (see Exercise \ref{galois_groups_exercise50} above) making it unclear if (\ref{eq:3}) is well-defined. The solution is that $\beta$ is a root of $\tau^*f$, the ``$K[x]$ version'' of $f$. \begin{proofext} For the ``only if'' part let $f=\sum a_ix^i$ with $f(\alpha)=0$. Then $\sum a_i\alpha^i=0\in E_1$ and $\sigma(0)=0\in E_2$ gives: $$ \sigma\biggl(\sum a_i\alpha^i\biggr)=0\Rightarrow \sum \sigma(a_i) \sigma(\alpha)^i=0\Rightarrow \sum \tau(a_i)\,\beta^i=0 \Rightarrow \tau^*f(\beta)=0. $$ (Compare this argument with the one that shows the roots of a polynomial with real coefficients occur in complex conjugate pairs). For the ``if'' part, we need to build an isomorphism $F(\alpha)\rightarrow K(\beta)$ with the desired properties. Define $\sigma$ by the formula (\ref{eq:3}); in particular $\sigma(a)=\tau(a)$ for all $a\in F$ and $\sigma(\alpha)=\beta$. \begin{description} \item[(i).]\emph{$\sigma$ is well-defined and 1-1}: Let $$ \sum a_i\alpha^i=\sum b_i\alpha^i, $$ be two expressions for some element of $F(\alpha)$. Then $\sum (a_i-b_i)\alpha^i=0$ and so $\alpha$ is a root of the polynomial $g=\sum (a_i-b_i)x^i\in F[x]$. As $f$ is the minimum polynomial of $\alpha$ over $F$ it is a factor of $g$, so that $g=fh$, hence $\tau^*(g)=\tau^*(fh)=\tau^*(f)\tau^*(h)$ and $\tau^*(f)$ is a factor of $\tau^*(g)$. As $\beta$ is a root of $\tau^*(f)$ it is a root of $\tau^*(g)$: $$ \tau^*(g)(\beta)=0\Leftrightarrow \sum \tau(a_i-b_i)\,\beta^i=0\Leftrightarrow \sum \tau(a_i)\,\beta^i= \sum \tau(b_i)\,\beta^i \Leftrightarrow \sigma\biggl(\sum a_i\alpha^i\biggr)=\sigma\biggl(\sum b_i\alpha^i\biggr). $$ The conclusion is that $\sum a_i\alpha^i=\sum b_i\alpha^i$ in $F(\alpha)$ if and only if $\sigma(\sum a_i\alpha^i)=\sigma(\sum b_i\alpha^i)$ in $K(\beta)$, hence $\sigma$ is both well-defined ($\Rightarrow$) and 1-1 ($\Leftarrow$). \item[(ii).] 
\emph{$\sigma$ is a homomorphism}: Let $$ \lambda=\sum a_i\alpha^i\text{ and }\mu=\sum b_i\alpha^i, $$ be two elements of $F(\alpha)$. Then \begin{equation*} \begin{split} \sigma(\lambda+\mu)=\sigma\biggl(\sum(a_i+b_i) \alpha^i\biggr)&=\sum\tau(a_i+b_i)\,\beta^i\\ &=\sum\tau(a_i)\,\beta^i +\sum\tau(b_i)\,\beta^i=\sigma(\lambda) +\sigma(\mu). \end{split} \end{equation*} Similarly, \begin{equation*} \begin{split} \sigma(\lambda\mu)=\sigma\biggl( \sum_{k}\biggl(\sum_{i+j=k}a_ib_j \biggr)\alpha^k\biggr)&= \sum_{k}\tau\biggl(\sum_{i+j=k}a_ib_j\biggr)\,\beta^k =\sum_{k}\biggl(\sum_{i+j=k}\tau(a_i)\tau(b_j)\biggr)\,\beta^k\\ &=\biggl(\sum\tau(a_i)\,\beta^i\biggr)\biggl(\sum\tau(b_j)\,\beta^j\biggr) =\sigma(\lambda)\sigma(\mu). \end{split} \end{equation*} \item[(iii).] \emph{$\sigma$ is onto}: $\sigma(F(\alpha))$ is contained in $K(\beta)$ by (\ref{eq:3}). On the other hand, any $b\in K$ is the image $b=\tau(a)$ of some $a\in F$, as $\tau$ is onto, and $\beta=\sigma(\alpha)$ by definition. Thus both $\beta$ and all of $K$ are in $\sigma(F(\alpha))$, hence $K(\beta)\subseteq \sigma(F(\alpha))$. \qed \end{description} \end{proofext} \paragraph{\hspace*{-0.3cm}} To compute the Galois group of the extension $\Q\subset\Q(\alpha)$, where $\alpha=\sqrt[3]{2}$, recall that any automorphism is completely determined by where it sends $\alpha$. And we are free to send $\alpha$ to those roots of its minimum polynomial over $\Q$ that are also contained in $\Q(\alpha)$. The minimum polynomial is $x^3-2$, which has roots $\alpha,\alpha\omega$ and $\alpha\omega^2$ where $$ \omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}. $$ But the roots $\alpha\omega$ and $\alpha\omega^2$ are not contained in $\Q(\alpha)$ as this field contains only real numbers -- whereas $\alpha\omega$ and $\alpha\omega^2$ are clearly non-real. Thus the only possible image for $\alpha$ under an automorphism is $\alpha$ itself, and $\text{Gal}(\Q(\alpha)/\Q)$ is the trivial group $\{\text{id}\}$. \paragraph{\hspace*{-0.3cm}} Returning to the example immediately before the Extension theorem, any automorphism of $\Q(\omega)$ that fixes $\Q$ pointwise is determined by where it sends $\omega$, and this must be to a root of the minimum polynomial over $\Q$ of $\omega$. As this polynomial is $1+x+x^2$ with roots $\omega$ and $\omega^2$, we have automorphisms sending $\omega$ to itself or to $\omega^2=\overline{\omega}$, ie: $$ \text{Gal}(\Q(\omega)/\Q)=\{\text{id},\sigma:z\mapsto\overline{z}\}. $$ In particular the figure below left is an automorphism but below right is not: $$ \begin{pspicture}(0,0)(13,3.5) \rput(3,0.95){ \uput[0]{270}(-.1,2){\pstriangle[fillstyle=solid,fillcolor=lightgray](1,0)(2,1.73)} \rput(-.1,0){\psbezier[linecolor=red]{<->}(-.4,0.1)(-1,.5)(-1,1.5)(-.4,1.9)} \psbezier[linecolor=red]{->}(2.1,0.8)(3.1,0)(3.1,2)(2.1,1.2) \rput(2,1){$1$} \rput(-0.2,-0.2){$\omega^2$} \rput(-0.20,2.2){$\omega$} } \rput(8,0.95){ \uput[0]{270}(-.1,2){\pstriangle[fillstyle=solid,fillcolor=lightgray](1,0)(2,1.73)} \rput{-120}(0,2){\rput(-.1,0){\psbezier[linecolor=red]{<->}(-.4,0.1)(-1,.5)(-1,1.5)(-.4,1.9)}} \rput{-120}(-.2,1.9){\psbezier[linecolor=red]{->}(2.1,0.8)(3.1,0)(3.1,2)(2.1,1.2)} \rput(2,1){$1$} \rput(-0.2,-0.2){$\omega^2$} \rput(-0.20,2.2){$\omega$} } \end{pspicture} $$ \paragraph{\hspace*{-0.3cm}} The ``only if'' part of the Extension Theorem is worth stating separately: \begin{corollary} Let $F\subseteq E$ be an extension and $g\in F[x]$ a polynomial having root $a\in E$.
Then for any $\sigma\in\text{Gal}(E/F)$, the image $\sigma(a)$ is also a root of $g$. \end{corollary} An immediate and important consequence is: \begin{corollary} If $F\subseteq E$ is a finite extension then the Galois group $\text{Gal}(E/F)$ is finite. \end{corollary} \begin{proof} If $\{\alpha_1,\alpha_2,\ldots,\alpha_k\}$ is a basis for $E$ over $F$, then $E=F(\alpha_1,\alpha_2,\ldots,\alpha_k)$, with $\alpha_i$ algebraic over $F$ (by Proposition \ref{finite.givesalgebraic}) having minimum polynomial $f_i\in F[x]$. If $\sigma\in\text{Gal}(E/F)$ then $\sigma$ is completely determined by the finitely many $\sigma(\alpha_i)$, which in turn must be one of the finitely many roots of $f_i$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} Let $p$ be a prime and $$ \omega=\cos\frac{2\pi}{p}+\text{i}\sin\frac{2\pi}{p}, $$ be a primitive $p$-th root of $1$. By Corollary \ref{galois_groups:Extension_corollary}, $\sigma\in\text{Gal}(\Q(\omega)/\Q)$ precisely when it sends $\omega$ to a root, contained in $\Q(\omega)$, of its minimum polynomial over $\Q$. The minimum polynomial is $$ \Phi_p=1+x+x^2+\cdots+x^{p-1}, $$ (Exercise \ref{ex_lect3.2}) with roots $\omega,\omega^2,\ldots,\omega^{p-1}$. All these roots are contained in $\Q(\omega)$, and so we are free to send $\omega$ to any one of them. The Galois group thus has order $p-1$, with elements $$ \{\sigma_1=\text{id}:\omega\mapsto\omega,\sigma_2:\omega\mapsto\omega^2,\ldots,\sigma_{p-1}:\omega\mapsto\omega^{p-1}\}. $$ If $\sigma(\omega)=\omega^k$ then $\sigma^i(\omega)=\omega^{k^i}$ (keeping $\omega^p=1$ in mind). We saw in Section \ref{fields3} that the multiplicative group of the finite field $\ams{F}_p$ is cyclic: there is a $k$ with $1<k<p$ such that the powers $k^i$ of $k$ exhaust all of the non-zero elements of $\ams{F}_p$, ie: the powers $k^i$ run through $\{1,2,\ldots,p-1\}$ mod $p$ (or $k$ generates $\ams{F}_p^*$). Putting the previous two paragraphs together, let $\sigma\in\text{Gal}(\Q(\omega)/\Q)$ be such that $\sigma(\omega)=\omega^k$ for $k$ a generator of $\ams{F}_p^*$. Then the elements $$ \{\sigma(\omega),\sigma^2(\omega),\ldots,\sigma^{p-1}(\omega)\}=\{\omega,\omega^2,\ldots,\omega^{p-1}\} $$ and so the powers $\sigma,\sigma^2,\ldots,\sigma^{p-1}$ exhaust the Galois group. $\text{Gal}(\Q(\omega)/\Q)$ is thus a cyclic group of order $p-1$. \begin{figure} \caption{The Galois group $\text{Gal}(\Q(\omega)/\Q)$ is cyclic for $\omega$ a primitive $p$-th root of $1$.} \label{fig:galois_groups:cyclotomic} \end{figure} \paragraph{\hspace*{-0.3cm}} The Extension theorem gives the existence of automorphisms. We can also say how many there are: \begin{theorem}\label{galois_groups:number_of_extensions} Let $\tau:F\rightarrow K$ be an isomorphism and $F\subseteq E_1$ and $K\subseteq E_2$ be extensions with $E_1$ a splitting field of some polynomial $f$ over $F$ and $E_2$ a splitting field of $\tau^*f$ over $K$. Assume also that the roots of $\tau^*f$ in $E_2$ are distinct. Then the number of extensions of $\tau$ to an isomorphism $\sigma:E_1\rightarrow E_2$ is equal to the degree of the extension $K\subseteq E_2$. \end{theorem} \begin{proof} \parshape=5 0pt\hsize 0pt\hsize 0pt.8\hsize 0pt.8\hsize 0pt.8\hsize Let $\alpha$ be a root of $f$ and $F\subseteq F(\alpha)\subseteq E_1$. By the Extension Theorem, $\tau$ extends to an isomorphism $\sigma:F(\alpha)\rightarrow K(\beta)$ if and only if $\beta$ is a root in $E_2$ of $\tau^*(p)$, where $p$ is the minimum polynomial of $\alpha$ over $F$.
In this case the minimum polynomial $q$ of $\beta$ over $K$ divides $\tau^*p$; moreover, $\deg\tau^*p=\deg p=[F(\alpha):F]=[K(\beta):K]=\deg q$. Thus $\tau^*p=q$ \emph{is\/} the minimum polynomial of $\beta$ over $K$. \vadjust{ \smash{\lower 72pt \llap{ \begin{pspicture}(0,0)(2,3) \rput(.2,.2){$F$} \rput(1.8,.2){$K$} \rput(.2,1.8){$F(\alpha)$} \rput(1.8,1.8){$K(\beta)$} \rput(.2,2.9){$E_1$} \rput(1.8,2.9){$E_2$} \psline[linewidth=.1mm]{->}(.4,.2)(1.6,.2) \psline[linewidth=.1mm]{->}(.7,1.8)(1.3,1.8) \psline[linewidth=.1mm,linecolor=red]{->}(.4,2.9)(1.6,2.9) \psline[linewidth=.1mm]{->}(.2,.4)(.2,1.6) \psline[linewidth=.1mm]{->}(1.8,.4)(1.8,1.6) \psline[linewidth=.1mm]{->}(.2,2.05)(.2,2.7) \psline[linewidth=.1mm]{->}(1.8,2.05)(1.8,2.7) \rput(1,.4){$\tau$} \rput(1,3.1){\red ?} \rput(1,2){$\sigma$} \end{pspicture} }}}\ignorespaces \parshape=2 0pt.8\hsize 0pt.8\hsize As $\alpha$ is a root of $f$ we have $f=ph$ in $F[x]$, so $\tau^*f=(\tau^*p)(\tau^*h)$ in $K[x]$. As the roots of $\tau^*f$ are distinct, those of $\tau^*p$ must be too. \parshape=2 0pt.8\hsize 0pt.8\hsize The number of possible $\sigma$ then, which is equal to the number of \emph{distinct\/} roots of $\tau^*p$ in $E_2$, must in fact be equal to the degree of $\tau^*p$ (note that $\tau^*p$ splits in $E_2$, as it divides $\tau^*f$). This in turn equals the degree $[K(\beta):K]$. \parshape=3 0pt.8\hsize 0pt.8\hsize 0pt\hsize We now proceed by induction on the degree $[E_2:K]$. If $[E_2:K]=1$ then $E_2=K$. An isomorphism $\sigma:E_1\rightarrow E_2$ extending $\tau$ gives $[E_1:F]=1$, hence $E_1=F$. There can then be only one such $\sigma$, namely $\tau$ itself. If $[E_2:K]>1$ then the root $\alpha$ of $f$ may be chosen to lie outside $F$: for otherwise $E_1=F$, so that $\tau^*f$ splits in $K$ and $E_2=K$. The minimum polynomial $p$ of such an $\alpha$ has degree $>1$, and so $[K(\beta):K]=\deg\tau^*p>1$. By the tower law, $[E_2:K]=[E_2:K(\beta)][K(\beta):K]$ where $[E_2:K(\beta)]<[E_2:K]$ since $[K(\beta):K]>1$. By induction, any isomorphism $\sigma:F(\alpha)\rightarrow K(\beta)$ will thus have $$ [E_2:K(\beta)]=\frac{[E_2:K]}{[K(\beta):K]}, $$ extensions to an isomorphism $E_1\rightarrow E_2$, noting that $E_1$ is a splitting field of $f$ over $F(\alpha)$ and $E_2$ a splitting field of $\tau^*f$ over $K(\beta)$. Starting from the bottom of the diagram, $\tau$ extends to $[K(\beta):K]$ possible $\sigma$'s, and extending each in turn gives, $$ [K(\beta):K]\frac{[E_2:K]}{[K(\beta):K]}=[E_2:K], $$ extensions in total. \qed \end{proof} The condition that the roots of $\tau^*f$ are distinct is not essential to the theory, but makes the accounting easier: we can relate the number of automorphisms to the degrees of extensions by passing through the halfway house of the roots of polynomials. \paragraph{\hspace*{-0.3cm}} Theorem D gives a connection between minimum polynomials and the degrees of field extensions, while Theorem \ref{galois_groups:number_of_extensions} connects the degrees of extensions with the number of automorphisms of a field. Bolting these together: \begin{corollaryG} Let $f$ be a polynomial over $F$ having distinct roots and let $E$ be its splitting field over $F$. Then \begin{equation} \label{eq:4} |\text{Gal}(E/F)|=[E:F]. \end{equation} \end{corollaryG} The polynomial $f$ is over the \emph{field\/} $F$, or is contained in the {\em ring\/} $F[x]$, with $E$ a {\em vector space\/} over $F$ and $\text{Gal}(E/F)$ its {\em group\/} of automorphisms. The formula (\ref{eq:4}) thus contains the main objects of undergraduate algebra. \begin{proof} By Theorem \ref{galois_groups:number_of_extensions} there are $[E:F]$ extensions of the identity automorphism $F\rightarrow F$ to an automorphism of $E$. Conversely any automorphism of $E$ fixing $F$ pointwise is an extension of the identity automorphism on $F$, so we obtain the whole Galois group this way. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} That $E$ be a splitting field is important in Corollary G.
Consider the extension $\Q\subseteq \Q(\sqrt[3]{2})$, where $\Q(\sqrt[3]{2})$ is \emph{not\/} the splitting field over $\Q$ of $x^3-2$, or indeed of any polynomial. $\sigma$ is an element of the Galois group $\text{Gal}(\Q(\sqrt[3]{2})/\Q)$ precisely when it sends $\sqrt[3]{2}$ to a root, contained in $\Q(\sqrt[3]{2})$, of its minimum polynomial over $\Q$. These roots are $\sqrt[3]{2}$ itself, with the other two non-real, whereas $\Q(\sqrt[3]{2})$ is completely contained in $\R$. The only possibility for $\sigma$ is that it sends $\sqrt[3]{2}$ to itself, ie: $\sigma=\text{id}$. The Galois group thus has order $1$, but the degree of the extension is $3$. \paragraph{\hspace*{-0.3cm}} The following proposition returns to the kind of examples we saw in Section \ref{lect1}: \begin{proposition} \label{galois_groups:section0_examples} Let $E$ be the splitting field over $F$ of a polynomial with distinct roots. Suppose also that $E=F(\alpha_1,\ldots,\alpha_m)$ for some $\alpha_1,\ldots,\alpha_m\in E$ such that \begin{equation} \label{eq:5} [E:F]=\prod_i [F(\alpha_i):F]. \end{equation} Then there is a $\sigma\in\text{Gal}(E/F)$ with $\sigma(\alpha_i)=\beta_i$ if and only if $\beta_i$ is a root of the minimum polynomial of $\alpha_i$ over $F$. \end{proposition} \begin{proof} Any $\sigma$ in the Galois group must send each $\alpha_i$ to a root of the minimum polynomial $f_i$ of $\alpha_i$ over $F$. Conversely, $\sigma$ is determined by where it sends the $\alpha_i$'s, and there are at most $\deg(f_i)$ possibilities for these images, namely the $\deg(f_i)$ roots of $f_i$. As $$ |\text{Gal}(E/F)|=[E:F]=\prod_i [F(\alpha_i):F]=\prod_i \deg(f_i), $$ all these possibilities must arise. For any $\beta_i$ a root of $f_i$ there must then be a $\sigma\in\text{Gal}(E/F)$ with $\sigma(\alpha_i)=\beta_i$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} In Section \ref{lect1} we computed, in an ad-hoc way, the automorphisms of $\Q(\alpha,\omega)$ where $$ \alpha=\sqrt[3]{2}\in\R\text{ and }\omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}. $$ The minimum polynomial of $\alpha$ over $\Q$ is $x^3-2$ with roots $\alpha,\alpha\omega,\alpha\omega^2$ and the minimum polynomial of $\omega$ over $\Q$ -- and over $\Q(\alpha)$ -- is $1+x+x^2$ with roots $\omega,\omega^2$. By the Tower law: $$ [\Q(\alpha,\omega):\Q]=[\Q(\alpha,\omega):\Q(\alpha)][\Q(\alpha):\Q]=[\Q(\omega):\Q][\Q(\alpha):\Q]. $$ By Proposition \ref{galois_groups:section0_examples} we can send $\alpha$ to any of $\alpha,\alpha\omega,\alpha\omega^2$ and $\omega$ to any of $\omega,\omega^2$, and get an automorphism. Following this through with the vertices of the triangle gives three automorphisms with $\omega$ mapped to itself -- the top three in Figure \ref{fig:galoiscorrespondence1:schematic} -- and another three with $\omega$ mapped to $\omega^2$ -- as in the bottom three. \begin{figure}\label{fig:galoiscorrespondence1:schematic} \end{figure} \begin{vexercise} \label{galois_groups_exercise100} Let $\alpha=\sqrt[5]{2}$ and $\omega=\cos(2\pi/5)+\text{i}\sin(2\pi/5)$, so that $\alpha^5=2$ and $\omega^5=1$. Let $\beta=\alpha+\omega$ and eliminate radicals by considering $(\beta-\omega)^5=2$ to find a polynomial of degree $20$ having $\beta$ as a root. Show that this polynomial is irreducible over $\Q$ and hence that $$ [\Q(\alpha+\omega):\Q]=[\Q(\alpha):\Q][\Q(\omega):\Q]. $$ Show that $\Q(\alpha+\omega)=\Q(\alpha,\omega)$.
\end{vexercise} \paragraph{\hspace*{-0.3cm}} For $\alpha=\sqrt[5]{2}$ and $\omega$ given by the expression below, the extension $\Q\subset \Q(\alpha,\omega)$ satisfies (\ref{eq:5}) by Exercise \ref{galois_groups_exercise100}. An automorphism is thus free to send $\alpha$ to any root of $x^5-2$ and $\omega$ to any root of $1+x+x^2+x^3+x^4$. This gives twenty elements of the Galois group in total; in particular there is an automorphism sending $\alpha$ to itself and $\omega$ to $\omega^3$: $$ \begin{pspicture}(14,3) \rput(2.5,.2){ \pspolygon[fillstyle=solid,fillcolor=lightgray](0,.45)(1.53,0)(2.5,1.3)(1.53,2.61)(0,2.15) \psbezier[linecolor=red]{->}(2.85,1.1)(3.85,0.3)(3.85,2.3)(2.85,1.5) \psline[linecolor=red]{->}(1.45,2.5)(0.05,.5) \psline[linecolor=red]{->}(1.45,.1)(.05,2.05) \pscurve[linecolor=red]{->}(.15,.5)(1,.9)(1.5,.1) \pscurve[linecolor=red]{->}(.15,2.1)(.95,1.73)(1.55,2.5) \rput(2.7,1.3){$\alpha$}\rput(1.5,2.8){$\alpha\omega$} \rput(-.4,.6){$\alpha\omega^3$} \rput(-.4,2.15){$\alpha\omega^2$} \rput(1.5,-.2){$\alpha\omega^4$} } \rput(10,2){$\alpha=\sqrt[5]{2}$} \rput(10,1){${\displaystyle \omega=\frac{\sqrt{5}-1}{4}+\frac{\sqrt{2}\sqrt{5+\sqrt{5}}}{4}\text{i}}$} \end{pspicture} $$ \paragraph{\hspace*{-0.3cm}} We can get closer to the spirit of Section \ref{lect1} by defining: \begin{definition}[Galois group of a polynomial] The Galois group over $F$ of the polynomial $f\in F[x]$ is the group $\text{Gal}(E/F)$ where $E$ is the splitting field of $f$ over $F$. \end{definition} \begin{proposition}\label{galoisgroups.subgroupsymmetricgroup} The Galois group of a polynomial of degree $d$ is isomorphic to a subgroup of the symmetric group $S_{\kern-.3mm d}$. \end{proposition} \begin{proof} Let $\{\alpha_1,\ldots,\alpha_d\}$ be the roots of $f$ and write $\{\alpha_1,\ldots,\alpha_d\}=\{\beta_1,\ldots,\beta_k\}$ where the $\beta$'s are distinct (and $k\leq d$). An element $\sigma\in\text{Gal}(E/F)$, for $E=F(\alpha_1,\ldots,\alpha_d)=F(\beta_1,\ldots,\beta_k)$, is determined by where it sends the $\beta_i$'s, and each $\sigma(\beta_i)$ must be a root of (any) polynomial over $F$ having $\beta_i$ as a root. But $f$ is such a polynomial, hence the effect of $\sigma$ on the $\beta_i$ is to permute them among themselves ($\sigma$ is a bijection). Define a map $\text{Gal}(E/F)\rightarrow S_{\kern-.3mm k}$ that sends $\sigma$ to the permutation of the $\beta_i$ that it realizes. As the group laws in both the Galois group and the symmetric group are composition, this map is a homomorphism, and is injective as each $\sigma$ is determined by its effect on the roots. Thus the Galois group is isomorphic to a subgroup of $S_{\kern-.3mm k}$, which in turn is isomorphic to a subgroup of $S_{\kern-.3mm d}$ by taking those permutations of $\{1,\ldots,d\}$ that permute only the first $k$ numbers. \qed \end{proof} \begin{figure} \caption{The possible Galois groups over $\Q$ of $(x-\alpha)(x-\beta)(x-\gamma)$: the subgroup lattice of the group of permutations of $\{\alpha,\beta,\gamma\}$ (\emph{aka\/} the symmetric group $S_{\kern-.3mm 3}$) \emph{(left)} and example polynomials having Galois group these subgroups \emph{(right)}.} \label{fig:groups2:subgroup_lattice_S_3} \end{figure} \paragraph{\hspace*{-0.3cm}} \label{groups2:galois_group_quadratic} Let $f=(x-\alpha)(x-\beta)$ be a quadratic polynomial in $\Q[x]$ with distinct roots $\alpha\not=\beta\in\ams{C}$. Then $f$ has splitting field $\Q(\alpha)$ over $\Q$, since $\alpha+\beta$ and $\alpha\beta$ are rational numbers.
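(Indeed $\beta=(\alpha+\beta)-\alpha\in\Q(\alpha)$, so both roots of $f$ already lie in $\Q(\alpha)$.)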
If $\alpha\in\Q$ (hence $\beta\in\Q$) then the Galois group of $f$ over $\Q$ is the trivial group $\{\text{id}\}$. Otherwise both $\alpha,\beta\not\in\Q$ and $f$, being irreducible over $\Q$, is the minimum polynomial of $\alpha$ over $\Q$. There is an element of the Galois group sending $\alpha$ to $\beta$, and this must be the permutation $(\alpha,\beta)$, as it is the only element of $S_{\kern-.3mm 2}$ that does the job. The Galois group is thus $\{\text{id},(\alpha,\beta)\}$ when $\alpha\not\in\Q$. \paragraph{\hspace*{-0.3cm}} Similarly, let $f=(x-\alpha)(x-\beta)(x-\gamma)$ be a cubic in $\Q[x]$ with distinct roots $\alpha,\beta,\gamma\in\ams{C}$. By Proposition \ref{galoisgroups.subgroupsymmetricgroup}, the Galois group of $f$ is a subgroup of the symmetric group $S_{\kern-.3mm 3}$, the subgroup lattice of which is shown in Figure \ref{fig:groups2:subgroup_lattice_S_3}. (You can come up with this picture either by brute force, or by taking the symmetry group of the equilateral triangle in Figure \ref{fig:groups1:subgroup_lattices}, labelling the vertices of the triangle $\alpha,\beta,\gamma$, and taking the permutations of these effected by the symmetries). We can find polynomials having each of these subgroups as Galois group. If $\alpha,\beta,\gamma\in\Q$ then $f$ has splitting field $\Q$, and the Galois group is $\{\text{id}\}$. If $\alpha,\beta\in\Q$ then, as $\alpha+\beta+\gamma\in\Q$, we get $\gamma\in\Q$ too. The next case then is $\alpha\in\Q$ and $\beta,\gamma\not\in\Q$, so that $(x-\beta)(x-\gamma)$ is a rational polynomial. As in \ref{groups2:galois_group_quadratic}, the splitting field of $f$ is $\Q(\beta)$ and the Galois group is $\{\text{id},(\beta,\gamma)\}$. The other two subgroups of order two in Figure \ref{fig:groups2:subgroup_lattice_S_3} come about in a similar way. That leaves the case $\alpha,\beta,\gamma\not\in\Q$, where the key player is the \emph{discriminant\/}: $$ D=(\alpha-\beta)^2(\alpha-\gamma)^2(\beta-\gamma)^2 $$ or in fact, its square root. The polynomial $f$ is irreducible over $\Q$, hence the minimum polynomial over $\Q$ of $\alpha$. As the roots $\alpha,\beta,\gamma$ are distinct there are distinct elements of the Galois group sending $\alpha$ to each of $\alpha,\beta$ and $\gamma$, and so the Galois group has order $3$ or $6$. Suppose that $\kern-2pt\sqrt{D}\in\Q$. Then $\kern-2pt\sqrt{D}$, like all rational numbers, is fixed by the elements of the Galois group. The permutation $(\alpha,\beta)$ however sends $\kern-2pt\sqrt{D}\mapsto-\kern-2pt\sqrt{D}$, and so do $(\alpha,\gamma)$ and $(\beta,\gamma)$ (note that $\kern-2pt\sqrt{D}\not=0$ as the roots are distinct). None of these can therefore be in the Galois group, which is thus $\{\text{id},(\alpha,\beta,\gamma),(\alpha,\gamma,\beta)\}$. We illustrate the final case $\kern-2pt\sqrt{D}\not\in\Q$ by example. Suppose that $\alpha\in\R\setminus\Q$ and $\beta,\gamma\in\ams{C}\setminus\R$ -- in which case $\beta,\gamma$ are complex conjugates. Then complex conjugation is a non-trivial element of the Galois group (see Exercise \ref{exercise:groups2:conjugation}) having effect the permutation $(\beta,\gamma)$. As the Galois group has order $3$ or $6$ and now contains an element of order $2$, it must be all of $S_{\kern-.3mm 3}$. (Incidentally, this and the previous paragraph show that if $\kern-2pt\sqrt{D}\in\Q$ then $\alpha,\beta,\gamma\in\R$.) \paragraph{\hspace*{-0.3cm}} Finding a rational polynomial of degree $d$ that has Galois group a given subgroup of $S_{\kern-.3mm d}$ is possible for small values of $d$ like the cases $d=2,3$ above.
For general $d$ it is an open problem -- called the \emph{Inverse Galois problem\/}. \subsection*{Further Exercises for Section \thesection} \begin{vexercise}\label{ex_lect9.1} Show that the following Galois groups have the given orders: \begin{enumerate} \item $|\text{Gal}(\Q(\kern-2pt\sqrt{2})/\Q)|=2$. \item $|\text{Gal}(\Q(\sqrt[3]{2})/\Q)|=1$. \item $|\text{Gal}(\Q(-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i})/\Q)|=2$. \item $|\text{Gal}(\Q(\sqrt[3]{2},-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i})/\Q)|=6$. \end{enumerate} \end{vexercise} \begin{vexercise}\label{ex_lect9.2} Find the orders of the Galois groups $\text{Gal}(L/\Q)$ where $L$ is the splitting field of the polynomial: $$ 1. \,\, x-2\qquad 2. \,\,x^2-2\qquad 3. \,\,x^5-2\qquad $$ \end{vexercise} \begin{vexercise}\label{ex_lect9.21} Find the orders of the Galois groups $\text{Gal}(L/\Q)$ where $L$ is the splitting field of the polynomial: $$ 1. \,\, 1+x+x^2+x^3+x^4\qquad 2. \,\, 1+x^2+x^4\qquad $$ (\emph{hint} for the second one: $(x^2-1)(1+x^2+x^4)=x^6-1$). \end{vexercise} \begin{vexercise}\label{ex_lect9.3} Let $p>2$ be a prime number. Show that \begin{enumerate} \item ${\displaystyle |\text{Gal}(\Q\biggl(\cos\frac{2\pi}{p}+\text{i}\sin\frac{2\pi}{p}\biggr)/\Q)|=p-1}$. \item $|\text{Gal}(L/\Q)|=p(p-1)$, where $L$ is the splitting field of the polynomial $x^p-2$. Compare the answer when $p=3$ and $5$ to Section \ref{lect1}. \end{enumerate} \end{vexercise} \section{Vector Spaces II: Solving Equations} \label{linear.algebra2} This short section contains some auxiliary technical results on the solutions of homogeneous linear equations that are needed for the proof of the Galois correspondence in Section \ref{galois.correspondence}. \paragraph{\hspace*{-0.3cm}} Let $V$ be an $n$-dimensional vector space over the field $F$ with fixed basis $\{\alpha_1,\alpha_2,\ldots,\alpha_n\}$. A {\em homogeneous linear equation\/} over $F$ is an equation of the form, $$ a_1x_1+a_2x_2+\cdots+ a_nx_n=0, $$ with the $a_i$ in $F$. A vector $u=\sum_{i=1}^{n} t_i\alpha_i\in V$ is a solution when $$ a_1t_1+a_2t_2+\cdots+ a_nt_n=0. $$ A system of homogeneous linear equations, \begin{equation*} \begin{split} a_{11}x_{1}+a_{12}x_2+\cdots+ a_{1n}x_n&=0,\\ a_{21}x_1+a_{22}x_2+\cdots+ a_{2n}x_n&=0,\\ &\vdots\\ a_{k1}x_1+a_{k2}x_2+\cdots+ a_{kn}x_n&=0, \end{split} \end{equation*} is {\em independent\/} over $F$ when the vectors, $$ v_1=\sum a_{1j}\alpha_j,v_2=\sum a_{2j}\alpha_j,\ldots,v_k=\sum a_{kj}\alpha_j, $$ are independent. In other words, if $A$ is the matrix of coefficients of the system of equations, then the rows of $A$ are independent. Here is the key property of independent systems of equations: \begin{proposition} \label{vectorspaces2:independent_systems} Let $S$ be an independent system of equations over $F$ and let $S'\subset S$ be a proper subset of the equations. Then the space of solutions in $V$ to $S$ is a proper subspace of the space of solutions in $V$ to $S'$. \end{proposition} \begin{vexercise} Prove Proposition \ref{vectorspaces2:independent_systems}. \end{vexercise} \begin{vexercise} Let $F\subseteq E$ be an extension of fields and $B$ a finite set. Let $V_F$ be the $F$-vector space with basis $B$, ie: the elements of $V_F$ are the formal sums $$ \sum \lambda_i b_i, $$ with the $\lambda_i\in F$ and the $b_i\in B$. Formal sums are added together and multiplied by scalars in the obvious way. Similarly let $V_E$ be the $E$-vector space with basis $B$, and identify $V_F$ with a subset (it is not a subspace) of $V_E$ in the obvious way.
Now let $S'\subset S$ be independent systems of equations \emph{over $E$\/}. Show that the space of solutions in $V_F$ to $S$ is a proper subspace of the space of solutions in $V_F$ to $S'$. \end{vexercise} \begin{vexercise}\label{vandermonde} Let $F$ be a field and $\alpha_1,\ldots,\alpha_{n+1}\in F$ distinct elements. Show that the matrix $$ \left(\begin{array}{cccc} \alpha_1^{n}&\cdots&\alpha_1&1\\ \vdots&&\vdots&\vdots\\ \alpha_{n+1}^{n}&\cdots&\alpha_{n+1}&1\\ \end{array}\right) $$ has non-zero determinant (\emph{hint\/}: suppose otherwise, and find a polynomial of degree $n$ with $n+1$ distinct roots in $F$, contradicting Theorem \ref{degree.number.of.roots}). \end{vexercise} \begin{lemma} \label{vectorspaces2:polynomials_same} Let $F$ be a field and $f,g\in F[x]$ polynomials of degree at most $n$ over $F$. Suppose that there exist distinct $\alpha_1,\ldots,\alpha_{n+1}\in F$ such that $f(\alpha_i)=g(\alpha_i)$ for all $i$. Then $f=g$. \end{lemma} \begin{proof} Letting $f(x)=\sum a_i x^i\mbox{ and }g(x)=\sum b_i x^i$ gives $n+1$ expressions $\sum a_i \alpha_j^i=\sum b_i \alpha_j^i$, hence the system of equations \begin{equation} \label{eq:6} \sum \alpha_j^i\, y_i=0, \end{equation} where $y_i=a_i-b_i$. The matrix of coefficients of these $n+1$ equations is $$ \left(\begin{array}{cccc} \alpha_1^{n}&\cdots&\alpha_1&1\\ \vdots&&\vdots&\vdots\\ \alpha_{n+1}^{n}&\cdots&\alpha_{n+1}&1\\ \end{array}\right) $$ with non-zero determinant by Exercise \ref{vandermonde}. The system (\ref{eq:6}) thus has the unique solution $y_i=0$ for all $i$, so that $f=g$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} Here is the main result of the section. \begin{theorem} \label{theorem:linearalgebra2} Let $F\subseteq E=F(\alpha)$ be a simple extension of fields with the minimum polynomial of $\alpha$ over $F$ having distinct roots. Let $\{\sigma_1,\sigma_2,\ldots,\sigma_k\}$ be distinct non-identity elements of the Galois group $\text{Gal}(E/F)$. Then $$ \sigma_1(x)=\sigma_2(x)=\cdots=\sigma_k(x)=x, $$ is a system of independent linear equations over $E$. \end{theorem} \begin{proof} By Theorem D we have a basis $\{1,\alpha,\alpha^2,\ldots,\alpha^d\}$ for $E$ over $F$ where the minimum polynomial $f$ of $\alpha$ over $F$ has degree $d+1$. Any $x\in E$ thus has the form $$ x=x_0+x_1\alpha+x_2\alpha^2+\cdots+x_d\alpha^d, $$ for some $x_i\in F$. By the Extension Theorem, the elements of the Galois group send $\alpha$ to roots of $f$. Suppose these roots are $\{\alpha=\alpha_0,\alpha_1,\ldots,\alpha_{d}\}$ where $\sigma_i(\alpha)=\alpha_i$. Then $x$ satisfies $\sigma_i(x)=x$ if and only if $$ (\alpha_0-\alpha_i)x_1+(\alpha_0^2-\alpha_i^2)x_2+\cdots+(\alpha_0^d-\alpha_i^d)x_d=0. $$ Thus we have a system of equations $Ax=0$ where the matrix of coefficients $A$ is made up of rows from the larger $d\times d$ matrix $\widehat{A}$ given by, $$ \widehat{A}= \left(\begin{array}{cccc} \alpha_0-\alpha_1&\alpha_0^2-\alpha_1^2&\cdots&\alpha_0^d-\alpha_1^d\\ \alpha_0-\alpha_2 \vrule width 0mm height 5mm depth 0mm \vrule width 2mm height 0mm depth 0mm &\alpha_0^2-\alpha_2^2 \vrule width 2mm height 0mm depth 0mm &\cdots \vrule width 2mm height 0mm depth 0mm &\alpha_0^d-\alpha_2^d\\ \vdots&\vdots&&\vdots\\ \alpha_0-\alpha_d&\alpha_0^2-\alpha_d^2&\cdots&\alpha_0^d-\alpha_d^d\\ \end{array}\right) $$ Let $\widehat{A}b=0$ for some vector $b=(b_1,\ldots,b_d)\in E^d$, so that $$ b_1\alpha_0+b_2\alpha_0^2+\cdots+b_d\alpha_0^d=b_1\alpha_i+b_2\alpha_i^2+\cdots+b_d\alpha_i^d, $$ for each $1\leq i\leq d$.
Thus if $g=b_1x+b_2x^2+\cdots+b_dx^d$, then we have $g(\alpha_0)=g(\alpha_1)=g(\alpha_2)=\cdots=g(\alpha_d)=a$, say. The polynomial $g-a$, of degree at most $d$, thus agrees with the zero polynomial at $d+1$ distinct values, hence by Lemma \ref{vectorspaces2:polynomials_same} must be the zero polynomial, and so all the $b_i$ are zero. The columns of $\widehat{A}$ are thus independent, hence so are the rows, and thus also the rows of $A$. \qed \end{proof} \section{The Fundamental Theorem of Galois Theory} \label{galois.correspondence} According to Theorem E, a $z\in\ams{C}$ is constructible when there is a sequence of extensions: $$ \Q=K_0\subseteq K_1\subseteq K_2\subseteq\cdots\subseteq K_n, $$ with each $[K_{i+1}:K_i]\leq 2$ and $\Q(z)\subset K_n$. To show that $z$ can actually {\em be\/} constructed, we need to find these $K_i$, and so we need to understand the fields sandwiched between $\Q$ and $\Q(z)$. In this section we prove the theorem that gives us that knowledge. \paragraph{\hspace*{-0.3cm}} We will need a picture of the fields sandwiched in an extension, analogous to the picture of the subgroups of a group in Section \ref{groups.stuff}. \begin{definition}[intermediate fields and their lattice] Let $F\subseteq E$ be an extension. Then $K$ is an intermediate field when $K$ is an extension of $F$ and $E$ is an extension of $K$: ie: $F\subseteq K\subseteq E$. The lattice of intermediate fields is a diagram depicting them and the inclusions between them. If $F\subseteq K_1\subseteq K_2\subseteq E$ they appear in the diagram like so: $$ \begin{pspicture}(0,0)(2,2) \rput(1,.3){$K_1$} \rput(1,1.7){$K_2$} \psline(1,.6)(1,1.4) \end{pspicture} $$ At the very base of the diagram is $F$ and at the apex is $E$. Denote the lattice by $\mathcal L(E/F)$. \end{definition} \paragraph{\hspace*{-0.3cm}} \label{galois:correspondence:condition} From now on we will work in the following situation: $F\subseteq E$ is a finite extension such that: \begin{description} \item[(\dag)] Every irreducible polynomial over $F$ that has a root in $E$ has all its roots in $E$, and these roots are distinct. \end{description} We saw in Exercise \ref{ex4.30} that if $F$ has characteristic $0$ then any irreducible polynomial over $F$ has distinct roots. This is also true if $F$ is a finite field, although we omit the proof here. \begin{galoiscorrespondenceA} Let $F\subseteq E$ be a finite extension satisfying $(\dag)$ and $G=\text{Gal}(E/F)$ its Galois group. Let $\mathcal L(G)$ and $\mathcal L(E/F)$ be the subgroup and intermediate field lattices. \begin{enumerate} \item For any subgroup $H$ of $G$, let $$ E^H=\{\lambda\in E\,|\,\sigma(\lambda)=\lambda\text{ for all }\sigma\in H\}. $$ Then $E^H$ is an intermediate field, called the fixed field of $H$. \item For any intermediate field $K$, the group $\text{Gal}(E/K)$ is a subgroup of $G$. \item The maps $\Psi:H\mapsto E^H$ and $\Phi:K\mapsto \text{Gal}(E/K)$ are mutual inverses, hence bijections $$ \Psi:\mathcal L(G)\rightleftarrows \mathcal L(E/F):\Phi $$ that reverse order: $$ H_1 \subset H_2 \stackrel{\Psi}{\longrightarrow} E^{H_2}\subset E^{H_1} \qquad K_2\subset K_1 \stackrel{\Phi}{\longrightarrow} \text{Gal}(E/K_1)\subset\text{Gal}(E/K_2) $$ \item The degree of the extension $E^H\subseteq E$ is equal to the order $|H|$ of the subgroup $H$. Equivalently, the degree of the extension $F\subseteq E^H$ is equal to the index $[G:H]$.
\end{enumerate} \end{galoiscorrespondenceA} The correspondence in one sentence: turning the lattice of subgroups upside down gives the lattice of intermediate fields, and vice-versa. See Figure \ref{fig:galoiscorrespondence1:schematic}. \begin{figure} \caption{Schematic of the Galois correspondence.} \label{fig:galoiscorrespondence1:schematic} \end{figure} The upside down nature of the correspondence may seem puzzling, but it is just the nature of imposing conditions. If $H$ is a subgroup, the fixed field $E^H$ is the set of solutions in $E$ to the system of equations \begin{equation} \label{eq:7} \sigma(x)=x, \text{ for }\sigma\in H. \end{equation} The more equations, the greater the number of conditions being imposed on $x$, hence the smaller the number of solutions. Thus, larger subgroups $H$ should correspond to smaller intermediate fields $E^H$ and vice-versa. That the correspondence is exact -- increasing the size of $H$ decreases the size of $E^H$ -- will follow from Section \ref{linear.algebra2} and the fact that the equations (\ref{eq:7}) are independent. \begin{proof} In the situation described in the Theorem the extension is of the form $F\subseteq F(\alpha)$ for some $\alpha\in E$ algebraic over $F$. The minimum polynomial $f$ of $\alpha$ over $F$ splits in $E$ by $(\dag)$. On the other hand any field containing the roots of $f$ contains $F(\alpha)=E$. Thus $E$ is the splitting field of $f$. \begin{enumerate} \item \emph{$E^H$ is an intermediate field:} we have $E^H\subset E$ by definition, and $F\subset E^H$ as every element of $G$ -- so in particular every element of $H$ -- fixes $F$. If $\lambda,\mu\in E^H$ then $\sigma(\lambda+\mu)=\sigma(\lambda)+\sigma(\mu)=\lambda+\mu$, so that $\lambda+\mu\in E^H$, and similarly $\lambda\mu,1/\lambda\in E^H$. \item \emph{$\text{Gal}(E/K)$ is a subgroup:} if an automorphism of $E$ fixes the intermediate field $K$ pointwise, then it fixes the field $F$ pointwise, and thus $\text{Gal}(E/K)\subset \text{Gal}(E/F)$. If $\sigma,\tau$ are automorphisms fixing $K$ then so is $\sigma\tau^{-1}$. We thus have a subgroup. \item \emph{$\Phi$ and $\Psi$ reverse order:} if $\lambda$ is fixed by every automorphism in $H_2$, then it is fixed by every automorphism in $H_1$, so that $E^{H_2}\subset E^{H_1}$. If $\sigma$ fixes every element of $K_1$ pointwise then it fixes every element of $K_2$ pointwise, so that $\text{Gal}(E/K_1)\subset\text{Gal}(E/K_2)$. \item \emph{The composition $\Phi\Psi:H\rightarrow E^H\rightarrow \text{Gal}(E/E^H)$ is the identity:} by definition every element of $H$ fixes $E^H$ pointwise, and since $\text{Gal}(E/E^H)$ consists of {\em all\/} the automorphisms of $E$ that fix $E^H$ pointwise, we have $H\subset\text{Gal}(E/E^H)$. In fact, both $H$ and $\text{Gal}(E/E^H)$ have the same fixed field, ie: $E^{\text{Gal}(E/E^H)}=E^H$. To see this, any $\sigma\in\text{Gal}(E/E^H)$ fixes $E^H$ pointwise by definition, so $E^H\subset E^{\text{Gal}(E/E^H)}$. On the other hand $H\subset \text{Gal}(E/E^H)$ and $\Psi$ reverses order, so $E^{\text{Gal}(E/E^H)}\subset E^H$. By the results of Section \ref{linear.algebra2}, the elements of the fixed field $E^{\text{Gal}(E/E^H)}$ are obtained by solving the system of linear equations $\sigma(x)=x$ for all $\sigma\in\text{Gal}(E/E^H)$, and these equations are independent. In particular, a proper subset of these equations has a proper superset of solutions. We already have that $H\subset\text{Gal}(E/E^H)$. Suppose $H$ is a proper subgroup of $\text{Gal}(E/E^H)$. 
The fixed field $E^H$ would then properly contain the fixed field $E^{\text{Gal}(E/E^H)}$. As this contradicts the previous paragraph, we have $H=\text{Gal}(E/E^H)$. \item \emph{The composition $\Psi\Phi:K\rightarrow\text{Gal}(E/K)\rightarrow E^{\text{Gal}(E/K)}$ is the identity:} let $E=K(\beta)$ and suppose the minimum polynomial $g$ of $\beta$ over $K$ has degree $d+1$ with roots $\{\beta=\beta_0,\ldots,\beta_d\}$. $E$ thus has basis $\{1,\beta,\ldots,\beta^d\}$ over $K$ and $G=\text{Gal}(E/K)$ has elements $\{\text{id}=\sigma_0,\ldots,\sigma_d\}$ by Corollary G, labelled so that $\sigma_i(\beta)=\beta_i$. An element $x\in E$ has the form $$ x=x_0+x_1\beta+\cdots+x_d\,\beta^d $$ with $x\in E^G$ exactly when $\sigma_i(x)=x$ for all $i$, i.e. when $$ x_1(\beta-\beta_i)+\cdots+x_d(\beta^d-\beta_i^d)=0, $$ a homogeneous system of $d$ equations in $d$ unknowns. The system has coefficients given by the matrix $\widehat{A}$ of Theorem \ref{theorem:linearalgebra2} (but with $\beta$'s instead of $\alpha$'s) and hence, by the argument given there, has the unique solution $x_1=\cdots=x_d=0$. Thus $x=x_0\in K$ and so $E^{\text{Gal}(E/K)}=K$. \item As $E$ is a splitting field we can apply Corollary G to get $|\text{Gal}(E/E^H)|=[E:E^H]$, where $\text{Gal}(E/E^H)=H$ gives $|H|=[E:E^H]$. \qed \end{enumerate} \end{proof} \paragraph{\hspace*{-0.3cm}} Before an example, a little house-keeping: the condition $(\dag)$ in \ref{galois:correspondence:condition} can be replaced by an easier one to verify: \begin{proposition} \label{proposition:galois_extensions} Let $F\subset E$ be a finite extension such that every irreducible polynomial over $F$ has distinct roots. Then the following are equivalent: \begin{enumerate} \item Every irreducible polynomial over $F$ that has a root in $E$ has all its roots in $E$. \item $E=F(\alpha)$ and the minimum polynomial of $\alpha$ over $F$ splits in $E$. \end{enumerate} \end{proposition} \begin{proof} $(1)\Rightarrow (2)$: the minimum polynomial is irreducible over $F$ with root $\alpha\in F(\alpha)=E$, hence splits by (1). $(2)\Rightarrow (1)$: apply the argument of part 5 of the proof of the Galois correspondence to $K=F$ to get $E^{G}=F$ for $G=\text{Gal}(E/F)$. Suppose that $p\in F[x]$ is irreducible over $F$ and has a root $\alpha\in E$ and let $\{\alpha=\alpha_1,\ldots,\alpha_n\}$ be the distinct elements of the set $\{\sigma(\alpha):\sigma\in G\}$. The polynomial $g=\prod (x-\alpha_i)$ has roots permuted by the $\sigma\in G$, hence its coefficients are fixed by the $\sigma\in G$, i.e. $g$ is a polynomial over $E^G=F$. Both $p$ and $g$ have $x-\alpha$ as a factor, hence their gcd is not $1$. As $p$ is irreducible it must then divide $g$, hence all its roots lie in $E$. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} \label{galois:correspondence:example1} Now to our first example. In Section \ref{galois.groups} we revisited the example of Section \ref{lect1}, where for $\alpha=\sqrt[3]{2}$ and $\omega=-\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}$ we had $$ G=\text{Gal}(\Q(\alpha,\omega)/\Q)=\{\text{id},\sigma,\sigma^2,\tau,\sigma\tau,\sigma^2\tau\}, $$ with $\sigma(\alpha)=\alpha\omega,\sigma(\omega)=\omega$ and $\tau(\alpha)=\alpha,\tau(\omega)=\omega^2$. In \ref{vector:spacesI:minimum:polynomial} we showed that $\Q(\alpha,\omega)=\Q(\alpha+\omega)$ with the minimum polynomial of $\alpha+\omega$ over $\Q$ having all its roots in $\Q(\alpha,\omega)$. Condition $(\dag)$ thus holds.
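As a quick check, $\Q(\alpha,\omega)$ is the splitting field over $\Q$ of $x^3-2$, a polynomial with distinct roots, so Corollary G gives $|\text{Gal}(\Q(\alpha,\omega)/\Q)|=[\Q(\alpha,\omega):\Q]=6$, matching the six automorphisms listed above.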
The subgroup lattice $\mathcal L(G)$ is shown on the left in Figure \ref{fig:galois:correspondence:example1} -- adapted from Figure \ref{fig:groups2:subgroup_lattice_S_3}. Applying the Galois Correspondence then gives the lattice $\mathcal L(E/F)$ of intermediate fields on the right of Figure \ref{fig:galois:correspondence:example1} with $F_4$ the fixed field of $\{\text{id},\sigma,\sigma^2\}$ and the others the fixed fields (in no particular order) of the three order two subgroups. By part (4) of the Galois correspondence, each of the extensions $F_i\subset\Q(\alpha,\omega)$ has degree the order of the corresponding subgroup, so that $\Q(\alpha,\omega)$ is a degree three extension of $F_4$, and a degree two extension of the other intermediate fields. Let $F_1$ be the fixed field of the subgroup $\{\text{id},\tau\}$; we will explicitly describe its elements. The Tower law gives as basis for $\Q(\alpha,\omega)$ over $\Q$ the set $$ \{1,\alpha,\alpha^2,\omega,\alpha\omega,\alpha^2\omega\}, $$ so that an $x\in\Q(\alpha,\omega)$ has the form, $$ x=a_0+a_1\alpha+a_2\alpha^2+a_3\omega+a_4\alpha\omega+a_5\alpha^2\omega, $$ with the $a_i\in\Q$. The element $x$ is in $F_1$ if and only if $\tau(x)=x$ where, \begin{equation*} \begin{split} \tau(x)&=a_0+a_1\alpha+a_2\alpha^2+a_3\omega^2+a_4\alpha\omega^2+a_5\alpha^2\omega^2\\ &=a_0+a_1\alpha+a_2\alpha^2+a_3(-1-\omega)+a_4\alpha(-1-\omega)+a_5\alpha^2(-1-\omega)\\ &=(a_0-a_3)+(a_1-a_4)\alpha+(a_2-a_5)\alpha^2-a_3\omega-a_4\alpha\omega-a_5\alpha^2\omega. \end{split} \end{equation*} Equate coefficients (we are using a basis) to get: $$ a_0-a_3=a_0, a_1-a_4=a_1,a_2-a_5=a_2,-a_3=a_3,-a_4=a_4\text{ and }-a_5=a_5. $$ Thus, $a_3=a_4=a_5=0$ and $a_0,a_1,a_2$ are arbitrary. Hence $$ x=a_0+a_1\alpha+a_2\alpha^2, $$ which is an element of $\Q(\alpha)$. This gives $F_1\subseteq \Q(\alpha)$. On the other hand, $\tau$ fixes $\Q$ pointwise and fixes $\alpha$, hence fixes $\Q(\alpha)$ pointwise, giving $\Q(\alpha)\subseteq F_1$ and so $F_1=\Q(\alpha)$. The rest of the picture is described in Exercise \ref{galois:correspondence:exercise10}. \begin{figure}\label{fig:galois:correspondence:example1} \end{figure} \paragraph{\hspace*{-0.3cm}} Recall that a subgroup $N$ of a group $G$ is normal when $gNg^{-1}=N$ for all $g\in G$. This extra property possessed by normal subgroups means they correspond to slightly special intermediate fields. Let $F\subseteq E$ be an extension with Galois group $\text{Gal}(E/F)$. Let $F\subseteq K\subseteq E$ be an intermediate field and $\sigma\in\text{Gal}(E/F)$. The image of $K$ by $\sigma$ is another intermediate field, as on the left of Figure \ref{fig:galois:correspondence:conjugate:subgroups}. Applying the Galois correspondence gives subgroups $\text{Gal}(E/K)$ and $\text{Gal}(E/\sigma(K))$ as on the right. Then: \begin{proposition}\label{prop14.1} $\text{Gal}(E/\sigma(K))=\sigma\text{Gal}(E/K)\sigma^{-1}$ \end{proposition} \begin{proof} If $x\in\sigma(K)$, then $x=\sigma(y)$ for some $y\in K$. If $\tau\in\text{Gal}(E/K)$, then $\sigma\tau\sigma^{-1}(x)=\sigma\tau(y)=\sigma(y)=x$, so that $\sigma\tau\sigma^{-1}\in\text{Gal}(E/\sigma(K))$, giving $\sigma\text{Gal}(E/K)\sigma^{-1}\subseteq\text{Gal}(E/\sigma(K))$. Replace $\sigma$ by $\sigma^{-1}$ to get the reverse inclusion. \qed \end{proof} \begin{figure}\label{fig:galois:correspondence:conjugate:subgroups} \end{figure} \begin{galoiscorrespondenceB} Suppose we have the assumptions of the first part of the Galois correspondence.
If $K$ is an intermediate field then $\sigma(K)=K$, for all $\sigma\in\text{Gal}(E/F)$, if and only if $\text{Gal}(E/K)$ is a normal subgroup of $\text{Gal}(E/F)$. In this case, $$ \text{Gal}(E/F)/\text{Gal}(E/K)\cong \text{Gal}(K/F). $$ \end{galoiscorrespondenceB} \begin{proof} If $\sigma(K)=K$ for all $\sigma$ then by Proposition \ref{prop14.1}, $\sigma\text{Gal}(E/K)\sigma^{-1}=\text{Gal}(E/\sigma(K))=\text{Gal}(E/K)$ for all $\sigma$, and so $\text{Gal}(E/K)$ is normal. Conversely, if $\text{Gal}(E/K)$ is normal then Proposition \ref{prop14.1} gives $\text{Gal}(E/\sigma(K))=\text{Gal}(E/K)$ for all $\sigma$, where $X\mapsto \text{Gal}(E/X)$ is a 1-1 map by the first part of the Galois correspondence. We thus have $\sigma(K)=K$ for all $\sigma$. Define a map $\text{Gal}(E/F)\rightarrow\text{Gal}(K/F)$ by taking an automorphism $\sigma$ of $E$ fixing $F$ pointwise and restricting it to $K$. We get an automorphism of $K$ as $\sigma(K)=K$. The map is a homomorphism as the operation is composition in both groups. A $\sigma$ is in the kernel if and only if it restricts to the identity map on $K$ -- that is, fixes $K$ pointwise -- which happens if and only if $\sigma$ is in $\text{Gal}(E/K)$. If $\sigma$ is an automorphism of $K$ fixing $F$ pointwise then by Theorem F, it can be extended to an automorphism of $E$ fixing $F$ pointwise. Thus any element of the Galois group $\text{Gal}(K/F)$ can be obtained by restricting an element of $\text{Gal}(E/F)$, and the homomorphism is onto. The isomorphism follows by the first isomorphism theorem. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} Here is a simple application: \begin{proposition} Let $F\subseteq E$ be an extension satisfying the conditions of the Galois correspondence. If $F\subseteq K\subseteq E$ with $F\subseteq K$ an extension of degree two, then any $\sigma\in\text{Gal}(E/F)$ sends $K$ to itself. \end{proposition} \begin{proof} Applying the Galois correspondence (part 1), the subgroup $\text{Gal}(E/K)$ has index two in $\text{Gal}(E/F)$, hence is normal by Exercise \ref{ex11.1}. Now apply the Galois correspondence (part 2). \qed \end{proof} \subsection*{Further Exercises for Section \thesection} \emph{In all these exercises, you can assume that the condition (\dag) of \ref{galois:correspondence:condition} holds.} \begin{vexercise}\label{exam00_4} Let $\alpha=\sqrt[4]{2}\in\R$ and $\text{i}\in\ams{C}$, and consider the field $\Q(\alpha,\text{i})\subset\ams{C}$. \hspace{1em}\begin{enumerate} \item Show that there are automorphisms $\sigma,\tau$ of $\Q(\alpha,\text{i})$ such that $$ \sigma(\text{i})=\text{i},\sigma(\alpha)=\alpha\,\text{i},\tau(\text{i})=-\text{i},\mbox{ and }\tau(\alpha)=\alpha. $$ Show that the elements of $$ G=\{\text{id},\sigma,\sigma^2,\sigma^3,\tau,\sigma\tau,\sigma^2\tau,\sigma^3\tau\} $$ are then {\em distinct\/} automorphisms of $\Q(\alpha,\text{i})$. Show that $\tau\sigma=\sigma^3\tau$. \item Show that $\text{Gal}(\Q(\alpha,\text{i})/\Q)=G$ and that the lattice $\mathcal L(G)$ is as on the left of Figure \ref{fig:galois:correspondence:exercise20}. \item Find the subgroups $H_1,H_2$ and $H_3$ of $G$. If the corresponding lattice of subfields is as shown on the right, then express the fields $F_1$ and $F_2$ in the form $\Q(\beta_1,\ldots,\beta_n)$ for $\beta_1,\ldots,\beta_n\in\ams{C}$.
\end{enumerate} \end{vexercise} \begin{figure} \caption{Exercise \ref{exam00_4}: the lattice of subgroups of $\text{Gal}(\Q(\alpha,\text{i})/\Q)$ with $\alpha=\sqrt[4]{2}$ \emph{(left)} and the corresponding lattice of intermediate fields of the extension $\Q\subset\Q(\alpha,\text{i})$ \emph{(right)}.} \label{fig:galois:correspondence:exercise20} \end{figure} \begin{vexercise}\label{exam01_4} \label{galois:correspondence:exercise20} Let $\omega=\cos\frac{2\pi}{7}+\text{i}\sin\frac{2\pi}{7}\in\ams{C}$. \begin{description} \item[\hspace*{5mm}1.] \parshape=2 0pt.8\hsize 0pt.8\hsize Show that $\Q(\omega)$ is the splitting field of the polynomial $$1+x+x^2+x^3+x^4+x^5+x^6$$ and deduce that $|\text{Gal}(\Q(\omega)/\Q)|=6$. \vadjust{ \smash{\lower 30pt \llap{ \begin{pspicture}(0,0)(2,4) \rput(1,2){\BoxedEPSF{galois12.14.eps scaled 900}} \rput*(1,3.3){$\Q(\omega)$} \rput*(0.35,1.5){$F_1$} \rput*(1.75,2.35){$F_2$} \rput*(1,0.7){$\Q$} \end{pspicture} }}}\ignorespaces Let $\sigma\in\text{Gal}(\Q(\omega)/\Q)$ be such that $\sigma(\omega)=\omega^3$. Show that $$ \text{Gal}(\Q(\omega)/\Q)=\{\text{id},\sigma,\sigma^2,\sigma^3,\sigma^4,\sigma^5\}. $$ \item[\hspace*{5mm}2.] \parshape=3 0pt\hsize 0pt\hsize 0pt\hsize Using the Galois correspondence, show that the lattice of intermediate fields is as shown on the right, where $F_1$ is a degree 2 extension of $\Q$ and $F_2$ a degree 3 extension. Find complex numbers $\beta_1,\ldots,\beta_n$ such that $F_2=\Q(\beta_1,\ldots,\beta_n)$. \end{description} \end{vexercise} \begin{vexercise} \label{galois:correspondence:exercise10} Complete the lattice of intermediate fields from the example in \ref{galois:correspondence:example1}: \begin{figure} \caption{The rest of the lattice of intermediate fields for the example in \ref{galois:correspondence:example1}} \label{fig:} \end{figure} \end{vexercise} \begin{vexercise} \label{galois:correspondence:exercise30} Let $\alpha=\sqrt[6]{2}$ and $\omega=\frac{1}{2}+\frac{\sqrt{3}}{2}\text{i}$ and consider the field extension $\Q\subset \Q(\alpha,\omega)$. \begin{enumerate} \item Find a basis for $\Q(\alpha,\omega)$ over $\Q$ and show that $|\text{Gal}(\Q(\alpha,\omega)/\Q)|=12$. \item Let $\sigma,\tau\in\text{Gal}(\Q(\alpha,\omega)/\Q)$ be such that $\tau(\alpha)=\alpha,\tau(\omega)=\omega^5$ and $\sigma(\alpha)=\alpha\omega,\sigma(\omega)=\omega$. Show that the elements of $$ H=\{\text{id},\sigma,\sigma^2,\sigma^3,\sigma^4,\sigma^5,\tau,\tau\sigma,\tau\sigma^2,\tau\sigma^3,\tau\sigma^4,\tau\sigma^5\} $$ are then distinct elements of $\text{Gal}(\Q(\alpha,\omega)/\Q)$. \item Part of the subgroup lattice $\mathcal L(G)$ is shown on the left of Figure \ref{fig:galois:correspondence:exercise35}. Fill in the corresponding part of the lattice of intermediate fields on the right. \end{enumerate} \end{vexercise} \begin{vexercise}\label{ex_lect10.2} Let $\omega=\cos\frac{2\pi}{5}+\text{i}\sin\frac{2\pi}{5}$. \begin{description} \item[\hspace*{7mm}1.] \parshape=2 0pt.8\hsize 0pt.8\hsize Show that $\Q(\omega)$ is the splitting field of the polynomial $1+x+x^2+x^3+x^4$ and deduce that $|\text{Gal}(\Q(\omega)/\Q)|=4$. \vadjust{ \smash{\lower 40pt \llap{ \begin{pspicture}(0,0)(2,3) \rput(1,1.5){\BoxedEPSF{galois12.16.eps scaled 900}} \rput*(1,2.75){$\Q(\omega)$} \rput*(1,1.5){$F$} \rput*(1,0.25){$\Q$} \end{pspicture} }}}\ignorespaces \item[\hspace*{7mm}2.] Let $\sigma\in\text{Gal}(\Q(\omega)/\Q)$ be such that $\sigma(\omega)=\omega^2$. Show that $$ \text{Gal}(\Q(\omega)/\Q)=\{\text{id},\sigma,\sigma^2,\sigma^3\}.
$$ Find the subgroup lattice $\mathcal L(G)$ for $G=\text{Gal}(\Q(\omega)/\Q)$. \item[\hspace*{7mm}3.] \parshape=2 0pt\hsize 0pt\hsize Using the Galois correspondence, deduce that the lattice of intermediate fields is as shown on the right. Find a complex number $\beta$ such that $F=\Q(\beta)$. \end{description} \end{vexercise} \begin{vexercise} \label{galois:correspondence:exercise50} Consider the polynomial $f(x)=(x^2-2)(x^2-5)\in\Q[x]$. \begin{enumerate} \item Show that $\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{5})$ is the splitting field of $f$ over $\Q$ and that the Galois group $\text{Gal}(\Q(\kern-2pt\sqrt{2},$ $\kern-2pt\sqrt{5})/\Q)$ has order four. (You can assume that if $a,b,c\in\Q$ satisfy $a\kern-2pt\sqrt{2}+b\kern-2pt\sqrt{5}+c=0$ then $a=b=c=0$.) \item Show that there are automorphisms $\sigma,\tau$ of $\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{5})$ defined by $\sigma(\kern-2pt\sqrt{2})=-\kern-2pt\sqrt{2},\sigma(\kern-2pt\sqrt{5})=\kern-2pt\sqrt{5}$ and $\tau(\kern-2pt\sqrt{2})=\kern-2pt\sqrt{2},\tau(\kern-2pt\sqrt{5})=-\kern-2pt\sqrt{5}$. List the elements of the Galois group $\text{Gal}(\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{5})/\Q)$. \item Complete the subgroup lattice on the left of Figure \ref{fig:galois:correspondence:exercise40} by listing the elements of $H$, and use your answer to write the field $F$ in the form $\Q(\theta)$ for some $\theta\in\ams{C}$. \end{enumerate} \end{vexercise} \begin{figure}\label{fig:galois:correspondence:exercise35} \end{figure} \section{Applications of the Galois Correspondence} \label{galois.corresapps} \subsection{Constructing polygons} If $p$ is a prime number, then a regular $p$-gon can be constructed {\em only if\/} $p$ is a Fermat prime of the form $$ 2^{2^t}+1. $$ This negative result was proved in Section \ref{ruler.compass2}, and required only the degrees of extensions: we didn't need any symmetries of fields. Galois theory proper -- the interplay between fields and their Galois groups -- allows us to prove positive results: \begin{figure} \caption{Exercise \ref{galois:correspondence:exercise50}: subgroup and intermediate field lattice for the extension $\Q\subset\Q(\kern-2pt\sqrt{2},\kern-2pt\sqrt{5})$.} \label{fig:galois:correspondence:exercise40} \end{figure} \begin{theorem} \label{theorem:fermat_primes} If $p=2^{2^t}+1$ is a Fermat prime then a regular $p$-gon can be constructed. \end{theorem} \begin{proof} By Theorem E we need a tower of fields, $$ \Q\subset K_1\subset\cdots\subset K_n=\Q(\zeta), $$ where $\zeta=\cos(2\pi/p)+\text{i}\sin(2\pi/p)$ and $[K_i:K_{i-1}]=2$. We will get the tower by analysing the Galois group $\text{Gal}(\Q(\zeta)/\Q)$ and applying the Galois correspondence. As $\Q(\zeta)$ is the splitting field over $\Q$ of the $p$-th cyclotomic polynomial $$ \Phi_p(x)=x^{p-1}+x^{p-2}+\cdots+x+1, $$ we have by Theorem G: $$ |\text{Gal}(\Q(\zeta)/\Q)|=[\Q(\zeta):\Q]=\deg\Phi_p=p-1=2^{2^t}=2^n. $$ The roots of $\Phi_p$ are the powers $\zeta^k$ for $1\leq k\leq p-1$, and these all lie in $\Q(\zeta)$. We can thus apply the Galois correspondence by Proposition \ref{proposition:galois_extensions}. In Section \ref{galois.groups} we showed that $\text{Gal}(\Q(\zeta)/\Q)$ is a cyclic group, and so by Exercise \ref{cyclicgroup.subgroups}, there is a chain of subgroups $$ \{\text{id}\}=H_0\subset H_1\subset\cdots\subset H_n=\text{Gal}(\Q(\zeta)/\Q), $$ where the subgroup $H_i$ has order $2^i$.
Explicitly, if $\text{Gal}(\Q(\zeta)/\Q)=\{g,g^2,\ldots,g^{2^{n-1}},g^{2^n}=\text{id}\}$ then $$ \{\text{id}\} \subset \{h_1,h_1^2=\text{id}\} \subset \{h_2,h_2^2,h_2^3,h_2^4=\text{id}\} \subset \cdots \subset \{h_{n-1},h_{n-1}^2,\ldots,h_{n-1}^{2^{n-1}}=\text{id}\} \subset \text{Gal}(\Q(\zeta)/\Q), $$ where $h_i=g^{2^{n-i}}$ and $H_i$ is the subgroup generated by $h_i$. The Galois correspondence thus gives a chain of fields, $$ \Q=K_0\subset K_1\subset\cdots\subset K_n=\Q(\zeta), $$ where $K_{n-i}$ is the fixed field $E^{H_i}$ of the subgroup $H_i$. We have $2^i=[G:H_{n-i}]=[K_i:\Q]$, so by the tower law $$ 2^i=[K_i:\Q]=[K_i:K_{i-1}][K_{i-1}:\Q]=[K_i:K_{i-1}]2^{i-1} $$ and hence $[K_i:K_{i-1}]=2$ as desired. \qed \end{proof} Theorems \ref{theorem:fermat_primes} and \ref{constructions2:pgons} then give: \begin{corollary} If $p$ is a prime then a $p$-gon can be constructed if and only if $p=2^{2^t}+1$ is a Fermat prime. \end{corollary} \begin{corollary} If $n=2^kp_1p_2\ldots p_m$ with the $p_i$ distinct Fermat primes, then a regular $n$-gon can be constructed. \end{corollary} \begin{proof} A $2^k$-gon can be constructed by repeatedly bisecting angles, and thus an $n$-gon, where $n$ has the form given, by Exercise \ref{ex7.50}. \qed \end{proof} A little more Galois Theory, which we omit, gives the following complete answer to the question of which $n$-gons can be constructed: \begin{theorem} An $n$-gon can be constructed if and only if $n=2^kp_1p_2\ldots p_m$ with the $p_i$ distinct Fermat primes. \end{theorem} \paragraph{\hspace*{-0.3cm}} The angle $\pi/n$ can be constructed precisely when the angle $2\pi/n$ can be constructed, which in turn happens precisely when the regular $n$-gon can be constructed. Thus, the list of submultiples of $\pi$ that are constructible runs as $$ \frac{\pi}{2},\frac{\pi}{3},\frac{\pi}{4},\frac{\pi}{5},\frac{\pi}{6},\frac{\pi}{8}, \frac{\pi}{10},\frac{\pi}{12},\frac{\pi}{15},\ldots $$ \begin{vexercise} Give direct proofs of the non-constructibility of the angles $$ \frac{\pi}{7},\frac{\pi}{9},\frac{\pi}{11}\text{ and }\frac{\pi}{13}. $$ \end{vexercise} \subsection{The Fundamental Theorem of Algebra} We saw this in Section \ref{lect3}. We now prove it using the Galois correspondence, starting with two observations: \begin{description} \item[(i).] \emph{There are no extensions of $\R$ of odd degree $>1$}. Any polynomial in $\R[x]$ has roots that are either real or occur in complex conjugate pairs, hence a real polynomial of odd degree $>1$ has a real root and is reducible over $\R$. Thus, the minimum polynomial over $\R$ of any $\alpha\not\in\R$ must have even degree, so that the degree $[\R(\alpha):\R]$ is even. If $\R\subset L$ is an extension, then for $\alpha\in L\setminus\R$, $$ [L:\R]=[L:\R(\alpha)][\R(\alpha):\R] $$ is also even. \item[(ii).] \emph{There is no extension of $\ams{C}$ of degree two}. For if $\ams{C}\subset L$ with $[L:\ams{C}]=2$ then an $\alpha\in L\setminus\ams{C}$ gives the intermediate $\ams{C}\subset \ams{C}(\alpha)\subset L$ with $[\ams{C}(\alpha):\ams{C}]=1$ or $2$ by the Tower law. If this degree equals $1$ then $\alpha\in\ams{C}$; thus $[\ams{C}(\alpha):\ams{C}]=2$, and hence $L=\ams{C}(\alpha)$.
If $f$ is the minimum polynomial of $\alpha$ over $\ams{C}$ then $f=x^2+bx+c$ for $b,c\in\ams{C}$ with $\alpha$ one of the two roots $$ \frac{-b\pm\sqrt{b^2-4c}}{2}. $$ But these are both in $\ams{C}$, contradicting the choice of $\alpha$. \end{description} \begin{fundthmalg} Any non-constant $f\in\ams{C}[x]$ has a root in $\ams{C}$. \end{fundthmalg} \begin{proof} The proof toggles back and forth between intermediate fields and subgroups of Galois groups using the Galois correspondence. All the fields and groups appear in Figure \ref{fig:galois:correspondence:applications10}. If $f$ is reducible over $\R$, then replace $f$ in what follows by one of its irreducible factors $p$. Thus we may assume that $f$ is irreducible over $\R$, and let $E$ be the splitting field over $\R$, not of $f$, but of $(x^2+1)f$. Then $\R$ and $\pm\text{i}$ are in $E$, hence $\ams{C}$ is too, giving the series of extensions $\R\subset\ams{C}\subseteq E$. Since $G=\text{Gal}(E/\R)$ is a finite group, we can factor from its order all the powers of $2$, writing $|G|=2^km$, where $m\geq 1$ is odd. Sylow's Theorem then gives a subgroup $H$ of $G$ of order $2^k$, and the Galois correspondence gives the intermediate field $F=E^H$ with the extension $F\subset E$ of degree $2^k$. As $[E:\R]=[E:F][F:\R]$ with $[E:\R]=|G|=2^km$, we have that $F$ is a degree $m$ extension of $\R$. As $m$ is odd and no such extensions exist if $m>1$, we must have $m=1$, so that $|G|=2^k$. Using the Galois correspondence in the reverse direction, the subgroup $\text{Gal}(E/\ams{C})$ has order dividing $|G|=2^k$, hence order $2^s$ for some $0\leq s\leq k$. If $s>0$ then there is a subgroup $K$ of $\text{Gal}(E/\ams{C})$ of order $2^{s-1}$, with $2^{s-1}[E^K:\ams{C}]=[E:\ams{C}]=|\text{Gal}(E/\ams{C})|=2^s$. Thus $E^K$ is a degree $2$ extension of $\ams{C}$, a contradiction to the second observation above. We thus have $s=0$, hence $|\text{Gal}(E/\ams{C})|=1$. We now have two fields, $E$ and $\ams{C}$, that map via the 1-1 map $X\mapsto\text{Gal}(E/X)$ to the trivial group. The conclusion is that $E=\ams{C}$. As $E$ is the splitting field of the polynomial $(x^2+1)f$, we get that $f$ has a root (indeed {\em all\/} its roots) in $\ams{C}$. \qed \end{proof} \begin{figure} \caption{Using the Galois correspondence to prove the Fundamental Theorem of Algebra.} \label{fig:galois:correspondence:applications10} \end{figure} \section{(Not) Solving Equations} \label{solving.equations} We can finally return to the theme of Section \ref{lect1}: finding algebraic expressions for the roots of polynomials. \paragraph{\hspace*{-0.3cm}} The formulae for the roots of quadratics, cubics and quartics express the roots in terms of the coefficients, the four field operations $+,-,\times,\div$ and $n$-th roots $\sqrt{},\sqrt[3]{},\sqrt[4]{}$. These roots thus lie in an extension of $\Q$ obtained by adjoining certain $n$-th roots. \begin{definition}[radical extension of $\Q$] An extension $\Q\subset E$ is radical when there is a sequence of simple extensions, $$ \Q\subset \Q(\alpha_1)\subset \Q(\alpha_1,\alpha_2)\subset \cdots\subset \Q(\alpha_1,\alpha_2,\ldots,\alpha_k)=E, $$ with some power $\alpha_i^{m_i}$ of $\alpha_i$ contained in $\Q(\alpha_1,\alpha_2,\ldots,\alpha_{i-1})$ for each $i$.
\end{definition} Each extension in the sequence is thus obtained by adjoining to the previous field in the sequence the $m_i$-th root of some element. A simple example: $$ \Q\subset\Q(\sqrt{2})\subset\Q(\sqrt{2},\sqrt[3]{5}) \subset\Q\biggl(\sqrt{2}, \sqrt[3]{5},\sqrt{\sqrt{2}-7\sqrt[3]{5}}\biggr). $$ By repeatedly applying Theorem D, the elements of a radical extension are seen to have expressions in terms of rational numbers, $+,-,\times,\div$ and $\sqrt[n]{}$ for various $n$. \begin{definition}[polynomial solvable by radicals] A polynomial $f\in\Q[x]$ is solvable by radicals when its splitting field over $\Q$ is contained in some radical extension. \end{definition} Notice that we are dealing with a fixed specific polynomial, and not an arbitrary one. The radical extension containing the splitting field will depend on the polynomial. \paragraph{\hspace*{-0.3cm}} Any quadratic polynomial $ax^2+bx+c$ is solvable by radicals, with splitting field in the radical extension $$ \Q\subseteq \Q(\sqrt{b^2-4ac}). $$ Similarly, the formulae for the roots of cubics and quartics give, for any specific such polynomial, radical extensions containing their splitting fields. \paragraph{\hspace*{-0.3cm}} Recalling the definition of soluble group given in Section \ref{groups.stuff}: \begin{theoremH} A polynomial $f\in\Q[x]$ is solvable by radicals if and only if its Galois group over $\Q$ is soluble. \end{theoremH} The proof, which we omit, uses the full power of the Galois correspondence, with the sequence of extensions in a radical extension corresponding to the sequence of subgroups $$ \{1\}=H_0\lhd H_1\lhd \cdots \lhd H_{n-1}\lhd H_n=G, $$ in a soluble group. \paragraph{\hspace*{-0.3cm}} As a small reality check of Theorem H, we saw in Section \ref{galois.groups} that the Galois group over $\Q$ of a quadratic polynomial is either the trivial group $\{\text{id}\}$ or the (Abelian) permutation group $\{\text{id},(\alpha,\beta)\}$ where $\alpha,\beta\in\ams{C}$ are the roots. Abelian groups are soluble -- see \ref{groups1:abelian_are_soluble} -- and this syncs with quadratics being solvable by radicals via the quadratic formula. Similarly, the possible Galois groups of cubic polynomials are shown in Figure \ref{fig:groups2:subgroup_lattice_S_3}. Apart from $S_{\kern-.3mm 3}$, these are also Abelian. But $S_{\kern-.3mm 3}$ is the symmetry group of an equilateral triangle lying in the plane -- soluble by \ref{groups1:dihedral_are_soluble}. \paragraph{\hspace*{-0.3cm}} Somewhat out of chronological order, we have: \begin{theorem}[Abel-Ruffini] The polynomial $f=x^5-4x+2$ is not solvable by radicals. \end{theorem} The roots of $x^5-4x+2$ are algebraic numbers, yet there is no algebraic expression for them. \begin{proof} We show that the Galois group of $f$ over $\Q$ is insoluble. Indeed, we show that the Galois group is the symmetric group $S_{\kern-.3mm 5}$, which contains the non-Abelian, finite simple group $A_5$. Thus $S_{\kern-.3mm 5}$ contains an insoluble subgroup, hence is insoluble, as any subgroup of a soluble group is soluble by Exercises \ref{subgroups.solublegroups1} and \ref{subgroups.solublegroups2}. If $E$ is the splitting field over $\Q$ of $f$, then $$ E=\Q(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5), $$ where the $\alpha_i\in\ams{C}$ are the roots of $f$ and the Galois group is $\text{Gal}(E/\Q)$, itself a subgroup of the group of permutations of $\{\alpha_1,\ldots,\alpha_5\}$ -- which is $\cong S_{\kern-.3mm 5}$.
\begin{figure} \caption{The Galois groups of the quintic polynomials $x^5+ax+b$ for $-40\leq a,b\leq 40$ (re-drawn from the \emph{Mathematica\/} poster, ``Solving the Quintic'').} \label{fig:solving:quintic} \end{figure} \parshape=3 0pt\hsize 0pt\hsize 0pt.6\hsize The polynomial $f$ is irreducible over $\Q$ by Eisenstein, hence is the minimum polynomial of $\alpha_1$ over $\Q$. The extension $\Q\subset \Q(\alpha_1)$ thus has degree five, and the Tower law gives \vadjust{ \smash{\lower 125pt \llap{ \begin{pspicture}(0,0)(4.5,3.5) \rput(0,0.4){ \rput(0,0){ \rput{-90}(2,1.5){\BoxedEPSF{galois16.1.eps scaled 250}} } \psframe*[linecolor=white](1.3,3)(1.9,3.4) \psframe*[linecolor=white](1.8,-0.3)(2.2,.2) \psframe*[linecolor=white](3,1)(3.3,1.3) \rput*(2,3.1){$\infty$} \rput*(1.9,-0.1){$-\infty$} \rput*(-.5,1.5){$-\infty$} \rput*(4.2,1.5){$\infty$} } \end{pspicture} }}}\ignorespaces $$ [E:\Q]=[E:\Q(\alpha_1)][\Q(\alpha_1):\Q]. $$ \parshape=6 0pt.6\hsize 0pt.6\hsize 0pt.6\hsize 0pt.6\hsize 0pt.6\hsize 0pt.6\hsize The degree of the extension $\Q\subset E$ is therefore divisible by the degree of the extension $\Q\subset\Q(\alpha_1)$, i.e. divisible by five. Moreover, by Theorem G, the group $\text{Gal}(E/\Q)$ has order equal to the degree $[E:\Q]$, and so the group has order divisible by five. By Cauchy's Theorem, the Galois group contains an element $\sigma$ of order $5$, and hence a subgroup $$ \{\text{id},\sigma,\sigma^2,\sigma^3,\sigma^4\}, $$ \parshape=2 0pt\hsize 0pt\hsize where the permutation $\sigma$ is a $5$-cycle $\sigma=(a,b,c,d,e)$ when considered as a permutation of the roots. The graph of $f$ on the right shows that three of the roots are real, and the other two are thus complex conjugates. By Exercise \ref{exercise:groups2:conjugation}, complex conjugation is an element of the Galois group whose effect is the transposition $$ \tau=(b_1,b_2), $$ where $b_1,b_2$ are the two complex roots. But in Section \ref{groups.stuff} we saw that for $n$ prime, $S_{\kern-.3mm n}$ is generated by any $n$-cycle together with any transposition, hence the Galois group is $S_{\kern-.3mm 5}$ as claimed. \qed \end{proof} \paragraph{\hspace*{-0.3cm}} There is nothing particularly special about the polynomial $x^5-4x+2$; among the polynomials having degree $\geq 5$, those that are not solvable by radicals are \emph{generic\/}. We illustrate what we mean with some experimental evidence: consider the quintic polynomials $$ x^5+ax+b, $$ for $a,b\in\ams{Z}$ with $-40\leq a,b\leq 40$. Figure \ref{fig:solving:quintic} (which is re-drawn from the Mathematica poster, ``Solving the Quintic'') shows the $(a,b)$ plane for $a$ and $b$ in this range. The vertical line through $(0,0)$ corresponds to $f$ with Galois group the soluble dihedral group $D_{10}$ of order $10$. The horizontal line through $(0,0)$ and the two sets of crossing diagonal lines correspond to reducible $f$, as do a few other isolated points. The (insoluble) alternating group $A_5$ arises in a few sporadic places, as does another soluble subgroup of $S_5$. The vast majority of $f$ however, forming the light background, have Galois group the symmetric group $S_5$, and so have roots that are {\em algebraic\/}, but cannot be expressed {\em algebraically}. \end{document}
Find the sum of all solutions to the equation $(x-6)^2=25$. Expanding gives $x^2 - 12x + 36 = 25,$ so $x^2 - 12x + 11 = 0.$ By Vieta's formulas, the sum of the roots is $\boxed{12}.$ (As a check: $x-6=\pm 5$ gives $x=11$ or $x=1$, and $11+1=12$.)
Calculate Jacobian Matrix

In a machine-learning setting the Jacobian has dimension nObs x nPara, where nObs denotes the number of training observations and nPara the number of weight parameters.

In power-system software, the admittance matrix and the bus voltages can be extracted from PSSE through Python. OpenDSS has no Jacobian matrix as used in conventional power flow, but following the algorithms of MATPOWER one can construct it from the Ybus matrix.

ALGORITHM TO CALCULATE THE JACOBIAN MATRIX. Common open-source implementations [10][11] determine the Jacobian matrix by calculating the partial derivatives entry by entry. One high-level tool of this kind loops over thousands of input fluid states, reading each state, initializing the RELAP5-3D data needed to build a Jacobian matrix in subroutine PRESEQ, and building both analytical and numerical versions. Secant methods, also known as quasi-Newton methods, do not require the calculation of the Jacobian at all: they construct an approximation to the matrix, updated at each iteration, so that it behaves similarly to the true Jacobian along the step.

Just as with polar coordinates in two dimensions, we can compute a Jacobian for any change of coordinates in three dimensions: a Jacobian keeps track of the stretching. According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function is the Jacobian matrix of the inverse function. (The same factor appears when transforming probability densities; for independent coordinates the joint density f is the product of the marginal densities g and h.)

Application notes. In an Abaqus UMAT one may ask whether the Jacobian matrix to be returned equals the elastic matrix when the creep and thermal strain rates determined in the UMAT are used only to calculate the new stress tensor. In noise-robust speech recognition, joint uncertainty decoding (JUD) was proposed to calculate the Jacobian matrices on a per-regression-class basis instead of on a per-Gaussian basis [6]. And given a real symmetric N x N matrix A, the library routine JACOBI_EIGENVALUE carries out an iterative procedure known as Jacobi's iteration to determine an N-vector D of real eigenvalues and an N x N matrix V whose columns are the corresponding eigenvectors. The Jacobian matrix is also a key component of the numerical methods in the next section.

As a first exercise: compute the Jacobian matrix of [x*y*z, y^2, x + z] with respect to [x, y, z]. Note that "the Jacobian" often refers to the determinant of this matrix when the matrix is square.
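That exercise can be checked symbolically. A minimal sketch with SymPy (one convenient library choice; the variable names are ours):

import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x*y*z, y**2, x + z])   # the vector function
J = F.jacobian([x, y, z])             # 3x3 matrix of partial derivatives
print(J)   # Matrix([[y*z, x*z, x*y], [0, 2*y, 0], [1, 0, 1]])

Since the matrix is square here, its determinant -- "the Jacobian" in the older sense -- is defined as well.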
In the finite-element method, for higher-order elements such as the quadratic bar with three nodes, the strain-displacement matrix [B] becomes a function of the natural coordinate s. A related modelling question is how Abaqus calculates the applied strain and strain increment, especially for non-linear problems where the relation between displacement and strain is non-linear.

Some trigonometric bookkeeping recurs in these calculations: if we know sin(x) and cos(x) we can recover the angle with the inverse tangent function atan2, and we repeatedly use the identity cos^2(theta) + sin^2(theta) = 1. For 2D spring-mass systems in equilibrium it helps to first summarize the 2D vector notation used in the derivations for the spring system.

The Jacobian occurs when changing variables in an integration:

    integral of f(Y) dY  =  integral of f(Y(X)) |det(dY/dX)| dX.

In estimation and fitting the Jacobian appears in several guises. MATLAB's LSQNONLIN returns outputs from which the covariance matrix of the fitted parameters can be calculated (the inverse of the covariance matrix is known as the Fisher information matrix). In an extended Kalman filter, user-calculated predictions are passed to a non-linear update function such as rc_kalman_update_ekf(). One of the major variants of the WLS state estimator that is very popular in industry is the fast decoupled (FD) estimator. One may also need the Jacobian resulting from differentiating a force with respect to position (ignoring velocity as a first step).

The iteration x_{k+1} = x_k - J_F(x_k)^{-1} F(x_k), with J_F the Jacobian of F, is Newton's method; standard references discuss the conditions for its convergence for a system of nonlinear equations, and to apply it to a system of four equations in four unknowns, for example, all sixteen components of the Jacobian matrix are needed. The Jacobian matrix of a system of smooth ODEs is the matrix of the partial derivatives of the right-hand side with respect to the state variables, with all derivatives evaluated at the equilibrium point x = xe. For a spatial manipulator, the rank of the Jacobian can be no greater than the minimum of 6 and n, the number of joints.

Two implementation notes. The matrix dimensions in a naively vectorized expression may not be conformable, which suggests that fully vectorizing the Jacobian calculation is not always possible; matrices can, however, be one-dimensional (vectors), so vector-matrix and matrix-vector products remain available. A matrix can also be stored as a 1-D array in memory and addressed with a single index. The final thing we need to understand is the correct procedure for integrating over a manifold.

The Jacobian matrix is a matrix of partial derivatives, and in several numerical libraries the function jacobian calculates a numerical approximation of the first derivative of func at the point x.
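A sketch of such a numerical approximation, using forward differences (numerical_jacobian is our own helper name, not any particular library's API):

import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    # Approximate the Jacobian of func at x by forward differences.
    # func maps an n-vector to an m-vector; returns an (m, n) array
    # with entry (i, j) ~= d f_i / d x_j.
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(func(x), dtype=float)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps                     # perturb one input at a time
        J[:, j] = (np.asarray(func(x + step)) - f0) / eps
    return J

# Example: f(x, y) = (x^2 * y, 5x + sin y)
f = lambda v: np.array([v[0]**2 * v[1], 5*v[0] + np.sin(v[1])])
print(numerical_jacobian(f, [1.0, 2.0]))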
A Jacobian matrix, sometimes simply called a Jacobian, is the matrix of first-order partial derivatives of a vector function (in some cases the term "Jacobian" also refers to the determinant of the Jacobian matrix). If y has N components depending on M inputs x, the derivative of y with respect to x forms an N x M Jacobian matrix; this n x m matrix is called the Jacobian matrix of f. In this sense the Jacobian matrix behaves very much like the first derivative of a function of one variable, and in some cases you may need to use the product rule or chain rule to calculate the partial derivatives. In general the matrix can be rectangular: its height will be larger than its width when there are more equations than unknowns.

Two notational asides. To avoid confusion with the i-th component of a vector, we write the iteration counter from now on as a superscript x^(i) and no longer as a subscript x_i. In the old interface, different components of the Jacobian are returned via different output parameters.

Jacobians also appear in more specialised settings. In Floquet theory there is a constant nonsingular matrix C such that Phi(t+T) = Phi(t)C. An optimal experimental design algorithm has been developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. In robotics, a method due to Slotine can be used to compute the inverse of the Jacobian matrix; relating two coordinate frames is usually done by defining the zero-point of one coordinate with respect to the coordinates of the other frame as well as specifying the relative orientation. For stability questions, one considers an autonomous system and an equilibrium point, as discussed further below.

(3) If m = n the Jacobian matrix is square, and the determinant of J represents the distortion of volumes induced by the map F. This is a generalization of u-substitution from single-variable calculus, and it also relates to formulas for area and volume that are defined in terms of determinants, or equivalently in terms of the dot product and cross product. One of the many applications of the Jacobian matrix is to transfer a mapping from one coordinate system to another: Cartesian to natural coordinates, spherical to Cartesian, polar to Cartesian, and vice versa.
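To make the volume-distortion remark concrete, here is a small SymPy sketch for the polar change of coordinates, whose Jacobian determinant is the familiar factor r in dx dy = r dr d(theta):

import sympy as sp

r, t = sp.symbols('r theta', positive=True)
X = sp.Matrix([r*sp.cos(t), r*sp.sin(t)])   # (x, y) as functions of (r, theta)
J = X.jacobian([r, t])                      # 2x2 Jacobian matrix
print(J)                     # [[cos(theta), -r*sin(theta)], [sin(theta), r*cos(theta)]]
print(sp.simplify(J.det()))  # r  -- the area-scaling factor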
Whereas many efficient algorithms exist to calculate regression coefficients, algorithms to calculate the Jacobian matrix are comparatively inefficient. In power-flow work one can calculate it directly: export the Y matrix (excluding loads and the voltage source), use the power flow equations to calculate the real and reactive power at each node, and differentiate those equations with respect to voltage magnitude and angle; these derivatives are effectively the Jacobian matrix elements.

In electrical impedance tomography (EIT) it is necessary to calculate the derivatives of the measured voltages with respect to conductivity, often with respect to a single parameter at a time. Tools such as CALC_JACOBIAN (J = calc_jacobian(img)) compute the Jacobian from a forward model, and reducing computational costs in large-scale 3D EIT is possible by using a sparse Jacobian matrix.

The linearisation of a system at a point has a coefficient matrix called the Jacobian matrix of the system at that point, and the fact that this linear approximation faithfully describes the dynamics near a hyperbolic equilibrium is the Hartman-Grobman theorem. More generally, the Jacobian is a matrix-valued function and can be thought of as the vector version of the ordinary derivative of a scalar function; it allows us to relate corresponding small displacements in different spaces. (In optimisation routines, "TolX" specifies the termination tolerance on the unknown variables, while "TolFun" is a tolerance on the equations.)

In robotics, the matrix relating joint velocities to end-effector velocities is called the Jacobian of the manipulator. When a rigid body rotates around a fixed axis, every point of the body moves in a circle, which is where the angular-velocity rows come from (in code, the first line of such a computation is typically a cross product of two vectors). A practical approach is to calculate the instantaneous approximate Jacobian at any given robot pose and recalculate it as often as needed. If the determinant of the 2x2 matrix (a b; c d) is nonzero then the familiar closed-form inverse applies, and there is a correspondingly easy way to remember the formula for Newton's method.

Problem: find the Jacobian of the transformation (r, theta, z) -> (x, y, z) of cylindrical coordinates. (By the same computation as the polar sketch above, with an extra unit row and column for z, the determinant is again r.)

Main idea of the Jacobi iteration for a linear system: to begin, solve the 1st equation for x1, the 2nd equation for x2, and so on, to obtain the rewritten equations; then iterate.
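A compact Python version of that idea, assuming nonzero diagonal entries (the helper name and tolerances are our own choices):

import numpy as np

def jacobi_solve(A, b, x0=None, tol=1e-10, max_iter=500):
    # Jacobi iteration for A x = b. Converges e.g. when A is
    # strictly diagonally dominant.
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.diag(A)                  # diagonal entries (must be nonzero)
    R = A - np.diagflat(D)          # off-diagonal part
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # "solve the i-th equation for x_i"
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
print(jacobi_solve(A, b))   # close to np.linalg.solve(A, b)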
One line of power-systems work computes the Jacobian matrix based on a multi-parameter sensitivity analysis of the optimal power flow solution and characterizes some of its properties; this Jacobian matrix provides valuable information, such as in characterizing generation. In finite-element meshes, by contrast, the Jacobian of an extremely distorted element becomes negative, a standard mesh-quality warning sign.

For constrained least-squares problems with lb <= x <= ub, where the matrix C is very large, perhaps too large to be stored, one can proceed by using a Jacobian multiply function; for example, consider the case where C is a 2n-by-n matrix based on a circulant matrix. Relatedly, if the Jacobian matrix F'(x*) is nonsingular at a solution of (1), convergence of Newton's method is guaranteed at a quadratic rate from any initial point x0 in a neighborhood of x* [4,10].

An important feature of the EKF is that the Jacobian in the equation for the Kalman gain serves to correctly propagate, or "magnify", only the relevant component of the measurement; a common related exercise is to calculate the Jacobian matrix of a quaternion parameterisation. From tables of matrix derivatives, the Jacobian of the inverse map on n x n matrices is J_X(X^{-1}) = (-1)^n det(X)^{-2n}, and analogous tables exist for the Hessian matrix.

In robot toolboxes, jacob0(q, options) is the Jacobian matrix (6xN) for the robot in pose q (1xN), where N is the number of robot joints. Section 2 summarizes Pryce's structural analysis (SA); to our knowledge, currently no open-source analytical chemical Jacobian tool exists.

For eigenvalues, the Jacobi algorithm works by diagonalizing 2x2 submatrices of the parent matrix until the sum of the off-diagonal elements of the parent matrix is close to zero.
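A minimal sketch of that sweep for a real symmetric matrix -- our own toy implementation, not the JACOBI_EIGENVALUE library routine mentioned earlier:

import numpy as np

def jacobi_eigen(A, tol=1e-10, max_rotations=10000):
    # Jacobi eigenvalue iteration: repeatedly zero the largest
    # off-diagonal entry with a 2x2 plane rotation.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle that annihilates A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[p, p] = c; G[q, q] = c
        G[p, q] = s; G[q, p] = -s
        A = G.T @ A @ G
        V = V @ G
    return np.diag(A), V   # eigenvalues, eigenvectors as columns

M = [[2.0, 1.0], [1.0, 3.0]]
vals, vecs = jacobi_eigen(M)
print(vals)   # matches np.linalg.eigvalsh(M)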
This Jacobian, or Jacobian matrix, is one of the most important quantities in the analysis and control of robot motion. The complexity of the Jacobian matrix in the conventional method leads to high execution time and memory requirements [8]. A frequently asked question is how to calculate the Jacobian matrix for a given pair of functions with respect to two parameters (p and t); the answer is always the same: differentiate each output with respect to each parameter. For a Jacobian, instead of calculating an average gradient, you calculate the gradient for each sample separately. The computation of the signature matrix is presented in Section 4. In Abaqus CAE, the user can only specify the applied displacement and time as boundary conditions.

In diffuse optical tomography (DOT) the Jacobian is used to solve the inverse problem, and usually the FE mesh used to calculate the Jacobian is kept small to reduce the ill-posedness, the number of unknowns, the memory requirement and the computation time. The EIT-toolbox helper mentioned earlier begins (line numbers stripped, comment truncated in the source):

function J = calc_jacobian( fwd_model, img )
% CALC_JACOBIAN: calculate jacobian from an inv_model
%
% J = calc_jacobian( fwd_model, img )
% J = calc_jacobian( img )
% calc Jacobian on fwd_model at the conductivity given
% in the image (fwd_model is for forward and reconstruction)
%
% For reconstructions on dual meshes, the interpolation matrix
% is defined as fwd_model. [...]

In R, the vcov methods for aov, lm, glm, mlm (and, where applicable, summary.lm etc.) take a logical argument, complete, indicating whether the full variance-covariance matrix should be returned in the case of an over-determined system where some coefficients are undefined. In curve fitting more generally, the iteration attempts to find a solution in the nonlinear least squares sense.
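Since several fragments here concern fitting in the nonlinear least squares sense, here is a hedged Gauss-Newton sketch; the function names and the toy exponential model are ours:

import numpy as np

def gauss_newton(residual, jac, x0, tol=1e-10, max_iter=50):
    # Gauss-Newton iteration minimising ||residual(x)||^2.
    # residual maps n parameters to m residuals (m >= n);
    # jac returns the (m, n) Jacobian of the residuals.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jac(x)
        # solve the linearised least-squares problem J dx = -r
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy fit: y = a * exp(b * t), data generated with a = 2, b = 0.5
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, [1.0, 0.0]))   # -> approximately [2.0, 0.5]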
In circuit mesh analysis the same linear-algebra toolkit appears: (5) solve the matrix for the unknown mesh currents using Cramer's rule (it is simpler here, although you can still use the Gaussian method as well), and (6) use the solved mesh currents to solve for the desired circuit quantity.

For inverse kinematics, one technique for "inverting" the Jacobian is simply to use the transpose of the Jacobian matrix instead of the inverse; the Jacobian is already an approximation to f, so this cheats a little more, and it is much faster. In forward kinematics, by contrast, the joint values are given and we have to calculate the position of any point in the work volume of the robot.

In automatic differentiation, a function such as logsumexp_vjp returns a vector-Jacobian product (VJP) operator: a function that right-multiplies its argument g by the Jacobian matrix of logsumexp, without explicitly forming the matrix's coefficients. If a function has two outputs, its Jacobian has two rows. In finite-element software, the theory manual's section on large volume changes with NLGEOM notes that for total-form constitutive laws an exact consistent Jacobian C is required. When changing variables in an integral, step 3 is: include a Jacobian. Chapter 11, on integration over manifolds, prepares for the concluding chapter on the great theorems of classical vector calculus: the theorems of Green, Gauss and Stokes.

Back to dynamics: the Jacobian of a smooth ODE system, evaluated at an equilibrium, decides local stability. If the real part of all the eigenvalues is negative, then solutions converge (locally) to the equilibrium; if the real part of the dominant eigenvalue is greater than 0, the equilibrium point is unstable.
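A small numerical check of that eigenvalue criterion; the helper name is ours, and the damped-pendulum Jacobian is a standard textbook example:

import numpy as np

def is_locally_stable(jacobian_at_eq):
    # Linear stability test: all eigenvalues of the Jacobian at the
    # equilibrium must have negative real part.
    eig = np.linalg.eigvals(np.asarray(jacobian_at_eq, dtype=float))
    return bool(np.all(eig.real < 0))

# Damped pendulum linearised at the hanging equilibrium (g/L = 1, damping 0.5)
J = [[0.0, 1.0], [-1.0, -0.5]]
print(is_locally_stable(J))   # True: nearby solutions converge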
A simple method to calculate mechanism mobility with the Jacobian is described in Mechanism and Machine Theory 43(9):1175-1185 (2008). In the finite element method, the displacement in an element e with nodes i, j, k is approximated by a displacement function assembled from Ne, the matrix of shape functions. The Jacobian matrix [J] itself is named after the 19th-century German mathematician Carl Jacobi (1804-1851).

For manipulator velocity control, write the velocity relationship as xdot = J(q) qdot. Three different cases are discussed below. Case 1: the Jacobian matrix is invertible; then from xdot = J qdot (32) we get J^{-1} xdot = J^{-1} J qdot (33), i.e. qdot = J^{-1} xdot (34), and a solution exists only if J is invertible. (One paper presents the use of residue arithmetic for the exact computation of the manipulator pseudo-inverse Jacobian, to obviate round-off error.) Exercise: develop a program to calculate the Jacobian matrix and to simulate resolved-rate control for the planar 3R robot.
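To make Case 1 concrete, a sketch for a planar 2R arm (a simpler cousin of the 3R exercise; link lengths and poses are made up for illustration):

import numpy as np

def planar_2r_jacobian(q, l1=1.0, l2=1.0):
    # Jacobian of the end-effector position (x, y) of a planar 2R arm
    # with joint angles q = (q1, q2) and link lengths l1, l2, where
    # x = l1*cos(q1) + l2*cos(q1+q2), y = l1*sin(q1) + l2*sin(q1+q2).
    q1, q2 = q
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

# One resolved-rate step: joint velocities realising a desired tip velocity.
q = np.array([0.3, 0.8])                 # away from the q2 = 0 singularity
xdot = np.array([0.1, 0.0])              # desired end-effector velocity
qdot = np.linalg.solve(planar_2r_jacobian(q), xdot)   # qdot = J^{-1} xdot
print(qdot)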
However, if the Jacobian is singular at some point, Newton's method may fail; the Jacobian matrix in such a problem is the matrix whose elements are the partial derivatives of the residual equations. That is, consider a system of 2 equations in 2 exogenous variables, y1 = f1(x1, x2) and y2 = f2(x1, x2): the Jacobian matrix associated to this system has entry (i, j) equal to the partial derivative of fi with respect to xj. In vector calculus generally, the Jacobian matrix of a vector-valued function in several variables is the matrix of all its first-order partial derivatives.

Jacobian change of variables is a technique that can be used to solve integration problems that would otherwise be difficult: to conduct a change of variables we need the value of the Jacobian, which is the determinant of the matrix composed of the partial derivatives of the transformation. When that matrix is square it has a determinant, which gives us the area information. The determinant of a matrix is frequently used in calculus, linear algebra and advanced geometry; more specifically, if A is a matrix and U a row-echelon form of A obtained by row interchanges and row additions, then |A| = (-1)^r |U|, where r is the number of interchanges. (On a TI calculator, press [MENU] -> Matrix & Vector -> Determinant to paste the Det command; in Mathematica, to use JacobianMatrix you first need to load the Vector Analysis Package using Needs["VectorAnalysis`"].)

Two assumptions are made on the Jacobi method: (1) the system given by Ax = b has a unique solution, and (2) the coefficient matrix A has no zeros on its main diagonal, i.e. the diagonal entries are nonzero. A related linear-algebra question: how to calculate the inverse of a sum of Kronecker products with the identity matrix.

In robotics, the matrix in the velocity relationship above is called the Jacobian matrix and is a function of q: the Jacobian is a function of the current pose, and each term in it represents how a change in the specified joint angle affects the spatial location of the end effector. Singularity-robust schemes work on the basis of estimates of the smallest singular values of the Jacobian matrix [Chiaverini, 1993].

Finally, the Jacobian of the gradient has a special name: the Hessian matrix, which in a sense is the "second derivative" of the scalar function of several variables in question.
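That "Jacobian of the gradient" remark can be verified symbolically; a SymPy sketch with an arbitrary test function of our choosing:

import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + x*y**2                   # any smooth scalar function
grad = sp.Matrix([f.diff(x), f.diff(y)])
H = grad.jacobian([x, y])           # Hessian = Jacobian of the gradient
print(H)                            # Matrix([[6*x, 2*y], [2*y, 2*x]])
print(H == sp.hessian(f, (x, y)))   # True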
Instead, it is more efficient to keep everything in matrix/vector form. Expanding about a point, f(x0 + dx) = f(x0) + J(x0) dx + smaller-order terms, which is exactly where the Jacobian enters most numerical schemes.

Documentation from different systems describes the same object. Maple: "Jacobian Matrix and Jacobian -- calculate the Jacobian matrix and Jacobian (determinant) of a set of multivariate functions." Symbolic toolkits expose jacobian(X), which calculates the Jacobian matrix (the derivative of a vectorial function) of a vector of expressions f_i(x_1, ..., x_n). C++ AD libraries offer signatures such as matrix<Type> hessian(Functor f, vector<Type> x) for the Hessian of a scalar-valued function, and online tools run the Jacobi algorithm on a symmetric matrix A. The Jacobian for the conversion from Euler angles to quaternions is derived in a technical report by Nikolas Trawny and Stergios Roumeliotis (University of Minnesota, Center for Distributed Robotics, TR-2005-004, November 2005).

A common stumbling block: when calculating the covariance matrix from the residuals vector and the Jacobian matrix, which are optional outputs of MATLAB's lsqcurvefit, one sometimes gets negative values on the diagonal, yet these are variances and should be strictly positive -- typically a sign of a numerical problem such as a rank-deficient Jacobian. The variance-covariance matrix C~ for the untransformed parameters can be obtained using the Jacobian J as C~ = J^T C J, from which the correlation matrix follows by normalising with the standard deviations.
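Following the convention of that formula (a first-order, delta-method propagation; the numbers are illustrative only):

import numpy as np

def propagate_covariance(J, C):
    # Propagate a parameter covariance C through a transformation
    # with Jacobian J, using the convention C_tilde = J^T C J.
    J, C = np.asarray(J, float), np.asarray(C, float)
    return J.T @ C @ J

J = [[1.0, 0.5], [0.0, 2.0]]
C = [[0.04, 0.0], [0.0, 0.09]]
print(propagate_covariance(J, C))

The diagonal of the result contains the propagated variances, which come out non-negative whenever C itself is a valid covariance matrix.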
The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix, which in a sense is the "second derivative" of that function. The eigenvalues of the Hessian drive the second-derivative test; in a typical textbook example the test identifies a saddle point at a critical point such as (0, -1/2). When the Jacobian is square, its determinant carries the geometric content: the absolute value of the determinant of the Jacobian matrix is the scaling factor between "infinitesimal" parallelepiped volumes, which is exactly why, in a change of variables, the integrand is multiplied by the (absolute) determinant of the Jacobian of the function that maps the new variables to the old.

Jacobians also do routine work in statistics. For any predicted level indexed by i in a regression, the (i, j) element of the Jacobian is the derivative of predicted level i with respect to regressor j; the Jacobian matrix of the gradient function over the training dataset plays the same role in gradient-based fitting. In maximum-likelihood covariance estimation, the variance-covariance matrix C~ for the untransformed parameters can be obtained from the covariance C of the transformed parameters using the Jacobian J of the transformation, C~ = J^T C J, after which a correlation matrix can be read off in the usual way. A finite-difference Jacobian and this covariance transform are sketched below.
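A forward-difference Jacobian of the kind described earlier, plus the covariance transform just quoted, can be sketched in a few lines. The test function, step size, and identity covariance are arbitrary illustrative choices of ours:

```python
import numpy as np

def numerical_jacobian(func, x, h=1e-7):
    # Forward differences: column j approximates the partials w.r.t. x_j
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = h
        J[:, j] = (func(x + step) - fx) / h
    return J

def transform_covariance(C, J):
    # Covariance transform as quoted above: C~ = J^T C J
    return J.T @ C @ J

# Example: Jacobian of g(x, y) = (x*y, x + y) at (2, 3)
g = lambda v: np.array([v[0] * v[1], v[0] + v[1]])
J = numerical_jacobian(g, np.array([2.0, 3.0]))
print(np.round(J, 4))                     # [[3. 2.] [1. 1.]]
print(transform_covariance(np.eye(2), J))
```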
Genetic Diversity of Streptococcus suis Strains Isolated from Pigs and Humans as Revealed by Pulsed-Field Gel Electrophoresis

Florence Berthelot-Hérault,1 Corinne Marois, Marcelo Gottschalk,2 and Marylène Kobisch

1 Agence Française de Sécurité Sanitaire des Aliments, Laboratoire d'Etudes et de Recherches Avicoles et Porcines, Unité de Mycoplasmologie-Bactériologie, 22440 Ploufragan, France
2 Groupe de Recherche sur les Maladies Infectieuses du Porc, Faculté de Médecine Vétérinaire, Université de Montréal, St. Hyacinthe, Québec, Canada, J2S7C6

For correspondence: [email protected]
DOI: 10.1128/JCM.40.2.615-619.2002

The genetic diversity of 123 Streptococcus suis strains of capsular types 2, 1/2, 3, 7, and 9, isolated from pigs in France and from humans in different countries, was evaluated by pulsed-field gel electrophoresis (PFGE) of DNA restricted with SmaI. The method was highly discriminative (D = 0.98), results were reproducible, and the PFGE analysis was easy to interpret. Among all S. suis strains, 74 PFGE patterns were shown. At 60% homology, three groups (A, B, and C) were identified, and at 69% homology, eight subgroups (a to h) were observed. Strains isolated from diseased pigs or from humans were statistically clustered in group B, especially in subgroup d. By contrast, S. suis strains isolated from clinically healthy pigs were preferentially included in subgroup b of group A. Relationships could be established between capsular types 1/2, 3, and 9 and groups A, e, and B, respectively. S. suis strains isolated from humans were homogeneous, and a very high level of association between these strains and four DNA patterns was observed. The PFGE used in this study is a very useful tool for evaluating the genetic diversity of S. suis strains, and it would be used for epidemiological investigations.

Streptococcus suis is recognized as an important swine pathogen worldwide and is associated with cases of meningitis, arthritis, septicemia, and sudden death (12, 21). Pigs can also be clinically healthy carriers, and S. suis has been isolated from the upper respiratory tract, nasal cavities, and palatine tonsils of these animals (2). Thirty-five capsular types have been described for this microorganism, with serotype 2 being the most prevalent capsular type in France, followed by types 1/2, 3, 9, and 7 (4). S. suis is also a zoonotic agent, responsible for meningitis and septicemia in humans (12).

Virulence markers, such as muramidase released protein, extracellular factor, and suilysin (7, 23), were described for S. suis, but their presence does not always correlate with virulence of S. suis strains (4, 8). Other virulence factors have been suggested, but their role in the pathogenesis of the infection has not been demonstrated (4, 10).

Molecular typing of S. suis was previously carried out to define the diversity of strains and to distinguish virulent from nonvirulent strains. Multilocus enzyme electrophoresis was used with Australian capsular type 2 strains to determine association between genetic patterns and specific isolates responsible for clinical problems in piggeries (15). Canadian S. suis type 2 isolates were also investigated by restriction endonuclease analysis, followed by hybridization with a ribosomal DNA probe, to detect correlation between the source of the isolates and DNA patterns (3, 11). The 16S rRNA of S. suis was also studied to elicit relationships between virulence of S. suis strains (types 2 and 1) and ribotypes (18, 20).
A random amplified polymorphic DNA (RAPD) analysis was described for S. suis to define whether strains from diseased pigs exhibited particular RAPD patterns (6). These studies have described the diversity among S. suis isolates, and some relationships between genetic patterns and isolates specific to clinical problems were shown, but such results have been obtained by studying limited parts of the bacterial genome. More recently, a macrorestriction of the whole S. suis genome associated with pulsed-field gel electrophoresis (PFGE) was used to study German S. suis isolated from swine (1). The present study employs PFGE analysis with French strains isolated from swine and from human cases of meningitis in different countries.

S. suis strains. Ninety-seven epidemiologically unrelated S. suis strains isolated from swine and 26 strains isolated from human meningitis cases worldwide were studied. S. suis strains of capsular types 2, 1/2, 9, 7, and 3 and autoagglutinable strains were isolated from pigs suffering from meningitis, septicemia, or arthritis (n = 67) and from nasal cavities or palatine tonsils (n = 30) of clinically healthy pigs. The S735 reference strain of capsular type 2, obtained from M. Gottschalk, Faculté de Médecine Vétérinaire, Université de Montréal, Saint-Hyacinthe, Québec, Canada, was included in the study. Biochemical and capsular typings of these strains were carried out as previously described (4). To verify the pattern stability of S. suis after in vivo passages, two porcine field strains of capsular type 2 (strains 166" and 65) were inoculated into specific-pathogen-free pigs from a closed experimental environment as previously described (5). Both strains, isolated from different organs of pigs during the first week postinfection, were analyzed by PFGE.

PFGE. For each S. suis strain, two independent extractions of DNA were performed to verify the reproducibility of patterns. Bacteria were grown for 18 h in 8 ml of Todd-Hewitt broth (Difco Laboratories, Detroit, Mich.). Preparation of the genomic DNA for PFGE analysis was further performed as described by Rolland et al. (19) with some modifications; the lysis solution did not contain mutanolysin (0.01 M Tris-HCl [pH 7.6], 1 M NaCl, 0.5% Sarkosyl, 1 mg of lysozyme/ml). The DNA was digested with 25 U of SmaI (Roche Diagnostics, Meylan, France) at 25°C for 24 h, washed in 0.1 M EDTA, and subjected to electrophoresis in a 1% agarose gel (Tebu, Le Perray en Yvelines, France) in Tris-borate-EDTA (50 mM Tris, 45 mM borate, 0.5 mM EDTA [pH 8.4]) (Gibco-BRL, Cergy Pontoise, France) using a contour-clamped homogeneous electric field apparatus (CHEF-DRIII; Bio-Rad Laboratories). Pulse times were ramped from 1.2 to 30 s over 18 h at 200 V. PFGE patterns were detected by UV transillumination after ethidium bromide staining (0.1 μg/ml) for 1 h followed by water washing for 1 h. Lambda phage concatemers were used as the DNA size standard (Ozyme, Montigny Le Bretonneux, France).

Statistical analysis of PFGE patterns. The dendrogram representing the genetic relationships between the 123 S. suis strains was drawn using the Biogene package (Vilber-Lourmat, Marne la Vallée, France) as previously described (14). The unweighted pair group method with arithmetic mean was used with a confidence interval of 7.5%.
The numerical index of discrimination (D) was calculated using the equation defined by Hunter and Gaston (13):

$$D = 1 - \frac{1}{N(N-1)} \sum_{j=1}^{S} n_j (n_j - 1)$$

where D is the probability of two unrelated strains being placed into different typing groups, N is the total number of strains in the sample population, S is the total number of described types, and n_j is the number of strains belonging to the jth type. Statistical analysis was performed to analyze relationships between S. suis PFGE patterns and virulence, capsular types, and species (pig or human origins). The Omnistat program (Hauer-Jensens, Little Rock, Ark.) was used with the chi-square test (n > 5) or the Fisher exact test (n ≤ 5). Differences between groups were considered significant when probabilities were lower than 0.05.

Reproducibility and stability of PFGE patterns. Reproducible results were observed, because a similar pattern was shown for each S. suis strain after the two independent DNA extractions. The in vivo stability was also verified after experimental infection. The same PFGE patterns with respect to size and number of fragments were obtained for isolates from different organs for each of the two strains (strains 166" and 65).

Genetic diversity of strains defined by PFGE. PFGE patterns after restriction with SmaI were characterized by 5 to 12 bands (Fig. 1, lanes 2 and 3) in a 48.5- to 508-kb size range. Among the 123 S. suis isolates, 74 PFGE patterns were identified, each corresponding to one to seven strains (Fig. 2). The index of discrimination was 0.986. The genetic relationships between the 123 isolates of S. suis are presented in the dendrogram (Fig. 2), and they diverged by up to 56% (44% homology). At 60% homology, three PFGE groups, A, B, and C, were identified, and at 69% homology, eight subgroups, a to h, were observed (Fig. 2).

FIG. 1. PFGE patterns generated after SmaI macrorestriction of the S. suis genome. Lanes 1 and 10, DNA molecular size marker; lanes 2 and 3, patterns 55 and 11, presenting 5 and 12 bands, respectively; lanes 4 and 5, strains isolated from healthy carrier pigs, belonging to patterns P23 and P73, respectively; lanes 6, 7, and 8, patterns P57, P59, and P60, respectively, corresponding to S. suis strains isolated from humans; lane 9, an S. suis strain belonging to pattern P62.

FIG. 2. Genetic relationships between the 123 S. suis strains, as estimated by clustering analysis of PFGE patterns obtained after macrorestriction with SmaI. The classification and divergence of strains were calculated by the unweighted pair group method with arithmetic mean, and a confidence interval of 7.5% was used. The species from which the strains were isolated, capsular types, origins, and numbers of strains for each PFGE pattern are reported in the dendrogram. Abbreviations (keyed to superscript letters in the column headings): (a) P, pig; H, human. (b) AUT, autoagglutinable. (c) S, septicemia; PT, palatine tonsils; NC, nasal cavities; A, arthritis; M, meningitis; nd, not done; ref, reference strain.
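As an illustrative aside (ours, not the authors'), the Hunter-Gaston discrimination index defined above is straightforward to compute from a list of type sizes. A minimal Python sketch, using hypothetical pattern counts rather than this study's data:

```python
def discrimination_index(type_counts):
    # Hunter-Gaston (Simpson-type) index:
    # D = 1 - [1 / (N(N-1))] * sum_j n_j (n_j - 1)
    N = sum(type_counts)
    return 1.0 - sum(n * (n - 1) for n in type_counts) / (N * (N - 1.0))

# Hypothetical example: 10 strains falling into PFGE types of sizes
# 3, 2, 2, 1, 1, 1 -- not data from this study.
print(discrimination_index([3, 2, 2, 1, 1, 1]))  # 0.888...
```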
Genetic diversity of S. suis strains isolated from diseased or clinically healthy pigs and from humans. The 90 strains isolated from cases of meningitis, arthritis, or septicemia of diseased pigs and humans presented 55 different PFGE patterns (Fig. 2; Table 1). Most of them (77 of 90) were in subgroups b, d, and e. Relationships between these strains and group B (P = 0.002), especially subgroup d (P = 0.018), were significant. Twenty-three PFGE patterns were identified from S. suis strains isolated from the nasal cavities or palatine tonsils of healthy animals, and they were preferentially clustered in group A (P = 0.002), in subgroup b.

TABLE 1. Distribution of PFGE patterns and groups in relation to capsular types and origins of S. suis strains

Genetic diversity of S. suis strains in relation to capsular types. Eighty-four percent (65 of 77) of S. suis type 2 isolates were included in subgroups b, d, and e, but no significant association between capsular type 2 and these PFGE groups was observed (P > 0.05). The same results were observed with S. suis capsular type 7. In contrast, S. suis strains of capsular types 1/2, 3, and 9 were clustered in particular groups (Fig. 1). Ten of the thirteen isolates of capsular type 1/2 (77%) were associated with group A (P = 0.009), and capsular types 3 (5/7) and 9 (9/10) were often clustered (P = 0.01) in subgroup e and group B, respectively.

Genetic diversity of S. suis strains in relation to pig or human origin. S. suis strains isolated from humans were more homogeneous than swine strains. Indeed, the 26 human strains presented only 13 different PFGE patterns, in subgroups b, d, e, and g (Fig. 2; Table 1). In contrast, the 44 capsular type 2 strains isolated from meningitis, septicemia, or arthritis in diseased pigs presented 28 patterns, which were included in all the different PFGE subgroups. Interestingly, 14 of the 26 (54%) human strains were included in only four patterns. A significant association (P = 2 × 10^-8) was observed between the human strains and patterns P57, P59, P60, and P62 (group B, subgroup e).

PFGE is an established technique for analyzing the whole genome of bacteria and studying genetic differences among isolates (14, 19, 24). Other molecular typing methods have been used for S. suis, such as multilocus enzyme electrophoresis, ribotyping, and RAPD (6, 15-17, 20). Recently the use of PFGE for the study of the whole S. suis DNA has been reported (1). In our hands, this method discriminates strongly among S. suis strains (D = 0.98). In addition, results are reproducible, showing identical patterns with PFGE products from different extractions for each strain. Because of the stability of S. suis DNA and the low number of bands obtained in the present study, PFGE analysis is also easy to interpret.

Differences in virulence among S. suis strains belonging to capsular type 2 (and other capsular types) were often reported (21). Many studies were carried out to determine virulence factors in S. suis, but at the present time it is very difficult to distinguish virulent from avirulent strains of S. suis (9, 21). In the present study, a significant statistical diversity was recorded between strains isolated from diseased pigs and those recovered from clinically healthy animals, since the two sets were clustered in different PFGE subgroups, subgroups d and b, respectively. A few strains of both categories were clustered in the same PFGE patterns. The possibility that healthy carrier pigs harbor strains capable of causing disease under specific circumstances cannot be ruled out (21). Interestingly, the virulence of three S. suis strains, isolates 93, 166", and 65, was previously evaluated in vivo in specific-pathogen-free pigs (5). They were isolated from diseased (93 and 166") or clinically healthy (65) pigs.
The virulent strains (93 and 166"), which induced severe disease in experimentally infected piglets, were closely related based on the PFGE patterns, both clustering in subgroup e (patterns P57 and P62, Fig. 2). In contrast, the avirulent strain (65) presented a high level of divergence, clustering in the avirulent-strain subgroup (pattern P7, subgroup b) (Fig. 2).

Genetic heterogeneity of S. suis was previously reported, especially among strains of capsular type 2, the capsular type most often associated with disease (21, 22). In the present work, the study was extended to isolates belonging to the five most important capsular types in France (4). No significant statistical association could be shown between capsular type 2 and PFGE patterns, suggesting considerable variation among isolates of capsular type 2, as previously reported (20, 22). In contrast, relationships between PFGE groups and capsular types 1/2, 3, and 9 were shown, making PFGE of S. suis reliable as an additional means of strain identification.

In this study, a PFGE analysis with isolates from humans was carried out for the first time. Genetic patterns were more homogeneous than those for strains isolated from pigs, with most of them clustering in patterns P57, P59, P60, and P62. Interestingly, some S. suis strains isolated from humans originated from different countries (The Netherlands and France) yet presented the same PFGE pattern, as observed previously with RAPD analysis and ribotyping (6, 18). S. suis strains isolated from humans and from pigs were clustered in the same patterns (group B, patterns P34, P45, P57, P59, and P62) (Fig. 1 and 2). As previously shown by RAPD analysis (6), this agrees with S. suis being a zoonotic agent, one which could be transmitted from pigs to humans.

In conclusion, because of all the advantages presented for the PFGE technique used in this study, it is believed that this technique can be efficiently used during further epidemiological investigations of S. suis infections.

We thank Sandrine Gueguen and Thierry Ogel for their technical assistance and Gwennola Ermel for her advice about PFGE. This research was supported by Fonds Européens d'Orientation et de Garantie Agricole.

Received 6 August 2001. Returned for modification 8 November 2001. Accepted 4 December 2001.

REFERENCES

1. Allgaier, A., R. Goethe, H. J. Wisselink, H. E. Smith, and P. Valentin-Weigand. 2001. Relatedness of Streptococcus suis isolates of various serotypes and clinical backgrounds as evaluated by macrorestriction analysis and expression of potential virulence traits. J. Clin. Microbiol. 39:445-453.
2. Arends, J. P., N. Harwig, M. Rudolphy, and H. C. Zanen. 1984. Carrier rate of Streptococcus suis capsular type 2 in palatine tonsils of slaughtered pigs. J. Clin. Microbiol. 20:945-947.
3. Beaudouin, M., J. Harel, R. Higgins, M. Gottschalk, M. Frenette, and J. I. MacInnes. 1992. Molecular analysis of isolates of Streptococcus suis type 2 by restriction-endonuclease-digested DNA separated on SDS-PAGE and by hybridization with an rDNA probe. J. Gen. Microbiol. 138:2639-2645.
4. Berthelot-Hérault, F., H. Morvan, A. M. Kéribin, M. Gottschalk, and M. Kobisch. 2000. Production of muramidase released protein (MRP), extracellular factor (EF) and haemolysin by field isolates of Streptococcus suis capsular type 2, 1/2, 9, 7 and 3 isolated from swine in France. Vet. Res. 31:473-479.
5. Berthelot-Hérault, F., R. Cariolet, A. Labbé, M. Gottschalk, J. Y. Cardinal, and M. Kobisch. 2001. Experimental infection in specific pathogen free piglets with French strains of Streptococcus suis capsular type 2. Can. J. Vet. Res. 65:196-200.
6. Chatellier, S., M. Gottschalk, R. Higgins, R. Brousseau, and J. Harel. 1999. Relatedness of Streptococcus suis type 2 isolates from different geographic origins as evaluated by molecular fingerprinting and phenotyping. J. Clin. Microbiol. 37:362-366.
7. Gottschalk, M., S. Lacouture, and J. D. Dubreuil. 1995. Characterization of Streptococcus suis capsular type 2 haemolysin. Microbiology 141:189-195.
8. Gottschalk, M., A. Lebrun, H. Wisselink, J. D. Dubreuil, H. Smith, and U. Vecht. 1998. Production of virulence-related proteins by Canadian strains of Streptococcus suis capsular type 2. Can. J. Vet. Res. 62:75-79.
9. Gottschalk, M., R. Higgins, and S. Quessy. 1999. Dilemma of the virulence of Streptococcus suis strains. J. Clin. Microbiol. 37:4202-4203.
10. Gottschalk, M., and M. Segura. 2000. The pathogenesis of the meningitis caused by Streptococcus suis: the unresolved questions. Vet. Microbiol. 76:259-272.
11. Harel, J., R. Higgins, M. Gottschalk, and M. Bigras-Poulin. 1994. Genomic relatedness among reference strains of different Streptococcus suis serotypes. Can. J. Vet. Res. 58:259-262.
12. Higgins, R., and M. Gottschalk. 1999. Streptococcal diseases, p. 563-578. In B. E. Straw, S. Allaire, W. L. Mengeling, and D. J. Taylor (ed.), Diseases of swine, 8th ed. Iowa University Press, Ames, Iowa.
13. Hunter, P. R., and M. A. Gaston. 1988. Numerical index of discriminatory ability of typing systems: an application of Simpson's index of diversity. J. Clin. Microbiol. 26:2465-2466.
14. Marois, C., F. Dufour-Gesbert, and I. Kempf. 2001. Comparison of pulsed-field gel electrophoresis with random amplified polymorphic DNA for typing of Mycoplasma synoviae. Vet. Microbiol. 79:1-9.
15. Mwaniki, C. G., I. D. Robertson, D. J. Trott, R. F. Atyeo, B. J. Lee, and D. J. Hampson. 1994. Clonal analysis and virulence of Australian isolates of Streptococcus suis type 2. Epidemiol. Infect. 113:321-334.
16. Okwumabua, O. G. I., J. Staats, and M. M. Chengappa. 1995. Detection of genomic heterogeneity in Streptococcus suis isolates by DNA restriction fragment length polymorphisms of RNA genes (ribotyping). J. Clin. Microbiol. 33:968-972.
17. Power, E. G. M. 1996. RAPD typing in microbiology--a technical review. J. Hosp. Infect. 34:247-265.
18. Rasmussen, S. R., F. M. Aarestrup, N. E. Jensen, and S. E. Jorsal. 1999. Associations of Streptococcus suis serotype 2 ribotype profiles with clinical disease and antimicrobial resistance. J. Clin. Microbiol. 37:404-408.
19. Rolland, K., C. Marois, V. Siquier, B. Cattier, and R. Quentin. 1999. Genetic features of Streptococcus agalactiae strains causing severe neonatal infections, as revealed by pulsed-field gel electrophoresis and hylB gene analysis. J. Clin. Microbiol. 37:1892-1898.
20. Smith, H. E., M. Rijnsburger, N. Stockhofe-Zurwieden, H. J. Wisselink, U. Vecht, and M. A. Smits. 1997. Virulent strains of Streptococcus suis serotype 2 and highly virulent strains of Streptococcus suis serotype 1 can be recognized by a unique ribotype profile. J. Clin. Microbiol. 35:1049-1053.
21. Staats, J. J., I. Feder, O. Okwumabua, and M. M. Chengappa. 1997. Streptococcus suis: past and present. Vet. Res. Commun. 21:381-387.
22. Staats, J. J., B. L. Plattner, J. Nietfeld, S. Dritz, and M. M. Chengappa. 1998. Use of ribotyping and hemolysin activity to identify highly virulent Streptococcus suis type 2 isolates. J. Clin. Microbiol. 36:15-19.
23. Vecht, U., H. J. Wisselink, M. L. Jellima, and H. E. Smith. 1991. Virulence of Streptococcus suis type 2. Infect. Immun. 59:3156-3162.
24. Vicki, A. L., D. B. Jernigan, A. Tice, J. D. Kellner, and M. C. Roberts. 2000. A novel multiresistant Streptococcus pneumoniae serogroup 19 clone from Washington state identified by pulsed-field gel electrophoresis and restriction fragment length patterns. J. Clin. Microbiol. 38:1575-1580.

Journal of Clinical Microbiology, February 2002, 40(2):615-619; DOI: 10.1128/JCM.40.2.615-619.2002
\begin{document}

\title{Poisson--Dirichlet Limit Theorems in Combinatorial Applications via Multi-Intensities}

\begin{abstract}
We present new, exceptionally efficient proofs of Poisson--Dirichlet limit theorems for the scaled sizes of irreducible components of random elements in the classic combinatorial contexts of arbitrary assemblies, multisets, and selections, when the component generating functions satisfy certain standard hypotheses. The proofs exploit a new criterion for Poisson--Dirichlet limits, originally designed for rapid proofs of Billingsley's theorem on the scaled sizes of log prime factors of random integers (and some new generalizations). Unexpectedly, the technique applies in the present combinatorial setting as well, giving, perhaps, a long sought-after unifying point of view. The proofs depend also on formulas of Arratia and Tavar{\'e} for the mixed moments of counts of components of various sizes, as well as formulas of Flajolet and Soria for the asymptotics of generating function coefficients.
\end{abstract}

\section{Introduction}

\subsection{Summary}

The goal of this paper is to provide new and exceptionally efficient proofs of very general Poisson--Dirichlet limit theorems for the scaled sizes of components of random elements in the classic combinatorial contexts of assemblies, multisets, and selections, when the component generating functions satisfy certain standard hypotheses. The proofs depend on a fairly new characterization of convergence in distribution to a Poisson--Dirichlet process, originally designed to yield a rapid proof of Billingsley's 1972 theorem on the asymptotic scaled sizes of log prime factors of random integers. That work, including new generalizations of Billingsley's result to factorizations in wide classes of normed arithmetic semigroups, was presented in \cite{AKM}.

Poisson--Dirichlet limit theorems are also available for the asymptotic scaled sizes of irreducible components of various random combinatorial objects. The earliest such result was that of Kingman \cite{King3} and Vershik and Schmidt \cite{VS}, applying to irreducible cycles of random permutations distributed uniformly or, more generally, according to the Ewens sampling formula. Analogous results for other random combinatorial objects were eventually discovered, and by the early 1990s quite general theorems applying uniformly to members of quite general families were known. The first such, due to Jennie Hansen \cite{JH}, exploited generating function structure. Subsequent versions, due to Arratia, Barbour and Tavar{\'e}~\cite{ABT}, invoked significantly weaker hypotheses and used much different techniques in combinatorial stochastic processes. Further generalizations continue to be published.

Unexpectedly, the techniques of \cite{AKM} were found to apply to the combinatorial regime as well, in the presence of generating functions, giving new proofs of results going beyond those of \cite{JH} though not as general as those of \cite{ABT}; but all the classical cases are included, and the new proofs are extremely rapid. The commonality of technique may be viewed as providing a unifying framework for Billingsley's theorem and the combinatorial limit theorems, one which has long been sought. (See e.g. the unpublished \cite{King2} by J.F.C.
Kingman.\footnote{Kingman proposes one possible unifying vantage point if natural density is replaced by harmonic density, in Billingsley's theorem, but he is apparently dissatisfied with this because recovering the original theorem then seems to require the intervention of quite nontrivial auxiliary results.})

\subsection{History, Definitions, Notation}

The origins of these limit theorems lie in the following earlier results. Let $p_1 \ge p_2 \ge \cdots$ be the prime factors of a random integer chosen uniformly from $[1..n]$, and let
$$ L_j:= \log p_j /\log n. $$
Or, let $l_1 \ge l_2 \ge \cdots$ be the cycle lengths of a uniform random permutation of length $n$, and let
$$ L_j:= l_j /n. $$
In either case we have
$$ \lim_{n\to \infty} \Pr(L_1 \le t) = \rho(1/t) $$
where $\rho(\cdot)$ is Dickman's $\rho$, the unique continuous function on $[0,\infty)$ satisfying
$$ \rho(t) = 1 \mbox{ for } 0 \le t \le 1 $$
and
\begin{equation}\label{rhorecur}
t\rho(t) = \int_{t-1}^t \rho(u) du \mbox{ for } 1 \le t <\infty,
\end{equation}
as shown for random integers by Dickman \cite{Dick30} in 1930 and for random permutations by Goncharov \cite{Gon} in 1944. Nowadays these can be viewed as respective corollaries of a pair of later results giving the limiting distributions of the entire joint processes $L_1,L_2,\dots$, in the two cases.

To state these results we first define the Poisson--Dirichlet distribution: Let $U_1,U_2,\dots$ be iid uniform on $[0,1]$, and define a process $G_1,G_2,\dots$, also with values lying in $[0,1]$, via
$$ G_1 = 1-U_1, G_2 = U_1(1-U_2), G_3 = U_1U_2(1-U_3),\dots. $$
Then the Poisson--Dirichlet process (PD for short) $X_1 \ge X_2\ge \cdots$ is the outcome of sorting $G_1,G_2,\dots$ into non-increasing order, i.e.
$$ (X_1\ge X_2 \ge \cdots) = {\bf{SORT}}(G_1,G_2,G_3\dots). $$
It follows at once from the definition of the $G_i$'s that $X_1 + X_2 + \cdots =1$ almost surely, and it can be shown that for each $k>0$, $X_1,\dots,X_k$ have the marginal distribution with joint density function
$$ f_k(x_1,x_2,\dots,x_k) = \frac{1}{x_1\cdots x_k} \rho\left(\frac{1-x_1-\cdots-x_k}{x_k}\right) $$
on $\{1 \ge x_1 \ge x_2 \ge \cdots \ge 0\}\cap\{x_1 + \cdots + x_k \le 1\}$. (In particular, for $k=1$ it follows from this, together with \eqref{rhorecur}, that $\Pr( X_1 \le t) = \rho(1/t)$.) This explicit distribution function provides an alternative characterization of PD. There are a number of other characterizations, though we will not need them here.

More generally, for any real parameter $\theta >0$ the Poisson--Dirichlet$(\theta)$ process (PD($\theta)$) is defined by replacing each $U_i$ with $U_i^{1/\theta}$ in the definition above. (So in particular, PD(1) is just PD.) Then the density functions $f_k$ are replaced by
$$ f_{\theta,k}(x_1,\dots,x_k) = \frac{e^{\gamma\theta}\theta^k \Gamma(\theta) x_k^{\theta-1}}{x_1\cdots x_k} g_{\theta}\left(\frac{1-x_1-\cdots-x_k}{x_k}\right) $$
where $g_{\theta}$ is the unique continuous function on $(0,\infty)$ satisfying
$$ g_{\theta}(t) = \frac{e^{-\gamma \theta} t^{\theta-1}}{\Gamma(\theta)} \mbox{ for } 0 < t \le 1 $$
and
$$ tg_{\theta}(t) = \theta \int^t_{t-1}g_{\theta}(u)du \mbox{ for } 1 \le t. $$
Again there are alternative characterizations which might be more convenient in other contexts; see \cite{ABT} for full details.
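For intuition only (this plays no role in the proofs), the stick-breaking definition above is easy to simulate. The following short Python sketch draws an approximate PD($\theta$) sample by truncating the construction at a fixed number of sticks, a choice made purely for illustration:

\begin{verbatim}
import random

def pd_sample(theta, n_sticks=1000):
    # Truncated stick-breaking: G_k = V_1 ... V_{k-1} (1 - V_k),
    # where V_i = U_i^(1/theta) and the U_i are iid uniform on [0,1].
    remaining, parts = 1.0, []
    for _ in range(n_sticks):
        v = random.random() ** (1.0 / theta)
        parts.append(remaining * (1.0 - v))
        remaining *= v
    return sorted(parts, reverse=True)  # SORT into non-increasing order

# Sanity check: for theta = 1, Pr(X_1 <= t) = rho(1/t) = 1 + log t on
# [1/2, 1], so the median of X_1 should be exp(-1/2), about 0.6065.
xs = sorted(pd_sample(1.0)[0] for _ in range(20000))
print(xs[10000])
\end{verbatim}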
We can now state the two original Poisson--Dirichlet limit theorems:

\begin{theorem}[Billingsley, 1972]\label{classicBill} Let $p_1 \ge p_2 \ge \cdots $ be the prime factors of a uniform random integer $N \in [1..n]$, and define $L_{jn} = \log p_j / \log n$, where the latter sequence is padded out with zeros. Then for each $k>0$, as $n \to \infty$ the joint distribution of $L_{1n},\dots,L_{kn}$ converges weakly to the initial $k$-dimensional joint PD(1) distribution. \end{theorem}

\begin{theorem}[Kingman, 1977; Vershik and Schmidt, 1977]\label{classicKing} Let $l_1 \ge l_2 \ge \cdots$ be the cycle lengths of a uniform random $n$-long permutation, and define $L_{jn} = l_j /n$, where the latter sequence is padded out with zeros. Then for each $k>0$, as $n \to \infty$ the joint distribution of $L_{1n},\dots,L_{kn}$ converges weakly to the initial $k$-dimensional joint PD(1) distribution. \end{theorem}

(Since, as noted, $F(t) = \rho(1/t)$ is the cumulative distribution function of the initial PD variable, the earlier results of Dickman and Goncharov are indeed respective corollaries of the two theorems just stated.)

Billingsley proved his result before PD was a named and studied distribution, deriving his limiting $k$-dimensional distributions in the form of certain series expansions not obviously involving Dickman's $\rho$, of which he appears to have been unaware. In turn, neither Kingman, who had both named and made a study of PD($\theta$) in \cite{King3}, nor Vershik and Schmidt, makes any mention of Billingsley's theorem.
\begin{comment}
-- assuming he was aware of it, presumably he would have had no reason to suspect that prime factors had any particular connection with irreducible cycle lengths or that Billingsley's exotic series expansions had any connection with the recently defined process whose definition was in terms of other, standard workaday processes.
\end{comment}
At any rate, as remarked in \cite{King2} it was not until 1984 that a publication \cite{Ll} noted that the two limiting distributions were identical, which immediately raised the question of why this should be so.

Over the years, as already mentioned, analogues of the permutation result were proved for other random decomposable objects, with various unified methods of proof. Billingsley's theorem, on the other hand, remained an isolated result in probabilistic number theory until very recently (see \cite{AKM}); and although several different proofs have appeared, the methods have seemed different from those used for the combinatorial results, and the coincidence of limiting distributions has been felt to lack adequate explanation. In the next section, however, we will use the recent criterion, from \cite{AKM}, for convergence to PD to give very brief, self-contained proofs of both theorems above by means of a common method; and then we will go on to use the companion characterization of convergence to the more general PD($\theta$), for the promised combinatorial applications, in Section~\ref{gencomb}.

\section{Characterization via multi-intensities}\label{charsec}

The following characterizations of convergence to PD and to PD($\theta$) were originally presented in \cite{AKM}. In what follows, all (random) multisets are (almost surely) at most countable, with only finite multiplicities.
Given a sequence $A_n$ of random multisubsets of $(0,1]$, let $T_n$ denote the sum of the elements of $A_n$, counting multiplicities; and for any multiset $A$ and set $S$ in $(0,1]$ let $|A \cap S|$ denote the cardinality of the intersection, also counting multiplicities. Also, let $L(n)=(L_1(n),L_2(n),\dots)$, where $L_i(n) := $ the $i^{\rm th}$ largest element of $A_n$ if $i \le |A_n|$, and $L_i(n) := 0$ if $i > |A_n|$. (Our hypotheses will ensure that, almost surely, no $A_n$ possesses a positive accumulation point, ensuring in turn that the elements can actually be placed in a non-increasing sequence.)

Here, first, is the PD-only version:

\begin{proposition}\label{PDonly} Suppose that $T_n \le 1$ almost surely, for all $n$, and that for any collection of disjoint closed intervals $I_i = [a_i,b_i] \subset (0,1], i = 1,\dots,k$ satisfying $b_1 + \cdots + b_k <1$, for any $k \ge 1$, we have \begin{equation}\label{simple} \liminf \mathbb{E} \, |A_n \cap I_1|\cdots |A_n \cap I_k| \ge \prod_{i=1}^k (\log(b_i) - \log(a_i)) \end{equation} as $n \to \infty.$ Then $L(n)$ converges in distribution to $(L_1,L_2,\dots)$, the Poisson--Dirichlet distribution with parameter 1. \end{proposition}

For arbitrary PD($\theta$) we also have

\begin{proposition}\label{maintheta} Let $\theta >0$. Suppose that $T_n \le 1$ almost surely, for all $n$, and that for some $ -\infty < \alpha, \beta < \infty$ with $\alpha + \beta =1 - \theta,$ it is the case that for any collection of disjoint closed intervals $I_i = [a_i,b_i] \subset (0,1], \ i=1,\dots,k$ satisfying $b_1+\cdots + b_k < 1$, for any $k \ge 1$, we have \begin{multline}\label{theta intensineq} \liminf_{n \to \infty} \mathbb{E} \, \prod_{i=1}^k | A_n \cap I_i| \ \ge\\ \frac{\theta^k}{(1-a_1-\cdots-a_k)^{\alpha}(1-b_1-\cdots-b_k)^{\beta}}\prod_{i=1}^k (\log(b_i) - \log(a_i)). \end{multline} Then $L(n)$ converges in distribution to $(L_1,L_2,\dots)_{\theta}$, the Poisson--Dirichlet process with \mbox{parameter $\theta$.} \end{proposition}

Both propositions are proved in \cite{AKM}. Note that the condition $T_n \le 1$ ensures that $A_n$ can possess no positive accumulation point, as promised.
\begin{comment}
Also, the inequalities in the hypotheses as stated in \cite{AKM} actually require the products $$ \prod_{i=1}^k \frac{b_i - a_i}{b_i} $$ on the right hand sides, not $$ \prod_{i=1}^k (\log(b_i) - \log(a_i)). $$ But since $(\log(b_i) - \log(a_i)) \ge \frac{b_i - a_i}{b_i}$ by the mean value theorem, the present formulas suffice wherever they are applicable.
\end{comment}

To show at once how rapidly results can be derived using the above criteria, here are complete, self-contained proofs of Theorems \ref{classicBill} and \ref{classicKing}, much briefer than any by previous methods. (The present proof of Theorem \ref{classicBill} was already presented in \cite{AKM}, but since it is so brief, and to make the point that there is a single over-arching methodology, we reproduce it here.)

To prove Theorem \ref{classicBill}, let $A_n$ be the multiset whose elements are $\log p/\log n$ for all prime factors $p$ of a random $1 \le N \le n$, including any multiple copies, and let $A_n^1$ be the underlying set, i.e. with all positive multiplicities truncated down to 1. For any prime $p$ at all let $I(p|N)$ denote the indicator function of the event $p|N$. Note that since $\log p_1 + \log p_2 + \cdots = \log N \le \log n$ we automatically get $T_n \le 1$.
Note also that for any test interval $[a_i,b_i]$ we have $\log p / \log n \in A_n \cap [a_i,b_i]$ if and only if $n^{a_i} \le p \le n^{b_i}$. Thus, writing things out explicitly for $k = 2$, we get
$$ E\{ |A_n \cap [a_1,b_1]|\ |A_n \cap [a_2,b_2]|\} \ge E\{ |A^1_n \cap [a_1,b_1]|\ |A^1_n \cap [a_2,b_2]|\} $$
$$ = E \sum_{n^{a_1}\le p \le n^{b_1}}I(p|N) \sum_{n^{a_2} \le q \le n^{b_2}}I(q|N) $$
$$ = \sum_{n^{a_1} \le p \le n^{b_1}} \sum_{n^{a_2} \le q \le n^{b_2}} E \{I(pq|N)\} $$
$$ =\sum_{n^{a_1}\le p \le n^{b_1}} \sum_{n^{a_2} \le q \le n^{b_2}}\frac{1}{pq} + O\left(\frac{n^{b_1 + b_2}}{n}\right) $$
$$ =(\log b_1 -\log a_1)(\log b_2 -\log a_2) + o(1). $$
The second equality exploits the fact that always $p \ne q$ since they must lie in disjoint intervals, while the third equality depends on the estimate $\left|\Pr(p|N) -1/p\right| \le 1/n$ together with the fact that there are at most $n^{b_1}n^{b_2}$ summands. The fourth uses Mertens' formula together with the hypothesis that $b_1 + b_2 < 1$. Now take the $\liminf$ as $n \to \infty$. QED.

To prove Theorem \ref{classicKing}, let $A_n$ be the multiset whose elements are the quotients $l/n$ where $l$ ranges over the lengths of all irreducible cycles of a random permutation of length $n$. Trivially we have $T_n = 1$. Note that for any test interval $[a_i,b_i]$ we have $l/n \in A_n \cap [a_i,b_i]$ if and only if $a_in \le l \le b_in$. Also, for any positive integer $j$ let $C_j$ be the number of cycles of length $j$ in a random permutation of length $n$. Then it is well known that provided $j_1 + \cdots + j_k \le n$ we have $E\{C_{j_1}\cdots C_{j_k}\} = \frac{1}{j_1\cdots j_k}$ exactly. Thus, again writing things out explicitly for $k = 2$, we get
$$ E\{ |A_n \cap [a_1,b_1]| |A_n \cap [a_2,b_2]|\} $$
$$ = E \sum_{a_1n \le j \le b_1n}C_j \sum_{a_2n \le k \le b_2n}C_k $$
$$ = \sum_{a_1n \le j \le b_1n} \sum_{a_2n \le k \le b_2n} E\{C_j C_k\} $$
$$ = \sum_{a_1n \le j \le b_1n} \sum_{a_2n \le k \le b_2n} \frac{1}{jk} = \left(\sum_{a_1n \le j \le b_1n}\frac{1}{j}\right)\left(\sum_{a_2n \le k \le b_2n} \frac{1}{k}\right) $$
$$ = (\log b_1 -\log a_1)(\log b_2 -\log a_2) + o(1), $$
where the requirement $j+k \le n$ is enforced by the hypothesis $b_1 + b_2 <1$, and this time we need only know about harmonic sums. Again take the lim inf of both sides, QED.

\section{PD limit theorems for general combinatorial families}\label{gencomb}

In this section and the next we present our main results, namely, new proofs of generalizations of the permutation result to randomly selected decomposable combinatorial objects. That is, we prove Poisson--Dirichlet limit theorems for the non-increasing sequence of scaled sizes of irreducible components of random decomposable objects, as total size grows. We cover the three classical families, namely, Labeled Assemblies, Multisets, and Selections. The objects are chosen equiprobably or, more generally, with their selection probabilities ``tilted'' to be proportional to
$$ \phi^K, $$
where $\phi >0$ is some fixed parameter and $K = K(\mbox{object})$ is the total number of irreducible components of an object. (Thus $\phi=1$ corresponds to equiprobable selection.)

\subsection{The Master Theorem}

The proofs for the three families will be patterned after the proof of Theorem \ref{classicKing}, as presented in Section~\ref{charsec}. Specifically, they will all be corollaries of the following Master Theorem.
We suppose given a family of objects of various weights or sizes $n$, where $n$ is a positive integer; that there are finitely many objects of each size $n$; that each object decomposes somehow into finitely many irreducible objects, uniquely up to ordering; and that the size of an object is equal to the sum of the sizes of its irreducible components. Given an object of size $n$, if $K$ is the total number of irreducible components, let $l_1,l_2,\dots,l_K$ be the sizes of those components, arranged in nonincreasing order. We suppose that any multiple copies are always included so that, e.g., we have $l_1 + l_2 + \cdots + l_K =n$. Let $C_1,\dots,C_n$ be the numbers of components of our object, of sizes $1,\dots,n$ respectively; then also $C_1 + 2C_2 + \cdots +nC_n =n$. Note that if indices $i_1,\dots,i_k$ have a sum exceeding $n$, then necessarily at least one of the counts $C_{i_1},\dots,C_{i_k}$ must vanish.

We are given a sequence of probability distributions, one for each $n$, on the objects of size $n$. Thus $K$, the sizes $l_1,l_2,\dots,l_K$ and the counts $C_1,\dots,C_n$ all become random variables, for each value of $n$. Also consider a sequence of families, one family for each $n$, of $k$-tuples $(i_1,\dots,i_k)$ of distinct indices $1 \le i_1,\dots,i_k \le n$, for fixed $k$, with all ratios $i_1/n,\dots,i_k/n$ bounded away from $0$ as $n \to \infty$, uniformly over the whole sequence. Call such a sequence a {\em good sequence of families of $k$-tuples}.

We can now state the Master Theorem:

\begin{theorem}\label{masterthm}
\begin{comment}
Fix an integer $k>0$. For each $n>0$, suppose there is a family of $k$-tuples $(i_1,\dots,i_k)$ of distinct indices $1 \le i_1,\dots,i_k \le n$, with $i_1/n,\dots,i_k/n$ bounded away from $0$ as $n \to \infty$, uniformly over all the collections. For each $k$-tuple let $m:=i_1 + \cdots + i_k$.
\end{comment}
Suppose our combinatorial objects and probability distributions are such that for some $\theta >0$ and for any good sequence of families of $k$-tuples $(i_1,\dots,i_k)$, the expected values $E\{C_{i_1}\cdots C_{i_k}\}$ satisfy \begin{equation}\label{masterineq} E\{C_{i_1}\cdots C_{i_k} \} = \frac{\theta^k}{(1-m/n)^{1- \theta}}\frac{1}{i_1 i_2\cdots i_k}\left(1+o(1)\right), \end{equation} uniformly over the sequence of families, where we write $m = m(i_1,\dots,i_k) := i_1 + \cdots + i_k$. Then, as $n \to \infty$ the joint distribution of the initial $k$-long sequence of scaled sizes $$ l_1/n,\dots,l_k/n $$ converges to the initial $k$-dimensional projection of PD($\theta$). \end{theorem}

\begin{proof} We apply Proposition \ref{maintheta}. In the notation of that proposition, let $A_n$ be the multisubset containing the elements $l_1/n,l_2/n,\dots$, and let $I_j = [a_j,b_j] \subset (0,1], j = 1,\dots,k,$ be disjoint intervals with $a_j>0$ for $j=1,\dots,k$ and with $b_1 + \cdots + b_k <1$.
Then we have
$$ E\{\: |A_n \cap [a_1,b_1]|\cdots |A_n \cap [a_k,b_k]|\:\} = E\{ \sum_{a_1n < i_1 \le b_1n}C_{i_1} \cdots \sum_{a_kn < i_k \le b_kn}C_{i_k}\:\} $$
$$ = \sum_{a_1n < i_1 \le b_1n} \cdots \sum_{a_kn < i_k \le b_kn} E\{C_{i_1} \cdots C_{i_k}\} $$
$$ = \sum_{a_1n < i_1 \le b_1n} \cdots \sum_{a_kn < i_k \le b_kn} \frac{\theta^k}{(1-m/n)^{1- \theta}}\frac{1}{i_1 i_2\cdots i_k}\left(1+o(1)\right) $$
where we may appeal to \eqref{masterineq} in the last step because we claim that the sequence of families of $k$-tuples of indices arising, as $n \to \infty$, from the $k$-fold summations, forms a good sequence: The indices in each $k$-tuple are distinct because the intervals $I_1,\dots,I_k$ are disjoint, and the ratios $i_1/n,\dots,i_k/n$ are uniformly bounded away from $0$ because for each $j$ we have $ 0< a_j \le i_j/n$ where the numbers $a_1,\dots,a_k$ are independent of $n$.

Note that always, $a_1 + \cdots + a_k \le m/n \le b_1 + \cdots + b_k$. If $\theta \le 1$, we may then write
$$ E\{\: |A_n \cap [a_1,b_1]|\cdots |A_n \cap [a_k,b_k]|\:\} $$
$$ \ge \frac{\theta^k}{(1-a_1 - \cdots - a_k)^{1- \theta}} \left(\sum_{a_1n < i_1 \le b_1n} \frac{1}{i_1}\right)\cdots \left(\sum_{a_kn < i_k \le b_kn} \frac{1}{i_k}\right) \left(1+o(1)\right) $$
$$ = \frac{\theta^k}{(1-a_1 - \cdots - a_k)^{1- \theta}}\prod_{j=1}^k(\log b_j -\log a_j) + o(1). $$
If $\theta >1$ we proceed in the same way, except that $-a_1-\cdots-a_k$ is replaced with $-b_1-\cdots-b_k$. In either case, take $\liminf_{n \to \infty}$ of both ends of the inequality and apply Proposition \ref{maintheta}, with $\alpha = 1-\theta$ and $\beta =0$ or with $\beta = 1-\theta$ and $\alpha =0$, respectively. \end{proof}

Of course, in any application of Theorem \ref{masterthm}, the burden will be the establishment of \eqref{masterineq} with the required uniformity guarantees. Conveniently, well-honed tools for this already exist.

\subsection{Exp-log asymptotics}

We will use two formulas of Flajolet and Soria \cite{FSo}, \eqref{comp} and \eqref{phiobj} below, which we extract from the exposition in~\cite[Section~VII.2]{FSe}, together with several others in the same spirit. They are conveniently packaged consequences of asymptotic formulas of Flajolet and Odlyzko, especially designed for certain combinatorial applications.\footnote{Formulas \eqref{comp} and \eqref{phiobj} originally appeared as preliminary results in \cite{FSo}, where they were used as ingredients for various other limit theorems.}

Let $G(z)$ be a function of a complex variable analytic near $z=0$, whose series expansion at $0$ has real non-negative coefficients with finite radius of convergence $\rho$. We assume, with Flajolet and Soria, that for some $\theta>0$ and some real $\lambda$
\begin{description}
\item[FS 1] $\rho$ is the unique singularity of $G(z)$ on $|z| = \rho$;
\item[FS 2] $G(z)$ is continuable to a slightly larger open domain $\Delta$ consisting of a disc of radius exceeding $\rho$ centered at $0$, but possibly excluding a closed acute-angled wedge domain $|\arg(z-\rho)| \le \gamma$ with vertex at $\rho$, for some $0 \le \gamma < \pi/2$;
\item[FS 3] we have $$ G(z) = \theta \log \frac{1}{1-z/\rho} + \lambda +O\left(\frac{1}{(\log(1-z/\rho))^2}\right) $$ as $z\to \rho$ in $\Delta$.
\end{description}
For later reference, note that $$ \log \frac{1}{1-z/\rho} $$ itself certainly satisfies all three items, continuing analytically, as it does, to the complement of the real ray $z \ge \rho$. Given such a $G$, let $\phi >0$.
Then the formulas of Flajolet and Soria are as follows: the power series coefficients of $G$ around $z=0$ satisfy \begin{equation}\label{comp} [z^n]G(z) = \frac{\theta}{n}\rho^{-n}\left(1 + O\left((\log n)^{-2}\right)\right), \end{equation} and if $F(z) = \exp(\phi G(z))$ then \begin{equation}\label{phiobj} [z^n]F(z) = \frac{e^{\phi \lambda}}{\Gamma(\phi \theta)}n^{\phi \theta-1} \rho^{-n} \left(1 + O\left((\log n)^{-2}\right)\right). \end{equation}

Now add the restriction that $$ \rho < 1 $$ and assume that the numbers $g_i := [z^i]G(z)$, the power series coefficients of $G$, are {\em integers}. Then if $F(z) = \exp(\phi G(z)+ R(z))$, where \begin{equation}\label{selR} R(z):= \sum_{j \ge 2}(-1)^{j+1}\phi^j G(z^j)/j, \end{equation} we have \begin{equation}\label{selobj} [z^n]F(z) = \frac{Ce^{\phi \lambda }}{\Gamma(\phi \theta)}n^{\phi \theta-1} \rho^{-n} \left(1 + O\left((\log n)^{-2}\right)\right) \end{equation} for a certain nonzero constant $C$ to be described.

Finally, also restrict $\phi$ to $$ \rho^{-1} > \phi >0 $$ but release the restriction of the numbers $g_i$ to integers. Then if $F(z) = \exp(\phi G(z)+ R(z))$, where this time \begin{equation}\label{mulR} R(z):= \sum_{j \ge 2}\phi^j G(z^j)/j, \end{equation} we get \begin{equation}\label{mulobj} [z^n]F(z) = \frac{Ce^{\phi \lambda}}{\Gamma(\phi \theta)}n^{\phi \theta-1} \rho^{-n} \left(1 + O\left((\log n)^{-2}\right)\right) \end{equation} with $C \ne 0$, same as \eqref{selobj}, once again.

As mentioned, \eqref{comp} and \eqref{phiobj} are proved in~\cite[Section~VII.2]{FSe}.\footnote{Formula \eqref{phiobj} is actually proved there for $\phi = 1$; but the more general formula is a trivial corollary of that one.} As for \eqref{mulobj}, we claim that $R(z)$ as defined in \eqref{mulR} is analytic in an open disc about $0$ of some radius exceeding $\rho$. If so, then since $$ R(z) - R(\rho) = O(z-\rho) = O\left(\frac{1}{(\log(1-z/\rho))^2}\right) $$ near $z = \rho$, we find that \eqref{mulobj} is a corollary of \eqref{phiobj}, with $C= \exp(R(\rho))$, if $\phi G(z) + R(\rho)/\phi + (R(z) - R(\rho))/\phi$ replaces $G$ in {\bf FS 3}. To see that $R(z)$ is as claimed, note that for $j\ge 2$ each function $G(z^j)$ is analytic in the open disc of radius $\rho^{1/j} \ge \rho^{1/2} >\rho$, and also that they are uniformly $O(z^2)$ in any closed disc of radius less than $\rho^{1/2}$. Also, when $\phi < \rho^{-1}$, we have $|\phi z|<1$ in the open disc of radius $\min\{\phi^{-1},\rho^{1/2}\}$; and this radius exceeds $\rho$. Therefore, the series defining $R(z)$ converges uniformly and absolutely in any compact subset of that disc. (This argument too was given by Flajolet and Soria, for $\phi =1$.) This proves \eqref{mulobj}.

Although the same argument also works for \eqref{selobj}, given \eqref{selR}, provided $\phi < \rho^{-1}$, for larger $\phi$ it gets the series \eqref{selR} defining $R(z)$ to converge only for $|z| < \phi^{-1} \le \rho$, which is not good enough. To derive \eqref{selobj} for all positive $\phi$ we need to look under the hood a bit. From {\bf FS 3} we have $$ F(z) = \exp(G(z)) = e^{\lambda} (1-z/\rho)^{-\theta}\left(1 + O\left(\frac{1}{(\log(1-z/\rho))^2}\right)\right), $$ and it is {\em this} formula from which \eqref{phiobj} follows via results of Flajolet and Odlyzko; see the discussion in~\cite[Section~VII.2 ]{FSe}. From $F(z) = \exp(\phi G(z)+ R(z))$, then, we get \begin{equation}\label{expression} F(z) = e^{R(z)}e^{\phi \lambda} (1-z/\rho)^{-\phi \theta}\left(1 + O\left(\frac{1}{(\log(1-z/\rho))^2}\right)\right).
\end{equation} Regardless of the behavior of $R(z)$, we claim that \begin{lemma}\label{Sanal} For any fixed $\phi >0$, $S(z):= e^{R(z)}$ continues analytically to a disc around $0$ of radius greater than $\rho$, and $S(\rho) \ne 0$. \end{lemma} If so, then from replacing $e^{R(z)}$ in \eqref{expression} with $S(z) = S(\rho) \times \frac{S(z)}{S(\rho)} = S(\rho)\left(1 + O(z-\rho)\right)$ near $z=\rho$, we immediately deduce \eqref{selobj}, with $C= S(\rho)$. So it remains to prove the lemma, which we now do. Fix $\phi > 0$. If we restrict to the domain $$ D = \{z: |z| < \min(\phi^{-1},\rho^{1/2})\}, $$ then from rearranging the Taylor expansions of $\log$ terms we get \begin{equation}\label{Rseries} R(z) = \sum_{i \ge 1}g_i\left(\log(1+\phi z^i) - \phi z^i\right), \end{equation} a valid identity between analytic functions on $D$. Now pick an index $\xi >0$ for which $\phi \rho^{\xi/2} <1$, and set $$ T(z): = \sum_{i \ge \xi}g_i\left(\log(1+\phi z^i) - \phi z^i\right). $$ Since on any compact subset of $\{|z| < \rho^{1/2}\}$ the terms $g_i\left(\log(1+\phi z^i) - \phi z^i\right)$ are $O(\phi^2g_i z^{2i})$, uniformly for $i \ge \xi$, we see from $\eqref{comp}$ that $T(z)$ defines an analytic function on the open disc $\{|z| < \rho^{1/2}\}$. Also, since we have assumed that the $g_i$'s are non-negative integers, the expressions $(1+\phi z^i)^{g_i}$ are polynomials, hence certainly single valued and analytic on the same disc. Therefore the formula $$ S(z) = e^{R(z)} = \left(\prod_{1\le i < \xi} \left((1+\phi z^i)\exp(-\phi z^i)\right)^{g_i}\right) e^{T(z)} $$ continues $S(z)$ analytically to the open disc $\{|z| < \rho^{1/2}\}$; and by inspection\footnote{Note that since $S(z)$ does possess zeroes for $|z| < \rho^{1/2}$ when $\phi$ is large enough, $R(z)$ itself {\em cannot} then continue to that domain.} we have $S(\rho) \ne 0$. This completes the proof of the lemma and, hence, of \eqref{selobj}. \section{The three combinatorial families} \subsection{Assemblies} A permutation of length $n$ may be thought of as a partition of $[n] := \{1,\dots,n,\}$ into disjoint nonempty blocks, where on each block of size $i$ one of $m_i = (i-1)!$ possible cycle structures is imposed. More generally, given a sequence $m_1,m_2,\dots$ of positive integers an {\em assembly of size $n$} is a partition of $[n]$ into disjoint nonempty blocks, where on each block of size $i$ one of $m_i$ possible structures is imposed, called ``irreducible''.\footnote{In examples of interest the numbers $m_i$ are not arbitrary, of course -- they are the numbers of irreducible combinatorial objects of some sort, of sizes $i$.} If $$ M(x) = \sum_{i \ge 1} m_i x^i /i! $$ and $$ Q(x) = \sum_{n \ge 0} q(n)x^n/n! $$ are the exponential generating functions for the numbers of irreducible objects of sizes $i$ and the total numbers of assemblies on the set $[n]$, then it is well-known that assemblies are characterized by the formula \begin{equation} Q(x) = \exp(M(x)). \end{equation} (Conventionally, we have $q(0) = 1$.) Further, if $q(n,k)$ is the number of objects of size $n$ and with $k$ irreducible components, then if we write $$ q_{\phi}(n) = \sum_{k=1}^n q(n,k)\phi^k $$ and $$ Q(x,\phi) = \sum_{n \ge 0} q_{\phi}(n)x^n/n! $$ for some positive parameter $\phi$, we have \begin{equation} Q(x,\phi) = \exp(\phi M(x)). \end{equation} See, e.g.,~\cite[Section 9.1]{AT94}. Given a family of assemblies, i.e. 
given the sequence $m_1,m_2,\dots$, suppose an assembly of size $n$ is picked at random, either uniformly or, more generally, from the tilted distribution with parameter $\phi$. Let $C_1,\dots,C_n$ be the counts of its irreducible components of sizes $1$ through $n$, respectively. For any $k$-tuple of distinct positive indices $i_1,\dots,i_k$ with $m = i_1+ \cdots + i_k \le n$, the following expression for the mixed moment $E\{C_{i_1}\cdots C_{i_k} \}$ is a special case of formula (126) of \cite{AT94}, specialized down to simple products: \begin{equation}\label{assembmom} E\{C_{i_1}\cdots C_{i_k} \} = \rho^{-m}\frac{n!}{q_{\phi}(n)}\frac{q_{\phi}(n-m)}{(n-m)!}\prod_{j=1}^{k}\left(\frac{\phi m_{i_j}\rho^{i_j}} {i_j!}\right). \end{equation} We can combine \eqref{assembmom} with the Flajolet-Soria formulas discussed above. We suppose we are given a family of assemblies with exponential generating function $M(x) = \sum_{i \ge 1} m_i x^i /i!$ for the numbers of irreducible objects of sizes $1,2,\dots$. \begin{lemma}\label{assemblemma} If conditions \textbf{FS 1, FS 2,} and \textbf{FS 3} are satisfied for $G(z) = M(z)$, then for arbitrary $\phi > 0$ we have \begin{equation}\label{assembineq} E\{C_{i_1}\cdots C_{i_k} \} = \frac{(\phi \theta)^k}{(1-m/n)^{1-\phi \theta}}\frac{1}{i_1 i_2\cdots i_k} \left(\prod_{j=1}^k\left(1 + O\left(\frac{1}{(\log i_j)^2}\right)\right) +o(1)\right). \end{equation} \end{lemma} \begin{proof} With $F(z) = \exp(\phi M(z)) = Q(z,\phi)$, plugging \eqref{comp} and \eqref{phiobj} into \eqref{assembmom} immediately yields \eqref{assembineq}, uniformly over all $k$-tuples of distinct positive indices $i_1,\dots,i_k$ with $i_1+ \cdots + i_k \le n$. \end{proof} We can now give the main result: \begin{theorem} Let $l_1 \ge l_2 \ge \cdots $ be the irreducible component sizes of a random assembly on the set $[n]$, chosen from a tilted distribution with parameter $\phi$, and define $L_{jn} = l_j/n$, where the latter sequence is padded out with zeros. Suppose the Flajolet-Soria conditions \textbf{FS 1, FS 2,} and \textbf{FS 3} are satisfied when $G(z) = M(z)$. Then for each $k>0$, the joint distribution of $L_{1n},\dots,L_{kn}$ converges to the initial $k$-dimensional joint PD($\phi \theta$) distribution. \end{theorem} \begin{proof} Formula \eqref{assembineq} in Lemma \ref{assemblemma} looks ready to serve as formula \eqref{masterineq} in Theorem \ref{masterthm}, except for the $k$-fold product towards the end of \eqref{assembineq}. However we are allowed to restrict attention, when applying that theorem, to good families of $k$-tuples of indices, i.e. with a uniform positive lower bound hypothesis on the ratios $i_1/n,\dots,i_k/n$. This converts the $k$-fold product to $(1 + o(1))$. Now apply the theorem. \end{proof} \subsection{Multisets and Selections} Multisets and Selections are sufficiently alike that we can treat them simultaneously, in parallel. A monic polynomial of degree $n$, over some finite field, may be unambiguously identified with the multiset consisting of its irreducible monic factors -- ``multiset'', because some factors could appear with multiplicities; and the degrees add up to $n$. Or, if we are interested only in squarefree polynomials, then the irreducible factors form a set, with no repetition of elements; and the degrees still add up to $n$. More generally, suppose we are given some universe of ``irreducible'' objects having positive integer weights, with exactly $m_i$ different kinds of irreducibles of weight $i$. 
Our two polynomial examples are prototypical of the following two respective constructions. \begin{itemize} \item A {\em combinatorial multiset of weight $n$} is a multisubset of our universe, with total weight $n$. Equivalently, the {\em integer} $n$ is partitioned into positive summands, and for each summand $i$ one of the $m_i$ possible summands of weight $i$ is selected, with replacement. \item A {\em combinatorial selection of weight $n$} is a subset of our universe, of total weight $n$. So all components of an object must be of distinct kind, though distinct components of the same weights are permitted. \end{itemize} So the selection construction could be viewed as a subclass of the multiset construction. Note that any additional structure associated with our universe, for instance the fact that a collection of irreducible polynomials multiplies together to form another polynomial, need not be considered in discussion of counting formulas. For either construction, let $$ M(x) = \sum_{i \ge 1} m_i x^i $$ be the ordinary generating function for the numbers of irreducibles of weight $i$. Also, if $q(n,k)$ denotes the number of multisets of total weight $n$ containing $k$ irreducibles, including multiplicities, or if we let it denote the number of selections of total weight $n$ containing $k$ irreducibles, then in either case, for any given positive $\phi$ write $$ q_{\phi}(n) = \sum_{k = 1} ^n q(n,k)\phi^k $$ and $$ Q(x,\phi) = \sum_{n \ge 0} q_{\phi}(n)x^n. $$ For $\phi =1$, in either case, the series $Q = Q(x,1)$ reduces to the ordinary generating function for the numbers of composite objects of weights $n$. The following two formulas connecting $Q(x,\phi)$ and $M(x)$ are well-known: For multisets we have \begin{equation}\label{multigen} Q(x,\phi) = \prod_{i \ge 1}(1-\phi x^i)^{-m_i} = \exp \left(\sum_{j \ge 1} \phi^j M(x^j)/j \right), \end{equation} and for selections we have \begin{equation}\label{selectgen} Q(x,\phi) = \prod_{i \ge 1}(1+\phi x^i)^{m_i} = \exp \left(\sum_{j \ge 1} (-1)^{j+1}\phi^j M(x^j)/j \right). \end{equation} See, e.g.,~\cite[Section~9.2]{AT94} for~\ref{multigen} and Section~9.3 for \ref{selectgen}. In either construction, given the set of all composite objects of total weight $n$ constructed from some given universe of irreducibles, suppose one object is picked at random according to the tilted distribution with tilting parameter $\phi$. Let $C_1,\dots,C_n$ be the numbers of irreducible components of that object, of weights $1$ through $n$ respectively, including any multiple occurrences. (So again $C_1 + 2C_2 + \cdots + nC_n = n$.) For any sequence of positive indices $i_1, \dots,i_k$ with $i_1 + \cdots + i_k \le n$ we have a formula for the corresponding mixed moment. For the multiset construction it is \begin{equation}\label{multimom} E\{C_{i_1}\cdots C_{i_k}\} = \frac{m_{i_1}\cdots m_{i_k}}{q_{\phi}(n)}\sum_{h_1,\dots,h_k \ge 1} \phi^{h_1 + \cdots + h_k} q_{\phi}(n-h_1i_1 - \cdots - h_ki_k), \end{equation} and for the selection construction it is \begin{equation}\label{selectmom} E\{C_{i_1}\cdots C_{i_k}\} = \frac{m_{i_1}\cdots m_{i_k}}{q_{\phi}(n)}\sum_{h_1,\dots,h_k \ge 1} (-1)^{h_1 + \cdots + h_k+k}\phi^{h_1 + \cdots + h_k} q_{\phi}(n-h_1i_1 - \cdots - h_ki_k). \end{equation} (See formulas (139) and (146) in~\cite[Sections~9.2 and~9.3]{AT94}, respectively. While the authors give explicit formulas only for the individual falling factorial moments, their method of proof easily yields the present formulas as well.) 
Note that because of the expressions $q_{\phi}(n-h_1i_1 - \cdots - h_ki_k)$, the sums have finitely many terms. In our application to Theorem \ref{mulselPD} below only the leading term in each case, where $h_1 = \cdots = h_k =1$, will matter asymptotically. We can marry the Flajolet-Soria asymptotics to the moment formulas \eqref{multimom} and \eqref{selectmom}: \begin{lemma}\label{multlemma} Suppose that conditions \textbf{FS 1, FS 2,} and \textbf{FS 3} are satisfied for $G(z) = M(z)$, the ordinary generating function of the $m_i$'s, and that for the multiset construction we impose $\phi < \rho^{-1}$. For the selection construction we allow $\phi$ to be arbitrarily large. Then in either case we have \begin{equation}\label{multineq} E\{C_{i_1}\cdots C_{i_k} \} = \frac{(\phi \theta)^k}{(1-m/n)^{1-\phi \theta}}\frac{1}{i_1 i_2\cdots i_k} \left(1+O(\lfloor n/i_1 \rfloor\cdots\lfloor n/i_k \rfloor \rho^{\min(i_1,\dots,i_k)} )\right), \end{equation} where $m = i_1 + \cdots + i_k $. \end{lemma} \begin{proof} Note that in the present cases the radius of convergence $\rho$ of $G(z) = M(z)$ must satisfy $\rho < 1$, for the trivial reason that otherwise the coefficients of $G(z)$ as given in \eqref{comp} could not yield integers, for large enough $n$. That being so, substitute \eqref{comp} and either \eqref{mulobj} or \eqref{selobj} into \eqref{multimom} or \eqref{selectmom} respectively. It is then straightforward to get \eqref{multineq}. \end{proof} We can now give the Poisson--Dirichlet limit theorem for multisets and selections. We suppose we are given a universe of irreducibles with ordinary generating function $M(x)$ for the numbers $m_i$ of different kinds of weight $i$, for $i\ge 1$. Let $\rho <1$ be the radius of convergence of $M$. \begin{theorem}\label{mulselPD} Let $l_1 \ge l_2 \ge \cdots $ be the irreducible component sizes of a random multiset or a random selection of weight $n$, chosen from a tilted distribution with parameter $\phi$, where for multisets we suppose that $\phi < 1/\rho$. Define $L_{jn} = l_j/n$, where the latter sequence is padded out with zeros. Suppose the Flajolet-Soria conditions \textbf{FS 1, FS 2,} and \textbf{FS 3} are satisfied when $G(z) = M(z)$. Then for each $k>0$, the joint distribution of $L_{1n},\dots,L_{kn}$ converges to the initial $k$-dimensional joint PD($\phi \theta$) distribution. \end{theorem} \begin{proof} We appeal once again to Theorem \ref{masterthm}. Restricting to good sequences of families of $k$-tuples, together with the fact that $\rho <1$, converts the multiplicative error term in \eqref{multineq} to $1 + o(1)$, as required in \eqref{masterineq}. So the theorem applies. \end{proof} \noindent\textit{Remark}. The necessity for the restriction to $\phi < \rho^{-1}$ for multisets may appear to be an artifact of our complex analytic methodology, but the same restriction is also imposed with the methodology of \cite{ABT}. In fact, as far as we are aware, the limiting behavior for $\phi\ge \rho^{-1}$ is unknown, for multisets. \end{document}
More Mods

What is the units digit for the number $123^{456}$?

Prove that if $a^2+b^2$ is a multiple of 3 then both $a$ and $b$ are multiples of 3.

Novemberish

(a) A four-digit number (in base 10) of the form aabb is a perfect square. Discuss ways of systematically finding this number.
(b) Prove that $11^{10}-1$ is divisible by 100.

Filling the Gaps

Charlie has been thinking about which numbers can be written as a sum of two square numbers. He took a $10\times10$ grid, and shaded the square numbers in blue and the sums of two squares in yellow. He hoped to find a pattern, but couldn't see anything obvious. Vicky suggested changing the number of columns in the grid, so they reduced it by one:

"There seems to be a diagonal pattern."
"If the rows were one shorter, then those diagonals would line up into vertical columns, wouldn't they?"
"Let's try it..."

What do you notice about the positions of the square numbers? What do you notice about the positions of the sums of two square numbers? Can you make any conjectures about the columns in which squares, and sums of two squares, would appear if the grid continued beyond 96? Can you prove any of your conjectures? You might like to look back at the nine-column grid and ask yourself the same questions.

Charlie couldn't write every number as a sum of two squares. He wondered what would happen if he allowed himself three squares. Will any of the numbers in the seventh column be a sum of three squares? Can you prove it?

"We must be able to write every number if we are allowed to include sums of four squares!"
"Yes, but it's not easy to prove. Several great mathematicians worked on it over a long period before Lagrange gave the first proof in 1770."

With thanks to Vicky Neale who created this task in collaboration with NRICH.
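If you want to test many grid widths quickly, a short program can take over the shading. Here is one possible sketch in Python (written for this exercise; the search bound of 1000 is an arbitrary choice):

```python
def shade(width, limit=1000):
    """For a grid with `width` columns numbered 1..width, list the columns
    that contain square numbers and those that contain sums of two squares."""
    squares = {n * n for n in range(1, int(limit ** 0.5) + 1)}
    sums = {a * a + b * b
            for a in range(1, int(limit ** 0.5) + 1)
            for b in range(1, int(limit ** 0.5) + 1)
            if a * a + b * b <= limit}
    col = lambda n: (n - 1) % width + 1  # column holding the number n
    print(width, "columns; squares in", sorted({col(n) for n in squares}),
          "; sums of two squares in", sorted({col(n) for n in sums}))

for w in (10, 9, 8):  # the three grids Charlie and Vicky tried
    shade(w)
```

Running it for the eight-column grid shows the squares confined to columns 1, 4, and 8, and the sums of two squares avoiding columns 3, 6, and 7, which is exactly the vertical-column pattern the dialogue hints at.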
In the easy-to-learn optimization flow, we took an easy-to-explain problem and explained the process of performing combinatorial optimization with an annealing machine by defining requirements, formulating the problem, preparing input data, and executing. Let's take your learning one step further by experiencing the development flow of a combinatorial optimization system using a CMOS annealing machine, through a series of steps from problem requirements definition to execution. This article provides an exercise in which a signal control system for relieving traffic congestion is realized by processing it with a CMOS annealing machine as a combinatorial optimization problem. Although traffic signals are not currently controlled by processing them as a combinatorial optimization problem, the economic loss caused by traffic congestion is said to be enormous, and it is one of the social issues that need to be addressed. Eliminating traffic congestion is expected to be effective in reducing economic losses and energy consumption.

The contents of this page are based on the paper by Toyota Central R&D Labs., Inc. and the University of Tokyo [1] and the blog post by Jij Inc. [2], which is based on the paper, and are reorganized so that you can practice what you have learned in "Easy-to-learn optimization flow" as an exercise for CMOS annealing machines. The knowledge explained in "Easy-to-learn optimization flow" is added to the explanation of requirements definition, constraint setting, formulation, etc. given in the above paper and blog, which are the preliminary steps performed before CMOS annealing, aiming to provide both review and application.

[1] Toyota Central R&D Labs, University of Tokyo, "Traffic Signal Optimization on a Square Lattice with Quantum Annealing".
[2] A blog post by Jij Inc. that explains the same paper in detail: "Signal Control Optimization on a Square Grid Intersection Cluster Using D-Wave".

Defining requirements

What optimization problem can be formulated to solve the practical problem of reducing traffic congestion? The objective of the optimization problem discussed here is to reduce traffic congestion by making cars flow as smoothly as possible. In other words, what we want to achieve is to equalize the number of cars waiting vertically and horizontally at each intersection through signal control. For this problem, let's define the requirements, i.e., the language to communicate to the computer what we want to do.

Definition of roads and signals

Roads are given as grids. All intersections are adjacent to four intersections: up, down, left, and right. Vertical and horizontal signals are installed. (Figure adapted from reference [2].)

Definition of car movement

Vehicles may go straight, turn left, or turn right at intersections. Let $a$ be the fraction of cars going straight at an intersection. The fractions of cars turning left and right are considered to be the same, each expressed as $(1-a)/2$. (A small code sketch of this road model follows below.)

Formulation takes time variation into account

One difficulty with this problem is that trying to eliminate the bias in the number of cars at an intersection can have a positive or negative impact on the number of cars at neighboring intersections, and the conditions for optimization change over time. How do we model this complex situation? First, we define "improving the state" for each and every intersection. To improve the condition of a single intersection means to change one signal to green and the signal in the crossing direction to red.
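To make these requirements concrete, the following minimal Python sketch (our own illustration) encodes the grid road model and the turn fractions. Note that, unlike the idealized requirement that every intersection has four neighbours, border intersections of a finite grid simply get fewer neighbours here, which matches the 9-intersection example used later in this article:

```python
N = 3  # 3x3 grid of intersections, numbered 0..8 row by row

def neighbors(i, n=N):
    """Up/down/left/right neighbours of intersection i on an n x n grid
    (border intersections simply have fewer neighbours)."""
    r, c = divmod(i, n)
    out = []
    if r > 0:     out.append((r - 1) * n + c)  # up
    if r < n - 1: out.append((r + 1) * n + c)  # down
    if c > 0:     out.append(r * n + c - 1)    # left
    if c < n - 1: out.append(r * n + c + 1)    # right
    return out

a = 0.6               # fraction of cars going straight (illustrative value)
turn = (1 - a) / 2    # left- and right-turn fractions are equal
assert abs(a + 2 * turn - 1.0) < 1e-12

print(neighbors(4))   # centre intersection -> [1, 7, 3, 5]
print(neighbors(0))   # corner intersection -> [3, 1]
```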
However, it cannot be left to luck whether operating a single intersection will move the entire area in the direction of eliminating traffic congestion. If a horizontal traffic signal is left green and left unattended, vertical traffic will naturally become congested. Next, we must define the "relationship between intersections $i$ and $j$" and the "relationship between time $t$ and time $t+1$," taking time variation into account.

Formulation

Next, let us formulate the signal control as an Ising model. At this point, we will try to incorporate changes in the time direction by including the "time variation considerations" described in the previous section.

Assign Ising variables to signals

First we assign the Ising variables $+1$ and $-1$ to the signal control, which is the smallest element for "improving the state" of each intersection. If there are more vertical vehicles than horizontal vehicles at intersection $i$, we want to assign the vertical signal to green; that is, $\sigma_i=+1$. If there are more horizontal vehicles than vertical vehicles at intersection $i$, we want to assign the horizontal signal to green; that is, $\sigma_i=-1$. The direction (vertical or horizontal) of the signal to be turned green can thus be set as an Ising variable. Next, for the ultimate goal of optimization, which is to eliminate traffic congestion, we consider modeling with consideration given to the flow of time and the state of each intersection.

Model the amount of cars

The object to be optimized, i.e., the state of the signals, is evaluated through the quantity of cars at the next time. The quantity of cars at the next time can be expressed using the signal assignment $\sigma$ at the immediately preceding time $t$ and the fraction $a$ of cars going straight.

Quantity of cars $q_{ij}, x_i$

Two variables are defined for the quantity of cars: $q_{ij}$ and $x_i$. $q_{ij}$ is used to represent the quantity of cars moving from intersection $j$ to intersection $i$. On the other hand, $x_i$ is a variable that represents the amount of cars waiting at an intersection. Let's first take a closer look at the definition of $q_{ij}$. Let $q_{ij}(t)$ $(\neq q_{ji})$ be the amount of cars on the road connecting intersection $j$ to intersection $i$ at time $t$, and let $\sigma_i(t)$ and $\sigma_j(t)$ be the traffic light parameters at intersections $i$ and $j$ at time $t$, respectively. Then the quantity of cars on road $j \to i$ at time $t+1$ is $$\tag*{Equation(1)} q_{ij} (t+1) = q_{ij}(t) + \frac{s_{ij}}{2} ( - \sigma_i (t) + \alpha \sigma_j (t))$$ where $s_{ij}$ is the road direction parameter, $s_{ij} = +1$ if road $j \to i$ is vertical and $s_{ij} = -1$ if road $j \to i$ is horizontal. Here $q_{ij}(t)$ represents the quantity of cars at the previous time, $-s_{ij}\sigma_i(t)/2$ represents the quantity of cars leaving, and $s_{ij}\alpha\sigma_j(t)/2$ represents the quantity of cars coming in. In other words, equation (1) shows that the quantity of cars at time $t+1$ is the quantity of cars at the previous time, plus the quantity of cars coming in, minus the quantity of cars going out. Note that $\alpha=2a-1$ is substituted to simplify the equation. Next, let's look at how the variable $x_i$ is defined. Using $q_{ij}$, we let $x_i$ denote half the signed sum, over the four adjacent roads, of the quantities of cars; it measures the difference between the amounts of cars waiting vertically and horizontally at intersection $i$.
$$\tag*{Equation(2)} x_{i} (t) = \frac{1}{2} \sum_{j \in N(i)} s_{ij} \cdot q_{ij} (t)$$ In other words, equation (2) aggregates the amounts of vehicles heading from the neighboring intersections $j$ to intersection $i$ and waiting there. $s_{ij}$, which constitutes equation (2), is the direction parameter of the road, and $q_{ij}$ is the amount of cars entering intersection $i$ from intersection $j$. The sum therefore adds the cases where the vertical direction is red and cars are waiting, and subtracts the cases where the horizontal direction is red and cars are waiting. Let us imagine where cars enter and exit in the four directions adjacent to intersection $i$ when $\sigma_i = +1$ and $\sigma_i = -1$, respectively. For intersection $i$, there are four possible intersections $j$; to account for the amount of cars from each of them, the formula sums over them with $\Sigma$. Cars waiting vertically at intersection $i$ may come from the north or from the south, and cars waiting horizontally at intersection $i$ may come from the east or from the west. If the amounts of cars waiting vertically and horizontally are not biased, $x_i(t)$ approaches $0$. From equations (1) and (2), the quantity of cars on the roads adjacent to intersection $i$ at time $t+1$, $x_i(t+1)$, is (using $s_{ij}^2 = 1$) $$ \begin{align*} \tag*{Equation(3)} x_{i} (t+1) &= \frac{1}{2} \sum_{j \in N(i)} s_{ij} \cdot q_{ij} (t+1) \\ &= \frac{1}{2} \sum_{j \in N(i)} s_{ij} \left( q_{ij}(t) + \frac{s_{ij}}{2} ( - \sigma_i (t) + \alpha \sigma_j (t) ) \right) \\ &= x_i (t) + \frac{1}{4} \sum_{j \in N(i)} ( - \sigma_i (t) + \alpha \sigma_j (t) ) \end{align*} $$ Now we have an equation that expresses the relationship between the color of the traffic light at an intersection and the amount of cars.

Cost function

We have now modeled the relationship between the amount of cars and the color of the signals, and we can proceed to optimize it with a CMOS annealing machine. For the relational equation we have established, we can evaluate what kind of operation "makes the state better" with respect to our objective:

If $x_i(t+1) > 0$, there are more cars in the vertical direction, so the vertical signal should be green.
If $x_i(t+1) < 0$, there are more cars in the horizontal direction, so the horizontal signal should be green.

Rewriting this in matrix form for all intersections yields the following equation: $$\tag*{Equation(4)} \boldsymbol{x}(t+1) = \boldsymbol{x} (t) + (- \boldsymbol{I} + \frac{\alpha}{4} \boldsymbol{A} ) \boldsymbol{\sigma} (t)$$ The optimization goal is to equalize the number of vehicles waiting horizontally and vertically at every intersection. In other words, it is the problem of determining the signal assignment $\boldsymbol{\sigma}(t)$ such that $\boldsymbol{x}(t+1)$ is close to $0$. By squaring, $0$ becomes the minimum value. Therefore, the objective function can be set as follows: $$\tag*{Equation(5)} H= \boldsymbol{x}(t+1) ^T \boldsymbol{x}(t+1)$$ This is how the square of a vector is written in matrix notation: the symbol $^T$ denotes the transpose, which swaps the rows and columns of a matrix; to square the vector $\boldsymbol{x}(t+1)$, the transposed $\boldsymbol{x}(t+1)$ becomes a row vector, and the column vector $\boldsymbol{x}(t+1)$ is multiplied by it. The reference link [3] describes this clearly.

Reference [3]: On the product of transposed vectors

Use of "matrices" to facilitate representation of repetitive calculations

This section explains that the relational expression in (3) can be rewritten in matrix form as (4).
A matrix is "a collection of numbers and formulas with similar properties that can be written together in rows and columns to simplify computation" [4].

Reference [4]: What is a Matrix, Meaning/Purpose of Matrices

Although equation (3) establishes, at intersection $i$, a relationship between time $t+1$ and the previous time $t$ (signal color and vehicle volume) involving its four neighboring intersections, it does not express the objective of making the vehicle volume uniform at all intersections in the entire area. Furthermore, it is not yet an energy function of the form whose minimum is attained at the optimal solution. To solve all these problems at once, to create an equation that calculates the state of all intersections, and to represent it simply, we use "matrices" here. For the sake of understanding, let us define 9 intersections arranged in a grid with 3 horizontal rows and 3 vertical columns, and write the relations in matrix form.

$$ \tag*{(3)} x_i(t+1) =x_i (t) + \frac{1}{4} \sum_{j \in N(i)} ( - \sigma_i (t) + \alpha \sigma_j (t) ) $$

$$ \tag*{(4)} \boldsymbol{x}(t+1) = \boldsymbol{x} (t) + (- \boldsymbol{I} + \frac{\alpha}{4} \boldsymbol{A} ) \boldsymbol{\sigma} (t) $$

$$ \tag*{(4')} \boldsymbol{x}(t+1) = \boldsymbol{x}(t) - \boldsymbol{I}\boldsymbol{\sigma}(t) + \frac{\alpha}{4}\boldsymbol{A}\boldsymbol{\sigma}(t) $$

$$ \tag*{(4'')} \begin{pmatrix} x_0(t+1) \\ x_1(t+1) \\ x_2(t+1) \\ x_3(t+1) \\ x_4(t+1) \\ x_5(t+1) \\ x_6(t+1) \\ x_7(t+1) \\ x_8(t+1) \end{pmatrix} = \begin{pmatrix} x_0(t) \\ x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \\ x_5(t) \\ x_6(t) \\ x_7(t) \\ x_8(t) \end{pmatrix} - \begin{pmatrix} \sigma_0(t) \\ \sigma_1(t) \\ \sigma_2(t) \\ \sigma_3(t) \\ \sigma_4(t) \\ \sigma_5(t) \\ \sigma_6(t) \\ \sigma_7(t) \\ \sigma_8(t) \end{pmatrix} + \frac{\alpha}{4} \underbrace{\begin{pmatrix} 0&1&0&1&0&0&0&0&0 \\ 1&0&1&0&1&0&0&0&0 \\ 0&1&0&0&0&1&0&0&0 \\ 1&0&0&0&1&0&1&0&0 \\ 0&1&0&1&0&1&0&1&0 \\ 0&0&1&0&1&0&0&0&1 \\ 0&0&0&1&0&0&0&1&0 \\ 0&0&0&0&1&0&1&0&1 \\ 0&0&0&0&0&1&0&1&0 \end{pmatrix}}_{\boldsymbol{A}} \begin{pmatrix} \sigma_0(t) \\ \sigma_1(t) \\ \sigma_2(t) \\ \sigma_3(t) \\ \sigma_4(t) \\ \sigma_5(t) \\ \sigma_6(t) \\ \sigma_7(t) \\ \sigma_8(t) \end{pmatrix} $$

Matrix representation of the 9 intersections.

The variables $x_i(t)$ and $x_i(t+1)$ in equation (3) represent the state of a single intersection, while $\boldsymbol{x}(t)$ and $\boldsymbol{x}(t+1)$ in the matrix notation of equation (4) are vectors and represent all intersections simultaneously, so the meaning is different. Besides dropping the subscript $i$, the typeface is also usually different in appearance (bold for vectors). (A vector is a matrix whose components form a single column or row.)

Equation (4') distributes the parentheses in (4). Here $\boldsymbol{I}$ is the identity matrix, and $-\boldsymbol{I}\boldsymbol{\sigma}(t)$ is the matrix representation of the $\sigma_i(t)$ term taken outside of $\Sigma$, which was inside $\Sigma$ in (3). Since $\Sigma$ adds the terms of road $j$ over the four directions for each intersection $i$, and $\sigma_i(t)$ does not depend on $j$, that term can be calculated outside of $\Sigma$. Equation (4'') further writes (4') out component by component, with each term corresponding to (4'). The last term on the right-hand side collects the $j$ components that $\Sigma$ adds up. $\boldsymbol{A}$ is the component (adjacency) matrix that specifies the intersections in the four directions relative to each intersection $i$.
In this case, the nine intersections appear in matrix form as above: for intersection 0, it is the adjacent intersections 1 and 3 that are designated as $j$, and in (4'') the $i=0$ relation computes the elements of the first row. Now that we know what is involved in the matrix representation, we shape it into a cost function in the next section.

Organizing constraints

Speaking of constraints, the increase in the number of vehicles at an intersection may cause congestion at adjacent intersections, but this effect has already been taken into account in the modeling up to this point, so there is no need to consider a new constraint for it. There is, however, something else to consider in this problem. Signals change to reduce vehicle congestion, but if they change too frequently, they can impede the flow of vehicles. Therefore, it is necessary to set a penalty (weighting) for cases where the signal changes between adjacent times:

$$\tag*{Equation(6)} Penalty = \eta (\boldsymbol{\sigma}(t)-\boldsymbol{\sigma}(t-1))^T(\boldsymbol{\sigma}(t)-\boldsymbol{\sigma}(t-1))$$

This takes the difference of the signal assignment over the entire area between time $t$ and the previous time $t-1$, and squares it in matrix notation. Each component of $\boldsymbol{\sigma}(t)-\boldsymbol{\sigma}(t-1)$ is $0$ if the signal is the same at times $t$ and $t-1$, and $+2$ (or $-2$) if it is different.

Final form of the cost function

With the derived cost and penalty functions, the energy function of the Ising model for obtaining the optimal solution is as follows:

$$ \tag*{Equation(7)} H = { \boldsymbol{x} (t+1) ^T \boldsymbol{x} (t+1) } + \eta ( \boldsymbol{\sigma}(t) - \boldsymbol{\sigma}(t-1) ) ^T ( \boldsymbol{\sigma}(t) - \boldsymbol{\sigma}(t-1) ) \\ $$

Writing all this down in terms of $\boldsymbol{\sigma}(t)$, we get

$$ \tag*{Equation(8)} H(\boldsymbol{\sigma} (t) ) = \boldsymbol{\sigma}(t) ^T \boldsymbol{J} \boldsymbol{\sigma} (t) + \boldsymbol{h} \boldsymbol{\sigma} (t) + c(t) $$

$\boldsymbol{J}$ and $\boldsymbol{h}$ are as follows; $c(t)$ is a constant term.

$$ \begin{align*} \tag*{Equation(9)} \boldsymbol{J} &= (-\boldsymbol{I} + \frac{\alpha}{4} \boldsymbol{A}) ^T (-\boldsymbol{I} + \frac{\alpha}{4} \boldsymbol{A}) + \eta \boldsymbol{I} \\ \boldsymbol{h} &= 2\boldsymbol{x}(t) ^T (-\boldsymbol{I} + \frac{\alpha}{4} \boldsymbol{A}) - 2\eta \boldsymbol{\sigma}(t-1) ^T \end{align*} $$

The problem is now shaped so that the annealing machine can derive the optimal solution.

Input data preparation and execution on the CMOS annealing machine

Now we have defined the requirements and completed the formulation (modeling, setting the objective function, and organizing the constraints). From this point on, let's narrow down the range of roads, consider $3 \times 3 = 9$ intersections, and actually create and run the data to be input to the CMOS annealing machine (a minimal sketch of this data preparation is given below). Here is a video of an even larger road network with optimized signal control. This demonstration shows the results of a simulation in which traffic congestion is eliminated by optimizing 64 x 64 = 4096 intersections using a large-scale CMOS annealing system to be exhibited in the Cooperative Research Building at Hitachi, Ltd.
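As a concrete illustration of the data preparation, here is a minimal NumPy sketch (our own, with an illustrative value of the penalty weight $\eta$, which the article does not fix) that builds $\boldsymbol{A}$, $\boldsymbol{J}$, and $\boldsymbol{h}$ of Equations (4) and (9) for the $3 \times 3$ grid, and finds the minimizing $\boldsymbol{\sigma}$ by brute force as a stand-in for the CMOS annealing machine:

```python
import numpy as np

N = 3                       # 3x3 grid -> 9 intersections
a = 0.6                     # fraction of cars going straight (illustrative)
alpha = 2 * a - 1
eta = 0.1                   # penalty weight (illustrative)

# Adjacency matrix A of the 3x3 grid, identical to the one in Eq. (4'').
A = np.zeros((N * N, N * N))
for i in range(N * N):
    r, c = divmod(i, N)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            A[i, rr * N + cc] = 1

B = -np.eye(N * N) + (alpha / 4) * A            # the matrix in Eq. (4)
x = np.random.uniform(-1, 1, N * N)             # current imbalance x(t)
sigma_prev = np.random.choice([-1, 1], N * N)   # sigma(t-1)

J = B.T @ B + eta * np.eye(N * N)               # Eq. (9), quadratic part
h = 2 * x @ B - 2 * eta * sigma_prev            # Eq. (9), linear part

def energy(sigma):
    """Ising energy of Eq. (8), up to the constant c(t)."""
    return sigma @ J @ sigma + h @ sigma

# Brute force over all 2^9 sign assignments (feasible for 9 spins only;
# the annealing machine replaces this step for large grids).
best = min((np.array([int(b) * 2 - 1 for b in np.binary_repr(s, 9)])
            for s in range(2 ** 9)), key=energy)
print("optimal signal assignment:", best)
print("resulting x(t+1):", x + B @ best)
```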
Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya
Izv. RAN. Ser. Mat., 2000, Volume 64, Issue 5, Pages 197–224

Linear deformations of discrete groups and constructions of multivalued groups
P. V. Yagodovskii

Abstract: We construct deformations of discrete multivalued groups described as special deformations of their group algebras in the class of finite-dimensional associative algebras. We show that the deformations of ordinary groups producing multivalued groups are defined by cocycles with coefficients in the group algebra of the original group and obtain classification theorems on these deformations. We indicate a connection between the linear deformations of discrete groups introduced in this paper and the well-known constructions of multivalued groups. We describe the manifold of three-dimensional associative commutative algebras with identity element, fixed basis, and a constant number of values. The group algebras of $n$-valued groups of order three (three-dimensional $n$-group algebras) form a discrete set in this manifold.

DOI: https://doi.org/10.4213/im309
MSC: 16S80, 20N15, 16S34, 05E30, 20N20, 05B30, 05C25, 20B25

Citation: P. V. Yagodovskii, "Linear deformations of discrete groups and constructions of multivalued groups", Izv. RAN. Ser. Mat., 64:5 (2000), 197–224; English translation: Izvestiya: Mathematics, 64:5 (2000), 1065–1089.

Cited by:
P. V. Yagodovskii, "Representations of multivalued groups on graphs", Russian Math. Surveys, 57:1 (2002), 173–174
P. V. Yagodovskii, "$\sigma$-Extensions of discrete multivalued groups", J. Math. Sci. (N. Y.), 138:3 (2006), 5753–5761
V. M. Buchstaber, "$n$-valued groups: theory and applications", Mosc. Math. J., 6:1 (2006), 57–84
P. V. Yagodovsky, "Duality in the theory of finite commutative multivalued groups", J. Math. Sci. (N. Y.), 174:1 (2011), 97–119
Gait-based age estimation using multi-stage convolutional neural network
Atsuya Sakata, Noriko Takemura & Yasushi Yagi
IPSJ Transactions on Computer Vision and Applications, volume 11, Article number: 4 (2019)

Gait-based age estimation has been extensively studied for various applications because of its high practicality. In this paper, we propose a gait-based age estimation method using convolutional neural networks (CNNs). Because gait features vary depending on a subject's attributes, i.e., gender and generation, we propose the following three CNN stages: (1) a CNN for gender estimation, (2) a CNN for age-group estimation, and (3) a CNN for age regression. We conducted experiments using a large population gait database and confirm that the proposed method outperforms state-of-the-art benchmarks.

Age estimation methods based on image processing have been extensively studied for various applications. Most of these studies focus on images of faces, which tend to become more wrinkled and sag with age [1–6]. However, because high-resolution full-face images are required for these age estimation methods, they can only be used in situations where human images are captured at a short distance, e.g., age confirmation for purchasing alcohol and cigarettes or in digital signage applications. In contrast, gait features, which represent a human's manner of walking, can be captured at a distance from an uncooperative subject. The way a human walks differs depending on his/her attributes, such as gender, physique, muscle mass, and age. From the medical viewpoint, there are some studies on gait analysis to measure fatigue and detect disease [7, 8]. In the field of informatics, in contrast, gait-based human identification has been intensively studied for various applications such as access control, surveillance, and forensics [9–11]. Gait differs depending not only on attributes but also on individuals. For instance, individual features greatly depend on posture, stride length, arm-swinging width, and the asymmetry of walking, which is formed from habits such as holding a shoulder bag on a fixed side. Moreover, gait identification has already been used in practical cases in criminal investigations [12–14]. Hence, we expect that gait features will be useful for estimating age, and we investigated gait-based age estimation. Gait-based age estimation expands the scope of real-world applications such as wide-area surveillance and the detection of lost children and wandering elderly people, as well as marketing research in large-scale facilities (e.g., shopping malls, terminals, and airports). There are several studies on gait-based age estimation. Makihara et al. [15] proposed an age regression algorithm based on Gaussian process regression (GPR). Lu et al. [16] proposed a multilabel-guided subspace to better characterize and correlate age and gender information, and Lu et al. [17] proposed an ordinary preserving manifold analysis (OPLDA) for gait-based age estimation. These methods unfold an image-based gait feature into a feature vector, where each dimension corresponds to each pixel. Because spatial proximity in the image structure is never considered, these methods can easily result in overtraining. To prevent this, we propose an age estimation approach using a convolutional neural network (CNN) that considers spatial proximity using a convolution operation and has had great success in many image recognition research areas.
Ideally, it is possible to achieve end-to-end learning with CNNs, i.e., any model can be trained by feeding raw images to the CNN. However, in practice, it is not easy to train networks in such an ideal situation. For this reason, researchers have proposed designs in which pre-processed images are fed into the network instead of the raw images and constraints are added to the intermediate layers. In addition, multi-task learning has recently attracted attention [18]: this method improves the accuracy of a target task by simultaneously learning the target task and other recognition tasks related to it. However, this method can instead worsen the accuracy of the target task if the other tasks adversely affect it, because the model is trained to improve all the recognition tasks simultaneously. Thus, in this paper, we propose sequential multi-task learning instead of conventional parallel multi-task learning. Each CNN for the non-target tasks is trained one by one in sequence, and the CNN for the target task is trained last. In this way, we can train the network to aim for the target task while taking the other tasks into consideration. Although the network architecture of sequential multi-task learning should ideally be a deep CNN formed by chaining each CNN, we separately train each CNN, which has the same structure as those in parallel multi-task learning, in sequence, so as to compare sequential with parallel multi-task learning while excluding the influence of network depth. In other words, we predict a subject's gender and generation beforehand and then predict age with a regression model trained separately on the data for each gender and generation combination. We conducted a performance evaluation using the world's largest gait database, the OU-ISIR Gait Database, Large Population Dataset with Age (OULP-Age) [19], which includes males and females with ages ranging from 2 to 90 years, to confirm the effectiveness of the proposed method.

CNN-based age estimation

In this paper, the gait energy image (GEI) [20], which is a gait feature commonly used for gait-based person identification, is used as input to our CNNs. A GEI represents both dynamic features (i.e., swinging hands and legs while walking) and static features (i.e., human shapes and postures). We extract a GEI as follows. First, human silhouette sequences are obtained by background subtraction-based graph-cut segmentation. Second, we normalize the silhouettes by size. Third, the gait period is detected from the normalized silhouette sequences, and finally, we generate a mean silhouette image based on the gait period.

Single CNN-based age estimation

Figure 2a shows the network structure for the CNN-based age estimator, and Table 1 shows the layer configurations. GEIs are fed into the CNN, which contains two triplets of a convolution (conv) layer, batch normalization (norm) layer, and max pooling (pool) layer. It also contains a pair of a fully connected (fc) layer and a norm layer, and an fc layer for the recognition task. The conv layers and fc layers are followed by a ReLU activation function. We call the chain of layers from the input to norm3 in Single-CNN (the blue block shown in Fig. 2a) the Conv block.

Table 1: Layer configurations of Single-CNN

We initialize the weight parameters of the CNN in all layers using He's method [21] and neuron biases with a constant of 0. We train our models using Adam with an initial learning rate of 0.001. We use dropout in the fc3 and fc4 layers with a probability of 0.8 and 0.5, respectively.
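As a concrete reference, the following PyTorch sketch follows this description. Since Table 1 is not reproduced in this text, the kernel sizes and channel/unit counts are placeholders of ours, and the exact placement of ReLU, norm, and dropout is one plausible reading of the text rather than the paper's definitive configuration:

```python
import torch
import torch.nn as nn

class SingleCNN(nn.Module):
    """Sketch of Single-CNN: two conv/norm/pool triplets (the Conv block),
    an fc+norm pair, and a final fc layer that outputs the predicted age.
    Channel counts and kernel sizes are placeholders, not the paper's Table 1."""
    def __init__(self):
        super().__init__()
        self.conv_block = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.BatchNorm2d(16), nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=5), nn.ReLU(), nn.BatchNorm2d(64), nn.MaxPool2d(2),
        )
        self.drop3 = nn.Dropout(p=0.8)            # dropout at fc3
        self.fc3 = nn.Linear(64 * 28 * 18, 1024)  # sized for a 128x88 GEI input
        self.bn3 = nn.BatchNorm1d(1024)
        self.drop4 = nn.Dropout(p=0.5)            # dropout at fc4
        self.fc4 = nn.Linear(1024, 1)             # regression output: age
        for m in self.modules():                  # He initialization, zero biases
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.kaiming_normal_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, gei):                       # gei: (batch, 1, 128, 88)
        z = self.conv_block(gei).flatten(1)
        z = torch.relu(self.bn3(self.fc3(self.drop3(z))))
        return self.fc4(self.drop4(z)).squeeze(1)

model = SingleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```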
The output of the final layer is considered to be the predicted age. We train the age estimator to minimize the mean absolute error (MAE) between the predicted and ground truth ages. As mentioned in [9], in recognition tasks of this kind, variations in the input GEIs are smaller than those in a common object recognition task. Therefore, even such a shallow network can represent the features relevant to a subject's age.

Multistage CNN-based age estimation

Figure 1 shows the mean GEIs in the gait database (OULP-Age) for each gender and age group. It shows that gait features, e.g., human head-to-body ratio, hairstyles, shapes, and postures, vary depending on a subject's gender and generation. (Fig. 1: Mean GEIs for each gender and age group. Fig. 2: The architectures of (a) Single-CNN and (b) Sequential multi-CNN.) Thus, age estimation accuracy should improve in an age estimator trained on specific genders and generations. In this paper, we attempt to improve age estimation using a multistage CNN composed of three CNN-based estimators, i.e., a gender estimator, an age-group estimator, and an age estimator (see Fig. 2b). Note the order: gender is estimated first. As shown in Fig. 2b, we use Conv blocks for all three estimators. For the gender estimator, the sigmoid normalized cross-entropy is employed as the loss function. For the age-group estimator, the number of outputs of the fc4 layer is changed to five (the number of age groups) and the softmax normalized cross-entropy is employed as the loss function.

Learning method

The learning procedure for multi-CNN age estimation (sequential multi-task CNN) is as follows (Fig. 2):

(1) Train a gender estimator on a training set that includes all genders and all age groups
(2) Predict gender by feeding the same training data set as in (1) into the trained gender estimator
(3) Train an age-group estimator for each predicted gender using the gender-predicted data from (2)
(4) Predict the age group for each predicted gender by feeding the gender-predicted data from (2) into the trained age-group estimator for that gender
(5) Train an age estimator for each predicted gender and each predicted age group using the data predicted in (4)

We train age estimators for each combination of predicted gender and predicted age group. Because of the decrease in the number of training data caused by this approach, overfitting can occur easily. To prevent this, we fine-tune pre-trained models. Specifically, the age-group estimator for each gender is trained by fine-tuning the age-group estimator trained on the data of all genders, and the age estimator for each gender and each age group is trained by fine-tuning the age estimator trained on the data of all age groups.

Definition of age-group classes

We now describe how we define the age-group classes for the age-group estimator in multi-CNN age estimation. Gait data in OULP-Age are divided into several age groups based on GEI similarity. First, we divided OULP-Age into intervals of 5 years and generated a mean GEI for each group. Note that samples over 60 years old were put into the same group because of a shortage of elderly persons' data. Second, we calculated the L2 distance between the mean GEIs of adjacent groups (Fig. 3). The L2 distance is calculated as $$\begin{array}{*{20}l} d_{L_{2}}(\mathbf{x}, \mathbf{y}) = \sqrt{\sum\limits_{w=0}^{W-1}\sum\limits_{h=0}^{H-1}\left \| x_{w,h} - y_{w,h} \right \|^{2}}, \end{array} $$
where $\mathbf{x}$ and $\mathbf{y}$ are the mean GEIs of adjacent groups with height $H$ and width $W$, respectively. (Fig. 3: L2 distance between mean GEIs of adjacent age groups (OULP-Age); the silhouette images below the age-group labels represent the mean GEI of each group.) Finally, we defined groups with an L2 distance less than a threshold as the same class and designed five classes: 0–5, 6–10, 11–15, 16–60, and over 60 years. As we mentioned in Section 2, a GEI represents both dynamic features (i.e., swinging hands and legs while walking) and static features (i.e., human shapes and postures). Because people under 15 years old are growing swiftly, their static features change substantially, and their GEIs show remarkable differences according to age. In contrast, GEIs extracted from people between 15 and 60 years old hardly show changing features, because these people have almost stopped growing. In other words, the differences in the static features of the GEI are more significant than those in the dynamic features. Poor accuracy during age-group estimation affects the next age regression stage, so we decided to split the age range into five age groups so that the CNNs can estimate the age group from the GEIs fairly precisely.

Experiments

The OU-ISIR Gait Database, Large Population Dataset with Age (OULP-Age) [19] was used to evaluate the performance of the age estimation method. OULP-Age is the world's largest gait database that includes age and gender information. It consists of 63,846 gait images (31,093 males and 32,753 females) with ages ranging from 2 to 90 years. Figure 4 shows examples of the data, and Fig. 5 shows the distribution of subjects' age and gender in OULP-Age. Each subject, walking from the right side to the left side along the walking course, is captured by a USB camera set at a position 4 m away from the walking course. More information about the data capture is given in [22]. GEIs of 88 × 128 pixels extracted from a side-view gait are provided for each subject. We split the database into testing, training, and validation sets at the ratio of 5:4:1, respectively. Note that 20% of the training set is used as the validation set. Tables 2 and 3 show the number of subjects among age groups and genders in the training set and testing set, respectively. (Fig. 4: Data example in OULP-Age, cited from [19]. Fig. 5: Distributions of subjects' age and gender in OULP-Age. Table 2: The number of subjects in the training set. Table 3: The number of subjects in the testing set.)

Training settings

The loss function for gender estimation and age-group estimation is the cross entropy, which is calculated as $$\begin{array}{*{20}l} L({\mathbf{w}}) = -\sum\limits_{n=1}^{N} \sum\limits_{m=1}^{M} t_{nm}\log{y(I_{n};{\mathbf{w}})_{m}}, \end{array} $$ where $\mathbf{w}$ denotes the weight parameter matrix of the network, $I_n$ is the input image, $N$ is the number of data, $M$ is the number of classes, $y(I_{n};{\mathbf{w}})_{m}$ is the $m$th element of the output vector, and $t_{nm}$ denotes the ground truth class. The age estimation task is optimized by minimizing the mean absolute error between the ground truth and predicted age, calculated as $$\begin{array}{*{20}l} L({\mathbf{w}}) = \frac{1}{N}\sum\limits_{n=1}^{N} \left|t_{n} - y(I_{n};{\mathbf{w}}) \right|, \end{array} $$ where $\mathbf{w}$ denotes the weight parameter matrix of the network, $I_n$ is the input image, $N$ is the number of data, $y(I_{n};{\mathbf{w}})$ is the predicted age, and $t_n$ is the ground truth age of the $n$th sample. For training each network included in the proposed method with back-propagation, we use Adam [23].
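For concreteness, these two training losses correspond to standard library calls; a minimal PyTorch sketch (ours, with illustrative function names) might look like this:

```python
import torch.nn.functional as F

def gender_loss(logit, label):
    # sigmoid-normalized cross entropy for the binary gender stage
    return F.binary_cross_entropy_with_logits(logit, label.float())

def age_group_loss(logits, label):
    # softmax-normalized cross entropy for the 5-way age-group stage
    return F.cross_entropy(logits, label)

def age_loss(pred_age, true_age):
    # mean absolute error for the age regression stage
    return F.l1_loss(pred_age, true_age)
```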
We also use a batch size of 128 samples, and the initial learning rate is 0.001, which is the default value for Adam. The maximum number of epochs is 100, although we use the weights of the network at the epoch where the validation error is at its minimum. Table 4 shows the estimation results by gender, and Table 5 shows those by age group. (Table 4: Results for gender estimation with the training set. Table 5: Results for the age-group estimator on the training set.)

The MAE, standard deviation (SD), and cumulative score (CS) are used as the evaluation criteria for the performance evaluation. MAE is calculated as $$\begin{array}{*{20}l} \text{MAE} = \frac{1}{N} \sum\limits_{n=1}^{N} |t_{n} - y_{n}|, \end{array} $$ where $t_n$ and $y_n$ are the ground truth and predicted age values for the $n$th test sample, respectively, and $N$ is the number of test samples. SD is calculated as follows: $$\begin{array}{*{20}l} \text{SD} = \sqrt{\frac{1}{N-1} \sum\limits_{n=1}^{N} (|t_{n} - y_{n}| - \text{MAE})^{2}} \end{array} $$ CS is calculated as $$\begin{array}{*{20}l} \text{CS}(l) = \frac{N_{l}}{N} \times 100\%, \end{array} $$ where $N_l$ is the number of samples whose absolute error is within $l$ years.

Comparison with existing methods not based on CNNs

We compared the two proposed methods with four comparison methods using the protocol described in [19]:

Single-CNN : Proposed method with a single CNN
Sequential multi-CNN : Proposed method with multiple CNN stages
GPR [15] : GPR-based method
SVR [2] : Support vector regression-based method
OPLDA [17] : OPLDA-based method
MLG [16] : A method that learns a multilabel-guided (MLG) subspace for human age estimation

The MAEs and SDs of both versions of the proposed method and the benchmarks are shown in Table 6. According to Table 6, the results of our CNN-based methods (Single-CNN and Sequential multi-CNN) are much better than those of the benchmarks. Furthermore, comparing the proposed methods, Sequential multi-CNN, which considers gender and age groups, improves the performance more than Single-CNN. In terms of SD, while the result of the proposed method is better than that of the existing methods, there is no difference between our method and Single-CNN. This is because our method does not estimate age well for elderly people. (Table 6: MAEs and SDs for comparing the proposed methods with existing methods not based on CNNs.)

The CSs of Single-CNN and Sequential multi-CNN for each age group are shown in Fig. 6. As shown in the graph, Sequential multi-CNN significantly outperforms Single-CNN, especially in the 6–10, 11–15, and over-60-year groups. (Fig. 6: Cumulative scores of Single-CNN and Sequential multi-CNN.)

Sequential multi-CNN vs. parallel multi-CNN

We compared the proposed method with multiple CNN stages (Sequential multi-CNN) with a conventional multi-task CNN [24] (Parallel multi-CNN). In Parallel multi-CNN, multiple tasks are learned at the same time, while exploiting commonalities and differences across tasks to improve the estimation accuracy of the task-specific models. Figure 10 shows the network architecture of Parallel multi-CNN. Note that Parallel multi-CNN consists of the same Conv block as Sequential multi-CNN and each loss weight is 1.0, except that the last layer is branched for each task (gender, age group, and age), so as to compare only the learning strategy, namely, sequential multi-task learning vs. parallel multi-task learning. Table 7 shows the MAEs and SDs of Sequential multi-CNN and Parallel multi-CNN estimated in the same manner as in Section 3.
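For reference, the three evaluation criteria defined above (MAE, SD, and CS) can be computed with a short script like the following (a sketch; the example arrays are illustrative):

```python
import numpy as np

def mae_sd_cs(t, y, l=1):
    """MAE, SD, and CS(l) as defined above; t = ground-truth ages,
    y = predicted ages (NumPy arrays), l = tolerance in years."""
    err = np.abs(t - y)
    mae = err.mean()
    sd = np.sqrt(np.sum((err - mae) ** 2) / (len(err) - 1))
    cs = 100.0 * np.mean(err <= l)          # percentage within l years
    return mae, sd, cs

t = np.array([25, 40, 8, 63]); y = np.array([27, 38, 10, 70])
print(mae_sd_cs(t, y, l=2))                 # -> (3.25, 2.5, 75.0)
```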
The result of Sequential multi-CNN is better than that of Parallel multi-CNN. The CSs of Parallel multi-CNN and Sequential multi-CNN for each age group are shown in Fig. 7. The graph demonstrates that Sequential multi-CNN outperforms Parallel multi-CNN, as is the case for the comparison with Single-CNN. (Fig. 7: Cumulative scores of Parallel multi-CNN and Sequential multi-CNN. Table 7: MAEs and SDs for comparing the proposed method with a conventional multi-task CNN.)

In the training phase, Sequential multi-CNN is trained to minimize a loss for each task in the order of gender, age group, and age, i.e., the target task is the last one, whereas Parallel multi-CNN is trained so as to minimize the multi-task losses simultaneously. Thus, Sequential multi-CNN can be trained more intensively and efficiently for the target task. This seems to be why the result of Sequential multi-CNN is better.

Distribution of the estimated ages corresponding to the actual age

Figure 8 presents a scatter plot of the estimated ages of Sequential multi-CNN with respect to the ground truth age. Each point is colored according to the estimated age group. According to Fig. 8, when age-group estimation fails, age estimation also fails, i.e., the absolute error is larger, especially when the estimated age groups are 11–15 and over 60 years. (Fig. 8: Scatter plot of the estimated ages of Sequential multi-CNN with respect to the ground truth age; the dotted line indicates where the estimated age equals the ground truth age.)

Order of learning tasks in Sequential multi-CNN

In Sequential multi-CNN, CNNs are trained in the order of gender, age group, and age. The reasons why learning is performed in this order are as follows: Age is trained last because age estimation is the target task. Age group is trained second to last because age group has a stronger relationship with age. Gender is trained first because gender is easier to recognize than age group. Tables 8 and 9 show the confusion matrices of the results of gender and age-group estimation on the test set, respectively. These matrices show that the recognition rate of gender is higher than that of age group. More specifically, there are more than a few cases of incorrect recognition, especially for age-group estimation for pedestrians over 60 years. The proposed method has the problem that the failure of each estimation task causes successive failures in the subsequent tasks. To avoid this, we need further studies to determine how to combine the CNNs at each stage into a single network so that it can effectively minimize the error of all the stages. (Table 8: Results for gender estimation with the testing set. Table 9: Results for age-group estimation with the testing set.)

Difference of accuracy between males and females

Table 10 shows the gender-specific MAEs and SDs of Sequential multi-CNN, and Fig. 9 shows the graph of gender-specific CSs. As shown in Table 10 and Fig. 9, both the MAE and SD of female subjects are worse than those of male subjects overall, especially over 60. Moreover, the CS of Sequential multi-CNN is worse than that of Single-CNN for the 11–15 age group. (Fig. 9: Gender-specific cumulative scores of Sequential multi-CNN; (a) male, (b) female. Fig. 10: Network architecture of the conventional multi-task CNN (Parallel multi-CNN). Table 10: Gender-specific MAEs and SDs of Sequential multi-CNN.) This is because female-specific personal features such as hairstyle and clothing (e.g., skirts and one-piece dresses) affect the accuracy of age estimation.
It is easy to estimate the age of both male and female children due to distinctive features such as height. Adult females, in contrast, have more variations in hairstyle and clothing than adult males. Therefore, it is more difficult to estimate the age of females than that of males in the adult generation.

Applicability of sequential multi-task learning to other tasks

In this paper, it was confirmed that sequential multi-task learning is more effective for age estimation than CNN-based single-task learning and parallel multi-task learning (Fig. 10). The framework of sequential multi-task learning can be applied not only to age estimation but also to other recognition tasks, e.g., person identification and health estimation. Therefore, various applications of sequential multi-task learning can be expected in both the medical and information-science fields.

Conclusions

In this paper, we proposed a gait-based age estimation method using CNNs. To estimate ages based on differences in gait features depending on gender and generation, we proposed a method composed of three stages of CNNs: a gender estimator, an age-group estimator, and an age estimator. The results of the experiments using a large-scale gait database (OULP-Age) yielded an MAE of 5.84 years, which outperforms the benchmarks. In the future, we plan to perform two studies to enhance age estimation. First, as mentioned in Section 4.2, we will train a deeper network formed by chaining CNNs for several tasks instead of a combination of sequentially trained CNNs. In this way, we can avoid degrading the accuracy of the proposed method due to the incorrect recognition of each task. Second, we need to collect more gait data because the database we used lacks data on elderly subjects. By doing this, we will be able to improve our method for all generations.

Abbreviations: CS: Cumulative score; GEI: Gait energy image; GPR: Gaussian process regression; MAE: Mean absolute error; MLG: Multilabel-guided subspace; OPLDA: Ordinary preserving manifold analysis; OULP-Age: The OU-ISIR Gait Database, Large Population Dataset with Age
EWOMS 2009: The European Workshop on Movement Science. Liao R, Makihara Y, Muramatsu D, Mitsugami I, Yagi Y, Yoshiyama K, Kazui H, Takeda M (2014) Video-based gait analysis in cerebrospinal fluid tap test for idiopathic normal pressure hydrocephalus patients (in japanese) In: The 15th Annual Meeting of the Japanese Society of NPH, Suita, Japan. Takemura N, Makihara Y, Muramatsu D, Echigo T, Yagi Y (2017) On input/output architectures for convolutional neural network-based cross-view gait recognition. IEEE Trans Circ Syst Video Technol PP(99):1–1. https://doi.org/10.1109/TCSVT.2017.2760835. Wu Z, Huang Y, Wang L, Wang X, Tan T (2017) A comprehensive study on cross-view gait based human identification with deep CNNs. IEEE Trans Pattern Anal Mach Intell 39(2):209–226. https://doi.org/10.1109/TPAMI.2016.2545669. Makihara YS, Matovski DS, Nixon MN, Carter J, Yagi Y (2015) Gait recognition: databases, representations, and applications In: Webster JG, editor. Wiley Encyclopedia of Electrical and Electronics Engineering. https://doi.org/10.1002/047134608X.W8261. Bouchrika I, Goffredo M, Carter J, Nixon M (2011) On using gait in forensic biometrics. J Forensic Sci 56(4):882–889. https://doi.org/10.1111/j.1556-4029.2011.01793.x. Lynnerup N, Larsen PK (2014) Gait as evidence. IET Biom 3:47–547. Iwama H, Muramatsu D, Makihara Y, Yagi Y (2013) Gait verification system for criminal investigation. Inf Media Technol 8(4):1187–1199. https://doi.org/10.11185/imt.8.1187. Makihara Y, Okumura M, Iwama H, Yagi Y (2011) Gait-based age estimation using a whole-generation gait database In: 2011 International Joint Conference on Biometrics, IJCB 2011. https://doi.org/10.1109/IJCB.2011.6117531. Lu J, Tan YP (2010) Gait-based human age estimation. IEEE Trans Inf Forensics Secur 5(4):761–770. https://doi.org/10.1109/TIFS.2010.2069560. Lu J, Tan YP (2013) Ordinary preserving manifold analysis for human age and head pose estimation. IEEE Trans Hum-Mach Syst 43(2):249–258. https://doi.org/10.1109/TSMCC.2012.2192727. Caruana R (1997) Multitask learning. Mach Learn 28(1):41–75. https://doi.org/10.1023/A:1007379606734. Xu C, Makihara Y, Ogi G, Li X, Yagi Y, Lu J (2017) The OU-ISIR Gait Database comprising the Large Population Dataset with age and performance evaluation of age estimation. IPSJ Trans. Comput Vis Appl 9:1–14. https://doi.org/10.1109/TIFS.2012.2204253. Han J, Bhanu B (2006) Individual recognition using gait energy image. IEEE Trans Pattern Anal Mach Intell 28(2):316–322. https://doi.org/10.1109/TPAMI.2006.38. arXiv:1307.5748v1. He K, Zhang X, Ren S, Sun J (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. CoRR abs/1502.01852. 1502.01852. Makihara Y, Kimura T, Okura F, Mitsugami I, Niwa M, Aoki C, Suzuki A, Muramatsu D, Yagi Y (2016) Gait collector: an automatic gait data collection system in conjunction with an experience-based long-run exhibition In: 2016 International Conference on Biometrics (ICB), 1–8. https://doi.org/10.1109/ICB.2016.7550090. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization, 1–15. https://doi.org/10.1145/1830483.1830503. 1412.6980. Marín-Jimíenez MJ, Castro FM, Guil N, de la Torre F, Medina-Carnicer R (2017) Deep multi-task learning for gait-based biometrics In: 2017 IEEE International Conference on Image Processing (ICIP), 106–110. https://doi.org/10.1109/ICIP.2017.8296252. This work was supported by JST-Mirai Program JPMJMI17DH. 
The dataset supporting the conclusions of this article is available at http://www.am.sanken.osaka-u.ac.jp/BiometricDB/index.html.

Atsuya Sakata and Noriko Takemura contributed equally to this work.

Author affiliations: The Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka, 5670047, Japan (Atsuya Sakata & Yasushi Yagi); The Institute for Datability Science, Osaka University, Suita, 5650871, Osaka, Japan (Noriko Takemura).

AS designed and executed the experiments and wrote the initial draft of the manuscript. NT contributed to the concept and helped to write the manuscript. YY supervised the work as well as gave technical support and conceptual advice. All authors reviewed and approved the final manuscript. Correspondence to Atsuya Sakata.

Sakata, A., Takemura, N. & Yagi, Y. Gait-based age estimation using multi-stage convolutional neural network. IPSJ T Comput Vis Appl 11, 4 (2019). doi:10.1186/s41074-019-0054-2
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)

Asian-Australasian Journal of Animal Sciences (AJAS) aims to publish original and cutting-edge research results and reviews on animal-related aspects of life sciences. Emphasis will be given to studies involving farm animals such as cattle, buffaloes, sheep, goats, pigs, horses and poultry, but studies with other animal species can be considered for publication if the topics are related to fundamental aspects of farm animals. Also, studies to improve human health using animal models can be publishable. AJAS encompasses all areas of animal production and fundamental aspects of animal sciences: breeding and genetics, reproduction and physiology, nutrition, meat and milk science, biotechnology, behavior, welfare, health, and livestock farming systems. AJAS is sub-divided into 10 sections.
- Animal Breeding and Genetics: Quantitative and molecular genetics, genomics, genetic evaluation, evolution of domestic animals, and bioinformatics
- Animal Reproduction and Physiology: Physiology of reproduction, development, growth, lactation and exercise, and gamete biology
- Ruminant Nutrition and Forage Utilization: Rumen microbiology and function, ruminant nutrition, physiology and metabolism, and forage utilization
- Swine Nutrition and Feed Technology: Swine nutrition and physiology, evaluation of feeds and feed additives, and feed processing technology
- Poultry and Laboratory Animal Nutrition: Nutrition and physiology of poultry and other non-ruminant animals
- Animal Products: Milk and meat science, muscle biology, product composition, food safety, food security and functional foods

Genetic Diversity of Goats from Korea and China Using Microsatellite Analysis
Kim, K.S.; Yeo, J.S.; Lee, J.W.; Kim, J.W.; Choi, C.B. (p. 461) https://doi.org/10.5713/ajas.2002.461
Nine microsatellite loci were analyzed in 84 random individuals to characterize the genetic variability of three domestic goat breeds found in Korea and China: Korean goat, Chinese goat and Saanen. Allele diversity, heterozygosity, polymorphism information content, F-statistics, indirect estimates of gene flow (Nm) and Nei's standard distances were calculated. Based on the expected mean heterozygosity, the lowest genetic diversity was exhibited in Korean goat ($H_E$=0.381), and the highest in Chinese goat ($H_E$=0.669). After corrections for multiple significance tests, deviations from Hardy-Weinberg equilibrium were statistically significant over all populations and loci, reflecting deficiencies of heterozygotes (global $F_{IS}$=0.053). Based on pairwise $F_{ST}$ and Nm between different breeds, there was a great genetic differentiation between Korean goat and the other two breeds, indicating that these breeds have been genetically subdivided. Similarly, individual clustering based on the proportion of shared alleles showed that Korean goat individuals formed a single cluster separated from the other two goat breeds.

Genetic Studies on Production Efficiency Traits in Hariana Cattle
Dhaka, S.S.; Chaudhary, S.R.; Pander, B.L.; Yadav, A.S.; Singh, S. (p. 466)
The data on 512 Hariana cows, progeny of 20 sires calved during the period from 1974 to 1993, maintained at Government Livestock Farm, Hisar, were considered for the estimation of genetic parameters.
The means for first lactation milk yield (FLY), wet average (WA), first lactation peak yield (FPY), first lactation milk yield per day of first calving interval (MCI) and first lactation milk yield per day of age at second calving (MSC) were 1,141.58 kg, 4.19 kg/day, 6.24 kg/day, 2.38 kg/day and 0.601 kg/day, respectively. The effect of period of calving was significant (p<0.05) on WA, FPY and MCI, while the effect of season of calving was significant only on WA. Monsoon calvers excelled in performance for all the production efficiency traits. The effect of age at first calving (linear) was significant on all the traits except MCI. Estimates of heritability for all the traits were moderate and ranged from 0.255 to 0.333, except for WA (0.161). All the genetic and phenotypic correlations among the different production efficiency traits were high and positive. It may be inferred that selection on the basis of peak yield will be more effective, as the trait is expressed early in life and has a reasonably moderate estimate of heritability.

Genetic Similarity and Variation in the Cultured and Wild Crucian Carp (Carassius carassius) Estimated with Random Amplified Polymorphic DNA
Yoon, Jong-Man; Park, Hong-Yang (p. 470)
Random amplified polymorphic DNA (RAPD) analysis based on numerous polymorphic bands has been used to investigate genetic similarity and diversity among and within two cultured and wild populations represented by the species crucian carp (Carassius carassius). From RAPD analysis using five primers, a total of 442 polymorphic bands were obtained in the two populations, and 273 were found to be specific to the wild population. 169 polymorphic bands were also produced in both the wild and cultured populations. According to RAPD-based estimates, the wild population was approximately 1.5 times as diverse as the cultured population in terms of the average number of polymorphic bands. The average number of polymorphic bands in each population was found to be different and was higher in the wild than in the cultured population. Comparison of banding patterns in the cultured and wild populations revealed substantial differences, supporting a previous assessment that the populations may have been subjected to a long period of geographical isolation from each other. The bandsharing values in the wild population ranged from 0.21 to 0.51. Also, the average bandsharing value was $0.40{\pm}0.05$ in the wild population, compared to $0.69{\pm}0.08$ in the cultured one. With reference to bandsharing values and banding patterns, the wild population was considerably more diverse than the cultured one. Knowledge of the genetic diversity of crucian carp could help in formulating more effective strategies for managing this aquacultural fish species and also in evaluating the potential genetic effects induced by hatchery operations.

Rearing Black Bengal Goat under Semi-Intensive Management 1. Physiological and Reproductive Performances
Chowdhury, S.A.; Bhuiyan, M.S.A.; Faruk, S. (p. 477)
Ninety pre-pubertal (6-7 months) female and 15 pre-pubertal male Black Bengal goats were collected on the basis of their phenotypic characteristics from different parts of Bangladesh. Goats were reared under semi-intensive management, in permanent housing. The animals were vaccinated against Peste des Petits Ruminants (PPR), drenched with anthelmintics and dipped in 0.5% Malathion solution. They were allowed to graze 6-7 h along with supplemental concentrate and green forages.
Concentrates were supplied either at 200-300 g/d (low level of feeding) or in a quantity that supplied the NRC (1981) recommended nutrients (high level of feeding). Different physiological, productive and reproductive characteristics of the breed were recorded. At noon (temperature=$95^{\circ}F$ and light intensity=60480 LUX), the rectal temperature and respiration rate of adult males and females increased from 100.8 to $104.8^{\circ}F$ and from 35 to 115 breaths/min, indicating a heat-stress situation. Young females attained puberty at an average age and weight of 7.2$\pm$0.18 months and 8.89$\pm$0.33 kg, respectively. Mean age and weight at 1st kidding were 13.5$\pm$0.49 months and 15.3$\pm$0.44 kg, respectively. It required 1.24-1.68 services per conception, with an average gestation length of 146 days. At the low level of feeding the postpartum estrus interval was 37$\pm$2.6 days, which was reduced (p<0.05) to 21$\pm$6.9 days at the high feeding level. The kidding interval was also reduced (p<0.05) from 192 d at the low feeding level to 177 d at the high feeding level. On average there were two kiddings/doe/year. Average litter sizes in the 1st, 2nd, 3rd and 4th parity were 1.29, 1.71, 1.87 and 2.17, respectively. Birth weights of male and female kids were 1.24 and 1.20 kg, respectively, which increased (p<0.05) with better feeding. Kid mortality was affected (p<0.05) by the dam's weight at kidding, birth weight of the kid, milk yield of the dam, parity of kidding and season of birth, but pre-natal nutrition of the dam was found to be the most important factor. Kid mortality was reduced from 35% at the low level of feeding to 6.5% at the high level of feeding of the dam during gestation. Apparently, this was due to the higher (p<0.05) average daily milk yield (334 vs. 556 g/d) and heavier and stronger kids at birth at the high feeding level.

Effects of Meiotic Stages, Cryoprotectants, Cooling and Vitrification on the Cryopreservation of Porcine Oocytes
Huang, Wei-Tung; Holtz, Wolfgang
Different factors may affect the sensitivity of porcine oocytes during cryopreservation. The effects of two methods (cooling and vitrification), four cryoprotectants [glycerol (GLY), 1,2-propanediol (PROH), dimethyl sulfoxide (DMSO) or ethylene glycol (EG)] and two vitrification media (1 M sucrose (SUC)+8 M EG; 8 M EG) on the developmental capacity of porcine oocytes at the germinal vesicle (GV) stage or after IVM at the metaphase II (M II) stage were examined. Survival was assessed by FDA staining, maturation and cleavage following IVF and IVC. A toxicity test for the different cryoprotectants (GLY, PROH, DMSO, EG) was conducted at room temperature before cooling. GV and M II-oocytes were equilibrated stepwise in 1.5 M cryoprotectant and diluted out in sucrose. The survival rate of GV-oocytes in the GLY group was significantly lower (82%, p<0.01) than that of the other groups (92 to 95%). The EG group achieved a significantly higher maturation rate (84%, p<0.05) but a lower cleavage rate (34%, p<0.01) than the DMSO group and the controls. For M II-oocytes, the survival rates for all groups were 95 to 99%, and the cleavage rate of the GLY group was lower than that of the PROH group (21 vs 43%, p<0.01). After cooling to $10^{\circ}C$, the survival rates of GV-oocytes in the cryoprotectant groups were 34 to 51%; however, the maturation rates of these oocytes were low (1%) and none developed after IVF. For M II-oocytes, the EG group showed a significantly higher survival rate than the other cryoprotectant groups (40% vs 23-26%, p<0.05), and the cleavage rates of the PROH, DMSO and EG groups reached only 1 to 2%.
For a toxicity test of the different vitrification media, GV and M II-oocytes were equilibrated stepwise in 100% 8 M EG (group 1) and 1 M SUC + 8 M EG (group 2) or equilibrated in sucrose and then in 8 M EG (SUC+8 M EG, group 3). For GV-oocytes, the survival, maturation and cleavage rates of group 1 were significantly lower than those in groups 2 and 3 and the control group (p<0.05). For M II-oocytes, there were no differences in survival, maturation and cleavage rates between groups. After vitrification, the survival rates of GV and M II-oocytes in groups 2 and 3 were similarly low (4-9%), and none of them matured or cleaved after in vitro maturation, fertilization and culture. In conclusion, porcine GV and M II-oocytes do not seem to be damaged by the variety of cryoprotectants tested, but will succumb to a temperature decrease to $10^{\circ}C$ or to the process of vitrification, regardless of the cryoprotectant used.

The Reproductive Characteristics of the Mare in Subtropical Taiwan
Ju, Jyh-Cherng; Peh, Huo-Cheng; Hsu, Jenn-Chung; Cheng, San-Pao; Chiu, Shaw-Ching; Fan, Yang-Kwang
The objectives of this study were to document the reproductive traits of mares as influenced by the month of the year in Taiwan. Reproductive records, lactation traits, foal birth weight (FBW) and foal height (FBH) were collected from the Holi Equine Station of Taiwan. The effects of month on these parameters were analyzed. The length of estrus (LE) was shortest in December each year. An increasing trend was recorded from January to September, with a significantly (p<0.05) longer period of $12.4{\pm}0.4$ days in September than in January and February. A gradual shortening in LE was observed from September to December ($10.1{\pm}0.6$ days, p<0.05), when the shortest period of the year was observed. Mares showed signs of estrus throughout the year, but more than 80% were found in estrus during March through October. The FBW was significantly (p<0.05) affected by the breeding month of the year. The lowest foal weights were recorded in both September ($36.7{\pm}0.7$ kg) and December ($36.8{\pm}0.9$ kg), which were also significantly lower than those in other months except March, August, and November. A trend of lower FBH from September to December (93.5-93.8 cm) than from January to August was observed. The greatest FBH was in June (96.2 cm). Breeding months and onset of estrus of the mares exerted a significant effect on the incidence of agalactia during the lactation period. These analyses provide fundamental information on adaptive processes with respect to the reproductive characteristics of mares, indicating an extent of acclimation by these animals in subtropical Taiwan.

Effect of Season Influencing Semen Characteristics, Frozen-Thawed Sperm Viability and Testosterone Concentration in Duroc Boars
Cheon, Y.M.; Kim, H.K.; Yang, C.B.; Yi, Y.J.; Park, C.S.
This study was carried out to investigate the effects of season on semen characteristics, frozen-thawed sperm viability and testosterone concentration in Duroc boars. There were no significant differences in the semen volume and sperm concentration of Duroc boars among spring, summer, autumn and winter. However, the pH of the sperm-rich and sperm-poor fractions was higher in the autumn and winter seasons than in the spring and summer seasons. Sperm motility and normal acrosome rates of raw semen in Duroc boars did not differ significantly among spring, summer, autumn and winter.
However, the motility and normal acrosome rates of frozen-thawed sperm were higher in the spring season than in summer, autumn and winter. Serum testosterone concentrations in Duroc boars were higher in spring than in summer, autumn and winter. In conclusion, in seasons when serum testosterone concentrations were higher, frozen-thawed sperm viability in Duroc boars was also higher.

Changes in Maternal Blood Glucose and Plasma Non-Esterified Fatty Acid during Pregnancy and around Parturition in Twin and Single Fetus Bearing Crossbred Goats
Khan, J.R.; Ludri, R.S.
The effects of fetal number (single or twin) on blood glucose and plasma NEFA during pregnancy and around parturition were studied on ten Alpine ${\times}$ Beetal crossbred goats in their first to third lactation. The animals were divided into group 1 (carrying a single fetus, n=4) and group 2 (twin fetuses, n=6). Samples were drawn on day 1 after estrus and then at 14-day intervals (fortnightly) for 10 fortnights. Around parturition, samples were taken on days -20, -15, -10, -5, -4, -3, -2 and -1 prior to kidding, on day 0, and on days +1, +2, +3, +4, +5, +10, +15 and +20 post kidding. In twin-bearing goats the blood glucose concentration continued to increase from the 1st until the 4th fortnight and thereafter gradually declined from the 5th up to the 8th fortnight. In single-bearing goats there was an increase in levels from the 2nd up to the 4th fortnight, and thereafter the level declined from the 5th until the 9th fortnight. The difference between sampling intervals was highly significant (p<0.01) in both groups. However, the values were higher in single- than in twin-bearing goats. The plasma NEFA concentration was low in both groups up to the 4th fortnight and thereafter increased continuously up to the 9th fortnight. During the prepartum period the blood glucose was higher in single- than in twin-bearing goats. The values were at a minimum on the day of kidding in both groups. During the postpartum period the values were significantly (p<0.01) higher in twin than in single fetus bearing goats. The plasma NEFA was significantly (p<0.05) higher in twin than in single fetus bearing goats. The blood glucose and plasma NEFA concentrations can be used as indices of nutritional status during pregnancy and around parturition in goats.

Effect of Molybdenum Induced Copper Deficiency on Peripheral Blood Cells and Bone Marrow in Buffalo Calves
Randhawa, C.S.; Randhawa, S.S.; Sood, N.K.
Copper deficiency was induced in eight male buffalo calves by adding molybdenum (30 ppm, wet basis) to their diet. Copper status was monitored from the liver copper concentration, and a level below 30 ppm (DM basis) was considered deficient. Haemoglobin, haematocrit, and total and differential leucocyte numbers were determined. The functions of peripheral neutrophils were assessed by in vitro phagocytosis and killing of Staphylococcus aureus. The effect of molybdenum-induced copper deficiency on bone marrow was monitored. The mean total leucocyte count was unaffected, whereas a significant fall in neutrophil count coincided with the fall in hepatic copper level to $23.9{\pm}2.69$ ppm. Reduced blood neutrophil numbers were not accompanied by any change in the proportion of different neutrophil precursor cells in bone marrow. It was hypothesised that buffalo calves are more tolerant of dietary molybdenum excess than cattle. It was concluded that neutropenia in molybdenum-induced copper deficiency occurred without any effect on neutrophil synthesis and maturation.
Bone marrow studies in healthy calves revealed a higher percentage of neutrophilic myelocytes and metamyelocytes than in cattle.

Dry Matter Intake, Digestibility and Milk Yield by Friesian Cows Fed Two Napier Grass Varieties
Gwayumba, W.; Christensen, D.A.; McKinnon, J.J.; Yu, P.
The objective of this study was to compare two varieties of Napier grass (Bana Napier grass vs French Cameroon Napier grass) and to determine whether the feed intake, digestibility, average daily gain (ADG) and milk yield of lactating Friesian cows fed fresh-cut Bana Napier grass were greater than with French Cameroon Napier grass, using a completely randomized design. Results show that Bana Napier grass had similar percent dry matter (DM), ash and gross energy (GE) to French Cameroon. Bana grass had a higher percentage of crude protein (CP) and lower fiber fractions (acid detergent fibre (ADF), neutral detergent fibre (NDF) and lignin) compared to French Cameroon. Overall, the forage quality was marginally higher in Bana Napier grass compared to French Cameroon. The DM and NDF intakes expressed as a percentage of body weight (BW) were similar for both Napier grass types. Both grasses had similar digestible DM and energy. Bana had higher digestible CP but lower digestible ADF and NDF than French Cameroon. Bana Napier was not different from French Cameroon when fed as a sole diet to lactating cows in terms of low DM intake, milk yield and a loss of BW and condition. To improve the efficient utilization of both Napier grass varieties, a supplement capable of supplying 1085-1227 g CP/d and 17.0-18.0 Mcal ME/d is required for cows to support moderate gains of 0.22 kg/d and 15 kg of 4% fat-corrected milk/d.

Effects of Feeding Urea and Soybean Meal-Treated Rice Straw on Digestibility of Feed Nutrients and Growth Performance of Bull Calves
Ahmed, S.; Khan, M.J.; Shahjalal, M.; Islam, K.M.S.
The experiment was conducted for a period of 56 days with twelve Bangladeshi bull calves of average body weight $127.20{\pm}11.34$ kg. The calves were divided into 3 groups of 4 animals each. The animals were fed urea-treated rice straw designated as A) 4% urea-treated rice straw, B) 4% urea+4% soybean-treated rice straw and C) 4% urea+6% soybean-treated rice straw. In addition, all the animals were supplied 2 kg green grass, 350 g Til-oil-cake and 100 g common salt per 100 kg body weight. Straw was treated with 4% urea solution, and soybean meal at 4 and 6% was added to the treated straw and kept for 48 h in double-layer polythene bags under anaerobic conditions. Urea treatment improved the crude protein (CP) content of rice straw from 2.68 to 8.70%, and it was further increased to 10.74 and 12.12% with the addition of 4 and 6% soybean meal. Dry matter (DM) intake (kg) was higher (p<0.05) in C (4.2), followed by B (4.1) and A (4.0). Crude protein intake was significantly higher (p<0.05) in groups B and C than in group A. Total live weight gains were 20.2, 24.8 and 25.6 kg for calves of groups A, B and C, respectively (p<0.01). The addition of soybean meal to treated rice straw did not affect the coefficients of digestibility of DM, OM, EE and NFE. However, CP and CF digestibility were significantly higher in groups B and C (p<0.05).
The values for digestible crude protein (DCP), digestible ether extract (DEE), digestible nitrogen free extract (DNFE) and total digestible nutrients (TDN) were significantly (p<0.05) higher for diets C and B in comparison to diet A, but there was no significant difference in digestible organic matter (DOM) and digestible crude fibre (DCF) values among the groups. It may be concluded that 4% urea-treated rice straw can be fed to growing bull calves with 2 kg green grass and a small quantity of concentrate without any adverse effect on feed intake and growth. Moreover, soybean meal at 4 and 6% can be added to urea-treated rice straw at the time of treatment for rapid hydrolysis of urea, which resulted in an improvement in nutrient digestibility and better utilization of rice straw for the growth of bull calves.

Effect of Different Seasons on Cross-Bred Cow Milk Composition and Paneer Yield in Sub-Himalayan Region
Sharma, R.B.; Kumar, Manish; Pathak, V.
The study was designed to evaluate the seasonal influences on cross-bred cow milk composition and paneer yield in the Dhauladhar mountain range of the sub-Himalayan region. Fifty samples from each season were collected from a herd of $Jersey{\times}Red\;Sindhi{\times}Local$ cross-bred cows during the summer (April-June), rainy (July-September) and winter (November-February) seasons and analyzed for fat, total solids (TS) and solids not fat (SNF). Paneer was prepared by curdling milk at $85{\pm}2^{\circ}C$ with 2.5 per cent citric acid solution. Overall means for the fat, TS and SNF content of milk and paneer yield were 4.528, 13.310, 8.754 and 15.218 per cent, respectively. SNF and TS content varied among seasons, being highest in winter (8.983% and 13.639%), followed by summer (8.835% and 13.403%), and lowest in the rainy season (8.444% and 12.888%). Paneer yield was lowest (14.792%) in the rainy season and highest (15.501%) in the winter season.

Changes of Serum Mineral Concentrations in Horses during Exercise
Inoue, Y.; Osawa, T.; Matsui, A.; Asai, Y.; Murakami, Y.; Matsui, T.; Yano, H.
We investigated the exercise-induced changes in the serum concentrations of several minerals in horses. Four well-trained Thoroughbred horses performed exercise for 5 d. The blood hemoglobin (Hb) concentration increased during exercise, recovered to the pre-exercise level immediately after cooling down and did not change again until the end of the experiment. The changes in serum zinc (Zn) and copper (Cu) concentrations were similar to those of blood Hb during the experiment. The serum magnesium (Mg), inorganic phosphorus (Pi) and iron (Fe) concentrations also increased during exercise. Though the serum Pi concentration recovered to the pre-exercise level immediately after cooling down, it decreased further before the end of the experiment. The serum Mg concentration was lower immediately after cooling down than its pre-exercise level but gradually recovered from this temporary reduction. The recovery of the serum Fe concentration was delayed compared to that of the other minerals, occurring 2 h after cooling down. The serum calcium (Ca) concentration did not change during exercise but rapidly decreased after cooling down. As a result, it was lower immediately after cooling down than its pre-exercise level. It recovered, however, to the pre-exercise level 2 h after cooling down.
The temporary increase in the serum concentrations of all minerals except Ca is considered to result from hemoconcentration induced by exercise, and the stable serum Ca concentration during exercise is possibly due to its strict homeostatic regulation. These results indicate that the serum concentration of each mineral responds differently to exercise in horses, which may be due to differences in metabolism among these minerals.

Evaluation of Some Aquatic Plants from Bangladesh through Mineral Composition, In Vitro Gas Production and In Situ Degradation Measurements
Khan, M.J.; Steingass, H.; Drochner, W.
A study was conducted to evaluate the potential nutritive value of different aquatic plants from Bangladesh: duckweed (Lemna trisulca), duckweed (Lemna perpusilla), azolla (Azolla pinnata) and water-hyacinth (Eichhornia crassipes). A wide variability in protein, mineral composition, gas production, microbial protein synthesis, rumen degradable nitrogen, and in situ dry matter and crude protein degradability was recorded among species. Crude protein content ranged from 139 to 330 g/kg dry matter (DM). All species were relatively high in Ca, P and Na content and very rich in K, Fe, Mg, Mn, Cu and Zn concentrations. The rate of gas production was highest in azolla and lowest in water-hyacinth. A similar trend was observed with in situ DM degradability. Crude protein degradability was highest in duckweed. Microbial protein formation at 24 h incubation ranged from 38.6-47.2 mg, and in vitro rumen degradable nitrogen ranged between 31.5 and 48.4%. Based on the present findings, it is concluded that aquatic species have potential as a supplementary diet for livestock.

Effect of Partial Replacement of Green Grass by Urea Treated Rice Straw in Winter on Milk Production of Crossbred Lactating Cows
Sanh, M.V.; Wiktorsson, H.; Ly, L.V.
Fresh elephant grass was replaced by urea-treated rice straw (UTRS) to evaluate the effects on milk production of crossbred lactating cows. A total of 16 crossbred F1 cows (Holstein Friesian ${\times}$ Vietnamese Local Yellow), with a body weight of about 400 kg and lactation number from three to five, were used in the experiment. The experimental cows were blocked according to the milk yield of the previous eight weeks and divided into 4 homogeneous groups. The experiment was conducted with a Latin Square design with 4 treatments and 4 periods. Each period was 4 weeks, with 2 weeks of feed adaptation and 2 weeks for data collection. The ratio of concentrate to roughage in the ration was 50:50. All cows were given constant amounts of elephant grass dry matter (DM), with ratios of 100% grass without UTRS (control treatment 100G), and 75% grass (75G), 50% grass (50G) and 25% grass (25G) with ad libitum UTRS. Daily total DM intake on 100G, 75G, 50G and 25G was 12.04, 12.31, 12.32 and 11.85 kg, and the daily ME intake was 121.6, 121.5, 119.4 and 114.3 MJ, respectively. The daily CP intake was similar for all treatments (1.85-1.91 kg). There was a difference (p<0.05) in daily milk yield between the 25G and the 100G and 75G treatments (11.7 vs. 12.6 and 12.5 kg, respectively). Milk protein concentration was similar for all treatments, while a tendency toward increased milk fat concentration with increasing UTRS ratio was observed. The cows gained 4-5 kg body weight per month and showed first oestrus 3-4 months after calving. The overall feed conversion for milk production was not affected by the ratio of UTRS in the ration.
It is concluded that replacement of green grass by UTRS at a ratio of 50:50 for crossbred lactating cows is as good as feeding 100% green grass in terms of milk yield, body weight gain and feed conversion. UTRS can preferably replace green grass in daily rations for crossbred dairy cows in winter, at a ratio of 1:1, to cope with the shortage of green grass.

Influence of Dietary Addition of Dried Wormwood (Artemisia sp.) on the Performance, Carcass Characteristics and Fatty Acid Composition of Muscle Tissues of Hanwoo Heifers
Kim, Y.M.; Kim, J.H.; Kim, S.C.; Ha, H.M.; Ko, Y.D.; Kim, C.-H.
An experiment was conducted to examine the performance and carcass characteristics of Hanwoo (Korean native beef cattle) heifers and the fatty acid composition of the heifers' muscle tissues when the animals were fed diets containing four levels of dried wormwood (Artemisia sp.). For the experiment, the animals were given a basal diet consisting of rice straw and concentrate mixed at a 3:7 ratio (on a DM basis). The treatments were arranged as a completely randomized design with two feeding periods. Heifers were allotted to one of four dietary treatments, which were designed to progressively substitute dried wormwood for 0, 3, 5 and 10% of the rice straw in the basal diet. There was no difference in body weight gain throughout the entire period between the treatment groups. Feed conversion rate was improved (p<0.05) only by the 3% dried wormwood inclusion treatment compared with the basal treatment. Carcass weight, carcass yield and backfat thickness of all treatment groups were not altered by wormwood inclusion. The 5% dried wormwood inclusion significantly increased (p<0.05) the size of the loin-eye area over the other treatments. The higher levels (5 and 10%) of dried wormwood inclusion resulted in higher (p<0.05) water-holding capacity (WHC) in the loin than the lower levels (0 and 3%). The redness ($a^*$) and yellowness ($b^*$) values of meat color were significantly lower (p<0.05) in the top round muscle of heifers fed the diet containing 3% dried wormwood. Progressively increased intake of dried wormwood had a profound effect, leading to a linear increase in unsaturated fatty acid content and a linear decrease in saturated fatty acid content in the muscle tissues of Hanwoo heifers. It is concluded that feeding diets in which dried wormwood is substituted for an equal weight of rice straw at the 5% level would be anticipated to provide better quality roughage for beef heifer production and economic benefits for beef cattle producers.

The Effects of Chinese and Argentine Soybeans on Nutrient Digestibility and Organ Morphology in Landrace and Chinese Min Pigs
Qin, G.X.; Xu, L.M.; Jiang, H.L.; van der Poel, A.F.B.; Bosch, M.W.; Verstegen, M.W.A.
Twenty Landrace and twenty Min piglets, with an average initial body weight of 22.4 kg, were randomly divided into 5 groups with 4 animals per group within each of the breeds. The piglets were housed in individual concrete pens. Each group of piglets was fed one of 5 diets. The diets contained either 20% raw Argentine soybeans, 20% processed Argentine soybeans ($118^{\circ}C$ for 7.5 min), 20% raw Chinese soybeans, 20% processed Chinese soybeans ($118^{\circ}C$ for 7.5 min) or no soybean products (control diet). Faecal samples were collected on days 6, 7 and 8 of the treatment period. Digestibilities of dietary nutrients were determined with AIA (acid insoluble ash) as a marker.
After a 17-day treatment, three piglets from each of the groups were killed. Tissue samples of the small and large intestine for light and electron microscopy examination were taken immediately after opening of the abdomen. Then, the weight or size of the relevant organs was measured. The results show that the digestibilities of dry matter (DM), crude protein (CP) and fat were higher in Min piglets than in Landrace piglets (p<0.05). The diets containing processed soybeans had a significantly higher CP digestibility than the control diet and the diets containing raw soybeans (p<0.05). Landrace piglets had heavier and longer small intestines, heavier kidneys and a lighter spleen than Min piglets (p<0.05). The pancreas of the animals fed the diets containing processed soybeans was heavier than that of the animals fed the control diet (p<0.05) and the diets containing raw soybeans. However, the differences between the raw and processed soybean diets were not significant. A significant interaction (p<0.05) between diet and pig breed was observed in the weight of the small intestine. The Landrace piglets increased the weight of their small intestines when they were fed the diets containing soybeans. In the light micrographs and scanning electron micrographs, it was found that the villi of the small intestinal epithelium of animals (especially Landrace piglets) fed the diets containing raw Chinese soybeans were seriously damaged. The transmission electron micrographs showed that numerous vesicles were located between the small intestinal microvilli of these piglets. The histological examination also indicated that the proportion of goblet cells in the villi and crypts of the piglets consuming the control diet was significantly lower (p<0.01 and p<0.02, respectively) than that of the animals consuming the diets containing raw or processed soybeans.

Evaluation of Sorghum (Sorghum bicolor) as Replacement for Maize in the Diet of Growing Rabbits (Oryctolagus cuniculus)
Muriu, J.I.; Njoka-Njiru, E.N.; Tuitoek, J.K.; Nanua, J.N.
Thirty-six young New Zealand White rabbits were used in a randomised complete block (RCB) design with a $3{\times}2$ factorial treatment experiment to study the suitability of sorghum as a substitute for maize in the diet of growing rabbits in Kenya. Six different diets were formulated to contain 35% of one of three different types of grain (maize, white sorghum or brown sorghum) and one of two different levels of crude protein (CP), 16 or 18.5%, and fed to growing rabbits for a period of six weeks. The tannin content of the grains was 0.05, 0.52 and 5.6% catechin equivalents for maize, white and brown sorghum, respectively. Weaning weight at 35 days of age was used as the blocking criterion at the beginning of the experiment. Results for feed intake, weight gain, feed conversion efficiency and feed digestibility, as well as the blood parameters, indicated that white sorghum was not significantly different from maize. Animals fed on diets containing brown sorghum had a lower average daily gain (ADG) and a poorer feed conversion efficiency (FCE) (p<0.01) in comparison with those fed on diets containing maize or white sorghum. The 18.5% CP level gave a better FCE (p<0.05) compared with the 16% CP level. However, increasing the level of CP did not improve the utilisation of any of the grains. It was concluded that white sorghum could effectively substitute for maize in the diet of growing rabbits. On the other hand, the use of brown sorghum in the diets of growing rabbits may compromise their growth rate.
This may be due to the high concentration of tannins in the brown sorghum.

Effects of Active Immunization against Somatostatin or its Analogues on Milk Protein Synthesis of Rat Mammary Gland Cells
Kim, J.Y.; Cho, K.K.; Chung, M.I.; Kim, J.D.; Woo, J.H.; Yun, C.H.; Choi, Y.J.
The effects of active immunization against native 14-mer somatostatin (SRIF, somatotropin release-inhibiting factor) and two of its 14-mer somatostatin analogues on milk production in rat mammary cells were studied. Native SRIF, Tyr11-somatostatin (Tyr11-SRIF), and D-Trp8, D-Cys14-somatostatin (Trp8Cys14-SRIF) were conjugated to bovine serum albumin (BSA) for immunogen preparation. Twenty-four female Sprague-Dawley rats were divided into four groups and immunized against saline (control), SRIF, Tyr11-SRIF, and Trp8Cys14-SRIF at five weeks of age. Booster immunizations were performed at 7, 9, and 11 weeks of age. SRIF-immunized rats were mated at 10 weeks of age. The blood and mammary glands were collected on day 15 of pregnancy and of lactation. To measure the amount of milk protein synthesis in the mammary gland, mammary cells isolated from the pregnant and the lactating rats were cultured in the presence of $^3H$-lysine. No significant differences in growth performance, concentration of growth hormone in the circulation, or the amount of milk protein synthesis were observed among the groups. Induced levels of serum anti-SRIF antibody in the SRIF and Tyr11-SRIF groups, but not in the Trp8Cys14-SRIF group, were significantly higher than those of the control group during the pregnancy and lactation periods. The results suggest that active immunization against native 14-mer SRIF and Tyr11-SRIF was able to induce anti-SRIF antibodies but did not affect milk protein synthesis.

Cloning of cDNA Encoding PAS-4 Glycoprotein, an Integral Glycoprotein of Bovine Mammary Epithelial Cell Membrane
Hwangbo, Sik; Lee, Soo-Won; Kanno, Chouemon
Bovine PAS-4 is an integral membrane glycoprotein expressed in mammary epithelial cells. Complementary DNA (cDNA) cloning of PAS-4 was performed by reverse-transcriptase polymerase chain reaction (RT-PCR) with oligonucleotide probes based on its amino-terminal and internal tryptic peptides. The cloned PAS-4 cDNA was 1,852 nucleotides (nt) long, and its open reading frame (ORF) was 1,413 bases long. The deduced amino acid sequence indicated that PAS-4 consists of 471 amino acid residues with a molecular weight of 52,796, bearing 8 potential N-glycosylation sites and 9 cysteine residues. A partial bovine CD36 cDNA from liver was also sequenced, and the homology between the two nucleotide sequences was 94%. Most of the identical amino acid residues were in the luminal/extracellular domains. In contrast to PAS-4, bovine liver CD36 displays 6 potential N-glycosylation sites, which, except for those at positions 101 and 171, were located at the same positions as in PAS-4. The cysteine residues of PAS-4 and CD36 were the same in position and number. Northern blot analysis showed that PAS-4 was widely expressed, although its mRNA steady-state levels varied considerably among the analyzed cell types. PAS-4 possessed hydrophobic amino acid segments near the amino- and carboxyl-termini. The two short cytoplasmic tails at the amino- and carboxyl-terminal ends consisted of 5-7 and 8-11 amino acid residues, respectively.

Effects of High Level of Sucrose on the Moisture Content, Water Activity, Protein Denaturation and Sensory Properties in Chinese-Style Pork Jerky
Chen, W.S.; Liu, D.C.; Chen, M.T.
The effects of a high level of sucrose on the moisture content, water activity, protein denaturation and sensory properties of Chinese-style pork jerky were investigated. Pork jerky with different levels (0, 12, 15, 18 and 21%) of sucrose was prepared. Fifteen frozen boneless pork legs from different animals were used in this trial. Sucrose is a non-reducing disaccharide and would not be expected to undergo non-enzymatic browning. Some studies have pointed out that sucrose might be hydrolyzed into glucose and fructose during freezing, dehydration and storage, and cause non-enzymatic browning in meat products. The results showed that the moisture content and water activity of pork jerky decreased with increasing levels of sucrose. At the same time, the shear value increased due to the reduction of moisture content and water activity by osmotic dehydration. However, a higher level of sucrose had a significantly negative effect on the protein solubility and extractability of the myosin heavy chain of pork jerky due to non-enzymatic browning. From the results of sensory panel tests, the pork jerky with 21% sucrose seemed to be more acceptable to the panelists in hardness, sweetness and overall acceptability.

Evaluation of Ultrasound for Prediction of Carcass Meat Yield and Meat Quality in Korean Native Cattle (Hanwoo)
Song, Y.H.; Kim, S.J.; Lee, S.K.
Three hundred thirty-five progeny-testing steers of Korean beef cattle were evaluated ultrasonically for back fat thickness (BFT), longissimus muscle area (LMA) and intramuscular fat (IF) before slaughter. Class measurements associated with the Korean yield grade and quality grade were also obtained. Residual standard deviations between ultrasonic estimates and carcass measurements of BFT and LMA were 1.49 mm and $0.96cm^2$, respectively. The linear correlation coefficients (p<0.01) between ultrasonic estimates and carcass measurements of BFT, LMA and IF were 0.75, 0.57 and 0.67, respectively. The accuracies of yield grade prediction by four methods (the Korean yield grade index equation, fat depth alone, regression, and decision tree methods) were 75.4%, 79.6%, 64.3% and 81.4%, respectively. We conclude that the decision tree method can easily predict yield grade and is also useful for increasing the prediction accuracy rate.

Treatment of Microencapsulated ${\beta}$-Galactosidase with Ozone: Effect on Enzyme and Microorganism
Kwak, H.S.; Lee, J.B.; Ahn, J.
The present study was designed to examine the effect of ozone treatment of microencapsulated ${\beta}$-galactosidase on inactivation of the enzyme and sterilization of microorganisms. The efficiency was highest, at 78.4%, when the ratio of polyglycerol monostearate (PGMS) was 15:1. Activities of lactase remaining outside the capsules were affected by ozone treatment. With increasing ozone concentration and duration of ozone treatment, the activity was reduced significantly. In the sensory aspect, with 2% microcapsule addition, no significant difference in sweetness was found compared with a market milk during 12 d of storage. The above results indicated that an additional washing process for the lactase was not necessary to inactivate the residual enzyme. In a subsequent study, the vegetative cells of microorganisms were completely killed by treatment with 10 ppm ozone for 10 min. The present study provides evidence that ozone treatment can be used as an inactivation and sterilization process. In addition, these results suggest that acceptable milk products containing lactase microcapsules made with PGMS can be prepared with ozone treatment.
Studies on Lao-Chao Culture Filtrate for a Flavoring Agent in a Yogurt-Like Product
Liu, Yi-Chung; Chen, Ming-Ju; Lin, Chin-Wen
Lao-chao is a traditional Chinese fermented rice product with a sweet and fruity flavor, containing high levels of glucose and a little alcohol, and possessing milk-clotting characteristics. In order to optimize commercial production of lao-chao, Rhizopus javanicus and Saccharomyces cerevisiae were selected as the mold and yeast starters, respectively. A commercial mixed starter (chiu-yao) was used as the control. Fermentation of the experimental combination revealed a sharp drop in pH (to 4.5) on the fourth day, with the pH remaining constant thereafter. The content of reducing sugars gradually decreased throughout the entire fermentation period. Of the free amino acids, higher quantities of alanine, leucine, proline, glutamic acid, glutamine and $NH_3$ were noted. Among the sugars, glucose showed the highest concentration, while organic acid levels, including those of oxalic, lactic, citric and pyroglutamic acid, increased throughout the fermentation period. Twenty-one compounds were identified by gas chromatography from aroma concentrates of the lao-chao culture filtrate, prepared using the headspace method. For the flavor components, higher quantities of ethanol, fusel oil and esters were determined in both culture filtrates. In regard to the evaluation of the yogurt-like product, there were significant differences in alcoholic smell, texture and curd firmness.

Prevalence of Fumonisin Contamination in Corn and Corn-based Feeds in Taiwan
Cheng, Yeong-Hsiang; Wu, Jih-Fang; Lee, Der-Nan; Yang, Che-Ming J.
The purpose of this study was to investigate the prevalence of fumonisin contamination in corn and corn-based feeds in Taiwan. A total of 233 samples was collected from 8 feed mill factories located in four different regions of Taiwan. The presence of fumonisin $B_1$ ($FB_1$) and $B_2$ ($FB_2$) was determined by thin-layer chromatography, while the total fumonisin content was determined using immuno-affinity column cleanup and fluorometric quantitation. Our results showed that the 55 samples of swine feeds had the highest percentage incidence of $FB_1$ and $FB_2$ (41.8% and 41.8%, respectively), followed by the 66 samples of duck feeds (40.9% and 37.8%). However, the percentage incidence of $FB_1$ and $FB_2$ was much lower in the 43 samples of broiler feeds (23.2% and 13.9%) and the 69 samples of corn (17.3% and 10.1%). Corn and duck feeds were found to have significantly higher mean levels of total fumonisins ($5.4{\pm}1.5$ and $5.8{\pm}0.6$ ppm, respectively) than swine feeds ($2.9{\pm}0.4$ ppm) and broiler feeds ($3.0{\pm}0.5$ ppm). Comparing fumonisin distribution among the different regions, the highest percentage of $FB_1$ incidence (39.2%) was found in the eastern region of Taiwan, and the total fumonisin level ($4.5{\pm}0.7$ ppm) was significantly higher than in the other regions. However, the highest percentage of $FB_2$ incidence (32.0%) was found in the central region of Taiwan. Trimonthly analysis of the data showed that both a high percentage of $FB_1$ and $FB_2$ incidence (39.3% and 37.7%) and a high total concentration of fumonisins ($5.7{\pm}0.4$ ppm) were found in the period of January to March; the incidence and concentration were significantly higher than in the other trimonthly periods. These results indicate that fumonisin B mycotoxins are both widespread and persistent in feed-grade corn and corn-based feeds in Taiwan.
\begin{definition}[Definition:Non-Comparable Elements] Let $\struct {S, \RR}$ be a relational structure. Two elements $x, y \in S, x \ne y$ are '''non-comparable''' if neither $x \mathrel \RR y$ nor $y \mathrel \RR x$. \end{definition}
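For example, in the relational structure $\struct {\N, \divides}$, where $\divides$ denotes divisibility, the elements $2$ and $3$ are non-comparable, as neither $2 \divides 3$ nor $3 \divides 2$.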
List of things named after Isaac Newton

This is a list of things named after Sir Isaac Newton.

Science and mathematics
• Newtonianism, the philosophical principle of applying Newton's methods in a variety of fields

Mathematics
• Gauss–Newton algorithm • Newton–Cotes formulas • Newton–Gauss line • Newton–Leibniz axiom • Newton–Okounkov body • Newton–Pepys problem • Newton–Puiseux theorem • Newton fractal • Newton's identities • Newton's inequalities • Newton's method, also known as Newton–Raphson • Newton's method in optimization • Newton's notation • Newton number, another name for kissing number • Newton polygon • Newton polynomial • Newton series (finite differences), also known as Newton interpolation, see Newton polynomial • Newton's theorem about ovals • Truncated Newton method

Physics
• Newton's bucket, see bucket argument • Newton's cannonball • Newton's constant, see universal gravitational constant • Newton's cradle • Newton disc • Newton–Cartan theory • Newton–Euler equations • Newton's law of cooling • Newton's laws of motion • Newton's law of universal gravitation • Newton–Laplace equation • Newton's metal • Newton's minimal resistance problem • Newton's reflector, see also Newtonian telescope (a different design) • Newton's reflecting quadrant • Newton number, another name for power number • Newton's rings • Newton's rotating sphere argument, see rotating spheres • Newton scale • Newton's sphere theorem, see shell theorem • Newton's theorem of revolving orbits • Schrödinger–Newton equations • Newton (unit), the International System of Units (SI) derived unit of force • Newton's approximation for impact depth • Newtonian cosmology • Newtonian dynamics • Newtonian fluid, a fluid that flows like water: its shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear • Non-Newtonian fluids, in which the viscosity changes with the applied shear force • Newtonian mechanics, also known as classical mechanics • Newtonian potential • Newtonian telescope, a type of reflecting telescope • Post-Newtonian expansion

Places
• Newton Township, Miami County, Ohio, United States • Newtontoppen, the highest mountain in Svalbard • Newton Island (Antarctica), near Lagrange Island, Descartes Island (Antarctica), Laplace Island (Antarctica), Pascal Island and Monge Island • 8000 Isaac Newton, a minor planet • Newton (Martian crater) • Newton (lunar crater), an impact crater

Schools
• Isaac Newton Institute • Statal Institute of Higher Education Isaac Newton • Sir Isaac Newton Sixth Form, a specialist maths and science school

Artwork
• Newton, 1795 (reworked in 1805) painting by William Blake about Isaac Newton • Newton, 1995 sculpture by Eduardo Paolozzi inspired by Blake's painting • Isaac Newton Gargoyle, 1989 hammered copper sheet depiction of Newton on the exterior of Willamette Hall, University of Oregon

Other
• Isaac Newton Group of Telescopes, three optical telescopes on the Canary Islands • Isaac Newton Telescope • Newton Gateway to Mathematics • Newton (platform), a 1990s personal digital assistant by Apple Inc. • Institute of Physics Isaac Newton Medal, an annual award • XMM-Newton

See also
• Newtonian (disambiguation)
A single triangular SS-EMVS aided high-accuracy DOA estimation using a multi-scale L-shaped sparse array
Jin Ding, Minglei Yang, Baixiao Chen & Xin Yuan
EURASIP Journal on Advances in Signal Processing, volume 2019, Article number: 44 (2019)

We propose a new array configuration composed of multi-scale scalar arrays and a single triangular spatially spread electromagnetic-vector-sensor (SS-EMVS) for high-accuracy two-dimensional (2D) direction-of-arrival (DOA) estimation. Two scalar arrays are placed along the x-axis and y-axis, respectively; each array consists of two uniform linear arrays (ULAs), and these two ULAs have different inter-element spacings. In this manner, the two scalar arrays form a multi-scale L-shaped array. The two arms of this L-shaped scalar array are connected by a six-component SS-EMVS, which is composed of a spatially spread dipole-triad plus a spatially spread loop-triad. All the inter-element spacings in our proposed array can be larger than a half-wavelength of the incident source, thus forming a sparse array that mitigates the mutual coupling across antennas. In the proposed DOA estimation algorithm, we apply the vector-cross-product algorithm to the SS-EMVS to obtain a set of low-accuracy but unambiguous direction cosine estimates as a reference; we then apply the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm to the two scalar arrays to get two sets of high-accuracy but cyclically ambiguous direction cosine estimates. Finally, the coarse estimate is used to disambiguate the fine but ambiguous estimates progressively, and therefore a multiple-order disambiguation algorithm is developed. The proposed array enjoys the advantages of low redundancy and low mutual coupling. Moreover, the thresholds of the inter-sensor spacings utilized in the proposed array are also analyzed. Simulation results validate the performance of the proposed array geometry.

In the field of array signal processing, the DOA estimation accuracy for incident sources is proportional to the aperture of the antenna array, and therefore an array with a larger aperture is desired [1]. However, to avoid phase ambiguity in DOA estimation, it is generally believed that the spacing between adjacent antennas should not be greater than λ/2, where λ denotes the wavelength of the incident signal [1, 2]. In this way, a large-aperture array usually requires more antennas, which increases the cost as well as the mutual coupling between antennas. In order to mitigate this issue, various sparse array configurations and the corresponding DOA estimation algorithms have been developed. One type of sparse array is constructed from multiple widely separated sub-arrays [3–5], and the corresponding ESPRIT-based algorithms, which use the dual-size or multiple-size invariances within these arrays, were developed therein. Another type is designed to obtain as many degrees-of-freedom (DOFs) as possible so as to resolve more sources than sensors, such as the minimum-redundancy array [6], the nested array [7], and the co-prime array [8]. Their DOA estimation algorithms focus on using the high-order statistical characteristics of the received data of the sparse array to increase the number of DOFs and thus often require a large computational workload.
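For concreteness, the sensor positions of the nested [7] and co-prime [8] geometries mentioned above can be generated with a few lines of code. The following sketch is our own illustration (not taken from the cited papers); the sensor counts and the half-wavelength unit spacing are assumed values:

```python
import numpy as np

def nested_positions(n1, n2, d=0.5):
    """Two-level nested array: an inner ULA of n1 sensors at spacing d
    and an outer ULA of n2 sensors at spacing (n1 + 1) * d."""
    inner = d * np.arange(1, n1 + 1)
    outer = (n1 + 1) * d * np.arange(1, n2 + 1)
    return np.union1d(inner, outer)

def coprime_positions(m, n, d=0.5):
    """Co-prime array: a ULA of n sensors at spacing m*d interleaved
    with a ULA of 2m sensors at spacing n*d (m and n co-prime)."""
    sub1 = m * d * np.arange(n)
    sub2 = n * d * np.arange(2 * m)
    return np.union1d(sub1, sub2)

print(nested_positions(3, 3))   # [0.5 1.  1.5 2.  4.  6. ]
print(coprime_positions(2, 3))  # [0.  1.  1.5 2.  3.  4.5]
```

Here d is measured in wavelengths, so d=0.5 corresponds to λ/2; both geometries place most sensor pairs farther apart than λ/2 while keeping a filled difference co-array.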
In the meantime, the electromagnetic-vector-sensor (EMVS) [9], as well as other polarization antenna arrays [10–16], has recently received extensive attention in array signal processing. An EMVS can not only provide the DOA estimate of the signal but can also give the polarization information. An EMVS usually consists of three orthogonally oriented dipoles and three orthogonally oriented loops to measure the electric field and the magnetic field of the incident source [17]. Unfortunately, due to the collocated geometry, the mutual coupling across the EMVS components severely affects the performance of the algorithm. In 2011, Wong and Yuan [18] proposed an SS-EMVS, which consists of six orthogonally oriented but spatially non-collocated dipoles and loops. This SS-EMVS reduces the mutual coupling between antenna components, and the developed algorithm retains the effectiveness of the vector-cross-product algorithm [9]. Following this, various spatially spread polarization antenna arrays have been proposed [19–23]. Li et al. [24] presented many geometric configurations of the SS-EMVS and a nonlinear programming-based DOA estimation algorithm. Yuan [25] proposed ways of placing four/five spatially noncollocated dipoles/loops for multi-source azimuth/elevation direction finding and polarization estimation. The array configuration of the SS-EMVS was further investigated in [11, 26]. Most recently, there has been some research on the combination of the EMVS and sparse arrays and the corresponding parameter estimation algorithms. For example, Han et al. [27] developed a nested vector-sensor array, He et al. [28] proposed a nested cross-dipole array, and Rao et al. [29] proposed a new class of sparse vector-sensor arrays. Various compositions of sparse acoustic vector-sensor arrays to estimate the elevation-azimuth angles of coherent sources were presented in [30]. In [21], we proposed a multi-scale sparse array with each sensor unit consisting of one SS-EMVS, which is capable of estimating the 2D directions and polarization information of the source simultaneously. However, the estimation accuracy for one of the two direction cosines is limited (by the aperture of a single SS-EMVS) since the sparse array is only extended along one axis. Furthermore, the unit of the aforementioned array is a six-component SS-EMVS, and therefore, the cost and redundancy of the whole array are still high. In order to tackle the limitation of the sparse array developed in [21], in this paper, we propose a new array geometry composed of multi-scale scalar arrays and a single triangular SS-EMVS, and develop the corresponding 2D DOA estimation algorithm. The proposed array consists of an L-shaped scalar array and a triangular SS-EMVS. The two arms of the L-shaped scalar array are connected by a triangular SS-EMVS, which is placed in such a way that the vector-cross-product algorithm can be applied to it for DOA estimation. The scalar sensors in each arm of the L-shaped array can be divided into two uniform linear sub-arrays with different inter-sensor spacings. Owing to the spatially spread geometry of the SS-EMVS and the different inter-sensor spacings of the two sub-arrays, we can obtain multiple estimates of target parameters. From the SS-EMVS, we can obtain unambiguous but low-accuracy estimates and relatively high-accuracy but ambiguous estimates of the incident sources using the vector-cross-product algorithm [18].
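As a brief aside, the vector-cross-product estimator invoked here admits a compact numerical illustration (ours, not code from the cited references): for the normalized field vectors e and h given later in Eq. (2), the cross product e × h* equals the direction-cosine vector [u, v, w]. The test angles below are arbitrary:

```python
import numpy as np

theta, phi = np.deg2rad(40.0), np.deg2rad(65.0)   # elevation, azimuth
gamma, eta = np.deg2rad(30.0), np.deg2rad(-50.0)  # polarization parameters

# Normalized field responses, cf. Eq. (2) below
F = np.array([[np.cos(phi) * np.cos(theta), -np.sin(phi)],
              [np.sin(phi) * np.cos(theta),  np.cos(phi)],
              [-np.sin(theta),               0.0]])
G = np.array([[-np.sin(phi), -np.cos(phi) * np.cos(theta)],
              [ np.cos(phi), -np.sin(phi) * np.cos(theta)],
              [ 0.0,          np.sin(theta)]])
p = np.array([np.sin(gamma) * np.exp(1j * eta), np.cos(gamma)])
e, h = F @ p, G @ p   # electric- and magnetic-field vectors

# The cross product e x h* points along the propagation direction
uvw_est = np.real(np.cross(e, np.conj(h)))
uvw = np.array([np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)])
print(np.allclose(uvw_est, uvw))  # True
```

This identity is what lets the SS-EMVS deliver an unambiguous (if coarse) direction-cosine reference without any array aperture at all.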
In addition, we can obtain two high-accuracy but cyclically ambiguous estimates of the desired direction cosines by applying the ESPRIT algorithm to the corresponding two sub-arrays in the L-shaped array, respectively. Following this, we develop a three-order disambiguation method to obtain the final high-accuracy and unambiguous estimates of the target DOA. The proposed array integrates the advantages of the sparse (scalar) array and the SS-EMVS in reducing mutual coupling and achieving high-accuracy DOA estimation. Moreover, we only use a single SS-EMVS along with the L-shaped scalar array to achieve high-accuracy DOA estimation, and thus the cost, the redundancy of the proposed array, and the computational workload of the corresponding DOA estimation algorithm decrease significantly. The rest of this paper is organized as follows. Section 2 describes the proposed array geometry. Section 3 develops the proposed algorithm for DOA estimation. In Section 4, numerical examples are provided to show the effectiveness and advantages of the proposed array and algorithm. Section 5 concludes the paper.

Array geometry

Triangular spatially-spread electromagnetic-vector-sensor

Figure 1 depicts the array configuration for the triangular SS-EMVS used in our paper, where one dipole ey is placed at the origin (of the Cartesian coordinate system) and the other two dipoles are placed along the x-axis and y-axis. The distance between ex and ey is Δx,y, and the distance between ey and ez is Δy,z. The loops of the SS-EMVS are placed in such a way that the vector-cross-product algorithm can be adopted for DOA estimation, i.e., \(\overrightarrow {e_{y}e_{x}}=-\overrightarrow {h_{y}h_{x}}\) and \(\overrightarrow {e_{y}e_{z}}=-\overrightarrow {h_{y}h_{z}}\) [11], where \(\overrightarrow {xy}\) denotes a vector from point x to point y and hy is located at (xh,yh,zh). The positions of the three dipoles and the three loops form two right-angled triangles, and thus we name it the triangular SS-EMVS. It is worth noting that both Δx,y and Δy,z can be larger than a half-wavelength of the signal. Therefore, the SS-EMVS itself is a sparse array.

Fig. 1 Configuration of the triangular SS-EMVS [11]. The source is located at elevation angle θ and azimuth angle ϕ

Besides, the configuration of the SS-EMVS used in [21] is based on two parallel lines. It can only expand in one direction; the estimation accuracy for the other direction cosine is limited. By contrast, the triangular SS-EMVS depicted in Fig. 1 extends in two directions. Therefore, this configuration can provide relatively higher accuracy direction-cosine estimates for the two direction cosines along the x- and y-axis, respectively, and thus higher accuracy estimates for θ (elevation angle) and ϕ (azimuth angle) through the vector-cross-product algorithm. Thereby, it is reasonable to use this SS-EMVS configuration to extend the aperture of the array by constructing a 2D L-shaped array. Consider a far-field source, located at elevation angle θ∈[0,π] and azimuth angle ϕ∈[0,2π), with polarization parameters (γ,η), where γ refers to the auxiliary polarization angle and η represents the polarization phase difference. The array manifold a of the triangular SS-EMVS in Fig.
1 can be denoted by the electric-field vector e=[ex,ey,ez]T and the magnetic-field vector h=[hx,hy,hz]T, taking into account the inter-dipole/loop spacings {Δx,y,Δy,z}:
$$ \boldsymbol{a} = \left[\begin{array}{c} 1\\ e^{j\frac{2\pi}{\lambda}\Delta_{x,y} v}\\ e^{j\frac{2\pi}{\lambda}(\Delta_{x,y} v - \Delta_{y,z} u)}\\ e^{-j\frac{2\pi}{\lambda}(x_{h}u + y_{h} v+ z_{h} w - 2\Delta_{x,y} v)}\\ e^{-j\frac{2\pi}{\lambda}[(x_{h}u + y_{h} v+ z_{h} w) - \Delta_{x,y} v]}\\ e^{-j\frac{2\pi}{\lambda}[(x_{h}u + y_{h} v+ z_{h} w) - (\Delta_{x,y} v + \Delta_{y,z} u)]} \end{array} \right]\odot\left[\begin{array}{c} e_{x}\\ e_{y}\\ e_{z}\\ h_{x}\\ h_{y}\\ h_{z} \end{array}\right], $$
where
$$ \left[\begin{array}{c} e_{x}\\ e_{y}\\ e_{z}\\ h_{x}\\ h_{y}\\ h_{z} \end{array}\right] = \left[\begin{array}{cc} \cos\phi\cos\theta & -\sin\phi\\ \sin\phi\cos\theta & \cos\phi\\ -\sin\theta & 0\\ -\sin\phi & -\cos\phi\cos\theta\\ \cos\phi & -\sin\phi\cos\theta\\ 0 & \sin\theta \end{array}\right] \left[\begin{array}{c} \sin\gamma e^{j\eta}\\\cos\gamma \end{array}\right], $$
and λ represents the wavelength of the signal, the superscript (.)T is the transposition operator, ⊙ denotes the Hadamard (element-wise) product, \(j=\sqrt{-1}\), and
$$\begin{array}{@{}rcl@{}} \left\{ \begin{array}{ccl} u &=& \sin\theta\cos\phi\\ v &=& \sin\theta\sin\phi\\ w &=& \cos\theta \end{array}\right. \end{array} $$
represent the direction cosines along the x-, y-, and z-axis, respectively.

Design of proposed array

Figure 2 demonstrates our proposed array configuration composed of an L-shaped sparse scalar array and a single triangular SS-EMVS. The triangular SS-EMVS is located at the origin and two scalar arrays are placed along the x-axis and y-axis, respectively. The antennas on the two arms of the L-shaped array are oriented differently, i.e., along ex and ez, respectively. Each arm of the L-shaped array consists of two sub-arrays. Taking the arm along the y-axis as an example, the first sub-array, which consists of the first n1 dipoles (including the ex in the triangular SS-EMVS located at the origin), is placed with inter-sensor spacing D1>Δx,y≫λ/2; the second sub-array, which consists of the last n2 dipoles, is placed with an even larger inter-sensor spacing D2=m1D1, where m1 is an integer. Furthermore, we can see that the first sub-array and the triangular SS-EMVS share the same ex. Similarly, the arm of the L-shaped array along the x-axis consists of two sub-arrays with inter-sensor spacings D3>Δy,z≫λ/2 and D4=m2D3, respectively, where m2 is an integer; the first sub-array and the triangular SS-EMVS share the same ez. It should be noticed that the dipoles placed along the x-axis and the y-axis are of different orientations, and they are the same as the dipoles of the triangular SS-EMVS along the corresponding axis.

Fig. 2 The proposed array configuration. The triangular SS-EMVS in Fig. 1 is located at the origin. The scalar array whose unit is ex is extended along the y-axis. The inter-sensor spacings in sub-array 1 and sub-array 2 are D1 and D2, respectively, where D2=m1D1 and D1>Δx,y≫λ/2. Similarly, the scalar array whose unit is ez is extended along the x-axis. The inter-sensor spacings in sub-array 3 and sub-array 4 are D3 and D4, respectively, where D4=m2D3 and D3>Δy,z≫λ/2

We take the scalar array placed along the y-axis as the example again to illustrate the design idea of the proposed array.
The triangular SS-EMVS can provide a coarse estimate of v by applying the vector-cross-product algorithm. This estimate can then be used as a reference for resolving the ambiguity of the v estimate obtained from the first sub-array of the scalar array; therefore, the inter-sensor spacing of the first sub-array can be larger than λ/2. The aperture of the first sub-array is much larger than the inter-dipole/loop spacings of the triangular SS-EMVS, and thus we can obtain a finer estimate of v from the first sub-array. Similarly, the disambiguated estimate from the first sub-array can be adopted as the reference for the second sub-array, and finally a high-accuracy estimate of v is obtained. In the same way, the scalar array placed along the x-axis yields a high-accuracy estimate of u. Finally, the high-accuracy angle estimates can be calculated from the high-accuracy u and v estimates. Moreover, the inter-sensor spacings of the two scalar arrays are much larger than λ/2, and thus the apertures, and hence the angle estimation accuracy, of the proposed array are better than those of the L-shaped array with λ/2 inter-sensor spacing [31] and of the L-shaped nested array [32] whose first sub-array has λ/2 inter-sensor spacing. These properties will be verified in Section 4 through extensive simulation experiments. In addition, because the scalar arrays are extended along the x-axis and the y-axis at the same time, high-accuracy 2D DOA estimates can be obtained simultaneously. This cannot be achieved by the multi-scale EMVS array proposed in [21], where the multi-scale aperture extension is along one axis only. Furthermore, only a single SS-EMVS (along with scalar sensors), instead of many SS-EMVSs, is adopted in the proposed array, and thus the cost and redundancy of the array decrease dramatically.
Array manifold and signal model
The array manifold of the scalar array placed along the y-axis is
$$\begin{array}{@{}rcl@{}} \boldsymbol{a}_{y} &\,=\,& \left[\begin{array}{c} \left. \begin{array}{c} 1\\ e^{-j\frac{2\pi}{\lambda}D_{1} v}\\ \vdots\\ e^{-j\frac{2\pi}{\lambda}(n_{1}-1)D_{1} v} \end{array} \right\}n_{1}\\ \left. \begin{array}{c} e^{-j\frac{2\pi}{\lambda}n_{1}D_{1} v}\\ e^{-j\frac{2\pi}{\lambda}(n_{1}D_{1} + D_{2}) v}\\ \vdots\\ e^{-j\frac{2\pi}{\lambda}[n_{1}D_{1} + (n_{2}-1)D_{2}] v} \end{array} \right\}n_{2}\\ \end{array} \right] \!\otimes\! \boldsymbol{a} [1], \end{array} $$
where ⊗ denotes the Kronecker product, a is defined in Eq. (1), a[1] is the first row of a, and thus \(\boldsymbol {a}_{y} \in {\mathbb C}^{N_{1}\times 1}\) with N1=n1+n2. Similarly, the array manifold of the scalar array placed along the x-axis is
$$\begin{array}{@{}rcl@{}} \boldsymbol{a}_{x} &\!\!\!\,=\,\!\!\!& \left[\begin{array}{c} \left. \begin{array}{c} 1\\ e^{-j\frac{2\pi}{\lambda}D_{3} u}\\ \vdots\\ e^{-j\frac{2\pi}{\lambda}(n_{3}-1)D_{3} u} \end{array} \right\}n_{3}\\ \left. \begin{array}{c} e^{-j\frac{2\pi}{\lambda}n_{3}D_{3} u}\\ e^{-j\frac{2\pi}{\lambda}(n_{3}D_{3} + D_{4}) u}\\ \vdots\\ e^{-j\frac{2\pi}{\lambda}[n_{3}D_{3} + (n_{4}-1)D_{4}] u} \end{array} \right\}n_{4}\\ \end{array} \right] \!\otimes\! \boldsymbol{a} [3], \end{array} $$
where a[3] is the third row of a, and \(\boldsymbol {a}_{x} \in {\mathbb C}^{N_{2}\times 1}\) with N2=n3+n4.
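The manifolds in Eqs. (1)–(5) can be assembled numerically as in the following sketch (again our illustration, not the authors' code; numpy assumed, and the function and variable names are ours):

```python
import numpy as np

def manifold_a(theta, phi, gamma, eta, dxy, dyz, xh, yh, zh, lam):
    """Triangular SS-EMVS manifold a of Eqs. (1)-(2); also returns (u, v)."""
    u = np.sin(theta) * np.cos(phi)          # direction cosines, Eq. (3)
    v = np.sin(theta) * np.sin(phi)
    w = np.cos(theta)
    k = 2 * np.pi / lam
    F = np.array([[ np.cos(phi)*np.cos(theta), -np.sin(phi)],
                  [ np.sin(phi)*np.cos(theta),  np.cos(phi)],
                  [-np.sin(theta),              0.0],
                  [-np.sin(phi),               -np.cos(phi)*np.cos(theta)],
                  [ np.cos(phi),               -np.sin(phi)*np.cos(theta)],
                  [ 0.0,                        np.sin(theta)]])
    eh = F @ np.array([np.sin(gamma)*np.exp(1j*eta), np.cos(gamma)])  # Eq. (2)
    ph = xh*u + yh*v + zh*w
    phase = np.array([0.0,                   # Eq. (1) spatial phase factors
                      dxy*v,
                      dxy*v - dyz*u,
                      -(ph - 2*dxy*v),
                      -(ph - dxy*v),
                      -(ph - (dxy*v + dyz*u))])
    return np.exp(1j*k*phase) * eh, u, v

def scalar_arm(a_elem, dc, Da, Db, na, nb, lam):
    """Multi-scale arm manifold of Eqs. (4)-(5): phase vector times one
    element of a; dc is the direction cosine seen by the arm (v or u)."""
    k = 2 * np.pi / lam
    pos = np.concatenate([np.arange(na)*Da, na*Da + np.arange(nb)*Db])
    return np.exp(-1j*k*pos*dc) * a_elem

# a_y uses the first element of a (the e_x dipole); a_x uses the third (e_z):
# a, u, v = manifold_a(...); a_y = scalar_arm(a[0], v, D1, D2, n1, n2, lam)
#                            a_x = scalar_arm(a[2], u, D3, D4, n3, n4, lam)
```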
Following this, the array manifold of the proposed array is
$$ \boldsymbol{b} = \left[\begin{array}{l} \boldsymbol{a} \\ \boldsymbol{a}_{y}[2:N_{1}]\\ \boldsymbol{a}_{x}[2:N_{2}] \end{array}\right], $$
where ay[2:N1] consists of the (N1−1) rows of ay from the second row to the last, and ax[2:N2] consists of the (N2−1) rows of ax from the second row to the last. Therefore, \(\boldsymbol {b} \in {\mathbb C}^{N\times 1}\) with N=N1+N2+4. In a multiple-source scenario with K incident signals, the received data of the proposed sparse array at time t is
$$ \boldsymbol{x}(t) = \sum\limits_{k=1}^{K} \boldsymbol{b}_{k} s_{k}(t) + \boldsymbol{n}(t) = \mathbf{B} \boldsymbol{s}(t) + \boldsymbol{n}(t), $$
where \(\boldsymbol {b}_{k} \in {\mathbb C}^{N\times 1}\) represents the array manifold of the kth signal and \(\mathbf {B} = [\boldsymbol {b}_{1},\boldsymbol {b}_{2}, \dots, \boldsymbol {b}_{K}] \in {\mathbb C}^{N\times K}\). s(t)=[s1(t),s2(t),…,sK(t)]T denotes the incident signal vector, and n(t) signifies the additive white Gaussian noise. Considering L time snapshots, we can form the received data matrix
$$ \mathbf{X} = [\boldsymbol{x}(t_{1}),\boldsymbol{x}(t_{2}), \dots, \boldsymbol{x}(t_{L})]. $$
The task is then to estimate the DOAs of the K sources from \(\mathbf {X} \in {\mathbb C}^{N \times L}\), as described in detail below.
Procedure of multi-scale DOA estimation algorithm
As described in Section 2.2, we can obtain multiple estimates of the direction cosines along the y-axis and x-axis from the received data of the triangular SS-EMVS and the two arms of the L-shaped array. However, some of the estimates are cyclically ambiguous, and we use the coarse estimates to disambiguate the ambiguous estimates step by step. The procedure of the entire algorithm is shown in Algorithm 1. In the following, we give the detailed derivation and progress of the DOA estimation algorithm.
ESPRIT-based method to estimate the two sets of high-accuracy but cyclically ambiguous v and two sets of high-accuracy but cyclically ambiguous u
The array covariance matrix can be estimated by the maximum-likelihood estimate
$$ \hat{\mathbf{R}} = {\frac{1}{L}}\mathbf{X} \mathbf{X}^{H}, $$
where the superscript H is the Hermitian operator. Following [4], let \(\mathbf {E}_{s} \in {\mathbb C}^{N \times K}\) be the signal subspace matrix composed of the K eigenvectors corresponding to the K largest eigenvalues of \(\hat {\mathbf {R}}\). Es spans the same signal subspace as the manifold matrix B, and thus
$$ \mathbf{E}_{s} = \mathbf{B} \mathbf{T}, $$
where T denotes an unknown K×K non-singular matrix. According to the composition of the proposed array, we divide the manifold matrix B into three parts, i.e., B1, By, and Bx, where \(\mathbf {B}_{1} \in {\mathbb C}^{6 \times K}\) is composed of the top six rows of B (corresponding to the triangular SS-EMVS), \(\mathbf {B}_{y} \in {\mathbb C}^{N_{1} \times K}\) is composed of the first row of B and the (N1−1) rows starting from the seventh row (corresponding to the sensors on the y-axis), and \(\mathbf {B}_{x} \in {\mathbb C}^{N_{2} \times K}\) is composed of the third row of B and the (N2−1) rows starting from the (N1+6)th row (corresponding to the sensors on the x-axis). In this way, B1, By, and Bx signify the manifold matrices of the SS-EMVS and the two scalar arrays, respectively. Similarly, we can divide the signal subspace matrix Es into three parts in the same way, i.e., \(\mathbf {E}_{s_{1}}\), \(\mathbf {E}_{s_{y}}\), and \(\mathbf {E}_{s_{x}}\).
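A minimal sketch of this subspace step (Eqs. (9)–(10)) and of the row partition just described (our illustration; numpy assumed, 0-based row indices):

```python
import numpy as np

# X: N-by-L snapshot matrix, K: number of sources, N1, N2 as in the text.
R = X @ X.conj().T / L                      # Eq. (9), sample covariance
eigvals, V = np.linalg.eigh(R)              # eigenvalues in ascending order
Es = V[:, -K:]                              # K dominant eigenvectors, Eq. (10)

Es1 = Es[:6]                                # rows of the triangular SS-EMVS
Esy = np.vstack([Es[0:1], Es[6:N1 + 5]])    # e_x row plus the y-arm rows
Esx = np.vstack([Es[2:3], Es[N1 + 5:]])     # e_z row plus the x-arm rows
```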
Thus, according to the relationship between the array manifold matrix and the signal subspace [33] described in Eq. (10), we have
$$\begin{array}{@{}rcl@{}} \mathbf{E}_{s_{1}} &=& \mathbf{B}_{1} \mathbf{T}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathbf{E}_{s_{y}} &=& \mathbf{B}_{y} \mathbf{T}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathbf{E}_{s_{x}} &=& \mathbf{B}_{x} \mathbf{T}. \end{array} $$
After this, we deal with \(\mathbf {E}_{s_{y}}\) and \(\mathbf {E}_{s_{x}}\) separately to get the two sets of high-accuracy but cyclically ambiguous estimates of v and u. Let us take \(\mathbf {E}_{s_{y}}\) as an example to demonstrate the derivation. According to the different inter-sensor spacings of the scalar array whose unit is ex, we divide \(\mathbf {E}_{s_{y}}\) into two parts, i.e., \(\mathbf {E}_{s_{y,1}}\) and \(\mathbf {E}_{s_{y,2}}\), where \(\mathbf {E}_{s_{y,1}} \in {\mathbb C}^{n_{1} \times K}\) and \(\mathbf {E}_{s_{y,2}} \in {\mathbb C}^{n_{2} \times K}\) correspond to sub-array 1 and sub-array 2, respectively. Recalling Fig. 2, both sub-array 1 and sub-array 2 are uniform arrays, so the ESPRIT algorithm can be applied to \(\mathbf {E}_{s_{y,1}}\) and \(\mathbf {E}_{s_{y,2}}\) to obtain two sets of high-accuracy but cyclically ambiguous estimates of v, respectively. The process is consistent with that developed in [21]. Since the inter-sensor spacings D1 and D2 are both larger than λ/2, two sets of high-accuracy but cyclically ambiguous y-axis direction-cosine estimates \(\hat {v}_{k}^{\text {fine,1}}\) and \(\hat {v}_{k}^{\text {fine,2}}\) can be derived. In addition, it can be seen from [21, 34] that, due to the same column permutation of T, these two sets of v estimates \(\{\hat {v}_{k}^{\text {fine, 1}}\}_{k=1}^{K}\) and \(\{\hat {v}_{k}^{\text {fine, 2}}\}_{k=1}^{K}\) are paired automatically. Likewise, we can obtain two sets of high-accuracy but cyclically ambiguous u estimates, \(\hat {u}_{k}^{\text {fine,1}}\) and \(\hat {u}_{k}^{\text {fine,2}}\), by applying a similar process to \(\mathbf {E}_{s_{x}}\); these u estimates are also paired automatically.
Vector-cross-product algorithm to estimate the unambiguous but low-accuracy v and u and the relatively high-accuracy but ambiguous v and u
According to [18, 24], we need an estimate of the array manifold in order to apply the vector-cross-product algorithm to the data of the triangular SS-EMVS. Recalling Eq. (11), we can estimate the manifold matrix of the SS-EMVS by
$$ \hat{\mathbf{B}}_{1} = \mathbf{E}_{s_{1}} \mathbf{T}^{-1}, $$
where \(\hat {\mathbf {B}}_{1} = \left [\hat {\boldsymbol {a}}_{1}, \dots, \hat {\boldsymbol {a}}_{K}\right ]\) and \(\hat {\boldsymbol {a}}_{k}\) is the estimated array manifold of the kth source at the triangular SS-EMVS. The next step is to apply the vector-cross-product algorithm to \(\hat {\boldsymbol {a}}_{k}\). For convenience, we set θ∈[0,π/2], ϕ∈[0,π/2), omit the source index k, and recall Eq. (1), where we have \(\boldsymbol {a} = \left [ {\tilde {\boldsymbol {e}}}^{T}, {\tilde {\boldsymbol {h}}}^{T}\right ]^{T}\) with
$$\begin{array}{@{}rcl@{}} \tilde{\boldsymbol{e}} \!&=& \left[\begin{array}{r} e_{x}\\ e^{j\frac{2\pi}{\lambda}\Delta_{x,y}v} e_{y}\\ e^{j\frac{2\pi}{\lambda}(\Delta_{x,y}v - \Delta_{y,z}u)} e_{z} \end{array} \right], \end{array} $$
$$\begin{array}{@{}rcl@{}} \tilde{\boldsymbol{h}}\! \!&=&\!\!
\left[\begin{array}{r} e^{-j\frac{2\pi}{\lambda}(x_{h}u \,+\, y_{h} v\,+\, z_{h} w - 2\Delta_{x,y}v)} h_{x}\\ e^{-j\frac{2\pi}{\lambda}[(x_{h}u \!+ \!y_{h} v\,+\, z_{h} w) - \Delta_{x,y}v]} h_{y}\\ e^{-j\frac{2\pi}{\lambda}[(x_{h}\!u \,+\, y_{h} \!v\,+\, z_{h}\! w) - (\Delta_{x,y}v \,+\, \Delta_{y,z}u)]}h_{z} \end{array} \right]. \end{array} $$
According to the vector-cross-product algorithm of the triangular SS-EMVS [11], we have
$$\begin{array}{@{}rcl@{}} \boldsymbol{p} &\!\!\,=\,\!\!& \frac{(\tilde{\boldsymbol{e}})\times (\tilde{\boldsymbol{h}})^{*}}{\|(\tilde{\boldsymbol{e}})\times (\tilde{\boldsymbol{h}})^{*}\|}\\ &\!\!\,=\,\!\!& e^{j\frac{2\pi}{\lambda}(x_{h} u \,+\, y_{h}v \,+\, z_{h} w)}\!\!\!\left[\begin{array}{l} u e^{-j\frac{2\pi}{\lambda}\Delta_{y,z}u}\\ v e^{-j\frac{2\pi}{\lambda}(\Delta_{x,y}v+\Delta_{y,z}u)}\\ w e^{-j\frac{2\pi}{\lambda}\Delta_{x,y}v} \end{array}\right] \end{array} $$
where × denotes the vector cross product and p is calculated from \(\hat {\boldsymbol {a}}\). From the Poynting vector of the kth source, pk, derived in Eq. (17), we can obtain the unambiguous but low-accuracy estimates of {uk,vk,wk} by
$$\begin{array}{@{}rcl@{}} \left\{\begin{array}{ccc} u_{k}^{\text{coarse}} & =& |[\boldsymbol{p}_{k}]_{1}|,\\ v_{k}^{\text{coarse}} & =& |[\boldsymbol{p}_{k}]_{2}|,\\ w_{k}^{\text{coarse}} & =& |[\boldsymbol{p}_{k}]_{3}|, \end{array}\right. \end{array} $$
where [ ]i extracts the ith element of the vector inside [ ], and | | denotes the absolute value of the entity inside | |. Next, we obtain relatively high-accuracy estimates of u and v from the displacements of the dipoles/loops within the triangular SS-EMVS, i.e., Δx,y and Δy,z. From p, we can get
$$ \boldsymbol{p}^{o} = \boldsymbol{p}\odot e^{-j\angle[\boldsymbol{p}]_{2}} = \left[\begin{array}{l} u e^{j\frac{2\pi}{\lambda}\Delta_{x,y}v}\\ v \\ w e^{j\frac{2\pi}{\lambda}\Delta_{y,z}u} \end{array}\right], $$
where ⊙ denotes the Hadamard (element-wise) product. Based on Eq. (19), we have one set of relatively high-accuracy but ambiguous estimates of u and v:
$$\begin{array}{@{}rcl@{}} \hat{u}_{k}^{\text{fine, 0}} &=& \frac{\lambda}{2\pi} \frac{1}{\Delta_{y,z}}\angle\left\{[\boldsymbol{p}_{k}^{o}]_{3}\right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{v}_{k}^{\text{fine, 0}} &=& \frac{\lambda}{2\pi} \frac{1}{\Delta_{x,y}}\angle\left\{[\boldsymbol{p}_{k}^{o}]_{1}\right\}. \end{array} $$
It is worth mentioning that the unambiguous but low-accuracy estimates \(\{u_{k}^{\text {coarse}}\}_{k=1}^{K}\) and the relatively high-accuracy but ambiguous estimates \(\{\hat {u}_{k}^{\text {fine, 0}}\}_{k=1}^{K}\) are paired automatically; due to the same T in Eq. (14), all u estimates are paired, and likewise for v. Moreover, for θ and ϕ in other angular ranges, only the plus or minus signs in Eqs. (18) and (19) change [11].
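The vector-cross-product step (Eqs. (17)–(21)) is compact enough to sketch in code; this is our illustration (numpy assumed) under the angle ranges set above:

```python
import numpy as np

def vcp_estimates(a_hat, dxy, dyz, lam):
    """Coarse and fine-but-ambiguous direction cosines from one estimated
    SS-EMVS manifold a_hat (6-vector), following Eqs. (17)-(21)."""
    e_t, h_t = a_hat[:3], a_hat[3:]
    p = np.cross(e_t, np.conj(h_t))
    p = p / np.linalg.norm(p)                   # Eq. (17)
    u_c, v_c, w_c = np.abs(p)                   # Eq. (18), coarse estimates
    po = p * np.exp(-1j * np.angle(p[1]))       # Eq. (19)
    k = 2 * np.pi / lam
    u_f0 = np.angle(po[2]) / (k * dyz)          # Eq. (20)
    v_f0 = np.angle(po[0]) / (k * dxy)          # Eq. (21)
    return (u_c, v_c, w_c), (u_f0, v_f0)
```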
Disambiguate the estimations of u and v and calculate the final estimates of θ and ϕ
As can be seen from Sections 3.1 and 3.2, for both u and v there are three sets of high-accuracy but ambiguous estimates and one set of unambiguous but low-accuracy estimates. The three sets of ambiguous estimates correspond to different levels of ambiguity, and a three-order disambiguation method is utilized here. We take v as the example to demonstrate the derivation; the process for u is similar. Recalling Fig. 2, the ambiguities of \(\hat {v}_{k}^{\text {fine,0}}\), \(\hat {v}_{k}^{\text {fine,1}}\), and \(\hat {v}_{k}^{\text {fine,2}}\) correspond to Δx,y, D1, and D2, respectively. Since D2>D1>Δx,y, the ambiguities are resolved in the order \(\hat {v}_{k}^{\text {fine,0}}\), \(\hat {v}_{k}^{\text {fine,1}}\), \(\hat {v}_{k}^{\text {fine,2}}\), step by step.
Disambiguate \(\hat {v}_{k}^{\text {fine,0}}\) with \(v_{k}^{\text {coarse}}\)
With \(v_{k}^{\text {coarse}}\) as the reference value, the ambiguity of \(\hat {v}_{k}^{\text {fine,0}}\) is resolved, and the result is
$$\begin{array}{@{}rcl@{}} v_{k}^{\text{fine, 0}} &=& \hat{v}_{k}^{\text{fine, 0}} + \hat{l}_{1} \frac{\lambda}{\Delta_{x,y}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{l}_{1} &=& \arg\!\min_{l_{1}}\left|v_{k}^{\text{coarse}} \,-\, \hat{v}_{k}^{\text{fine,0}} \,-\, l_{1}\frac{\lambda}{\Delta_{x,y}}\right|, \end{array} $$
where \(\left \lceil \left (-1-\hat {v}_{k}^{\text {fine,0}}\right)\frac {\Delta _{x,y}}{\lambda } \right \rceil \le l_{1} \le \left \lfloor \left (1-\hat {v}_{k}^{\text {fine,0}}\right) \frac {\Delta _{x,y}}{\lambda }\right \rfloor \), with ⌈ε⌉ denoting the smallest integer not less than ε and ⌊ε⌋ referring to the largest integer not more than ε [35].
Disambiguate \(\hat {v}_{k}^{\text {fine,1}}\) with \(v_{k}^{\text {fine, 0}}\)
With \(v_{k}^{\text {fine, 0}}\) as the reference value, the ambiguity of \(\hat {v}_{k}^{\text {fine,1}}\) is resolved, and the result is
$$\begin{array}{@{}rcl@{}} v_{k}^{\text{fine, 1}} &=& \hat{v}_{k}^{\text{fine, 1}} + \hat{l}_{2} \frac{\lambda}{D_{1}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{l}_{2} &=& \arg\!\min_{l_{2}}\left|v_{k}^{\text{fine, 0}} - \hat{v}_{k}^{\text{fine,1}} - l_{2}\frac{\lambda}{D_{1}}\right|, \end{array} $$
where \(\left \lceil \left (-1-\hat {v}_{k}^{\text {fine,1}}\right)\frac {D_{1}}{\lambda } \right \rceil \le l_{2} \le \left \lfloor \left (1-\hat {v}_{k}^{\text {fine,1}}\right) \frac {D_{1}}{\lambda }\right \rfloor \).
Finally, we can disambiguate \(\hat {v}_{k}^{\text {fine,2}}\) with the \(v_{k}^{\text {fine, 1}}\) derived above to obtain the final high-accuracy and unambiguous estimate \(v_{k}^{\text {final}}\):
$$\begin{array}{@{}rcl@{}} v_{k}^{\text{final}} &=& \hat{v}_{k}^{\text{fine, 2}} + \hat{l}_{3} \frac{\lambda}{D_{2}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat{l}_{3} &=& \arg\!\min_{l_{3}}\left|v_{k}^{\text{fine, 1}} - \hat{v}_{k}^{\text{fine,2}} - l_{3}\frac{\lambda}{D_{2}}\right|, \end{array} $$
where \(\left \lceil \left (-1-\hat {v}_{k}^{\text {fine,2}}\right)\frac {D_{2}}{\lambda } \right \rceil \le l_{3} \le \left \lfloor \left (1-\hat {v}_{k}^{\text {fine,2}}\right) \frac {D_{2}}{\lambda }\right \rfloor \).
Following the same three steps, we can get the final high-accuracy and unambiguous estimate \(u_{k}^{\text {final}}\) by replacing {Δx,y,D1,D2} with {Δy,z,D3,D4}, respectively. With the unambiguous and high-accuracy estimates of {u,v}, the high-accuracy DOA estimate of the kth source follows from Eq. (3):
$$ \left\{\begin{array}{ccl} \hat{\theta}_{k} &=& \arcsin\left(\sqrt{\left(u_{k}^{\text{final}}\right)^{2} + \left(v_{k}^{\text{final}}\right)^{2}}\right),\\ \hat{\phi}_{k} &=& \arctan\left(\frac{v_{k}^{\text{final}}}{u_{k}^{\text{final}}}\right). \end{array}\right. $$
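The three-order disambiguation in Eqs. (22)–(27) amounts to repeatedly snapping an ambiguous estimate to a coarser reference; a minimal sketch (ours; numpy assumed):

```python
import numpy as np

def disambiguate(ref, amb, spacing, lam=1.0):
    """One stage of Eqs. (22)-(27): shift the ambiguous estimate `amb` by an
    integer multiple of lam/spacing so that it is closest to `ref`."""
    step = lam / spacing
    lo = int(np.ceil((-1.0 - amb) / step))      # smallest admissible l
    hi = int(np.floor((1.0 - amb) / step))      # largest admissible l
    ls = np.arange(lo, hi + 1)
    l_hat = ls[np.argmin(np.abs(ref - amb - ls * step))]
    return amb + l_hat * step

# Three-order chain for v (u is analogous with dyz, D3, D4):
# v_f0  = disambiguate(v_coarse, v_hat_f0, dxy)
# v_f1  = disambiguate(v_f0,     v_hat_f1, D1)
# v_fin = disambiguate(v_f1,     v_hat_f2, D2)
# theta = np.arcsin(np.hypot(u_fin, v_fin)); phi = np.arctan2(v_fin, u_fin)  # Eq. (28)
```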
Analysis of the three inter-sensor spacings
A larger inter-sensor spacing brings a larger aperture and hence higher direction estimation accuracy; at the same time, it makes the disambiguation more difficult. There is a threshold in the disambiguation process [36]: when the inter-sensor spacing is larger than the threshold, the probability of successful disambiguation breaks down. Therefore, we analyze the threshold of the inter-sensor spacing by analyzing the success probability of the disambiguation process. Let us take v as an example to demonstrate the derivation again. According to the proposed array configuration shown in Fig. 2, there are three scales for v, i.e., {Δx,y,D1,D2}; thus, we utilize the three-order disambiguation process of Section 3.3 to obtain vfinal. Take Δx,y as an example: recalling Eq. (23), the disambiguation process can be successful only if
$$ \left|v_{k}^{\text{ref}} - v_{k}^{\text{coarse}}\right| < \frac{\lambda}{2\Delta_{x,y}}. $$
The value of \(\left |v_{k}^{\text {ref}} - v_{k}^{\text {coarse}}\right |\) is the estimation error of \(v_{k}^{\text {coarse}}\). We hereby assume that the angle estimation error follows a Gaussian distribution [37]. According to the distribution function of the normal distribution [38], the probability of the sample error falling within 3σ is about 99.7%, where σ is the standard deviation of the samples. Thus, when the root mean square error (RMSE) of \(v_{k}^{\text {coarse}}\) satisfies
$$ \sigma_{v_{k}^{\text{coarse}}} \le \frac{\lambda}{6\Delta_{x,y}}, $$
we consider the disambiguation process successful. Therefore, we can calculate the threshold of Δx,y by
$$ \Delta_{x,y}^{threshold}=\frac{\lambda}{6\sigma_{v_{k}^{\text{coarse}}}}. $$
We can obtain the thresholds of D1 and D2 using a similar method. Furthermore, in practical applications we can only obtain the Cramér-Rao bound (CRB) of each parameter rather than the RMSE. Thus, we substitute the RMSEs of \(v_{k}^{\text {fine,0}}\) and \(v_{k}^{\text {fine,1}}\) with their CRBs to calculate the thresholds of D1 and D2. However, because the CRB is much smaller than the RMSE, the calculated thresholds of D1 and D2 will be far larger than the actual values. This property will be verified in Section 4. Similarly to v, we can obtain the corresponding thresholds of {Δy,z,D3,D4} for u. Since the RMSE is related to the signal-to-noise ratio (SNR), the snapshot number, and the source direction, we analyze the influence of these factors in Section 4. The derivation of the CRB for the new array is similar to that in [21], and we use the corresponding equations therein to derive the CRB in the following simulations.
Simulation results and discussion
In this section, we conduct simulations to verify the effectiveness and performance of the proposed array geometry and algorithm. For simplicity, we set θ∈[0,π/2], ϕ∈[0,π/2). The coordinate of the hy of the SS-EMVS is (xh,yh,zh)=(7.5λ,7.5λ,5λ). The RMSE of parameter estimation is defined as
$$ \text{RMSE}=\sqrt{\frac{1}{M}\sum\limits_{m=1}^{M}{(\hat{\alpha}_{m}-\alpha)^{2}}}, $$
where \(\hat {\alpha }_{m}\) is the estimate from the mth trial of parameter α, and M is the number of Monte Carlo trials. We assume that the number of sources is known a priori in the following simulations.
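A small sketch of Eq. (32) and of the spacing-threshold rule of Eq. (31) (ours; numpy assumed):

```python
import numpy as np

def rmse(estimates, alpha):
    """Eq. (32): root mean square error over M Monte Carlo trials."""
    estimates = np.asarray(estimates)
    return np.sqrt(np.mean((estimates - alpha) ** 2))

def spacing_threshold(sigma_ref, lam=1.0):
    """Eq. (31): largest inter-sensor spacing that a reference estimate with
    standard deviation sigma_ref can still disambiguate (3-sigma rule)."""
    return lam / (6.0 * sigma_ref)

# Example: a reference estimate with sigma = 0.023 supports spacings up to
# spacing_threshold(0.023) ~ 7.2 wavelengths (cf. the 7.15*lambda below).
```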
Parameter estimation results
In the first example, we consider N1=12 ex-oriented dipoles placed along the y-axis and N2=12 ez-oriented dipoles placed along the x-axis. The first six ex's compose sub-array 1 with inter-sensor spacing D1=35λ; the rest of the ex's constitute sub-array 2 with inter-sensor spacing D2=7D1=245λ. Likewise, the first six ez's compose sub-array 3 with inter-sensor spacing D3=35λ, and the rest of the ez's constitute sub-array 4 with inter-sensor spacing D4=7D3=245λ. For the triangular SS-EMVS, Δx,y=Δy,z=5λ. K=2 pure-tone incident sources with unit power impinge on the array, with numerical frequencies f=(0.537,0.233), elevations θ=(42∘,35∘), azimuths ϕ=(55∘,52∘), auxiliary polarization angles γ=(36∘,60∘), and polarization phase differences η=(80∘,70∘). The number of snapshots is L=200 and SNR = 10 dB. The noise is a complex Gaussian white noise vector with zero mean and covariance matrix σ2I. Figure 3 shows the estimation results of the proposed algorithm over 200 Monte Carlo trials. We can see that the spatial parameters of all targets are correctly paired and estimated.
The estimation results of the DOA of two incident sources
Parameter estimation performance
To further examine the performance of the proposed array, we conduct various simulations with different parameters of the array and sources.
Performance versus SNR
In the first example, we consider the parameter estimation performance versus SNR. Figure 4 a shows the RMSEs of all estimates of u, i.e., ufine,0, ufine,1, and ufinal of the proposed array versus SNR, compared with ucoarse and the CRB. Figure 4 b shows the RMSEs of all estimates of v, i.e., vfine,0, vfine,1, and vfinal of the proposed array versus SNR, compared with vcoarse and the CRB. It can be observed that both ufinal and vfinal improve significantly over their coarse estimates, ucoarse and vcoarse, respectively, and both approach their CRBs. Moreover, the disambiguation described in Section 3.3 is similar to that of dual-size ESPRIT [4]. There exists an SNR threshold in the disambiguation process [39]: the parameter estimation performance degrades significantly if the SNR is lower than the threshold, while above the threshold the performance improves dramatically and keeps improving as the SNR increases. From Fig. 4, we can see that the SNR thresholds of u and v are 7 dB and 6 dB, respectively.
The RMSE of (a) ucoarse, ufine,0, ufine,1, and ufinal compared with CRB, and (b) vcoarse, vfine,0, vfine,1, and vfinal compared with CRB using the proposed array
In addition, we compare the proposed array with the array configuration in [32], which has the same number of scalar sensors, and the array configuration in [21], which has the same number of SS-EMVSs on the y-axis. Figure 5 a and b show the RMSEs of the u and v estimates versus SNR for all three arrays, respectively. Compared with the 2D nested scalar array in [32], the proposed array has a much larger aperture extension and lower mutual coupling; compared with the linear multi-scale SS-EMVS array in [21], the proposed array has a much larger aperture extension along the x-axis. We can observe from Fig. 5 a that the u-estimation performance of the proposed array is better than those of the other two arrays when the SNR is above the threshold, because the aperture of the proposed array along the x-axis is much larger than those of the other two arrays. From Fig. 5 b, we observe that the v-estimation performance of the proposed array is slightly worse than that of the array configuration in [21]; however, the SNR threshold of the proposed array is far smaller (7 dB).
RMSE of u and v estimations of all three arrays versus SNR. a RMSE of u estimations. b RMSE of v estimations
Moreover, we consider another configuration of the proposed array in which the arms are extended along the y-axis and z-axis, respectively, with the triangular SS-EMVS likewise placed along the y-axis and z-axis.
The DOA estimation process of this array is similar to that of the proposed array, except that the corresponding direction cosines change from u and v to v and w. Using the same simulation conditions as in Section 4.1, we compare the parameter estimation performance of this configuration with that of the proposed array. The results are given in Fig. 6. We observe that the RMSEs of u for the two configurations are similar, and the same holds for v of the proposed array versus w of the alternative configuration. Still, the accuracy of the proposed array is marginally better than that of the other configuration when the SNR is large enough, i.e., above 8 dB.
RMSE of parameter estimations of the two array configurations versus SNR
As the arrival-angle estimate is determined jointly by u and v, in Fig. 7 we show the RMSEs of the estimated θ and ϕ of all array configurations versus SNR, together with the CRB of the proposed array. It can be seen that the SNR thresholds of θ and ϕ of the proposed array are both 7 dB, i.e., the larger of the thresholds of u and v. Moreover, the performance of the proposed array is the best among all array configurations. Therefore, the proposed array offers a good trade-off among mutual coupling, estimation accuracy, and robustness to noise (lower SNR threshold).
RMSE of θ and ϕ estimations versus SNR. a RMSE of θ estimations. b RMSE of ϕ estimations
Performance versus snapshot number
In the next example, we consider the performance of DOA estimation versus the snapshot number L. Figure 8 a and b show the RMSEs of the θ and ϕ estimates of all array configurations versus L at SNR =10 dB, respectively. We can see that the parameter estimation performance of the proposed array improves as the number of snapshots increases, and is once again the best among all array configurations.
RMSE of θ and ϕ estimations versus L. a RMSE of θ estimations. b RMSE of ϕ estimations
Performance versus inter-sensor spacing
In the third example, we consider the performance of parameter estimation versus the inter-sensor spacings. We take one target as an example and set Δx,y=Δy,z in the SS-EMVS, with SNR =10 dB. The elevation of the target is θ=35∘, the azimuth ϕ=52∘, the auxiliary polarization angle γ=36∘, and the polarization phase difference η=80∘. As mentioned in Section 3.4, there is a threshold of inter-sensor spacing. Figure 9 shows the RMSEs of ufine,0 and vfine,0 of the proposed array versus Δx,y. Recalling Eq. (31), the calculated threshold of Δy,z at SNR=10 dB is \(\Delta _{y,z}^{t}=7.15\lambda \), while from Fig. 9 the observed threshold of Δy,z is approximately 6λ. Thus, according to the obtained threshold and practical considerations, we set Δy,z=5λ. The same method can be applied to Δx,y: the calculated threshold is \(\Delta _{x,y}^{t}=8.04\lambda \), and the observed threshold from Fig. 9 is approximately 8λ. Therefore, the method derived in Section 3.4 for calculating the thresholds of the different inter-sensor spacings is effective. As with Δy,z, we set Δx,y=5λ.
Threshold of Δx,y and Δy,z
In the second simulation, we set D1=D3, and the RMSEs of ufine,1 and vfine,1 of the proposed array versus D1 are shown in Fig. 10. Similarly to Δy,z, we can calculate the thresholds of D1 and D3, with one small difference: as mentioned in Section 3.4, we utilize the CRBs of ufine,0 and vfine,0 instead of the RMSEs to calculate the thresholds of D1 and D3. The calculated values are \(D_{1}^{t}=161.5\lambda \) and \(D_{3}^{t}=197.5\lambda \).
From Fig. 10, the observed threshold values are \(D_{1}^{t}=76\lambda \) and \(D_{3}^{t}=72\lambda \). Since the CRB is much smaller than the RMSE, D1 and D3 should be set much smaller than the calculated threshold values; we therefore set D1=D3=35λ.
Threshold of D1 and D3
In the third simulation, we set D2=D4 and plot the RMSEs of ufinal and vfinal of the proposed array versus D2 in Fig. 11. Similarly to D1, we can calculate the thresholds of D2 and D4 by the CRB, obtaining \(D_{2}^{t}=8047.6\lambda \) and \(D_{4}^{t}=7857.9\lambda.\) From Fig. 11, the observed threshold values are \(\left (D_{2}^{t}=2500\lambda, D_{4}^{t}=2300\lambda \right)\). Again, D2 should be set much smaller than the calculated threshold values; considering practical applications, we set D2=D4=245λ.
Threshold of inter-sensor spacing versus SNR
We investigate the threshold of the inter-sensor spacing versus SNR. Taking one target as an example, we set the elevation of the target to θ=35∘, the azimuth to ϕ=52∘, the auxiliary polarization angle to γ=36∘, and the polarization phase difference to η=80∘. The other simulation conditions remain the same as in Section 4.1. Figure 12 shows the thresholds of Δy,z and Δx,y versus SNR. It is seen that the thresholds of Δy,z and Δx,y both increase as the SNR increases. The thresholds of {D1,D3} and {D2,D4} are shown in Figs. 13 and 14, respectively; the results are similar to those in Fig. 12.
Threshold of Δy,z and Δx,y versus SNR
Threshold of D3 and D1 versus SNR
Threshold of SNR versus arriving angle
Lastly, we consider the threshold of SNR in the disambiguation process versus the signal arriving angle. Taking one target as the example, we set the auxiliary polarization angle of the target to γ=36∘ and the polarization phase difference to η=80∘. When analyzing one angle, the other angle is fixed at 45∘. The other simulation conditions remain the same as in Section 4.1. Figure 15 shows the threshold of SNR of u and v versus θ and ϕ. We can see that the threshold of SNR is approximately symmetric about 90∘ for θ and about 0∘ for ϕ. As we set θ∈[0,π/2] and ϕ∈[0,π/2), the threshold of SNR is in a lower range when the target is located in θ∈[20∘,70∘] and ϕ∈[20∘,70∘].
Threshold of SNR of u and v versus angle. a Threshold of SNR versus θ. b Threshold of SNR versus ϕ
In this paper, a new array configuration composed of multiple sparse scalar arrays and a single triangular spatially-spread electromagnetic-vector-sensor has been proposed, which enjoys the advantages of both the spatially spread electromagnetic-vector-sensor and the sparse array. The new array can provide four direction-cosine estimates with gradually improved accuracy along each of the x-axis and y-axis. Based on this, we developed a direction-of-arrival estimation algorithm that utilizes a three-order disambiguation approach. We have analyzed the thresholds of the inter-sensor spacings in the four uniform scalar sub-arrays and conducted extensive simulations to validate them. We compared the direction-cosine estimation performance of our array with that of the 2D nested scalar array and the linear multi-scale SS-EMVS array. These results demonstrate that the proposed array geometry achieves a favorable trade-off among estimation accuracy, mutual coupling, and robustness to noise. Moreover, since only a single SS-EMVS is used together with scalar sensors, the proposed array achieves good performance with small redundancy, fewer elements, and low cost.
2D: Two-dimensional
DOA: Direction-of-arrival
DOF: Degree-of-freedom
CRB: Cramér-Rao bound
EMVS: Electromagnetic-vector-sensor
ESPRIT: Estimation of signal parameters via rotational invariance techniques
RMSE: Root mean square error
SS-EMVS: Spatially spread electromagnetic-vector-sensor
ULAs: Uniform linear arrays
H. L. Van Trees, Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory (John Wiley & Sons, New York, 2004).
X. Yuan, Direction-finding wideband linear FM sources with triangular arrays. IEEE Trans. Aerosp. Electron. Syst. 48(3), 2416–2425 (2012).
K. T. Wong, M. D. Zoltowski, in Proceedings of the 39th Midwest Symposium on Circuits and Systems. Sparse array aperture extension with dual-size spatial invariances for ESPRIT-based direction finding (IEEE, IA, 1996), pp. 691–694, vol. 2.
V. I. Vasylyshyn, in European Radar Conference, 2005. Unitary ESPRIT-based DOA estimation using sparse linear dual size spatial invariance array (IEEE, Paris, 2005), pp. 157–160.
A. L. Swindlehurst, B. Ottersten, R. Roy, T. Kailath, Multiple invariance ESPRIT. IEEE Trans. Signal Process. 40(4), 867–881 (1992).
A. Moffet, Minimum-redundancy linear arrays. IEEE Trans. Antennas Propag. 16(2), 172–175 (1968).
P. Pal, P. P. Vaidyanathan, Nested arrays: a novel approach to array processing with enhanced degrees of freedom. IEEE Trans. Signal Process. 58(8), 4167–4181 (2010).
P. P. Vaidyanathan, P. Pal, Sparse sensing with co-prime samplers and arrays. IEEE Trans. Signal Process. 59(2), 573–586 (2011).
A. Nehorai, P. Tichavsky, Cross-product algorithms for source tracking using an EM vector sensor. IEEE Trans. Signal Process. 47(10), 2863–2867 (1999).
X. Yuan, Estimating the DOA and the polarization of a polynomial-phase signal using a single polarized vector-sensor. IEEE Trans. Signal Process. 60(3), 1270–1282 (2012).
X. Yuan, Diversely polarized antenna-array signal processing. PhD thesis (The Hong Kong Polytechnic University, 2012).
X. Yuan, in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Polynomial-phase signal source tracking using an electromagnetic vector-sensor (Kyoto, 2012), pp. 2577–2580.
X. Yuan, K. T. Wong, Z. Xu, K. Agrawal, Various compositions to form a triad of collocated dipoles/loops, for direction finding and polarization estimation. IEEE Sensors J. 12(6), 1763–1771 (2012).
X. Yuan, Quad compositions of collocated dipoles and loops: for direction finding and polarization estimation. IEEE Antennas Wirel. Propag. Lett. 11, 1044–1047 (2012).
X. Yuan, K. T. Wong, K. Agrawal, Polarization estimation with a dipole-dipole pair, a dipole-loop pair, or a loop-loop pair of various orientations. IEEE Trans. Antennas Propag. 60(5), 2442–2452 (2012).
Z. Xu, X. Yuan, Cramer-Rao bounds of angle-of-arrival and polarisation estimation for various triads. IET Microw. Antennas Propag. 6(15), 1651–1664 (2012).
K. T. Wong, M. D. Zoltowski, Uni-vector-sensor ESPRIT for multisource azimuth, elevation, and polarization estimation. IEEE Trans. Antennas Propag. 45(10), 1467–1474 (1997).
K. T. Wong, X. Yuan, 'Vector cross-product direction-finding' with an electromagnetic vector-sensor of six orthogonally oriented but spatially noncollocating dipoles/loops. IEEE Trans. Signal Process. 59(1), 160–171 (2011).
X. Yuan, in 2011 IEEE Statistical Signal Processing Workshop (SSP). Cramer-Rao bound of the direction-of-arrival estimation using a spatially spread electromagnetic vector-sensor (Nice, 2011), pp. 1–4.
X. Yuan, Spatially spread dipole/loop quads/quints: for direction finding and polarization estimation. IEEE Antennas Wirel. Propag. Lett. 12, 1081–1084 (2013).
M. Yang, J. Ding, B. Chen, X. Yuan, A multiscale sparse array of spatially spread electromagnetic-vector-sensors for direction finding and polarization estimation. IEEE Access 6, 9807–9818 (2018).
X. Yuan, Coherent sources direction finding and polarization estimation with various compositions of spatially spread polarized antenna arrays. Signal Process. 102, 265–281 (2014).
F. Luo, X. Yuan, Enhanced 'vector-cross-product' direction-finding using a constrained sparse triangular-array. EURASIP J. Adv. Signal Process. 2012(1) (2012). https://doi.org/10.1186/1687-6180-2012-115.
Y. Li, J. Q. Zhang, An enumerative nonlinear programming approach to direction finding with a general spatially spread electromagnetic vector sensor array. Signal Process. 93(4), 856–865 (2013).
M. Ji, X. Gong, Q. Lin, in 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). A multi-set approach for direction finding based on spatially displaced electromagnetic vector-sensors (IEEE, Zhangjiajie, 2015), pp. 1824–1828.
K. Han, A. Nehorai, Nested vector-sensor array processing via tensor modeling. IEEE Trans. Signal Process. 62(10), 2542–2553 (2014).
J. He, Z. Zhang, T. Shu, W. Yu, Direction finding of multiple partially polarized signals with a nested cross-dipole array. IEEE Antennas Wirel. Propag. Lett. 16, 1679–1682 (2017).
S. Rao, S. P. Chepuri, G. Leus, in 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). DOA estimation using sparse vector sensor arrays (IEEE, Cancun, 2015), pp. 333–336.
X. Yuan, Coherent source direction-finding using a sparsely-distributed acoustic vector-sensor array. IEEE Trans. Aerosp. Electron. Syst. 48, 2710–2715 (2012).
J. Liang, D. Liu, Joint elevation and azimuth direction finding using L-shaped array. IEEE Trans. Antennas Propag. 58(6), 2136–2141 (2010).
C. Niu, Y. Zhang, J. Guo, Interlaced double-precision 2-D angle estimation algorithm using L-shaped nested arrays. IEEE Signal Process. Lett. 23(4), 522–526 (2016).
R. Roy, T. Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989).
J. Li, M. Shen, D. Jiang, in 2016 IEEE 5th Asia-Pacific Conference on Antennas and Propagation (APCAP). DOA estimation based on combined ESPRIT for co-prime array (IEEE, Kaohsiung, 2016), pp. 117–118.
V. I. Vasylyshyn, in First European Radar Conference, 2004. EURAD. Closed-form DOA estimation with multiscale unitary ESPRIT algorithm (IEEE, Amsterdam, 2004), pp. 317–320.
F. Athley, Threshold region performance of maximum likelihood direction of arrival estimators. IEEE Trans. Signal Process. 53(4), 1359–1373 (2005).
B. Ottersten, M. Viberg, T. Kailath, Performance analysis of the total least squares ESPRIT algorithm. IEEE Trans. Signal Process. 39(5), 1122–1135 (1991).
D. C. Montgomery, G. C. Runger, Applied Statistics and Probability for Engineers (John Wiley & Sons, New York, 2010).
C. D. Richmond, Mean-squared error and threshold SNR prediction of maximum-likelihood signal parameter estimation with estimated colored noise covariances. IEEE Trans. Inf. Theory 52(5), 2146–2164 (2006).
This work is supported in part by the National Natural Science Foundation of China under Grant 61571344, in part by the Foundation of Shanghai Academy of Spaceflight Technology under Grant SAST2016093, and in part by the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) under Grant B18039.
National Laboratory of Radar Signal Processing, Xidian University, Xi'an, China
Jin Ding, Minglei Yang & Baixiao Chen
Nokia Bell Labs, Murray Hill, USA
Xin Yuan
JD, MLY, BXC, and XY conceived and designed the study. JD and MLY performed the experiments. JD and MLY wrote the paper. MLY, BXC, and XY reviewed and edited the manuscript. All authors read and approved the manuscript.
Correspondence to Minglei Yang.
Ding, J., Yang, M., Chen, B. et al. A single triangular SS-EMVS aided high-accuracy DOA estimation using a multi-scale L-shaped sparse array. EURASIP J. Adv. Signal Process. 2019, 44 (2019) doi:10.1186/s13634-019-0642-4
Accepted: 26 August 2019
Multiple scale arrays
Direction-of-arrival (DOA) estimation
Sparse array
\begin{document} \begin{center} {\bf On the distribution of extrema for a class of L\'evy processes\footnote{\today}}\\ {\sc Amir T. Payandeh Najafabadi$^{a,}$\footnote{Corresponding author: [email protected]; Phone No. +98-21-29903011; Fax No. +98-21-22431649} \& Dan Kucerovsky$^{b}$}\\ a Department of Mathematical Sciences, Shahid Beheshti University, G.C. Evin, 1983963113, Tehran, Iran.\\ b Department of Mathematics and Statistics, University of New Brunswick, Fredericton, N.B. CANADA E3B 5A3. \end{center} \begin{center} {\sc Abstract} \end{center} Suppose $X_t$ is either a \emph{regular exponential type} L\'evy process or a L\'evy process with a \emph{bounded variation jumps measure}. The distributions of the extrema of $X_t$ play a crucial role in many financial and actuarial problems. This article employs the well known and powerful Riemann-Hilbert technique to derive the characteristic functions of the extrema for such L\'evy processes. An approximation technique along with several examples is given.\\ \textbf{\emph{Keywords:}} Principal value integral; H\"older condition; Pad\'e approximant; continued fraction; Fourier transform; Hilbert transform.\\ 2010 Mathematics Subject Classification: 30E25, 11A55, 42A38, 60G51, 60J50, 60E10. \normalsize \section{Introduction} Suppose $X_t$ is a one-dimensional, real-valued, c\`adl\`ag (right continuous with left limits), adapted L\'evy process starting at zero. Suppose also that the corresponding jumps measure, $\nu,$ is defined on ${\Bbb R}\setminus\{0\}$ and satisfies $\int_{\Bbb R}\min\{1,x^2\}\nu(dx)<\infty.$ Moreover, suppose the stopping time $\tau(q)$ has either a geometric or an exponential distribution with parameter $q,$ is independent of the L\'evy process $X_t,$ and satisfies $\tau(0)=\infty.$ The extrema of the L\'evy process $X_t$ are defined to be \begin{eqnarray} \label{definition-extrema} \nonumber M_q &=& \sup\{X_s:~s\leq\tau(q)\};\\ I_q &=& \inf\{X_s:~s\leq\tau(q)\}. \end{eqnarray} The Wiener-Hopf factorization method is a technique that can be used to study the characteristic functions of $M_q$ and $I_q.$ The Wiener-Hopf method has been used to show that: \begin{description} \item[(i)] The random variables $M_q$ and $I_q$ are independent (Kuznetsov; 2009b and Kyprianou; 2006 Theorem 6.16); \item[(ii)] The product of their characteristic functions is equal to the characteristic function of the L\'evy process $X_t$ (Bertoin; 1996 page 165); and \item[(iii)] The random variable $M_q$ ($I_q$) is infinitely divisible, positive (negative), and has zero drift (Bertoin; 1996, page 165). \end{description} Supposing that the characteristic function of a L\'evy process $X_t$ can be decomposed as a product of two functions, one of which is the boundary value of a function that is analytic and bounded in the complex upper half-plane (i.e., ${\Bbb C}^+=\{\lambda:~\lambda\in{\Bbb C}~\hbox{and}~Im(\lambda)\geq0\}$) and the other of which is the boundary value of a function that is analytic and bounded in the complex lower half-plane (i.e., ${\Bbb C}^-=\{\lambda:~\lambda\in{\Bbb C}~\hbox{and}~Im(\lambda)\leq0\}$), we then have that the characteristic functions of $M_q$ and $I_q$ can be determined explicitly. The required decomposition can be obtained explicitly if, for example, the characteristic function of the L\'evy process is a rational function. Furthermore, there is a very general existence result for such decompositions, based on the theory of singular integrals (specifically Sokhotskyi-Plemelj integrals).
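As a simple toy illustration of the rational case, take
\begin{eqnarray*}
g(\omega) &=& \frac{b^{2}(\omega^{2}+a^{2})}{a^{2}(\omega^{2}+b^{2})},~~0<b<a,
\end{eqnarray*}
which is non-vanishing on ${\Bbb R},$ has zero index, satisfies $g(0)=1,$ and is bounded above by 1. Writing $\omega^{2}+a^{2}=(a-i\omega)(a+i\omega)$ and $\omega^{2}+b^{2}=(b-i\omega)(b+i\omega)$ gives, by inspection,
\begin{eqnarray*}
\Phi^{+}(\omega)=\frac{b(a-i\omega)}{a(b-i\omega)},~~~~\Phi^{-}(\omega)=\frac{b(a+i\omega)}{a(b+i\omega)},
\end{eqnarray*}
so that $\Phi^{+}\Phi^{-}=g$ on ${\Bbb R},$ where $\Phi^{+}$ is analytic and non-vanishing in ${\Bbb C}^{+}$ (its zero and pole, $-ia$ and $-ib,$ lie in ${\Bbb C}^{-}$), and symmetrically $\Phi^{-}$ is analytic and non-vanishing in ${\Bbb C}^{-}.$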
Lewis \& Mordecki (2005) considered a L\'evy process $X_t$ which has negative jumps distributed according to a mixed-gamma family of distributions and has an arbitrary positive jumps measure. They established that such a process has a characteristic function which can be decomposed as a product of a rational function and a more or less arbitrary function, and that these functions are analytic in ${\Bbb C}^+$ and ${\Bbb C}^-,$ respectively. Later, they provided an analogous result for a L\'evy process whose positive jumps measure is given by a mixed-gamma family of distributions and whose negative jumps measure has an arbitrary distribution; more detail can be found in Lewis \& Mordecki (2008). Unfortunately, in the majority of situations, the characteristic function of the process is not a rational function, nor can it be explicitly decomposed as a product of two functions analytic in ${\Bbb C}^+$ and ${\Bbb C}^-.$ Of course, there is a general theory allowing the characteristic functions of $M_q$ and $I_q$ to be expressed in terms of a Sokhotskyi-Plemelj integral (see Equation \ref{Plemelj-integral}). This provides an existence result, but presents some difficulties in numerical work due to slow evaluation and numerical problems caused by singularities in the complex plane that are near the contour used in the integral. To overcome these problems, approximation methods may be considered. Roughly speaking, the Wiener-Hopf factorization technique attempts to find a complex-valued function $\Phi$ that is analytic and bounded off the real line and has a prescribed jump discontinuity across the real line. The radial limits at the real line, denoted $\Phi^\pm,$ satisfy $\Phi^+(\omega)\Phi^-(\omega)=g(\omega),$ where $\omega\in{\Bbb R}$ and $g$ is a given function satisfying certain conditions ($g$ is a zero index function which satisfies the H\"older condition). The radial limits provide the desired decomposition of $g$ into a product of boundary value functions that was alluded to above. The Wiener-Hopf factorization technique can be extended to a more general setting and is then also known as the Riemann-Hilbert method. The Riemann-Hilbert method is theoretically well developed, and it is often more convenient to work with than the Wiener-Hopf technique; see Kucerovsky \& Payandeh (2009) for more detail. The Riemann-Hilbert problem has proved remarkably useful in solving an enormous variety of model problems in a wide range of branches of physics, mathematics, and engineering. Kucerovsky et al. (2009) employed the Riemann-Hilbert problem to solve a statistical decision problem. More precisely, using the Riemann-Hilbert problem, they established that the maximum likelihood estimator under the absolute-error loss function is a generalized Bayes estimator for a wide class of location families of distributions. This article considers the problem of finding the distributions of the extrema of a L\'evy process $X_t$ such that ({\bf i}) \emph{either} its jumps measure is a finite variation measure \emph{or} $X_t$ is a regular exponential type L\'evy process; and ({\bf ii}) its stopping time $\tau(q)$ has either a geometric or an exponential distribution with parameter $q,$ independent of the L\'evy process $X_t,$ with $\tau(0)=\infty.$ It then develops a procedure, in terms of the well known and powerful Riemann-Hilbert technique, to solve the problem of finding the characteristic functions of $M_q$ and $I_q$.
A remark is also made that is helpful in situations where such characteristic functions cannot be found explicitly. Section 2 collects some essential elements which are required in the other sections. Section 3 states the problem of finding the characteristic functions of the distributions of the extrema in terms of a Riemann-Hilbert problem, and then derives an expression for such characteristic functions in terms of the Sokhotskyi-Plemelj integral; several examples are given. \section{Preliminaries} We now collect some lemmas which are used later. The index of an analytic function $h$ on ${\Bbb R}$ is the number of zeros minus the number of poles of $h$ on ${\Bbb R};$ see Payandeh (2007, chapter 1) for more technical detail. Computing the index of a function is usually a \emph{key step} in determining the existence and number of solutions of a Riemann-Hilbert problem. We are primarily interested in the case of zero index. The \emph{Sokhotskyi-Plemelj integral} of a function $s$ which satisfies the H\"older condition is defined by a principal value integral, as follows: \begin{eqnarray} \label{Plemelj-integral}\phi_s(\lambda ):=\frac 1{2\pi i}\dashint_{ {\Bbb R}}\frac {s(x)}{x -\lambda}dx ,~~\hbox{for}~ \lambda\in {\Bbb C}.\end{eqnarray} The following are some well known properties of the Sokhotskyi-Plemelj integral; proofs can be found in Ablowitz \& Fokas (1990, chapter 7), Gakhov (1990, chapter 2), and Pandey (1996, chapter 4), among others. \begin{lemma} \label{Sokhotskyi-Plemelj-properties} The radial limits of the Sokhotskyi-Plemelj integral of $s,$ given by $\phi^{\pm}_s(\omega )=\displaystyle\lim_{\lambda\rightarrow \omega +i0^{\pm}}\phi_s(\lambda ),$ can be represented as: \begin{description} \item[i)] the jump formula, i.e., $\phi^{\pm}_s(\omega )=\pm s(\omega )/2+\phi_s(\omega ),$ where $\omega\in {\Bbb R};$ \item[ii)] $\phi^{\pm}_s(\omega )=\pm s(\omega )/2+H_s(\omega )/(2i),$ where $H_s(\omega)$ is the \emph{Hilbert transform} of $s$ and $\omega\in {\Bbb R}.$ \end{description} \end{lemma} The Riemann-Hilbert problem is the function-theoretical problem of finding a single function which is analytic separately in ${\Bbb C}^+$ and ${\Bbb C}^-$ (called sectionally analytic) and which has a prescribed jump discontinuity on the real line. The following states the homogeneous Riemann-Hilbert problem which one deals with in studying the characteristic functions of $M_q$ and $I_q$. \begin{definition} \label{Riemann-Hilbert-problem} The homogeneous Riemann-Hilbert problem, with zero index, is the problem of finding a sectionally analytic function $\Phi$ whose upper and lower radial limits at the real line, $\Phi^{\pm},$ satisfy \begin{equation} \label{equation-of-Riemann-Hilbert-problem} \Phi^{ +}(\omega)=g(\omega )\Phi^{-}(\omega),~~\hbox{for}~\omega \in {\Bbb R},\end{equation} where $g$ is a given continuous function satisfying a H\"older condition on ${\Bbb R}.$ Moreover, $g$ is assumed to have zero index, to be non-vanishing on ${\Bbb R},$ and to be bounded above by 1. \end{definition} A homogeneous Riemann-Hilbert problem always has a family of solutions if no restrictions on growth at infinity are imposed, but a unique solution can be obtained with further restrictions. Solutions vanishing at infinity are the most common restriction considered in mathematical physics and in engineering applications; see Payandeh (2007, chapter 1) for more detail.
With these restrictions, the solutions of the homogeneous Riemann-Hilbert problem are given by \begin{eqnarray*} \Phi^{\pm}(\lambda ) &=&\exp\{\pm\phi_{\ln (g)}(\lambda)\},~\hbox{for~}\lambda\in {\Bbb C}, \end{eqnarray*} where $\phi_{\ln (g)}$ stands for the Sokhotskyi-Plemelj integral, given by \ref{Plemelj-integral}, of $\ln (g).$ In this paper, we need to solve a homogeneous Riemann-Hilbert problem (also known as a Wiener-Hopf factorization problem) with \begin{eqnarray} \label{RH-For-this-paper} \Phi^+(\omega)\Phi^-(\omega) &=& g(\omega),~\omega\in{\Bbb R}, \end{eqnarray} where $g$ is a given, zero index function which satisfies the H\"older condition and $g(0)=1.$ For convenience in presentation, we will simply call the above homogeneous Riemann-Hilbert problem a Riemann-Hilbert problem. The following provides solutions for the above Riemann-Hilbert problem. We begin with what we term the Resolvent Equation for Sokhotskyi-Plemelj integrals. \begin{lemma} The {\it Sokhotskyi-Plemelj} integral of a function $f$ satisfies $$\phi_f (\lambda) - \phi_f (\mu) = (\lambda-\mu)\phi_{\frac{f(x)}{x-\lambda}}(\mu),$$ for $\lambda$ and $\mu$ real or complex. \label{lem:resolvent} \end{lemma} \begin{proof} In general, $$(x-\lambda)^{-1} - (x-\mu)^{-1} = (\lambda-\mu)(x-\mu)^{-1}(x-\lambda)^{-1}.$$ Then, see Dunford \& Schwartz (1988), we have an equation of Cauchy integrals, where $\Gamma=\Bbb R$: $$\frac{1}{2\pi i} \int_\Gamma \frac{f(x)}{x-\lambda}dx - \frac{1}{2\pi i} \int_\Gamma \frac{f(x)}{x-\mu}dx = \frac{\lambda-\mu}{2\pi i} \int_\Gamma \frac{f(x)}{(x-\mu)(x-\lambda)}dx .$$ \end{proof} The above is valid only for $\lambda$ and $\mu$ not on the real line. However, by Lemma \ref{Sokhotskyi-Plemelj-properties} the values of $\phi_f$ on the real line are obtained by averaging the limit from above, $\phi^+_f$, and the limit from below, $\phi^{-}_f.$ We thus obtain the stated equation in all cases. \begin{lemma} \label{Solution-RH-For-Our-Paper} Suppose $\Phi^{\pm}$ are sectionally analytic functions satisfying the Riemann-Hilbert problem given by \ref{RH-For-this-paper}. Moreover, suppose that $g$ is a zero index function that satisfies the H\"older condition and $g(0)=1.$ Then, \begin{eqnarray*} \Phi^\pm(\lambda)&=&\exp \{\pm\phi_{\ln g}(\lambda)\mp\phi_{\ln g}(0)\},~\lambda\in{\Bbb C}, \end{eqnarray*} where $\phi_{\ln g}$ stands for the Sokhotskyi-Plemelj integral of $\ln g.$ \end{lemma} \textbf{Proof.} Taking logarithms on both sides, the above equation can be rewritten as \begin{eqnarray*} \ln \Phi^+(\omega)-(-\ln \Phi^-(\omega)) &=& \ln g(\omega). \end{eqnarray*} Since $\ln g(0)=0,$ the above equation does not satisfy the non-vanishing condition of the standard Riemann-Hilbert problem. One may handle this by dividing both sides by $\omega$ (Gakhov (1990) suggested this kind of modification to extend the domain of the Riemann-Hilbert method). Now, we have \begin{eqnarray*} \frac{\ln \Phi^+(\omega)}{\omega}-\frac{(-\ln \Phi^-(\omega))}{\omega} &=& \frac{\ln g(\omega)}{\omega}. \end{eqnarray*} The above equation meets all conditions for the usual solution of the additive Riemann-Hilbert problem by Sokhotskyi-Plemelj integrals, and therefore the solutions of our Riemann-Hilbert problem (equation \ref{RH-For-this-paper}) are \begin{eqnarray*} \Phi^\pm(\lambda) &=& \exp\{\pm\frac{\lambda}{2\pi i}\dashint_{ {\Bbb R}}\frac {\ln g(x)/x}{x -\lambda}dx\},~\lambda\in{\Bbb C}.
\end{eqnarray*} Lemma \ref{lem:resolvent} with $f=\ln g$ gives $$\phi_{\ln g} (\lambda) - \phi_{\ln g} (\mu) = (\lambda-\mu)\phi_{\frac{\ln g(x)}{x-\lambda}}(\mu).$$ Letting $\lambda$ go to zero from above, in the complex plane, and using the fact that $\ln g(0) = 0$, Lemma \ref{Sokhotskyi-Plemelj-properties} lets us conclude that $$\phi_{\ln g} (0) - \phi_{\ln g} (\mu) = -\mu\phi_{\frac{\ln g(x)}{x}}(\mu).$$ Substituting this into the above equation for $\Phi^\pm$ gives our claimed result.~$\square$ The following explores some properties of the above lemma. \begin{remark} \label{solutions-of-RH-in-term-g} Using the jump formula, one can conclude that \begin{eqnarray*} \Phi^\pm(\omega)&=&\sqrt{g(\omega)}\exp \{\pm\frac{i}{2}(H_{\ln g}(0)-H_{\ln g}(\omega))\}, \end{eqnarray*} where $H_{\ln g}$ stands for the Hilbert transform of $\ln g.$ \end{remark} The following explores Carlemann's technique for obtaining solutions of the Riemann-Hilbert problem \ref{RH-For-this-paper} directly, rather than through the Sokhotskyi-Plemelj integrals. \begin{remark} (Carlemann's technique) \label{Carlemann-method} If $g$ can be decomposed as a product of two functions $g^+$ and $g^-$ that are analytic in ${\Bbb C}^+$ and ${\Bbb C}^-,$ respectively, then the solutions of the Riemann-Hilbert problem \ref{RH-For-this-paper} are $\Phi^+\equiv g^+$ and $\Phi^-\equiv g^-.$ \end{remark} Carlemann's method amounts to solution by inspection. The most favorable situation for Carlemann's method is the case where $g$ is a rational function. In the case that approximate solutions are required, Kucerovsky \& Payandeh (2009) suggested approximating $g$ with a rational function obtained from a Pad\'e approximant or a continued fraction expansion. The Paley-Wiener theorem is one of the key elements for restating the problem of finding the characteristic functions of the extrema of a L\'evy process as a Riemann-Hilbert problem, as in equation (\ref{RH-For-this-paper}). The theorem is stated below; a proof may be found in Dym \& Mckean (1972, page 158). \begin{theorem} (Paley-Wiener) \label{Paley.Wiener} Suppose $s$ is a function in $L^2({\Bbb R});$ then the following are equivalent: \begin{description} \item[i)] The real-valued function $s$ vanishes on the left half-line. \item[ii)] The Fourier transform of $s$, say $\hat {s}$, is holomorphic on $ {\Bbb C}^{+}$ and the $L^2({\Bbb R})$-norms of the functions $x\mapsto\hat {s}(x+iy_0)$ are uniformly bounded for all $y_0\geq 0.$ \end{description} \end{theorem} \begin{definition} (Mixed gamma family of distributions) \label{mixed gamma} A nonnegative random variable $X$ is said to be distributed according to a mixed gamma distribution if its density function is given by \begin{eqnarray} \label{mixed gamma-density} p(x) &=& \sum_{k=1}^{\nu}\sum_{j=1}^{n_\nu}c_{kj}\frac{\alpha_k^jx^{j-1}}{(j-1)!}e^{-\alpha_k x},~x\geq0, \end{eqnarray} where $c_{kj}$ and $\alpha_k$ are positive values with $\sum_{k=1}^{\nu}\sum_{j=1}^{n_\nu}c_{kj}=1.$ \end{definition} The following explores some properties of the characteristic function of the above; a proof can be found in Bracewell (2000, page 433), and Lewis \& Mordecki (2005), among others.
\begin{lemma} \label{properties-characteristic-function} The characteristic function of a distribution (or equivalently the Fourier transform of its density function), say ${\hat p},$ has the following properties: \begin{description} \item[i)] ${\hat p}$ is a rational function if and only if the density function belongs to the class of mixed gamma family of distributions, given by \ref{mixed gamma-density}; \item[ii)] ${\hat p}(\omega)$ is a Hermitian function, i.e., the real part of ${\hat p}$ is an even function and the imaginary part an odd function; \item[iii)] ${\hat p}(0)=1,$ and the norm of ${\hat p}(\omega)$ is bounded by 1. \end{description} \end{lemma} \section{Main results} Suppose that $X_t$ is a one-dimensional real-valued L\'evy process starting at $X_0=0$ and defined by a triple $(\mu,\sigma,\nu):$ the drift $\mu\in{\Bbb R},$ the volatility $\sigma\geq0,$ and the jumps measure $\nu,$ given by a nonnegative function defined on ${\Bbb R}\setminus\{0\}$ satisfying $\int_{\Bbb R}\min\{1,x^2\}\nu(dx)<\infty.$ The L\'evy-Khintchine representation states that the characteristic exponent $\psi$ (i.e., $\psi(\omega)=\ln (E(\exp(i\omega X_1))),~\omega\in{\Bbb R}$) can be represented by \begin{eqnarray} \label{Levy-Khintchine} \psi(\omega) &=& i\mu\omega-\frac{1}{2}\sigma^2\omega^2+\int_{{\Bbb R}}(e^{i\omega x}-1-i\omega xI_{[-1,1]}(x))\nu(dx),~~\omega\in{\Bbb R}. \end{eqnarray} Now, we explore some properties of the two expressions $q(q-\psi(\omega))^{-1}$ and $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1},$ $\omega\in{\Bbb R},$ that will play an essential r\^ole in the rest of this section. \begin{lemma} \label{Holder-condition} Suppose the L\'evy process $X_t$ has a jumps measure $\nu$ that satisfies $\int_{{\Bbb R}\setminus[-1,1]}|x|^\varepsilon \nu(dx)<\infty,$ for some $\varepsilon\in(0,1).$ Then \begin{description} \item[i)] $q(q-\psi(\omega))^{-1}$ satisfies the H\"older condition; \item[ii)] $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$ satisfies the H\"older condition. \end{description} \end{lemma} \textbf{Proof.} A proof of part (i) may be found in Kuznetsov (2009a), and the proof of part (ii) is a minor variation of the proof of part (i). $\square$ The above condition on the jumps measure $\nu$ (\textit{i.e.}, $\exists\varepsilon\in(0,1),~\mbox{such that}~ \int_{{\Bbb R}\setminus[-1,1]}|x|^\varepsilon \nu(dx)<\infty$) is a very mild restriction, and many L\'evy processes, such as all stable processes, satisfy it. It only excludes cases in which the jumps measure has an extremely heavy tail (behaving like $|x|^{-1}/(\ln|x|)^2$ for large enough $x$); see Kuznetsov (2009a) for more detail. The following recalls the definition of a very useful class of L\'evy processes. \begin{definition} \label{regular-exponential-type} A L\'evy process $X_t$ is said to be of regular exponential type if its corresponding characteristic exponent is analytic and continuous in a strip about the real line. \end{definition} Loosely speaking, a L\'evy process $X_t$ is a regular L\'evy process of exponential type (RLPE) if its jumps measure has a polynomial singularity at the origin and decays exponentially at infinity; see Boyarchenko \& Levendorski\u{\i} (1999, 2002a-c). The majority of classes of L\'evy processes used in empirical studies of financial markets satisfy the conditions given above (i.e., $\psi$ is analytic and continuous in a strip about the real line).
Brownian motion; Kou's model (Kou, 2002); hyperbolic processes (Eberlein \& Keller, 1995, Eberlein et al, 1998, and Barndorff-Nielsen et al, 2001); normal inverse Gaussian processes and their generalization (Barndorff-Nielsen, 1998 and Barndorff-Nielsen \& Levendorski\u{\i}, 2001); and the extended Koponen family (Koponen, 1995 and Boyarchenko \& Levendorski\u{\i}, 1999) are examples of regular L\'evy processes of exponential type. The variance gamma processes (Madan et al, 1998) and the stable L\'evy processes are two important exceptions; see Cardi (2005) for more detail.
\begin{lemma} \label{psi-analytic-bounded}
The L\'evy process $X_t$ has an analytic and continuous characteristic exponent on the real line ${\Bbb R}$ if \emph{either} of the following conditions holds:
\begin{description}
\item[i)] $X_t$ has a jumps measure $\nu(dx)$ of bounded variation (\textit{i.e.}, $\int_{-1}^1|x|\nu(dx)<\infty$);
\item[ii)] $X_t$ is a regular L\'evy process of exponential type (see Definition \ref{regular-exponential-type}).
\end{description}
\end{lemma}
\textbf{Proof.} For part (i), observe that the characteristic exponent for a bounded variation jumps measure $\upsilon(dx)$ is given by
\begin{eqnarray*}
\psi(\omega) &=& i\mu\omega+\int_{{\Bbb R}}(e^{i\omega x}-1-i\omega xI_{[-1,1]}(x))\upsilon(dx)\\
&=& i\mu\omega-\upsilon({\Bbb R})-i\omega\int_{[-1,1]}x\upsilon(dx)+\int_{(-\infty,0]}e^{i\omega x}\upsilon^-(dx)+\int_{(0,\infty)}e^{i\omega x}\upsilon^+(dx),
\end{eqnarray*}
see Bertoin (1996). Since $\upsilon(dx)$ is a bounded variation jumps measure, one can conclude that the first three terms are analytic on ${\Bbb R}.$ A double application of the Paley-Wiener Theorem \ref{Paley.Wiener} shows that the last two terms are, respectively, analytic and bounded in ${\Bbb C}^-$ and ${\Bbb C}^+.$ Therefore, these terms are analytic on ${\Bbb R}={\Bbb C}^-\cap{\Bbb C}^+.$ The proof of part (ii) follows from Definition \ref{regular-exponential-type}. $\square$
\begin{lemma} \label{index-zero}
Suppose the L\'evy process $X_t$ \emph{either} is of regular exponential type \emph{or} has a bounded variation jumps measure $\nu.$ Then,
\begin{description}
\item[i)] for a geometric stopping time $\tau(q)$ with parameter $q$ ($q\neq1$), the function $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$ has zero index on the real line;
\item[ii)] for an exponential stopping time $\tau(q)$ with constant rate $q$ ($q>0$), the function $q(q-\psi(\omega))^{-1}$ has zero index on the real line.
\end{description}
\end{lemma}
\textbf{Proof.} Firstly, observe that the functions $q(q-\psi(\omega))^{-1}$ and $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$ have no zeros on ${\Bbb R};$ they may only vanish at $\pm\infty.$ Moreover, the equations $q-\psi(\omega)=0$ and $1-q\exp\{-\psi(\omega)\}=0$ are, respectively, equivalent to $E(\exp\{i\omega X_1\})=\exp\{q\}$ and $E(\exp\{i\omega X_1\})=q.$ Since $q$ is positive and real valued, and $E(\exp\{i\omega X_1\})$ is a Hermitian function, these equations have no solutions on ${\Bbb R}.$ Moreover, from Lemma \ref{psi-analytic-bounded} observe that the two functions $q(q-\psi(\omega))^{-1}$ and $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$ are analytic and bounded on the real line. The desired result follows from the above observations along with the fact that the index of an analytic function is the number of zeros minus the number of poles within the contour (Gakhov, 1990).
$\square$

The extrema of a L\'evy process play a crucial r\^ole in determining many aspects of the process; see Mordecki (2003), Renming \& Vondra\v{c}ek (2008), Dmytro (2004), and Albrecher, et al. (2008), among many others. The following theorem addresses the question of how the problem of finding the characteristic functions of the distributions of the extrema can be restated in terms of the Riemann-Hilbert problem \ref{RH-For-this-paper}.
\begin{theorem} \label{Exact-distributions}
Suppose $X_{t}$ is a L\'evy process whose stopping time $\tau(q)$ has either a geometric or an exponential distribution with parameter $q,$ independent of the L\'evy process $X_t,$ and $\tau(0)=\infty.$ Moreover, suppose that
\begin{description}
\item[$A_1$)] its jumps measure $\nu$ satisfies $\int_{{\Bbb R}\setminus[-1,1]}|x|^\varepsilon\nu(dx)<\infty,$ for some $\varepsilon>0;$
\item[$A_2$)] either its jumps measure $\nu$ is of bounded variation or $X_t$ is a regular exponential type L\'evy process.
\end{description}
Then, the characteristic functions of $M_q$ and $I_q,$ say $\Phi^+_q$ and $\Phi^-_q,$ respectively, satisfy
\begin{description}
\item[i)] the Riemann-Hilbert problem $\Phi^+_q(\omega)\Phi^-_q(\omega)=q(q-\psi(\omega))^{-1},~\omega\in{\Bbb R},$ whenever $\tau(q)$ has an exponential distribution with parameter $q$ ($q>0$), which has the unique solution
$$\Phi_q^\pm(\omega)=\sqrt{q/(q-\psi(\omega))}\exp \{\pm\frac{i}{2}(H_{\ln (q-\psi)}(\omega)-H_{\ln (q-\psi)}(0))\},~\omega\in{\Bbb R};$$
\item[ii)] the Riemann-Hilbert problem $\Phi^+_q(\omega)\Phi^-_q(\omega)=(1-q)(1-q\exp\{-\psi(\omega)\})^{-1},~\omega\in{\Bbb R},$ whenever $\tau(q)$ has a geometric distribution with parameter $q$ ($q\neq1$), which has the unique solution
$$\Phi_q^\pm(\omega)=\sqrt{(1-q)/(1-q\exp\{-\psi(\omega)\})}\exp \{\pm\frac{i}{2}(H_{\ln (1-qe^{-\psi})}(\omega)-H_{\ln (1-qe^{-\psi})}(0))\},~\omega\in{\Bbb R}.$$
\end{description}
\end{theorem}
\textbf{Proof.} To establish the desired result, observe that: ({\bf 1}) the characteristic function of the stopped L\'evy process $X_{\tau(q)}$ can be uniquely decomposed as a product of the two characteristic functions of the {\it supremum} and the {\it infimum} of the process; see Cardi (2005, pages 43--4) for more detail; ({\bf 2}) the random variables $M_q$ and $I_q$ attain, respectively, nonnegative and nonpositive values, so a double application of the Paley-Wiener theorem (Theorem \ref{Paley.Wiener}) shows that $\Phi^+_q$ and $\Phi^-_q$ are, respectively, sectionally analytic in ${\Bbb C}^+$ and ${\Bbb C}^-;$ ({\bf 3}) the two expressions $q(q-\psi(\omega))^{-1}$ and $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$ satisfy a H\"older condition (see Lemma \ref{Holder-condition}) and have zero index (see Lemma \ref{index-zero}); ({\bf 4}) the characteristic function of the stopped L\'evy process $X_{\tau(q)}$ is $q(q-\psi(\omega))^{-1}$ for an exponentially distributed stopping time $\tau(q)$ (see Cardi; 2005, page 26) \emph{and} $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$ for a geometric stopping time $\tau(q)$ (see Cardi; 2005, page 25). The above observations along with Remark \ref{solutions-of-RH-in-term-g} complete the proof. $\square$
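For the reader's convenience, we sketch the first half of observation ({\bf 4}); the computation is elementary and standard, and is included here purely as an illustration. Since $E(\exp\{i\omega X_t\})=\exp\{t\psi(\omega)\}$ and the exponential stopping time $\tau(q)$ is independent of $X_t,$ Fubini's theorem gives, for $\omega\in{\Bbb R},$
\begin{eqnarray*}
E(e^{i\omega X_{\tau(q)}}) &=& \int_0^\infty qe^{-qt}E(e^{i\omega X_t})dt=\int_0^\infty qe^{-qt}e^{t\psi(\omega)}dt=\frac{q}{q-\psi(\omega)},
\end{eqnarray*}
where the integral converges because $q>0$ and $\mathrm{Re}(\psi(\omega))=\ln|E(e^{i\omega X_1})|\leq0.$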
The following examples provide applications of the above results to several L\'evy processes.
\begin{example} \label{from-paper-lewis-mordecki-positive-jumps}
Lewis \& Mordecki (2008) considered a L\'evy process $X_t$ with an exponential stopping time $\tau(q)$ and a jumps measure $\nu$ given by $\nu(dx)=\nu^-(dx)I_{(-\infty,0)}(x)+\lambda p(x)I_{(0,\infty)}(x)dx,$ where $p$ is the mixed gamma density function given by Equation \ref{mixed gamma-density} with $0<\alpha_1< Re(\alpha_2)\leq\cdots\leq Re(\alpha_v).$ They established that the expression $q(q-\psi(\lambda))^{-1}$ (for $\lambda\in{\Bbb C}$): (i) has zeros at $i\alpha_1,i\alpha_2,\cdots,i\alpha_v,$ with orders $n_1,n_2,\cdots,n_v,$ respectively, in ${\Bbb C}^-;$ and (ii) has poles at $i\beta_1(q),i\beta_2(q),\cdots,i\beta_{\mu(q)}(q),$ with multiplicities $m_1(q),m_2(q),\cdots,m_{\mu(q)}(q),$ respectively, in ${\Bbb C}^-.$ Using these observations, one may decompose the expression $q(q-\psi(\lambda))^{-1},~\lambda\in{\Bbb C},$ as a product of two functions analytic in ${\Bbb C}^+$ and ${\Bbb C}^-,$ say $\rho^+_q$ and $\rho^-_q$ respectively, i.e., $q(q-\psi(\lambda))^{-1}=\rho^+_q(\lambda)\rho^-_q(\lambda),$ where
$$\rho_q^+(\lambda)=\frac{q}{q-\psi(\lambda)}\prod_{j=1}^{\mu(q)}(\lambda-i\beta_j(q))^{m_j(q)}\prod_{k=1}^{v}(\lambda-i\alpha_k)^{-n_k},\quad
\rho_q^-(\lambda)=\prod_{k=1}^{v}(\lambda-i\alpha_k)^{n_k}\prod_{j=1}^{\mu(q)}(\lambda-i\beta_j(q))^{-m_j(q)},$$
for $\lambda\in{\Bbb C}.$ Now, using Remark \ref{Carlemann-method}, one may verify the finding of Lewis \& Mordecki (2008) that $\Phi_q^\pm\equiv\rho_q^\pm.$
\end{example}
Similar results have been established for a L\'evy process $X_t$ with a mixed gamma negative jumps component and arbitrary positive jumps; see Lewis \& Mordecki (2005) for more details.
\begin{example} \label{alpha-stable-processes}
Consider an $\alpha-$stable process $X_t$ having an exponential stopping time $\tau(q)$ and a jumps measure $\nu(dx)=c_1x^{-1-\alpha}I_{(0,\infty)}(x)dx+c_2|x|^{-1-\alpha}I_{(-\infty,0)}(x)dx,$ where $\alpha\in(0,1)\cup(1,2).$ Doney (1987) studied the distributions of $M_q$ and $I_q.$ Since the characteristic exponent of the process is $\psi(\omega)=(c_1+c_2)|\omega|^\alpha\{(c_1+c_2)-i(c_1-c_2)\hbox{sgn}(\omega)\tan(\pi\alpha/2)\}+i\omega\eta,$ where $\eta$ is a real-valued constant and $\omega\in{\Bbb R},$ the expression $q(q-\psi(\lambda))^{-1},~\lambda\in{\Bbb C},$ is a rational function. Therefore, one can readily find two rational functions $\rho^+_q$ and $\rho^-_q$ which are analytic in ${\Bbb C}^+$ and ${\Bbb C}^-,$ respectively, with $q(q-\psi(\lambda))^{-1}=\rho^+_q(\lambda)\rho^-_q(\lambda),~\lambda\in{\Bbb C}.$ Hence $\Phi^\pm_q\equiv\rho^\pm_q,$ which verifies Doney's observation.
\end{example}
The following remark suggests an approximation technique for finding the characteristic functions of $M_q$ and $I_q$ whenever they cannot be found explicitly.
\begin{remark}
In the situation where the function $q(q-\psi(\omega))^{-1}$ (or $(1-q)(1-q\exp\{-\psi(\omega)\})^{-1}$) cannot be explicitly decomposed as a product of two sectionally analytic functions in ${\Bbb C}^+$ and ${\Bbb C}^-,$ we suggest replacing it by a rational function, obtained from a Pad\'e approximant or a continued fraction expansion, which converges uniformly to the original function. An application of Carleman's method then leads to approximate solutions for the characteristic functions of $M_q$ and $I_q.$
\end{remark}
The following example represents a situation where the characteristic functions of $M_q$ and $I_q$ apparently cannot be found explicitly.
\begin{example} \label{from-paper-Kuznetsov-no-1}
Kuznetsov (2009b) considered a compound Poisson process with a jumps measure $\nu(dx)=\exp\{\alpha x\}{\rm sech}(x)dx$ and an exponential stopping time $\tau(q).$ He showed that the characteristic exponent of such a compound Poisson process is given by
\begin{eqnarray*}
\psi(\omega) &=& \frac{\pi}{\cos(\pi\alpha/2)}-\frac{\pi}{\cosh(\pi(\omega-i\alpha)/2)},~ \omega\in{\Bbb R}.
\end{eqnarray*}
He established that, in ${\Bbb C},$ the expression $q(q-\psi(\cdot))^{-1}$ can be uniformly approximated by the product $\rho^+_q(\cdot)\rho^-_q(\cdot),$ where
\begin{eqnarray*}
\rho^+_q(\lambda) &=&\prod_{n=0}^\infty \frac{(1-\frac{i\lambda}{4n+1-\alpha})(1-\frac{i\lambda}{4n+3-\alpha})}{(1-\frac{i\lambda}{4n+\eta-\alpha})(1-\frac{i\lambda}{4n+4-\eta-\alpha})};\\
\rho^-_q(\lambda) &=&\prod_{n=0}^\infty \frac{(1+\frac{i\lambda}{4n+1+\alpha})(1+\frac{i\lambda}{4n+3+\alpha})}{(1+\frac{i\lambda}{4n+\eta+\alpha})(1+\frac{i\lambda}{4n+4-\eta+\alpha})},
\end{eqnarray*}
where $\lambda\in{\Bbb C}$ and $\eta=(2/\pi)\arccos(\pi/(q+\pi\sec(\alpha\pi/2))).$ Therefore, approximate solutions for $\Phi^\pm_q$ are given by $\rho^\pm_q;$ more detail can be found in Kuznetsov (2009b).
\end{example}
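To close this section, the following worked example, which is classical and is not taken from the papers cited above, illustrates Theorem \ref{Exact-distributions} and Remark \ref{Carlemann-method} in a case where every quantity is explicit.
\begin{example} (Brownian motion with drift; an illustrative sketch)
Let $X_t$ be a Brownian motion with drift, so that $\psi(\omega)=i\mu\omega-\frac{1}{2}\sigma^2\omega^2$ and $\nu\equiv0,$ and let $\tau(q)$ be an exponential stopping time with rate $q>0.$ Set $\beta^\pm=(\mp\mu+\sqrt{\mu^2+2q\sigma^2})/\sigma^2,$ so that $\beta^+\beta^-=2q/\sigma^2$ and $\beta^--\beta^+=2\mu/\sigma^2.$ One checks directly that
\begin{eqnarray*}
\frac{q}{q-\psi(\omega)} &=& \frac{2q/\sigma^2}{(\beta^+-i\omega)(\beta^-+i\omega)}=\frac{\beta^+}{\beta^+-i\omega}\cdot\frac{\beta^-}{\beta^-+i\omega},~\omega\in{\Bbb R}.
\end{eqnarray*}
The first factor is analytic and bounded in ${\Bbb C}^+$ (its only pole lies at $-i\beta^+\in{\Bbb C}^-$), and the second in ${\Bbb C}^-.$ Therefore, by Remark \ref{Carlemann-method}, $\Phi^+_q(\omega)=\beta^+/(\beta^+-i\omega)$ and $\Phi^-_q(\omega)=\beta^-/(\beta^-+i\omega).$ These are the characteristic functions of exponential distributions, recovering the classical fact that $M_q$ and $-I_q$ are exponentially distributed with rates $\beta^+$ and $\beta^-,$ respectively.
\end{example}
\end{document}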
Jamshid al-Kashi

Ghiyāth al-Dīn Jamshīd Masʿūd al-Kāshī (or al-Kāshānī)[1] (Persian: غیاث الدین جمشید کاشانی Ghiyās-ud-dīn Jamshīd Kāshānī) (c. 1380 Kashan, Iran – 22 June 1429 Samarkand, Transoxania) was a Persian[2][3] astronomer and mathematician during the reign of Tamerlane.

Ghiyāth al-Dīn Jamshīd Kāshānī
Opening bifolio of a manuscript of al-Kashi's Miftah al-Hisab; copy created in Safavid Iran, dated 1656
Title: al-Kashi
Born: c. 1380, Kashan, Iran
Died: 22 June 1429 (aged 48), Samarkand, Transoxania
Religion: Islam
Era: Islamic Golden Age (Timurid Renaissance)
Region: Iran
Main interests: Astronomy, Mathematics
Notable ideas: Pi decimal determination to the 16th place; law of cosines
Notable works: Sullam al-Sama
Occupation: Persian Muslim scholar

Much of al-Kāshī's work was not brought to Europe, and even the extant work remains unpublished in any form.[4]

Biography

Al-Kashi was born in 1380, in Kashan, in central Iran, to a Persian family.[2][3] This region was controlled by Tamerlane, better known as Timur. The situation changed for the better when Timur died in 1405 and his son, Shah Rokh, ascended to power. Shah Rokh and his wife, Goharshad, a Turkish princess, were very interested in the sciences, and they encouraged their court to study the various fields in great depth. Consequently, the period of their power became one of many scholarly accomplishments. This was the perfect environment for al-Kashi to begin his career as one of the world's greatest mathematicians.

Their son, Ulugh Beg, came to power in 1409; eight years later he founded an institute in Samarkand which soon became a prominent university. Students from all over the Middle East and beyond flocked to this academy in the capital city of Ulugh Beg's empire. Consequently, Ulugh Beg gathered many of the great mathematicians and scientists of the Middle East. In 1414, al-Kashi took this opportunity to contribute vast amounts of knowledge to his people. His best work was done in the court of Ulugh Beg. Al-Kashi was still working on his book, called "Risala al-watar wa'l-jaib", meaning "The Treatise on the Chord and Sine", when he died, probably in 1429. Some scholars believe that Ulugh Beg may have ordered his murder, because he went against Islamic theologians.

Astronomy

Khaqani Zij

Al-Kashi produced a Zij entitled the Khaqani Zij, which was based on Nasir al-Din al-Tusi's earlier Zij-i Ilkhani. In his Khaqani Zij, al-Kashi thanks the Timurid sultan and mathematician-astronomer Ulugh Beg, who invited al-Kashi to work at his observatory (see Islamic astronomy) and his university (see Madrasah), which taught theology. Al-Kashi produced sine tables to four sexagesimal digits (equivalent to eight decimal places) of accuracy for each degree, including differences for each minute. He also produced tables dealing with transformations between coordinate systems on the celestial sphere, such as the transformation from the ecliptic coordinate system to the equatorial coordinate system.[5]

Astronomical Treatise on the size and distance of heavenly bodies

He wrote the book Sullam al-Sama on the resolution of difficulties met by predecessors in the determination of distances and sizes of heavenly bodies, such as the Earth, the Moon, the Sun, and the Stars.
Treatise on Astronomical Observational Instruments

In 1416, al-Kashi wrote the Treatise on Astronomical Observational Instruments, which described a variety of different instruments, including the triquetrum and armillary sphere, the equinoctial armillary and solsticial armillary of Mo'ayyeduddin Urdi, the sine and versine instrument of Urdi, the sextant of al-Khujandi, the Fakhri sextant at the Samarqand observatory, a double quadrant azimuth-altitude instrument he invented, and a small armillary sphere incorporating an alhidade which he invented.[6]

Plate of Conjunctions

Al-Kashi invented the Plate of Conjunctions, an analog computing instrument used to determine the time of day at which planetary conjunctions will occur,[7] and for performing linear interpolation.[8]

Planetary computer

Al-Kashi also invented a mechanical planetary computer which he called the Plate of Zones, which could graphically solve a number of planetary problems, including the prediction of the true positions in longitude of the Sun and Moon,[8] and the planets in terms of elliptical orbits;[9] the latitudes of the Sun, Moon, and planets; and the ecliptic of the Sun. The instrument also incorporated an alhidade and ruler.[10]

Mathematics

Law of cosines

In French, the law of cosines is named Théorème d'Al-Kashi (Theorem of Al-Kashi), as al-Kashi was the first to provide an explicit statement of the law of cosines in a form suitable for triangulation.[11] His other work is al-Risāla al-muhītīyya, or "The Treatise on the Circumference".[12]

The Treatise of Chord and Sine

In The Treatise on the Chord and Sine, al-Kashi computed sin 1° to nearly as much accuracy as his value for π, which was the most accurate approximation of sin 1° in his time and was not surpassed until Taqi al-Din in the sixteenth century. In algebra and numerical analysis, he developed an iterative method for solving cubic equations, which was not discovered in Europe until centuries later.[5]

A method algebraically equivalent to Newton's method was known to his predecessor Sharaf al-Dīn al-Tūsī. Al-Kāshī improved on this by using a form of Newton's method to solve $x^{P}-N=0$ for the $P$th root of $N$. In western Europe, a similar method was later described by Henry Briggs in his Trigonometria Britannica, published in 1633.[13]

In order to determine sin 1°, al-Kashi discovered the following formula, often attributed to François Viète in the sixteenth century:[14]

$\sin 3\phi =3\sin \phi -4\sin ^{3}\phi$

Computation of 2π

In his numerical approximation, he correctly computed 2π to 9 sexagesimal digits[15] in 1424,[5] and he converted this estimate of 2π to 16 decimal places of accuracy.[16] This was far more accurate than the estimates earlier given in Greek mathematics (3 decimal places by Ptolemy, AD 150), Chinese mathematics (7 decimal places by Zu Chongzhi, AD 480) or Indian mathematics (11 decimal places by Madhava of the Kerala school, c. 14th century). The accuracy of al-Kashi's estimate was not surpassed until Ludolph van Ceulen computed 20 decimal places of π 180 years later.[5] Al-Kashi's goal was to compute the circle constant so precisely that the circumference of the largest possible circle (ecliptica) could be computed with the highest desirable precision (the diameter of a hair).
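For a modern reader, it is easy to check what nine sexagesimal digits buy. The following is a minimal sketch, in present-day Python, of the classical inscribed-polygon doubling recurrence; it illustrates the scale of the computation and is not a reconstruction of al-Kashi's actual sexagesimal procedure (he reportedly worked with polygons of 3 × 2^28 sides, and used circumscribed polygons as well):

from decimal import Decimal, getcontext

getcontext().prec = 30  # about 30 significant digits, comfortably past 16 decimals

# Archimedean doubling: if s_n is the side length of a regular n-gon
# inscribed in a unit circle, then s_{2n}^2 = s_n^2 / (2 + sqrt(4 - s_n^2)).
s2 = Decimal(3)  # squared side of the inscribed equilateral triangle, n = 3
n = 3
for _ in range(28):  # 28 doublings: n = 3 * 2**28 = 805,306,368 sides
    s2 = s2 / (2 + (4 - s2).sqrt())
    n *= 2

two_pi = n * s2.sqrt()  # polygon perimeter, a lower bound for 2*pi
print(two_pi)  # 6.2831853071795864609...; matches 2*pi to 16 decimal places

Note 16 below carries out the corresponding error arithmetic directly: nine sexagesimal digits bound the error by 1/60^9 ≈ 9.9 × 10^-17, which is exactly what "16 decimal places" amounts to.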
7):[17] "The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548-1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to arithmetic (Samarkand, early fifteenth century).[18]" Khayyam's triangle In considering Pascal's triangle, known in Persia as "Khayyam's triangle" (named after Omar Khayyám), Struik notes that (p. 21):[17] "The Pascal triangle appears for the first time (so far as we know at present) in a book of 1261 written by Yang Hui, one of the mathematicians of the Song dynasty in China.[19] The properties of binomial coefficients were discussed by the Persian mathematician Jamshid Al-Kāshī in his Key to arithmetic of c. 1425.[20] Both in China and Persia the knowledge of these properties may be much older. This knowledge was shared by some of the Renaissance mathematicians, and we see Pascal's triangle on the title page of Peter Apian's German arithmetic of 1527. After this, we find the triangle and the properties of binomial coefficients in several other authors.[21]" Biographical film In 2009, IRIB produced and broadcast (through Channel 1 of IRIB) a biographical-historical film series on the life and times of Jamshid Al-Kāshi, with the title The Ladder of the Sky[22][23] (Nardebām-e Āsmān[24]). The series, which consists of 15 parts, with each part being 45 minutes long, is directed by Mohammad Hossein Latifi and produced by Mohsen Ali-Akbari. In this production, the role of the adult Jamshid Al-Kāshi is played by Vahid Jalilvand.[25][26][27] Notes 1. A. P. Youschkevitch and B. A. Rosenfeld. "al-Kāshī (al-Kāshānī), Ghiyāth al-Dīn Jamshīd Masʿūd" Dictionary of Scientific Biography. 2. Bosworth, C.E. (1990). The Encyclopaedia of Islam, Volume IV (2. impression. ed.). Leiden [u.a.]: Brill. p. 702. ISBN 9004057455. AL-KASHl Or AL-KASHANI, GHIYATH AL-DIN DjAMSHlD B. MASCUD B. MAHMUD, Persian mathematician and astronomer who wrote in his mother tongue and in Arabic. 3. Selin, Helaine (2008). Encyclopaedia of the history of science, technology, and medicine in non-western cultures. Berlin New York: Springer. p. 132. ISBN 9781402049606. Al-Kāshī, or al-Kāshānī (Ghiyāth al-Dīn Jamshīd ibn Mas˓ūd al-Kāshī (al-Kāshānī)), was a Persian mathematician and astronomer. 4. iranicaonline.org 5. O'Connor, John J.; Robertson, Edmund F., "Ghiyath al-Din Jamshid Mas'ud al-Kashi", MacTutor History of Mathematics Archive, University of St Andrews 6. (Kennedy 1951, pp. 104–107) 7. (Kennedy 1947, p. 56) 8. (Kennedy 1950) 9. (Kennedy 1952) 10. (Kennedy 1951) 11. Pickover, Clifford A. (2009). The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics. Sterling Publishing Company, Inc. p. 106. ISBN 9781402757969. 12. Azarian, Mohammad K. (2019). "An Overview of Mathematical Contributions of Ghiyath al-Din Jamshid Al-Kashi [Kashani]" (PDF). Mathematics Interdisciplinary Research. 4 (1). doi:10.22052/mir.2019.167225.1110. 13. Ypma, Tjalling J. (December 1995), "Historical Development of the Newton-Raphson Method", SIAM Review, Society for Industrial and Applied Mathematics, 37 (4): 531–551 [539], doi:10.1137/1037125 14. Marlow Anderson, Victor J. Katz, Robin J. 
Wilson (2004), Sherlock Holmes in Babylon and Other Tales of Mathematical History, Mathematical Association of America, p. 139, ISBN 0-88385-546-1
15. Al-Kashi, author: Adolf P. Youschkevitch, chief editor: Boris A. Rosenfeld, p. 256
16. The statement that a quantity is calculated to $n$ sexagesimal digits implies that the maximal inaccuracy in the calculated value is less than $59/60^{n+1}+59/60^{n+2}+\dots =1/60^{n}$ in the decimal system. With $n=9$, Al-Kashi has thus calculated $2\pi$ with a maximal error less than $1/60^{9}\approx 9.92\times 10^{-17}<10^{-16}$. That is to say, Al-Kashi has calculated $2\pi$ exactly up to and including the 16th place after the decimal separator. For $2\pi$ expressed exactly up to and including the 18th place after the decimal separator one has: $6.283\,185\,307\,179\,586\,476$.
17. D.J. Struik, A Source Book in Mathematics 1200-1800 (Princeton University Press, New Jersey, 1986). ISBN 0-691-02397-2
18. P. Luckey, Die Rechenkunst bei Ğamšīd b. Mas'ūd al-Kāšī (Steiner, Wiesbaden, 1951).
19. J. Needham, Science and civilisation in China, III (Cambridge University Press, New York, 1959), 135.
20. Russian translation by B.A. Rozenfel'd (Gos. Izdat, Moscow, 1956); see also Selection I.3, footnote 1.
21. Smith, History of mathematics, II, 508-512. See also our Selection II.9 (Girard).
22. The narrative by Latifi of the life of the celebrated Iranian astronomer in 'The Ladder of the Sky', in Persian, Āftāb, Sunday, 28 December 2008.
23. IRIB to spice up Ramadan evenings with special series, Tehran Times, 22 August 2009.
24. The name Nardebām-e Āsmān coincides with the Persian translation of the title Soll'am-os-Samā' (سُلّمُ السَماء) of a scientific work by Jamshid Kashani written in Arabic. In this work, which is also known as Resāleh-ye Kamālieh (رسالهٌ كماليه), Jamshid Kashani discusses such matters as the diameters of Earth, the Sun, the Moon, and of the stars, as well as the distances of these to Earth. He completed this work on 1 March 1407 CE in Kashan.
25. The programmes of the Holy month of Ramadan, Channel 1, in Persian, 19 August 2009. Archived 2009-08-26 at the Wayback Machine. Here the name "Latifi" is incorrectly written as "Seifi".
26. Dr Velāyati: 'The Ladder of the Sky' is faithful to history, in Persian, Āftāb, Tuesday, 1 September 2009.
27. Fatemeh Udbashi, Latifi's narrative of the life of the renowned Persian astronomer in 'The Ladder of the Sky', in Persian, Mehr News Agency, 29 December 2008. Archived from the original on 2011-07-22; retrieved 2009-10-04.

See also

• Numerical approximations of π

References

• Kennedy, Edward S. (1947), "Al-Kashi's Plate of Conjunctions", Isis, 38 (1–2): 56–59, doi:10.1086/348036, S2CID 143993402
• Kennedy, Edward S. (1950), "A Fifteenth-Century Planetary Computer: al-Kashi's "Tabaq al-Manateq" I. Motion of the Sun and Moon in Longitude", Isis, 41 (2): 180–183, doi:10.1086/349146, PMID 15436217, S2CID 43217299
• Kennedy, Edward S. (1951), "An Islamic Computer for Planetary Latitudes", Journal of the American Oriental Society, American Oriental Society, 71 (1): 13–21, doi:10.2307/595221, JSTOR 595221
• Kennedy, Edward S.
(1952), "A Fifteenth-Century Planetary Computer: al-Kashi's "Tabaq al-Maneteq" II: Longitudes, Distances, and Equations of the Planets", Isis, 43 (1): 42–50, doi:10.1086/349363, S2CID 123582209 • O'Connor, John J.; Robertson, Edmund F., "Ghiyath al-Din Jamshid Mas'ud al-Kashi", MacTutor History of Mathematics Archive, University of St Andrews External links • Schmidl, Petra G. (2007). "Kāshī: Ghiyāth (al‐Milla wa‐) al‐Dīn Jamshīd ibn Masʿūd ibn Maḥmūd al‐Kāshī [al‐Kāshānī]". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 613–5. ISBN 978-0-387-31022-0. (PDF version) • Eshera, Osama (2020). "On the Early Collections of the Works of Ġiyāṯ al-Dīn Jamšīd al-Kāšī". Journal of Islamic Manuscripts. 13 (2): 225–262. doi:10.1163/1878464X-01302001. S2CID 248336832. • Mohammad K. Azarian, A summary of "Miftah al-Hisab", Missouri Journal of Mathematical Sciences, Vol. 12, No. 2, Spring 2000, pp. 75-95 • About Jamshid Kashani • Sources relating to Ghiyath al-Din Kashani, or al-Kashi, by Jan Hogendijk Wikimedia Commons has media related to Jamshīd al-Kāshī. • Azarian, Mohammad K. (2004). "Al-Kashi's Fundamental Theorem" (PDF). International Journal of Pure and Applied Mathematics. • Azarian, Mohammad K. (2015). "A Study of Risa-la al-Watar wa'l Jaib ("The Treatise on the Chord and Sine")" (PDF). Forum Geometricorum. • Azarian, Mohammad K. (2018). "A Study of Risa-la al-Watar wa'l Jaib ("The Treatise on the Chord and Sine"):Revisited" (PDF). Forum Geometricorum. • Azarian, Mohammad K. (2009). "The Introduction of Al-Risala al-Muhitiyya: An English Translation" (PDF). International Journal of Pure and Applied Mathematics. Mathematics in Iran Mathematicians Before 20th Century • Abu al-Wafa' Buzjani • Jamshīd al-Kāshī (al-Kashi's theorem) • Omar Khayyam (Khayyam-Pascal's triangle, Khayyam-Saccheri quadrilateral, Khayyam's Solution of Cubic Equations) • Al-Mahani • Muhammad Baqir Yazdi • Nizam al-Din al-Nisapuri • Al-Nayrizi • Kushyar Gilani • Ayn al-Quzat Hamadani • Al-Isfahani • Al-Isfizari • Al-Khwarizmi (Al-jabr) • Najm al-Din al-Qazwini al-Katibi • Nasir al-Din al-Tusi • Al-Biruni Modern • Maryam Mirzakhani • Caucher Birkar • Sara Zahedi • Farideh Firoozbakht (Firoozbakht's conjecture) • S. L. Hakimi (Havel–Hakimi algorithm) • Siamak Yassemi • Freydoon Shahidi (Langlands–Shahidi method) • Hamid Naderi Yeganeh • Esmail Babolian • Ramin Takloo-Bighash • Lotfi A. Zadeh (Fuzzy mathematics, Fuzzy set, Fuzzy logic) • Ebadollah S. 
What really happened in Greece

2016-04-27 2016-05-02 pnrj critique of neoclassical economics, current events, market failure, public policy
Cato Institute, corruption, Denmark, derivatives, Dodd-Frank, employment, Euro, France, GDP, Germany, Goldman Sachs, Greece, Ireland, nonemployment, poverty, Spain, UK, unemployment, US, World Governance Indicators

JDN 2457506

I said I'd get back to this issue, so here goes.

Let's start with what is uncontroversial: Greece is in trouble. Their per-capita GDP PPP has fallen from a peak of over $32,000 in 2007 to a trough of just over $24,000 in 2013, and only just began to recover over the last 2 years. That's a fall of 29 log points. Put another way, the average person in Greece has about the same real income now that they had in the year 2000—a decade and a half of economic growth disappeared.

Their unemployment rate surged from about 7% in 2007 to almost 28% in 2013. It remains over 24%. That is, almost one quarter of all adults in Greece are seeking jobs and not finding them. The US has not seen an unemployment rate that high since the Great Depression.

Most shocking of all, over 40% of the population in Greece is now below the national poverty line. They define poverty as 60% of the inflation-adjusted average income in 2009, which works out to 665 Euros per person ($756 at current exchange rates) per month, or about $9000 per year. They also have an absolute poverty line, which 14% of Greeks now fall below, but only 2% did before the crash.

So now, let's talk about why. There's a standard narrative you've probably heard many times, which goes something like this:

The Greek government spent too profligately, heaping social services on the population without the tax base to support them. Unemployment insurance was too generous; pensions were too large; it was too hard to fire workers or cut wages. Thus, work incentives were too weak, and there was no way to sustain a high GDP. But they refused to cut back on these social services, and as a result went further and further into debt until it finally became unsustainable. Now they are cutting spending and raising taxes like they needed to, and it will eventually allow them to repay their debt.

Here's a fellow of the Cato Institute spreading this narrative on the BBC. Here's ABC with a five bullet-point list: Pension system, benefits, early retirement, "high unemployment and work culture issues" (yes, seriously), and tax evasion. Here the Telegraph says that Greece "went on a spending spree" and "stopped paying taxes".

That story is almost completely wrong. Almost nothing about it is true. Cato and the Telegraph got basically everything wrong. The only one ABC got right was tax evasion.

Here's someone else arguing that Greece has a problem with corruption and failed governance; there is something to be said for this, as Greece is fairly corrupt by European standards—though hardly by world standards. For being only a generation removed from an authoritarian military junta, they're doing quite well actually. They're about as corrupt as a typical upper-middle income country like Libya or Botswana; and Botswana is widely regarded as the shining city on a hill of transparency as far as Sub-Saharan Africa is concerned. So corruption may have made things worse, but it can't be the whole story.

First of all, social services in Greece were not particularly extensive compared to the rest of Europe. Before the crisis, Greece's government spending was about 44% of GDP.
That was about the same as Germany. It was slightly more than the UK. It was less than Denmark and France, both of which have government spending of about 50% of GDP. Greece even tried to cut spending to pay down their debt—it didn't work, because they simply ended up worsening the economic collapse and undermining the tax base they needed to do that.

Europe has fairly extensive social services by world standards—but that's a major part of why it's the First World. Even the US, despite spending far less than Europe on social services, still spends a great deal more than most countries—about 36% of GDP.

Second, if work incentives were a problem, you would not have high unemployment. People don't seem to grasp what the word unemployment actually means, which is part of why I can't stand it when news outlets just arbitrarily substitute "jobless" to save a couple of syllables. Unemployment does not mean simply that you don't have a job. It means that you don't have a job and are trying to get one. The word you're looking for to describe simply not having a job is nonemployment, and that's such a rarely used term my spell-checker complains about it. Yet economists rarely use this term precisely because it doesn't matter; a high nonemployment rate is not a symptom of a failing economy but a result of high productivity moving us toward the post-scarcity future (kicking and screaming, evidently).

If the problem with Greece were that they were too lazy and they retire too early (which is basically what ABC was saying in slightly more polite language), there would be high nonemployment, but there would not be high unemployment. "High unemployment and work culture issues" is actually a contradiction.

Before the crisis, Greece had an employment-to-population ratio of 49%, meaning a nonemployment rate of 51%. If that sounds ludicrously high, you're not accustomed to nonemployment figures. During the same time, the United States had an employment-to-population ratio of 52% and thus a nonemployment rate of 48%. So the number of people in Greece who were voluntarily choosing to drop out of work before the crisis was just slightly larger than the number in the US—and actually when you adjust for the fact that the US is full of young immigrants and Greece is full of old people (their median age is 10 years older than ours), it begins to look like it's we Americans who are lazy. (Actually, it's that we are studious—the US has an extremely high rate of college enrollment and the best colleges in the world. Full-time students are nonemployed, but they are certainly not unemployed.)

But Greece does have an enormously high debt, right? Yes—but it was actually not as bad before the crisis. Their government debt surged from 105% of GDP to almost 180% today. 105% of GDP is about what we have right now in the US; it's less than what we had right after WW2. This is a little high, but really nothing to worry about, especially if you've incurred the debt for the right reasons. (The famous paper by Reinhart and Rogoff arguing that 90% of GDP is a horrible point of no return was literally based on math errors.)

Moreover, Ireland and Spain suffered much the same fate as Greece, despite running primary budget surpluses.

So… what did happen? If it wasn't their profligate spending that put them in this mess, what was it?

Well, first of all, there was the Second Depression, a worldwide phenomenon triggered by the collapse of derivatives markets in the United States. (You want unsustainable debt?
Try 20 to 1 leveraged CDO-squareds and one quadrillion dollars in notional value. Notional value isn't everything, but it's a lot.) So it's mainly our fault, or rather the fault of our largest banks. As far as us voters, it's "our fault" in the way that if your car gets stolen it's "your fault" for not locking the doors and installing a LoJack. We could have regulated against this and enforced those regulations, but we didn't. (Fortunately, Dodd-Frank looks like it might be working.)

Greece was hit particularly hard because they are highly dependent on trade, particularly in services like tourism that are highly sensitive to the business cycle. Before the crash they imported 36% of GDP and exported 23% of GDP. Now they import 35% of GDP and export 33% of GDP—but it's a much smaller GDP. Their exports have only slightly increased while their imports have plummeted. (This has reduced their "trade deficit", but that has always been a silly concept. I guess it's less silly if you don't control your own currency, but it's still silly.)

Once the crash happened, the US had sovereign monetary policy and the wherewithal to actually use that monetary policy effectively, so we weathered the crash fairly well, all things considered. Our unemployment rate barely went over 10%. But Greece did not have sovereign monetary policy—they are tied to the Euro—and that severely limited their options for expanding the money supply as a result of the crisis. Raising spending and cutting taxes was the best thing they could do.

But the bank(st?)ers and their derivatives schemes caused the Greek debt crisis a good deal more directly than just that. Part of the condition of joining the Euro was that countries must limit their fiscal deficit to no more than 3% of GDP (which is a totally arbitrary figure with no economic basis in case you were wondering). Greece was unwilling or unable to do so, but wanted to look like they were following the rules—so they called up Goldman Sachs and got them to make some special derivatives that Greece could use to continue borrowing without looking like they were borrowing. The bank could have refused; they could have even reported it to the European Central Bank. But of course they didn't; they got their brokerage fee, and they knew they'd sell it off to some other bank long before they had to worry about whether Greece could ever actually repay it. And then (as I said I'd get back to in a previous post) they paid off the credit rating agencies to get them to rate these newfangled securities as low-risk.

In other words, Greece is not broke; they are being robbed.

Like homeowners in the US, Greece was offered loans they couldn't afford to pay, but the banks told them they could, because the banks had lost all incentive to actually bother with the question of whether loans can be repaid. They had "moved on"; their "financial innovation" of securitization and collateralized debt obligations meant that they could collect origination fees and brokerage fees on loans that could never possibly be repaid, then sell them off to some Greater Fool down the line who would end up actually bearing the default. As long as the system was complex enough and opaque enough, the buyers would never realize the garbage they were getting until it was too late. The entire concept of loans was thereby broken: The basic assumption that you only loan money you expect to be repaid no longer held.
And it worked, for a while, until finally the unpayable loans tried to create more money than there was in the world, and people started demanding repayment that simply wasn't possible. Then the whole scheme fell apart, and banks began to go under—but of course we saved them, because you've got to save the banks, how can you not save the banks?

Honestly I don't even disagree with saving the banks, actually. It was probably necessary. What bothers me is that we did nothing to save everyone else. We did nothing to keep people in their homes, nothing to stop businesses from collapsing and workers losing their jobs. Precisely because of the absurd over-leveraging of the financial system, the cost to simply refinance every mortgage in America would have been less than the amount we loaned out in bank bailouts. The banks probably would have done fine anyway, but if they didn't, so what? The banks exist to serve the people—not the other way around.

We can stop this from happening again—here in the US, in Greece, in the rest of Europe, everywhere. But in order to do that we must first understand what actually happened; we must stop blaming the victims and start blaming the perpetrators.

The credit rating agencies to be worried about aren't the ones you think

2016-04-20 2016-04-13 pnrj critique of neoclassical economics, macroeconomics, public policy
austerity, corruption, credit rating, credit rating agency, debt, default, Dodd-Frank, fiscal policy, Fitch, fraud, Greece, inflation, John Oliver, Krugman, macroeconomics, monetary policy, Moody's, Puerto Rico, risk, sovereign debt, Standard and Poor's, unemployment, United Nations, World Bank

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra "If you want to do something evil, put it inside something boring." (note that it's on HBO, so there is foul language):

As ever, his analysis of the subject is quite good—it's absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy. But I couldn't help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren't Equifax, Experian, and Transunion, the ones that assess credit ratings on individuals. They are Standard & Poor's, Moody's, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds. S&P, Moody's, and Fitch don't just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I'll talk about more in a later post.)

Moreover, they are proven corrupt. It's a matter of public record. Standard and Poor's is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.
Moody's has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren't even illegal, or weren't until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank's asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn't "predict" the European debt crisis so much as cause it by their panic.

Then of course there's the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that's right. You'd think that such ratings would be set by the World Bank or something, but they're not; in fact here's a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of "sovereign debt risk" is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Their fears should be inflation and unemployment—their monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with "running out of money to pay your debt" unless you are forced to borrow in a currency you can't control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that's sort of like a partial default, but it's a fundamentally different kind of "default" than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem. The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don't pay.
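To make that concrete, here is a minimal sketch of that textbook break-even calculation (my own illustration; the risk-free rate and the default probabilities are made up):

# The "textbook" story: a risk-neutral lender sets the rate r so that
# the borrowers who repay cover the losses from the borrowers who default:
#   (1 - default_prob) * (1 + r) = 1 + risk_free_rate
def break_even_rate(risk_free_rate, default_prob):
    return (1 + risk_free_rate) / (1 - default_prob) - 1

for p in (0.00, 0.05, 0.20):
    print(f"default risk {p:.0%}: lend at {break_even_rate(0.02, p):.1%}")
# default risk 0%: lend at 2.0%
# default risk 5%: lend at 7.4%
# default risk 20%: lend at 27.5%

Notice how the required rate explodes as the default probability rises; when higher rates themselves make default more likely, as with sovereign bonds, you get exactly the self-fulfilling spiral described next.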
This is already much more problematic than most economists appreciate; I've been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don't. But it at least makes some sense.

But if a country is a "high risk" in the sense of macroeconomic instability undermining the real value of their debt, we want to ensure that they can restore macroeconomic stability. But we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of "austerity" cause enormous damage to national economies and ultimately benefit no one because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can't simply let Greece fail as we might let a bank fail (and of course we've seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the "creditworthiness" of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can't even understand the problem. For after all, one must repay one's debts.

The Warren Rule is a good start

2015-08-08 2015-08-08 pnrj public policy
CEO, compensation, Denmark, Dodd-Frank, Elizabeth Warren, Google, inequality, Larry Page, Norway, pay, ratio, Robert Reich, TIME, Warren Rule, worker

JDN 2457243 EDT 10:40.

As far back as 2010, Elizabeth Warren proposed a simple regulation on the reporting of CEO compensation that was then built into Dodd-Frank—but the SEC has resisted actually applying that rule for five years; only now will it actually take effect (and by "now" I mean over the next two years). For simplicity I'll refer to that rule as the Warren Rule, though I don't see a lot of other people doing that (most people don't give it a name at all).

Two things are important to understand about this rule, which both undercut its effectiveness and make all the right-wing whinging about it that much more ridiculous.

1. It doesn't actually place any limits on CEO compensation or employee salaries; it merely requires corporations to consistently report the ratio between them.

Specifically, the rule says that every publicly-traded corporation must report the ratio between the "total compensation" of their CEO and the median salary (with benefits) of their employees; wisely, it includes foreign workers (with a few minor exceptions—lobbyists fought for more but fortunately Warren stood firm), so corporations can't simply outsource everything but management to make it look like they pay their employees more.
Unfortunately, it does not include contractors, which is awful; expect to see corporations working even harder to outsource their work to "contractors" who are actually employees without benefits (not that they weren't already). The greatest victory here will be for economists, who now will have more reliable data on CEO compensation; and for consumers, who will now find it more salient just how overpaid America's CEOs really are.

2. While it does wisely cover "total compensation", that isn't actually all the money that CEOs receive for owning and operating corporations.

It includes salaries, bonuses, benefits, and newly granted stock options—it does not include the value of stock options previously exercised or dividends received from stock the CEO already owns. TIME screwed this up; they took it at face value when Larry Page reported a $1 "total compensation", which is technically true given how "total compensation" is defined; he received a $1 token salary and no new stock awards. But Larry Page has net wealth of over $38 billion; about half of that is Google stock, so even if we ignore all others, on Google's PE ratio of about 25, Larry Page received at least $700 million in Google retained earnings alone. (In my personal favorite unit of wealth, Page receives about 3 romneys a year in retained earnings.) No, TIME, he is not the lowest-paid CEO in the world; he has simply structured his income so that it comes entirely from owning shares instead of receiving a salary. Most top CEOs do this, so be wary when it says a Fortune 500 CEO received only $2 million, and completely ignore it when it says a CEO received only $1. Probably in the former case and definitely in the latter, their real money is coming from somewhere else.

Of course, the complaints about how this is an unreasonable demand on businesses are totally absurd. Most of them keep track of all this data anyway; it's simply a matter of porting it from one spreadsheet to another. (I also love the argument that only "idiosyncratic investors" will care; yeah, what sort of idiot would care about income inequality or be concerned how much of their investment money is going directly to line a single person's pockets?) They aren't complaining because it will be a large increase in bureaucracy or a serious hardship on their businesses; they're complaining because they think it might work. Corporations are afraid that if they have to publicly admit how overpaid their CEOs are, they might actually be pressured to pay them less. I hope they're right.

CEO pay is set in a very strange way; instead of being based on an estimate of how much they are adding to the company, a CEO's pay is typically set as a certain margin above what the average CEO is receiving. But then as the process iterates and everyone tries to be above average, pay keeps rising, more or less indefinitely. Anyone with a basic understanding of statistics could have seen this coming, but somehow thousands of corporations didn't—or else simply didn't care.

Most people around the world want the CEO-to-employee pay ratio to be dramatically lower than it is. Indeed, unrealistically lower, in my view. Most countries say only 6 to 1, while Scandinavia says only 2 to 1. I want you to think about that for a moment; if the average employee at a corporation makes $50,000, people in Scandinavia think the CEO should only make $100,000, and people elsewhere think the CEO should only make $300,000? I'm honestly not sure what would happen to our economy if we made such a rule.
There would be very little incentive to want to become a CEO; why bear all that fierce competition and get blamed for everything to make only twice as much as you would as an average employee? On the other hand, most CEOs don't actually do all that much; CEO pay is basically uncorrelated with company performance. Maybe it would be better if they weren't paid very much, or even if we didn't have them at all. But under our current system, capping CEO pay also caps the pay of basically everyone else; the CEO is almost always the highest-paid individual in any corporation. I guess that's really the problem. We need to find ways to change the overall attitude of our society that higher authority necessarily comes with higher pay; that isn't a rational assessment of marginal productivity, it's a recapitulation of our primate instincts for a mating hierarchy. He's the alpha male, of course he gets all the bananas. The president of a university should make next to nothing compared to the top scientists at that university, because the president is a useless figurehead and scientists are the foundation of universities—and human knowledge in general. Scientists are actually the one example I can think of where one individual truly can be one million times as productive as another—though even then I don't think that justifies paying them one million times as much. Most corporations should be structured so that managers make moderate incomes and the highest incomes go to engineers and designers, the people who have the highest skills and do the most important work. A car company without managers seems like an interesting experiment in employee ownership. A car company without engineers seems like an oxymoron. Finally, people who work in finance should make very low incomes, because they don't actually do very much. Bank tellers are probably paid about what they should be; stock traders and hedge fund managers should be paid like bank tellers. (Or rather, there shouldn't be stock traders and hedge funds as we know them; this is all pure waste. A really efficient financial system would be extremely simple, because finance actually is very simple—people who have money loan it to people who need it, and in return receive more money later. Everything else is just elaborations on that, and most of these elaborations are really designed to obscure, confuse, and manipulate.) Oddly enough, the place where we do this best is the nation as a whole; the President of the United States would be astonishingly low-paid if we thought of him as a CEO. Only about $450,000 including expense accounts, for a "corporation" with revenue of nearly $3 trillion? (Suppose instead we gave the President 1% of tax revenue; that would be $30 billion per year. Think about how absurdly wealthy our leaders would be if we gave them stock options, and be glad that we don't do that.) But placing a hard cap at 2 or even 6 strikes me as unreasonable. Even during the 1950s the ratio was about 20 to 1, and it's been rising ever since. I like Robert Reich's proposal of a sliding scale of corporate taxes; I also wouldn't mind a hard cap at a higher figure, like 50 or 100. Currently the average CEO makes about 350 times as much as the average employee, so even a cap of 100 would substantially reduce inequality. A pay ratio cap could actually be a better alternative to a minimum wage, because it can adapt to market conditions.
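A minimal sketch of that adaptive floor (my own hypothetical functions and numbers, not anything in the actual rule):

```python
# A pay ratio cap as an adaptive minimum wage. Under a cap, the CEO's
# pay and the median employee's pay are chained together, so each side
# constrains the other. The cap value and salaries are hypothetical.

def max_ceo_pay(median_salary: float, cap: float = 100.0) -> float:
    return cap * median_salary

def min_median_salary(ceo_pay: float, cap: float = 100.0) -> float:
    # The same constraint seen from the employees' side.
    return ceo_pay / cap

print(max_ceo_pay(50_000))            # 5000000.0: ceiling at a 100:1 cap
print(min_median_salary(10_000_000))  # 100000.0: floor implied by a $10M CEO
```

The constraint binds in both directions, which is exactly the point.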
If the economy is really so bad that you must cut the pay of most of your workers, well, you'd better cut your own pay as well. If things are going well and you can afford to raise your own pay, your workers should get a share too. We never need to set some arbitrary amount as the minimum you are allowed to pay someone—but if you want to pay your employees that little, you won't be paid very much yourself. The biggest reason to support the Warren Rule, however, is awareness. Most people simply have no idea of how much CEOs are actually paid. When asked to estimate the ratio between CEO and employee pay, most people around the world underestimate by a full order of magnitude. Here are some graphs from a sampling of First World countries. I used data from this paper in Perspectives on Psychological Science—the fact that it's published in a psychology journal tells you a lot about the academic turf wars involved in cognitive economics. The first shows the absolute amount of average worker pay (not adjusted for purchasing power) in each country. Notice how the US is actually near the bottom, despite having one of the strongest overall economies and not particularly high purchasing power: The second shows the absolute amount of average CEO pay in each country; I probably don't even need to mention how the US is completely out of proportion with every other country. And finally, the ratio of the two. One of these things is not like the other ones… So obviously the ratio in the US is far too high. But notice how even in Poland, the ratio is still 28 to 1. In order to drop to the 6 to 1 ratio that most people seem to think would be ideal, we would need to dramatically reform even the most equal nations in the world. Denmark and Norway should particularly think about whether they really believe that 2 to 1 is the proper ratio, since they are currently some of the most equal (not to mention happiest) nations in the world, but their current ratios are still 48 and 58 respectively. You can sustain a ratio that high and still have universal prosperity; every adult citizen in Norway is a millionaire in local currency. (Adjusting for purchasing power, it's not quite as impressive; instead the guaranteed wealth of a Norwegian citizen is "only" about $100,000.) Most of the world's population simply has no grasp of how extreme economic inequality has become. Putting the numbers right there in people's faces should help with this, though if the figures only need to be reported to investors that probably won't make much difference. But hey, it's a start. The Cognitive Science of Morality Part II: Molly Crockett 2015-04-26 2015-05-01 pnrj cognitive science A Clockwork Orange, altruism, Basic Fact of Cognitive Science, brain, Capybara Day, cognitive science, Colbert Report, Dennett, Dodd-Frank, fairness, harm aversion, Huffington Post, Joshua Greene, Milgram, Molly Crockett, morality, mortality, neuroeconomics, neuroscience, New York Times, Patricia Churchland, Prozac, Rawls, risk, risk aversion, serotonin, solidarity, soul This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long. In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it's now time to talk about my second-favorite speaker, Molly Crockett.
(Is it just me, or does the name "Molly" somehow seem incongruous with a person of such prestige?) Molly Crockett is a neuroeconomist, though you'd never hear her say that. She doesn't think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word "economist" she thinks of only mainstream neoclassical economists, and she doesn't want to be associated with such things. Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It's one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is. I could almost forgive the sensationalism if they were talking about something that's actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only ("only") 600 pages, and it literally contained a three-page section explaining how to define the word "bank". (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word "bank". Hopefully not?) It doesn't get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it. But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren't true instead of saying the actually true things that are incredibly exciting. Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet. In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn't necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.) 
The experiment is actually what Crockett calls "the honest Milgram Experiment"; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage of course). People are given competing offers that contain an amount of money and a number of shocks to be delivered, either to you or to the other subject. They decide how much it's worth to them to bear the shocks—or to make someone else bear them. It's a classic willingness-to-pay paradigm, applied to the Milgram Experiment. What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves. Normally I'd say that this makes no sense at all—why would you value some random stranger more than yourself? Equally, perhaps; and obviously only a psychopath would value them not at all; but more? And there's no way you can actually live this way in your daily life; you'd give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we'd be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don't want to take that chance. I think there's some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don't like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you do believe that the money is good enough to be worth the shocks. It's actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you're allowed to create according to the Difference Principle. There's also the fact that the subjects can't communicate; I think if I could make a deal to share the money afterward, I'd feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensational headline would of course be: "Talking makes people hurt each other.") But all of these ideas are things that could be tested in future experiments! And maybe I'll do those experiments someday, or Crockett, or one of her students.
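Here is one way to formalize that, as a toy model of my own (not Crockett's published analysis): weight the other person's pain by a solidarity coefficient kappa, and call the behavior hyper-altruistic when the data force kappa above 1.

```python
# Toy willingness-to-pay model for the shock experiments.
# An offer is (money, shocks) aimed at yourself or at a stranger.
# kappa weights the stranger's pain relative to your own:
#   kappa = 0 -> indifference, kappa = 1 -> equal weight,
#   kappa > 1 -> the "hyper-altruism" pattern described above.
# The per-shock pain cost and kappa are invented numbers.

PAIN_COST = 2.0   # subjective dollar cost of one shock to yourself

def accepts(money: float, shocks: int, target: str, kappa: float = 1.5) -> bool:
    weight = 1.0 if target == "self" else kappa
    return money > weight * PAIN_COST * shocks

print(accepts(25, 10, "self"))    # True:  $25 outweighs 10 shocks to me
print(accepts(25, 10, "other"))   # False: the same $25 does not justify
                                  # shocking a stranger when kappa = 1.5
```

Moral risk-aversion and fairness-aversion would then just be two different stories about why the fitted kappa comes out above 1.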
And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it's both wondrous and terrifying—but can you deny that it is exciting? And that's not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating. As Patricia Churchland said on the Colbert Report: Colbert asked, "Are you saying I have no soul?" and she answered, "Yes." I actually prefer Daniel Dennett's formulation: "Yes, we have a soul, but it's made of lots of tiny robots." We don't have a magical, supernatural soul (whatever that means); we don't have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven's Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don't have any consolation to offer you on that point; I can't promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying. But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain. Now why can't the science journalists write about that? Instead we get "The Simple Trick That Can Boost Your Confidence Immediately" and "When it Comes to Picking Art, Men & Women Just Don't See Eye to Eye." HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is. The terrible, horrible, no-good very-bad budget bill 2014-12-13 2015-04-04 pnrj public policy banking, budget, Congress, credit default swap, derivatives, Dodd-Frank, margin requirement, monetary policy, money supply, regulation, Senate, terrorism JDN 2457005 PST 11:52. 
I would have preferred to write about something a bit cheerier (like the fact that by the time I write my next post I expect to be finished with my master's degree!), but this is obviously the big news in economic policy today. The new House budget bill was unveiled Tuesday, and then passed in the House on Thursday by a narrow vote. It has stalled in the Senate thanks in part to fierce—and entirely justified—opposition by Elizabeth Warren, and so today it has been delayed. Obama has actually urged his fellow Democrats to pass it, in order to avoid another government shutdown. Here's why Warren is right and Obama is wrong. You know the saying "You can't negotiate with terrorists!"? Well, in practice that's not actually true—we negotiate with terrorists all the time; the FBI has special hostage negotiators for this purpose, because sometimes it really is the best option. But the saying has an underlying kernel of truth, which is that once someone is willing to hold hostages and commit murder, they have crossed a line, a Rubicon from which it is impossible to return; negotiations with them can never again be good-faith honest argumentation, but must always be a strategic action to minimize collateral damage. Everyone knows that if you had the chance you'd just as soon put bullets through all their heads—because everyone knows they'd do the same to you. Well, right now, the Republicans are acting like terrorists. Emotionally a fair comparison would be with two-year-olds throwing tantrums, but two-year-olds do not control policy on which thousands of lives hang in the balance. This budget bill is designed—quite intentionally, I'm sure—to ensure that Democrats are left with only two options: Give up on every major policy issue and abandon all the principles they stand for, or fail to pass a budget and allow the government to shut down, canceling vital services and costing billions of dollars. They are holding the American people hostage. But here is why you must not give in: They're going to shoot the hostages anyway. This so-called "compromise" would not only add $479 million in spending on fighter jets that don't work and the Pentagon hasn't even asked for, not only cut $93 million from WIC, a 3.5% budget cut adjusted for inflation—literally denying food to starving mothers and children—not only dramatically increase the amount of money that can be given by individuals in campaign donations (because apparently the unlimited corporate money of Citizens United wasn't enough!), but would also remove two of the central provisions of Dodd-Frank financial regulation that are the only thing that stands between us and a full reprise of the Great Recession. And even if the Democrats in the Senate cave to the demands just as the spineless cowards in the House already did, there is nothing to stop Republicans from using the same scorched-earth tactics next year. I wouldn't literally say we should put bullets through their heads, but we definitely need to get these Republicans out of office immediately at the next election—and that means that all the left-wing people who insist they don't vote "on principle" need to grow some spines of their own and vote. Vote Green if you want—the benefits of having a substantial Green coalition in Congress would be enormous, because the Greens favor three really good things in particular: Stricter regulation of carbon emissions, nationalization of the financial system, and a basic income. Or vote for some other obscure party that you like even better.
But for the love of all that is good in the world, vote. The two most obscure—and yet most important—measures in the bill are the elimination of the swaps pushout rule and the margin requirements on derivatives. Compared to these, the cuts in WIC are small potatoes (literally, they include a stupid provision about potatoes). They also really aren't that complicated, once you boil them down to their core principles. This is however something Wall Street desperately wants you to never, ever do, for otherwise their global crime syndicate will be exposed. The swaps pushout rule says quite simply that if you're going to place bets on the failure of other companies—these are called credit default swaps, but they are really quite literally a bet that a given company will go bankrupt—you can't do so with deposits that are insured by the FDIC. This is the absolute bare minimum regulatory standard that any reasonable economist (or for that matter sane human being!) would demand. Honestly I think credit default swaps should be banned outright. If you want insurance, you should have to buy insurance—and yes, deal with the regulations involved in buying insurance, because those regulations are there for a reason. There's a reason you can't buy fire insurance on other people's houses, and that exact same reason applies a thousandfold for why you shouldn't be able to buy credit default swaps on other people's companies. Most people are not psychopaths who would burn down their neighbor's house for the insurance money—but even when their executives aren't psychopaths (as many are), most companies are specifically structured so as to behave as if they were psychopaths, as if no interests in the world mattered but their own profit. But the swaps pushout rule does not by any means ban credit default swaps. Honestly, it doesn't even really regulate them in any real sense. All it does is require that these bets have to be made with the banks' own money and not with everyone else's. You see, bank deposits—the regular kind, "commercial banking", where you have your checking and savings accounts—are secured by government funds in the event a bank should fail. This makes sense, at least insofar as it makes sense to have private banks in the first place (if we're going to insure with government funds, why not just use government funds?). But if you allow banks to place whatever bets they feel like using that money, they have basically no downside; heads they win, tails we lose. That's why the swaps pushout rule is absolutely indispensable; without it, you are allowing banks to gamble with other people's money. What about margin requirements? This one is even worse. Margin requirements are literally the only thing that keeps banks from printing unlimited money. If there was one single cause of the Great Recession, it was the fact that there were no margin requirements on over-the-counter derivatives. Because there were no margin requirements, there was no limit to how much money banks could print, and so print they did; the result was a still mind-blowing quadrillion dollars in nominal value of outstanding derivatives. Not million, not billion, not even trillion; quadrillion. $1e15. $1,000,000,000,000,000. That's how much money they printed. The total world money supply is about $70 trillion, which is 1/14 of that. (If you read that blog post, he makes a rather telling statement: "They demonstrate quite clearly that those who have been lending the money that we owe can't possibly have had the money they lent." 
No, of course they didn't! They created it by lending it. That is what our system allows them to do.) And yes, at its core, it was printing money. A lot of economists will tell you otherwise, about how that's not really what's happening, because it's only "nominal" value, and nobody ever expects to cash them in—yeah, but what if they do? (These are largely the same people who will tell you that quantitative easing isn't printing money, because, uh… er… squirrel!) A tiny fraction of these derivatives were cashed in in 2007, and I think you know what happened next. They printed this money and now they are holding onto it; but woe betide us all if they ever decide to spend it. Honestly we should invalidate all of these derivatives and force them to start over with strict margin requirements, but short of that we must at least, again at the bare minimum, have margin requirements. Why are margin requirements so important? There's actually a very simple equation that explains it. If the margin requirement is $m$, meaning that you must retain a portion $m$ between 0 and 1 of the loans you make as reserves, the total amount of money supply that can be created from the current amount of money $M$ is just $M/m$. So if margin requirements were 100%—full-reserve banking—then the total money supply is $M$, and therefore in full control of the central bank. This is how it should be, in my opinion. But usually $m$ is set around 10%, so the total money supply is $10M$, meaning that 90% of the money in the system was created by banks. But if you ever let that margin requirement go to zero, you end up dividing by zero—and the total amount of money that can be created is infinite. To see how this works, suppose we start with $1000 and put it in bank A. Bank A then creates a loan; how big they can make the loan depends on the margin requirement. Let's say it's 10%. They can make a loan of $900, because they must keep $100 (10% of $1000) in reserve. So they do that, and then it gets placed in bank B. Then bank B can make a loan of $810, keeping $90. The $810 gets deposited in bank C, which can make a loan of $729, and so on. The total amount of money in the system is the sum of all these: $1000 in bank A (remember, that deposit doesn't disappear when it's loaned out!), plus the $900 in bank B, plus $810 in bank C, plus $729 in bank D. After 4 steps we are at $3,439. As we go through more and more steps, the money supply gets larger at an exponentially decaying rate and we converge toward the maximum at $10,000. The original amount is $M$, and then we add $M(1-m)$, $M(1-m)^2$, $M(1-m)^3$, and so on. That produces the following sum up to $n$ terms: $$\sum_{k=0}^{n} M (1-m)^k = M \frac{1 - (1-m)^{n+1}}{m}$$ And then as you let the number of terms grow arbitrarily large, it converges toward a limit at infinity: $$\sum_{k=0}^{\infty} M (1-m)^k = \frac{M}{m}$$ To be fair, we never actually go through infinitely many steps, so even with a margin requirement of zero we don't literally end up with infinite money. Instead, we just end up with $nM$, the number of steps times the initial money supply. Start with $1000 and go through 4 steps: $4000. Go through 10 steps: $10,000. Go through 100 steps: $100,000. It just keeps getting bigger and bigger, until that money has nowhere to go and the whole house of cards falls down.
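You can check all of these numbers with a few lines of code (a quick sketch of the same iteration, nothing more):

```python
# The deposit-and-loan chain from the example above.
# M is the initial deposit; m is the reserve ("margin") requirement.

def money_supply(M=1000.0, m=0.10, steps=4):
    total, deposit = 0.0, M
    for _ in range(steps):
        total += deposit        # the deposit stays on the books...
        deposit *= (1.0 - m)    # ...while (1 - m) of it is loaned onward
    return total

print(money_supply(steps=4))          # 3439.0   -> the $3,439 above
print(money_supply(steps=1000))       # ~10000.0 -> converging to M/m
print(money_supply(m=0.0, steps=4))   # 4000.0   -> with m = 0, just n*M
```

With $m = 0$ the series never converges; every pass through the loop adds the full $1000 again, which is the dividing-by-zero problem in miniature.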
Honestly, I'm not even sure why Wall Street banks would want to get rid of margin requirements. It's basically putting your entire economy on the counterfeiting standard. Fiat money is often accused of this, but the government has both (a) the legitimate authority empowered by the electorate and (b) incentives to maintain macroeconomic stability, neither of which private banks have. There is no reason other than altruism (and we all know how much altruism Citibank and HSBC have—it is approximately equal to the margin requirement they are trying to get passed—and yes, they wrote the bill) that would prevent them from simply printing as much money as they possibly can, thus maximizing their profits; and they can even excuse the behavior by saying that everyone else is doing it, so it's not like they could prevent the collapse all by themselves. But by lobbying for a regulation to specifically allow this, they no longer have that excuse; no, everyone won't be doing it, not unless you pass this law to let them. Despite the global economic collapse that was just caused by this sort of behavior only seven years ago, they now want to return to doing it. At this point I'm beginning to wonder if calling them an international crime syndicate is actually unfair to international crime syndicates. These guys are so totally evil it actually goes beyond the bounds of rational behavior; they're turning into cartoon supervillains. I would honestly not be that surprised if there were a video of one of these CEOs caught on camera cackling maniacally, "Muahahahaha! The world shall burn!" (Then again, I was pleasantly surprised to see the CEO of Goldman Sachs talking about the harms of income inequality, though it's not clear he appreciated his own contribution to that inequality.) And that is why Democrats must not give in. The Senate should vote it down. Failing that, Obama should veto. I wish he still had the line-item veto so he could just remove the egregious riders without allowing a government shutdown, but no, the Senate blocked it. And honestly their reasoning makes sense; there is supposed to be a balance of power between Congress and the President. I just wish we had a Congress that would use its power responsibly, instead of holding the American people hostage to the villainous whims of Wall Street banks.
What evidence would there be if radioactive decay changed 7,000 years ago? I know we generally operate - or religiously operate - on the principle that fundamental things don't change over time. It's the bedrock of geology - Uniformitarianism. I also believe it is unprovable (which may be bad for my question). However, assume $^{14}$C was a more unstable isotope 7,000 years ago and decayed with $\lambda$ = 100a (I am using $\lambda$ loosely here for the half-life). There are other changes in my world as well - but for this question I want to know what our geologic table would look like if the $^{14}$C radioisotope became more stable only 7,000 years ago. In a nutshell I'm trying to hide the true age of the biosphere by making organisms look older, however someone has learned how to detect this. Oh, one more detail - the change was not precipitous. Over 500 years or so the $\lambda$ increased from 100a $\longrightarrow$ 5730a. Running through a half-life calculator, a 7,500 year old sample will $^{14}$C date to 38,900 years old if this change occurred. From comments it appears the overlapping rings from BC 5k~9k will look anomalous. Other dating methods also exist as noted - DNA mutation rates, magnetic seabed ridges, other radioisotopes - if the evidence left behind by these comparisons could be included in answers it would help greatly. For example, noncontinuous tree ring calibration curves can be assumed. My only other thoughts are that maybe some of the $\beta ^-$ particles would be captured in nearby elements showing exotic compounds. E.g., maybe more copper or neon in zinc and sodium deposits. Would we see that or just assume it was normal? This change is "theorized" by only one special scientist, and is a prelude to something worse approaching. The greater scientific community is skeptical because his evidence is "not compelling." I am hoping answers include what evidence such a change would leave behind in the various other disciplines, while also hoping such evidence is "sloppy enough" to discredit my character's theory even though it is true. I need to use your evidence to patch the plot holes, or at least make them sloppy enough that everyone else could realistically miss this. geology radioactivity carbon-based – Vogon Poet $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – Monty Wild♦ Oct 28 '19 at 5:51 $\begingroup$ far away stars would look really weird. $\endgroup$ – John Oct 31 '19 at 21:17 Anthropologists and paleontologists are never happy with a single method of estimating the age of any item. As has been mentioned there are tree rings. This leads to a large area of study called Dendrochronology. It is not just counting tree rings. Consider finding a fragment of wood in a building site or some such. It has rings. You line up those rings with other fragments of wood, looking for patterns of width. Because trees from the region will have experienced the same weather, the growth rings will be of similar size. So you can then line up fragments from many different locations, both those worked by humans and those found in other locations. This means you can build a chronological record far longer than the life of the longest living tree in the area. As has been mentioned, there are other radioactive isotopes. There are several different isotopes that are used. Different isotopes have different half lives, and enter organisms in different ways. By comparing the results of different methods you get additional information.
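To see how large the disagreement between methods would be in your scenario, here is a rough back-of-envelope (my own numbers, crudely treating the 500-year transition as pure fast decay):

```python
import math

# Apparent C-14 age of a 7,500-year-old sample under the question's
# scenario: ~500 years at a 100 a half-life, then ~7,000 years at the
# modern 5730 a half-life. Treating the ramp as all-fast is a
# deliberate simplification.

T_FAST, T_MODERN = 100.0, 5730.0          # half-lives in years

remaining = 0.5 ** (500.0 / T_FAST) * 0.5 ** (7000.0 / T_MODERN)

# A lab that assumes the half-life was always 5730 a infers:
apparent_age = -T_MODERN * math.log2(remaining)
print(f"apparent C-14 age: {apparent_age:,.0f} years")   # prints ~35,650
```

A gentler ramp lands near the 38,900 years quoted in the question, while any clock whose rate never changed still reads about 7,500 years. A mismatch of that size, against every independent method at once, is exactly what the cross-checks below would flag.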
There are several other methods. For example, the temperature in the area has an effect on the isotopic ratios of a variety of chemicals. Just as an example, Oxygen is mostly O-16, but O-17 and O-18 appear in trace amounts. The exact amount is affected by the temperature. People build historical records of the isotopic ratios. Then they get such things as organic remains and check the isotope ratios. This can give them some information as to when the organism was alive. It's more difficult than C-14 in some ways. The isotope ratios can be affected by a lot of things. And it's not a straightforward monotonically-decreasing-with-time thing as C-14 decay is. Other methods are also important over the historical period and the just-before-written-history period. For example, digging through the waste dumps of a community can give you a lot of information. Along with finding fragments of wood you find all kinds of other stuff. This kind of waste only appears on top of that kind of waste, for example. That tells you that the activities that produced it came later. Perhaps the people started using a particular kind of pottery or leather or some such. Or they started getting trade goods that included ocean products that had not reached that far inland. By mapping such things carefully you can get sequences. Then you line that up with other methods, like isotopic analysis or tree rings, that can give you a year estimate. In some cases, we can see drastic events. For example, if a city is invaded the culture is likely to change radically. That will produce a horizon in the waste dumps. Roman garbage above this layer only, Germanic style waste only below. That tells you that the Romans invaded at that layer. And you can then try to estimate the date of that invasion, and so put bounds on dates for other material in the waste pile. There are other methods of getting ideas about chronology that are now becoming important. We are mapping genomes of many people. That can give ideas about when various people arrived at various locations. Which can in turn give estimates of the age of various other things that are produced by those people. So far this is all based on the last few thousand years, say back about 30K or 40K. Longer term similar methods apply, just with slightly different items and emphasis and time scale. For example, part of the story on why the fossil Lucy was estimated to be the age it was, was isotopes. The rocks above and below the fossil were dated using, if I recall, the Argon-Argon method. But another method was to search out fossils of a variety of organisms in those rocks and fit them into the known history of evolution of those animals. You find this kind of tooth here and identify that as a wild boar from this age, that dates that layer of rock. This tooth over here is from an ibex from this era, and that dates this other rock. This fossil leaf in this other layer gets you this layer. This gets you fairly good information out to a few million years. A lot of work, but what are graduate students for? For much longer periods we have a very interesting thing. There are natural nuclear reactors in Gabon. They are roughly 2 billion years old. By carefully examining these formations it is concluded that, 2 billion years ago, the various physical constants involved in nuclear activity were pretty much identical to their values today. 
Even very small changes in any of the nuclear parameters would have resulted in these reactors either not functioning at all, or in them completely exploding. So over age ranges of a few thousand, a few million, and 2 billion years, all the evidence we have is cross-referenced and compared. And it all seems to be consistent. puppetsock $\begingroup$ Natural reactors are a very important point. There are a couple of people (namely John Webb) who think they've found astronomical evidence of spatial variation in the fine structure constant from telescope data, implying that it's changed throughout the life of the universe. Oklo is one of the things that makes this doubtful. $\endgroup$ – llama Oct 25 '19 at 20:39 $\begingroup$ It's the Dirac large number hypothesis. The Oklo reactors are roughly 2 billion years old. The cosmological ideas extend to something like 14 billion years. The crucial thing is the fraction of the age of the universe, specifically, the fraction between the big bang and the event you are looking at. The variation in things like the fine structure constant is thought to have occurred possibly a few million years after the big bang, and so from our era, a minute fraction of the age of the universe. So, interestingly, the Oklo reactors don't preclude something going on that early. $\endgroup$ – puppetsock Oct 28 '19 at 13:59 Tree ring sequences extend back further than 7000 years. A tree ring sequence for an area is derived from a series of pieces of wood whose growth periods overlapped. C14 dating has to be calibrated to account for changes in isotope ratios in the atmosphere. That can be done, for the time range in question, by measuring the carbon isotope ratio in a piece of wood whose age has been determined from tree rings. For purposes of getting correct dates, it does not matter whether the change in radioactive decay is noticed or not. A sample with the same ratio as a piece of a tree that was felled 7,500 years ago would be dated to 7,500 years ago. Patricia Shanahan $\begingroup$ Running through a half-life calculator a 7,500 year old sample will $^{14}$C date to 38,900 years old if this change occurred. The overlapping rings from BC 5k~9k will look anomalous, and that is a problem for this story. I think this trick can't be done with a precipitous $\lambda$ change. It will have to be more gradual - which makes a new problem... $\endgroup$ – Vogon Poet Oct 25 '19 at 13:29 $\begingroup$ @Patricia Shanahan Actually the oldest known living tree, "Methuselah", is calculated to be "only" 4,851 years old. en.wikipedia.org/wiki/List_of_oldest_trees If you know of older living trees that are 7,000 years old please inform the scientific community of the details. $\endgroup$ – M. A. Golding Oct 25 '19 at 17:33 $\begingroup$ @PatriciaShanahan In response to your comment on my post, here is evidence that I found rather surprising: "As of 2013, the oldest tree-ring measurements in the Northern Hemisphere are a floating sequence extending from about 12,580 to 13,900 years" $\endgroup$ – Punintended Oct 25 '19 at 19:30 $\begingroup$ @M.A.Golding: The tree doesn't necessarily have to be alive anymore, as long as you can accurately estimate when it died. $\endgroup$ – Mooing Duck Oct 26 '19 at 0:04 $\begingroup$ @M.A.Golding: Tree ring sequences do not have to come from the same tree to be connected. At present the longest continuous tree ring sequences go back over 12,000 years in Europe, and about 8,500 years in North America.
The problem arises when archaeologists find a piece of wood 10,000 years old according to the tree rings and carbon dating strongly disagrees. $\endgroup$ – AlexP Oct 28 '19 at 20:53 There are dozens of different radiometric dating methods available, using a range of different isotopes. If the half-life of C-14 changed 7,000 years ago, carbon would give dating results at variance with those of all of the other methods. This would be a puzzle but would not change much in practice. There are also many other non-radiometric methods for dating which can also be used which would corroborate the non C-14 based radiometric dating such as ice core measurements, magnetic measurements across oceanic ridges and mutation change rates in DNA to name but three. Slarty $\begingroup$ Can you suggest what trails would be found in methods like DNA mutation rates or ice core samples? Can I assume ice cores would be noncontinuous like tree rings? $\endgroup$ – Vogon Poet Oct 25 '19 at 16:14 $\begingroup$ No, you could not. Dome C cores from Antarctica give a climate record from the present to 800,000 years ago. The West Antarctic Ice Sheet cores, due to high annual accumulation, provide high-resolution data from the present to 62,000 years ago. NorthGRIP in Greenland has single cores from the present to 123,000 years ago, and North Greenland Eemian Project to 128,500 years ago. $\endgroup$ – Keith Morrison Oct 25 '19 at 22:31 C-14 dates are already problematic because atmospheric C-14 levels have varied over time. While raw C-14 dates always come out in the right order they can be substantially off. In practice we calibrate C-14 dates against tree rings and ice cores, both of which give dates to the exact year for as far back as the data exists. Nobody would realize C-14 decay rates had changed, they would just assume a change in atmospheric C-14 and look for an astrophysical explanation. A modern paleontologist would not be fooled for a second. Loren Pechtel $\begingroup$ I assumed this but now I have conflicting answers. When you say "not be fooled" you mean a paleontologist would still accurately determine that a sample which $^{14}$C dates to 38,000 years was actually only 7,500 years old? $\endgroup$ – Vogon Poet Oct 25 '19 at 16:11 $\begingroup$ @VogonPoet What conflicting answers? Everything I see says mostly the same thing--that the correct date would be found anyway by other means. The only thing different about my answer is that I said they would figure the C-14 levels had been different and start looking for why they were different. $\endgroup$ – Loren Pechtel Oct 25 '19 at 23:15 $\begingroup$ You're saying no one would notice (which would be great) but most others say everything else would point out the change in decay rate. That's the way I read your post at least $\endgroup$ – Vogon Poet Oct 25 '19 at 23:17 $\begingroup$ @VogonPoet I'm saying they would go for the simpler answer--C-14 levels were lower in the past. That's much more believable than somehow C-14 decay rates changed in the past. $\endgroup$ – Loren Pechtel Oct 25 '19 at 23:23 Based on your comment as to your goal, no, changing the decay rate of C14 (which would itself require a change in the fundamental nature of the universe) would not by itself have provided a false date range for the Pleistocene, because carbon dating is far from the only method used for dating. Dendrochronology has been mentioned, as have other radioisotope dating methods.
There are ice cores which have been calibrated to tree ring and other dating methods, which line up with data from varves from lake deposits, which line up with ashfall from major volcanic eruptions, which line up with archeological data... Carbon-14 ain't going to cut it as an attempted one-size-fits-all explanation. Keith Morrison $\begingroup$ Sounds like a lot of plot holes to fill. "By far the most common numerical ages applied to varve sequences are from radiocarbon $^{14}$C dating" - It appears that no carbon dating at all would occur in glacial varves if all $^{14}$C decayed before 10k years ago, unless the glacial varves are also younger than they seem, and their years do not layer in carbon years. Difficult... $\endgroup$ – Vogon Poet Oct 25 '19 at 16:26 $\begingroup$ More generally, there is a lot of cross-checking between different dating methods based on different lines of evidence. $\endgroup$ – Patricia Shanahan Oct 25 '19 at 16:58 I'm no geologist, but it seems this would be pretty noticeable with technology similar to or more advanced than ours. Looking at this page, you could corroborate C-14 dating (t-1/2 = 5730 years) with Uranium-Thorium (t-1/2 = 80,000 years) and/or Uranium-Protactinium (t-1/2 = 32,760 years) - if not precisely, at least the rough date. The soft limit of C-14 dating is described as ~70,000 years, which overlaps reasonably well with U-Th and U-Pa. It might take a while for folks to discover this. Apparently U-Pa and U-Th are mainly used in seabed dating, as Pa and Th precipitate out of seawater. If you had a lake that was large and stable enough, you might be able to get these data by dredging it, which is quite a bit more accessible than much of the undisturbed seafloor. Punintended $\begingroup$ OK did I say this correctly? - the C14 today is still 5730 years, we did not see a change. I am arguing that 7,000 years ago, C14 decayed with 100y t-1/2. So any evidence of change would have to be from over 7,000 years ago. And that page is actually the result of that corroboration you talk about. What this change does is make fossils with no carbon look as if they are over 70ka old, but I'm saying they lived only 10,000 years ago, and C14 has changed. Maybe I don't understand exactly what you mean by corroborate. $\endgroup$ – Vogon Poet Oct 24 '19 at 22:59 $\begingroup$ Ahh I see, I'd misunderstood your "100y t-1/2". I think you could still notice this discrepancy, depending on what samples you're able to gather. Something that is 8,000 years old would look 60,000 by C-14 but 8,000 by U-Pa. U-Th might be tricky with this recent of a sample, but you could verify your general U-Pa assay fidelity with something ~30,000 years old. Regardless, compare an 8,000-year-old sample to a 5,000-year-old one: the C-14 difference will be massive, the U-Pa won't $\endgroup$ – Punintended Oct 24 '19 at 23:01 $\begingroup$ Why organisms? Just look for uranium in carbonaceous rocks, such as certain types of shale. Seafloor sediment or shale at the very top of its strata may be young enough? Again, not a geologist :) $\endgroup$ – Punintended Oct 24 '19 at 23:48 $\begingroup$ Isn't this much too complicated? Why not count tree rings to check C14 dating? $\endgroup$ – Patricia Shanahan Oct 25 '19 at 1:22 $\begingroup$ I'm sort of wondering why you need to go to this whole effort when relatively recent human civilization did overlap with megafauna: the last mammoths on Earth died out about 500 years after the Great Pyramid at Giza was built.
It would be a lot easier to postulate a refugium in Siberia than try to change the fundamental nature of the freaking universe. $\endgroup$ – Keith Morrison Oct 30 '19 at 17:22 There are too many checks and balances behind dating techniques for them to go unnoticed unless ALL of your measurements are messed up. Instead of focusing on carbon dating, let's look at creating the kind of plot point you are looking for based on the outcome. As you put it: "the ramifications are that it's going to happen again and basic physics we are used to will all be changing. This event is a foreshadowing, but I'm trying not to fly in the face of well-established science too much. I can make the change less dramatic or less abrupt, but one person still has to 'figure it out'. Ideally the change can hide in the slop of our recent history." I think the answer you are looking for is that time itself speeds up or slows down. This is much harder to trace than a single case of radioactive decay. It affects everything on Earth the same: radioactive decay, tree rings, ice cores, etc. From our perspective, this 7000 years is the same as the last 7000 years even if one technically took a lot longer to happen than the other according to outside observation. To "see" a change in the speed of time, one must be able to observe a reference point that is moving at a different speed of time. Since all of Earth (and maybe our solar system) is affected, the only place to see this change is in space. We take certain things for granted in astrophysics that actually don't add up, like how the period of rotation on the outside of a galaxy can be the same as the inside whereas inner planets need to move faster for a solar system to work, or how we need to introduce invisible matter & energy into our calculations to make anything add up. Perhaps your physicist can mathematically prove that these things aren't lining up because the local coefficient of time itself is distorted... or better yet, maybe there really IS invisible matter and energy, and he's just the guy to prove it once and for all. What you need is Dark Matter! Ughhh... I know, I know, dark-matter is the end-all handwavium of science fiction. It's often used to explain the unexplainable in outrageous and downright stupid ways, but in this case, it would actually explain your phenomenon according to accepted sciences. Dark matter is a source of mass and gravity that we simply have no instruments for measuring. In the presence of gravity, our perception of time slows down; so, if our solar system were to periodically pass through dark matter nebulas, then our experience of time would speed up and slow down accordingly. As long as our local time distortion stays constant, we would perceive space exactly the same no matter where we look, but if something happened 7000 years ago, then things at 7000 light years would be lensed. Because we cannot see it "from the side" we can't tell that it is distorted, but maybe at 7000 light years there is some kind of slight banding in the red-shift, or maybe the apparent density of stars slightly changes, or maybe the change was too gradual to notice; so we just assume the galaxy is stretched out a little bit differently than it really is. (Figure: space as seen from the side.) The shift is so small that instruments are barely sensitive enough to detect it. Every other metric of chronological dating tells your scientist nothing happened 7000 years ago. Other astronomers write off his findings as instrument error or a statistical anomaly.
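For scale, the standard weak-field formula gives a feel for how small such a distortion could be (completely invented cloud parameters, just to show the arithmetic):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

def clock_rate(mass_kg: float, radius_m: float) -> float:
    """Local clock rate relative to a distant observer for a clock
    sitting at radius_m outside mass_kg (Schwarzschild weak field)."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * C * C))

# Hypothetical dark-matter cloud: 0.01 solar masses spread over ~100 AU.
M_SUN = 1.989e30
rate = clock_rate(0.01 * M_SUN, 1.5e13)
print(f"clock rate factor: {rate:.15f}")
print(f"seconds lost per year: {(1.0 - rate) * 3.156e7:.2e}")  # ~3e-5 s
```

A cloud like that slows local clocks by parts in a trillion, which is exactly why nobody on the inside would ever notice without an outside reference.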
Even your scientist himself might dismiss it at first... until he starts looking into the Planet Nine controversy and realizes that a new dark-matter wave is already affecting the outer reaches of our solar system. The best part here is that there are already competing and non-verifiable pieces of evidence that something massive is at the edge of our solar system; so, his theory could very believably enter into this stack, be scientifically sound(ish), and also be easily dismissed by other scientists in favor of "more believable" opinions, which would work great with your story. Nosajimiki - Reinstate Monica $\begingroup$ OK I admit this creates the plot point exactly as intended - thinking Star Trek Voyager Blink of An Eye type time differential. But I do so hate time paradoxes! Oddly my universe does agree with this possibility if you've seen my other questions. I've used "time breaking" on a small scale, this is now the solar system. Requires much deep thought... $\endgroup$ – Vogon Poet Oct 25 '19 at 19:58 $\begingroup$ @VogonPoet You can speed up or slow down time all you want without creating a paradox as long as you don't try going backwards. This is really no different than the gravitational lensing that happens near major sources of gravity. $\endgroup$ – Nosajimiki - Reinstate Monica Oct 28 '19 at 14:09 $\begingroup$ @VogonPoet A possible explanation of these time shifts occurred to me this morning that I think would really fit the bill of what you are looking for. Updated answer accordingly. $\endgroup$ – Nosajimiki - Reinstate Monica Oct 30 '19 at 14:53 $\begingroup$ I really appreciate the work here and I just finished Split Second by Douglas Richards which uses dark matter and the "quintessence" theory to distort time, much like you did. This does help me with a different question however my question here does require a noticeable change in physics. The fact that "we would perceive space exactly the same no matter where we look" sort of makes my entire crisis vanish. Thanks again tho $\endgroup$ – Vogon Poet Oct 30 '19 at 15:17 It sounds like you're conflating two questions: (1) "What would the geologic record look like if 14C decay changed 7000 years ago?", and (2) "What evidence might there be that the rate had changed?" I'll ignore the first question and focus on the second. The only way to detect this sort of change is to see it happening now, which means precision measurements over a sufficiently long period of time (or otherwise accounting for some other indirect effect). These precision measurements aren't easy, and they always have some degree of uncertainty associated with them. An example is the question of whether radioactive decay rates are affected by neutrino flux, see https://www.sciencedirect.com/science/article/pii/S0969804317303822. This particular article presents evidence that they aren't, but if they were, then this could be an example of an indirect effect. In other words, if you could make up some reason why neutrino flux was higher 7000 years ago, then this could be a reason for a shift in decay rates that would be very hard to measure today. And in any case, the fact that these measurements were made in 2018 tells you that there is room for small, previously unobserved effects that could be consistent with a long-term slowly-varying time-dependence. By the way, you are right in that we operate on an assumption that the laws of physics do not change as a function of time.
The laws may have some time-dependence built into them, but the laws themselves do not change. This assumption does have some supporting evidence, based on observations of galaxies millions of light-years away (which means the events we're observing occurred millions of years ago), but it's still fundamentally an assumption. However, it's one that you do not want to violate. If you did, it would be no different from saying "it's that way because God made it that way". If you go that route then you may as well throw up your hands and give up on science altogether, because science can no longer be used to predict anything. So to the degree that science works, that assumption holds. Richter65 $\begingroup$ OK I genuinely did not see those as different. Reading... $\endgroup$ – Vogon Poet Oct 25 '19 at 19:36 $\begingroup$ OK, good points. But you're asking me to fundamentally change radioactive decay into a causal event which can be influenced by some trigger, such as antineutrinos. That has vast ramifications and the whole universe changes if there were some actual method to control a half-life. Antineutrinos are a wonderful scapegoat because we would never see it happening, however I'm not ready to invalidate atomic clocks, which invalidates every test of general relativity we've ever made LOL! In universe, $\lambda$ remains a mystery - it just changed. $\endgroup$ – Vogon Poet Oct 25 '19 at 19:46 $\begingroup$ Radioactive decay is already a causal event. It's caused by weak nuclear interactions (which is one of the 4 fundamental forces: electromagnetic, strong nuclear, weak nuclear, and gravity). So what you're really asking is whether something can affect the weak force. Current theories say no, and all experimental evidence also says no. But one could potentially modify the theory to get a small dependence on something. The only way to test that is precision measurements of carbon half-life. And btw, don't confuse relativity (i.e., gravity) with weak interactions - they're very different. $\endgroup$ – Richter65 Oct 28 '19 at 15:07 $\begingroup$ What do you know that you're not telling us? The radioactive decay of any atom is completely random. It is also spontaneous by definition. If you're saying it is also causal, then whatever cause it has violates the general theory of relativity (because the cause for a spontaneous event must travel faster than the speed of light). How can an event simultaneously be completely random and causal and spontaneous?? Mind. Blown. $\endgroup$ – Vogon Poet Oct 28 '19 at 15:37 $\begingroup$ I think what you mean is that the weak force allows radioactive decay, rather than causes it. In other words, radioactive decay could not happen within the standard model of particle physics unless some new force existed. We gave this force the name weak force, but the force itself doesn't cause anything. It fills a hole in our math so whatever makes decay work doesn't violate the standard model. The decay itself is still a random event. $\endgroup$ – Vogon Poet Oct 28 '19 at 15:56 There seems to be a lot of confusion and misunderstandings in the comments, so I thought a brief description of radioactive decay may be worthwhile. My goal is to explain how radioactive decay is tied to the fundamental laws of physics. I'll start with a (very) brief description of the Standard Model of Particle Physics, which represents our best current understanding of matter and forces. In the Standard Model, there are 12 fundamental particles: six quarks and six leptons.
The quarks are what make up nuclear matter; in particular, the two lowest-energy (and therefore stable) quarks are called up ($u$) and down ($d$). Similarly, the two stable leptons are electrons ($e$) and electron neutrinos ($\nu_e$). Each particle has an associated anti-particle which has opposite charge and opposite "lepton number" (for leptons) or "baryon number" (for quarks), but everything else is the same. The anti-particle of an electron is called a positron, and the anti-particle of a neutrino is imaginatively called an anti-neutrino. An interesting property of anti-particles is that, mathematically, they are equivalent to regular particles going backwards in time. A proton is made up of $uud$ (each up quark has charge +2/3; each down quark has a charge -1/3). A neutron is made up of $udd$. Of the four fundamental forces in the universe (electromagnetic, strong nuclear, weak nuclear, and gravity) the Standard Model talks about the first three and ignores gravity. Gravity, by the way, is described by General Relativity, which is a non-quantum theory that is fundamentally different from any quantum field theories such as those that form the basis of the Standard Model. In the Standard Model, the three forces are mediated (or, in a sense, caused or described) by an exchange of a particle called a (vector) boson. Each of the forces has its own set of associated vector bosons: The electromagnetic force has one associated boson, called a photon. Photons only interact with electrically charged particles. The weak force has three bosons: $W^+$, $W^-$, and $Z$. These interact with all particles. The strong force has eight bosons, collectively called gluons. These only interact with quarks. So, for example, an electromagnetic interaction happens when a photon is exchanged between two charged particles. Likewise, a weak interaction happens when a W or Z is exchanged between two particles. However, weak interactions are a bit different in that, uniquely among all forces/interactions, they can change a particle from one type to another. So if a particle emits or absorbs a W boson, it will become a different type of particle. For radioactivity, the most important example is a process called beta decay (https://en.wikipedia.org/wiki/Beta_decay), in which a down quark emits a $W^-$ boson and becomes an up quark; the $W^-$ then decays to an electron and an anti-neutrino. Equivalently, you can say the $W^-$ mediates a weak interaction between a $u$/$d$ and a $\nu_e$/$e$. There's also a variant in which a proton becomes a neutron via $u \to d \,\nu_e \, e^+$ mediated by a $W^+$ exchange. (The former is called $\beta^-$, and the latter $\beta^+$). [The relevant Feynman diagram for $\beta^-$ decay: a $d$ quark emits a $W^-$, which decays to $e^-$ and $\bar{\nu}_e$.] In the most general terms, radioactive decay is governed by all three forces in the Standard Model: strong nuclear, weak nuclear, and electromagnetic interactions. To quote the Wikipedia page (https://en.wikipedia.org/wiki/Radioactive_decay), "the combined effects of these forces produces a number of different phenomena in which energy may be released by rearrangement of particles in the nucleus, or else the change of one type of particle into others." However, for most unstable nuclei, beta decay is the most common cause of radioactivity. This is what happens inside carbon-14. Via beta decay, one of the neutrons in the nucleus becomes a proton (and therefore the carbon atom turns into a nitrogen atom), emitting an electron and anti-neutrino from the $W^-$ decay.
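To make the quark-level bookkeeping above concrete, here is a minimal sketch in plain Python (the particle names and the `totals` helper are my own illustrative choices, not any standard library) that checks that electric charge, baryon number, and lepton number all balance in $\beta^-$ decay:

    # Minimal sketch: quantum-number bookkeeping for beta-minus decay.
    # Per particle: (electric charge, baryon number, lepton number).
    # Quarks carry baryon number 1/3; the electron and the anti-neutrino
    # carry lepton numbers +1 and -1. These are the standard values quoted above.
    from fractions import Fraction as F

    NUMBERS = {
        "d":         (F(-1, 3), F(1, 3), 0),   # down quark
        "u":         (F(2, 3),  F(1, 3), 0),   # up quark
        "e-":        (F(-1),    0,       1),   # electron
        "anti-nu_e": (F(0),     0,      -1),   # electron anti-neutrino
    }

    def totals(particles):
        # Sum each of the three quantum numbers over a list of particle names.
        return tuple(sum(NUMBERS[p][i] for p in particles) for i in range(3))

    initial = ["d"]                          # beta-minus decay at quark level:
    final = ["u", "e-", "anti-nu_e"]         # d -> u + e- + anti-nu_e
    assert totals(initial) == totals(final)  # all three numbers are conserved
    print(totals(initial), totals(final))

The same arithmetic works with the $W^-$ as the intermediate state: $d \to u + W^-$ conserves charge ($-1/3 = 2/3 - 1$), and the $W^-$ then hands that charge to the $e^-\,\bar{\nu}_e$ pair.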
The half-life is determined by the strength of the weak interaction, which is a function of the nucleons and their energy states. In principle, you could write down the Lagrangian and determine the half-life; in practice, this tends to be measured experimentally. So if the half-life of carbon-14 were to change, it would mean some addition to the theory, i.e., you would have to add additional terms to the Lagrangian. This is not impossible -- there are many Beyond-the-Standard-Model theories that predict all sorts of changes, but the changes would have to be very subtle so as to have been missed all this time. The process of actually calculating a half-life from first principles involves quantum field theory, which is too complex to fully explain here. However, I can explain some of the language that is used (with the perspective of someone with a particle physics background, however, rather than a nuclear physics background). Wikipedia links are included where appropriate. In the language of QFT, radioactive beta ($\beta^-$) decay is a transition from an initial state ($udd$) to a final state ($uud + e^- + \bar{\nu_e}$). To calculate the transition rate (or half-life), we need to calculate the transition amplitude, which gives the probability of the transition between those two states. More specifically, we're interested in the S-matrix element, which is the transition amplitude from time $t=-\infty$ to $t=+\infty$. S-matrix elements are often expressed and visualized using Feynman diagrams. The transition rate can be calculated from the S-matrix element using a formula known as Fermi's Golden Rule; the half-life then follows from the decay rate $\lambda$ as $t_{1/2} = \ln(2)/\lambda$. Of course, for nuclear beta decay, the matrix element calculation isn't as simple as it is for free particles. There are complex nucleon-nucleon interactions that strongly affect the transition amplitude; these have to be modeled and accounted for in the QFT Lagrangian. Nuclear physicists have done this with good results, but it remains an area of active research (especially for isotopes far from stability). By the way, that same Lagrangian is where you would introduce any other interactions that might affect the half-life. $\begingroup$ I'm fairly certain that half-lives cannot be predicted at all from first principles and I would be interested in any link you can provide suggesting a Lagrangian (or did you mean Euler-Lagrange) equation can successfully predict any nuclear isotope half-life using first principles. I am fairly certain from my own studies that the $\lambda$ of any radioisotope is itself an ab initio characteristic of that element. This post would benefit from a reference if you have one. $\endgroup$ – Vogon Poet Oct 28 '19 at 20:28 $\begingroup$ Of course you can predict decay rates. That's what quantum field theory does: calculates transition amplitudes from an initial to final state. That's why people like Feynman diagrams; they're a visual representation of the S-matrix element. Complications arise for nucleons inside the nucleus, which is a bit outside my expertise, but a Google search reveals a mechanism known as pn-QRPA, explained in int.washington.edu/talks/WorkShops/int_06_2b/People/Vretenar_D/…, with some example results (that I haven't read) in doi.org/10.1016/0370-2693(88)91202-6 $\endgroup$ – Richter65 Oct 29 '19 at 3:56 $\begingroup$ So bottom line you disagree with the determination that radioactive decay is spontaneous. OK. $\endgroup$ – Vogon Poet Oct 29 '19 at 4:12 $\begingroup$ I don't know what you mean by "spontaneous".
I think of it as a random variable (as in en.wikipedia.org/wiki/Random_variable), with a probability distribution that is obtained from quantum field theory calculations. In that sense, its "randomness" is no different from any other observable in quantum mechanics. $\endgroup$ – Richter65 Oct 29 '19 at 18:34 $\begingroup$ "Radioactive decay is the spontaneous breakdown of an atomic nucleus resulting in the release of energy and matter." This means that nothing in this universe causes it; it happens completely by itself. It is not on a timer, or hit by anything, or pushed out by some force. It has no cause. And it is random. $\endgroup$ – Vogon Poet Oct 29 '19 at 20:11
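To illustrate how "completely random" and "governed by a fixed rate" coexist, here is a minimal simulation sketch (plain Python; the half-life is the standard carbon-14 value, everything else is an illustrative toy model): each nucleus decays independently and memorylessly with a fixed probability per time step, with no trigger involved, yet the familiar half-life emerges from the ensemble statistics.

    # Minimal sketch: memoryless random decay reproduces the half-life.
    # Each nucleus decays independently with probability p per time step;
    # nothing "causes" an individual decay, but the ensemble halves on schedule.
    import random

    HALF_LIFE = 5730.0                  # carbon-14 half-life, in years
    DT = 10.0                           # simulation time step, in years
    p = 1 - 0.5 ** (DT / HALF_LIFE)     # per-step decay probability

    random.seed(0)
    n = 10_000                          # initial number of nuclei
    t = 0.0
    while n > 5_000:                    # run until half the sample is gone
        n -= sum(random.random() < p for _ in range(n))
        t += DT
    print(f"half the sample decayed after ~{t:.0f} years")  # close to 5730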
\begin{document} \selectlanguage{english} \title{Extended symmetry groups of multidimensional subshifts with hierarchical structure} \author{Álvaro Bustos\footnote{Contact e-mail: \texttt{[email protected]}. The author was supported by CONICYT Doctoral Fellowship 21171061(2017).}} \date{Departamento de Ingeniería Matemática \\ Universidad de Chile \\ Beauchef 851, Santiago, Chile \\\vspace*{1em} January 23, 2019} \maketitle \begin{abstract} We study the automorphism group, i.e. the centralizer of the shift action inside the group of self-homeomorphisms, together with the extended symmetry group (the corresponding normalizer) of certain $\mathds{Z}^d$ subshifts with a hierarchical structure like bijective substitutive subshifts and the Robinson tiling. Treating those subshifts as geometrical objects, we introduce techniques to identify allowed symmetries from large-scale structures present in certain special points of the subshift, leading to strong restrictions on the group of extended symmetries. We prove that in the aforementioned cases, $\Sym(X, \mathds{Z}^d)$ (and thus $\Aut(X, \mathds{Z}^d)$) is virtually-$\mathds{Z}^d$ and we explicitly represent the nontrivial extended symmetries, associated with the quotient $\Sym(X, \mathds{Z}^d)/\Aut(X, \mathds{Z}^d)$, as a subset of rigid transformations of the coordinate axes. We also show how our techniques carry over to the study of the Robinson tiling, both in its minimal and non-minimal version. \end{abstract} \section{Preliminaries} This section will introduce the basic concepts and notation to be used in what follows. We assume some basic familiarity with symbolic dynamics (namely, the nature of a shift space and the shift action); for the reader interested in a more in-depth treatment of this subject, we recommend consulting the book by Lind and Marcus \cite{LM95} for the one-dimensional case and the text by Ceccherini-Silberstein and Coornaert \cite{TCS2010} as an introduction to the case of general groups. The subject of symbolic dynamics deals with a specific kind of group action, the \textbf{shift action:} \begin{defi} Let $\mathcal{A}$ be a finite set (which we shall call \textbf{alphabet}) and $G$ any group (usually, but not always, assumed to be finitely generated). The \textbf{full-shift} is the topological space $\mathcal{A}^G$, with the product topology (after giving $\mathcal{A}$ the discrete topology). The (left) \textbf{shift action} is the following group action $G\actson[\sigma]\mathcal{A}^G$: \[(\forall x=(x_g)_{g\in G}\in\mathcal{A}^G)(\forall g,h\in G):(\sigma_g(x))_h\dfn x_{g^{-1}h}. \] A closed subset of $\mathcal{A}^G$ that is invariant under the shift action is called a $G$-\textbf{subshift}. \end{defi} Subshifts are usually defined combinatorially instead of topologically, via \textbf{forbidden patterns}. A \textbf{pattern} $P$ with (finite) \textbf{support} $U\subset G$ is a function $P:U\to\mathcal{A}$. $G$ acts on the set of all patterns $\mathcal{A}^{*,G}$ by translation: the pattern $g\cdot P$ has support $gU$ and is defined by $(g\cdot P)_{h}=P_{g^{-1}h}$. We say that a point $x\in\mathcal{A}^G$ \textbf{contains} the pattern $P$ (usually written as $P\sqsubset x$) if, for some $g\in G$, $x|_{gU}=g\cdot P$; note that, by this definition, any translation of $P$ is contained in $x$ as well, and thus is functionally ``the same'' as $P$. A similar definition applies for two patterns $P$ and $Q$, where we again use $P\sqsubset Q$ to denote the subpattern relation.
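Both the shift action and the translation action on patterns use the same convention; to make it concrete in the simplest case, note that for $G=\mathds{Z}$ the definition above reads $(\sigma_n(x))_k=x_{k-n}$, so $\sigma_1$ moves every symbol of a bi-infinite sequence one step to the right, and the inverse appearing in $x_{g^{-1}h}$ is exactly what makes $\sigma$ a left action: \[\big(\sigma_g(\sigma_h(x))\big)_{k}=\big(\sigma_h(x)\big)_{g^{-1}k}=x_{h^{-1}g^{-1}k}=x_{(gh)^{-1}k}=\big(\sigma_{gh}(x)\big)_{k}.\]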
Any subshift $X$ can be described by a set of forbidden patterns $\mathcal{F}\subseteq\mathcal{A}^{*,G}$; namely, given such a set $\mathcal{F}$ we define the following subset of $\mathcal{A}^G$: \[\shift{\mathcal{F}}\dfn\{x\in\mathcal{A}^G:(\forall P\in\mathcal{F}):P\not\sqsubset x \}. \] It is not hard to prove that $\shift{\mathcal{F}}$ is a subshift and that any subshift $X$ is equal to $\shift{\mathcal{F}}$ for some (usually not uniquely determined) set of patterns $\mathcal{F}$. If the set $\mathcal{F}$ can be chosen finite, we say that $X$ is a \textbf{shift of finite type} (or $G$-SFT). Since subshifts can be regarded as both dynamical and combinatorial objects, we can classify them not only in combinatorial terms (shifts of finite type, substitutive subshifts, etc.) but also in terms of their dynamics. One of the main classifications we shall be interested in is given by the following definition: \begin{defi} Let $X\subseteq\mathcal{A}^{G}$ be a $G$-subshift. We say that the shift action $G\actson[\sigma]X$ is \textbf{faithful} if for all $g\in G\setminus\{1_G\}$ there is a point $x\in X$ such that $\sigma_g(x)\ne x$, i.e. if $\sigma_g=\id$ implies $g=1_G$. \end{defi} We shall assume unless otherwise stated that the shift action is faithful for the shifts under study. This is because, if the shift action is not faithful, there is a strict subgroup $H<G$ such that $X$ essentially behaves like an $H$-subshift; thus, we can always limit ourselves to the faithful case. Moreover, in the study of the automorphism group described below, having a faithful action makes it easier to describe such a group. Let $X,Y$ be $G$-subshifts. A continuous, shift-commuting ($f\circ\sigma_g=\sigma_g\circ f$ for all $g\in G$) map $f:X\to Y$ is called a \textbf{sliding block code}. These kinds of mappings act as structure-preserving morphisms for this class of group actions (or dynamical systems). In particular, when there is a bijective sliding block code from $X$ to $Y$, both subshifts share all topological and dynamical properties such as periodic points, isolated points, dense subsets, etc.; thus, we call such a mapping an \textbf{isomorphism} (or, when $X=Y$, \textbf{automorphism}) and $X$ and $Y$ \textbf{isomorphic} subshifts. The name ``sliding block code'' comes from the following well-known result: \begin{teo}[Curtis-Hedlund-Lyndon] Let $X$ and $Y$ be $G$-subshifts over finite alphabets $\mathcal{A}_X$ and $\mathcal{A}_Y$, respectively. A mapping $\varphi:X\to Y$ is a sliding block code if, and only if, there exists a finite subset $U\subset G$ (called the \textbf{window} of $\varphi$) and a mapping $\Phi:\mathcal{A}_X^U\to\mathcal{A}_Y$ (called the \textbf{local function} associated to $\varphi$) such that: \[\varphi(x)_g = \Phi(x|_{gU}), \] in which we identify the pattern $x|_{gU}$ with its corresponding translate $g^{-1}\cdot x|_{gU}$ with support $U$. \end{teo} \begin{obs} Let $\varphi:X\to Y$ be a sliding block code given by a local function $\Phi:\mathcal{A}_X^U\to\mathcal{A}_Y$. If $V\supset U$ is a larger finite set, we may define a new local function $\Phi':\mathcal{A}_X^V\to\mathcal{A}_Y$ which induces the same sliding block code $\varphi$ (this can be seen by taking $\Phi'(P)=\Phi(P|_U)$ for all $P\in\mathcal{A}_X^V$).
Thus, in $\mathds{Z}^d$, we can always assume that the set $U$ is of the form $[-r\vec{\mathds{1}},r\vec{\mathds{1}}]$ for some $r\ge 0$; the least $r$ for which there is a local function $\Phi$ with a window of this form for $\varphi$ is often called the \textbf{radius} of the sliding block code $\varphi$. A sliding block code of radius $0$ is often called a \textbf{relabeling map}. \end{obs} Our main subject of study, at least for the first part of this work, is the set of automorphisms $f:X\to X$, which we shall denote as $\Aut(X,G)$; note that this set is a group under the operation of composition, and that $Z(G)$, the center of the group $G$, embeds into $\Aut(X,G)$ because, if $gh=hg$, then $\sigma_g\circ\sigma_h=\sigma_{gh}=\sigma_{hg}=\sigma_h\circ\sigma_g$ and thus for any $h\in Z(G)$ the mapping $\sigma_h$ is a continuous, shift-commuting homeomorphism. This is very important in the case of abelian groups such as $\mathds{Z}^d$, where $G=Z(G)$. In what follows, we shall be mostly concerned with free abelian groups, namely $\mathds{Z}^d$; in the following text, unless stated otherwise, we reserve the letter $d$ for the rank or dimension of this underlying group. We will denote elements of this group with vector notation, $\vec{k}=(k_1,\dots,k_d),k_1,\dots,k_d\in\mathds{Z}$. In this context, the letter $\vec{s}$ will be used for a specific, fixed ``size'' vector (as we shall see below) and we will use other letters such as $\vec{k}$ and $\vec{p}$ for generic elements of $\mathds{Z}^d$. We shall be mostly concerned with substitutive subshifts, at least in the early sections of this work. These subshifts come from a partial generalization of the concept of a one-dimensional substitution, which consists of a function $\theta:\mathcal{A}\to\mathcal{A}^*\setminus\{\varnothing\}$ that assigns a (nonempty) word comprised of symbols from the alphabet $\mathcal{A}$ to every symbol $a\in\mathcal{A}$. This function extends to the whole of $\mathcal{A}^*$ by concatenation, i.e. we define $\theta^*:\mathcal{A}^*\to\mathcal{A}^*$ by: \[\theta^*(a_1a_2\dotsc a_k)=\theta(a_1)\theta(a_2)\dotsc\theta(a_k), \] and in a similar fashion we extend $\theta$ to infinite and bi-infinite sequences from $\mathcal{A}^\mathds{N}$ and $\mathcal{A}^\mathds{Z}$, respectively. We shall assume the substitution function $\theta$ to be \textbf{primitive}, i.e. there is a fixed $k\in\mathds{N}$ such that $(\forall a\in\mathcal{A}):\theta^k(a)$ contains all of the symbols of the alphabet $\mathcal{A}$. By taking any (fixed) $x\in\mathcal{A}^\mathds{Z}$ and applying the substitution repeatedly, we obtain a sequence of points $x^{(n+1)}=\theta_\infty(x^{(n)})$ which, under mild conditions (e.g. the primitivity we have just assumed), converges either to a point that is fixed under $\theta_\infty$ or to a finite, periodic orbit under $\theta_\infty$. Taking the orbit closure (under the shift action) of such a fixed or periodic point, we define a subshift with interesting properties, which is called the \textbf{substitutive subshift} associated to $\theta$. In the multidimensional case, we may consider a substitution as a mapping that assigns a pattern from $\mathcal{A}^{*,G}$ to each symbol $a\in\mathcal{A}$; however, for arbitrary patterns it is hard to describe the extension of this mapping to any configuration, as there is no direct analogue to concatenation.
Thus, we restrict ourselves to the case in which all patterns $\theta(a),a\in\mathcal{A}$ share the same rectangular support $S=[\vec{0},\vec{s}-\vec{\mathds{1}}]\dfn\prod_{k=1}^{d}[0,s_k-1]$, and thus $\theta$ is a mapping $\mathcal{A}\to\mathcal{A}^S$. This is called a \textbf{rectangular substitution}. This kind of substitution has the obvious advantage of the concatenation rule being easy to describe: symbols $a_1$ and $a_2$ adjacent in the $\vec{e}_j$ direction result in patterns $\theta(a_1)$ and $\theta(a_2)$ appearing adjacent in the same direction $\vec{e}_j$. \begin{figure} \caption{An example of application of a rectangular substitution to a pattern.} \end{figure} Formally, the extension of $\theta:\mathcal{A}\to\mathcal{A}^S$ to a function $\theta_\infty:\mathcal{A}^{\mathds{Z}^d}\to\mathcal{A}^{\mathds{Z}^d}$ follows the same principle: in $\theta_\infty(x)$, every symbol $x_{\vec{k}}$ is replaced by a pattern $\theta(x_{\vec{k}})$, keeping adjacencies, and thus: \[\theta_\infty(x)_{\vec{m}\cdot\vec{s}+\vec{k}}=\theta(x_{\vec{m}})_{\vec{k}},\quad\vec{m}\in\mathds{Z}^d,\ \vec{k}\in S. \] Here and in what follows, the multiplication $\vec{m}\cdot\vec{s}$ is taken to be componentwise. For simplicity, we shall always assume that the vector $\vec{s}$ determining the shape of the set $S$ satisfies the condition $\min(s_1,\dots,s_d)>1$; otherwise the problem reduces to the study of a lower-dimensional substitution, and in the one-dimensional case the failure of this condition makes the substitution trivial (in particular, it cannot be primitive). The condition $s_i>1$ for all $1\le i\le d$ over the size vector $\vec{s}$ of the support $S$ ensures that every $\vec{m}\in\mathds{N}^d$ is in the support of some $\theta^k$ for sufficiently large $k$. We shall only use the subscript $\infty$ in specific situations that might lead to confusion, otherwise distinguishing both functions by context. Similarly, we may use the symbol $\theta^*$ for the extension of $\theta$ to the set of all patterns $\mathcal{A}^{*,G}$, but only when needed. The above definition allows us to introduce the type of subshifts we intend to study: \begin{defi} Let $\theta$ be a primitive rectangular substitution on the alphabet $\mathcal{A}$, and take $\Sigma$ to be the limit set of $\mathcal{A}^{\mathds{Z}^d}$ under $\theta_\infty$, i.e. the set of all accumulation points of the sequences $(\theta_\infty^k(x))_{k\in\mathds{N}}$ for all $x\in \mathcal{A}^{\mathds{Z}^d}$. Note that this set is actually finite, as the accumulation points of such a sequence depend only on the finite subpattern $x|_{\{-1,0\}^d}$ and each sequence has a finite number of such points. We define the \textbf{substitutive subshift} associated to $\theta$ as the shift-orbit closure of $\Sigma$, that is: \[\shift{\theta}\dfn\bigcup_{x\in\Sigma}\overline{\Orb_\sigma(x)}; \] additionally, we define the \textbf{minimal substitutive subshift} $\shift{\theta}^\circ$ as the following subset of $\mathcal{A}^{\mathds{Z}^d}$: \[\shift{\theta}^\circ\dfn\{x\in\mathcal{A}^{\mathds{Z}^d}: (\forall U\subset\mathds{Z}^d,|U|<\infty)(\exists k\in\mathds{N})(\exists a\in\mathcal{A}):x|_U\sqsubset\theta^k(a) \}. \] \end{defi} \begin{nota} Note that the usual definition of substitutive subshift in several sources corresponds to $\shift{\theta}^\circ$, and thus it is often assumed that a substitutive subshift is minimal.
However, in the analysis below we need the slightly expanded definition given above, which consists of all the points obtained from a ``seed'' (a pattern with support $\{-1,0\}^d$), their shifts and the corresponding limit points; it is easy to see that $\shift{\theta}^\circ\subseteq\shift{\theta}$, with equality only when every possible seed appears as a subpattern of $\theta^k(a)$ for some $a$. \end{nota} \begin{obs} Due to the primitivity, the condition $(\exists k\in\mathds{N}):x|_U\sqsubset\theta^k(a)$ does not depend on the chosen $a$; moreover, this means that any pattern $x|_U$ from a point $x\in\shift{\theta}^\circ$ appears in some $\theta^m(a)$ for any $a\in\mathcal{A}$ and some sufficiently large $m$. As we shall see below, all points from $\shift{\theta}$ (and thus, from $\shift{\theta}^\circ$) are of the form $\sigma_{\vec{k}_m}(\theta_\infty^m(x))$ for some $x\in\shift{\theta}$ and all values of $m$, and thus are concatenations of patterns of the form $\theta^m(a),a\in\mathcal{A}$; this implies that any pattern of the form $x|_U$ with $x\in\shift{\theta}^\circ$ appears as a subpattern of any other $y\in\shift{\theta}$, and thus $\Orb_\sigma(y)$ is dense in $\shift{\theta}^\circ$ for any $y\in\shift{\theta}^\circ$, i.e. this subshift is minimal in a dynamical sense, justifying the name given above. Since this subshift is a subset of $\shift{\theta}$, the latter is only minimal when it equals $\shift{\theta}^\circ$, since dynamical minimality implies minimality by inclusion among closed, shift-invariant subsets. \end{obs} \begin{eje} An example of a subshift arising from a rectangular substitution is the two-dimensional Thue-Morse substitutive subshift, given by the following $\theta_{\rm TM}:\mathcal{A}\to\mathcal{A}^{\{0,1\}^2}$: \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,-0.5) rectangle (1,0.5) (2,-1) rectangle (4,1); \draw[xshift=6cm] (2,-1) rectangle (4,1); \fill (2,0) rectangle (3,1) (3,0) rectangle (4,-1); \fill[xshift=6cm] (0,-0.5) rectangle (1,0.5) (2,0) rectangle (3,-1) (3,0) rectangle (4,1); \node at (1.5,0) {$\mapsto$}; \node at (7.5,0) {$\mapsto$}; \end{tikzpicture} \end{center} in which the alphabet $\mathcal{A}$ consists of black and white tiles (identified with the symbols $1$ and $0$, respectively). The $d$-dimensional analogue $\theta_{\rm TM}$ sends each $i\in\{0,1\}$ to a $2\times\cdots\times 2$ pattern with support $S=\{0,1\}^d$ given by: \[\theta_{\rm TM}(i)_{(k_1,\dots,k_d)}=\begin{cases} 0 & \text{if }i+k_1+\dots+k_d\equiv 0\pmod{2}, \\ 1 & \text{otherwise}. \end{cases} \] In the one-dimensional case, $\shift{\theta_{\rm TM}} = \shift{\theta_{\rm TM}}^\circ$ and a typical point of this subshift looks like the following: \[\dotsc 1001011001101001{.}0110100110010110 \dotsc \] In Figure \ref{fig:contradiction_points} (see page \pageref{fig:contradiction_points}) we can see fragments from two points of the shift $\shift{\theta_{\rm TM}}$, in the case $d=2$. The left one belongs to the minimal subshift $\shift{\theta_{\rm TM}}^\circ$, while the point corresponding to the figure on the right does not. \end{eje} As stated above, we shall refer to any pattern given by a mapping $\{-1,0\}^d\to\mathcal{A}$ as a \textbf{seed}. Any periodic (w.l.o.g. fixed, by replacing $\theta$ by a suitable power $\theta^m$) point of the substitution $\theta$ is uniquely determined (at least in the primitive case with nontrivial support) by its finite subconfiguration with support $\{-1,0\}^d$ and thus by a unique seed.
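For a concrete instance of this correspondence, the bi-infinite Thue-Morse point displayed above is the fixed point of $\theta_{\rm TM}^2$ determined by the seed $P:\{-1,0\}\to\{0,1\}$ with $P_{-1}=1$ and $P_0=0$: since $\theta_{\rm TM}^2(0)=0110$ begins with $0$ and $\theta_{\rm TM}^2(1)=1001$ ends with $1$, iterating $\theta_{\rm TM}^2$ on any point with this seed freezes larger and larger central blocks, \[1{.}0\;\longmapsto\;1001{.}0110\;\longmapsto\;1001011001101001{.}0110100110010110\;\longmapsto\;\dotsc\]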
To work with iterated substitutions and automorphisms, the following notations are useful: \begin{align*} S^{(m)} &\dfn [\vec{0},\vec{s}^m-\vec{\mathds{1}}] = \{\vec{k}=(k_1,\dots,k_d)\in\mathds{Z}^d: 0\le k_j< s_j^m, 1\le j\le d \},\\ R^{\circ m} &\dfn \{\vec{k}\in R: [\vec{k}-m\vec{\mathds{1}},\vec{k}+m\vec{\mathds{1}}]\subseteq R \}, \text{ for any }R\subseteq\mathds{Z}^d. \end{align*} $S^{(m)}$ is the iterated componentwise multiplication of the elements of the set $S$ with themselves, repeated $m$ times. When $\theta$ is a rectangular substitution with support $S$, the set $S^{(m)}$ is the support of any pattern $\theta^m(a),a\in\mathcal{A}$; we may take $\theta^m=(\theta^*)^m|_\mathcal{A}$ as a new substitution that induces the same substitutive subshift as $\theta$ (in the primitive case), which sometimes proves useful to simplify arguments. In the same way, $R^{\circ m}$ refers to the subset of $R$ comprised of all elements ``at distance at least $m$'' from the complement of $R$, i.e. a sort of ``interior'' of $R$. It is easy to see that, if $f$ is a sliding block code of radius $m$, then for any subset $R\subseteq\mathds{Z}^d$ we have that $x|_R=y|_R$ implies $f(x)|_{R^{\circ m}}=f(y)|_{R^{\circ m}}$. In the following text we have to deal with the group of $p$-adic numbers, defined as an inverse limit of a chain of cyclic groups, as follows: \begin{defi} Let $p>1$ be a fixed positive integer (not necessarily prime). The \textbf{group of $p$-adic integers} $\mathds{Z}_p$ is the inverse limit associated to the following diagram of groups and group morphisms: \[\xymatrix{\mathds{Z}/p\mathds{Z} & \mathds{Z}/p^2\mathds{Z}\ar[l]_-{\pi_1} & \mathds{Z}/p^3\mathds{Z}\ar[l]_-{\pi_2} & \dots\ar[l]_-{\pi_3}}\] where each $\pi_i:\mathds{Z}/p^{i+1}\mathds{Z}\to\mathds{Z}/p^i\mathds{Z}$ is the remainder modulo $p^i$ function, i.e. $\pi_i([ap^i+b]_{p^{i+1}})=[b]_{p^i}$. Alternatively, $\mathds{Z}_p$ corresponds to the following subgroup of the infinite product $\prod_{k=1}^\infty \mathds{Z}/p^k\mathds{Z}$: \[\mathds{Z}_p\dfn\left\{([m_k]_{p^k})_{k\ge 1}\in\prod_{k=1}^\infty\mathds{Z}/p^k\mathds{Z} : (\forall k\ge 1): m_k\equiv m_{k+1}\pmod{p^k} \right\}. \] \end{defi} As a subgroup of an infinite product, addition in $\mathds{Z}_p$ is performed componentwise. It is easy to see that the sequence $([1]_{p^k})_{k\ge 1}$ belongs to $\mathds{Z}_p$ and generates an infinite cyclic group, which we identify with $\mathds{Z}$ (and thus we identify the sequence $([1]_{p^k})_{k\ge 1}$ with the integer $1$). The set $\mathds{Z}_p\setminus\mathds{Z}$ is nonempty. For instance, the sequence $([1]_2,[1]_4,[5]_8,[5]_{16},[21]_{32},[21]_{64},\dots)$ belongs to $\mathds{Z}_2$ but it does not represent an integer. $\mathds{Z}_p$ has a topological group structure, induced by the prodiscrete topology on the space $\prod_{k=1}^{\infty}\mathds{Z}/p^k\mathds{Z}$; we can define a metric which is analogous to the shift metric in $\mathcal{A}^\mathds{N}$. With this structure, the example given of a sequence in $\mathds{Z}_2\setminus\mathds{Z}$ corresponds to the infinite sum $\sum_{k=0}^{\infty}2^{2k}=1+4+16+\dotsb$ (by contrast, the integer $-1\in\mathds{Z}\subset\mathds{Z}_2$ corresponds to the sequence $([1]_2,[3]_4,[7]_8,[15]_{16},\dots)$, i.e. to the sum $\sum_{k=0}^{\infty}2^{k}$). \begin{defi} The \textbf{$p$-adic odometer} is the topological dynamical system $(\mathds{Z}_p,\omega_p)$, where $\omega_p(x)=x+1$ (here, as above, $1$ represents the infinite sequence $([1]_{p^k})_{k\ge 1}$) and $\mathds{Z}_p$ is taken with the prodiscrete topology. \end{defi} As we shall see, under certain hypotheses the maximal equicontinuous factor of a $d$-dimensional substitutive subshift is a product of odometers.
Thus, we introduce the notation $\mathds{Z}_{(s_1,\dots,s_d)}$ (or simply $\mathds{Z}_{\vec{s}}$) for the product $\mathds{Z}_{s_1}\times\dotsm\times\mathds{Z}_{s_d}$, and identify $\mathds{Z}^d$ with the corresponding subgroup of $\mathds{Z}_{\vec{s}}$. \section{Substitution encodings} For details on the propositions referenced in this section, the survey by Frank \cite{Fr2005} may be consulted. In the one-dimensional case, the book by K\r{u}rka \cite{Kur2003} gives a treatment of this encoding factor as well. More details about substitutions can be found in the book by Pytheas Fogg \cite{Fog2002} (for the one-dimensional case); the book by Michael Baake and Uwe Grimm \cite{BG2013} gives a good treatment of the multidimensional case as well. In what follows, let $\theta:\mathcal{A}\to\mathcal{A}^S$ be a rectangular primitive substitution of constant size $\vec{s} = (s_1,\dots,s_d)$, with $S=[\vec{0},\vec{s}-\vec{\mathds{1}}]$. We assume that $s_i>1$ for all $1\le i\le d$. \begin{lem} \label{lem:codified_system_subst} Given $x\in\shift{\theta}$, there is a unique $\vec{k}_1\in S$ and $y\in\shift{\theta}$ such that $x=\sigma_{\vec{k}_1}(\theta(y))$. By iterating this process, there is a unique $\vec{k}_m\in S^{(m)}$ such that $x=\sigma_{\vec{k}_m}(\theta^m(y))$ and $\vec{k}_m\equiv\vec{k}_r\pmod{\vec{s}^r}$ for $m>r$, with the modular equivalence taken componentwise. \end{lem} This allows us to assign a sequence $([\vec{k}_m]_{\vec{s}^m})_{m\ge 1}$ in $\mathds{Z}_{\vec{s}}$ to each element of $\shift{\theta}$, and it is easy to see that the sequence associated to $\sigma_{\vec{k}}(x)$ is the sequence assigned to $x$, plus $\vec{k}$. This observation leads to the following known result: \begin{prop} \label{prop:MEF_of_a_subst_shift} The maximal equicontinuous factor of a nontrivial substitutive subshift $\shift{\theta}$ given by a $d$-dimensional substitution of constant length $\theta:\mathcal{A}\to\mathcal{A}^S$ is the product system of $d$ odometers, $(\mathds{Z}_{s_1},\omega_{s_1})\times\dotsm\times(\mathds{Z}_{s_d},\omega_{s_d})$ (or equivalently, the group $\mathds{Z}_{\vec{s}}$ with its subgroup $\mathds{Z}^d$ acting by addition). The factor morphism $\varphi:\shift{\theta}\to\mathds{Z}_{\vec{s}}$ sends each $x\in\shift{\theta}$ to the uniquely determined sequence belonging to $\mathds{Z}_{\vec{s}}$ by the previous lemma. \end{prop} \begin{figure} \caption{$2^n\times 2^n$ grids associated with the iterates of a substitution $\theta$ in a point from a substitutive subshift. The corresponding substitution is indicated in the figure.} \end{figure} Proposition \ref{prop:MEF_of_a_subst_shift} is proved in the survey by Frank \cite{Fr2005}. Note that, in particular, $\varphi$ is continuous; intuitively, if two points $x$ and $y$ match on $[-r\vec{\mathds{1}},r\vec{\mathds{1}}]$ for sufficiently large $r$, this central pattern is enough to determine the shift from Lemma \ref{lem:codified_system_subst} for all ``small'' values of $m$, and thus $\varphi(x)$ and $\varphi(y)$ match on their first few entries, and hence are ``close'' in the $p$-adic metric. From the previous proposition, we can prove the following result: \begin{lem} \label{lem:integer_images_in_MEF} Let $x\in\shift{\theta}$ be such that $\varphi(x)=\vec{0}$. Then $x$ is a periodic (w.l.o.g. fixed) point of the substitution $\theta$; the converse is also true. Thus (w.l.o.g.
assuming all periodic points of $\theta$ to be fixed points): \[\varphi^{-1}[\mathds{Z}^d]=\bigcup_{x\in\Fix(\theta)}\Orb_{\sigma}(x) \] \end{lem} \begin{dem} Without loss of generality we may assume that all periodic points of $\theta$ are fixed points, replacing $\theta$ by a suitable power $\theta^m$ if necessary. If $\varphi(x)=\vec{0}=([0]_{\vec{s}},[0]_{\vec{s}^2},\dots)$, this means there exists a sequence of points $x_1,x_2,x_3,\dots\in\shift{\theta}$ such that $x=\theta^k(x_k)$, for all $k\ge 1$. Let $P_k$ be the seed obtained from $x_k$ by restriction to the set $\{-1,0\}^d$, for each $k$. Since there are a finite number of possible seeds, there must be a seed $P$ such that $P=P_k$ for infinitely many values of $k$. Notice also that, for any given seed $P$ and any point $y\in\shift{\theta}$ such that its restriction to $\{-1,0\}^d$ is $P$, the sequence $\theta^k(y)$ converges to the same fixed point of $\theta$, $z_P$. Thus, since $x=\theta^k(x_k)$ for all those infinitely many values of $k$ that have $x_k|_{\{-1,0\}^d}=P$, we have that the restriction of $x$ to $[-\vec{s}^k,\vec{s}^k-\vec{\mathds{1}}]$ equals the corresponding restriction of $\theta^k(y)$, where $y$ is any point with seed $P$, and thus the distance between $x=\theta^k(x_k)$ and $\theta^k(y)$ is less than $2^{-\min(\vec{s}^k)}$, which goes to zero as $k$ goes to infinity. Thus, by a triangular inequality argument, $x$ must equal the limit of the sequence $\theta^k(y)$ and thus be a fixed point of $\theta$. Determining the form of $\varphi^{-1}[\mathds{Z}^d]$ is then direct from the fact that $\Orb_{(\omega_{s_1},\dots,\omega_{s_d})}(\vec{m})=\vec{m}+\mathds{Z}^d$ in a product of odometers.\qed \end{dem} The importance of these previous results is twofold: the equicontinuous factor $\varphi:\shift{\theta}\mathrel{{\twoheadrightarrow}}\mathds{Z}_{\vec{s}}$ gives a description of a point $x\in\shift{\theta}$ as a concatenation of rectangular patterns of the form $\theta^k(a)$ in a specific way for all values of $k$, and allows us to distinguish certain points, e.g. points with ``fractures''. \section{Bijective substitutions and Coven's theorem} We shall be interested in a certain type of rectangular substitutions over the two-symbol alphabet $\mathcal{A}=\{0,1\}$: \begin{defi} A substitution $\theta:\mathcal{A}\to\mathcal{A}^S$ is called \textbf{bijective} if, for each $\vec{k}\in S$ and any $a\ne b\in\mathcal{A}$, we have $\theta(a)_{\vec{k}}\ne\theta(b)_{\vec{k}}$, i.e. each function $\theta_{\vec{k}}:a\mapsto\theta(a)_{\vec{k}}$ is a bijection $\mathcal{A}\to\mathcal{A}$, for all $\vec{k}\in S$. \end{defi} (For instance, the Thue-Morse substitution from the previous section is bijective: for each fixed $\vec{k}\in\{0,1\}^d$, changing $i$ flips the parity of $i+k_1+\dots+k_d$, so $\theta_{\rm TM}(0)_{\vec{k}}\ne\theta_{\rm TM}(1)_{\vec{k}}$.) Given a pattern $P$ over the alphabet $\{0,1\}$ with support $L\subset\mathds{Z}^d$, we write $\overline{P}$ for the pattern with the same support obtained by replacing all $1$s by $0$s and vice versa. It is easy to see that a substitution over the alphabet $\{0,1\}$ is bijective if and only if $\theta(1)=\overline{\theta(0)}$; thus, for any pattern $P$, $\theta(\overline{P})=\overline{\theta(P)}$, and in particular $\theta^k(1)=\overline{\theta^k(0)}$ by induction. Similarly, if we define the substitution $\overline{\theta}$ by the relation $\overline{\theta}(a)=\overline{\theta(a)}$, we can see that $\overline{\theta}^2(a)=\theta^2(a)$ (and hence $\shift{\theta}=\shift{\overline{\theta}}$) so that we may always assume $\theta(a)_{\vec{\mathds{1}}}=a,a\in\{0,1\}$.
This also implies that for any seed $P$ there exists a periodic point $x_P$ of $\theta$ with seed $P$, and thus $\theta$ has exactly $2^{2^d}$ periodic points (w.l.o.g. we may assume these periodic points to be fixed points by replacing $\theta$ with a suitable power $\theta^m$ such that the symbol on each of the $2^{d}$ corners of the pattern $\theta^m(a)$ is $a$ itself). Our goal is to give a characterization of $\Aut(\shift{\theta},\mathds{Z}^d)$ for an arbitrary nontrivial bijective substitution $\theta$. It is easy to see that the shifts and the mapping $\delta: x\mapsto\overline{x}$ are automorphisms of $\shift{\theta}$, so the actual problem is determining whether there exist other kinds of automorphisms. Coven's result in one dimension \cite{Cov71} states that this is not the case: \begin{teo}[Coven] Let $\mathcal{A}=\{0,1\}$ be a two-symbol alphabet. If $\theta:\mathcal{A}\to\mathcal{A}^n$ is a nontrivial (primitive) bijective substitution of constant length $n>1$, then $\Aut(\shift{\theta},\mathds{Z})\cong\mathds{Z}\times(\mathds{Z}/2\mathds{Z})$, with every automorphism being of the form $\sigma^k$ or $\delta\circ\sigma^k$ for some $k\in\mathds{Z}$, where $\sigma=\sigma_1$ is the elementary shift action. \end{teo} Similar results hold for larger alphabets (see e.g. the article by Lema\'nczyk and Mentzen \cite{LM88}). Our goal in what follows is to show that these results translate readily to the higher-dimensional case. For our purpose, studying the action of an automorphism on a fixed point of $\theta$ will be a valuable tool. To begin, we need to introduce some terminology: \begin{defi} Given a $d$-tuple $\vec{u}=(u_1,\dots,u_d)\in\{-1,+1\}^d$, the \textbf{canonical quadrant} associated to $\vec{u}$ is the following subset of $\mathds{Z}^d$: \[Q_{\vec{u}}=u_1\mathds{N}_0\times u_2\mathds{N}_0\times\cdots\times u_d\mathds{N}_0. \] We shall refer to any translation $Q_{\vec{u},\vec{k}}\dfn\vec{k}+Q_{\vec{u}}$ of a canonical quadrant as a \textbf{quadrant} and its unique extremal element $\vec{k}$ (with the maximum or minimum possible value on each coordinate) as its \textbf{vertex}. \end{defi} Notice that the $2^d$ quadrants $Q_{\vec{\mathds{1}}}=Q_{\vec{\mathds{1}},\vec{0}},Q_{(-1,1,\dots),-\vec{e}_1}, \dotsc, Q_{-\vec{\mathds{1}},-\vec{\mathds{1}}}$ are pairwise disjoint and their union is $\mathds{Z}^d$; also, $\vec{u}\in Q_{\vec{u}}$. By definition, if two points $x,y\in\mathcal{A}^{\mathds{Z}^d}$ coincide on a canonical quadrant, then their images under $\theta$ coincide on the same quadrant as well. For a general quadrant, this holds too, although the vertex may change if the quadrant is not canonical. We can quickly verify that the following holds: \begin{lem} Let $\theta$ be a nontrivial (primitive) bijective substitution. If $x,y\in\shift{\theta}$ coincide on a quadrant, then $\varphi(x)=\varphi(y)$, where $\varphi:\shift{\theta}\mathrel{{\twoheadrightarrow}}\mathds{Z}_{\vec{s}}$ is the encoding factor map from the previous section. \end{lem} \begin{dem} This is direct from the uniqueness of the factorization of a point of a one-dimensional substitutive subshift as a concatenation of words of the form $\theta^k(0),\theta^k(1)$, applied to each of the $d$ principal directions. However, we may also argue as follows: assuming, w.l.o.g.
that $x$ and $y$ match on the canonical quadrant $Q_{\vec{\mathds{1}}}=\mathds{N}_0^d$, it is easy to see that $\lim_{k\to\infty}\sigma_{\vec{\mathds{1}}}^{h(k)}(x)=\lim_{k\to\infty}\sigma_{\vec{\mathds{1}}}^{h(k)}(y)$ for any increasing subsequence $h:\mathds{N}\to\mathds{N}_0$ such that $\sigma_{\vec{\mathds{1}}}^{h(k)}(x)$ converges, as $\sigma_{\vec{\mathds{1}}}^{h(k)}(y)$ coincides with the former in $[-h(k),h(k)]^d$ and thus they are at distance at most $2^{-h(k)}$. If $\varphi(x)-\varphi(y)=\vec{m}\ne\vec{0}$, it is easy to see that the value $\vec{m}$ remains constant for the aforementioned subsequence: \[\varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(x)) - \varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(y)) = (\varphi(x)+h(k)\vec{\mathds{1}}) - (\varphi(y)+h(k)\vec{\mathds{1}}) = \varphi(x)-\varphi(y)=\vec{m} \] and thus, since $\varphi$ is continuous, $\lim_{k\to\infty}\varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(x)) - \varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(y))=\vec{m}$. But also, \[\lim_{k\to\infty}\varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(x)) - \varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(y)) = \lim_{k\to\infty}\varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(x)) - \underbrace{\lim_{k\to\infty}\varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(y))}_{{}=\lim_{k\to\infty}\varphi(\sigma_{\vec{\mathds{1}}}^{h(k)}(x))}=\vec{0},\] hence $\vec{m}=\vec{0}$, a contradiction.\qed \end{dem} In what follows, we will show that Coven's result for one-dimensional bijective substitutions applies to higher-dimensional substitutive subshifts, i.e. the bijectiveness condition immediately forces the only nontrivial automorphism (up to composition by a shift) to be the relabeling map $\delta$ that swaps the two symbols of the alphabet. The main bulk of the proof of this result lies in the following lemma: \begin{lem} \label{lem:large_scale_relabeling} Let $\theta:\mathcal{A}\to\mathcal{A}^S$ be a bijective substitution with nontrivial support $S=[\vec{0},\vec{s}-\vec{\mathds{1}}]$ over the alphabet $\mathcal{A}=\{0,1\}$, and suppose $f\in\Aut(\shift{\theta},\mathds{Z}^d)$ is an automorphism. Then, for any $x\in\shift{\theta}$ there exist $\vec{k},\vec{\ell}\in\mathds{Z}^d$ and a sufficiently large $m\ge 1$ such that both $x$ and $f(x)$ are concatenations of patterns of the form $\theta^m(0)$ or $\theta^m(1)$ arranged over a translation of a ``grid'' $\vec{s}^m\cdot\mathds{Z}^d$, and such that the pattern with support $\vec{k}+\vec{p}+S^{(m)}$ (with $\vec{p}\in\vec{s}^m\cdot \mathds{Z}^d$) in the grid corresponding to $x$ determines uniquely the pattern with support $\vec{\ell}+\vec{p}+S^{(m)}$ in the grid corresponding to $f(x)$. \end{lem} \begin{dem} As above, it is a direct consequence of Lemma \ref{lem:codified_system_subst} that for a fixed $m\ge 1$ any point $x\in\shift{\theta}$ is a concatenation of patterns of the form $\theta^m(a),a\in\mathcal{A}$ over a grid given by a translation of $\vec{s}^m\cdot\mathds{Z}^d$. So what we actually need to prove is the correspondence between these patterns in $x$ and $f(x)$. By its nature as a sliding block code, any automorphism $f\in\Aut(\shift{\theta},\mathds{Z}^d)$ has a radius $r\in\mathds{N}_0$, namely, for any $\vec{k}\in\mathds{Z}^d$ the symbol $f(x)_{\vec{k}}$ is uniquely determined by the finite pattern $x|_{\vec{k}+[-r\vec{\mathds{1}},r\vec{\mathds{1}}]}$; thus, for any subset $R\subseteq\mathds{Z}^d$, $x|_R$ determines uniquely the configuration $f(x)|_{R^{\circ r}}$. From now on, $f$ will be any fixed automorphism and $r$ will be the corresponding radius.
Consider then the support $S^{(m)}$ of $\theta^m$; as $S$ was deemed nontrivial, $S^{(m)}$ must be a $d$-dimensional rectangle of edge length at least $2^m$ in any direction, and thus for sufficiently large $m$ (say, $m>\log_2(2r+1)$) the set $(S^{(m)})^{\circ r}$ is nonempty and a $d$-dimensional rectangle of edge length at least $2^m-2r$ in all directions. By Lemma \ref{lem:codified_system_subst}, there are vectors $\vec{k},\vec{\ell}\in\mathds{Z}^d$ such that, for any $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$, $x|_{\vec{k}+\vec{p}+S^{(m)}}$ and $f(x)|_{\vec{\ell}+\vec{p}+S^{(m)}}$ are either $\theta^m(0)$ or $\theta^m(1)$; we shall refer to these rectangles as $K_{\vec{p}}\dfn \vec{k}+\vec{p}+S^{(m)}$ and $L_{\vec{p}}\dfn \vec{\ell}+\vec{p}+S^{(m)}$, respectively, for any $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$. Note that, since $S^{(m)}=[\vec{0},\vec{s}^m-\vec{\mathds{1}}]$ is a set of representatives for $\mathds{Z}^d/(\vec{s}^m\cdot\mathds{Z}^d)$, the rectangles $K_{\vec{p}}$, indexed by all $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$, cover $\mathds{Z}^d$ completely (and thus the $L_{\vec{p}}$ rectangles do so as well). Since we may replace $\vec{k}$ by any $\vec{k}+\vec{s}^m\cdot\vec{k}'$ (as then the new $K'_{\vec{p}}$ is just the old $K_{\vec{p}+\vec{s}^m\cdot\vec{k}'}$), we may choose $\vec{k}$ in a suitable way such that, for any $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$, $K_{\vec{p}}^{\circ r}$ has nonempty intersection with $L_{\vec{p}}$, say $I_{\vec{p}}\dfn K_{\vec{p}}^{\circ r}\cap L_{\vec{p}}$ (this is because the union of all $L_{\vec{p}}$ is the whole of $\mathds{Z}^d$; we only need to note that, for a suitable choice of $\vec{k}$, the intersection $I_{\vec{0}}=K_{\vec{0}}^{\circ r}\cap L_{\vec{0}}$ is nonempty, and then use the fact that $K_{\vec{p}}$ and $L_{\vec{p}}$ are translations of $K_{\vec{0}}$ and $L_{\vec{0}}$ by the same vector). It is important to remark that, even though in most arguments we choose $\vec{k}$ and $\vec{\ell}$ from the set $S^{(m)}=[\vec{0},\vec{s}^m-\vec{\mathds{1}}]$, as the obvious representatives of the cosets of $\vec{s}^m\cdot\mathds{Z}^d$, it is not actually necessary to do so, and in particular in this proof $\vec{k}$ and $\vec{\ell}$ may be any two elements from $\mathds{Z}^d$. As stated above, since $\theta$ (and thus $\theta^m$) is a bijective substitution, for any $a,b\in\mathcal{A}$ and any $\vec{q}\in S^{(m)}$ the condition $\theta^m(a)_{\vec{q}}=\theta^m(b)_{\vec{q}}$ implies $a=b$ and thus $\theta^m(a)=\theta^m(b)$. Because of this, whether the pattern $f(x)|_{L_{\vec{p}}}$ is either $\theta^m(0)$ or $\theta^m(1)$ is entirely determined by the subpattern $f(x)|_{I_{\vec{p}}}$ (as $I_{\vec{p}}$ is nonempty), which in turn, as a subpattern of $f(x)|_{K_{\vec{p}}^{\circ r}}$, is entirely determined by $x|_{K_{\vec{p}}}$, which is either $\theta^m(0)$ or $\theta^m(1)$ as well. Thus, for any $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$, $f(x)|_{L_{\vec{p}}}$ depends uniquely on $x|_{K_{\vec{p}}}$, as desired.\qed \end{dem} \begin{cor} \label{cor:explicit_morphism} Given any $f\in\Aut(\shift{\theta},\mathds{Z}^d)$ and any $x\in\shift{\theta}$, $f(x)$ is either $\sigma_{\vec{\ell}-\vec{k}}(x)$ or $\delta\circ\sigma_{\vec{\ell}-\vec{k}}(x)$, where $\vec{k}$ and $\vec{\ell}$ are the vectors from the previous lemma.
\end{cor} \begin{dem} By the previous lemma, $f(x)|_{L_{\vec{p}}}$ is entirely determined by $x|_{K_{\vec{p}}}$ and thus there is a mapping $t:\{0,1\}\to\{0,1\}$ (depending only on the chosen $x$ and the automorphism $f$) such that if $x|_{K_{\vec{p}}}$ is $\theta^m(a)$, then $f(x)|_{L_{\vec{p}}}$ is $\theta^m(t(a))$; the same $t$ applies to all pairs of patterns $x|_{K_{\vec{p}}}$ and $f(x)|_{L_{\vec{p}}}$ for all $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$ due to the Curtis-Hedlund-Lyndon theorem. If $t(0)=t(1)$, then $f$ sends both $x$ and $\delta(x)$ to the same point, contradicting the bijectiveness of $f$ (since $\theta$ is a primitive substitution and thus $\shift{\theta}$ has more than one point). Thus, $t(0)\ne t(1)$ and then either $t(a)=a$ or $t(a)=1-a$. In the first case, if for some $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$ we have $x|_{K_{\vec{p}}}=\theta^m(a)$, then $f(x)|_{L_{\vec{p}}}=\theta^m(t(a))=\theta^m(a)$. This applies to all $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$; since $L_{\vec{p}}=K_{\vec{p}}+(\vec{\ell}-\vec{k})$, from this we see that $f(x)=\sigma_{\vec{\ell}-\vec{k}}(x)$. In the second case, from $x|_{K_{\vec{p}}}=\theta^m(a)$ we deduce that $f(x)|_{L_{\vec{p}}}=\theta^m(t(a))=\overline{\theta^m(a)}=\delta(x)|_{K_{\vec{p}}}$ (since $\theta^{m}(t(a))=\overline{\theta^m(a)}$). Again, this applies to all $\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d$; thus, $f(x)=\sigma_{\vec{\ell}-\vec{k}}(\delta(x)) = \delta\circ\sigma_{\vec{\ell}-\vec{k}}(x)$.\qed \end{dem} This almost completes the proof of our first main result: \begin{teo} For a nontrivial, primitive, bijective substitution $\theta$ on the alphabet $\mathcal{A}=\{0,1\}$, $\Aut(\shift{\theta},\mathds{Z}^d)$ is generated by the shifts and the relabeling map (flip map) $\delta(x)\dfn\overline{x}$, and thus is isomorphic to $\mathds{Z}^d\times(\mathds{Z}/2\mathds{Z})$. \end{teo} \begin{dem} For any $f\in\Aut(\shift{\theta},\mathds{Z}^d)$ and any $x\in\shift{\theta}$, $f(x)$ is either $\sigma_{\vec{\ell}-\vec{k}}(x)$ or $\delta\circ \sigma_{\vec{\ell}-\vec{k}}(x)$. Since $f$ commutes with the shift action, $f|_{\Orb_\sigma(x)}$ is either $\sigma_{\vec{\ell}-\vec{k}}|_{\Orb_\sigma(x)}$ or $(\delta\circ \sigma_{\vec{\ell}-\vec{k}})|_{\Orb_\sigma(x)}$. From the definition of $\shift{\theta}$, there is a finite subset $\Sigma\subset\shift{\theta}$, comprised of periodic points of $\theta_\infty$, such that the union $\bigcup_{x\in\Sigma}\Orb_\sigma(x)$ is dense in $\shift{\theta}$. Thus, for each $x\in\Sigma$, we have that $f|_{\Orb_\sigma(x)}$ is either $\sigma_{\vec{\ell}-\vec{k}}|_{\Orb_\sigma(x)}$ or $(\delta\circ\sigma_{\vec{\ell}-\vec{k}})|_{\Orb_\sigma(x)}$. It is also easy to see that, for all $x\in\Sigma$, the inclusion $\shift{\theta}^\circ\subseteq\overline{\Orb_\sigma(x)}$ holds and thus, since $\sigma_{\vec{\ell}-\vec{k}}|_{\Orb_\sigma(x)}$ and $\delta\circ\sigma_{\vec{\ell}-\vec{k}}|_{\Orb_\sigma(x)}$ differ in the minimal substitutive subshift $\shift{\theta}^\circ$, $f$ cannot be equal to $\sigma_{\vec{\ell}-\vec{k}}$ in an orbit $\Orb_\sigma(x)$ and equal to $\delta\circ\sigma_{\vec{\ell}-\vec{k}}$ in a different orbit $\Orb_\sigma(y)$ for $x\ne y\in\Sigma$. Hence, $f$ either equals $\sigma_{\vec{\ell}-\vec{k}}|_{\Orb_\sigma(x)}$ in all orbits of points of $\Sigma$ or equals $\delta\circ\sigma_{\vec{\ell}-\vec{k}}|_{\Orb_\sigma(x)}$ in all orbits of $\Sigma$.
In either case, by density of $\bigcup_{x\in\Sigma}\Orb_\sigma(x)$, $f$ must equal one of these two automorphisms in the whole of $\shift{\theta}$, proving the desired result.\qed \end{dem} Note that in the proof we only used that a point from a substitutive subshift in a two-symbol alphabet is a concatenation of patterns $\theta^m(0)$ and $\theta^m(1)$ and thus the same proof applies for $\shift{\theta}^\circ$ by replacing the set $\Sigma$ with $\Sigma^\circ=\Sigma\cap\shift{\theta}^\circ$, which is nonempty and contains all fixed points of $\theta_\infty$ whose seeds are subpatterns of $\theta^m(a),a\in\mathcal{A}$ for some $m$. Consequently, we state this as a corollary: \begin{cor} For a nontrivial, primitive, bijective substitution $\theta$ on $\mathcal{A}=\{0,1\}$, $\Aut(\shift{\theta}^\circ,\mathds{Z}^d)$ is generated by the shifts and the relabeling map $\delta$, and thus is isomorphic to $\mathds{Z}^d\times(\mathds{Z}/2\mathds{Z})$. \end{cor} To conclude this section, we shall make some brief remarks regarding the restriction to a two-symbol alphabet, $\mathcal{A}=\{0,1\}$. As noted above, in this special case the definition of bijectivity of a substitution reduces to the condition $\theta(1)=\overline{\theta(0)}$, which results in an explicit description of the only nontrivial (modulo the shifts) automorphism of $\shift{\theta}$, which is the mapping $\delta(x)\dfn\overline{x}$. In the case of a larger alphabet, the structure of the nontrivial automorphisms might be different, and an automorphism with the same behavior as $\delta$ may not even exist. For instance, if we consider the one-dimensional substitution on the three-letter alphabet $\mathcal{A}=\{1,2,3\}$ given by: \[\theta: 1\mapsto123,\qquad 2\mapsto 231,\qquad 3\mapsto312, \] then the relabeling map defined as $\varphi_{(1\,2\,3)}(x)_i=\tau(x_i)$, where $\tau=(1\,2\,3)$ is a cyclic permutation of the symbols of $\mathcal{A}$, is a nontrivial element of $\Aut(\shift{\theta},\mathds{Z})$ of order $3$. However, it is not clear whether elements of order $2$ do exist in this group, making this mapping the only obvious analogue to $\delta$ satisfying the property $\varphi_{(1\,2\,3)}\circ\theta_\infty=\theta_\infty\circ\varphi_{(1\,2\,3)}$. By changing the form of the substitution, this relabeling map may not even exist, e.g. for: \[\vartheta:1\mapsto 123,\qquad 2\mapsto 212,\qquad 3\mapsto331, \] the word $331331$ is a subpattern of every point in $\shift{\vartheta}$, and thus $13$ is a subpattern of every point as well; applying $\vartheta$ once more, the word $\vartheta(1)\vartheta(3)=123331$ shows that $333$ also appears in every point. But neither $111$ nor $222$ can appear as subwords of a point of $\shift{\vartheta}$, and thus a relabeling map must map $3$ to $3$. But then it has to map the symbols $1$ and $2$ to themselves, to preserve all instances of the word $331331$ and the bijectivity. Thus, nontrivial relabeling maps do not exist. Obviously, this does not preclude the existence of nontrivial automorphisms of $\shift{\vartheta}$ that are not relabeling maps, i.e. have radius $r$ strictly greater than $0$. However, a cursory look at the proof above shows that most of it does not make explicit use of the alphabet size. In particular, the proof of Lemma \ref{lem:large_scale_relabeling} carries over without any significant change.
In this more general context, this lemma shows that there must be a bijection $\tau:\mathcal{A}\to\mathcal{A}$ such that (using the notation from the proof of Lemma \ref{lem:large_scale_relabeling}) the following holds: \[(\exists m\in\mathds{N})(\forall\vec{p}\in\vec{s}^m\cdot\mathds{Z}^d):x|_{K_{\vec{p}}}=\theta^m(a)\iff f(x)|_{L_{\vec{p}}}=\theta^m(\tau(a)), \] and since each $L_{\vec{p}}$ is a translation of the corresponding $K_{\vec{p}}$, we see that, if $\tau_\infty$ is the relabeling map $\mathcal{A}^{\mathds{Z}^d}\to\mathcal{A}^{\mathds{Z}^d}$ induced by $\tau$, then the previous statement can be restated as follows: \[(\exists\vec{k}\in\mathds{Z}^d):f(\theta^m_\infty(x))=\sigma_{\vec{k}}(\theta^m_\infty(\tau_\infty(x))). \] So, $f$ behaves similarly to a relabeling map; in particular, this is enough to show that $f^k$ is a shift for sufficiently large $k$, i.e. $f$ has finite order modulo $\mathds{Z}^d$. We may refine this result even further, by showing that $f$ is indeed the composition of a relabeling map and a shift: \begin{cor} Let $\theta$ be a nontrivial, primitive, bijective substitution on an alphabet $\mathcal{A}$ (which can have more than two symbols). For any $f\in\Aut(\shift{\theta},\mathds{Z}^d)$, there exists a bijection $\tau:\mathcal{A}\to\mathcal{A}$ and a value $\vec{k}\in\mathds{Z}^d$ such that $f=\sigma_{\vec{k}}\circ\tau_\infty$. Thus, $\Aut(\shift{\theta},\mathds{Z}^d)$ is isomorphic to a subgroup of $\mathds{Z}^d\times S_{|\mathcal{A}|}$, where $S_n$ is the symmetric group in $n$ elements. \end{cor} \begin{dem} First of all, note that the $m$ from the proof of Lemma \ref{lem:large_scale_relabeling} can be replaced with any $m'>m$ without substantial changes in the proof. This means, in particular, that the following holds (using that $\theta^{m+1}=\theta^m\circ\theta$): \begin{align*} (\exists\vec{k},\vec{k}'\in\mathds{Z}^d)(\exists\tau,\tau':\mathcal{A}\to\mathcal{A}):f(\theta^{m+1}_\infty(x)) &=\sigma_{\vec{k}}(\theta^{m}_\infty(\tau_\infty(\theta_\infty(x))))\\ &=\sigma_{\vec{k}'}(\theta^{m+1}_\infty(\tau'_\infty(x))), \end{align*} and since $\vec{k}\equiv\vec{k}'\pmod{\vec{s}^{m+1}}$, this implies that each pattern $\theta^{m+1}(\tau'(a))$ with $a\in\mathcal{A}$ is a concatenation of the patterns $\theta^m(\tau(b))$, where the $b$ are the corresponding symbols of the pattern $\theta(a)$. But by definition $\theta^{m+1}(\tau'(a))=\theta^m(\theta(\tau'(a)))$, and the mapping $\theta$ is injective; thus, $\theta(\tau'(a))=\tau(\theta(a))$, i.e. the relabeling $\tau$ must send patterns of the form $\theta(b)$ to other patterns of the form $\theta(b')$. By replacing $\theta$ with a suitable power, we may assume that for the bottom left corner $\vec{0}$ of the support $S$ the equality $\theta(a)_{\vec{0}}=a$ holds. Thus, $\theta(\tau'(a))$ has $\tau'(a)$ in this position, while $\tau(\theta(a))$ has $\tau(a)$ in the same position, i.e. $\tau(a)=\tau'(a)$. As this applies to any symbol $a$, we conclude that $\tau=\tau'$ and that $\tau$ and $\theta$ commute, i.e. $\theta_\infty\circ\tau_\infty=\tau_\infty\circ\theta_\infty$ as mappings $\mathcal{A}^{\mathds{Z}^d}\to\mathcal{A}^{\mathds{Z}^d}$.
Applying this commutation relation to the identity for $f$ above, we conclude that: \[(\exists\vec{k}\in\mathds{Z}^d):f(\theta_\infty^m(x))=\sigma_{\vec{k}}\circ\tau_\infty(\theta_\infty^m(x)), \] and since every point $x\in\shift{\theta}$ is a shift of a point of the form $\theta_\infty^m(x)$ by Lemma \ref{lem:codified_system_subst}, this and the Curtis-Hedlund-Lyndon theorem show that $f=\sigma_{\vec{k}}\circ\tau_\infty$, the desired result. Since $\tau_\infty$ is entirely determined by a bijection from the finite set $\mathcal{A}$ to itself, and it is obvious that a relabeling map is shift-commuting by definition, we can identify $\Aut(\shift{\theta},\mathds{Z}^d)$ with a subgroup of $\mathds{Z}^d\times S_{|\mathcal{A}|}$.\qed \end{dem} Note that this proof also provides a necessary condition for a bijection $\tau:\mathcal{A}\to\mathcal{A}$ to induce a relabeling map $\tau_\infty\in\Aut(\shift{\theta},\mathds{Z}^d)$; namely, that $\theta_\infty\circ\tau_\infty=\tau_\infty\circ\theta_\infty$. By compactness, this condition is also sufficient, providing an explicit description of the group $\Aut(\shift{\theta},\mathds{Z}^d)$ in terms of the patterns $\theta(a),a\in\mathcal{A}$. \section{Extended symmetries and bijective substitutions} Our next goal is to obtain generalizations of the previous result in the domain of extended symmetries. These are a generalization of automorphisms, which introduce an additional degree of flexibility by allowing, besides the standard local transformations given by a sliding block code, deformations of the underlying $\mathds{Z}^d$ lattice by rotation, reflection, shear or other effects of a geometric nature. This additional degree of freedom is captured by a group automorphism of $\mathds{Z}^d$, i.e. an element\footnote{Remember that $\mathrm{GL}_d(\mathds{Z})$ is the set of all invertible matrices with integer entries whose inverses are also matrices with integer entries (namely, all matrices with integer entries that satisfy the condition $\det(A)=\pm 1$). Any matrix of this kind induces a bijective linear transformation $T_A:\mathds{Z}^d\to\mathds{Z}^d,\vec{p}\mapsto A\vec{p}$ and vice versa. We may generalize the previous definition to any group $G$ by replacing $\mathrm{GL}_d(\mathds{Z})$ with $\Aut(G)$, the set of group automorphisms of $G$. However, as we shall be primarily concerned with $G=\mathds{Z}^d$, the restricted definition is enough for our purposes.} of $\mathrm{GL}_d(\mathds{Z})$. The basic premises of the theory of extended symmetries of subshifts may be studied in \cite{BRY2018}. \begin{defi} Let $X\subseteq\mathcal{A}_X^{\mathds{Z}^d},Y\subseteq\mathcal{A}_Y^{\mathds{Z}^d}$ be two $\mathds{Z}^d$-subshifts. Given a $\mathds{Z}$-invertible matrix with integer entries $A\in\mathrm{GL}_d(\mathds{Z})$, we call a continuous mapping $f:X\to Y$ an $A$-\textbf{morphism} if the following equality holds: \[(\forall \vec{p}\in\mathds{Z}^d):f\circ\sigma_{\vec{p}}=\sigma_{A\vec{p}}\circ f. \] An \textbf{extended symmetry} is a bijective $A$-morphism from $X$ to itself, associated to some $A\in\mathrm{GL}_d(\mathds{Z})$. We shall denote the set of all extended symmetries as $\Sym(X,\mathds{Z}^d)$. This is a group under composition. \end{defi} Under our standard hypothesis (namely, a faithful shift action) the matrix $A_f$ associated to an extended symmetry $f$ is uniquely determined and thus there is an obvious mapping $\psi:\Sym(X,\mathds{Z}^d)\to\mathrm{GL}_d(\mathds{Z}), f\mapsto A_f$.
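As a basic example, on the full shift $X=\mathcal{A}^{\mathds{Z}^d}$ every $A\in\mathrm{GL}_d(\mathds{Z})$ is realized: the map $f_A$ given by $f_A(x)_{\vec{v}}\dfn x_{A^{-1}\vec{v}}$ is continuous and bijective (with inverse $f_{A^{-1}}$), and the direct computation \[f_A(\sigma_{\vec{p}}(x))_{\vec{v}}=(\sigma_{\vec{p}}(x))_{A^{-1}\vec{v}}=x_{A^{-1}\vec{v}-\vec{p}}=f_A(x)_{\vec{v}-A\vec{p}}=\sigma_{A\vec{p}}(f_A(x))_{\vec{v}}\] shows that it is an $A$-morphism, so $\psi(f_A)=A$. For a proper subshift $X$, such a map belongs to $\Sym(X,\mathds{Z}^d)$ only when it maps $X$ onto itself, which is precisely the obstruction studied in what follows.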
It is also easy to see that $\psi$ is a group morphism: \[f\circ g\circ \sigma_{\vec{p}} = f\circ\sigma_{\psi(g)\vec{p}}\circ g = \sigma_{\psi(f)(\psi(g)\vec{p})}\circ f\circ g, \] consequently, $\psi(f\circ g)=\psi(f)\psi(g)$. Evidently, $\psi(f)=I_d$ (the identity matrix) if and only if $f$ is a traditional automorphism of $X$, i.e. $\ker(\psi)=\Aut(X,\mathds{Z}^d)$. This implies that the quotient group $\Sym(X,\mathds{Z}^d)/\Aut(X,\mathds{Z}^d)$ is isomorphic to a subgroup of $\mathrm{GL}_d(\mathds{Z})$, and thus, determining the nature of the latter group as a subgroup of $\mathrm{GL}_d(\mathds{Z})$ is a very useful tool to describe $\Sym(X,\mathds{Z}^d)$. Due to their ``almost shift-commuting'' nature, there is a version of the Curtis-Hedlund-Lyndon theorem for extended symmetries (and, in general, for $A$-morphisms), which implies that an extended symmetry is a composition of a local map (in the sense of the classical Curtis-Hedlund-Lyndon theorem) and a lattice transformation given by the matrix $A$. The result, proved in \cite{BRY2018}, is as follows: \begin{teo}[Generalized Curtis-Hedlund-Lyndon theorem] \label{teo:curtishedlundlyndon} Let $f:X\to X$ be an extended symmetry from $\Sym(X,\mathds{Z}^d)$ with associated matrix $A=\psi(f)$. Then, there is a finite subset $U\subset\mathds{Z}^d$ and a function $F:\mathcal{A}^U\to\mathcal{A}$ such that the following equality holds for all $\vec{s}\in\mathds{Z}^d$: \[f(x)_{A\vec{s}} = F(x|_{\vec{s}+U}), \] in which we identify, as usual, a pattern with support $U$ with any of its translations. \end{teo} This, as is the case for automorphisms, allows us to show that, whenever two points match on a ``large'' set $R\subseteq\mathds{Z}^d$, their images under an extended symmetry $f$ match as well on a large set, which depends on $f$ and $R$. More precisely, if we suppose w.l.o.g. that the support $U$ (as defined in the theorem) of the symmetry $f$ is of the form $[-r\vec{\mathds{1}},r\vec{\mathds{1}}]$, then: \[x|_R = y|_R\implies f(x)|_{\psi(f)[R^{\circ r}]}=f(y)|_{\psi(f)[R^{\circ r}]}. \] In particular, if $R$ is a half-space, the set $\psi(f)[R^{\circ r}]$ is a half-space as well. Our goal is to characterize the group $\Sym(X,\mathds{Z}^d)/\Aut(X,\mathds{Z}^d)$ when $X=\shift{\theta}$, $\theta$ being a nontrivial bijective substitution, and then characterize the extended symmetry group explicitly whenever possible. We shall see that $\shift{\theta}$, by construction, has some distinguished points with \textbf{fractures}, that is, points comprised of subconfigurations of points from the minimal substitutive subshift $\shift{\theta}^\circ$ ``glued together'' in an independent way, and that any extended symmetry has to preserve these points with fractures. In particular, by analyzing these points adequately we can deduce strong restrictions on the matrices $\psi(f)\in\mathrm{GL}_d(\mathds{Z})$ for any $f\in\Sym(X,\mathds{Z}^d)$, as the ``shape'' of the fractures determined by a certain subset of $\mathds{Z}^d$ must be preserved by the matrix $\psi(f)$.
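To fix ideas on how such half-spaces transform, take $d=2$, $r=1$ and the half-plane $R=\{(m,n)\in\mathds{Z}^2:n\ge 0\}$; with the erosion $R^{\circ r}=\{\vec{p}\in\mathds{Z}^d:\vec{p}+[-r\vec{\mathds{1}},r\vec{\mathds{1}}]\subseteq R\}$ (consistent with the implication above), we get $R^{\circ 1}=\{(m,n):n\ge 1\}$, and for the rotation $\psi(f)=\left[\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right]$ we obtain \[\psi(f)[R^{\circ 1}]=\{(m,n)\in\mathds{Z}^2:m\le -1\}, \] again a half-plane: the matching region is merely eroded and then rigidly transformed by $\psi(f)$.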
Our choice of adequate points to show this result is given by the following lemma: \begin{lem}\label{lem:points_of_contradiction} Given any nontrivial bijective primitive substitution $\theta$ over a two-symbol alphabet $\mathcal{A}=\{0,1\}$, there exist $x,y\in\shift{\theta}$ such that $x|_{Q_{\vec{\mathds{1}}}^c}=y|_{Q_{\vec{\mathds{1}}}^c}$ but $x|_{Q_{\vec{\mathds{1}}}}=\overline{y}|_{Q_{\vec{\mathds{1}}}}$, where $Q_{\vec{\mathds{1}}}=\{(n_1,\dots,n_d)\in\mathds{Z}^d: (\forall 1\le i\le d):n_i\ge 0 \}$ is the canonical quadrant containing $\vec{\mathds{1}}$. \end{lem} \begin{dem} Without loss of generality, we may assume that the pattern $\theta^m(a)$ has the symbol $a$ on all $2^d$ corners for all $m$ (by replacing $\theta$ by a power $\theta^k$ if needed); this implies that there are fixed configurations for $\theta$ over $\mathds{N}^d$ with seeds $0$ and $1$. Thus, taking any fixed point of $\theta$ over $\mathds{Z}^d$ with seed $P$ and changing the value of $P_{\vec{0}}$ we obtain two valid points $x,y\in\shift{\theta}$ that differ only on the positive quadrant $Q_{\vec{\mathds{1}}}$. By the nature of a bijective substitution, since $x$ and $y$ are fixed points of $\theta_\infty$, the equality $x_{\vec{s}}=y_{\vec{s}}$ for even a single $\vec{s}\in Q_{\vec{\mathds{1}}}$ would imply $x|_{Q_{\vec{\mathds{1}}}}=y|_{Q_{\vec{\mathds{1}}}}$, which we know is not the case; thus, $x|_{Q_{\vec{\mathds{1}}}}=\overline{y}|_{Q_{\vec{\mathds{1}}}}$.\qed \end{dem} \begin{nota} A similar result holds for general (finite) alphabets: we can find two points $x,y$ such that $x|_{Q_{\vec{\mathds{1}}}^c}=y|_{Q_{\vec{\mathds{1}}}^c}$ but $x_{\vec{s}}\ne y_{\vec{s}}$ for all positions $\vec{s}\in Q_{\vec{\mathds{1}}}$. \end{nota} \begin{figure}\caption{A pair of points $x,y$ of the Thue--Morse substitution satisfying the conditions of Lemma \ref{lem:points_of_contradiction}.}\label{fig:contradiction_points} \end{figure} A pair of points satisfying the previous lemma is displayed in Figure \ref{fig:contradiction_points}, for the Thue-Morse substitution. It is easy to see that a similar argument works for any quadrant, changing the form of the substitution if so required: specifically, we may replace $\theta$ by an iterate $\theta^m$ such that $\theta^m(a)$ has the symbol $a$ on all corners, in particular, the corner $\vec{v}$ of $S^{(m)}$ that corresponds to the vertex of this quadrant (e.g. for $Q_{-\vec{\mathds{1}}}$, the quadrant containing $(-1,\dots,-1)$, we have $\vec{v}=\vec{s}^m-\vec{\mathds{1}}$). The rest of the argument from Lemma \ref{lem:points_of_contradiction} applies without modifications to show that changing the (unique) symbol of the seed located in a specific quadrant changes the symbols of the whole quadrant, without affecting the remaining symbols of the other quadrants. By the generalized Curtis-Hedlund-Lyndon theorem, we verify that the images of the aforementioned two points under any extended symmetry $f$ match along a ``large'' set of the form $\psi(f)[(Q_{\vec{\mathds{1}}}^{c})^{\circ r}]$. We shall use this to prove that, unless the image of a quadrant $Q$ by $\psi(f)$ is itself a quadrant, the restriction of $x$ to $Q^c$ determines $f(x)$ not only in $\psi(f)[(Q^c)^{\circ r}]$, but in $\psi(f)[Q^{\circ r}]$ as well, and from this we later infer that $x|_{Q^c}$ determines $f(x)$ in the whole plane; thus, the existence of two distinct points that match in $Q^c$ contradicts the bijectivity of $f\in\Sym(X,\mathds{Z}^d)$. For this purpose, we first need to determine what kind of ``shearing'' is allowed for an extended symmetry in a substitutive subshift.
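Before doing so, it is instructive to see a genuine shear in action (a two-dimensional illustration, independent of any particular substitution): the matrix \[A=\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}\in\mathrm{GL}_2(\mathds{Z}) \] sends $Q_{\vec{\mathds{1}}}$ to the cone $AQ_{\vec{\mathds{1}}}=\{(m+n,n):m,n\ge 0\}=\{(a,b)\in\mathds{Z}^2:0\le b\le a\}$, which contains no quadrant at all; moreover, every vertical line $\{a\}\times\mathds{Z}$ with $a\ge 0$ meets this cone in the finite set $\{a\}\times[0,a]$, a phenomenon that Lemma \ref{lem:finite_lines} below establishes in general.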
We introduce the following result: \begin{lem} \label{lem:fitting_quadrants} Let $A\in\mathrm{GL}_d(\mathds{Z})$ be a matrix with integer coefficients, invertible over $\mathds{Z}$. Then $AQ_{\vec{\mathds{1}}}$ cannot contain two distinct (canonical) quadrants. \end{lem} \begin{dem} Notice that $Q_{\vec{\mathds{1}}}$ is the set of all nonnegative linear combinations (with coefficients in $\mathds{Z}$) of the vectors of the canonical basis $\vec{e}_1=(1,0,0,\dotsc),\vec{e}_2=(0,1,0,\dotsc),\dotsc$. Thus, its image under $A$ is the set of all nonnegative integer linear combinations of the columns of $A$. Suppose $AQ_{\vec{\mathds{1}}}$ contains two distinct quadrants, $Q'$ and $Q''$. Note that $Q_{\vec{\mathds{1}}}$ is the intersection of the real first quadrant $(\mathds{R}^+_0)^d$ with $\mathds{Z}^d$; since $A$ is a matrix from $\mathrm{GL}_d(\mathds{Z})$, this means that $AQ_{\vec{\mathds{1}}}$ is the intersection of $A(\mathds{R}^+_0)^d$ (which is a convex set) with $\mathds{Z}^d$. Thus, any point with integer coordinates which is a convex combination of two points from $AQ_{\vec{\mathds{1}}}$ belongs to $AQ_{\vec{\mathds{1}}}$ as well. By this convexity argument, we may assume those two quadrants $Q',Q''$ to be adjacent, sharing a ``face'', i.e. there is a subset $H=H_{I,j}\subseteq\mathds{Z}^d$, given by a set of indices $I\subseteq\{1,\dots,d\}$ and an additional index $j\in\{1,\dots,d\}\setminus I$, which is of the form: \begin{align*} H=\{(m_1,\dots,m_d)\in\mathds{Z}^d:\,\, & m_j=0\wedge\phantom{-} \\ & (\forall i\in I):m_i\ge 0\wedge\phantom{-}\\ & (\forall i'\in \{1,\dots, d\}\setminus(I\cup\{j\})):m_{i'}<0 \}, \end{align*} such that, using the fact that the element $\vec{e}_j$ from the canonical basis is orthogonal to all elements of $H=H_{I,j}$, we have the following decomposition: \begin{align*} Q'&=\{\vec{m}\in\mathds{Z}^d:\vec{m}=\vec{m}_0+\lambda\vec{e}_j, \vec{m}_0\in H,\lambda\ge 0 \},\\ Q''&=\{\vec{m}\in\mathds{Z}^d:\vec{m}=\vec{m}_0+\lambda\vec{e}_j, \vec{m}_0\in H,\lambda< 0 \}. \end{align*} Thus, taking some fixed $\vec{m}_0\in H$, we see that for any $n\in\mathds{Z}$ the element $\vec{m}_0+n\vec{e}_j$ belongs to $AQ_{\vec{\mathds{1}}}$. Since $A\in\mathrm{GL}_d(\mathds{Z})$, the columns of $A$ are a basis for $\mathds{Z}^d$ and thus there are uniquely determined coefficients $\lambda_1,\dots,\lambda_d,\mu_1,\dots,\mu_d\in\mathds{Z}$ such that: \begin{align*} \vec{m}_0 &= \sum_{i=1}^{d} \lambda_i(A\vec{e}_i), \\ \vec{e}_j &= \sum_{i=1}^{d} \mu_i(A\vec{e}_i), \\ \vec{m}_0 + n\vec{e}_j &= \sum_{i=1}^{d} (\lambda_i+n\mu_i)(A\vec{e}_i). \end{align*} Note that, since $Q_{\vec{\mathds{1}}}$ is comprised of elements of $\mathds{Z}^d$ with nonnegative coordinates, the linearity of multiplication by $A$ implies that the coefficients $\lambda_1,\dots,\lambda_d$, being uniquely determined, must be nonnegative; that is, if $\vec{m}\in Q_{\vec{\mathds{1}}}$, the coefficients of $\vec{m}$ in the canonical basis carry over to the coefficients of $A\vec{m}$ in the basis $\{A\vec{e}_1,\dots,A\vec{e}_d\}$, the set of columns of $A$, and thus keep their corresponding sign. Because of this, as for all values of $n\in\mathds{Z}$ the vectors $\vec{m}_0+n\vec{e}_j$ belong to $AQ_{\vec{\mathds{1}}}$, the corresponding coefficients $\lambda_i+n\mu_i$ must be nonnegative for any value of $n\in\mathds{Z}$. Since $\vec{e}_j\ne\vec{0}$, at least one of the $\mu_i$ is nonzero; thus, choosing $n$ adequately we may force $\lambda_i+n\mu_i$ to be negative.
This contradicts the observation above about the positivity of coefficients of elements of $AQ_{\vec{\mathds{1}}}$ in the base of columns of $A$; so, this shows that $AQ_{\vec{\mathds{1}}}$ cannot contain two distinct quadrants.\qed \end{dem} \begin{nota} It is important to stress that, once again, the previous argument does not depend on the chosen quadrant or whether it is canonical or not, and we use $Q_{\vec{\mathds{1}}}$ for ease of description. More precisely, we may always multiply $A$ by a change of basis matrix of the form $P=[u_1\vec{e}_{\sigma(1)}\mid\cdots\mid u_d\vec{e}_{\sigma(d)}]$ to swap the quadrant $Q_{(u_1,\dots,u_d)}$ (with $u_1,\dots,u_d\in\{-1,+1\}$) with $Q_{\vec{\mathds{1}}}$. The same argument as above applies for the matrix $AP$ and thus for the matrix $A$, as multiplication by $P$ only permutes the quadrants. \end{nota} \begin{figure}\label{fig:quadrant_image} \end{figure} By the previous lemma, there is at most one quadrant $Q_{\vec{v}}$ that is entirely contained in $AQ_{\vec{\mathds{1}}}$, meaning that every other quadrant has points from the complement of $AQ_{\vec{\mathds{1}}}$. The argument below will use those points and the rigidity of bijective substitutions: any symbol on a fixed position of a pattern $\theta^m(a)$ determines the whole pattern. Our purpose is to exploit this to show that any symbol located at a position from $AQ_{\vec{\mathds{1}}}$ is determined by some pattern with support contained in $Q_{\vec{\mathds{1}}}^c$, unless the matrix $A$ is of a very specific type, namely, a matrix that does not ``shear'' the underlying $\mathds{Z}^d$ lattice. The reason for this lies in the following result: \begin{lem} \label{lem:finite_lines} Let $A\in\mathrm{GL}_d(\mathds{Z})$ be a matrix for which $AQ_{\vec{\mathds{1}}}$ does not contain a quadrant with vertex $\vec{0}$ as a subset. Then, for any point $\vec{p}\in AQ_{\vec{\mathds{1}}}$, one of the affine subspaces $\vec{p}+\mathds{Z}\vec{e}_i$, $i\in\{1,\dots,d\}$, has finite intersection with $AQ_{\vec{\mathds{1}}}$. \end{lem} \begin{dem} As noted in the proof of the previous lemma, the set $AQ_{\vec{\mathds{1}}}$ has a sort of ``convexity property'', so that given two points $\vec{q}_1,\vec{q}_2\in AQ_{\vec{\mathds{1}}}$, any point $\vec{q}\in\mathds{Z}^d$ that is a rational convex combination of $\vec{q}_1$ and $\vec{q}_2$ belongs to $AQ_{\vec{\mathds{1}}}$ as well. If each intersection $AQ_{\vec{\mathds{1}}}\cap (\vec{p}+\mathds{Z}\vec{e}_i)$ were infinite, then by this convexity property, $AQ_{\vec{\mathds{1}}}$ must contain, for each index $i\in\{1,\dots,d\}$, either $\vec{p}+\mathds{N}_0\vec{e}_i$ or $\vec{p}-\mathds{N}_0\vec{e}_i$ as a subset. Again by convexity, this implies that $AQ_{\vec{\mathds{1}}}$ contains a quadrant with vertex $\vec{p}$. Since $AQ_{\vec{\mathds{1}}}$ is the intersection of $\mathds{Z}^d$ with a closed, convex subset of $\mathds{R}^d$ that contains $\vec{0}$ and a quadrant with vertex $\vec{p}$, $AQ_{\vec{\mathds{1}}}$ must contain a quadrant with vertex $\vec{0}$ as a subset, a contradiction with the hypothesis.\qed \end{dem} \begin{nota} The same convexity argument as above shows that, if $AQ_{\vec{\mathds{1}}}$ (or any set that is the intersection of a closed, convex subset of $\mathds{R}^d$ and $\mathds{Z}^d$) contains a quadrant $Q'$ as a subset, then for any $\vec{p}\in AQ_{\vec{\mathds{1}}}$ this set contains also a quadrant with vertex $\vec{p}$ as a subset, which is a translation of $Q'$.
In particular, the argument above works for any quadrant, not just for $Q_{\vec{\mathds{1}}}$. \end{nota} \begin{lem} \label{lem:quadrant_determinism} Let $\theta$ be a nontrivial, primitive, bijective $d$-dimensional substitution on a finite alphabet $\mathcal{A}$ and let $f\in\Sym(\shift{\theta},\mathds{Z}^d)$ be an extended symmetry. If $\psi(f)Q_{\vec{\mathds{1}}}$ is a strict subset of a quadrant $Q$ with vertex $\vec{0}$, then the configuration $x|_{Q_{\vec{\mathds{1}}}^c}$ determines $f(x)$ completely. \end{lem} \begin{dem} The hypothesis on the matrix $\psi(f)$ shows us by Lemma \ref{lem:finite_lines} that, given any point $\vec{q}\in\psi(f)Q_{\vec{\mathds{1}}}$, there exists some direction $\vec{e}_j$ parallel to the coordinate axes for which the intersection $\psi(f)Q_{\vec{\mathds{1}}}\cap(\vec{q}+\mathds{Z}\vec{e}_j)$ is finite. This implies that any rectangle containing $\vec{q}$ and with sufficiently large edge lengths has to contain points outside of $\psi(f)Q_{\vec{\mathds{1}}}$, which is the basis of the argument below. Since $f$ is an extended symmetry, it has an associated radius $r$ so that $f(x)_{A\vec{k}}$ depends only on $x|_{[\vec{k}-r\vec{\mathds{1}},\vec{k}+r\vec{\mathds{1}}]}$. Take the least such $r$ and consider the set $(\psi(f)Q_{\vec{\mathds{1}}}^c)^{\circ r}$; this set coincides with the complement of $C^*\dfn \psi(f)Q_{\vec{\mathds{1}}}+[-r\vec{\mathds{1}},r\vec{\mathds{1}}]$, and since $C^*$ is itself contained in some translate of $\psi(f)Q_{\vec{\mathds{1}}}$, Lemma \ref{lem:finite_lines} applies to $C^*$ as well. Choose a fixed vector $\vec{q}\in C^*$, and let $\vec{e}_j$ be the element from the basis of $\mathds{Z}^d$ for which the intersection $C^*\cap(\vec{q}+\mathds{Z}\vec{e}_j)$ is finite. Then, there is a value $h$ such that the intersection $(\vec{q}+\mathds{Z}\vec{e}_j)\cap C^*$ is contained in the square $[\vec{q}-h\vec{\mathds{1}},\vec{q}+h\vec{\mathds{1}}]$. Hence, any rectangle with edge length at least $2(h+1)$ in the direction $\vec{e}_j$ must necessarily contain points from the complement of $C^*$, and thus has nonempty intersection with $(\psi(f)Q_{\vec{\mathds{1}}}^c)^{\circ r}$. Thus, by choosing $m$ such that the smallest side length of $S^{(m)}=[\vec{0},\vec{s}^m-\vec{\mathds{1}}]$ is sufficiently larger than this value $h$ (say, for instance, $m>1+\log_2(h)$) we ensure that the intersection $(\vec{q}+\mathds{Z}\vec{e}_j)\cap C^*$ has smaller cardinality than the intersection $(\vec{q}+\mathds{Z}\vec{e}_j)\cap R$, where $R$ is any translate of the rectangle $S^{(m)}$ that contains $\vec{q}$. Thus, $R\cap (\psi(f)Q_{\vec{\mathds{1}}}^c)^{\circ r}$ must be nonempty. We visualize this situation in Figure \ref{fig:large_rectangle_overlap}. \begin{figure}\caption{Any translate $R$ of $S^{(m)}$ containing $\vec{q}$, for $m$ large enough, must overlap the region $(\psi(f)Q_{\vec{\mathds{1}}}^c)^{\circ r}$.}\label{fig:large_rectangle_overlap} \end{figure} By the encoding from Lemma \ref{lem:codified_system_subst}, we may represent $\mathds{Z}^d$ as the disjoint union of translates of $S^{(m)}=[\vec{0},\vec{s}^m-\vec{\mathds{1}}]$, such that the restriction of $f(x)$ to any of these rectangles is a pattern of the form $\theta^m(a)$, $a\in\mathcal{A}$. Since the substitution is bijective, knowing a single symbol at any position in one of these rectangles is enough to determine the whole pattern $\theta^m(a)$ covering it. Thus, $f(x)_{\vec{q}}$ is entirely determined by any other $f(x)_{\vec{p}}$ where $\vec{p}$ shares the same rectangle $R$.
In particular, we may choose $\vec{p}$ from the nonempty intersection $R\cap{(\psi(f)Q_{\vec{\mathds{1}}}^c)^{\circ r}}$. But the symbol of $f(x)$ at coordinate $\vec{p}$ is entirely determined by $x|_{Q_{\vec{\mathds{1}}}^c}$ due to the generalized Curtis-Hedlund-Lyndon theorem. Thus, $f(x)_{\vec{q}}$ is already determined by $x|_{Q_{\vec{\mathds{1}}}^c}$. This argument applies to any $\vec{q}\in C^*$, while the remaining symbols, at positions in $(\psi(f)Q_{\vec{\mathds{1}}}^c)^{\circ r}$, are directly determined by $x|_{Q_{\vec{\mathds{1}}}^c}$ via the same theorem; this proves the desired result.\qed \end{dem} \begin{nota} The same proof as above applies to the complement of any quadrant, without regard to its vertex. Thus, if for some quadrant $Q$ the set $\psi(f)[Q]$ is a strict subset of a quadrant with the same vertex, then the same argument applies, showing that we may entirely disregard a quadrant in the preimage $x$ to determine its image $f(x)$. \end{nota} \begin{cor} \label{cor:quadrant_permutations} Let $\theta$ be a nontrivial, primitive, bijective substitution over a finite alphabet $\mathcal{A}$ and let $f\in\Sym(\shift{\theta},\mathds{Z}^d)$ be an extended symmetry. Then, $\psi(f)$ must send quadrants to quadrants and thus it must induce a permutation on the set of one-dimensional subspaces $\{\mathds{Z}\vec{e}_1,\mathds{Z}\vec{e}_2,\dotsc,\mathds{Z}\vec{e}_d \}$. \end{cor} \begin{dem} Suppose $\psi(f)Q$ is not a quadrant for some quadrant $Q$. Then either $\psi(f)Q$ strictly contains a quadrant $Q'$ (w.l.o.g. with vertex $\vec{0}$) or not; the second case falls under the hypothesis of the previous lemma and thus $f$ sends two points that match in the $2^d-1$ remaining quadrants composing $Q^c$ to the same point, if such points exist. Those points indeed do exist, as shown by Lemma \ref{lem:points_of_contradiction}, raising a contradiction with the injectivity of $f$. If $\psi(f)[Q]$ strictly contains a quadrant $Q'$ (with vertex $\vec{0}$) then $\psi(f)^{-1}Q'$ is strictly contained in $Q$ and contains $\vec{0}$; thus, the set $\psi(f)^{-1}Q'$ does not contain a quadrant $Q''$ as a subset. But then, since $f^{-1}$ is an extended symmetry itself and $\psi(f^{-1})=\psi(f)^{-1}$, the mapping $f^{-1}$ satisfies the hypothesis of Lemma \ref{lem:quadrant_determinism}, and thus we get to the same contradiction regarding the injectivity of $f^{-1}$. \begin{figure}\label{fig:replace_f_by_inverse} \end{figure} In both cases, if $\psi(f)$ fails to send a quadrant to a quadrant, then either $f$ or $f^{-1}$ cannot be injective, a contradiction since all elements of $\Sym(\shift{\theta},\mathds{Z}^d)$ are homeomorphisms. Then, since any matrix $\psi(f)$ with $f\in\Sym(\shift{\theta},\mathds{Z}^d)$ sends quadrants to quadrants, it must send sets that can be written as intersections of quadrants to other sets of the same form. In particular, for any $1\le j\le d$, a set of the form $\pm\mathds{N}_0\vec{e}_j$ can be written as the intersection of all the quadrants with vertex $\vec{0}$ contained in the half-space containing $\pm\vec{e}_j$; thus, the image of $\pm\mathds{N}_0\vec{e}_j$ must be another set of this same form.
Since $\mathds{Z}\vec{e}_j=\mathds{N}_0\vec{e}_j\cup(-\mathds{N}_0\vec{e}_j)$ and $\psi(f)$ is (a matrix representing) a linear function, this means $\psi(f)$ sends any (discrete) linear subspace $\mathds{Z}\vec{e}_j$ to some other $\mathds{Z}\vec{e}_i$, proving the last assertion.\qed \end{dem} The previous results, combined, lead to the following theorem: \begin{teo} \label{teo:structure_of_symmetry_group} For a $d$-dimensional, nontrivial, primitive, bijective substitution $\theta$, the quotient group of all admissible lattice transformations of the subshift $\shift{\theta}$, $\Sym(\shift{\theta},\mathds{Z}^d)/\Aut(\shift{\theta},\mathds{Z}^d)$, is isomorphic to a subgroup of the hyperoctahedral group $Q_d\cong (\mathds{Z}/2\mathds{Z})\wr S_d=(\mathds{Z}/2\mathds{Z})^d\rtimes S_d$, which represents the symmetries of the $d$-dimensional cube. Thus, the extended symmetry group $\Sym(\shift{\theta},\mathds{Z}^d)$ is virtually $\mathds{Z}^d$. \end{teo} \begin{dem} As stated previously in Corollary \ref{cor:quadrant_permutations}, since for any $f\in\Sym(\shift{\theta},\mathds{Z}^d)$ the linear mapping $\psi(f)$ sends quadrants to quadrants, the set $\Sym(\shift{\theta},\mathds{Z}^d)$ acts on the one-dimensional subspaces $\mathds{Z}\vec{e}_1,\dots,\mathds{Z}\vec{e}_d$ by permutation. Thus, $\psi(f)$ must be given by a matrix that sends each $\vec{e}_i$ from the canonical basis to a vector $\pm\vec{e}_j$ and thus each column of $\psi(f)$ is such a vector; this, plus the nonsingularity of $\psi(f)$, shows that the associated matrix must be of the form: \[\psi(f)=[(-1)^{t_1}\vec{e}_{\sigma(1)}\mid(-1)^{t_2}\vec{e}_{\sigma(2)}\mid\cdots\mid(-1)^{t_d}\vec{e}_{\sigma(d)} ], \] where $\sigma$ is a permutation of $\{1,\dots,d\}$ and $t_1,\dots,t_d\in\{0,1\}$. These matrices correspond to a finite subgroup of $\mathrm{GL}_d(\mathds{Z})$ which is isomorphic to $Q_d$. Indeed, the set of all diagonal matrices of this form is isomorphic to $(\mathds{Z}/2\mathds{Z})^d$, while the set of all matrices of this form with nonnegative entries is isomorphic to $S_d$, and any matrix of the aforementioned form is, in a unique way, a product of a permutation matrix and a diagonal matrix of the former kind. Thus, $\psi$ can be seen as a group morphism $\Sym(\shift{\theta},\mathds{Z}^d)\to Q_d$ by identifying the latter with the corresponding matrices; since $\ker(\psi)=\Aut(\shift{\theta},\mathds{Z}^d)$, we conclude that $\Sym(\shift{\theta},\mathds{Z}^d)/\Aut(\shift{\theta},\mathds{Z}^d)\cong\mathrm{im}(\psi)\le Q_d$, the desired result.\qed \end{dem} The previous result imposes a very strict limitation on the structure of the group $\Sym(\shift{\theta},\mathds{Z}^d)$; thus, with some additional information, we may be able to compute $\Sym(\shift{\theta},\mathds{Z}^d)$ explicitly. An example of this is the following: \begin{cor} The extended symmetry group of the generalized Thue-Morse substitution, given by: \begin{align*} \theta_{\rm TM}:\{0,1\}&\to\{0,1\}^{\{0,1\}^d}\\ a&\mapsto ((a+m_1+\dots+m_d)\bmod 2)_{(m_1,\dotsc,m_d)\in\{0,1\}^d}, \end{align*} is a semidirect product of the form: \[\Sym(\shift{\theta_{\rm TM}},\mathds{Z}^d)\cong (\mathds{Z}^d\times\mathds{Z}/2\mathds{Z})\rtimes Q_d, \] generated by the shifts, the relabeling map $\delta(x)=\overline{x}$ and the $2^dd!$ rigid symmetries of the coordinate axes given by $(\varphi_A(x))_{\vec{s}}=x_{A\vec{s}}$, with $A\in Q_d$.
\end{cor} \begin{dem} By Theorem \ref{teo:structure_of_symmetry_group}, $\Sym(\shift{\theta_{\rm TM}},\mathds{Z}^d)$ is an $\Aut(\shift{\theta_{\rm TM}},\mathds{Z}^d)$-by-$R$ group extension, where $\Aut(\shift{\theta_{\rm TM}},\mathds{Z}^d)=\langle\{\sigma_{\vec{k}}\}_{\vec{k}\in\mathds{Z}^d}\mathbin{\ensuremath{\mathaccent"7201\cup}}\{\delta\}\rangle\cong\mathds{Z}^d\times(\mathds{Z}/2\mathds{Z})$ and $R$ is a subgroup of $Q_d$. We have to verify, then, that the rigid coordinate symmetries $\varphi_A$ are effectively elements of $\Sym(\shift{\theta_{\rm TM}},\mathds{Z}^d)$, as they are then mapped to the corresponding elements of $Q_d$ (seen as a matrix group) in $\mathrm{GL}_d(\mathds{Z})$ and their compositions and inverses are also rigid coordinate symmetries. Let $f=\varphi_A$ be any rigid coordinate symmetry and let $\Sigma$ be the set of all fixed points of $\theta_{\rm TM}^2$, which are in a $1$-$1$ correspondence with the set of all possible seeds $\{0,1\}^{\{-1,0\}^d}$. By inspection, we see that any rigid coordinate symmetry sends a point of $\Sigma$ to a shift of another point of this set (note that the image itself may fail to belong to $\Sigma$: a rigid coordinate symmetry fixes the position $\vec{0}$, but it need not preserve the support $\{-1,0\}^d$ of the seed); since any subpattern of any point $y\in\shift{\theta}$ is a subpattern of an $x\in\Sigma$, by the generalized Curtis-Hedlund-Lyndon theorem any subpattern of $f(y)$ is also a subpattern of some other $x'\in\Sigma$; thus, $f(y)\in\shift{\theta}$. We see then that $f$ maps $\shift{\theta}$ to itself; as it has an obvious inverse $\varphi_{A^{-1}}$ which is also a rigid coordinate symmetry, this function $f$ is a homeomorphism $\shift{\theta}\to\shift{\theta}$ satisfying the Curtis-Hedlund-Lyndon condition and hence an element of $\Sym(\shift{\theta},\mathds{Z}^d)$. Since there is a rigid coordinate symmetry $\varphi_A$ for all $A\in Q_d$, we conclude that the aforementioned group $R$ is all of $Q_d$. We note that $\varphi_A\circ\varphi_{A'}=\varphi_{A'A}$, so $\{\varphi_A:A\in Q_d\}$ is a subgroup of $\Sym(\shift{\theta_{\rm TM}},\mathds{Z}^d)$ which $\psi$ maps isomorphically onto $Q_d$, providing a right inverse $\iota$ for $\psi$. Thus, the short exact sequence $\Aut(\shift{\theta_{\rm TM}},\mathds{Z}^d)\mathrel{{\hookrightarrow}}\Sym(\shift{\theta_{\rm TM}},\mathds{Z}^d)\mathrel{{\twoheadrightarrow}} Q_d$ splits, resulting in the desired decomposition of $\Sym(\shift{\theta_{\rm TM}},\mathds{Z}^d)$ as a semidirect product. \qed \end{dem} \section{The Robinson shift and fractures in subshifts} In this section, we shall temporarily focus our attention away from the substitutive tilings studied above and analyze a well-known example of \textbf{strongly aperiodic} $\mathds{Z}^2$-subshift, the Robinson shift. \begin{defi} Let $X$ be a $\mathds{Z}^d$-subshift. We say $X$ is \textbf{strongly aperiodic} if all points in $X$ have trivial stabilizer, i.e., for all $x\in X$, $\sigma_{\vec{k}}(x)=x$ implies $\vec{k}=\vec{0}$. \end{defi} The Robinson shift is a nearest neighbor two-dimensional shift with added local restrictions (and thus of finite type), whose alphabet consists of all the rotations and reflections of the five tiles from Figure \ref{fig:robinson_tiles}, resulting in $28$ different symbols. \begin{figure} \caption{The five types of Robinson tiles, resulting in an alphabet of $28$ symbols after applying all possible rotations and reflections.
The third tile is usually called a \textbf{cross}.} \label{fig:robinson_tiles} \end{figure} The Robinson shift $X_{\rm Rob}$ is given by the following local rules: \begin{enumerate}[label=(\arabic*)] \item Every arrow head in a tile must be in contact with an arrow tail from an adjacent tile (nearest neighbor rule). This is similar to the local rule of a Wang tiling (although not exactly equivalent; see \cite{Rob71} or \cite{GJS2012} for details). \item There is a translation of the sublattice $2\mathds{Z}\times2\mathds{Z}$ that only has rotations of the central tile of Figure \ref{fig:robinson_tiles} (which shall be referred to as \textbf{crosses}). \item Any other crosses appear diagonally adjacent to one of the crosses from the sublattice of Rule (2). Namely, if the cross-only sublattice of a given point is $2\mathds{Z}\times2\mathds{Z}+\vec{k}$, then any other cross is placed at one of the points from $2\mathds{Z}\times2\mathds{Z}+\vec{k}+\vec{\mathds{1}}$. \end{enumerate} It is easy to see that those rules can be enforced with strictly local restrictions and thus $X_{\rm Rob}$ is a shift of finite type. These rules force the $28$ basic tiles to form larger patterns with similar behaviors to each of the five tiles (in particular, patterns of size $(2^n-1)\times(2^n-1)$ that behave as larger analogues of crosses and that are usually referred to as \textbf{$n$-th order supertiles}). By compactness, as we can always build larger supertiles from smaller ones, we can prove that $X_{\rm Rob}$ is a non-empty strongly aperiodic subshift; it is not minimal, but it has a unique minimal subsystem $M_{\rm Rob}$ (which is a factor of a subshift of finite type). \begin{figure} \caption{The formation of a second order supertile of size $3\times 3$.} \label{fig:supertile} \end{figure} The following result was proven by Sebastián Donoso and Wenbo Sun in \cite{DS2014}: \begin{teo} $\Aut(M_{\rm Rob},\mathds{Z}^2)=\langle\sigma_{(1,0)},\sigma_{(0,1)}\rangle\cong\mathds{Z}^2$. \end{teo} From this result it is possible to show that the same holds for $X_{\rm Rob}$, namely that the only automorphisms of the Robinson shift are the trivial ones. We aim to extend this result by computing the extended symmetry group of the Robinson shift. For this, we need to introduce a distinguished subset of $\mathds{Z}^2$ representing part of the structure of a shift that is preserved by extended symmetries: \begin{defi} Let $X$ be a strongly aperiodic $\mathds{Z}^2$-subshift. We say $X$ has a \textbf{fracture} in the direction $\vec{q}\in\mathds{Z}^2$ if there is a point $x^*\in X$, infinitely many different values $k_1<k_2<k_3<\dotsc\in\mathds{Z}$, and two disjoint half-planes $S^+,S^-\subseteq\mathds{Z}^2$ separated by $\mathds{Z}\vec{q}$ (i.e. $S^+\cap S^-=S^+\cap\mathds{Z}\vec{q}=S^-\cap\mathds{Z}\vec{q}=\varnothing$; it is not necessary that $S^+\cup S^-\cup\mathds{Z}\vec{q}=\mathds{Z}^2$) such that, for each $j\in\mathds{N}$, there is a point $x^{(j)}\in X$ that satisfies the two conditions: \[x^{(j)}|_{S^+}=x^*|_{S^+},\quad x^{(j)}|_{S^-}=\sigma_{k_j\vec{q}}(x^*)|_{S^-}. \] \end{defi} \begin{nota} We exclude subshifts with periodic points from this definition as, if $x\in\Per_{\vec{p}}(X)$, then we may take $k_j=j$ and $x^{(j)}=x$ for all values of $j$, resulting in a point with a fracture in the direction $\vec{p}$; this makes the definition of direction of fracture redundant with the concept of direction of periodicity, which is also preserved by extended symmetries.
\end{nota} \begin{lem} Let $\vec{q}$ be a direction of fracture for a two-dimensional strongly aperiodic subshift $X$ and $f\in\Sym(X,\mathds{Z}^2)$. Then $\psi(f)\vec{q}$ is a direction of fracture as well. \end{lem} \begin{dem} Let $\vec{q}$ be a direction of fracture, and $x^*, (x^{(j)})_{j\in\mathds{N}},(k_j)_{j\in\mathds{N}}$ be the associated points and values from the definition above. By the generalized Curtis-Hedlund-Lyndon theorem, since $x^*|_{S^+}=x^{(j)}|_{S^+}$, we have $f(x^*)|_{\psi(f)((S^+)^{\circ r})}=f(x^{(j)})|_{\psi(f)((S^+)^{\circ r})}$, where $r$ is the radius of the symmetry $f$. By the same argument, and since $f\circ\sigma_{\vec{q}}=\sigma_{\psi(f)\vec{q}}\circ f$, we conclude that $f(x^{(j)})|_{\psi(f)((S^-)^{\circ r})}=\sigma_{k_j\psi(f)\vec{q}}(f(x^*))|_{\psi(f)((S^-)^{\circ r})}$. Note that, since $S^+$ and $S^-$ are half-planes disjoint from the linear subspace $\mathds{Z}\vec{q}$, and $\psi(f)$ is a linear map, $(S^\pm)^{\circ r}$ are also half-planes and thus their corresponding images $\psi(f)((S^\pm)^{\circ r})$ are half-planes as well. As images of disjoint sets under an injective map, they are also disjoint from $\mathds{Z}(\psi(f)\vec{q})$ and from each other. Thus, by defining $y^*=f(x^*),y^{(j)}=f(x^{(j)})$ we see that these points constitute a fracture of $X$ in the direction $\psi(f)\vec{q}$.\qed \end{dem} This result provides a subset of $\mathds{Z}^2$ over which the set $\Sym(X,\mathds{Z}^2)$ is forced to act ``naturally''; thus, if this subset has sufficiently strong constraints coming from the structure of $X$, this enforces similar restrictions on the possible values of $\psi(f)$ for $f\in\Sym(X,\mathds{Z}^2)$. As we shall see below, this is the case for the aperiodic Robinson shift: \begin{prop} For the Robinson shift, $\Sym(X_{\rm Rob},\mathds{Z}^2)\cong\mathds{Z}^2\rtimes D_4$, where $D_4$ is the dihedral group of order $8$. \end{prop} \begin{dem} To prove this result, we will show that the set $\mathcal{S}$ of all directions of fracture of $X_{\rm Rob}$ is $\mathds{Z}\vec{e}_1\cup\mathds{Z}\vec{e}_2$. Assuming this as true, we see that, since $\psi(f)$ is always a $\mathds{Z}$-invertible matrix, it must send $\{\vec{e}_1,\vec{e}_2\}$ to a basis of $\mathds{Z}^2$ contained in $\mathds{Z}\vec{e}_1\cup\mathds{Z}\vec{e}_2$, which is necessarily a set of the form $\{\varepsilon_1\vec{e}_i,\varepsilon_2\vec{e}_j\}$ with $\{i,j\}=\{1,2\}$ and $\varepsilon_1,\varepsilon_2\in\{\pm 1\}$, and thus the elements of $\Sym(X_{\rm Rob},\mathds{Z}^2)$ correspond to one of the eight possible matrices belonging to the standard copy of $D_4=Q_2$ (defined in the previous section) in $\mathrm{GL}_2(\mathds{Z})$. Then, by finding an explicit subgroup of $\Sym(X_{\rm Rob},\mathds{Z}^2)$ mapped isomorphically onto $D_4$ by $\psi$, we deduce the claimed semidirect product decomposition. To show that $X_{\rm Rob}$ has fractures in the directions $\vec{e}_1$ and $\vec{e}_2$, we need to recall some basic details about the construction of an infinite valid configuration of the Robinson shift. As stated above, the five basic Robinson tiles (together with their rotations and reflections) combine to form $3\times 3$ patterns with a similar behavior to crosses, named second order supertiles. Four of these second order supertiles, together with smaller substructures, further combine to form $7\times 7$ patterns (third order supertiles) and so on. In every case, the central tile of an $n$-th order supertile is a cross, which gives an orientation to the supertile in a similar way to the two-headed, L-shaped arrow on a cross.
We may fill the whole upper right quadrant $Q_{\vec{\mathds{1}}}=\mathds{N}_0^2$ as follows: we start by placing a cross on its vertex $\vec{0}$ with its L-shaped arrow pointing up and right, and then place another cross with the same orientation at the position $(1,1)$. This new cross, together with the previously placed one, allows us to fill the lower $3\times 3$ section of $\mathds{N}_0^2$, $[0,2]^2$, with a second order supertile. We iterate this process by placing a cross with the same orientation at the positions $(3,3),(7,7),\dots,(2^n-1,2^n-1),\dots$ and constructing the corresponding second, third, \dots $n$-th order supertile and so on. By compactness, there is only one way to fill all of $\mathds{N}_0^2$ as a limit of this process. We call the configuration obtained an infinite order supertile. We may fill the other three quadrants with similar constructions resulting in infinite order supertiles with different orientations, each of these separated from the other infinite supertiles by a row or column of copies of the first tile from Figure \ref{fig:robinson_tiles}. As we see in Figure \ref{fig:robinson_supertile_separation}, this will result in a translate of $\mathds{Z}\times\{0\}\cup\{0\}\times\mathds{Z}$ containing only copies of this tile, with all of the tiles in one of the strips $\mathds{Z}\times\{0\}$ or $\{0\}\times\mathds{Z}$ (the latter in the figure) having the same orientation, while the other strip will have all of its tiles pointing towards the center. \begin{figure} \caption{A fragment of a point from the Robinson shift, distinguishing the four supertiles involved, the vertical and horizontal strips of tiles separating each supertile and the $2\mathds{Z}\times 2\mathds{Z}$ sublattice that contains only crosses. Note that the tiles in the vertical strip separating the supertiles are copies of the first tile of Figure \ref{fig:robinson_tiles} with the same orientation.} \label{fig:robinson_supertile_separation} \end{figure} Since the Robinson shift behaves like a nearest-neighbor shift with added restrictions, the existence of a vertical (resp., horizontal) strip with copies of the same tile allows us to vertically shift the tiles contained in the right half-plane however we see fit, as long as the coset of the $2\mathds{Z}\times 2\mathds{Z}$ sublattice containing only crosses is respected. In practice, this shows that in the point $x\in X_{\rm Rob}$ represented partially in Figure \ref{fig:robinson_supertile_separation} we may replace the tiles from the right half-plane with the corresponding tiles from $\sigma_{(0,2k)}(x)$ and obtain valid points for all values of $k\in\mathds{Z}$. We see an example of this in Figure \ref{fig:robinson_breaking}. \begin{figure} \caption{Two possible ways in which the tiling from Figure \ref{fig:robinson_supertile_separation} exhibits fracture-like behavior while still resulting in a valid point from $X_{\rm Rob}$.} \label{fig:robinson_breaking} \end{figure} This procedure shows that $X_{\rm Rob}$ has $\vec{e}_1$ and $\vec{e}_2$ as directions of fracture. Now, we need to show that all directions of fracture are contained in the set $\mathds{Z}\vec{e}_1\cup\mathds{Z}\vec{e}_2$, and thus all matrices from $\psi[\Sym(X_{\rm Rob},\mathds{Z}^2)]$ send the set $\{\vec{e}_1,\vec{e}_2\}$ to a linearly independent subset of $\{\vec{e}_1,\vec{e}_2,-\vec{e}_1,-\vec{e}_2\}$.
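Explicitly, this leaves only the eight candidate matrices \[\begin{bmatrix} \pm 1 & 0 \\ 0 & \pm 1 \end{bmatrix},\qquad \begin{bmatrix} 0 & \pm 1 \\ \pm 1 & 0 \end{bmatrix} \] (with all four sign combinations in each case), which are precisely the elements of the standard copy of $D_4=Q_2$ in $\mathrm{GL}_2(\mathds{Z})$ from the previous section.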
The argument we shall use for this follows a similar outline to the technique used in the proof of Corollary \ref{cor:quadrant_permutations}: the points of the Robinson shift form a hierarchical structure away from a horizontal or vertical fracture, allowing for a decomposition into subpatterns of arbitrarily large size aligned with a lattice of the form $2^n\mathds{Z}\times 2^n\mathds{Z}$ (this is similar to the decomposition of a point from a substitutive subshift into patterns of the form $\theta^m(a),a\in\mathcal{A}$ for arbitrarily large values of $m$). The existence of fractures that are neither vertical nor horizontal would result in ``ruptures'' in this hierarchical structure, leading to a contradiction. Formally, we proceed as follows. Suppose that $X_{\rm Rob}$ has a fracture in the direction $\vec{q}\in\mathds{Z}^2\setminus(\mathds{Z}\vec{e}_1\cup\mathds{Z}\vec{e}_2)$, and let $S^+,S^-$ be the disjoint half-planes separated by $\mathds{Z}\vec{q}$. The set $F_{\vec{q}}=\mathds{Z}^2\setminus (S^+\mathbin{\ensuremath{\mathaccent"7201\cup}} S^-)$ is necessarily of the form $\mathds{Z}\vec{q}+[\vec{r}_1,\vec{r}_2]$, namely, a finite union of translates of $\mathds{Z}\vec{q}$, and thus its intersection with any set of the form $\mathds{Z}\times\{k\}$ or $\{k\}\times\mathds{Z}$ is finite. This is because the intersection of such a set with $\mathds{Z}\vec{q}$ consists of at most a single point, as $\vec{q}$ is not a multiple of $\vec{e}_1$ or $\vec{e}_2$. Thus, for any sufficiently large value $M\in\mathds{N}$, it is easy to verify that for any point $\vec{p}\in F_{\vec{q}}$, any translation of the rectangle $[-M\vec{\mathds{1}},M\vec{\mathds{1}}]$ that contains $\vec{p}$ also contains points from either $S^+$ or $S^-$ (or both). \begin{figure} \caption{The substructure of a point of $X_{\rm Rob}$ in terms of $n$-th order supertiles. Note how all supertiles overlap either $S^+$ or $S^-$.} \end{figure} Choose $n\in\mathds{N},n>1$ such that for $M=2^n-1$ the above condition holds, while satisfying the additional condition $M>2k_1\|\vec{q}\|_1$. All $n$-th order supertiles, thus, contain points from $S^+\mathbin{\ensuremath{\mathaccent"7201\cup}} S^-$. Let $\vec{p}$ be an element from $F_{\vec{q}}$ that belongs to the support of an $n$-th order supertile, and suppose this supertile overlaps the half-plane $S^+$. If there is no such supertile, all $n$-th order supertiles containing points of $F_{\vec{q}}$ only overlap $S^-$, implying, since $S^+$ is the intersection of a real half-plane $H_{\vec{\alpha},c}=\{\vec{v}\in\mathds{R}^2:\langle\vec{v},\vec{\alpha}\rangle \ge c \}$ with $\mathds{Z}^2$, that $S^+$ is a translation of $\mathds{Z}\times(\pm\mathds{N})$ (or $(\pm\mathds{N})\times\mathds{Z}$). Any lattice point which is a convex combination of points of $S^+$ belongs to $S^+$ as well, so $S^+$ cannot have ``gaps'', and it is a union of disjoint, horizontally or vertically adjacent translates of $[1,2^n]^2$. This implies that $\vec{q}$ is in the set $\mathds{Z}\vec{e}_1\cup\mathds{Z}\vec{e}_2$, a contradiction; thus, the aforementioned supertile exists. Evidently, the same argument shows the existence of other $n$-th order supertiles which intersect $S^{-}$. Since each horizontal or vertical strip $F_{\vec{q}}\cap(\mathds{Z}\times\{k\})$ (resp.
$F_{\vec{q}}\cap(\{k\}\times\mathds{Z})$) intersects finitely many supertiles, we see that the arrangement of the $n$-th order supertiles in $S^+$ away from a vertical or horizontal fracture (which in this case must correspond to a bi-infinite column or row of copies of the first tile from Figure \ref{fig:robinson_tiles}, all with the same orientation) affects the placement of the supertiles in $S^-$ as well. However, since the tiling has a fracture in the direction $\vec{q}$, we may shift the supertiles in $S^-$ by $k_1\vec{q}$ and obtain a valid configuration. By our choice of $n$, the shift $\sigma_{k_1\vec{q}}$ moves the $n$-th order supertiles by less than $M$ units both horizontally and vertically (since $M>2k_1\|\vec{q}\|_1$), and thus the supertiles in $S^-$ are shifted to a position that does not match the arrangement of supertiles from $S^+$. We may see this situation in Figure \ref{fig:supertile_breaker}. \begin{figure} \caption{How a shift by $k_1\vec{q}$ makes the arrangement of supertiles in $S^+$ not match with the corresponding tiles in $S^-$.} \label{fig:supertile_breaker} \end{figure} Given that we are assuming that this point (say, $x$) is a fracture point for $X_{\rm Rob}$, there must be some other point $y$ which matches $x$ in $S^+$ and $\sigma_{k_1\vec{q}}(x)$ in $S^-$, which breaks the rigidity of the structure of supertiles imposed by the rules of the Robinson shift. Thus, fractures along non-principal directions cannot exist. Finally, we need to construct a copy of $D_4$ contained in $\Sym(X_{\rm Rob},\mathds{Z}^2)$. For this, since $D_4$ is a $2$-generated group, we only need to show the existence of two extended symmetries $\rho,\mu:X_{\rm Rob}\to X_{\rm Rob}$, mapped respectively by $\psi$ to the matrices: \[\psi(\rho)=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},\quad \psi(\mu)=\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \] since these two matrices generate an isomorphic copy of $D_4$ contained in $\mathrm{GL}_2(\mathds{Z})$. These symmetries $\rho$ and $\mu$ are essentially rigid symmetries of the coordinate axes; however, a composition with a relabeling map is also needed, to replace every tile with the corresponding reflection or rotation. For instance, if we define $\mathfrak{R}:\mathcal{A}\to\mathcal{A}$ as the mapping which assigns to each of the $28$ symbols its corresponding rotation by $\frac{1}{2}\pi$, as seen in Figure \ref{fig:rotation_tiles}, then $\rho(x)_{(i,j)}=\mathfrak{R}(x_{(j,-i)})$ is the desired symmetry. In the same way, defining $\mathfrak{M}:\mathcal{A}\to\mathcal{A}$ as the mapping that sends each tile to its reflection through the vertical axis, we define $\mu$ by the relation $\mu(x)_{(i,j)}=\mathfrak{M}(x_{(-i,j)})$. \begin{figure} \caption{The relabeling map $\mathfrak{R}$ which replaces each tile with its corresponding rotation by $\frac{1}{2}\pi$.} \label{fig:rotation_tiles} \end{figure} It is easy to verify that $\rho$ and $\mu$ are valid extended symmetries, as they respect the conditions on the arrowheads and tails, and the sublattice comprised of only crosses. Also, we see that $\psi$ sends both $\rho$ and $\mu$ to the desired matrices, and that the mappings $\mathfrak{R}_\infty,\mathfrak{M}_\infty:\mathcal{A}^{\mathds{Z}^2}\to\mathcal{A}^{\mathds{Z}^2}$ commute with the corresponding rigid symmetries of the coordinate axes.
Thus, $\langle\rho,\mu\rangle$ is a copy of $D_4$ contained in $\Sym(X_{\rm Rob},\mathds{Z}^2)$, as desired.\qed \end{dem} We remark that the proof above used the structure of the Robinson shift $X_{\rm Rob}$ exclusively to compute the set of directions of fracture associated to this shift, and that extended symmetries preserve this set in other contexts as well. This suggests that this technique is open to generalization to other subshifts, even in higher dimensions, although possibly replacing the concept of ``direction of fracture'' with ``hyperplane of fracture'', as we need to separate half-spaces of $\mathds{Z}^d$, whose boundaries are akin to $(d-1)$-dimensional affine spaces, albeit discrete. Thus, to propose a generalization of this concept we need a few definitions: \begin{defi} A \textbf{hyperplane} $H\subseteq\mathds{Z}^d$ is a coset of a direct summand of $\mathds{Z}^d$ of rank $d-1$; that is, $H$ is a nonempty subset of $\mathds{Z}^d$ such that: \begin{enumerate}[label=(\arabic*)] \item $H=H_0+\vec{v}$ for some subgroup $H_0$ of $\mathds{Z}^d$ with rank $d-1$ and some vector $\vec{v}\in\mathds{Z}^d$, and \item there exists some other vector $\vec{w}\in\mathds{Z}^d$ such that $\mathds{Z}^d=H_0\oplus\mathds{Z}\vec{w}$. \end{enumerate} \end{defi} Thus, we suggest the following tentative definition for a fracture in a $d$-di\-men\-sio\-nal subshift: \begin{defi} Let $X$ be a (strongly aperiodic) $\mathds{Z}^d$-subshift. We say that $X$ has a \textbf{fracture} in the direction of the hyperplane $H=H_0+\vec{v}$ if for some $x\in X$ there are two half-spaces $S^+,S^-$ separated by $H$ (i.e. $S^+\cap S^-=S^+\cap H=S^-\cap H=\varnothing$) such that for some ``sufficiently large'' subset $B\subseteq H_0$ there is a family $\{x^{(\vec{b})}\}_{\vec{b}\in B}$ of points of $X$ such that: \[x^{(\vec{b})}|_{S^+}=x|_{S^+},\qquad x^{(\vec{b})}|_{S^-}=\sigma_{\vec{b}}(x)|_{S^-}. \] \end{defi} Here, an appropriate definition of ``sufficiently large'' will depend on the subshift that is being studied. For instance, in the case of the Robinson shift we only needed $B$ to contain two points ($\{\vec{0},k_1\vec{q}\}$) for our argument due to the hierarchical structure of $X_{\rm Rob}$, although $B$ in this shift is actually an infinite set, $2\mathds{Z}\vec{q}$. In all cases, as long as we apply a consistent restriction to the possible instances of $B$, we see that an extended symmetry $f$ must send a point of fracture to another point of fracture due to the generalized Curtis-Hedlund-Lyndon theorem, and thus $\psi(f)$ is a matrix that acts by permutation on the set of all hyperplanes of fracture of $X$. For sufficiently rigid shifts, this should result in a strong restriction on the matrix group $\psi[\Sym(X,\mathds{Z}^d)]$. \section{The minimal case} The discussion above shows the key idea behind the showcased method: the hierarchical structure of the aforementioned subshifts forces the appearance of ``special directions'', which result in a geometrical invariant that needs to be preserved by extended symmetries. By identifying these special directions via combinatorial or dynamical properties, we can restrict $\psi[\Sym(X,\mathds{Z}^d)]$ enough to compute it explicitly in terms of $\Aut(X,\mathds{Z}^d)$. However, in the above discussion we focused specifically on the Robinson tiling $X_{\rm Rob}$ and the substitutive subshift $\shift{\theta}$, which (usually) are not minimal.
To exhibit the above-mentioned special directions, a key point was using certain points that exhibit ``fracture-like'' behavior, which are not present in the minimal subset of each of these subshifts. However, since the ``special directions'' come from the hierarchical structure of the subshift, they ought to be present in its minimal subset as well, and thus should impose the same restrictions on the set of extended symmetries. We proceed to show that this is actually the case, starting with the Robinson tiling as follows: \begin{cor} Let $M_{\rm Rob}\subset X_{\rm Rob}$ be the unique minimal subshift contained in $X_{\rm Rob}$. Then, $\Sym(M_{\rm Rob},\mathds{Z}^2)\cong\mathds{Z}^2\rtimes D_4$. \end{cor} \begin{dem} Using the substitution rules devised by Gähler in \cite{Ga2013}, we can show that $M_{\rm Rob}$ contains a point, say $x$, that has only copies of the first tile from Figure \ref{fig:robinson_tiles}, pointing to the right, on the horizontal strip $\mathds{Z}\times\{0\}$, and corresponding tiles of the same kind pointing downwards in $\{0\}\times\mathds{Z}^+$ and upwards in $\{0\}\times\mathds{Z}^-$. Mirrored and rotated versions of this configuration exist as points of $M_{\rm Rob}$ as well (of which one specific rotation may be observed in Figure \ref{fig:robinson_supertile_separation}); a similar argument holds for the fifth tile. Any point from $\mathcal{H}=\overline{\{\sigma_{(n,0)}(x):n\in\mathds{Z} \}}$ has the same horizontal strip of copies of the same tile on $\mathds{Z}\times\{0\}$, and, due to the local rules of the Robinson tiling, any configuration with support $\mathds{Z}\times[-n,n]$ from some point $y\in\mathcal{H}$ must be $(m,0)$-periodic for some sufficiently large $m$. Note that this $m$ must diverge to $\infty$ as $n\to\infty$, because no point from $M_{\rm Rob}$ has nontrivial periods. Let $f\in\Sym(M_{\rm Rob},\mathds{Z}^2)$ be an extended symmetry. For any sufficiently large value of $k\in\mathds{N}$, the window of this $f$ is contained in $\mathds{Z}\times[-k,k]$. Thus, due to Theorem \ref{teo:curtishedlundlyndon}, we may choose a sufficiently large $k$ such that the image of $\mathds{Z}\times[-k,k]$ under the matrix $\psi(f)$ contains the set $L_{a,b}(\tilde{k})\dfn\{(u,v)\in\mathds{Z}^2:-\tilde{k}\le au+bv\le\tilde{k} \}$ for any desired $\tilde{k}>0$ and some $a,b\in\mathds{Z}$, and thus $y|_{\mathds{Z}\times[-k,k]}$ determines $f(y)|_{L_{a,b}(\tilde{k})}$ entirely. Since the strip $y|_{\mathds{Z}\times[-k,k]}$ is periodic, the restriction $f(y)|_{L_{a,b}(\tilde{k})}$ must have a period as well, which we can choose as a multiple of $(-b,a)$. We may now either proceed with a combinatorial or dynamical argument; we show both as they are closely related, starting with the combinatorial method. For this, suppose that $ba\ne 0$, which implies that $\psi(f)$ maps $\vec{e}_1$ to a direction that is not parallel to the coordinate axes. Since the $n$-th order supertiles increase in size exponentially with $n$, and so do the associated ``square drawings'' determined by the crosses, the strip $L_{a,b}(\tilde{k})$ must pass through the vertical lines (comprised of copies of rotations of the second, third, fourth or fifth tiles from Figure \ref{fig:robinson_tiles}) associated with the corresponding square of an $n$-th order supertile for all sufficiently large $n$ (as it is not parallel to any of the sides of such squares).
Thus, this configuration cannot have a nontrivial period, since due to the positions of the $n$-th order supertiles such a period cannot have a horizontal or vertical component smaller than $2^n$, which applies for any sufficiently large $n$. We conclude, by this contradiction, that $\psi(f)$ maps $\vec{e}_1$ to a vector parallel to the coordinate axes; a similar argument holds for $\vec{e}_2$. From the dynamical perspective, we may proceed as in \cite{BRY2018} by employing the maximal equicontinuous factor (MEF) of the Robinson tiling. As shown in the aforementioned work by Gähler \cite{Ga2013}, the MEF of the Robinson tiling is a two-dimensional solenoid\footnote{The solenoid $\mathds{S}_p$ is the compact abelian group obtained as an inverse limit of the system $\mathds{R}/\mathds{Z}\gets\mathds{R}/\mathds{Z}\gets\mathds{R}/\mathds{Z}\gets\dotsc$, where each morphism is the mapping $x\mapsto px\pmod{1}$. A $d$-dimensional solenoid is defined analogously.} $\mathds{S}_2^2$, and the fiber of the corresponding mapping $\rho:M_{\rm Rob}\mathrel{{\twoheadrightarrow}}\mathds{S}_2^2$ is $28$-to-$1$ in the set of all points from $M_{\rm Rob}$ that are comprised of four infinite-order supertiles \cite{Ga2013}, including the point $x$ constructed above. By similar arguments to the ones from Corollary 1 of \cite{BRY2018}, an extended symmetry must map $x$, whose corresponding fiber $\rho^{-1}[\{\rho(x)\}]$ has $28$ different points, to another point with a corresponding fiber of cardinality $28$, comprised of four infinite-order supertiles. By employing the periodicity of the strip $x|_{\mathds{Z}\times[-k,k]}$ and the local behavior of an extended symmetry, we can conclude that the matrix $\psi(f)$ maps $\mathds{Z}\times[-k,k]$ to the support of the corresponding periodic strip in the image point, which (up to translation) must be either of the form $\mathds{Z}\times[-\tilde{k},\tilde{k}]$ or $[-\tilde{k},\tilde{k}]\times\mathds{Z}$; in both cases $\vec{e}_1$ must be mapped to a cardinal direction, as above. As the same holds for $\vec{e}_2$, we see that the matrix must be one of the eight matrices corresponding to the standard embedding of $D_4$ into $\mathrm{GL}_2(\mathds{Z})$, leading to the same conclusion as in the non-minimal case. The restrictions to $M_{\rm Rob}$ of the explicit mappings shown in the previous section constitute by themselves a copy of $D_4$ in $\Sym(M_{\rm Rob},\mathds{Z}^2)$, as desired.\qed \end{dem} The analysis above on the minimal subset of the Robinson tiling suggests methods to study the minimal substitutive subshift obtained from a bijective substitution as well. We will prove the following result: \begin{teo} \label{teo:sym_grp_for_minimal_bij_subst} For a $d$-dimensional, nontrivial, primitive, bijective substitution $\theta$ over an alphabet $\mathcal{A}$ with faithful shift action, the following holds for the associated minimal substitutive subshift $\shift{\theta}^\circ$: \[\psi[\Sym(\shift{\theta}^\circ,\mathds{Z}^d)]\le Q_d<\mathrm{GL}_d(\mathds{Z}). \] Hence, the extended symmetry group is virtually $\mathds{Z}^d$. \end{teo} The previous result allows us to decompose $\Sym(\shift{\theta}^\circ,\mathds{Z}^d)$ into a (semidirect) product of $\mathds{Z}^d$ (the subgroup generated by the shifts), a subgroup of $S_{\abs{\mathcal{A}}}$ representing the relabeling maps, and a subgroup of $Q_d<\mathrm{GL}_d(\mathds{Z})$ corresponding to lattice transformations, in the same way as we did with $\Sym(\shift{\theta},\mathds{Z}^d)$.
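To quantify this: since $|Q_d|=2^d\,d!$, the theorem bounds the index of $\Aut(\shift{\theta}^\circ,\mathds{Z}^d)$ in $\Sym(\shift{\theta}^\circ,\mathds{Z}^d)$ by $2^d\,d!$, which equals $8$ for $d=2$ and $48$ for $d=3$.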
We may proceed either combinatorially or dynamically, as above. For the combinatorial case, we need the following simple lemma: \begin{lem} \label{lem:substitutive_fracture} There exist two points $x^{(1)},x^{(2)}\in\shift{\theta}^\circ$ such that $x^{(1)}|_{\mathds{Z}^+_0\times\mathds{Z}^{d-1}}=x^{(2)}|_{\mathds{Z}^+_0\times\mathds{Z}^{d-1}}$, but $(x^{(1)})_{\vec{k}}\ne(x^{(2)})_{\vec{k}}$ for any $\vec{k}\in\mathds{Z}^-\times\mathds{Z}^{d-1}$. \end{lem} \begin{dem} Since the action $\mathds{Z}^d\actson[\sigma]\shift{\theta}^\circ$ is faithful and minimal, there must be symbols $a,b,c\in\mathcal{A}$, with $b\ne c$, such that, for some points $x,y\in\shift{\theta}^\circ$, $x_{\vec{0}}=y_{\vec{0}}=a$ but $x_{-\vec{e}_1}=b,y_{-\vec{e}_1}=c$. If this were not the case, for any point $x\in\shift{\theta}^\circ$ the symbol $x_{\vec{k}}$ would determine $x_{\vec{k}-\vec{e}_1}$ uniquely; since $\abs{\mathcal{A}}<\infty$ this would result in a direction of periodicity shared by all points in $\shift{\theta}^\circ$, a contradiction. As usual, we may replace $\theta$ by $\theta^m$ for a sufficiently large $m$ such that every periodic point of $\theta$ is a fixed point of $\theta^m$. By the previous observation, there exist two fixed points $x',y'\in\shift{\theta}^\circ$ such that $x'_{\vec{0}}=y'_{\vec{0}}=a$ and $x'_{-\vec{e}_1}=b,y'_{-\vec{e}_1}=c$. Since $x'$ and $y'$ are fixed points of the substitution, these symbols determine the corresponding quadrants entirely, and thus $x'$ and $y'$ match on the subset $Q_{\vec{\mathds{1}}}=(\mathds{Z}^+_0)^d$ but (due to bijectivity) differ in every symbol from $\mathds{Z}^-\times(\mathds{Z}^+_0)^{d-1}$. Taking $\vec{k}=(0,1,1,\dotsc,1)$, any ordered pair $(x^{(1)},x^{(2)})$ that is a limit point of the sequence $(\sigma_{\vec{k}}^m(x'),\sigma_{\vec{k}}^m(y'))_{m\ge 0}$ (note that such a pair exists by compactness) satisfies the desired condition. \qed \end{dem} Lemma \ref{lem:substitutive_fracture} provides us with an analogue of the fracture points from the Robinson tiling; namely, it provides a separating hyperplane $\{0\}\times\mathds{Z}^{d-1}$ that ``splits'' $\mathds{Z}^d$ into two half-spaces $S^-$ and $S^+$, and two points $x,y$ which match on one half-space, say $S^+$, but not the other. An analogue of this result applies to any other hyperplane of the form $\mathds{Z}^{k-1}\times\{0\}\times\mathds{Z}^{d-k},1\le k\le d$ or any of their translates; thus, the same argument used in the case of the Robinson tiling $X_{\rm Rob}$ applies here to show that the set of all ``fracture hyperplanes'' has to be preserved. Hence, Theorem \ref{teo:sym_grp_for_minimal_bij_subst} follows immediately from the following result: \begin{lem} \label{lem:fracture_norm_dir} For the minimal subshift $\shift{\theta}^\circ$ given by a bijective substitution $\theta$ under the above hypotheses, call $\vec{v}\in\mathds{Z}^d\setminus\{\vec{0}\}$ a \textbf{fracture normal direction} if there is some $N>0$ and two disjoint subsets $S^+,S^-\subset\mathds{Z}^d$ of the form: \[S^\pm\dfn\{\vec{k}\in\mathds{Z}^d: {\pm\langle\vec{k},\vec{v}\rangle}\ge N \}, \] such that, for some $x,y\in\shift{\theta}^\circ$, $x|_{S^+}=y|_{S^+}$ but $x_{\vec{k}}\ne y_{\vec{k}}$ for any $\vec{k}\in S^-$. Then, the set of fracture normal directions of $\shift{\theta}^\circ$ is $\{h\vec{e}_j: h\in\mathds{Z}\setminus\{0\},1\le j\le d \}$.
\end{lem} In other words, for $\shift{\theta}^\circ$ as in Lemma \ref{lem:fracture_norm_dir} the set of all possible fracture hyperplanes is exactly the set of translates of the coordinate hyperplanes of the form $\mathds{Z}^{k-1}\times\{0\}\times\mathds{Z}^{d-k}$. The proof is similar to the argument above for the non-minimal case: \begin{dem} As stated above, Lemma \ref{lem:substitutive_fracture} shows that the set of fracture normal directions of $\shift{\theta}^\circ$ contains $\{h\vec{e}_j: h\in\mathds{Z}\setminus\{0\},1\le j\le d \}$. Suppose $\vec{v}$ is an additional fracture normal direction not contained in this set; as it is not parallel to the coordinate axes, an argument similar to the one from Lemma \ref{lem:finite_lines} shows that the set: \[L_N \dfn \{\vec{k}\in\mathds{Z}^d: \abs{\langle\vec{k},\vec{v}\rangle}<N \}=\mathds{Z}^d\setminus (S^+\mathbin{\ensuremath{\mathaccent"7201\cup}} S^-), \] has finite intersection with some $\mathds{Z}\vec{e}_j,1\le j\le d$, and the size of this intersection is bounded by a value depending linearly on $N$ and the entries of $\vec{v}$. By Lemma \ref{lem:codified_system_subst}, any pair of fracture points $x,y$ associated with the direction $\vec{v}$ can each be written as a concatenation of patterns of the form $\theta^m(a),a\in\mathcal{A}$, whose supports are rectangles with side length depending exponentially on $m$. Thus, by choosing a sufficiently large $m$, the support $R$ of one of these patterns $\theta^m(a)$ has nonempty intersection with both $S^+$ and $S^-$. Since $x|_{S^+}=y|_{S^+}$ and the substitution is bijective, we must have $x|_R=y|_R$. However, this implies $x|_{R\cap S^{-}}=y|_{R\cap S^{-}}$, where $R\cap S^{-}$ is a nonempty set, contradicting our hypothesis (as $x_{\vec{k}}\ne y_{\vec{k}}$ for all $\vec{k}\in S^{-}$). Thus, this $\vec{v}$ cannot be a fracture normal direction.\qed \end{dem} The same arguments as above allow us to conclude that $\psi(f)$ necessarily is a matrix from the standard copy of $Q_d$ in $\mathrm{GL}_d(\mathds{Z})$, for any $f\in\Sym(\shift{\theta}^\circ,\mathds{Z}^d)$. Alternatively, we could argue dynamically in the same vein as \cite{BRY2018}, as follows. Let $\varphi:\shift{\theta}^\circ\mathrel{{\twoheadrightarrow}}\mathds{Z}_{\vec{n}}$ be the mapping from the minimal substitutive subshift $\shift{\theta}^\circ$ to its maximal equicontinuous factor. As the number of $\theta$-periodic points is exactly the same as the number of admissible patterns from $\shift{\theta}$ with support $\{-1,0\}^d$, say $\ell=\abs{\mathscr{L}_{\{-1,0\}^d}(\shift{\theta}^\circ)}$, the mapping $\varphi$ is $\ell$-to-$1$ on the set of all $\theta$-periodic points $\Per_\theta(\shift{\theta}^\circ)$. An argument similar to the one employed in the proof of Lemma \ref{lem:integer_images_in_MEF} can be used to show the following result: \begin{lem} Under the above hypotheses, given $x\in\shift{\theta}$, define $J=\{j\in\{1,\dotsc,d\}:\varphi(x)_j\in\mathds{Z} \}$. Then, $\mathds{Z}^d$ can be partitioned into sets of the form: \[S_H\dfn\{\vec{n}\in\mathds{Z}^d: n_j\ge\varphi(x)_j \text{ if }j\in H \land n_j<\varphi(x)_j\text{ if }j\in J\setminus H \}, \] for all $H\subseteq J$, such that any point $y\in\shift{\theta}$ with $\varphi(x)=\varphi(y)$ is entirely determined by the configuration $y|_U$, where $U\subset\mathds{Z}^d$ is any subset with non-trivial intersection with all of the $S_H$.
\end{lem} In particular, we may choose $U$ as a translation of the rectangle $R_J$ with support $\{-1,0\}^J\times\{0\}^{\{1,\dots,d\}\setminus J}$. This imposes a strong restriction on the points $y$ with the same image as $x$ (namely, that there can be at most $\abs{\mathscr{L}_{R_J}(\shift{\theta})}$ such points) and, by faithfulness, this shows that $\varphi$ cannot be $\ell$-to-$1$ on $\shift{\theta}\setminus\Per_\theta(\shift{\theta})$. By the argument exhibited in \cite{BRY2018}, extended symmetries preserve the cardinality of the fibers in the MEF and thus must map $\Per_\theta(\shift{\theta})$ to itself; the desired result then follows from the local behavior of an extended symmetry, via an analysis similar to the one for the Robinson tiling above. \end{document}
\begin{document} \begin{abstract} We study the independence complex of the lexicographic product $\lex{G}{H}$ of a forest $G$ and a graph $H$. We prove that for a forest $G$ which is not dominated by a single vertex, if the independence complex of $H$ is homotopy equivalent to a wedge sum of spheres, then so is the independence complex of $\lex{G}{H}$. We offer two examples of explicit calculations. As the first example, we determine the homotopy type of the independence complex of $\lex{L_m}{H}$, where $L_m$ is the tree on $m$ vertices with no branches, for any positive integer $m$ when the independence complex of $H$ is homotopy equivalent to a wedge sum of $n$ copies of the $d$-dimensional sphere. As the second one, for a forest $G$ and a complete graph $K$, we describe the homological connectivity of the independence complex of $\lex{G}{K}$ by the independent domination number of $G$. \end{abstract} \maketitle \section{Introduction} \label{introduction} In this paper, a {\it graph} $G$ always means a finite undirected graph with no multiple edges or loops. Its vertex set and edge set are denoted by $V(G)$ and $E(G)$, respectively. A subset $\sigma$ of $V(G)$ is an {\it independent set} if no two vertices of $\sigma$ are adjacent. The independent sets of $G$ are closed under taking subsets, so they form an abstract simplicial complex. We call this abstract simplicial complex the {\it independence complex} of $G$ and denote it by $I(G)$. In the rest of this paper, $I(G)$ denotes a geometric realization of $I(G)$ unless otherwise noted. Independence complexes of graphs are no less important than other simplicial complexes constructed from graphs and have been studied in many contexts. In particular, the independence complexes of square grid graphs have been studied by Thapper \cite{Thapper08}, Iriye \cite{Iriye12} and many other researchers. It is conjectured by Iriye \cite[Conjecture 1.8]{Iriye12} that the independence complex of a cylindrical square grid graph is always homotopy equivalent to a wedge sum of spheres. {\it Discrete Morse theory}, introduced by Forman \cite{Forman98} and reformulated by Chari \cite{Chari00}, is one of the effective methods for determining the homotopy type of an independence complex. Bousquet-M{\'{e}}lou, Linusson and Nevo \cite{BousquetmelouLinussonNevo08} and Thapper \cite{Thapper08} studied the independence complexes of grid graphs by performing discrete Morse theory as a combinatorial algorithm called the {\it matching tree}. However, it is hard to distinguish two complexes which have the same number of cells in each dimension by discrete Morse theory alone. This is precisely the situation which we have to deal with in this paper. We need topological approaches in cases where discrete Morse theory is not available. For example, it is effective to represent an independence complex of a graph as a union of independence complexes of subgraphs, as in Engstr{\"{o}}m \cite{Engstrom09}, Adamaszek \cite{Adamaszek12} and Barmak \cite{Barmak13}. Let $L_m$ be a tree on $m$ vertices with no branches, and $C_n$ be a cycle on $n$ vertices ($n \geq 3$). Namely \begin{align*} &V(L_m)=\{1,2,\ldots, m\}, & &E(L_m) = \{ij \ |\ |i-j|=1 \} , \\ &V(C_n) = \{1,2, \ldots, n \}, & &E(C_n) = E(L_n) \cup \{n1 \}. \end{align*} In relation to these previous works, we focus on the fact that the cylindrical square grid graphs are obtained from $L_m$ and $C_n$ by a certain ``product'' construction.
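To illustrate these notions in the smallest cases (a quick side computation, not used later): every two vertices of $C_3$ are adjacent, so the independent sets of $C_3$ are the empty set and the three singletons; hence $I(C_3)$ consists of three isolated points, and it is homotopy equivalent to a wedge sum of two copies of the $0$-dimensional sphere. Similarly, the maximal independent sets of $L_3$ are $\{1,3\}$ and $\{2\}$, so $I(L_3)$ is the disjoint union of a $1$-simplex and a point, which is homotopy equivalent to the $0$-dimensional sphere.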
As Harary \cite{Harary69} mentioned, there are various ways to construct a graph structure on $V(G_1) \times V(G_2)$ for two given graphs $G_1$ and $G_2$. A cylindrical square grid graph is the {\it Cartesian product} of $L_m$ and $C_n$ for some $m, n$. In this paper, we are interested in the {\it lexicographic product} of two graphs, which is defined as follows. \begin{definition} Let $G, H$ be graphs. The {\it lexicographic product} $\lex{G}{H}$ is a graph defined by \begin{align*} &V(\lex{G}{H}) = V(G) \times V(H) ,\\ &E(\lex{G}{H}) = \left\{ (u_1, v_1)(u_2, v_2) \ \middle| \ \begin{aligned} &u_1 u_2 \in E(G) \\ &\text{ or} \\ &u_1=u_2, v_1 v_2 \in E(H) \end{aligned} \right\}. \end{align*} \end{definition} \begin{figure} \caption{Lexicographic products $\lex{L_4}{L_3}$ and $\lex{L_3}{L_4}$.} \end{figure} \noindent Harary \cite{Harary69} called this construction the {\it composition}. A lexicographic product $\lex{G}{H}$ can be regarded as having $|V(G)|$ pseudo-vertices, each of them isomorphic to $H$, where two pseudo-vertices are ``adjacent'' if the corresponding vertices of $G$ are adjacent. Graph invariants of lexicographic products have been investigated by, for example, Geller and Stahl \cite{GellerStahl75}. Independence complexes of lexicographic products are studied by Vander Meulen and Van Tuyl \cite{VandermeulenVantuyl17} from a combinatorial point of view. We aim to reveal under what conditions the independence complex of a lexicographic product is homotopy equivalent to a wedge sum of spheres. The main result of this paper is the following theorem. \begin{theorem} \label{forest} Let $G$ be a forest and $H$ be a graph. We call $G$ a {\it star} if there exists $v \in V(G)$ such that $uv \in E(G)$ for any $u \in V(G) \setminus \{v\}$. Suppose that $I(H)$ is homotopy equivalent to a wedge sum of spheres. Then, the following hold. \begin{enumerate} \item If $G$ is a star on at least $2$ vertices, then $I(\lex{G}{H})$ is homotopy equivalent to a disjoint union of two wedge sums of spheres. \item If $G$ is not a star, then $I(\lex{G}{H})$ is homotopy equivalent to a wedge sum of spheres. \end{enumerate} \end{theorem} \noindent For example, Kozlov \cite[Proposition 5.2]{Kozlov99} proved that $I(C_n)$ is homotopy equivalent to a wedge sum of spheres. So, it follows from Theorem \ref{forest} that $I(\lex{L_m}{C_n})$ with $m \geq 4$ is homotopy equivalent to a wedge sum of spheres. Note that $\lex{L_m}{C_n}$ contains a cylindrical square grid graph as a subgraph, obtained from $\lex{L_m}{C_n}$ by removing edges. Furthermore, we determine the homotopy type of $I(\lex{L_m}{H})$ for any $m \geq 1$ and any graph $H$ such that $I(H)$ is homotopy equivalent to a wedge sum of $n$ copies of the $k$-dimensional sphere. We denote the $d$-dimensional sphere by $\sphere{d}$ and a wedge sum of $n$ copies of a CW complex $X$ by $\bigvee_{n} X$. \begin{theorem} \label{line theorem} Let $H$ be a graph such that $I(H) \simeq {\bigvee}_n \sphere{k}$ with $n \geq 1$, $k \geq 0$. Then we have \begin{align*} &I(\lex{L_m}{H}) \\ \simeq &\left\{ \begin{aligned} &{\bigvee}_n \sphere{k} & &(m=1), \\ &\left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_n \sphere{k} \right) & &(m=2), \\ &\left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_{n^2} \sphere{2k+1} \right)& &(m=3), \\ &\bigvee_{0 \leq p \leq \frac{m+1}{2}} \left( \bigvee_{pk -1 +\max \left\{p, \frac{m}{3} \right\} \leq d \leq pk+\frac{m+p-2}{3}} \left( {\bigvee}_{N_{m,n,k}(p,d)} \sphere{d} \right) \right) & &(m \geq 4), \end{aligned} \right.
\end{align*} where \begin{align*} N_{m,n,k}(p,d) &= n^p \binom{d-pk+1}{p} \binom{p+1}{3(d-pk+1)-m} . \end{align*} \noindent Here, $\binom{l}{r}$ denotes the binomial coefficient. We define $\binom{l}{r}=0$ if $r<0$ or $l <r$. The rest of this paper is organized as follows. In Section \ref{preliminaries}, we fix notation on graphs and state some basic properties of independence complexes of graphs. Section \ref{proof of main theorem} is the main part of this paper. It first provides a condition for the independence complex of a graph to be the union of the independence complexes of two given full subgraphs (Lemma \ref{ind pushout}). Note that the cofiber sequence studied by Adamaszek \cite[Proposition 3.1]{Adamaszek12} is a special case of this decomposition. Using this result, we obtain a decomposition of an independence complex of a lexicographic product, which is essential for achieving our purpose (Theorem \ref{splitting}). Then, we prove Theorem \ref{forest}. Here we need an observation on the unreduced suspension of a disjoint union of two spaces (Lemma \ref{disjoint suspension}). Section \ref{explicit calculations} contains two examples of explicit calculations. The first one is the proof of Theorem \ref{line theorem}. The second one is on the relationship between the homological connectivity of $I(\lex{G}{H})$ and the independent domination number of a forest $G$ (Theorem \ref{connectivity and domination}). \section{Preliminaries} \label{preliminaries} In this paper, a {\it graph} always means a {\it finite undirected simple graph} $G$. It is a pair $(V(G), E(G))$, where $V(G)$ is a finite set and $E(G)$ is a subset of $2^{V(G)}$ such that $|e|=2$ for any $e \in E(G)$. An element of $V(G)$ is called a {\it vertex} of $G$, and an element of $E(G)$ is called an {\it edge} of $G$. In order to indicate that $e=\{u, v\}$ ($u,v \in V(G)$), we write $e = uv$. For a vertex $v \in V(G)$, an {\it open neighborhood} $N_G (v)$ of $v$ in $G$ is defined by \begin{align*} N_G (v) = \{ u \in V(G) \ |\ uv \in E(G) \}. \end{align*} A {\it closed neighborhood} $\neib{G}{v}$ of $v$ in $G$ is defined by $\neib{G}{v} = N_G (v) \sqcup \{ v\}$. A {\it full subgraph} $H$ of a graph $G$ is a graph such that \begin{align*} V(H) &\subset V(G), \\ E(H) &=\{ uv \in E(G) \ |\ u, v \in V(H) \}. \end{align*} For two full subgraphs $H, K$ of $G$, a full subgraph whose vertex set is $V(H) \cap V(K)$ is denoted by $H \cap K$, and a full subgraph whose vertex set is $V(H) \setminus V(K)$ is denoted by $H \setminus K$. For a subset $U \subset V(G)$, $G \setminus U$ is the full subgraph of $G$ such that $V(G \setminus U) = V(G) \setminus U$. An {\it abstract simplicial complex} $K$ is a collection of finite subsets of a given set $V(K)$ such that if $\sigma \in K$ and $\tau \subset \sigma$, then $\tau \in K$. An element of $K$ is called a {\it simplex} of $K$. For a simplex $\sigma$ of $K$, we set $\dim \sigma = |\sigma| -1 $, where $|\sigma|$ is the cardinality of $\sigma$. As noted in Section \ref{introduction}, we do not distinguish an abstract simplicial complex $K$ from its geometric realization $|K|$. The {\it independence complex} $I(G)$ of a graph $G$ is an abstract simplicial complex defined by \begin{align*} I(G) = \{ \sigma \subset V(G) \ |\ uv \notin E(G) \text{ for any $u, v \in \sigma$ } \}. \end{align*} For a full subgraph $H$ of $G$, $I(H)$ is a subcomplex of $I(G)$. Furthermore, if $H, K$ are full subgraphs of $G$, then $I(H \cap K) = I(H) \cap I(K)$.
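As a quick illustration of the graph notation above (a toy example, not needed later): in the path $L_4$ we have $N_{L_4}(2)=\{1,3\}$ and $\neib{L_4}{2}=\{1,2,3\}$, so $L_4 \setminus \neib{L_4}{2}$ is the one-vertex full subgraph on $\{4\}$; since no closed neighborhood of a vertex covers $V(L_4)$, the path $L_4$ is not a star in the sense of Theorem \ref{forest}.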
The following proposition is the fundamental property of independence complexes. \begin{proposition} \label{disjoint union and join} Let $G$ be a graph and $G_1$ and $G_2$ be full subgraphs of $G$ such that $V(G)=V(G_1) \sqcup V(G_2)$. \begin{enumerate} \item If $uv \notin E(G)$ for any $u \in V(G_1)$ and $v \in V(G_2)$, then we have \begin{align*} I(G) = I(G_1) * I(G_2). \end{align*} \item If $uv \in E(G)$ for any $u \in V(G_1)$ and $v \in V(G_2)$, then we have \begin{align*} I(G) = I(G_1) \sqcup I(G_2). \end{align*} \end{enumerate} \end{proposition} \begin{proof} In the proof, we consider $I(G)$ as an abstract simplicial complex. Suppose that $uv \notin E(G)$ for any $u \in V(G_1)$ and $v \in V(G_2)$. Then, we have \begin{align*} I(G) &= \left\{\sigma \subset V(G_1) \sqcup V(G_2) \ \middle|\ \left. \begin{aligned} &\sigma \cap V(G_1) \in I(G_1) \\ &\text{ and }\\ &\sigma \cap V(G_2) \in I(G_2) \end{aligned} \right. \right\}\\ &= I(G_1) * I(G_2) . \end{align*} Suppose that $uv \in E(G)$ for any $u \in V(G_1)$ and $v \in V(G_2)$. Then, we have \begin{align*} I(G) &= \left\{\sigma \subset V(G_1) \sqcup V(G_2) \ \middle|\ \left. \begin{aligned} &\sigma \subset V(G_1) \text{ and } \sigma \in I(G_1) \\ &\text{ or } \\ &\sigma \subset V(G_2) \text{ and } \sigma \in I(G_2) \end{aligned} \right. \right\} \\ &= I(G_1) \sqcup I(G_2) . \end{align*} \end{proof} Let $X$ be a CW complex. We denote the {\it unreduced} suspension of $X$ by $\Sigma X$. For subcomplexes $X_1, X_2$ of $X$ such that $X_1 \cap X_2 =A$, we denote the union of $X_1$ and $X_2$ by $X_1 \cup_A X_2$ in order to indicate that the intersection of $X_1$ and $X_2$ is $A$. \section{Proof of Theorem \ref{forest}} \label{proof of main theorem} We first prove the following theorem, which we need to prove Theorem \ref{forest}. \begin{theorem} \label{splitting} Let $G$ be a graph and $v$ be a vertex of $G$. Suppose that there exists a vertex $w$ of $G$ such that $N_G (w) = \{v\}$. Let $H$ be a non-empty graph. \begin{itemize} \item If $G \setminus \neib{G}{v} = \emptyset$, then we have \begin{align*} I(\lex{G}{H}) = I(H) \sqcup I(\lex{(G \setminus \{v\})}{H}) . \end{align*} \item If $G \setminus \neib{G}{v} \neq \emptyset$, then we have \begin{align*} I(\lex{G}{H}) \simeq &\Sigma I(\lex{(G \setminus \neib{G}{v} )}{H}) \vee \left(I(\lex{(G \setminus \neib{G}{v} )}{H}) * I(H) \right) \\ &\ \vee \left(I(\lex{(G \setminus\{v, w\})}{H}) * I(H) \right) . \end{align*} \end{itemize} \end{theorem} The proof of Theorem \ref{splitting} has two steps. The first step is to decompose $I(\lex{G}{H})$ as a union of $I(\lex{(G \setminus N_G (v))}{H})$ and $I(\lex{(G \setminus \{v\})}{H})$. The second step is to transform this union into a wedge sum. We need two lemmas corresponding to these two steps. \begin{lemma} \label{ind pushout} Let $G$ be a graph and $H, K \subset G$ be full subgraphs of $G$ such that $V(H) \cup V(K) =V(G)$. Suppose that $v_1 v_2 \in E(G)$ for any vertices $v_1 \in V(H) \setminus V(K)$ and $v_2 \in V(K) \setminus V(H)$. Then, \begin{align*} I(G) = I(H) \cup_{I(H \cap K)} I(K). \end{align*} \end{lemma} \begin{proof} For a simplex $\sigma$ of $I(G)$, suppose that there exists a vertex $u_0 \in \sigma \cap (V(H) \setminus V(K))$. Then, by the assumption of the lemma, any vertex $v \in V(K) \setminus V(H)$ is adjacent to $u_0$. So, $\sigma \cap (V(K) \setminus V(H))$ must be empty, which means that $\sigma$ is a simplex of $H$.
On the other hand, if $\sigma \cap (V(H) \setminus V(K)) = \emptyset$, then $\sigma$ is a simplex of $K$ since $V(H) \cup V(K) = V(G)$. \end{proof} \begin{figure} \caption{A graph $G$ and its subgraphs $H, K$ such that $I(G)= I(H) \cup I(K)$.} \end{figure} \begin{example} For a graph $G$ and a vertex $v \in V(G)$, consider two subgraphs $G \setminus \{v\}$ and $G \setminus N_G (v)$ of $G$. We have \begin{align*} &(V(G) \setminus \{v\}) \setminus (V(G) \setminus N_G (v)) = N_G (v) ,\\ &(V(G) \setminus N_G (v)) \setminus (V(G) \setminus \{v\}) = \{v \}, \\ &(G \setminus \{v\}) \cap (G \setminus N_G (v)) = G \setminus \neib{G}{v} . \end{align*} Then, by Lemma \ref{ind pushout}, we have \begin{align*} I(G) = I(G \setminus \{v\}) \cup_{I(G \setminus \neib{G}{v})} I(G \setminus N_G (v)). \end{align*} Since $I(G \setminus N_G (v)) = I(G \setminus \neib{G}{v}) * \{v\} $, we obtain a cofiber sequence \begin{align*} \xymatrix{ I(G \setminus \neib{G}{v}) \ar@{^{(}->}[r] & I(G \setminus \{v\}) \ar[r] & I(G), } \end{align*} which was studied by Adamaszek \cite[Proposition 3.1]{Adamaszek12}. \end{example} \begin{lemma} \label{mapping cylinder} Let $X$ be a CW complex and $X_1, X_2$ be subcomplexes of $X$ such that $X=X_1 \cup X_2$. If the inclusion maps $i_1: X_1\cap X_2 \to X_1$ and $i_2 : X_1 \cap X_2 \to X_2$ are null-homotopic, then we have \begin{align*} X \simeq X_1 \vee X_2 \vee \Sigma (X_1 \cap X_2) . \end{align*} \end{lemma} \begin{proof} Consider the mapping cylinder $M(i_1, i_2)$ of $i_1, i_2$. Let $u \in X_1$ and $v \in X_2$ be points such that $i_1 \simeq c_u$ and $i_2 \simeq c_v$, where $c_u : X_1 \cap X_2 \to X_1$ and $c_v :X_1 \cap X_2 \to X_2$ are the constant maps to $u$ and $v$, respectively. Then, we have \begin{align*} X = X_1 \cup X_2 \simeq M(i_1, i_2) \simeq M(c_u, c_v) = X_1 \vee_u \Sigma(X_1 \cap X_2) \vee_v X_2. \end{align*} This is the desired conclusion. \end{proof} \begin{proof}[Proof of Theorem \ref{splitting}] Consider two full subgraphs $K_1, K_2$ of $\lex{G}{H}$ defined by \begin{align*} &K_1=\lex{(G \setminus N_G (v))}{H} ,\\ &K_2=\lex{(G \setminus \{v\})}{H} . \end{align*} Then we have \begin{align*} &V(K_1) \setminus V(K_2) = \{v\} \times V(H) ,\\ &V(K_2) \setminus V(K_1) = N_G (v) \times V(H) , \\ &K_1 \cap K_2 =\lex{(G \setminus \neib{G}{v})}{H}. \end{align*} It follows that $v_1 v_2 \in E(\lex{G}{H})$ for any vertices $v_1 \in V(K_1) \setminus V(K_2)$ and $v_2 \in V(K_2) \setminus V(K_1)$ since $u v \in E(G)$ for any $u \in N_G (v)$. So, by Lemma \ref{ind pushout}, we obtain \begin{align*} I(\lex{G}{H}) = I(\lex{(G \setminus N_G (v))}{H}) \cup_{I(\lex{(G \setminus \neib{G}{v})}{H})} I(\lex{(G \setminus \{v\})}{H}) . \end{align*} If $G \setminus \neib{G}{v} = \emptyset$, then \begin{align*} I(\lex{(G \setminus \neib{G}{v})}{H}) &= I(\lex{\emptyset}{H}) = I(\emptyset) = \emptyset, \\ I(\lex{(G \setminus N_G (v))}{H}) &= I(\lex{\{v\}}{H}) = I(H). \end{align*} So, the desired formula is obtained directly. Suppose that $G \setminus \neib{G}{v} \neq \emptyset$. Let $i : I(\lex{(G \setminus \neib{G}{v})}{H}) \to I(\lex{(G \setminus N_G (v))}{H})$ and $j: I(\lex{(G \setminus \neib{G}{v})}{H}) \to I(\lex{(G \setminus \{v\})}{H})$ be the inclusion maps.
By Proposition \ref{disjoint union and join}, we have \begin{align*} I(\lex{(G \setminus N_G (v))}{H}) &= I(\lex{((G \setminus \neib{G}{v}) \sqcup \{v\})}{H}) \\ &= I(\lex{(G \setminus \neib{G}{v})}{H}) * I(H), \\ I(\lex{(G \setminus \{v\})}{H}) &= I(\lex{((G \setminus \{v, w\}) \sqcup \{w\})}{H}) \\ &= I(\lex{(G \setminus \{v, w\})}{H}) * I(H). \end{align*} The third equality follows from $N_G (w) = \{v\}$. Here, $I(H)$ is non-empty since $H$ is non-empty. Let $x \in I(H)$ be a point. Then, we have \begin{align*} I(\lex{(G \setminus \neib{G}{v})}{H}) * \{x\} &\subset I(\lex{(G \setminus \neib{G}{v})}{H}) * I(H), \\ I(\lex{(G \setminus \neib{G}{v})}{H}) * \{x\} &\subset I(\lex{(G \setminus \{v, w\})}{H}) * I(H) . \end{align*} The second inclusion follows from $\{v , w\} \subset \neib{G}{v}$. These inclusions indicate that $i, j$ are null-homotopic. Therefore, by Lemma \ref{mapping cylinder}, we obtain \begin{align*} I(\lex{G}{H}) = & I(\lex{(G \setminus N_G (v))}{H}) \cup_{I(\lex{(G \setminus \neib{G}{v})}{H})} I(\lex{(G \setminus \{v\})}{H}) \\ \simeq &\Sigma I(\lex{(G \setminus \neib{G}{v} )}{H}) \vee \left(I(\lex{(G \setminus \neib{G}{v} )}{H}) * I(H) \right) \\ &\ \vee \left(I(\lex{(G \setminus\{v, w\})}{H}) * I(H) \right) . \end{align*} So, the proof is completed. \end{proof} In order to derive Theorem \ref{forest} from Theorem \ref{splitting}, we need some topological observations, which we state in the following two lemmas. \begin{lemma} \label{disjoint suspension} Let $X, Y$ be CW complexes. Then we have \begin{align*} \Sigma(X \sqcup Y) \simeq \Sigma X \vee \Sigma Y \vee \sphere{1}. \end{align*} \end{lemma} \begin{proof} Let $u, v$ be cone points of $\Sigma ( X \sqcup Y)$. Then we have \begin{align*} \Sigma(X \sqcup Y) = \Sigma X \cup_{\{u,v\}} \Sigma Y . \end{align*} For $x \in X$ and $y \in Y$, there are line segments $xu, xv \subset \Sigma X$ and $yu, yv \subset \Sigma Y$. So, the inclusion maps $\{u, v \} \to \Sigma X$, $\{u, v\} \to \Sigma Y$ are null-homotopic. Therefore, it follows from Lemma \ref{mapping cylinder} that \begin{align*} \Sigma (X \sqcup Y) &\simeq \Sigma X \vee \Sigma Y \vee \Sigma\{u, v\} \\ &\simeq \Sigma X \vee \Sigma Y \vee \sphere{1}. \end{align*} \end{proof} \begin{lemma} \label{sphere join} Let $A$, $B$, $C$ be CW complexes such that each of them is homotopy equivalent to a wedge sum of spheres. Then, both $A*B$ and $(A \sqcup B) *C$ are again homotopy equivalent to a wedge sum of spheres. \end{lemma} \begin{proof} We first claim that for any CW complex $X, Y, Z$, we have \begin{align*} (X \vee Y) * Z \simeq (X * Z) \vee (Y * Z). \end{align*} This is because $X * Y$ is homotopy equivalent to $\Sigma( X \land Y)$ for any pointed CW complexes $(X, x_0)$ and $(Y,y_0)$. This homotopy equivalence yields \begin{align*} (X \vee Y) * Z &\simeq \Sigma((X \vee Y) \land Z) \simeq \Sigma((X \land Z) \vee (Y \land Z)) \\ &\simeq \Sigma(X \land Z) \vee \Sigma(Y \land Z) \simeq (X * Z) \vee (Y * Z) \end{align*} as desired. Let $A= \bigvee_i \sphere{a_i}$, $B= \bigvee_j \sphere{b_j}$, $C= \bigvee_k \sphere{c_k}$ be arbitrary wedge sums of spheres. 
It follows from Lemma \ref{disjoint suspension} and the above claim that \begin{align*} A * B &\simeq \left(\bigvee_i \sphere{a_i} \right) * \left(\bigvee_j \sphere{b_j} \right) \simeq \bigvee_i \left(\sphere{a_i} * \left( \bigvee_j \sphere{b_j} \right) \right) \\ &\simeq \bigvee_{i,j} \left( \sphere{a_i} * \sphere{b_j} \right) \simeq \bigvee_{i,j}\sphere{a_i + b_j +1}, \end{align*} \begin{align*} (A \sqcup B ) *C &\simeq \left( \left(\bigvee_i \sphere{a_i} \right) \sqcup \left(\bigvee_j \sphere{b_j} \right) \right) * \left(\bigvee_k \sphere{c_k} \right) \\ &\simeq \bigvee_k \left( \left( \left(\bigvee_i \sphere{a_i} \right) \sqcup \left(\bigvee_j \sphere{b_j} \right) \right)* \sphere{c_k} \right) \\ &\simeq \bigvee_k \left( \left(\left(\bigvee_i \sphere{a_i} \right) * \sphere{c_k} \right) \vee \left( \left(\bigvee_j \sphere{b_j} \right) * \sphere{c_k} \right) \vee \sphere{c_k +1} \right) \\ &\simeq \bigvee_k \left( \left(\bigvee_i \sphere{a_i + c_k +1} \right) \vee \left(\bigvee_j \sphere{b_j +c_k +1} \right) \vee \sphere{c_k +1} \right) \\ &\simeq \left( \bigvee_{i,k} \sphere{a_i + c_k +1} \right) \vee \left( \bigvee_{j,k} \sphere{b_j + c_k +1} \right) \vee \left( \bigvee_k \sphere{c_k +1} \right). \end{align*} Therefore, we obtain the desired conclusion. \end{proof} We are now ready to prove Theorem \ref{forest}. \begin{proof}[Proof of Theorem \ref{forest}] We prove the theorem by induction on $|V(G)|$. Before starting the induction, we treat two special cases. First, suppose that $G$ is a star on at least $2$ vertices, namely $|V(G)| \geq 2$ and there exists $v \in V(G)$ such that $G \setminus \neib{G}{v} = \emptyset$. We have $u_1 u_2 \notin E(G)$ for any $u_1, u_2 \in N_G (v) = G \setminus \{v\}$ since $G$ is a forest. So, by Theorem \ref{splitting}, we get \begin{align*} I(\lex{G}{H}) & = I(H) \sqcup I(\lex{(G \setminus \{v\})}{H}) \\ &=I(H) \sqcup \left(\mathop{*}_{|V(G)| - 1} I(H) \right) . \end{align*} Since $|V(G)|-1 \geq 1$, the join of copies of $I(H)$ is homotopy equivalent to a wedge sum of spheres by Lemma \ref{sphere join}. Therefore, $I(\lex{G}{H})$ is homotopy equivalent to a disjoint union of two wedge sums of spheres. Next, suppose that $G$ has no edges. Then $I(\lex{G}{H})$ is the join of $|V(G)|$ copies of $I(H)$, which is a wedge sum of spheres by Lemma \ref{sphere join}. Now we start the induction. The forest $G$ with $|V(G)| \leq 2$ is isomorphic to one of $L_1$, $L_2$ and $L_1 \sqcup L_1$. They are included in the above cases. Hence, for a forest $G$ with $|V(G)| \leq 2$, $I(\lex{G}{H})$ is homotopy equivalent to a wedge sum of spheres or a disjoint union of two wedge sums of spheres. Assume that for any forest $G'$ such that $|V(G')| \leq n$, $I(\lex{G'}{H})$ is homotopy equivalent to a wedge sum of spheres or a disjoint union of two wedge sums of spheres. Let $G$ be a forest with at least one edge such that $|V(G)|=n+1$ and $G \setminus \neib{G}{v} \neq \emptyset$ for any $v \in V(G)$. Then, since $G$ is a forest, there exists $w \in V(G)$ such that $N_G (w) = \{v\}$ for some $v \in V(G)$ (namely a leaf $w$ of $G$). We write $G_1 = G \setminus \neib{G}{v}$ and $G_2 =G \setminus\{v, w\}$. Then, $G_1, G_2$ are forests such that $|V(G_1)| \leq n-1$, $|V(G_2)| \leq n-1$. Since $G_1=G \setminus \neib{G}{v}$ is not empty, it follows from Theorem \ref{splitting} that \begin{align*} I(\lex{G}{H}) \simeq &\Sigma I(\lex{G_1}{H}) \vee \left(I(\lex{G_1}{H}) * I(H) \right) \vee \left(I(\lex{G_2}{H}) * I(H) \right) .
\end{align*} By the induction hypothesis, $I(\lex{G_1}{H})$ and $I(\lex{G_2}{H})$ are homotopy equivalent to a wedge sum of spheres or a disjoint union of two wedge sums of spheres. Therefore, by Lemma \ref{sphere join}, $I(\lex{G}{H})$ is homotopy equivalent to a wedge sum of spheres. So, the proof is completed. \end{proof} \begin{remark} \label{contractible} For a graph $H$, suppose that $I(H)$ is contractible. Then, for a forest $G$, we have $I(\lex{G}{H}) \simeq I(G)$. We can prove this fact in the same way as in the proof of Theorem \ref{forest}. \end{remark} \begin{example} Recall that a graph $G$ is {\it chordal} if it contains no induced cycle of length at least $4$. Kawamura \cite[Theorem 1.1]{Kawamura10} proved that the independence complex of a chordal graph is either contractible or homotopy equivalent to a wedge sum of spheres. In particular, Ehrenborg and Hetyei \cite[Corollary 6.1]{EhrenborgHetyei06} proved that the independence complex of a forest is either contractible or homotopy equivalent to a single sphere. So, it follows from Theorem \ref{forest} and Remark \ref{contractible} that $I(\lex{G}{H})$ is either contractible or homotopy equivalent to a wedge sum of spheres if $G$ is a forest and $H$ is a chordal graph. \end{example} \section{Explicit Calculations} \label{explicit calculations} In this section, we offer two examples of explicit calculations on $I(\lex{G}{H})$. First, we prove Theorem \ref{line theorem}. \begin{proof}[Proof of Theorem \ref{line theorem}] For $m=1,2,3$, it follows from Proposition \ref{disjoint union and join} that \begin{align*} I(\lex{L_1}{H}) &= I(H) \simeq {\bigvee}_n \sphere{k} , \\ I(\lex{L_2}{H}) &= I(H) \sqcup I(H) \simeq \left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_n \sphere{k} \right), \\ I(\lex{L_3}{H}) &= I(H) \sqcup (I(H) * I(H)) \\ &\simeq \left( {\bigvee}_n \sphere{k} \right) \sqcup \left( \left( {\bigvee}_n \sphere{k} \right) * \left( {\bigvee}_n \sphere{k} \right) \right) \\ &\simeq \left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_n \left( \sphere{k} * \left( {\bigvee}_n \sphere{k} \right) \right) \right) \\ &\simeq \left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_n \left( {\bigvee}_n \sphere{k} * \sphere{k} \right) \right) \\ &\simeq \left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_{n^2} \sphere{2k+1} \right). \end{align*} For $r \geq 1$, let $G=L_{r+3}$ and $v=r+2, w=r+3 \in V(L_{r+3})$. Then we have $N_G (w)=\{v\}$ and $G \setminus \neib{G}{v} = L_r \neq \emptyset$. So, by Theorem \ref{splitting}, we obtain \begin{align} &I(\lex{L_{r+3}}{H}) \nonumber \\ \simeq &\Sigma I(\lex{L_r}{H}) \vee \left(I(\lex{L_r}{H}) * I(H) \right) \ \vee \left(I(\lex{L_{r+1}}{H}) * I(H) \right) \nonumber \\ \simeq &\Sigma I(\lex{L_r}{H}) \vee \left(I(\lex{L_r}{H}) * \left( {\bigvee}_n \sphere{k} \right) \right) \ \vee \left(I(\lex{L_{r+1}}{H}) * \left( {\bigvee}_n \sphere{k} \right) \right) \nonumber \\ \simeq &\Sigma I(\lex{L_r}{H}) \vee \left( {\bigvee}_n I(\lex{L_r}{H}) * \sphere{k} \right) \vee \left( {\bigvee}_n I(\lex{L_{r+1}}{H}) * \sphere{k} \right) \nonumber \\ \simeq &\Sigma I(\lex{L_r}{H}) \vee \left( {\bigvee}_n \Sigma^{k+1} I(\lex{L_r}{H}) \right) \vee \left( {\bigvee}_n \Sigma^{k+1} I(\lex{L_{r+1}}{H}) \right) .
\label{Lm recursive} \end{align} Define a CW complex $X_{m,n,k}$ for $m\geq 1$, $n \geq 1$ and $k \geq 0$ by \begin{align*} X_{m,n,k}= \bigvee_{d \geq 0} \left( \bigvee_{p \geq 0} \left( {\bigvee}_{N_{m,n,k}(p,d)} \sphere{d} \right) \right) , \end{align*} where \begin{align*} N_{m,n,k}(p,d) &= n^p \binom{d-pk+1}{p} \binom{p+1}{3(d-pk+1)-m} . \end{align*} We note that $N_{m,n,k}(p,d) >0$ for non-negative integers $p, d$ if and only if $d-pk+1 \geq p$ and $p+1 \geq 3(d-pk+1)-m \geq 0 $, namely \begin{align*} pk-1 +\max \left\{p, \frac{m}{3} \right\} \leq d \leq pk+\frac{m+p-2}{3} . \end{align*} The above inequality implies that $p \leq \frac{m+1}{2}$. So, it follows that \begin{align*} X_{m,n,k}= \bigvee_{0 \leq p \leq \frac{m+1}{2}} \left( \bigvee_{pk -1 +\max \left\{p, \frac{m}{3} \right\} \leq d \leq pk+\frac{m+p-2}{3}} \left( {\bigvee}_{N_{m,n,k}(p,d)} \sphere{d} \right) \right) . \end{align*} In order to complete the proof, it is sufficient to show that $I(\lex{L_m}{H}) \simeq X_{m,n,k}$ for $m \geq 4$. First, the explicit descriptions of $X_{1,n,k}$, $X_{2,n,k}$ and $X_{3,n,k}$ are obtained as follows. \begin{align*} X_{1,n,k} = &\bigvee_{0 \leq p \leq 1} \left( \bigvee_{pk-1+ \max \left\{ p, \frac{1}{3} \right\} \leq d \leq pk+\frac{1+p-2}{3}} \left( {\bigvee}_{N_{1,n,k}(p,d)} \sphere{d} \right) \right) \\ = &\bigvee_{p=0,1} \left( \bigvee_{pk-1+ \max \left\{ p, \frac{1}{3} \right\} \leq d \leq pk+\frac{p-1}{3}} \left( {\bigvee}_{N_{1,n,k}(p,d)} \sphere{d} \right) \right) \\ = &\left( \bigvee_{-\frac{2}{3} \leq d \leq -\frac{1}{3}} \left( {\bigvee}_{N_{1,n,k}(0,d)} \sphere{d} \right) \right) \vee \left( \bigvee_{k \leq d \leq k} \left( {\bigvee}_{N_{1,n,k}(1,d)} \sphere{d} \right) \right) \\ = & {\bigvee}_{N_{1,n,k}(1,k)} \sphere{k} \\ = & {\bigvee}_{n^1 \binom{1}{1} \binom{2}{2}} \sphere{k} \\ = & {\bigvee}_n \sphere{k} . \end{align*} \begin{align*} X_{2,n,k} = &\bigvee_{0 \leq p \leq \frac{3}{2}} \left( \bigvee_{pk-1+ \max \left\{ p, \frac{2}{3} \right\} \leq d \leq pk+\frac{2+p-2}{3}} \left( {\bigvee}_{N_{2,n,k}(p,d)} \sphere{d} \right) \right) \\ = &\bigvee_{p=0,1} \left( \bigvee_{pk-1+ \max \left\{ p, \frac{2}{3} \right\} \leq d \leq pk+\frac{p}{3}} \left( {\bigvee}_{N_{2,n,k}(p,d)} \sphere{d} \right) \right) \\ =&\left( \bigvee_{-\frac{1}{3} \leq d \leq 0} \left( {\bigvee}_{N_{2,n,k}(0,d)} \sphere{d} \right) \right) \vee \left( \bigvee_{k \leq d \leq k+\frac{1}{3}} \left( {\bigvee}_{N_{2,n,k}(1,d)} \sphere{d} \right) \right) \\ = &\left( {\bigvee}_{N_{2,n,k}(0,0)} \sphere{0} \right) \vee \left( {\bigvee}_{N_{2,n,k}(1,k)} \sphere{k} \right) \\ =&\left( {\bigvee}_{n^0 \binom{1}{0} \binom{1}{1}} \sphere{0} \right) \vee \left( {\bigvee}_{n^1 \binom{1}{1} \binom{2}{1}} \sphere{k} \right) \\ =&\sphere{0} \vee \left( {\bigvee}_{2n} \sphere{k} \right). 
\end{align*} \begin{align*} X_{3,n,k} = &\bigvee_{0 \leq p \leq 2} \left( \bigvee_{pk-1+ \max \left\{ p, 1 \right\} \leq d \leq pk+\frac{3+p-2}{3}} \left( {\bigvee}_{N_{3,n,k}(p,d)} \sphere{d} \right) \right) \\ =&\bigvee_{p=0,1,2} \left( \bigvee_{pk-1+ \max \left\{ p, 1 \right\} \leq d \leq pk+\frac{p+1}{3}} \left( {\bigvee}_{N_{3,n,k}(p,d)} \sphere{d} \right) \right) \\ = &\left( \bigvee_{0 \leq d \leq \frac{1}{3}} \left( {\bigvee}_{N_{3,n,k}(0,d)} \sphere{d} \right) \right) \vee \left( \bigvee_{k\leq d \leq k+\frac{2}{3}} \left( {\bigvee}_{N_{3,n,k}(1,d)} \sphere{d} \right) \right) \\ &\ \vee \left( \bigvee_{2k+1 \leq d \leq 2k+1} \left( {\bigvee}_{N_{3,n,k}(2,d)} \sphere{d} \right) \right) \\ = &\left( {\bigvee}_{N_{3,n,k}(0,0)} \sphere{0} \right) \vee \left( {\bigvee}_{N_{3,n,k}(1,k)} \sphere{k} \right) \\ &\ \vee \left( {\bigvee}_{N_{3,n,k}(2,2k+1)} \sphere{2k+1} \right) \\ = &\left( {\bigvee}_{n^0 \binom{1}{0} \binom{1}{0}} \sphere{0} \right) \vee \left( {\bigvee}_{n^1 \binom{1}{1} \binom{2}{0}} \sphere{k} \right) \vee \left( {\bigvee}_{n^2 \binom{2}{2} \binom{3}{3}} \sphere{2k+1} \right) \\ = &\sphere{0} \vee \left( {\bigvee}_n \sphere{k} \right) \vee \left( {\bigvee}_{n^2} \sphere{2k+1} \right). \end{align*} We next show that \begin{align} \label{X recursive} X_{m+3,n,k} = \Sigma X_{m,n,k} \vee \left( {\bigvee}_n \Sigma^{k+1} X_{m,n,k} \right) \vee \left( {\bigvee}_n \Sigma^{k+1} X_{m+1,n,k} \right). \end{align} We have \begin{align*} &\sum_{p \geq 0} \left(N_{m,n,k}(p,d-1) + n \cdot N_{m,n,k}(p,d-k-1) +n \cdot N_{m+1,n,k}(p,d-k-1) \right) \\ =&\sum_{p \geq 0} \left( n^p \binom{(d-1)-pk+1}{p} \binom{p+1}{3((d-1)-pk+1)-m} \right. \\ &\ + n^{p+1} \binom{(d-k-1)-pk+1}{p} \binom{p+1}{3((d-k-1)-pk+1)-m} \\ &\ \left. + n^{p+1} \binom{(d-k-1)-pk+1}{p} \binom{p+1}{3((d-k-1)-pk+1)-(m+1)} \right) \\ =&\sum_{p \geq 0} \left( n^p \binom{d-pk}{p} \binom{p+1}{3(d-pk)-m} \right. \\ &\ +n^{p+1}\binom{d-(p+1)k}{p} \binom{p+1}{3(d-(p+1)k)-m} \\ &\ \left. +n^{p+1} \binom{d-(p+1)k}{p} \binom{p+1}{3(d-(p+1)k)-(m+1)} \right) \\ =&\sum_{p \geq 0} n^p \binom{d-pk}{p} \binom{p+1}{3(d-pk)-m} \\ &\ +\sum_{p \geq 0} n^{p+1} \binom{d-(p+1)k}{p} \binom{p+2}{3(d-(p+1)k)-m} \\ =&\sum_{p \geq 0 } n^p \binom{d-pk}{p} \binom{p+1}{3(d-pk)-m} \\ &\ +\sum_{q=p+1 \geq 1 } n^q \binom{d-qk}{q-1} \binom{q+1}{3(d-qk)-m} \\ =&\sum_{p \geq 0} n^p \binom{d-pk+1}{p} \binom{p+1}{3(d-pk)-m} \\ =&\sum_{p \geq 0} N_{m+3,n,k}(p,d) . \end{align*} So, we conclude that \begin{align*} &\Sigma X_{m,n,k} \vee \left( {\bigvee}_n \Sigma^{k+1} X_{m,n,k} \right) \vee \left( {\bigvee}_n \Sigma^{k+1} X_{m+1,n,k} \right)\\ = &\bigvee_{d \geq 0} \left( \bigvee_{p \geq 0} \left( {\bigvee}_{N_{m,n,k}(p,d-1) + n \cdot N_{m,n,k}(p,d-k-1) +n \cdot N_{m+1,n,k}(p,d-k-1)} \sphere{d} \right) \right) \\ = &\bigvee_{d \geq 0} \left( {\bigvee}_{\sum_{p \geq 0} \left(N_{m,n,k}(p,d-1) + n \cdot N_{m,n,k}(p,d-k-1) +n \cdot N_{m+1,n,k}(p,d-k-1) \right) } \sphere{d} \right) \\ = &\bigvee_{d \geq 0} \left( {\bigvee}_{\sum_{p \geq 0} N_{m+3,n,k}(p,d)} \sphere{d} \right) \\ = &\bigvee_{d \geq 0} \left( \bigvee_{p \geq 0} \left( {\bigvee}_{N_{m+3,n,k}(p,d)} \sphere{d} \right) \right) \\ =&X_{m+3,n,k} \end{align*} as desired. Now, we are ready to finish the proof by induction on $m$.
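(As a quick check of the definition, not needed for the induction below: for $m=4$, the only nonzero multiplicities are $N_{4,n,k}(1,k+1)=2n$ and $N_{4,n,k}(2,2k+1)=3n^2$, so that $X_{4,n,k}=\left( {\bigvee}_{2n} \sphere{k+1} \right) \vee \left( {\bigvee}_{3n^2} \sphere{2k+1} \right)$; this agrees with the space obtained by applying Theorem \ref{splitting} directly to $\lex{L_4}{H}$.)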
By Lemma \ref{disjoint suspension}, we obtain \begin{align*} \Sigma I(\lex{L_2}{H}) &\simeq \Sigma \left(\left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_n \sphere{k} \right) \right) \\ &\simeq \sphere{1} \vee \Sigma \left({\bigvee}_n \sphere{k} \right) \vee \Sigma \left( {\bigvee}_n \sphere{k} \right) \\ &\simeq \sphere{1} \vee \left({\bigvee}_n \sphere{k+1} \right) \vee \left( {\bigvee}_n \sphere{k+1} \right) \\ &=\sphere{1} \vee \left({\bigvee}_{2n} \sphere{k+1} \right), \\ \Sigma I(\lex{L_3}{H}) &\simeq \Sigma \left(\left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_{n^2} \sphere{2k+1} \right) \right) \\ &\simeq \sphere{1} \vee \Sigma \left({\bigvee}_n \sphere{k} \right) \vee \Sigma \left( {\bigvee}_{n^2} \sphere{2k+1} \right) \\ &\simeq \sphere{1} \vee \left({\bigvee}_n \sphere{k+1} \right) \vee \left( {\bigvee}_{n^2} \sphere{2k+2} \right) . \end{align*} So, it follows that \begin{align*} \Sigma I(\lex{L_m}{H}) \simeq \Sigma X_{m,n,k} \end{align*} for $m =1,2,3$. Assume that $\Sigma I(\lex{L_r}{H}) \simeq \Sigma X_{r,n,k}$ and $\Sigma I(\lex{L_{r+1}}{H}) \simeq \Sigma X_{r+1,n,k}$ for some $r \geq 1$. By recursive relations (\ref{Lm recursive}) and (\ref{X recursive}), we have \begin{align*} &I(\lex{L_{r+3}}{H}) \\ \simeq &\Sigma I(\lex{L_r}{H}) \vee \left( {\bigvee}_n \Sigma^{k+1} I(\lex{L_r}{H}) \right) \vee \left( {\bigvee}_n \Sigma^{k+1} I(\lex{L_{r+1}}{H}) \right) \\ \simeq &\Sigma X_{r,n,k} \vee \left( {\bigvee}_n \Sigma^{k+1} X_{r,n,k} \right) \vee \left( {\bigvee}_n \Sigma^{k+1} X_{r+1,n,k} \right) \\ =&X_{r+3,n,k}. \end{align*} Therefore, we obtain that $I(\lex{L_m}{H}) \simeq X_{m,n,k}$ for any $m \geq 4$ by induction. This is the desired conclusion. \end{proof} \begin{example} Kozlov \cite[Proposition 5.2]{Kozlov99} proved that \begin{align*} I(C_n) &\simeq \left\{ \begin{aligned} &\sphere{k - 1} \vee \sphere{k - 1} & &(n =3k), \\ &\sphere{k-1} & &(n =3k+1), \\ &\sphere{k} & &(n =3k+2) . \end{aligned} \right. \end{align*} Therefore, we can determine the homotopy types of $I(\lex{L_m}{C_n})$ for any $m \geq 1$ and $n \geq 3$ by Theorem \ref{line theorem}. \end{example} Recall that the homological connectivity of a space $X$, denoted by $\mathrm{conn}_H(X)$, is defined by \begin{align*} \mathrm{conn}_H(X)= \left\{ \begin{aligned} &-2 & &(X = \emptyset), \\ &k & &(\widetilde{H}_i (X)=0 \text{ for any $i \leq k$, } \widetilde{H}_{k+1} (X) \neq 0 ), \\ &\infty & &(\widetilde{H}_i (X) = 0 \text{ for any $i$ }), \end{aligned} \right. \end{align*} where $\widetilde{H}_i (X)$ is the reduced $i$th homology group of $X$. Though Theorem \ref{line theorem} completely determines the homotopy type of $I(\lex{L_m}{H})$ with $I(H) \simeq {\bigvee}_n \sphere{k}$, it is hard to obtain the homological connectivity of $I(\lex{L_m}{H})$ immediately from Theorem \ref{line theorem}. Here we compute the homological connectivity of $I(\lex{L_m}{H})$ as a corollary. \begin{corollary} \label{line corollary} Let $H$ be a graph such that $I(H) \simeq {\bigvee}_n \sphere{k}$ with $n \geq 1$, $k \geq 0$. Then we have \begin{align*} \mathrm{conn}_H(I(\lex{L_{3l+i}}{H})) = \left\{ \begin{aligned} &l-2 & &(i=0), \\ &k+l-1 & &(i=1), \\ &l -1& &(i=2). \end{aligned} \right. 
\end{align*} \end{corollary} \begin{proof} Recall from the proof of Theorem \ref{line theorem} that there is a recursive relation \begin{align*} &I(\lex{L_{m+3}}{H}) \\ \simeq &\Sigma I(\lex{L_m}{H}) \vee \left( {\bigvee}_n \Sigma^{k+1} I(\lex{L_m}{H}) \right) \vee \left( {\bigvee}_n \Sigma^{k+1} I(\lex{L_{m+1}}{H}) \right). \end{align*} So, we obtain \begin{align*} &\mathrm{conn}_H (I(\lex{L_{m+3}}{H})) \\ = &\min \left\{ \mathrm{conn}_H(\Sigma I(\lex{L_m}{H})), \mathrm{conn}_H(\Sigma^{k+1} I(\lex{L_{m+1}}{H})) \right\} . \end{align*} The base cases are \begin{align*} \mathrm{conn}_H (I(\lex{L_1}{H})) &= \mathrm{conn}_H \left({\bigvee}_n \sphere{k} \right) =k-1, \\ \mathrm{conn}_H (I(\lex{L_2}{H})) &= \mathrm{conn}_H \left(\left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_n \sphere{k} \right) \right) = -1, \\ \mathrm{conn}_H (I(\lex{L_3}{H})) &= \mathrm{conn}_H \left(\left( {\bigvee}_n \sphere{k} \right) \sqcup \left( {\bigvee}_{n^2} \sphere{2k+1} \right) \right) = -1. \end{align*} Therefore, we can prove the corollary by induction. \end{proof} We move on to the second example. We denote the complete graph on $n$ vertices by $K_n$. For $n \geq 2$, it is obvious that \begin{align*} I(K_n) = {\bigvee}_{n-1} \sphere{0}. \end{align*} As the second example in this section, we show that the homological connectivity of $I(\lex{G}{K_n})$ for any forest $G$ is determined by the {\it independent domination number} of $G$ when $n \geq 2$. For a graph $G$ and a subset $S \subset V(G)$, $S$ is a {\it dominating set} of $G$ if $V(G) = \bigcup_{u \in S} \neib{G}{u}$. The domination number $\gamma (G)$ of $G$ is the minimum cardinality of a dominating set of $G$. The relationship between the domination number of $G$ and the homological connectivity of $I(G)$ was studied by Meshulam \cite{Meshulam03}, who proved that for a chordal graph $G$, $i < \gamma(G)$ implies $\widetilde{H}_{i-1} (I(G)) =0$ (\cite[Theorem 1.2 (iii)]{Meshulam03}). This is equivalent to stating that $\mathrm{conn}_H (I(G)) \geq \gamma(G) -2$. This theorem can be used to deduce a result of Aharoni, Berger and Ziv \cite{AharoniBergerZiv02}. A dominating set $S$ of $G$ is called {\it an independent dominating set} if $S$ is an independent set. The independent domination number $i (G)$ is the minimum cardinality of an independent dominating set of $G$. It is obvious that $i(G) \geq \gamma(G)$ since an independent dominating set is a dominating set. \begin{theorem} \label{connectivity and domination} Let $G$ be a forest. Then, for any $n \geq 2$, we have \begin{align} \label{domination} \mathrm{conn}_H (I(\lex{G}{K_n})) = i (G) -2. \end{align} \end{theorem} \begin{proof} We first consider two cases. \begin{itemize} \item If $G \setminus \neib{G}{v} = \emptyset$ for some $v \in V(G)$, then we have $i(G) = 1$ and \begin{align*} \mathrm{conn}_H (I(\lex{G}{K_n})) &= \mathrm{conn}_H \left( \left({\bigvee}_{n-1} \sphere{0} \right) \sqcup \left( {\bigvee}_{(n-1)^{|V(G)| -1} } \sphere{|V(G)|-2} \right) \right) \\ &=-1 \end{align*} by Theorem \ref{splitting}. \item If $G$ has no edges, then we have $i (G) = |V(G)|$ and \begin{align*} \mathrm{conn}_H (I(\lex{G}{K_n})) &=\mathrm{conn}_H \left( {\bigvee}_{(n-1)^{|V(G)| } } \sphere{|V(G)|-1} \right) \\ &=|V(G)|-2. \end{align*} \end{itemize} Therefore, equation (\ref{domination}) holds in these two cases. We prove the theorem by induction on $|V(G)|$. Since $L_1$, $L_2$ and $L_1 \sqcup L_1$ are included in the above two cases, equation (\ref{domination}) holds for $G$ such that $|V(G)| \leq 2$.
Assume that (\ref{domination}) holds for any forest $G'$ such that $|V(G')| \leq r$ with $r \geq 2$. Let $G$ be a forest such that $|V(G)|=r+1$ and there exist $v, w \in V(G)$ such that $N_G (w) = \{v\}$ and $G \setminus \neib{G}{v} \neq \emptyset$. By Theorem \ref{splitting}, we obtain \begin{align*} I(\lex{G}{K_n}) \simeq &\Sigma I(\lex{(G \setminus \neib{G}{v} )}{K_n}) \vee \left(I(\lex{(G \setminus \neib{G}{v} )}{K_n}) * \left( {\bigvee}_{n-1} \sphere{0} \right) \right) \\ &\ \vee \left(I(\lex{(G \setminus\{v, w\})}{K_n}) * \left( {\bigvee}_{n-1} \sphere{0} \right) \right)\\ \simeq &\Sigma I(\lex{(G \setminus \neib{G}{v} )}{K_n}) \vee \left({\bigvee}_{n-1} \Sigma I(\lex{(G \setminus \neib{G}{v} )}{K_n}) \right) \\ &\ \vee \left({\bigvee}_{n-1} \Sigma I(\lex{(G \setminus\{v, w\})}{K_n}) \right) \\ = &\left({\bigvee}_{n} \Sigma I(\lex{(G \setminus \neib{G}{v} )}{K_n}) \right) \vee \left({\bigvee}_{n-1} \Sigma I(\lex{(G \setminus\{v, w\})}{K_n}) \right). \end{align*} Hence, we get \begin{align*} &\mathrm{conn}_H (I(\lex{G}{K_n})) \\ = &\min \left\{ \mathrm{conn}_H (I(\lex{(G \setminus \neib{G}{v} )}{K_n})) +1, \mathrm{conn}_H (I(\lex{(G \setminus\{v, w\})}{K_n})) +1 \right\} . \end{align*} $G \setminus \neib{G}{v}$ and $G \setminus \{v, w\}$ are forests satisfying $|V(G \setminus \neib{G}{v})| \leq r-1$, $|V(G \setminus \{v, w\})| \leq r-1$. So, by the induction hypothesis, we get \begin{align*} \mathrm{conn}_H (I(\lex{G}{K_n})) = &\min \left\{ i(G \setminus \neib{G}{v}) -1 , i(G \setminus \{v,w\}) -1 \right\}. \end{align*} Here, we have $i(G \setminus \neib{G}{v}) \geq i(G) -1$. This is because, if there exists an independent dominating set $S$ of $G \setminus \neib{G}{v}$ with $|S| < i(G) - 1$, then $S \cup \{v\}$ is an independent dominating set of $G$ such that $|S \cup \{v\}| < i(G)$, a contradiction. For the same reason, we also have $i(G \setminus \{v, w \}) \geq i(G) -1$. An independent dominating set of $G$ must contain either $v$ or $w$ since $N_G (w) =\{v\}$. If there exists an independent dominating set $S$ of $G$ such that $|S| = i(G)$ and $v \in S$, then $S'=S \setminus \{v\}$ is an independent dominating set of $G \setminus \neib{G}{v}$ with $|S'|=i(G) -1$ since $S \cap \neib{G}{v} = \{v\}$. Thus, in this case, we obtain $i(G \setminus \neib{G}{v}) = i(G) -1$. If there exists an independent dominating set $S$ of $G$ such that $|S| = i(G)$ and $w \in S$, then $S'' = S \setminus \{w\}$ is an independent dominating set of $G \setminus \{v, w\}$ with $|S''|=i(G) -1$ since $v \notin S$. So, in this case, we get $i(G \setminus \{v, w \}) = i(G) -1$. The above argument shows that \begin{align*} \min \left\{ i(G \setminus \neib{G}{v}) -1 , i(G \setminus \{v,w\}) -1 \right\} = i(G) -2. \end{align*} Therefore, equation (\ref{domination}) holds for $G$. By induction, we get the desired conclusion. \end{proof} \end{document}
\begin{document} \newcommand{\innerprod}[1]{\left\langle#1\right\rangle} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\abs}[1]{\left|#1\right|} \author[P. G\'erard]{Patrick G\'erard} \address{Universit\'e Paris-Sud XI, Laboratoire de Math\'ematiques d'Orsay, CNRS, UMR 8628, et Institut Universitaire de France} \email{[email protected]} \author[Y. Guo]{Yanqiu Guo} \address{Department of Computer Science and Applied Mathematics \\ Weizmann Institute of Science\\ Rehovot 76100, Israel} \email{[email protected]} \author[E. S. Titi]{Edriss S. Titi} \address{Department of Mathematics and Department of Mechanical and Aerospace Engineering\\ University of California, Irvine, California 92697-3875, USA and Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel} \email{[email protected] and [email protected]} \title[Analyticity of solutions to the Cubic Szeg\H{o} Equation] {On the radius of analyticity of solutions\\ to the cubic Szeg\H{o} equation} \date{Revised: August 6, 2013} \keywords{Cubic Szeg\H{o} equation, Gevrey class regularity, analytic solutions, Hankel operators} \subjclass[2010]{35B10, 35B65, 47B35} \maketitle \begin{abstract} This paper is concerned with the cubic Szeg\H{o} equation $$i\partial_t u=\Pi(|u|^2 u),$$ defined on the $L^2$ Hardy space on the one-dimensional torus $\mathbb T$, where $\Pi: L^2(\mathbb T)\rightarrow L^2_+(\mathbb T)$ is the Szeg\H{o} projector onto the non-negative frequencies. For analytic initial data, it is shown that the solution remains spatially analytic for all time $t\in (-\infty,\infty)$. In addition, we find a lower bound for the radius of analyticity of the solution. Our method involves energy-like estimates in a special Gevrey class of analytic functions based on the $\ell^1$ norm of Fourier transforms (the Wiener algebra). \end{abstract} \section {Introduction }\label{S1} In studying the nonlinear Schr\"odinger equation \begin{align*} i\partial_t u+\Delta u=\pm|u|^2 u, \;\;\; (t,x)\in \mathbb{R} \times M, \end{align*} Burq, G\'erard and Tzvetkov \cite{Burq-Gerard-Tzvetkov-05} observed that dispersion properties are strongly influenced by the geometry of the underlying manifold $M$. In \cite{Gerard-10}, G\'erard and Grellier mentioned that, if there exists a smooth local-in-time flow map on the Sobolev space $H^s(M)$, then the following Strichartz-type estimate must hold: \begin{align} \label{Strichartz} \norm{e^{it\Delta}f}_{L^4([0,1]\times M)}\lessapprox \norm{f}_{H^{s/2}(M)}. \end{align} It is shown in \cite{Burq-Gerard-Tzvetkov-02, Burq-Gerard-Tzvetkov-05} that, on the two-dimensional sphere, the infimum of the number $s$ such that (\ref{Strichartz}) holds is $\frac{1}{4}$; however, if $M=\mathbb{R}^2$, the inequality (\ref{Strichartz}) is valid for $s=0$. As pointed out in \cite{Gerard-10}, this can be interpreted as a lack of dispersion properties for the spherical geometry.
Taking this idea further, it is remarked in \cite{Gerard-10} that dispersion disappears completely when $M$ is a sub-Riemannian manifold (for instance, the Heisenberg group). As a toy model for studying non-dispersive Hamiltonian equations, G\'erard and Grellier \cite{Gerard-10} introduced the \emph{cubic Szeg\H{o} equation}: \begin{align} \label{Szego} i\partial_t u=\Pi(|u|^2 u), \;\;\;(t,\theta)\in \mathbb{R} \times \mathbb T, \end{align} on $L^2_+(\mathbb T)$, where $\mathbb T=\mathbb{R} /2\pi \mathbb{Z}$ is the one-dimensional torus, which we identify with the unit circle in the complex plane. Notice that $L^2_+(\mathbb T)$ is the $L^2$ Hardy space which is defined by \begin{align*} L^2_+(\mathbb T)=\Big\{u=\sum_{k\in \mathbb{Z}}\hat u(k) e^{ik\theta}\in L^2(\mathbb T): \hat u(k)=0 \text{\;\;for all\;\;} k<0\Big\}. \end{align*} Furthermore, in (\ref{Szego}), the operator $\Pi: L^2(\mathbb T)\rightarrow L^2_+(\mathbb T)$ is the Szeg\H{o} projector onto the non-negative frequencies, i.e., \begin{align*} \Pi\left(\sum_{k\in \mathbb{Z}}v_k e^{ik\theta}\right)=\sum_{k\geq 0}v_k e^{ik\theta}. \end{align*} We mention the following existence result, proved in \cite{Gerard-10}. \begin{theorem} \cite{Gerard-10} \label{thm-Gerard} Given $u_0\in H^{s}_+(\mathbb T)$, for some $s\geq \frac{1}{2}$, the cubic Szeg\H{o} equation (\ref{Szego}) has a unique solution $u\in C(\mathbb{R},H^s_+(\mathbb T))$. \end{theorem} Moreover, it has been shown in \cite{Gerard-10} that the Szeg\H{o} equation (\ref{Szego}) is completely integrable in the sense of admitting a Lax pair structure, and as a consequence, it possesses an infinite number of conservation laws. Replacing the Fourier series by the Fourier transform, one can analogously define the Szeg\H{o} equation on $$L^2_+(\mathbb{R})=\{\phi\in L^2(\mathbb{R}): \text{supp\;} \hat{\phi} \subset [0,\infty) \}.$$ In \cite{Pocovnicu-11}, Pocovnicu constructed explicit spatially real analytic solutions for the cubic Szeg\H{o} equation defined on $L^2_+(\mathbb{R})$. For the initial datum $u_0=\frac{2}{x+i}-\frac{4}{x+2i}$, it was discovered that one of the poles of the explicit real analytic solution $u(t,x)$ approaches the real line, as $|t|\rightarrow \infty$; more precisely, the imaginary part of a pole decreases at the rate $O(\frac{1}{t^2})$. Thus, the radius of analyticity of $u(t,x)$ shrinks algebraically to zero, as $|t|\rightarrow \infty$. This phenomenon gives rise to the following questions: for analytic initial data, does the solution remain spatially analytic for all time? If so, can one estimate, from below, the radius of analyticity? In this manuscript, we attempt to answer these questions by employing the technique of the so--called Gevrey class of analytic functions. The Gevrey classes of real analytic functions are characterized by an exponential decay of their Fourier coefficients. If we set $A:=\sqrt{I-\Delta}$, they are defined by $\mathcal D(A^s e^{\sigma A})$, which consist of all $L^2$ functions $u$ such that $\norm{A^s e^{\sigma A}u}_{L^2(\mathbb T)}$ is finite, where $s\geq 0$, $\sigma>0$ (see e.g. \cite{Ferrari-Titi-98, Foias-Temam-89, Levermore-Oliver-97}). Note that if $\sigma=0$, then $\mathcal D(A^s e^{\sigma A})=\mathcal D(A^s)\cong H^s(\mathbb T)$. However, if $\sigma>0$, then $\mathcal D(A^s e^{\sigma A})$ is the set of real analytic functions with the radius of analyticity bounded below by $\sigma$. Notice also that, in one space dimension, $\mathcal D(A^s e^{\sigma A})$ is a Banach algebra provided $s>\frac{1}{2}$ (see \cite{Ferrari-Titi-98}).
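For later comparison with the norm (\ref{G-norm}) below, we record the following routine restatement in terms of Fourier coefficients: writing $u=\sum_{k\in \mathbb{Z}}\hat u(k) e^{ik\theta}$, one has \begin{align*} \norm{A^s e^{\sigma A}u}_{L^2(\mathbb T)}^2=\sum_{k\in \mathbb{Z}}(1+k^2)^{s}\, e^{2\sigma (1+k^2)^{1/2}}\, \abs{\hat u(k)}^2, \end{align*} so that membership in $\mathcal D(A^s e^{\sigma A})$ amounts precisely to an exponential decay of the Fourier coefficients of $u$.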
The so--called method of Gevrey estimates has been extensively used in the literature to establish regularity results for nonlinear evolution equations. It was first introduced for the periodic Navier-Stokes equations in \cite{Foias-Temam-89}, and later studied in the whole space in \cite{Oliver-Titi-00}; moreover, it was extended to nonlinear analytic parabolic PDEs in \cite{Ferrari-Titi-98}, and to the Euler equations in \cite{Kukavica-Vicol-09, Larios-Titi-10, Levermore-Oliver-97} (see also references therein). Recently, this method was also applied to establish analytic solutions for quasilinear wave equations \cite{Guo-Titi-12}. In this paper, we employ a special such class based on the space $W$ of functions with summable Fourier series. For a given function $u\in L^1(\mathbb T)$, $u=\sum_{k\in \mathbb{Z}} \hat u(k) e^{ik\theta}$, $\theta\in \mathbb T$, the Wiener norm of $u$ is given by \begin{align} \label{ell} \norm {u}_W=\norm{\hat u}_{\ell^1}=\sum_{k\in \mathbb{Z}} |\hat u(k)|. \end{align} Notice that $W$ is a Banach algebra (the Wiener algebra). Based on the Wiener algebra, the following special Gevrey norm is defined in \cite{Oliver-Titi-01}: \begin{align} \label{G-norm} \norm{u}_{G_{\sigma}(W)}=\sum_{k\in \mathbb{Z}}e^{\sigma|k|}|\hat u(k)|, \;\;\;\sigma\geq 0. \end{align} If $u\in L^1(\mathbb T)$ is such that $\norm{u}_{G_{\sigma}(W)}<\infty$, then we write $u\in G_{\sigma}(W)$. It is known that the Gevrey class $G_{\sigma}(W)$ is a Banach algebra \cite{Oliver-Titi-01}, and it characterizes the real analytic functions if $\sigma>0$. In particular, a function $u\in C^{\infty}(\mathbb T)$ is real analytic with uniform radius of analyticity $\rho$ if and only if $u\in G_{\sigma}(W)$ for every $0<\sigma<\rho$. Now, we state the main result of this paper. \begin{theorem} \label{main-thm} Assume $u_0\in L^2_+(\mathbb T)\cap G_{\sigma}(W)$, for some $\sigma>0$. Then the unique solution $u(t)$ of (\ref{Szego}) provided by Theorem \ref{thm-Gerard} satisfies $u(t)\in G_{\tau(t)}(W)$, for all $t\in \mathbb{R}$, where $\tau(t)= \sigma e^{-\lambda |t|}$, with some $\lambda>0$ depending on $u_0$. More precisely, there exists $C_0>0$, specified in (\ref{C0}) below, such that $\norm{u(t)}_{G_{\tau(t)}(W)}\leq C_0$, for all $t\in \mathbb{R}$. \end{theorem} Essentially, Theorem \ref{main-thm} shows the persistence of the spatial analyticity of the solution $u(t)$ for all time $t\in (-\infty,\infty)$ provided the initial datum is analytic. Since $\tau(t)$ is a lower bound for the radius of spatial analyticity of $u(t)$, this implies that the radius of analyticity of $u(t)$ cannot shrink faster than exponentially as $|t|\rightarrow \infty.$ \begin{remark} The precise definition of $\lambda $ in Theorem \ref{main-thm} is given in (\ref{def-tau}) below. In fact, as shown in Remark \ref{improvement}, one can prove that the radius $\rho (t)$ of real analyticity of $u(t)$ satisfies, for every $s>1$, $$\limsup_{t\rightarrow \infty} \left| \frac{\log \rho (t)}{t} \right| \le K_s \norm {u_0}_{H^s}^2\ ,$$ which is independent of the $G_{\sigma }(W)$ norm of $u_0$. The optimality of such an estimate is not known. However, let us mention the following two recent results in \cite{Gerard-13}. Firstly, if $u_0$ is a rational function of $e^{i\theta }$ with no poles in the closed unit disc, then so is $u(t)$, and $\rho (t)$ remains bounded from below by some positive constant for all time. Secondly, this bound is by no means uniform.
Indeed, starting with $$u_0=e^{i\theta }+\varepsilon \ ,\ \varepsilon >0\ ,$$ one can show that $$\rho \left (\frac \pi \varepsilon \right )=O(\varepsilon ^2)\ .$$ This phenomenon is to be compared to the one displayed by Kuksin in \cite{Kuksin} for NLS on the torus with small dispersion coefficient. Finally, let us mention a recent work by Haiyan Xu \cite{Xu-13}, who found a Hamiltonian perturbation of the cubic Szeg\H{o} equation which admits solutions with exponentially shrinking radius of analyticity. Moreover, one can check that the method of Theorem \ref{main-thm} applies as well to this perturbation, so that the above result is optimal in the case of this equation. \end{remark} By investigating the steady state of the cubic nonlinear Schr\"odinger equation, it is demonstrated in \cite{Oliver-Titi-01} that, by employing the Gevrey class $G_{\sigma}(W)$, one can obtain a more accurate estimate of the lower bound of the radius of analyticity of solutions to differential equations, compared to the estimate derived from using the regular Gevrey classes $\mathcal D(A^s e^{\sigma A})$ (see also the discussion in \cite{Guo-Titi-12}). This observation is confirmed again in this paper, since we find that, in studying the cubic Szeg\H{o} equation, the Gevrey class method, based on $G_{\sigma}(W)$, provides an estimate of the lower bound of the analyticity radius of the solution which has a substantially slower shrinking rate than the estimate obtained from using the classes $\mathcal D(A^s e^{\sigma A})$. One may refer to Remark \ref{rmk-compare} for this comparison. Throughout, we study the cubic Szeg\H{o} equation defined on the torus $\mathbb T$. However, by using Fourier transforms instead of Fourier series, our techniques are also applicable to the same equation defined on the real line, and similar regularity results and estimates can be obtained as well (see also ideas from \cite{Oliver-Titi-00}). Moreover, Theorem \ref{main-thm} is also valid under the framework of general Gevrey classes, i.e., intermediate spaces between the space of $C^{\infty}$ functions and real analytic functions. Indeed, if we define Gevrey classes $G_{\sigma}^{\gamma}(W)$ based on the norm \begin{align*} \norm{u}_{G_{\sigma}^{\gamma}(W)}=\sum_{k\in \mathbb Z} e^{\sigma |k|^{\gamma}}|\hat u (k)|, \;\;\gamma\in (0,1], \end{align*} then $G_{\sigma}^{\gamma}(W)$ are Banach algebras, due to the elementary inequality $e^{\sigma (k+j)^{\gamma}}\leq e^{\sigma k^{\gamma}} e^{\sigma j^{\gamma}}$, which holds for $\gamma\in (0,1]$ since $(k+j)^{\gamma}\leq k^{\gamma}+j^{\gamma}$ for all $k,j\geq 0$. Thus, the proof of Theorem \ref{main-thm} works equally well for $G_{\sigma}^{\gamma}(W)$, where $\gamma\in (0,1]$. For the sake of clarity, we demonstrate our technique for $\gamma=1$, i.e., the Gevrey class of real analytic functions. \section{Proof of the main result} Before we start the proof of the main result, the following proposition should be mentioned. \begin{proposition} \label{prop} Assume $u_0\in H^s_+(\mathbb T)$, for some $s>1$. Let $u$ be the unique global solution of (\ref{Szego}), furnished by Theorem \ref{thm-Gerard}. Then, \begin{align} \label{ell-bound} \norm{u(t)}_{W}\leq C(s)\norm{u_0}_{H^s}, \text{\;\;for all\;\;} t\in \mathbb{R}. \end{align} \end{proposition} \begin{proof} The proof can be found in \cite{Gerard-10}; we recall it here. In \cite{Gerard-10}, it has been shown that the cubic Szeg\H{o} equation admits a Lax pair $(H_u, B_u)$, where $H_u$ is the Hankel operator of symbol $u$, defined by \begin{align}\label{hankel} H_u(h)=\Pi (u\overline h)\ . 
\end{align} Thus the trace norm $Tr(|H_{u(t)}|)$ is a conserved quantity. By Peller's theorem \cite{Pe80}, \cite{P}, $Tr(|H_u|)$ is equivalent to the $B_{1,1}^1$ norm of $u$. In particular, for every $s>1$, \begin{align} \label{tracebound} \frac 1 2\norm{u}_{W}\leq Tr(|H_u|)\leq C_s \norm{u}_{H^s}\ . \end{align} Hence $$\norm{u(t)}_{W}\leq 2 Tr(|H_{u(t)}|)=2 Tr(|H_{u_0}|)\le 2C_s \norm{u_0}_{H^s}\ .$$ The proof is complete. \end{proof} For the sake of completeness, we provide a straightforward proof of (\ref{tracebound}) in the Appendix. We now start the proof of Theorem \ref{main-thm}. \begin{proof} Due to the assumption on the initial datum $u_0$, we know that $u_0$ is real analytic, and hence $u_0\in H^s_+(\mathbb T)$, for every non-negative real number $s$, in particular for $s\geq \frac{1}{2}$. Therefore, the global existence and uniqueness of the solution $u\in C(\mathbb{R},H^s_+(\mathbb T))$ are guaranteed by Theorem \ref{thm-Gerard}, for $s\geq \frac{1}{2}$. Throughout, we focus on positive time $t\geq 0$; by replacing $t$ by $-t$, the same proof works for negative time. We shall implement the Galerkin approximation method. Recall the cubic Szeg\H{o} equation is defined on the Hardy space $L^2_+(\mathbb T)$ with a natural basis $\{e^{ik\theta}\}_{k\geq 0}$. Denote by $P_N$ the projection onto the span of $\{e^{ik\theta}\}_{0\leq k\leq N}$. We let \begin{align} \label{Appro-Sol} u_N(t)=\sum_{k=0}^{N} \hat u_{N}(t,k)e^{ik\theta} \end{align} be the solution of the Galerkin system: \begin{align} \label{Galerkin} i\partial_t u_N=P_N \left(|u_N|^2 u_N\right), \end{align} with the initial condition $u_N(0)=P_N u_0$. We see that (\ref{Galerkin}) is an $(N+1)$-dimensional system of ODEs with the conservation law $$\norm {u_N}_{L^2}^2=\sum _{k=0}^N \vert \hat u_N(t,k)\vert ^2\ ,$$ and thus it has a unique global solution $u_N\in C^\infty (\mathbb{R} )$. Arguing exactly as in section 2 of \cite{Gerard-10}, we observe that $$\sum _{k=0}^N k\vert \hat u_N (k)\vert ^2 $$ is a conservation law as well; hence $\norm {u_N(t)}_{H^{1/2}} $ is conserved, and consequently, for every $s\ge \frac 12$ and every $T>0$, \begin{align*} \sup _N\sup _{t\in [0,T]}\norm{u_N(t)}_{H^s}<\infty \ . \end{align*} By using the equation (\ref{Galerkin}), one concludes that the same estimate holds for the time derivative $u_N'(t)$. Now, let us fix an arbitrary $T>0$. Since, moreover, the injection of $H^{s+\varepsilon }$ into $H^s$ is compact, we conclude from Ascoli's theorem that, up to a subsequence, $u_N(t)$ converges to some $\tilde u(t)$ in every $H^s$, uniformly for $t\in [0,T]$. Then, it is straightforward to check, by letting $N\rightarrow \infty$, that $\tilde u$ is a solution of the cubic Szeg\H{o} equation (\ref{Szego}) on $[0,T]$ with the initial datum $u_0$. Since $u$ is the unique global solution furnished by Theorem {\ref{thm-Gerard}}, one must have $u=\tilde u$ on $[0,T]$. Since $H^s$ is contained in $W$ for every $s>\frac 12$, $u_N(t)$ tends to $u(t)$ in $W$ uniformly for $t\in [0,T]$. By Proposition \ref{prop}, there exists a constant $C_1>0$ such that \begin{align} \label{C1} \norm{u(t)}_{W}+1\leq C_1, \text{\;\;for all\;\;} t\in \mathbb{R}. \end{align} Consequently, there exists $N'\in \mathbb N$ such that \begin{align} \label{ge-0} \norm{u_N(t)}_{W}\leq \norm{u(t)}_{W}+1\leq C_1, \text{\;\;for all\;\;} N>N', \;\;t\in [0,T]. \end{align} Also, recall that the initial condition $u_0 \in G_{\sigma}(W)$, i.e., $\norm{u_0}_{G_{\sigma}(W)}<\infty$. 
Since $u_N(0)=P_N u_0$, one has \begin{align} \label{C2} \sum_{k=0}^N e^{\sigma k}|\hat u_{N}(0,k)|\leq \norm{u_0}_{G_{\sigma}(W)}. \end{align} Define \begin{align} \label{C0} C_0:=\max\left\{\norm{u_0}_{G_{\sigma}(W)},\frac{1+\sqrt{5}}{2}e C_1 \right\}, \end{align} where $C_1$ has been specified in (\ref{C1}). Let us fix an arbitrary $N>N'$. We aim to prove \begin{align} \label{cla} \sum_{k=0}^N e^{\tau(t)k}|\hat u_{N}(t,k)|\leq C_0, \text{\;\;for all\;\;} t\in [0,T], \end{align} with $\tau(t)>0$ that will be specified in (\ref{def-tau}), below. Due to (\ref{Appro-Sol}) and (\ref{Galerkin}), we infer \begin{align*} \frac{d}{dt}\hat u_N(t,k)=-i\sum_{n-j+m=k \atop 0\leq n,j,m\leq N}\hat u_N(t,n) \overline {\hat u_N(t,j)} \hat u_N(t,m), \;\;t\in [0,T], \;k=0,1,\ldots,N. \end{align*} Then, one can easily find that \begin{align} \label{z11} \frac{d}{dt} |\hat u_N(t,k)| \leq \sum_{n-j+m=k \atop 0\leq n,j,m\leq N}|\hat u_N(t,n)| |\hat u_N(t,j)| |\hat u_N(t,m)| , \end{align} for $k=0,1,\ldots,N$, and all $t\in [0,T]$. In order to estimate the Gevrey norm, we consider \begin{align*} &\frac{d}{dt}\left(e^{\tau(t)k}|\hat u_N(t,k)|\right)\notag\\ &=\tau'(t)k e^{\tau(t) k}|\hat u_N(t,k)|+e^{\tau(t) k}\frac{d}{dt}|\hat u_N(t,k)|\notag\\ &\leq \tau'(t)k e^{\tau(t) k}|\hat u_N(t,k)|+e^{\tau(t) k}\sum_{n-j+m=k \atop 0\leq n,j,m\leq N}|\hat u_N(t,n)| |\hat u_N(t,j)| |\hat u_N(t,m)|, \end{align*} for $k=0,1,\ldots,N$, and $t\in [0,T]$, where (\ref{z11}) has been used in the last inequality. Summing over all integers $k=0,1,\ldots,N$ yields \begin{align} \label{z1} &\frac{d}{d t}\left(\sum_{k=0}^Ne^{\tau(t) k}|\hat u_N(t,k)|\right) \notag\\ &\leq \tau'\sum_{k=0}^N k e^{\tau k }|\hat u_N(k)|+ \sum_{k=0}^N e^{\tau k} \left(\sum_{n-j+m=k \atop 0\leq n,j,m\leq N}|\hat u_N(n)| |\hat u_N(j)| |\hat u_N(m)|\right) \notag\\ &=\tau'\sum_{k=0}^N k e^{\tau k }|\hat u_N(k)|+ \sum_{k=0}^N \left(\sum_{n-j+m=k \atop 0\leq n,j,m\leq N} e^{\tau n}|\hat u_N(n)| e^{-\tau j} |\hat u_N(j)| e^{\tau m}|\hat u_N(m)|\right)\notag\\ &\leq \tau'\sum_{k=0}^N k e^{\tau k }|\hat u_N(k)|+ \left(\sum_{k=0}^Ne^{\tau k}|\hat u_N(k)|\right)^2 \left(\sum_{k=0}^N |\hat u_N(k)|\right), \end{align} where the last formula is obtained by using Young's convolution inequality and the fact that $e^{-\tau j}\leq 1$ for $\tau, j\geq 0$. Now, we estimate the second term on the right-hand side of (\ref{z1}). The key ingredient of the calculation is the elementary inequality $e^x\leq e+x^{\ell}e^x$, for all $x\geq 0$, $\ell\geq 0$, and we select $\ell=\frac{1}{2}$ here. Hence \begin{align} \label{z2} &\left(\sum_{k=0}^Ne^{\tau k}|\hat u_N(k)|\right)^2 \left(\sum_{k=0}^N |\hat u_N(k)|\right) \notag\\ &\leq \left(\sum_{k=0}^N e|\hat u_N(k)| +\sum_{k=0}^N\tau^{\frac{1}{2}} k^{\frac{1}{2}}e^{\tau k}|\hat u_N(k)|\right)^2 \left(\sum_{k=0}^N |\hat u_N(k)|\right) \notag\\ &\leq 2 e^2 \left(\sum_{k=0}^N |\hat u_N(k)|\right)^3 +2 \tau \left(\sum_{k=0}^N k e^{\tau k}|\hat u_N(k)|\right) \left(\sum_{k=0}^N e^{\tau k}|\hat u_N(k)|\right)\left(\sum_{k=0}^N |\hat u_N(k)|\right), \end{align} where we have used Young's inequality and H\"older's inequality. 
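For completeness, we record a short verification of the elementary inequality invoked above; it is not part of the original estimate and is included only for the reader's convenience: \begin{align*} e^x\leq e+x^{\ell}e^x, \;\;\text{for all\;\;} x\geq 0,\ \ell\geq 0, \end{align*} since $e^x\leq e$ when $0\leq x\leq 1$, while $x^{\ell}\geq 1$, and hence $x^{\ell}e^x\geq e^x$, when $x\geq 1$. 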
Thus, combining (\ref{z1}) and (\ref{z2}) yields \begin{align} \label{ge-1} &\frac{d}{dt}\left(\sum_{k=0}^N e^{\tau(t)k}|\hat u_N(t,k)|\right) \notag \\ &\leq \tau'(t)\sum_{k=0}^N k e^{\tau(t) k}|\hat u_N(t,k)|+2 e^2 \left(\sum_{k=0}^N |\hat u_N(t,k)|\right)^3 \notag\\ &\hspace{0.2 in}+2 \tau(t) \left(\sum_{k=0}^N k e^{\tau(t) k} |\hat u_N(t,k)|\right) \left(\sum_{k=0}^N e^{\tau(t) k} |\hat u_N(t,k)|\right) \left(\sum_{k=0}^N |\hat u_N(t,k)|\right) \notag\\ &\leq \frac{1}{2}\tau'(t)\sum_{k=0}^N k e^{\tau(t) k}|\hat u_N(t,k)|+ 2 e^2 C_1^3 \notag\\ &\hspace{0.2 in}+\left(\frac{1}{2}\tau'(t)+2 C_1 \tau(t) \sum_{k=0}^N e^{\tau(t) k} |\hat u_N(t,k)| \right)\left(\sum_{k=0}^N k e^{\tau(t) k} |\hat u_N(t,k)|\right), \end{align} for all $t\in [0,T]$, where we have used (\ref{ge-0}). Denote by $\tau_N(t)$, defined on a maximal interval of existence $[0,t_N]$, the unique solution of the ODE \begin{align} \label{z-N} \frac{1}{2}\tau_N'(t)+2 C_1 \tau_N(t)z_N(t)=0, \text{\;\;with\;\;} \tau_N(0)=\sigma, \end{align} where we set \begin{align} \label{def-z} z_N(t):=\sum_{k=0}^N e^{\tau_N(t)k}|\hat u_N(t,k)|. \end{align} Due to (\ref{z-N}) and (\ref{def-z}), we infer from (\ref{ge-1}) that \begin{align} \label{ge-1'} \frac{d z_N}{dt}(t) &\leq \frac{1}{2}\tau_N'(t)\sum_{k=0}^N k e^{\tau_N(t) k}|\hat u_N(t,k)|+2 e^2 C_1^3 \notag\\ &\leq -2 C_1 z_N(t)\tau_N(t)\sum_{k=0}^N k e^{\tau_N(t) k}|\hat u_N(t,k)|+2 e^2 C_1^3, \;\;t\in [0,t_N]. \end{align} Next, we estimate $\tau_N(t)\sum_{k=0}^N k e^{\tau_N(t) k}|\hat u_N(t,k)|$ by considering the following two cases: \emph{Case 1}: $N\geq \frac{1}{\tau_N(t)}$. In this case, one has \begin{align} \label{ge-2} &\tau_N(t)\sum_{k=0}^N k e^{\tau_N(t) k}|\hat u_N(t,k)| \geq \tau_N(t)\sum_{\frac{1}{\tau_N(t)} \leq k \leq N} k e^{\tau_N(t) k}|\hat u_N(t,k)|\notag\\ &\geq \sum_{\frac{1}{\tau_N(t)} \leq k \leq N} e^{\tau_N(t) k}|\hat u_N(t,k)| =\sum_{k=0}^N e^{\tau_N(t) k}|\hat u_N(t,k)|-\sum_{0\leq k<\frac{1}{\tau_N(t)}} e^{\tau_N(t) k}|\hat u_N(t,k)| \notag\\ &\geq z_N(t)-e \sum_{0\leq k<\frac{1}{\tau_N(t)}} |\hat u_N(t,k)|\geq z_N(t)-e C_1, \end{align} where the fact (\ref{ge-0}) has been used. \emph{Case 2}: $N<\frac{1}{\tau_N(t)}$. In this case, in order to obtain the same estimate as (\ref{ge-2}), we proceed as follows: \begin{align*} & \tau_N(t)\sum_{k=0}^N k e^{\tau_N(t) k}|\hat u_N(t,k)| \geq 0=z_N(t)- \sum_{k=0}^N e^{\tau_N(t) k}|\hat u_N(t,k)| \notag\\ & \geq z_N(t)- e \sum_{k=0}^N |\hat u_N(t,k)|\geq z_N(t)-e C_1. \end{align*} We conclude from the above two cases that $$\tau_N(t)\sum_{k=0}^N k e^{\tau_N(t) k}|\hat u_N(t,k)| \geq z_N(t)-e C_1,$$ and by substituting it into (\ref{ge-1'}), one has \begin{align} \label{ge-3} \frac{d z_N}{dt}(t)\leq -2 C_1 z_N^2(t)+2 e C_1^2 z_N(t)+2 e^2 C_1^3, \text{\;\;for all\;\;} t\in [0,t_N]. \end{align} Notice that the right-hand side of (\ref{ge-3}) is negative when $z_N > z^*=\frac{1+\sqrt{5}}{2}e C_1$, and hence (\ref{ge-3}) implies that \begin{align} \label{bound-z} z_N(t)\leq \max\{z_N(0),z^*\}=\max\left\{\sum_{k=0}^N e^{\sigma k}|\hat u_N(0,k)|,\frac{1+\sqrt{5}}{2}e C_1\right\}\leq C_0, \end{align} for all $t\in [0,t_N]$, where we have also used (\ref{C2}) and (\ref{C0}) in the above estimate. 
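Let us also record why the threshold $z^*$ has the value $\frac{1+\sqrt{5}}{2}e C_1$; this short computation is added only for the reader's convenience. Substituting $z_N=\lambda e C_1$ into the right-hand side of (\ref{ge-3}) gives \begin{align*} -2 C_1 z_N^2+2 e C_1^2 z_N+2 e^2 C_1^3=2e^2C_1^3\left(-\lambda^2+\lambda+1\right), \end{align*} and the positive root of $\lambda^2-\lambda-1=0$ is the golden ratio $\lambda=\frac{1+\sqrt{5}}{2}$. Hence the right-hand side of (\ref{ge-3}) is indeed negative precisely when $z_N>z^*$. 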
Therefore, by virtue of the uniform bound (\ref{bound-z}) of $z_N(t)$, the solution $\tau_N(t)$ of the initial value problem (\ref{z-N}) on $[0,t_N]$ can be extended to a solution on $[0,T]$, and thus (\ref{bound-z}) holds for all $t\in [0,T]$, i.e., \begin{align} \label{bound-z'} z_N(t)\leq C_0, \text{\;\;for all\;\;} t\in [0,T], \end{align} and, solving (\ref{z-N}) explicitly and using (\ref{bound-z'}), we infer \begin{align} \label{ge-4} \tau_N(t)=\sigma \exp\left(-4 C_1 \int_0^t z_N(s) ds\right)\geq \sigma e^{-4C_0 C_1 t}, \text{\;\;for all\;\;} t\in [0,T]. \end{align} Let us define \begin{align} \label{def-tau} \tau(t)=\sigma e^{-\lambda |t|}, \text{\;\;with\;\;} \lambda=4 C_0 C_1, \end{align} where $C_0$ and $C_1$ are specified in (\ref{C0}) and (\ref{C1}), respectively. Then, (\ref{ge-4}) and (\ref{def-tau}) show that $\tau(t)\leq \tau_N(t)$ on $[0,T]$, and consequently, \begin{align} \label{ge-5} \norm{u_N(t)}_{G_{\tau(t)}(W)}=\sum_{k=0}^N e^{\tau(t)k}|\hat u_N(t,k)| \leq \sum_{k=0}^N e^{\tau_N(t)k}|\hat u_N(t,k)|=z_N(t)\leq C_0, \end{align} for all $t\in [0,T]$, due to (\ref{bound-z'}). Since $N$ is an arbitrary integer larger than $N'$, we conclude that, for every fixed integer $N_0\geq 0$ and every $t\in [0,T]$, $$\sum_{k=0}^{N_0} e^{\tau(t)k}|\hat u(t,k)|=\lim _{N\rightarrow \infty }\sum_{k=0}^{N_0} e^{\tau(t)k}|\hat u_N(t,k)|\le C_0\ .$$ Therefore, since $N_0\ge 0$ and $T>0$ are arbitrary, $\norm{u(t)}_{G_{\tau(t)}(W)}\leq C_0$ for all $t\geq 0$. \end{proof} \begin{remark}\label {improvement} In Theorem \ref{main-thm}, we found a lower bound $\tau(t)$ of the radius of spatial analyticity of $u(t)$, where $\tau(t)=\sigma e^{-\lambda |t|}$, with $\lambda=4C_0 C_1$. By the definition of $C_0$ in (\ref{C0}), one has \begin{align} \label{def-lamda} \lambda= \begin{cases} 2(1+\sqrt{5})e C_1^2, \text{\;\;if\;\;} \norm{u_0}_{G_{\sigma}(W)}\leq (1+\sqrt 5)e C_1/2;\\ 4 C_1 \norm{u_0}_{G_{\sigma}(W)}, \text{\;\;if\;\;} \norm{u_0}_{G_{\sigma}(W)}> (1+\sqrt 5)e C_1/2. \end{cases} \end{align} Here, we shall provide a slightly different lower bound $\tilde \tau(t)$ of the radius of analyticity of $u(t)$. More precisely, we can choose $\tilde \tau(t)=\sigma e^{-\tilde \lambda(t) |t|}$, where $\tilde \lambda(t)$, defined in (\ref{re-4}) below, is almost \emph{independent} of the Gevrey norm $\norm{u_0}_{G_{\sigma}(W)}$ of the initial datum, for large values of $|t|$. Indeed, by (\ref{ge-3}), it is easy to see that \begin{align} \label{re-1} \frac{dz_N}{dt}(t)\leq -2 C_1 \left(z_N(t)-\frac{eC_1}{2}\right)^2+\frac{5}{2}e^2C_1^3. \end{align} Integrating (\ref{re-1}) over $[0,t]$ and using $z_N(t)\geq 0$, we obtain \begin{align} \label{re-2} \int_0^t \left(z_N(s)-\frac{eC_1}{2}\right)^2 ds \leq \frac{z_N(0)}{2C_1}+\frac{5e^2 C_1^2 t}{4} \leq \frac{\norm{u_0}_{G_{\sigma}(W)}}{2C_1}+\frac{5e^2 C_1^2 t}{4}. \end{align} Note \begin{align} \label{re-3} \int_0^t z_N(s) ds&=\int_0^t \left(z_N(s)-\frac{e C_1}{2}\right) ds+\frac{e C_1}{2}t \notag\\ &\leq \left[\int_0^t \left(z_N(s)-\frac{e C_1}{2}\right)^2 ds\right]^{\frac{1}{2}}\sqrt{t}+\frac{e C_1}{2}t \notag\\ &\leq \left[\frac{\norm{u_0}_{G_{\sigma}(W)}}{2C_1}+\frac{5e^2 C_1^2 t}{4}\right]^{\frac{1}{2}}\sqrt{t}+\frac{e C_1}{2}t, \end{align} where we have used the estimate (\ref{re-2}). Thus, by (\ref{ge-4}) and (\ref{re-3}), we may select \begin{align} \label{re-4} \tilde \tau(t)=\sigma e^{-\tilde \lambda(t)|t|}, \text{\;\;with\;\;} \tilde \lambda(t)=2C_1\left[\frac{2\norm{u_0}_{G_{\sigma}(W)}}{C_1|t|}+5e^2 C_1^2\right]^{\frac{1}{2}}+2e C_1^2, \;\;|t|>0, \end{align} and then $\tilde \tau(t)\leq \tau_N(t)$. 
Thus, by adopting the argument in Theorem \ref{main-thm}, it can be shown that $\norm{u(t)}_{G_{\tilde \tau(t)}(W)}\leq C_0$ for all $t\in \mathbb{R}$. Also, we see from (\ref{re-4}) that $\tilde \lambda(t)\rightarrow 2(1+\sqrt 5)e C_1^2$ as $|t|\rightarrow \infty$, that is, $\tilde \lambda(t)$ is almost \emph{independent} of $\norm{u_0}_{G_{\sigma}(W)}$, for large values of $|t|$, in contrast to the definition (\ref{def-lamda}) of $\lambda$. \end{remark} \begin{remark} For analytic initial data, the Gevrey norm estimate $\norm{u(t)}_{G_{\tau(t)}(W)}\leq C_0$, where $\tau(t)=\sigma e^{-\lambda |t|}$, can provide a growth estimate of the $H^s$ norm of the solution $u(t)$. Indeed, \begin{align*} \norm{u}_{H^s}^2=\sum_{k\geq 0} (k^{2s}+1)|u_k|^2 \leq \sup |u_k| \left(\sum_{k\geq 0} |u_k| e^{\tau k} \frac{k^{2s}}{e^{\tau k}}+\sum_{k\geq 0}|u_k|\right). \end{align*} Since the maximum of the function $k \mapsto \frac{k^{2s}}{e^{\tau k}}$ occurs at $k=\frac{2s}{\tau}$, we obtain \begin{align*} \norm{u}_{H^s}^2\leq \norm{u}_{W}\left[ e^{-2s} \left(\frac{2s}{\tau}\right)^{2s} \norm{u}_{G_{\tau}(W)}+\norm{u}_{W}\right]. \end{align*} It follows that \[ \norm{u(t)}_{H^s}^2\leq C(s) e^{2\lambda s t}, \] that is to say, the $H^s$ norm grows at most exponentially, if $s>\frac{1}{2}$, which agrees with the $H^s$ norm estimates in Corollary 2, section 3 of \cite{Gerard-10}. \end{remark} \begin{remark} \label{rmk-compare} Let us set $A=\sqrt{I-\Delta}$. Recall the regular Gevrey classes of analytic functions are defined by $\mathcal D(A^s e^{\sigma A})$ equipped with the norm $\norm{A^s e^{\sigma A}\cdot}_{L^2(\mathbb T)}$, where $s\geq 0$, $\sigma>0$. It has been mentioned in the Introduction that we choose to employ the special Gevrey class $G_{\sigma}(W)$ in this manuscript, since it provides a better estimate of the lower bound of the radius of analyticity of the solution. In particular, we can make the following comparison. Suppose the initial condition $u_0\in \mathcal D(A^s e^{\sigma A})$, $s>\frac{1}{2}$, $\sigma>0$, and let us perform the estimates by using the regular Gevrey classes $\mathcal D(A^s e^{\sigma A})$. Adapting arguments similar to those in \cite{Guo-Titi-12, Larios-Titi-10}, one can show that \begin{align*} \norm{A^s e^{\tau_1(t) A}u(t)}_{L^2}^2\leq \norm{A^s e^{\sigma A}u_0}_{L^2}^2+C\int_0^{|t|} \norm{u(t')}_{H^s}^4 dt', \;\;s>\frac{1}{2}, \end{align*} if $\tau_1(t)=\sigma e^{-\int_0^{|t|} h(t') dt'}$, where $h(t)=C\left(\norm{A^s e^{\sigma A}u_0}_{L^2}^2+\int_0^{|t|} \norm{u(t')}_{H^s}^4 dt'\right)$. Since $\norm{u(t)}_{H^s}$, $s>\frac{1}{2}$, has an upper bound that grows exponentially as $|t|\rightarrow \infty$ (see \cite{Gerard-10}), we infer that $\tau_1(t)$ might shrink \emph{doubly} exponentially, compared to the exponential shrinking rate of $\tau(t)$ established in Theorem \ref{main-thm}, where the Gevrey class $G_{\sigma}(W)$ is used. This advantage of employing the special Gevrey class $G_{\sigma}(W)$ stems from the uniform boundedness of the norm $\norm{u(t)}_{W}$ for the solution $u$ to the cubic Szeg\H{o} equation for sufficiently regular initial data. \end{remark} \section{Appendix} For the sake of completeness, we provide a straightforward proof of the following property of the Hankel operator. \begin{proposition} For any $u\in L^2_+(\mathbb T)\cap W$, the following double inequality holds \begin{align} \label{tracebound1} \frac 12\norm{u}_{W}\leq Tr(|H_u|)\le \sum _{k=0}^\infty \left (\sum _{\ell =0}^\infty \vert \hat u(k+\ell )\vert ^2\right )^{\frac12}\ . 
\end{align} \end{proposition} \begin{proof} Recall the following result from operator theory (see, e.g., \cite{Conway-2000}). Let $A$ be a trace-class operator on a Hilbert space $H$. If $\{e_k\}$ and $\{f_k\}$ are two orthonormal families in $H$, then \begin{align} \label{operator} \sum_{k}|(A e_k,f_k)|\leq Tr(|A|). \end{align} In order to find a lower bound of $Tr(|H_u|)$, we use the estimate (\ref{operator}) by computing $\sum_{k}|( H_u (e^{ik\theta}),f_k)|$ with two different orthonormal systems $\{f_k\}$ selected below. Notice that, by the definition (\ref{hankel}) of the Hankel operator $H_u: L^2_+(\mathbb T)\rightarrow L^2_+(\mathbb T)$, we have \begin{align} \label{appen-0} H_u (e^{ik\theta})=\Pi(u e^{-ik\theta})=\Pi\left(\sum_{j\geq 0}\hat u(j) e^{i(j-k)\theta}\right) =\sum_{j\geq 0}\hat u(j+k)e^{ij\theta}. \end{align} If we choose $f_k=e^{ik\theta}$, $k\geq 0$, and use (\ref{appen-0}), then it follows that \begin{align*} Tr (\vert H_u\vert )\ge \sum_{k\geq 0}|(H_u (e^{ik\theta}),e^{ik\theta})| =\sum_{k\geq 0}\left|\big(\sum_{j\geq 0}\hat u(j+k) e^{ij\theta},e^{ik\theta}\big)\right|=\sum_{k\geq 0}|\hat u(2k)|. \end{align*} Similarly, if we select $f_k=e^{i(k+1)\theta}$, for every integer $k\geq 0$, then \begin{align*} Tr (\vert H_u\vert )\ge \sum_{k\geq 0}|(H_u (e^{ik\theta}),f_k)| =\sum_{k\geq 0 } |\hat u(2k+1)|. \end{align*} Summing up, we have proved \begin{align*} 2 Tr(|H_u|)\geq \sum_{k\geq 0}|\hat u(k)|=\norm{u}_{W}. \end{align*} We now pass to the second inequality. Recall from (\ref{hankel}) that, for every $h_1,h_2\in L^2_+$, $$(H_u(h_1), h_2)=(u, h_1h_2)=(H_u(h_2), h_1)\ ,$$ which implies that $H_u^2$ is a positive self-adjoint linear operator. Moreover, $$Tr(H_u^2)=\sum _{k,\ell \ge 0}\vert \hat u(k+\ell )\vert ^2=\sum _{n=0}^\infty (n+1)\vert \hat u(n)\vert ^2<\infty $$ as soon as $u\in H^{1/2}$. In other words, $\vert H_u\vert =\sqrt {H_u^2}$ is a positive Hilbert--Schmidt operator if $u\in L^2_+\cap H^{1/2}.$ Let $\{ \rho _j\}$ be the sequence of positive eigenvalues of $\vert H_u\vert $, and let $\{ \varepsilon _j\}$ be an orthonormal sequence of corresponding eigenvectors. Notice that $$(H_u(\varepsilon _j), H_u(\varepsilon _{j'}))=(H_u^2(\varepsilon _{j'}), \varepsilon _j)=\rho _{j'}^2\delta _{jj'}\ .$$ We infer that the sequence $\{ H_u(\varepsilon _j)/\rho _j\} $ is orthonormal. 
We then define the following antilinear operator on $L^2_+$, $$\Omega _u(h)=\sum _j \frac{(H_u(\varepsilon _j), h)}{\rho _j}\varepsilon _j\ .$$ Notice that, due to the orthonormality of both systems $\{ \varepsilon _j\} $ and $\{ H_u(\varepsilon _j)/\rho _j\}$, $$\norm {\Omega _u(h)} \le \norm{h}\ .$$ We now observe that \begin{eqnarray*} \rho _j&=&(\Omega _u(H_u(\varepsilon _j)), \varepsilon _j)=\sum _{k=0}^\infty (\Omega _u(e^{ik\theta}) , \varepsilon _j) (e^{ik\theta }, H_u(\varepsilon _j))=\sum _{k=0}^\infty (\Omega _u(e^{ik\theta}), \varepsilon _j)(\varepsilon _j, H_u(e^{ik\theta }))\\ &=&\sum _{k,\ell \ge 0}\overline {\hat u(k+\ell )}(\Omega _u(e^{ik\theta }), \varepsilon _j)(\varepsilon _j, e^{i\ell \theta})\ , \end{eqnarray*} and therefore $$Tr(\vert H_u\vert )=\sum _j \rho _j=\sum _{k,\ell \ge 0}\overline {\hat u(k+\ell )}(\Omega _u(e^{ik\theta}), e^{i\ell \theta})\ .$$ Applying the Cauchy--Schwarz inequality to the sum over $\ell $ gives $$Tr(\vert H_u\vert )\le \sum _{k=0}^\infty \Vert \Omega _u(e^{ik\theta})\Vert \left (\sum _{\ell =0}^\infty \vert \hat u(k+\ell )\vert ^2\right )^{\frac12}\ ,$$ and the claim follows from $\norm{\Omega _u(e^{ik\theta })}\le \norm{e^{ik\theta}}= 1$. \end{proof} Using the above proposition, it is easy to derive the estimate (\ref{tracebound}) used in the proof of Proposition \ref{prop}. Indeed, by the Cauchy--Schwarz inequality in the $k$ sum, we have, for every $s>1$, \begin{eqnarray*} \sum _{k=0}^\infty \left (\sum _{\ell =0}^\infty \vert \hat u(k+\ell )\vert ^2\right )^{\frac12}&\le & \left (\sum _{k=0}^\infty (1+k)^{1-2s}\right )^{\frac 12} \left (\sum _{k,\ell \ge 0}(1+k)^{2s-1}\vert \hat u(k+\ell )\vert ^2\right )^{\frac 12} \\ &\le &\left (\frac{s}{s-1}\right )^{\frac 12} \left (\sum _{k,\ell \ge 0}(1+k+\ell )^{2s-1}\vert \hat u(k+\ell )\vert ^2\right )^{\frac 12}\\ &\le & C_s\norm{u}_{H^s}\ . \end{eqnarray*} \par \noindent \textbf{Acknowledgement:} This work was supported in part by the Minerva Stiftung/Foundation, and by the NSF grants DMS-1009950, DMS-1109640 and DMS-1109645. \end{document}
\begin{document} \title{Improved homological stability for the mapping class group with integral or twisted coefficients} \begin{abstract}In this paper we prove stability results for the homology of the mapping class group of a surface. We get a stability range that is near optimal, and extend the result to twisted coefficients. \end{abstract} \section*{Introduction} Let $F_{g,r}$ denote the compact oriented surface of genus $g$ with $r$ boundary circles, and let $\Gamma_{g,r}$ be the associated mapping class group, \begin{equation*} \Gamma_{g,r}=\pi_{0}\text{Diff}_+(F_{g,r};\partial), \end{equation*} the components of the group of orientation-preserving diffeomorphisms of $F_{g,r}$ keeping the boundary pointwise fixed. Gluing a pair of pants onto one or two boundary circles induces maps \begin{equation*} \Sigma_{0,1}: \Gamma_{g,r}\longrightarrow \Gamma_{g,r+1},\quad \Sigma_{1,-1}: \Gamma_{g,r}\longrightarrow \Gamma_{g+1,r-1} \end{equation*} whose composite $\Sigma_{1,0}:=\Sigma_{1,-1}\circ \Sigma_{0,1}$ corresponds to adding to $F_{g,r}$ a genus one surface with two boundary circles. Using the mapping cone of $\Sigma_{i,j}$, $(i,j)=(0,1), (1,-1)$ or $(1,0)$, we get a relative homology group, which fits into the exact sequence \begin{equation*} \ldots \longrightarrow H_n(\Sigma_{i,j}\Gamma_{g,r})\longrightarrow H_n(\Sigma_{i,j}\Gamma_{g,r},\Gamma_{g,r})\longrightarrow H_{n-1}(\Gamma_{g,r})\longrightarrow \ldots \end{equation*} Homology stability results for the mapping class group can then be derived from the vanishing of the relative groups (in some range). We wish to show such a stability result not only for trivial coefficients but also for so-called coefficient systems of finite degree. For this, we work in Ivanov's category $\mathfrak{C}$ of marked surfaces, cf. \cite{Ivanov1} and $\S4.1$ below for details. The maps $\Sigma_{1,0}$ and $\Sigma_{0,1}$ are functors on $\mathfrak{C}$, and $\Sigma_{1,-1}$ is a functor on a subcategory. A coefficient system is a functor $V$ from $\mathfrak{C}$ to the category of abelian groups without infinite division. If the functor is constant, we say $V$ has degree 0. We then define a coefficient system of degree $k$ inductively, by requiring that the maps $V(F){\longrightarrow}V(\Sigma_{i,j}F)$ are split injective and their cokernels are coefficient systems of degree $k-1$, see Definition \ref{d:coef}. As an example, the functor $H_1(F;\mathbb Z)$ is a coefficient system of degree $1$, and its $k$th exterior power $\Lambda^k H_1(F;\mathbb Z)$, considered in \cite{Morita1}, has degree $k$. To formulate our stability result, we consider the relative homology groups with coefficients in $V$, \begin{equation*} Rel_n^V(\Sigma_{l,m}F,F)=H_n(\Sigma_{l,m}\Gamma(F),\Gamma(F); V(\Sigma_{l,m}F),V(F)). \end{equation*} These groups again fit into a long exact sequence. Our main result is \begin{intro}For $F$ a surface of genus $g$ with at least 1 boundary component, and $V$ a coefficient system of degree $k_V$, we have \begin{equation*} Rel_n^V(\Sigma_{1,0}F,F)=0 \text{ for } 3n\le 2g-k_V, \end{equation*} \begin{equation*} Rel_n^V(\Sigma_{0,1}F,F)=0 \text{ for } 3n\le 2g-k_V. \end{equation*} Moreover, if $F$ has at least 2 boundary components, we have \begin{equation*} Rel_n^V(\Sigma_{1,-1}F,F)=0 \text{ for } 3n\le 2g-k_V+1. \end{equation*} \end{intro} As a corollary, we obtain that $H_n(\Gamma_{g,r}; V(F_{g,r}))$ is independent of $g$ and $r$ for $3n\le 2g-k_V-2$ and $r\ge 1$. For a more precise statement, see Theorem \ref{t:abstwist}. 
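To illustrate the notion of degree in a concrete case (the following computation is standard and is included only as an illustration), take $V(F)=H_1(F;\mathbb Z)$. For $r\ge 1$ the surface $F_{g,r}$ is homotopy equivalent to a wedge of $2g+r-1$ circles, so \begin{equation*} V(F_{g,r})=H_1(F_{g,r};\mathbb Z)\cong \mathbb Z^{2g+r-1}, \end{equation*} and the maps induced by $\Sigma_{0,1}$ and $\Sigma_{1,0}$ are split injective with cokernels $\mathbb Z$ and $\mathbb Z^2$, respectively. These cokernels are constant coefficient systems, i.e. of degree $0$, so $H_1(F;\mathbb Z)$ has degree $1$, in accordance with the example above. 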
The corollary uses that $\Sigma_{0,1}$ is always injective, since the composition $\Gamma_{g,r}\stackrel{\Sigma_{0,1}}{\longrightarrow}\Gamma_{g,r+1}\stackrel{\Sigma_{0,-1}}{\longrightarrow}\Gamma_{g,r}$ is an isomorphism, where $\Sigma_{0,-1}$ is the map gluing a disk onto a boundary component. The proof of Theorem 1 with twisted coefficients uses the setup from \cite{Ivanov1}. His category of marked surfaces is slightly different from ours, since we also consider surfaces with more than one boundary component and thus get results for $\Sigma_{0,1}$ and $\Sigma_{1,-1}$. For constant coefficients, $V=\mathbb Z$, we also consider the map $\Sigma_{0,-1}:\Gamma_{g,1}\longrightarrow \Gamma_{g}$ induced by gluing a disk onto the boundary circle, where our result is: \begin{intro}The map \begin{equation*} \Sigma_{0,-1}: H_k(\Gamma_{g,1};\mathbb Z)\longrightarrow H_k(\Gamma_{g};\mathbb Z) \end{equation*} is surjective for $2g \ge 3k - 1$, and an isomorphism for $2g \ge 3k + 2$. \end{intro} The proof of Theorem 2 follows \cite{Ivanov1}, where a stability result for closed surfaces is deduced from a stability theorem on surfaces with boundary. We get an improved result, because Theorem 1 has a better bound than Ivanov's stability theorem (which has isomorphism for $g>2k$). In this paper, we first prove Theorem 1 for constant integral coefficients, $V=\mathbb Z$. Our proof of Theorem 1 in this case is much inspired by Harer's manuscript \cite{Harer2}, which was never published. Harer's manuscript is about rational homology stability. The rational stability results claimed in \cite{Harer2} are ``one degree better'' than what is obtained here with integral coefficients. Before discussing the discrepancy, it is convenient to compare the stability with Faber's conjecture. Let $\mathcal{M}_g$ be Riemann's moduli space; recall that $H^*(\mathcal{M}_g;\mathbb Q) \cong H^*(\Gamma_g;\mathbb Q)$. From above we have maps \begin{equation*} H^*(\Gamma_g;\mathbb Q) \longrightarrow H^*(\Gamma_{g,1};\mathbb Q) \longleftarrow H^*(\Gamma_{\infty,1};\mathbb Q) \end{equation*} and by \cite{MW}, \begin{equation}\label{e:Ib1} H^*(\Gamma_{\infty,1};\mathbb Q) = \mathbb Q[\kappa_1,\kappa_2,\ldots]. \end{equation} The classes $\kappa_i\in H^{2i}(\Gamma_{g,r})$ for $r\ge 0$ are the standard classes defined by Miller, Morita and Mumford ($\kappa_i$ is denoted $e_i$ by Morita). The tautological algebra $R^*(\mathcal{M}_g)$ is the subring of $H^*(\Gamma_g;\mathbb Q)$ generated multiplicatively by the classes $\kappa_i$. Faber conjectured in \cite{Faber} the complete algebraic structure of $R^*(\mathcal{M}_g)$. Part of the conjecture asserts that it is a Poincaré duality algebra (Gorenstein) of formal dimension $2g-4$, and that it is generated by $\kappa_1, \ldots,\kappa_{[g/3]}$, where $[g/3]$ denotes $g/3$ rounded down. The latter statement was proved by Morita (cf. \cite{Morita1}, Prop. 3.4). It follows from our theorems above that $\kappa_1, \ldots, \kappa_{[g/3]}$ are non-zero in $H^*(\Gamma_g;\mathbb Q)$ when $*\le 2[\frac{g}{3}]-2$. More precisely, if $g\equiv 1,2\:(\textrm{mod } 3)$ then our results show that \begin{equation}\label{e:Ib2} H^*(\Gamma_g;\mathbb Q) \cong H^*(\Gamma_{\infty,1};\mathbb Q) \quad \text{for }*\le 2[\textstyle\frac{g}{3}\displaystyle], \end{equation} but if $g\equiv 0\:(\textrm{mod } 3)$, our results only show the isomorphism for $*\le 2[\textstyle\frac{g}{3}\displaystyle]-1$. In contrast, \cite{Harer2} asserts the isomorphism for $*\le 2[\textstyle\frac{g}{3}\displaystyle]$ for all $g$. 
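To make the discrepancy concrete, consider $g=6$, so that $g\equiv 0\:(\textrm{mod } 3)$ and $[g/3]=2$; this numerical illustration merely instantiates the ranges stated above. Our results give \begin{equation*} H^*(\Gamma_6;\mathbb Q)\cong H^*(\Gamma_{\infty,1};\mathbb Q)\quad \text{for } *\le 3, \end{equation*} whereas the range asserted in \cite{Harer2} would yield this isomorphism for $*\le 4$. 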
We note that it follows from \eqref{e:Ib1} and Morita's result that the best possible stability range for $H^*(\Gamma_{g};\mathbb Q)$ is $*\le 2[\textstyle\frac{g}{3}\displaystyle]$. We are ``one degree off'' when $g\equiv 0\:(\textrm{mod } 3)$. The stability of \cite{Harer2} is based on three unproven assertions that I have not been able to verify. I will discuss two of them below, and the third in section \ref{sc:lemmas}. Boundary connected sum of surfaces with non-empty boundary defines a group homomorphism $\Gamma_{g,r}\times \Gamma_{h,s}\longrightarrow\Gamma_{g+h,r+s-1}$, and hence a product in homology \begin{equation*} H_*(\Gamma_{g,r})\otimes H_*(\Gamma_{h,s})\longrightarrow H_*(\Gamma_{g+h,r+s-1}), \quad r,s>0. \end{equation*} The classes $\kappa_i$ are primitive with respect to this homology product, in the sense that $\indre{\kappa_i}{a\cdot b}=0$ if both $a$ and $b$ have positive degree \cite{Morita2}. Harer proves in \cite{Harer3} that $H^2(\Gamma_{3,1};\mathbb Q)=\mathbb Q\set{\kappa_1}$. Let $\check{\kappa}_1\in H_2(\Gamma_{3,1};\mathbb Q)$ be the dual to $\kappa_1$, and let $\check{\kappa}_1^{\phantom{,}n}$ be the $n$th power under the multiplication \begin{equation*} H_2(\Gamma_{3,1})^{\otimes n}\longrightarrow H_{2n}(\Gamma_{3n,1}). \end{equation*} Then $\indre{\kappa_1^{\phantom{,}n}}{\check{\kappa}_1^{\phantom{,}n}}=n!$, so $\check{\kappa}_1^{\phantom{,}n}\ne 0$ in $H_{2n}(\Gamma_{3n,1};\mathbb Q)$, cf. part $(i)$ of Theorem 1. The Dehn twist around the $(r+1)$st boundary circle yields a group homomorphism $\mathbb Z\longrightarrow \Gamma_{1, r+1}$, and hence a class $\tau_{r+1}\in H_1(\Gamma_{1,r+1})$. We can now formulate two of Harer's three assertions one needs in order to improve the rational stability result by ``one degree'' when $g\equiv 0\:(\textrm{mod } 3)$, i.e. from $*\le 2[\frac{g}{3}]-1$ to $*\le 2[\frac{g}{3}]$. The assertions are: \begin{itemize} \item[$(i)$]$\check{\kappa}_1^{\phantom{,}n}=0$ in $H_{2n}(\Gamma_{g,r};\mathbb Q)$ for $g<3n$. \item[$(ii)$]$\tau_{r+1}\cdot\check{\kappa}_1^{\phantom{,}n}$ is non-zero in $\textrm{Coker}(H_{2n+1}(\Gamma_{3n+1,r};\mathbb Q)\longrightarrow H_{2n+1}(\Gamma_{3n+1,r+1};\mathbb Q))$. \end{itemize} The third assertion one needs is stated in Remark \ref{r:third}. \paragraph{Acknowledgements}This article is part of my Ph.D. project at the University of Aarhus. It is a great pleasure to thank my thesis advisor Ib Madsen for his help and encouragement during my years as a graduate student. I am also grateful to Mia Hauge Dollerup for her help in composing this paper. \tableofcontents \section{Homology of groups and spectral sequences} \subsection{Relative homology of groups}\label{S:1,1} For a group $G$, and $\mathbb Z[G]$-modules $M$ and $M'$, left and right modules, respectively, we have the bar construction: \begin{equation*} B_n(M',G,M)=M'\otimes(\mathbb Z[G])^{\otimes n}\otimes M, \end{equation*} with the differential \begin{eqnarray*} d_n(m'\otimes g_1\otimes \cdots \otimes g_n \otimes m) &=& (m'g_1)\otimes g_2\otimes \cdots\otimes g_n\otimes m \\ &+& \sum_{i=1}^{n-1}(-1)^i m'\otimes g_1\otimes \cdots\otimes g_ig_{i+1}\otimes \cdots\otimes g_n\otimes m\\ &+& (-1)^n m' \otimes g_1\otimes \cdots\otimes g_{n-1}\otimes (g_nm). \end{eqnarray*} If either $M$ or $M'$ is a free $\mathbb Z[G]$-module, $B_*(M',G,M)$ is contractible. If $M'=\mathbb Z$ with trivial $G$-action, we write $B_*(G,M)$. 
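Before proceeding, we illustrate the bar differential in low degrees; this is a standard computation, included only for orientation. With $M'=\mathbb Z$, the differential $d_1: B_1(G,M)\longrightarrow B_0(G,M)$ reads \begin{equation*} d_1(g\otimes m)= m- gm, \quad\text{so that}\quad H_0(B_*(G,M))= M/\langle m-gm : g\in G,\, m\in M\rangle = M_G, \end{equation*} the module of coinvariants, as one expects for the homology defined below. 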
The $n$th homology group of $G$ with coefficients in $M$ is then defined to be \begin{equation*} H_n(G;M)=H_n(B_*(G,M))\cong \text{Tor}_n^{\mathbb Z G}(\mathbb Z,M). \end{equation*} There is a relative version of this. Suppose $f:G\longrightarrow H$ is a group homomorphism and $\varphi:M\longrightarrow N$ is an $f$-equivariant map of $\mathbb Z[G]$-modules. One defines the relative homology $H_*(H,G;N,M)$ to be the homology of the algebraic mapping cone of \begin{equation*} (f,\varphi)_*: B_*(G,M)\longrightarrow B_*(H,N), \end{equation*} so that there is a long exact sequence \begin{equation*} \cdots\to H_n(G;M)\to H_n(H;N)\to H_n(H,G;N,M)\to H_{n-1}(G;M)\to \cdots \end{equation*} \subsection{Spectral sequences of group actions}\label{sc:spectral} Suppose next that $X$ is a connected simplicial complex with a simplicial action of $G$. Let $C_*(X)$ be the cellular chain complex of $X$. Given a $\mathbb Z[G]$-module $M$, define the chain complex \begin{equation}\label{e:dagger} C_n^\dagger(X;M)=\left\{ \begin{array}{ll} 0, & \hbox{$n<0$;} \\ M, & \hbox{$n=0$;} \\ C_{n-1}(X)\otimes_\mathbb Z M, & \hbox{$n\ge 1$;} \end{array} \right. \end{equation} with differential $\partial_n^\dagger$ defined to be $\partial_{n-1}\otimes \textup{id}_M$ for $n>1$, and equal to the augmentation $\varepsilon\otimes \textup{id}_M$ for $n=1$. Note that if $X$ is $d$-connected for some $d\ge 1$, or more generally, if the homology $H_i(X)=0$ for $1\le i \le d$, then $C_*^\dagger(X;M)$ is exact for $*\le d+1$. This is used below in the spectral sequence. Again there is a relative version. Let $f:G\longrightarrow H$, $\varphi:M\longrightarrow N$ be as above, and let $X\subseteq Y$ be a pair of simplicial complexes with a simplicial action of $G$ and $H$, respectively, compatible with $f$ in the sense that the inclusion $i: X\longrightarrow Y$ is $f$-equivariant. Assume in addition that the induced map on orbits, \begin{equation}\label{e:orbits}\xymatrix{ i_\sharp:X/G\ar[r]^{\:\:\:\cong} &Y/H} \end{equation} is a bijection. \begin{defn} With $G$, $M$ and $X$ as above, let $\sigma$ be a $p$-cell of $X$. Let $G_{\sigma}$ denote the stabiliser of $\sigma$, and let $M_\sigma=M$, but with a twisted $G_\sigma$-action, namely \begin{equation*} g*m=\left\{ \begin{array}{ll} gm, & \hbox{if $g$ acts orientation preservingly on $\sigma$;} \\ -gm, & \hbox{otherwise.} \end{array} \right. \end{equation*} \end{defn} \begin{thm}\label{s:ss} Suppose $X$ and $Y$ are $d$-connected and that the orbit map \eqref{e:orbits} is a bijection. Then there is a spectral sequence $\set{E_{r,s}^n}_n$ converging to zero for $r+s\le d+1$, with \begin{equation*} E_{r,s}^1 \cong \bigoplus_{\sigma\in \Bar\Delta_{r-1}} H_s(H_\sigma,G_\sigma; N_\sigma, M_\sigma). \end{equation*} Here $\Bar\Delta_p=\Bar\Delta_{p}(X)$ denotes a set of representatives for the $G$-orbits of the $p$-simplices in $X$. \end{thm} \begin{proof} Consider the double complex with chain groups \begin{equation*} C_{n,m}= F_n(H)\otimes_{\mathbb Z[H]}C_m^\dagger(Y,N)\oplus F_{n-1}(G)\otimes_{\mathbb Z[G]}C_m^\dagger(X,M), \end{equation*} where $F_n(G)= B_n(G,\mathbb Z[G])$, and differentials (superscripts indicate horizontal and vertical directions) \begin{eqnarray}\label{e:dv} d^h_m &=& \textup{id}\otimes\partial^Y_m\oplus\textup{id}\otimes\partial^X_m \nonumber\\ d^v_n &=& \partial^H_n\otimes\textup{id}\oplus \left(f_*\otimes(i,\varphi)_*+\partial^G_{n-1}\otimes\textup{id}\right). 
\end{eqnarray} Standard spectral sequence constructions give two spectral sequences both converging to $H_*(\textrm{Tot}\, C)$, where $\textrm{Tot}\, C$ is the total complex of $C_{*,*}$, $\displaystyle(\textrm{Tot}\,C)_k=\bigoplus_{n+m=k}C_{n,m}$ and $d^{\textrm{Tot}}=d^h+d^v$. The vertical spectral sequence (induced by $d^v$) has $E^1$ page: \begin{eqnarray*} E^1_{r,s} &=& H_r(C_{s,*}) \\ &=& H_r\left(F_s(H)\otimes_{\mathbb Z[H]}C_*^\dagger(Y;N)\right)\oplus H_r\left(F_{s-1}(G)\otimes_{\mathbb Z[G]}C_*^\dagger(X;M)\right). \end{eqnarray*} Since the resolutions $F_*$ are free, this is zero where $C_*^\dagger(X;M)$ and $C_*^\dagger(Y;N)$ are exact, i.e. for $r\le d+1$. So this spectral sequence converges to zero where $r+s\le d+1$, and we conclude that $H_*(\textrm{Tot}\,C)=0$ for $*\le d+1$. The horizontal spectral sequence, which consequently also converges to zero in total degrees $\le d+1$, has $E^1$ page \begin{equation}\label{e:E1} E^1_{r,s} = H_s\left(F_*(H)\otimes_{\mathbb Z[H]}C_r^\dagger(Y,N)\oplus F_{*-1}(G)\otimes_{\mathbb Z[G]}C_r^\dagger(X,M)\right). \end{equation} For $r\ge 1$ we have \begin{eqnarray}\label{e:dagM} C_r^\dagger(X,M) &=& C_{r-1}(X)\otimes_{\mathbb Z[G]} M \cong \bigoplus_{\sigma\in\Delta_{r-1}(X)}\mathbb Z[G\cdot \sigma]\otimes_{\mathbb Z[G]} M \nonumber \\ &\cong& \bigoplus_{\sigma\in\Bar\Delta_{r-1}}\mathbb Z[G]\otimes_{\mathbb Z[G_\sigma]} M_\sigma = \bigoplus_{\sigma\in\Bar\Delta_{r-1}}\textup{Ind}_{G_{\sigma}}^G M_{\sigma}, \end{eqnarray} where $\Delta_p(X)$ denotes the $p$-cells in $X$, and where $\Bar\Delta_p\subseteq \Delta_p(X)$ is a set of representatives for the $G$-orbits. Finally, $\textup{Ind}_{G_{\sigma}}^G M_{\sigma}=\mathbb Z[G]\otimes_{\mathbb Z[G_{\sigma}]}M_{\sigma}$. By assumption \eqref{e:orbits}, the image of $\Bar\Delta_{r-1}$ under $i$ also works as representatives for the $H$-orbits of $(r-1)$-cells in $Y$. Therefore we also have: \begin{equation}\label{e:dagN} C_r^\dagger(Y,N)\cong \bigoplus_{\sigma\in\Bar\Delta_{r-1}}\textup{Ind}_{H_{\sigma}}^H N_{\sigma}. \end{equation} We insert (\ref{e:dagM}) and (\ref{e:dagN}) into the formula (\ref{e:E1}) to get for $r\ge1$: \begin{eqnarray}\label{e:E1rs} E^1_{r,s} &=& H_s\left(F_*(H)\otimes_{\mathbb Z[H]}C_r^\dagger(Y,N)\oplus F_{*-1}(G)\otimes_{\mathbb Z[G]}C_r^\dagger(X,M)\right) \nonumber\\ &\cong& H_s\left(F_*(H)\otimes_{\mathbb Z[H]} \bigoplus_{\sigma\in\Bar\Delta_{r-1}}\textup{Ind}_{H_{\sigma}}^H N_{\sigma}\oplus F_{*-1}(G)\otimes_{\mathbb Z[G]}\bigoplus_{\sigma\in\Bar\Delta_{r-1}} \textup{Ind}_{G_{\sigma}}^G M_{\sigma}\right) \nonumber\\ &\cong& \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s\left(F_*(H)\otimes_{\mathbb Z[H]} \textup{Ind}_{H_{\sigma}}^H N_{\sigma} \oplus F_{*-1}(G)\otimes_{\mathbb Z[G]} \textup{Ind}_{G_{\sigma}}^G M_{\sigma} \right) \nonumber\\ &\cong& \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s\left(F_*(H)\otimes_{\mathbb Z[H_\sigma]} N_{\sigma} \oplus F_{*-1}(G)\otimes_{\mathbb Z[G_\sigma]} M_{\sigma} \right) \nonumber\\ &\cong& \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s(H_\sigma,G_\sigma,N_\sigma,M_\sigma). \end{eqnarray} The final isomorphism above uses that $F_*(H)$ is also a $\mathbb Z[H_\sigma]$-module. For $r=0$, \begin{equation*} E^1_{0,s}= H_s(H,G;N,M). 
\end{equation*} Thus we set $H_\sigma=H$ when $\sigma \in \Bar\Delta_{-1}=\{\emptyset\}$.\end{proof} For application in the proof of Theorem \ref{t:maintwist}, we need to relax the condition \eqref{e:orbits} to the situation where $i_\sharp$ is only injective: \begin{thm}\label{s:spectralinj}With the assumptions of Theorem \ref{s:ss}, but with $i_\sharp: X/G \longrightarrow Y/H$ only assumed to be \emph{injective}, there is a spectral sequence $\set{E_{r,s}^n}_n$ converging to zero for $r+s\le d+1$, and \begin{equation*} E_{r,s}^1 \cong \bigoplus_{\sigma\in \Sigma_{r-1}(X)} H_s(H_\sigma,G_\sigma; N_\sigma, M_\sigma)\oplus \bigoplus_{\sigma\in\Gamma_{r-1}(Y)}H_s(H_\sigma,N_{\sigma}). \end{equation*} Here $\Sigma_{p}(X)$ denotes a set of representatives for the $G$-orbits of the $p$-cells in $X$, and $\Gamma_n(Y)$ denotes a set of representatives for those $H$-orbits which do not come from $n$-cells in $X$ under $i_\sharp$. \end{thm} \begin{proof} We can choose $\Sigma_n(Y)=i(\Sigma_n(X))\cup \Gamma_n(Y)$. In this case we obtain: \begin{equation*} E^1_{r,s}\cong \bigoplus_{\sigma\in\Sigma_{r-1}} H_s(H_\sigma,G_\sigma,N_\sigma,M_\sigma)\oplus \bigoplus_{\sigma\in\Gamma_{r-1}(Y)}H_s(H_\sigma,N_{\sigma}). \end{equation*} The first direct sum is obtained in the same way as in the bijective case. The second consists of absolute homology, since the cells of $\Gamma_n(Y)$ are not in the orbit of any cell from $X$. \end{proof} We are primarily going to use the absolute case, $X=\emptyset$ (relabeling $(Y,H)$ as $(X,G)$): \begin{cor}\label{c:spectralbad}For a group $G$ acting on a $d$-connected simplicial complex $X$, and a $G$-module $M$, there is a spectral sequence converging to zero for $r+s \le d+1$, with \begin{equation*} E^1_{r,s}= \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s(G_\sigma,M_\sigma), \end{equation*} where $\Bar\Delta_{r-1}$ is a set of representatives of the $G$-orbits of $(r-1)$-cells in $X$. \end{cor} In our applications, we often have a rotation-free group action, in the following sense: \begin{defn}\label{d:rotationfree} A simplicial group action of $G$ on $X$ is rotation-free if for each simplex $\sigma$ of $X$, the elements of $G_\sigma$ fix $\sigma$ pointwise. \end{defn} \begin{cor}\label{c:spectral} For rotation-free actions, the spectral sequence of Thm. \ref{s:ss} takes the form: \begin{equation*} E^1_{r,s}\cong \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s(H_\sigma,G_\sigma,N,M) \end{equation*} in the relative case, and \begin{equation*} E^1_{r,s}\cong \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s(G_\sigma,M) \end{equation*} in the absolute case. \end{cor} \begin{proof} The extra assumption implies that each $g\in G_\sigma$ preserves the orientation of $\sigma$. Thus $g$ acts on $M_\sigma$ in the same way as on $M$, so $M_\sigma$ and $M$ are identical as $G_\sigma$-modules. The same applies to $N$. \end{proof} \begin{rem}\label{b:spectral}In some of our applications of the absolute version of the spectral sequence, $G$ acts both transitively and rotation-freely on the $n$-simplices of $X$. In this case there is only one $G$-orbit, so we get \begin{equation*} E^1_{r,s}\cong H_s(G_\sigma;M), \end{equation*} where $\sigma$ is any $(r-1)$-cell in $X$. \end{rem} \subsection{The first differential}\label{sc:diff} We will need a formula for the first differential $d^1_{r,s}:E^1_{r,s}\longrightarrow E^1_{r-1,s}$. From the construction of the spectral sequences of a double complex, $d^1$ is induced by the vertical differential $d^v$ on homology. 
In the absolute version of the spectral sequence, assuming that $G$ acts rotation-freely on $X$, \begin{equation*} E^1_{r,s}\cong \bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s(G_\sigma,M), \end{equation*} and it is not hard to see that the differential \begin{equation*} d^1_{r,s}:\bigoplus_{\sigma\in\Bar\Delta_{r-1}} H_s(G_\sigma,M) \longrightarrow \bigoplus_{\tau\in\Bar\Delta_{r-2}} H_s(G_\tau,M). \end{equation*} has the following description (see e.g. \cite{Brown}, Chapter VII, Prop. 8.1). Let $\sigma$ be an $(r-1)$-simplex of $X$ and $\tau$ an $(r-2)$-dimensional face of $\sigma$. We have the boundary operator \begin{equation*} \partial: C_{r-1}(X,M)\longrightarrow C_{r-2}(X,M) \end{equation*} and we denote its $(\sigma,\tau)$th component by $\partial_{\sigma\tau}:M\longrightarrow M$. This is a $G_{\sigma}$-map, so together with the inclusion $G_\sigma\longrightarrow G_\tau$ it induces a map \begin{equation*} u_{\sigma\tau}:H_*(G_\sigma,M)\longrightarrow H_*(G_\tau,M). \end{equation*} Up to a sign, $u_{\sigma\tau}$ is the map induced by the inclusion, because $X$ is a simplicial complex. Consequently \begin{equation*} \partial(\sigma)=\sum_{j=0}^{r-1}(-1)^j(j\text{th face of }\sigma). \end{equation*} So if $\tau$ is the $i$th face of $\sigma$, then $u_{\sigma\tau}=(-1)^i$. For $\sigma\in\Bar\Delta_{r-1}$, we cannot be sure that $\tau\in\Bar\Delta_{r-2}$, but there is a $g(\tau)\in G$ such that $g(\tau)\tau=\tau_0\in \Bar\Delta_{r-2}$. The conjugation, $g\mapsto g(\tau)gg(\tau)^{-1}$, induces a map from $G_\tau$ to $G_{\tau_0}$ and hence an isomorphism, \begin{equation*} c_{g(\tau)}:H_*(G_\tau,M)\stackrel{\cong}{\longrightarrow} H_*(G_{\tau_0},M). \end{equation*} Now $d^1$ is given by \begin{equation}\label{e:d1} d^1\mid_{H_*(G_\sigma,M)}=\sum_{\tau \text{ face of }\sigma}u_{\sigma\tau}c_{g(\tau)}. \end{equation} Denoting the $i$th face of $\sigma$ by $\tau_i$, this can be written: \begin{equation}\label{e:d1bedre} d^1|_{H_*(G_\sigma,M)}=\sum_{i=0}^{r-1}(-1)^i c_{g(\tau_i)}. \end{equation} \section{Arc complexes and permutations} We write $F_{g,r}$ for a compact oriented surface of genus $g$ with $r$ boundary components. \begin{defn}Let $F$ be a surface with boundary. The mapping class group \begin{equation*} \Gamma(F)=\pi_0(\textup{Diff}_+(F,\partial F)) \end{equation*} is the group of connected components of the group of orientation-preserving diffeomorphisms which are the identity on a small collar neighborhood of the boundary. We write $\Gamma_{g,r}=\Gamma(F_{g,r})$. \end{defn} To establish stability results about the homology of $\Gamma_{g,r}$, we will make extensive use of cutting along arcs in $F_{g,r}$. These arcs will be the vertices in simplicial complexes, the so-called arc complexes. The mapping class group acts on these arc complexes, and we can use the spectral sequences of section \ref{sc:spectral}. The differentials in the spectral sequences are closely related to the homomorphisms of Theorem 1 and Theorem 2 from the introduction. \subsection{Definitions and basic properties}\label{sc:arc} Let $F$ be a surface with boundary. To define the ordering of the vertices used in the arc complexes, we will need the orientation of $\partial F$. An orientation at a point $p\in\partial F$ is determined by a tangent vector $v_p$ to the boundary circle at $p$. Let $w_p$ be tangent to $F$ at $p$, perpendicular to $v_p$ and pointing into $F$. 
We call the orientation of $\partial F$ at $p$ determined by $v_p$ \emph{incoming} if the pair $(v_p,w_p)$ is positively oriented, and \emph{outgoing} if $(v_p,w_p)$ is negatively oriented, and use the same terminology for the connected component of $\partial F$ that contains $p$. \begin{defn}\label{d:arc}Let $F$ be a surface with non-empty boundary, and fix two points $b_0$ and $b_1$ in $\partial F$. If $b_0$ and $b_1$ are on the same boundary component, the arc complex we define is denoted $C_*(F;1)$. If $b_0$ and $b_1$ are on two different boundary components of $F$, the resulting arc complex is denoted $C_*(F;2)$. \begin{itemize} \item \noindent A \emph{vertex} of $C_*(F;i)$ is the isotopy class rel endpoints of an arc (image of a curve) in $F$ starting in $b_0$ and ending in $b_1$, which has a representative that meets $\partial F$ transversally and only in $b_0$ and $b_1$. \item An \emph{$n$-simplex} $\alpha$ in $C_*(F;i)$ (called an arc simplex) is a set of $n+1$ vertices, such that there are representatives meeting each other transversally in $b_0$ and $b_1$ and not intersecting each other away from these two points. We further require that the complement of the $n+1$ arcs be connected. The set of arcs is ordered by using the incoming orientation of $\partial F$ at the starting point $b_0$, and we write $\alpha=(\alpha_0,\ldots,\alpha_n)$. \item Let $\Delta_n(F;i)$ denote the set of $n$-simplices, and let $C_*(F;i)$ be the chain complex with chain groups $C_n(F;i)=\mathbb Z\Delta_n(F;i)$ and differentials $d: C_n(F;i)\longrightarrow C_{n-1}(F;i)$ given by: \begin{equation*} d(\alpha)=\sum_{j=0}^n(-1)^j\partial_j(\alpha), \text{ where } \partial_j(\alpha)=(\alpha_0,\ldots,\widehat{\alpha}_j,\ldots,\alpha_n). \end{equation*} \end{itemize} \end{defn} The mapping class group $\Gamma(F)$ acts on $\Delta_n(F;i)$ (by acting on the $n+1$ arcs representing an $n$-simplex), and thus on $C_n(F;i)$. This action is obviously compatible with the differentials $d: C_n(F;i)\longrightarrow C_{n-1}(F;i)$, so we can consider the quotient complex with chain groups $C_n(F;i)/\Gamma(F)$. To apply the spectral sequence of the action of $\Gamma_{g,r}$ on $C_*(F_{g,r};i)$, we need to know that the complex is highly connected: \begin{thm}[\cite{Harer1}]\label{s:connected}The chain complex $C_*(F_{g,r};i)$ is $(2g-3+i)$-connected. \end{thm} \begin{defn}\label{d:N}Given an arc simplex $\alpha$ in $C_*(F;i)$, we denote by $N(\alpha)$ the union of a small, open normal neighborhood of $\alpha$ with an open collar neighborhood of the boundary component(s) of $F$ containing $b_0$ and $b_1$. Then the cut surface $F_\alpha$ is given by \begin{equation*} F_\alpha = F\setminus N(\alpha). \end{equation*} \end{defn} For a surface $S$, let $\sharp \partial S$ denote the number of boundary components of $S$. Then we have the following relation: \begin{equation}\label{e:boundary} \sharp \partial(F_\alpha)=\sharp\partial N(\alpha)+r-2i. \end{equation} \begin{lem}\label{l:cut} Given an $n$-simplex $\alpha$ in $C_*(F;i)$, the Euler characteristic of the cut surface $F_ \alpha$ is \begin{equation*} \chi(F_\alpha)=\chi(F)+n+1. \end{equation*} \end{lem} \begin{proof} We prove the formula inductively by cutting along one arc $\alpha_0$ at a time, so it suffices to show that $\chi(F_{\alpha_0})=\chi(F)+1$. Give $F$ the structure of a CW complex with $\alpha_0$ as a $1$-cell (glued onto the $0$-cells $b_0$ and $b_1$). When we cut along $\alpha_0$, we get two copies of $\alpha_0$; that is, an additional $1$-cell and two additional $0$-cells. 
Using the standard formula for the Euler characteristic of a CW complex (the alternating sum of the numbers of cells), we see that it increases by $2-1=1$. \end{proof} \subsection{Permutations}\label{s:permu} Let $\Sigma_{n+1}$ denote the group of permutations of the set $\set{0,1,\ldots,n}$. I will write a permutation $\sigma\in\Sigma_{n+1}$ as $\sigma=\left[\sigma(0)\,\sigma(1)\,\ldots\, \sigma(n)\right]$; e.g. $[0\,2\,1]$ in $\Sigma_3$ is the permutation fixing $0$ and interchanging $1$ and $2$. To each $n$-arc simplex $\alpha$ in one of the arc complexes $C_*(F;i)$ we assign a permutation $P(\alpha)$ in $\Sigma_{n+1}$ as follows: Recall that the arcs in $\alpha=(\alpha_0,\alpha_1,\ldots, \alpha_n)$ are ordered using the incoming orientation of $\partial F$ at the starting point $b_0$. We use the \emph{outgoing} orientation at the end point $b_1$ to read off the positions of the $n+1$ arcs at $b_1$: $\alpha_{j}$ is the $\sigma(j)$'th arc at $b_1$, for $j=0,\ldots, n$. In other words, the arcs at $b_1$ will be ordered $(\alpha_{\sigma^{-1}(0)},\alpha_{\sigma^{-1}(1)},\ldots, \alpha_{\sigma^{-1}(n)})$. This gives the permutation $\sigma=P(\alpha)$. See Example \ref{ex:perm} below. So we have a map $P:\Delta_n(F;i)\longrightarrow \Sigma_{n+1}$. Since $\gamma\in\Gamma(F)$ keeps a small neighborhood of $\partial F$ fixed, this induces a well-defined map \begin{equation*} P:\Delta_n(F;i)/\Gamma(F) \longrightarrow \Sigma_{n+1}. \end{equation*} There are several reasons why it is useful to look at the permutation $P(\alpha)$ of an arc simplex $\alpha$. One is that $P(\alpha)$ determines the number of boundary components of the cut surface $F_ \alpha$, as we shall see below. Before explaining this, we will need a few preliminary remarks. Let $\alpha$ be an arc in $C_*(F;i)$. We orient it from $b_0$ to $b_1$, and let $t_p(\alpha)$ be the (positive) tangent vector at $p\in \alpha$. A normal vector $v_p$ to $\alpha$ at $p$ is called \emph{positive} if $(v_p,t_p(\alpha))$ is a positive basis of $T_pF$. We say that the right-hand side of $\alpha$ is the part of the normal tube given by the positive normal vectors. When drawing pictures to aid the geometric intuition, we always indicate the orientation of $F$ and $\partial F$ (with arrows). Also, the orientation of $F$ will always be the same, namely the orientation induced by the standard orientation of this paper. This has the advantage that orientation-dependent properties like the right-hand side will be consistent throughout the picture, even if we draw two different areas of one surface. \begin{ex}\label{ex:perm} Let $\alpha=(\alpha_0,\alpha_1,\alpha_2)$ be a $2$-simplex in $C_*(F_{g,r};1)$, with permutation $P(\alpha)=[1\,2\,0]$. Close to $b_0$ and $b_1$ we see the situation depicted in Figure \ref{b:alfa2}, with the orientations of $\partial F$ at $b_0$ and $b_1$ used for determining the permutation as indicated. \begin{figure} \caption{An arc with permutation $[1\,2\,0]$ in $C_*(F;1)$.} \label{b:alfa2} \end{figure} We want to find the number of boundary components of $F_\alpha$. This goes as follows. Pick an arc, say $\alpha_0$, at $b_0$ and start coloring the right-hand side of it (here, we color it dark grey), following the arc all the way to $b_1$. See Figure \ref{b:cut1}. Here, continue to the left-hand side of the next arc; in our case it is $\alpha_2$. Note that in general this means going from $\alpha_{\sigma^{-1}(j)}$ to $\alpha_{\sigma^{-1}(j-1)}$ (see the definition); in this example $j=1$. 
Color the left-hand side of $\alpha_2$, reaching $b_0$ again and continuing to the right-hand side of the arc next to $\alpha_2$. In this algorithm the boundary component(s) containing $b_0$ and $b_1$ also count as arcs, as shown in the figure. Continue in this fashion until you get back to where you started (i.e. the right-hand side of $\alpha_0$). This closed, dark grey loop constitutes one boundary component of $F_\alpha$. Start over again with a different color (here light grey) at another arc, and you get a picture as in Figure \ref{b:cut1}. So there are $2+(r-1)=r+1$ boundary components of $(F_{g,r})_\alpha$ for $\alpha \in C_*(F;1)$ with $P(\alpha)=[1\,2\,0]$. \begin{figure} \caption{Boundary components of $F_\alpha$ for $\alpha$ in $C_*(F;1)$.} \label{b:cut1} \end{figure} We could consider the same permutation in $C_*(F_{g,r};2)$, and we would get a different picture (Figure \ref{b:cut2}). So there are $3+(r-2)=r+1$ boundary components of $(F_{g,r})_\alpha$ for $\alpha\in C_*(F;2)$ with $P(\alpha)=[1\,2\,0]$. \begin{figure} \caption{Boundary components of $F_\alpha$ for $\alpha$ in $C_*(F;2)$.} \label{b:cut2} \end{figure} \end{ex} The method of the above example gives a formula -- albeit a rather cumbersome one -- for $\sharp\partial N(\alpha)$, and thus by \eqref{e:boundary} for the number of boundary components of $F_ \alpha$ in terms of $P(\alpha)$: \begin{prop}\label{p:grim}Let $\sharp\partial S$ denote the number of boundary components in $S$, and let $\sigma_{k}\in\Sigma_{k}$ be given by $\sigma_k=[1\,2\,\cdots\, k\!-\!1\,0]$. Then \begin{enumerate} \item[$(i)$]If $\alpha\in C_{n-1}(F;1)$ then $\sharp\partial N(\alpha) = \textup{Cyc} \Big(\sigma_{n+1} \widehat{P(\alpha)}^{-1}\sigma_{n+1}^{-1}\widehat{P(\alpha)}\Big)+1$. \item[$(ii)$]If $\alpha\in C_{n-1}(F;2)$ then $\sharp\partial N(\alpha) = \textup{Cyc}\Big( \sigma_{n}P(\alpha)^{-1}\sigma_{n}^{-1}P(\alpha)\Big)+2$, \end{enumerate} Here $\textup{Cyc}: \Sigma_{k} \to \mathbb N$ denotes the number of disjoint cycles in the given permutation, and for $\tau\in\Sigma_{k}$, $\widehat{\tau}\in \Sigma_{k+1}$ is given by $\widehat{\tau}=[0, \tau+1]$, that is \begin{equation*} \widehat{\tau}(j)=\left\{ \begin{array}{ll} 0, & \hbox{$j=0$,} \\ \tau(j-1)+1, & \hbox{$j=1,\ldots,k$.} \end{array} \right. \end{equation*} In particular, $\sharp\partial N(\alpha)$ depends only on $P(\alpha)$. \end{prop} \begin{proof}This is simply a way to formulate the method described in Example \ref{ex:perm}. Let us look at $C_*(F;2)$ first, so $b_0$ and $b_1$ are in different boundary components. As in the example, we start on the right-hand side of one of the arcs at $b_0$, follow it (using $P(\alpha)$), then at $b_1$ we go left to the next arc (using $\sigma^{-1}$). Now we follow the right side of that arc (using $P(\alpha)^{-1}$) ending at $b_0$, and we must now go left to the next arc (using $\sigma$). Thus the permutation $P(\alpha)\sigma^{-1} P(\alpha)^{-1}\sigma$ captures how the boundary of $N(\alpha)$ behaves, and a boundary component in $\partial N(\alpha)$ clearly corresponds to a cycle in the permutation. Remembering the two extra components corresponding to the components of $\partial N(\alpha)$ containing $b_0$ and $b_1$, this proves $(ii)$. For $C_*(F;1)$, $b_0$ and $b_1$ lie on the same boundary component. We wish to use $(ii)$, so we consider a new surface $\hat F$ and a new arc simplex, $\hat\alpha= (\hat\alpha_0, \hat\alpha_1, \ldots, \hat\alpha_n)$ in $C_*(\hat F;2)$, which are constructed from $F$ and $\alpha$ as follows. 
\begin{figure} \caption{Constructing $\hat F$ and $\hat \alpha$ from $F$ and $\alpha$.} \label{b:ref} \end{figure} We take the boundary component of $F$ containing $b_0$ and $b_1$, and close up part of it between $b_0$ and $b_1$ so we get two boundary components, cf. Figure \ref{b:ref}. Then $\hat\alpha_0$ will be the arc from $b_0$ to $b_1$ consisting of the part of the old boundary component which was first (i.e. right-most) in the incoming ordering at $b_0$ (cf. Figure \ref{b:ref}), and $\hat\alpha_j = \alpha_{j-1}$ for $1\le j \le n$. By this construction, $\sharp\partial N(\alpha) = \sharp\partial N(\hat\alpha)-1$, since we count two boundary components for $\hat\alpha\in C_*(\hat F;2)$, and we should count only one. Clearly $\textstyle P(\hat\alpha)=\widehat{P(\alpha)}$, and the result now follows from $(ii)$. \end{proof} I would like to thank my brother, Jens Boldsen, for help with the above proposition. \begin{prop}\label{p:inj}The permutation map \begin{equation*} P:\Delta_n(F;i)/\Gamma(F) \longrightarrow \Sigma_{n+1} \end{equation*} is injective. \end{prop} \begin{proof} We have to show that given two $n$-arc simplices $\alpha$ and $\beta$ with $P(\alpha)=P(\beta)$, there exists $\gamma\in\Gamma$ such that $\gamma\alpha=\beta$. Consider the cut surfaces $F_\alpha$ and $F_\beta$. Since the permutations are the same, $F_\alpha$ and $F_\beta$ have the same number of boundary components, by Prop. \ref{p:grim} above. Now since we have parameterizations of the boundary components and the curves $\alpha_0,\ldots, \alpha_n$ this gives a diffeomorphism $\varphi:\partial(F_\alpha) \longrightarrow \partial(F_\beta)$. The Euler characteristics of $F_\alpha$ and $F_\beta$ are also equal, according to Lemma \ref{l:cut}. This implies that $F_\alpha$ and $F_\beta$ have the same genus. By the classification of surfaces with boundary, $F_\alpha\cong F_\beta$ via an orientation preserving diffeomorphism $\Phi$ extending $\varphi$. Gluing both $F_\alpha$ and $F_\beta$ up again gives a diffeomorphism $\bar{\Phi}:F\longrightarrow F$ taking $\alpha$ to $\beta$. Thus $\alpha$ and $\beta$ are conjugate under $\gamma=\left[\bar{\Phi}\right]$ in the mapping class group $\Gamma(F)$. \end{proof} Whether $P$ is surjective depends on the genus $g$, cf. Corollary \ref{c:bij} below. \begin{rem}The proof of this proposition also shows that the action of $\Gamma(F)$ on $C_*(F;i)$ is rotation-free, cf. Def. \ref{d:rotationfree}: for given $\alpha\in\Delta_n(F;i)$ and $\gamma=[\varphi]\in \Gamma_\alpha$, the diffeomorphism $\varphi$ fixes a neighborhood of $\partial F$ and hence preserves the ordering of the arcs at $b_0$, so $\gamma$ fixes each vertex $\alpha_j$ of $\alpha$. \end{rem} \subsection{Genus} \begin{defn}[Genus]\label{d:genus} To an arc simplex $\alpha$ we associate the number $S(\alpha) = $ genus$(N(\alpha))$, cf. Def. \ref{d:N}. We call $S(\alpha)$ the genus of $\alpha$. \end{defn} Note that Harer calls this quantity the \emph{species} of $\alpha$. \begin{lem}\label{l:euler} For $\alpha\in \Delta_n(F;i)$, we have \begin{equation*} \chi(N(\alpha))=-(n+1) \end{equation*} \end{lem} \begin{proof} In $C_*(F;1)$, $N(\alpha)$ has $\alpha\cup_{b_0,b_1} S^1$ as a deformation retract. Now there is a homotopy taking $b_1$ to $b_0$ along $S^1$, so up to homotopy, this is a wedge of $n+2$ copies of $S^1$ coming from $\alpha_0,\ldots,\alpha_n$ and from the boundary component. This gives the result. For $C_*(F;2)$ the argument is similar. \end{proof} \begin{prop}\label{p:genus}Let $\sharp\partial S$ denote the number of boundary components in a surface $S$. Let $i=1,2$.
Then for any $\alpha\in\Delta_n(F_{g,r};i)$, the following relations hold: \begin{enumerate} \item[$(i)$] $S(\alpha)=\tfrac{1}{2}\big(n+3-\sharp\partial N(\alpha)\big)$, \item[$(ii)$] $\sharp\partial (F_\alpha) = r+n-2S(\alpha)+3-2i$, \item[$(iii)$] $\textup{genus}(F_\alpha)= g+S(\alpha)-(n+2-i)$, \end{enumerate} \end{prop} \begin{proof} \noindent$(i)$ As $S(\alpha)$ is the genus of $N(\alpha)$, we can derive this from the Euler characteristic of $N(\alpha)$, which by Lemma \ref{l:euler} is $-(n+1)$. Using the formula $\chi(N(\alpha)) = 2-2S(\alpha)-\sharp\partial N(\alpha)$ gives the result. \newline\newline \noindent$(ii)$ This follows from $(i)$ and \eqref{e:boundary}. \newline\newline \noindent$(iii)$ As in $(i)$ we use the connection between Euler characteristic, genus and number of boundary components, together with $(i)$ and $(ii)$: \begin{eqnarray*} \textup{genus}(F_\alpha) &=& \textstyle \tfrac{1}{2}\big(-\chi(F_\alpha)- \sharp\partial (F_\alpha) +2\big)\\ &=& \textstyle \tfrac{1}{2}\big(-(2-2g-r)-(n+1)-(\sharp\partial N(\alpha)+r-2i)+2\big)\\ &=& \textstyle \tfrac{1}{2}\big(2g+(n+1-\sharp\partial N(\alpha)+2)+2i-2-2(n+1)\big)\\ &=& g+S(\alpha)-(n+2-i) \end{eqnarray*} \end{proof} Consequently all information about $F_\alpha$ can be extracted from $\sharp \partial(F_\alpha)$, so it is important that we can compute this quantity: \begin{lem}\label{l:nu}Let $\alpha\in\Delta_n(F;i)$ be given, and let $\nu\in\Delta_0(F;i)$ be an arc such that $\alpha'=\alpha\cup \nu$ is an $(n+1)$-simplex. Regard $\nu$ as an arc in the cut surface $F_\alpha$. Then: \begin{equation*} \sharp \partial(F_{\alpha'})=\left\{ \begin{array}{ll} \sharp \partial(F_\alpha)+ 1, & \hbox{if $\nu\in \Delta_0(F_\alpha;1)$;} \\ \sharp \partial(F_\alpha)- 1, & \hbox{if $\nu\in \Delta_0(F_\alpha;2)$.} \end{array} \right. \end{equation*} \end{lem} \begin{proof} Let $k=\sharp \partial(F_\alpha)$. Since all boundary components in $F_{\alpha'}$ not intersecting $\nu$ correspond to boundary components in $F_\alpha$, it is enough to consider the situation close to $\nu$. There are two possibilities: Either $\nu$ will start and end on two different boundary components of $F_\alpha$, so $\nu\in \Delta_0(F_\alpha;2)$, or $\nu$ will start and end on the same boundary component of $F_\alpha$, so $\nu\in \Delta_0(F_\alpha;1)$. Cf. Figure \ref{b:rand}, where the boundary components of $F_\alpha$ are indicated as in Example \ref{ex:perm}. \begin{figure} \caption{Before and after cutting along the arc $\nu$ -- the two cases.} \label{b:rand} \end{figure} Taking the case $\nu\in \Delta_0(F_\alpha;2)$ (left-hand side of Figure \ref{b:rand}), when we cut along $\nu$ we get one boundary component instead of two. So we get $k-1$ boundary components in this case. In the case $\nu\in \Delta_0(F_\alpha;1)$ (right-hand side of Figure \ref{b:rand}) cutting along $\nu$ splits the boundary component into two, so we get $k+1$ boundary components. \end{proof} Combining Lemma \ref{l:nu} and Prop. \ref{p:genus}, we have proved, \begin{cor}\label{c:Sgenus}For $\alpha\in\Delta_n(F;i)$, let $\alpha'=\alpha\cup \nu$ as in Lemma \ref{l:nu}. Then: \begin{equation*} S(\alpha')=\left\{ \begin{array}{ll} S(\alpha), & \hbox{if $\nu\in \Delta_0(F_\alpha;1)$;} \\ S(\alpha)+1, & \hbox{if $\nu\in \Delta_0(F_\alpha;2)$.} \end{array} \right. \end{equation*} and \begin{equation*} \textup{genus}(F_{\alpha'})=\left\{ \begin{array}{ll} \textup{genus}(F_\alpha)-1, & \hbox{if $\nu\in \Delta_0(F_\alpha;1)$;} \\ \textup{genus}(F_\alpha), & \hbox{if $\nu\in \Delta_0(F_\alpha;2)$.} \end{array} \right.
\end{equation*}\qed \end{cor} \begin{lem}\label{l:Slignul}Let $\alpha\in\Delta_n(F;i)$. Then $S(\alpha)=0$ if and only if \begin{itemize} \item[$(i)$]for $i=1$, $P(\alpha)=\textup{id}$. \item[$(ii)$]for $i=2$, $P(\alpha)$ is a cyclic permutation, i.e. one of the following: \begin{equation*} \textup{id},[1\, 2\cdots n \,0], [2 \,3 \cdots n \,0 \,1], \cdots, [n\, 0\, 1\cdots n\!\!-\!\!1]. \end{equation*} \end{itemize} \end{lem} \begin{proof} We prove ``only if''. The converse is clear, e.g. by Prop. \ref{p:grim} and Prop. \ref{p:genus} $(i)$. By Cor. \ref{c:Sgenus}, any subsimplex of $\alpha$ has genus equal to or lower than $S(\alpha)=0$, so any subsimplex of $\alpha$ must have genus 0. If $\alpha\in \Delta_n(F;1)$, this means all 1-subsimplices must have permutation equal to the identity, and this forces $P(\alpha)=\textup{id}$. If $\alpha\in \Delta_n(F;2)$ the condition on 1-subsimplices is vacuous, but for a $2$-subsimplex $\beta$ of $\alpha$, we see by Cor. \ref{c:Sgenus} that $S(\beta)=0$ implies that $P(\beta)$ is either $\textup{id}$, $[1\, 2\, 0]$, or $[2\, 0\, 1]$. For this to hold for any 2-subsimplex of $\alpha$, $P(\alpha)$ must be as stated in $(ii)$. \end{proof} \subsection{More about permutations} By Prop. \ref{p:grim}, given $\alpha\in\Delta_n(F;i)$, the number $\sharp \partial N(\alpha)$ is a function only of $P(\alpha)$ and $i$. By Prop. \ref{p:genus}$(i)$, the same is true for $S(\alpha)$. Thus, given a permutation $\sigma\in\Sigma_{n+1}$, we can calculate these quantities and simply define the numbers $\sharp\partial N(\sigma)$ and $S(\sigma)$ by the formulas of Prop. \ref{p:grim} and \ref{p:genus}$(i)$. Now we are going to see that given a permutation $\sigma\in \Sigma_{n+1}$, there exists $\alpha\in\Delta_n(F_{g,r};i)$ with $P(\alpha)=\sigma$ if at all possible, that is, provided the formula $(iii)$ of Prop. \ref{p:genus} for the genus of $F_\alpha$ gives a non-negative result. Rearranging this condition we have the following lemma, also stated in \cite{Harer2}: \begin{lem}\label{l:perm}Given a permutation $\sigma\in \Sigma_{n+1}$, let $s=S(\sigma)$ as above. There exists $\alpha\in\Delta_n(F;i)$ with $P(\alpha)=\sigma$ if and only if \begin{equation}\label{e:spicies} s \ge n-g+2-i. \end{equation} \end{lem} \begin{proof} Given a permutation $\sigma$, one can try to construct an arc simplex $\alpha$ inductively with $P(\alpha)=\sigma$ by first choosing an arc $\alpha_0\in\Delta_0(F;i)$ from $b_0$ to $b_1$, and cutting $F$ up along it. This will give us two copies of $b_0$ and $b_1$, respectively, one to the left of our arc and one to the right. The permutation determines which copies of $b_0$ and $b_1$ a new arc must join. Suppose we have constructed $k+1\le n$ arcs as above, i.e. a $k$-simplex $\beta=(\alpha_0,\ldots,\alpha_{k})$, and consider the cut surface $F_\beta$. Inductively we assume that $F_\beta$ is connected. Now we must verify that when adding a new arc, $\nu$, as in Lemma \ref{l:nu}, the cut surface $(F_\beta)_\nu$ is connected. If this holds, $\beta\cup\nu$ is a $(k+1)$-simplex, and we have completed the induction step. There are two cases. First assume that $\nu$ must join two different boundary components of $F_\beta$. Then $(F_\beta)_\nu$ is connected, no matter how we choose $\nu$, since $F_\beta$ is connected. Secondly, if $\nu$ connects two points on the same boundary component of $F_\beta$, we choose $\nu$ so that it winds around a genus-hole in $F_\beta$.
This ensures that $(F_\beta)_\nu$ is connected, so we must prove that genus$(F_\beta)\ge 1$. From Prop. \ref{p:genus}, we know that $\textup{genus}(F_\beta)= g+S(\beta)-(k+2-i)$, and we want to prove \begin{equation}\label{e:inequality} S(\beta) - k \ge s - n + 1. \end{equation} Using this, we can complete the induction step: \begin{equation*} \textup{genus}(F_\beta)= g+S(\beta)-k-2+i \ge g + s -n -1+i \ge 1 \end{equation*} by assumption \eqref{e:spicies}. To prove \eqref{e:inequality}, recall that $S(\beta)$ only depends on $P(\beta)$, not on the surface $F$. So consider another surface $F'$ with genus $g'>n$. We can construct $\beta'\in\Delta_{k}(F';i)$ with $P(\beta')=P(\beta)$, as above. We can further construct $\alpha'\in\Delta_n(F';i)$ with $\beta'$ as a subsimplex and $P(\alpha')=\sigma$, simply by adding $n-k$ new arcs to $\beta'$ which each wind around a genus-hole in $F'$. This is possible because $g'>n$. We claim \begin{equation}\label{e:help} S(\alpha')\le S(\beta')+n-k-1. \end{equation} Applying Cor. \ref{c:Sgenus} $n-k$ times to $\beta'$, we obviously get $S(\alpha')\le S(\beta')+n-k$. We get the extra $-1$, because the first time we add an arc $\nu'$ to $\beta'$ we have $\nu'\in \Delta_0(F'_{\beta'};1)$, since $\nu\in\Delta_0(F_\beta;1)$ by assumption. This proves \eqref{e:help}. Since $P(\beta')=P(\beta)$ and $P(\alpha')=\sigma$, \eqref{e:help} implies $s=S(\sigma)\le S(\beta)+n-k-1$. This proves \eqref{e:inequality}. \end{proof} Combining Prop. \ref{p:inj} and Lemma \ref{l:perm} we have proved, \begin{cor} \label{c:bij}The permutation map \begin{equation*} P:\Delta_n(F;i)/\Gamma(F) \longrightarrow \Sigma_{n+1} \end{equation*} is bijective if $n\le g-2+i$.\qed \end{cor} \begin{lem}[\cite{Harer4}]\label{l:Hlignul}For $F=F_{g,r}$ with $g\ge 2$, the sequence \begin{equation*} C_{p+1}(F;i)/\Gamma(F) \stackrel{d^1}{\longrightarrow} C_p(F;i)/\Gamma(F) \stackrel{d^1}{\longrightarrow} C_{p-1}(F;i)/\Gamma(F) \end{equation*} is split exact for $1\le p\le g-2+i$. \end{lem} \begin{proof}Let $\mathbb Z\Sigma_*$ denote the chain complex with chain groups $\mathbb Z\Sigma_n$, $n\ge 1$, and differentials \begin{equation*} \partial:\mathbb Z\Sigma_{n+1}\longrightarrow \mathbb Z\Sigma_{n} \end{equation*} given as follows: For $\sigma=[\sigma(0) \cdots \sigma(n)]\in\Sigma_{n+1}$, let \begin{equation*} \partial_j(\sigma)=[\sigma(0)\cdots\sigma(j-1)\, \sigma(j+1)\ldots\sigma(n)], \end{equation*} where the set $\set{0,1,\ldots,n}\setminus\set{\sigma(j)}$ is identified with $\set{0,1,\ldots,n-1}$ by subtracting $1$ from all numbers exceeding $\sigma(j)$. Then we define $\partial(\sigma)=\sum_{j=0}^n(-1)^j\partial_j(\sigma)$ and extend linearly. Extending the permutation map $P$ linearly leads to the commutative diagram \begin{equation}\label{e:comm} \xymatrix{C_n(F;i)/\Gamma(F)\ar[r]^d\ar[d]^P& C_{n-1}(F;i)/\Gamma(F)\ar[d]^P\\ \mathbb Z\Sigma_{n+1}\ar[r]^{\partial}&\mathbb Z\Sigma_{n} } \end{equation} i.e. a chain map $C_*(F;i)/\Gamma(F) \longrightarrow \mathbb Z\Sigma_*$. By Prop. \ref{p:inj}, $P$ is injective, so $C_*(F;i)/\Gamma(F)$ is isomorphic to a subcomplex of $\mathbb Z\Sigma_*$, namely the subcomplex generated by permutations $\sigma\in\Sigma_{n+1}$ with $S(\sigma)$ satisfying the requirements of Lemma \ref{l:perm}. In particular, for $n\le g-2+i$, the chain groups of $\mathbb Z\Sigma_*$ and of $C_*(F;i)/\Gamma(F)$ are identified.
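To illustrate the face maps and the relabelling convention, take $\sigma=[1\,2\,0]\in\Sigma_3$. Removing $\sigma(0)=1$ leaves $[2\,0]$, which relabels to $[1\,0]$; removing $\sigma(1)=2$ leaves $[1\,0]$; and removing $\sigma(2)=0$ leaves $[1\,2]$, which relabels to $[0\,1]$. Hence \begin{equation*} \partial([1\,2\,0])=[1\,0]-[1\,0]+[0\,1]=[0\,1]. \end{equation*}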
Define $D:\mathbb Z\Sigma_n\longrightarrow \mathbb Z\Sigma_{n+1}$ by \begin{equation}\label{e:D} D(\sigma)=\hat\sigma= [0\quad\sigma(0)\!+\!1\quad\sigma(1)\!+\!1\quad\cdots\quad\sigma(n)\!+\!1]. \end{equation} It is an easy consequence of the definitions that $D\partial+ \partial D=1$, so $D$ is a contracting homotopy and $\mathbb Z\Sigma_*$ is split exact. By the diagram \eqref{e:comm}, $C_*(F;i)/\Gamma(F)$ is also split exact in the range where \begin{equation}\label{e:billede} D\circ P\Big(C_n(F;i)/\Gamma(F)\Big)\subseteq P\Big(C_{n+1}(F;i)/\Gamma(F)\Big), \end{equation} since $D$ lifts to a contracting homotopy $\bar{D}$ of $C_{*}(F;i)/\Gamma(F)$. We will first consider $C_*(F;1)/\Gamma(F)$. By Cor. \ref{c:bij}, $P$ is bijective for $n\le g-1$, so \eqref{e:billede} is satisfied for $n\le g-2$. It remains to consider the degree $n=g-1$. We have the commutative diagram, \begin{equation*} \xymatrix{C_{g}(F;i)/\Gamma(F)\ar[r]^d\ar@{^(->}[d]^P& C_{g-1}(F;i)/\Gamma(F)\ar[r]^d\ar[d]^P_{\cong}& C_{g-2}(F;i)/\Gamma(F)\ar[d]^P_{\cong}\\ \mathbb Z\Sigma_{g+1}\ar[r]^{\partial}&\mathbb Z\Sigma_{g}\ar[r]^{\partial}&\mathbb Z\Sigma_{g-1} } \end{equation*} with the bottom sequence exact. We must show that \begin{equation*} P\circ d(C_{g}(F;i)/\Gamma(F))=\partial(\mathbb Z\Sigma_{g+1}). \end{equation*} According to Cor. \ref{c:bij}, $P:C_{g}(F;1)/\Gamma(F)\longrightarrow\mathbb Z\Sigma_{g+1}$ hits everything except what is generated by permutations $\sigma$ with $S(\sigma)=0$. Thus we must show $\partial(\sigma)\in \textup{Im}(P\circ d)=\textup{Im}(\partial\circ P)$ for all $\sigma\in\Sigma_{g+1}$ with $S(\sigma)=0$. From Lemma \ref{l:Slignul} we know that the only such permutation is the identity. As \begin{equation*} \partial([0\,1\,\cdots\, g])=\sum_{j=0}^{g}(-1)^{j}[0\,1\,\cdots\,g\!-\!1]=\left\{ \begin{array}{ll} 0, & \hbox{if $g$ is odd,} \\ \textup{id}, & \hbox{if $g$ is even,} \end{array} \right. \end{equation*} we are done if $g$ is odd, and the desired contracting homotopy $\bar{D}$ is obtained by lifting $D$ when $S(\alpha)>0$ and setting $\bar{D}(\alpha)=0$ when $S(\alpha)=0$. If $g$ is even, consider $\tau=[2\,0\,1\,3\,4\,\cdots\,g]\in \Sigma_{g+1}$. Then by Lemma \ref{l:Slignul} $S(\tau)>0$, and \begin{eqnarray*} \partial(\tau)&=&[0\,1\,2\,\cdots \,g\!-\!1]-[1\,0\,2\,3\,\cdots\,g\!-\!1]+[1\,0 \,2\,3\,\cdots\,g\!-\!1]\\ &&+ \sum_{j=3}^{g}(-1)^{j}[2\,0\,1\,3\,4\,\cdots\,g\!-\!1]=[0\,1\,2\,\cdots\,g\!-\!1] = \partial[0\,1\,2\,\cdots\,g]. \end{eqnarray*} Thus we can obtain a contracting homotopy $\bar{D}$ by taking $\bar D(\alpha)=P^{-1}(\tau)$ when $S(\alpha)=0$. For $C_*(F;2)/\Gamma(F)$, Cor. \ref{c:bij} gives that $P$ is bijective for $n\le g$, so we are left with the degree $n=g$, where we use exactly the same method as above. We must show that $\partial(\sigma)\in\textup{Im}(\partial\circ P)$ for all $\sigma\in\Sigma_{g+2}$ with $S(\sigma)=0$. We only need to consider $\sigma\in\textup{Im}(D)$, because $\textup{Im}\partial=\textup{Im}(\partial\circ D)$ by the equation $\partial D+D\partial=1$. The only $\sigma\in\Sigma_{g+2}$ with $S(\sigma)=0$ and $\sigma\in\textup{Im}\, D$ is the identity, according to Lemma \ref{l:Slignul}. Now we are in the same situation as above, so we can use $\tau=[2\,0\,1\,3\,4\,\cdots\,g\,g\!+\!1]\in \Sigma_{g+2}$ which has genus $S(\tau)>0$ in $C_*(F;2)$, since $g\ge 2$. \end{proof} \section{Homology stability of the mapping class group} Let $F$ be a surface with boundary. Given $F$ we can glue on a ``pair of pants'', $F_{0,3}$, to one or two boundary components.
We denote the resulting surface by $\Sigma_{i,j}F$, the subscripts indicating the change in genus and number of boundary components, respectively. \begin{figure} \caption{\quad $\Sigma_{0,1}F$ \qquad and \qquad $\Sigma_{1,-1}F$.\quad\quad} \label{b:sigma} \end{figure}These two operations induce homomorphisms between the mapping class groups after extending a mapping class by the identity on the pair of pants: \begin{equation*} \Sigma_{i,j}:\Gamma(F)\longrightarrow \Gamma( \Sigma_{i,j}F). \end{equation*} Given a surface $F$, applying $\Sigma_{0,1}$ and then adding a disk at one of the pant legs gives a surface diffeomorphic to $F$ (with a cylinder glued onto a boundary component). It is easily seen that the induced composition \begin{equation*} \Gamma(F)\longrightarrow \Gamma( \Sigma_{0,1}F)\longrightarrow \Gamma(F) \end{equation*} is the identity, so $\Sigma_{0,1}$ induces an injection on homology \begin{equation}\label{e:injektiv} H_n(\Gamma(F))\hookrightarrow H_n(\Gamma(\Sigma_{0,1}F)). \end{equation} For the proof of the stability theorems, the opposite operation is essential: One expresses the surface $F$ as the result of cutting $\Sigma_{0,1}F$ or $\Sigma_{1,-1}F$ along an arc representing a $0$-simplex in one of the arc complexes of definition \ref{d:arc}: \begin{equation*} F \cong (\Sigma_{0,1}F)_\alpha,\quad \textup{and} \quad F \cong (\Sigma_{1,-1}F)_\beta, \end{equation*} for $\alpha\in \Delta_0(\Sigma_{0,1}F;2)$ and $\beta\in \Delta_0(\Sigma_{1,-1}F;1)$ as indicated below \begin{figure} \caption{$\alpha$ and $\beta$.} \label{f:skaer} \label{b:arc} \end{figure} A diffeomorphism of $F_\alpha$ that fixes the points on the boundary pointwise extends to a diffeomorphism of $F$ by adding the identity on $N(\alpha)$, and this defines an inclusion $\Gamma(F_\alpha)\longrightarrow \Gamma$ whose image is the stabilizer $\Gamma_\alpha$. \subsection{The spectral sequence for the action of the mapping class group}\label{sc:lemmas} In this section, $F=F_{g,r}$ with $g\ge 2$ and $\Gamma=\Gamma(F)$. We shall consider the spectral sequences $E^{n}_{p,q}=E^{n}_{p,q}(F;i)$ from section \ref{sc:spectral} associated to the action of $\Gamma$ on the arc complexes $C_*(F;i)$ for $i=1,2$. By Cor. \ref{c:spectral} and Thm. \ref{s:connected}, we have $E^1_{0,q}=H_q(\Gamma)$ and \begin{equation}\label{e:E1F} E^1_{p,q}= \bigoplus_{\alpha\in\overline{\Delta}_{p-1}} H_q(\Gamma_\alpha) \Rightarrow 0, \quad \textup{for }p+q \le 2g-2+i, \end{equation} where $\overline{\Delta}_{p-1}\subseteq \Delta_{p-1}(F;i)$ is a set of representatives of the $\Gamma$-orbits of $\Delta_{p-1}(F;i)$ in $C_*(F;i)$. The permutation map \begin{equation*} P:\Delta_{p-1}(F;i)/\Gamma \longrightarrow \Sigma_{p} \end{equation*} is injective by Prop. \ref{p:inj}. Let $\overline{\Si}_p$ be the image, and $T:\overline{\Si}_p\stackrel{\sim}{\longrightarrow}\overline{\Delta}_{p-1}\hookrightarrow\Delta_{p-1}(F;i)$ a section, $P\circ T=\textup{id}$. Then \begin{equation}\label{e:E1Fny} E^1_{p,q}= \bigoplus_{\sigma\in\overline{\Si}_p} E^1_{p,q}(\sigma),\quad E^1_{p,q}(\sigma)=H_q(\Gamma_{T(\sigma)}). \end{equation} The first differential, $d^1_{p,q}:E^1_{p,q}\longrightarrow E^1_{p-1,q}$, is described in section \ref{sc:diff}. The diagrams \begin{equation*} \xymatrix{ \Delta_{p}(F;i)\ar[r]^{\partial_j}\ar[d] & \Delta_{p-1}(F;i)\ar[d] &\\ \overline{\Si}_{p+1}\ar[r]^{\partial_j} & \overline{\Si}_p & j=0,\ldots,p } \end{equation*} commute, where $\partial_j$ omits entry $j$ as in Def.
\ref{d:arc} and the vertical arrows divide out the $\Gamma$ action and compose with $P$. Thus for each $\sigma\in\overline{\Si}_{p+1}$, there is $g_j\in\Gamma$ such that \begin{equation}\label{e:Ib18} g_j\cdot\partial_j T(\sigma) = T(\partial_j\sigma), \end{equation} and conjugation by $g_j$ induces an isomorphism $c_{g_j}:\Gamma_{\partial_j T(\sigma)}\longrightarrow \Gamma_{T(\partial_j\sigma)}$. The induced map on homology is denoted $\partial_j$ again, i.e. \begin{equation}\label{e:Ib19} \xymatrix{ \partial_j: H_q(\Gamma_{T(\sigma)})\ar[r]^{\textrm{incl}_*}& H_q(\Gamma_{\partial_j T(\sigma)})\ar[r]^{(c_{g_j})_*}& H_q(\Gamma_{T(\partial_j\sigma)}) }. \end{equation} Note that $(c_{g_j})_*$ does not depend on the choice of $g_j$ in \eqref{e:Ib18}: Another choice $g_j'$ gives $c_{g_j'}= c_{g_j'g_j^{-1}}c_{g_j}$, and $g_j'g_j^{-1}\in\Gamma_{T(\partial_j\sigma)}$ so $c_{g_j'g_j^{-1}}$ induces the identity on $H_q(\Gamma_{T(\partial_j\sigma)})$. Then \begin{equation}\label{e:Ib20} d^1=\sum_{j=0}^{p-1}(-1)^j\partial_j. \end{equation} The proof of the main stability Theorem depends on a partial calculation of the spectral sequence \eqref{e:E1F}. More specifically, the first differential $d^1:E^1_{1,q}\longrightarrow E^1_{0,q}$ is equivalent to a stability map $H_q(\Gamma_\alpha)\longrightarrow H_q(\Gamma)$, so the question becomes whether $d^1$ is an isomorphism resp. an epimorphism. In a range of dimensions the spectral sequence converges to zero, so that $d^1$ must be an isomorphism unless other (higher) differentials interfere. The next three lemmas are the key elements that give sufficient control of the spectral sequence. The first lemma gives the general induction step. The next two lemmas, about $d^1:E^1_{p,q}\longrightarrow E^1_{p-1,q}$ for $p=3,4$, are necessary for the improved stability. \begin{lem}\label{l:E^2_p,q} Let $i=1,2$, and let $k,j\in \mathbb N$ with $k\le g-3+i$. For any $\alpha\in\Delta_{p-1}(F;i)$ and all $q\le k-j$, assume that \begin{eqnarray}\label{e:assume} & &H_q(\Gamma_\alpha)\stackrel{\scriptscriptstyle\cong}{\to} H_q(\Gamma) \text{ is an isomorphism} \quad \text{if } p+q\le k+1, \\ & &H_q(\Gamma_\alpha)\twoheadrightarrow H_q(\Gamma) \text{ is surjective} \qquad\qquad\! \text{if } p+q= k+2. \label{e:assume2} \end{eqnarray} Then $E^2_{p,q}(F;i) = 0$ for all $p,q$ with $p+q=k+1$ and $q\le k-j$. \end{lem} \begin{proof}Let $\overline{C}_n(F;i)=C_n(F;i)/\Gamma$. By \eqref{e:E1F} and the assumptions, we get for $q\le k-j$: \begin{eqnarray}\label{e:conclude} & &E^1_{p,q}\cong \overline{C}_{p-1}(F;i)\otimes H_{q}(\Gamma) \quad \text{if } p+q\le k+1, \\ & &E^1_{p,q}\twoheadrightarrow \overline{C}_{p-1}(F;i)\otimes H_{q}(\Gamma) \quad \text{if } p+q= k+2.\nonumber \end{eqnarray} Now we have the following commutative diagram, for a fixed pair $p, q$ with $q\le k-j$ and $p+q=k+1$: \begin{equation}\label{e:diagram} \xymatrix{E^1_{p-1,q}\ar[d]^{\cong} &E^1_{p,q}\ar[l]_{\quad d^1}\ar[d]^{\cong}& E^1_{p+1,q}\ar[l]_{d^1}\ar@{->>}[d]\\ \overline{C}_{p-2}(F;i) \otimes H_{q}(\Gamma)& \overline{C}_{p-1}(F;i) \otimes H_{q}(\Gamma)\ar[l]_{\quad\bar{d}^1}& \overline{C}_{p}(F;i) \otimes H_{q}(\Gamma)\ar[l]_{\quad\bar{d}^1} } \end{equation} By the formula \eqref{e:Ib20} for $\bar{d}^1$, we have $(c_{g_j})_*(\omega)=\omega$ for $\omega\in H_*(\Gamma)$, since conjugation induces the identity in $H_*(\Gamma)$. Thus the bottom row of diagram \eqref{e:diagram} is just the sequence from Lemma \ref{l:Hlignul}, tensored with $H_{q}(\Gamma)$.
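Note that split exactness, not just exactness, of that sequence is what gets used here: an exact sequence of abelian groups need not remain exact after tensoring with $H_q(\Gamma)$, but a split exact one does, since the splittings are carried along by the additive functor $-\otimes H_q(\Gamma)$.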
Since $p\le k+1\le g-2+i$ that sequence is split exact, so the bottom row of \eqref{e:diagram} is exact. We conclude that $E^2_{p,q}=0$ for all $p,q$ with $q\le k-j$ and $p+q=k+1$, as desired. \end{proof} We next examine the chain complex \begin{equation*} \xymatrix{ \ldots\ar[r]^{d^1}& E^1_{3,q}(F,i) \ar[r]^{d^1}& E^1_{2,q}(F,i)\ar[r]^{d^1}& E^1_{1,q}(F,i)\ar[r]^{d^1}&E^1_{0,q}(F,i)} \end{equation*} associated with $C_*(F;i)$, but first we need an easy geometric proposition. Recall from Definition \ref{d:N} that for $\alpha \in \Delta_p(F;i)$ we write $F_\alpha =F\setminus N(\alpha)$ for the surface cut along the arcs of $\alpha$. \begin{prop}\label{p:8-tal} Let $\alpha\in \Delta_n(F;i)$ with permutation $P(\alpha)=\sigma$, and assume there are $k,l<n$ such that $\sigma(k)=l+1$ and $\sigma(k+1)=l$. Then there exists $f\in \Gamma(F)$ with $f(\alpha_{k+1})=\alpha_k$, $f(\alpha_i)=\alpha_i$ for $i\notin \set{k, k+1}$ and $f|_{F_\alpha} =\textup{id}_{F_\alpha}$. \end{prop} \begin{proof} A (right) \emph{Dehn twist} in an annulus in $F$ is an element of $\Gamma(F)$ given by performing a full twist to the right inside the annulus, and extending by the identity outside the annulus. Figure \ref{b:Dehn1} shows a Dehn twist $\gamma$ in an annulus, and its effect on a curve $\beta$ intersecting the annulus. \begin{figure} \caption{A Dehn twist $\gamma$ in an annulus.} \label{b:Dehn1} \end{figure} Consider the curves $\alpha_k$ and $\alpha_{k+1}$. Take an annulus as depicted on Figure \ref{b:Dehn} below (in grey). By the requirements of the proposition it is easy to construct the annulus so that it only intersects $\alpha$ in $\alpha_k$ and $\alpha_{k+1}$. Let $f$ be the Dehn twist in this annulus. \begin{figure} \caption{The Dehn twist $f$.} \label{b:Dehn} \end{figure} Since $f$ is the identity outside the annulus, we have $f(\alpha_i)=\alpha_i$ for all $i\notin \{k, k+1\}$ and $f|_{F_\alpha} =\textup{id}_{F_\alpha}$. By Figure \ref{b:Dehn} it is easy to see that $f(\alpha_{k+1})=\alpha_k$. \end{proof} The stabilizer $\Gamma_\alpha$ of $\alpha\in \Delta_p(F;i)$ depends up to conjugation only on the orbit $\Gamma\alpha$, i.e. on $P(\alpha)\in \Sigma_{p+1}$. So when conjugation is of no importance we shall for $\sigma\in\overline{\Sigma}_{p+1}$ write $\Gamma_\sigma$ for any of the conjugate subgroups $\Gamma_\alpha$ with $P(\alpha)=\sigma$. If $\tau\in \overline{\Sigma}_{p}$ is a face of $\sigma\in \overline{\Sigma}_{p+1}$ then $\Gamma_\sigma$ is conjugate to a subgroup of $\Gamma_\tau$, and there is a homomorphism \begin{equation*} H_q(\Gamma_\sigma)\longrightarrow H_q(\Gamma_\tau), \end{equation*} well-determined up to isomorphism of source and target. \begin{lem}\label{l:E^2_2,q}Let $c_1$ and $c_2$ be the isomorphism classes of the maps \begin{equation*} c_1: H_q(\Gamma_{[0\,2\,1]})\longrightarrow H_q(\Gamma_{[1\,0]}),\quad c_2: H_q(\Gamma_{[1\,2\,0]})\longrightarrow H_q(\Gamma_{[0\,1]}). \end{equation*} \begin{itemize} \item[$(i)$]If $c_1$ and $c_2$ are surjective, then $d^1_{3,q}:E^1_{3,q}\longrightarrow E^1_{2,q}$ is surjective, and $E^2_{2,q}=0$. \item[$(ii)$]If $c_1$ and $c_2$ are injective, then \begin{equation*} d^1_{3,q}:E^1_{3,q}([0\,2\,1])\oplus E^1_{3,q}([1\,2\,0])\longrightarrow E^1_{2,q} \end{equation*} is injective. \end{itemize} \end{lem} \begin{proof}The target of $d^1$ is $E^1_{2,q}=E^1_{2,q}([0\,1])\oplus E^1_{2,q}([1\,0])$, and we first examine the component \begin{equation}\label{e:Ib25} d^1_{3,q}: E^1_{3,q}([0\,2\,1])\longrightarrow E^1_{2,q}([0\,1]).
\end{equation} If $\beta=T([0\,2\,1])$ with $\beta= (\beta_0,\beta_1,\beta_2)$, let $\gamma\in\Gamma$ satisfy $(\gamma\beta_0,\gamma\beta_1)=T([0\,1])$, and write $\alpha=\gamma\beta$. Then \begin{equation*} (c_\gamma)_*:E^1_{3,q}([0\,2\,1])\stackrel{\cong}{\longrightarrow}H_q(\Gamma_\alpha), \end{equation*} and the $E^1_{2,q}([0\,1])$-component of $d^1_{3,q}\circ (c_\gamma)_*$ is the difference of \begin{eqnarray}\label{e:Ib26} \partial_2:H_q(\Gamma_\alpha) &\longrightarrow& H_q(\Gamma_{(\alpha_0,\alpha_1)}) \\ \partial_1:H_q(\Gamma_\alpha) &\longrightarrow& H_q(\Gamma_{(\alpha_0,\alpha_2)})\longrightarrow H_q(\Gamma_{(\alpha_0,\alpha_1)})\nonumber \end{eqnarray} where $f\cdot(\alpha_0, \alpha_2)=(\alpha_0, \alpha_1)$. By Prop. \ref{p:8-tal} we may choose $f$ such that $f|_{F_\alpha}=\textup{id}_{F_\alpha}$. It follows that $c_f:\Gamma\longrightarrow \Gamma$ restricts to the identity on $\Gamma_\alpha$, and hence that the two maps in \eqref{e:Ib26} are equal. Thus the component of $d^1_{3,q}$ in \eqref{e:Ib25} is zero. On the other hand, the component \begin{equation*} d^1_{3,q}: E^1_{3,q}([0\,2\,1])\longrightarrow E^1_{2,q}([1\,0]) \end{equation*} is equal to $\partial_0$, so it belongs to the isomorphism class $c_1$. Thus it is surjective resp. injective under the assumptions $(i)$ resp. $(ii)$. The restriction of $d^1_{3,q}$ to $E^1_{3,q}([1\,2\,0])$, \begin{equation*} d^1_{3,q}: E^1_{3,q}([1\,2\,0])\longrightarrow E^1_{2,q}([0\,1])\oplus E^1_{2,q}([1\,0]), \end{equation*} is treated in a similar fashion. This time there are two terms with opposite signs in $E^1_{2,q}([1\,0])$ which cancel by Prop. \ref{p:8-tal}, and the component \begin{equation*} d^1_{3,q}: E^1_{3,q}([1\,2\,0])\longrightarrow E^1_{2,q}([0\,1]) \end{equation*} is in the isomorphism class of $c_2$. This proves the lemma. \end{proof} We next consider the situation of Lemma \ref{l:E^2_2,q}$(ii)$ where $c_1$ and $c_2$ are injective. If we further assume that $g(F)\ge 3$, then $\overline{\Si}_3=\Sigma_3$ and $\overline{\Si}_4=\Sigma_4\setminus \set{\textup{id}}$. We consider the maps \begin{eqnarray}\label{e:Ib27} c_3 &:& H_q(\Gamma_{[1\,2\,3\,0]})\longrightarrow H_q(\Gamma_{[0\,1\,2]}) \nonumber\\ c_4 &:& H_q(\Gamma_{[0\,3\,2\,1]})\longrightarrow H_q(\Gamma_{[2\,1\,0]}) \\ c_5 &:& H_q(\Gamma_{[0\,2\,1\,3]})\longrightarrow H_q(\Gamma_{[1\,0\,2]}) \nonumber\\ c_6 &:& H_q(\Gamma_{[0\,3\,1\,2]})\longrightarrow H_q(\Gamma_{[2\,0\,1]}) \nonumber \end{eqnarray} \begin{lem}\label{l:E^2_3,q} Let $g\ge 3$ and assume that $c_1$ and $c_2$ of Lemma \ref{l:E^2_2,q} are injective and that the four maps in \eqref{e:Ib27} are surjective. Then $E^2_{3,q}(F;i)=0$ for $i=1,2$. \end{lem} \begin{proof}The group $E^1_{3,q}$ decomposes into six summands since $\overline{\Si}_3=\Sigma_3$. By Lemma \ref{l:E^2_2,q}, to show that $E^2_{3,q}=0$ under the above conditions, it suffices to check that $d^1_{4,q}$ maps onto the four components not considered in Lemma \ref{l:E^2_2,q}. More precisely, let \begin{equation*} \tilde{E}_{3,q}^1= E^1_{3,q}([0\,1\,2])\oplus E^1_{3,q}([2\,1\,0])\oplus E^1_{3,q}([1\,0\,2])\oplus E^1_{3,q}([2\,0\,1]). \end{equation*} We must show that the composition \begin{equation*} \bar{d}^1: E^1_{4,q}\stackrel{d^1}{\longrightarrow} E^1_{3,q}\stackrel{\textup{proj}}{\longrightarrow} \tilde{E}_{3,q}^1 \end{equation*} is surjective. The argument is quite similar to the proof of Lemma \ref{l:E^2_2,q}, using Prop. \ref{p:8-tal} to cancel out elements.
Then the components of $\bar{d}^1$ can be described as follows: \begin{eqnarray*} \bar{d}^1=-\partial_3 &:& E^1_{4,q}([1\,2\,3\,0]) \longrightarrow E^1_{3,q}([0\,1\,2]), \\ \bar{d}^1=\partial_0 &:& E^1_{4,q}([0\,3\,2\,1]) \longrightarrow E^1_{3,q}([2\,1\,0]), \\ \bar{d}^1=\partial_0 &:& E^1_{4,q}([0\,2\,1\,3]) \longrightarrow E^1_{3,q}([1\,0\,2]), \\ \bar{d}^1=(\partial_0,-\partial_3) &:& E^1_{4,q}([0\,3\,1\,2]) \longrightarrow E^1_{3,q}([2\,0\,1])\oplus E^1_{3,q}([0\,1\,2]). \end{eqnarray*} It follows from the surjections in \eqref{e:Ib27} that $\bar{d}^1$ is surjective, and hence that $E^2_{3,q}(F;i)=0$. \end{proof} \begin{rem}\label{r:third}Now we can state Harer's third assertion needed to improve our main stability Theorem by ``one degree'' (cf. the Introduction). It is easy to show that $d^1_{2,2n}$ vanishes on $E^1_{2,2n}([1\, 0])$ for all $n$. Then the homology class $[\check{\kappa}_1^{\phantom{,}n}]$ of $\check{\kappa}_1^{\phantom{,}n}$ with respect to $d^1$ is an element of $E^2_{2,2n}$. The assertion is \begin{itemize} \item[$(iii)$]$d^2_{2,2n}([\check{\kappa}_1^{\phantom{,}n}])=x\cdot[\check{\kappa}_1^{\phantom{,}n}]$ for some Dehn twist $x$ around a simple closed curve in $F$. Here, $\cdot$ denotes the Pontryagin product in group homology. \end{itemize} \end{rem} \subsection{The stability theorem for surfaces with boundary}\label{ss:soeren} In this section we prove the first of the two stability theorems listed in the introduction. Our proof is strongly inspired by the 15-year-old manuscript \cite{Harer2}, but with two changes. We work with integral coefficients, and we avoid the assertions made in \cite{Harer2} discussed in the introduction. The theorem we prove is \begin{thm}[Main Theorem]\label{t:Main}Let $F_{g,r}$ be a surface of genus $g$ with $r$ boundary components. \begin{itemize} \item[$(i)$] Let $r\ge 1$ and let $i=\Sigma_{0,1}:\Gamma_{g,r}\longrightarrow \Gamma_{g,r+1}$. Then \begin{equation*} i_*: H_k(\Gamma_{g,r})\longrightarrow H_k(\Gamma_{g,r+1}) \end{equation*} is an isomorphism for $2g\ge 3k$. \item[$(ii)$]Let $r\ge 2$ and let $j=\Sigma_{1,-1}:\Gamma_{g,r}\longrightarrow \Gamma_{g+1,r-1}$. Then \begin{equation*} j_*: H_k(\Gamma_{g,r})\longrightarrow H_k(\Gamma_{g+1,r-1}) \end{equation*} is surjective for $2g\ge 3k-1$, and an isomorphism for $2g\ge 3k+2$. \end{itemize} \end{thm} \begin{proof}The proof is by induction in the homology degree $k$. For $k=0$ the results are obvious, since $H_0(G,\mathbb Z)=\mathbb Z$ for any group $G$. So assume now $k>0$ and that the theorem holds for homology degrees less than $k$. \subsubsection*{The case $\Sigma_{0,1}$}In this case we know from (\ref{e:injektiv}) that $\Sigma_{0,1}$ is injective, so to prove that it is an isomorphism it is enough to show surjectivity. Assume $2g\ge 3k$ and write $\Gamma=\Gamma_{g,r+1}$. We use that $\Gamma_{g,r}$ is the stabilizer $\Gamma_\alpha$ for $\alpha\in \Delta_0(F_{g,r+1};2)$ as on Figure \ref{b:arc}. Now we use the spectral sequence \eqref{e:E1F} associated with the action of $\Gamma$ on $C_*(F_{g,r+1};2)$, and we recognize the map $i_*: H_k(\Gamma_\alpha)\longrightarrow H_k(\Gamma)$ as the differential $d^1:E^1_{1,k}\longrightarrow E^1_{0,k}$. The spectral sequence converges to zero in total degree $k$, so $E^{\infty}_{0,k}=0$. So it suffices to show that $E^2_{p,k+1-p}$ is zero for all $p\ge 2$. We begin by proving $E^2_{2,k-1}=0$ using Lemma \ref{l:E^2_2,q} $(i)$, noting that $g \ge 2$, since $k\ge 1$. We must verify that $c_1$ and $c_2$ are surjective, and we will do this inductively. Prop.
\ref{p:grim} (or Example \ref{ex:perm}) and Prop. \ref{p:genus} calculate the genus and the number of boundary components of the cut surface, and hence identify $\Gamma_{\sigma}$. The figures below show the relevant simplices $\sigma\in \Delta_*(F_{g,r+1};2)$ so that the method in Example \ref{ex:perm} can easily be applied. The circles are the boundary components containing $b_0$ and $b_1$. \setlength{\unitlength}{0.3cm} \begin{center} \begin{picture}(44,3)(0,-1.5) \put(17,0){\circle*{0.3}} \put(13,0){\circle*{0.3}} \put(12,0){\circle{2}}\put(18,0){\circle{2}} \put(13,0){\line(1,0){1.7}}\put(17,0){\line(-1,0){1.7}} \qbezier(13,0)(14,1.5)(15,0)\qbezier(17,0)(16,-1.5)(15,0) \put(0,-0.2){$\Gamma_{[1\,0]} = \Gamma_{g-1,r+1},$} \put(42,0){\circle*{0.3}} \put(38,0){\circle*{0.3}} \put(37,0){\circle{2}}\put(43,0){\circle{2}} \qbezier(38,0)(40,1.5)(42,0) \qbezier(38,0)(39,0.3)(39.8,-0.3)\qbezier(42,0)(41,-1.5)(40.2,-0.7) \qbezier(42,0)(41,0.3)(40,-0.5)\qbezier(38,0)(39,-1.5)(40,-0.5) \put(25,-0.2){$\Gamma_{[0\,2\,1]} = \Gamma_{g-1,r},$} \end{picture} \begin{picture}(44,3)(0,-1.5) \put(17,0){\circle*{0.3}} \put(13,0){\circle*{0.3}} \put(12,0){\circle{2}}\put(18,0){\circle{2}} \qbezier(13,0)(15,1.5)(17,0)\qbezier(13,0)(15,-1.5)(17,0) \put(0,-0.2){$\Gamma_{[0\,1]} = \Gamma_{g-1,r+1},$} \put(42,0){\circle*{0.3}} \put(38,0){\circle*{0.3}} \put(37,0){\circle{2}}\put(43,0){\circle{2}} \qbezier(38,0)(39,1)(40.25,0.25)\qbezier(40.25,0.25)(41,-0.2)(42,0) \qbezier(42,0)(41,-1)(39.75,-0.25)\qbezier(39.75,-0.25)(39,0.2)(38,0) \qbezier(38,0)(39,-1)(39.6,-0.4)\qbezier(40.4,0.4)(41,1)(42,0) \qbezier(40.1,0.1)(40.1,0.1)(39.9,-0.1) \put(25,-0.2){$\Gamma_{[1\,2\,0]} = \Gamma_{g-2,r+2}.$} \end{picture} \end{center} We see that \begin{equation*}\begin{array}{rll} c_1=(\Sigma_{0,1})_* :& H_{k-1}(\Gamma_{g-1,r})\longrightarrow H_{k-1}(\Gamma_{g-1,r+1}), & \text{and}\\ c_2=(\Sigma_{1,-1})_* :& H_{k-1}(\Gamma_{g-2,r+2})\longrightarrow H_{k-1}(\Gamma_{g-1,r+1}) \end{array} \end{equation*} are both surjective by induction. So $E^2_{2,k-1}=0$. We now show that $E^2_{p,q}=0$ for $p+q=k+1$ and $p>2$, i.e. $q\le k-2$, using Lemma \ref{l:E^2_p,q}, so we must verify \eqref{e:assume} and \eqref{e:assume2}. By Prop. \ref{p:genus} we have $\Gamma_\alpha=\Gamma_{g-p+s+1,r+p-2s-1}$, for $\alpha\in \overline{\Delta}_{p-1}$ of genus $s$. So for $q\le k-2$, we will show by induction: \begin{eqnarray}\label{e:induktions} H_{q}(\Gamma_{g-p+s+1,r+p-2s-1})\cong H_{q}(\Gamma_{g,r+1}),&\text{for}&p+q\le k+1\\ H_{q}(\Gamma_{g-p+s+1,r+p-2s-1})\twoheadrightarrow H_{q}(\Gamma_{g,r+1}),&\text{for}&p+q= k+2. \label{e:induktions2} \end{eqnarray} The maps in \eqref{e:induktions} and \eqref{e:induktions2} are induced from the composition \begin{equation*}\xymatrix{ \Gamma_{g-p+s+1,r+p-2s-1}\ar[rr]^{\quad(\Sigma_{0,1})^{s+1}}& &\Gamma_{g-p+s+1,r+p-s} \ar[rr]^{\qquad(\Sigma_{1,-1})^{p-s-1}}& &\Gamma_{g,r+1} }. \end{equation*} The result follows by induction if \begin{equation*} 2(g-p+s+1)\ge 3q \quad \hbox{and} \quad 2(g-p+s+1) \ge 3q+2, \quad \textup{for }q\le k-2. \end{equation*} Let us prove (\ref{e:induktions}). We know that $2g\ge 3k$, and we have $p+q\le k+1$. Let $q$ be fixed. Since more arcs (greater $p$) and smaller genus of $\alpha$ imply a smaller genus of the cut surface $F_\alpha$, it suffices to show the inequality for $p+q=k+1$ and $s=0$. In this case \begin{equation*} 2(g-p+1)= 2(g-k-1+q+1)\ge 3k-2k+2q=2q+k \ge 3q+2, \end{equation*} where in the last inequality we have used the assumption $q\le k-2$. The proof of \eqref{e:induktions2} is similar. Now by Lemma \ref{l:E^2_p,q}, $E^2_{p,q}=0$ for all $p+q=k+1$ with $q\le k-2$.
Combined with $E^2_{2,k-1}=0$ proved above, this gives $E^2_{p,k+1-p}=0$ for all $p\ge 2$, which proves that $d^1_{1,k}=(\Sigma_{0,1})_*$ is surjective. \subsubsection*{Surjectivity in the case $\Sigma_{1,-1}$}Assume $2g\ge 3k-1$, and write $\Gamma=\Gamma_{g+1,r-1}$. Then $\Gamma(F_{g,r})=\Gamma_{\beta}$ for $\beta\in\Delta_0(F_{g+1,r-1};1)$ as on Figure \ref{b:arc}. In the spectral sequence \eqref{e:E1F} associated with the action of $\Gamma$ on $C_*(F_{g+1,r-1};1)$, we recognize the map $(\Sigma_{1,-1})_*: H_k(\Gamma_{g,r})\longrightarrow H_k(\Gamma_{g+1,r-1})$ as the differential $d^1_{1,k}:E^1_{1,k}\longrightarrow E^1_{0,k}$. It suffices to show that $E^2_{p,q}=0$ for $p+q=k+1$ and $q\le k-1$. We first show that $E^2_{2,k-1}=0$ using Lemma \ref{l:E^2_2,q}. As before, the figures below show the relevant simplices in $\Delta_*(F_{g+1,r-1};1)$, and the oval is the boundary component containing $b_0$ and $b_1$. \setlength{\unitlength}{0.3cm} \begin{center} \begin{picture}(44,3)(0,-1.5) \put(18,0){\circle*{0.3}} \put(12,0){\circle*{0.3}} \put(15,0){\oval(6,2.5)} \put(12,0){\line(1,0){2.6}}\put(18,0){\line(-1,0){2.6}} \qbezier(12,0)(13.5,1.3)(15,0)\qbezier(18,0)(16.5,-1.3)(15,0) \put(0,-0.2){$ \Gamma_{[1\,0]} = \Gamma_{g,r-1},$} \put(43,0){\circle*{0.3}} \put(37,0){\circle*{0.3}} \put(40,0){\oval(6,2.5)} \qbezier(37,0)(40,1.5)(43,0) \qbezier(37,0)(38.5,0.3)(39.8,-0.3)\qbezier(43,0)(41.5,-1.5)(40.2,-0.7) \qbezier(43,0)(41.5,0.3)(40,-0.5)\qbezier(37,0)(38.5,-1.5)(40,-0.5) \put(25,-0.2){$\Gamma_{[0\,2\,1]} = \Gamma_{g-1,r},$} \end{picture} \begin{picture}(44,3)(0,-1.5) \put(18,0){\circle*{0.3}} \put(12,0){\circle*{0.3}} \qbezier(12,0)(15,1.5)(18,0)\qbezier(12,0)(15,-1.5)(18,0) \thinlines \put(15,0){\oval(6,2.5)} \put(0,-0.2){$\Gamma_{[0\,1]} = \Gamma_{g-1,r+1},$} \put(43,0){\circle*{0.3}} \put(37,0){\circle*{0.3}} \put(40,0){\oval(6,2.5)} \qbezier(37,0)(38.5,1)(40.25,0.25)\qbezier(40.25,0.25)(41.5,-0.2)(43,0) \qbezier(43,0)(41.5,-1)(39.75,-0.25)\qbezier(39.75,-0.25)(38.5,0.2)(37,0) \qbezier(37,0)(38.5,-1)(39.6,-0.4)\qbezier(40.4,0.4)(41.5,1)(43,0) \qbezier(40.1,0.1)(40.1,0.1)(39.9,-0.1) \put(25,-0.2){$\Gamma_{[1\,2\,0]} = \Gamma_{g-1,r}.$} \end{picture} \end{center} We see that \begin{equation}\label{e:i1i2}\begin{array}{rll} c_1=(\Sigma_{1,-1})_* :& H_{k-1}(\Gamma_{g-1,r})\longrightarrow H_{k-1}(\Gamma_{g,r-1}), & \text{and}\\ c_2=(\Sigma_{0,1})_* : &H_{k-1}(\Gamma_{g-1,r})\longrightarrow H_{k-1}(\Gamma_{g-1,r+1}) \end{array} \end{equation} are both surjective by induction. So $E^2_{2,k-1}=0$. Next we show that $E^2_{3,k-2}=0$ using Lemma \ref{l:E^2_3,q}. To verify the conditions, we calculate as before, \begin{equation*} \begin{array}{llll} \Gamma_{[0\,1\,2]} &=& \Gamma_{g-2,r+2},\\ \Gamma_{\sigma} &=& \Gamma_{g-1,r} &\text{ for $\sigma\in \Sigma_3$ the remaining 3 permutations in \eqref{e:Ib27} }\\ \Gamma_{\sigma} &=& \Gamma_{g-2,r+1}&\text{ for $\sigma\in \Sigma_4$ the remaining 4 permutations in \eqref{e:Ib27}}. \end{array} \end{equation*} We see that \begin{equation}\label{e:i3ij}\begin{array}{rll} c_3=(\Sigma_{0,1})_* : &H_{k-2}(\Gamma_{g-2,r+1})\longrightarrow H_{k-2}(\Gamma_{g-2,r+2}), & \text{and}\\ c_j=(\Sigma_{1,-1})_* :& H_{k-2}(\Gamma_{g-2,r+1})\longrightarrow H_{k-2}(\Gamma_{g-1,r}) & \text{for } j=4,5,6. \end{array} \end{equation} Inductively we can verify that these four maps are surjective. The maps $c_1$ and $c_2$ we calculated in \eqref{e:i1i2}, and we see by induction that they are injective in homology degree $k-2$. So by Lemma \ref{l:E^2_3,q}, $E^2_{3,k-2}=0$. Finally we prove that $E^2_{p,q}=0$ for $p+q=k+1$ and $q\le k-3$ using Lemma \ref{l:E^2_p,q}.
This is done as in \textbf{The case $\Sigma_{0,1}$}, so we skip the calculations and just show the final inequality: \begin{eqnarray*} 2(g-p+1) &=& 2g-2(k+1-q)+2 \quad\ge \quad 3k-1-2k+2q \\ &=& k+2q -1 \ge q+3+2q -1 \quad = \quad 3q+2. \end{eqnarray*} So by Lemma \ref{l:E^2_p,q}, $E^2_{p,q}=0$ for $p+q= k+1$ and $q\le k-3$. We conclude that $(\Sigma_{1,-1})_*=d^1_{1,k}$ is surjective. \subsubsection*{Injectivity in the case $\Sigma_{1,-1}$} Assume $2g\ge 3k+2$ and let as in the above case $\Gamma=\Gamma_{g+1,r-1}$ and $E^n_{p,q}= E^n_{p,q}(F_{g+1,r-1};1)$. We will show that $(\Sigma_{1,-1})_*=d^1_{1,k}$ is injective. Since $E^{\infty}_{1,k}=0$, it suffices to show that all differentials with target $E^n_{1,k}$ are trivial. This holds if we can show that $E^2_{p,q}=0$ for all $p+q=k+2$ with $q\le k-1$ and that $d^1_{2,k}:E^1_{2,k}\longrightarrow E^1_{1,k}$ is trivial. We first prove that $d^1_{2,k}:E^1_{2,k}\longrightarrow E^1_{1,k}$ is trivial by proving that $d^1_{3,k}: E^1_{3,k}\longrightarrow E^1_{2,k}$ is surjective, using Lemma \ref{l:E^2_2,q}. We have already calculated $c_1$ and $c_2$, cf. \eqref{e:i1i2}: \begin{equation*}\begin{array}{rll} c_1=(\Sigma_{1,-1})_* :& H_{k}(\Gamma_{g-1,r})\longrightarrow H_{k}(\Gamma_{g,r-1}), & \text{and}\\ c_2=(\Sigma_{0,1})_* : &H_{k}(\Gamma_{g-1,r})\longrightarrow H_{k}(\Gamma_{g-1,r+1}) \end{array} \end{equation*} In this case we cannot use induction, since the homology degree is $k$, but we can use the surjectivity results for $\Sigma_{0,1}$ and $\Sigma_{1,-1}$, since these are already proved. So by Theorem \ref{t:Main} $(i)$ and $(ii)$, $c_1$ and $c_2$ are surjective. Next we prove that $E^2_{3,k-1}=0$, using Lemma \ref{l:E^2_3,q}. We have already calculated $c_j$ for $j=1,2,3,4,5,6$ in the proof of surjectivity of $(\Sigma_{1,-1})_*$, cf. \eqref{e:i1i2} and \eqref{e:i3ij}, and in this case we get \begin{equation*}\begin{array}{rll} c_1=(\Sigma_{1,-1})_* :& H_{k-1}(\Gamma_{g-1,r})\longrightarrow H_{k-1}(\Gamma_{g,r-1}), & \\ c_2=(\Sigma_{0,1})_* : &H_{k-1}(\Gamma_{g-1,r})\longrightarrow H_{k-1}(\Gamma_{g-1,r+1}) \\ c_3=(\Sigma_{0,1})_* : &H_{k-1}(\Gamma_{g-2,r+1})\longrightarrow H_{k-1}(\Gamma_{g-2,r+2}), &\text{and}\\ c_j=(\Sigma_{1,-1})_* :& H_{k-1}(\Gamma_{g-2,r+1})\longrightarrow H_{k-1}(\Gamma_{g-1,r}) & \text{for } j=4,5,6. \end{array} \end{equation*} Inductively we can verify that $c_1$ and $c_2$ are injective, and that $c_j$ for $j=3,4,5,6$ are surjective. So by Lemma \ref{l:E^2_3,q}, $E^2_{3,k-1}=0$. Finally we prove that $E^2_{p,q}=0$ for $p+q=k+2$ and $q\le k-2$ using Lemma \ref{l:E^2_p,q}. As before we skip the calculations, and the final inequality is the same as in \textbf{Surjectivity in the case $\Sigma_{1,-1}$}. \end{proof} \begin{rem}Another possibility for proving the above result is to use another arc complex. Inspired by \cite{Ivanov1} we consider a subcomplex of $C(F;i)$ consisting of all $n$-simplices with a given permutation $\sigma_n$, $n\ge 0$. Ivanov takes $\sigma_n=\textup{id}$, which means the cut surfaces $F_{\alpha}$ have minimal genus. For the inductive assumption, it would be better to have maximal genus, which can be achieved by taking $\sigma_n=[n\,\,n\!-\!1 \,\cdots \,1 \,\,0]$. Potentially, this could give a better stability range, but it is not known how connected this subcomplex is, which means that the proof above cannot be carried through.
\end{rem} \subsection{The stability theorem for closed surfaces}\label{S:Closed surface} In this section we study $l=\Sigma_{0,-1}: \Gamma_{g,1}\longrightarrow \Gamma_{g}$, the homomorphism induced by gluing a disk onto the boundary circle. The main result is \begin{thm}\label{t:closed}The map \begin{equation*} l_*:H_k(\Gamma_{g,1})\longrightarrow H_k(\Gamma_{g}) \end{equation*} is surjective for $2g \ge 3k-1$, and an isomorphism for $2g \ge 3k + 2$. \end{thm} The proof we give is modelled on \cite{Ivanov1}. See also \cite{CM}. \begin{defn}Let $F$ be a surface, possibly with boundary. The arc complex $D_*(F)$ has isotopy classes of closed, non-trivial, oriented, embedded circles as vertices, and $n+1$ distinct vertices ($n\ge 0$) form an $n$-simplex if they have representatives $(\alpha_0,\ldots \alpha_{n})$ such that: \begin{itemize} \item[$(i)$] $\alpha_i\cap\alpha_j=\emptyset$ for $i\ne j$, and $\alpha_i\cap \partial(F)=\emptyset$, \item[$(ii)$] $F\setminus (\bigcup_{i=0}^n \alpha_i)$ is connected. \end{itemize} \end{defn} We note that \begin{equation}\label{e:cut} (F_{g,r})_\alpha\cong F_{g-1,r+2}, \quad \text{for each vertex }\alpha \text{ in }D(F_{g,r}). \end{equation} Indeed, for a vertex $\alpha$, $F_\alpha:= F\setminus N(\alpha)$ has two more boundary components than $F$, but the same Euler characteristic, since $F = F\setminus N(\alpha)\cup_{\partial N(\alpha)}N(\alpha)$, and $\chi(N(\alpha))=0= \chi(\partial N(\alpha))$. Then \eqref{e:cut} follows from $\chi(F_{g,r})=2-2g-r$. We need the following connectivity result, which we state without proof: \begin{thm}[\cite{Harer1}] The arc complex $D_*(F_{g,r})$ is $(g-2)$-connected, and $\Gamma_{g,r}$ acts transitively in each dimension. \end{thm} We can now prove the stability theorem for closed surfaces: \begin{proof}[Proof of Theorem \ref{t:closed}] We use the unaugmented spectral sequences associated with the action of $\Gamma(F_i)$ on $D_*(F_i)$, where $F_i=F_{g,i}$ for $i=0,1$. They converge to the homology of $\Gamma(F_i)$ in degrees less than or equal to $g-2$. Since $\Gamma(F_i)$ acts transitively on the set of $n$-simplices, \begin{equation}\label{e:lukstab}E_{p,q}^1(F_i)\cong H_q(\Gamma(F_i)_\alpha,\mathbb Z_\alpha) \Rightarrow H_{p+q}(\Gamma(F_i)), \quad \textrm{for }i=0,1; \end{equation} where $\alpha$ is a $p$-simplex in $D_p(F_{1})$, by identifying $\alpha$ with its image in $D_p(F_0)$ under the inclusion $l:F_1\longrightarrow F_0$. We use Moore's comparison theorem for spectral sequences, cf. \cite{Cartan}: If $l_*:H_q(\Gamma(F_1)_\alpha,\mathbb Z_\alpha)\longrightarrow H_q(\Gamma(F_0)_\alpha,\mathbb Z_\alpha)$ is an isomorphism for $p+q\le m$ and surjective for $p+q\le m+1$, then $l_*:H_k(\Gamma(F_1))\longrightarrow H_k(\Gamma(F_0))$ is an isomorphism for $k\le m$ and surjective for $k\le m+1$. To apply this, we will compare $H_q(\Gamma(F_i)_\alpha,\mathbb Z_\alpha)$ and $H_q(\Gamma((F_i)_\alpha))$ for a fixed $p$-simplex $\alpha$. First we need to analyse $\Gamma(F_i)_\alpha$ for $i=0,1$, and to ease the notation we call the surface $F$ and write $\Gamma=\Gamma(F)$. Unlike for $C_*(F;i)$, the stabilizer $\Gamma_\alpha$ is not $\Gamma(F_\alpha)$. For $\gamma\in\Gamma_\alpha$, \begin{itemize} \item[$(i)$] $\gamma$ need not stabilize $\alpha$ pointwise and can thus permute the circles of $\alpha$; \item[$(ii)$] $\gamma$ can change the orientation of any circle in $\alpha$; \item[$(iii)$] $\gamma$ can rotate each circle $\alpha_j$ of $\alpha$.
\end{itemize} In order to take care of $(i)$ and $(ii)$, consider the exact sequence, \begin{equation}\label{e:exact} 1\longrightarrow \widetilde{\Gamma_\alpha}\longrightarrow \Gamma_\alpha\longrightarrow (\mathbb Z/2)^{p+1}\rtimes \Sigma_{p+1}\longrightarrow 1. \end{equation} Here $\widetilde{\Gamma_\alpha}\subseteq \Gamma_\alpha$ consists of the mapping classes in $\Gamma_\alpha$ fixing each vertex of $\alpha$ and its orientation. We now compare $\widetilde{\Gamma_\alpha}$ and ${\Gamma}(F_\alpha)$, \begin{equation}\label{e:exact2} 0\longrightarrow \mathbb Z^{p+1}\longrightarrow {\Gamma}(F_\alpha)\longrightarrow \widetilde{\Gamma_\alpha}\longrightarrow 1. \end{equation} We must explain the map $\mathbb Z^{p+1}\longrightarrow {\Gamma}(F_\alpha)$. Let $\alpha=(\alpha_0,\ldots,\alpha_p)$, then the cut surface $F_\alpha$ has two boundary components, $\alpha_i^+$ and $\alpha_i^-$, for each circle $\alpha_i$. The standard generator $e_j=(0,\ldots,0,1,0,\ldots,0)\in\mathbb Z^{p+1}$, $j=0,\ldots,p$, maps to the mapping class making a right Dehn twist on $\alpha_j^+$ and a left Dehn twist on $\alpha_j^-$, and the identity everywhere else. This extends to a group homomorphism; e.g. $-e_j$ maps to a left Dehn twist on $\alpha_j^+$ and a right Dehn twist on $\alpha_j^-$. Let us see that \eqref{e:exact2} is exact. The hard part is injectivity of $\mathbb Z^{p+1}\longrightarrow {\Gamma}(F_\alpha)$, so we only show this. Assume $m\ne n\in \mathbb Z^{p+1}$, and say $m_0\ne n_0$. For $p\ge 1$, the surface $F_\alpha$ has at least four boundary components. Two of them come from cutting up along the circle $\alpha_0$, call one of these $S$. If $p=0$, then $\alpha=\alpha_0$, and $F_\alpha$ has genus $g-1\ge 2$ by \eqref{e:cut}, since $2g \ge 3k + 3\ge 6$. In both cases, there is a non-trivial loop $\gamma$ in $F_\alpha$ starting on $S$ which does not commute with the Dehn twist $f$ around $S$ in $\pi_1(F_\alpha)$. Since $F_\alpha $ has boundary, $\pi_1(F_\alpha)$ is a free group, so the subgroup $\indre{\gamma}{f}$ is also free. The action of $m\in\mathbb Z^{p+1}$ on $\gamma$ is $f^{m_0}\gamma f^{-m_0}$, and since $f$ and $\gamma$ do not commute, $f^{m_0}\gamma f^{-m_0} \ne f^{n_0}\gamma f^{-n_0}$ when $n_0\ne m_0$. Consider $l_*: \Gamma((F_1)_\alpha)\longrightarrow \Gamma((F_0)_\alpha)$. Both surfaces $(F_i)_\alpha$ have non-empty boundary, so we can use Main Theorem \ref{t:Main}. We must relate $l_*$ to the maps $\Sigma_{0,1}$ and $\Sigma_{1,-1}$, so let $\hat{F}$ denote a surface such that $\Sigma_{0,1}(\hat{F})=(F_1)_\alpha$. Then $\hat{F}$ has one boundary component less than $(F_1)_\alpha$, so $\hat{F}$ and $(F_0)_\alpha$ are diffeomorphic. This gives the diagram: \begin{equation*} \xymatrix{H_*(\Gamma(\hat{F}))\ar[rr]^{\cong}\ar[dr]_{(\Sigma_{0,1})_*}& & H_*(\Gamma((F_0)_\alpha))\\ & H_*(\Gamma((F_1)_\alpha))\ar[ur]_{l_*}& } \end{equation*} We see that $l_*$ is always surjective. By Theorem \ref{t:Main}, $(\Sigma_{0,1})_*:H_{s}({\Gamma}(\hat F))\longrightarrow H_{s}({\Gamma}((F_1)_\alpha))$ is an isomorphism for $3s \le 2(g-p-1)$, so the same holds for $l_*$. The Lyndon-Serre spectral sequence of \eqref{e:exact2} for $F$ is \begin{equation}\label{e:E2a} \bar{E}_{s,t}^2(F)\cong H_s(\widetilde{\Gamma_\alpha},H_t(\mathbb Z^{p+1})) \Rightarrow H_{s+t}({\Gamma}(F_\alpha)). \end{equation} We showed above that $l_*: H_{s+t}({\Gamma}((F_1)_\alpha))\longrightarrow H_{s+t}({\Gamma}((F_0)_\alpha))$ is an isomorphism for $3(s+t)\le 2(g-p-1)$ and surjective always.
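Before analysing this spectral sequence, recall the standard computation of its coefficient groups: \begin{equation*} H_t(\mathbb Z^{p+1})\cong \Lambda^t(\mathbb Z^{p+1}), \end{equation*} a free abelian group of rank $\binom{p+1}{t}$. In particular the coefficients in \eqref{e:E2a} are free, so no Tor terms appear in the Künneth argument below.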
Note that $\mathbb Z^{p+1}$ lies in the center of $\Gamma(F_\alpha)$, since the Dehn twists can take place as close to the boundary of $F_\alpha$ as desired. By the Künneth formula, we have an isomorphism \begin{equation*} \bar{E}_{s,t}^2(F) \cong \bar{E}_{s,0}^2(F) \otimes \bar{E}_{0,t}^2(F) = H_s(\widetilde{\Gamma_{\alpha}})\otimes H_t(\mathbb Z^{p+1}) \end{equation*} Now since $l_*: H_{s+t}({\Gamma}((F_1)_\alpha))\longrightarrow H_{s+t}({\Gamma}((F_0)_\alpha))$ is an isomorphism for $3(s+t)\le 2(g-p-1)$ and always surjective, it follows by an easy inductive argument that $l_*: H_s(\widetilde{\Gamma(F_1)_{\alpha}})\longrightarrow H_s(\widetilde{\Gamma(F_0)_{\alpha}})$ is an isomorphism for $3s\le 2(g-p-1)$ and surjective for $3s\le 2(g-p-1)+3$. The Lyndon-Serre spectral sequence of \eqref{e:exact} is \begin{equation}\label{e:E2} \tilde{E}_{r,s}^2(F)\cong H_r\left((\mathbb Z/2)^{p+1}\rtimes \Sigma_{p+1};H_s(\widetilde{\Gamma_\alpha};\mathbb Z_\alpha)\right) \Rightarrow H_{r+s}(\Gamma_\alpha;\mathbb Z_\alpha). \end{equation} Since $\widetilde{\Gamma_\alpha}$ preserves the orientation of the simplices, we can drop the local coefficients to obtain \begin{equation*} \tilde E_{r,s}^2(F)\cong H_r\left((\mathbb Z/2)^{p+1}\rtimes \Sigma_{p+1},H_s(\widetilde{\Gamma_\alpha})\otimes \mathbb Z_\alpha\right). \end{equation*} It follows from the above that $l_*:\tilde E_{r,s}^2(F_1)\longrightarrow \tilde E_{r,s}^2(F_0)$ is an isomorphism for $3s\le 2(g-p-1)$ and surjective for $3s\le 2(g-p-1)+3$. Then by Moore's comparison theorem, \begin{equation*} l_*: H_{q}(\Gamma(F_1)_\alpha;\mathbb Z_\alpha) \longrightarrow H_{q}(\Gamma(F_0)_\alpha;\mathbb Z_\alpha) \end{equation*} is an isomorphism for $3q\le 2(g-p-1)$ and surjective for $3q\le 2(g-p-1)+3$. Then in particular, it is an isomorphism for $3(p+q)\le 2g-2$ and surjective for $3(p+q)\le 2g-2+3$. Now a final application of Moore's comparison theorem on the spectral sequence in \eqref{e:lukstab} gives the desired result, as explained in the beginning of the proof. \end{proof} \section{Stability with twisted coefficients} \subsection{The category of marked surfaces} \begin{defn}\label{d:MS}The category of marked surfaces $\mathfrak{C}$ is defined as follows: The objects are triples $F,x_0,(\partial_1F, \partial_2F, \ldots,\partial_rF)$, where $F$ is a compact connected orientable surface with non-empty boundary $\partial F= \partial_1 F \cup \cdots \cup \partial_r F$, with a numbering $(\partial_1F,\ldots,\partial_rF)$ of the boundary components of $F$, and $x_0\in\partial_1 F$ is a marked point. A morphism $(\psi, \sigma)$ between marked surfaces $(F,x_0)$ and $(G,y_0)$ is an ambient isotopy class of an embedding $\psi: F\longrightarrow G$, where each boundary component of $F$ is either mapped to the inside of $G$ or to a boundary component of $G$. If $\psi(x_0) \in \partial G$ then we require $\psi(x_0)=y_0$; otherwise the data includes an embedded arc $\sigma$ in $G$ connecting $\psi(x_0)$ and $y_0$. \end{defn} The objects of $\mathfrak{C}$ can be grouped as \begin{equation*}\textrm{Ob}\,\mathfrak{C}=\coprod_{g,r}\textrm{Ob}\,\mathfrak{C}_{g,r}, \end{equation*} where $\mathfrak{C}_{g,r}$ consists of the surfaces with genus $g$ and $r$ boundary components. \begin{defn} The morphisms $\Sigma_{1,0}$, $\Sigma_{0,1}$ in $\mathfrak{C}$ are the embeddings $\Sigma_{i,j}:F\longrightarrow \Sigma_{i,j}F$ given by gluing onto $\partial_1F$ a torus with 2 disks cut out, or a pair of pants, respectively, as on Figure \ref{f:sigma}. The embedded arc $\sigma$ is also shown here.
The boundary components of $\Sigma_{0,1}F$ are numbered such that the new boundary component from the pair of pants is $\partial_{r+1}(\Sigma_{0,1}F)$. The morphism $\Sigma_{1,-1}$ in the subcategory of $\coprod_{r\ge 2}\textrm{Ob}\,\mathfrak{C}_{g,r}$ is the embedding given by gluing a pair of pants onto $\partial_1(F)$ and $\partial_2(F)$, as on Figure \ref{f:sigma}. The numbering is that $\partial_j(\Sigma_{1,-1}F)=\partial_{j+1}F$ for $j>1$. \end{defn} \setlength{\unitlength}{0.5cm} \begin{figure} \caption{The morphisms $\Sigma_{1,0}$, $\Sigma_{0,1}$, and $\Sigma_{1,-1}$.} \label{f:sigma} \end{figure} In the figure, the black rectangles are boundary components of $F$ or $\Sigma_{i,j}F$, and the outer boundary component is always $\partial_1F$ with the marked point indicated. On the figure of $\Sigma_{1,-1}F$ the grey ``tube'' is a cylinder glued onto $\partial_2F$. Now we will see how $\Sigma_{i,j}$ can be made into functors. First we define the subcategory $\mathfrak{C}(2)$ of $\mathfrak{C}$ to be the category with objects $\coprod_{r\ge 2}\textrm{Ob}\,\mathfrak{C}_{g,r}$ and whose morphisms $\varphi:F\longrightarrow S$ must restrict to an orientation-preserving diffeomorphism $\varphi:\partial_2 F\longrightarrow\partial_2 S$. Note that $\Sigma_{1,0}$ and $\Sigma_{0,1}$ are morphisms in this category. $\Sigma_{1,0}$ and $\Sigma_{0,1}$ are functors from $\mathfrak{C}$ to itself, and $\Sigma_{1,-1}$ is a functor from $\mathfrak{C}(2)$ to $\mathfrak{C}$ in the following way: Given a morphism $\varphi:F\longrightarrow S$ we must specify the morphism $\Sigma_{i,j}(\varphi)$, and this is done on the following diagram (drawn in the case of $\Sigma_{1,0}$). Here, the grey line shows how $\Sigma_{1,0}F$ is embedded in $\Sigma_{1,0}S$ by $\Sigma_{1,0}(\varphi)$. Notice how the arc $\sigma$ determines the embedding. \setlength{\unitlength}{0.3cm} \begin{figure} \caption{The functor $\Sigma_{1,0}$.} \label{f:functor} \end{figure} Similar diagrams can be drawn for $\Sigma_{0,1}$ and $\Sigma_{1,-1}$. In the latter case $\Sigma_{1,-1}(\varphi)$ exists because a morphism $\varphi:F\longrightarrow S$ in $\mathfrak{C}(2)$ restricts to a diffeomorphism $\partial_2F\longrightarrow \partial_2 S$, so that $\Sigma_{1,-1}F$ can be embedded in $\Sigma_{1,-1}S$ just as on Figure \ref{f:functor}. \subsection{Coefficient systems} We now define the coefficient systems we are interested in. We say that an abelian group $G$ is \emph{without infinite division} if the following holds for all $g\in G$: If $n\mid g$ for all integers $n\ge 1$, then $g=0$. By $n\mid g$ we mean $g=nh$ for some $h\in G$. Note that finitely generated abelian groups are without infinite division. \begin{defn}A coefficient system is a functor from $\mathfrak{C}$ to $\textrm{Ab}_{\textrm{wid}}$, the category of abelian groups without infinite division. \end{defn} We say that a constant coefficient system has degree 0 and make the general \begin{defn}\label{d:coef}\cite{Ivanov1} A coefficient system $V$ has degree $\le k$ if the map $V(F){\longrightarrow}V(\Sigma_{i,j}F)$ is split injective for $(i,j)\in\set{(1,0),(0,1), (1,-1)}$, and the cokernel $\Delta_{i,j}V$ is a coefficient system of degree $\le k-1$ for $(i,j)\in\set{(1,0),(0,1)}$. The degree of $V$ is the smallest such $k$. \end{defn} \begin{ex} \begin{itemize} \item[$(i)$] $V(F)=H_1(F,\partial F)$ is a coefficient system of degree $1$. \item[$(ii)$] $V^*_k(F)=H_k(\textup{Map}(F/\partial F,X))$. This is the coefficient system used in \cite{CM}.
It has degree $\le \round{\frac{k}{d}}$ if $X$ is $d$-connected, which will be proved in Theorem \ref{t:stabil1}. \end{itemize} \end{ex} We write $\Sigma_{i,j}V$ for the functor $F\rightsquigarrow V(\Sigma_{i,j}F)$, where $(i,j)\in\set{(1,0),(0,1)}$. \begin{lem}[Ivanov]Let $V$ be a coefficient system of degree $\le k$. Then $\Sigma_{1,0}V$ and $\Sigma_{0,1}V$ are coefficient systems of degree $\le k$. \end{lem} \begin{proof}See \cite{Ivanov1} for $\Sigma_{1,0}V$. The case $\Sigma_{0,1}V$ can be handled similarly. \end{proof} \subsection{The inductive assumption} Below I will use the following notational conventions: $F$ denotes a surface in $\mathfrak{C}$, and unless otherwise specified, $g$ is the genus of $F$. $\Sigma_{l,m}$ refers to any of $\Sigma_{1,0}$, $\Sigma_{0,1}$, $\Sigma_{1,-1}$. \begin{defn}\label{d:Phi} Given a morphism $\psi:F\longrightarrow S$, $\Phi$ will denote a finite composition of $\Sigma_{0,1}$ and $\Sigma_{1,-1}$ such that $\Phi(\psi)$ is defined, i.e. makes the following diagram commutative: \begin{equation*} \xymatrix{ F\ar[r]^{\Phi}\ar[d]^{\psi}& \Phi(F)\ar@{-->}[d]^{\Phi(\psi)} \\ S\ar[r]^{\Phi}& \Phi(S) } \end{equation*} By a finite composition we mean $\Phi=\Sigma_{i_1,j_1}\circ \cdots\circ \Sigma_{i_s,j_s}$ for some $s\ge0$, where $(i_k,j_k)\in\set{(0,1), (1,-1)}$ for each $k=1,\ldots,s$. We say that such a $\Phi$ is \emph{compatible} with $\psi:F\longrightarrow S$. \end{defn} To prove our main stability result for twisted coefficients, we will study certain relative homology groups: \begin{defn}\label{d:Rel}Let $\psi:F\longrightarrow S$ be a morphism of surfaces, and let $\Phi$ be compatible. Let $V$ be a coefficient system. Then we define \begin{equation*} \textup{Rel}_n^{V,\Phi}(S,F) = H_n(\Gamma(S),\Gamma(F);V(\Phi(S)),V(\Phi(F))). \end{equation*} If $\Phi=\textup{id}$, we write $\textup{Rel}_n^{V}(S,F)$ for $\textup{Rel}_n^{V,\textup{id}}(S,F)$. \end{defn} \begin{thm}[Ivanov, Cohen-Madsen]\label{t:IvanovCoMa}For sufficiently large $g$: \begin{itemize} \item[$(i)$]$\textup{Rel}_q^{V}(\Sigma_{1,0}F,F)=0.$ \item[$(ii)$]$\textup{Rel}_q^{V}(\Sigma_{0,1}F,F)=0.$ \item[$(iii)$]$\textup{Rel}_q^{V}(\Sigma_{1,-1}F,F)=0.$ \end{itemize} \end{thm} \begin{proof} For $(i)$, see \cite{Ivanov1}. For $(ii)$, see \cite{CM}. Their proof only requires that the groups $V(\cdot)$ are without infinite division. To prove $(iii)$, we use the following long exact sequence: \begin{eqnarray*} H_q(F,V(F))&\longrightarrow& H_q(\Sigma_{1,-1}F,V(\Sigma_{1,-1} F))\longrightarrow\textup{Rel}_q^{V}(\Sigma_{1,-1}F,F)\longrightarrow \\ H_{q-1}(F,V(F))&\longrightarrow& H_{q-1}(\Sigma_{1,-1}F,V(\Sigma_{1,-1}F)) \end{eqnarray*} Thus to see that $\textup{Rel}_q^{V}(\Sigma_{1,-1}F,F)=0$ all we have to do is to see that the first map is surjective and that the last map is injective. Both of these maps are induced by $\Sigma_{1,-1}$, so they fit into the following diagram, for $k \in \set{q, q-1}$: \begin{equation*}\xymatrix{H_k(F,V(F))\ar[r]^-{\Sigma_{1,-1}} & H_k(\Sigma_{1,-1}F,V(\Sigma_{1,-1}F))\\ H_k(S,V(S))\ar[u]^{\Sigma_{0,1}}\ar[ur]_{\Sigma_{1,0}} & } \end{equation*} where $S$ is a surface with $\Sigma_{0,1}S=F$. Now by $(i)$ and $(ii)$, if $g$ is sufficiently large, both the diagonal and the vertical map are isomorphisms, so $\Sigma_{1,-1}$ is also an isomorphism. \end{proof} Define $\varepsilon_{l,m}$ by \begin{equation*} \varepsilon_{l,m}=\left\{ \begin{array}{ll} 1, & \hbox{if $(l,m)=(1,-1)$;} \\ 0, & \hbox{if $(l,m)=(1,0)$ or $(0,1)$.} \end{array} \right.
\end{equation*} \begin{inda}The inductive assumption $I_{k,n}$ is the following: For any coefficient system $W$ of degree $k_W$, any surface $F$ of genus $g$, and any $\Phi$ compatible with $\Sigma_{l,m}:F\longrightarrow\Sigma_{l,m}F$, we have \begin{equation*} \textup{Rel}^{W,\Phi}_q(\Sigma_{l,m}F, F) = 0\quad\text{for}\quad 2g\ge 3q+k_W-\varepsilon_{l,m}, \end{equation*} if either $k_W<k$, or $k_W=k$ and $q<n$. \end{inda} In the rest of this section I am going to assume $I_{k,n}$. Note that $I_{k,m}$ for all $m\in \mathbb N$ is equivalent to $I_{k+1,0}$. Thus the goal is to prove $I_{k,n+1}$. Let $V$ be a given coefficient system of degree $k$. \begin{lem}[Ivanov]\label{l:Ivanov}Let $F$ be a surface of genus $g$. If $2g\ge 3q+k-1-\varepsilon_{l,m}$ then for $(i,j)\in \set{(1,0),(0,1)}$ \begin{equation*} \textup{Rel}_q^{V,\Phi}(\Sigma_{l,m}F,F)\longrightarrow \textup{Rel}_q^{V,\Sigma_{i,j}\Phi}(\Sigma_{l,m}F,F) \end{equation*} is surjective. \end{lem} \begin{proof}Since $\textup{Rel}_q^{V,\Sigma_{i,j}\Phi}(\Sigma_{l,m}F,F)=\textup{Rel}_q^{\Sigma_{i,j}V,\Phi}(\Sigma_{l,m}F,F)$, we have the following long exact sequence: \begin{equation*} \textup{Rel}_q^{V,\Phi}(\Sigma_{l,m}F,F)\longrightarrow \textup{Rel}_q^{V,\Sigma_{i,j}\Phi}(\Sigma_{l,m}F,F)\longrightarrow \textup{Rel}_q^{\Delta_{i,j}V,\Phi}(\Sigma_{l,m}F,F) \end{equation*} Since $\Delta_{i,j}V$ is a coefficient system of degree $\le k-1$, the assumption $I_{k,n}$ implies that $\textup{Rel}_q^{\Delta_{i,j}V,\Phi}(\Sigma_{l,m} F,F)=0$, and the result follows. \end{proof} \begin{thm}\label{t:vis_indu} Assume that $h$ satisfies $2h\ge 3n+k-1-\varepsilon_{l,m}$ and that the maps below are injective for all surfaces $F$ of genus $g\ge h$ and $\Phi$ compatible with $\Sigma_{l,m}:F\longrightarrow \Sigma_{l,m}F$, \begin{eqnarray*} \textup{Rel}_n^{V,\Phi\Sigma_{1,-1}}(\Sigma_{l,m} F,F) &\longrightarrow& \textup{Rel}^{V,\Phi}_n(\Sigma_{l,m}\Sigma_{1,-1}F,\Sigma_{1,-1}F), \\ \textup{Rel}_n^{\Sigma_{0,1}V}(\Sigma_{l,m} F,F) &\longrightarrow& \textup{Rel}^{V}_n(\Sigma_{l,m}\Sigma_{0,1}F,\Sigma_{0,1}F). \end{eqnarray*} Then for any compatible $\Phi$, $\textup{Rel}_n^{V,\Phi}(\Sigma_{l,m} F,F)=0$ for $g\ge h$. \end{thm} \begin{proof}Assume $2g\ge 3n+k-1-\varepsilon_{l,m}$. Write $\Phi=\Sigma_{i_1,j_1}\circ \cdots\circ \Sigma_{i_s,j_s}$, where $(i_k,j_k)\in\set{(1,-1),(0,1)}$. Observe that we can write $\Phi= \Phi'\circ (\Sigma_{1,-1})^d$ for some $d$, where $\Phi'=\Sigma_{\lambda_1,\mu_1}\circ \cdots\circ \Sigma_{\lambda_t,\mu_t}$ with $(\lambda_k,\mu_k)\in\set{(1,0),(0,1)}$ (repeatedly replace each composition $\Sigma_{1,-1}\circ\Sigma_{0,1}$ by $\Sigma_{1,0}$ to move all copies of $\Sigma_{1,-1}$ to the right). Then by the first assumption in the theorem, we get by induction in $d$ that \begin{equation*} \textup{Rel}_n^{V,\Phi}(\Sigma_{l,m} F,F) \longrightarrow \textup{Rel}^{V,\Phi'}_n(\Sigma_{l,m}(\Sigma_{1,-1})^dF,(\Sigma_{1,-1})^dF) \end{equation*} is injective. Thus it suffices to show $\textup{Rel}_n^{V,\Phi'}(\Sigma_{l,m}(\Sigma_{1,-1})^dF,(\Sigma_{1,-1})^dF)=0$. Since $\textup{genus}((\Sigma_{1,-1})^dF)\ge g\ge h$, it is certainly enough to show $\textup{Rel}_n^{V,\Phi'}(\Sigma_{l,m} F,F)=0$, where $\Phi'$ is a finite composition of $\Sigma_{1,0}$ and $\Sigma_{0,1}$. By Lemma \ref{l:Ivanov}, we get inductively that \begin{equation*} \textup{Rel}_n^V(\Sigma_{l,m}F,F)\longrightarrow \textup{Rel}_n^{V,\Phi'}(\Sigma_{l,m}F,F) \end{equation*} is surjective, so it suffices to show that $\textup{Rel}_n^V(\Sigma_{l,m}F,F)=0$.
Now by the second assumption in the theorem, we know \begin{equation*} \textup{Rel}_n^{\Sigma_{0,1}V}(\Sigma_{l,m} F,F) \longrightarrow \textup{Rel}^{V}_n(\Sigma_{l,m}\Sigma_{0,1}F,\Sigma_{0,1}F) \end{equation*} is injective. Since $V$ is a coefficient system of degree $k$, $V(F)\longrightarrow V(\Sigma_{0,1}F)$ and $V(F)\longrightarrow V(\Sigma_{1,-1}F)$ are split injective, so the composition \begin{eqnarray*} \textup{Rel}_n^{V}(\Sigma_{l,m} F,F) \longrightarrow \textup{Rel}_n^{\Sigma_{0,1}V}(\Sigma_{l,m} F,F) \longrightarrow \textup{Rel}^{V}_n(\Sigma_{l,m}\Sigma_{0,1}F,\Sigma_{0,1}F)\\ \longrightarrow \textup{Rel}^{\Sigma_{1,-1}V}_n(\Sigma_{l,m}\Sigma_{0,1}F,\Sigma_{0,1}F) \longrightarrow \textup{Rel}^{V}_n(\Sigma_{l,m}\Sigma_{1,0}F,\Sigma_{1,0}F) \end{eqnarray*} is injective: the second and the last maps are the maps in the assumption, and the first and the third are induced by the split injections $V\longrightarrow\Sigma_{0,1}V$ and $V\longrightarrow\Sigma_{1,-1}V$ of coefficient systems, so all four maps are injective. Iterating this, we get an injective map \begin{equation*} \textup{Rel}_n^{V}(\Sigma_{l,m} F,F) \longrightarrow \textup{Rel}^V_n(\Sigma_{l,m}(\Sigma_{1,0})^dF,(\Sigma_{1,0})^dF) \end{equation*} for any $d\in \mathbb N$. But $\textup{genus}((\Sigma_{1,0})^dF)=g+d$, so by Theorem \ref{t:IvanovCoMa}, $\textup{Rel}_n^{V}(\Sigma_{l,m} F,F)$ injects into zero. This proves $\textup{Rel}_n^{V,\Phi}(\Sigma_{l,m} F,F)=0$. \end{proof} \subsection{The main theorem for twisted coefficients} In the proof of stability for relative homology groups, we will use the relative version of the spectral sequence, cf. Theorem \ref{s:ss}, $E^1_{p,q}=E^1_{p,q}(\Sigma_{i,j}F;2-i)$ associated with the action of $\Gamma(\Sigma_{i,j}F)$ on the arc complex $C_*(\Sigma_{i,j}F;2-i)$ and the action of $\Gamma(\Sigma_{l,m}\Sigma_{i,j}F)$ on the arc complex $C_*(\Sigma_{l,m}\Sigma_{i,j}F;2-i)$. Let $b_0,b_1$ be the points in the definition of $C_*(\Sigma_{i,j}F;2-i)$, and let $\tilde{b}_0, \tilde{b}_1$ be the corresponding points for $C_*(\Sigma_{l,m}\Sigma_{i,j}F;2-i)$. We demand that $b_0$, $\tilde{b}_0$ lie in the 1st boundary component, but are different from the marked point. To define the spectral sequence, $\Sigma_{l,m}$ must induce a map \begin{equation}\label{e:udvidarc} \Sigma_{l,m}:C_*(\Sigma_{i,j}F;2-i) \longrightarrow C_*(\Sigma_{l,m}\Sigma_{i,j}F;2-i), \end{equation} which we now define: If $i=0$, $b_0$ and $b_1$ lie in different boundary components, and the map is given on $\alpha\in\Delta_k(\Sigma_{i,j}F)$ by choosing a simple path $\gamma$ from $\tilde{b}_0\in \Sigma_{l,m}\Sigma_{i,j}F$ to $b_0\in \Sigma_{i,j}F$ inside $\Sigma_{l,m}\Sigma_{i,j}F\setminus\Sigma_{i,j}F$. Then the arcs of $\alpha$ are extended by parallel copies of $\gamma$ that all start in $\tilde{b}_0$. Note that in this case $\tilde{b}_1=b_1$, so no extension is necessary here. If $i=1$, $b_0$ and $b_1$ lie on the same boundary component, and we choose disjoint paths for them to the new marked boundary component, and extend as for $i=0$.
Now the spectral sequence (typically) has $E^1$ page: \begin{eqnarray}\label{e:E^1_p,q} E^1_{p,q}&=&\bigoplus_{\sigma\in\overline{\Si}_p}E^1_{p,q}(\sigma)\nonumber\\ E^1_{p,q}(\sigma) &=&H_q(\Gamma(\Sigma_{i,j}\Sigma_{l,m}F)_{\Sigma_{l,m}T(\sigma)},\Gamma(\Sigma_{i,j}F)_{T(\sigma)};\nonumber\\ && \quad\:\: V(\Phi\Sigma_{i,j}\Sigma_{l,m}\Sigma_{s,t}(F)),V(\Phi\Sigma_{i,j}\Sigma_{s,t}(F)))\nonumber\\ &=&\textup{Rel}_q^{V,\Phi_\sigma}((\Sigma_{i,j}\Sigma_{l,m}F)_{\Sigma_{l,m}T(\sigma)},(\Sigma_{i,j}F)_{T(\sigma)}) \end{eqnarray} Here, $\Phi_\sigma: (\Sigma_{i,j}F)_{T(\sigma)}\hookrightarrow \Sigma_{i,j}F$ is the inclusion, which is a finite composition of $\Sigma_{0,1}$ and $\Sigma_{1,-1}$. Furthermore, $\Gamma_\sigma$ denotes the stabilizer of the $(p-1)$-simplex $\sigma$ in $\Gamma$. The direct sum is over the orbits of $(p-1)$-simplices $\sigma$ in $C_*(\Sigma_{i,j}F;2-i)$ whose images under $\Sigma_{l,m}$ are also $(p-1)$-simplices in $C_*(\Sigma_{l,m}\Sigma_{i,j}F;2-i)$. In most cases, $\Sigma_{l,m}$ induces a bijection on the representatives of orbits of $(p-1)$-simplices. Also recall that the set of orbits is in $1-1$ correspondence with a subset $\overline{\Sigma}_p$ of the permutation group $\Sigma_p$. Lemma \ref{l:perm} characterizes $\overline{\Sigma}_p$. As a general remark, note that if a permutation is represented in $C_*(F;2-i)$, then it is also represented in $C_*(\Sigma_{l,m}F;2-i)$, since $\text{genus}(\Sigma_{l,m}F) \ge \text{genus}(F)$. So we will only check the condition for $C_*(F;2-i)$. In certain cases we will either not have $\Sigma_{l,m}$ inducing a bijection on the representatives of orbits of $(p-1)$-simplices, or they will not include the permutation used in the standard proof. All such cases will be found in Lemma \ref{l:Si_p} below and taken care of in the \emph{Induction start} section at the end of the proof. The first differential, $d^1_{p,q}:E^1_{p,q}\longrightarrow E^1_{p-1,q}$, is described in section \ref{sc:diff}. The diagrams \begin{equation*} \xymatrix{ \Delta_{p}(F;i)\ar[r]^{\partial_j}\ar[d] & \Delta_{p-1}(F;i)\ar[d] &\\ \overline{\Si}_{p+1}\ar[r]^{\partial_j} & \overline{\Si}_p & j=0,\ldots,p } \end{equation*} commute, where $\partial_j$ omits entry $j$ as in Def. \ref{d:arc} and the vertical arrows divide out the $\Gamma$ action and compose with $P$. Thus for each $\sigma\in\overline{\Si}_{p+1}$, there is $g_j\in\Gamma$ such that \begin{equation}\label{e:Ib18} g_j\cdot\partial_j T(\sigma) = T(\partial_j\sigma), \end{equation} and conjugation by $g_j$ induces an injection $c_{g_j}:\Gamma_{T(\sigma)}\hookrightarrow \Gamma_{T(\partial_j\sigma)}$. The induced map on homology is denoted $\partial_j$ again, i.e. \begin{eqnarray}\label{e:Ib19} \partial_j:H_q(\Gamma(\Sigma_{i,j}\Sigma_{l,m}F)_{\Sigma_{l,m}T(\sigma)}, \Gamma(\Sigma_{i,j}F)_{T(\sigma)};\textbf{V})\hookrightarrow \nonumber\\ H_q(\Gamma(\Sigma_{i,j}\Sigma_{l,m}F)_{\Sigma_{l,m}\partial_jT(\sigma)}, \Gamma(\Sigma_{i,j}F)_{\partial_jT(\sigma)};\textbf{V}) \stackrel{(c_{g_j})_*}{\longrightarrow} \\ H_q(\Gamma(\Sigma_{i,j}\Sigma_{l,m}F)_{\Sigma_{l,m}T\partial_j(\sigma)}, \Gamma(\Sigma_{i,j}F)_{T\partial_j(\sigma)};\textbf{V})\nonumber \end{eqnarray} Note that $(c_{g_j})_*$ does not depend on the choice of $g_j$ in \eqref{e:Ib18}: Another choice $g_j'$ gives $c_{g_j'}= c_{g_j'g_j^{-1}}c_{g_j}$, and $g_j'g_j^{-1}\in\Gamma_{T(\partial_j\sigma)}$ so $c_{g_j'g_j^{-1}}$ induces the identity on the homology. Then \begin{equation}\label{e:Ib20} d^1=\sum_{j=0}^{p-1}(-1)^j\partial_j.
\end{equation} \begin{lem}\label{l:Si_p}Let $n\ge 1$. The subset $\overline{\Si}_p\subseteq \Sigma_p$, which is in $1-1$ correspondence with a set of representatives of the orbits of $\Delta_{p-1}(\Sigma_{i,j}F;2-i)$, has the following properties: \begin{description} \item[Surjectivity of $\Sigma_{0,1}$:]Assume $2g\ge 3n+k-2-\varepsilon_{l,m}$. Then \\$\overline{\Si}_{p}=\Sigma_p$ for $2\le p\le n+1$ and for $p=n+2= 3$, unless: \begin{itemize} \item $(l,m)\neq (1,-1),\quad n=1,\quad g=1,\quad k=0,1$, \quad or \item $(l,m)= (1,-1),\quad n=1,\quad g=0,\quad k=0$, \quad or \item $(l,m)= (1,-1),\quad n=1,\quad g=1,\quad k=0,1,2$. \end{itemize} \item[Surjectivity of $\Sigma_{1,-1}$:]Assume $2g\ge 3n+k-3-\varepsilon_{l,m}$. Then \\$\overline{\Si}_{p}=\Sigma_p$ for $2\le p\le n+1$, and $\sigma\in \overline{\Si}_p$ if $S(\sigma)\ge 1$ for $p=n+2\le 4$, unless: \begin{itemize} \item $(l,m)\neq (1,-1),\quad n=1,\quad g=0,\quad k=0$,\quad or \item $(l,m)= (1,-1),\quad n=1,\quad g=0,\quad k=0,1$,\quad or \item $(l,m)= (1,-1),\quad n=2,\quad g=1,\quad k=0$. \end{itemize} \item[Injectivity of $\Sigma_{1,-1}$:]Assume $2g\ge3n+k-\varepsilon_{l,m}$. Then\\ $\overline{\Si}_{p}=\Sigma_p$ for $2\le p\le n+2$, and $\sigma\in \overline{\Si}_p$ if $S(\sigma)\ge 1$ for $p=n+3=4$, unless: \begin{itemize} \item $(l,m)= (1,-1),\quad n=1,\quad g=1,\quad k=0$. \end{itemize} \end{description} \end{lem} \begin{proof}We only prove the first of the three cases, as the other two are completely analogous. So assume $2g\ge 3n+k-2-\varepsilon_{l,m}$, and let $\sigma\in\Sigma_p$ be a given permutation of genus $s$. Let $2\le p\le n+1$. By Lemma \ref{l:perm}, $\sigma\in\overline{\Si}_p$ if and only if $s \ge p-1-g$. This inequality is certainly satisfied if $p-1-g\le 0$. The hardest case is $p=n+1$, so we must show $n-g\le 0$. By assumption, \begin{equation*} 2(n-g)\le 2n-(3n+k-2-\varepsilon_{l,m})=-n-k+2+\varepsilon_{l,m}\stackrel{?}{\le} 0. \end{equation*} For $n\ge 3$ this holds. If $n=2$, the assumption $2g\ge 3n+k-2-\varepsilon_{l,m}$ forces $g\ge 2$, so $n-g\le 0$. For $n=1$ and $(l,m)\neq (1,-1)$, we have $\varepsilon_{l,m}=0$, so $g\ge 1$, which means $n-g\le 0$. Lastly, for $n=1$ and $(l,m)= (1,-1)$, we have $\varepsilon_{l,m}=1$, so we get one exception, $g=k=0$. Now let $p=n+2=3$, so $n=1$. The requirement in Lemma \ref{l:perm} is $p-1-g\le 0$, i.e. $g\ge 2$. By assumption $2g\ge 3n+k-2-\varepsilon_{l,m}$, so if $g=1$, we have $k-\varepsilon_{l,m}-1\le 0$. Now for $(l,m)\ne (1,-1)$, the only exceptions are $k=0,1$, and for $(l,m)=(1,-1)$, the only exceptions are $k=0,1,2$. If $g=0$, we have $k-\varepsilon_{l,m}+1\le 0$, so the only exception is $(l,m)=(1,-1)$ and $k=0$. This finishes the proof. \end{proof} \begin{prop}\label{p:8-tal2}Let $\alpha$ denote a simplex either in $\Delta_1(F;1)$ with $P(\alpha)=[1\,0]$, or in $\Delta_2(F;2)$ with $P(\alpha)=[2\,1\,0]$. Let $g$ be the genus of $F_\alpha$, and let $\Phi$ be compatible with $\Sigma_{l,m}:F\longrightarrow \Sigma_{l,m}F$. Then if $2g\ge 3n+k-1-\varepsilon_{l,m}$, the maps $\partial_0$ and $\partial_{1}$ are equal as maps from \begin{equation*}\textup{Rel}_n^{V,\Phi_\alpha}((\Sigma_{l,m}F)_{\Sigma_{l,m}\alpha},F_\alpha). \end{equation*} \end{prop} \begin{proof}Write $\sigma=P(\alpha)$. First note that $\partial_0$ and $\partial_{1}$ have the same target, since $\partial_0(\sigma)=\partial_{1}(\sigma)=:\tau$ by assumption. We can assume $T(\sigma)=\alpha$ and $T(\tau)=\partial_0\alpha$.
Then we can choose the element $g=g_{1}$ from \eqref{e:Ib18}, which must satisfy $g\cdot\partial_{1} \alpha = \partial_0\alpha$, to be as in Prop. \ref{p:8-tal}. Then $g$ commutes with the stabilizers $\Gamma(\Sigma_{l,m}F)_{\alpha_0\cup\alpha_1}$, $\Gamma(F)_{\alpha_0\cup\alpha_1}$ and thus also with $\Gamma(\Sigma_{l,m}F)_{\alpha}$ and $\Gamma(F)_{\alpha}$. We now extend the arcs of $\alpha$ to arcs in $\Phi F$ as follows: If $\alpha\in\Delta_1(F;1)$ we use \eqref{e:udvidarc} to obtain $\tilde\alpha=\Phi(\alpha)\in \Delta_1(\Phi F;1)$. If $\alpha\in\Delta_2(F;2)$, we extend, if possible, the 1-simplex $\alpha_0\cup\alpha_1$ to a 1-simplex $\tilde\alpha\in \Delta_1(\Phi F;1)$, i.e. the extended arcs start and end on the same boundary component in $\Phi F$. If this is not possible, we extend $\alpha$ to $\tilde\alpha\in\Delta_2(\Phi F;2)$. These extensions must satisfy the same requirements as \eqref{e:udvidarc} does. Then we make the same extensions for $\beta:=\Sigma_{l,m}\alpha$ to $\tilde\beta$ in $\Phi\Sigma_{l,m}F$. Now the conjugation $(c_g)_*$ acts as the identity on \begin{equation*} H_n(\Gamma(\Sigma_{l,m}F)_{\beta},\Gamma(F)_{\alpha}; V((\Phi\Sigma_{l,m}F)_{\tilde{\beta}}),V((\Phi F)_{\tilde{\alpha}})). \end{equation*} If we are in the case $\tilde\alpha\in\Delta_1(\Phi F;1)$, then the inclusion map on the coefficients, \begin{eqnarray}\label{e:i_*} i_*&: & H_n(\Gamma(\Sigma_{l,m}F)_{\beta},\Gamma(F)_{\alpha}; V((\Phi\Sigma_{l,m}F)_{\tilde{\beta}}),V((\Phi F)_{\tilde{\alpha}}))\longrightarrow \\ &&H_n(\Gamma(\Sigma_{l,m}F)_{\beta},\Gamma(F)_{\alpha}; V(\Phi\Sigma_{l,m}F),V(\Phi F))= \textup{Rel}_{n}^{V,\Phi_\alpha}((\Sigma_{l,m}F)_{\Sigma_{l,m}\alpha},F_\alpha)\nonumber \end{eqnarray} equals $\Sigma_{1,0}$ on the coefficient systems, and by Lemma \ref{l:Ivanov} it is surjective since $2g\ge 3n+k-1-\varepsilon_{l,m}$ by assumption. Now as $i_*$ is surjective and $(c_g)_* \circ i_* =i_*$ we see that $(c_g)_*$ is the identity on $\textup{Rel}_{n}^{V,\Phi_\alpha}((\Sigma_{l,m}F)_{\Sigma_{l,m}\alpha},F_\alpha)$, and thus $\partial_{1}=(c_g)_*\partial_0 =\partial_0$. For $\tilde\alpha\in\Delta_2(\Phi F;2)$ we do the same, except that we use $\alpha$ instead of only $\alpha_0\cup \alpha_1$. In this case $i_*$ in \eqref{e:i_*} is going to be $\Sigma_{1,0}\Sigma_{0,1}$ on the coefficient systems, which again by Lemma \ref{l:Ivanov} is surjective. \end{proof} By Theorem \ref{t:vis_indu}, to prove $I_{k,n+1}$ it is enough to prove: \begin{thm}\label{t:maintwist} The map induced by $\Sigma_{i,j}$, \begin{equation*} \textup{Rel}_n^{V,\Phi\Sigma_{i,j}}(\Sigma_{l,m} F,F) \longrightarrow \textup{Rel}^{V,\Phi}_n(\Sigma_{i,j}\Sigma_{l,m}F,\Sigma_{i,j}F) \end{equation*} satisfies: \begin{itemize} \item[$(i)$]For $\Sigma_{i,j}=\Sigma_{0,1}$, it is surjective for $2g\ge 3n+k-2-\varepsilon_{l,m}$, and if $\Phi=\textup{id}$ it is an isomorphism for $2g\ge3n+k-1-\varepsilon_{l,m}$. For $k=0$ it is always injective. \item[$(ii)$]For $\Sigma_{i,j}=\Sigma_{1,-1}$, it is surjective for $2g\ge 3n+k-3-\varepsilon_{l,m}$, and an isomorphism for $2g\ge3n+k-\varepsilon_{l,m}$. \end{itemize} \end{thm} \begin{proof}We prove the theorem by induction in the homology degree $n$. Assume $n\ge 1$. The induction start $n=0$ will be handled separately below, along with all exceptional cases from Lemma \ref{l:Si_p}. This means that in the main proof, any permutation is represented by an arc simplex (in some special cases only if its genus is $\ge 1$).
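For orientation, the hypotheses under which the four steps below are carried out can be summarized as follows (this is merely a restatement of the bounds appearing in the four subsections): \begin{equation*} \begin{array}{ll} \Sigma_{0,1}\ \text{surjective:} & 2g\ge 3n+k-2-\varepsilon_{l,m},\\ \Sigma_{0,1}\ \text{injective:} & 2g\ge 3n+k-1-\varepsilon_{l,m},\\ \Sigma_{1,-1}\ \text{surjective:} & 2g\ge 3n+k-3-\varepsilon_{l,m},\\ \Sigma_{1,-1}\ \text{injective:} & 2g\ge 3n+k+2-\varepsilon_{l,m}. \end{array} \end{equation*}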
\subsubsection*{Surjectivity for $\Sigma_{0,1}$:}Assume $2g\ge 3n+k-2-\varepsilon_{l,m}$. We use the spectral sequence $E^1_{p,q}=E^1_{p,q}(\Sigma_{0,1}F;2)$, and claim that $E^1_{p,q}=0$ for $p+q=n+1$ with $p\ge 3$. Note that $\Gamma(\Sigma_{0,1}F)_\sigma=\Gamma(\Sigma_{0,1}F_\sigma)$, and $\text{genus}(\Sigma_{0,1}F_\sigma)=g-p+1+S(\sigma)\ge g-p+1$. We will use the assumption $I_{k,n}$, and must show $2(g-p+1)\ge 3q+k-\varepsilon_{l,m}$ for $p\ge 3$. These inequalities follow from the one for $p=3$, which is $2(g-2)\ge 3(n-2)+k-\varepsilon_{l,m}$, and this holds by assumption. Now all we need is to show that $E^2_{2,n-1}=0$. We consider \begin{equation*} E^1_{2,n-1}= E^1_{2,n-1}([0\,1])\oplus E^1_{2,n-1}([1\,0]) \end{equation*} We wish to show that $d^1:E^1_{3,n-1}\longrightarrow E^1_{2,n-1}$ is surjective and thus $E^2_{2,n-1}=0$. We look at $E^1_{3,n-1}(\tau)$ indexed by the permutation $\tau=[2\,1\,0]$. We will show that $d^1$ restricted to $E^1_{3,n-1}(\tau)$ surjects onto $E^1_{2,n-1}([1\,0])$ without hitting $E^1_{2,n-1}([0\,1])$. Since $S(\tau)=1$, $\Sigma_{0,1}F_\tau$ is $F_{g-1,r}$, and thus by Proposition \ref{p:8-tal2}, $\partial_0=\partial_1$. We then see \begin{equation*} d^1=\partial_0-\partial_1+\partial_2= \partial_2 \end{equation*} and $\partial_2:E^1_{3,n-1}(\tau)\longrightarrow E^1_{2,n-1}([1\,0])$ equals $\Sigma_{0,1}$ and so is surjective by induction, since $2(g-1)\ge 3(n-1)+k-2-\varepsilon_{l,m}$. All that remains is to hit $E^1_{2,n-1}([0\,1])$ surjectively, regardless of $E^1_{2,n-1}([1\,0])$. Consider the following component of $d^1$: \begin{equation*} \partial_0: E^1_{3,n-1}([2\,0\,1])\longrightarrow E^1_{2,n-1}([0\,1]). \end{equation*} This is the map induced by $\Sigma_{1,-1}$. By induction this map is surjective, since $2(g-2)\ge 3(n-1)+k-3-\varepsilon_{l,m}$ by assumption. This proves that $E^2_{2,n-1}=0$. \subsubsection*{Injectivity for $\Sigma_{0,1}$:}Assume $2g\ge 3n+k-1-\varepsilon_{l,m}$. For this proof we take another approach. Consider the following composite map, \begin{eqnarray}\label{e:inj} \textup{Rel}_q^V(\Sigma_{l,m}F,F)&\longrightarrow& \textup{Rel}_q^{\Sigma_{0,1}V}(\Sigma_{l,m}F,F) \nonumber \\ &\stackrel{\Sigma_{0,1}}{\longrightarrow}& \textup{Rel}_q^{V}(\Sigma_{l,m}\Sigma_{0,1}F, \Sigma_{0,1}F)\nonumber\\ &\stackrel{p_*}{\longrightarrow}& \textup{Rel}_q^{V}(\Sigma_{0,-1}\Sigma_{l,m}\Sigma_{0,1}F, \Sigma_{0,-1}\Sigma_{0,1}F)\nonumber \\ &=&\textup{Rel}_q^{V}(\Sigma_{l,m}F, F) \end{eqnarray} Here $p:F_{g,r}\longrightarrow F_{g,r-1}$ is the map that glues a disk onto the unmarked boundary circle created by $\Sigma_{0,1}$. Since the composite map \eqref{e:inj} is induced by gluing a cylinder onto the marked boundary circle of $\Sigma_{l,m}F$ and $F$, it is an isomorphism. Now by Lemma \ref{l:Ivanov}, since $2g\ge 3n+k-1-\varepsilon_{l,m}$, the first map is surjective, so $\Sigma_{0,1}$ is forced to be injective. Note that with constant coefficients ($k=0$), the first map is the identity, so here $\Sigma_{0,1}$ is always injective. \subsubsection*{Surjectivity for $\Sigma_{1,-1}$:} Assume $2g\ge 3n+k-3-\varepsilon_{l,m}$. We use the spectral sequence $E^1_{p,q}=E^1_{p,q}(\Sigma_{1,-1}F;1)$. We show $E^1_{p,q}=0$ if $p+q=n+1$ and $p\ge 4$, using assumption $I_{k,n}$. We know $\Gamma(\Sigma_{1,-1}F)_\sigma=\Gamma((\Sigma_{1,-1}F)_\sigma)$, and $\text{genus}((\Sigma_{1,-1}F)_\sigma)=g-p+1+S(\sigma)\ge g-p+1$. So we must show $2(g-p+1)\ge 3q+k-\varepsilon_{l,m}$ for all $p+q=n+1$, $p\ge 4$.
This follows if we show it for $p=4$, which is easy: \begin{equation*} 2(g-3) =2g-6 \ge 3n +k-3-\varepsilon_{l,m}-6 = 3(n-3)+k-\varepsilon_{l,m}. \end{equation*} To show that the map $d^1:E^1_{1,n}\longrightarrow E^1_{0,n}$ is surjective, we thus only need to show that $E^2_{2,n-1}=0$ and $E^2_{3,n-2}=0$. Consider $E^1_{2,n-1}$: \begin{equation*} E^1_{2,n-1}=E^1_{2,n-1}([0\,1])\oplus E^1_{2,n-1}([1\,0]). \end{equation*} For $\sigma=[1\,0]$, since $S(\sigma)=1$, we have $\text{genus}((\Sigma_{1,-1}F)_\sigma)=g-p+1+S(\sigma)=g$. Thus by $I_{k,n}$, $E^1_{2,n-1}([1\,0])=0$, since $2g\ge 3n+k-3-\varepsilon_{l,m}=3(n-1)+k-\varepsilon_{l,m}$. Now consider the summand in $E^1_{3,n-1}$ indexed by $\tau=[2\,0\,1]$, which has genus 1. Then $(\Sigma_{1,-1}F)_\tau=F_{g-1,r}$, so $d^1$ on this summand is exactly the map induced by $\Sigma_{0,1}$ (since $d^1$ has 3 terms, only one of which hits $E^1_{2,n-1}([0\,1])$). To show this is surjective onto $E^1_{2,n-1}$, we use induction, and must check that $2(g-1)\ge 3(n-1)+k-2-\varepsilon_{l,m}$, which follows by assumption. So $d^1$ is surjective onto $E^1_{2,n-1}$, which implies that $E^2_{2,n-1}=0$. Consider $E^1_{3,n-2}$. As above, by $I_{k,n}$, all summands are zero, except for the one indexed by $\textup{id}=[0\,1\,2]$. Consider $E^1_{4,n-2}(\tau')$ indexed by $\tau'=[3\,0\,1\,2]$, which has genus $1$. Restricting $d^1$ to this summand, only one term hits $E^1_{3,n-2}([0\,1\,2])$. As above, one checks that this restriction of $d^1$ is exactly the map induced by $\Sigma_{0,1}$, so by induction it is surjective. \subsubsection*{Injectivity for $\Sigma_{1,-1}$:} Assume $2g\ge 3n+k+2-\varepsilon_{l,m}$. We use the same spectral sequence as in the surjectivity of $\Sigma_{1,-1}$. We claim $E^1_{p,q}=0$ if $p+q=n+2$ and $p\ge 4$. Again, $\Gamma(\Sigma_{1,-1}F)_\sigma=\Gamma(\Sigma_{1,-1}F_\sigma)$, and $\text{genus}(\Sigma_{1,-1}F_\sigma)=g-p+1+S(\sigma)\ge g-p+1$. So we must show $2(g-p+1)\ge 3q+k+2-\varepsilon_{l,m}$ for all $p+q=n+2$, $p\ge 4$, and this follows from $2g\ge 3n+k+2-\varepsilon_{l,m}$, as above. To show that the map $d^1:E^1_{1,n}\longrightarrow E^1_{0,n}$ is injective, we thus only need to show that $E^2_{3,n-1}=0$ and that $d^1:E^1_{2,n}\longrightarrow E^1_{1,n}$ is the zero map. That $E^2_{3,n-1}=0$ is proved precisely as for $E^2_{3,n-2}$ in the surjectivity for $\Sigma_{1,-1}$, so we omit it. To show $d^1:E^1_{2,n}\longrightarrow E^1_{1,n}$ is the zero map, note that $E^1_{2,n}$ has two summands, $E^1_{2,n}([0\,1])$ and $E^1_{2,n}([1\,0])$. We get that $d^1$ is zero on $E^1_{2,n}([1\,0])$, since $d^1=\partial_0-\partial_1=0$ by Proposition \ref{p:8-tal2}. Next we consider $d^1: E^1_{3,n}\longrightarrow E^1_{2,n}$. If we can show this is surjective onto $E^1_{2,n}([0\,1])$, we are done. Again we use the summand $E^1_{3,n}(\tau)$, where $\tau=[2\,0\,1]$. The restricted differential $d^1: E^1_{3,n}(\tau)\longrightarrow E^1_{2,n}([0\,1])$ is exactly the map induced by $\Sigma_{0,1}$, so it is surjective, since we have already proved the theorem for $\Sigma_{0,1}$. The relevant inequality is $2(g-1)\ge 3n+k-\varepsilon_{l,m}$, which holds by assumption. So $d^1:E^1_{2,n}\longrightarrow E^1_{1,n}$ is the zero map, and we have shown that $d^1:E^1_{1,n}\longrightarrow E^1_{0,n}$ is injective. \subsubsection*{Induction start and special cases:} Here we handle the inductive start $n=0$, along with the cases missing in the general argument above, namely the exceptions from Lemma \ref{l:Si_p}.
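For convenience, here is the list of cases to be treated; it is simply a restatement of the exceptions in Lemma \ref{l:Si_p}, organized by the paragraphs below: \begin{itemize} \item the induction start $n=0$; \item surjectivity of $\Sigma_{1,-1}$ for $n=1$, $g=0$, $k\le 1$; \item surjectivity of $\Sigma_{0,1}$ for $n=1$, $g=1$, $k\le 2$, together with the special case $(l,m)=(1,-1)$, $g=k=0$; \item injectivity of $\Sigma_{1,-1}$ for $n=1$, $(l,m)=(1,-1)$, $g=1$, $k=0$; \item surjectivity of $\Sigma_{1,-1}$ for $n=2$, $(l,m)=(1,-1)$, $g=1$, $k=0$. \end{itemize}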
\paragraph{The induction start $n=0$.}For $n=0$ and $k=0$, we always get $\textup{Rel}_0^{V,\Phi}(\Sigma_{l,m} F,F)=0$ since $H_0(F,V(F))\longrightarrow H_0(\Sigma_{l,m} F,V(\Sigma_{l,m} F))$ is an isomorphism when the coefficients are constant. So the theorem holds in this case. Now let $n=0$ and let $k$ be arbitrary. By considering the spectral sequence, see Figure \ref{b:spectral0}, we see that $\Sigma_{i,j}$ is automatically surjective, since the spectral sequence converges to zero in total degree $0$. \begin{figure} \caption{The spectral sequence for $n=0$.} \label{b:spectral0} \end{figure} For the sake of the case $n=1$, note that the surjectivity argument for $\Sigma_{0,1}$ when $n=0$ also works for any $k$ when using the spectral sequence for \emph{absolute} homology for the action of $\Gamma(F_{0,r+1})$ on $C_*(F_{0,r+1};2)$. For $\Sigma_{0,1}$, the injectivity argument used above holds for all $n$. So we must show that $\Sigma_{1,-1}$ is injective. For $g\ge 1$, the argument from above works, since there are arc simplices representing all the permutations used above. The problem is thus $g=0$, which means $k=0,1$, but we will also show the result for $k=2$ since we will need it in the case $n=1$ below. As the complex we use, $C_*(F_{1,r-1};1)$, is connected, the spectral sequence converges to $0$ for $p+q\le 1$, so we can apply that spectral sequence. We must show that $d^1=d^1_{2,0}$ in Figure \ref{b:spectral0} is the zero map. We consider $(l,m)\in\{(1,0),(1,-1)\}$ and $(l,m)=(0,1)$ separately. For $\Sigma_{0,1}$, $E^1_{2,0}=E^1_{2,0}([1\,0])$, since the permutation $[0\,1]$ has genus $0$ and is by Lemma \ref{l:perm} neither represented in $C_*(F_{1,r-1};1)$ nor $C_*(\Sigma_{0,1}F_{1,r-1};1)$. Now the argument used to show injectivity of $\Sigma_{1,-1}$ in general works here, too. For $\Sigma_{1,0}$ or $\Sigma_{1,-1}$, $E^1_{2,0}=E^1_{2,0}([1\,0])\oplus \tilde{E}^1_{2,0}([0\,1])$ where $\tilde{E}^1_{2,0}([0\,1])$ is the \emph{absolute} homology group, \begin{equation*} \tilde{E}^1_{2,0}([0\,1])= H_0(\Gamma(\Sigma_{l,m}F_{1,r-1})_{T([0\,1])}; V(\Sigma_{l,m}F_{1,r-1})), \end{equation*} since $[0\,1]$ is represented in $C_*(\Sigma_{1,-1}F_{1,r-1};1)$ and $C_*(\Sigma_{1,0}F_{1,r-1};1)$, but not in $C_*(F_{1,r-1};1)$, see Theorem \ref{s:spectralinj}. For $E^1_{2,0}([1\,0])$, the general argument for injectivity of $\Sigma_{1,-1}$ shows that $d^1_{2,0}$ vanishes on $E^1_{2,0}([1\,0])$. That $d^1$ vanishes on $\tilde{E}^1_{2,0}([0\,1])$ will follow if we show that $\tilde{E}^1_{3,0}$ hits $\tilde{E}^1_{2,0}([0\,1])$ surjectively. But the $d^1$-component $\tilde{E}^1_{3,0}([2\,0\,1])\longrightarrow \tilde{E}^1_{2,0}([0\,1])$ is just $\Sigma_{0,1}$ in the absolute case for $n=0$, $g=0$ and $k\le 2$. This $d^1$-component is surjective onto $\tilde{E}^1_{2,0}([0\,1])$, by the remark on surjectivity for $n=0$. \paragraph{Surjectivity when $n=1$.} Now let $n=1$ and $k\le 2$. Consider the relative spectral sequence, as depicted in Figure \ref{b:spectral1}. If we show that the map $d^2_{2,0}:E^2_{2,0}\longrightarrow E^2_{0,1}$ is zero, we have shown surjectivity. We will show that $E^1_{2,0}=0$.
Recall by Theorem \ref{s:spectralinj}, $E^1_{2,0}=E^1_{2,0}([0\,1])\oplus E^1_{2,0}([1\,0])$, where \begin{equation}\label{e:EEEEE} E^1_{2,0}(\sigma)=\left\{ \begin{array}{ll} \textup{Rel}_0^{V,\Phi_\sigma}(\Gamma(F_{g+i+l,r+j+m})_{\Sigma_{l,m}\sigma},\Gamma(F_{g+i,r+j})_\sigma), & \hbox{if $\sigma\in\overline{\Si}_2^{l,m}\cap\overline{\Si}_2$;} \\ H_0(\Gamma(F_{g+i+l,r+j+m})_{\Sigma_{l,m}\sigma}; V(\Gamma(F_{g+i+l,r+j+m}))), & \hbox{if $\sigma\in \overline{\Si}_2^{l,m}\setminus\overline{\Si}_2$;} \\ 0, & \hbox{if $\sigma\notin \overline{\Si}_2^{l,m}$.} \end{array} \right. \end{equation} and $\overline{\Si}_2$, $\overline{\Si}^{l,m}_2$ are the subsets of $\Sigma_2$ in $1-1$ correspondence with the orbits of $\Delta_1(\Sigma_{i,j}F;2-i)$ and $\Delta_1(\Sigma_{l,m}\Sigma_{i,j}F;2-i)$, respectively. \begin{figure} \caption{The spectral sequence for $n=1$.} \label{b:spectral1} \end{figure} \paragraph{Surjectivity of $\Sigma_{1,-1}$ when $n=1$.}Assume $(l,m)=(0,1)$, $g=0$ and $k=0$. Then by Lemma \ref{l:perm} only $[1\,0]$ is represented as an arc simplex, and by \eqref{e:EEEEE} above, $E^1_{2,0}$ is a relative homology group of degree 0 with constant coefficients, so $E^1_{2,0}=0$. The remaining exceptions are $(l,m)\neq (0,1)$, $g=0$ and $k\le1$. By Lemma \ref{l:perm}, $[1\,0]$ is represented as an arc simplex in both $F_{1+l,r+m}$ and $F_{1,r-1}$, so $E^1_{2,0}([1\,0])=0$ by Theorem \ref{t:vis_indu}. Now $[0\,1]$ is only represented in $F_{1+l,r+m}$, so by \eqref{e:EEEEE}, $E^1_{2,0}([0\,1])$ is an absolute homology group. To kill it, consider $E^1_{3,0}([2\,0\,1])$, which is also an absolute homology group. The restricted differential $d^{1}:E^1_{3,0}([2\,0\,1]) \longrightarrow E^1_{2,0}([0\,1])$ equals $\Sigma_{0,1}$, so it is surjective by the case $n=0$, which as remarked also holds for absolute homology groups. \paragraph{Surjectivity of $\Sigma_{0,1}$ when $n=1$.}First assume $g=1$. The possible permutations $[0\,1]$ and $[1\,0]$ are by Lemma \ref{l:perm} represented as $1$-simplices in both arc complexes. Thus $E^1_{2,0}$ is a direct sum of two relative homology groups in degree 0 with coefficients of degree $k\le 2$. Then by the \emph{Induction start} $n=0$, $\Sigma_{0,1}$ and $\Sigma_{1,-1}$ are injective for $g\ge 0$, so by Theorem \ref{t:vis_indu}, $E^1_{2,0}=0$. For $(l,m)=(1,-1)$, we have the special case $g=k=0$. We will show $H_1(\Gamma_{1,r},\Gamma_{0,r+1})=0$ by showing that $\Sigma_{1,-1}:H_1(\Gamma_{0,r+1};\mathbb Z)\longrightarrow H_1(\Gamma_{1,r};\mathbb Z)$ is surjective; since $H_0(\Gamma_{0,r+1};\mathbb Z)\longrightarrow H_0(\Gamma_{1,r};\mathbb Z)$ is an isomorphism, the long exact sequence of the pair then gives $H_1(\Gamma_{1,r},\Gamma_{0,r+1})=0$. We use \cite{Harer3}, Lemma 1.1 and 1.2, which give sets of generators for $H_1(\Gamma_{0,r+1};\mathbb Z)$ and $H_1(\Gamma_{1,r};\mathbb Z)$, as follows. Let $\tau_i$ be the Dehn twist around the boundary component $\partial_i F_{1,r}$, for $i=1,\ldots,r$, and let $x$ be the Dehn twist on any non-separating simple closed curve $\gamma$ in $F_{1,r}$. Then $H_1(\Gamma_{1,r};\mathbb Z)$ is generated by $\tau_2,\ldots,\tau_{r},x$. We remark that Harer states this for $\mathbb Q$-coefficients, but in $H_1$ his proof also holds for $\mathbb Z$-coefficients. We can choose the curve $\gamma$ as the image of $\partial_2 F_{0,r+1}$ under $\Sigma_{1,-1}$. Similarly in $\Gamma_{0,r+1}$, we have Dehn twists $\tau_i'$ around each boundary component $\partial_i F_{0,r+1}$, and these are among the generators for $H_1(\Gamma_{0,r+1};\mathbb Z)$.
Then $\Sigma_{1,-1}$ maps $\tau_{i+1}'\mapsto \tau_i$ for $i=2,\ldots,r$ by construction of $\Sigma_{1,-1}$, and $\tau_2'\mapsto x$ by the choice of $\gamma$. So $\Sigma_{1,-1}:H_1(\Gamma_{0,r+1};\mathbb Z)\longrightarrow H_1(\Gamma_{1,r};\mathbb Z)$ is surjective. \paragraph{Injectivity of $\Sigma_{1,-1}$ when $n=1$.} The only exception is $(l,m)=(1,-1)$, $g=1$ and $k=0$. For this we will use a different argument, drawing on the stability theorem for $\mathbb Z$-coefficients. Consider the following exact sequence: \begin{eqnarray}\label{e:k=0} & &H_1(\Gamma_{1,r};V)\twoheadrightarrow H_1(\Gamma_{2,r-1};V)\longrightarrow\textup{Rel}^V_1(\Gamma_{2,r-1},\Gamma_{1,r})\nonumber\\ &\longrightarrow& H_0(\Gamma_{1,r};V) \stackrel{\cong}{\longrightarrow} H_0(\Gamma_{2,r-1};V) \end{eqnarray} Since $k=0$ we have constant coefficients, so we can use Theorem \ref{t:Main}. Since $2\cdot 1\ge 3\cdot 1-1$, the first map in \eqref{e:k=0} is surjective, and the last map is an isomorphism. Thus $\textup{Rel}^V_1(\Gamma_{2,r-1},\Gamma_{1,r})=0$, and any map from it is thus injective. This finishes the special cases when $n=1$. \paragraph{Surjectivity of $\Sigma_{1,-1}$ when $n=2$.}Again we have only one exception, namely $(l,m)=(1,-1)$, $g=1$ and $k=0$. It suffices to show $E^2_{2,1}=0$ and $E^2_{3,0}=0$. For $E^2_{2,1}$ the argument in \emph{Surjectivity for $\Sigma_{1,-1}$} works, since all the permutations used there are in $\overline{\Si}_2$. So consider $E^2_{3,0}$. Here for all permutations $\tau$ except $[0\,1\,2]$ we have $\tau\in \overline{\Si}_3\cap\overline{\Si}_3^{l,m}$ (for this notation, see \eqref{e:EEEEE}). Thus for these $\tau$ we know that $E^1_{3,0}(\tau)=0$, since it is a relative homology group in degree 0 with constant coefficients. But $[0\,1\,2]\in\overline{\Si}_3^{1,-1}\setminus \overline{\Si}_3$, so $E^1_{3,0}([0\,1\,2])$ is an absolute homology group. However, this group is hit surjectively by $E^1_{4,0}([3\,0\,1\,2])$, since the restricted differential equals $\Sigma_{0,1}$ (see the remark for $n=0$). Thus $E^2_{3,0}=0$, as desired. \end{proof} \begin{rem} As a Corollary to this result, we can be a bit more specific about what happens when stability with $\mathbb Z$-coefficients fails, cf. Theorem \ref{t:Main}. More precisely, \begin{itemize} \item[$(i)$] The cokernels of the maps \begin{eqnarray*} \Sigma_{0,1}: H_{2n+1}(\Gamma_{3n+1,r})\longrightarrow H_{2n+1}(\Gamma_{3n+1,r+1}) \\ \Sigma_{0,1}: H_{2n+2}(\Gamma_{3n+2,r})\longrightarrow H_{2n+2}(\Gamma_{3n+2,r+1}) \end{eqnarray*} are independent of $r\ge 1$. \item[$(ii)$]Let $r\ge 2$. Then the cokernel of the map \begin{eqnarray*} \Sigma_{1,-1}: H_{2n+1}(\Gamma_{3n,r})\longrightarrow H_{2n+1}(\Gamma_{3n+1,r-1}) \end{eqnarray*} is independent of $r$. \end{itemize} \end{rem} \begin{proof} Since $\Sigma_{0,1}$ is always injective, it fits into the following long exact sequence, \begin{equation*} H_{2n+1}(\Gamma_{3n+1,r})\longrightarrow H_{2n+1}(\Gamma_{3n+1,r+1})\longrightarrow \textup{Rel}^{\mathbb Z}_{2n+1}(F_{3n+1,r+1},F_{3n+1,r})\longrightarrow 0. \end{equation*} Since $2(3n+2)\ge 3(2n+2)-2$, we get by Theorem \ref{t:maintwist} that the cokernel is independent of $r$. The other case is similar.
For $(ii)$ we get \begin{eqnarray*} {\xymatrix{H_{q}(\Gamma_{3n,r})\ar[r]^{\Sigma_{1,-1}}\ar[d] &H_{q}(\Gamma_{3n+1,r-1})\ar[r]\ar[d]& \textup{Rel}^{\mathbb Z}_{q}(F_{3n+1,r-1},F_{3n,r})\ar[r]\ar[d]^{\cong}& H_{q-1}(\Gamma_{3n,r})\ar[d]^{\cong}\\ H_{q}(\Gamma_{3n,r+1})\ar[r]^{\Sigma_{1,-1}} &H_{q}(\Gamma_{3n+1,r})\ar[r]& \textup{Rel}^{\mathbb Z}_{q}(F_{3n+1,r},F_{3n,r+1})\ar[r]& H_{q-1}(\Gamma_{3n,r+1})}} \end{eqnarray*} (We have written $q=2n+1$ to save space.) As the last two vertical maps are isomorphisms, the cokernels of the first maps in the top and bottom rows are isomorphic. \end{proof} The above Theorem finishes the inductive proof of the assumption $I_{k,n}$. The reason for proving the inductive assumption is that we now get the following Main Theorem for homology stability with twisted coefficients: \begin{thm}\label{t:abstwist}Let $F$ be a surface of genus $g$, and let $V$ be a coefficient system of degree $k$. Let $(l,m)=(1,0)$, $(0,1)$ or $(1,-1)$. Then the map \begin{equation*} H_n(F; V(F)) \longrightarrow H_n(\Sigma_{l,m}F; V(\Sigma_{l,m}F)) \end{equation*} induced by $\Sigma_{l,m}$ satisfies: \begin{itemize} \item[$(i)$]For $\Sigma_{l,m}=\Sigma_{0,1}$, it is an isomorphism for $2g\ge 3n+k$. \item[$(ii)$]For $\Sigma_{l,m}=\Sigma_{1,0}$ or $\Sigma_{1,-1}$, it is surjective for $2g\ge 3n+k-\varepsilon_{l,m}$, and an isomorphism for $2g\ge 3n+k+2$. \end{itemize} \end{thm} \begin{proof}Consider the following exact sequence \begin{equation*} \textup{Rel}_{n+1}^V(\Sigma_{l,m}F,F)\longrightarrow H_n(F;V)\longrightarrow H_n(\Sigma_{l,m}F;\Sigma_{l,m}V)\longrightarrow \textup{Rel}_n^V(\Sigma_{l,m}F,F). \end{equation*} To show surjectivity, we must prove that $\textup{Rel}_n^V(\Sigma_{l,m}F,F)=0$. By $I_{k,n+1}$ this is the case when $2g\ge 3n+k-\varepsilon_{l,m}$. To show injectivity, we first note that as usual, $\Sigma_{0,1}$ is always injective. For $\Sigma_{1,-1}$, we get by $I_{k,n+2}$ that $\textup{Rel}_{n+1}^V(\Sigma_{l,m}F,F)=0$ when $2g\ge 3(n+1)+k-1=3n+k+2$. Finally, $\Sigma_{1,0}=\Sigma_{1,-1}\Sigma_{0,1}$ and is thus also injective when $2g\ge 3n+k+2$. \end{proof} \section{Stability of the space of surfaces} In \cite{CM}, Cohen and Madsen consider the following type of coefficients \begin{equation*} V^X_n(F) := H_n(\textup{Map}(F/\partial F,X)) \end{equation*} for $X$ a fixed topological space. \begin{lem}\label{l:Eilenberg} Let $K=K(G;k)$ be an Eilenberg-MacLane space with $k\ge 2$. Assume $H_*(K)$ is without infinite division. Then $V^K_n$ is a coefficient system of degree $\le \textstyle\round{\frac{n}{k-1}}$. \end{lem} \begin{proof} To prove $V_n^K$ is a coefficient system of degree $\le \textstyle\round{\frac{n}{k-1}}$, we must prove that the groups $V_n^K(F)$ are without infinite division, and that $V_n^K$ has the right degree. We consider the degree first, and the proof is by induction on $n$. Take $\Sigma=\Sigma_{1,0}$; the other cases are similar. We have the following homotopy cofibration: \begin{equation*} S^1\vee S^1 \longrightarrow \Sigma F/\partial \Sigma F\longrightarrow F/\partial F \end{equation*} Applying $\textup{Map}(-,K)$ leads to the following fibration: \begin{equation}\label{e:fibration} \textup{Map}(F/\partial F,K) \longrightarrow \textup{Map}(\Sigma F/\partial \Sigma F,K)\longrightarrow \Omega(K)\times\Omega(K) \end{equation} Since $K=K(G,k)$ is an infinite loop space it has a multiplication, and consequently so does each space in the fibration \eqref{e:fibration} above. Thus the total space is up to homotopy the product of the base and the fiber.
Using the Künneth formula, we get: \begin{equation}\label{e:Kunneth} V^K_n(\Sigma F)=\bigoplus_{i=0}^n V^K_{n-i}(F)\otimes H_{i}(\Omega(K)\times\Omega(K)) \end{equation} Note that for $n=0$ this says that $\Sigma$ induces an isomorphism, so $V_0^K$ has degree $0$. This was the induction start. Now since $\Omega(K)=K(G,k-1)$ is $(k-2)$-connected and $k\ge 2$, $H_{0}(\Omega(K)\times\Omega(K))=\mathbb Z$ and $H_{j}(\Omega(K)\times\Omega(K))=0$ for $1\le j\le k-2$. This means that the cokernel of $\Sigma$ is: \begin{equation*} \Delta(V_n^K(F))=\bigoplus_{i=k-1}^n V_{n-i}^K(F)\otimes H_{i}(\Omega(K)\times\Omega(K)) \end{equation*} Since the degree of a direct sum is the maximum of the degrees of its components, we get by induction that the degree of $\Delta(V_n^K)$ is $\le \textstyle\round{\frac{n-(k-1)}{k-1}}=\textstyle\round{\frac{n}{k-1}}-1$. This shows that the degree of $V_n^K$ is $\le \textstyle\round{\frac{n}{k-1}}$. It remains to show that $V_n^K(F)$ is an abelian group without infinite division for any surface $F$. To prove this, we use a double induction in $n$ and $F$. There are two base cases. First consider $n=0$, with $F$ any surface. From \eqref{e:Kunneth} we see that $V_0^K$ does not depend on the surface $F$. So we can calculate $V_0^K(F)$ using $F=D$ a disk: \begin{equation*} V_0^K(F)= H_0(\textup{Map}(D/\partial D,K))=\mathbb Z[\pi_2(K)]=\left\{ \begin{array}{ll} \mathbb Z, & \hbox{$k>2$;} \\ \mathbb Z[G], & \hbox{$k=2$.} \end{array} \right. \end{equation*} This is an abelian group without infinite division. Secondly, let $F=D$ be a disk, and $n$ any natural number. We see \begin{eqnarray*} V_n^K(D) &=& H_n(\textup{Map}(D/\partial D,K)) = H_n(\textup{Map}(S^2,K)) \\ &=& H_n(\textup{Map}(S^0,\Omega^2(K)))=H_n(\Omega^2(K)) \end{eqnarray*} and according to our assumptions on $H_*(K)$, this is without infinite division. The general case now follows by induction using \eqref{e:Kunneth} and its counterpart for $\Sigma=\Sigma_{0,1}$, along with the fact that any surface $F$ with boundary can be obtained from a disk $D$ using $\Sigma_{1,0}$ and $\Sigma_{0,1}$ finitely many times. \end{proof} To prove the next theorem we need a couple of lemmas: \begin{lem}\label{l:tensor} Let $V$ and $W$ be coefficient systems of degrees $\le s$ and $\le t$, respectively. Then $V\otimes W$ is a coefficient system of degree $\le s+t$, and $V\oplus W$ is a coefficient system of degree $\le \max(s,t)$. \end{lem} \begin{proof}Since $V$ is a coefficient system, we have the split exact sequence: \begin{equation*} 0\longrightarrow V(F)\longrightarrow V(\Sigma F)\longrightarrow \Delta(V(F))\longrightarrow 0. \end{equation*} Likewise for $W$. Then for the tensor product we get the split exact sequence: \begin{eqnarray*} 0&\longrightarrow& V(F)\otimes W(F)\longrightarrow V(\Sigma F)\otimes W(\Sigma F)\\ &\longrightarrow& \Delta(V(F))\otimes W(F)\oplus V(F)\otimes \Delta(W(F)) \longrightarrow 0. \end{eqnarray*} Since $\Delta V\otimes W\oplus V\otimes \Delta W$ has degree $\le s+t-1$ by induction on $s+t$, this gives the bound for $V\otimes W$. The statement for $V\oplus W$ is immediate, since $\Delta(V\oplus W)=\Delta V\oplus \Delta W$. \end{proof} \begin{thm}\label{t:stabil1}Let $X$ be a $k$-connected space, $k\ge 1$. If $V^X_n(F)$ is without infinite division for any surface $F$, then $V^X_n$ is a coefficient system of degree $\le \round{\frac{n}{k}}$. \end{thm} \begin{proof}First note: If we prove the assertion concerning the degree as in Def. \ref{d:coef} (not including the condition of being without infinite division), then since $V^X_n$ is assumed without infinite division, the cokernels $\Delta_{i,j}(V^X_n)$ (and their cokernels, etc.) are automatically without infinite division, since they are direct summands of $V^X_n$.
The proof uses Postnikov towers and Lemma \ref{l:Eilenberg} above. The Postnikov tower of $X$ is a sequence $\set{X_m\longrightarrow X_{m-1}}_{m\ge k}$ with each term a fibration \begin{equation}\label{e:postnikov} K(\pi_m(X),m)\longrightarrow X_m\longrightarrow X_{m-1}. \end{equation} The proof is by induction in $m$, so assume for $l<m$ that $V^{X_{l}}_n$ is a coefficient system of degree $\le \round{\frac{n}{k}}$. To make the induction work, we also assume inductively that the splitting $s_l$ we then have by definition, \begin{equation*} \xymatrix{0\ar[r]& V^{X_{l}}_n\ar[r]& \Sigma V^{X_{l}}_n\ar[r]& \Delta(V^{X_{l}}_n)\ar@/_/[l]_{s_l}\ar[r]&0 } \end{equation*} is a natural transformation from $\Delta(V^{X_{l}}_n)$ to $\Sigma V^{X_{l}}_n$. Now we take the induction step. Let $F$ be a surface. Then using $\textup{Map}(F,-)$ on \eqref{e:postnikov} yields a new fibration \begin{equation*} \textup{Map}(F,K(\pi_m(X),m))\longrightarrow \textup{Map}(F,X_m)\longrightarrow \textup{Map}(F,X_{m-1}). \end{equation*} Serre's spectral sequence for this fibration has $E^2$-term: \begin{eqnarray}\label{e:sss} E^2_{s,t}(F)&=& H_s(\textup{Map}(F,X_{m-1}))\otimes H_t(\textup{Map}(F,K(\pi_m(X),m)))\nonumber \\ &=& V^{X_{m-1}}_s(F)\otimes V^{K(\pi_m(X),m)}_t(F). \end{eqnarray} Now $X_{m-1}$ is $k$-connected, since $X$ is, and $K(\pi_m(X),m)$ is at least $k$-connected. Then by induction and Lemma \ref{l:tensor}, $E^2_{s,t}$ is a coefficient system of degree $\le \round{\frac{s}{k}} +\round{\frac{t}{k}}\le \round{\frac{s+t}{k}}$. We now want to prove that $E^r_{s,t}$ is a coefficient system of degree $\textstyle\le\round{\frac{s+t}{k}}$ for all $r\ge 2$, by induction in $r$. Let $V_1\stackrel{d}{\longrightarrow} V\stackrel{d}{\longrightarrow} V_2$ be groups in the $E^r$ term of the spectral sequence, where $d$ denotes the $r$th differential, and say $V$ has degree $\le q$. We assume by induction in $r$ that the splittings for $V$, $V_1$ and $V_2$ (see \eqref{e:splittings}) are natural transformations. For $r=2$ this holds according to \eqref{e:sss} by induction in $m$ and by \eqref{e:Kunneth} (the Eilenberg-MacLane space case). We want to show that the homology of $V$ with respect to $d$, $H(V)$, is a coefficient system of degree $\le q$, and that the splitting for $H(V)$ is also natural. Suppose by another induction that this holds for coefficient systems of degrees $<q$. Then consider the following diagram, where $\Sigma$ as usual denotes either $\Sigma_{1,0}$ or $\Sigma_{0,1}$. \begin{equation}\label{e:splittings} \xymatrix{ 0\ar[r]& V_1\ar[r]^{\Sigma}\ar[d]^{d} & \Sigma V_1\ar[r]\ar[d]^{d} & \Delta_1\ar[r]\ar[d]^{d}\ar@/_/[l] &0 \\ 0\ar[r]& V\ar[r]^{\Sigma}\ar[d]^{d} & \Sigma V\ar[r]\ar[d]^{d} & \Delta\ar[r]\ar[d]^{d}\ar@/_/[l] &0 \\ 0\ar[r]& V_2\ar[r]^{\Sigma} & \Sigma V_2\ar[r] &\Delta_2\ar[r]\ar@/_/[l] &0 } \end{equation} We know $\Sigma V= V\oplus \Delta$, and similarly for $V_1$ and $V_2$. By our induction hypothesis in $r$ we get that the splittings in the right-most squares above commute with $d$. Then the homology with respect to $d$ satisfies $H(\Sigma V)= H(V)\oplus H(\Delta)$, and the splitting for $H(V)$ is again natural. This shows that the cokernel $\Delta(H(V))$ of $\Sigma$ is $H(\Delta)$. Since $\Delta$ is a coefficient system of degree $\le q-1$, we get by induction in the degree that $H(V)$ is a coefficient system of degree $\le q$. For the degree-induction start, if $V$ is constant, $H(V)$ is also constant.
To finish the induction in $m$ we must prove that the splitting $s_m: \Delta(V^{X_{m}}_n)\longrightarrow\Sigma V^{X_{m}}_n$ is a natural transformation. By the above, $E^r_{s,t}$ is a coefficient system of degree $\le \round{\frac{s+t}{k}}$ for all $r$, so the same is true for $E^\infty_{s,t}$. Since the spectral sequence converges to $V^{X_m}_n(F)$ for $n=s+t$, we get that $V^{X_m}_n$ is a coefficient system of degree $\le \round{\frac{n}{k}}$. The inverse limit of the Postnikov tower $\lim_{\leftarrow}X_m$ is weakly homotopy equivalent to $X$, and the result follows. \end{proof} The space of surfaces mapping into a background space $X$ with boundary conditions $\gamma$ is defined as follows: Let $X$ be a space with base point $x_0\in X$, and let $\gamma:\coprod S^1\longrightarrow X$ be $r$ loops in $X$. Then \begin{eqnarray*} \mathcal{S}_{g,r}(X,\gamma) &=& \left\{(F_{g,r},\varphi,f)\mid F_{g,r}\subseteq \mathbb R^\infty\times [a,b], \varphi:\sqcup S^1\longrightarrow \partial F_{g,r}\text{ is a para-}\right.\\ &&\left. \text{metrization}, f: F_{g,r}\longrightarrow X \text{ is continuous with }f\circ \varphi=\gamma\right\} \end{eqnarray*} Assume now $X$ is simply-connected. Then we observe that the homotopy type of $\mathcal{S}_{g,r}(X,\gamma)$ does not depend on $\gamma$: To see this, consider the space of surfaces with no boundary conditions, call it $\overline{\mathcal{S}_{g,r}(X)}$. Restriction to the boundary of the surfaces gives a sequence \begin{equation*} \mathcal{S}_{g,r}(X,\gamma) \longrightarrow \overline{\mathcal{S}_{g,r}(X)}\longrightarrow (LX)^r \end{equation*} in which the second map is a Serre fibration with fiber $\mathcal{S}_{g,r}(X,\gamma)$ over $\gamma$. Here, $LX=\textrm{Map}(S^1,X)$ is the free loop space, so as $X$ is simply-connected, $(LX)^r$ is connected, and the fiber is independent of the choice of $\gamma\in (LX)^r$ up to homotopy equivalence. So when $X$ is simply-connected, we use the abbreviated notation $\mathcal{S}_{g,r}(X)=\mathcal{S}_{g,r}(X,\gamma)$ for any choice of $\gamma$. \begin{thm} Let $X$ be a simply-connected space such that $V^X_m$ is without infinite division for all $m\le n$. Then \begin{equation*} H_n(\mathcal{S}_{g,r}(X)) \end{equation*} is independent of $g$ and $r$ for $2g\ge 3n+3$ and $r\ge 1$. \end{thm} \begin{proof} Let $\Sigma$ be either $\Sigma_{1,0}$ or $\Sigma_{0,1}$. From the definition we observe that \begin{equation*} \mathcal{S}_{g,r}(X)\cong \textrm{Emb}(F_{g,r},\mathbb R^\infty)\times_{\textrm{Diff}(F_{g,r},\partial)}\textup{Map}(F_{g,r},X), \end{equation*} and since $\textrm{Emb}(F_{g,r},\mathbb R^\infty)$ is contractible, we get \begin{equation*} \mathcal{S}_{g,r}(X)\cong E(\textrm{Diff}(F_{g,r},\partial)) \times_{\textrm{Diff}(F_{g,r},\partial)}\textup{Map}(F_{g,r},X). \end{equation*} So there is an obvious fibration sequence \begin{equation*} \textup{Map}(F_{g,r},X)\longrightarrow\mathcal{S}_{g,r}(X)\longrightarrow B(\textrm{Diff}(F_{g,r},\partial)), \end{equation*} and thus we can apply Serre's spectral sequence, which has $E^2$ term: \begin{equation*} E^2_{s,t}= H_s(B(\textrm{Diff}(F_{g,r},\partial));H_t(\textup{Map}(F_{g,r},X))), \end{equation*} where the coefficients are local.
The path components of $\textrm{Diff}(F_{g,r},\partial)$ are contractible, so we get an isomorphism \begin{equation}\label{e:E^2ib} E^2_{s,t}\cong H_s(\Gamma(F_{g,r});H_t(\textup{Map}(F_{g,r},X))). \end{equation} Consider the map induced by $\Sigma$ on this spectral sequence, \begin{equation*} \Sigma_*: H_s(\Gamma(F_{g,r});H_t(\textup{Map}(F_{g,r},X))) \longrightarrow H_s(\Gamma(\Sigma F_{g,r});H_t(\textup{Map}(\Sigma F_{g,r},X))). \end{equation*} By Theorems \ref{t:stabil1} and \ref{t:abstwist}, we know that this map is surjective for $2g\ge 3s+t$, and an isomorphism for $2g\ge 3s+t+2$. We use Zeeman's comparison theorem to carry the result to $E^\infty$. To get the optimum stability range, we must find the maximal $N=N(g)\in\mathbb Z$ such that for all $s,t\ge 0$, \begin{eqnarray*} s+t\le N &\Rightarrow& 2g\ge 3s+t+2 \quad \text{(isomorphism)}\\ s+t =N+1&\Rightarrow & 2g\ge 3s+t \quad \text{(surjectivity)} \end{eqnarray*} Zeeman's comparison theorem then says that $\Sigma_*$ induces an isomorphism on $E^\infty_{s,t}$ for $s+t\le N(g)$ and a surjection for $s+t=N(g)+1$. Since the spectral sequence converges to $H_n(\mathcal{S}_{g,r}(X))$, we get stability for $n\le N(g)$. Clearly, the hardest requirement is $t=0$ (surjectivity), where we get the inequality $2g\ge 3N+3$. One checks that this satisfies all the other cases. So $H_n(\mathcal{S}_{g,r}(X))$ is independent of $g,r$ for $2g\ge 3n+3$. \end{proof} Using this we can improve the stability range in Cohen-Madsen's stability result for the homology of the space of surfaces to the following, cf. \cite{CM}, Theorem 0.1: \begin{thm}Let $X$ be a simply connected space such that $V^X_m$ is without infinite division for all $m$. Then for $2g\ge 3n+3$ and $r\ge 1$ we get an isomorphism \begin{equation*} H_n(\mathcal{S}_{g,r}(X)_\bullet)\cong H_n(\Omega^\infty(\mathbb{CP}^\infty_{-1} \wedge X_+ )_\bullet). \end{equation*} \end{thm} \end{document}
arXiv
Stallings–Zeeman theorem In mathematics, the Stallings–Zeeman theorem is a result in algebraic topology, used in the proof of the Poincaré conjecture for dimension greater than or equal to five. It is named after the mathematicians John R. Stallings and Christopher Zeeman. Statement of the theorem Let M be a finite simplicial complex of dimension dim(M) = m ≥ 5. Suppose that M has the homotopy type of the m-dimensional sphere S^m and that M is locally piecewise linearly homeomorphic to m-dimensional Euclidean space R^m. Then M is homeomorphic to S^m under a map that is piecewise linear except possibly at a single point x. That is, M \ {x} is piecewise linearly homeomorphic to R^m. References • Stallings, John (1962). "The piecewise-linear structure of Euclidean space". Proc. Cambridge Philos. Soc. 58 (3): 481–488. Bibcode:1962PCPS...58..481S. doi:10.1017/s0305004100036756. S2CID 120418488. MR0149457 • Zeeman, Christopher (1961). "The generalised Poincaré conjecture". Bull. Amer. Math. Soc. 67 (3): 270. doi:10.1090/S0002-9904-1961-10578-8. MR0124906
Wikipedia
What is the greatest common divisor of $121^2 + 233^2 + 345^2$ and $120^2 + 232^2 + 346^2$? Let $m = 121^2 + 233^2 + 345^2$ and $n = 120^2 + 232^2 + 346^2$. By the Euclidean Algorithm, and using the difference of squares factorization, \begin{align*} \text{gcd}\,(m,n) &= \text{gcd}\,(m-n,n) \\ &= \text{gcd}\,(n,121^2 - 120^2 + 233^2 - 232^2 + 345^2 - 346^2)\\ &= \text{gcd}\,(n,(121-120)(121+120) \\ &\qquad\qquad\qquad + (233-232)(233+232)\\ &\qquad\qquad\qquad - (346-345)(346+345)) \\ &= \text{gcd}\,(n,241 + 465 - 691) \\ &= \text{gcd}\,(n,15) \end{align*}We notice that $120^2$ has a units digit of $0$, $232^2$ has a units digit of $4$, and $346^2$ has a units digit of $6$, so that $n$ has the units digit of $0+4+6$, namely $0$. It follows that $n$ is divisible by $5$. However, $n$ is not divisible by $3$: any perfect square not divisible by $3$ leaves a remainder of $1$ upon division by $3$, as $(3k \pm 1)^2 = 3(3k^2 + 2k) + 1$. Since $120$ is divisible by $3$ while $232$ and $346$ are not, it follows that $n$ leaves a remainder of $0 + 1 + 1 = 2$ upon division by $3$. Thus, the answer is $\boxed{5}$.
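As a quick numerical sanity check (our own addition, not part of the original solution), the computation can be confirmed in a few lines of Python:

    from math import gcd

    # The two numbers from the problem.
    m = 121**2 + 233**2 + 345**2
    n = 120**2 + 232**2 + 346**2

    # The difference of squares telescopes to 241 + 465 - 691 = 15.
    assert m - n == 241 + 465 - 691 == 15

    # n ends in 0, so it is divisible by 5, and it leaves remainder 2 mod 3.
    assert n % 5 == 0 and n % 3 == 2

    print(gcd(m, n))  # prints 5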
Math Dataset
Complex Powers of the Laplacian on Affine Nested Fractals as Calderón-Zygmund operators
Marius Ionescu, Department of Mathematics, United States Naval Academy, Annapolis, MD, 21402, United States
Luke G. Rogers, Department of Mathematics, University of Connecticut, Storrs CT 06269-3009
Received June 2011; Revised July 2012; Published July 2014
We give the first natural examples of Calderón-Zygmund operators in the theory of analysis on post-critically finite self-similar fractals. This is achieved by showing that the purely imaginary Riesz and Bessel potentials on nested fractals with $3$ or more boundary points are of this type. It follows that these operators are bounded on $L^{p}$, $1 < p < \infty$ and satisfy weak $1$-$1$ bounds. The analysis may be extended to infinite blow-ups of these fractals, and to product spaces based on the fractal or its blow-up.
Keywords: Bessel potentials, Calderón-Zygmund operators, Riesz potentials, analysis on fractals.
Mathematics Subject Classification: Primary: 28A80, 46F12; Secondary: 42C99, 81Q1.
Citation: Marius Ionescu, Luke G. Rogers. Complex Powers of the Laplacian on Affine Nested Fractals as Calderón-Zygmund operators. Communications on Pure & Applied Analysis, 2014, 13 (6): 2155-2175. doi: 10.3934/cpaa.2014.13.2155
Rectangular diagonal matrix

A square matrix is said to be a diagonal matrix if all of its entries outside the main diagonal are zero; any number of the entries on the main diagonal may also be zero. For example, the 4-by-4 identity matrix is diagonal. The term "diagonal matrix" may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with only the entries of the form d_{i,i} possibly non-zero. If a matrix D is partitioned into blocks and the block D_{ij} = 0 whenever i is not equal to j, then D is called a block diagonal matrix. A matrix whose numbers of rows and columns differ (for example, 2 rows and 3 columns) is called a rectangular matrix.

For a general rectangular matrix A with dimensions m x n, the rank of A equals the number of non-zero singular values in its reduced SVD, which is the same as the number of non-zero diagonal entries of Σ. When D is an m x n (rectangular) diagonal matrix, its pseudo-inverse D^+ is an n x m (rectangular) diagonal matrix whose non-zero entries are the reciprocals 1/d_k of the non-zero diagonal entries of D. Thus a matrix A having SVD A = U Σ V^T has A^+ = V Σ^+ U^T.

In GNU Octave, the most common and easiest way to create a diagonal matrix is the built-in function diag. The expression diag(v), with v a vector, creates a square diagonal matrix with elements on the main diagonal given by the elements of v, and size equal to the length of v; diag(v, m, n) can be used to construct an m-by-n rectangular diagonal matrix.

In plane geometry, by contrast, the diagonal of a rectangle is the line segment that connects opposite corners (vertices) of the rectangle; by the Pythagorean theorem, its length can be found from the width and the height.

In mechanics, an approach that works for a general plate (or body, when using three dimensions) is to start with the moment of inertia I around the center of the plate, expressed as a 2x2 matrix, which for a rectangular plate is a nice simple diagonal matrix. From this matrix you can find the moment of inertia around any axis n as I_n = n^T I n, where n^T is the transpose of n.

One question that arises: suppose you have the diagonal matrix $\left( \begin{array}{cc} a & 0 \\ 0 & \{b,c\} \end{array} \right)$, in which the second diagonal entry is the list {b,c}. How can the above matrix be converted to the corresponding rectangular one?
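As a concrete illustration of a rectangular diagonal matrix and its pseudo-inverse D^+ described above, here is a small sketch in Python with NumPy (NumPy is an assumption here; the diag examples on this page are written for GNU Octave):

import numpy as np

# Build a 3 x 4 rectangular diagonal matrix with diagonal entries 2, 5, 0.
D = np.zeros((3, 4))
np.fill_diagonal(D, [2.0, 5.0, 0.0])   # only the entries D[i, i] are set

# The pseudo-inverse is 4 x 3 and diagonal, with reciprocals of the
# non-zero diagonal entries (the zero entry stays zero).
Dplus = np.linalg.pinv(D)
print(Dplus.shape)                            # (4, 3)
print(Dplus[0, 0], Dplus[1, 1], Dplus[2, 2])  # 0.5 0.2 0.0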
A matrix that does not have an inverse is called singular. A rectangular matrix has no standard main diagonal in the square sense; as one commenter (DVD) put it, one can take a diagonal of the largest non-singular square submatrix to be the "main diagonal". In the strict sense, a diagonal is present in a rectangular matrix only when that matrix is a square (all squares are rectangles, but not all rectangles are squares, as a rule of thumb).

A related display question: how do I display truly diagonal matrices, and how can this be done for eigenvalues and eigenvectors? Another asks, about LaTeX typesetting: "Here, I wish to draw a rectangle around the principal diagonal elements (red colored) of the below matrix." The accompanying MWE begins:

\documentclass{article}
\usepackage{amsmath,xcolor}
\begin{document}
...

As a worked example, the inverse of an invertible diagonal matrix is the diagonal matrix of reciprocals. Let D = \(\begin{bmatrix} a_{11} & 0 & 0\\ 0 & a_{22} & 0\\ 0 & 0 & a_{33} \end{bmatrix}\). Then Adj D = \(\begin{bmatrix} a_{22}a_{33} & 0 & 0\\ 0 & a_{11}a_{33} & 0\\ 0 & 0 & a_{11}a_{22} \end{bmatrix}\), so D^{-1} = \(\frac{1}{a_{11}a_{22}a_{33}} \begin{bmatrix} a_{22}a_{33} & 0 & 0\\ 0 & a_{11}a_{33} & 0\\ 0 & 0 & a_{11}a_{22} \end{bmatrix}\), that is, the diagonal matrix with entries 1/a_{11}, 1/a_{22}, 1/a_{33} (provided a_{11}a_{22}a_{33} is non-zero; otherwise D is singular).

(1) Row matrix: a row matrix is a type of matrix which has just one row.

Diagonal matrices have some properties that can be usefully exploited: (i) if A and B are diagonal, then C = AB is diagonal; (ii) further, C can be computed more efficiently than by naively doing a full matrix multiplication, since c_ii = a_ii b_ii and all other entries are 0.

How to convert a column or row matrix to a diagonal matrix in Python? NumPy's diag covers this: applied to a 1-D array it builds the corresponding square diagonal matrix, and applied to a 2-D array it extracts the diagonal instead (see the offset notes and the sketch below). For example, consider the following 4 x 4 input matrix:

A00 A01 A02 A03
A10 A11 A12 A13
A20 A21 A22 A23
A30 A31 A32 A33

The primary diagonal is A00, A11, A22, A33.
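The product property (ii) and the row-matrix conversion question above can both be checked with a short NumPy sketch (again illustrative, not code from the original page):

import numpy as np

a = np.diag([1.0, 2.0, 3.0])
b = np.diag([4.0, 5.0, 6.0])
c = a @ b                                          # product of two diagonal matrices
print(np.allclose(c, np.diag([4.0, 10.0, 18.0])))  # True: c_ii = a_ii * b_ii

row = np.array([[7, 8, 9]])      # a 1 x 3 row matrix
print(np.diag(row.ravel()))      # 3 x 3 diagonal matrix with 7, 8, 9 on the diagonal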
Mathematically, a matrix is a set of numbers, variables or functions (or other mathematical objects for which operations such as addition and multiplication are defined) arranged in rows and columns; generally, it represents a collection of information stored in an arranged manner.

On extracting diagonals: numpy.diagonal(a, offset=0, axis1=0, axis2=1) returns the specified diagonal of a (historically a copy, kept in order to maintain backward compatibility; newer NumPy versions return a read-only view). For np.diag, the offset k selects the diagonal: k > 0 refers to diagonals above the main diagonal and k < 0 to diagonals below it, so a 2-D input yields a copy of its k-th diagonal, while a 1-D input v yields a 2-D array with v on the k-th diagonal.
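A minimal demonstration of these offset conventions (illustrative Python/NumPy, assumed as above):

import numpy as np

v = np.array([1, 2, 3])
A = np.diag(v, k=1)                # 4 x 4 matrix with v on the first superdiagonal
print(A.shape)                     # (4, 4)
print(np.diagonal(A, offset=1))    # [1 2 3], extracting that diagonal again
print(np.diag(A, k=1))             # [1 2 3], since a 2-D input makes diag extract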
C. V. Mourey

Born: possibly 1791, possibly in Valay, France. Died: possibly 1830, possibly in Paris, France.[1] Fields: mathematics. (Front page of the second edition, 1861.)

C. V. Mourey (1791? – 1830?) was a French mathematician who wrote a work of 100 pages titled La vraie théorie des quantités négatives et des quantités prétendues imaginaires (The true theory of negative quantities and of alleged imaginary quantities), published in Paris in 1828 and re-edited in 1861, in which he gave a systematic presentation of vector theory. He seems to be the first mathematician to state the necessity of specifying the conditions of equality between vectors.[2]

Mourey also stated that there exists a more general algebra but, unfortunately, no other writings by him have survived.[3] Nothing is known about Mourey's life.[4] The University of St Andrews researcher Elizabeth Lewis supposes that Mourey was a technician in Paris, but says she cannot positively identify him.[5]

References
1. The dates are stated in the MacTutor History of Mathematics, supposing Mourey was a mécanicien à Paris.
2. Windred, page 539.
3. Crowe, pages 11 and 16.
4. Schubring, page 569.
5. O'Connor & Robertson, MacTutor History of Mathematics.

Bibliography
• Crowe, Michael J. (1994). A History of Vector Analysis: The Evolution of the Idea of a Vectorial System. New York: Dover. ISBN 0-486-67910-1.
• Schubring, Gert (2005). Conflicts Between Generalization, Rigor, and Intuition. Springer. ISBN 978-0387-22836-5.
• Windred, G. (1929). "History of the Theory of Imaginary and Complex Quantities". The Mathematical Gazette, 14 (203): 533–541. doi:10.2307/3606116. JSTOR 3606116.

External links
• O'Connor, John J.; Robertson, Edmund F., "C. V. Mourey", MacTutor History of Mathematics Archive, University of St Andrews.
\begin{document} \title[A stability transfer theorem in d-Tame Metric AEC $\cdots$]{A stability transfer theorem in d-tame metric abstract elementary classes} \author[P. Zambrano]{Pedro Zambrano} \address{{\rm E-mail:} {\it [email protected]}\\ Departamento de Matem\'aticas, Universidad Nacional de Colombia, AK 30 $\#$ 45-03, Bogot\'a - Colombia} \thanks{ \emph{AMS Subject Classification}: 03C48, 03C45, 03C52. Secondary: 03C05, 03C55, 03C95.\\ The author wants to thank Andr\'es Villaveces for his suggestions on early proofs of the main theorem of this paper. The author is very thankful to Tapani Hyttinen for the nice discussions and suggestions about this paper during a short visit to Helsinki in 2009. The author was partially supported by Colciencias.} \date{August 1st, 2011} \begin{abstract} In this paper, we study a stability transfer theorem in $d$-tame Metric Abstract Elementary Classes, in a similar way as in \cite{BaKuVa}, but using superstability-like assumptions which involve a new independence notion ({\it Tame Independence}) instead of $\aleph_0$-locality. \end{abstract} \maketitle \section{Introduction} Discrete {\it tame} Abstract Elementary Classes are a very special kind of Abstract Elementary Classes (shortly, AECs) which have a categoricity transfer theorem (see \cite{GrVa2}) and a nice stability transfer theorem (see \cite{BaKuVa}). In fact -under $\aleph_0$-tameness and $\aleph_0$-locality (assuming $LS(\C{K})=\aleph_0$)-, J. Baldwin, D. Kueker and M. VanDieren proved in \cite{BaKuVa} that $\aleph_0$-Galois-stability implies $\kappa$-Galois-stability for every cardinality $\kappa$. First, they proved that $\aleph_0$-Galois-stability implies $\aleph_n$-Galois-stability for every $n<\omega$ (in fact, their argument works for getting $\kappa$-Galois-stability if $cf(\kappa)>\omega$), and then (by $\aleph_0$-locality) $\aleph_\omega$-Galois-stability (where the same argument works for getting $\kappa$-Galois-stability if $cf(\kappa)=\omega$). \\ \\ \indent {\it Metric Abstract Elementary Classes} (for short, MAECs) correspond to a kind of amalgam between {\it AECs} and {\it Continuous Logic Elementary Classes}, although we drop uniform continuity of the symbols of the languages (for our purposes, it is enough to take closed functions). In this setting, it is enough to consider dense subsets of the models, which is the reason why all our analysis considers density character instead of cardinality of the models. In general, we can define a distance between Galois-types in this setting, which is a metric under suitable assumptions (see \cite{Hi,ViZa}). Because of that, we adapt a notion of {\it Tameness} using these new tools given in this setting. \\ \\ \indent In section 2, we study a suitable notion of independence (which we call {\it Tame Independence}) which we use for proving the stability transfer theorem in this setting. This is one of the differences between our paper and \cite{BaKuVa} -they just used a combinatorial argument to get their result-. In this paper, we also strongly use superstability-like assumptions ($\varepsilon$-locality, assumption \ref{superstability_tameness}) to get our main theorem. \\ \\ In section 3, we provide the proof of our main result, a stability transfer theorem which -roughly speaking- says that under $d$-tameness, $\aleph_0$- and $\aleph_1$-d-stability and some suitable superstability-like assumptions -via tame independence- we have $\kappa$-d-stability for every cardinality $\kappa$.
\section[Independence in $d$-tame MAEC]{An independence notion in $d$-tame metric abstract elementary classes.}\label{tameMAEC} In this section, we provide a definition of tameness adapted to the setting of metric abstract elementary classes and a suitable notion of independence, which we will use in section 3 for proving an upward stability transfer theorem. This section is devoted to developing a suitable notion of stability towards proving the following fact: \\ \\ {\bf Theorem \ref{stab_transfer2}.} {\it Let $\C{K}$ be a $\mu$-$\mathbf{d}$-tame (for some $\mu<\kappa$) MAEC. Suppose that $\C{K}$ is $[LS(\C{K}),\kappa)$-cofinally-d-stable. Define $$\lambda:=\min\{\theta<\kappa: \mu< \theta \text{ and $\C{K}$ is $\theta$-$\mathbf{d}$-stable }\},$$ $$\zeta:=\min\{\xi: 2^{\xi}>\lambda\}$$ and $$\zeta^*:=\max\{\mu^+,\zeta\}.$$ If $cf(\kappa)\ge \zeta^*$ then $\C{K}$ is $\kappa$-$\mathbf{d}$-stable.} \\ \\ \indent We will provide a proof of theorem \ref{stab_transfer2} in section 3. \\ \\ \indent Under superstability-like assumptions ($\varepsilon$-locality) on a notion of independence which we will define in this section, the theorem above implies $\kappa$-d-stability for every $\kappa$. For the basic notions and facts in MAECs, we refer the reader to \cite{Hi,ViZa}. For the sake of completeness, we provide some of the most relevant notions and facts which we use in this paper. \begin{definition} Let $(X,\tau)$ be a topological space. The {\it density character} of $(X,\tau)$ is defined as the minimum cardinality of a dense subset of $X$. \end{definition} \begin{definition}[distance between Galois types] Let $\C{K}$ be an MAEC with AP and JEP -so Galois types over a model $M$ correspond to orbits of automorphisms of a fixed monster model $\mathbb{M}$ which fix $M$ pointwise-. Let $M\in \C{K}$ and $p,q\in {\sf ga\mhyphen S}(M)$. Define $d(p,q):=\inf\{d(a,b): a,b\in\mathbb{M}, a\models p \text{\ and } b\models q \}$. \end{definition} \begin{definition} Let $\C{K}$ be an MAEC with AP and JEP. We say that $\C{K}$ has the {\it Continuity Type Property}\footnote{CTP is called {\it Perturbation Property} in \cite{Hi}} (for short, CTP) iff for any convergent sequence $(a_n)_{n<\omega}$ in $\mathbb{M}$, if $(a_n)\to a$ and ${\sf ga\mhyphen tp}(a_n/M)={\sf ga\mhyphen tp}(a_0/M)$ for all $n<\omega$, then ${\sf ga\mhyphen tp}(a/M)={\sf ga\mhyphen tp}(a_0/M)$. \end{definition} \begin{fact}[Hirvonen-Hyttinen] Let $\C{K}$ be an MAEC with AP and JEP. $d$ defined as above is a metric iff $\C{K}$ has the CTP. \end{fact} Most of the natural examples (e.g., Banach Spaces and Elementary Continuous Logic Classes) satisfy the CTP. So, we may assume that the distance between Galois types is in fact a metric. \begin{definition}[$\lambda$-d-stability] Let $\C{K}$ be an MAEC with AP and JEP and $\lambda\ge LS(\C{K})$. We say that $\C{K}$ is {\it $\lambda$-d-stable} iff given any $M\in \C{K}$ with density character $\lambda$, $dc({\sf ga\mhyphen S}(M))\le \lambda$. \end{definition} \begin{definition}[Cofinal-d-stability] Let $\C{K}$ be an MAEC with AP and JEP and $LS(\C{K})\le \lambda <\kappa$. We say that $\C{K}$ is {\it $[\lambda,\kappa)$-cofinally-d-stable} iff given $\theta\in [\lambda,\kappa)$ there exists $\theta'\ge \theta$ in $[\lambda,\kappa)$ such that $\C{K}$ is $\theta'$-d-stable. \end{definition} \begin{definition}[Universality] Let $\C{K}$ be an MAEC and $M\prec_{\C{K}} N$ in $\C{K}$.
We say that $N$ is {\it $\mu$-d-universal over $M$} iff for every $M'\succ_{\C{K}} M$ of density character $\mu$ there exists a $\C{K}$-embedding $f:M'\to N$ which fixes $M$ pointwise. We say that $N$ is d-universal over $M$ iff it is $dc(M)$-d-universal. We drop d if the metric context is clear. \end{definition} Under d-stability, universal models exist. \begin{fact}\label{Existence_Universal} Let $\C{K}$ be a $\mu$-d-stable MAEC. Given $M\in \C{K}$ of density character $\mu$, there exists $M'\succ_{\C{K}} M$ universal over $M$. \end{fact} $\mu$-Tameness in (discrete) AECs says that the difference between two Galois-types $p,q\in {\sf ga\mhyphen S}(M)$ is witnessed by some $N\prec_{\C{K}} M$ of size $\mu$. Since in this setting we have a distance between Galois-types (see \cite{Hi}), we adapt this notion to the metric setting. \begin{definition}[$\mathbf{d}$-tameness]\label{tameness} Let $\C{K}$ be an MAEC and $\mu\ge LS(\C{K})$. We say that $\C{K}$ is {\it $\mu$-$\mathbf{d}$-tame} iff for every $\varepsilon>0$ there exists $\delta_\varepsilon>0$ such that for any $M\in \C{K}$ of density character $\ge\mu$ and any $p,q\in {\sf ga\mhyphen S}(M)$, if $\mathbf{d}(p,q)\ge \varepsilon$ then there exists $N\prec_{\C{K}} M$ of density character $\mu$ such that $\mathbf{d}(p\upharpoonright N,q\upharpoonright N)\ge \delta_\varepsilon$. \end{definition} \begin{assumption} The definitions given below use $\lambda$, $\mu$ and $\zeta^*$ defined above. So, throughout this section, we assume that $\C{K}$ is a $\mu$-d-tame and a $\lambda$-d-stable MAEC. Also, we suppose that $\C{K}$ satisfies AP and JEP, so we are able to construct a homogeneous monster model $\mathbb{M}\in \C{K}$ and we consider the Galois-types over $M\in \C{K}$ as orbits under $Aut(\mathbb{M}/M)$. \end{assumption} As we did in the definition of $d$-tameness, we can adapt the notion of splitting to MAECs using the distance between Galois-types. \begin{definition} Let $N\prec_{\C{K}} M$ and $\varepsilon>0$. We say that ${\sf ga\mhyphen tp}(a/M)$ {\it tame-$<\zeta^*$-$\varepsilon$-splits} over $N$ iff for every submodel $N'\prec_{\C{K}} N$ with density character $<\zeta^*$, there are models $N'\prec_{\C{K}} N_1,N_2\prec_{\C{K}} M$ with density character $<\zeta^*$ and $h:N_1\cong_{N'} N_2$ such that $\mathbf{d}({\sf ga\mhyphen tp}(a/N_2),h({\sf ga\mhyphen tp}(a/N_1)))\ge \varepsilon$. If it is clear, we drop $<\zeta^*$ and we just say that ${\sf ga\mhyphen tp}(a/M)$ tame-$\varepsilon$-splits over $N$. If ${\sf ga\mhyphen tp}(a/M)$ does not tame-$\varepsilon$-split over $N$, we denote that by $a\indep^{T,\varepsilon}_N M$.
\end{definition} \begin{center} \scalebox{1} { \begin{pspicture}(0,-2.3992188)(4.9228125,2.4392188) \definecolor{color7b}{rgb}{0.6,0.6,0.6} \psframe[linewidth=0.04,dimen=outer](4.0209374,2.2607813)(0.6409375,-2.3992188) \psline[linewidth=0.04cm](0.6609375,-0.07921875)(4.0209374,-0.09921875) \psellipse[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm,dimen=outer,fillstyle=solid,fillcolor=color7b](2.3009374,-1.5392188)(1.18,0.54) \pscustom[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm] { \newpath \moveto(1.1209375,-1.4992187) \lineto(1.1209375,-0.41921875) \curveto(1.1209375,0.12078125)(1.1209375,0.7657812)(1.1209375,0.87078124) \curveto(1.1209375,0.97578126)(1.1959375,1.1457813)(1.2709374,1.2107812) \curveto(1.3459375,1.2757813)(1.4559375,1.3707813)(1.4909375,1.4007813) \curveto(1.5259376,1.4307812)(1.6059375,1.4607812)(1.6509376,1.4607812) \curveto(1.6959375,1.4607812)(1.7959375,1.4607812)(1.8509375,1.4607812) \curveto(1.9059376,1.4607812)(2.0209374,1.3857813)(2.0809374,1.3107812) \curveto(2.1409376,1.2357812)(2.2159376,1.1007812)(2.2309375,1.0407813) \curveto(2.2459376,0.98078126)(2.3159375,0.81078124)(2.3709376,0.7007812) \curveto(2.4259374,0.5907813)(2.5459375,0.39078125)(2.6109376,0.30078125) \curveto(2.6759374,0.21078125)(2.7959375,0.05078125)(2.8509376,-0.01921875) \curveto(2.9059374,-0.08921875)(2.9859376,-0.21421875)(3.0109375,-0.26921874) \curveto(3.0359375,-0.32421875)(3.0709374,-0.40421876)(3.0809374,-0.42921874) \curveto(3.0909376,-0.45421875)(3.1059375,-0.49921876)(3.1109376,-0.51921874) \curveto(3.1159375,-0.5392187)(3.1409376,-0.5992187)(3.1609375,-0.63921875) \curveto(3.1809375,-0.67921877)(3.2259376,-0.77921873)(3.2509375,-0.83921874) \curveto(3.2759376,-0.89921874)(3.3109374,-0.9792187)(3.3209374,-0.99921876) \curveto(3.3309374,-1.0192188)(3.3609376,-1.0792187)(3.3809376,-1.1192187) \curveto(3.4009376,-1.1592188)(3.4259374,-1.2242187)(3.4309375,-1.2492187) \curveto(3.4359374,-1.2742188)(3.4459374,-1.3192188)(3.4509375,-1.3392187) \curveto(3.4559374,-1.3592187)(3.4609375,-1.3892188)(3.4609375,-1.4192188) } \pscustom[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm] { \newpath \moveto(1.9009376,-1.0792187) \lineto(1.9409375,-0.7892187) \curveto(1.9609375,-0.64421874)(1.9909375,-0.37921876)(2.0009375,-0.25921875) \curveto(2.0109375,-0.13921875)(2.0409374,0.09578125)(2.0609374,0.21078125) \curveto(2.0809374,0.32578126)(2.1209376,0.50578123)(2.1409376,0.57078123) \curveto(2.1609375,0.6357812)(2.2109375,0.78078127)(2.2409375,0.86078125) \curveto(2.2709374,0.94078124)(2.3559375,1.1107812)(2.4109375,1.2007812) \curveto(2.4659376,1.2907813)(2.5559375,1.4107813)(2.5909376,1.4407812) \curveto(2.6259375,1.4707812)(2.6859374,1.5207813)(2.7109375,1.5407813) \curveto(2.7359376,1.5607812)(2.7809374,1.5857812)(2.8009374,1.5907812) \curveto(2.8209374,1.5957812)(2.8659375,1.5957812)(2.8909376,1.5907812) \curveto(2.9159374,1.5857812)(2.9809375,1.5307813)(3.0209374,1.4807812) \curveto(3.0609374,1.4307812)(3.1509376,1.2757813)(3.2009375,1.1707813) \curveto(3.2509375,1.0657812)(3.3209374,0.9007813)(3.3409376,0.8407813) \curveto(3.3609376,0.78078127)(3.3909376,0.69578123)(3.4009376,0.67078125) \curveto(3.4109375,0.6457813)(3.4309375,0.5857813)(3.4409375,0.55078125) \curveto(3.4509375,0.5157812)(3.4609375,0.43578124)(3.4609375,0.39078125) \curveto(3.4609375,0.34578124)(3.4659376,0.23578125)(3.4709375,0.17078125) \curveto(3.4759376,0.10578125)(3.4859376,0.0)(3.4909375,-0.03921875) \curveto(3.4959376,-0.07921875)(3.5059376,-0.14921875)(3.5109375,-0.17921875) 
\curveto(3.5159376,-0.20921876)(3.5259376,-0.29921874)(3.5309374,-0.35921875) \curveto(3.5359375,-0.41921875)(3.5509374,-0.5342187)(3.5609374,-0.58921874) \curveto(3.5709374,-0.64421874)(3.5809374,-0.74421877)(3.5809374,-0.7892187) \curveto(3.5809374,-0.83421874)(3.5809374,-0.92921877)(3.5809374,-0.9792187) \curveto(3.5809374,-1.0292188)(3.5809374,-1.1292187)(3.5809374,-1.1792188) \curveto(3.5809374,-1.2292187)(3.5809374,-1.3192188)(3.5809374,-1.3592187) \curveto(3.5809374,-1.3992188)(3.5709374,-1.4992187)(3.5609374,-1.5592188) \curveto(3.5509374,-1.6192187)(3.5159376,-1.6842188)(3.4909375,-1.6892188) \curveto(3.4659376,-1.6942188)(3.4309375,-1.6992188)(3.4009376,-1.6992188) } \psline[linewidth=0.04cm,fillcolor=color7b,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.5409375,0.68078125)(3.0209374,0.82078123) \rput(1.5323437,1.9){$N_1$} \rput(3.2323437,1.9){$N_2$} \rput(2.3,0.41078126){$f$} \rput(2.390625,-1.3692187){$N'$} \rput(4.572344,-0.12921876){$N$} \rput(4.4423437,2.2507813){$M$} \psdots[dotsize=0.12](0.1809375,1.6207813) \rput(0.22234374,1.2){$a$} \end{pspicture} } \end{center} \begin{definition} Let $N\prec_{\C{K}} M$. We say that $a$ is {\it tame-independent} from $M$ over $N$ iff for every $\varepsilon>0$ we have that $a\indep^{T,\varepsilon}_N M$. We denote this by $a\indep^T_{N} M$. \end{definition} In the rest of this section we will prove some basic properties of {\it tame independence}. \begin{proposition}[Monotonicity]\label{monotonicityt} Let $M_0\prec_{\C{K}} M_1\prec_{\C{K}} M_2\prec_{\C{K}} M_3$ and suppose that $a\indep^T_{M_0}{M_3}$. Then $a\indep^T_{M_1} M_2$. \end{proposition} \bdem Since $a\indep^T_{M_0}{M_3}$, given $\varepsilon>0$ there exists a model $N'\prec_{\C{K}} M_0$ with density character $<\zeta^*$ such that for all models $N'\prec_{\C{K}} N_1\stackrel{h}{\cong}_{N'} N_2\prec_{\C{K}} M_3$ with density character $<\zeta^*$ we have that \linebreak $d({\sf ga\mhyphen tp}(a/N_2),{\sf ga\mhyphen tp}(h(a)/N_2))<\varepsilon$. But we have that $N'\prec_{\C{K}} M_1$, and this holds in particular if $N'\prec_{\C{K}} N_1\stackrel{h}{\cong}_{N'}N_2\prec_{\C{K}} M_2$. Therefore, $a\indep^T_{M_1} M_2$. \edem[Prop. \ref{monotonicityt}] \begin{fact}[Invariance] Let $f\in Aut(\mathbb{M})$. If $a\indep^{T,\varepsilon}_N M$ then $f(a)\indep^{T,\varepsilon}_{f(N)} f(M)$. \end{fact} The following proposition strongly uses the $\lambda$-d-stability hypothesis. \begin{proposition}[Locality]\label{Locality2} For every $N$, $a$ and every $\varepsilon>0$ there exists $M\prec_{\C{K}} N$ of density character $<\zeta^*$ such that $a\indep^{T,\varepsilon}_{M} N$. \end{proposition} \bdem Suppose that there exists $p:={\sf ga\mhyphen tp}(\overline{a}/N)$ such that $p\not\hspace{-2.5mm}\indep^{T,\varepsilon}_M N$ for every $M\prec_{\C{K}} N$ with density character $<\zeta^*$. If $\overline{a}\in N$, it is straightforward to see that $p$ does not $\varepsilon$-split over its domain. Then, suppose that $\overline{a}\notin N$. \\ \\ \indent We will construct a sequence of models $\langle M_\alpha, N_{\alpha,1}, N_{\alpha,2} : \alpha<\zeta \rangle$ in the following way: First, take $M_0\prec_{\C{K}} N$ as any submodel of density character $<\zeta^*$. \\ \\ \indent Suppose $\alpha:=\gamma+1$ and that $M_\gamma$ (with density character $<\zeta^*$) has been constructed. Therefore $p$ $\varepsilon$-splits over $M_\gamma$.
Then there exist $M_\gamma\prec_{\C{K}} N_{\gamma,1},N_{\gamma,2}\prec_{\C{K}} N $ with density character $<\zeta^*$ and $F_\gamma: N_{\gamma,1}\cong_{M_\gamma} N_{\gamma,2}$ such that $d(F_\gamma(p\upharpoonright N_{\gamma,1}),p\upharpoonright N_{\gamma,2})\ge \varepsilon$. Take $M_{\gamma+1}\prec_{\C{K}} N$ a submodel of density character $<\zeta^*$ which contains $|N_{\gamma,1}|\cup |N_{\gamma,2}|$. At limit stages $\alpha<\zeta$, take $M_\alpha:=\overline{\bigcup_{\gamma<\alpha}M_{\gamma}}$. \begin{remark} Notice that $\langle M_\gamma : \gamma<\zeta \rangle$ is a $\prec_{\C{K}}$-increasing and continuous sequence such that $a\not\hspace{-2.0mm}\indep^{T,\varepsilon}_{M_\gamma} M_{\gamma+1}$ for every $\gamma<\zeta$ (because $M_{\gamma+1}$ contains the models that witness the $\varepsilon$-tame splitting). \end{remark} \indent Let us construct a sequence $\langle M_{\alpha}^* :\alpha\le \zeta \rangle$ of models and a tree $\langle h_{\eta}:\eta\in\;^\alpha 2 \rangle$ ($\alpha\le \zeta$) of $\C{K}$-embeddings such that: \begin{enumerate} \item $\gamma<\alpha$ implies $M_\gamma^*\prec_{\C{K}} M_\alpha^*$. \item $M_{\alpha}^*:=\overline{\bigcup_{\gamma<\alpha}M_{\gamma}^*}$ if $\alpha$ is limit. \item $\gamma<\alpha$ and $\eta\in\;^{\alpha}2$ imply that $h_{\eta\upharpoonright \gamma}\subset h_{\eta}$. \item $h_\eta:M_{\alpha}\to M_{\alpha}^*$ for every $\eta\in \;^\alpha 2$. \item If $\eta\in\; ^\gamma 2$ then $h_{\eta^\frown 0}(N_{\gamma,1})=h_{\eta^\frown 1}(N_{\gamma,2})$. \end{enumerate} Take $M_0^*:=M_0$ and $h_{\langle \rangle}:=id_{M_0}$. \\ \\ \indent If $\alpha$ is limit, take $M_{\alpha}^*:=\overline{\bigcup_{\gamma<\alpha}M_{\gamma}^*}$ and if $\eta\in\hspace{.1mm}^\alpha 2$ define $h_{\eta}:=\overline{\bigcup_{\gamma<\alpha}h_{\eta\upharpoonright \gamma} }$. \\ \\ \indent If $\alpha:=\gamma+1$, let $\eta\in\hspace{.1mm}^{\gamma}2$. Take $\overline{h_{\eta}}\supset h_\eta$ any automorphism of the monster model $\mathbb{M}$ (this is possible because $\mathbb{M}$ is homogeneous). \\ \\ \indent Notice that $\overline{h_{\eta}}\circ F_{\gamma}(N_{\gamma,1})=\overline{h_{\eta}}(N_{\gamma,2})$. Define $h_{\eta^\frown 0}$ as any extension of $\overline{h_{\eta}}\circ F_\gamma$ to $M_{\gamma+1}$ and $h_{\eta^\frown 1}$ as $\overline{h_{\eta}}\upharpoonright M_{\gamma+1}$. Take $M_{\gamma+1}^*\prec_{\C{K}} N$ as any model with density character $<\zeta^*$ which contains $h_{\eta^\frown l}(M_{\gamma+1})$ for any $\eta\in \hspace{0.1mm}^\gamma 2$ and $l=0,1$. \\ \\ \indent Take $H_{\eta}$ an automorphism of $\mathbb{M}$ which extends $h_{\eta}$, for every $\eta\in\hspace{0.1mm}^{\zeta} 2$. \begin{claim}\label{LessE2} If $\eta\neq \nu \in\hspace{.1mm}^{\zeta} 2$ then $d({\sf ga\mhyphen tp}(H_\eta(\overline{a})/M_\zeta^*), {\sf ga\mhyphen tp}(H_\nu(\overline{a})/M_\zeta^*))\ge \varepsilon$. \end{claim} \bdem Suppose not; then $d({\sf ga\mhyphen tp}(H_\eta(\overline{a})/M_\zeta^*), {\sf ga\mhyphen tp}(H_\nu(\overline{a})/M_\zeta^*))< \varepsilon$. Let $\rho:=\eta\land \nu$. Without loss of generality, suppose that $\rho^\frown 0\le \eta$ and $\rho^\frown 1\le \nu$. Let $\gamma:=length(\rho)$. Since $h_{\rho^\frown 0}(N_{\gamma,1})=h_{\rho^{\frown }1}(N_{\gamma,2})\prec_{\C{K}} M_{\zeta}^*$, therefore\linebreak $d({\sf ga\mhyphen tp}(H_\eta(\overline{a})/h_{\rho^\frown 0}(N_{\gamma,1})), {\sf ga\mhyphen tp}(H_\nu(\overline{a})/h_{\rho^{\frown}1}(N_{\gamma,2})))< \varepsilon$.
Also \begin{eqnarray*} d({\sf ga\mhyphen tp}(H_{\nu}^{-1}\circ H_\eta(\overline{a})/F_{\gamma}(N_{\gamma,1})), {\sf ga\mhyphen tp}(\overline{a}/N_{\gamma,2})) &=&\\ d({\sf ga\mhyphen tp}(H_\eta(\overline{a})/h_{\rho^\frown 0}(N_{\gamma,1})), {\sf ga\mhyphen tp}(H_\nu(\overline{a})/h_{\rho^{\frown}1}(N_{\gamma,2}))) &<& \varepsilon \end{eqnarray*} (since $H_{\nu}$ is an isometry, $h_{\rho^{\frown}0}=h_{\rho}\circ F_{\gamma}$, $\rho< \nu$, $\rho^{\frown}0\le \eta$ and $\rho^{\frown}1\le \nu$). Since $H_{\nu}^{-1}\circ H_\eta \supset F_{\gamma}$, then $d(F_{\gamma}(p\upharpoonright N_{\gamma,1}),p\upharpoonright N_{\gamma,2})<\varepsilon$, which contradicts the choice of $N_{\gamma,1}$, $N_{\gamma,2}$ and $F_{\gamma}$. \edem[Claim \ref{LessE2}] We have that $dc(M_{\zeta}^*)\le \lambda$ (because $dc(M_{\zeta}^*)\le \zeta^*\cdot \zeta =\max\{\mu^+,\zeta\}\cdot \zeta \le \lambda $). Take $M^*\succ_{\C{K}} M_{\zeta}^*$ of density character $\lambda$; so by claim \ref{LessE2} we have that $dc({\sf ga\mhyphen S}(M^*))\ge 2^{\zeta}>\lambda$, which contradicts $\lambda$-$d$-stability. \edem[Prop. \ref{Locality2}] \begin{proposition}[Weak stationarity over universal models]\label{stationatity2} For every $\varepsilon>0$ there exists $\delta$ such that for every $N_0\prec_{\C{K}} N_1\prec_{\C{K}} N_2$ and every $a,b$: if $N_1$ is universal over $N_0$, $a,b\indep^{T,\delta}_{N_0} N_2$ and $$\mathbf{d}({\sf ga\mhyphen tp}(a/N_1),{\sf ga\mhyphen tp}(b/N_1))<\delta,$$ then $$\mathbf{d}({\sf ga\mhyphen tp}(a/N_2),{\sf ga\mhyphen tp}(b/N_2))<\varepsilon.$$ \end{proposition} \bdem Take $\delta:=\delta_\varepsilon/3$ (see the definition of tameness, \ref{tameness}). Let $N^*\prec_{\C{K}} N_0$ be a model of density character $<\zeta^*$ which witnesses $a,b\indep^{T,\delta}_{N_0} N_2$. Let $M^{\circ} \prec_{\C{K}} N_2$ be a model of density character $\mu$. Let $M^*\prec_{\C{K}} N_2$ be a model of density character $<\zeta^*$ which contains $|N^*|\cup|M^\circ|$. Since $N_1$ is universal over $N_0$, in particular it is $<\zeta^*$-universal over $N^*$. Therefore, there exists a model $M'$ such that $N^*\prec_{\C{K}} M'\prec_{\C{K}} N_1$ and an isomorphism $f:M'\stackrel{f}{\cong}_{N^*} M^*$. Since $N^*$ witnesses that $a,b\indep^{T,\delta}_{N_0} N_2$ and $N^*\prec_{\C{K}} M'\stackrel{f}{\cong }_{N^* }M^* \prec_{\C{K}} N_2$, it follows that $$\mathbf{d}({\sf ga\mhyphen tp}(a/M^*),{\sf ga\mhyphen tp}(f(a)/M^*))<\delta$$ and $$\mathbf{d}({\sf ga\mhyphen tp}(b/M^*),{\sf ga\mhyphen tp}(f(b)/M^*))<\delta.$$ Also, since $f$ is an isometry, by hypothesis we have that \begin{eqnarray*} \mathbf{d}({\sf ga\mhyphen tp}(f(a)/M^*),{\sf ga\mhyphen tp}(f(b)/M^*)) &=&\mathbf{d}({\sf ga\mhyphen tp}(a/M'),{\sf ga\mhyphen tp}(b/M'))\\ &\le& \mathbf{d}({\sf ga\mhyphen tp}(a/N_1),{\sf ga\mhyphen tp}(b/N_1))\\ &<&\delta \end{eqnarray*} Therefore: \begin{eqnarray*} \mathbf{d}({\sf ga\mhyphen tp}(a/M^\circ),{\sf ga\mhyphen tp}(b/M^\circ)) &\le& \mathbf{d}({\sf ga\mhyphen tp}(a/M^*),{\sf ga\mhyphen tp}(b/M^*))\\ &\le& \mathbf{d}({\sf ga\mhyphen tp}(a/M^*),{\sf ga\mhyphen tp}(f(a)/M^*))\\ && + \mathbf{d}({\sf ga\mhyphen tp}(f(a)/M^*),{\sf ga\mhyphen tp}(f(b)/M^*))\\ && + \mathbf{d}({\sf ga\mhyphen tp}(f(b)/M^*),{\sf ga\mhyphen tp}(b/M^*))\\ &<& 3\delta=\delta_\varepsilon\\ \end{eqnarray*} By $\mu$-$\mathbf{d}$-tameness, we have that $\mathbf{d}({\sf ga\mhyphen tp}(a/N_2),{\sf ga\mhyphen tp}(b/N_2))<\varepsilon$. \\ \edem[Prop. \ref{stationatity2}] \section{A stability transfer theorem} First, we provide a general stability transfer theorem.
\begin{theorem}\label{stab_transfer2} Let $\C{K}$ be a $\mu$-$\mathbf{d}$-tame (for some $\mu<\kappa$) MAEC. Suppose that $\C{K}$ is $[LS(\C{K}),\kappa)$-cofinally-d-stable. Define $\lambda:=\min\{\theta<\kappa: \mu< \theta \text{ and $\C{K}$ is $\theta$-$\mathbf{d}$-stable }\}$, $\zeta:=\min\{\xi: 2^{\xi}>\lambda\}$ and $\zeta^*:=\max\{\mu^+,\zeta\}$. If $cf(\kappa)\ge \zeta^*$ then $\C{K}$ is $\kappa$-$\mathbf{d}$-stable. \end{theorem} \bdem Suppose that the theorem is false. Let $M\in \C{K}$ be a model of density character $\kappa$ such that there are $a_i$ ($i<\kappa^+$) with\linebreak $\mathbf{d}({\sf ga\mhyphen tp}(a_i/M),{\sf ga\mhyphen tp}(a_j/M))\ge \varepsilon$ for every $i<j<\kappa^+$ and for some fixed $\varepsilon>0$. {\it Without loss of generality}, we can assume that $M$ is the completion of the union of a $\prec_{\C{K}}$-increasing sequence $(M_i:i<cf(\kappa))$ such that $LS(\C{K})\le dc(M_i)<\kappa$ and $M_{i+1}$ is universal over $M_i$ for every $i<cf(\kappa)$ (this is possible by fact \ref{Existence_Universal} and cofinal-d-stability). By proposition \ref{Locality2}, for every $\varepsilon>0$ and every $i<\kappa^+$ there exists $M_{i,\varepsilon}\prec_{\C{K}} M$ of density character $<\zeta^*$ such that $a_i\indep^{T,\varepsilon}_{M_{i,\varepsilon}} M$. Since $dc(M_{i,\varepsilon})<\zeta^*\le cf(\kappa)$, there exists $j_i<cf(\kappa)$ such that $M_{i,\varepsilon}\prec_{\C{K}} M_{j_i}$. By monotonicity of $\indep^{T,\varepsilon}$, we have that $a_i\indep^{T,\varepsilon}_{M_{j_i}} M$. By the pigeon-hole principle, there exist $i^*<cf(\kappa)$ and $X\subset \kappa^+$ of size $\kappa^+$ such that for every $k\in X$ we have that $a_k\indep^{T,\varepsilon}_{M_{j_{i^*}}} M$. By proposition \ref{stationatity2}, there exists $\delta>0$ such that $\mathbf{d}({\sf ga\mhyphen tp}(a_k/M_{j_{i^*}+1}),{\sf ga\mhyphen tp}(a_j/M_{j_{i^*}+1}))\ge \delta$ for all $j\neq k\in X$. By hypothesis $\C{K}$ is $[LS(\C{K}),\kappa)$-cofinally-d-stable, hence there exists $dc(M_{j_{i^*}+1})\le \theta' <\kappa$ such that $\C{K}$ is $\theta'$-$\mathbf{d}$-stable; we can take $M^*\succ_{\C{K}} M_{j_{i^*}+1}$ with density character $\theta'$, so $\mathbf{d}({\sf ga\mhyphen tp}(a_k/M^*),{\sf ga\mhyphen tp}(a_j/M^*))\ge \delta$ for every $j\neq k\in X$ (this contradicts $\theta'$-$\mathbf{d}$-stability). \edem[Prop. \ref{stab_transfer2}] The following corollary lets us go up from d-stability in $\aleph_0$ and $\aleph_1$ to d-stability in $\aleph_n$ for every $n<\omega$. \begin{corollary}\label{stab_spectrum1} Let $\C{K}$ be an $\aleph_0$-$\mathbf{d}$-tame MAEC. Suppose that $\C{K}$ is $\aleph_0$-$d$-stable and $\aleph_1$-d-stable. Then $\C{K}$ is $\aleph_n$-d-stable for all $n<\omega$. \end{corollary} \bdem Consider $\mu:=\aleph_0$ and $\kappa:=\aleph_2$. Notice that $\lambda:=\min\{\theta<\kappa: \mu< \theta \text{ and $\C{K}$ is $\theta$-$\mathbf{d}$-stable }\}=\aleph_1$ and $\zeta:=\min\{\xi: 2^{\xi}>\lambda\}\le \aleph_1$. So, $\zeta^*:=\max\{\mu^+,\zeta\}=\aleph_1$ (independently of whether CH holds). In this case, $a\indep^T_N M$ (based on $<\zeta^*$-$\varepsilon$-non-splitting) means that given $\varepsilon$ there exists a separable model $N_\varepsilon\prec_{\C{K}} N$ such that $a\indep^{T,\varepsilon}_{N_\varepsilon} M$. Notice that $cf(\kappa)=\aleph_2\ge \zeta^*=\aleph_1$, so by theorem \ref{stab_transfer2} we have that $\C{K}$ is $\aleph_2$-d-stable. By an inductive argument, we have that $\C{K}$ is $\aleph_n$-d-stable for all $n<\omega$. \edem[Cor.
\ref{stab_spectrum1}] The following corollary says that, under the superstability-like assumption below, we can get $\aleph_\omega$-d-stability from d-stability in $\aleph_n$ for every $n<\omega$. \begin{assumption}[$\varepsilon$-locality]\label{superstability_tameness} For every tuple $\overline{a}$, every $\varepsilon>0$ and every increasing and continuous $\prec_{\C{K}}$-chain of models $\langle M_i : i<\sigma \rangle$, there exists $j<\sigma$ such that $\overline{a}\indep^{T,\varepsilon}_{M_j} \overline{\bigcup_{i<\sigma} M_i}$. \end{assumption} \begin{corollary}\label{superstability_spectrum1} Let $\C{K}$ be an $\aleph_0$-d-tame, $\aleph_0$-d-stable and $\aleph_1$-d-stable MAEC which satisfies assumption \ref{superstability_tameness}. Then $\C{K}$ is $\aleph_\omega$-d-stable. \end{corollary} \bdem By corollary \ref{stab_spectrum1}, $\C{K}$ is $\aleph_n$-d-stable for all $n<\omega$. By reductio ad absurdum, suppose $\C{K}$ is not $\aleph_\omega$-d-stable. So, there exists $M\in \C{K}$ of density character $\aleph_\omega$ such that $dc({\sf ga\mhyphen S}(M))\ge \aleph_{\omega+1}$. {\it Without loss of generality}, we may assume $M$ is the completion of the union of a $\prec_{\C{K}}$-increasing and continuous chain $\{M_n : n<\omega\}$ where $dc(M_n)=\aleph_n$ and $M_{n+1}$ is universal over $M_n$ for all $n<\omega$ (this is possible by fact \ref{Existence_Universal} and $\aleph_n$-d-stability). So, there exist $\varepsilon>0$ and $a_i\in \mathbb{M}$ ($i<\aleph_{\omega+1}$) such that $d({\sf ga\mhyphen tp}(a_i/M),{\sf ga\mhyphen tp}(a_j/M))\ge \varepsilon$ for all $i\neq j<\aleph_{\omega+1}$ (we can find them using the same argument as when the space is not separable, because $cf(\aleph_{\omega+1})>\omega$, see \cite{Lima,Wilansky}). \\ \\ By $\aleph_0$-d-tameness, there exists $\delta_\varepsilon>0$ such that for every $p,q\in {\sf ga\mhyphen S}(M)$, if $d(p,q)\ge \varepsilon$ then there exists $M'\prec_{\C{K}} M$ of density character $\aleph_0$ such that $d(p\upharpoonright M',q\upharpoonright M')\ge \delta_\varepsilon$ (see definition \ref{tameness}). Define $\delta:=\delta_\varepsilon/3$. \\ \\ On the other hand, given $i<\aleph_{\omega+1}$, by the superstability-like assumption \ref{superstability_tameness} there exists $n_i<\omega$ such that $a_i\indep^{T,\delta}_{M_{n_i}} M$. Since $cf(\aleph_{\omega+1})=\aleph_{\omega+1}>\omega$, by the pigeon-hole principle there exist a fixed $n<\omega$ and $X\subset\aleph_{\omega+1}$ of size $\aleph_{\omega+1}$ such that $a_i\indep^{T,\delta}_{M_n} M$ for all $i\in X$. \\ \\ Notice that for every $i\neq j\in X$, $d({\sf ga\mhyphen tp}(a_i/M),{\sf ga\mhyphen tp}(a_j/M))\ge \varepsilon$ and\linebreak $a_i,a_j\indep^{T,\delta}_{M_n} M$. We may say that $$ d({\sf ga\mhyphen tp}(a_i/M_{n+1}),{\sf ga\mhyphen tp}(a_j/M_{n+1}))\ge \delta. $$ If not, suppose $d({\sf ga\mhyphen tp}(a_i/M_{n+1}),{\sf ga\mhyphen tp}(a_j/M_{n+1}))< \delta$. Let $N^*\prec_{\C{K}} M_{n}$ be a model of density character $\aleph_0$ which witnesses $a_i,a_j\indep^{T,\delta}_{M_n} M$. Let $M^{\circ} \prec_{\C{K}} M$ be any model of density character $\aleph_0$. Let $M^*\prec_{\C{K}} M$ be a model of density character $\aleph_0$ which contains $|N^*|\cup|M^\circ|$. Since $M_{n+1}$ is universal over $M_n$, in particular it is universal over $N^*$. Therefore, there exists a model $M'$ such that $N^*\prec_{\C{K}} M'\prec_{\C{K}} M_{n+1}$ and an isomorphism $f:M'\stackrel{f}{\cong}_{N^*} M^*$.
Since $N^*$ witnesses that $a_i,a_j\indep^{T,\delta}_{M_n} M$ and $N^*\prec_{\C{K}} M'\stackrel{f}{\cong }_{N^* }M^* \prec_{\C{K}} M$, it follows that $$\mathbf{d}({\sf ga\mhyphen tp}(a_i/M^*),{\sf ga\mhyphen tp}(f(a_i)/M^*))<\delta$$ and $$\mathbf{d}({\sf ga\mhyphen tp}(a_j/M^*),{\sf ga\mhyphen tp}(f(a_j)/M^*))<\delta.$$ Since $M'\prec_{\C{K}} M_{n+1}$, we have that \begin{eqnarray*} \mathbf{d}({\sf ga\mhyphen tp}(a_i/M'),{\sf ga\mhyphen tp}(a_j/M')) &\le& \mathbf{d}({\sf ga\mhyphen tp}(a_i/M_{n+1}),{\sf ga\mhyphen tp}(a_j/M_{n+1}))\\ &<&\delta \end{eqnarray*} so, \begin{eqnarray*} \mathbf{d}({\sf ga\mhyphen tp}(f(a_i)/M^*),{\sf ga\mhyphen tp}(f(a_j)/M^*)) &=& \mathbf{d}({\sf ga\mhyphen tp}(a_i/M'),{\sf ga\mhyphen tp}(a_j/M'))\\ &<& \delta \end{eqnarray*} Therefore: \begin{eqnarray*} \mathbf{d}({\sf ga\mhyphen tp}(a_i/M^\circ),{\sf ga\mhyphen tp}(a_j/M^\circ)) &\le& \mathbf{d}({\sf ga\mhyphen tp}(a_i/M^*),{\sf ga\mhyphen tp}(a_j/M^*))\\ &\le& \mathbf{d}({\sf ga\mhyphen tp}(a_i/M^*),{\sf ga\mhyphen tp}(f(a_i)/M^*))\\ && + \mathbf{d}({\sf ga\mhyphen tp}(f(a_i)/M^*),{\sf ga\mhyphen tp}(f(a_j)/M^*))\\ && + \mathbf{d}({\sf ga\mhyphen tp}(f(a_j)/M^*),{\sf ga\mhyphen tp}(a_j/M^*))\\ &<& 3\delta=\delta_\varepsilon\\ \end{eqnarray*} By $\aleph_0$-$\mathbf{d}$-tameness, we have that $\mathbf{d}({\sf ga\mhyphen tp}(a_i/M),{\sf ga\mhyphen tp}(a_j/M))<\varepsilon$ (contradiction). \\ \\ Hence $dc({\sf ga\mhyphen S}(M_{n+1}))\ge \aleph_{\omega+1}>\aleph_{n+1}$, contradicting $\aleph_{n+1}$-d-stability. \ \ \ \ \ \edem[Cor. \ref{superstability_spectrum1}] \begin{corollary}[weak superstability]\label{weak_superstability} Let $\C{K}$ be an $\aleph_0$-d-tame, $\aleph_0$-d-stable and $\aleph_1$-d-stable MAEC, which also satisfies assumption \ref{superstability_tameness} (countable locality of $\varepsilon$-splitting). Then $\C{K}$ is $\kappa$-$d$-stable for every cardinality $\kappa$. \end{corollary} \bdem By induction on all cardinalities $\kappa\ge \aleph_0$, we prove that $\C{K}$ is $\kappa$-d-stable. By hypothesis, we have that $\C{K}$ is $\aleph_0$- and $\aleph_1$-d-stable. \\ \\ Suppose $\C{K}$ is $\lambda$-d-stable for all $\lambda<\kappa$. Notice that $\mu=\aleph_0$, $\lambda=\min\{\theta>\mu: \C{K} \text{\ is $\theta$-d-stable }\}=\aleph_1$, $\zeta=\min\{\xi: 2^\xi>\lambda\}\le \aleph_1$ and $\zeta^*=\max\{\mu^+,\zeta\}=\aleph_1$. If $cf(\kappa)>\aleph_0$ then $cf(\kappa)\ge \aleph_1=\zeta^*$, so by theorem \ref{stab_transfer2} $\C{K}$ is $\kappa$-d-stable. \\ \\ If $cf(\kappa)=\omega$, the argument given in corollary \ref{superstability_spectrum1} works for proving that $\C{K}$ is $\kappa$-d-stable. For the sake of completeness, we provide the proof when $cf(\kappa)=\omega$. Let $\Lambda:\aleph_0\to \kappa$ be a cofinal mapping. By the inductive hypothesis, $\C{K}$ is $\Lambda(n)$-d-stable for each $n<\omega$. By reductio ad absurdum, suppose $\C{K}$ is not $\kappa$-d-stable. So, there exists $M\in \C{K}$ of density character $\kappa$ such that $dc({\sf ga\mhyphen S}(M))\ge \kappa^+$. Without loss of generality, we may assume $M$ is the completion of the union of a $\prec_{\C{K}}$-increasing and continuous chain $\{M_n : n<\omega\}$ where $dc(M_n)=\Lambda(n)$ and $M_{n+1}$ is universal over $M_n$ for all $n<\omega$ (this is possible by fact~\ref{Existence_Universal} and $\Lambda(n)$-d-stability). Moreover, there exist $\varepsilon>0$ and $a_i\in \mathbb{M}$ ($i<\kappa^+$) such that $d({\sf ga\mhyphen tp}(a_i/M),{\sf ga\mhyphen tp}(a_j/M))\ge \varepsilon$ for all $i\neq j<\kappa^+$.
Let $\delta:=\delta_\varepsilon/3$ (where $\delta_\varepsilon$ is given in definition \ref{tameness}, the definition of tameness). On the other hand, given $i<\kappa^+$, by the superstability-like assumption \ref{superstability_tameness} there exists $n_i<\omega$ such that $a_i\indep^{T,\delta}_{M_{n_i}} M$. Since $cf(\kappa^+)=\kappa^+>\omega$, by the pigeon-hole principle there exist a fixed $n<\omega$ and $X\subset\kappa^+$ of size $\kappa^+$ such that $a_i\indep^{T,\delta}_{M_n} M$ for all $i\in X$. \\ \\ Notice that for every $i\neq j\in X$, $d({\sf ga\mhyphen tp}(a_i/M),{\sf ga\mhyphen tp}(a_j/M))\ge \varepsilon$ and\linebreak $a_i,a_j\indep^{T,\delta}_{M_n} M$. So, by the argument given in corollary \ref{superstability_spectrum1} we may say that $$ d({\sf ga\mhyphen tp}(a_i/M_{n+1}),{\sf ga\mhyphen tp}(a_j/M_{n+1}))\ge \delta. $$ Hence $dc({\sf ga\mhyphen S}(M_{n+1}))\ge \kappa^+>\Lambda(n+1)$, which contradicts $\Lambda(n+1)$-d-stability. \edem[Cor. \ref{weak_superstability}] \end{document}
\begin{document} \title{Some Structural Properties of the Standard Quantized Matrix Algebra $M_q(n)$} \begin{center} \begin{minipage}{120mm} {\small {\bf Abstract.} Let $M_q(n)$ be the standard quantized matrix algebra introduced by Faddeev, Reshetikhin, and Takhtajan. It is shown explicitly that the defining relations of $M_q(n)$ form a Gr\"obner-Shirshov basis. Consequently, several structural properties of $M_q(n)$ are derived.} \end{minipage}\end{center} {\parindent=0pt\par {\bf Key words:} quantized matrix algebra; Gr\"obner-Shirshov basis; PBW basis} \vskip -.5truecm \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \let\footnote\relax\footnotetext{E-mail: [email protected]} \let\footnote\relax\footnotetext{2010 Mathematics Subject Classification: 16W50.} \def\N{\mathbb{N}} \def\QED{ {$\Box$}} \def \r{\rightarrow} \def\mapright#1#2{\smash{\mathop{\longrightarrow}\limits^{#1}_{#2}}} \def\vsp{\vskip .5truecm} \def\OV#1{\overline {#1}} \def\hang{\hangindent\parindent} \def\textindent#1{\indent\llap{#1\enspace}\ignorespaces} \def\item{\par\hang\textindent} \def\LH{{\bf LH}}\def\LM{{\bf LM}}\def\LT{{\bf LT}}\def\KX{K\langle X\rangle} \def\KZ{K\langle Z\rangle} \def\B{{\cal B}} \def\LC{{\bf LC}} \def\G{{\cal G}} \def\FRAC#1#2{\displaystyle{\frac{#1}{#2}}} \def\SUM^#1_#2{\displaystyle{\sum^{#1}_{#2}}} \def\WT{\widetilde}\def\PRC{\prec_{d\textrm{\tiny -}rlex}} \section*{1. Introduction} Let $K$ be a field of characteristic 0. The standard quantized matrix algebra $M_q(n)$, introduced in [2], has been widely studied and generalized in different contexts, for instance, [3], [4], [5], and [6]. In this note, we show explicitly that the defining relations of $M_q(n)$ form a Gr\"obner-Shirshov basis. Consequently, this result enables us to derive several structural properties of $M_q(n)$, such as having a PBW $K$-basis, being of Hilbert series $\frac{1}{(1-t)^{n^2}}$, of Gelfand-Kirillov dimension $n^2$, of global homological dimension $n^2$, being a classical Koszul algebra, and having the elimination property for (one-sided) ideals in the sense of [8] (see also [9, A3]). \par For classical Gr\"obner-Shirshov basis theory of noncommutative associative algebras, one is referred to, for instance, [1].\par Throughout this note, $K$ denotes a field of characteristic 0, $K^*=K-\{0\}$, and all $K$-algebras considered are associative with multiplicative identity 1. If $S$ is a nonempty subset of an algebra $A$, then we write $\langle S\rangle$ for the two-sided ideal of $A$ generated by $S$.\par \section*{2.
The defining relations of $M_q(n)$ form a Gr\"obner-Shirshov basis} In this section, all terminologies concerning Gr\"obner-Shirshov bases, such as composition, ambiguity, and normal word, etc., are referred to [1].\vskip .5truecm Let $K$ be a field of characteristic 0, $I(n)=\{(i,j)|i,j=1,2,\cdots,n\}$ with $n\ge 2$, and let $M_q(n)$ be the standard quantized matrix algebra with the set of $n^2$ generators $z=\{ z_{ij}~|~(i,j)\in I(n)\}$, in the sense of [2], namely, $M_q(n)$ is the associative $K$-algebra generated by the $n^2$ given generators subject to the relations: $$\begin{array}{ll} z_{ij}z_{ik}=qz_{ik}z_{ij},&\hbox{if}~j<k ,\\ z_{ij}z_{kj}=qz_{kj}z_{ij},&\hbox{if}~i<k ,\\ z_{ij}z_{st}=z_{st}z_{ij},&\hbox{if}~i<s,\;t<j,\\ z_{ij}z_{st}=z_{st}z_{ij}+(q-q^{-1})z_{it}z_{sj},&\hbox{if}~i<s,~j<t, \end{array}$$ where $i,j,k,s,t=1,2,...,n$ and $q\in K^*$ is the quantum parameter.\vskip .5truecm Now, let $Z=\{Z_{ij}~|~(i,j)\in I(n)\}$, let $\KZ$ be the free associative $K$-algebra generated by $Z$, and let $S$ denote the set of defining relations of $M_q(n)$ in $\KZ$, that is, $S$ consists of the elements $$\begin{array}{ll} (a)~f_{ijik}=Z_{ij}Z_{ik}-qZ_{ik}Z_{ij},&\hbox{if}~j<k,\\ (b)~g_{ijkj}=Z_{ij}Z_{kj}-qZ_{kj}Z_{ij},&\hbox{if}~i<k,\\ (c)~h_{ijst}=Z_{ij}Z_{st}-Z_{st}Z_{ij},&\hbox{if}~i<s,~t<j,\\ (d)~h'_{ijst}=Z_{ij}Z_{st}-Z_{st}Z_{ij}-(q-q^{-1})Z_{it}Z_{sj},&\hbox{if}~i<s,~j<t. \end{array}$$ Then, $M_q(n)\cong\KZ /\langle S\rangle$ as $K$-algebras, where $\langle S\rangle$ denotes the (two-sided) ideal of $\KZ$ generated by $S$, i.e., $M_q(n)$ is presented as a quotient of $\KZ$. Our aim below is to show that $S$ forms a Gr\"obner-Shirshov basis with respect to a certain monomial ordering on $\KZ$. To this end, let us take the deg-rlex ordering $\PRC$ (i.e., the {\it degree-preserving right lexicographic ordering}) on the set $Z^*$ of all mono words in $Z$, i.e., $Z^*$ consists of all words of finite length like $u=Z_{ij}Z_{kl}\cdots Z_{st}$. More precisely, we first take the right lexicographic ordering $\prec_{rlex}$ on $Z^*$ which is the natural extension of the ordering on the set $Z$ of generators of $\KZ$: for $Z_{ij}$, $Z_{kl}\in Z$, $$Z_{kl}<Z_{ij}\Leftrightarrow\left\{\begin{array}{l}k<i,\\ \hbox{or}~k=i~\hbox{and}~l<j, \end{array}\right.$$ and for two words $u=Z_{k_sl_s}\cdots Z_{k_2l_2}Z_{k_1l_1}$, $v=Z_{i_tj_t}\cdots Z_{i_2j_2}Z_{i_1j_1}\in Z^*$, $$\begin{array}{rcl} u\prec_{rlex} v&\Leftrightarrow&\hbox{there exists an}~m\ge 1,~\hbox{such that}\\ &{~}&Z_{k_1l_1}=Z_{i_1j_1}, Z_{k_2l_2}=Z_{i_2j_2},\ldots , Z_{k_{m-1}l_{m-1}}=Z_{i_{m-1}j_{m-1}}\\ &{~}&\hbox{but}~Z_{k_ml_m}<Z_{i_mj_m}\end{array}$$ (note that conventionally the empty word $1\prec_{rlex}Z_{ij}$ for all $Z_{ij}\in Z$). For instance $$Z_{43}Z_{21}Z_{31}\prec_{rlex}Z_{41}Z_{23}Z_{41}\prec_{rlex}Z_{42}Z_{13}Z_{34}Z_{41}.$$ And then, by assigning each $Z_{ij}$ the degree 1, $1\le i,j\le n$, and writing $|u|$ for the degree of a word $u\in Z^*$, we take the degree-preserving right lexicographic ordering $\PRC$ on $Z^*$: for $u,v\in Z^*$, $$u\PRC v\Leftrightarrow\left\{\begin{array}{l}|u|<|v|,\\ \hbox{or}~|u|=|v|~\hbox{and}~u\prec_{rlex}v.
\end{array}\right.$${\parindent=0pt\par It is straightforward to check that $\PRC$ is a monomial ordering on $Z^*$, namely, $\PRC$ is a well-ordering and $$u\PRC v~\hbox{implies}~wur\PRC wvr~\hbox{for all}~u, v, w, r\in Z^*.$$ With this monomial ordering $\PRC$ in hand, we are ready to prove the following result.}\vskip .5truecm {\bf Theorem 2.1} With notation as fixed above, let $J=\langle S\rangle$ be the ideal of $\KZ$ generated by $S$. Then, with respect to the monomial ordering $\PRC$ on $\KZ$, the set $S$ is a Gr\"obner-Shirshov basis of the ideal $J$, i.e., the defining relations of $M_q(n)$ form a Gr\"obner-Shirshov basis.\vskip 6pt {\bf Proof} By [1], it is sufficient to check that all compositions determined by elements in $S$ are trivial modulo $S$. In doing so, let us first fix two more pieces of notation. For an element $f\in\KZ$, we write $\OV{f}$ for the leading mono word of $f$ with respect to $\PRC$, i.e., if $f=\sum_{i=1}^s\lambda_iu_i$ with $\lambda_i\in K$, $u_i\in Z^*$, such that $u_1\PRC u_2\PRC\cdots\PRC u_s$, then $\OV{f}=u_s$. Thus, the set $S$ of defining relations of $M_q(n)$ has the set of leading mono words $$\OV{S}=\left\{\begin{array}{ll} \OV{f}_{ijik}=Z_{ij}Z_{ik},~j<k,& \OV{g}_{ijkj}=Z_{ij}Z_{kj},~i<k,\\ \OV{h}_{ijst}=Z_{ij}Z_{st},~i<s,~t<j,& \OV{h'}_{ijst}=Z_{ij}Z_{st},~i<s,~j<t.\end{array}\right\}$$ Also let us write $(a\wedge b)$ for the composition determined by defining relations $(a)$ and $(b)$ in $S$. Similar notations are used for compositions of other pairs of defining relations in $S$.\par By means of $\OV{S}$ above, we start by listing all possible ambiguities $w$ of compositions of intersections determined by elements in $S$, as follows: $$\begin{array}{lll} (a\wedge a)&w=Z_{ij}Z_{ik}Z_{is},&\hbox{if}~j<k<s,\\ (a\wedge b)&w_1=Z_{ij}Z_{ik}Z_{sk},&\hbox{if}~j<k,~i<s,\\ (a\wedge b)&w_2=Z_{ij}Z_{kj}Z_{ks},&\hbox{if}~i<k,~j<s,\\ (b\wedge b)&w=Z_{ij}Z_{kj}Z_{sj},&\hbox{if}~i<k<s,\\ (a\wedge c)&w_1=Z_{ij}Z_{st}Z_{sk},&\hbox{if}~i<s,~t<j,~t<k,\\ (a\wedge c)&w_2=Z_{ij}Z_{ik}Z_{st},&\hbox{if}~j<k,~i<s,~t<k,\\ (c\wedge c)&w=Z_{ij}Z_{st}Z_{kl},&\hbox{if}~i<s<k,~l<t<j,\\ (b\wedge c)&w_1=Z_{ij}Z_{kj}Z_{st},&\hbox{if}~i<k<s,~t<j,\\ (b\wedge c)&w_2=Z_{ij}Z_{st}Z_{kt},&\hbox{if}~i<s<k,~t<j,\\ (a\wedge d)&w_1=Z_{ij}Z_{st}Z_{sk},&\hbox{if}~i<s,~j<t<k,\\ (a\wedge d)&w_2=Z_{ij}Z_{ik}Z_{st},&\hbox{if}~i<s,~j<k<t,\\ (b\wedge d)&w_1=Z_{ij}Z_{kj}Z_{st},&\hbox{if}~i<k<s,~j<t,\\ (b\wedge d)&w_2=Z_{st}Z_{ij}Z_{kj},&\hbox{if}~s<i<k,\;t<j,\\ (c\wedge d)&w_1=Z_{ij}Z_{st}Z_{kl},&\hbox{if}~i<s<k,~t<j,~t<l,\\ (c\wedge d)&w_2=Z_{kl}Z_{ij}Z_{st},&\hbox{if}~k<i<s,~t<j,~l<j,\\ (d\wedge d)&w=Z_{ij}Z_{st}Z_{kl},&\hbox{if}~i<s<k,~j<t<l.\end{array}$$ Instead of writing down all tedious verification processes, below we shall record only the verification processes of five typical cases: $$\begin{array}{ll} (a\wedge b)&\hbox{with}~w_1=Z_{ij}Z_{ik}Z_{sk},\\ (a\wedge c)&\hbox{with}~w_1=Z_{ij}Z_{st}Z_{sk}, \\ (a\wedge d)&\hbox{with}~w_1=Z_{ij}Z_{st}Z_{sk},\\ (c\wedge d)&\hbox{with}~w_1=Z_{ij}Z_{st}Z_{kl},\\ (d\wedge d)&\hbox{with}~w=Z_{ij}Z_{st}Z_{kl},\end{array}$$ because other cases can be checked in a similar way (the interested reader may contact the author directly in order to see the other verification processes).\par $\bullet$ The case $(a\wedge b)$ with $w_1=Z_{ij}Z_{ik}Z_{sk},$ where $j<k$, $i<s$.\par Since $w_1=\OV{f}_{ijik}Z_{sk}=Z_{ij}\OV{g}_{iksk}$, we have $$\begin{array}{rcl} (f_{ijik},g_{iksk})_{w_1}&=&f_{ijik}Z_{sk}-Z_{ij}g_{iksk}\\ &=&-qZ_{ik}Z_{ij}Z_{sk}+qZ_{ij}Z_{sk}Z_{ik}\\
&\equiv&-qZ_{ik}[Z_{sk}Z_{ij}+(q-q^{-1})Z_{ik}Z_{sj}] +q[Z_{sk}Z_{ij}+(q-q^{-1})Z_{ik}Z_{sj}]Z_{ik}\\ &\equiv& -q^2Z_{sk}Z_{ik}Z_{ij}-q^2Z_{sj}Z_{ik}^2+Z_{sj}Z_{ik}^2 +q^2Z_{sk}Z_{ik}Z_{ij}+q^2Z_{sj}Z_{ik}^2-Z_{sj}Z_{ik}^2\\ &\equiv& 0~\hbox{mod}(S,w_1). \end{array}$$\par $\bullet$ The case $(a\wedge c)$ with $w_1=Z_{ij}Z_{st}Z_{sk}$, where $i<s,\;t<j,\;t<k$.\par Since $w_1=Z_{ij}\OV{f}_{stsk}=\OV{h}_{ijst}Z_{sk}$, there are three cases to deal with.\par Case 1. If $j=k$, then $$\begin{array}{rcl} (h_{ijst},f_{stsk})_{w_1} &=&h_{ijst}Z_{sk}-Z_{ij}f_{stsk}\\ &=&-Z_{st}Z_{ij}Z_{sk}+qZ_{ij}Z_{sk}Z_{st}\\ &\equiv& -q^2Z_{sk}Z_{st}Z_{ik}+q^2Z_{sk}Z_{st}Z_{ik}\\ &\equiv& 0~\hbox{mod}(S,w_1). \end{array}$$\par Case 2. If $j<k$, then $$\begin{array}{lll} (h_{ijst},f_{stsk})_{w_1}&=&-Z_{st}Z_{ij}Z_{sk}+qZ_{ij}Z_{sk}Z_{st}\\ &\equiv&-Z_{st}[Z_{sk}Z_{ij}+(q-q^{-1})Z_{ik}Z_{sj}]+q[Z_{sk}Z_{ij} +(q-q^{-1})Z_{ik}Z_{sj}]Z_{st}\\ &\equiv&-qZ_{sk}Z_{st}Z_{ij}-qZ_{st}Z_{sj}Z_{ik}+q^{-1}Z_{st}Z_{sj}Z_{ik} +qZ_{sk}Z_{st}Z_{ij}\\ &{~}&+q^2Z_{sj}Z_{st}Z_{ik}-Z_{sj}Z_{st}Z_{ik}\\ &\equiv& 0~\hbox{mod}(S,w_1). \end{array}$$\par Case 3. If $j>k$, then $$\begin{array}{rcl} (h_{ijst},f_{stsk})_{w_1}&=&-Z_{st}Z_{ij}Z_{sk}+qZ_{ij}Z_{sk}Z_{st}\\ &\equiv&-qZ_{sk}Z_{st}Z_{ij}+ qZ_{sk}Z_{st}Z_{ij}\equiv 0~\hbox{mod}(S,w_1).\end{array}$$\par $\bullet$ The case $(a\wedge d)$ with $w_1=Z_{ij}Z_{st}Z_{sk}$, where $i<s$, $j<t<k$.\par Since $w_1=\OV{h'}_{ijst}Z_{sk}=Z_{ij}\OV{f}_{stsk}$, we have $$\begin{array}{rcl} (h'_{ijst},~f_{stsk})_{w_1}&=&h'_{ijst}Z_{sk}-Z_{ij}f_{stsk}\\ &=&-Z_{st}Z_{ij}Z_{sk}-(q-q^{-1})Z_{it}Z_{sj}Z_{sk}+qZ_{ij}Z_{sk}Z_{st}\\ &\equiv&-Z_{st}Z_{sk}Z_{ij}-(q-q^{-1})Z_{st}Z_{ik}Z_{sj} -(q-q^{-1})Z_{sj}Z_{it}Z_{sk}+qZ_{sk}Z_{ij}Z_{st}\\ &{~}&+q(q-q^{-1})Z_{ik}Z_{sj}Z_{st}\\ &\equiv&-qZ_{sk}Z_{st}Z_{ij}-(q-q^{-1})Z_{st}Z_{sj}Z_{ik} -(q-q^{-1})Z_{sj}Z_{sk}Z_{it}\\ &{~}&-(q-q^{-1})^2Z_{sj}Z_{ik}Z_{st} +qZ_{sk}Z_{st}Z_{ij}+q(q-q^{-1})Z_{sj}Z_{st}Z_{ik}\\ &{~}&+q(q-q^{-1})Z_{sk}Z_{it}Z_{sj}\\ &\equiv&-(q-q^{-1})Z_{st}Z_{sj}Z_{ik}-(q-q^{-1})Z_{sk}Z_{sj}Z_{it} -q(q-q^{-1})^2Z_{st}Z_{sj}Z_{ik}\\ &{~}&+q^2(q-q^{-1})Z_{st}Z_{sj}Z_{ik}+q(q-q^{-1})Z_{sk}Z_{sj}Z_{ik}\\ &\equiv& -(q-q^{-1})[q^2-q(q-q^{-1})-1]Z_{st}Z_{sj}Z_{ik}\\ &\equiv&0~\hbox{mod}(S,w_1). \end{array}$$\par $\bullet$ The case $(c\wedge d)$ with $w_1=Z_{ij}Z_{st}Z_{kl}$, where $i<s<k$, $t<j$, $t<l$.\par Since $w_1=\OV{h}_{ijst}Z_{kl}=Z_{ij}\OV{h'}_{stkl}$, we have three cases to consider.\par Case 1. If $l=j$, then $$\begin{array}{rcl} (h_{ijst},h'_{stkl})_{w_1}&=&h_{ijst}Z_{kl}-Z_{ij}h'_{stkl}\\ &=&-Z_{st}Z_{ij}Z_{kj} +Z_{ij}Z_{kj}Z_{st}+(q-q^{-1})Z_{ij}Z_{sj}Z_{kt}\\ &\equiv& -qZ_{st}Z_{kj}Z_{ij}+qZ_{kj}Z_{ij}Z_{st}+q(q-q^{-1})Z_{sj}Z_{ij}Z_{kt}\\ &\equiv& -qZ_{kj}Z_{st}Z_{ij}-q(q-q^{-1})Z_{sj}Z_{kt}Z_{ij} +qZ_{kj}Z_{st}Z_{ij}+q(q-q^{-1})Z_{sj}Z_{kt}Z_{ij}\\ &\equiv& 0~\hbox{mod}(S,w_1). \end{array}$$\par Case 2.
If $l>j$, then $i<s<k$, $t<j<l$, and $$\begin{array}{rcl} (h_{ijst},h'_{stkl})_{w_1}&=&h_{ijst}Z_{kl}-Z_{ij}h'_{stkl}\\ &=&-Z_{st}Z_{ij}Z_{kl}+Z_{ij}Z_{kl}Z_{st}+(q-q^{-1})Z_{ij}Z_{sl}Z_{kt}\\ &\equiv& -Z_{st}Z_{kl}Z_{ij}-(q-q^{-1})Z_{st}Z_{il}Z_{kj}+Z_{kl}Z_{ij}Z_{st} +(q-q^{-1})Z_{il}Z_{kj}Z_{st}\\ &{~}&+(q-q^{-1})Z_{sl}Z_{ij}Z_{kt}+(q-q^{-1})^2Z_{il}Z_{sj}Z_{kt}\\ &\equiv& -Z_{kl}Z_{st}Z_{ij}-(q-q^{-1})Z_{sl}Z_{kt}Z_{ij}-(q-q^{-1})Z_{st}Z_{kj}Z_{il} +Z_{kl}Z_{st}Z_{ij}\\ &{~}&+(q-q^{-1})Z_{kj}Z_{il}Z_{st}+(q-q^{-1})Z_{sl}Z_{kt}Z_{ij} +(q-q^{-1})^2Z_{sj}Z_{kt}Z_{il}\\ &\equiv&-(q-q^{-1})Z_{kj}Z_{st}Z_{il}-(q-q^{-1})^2Z_{sj}Z_{kt}Z_{il} +(q-q^{-1})Z_{kj}Z_{st}Z_{il}\\ &{~}&+(q-q^{-1})^2Z_{sj}Z_{kt}Z_{il}\\ &\equiv&0~\hbox{mod}(S,w_1). \end{array}$$\par Case 3. If $l<j$, then $i<s<k$, $t<l<j$, and $$\begin{array}{rcl} (h_{ijst},h'_{stkl})_{w_1}&=&h_{ijst}Z_{kl}-Z_{ij}h'_{stkl}\\ &=&-Z_{st}Z_{ij}Z_{kl} +Z_{ij}Z_{kl}Z_{st}+(q-q^{-1})Z_{ij}Z_{sl}Z_{kt}\\ &\equiv& -Z_{st}Z_{kl}Z_{ij}+Z_{kl}Z_{ij}Z_{st}+(q-q^{-1})Z_{sl}Z_{kt}Z_{ij}\\ &\equiv& -Z_{kl}Z_{st}Z_{ij}-(q-q^{-1})Z_{sl}Z_{kt}Z_{ij}+Z_{kl}Z_{st}Z_{ij} +(q-q^{-1})Z_{sl}Z_{kt}Z_{ij}\\ &\equiv&0~\hbox{mod}(S,w_1). \end{array}$$\par $\bullet$ The case $(d\wedge d)$ with $w=Z_{ij}Z_{st}Z_{kl}$, where $i<s<k,\;j<t<l$. \par Since $w=\OV{h'}_{ijst}Z_{kl}=Z_{ij}\OV{h'}_{stkl}$, we have $$\begin{array}{rcl} (h'_{ijst},h'_{stkl})_{w}&=&h'_{ijst}Z_{kl}-Z_{ij}h'_{stkl}\\ &=&-Z_{st}Z_{ij}Z_{kl}-(q-q^{-1})Z_{it}Z_{sj}Z_{kl} +Z_{ij}Z_{kl}Z_{st}+(q-q^{-1})Z_{ij}Z_{sl}Z_{kt}\\ &\equiv& -Z_{st}Z_{kl}Z_{ij}-(q-q^{-1})Z_{st}Z_{il}Z_{kj}-(q-q^{-1})Z_{sj}Z_{it}Z_{kl} +Z_{kl}Z_{ij}Z_{st}\\ &{~}&+(q-q^{-1})Z_{il}Z_{kj}Z_{st}+(q-q^{-1})Z_{sl}Z_{ij}Z_{kt} +(q-q^{-1})^2Z_{il}Z_{sj}Z_{kt}\\ &\equiv&-Z_{kl}Z_{st}Z_{ij}-(q-q^{-1})Z_{sl}Z_{kt}Z_{ij}-(q-q^{-1})Z_{kj}Z_{st}Z_{il}\\ &{~}&-(q-q^{-1})Z_{sj}Z_{kl}Z_{it} -(q-q^{-1})^2Z_{sj}Z_{il}Z_{kt}+Z_{kl}Z_{st}Z_{ij}\\ &{~}&+(q-q^{-1})Z_{kl}Z_{it}Z_{sj} +(q-q^{-1})Z_{kj}Z_{st}Z_{il}+(q-q^{-1})Z_{sl}Z_{kt}Z_{ij}\\ &{~}&+(q-q^{-1})^2Z_{sl}Z_{it}Z_{kj} +(q-q^{-1})^2Z_{sj}Z_{kt}Z_{il}\\ &\equiv& -(q-q^{-1})Z_{kl}Z_{sj}Z_{it}-(q-q^{-1})^2Z_{sl}Z_{kj}Z_{it} +(q-q^{-1})Z_{kl}Z_{sj}Z_{it}\\ &{~}&+(q-q^{-1})^2Z_{sl}Z_{kj}Z_{it}\\ &\equiv&0~\hbox{mod}(S,w). \end{array}$$ This finishes the proof of the theorem.\par \section*{3. Some applications of Theorem 2.1} By means of Theorem 2.1, we derive several structural properties of $M_q(n)$ in this section. All notations used in Section 2 are maintained.\vskip .5truecm {\bf Corollary 3.1} The standard quantized matrix algebra $M_q(n)\cong \KZ /J$ has the linear basis, or more precisely, the PBW basis $${\cal B} =\left\{\left.
z^{k_{nn}}_{nn}z^{k_{nn-1}}_{nn-1}\cdots z^{k_{n1}}_{n1}z^{k_{n-1n}}_{n-1n}\cdots z^{k_{n-11}}_{n-11}\cdots z^{k_{1n}}_{1n}\cdots z^{k_{11}}_{11}~\right |~k_{ij}\in \mathbb{N},(i, j)\in I(n)\right\}.$$ \par {\bf Proof} With respect to the monomial ordering $\PRC$ on the set $Z^*$ of mono words of $\KZ$, we note that $$\begin{array}{l} Z_{11}\PRC Z_{12}\PRC\cdots\PRC Z_{1n}\PRC Z_{21}\PRC Z_{22}\PRC\cdots\PRC Z_{2n}\\ \PRC \cdots\PRC Z_{n1}\PRC Z_{n2}\PRC\cdots\PRC Z_{nn},\end{array}$$ and the Gr\"obner-Shirshov basis $S$ of the ideal $J=\langle S\rangle$ has the set of leading mono words consisting of $$\begin{array}{ll} Z_{ij}Z_{ik}~\hbox{with}~Z_{ij}\PRC Z_{ik}~\hbox{where}~j<k, &Z_{ij}Z_{kj}~\hbox{with}~Z_{ij}\PRC Z_{kj}~\hbox{where}~i<k,\\ Z_{ij}Z_{st}~\hbox{with}~Z_{ij}\PRC Z_{st}~\hbox{where}~i<s,~t<j, & Z_{ij}Z_{st}~\hbox{with}~Z_{ij}\PRC Z_{st}~\hbox{where}~i<s,~j<t.\end{array}$$ It follows from classical Gr\"obner-Shirshov basis theory that the set of normal forms of $Z^*$ (mod~$S$) is given as follows: $$\left\{\left.Z^{k_{nn}}_{nn}Z^{k_{nn-1}}_{nn-1}\cdots Z^{k_{n1}}_{n1}Z^{k_{n-1n}}_{n-1n}\cdots Z^{k_{n-11}}_{n-11}\cdots Z^{k_{1n}}_{1n}\cdots Z^{k_{11}}_{11}~\right |~k_{ij}\in \mathbb{N} ,~(i,j)\in I(n)\right\} .$$ Therefore, $M_q(n)$ has the desired PBW basis. {$\Box$}\vskip .5truecm Before giving the next result, we recall three results of [7] in one proposition below, for the reader's convenience.\vskip .5truecm {\bf Proposition 3.2} Adopting notations used in [7], let $K\langle X\rangle =K\langle X_1,X_2,\ldots ,X_n\rangle$ be the free $K$-algebra with the set of generators $X=\{ X_1,X_2,\ldots ,X_n\}$, and let $\prec$ be a monomial ordering on $K\langle X\rangle$. Suppose that $\G$ is a Gr\"obner-Shirshov basis of the ideal $I=\langle \G\rangle$ with respect to $\prec$, such that the set of leading monomials $$\begin{array}{l} \LM (\G )=\{ X_jX_i~|~1\le i<j\le n\} ,\\ \hbox{or}\\ \LM (\G )=\{ X_iX_j~|~1\le i<j\le n\} .\end{array}$$ Considering the algebra $A=K\langle X\rangle /I$, the following statements hold.\par (i) [7, P.167, Example 3] The Gelfand-Kirillov dimension GK.dim$A=n$.\par (ii) [7, P.185, Corollary 7.6] The global homological dimension gl.dim$A=n$, provided $\G$ consists of homogeneous elements with respect to a certain $\mathbb{N}$-gradation of $K\langle X\rangle$. (Note that in this case $G^{\mathbb{N}}(A)=A$, with the notation used in loc. cit.) \par (iii) [7, P.201, Corollary 3.2] $A$ is a classical Koszul algebra, provided $\G$ consists of quadratic homogeneous elements with respect to the $\mathbb{N}$-gradation of $K\langle X\rangle$ such that each $X_i$ is assigned the degree 1, $1\le i\le n$. (Note that in this case $G^{\mathbb{N}}(A)=A$, with the notation used in loc. cit.)\par {$\Box$}\vskip .5truecm {\bf Remark} Let $j_1j_2\cdots j_n$ be a permutation of $1, 2, \ldots ,n$.
One may notice from the references quoted in Proposition 3.2 that if the monomial ordering $\prec$ employed there is such that $$\begin{array}{l} X_{j_1}\prec X_{j_2}\prec\cdots \prec X_{j_n},~\hbox{and}\\ \LM (\G )=\{ X_{j_k}X_{j_t}~|~X_{j_t}\prec X_{j_k},~1\le j_k,j_t\le n\} ,\\ \hbox{or}\\ \LM (\G )=\{ X_{j_k}X_{j_t}~|~X_{j_k}\prec X_{j_t},~1\le j_k,j_t\le n\} ,\end{array}$$ then all the results still hold true.\vskip .5truecm Applying Proposition 3.2 and the above remark to $M_q(n)\cong \KZ /J$, we are able to derive the result below.\vskip .5truecm {\bf Theorem 3.3} The standard quantized matrix algebra $M_q(n)$ has the following structural properties.\par (i) The Hilbert series of $M_q(n)$ is $\frac{1}{(1-t)^{n^2}}$.\par (ii) The Gelfand-Kirillov dimension GK.dim$M_q(n)=n^2$.\par (iii) The global homological dimension gl.dim$M_q(n)=n^2$.\par (iv) $M_q(n)$ is a classical quadratic Koszul algebra.\vskip 6pt {\bf Proof} Recalling from Section 2 that with respect to the monomial ordering $\PRC$ on the set $Z^*$ of mono words of $\KZ$, we have $$Z_{ij}\PRC Z_{sk}\Leftrightarrow\left\{ \begin{array}{l} i=s,~\hbox{if}~j<k,\\ i<s,~\hbox{if}~j=k,\\ i<s,~\hbox{if}~k<j,\\ i<s,~\hbox{if}~j<k.\end{array}\right.\quad (i,j)\in I(n),$$ $$\begin{array}{l} Z_{11}\PRC Z_{12}\PRC\cdots\PRC Z_{1n}\PRC Z_{21}\PRC Z_{22}\PRC\cdots\PRC Z_{2n}\\ \PRC \cdots\PRC Z_{n1}\PRC Z_{n2}\PRC\cdots\PRC Z_{nn},\end{array}$$ and thus, all leading mono words of the Gr\"obner-Shirshov basis $S$ of the ideal $J=\langle S\rangle$ are established as follows: $$\begin{array}{ll} Z_{ij}Z_{ik}~\hbox{with}~Z_{ij}\PRC Z_{ik}~\hbox{where}~j<k, &Z_{ij}Z_{kj}~\hbox{with}~Z_{ij}\PRC Z_{kj}~\hbox{where}~i<k,\\ Z_{ij}Z_{st}~\hbox{with}~Z_{ij}\PRC Z_{st}~\hbox{where}~i<s,~t<j, & Z_{ij}Z_{st}~\hbox{with}~Z_{ij}\PRC Z_{st}~\hbox{where}~i<s,~j<t.\end{array}$$ This means that $M_q(n)$ satisfies the conditions of Proposition 3.2. Therefore, the assertions (i) -- (iv) are established as follows.\par (i) Since $M_q(n)$ has the PBW $K$-basis as described in Corollary 3.1, it follows that the Hilbert series of $M_q(n)$ is $\frac{1}{(1-t)^{n^2}}$.\par (ii) This follows from Theorem 2.1 and Proposition 3.2(i).\par Note that $M_q(n)$ is an $\mathbb{N}$-graded algebra defined by a quadratic homogeneous Gr\"obner basis (Theorem 2.1), where each generator $z_{ij}$ is assigned the degree 1, $(i,j)\in I(n)$. The assertions (iii) and (iv) follow from Proposition 3.2(ii) and Proposition 3.2(iii), respectively. {$\Box$}\vskip .5truecm We end this section by concluding that the algebra $M_q(n)$ also has the elimination property for (one-sided) ideals in the sense of [8] (see also [9, A3]). To see this, let us first recall the Elimination Lemma given in [8]. Let $A=K[a_1,\ldots ,a_n]$ be a finitely generated $K$-algebra with the PBW basis ${\cal B} =\{a^{\alpha}=a_1^{\alpha_1}\cdots a_n^{\alpha_n}~|~\alpha =(\alpha_1,\ldots ,\alpha_n)\in\mathbb{N}^n\}$ and, for a subset $U=\{ a_{i_1},...,a_{i_r}\}\subset\{ a_1,...,a_n\}$ with $i_1<i_2<\cdots <i_r$, let $$T=\left\{ a_{i_1}^{\alpha_1}\cdots a_{i_r}^{\alpha_r}~\Big |~ (\alpha_1,...,\alpha_r)\in\mathbb{N}^r\right\},\quad V(T)=K\hbox{-span}T.$$\par {\bf Lemma 3.4} [8, Lemma 3.1] Let the algebra $A$ and the notations be as fixed above, and let $L$ be a nonzero left ideal of $A$ and $A/L$ the left $A$-module defined by $L$.
If there is a subset $U=\{ a_{i_1},\ldots ,a_{i_r}\} \subset\{a_1,\ldots ,a_n\}$ with $i_1<i_2<\cdots <i_r$, such that $V(T)\cap L=\{ 0\}$, then $$\hbox{GK.dim}(A/L)\ge r.$$ Consequently, if $A/L$ has finite GK dimension $\hbox{GK.dim}(A/L)=d<n$ ($=$ the number of generators of $A$), then $$V(T)\cap L\ne \{ 0\}$$ holds true for every subset $U=\{ a_{i_1},...,a_{i_{d+1}}\}\subset\{ a_1,...,a_n\}$ with $i_1<i_2<\cdots <i_{d+1}$, in particular, for every $U=\{ a_1,\ldots ,a_s\}$ with $d+1\le s\le n-1$, we have $V(T)\cap L\ne \{ 0\}$.\par {$\Box$}\vskip .5truecm For convenience of deriving the next theorem, let us write the set of generators of $M_q(n)$ as $Z=\{ z_1,z_2,\ldots ,z_{n^2}\}$, i.e., $M_q(n)=K[z_1,z_2,\ldots ,z_{n^2}]$. Thus, for a subset $U=\{ z_{i_1},z_{i_2},\ldots ,z_{i_r}\}\subset \{ z_1,z_2,\ldots,z_{n^2}\}$ with $i_1<i_2<\cdots <i_r$, we write $$T=\left\{ z_{i_1}^{\alpha_1}z_{i_2}^{\alpha_2}\cdots z_{i_r}^{\alpha_r}~\Big |~ (\alpha_1,\alpha_2,\ldots ,\alpha_r)\in\mathbb{N}^r\right\},\quad V(T)=K\hbox{-span}T.$$\par {\bf Theorem 3.5} With notation as fixed above, let $L$ be a left ideal of $M_q(n)$. Then GK.dim$M_q(n)/L\le n^2$. If furthermore GK.dim$M_q(n)/L=m< n^2$, then $$V(T)\cap L\ne\{ 0\}$$ holds true for every subset $U=\{ z_{i_1},z_{i_2},...,z_{i_{m+1}}\}\subset Z$ with $i_1<i_2<\cdots <i_{m+1}$, in particular, for every $U=\{ z_1,z_2,\ldots ,z_s\}$ with $m+1\le s\le n^2-1$, we have $V(T)\cap L\ne \{ 0\}$.\vskip 6pt {\bf Proof} By Corollary 3.1, $M_q(n)$ has the PBW basis $${\cal B} =\{ z^{\alpha}=z_1^{\alpha_1}z_2^{\alpha_2}\cdots z_{n^2}^{\alpha_{n^2}}~|~\alpha =(\alpha_1,\alpha_2,\ldots ,\alpha_{n^2})\in\mathbb{N}^{n^2}\}.$$ Also by Theorem 3.3(ii), GK.dim$M_q(n)=n^2$, hence GK.dim$M_q(n)/L\le n^2$. If furthermore GK.dim$M_q(n)/L=m< n^2$, then the desired elimination property follows from Lemma 3.4 mentioned above. {$\Box$}\vskip .5truecm \centerline{References}{\parindent=1.47truecm\par \par\hang\textindent{[1]} L. Bokut et al., {\it Gr\"obner--Shirshov Bases: Normal Forms, Combinatorial and Decision Problems in Algebra}. World Scientific Publishing, 2020. \url{https://doi.org/10.1142/9287}\par \par\hang\textindent{[2]} L. D. Faddeev, N. Yu. Reshetikhin, and L. A. Takhtajan, Quantization of Lie groups and Lie algebras. {\it Algebraic Analysis}, Academic Press (1988), 129-140.\par \par\hang\textindent{[3]} H. P. Jakobsen and C. Pagani (2014): Quantized matrix algebras and quantum seeds. {\it Linear and Multilinear Algebra}, DOI: 10.1080/03081087.2014.898297 \par\hang\textindent{[4]} H. P. Jakobsen and H. Zhang, The center of the quantized matrix algebra. {\it J. Algebra}, (196)(1997), 458--474. \par\hang\textindent{[5]} H. P. Jakobsen and H. Zhang, A class of quadratic matrix algebras arising from the quantized enveloping algebra $U_q(A_{2n-1})$. {\it J. Math. Phys}., (41)(2000), 2310--2336. \par\hang\textindent{[6]} H. P. Jakobsen, S. J{\o}ndrup, and A. Jensen, Quadratic algebras of type AIII.III. In: {\it Tsinghua Science $\&$ Technology}, (3)(1998), 1209--1212. \par\hang\textindent{[7]} H. Li, {\it Gr\"obner Bases in Ring Theory}. World Scientific Publishing Co., 2011. \url{https://doi.org/10.1142/8223}\par \par\hang\textindent{[8]} H. Li, An elimination lemma for algebras with PBW bases. {\it Communications in Algebra}, 46(8)(2018), 3520-3532.\par \par\hang\textindent{[9]} H.
Li, {\it Noncommutative polynomial algebras of solvable type and their modules: Basic constructive-computational theory and methods}. Chapman and Hall/CRC Press, 2021. \end{document}
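A minimal computational sketch of what Theorem 2.1 guarantees: if the defining relations are used as rewrite rules, every word in the generators reduces to one and the same normal form no matter which reducible adjacent pair is rewritten first. The helper names below, the choice $n=2$, and the generic rational value $q=3/2$ are illustrative assumptions, not part of the paper:

```python
from fractions import Fraction
from random import randint, seed

q = Fraction(3, 2)   # a generic nonzero rational value of the quantum parameter
qi = 1 / q           # q^{-1}, exact as a Fraction

def rewrite_once(word, leftmost=True):
    """Rewrite one reducible adjacent pair of `word` (a tuple of (i, j) pairs,
    each standing for Z_ij); return {word: coeff} or None if `word` is normal."""
    idx = range(len(word) - 1) if leftmost else range(len(word) - 2, -1, -1)
    for p in idx:
        (i, j), (s, t) = word[p], word[p + 1]
        head, tail = word[:p], word[p + 2:]
        swapped = head + ((s, t), (i, j)) + tail
        if i == s and j < t:                 # rule (a): Z_ij Z_ik = q Z_ik Z_ij
            return {swapped: q}
        if j == t and i < s:                 # rule (b): Z_ij Z_kj = q Z_kj Z_ij
            return {swapped: q}
        if i < s and t < j:                  # rule (c): the two generators commute
            return {swapped: Fraction(1)}
        if i < s and j < t:                  # rule (d): extra term (q - q^{-1}) Z_it Z_sj
            return {swapped: Fraction(1),
                    head + ((i, t), (s, j)) + tail: q - qi}
    return None

def normal_form(word, leftmost=True):
    """Fully reduce `word`; returns a dict {normal word: coefficient}."""
    todo, done = {word: Fraction(1)}, {}
    while todo:
        w, c = todo.popitem()
        if c == 0:
            continue
        step = rewrite_once(w, leftmost)
        if step is None:
            done[w] = done.get(w, Fraction(0)) + c
        else:
            for w2, c2 in step.items():
                todo[w2] = todo.get(w2, Fraction(0)) + c * c2
    return {w: c for w, c in done.items() if c != 0}

seed(0)
for _ in range(200):
    w = tuple((randint(1, 2), randint(1, 2)) for _ in range(5))   # random word in M_q(2)
    assert normal_form(w, leftmost=True) == normal_form(w, leftmost=False)
print("200 random words: identical normal forms under both reduction strategies")
```

The assertion is exactly the diamond-lemma consequence of $S$ being a Gr\"obner-Shirshov basis; if any composition were nontrivial, some word would reduce to two different normal forms and the check would fail.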
arXiv
In isosceles triangle $ABC$, if $BC$ is extended to a point $X$ such that $AC = CX$, what is the number of degrees in the measure of angle $AXC$? [asy] size(220); pair B, A = B + dir(40), C = A + dir(-40), X = C + dir(0); draw(C--A--B--X--A); draw(A/2+.1*dir(-30)--A/2-.1*dir(-30)); draw((A+C)/2+.1*dir(30)--(A+C)/2-.1*dir(30)); label("A",A,N); label("C",C,S); label("B",B,S); label("X",X,S); label("$30^{\circ}$",B+.1*dir(0),NE); draw(arc(B,1/3,0,40)); [/asy] The angles opposite the equal sides of $\triangle ABC$ are congruent, so $\angle BCA=30^\circ$. Since $\angle BCA$ and $\angle XCA$ are supplementary, we have \begin{align*} \angle XCA &= 180^\circ - \angle BCA\\ &= (180-30)^\circ \\ &= 150^\circ. \end{align*} Since $\triangle ACX$ is isosceles with $AC=CX$, the angles $\angle XAC$ and $\angle AXC$ are congruent. Let each of them be $x^\circ$. Then the sum of angles in $\triangle ACX$ is $180^\circ$, so $$x + x + 150 = 180,$$ yielding $x=15$. That is, $\angle AXC = \boxed{15}$ degrees.
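As a quick numeric sanity check (illustrative coordinates, not the Asymptote drawing above): place $C$ at the origin with $CX = AC = 1$ and $\angle ACX = 150^\circ$, then measure $\angle AXC$ directly:

```python
import math

C = (0.0, 0.0)
X = (1.0, 0.0)                                                    # CX = 1
A = (math.cos(math.radians(150)), math.sin(math.radians(150)))    # AC = 1, angle ACX = 150 degrees

# Angle at X between rays X->A and X->C.
xa = (A[0] - X[0], A[1] - X[1])
xc = (C[0] - X[0], C[1] - X[1])
cos_axc = (xa[0] * xc[0] + xa[1] * xc[1]) / (math.hypot(*xa) * math.hypot(*xc))
print(math.degrees(math.acos(cos_axc)))   # 15.000..., matching the answer above
```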
Math Dataset
Newton polynomial In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton,[1] is an interpolation polynomial for a given set of data points. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method. Definition Given a set of k + 1 data points $(x_{0},y_{0}),\ldots ,(x_{j},y_{j}),\ldots ,(x_{k},y_{k})$ where no two xj are the same, the Newton interpolation polynomial is a linear combination of Newton basis polynomials $N(x):=\sum _{j=0}^{k}a_{j}n_{j}(x)$ with the Newton basis polynomials defined as $n_{j}(x):=\prod _{i=0}^{j-1}(x-x_{i})$ for j > 0 and $n_{0}(x)\equiv 1$. The coefficients are defined as $a_{j}:=[y_{0},\ldots ,y_{j}]$ where $[y_{0},\ldots ,y_{j}]$ is the notation for divided differences. Thus the Newton polynomial can be written as $N(x)=[y_{0}]+[y_{0},y_{1}](x-x_{0})+\cdots +[y_{0},\ldots ,y_{k}](x-x_{0})(x-x_{1})\cdots (x-x_{k-1}).$ Newton forward divided difference formula The Newton polynomial can be expressed in a simplified form when $x_{0},x_{1},\dots ,x_{k}$ are arranged consecutively with equal spacing. Introducing the notation $h=x_{i+1}-x_{i}$ for each $i=0,1,\dots ,k-1$ and $x=x_{0}+sh$, the difference $x-x_{i}$ can be written as $(s-i)h$. So the Newton polynomial becomes ${\begin{aligned}N(x)&=[y_{0}]+[y_{0},y_{1}]sh+\cdots +[y_{0},\ldots ,y_{k}]s(s-1)\cdots (s-k+1){h}^{k}\\&=\sum _{i=0}^{k}s(s-1)\cdots (s-i+1){h}^{i}[y_{0},\ldots ,y_{i}]\\&=\sum _{i=0}^{k}{s \choose i}i!{h}^{i}[y_{0},\ldots ,y_{i}].\end{aligned}}$ This is called the Newton forward divided difference formula. Newton backward divided difference formula If the nodes are reordered as ${x}_{k},{x}_{k-1},\dots ,{x}_{0}$, the Newton polynomial becomes $N(x)=[y_{k}]+[{y}_{k},{y}_{k-1}](x-{x}_{k})+\cdots +[{y}_{k},\ldots ,{y}_{0}](x-{x}_{k})(x-{x}_{k-1})\cdots (x-{x}_{1}).$ If ${x}_{k},\;{x}_{k-1},\;\dots ,\;{x}_{0}$ are equally spaced with $x={x}_{k}+sh$ and ${x}_{i}={x}_{k}-(k-i)h$ for i = 0, 1, ..., k, then, ${\begin{aligned}N(x)&=[{y}_{k}]+[{y}_{k},{y}_{k-1}]sh+\cdots +[{y}_{k},\ldots ,{y}_{0}]s(s+1)\cdots (s+k-1){h}^{k}\\&=\sum _{i=0}^{k}{(-1)}^{i}{-s \choose i}i!{h}^{i}[{y}_{k},\ldots ,{y}_{k-i}].\end{aligned}}$ is called the Newton backward divided difference formula. Significance Further information: Finite differences § Newton's series Newton's formula is of interest because it is the straightforward and natural differences-version of Taylor's polynomial. Taylor's polynomial tells where a function will go, based on its y value, and its derivatives (its rate of change, and the rate of change of its rate of change, etc.) at one particular x value. Newton's formula is Taylor's polynomial based on finite differences instead of instantaneous rates of change. Addition of new points As with other difference formulas, the degree of a Newton interpolating polynomial can be increased by adding more terms and points without discarding existing ones. Newton's form has the simplicity that the new points are always added at one end: Newton's forward formula can add new points to the right, and Newton's backward formula can add new points to the left. The accuracy of polynomial interpolation depends on how close the interpolated point is to the middle of the x values of the set of points used. Obviously, as new points are added at one end, that middle becomes farther and farther from the first data point.
Therefore, if it isn't known how many points will be needed for the desired accuracy, the middle of the x-values might be far from where the interpolation is done. Gauss, Stirling, and Bessel all developed formulae to remedy that problem.[2] Gauss's formula alternately adds new points at the left and right ends, thereby keeping the set of points centered near the same place (near the evaluated point). When so doing, it uses terms from Newton's formula, with data points and x values renamed in keeping with one's choice of what data point is designated as the x0 data point. Stirling's formula remains centered about a particular data point, for use when the evaluated point is nearer to a data point than to a middle of two data points. Bessel's formula remains centered about a particular middle between two data points, for use when the evaluated point is nearer to a middle than to a data point. Bessel and Stirling achieve that by sometimes using the average of two differences, and sometimes using the average of two products of binomials in x, where Newton's or Gauss's would use just one difference or product. Stirling's uses an average difference in odd-degree terms (whose difference uses an even number of data points); Bessel's uses an average difference in even-degree terms (whose difference uses an odd number of data points). Strengths and weaknesses of various formulae For any given finite set of data points, there is only one polynomial of least possible degree that passes through all of them. Thus, it is appropriate to speak of the "Newton form", or Lagrange form, etc., of the interpolation polynomial. However, different methods of computing this polynomial can have differing computational efficiency. There are several similar methods, such as those of Gauss, Bessel and Stirling. They can be derived from Newton's by renaming the x-values of the data points, but in practice they are important. Bessel vs. Stirling The choice between Bessel and Stirling depends on whether the interpolated point is closer to a data point, or closer to a middle between two data points. A polynomial interpolation's error approaches zero, as the interpolation point approaches a data-point. Therefore, Stirling's formula brings its accuracy improvement where it is least needed and Bessel brings its accuracy improvement where it is most needed. So, Bessel's formula could be said to be the most consistently accurate difference formula, and, in general, the most consistently accurate of the familiar polynomial interpolation formulas. Divided-Difference Methods vs. Lagrange Lagrange is sometimes said to require less work, and is sometimes recommended for problems in which it is known, in advance, from previous experience, how many terms are needed for sufficient accuracy. The divided difference methods have the advantage that more data points can be added, for improved accuracy. The terms based on the previous data points can continue to be used. With the ordinary Lagrange formula, to do the problem with more data points would require re-doing the whole problem. There is a "barycentric" version of Lagrange that avoids the need to re-do the entire calculation when adding a new data point. But it requires that the values of each term be recorded. But the ability, of Gauss, Bessel and Stirling, to keep the data points centered close to the interpolated point gives them an advantage over Lagrange, when it isn't known, in advance, how many data points will be needed. 
Additionally, suppose that one wants to find out if, for some particular type of problem, linear interpolation is sufficiently accurate. That can be determined by evaluating the quadratic term of a divided difference formula. If the quadratic term is negligible—meaning that the linear term is sufficiently accurate without adding the quadratic term—then linear interpolation is sufficiently accurate. If the problem is sufficiently important, or if the quadratic term is nearly big enough to matter, then one might want to determine whether the sum of the quadratic and cubic terms is large enough to matter in the problem. Of course, only a divided-difference method can be used for such a determination. For that purpose, the divided-difference formula and/or its x0 point should be chosen so that the formula will use, for its linear term, the two data points between which the linear interpolation of interest would be done. The divided difference formulas are more versatile, useful in more kinds of problems. The Lagrange formula is at its best when all the interpolation will be done at one x value, with only the data points' y values varying from one problem to another, and when it is known, from past experience, how many terms are needed for sufficient accuracy. With the Newton form of the interpolating polynomial a compact and effective algorithm exists for combining the terms to find the coefficients of the polynomial.[3] Accuracy When, with Stirling's or Bessel's, the last term used includes the average of two differences, then one more point is being used than Newton's or other polynomial interpolations would use for the same polynomial degree. So, in that instance, Stirling's or Bessel's is not putting an N−1 degree polynomial through N points, but is, instead, trading equivalence with Newton's for better centering and accuracy, giving those methods sometimes potentially greater accuracy, for a given polynomial degree, than other polynomial interpolations. General case For the special case of xi = i, there is a closely related set of polynomials, also called the Newton polynomials, that are simply the binomial coefficients for general argument. That is, one also has the Newton polynomials $p_{n}(z)$ given by $p_{n}(z)={z \choose n}={\frac {z(z-1)\cdots (z-n+1)}{n!}}$ In this form, the Newton polynomials generate the Newton series. These are in turn a special case of the general difference polynomials which allow the representation of analytic functions through generalized difference equations. Main idea Solving an interpolation problem leads to a problem in linear algebra where we have to solve a system of linear equations. Using a standard monomial basis for our interpolation polynomial we get the very complicated Vandermonde matrix. By choosing another basis, the Newton basis, we get a system of linear equations with a much simpler lower triangular matrix which can be solved faster. For k + 1 data points we construct the Newton basis as $n_{0}(x):=1,\qquad n_{j}(x):=\prod _{i=0}^{j-1}(x-x_{i})\qquad j=1,\ldots ,k.$ Using these polynomials as a basis for $\Pi _{k}$ we have to solve ${\begin{bmatrix}1&&\ldots &&0\\1&x_{1}-x_{0}&&&\\1&x_{2}-x_{0}&(x_{2}-x_{0})(x_{2}-x_{1})&&\vdots \\\vdots &\vdots &&\ddots &\\1&x_{k}-x_{0}&\ldots &\ldots &\prod _{j=0}^{k-1}(x_{k}-x_{j})\end{bmatrix}}{\begin{bmatrix}a_{0}\\\\\vdots \\\\a_{k}\end{bmatrix}}={\begin{bmatrix}y_{0}\\\\\vdots \\\\y_{k}\end{bmatrix}}$ to solve the polynomial interpolation problem. 
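Because the matrix is lower triangular, the coefficients come out one at a time by forward substitution, which is exactly the divided-difference computation. A minimal Python sketch (the names `divided_differences` and `newton_eval` are illustrative assumptions, not a standard API; the data $6, 9, 2, 5$ is taken from the second worked example below):

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients a_j = [y_0, ..., y_j] for the nodes xs."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):                   # column j of the divided-difference table
        for i in range(n - 1, j - 1, -1):   # update in place, bottom-up
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate N(x) by nested multiplication (a Horner-like scheme)."""
    acc = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        acc = acc * (x - xs[k]) + coef[k]
    return acc

xs, ys = [1, 2, 3, 4], [6, 9, 2, 5]
a = divided_differences(xs, ys)
print(a)                      # [6, 3.0, -5.0, 3.333...]: the values [y0], [y0,y1], ...
print(newton_eval(a, xs, 3))  # 2.0: the polynomial reproduces the data point at x = 3
```

Note that appending a fifth data point would only append one more coefficient; the first four stay untouched, which is the incremental advantage of the Newton form discussed above.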
This system of equations can be solved iteratively by solving $\sum _{i=0}^{j}a_{i}n_{i}(x_{j})=y_{j}\qquad j=0,\dots ,k.$ Derivation While the interpolation formula can be found by solving a linear system of equations, there is a loss of intuition in what the formula is showing and why Newton's interpolation formula works is not readily apparent. To begin, we will need to establish two facts first: Fact 1. Reversing the terms of a divided difference leaves it unchanged: $[y_{0},\ldots ,y_{n}]=[y_{n},\ldots ,y_{0}].$ The proof of this is an easy induction: for $n=1$ we compute $[y_{0},y_{1}]={\frac {[y_{1}]-[y_{0}]}{x_{1}-x_{0}}}={\frac {[y_{0}]-[y_{1}]}{x_{0}-x_{1}}}=[y_{1},y_{0}].$ Induction step: Suppose the result holds for any divided difference involving at most $n+1$ terms. Then using the induction hypothesis in the following 2nd equality we see that for a divided difference involving $n+2$ terms we have $[y_{0},\ldots ,y_{n+1}]={\frac {[y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]}{x_{n+1}-x_{0}}}={\frac {[y_{n},\ldots ,y_{0}]-[y_{n+1},\ldots ,y_{1}]}{x_{0}-x_{n+1}}}=[y_{n+1},\ldots ,y_{0}].$ We formulate next Fact 2 which for purposes of induction and clarity we also call Statement $n$ (${\text{Stm}}_{n}$) : Fact 2. (${\text{Stm}}_{n}$) : If $(x_{0},y_{0}),\ldots ,(x_{n-1},y_{n-1})$ are any $n$ points with distinct $x$-coordinates and $P=P(x)$ is the unique polynomial of degree (at most) $n-1$ whose graph passes through these $n$ points then there holds the relation $[y_{0},\ldots ,y_{n}](x_{n}-x_{0})\cdot \ldots \cdot (x_{n}-x_{n-1})=y_{n}-P(x_{n})$ Proof. (It will be helpful for fluent reading of the proof to have the precise statement and its subtlety in mind: $P$ is defined by passing through $(x_{0},y_{0}),...,(x_{n-1},y_{n-1})$ but the formula also speaks at both sides of an additional arbitrary point $(x_{n},y_{n})$ with $x$-coordinate distinct from the other $x_{i}$.) We again prove these statements by induction. To show ${\text{Stm}}_{1},$ let $(x_{0},y_{0})$ be any one point and let $P(x)$ be the unique polynomial of degree 0 passing through $(x_{0},y_{0})$. Then evidently $P(x)=y_{0}$ and we can write $[y_{0},y_{1}](x_{1}-x_{0})={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}(x_{1}-x_{0})=y_{1}-y_{0}=y_{1}-P(x_{1})$ as wanted. 
Proof of ${\text{Stm}}_{n+1},$ assuming ${\text{Stm}}_{n}$ already established: Let $P(x)$ be the polynomial of degree (at most) $n$ passing through $(x_{0},y_{0}),\ldots ,(x_{n},y_{n}).$ With $Q(x)$ being the unique polynomial of degree (at most) $n-1$ passing through the points $(x_{1},y_{1}),\ldots ,(x_{n},y_{n})$, we can write the following chain of equalities, where we use in the penultimate equality that Stm$_{n}$ applies to $Q$: ${\begin{aligned}&[y_{0},\ldots ,y_{n+1}](x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&={\frac {[y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]}{x_{n+1}-x_{0}}}(x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=\left([y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]\right)(x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=[y_{1},\ldots ,y_{n+1}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})-[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=(y_{n+1}-Q(x_{n+1}))-[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=y_{n+1}-(Q(x_{n+1})+[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})).\end{aligned}}$ The induction hypothesis for $Q$ also applies to the second equality in the following computation, where $(x_{0},y_{0})$ is added to the points defining $Q$ : ${\begin{aligned}&Q(x_{0})+[y_{0},\ldots ,y_{n}](x_{0}-x_{1})\cdot \ldots \cdot (x_{0}-x_{n})\\&=Q(x_{0})+[y_{n},\ldots ,y_{0}](x_{0}-x_{n})\cdot \ldots \cdot (x_{0}-x_{1})\\&=Q(x_{0})+y_{0}-Q(x_{0})\\&=y_{0}\\&=P(x_{0}).\\\end{aligned}}$ Now look at $Q(x)+[y_{0},\ldots ,y_{n}](x-x_{1})\cdot \ldots \cdot (x-x_{n}).$ By the definition of $Q$ this polynomial passes through $(x_{1},y_{1}),...,(x_{n},y_{n})$ and, as we have just shown, it also passes through $(x_{0},y_{0}).$ Thus it is the unique polynomial of degree $\leq n$ which passes through these points. Therefore this polynomial is $P(x);$ i.e.: $P(x)=Q(x)+[y_{0},\ldots ,y_{n}](x-x_{1})\cdot \ldots \cdot (x-x_{n}).$ Thus we can write the last line in the first chain of equalities as `$y_{n+1}-P(x_{n+1})$' and have thus established that $[y_{0},\ldots ,y_{n+1}](x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})=y_{n+1}-P(x_{n+1}).$ So we established ${\text{Stm}}_{n+1}$, and hence completed the proof of Fact 2. Now look at Fact 2: It can be formulated this way: If $P$ is the unique polynomial of degree at most $n-1$ whose graph passes through the points $(x_{0},y_{0}),...,(x_{n-1},y_{n-1}),$ then $P(x)+[y_{0},\ldots ,y_{n}](x-x_{0})\cdot \ldots \cdot (x-x_{n-1})$ is the unique polynomial of degree at most $n$ passing through points $(x_{0},y_{0}),...,(x_{n-1},y_{n-1}),(x_{n},y_{n}).$ So we see Newton interpolation permits indeed to add new interpolation points without destroying what has already been computed. Taylor polynomial The limit of the Newton polynomial if all nodes coincide is a Taylor polynomial, because the divided differences become derivatives. ${\begin{aligned}&\lim _{(x_{0},\dots ,x_{n})\to (z,\dots ,z)}f[x_{0}]+f[x_{0},x_{1}]\cdot (\xi -x_{0})+\dots +f[x_{0},\dots ,x_{n}]\cdot (\xi -x_{0})\cdot \dots \cdot (\xi -x_{n-1})\\&=f(z)+f'(z)\cdot (\xi -z)+\dots +{\frac {f^{(n)}(z)}{n!}}\cdot (\xi -z)^{n}\end{aligned}}$ Application As can be seen from the definition of the divided differences new data points can be added to the data set to create a new interpolation polynomial without recalculating the old coefficients. And when a data point changes we usually do not have to recalculate all coefficients. 
Furthermore, if the xi are distributed equidistantly the calculation of the divided differences becomes significantly easier. Therefore, the divided-difference formulas are usually preferred over the Lagrange form for practical purposes. Examples The divided differences can be written in the form of a table. For example, suppose a function $f$ is to be interpolated on the points $x_{0},\ldots ,x_{n}$. Write ${\begin{matrix}x_{0}&f(x_{0})&&\\&&{f(x_{1})-f(x_{0}) \over x_{1}-x_{0}}&\\x_{1}&f(x_{1})&&{{f(x_{2})-f(x_{1}) \over x_{2}-x_{1}}-{f(x_{1})-f(x_{0}) \over x_{1}-x_{0}} \over x_{2}-x_{0}}\\&&{f(x_{2})-f(x_{1}) \over x_{2}-x_{1}}&\\x_{2}&f(x_{2})&&\vdots \\&&\vdots &\\\vdots &&&\vdots \\&&\vdots &\\x_{n}&f(x_{n})&&\\\end{matrix}}$ Then the interpolating polynomial is formed as above using the topmost entries in each column as coefficients. For example, suppose we are to construct the interpolating polynomial to f(x) = tan(x) using divided differences, at the points $n$: $x_{n}$, $f(x_{n})$ $0$: $-{\tfrac {3}{2}}$, $-14.1014$ $1$: $-{\tfrac {3}{4}}$, $-0.931596$ $2$: $0$, $0$ $3$: ${\tfrac {3}{4}}$, $0.931596$ $4$: ${\tfrac {3}{2}}$, $14.1014$ Using six digits of accuracy, we construct the table ${\begin{matrix}-{\tfrac {3}{2}}&-14.1014&&&&\\&&17.5597&&&\\-{\tfrac {3}{4}}&-0.931596&&-10.8784&&\\&&1.24213&&4.83484&\\0&0&&0&&0\\&&1.24213&&4.83484&\\{\tfrac {3}{4}}&0.931596&&10.8784&&\\&&17.5597&&&\\{\tfrac {3}{2}}&14.1014&&&&\\\end{matrix}}$ Thus, the interpolating polynomial is ${\begin{aligned}&-14.1014+17.5597(x+{\tfrac {3}{2}})-10.8784(x+{\tfrac {3}{2}})(x+{\tfrac {3}{4}})+4.83484(x+{\tfrac {3}{2}})(x+{\tfrac {3}{4}})(x)+0(x+{\tfrac {3}{2}})(x+{\tfrac {3}{4}})(x)(x-{\tfrac {3}{4}})\\={}&-0.00005-1.4775x-0.00001x^{2}+4.83484x^{3}\end{aligned}}$ Given more digits of accuracy in the table, the first and third coefficients will be found to be zero. Another example: The sequence $f_{0}$ such that $f_{0}(1)=6,f_{0}(2)=9,f_{0}(3)=2$ and $f_{0}(4)=5$, i.e., they are $6,9,2,5$ from $x_{0}=1$ to $x_{3}=4$. You obtain the slope of order $1$ in the following way: • $f_{1}(x_{0},x_{1})={\frac {f_{0}(x_{1})-f_{0}(x_{0})}{x_{1}-x_{0}}}={\frac {9-6}{2-1}}=3$ • $f_{1}(x_{1},x_{2})={\frac {f_{0}(x_{2})-f_{0}(x_{1})}{x_{2}-x_{1}}}={\frac {2-9}{3-2}}=-7$ • $f_{1}(x_{2},x_{3})={\frac {f_{0}(x_{3})-f_{0}(x_{2})}{x_{3}-x_{2}}}={\frac {5-2}{4-3}}=3$ As we have the slopes of order $1$, it is possible to obtain the next order: • $f_{2}(x_{0},x_{1},x_{2})={\frac {f_{1}(x_{1},x_{2})-f_{1}(x_{0},x_{1})}{x_{2}-x_{0}}}={\frac {-7-3}{3-1}}=-5$ • $f_{2}(x_{1},x_{2},x_{3})={\frac {f_{1}(x_{2},x_{3})-f_{1}(x_{1},x_{2})}{x_{3}-x_{1}}}={\frac {3-(-7)}{4-2}}=5$ Finally, we define the slope of order $3$: • $f_{3}(x_{0},x_{1},x_{2},x_{3})={\frac {f_{2}(x_{1},x_{2},x_{3})-f_{2}(x_{0},x_{1},x_{2})}{x_{3}-x_{0}}}={\frac {5-(-5)}{4-1}}={\frac {10}{3}}$ Once we have the slope, we can define the consequent polynomials: • $p_{0}(x)=6$. • $p_{1}(x)=6+3(x-1)$ • $p_{2}(x)=6+3(x-1)-5(x-1)(x-2)$. • $p_{3}(x)=6+3(x-1)-5(x-1)(x-2)+{\frac {10}{3}}(x-1)(x-2)(x-3)$ See also • De numeris triangularibus et inde de progressionibus arithmeticis: Magisteria magna, a work by Thomas Harriot describing similar methods for interpolation, written 50 years earlier than Newton's work but not published until 2009 • Newton series • Neville's schema • Polynomial interpolation • Lagrange form of the interpolation polynomial • Bernstein form of the interpolation polynomial • Hermite interpolation • Carlson's theorem • Table of Newtonian series References 1. Dunham, William (1990). "7".
Journey Through Genius: The Great Theorems of Mathematics. Kanak Agrawal, Inc. pp. 155–183. ISBN 9780140147391. Retrieved 24 October 2019. 2. Hamming, R. W., Numerical Methods for Scientists and Engineers. 3. Stetekluh, Jeff. "Algorithm for the Newton Form of the Interpolating Polynomial". External links • Module for the Newton Polynomial by John H. Mathews
Wikipedia
Think of the matrix \(A=\left(\begin{array}{ll}a & b \\ c & d\end{array}\right)\) as mapping one plane to another. a) If two lines in the first plane are parallel, show that after being mapped by \(A\) they are also parallel - although they might coincide. b) Let \(Q\) be the unit square: \(0<x<1,0<y<1\) and let \(Q^{\prime}\) be its image under this map \(A\). Show that the area\(\left(Q^{\prime}\right)=|a d-b c|\). [More generally, the area of any region is magnified by \(|a d-b c|\); \(a d-b c\) is called the determinant of a \(2 \times 2\) matrix.] determinant
CommonCrawl
Isn't the center of a von Neumann algebra on a separable Hilbert space a hyperfinite von Neumann subalgebra? This is a very quick, probably dumb, question. I was reading this chapter from "Hochschild cohomology of von Neumann algebras" by Allan Sinclair and Roger M. Smith and I came across this theorem on page 78: 3.1.1 Theorem. If $\mathcal{N}$ is a hyperfinite von Neumann subalgebra of a von Neumann algebra $\mathcal{M}$ and if $\mathcal{V}$ is a dual normal $\mathcal{M}$-module, then $H^{n}(\mathcal{M},\mathcal{V}) \cong H^{n}_{w}(\mathcal{M},\mathcal{V}) \cong H^{n}_{w}(\mathcal{M},\mathcal{V}:\mathcal{N})$ and ...................................... That's not the complete text of the theorem. I was curious: I'm reading that abelian $C^*$-algebras are nuclear, nuclear $C^*$-algebras are amenable, and amenable von Neumann algebras acting on separable Hilbert spaces are hyperfinite. Now let $M$ be a von Neumann algebra acting on a separable Hilbert space. The center of $M$ is an abelian von Neumann subalgebra, but then that means that every von Neumann algebra on a separable Hilbert space has a hyperfinite von Neumann subalgebra, which means that the theorem above holds for every von Neumann algebra on a separable Hilbert space, no? What am I missing? abstract-algebra c-star-algebras von-neumann-algebras noncommutative-geometry The K I cannot speak about the cohomology thing because I would have to go back and look at the definitions. But it is trivially true that every von Neumann algebra has hyperfinite subalgebras. In fact, every von Neumann algebra has finite-dimensional unital subalgebras. You can start with $\mathbb C\,1$, for instance. If the von Neumann algebra is nontrivial it will have a nontrivial projection $p$, and then you can consider the two-dimensional subalgebra $\mathbb C\,p+\mathbb C\,(1-p)$. Etc. Martin Argerami
CommonCrawl
Analogy for voltage, current and resistance When describing voltage, current, and resistance, a common analogy is a water tank. In this analogy, charge is represented by the water amount, voltage is represented by the water pressure, and current is represented by the water flow. So for this analogy, remember: water amount = charge, water pressure = voltage, and water flow = current. Using water as an analogy offers an easy way to gain a basic understanding. Electricity 101 - Voltage, Current, and Resistance. The three most basic components of electricity are voltage, current, and resistance. VOLTAGE is like the pressure that pushes water through the hose. Resistance. Resistance is a sort of brake on the current. In our analogy that is the size of the nozzle on the end of the bucket. Smaller nozzle, higher resistance! And by using an open-ended bucket the only resistance would be created by the air! Resistance is measured in ohms (abbreviation: Ω), and the mathematical symbol is R. Voltage, current, and resistance are three properties that are fundamental to almost everything you will do in electrical and electronics engineering. They are intimately related. In this article, we used a water-in-a-river analogy to explain what current, resistance and voltage are. The hydraulic analogy is excellent; however, if you're looking for something else then consider mass-spring-dashpot systems. You can also look at springs, masses and dashpots as capacitors, inductors and resistors if you want a direct mechanical analog. Force and current are analogous. Velocity and voltage are analogous. In this well-known analogy a battery is seen as a pump and resistances as constrictions in a pipe. The pipes form a circuit and are already full of water. A more powerful pump means a higher voltage battery. This nicely shows that a big voltage causes a big current. Although a physical description of current and voltage is not strictly necessary to study electronics, it is much easier to deal with series and parallel circuits and calculate component values with a good intuitive grasp of the underlying concepts. As it is hard to visualise current and voltage, analogies are often used to describe these concepts. The pipe-and-water analogy is quite common; I also like a traffic analogy. The voltage is the number of cars wanting to travel on a road. The current is the number of cars moving. Resistance is the obstacles or speed bumps on the road. The amount of current in a circuit depends on the amount of voltage and the amount of resistance in the circuit to oppose current flow. Just like voltage, resistance is a quantity relative between two points. For this reason, the quantities of voltage and resistance are often stated as being "between" or "across" two points in a circuit. For a fixed resistance, current is proportional to voltage. So for a given number of lanes the number of cars passing per hour should be proportional to the speed limit. That's OK: if you allow twice the speed then twice as many cars can pass per hour.
For a fixed voltage, current and resistance are inversely proportional. Relationship between Voltage, Current, and Resistance. The relationship between voltage, current, and resistance can be found from Ohm's law: V = I*R. Here, V = voltage, I = current, R = resistance. See Ohm's Law for further information. In the electrical domain, the effort variable is voltage and the flow variable is electrical current. The ratio of voltage to current is electrical resistance (Ohm's law). The ratio of the effort variable to the flow variable in other domains is also described as resistance. Current, Voltage, Resistance, and Power are the four basic properties of electrical circuits. The mountain analogy in this article will help you to understand these properties. The analogy here is going to be the analogy of water. And so if we have a bunch of water at the top of a water tower, it has potential. Learn about electricity from a mechanical engineering perspective! Why is voltage like pressure? Why is current like flow rate? You will learn that and much more. Voltage and Current relation: the relation between voltage and current is linear, i.e. with a larger voltage the current will be higher, and the current will be lower for a smaller voltage. Ohm's Law Analogy. Using the relationship between voltage, current, and resistance, the third quantity can be found from the two known values (a small code sketch of this appears below). Analogous Mechanical and Electrical Systems. Since the energy of the mass in a Mechanical 1 analogy is measured relative to mechanical ground (i.e., velocity = v = 0), the energy of the capacitance must be measured relative to electrical ground (i.e., voltage = e = 0). To apply this analogy, every node in the electrical circuit becomes a point in the mechanical system. Ohm's Law also makes intuitive sense if you apply it to the water-and-pipe analogy. If we have a water pump that exerts pressure (voltage) to push water around a circuit through a restriction (resistance), we can model how the three variables interrelate. If the resistance to water flow stays the same and the pump pressure increases, the flow rate must also increase. Ohm's Law states that the voltage of a circuit is equal to the current through the circuit times its resistance: (1) V = I * R. Another way of stating Ohm's Law, that is often easier to understand, is: (2) I = V / R, which means that the current through a circuit is equal to the voltage divided by the resistance. However, I've been trying to understand and grasp current, voltage, and resistance by finding an analogy that works for me. The water-in-the-pipe one doesn't do it for me as it still leaves me with questions. So, I thought of this: let's say that voltage is the height of an object being dropped, the current would be the mass of the object, and ... 1. Current 2. Resistance 3. Voltage. Electric Current: electric current is the continuous flow of electric charge. There are two types of current. Garden Hose Analogy: How does voltage relate to the garden hose? Electrical voltage provides the energy that creates the flow of electrons (current), like water pushed through the hose by a pump.
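Here is that third-quantity calculation as a minimal Python sketch (the function name `ohms_law` and the example values are illustrative, not from any particular library):

```python
def ohms_law(voltage=None, current=None, resistance=None):
    """Given exactly two of V (volts), I (amps), R (ohms), return the missing third."""
    if voltage is None:
        return current * resistance      # V = I * R
    if current is None:
        return voltage / resistance      # I = V / R
    if resistance is None:
        return voltage / current         # R = V / I
    raise ValueError("leave exactly one of the three quantities as None")

print(ohms_law(voltage=12, resistance=6))    # current through a 6-ohm load on 12 V: 2.0 A
print(ohms_law(current=2.0, resistance=6))   # voltage needed for 2 A through 6 ohms: 12.0 V
print(ohms_law(voltage=12, current=2.0))     # resistance that draws 2 A at 12 V: 6.0 ohms
```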
A neat version uses plumbing pipes: the voltage is equivalent to the water pressure, the current is equivalent to the flow rate, and the resistance is like the pipe size.

In system modelling there is also a force-current analogy (node analysis): whereas voltage is often regarded as the quantity analogous to force, in the force-current analogy it is the current in the electrical system that is analogous to the force in the mechanical system.

A river gives another picture. If there is no voltage, the river is perfectly flat and completely stagnant; a very high voltage corresponds to a waterfall, whereas zero voltage corresponds to a lake. Resistance can be thought of as the width of the river and the roughness of the riverbed.

Similarly, there is a torque-current analogy for rotational mechanical systems, in which the mathematical equations of the rotational system are compared with the nodal equations of the electrical system, yielding analogous quantities for the rotational mechanical system.

Once you understand the interplay of voltage, current and resistance, as formalized in Ohm's law, you are well on your way to being able to understand basic circuits. The most common analogy is a hydraulic (water) system involving tanks and pipes. This electronic-hydraulic analogy (derisively referred to as the "drain-pipe theory" by Oliver Lodge) is the most widely used analogy for the electron fluid in a metal conductor: since electric current is invisible and the processes at play in electronics are often difficult to demonstrate, the various electronic components are represented by hydraulic equivalents.

The greater the resistance, the smaller the current, and a greater voltage generates a greater current. If resistance gets close to 0, the current gets alarmingly big: that is a short circuit. To put it simply, resistance slows down a current; while specific components such as resistors have resisting electricity as their sole job, any physical material provides some resistance.
Here too, one can get an explanation with the help of the water model: ohmic resistance behaves like a constriction in the flow. Resistors are components used to control the current or the potential difference in a circuit.

Two useful rules follow from the water-and-pipe picture. With resistance steady, current follows voltage (an increase in voltage means an increase in current, and vice versa). With voltage steady, changes in current and resistance are opposite (an increase in current means a decrease in resistance, and vice versa).

A playful "vault" analogy (a play on the word volt) also works: the voltage represents the number of jewels carried by each car, with each vault supplying one jewel per car, corresponding to the unit of electrical potential (one volt = one joule/coulomb); resistance is represented by a rough, bumpy road. Voltage, or potential, drops as the current travels around the loop, analogous to a roller coaster losing elevation (and potential energy) as it completes the ride, eventually to be grounded. An increase in voltage increases current, and an increase in resistance decreases current; and just as trying to push too much water through a pipe will break it, pushing too much current through a circuit will break it too.

One common image is a water tank at a particular height above the ground with a tube at its bottom: the depth of the water plays the role of voltage, and the flow out of the tube plays the role of current.

For transformers, the proper mechanical analogy is force (voltage), velocity (current) and a lever (the transformer): for a step-up (step-down) transformer, the lever is longer (shorter) on the force-source side of the fulcrum.
The analogy extends to batteries. Each cell provides 1.5 V, and two cells connected one after another, in series, provide 3 V.

Another qualitative demonstration uses a flow machine: a normal hose and a sponge-clogged hose simulate two different resistances, the sponge-clogged hose being the higher resistance.

On notation: the R for resistance and the V for voltage are self-explanatory, whereas I for current seems a bit odd. The I is thought to represent "intensity" (of electron flow), and the other symbol for voltage, E, stands for electromotive force.

The analogy even carries over to heat conduction: if current corresponds to the rate at which thermal energy is conducted and voltage to the temperature difference across a window, one can ask how the resistance of an object depends on the cross-sectional area through which the current flows, which is equivalent to the relationship between the rate of thermal energy transfer and that area.

Ohm's law, I = V / R, says that to increase the current flowing in a circuit, the voltage must be increased or the resistance decreased; a simple electrical circuit behaves like a pressurized water system in exactly this respect. More formally, the voltage between two points in a circuit is the negative of the line integral of the electric field along the circuit between those two points, ΔV_AB = −∫_A^B E · dℓ, and the resistance of a segment of the circuit is the ratio of the voltage across that segment to the current through it, R = V / I.

Ohm's law describes the way current flows through a resistance when a different electric potential (voltage) is applied at each end of the resistance: the voltage is the water pressure, the current is the amount of water flowing through the pipe, and the resistance is the size of the pipe. Voltage = current × resistance is the fundamental relation to know when trying to make sense of electronic circuits; many of the other rules can be easily derived once you understand it.

In AC circuits, current and voltage are in phase for resistive loads; inductive reactance causes current to lag the supplied voltage, while capacitive reactance causes current to lead the supplied voltage, which is why capacitors can correct phase angles in inductive circuits. And for a real source with internal resistance, the terminal voltage V_ab equals the emf ε only if no current flows through the source.
It is common to hear that electricity is like water: volts measure voltage and are like water pressure; amps measure current and are like the volume of the flow; kW measure power and are like how quickly you fill or empty the bucket; kWh measure energy and are like how full the bucket is.

The pressure picture explains why a bird can perch on a power line. Electric current flow is proportional to voltage difference according to Ohm's law, and both the bird's feet are at the same voltage; since current flow is necessary for electric shock, the bird is quite safe unless it simultaneously touches another wire at a different voltage. (Maintenance on high-voltage transmission lines is nevertheless sometimes done with the voltage live.)

Resistance is the opposition to current flow: in the water-pump analogy, the narrower the pipe, the less water will flow, and likewise the greater the electrical resistance for a given voltage, the less electric current will exist. A flow of water through a pipe restricted by a constriction suffers a pressure drop after the constriction; the flow of water is equivalent to the electric current, and the pressure drop is equivalent to the voltage drop.

Voltage is energy per unit charge: water is analogous to charge, pressure is analogous to voltage, and the flow of water is analogous to current. In the pressure picture, a closed valve is a resistance, and in reality, if you pull too much current (water), the voltage (pressure) drops; with long wires (pipes) behind an open switch (closed valve), the voltage (pressure) is constant over the entire length.

In series circuits the electrical current always remains the same through every component, while the voltage does not. The current through a series circuit equals the applied voltage divided by the equivalent resistance: with a 9 V battery across a total series resistance of 90 Ω, I = V/R_S = 9 V / 90 Ω = 0.1 A. Note that the sum of the potential drops across the individual resistors equals the voltage supplied by the battery.
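The following short Python sketch checks both of those facts at once (illustrative; the individual resistor values are assumed here, since the example above only fixes their 90 Ω total):

    def series_drops(v_supply, resistors):
        # In a series circuit, one and the same current flows through every element.
        r_total = sum(resistors)
        i = v_supply / r_total
        return i, [i * r for r in resistors]

    i, drops = series_drops(9, [20, 30, 40])  # assumed values summing to 90 ohms
    print(i)           # 0.1 A, matching I = 9 V / 90 ohms
    print(sum(drops))  # about 9.0 V: the drops add up to the battery voltage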
One teacher reports using a wide river versus a garden hose for this purpose: if it is carefully presented, with full disclosure about the ultimate failure of all analogies, the water analogy is the best method for learning to distinguish between current, voltage, and resistance.

The hydraulic analogy extends to inductors as well. The fluid equation is identical to the inductor's, with the usual analogy between pressure and voltage and between fluid flow rate and current, except that the constant L is no longer called an inductance but an inertance; it clearly has something to do with inertia and mass, and an inertance stores energy in the form of moving fluid.

Some definitions help frame the discussion. A DC circuit is the path of direct current flow through a conductor and the devices it powers; DC means direct current, the flow of negative charge in the form of electrons in one direction; the most common energy source of a DC circuit is a battery, with generators and solar cells less common; and a battery is a device that can store energy chemically. Voltage can be thought of as the pressure pushing charges along a conductor, while the electrical resistance of a conductor is a measure of how difficult it is to push the charges along; in the flow analogy, electrical resistance is similar to friction.

The hydraulic analogy of resistance can also be set up with two containers, one filled with water and one half full, connected so that water flows from the higher level to the lower. The main function of a resistor is to limit or oppose the current in a circuit by providing resistance, and the best analogy for this is a garden hose with water flowing through it, the water representing the current.

Classroom exercises built on the ubiquitous water tower analogy typically ask students to identify the electrical analogs of water-circuit components (flow rate models current, water pressure models voltage), to describe the limitations of the water analogy, and to describe the current and voltage characteristics of series and parallel circuits.

On the mechanical side there are two main analogies: the force-current analogy, in which the equations of the translational mechanical system are compared to the nodal equations of an electrical system (one containing a current source, a resistor, an inductor and a capacitor), and the force-voltage analogy, in which current corresponds to velocity and capacitance to compliance, the inverse spring constant.
A homework-style prompt in this spirit: think of an analogy, or draw a comic, that illustrates how voltage, current and resistance are all related (cars on a highway, water moving through a pipe, and so on), label how each of the three terms is represented in your example, and state under what conditions the relationship between resistance and current applies. A related true/false item: in a pumping system, the friction through the pipes and coils represents the system resistance.

The waterfall analogy, in which the height, flow rate and number of rocky obstacles in a waterfall correspond to voltage, current and resistance, has no relevance beyond simple battery-based circuits, so its limits should be kept in mind.

For an electronic resistor, the constant of proportionality is called the resistance, and its units are volts per amp, or ohms. For example, a 1-ohm resistor has a voltage drop of 1 volt for 1 amp of current, 2 volts for 2 amps of current, and so on; one can imagine similar behavior for a hydraulic resistor. In a slot-car circuit, resistance is equivalent to the size of the water pipes or how clogged they are: a smaller or more clogged pipe makes water run slower at the same pressure, and a higher-resistance circuit makes the car draw less current at the same voltage.

A vessel of water makes the same point: pressure = voltage, spout = resistance, flow = current. If the spout is increased in length (with its cross-sectional area kept constant), the resistance increases, and this increase in resistance decreases the flow (current).

Students are assumed to be familiar with Ohm's law (V/R = I) as well as with the formula for the resistance of a conductor, R = ρl/A, where ρ is the resistivity in ohm-meters, l is the length, and A is the cross-sectional area of the conductor.
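A quick Python illustration of the resistivity formula (the wire dimensions are made up for the example; the resistivity of copper, about 1.68e-8 ohm-meters, is a standard textbook value):

    import math

    def wire_resistance(rho, length, diameter):
        # R = rho * l / A for a round wire of the given diameter
        area = math.pi * (diameter / 2) ** 2
        return rho * length / area

    # 10 m of 1 mm-diameter copper wire
    print(wire_resistance(1.68e-8, 10, 1e-3))  # about 0.21 ohms

Doubling the length doubles the resistance, while doubling the diameter quarters it, because the area grows with the square of the diameter.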
Returning to the water pump picture: if a pump exerts pressure (voltage) to push water around a circuit (current) through a restriction (resistance), we can model how the three variables interrelate. To the two rules stated earlier one can add a third: with current steady, voltage follows resistance (an increase in resistance means an increase in voltage). (Quiz books ask the same thing in reverse: in the force-voltage analogy, velocity is analogous to current.)

More whimsical analogies exist, such as how crayons are made in a factory: the voltage can be represented by the pressure acting on the melted wax, and the current by the resulting flow of wax. An airflow analogy is used for building ventilation, explained with a few very simple electrical circuits and their drainage equivalents; the analogous terms are voltage → pressure, current → airflow, and resistance → friction.

For combinations of components: if resistor R1 is in series with resistor R2, the combination behaves like one resistor of value R1 + R2, so replacing the two series resistors with a single resistor R = R1 + R2 leaves the circuit's behavior unchanged; the total voltage V equals V1 + V2, and by Kirchhoff's current law the currents through the two resistors are the same.

There is also an electrical analogy for conduction heat transfer that can be exploited in problem solving: the analog of the current is the heat flow, and the analog of the voltage difference is the temperature difference, so from this perspective a slab is a pure resistance to heat transfer.

The current flowing through a resistor depends on the voltage drop across it and on the resistance of the resistor. The SI unit of resistance is the ohm, with symbol Ω, and an ohm is a volt per ampere: 1 Ω = 1 V/A. In a building analogy: voltage is the pressure of the water, amperage is the flow rate of the water, resistance is the size of the pipe the water is flowing through, and wattage is the total number of gallons of water used. Resistance is measured in ohms.
It can be helpful to use the analogy of a water tank with a hose connected to the bottom to explain the relationship between voltage, current, and resistance: the charge is represented by the water in the tank, which flows out of the hose, and the voltage by the pressure of the flow.

Worked numbers make the law concrete. Current is the rate of flow of electric charge, and a potential difference (voltage) across an electrical component is needed to make a current flow through it. If a circuit carries a current of 2 amperes through a resistance of 1 ohm, these being the two knowns, then the voltage equals current multiplied by resistance: V = 2 A × 1 Ω = 2 V. Conversely, the resistance R in ohms equals the voltage V in volts divided by the current I in amps, R = V/I, from which it follows that if we increase the voltage the current will increase, and if we increase the resistance the current will reduce. Interactive simulations show the same thing: with the voltage slider set at 4.5 V, doubling the resistance divides the current by two, an inverse relationship.

In summary: current describes the quantity of electrons passing through a point in a circuit at a given instant and is measured in amperes (amps, A); voltage describes the potential difference in electrical charge between two points in an electrical circuit and is measured in volts.

Continuing the water analogy, the voltage (volt) is the water trying to push charge (amp) along the path of a hose, while the resistance (ohm) is the thing that inhibits the charge's movement, such as the wall of the hose. To model the resistance and the charge-velocity of metals, a pipe packed with sponge, or a narrow straw filled with syrup, is perhaps a better analogy than a large-diameter water pipe. Resistance in most electrical conductors is a linear function: as current increases, the voltage drop increases proportionally (Ohm's law).

The analogy also clarifies series versus parallel connections of loads. With two light bulbs of resistance R_b each, in series with a battery of voltage V, the current that flows through each bulb is I = V / (2 R_b); contrast this with the two bulbs in parallel, where each bulb sees the full battery voltage.
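A small Python sketch of that comparison (illustrative; the 9 V and 90 Ω figures are chosen for the example):

    def bulb_current(v, r_bulb, wiring):
        # Two identical bulbs on one battery.
        if wiring == "series":
            return v / (2 * r_bulb)   # I = V / 2Rb: the bulbs share the voltage
        if wiring == "parallel":
            return v / r_bulb         # each bulb sees the full V
        raise ValueError("wiring must be 'series' or 'parallel'")

    print(bulb_current(9, 90, "series"))    # 0.05 A through both bulbs
    print(bulb_current(9, 90, "parallel"))  # 0.1 A through each bulb

This is why bulbs glow dimmer in series: less current flows through each one.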
Course objectives for this material typically read: define electric current and electromotive force; write and apply Ohm's law to circuits containing resistance and emf; define the resistivity of a material and apply the formulas for its calculation; and define and apply the concept of the temperature coefficient of resistance.

Power ties the three quantities together. Power = voltage × current, and current = voltage / resistance, so power = voltage² / resistance. Hence if each device on a constant-voltage supply draws more power, the current must be increasing and the resistance must be getting smaller. Expressed as a water analogy, the rate of water flow equals the water pressure applied divided by how restrictive the hose is; electrically speaking, the current through a circuit equals the voltage applied divided by the resistance in the circuit, I = V/R, from which V = IR and R = V/I also follow. The expression V = I × R is commonly known as Ohm's law.

Voltage can also be introduced with an analogy of BBs representing units of charge: the more BBs raised to a certain height, the more energy the group has. One caution about pipes, though: for a given pressure drop, the flow rate through a real pipe is proportional to roughly the 4th power of the pipe diameter, so the hydraulic analogy should not be pushed too far quantitatively.

The same law applies in vaping hardware: while the resistance of an atomizer is labeled, every part of the circuit also has some inherent resistance; current is measured in amps (A) and resistance in ohms (Ω), and Ohm's law can be stated in words as current equals voltage divided by resistance, or mathematically as I = V/R.

The analogy even illuminates capacitors. When charging a capacitor through a resistor, the closer the capacitor voltage gets to the power supply voltage, the smaller the current, and the slower the capacitor voltage changes; once the capacitor voltage is close to the supply voltage, the resulting current is so small that the rate of change of the capacitor voltage nearly slows to a halt. The bucket analogy can be extended with a rope-and-pulley arrangement to capture this slowing behavior.
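A short Python sketch of that charging behavior (illustrative; the component values and the simple Euler time-stepping are assumptions for the demo, not from the text above):

    def charge_capacitor(v_supply, r, c, dt=0.001, steps=5000):
        # dV/dt = (Vs - V) / (R*C): the charging current, (Vs - V)/R,
        # shrinks as the capacitor voltage V approaches the supply Vs.
        v = 0.0
        for _ in range(steps):
            v += dt * (v_supply - v) / (r * c)
        return v

    # 9 V supply, 1 kilo-ohm resistor, 1000 uF capacitor: after 5 s
    # (five time constants) the capacitor has nearly reached the supply.
    print(charge_capacitor(9, 1_000, 1e-3))  # roughly 8.94 V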
CommonCrawl
Serre's criterion for normality

In algebra, Serre's criterion for normality, introduced by Jean-Pierre Serre, gives necessary and sufficient conditions for a commutative Noetherian ring A to be a normal ring. The criterion involves the following two conditions for A:

• $R_{k}$: $A_{\mathfrak {p}}$ is a regular local ring for any prime ideal ${\mathfrak {p}}$ of height ≤ k.
• $S_{k}$: $\operatorname {depth} A_{\mathfrak {p}}\geq \inf\{k,\operatorname {ht} ({\mathfrak {p}})\}$ for any prime ideal ${\mathfrak {p}}$.[1]

The statement is:

• A is a reduced ring $\Leftrightarrow$ $R_{0}$ and $S_{1}$ hold.
• A is a normal ring $\Leftrightarrow$ $R_{1}$ and $S_{2}$ hold.
• A is a Cohen–Macaulay ring $\Leftrightarrow$ $S_{k}$ holds for all k.

Items 1 and 3 trivially follow from the definitions; item 2 is much deeper. For an integral domain, the criterion is due to Krull; the general case is due to Serre. For example, the coordinate ring $k[x,y]/(y^{2}-x^{3})$ of a cuspidal cubic is Cohen–Macaulay (so satisfies every $S_{k}$) but fails $R_{1}$ at the origin and is therefore not normal.

Proof

Sufficiency

(After EGA IV$_{2}$, Theorem 5.8.6.) Suppose A satisfies $S_{2}$ and $R_{1}$. Then A in particular satisfies $S_{1}$ and $R_{0}$; hence, it is reduced. If ${\mathfrak {p}}_{i},\,1\leq i\leq r$ are the minimal prime ideals of A, then the total ring of fractions K of A is the direct product of the residue fields $\kappa ({\mathfrak {p}}_{i})=Q(A/{\mathfrak {p}}_{i})$, as for the total ring of fractions of any reduced ring. That means we can write $1=e_{1}+\dots +e_{r}$ where the $e_{i}$ are idempotents in $\kappa ({\mathfrak {p}}_{i})$ such that $e_{i}e_{j}=0$ for $i\neq j$. Now, if A is integrally closed in K, then each $e_{i}$ is integral over A and so lies in A; consequently, A is a direct product of the integrally closed domains $Ae_{i}$ and we are done. Thus, it is enough to show that A is integrally closed in K. To this end, suppose $(f/g)^{n}+a_{1}(f/g)^{n-1}+\dots +a_{n}=0$ where f, g and the $a_{i}$ are all in A and g is moreover a non-zerodivisor. We want to show: $f\in gA$. Now, the condition $S_{2}$ says that $gA$ is unmixed of height one, i.e., each associated prime ${\mathfrak {p}}$ of $A/gA$ has height one. Indeed, if ${\mathfrak {p}}$ had height greater than one, then ${\mathfrak {p}}$ would contain a non-zerodivisor on $A/gA$ (as $S_{2}$ gives $\operatorname {depth} A_{\mathfrak {p}}\geq 2$, so $\operatorname {depth} (A/gA)_{\mathfrak {p}}\geq 1$); but ${\mathfrak {p}}$ is associated to the zero ideal in $A/gA$, so it can only contain zerodivisors on $A/gA$. By the condition $R_{1}$, the localization $A_{\mathfrak {p}}$ is integrally closed and so $\phi (f)\in \phi (g)A_{\mathfrak {p}}$, where $\phi :A\to A_{\mathfrak {p}}$ is the localization map, since the integral equation persists after localization. If $gA=\cap _{i}{\mathfrak {q}}_{i}$ is the primary decomposition, then, for any i, the radical of ${\mathfrak {q}}_{i}$ is an associated prime ${\mathfrak {p}}$ of $A/gA$ and so $f\in \phi ^{-1}({\mathfrak {q}}_{i}A_{\mathfrak {p}})={\mathfrak {q}}_{i}$; the equality here holds because ${\mathfrak {q}}_{i}$ is a ${\mathfrak {p}}$-primary ideal. Hence, the assertion holds.

Necessity

Suppose A is a normal ring. For $S_{2}$, let ${\mathfrak {p}}$ be an associated prime of $A/fA$ for a non-zerodivisor f; we need to show it has height one. Replacing A by a localization, we can assume A is a local ring with maximal ideal ${\mathfrak {p}}$. By definition, there is an element g in A such that ${\mathfrak {p}}=\{x\in A|xg\equiv 0{\text{ mod }}fA\}$ and $g\not \in fA$. Put y = g/f in the total ring of fractions. If $y{\mathfrak {p}}\subset {\mathfrak {p}}$, then ${\mathfrak {p}}$ is a faithful $A[y]$-module and is a finitely generated A-module; consequently, $y$ is integral over A and thus in A, a contradiction.
Hence, $y{\mathfrak {p}}=A$, that is, ${\mathfrak {p}}=(f/g)A$, which implies ${\mathfrak {p}}$ has height one (Krull's principal ideal theorem). For $R_{1}$, we argue in the same way: let ${\mathfrak {p}}$ be a prime ideal of height one. Localizing at ${\mathfrak {p}}$ we may assume ${\mathfrak {p}}$ is a maximal ideal, and a similar argument to the one above shows that ${\mathfrak {p}}$ is in fact principal. Thus, A is a regular local ring. $\square $

Notes

1. Grothendieck & Dieudonné 1965, § 5.7.

References

• Grothendieck, Alexandre; Dieudonné, Jean (1965). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Seconde partie". Publications Mathématiques de l'IHÉS. 24. doi:10.1007/bf02684322. MR 0199181.
• Matsumura, Hideyuki (1970). Commutative Algebra.
Wikipedia
\begin{document} \pagestyle{plain} \title{An Improved Algorithm for Counting Graphical Degree Sequences} \author{ Kai Wang\footnote{Department of Computer Sciences, Georgia Southern University, Statesboro, GA 30460, USA \tt{[email protected]}}, Troy Purvis\footnote{Department of Computer Sciences, Georgia Southern University, Statesboro, GA 30460, USA \tt{[email protected]}} } \maketitle \begin{abstract} We present an improved version of a previous efficient algorithm that computes the number $D(n)$ of zero-free graphical degree sequences of length $n$. A main ingredient of the improvement lies in a more efficient way to compute the function $P(N,k,l,s)$ of Barnes and Savage. We further show that the algorithm can be easily adapted to compute the $D(i)$ values for all $i\le n$ in a single run. Theoretical analysis shows that the new algorithm to compute all $D(i)$ values for $i\le n$ is a constant factor faster than the previous algorithm to compute a single $D(n)$. Experimental evaluations show that the constant of improvement is about 10. We also perform simulations to estimate the asymptotic order of $D(n)$ by generating uniform random samples from the set of $E(n)$ integer partitions of fixed length $n$ with even sum and largest part less than $n$ and computing the proportion of them that are graphical degree sequences. The known numerical results of $D(n)$ for $n\le 290$ together with the known bounds of $D(n)$ and simulation results allow us to make an informed guess about its unknown asymptotic order. The techniques for the improved algorithm can be applied to compute other similar functions that count the number of graphical degree sequences of various classes of graphs of order $n$ and that all involve the function $P(N,k,l,s)$. \end{abstract} \keywords{counting, graphical degree sequence, graphical partition, asymptotic order} \section{Introduction} We consider finite simple graphs (i.e. finite undirected graphs without loops or multiple edges) and their graphical degree sequences that are treated as multisets (i.e. the order of the terms in the sequence does not matter). The notion of an integer partition is well-known in number theory. The terms in an integer partition and a graphical degree sequence are often written in non-increasing order for convenience. An integer partition is called a graphical partition if it is the vertex degree sequence of some simple graph. Whether any given integer partition $(a_1, a_2, \cdots, a_n)$ is a graphical partition can be easily tested, for example, through the Erd{\H{o}}s-Gallai criterion \cite{ErdosCallai1960}. Clearly a zero-free graphical degree sequence and a graphical partition are equivalent notions. The former is often used in a context where the lengths of the considered sequences are the same and fixed and the latter is often used in a context where the sums of the parts in the considered partitions are the same and fixed. It is well-known that the number of graphs of order $n$ can be efficiently calculated exactly using the Redfield-P\'{o}lya theorem \cite{Redfield1927,Polya1937} and also asymptotically (which is $2^{n\choose 2}/n!$) based on the fact that almost all graphs of order $n$ have only the trivial automorphism when $n$ is large \cite{HararyPalmer1973}. Somewhat surprisingly, no algorithm was known to efficiently compute the number $D_0(n)$ of graphical degree sequences of length $n$ until recently. Previously known algorithms to compute $D_0(n)$ are from Ruskey et al. \cite{Ruskey1994} and Iv\'{a}nyi et al.
\cite{Ivanyi2013}. Ruskey et al.'s algorithm can compute $D_0(n)$ by generating all graphical degree sequences of length $n$ using a highly efficient ``Reverse Search'' approach which seems to run in constant amortized time. Iv\'{a}nyi et al.'s algorithm can compute the number $D(n)$ of zero-free graphical degree sequences of length $n$ by generating the set of all $E(n)$ integer partitions of $n$ parts with even sum and each part less than $n$ and testing whether they are graphical partitions using linear time algorithms similar to the Erd{\H{o}}s-Gallai criterion. The $D_0(n)$ value can be easily calculated when all $D(i)$ values for $i\le n$ are known since $D_0(n)=1+\sum_{i=2}^nD(i)$ when $n\ge 2$ \cite{Ivanyi2013}: every graphical degree sequence of length $n$ is either the all-zero sequence or a zero-free graphical degree sequence of some length $2\le i\le n$ padded with $n-i$ zeros. Burns \cite{Burns2007} proves an exponential upper bound $O(4^n/((\log n)^c\sqrt{n}))$ and a lower bound $\Omega(4^n/n)$ for $D_0(n)$ for sufficiently large $n$, although its tight asymptotic order is still unknown. The exponential lower bound of $D_0(n)$ necessarily makes these enumerative algorithms run in time exponential in $n$ and therefore impractical for the purpose of computing $D_0(n)$. In \cite{Wang2016} concise formulas and efficient polynomial time dynamic programming algorithms have been presented to calculate $D_0(n)$, $D(n)$ and the number $D_{k\_con}(n)$ of graphical degree sequences of $k$-connected graphs of order $n$ for every fixed $k\ge 1$, all based on an ingenious recurrence of Barnes and Savage \cite{BarnesSavage1995}. Unfortunately the asymptotic orders of these functions do not appear to be easily obtainable through these formulas, which is why we currently strive to compute as many values of these functions as possible for the purpose of guessing their asymptotic trends. Although these new algorithms for $D_0(n)$ and $D(n)$ are fast with time complexity $O(n^6)$, they still quickly encounter bottlenecks because of the large space complexity $O(n^5)$. This motivates us to further investigate the possibility of reducing memory usage for these algorithms. In this paper we introduce nontrivial improvements to these algorithms, using the computation of $D(n)$ as an example, to achieve significant memory usage reductions besides proportional run time reductions. We also show that the algorithm can be easily adapted to compute the $D(i)$ values for all $i\le n$ in a single run with essentially the same run time and memory usage as computing a single $D(n)$ value. The introduced techniques can be applied to all similar algorithms that compute the number of graphical degree sequences of various classes of simple graphs of order $n$ based on the recurrence of Barnes and Savage. We will prove that the new algorithm that computes all $D(i)$ values for $i\le n$ achieves a constant factor improvement in both space and time complexity over the previous algorithm in \cite{Wang2016} that computes a single $D(n)$ value. The experimental performance evaluations show that the constant is about 10. We also briefly mention the guessed asymptotic order of $D(n)$ based on simulation results and the prospects of determining its unknown growth order. \begin{table}[!htb] \centering \caption{Terminology used in this paper} \begin{tabular}{||c|l||} \hline\hline Term & Meaning\\ \hline\hline $\mathbf{P}(N)$ & set of unrestricted partitions of an integer $N$\\ \hline $\mathbf{P}(N,k,l)$ & set of partitions of an integer $N$ into at most $l$ parts\\ & with largest part at most $k$ \\ \hline $\mathbf{P}(N,k,l,s)$ & subset of $\mathbf{P}(N,k,l)$ determined by integer $s$ (See def.
(\ref{eqn:PNkls}))\\ \hline $\mathbf{G}^{'}(N,l)$ & set of graphical partitions of $N$ with exactly $l$ parts\\ \hline $\mathbf{H}^{'}(N,l)$ & set of graphical partitions of $N$ with exactly $l$ parts\\ & and largest part exactly $l-1$\\ \hline $\mathbf{L}^{'}(N,l)$ & set of graphical partitions of $N$ with exactly $l$ parts\\ & and largest part less than $l-1$\\ \hline $\mathbf{D}(n)$ & set of zero-free graphical degree sequences of length $n$ \\ \hline $\mathbf{D}_0(n)$ & set of graphical degree sequences of length $n$ allowing zero terms \\ \hline $\mathbf{E}(n)$ & set of integer partitions of $n$ parts with even sum and each part $<n$ \\ \hline $\mathbf{H}(n)$ & subset of $\mathbf{D}(n)$ with largest part exactly $n-1$ \\ \hline $\mathbf{L}(n)$ & subset of $\mathbf{D}(n)$ with largest part less than $n-1$ \\ \hline $\mathbf{I}_e(N_1,N_2)$&$\{N:N_1\le N\le N_2, N \mbox{ is an even integer}\}$\\ \hline $\mathbf{I}'_e(N_1,N_2)$&$\{N:N_1\le N< N_2, N \mbox{ is an even integer}\}$\\ \hline\hline \end{tabular} \label{tbl:definitions} \end{table} \section{Review of the algorithms for $D(n)$ ($L(n)$)} \label{sec:basic_alg} In this section we review the relevant notations, formulas and algorithms in \cite{Wang2016}. For the reader's convenience, the terminology employed in this paper is summarized in Table \ref{tbl:definitions}. We use bold face letters to indicate a set and the same normal face letters to indicate the cardinality of that set. For example, $\mathbf{P}(N,k,l)$ is the set of partitions of an integer $N$ into at most $l$ parts with largest part at most $k$ while $P(N,k,l)$ is the cardinality of the set $\mathbf{P}(N,k,l)$, i.e. the number of partitions of an integer $N$ into at most $l$ parts with largest part at most $k$. As shown in \cite{Wang2016}, there are concise formulas to compute $D_0(n)$, $D(n)$, $H(n)$ and $L(n)$ which all involve the function $P(N,k,l,s)$ introduced by Barnes and Savage \cite{BarnesSavage1995}. The original definition of the set $\mathbf{P}(N,k,l,s)$ is as follows \cite{BarnesSavage1995}: \begin{equation} \label{eqn:PNkls} \mathbf{P}(N,k,l,s)=\left\{ \begin{array}{ll} \varnothing & \mbox{if $s < 0$};\\ \{\pi \in \mathbf{P}(N,k,l) : s+\sum_{i=1}^{j}r_i(\pi)\ge j, 1 \le j \le d(\pi)\} & \mbox{if $s \ge 0$}.\end{array} \right. \end{equation} In this definition $d(\pi)$ is the side length (number of rows) of the Durfee square of the Ferrers diagram of the integer partition $\pi$. The function $r_i(\pi)$ is defined as $r_i(\pi)=\pi_i^{'}-\pi_i$ where $\pi_i$ and $\pi_i^{'}$ are the number of dots in the $i$-th row and column of the Ferrers diagram of $\pi$, respectively, for $1 \le i \le d(\pi)$. Equivalently, $\pi_i$ and $\pi_i^{'}$ are the $i$-th largest part of the partition $\pi$ and the conjugate of $\pi$, respectively. In the literature the values $\pi_i-\pi_i^{'}$ are called \textit{ranks} of a partition $\pi$ so the values $r_i(\pi)$ can also be called \textit{coranks} of the partition $\pi$. The calculation of the function $P(N,k,l,s)$ need not follow the definition of $\mathbf{P}(N,k,l,s)$. Instead it can be efficiently calculated using dynamic programming through a recurrence of Barnes and Savage \cite[Theorem 1]{BarnesSavage1995}. Our improved algorithm in the next section mainly focuses on how to compute this function in a more efficient way. We summarize some of the formulas from \cite{Wang2016} here: \begin{equation} \label{eqn:D0(n)} D_0(n)=\sum_{N\in \mathbf{I}_e(0,n(n-1))}P(N,n-1,n,0). 
\end{equation} \begin{equation} \label{eqn:D(n)} \begin{split} D(n)&=\sum_{N\in \mathbf{I}_e(n,n(n-1))}G^{'}(N,n) \\ &=\sum_{N\in \mathbf{I}_e(n,n(n-1))}\sum_{k=1}^{n-1}P(N-k-n+1,k-1,n-1,n-k-1). \end{split} \end{equation} \begin{equation} \label{eqn:L(n)} \begin{split} L(n)&=\sum_{N\in \mathbf{I}_e(n, n(n-2))}L^{'}(N,n) \\ &=\sum_{N\in \mathbf{I}_e(n, n(n-2))}\sum_{k=1}^{n-2}P(N-k-n+1,k-1,n-1,n-k-1). \end{split} \end{equation} These formulas can all be implemented in efficient dynamic programming algorithms that run in time polynomial in $n$ based on the recurrence of Barnes and Savage \cite[Theorem 1]{BarnesSavage1995}. As indicated in \cite{Wang2016}, the computation of $D(n)$ can be transformed into the computation of $L(n)$ if $D_0(n-1)$ is already known based on the relation \begin{equation} \label{eqn:D(n)L(n)} D(n)=L(n)+D_0(n-1). \end{equation} The benefit of this transformation is to save memory because we only need to calculate half of the $L^{'}(N,n)$ ($N\in \mathbf{I}_e(n, n(n-1)/2)$ among $N\in \mathbf{I}_e(n, n(n-2))$) values in order to calculate $L(n)$ due to the symmetry of the sequence $L^{'}(N,n)$ in the sense that \begin{equation} \label{eqn:L(N,n)Symmetry} L^{'}(N,n)=L^{'}(n(n-1)-N,n), N\in \mathbf{I}_e(n, n(n-2)). \end{equation} This transformation makes it feasible to allocate a smaller four dimensional array, which is about one quarter of the size for calculating $D(n)$ using formula (\ref{eqn:D(n)}) directly, to hold the necessary $P(*,*,*,*)$ values in order to compute $L(n)$. The sequence of $G^{'}(N,n)$ values for $N\in \mathbf{I}_e(n,n(n-1))$ used in formula (\ref{eqn:D(n)}), though also unimodal as $L^{'}(N,n)$ for $N\in \mathbf{I}_e(n, n(n-2))$, does not possess a similar symmetry. With this transformation in mind, we will treat the computation of $L(n)$ and $D(n)$ as equivalent problems. \begin{algorithm}[h] \DontPrintSemicolon \KwIn{A positive integer $n$} \KwOut{$L(n)$} $N \gets n(n-1)/2$\; Allocate a four dimensional array $P[N-n+1][n-2][n][N-n+1]$\; Fill in the array $P$ using dynamic programming based on \cite[Theorem 1]{BarnesSavage1995}\; $S \gets 0$\; \For{$i \in \mathbf{I}'_e(n,N)$ } { \For{$j \gets 1$ \textbf{to} $\min ({n-2,i-n+1})$ } { $S \gets S+P[i-j-n+1][j-1][n-1][n-j-1]$\; } } $S \gets 2S$\; \If{$N$ is even} { \For{$j \gets 1$ \textbf{to} $\min ({n-2,N-n+1})$ } { $S \gets S+P[N-j-n+1][j-1][n-1][n-j-1]$\; } } \Return{$S$}\; \caption{An algorithm to compute $L(n)$.} \label{algo:D(n)} \end{algorithm} We now reiterate the pseudo-code to compute $L(n)$, and hence $D(n)$ when $D_0(n-1)$ is known, in Algorithm \ref{algo:D(n)} from \cite{Wang2016} based on formulas (\ref{eqn:L(n)}) and (\ref{eqn:D(n)L(n)}) and the symmetry (\ref{eqn:L(N,n)Symmetry}). The variable $S$ is used to store the value of $L(n)$. Line 2 indicates the allocation sizes for the four dimensions of the array $P$. When elements of this array are later retrieved on line 7 and 11, we use the convention that array indices start from 0 such that the array element $P[N][k][l][s]$ stores the function value $P(N,k,l,s)$. As noted in \cite{Wang2016}, we can choose to allocate only size 2 for the third dimension of the array $P$ in Algorithm \ref{algo:D(n)} since each $P(*,*,l,*)$ value depends only on the $P(*,*,l,*)$ and $P(*,*,l-1,*)$ values according to \cite[Theorem 1]{BarnesSavage1995} and for the purpose of computing $L(n)$ only the $P(*,*,n-1,*)$ values are used on line 7 and 11. 
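For the reader's convenience, we also record the shape of the recurrence here, written out from the four dependencies used in our dynamic programs (see \cite[Theorem 1]{BarnesSavage1995} for the precise statement and its boundary conditions):
\[ P(N,k,l,s)=P(N,k-1,l,s)+P(N,k,l-1,s)-P(N,k-1,l-1,s)+P(N-k-l+1,k-1,l-1,s+l-k-1). \]
The first three terms perform inclusion-exclusion on whether the largest part equals $k$ and whether the number of parts equals $l$; the last term accounts for the partitions where both hold, obtained by deleting the first row and first column of the Ferrers diagram, which is what produces the shift $s+l-k-1$ in the fourth argument.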
In the next section we will introduce further improvements to this algorithm in order to save memory besides run time. \section{Improved algorithm for $L(n)$ ($D(n)$)} \label{sec:improved_alg} In this section we first show how the computation of a single $L(n)$ value can be improved in Algorithm \ref{algo:D(n)}. A main idea is to reduce the allocation size for the fourth dimension of the array $P$ based on some simple observations about the function $P(N,k,l,s)$ regarding its fourth variable $s$. Then we show how the algorithm can be easily extended to compute all $L(i)$ values for $i\le n$ in a single run with essentially the same run time and memory usage as the computation of the single $L(n)$ value. As mentioned in Section \ref{sec:basic_alg}, we need to calculate all the $L^{'}(N,n)$ values for $N\in \mathbf{I}_e(n, n(n-1)/2)$ in order to calculate $L(n)$, where $L^{'}(N,n)$ can be calculated as: \begin{equation} \label{eqn:L(Nn)} L^{'}(N,n)=\sum_{k=1}^{n-2}P(N-k-n+1,k-1,n-1,n-k-1). \end{equation} It is clear that the largest index of the first dimension of all the needed $P(*,*,*,*)$ values is at most $n(n-1)/2-n=n(n-3)/2$ (corresponding to $N=n(n-1)/2$ and $k=1$). This explains why the allocation size for the first dimension of the array $P$ in Algorithm \ref{algo:D(n)} is $n(n-3)/2+1$. In fact this allocation size can be slightly reduced. For each pair of $N\in \mathbf{I}_e(n, n(n-1)/2)$ and $1\le k\le n-2$, each term $P(N-k-n+1,k-1,n-1,n-k-1)$ in the sum (\ref{eqn:L(Nn)}) for $L^{'}(N,n)$ is nonzero only if \[ N-k-n+1\le (k-1)(n-1) \] by definition. Since $(k-1)(n-1)=kn-k-n+1$, this inequality is equivalent to $N\le kn$, i.e. $k\ge N/n$. This means we only need to include in the sum the $P(N-k-n+1,k-1,n-1,n-k-1)$ values for which $N-k\le N-N/n=N(1-1/n)$. Since $N\le n(n-1)/2$, the largest index of the first dimension of all the needed non-zero $P(*,*,*,*)$ values is thus at most $\frac{n(n-1)}{2}(1-1/n)-n+1=(n^2+3)/2-2n=(n-1)(n-3)/2$, which is slightly smaller than $n(n-3)/2$. It is also evident that the largest index of the fourth dimension of all the needed $P(*,*,*,*)$ values for calculating each $L^{'}(N,n)$ is $n-2$ (corresponding to $k=1$). However, this does not mean that we can simply allocate size $n-1$ for the last dimension of the array $P$ in Algorithm \ref{algo:D(n)}. If we examine the recurrence for $P(N,k,l,s)$ in \cite[Theorem 1]{BarnesSavage1995}, we can see that the indices of the first three dimensions never increase in any recursive computation involving this four-variate function, while the index of the last dimension could increase because one of the four terms $P(N,k,l,s)$ depends on is $P(N-k-l+1,k-1,l-1,s+l-k-1)$ whose index in the last dimension ($s+l-k-1$) could be larger than $s$. The good news is that we do not need to make an allocation for the last dimension larger than that for the first dimension since \cite[Theorem 1]{BarnesSavage1995} also ensures that $P(N,k,l,s)=P(N,k,l,N)$ for $s\ge N$. This explains why the first and fourth dimensions of the array $P$ have the same allocation sizes on line 2 in Algorithm \ref{algo:D(n)}. And based on the above discussion the allocation sizes for these two dimensions can be reduced from $n(n-3)/2+1$ to $(n^2+5)/2-2n$. Now we show that the allocation size $(n^2+5)/2-2n$ for the fourth dimension of the array $P$ in Algorithm \ref{algo:D(n)} is conservative and it can be further reduced.
First we recall a lemma of Barnes and Savage, on which \cite[Theorem 1]{BarnesSavage1995} is partly based: \begin{theo} \label{thm_fourth_dimension} \cite[Lemma 5]{BarnesSavage1995} $\mathbf{P}(N,k,l,s)=\mathbf{P}(N,k,l,N)=\mathbf{P}(N,k,l)$ for $s\ge N$. \end{theo} We show the condition $s\ge N$ in this theorem can be easily improved based on the original definition of $\mathbf{P}(N,k,l,s)$, which can then be used to further save memory space usage of Algorithm \ref{algo:D(n)}. Based on the definition in (\ref{eqn:PNkls}), if we define an integer function $M(N,k,l)$ to be \[ M(N,k,l)=\max_{\pi\in \mathbf{P}(N,k,l),1\le j\le d(\pi)}\{j-\sum_{i=1}^{j}r_i(\pi)\},\] then we clearly have $\mathbf{P}(N,k,l,s)=\mathbf{P}(N,k,l,N)=\mathbf{P}(N,k,l)$ for $s\ge \max\{0,M(N,k,l)\}$. Based on the definition of $r_i(\pi)=\pi_i^{'}-\pi_i$, the partition $\pi$ in $\mathbf{P}(N,k,l)$ that achieves the maximum in the definition of $M(N,k,l)$ is the partition $\pi^*$ of $N$ with as many parts equal to $k$ as possible with the associated $j^*$ equal to $d(\pi^*)$. This shows that the function $M(N,k,l)$ actually does not depend on $l$ and we can write it as $M(N,k)$. Note that $M(N,k)$ could take negative values. For the purpose of improving Algorithm \ref{algo:D(n)}, we define the nonnegative function \[ M'(N,k)=\max\{0,M(N,k)\}, \] and with this definition we clearly have $P(N,k,l,s)=P(N,k,l,N)=P(N,k,l)$ for $s\ge M'(N,k)$. The pseudo-code for computing $M'(N,k)$ is presented in Algorithm \ref{algo:M(N,k)} based on the $\pi^*$ and $j^*$ mentioned above. It is easy to see that $M'(N,k)\le N$. Furthermore we observe that on average $M'(N,k)$ is a lot smaller than $N$, which improves the condition in Theorem \ref{thm_fourth_dimension} and makes this function a main ingredient for saving memory space of Algorithm \ref{algo:D(n)} in our improved algorithm. \begin{algorithm}[h] \DontPrintSemicolon \KwIn{A positive integer $N$ and a positive integer $k$} \KwOut{$M'(N,k)$} $q \gets \lfloor N/k \rfloor$\; $r \gets N\mod k$\; \If{$r=0$} { \If{$k\ge q$} { \Return{$q(k-q+1)$}\; } \Else { \Return{$0$}\; } } \Else{ \If{$k\ge q$} { \If{$r\le q$} { \Return{$q(k-q+1)-r$}\; } \Else { \Return{$q(k-q-1)+r$}\; } } \Else { \Return{$0$}\; } } \caption{Pseudo-code for computing the helper function $M'(N,k)$.} \label{algo:M(N,k)} \end{algorithm} In order to further save memory space usage of Algorithm \ref{algo:D(n)}, we define a new function $Q(l,k,N,s)$ by reversing the order of the first three variables of the four-variate function $P(N,k,l,s)$, i.e. \[ Q(l,k,N,s)=P(N,k,l,s). \] With this definition a four dimensional array $Q$ can be created in the improved algorithm in place of the array $P$ such that the allocation sizes of latter dimensions of $Q$ can be made dependent on former dimensions and as small as possible. Specifically, the allocation size of the third dimension of the array $Q$ can be made dependent on the first two dimensions (explained below) and that of its fourth dimension can be made dependent on the second and third dimensions due to the fact that $Q(l,k,N,s)=Q(l,k,N,N)=P(N,k,l)$ for $s\ge M'(N,k)$ as explained above. Now in our improved algorithm to compute $L(n)$, the allocation size of the first dimension of the four dimensional array $Q$ can be chosen to be 2 since, as explained before, each $Q(l,*,*,*)$ value depends only on the $Q(l,*,*,*)$ and $Q(l-1,*,*,*)$ values.
The allocation size of the second dimension of the array $Q$ can be made $n-2$ since the largest index of the second dimension in all the needed $Q(*,*,*,*)$ values is $n-3$ (corresponding to $k=n-2$) based on formula (\ref{eqn:L(n)}). The allocation size of the third dimension of the array $Q$ need not be fixed at $(n^2+5)/2-2n$ as the first dimension of $P$ in Algorithm \ref{algo:D(n)} and can be made dependent on the indices of the first two dimensions $l$ and $k$. Specifically, it need not exceed $lk$ since $Q(l,k,N,s)=0$ for all $N>lk$ by definition. Since we actually only allocate size 2 for the first dimension of the array $Q$, the index $l$ cannot be used anymore and the allocation size for the third dimension of the array $Q$ can be chosen to be $\min\{k(n-1)+1,(n^2+5)/2-2n\}$ since the largest index of $l$ is $n-1$ among all the needed $Q(l,*,*,*)$ values. The variable allocation sizes for the third dimension effectively make the four dimensional array $Q$ a ``ragged'' array instead of a ``rectangular'' array using common data structure terminology. The allocation size of the fourth dimension of $Q$ can also be made variable and dependent on the indices of the second and third dimensions $k$ and $N$ respectively. Specifically, it can be chosen to be $M'(N,k)+1$ since, as explained before, $Q(l,k,N,s)=Q(l,k,N,N)$ for $s\ge M'(N,k)$. Many of the fourth dimensional allocation sizes $M'(N,k)+1$ are as small as 1 instead of the fixed $n(n-3)/2+1$ as in Algorithm \ref{algo:D(n)}, thereby saving a lot of memory. We summarize the allocation sizes for the four dimensions of the array $Q$ in Table \ref{tbl:Qallocationsize}. Since the allocation sizes for the third and fourth dimensions of the four dimensional array $Q$ in the improved algorithm would be variable, the pseudo-code that serves the purpose of line 2 for allocation of the four dimensional array in Algorithm \ref{algo:D(n)} now would be replaced with a revised nested loop. The improved algorithm for $L(n)$ can be previewed in Algorithm \ref{algo:improvedD(n)}. We omit the pseudo-code to allocate the four dimensional array $Q$ in the improved algorithm as it is not conveniently expressible without using real programming languages. \begin{table}[!htb] \centering \caption{Allocation sizes of the four dimensions of the array $Q$ in the improved algorithm ($l$, $k$, $N$ and $s$ are index variables used in the nested loops in memory allocation for $Q$)} \begin{tabular}{||c|c||} \hline\hline Dimension (index variable) & Allocation size\\ \hline\hline 1st ($l$) & 2\\ \hline 2nd ($k$) & $n-2$\\ \hline 3rd ($N$) & $\min\{k(n-1)+1,\lfloor (n^2+5)/2-2n\rfloor\}$ \\ \hline 4th ($s$) & $M'(N,k)+1$ \\ \hline\hline \end{tabular} \label{tbl:Qallocationsize} \end{table} We introduce one more improvement that would save run time of Algorithm \ref{algo:D(n)}, although not memory space usage. In Algorithm \ref{algo:D(n)} line 3 serves to fill in the four dimensional array $P$ and it would be implemented using nested loops. In our improved algorithm the pseudo-code to fill in the four dimensional array $Q$ would still be implemented using nested loops with the number of iterations in the third and fourth level of the loops adjusted to accommodate the variable allocation sizes in these two dimensions as specified in Table \ref{tbl:Qallocationsize}. A possible improvement here is the innermost loop for the fourth dimension of the array $Q$.
We already mentioned that in the improved algorithm the allocation size for the fourth dimension of the array $Q$ depends on the second and third dimensional indices $k$ and $N$ and is chosen to be $M'(N,k)+1$. Instead of letting the index for the fourth dimension iterate from 0 to $M'(N,k)$ for any given index $k$ for the second dimension and $N$ for the third dimension, we can let it start from $m'(N,l)$, where $m'(N,l)=\max\{0,m(N,l)\}$ and $m(N,l)=m(N,k,l)$ is defined similarly to $M(N,k)=M(N,k,l)$ as \[ m(N,l)=m(N,k,l)=\min_{\pi\in \mathbf{P}(N,k,l),1\le j\le d(\pi)}\{j-\sum_{i=1}^{j}r_i(\pi)\}.\] Based on the definition of $r_i(\pi)=\pi_i^{'}-\pi_i$, the partition $\pi$ in $\mathbf{P}(N,k,l)$ that achieves the minimum in the definition of $m(N,k,l)$ is the partition $\pi^\star$ of $N$ whose conjugate partition is the partition with as many parts equal to $l$ as possible, with the associated $j^\star$ equal to $d(\pi^\star)$. This shows that the function $m(N,k,l)$ actually does not depend on $k$ and we can write it as $m(N,l)$. Note that $m(N,l)$ could take negative values too, which is why we define the nonnegative function $m'(N,l)=\max\{0,m(N,l)\}$ to be used as the start of the index of the innermost loop while filling in the array $Q$. Under this definition we clearly have $Q(l,k,N,s)=0$ if $m'(N,l)>0$ and $0\le s<m'(N,l)$, which makes it feasible to skip filling in these array elements and save time. The pseudo-code for computing $m'(N,l)$ is presented in Algorithm \ref{algo:m(N,l)} based on the $\pi^\star$ and $j^\star$ mentioned above. \begin{algorithm}[h] \DontPrintSemicolon \KwIn{A positive integer $N$ and a positive integer $l$} \KwOut{$m'(N,l)$} $q \gets \lfloor N/l \rfloor$\; $r \gets N\mod l$\; \If{$r=0$} { \If{$l\le q$} { \Return{$l(q-l+1)$}\; } \Else { \Return{$0$}\; } } \Else{ \If{$l\le q$} { \Return{$l(q-l)+r$}\; } \Else { \Return{$0$}\; } } \caption{Pseudo-code for computing the helper function $m'(N,l)$.} \label{algo:m(N,l)} \end{algorithm} We show the pseudo-code of the improved algorithm to compute $L(n)$ in Algorithm \ref{algo:improvedD(n)}. We mainly emphasize the part that initializes and fills in the four dimensional array $Q$ after it has been allocated. The remaining part that computes the $L(n)$ value after the array $Q$ has been filled in is similar to Algorithm \ref{algo:D(n)} and is abbreviated on line 10. We assume all the elements of the array $Q$ are zero after it has been allocated. The lower bound function $m'(N,l)$ for the innermost loop variable $s$ will be needed only once for each given pair of $N$ and $l$ while the upper bound function $M'(N,k)$ for $s$ might be needed multiple times for each given pair of $N$ and $k$. To further save time, all the $M'(N,k)$ values can be pre-computed and later retrieved by table lookup. 
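The lower bound helper admits an equally short transcription; again this is a sketch with our own naming, and the commented-out lookup table merely illustrates the pre-computation of the $M'(N,k)$ values suggested above (the bound $k(n-1)$ on $N$ is taken from Table \ref{tbl:Qallocationsize} and is an assumption about the implementation).
\begin{verbatim}
def m_prime(N, l):
    # Transcription of the pseudo-code for m'(N,l) = max{0, m(N,l)}.
    q, r = divmod(N, l)            # q = floor(N/l), r = N mod l
    if r == 0:
        return l * (q - l + 1) if l <= q else 0
    return l * (q - l) + r if l <= q else 0

# Pre-compute all M'(N,k) values once, then retrieve them by lookup:
# M_table = {(N, k): M_prime(N, k) for k in range(1, n - 2)
#                                  for N in range(k * (n - 1) + 1)}
\end{verbatim}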
\begin{algorithm}[h] \DontPrintSemicolon \KwIn{A positive integer $n$} \KwOut{$L(n)$} Allocate a four dimensional array $Q$ with sizes specified in Table \ref{tbl:Qallocationsize}\; \For{$l \gets 0$ \textbf{to} $1$ } { \For{$k \gets 0$ \textbf{to} $n-3$ } { $Q[l][k][0][0] \gets 1$\; } } \For{$l \gets 1$ \textbf{to} $n-1$ } { \For{$k \gets 1$ \textbf{to} $n-3$ } { \For{$N \gets 1$ \textbf{to} $\min\{lk,\lfloor (n^2+3)/2-2n\rfloor\}$ } { \For{$s \gets m'(N,l)$ \textbf{to} $M'(N,k)$ } { Update $Q[l\mod 2][k][N][s]$ using the values of $Q[l\mod 2][k-1][N][s]$, $Q[(l-1)\mod 2][k][N][s]$, $Q[(l-1)\mod 2][k-1][N][s]$ and $Q[(l-1)\mod 2][k-1][N-k-l+1][s+l-k-1]$ based on \cite[Theorem 1]{BarnesSavage1995}\; } } } \tcp{Sum needed $Q[(l-1)\mod 2][*][*][*]$ values to compute $L(l)$ if desired.} } Sum $Q[(n-1)\mod 2][*][*][*]$ values to compute $L(n)$\; \Return{$L(n)$}\; \caption{Improved algorithm to compute $L(n)$ that initializes and fills in the four dimensional array $Q$.} \label{algo:improvedD(n)} \end{algorithm} Based on formula (\ref{eqn:L(n)}), $L(n)$ is the sum of a finite number of $P(*,*,n-1,*)$ values. After filling in the four dimensional array $P$ in Algorithm \ref{algo:D(n)}, we can actually not only compute $L(n)$, but also all $L(i)$ for $i=1,2,\cdots, n-1$ if we have chosen to allocate size $n$ for the third dimension instead of 2 since all the needed $P(*,*,*,*)$ values for them are also already in the four dimensional array $P$. If we have allocated only size 2 for the third dimension, we can still compute all $L(i)$ for $i=1,2,\cdots, n$ in a single run as long as we put the loop for the third dimension as the outermost loop when filling in the array $P$ and compute $L(i)$ when we have already filled in all the $P(*,*,i-1,*)$ values in $P[*][*][(i-1)\mod 2][*]$ before these array elements are overwritten later. Similarly in our improved Algorithm \ref{algo:improvedD(n)} we can also compute all $L(i)$ values for $i=1,2,\cdots, n$ in a single run since the nested loops from lines 5 to 9 ensure that all $Q(i-1,*,*,*)$ values stored in $Q[(i-1)\mod 2][*][*][*]$ are filled in before filling in the $Q(i,*,*,*)$ values stored in $Q[i\mod 2][*][*][*]$. We can sum all the needed $Q(i-1,*,*,*)$ values to compute $L(i)$ before they are overwritten in the next iterations. In Algorithm \ref{algo:improvedD(n)} we add a comment at the end of the body for the outermost loop to indicate where code can be added to compute $L(i)$ values for $i<n$ if desired. The $L(n)$ value can still be computed on line 10 after the entire loop over lines 5 to 9 has ended. It is easy to see that the time complexity to sum all the needed $P(*,*,*,*)$ values in Algorithm \ref{algo:D(n)} or $Q(*,*,*,*)$ values in Algorithm \ref{algo:improvedD(n)} to compute $L(n)$ is $O(n^3)$ and is of lower order than the time complexity $O(n^6)$ of filling in the array $P$ in Algorithm \ref{algo:D(n)} or the array $Q$ in Algorithm \ref{algo:improvedD(n)} (for more details see the analysis in Section \ref{sec:analysis}). Thus computing all $L(i)$ values for $i\le n$ takes essentially the same amount of time and space as computing the single $L(n)$ value. \section{Complexity analysis} \label{sec:analysis} In this section we show that the space and time complexities of Algorithm \ref{algo:improvedD(n)} achieve a constant factor improvement compared to Algorithm \ref{algo:D(n)}. We understand this is not exciting theoretically as it is not an asymptotic improvement. 
However, we emphasize that the discovery of the possibility of computing all $L(i)$ (or $D(i)$) values for $i\le n$ in a single run has its own merit, which we had overlooked before. Moreover, the techniques in the improved algorithm deepen our understanding of the function $P(N,k,l,s)$, may be applied to compute other similar functions such as $D_0(n)$ and $D_{k\_con}(n)$, and may shed light on the analysis of their asymptotic orders. Now let us analyze the memory usage of Algorithm \ref{algo:D(n)} and Algorithm \ref{algo:improvedD(n)}. Assuming an allocation size 2 for the third dimension, the total allocation size (i.e. total number of elements) for the four dimensional array $P$ in Algorithm \ref{algo:D(n)} for computing $L(n)$ is clearly \[ f_1(n)=2(n-2)(\frac{n(n-3)}{2}+1)^2. \] (For example, $f_1(10)=2\cdot 8\cdot 36^2=20736$, matching the first row of Table \ref{tbl:allocationComparison}.) The allocation size $f_4(n)$ for the four dimensional array $Q$ in our improved Algorithm \ref{algo:improvedD(n)} for computing $L(n)$ does not appear to have a simple closed form. We now show that $f_4(n)$ and $f_1(n)$ are of the same asymptotic order, but $f_4(n)$ achieves a constant factor improvement over $f_1(n)$: \begin{theo} \label{thm_space_complexity} There exist constants $c_1$ and $c_2$ ($0<c_1<c_2<1$) such that $c_1f_1(n)\le f_4(n)\le c_2f_1(n)$ for all sufficiently large $n$. \end{theo} \begin{proof} We first perform a conservative analysis of how much memory space is saved by the improved Algorithm \ref{algo:improvedD(n)}. That is, we will obtain a lower bound of $f_1(n)-f_4(n)$. By Table \ref{tbl:Qallocationsize} the allocation size of the second dimension of the array $Q$ is $n-2$ so the index $k$ for the second dimension will iterate from 0 to $n-3$. For each $0\le k\le n-3$, the allocation size of the third dimension is $\min\{k(n-1)+1,\lfloor (n^2+5)/2-2n\rfloor\}$ instead of the fixed first dimension allocation size $n(n-3)/2+1$ for all $0\le k\le n-3$ as in Algorithm \ref{algo:D(n)}. When $k(n-1)\le (n^2+3)/2-2n$, i.e. $k\le (n-3)/2$, the allocation size of the third dimension of the array $Q$ is $k(n-1)+1$. Thus, due to the reduced allocation size for the third dimension of the array $Q$, the number of saved elements from the array $Q$ compared to the array $P$ in Algorithm \ref{algo:D(n)} is at least \[ T_1(n)=(n(n-3)/2+1)(((n-3)/2+1)(n(n-3)/2+1)-\sum_{k=0}^{(n-3)/2}(k(n-1)+1)). \] The function $T_1(n)$ is the product of two factors. The first factor $n(n-3)/2+1$ is the allocation size of the fourth dimension of the array $P$ in Algorithm \ref{algo:D(n)}. The second factor is the total reduction of the allocation size of the third dimension of the array $Q$ due to variable allocations in this dimension. It is easy to see that $T_1(n)$ has an asymptotic order of $n^5/16$. For each index $k$ for the second dimension of the array $Q$ in the range $0\le k\le (n-3)/2$, the allocation size for the third dimension is $k(n-1)+1$ as shown above. Thus the index $N$ for the third dimension will iterate from 0 to $k(n-1)$. For each pair of $k$ and $N$, the allocation size for the fourth dimension is $M'(N,k)+1$. Observe that based on Algorithm \ref{algo:M(N,k)} we have $M'(N,k)=0$ exactly when $\lfloor N/k \rfloor>k$, i.e. when $N\ge k(k+1)$. When $N=0$ or $k=0$ we can also define $M'(N,k)=0$. Thus, due to the reduced allocation size for the fourth dimension of the array $Q$, the number of saved elements from the array $Q$ compared to the array $P$ in Algorithm \ref{algo:D(n)} is at least \[ T_2(n)=(n(n-3)/2)\sum_{k=0}^{(n-3)/2}(k(n-1)-k(k+1)+1). \] The function $T_2(n)$ is the product of two factors. 
The first factor $n(n-3)/2$ is the amount of reduction of the allocation size of the fourth dimension from $n(n-3)/2+1$ in Algorithm \ref{algo:D(n)} to 1 in the improved Algorithm \ref{algo:improvedD(n)} due to $M'(N,k)=0$ when $N\ge k(k+1)$. The second factor is a sum each of whose terms counts the number of $N$ in $[k(k+1),k(n-1)]$ since exactly these $N$ satisfy $M'(N,k)=0$. It is easy to see that $T_2(n)$ has an asymptotic order of $n^5/24$. The total number of saved elements $f_1(n)-f_4(n)$ from the array $Q$ compared to the array $P$ in Algorithm \ref{algo:D(n)} is at least $2(T_1(n)+T_2(n))$, which has an asymptotic order of $5n^5/24$. There is a factor 2 in this expression because the dimension of constant allocation size 2 has not yet been taken into consideration in the above discussion. Since $f_1(n)$ is asymptotically $n^5/2$, we see that $f_4(n)$ is asymptotically at most $n^5/2-5n^5/24=7n^5/24$, which means $f_4(n)\le \frac{7}{12}f_1(n)$ for all sufficiently large $n$. This analysis is conservative as some of the $M'(N,k)$ values are also zero when $(n-3)/2<k\le n-3$, and it does not account for the reduction of the allocation sizes of the fourth dimension where $M'(N,k)$ is nonzero but less than $n(n-3)/2+1$. We have shown that the constant $c_2$ in the statement of the theorem can be chosen to be 7/12. We now derive a lower bound of $f_4(n)$. For each index $k$ for the second dimension of the array $Q$ in the range $(n-3)/2<k\le n-3$, the allocation size for the third dimension is $(n-1)(n-3)/2+1$. We have already shown that $M'(N,k)=0$ if and only if $N\ge k(k+1)$. Thus, for $(n-3)/2<k\le n-3$ and $0\le N\le (n-1)(n-3)/2$, if $k(k+1)>(n-1)(n-3)/2$ (i.e. $k>\frac{\sqrt{2(n-1)(n-3)+1}-1}{2}$), then $M'(N,k)\neq 0$. Clearly $\frac{3}{4}(n-1)>\frac{\sqrt{2(n-1)(n-3)+1}-1}{2}$ when $n$ is large. Now consider the range of $k$ such that \[ \frac{3}{4}(n-1)\le k\le n-3. \] Each $k$ in this range can be represented as $k=c(n-1)$ for some $3/4\le c\le 1$. Also consider the range of $N$ such that \[ (n-1)(n-3)/4\le N\le (n-1)(n-3)/2. \] With $k$ and $N$ in the chosen ranges, we have $\frac{n-3}{4c}\le \frac{N}{k}\le \frac{n-3}{2c}$ and $0\le r=(N\mod k)<k=c(n-1)$ so $\frac{n-3-4c}{4c}\le q=\lfloor \frac{N}{k} \rfloor\le \frac{n-3}{2c}$. Therefore, $(c-\frac{1}{2c})n+\frac{3}{2c}-c\le k-q\le (c-\frac{1}{4c})n+\frac{3+4c}{4c}-c$ and $q(k-q)\ge ((c-\frac{1}{2c})n+\frac{3}{2c}-c)\frac{n-3-4c}{4c}$. Since $c-\frac{1}{2c}\ge 1/12>0$, we have $q(k-q)=\Omega(n^2)$. Since $q$ and $r$ are both at most linear in $n$, the correction terms in Algorithm \ref{algo:M(N,k)} are $O(n)$, and we see that $M'(N,k)=\Omega(n^2)$ for the considered ranges of $N$ and $k$. The number of $k$ in the range $\frac{3}{4}(n-1)\le k\le n-3$ is $\Omega(n)$ and the number of $N$ in the range $(n-1)(n-3)/4\le N\le (n-1)(n-3)/2$ is $\Omega(n^2)$. For each pair of $k$ and $N$ in these ranges the allocation size for the fourth dimension is $M'(N,k)+1=\Omega(n^2)$. Consequently, the total allocation size $f_4(n)$ for the four dimensional array $Q$ in our improved Algorithm \ref{algo:improvedD(n)} is $\Omega(n^5)$. Since $f_1(n)$ is asymptotically $n^5/2$, we have shown that there exists a constant $c_1$ such that $f_4(n)\ge c_1f_1(n)$ for all sufficiently large $n$ where $c_1$ can be chosen to be $1/192$. This proves the theorem. \end{proof} We have collected some values of $f_4(n)$ for $n\le 1000$ and compared them with $f_1(n)$ in Table \ref{tbl:allocationComparison}. 
It appears that $\frac{f_4(n)}{f_1(n)}$ is about 10\% and likely tends to a constant $C$ between 0.1 and 0.2, which is consistent with the lower bound $1/192$ and upper bound $7/12$ obtained in the proof of Theorem \ref{thm_space_complexity}. The space complexities of Algorithm \ref{algo:D(n)} and the improved Algorithm \ref{algo:improvedD(n)} are dominated by the allocation sizes of the four dimensional arrays $P$ and $Q$ respectively. Thus Algorithm \ref{algo:improvedD(n)} achieves a constant factor improvement in memory usage. \begin{table}[!htb] \centering \caption{Comparison of allocation sizes of Algorithm \ref{algo:D(n)} and Algorithm \ref{algo:improvedD(n)}} \begin{tabular}{||c|c|c|c||} \hline\hline $n$ & $f_1(n)$ & $f_4(n)$ & $f_4(n)/f_1(n)$ \\ \hline\hline 10 & 20736 & 2030 & 0.0978974 \\ \hline 20 & 1052676 & 99736 & 0.0947452 \\ \hline 30 & 9230816 & 885350 & 0.0959124 \\ \hline 40 & 41730156 & 4041722 & 0.0968537 \\ \hline 50 & 132765696 & 12948206 & 0.0975267 \\ \hline 60 & 339592436 & 33286556 & 0.0980191 \\ \hline 70 & 748505376 & 73646710 & 0.0983917 \\ \hline 80 & 1480839516 & 146132702 & 0.0986823 \\ \hline 90 & 2698969856 & 266968646 & 0.0989150 \\ \hline 100 & 4612311396 & 457104592 & 0.0991053 \\ \hline 110 & 7483319136 & 742822422 & 0.0992638 \\ \hline 120 & 11633488076 & 1156341746 & 0.0993977 \\ \hline 130 & 17449353216 & 1736425796 & 0.0995123 \\ \hline 140 & 25388489556 & 2528987092 & 0.0996116 \\ \hline 150 & 35985512096 & 3587693526 & 0.0996983 \\ \hline 300 & 1182935794196 & 118676615988 & 0.1003238 \\ \hline 400 & 5018396965596 & 504274588310 & 0.1004852 \\ \hline 500 & 15376557756996 & 1546620017330 & 0.1005830 \\ \hline 600 & 38364293168396 & 3861311170282 & 0.1006486 \\ \hline 700 & 83078878199796 & 8365678364352 & 0.1006956 \\ \hline 800 & 162207987851196 & 16339372522124 & 0.1007310 \\ \hline 900 & 292629697122596 & 29484953979544 & 0.1007586 \\ \hline 1000 & 496012481013996 & 49988481364570 & 0.1007807 \\ \hline\hline \end{tabular} \label{tbl:allocationComparison} \end{table} The time complexities of the two algorithms are dominated by the time to fill in the four dimensional arrays $P$ and $Q$ respectively. Although the starting index of the variable $s$ on line 8 in Algorithm \ref{algo:improvedD(n)} is $m'(N,l)$ instead of 0, the time complexity of Algorithm \ref{algo:improvedD(n)} still only achieves a constant factor improvement similar to the space complexity improvement over Algorithm \ref{algo:D(n)}. This can be shown as follows. Based on Algorithm \ref{algo:m(N,l)} it is easy to see that $m'(N,l)=0$ if and only if $\lfloor N/l\rfloor <l$, i.e. $N<l^2$. When $l>\sqrt{\frac{(n-1)(n-3)}{2}}$, we have $l^2>\frac{(n-1)(n-3)}{2}$. As shown in the proof of Theorem \ref{thm_space_complexity}, when $k$ and $N$ are in the range $\frac{3}{4}(n-1)\le k\le n-3$ and $(n-1)(n-3)/4\le N\le (n-1)(n-3)/2$, we have $M'(N,k)=\Omega(n^2)$. The number of $l$ in the range $[\sqrt{\frac{(n-1)(n-3)}{2}},n-1]$ is $\Omega(n)$. When $l$ is in this range and $(n-1)(n-3)/4\le N\le (n-1)(n-3)/2$, we have $m'(N,l)=0$. This shows that when $l$, $k$ and $N$ are in these specified ranges, filling in this part of the array $Q$ takes $\Omega(n^6)$ time, which becomes a lower bound of the time complexity of Algorithm \ref{algo:improvedD(n)}. 
Since Algorithm \ref{algo:D(n)} takes time $O(n^6)$, we have proved that Algorithm \ref{algo:improvedD(n)} has a time complexity of the same order as Algorithm \ref{algo:D(n)} and achieves a constant factor improvement. Finally we note that adding the code to compute all $L(i)$ values for $i<n$ in Algorithm \ref{algo:improvedD(n)} does not increase the memory usage, and it increases the run time by at most $O(n\cdot n^3)=O(n^4)$, which is negligible compared to the time complexity $O(n^6)$ to fill the array $Q$. Thus, computing all $L(i)$ values for $i\le n$ has essentially the same space and time complexity as computing a single $L(n)$ value. \section{Experimental evaluations and simulations} \label{sec:experiments} We have computed the exact values of $D(n)$ for $n$ up to 290 with the help of large memory supercomputers from XSEDE. Based on these numerical results and the known upper and lower bounds of $D_0(n)$ given in Burns \cite{Burns2007}, we had conjectured that the asymptotic order of $D(n)$ is like $\frac{c\times 4^n}{(\log{n})^{1.5}\sqrt{n}}$ for some constant $c$. We have performed further simulations using a method similar to that in \cite{Burns2007} to estimate the asymptotic order of $D(n)$. Based on simulation results for $n$ up to 700000000, it seems that $D(n)$ has an asymptotic order more like $\frac{4^n\exp(-0.7\frac{\log n}{\log\log n})}{8\sqrt{\pi n}}$. The form of this function is inspired by Burns \cite{Burns2007} and Pittel \cite{Pittel2018}. \section{Discussions about asymptotic orders} We tried to derive the asymptotic order of $D(n)$ through the multi-variate generating function of the multi dimensional sequence $P(N,k,l,s)$: \[ F(w,x,y,z)=\sum_{N=0}^\infty \sum_{k=0}^\infty \sum_{l=0}^\infty \sum_{s=0}^\infty P(N,k,l,s)w^Nx^ky^lz^s. \] However, we are unable to obtain a simple closed form for this generating function. The function $P(N,k,l,s)$ is quite unusual. For one thing, the last index can increase during its recursive computation. The related function $P(N,k,l)$ actually satisfies a similar recurrence as follows: \[ P(N,k,l)=P(N-k-l+1,k-1,l-1)+P(N,k-1,l)+P(N,k,l-1)-P(N,k-1,l-1). \] This recurrence is simpler than that of $P(N,k,l,s)$ in \cite[Theorem 1]{BarnesSavage1995} in the sense that no index in any of the three dimensions can increase during its recursive computation. The two are similar in that they do not belong to the classes of multi-variate recurrences considered by Bousquet-M\'{e}lou and Petkov\v{s}ek \cite{BOUSQUETMELOU2000}. The single-variate generating function of the sequence $P(N,k,l)$ when $k$ and $l$ are fixed is known from \cite{Andrews1984}, but we are unable to extend it to a multi-variate generating function. For each given triple of $N$, $k$ and $l$ we have shown the exact range of $s$ over which $P(N,k,l,s)$ changes from its minimum to its maximum. The variable $s$ can be seen to measure how close a partition in $\mathbf{P}(N,k,l,s)$ is to a graphical partition. Fine-tuned analysis for this range of $s$ together with all the known results about the order of $P(N,k,l)$ with $k$ and $l$ in various ranges relative to $N$ might help us better understand the behavior of $P(N,k,l,s)$. The number of graphical partitions of an even integer $N$ is shown by Barnes and Savage \cite{BarnesSavage1995} to be $G(N)=P(N,N,N,0)$. The best results about the order of $G(N)$ we know of are from Pittel \cite{Pittel1999,Pittel2018} and the tight asymptotic order of $G(N)$ is unknown yet. 
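Returning to the three-variable recurrence displayed above: it is easy to exercise numerically. The following memoized sketch assumes the reading of $P(N,k,l)$ as the number of partitions of $N$ with largest part at most $k$ and at most $l$ parts (partitions fitting in a $k\times l$ box), together with the natural base cases for such partitions; both the base cases and the names are our assumptions rather than statements from the text.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def P3(N, k, l):
    # Assumed reading: partitions of N with largest part <= k
    # and at most l parts.
    if N == 0:
        return 1                   # the empty partition
    if N < 0 or k <= 0 or l <= 0 or N > k * l:
        return 0
    # First term: partitions with first row exactly k and exactly l
    # parts, obtained by removing the boundary hook of k+l-1 cells;
    # the remaining terms are the usual inclusion-exclusion.
    return (P3(N - k - l + 1, k - 1, l - 1)
            + P3(N, k - 1, l) + P3(N, k, l - 1)
            - P3(N, k - 1, l - 1))
\end{verbatim}
For instance, \texttt{P3(2, 2, 2)} returns 2, matching the two partitions $2$ and $1+1$ of $2$ that fit in a $2\times 2$ box.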
Our simulation of the asymptotic order of $G(N)$ using uniform random integer partition generators from \cite{arratia_desalvo_2016} suggests that $\frac{G(N)}{P(N)}$ has an asymptotic order like $e^{-\frac{0.3\log{N}}{\log\log{N}}}$, which is also inspired by the bound of $G(N)$ given in \cite{Pittel2018} and is similar to a factor of the conjectured asymptotic order of $D(n)$ given above in Section \ref{sec:experiments}. Thus the analysis of the asymptotic behavior of $P(N,k,l,s)$ also helps to determine the unknown asymptotic order of $G(N)$, in addition to that of the functions counting various classes of graphical degree sequences of given length. \section{Conclusions} In this paper we presented an improved algorithm to compute $D(n)$ exactly. A main ingredient of the improvement is an analysis of the fourth dimension of the function $P(N,k,l,s)$ of Barnes and Savage such that the exact range of $s$ in which this function varies with given $N,k,l$ is determined and then used to reduce the memory usage. The new algorithm makes it feasible to compute all $D(i)$ values for $i\le n$ in about 10\% of the time that the previous algorithm takes to compute a single $D(n)$ value. The techniques can be applied to all related functions that can be computed exactly based on the function $P(N,k,l,s)$. \end{document}
arXiv
The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields

The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: An Artist's Rendering is a mathematics book by Piper Harron (also known as Piper H), based on her Princeton University doctoral thesis of the same title. It has been described as "feminist",[1] "unique",[2] "honest",[2] "generous",[3] and "refreshing".[4]

The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: An Artist's Rendering
Author: Piper H
Publisher: Birkhäuser Basel
Publication date: 2021
ISBN: 978-3-319-76531-0

Thesis and reception

Harron was advised by Fields Medalist Manjul Bhargava, and her thesis deals with the properties of number fields, specifically the shape of their rings of integers.[2][5] Harron and Bhargava showed that, viewed as a lattice in real vector space, the ring of integers of a random number field does not have any special symmetries.[5][6] Rather than simply presenting the proof, Harron intended for the thesis and book to explain both the mathematics and the process (and struggle) that was required to reach this result.[5] The writing is accessible and informal, and the book features sections targeting three different audiences: laypeople, people with general mathematical knowledge, and experts in number theory.[1] Harron intentionally departs from the typical academic format as she is writing for a community of mathematicians who "do not feel that they are encouraged to be themselves".[1] Unusually for a mathematics thesis, Harron intersperses her rigorous analysis and proofs with cartoons, poetry, pop-culture references, and humorous diagrams.[2] Science writer Evelyn Lamb, in Scientific American, expresses admiration for Harron for explaining the process behind the mathematics in a way that is accessible to non-mathematicians, especially "because as a woman of color, she could pay a higher price for doing it."[4] Mathematician Philip Ording calls her approach to communicating mathematical abstractions "generous".[3]

Her thesis went viral in late 2015, especially within the mathematical community, in part because of the prologue, which begins by stating that "respected research math is dominated by men of a certain attitude".[2][4] Harron had left academia for several years, later saying that she found the atmosphere oppressive and herself miserable and verging on failure.[7] She returned determined that, even if she did not do math the "right way", she "could still contribute to the community".[7] Her prologue states that the community lacks diversity and discourages diversity of thought.[4] "It is not my place to make the system comfortable with itself", she concludes.[4] A concise proof was published in Compositio Mathematica in 2016.[8]

Author

Harron earned her doctorate from Princeton in 2016. As of 2021, Harron, who also goes by Piper H., is a postdoctoral researcher at the University of Toronto.[9][10]

References

1. Molinari, Julia (April 2021). "Re-imagining Doctoral Writings as Emergent Open Systems". Re-imagining doctoral writing (preprint). Colorado Press.
2. Salerno, Adriana (February–March 2019). "Book review: Mathematics for the People" (PDF). MAA Focus. 39 (1): 50–51.
3. Ording, Philip (2016). "Creative Writing in Mathematics and Science" (PDF). Banff International Research Station Proceedings 2016. p. 7. Retrieved June 18, 2021.
4. Lamb, Evelyn (December 28, 2015).
"Contrasts in Number Theory". Scientific American. Retrieved June 18, 2021. 5. "The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields". Springer Nature Switzerland AG. Retrieved June 18, 2021. 6. Harron, Piper (June 20–24, 2016). "Contributed Talks" (PDF). 14th Meeting of the Canadian Number Theory Association. University of Calgary. p. 26. 7. Kamanos, Anastasia (2019). The Female Artist in Academia: Home and Away. Rowman & Littlefield. p. 21. ISBN 9781793604118. 8. Bhargava, Manjul; Harron, Piper (June 2016). "The equidistribution of lattice shapes of rings of integers in cubic, quartic, and quintic number fields". Compositio Mathematica. 152 (6): 1111–1120. arXiv:1309.2025. doi:10.1112/S0010437X16007260. MR 3518306. S2CID 118043017. Zbl 1347.11074. 9. "Piper H". University of Toronto. Retrieved June 18, 2021. 10. Dance, Amber (February 9, 2017). "Relationships: Sweethearts in science". Nature. 542 (7640): 261–263. doi:10.1038/nj7640-261a. External links • The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields (Harron's PhD thesis) • The Liberated Mathematician
Wikipedia
\begin{definition}[Definition:Language of Propositional Logic/Alphabet/Letter] Part of specifying the language of propositional logic $\LL_0$ is to specify its letters. The letters of $\LL_0$, called '''propositional symbols''', can be any infinite collection $\PP_0$ of arbitrary symbols. It is usual to specify them as a limited subset of the English alphabet with appropriate subscripts. A typical set of '''propositional symbols''' would be, for example: :$\PP_0 = \set {p_1, p_2, p_3, \ldots, p_n, \ldots}$ \end{definition}
ProofWiki
\begin{document} \theoremstyle{plain} \newtheorem{Thm}{Theorem}[section] \newtheorem{TitleThm}[Thm]{} \newtheorem{Corollary}[Thm]{Corollary} \newtheorem{Proposition}[Thm]{Proposition} \newtheorem{Lemma}[Thm]{Lemma} \newtheorem{Conjecture}[Thm]{Conjecture} \theoremstyle{definition} \newtheorem{Definition}[Thm]{Definition} \theoremstyle{definition} \newtheorem{Example}[Thm]{Example} \newtheorem{TitleExample}[Thm]{} \newtheorem{Remark}[Thm]{Remark} \newtheorem{SimpRemark}{Remark} \renewcommand{\theSimpRemark}{} \numberwithin{equation}{section} \newcommand{\Olog}[2]{\Omega^{#1}(\text{log}#2)} \newcommand{\produnion}{\cup \negmedspace \negmedspace \negmedspace\negmedspace {\scriptstyle \times}} \newcommand{\pd}[2]{\dfrac{\partial#1}{\partial#2}} \def \ba {\mathbf {a}} \def \bb {\mathbf {b}} \def \bc {\mathbf {c}} \def \bd {\mathbf {d}} \def \bone {\boldsymbol {1}} \def \bg {\mathbf {g}} \def \bG {\mathbf {G}} \def \bh {\mathbf {h}} \def \bi {\mathbf {i}} \def \bj {\mathbf {j}} \def \bk {\mathbf {k}} \def \bK {\mathbf {K}} \def \bm {\mathbf {m}} \def \bn {\mathbf {n}} \def \bt {\mathbf {t}} \def \bu {\mathbf {u}} \def \bv {\mathbf {v}} \def \by {\mathbf {y}} \def \bV {\mathbf {V}} \def \bx {\mathbf {x}} \def \bw {\mathbf {w}} \def \b1 {\mathbf {1}} \def \bga {\boldsymbol \alpha} \def \bgb {\boldsymbol \beta} \def \bgg {\boldsymbol \gamma} \def \itc {\text{\it c}} \def \ite {\text{\it e}} \def \ith {\text{\it h}} \def \iti {\text{\it i}} \def \itj {\text{\it j}} \def \itm {\text{\it m}} \def \itM {\text{\it M}} \def \itn {\text{\it n}} \def \ithn {\text{\it hn}} \def \itt {\text{\it t}} \def \cA {\mathcal{A}} \def \cB {\mathcal{B}} \def \cC {\mathcal{C}} \def \cD {\mathcal{D}} \def \cE {\mathcal{E}} \def \cF {\mathcal{F}} \def \cG {\mathcal{G}} \def \cH {\mathcal{H}} \def \cK {\mathcal{K}} \def \cL {\mathcal{L}} \def \cM {\mathcal{M}} \def \cN {\mathcal{N}} \def \cO {\mathcal{O}} \def \cP {\mathcal{P}} \def \cS {\mathcal{S}} \def \cT {\mathcal{T}} \def \cU {\mathcal{U}} \def \cV {\mathcal{V}} \def \cW {\mathcal{W}} \def \cX {\mathcal{X}} \def \cY {\mathcal{Y}} \def \cZ {\mathcal{Z}} \def \ga {\alpha} \def \gb {\beta} \def \gg {\gamma} \def \gd {\delta} \def \ge {\epsilon} \def \gevar {\varepsilon} \def \gk {\kappa} \def \gl {\lambda} \def \gs {\sigma} \def \gt {\tau} \def \gw {\omega} \def \gz {\zeta} \def \gG {\Gamma} \def \gD {\Delta} \def \gL {\Lambda} \def \gS {\Sigma} \def \gW {\Omega} \def \dim {{\rm dim}\,} \def \mod {{\rm mod}\;} \def \rank {{\rm rank}\,} 
\newcommand{\vect}[1]{{\bf{#1}}} \title[Characteristic Cohomology II: Matrix Singularities] {Characteristic Cohomology II: Matrix Singularities} \author[James Damon]{James Damon} \address{Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599-3250, USA } \keywords{characteristic cohomology of Milnor fibers, complements, links, varieties of symmetric, skew-symmetric, singular matrices, global Milnor fibration, classical symmetric spaces, Cartan Model, Schubert decomposition, detecting nonvanishing cohomology, vanishing compact models, kite maps} \subjclass{Primary: 11S90, 32S25, 55R80 Secondary: 57T15, 14M12, 20G05} \begin{abstract} For a germ of a variety $\cV, 0 \subset \mathbb C^N, 0$, a singularity $\cV_0$ of \lq\lq type $\cV$\rq\rq\, is given by a germ $f_0 : \mathbb C^n, 0 \to \mathbb C^N, 0$ which is transverse to $\cV$ in an appropriate sense so that $\cV_0 = f_0^{-1}(\cV)$. In part I of this paper \cite{D6} we introduced for such singularities the Characteristic Cohomology of the Milnor fiber (for $\cV$ a hypersurface), and of the complement and link (for the general case). It captures the cohomology of $\cV_0$ inherited from $\cV$; it is given by subalgebras of the cohomology of $\cV_0$ for the Milnor fiber and complement, and by a subgroup of the cohomology of the link. We showed these cohomologies are functorial and invariant under groups of equivalences $\cK_{H}$ for Milnor fibers and $\cK_{\cV}$ for complements and links. We also gave geometric criteria for detecting the non-vanishing of the characteristic cohomology. \par In this paper we apply these methods in the case where $\cV$ denotes any of the varieties of singular $m \times m$ complex matrices which may be either general, symmetric or skew-symmetric (with $m$ even). For these varieties we have shown in another paper that their Milnor fibers and complements have compact \lq\lq model submanifolds\rq\rq\, for their homotopy types, which are classical symmetric spaces in the sense of Cartan. As a result, it follows that the characteristic cohomology subalgebras for the Milnor fibers and complements are images of exterior algebras (or in one case a module on two generators over an exterior algebra). In addition, we extend these results to general $m \times p$ complex matrices in the case of the complement and link. \par We then apply the geometric detection method introduced in Part I to detect when the characteristic cohomology for the Milnor fiber or complement contains a specific exterior subalgebra on $\ell$ generators, and for the link when it contains an appropriate truncated and shifted version of the subalgebra. The detection criterion involves a special type of \lq\lq kite map germ of size $\ell$\rq\rq\, based on a given flag of subspaces. 
The general criterion which detects such nonvanishing characteristic cohomology is then given in terms of the defining germ $f_0$ containing such a kite map germ of size $\ell$. Furthermore, we use a restricted form of kite spaces to give a cohomological relation between the cohomology of local links and the global link. \end{abstract} \maketitle \centerline{\bf Preliminary Version} \par \section*{Introduction} \label{S:sec0} \par Let $\cV \subset M$ denote any of the varieties of singular $m \times m$ complex matrices which may be general, symmetric, or skew-symmetric ($m$ even), or $m \times p$ matrices, in the corresponding space $M$ of such matrices. A \lq\lq matrix singularity\rq\rq\, $\cV_0$ of \lq\lq type $\cV$\rq\rq\, for any of the $\cV \subset M$ is defined as $\cV_0 = f_0^{-1}(\cV)$ by a germ $f_0 : \mathbb C^n, 0 \to M, 0$ (which is transverse to $\cV$ in an appropriate sense). In part I \cite{D6} we introduced the notion of characteristic cohomology for a singularity $\cV_0$ of type $\cV$ for any \lq\lq universal singularity\rq\rq\, for the Milnor fiber (in case $\cV$ is a hypersurface) and for the complement and link (in the general case). In this paper we determine the characteristic cohomology for matrix singularities in all of these cases. \par For matrix singularities the characteristic cohomology will give the analogue of characteristic classes for vector bundles (as e.g. \cite{MS}). For comparison, a vector bundle $E \to X$ over a CW complex $X$ is given by a map $f_0 : X \to BG$ for $G$ the structure group of $E$ (e.g. $O_n$, $U_n$, $Sp_n$, $SO_n$, etc.). It is well-defined up to isomorphism by the homotopy class of $f_0$. Moreover, the generators of $H^*(BG; R)$, for an appropriate coefficient ring $R$, pull back via $f_0^*$ to give the characteristic classes of $E$; so they generate a characteristic subalgebra of $H^*(X; R)$. The nonvanishing of the characteristic classes then gives various properties of $E$. Various polynomials in the classes correspond to Schubert cycles in the appropriate classifying spaces. \par We will give analogous results for categories of matrix singularities of the various types. Homotopy invariance is replaced by invariance under the actions of the groups of diffeomorphisms $\cK_H$ or $\cK_{\cV}$. For these varieties we have shown in another paper \cite{D3} that they have compact \lq\lq model submanifolds\rq\rq\, for the homotopy types of both the Milnor fibers and the complements and these are classical symmetric spaces in the sense of Cartan. As a result, it will follow that the characteristic subalgebra is the image of an exterior algebra (or in one case a module on two generators over an exterior algebra) on an explicit set of generators. \par We give a \lq\lq detection criterion\rq\rq\, for identifying in the characteristic subalgebra an exterior subalgebra on pull-backs of $\ell$ specific generators of the cohomology of the corresponding symmetric space. It is detected by the defining germ $f_0$ containing a special type of \lq\lq unfurled kite map\rq\rq\, of size $\ell$. This will be valid for the Milnor fiber, complement, and link. \par We will do this by using the support of appropriate exterior subalgebras of the Milnor fiber cohomology or of the complement cohomology for the varieties of singular matrices. This is done using results of \cite{D4} giving the Schubert decomposition for the Milnor fiber and the complement to define \lq\lq vanishing compact models\rq\rq\, detecting these subalgebras. 
In \S \ref{S:sec3} and \S \ref{S:sec8} we use the Schubert decompositions to exhibit vanishing compact models in the Milnor fibers and complements. Then, we use the detection criterion introduced in Part I to give a criterion for detecting nonvanishing exterior subalgebras of the characteristic cohomology using a class of \lq\lq unfurled kite maps\rq\rq. Matrix singularities $\cV_0, 0$ defined by germs $f_0$ which contain such an \lq\lq unfurled kite map\rq\rq\, are shown to have such subalgebras in their cohomology of Milnor fibers or complements and subgroups in their link cohomology. In the case of general or skew-symmetric matrices, the results for the Milnor fibers and complements are valid for cohomology over ${\mathbb Z}$ (and hence any coefficient ring $R$); while for symmetric matrices, the results apply both for cohomology with coefficients in a field of characteristic zero and for ${\mathbb Z}/2{\mathbb Z}$-coefficients. In all three cases, for a field of characteristic zero, cohomology subgroups above the middle dimension are detected for the links. \par Furthermore, we extend in \S \ref{S:sec8a} the results for complements and links for $m \times m$ matrices to general $m \times p$ matrices. This includes determining the form of the characteristic cohomology and giving a detection criterion using an appropriate form of kite spaces and mappings. \par A restricted form of the kite spaces serves a further purpose in \S \ref{S:sec8b} for identifying how the cohomology of local links of strata in the varieties of singular matrices relates to the cohomology of the global links. \par \par \centerline{CONTENTS} \begin{enumerate} \item Matrix Equivalence for the Three Types of Matrix Singularities \par \par \item Cohomology of the Milnor Fibers of the $\cD_m^{(*)}$ \par \par \item Kite Spaces of Matrices for Given Flag Structures \par \par \item Detecting Characteristic Cohomology using Kite Maps of Matrices \par \par \item Examples of Matrix Singularities Exhibiting Characteristic Cohomology \par \par \item Characteristic Cohomology for the Complements and Links of Matrix Singularities \par \par \item Characteristic Cohomology for Non-square Matrix Singularities \par \par \item Cohomological Relations between Local Links via Restricted Kite Spaces \end{enumerate} \section{Matrix Equivalence for the Three Types of Matrix Singularities} \label{S:sec2} \par We will apply the results in Part I \cite{D6} to the cohomology for a matrix singularity $\cV_0$ for any of the three types of matrices. We let $M$ denote the space of $m \times m$ general matrices $M_m(\mathbb C)$, resp. symmetric matrices $Sym_m(\mathbb C)$, resp. skew-symmetric matrices $Sk_m(\mathbb C)$ (for $m$ even). We also let $\cD_m^{(*)}$ denote the variety of singular matrices for each case with $(*)$ denoting $( )$ for general matrices, $(sy)$ for symmetric matrices, or $(sk)$ for skew-symmetric matrices. For the case of $m \times p$ matrices with $m \not = p$ we use the notation $\cD_{m, p} \subset M_{m, p}(\mathbb C)$. Also, the corresponding defining equations for the three $m \times m$ cases are given by: $\det$ for the general and symmetric cases and the Pfaffian $\mathrm{Pf}$ for the skew-symmetric case. We generally denote the defining equation by $H : \mathbb C^N, 0 \to \mathbb C, 0$ for $\cV$, where $M \simeq \mathbb C^N$ for appropriate $N$ in each case and $\cV = \cD_m^{(*)}$. 
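As a concrete illustration (ours, added for orientation, not from the original text) of the defining equations in the smallest case $m = 2$: \[ \det \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} = x_{11}x_{22} - x_{12}x_{21}, \quad \det \begin{pmatrix} x_{11} & x_{12} \\ x_{12} & x_{22} \end{pmatrix} = x_{11}x_{22} - x_{12}^2, \quad \mathrm{Pf} \begin{pmatrix} 0 & x_{12} \\ -x_{12} & 0 \end{pmatrix} = x_{12}, \] so that $M \simeq \mathbb C^4$, $\mathbb C^3$, and $\mathbb C$ respectively, and in each case $\cD_2^{(*)} = H^{-1}(0)$. 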
For the case of $m \times p$ matrices with $m \not = p$, $\cD_{m, p}$ is not a hypersurface and we will not be concerned with its defining equation. \par \subsection*{Abbreviated Notation for the Characteristic Cohomology} \par For matrix singularities $\cV_0$ defined by $f_0 : \mathbb C^n \to M, 0 = \mathbb C^N, 0$, the characteristic cohomology with coefficients $R$ defined in Part I is denoted as follows: for the Milnor fiber (in the case $\cV$ is a hypersurface) by $\cA_{\cV}(f_0, R)$, for the complement of $\cV_0$ by $\cC_{\cV}(f_0, R)$, and for the link of $\cV_0$, for $\bk$ a field of characteristic $0$, by $\cB_{\cV}(f_0, \bk)$. We use a simplified notation for matrix singularities. \begin{align*} \cA_m^{(*)}(f_0; R) \,\, = \,\, &\cA_{\cD_m^{(*)}}(f_0; R), \qquad \cC^{(*)}(f_0; R) \,\, = \,\, \cC_{\cD_m^{(*)}}(f_0; R)\, , \\ &\text{and} \quad \cB_m^{(*)}(f_0; \bk) \,\, = \,\, \cB_{\cD_m^{(*)}}(f_0; \bk) \end{align*} \par If $m$ is understood, we shall suppress it in the notation. \par For the case of $m \times p$ matrices with $m \not = p$, $\cD_{m, p}$ is not a hypersurface, but we shall use for the complement and link $$\cC_{m, p}(f_0; R) \,\, = \,\, \cC_{\cD_{m, p}}(f_0; R) \quad \text{and} \quad \cB_{m, p}(f_0; \bk) \,\, = \,\, \cB_{\cD_{m, p}}(f_0; \bk) $$ Again, if $(m, p)$ is understood, we shall suppress it in the notation. \subsection*{Matrix Singularities Equivalences $\cK_M$ and $\cK_{HM}$} \par \par There are several different equivalences that we shall consider for matrix singularities $f_0 : \mathbb C^n, 0 \to M, 0$ with $\cV$ denoting the subvariety of singular matrices in $M$. The one used in classifications is $\cK_M$--{\it equivalence: } We suppose that we are given an action of a group of matrices $G$ on $M$. For symmetric or skew symmetric matrices, it is the action of $\mathrm{GL}_m(\mathbb C)$ by $B\cdot A = B\, A\, B^T$. For general $m \times p$ matrices, it is the action of $\mathrm{GL}_m(\mathbb C) \times \mathrm{GL}_p(\mathbb C)$ by $(B, C)\cdot A = B\, A\, C^{-1}$. Given such an action, the group $\cK_M$ consists of pairs $(\varphi, B)$, with $\varphi$ a germ of a diffeomorphism of $\mathbb C^n, 0$ and $B$ a holomorphic germ $\mathbb C^n, 0 \to G, I$. The action is given by $$ f_0(x) \mapsto f_1(x)\, = \, B(x)\cdot( f_0\circ \varphi^{-1}(x)) \, . $$ Although $\cK_M$ is a subgroup of $\cK_{\cV}$, they have the same tangent spaces and the path-connected components of their orbits agree (for example this is explained in \cite[\S 2]{DP} because of the results due to J\'{o}zefiak \cite{J}, J\'{o}zefiak-Pragacz \cite{JP}, and Gulliksen-Neg\r{a}rd \cite{GN} as pointed out by Goryunov-Mond \cite{GM}). \par We next restrict to codimension $1$ subgroups; let $$GL_m(\mathbb C)^{(2)} \, \, \overset{def}{=} \,\, \ker( \det \times \det: GL_m(\mathbb C) \times GL_m(\mathbb C) \to (\mathbb C^* \times \mathbb C^*)/\gD\mathbb C^*)$$ where $\gD\mathbb C^*$ is the diagonal subgroup (equivalently, $(B, C) \in GL_m(\mathbb C)^{(2)}$ exactly when $\det B = \det C$). We then replace the groups for $\cK_M$--equivalence by the subgroup $SL_m(\mathbb C)$ for the symmetric and skew-symmetric cases, and by the subgroup $GL_m(\mathbb C)^{(2)}$ for the general case. These restricted versions of equivalence preserve the defining equation $H$ in each case. We denote the resulting equivalence groups by $\cK_{HM}$, which are subgroups of the corresponding $\cK_{H}$. As $\cK_{HM}$ equivalences preserve $H$, they also preserve the Milnor fibers and the varieties of singular matrices $\cV$. 
By the results referred to above, in each of the three cases, these $\cK_{HM}$ also have the same tangent spaces as $\cK_{H}$ in each case. \par As a consequence of \cite[Prop. 2.1 and Prop. 2.2]{D6}, since $\cK_{HM}$ is a subgroup of $\cK_H$, we have for any coefficient ring $R$ and field $\bk$ of characteristic $0$ the following corollary. \begin{Corollary} \label{Cor2.3} For each of the three cases of the varieties of $m \times m$ singular matrices $\cV =\cD_m^{(*)}$, let $\cV_0$ be defined by $f_0 : \mathbb C^n, 0 \to M, 0$, with $M$ denoting the corresponding space of matrices. Then,\par \begin{itemize} \item[a)] the characteristic subalgebra $\cA^{(*)}(f_0; R)$ is, up to Milnor fiber cohomology isomorphism, an invariant of the $\cK_{HM}$--equivalence class of $f_0$; \item[b)] $\cB^{(*)}(f_0; \bk)$ is, up to an isomorphism of the cohomology of the link, an invariant of the $\cK_{M}$--equivalence class of $f_0$; and \item[c)] the characteristic subalgebra $\cC^{(*)}(f_0; R)$ is, up to an isomorphism of the cohomology of the complement, an invariant of the $\cK_{M}$--equivalence class of $f_0$. \end{itemize} \par Hence, the structure of the cohomology of the Milnor fiber of $\cV_0$ as a graded algebra (or graded module) over $\cA^{(*)}(f_0; R)$ is, up to isomorphism, independent of the $\cK_{HM}$--equivalence class of $f_0$. \end{Corollary} \par Before considering the cohomology of the Milnor fibers of the $\cD_m^{(*)}$, we first give an important property which implies that each of the $\cD_m^{(*)}$ are $H$-holonomic in the sense of \cite{D2}, which gives a geometric condition that assists in proving that the matrix singularity is finitely $\cK_{HM}$-determined (and hence finitely $\cK_H$-determined). This will be a consequence of the fact that for all three cases the above groups act transitively on the strata of the canonical Whitney stratification of $\cD_m^{(*)}$. \par \begin{Lemma} \label{Lem2.4} For each of the three cases of $m \times m$ general, symmetric and skew-symmetric matrices, the corresponding subgroups $GL_m(\mathbb C)^{(2)}$, resp. $SL_m(\mathbb C)$ act transitively on the strata of the canonical Whitney stratification of $\cD_m^{(*)} $. \end{Lemma} \par \begin{proof}[Proof of Lemma \ref{Lem2.4}] First, for the general case, let $A \in \cD_m$ have rank $r < m$. We denote by $L_A$ the linear transformation on the space of column vectors defined by $A$. Then, we let $\{\bv_1, \dots , \bv_m\}$ denote a basis for $\mathbb C^m$ so that $\{\bv_{r +1}, \dots , \bv_m\}$ is a basis for $\ker(L_A)$. We also let $\{\bw_1, \dots , \bw_m\}$ denote a basis for $\mathbb C^m$ so that $\bw_j = L_A(\bv_j)$ for $j = 1, \dots, r$. We let $b = \det(\bv_1 \dots \bv_m)$ and $c = \det(\bw_1 \dots \bw_m)$. Then, we let $B^{-1} = (\bv_1, \dots , \bv_{m-1}, \frac{c}{b}\bv_m)$ and $C^{-1} = (\bw_1 \dots \bw_m)$. Then, $C\cdot A \cdot B^{-1} = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$, where $I_r$ is the $r \times r$ identity matrix. Also, $\det(B) = \det(C)$ (both equal $c^{-1}$), so $(B, C) \in GL_m(\mathbb C)^{(2)}$. Thus, each orbit of $GL_m(\mathbb C) \times GL_m(\mathbb C)$, which consists of the matrices of a given fixed rank $< m$ and is a stratum of the canonical Whitney stratification, is also an orbit of $GL_m(\mathbb C)^{(2)}$. \par For both the symmetric and skew-symmetric cases the corresponding orbits under $GL_m(\mathbb C)$ consist of matrices of given symmetric or skew-symmetric type of fixed rank $< m$; and they form strata of the canonical Whitney stratification. 
We show that they are also orbits under the action of $SL_m(\mathbb C)$. \par For $A \in \cD_m^{(sy)}$ of rank $r < m$, we consider the symmetric bilinear form $\psi(X, Y) = X^T\cdot A\cdot Y$ for column vectors in $\mathbb C^m$. We can find a basis $\{\bv_1, \dots , \bv_m\}$ for $\mathbb C^m$ so that $\psi(\bv_i, \bv_i) = 1$ for $i = 1, \dots , r$, $= 0$ for $i > r$, and $\psi(\bv_i, \bv_j) = 0$ if $i \neq j$. Then, letting $b = \det(\bv_1 \dots \bv_m)$, we let $B^T = (\bv_1, \dots , \bv_{m-1}, \frac{1}{b}\bv_m)$. Then $\det(B) = 1$ and $B \cdot A\cdot B^T = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$. \par Lastly, for the skew-symmetric case the argument is similar, except for $A \in \cD_m^{(sk)}$ of rank $r < m$, we consider the skew-symmetric bilinear form $\psi(X, Y) = X^T\cdot A\cdot Y$ for column vectors in $\mathbb C^m$ with even $m$ and $r= 2k$. There is a basis $\{\bv_1, \dots , \bv_m\}$ for $\mathbb C^m$ so that $\psi(\bv_{2i-1}, \bv_{2i}) = 1$ for $i = 1, \dots , k$, and otherwise, $\psi(\bv_i, \bv_j) = 0$ for $i < j$. Then, letting $b = \det(\bv_1 \dots \bv_m)$, we let $B^T = (\bv_1, \dots , \bv_{m-1}, \frac{1}{b}\bv_m)$. Then $\det(B) = 1$ and $B \cdot A\cdot B^T = \begin{pmatrix} J_k & 0 \\ 0 & 0 \end{pmatrix}$, where $J_k$ is the $r \times r$ block diagonal matrix with $k$ $2 \times 2$-blocks of $J_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. \end{proof} \par \section{Cohomology of the Milnor Fibers of the $\cD_m^{(*)}$} \label{S:sec3} \par We next recall results from \cite{D3} and \cite{D4} giving the cohomology structure of the Milnor fibers of the $\cD_m^{(*)}$ for each of the three types of matrices. This includes: representing the Milnor fibers by global Milnor fibers, giving compact symmetric spaces as compact models for the homotopy types of the global Milnor fibers, giving the resulting cohomology for the symmetric spaces, geometrically representing the cohomology classes, and indicating the relation of the cohomology classes for different $m$. \par \subsection*{Homotopy Type of Global Milnor fibers via Symmetric Spaces} \par The global Milnor fibers for each of the three cases, which we denote by $F_m$, resp. $F_m^{(sy)}$, resp. $F_m^{(sk)}$, are given by $H^{-1}(1)$ for $H : M, 0 \to \mathbb C, 0$ the defining equation for $\cD_m^{(*)}$, which is $\det$ for the general or symmetric case and the Pfaffian $\mathrm{Pf}$ for the skew-symmetric case. As shown in \cite{D3} the Milnor fiber for the germ of $H$ at $0$ is diffeomorphic to the global Milnor fiber. We reproduce here the representation of the global Milnor fibers as homogeneous spaces, whose homotopy types are given by symmetric spaces. These provide compact models for the Milnor fibers diffeomorphic to their Cartan models as given by \cite[Table 1]{D4}; these are given in Table \ref{Table1}. 
\begin{table}[h] \begin{tabular}{|l|c|c|c|l|} \hline Milnor & Quotient & Symmetric & Compact Model & Cartan \\ Fiber $F_m^{(*)}$ & Space & Space & $F_m^{(*)\, c}$ & Model \\ \hline $F_m$ & $SL_m(\mathbb C)$ & $SU_m$ & $SU_m$ & $F_m^{c}$ \\ \hline $F_m^{(sy)}$ & $SL_m(\mathbb C)/SO_m(\mathbb C)$ & $SU_m/SO_m$ & $SU_m \cap Sym_{m}(\mathbb C)$ & $F_m^{(sy)\, c}$ \\ \hline $F_m^{(sk)}, m = 2n$ & $SL_{2n}(\mathbb C)/Sp_{n}(\mathbb C)$ & $SU_{2n}/Sp_n$ & $SU_{m} \cap Sk_{m}(\mathbb C)$ & $F_{m}^{(sk)\, c}\cdot J_n^{-1}$ \\ \hline \end{tabular} \caption{Global Milnor fiber, its representation as a homogeneous space, compact model as a symmetric space, compact model as subspace and Cartan model.} \label{Table1} \end{table} \par \subsection*{Tower Structures of Global Milnor fibers and Symmetric Spaces by Inclusion} \par The global Milnor fibers for all cases, their symmetric spaces, and their compact models form towers via inclusions. These are given as follows. For the general and symmetric cases, there is the homomorphism $\tilde{\itj}_m : SL_m(\mathbb C) \hookrightarrow SL_{m+1} (\mathbb C)$ sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}$. This can be identified with the inclusion of Milnor fibers $\tilde{\itj}_m :F_m \subset F_{m+1}$. Also, it restricts to give an inclusion $\tilde{\itj}_m : SU_m \hookrightarrow SU_{m+1}$ which are the compact models for the general case. Second, it induces an inclusion $\tilde{\itj}^{(sy)}_m : SL_m(\mathbb C)/SO_m(\mathbb C) \hookrightarrow SL_{m+1} (\mathbb C)/SO_{m+1}(\mathbb C)$ which is an inclusion of Milnor fibers $\tilde{\itj}^{(sy)}_m: F_m^{(sy)} \hookrightarrow F_{m+1}^{(sy)}$. It also induces an inclusion of the compact homotopy models $\tilde{\itj}^{(sy)}_m : SU_m/SO_m(\mathbb R) \subset SU_{m+1}/SO_{m+1}(\mathbb R)$ for the Milnor fibers. \par For the skew symmetric case, the situation is slightly more subtle. First, the composition of two of the above successive inclusion homomorphisms for $SL_m(\mathbb C)$ gives a homomorphism $SL_m(\mathbb C) \hookrightarrow SL_{m+2} (\mathbb C)$ sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & I_2 \end{pmatrix}$ for the $2 \times 2$ identity matrix $I_2$. For even $m = 2k$, it induces an inclusion $\tilde{\itj}^{(sk)}_m: SL_m(\mathbb C)/Sp_k(\mathbb C) \hookrightarrow SL_{m+2} (\mathbb C)/Sp_ {k+1}(\mathbb C)$. However, the inclusion of Milnor fibers $ \tilde{\itj}^{(sk)}_m : F_m^{(sk)} \hookrightarrow F_{m+2}^{(sk)}$ is given by the map sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & J_1 \end{pmatrix}$ for the $2 \times 2$ skew-symmetric matrix $J_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. These two inclusions are related via the action of $SL_m(\mathbb C)/Sp_k(\mathbb C)$ on $F_m^{(sk)}$ which induces a diffeomorphism given by $B \mapsto B\cdot J_k \cdot B^T$ ($m = 2k$). This also induces an inclusion of compact homotopy models $SU_m \cap Sk_m(\mathbb C) \subset SU_{m+2} \cap Sk_{m+2}(\mathbb C)$. This inclusion commutes with both the inclusion of the Milnor fibers under the diffeomorphism given in \cite{D3} by the action, and the inclusion of the Cartan models induced from the compact models after multiplying by $J_k^{-1}$, see Table \ref{Table1}. The Schubert decompositions for all three cases given in \cite{D4} satisfy the additional property that they respect the inclusions. \par \subsection*{Cohomology of Global Milnor fibers using Symmetric Spaces} \par Next, we recall the form of the cohomology algebras for the global Milnor fibers. 
First, for the $m \times m$ matrices for the general case or skew-symmetric case (with $m = 2n$), by \cite[Thm. 6.1]{D4} and \cite[Thm. 6.14]{D4}, the Milnor fiber cohomology with coefficients $R = {\mathbb Z}$ is given as follows. \begin{align} \label{Eqn4.2} H^*(F_m; {\mathbb Z}) \,\, &\simeq \,\, \gL^*{\mathbb Z} \langle e_3, e_5, \dots , e_{2m-1} \rangle \, \quad \text{general case } \\ H^*(F_m^{(sk)}; {\mathbb Z}) \,\, &\simeq \,\, \gL^*{\mathbb Z} \langle e_5, e_9, \dots , e_{4n-3} \rangle \quad \text{skew-symmetric case ($m = 2n$)} \, . \end{align} Therefore these isomorphisms continue to hold with ${\mathbb Z}$ replaced by any coefficient ring $R$. Thus, for any coefficient ring $R$, $\cA^{(*)}(f_0; R)$ is the quotient ring of a free exterior $R$-algebra on generators $e_{2j-1}$, for $j = 2, 3, \dots , m$, resp. $e_{4j-3}$ for $j = 2, 3, \dots , n$. \par For the $m \times m$ symmetric case there are two important cases where either $R = {\mathbb Z}/2{\mathbb Z}$ or $R = \bk$, a field of characteristic zero. First, for the coefficient ring $R = \bk$, the symmetric case breaks up into two cases depending on whether $m$ is even or odd (see \cite[Thm. 6.7 (2), Chap. 3]{MT} or Table 1 of \cite{D3}). \begin{equation} \label{Eqn4.3} H^*(F_m^{(sy)}; \bk) \,\, \simeq \,\, \begin{cases} \gL^*\bk \langle e_5, e_9, \dots , e_{2m-1} \rangle \, & \text{ if $m = 2k+1$ } \\ \gL^*\bk \langle e_5, e_9, \dots , e_{2m-3} \rangle \{1, e_m\}& \text{if $m = 2k$ } \, . \end{cases} \end{equation} Here $e_m$ is the Euler class of an $m$-dimensional real oriented vector bundle $\tilde{E}_m$ on the Milnor fiber $F_m^{(sy)}$. The vector bundle $\tilde{E}_m$ on the symmetric space $SU_m/SO_m(\mathbb R)$ has the form $SU_m \times_{SO_m(\mathbb R)} \mathbb R^m \to SU_m/SO_m(\mathbb R)$ where the action of $SO_m(\mathbb R)$ is given by the standard representation. This can be described as the {\em bundle of totally real subspaces} of $\mathbb C^m$, which is the bundle of $m$-dimensional real subspaces of $\mathbb C^m$ whose complexifications are $\mathbb C^m$. \par In the second case for $R = {\mathbb Z}/2{\mathbb Z}$, by \cite[Thm. 6.15]{D4} using \cite[Thm. 6.7 (3), Chap. 3]{MT}, we have \begin{equation} \label{Eqn4.4} H^*(F_m^{(sy)}; {\mathbb Z}/2{\mathbb Z}) \,\, \simeq \,\, \gL^*{\mathbb Z}/2{\mathbb Z} \langle e_2, e_3, \dots ,e_m \rangle \, \end{equation} for generators $e_j = w_j(\tilde{E}_m)$, for $j = 2, 3, \dots , m$, for $w_j(\tilde{E}_m)$ the $j$-th Stiefel-Whitney class of the real oriented $m$-dimensional vector bundle $\tilde{E}_m$ above. \par We summarize the structure of the characteristic subalgebra $\cA^{(*)}(f_0; R)$ in each case with the following. \begin{Thm} \label{Thm4.5} Let $f_0 : \mathbb C^n, 0 \to M, 0$ define a matrix singularity $\cV_0, 0$ for $M$ the space of $m \times m$ matrices which are either general, symmetric, or skew-symmetric (with $m = 2n$). \begin{itemize} \item[i)] In the general and skew-symmetric cases, $\cA^{(*)}(f_0; R)$ is a quotient of the free $R$-exterior algebra with generators given in \eqref{Eqn4.2}. \item[ii)] In the symmetric case with $R = {\mathbb Z}/2{\mathbb Z}$, $\cA^{(sy)}(f_0; {\mathbb Z}/2{\mathbb Z})$ is the quotient of the free exterior algebra over ${\mathbb Z}/2{\mathbb Z}$ on generators $e_j = w_j(\tilde{E}_m)$, for $j = 2, 3, \dots , m$, for $w_j(\tilde{E}_m)$ the Stiefel-Whitney classes of the real oriented $m$-dimensional vector bundle $\tilde{E}_m$ on the Milnor fiber of $\cD_m^{(sy)}$. 
Hence, $\cA^{(sy)}(f_0; {\mathbb Z}/2{\mathbb Z})$ is the subalgebra generated by the Stiefel-Whitney classes of the pull-back vector bundle $f_{0, w}^*(\tilde{E}_m)$ on $\cV_w$. \item[iii)] In the symmetric case with $R = \bk$, a field of characteristic zero, $\cA^{(sy)}(f_0; \bk)$ is a quotient of the corresponding $\bk$-algebra in \eqref{Eqn4.3}. \end{itemize} Then, in each of these cases, the cohomology (with coefficients in a ring $R$) of the Milnor fiber of $\cV_0$ has a graded module structure over the characteristic subalgebra $\cA^{(*)}(f_0; R)$ of $f_0$. \end{Thm} \par \subsection*{Cohomology Relations Under Inclusions for Varying $m$} \par We give the relations between the cohomology of the global Milnor fibers and the symmetric spaces for varying $m$ under the induced inclusion mappings. The relations are the following. \begin{Proposition} \label{Prop4.5} \begin{itemize} \item[1)] In the {\em general case}, for the inclusions $\tilde{\itj}_{m-1} : SU_{m-1} \hookrightarrow SU_{m}$ and $\tilde{\itj}_{m-1} : F_{m-1} \subset F_m$, $\tilde{\itj}_{m-1}^*$ is an isomorphism on the subalgebra generated by $\{ e_{2i-1} : i = 2, \dots , m-1\}$ and $\tilde{\itj}_{m-1}^*(e_{2m-1}) = 0$. \item[2)] In the {\em skew-symmetric case (with $m = 2n$)}, for the inclusions $\tilde{\itj}^{(sk)}_{m-2} : SU_{2(n-1)}/Sp_{n-1} \hookrightarrow SU_{2n}/Sp_n$ and for Milnor fibers $\tilde{\itj}^{(sk)}_{m-2} : F_{m-2}^{(sk)} \hookrightarrow F_{m}^{(sk)}$, $\tilde{\itj} ^{(sk)\, *}_{m-2}$ is an isomorphism on the subalgebra generated by $\{ e_{4i-3} : i = 2, \dots , n-1\}$ and $\tilde{\itj} ^{(sk)\, *}_{m-2}(e_{4n-3}) = 0$. \item[3)] In the {\em symmetric case}, for the inclusion $\tilde{\itj}^{(sy)}_{m-1} : SU_{m-1}/SO_{m-1}(\mathbb R) \hookrightarrow SU_{m} /SO_{m}(\mathbb R)$ and for Milnor fibers $\tilde{\itj}^{(sy)}_{m-1}: F_{m-1}^{(sy)} \subset F_m^{(sy)}$: \par \begin{itemize} \item[i)] for coefficients $R = {\mathbb Z}/2{\mathbb Z}$, $\tilde{\itj}^{(sy)\, *}_{m-1}$ is an isomorphism on the subalgebra generated by $\{ e_i : i = 2, \dots , m-1\}$ and $\tilde{\itj}^{(sy)\, *}_{m-1}(e_m) = 0$; \item[iia)] for coefficients $R = \bk$, a field of characteristic $0$, if $m = 2k$, then $\tilde{\itj}^{(sy)\, *}_{m-1}$ is an isomorphism on the subalgebra generated by $\{ e_{4i-3} : i = 2, \dots , k\}$, and $\tilde{\itj}^{(sy)\, *}_{m-1}(e_m) = 0$, and \item[iib)] if $m = 2k+1$, then $\tilde{\itj}^{(sy)\, *}_{m-1}$ is an isomorphism on the subalgebra generated by $\{ e_{4i-3} : i = 2, \dots , k\}$, and $\tilde{\itj}^{(sy)\, *}_{m-1}(e_{2m-1}) = 0$. \end{itemize} \end{itemize} \end{Proposition} \begin{proof} For the general and skew-symmetric cases, the Schubert decomposition for the Cartan models $\cC_m$ and $\cC_m^{(sk)}$ for successive $m$ given in \cite{D4} preserves the inclusions and the homology properties. In these two cases the result follows from the resulting identified Kronecker dual cohomology classes \cite[\S 6]{D4}. \par For the symmetric case and for ${\mathbb Z}/2{\mathbb Z}$-coefficients, an analogous Schubert decomposition gives the corresponding result. The remaining symmetric case for coefficients $\bk$ a field of characteristic $0$ does not follow in \cite{D4} from the Schubert decomposition. Instead, the computation of the cohomology of the symmetric space given in \cite[Chap. 3]{MT} yields the result. In fact the algebraic computations in \cite[Chap. 3]{MT} (see also \cite{Bo}) also give the results for the other cases. \end{proof}
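\par To illustrate Proposition \ref{Prop4.5} (1) with a small worked instance (the value $m = 3$ is chosen only for illustration), \eqref{Eqn4.2} gives
\begin{equation*}
H^*(F_3; {\mathbb Z}) \,\, \simeq \,\, \gL^*{\mathbb Z} \langle e_3, e_5 \rangle\, , \qquad \text{with Poincar\'e polynomial} \quad (1 + t^3)(1 + t^5) \,\, = \,\, 1 + t^3 + t^5 + t^8\, ,
\end{equation*}
the compact model being $SU_3$ (Table \ref{Table1}); and for the inclusion $\tilde{\itj}_{2} : F_{2} \subset F_3$, $\tilde{\itj}_{2}^{\, *}$ is an isomorphism on $\gL^*{\mathbb Z}\langle e_3 \rangle$ while $\tilde{\itj}_{2}^{\, *}(e_5) = 0$.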
\par \subsection*{Vanishing Compact Models for the Milnor Fibers of $\cD_m^{(*)}$} \par In Part I we gave a detection criterion \cite[Lemma 3.2]{D6} for detecting the nonvanishing of a subgroup $E$ of the characteristic cohomology of the Milnor fiber $\cA^{(*)}(f_0, R)$. We did so using vanishing compact models for the Milnor fiber. We use the preceding compact models for the Milnor fibers to give vanishing compact models for detecting nonvanishing subalgebras of $\cA^{(*)}(f_0, R)$. From the above, let $F_m^{(*)\, c}$ denote the compact models for the individual global Milnor fibers $F_m^{(*)}$. We define $\Phi : F_m^{(*)\, c} \times (0, 1] \to H^{-1}((0, 1])$ sending $\Phi(A, t)= t\cdot A$. Also, let $E = \gL^* R\{e_{i_1}, \dots, e_{i_\ell}\}$ denote the exterior subalgebra of $H^*(F_m^{(*)\, c}; R)$ on generators of the $\ell$ lowest degrees. We also let $\gl_E : F_{\ell}^{(*)\, c} \to F_m^{(*)\, c}$ denote the composition $\tilde{\itj}^{(*)}_{m-1}\circ \cdots \circ \tilde{\itj}^{(*)}_{\ell}$. Then, by Proposition \ref{Prop4.5}, $\gl_E^*$ induces an isomorphism from $E$ onto its image. Our goal is to first show that an appropriate restriction of $\Phi$ to a subinterval $(0, \gd)$ will provide a vanishing compact model for $F_m^{(*)}$; and moreover, we will use $\gl_E^*$ to give a germ which detects $E$. First, we give vanishing compact models for each case as follows. \begin{Proposition} \label{Prop4.6} A vanishing compact model for the Milnor fiber for $\cD_m^{(*)}$ is given for sufficiently small $0 < \gd << \gevar$ by $\Phi : F_m^{(*)\, c} \times (0, \gd) \to H^{-1}((0, \gevar])$ sending $\Phi(A, t)= t\cdot A$. \end{Proposition} \begin{proof} We begin by first making a few observations about the global Milnor fibers. For $M$ one of the spaces of $m \times m$ matrices, we consider $H : M, 0 \to \mathbb C, 0$ the defining equation for $\cD_m^{(*)}$ ($H = \det$ or $\mathrm{Pf}$). Then, the global Milnor fiber is $F_m^{(*)} = H^{-1}(1)$. Now we can consider multiplication in $M$ by a constant $a \neq 0$. As $H$ is homogeneous, if $A \in F_m^{(*)}$, then $a\cdot A \in H^{-1}(a^m)$ in the general or symmetric cases, or in the skew-symmetric case $H^{-1}(a^k)$ where $m = 2k$. \par We also observe that multiplication by $a$ is a diffeomorphism between these two Milnor fibers. We denote the image of $F_m^{(*)}$ under multiplication by $a$ by $aF_m^{(*)}$. Then, by e.g. the proof of \cite[Lemma 1.2]{D3}, given $\gd > 0$, there is an $a > 0$ so that $aF_m^{(*)} \cap B_{\gd}$ is the local Milnor fiber of $\cV_0$, $aF_m^{(*)}$ is transverse to the spheres of radii $\geq \gd$, and $aF_m^{(*)} \cap B_{\gd} \subset aF_m^{(*)}$ is a homotopy equivalence. \par Also, we have the compact homotopy models which occur as submanifolds of $SU_m$ of the form $SU_m$ for the general case, resp. $SU_m \cap Sym_m(\mathbb C)$ for the symmetric case, resp. $SU_m \cap Sk_m(\mathbb C)$ for the skew-symmetric case. Now, for the standard Euclidean norm on $M_m(\mathbb C)$, $\| A\| = \sqrt{m}$ for $A \in SU_m$. The same then holds for $SU_m \cap Sym_m(\mathbb C)$ and for $SU_m \cap Sk_m(\mathbb C)$. We denote the compact model in $F_m^{(*)}$ by $F_m^{(*)\, c}$. Then, in each case if $M \simeq \mathbb C^N$, $F_m^{(*)\, c} \subset S^{2N-1}_{\sqrt{m}}$, the sphere of radius $\sqrt{m}$. Thus, $aF_m^{(*)\, c} \subset S^{2N-1}_{a\sqrt{m}}$. \par Then, we first choose $0 < \eta << \gd < 1$ so that $H : H^{-1}(B^*_{\eta}) \cap B_{\gd} \to B^*_{\eta}$ is the Milnor fibration of $H$. \par We choose $0 < a < \eta$ so that also $a\sqrt{m} < \gd$. Then, we observe that the composition $aF_m^{(*)\, c} \subset aF_m^{(*)}\cap B_{\gd} \subset aF_m^{(*)}$ is a homotopy equivalence. Hence, the restriction $\Phi : F_m^{(*)\, c} \times (0, a) \to H^{-1}(B^*_a) \cap B_{\gd}$ restricts, for each $0 < t < a$, to a homotopy equivalence onto the corresponding Milnor fiber, and so gives a vanishing compact model. \end{proof}
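\par For reference, the scaling identities used repeatedly in the preceding proof, written out for the general case ($H = \det$; in the skew-symmetric case $\det$ is replaced by $\mathrm{Pf}$ and the exponent $m$ by $k$, where $m = 2k$), are
\begin{equation*}
H(t\cdot A) \,\, = \,\, \det(t\cdot A) \,\, = \,\, t^m \det(A) \,\, = \,\, t^m \quad \text{and} \quad \| t\cdot A\| \,\, = \,\, t\, \| A \| \,\, = \,\, t\sqrt{m} \qquad \text{for } A \in F_m^{c} = SU_m\, ,
\end{equation*}
so that $\Phi(\cdot\, , t)$ carries the compact model into the Milnor fiber $H^{-1}(t^m)$ and keeps its image inside $B_{\gd}$ once $t\sqrt{m} < \gd$.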
In light of Theorem \ref{Thm4.5}, there are several natural problems to be solved involving the characteristic cohomology for matrix singularities of each of the types. \par \noindent {\it Problems for the Characteristic Cohomology of the Milnor Fibers of Matrix Singularities: } \par \begin{itemize} \item[1)] Determine the characteristic subalgebras as the images of the exterior algebras by detecting which monomials map to nonzero elements in $H^*(\cV_w ; R)$. \item[2)] Identify geometrically these non-zero monomials in 1) via the pull-backs of the Schubert classes. \item[3)] For the symmetric case with ${\mathbb Z}/2{\mathbb Z}$-coefficients, compute the Stiefel-Whitney classes of the pull-back of the vector bundle $\tilde{E}_m$. \item[4)] Determine a set of module generators for the cohomology of the Milnor fibers as modules over the characteristic subalgebras. \end{itemize} We will give partial answers to these problems in the next sections. \par \section{Kite Spaces of Matrices for Given Flag Structures} \label{S:sec4} We begin by introducing, for a flag of subspaces of $\mathbb C^m$, a {\em linear kite subspace} of size $\ell$ in the space of $m \times m$ matrices of any of the three types: general $M_m(\mathbb C)$, symmetric $Sym_m(\mathbb C)$, or skew-symmetric $Sk_m(\mathbb C)$ (with $m$ even). We initially consider the standard flag for $\mathbb C^m$, given by $0 \subset \mathbb C \subset \mathbb C^2 \subset \cdots \subset \mathbb C^{m-1} \subset \mathbb C^{m}$. We choose coordinates $\{x_1, \cdots , x_m\}$ for $\mathbb C^m$ so that $\{x_1, \cdots , x_k\}$ are coordinates for $\mathbb C^k$ for each $k$. \par We let $E_{i, j}$ denote the $m \times m$ matrix with entry $1$ in the $(i, j)$-position and $0$ otherwise. We also let $E^{(sy)}_{i, j} = E_{i, j} + E_{j, i}$, $i < j$, or $E^{(sy)}_{i, i} = E_{i, i}$ for the space of symmetric matrices; and $E^{(sk)}_{i, j} = E_{i, j} - E_{j, i}$, for $i < j$. Then, we define \begin{Definition} \label{Def4.1} For each of the three types of $m \times m$ matrices and the standard flag of subspaces of $\mathbb C^m$, the corresponding {\em linear kite subspace of size $\ell$} is the linear subspace of the space of matrices defined as follows: \begin{itemize} \item[i)] For $M_m(\mathbb C)$, it is the linear subspace $\bK_m(\ell)$ spanned by \par $$ \{E_{i, j} : 1 \leq i, j \leq \ell\} \cup \{E_{i, i} : \ell < i \leq m\}$$ \item[ii)] For $Sym_m(\mathbb C)$, it is the linear subspace $\bK_m^{(sy)}(\ell)$ spanned by \par $$ \{E^{(sy)}_{i, j} : 1 \leq i \leq j \leq \ell\} \cup \{E_{i, i}^{(sy)} : \ell < i \leq m\}$$ \item[iii)] For $Sk_m(\mathbb C)$ with $m$ even, for $\ell$ also even, it is the linear subspace $\bK_m^{(sk)}(\ell)$ spanned by \par $$ \{E^{(sk)}_{i, j} : 1 \leq i < j \leq \ell\} \cup \{E^{(sk)}_{2i-1, 2i} : \ell < 2i-1 < m\}$$ \end{itemize} \par Furthermore, we refer to the germ of the inclusion $\iti_m^{(*)}(\ell) : \bK_m^{(*)}(\ell), 0 \to M, 0$, for each of the three cases as a {\em linear kite map of size $\ell$}. \end{Definition}
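\par For instance (a concrete instance of Definition \ref{Def4.1} ii), with the values $m = 4$ and $\ell = 3$ chosen to match Example \ref{Exam6.1} below), the elements of $\bK_4^{(sy)}(3) \subset Sym_4(\mathbb C)$ are exactly the matrices
\begin{equation*}
\begin{pmatrix} x_{1,1} & x_{1,2} & x_{1,3} & 0 \\ x_{1,2} & x_{2,2} & x_{2,3} & 0 \\ x_{1,3} & x_{2,3} & x_{3,3} & 0 \\ 0 & 0 & 0 & x_{4,4} \end{pmatrix}
\end{equation*}
with an arbitrary $3 \times 3$ symmetric \lq\lq body\rq\rq\, in the upper left and the diagonal \lq\lq tail\rq\rq\, entry $x_{4,4}$.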
The general form of the elements, the \lq\lq kites\rq\rq, of the linear kite subspaces is given in \eqref{Eqn4.1}. \par \begin{equation} \label{Eqn4.1} Q_{\ell, m - \ell} \,\, = \,\, \begin{pmatrix} A_{\ell} & 0_{\ell, m - \ell} \\ 0_{m - \ell, \ell} & D_{m - \ell} \end{pmatrix} \end{equation} where $A_{\ell}$ is an $\ell \times \ell$-matrix which denotes an arbitrary matrix in either $M_{\ell}(\mathbb C)$, resp. $Sym_{\ell}(\mathbb C)$, resp. $Sk_{\ell}(\mathbb C)$; and $0_{r, s}$ denotes the zero $r \times s$ matrix. Also, $D_{m- \ell}$ denotes an arbitrary $(m- \ell) \times (m- \ell)$ diagonal matrix in the general or symmetric case as in Figure \ref{fig:altkitefig}. In the skew-symmetric case, $D_{m- \ell}$ denotes the $2 \times 2$ block diagonal matrix with skew-symmetric blocks of the form given by \eqref{Eqn4.1b} as in Figure \ref{fig:altskewkitefig}, with \begin{equation} \label{Eqn4.1b} J_1(*) \,\, = \,\, \begin{pmatrix} 0 & * \\ -* & 0 \end{pmatrix} \end{equation} and \lq\lq\, * \rq\rq\, denoting an arbitrary entry. \par \begin{figure} \caption{Illustrating the form of elements of a linear kite space of size $\ell$ in either the space of general matrices or symmetric matrices. For general matrices the upper left matrix of size $\ell \times \ell$ is a general matrix, while for symmetric matrices it is symmetric.} \label{fig:altkitefig} \end{figure} \par \begin{figure} \caption{Illustrating the form of elements of a linear \lq\lq skew-symmetric kite\rq\rq\, space of size $\ell$ (with $\ell$ even) in the space of skew-symmetric matrices. The upper left $\ell \times \ell$ matrix is a skew-symmetric matrix.} \label{fig:altskewkitefig} \end{figure} \par We next extend this to general flags, and then to nonlinear subspaces as follows, for each of the three types of matrices $M = M_m(\mathbb C)$, resp. $Sym_m(\mathbb C)$, resp. $Sk_m(\mathbb C)$ (with $m$ even). \begin{Definition} \label{Def3.3} An {\em unfurled kite map} of given matrix type is any element of the orbit of $\iti_m^{(*)}(\ell)$, for $(*) = ( )$, resp. $(sy)$, resp. $(sk)$, under the corresponding equivalence group $\cK_{HM}$. \par A germ $f_0 : \mathbb C^n, 0 \to M, 0$ {\em contains a kite map of size $\ell$} for each of the three cases if there is a germ of an embedding $g : \bK_m^{(*)}(\ell), 0 \to \mathbb C^n, 0$ such that $f_0 \circ g$ is an unfurled kite map. \end{Definition} \par \begin{Remark} \label{Rem3.4} We note that unfurled kite maps have the property that the standard flag can be replaced by a general flag; and moreover, the flag and linear kite space can undergo nonlinear deformations. These can be performed by iteratively applying appropriate row and column operations using elements of the local ring of germs on $\mathbb C^n,0$ instead of constants. \end{Remark} A simple example of an unfurled kite map is given in Figure \ref{fig:unfkitefig}. \par \begin{figure} \caption{An example of an unfurled kite map of size $3$ into $4 \times 4$ symmetric matrices.} \label{fig:unfkitefig} \end{figure} \par \section{Detecting Characteristic Cohomology using Kite Spaces of Matrices} \label{S:sec5} \par In \S 3 of Part I, we gave a detection criterion \cite[Lemma 3.2]{D6} for the nonvanishing of a subgroup $E$ of the characteristic cohomology of the Milnor fiber $\cA^{(*)}(f_0, R)$. In this section we apply this criterion using kite maps to detect nonvanishing exterior subalgebras of $\cA^{(*)}(f_0, R)$.
In \S \ref{S:sec3}, we gave in equations \eqref{Eqn4.2}, \eqref{Eqn4.3}, and \eqref{Eqn4.4} the cohomology of the Milnor fibers of the $\cD_m^{(*)}$ for each of the three types of matrices. Thus, for any matrix singularity $f_0 : \mathbb C^n, 0 \to M, 0$, by Theorem \ref{Thm4.5} the characteristic subalgebra is a quotient of the corresponding algebra. We let $E = \gL^* R\{e_{i_1}, \dots, e_{i_\ell}\} \subseteq H^*(F_m^{(*)\, c}; R)$ denote an exterior algebra on generators of the $\ell$ lowest degrees. Then, using the map $\gl_E$ given before Proposition \ref{Prop4.6}, $\gl_E^*$ induces an isomorphism from $E$ onto its image. We next use $\gl_E$ to show that, in each case, a germ $f_0$ containing a kite map of size $\ell$ detects $E$ in $\cA^{(*)}(f_0, R)$. \begin{Thm} \label{Thm5.1} Let $f_0 : \mathbb C^n, 0 \to M, 0$ define an $m \times m$ matrix singularity of one of the three types. \par \begin{itemize} \item[a)] In the case of general matrices, if $f_0$ contains an unfurled kite map of size $\ell < m$, then $\cA(f_0, R)$ contains an exterior algebra on $\ell - 1$ generators of the form $$\gL^*R \langle e_3, e_5, \dots , e_{2\ell-1} \rangle\, .$$ \item[b)] In the case of skew-symmetric matrices (with $m$ even), if $f_0$ contains an unfurled skew-symmetric kite map of size $\ell (= 2k) < m$, then $\cA^{(sk)}(f_0, R)$ contains an exterior algebra on $k - 1$ generators of the form $$ \gL^*R \langle e_5, e_9, \dots , e_{4k-3} \rangle \, .$$ \item[c)] In the case of symmetric matrices, if $f_0$ contains an unfurled symmetric kite map of size $\ell < m$, then $\cA^{(sy)}(f_0, R)$ contains an exterior algebra of one of the forms \begin{align*} &\gL^*\bk \langle e_3, e_5, \dots , e_{2\ell-1} \rangle \qquad \text{ if $R = \bk$ is a field of characteristic $0$, } \\ &\gL^*{\mathbb Z}/2{\mathbb Z} \langle e_2, e_3, \dots , e_{\ell} \rangle \qquad \text{ if $R = {\mathbb Z}/2{\mathbb Z}$. } \end{align*} \end{itemize} \end{Thm} \begin{Remark} \label{Rem5.2} In the symmetric case, it follows from c) that if $f_0$ contains an unfurled symmetric kite map of size $\ell < m$, then the Stiefel-Whitney classes of the pull-back bundle $w_i(f_{0, w}^*(\tilde{E}_m))$ on $\cV_w$ are non-vanishing for $i = 2, \dots , \ell$. \end{Remark} \begin{proof} By Theorem \ref{Thm4.5} and the Detection Lemma \cite[Lemma 3.2]{D6}, it is sufficient to show that the corresponding kite maps of each type detect the corresponding exterior subalgebra. We use the notation from the proof of Proposition \ref{Prop4.6} which gave the vanishing compact models for the Milnor fibers in each case. \par Then, we choose $0 < \eta << \gevar < \gd < 1$ so that $H : H^{-1}(B^*_{\eta}) \cap B_{\gd} \to B^*_{\eta}$ is the Milnor fibration of $H$ and $H\circ \iti_m^{(*)}(\ell) : (H\circ \iti_m^{(*)}(\ell))^{-1}(B^*_{\eta}) \cap B_{\gevar} \to B^*_{\eta}$ is the Milnor fibration of $H\circ \iti_m^{(*)}(\ell)$. We also choose $0 < a < \eta$ so that $a\sqrt{m} < \gevar$. \par Then there are the following inclusions. \begin{equation} \label{Eqn5.3} aF_{\ell}^{(*)\, c} \,\, \subset\,\, \iti_m^{(*)}(\ell)\big((H\circ \iti_m^{(*)}(\ell))^{-1}(a^r) \cap B_{\gevar}\big) \,\, \subset \,\, aF_m^{(*)}\cap B_{\gevar} \,\, \subset \,\, aF_m^{(*)}\, , \end{equation} where $r = m$ in the general or symmetric case or $r = \frac{m}{2}$ in the skew-symmetric case.
The composition of inclusions $F_{\ell}^{(*)\, c} \subset F_m^{(*)\, c} \subset F_m^{(*)}$ commutes with multiplication by $a$ as in \eqref{CD5.5}, where each vertical map is a diffeomorphism given by multiplication by $a$. \begin{equation} \label{CD5.5} \begin{CD} {F_{\ell}^{(*)\, c}} @>>> {F_m^{(*)\, c}} @>>> {F_m^{(*)}}\\ @VVV @VVV @VVV \\ {aF_ {\ell} ^{(*)\, c}} @>{\iti^{(*)}_m(\ell)}>> {aF_m^{(*)\, c}} @>>> {aF_m^{(*)}}\\ \end{CD} \end{equation} Also, $\iti^{(*)}_m(\ell)$ in the bottom row is given by the map in \eqref{Eqn5.4}. \begin{equation} \label{Eqn5.4} aA \,\, \mapsto \,\, aQ_{\ell, m - \ell} \,\, = \,\, \begin{pmatrix} aA_{\ell} & 0_{\ell, m - \ell} \\ 0_{m - \ell, \ell} & aD_{m - \ell} \end{pmatrix} \end{equation} \par Then, by Proposition \ref{Prop4.5} the induced homomorphisms in cohomology for the top row of \eqref{CD5.5} restrict to an isomorphism from the corresponding exterior subalgebra of $H^*(F_m^{(*)} ; R)$ onto its image in the cohomology $H^*({F_{\ell}^{(*)\, c}}; R)$, and vanish on the remaining generators. Hence, as the vertical diffeomorphisms induce isomorphisms on cohomology, the induced homomorphisms on cohomology for the bottom row have the same property. Lastly, in \eqref{Eqn5.3}, the induced homomorphisms in cohomology restrict to an isomorphism from the corresponding exterior subalgebra of $H^*({aF_{m}^{(*)\, c}}; R)$ onto its image in $H^*({aF_{\ell}^{(*)\, c}}; R)$. Thus the induced homomorphism to the Milnor fiber of $H\circ \iti_m^{(*)}(\ell)$, $$ H^*({aF_{m}^{(*)\, c}}; R) \longrightarrow H^*\big((H\circ \iti_m^{(*)}(\ell))^{-1}(a^r) \cap B_{\gevar}; R\big)$$ restricts to an isomorphism of the corresponding exterior algebra onto its image. Thus, the cohomology of the Milnor fiber of $H\circ \iti_m^{(*)}(\ell)$ contains the claimed exterior subalgebra. Thus, the kite map $\iti^{(*)}_m(\ell)$ detects the corresponding exterior algebra, so the result follows by the Detection Lemma. \end{proof} \section{Examples of Matrix Singularities Exhibiting Characteristic Cohomology} \label{S:sec6} \par We consider several examples illustrating Theorem \ref{Thm5.1}. \par \begin{figure} \caption{An example of a germ $f_0$ containing an unfurled kite map of size $3$ into $4 \times 4$ symmetric matrices, extending the kite map of Figure \ref{fig:unfkitefig}.} \label{fig:unfkitefigex1} \end{figure} \par \begin{Example} \label{Exam6.1} Let $f_0 : \mathbb C^9, 0 \to Sym_4(\mathbb C), 0$ be defined by $f_0(\bx, \by)$ given by the matrix in Figure \ref{fig:unfkitefigex1} for $\bx = (x_{1,1}, x_{1,2}, x_{1,3}, x_{2,2}, x_{2,3}, x_{3,3}, x_{4,4})$ and $\by = (y_1, y_2)$. We let $\cV_0 = f_0^{-1}(\cD_4^{(sy)})$; it is given by the vanishing of the determinant of the matrix in Figure \ref{fig:unfkitefigex1} defining $f_0$. Then, $\cV_0$ has singularities in codimension $2$. We observe that when $\by = (0, 0)$ we obtain the unfurled kite map in Figure \ref{fig:unfkitefig}. Thus, by Theorem \ref{Thm5.1}, the Milnor fiber of $\cV_0$ has cohomology with ${\mathbb Z}/2{\mathbb Z}$ coefficients containing the subalgebra $\gL^*{\mathbb Z}/2{\mathbb Z} \langle e_2, e_3 \rangle$, so there is ${\mathbb Z}/2{\mathbb Z}$ cohomology in degrees $2$, $3$, and $5$. We also note that $e_j = w_j(f_{0, w}^*\tilde{E}_4)$ so that one consequence is that the second and third Stiefel-Whitney classes of the pullback of the vector bundle $\tilde{E}_4$ are non-zero. \par For coefficients a field $\bk$ of characteristic $0$, the cohomology of the Milnor fiber of $\cV_0$ contains the exterior algebra $\gL^*\bk \langle e_5 \rangle$, so there is a $\bk$ generator $e_5$ in degree $5$.
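\par Spelling out the degree counts above (immediate arithmetic in the exterior algebras): $\gL^*{\mathbb Z}/2{\mathbb Z}\langle e_2, e_3 \rangle$ has ${\mathbb Z}/2{\mathbb Z}$-basis
\begin{equation*}
\{1,\, e_2,\, e_3,\, e_2 e_3\} \quad \text{in degrees} \quad 0,\, 2,\, 3,\, 5 = 2 + 3\, ,
\end{equation*}
while $\gL^*\bk \langle e_5 \rangle$ has $\bk$-basis $\{1, e_5\}$ in degrees $0$ and $5$.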
\par By Kato-Matsumoto \cite{KM}, as the singularities have codimension $2$, the Milnor fiber is simply connected. Then, we can use the preceding to deduce information about the integral cohomology of the Milnor fiber from the universal coefficient theorem. It must have rank at least $1$ in dimension $5$, and it has $2$-torsion in dimension $2$. \end{Example} Second, we consider a general matrix singularity. \begin{figure} \caption{An example of a germ $f_0$ in Example \ref{Exam6.2}, containing a linear kite map of size $4$ into $5 \times 5$ general matrices (with $g_i(\bx, 0) \equiv 0$ for each $i$).} \label{fig:unfkitefigex2} \end{figure} \par \begin{Example} \label{Exam6.2} We let $f_0 : \mathbb C^{21}, 0 \to M_5(\mathbb C), 0$ be defined with $f_0(\bx, \by)$ given by the matrix in Figure \ref{fig:unfkitefigex2} for $\bx = (x_{1,1}, \dots , x_{4,4}, x_{5,5})$ and $\by = (y_1, y_2, y_3, y_4)$. In this example we require that $g_i(\bx, 0) \equiv 0$ for each $i$. We let $\cV_0 = f_0^{-1}(\cD_5)$; it is given by the vanishing of the determinant of the matrix in Figure \ref{fig:unfkitefigex2} defining $f_0$. Then, $\cV_0$ has singularities in codimension $4$ in $\mathbb C^{21}$; hence by Kato-Matsumoto, the Milnor fiber is $2$-connected. We observe that when $\by = (0, 0, 0, 0)$ we obtain the linear kite map $\iti_5(4)$. Thus, by Theorem \ref{Thm5.1}, the Milnor fiber of $\cV_0$ has characteristic cohomology with integer coefficients containing the subalgebra $\gL^*{\mathbb Z} \langle e_3, e_5, e_7 \rangle$. Hence, the integer cohomology has rank at least $1$ in dimensions $0, 3, 5, 7, 8, 10, 12, 15$. We cannot determine at this point whether the generator $e_9$ maps to a nonzero element in the cohomology of the Milnor fiber of $\cV_0$. Even if it does, several of the products involving $e_9$ in the exterior algebra for the cohomology of the Milnor fiber of $\cD_5$ must map to $0$, as the Milnor fiber of $\cV_0$ is homotopy equivalent to a CW-complex of dimension $20$. \end{Example}
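\par As a cross-check on the ranks just listed (elementary arithmetic in the exterior algebra $\gL^*{\mathbb Z} \langle e_3, e_5, e_7 \rangle$), the Poincar\'e polynomial of this subalgebra is
\begin{equation*}
(1 + t^3)(1 + t^5)(1 + t^7) \,\, = \,\, 1 + t^3 + t^5 + t^7 + t^8 + t^{10} + t^{12} + t^{15}\, ,
\end{equation*}
whose exponents are exactly the dimensions $0, 3, 5, 7, 8, 10, 12, 15$ appearing in Example \ref{Exam6.2}.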
\subsubsection*{Module structure over $\cA^{(*)}(f_0, R)$ for the Cohomology of the Milnor fiber} \par \begin{Remark} \label{Rem7.7} \par In \S 4 of Part I, we considered how in the hypersurface case the cohomology of the Milnor fiber is a module over the characteristic cohomology and listed four issues which must be addressed. Condition i) already holds for $\cV = \cD_m^{(*)}$; this leaves the remaining issues to be addressed: \begin{itemize} \item[1)] giving a sufficient condition that guarantees that the partial criterion \cite[(4.2)]{D6} is satisfied to ensure that for the singular Milnor number $\mu_{\cV}(f_0)$ there is a contribution of a summand of that rank; \item[2)] determining $\mu_{\cV}(f_0)$ for $\cV = \cD_m^{(*)}$. In the case that $\cV_0$ has an isolated singularity (which requires that $n$ is small, i.e. $n \leq {\operatorname{codim}}({\operatorname{sing}}(\cD_m^{(*)}))$, but allows arbitrary $m$), Goryunov-Mond \cite{GM} give a formula in all three cases for $\mu_{\cV}(f_0)$ in terms of the formula of \cite{DM} for free divisors with a correction term given by an Euler characteristic for a Tor sequence. Alternatively, by a different method using \lq\lq free completions\rq\rq\, in all three cases, with arbitrary $n$ but for small $m$, Damon-Pike \cite{DP} give formulas for $\mu_{\cV}(f_0)$ as alternating sums of lengths of explicit determinantal modules. However, there still does not exist a formula valid for all $m$ and $n$. \end{itemize} \end{Remark} \section{Characteristic Cohomology for the Complements and Links of Matrix Singularities} \label{S:sec8} \par We now turn to the characteristic cohomology of the complement and link for matrix singularities of all three types. Again, we may apply the Second Detection Lemma of Part I \cite[Lemma 3.4]{D6} for complements to detect a nonvanishing subalgebra of $\cC^{(*)}(f_0, R)$ and corresponding nonvanishing subgroups of $\cB^{(*)}(f_0, \bk)$. In order to apply the earlier results to the cases of matrix singularities, we first recall in Table \ref{Coh.Compl.Lnk} the cohomology, with coefficients a field $\bk$ of characteristic $0$, of the complements and links as given in \cite[Table 2]{D3}. We will then use the presence of kite maps to detect both subalgebras of $\cC^{(*)}(f_0, R)$ for the complements and subgroups of $\cB^{(*)}(f_0, \bk)$ for the links. \begin{Thm} \label{Thm8.7} Let $f_0 : \mathbb C^n, 0 \to M, 0$ define a matrix singularity $\cV_0$ of any of the three types. If $f_0$ contains a kite map of size $\ell$, then the characteristic cohomology of the complement $\cC^{(*)}(f_0, \bk)$, for a field $\bk$ of characteristic $0$, contains an exterior algebra given by Table \ref{V_0.compl.link}. \par Furthermore, the characteristic cohomology of the link $\cB^{(*)}(f_0, \bk)$, as a graded vector space, contains the graded subspace given by truncating the exterior subalgebra of $\cC^{(*)}(f_0, \bk)$ listed in column $2$ of Table \ref{V_0.compl.link} in the top degree and shifting by the amount listed in the last column. \par For the complements in the general and skew-symmetric cases, $\bk$ may be replaced by any coefficient ring $R$. \end{Thm} \par \begin{table} \begin{tabular}{|l|c|c|l|} \hline Determinantal & Complement & $H^*(M \backslash \cD,\bk) \simeq$ & \,\,Shift \\ Hypersurface & $M \backslash \cD$ & $H^*(K/L,\bk)$ & \\ \hline $\cD_m^{sy}$ & $GL_m(\mathbb C)/O_m(\mathbb C)$ & $\gL^*\bk\langle e_1, e_5, \dots , e_{2m-1}\rangle$ & $\binom{m+1}{2} - 2$ \\ (m = 2k+1) & $\sim U_m/O_m(\mathbb R)$ & & \\ \hline $\cD_m^{sy}$ & $GL_m(\mathbb C)/O_m(\mathbb C)$ & $\gL^*\bk\langle e_1, e_5, \dots , e_{2m-3}\rangle$ & $\binom{m+1}{2} + m - 2$ \\ (m = 2k) & & & \\ \hline $\cD_m$ & $GL_m(\mathbb C) \sim U_m$ & $\gL^*\bk\langle e_1, e_3, \dots , e_{2m-1}\rangle$ & $m^2 - 2$ \\ \hline $\cD_m^{sk}$ & $GL_{2k}(\mathbb C)/Sp_{k}(\mathbb C)$ & $\gL^*\bk\langle e_1, e_5, \dots , e_{2m-3}\rangle$ & $\binom{m}{2} - 2$ \\ (m = 2k) & $\sim U_{2k}/Sp_{k}$ & & \\ \hline \end{tabular} \caption{The cohomology of the complements $M \backslash \cD$ and links $L(\cD)$ for each determinantal hypersurface $\cD$. The complements are homotopy equivalent to the quotients of maximal compact subgroups $K/L$, with cohomology given in the third column, where the generators $e_k$ of the cohomology are in degree $k$, and the structure is an exterior algebra. For the links $L(\cD)$, the cohomology is isomorphic as a vector space to the cohomology of the complement truncated in the top degree and shifted by the degree indicated in the last column.} \label{Coh.Compl.Lnk} \end{table} \par \begin{Remark} \label{Rem8.5} In what follows, to simplify statements, instead of referring to the complement of $\cV_0, 0 \subset \mathbb C^n, 0$ as $B_{\gevar} \backslash \cV_0$ for sufficiently small $\gevar > 0$, we will just refer to the complement as $\mathbb C^n \backslash \cV_0$, with the understanding that it is restricted to a sufficiently small ball.
\end{Remark} \par \begin{proof}[Proof of Theorem \ref{Thm8.7}] The proof is similar to that for Theorem \ref{Thm5.1}. As the statements are independent of $f_0$ in a given $\cK_{\cV}$-equivalence class, we may apply an element of $\cK_H$ to obtain an $f_0$ containing a linear kite map. It is sufficient to show, as for the case of Milnor fibers, that the linear kite map detects the indicated subalgebra in $\cC^{(*)}(f_0, \bk)$, and then apply Alexander duality for the result for the link. \par By the results in \cite{D3} summarized in Table \ref{Coh.Compl.Lnk}, the complement $M \backslash \cD_m^{(*)}$ is given by a homogeneous space $G/H$ which has a compact homotopy model $K/L$, where $K = U_m$ for each of the cases. For successive values of $m$, we have for the three cases the successive inclusions: \begin{itemize} \item[i)] for the general case, $GL_m(\mathbb C) \hookrightarrow GL_{m+1} (\mathbb C)$ by $A \mapsto \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}$; \item[ii)] for the symmetric case, $GL_m(\mathbb C) \hookrightarrow GL_{m+1} (\mathbb C)$ sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}$ induces an inclusion $GL_m(\mathbb C)/O_m(\mathbb C) \hookrightarrow GL_{m+1}(\mathbb C) /O_{m+1}(\mathbb C)$; \item[iii)] for even $m = 2k$, $GL_m(\mathbb C) \hookrightarrow GL_{m+2} (\mathbb C)$ sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & I_2 \end{pmatrix}$, for $I_2$ the $2 \times 2$ identity matrix, induces an inclusion $GL_m(\mathbb C)/Sp_k(\mathbb C) \hookrightarrow GL_{m+2} (\mathbb C)/Sp_ {k+1}(\mathbb C)$. \end{itemize} Then, these are obtained by the action of $GL_m(\mathbb C)$ on the appropriate spaces of matrices. They restrict to the compact homogeneous spaces which are homotopy equivalent models for the complements, given in Table \ref{Coh.Compl.Lnk} and which we denote by $K/L$ for each of the three cases. Also, the inclusions correspond to the following inclusions of spaces of matrices. \begin{itemize} \item[i)] for the general case, $M_m(\mathbb C) \hookrightarrow M_{m+1} (\mathbb C)$ by $A \mapsto \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}$; \item[ii)] for the symmetric case, $Sym_m(\mathbb C) \hookrightarrow Sym_{m+1} (\mathbb C)$ sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}$; \item[iii)] for even $m = 2k$, $Sk_m(\mathbb C) \hookrightarrow Sk_{m+2} (\mathbb C)$ sending $A \mapsto \begin{pmatrix} A & 0 \\ 0 & J_1 \end{pmatrix}$, for the $2 \times 2$ skew-symmetric matrix $J_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. \end{itemize} \par Furthermore, for the cohomology of these spaces (via their homotopy equivalent compact models $K/L$ for each case) the maps induced by the inclusions send $e_j \mapsto e_j$ for the nonzero generators in successive spaces. \par Via these inclusions, the corresponding actions of $GL_m(\mathbb C)$ on these spaces (as explained in \cite{D3}) applied to either $I_m$ for the general or symmetric case, or $J_k$ for the skew-symmetric case, factor through the homogeneous spaces given in Table \ref{Coh.Compl.Lnk} to give diffeomorphisms to the complements of $\cD_m^{(*)}$ in each case. The inclusions of the homogeneous spaces correspond to the inclusions of the spaces of nonsingular matrices. Under this correspondence, the cohomology of the homogeneous spaces gives the cohomology of the complements of the spaces of $m \times m$ singular matrices $M^{(*)}_m \backslash \cD_m^{(*)}$. Here we let $M^{(*)}_m$ denote the space of $m \times m$ matrices of appropriate type.
\par Just as for Milnor fibers, we use multiplication to define a vanishing compact model for the complement. We let $\cP^{(*)} \subset M^{(*)}_m \backslash \cD_m^{(*)}$ denote the compact model for the complement in each of the three cases. The action of $U_m$ in each case realizes the elements $A$ of the compact model as products of elements of $U_m$, and hence $\| A \| = \sqrt{m}$. Thus, $\cP^{(*)} \subset B_{\sqrt{m}}$. Then, we can multiply the spaces of matrices by nonzero constants $a$ and for each case $a\cdot \cP^{(*)} \subset B_{a\sqrt{m}}$. Then, for a neighborhood $B_{\gd}$ of $0$ in $M_{m}^{(*)}$, if $a\sqrt{m} < \gd$, then $a\cdot \cP^{(*)} \subset B_{\gd} \backslash \cD_m^{(*)}$. \par We define $\Phi : \cP^{(*)} \times (0, a) \to M_{m}^{(*)} \backslash \cD_m^{(*)}$ sending $\Phi(A, t) = t\cdot A$; then $\Phi$ defines a vanishing compact model for the complement in each case. \par It remains to show that the kite map of size $\ell$ detects the corresponding exterior algebra given in Table \ref{V_0.compl.link} for the characteristic cohomology of the complement. We consider $\iti_m^{(*)} (\ell) : \bK_{m}^{(*)}(\ell) \cap B_{\gevar} \to M_{m}^{(*)} $. If $M_{\ell}^{(*)}$ denotes the embedding of the corresponding $\ell \times \ell$ matrices given above, then there is an $a >0 $ so that $a M_{\ell} ^{(*)} \subset \bK_{m}^{(*)}(\ell) \cap B_{\gevar}$. Then, as in the proof of Theorem \ref{Thm5.1}, the composition $$ a (M_{\ell} ^{(*)}\backslash \cD_{\ell}^{(*)}) \,\, \subset \,\, (\bK_{m}^{(*)}(\ell)\backslash \cD_m^{(*)}) \cap B_{\gevar} \,\, \overset{\iti_m ^{(*)}(\ell)}{\longrightarrow} \,\, M^{(*)}_m\backslash \cD_m^{(*)} $$ induces in cohomology an isomorphism from the exterior subalgebra given in Table \ref{V_0.compl.link} to a subalgebra of the cohomology of $a (M_{\ell}^{(*)}\backslash \cD_{\ell}^{(*)})$ (since it is diffeomorphic to $M_{\ell}^{(*)} \backslash \cD_{\ell}^{(*)}$). As this homomorphism factors through $H^*(\mathbb C^n \backslash \cV_0; \bk)$, it is also an isomorphism onto a subalgebra of this cohomology. This shows that $\iti_m^{(*)} (\ell)$ detects the exterior algebra, so by the Second Detection Lemma in Part I \cite[Lemma 3.4]{D6}, the result follows for the complement. \par Lastly, let $\widetilde{\gG}^{(*)}(f_0, \bk)$ denote the graded subspace of reduced homology obtained from the Kronecker dual $\gG^{(*)}(f_0, \bk)$ to this subalgebra. Then, by Alexander duality we obtain a graded subspace of $H^*(L(\cV_0); \bk)$ isomorphic to $\widetilde{\gG}^{(*)}(f_0, \bk)$. It remains to show it is obtained from the exterior algebra by truncating it and applying an appropriate shift. As the exterior algebra satisfies Poincar\'e duality under multiplication, this is done using the same argument as in the proof of \cite[Prop. 1.9]{D3}.
\end{proof} \par \begin{table} \begin{tabular}{|l|c|l|} \hline Determinantal & $\cC^{(*)}(f_0,\bk)$ & \,\,Shift for Link \\ Hypersurface Type & contains subalgebra & \\ \hline $\cD_m^{sy}$ & $\gL^*\bk\langle e_1, e_5, \dots , e_{2\ell-1}\rangle$ & $2n - \binom{\ell + 1}{2} - 2$ \\ $\ell$ odd & & \\ \hline $\cD_m^{sy}$ & $\gL^*\bk\langle e_1, e_5, \dots , e_{2\ell-3}\rangle$ & $2n - \binom{\ell}{2} - 2$ \\ $\ell$ even & & \\ \hline $\cD_m$ & $\gL^*\bk\langle e_1, e_3, \dots , e_{2\ell-1}\rangle$ & $2n - \ell^2 - 2$ \\ \hline $\cD_m^{sk}$ (m = 2k) & $\gL^*\bk\langle e_1, e_5, \dots , e_{2\ell-3}\rangle$ & $2n - \binom{\ell}{2} - 2$ \\ $\ell$ even & & \\ \hline \end{tabular} \caption{The characteristic cohomology with coefficients in a field $\bk$ of characteristic $0$ for $\cV_0 = f_0^{-1}(\cV)$ for each matrix type $\cV = \cD_m^{(*)}$. If $f_0$ contains an unfurled kite map of size $\ell$, the characteristic cohomology $\cC^{(*)}(f_0, \bk)$ contains an exterior subalgebra given in column 2 (where $e_j$ has degree $j$). Then, for the link $L(\cV_0)$, the characteristic cohomology contains as a graded subspace the exterior algebra in column 2 truncated in the top degree and shifted by the degree indicated in the last column. For the complements in the general or skew-symmetric cases, $\bk$ in column 2 may be replaced by any coefficient ring $R$.} \label{V_0.compl.link} \end{table} \par We reconsider the examples from \S \ref{S:sec6}. \begin{Example} \label{Exam8.11} In Example \ref{Exam6.1}, we considered a singularity $\cV_0$ defined by $f_0 : \mathbb C^9, 0 \to Sym_4(\mathbb C), 0$ given by the matrix in Figure \ref{fig:unfkitefigex1}. It contains an unfurled kite map of size $3$, so we can apply Theorem \ref{Thm8.7}. \par For coefficients a field $\bk$ of characteristic $0$, from Table \ref{V_0.compl.link} the characteristic cohomology of the complement $\cC^{(sy)}(f_0, \bk)$ contains an exterior algebra $\gL^*\bk \langle e_1, e_5 \rangle$, so there are $\bk$-vector space generators $1$, $e_1$, $e_5$, and $e_1\cdot e_5$ in degrees $0$, $1$, $5$ and $6$. \par The characteristic cohomology $\cB^{(sy)}(f_0, \bk)$ of the link of $\cV_0$ contains the subspace obtained by upper truncating the exterior algebra to obtain the $\bk$ vector space $\bk \langle 1, e_1, e_5 \rangle$ and shifting by $2\cdot 9 - 2 - \binom{4}{2} = 10$ to obtain $1$-dimensional generators in degrees $10$, $11$, and $15$. We note that the link $L(\cV_0)$ has real dimension $15$, so a vector space generator of the characteristic subalgebra generates the top dimensional class. \par We also note from Table \ref{Coh.Compl.Lnk} that $\cD_4^{(sy)}$ has link cohomology given by the upper truncated $\gL^*\bk \langle e_1, e_5 \rangle$ but shifted by $\binom{5}{2} + 4 - 2 = 12$, so there is $1$-dimensional cohomology in degrees $12$, $13$, and $17$. Thus, $f_0^*$ does not send any of these classes to nonzero classes in the characteristic cohomology. \par We do note that for the kite map $\iti_{4}^{(sy)}(3) : \mathbb C^7, 0 \to Sym_4(\mathbb C), 0$ the characteristic cohomology for the link is the upper truncated exterior algebra giving the $\bk$ vector space $\bk \langle 1, e_1, e_5 \rangle$ and then shifted by $6$. Thus, its degrees are $6$, $7$ and $11$. We see that, as noted in \cite[Remark 1.8]{D6} in terms of the relative Gysin homomorphism, there is a shift in degrees given by twice the difference of the source dimensions of the two maps. \end{Example} Second, we return to Example \ref{Exam6.2}.
\begin{Example} \label{Exam8.12} From Example \ref{Exam6.2}, the singularity $\cV_0 = f_0^{-1}(\cD_5) $ is defined by $f_0 : \mathbb C^{21}, 0 \to M_5(\mathbb C), 0$, given by the matrix in Figure \ref{fig:unfkitefigex2}. Also, $f_0$ contains the linear kite map of size $4$. Thus, we may apply Theorem \ref{Thm8.7}: the characteristic cohomology $\cC(f_0, R)$, for any coefficient ring $R$, contains the subalgebra $\gL^*R \langle e_1, e_3, e_5, e_7 \rangle$. Hence, the characteristic cohomology of the complement has $R$-rank at least $1$ in all degrees between $0$ and $16$, except for $2$ and $14$, and it has rank at least $2$ in degree $8$. \par The characteristic cohomology $\cB(f_0, \bk)$ of the link contains the subspace obtained by upper truncating the exterior algebra over $\bk$, obtained from the same $\bk$ vector space by removing the generator of degree $16$ given by the product $e_1\cdot e_3 \cdot e_5 \cdot e_7$. Then, we shift the resulting vector space by $2\cdot 21 - 2 - 4^2 = 24$ to obtain $1$-dimensional generators in all degrees between $24$ and $39$, except for $26$ and $38$, and of dimension at least $2$ in degree $32$. We note that the link $L(\cV_0)$ has real dimension $39$, so again a vector space generator of the characteristic subalgebra generates the top dimensional class. \end{Example} \section{Characteristic Cohomology for Non-square Matrix Singularities} \label{S:sec8a} We extend the results for $m \times m$ general matrices and matrix singularities to non-square matrices. \subsection*{General $m \times p$ Matrix Singularities with $m \neq p$:} \par Let $M = M_{m, p}(\mathbb C)$ denote the space of $m \times p$ complex matrices (where we will assume $m \neq p$, with neither $= 1$). We consider the case where $m > p$; the other case $m < p$ is equivalent by taking transposes. The varieties of singular $m \times p$ complex matrices, $\cD_{m, p} \subset M_{m, p}(\mathbb C) $, with $m \neq p$ were not considered earlier because they do not have Milnor fibers. However, the methods we applied earlier to $m \times m$ general matrices will also apply to the complement and link of $\cD_{m, p}$. We explain that the complement has a compact homotopy model given by a Stiefel manifold. As for the case of $m \times m$ matrices, it has a Schubert decomposition using the ordered factorization by \lq\lq pseudo-rotations\rq\rq\, due to the combined work of J. H. C. Whitehead \cite{W}, C. E. Miller \cite{Mi}, and I. Yokota \cite{Y}. The Schubert cycles give a basis for the homology and the Kronecker dual cohomology classes which can be identified with the classes computed algebraically in \cite[Thm. 3.10]{MT} (or see e.g. \cite[\S 8]{D3}). Thus, for appropriate coefficients, the form of both $\cC_{\cV}(f_0, R)$ and $\cB_{\cV}(f_0, \bk)$ can be given for $\cV = \cD_{m, p}$ and $f_0 : \mathbb C^n, 0 \to M_{m, p}(\mathbb C), 0$. \par Then, we use the Schubert structure on the Stiefel manifolds to define vanishing compact models. This allows us to define, as for the $m \times m$ case, kite subspaces and maps to detect nonvanishing characteristic cohomology of the complement and link. \par \subsubsection*{Complements of the Varieties of Singular $m \times p$ Matrices} \par As above, let $M = M_{m, p}(\mathbb C)$ denote the space of $m \times p$ complex matrices with $m > p$.
The complement of the variety $\cD_{m, p}$ of singular matrices can be described as the set of ordered $p$-tuples of independent vectors in $\mathbb C^m$. The Gram-Schmidt procedure replaces such a $p$-tuple by an orthonormal set of $p$ vectors in $\mathbb C^m$; the space of these is the Stiefel variety $V_p(\mathbb C^m)$, and the Gram-Schmidt procedure provides a strong deformation retract of the complement $M \backslash \cD_{m, p}$ onto the Stiefel variety $V_p(\mathbb C^m)$. Thus, the Stiefel variety is a compact model for the complement. \subsubsection*{Schubert Decomposition for the Stiefel Variety} \par The work of Whitehead \cite{W}, combined with that of C. E. Miller \cite{Mi}, and I. Yokota \cite{Y}, provides a Schubert-type cell decomposition for $V_p(\mathbb C^m)$ similar to that given in the $m \times m$ case. There is an action of $GL_m(\mathbb C) \times GL_p(\mathbb C)$ on $M_{m, p}(\mathbb C)$ which is appropriate for considering $\cK_M$ equivalence of $m \times p$ complex matrix singularities. However, just for understanding the topology of the link and complement of $\cD_{m, p}$ it is sufficient to consider the left action of $GL_m(\mathbb C)$ on $M$, with an open orbit consisting of the matrices of rank $p$. As explained in \cite{D4}, the complement $M_{m, p}(\mathbb C) \backslash \cD_{m, p}$ is diffeomorphic to the homogeneous space $GL_m(\mathbb C) /GL_{m - p}(\mathbb C)$. The diffeomorphism is induced by $GL_m(\mathbb C) \to M_{m, p}(\mathbb C)$ given by $A \mapsto A \cdot \begin{pmatrix} I_p \\ 0_{m-p, p}\end{pmatrix}$. Here the subgroup $GL_{m - p}(\mathbb C)$ represents the subgroup of elements $\begin{pmatrix} I_p & 0 \\ 0 & A \end{pmatrix}$ with $A \in GL_{m - p}(\mathbb C)$. This gives the isotropy subgroup of the left action on $\begin{pmatrix} I_p \\ 0_{m-p, p}\end{pmatrix}$. \par For successive values of $m$, we have the successive inclusions: $GL_{m- 1}(\mathbb C) \hookrightarrow GL_m(\mathbb C)$ by $A \mapsto \begin{pmatrix} 1 & 0 \\ 0 & A \end{pmatrix}$. These induce inclusions $$ \iti_{m-1, p- 1} : GL_{m - 1}(\mathbb C)/GL_{m - p}(\mathbb C) \hookrightarrow GL_m(\mathbb C) /GL_{m - p}(\mathbb C)\, .$$ There is a corresponding inclusion of the spaces of matrices $M_{m - 1, p-1}(\mathbb C) \hookrightarrow M_{m, p}(\mathbb C)$ by $B \mapsto \begin{pmatrix} 1 & 0 \\ 0 & B \end{pmatrix}$. This inclusion induces a map of the complements of the varieties of singular matrices $$ \tilde{\iti}_{m-1, p- 1} : M_{m - 1, p-1}(\mathbb C) \backslash \cD_{m-1, p-1} \hookrightarrow M_{m, p}(\mathbb C) \backslash \cD_{m, p}\, .$$ \par The actions of the groups on the spaces of matrices commute via the inclusions of the groups with the corresponding inclusions of spaces of matrices. Thus, we have a commutative diagram of diffeomorphisms and inclusions \begin{equation} \label{CD5.5a} \begin{CD} {GL_{m - 1}(\mathbb C)/GL_{m - p}(\mathbb C)} @>{\iti_{m-1, p- 1}}>> {GL_m(\mathbb C) /GL_{m - p}(\mathbb C)}\\ @V{\simeq}VV @V{\simeq}VV \\ {M_{m - 1, p-1}(\mathbb C) \backslash \cD_{m-1, p-1}} @>{\tilde{\iti}_{m-1, p- 1}}>> {M_{m, p}(\mathbb C) \backslash \cD_{m, p}} \\ \end{CD} \end{equation} \par The homogeneous spaces $GL_m(\mathbb C) /GL_{m - p}(\mathbb C)$ are homotopy equivalent to the homogeneous spaces given as the quotient of their maximal compact subgroups $U_m/U_{m - p}$.
Via the vertical diffeomorphisms in \eqref{CD5.5a} and the identification $U_m/U_{m-p} \simeq V_p(\mathbb C^m)$, the complement is homotopy equivalent to the Stiefel variety $V_p(\mathbb C^m)$. \par By results of Whitehead \cite{W} applied in the complex case (see e.g. \cite[\S 3]{D4}), the Schubert cell decomposition of $V_p(\mathbb C^m)$ is given via ordered factorizations of matrices in $U_m$ into products of \lq\lq pseudo-rotations\rq\rq. For this we use the reverse flag with $\tilde{e}_j = e_{m+1-j}$ for $j = 1, \dots, m$ and $\mathbb C^k$ spanned by $\{\tilde{e}_1, \dots , \tilde{e}_k\}$. Then, any $B \in U_m$ can be uniquely written by a factorization in decreasing order, \begin{equation} \label{Eqn8.3a} B \, \, = \,\, A_{(\theta_k, v_k)} \cdots A_{(\theta_2, v_2)} \cdot A_{(\theta_1, v_1)}\, , \end{equation} with $v_j \in_{\min} \mathbb C^{m_j}$ and $1 \leq m_1 < m_2 < \cdots < m_k \leq m$, and each $\theta_j \not \equiv 0 \,\mod 2\pi$. Here $v_j \in_{\min} \mathbb C^{m_j}$ means $v_j \in \mathbb C^{m_j}$ but $v_j \not \in \mathbb C^{m_j-1}$. Also, each $A_{(\theta_j, v_j)}$ is a pseudo-rotation about $\mathbb C\langle v_j\rangle$, which is the identity on $\mathbb C\langle v_j\rangle^{\perp}$ and multiplies $v_j$ by $e^{i\,\theta_j}$. In \cite[\S 3]{D4} the results are given for increasing factorizations; however, as explained there, the results equally well hold for decreasing factorizations. If $m_{k^{\prime}} \leq m-p < m_{k^{\prime} +1}$, then each $A_{(\theta_j, v_j)}$ for $j \leq k^{\prime}$ belongs to $U_{m-p}$. Hence, $B$ is in the same $U_{m-p}$-coset as $$ B^{\prime} \, \, = \,\, A_{(\theta_k, v_k)} \cdots A_{(\theta_{k^{\prime}+1}, v_{k^{\prime}+1})}\, .$$ Then, the projections $p_{m, p}: U_m \to U_m/U_{m-p}$ of the Schubert cells $S_{\bm}$ for $\bm = (m_1, \dots , m_k)$ with $m-p < m_1 < \dots < m_k \leq m$ give a cell decomposition for $U_m/U_{m-p} \simeq V_p(\mathbb C^m)$. Furthermore, the closures $\overline{S_{\bm}}$, which are the Schubert cycles, are \lq\lq singular manifolds\rq\rq\ which have Borel-Moore fundamental classes (see e.g. comment after \cite[Thm. 3.7]{D4}). \subsubsection*{Cohomology of the Complement and Link} \par We can give a relation between the homology classes given by the Schubert cycles resulting from the Whitehead decomposition and the cohomology with integer coefficients of the Stiefel variety, and hence the complement of the variety $\cD_{m, p}$ (computed in \cite[Thm. 3.10]{MT}). \begin{Thm} \label{Thm8.10a} The homology of the complement of $\cD_{m, p}$ ($\simeq H_*(V_p(\mathbb C^m); {\mathbb Z})$) has for a free ${\mathbb Z}$-basis the fundamental classes of the Schubert cycles, given as images $p_{m, p\, *}(\overline{S_{\bm}})$, with $\bm = (m_1, m_2, \dots , m_k)$ for $m-p < m_1 < \cdots < m_k \leq m$, as we vary over the Schubert decomposition of $U_m/U_{m-p}$. The Kronecker duals of these classes give the ${\mathbb Z}$-basis for the cohomology, which is given as an algebra by \begin{equation} \label{Eqn8.1a} H^*(M_{m, p} \backslash \cD_{m, p}; {\mathbb Z}) \,\, \simeq \,\, \gL^*{\mathbb Z}\langle e_{2(m-p)+1}, e_{2(m-p)+3}, \dots , e_{2m-1}\rangle \end{equation} with degree of $e_j$ equal to $j$. \par Moreover, the Kronecker duals of the {\em simple Schubert classes} $S_{(m_1)}$ for $m-p < m_1 \leq m$ are homogeneous generators of the exterior algebra cohomology. \end{Thm} \begin{proof} The computation of $H^*(V_p(\mathbb C^m); {\mathbb Z})$ is given in \cite[Thm. 3.7]{D4}. As it is a homotopy model for the complement, \eqref{Eqn8.1a} follows. \par Second, that the Schubert cycles form a basis for the homology follows exactly as in the proof of \cite[Thm 6.1]{D4}, as does the proof that the Kronecker duals to the simple Schubert cycles provide homogeneous generators of the exterior algebra. \end{proof}
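\par For a small instance of \eqref{Eqn8.1a} (the values $(m, p) = (4, 2)$ are chosen only for illustration):
\begin{equation*}
H^*(M_{4, 2} \backslash \cD_{4, 2}; {\mathbb Z}) \,\, \simeq \,\, \gL^*{\mathbb Z}\langle e_5, e_7 \rangle\, ,
\end{equation*}
so the compact model $V_2(\mathbb C^4)$ has free ${\mathbb Z}$-cohomology with basis $1, e_5, e_7, e_5 e_7$ in degrees $0$, $5$, $7$, and $12$, the top degree being $\dim_{\mathbb R} V_2(\mathbb C^4) = 12$.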
\subsubsection*{Cohomology of the Link} \par As a consequence of Theorem \ref{Thm8.10a}, we obtain the following conclusion for the link. \begin{Thm} \label{Thm8.4a} For the variety of singular $m \times p$ complex matrices, $\cD_{m, p}$ (with $m > p$), the cohomology of the link is given (as a graded vector space) as the upper truncated cohomology $H^*(M_{m, p} \backslash \cD_{m, p}; \bk)$ given in \eqref{Eqn8.1a} and shifted by $p^2-2$. \par The Alexander duals of the Schubert cycles of nonmaximal dimension give a basis for the cohomology of the link. \end{Thm} \subsection*{Kite Spaces and Maps for $m \times p$ Matrix Singularities with $m \not = p$:} \par \begin{Definition} \label{Def4.1a} For $m \times p$ matrices with $m > p$, with $p \neq 1$ and the reverse standard flag of subspaces of $\mathbb C^m$, the corresponding {\em linear kite subspace of length $\ell$} is the linear subspace of the space of matrices defined as follows: For $M_{m, p}(\mathbb C)$, it is the linear subspace $\bK_{m, p}(\ell)$ spanned by \par $$ \{E_{i, j} : r + 1 \leq i \leq m, r + 1 \leq j \leq p\} \cup \{E_{i, i} : 1 \leq i \leq r\}$$ where $r = p - \ell$. \par Furthermore, we refer to the germ of the inclusion $\iti_ {m, p}(\ell) : \bK_ {m, p}(\ell), 0 \to M_{m, p}(\mathbb C), 0$ as a {\em linear kite map of length $\ell$}. Furthermore, a germ which is $\cK_M$ equivalent to $\iti_ {m, p}(\ell)$ will be referred to as an {\em unfurled kite map of length $\ell$}. We also say that a germ $f_0 : \mathbb C^n, 0 \to M_{m, p}(\mathbb C), 0$ contains a kite map of length $\ell$ if there is an embedding $g : \bK_ {m, p}(\ell), 0 \to \mathbb C^n, 0$ so that $f_0 \circ g$ is an unfurled kite map of length $\ell$. \end{Definition} The general form of the elements, the \lq\lq kites\rq\rq, in the linear kite subspaces is given in \eqref{Eqn8.15a}, \begin{equation} \label{Eqn8.15a} Q_{\ell, m - \ell} \,\, = \,\, \begin{pmatrix} D_{r} & 0_{r, p - r} \\ 0_{m - r, r} & A_{m - r, p - r} \end{pmatrix} \end{equation} where $r = p - \ell$ and $A_ {m - r, p - r} $ is an $(m - r) \times (p - r)$-matrix which denotes an arbitrary matrix in $M_{m - r, p - r}(\mathbb C)$. Also, $0_{q, s}$ denotes a $0$-matrix of size $q \times s$ and $D_{r}$, an arbitrary $r \times r$ diagonal matrix. The general element is exhibited in Figure \ref{fig:altkitefiga}. \begin{Remark} \label{Rem8.15a} Although the body of the kite is not square, the length $\ell$ denotes the rank of a generic matrix in the body, which is consistent with the square case when $m = p$. We note that to be consistent with the form of the matrices for the group representation of the complement and the Schubert decomposition for the nonsquare case, the kite is \lq\lq upside down\rq\rq. However, elements of $\cK_M$ allow for the composition with invertible matrices $GL_m$ and $GL_p$ with entries in the local ring of germs. This allows for a linear change of coordinates so the kite can be inverted to the expected form as for the $m \times m$ case. \end{Remark} \par \begin{figure} \caption{Illustrating the form of elements of a linear kite space of length $\ell$ in the space of general $m \times p$ matrices with $r = p - \ell$. The upper left $r \times r$ matrix is a diagonal matrix with arbitrary entries, and the lower right matrix is a general matrix of size $(m -r) \times (p - r)$.} \label{fig:altkitefiga} \end{figure} \par
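For instance (a concrete instance of Definition \ref{Def4.1a} and \eqref{Eqn8.15a}, with the values $m = 5$, $p = 4$, $\ell = 2$, so $r = 2$, chosen only for illustration; up to transpose this is the setting of Example \ref{Ex8.1a} below), the elements of $\bK_{5, 4}(2)$ have the form
\begin{equation*}
\begin{pmatrix} x_{1,1} & 0 & 0 & 0 \\ 0 & x_{2,2} & 0 & 0 \\ 0 & 0 & * & * \\ 0 & 0 & * & * \\ 0 & 0 & * & * \end{pmatrix}
\end{equation*}
with the diagonal tail $D_2$ in the upper left and an arbitrary $3 \times 2$ body in the lower right. \par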
We have an analogue of the detection result for the case of $m \times m$ general matrices. \begin{Thm} \label{Thm8.7a} Let $f_0 : \mathbb C^n, 0 \to M_{m, p}(\mathbb C), 0$ define a matrix singularity. If $f_0$ contains a kite map of length $\ell$, then the characteristic cohomology of the complement $\cC_{m, p}(f_0, \bk)$, for a field $\bk$ of characteristic $0$, contains the exterior algebra given by \begin{equation} \label{Eqn8.7a} \gL^*\bk\langle e_{2(m-p)+1}, e_{2(m-p)+3}, \dots , e_{2(m - p) +2 \ell - 1}\rangle \end{equation} where each $e_j$ has degree $j$. \par Furthermore, the characteristic cohomology of the link $\cB_{m, p}(f_0, \bk)$, as a graded vector space, contains the graded subspace given by truncating the top degree of the exterior subalgebra \eqref{Eqn8.7a} of $\cC_{m, p}(f_0, \bk)$ and shifting by $2n - 2 - \ell \cdot (2(m - p) + \ell)$. \par For the complement, $\bk$ may be replaced by any coefficient ring $R$. \end{Thm} \begin{proof} The line of proof follows that for the $m \times m$ general case. \par Under the inclusion $\iti_{m-1,p-1}: V_{p-1}(\mathbb C^{m-1}) \hookrightarrow V_{p}(\mathbb C^{m})$, the identification of the cohomology classes with Kronecker duals of the Schubert cycles implies $$ \iti_{m-1,p-1}^*(e_{2(m-p+ j) -1}) = e_{2(m-p+ j) -1} \quad \text{for } 1 \leq j \leq p-1\, , \quad \text{and} \quad \iti_{m-1,p-1}^*(e_{2m -1}) = 0 \, .$$ If we compose successive inclusions $r$ times, for $r = p - \ell$, to give $V_{p-r}(\mathbb C^{m-r}) \hookrightarrow V_{p}(\mathbb C^{m})$, then the induced map on cohomology has image the algebra given in \eqref{Eqn8.7a}. Thus, $V_{p}(\mathbb C^{m})$ provides a compact model for the complement, and the composition $\iti_{m - 1, p - 1} \circ \iti_{m - 2, p - 2} \circ \cdots \circ \iti_{m - r, p-r}$ with $r = p - \ell$ detects the subalgebra in \eqref{Eqn8.7a}. \par Now using the vanishing compact model $t\cdot V_{p}(\mathbb C^{m})$, we can follow the same reasoning as for the $m \times m$ case using the functoriality and invariance under $\cK_{\cD_{m, p}}$, and apply the Second Detection Lemma to obtain the result. \par Then, as the exterior algebra satisfies Poincar\'e duality under multiplication, we can deduce the result for $\cB_{m, p}(f_0, \bk)$ using the same argument as in the proof of \cite[Prop. 1.9]{D3}, where for the shift $2n - 2 - \dim_{\mathbb R} K$ we replace $\dim_{\mathbb R} K$ by the top degree of the algebra in \eqref{Eqn8.7a}. This is the same as $\dim_{\mathbb R} V_{p - r}(\mathbb C^{m - r})$, which is $$ 2(p - r)\big((m - r) -(p - r)\big) + (p-r)^2 \,\, = \,\, (p-r)\big(2(m - p) + (p - r)\big) \,\, = \,\, \ell (2(m - p) + \ell) \, .$$ \end{proof} \par \begin{Example} \label{Ex8.1a} Consider an example of a matrix singularity which is given by $f_0 : \mathbb C^{12}, 0 \to M_{4, 5}(\mathbb C), 0$ defined by the matrix in Figure \ref{fig:unfkitefigex4} for which all $g_{i, j}(\bx, 0) \equiv 0$. \begin{figure} \caption{An example of a $4 \times 5$ matrix singularity $f_0$, with $g_{i, j}(\bx, 0) \equiv 0$ for each $(i, j)$. It contains a kite map of length $2$ given when all $y_i = 0$.} \label{fig:unfkitefigex4} \end{figure} For $\by = (y_1, y_2, y_3, y_4)$, when $\by = 0$, we see that $f_0$ contains a kite map of length $2$.
Then, Theorem \ref{Thm8.7a} implies that $\cC_{\cD_{4, 5}}(f_0, {\mathbb Z})$ contains a subalgebra $\gL^*{\mathbb Z}\langle e_3, e_5 \rangle$. Also, by Theorem \ref{Thm8.7a}, $\cB_{\cD_{4, 5}}(f_0, \bk)$ contains as a subgroup the subalgebra upper truncated and then shifted by $2\cdot 12 - 2 - 2\,(2(5 - 4) + 2) = 14$. Thus, the classes $\{ 1, e_3, e_5\}$ are shifted by $14$ to give classes in degrees $14, 17, 19$. As $\cV_0 = f_0^{-1}(\cD_{4, 5})$ has codimension $2$, the link $L(\cV_0)$ has dimension $19$ and the characteristic cohomology class in degree $19$ is Kronecker dual to the fundamental class of $L(\cV_0)$. \end{Example} \par \section{Cohomological Relations between Local Links via Restricted Kite Spaces} \label{S:sec8b} \par Lastly, it is still not well understood how the structure of the strata for the varieties of singular matrices contributes to the (co)homology of the links for the various types of matrices. We use kite spaces for all of the cases to determine the relation between the cohomology of local links for strata with the cohomology of the global link. This includes as well the relation of the local links of strata with the local links of strata of higher codimension. This is via the relative Gysin homomorphism defined in \cite[(1.10)]{D6}, which is an analog of the Thom isomorphism theorem in these cases. \par We do so by explaining how the kite subspaces provide transverse sections to the strata of the varieties of singular matrices for all three cases of $m \times m$ matrices and also for general $m \times p$ matrices. To consider all cases simultaneously, we denote the corresponding space of matrices by $M$ and the variety of singular matrices by $\cD_{*}^{(*)}$. Also, we consider the kite subspace of length $\ell$ of appropriate type which we denote by $\bK_{*}^{(*)}(\ell)$. For the $m \times m$ cases, we also let $r = m - \ell$ (which is the same as $p - \ell$ when $m = p$). \par We consider an affine subspace obtained by choosing fixed nonzero values at the entries in the tail (e.g. the value $1$). When the entries in the body of the kite are $0$, we obtain a matrix $A$ of rank $r$ and hence corank $\ell$. Then, the resulting space we consider has the form $A + M^{\prime}$ where $M^{\prime}$ denotes one of the spaces $M_{\ell}(\mathbb C)$, $M_{\ell}^{(sy)}(\mathbb C)$, $M_{\ell}^{(sk)}(\mathbb C)$, or $M_{m - r, p - r}(\mathbb C)$ which is embedded, via a map denoted by $\iti$, as the body of the kite. This provides a normal section to the stratum $\gS_{\ell}$ of matrices of corank $\ell$ through $A$. We refer to this affine subspace as a {\em restricted kite space}. We let $\cD_{*}^{(*) \prime}$ denote the variety of singular matrices in $M^{\prime}$. Then, in a sufficiently small tubular neighborhood $T$ of $\gS_{\ell}$ we obtain that $\cD_{*}^{(*)} \cap T$ is diffeomorphic to $\gS_{\ell} \times (\cD_{*}^{(*) \prime} \cap B_{\gevar})$ for sufficiently small $\gevar > 0$. We refer to $L(\cD_{*}^{(*) \prime})$ as the {\em local link of the stratum $\gS_{\ell}$}. \par Then $\iti$ induces an inclusion $\iti : \cD_{*}^{(*) \prime} \cap B_{\gevar} \subset \cD_{*}^{(*)}$, and correspondingly an inclusion of the complements. There is the induced map $\iti^*$ on cohomology which sends the exterior algebra giving the cohomology of $M \backslash \cD_{*}^{(*)}$ to the corresponding algebra (in the $m \times p$ case, \eqref{Eqn8.7a}). This is a consequence of the proofs of Theorems \ref{Thm8.7} and \ref{Thm8.7a}. Using this we have consistent monomial bases for the cohomology of the complement.
This allows us to define consistent Kronecker pairings giving a well-defined relative Gysin homomorphism (as defined in \cite[(1.10)]{D6}). There is the following relation between the cohomology of the local link $L(\cD_{*}^{(*) \prime})$ and the link $L(\cD_{*}^{(*)})$. \begin{Corollary} \label{Cor8.1a} The relative Gysin homomorphism $$\iti_* : H^*(L(\cD_{*}^{(*) \prime}); \bk) \to H^{*+ q}(L(\cD_{*}^{(*)}); \bk)$$ increases degree by $q = \dim_{\mathbb R} M - \dim_{\mathbb R} M^{\prime}$, which equals: for the $m \times m$ cases, $2(m^2 - \ell^2)$ for general matrices, $(m - \ell)(m + \ell +1)$ for symmetric matrices, and $(m - \ell)(m + \ell -1)$ for skew-symmetric matrices (with $m$ and $\ell$ even); and for $m \times p$ matrices, $2(p^2 - \ell^2)$. \par It is injective and sends the Alexander dual of the Kronecker dual of a class corresponding to a monomial in the algebra \eqref{Eqn8.7a} to the corresponding Alexander dual of the Kronecker dual of the image of that class as an element of the cohomology of the complement $M \backslash \cD_{*}^{(*)}$. \end{Corollary} \begin{proof} By the above remarks, the relative Gysin homomorphism is defined. If $\iti$ denotes the inclusion of the restricted kite space into the space of matrices, then the induced map on cohomology of the complements, denoted $\iti^*$ (with coefficients $\bk$ a field of characteristic $0$), is surjective. We use the identification of the monomials $e_{\bm}$ with the Kronecker duals denoted $e_{\bm}^*$. Then, the induced homomorphism on homology, $\iti_*$, is the dual of $\iti^*$; thus $\iti_*$ is injective. When this is composed with Alexander isomorphisms (via the Kronecker pairings), it remains injective. The corresponding cohomology classes of the links resulting from applying Alexander duality have degrees raised by the difference $\dim_{\mathbb R} M - \dim_{\mathbb R} M^{\prime}$ for each of the four types. These are then computed to give the stated degree shifts. \end{proof} \par We also mention that there is an analogous version of this corollary for the case of the local link for a stratum $\gS_{\ell^{\prime}}$ included in the local link of a stratum $\gS_{\ell}$ for $\ell^{\prime} < \ell$. As an example we consider \begin{Example} \label{Ex8.2a} For the stratum $\gS_2 \subset Sym_5(\mathbb C)$, the local link has reduced cohomology group isomorphic to $\widetilde {\gL^*\bk}\langle e_1, e_5\rangle[4]$. However, the effect of Alexander duality on elements does not correspond to a shift. The reduced cohomology of the local link complement is spanned by the generators $\{ e_1, e_5, e_1e_5\}$ with Kronecker duals denoted $\{ e_{1\, *}, e_{5\, *}, (e_1e_{5})_*\}$. Then, the corresponding Alexander dual generators for the reduced cohomology of the local link, denoted $\{\widetilde{e_1}, \widetilde{e_5}, \widetilde{e_1e_5}\}$, have degrees in cohomology $9, 5, 4$ in that order. Note that under the shift representation, $\widetilde{e_1e_5}$ corresponds to the shift of $1$. \par Also, the link $L(\cD_5^{(sy)})$ has cohomology group $\widetilde {\gL^*\bk}\langle e_1, e_5, e_9\rangle[13]$. Then, as $\iti^*$ is surjective, $\iti_*$ maps the Kronecker duals $\{e_{1\, *}, e_{5\, *}, (e_1e_{5})_*\}$ to homology classes of the same degrees for the complement $Sym_5(\mathbb C) \backslash \cD_5^{(sy)}$.
Then, these elements correspond under Alexander duality to cohomology classes $\{\widetilde{e_1^{\prime}}, \widetilde{e_5^{\prime}}, \widetilde{(e_1e_5)^{\prime}}\}$ for the link $L(\cD_5^{(sy)})$, having degrees $27, 23, 22$. We see that the increase in degree is $2 (15 -6) = 18$ as asserted for the relative Gysin homomorphism. \par However, one key point to note is that for the cohomology group of the link represented by the truncated and shifted exterior algebras, the relative Gysin homomorphism does not map the shifted classes to the corresponding shifted classes. For example, for the local link of $\gS_2 \subset Sym_5(\mathbb C)$, $e_1$ corresponds to $\tilde{e_1}$ which maps to a cohomology class of degree $27$, while for the link $L(\cD_5^{(sy)})$, $e_1$ corresponds via the shift representation to a cohomology class in the link of degree $14$. \end{Example} \par \begin{Remark} \label{Rem8.12} For $m \times p$ finitely $\cK_M$-determined matrix singularities $f_0 : \mathbb C^n, 0 \to M_{m, p}(\mathbb C), 0$, if $n < | 2 (m-p +2)|$, then by transversality, $\cV_0$ has an isolated singularity and so a stabilization provides a Milnor fiber as a particular smoothing. As yet there does not appear to be a mechanism for showing how this Milnor fiber inherits topology from $M_{m, p}(\mathbb C) \backslash \cD_{m, p}$. However, for $(m, p) = (3, 2)$, Fr\"{u}hbis-Kr\"{u}ger and Zach \cite{F}, \cite{Z}, \cite{FZ} have shown that for the resulting Cohen-Macaulay $3$-fold singularities in $\mathbb C^5$, the Milnor fiber has Betti number $b_2 = 1$, allowing the formula of Damon-Pike \cite[\S 8]{DP} to yield an algebraic formula for $b_3$. It remains to be understood how this extends to larger $(m, p)$. \end{Remark} \end{document}
\begin{definition}[Definition:Proper Mapping] Let $X$ and $Y$ be topological spaces. A mapping $f: X \to Y$ is '''proper''' {{iff}} for every compact subset $K \subset Y$, its preimage $f^{-1} \left({K}\right)$ is also compact. \end{definition}
\begin{document} \title[Energy $\mu$-Calculus]{\texorpdfstring{Energy $\mu$-Calculus: Symbolic Fixed-Point Algorithms for $\omega$-Regular Energy Games}{Energy mu-Calculus: Symbolic Fixed-Point Algorithms for omega-Regular Energy Games}} \author[G.~Amram]{Gal Amram} \address{Tel Aviv University, Tel Aviv, Israel} \email{[email protected]} \author[S.~Maoz]{Shahar Maoz} \address{Tel Aviv University, Tel Aviv, Israel} \email{[email protected]} \author[O.~Pistiner]{Or Pistiner} \address{Tel Aviv University, Tel Aviv, Israel} \email{[email protected]} \author[J.O.~Ringert]{Jan Oliver Ringert} \address{University of Leicester, Leicester, United Kingdom} \email{[email protected]} \begin{abstract} $\omega$-regular energy games, which are weighted two-player turn-based games with the quantitative objective to keep the energy levels non-negative, have been used in the context of verification and synthesis. The logic of modal $\mu$-calculus, when applied over game graphs with $\omega$-regular winning conditions, allows defining symbolic algorithms in the form of fixed-point formulas for computing the sets of winning states. In this paper, we introduce energy $\mu$-calculus, a multi-valued extension of the $\mu$-calculus that serves as a symbolic framework for solving $\omega$-regular energy games. Energy $\mu$-calculus enables the seamless reuse of existing, well-known symbolic $\mu$-calculus algorithms for $\omega$-regular games, to solve their corresponding energy augmented variants. We define the syntax and semantics of energy $\mu$-calculus over symbolic representations of the game graphs, and show how to use it to solve the decision and the minimum credit problems for $\omega$-regular energy games, for both bounded and unbounded energy level accumulations. \end{abstract} \maketitle \section{Introduction} \emph{Energy games} have been introduced by Chakrabarti et al.~\cite{ChakrabartiAHS03} to model components' energy interfaces, specifically the requirement to avoid the exhaustion of an initially available resource, e.g., disk space or battery capacity. Since their inception, they have been studied extensively in the context of verification and synthesis, e.g.,~\cite{BohyBFR13,BouyerFLMS08,BrimC12,BrimCDGR11,ChakrabartiAHS03,ChatterjeeD12,ChatterjeeRR14,VelnerC0HRR15}. Energy games are weighted two-player turn-based games with the quantitative objective to keep the \emph{energy level}, the accumulated sum of an initial credit and weights of transitions traversed thus far, non-negative in each prefix of a play. Energy games induce a \emph{decision problem} that checks for the existence of a finite initial credit sufficient for winning, and an optimization problem for the \emph{minimum initial credit}. The work~\cite{BouyerFLMS08} has introduced an \emph{upper bound} $c$ that specifies the maximal energy level allowed to be accumulated throughout a play. In our work, we consider both the \emph{unbounded} energy objective of~\cite{BouyerFLMS08,ChakrabartiAHS03} where $c=+\infty$, and the \emph{bounded} energy objective of~\cite{BouyerFLMS08} where $c \in \mathbb{N}$ is finite and whenever the energy level exceeds $c$, it is truncated to $c$. Energy games may be viewed as safety games with an additional quantitative objective. Nevertheless, they have also been generalized to \emph{$\omega$-regular games with energy objectives}~\cite{ChatterjeeD12,ChatterjeeRR14}, which are the focus of our work. We consider \emph{symbolic} algorithms for solving games, as opposed to \emph{explicit} ones. 
Symbolic algorithms operate on an implicit representation of the underlying game graph and manipulate \emph{sets} of game states, whereas explicit algorithms operate on the explicit game graph representation and manipulate individual states. Symbolic algorithms have been shown to be scalable and practical for solving $\omega$-regular games, e.g.,~\cite{AlurMN05,ChatterjeeDHL17,ChatterjeeDHS18,JacobsBBEHKPRRS17,SanchezWW18,StasioMV18}. \emph{Modal $\mu$-calculus}~\cite{Kozen} is an extension of propositional logic with modal operators and least and greatest fixed-point operators. Rather than the classical version of~\cite{Kozen}, we consider the \emph{game $\mu$-calculus}~\cite{EmersonJ91} and its application over finite symbolic game structures~\cite{BJP+12} to solve games with $\omega$-regular winning conditions (see, e.g.,~\cite{AlfaroHM01,BJP+12,BruseFL14,KonighoferHB13,RaskinCDH07,Walukiewicz1996}). For every $\omega$-regular condition, $\varphi$, there is a (game) $\mu$-calculus formula that defines a symbolic fixed-point algorithm for computing the set of states that win $\varphi$~\cite{AlfaroHM01}. Modal $\mu$-calculus has been extended to a multi-valued or quantitative semantics where the value of a formula in a state is from some lattice, e.g.,~\cite{deAlfaro,AlfaroHM01,AlfaroM04,BrunsG04,Fischer2010,GrumbergLLS05,RaskinCDH07}. \emph{We summarize the contributions of our work as follows.} \begin{enumerate} \item \textbf{Energy $\mu$-calculus as a symbolic framework for solving $\omega$-regular energy games.} We introduce \emph{energy $\mu$-calculus}, a multi-valued extension of the game $\mu$-calculus~\cite{EmersonJ91} over symbolic game structures~\cite{BJP+12}. Energy $\mu$-calculus serves as a framework for solving both the decision and the minimum credit problems with a bounded energy level accumulation. While a game $\mu$-calculus formula characterizes a set of states, an energy $\mu$-calculus formula is interpreted w.r.t. an upper bound $c \in \mathbb{N}$ and returns an \emph{energy function} that assigns a value in $\{0,\dots,c\} \cup\{+\infty \}$ to each state of the underlying game. Every $\omega$-regular condition is solved by evaluating a game $\mu$-calculus formula~\cite{AlfaroHM01}, and we show that this formula can be seamlessly reused as an energy $\mu$-calculus formula to solve the corresponding energy augmented game. \item \textbf{Computation of a sufficient upper bound.} We bridge the gap between bounded and unbounded energy level accumulations by showing that every $\omega$-regular winning condition admits a \emph{sufficiently large upper bound} on the energy level accumulation. That is, we show that if the system player wins with an unbounded energy level accumulation, then it also wins w.r.t. a finite upper bound with no need to increase the initial credit. Specifically, if the $\mu$-calculus formula $\psi$ solves the $\omega$-regular game, then the system wins w.r.t. the bound $(d+1)((N^2+N)m-1)K$, where $N$ is the size of the state space, $K$ is the maximal absolute weight, and $m$ and $d$ are the length and alternation depth of $\psi$, respectively. Through this sufficient bound, energy $\mu$-calculus also solves the decision and the minimum credit problems with an unbounded energy level accumulation. 
\end{enumerate} \noindent On the way to achieving the above results, we have obtained two additional contributions for \emph{energy parity games}~\cite{ChatterjeeD12}: \begin{enumerate}\setcounter{enumi}{2} \item \textbf{Sufficient upper bound.} We show that if $\text{player}_0$ wins from a state $s$ in an energy parity game with an unbounded energy level accumulation, then she can also win from $s$ w.r.t. the energy upper bound $d(n-1)K$ without increasing her initial credit, where $d$ is the number of different priorities, $n$ is the number of states, and $K$ is the maximal absolute weight. \item\textbf{Strategy complexity.} We show that if $\text{player}_0$ wins from a state $s$ in an energy parity game, then she has a strategy that wins from $s$ with a memory of size $d(n-1)K+1$ and without requiring an increase in the initial credit. This slightly improves the best known memory upper bound of $dnK$~\cite{ChatterjeeD12}. \end{enumerate} \noindent To solve energy games with $\omega$-regular winning conditions, researchers have suggested applying a reduction to energy games with parity winning conditions; see, e.g.,~\cite{ChatterjeeD12,ChatterjeeRR14}. In contrast, our approach uses a game $\mu$-calculus formula to describe the set of states that win the $\omega$-regular condition~\cite{AlfaroHM01}. Then, it evaluates this formula w.r.t. the semantics of energy $\mu$-calculus and obtains the energy function that maps each state to its minimal winning initial credit, and to $+\infty$ if there is no such initial credit. We identify two appealing key attributes of our approach. First, our approach enables the use of existing results from the literature. Specifically, Thm.~\ref{thm:sysEngMuCalcCorrectness} enables the seamless transformation of well-known $\mu$-calculus formulas that solve games with $\omega$-regular conditions $\varphi$, e.g., safety, reachability, B\"{u}chi, co-B\"{u}chi, GR(1)~\cite{BJP+12}, counter-GR(1)~\cite{KonighoferHB13}, parity~\cite{BruseFL14,EmersonJ91}, etc., into solvers of corresponding $\varphi$-energy games. Second, the aforementioned transformation additionally results in algorithms that are symbolic, in the sense that they manipulate energy functions over symbolic weighted game structures. Such symbolic algorithms can be implemented using, e.g., Algebraic Decision Diagrams~\cite{BaharFGHMPS97,FujitaMY97}, as was done in~\cite{MaozPR16}. To illustrate these key attributes, we consider the following well-known $\mu$-calculus formula that solves B\"{u}chi games with target states $J$~\cite{2001automata,Thomas95}: \begin{align}\label{eq:buchiExample:muCalcFormula} \psi_{B_{J}} &= \nu Z (\mu Y (J \wedge \circlediamond Z) \vee \circlediamond Y) \end{align} In such a game, the system wins from a state if it can enforce infinitely many visits to $J$.
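To make the fixed-point computation concrete, the following is a minimal, explicit-state sketch in Python of evaluating Eq.~\ref{eq:buchiExample:muCalcFormula} by nested fixed-point iteration. It is illustrative only: the algorithms we discuss below are symbolic, whereas here sets of states are plain Python sets, and all names (\texttt{cpre}, \texttt{buchi\_win}, the game encoding) are ours rather than taken from any of the cited works.

\begin{verbatim}
# Explicit-state sketch of  psi = nu Z. mu Y. ((J & Cpre(Z)) | Cpre(Y)).
# env_moves(s) yields the valid environment inputs at s; sys_moves(s, x)
# yields the possible successor states for input x.

def cpre(states, env_moves, sys_moves, target):
    """States from which, for every valid environment input, the system
    has some valid output leading into `target` (an explicit Cpre_sys).
    A state where the environment is deadlocked belongs to every cpre."""
    return {s for s in states
            if all(any(t in target for t in sys_moves(s, x))
                   for x in env_moves(s))}

def buchi_win(states, env_moves, sys_moves, J):
    """Winning states for 'visit J infinitely often'."""
    Z = set(states)                  # nu: start from all states
    while True:
        cpre_Z = cpre(states, env_moves, sys_moves, Z)
        Y = set()                    # mu: start from the empty set
        while True:
            new_Y = (J & cpre_Z) | cpre(states, env_moves, sys_moves, Y)
            if new_Y == Y:
                break
            Y = new_Y
        if Y == Z:
            return Z
        Z = Y

# Toy usage: two states; the system can always move to state 1.
assert buchi_win({0, 1}, lambda s: [0, 1],
                 lambda s, x: {1}, J={1}) == {0, 1}
\end{verbatim}

The symbolic algorithms discussed below follow the same nesting but replace the explicit set manipulations with operations on assertions.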
Relying on Thm.~\ref{thm:sysEngMuCalcCorrectness}, we replace each occurrence of the modal operator $\circlediamond$ in Eq.~\ref{eq:buchiExample:muCalcFormula} with $\circlediamond_\op{E}$, and obtain the following energy $\mu$-calculus formula that solves B\"{u}chi-energy $J$-states games: \begin{align}\label{eq:buchiExample:energyMuCalcFormula} \psi_{B_{J}}^\op{E} &= \nu Z (\mu Y (J \wedge \circlediamond_{\op{E}} Z) \vee \circlediamond_{\op{E}} Y) \end{align} That is, Eq.~\ref{eq:buchiExample:energyMuCalcFormula} defines the energy function that maps each state to the minimal initial credit sufficient for the system to win the B\"{u}chi $J$-states condition while keeping the energy levels of all plays' prefixes non-negative. \begin{figure*} \caption{The symbolic fixed-point algorithms implementing Eq.~\ref{eq:buchiExample:muCalcFormula} for B\"{u}chi games and Eq.~\ref{eq:buchiExample:energyMuCalcFormula} for B\"{u}chi-energy games.} \label{alg:buchi} \label{alg:buchi:initZ} \label{alg:buchi:fixZ} \label{alg:buchi:recurrJ} \label{alg:buchi:fixY} \label{alg:buchi:leastY} \label{alg:buchi:fixYEnd} \label{alg:buchi:ZAssigned} \label{alg:buchi:fixZEnd} \label{alg:buchiEnergy} \label{alg:buchiEnergy:fixZ} \label{alg:buchiEnergy:recurrJ} \label{alg:buchiEnergy:leastY} \end{figure*} Alg.~\ref{alg:buchi} is a symbolic fixed-point algorithm that implements Eq.~\ref{eq:buchiExample:muCalcFormula} according to Def.~\ref{def:prop_mu_calculus_semantics}, which defines the game $\mu$-calculus' semantics following~\cite{BJP+12}. Likewise, Alg.~\ref{alg:buchiEnergy} is a symbolic fixed-point algorithm that implements Eq.~\ref{eq:buchiExample:energyMuCalcFormula} according to Def.~\ref{def:sysEngMuCalcSemantics}, which defines the new energy $\mu$-calculus' semantics. Alg.~\ref{alg:buchi} uses the controllable predecessor operator $\mathit{Cpre}_{{\scriptstyle\mathit{sys}}}$ that implements $\circlediamond$, whereas Alg.~\ref{alg:buchiEnergy} uses the energy controllable predecessor operator $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}$ that implements $\circlediamond_\op{E}$. $\mathit{Cpre}_{{\scriptstyle\mathit{sys}}}$ is defined in Sect.~\ref{sec:propMuCalculus}; $\mathit{ECpre}_{{\scriptstyle\mathit{sys}}}$ is defined in Def.~\ref{def:ECpre}. We prove in Sect.~\ref{sec:EngMuCalc} and Sect.~\ref{sec:solvingBEGames} that our approach solves both the decision and the minimum credit problems with a bounded energy level accumulation. Moreover, we augment energy $\mu$-calculus with \emph{negation} to enable $\omega$-regular energy games to be solved via their \emph{dual} games. That is, we show that if a game $\mu$-calculus formula $\psi$ solves the $\omega$-regular game, the energy $\mu$-calculus formula $\neg \psi^\op{E}$ dually assigns each state the \emph{maximal} initial credit for which the adversary, namely the environment, wins. We prove the results of Sect.~\ref{sec:EngMuCalc} by using a reduction to $\omega$-regular games, which encodes the bounded energy objective as safety constraints, following~\cite{BrimCDGR11}. Importantly, however, our approach also solves the decision and the minimum credit problems w.r.t. the unbounded energy objective from~\cite{ChakrabartiAHS03}, namely when the upper bound on the energy levels is set to $+\infty$. We obtain this key result in Sect.~\ref{sec:sufficientbound} by providing answers to the following three questions for all $\omega$-regular winning conditions: \begin{enumerate} \item \label{boundQuestion:1}Is there a state that does not win w.r.t. all finite upper bounds but wins w.r.t. the bound $+\infty$? (No) \item \label{boundQuestion:2}Is there a sufficient finite upper bound whose increase would not introduce additional winning states?
(Yes) \item \label{boundQuestion:3}Is there such a sufficient bound that also does not require an increase in the initial credit to win? (Yes) \end{enumerate} \noindent We answer the above questions by showing how to compute a sufficiently large upper bound for any $\omega$-regular winning condition. Most importantly, this complete bound enables the use of the results obtained in Sect.~\ref{sec:EngMuCalc} and Sect.~\ref{sec:solvingBEGames}, also in the case of an unbounded energy level accumulation. \subsection{Related Work}\label{sec:related} \subsubsection{Energy Games}\label{sec:related:energyGames} Energy games were introduced in~\cite{ChakrabartiAHS03}. Bouyer et al.~\cite{BouyerFLMS08} further studied these games, presented fixed-point solutions, and showed that these games are log-space equivalent to mean-payoff games~\cite{EM79}. Brim et al.~\cite{BrimC12,BrimCDGR11} presented strategy improvement and improved fixed-point algorithms, both of which are explicit, for energy and mean-payoff games. The application of energy $\mu$-calculus to the $\mu$-calculus formula ${\psi}={\nu X(\circlediamond X)}$, which solves safety games with the winning condition $\op{G}(\text{{\small{\op{true}}}})$, results in the symbolic fixed-point algorithm ${\psi^\op{E}} = {\nu X(\circlediamond_{\op{E}} X)}$ for energy games. Interestingly, $\psi^\op{E}$ essentially prescribes the algorithm that was described in~\cite{BouyerFLMS08,BrimCDGR11,ChakrabartiAHS03}. Thus, the algorithms of~\cite{BouyerFLMS08,BrimCDGR11,ChakrabartiAHS03} can be seen as a special case of our results. Chatterjee et al.~\cite{ChatterjeeD12} have studied $\omega$-regular energy games through energy parity games. They have shown that the decision problem is in $\text{NP} \cap \text{coNP}$ and presented a recursive algorithm in exponential time. The work~\cite{ChatterjeeD12} has also shown that winning strategies with a finite memory of an exponential size are sufficient. We slightly improve the memory upper bound obtained in~\cite{ChatterjeeD12} (see Sect.~\ref{sec:sufficientbound}, Cor.~\ref{cor:parityEnergyMemory}). Moreover, it was shown in~\cite{ChatterjeeD12} that the decision problem of mean-payoff parity games~\cite{ChatterjeeHJ05} can be reduced to that of energy parity games. Consequently, energy $\mu$-calculus can also solve the decision problem of $\omega$-regular mean-payoff games by applying the reduction of~\cite{ChatterjeeD12} and using our results. The work~\cite{BouyerFLMS08} has introduced bounded variants of energy games. Among these variants is the lower-weak-upper-bound problem, which we refer to as the bounded energy objective. The work~\cite{BouyerFLMS08} has also established a sufficiently large upper bound that enables the solution of (unbounded) energy games. This bound has been used in~\cite{BrimC12} to solve energy games. Moreover, since energy games may be seen as energy parity games~\cite{ChatterjeeD12} with a single priority, we in fact obtain the sufficient bound of~\cite{BouyerFLMS08} by invoking Lem.~\ref{lem:lemma-6-revised} for the special case where $d=1$. To the best of our knowledge, our work is the first to generalize~\cite{BouyerFLMS08} by introducing sufficient bounds that enable the solution of energy games with any $\omega$-regular winning condition. Velner et al.~\cite{VelnerC0HRR15} have studied the complexity of multi-dimensional energy and mean-payoff games where the weights are integer vectors.
They have shown that the decision problem of multi-dimensional energy games is $\text{coNP}$-complete and that finite-memory strategies are sufficient for winning. Fahrenberg et al.~\cite{FahrenbergJLS11} have studied variants of multi-dimensional energy games with both lower and upper bounds. Finally, Chatterjee et al.~\cite{ChatterjeeRR14} have established that strategies with an exponential memory are necessary and sufficient for multi-dimensional energy parity games. Furthermore, they have presented an exponential fixed-point algorithm to compute such strategies. \subsubsection{The $\mu$-Calculus and Symbolic Algorithms}\label{sec:related:muCalculus} Besides model checking (see, e.g.,~\cite{bradfieldmu}), the modal $\mu$-calculus has been used to solve $\omega$-regular games (e.g.,~\cite{BJP+12,BruseFL14,EmersonJ91,KonighoferHB13,Walukiewicz1996}), as well as to synthesize winning strategies (e.g., in GR(1)~\cite{BJP+12} and parity~\cite{BruseFL14} games). Multi-valued or quantitative extensions of the $\mu$-calculus have been suggested for verification of multi-valued or quantitative transition systems (e.g.,~\cite{deAlfaro,BrunsG04,Fischer2010,GrumbergLLS05,GrumbergLLS07}). Nevertheless, such extensions have also been introduced to solve, e.g., probabilistic and concurrent games~\cite{AlfaroM04}, and games with imperfect information~\cite{RaskinCDH07}. The translation of $\omega$-regular conditions to the $\mu$-calculus for the purpose of solving the corresponding games has been studied in~\cite{AlfaroHM01} w.r.t. both Boolean and quantitative semantics. We apply this approach to energy games. The semantics of energy $\mu$-calculus exploits the monotonicity of the energy objective as it maps states to the minimal winning initial credits. It is inspired by the antichain representation used by the algorithm of~\cite{ChatterjeeRR14}, which solves multi-dimensional energy games. Essentially, symbolic antichain representations exploit monotonicity properties to succinctly represent the sets that the symbolic algorithm manipulates. The use of antichains to obtain performance improvements has been implemented for various applications, such as model checking (e.g.,~\cite{DoyenR10,WulfDHR06,WulfDMR08}), games with imperfect information~\cite{BerwangerCWDH10,RaskinCDH07}, and LTL synthesis (e.g.,~\cite{BohyBFR13,FiliotJR11}). The semantics of energy $\mu$-calculus prescribes symbolic algorithms that manipulate energy functions. Therefore, implementations of the energy $\mu$-calculus should be based on symbolic data structures, and in particular, on those that encode multi-valued functions. One notable such data structure is the Algebraic Decision Diagram (ADD)~\cite{BaharFGHMPS97,FujitaMY97}, which generalizes Binary Decision Diagrams (BDDs)~\cite{Bryant86}. The use of ADDs to encode real-valued matrices for the analysis of probabilistic models, such as Markov chains, has been studied extensively, e.g.,~\cite{AlfaroKNPS00,BaierCHKR97,HermannsKNPS03,KwiatkowskaNP04,KwiatkowskaNP11}. However, ADDs have only recently been studied in the context of game solving. The work~\cite{MaozPR16} has presented an ADD-based, symbolic fixed-point algorithm for energy games and evaluated its performance. In fact, this algorithm implements the energy $\mu$-calculus formula ${\nu X(\circlediamond_{\op{E}} X)}$ that we have considered in Sect.~\ref{sec:related:energyGames}. The evaluation in~\cite{MaozPR16} showed that the ADD-based algorithm outperformed an alternative, BDD-based algorithm in terms of scalability.
Moreover, the work~\cite{BustanKV04} presented a symbolic ADD-based version of the well-known small progress measures (explicit) algorithm~\cite{Jurdzinski00} for parity games. The algorithm of~\cite{BustanKV04} has recently been implemented and evaluated in~\cite{StasioMV18}. \section{Preliminaries} \label{sec:preliminaries} Throughout this paper, for $a, b \in \mathbb{Z}\cup\{+\infty\}$, $[a,b]$ denotes the set $\{z \in \mathbb{Z}\cup\{+\infty\}\mid a \leq z \leq b\}$. For a set of Boolean variables $\mathcal V$, a \emph{state}, $s\in 2^\mathcal V$, is a truth assignment to $\mathcal V$, an \emph{assertion} $\phi$ is a propositional formula over $\mathcal V$, $s\models \phi$ denotes that $s$ satisfies $\phi$, and $\mathcal V'$ denotes the set $\{v'\mid v\in\mathcal V\}$ of \emph{primed} variables. We denote by $p(s)\in 2^\mathcal{V'}$ the \emph{primed version} of the state $s\in{2^\mathcal{V}}$, obtained by replacing each $v\in{s}$ with $v'\in \mathcal{V}'$. For $\mathcal V=\bigcup_{i=1}^k\mathcal V_i$ and truth assignments $s_i\in 2^{\mathcal V_i}$, we use $(s_1,\ldots,s_k)$ as an abbreviation for $s_1\cup{\ldots}\cup{s_k}$. Thus, we may replace expressions, e.g., $s\in 2^\mathcal V$, $s\models \varphi$, $p(s)$, and $f(s)$ with $(s_1,\dots s_k)\in 2^\mathcal V$, $(s_1,\dots,s_k)\models \varphi$, $p(s_1,\dots,s_k)$, and $f(s_1,\dots,s_k)$, respectively. We denote by $s|_{\mathcal{Z}}$ the \emph{projection} of $s\in{2^{\mathcal{V}}}$ to $\mathcal{Z}\subseteq{\mathcal{V}}$, i.e., $s|_{\mathcal{Z}}:= s \cap \mathcal{Z}$. \subsection{Games, Game Structures, and Strategies}\label{sec:games} We consider an infinite game played between an environment player ($\mathit{env}$) and a system player ($\mathit{sys}$) on a finite directed graph as they move along its transitions. In each round of the game, the environment plays first by choosing a valid input, and the system plays second by choosing a valid output. The goal of the system is to satisfy the winning condition, regardless of the actions of the environment. Formally, a game is symbolically represented by a \emph{game structure} (GS) $G := \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \\\rho^e,\rho^s, \varphi \rangle$~\cite{BJP+12,PitermanPS06} that consists of the following components: \begin{itemize} \item $\mathcal{V} = \{v_1, \ldots, v_n\}$: A finite set of Boolean variables. \item $\mathcal{X}\subseteq{\mathcal{V}}$: A set of \emph{input variables} controlled by the \emph{environment} player ($\mathit{env}$). \item $\mathcal{Y}={\mathcal{V}\setminus{\mathcal{X}}}$: A set of \emph{output variables} controlled by the \emph{system} player ($\mathit{sys}$). \item $\rho^e$: An assertion over $\mathcal{V}\cup \mathcal{X'}$ that defines the environment's transitions. The environment uses $\rho^e$ to relate a state over $\mathcal{V}$ to \emph{possible next inputs} over $\mathcal X'$. \item $\rho^s$: An assertion over $\mathcal{V}\cup \mathcal V' = \mathcal{V}\cup \mathcal{X'}\cup \mathcal{Y'}$ that defines the system's transitions. The system uses $\rho^s$ to relate a state over $\mathcal{V}$ and an input over $\mathcal{X}'$ to \emph{possible next outputs} over $\mathcal Y'$. \item $\varphi$: The winning condition of the system. \end{itemize} \noindent We consider \emph{$\omega$-regular GSs}, i.e., GSs with $\omega$-regular winning conditions $\varphi$. A state $t$ is a \emph{successor} of $s$ if $(s, p(t)) \models \rho^e \wedge \rho^s$. 
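As an illustration of these components, the following Python sketch gives one possible explicit encoding of a GS, with the assertions modeled as predicates over states rather than as symbolic formulas; the toy relations $\rho^e$, $\rho^s$ and all names here are ours, purely for exposition.

\begin{verbatim}
# Toy explicit encoding of a game structure over V = {v1, v2}.
from itertools import combinations

V = {"v1", "v2"}                # Boolean variables
X = {"v1"}                      # input variables (environment)
Y = V - X                       # output variables (system)

def assignments(vs):
    """All truth assignments over vs, as frozensets of true variables."""
    vs = sorted(vs)
    return [frozenset(c) for r in range(len(vs) + 1)
            for c in combinations(vs, r)]

# Toy transition assertions: env may set v1' only if v2 holds now;
# sys must copy the new input v1' into v2'.
def rho_e(s, x_next):
    return ("v1" in x_next) <= ("v2" in s)

def rho_s(s, x_next, y_next):
    return ("v2" in y_next) == ("v1" in x_next)

def successors(s):
    """All t with (s, p(t)) |= rho_e & rho_s."""
    return [x | y for x in assignments(X) if rho_e(s, x)
                  for y in assignments(Y) if rho_s(s, x, y)]

assert successors(frozenset()) == [frozenset()]
\end{verbatim}

A symbolic implementation would instead represent $\rho^e$ and $\rho^s$ as Boolean functions over $\mathcal{V}\cup\mathcal{V'}$, e.g., as BDDs, rather than enumerating assignments.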
The rounds of a game on $G$ form a sequence of states $\sigma=s_0s_1\ldots$ called a \emph{play}, which satisfies the following conditions: (1) \emph{Consecution}: for each $i\geq0$, $s_{i+1}$ is a successor of $s_i$. (2) \emph{Maximality}: if $\sigma$ is finite, then either it ends with a \emph{deadlock for the environment}: $\sigma=s_0\ldots s_k$, and there is no input value $s_{\mathcal{X}}\in{2^\mathcal{X}}$ such that $(s_k, p(s_{\mathcal{X}}))\models {\rho^e}$, or it ends with a \emph{deadlock for the system}: $\sigma=s_0\ldots s_k s_{\mathcal{X}}$ where $s_{\mathcal{X}}\in{2^\mathcal{X}}$, $(s_k, p(s_{\mathcal{X}}))\models \rho^e$, and there is no output $s_{\mathcal{Y}}\in{2^\mathcal{Y}}$ such that $(s_k, p(s_{\mathcal{X}}), p(s_{\mathcal{Y}}))\models {\rho^s}$. We denote by $\textsf{Plays}(G)$ the set of all $G$ plays. A play $\sigma = s_{0}\ldots \in \textsf{Plays}(G)$ is \emph{from} $S \subseteq 2^{\mathcal{V}}$ if $s_{0}\in S$. A play $\sigma\in\textsf{Plays}(G)$ \emph{wins for the system} if either $\sigma$ is finite and ends with a deadlock for the environment, or $\sigma$ is infinite and satisfies the winning condition $\varphi$. We denote by $\textsf{Plays}(G,\varphi)$ the set of all plays that win for the system. If $\sigma\not\in \textsf{Plays}(G, \varphi)$, we say that $\sigma$ wins for the environment. A \emph{strategy} for the system player is a partial function $g_{{\scriptstyle\mathit{sys}}}: (2^\mathcal{V})^{+}2^{\mathcal{X}}\rightarrow 2^{\mathcal{Y}}$. It satisfies that for every prefix $\sigma=s_0\ldots s_k \in (2^\mathcal{V})^{+}$ and $s_{\mathcal{X}}\in{2^\mathcal{X}}$ such that $(s_k, p(s_{\mathcal{X}}))\models \rho^e$, if $g_{{\scriptstyle\mathit{sys}}}$ is defined for $\sigma s_{\mathcal{X}}$, then $(s_k, p(s_{\mathcal{X}}), p(g_{{\scriptstyle\mathit{sys}}}(\sigma s_{\mathcal{X}})))\models {\rho^s}$. Let $g_{{\scriptstyle\mathit{sys}}}$ be a strategy for the system, and $\sigma = s_{0}s_{1}\ldots \in \textsf{Plays}(G)$. The prefix $s_{0}\ldots s_{k}$ of $\sigma$ is \emph{consistent} with $g_{{\scriptstyle\mathit{sys}}}$ if for each $0 \leq i < k$, $g_{{\scriptstyle\mathit{sys}}}$ is defined at $s_0\ldots s_i s_{i+1}|_{\mathcal{X}}$, and $g_{{\scriptstyle\mathit{sys}}}(s_0\ldots s_i s_{i+1}|_{\mathcal{X}}) = s_{i+1}|_{\mathcal{Y}}$. We say that $\sigma$ is consistent with $g_{{\scriptstyle\mathit{sys}}}$ if all of its prefixes are consistent with $g_{{\scriptstyle\mathit{sys}}}$. The strategy $g_{{\scriptstyle\mathit{sys}}}$ is \emph{from $S \subseteq 2^{\mathcal{V}}$} if it is defined (1) for every prefix $s_{0}\ldots s_{j} \in (2^\mathcal{V})^{+}$ of a play from $S$, consistent with $g_{{\scriptstyle\mathit{sys}}}$, and (2) for every input $s_{\mathcal{X}} \in 2^{\mathcal{X}}$ such that $(s_j,p(s_{\mathcal{X}}))\models \rho^e$, and $(s_j, p(s_{\mathcal{X}}))$ is not a deadlock for the system. In case $S= \{s\}$ for $s \in 2^{\mathcal{V}}$, we will simply write $s$. We dually define strategies and consistent plays for the environment player. A strategy $g_{\alpha}$ \emph{wins} for player $\alpha \in \ensuremath{\{\mathit{env},\mathit{sys}\}}$ from $s\in 2^{\mathcal{V}}$, if it is a strategy for $\alpha$ from $s$, and all plays from $s$ that are consistent with $g_{\alpha}$ win for $\alpha$. The assertion $W_{\alpha}$ describes the set of \emph{winning states}, i.e., from which there exists a winning strategy for player $\alpha$. We may use the assertion $W_{\alpha}$ interchangeably with the set $\{s\in 2^{\mathcal{V}}\mid s\models W_{\alpha}\}$. 
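To make the consistency requirement above concrete, here is a small Python sketch (ours, purely illustrative) of checking that a finite play prefix is consistent with a system strategy:

\begin{verbatim}
# prefix: a list of states, each a frozenset of its true variables.
# g_sys: maps (history, next_input) to a next output, or None where
# the strategy is undefined. X and Y are the input/output variables.

def is_consistent(prefix, g_sys, X, Y):
    for i in range(len(prefix) - 1):
        history = tuple(prefix[: i + 1])
        x_next = prefix[i + 1] & X     # s_{i+1} projected to the inputs
        out = g_sys(history, x_next)
        if out is None or out != prefix[i + 1] & Y:
            return False               # undefined, or play deviates
    return True
\end{verbatim}

An infinite play is then consistent with $g_{{\scriptstyle\mathit{sys}}}$ iff every finite prefix passes this check.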
\subsection{Weighted Game Structures and Energy Objectives}\label{sec:combinedEngObj} We now define the energy objective. Our definition is based on both the \emph{lower-weak-upper-bound} and the \emph{lower-bound} problems introduced by Bouyer et al.~\cite{BouyerFLMS08}, while it uses a slightly different formulation adapted for GSs. A finite \emph{weighted game structure} (WGS) ${G^{w}} := \langle \mathcal{V}, \mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s \rangle$ is a GS extended with a partial \emph{weight function} $w^s:{2^{\mathcal{V}\cup \mathcal{V}'}} \rightarrow {\mathbb{Z}}$, defined for $\rho^s$ transitions. Intuitively, $w^s$ describes the amount by which system's actions reclaim or consume a constrained resource, which we refer to as \emph{energy}. Let $G^{w}$ be a WGS, $\sigma=s_{0}s_{1}\ldots\in\textsf{Plays}(G^{w})$ be a $G^{w}$ play, and $\sigma[0\ldots k]:=s_0\ldots s_k$ be a prefix of $\sigma$ for $k \in \mathbb{N}$. Given a (finite) \emph{upper bound} $c\in{\mathbb{N}}$, and an \emph{initial credit} $c_{0} \in [0,c]$, the \emph{energy level under} $c$ of $\sigma[0\ldots k]$, denoted by $\textsf{EL}_{c}(G^{w},c_0, \sigma[0\ldots k])$, is the sum of $c_{0}$ and the weights that $w^{s}$ assigns to the transitions of $\sigma[0\ldots k]$, such that whenever it exceeds the upper bound $c$, it is truncated to $c$. Formally, $\textsf{EL}_{c}(G^{w}, c_0, \sigma[0\ldots k]) := r_{k}$, where $r_0 := c_0$ and for each $i\in [1,k]$, $r_{i} := \min\lbrack c, r_{i-1}+w^s(s_{i-1},p(s_{i}))\rbrack$. In Sect.~\ref{sec:sufficientbound}, we also consider the (unbounded) energy level of $\sigma[0\ldots k]$, $\textsf{EL}_{+\infty}(G^{w},c_0, \sigma[0\ldots k])$ where $c = +\infty$ and $c_{0} \in \mathbb{N}$. Note that in this special case, it is simply the sum of $c_{0}$ and the weights along $\sigma[0\ldots k]$, i.e., $\textsf{EL}_{+\infty}(G^{w},c_0, \sigma[0\ldots k]) = c_{0} + \sum_{i=1}^{k} w^s(s_{i-1},p(s_{i}))$. A WGS $G^{w}$ represents a game with both \emph{qualitative} and \emph{quantitative} winning conditions. The former is specified by the $\omega$-regular condition $\varphi$, and the latter is the energy objective that requires to keep the energy levels of all plays' prefixes, non-negative. Formally, given an upper bound $c\in{\mathbb{N}}\cup \{+\infty\}$ and an initial credit $c_{0} \not = +\infty$, $c_{0} \leq c$, the \emph{energy objective w.r.t. $c$ for $c_{0}$} is $\textsf{E}_{c}(G^{w}, c_{0}):=\{\sigma \in \textsf{Plays}(G^{w})\mid\forall j \geq 0: \textsf{EL}_{c}(G^{w}, c_{0}, \sigma[0\ldots j])\geq{0}\}$, and we say that a play $\sigma \in \textsf{Plays}(G^{w})$ \emph{wins the energy objective w.r.t. $c$ for $c_{0}$} if $\sigma \in \textsf{E}_{c}(G^{w}, c_{0})$. Thus, $\sigma$ \emph{wins the $\varphi$-energy objective w.r.t. $c$ for $c_{0}$} iff $\sigma \in \textsf{Plays}(G^{w}, \varphi)\cap \textsf{E}_{c}(G^{w}, c_{0})$. Let $c\in{\mathbb{N}}\cup \{+\infty\}$ be an upper bound. A strategy $g$ for the system (resp. environment) \emph{wins} \emph{from} $s\in 2^{\mathcal{V}}$ \emph{w.r.t. $c$} \emph{for an initial credit} $c_{0} \not = +\infty$, $c_{0} \leq c$, if it is a strategy for the system (resp. environment) from $s$, and all plays $\sigma$ that are from $s$ and consistent with $g$, win (resp. do not win) the $\varphi$-energy objective w.r.t. $c$ for $c_{0}$. A state $s\in 2^{\mathcal{V}}$ \emph{wins} for the system (resp. environment) \emph{w.r.t. $c$ for an initial credit $c_{0}$}, if there exists a strategy that wins for the system (resp. 
environment) from $s$ w.r.t. $c$ for $c_{0}$. We say that $s\in 2^{\mathcal{V}}$ \emph{wins for the system w.r.t. $c$}, if it wins for the system w.r.t. $c$ for \emph{some} initial credit $c_{0}$. Otherwise, if $s$ wins for the environment w.r.t. $c$ for \emph{all} initial credits $c_{0} \not = +\infty$, $c_{0} \leq c$, we say that it wins for the environment. Accordingly, we denote by $W_{\alpha}(c)$ the set of states that win for player $\alpha \in \ensuremath{\{\mathit{env},\mathit{sys}\}}$ w.r.t. $c$. Further, note that the energy objective is \emph{monotone} w.r.t. both the initial credit and the bound. That is, for all upper bounds $c, c^{h} \in\mathbb{N}$ and initial credits $c_{0} \in [0,c]$, $c^{h}_{0} \in [0,c^{h}]$ such that $c \leq c^{h}$ and $c_{0} \leq c^{h}_{0}$: $\textsf{E}_{c}(G^{w}, c_{0}) \subseteq \textsf{E}_{c^{h}}(G^{w}, c^{h}_{0})$, $\textsf{E}_{+\infty}(G^{w}, c_{0}) \subseteq \textsf{E}_{+\infty}(G^{w}, c^{h}_{0})$ and $\textsf{E}_{c}(G^{w}, c_{0}) \subseteq \textsf{E}_{+\infty}(G^{w}, c_{0})$. This gives rise to considering, in Sect.~\ref{sec:solvingEnergyGames}, the \emph{optimal} (i.e., minimal) initial credit and a sufficiently large upper bound for which the system wins. \subsection{\texorpdfstring{$\mu$-Calculus Over Game Structures}{mu-Calculus Over Game Structures}}\label{sec:propMuCalculus} We consider the logic of the modal $\mu$-calculus \cite{Kozen} over GSs, and repeat its definition from~\cite{BJP+12} below. It will be useful in Sect.~\ref{sec:EngMuCalc} where we introduce a multi-valued extension thereof. \begin{defi}[$\mu$-calculus: syntax]\label{def:prop_mu_calculus_grammar} Let $\mathcal{V}$ be a set of Boolean variables, and let $\mathit{Var} = \{X, Y, \ldots\}$ be a set of relational variables. The formulas of $\mu$-calculus (in positive form) are built as follows: \[\psi::=~v~|~\neg{v}~|~X~|~ \psi\vee\psi~|~\psi\wedge\psi~|~\circlediamond\psi|~\circlebox\psi~|~\mu{X}\psi|~\nu{X}\psi\] where $v\in\mathcal{V}$, $X\in\mathit{Var}$, and $\mu$ and $\nu$ denote the least and the greatest fixed-point operators, respectively. \end{defi} We denote by $\mathcal{L}_{\mu}$ the set of all formulas generated by the grammar of Def.~\ref{def:prop_mu_calculus_grammar}. We further denote by $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ (resp. $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$) the subset of $\mathcal{L}_{\mu}$ that consists of all formulas in which the modal operator $\circlebox$ (resp. $\circlediamond$) does \emph{not} occur. We will refer to $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ (resp. $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$) formulas as $\ensuremath{\mathit{sys}\text{-}\mu}$ (resp. $\ensuremath{\mathit{env}\text{-}\mu}$) formulas. In this paper, we may refer to the \emph{alternation depth}~\cite{EmersonL86, Niwinski86} of a formula $\psi \in \mathcal{L}_{\mu}$, i.e., the number of alternations between interdependent, nested least and greatest fixed-point operators in $\psi$. For the formal definition, see, e.g.,~\cite[Chapter~10]{2001automata}. \begin{defi}[$\mu$-calculus: semantics]\label{def:prop_mu_calculus_semantics} We inductively define the set $\llbracket \psi\rrbracket^{G}_{\mathcal{E}}$ of states in which $\psi\in \mathcal{L}_{\mu}$ is true w.r.t.
a finite GS, $\GS$, and a valuation $\mathcal{E}: \mathit{Var} \rightarrow (2^{\mathcal{V}} \rightarrow \{0,1\})$, as follows:\footnote{If all of the relational variables in $\psi$ are bound by fixed-point operators, i.e., $\psi$ is a closed formula, we may omit $\mathcal{E}$ from the semantic brackets.} \begin{itemize} \setlength\itemsep{0.5em} \item For $v\in\mathcal{V}$, $\llbracket{v}\rrbracket^{G}_{\mathcal{E}} = \{s\in 2^{\mathcal{V}}~|~ s\models v\}$; $\llbracket{\neg{v}}\rrbracket^{G}_{\mathcal{E}} = \{s\in 2^{\mathcal{V}}~|~ s\not\models v\}$. \item For $X\in{\mathit{Var}}$, $\llbracket{X}\rrbracket^{G}_{\mathcal{E}} = \mathcal{E}(X)$. \item $\llbracket{\phi_1\vee{\phi_2}}\rrbracket^{G}_{\mathcal{E}} = \llbracket{\phi_1}\rrbracket^{G}_{\mathcal{E}}\cup \llbracket{\phi_2}\rrbracket^{G}_{\mathcal{E}}$; $\llbracket{\phi_1\wedge{\phi_2}}\rrbracket^{G}_{\mathcal{E}} = \llbracket{\phi_1}\rrbracket^{G}_{\mathcal{E}}\cap \llbracket{\phi_2}\rrbracket^{G}_{\mathcal{E}}$. \item $\llbracket{\circlediamond\phi}\rrbracket^{G}_{\mathcal{E}} = \left\{ s\in 2^{\mathcal{V}} \Bigl\vert \begin{aligned} &\forall s_{\mathcal{X}}\in 2^{\mathcal{X}}, (s,p(s_{\mathcal{X}}))\models \rho^e \Rightarrow \exists s_{\mathcal{Y}}\in{2^{\mathcal{Y}}} \text{ such that }\\ &(s, p(s_{\mathcal{X}}), p(s_{\mathcal{Y}}))\models \rho^s \text{ and } (s_{\mathcal{X}}, s_{\mathcal{Y}})\in \llbracket{\phi}\rrbracket^{G}_{\mathcal{E}} \end{aligned} \right\}$. \item $\llbracket{\circlebox\phi}\rrbracket^{G}_{\mathcal{E}} = \left\{ s\in 2^{\mathcal{V}} \Bigl\vert \begin{aligned} &\exists s_{\mathcal{X}}\in 2^{\mathcal{X}} \text{ such that } (s,p(s_{\mathcal{X}}))\models \rho^e \text{ and } \forall s_{\mathcal{Y}}\in{2^{\mathcal{Y}}},\\ &(s, p(s_{\mathcal{X}}), p(s_{\mathcal{Y}}))\models \rho^s \Rightarrow (s_{\mathcal{X}}, s_{\mathcal{Y}})\in \llbracket{\phi}\rrbracket^{G}_{\mathcal{E}} \end{aligned} \right\}$. \item $\llbracket\twolinescurly{\mu}{\nu} X\phi\rrbracket^{G}_{\mathcal{E}} = \twolinescurly{\bigcup_{i}}{\bigcap_{i}}{S_i}$, where $\twolinescurly{S_0=\emptyset}{S_0=2^{\mathcal{V}}}$, $S_{i+1}= \llbracket{\phi}\rrbracket^{G}_{\mathcal{E}[X\mapsto{S_i}]}$, and $\mathcal{E}[X\mapsto{S}]$ denotes\\[0.5em] the valuation which is like $\mathcal{E}$ except that it maps $X$ to $S$. \end{itemize} \end{defi} \noindent Note that Def.~\ref{def:prop_mu_calculus_semantics} relates to game solving rather than to model checking (cf.~\cite{EmersonJS01,EmersonL86,Schneider2004}). That is, the classical predecessor operators from~\cite{Kozen} are replaced with the \emph{controllable predecessor} operators: $\mathit{Cpre}_{{\scriptstyle\mathit{sys}}},~\mathit{Cpre}_{{\scriptstyle\mathit{env}}}:2^{2^{\mathcal{V}}}\rightarrow{2^{2^{\mathcal{V}}}}$. The set $\llbracket{\circlediamond\phi}\rrbracket^{G}_{\mathcal{E}} = \mathit{Cpre}_{{\scriptstyle\mathit{sys}}}(\llbracket\phi\rrbracket^{G}_{\mathcal{E}})$ consists of all states from which the system can force the environment in a single step to reach a state in the set $\llbracket{\phi}\rrbracket^{G}_{\mathcal{E}}$, and dually, $\llbracket{\circlebox\phi}\rrbracket^{G}_{\mathcal{E}}= \mathit{Cpre}_{{\scriptstyle\mathit{env}}}(\llbracket\phi\rrbracket^{G}_{\mathcal{E}})$ consists of all states from which the environment can force the system in a single step to reach a state in $\llbracket{\phi}\rrbracket^{G}_{\mathcal{E}}$. De Alfaro et al.~\cite{AlfaroHM01} have shown that $\omega$-regular GSs can be solved by evaluating closed $\mathcal{L}_{\mu}$ formulas. 
That is, for every $\omega$-regular winning condition $\varphi$, there is a closed $\ensuremath{\mathit{sys}\text{-}\mu}$ (resp. $\ensuremath{\mathit{env}\text{-}\mu}$) formula $\psi_{\varphi} \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ ($\psi_{\neg\varphi} \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$) that for all GSs, $G$, computes the set of states that win for the system (environment) player, i.e., $W_{{\scriptstyle\mathit{sys}}} = \llbracket{\psi_{\varphi}}\rrbracket^{G}$ ($W_{{\scriptstyle\mathit{env}}} = \llbracket{\psi_{\neg\varphi}}\rrbracket^{G}$). We say that $\psi_{\varphi} \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ \emph{matches} $\varphi$ if for all GSs $G$, $W_{{\scriptstyle\mathit{sys}}} = \llbracket{\psi_{\varphi}}\rrbracket^{G}$, and dually for $\psi_{\neg\varphi} \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$. \section{\texorpdfstring{Energy $\mu$-Calculus Over Weighted Game Structures}{Energy mu-Calculus Over Weighted Game Structures}}\label{sec:EngMuCalc} This section introduces \emph{energy $\mu$-calculus}, a multi-valued extension of the $\mu$-calculus~\cite{Kozen} over GSs~\cite{BJP+12,EmersonJ91}. First, Sect.~\ref{sec:EngMuCalcSyntaxSemantics} presents the syntax and semantics thereof. It identifies two dual syntactic fragments analogous to $\ensuremath{\mathit{sys}\text{-}\mu}$ and $\ensuremath{\mathit{env}\text{-}\mu}$ formulas from Sect.~\ref{sec:propMuCalculus}, and presents their semantics separately in Sect.~\ref{sec:EngMuCalcSysSemantics} and Sect.~\ref{sec:EngMuCalcEnvSemantics}. Second, Sect.~\ref{sec:EngMuCalcCorrect} shows that the semantics of each fragment encodes the energy objective w.r.t. finite upper bounds. The proofs for the theorems, propositions, and lemmas of this section appear in Appx.~\ref{app:energyMuCalc:proofs}. \subsection{\texorpdfstring{Energy $\mu$-Calculus: Syntax and Semantics}{Energy mu-Calculus: Syntax and Semantics}}\label{sec:EngMuCalcSyntaxSemantics} Let $\ensuremath{\mathcal{L}_{e\mu}}$ denote the set of formulas generated by the following grammar: \begin{defi}[Energy $\mu$-calculus: syntax]\label{def:EngMuCalcGrammar} Let $\mathcal{V}$ be a set of Boolean variables, and let $\mathit{Var} = \{X, Y, \ldots\}$ be a set of relational variables. The syntax of energy $\mu$-calculus (in positive form) is as follows: \[\psi::=~v~|~\neg{v}~|~X~|~\psi\vee\psi~|~\psi\wedge\psi~|~ \circlediamond_{\op{E}}\psi|~\circlebox_{\op{E}}\psi~|~\mu{X}\psi|~\nu{X}\psi\] where $v \in \mathcal V$ and $X \in \mathit{Var}$. \end{defi} We denote by $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ (resp. $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$) the subset of $\ensuremath{\mathcal{L}_{e\mu}}$ that consists of all formulas in which $\circlebox_{\op{E}}$ (resp. $\circlediamond_{\op{E}}$) does \emph{not} occur. We refer to $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ (resp. $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$) formulas as \emph{$\ensuremath{\mathit{sys}\text{-energy-}\mu}$} (resp. \emph{$\ensuremath{\mathit{env}\text{-energy-}\mu}$}) formulas. Further, let $\psi^\op{E}\in \mathcal{L}_{e\mu}$ denote the energy $\mu$-calculus formula obtained from $\psi\in \mathcal{L}_{\mu}$ by replacing all occurrences of $\circlediamond$ and $\circlebox$ with $\circlediamond_{\op{E}}$ and $\circlebox_{\op{E}}$, respectively. 
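Before giving the semantics, we note that the syntactic passage from $\psi$ to $\psi^\op{E}$ is a plain substitution of modalities. The following Python sketch (a toy tuple-based formula encoding of ours, not taken from any implementation) makes this explicit for the B\"{u}chi formula of Eq.~\ref{eq:buchiExample:muCalcFormula}:

\begin{verbatim}
# Formulas as nested tuples, e.g., ("dia", phi) for the diamond modality.

def energize(psi):
    """Replace ('dia', ...) by ('dia_E', ...) and ('box', ...) by
    ('box_E', ...), recursively; everything else is kept as-is."""
    if isinstance(psi, tuple):
        head, *args = psi
        head = {"dia": "dia_E", "box": "box_E"}.get(head, head)
        return (head, *map(energize, args))
    return psi  # variables and literals

# nu Z. mu Y. ((J & dia Z) | dia Y)   -->   same formula with dia_E
psi_buchi = ("nu", "Z", ("mu", "Y",
             ("or", ("and", "J", ("dia", "Z")), ("dia", "Y"))))
assert energize(psi_buchi) == ("nu", "Z", ("mu", "Y",
             ("or", ("and", "J", ("dia_E", "Z")), ("dia_E", "Y"))))
\end{verbatim}

The substance of the extension thus lies entirely in the semantics of $\circlediamond_{\op{E}}$ and $\circlebox_{\op{E}}$, to which we now turn.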
\subsubsection{\texorpdfstring{$\ensuremath{\mathit{sys}\text{-Energy}}$ $\mu$-Calculus}{sys-Energy mu-Calculus}}\label{sec:EngMuCalcSysSemantics} The value of a \emph{$\ensuremath{\mathit{sys}\text{-energy-}\mu}$} formula $\psi^\op{E}\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$, which we formally define below, is a function that maps each state of the underlying WGS to the \emph{minimum} initial credit for which that state wins for the system w.r.t. a finite upper bound, provided that $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ matches the underlying winning condition (see Thm.~\ref{thm:sysEngMuCalcCorrectness}). Accordingly, we define the semantics of $\ensuremath{\mathit{sys}\text{-energy-}\mu}$ formulas w.r.t. a finite upper bound $c\in \mathbb{N}$ and a WGS $G^{w} = \langle G, w^s\rangle$, and use $\ensuremath{{G^{w}(c)}}$ as a shorthand for the tuple $\langle G^{w}, c \rangle$. For $c\in\mathbb{N}$, we respectively define the finite sets $\mathit{E(c)} := [0,c] \cup \{+\infty\}$ and $\mathit{EF(c)} := \mathit{E(c)}^{2^{\mathcal{V}}}$ of initial credits up to $c$ and \emph{energy functions} from states to $\mathit{E(c)}$. The semantics' definition of $\ensuremath{\mathit{sys}\text{-energy-}\mu}$ formulas makes use of the \emph{energy controllable predecessor} operator $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}:\mathit{EF(c)}\rightarrow{\mathit{EF(c)}}$, which we define below in Def.~\ref{def:ECpre}, and corresponds to the classical $\mathit{Cpre_{{\scriptstyle\mathit{sys}}}}$ operator of Def.~\ref{def:prop_mu_calculus_semantics}. Informally, for all $f\in \mathit{EF(c)}$, $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(f)$ denotes the energy function that maps each state $s \in 2^{\mathcal{V}}$ to the minimum initial credit sufficient for the system to force the environment to move in a single step from $s$ to some successor $t$ with an energy level at least $f(t)$. \begin{defi}[Energy controllable predecessor operator]\label{def:ECpre} For all WGSs $\langle G, w^s\rangle$, upper bounds $c\in \mathbb{N}$, energy functions $f\in \mathit{EF(c)}$, and states $s \in 2^{\mathcal{V}}$, \begin{align*} &{\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(f)(s)} := {\max\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\min\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\textsf{EC}_c((s,p(s_{\mathcal{X}},s_{\mathcal{Y}})), f(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack}\\& \text{where $\textsf{EC}_c:2^{\mathcal{V}\cup\mathcal{V'}}\times{\mathit{E(c)}}\rightarrow{\mathit{E(c)}}$ and for all $s \in 2^{\mathcal{V}}$, $s' \in 2^{\mathcal{V'}}$, and $e \in \mathit{E(c)}$,}\\& {\textsf{EC}_c((s, s'),e)} = \begin{cases} 0,\ &\mbox{if $(s,s')\not\models{\rho^e}$}\\ +\infty,\ &\mbox{if $e=+\infty$ or $(s,s') \models {\rho^e \wedge {\neg\rho^s}}$}\\ +\infty,\ &\mbox{if $e - w^{s}(s,s') > c$}\\ \max{\lbrack0, e - w^{s}(s,s')\rbrack},\ & \mbox{otherwise} \end{cases}& \end{align*} \end{defi} \noindent In Def.~\ref{def:ECpre}, $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}$ uses the function $\textsf{EC}_c:2^{\mathcal{V}\cup\mathcal{V'}}\times{\mathit{E(c)}}\rightarrow{\mathit{E(c)}}$. Intuitively, $\textsf{EC}_c((s,s'),e)$ is the minimum initial credit sufficient for the system to traverse the transition $(s,s')$, provided that $e$ is the minimum initial credit required to proceed from $s'$. 
Specifically, if $(s,s')$ is invalid for the environment (i.e., $(s,s')\not\models{\rho^e}$), then the initial credit $0$ is sufficient, and if $\textsf{EC}_c((s,s'),e) = +\infty$, there is no initial credit $c_{0} \leq c$ sufficient for traversing $(s,s')$. The latter holds when either $(s,s')$ is only valid for the environment (i.e., $(s,s')\models{\rho^e\wedge{\neg\rho^s}}$), there is no initial credit sufficient to proceed from $s'$ (i.e., $e=+\infty$), or the minimum initial credit required to traverse $(s,s')$ exceeds the upper bound (i.e., $e - w^{s}(s,s') > c$). Def.~\ref{def:sysEngMuCalcSemantics} formally defines the value of $\psi \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ w.r.t. WGSs, finite upper bounds, and \emph{valuations over $\mathit{EF(c)}$} that map each relational variable in $\mathit{Var}$ to an energy function in $\mathit{EF(c)}$. \begin{defi}[$\ensuremath{\mathit{sys}\text{-energy}}$ $\mu$-calculus: semantics]\label{def:sysEngMuCalcSemantics} The semantics $\llbracket \psi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}$ of $\psi\in{\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}}$ w.r.t. a WGS $G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle$, a finite upper bound $c \in \mathbb{N}$, and a valuation $\ensuremath{\mathcal{D}}: \mathit{Var} \rightarrow \mathit{EF(c)}$ over $\mathit{EF(c)}$, is inductively defined for all states $s\in{2^{\mathcal{V}}}$, as follows:\footnote{We may drop the valuation $\ensuremath{\mathcal{D}}$ from the semantic brackets for closed formulas.} \begin{itemize} \setlength\itemsep{0.5em} \item For $~v\in\mathcal{V}$,\begin{minipage}{0.5\textwidth} \begin{align*}&\llbracket{v}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \begin{cases} 0, & \text{ if } s\vDash{v} \\ +\infty, & \text{ if } s\nvDash{v} \end{cases}.\\& \llbracket{\neg{v}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \begin{cases} +\infty, & \text{ if } s\vDash{v} \\ 0, & \text{ if } s\nvDash{v} \end{cases}.& \end{align*}\end{minipage} \item For $~X\in{\mathit{Var}}$, $\llbracket{X}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \ensuremath{\mathcal{D}}(X)(s)$. \item $\llbracket{\phi_1\vee{\phi_2}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \min(\llbracket{\phi_1}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}, \llbracket{\phi_2}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})(s)$. \item $\llbracket{\phi_1\wedge{\phi_2}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \max(\llbracket{\phi_1}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}, \llbracket{\phi_2}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})(s)$. \item $\llbracket{\circlediamond_{\op{E}}\phi}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(\llbracket\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})(s)$. 
\item \parbox[t]{\linewidth}{$\llbracket\twolinescurly{\mu}{\nu} X\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \twolinescurly{\mathit{lfp}}{\mathit{gfp}} (\lambda{f}.\llbracket\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{f}]})(s) = \twolinescurly{\min\limits_{i}}{\max\limits_{i}}\lbrack{h_i}\rbrack(s)$,\\ where $\twolinescurly{h_0=f_{+\infty}}{h_0=f_{0}}$ and ${h_{i+1}} = {\llbracket{\phi}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{h_i}]}}$.} \end{itemize} \end{defi} \noindent In Def.~\ref{def:sysEngMuCalcSemantics}, $\mathit{lfp}(g)$ and $\mathit{gfp}(g)$ respectively denote the least and greatest fixed points of $g:\mathit{EF(c)}\rightarrow{\mathit{EF(c)}}$, whose existence will be proved later in this subsection. For $x \in \{+\infty, 0\}$, $f_{x}$ denotes the constant energy function that maps all states to $x$, and $\ensuremath{\mathcal{D}}[X\mapsto{f}]$ denotes the valuation which is like $\ensuremath{\mathcal{D}}$ except that it maps $X$ to $f \in \mathit{EF(c)}$. Intuitively, in all states $s$ that satisfy an assertion $\psi\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$, the value of $\psi$ is $0$, which is the minimum initial credit sufficient for the system to enforce $\psi$ from $s$, and enforcing $\phi_{1} \wedge \phi_{2} \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ from a state requires the maximum of the values of $\phi_{1}$ and $\phi_{2}$ (dually for $\vee$ and minimum). This intuition is translated in Def.~\ref{def:sysEngMuCalcSemantics} to the use of pointwise $\min$ and $\max$ operations that are respectively the join and meet operations of the \emph{energy function lattice}, $\mathit{EFL(c)}:=\langle \mathit{EF(c)}, \min, \max, f_{+\infty}, f_{0} \rangle$, which replaces the powerset lattice of Def.~\ref{def:prop_mu_calculus_semantics}, and $f_{+\infty}$ and $f_{0}$ are its bottom and top elements, respectively. We also characterize $\mathit{EFL(c)}$ as a partially ordered set by augmenting $\mathit{E(c)}$ with the linear order $\preceq$ such that for all $x,y\in{\mathit{E(c)}} : x\preceq{y}$ iff $x \geq y$, and defining the pointwise partial order $\preceq$ on $\mathit{EF(c)}$, such that for all $f, g \in \mathit{EF(c)}$: \[f\preceq{g} \text{ iff } {f} = {\max(f,g)} \text{ iff for all } s \in {2^{\mathcal{V}}}: f(s)\preceq{g(s)}.\] Def.~\ref{def:sysEngMuCalcSemantics} uses the \emph{dual} $\min$ and $\max$ operations and thus the \emph{inverse} order of $\leq$, which reflects the notion that the less required initial credit, the better for the system player. This design choice maintains correspondence between the values of $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ and $\psi^\op{E}\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ (see Lem.~\ref{lem:sysEngMuCalc}). Importantly, it keeps the classification of $\mu$ and $\nu$ formulas as liveness and safety properties~\cite{Bradfield}. As an example, for $p \in \mathcal{V}$, consider the $\mu$-formula $\psi_{\diamond{p}} := \mu X (p \vee \circlediamond X)$ that solves the $p$-states \emph{reachability} game~\cite{2001automata,Thomas95}. If we used the order $\leq$, we would need to take the $\nu$-formula, $\nu X (p \wedge \circlediamond_{\op{E}} X)$, instead of $\psi_{\diamond{p}}^\op{E}$, to solve the corresponding reachability energy game, while $\nu X (p \wedge \circlediamond X)$, in fact, solves the dual $p$-states \emph{safety} game. 
Recall that, instead of $x \geq y$, we write $x \preceq y$ to reflect that $y$, although the smaller integer, is the greater element in the order $\preceq$. Since $\mathit{EFL(c)}$ is a finite, complete lattice, it follows from the Knaster-Tarski fixed-point theorem~\cite{Tarski1955} that monotonicity of $\lambda{f}.\llbracket\psi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{f}]}$ w.r.t. $\preceq$ for all $\psi \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ guarantees the existence of its extremal fixed points, each of which can be computed by a fixed-point iteration (as in Def.~\ref{def:sysEngMuCalcSemantics}) that stabilizes after at most $2^{|\mathcal{V}|}\cdot (c+1)$ iterations. We claim that the semantics of Def.~\ref{def:sysEngMuCalcSemantics} is $\preceq$-monotone due to the following: (1) Def.~\ref{def:EngMuCalcGrammar} is in positive form, where negation only applies to the Boolean variables $\mathcal{V}$; (2) as shown by Prop.~\ref{prop:ECpreMonotone}, the $\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}$ operator is $\preceq$-monotone; (3) monotonicity is preserved under function composition and under the meet and join operations, and the fixed-point operators are themselves monotone (for detailed proofs, see, e.g., Lem.~3.16 and Lem.~3.17 in~\cite{Schneider2004}). \begin{prop}\label{prop:ECpreMonotone} $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}:\mathit{EF(c)}\rightarrow{\mathit{EF(c)}}$ from Def.~\ref{def:ECpre} is $\preceq$-monotone. That is, for all $f,g \in \mathit{EF(c)}$: if $f\preceq{g}$ then $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(f) \preceq \ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(g)$. \end{prop} \subsubsection{\texorpdfstring{$\ensuremath{\mathit{env}\text{-Energy}}$ $\mu$-Calculus}{env-Energy mu-Calculus}}\label{sec:EngMuCalcEnvSemantics} So far, we have considered $\omega$-regular energy games from the perspective of the system player, who aims to minimize the required initial credit. We now consider the dual perspective of the environment player, and encode it in the semantics of \emph{$\ensuremath{\mathit{env}\text{-energy-}\mu}$} formulas. Informally, given $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$ that matches the winning condition, the value of $\psi^\op{E}\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$ in a state of the underlying WGS corresponds to the \emph{maximum} initial credit for which that state wins for the environment w.r.t. a finite upper bound (see Thm.~\ref{thm:envEngMuCalcCorrectness}). The formal semantics of all $\psi\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$ in which $\circlebox_{\op{E}}$ does \emph{not} occur is the same as that defined previously in Def.~\ref{def:sysEngMuCalcSemantics} for $\ensuremath{\mathit{sys}\text{-energy-}\mu}$ formulas. We now treat the remaining case of all $\psi \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$ of the form $\psi = \circlebox_{\op{E}}\phi$ with $\phi \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$.
In order to obtain a \emph{duality} between the controllable predecessor operators of the two players, as exists in the Boolean powerset semantics of Def.~\ref{def:prop_mu_calculus_semantics}, we first augment the energy function lattice $\mathit{EFL(c)}$ with a unary \emph{negation} operation ${\sim} : {{\mathit{E(c)}} \rightarrow {\mathit{E(c)}}}$, lifted pointwise to $\mathit{EF(c)}$, such that for every $x\in{\mathit{E(c)}}$, \begin{align}\label{eq:negation} &{\sim x} = \begin{cases} +\infty,& \text{if $x = 0$}\\ 0,& \text{if $x = +\infty$}\\ c+1-x,& \text{otherwise} \end{cases}& \end{align} \noindent Lem.~\ref{lem:deMorganAlgebra} shows that $\sim$ is an involution that satisfies De Morgan's laws. \begin{lem}\label{lem:deMorganAlgebra} $\langle \mathit{EFL(c)},\sim \rangle$ is a De Morgan algebra. \end{lem} We denote by $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}$ the dual operator of ${\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}}:{{\mathit{EF(c)}}\rightarrow{{\mathit{EF(c)}}}}$ from Def.~\ref{def:ECpre}, i.e., such that for all $f \in \mathit{EF(c)}$, ${\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}(f)} = {{\sim \ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}({\sim f})}}$, and complete the definition of $\llbracket\psi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}$ for all $\psi\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$ by defining ${\llbracket{\circlebox_{\op{E}}\phi}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}} := {\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}(\llbracket\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})}$ for all $\phi\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$. We provide the explicit definition of $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}$ in Appx.~\ref{app:energyMuCalc:extDefinitions}. It follows from Prop.~\ref{prop:ECpreMonotone} that $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}$ is also $\preceq$-monotone. Hence, we conclude that the semantics of both $\ensuremath{\mathit{sys}\text{-energy-}\mu}$ and $\ensuremath{\mathit{env}\text{-energy-}\mu}$ formulas is $\preceq$-monotone, and consequently well-defined. Further, for all $\psi \in \ensuremath{\mathcal{L}_{e\mu}}$, computing $\llbracket\psi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}$ using optimizations from~\cite{BrowneCJLM97} requires $O(n^{\lfloor d/2 \rfloor +1 })$ iterations, where $n=2^{|\mathcal{V}|}\cdot (c+1)$ is the height of $\mathit{EFL(c)}$ and $d$ is the alternation depth of $\psi$. As a final remark, we observe that Lem.~\ref{lem:deMorganAlgebra} allows us to add \emph{negation} to the logic of energy $\mu$-calculus, namely to define: ${\llbracket \neg\psi \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}} = {{\sim}{\llbracket \psi \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}}}$. This is permitted since Lem.~\ref{lem:deMorganAlgebra} implies the correctness of the following well-known equations (see, e.g., Lem.~2.48 and Lem.~3.13 in~\cite{Schneider2004}): \begin{align} \label{eq:negation1}&{\llbracket \neg\neg\psi \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}} = {\llbracket \psi \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}}.\\ \label{eq:negation2}&{\llbracket \neg(\psi \wedge (\text{resp. } \vee)~\xi) \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}} = \llbracket \neg\psi \vee (\text{resp.
} \wedge)~\neg\xi \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}.\\ \label{eq:negation3}&{\llbracket \neg(\circlediamond_{\op{E}} \text{(resp. $\circlebox_{\op{E}}$)}~\psi) \rrbracket^{G^w(c)}_\mathcal D} = {\llbracket \circlebox_{\op{E}} \text{(resp. $\circlediamond_{\op{E}}$)}~\neg\psi \rrbracket^{G^w(c)}_\mathcal D}.\\ \label{eq:negation4}&{\llbracket \neg(\mu\text{(resp. $\nu$)}X~\psi(X)) \rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}} = {\llbracket \nu\text{($\text{resp. }\mu$)}X~(\neg\psi(\neg X))\rrbracket^{G^w(c)}_\ensuremath{\mathcal{D}}}.& \end{align} \noindent However, in order to keep the semantics monotone, all sub-formulas of the form $\mu X \phi$ or $\nu X \phi$ must satisfy that all free occurrences of $X$ in $\phi$ fall under an even number of negations. \subsection{\texorpdfstring{Energy $\mu$-Calculus: Correctness}{Energy mu-Calculus: Correctness}}\label{sec:EngMuCalcCorrect} Let $\varphi$ be an $\omega$-regular condition, and let $\psi_{\varphi}\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ and $\psi_{\neg\varphi}\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$ be closed formulas that match $\varphi$. Let $G^{w} = (G, w^s)$ be a WGS with $\varphi$ as its winning condition, and let $c \in \mathbb{N}$ be a finite upper bound. In this section we prove the following theorems. \begin{thm}[$\ensuremath{\mathit{sys}\text{-energy}}$ $\mu$-calculus: correctness]\label{thm:sysEngMuCalcCorrectness} For all states $s\in{2^{\mathcal{V}}}$, if $\llbracket{\psi^{\op{E}}_{\varphi}\rrbracket^{\ensuremath{{G^{w}(c)}}}}(s) \not = +\infty$ then $\llbracket{\psi^{\op{E}}_{\varphi}\rrbracket^{\ensuremath{{G^{w}(c)}}}}(s)$ is the minimum initial credit for which the system wins from $s$ w.r.t. $c$ in $G^{w}$. Otherwise, $s$ does not win for the system w.r.t. $c$. \end{thm} \begin{thm}[$\ensuremath{\mathit{env}\text{-energy}}$ $\mu$-calculus: correctness]\label{thm:envEngMuCalcCorrectness} For all states $s\in{2^{\mathcal{V}}}$, if $\llbracket{\psi^{\op{E}}_{\neg\varphi}\rrbracket^{\ensuremath{{G^{w}(c)}}}}(s) \not = +\infty$ then $c - \llbracket{\psi^{\op{E}}_{\neg\varphi}\rrbracket^{\ensuremath{{G^{w}(c)}}}}(s)$ is the maximum initial credit for which the environment wins from $s$ w.r.t. $c$ in $G^{w}$. Otherwise, $s$ does not win for the environment w.r.t. $c$ for any initial credit $c_{0}\in [0,c]$. \end{thm} The proofs of the above theorems rely on an alternative solution to $\omega$-regular energy games via a reduction to classical $\omega$-regular games. Below, Def.~\ref{def:naiveReduction} defines this reduction, which is inspired by the reduction of energy to safety games from~\cite{BrimCDGR11}, and encodes the energy objective by adding to the system's transitions additional safety constraints, all of which are defined over new system controlled variables. Notice that the reduction in Def.~\ref{def:naiveReduction} is presented here only as part of the correctness proof. It is \emph{not} part of our energy $\mu$-calculus based algorithms that solve $\omega$-regular energy games. We describe these algorithms later in Sect.~\ref{sec:solvingEnergyGames}. \begin{defi}[Reduction: WGS to GS] \begin{itemize} \item[] \item \emph{Input}: $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$ and $c\in{\mathbb{N}}$.
\item \emph{Output}: The GS $G^{*} = \langle \mathcal{V}^{*}, \mathcal{X}, \mathcal{Y}^{*}, \rho^{e},\rho^{s*}, \varphi \rangle$ where \begin{enumerate} \item $\mathcal{V}^{*} := \mathcal{X}\cup{\mathcal{Y}^{*}}$. \item \label{def:naiveReduction:yDomSet} $\mathcal{Y}^{*} := \mathcal{Y}\cup yDom$ where $yDom:=\{y_{0}, \ldots, y_{\lfloor\log(c)\rfloor}\}$ encodes the domain $[0,c]$ of a new system variable $y$. \item \label{def:naiveReduction:newRhoS} For all $s_1, s_2 \in 2^{\mathcal{V}}$ and $c_{1}, c_{2} \in [0,c]$: $((s_1, c_1), p(s_2, c_2))\models \rho^{s*}$ iff $(s_1, p(s_2))\models \rho^{s}$ and $c_1 + w^{s}(s_1, p(s_2)) \geq c_2$, where $c_{1}, c_{2}$ are used interchangeably with their binary encodings over the variables $yDom$. \end{enumerate} \end{itemize}\label{def:naiveReduction} \end{defi} \noindent Intuitively, the reduction of Def.~\ref{def:naiveReduction} constructs the GS $G^{*}$ that differs from $G^{w}$ in the following attributes: (1) A \emph{blown-up state space} due to an additional system controlled variable $y$ that keeps track of the initial credit or energy level under $c$ in every state; (2) \emph{additional constraints} to $\rho^{s}$, which ensure that the non-negative value of the new variable $y$ at each state $s^{*}\in 2^{\mathcal{V}^*}$, during a winning play $\sigma^{*} = s^{*}_{0}s^{*}_{1}\ldots s^{*} \ldots\in\textsf{Plays}(G^{*})$ in $G^{*}$, is a lower bound on the energy level of the prefix $s^{*}_{0}|_{\mathcal{V}}\ldots s^{*}|_{\mathcal{V}}$ in $G^{w}$. \begin{thm}[Correctness of Def.~\ref{def:naiveReduction}]\label{thm:reductionCorrectness} For all upper bounds $c\in{\mathbb{N}}$, WGSs $G^{w}$, initial credits $c_{0}\in [0,c]$, and $G^{w}$ states $s\in{2^{\mathcal{V}}}$: the system (resp. environment) wins from $s$ w.r.t. $c$ for $c_{0}$ if and only if the system (resp. environment) wins from $(s, c_{0})\in 2^{\mathcal{V}^{*}}$ in the GS $G^{*}$ constructed by Def.~\ref{def:naiveReduction} from $c$ and $G^{w}$. \end{thm} The following lemmas show that the semantics of energy $\mu$-calculus w.r.t. $\ensuremath{{G^{w}(c)}}$ succinctly represents that of the $\mu$-calculus w.r.t. the GS $G^{*}$, constructed by Def.~\ref{def:naiveReduction}. Specifically, Lem.~\ref{lem:sysEngMuCalc} (resp. Lem.~\ref{lem:envEngMuCalc}) relates to all $\ensuremath{\mathit{sys}\text{-}\mu}$ (resp. $\ensuremath{\mathit{env}\text{-}\mu}$) formulas $\psi$, and relies on the property of the set $\llbracket{\psi}\rrbracket^{G^{*}}_{\mathcal{E}}$ being $\leq$-upward (resp. $\leq$-downward) closed w.r.t. the initial credits' values, which are encoded over the variable $y$. \begin{lem}\label{lem:sysEngMuCalc} Let $\psi\in\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ where all Boolean variables are in $\mathcal{V}$. Let $\ensuremath{\mathcal{D}}:\mathit{Var}\rightarrow \mathit{EF(c)}$, $\mathcal{E}: \mathit{Var}\rightarrow (2^{\mathcal{V^{*}}} \rightarrow \{0,1\})$ be valuations such that for all $X \in \mathit{Var}$, $s \in 2^{\mathcal{V}}$, and $\mathit{val} \in[0,c]$: $\mathit{val} \preceq \ensuremath{\mathcal{D}}(X)(s)$ if and only if $(s, \mathit{val}) \in \mathcal{E}(X)$. Then, for all $s \in 2^{\mathcal{V}}$ and $\mathit{val} \in[0,c]$:\\ $\mathit{val} \preceq\llbracket{\psi^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ if and only if $(s, \mathit{val})\in\llbracket{\psi}\rrbracket^{G^{*}}_{\mathcal{E}}$.
\end{lem} \begin{lem}\label{lem:envEngMuCalc} Let $\psi\in\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$ where all Boolean variables are in $\mathcal{V}$. Let $\ensuremath{\mathcal{D}}:\mathit{Var}\rightarrow \mathit{EF(c)}$, $\mathcal{E}: \mathit{Var}\rightarrow (2^{\mathcal{V^{*}}} \rightarrow \{0,1\})$ be valuations such that for all $X \in \mathit{Var}$, $s \in 2^{\mathcal{V}}$, and $\mathit{val} \in[0,c]$: $\mathit{val} \preceq \ensuremath{\mathcal{D}}(X)(s)$ if and only if $(s, c - \mathit{val}) \in \mathcal{E}(X)$. Then, for all $s \in 2^{\mathcal{V}}$ and $\mathit{val} \in[0,c]$:\\ $\mathit{val} \preceq\llbracket{\psi^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ if and only if $(s, c - \mathit{val})\in\llbracket{\psi}\rrbracket^{G^{*}}_{\mathcal{E}}$. \end{lem} This concludes the subsection, as Thm.~\ref{thm:sysEngMuCalcCorrectness} (resp. Thm.~\ref{thm:envEngMuCalcCorrectness}) now follows as a corollary of Thm.~\ref{thm:reductionCorrectness} and Lem.~\ref{lem:sysEngMuCalc} (resp. Lem.~\ref{lem:envEngMuCalc}). \section{{\texorpdfstring{Solving Energy Games via Energy $\mu$-Calculus}{Solving Energy Games via Energy mu-Calculus}}} \label{sec:solvingEnergyGames} In this section, we show how to use energy $\mu$-calculus to solve $\omega$-regular energy games. Formally, given a WGS $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$, an upper bound $c\in{\mathbb{N}}\cup \{+\infty\}$, and a state $s\in 2^{\mathcal{V}}$ in $G^{w}$, we aim to use the results of Sect.~\ref{sec:EngMuCalc} for solving the following problems: \begin{enumerate}[label=\textbf{P\arabic*},ref=P\arabic*] \item \label{problem:decision} \emph{The decision problem}: Decide whether $s$ wins for the system w.r.t. $c$. \item \label{problem:optimal} \emph{The minimum credit problem}: Compute the \emph{minimum} initial credit, $c^{\min}_{0} \not = +\infty$, $c^{\min}_{0}\leq c$, for which $s$ wins for the system w.r.t. $c$. \end{enumerate} \noindent Sect.~\ref{sec:solvingBEGames} considers these problems when there is a finite upper bound $c\in{\mathbb{N}}$ on the energy levels, while Sect.~\ref{sec:sufficientbound} treats the unbounded case, namely when $c=+\infty$. We present an extended version of Sect.~\ref{sec:sufficientbound:WGStoParityEnergyGames} in Appx.~\ref{app:sufficient-bound-proof}. \subsection{Solving Energy Games with Finite Upper Bounds}\label{sec:solvingBEGames} Let $\varphi$ be an $\omega$-regular condition, and let $\psi_{\varphi}\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ and $\psi_{\neg\varphi}\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$ be closed formulas that match $\varphi$. Let $G^{w}=(G, w^s)$ be a WGS whose winning condition is $\varphi$, let $c \in \mathbb{N}$ be a finite upper bound, and let $G^{*}$ be the GS constructed by Def.~\ref{def:naiveReduction}. Eq.~\ref{eq:sysWinningStates} (resp. Eq.~\ref{eq:envWinningStates}) below follows from Thm.~\ref{thm:reductionCorrectness} and Thm.~\ref{thm:sysEngMuCalcCorrectness} (resp. Thm.~\ref{thm:envEngMuCalcCorrectness}), and describes how to compute the set of states that win for the system (resp. environment) player in $G^{w}$ w.r.t. $c$.
\begin{equation}\label{eq:sysWinningStates} \begin{split} W_{{\scriptstyle\mathit{sys}}}(c) &= \{s\in2^{\mathcal{V}} \mid \exists c_{0} \in [0,c] : (s, c_{0})\in \llbracket{\psi_{\varphi}}\rrbracket^{G^{*}} \}\\ &= \{s\in2^{\mathcal{V}} \mid \llbracket{\psi^{\op{E}}_{\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}} (s) \not = +\infty\}. \end{split} \end{equation} \begin{equation} \label{eq:envWinningStates} \begin{split} W_{{\scriptstyle\mathit{env}}}(c) &= \{s\in2^{\mathcal{V}} \mid \forall c_{0} \in [0,c] : (s, c_{0})\in \llbracket{\psi_{\neg\varphi}}\rrbracket^{G^{*}} \} \\ &= \{s\in2^{\mathcal{V}} \mid \llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}} (s) = 0\}. \end{split} \end{equation} Therefore, given a state $s\in 2^{\mathcal{V}}$, solving the decision problem (\ref{problem:decision}) for $c$ amounts to checking whether $\llbracket{\psi^{\op{E}}_{\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s)\not = +\infty$. As we have that $\llbracket{\psi_{\varphi}}\rrbracket^{G^{*}}\cup\llbracket{\psi_{\neg \varphi}}\rrbracket^{G^{*}} = 2^{\mathcal{V}^{*}}$ due to determinacy of $\omega$-regular games, it follows from Thm.~\ref{thm:reductionCorrectness} that $\omega$-regular energy games are determined w.r.t. finite upper bounds, i.e., $W_{{\scriptstyle\mathit{sys}}}(c) \cup W_{{\scriptstyle\mathit{env}}}(c) = 2^{\mathcal{V}}$. Thus, alternatively, we can solve \ref{problem:decision} by checking whether $\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s) \not = 0$. As a side note, by reasoning similar to the above, determinacy also holds w.r.t. the bound $+\infty$. In brief, the argument is as follows. We define a reduction which is like Def.~\ref{def:naiveReduction} but does not use a finite upper bound. That is, the modified reduction adds a new system controlled variable $y$ whose domain is $\mathbb{N}$ (rather than $[0,c]$); thus, it constructs a GS $G^{*}$ whose state space is infinite. The variable $y$ now keeps track of the unbounded energy level in every state. Then, the determinacy of $G^{*}$ implies the determinacy of the WGS $G^{w}$ w.r.t. $+\infty$. Consequently, we conclude that \emph{$\omega$-regular energy games are determined}. A corollary of Thm.~\ref{thm:sysEngMuCalcCorrectness} is that we can solve the minimum credit problem (\ref{problem:optimal}) by simply returning $\llbracket{\psi^{\op{E}}_{\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s)$. However, the determinacy of $\omega$-regular energy games, together with Thm.~\ref{thm:sysEngMuCalcCorrectness} and Thm.~\ref{thm:envEngMuCalcCorrectness}, implies that we can also solve \ref{problem:optimal} by computing $\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}$ and returning ${\sim}{\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}}$, as follows: \begin{enumerate} \item If $\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s) = 0$, return ``$s$ does not win for the system w.r.t. $c$'' (i.e., return $+\infty$). \item If $\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s) = +\infty$, return 0. \item Otherwise, return $c + 1 - \llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s)$.
\end{enumerate} \noindent Finally, we stress that the semantics of an energy $\mu$-calculus formula, which we have inductively defined in Sect.~\ref{sec:EngMuCalcSyntaxSemantics}, immediately prescribes an algorithm to compute the energy function characterized by this formula (cf. Eq.~\ref{eq:buchiExample:energyMuCalcFormula} and Alg.~\ref{alg:buchiEnergy}). Therefore, seeing $\llbracket{\psi^{\op{E}}_{\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}$ and $\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}$ each as a symbolic algorithm, the above, in fact, describes \emph{algorithms} to solve problems \ref{problem:decision} and \ref{problem:optimal}. A straightforward implementation of $\llbracket{\psi^{\op{E}}_{\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}$ or $\llbracket{\psi^{\op{E}}_{\neg\varphi}}\rrbracket^{\ensuremath{{G^{w}(c)}}}$ gives an algorithm that computes the desired energy function in $O((|2^\mathcal V|(c+1))^q)$ symbolic steps, where $q$ is the largest number of nested fixed-point operators in the energy $\mu$-calculus formula. Nevertheless, using the well-known techniques proposed in~\cite{BrowneCJLM97} and~\cite{EmersonL86}, we can reduce this time complexity to $O((|2^\mathcal V|(c+1))^{\lfloor d/2 \rfloor+1})$ symbolic steps, where $d$ is the alternation depth of the formula (hence $d\leq q$). \subsection{A Sufficient Upper Bound}\label{sec:sufficientbound} We have shown in Sect.~\ref{sec:solvingBEGames} how energy $\mu$-calculus can be used to solve problems \ref{problem:decision} and \ref{problem:optimal} when there is a finite upper bound $c \in \mathbb{N}$ on the energy levels. However, in some cases, such a finite bound is unknown a priori, so one may wish to find a complete upper bound, i.e., a sufficiently large bound whose increase would not introduce additional winning states for the system player. In this section, we show how to compute such a bound. Moreover, based on the main result of this section (see Thm.~\ref{Thm:a-sufficient-bound}), \emph{we now solve problems \ref{problem:decision} and \ref{problem:optimal} also for the case where $c=+\infty$}, which we have left unresolved in the previous section. The complete bound we present depends on the size of the game's state space ($N$), the maximal absolute weight of a transition in the game ($K$), and the length ($m$) and the alternation depth ($d$)~\cite{EmersonL86,Niwinski86} of the $\mu$-calculus formula that matches the winning condition. \begin{thm}\label{Thm:a-sufficient-bound} Let $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$ be a WGS, $N=|2^{\mathcal{V}}|$, and let $K$ be the maximal transition weight in $G^w$, in absolute value. Take $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, a closed $\ensuremath{\mathit{sys}\text{-}\mu}$ formula that matches $\varphi$, and let $m$ be its length and $d$ its alternation depth. Then, if the system wins from a state $s$ w.r.t. $+\infty$ for an initial credit $c_0$, it also wins from $s$ w.r.t. $(d+1)((N^2+N)m-1)K$ for an initial credit $\min\{c_0,((N^2+N)m-1)K\}$. \end{thm} We devote the remainder of this section to proving Thm.~\ref{Thm:a-sufficient-bound}. The crux of the proof is to reduce the game to an \emph{energy parity game}~\cite{ChatterjeeD12,EmersonJ91,mostowski1985}; thus, we turn to discuss these games extensively. Note that the reduction to energy parity games is presented here only as part of the proof.
It is \emph{not} part of our algorithm for solving $\omega$-regular energy games. We show connections between solving the bounded and unbounded energy parity objective that allow us to prove Thm.~\ref{Thm:a-sufficient-bound}. \begin{defi}[Energy parity game]\label{def:energyParityGame} An \emph{energy parity game} is a tuple ${G} = \langle({V}={V_0\cup V_1},E),\ensuremath{\mathit{prio}},w\rangle$ that consists of the following components: \begin{enumerate} \item A directed graph $(V=V_0\cup V_1,E)$ where $V_0,V_1$ partition $V$ into $\text{player}_0$ states and $\text{player}_1$ states, respectively. \item A priority function $\ensuremath{\mathit{prio}}:V\rightarrow \mathbb{N}$. \item A weight function $w:E\rightarrow \mathbb{Z}$. \end{enumerate} \end{defi} Let $G$ be an energy parity game as in Def.~\ref{def:energyParityGame}. If the weight function is omitted, $G$ is simply said to be a \emph{parity game}. Plays and strategies in $G$ are defined in a way similar to those in a WGS (see Sect.~\ref{sec:preliminaries}). The only difference is that in $G$, the players do not necessarily take steps in an alternating manner; $\text{player}_i$ chooses the next successor whenever the play reaches a $V_i$-state. Thus, a play is a path that is either infinite, or ends in a deadlock, i.e., a state with no outgoing edges. Given a play $\sigma$, $\mathit{inf}(\sigma)\subseteq V$ is the set of all states that appear infinitely often in $\sigma$. For $c\in \mathbb{N}\cup\{+\infty\}$, $c_0\in[0,c]\cap \mathbb N$, and a path $\sigma$ in $G$ of length at least $k+1$, we use the notion of $\textsf{EL}_{c}(G,c_0, \sigma[0\ldots k])$, as defined in Sect.~\ref{sec:combinedEngObj}. In words, $\textsf{EL}_{c}(G,c_0, \sigma[0\ldots k])$ is the energy level accumulated so far according to $w$, when the play starts with the initial credit $c_0$, and $c$ is the energy upper bound. A play $\sigma$ wins for $\text{player}_0$ w.r.t. $c$ for an initial credit $c_0$ if the following holds: (1) for every finite prefix $\sigma'$ of $\sigma$, $\textsf{EL}_{c}(G,c_0, \sigma')\geq 0$; (2) if $\sigma$ is infinite, then $\min\{\ensuremath{\mathit{prio}}(v):v\in \mathit{inf}(\sigma)\}$ is even, and (3) if $\sigma$ is finite, it ends in a deadlock for $\text{player}_1$. Otherwise, $\sigma$ wins for $\text{player}_1$. We refer to (1) as the \emph{energy objective}, while requirements (2) and (3) form the \emph{parity objective}. Hence, in the case of a parity game, a play wins for $\text{player}_0$ if the parity objective is achieved. As for WGSs, we adopt the notation $W_\alpha(c)$ to denote the winning region of $\text{player}_\alpha$ w.r.t. the upper bound $c\in \mathbb{N}\cup\{+\infty\}$ in a given energy parity game. When necessary, we may also write $W^G_\alpha(c)$ to clarify that we relate to the winning region of $\text{player}_\alpha$ in the energy parity game, $G$. If $G$ is a parity game, the energy upper bound is irrelevant, and thus we just write $W_\alpha$ or $W^G_\alpha$. The following lemma, which establishes the first step towards proving Thm.~\ref{Thm:a-sufficient-bound}, is an adaptation of Lem.~6 in~\cite{ChatterjeeD12}: \begin{lem}\label{lem:lemma-6-revised} Let $G$ be an energy parity game (as defined in Def.~\ref{def:energyParityGame}) with $n$ states, $d$ different priorities, and maximal absolute weight $K$. If $\text{player}_0$ has a winning strategy from a state $s$ w.r.t. $+\infty$ for an initial credit $c_0$, then she has a winning strategy w.r.t.
$d(n-1)K$ for an initial credit $\min\{c_0,(n-1)K\}$. \end{lem} Consider an energy parity game $G$, as in Lem.~\ref{lem:lemma-6-revised}, and let $s$ be a state of $G$ that wins for $\text{player}_{0}$ w.r.t. $+\infty$ for an initial credit $c_0$. While Lem.~6 of~\cite{ChatterjeeD12} shows that $\text{player}_{0}$ wins from $s$ w.r.t. $+\infty$ for the initial credit $(n-1)K$ with a strategy which has a memory of size $dnK$, Lem.~\ref{lem:lemma-6-revised} shows that $\text{player}_0$ wins from $s$ w.r.t. the finite upper bound $d(n-1)K$ without the need to increase her initial credit, $c_{0}$. Both lemmas are proved by induction. However, in contrast to~\cite{ChatterjeeD12}, which proves Lem.~6 by induction on $d$, we prove Lem.~\ref{lem:lemma-6-revised} by induction on ${n}+{d}$. This allows us to apply the induction hypothesis in more cases and, consequently, avoid the use of recursion, in contrast to~\cite{ChatterjeeD12}. We provide the full proof for Lem.~\ref{lem:lemma-6-revised} in Appx.~\ref{app:lemma-6-revised-proof}. Also, as a side note, Lem.~\ref{lem:lemma-6-revised} implies the following corollary which slightly improves the first result listed in~\cite{ChatterjeeD12}. Moreover, this corollary establishes a link between the upper bound on the energy level accumulation and the upper bound on the strategy's memory size. \begin{cor} Let $G$ be an energy parity game with $n$ states, $d$ different priorities, and maximal absolute weight $K$. If $\text{player}_{0}$ wins from a state $s$ w.r.t. $+\infty$ for an initial credit $c_{0}$ in $G$, then she has a strategy that wins for the initial credit $c_0$ and has a memory of size $d(n-1)K+1$. \label{cor:parityEnergyMemory} \end{cor} We obtain Cor.~\ref{cor:parityEnergyMemory} by applying Lem.~\ref{lem:lemma-6-revised} to $G$, which yields that $\text{player}_0$ wins from $s$ w.r.t. $d(n-1)K$, and by observing that if $\text{player}_0$ wins from $s$ w.r.t. $c\in \mathbb N$, then she has a winning strategy from $s$ w.r.t. $+\infty$ with a memory of size $c+1$. The proof sketch for the latter claim is as follows. We construct from $G$ and $c$ a new parity game $G^{p}$. The states of $G^{p}$ have the form $(s,c'_{0})$ where $s$ is a state of $G$ and $c'_{0}\in[0,c] \cup \{+\infty\}$ is the accumulated energy level under $c$. States of the form $(s, +\infty)$ are deadlocks for $\text{player}_{0}$ and correspond to violation of the energy objective. The edges of $G^{p}$ are taken from $G$ and update the energy component accordingly. Then, a memoryless strategy in $G^{p}$, which exists due to memoryless determinacy of parity games~\cite{EmersonJ91,Zielonka98}, can be lifted to a strategy in $G$ with a memory of size $(c+1)$ to keep track of the energy level under $c$. The second step towards proving Thm.~\ref{Thm:a-sufficient-bound} involves showing that if the system player can win from a state of a WGS w.r.t. $+\infty$, then it can also win from that state w.r.t. some finite upper bound. This is formally stated by the next lemma. \begin{lem}\label{lem:from-infinite-to-finite} Let $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$ be a WGS and assume that the system player has a winning strategy from a state $s\in 2^\mathcal{V}$ w.r.t. $+\infty$ for an initial credit $c_0$. Then, for some finite upper bound $c \in \mathbb{N}$, the system has a winning strategy from $s$ w.r.t. $c$ for an initial credit $\min\{c_{0},c\}$.
\end{lem} \begin{proof} To prove this claim, we use the notion of a \emph{deterministic parity automaton}~\cite{Buchi60,mostowski1985}. \begin{defi}[Deterministic parity automaton] A deterministic parity automaton is a tuple ${\mathcal{A}}={\langle Q,\Sigma,\delta,q_0,\ensuremath{\mathit{prio_{\mathcal{A}}}}\rangle}$ where $Q$ is a finite set of states, $\Sigma$ is an alphabet, $\delta:Q\times\Sigma\rightarrow Q$ is the transition function, $q_0\in Q$ is the initial state, and $\ensuremath{\mathit{prio_{\mathcal{A}}}}:Q\rightarrow \mathbb{N}$ is the priority function. \end{defi} An $\omega$-word $\sigma_0\sigma_1\sigma_2\cdots\in \Sigma^\omega$ is accepted by a deterministic parity automaton $\mathcal{A}=\langle Q,\Sigma,\delta,q_0,\ensuremath{\mathit{prio_{\mathcal{A}}}}\rangle$ if there is an infinite sequence of states $q'_0,q'_1,q'_2,\dots$ such that $q'_0=q_0$, $\delta(q'_i,\sigma_i)=q'_{i+1}$ for all $i \geq 0$, and $\underset{i\rightarrow +\infty}{\lim}\min\{\ensuremath{\mathit{prio_{\mathcal{A}}}}(q'_l):l\geq i\}$ is even. In words, using its transition function, the automaton $\mathcal{A}$ reads an $\omega$-word, starting from $q_0$, and if the minimal priority traversed infinitely often is even, $\mathcal{A}$ accepts the word. The set of all $\omega$-words accepted by the automaton is denoted by $L(\mathcal{A})$. It is known that for every $\omega$-regular language $L$ there exists a deterministic parity automaton $\mathcal{A}$ with $L=L(\mathcal{A})$~\cite{2001automata,piterman2006,safra88}. Let $\mathcal{A}=\langle Q,2^\mathcal{V},\delta,q_0,\ensuremath{\mathit{prio_{\mathcal{A}}}}\rangle$ be a deterministic parity automaton with $L(\mathcal{A})=L(\varphi)$, where $L(\varphi)$ is the set of all $\omega$-words $\sigma \in (2^{\mathcal V})^{\omega}$ for which the $\omega$-regular winning condition of $G^{w}$, $\varphi$, holds. We define an energy parity game $G_\mathcal{A}=\langle (V,E),\mathit{prio},w \rangle$, as follows: \begin{enumerate} \item $\text{player}_1$ states are all states of the form $(s,q)$ where $s\in 2^\mathcal{V}$ and $q$ is a state of $\mathcal{A}$. \item $\text{player}_0$ states are all states of the form $(s,u,q)$ where $s\in 2^{\mathcal V}$, $q$ is a state of $\mathcal{A}$, and $u\in 2^\mathcal{X}$. \item There is a transition from a state $(s,q)$ to a state $(s,u,q)$ if $(s,p(u))\models \rho^e$. The weight of such a transition is $0$. \item There is a transition from a state $(s,u,q)$ to a state $(t,q')$ if $u=t|_\mathcal{X}$, $(s,p(t))\models \rho^s$, and $q'=\delta(q,t)$. The weight of such a transition is $w^s(s,p(t))$. \item $\ensuremath{\mathit{prio}}(s,u,q)=\ensuremath{\mathit{prio}}(s,q)=\ensuremath{\mathit{prio_{\mathcal{A}}}}(q)$. \end{enumerate} \noindent It is not difficult to see that for every state $t\in 2^\mathcal{V}$, upper bound $d\in\mathbb{N}\cup \{+\infty\}$, and initial credit $d_0\in \mathbb{N}$, $d_0\leq d$, the system wins in $G^w$ from $t$ w.r.t. $d$ for $d_0$ if and only if $\text{player}_0$ wins in $G_{\mathcal A}$ from $(t,q_t=\delta(q_0,t))$ w.r.t. $d$ for $d_0$. Now, let $s \in 2^{\mathcal V}$ be a state in $G^{w}$ from which the system wins w.r.t. $+\infty$ for an initial credit $c_0\in \mathbb{N}$. Therefore, for $q_s=\delta(q_0,s)$, $\text{player}_0$ wins in $G_{\mathcal A}$ from $(s,q_s)$ w.r.t. $+\infty$ for the initial credit $c_0$. By Lem.~\ref{lem:lemma-6-revised}, for some finite upper bound $c\in\mathbb{N}$, $\text{player}_0$ wins from $(s,q_s)$ w.r.t. $c$ for the initial credit $\min\{c_{0},c\}$.
Hence, the system wins in $G^w$ from $s$ w.r.t. $c$ for the initial credit $\min\{c_{0},c\}$, as required. \end{proof} \subsubsection{Reducing Weighted Game Structures to Energy Parity Games}\label{sec:sufficientbound:WGStoParityEnergyGames} So far, we have shown in Lem.~\ref{lem:lemma-6-revised} that if $\text{player}_0$ can win an energy parity game w.r.t. $+\infty$, then she can also win w.r.t. some finite upper bound that depends on the size of the game. We have concluded in Lem.~\ref{lem:from-infinite-to-finite} that the same holds for any WGS, but we have not yet achieved the desired upper bound, which is specified in Thm.~\ref{Thm:a-sufficient-bound}. In the following, we prove the sufficiency of the upper bound $(d+1)((N^2+N)m-1)K$ for winning, whenever winning is at all possible. The idea is to reduce $\omega$-regular energy games (WGSs) to energy parity games without using the explicit construction from the proof of Lem.~\ref{lem:from-infinite-to-finite}. Instead, we provide a construction that uses the energy $\mu$-calculus formula that solves the game. This is mostly useful in cases where the $\mu$-calculus formula is relatively small, e.g., reachability, safety, B\"{u}chi, co-B\"{u}chi, GR(1)~\cite{BJP+12}, etc. We present the construction guidelines. For the full details, we refer the reader to Appx.~\ref{app:sufficient-bound-proof}. Consider a WGS, $G^w$, where $K$ is the maximal absolute value of the weights in $G^w$. Let $\psi\in\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ be a closed $\ensuremath{\mathit{sys}\text{-}\mu}$ formula that matches the winning condition of $G^w$, $\varphi$, and let $m$ be its length and $d$ its alternation depth. For a natural number $c\geq K$, we construct an energy parity game in several steps as elaborated below. We remark that the actual construction (see Appx.~\ref{app:sufficient-bound-proof}) is, in some places, slightly different from the one described here. This is because we choose to omit some technical details that, we believe, would only distract from and conceal the essence of the construction. \begin{description} \item[{\bfseries Step 1}] Let $G_c$ be the graph defined in Def.~\ref{def:naiveReduction} (denoted $G^{*}$ in Def.~\ref{def:naiveReduction}). Hence, the states of $G_c$ are of the form $(s,c_0)$, where $s$ is a state of $G^{w}$ and $c_0 \in [0,c]$. Recall that by Lem.~\ref{lem:sysEngMuCalc}, ${c_0 \geq \llbracket \psi^\op{E}\rrbracket^{\ensuremath{{G^{w}(c)}}}(s)}\text{ iff }{((s,c_0)\in\llbracket \psi \rrbracket^{G_{c}})}$. \item[{\bfseries Step 2}] We apply the seminal \emph{model checking game construction}~\cite{EmersonJ91} to obtain a parity game $G_c\times \psi$, which has at most $d+1$ different priorities. The states of $G_c\times \psi$ are of the form $((s,c_0),\xi)$ where $\xi$ is a sub-formula of $\psi$. By~\cite{EmersonJ91}, $(s,c_0)\in \llbracket \psi \rrbracket^{G_c}$ iff $((s,c_0),\psi)\in W^{G_c\times\psi}_0$. \item[{\bfseries Step 3}] The next step is to add a weight function $w$ to $G_{c}\times \psi$, namely to transform $G_c\times\psi$ into an energy parity game (as defined in Def.~\ref{def:energyParityGame}). Some of the transitions in $G_c\times \psi$ simulate transitions of $G_c$, which correspond to transitions of $G^w$. The weight of such a transition $T = ((s_1,c_1),\xi_1),((s_2,c_2),\xi_2)$ is inherited from the transition of $G^w$ that $T$ simulates, i.e., $T$ is assigned the weight $w^s(s_1,p(s_2))$. The weight of all other transitions is $0$.
However, the construction of $G_c$ ensures that every transition $T = ((s_1,c_1),\xi_1),((s_2,c_2),\xi_2)$ satisfies ${c_1 + w(T)} \geq {c_2}$ (cf. the definition of $\rho^{s*}$ in Def.~\ref{def:naiveReduction}). Hence, in any play that starts from $((s,c_0), \psi)$ with an initial credit $c_0$, the energy level always remains non-negative. Consequently, the additional energy objective is merely artificial and does not prevent $\text{player}_0$ from winning; the result is that a state $s$ wins for the system in $G^w$ w.r.t. $c$ for an initial credit $c_0$ iff $((s,c_0),\psi)\in W^{G_c\times \psi}_0(c)$. \item[{\bfseries Step 4}] The final step is to eliminate the energy component from the states of $G_{c}\times \psi$. All states of the form $((s,c_0),\xi)$, across the different values of $c_0$, are merged into a single state $(s,\xi)$. Thus, each state of the obtained energy parity game, $\faktor{G_c\times \psi}{c}$, matches a set of states in $G_c\times \psi$. A path in $\faktor{G_c \times \psi}{c}$ can be lifted to a path in $G_c\times \psi$. Thus, we have that $(s,\psi)$ wins for $\text{player}_0$ in $\faktor{G_c\times \psi}{c}$ w.r.t. $c$ for an initial credit $c_0$ iff $\text{player}_0$ wins from $((s,c_0),\psi)$ in $G_c\times \psi$ w.r.t. $c$ for an initial credit $c_0$ iff the system player wins from $s$ in $G^w$ w.r.t. $c$ for $c_0$. \end{description} \noindent The key idea behind this construction is that the upper bound $c$ does \emph{not} play a role in the resulting game, $\faktor{G_c\times\psi}{c}$. In fact, we get that for any two finite upper bounds, $c,c'\geq K$, $\faktor{G_c\times\psi}{c} =\faktor{G_{c'}\times\psi}{c'}$, so we may denote this graph by a single name, say $\tilde{G}$. By Lem.~\ref{lem:lemma-6-revised}, if a state of $\tilde{G}$ wins for $\text{player}_0$ w.r.t. $+\infty$, it also wins w.r.t. $b= (d+1)((N^2+N)m-1)K$, as $(N^2+N)m$ is the number of states of $\tilde{G}$. Therefore, if a state $s$ wins for the system in $G^w$ w.r.t. some finite upper bound $c$, then $s$ also wins w.r.t. $b$. This consequence, together with Lem.~\ref{lem:from-infinite-to-finite}, completes the proof of Thm.~\ref{Thm:a-sufficient-bound}. \emph{Notice that this result establishes energy $\mu$-calculus algorithms for problems \ref{problem:decision} and \ref{problem:optimal} when the bound is $+\infty$:} \begin{itemize} \item Checking if $\llbracket\psi^\op{E}\rrbracket^{G^{w}(b)}(s)\neq +\infty$ solves the decision problem (\ref{problem:decision}). \item Returning $\llbracket \psi^\op{E}\rrbracket^{G^{w}(b)}(s)$ solves the minimum credit problem (\ref{problem:optimal}). \end{itemize} \noindent The sufficient bound $b=(d+1)((N^2+N)m-1)K$, which we have just obtained, applies to WGSs with \emph{any} $\omega$-regular winning conditions. Nevertheless, in some cases, this bound is not tight, as we demonstrate below. Consider a $\ensuremath{\mathit{sys}\text{-}\mu}$ formula $\psi_B$ that matches a B\"{u}chi winning condition $\varphi_B$ (cf. Eq.~\ref{eq:buchiExample:muCalcFormula}), and let $m_B$ be its length. As the alternation depth of $\psi_B$ is 2, Thm.~\ref{Thm:a-sufficient-bound} implies the sufficiency of the bound $b_{B}=3((N^2+N)m_B-1)K$. Interestingly, however, in this special case, we argue that the bound $b_{B}$ is not tight, and it can be replaced with a \emph{lower} one, specifically with $b^{\mathit{low}}_{B} = 2({N^2+N-1})K$.
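\noindent Before turning to the derivation of $b^{\mathit{low}}_{B}$, the following minimal sketch illustrates the arithmetic gap between the two bounds; the values assigned to \texttt{N}, \texttt{K}, and \texttt{m\_B} are hypothetical sample parameters, not taken from the paper. Notably, the general bound grows with the formula length $m_B$, whereas the B\"{u}chi-specific bound does not depend on it.
\begin{verbatim}
# Hypothetical sample parameters: N = |2^V|, K = maximal absolute weight,
# m_B = length of the Buechi formula psi_B, whose alternation depth is d = 2.
N, K, m_B = 2**6, 10, 8

b_general = (2 + 1) * ((N**2 + N) * m_B - 1) * K  # bound of the theorem, d + 1 = 3
b_low = 2 * (N**2 + N - 1) * K                    # the tighter Buechi-specific bound

print(b_general, b_low)  # 998370 vs. 83180: roughly a 12x gap for these values
\end{verbatim}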
To obtain $b^{\mathit{low}}_{B}$, we reduce a WGS $G^{w}$ whose winning condition is $\varphi_B$ to an energy parity game $G^{ep}$ with two priorities and at most $N^2 + N$ states; then, we invoke Lem.~\ref{lem:lemma-6-revised} on $G^{ep}$. The crux of this reduction is that it constructs $G^{ep}$ without using $\psi_B$, in contrast to the above construction. The reduction views the B\"{u}chi condition as a parity condition with two priorities (w.l.o.g. $0$ and $1$), and constructs $G^{ep}$, which is simply an explicit representation of the symbolic WGS $G^w$, as follows: \begin{itemize} \item $\text{player}_1$ states are all states $s \in 2^{\mathcal V}$. \item $\text{player}_0$ states are all pairs $(s,u)$ where $s \in 2^{\mathcal V}$ is a $G^{w}$-state and $u \in 2^{\mathcal X}$ is an assignment to the input variables. \item There is a transition from $s$ to $(s,u)$ if $(s,p(u))\models \rho^e$; such a transition corresponds to an environment step and its weight is 0. \item There is a transition from $(s,u)$ to $t \in 2^{\mathcal V}$ if $u=t|_\mathcal{X}$ and $(s,p(t))\models \rho^s$; such a transition corresponds to a system step and its weight is $w^s(s,p(t))$. \item The priorities of all the states $s \in 2^{\mathcal V}$ in $G^{ep}$ remain the same as those in $G^{w}$, while every state $(s,u)$ in $G^{ep}$ is assigned the same priority as $s$. \end{itemize} \noindent Finally, we observe that, in fact, the construction of $G^{ep}$ applies not only to B\"{u}chi winning conditions, but to arbitrary parity conditions. That is, this construction reduces a parity WGS with $d$ priorities to an energy parity game $G^{ep}$ with $d$ priorities and at most $N^2+N$ states. Therefore, we also conclude the sufficiency of the tighter bound $d(N^2+N-1)K$ for parity WGSs with $d$ different priorities. \section{Conclusion} \label{sec:conclusion} We have introduced energy $\mu$-calculus, a multi-valued extension of the game $\mu$-calculus~\cite{EmersonJ91} over symbolic game structures~\cite{BJP+12} that serves as a symbolic framework for solving $\omega$-regular energy games. Existing, well-known game $\mu$-calculus formulas $\psi$ that solve $\omega$-regular games can be seamlessly reused as energy $\mu$-calculus formulas $\psi^\op{E}$ to solve corresponding energy-augmented games (see Thm.~\ref{thm:sysEngMuCalcCorrectness} and Thm.~\ref{thm:envEngMuCalcCorrectness}). The semantics of $\psi^\op{E}$ immediately prescribes a symbolic algorithm to solve the underlying $\omega$-regular energy games (cf. Alg.~\ref{alg:buchiEnergy}). The semantics of energy $\mu$-calculus is defined w.r.t. finite upper bounds. Nevertheless, we have shown that energy $\mu$-calculus solves both the decision and the minimum credit problems (i.e., problems~\ref{problem:decision} and~\ref{problem:optimal} in Sect.~\ref{sec:solvingEnergyGames}), also with an unbounded energy level accumulation. We have obtained this result by showing that every $\omega$-regular winning condition admits a sufficiently large upper bound under which the bounded energy level accumulation coincides with the unbounded one. Moreover, importantly, although it is finite, the sufficient bound still enables the system player to win without increasing the initial credit. We have introduced a sufficient bound that depends on the size of the state space, the maximal absolute weight, and the length and the alternation depth of the game $\mu$-calculus formula that solves the $\omega$-regular game (see Thm.~\ref{Thm:a-sufficient-bound}).
To prove this bound, we have reduced $\omega$-regular energy games over symbolic weighted game structures to energy parity games~\cite{ChatterjeeD12}. This reduction, whose construction uses the $\mu$-calculus formula that solves the game, establishes a connection to the sufficient bound that we have obtained for energy parity games (see Lem.~\ref{lem:lemma-6-revised}). \subsection*{Future Work} The game $\mu$-calculus has not only been used to compute the sets of winning states, but also to synthesize winning strategies; see, e.g.,~\cite{BJP+12,BruseFL14,2001automata,KonighoferHB13}. Thus, in addition to solving the decision and the minimum credit problems, we believe that energy $\mu$-calculus can augment $\mu$-calculus-based strategy synthesis with energy. That is, we conjecture that finite-memory winning strategies may be extracted from the intermediate energy functions of the fixed-point iterations. \section*{Acknowledgment} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 638049, SYNTECH). \appendix \section{\texorpdfstring{Energy $\mu$-Calculus Over Weighted Game Structures}{Energy mu-Calculus Over Weighted Game Structures}}\label{app:energyMuCalc} \subsection{\texorpdfstring{$\ensuremath{\mathit{env}\text{-Energy}}$ $\mu$-Calculus: Full Definition}{env-Energy mu-Calculus: Full Definition}}\label{app:energyMuCalc:extDefinitions} In this appendix, we provide the full definition of the semantics of $\ensuremath{\mathit{env}\text{-energy}}$ $\mu$-calculus, as a supplement to Sect.~\ref{sec:EngMuCalcEnvSemantics}. \begin{defi}[Dual energy controllable predecessor operator]\label{def:dualECpre} For all WGSs $\langle G, w^s\rangle$, upper bounds $c\in \mathbb{N}$, energy functions $f\in \mathit{EF(c)}$, and states $s \in 2^{\mathcal{V}}$: \begin{align*} &{\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}(f)(s)} := {\min\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\max\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\overline{\textsf{EC}}_c((s, p(s_{\mathcal{X}},s_{\mathcal{Y}})), f(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack}\\& \text{where ${\overline{\textsf{EC}}_c} : {{{2^{\mathcal{V} \cup \mathcal{V'}}}\times{{\mathit{E(c)}}}}\rightarrow{\mathit{E(c)}}}$ and for all $s \in 2^{\mathcal V}$, $s' \in 2^{\mathcal V'}$, and $e \in \mathit{E(c)}$,}\\& {\overline{\textsf{EC}}_c((s,s'),e)} = \begin{cases} +\infty,\ &\mbox{if $(s,s')\not\models{\rho^e}$}\\ 0,\ &\mbox{if $e=0$ or $(s,s')\models{\rho^e\wedge{\neg\rho^s}}$}\\ 0,\ &\mbox{if $e=+\infty$ and $w^{s}(s,s')+c < 0$}\\ +\infty,\ &\mbox{if $e=+\infty$ and $w^{s}(s,s')\geq 0$}\\ c+1+w^{s}(s,s'),\ &\mbox{if $e=+\infty$} \\ 0,\ &\mbox{if $e + w^{s}(s,s') \leq 0$}\\ +\infty, &\mbox{if $e + w^{s}(s,s') > c$}\\ e + w^{s}(s,s'),\ &\mbox{otherwise} \end{cases}& \end{align*} \noindent where the cases are read in order from top to bottom, i.e., the first clause that applies determines the value. \end{defi} \begin{defi}[$\ensuremath{\mathit{env}\text{-energy}}$ $\mu$-calculus: semantics]\label{def:envEngMuCalcSemantics} The semantics $\llbracket \psi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}$ of $\psi\in{\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}}$ w.r.t.
a finite WGS $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$, a finite upper bound $c \in \mathbb{N}$, and a valuation $\ensuremath{\mathcal{D}}: \mathit{Var} \rightarrow \mathit{EF(c)}$ over $\mathit{EF(c)}$, is inductively defined for all states $s\in{2^{\mathcal{V}}}$, as follows: \begin{itemize} \setlength\itemsep{0.5em} \item For $v\in\mathcal{V}$, $\llbracket{v}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \begin{cases} 0, & \text{ if } s\vDash{v} \\ +\infty, & \text{ if } s\nvDash{v} \end{cases}; \llbracket{\neg{v}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \begin{cases} +\infty, & \text{ if } s\vDash{v} \\ 0, & \text{ if } s\nvDash{v} \end{cases} $ \item For $X\in{\mathit{Var}}$, $\llbracket{X}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \ensuremath{\mathcal{D}}(X)(s)$. \item $\llbracket{\phi_1\vee{\phi_2}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \min(\llbracket{\phi_1}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}, \llbracket{\phi_2}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})(s)$. \item $\llbracket{\phi_1\wedge{\phi_2}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \max(\llbracket{\phi_1}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}, \llbracket{\phi_2}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})(s)$. \item $\llbracket{\circlebox_{\op{E}}\phi}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}(\llbracket\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}})(s)$. \item $\llbracket\twolinescurly{\mu}{\nu} X\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = \twolinescurly{\mathit{lfp}}{\mathit{gfp}} (\lambda{f}.\llbracket\phi\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{f}]})(s) = \twolinescurly{\min\limits_{i}}{\max\limits_{i}}\lbrack{h_i}\rbrack(s)$,\\ where $\twolinescurly{h_0=f_{+\infty}}{h_0=f_{0}}$ and $h_{i+1}= \llbracket{\phi}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{h_i}]}$. \end{itemize} \end{defi} \noindent The next lemma proves the correctness of the operator defined in Def.~\ref{def:dualECpre}. \begin{lem}[Correctness of Def.~\ref{def:dualECpre}]\label{lemma:dualECpre} The operator $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}:\mathit{EF(c)}\rightarrow{\mathit{EF(c)}}$ from Def.~\ref{def:dualECpre} is the dual of $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}$ from Def.~\ref{def:ECpre}. That is, for all $f \in \mathit{EF(c)}$, ${\sim \ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(f)}={\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}({\sim f})}$ where $\sim$ is the pointwise negation operation defined in Eq.~\ref{eq:negation}. 
\end{lem} \begin{proof} \begin{enumerate} \item\label{proof:dualECprePart1} First, we show that for all WGSs $G^{w} = \langle G , w^{s} \rangle$, upper bounds $c\in \mathbb{N}$, $s \in 2^{\mathcal{V}}$, $s_{\mathcal{X}}\in{2^\mathcal{X}}$, $s_{\mathcal{Y}}\in{2^\mathcal{Y}}$, and $f \in \mathit{EF(c)}$: \[{\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), f(s_{\mathcal{X}},s_{\mathcal{Y}}))}= {\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), \sim f(s_{\mathcal{X}},s_{\mathcal{Y}}))},\] where $s_{\mathcal{X'}}$ and $s_{\mathcal{Y'}}$ denote $p(s_{\mathcal{X}})$ and $p(s_{\mathcal{Y}})$, respectively. Also, let $e:=f(s_{\mathcal{X}},s_{\mathcal{Y}}) \text{ and } w := w^{s}(s,(s_{\mathcal{X'}},s_{\mathcal{Y'}}))$. \begin{itemize} \item If $(s,(s_{\mathcal{X'}},s_{\mathcal{Y'}}))\not\models{\rho^e}$, then according to Def.~\ref{def:ECpre} and Def.~\ref{def:dualECpre}, we have that ${\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), e)} = {0}$ and ${\sim{\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})),\sim{e})}} = {\sim+\infty} = {0}$. \item Otherwise, if $e=+\infty$ (iff $\sim e=0$) or $(s,(s_{\mathcal{X'}},s_{\mathcal{Y'}}))\models{\rho^e\wedge{\neg\rho^s}}$, we have that ${\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), e)} = {+\infty} = {\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), \sim{e})} = {\sim{0}}$. \item If $(s,(s_{\mathcal{X'}},s_{\mathcal{Y'}}))\models{\rho^e\wedge{\rho^s}}$ and $e=0$ (iff $\sim e=+\infty$), we have one of the following cases: (1) if $0-w > c$ (iff $w + c < 0$), then ${\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), 0)} = {+\infty}$ and ${\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), +\infty)} = {\sim{0}} = {+\infty}$; (2) if $0-w\leq0$ (iff $w\geq{0}$), then ${\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), 0)} = {0} = {\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})),+\infty)} = {\sim{+\infty}}$; (3) if $0<0-w\leq{c}$, we have that $1\leq c+1+w\leq c$, and thus, by the definition of $\sim$, ${\sim \overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), +\infty)} = {\sim{(c+1+w)}} = {(c+1)-(c+1+w)} = {-w} = {\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), 0)}$. \item In the remaining case we have that $(s,(s_{\mathcal{X'}},s_{\mathcal{Y'}}))\models{\rho^e\wedge{\rho^s}}$, $e\not=0$, and $e\not = +\infty$, thus $\sim{e}=c+1-e$; (1) if ${e-w} > {c}$, then ${\sim{e}+w} \leq {0}$, thus ${\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})),\sim{e})} = {\sim{0}} = {+\infty} = {\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), e)}$; (2) if ${e-w} \leq {0}$, then ${\sim{e}+w} > {c}$, thus ${\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), \sim{e})} = {\sim{+\infty}} = {0} = {\textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), e)}$; (3) if ${0} < {e-w} \leq {c}$, then $0 < {\sim{e} + w \leq{c}}$, and thus ${\sim\overline{\textsf{EC}}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), \sim{e})} = {\sim{(\sim e + w)}} = \linebreak(c+1) - (\sim e + w) = {e-w} = \textsf{EC}_c((s,(s_{\mathcal{X'}},s_{\mathcal{Y'}})), e)$. 
\end{itemize} \item From~\ref{proof:dualECprePart1} together with Lem.~\ref{lem:deMorganAlgebra}, it follows that for all $f \in \mathit{EF(c)}$ and $s \in 2^{\mathcal{V}}$:\\ ${\sim{\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(f)(s)}} = {\sim{\max\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\min\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\textsf{EC}_c((s,p(s_{\mathcal{X}}, s_{\mathcal{Y}})), f(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack}} \\= {\sim{\max\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\min\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\sim\overline{\textsf{EC}}_c((s,p(s_{\mathcal{X}}, s_{\mathcal{Y}})), \sim f(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack}} \\= {\sim{\max\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\sim\max\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\overline{\textsf{EC}}_c((s,p(s_{\mathcal{X}}, s_{\mathcal{Y}})), \sim {f(s_{\mathcal{X}},s_{\mathcal{Y}}))}\rbrack}} \\= {\sim\sim\min\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\max\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\overline{\textsf{EC}}_c((s,p(s_{\mathcal{X}}, s_{\mathcal{Y}})), \sim f(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack} \\= {\min\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\max\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\overline{\textsf{EC}}_c((s,p(s_{\mathcal{X}}, s_{\mathcal{Y}})), \sim f(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack} ={\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{env}}}}}(\sim f)(s)}$.\qedhere \end{enumerate} \end{proof} \subsection{Proofs}\label{app:energyMuCalc:proofs} \subsubsection{Proofs of Sect.~\ref{sec:EngMuCalcSyntaxSemantics}} \begin{proof}[Proof of Prop.~\ref{prop:ECpreMonotone}] Let $c \in \mathbb{N}$ be an upper bound, and let $f,g \in \mathit{EF(c)}$ such that ${f}\preceq{g}$. First, given $s \in 2^{\mathcal{V}}$, $s_{\mathcal{X}}\in{2^\mathcal{X}}$, and $s_{\mathcal{Y}}\in{2^\mathcal{Y}}$, we show that ${\textsf{E}(f)} \preceq {\textsf{E}(g)}$ where for all $h \in \mathit{EF(c)}$, ${\textsf{E}(h)} := {\textsf{EC}_c((s,p(s_{\mathcal{X}},s_{\mathcal{Y}})), h(s_{\mathcal{X}},s_{\mathcal{Y}}))}$. If $(s,p(s_{\mathcal{X}},s_{\mathcal{Y}})) \not \models \rho^{e}$, then $\textsf{E}(f) = \textsf{E}(g) = 0$. Otherwise, if $f(s_{\mathcal{X}},s_{\mathcal{Y}}) = +\infty$ or $(s, p(s_{\mathcal{X}},s_{\mathcal{Y}})) \models \rho^{e} \wedge \neg \rho^{s}$, it follows that $\textsf{E}(f) = +\infty \preceq \textsf{E}(g)$. Otherwise, we have that $(s,p(s_{\mathcal{X}},s_{\mathcal{Y}})) \models \rho^{e} \wedge \rho^{s}$ and $f(s_{\mathcal{X}},s_{\mathcal{Y}}) \not = +\infty$, thus $g(s_{\mathcal{X}},s_{\mathcal{Y}}) \not = +\infty$ and $g(s_{\mathcal{X}},s_{\mathcal{Y}}) \leq f(s_{\mathcal{X}},s_{\mathcal{Y}})$. In this case, (1) if $f(s_{\mathcal{X}},s_{\mathcal{Y}}) - w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}})) > c$, then $\textsf{E}(f) = +\infty \preceq \textsf{E}(g)$; and (2) if $g(s_{\mathcal{X}},s_{\mathcal{Y}}) - w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}})) \leq f(s_{\mathcal{X}},s_{\mathcal{Y}}) - w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}})) \leq c$, then $\textsf{E}(f) = \max[0, f(s_{\mathcal{X}},s_{\mathcal{Y}}) - w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))] \preceq \max[0, g(s_{\mathcal{X}},s_{\mathcal{Y}}) - w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))] = \textsf{E}(g)$. Second, we show that $\ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(f) \preceq \ensuremath{\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}}(g)$. 
Let $s\in 2^{\mathcal{V}}$ be a state, and let us show that $\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}(f)(s) \preceq \mathit{ECpre_{{\scriptstyle\mathit{sys}}}}(g)(s)$. Note that by the above, we have that for every $s_{\mathcal{X}}\in{2^\mathcal{X}}$ and $s_{\mathcal{Y}}\in{2^\mathcal{Y}}$: ${\textsf{EC}_c((s,p(s_{\mathcal{X}},s_{\mathcal{Y}})), f(s_{\mathcal{X}},s_{\mathcal{Y}}))} \preceq {\textsf{EC}_c((s, p(s_{\mathcal{X}},s_{\mathcal{Y}})), g(s_{\mathcal{X}},s_{\mathcal{Y}}))}$. Since $\min$ and $\max$ are monotone w.r.t. $\preceq$ in all of their arguments, it follows that ${\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}(f)(s)} \preceq {\mathit{ECpre_{{\scriptstyle\mathit{sys}}}}(g)(s)}$. \end{proof} \begin{proof}[Proof of Lem.~\ref{lem:deMorganAlgebra}] It holds that ${\mathit{EFL(c)}} = {\langle \mathit{EF(c)}, \min, \max, f_{+\infty}, f_{0} \rangle}$ is a bounded distributive lattice. Thus, it remains to show that the unary operator $\sim$ satisfies the following axioms: (1) for all $x \in {\mathit{E(c)}}$: ${\sim\sim{x}} = {x}$ (\emph{involution}), and (2) for all $x,y\in{\mathit{E(c)}}$: ${\sim(\max(x,y))} = {\min(\sim{x},\sim{y})}$ (\emph{De Morgan's laws}).\footnote{Note that the axiom: $\forall x,y\in{\mathit{E(c)}}: {\sim(\min(x,y))} = {\max(\sim{x},\sim{y})}$ also holds, since (1) and (2) imply that ${\sim(\min(x,y))} = {\sim(\min(\sim\sim{x}, \sim\sim{y}))} = {\sim{(\sim(\max(\sim{x}, \sim{y})))}} = {\max(\sim{x},{\sim{y}})}$.} For instance, for $c=3$: $\sim{0}=+\infty$, $\sim{1}=3$, $\sim{2}=2$, $\sim{3}=1$, and $\sim{+\infty}=0$. \begin{enumerate} \item[(1)] For $x=+\infty$, we have that ${\sim\sim{x}} = {\sim{0}} = {+\infty} = {x}$, and similarly for ${x}={0}$. For $x \in{\mathit{E(c)}}{\setminus{\{+\infty,0\}}}$, it holds that ${\sim{\sim{x}}} = {c+1-\sim{x}} = {c+1-(c+1-x)} = {x}$. \item[(2)] Let $x,y\in{\mathit{E(c)}}$: \begin{itemize} \item if $x=+\infty$ or $y=+\infty$, then ${\sim(\max(x,y))} = {\sim{+\infty}} = {0} = {\min(\sim{x},\sim{y})}$; \item if $x=0$ and $y \not=+\infty$, then ${\sim(\max(x,y))} = {\sim{y}} = {\min(+\infty,\sim{y})} = {\min(\sim{x},\sim{y})}$ (and similarly if $x\not=+\infty$ and $y=0$); \item if $x,y\in{\mathit{E(c)}}{\setminus{\{+\infty,0\}}}$, then it holds that $\max(x,y)\in{\mathit{E(c)}}{\setminus{\{+\infty,0\}}}$, and thus ${\sim(\max(x,y))} = {c+1-\max(x,y)} = {c+1 + \min (-x, -y)} = \min(c+1-x, c+1-y) = \min(\sim{x}, \sim{y})$.\qedhere \end{itemize} \end{enumerate} \end{proof} \subsubsection{Proofs of Sect.~\ref{sec:EngMuCalcCorrect}}\label{app:energyMuCalc:proofs:correctness} \begin{proof}[Proof of Thm.~\ref{thm:reductionCorrectness}] First, we prove the claim for the system player. Assume that $g^*$ is a winning strategy for the system from $(s,c_0)$ in $G^*$. We define a strategy $g$ for the system in $G^w$, as follows. For $k \geq 0$, take a prefix $s_{0},\dots,s_k,s_\mathcal{X}\in (2^\mathcal V)^{+}2^\mathcal X$ such that $s_{0}=s$ and $(s_{k}, p(s_{\mathcal{X}}))\models \rho^{e}$. If there are $(c_1,\dots,c_k) \in ([0,c])^{k}$ such that $(s,c_0),(s_1,c_1),\dots,(s_k,c_k) \in (2^{\mathcal{V}^{*}})^{+}$ is consistent with $g^{*}$, choose such credits, let $(s_\mathcal Y,c_{k+1})=g^*( (s,c_0),(s_1,c_1),\dots,(s_k,c_k),s_\mathcal X )$, and define $g(s,s_1,\dots,s_k,s_\mathcal{X})=s_\mathcal Y$. By the construction of $G^*$, it is not difficult to prove by induction on $k$ that \begin{itemize} \item $(s_{k},p(s_\mathcal X,s_\mathcal Y))\models \rho^s$.
\item $s_{0},\ldots,s_{k},(s_\mathcal X,s_\mathcal Y)$ is the projection to $\mathcal V$ of the unique prefix of a play in $G^{*}$, $(s_{0},c_{0}),\ldots,\linebreak(s_{k}, c_{k}),(s_\mathcal X,s_\mathcal Y, c_{k+1})$, consistent with $g^{*}$. \item ${\sf EL}_c(G^w,c_0,(s_{0},\ldots,s_{k},(s_\mathcal X,s_\mathcal Y)))\geq c_{k+1}$. \end{itemize} Hence, since $g^*$ wins for the system from $(s,c_0)$, we get that $g$ wins for the system from $s$ w.r.t. $c$ for the initial credit $c_0$. For the other direction, assume that $g$ is a winning strategy for the system from $s$ w.r.t. $c$ for an initial credit $c_0$, in $G^w$. We define a strategy $g^*$ for the system in $G^*$ from $(s,c_0)$, as follows: \begin{itemize} \item $g^*=\bigcup_{i=1}^{+\infty}g^*_i$ where $g_i^*:(2^{\mathcal{V}^*})^i 2^\mathcal X\rightarrow 2^{\mathcal{Y}^{*}}$. \item For $\sigma= s, s_\mathcal X\in 2^{\mathcal V} 2^\mathcal X$ such that $(s,p(s_\mathcal X))\models \rho^e$, let $g(\sigma)=s_\mathcal Y$. Then, we set $g^*_1((s,c_0),s_\mathcal X) =(s_\mathcal Y,\min\{c,c_0+w^s(s,p(s_\mathcal X,s_\mathcal Y))\})$. \item Assume that $g_i^*$ has been defined. Take $\sigma=(s,c_0),(s_1,c_1),\dots,(s_{i},c_{i}), s_\mathcal X\in (2^{\mathcal V^*})^{i+1} 2^\mathcal X$ such that $(s,c_0),(s_1,c_1),\dots,(s_{i},c_{i})$ is consistent with $g_i^*$ and $(s_{i},p(s_\mathcal X))\models \rho^e$. Let $g(s,s_1,\dots,s_{i},s_\mathcal X)=s_\mathcal Y$. Then, we set $g^*_{i+1}(\sigma) =(s_\mathcal Y,\min\{c,c_{i}+w^s(s_{i},p(s_\mathcal X,s_\mathcal Y))\})$. \end{itemize} By induction on $i$, we get that if $(s,c_0),(s_1,c_1),\dots,(s_{i},c_{i})$ is consistent with $g_i^*$ and $s_\mathcal X \in 2^{\mathcal{X}}$, the following holds: \begin{enumerate} \item $(s,s_1,\dots,s_{i})$ is consistent with $g$. \item If ${(s_{i},p(s_\mathcal X))\models \rho^e}$, then for ${s_\mathcal Y=g(s,s_1,\dots,s_{i},s_\mathcal X)}$,\\${g^*_{i+1}}({(s,c_0)},{(s_1,c_1)},\dots, {(s_{i}, c_{i})},s_\mathcal X) = (s_\mathcal Y,{\sf EL}_c(G^w,c_0,(s,s_1,\dots,s_{i},(s_\mathcal X,s_\mathcal Y))))$. \end{enumerate} Since $g$ is winning for the system, it follows that $g_{i+1}^*$ is well-defined (and hence $g^*$ is well-defined), and $g^*$ is winning for the system from $(s,c_0)$. We turn now to prove the claim for the environment. First, assume that $g$ is a winning strategy for the environment from $s$ in $G^w$ w.r.t. $c$ for an initial credit $c_0$. Then, the system cannot win from $(s,c_0)$ in $G^*$. Since $\omega$-regular games are determined, the environment has a winning strategy from $(s,c_0)$ in $G^*$. For the other direction, assume that $g^*$ is a winning strategy for the environment in $G^*$ from $(s,c_0)$, and we construct a winning strategy $g$ for the environment in $G^w$ from $s$ w.r.t. $c$ for an initial credit $c_0$. We define $g=\bigcup_{i=1}^{+\infty} g_i$ where $g_i:(2^\mathcal V)^i\rightarrow 2^\mathcal X$, such that for $i>1$, the following holds: \begin{quote} $g_i(s,s_1,\cdots,s_{i-1})$ is defined iff $(s,c_0),(s_1,c_1), \dots,(s_{i-1},c_{i-1})$ is consistent with $g^*$ where, for $l\in\{1,\dots,i-1\}$, $c_l={\sf EL}_c(G^w,c_0,(s,s_1, \dots,s_l))$. Furthermore, in this case, $g_i(s,s_1,\dots,s_{i-1})=g^*( (s,c_0),(s_1,c_1),\dots,(s_{i-1},c_{i-1}))$. \end{quote} \noindent The construction of $g$ is by induction on $i$ as follows: \begin{itemize} \item $g_1(s)=g^*(s,c_0)$. \item Assume that $g_i$ has been defined, and we aim to define $g_{i+1}$. Let $g_i(s,s_1,\dots,s_{i-1})=s_\mathcal X$.
Take $s_\mathcal Y\in 2^\mathcal Y$ such that $(s_{i-1},p(s_\mathcal X,s_\mathcal Y))\models \rho^s$. \begin{itemize} \item First, if ${\sf EL}_c(G^w,c_0,(s,s_1,\dots,s_{i-1},(s_\mathcal X,s_\mathcal Y)))<0$, the environment wins under this particular choice of the system, and there is no need to define $g_{i+1}( s,s_1,\dots,s_{i-1},(s_\mathcal X,s_\mathcal Y) )$. \item Otherwise, for $l\in\{1,\dots,{i-1}\}$, let $c_l={\sf EL}_c(G^w,c_0,(s,s_1,\dots,s_{l}))$. Therefore, by the induction hypothesis, we have that $(s,c_0),(s_1,c_1),\dots,(s_{i-1},c_{i-1})$ is consistent with $g^*$ and $g^*( (s,c_0),(s_1,c_1),\dots,(s_{i-1},c_{i-1}) )=s_\mathcal X$. Let $s_i=(s_\mathcal X,s_\mathcal Y)$ and $c_i={\sf EL}_c(G^w, c_0, (s,s_1,\dots,s_i) )=\min\{c,c_{i-1}+w^s(s_{i-1},p(s_i))\}\geq 0$. By the construction of $G^*$ (Def.~\ref{def:naiveReduction}), $( (s_{i-1},c_{i-1}), p(s_i,c_i) )\models \rho^{s*}$; we define $g_{i+1}(s,s_1,\dots,s_i)=s_\mathcal X'$ where $s_\mathcal X'=g^*((s,c_0),(s_1,c_1),\dots,(s_i,c_i))$. \end{itemize} \end{itemize} \noindent Now, we prove that $g$ indeed wins for the environment. Consider a play $\sigma$ from $s$, consistent with $g$. If $\sigma$ ends in a deadlock for the system, or the energy level decreases in $\sigma$ to a negative value, the environment wins. If $\sigma$ is infinite and the energy level remains non-negative along $\sigma$, write $\sigma=s,s_1,s_2,\dots$ and observe that $\sigma^*=(s,c_0),(s_1,c_1),(s_2,c_2),\dots$ is consistent with $g^*$ where $c_l={\sf EL}_c(G^w,c_0,(s,s_1,\dots,s_l))$. Hence, by assumption, $\sigma^{*}$ does not satisfy $\varphi$. This implies that $\sigma$ does not satisfy $\varphi$ and, consequently, wins for the environment. It is left to show that $\sigma$ does not end in a deadlock for the environment. Assume otherwise, and take a prefix, $\sigma=s,s_1,\dots,s_k$, that is consistent with $g$ and reaches such a deadlock. Hence, $(s,c_0),(s_1,c_1),\dots,(s_k,c_k)$ is consistent with $g^*$, where $c_0,\dots,c_k$ are defined as before. Since there is no $s_\mathcal X\in 2^\mathcal X$ such that $(s_k,p(s_\mathcal X))\models \rho^e$, $(s_k,c_k)$ is a deadlock for the environment in $G^*$. This, of course, contradicts the assumption that $g^*$ is a strategy that wins for the environment in $G^*$. \end{proof} \begin{proof}[Proof of Lem.~\ref{lem:sysEngMuCalc}] The proof is by induction on the structure of $\psi \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$. \begin{itemize} \item $\psi = v$ for $v \in\mathcal{V}$: $\mathit{val} \preceq\llbracket{v}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff $\llbracket{v}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = 0$ iff $s \models v$ iff $(s, \mathit{val}) \models v$ iff $(s, \mathit{val})\in\llbracket{v}\rrbracket^{G^{*}}_{\mathcal{E}}$. \item $\psi = \neg v$ for $v \in\mathcal{V}$: $\mathit{val} \preceq\llbracket{\neg v}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff $\llbracket{\neg v}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) = 0$ iff $(s, \mathit{val})\!\in\!\llbracket{ \neg v}\rrbracket^{G^{*}}_{\mathcal{E}}$. \item $\psi = X$: $\mathit{val} \preceq\llbracket{X}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff $\mathit{val} \preceq \ensuremath{\mathcal{D}}(X)(s)$ iff$_{\text{(premise)}}$ $(s, \mathit{val}) \in \mathcal{E}(X)$ iff $(s,\mathit{val})\in\llbracket{X}\rrbracket^{G^{*}}_{\mathcal{E}}$.
\item $\psi = \phi_{1} \vee \phi_{2}$: $\mathit{val} \preceq\llbracket{\phi_{1}^{\op{E}} \vee \phi_{2}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff $\mathit{val} \preceq \min(\llbracket{\phi_{1}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s),\llbracket{\phi_{2}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s))$ iff $\mathit{val} \preceq \llbracket{\phi_{1}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ or $\mathit{val} \preceq \llbracket{\phi_{2}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff$_{\text{(i.h.)}}$ $(s, \mathit{val})\in\llbracket{ \phi_{1}}\rrbracket^{G^{*}}_{\mathcal{E}}$ or $(s, \mathit{val})\in\llbracket{ \phi_{2}}\rrbracket^{G^{*}}_{\mathcal{E}}$ iff $(s, \mathit{val})\in\llbracket{ \phi_{1} \vee \phi_{2}}\rrbracket^{G^{*}}_{\mathcal{E}}$. \item $\psi = \phi_{1} \wedge \phi_{2}$: $\mathit{val} \preceq\llbracket{\phi_{1}^{\op{E}} \wedge \phi_{2}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$\\ iff $\mathit{val}\preceq \max(\llbracket{\phi_{1}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s), \llbracket{\phi_{2}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s))$\\ iff $\mathit{val}\preceq \llbracket{\phi_{1}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ and $\mathit{val} \preceq \llbracket{\phi_{2}^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff$_{\text{(i.h.)}}$ $(s, \mathit{val})\in\llbracket{ \phi_{1}}\rrbracket^{G^{*}}_{\mathcal{E}}$ and $(s, \mathit{val})\in\llbracket{ \phi_{2}}\rrbracket^{G^{*}}_{\mathcal{E}}$ iff $(s, \mathit{val})\in\llbracket{ \phi_{1} \wedge \phi_{2}}\rrbracket^{G^{*}}_{\mathcal{E}}$. 
\item $\psi = \circlediamond\phi$: \begin{equation*} \mathit{val}\preceq\llbracket{\circlediamond_{\op{E}}\phi^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) \end{equation*} iff \begin{equation*} \mathit{val}\preceq \max\limits_{s_{\mathcal{X}}\in{2^\mathcal{X}}} \lbrack\min\limits_{s_{\mathcal{Y}}\in{2^\mathcal{Y}}}\textsf{EC}_c((s,p(s_{\mathcal{X}},s_{\mathcal{Y}})), \llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack \end{equation*} iff \begin{equation*} \forall s_{\mathcal{X}}\in 2^{\mathcal{X}} \exists s_{\mathcal{Y}}\in 2^{\mathcal{Y}}: \mathit{val} \preceq \textsf{EC}_c((s,p(s_{\mathcal{X}},s_{\mathcal{Y}})), \llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}})) \end{equation*} iff$_{\text{Def.~\ref{def:ECpre}}}$ \begin{equation*} \begin{split} \forall s_{\mathcal{X}}\in 2^{\mathcal{X}} \exists s_{\mathcal{Y}}\in 2^{\mathcal{Y}}:(s,p(s_{\mathcal{X}})) \models \rho^{e} &\Rightarrow \big[ (s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))\models \rho^{s} \text{ and }\\ &\llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}}) \not = +\infty \text{ and}\\ \mathit{val} \preceq \max \lbrack 0,& \llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}}) - w^{s}(s, p(s_{\mathcal{X}},s_{\mathcal{Y}}))\rbrack\big ] \end{split} \end{equation*} iff \begin{equation*} \begin{split} \forall s_{\mathcal{X}}\in 2^{\mathcal{X}} \exists s_{\mathcal{Y}}\in 2^{\mathcal{Y}} : (s,p(s_{\mathcal{X}})) \models \rho^{e} \Rightarrow \big[ (s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))\models \rho^{s} &\text{ and }\\ \min[c, \mathit{val} + w^{s}(s, p(s_{\mathcal{X}},s_{\mathcal{Y}}))] \preceq \llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}})\big] \end{split} \end{equation*} iff$_{(\bigstar)}$ \begin{equation*} \begin{split} \forall s_{\mathcal{X}}\in 2^{\mathcal{X}} \exists s_{\mathcal{Y^{*}}}\in 2^{\mathcal{Y^{*}}} : (s, \mathit{val},p(s_{\mathcal{X}})) \models \rho^{e} &\Rightarrow\\ \big[(s,\mathit{val},p(s_{\mathcal{X}}, s_{\mathcal{Y^{*}}}))\models \rho^{s*} \text{ and } (s_{\mathcal{X}},s_{\mathcal{Y^{*}}}) \in \llbracket{\phi}\rrbracket^{G^{*}}_{\mathcal{E}}\big] \end{split} \end{equation*} iff \begin{equation*} (s,\mathit{val})\in\llbracket{\circlediamond\phi}\rrbracket^{G^{*}}_{\mathcal{E}}. \end{equation*} $(\bigstar)$: \begin{itemize} \item \emph{``if'':} Assume that for all $s_{\mathcal{X}}\in 2^{\mathcal{X}}$ such that $(s, \mathit{val},p(s_{\mathcal{X}})) \models \rho^{e}$, there exists $s_{\mathcal{Y^{*}}}\in 2^{\mathcal{Y^{*}}}$ such that $(s,\mathit{val},p(s_{\mathcal{X}},s_{\mathcal{Y^{*}}}))\models \rho^{s*} \text{ and } (s_{\mathcal{X}},s_{\mathcal{Y^{*}}}) \in \llbracket{\phi}\rrbracket^{G^{*}}_{\mathcal{E}}$. Let $s_{\mathcal{X}}\in 2^{\mathcal{X}}$ such that $(s,p(s_{\mathcal{X}})) \models \rho^{e}$. 
As $(s, \mathit{val},p(s_{\mathcal{X}})) \models \rho^{e}$, the premise implies that there exists $s_{\mathcal{Y^{*}}}\in 2^{\mathcal{Y^{*}}}$ such that $(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))\models \rho^{s}$, $\mathit{val} + w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}})) \geq \mathit{val}'$, and $(s_{\mathcal{X}},s_{\mathcal{Y^{*}}})\in \llbracket{\phi}\rrbracket^{G^{*}}_{\mathcal{E}}$, where $s_{\mathcal{Y}} = s_{\mathcal{Y^{*}}}|_{\mathcal{Y}}$, $\mathit{val}'\in [0,c]$, and $\mathit{val}' = s_{\mathcal{Y^{*}}}|_{yDom}$. Therefore, $\min[c, \mathit{val} + w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))] \preceq \mathit{val}'$, and it follows from the induction hypothesis that $\mathit{val}' \preceq \llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}})$. This implies that $\min[c,\mathit{val} + w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))]\!\preceq\!\llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}})$, as required. \item \emph{``only if'':} Assume that for all $s_{\mathcal{X}}\in 2^{\mathcal{X}}$ such that $(s,p(s_{\mathcal{X}})) \models \rho^{e}$, there exists $s_{\mathcal{Y}}\in 2^{\mathcal{Y}}$ such that $(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))\!\models\!\rho^{s}$ and $\min[c,\mathit{val} + w^{s}(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))] \preceq \llbracket\phi^{\op{E}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s_{\mathcal{X}},s_{\mathcal{Y}})$. Let $s_{\mathcal{X}}\in 2^{\mathcal{X}}$ such that $(s, \mathit{val},p(s_{\mathcal{X}})) \models \rho^{e}$. As it also holds that $(s,p(s_{\mathcal{X}})) \models \rho^{e}$, it follows from the premise and the induction hypothesis that there exists $s_{\mathcal{Y}}\in 2^{\mathcal{Y}}$ such that $(s,p(s_{\mathcal{X}},s_{\mathcal{Y}}))\models \rho^{s}$ and $(s_{\mathcal{X}},s_{\mathcal{Y^{*}}}) \in \llbracket{\phi}\rrbracket^{G^{*}}_{\mathcal{E}}$ where $s_{\mathcal{Y^{*}}} = (s_{\mathcal{Y}}, \min[c, \mathit{val} + w^{s}(s, p(s_{\mathcal{X}},s_{\mathcal{Y}}))])$. Since $\mathit{val} + w^{s}(s, p(s_{\mathcal{X}},s_{\mathcal{Y}})) \geq \min[c,\mathit{val} + w^{s}(s, p(s_{\mathcal{X}},s_{\mathcal{Y}}))]$, it also holds that $(s,\mathit{val},p(s_{\mathcal{X}},s_{\mathcal{Y^{*}}}))\models \rho^{s*}$, as required. \end{itemize} \item $\psi = \mu X \phi$ (We only show the proof for $\mu$ as the proof for $\nu$ is similar): Note that (1) $\mathit{val} \preceq \llbracket{\mu X \phi^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s)$ iff$_{(\text{Def.~\ref{def:sysEngMuCalcSemantics}})}$ $\mathit{val} \preceq \min\limits_{i}\lbrack{h_i}\rbrack(s)$ where $h_0=f_{+\infty}$ and $h_{i+1}= \llbracket{\phi^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{h_i}]}$; and (2) $(s,\mathit{val}) \in \llbracket{\mu X \phi}\rrbracket^{G^{*}}_{\mathcal{E}}$ iff$_{(\text{Def.~\ref{def:prop_mu_calculus_semantics}})}$ $(s,\mathit{val})\in\bigcup\limits_{i}{S_i}$ where $S_0=\emptyset$ and $S_{i+1}= \llbracket{\phi}\rrbracket^{G^{*}}_{\mathcal{E}[X\mapsto{S_i}]}$. We show by induction that for all $i \in \mathbb{N}$, $\mathit{val} \preceq h_{i}(s)$ iff $(s,\mathit{val})\in S_{i}$: \begin{itemize} \item \emph{Basis:} It holds that $\mathit{val} \not\preceq +\infty = h_{0}(s)$ and $(s,\mathit{val})\not\in \emptyset = S_{0}$.
\item \emph{Step:} It follows from the premise and the induction hypothesis (over $h_{i}$ and $S_{i}$) that for all $Y \in \mathit{Var}$, for all states $s'' \in 2^{\mathcal{V}}$, and for all $\mathit{val}'' \in [0,c]$: $\mathit{val}'' \preceq \ensuremath{\mathcal{D}}[X\mapsto{h_i}](Y)(s'')$ iff $(s'',\mathit{val}'')\in \mathcal{E}[X\mapsto{S_i}](Y)$. Thus, the structural induction hypothesis ensures that ${{\mathit{val}} \preceq {h_{i+1}(s)}}= {\llbracket{\phi^{\op{E}}}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}[X\mapsto{h_i}]}(s)}$ iff ${{(s,\mathit{val})}\in {S_{i+1}}} = {\llbracket{\phi}\rrbracket^{G^{*}}_{\mathcal{E}[X\mapsto{S_i}]}}$. \end{itemize} This implies that $\mathit{val}\preceq\min\limits_{i}\lbrack{h_i}\rbrack(s)$ iff $(s,\mathit{val})\in\bigcup\limits_{i}{S_i}$, as required.\qedhere \end{itemize} \end{proof} \begin{proof}[Proof of Lem.~\ref{lem:envEngMuCalc}] This lemma can be proved by structural induction, as was the former lemma, but it is simpler (and shorter) to obtain this result from Lem.~\ref{lem:sysEngMuCalc}, using the semantics of negation. As it is omitted from Def.~\ref{def:prop_mu_calculus_grammar} and Def.~\ref{def:prop_mu_calculus_semantics}, we mention that for a $\mu$-calculus formula $\phi$, the semantics of its negation is defined by $\llbracket \neg \phi \rrbracket^G_\mathcal E:=2^\mathcal V\setminus \llbracket \phi \rrbracket^G_\mathcal E$. We further require that if $\phi = \mu X\phi_1(X)$ or $\phi = \nu X\phi_1(X)$, then $\phi_1$ is syntactically monotone in $X$, i.e., all free occurrences of $X$ in $\phi_1$ fall under an even number of negations. Recall that we defined the semantics of negation of energy $\mu$-calculus formulas in Sect.~\ref{sec:EngMuCalcEnvSemantics} and required the same syntactic restrictions. Moreover, notice that the following well-known equations hold for all $\mu$-calculus formulas~\cite{bradfieldmu,Schneider2004}: \begin{align} &{\llbracket \neg\neg\phi \rrbracket^G_\mathcal{E}}={\llbracket \phi \rrbracket^G_\mathcal{E}}.\\ &{\llbracket \neg(\phi \wedge (\text{resp. } \vee) \xi) \rrbracket^G_\mathcal{E}} = {\llbracket (\neg\phi \vee (\text{resp. } \wedge) \neg\xi) \rrbracket^G_\mathcal{E}}.\\ &{\llbracket \neg\circlediamond \text{(resp. $\circlebox$)}\phi \rrbracket^G_\mathcal{E}} = {\llbracket \circlebox \text{(resp. $\circlediamond$)}(\neg\phi) \rrbracket^G_\mathcal{E}}.\\ &{\llbracket \neg\mu\text{(resp. $\nu$)} X\phi(X) \rrbracket^G_\mathcal{E}} = {\llbracket \nu\text{(resp. $\mu$)}X \neg\phi(\neg X) \rrbracket^G_\mathcal{E}}. \end{align} \noindent By Lem.~\ref{lem:deMorganAlgebra}, we have that the same equations analogously hold for energy $\mu$-calculus formulas (see Eq.~\ref{eq:negation1}-\ref{eq:negation4}). We will refer to these equations as the \emph{negation laws}. Let $\mathcal{G}_{{\scriptstyle\mathit{sys}}}$ and $\mathcal{G}_{{\scriptstyle\mathit{env}}}$ (resp. $\mathcal{G}^{\op{E}}_{{\scriptstyle\mathit{sys}}}$ and $\mathcal{G}^{\op{E}}_{{\scriptstyle\mathit{env}}}$) respectively denote the grammars that generate the formulas $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ and $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$ as defined in Sect.~\ref{sec:propMuCalculus} (resp. $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ and $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$ as defined in Sect.~\ref{sec:EngMuCalcSyntaxSemantics}).
Let $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ and ${\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}}^{\neg}$ (resp. $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ and $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}^{\neg}$) be the sets of all formulas generated by $\mathcal{G}_{{\scriptstyle\mathit{sys}}}$ and $\mathcal{G}_{{\scriptstyle\mathit{env}}}$ (resp. $\mathcal{G}^{\op{E}}_{{\scriptstyle\mathit{sys}}}$ and $\mathcal{G}^{\op{E}}_{{\scriptstyle\mathit{env}}}$), respectively, together with the new rule of $\neg X$ for $X \in \mathit{Var}$, and in which all sub-formulas of the form $\mu X \phi$ or $\nu X \phi$ satisfy that all free occurrences of $X$ in $\phi$ are un-negated\footnote{This guarantees that the extremal fixed-points exist, and thus the semantics is well-defined~\cite[Chapter~3]{Schneider2004}.}. Note that $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg} \setminus \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, $\ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}^{\neg} \setminus \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}$, $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg} \setminus \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$, and $\ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}^{\neg} \setminus \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}$, each consists of formulas in which negated free variables, $\neg X$, occur. We will make use of the following claim, which can be proved by a standard structural induction on $\phi$. \begin{clm} \label{lem:envToSysNegation} Let $\phi \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}^{\neg}$ (resp. $\phi \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{env}}}}}^{\neg}$) and let $\eta$ be the final result of the application of the negation laws to $\neg \phi$. Then, $\eta \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ (resp. $\eta \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$) and for all variables $X \in \mathit{Var}$: \begin{enumerate} \item $X$ occurs free in $\phi$ iff $X$ occurs free in $\eta$. \item if $X$ occurs free in $\phi$, then \begin{itemize} \item if all free occurrences of $X$ in $\phi$ are un-negated, then all free occurrences of $X$ in $\eta$ are negated. \item if all free occurrences of $X$ in $\phi$ are negated, then all free occurrences of $X$ in $\eta$ are un-negated. \end{itemize} \end{enumerate} \end{clm} For $\phi \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ (resp. $\phi \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$) we denote by $\phi_{+} \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ (resp. $\phi_{+} \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$) the formula obtained from $\phi$ by replacing all occurrences of negated relational variables, $\neg X$, with their un-negated form, $X$. We will use the following claim, which states a relation between the semantics of $\phi \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ and $\phi_{+} \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$. As in the case of the former claim, this claim can also be proved by standard structural induction; thus, we leave the details to the reader.
\begin{clm} \label{lem:negVarNegVal} Let $\eta \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ (resp. $\eta \in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$) where each variable that occurs free in $\eta$ either only has negated occurrences or only has un-negated occurrences. Let $\mathcal{E} : \mathit{Var}\rightarrow (2^{\mathcal{V}} \rightarrow \{0,1\})$ (resp. $\ensuremath{\mathcal{D}}:\mathit{Var}\rightarrow \mathit{EF(c)}$) be a valuation, and let $\overline{\mathcal{E}}$ (resp. $\overline{\ensuremath{\mathcal{D}}}$) denote the valuation such that for all free variables $X$ that occur negated in $\eta$, $\overline{\mathcal{E}}(X) = 2^{\mathcal{V}} \setminus \mathcal{E}(X)$ (resp. $\overline{\ensuremath{\mathcal{D}}}(X) = \sim \ensuremath{\mathcal{D}}(X)$), and for all free variables $X$ that occur un-negated in $\eta$, $\overline{\mathcal{E}}(X) = \mathcal{E}(X)$ (resp. $\overline{\ensuremath{\mathcal{D}}}(X) = \ensuremath{\mathcal{D}}(X)$). Then, $\llbracket \eta \rrbracket^{G}_\mathcal{E} = \llbracket \eta_{+} \rrbracket^{G}_{\overline{\mathcal{E}}}$ (resp. $\llbracket \eta \rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}} = \llbracket \eta_{+} \rrbracket^{\ensuremath{{G^{w}(c)}}}_{\overline{\ensuremath{\mathcal{D}}}}$). \end{clm} Now, let $\psi \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}} \subseteq \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{env}}}}}^{\neg}$ be a formula that satisfies the premise. Let $\eta$ be the formula obtained by applying the negation laws to $\neg \psi$, and consequently, we have that $\llbracket \psi^\op{E}\rrbracket^{G^w(c)}_\mathcal D = \llbracket \neg\neg\psi^\op{E}\rrbracket^{G^w(c)}_\mathcal D = \sim\llbracket \eta^\op{E}\rrbracket^{G^w(c)}_\mathcal D$ and $\llbracket \psi \rrbracket^{G^*}_\mathcal E=2^{\mathcal{V}^{*}}\setminus \llbracket \eta \rrbracket^{G^*}_\mathcal E$. Since it follows from Claim~\ref{lem:envToSysNegation} that $\eta \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}^{\neg}$ and all free variables in $\eta$ occur negated (as all free variables in $\psi$ occur un-negated), by Claim~\ref{lem:negVarNegVal}, we have that $\llbracket \eta^\op{E}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}} = \llbracket \eta_{+}^\op{E}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\overline{\ensuremath{\mathcal{D}}}}$ and $\llbracket \eta \rrbracket^{G^*}_{\mathcal{E}} = \llbracket \eta_{+} \rrbracket^{G^*}_{\overline{\mathcal{E}}}$. Therefore, we obtain the following two equations: \begin{equation} \label{eq:phi-and-neg-phi} \llbracket \psi^\op E\rrbracket^{G^w(c)}_\mathcal D = \sim\llbracket \eta_{+}^\op{E}\rrbracket^{G^w(c)}_{\overline{\ensuremath{\mathcal{D}}}}. \end{equation} \begin{equation} \label{eq:partition-of-G*} \text{For all states, $s \in 2^{\mathcal{V}}$, and $c_0\in[0,c]$: } (s,c_0)\in \llbracket \psi \rrbracket^{G^*}_\mathcal E \Leftrightarrow (s,c_0)\notin \llbracket \eta_{+} \rrbracket^{G^*}_{\overline{\mathcal{E}}}. \end{equation} \noindent It further follows that for all free variables $X$ that occur in $\eta_{+}$ (resp. $\eta^{\op{E}}_{+}$), $\overline{\mathcal{E}}(X) = 2^{\mathcal{V}^{*}} \setminus \mathcal{E}(X)$ (resp. $\overline{\ensuremath{\mathcal{D}}}(X) = \sim \ensuremath{\mathcal{D}}(X)$).
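To make the construction concrete, consider a small instance, chosen purely for illustration (assuming a formula of this shape is generated by $\mathcal{G}_{{\scriptstyle\mathit{env}}}$): for $\psi = \circlebox(v \vee X)$ with $v \in \mathcal{V}$ and $X$ free and un-negated, applying the negation laws to $\neg\psi$ yields
\[
\eta = \circlediamond(\neg v \wedge \neg X), \qquad \eta_{+} = \circlediamond(\neg v \wedge X),
\]
so $X$ occurs negated in $\eta$, in accordance with Claim~\ref{lem:envToSysNegation}, and the flipped valuations are $\overline{\mathcal{E}}(X) = 2^{\mathcal{V}^{*}} \setminus \mathcal{E}(X)$ and $\overline{\ensuremath{\mathcal{D}}}(X) = \sim\ensuremath{\mathcal{D}}(X)$, as above.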
We claim that the following equation holds for all free variables $X$ that occur in $\eta_{+}$, states $s \in 2^{\mathcal{V}}$, and $\mathit{val} \in [0,c]$: \begin{equation} \label{eq:premiseSysLemma} \mathit{val}\preceq\overline{\ensuremath{\mathcal{D}}}(X)(s) \Leftrightarrow (s,\mathit{val}) \in \overline{\mathcal{E}}(X). \end{equation} The proof of Eq.~\ref{eq:premiseSysLemma} is as follows. Take a free variable $X$ that occurs in $\eta_{+}$, and a state $s\in 2^\mathcal V$. First, assume that $\ensuremath{\mathcal{D}}(X)(s) = +\infty$. Thus, by the premise, for all $\mathit{val} \in [0,c]$: $(s,c-\mathit{val}) \notin {\mathcal{E}}(X)$. This implies that for all $\mathit{val} \in [0,c]$: $(s,\mathit{val}) \in \overline{\mathcal{E}}(X)$. By the assumption, $\overline{\ensuremath{\mathcal{D}}}(X)(s) = 0$, and hence, $\forall \mathit{val}\in [0,c]: (\mathit{val}\preceq \overline{\ensuremath{\mathcal{D}}}(X)(s) \wedge (s,\mathit{val}) \in \overline{\mathcal{E}}(X))$, which implies Eq.~\ref{eq:premiseSysLemma}. Second, assume that $\ensuremath{\mathcal{D}}(X)(s) = 0$. Then, analogously to the first case, we obtain that $\forall \mathit{val}\in [0,c]: (\mathit{val}\npreceq \overline{\ensuremath{\mathcal{D}}}(X)(s) \wedge (s,\mathit{val}) \notin \overline{\mathcal{E}}(X))$, which likewise implies Eq.~\ref{eq:premiseSysLemma}. Third, we consider the remaining case where $\ensuremath{\mathcal{D}}(X)(s) = m \in [1,c]$. Thus, $\overline{\ensuremath{\mathcal{D}}}(X)(s) = c+1-m$, and by the premise, for all $\mathit{val}\in [0,c]:$ $\mathit{val}\in [m,c] \Leftrightarrow (s,c-\mathit{val}) \in {\mathcal{E}}(X)$. This implies that for all $\mathit{val}\in [0,c]:$ $\mathit{val}\in [0,m-1] \Leftrightarrow (s,c-\mathit{val}) \in \overline{{\mathcal{E}}}(X)$, iff for all $\mathit{val}\in [0,c]:$ $\mathit{val}\in [c+1-m,c] \Leftrightarrow (s,\mathit{val}) \in \overline{{\mathcal{E}}}(X)$, iff for all $\mathit{val}\in [0,c]:$ $\mathit{val} \preceq \overline{\ensuremath{\mathcal{D}}}(X)(s) \Leftrightarrow (s,\mathit{val}) \in \overline{{\mathcal{E}}}(X)$, which concludes the proof of Eq.~\ref{eq:premiseSysLemma}. Note that $\eta^{\op{E}}_{+}\in \ensuremath{{\mathcal{L}_{e\mu}^{{\scriptstyle\mathit{sys}}}}}$ and $\eta_{+}\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, and by Eq.~\ref{eq:premiseSysLemma}, the valuations $\overline{\ensuremath{\mathcal{D}}}$ and $\overline{\mathcal{E}}$ satisfy the premise of Lem.~\ref{lem:sysEngMuCalc}. Take $s\in 2^\mathcal V$ and, first, assume that $\llbracket \eta_{+}^\op E\rrbracket^{G^w(c)}_{\overline{\ensuremath{\mathcal{D}}}}(s)=+\infty$. Thus, by Eq.~\ref{eq:phi-and-neg-phi}, $\llbracket \psi^\op{E}\rrbracket^{G^w(c)}_\mathcal D(s)=0$, and by Lem.~\ref{lem:sysEngMuCalc} and Eq.~\ref{eq:partition-of-G*}, $\forall c_0\in [0,c]( (s,c_0)\in \llbracket \psi \rrbracket^{G^*}_\mathcal E )$. Therefore, $\forall \mathit{val}\in [0,c] ( \mathit{val}\preceq \llbracket \psi^\op{E}\rrbracket^{G^w(c)}_\mathcal D(s) \wedge (s,c-\mathit{val})\in \llbracket \psi \rrbracket^{G^*}_\mathcal E )$, which implies the claim. In the case where $\llbracket \eta_{+}^\op E\rrbracket^{G^w(c)}_{\overline{\ensuremath{\mathcal{D}}}}(s)=0$, analogous arguments reveal that $\forall \mathit{val}\in [0,c] ( \mathit{val}\npreceq \llbracket \psi^\op{E}\rrbracket^{G^w(c)}_\mathcal D(s) \wedge (s,c-\mathit{val})\notin \llbracket \psi \rrbracket^{G^*}_\mathcal E )$. It is left to deal with the case where $\llbracket \eta_{+}^\op E\rrbracket^{G^w(c)}_{\overline{\ensuremath{\mathcal{D}}}}(s)=m\in [1,c]$.
In this case, by Lem.~\ref{lem:sysEngMuCalc} and Eq.~\ref{eq:partition-of-G*}, $\forall \mathit{val}\in [0,c]\ (\mathit{val}<m \Leftrightarrow (s,\mathit{val})\in \llbracket \psi \rrbracket^{G^*}_\mathcal E )$. Therefore, for all $\mathit{val}\in[0,c]$, \[\mathit{val}\preceq \llbracket \psi^\op{E}\rrbracket^{\ensuremath{{G^{w}(c)}}}_{\ensuremath{\mathcal{D}}}(s) \Leftrightarrow_{\text{Eq.~\ref{eq:phi-and-neg-phi}}} \mathit{val}\preceq \sim\llbracket \eta_{+}^\op{E}\rrbracket^{G^w(c)}_{\overline{\ensuremath{\mathcal{D}}}}(s) \Leftrightarrow \mathit{val}\preceq c+1-m\] \[\Leftrightarrow c-\mathit{val}<m \Leftrightarrow (s,c-\mathit{val})\in \llbracket \psi \rrbracket^{G^*}_\mathcal E. \qedhere\] \end{proof} \section{Proof of Lem.~\ref{lem:lemma-6-revised}}\label{app:lemma-6-revised-proof} In this appendix, we present a full proof for Lem.~\ref{lem:lemma-6-revised}: \begin{quote} \textbf{Lemma.~\ref{lem:lemma-6-revised}.} Let $G$ be an energy parity game (as defined in Def.~\ref{def:energyParityGame}) with $n$ states, $d$ different priorities, and maximal absolute weight value $K$. If $\text{player}_0$ has a winning strategy from a state $s$ w.r.t. $+\infty$ for an initial credit $c_0$, then she has a winning strategy w.r.t. $d(n-1)K$ for an initial credit $\min\{c_0,(n-1)K\}$. \end{quote} \noindent We divide the proof into two parts: \begin{enumerate}[label=\textbf{Part \arabic*.},ref={Part \arabic*},leftmargin=\parindent,itemindent=*] \item\label{lem:lemma-6-revised:Part1} Inspired by the proof of Lem.~6 in~\cite{ChatterjeeD12}, we show that $\text{player}_0$ has a strategy in $G$ that wins from $W_0(+\infty)$ w.r.t. $d(n-1)K$ for $(n-1)K$. \item\label{lem:lemma-6-revised:Part2} We show that if $c_0<(n-1)K$, a small modification to the strategy constructed in~\ref{lem:lemma-6-revised:Part1} allows $\text{player}_0$ to win for the initial credit $c_{0}$. \end{enumerate} \noindent We start by stating five claims which will be useful for proving~\ref{lem:lemma-6-revised:Part1}. \begin{clm} \label{clm:lemma-6-revised:Claim1} Energy parity games are determined. That is, for an energy parity game, ${G^{ep}}={\langle({V^{ep}}={V_0^{ep}\cup V_1^{ep}},E^{ep}),\ensuremath{\mathit{prio}}^{ep},w^{ep}\rangle}$, and $c\in\mathbb{N}\cup\{+\infty\}$, it holds that ${W^{G^{ep}}_0(c)} \cup {W^{G^{ep}}_1(c)}=V^{ep}$. \end{clm} In outline, Claim~\ref{clm:lemma-6-revised:Claim1} is argued as follows. Given an energy parity game, \[{G^{ep}}={\langle({V^{ep}}={V_0^{ep}\cup V_1^{ep}},E^{ep}),\ensuremath{\mathit{prio}}^{ep},w^{ep}\rangle},\] and an upper bound $c\in\mathbb{N}\cup\{+\infty\}$, we construct a parity game, \[{G^{p}}={\langle({V^{p}}={V^{p}_0\cup V^{p}_1},E^{p}),\ensuremath{\mathit{prio}}^{p}\rangle},\] in a way similar to Def.~\ref{def:naiveReduction}. However, the construction takes into account that, in contrast to WGSs, the players do not necessarily take steps in an alternating manner and each state is controlled solely by one of the players. The states of $G^{p}$ have the form $(s,c'_{0})$ where $s$ is a state of $G^{ep}$ and $c'_{0}\in[0,c] \cup \{+\infty\}$ is the accumulated energy level under $c$. States of the form $(s, +\infty)$ are deadlocks for $\text{player}_{0}$ and correspond to violation of the energy objective. The edges of $G^{p}$ are taken from the original game $G^{ep}$, and update the energy component accordingly.
Formally, \begin{itemize} \item $V^{p}_{0} = {A_{0}} \cup {\{(s, +\infty) \mid s \in V^{ep}\}}$ and $V^{p}_{1} = A_{1}$ where $A_{i} = \{(s, c'_{0}) \mid s \in V^{ep}_{i} \text{ and } {c'_{0}} \in {{[0,c]} \cap {\mathbb{N}}} \}$. \item For ${(s_{1},c_{1})} \in {V^{p}}$ and ${(s_{2},c_{2})} \in {V^{p}}$, $\big( (s_{1},c_{1}), (s_{2},c_{2}) \big) \in E^{p}$ if ${(s_{1},s_{2})} \in {E^{ep}}$, ${c_{1}} \not = {+\infty}$, and either ($\min\{c, c_{1} + w^{ep}(s_{1},s_{2})\} = c_{2} \geq 0$) or ($c_{1} + w^{ep}(s_{1},s_{2}) < 0$ and $c_{2} = +\infty$). \item For ${(s,c'_{0})} \in {V^{p}}$, $\ensuremath{\mathit{prio}}^{p}(s,c'_{0}) = \ensuremath{\mathit{prio}}^{ep}(s)$. \end{itemize} \noindent Note that $G^{p}$ has the same number of different priorities as $G^{ep}$ has, and in case $c=+\infty$, it has an infinite state space. It is not difficult to see that the winning region of $\text{player}_{i}$ in $G^{p}$ indicates which states win for her in $G^{ep}$ and annotates these states with the winning initial credits. That is, for $s \in V^{ep}$, $c'_{0} \in {{[0,c]} \cap {\mathbb{N}}}$, and $i \in \{0,1\}$: ${(s,c'_{0})} \in {W^{G^{p}}_{i}}$ iff $s$ wins in $G^{ep}$ for $\text{player}_{i}$ w.r.t. $c$ for the initial credit $c'_{0}$. By determinacy of parity games~\cite{martin1975borel,Zielonka98}, it holds that ${W^{G^{p}}_{0}} \cup {W^{G^{p}}_{1}} = V^{p}$. This implies that ${W^{G^{ep}}_0(c)} \cup {W^{G^{ep}}_1(c)}=V^{ep}$, as required. The second claim is an immediate consequence of Lem.~4 in~\cite{ChatterjeeD12}. \begin{clm} \label{clm:lemma-6-revised:Claim2} There is a strategy $\ensuremath{g_{\mathit{gfe}}}$ for $\text{player}_0$ from $W_0(+\infty)$ such that every play $\sigma$, consistent with $\ensuremath{g_{\mathit{gfe}}}$, wins the energy objective w.r.t. $d(n-1)K$ for the initial credit $(n-1)K$, and either of the following occurs: \begin{itemize} \item $\sigma$ also wins the parity objective in $G$. Hence, $\sigma$ wins for $\text{player}_0$. \item The sum of the edges' weights traversed along $\sigma$ is unbounded above. \end{itemize} \noindent The strategy $\ensuremath{g_{\mathit{gfe}}}$ is called \emph{good-for-energy}. \end{clm} The third claim is the following simple observation: \begin{clm} \label{clm:lemma-6-revised:Claim3} Let $c\in {\mathbb{N}}$ be a finite upper bound, let $c_{0} \in {[0,c]}$ be an initial credit, and let $\sigma$ be a play in $G$. Take $l,m\in\mathbb{N}$ such that ${c_{0} + m} \leq {c + l}$. Then, for every prefix of $\sigma$, $\sigma[0\ldots k]$, we have that ${\textsf{EL}_{c+l}(G,{c_0+m}, {\sigma[0\ldots k]})} \geq {{\textsf{EL}_{c}(G,c_0, {\sigma[0\ldots k]})} + {\min\{l,m\}}}$. \end{clm} Claim~\ref{clm:lemma-6-revised:Claim3} can be proved by standard induction on $k$, hence we leave the details to the reader. The fourth claim is an immediate corollary of Claim~\ref{clm:lemma-6-revised:Claim3}: \begin{clm} \label{lem:lemma-6-revised:Claim4} Assume that $g$ is a strategy for $\text{player}_0$ in $G$ that wins from a set of states, $A \subseteq V$, w.r.t. $c\in \mathbb{N}$ for an initial credit $c_0 \in [0,c]$. Take $l,m\in\mathbb{N}$ such that ${c_{0} + m} \leq {c + l}$, and let $\sigma$ be a play from $A$, consistent with $g$. Then, for the initial credit $c_{0}+m$, the energy level under the upper bound $c+l$ never drops below $\min\{l,m\}$ along $\sigma$. \end{clm} The proof of~\ref{lem:lemma-6-revised:Part1} relies on the well-known notions of \emph{attractors}, \emph{traps}, and \emph{subgames}. 
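Before turning to these notions, we illustrate Claim~\ref{clm:lemma-6-revised:Claim3} on a small instance (the numbers are arbitrary and serve only as an illustration). Take $c=2$, $c_{0}=1$, and $l=m=1$, so that $c_{0}+m \leq c+l$ and $\min\{l,m\}=1$, and consider a play that traverses two edges of weights $+5$ and $-2$. Under the bound $c$ and the initial credit $c_{0}$, the energy levels along the play are
\[
\min\{2,1+5\}=2 \quad\text{and}\quad 2-2=0,
\]
whereas under $c+l=3$ and $c_{0}+m=2$, they are $\min\{3,2+5\}=3$ and $3-2=1$. Indeed, each of the latter levels exceeds its counterpart by at least $\min\{l,m\}=1$, in accordance with the claim.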
Below, we repeat their definitions from~\cite[Chapter~6]{2001automata}. \begin{defi}[Attractors]\label{def:lem:lemma-6-revised:attractors} The \emph{$\text{player}_{i}$-attractor} $\mathit{Attr}_{i}(X) \subseteq V$ of a set of states $X \subseteq V$ is the set of all states from which $\text{player}_{i}$ has a strategy to force $\text{player}_{1-i}$ to reach either a state in $X$ or a deadlock for $\text{player}_{1-i}$ in a finite number of steps. Formally, ${\mathit{Attr}_{i}(X)} = \bigcup_{j=0}^{+\infty}{A_{j}}$ where $A_{0} = X$ and $A_{j+1} = A_{j} \cup \{s \in V_{i} \mid \exists t\in V ({{(s,t)}\in {E}} \wedge {t\in{A_{j}}})\} \cup \{s \in V_{1-i} \mid \forall t \in V ({{(s,t)}\in {E}} \Rightarrow {t\in{A_{j}}})\}$. \end{defi} \begin{defi}[Traps]\label{def:lem:lemma-6-revised:traps} A \emph{$\text{player}_{i}$-trap} is a set of states, $U\subseteq V$, in which $\text{player}_{1-i}$ can trap $\text{player}_{i}$ in the sense that all successors of $V_{i}$-states in $U$ belong to $U$ and every $V_{1-i}$-state in $U$ has a successor in $U$. \end{defi} \begin{defi}[Subgames]\label{def:lem:lemma-6-revised:subgames} Let $U \subseteq V$. The \emph{subgame} of $G$ induced by $U$ is ${G[U]} = \langle({V[U]}\linebreak={{{(V_0 \cap {U})}\cup {(V_1 \cap {U})}} }, {{E[U]} = E\cap (U \times U))},{\ensuremath{\mathit{prio}}|_{U}},{w|_{E[U]}}\rangle$ where $\ensuremath{\mathit{prio}}|_{U}$ and $w|_{E[U]}$ are the restrictions of $\ensuremath{\mathit{prio}}$ and $w$ to $U$ and $E[U]$, respectively. \end{defi} Note that the complement of the $\text{player}_{i}$-attractor of a set $X \subseteq V$, $U = {V} \setminus {\mathit{Attr}_{i}(X)}$, is a $\text{player}_{i}$-trap. Also, since $\mathit{Attr}_{i}(W_{i}(+\infty)) = W_{i}(+\infty)$~\cite{ChatterjeeD12}, it follows from Claim~\ref{clm:lemma-6-revised:Claim1} that $W_{i}(+\infty)$ is a $\text{player}_{1-i}$-trap. Lastly, we consider the following claim: \begin{clm} \label{lem:lemma-6-revised:Claim5} Let $G[{W_0(+\infty)}\setminus {\mathit{Attr}_{0}(X)}]$ be a subgame of $G$ where $\mathit{Attr}_{0}(X) \subseteq {W_0(+\infty)}$ is the $\text{player}_0$-attractor in $G[{W_0(+\infty)}]$ of some set, $X \subseteq {W_0(+\infty)}$. Then, $\text{player}_0$ wins in $G[{W_0(+\infty)}\setminus {\mathit{Attr}_{0}(X)}]$ from all the states of this subgame w.r.t. $+\infty$. \end{clm} Indeed, ${W_0(+\infty)}\setminus {\mathit{Attr}_{0}(X)}$ is a $\text{player}_0$-trap in $G[{W_0(+\infty)}]$; thus, playing in $G[{W_0(+\infty)}\setminus {\mathit{Attr}_{0}(X)}]$ according to a strategy $g$ that wins for $\text{player}_0$ in $G$ from $W_0(+\infty)$ is the same as playing in $G$ from ${W_0(+\infty)}\setminus {\mathit{Attr}_{0}(X)}$ according to $g$ while $\text{player}_1$ always chooses to stay in ${W_0(+\infty)}\setminus {\mathit{Attr}_{0}(X)}$. Relying on what we have established thus far, we can finally prove~\ref{lem:lemma-6-revised:Part1} and~\ref{lem:lemma-6-revised:Part2}, which clearly imply the correctness of Lem.~\ref{lem:lemma-6-revised}. \begin{proof}[Proof of Lem.~\ref{lem:lemma-6-revised}] \noindent\textbf{Proof of~\ref{lem:lemma-6-revised:Part1}.} The proof is by induction on $n+d$. Note that if $d=1$, then for all $n >0$, a good-for-energy strategy (which exists by Claim~\ref{clm:lemma-6-revised:Claim2}) wins from $W_0(+\infty)$ w.r.t. $(n-1)K$ for $(n-1)K$, as required. This proves the base case where ${n}={d}={1}$. Also, this allows us to assume that $d>1$ from now on. For the induction step, we distinguish between two cases:
the case where the minimal priority is even (say, $0$), and the case where it is odd (say, $1$). We assume w.l.o.g. that $V=W_0(+\infty)$. This assumption can be made since, otherwise, $|W_0(+\infty)|<n$, and we can simply apply the induction hypothesis over the subgame induced by $W_0(+\infty)$ and obtain a strategy as required in $G$.\footnote{Notice that a strategy that wins for $\text{player}_{0}$ from ${V[W_0(+\infty)]} = {W_0(+\infty)}$ in the subgame $G[W_0(+\infty)]$ does the same in $G$ because $W_0(+\infty)$ is a $\text{player}_{1}$-trap.} \noindent{\bfseries Case 1: The minimal priority is 0.} Let $\ensuremath{g_{\mathit{gfe}}}$ be a good-for-energy strategy in $G$, which exists due to Claim~\ref{clm:lemma-6-revised:Claim2}. Let $\Omega_0\subseteq W_0(+\infty)=V$ be the $\text{player}_0$-attractor of all $0$-priority states in $G$. Write $|\Omega_0|=k$ and note that $\Omega_0\neq \emptyset$ because $\Omega_0$ includes all $0$-priority states in $V$. Consider the subgame $G' = G[{W_0(+\infty)}\setminus {\Omega_{0}}]$, which has at most $d-1$ different priorities. By Claim~\ref{lem:lemma-6-revised:Claim5}, all the states of $G'$ win for $\text{player}_0$ in this subgame w.r.t. $+\infty$. Therefore, by the induction hypothesis, $\text{player}_0$ has a strategy $g'$ in $G'$ that wins from ${W_0(+\infty)}\setminus {\Omega_{0}}$ w.r.t. $c'=(d-1)(n-k-1)K$ for the initial credit $c_0'=(n-k-1)K$. We argue that the next strategy satisfies the requirements. \begin{strategy}\label{def:lem:lemma-6-revised:minPrioZeroStrategy} We define a strategy for $\text{player}_{0}$ in $G$ from $W_0(+\infty)$, as follows: \begin{enumerate}[label=\textbf{Phase~\ref{def:lem:lemma-6-revised:minPrioZeroStrategy}.\arabic*.},ref={Phase~\ref{def:lem:lemma-6-revised:minPrioZeroStrategy}.\arabic*},itemindent=*] \item \label{lem:lemma-6-revised:Stage1} Play according to $\ensuremath{g_{\mathit{gfe}}}$ until the sum of the edges' weights traversed is at least $d(n-1)K-(n-1)K$. If you reached a state in $\Omega_0$, go to~\ref{lem:lemma-6-revised:Stage2}. Otherwise, go to~\ref{lem:lemma-6-revised:Stage3}. \item \label{lem:lemma-6-revised:Stage2} Play a strategy to reach a $0$-priority state, and go to~\ref{lem:lemma-6-revised:Stage1}. \item \label{lem:lemma-6-revised:Stage3} Play according to $g'$ as long as the play stays in $W_0(+\infty)\setminus\Omega_0$. If the play reaches $\Omega_0$, go to~\ref{lem:lemma-6-revised:Stage2}. \end{enumerate} \end{strategy} Note that for a play consistent with Strategy~\ref{def:lem:lemma-6-revised:minPrioZeroStrategy}, one of the following holds: (a) the play reaches a deadlock for $\text{player}_{1}$; (b) eventually, the play stays in~\ref{lem:lemma-6-revised:Stage1} forever; (c) eventually, the play stays in~\ref{lem:lemma-6-revised:Stage3} forever; (d) the play reaches~\ref{lem:lemma-6-revised:Stage2} infinitely many times. In all of these cases, $\text{player}_0$ wins the parity objective, either by definition (case a), by the choice of the strategies (cases b, c), or by visiting a $0$-priority state infinitely often (case d). Thus, it is left to show that $\text{player}_{0}$ also wins the energy objective. Consider a play $\sigma$ consistent with Strategy~\ref{def:lem:lemma-6-revised:minPrioZeroStrategy} and played w.r.t. the upper bound $c=d(n-1)K$ and for the initial credit $(n-1)K$. We shall prove that the energy level always remains non-negative along $\sigma$.
Let $\sigma[k_0=0],\sigma[k_1],\sigma[k_2],\dots$ be the states along $\sigma$ in which Strategy~\ref{def:lem:lemma-6-revised:minPrioZeroStrategy} turns to~\ref{lem:lemma-6-revised:Stage1}. We prove the next properties by induction on $j$: \begin{enumerate} \item \label{lem:lemma-6-revised:minPrioZeroStrategy:property1} The energy level never decreases below $0$ in the interval $\sigma[0\dots k_j]$. \item \label{lem:lemma-6-revised:minPrioZeroStrategy:property2} $\textsf{EL}_{c}(G, (n-1)K, \sigma[0\dots k_j])\geq (n-1)K$. \end{enumerate} \noindent Note that both properties~\ref{lem:lemma-6-revised:minPrioZeroStrategy:property1} and~\ref{lem:lemma-6-revised:minPrioZeroStrategy:property2} trivially hold for ${k_{0}}$. For the induction step, consider the interval $\sigma[k_j\dots k_{j+1}]$. From $\sigma[k_j]$, $\text{player}_0$ plays according to $\ensuremath{g_{\mathit{gfe}}}$. By the induction hypothesis over $j$, $\text{player}_0$ reaches the state $\sigma[k_j]$ while having at least $(n-1)K$ energy units, and, consequently, by Claim~\ref{clm:lemma-6-revised:Claim2}, the energy level remains non-negative until $\text{player}_0$ proceeds with $d(n-1)K$ energy units to either~\ref{lem:lemma-6-revised:Stage2} or~\ref{lem:lemma-6-revised:Stage3}. If $\text{player}_0$ goes to~\ref{lem:lemma-6-revised:Stage2}, as $|\Omega_0|=k$, she spends at most $(k-1)K$ energy units to reach a $0$-priority state and returns to~\ref{lem:lemma-6-revised:Stage1} at step $k_{j+1}$ with at least $d(n-1)K-(k-1)K$ energy units. Since $d>1$ and $n \geq k$, it follows that both properties~\ref{lem:lemma-6-revised:minPrioZeroStrategy:property1} and~\ref{lem:lemma-6-revised:minPrioZeroStrategy:property2} hold in this scenario. Otherwise, we have that $\text{player}_0$ goes to~\ref{lem:lemma-6-revised:Stage3}. Hence, ${W_0(+\infty)}\setminus {\Omega_{0}} \not = {\emptyset}$, ${|W_0(+\infty)|} = n > {{|\Omega_0|} = {k}}$, and $\text{player}_0$ plays according to the strategy $g'$ with an upper bound and an initial credit both equal to $d(n-1)K$. However, the induction hypothesis ensures that $g'$ wins for $\text{player}_0$ in the subgame $G'$ w.r.t. $c'$ for $c'_{0}$. Thus, it follows from Claim~\ref{lem:lemma-6-revised:Claim4} that the energy level never drops below $\min\{d(n-1)K-c',d(n-1)K-c_0'\}=(n-1)K+(d-1)kK$ as long as the play stays in ${{W_0(+\infty)}\setminus {\Omega_{0}}}$. Note that the play can leave ${{W_0(+\infty)}\setminus {\Omega_{0}}}$ only by traversing through an edge $e'$ that is chosen by $\text{player}_{1}$. When that occurs, $\text{player}_0$ loses at most $K$ energy units (as $w(e') \geq -K$) and switches to~\ref{lem:lemma-6-revised:Stage2}. As in the previous scenario, $\text{player}_0$ spends in~\ref{lem:lemma-6-revised:Stage2} at most $(k-1)K$ energy units to reach a $0$-priority state and returns to~\ref{lem:lemma-6-revised:Stage1} at step $k_{j+1}$. Therefore, we have that $\textsf{EL}_c(G,(n-1)K,\sigma[0\dots k_{j+1}])\geq(n-1)K+(d-1)kK-K-(k-1)K\geq (n-1)K$. This implies that properties~\ref{lem:lemma-6-revised:minPrioZeroStrategy:property1} and~\ref{lem:lemma-6-revised:minPrioZeroStrategy:property2} hold in this scenario as well. Consequently, if Strategy~\ref{def:lem:lemma-6-revised:minPrioZeroStrategy} turns to~\ref{lem:lemma-6-revised:Stage1} infinitely many times, $\sigma$ wins the energy objective. Otherwise, there is some step $k_l$, such that: \begin{itemize} \item The strategy turns to~\ref{lem:lemma-6-revised:Stage1} for the last time in $\sigma[k_l]$. 
\item The energy level never drops below $0$ in $\sigma[0\dots k_l]$. \item The energy level of $\sigma[0\dots k_l]$ is at least $(n-1)K$. \end{itemize} \noindent From $\sigma[k_l]$, $\text{player}_0$ plays according to $\ensuremath{g_{\mathit{gfe}}}$ in $W_0(+\infty)$. By Claim~\ref{clm:lemma-6-revised:Claim2}, as long as $\text{player}_0$ plays according to $\ensuremath{g_{\mathit{gfe}}}$, the energy level remains non-negative. Thus, if that lasts forever, the energy objective is achieved. Otherwise, there is some step $l' > k_{l}$ in which the strategy turns to~\ref{lem:lemma-6-revised:Stage3} with the initial credit ${\textsf{EL}_{c}(G, (n-1)K, \sigma[0\dots l'])} = {d(n-1)K}$, and stays in this phase forever. Hence, from $\sigma[l'] \in {{W_0(+\infty)}\setminus {\Omega_0}}$, the play remains in $W_0(+\infty)\setminus \Omega_0$ and is played according to $g'$. Consequently, it follows from the induction hypothesis on $g'$ that the energy objective is achieved in this case as well. \noindent{\bfseries Case 2: The minimal priority is 1.} Let $D_{1} \subseteq V_{1}$ be the set of all states in $G$ which are deadlocks for $\text{player}_{1}$. First, consider the case where $D_{1} \not = \emptyset$. Let $\Omega^{D_{1}}_{0}$ be the $\text{player}_0$-attractor of $D_{1}$ in $G$. Note that $D_{1} \subseteq \Omega^{D_{1}}_{0} \not = \emptyset$, write $|\Omega^{D_{1}}_{0}| = k_{D_{1}}$, and consider the subgame $G'' = G[W_0(+\infty)\setminus \Omega^{D_{1}}_{0}]$ induced by ${V \setminus \Omega^{D_{1}}_{0}} = {W_0(+\infty)\setminus \Omega^{D_{1}}_{0}}$. By Claim~\ref{lem:lemma-6-revised:Claim5}, $\text{player}_0$ wins in $G''$ from all the states of this subgame w.r.t. $+\infty$. Hence, the induction hypothesis yields a strategy $g''$ for $\text{player}_0$ in $G''$ that wins from $W_0(+\infty)\setminus \Omega^{D_{1}}_{0}$ w.r.t. $c''= d(n-k_{D_{1}}-1)K$ for the initial credit $c_0''=(n-k_{D_{1}}-1)K$. We claim that the next strategy satisfies the requirements. \begin{strategy}\label{def:lem:lemma-6-revised:minPrioOneFirstCaseStrategy} We define a strategy for $\text{player}_0$ in $G$ from $W_0(+\infty)$, as follows: \begin{enumerate}[label=\textbf{Phase~\ref{def:lem:lemma-6-revised:minPrioOneFirstCaseStrategy}.\arabic*.},ref={Phase~\ref{def:lem:lemma-6-revised:minPrioOneFirstCaseStrategy}.\arabic*},itemindent=*] \item \label{lem:lemma-6-revised:Case2:Stage-a} If the play is in $\Omega^{D_{1}}_{0}$, go to~\ref{lem:lemma-6-revised:Case2:Stage-b}. Otherwise, go to~\ref{lem:lemma-6-revised:Case2:Stage-c}. \item \label{lem:lemma-6-revised:Case2:Stage-b} Play a strategy to reach a deadlock for $\text{player}_{1}$ (i.e., a state in $D_{1}$). \item \label{lem:lemma-6-revised:Case2:Stage-c} Play according to $g''$ as long as the play stays in $W_0(+\infty)\setminus \Omega^{D_{1}}_{0}$. If the play reaches $\Omega^{D_{1}}_{0}$, go to~\ref{lem:lemma-6-revised:Case2:Stage-b}. \end{enumerate} \end{strategy} Consider a play $\sigma$ consistent with Strategy~\ref{def:lem:lemma-6-revised:minPrioOneFirstCaseStrategy} and played w.r.t. $c=d(n-1)K$ and for the initial credit $(n-1)K$. Then, $\sigma$ either (1) stays in~\ref{lem:lemma-6-revised:Case2:Stage-c} forever, or (2) eventually reaches~\ref{lem:lemma-6-revised:Case2:Stage-b} and subsequently ends in a deadlock for $\text{player}_{1}$. In case (1), $\sigma$ is infinite and consistent with $g''$, and as a result, it wins for $\text{player}_0$ in $G$. In case (2), $\sigma$ is finite and wins the parity objective by definition.
Moreover, we argue that $\sigma$ wins the energy objective in case (2) as well. If $\sigma$ starts from $\Omega^{D_{1}}_{0}$, then, as $|\Omega^{D_{1}}_{0}|= k_{D_{1}}$, $\text{player}_0$ spends at most $(k_{D_{1}}-1)K$ energy units in~\ref{lem:lemma-6-revised:Case2:Stage-b} to enforce reaching a deadlock state for $\text{player}_1$. Thus, since $n \geq k_{D_{1}}$, $\sigma$ wins the energy objective. Otherwise, $\sigma$ starts from $W_0(+\infty)\setminus \Omega^{D_{1}}_{0}$. As long as $\sigma$ stays in $W_0(+\infty)\setminus \Omega^{D_{1}}_{0}$, $\text{player}_0$ plays according to $g''$ (\ref{lem:lemma-6-revised:Case2:Stage-c}) with the upper bound $c$ and the initial credit $(n-1)K$. However, recall that the induction hypothesis ensures that $g''$ wins w.r.t. $c''$ for $c_0''$. Thus, it follows from Claim~\ref{lem:lemma-6-revised:Claim4} that the energy level during~\ref{lem:lemma-6-revised:Case2:Stage-c} never drops below $\min\{c-c'',(n-1)K-c_0''\}= \min\{dk_{D_{1}}K,k_{D_{1}}K\} = k_{D_{1}}K$. As $\sigma$ leaves $W_0(+\infty)\setminus \Omega^{D_{1}}_{0}$ by traversing through an edge that costs at most $K$ energy units, $\text{player}_0$ reaches $\Omega^{D_{1}}_{0}$ with an initial credit at least ${({k_{D_{1}}}-{1})}K$. This initial credit is sufficient for winning in~\ref{lem:lemma-6-revised:Case2:Stage-b}. Second, consider the remaining case where $D_{1} = \emptyset$. Let ${\Omega_1}$ be the $\text{player}_1$-attractor of all $1$-priority states in $G$, let $G'=G[{{W_0(+\infty)}\setminus {\Omega_1}}]$ be the subgame induced by $W_0(+\infty)\setminus \Omega_1$, and let $W'$ be the winning region of $\text{player}_{0}$ w.r.t. $+\infty$ in $G'$, i.e., $W' = W_0^{G'}(+\infty)$. We claim that $W' \neq \emptyset$. Suppose, towards contradiction, that this claim is false. Then, it follows from Claim~\ref{clm:lemma-6-revised:Claim1} that $\text{player}_1$ has a strategy $h'$ in $G'$ that wins for him from all the states of this subgame w.r.t. $+\infty$. \begin{strategy}\label{def:lem:lemma-6-revised:Case2:CounterStrategy} Consider the following strategy for $\text{player}_1$ in $G$ from $W_0(+\infty)$: \begin{enumerate}[label=\textbf{Phase~\ref{def:lem:lemma-6-revised:Case2:CounterStrategy}.\arabic*.},ref={Phase~\ref{def:lem:lemma-6-revised:Case2:CounterStrategy}.\arabic*},itemindent=*] \item \label{lem:lemma-6-revised:Case2:CounterStrategy:Stage-0} If the play is in $\Omega_1$, go to~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-1}. Otherwise, go to~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-2}. \item \label{lem:lemma-6-revised:Case2:CounterStrategy:Stage-1} If the current state is a $1$-priority state, choose any successor; otherwise, play a strategy to reach a $1$-priority state. Go to~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-0}. \item \label{lem:lemma-6-revised:Case2:CounterStrategy:Stage-2} As long as the play stays in ${{W_0(+\infty)}\setminus {\Omega_1}}$, play according to $h'$. If the play reaches $\Omega_1$, go to~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-1}. \end{enumerate} \end{strategy} Note that Strategy~\ref{def:lem:lemma-6-revised:Case2:CounterStrategy} is well-defined. That is, since $D_{1} = \emptyset$, there are no deadlock states in $V = W_0(+\infty)$, and consequently, there always exists a successor state that $\text{player}_{1}$ can choose in~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-1}. 
Every play $\sigma$ consistent with this strategy either visits~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-1} infinitely often or eventually stays in~\ref{lem:lemma-6-revised:Case2:CounterStrategy:Stage-2}. In the former case, $\sigma$ visits $1$-priority states infinitely often and thus violates the parity objective, while in the latter case, $\sigma$ wins for $\text{player}_{1}$ due to the strategy $h'$. This contradicts the fact that $\text{player}_{0}$ wins in $G$ from $W_0(+\infty)$. Let $|W'|=k$. Notice that the subgame $G[W']=G'[W']$ has at most $d-1$ different priorities, that $\Omega_{1} \neq \emptyset$, and hence that $k < n$. It is not difficult to see that $\text{player}_0$ wins in $G[W']$ from all the states of this subgame w.r.t. $+\infty$. As a result, the induction hypothesis yields a strategy $g^{W'}$ in $G[W']$ that wins for $\text{player}_0$ from $W'$ w.r.t. $(d-1)(k-1)K$ for the initial credit $(k-1)K$. Moreover, the facts that $W'$ is a $\text{player}_{1}$-trap in $G'$ and $W_0(+\infty)\setminus \Omega_1$ is a $\text{player}_{1}$-trap in $G$ imply that $W'$ is also a $\text{player}_{1}$-trap in $G$. Therefore, $g^{W'}$ is also a strategy that wins for $\text{player}_0$ from $W'$ in $G$. Let $\Omega_{0}^{W'} = \mathit{Attr}_{0}(W')$ be the $\text{player}_0$-attractor of $W'$ in $G$, and let ${|\Omega_{0}^{W'}|} = {k + m}$. Consider the subgame $H = G[{W_0(+\infty)} \setminus {\Omega_{0}^{W'}}]$. By Claim~\ref{lem:lemma-6-revised:Claim5}, $\text{player}_0$ wins in $H$ from all the states of this subgame w.r.t. $+\infty$. Thus, the induction hypothesis yields a strategy $h$ in $H$ that wins for $\text{player}_0$ from ${W_0(+\infty)} \setminus {\Omega_{0}^{W'}}$ w.r.t. $c^{H}=d(n-k-m-1)K$ for $c^{H}_{0} = (n-k-m-1)K$. We claim that the next strategy satisfies the requirements. \begin{strategy}\label{def:lem:lemma-6-revised:minPrioOneSecondCaseStrategy} We define a strategy for $\text{player}_0$ in $G$ from $W_0(+\infty)$, as follows: \begin{enumerate}[label=\textbf{Phase~\ref{def:lem:lemma-6-revised:minPrioOneSecondCaseStrategy}.\arabic*.},ref={Phase~\ref{def:lem:lemma-6-revised:minPrioOneSecondCaseStrategy}.\arabic*},itemindent=*] \item \label{lem:lemma-6-revised:Case2:Stage-A} As long as the play stays in ${W_0(+\infty)} \setminus {\Omega_{0}^{W'}}$, play according to $h$. If the play reaches $\Omega_{0}^{W'}$, go to~\ref{lem:lemma-6-revised:Case2:Stage-B}. \item \label{lem:lemma-6-revised:Case2:Stage-B} Play a strategy to reach $W'$, and then play according to $g^{W'}$. \end{enumerate} \end{strategy} Let us show that Strategy~\ref{def:lem:lemma-6-revised:minPrioOneSecondCaseStrategy} wins w.r.t. $c=d(n-1)K$ for the initial credit $(n-1)K$. Consider a play $\sigma$ consistent with Strategy~\ref{def:lem:lemma-6-revised:minPrioOneSecondCaseStrategy}. If $\sigma$ stays in~\ref{lem:lemma-6-revised:Case2:Stage-A} forever, then it is consistent with $h$, and as a result, it wins by the induction hypothesis on $h$. Otherwise, $\sigma$ eventually reaches~\ref{lem:lemma-6-revised:Case2:Stage-B}. The induction hypothesis on $h$ and Claim~\ref{lem:lemma-6-revised:Claim4} imply that the energy level never drops below $\min\{ c-c^{H}, (n-1)K-c^{H}_{0} \}=(k+m)K$ as long as the play stays in ${W_0(+\infty)} \setminus {\Omega_{0}^{W'}}$ (i.e.,~\ref{lem:lemma-6-revised:Case2:Stage-A}). Hence, when Strategy~\ref{def:lem:lemma-6-revised:minPrioOneSecondCaseStrategy} turns to~\ref{lem:lemma-6-revised:Case2:Stage-B}, the energy level is at least $(k+m-1)K$.
This holds as $\sigma$ leaves ${W_0(+\infty)} \setminus {\Omega_{0}^{W'}}$ by traversing through an edge that costs at most $K$ energy units. In~\ref{lem:lemma-6-revised:Case2:Stage-B}, $\text{player}_{0}$ spends at most $mK$ energy units to reach $W'$ and subsequently starts playing according to $g^{W'}$ with an initial credit at least ${{(k+m-1)K} - {mK}} = {(k-1)K}$. Therefore, the induction hypothesis on $g^{W'}$ guarantees that $\sigma$ wins as required. \noindent\textbf{Proof of~\ref{lem:lemma-6-revised:Part2}.} Assume that $\text{player}_0$ wins from a state $s \in V$ w.r.t. $+\infty$ for an initial credit of $c_0$. We show that $\text{player}_0$ also wins from $s$ w.r.t. $c = d(n-1)K$ for the initial credit $\min\{c_0,(n-1)K\}$. If $(n-1)K\leq c_0$, the claim follows from~\ref{lem:lemma-6-revised:Part1}. Otherwise, $c_0<(n-1)K$, and we define the following strategy, which wins from $s$ w.r.t. $c$ for $c_{0}$. Let $g_{+\infty}$ be a strategy for $\text{player}_0$ that wins from $s$ w.r.t. $+\infty$ for the initial credit $c_0$. Initially, $\text{player}_0$ plays according to $g_{+\infty}$ and keeps playing according to this strategy as long as $\textsf{EL}_{+\infty}(G,c_0,\sigma[0\dots t])<(n-1)K$, where $\sigma[0 \dots t]$ is the sequence of states traversed so far. If, at some point, $\textsf{EL}_{+\infty}(G,c_0,\sigma[0\dots t])\geq (n-1)K$, $\text{player}_0$ switches to a strategy that satisfies the requirements of~\ref{lem:lemma-6-revised:Part1}. Clearly, all the states traversed while playing according to $g_{+\infty}$ belong to $W_0(+\infty)$, and thus, if necessary, $\text{player}_0$ can always switch to a strategy that exists due to~\ref{lem:lemma-6-revised:Part1}. It is not difficult to see that the strategy we have described wins for $\text{player}_0$ as required. \end{proof} \section{Extended Version of Sect.~\ref{sec:sufficientbound:WGStoParityEnergyGames}}\label{app:sufficient-bound-proof} In this appendix, we present the full proof for the main result of Sect.~\ref{sec:sufficientbound}: \begin{quote} \textbf{Theorem.~\ref{Thm:a-sufficient-bound}.} Let $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$ be a WGS, $N=|2^{\mathcal{V}}|$, and let $K$ be the maximal transition weight in $G^w$, in absolute value. Take $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, a closed $\ensuremath{\mathit{sys}\text{-}\mu}$ formula that matches $\varphi$, and let $m$ be its length and $d$ its alternation depth. If the system wins from a state $s$ w.r.t. $+\infty$ for an initial credit $c_0$, then it also wins from $s$ w.r.t. $(d+1)((N^2+N)m-1)K$ for an initial credit $\min\{c_0,((N^2+N)m-1)K\}$. \end{quote} \noindent We fix a WGS $\ensuremath{G^{w} = \langle \mathcal{V},\mathcal{X}, \mathcal{Y}, \rho^e,\rho^s, \varphi, w^s\rangle}$ and a closed $\ensuremath{\mathit{sys}\text{-}\mu}$ formula $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$ that matches the winning condition of $G^{w}$, $\varphi$, as in Thm.~\ref{Thm:a-sufficient-bound}. Recall that the reduction to energy parity games, outlined in Sect.~\ref{sec:sufficientbound:WGStoParityEnergyGames}, involved the construction of several game graphs. Throughout this appendix, whenever we define a game graph $H$ (i.e., component (1) in Def.~\ref{def:energyParityGame}), the terms $V(H)$ and $E(H)$ denote the set of states and edges of $H$, respectively. 
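Throughout this appendix (as in the proof of~\ref{lem:lemma-6-revised:Part2} above), plays are measured through their running energy level, which is folded along the play with the update $\min\{c,\cdot\}$ and compared against a switching threshold. The following minimal sketch illustrates this bookkeeping; it is an illustration only, where for ${\sf EL}_{+\infty}$ one takes \texttt{c = math.inf}, and the strategy objects \texttt{g\_inf} and \texttt{g\_part1} are hypothetical placeholders for the strategies named in the proof of~\ref{lem:lemma-6-revised:Part2}.
\begin{verbatim}
import math

def energy_level(c, c0, weights):
    """Energy level of a play prefix; None once the level drops below 0."""
    level = c0
    for w in weights:
        level = min(c, level + w)   # the min{c, .} update used throughout
        if level < 0:
            return None             # energy objective violated
    return level

def next_move(history, level, threshold, g_inf, g_part1):
    """Switching rule: follow g_inf while the level is below (n-1)K."""
    if level is not None and level >= threshold:
        return g_part1(history)     # strategy guaranteed by Part 1
    return g_inf(history)           # strategy winning w.r.t. +infinity
\end{verbatim}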
Take a finite upper bound $c \in \mathbb{N}$ and construct the (symbolic) GS $G^{*} = \langle \mathcal{V}^{*}, \mathcal{X}, \mathcal{Y}^{*}, \rho^{e},\rho^{s*}, \linebreak\varphi \rangle$ from $G^{w}$ and $c$, as defined in Def.~\ref{def:naiveReduction}. We transform $G^{*}$ into an (explicit, bipartite) game graph $G_c$ by adding intermediate states to distinguish between steps performed by the environment and the system players. \begin{defi}[The game graph $G_c$]\label{def:concreteGstar} Let $G_c=(V=V_0 \cup V_1,E)$ where \begin{itemize} \item $V_0 = 2^\mathcal{V}\times2^\mathcal{X}\times\{0,\dots,c\}$ and $V_1 = 2^\mathcal{V}\times\{0,\dots,c\}$. \item For $(s,c_1)\in V_1$ and $(s,u,c_1)\in V_0$, $((s,c_1),(s,u,c_1))\in E$ if $(s,p(u))\models \rho^e$. \item For $(s,u,c_1)\in V_0$ and $(t,c_2)\in V_1$, $((s,u,c_1),(t,c_2))\in E$ if $u=t|_\mathcal{X}$, $(s,p(t))\models \rho^s$, and $\min\{c,c_1 + w^s(s,p(t))\} \geq c_2$. \end{itemize} \end{defi} The game graph $G_c$, defined in Def.~\ref{def:concreteGstar}, simulates the GS $G^*$. That is, edges of the form $((s,c_1),(s,u,c_1))$, chosen by $\text{player}_{1}$, correspond to environment's transitions, $((s,c_1),p(u))\models \rho^e$, while edges of the form $((s,u,c_1),(t,c_2))$, chosen by $\text{player}_{0}$, correspond to system's transitions, $((s, c_1), p(t, c_2))\models \rho^{s*}$ where $u=t|_\mathcal{X}$.\footnote{Note that, throughout this appendix, we use the state $(s,c_{0}) \in 2^\mathcal V\times\{0,\dots,c\}$ in $G_{c}$ interchangeably with the corresponding state in $G^{*}$, $(s,c_0)\in 2^{\mathcal{V}^{*}}$.} Thus, each play in $G^{*}$ corresponds to a play in $G_c$ during which the players take steps in an alternating manner, and vice versa. In order to interpret the closed $\ensuremath{\mathit{sys}\text{-}\mu}$ formula $\psi$ over the graph $G_c$, rather than $G^{*}$, we \emph{split} the controllable predecessor operator $\circlediamond$ by replacing every sub-formula of the form $\circlediamond \beta$ in $\psi$ with $\square \mathlarger{\mathlarger{\mathlarger{\diamond}}} \beta$. The symbols $\square$ and $\mathlarger{\mathlarger{\mathlarger{\diamond}}}$ denote the \emph{classical $\mu$-calculus predecessor operators}~\cite{2001automata,Kozen}. Their semantics is defined w.r.t. a graph $H=(V,E)$ and a valuation ${\mathcal{E}} : {{\mathit{Var}}\rightarrow {(V \rightarrow \{0,1\})}}$, as follows: \begin{itemize} \item $\llbracket \square\beta \rrbracket^H_\mathcal E=\{v\in V \mid \forall u\in V((v,u)\in E \Rightarrow u\in \llbracket \beta \rrbracket^H_\mathcal E) \}$. \item $\llbracket \mathlarger{\mathlarger{\mathlarger{\diamond}}}\beta \rrbracket^H_\mathcal E=\{v\in V \mid \exists u\in V((v,u)\in E \wedge u\in \llbracket \beta \rrbracket^H_\mathcal E) \}$. \end{itemize} For simplicity, although we obtain a formula with a different syntax, we use $\psi$ to denote the translated formula as well. We argue that the graph $G_c$, like the GS $G^*$, simulates the WGS $G^w$. The next lemma formally captures this claim. \begin{lem} \label{lem:G^w-equiv-Gc} The system wins in $G^w$ from $s \in 2^{\mathcal V}$ w.r.t. $c$ for an initial credit $c_0 \leq c$, if and only if $(s,c_0)\in \llbracket \psi \rrbracket^{G_c}$. \end{lem} \begin{proof} Relying on Thm.~\ref{thm:reductionCorrectness}, it is sufficient to show that ${(s,c_0)\in \llbracket \psi \rrbracket ^{G^*}} \Leftrightarrow {(s,c_0)\in \llbracket \psi \rrbracket ^{G_c}}$. This claim is implied by the following generalized statement. 
We say that a valuation $\mathcal{E}'$ over $G_c$ \emph{extends} a valuation $\mathcal{E}$ over $G^*$ if for every relational variable, $X \in \mathit{Var}$, and every state, $(s,c_0)\in 2^\mathcal V\times\{0,\dots,c\}$, it holds that $(s,c_0)\in \mathcal{E}(X) \Leftrightarrow (s,c_{0})\in\mathcal{E}'(X)$. \begin{clm} For $\beta\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, $(s,c_{0})\in 2^\mathcal V\times\{0,\dots,c\}$, a valuation $\mathcal{E}$ over $G^*$, and a valuation $\mathcal{E}'$ over $G_c$ that extends $\mathcal{E}$, $(s,c_{0})\in \llbracket \beta \rrbracket ^{G^*}_\mathcal{E} \Leftrightarrow (s,c_{0})\in \llbracket \beta \rrbracket^{G_c}_{\mathcal{E}'}$. \end{clm} Before proving the above claim, we clarify that $\beta$ is used here to denote two similar yet slightly different formulae. That is, in $\llbracket \beta \rrbracket^{G_c}_{\mathcal{E}'}$, the operators $\square\mathlarger{\mathlarger{\mathlarger{\diamond}}}$ replace every occurrence of $\circlediamond$ in $\llbracket \beta \rrbracket ^{G^*}_\mathcal{E}$. The statement is proved by structural induction on $\beta$, where the cases $\beta=v,\neg v, X,\beta_1\wedge \beta_2, \beta_1\vee \beta_2$ are simple. Thus, we focus on the cases where $\beta=\circlediamond\beta'$ and $\beta=\eta X(\beta')$ for $\eta\in\{\mu,\nu\}$. \begin{description} \item[$\beta=\circlediamond\beta'$] We have that \begin{align*} &(s,c_{0})\in \llbracket \circlediamond \beta'\rrbracket^{G^*}_{\mathcal{E}} \Leftrightarrow\text{(by Def.~\ref{def:naiveReduction})}\\ &{\forall t_\mathcal X\in 2^\mathcal X} \big[{ (s,p(t_\mathcal X))\models \rho^e} \Rightarrow {\exists t_\mathcal Y\in 2^\mathcal Y \exists c_{1}\leq c : [ ((s,p(t=t_\mathcal X\cup t_\mathcal Y))\models \rho^s)} \wedge \\ &{(c_{0}+w^s(s,p(t)) \geq c_{1})} \wedge {(t,c_{1})\in \llbracket \beta'\rrbracket ^{G^*}_{\mathcal E} ]}\big] \Leftrightarrow\text{(by the induction hypothesis)}\\ &{\forall t_\mathcal X\in 2^\mathcal X} \big[{ (s,p(t_\mathcal X))\models \rho^e} \Rightarrow {\exists t_\mathcal Y\in 2^\mathcal Y \exists c_{1}\leq c } : [ ((s,p(t=t_\mathcal X\cup t_\mathcal Y))\models \rho^s) \wedge\\ &{(c_{0}+w^s(s,p(t)) \geq c_{1})} \wedge {(t,c_{1})\in \llbracket \beta'\rrbracket ^{G_c}_{\mathcal E'} ]}\big] \Leftrightarrow\text{(by Def.~\ref{def:concreteGstar})}\\ &{\forall t_\mathcal X\in 2^\mathcal X} \big[{ (s,p(t_\mathcal X))\models \rho^e} \Rightarrow {\exists t_\mathcal Y\in 2^\mathcal Y \exists c_{1}\leq c } : [ {((s, t_\mathcal X, c_{0} ),(t=t_\mathcal X\cup t_\mathcal Y,c_{1}))} \in {E(G_c)} \wedge\\ &{(t,c_{1})\in \llbracket \beta'\rrbracket ^{G_c}_{\mathcal E'}} ]\big] \Leftrightarrow\text{(by Def.~\ref{def:concreteGstar})}\\ &{\forall t_\mathcal X \in 2^\mathcal X} \big[ ((s,c_{0}),(s,t_\mathcal X,c_{0}))\in E(G_c)\Rightarrow (s,t_\mathcal X,c_{0})\in\llbracket \mathlarger{\mathlarger{\mathlarger{\diamond}}} \beta'\rrbracket^{G_c}_{\mathcal E'}\big] \Leftrightarrow (s,c_{0})\in \llbracket \square\mathlarger{\mathlarger{\mathlarger{\diamond}}} \beta'\rrbracket^{G_c}_{\mathcal{E}'}.& \end{align*} \item[$\beta=\eta X(\beta')$] We only prove the case where $\eta=\mu$, as the case of the greatest fixed point (i.e., $\eta=\nu$) is dealt with similarly. Write ${\llbracket \mu X(\beta')\rrbracket^{G^*}_{\mathcal E}}={\bigcup_{i=0}^{+\infty} S_i }$ and ${\llbracket \mu X(\beta')\rrbracket^{G_c}_{\mathcal E'}}={\bigcup_{i=0}^{+\infty} S'_i}$, as in Def.~\ref{def:prop_mu_calculus_semantics}. 
We show by induction that for every $i$, $(s,c_{0})\in S_i\Leftrightarrow (s,c_{0})\in S_i'$. This holds trivially for $i=0$ as $S_0=S_0'=\emptyset$. For the induction step, consider the sets $S_{i+1}=\llbracket \beta'\rrbracket^{G^*}_{\mathcal E[X \mapsto S_i]}$ and $S'_{i+1}=\llbracket \beta'\rrbracket^{G_c}_{\mathcal E'[X \mapsto S_i']}$. By applying the induction hypothesis over $S_i$, since $\mathcal E'$ extends $\mathcal E$, we conclude that $\mathcal E'[X \mapsto S'_i]$ extends $\mathcal E[X \mapsto S_i]$. Therefore, the structural induction hypothesis ensures that $(s, c_{0})\in S_{i+1} \Leftrightarrow (s,c_{0})\in S_{i+1}'$. Consequently, $(s,c_{0})\in \bigcup_{i=0}^{+\infty} S_i \Leftrightarrow(s,c_{0})\in \bigcup_{i=0}^{+\infty} S'_i$, as required.\qedhere \end{description} \end{proof} \noindent Equipped with the (explicit) game graph $G_c$, which we defined in Def.~\ref{def:concreteGstar} and showed in Lem.~\ref{lem:G^w-equiv-Gc} to simulate the WGS $G^w$, we can now invoke the seminal reduction from model-checking of $\mu$-calculus formulae to parity games~\cite{EmersonJ91}. Below, Def.~\ref{def:modelCheckingGame} applies this reduction to $G_c$ and the formula $\psi$. We also refer the reader to~\cite[Chapter~10]{2001automata} on which the reduction we present is based\footnote{Note that we consider $\min$-even parity winning conditions while~\cite[Chapter~10]{2001automata} considers $\max$-even ones. Accordingly, the priority function defined in Def.~\ref{def:modelCheckingGame} is obtained from that in~\cite[Chapter~10]{2001automata} by inverting the order on the priorities.}. Def.~\ref{def:modelCheckingGame} assumes that all relational variables in $\psi$ are quantified by fixed-point operators exactly once. If that is not the case, the variables can be renamed to fulfil this requirement, without affecting the formula's semantics. \begin{defi}[The parity game $G_c\times \psi$]\label{def:modelCheckingGame} Let $G_c\times \psi= \langle (V=V_0\cup V_1,E),\ensuremath{\mathit{prio}} \rangle$ be the parity game (cf. Def.~\ref{def:energyParityGame}) defined by: \begin{itemize} \item $V=\{(S,\psi') \mid S\in V(G_c)$, $\psi' \text{ is a sub-formula of }\psi\}$. \item $V_0\subseteq V$ consists of all states \begin{itemize} \item $(S,v)$ where $S \not\in \llbracket v \rrbracket ^{G_c}$. \item $(S,\neg v)$ where $S \in \llbracket v \rrbracket ^{G_c}$. \item $(S, X)$ where $X\in \mathit{Var}$ is a relational variable. \item $(S, \eta X(\psi_1))$ for $\eta\in \{\mu,\nu\}$. \item $(S, \psi_1\vee \psi_2)$. \item $(S, \mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_1)$. \end{itemize} \item $V_1=V\setminus V_0$. \item A pair, $P\in V\times V$, belongs to $E$ if either of the following holds: \begin{itemize} \item $P=((S,\psi_1\wedge \psi_2),(S,\psi') )$ where $\psi'\in\{\psi_1,\psi_2\}$. \item $P=((S,\psi_1\vee \psi_2),(S,\psi') )$ where $\psi'\in\{\psi_1,\psi_2\}$. \item $P=((S,\eta X(\psi')),(S,\psi'))$ for $\eta\in \{\mu,\nu\}$. \item $P=((S,X),(S,\eta X(\psi')))$ where $\eta X(\psi')$ is the (unique) sub-formula of $\psi$ that binds $X$. \item $P=( (S,\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi'), (T,\psi') )$ where $(S,T)\in E({G_c})$. \item $P=( (S,\square \psi'), (T,\psi') )$ where $(S,T)\in E({G_c})$. \end{itemize} \item For defining the priorities, take some even number $M\geq ad(\psi)$, where $ad(\psi)$ denotes the alternation depth of $\psi$. \begin{itemize} \item For $Q=(S,\psi'=\nu X(\xi))$, $\ensuremath{\mathit{prio}}(Q)=M - 2 \lceil (ad(\psi')-1) / 2 \rceil$. 
\item For $Q=(S,\psi'=\mu X(\xi))$, $\ensuremath{\mathit{prio}}(Q)=M-2\lfloor (ad(\psi')-1) / 2 \rfloor-1$. \item Otherwise, $\ensuremath{\mathit{prio}}(Q)=M$. \end{itemize} \end{itemize} \end{defi} \noindent By~\cite{EmersonJ91}, the existence of a winning strategy for $\text{player}_0$ in the parity game $G_c\times \psi$, defined in Def.~\ref{def:modelCheckingGame}, corresponds to the value of the formula $\psi$ w.r.t. $G_c$: \begin{cor} \label{cor:Gc-equiv-(Gc-times-psi)} For $S\in V(G_c)$, $S\in \llbracket \psi \rrbracket^{G_c}\Leftrightarrow$ $\text{player}_0$ has a winning strategy from $(S,\psi)$ in the parity game $G_c\times \psi$. \end{cor} The next step in our reduction is to add weights to the edges of $G_c\times \psi$, namely to transform $G_c\times \psi$ into an energy parity game (as defined in Def.~\ref{def:energyParityGame}). We are interested in edges whose source states are of the form $(S,\square\psi')$ or $(S,\mathlarger{\mathlarger{\mathlarger{\diamond}}}\psi')$ because the first components of their states change according to transitions in the WGS $G^{w}$ (see Def.~\ref{def:modelCheckingGame} and Def.~\ref{def:concreteGstar}). As we formally define below in Def.~\ref{def:addingWgithFunction}, such edges inherit their weights from the corresponding transitions in $G^w$. \begin{defi}[The weight function $w$]\label{def:addingWgithFunction} We define the weight function, ${w}:{{E(G_c\times \psi)}\rightarrow {\mathbb{Z}}}$, as follows. For edges of the form $e=\big(((s,u,c_0),\square\psi'),((t,c_1),\psi')\big)$ or $e=\big(((s,u,c_0),\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi'),\linebreak((t,c_1),\psi')\big)$, $w(e)=w^s(s,p(t))$. For every other edge, ${e} \in {E(G_c\times \psi)}$, $w(e)=0$. \end{defi} \begin{lem} \label{lem:(Gc-times-psi)-equiv-(Gc-times-psi,w)} Let $S=(s,c_0)$ or $S=(s,u,c_0)$ be a state of $G_c$. Then, $\text{player}_0$ has a winning strategy from $(S,\psi')$ in the parity game $G_c\times\psi$ iff $\text{player}_0$ has a winning strategy from $(S,\psi')$ in the energy parity game $\langle G_c\times\psi,w \rangle$ w.r.t. $c$ for the initial credit $c_0$. \end{lem} \begin{proof} The ``if" direction is trivial, thus we focus on the ``only if" direction. We claim that if $g$ is a strategy that wins for $\text{player}_0$ from $(S,\psi')$ in $G_c\times \psi$, then it also wins for $\text{player}_0$ from $(S,\psi')$ in $\langle G_c\times\psi,w \rangle$ w.r.t. $c$ for the initial credit $c_0$. Consider a play, $(S_0=S,\psi_0=\psi'),(S_1,\psi_1),(S_2,\psi_2),\dots$, consistent with $g$. Since $g$ wins from $(S,\psi')$, the play satisfies the parity objective and it is left to show that the energy objective is achieved as well. Each $S_i$ is of the form $S_i=(s_i,c_i)$ or $S_i=(s_i,u_i,c_i)$. To show that the energy level is always non-negative, we prove by induction on $i$ that ${\sf EL}_c(\langle G_c\times\psi,w \rangle,c_0,(S_0,\psi_0),\dots, (S_i,\psi_i))\geq c_i$. Note that the statement holds for $i=0$ and, for the induction step, assume that it holds for some $i\geq 0$. By the induction hypothesis, $c'' \geq c_{i}$ where $c'' := {\sf EL}_c(\langle G_c\times\psi,w \rangle,c_0,(S_0,\psi_0),\dots, (S_i,\psi_i))$. The interesting case is when $S_i=(s_i,u_i,c_i)$, $S_{i+1}=(s_{i+1},c_{i+1})$ and $\psi_i\in\{ \square \xi, \mathlarger{\mathlarger{\mathlarger{\diamond}}} \xi \}$ for some $\xi$, since in all other cases, $w((S_i,\psi_i),(S_{i+1},\psi_{i+1}))=0$ and $c_{i+1}=c_i$. 
In this case, ${\sf EL}_c(\langle G_c\times\psi,w \rangle,c_0,(S_0,\psi_0),\dots, (S_{i+1},\psi_{i+1}))=\min\{c,c''+w^s(s_i,p(s_{i+1}))\}$. Since $(S_i,S_{i+1})$ is an edge of $G_c$, $\min\{c,c_i+w^s(s_i,p( s_{i+1} ))\} \geq c_{i+1}$. Therefore, as $c''\geq c_i$, $\min\{c,c''+w^s(s_i, p( s_{i+1} ) )\} \geq \min\{c,c_i+w^s(s_i,p(s_{i+1}) )\} \geq c_{i+1}$, as required. \end{proof} Our next goal is to eliminate the energy component from the states of $G_c\times \psi$, so that the number of states will be independent of the choice of the upper bound. Formally, for a state $S$ of $G_c$, let $\faktor{S}{c}$ denote its \emph{reduced version}, defined as follows: \begin{align} &{\faktor{S}{c}} := \begin{cases} (s),~ & \text{if } S=(s,c_{0})\\ (s,u),~& \text{if } S=(s,u,c_{0})\\ \end{cases}& \end{align} Accordingly, we construct the \emph{reduced parity game} $\faktor{G_c\times \psi}{c}$ and its \emph{reduced weight function} $\faktor{w}{c}$. \begin{defi}[The reduced game $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$]\label{def:reducedEnergyParityGame} Let $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle = \langle ((\faktor{V}{c}= \faktor{V_0}{c}\cup \faktor{V_1}{c},\faktor{E}{c}),\faktor{\ensuremath{\mathit{prio}}}{c}), \faktor{w}{c} \rangle$ be the energy parity game defined by: \begin{itemize} \item $\faktor{V_0}{c}=\{(\faktor{S}{c},\psi') \mid (S,\psi')\in V_0(G_c\times \psi)\}$. \item $\faktor{V_1}{c}=\{(\faktor{S}{c},\psi') \mid (S,\psi')\in V_1(G_c\times \psi)\}$. \item $\faktor{E}{c}=\{((\faktor{S_1}{c},\psi_1),(\faktor{S_2}{c},\psi_2)) \mid ((S_1,\psi_1),(S_2,\psi_2))\in E(G_c\times\psi)\}$. \item $\faktor{\ensuremath{\mathit{prio}}}{c}(\faktor{S}{c},\psi')=\ensuremath{\mathit{prio}}(S,\psi')$. Note that $\faktor{\ensuremath{\mathit{prio}}}{c}$ is well-defined as the priority $\ensuremath{\mathit{prio}}(S,\psi')$ is solely determined by $\psi'$. \item If $e=((S_1,\psi_1),(S_2,\psi_2))\in E(G_c\times \psi)$, then $\faktor{w}{c}((\faktor{S_1}{c},\psi_1),(\faktor{S_2}{c},\psi_2))=w(e)$. Note that $\faktor{w}{c}$ is well-defined as the weight of the edge $e$ is independent of the energy components of $S_1$ and $S_2$. \end{itemize} \end{defi} \begin{lem} \label{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c} Let $c_{0} \leq c$. Then, $\text{player}_0$ has a winning strategy in $\langle G_c\times \psi,w \rangle$ from $((s_0,c_0),\psi)$ w.r.t. $c$ for the initial credit $c_0$ iff $\text{player}_0$ has a winning strategy in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$ from $((s_0),\psi)$ w.r.t. $c$ for the initial credit $c_0$. \end{lem} As a step towards proving Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c}, we first prove Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure}. \begin{lem}\label{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure} For $i \geq 1$, let $((S_0,\psi_0) = ((s_{0},c_0),\psi),\dots,(S_{i-1},\psi_{i-1}))$ be a path in $\langle G_c\times \psi,w \rangle$. Then, $S_{i-1}$ is of the form $S_{i-1}=(s_{i-1},u_{i-1},c_{i-1})$ iff $\psi_{i-1}=\mathlarger{\mathlarger{\mathlarger{\diamond}}}\phi$. \end{lem} \begin{proof}[Proof of Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure}] We prove the statement by induction on $i$. For $i=1$, it holds that $S_0=(s_{0},c_0)$, and, since $\psi_{0}=\psi \in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, also $\psi_{0} \not =\mathlarger{\mathlarger{\mathlarger{\diamond}}}\phi$. 
For the induction step, we show that the statement holds for $i > 1$ while assuming that it holds for every $1 \leq i_{0} < i$. If $\psi_{i-1}=\mathlarger{\mathlarger{\mathlarger{\diamond}}}\phi$, as $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, by the construction, $\psi_{i-2}=\square\mathlarger{\mathlarger{\mathlarger{\diamond}}}\phi$. By the induction hypothesis, $S_{i-2}=(s_{i-2},c_{i-2})$ and hence, ${S_{i-1}}={(s_{i-1},u_{i-1},c_{i-1})}$. For the other direction, assume that ${S_{i-1}}={(s_{i-1},u_{i-1},c_{i-1})}$. Assume that the edge $\big((S_{i-2},\psi_{i-2}),(S_{i-1},\psi_{i-1})\big)$ conforms to one of the first four cases in Def.~\ref{def:modelCheckingGame}, and note that in all of these cases, $S_{i-2}=S_{i-1}$. Hence, $\psi_{i-2}$ has one of the following forms: $\phi_1\wedge \phi_2, \phi_1\vee \phi_2,\eta X(\phi),X$, where $\eta\in \{\mu,\nu\}$. Then, by the induction hypothesis, $S_{i-2}=(s_{i-2},c_{i-2})=S_{i-1}$, in contradiction to the assumption. Now, assume that the edge $\big((S_{i-2},\psi_{i-2}),(S_{i-1},\psi_{i-1})\big)$ conforms to the fifth case in Def.~\ref{def:modelCheckingGame}. Then, $\psi_{i-2}=\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i-1}$ and, by the induction hypothesis, $S_{i-2}=(s_{i-2},u_{i-2},c_{i-2})$. By the construction, $S_{i-1}=(s_{i-1},c_{i-1})$, in contradiction to the assumption. Therefore, the edge $\big((S_{i-2},\psi_{i-2}),(S_{i-1},\psi_{i-1})\big)$ conforms to the last remaining case in Def.~\ref{def:modelCheckingGame} and $\psi_{i-2}=\square \psi_{i-1}$. Since $\psi\in \ensuremath{{\mathcal{L}_{\mu}^{{\scriptstyle\mathit{sys}}}}}$, it follows that $\psi_{i-1}=\mathlarger{\mathlarger{\mathlarger{\diamond}}}\phi$, as required. \end{proof} We now turn to prove Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c}. \begin{proof}[Proof of Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c}] \begin{description} \item[``only if''] First, we prove the ``only if" statement. Assume that $g$ is a strategy that wins for $\text{player}_0$ in $\langle G_c\times \psi,w \rangle$ from $((s_0,c_0),\psi)$ w.r.t. $c$ for the initial credit $c_0$. We define a strategy for $\text{player}_0$ in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$, $\faktor{g}{c}$, as follows. For $i > 0$, take a sequence of states in $\faktor{G_c\times \psi}{c}$, $((T_0,\psi_0)=((s_{0}), \psi),\dots,(T_{i-1},\psi_{i-1}))$, that ends in a $\text{player}_0$ state, where each $T_j$ is either $T_j=(s_j)$ or $T_j=(s_j,u_j)$. If there are values $c_{1}, \ldots , c_{i-1} \in ([0,c])^{i-1}$ such that the sequence, $((S_0,\psi_0)=((s_{0}, c_{0}), \psi),\dots,(S_{i-1},\psi_{i-1}))$ where for each $0<j\leq i-1$, $S_{j} = (T_{j}, c_{j})$, is a prefix of a play in $G_c\times \psi$, consistent with $g$, write $g((S_0,\psi_0),\dots,(S_{i-1},\psi_{i-1}))=(S_{i},\psi_{i})$ and define $\faktor{g}{c}((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1}))=(\faktor{S_{i}}{c},\psi_{i})$. Let $((T_0,\psi_0)= ((s_{0}),\psi),\dots,(T_{i-1},\psi_{i-1}))$ be a prefix in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$, consistent with $\faktor{g}{c}$. To prove that $\faktor{g}{c}$ is indeed a well-defined, winning strategy, we argue by induction on $i\geq 1$: \begin{clm} There are unique $c_{1}, \ldots , c_{i-1} \in ([0,c])^{i-1}$ such that $(((T_{0},c_{0}), \psi_{0})=((s_{0}, c_{0}), \psi),((T_{1}, c_{1}), \psi_{1}),\ldots,((T_{i-1}, c_{i-1}),\psi_{i-1}))$ is a prefix of a play in $\langle G_c\times \psi,w \rangle$, consistent with $g$. 
\end{clm} Note that the above claim is immediate for $i=1$. We prove that it holds for ${i} > {1}$ while assuming that it holds for every $1 \leq j < i$. By the induction hypothesis, there are unique $c_{1}, \ldots , c_{i-2} \in ([0,c])^{i-2}$ such that $(((T_{0},c_{0}), \psi_{0})=((s_{0}, c_{0}), \psi),((T_{1}, c_{1}), \psi_{1}),\ldots,\linebreak ((T_{i-2}, c_{i-2}),\psi_{i-2}))$ is a prefix consistent with $g$. (1) Consider the case where $(T_{i-2},\psi_{i-2})$ is a $\text{player}_{1}$ state. Since $((T_{i-2}, c_{i-2}),\psi_{i-2})$ is also a $\text{player}_{1}$ state, by Def.~\ref{def:modelCheckingGame}, we have that either $\psi_{i-2}=\square \psi_{i-1}$ or ${\psi_{i-2}} = {\xi_{1} \wedge \xi_{2}}$. Therefore, as $\psi_{i-2} \not = \mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i-1}$, by Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure}, $(T_{i-2}, c_{i-2}) = (s_{i-2}, c_{i-2})$. Hence, by Def.~\ref{def:reducedEnergyParityGame} and Def.~\ref{def:modelCheckingGame}, if $\psi_{i-2}=\square \psi_{i-1}$, then $T_{i-1}= (s_{i-2}, u_{i-1})$, and otherwise, $T_{i-1}=(s_{i-2})=T_{i-2}$. Moreover, in either case, $(S_{i-1}, \psi_{i-1}) = ((T_{i-1},c_{i-2}), \psi_{i-1})$ is the only successor of $((T_{i-2},c_{i-2}), \psi_{i-2})$ in $G_c\times \psi$ such that $(\faktor{S_{i-1}}{c}, \psi_{i-1}) = (T_{i-1},\psi_{i-1})$. Consequently, $c_{1},\ldots, c_{i-2}, c_{i-2}$ are unique values such that $(((s_{0}, c_{0}), \psi),((T_{1}, c_{1}), \psi_{1}),\ldots,((T_{i-2}, c_{i-2}),\psi_{i-2}),((T_{i-1}, c_{i-2}),\psi_{i-1}))$ is a prefix of a play consistent with $g$. (2) Consider the case where $(T_{i-2},\psi_{i-2})$ is a $\text{player}_{0}$ state. Since $(((s_{0}, c_{0}), \psi),((T_{1}, c_{1}), \psi_{1}),\ldots,((T_{i-2}, c_{i-2}),\psi_{i-2}))$ is a prefix of a play consistent with $g$, there exists ${((T_{i-1}, c_{i-1}), \psi_{i-1})} \in {V(G_c\times \psi)}$ such that $g(((s_{0}, c_{0}), \psi),((T_{1}, c_{1}), \psi_{1}),\ldots,\linebreak ((T_{i-2}, c_{i-2}),\psi_{i-2})) = {((T_{i-1}, c_{i-1}), \psi_{i-1})}$. As $c_{i-1}$ is uniquely determined by $g$, it follows that $c_{1},\ldots, c_{i-2}, c_{i-1}$ are unique values such that $(((s_{0}, c_{0}), \psi),((T_{1}, c_{1}), \psi_{1}),\ldots,\linebreak((T_{i-2}, c_{i-2}),\psi_{i-2}),((T_{i-1}, c_{i-1}),\psi_{i-1}))$ is consistent with $g$, as required. Let ${\faktor{\sigma}{c}} = {{{((T_0,\psi_0)}= {((s_{0}),\psi)}},(T_{1},\psi_{1}),\dots)}$ be a play in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$ consistent with $\faktor{g}{c}$. A corollary of the above claim is that $\faktor{\sigma}{c}$ corresponds to a unique play, ${\sigma} = {({{((T_{0},c_{0}), \psi_{0})}={((s_{0}, c_{0}), \psi)}},((T_{1}, c_{1}), \psi_{1}),\ldots)}$, in $\langle G_c\times \psi,w \rangle$, consistent with $g$. Since $g$ wins for $\text{player}_{0}$, the play, $\sigma$, wins for $\text{player}_{0}$. By Def.~\ref{def:reducedEnergyParityGame}, for each $j\geq 0$, $\faktor{\ensuremath{\mathit{prio}}}{c}(T_{j},\psi_{j})=\ensuremath{\mathit{prio}}((T_{j}, c_{j}), \psi_{j})$ and $\faktor{w}{c}((T_{j},\psi_{j}),(T_{j+1},\psi_{j+1}))=w(((T_{j},c_{j}),\psi_{j}),((T_{j+1},c_{j+1}),\psi_{j+1}))$. Therefore, the priorities and weights traversed along $\faktor{\sigma}{c}$ are the same as those in $\sigma$, hence $\faktor{\sigma}{c}$ wins for $\text{player}_{0}$ in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$. 
This implies that $\faktor{g}{c}$ is a strategy that wins for $\text{player}_{0}$ in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$ from $((s_{0}), \psi)$, as required. \item[``if''] Second, we prove the ``if" statement. Assume that $\faktor{g}{c}$ is a strategy that wins for $\text{player}_0$ in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$ from $((s_{0}),\psi)$ w.r.t. $c$ for an initial credit $c_0$. We construct a strategy, $g$, that wins for $\text{player}_0$ in $\langle G_c\times \psi,w\rangle$ from $((s_0,c_0),\psi)$ w.r.t. $c$ for the initial credit $c_0$. We construct $g$ by $g=\bigcup_{i=1}^\infty g_i$ where each $g_i$ is a partial function $g_i:(V(G_c\times\psi))^i\rightarrow V(G_c\times \psi)$. We construct the functions $g_1,g_2,\dots$ by induction. Take $i> 0$ and assume that the functions $g_1,\dots,g_{i-1}$ have been defined. Take a sequence, $((S_0,\psi_0)= ((s_{0},c_0),\psi),\dots,(S_{i-1},\psi_{i-1}))$, consistent with $\bigcup_{k=1}^{i-1} g_k$, where each $S_j$ is either $S_j=(s_j,c_j)$ or $S_j=(s_j,u_j,c_j)$. If $(S_{i-1},\psi_{i-1})$ is a $\text{player}_0$ state, $g_i((S_0,\psi_0),\dots,(S_{i-1},\psi_{i-1}))$ is defined as follows. Write $T_j=\faktor{S_j}{c}$ for each $j \leq i-1$ and $\faktor{g}{c}((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1}))=(T_i,\psi_i)$. Take the largest $c_i$ such that $\big((S_{i-1},\psi_{i-1}),((T_i,c_i),\psi_i)\big)\in E(G_c\times\psi)$, and define $g_i((S_0,\psi_0),\dots,(S_{i-1},\psi_{i-1}))=(S_{i},\psi_i)$ where ${S_{i}} = {(T_i,c_i)}$. To prove that $g$ is indeed a winning strategy, we argue by induction on the length of the prefix, $((S_0,\psi_0)= ((s_0,c_0),\psi),\dots,(S_{i-1},\psi_{i-1}))$: \begin{enumerate} \item \label{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT1} $((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1}))$ is consistent with $\faktor{g}{c}$. \item \label{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT3} ${\sf EL}_c(\langle G_c\times\psi,w \rangle,c_0,(S_0,\psi_0),\dots, (S_{i-1},\psi_{i-1})) = {\sf EL}_c(\langle \faktor{G_c\times\psi}{c},\faktor{w}{c} \rangle,c_0,(T_0,\psi_0),\dots,\linebreak (T_{i-1},\psi_{i-1})) = c_{i-1}$. \item \label{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT4} If $(S_{i-1},\psi_{i-1})$ is a $\text{player}_0$ state, then ${g_i((S_0,\psi_0),\dots,(S_{i-1},\psi_{i-1}))}$ is well-defined. That is, there exists $T_i$ such that $\faktor{g}{c}((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1})) = {(T_i,\psi_i)}$, and there exists $c_i \in [0,c]$ such that ${\big((S_{i-1},\psi_{i-1}),((T_i,c_i),\psi_i)\big)}\in {E(G_c\times\psi)}$. \end{enumerate} \noindent We leave it to the reader to verify that the above properties \ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT1}~-~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT4} hold for ${i} = {1}$. Properties \ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT1}~-~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT3} are immediate, and property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT4} holds by arguments similar to those in the general case where $i > 1$, yet simpler. We turn to prove that these properties hold for $i>1$ while assuming that they hold for every $1\leq i_0<i$. We first prove property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT1}. By the induction hypothesis, $((T_0,\psi_0),\dots,(T_{i-2},\psi_{i-2}))$ is consistent with $\faktor{g}{c}$. 
If $(S_{i-2},\psi_{i-2})$ is a $\text{player}_1$ state, then the same applies to $(T_{i-2},\psi_{i-2})$, and thus $((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1}))$ is consistent with $\faktor{g}{c}$. Otherwise, we have the case where both $(S_{i-2},\psi_{i-2})$ and $(T_{i-2},\psi_{i-2})$ are $\text{player}_0$ states. By property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT4}, $\faktor{g}{c}((T_0,\psi_0),\dots,(T_{i-2},\psi_{i-2}))=(T_{i-1},\psi_{i-1})$. Therefore, it follows from the induction hypothesis that $((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1}))$ is consistent with $\faktor{g}{c}$ in this case as well. We now turn to prove property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT3}. By the induction hypothesis, it holds that ${\sf EL}_c(\langle G_c\times\psi,\linebreak w \rangle,c_0,(S_0,\psi_0),\dots, (S_{i-2},\psi_{i-2}))= {\sf EL}_c(\langle \faktor{G_c\times\psi}{c},\faktor{w}{c} \rangle, c_0,(T_0,\psi_0),\dots,(T_{i-2},\psi_{i-2})) = {c_{i-2}}$. First, consider the case where ${\psi_{i-2}} \not = {\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i-1}}$. By Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure}, $S_{i-2} = (s_{i-2}, c_{i-2})$, and by Def.~\ref{def:addingWgithFunction} and Def.~\ref{def:reducedEnergyParityGame}, $w((S_{i-2},\psi_{i-2}),(S_{i-1},\psi_{i-1})) = \faktor{w}{c}((T_{i-2},\psi_{i-2}),(T_{i-1},\psi_{i-1})) = 0$. Also, by Def.~\ref{def:modelCheckingGame}, $c_{i-2} = c_{i-1}$. Thus, the property holds in this case. Second, consider the remaining case where $\psi_{i-2} = \mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i-1}$. In this case, $(S_{i-2}, \psi_{i-2})$ is a $\text{player}_0$ state. By Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure} and Def.~\ref{def:modelCheckingGame}, ${S_{i-2}}={(s_{i-2},u_{i-2},c_{i-2})}$ and ${S_{i-1}}={(s_{i-1},c_{i-1})}$. Therefore, by Def.~\ref{def:addingWgithFunction} and Def.~\ref{def:reducedEnergyParityGame}, we have that $w((S_{i-2},\psi_{i-2}),(S_{i-1},\psi_{i-1})) = \faktor{w}{c}((T_{i-2},\psi_{i-2}),\linebreak(T_{i-1},\psi_{i-1})) = w^s(s_{i-2},p(s_{i-1}))$. This, together with the induction hypothesis, implies: \begin{align*} &{\sf EL}_c(\langle G_c\times\psi,w \rangle,c_0,(S_0,\psi_0),\dots, (S_{i-1},\psi_{i-1})) =\\ &{\sf EL}_c(\langle \faktor{G_c\times\psi}{c},\faktor{w}{c} \rangle, c_0,(T_0,\psi_0),\dots, (T_{i-1},\psi_{i-1})) =\\ &\min\{c,c_{i-2}+w^s(s_{i-2}, p(s_{i-1}))\}.& \end{align*} But, since $\faktor{g}{c}$ wins for $\text{player}_0$ and, by property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT1}, $(T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1})$ is consistent with $\faktor{g}{c}$, we have that ${\hat{c}} = {{\min\{c,c_{i-2}+w^s(s_{i-2},p(s_{i-1}))\}} \geq {0}}$. Also, by the construction, $\hat{c}$ is the maximal value such that $\big(((s_{i-2},u_{i-2},c_{i-2}),\psi_{i-2}), ((s_{i-1},\hat{c}),\psi_{i-1})\big) \in {E(G_c\times \psi)}$. Thus, by the definition of $g_{i-1}$, we obtain that $c_{i-1} = \hat{c}$, which concludes the proof for property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT3}. To prove property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT4}, first note that the existence of $(T_i,\psi_i)$ such that $\faktor{g}{c} ((T_0,\psi_0),\dots,\linebreak(T_{i-1},\psi_{i-1}))=(T_i,\psi_i)$ follows immediately from property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT1}, as $((T_0,\psi_0),\dots,(T_{i-1},\psi_{i-1}))$ is consistent with $\faktor{g}{c}$. 
Thus, it remains to prove that for some $c_i$, $\big((S_{i-1},\psi_{i-1}),((T_i,c_i),\psi_i)\big)\linebreak\in E(G_c\times\psi)$. First, consider the case where $\psi_{i-1} \not = \mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}$. By Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure}, $S_{i-1} = (s_{i-1}, c_{i-1})$. Thus, by Def.~\ref{def:modelCheckingGame} and Def.~\ref{def:reducedEnergyParityGame}, since $(S_{i-1},\psi_{i-1})$ is a $\text{player}_{0}$ state, it holds that $T_{i-1} = (s_{i-1}) = T_{i}$, and for $c_{i} := c_{i-1}$, $\big(((s_{i-1}, c_{i-1}),\psi_{i-1}),((s_{i-1},c_{i}),\psi_i)\big)\in E(G_c\times\psi)$. Second, consider the remaining case where $\psi_{i-1} = \mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}$. By Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:prefixStructure}, ${S_{i-1}}={(s_{i-1},u_{i-1},c_{i-1})}$, hence ${T_{i-1}}={(s_{i-1},u_{i-1})}$. Since $\big((T_{i-1},\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}),(T_{i},\psi_{i})\big)\in E(\faktor{G_c\times \psi}{c})$, it follows from Def.~\ref{def:reducedEnergyParityGame} that $T_{i} = (s_{i})$. By Def.~\ref{def:modelCheckingGame} and Def.~\ref{def:concreteGstar}, it holds that for every $c' \in [0,c]$, $\big(((s_{i-1}, u_{i-1}, c_{i-1}),\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}),((s_{i},c'),\psi_{i})\big)\in E(G_c\times\psi)$ iff $\min\{c,c_{i-1}+w^{s}(s_{i-1},p(s_{i}))\} \geq c'$. Thus, it is sufficient to show that ${\min\{c,c_{i-1}+w^{s}(s_{i-1},p(s_{i}))\}} \geq {0}$. By property~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c:IT3}, we have that ${c_{i-1}} = {{\sf EL}_c(\langle \faktor{G_c\times\psi}{c},\faktor{w}{c} \rangle,c_0,(T_0,\psi_0),\dots, (T_{i-1},\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}))}$. Moreover, by Def.~\ref{def:reducedEnergyParityGame}, $\faktor{w}{c}((T_{i-1},\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}),(T_{i},\psi_{i})) = {\faktor{w}{c}(((s_{i-1},u_{i-1}),\mathlarger{\mathlarger{\mathlarger{\diamond}}} \psi_{i}), ((s_{i}),\psi_{i}))} = {w^{s}(s_{i-1}, p(s_{i}))}$. Thus, ${\min\{c,c_{i-1}+w^{s}(s_{i-1},p(s_{i}))\}} = {\sf EL}_c(\langle \faktor{G_c\times\psi}{c},\faktor{w}{c} \rangle, c_0,(T_0,\psi_0),\dots, (T_{i},\psi_{i}))$. Since $\faktor{g}{c}$ wins for $\text{player}_{0}$ and the prefix $((T_0,\psi_0),\dots,(T_{i},\psi_{i}))$ is consistent with $\faktor{g}{c}$, it follows that $\min\{c,c_{i-1}+w^{s}(s_{i-1},p(s_{i}))\} \geq 0$, as required. \qedhere \end{description} \end{proof} \noindent We can now conclude that the graph $\faktor{G_c\times \psi}{c}$ simulates the WGS $G^w$. By Lem.~\ref{lem:G^w-equiv-Gc}, Cor.~\ref{cor:Gc-equiv-(Gc-times-psi)}, Lem.~\ref{lem:(Gc-times-psi)-equiv-(Gc-times-psi,w)} and Lem.~\ref{lem:(Gc-times-psi,w)-equiv-(G_c-times-psi)/c}, the following holds: \begin{cor} \label{cor:G^w-equiv-(Gc-times-psi)/c} Take a finite upper bound, $c\in \mathbb{N}$, and $c_0\leq c$. Then, the system has a winning strategy in the WGS $G^w$ from $s$ w.r.t. $c$ for the initial credit $c_0$, iff $\text{player}_0$ has a winning strategy in the energy parity game $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$ from $((s),\psi)$ w.r.t. $c$ for the initial credit $c_0$. 
\qed \end{cor} The crux of the reduction, which allows us to conclude the sufficiency of the upper bound specified in Thm.~\ref{Thm:a-sufficient-bound}, is that the energy parity game $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$, defined in Def.~\ref{def:reducedEnergyParityGame}, is \emph{not} dependent on the choice of the bound $c$, provided that $c$ is sufficiently large. This is formally stated in the next lemma. \begin{lem} \label{lemma:{Gc-times-psi}/c-independent-of-c} Let $K$ be the maximal transition weight in the WGS, $G^{w}$, in absolute value. Then, for $c,c'\geq K$, $c,c'\in \mathbb N$, ${\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle} = {\langle \faktor{G_{c'}\times \psi}{c'},\faktor{w}{c'} \rangle}$. \end{lem} \begin{proof} Clearly, it holds that $V(\faktor{G_c\times \psi}{c})=V(\faktor{G_{c'}\times \psi}{c'})$. We show that $E(\faktor{G_c\times \psi}{c})=E(\faktor{G_{c'}\times \psi}{c'})$, and that each edge is assigned the same weight in both games. By symmetry, it suffices to show that for each $e\in E(\faktor{G_c\times \psi}{c})$, $e$ also belongs to $E(\faktor{G_{c'}\times \psi}{c'})$ and has the same weight. Take such an edge, $e$. By Def.~\ref{def:reducedEnergyParityGame} and Def.~\ref{def:modelCheckingGame}, $e$ has one of the following forms: \begin{enumerate} \item\label{lemma:{Gc-times-psi}/c-independent-of-c:case1} $e=(((s,u),\xi),((t),\xi'))$ where $\xi \in \{\mathlarger{\mathlarger{\mathlarger{\diamond}}} \xi', \square \xi' \}$: In this case, for some $a,b\in[0,c]$, $(((s,u,a),\xi),((t,b),\xi'))\in E(G_c\times \psi)$. Thus, by Def.~\ref{def:concreteGstar}, $(s,p(t)) \models \rho^{s}$ and $t|_{\mathcal{X}} = u$. Also, by the definition of $K$, $\min\{c', K + w^s(s,p(t))\} \geq 0$. Consequently, since $c'\geq K$, $(((s,u,K),\xi),((t,0),\xi'))\in E(G_{c'}\times \psi)$, hence, $e\in E(\faktor{G_{c'}\times \psi}{c'})$. Furthermore, the weight of $e$ is $w^s(s,p(t))$ in both games. \item\label{lemma:{Gc-times-psi}/c-independent-of-c:case2} $e=(((s),\xi),((s,u),\xi'))$ where $\xi \in \{\mathlarger{\mathlarger{\mathlarger{\diamond}}} \xi', \square \xi' \}$: In this case, for some $a\in[0,c]$, $(((s,a),\xi),((s,u,a),\xi'))\in E(G_c\times \psi)$. Thus, by Def.~\ref{def:concreteGstar}, $(s,p(u))\models \rho^{e}$. Consequently, it holds that $(((s,0),\xi),((s,u,0),\xi'))\in E(G_{c'}\times \psi)$, hence $e\in E(\faktor{G_{c'}\times \psi}{c'})$. \item\label{lemma:{Gc-times-psi}/c-independent-of-c:case3} $e=((T,\xi),(T,\xi'))$ where $\xi \in \{\phi_1\wedge \phi_2, \phi_1\vee \phi_2,\mu X(\phi),\nu X(\phi),X\}$: In this case, for some $a\in [0,c]$, $(((T,a),\xi), ((T,a),\xi'))\in E(G_c\times \psi)$. Therefore, $(((T,0),\xi), ((T,0),\xi'))\in E(G_{c'}\times \psi)$, hence $e\in E(\faktor{G_{c'}\times \psi}{c'})$. \end{enumerate} \noindent Moreover, in cases~\ref{lemma:{Gc-times-psi}/c-independent-of-c:case2} and \ref{lemma:{Gc-times-psi}/c-independent-of-c:case3}, the weight of $e$ is $0$ in both games. \end{proof} We can now prove Thm.~\ref{Thm:a-sufficient-bound}. \begin{proof}[Proof of Thm.~\ref{Thm:a-sufficient-bound}] Assume that $s$ wins in $G^w$ for the system player w.r.t. $+\infty$ for an initial credit $c_0$. By Lem.~\ref{lem:from-infinite-to-finite}, for some natural number $c\geq K$ and $c_0'=\min\{c_0,c\}$, $s$ wins for the system player w.r.t. $c$ for an initial credit $c_0'$. By Cor.~\ref{cor:G^w-equiv-(Gc-times-psi)/c}, $((s),\psi)$ wins for $\text{player}_0$ in $\langle \faktor{G_c\times \psi}{c},\faktor{w}{c} \rangle$ w.r.t. 
$c$ for an initial credit $c_0'$. Now, by Def.~\ref{def:reducedEnergyParityGame}, $\faktor{G_c\times \psi}{c}$ has at most $(N^2+N)m$ states and $d+1$ different priorities, and $K$ is the maximal weight of its edges, in absolute value. By Lem.~\ref{lem:lemma-6-revised}, $((s),\psi)$ wins for $\text{player}_0$ w.r.t. $b=(d+1)((N^2+N)m-1)K$ for an initial credit $c_0''=\min\{c_0',((N^2+N)m-1)K\}$. As $b\geq K$, by Lem.~\ref{lemma:{Gc-times-psi}/c-independent-of-c}, ${\langle \faktor{G_b\times \psi}{b},\faktor{w}{b} \rangle} = {\langle \faktor{G_{c}\times \psi}{c},\faktor{w}{c} \rangle}$. Therefore, by Cor.~\ref{cor:G^w-equiv-(Gc-times-psi)/c}, $s$ wins for the system player in $G^w$ w.r.t. $b$ for an initial credit $c_0''\leq \min\{c_0,((N^2+N)m-1)K\}$, as required. \end{proof} \end{document}
\begin{document} \title[ROBUST TRANSITIVITY FOR ENDOMORPHISMS]{ROBUST TRANSITIVITY FOR ENDOMORPHISMS} \author[C. Lizana]{Cristina $\text{Lizana}^{\dag}$} \address{$\dag$ Departamento de Matem\'atica\\ Facultad de Ciencias\\ Universidad de Los Andes\\ La Hechicera-M\'{e}rida, 5101\\ Venezuela} \email{[email protected]} \thanks{\noindent $\dag$ This work was partially supported by TWAS-CNPq and Universidad de Los Andes} \author[E. Pujals]{Enrique $\text{Pujals}^{\ddag}$} \address{$\ddag$ IMPA, Estrada Dona Castorina,110, CEP 22460-320, Rio de Janeiro, Brazil} \email{[email protected]} \date{\today} \maketitle \begin{abstract} We address the problem of determining under what conditions an endomorphism having a dense orbit verifies that every sufficiently close perturbed map also exhibits a dense orbit. In this direction, we give sufficient conditions, covering a large class of examples, for endomorphisms on the $n-$dimensional torus to be robustly transitive: the endomorphism must be volume expanding, and any large connected arc must contain a point whose future orbit belongs to an expanding region. \end{abstract} \section{Introduction} One goal in dynamics is to look for conditions that guarantee that a certain phenomenon is robust under perturbations, that is, hypotheses under which some main feature of a dynamical system is shared by all nearby systems. In particular, we are interested in the hypotheses under which an endomorphism is robustly transitive (see definitions \ref{tran} and \ref{RT}). In the diffeomorphism case, there are many examples of robust transitive systems. The best known is the transitive Anosov diffeomorphism. In the nonhyperbolic context, the first example was given by Shub in $\mathbb{T}^4$ in 1971 (see \cite{Shub2}); another example is Ma\~{n}\'{e}'s Derived from Anosov in $\mathbb{T}^3$ (see \cite{MR}); Bonatti and D\'{\i}az \cite{BD} gave a geometrical construction that produces partially hyperbolic robust transitive systems, and these constructions were generalized by Bonatti and Viana, providing robust transitive diffeomorphisms with dominated splitting which are not partially hyperbolic (see \cite{BV}). All those examples are adapted (and some new ones are extended) to the case of endomorphisms (see section \ref{s5}). On the other hand, any $C^1-$robust transitive diffeomorphism exhibits a do\-mi\-na\-ted splitting (see \cite{BDP}). This is no longer true for endomorphisms (see example 1 in section \ref{ss31}). Therefore, for endomorphisms, conditions that imply robust transitivity cannot hinge on the existence of a splitting. The first question that arises is what necessary condition a robust transitive endomorphism has to verify. Adapting some parts of the proof in \cite{BDP}, it is shown in Theorem \ref{teo0}, section \ref{s4}, that for endomorphisms not exhibiting a dominated splitting (in a robust way, see definition \ref{nosp}), being volume expanding is a necessary $C^1$ condition. However, volume expansion is not a sufficient condition for the robust transitivity of a local diffeomorphism, as follows from considering an expanding endomorphism times an irrational rotation (this system is volume expanding and transitive but not robustly transitive; see remark \ref{obs16} for more details). Hence, we need an extra condition (one that persists under perturbations and does not depend on the existence of any type of splitting) that allows us to conclude robustness. 
The extra hypothesis that we require can be formulated as follows: \emph{any arc of diameter large enough have a point such that its forward iterates remain in some expanding region }(see Main Theorem below). Before introducing the Main Theorem, we recall some definitions and we introduce some notation that we use throughout this work. An \emph{endomorphism} of a differentiable manifold $M$ is a differentiable function $f:M\rightarrow M$ of class $C^r$ with $r\geq 1.$ Let us denote by $E^r(M)$ ($r\geq 1$) the space of $C^r-$endomorphisms of $M$ endowed with the usual $C^r$ topology. A \emph{local diffeomorphism} is an endomorphism $f:M\rightarrow M$ such that given any point $x\in M,$ there exists an open set $V$ in $M$ containing $x$ such that $f$ from $V$ to $f(V)$ is a diffeomorphism. \begin{defi} We say that a map $f\in E^1(M)$ is \emph{volume expanding} if there exists $\sigma>1$ such that $|det(Df)|>\sigma.$ \end{defi} Observe that volume expanding endomorphisms are local diffeomorphisms. If $L:V\rightarrow W$ is a linear isomorphism between normed vector spaces, we denote by $m\{L\}$ the \emph{minimum norm} of $L,$ i.e. $m\{L\}=\|L^{-1}\|^{-1}.$ \begin{defi} We say that a set $\Lambda\subset M$ is a \emph{forward invariant set} for $f\in E^r(M)$ if $f(\Lambda)\subset\Lambda$ and it is \emph{invariant} for $f$ if $f(\Lambda)=\Lambda$. \end{defi} \begin{defi} We say that a map $f\in E^1(M)$ is \emph{expanding} in $U$, a subset of $M,$ if there exists $\lambda>1$ such that $\displaystyle\min_{x\in U}\{m\{D_xf\}\}>\lambda.$ It is said that a compact invariant set $\Lambda$ is an \emph{expanding set} for an endomorphism $f$ if $f\mid_{\Lambda}$ is an expanding map. \end{defi} \begin{defi} Let $U$ be an open set in $\mathbb{T}^n$. Denote by $\widetilde{U}$ the lift of $U$ restricted to a fundamental domain of $\mathbb{R}^n$ . Define the \emph{diameter} of $U$ by $$diam(U)= \max\{dist(x, y): x, y\in \widetilde U\}.$$ Define the \emph{internal diameter} of $U^c$ by $$diam_{int}(U^c)\!=\!\displaystyle\min_{k\in\mathbb{Z}^n\setminus \{0\}}dist\!(\widetilde{U}, \widetilde{U}+k),$$ where $dist(A,B):=\displaystyle\inf\{\max_{1\leq i\leq n}|x_i-y_i|: x=(x_1,\ldots,x_n)\!\in\! A, y=(y_1,\ldots,y_n)\!\in\! B\}.$ \end{defi} Related to the last definition, observe that if $diam(U)<1$ then, translating the frame $\mathbb{Z}^n,$ we can assume that $\widetilde{U}$ is contained in the interior of $[0,1]^n$ and in particular, $diam_{int}(U^c)>0$. \begin{defi}\label{tran} Let $\Lambda$ be an invariant set for an endomorphism $f:M\rightarrow M.$ It is said that $\Lambda$ is \emph{topologically transitive} (or transitive) if there exists a point $x\in \Lambda$ such that its forward orbit $\{f^k(x)\}_{k\geq 0}$ is dense in $\Lambda.$ We say that $f$ is \emph{topologically transitive} if $\{f^k(x)\}_{k\geq 0}$ is dense in $M$ for some $x\in M.$ \end{defi} The following lemma is a more useful characterization of transitivity. \begin{lema} Let $f:M\rightarrow M$ be a continuous map of a locally compact separable metric space $M$ into itself. The map $f$ is topologically transitive if and only if for any two nonempty open sets $U,V\subset M,$ there exists a positive integer $N=N(U,V)$ such that $f^N(U)\cap V$ is nonempty. \end{lema} \text{}\\ {\bf Proof. } See for instance \cite[pp.29]{Katok}.\B Instead of transitivity, we may assume the density of the pre-orbit of any point. Observe that this implies transitivity, but the converse does not necessarily hold. 
In fact, it is enough to have a dense subset of the manifold such that every point in this set has dense pre-orbit to obtain transitivity. Contrary to the diffeomorphism case, for endomorphisms, having just one point with dense pre-orbit is not enough to guarantee transitivity. We leave the details to the reader; it is not hard to construct a non-transitive example having some points with dense pre-orbit. Let us state the main theorem of the present work. \begin{mth} \emph{Let $f\in E^r(\mathbb{T}^n)$ be a volume expanding map ($n\geq 2, r\geq 1$) such that $\{w\in f^{-k}(x): k\in \mathbb{N}\}$ is dense for every $x\in \mathbb{T}^n$ and satisfying the following pro\-per\-ties: \begin{enumerate} \item There is an open set $U_0$ in $\mathbb{T}^n$ such that $f\!\!\mid_{U_0^c}$ is expanding and $diam\!(U_0)\!\!<\!\!1$. \item There exists $0<\delta_0<diam_{int}(U_0^c)$ and there exists an open neighborhood $U_1$ of $\overline{U}_0$ such that for every arc $\gamma$ in $U_0^c$ with diameter larger than $\delta_0,$ there is a point $y\in\gamma$ such that $f^k(y)\in U_1^c$ for any $k\geq 1.$ \item Moreover, for every $z\in U_1^c,$ there exists $\bar{z}\in U_1^c$ such that $f(\bar{z})=z.$ \end{enumerate} Then, for every $g$ $C^r-$close enough to $f,$ $\{w\in g^{-k}(x): k\in \mathbb{N}\}$ is dense for every $x\in \mathbb{T}^n.$ In particular, $f$ is $C^r-$robust transitive.} \end{mth} We would like to say a few words about the hypotheses of the Main Theorem. The first hypothesis states that there exists a set $U_0$ (not necessarily connected) where $f$ fails to be expanding (if $U_0$ is empty, then $f$ is expanding and the thesis follows from standard arguments for expanding maps); however, $U_0$ is contained in a ball of radius one, and in its complement $f$ is expanding. The second hypothesis states that for any large connected arc in the expanding region, there is a point whose forward iterates remain in the expanding region. We assume $n=\dim \mathbb{T}^n$ greater than or equal to $2$, since in dimension $1$, if a map is volume expanding, then it is an expanding map. A class of systems that verifies the hypotheses of the Main Theorem is a certain type of maps isotopic to expanding endomorphisms. More precisely, we call those maps ``\emph{Derived from Expanding}''; this name is inspired by the \emph{Derived from Anosov} maps (see \cite{MR}), which are maps isotopic to an Anosov diffeomorphism but are not Anosov. In particular, Derived from Expanding maps that satisfy the hypotheses of the Main Theorem are robustly transitive. In examples 1 and 2 in section \ref{s5}, we show that there exist Derived from Expanding maps satisfying the hypotheses of the Main Theorem. We want to point out that in the hypotheses of the Main Theorem it is not assumed that $f$ is isotopic to an expanding map. Some questions that arise from the above discussion are: \emph{if a map satisfies the hypotheses of the Main Theorem, then is this map isotopic to an expanding endomorphism?} \emph{Are robust transitive endomorphisms without dominated splitting (in a robust way) isotopic to expanding endomorphisms?} We suggest that, before entering into the proof of the Main Theorem, the reader take a glance at section \ref{ss12} in order to gain some insight into the proof. 
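To make the quantitative notions in the Main Theorem concrete, consider a linear endomorphism of $\mathbb{T}^2$ induced by an integer matrix $A$: it is volume expanding iff $|\det A|>1$, and expanding iff $m\{A\}=\|A^{-1}\|^{-1}>1$, i.e. iff the smallest singular value of $A$ exceeds $1$. The following sketch checks both conditions numerically for a hypothetical matrix chosen only for illustration (it is not one of the examples of section \ref{s5}):
\begin{verbatim}
import numpy as np

# Hypothetical integer matrix inducing a local diffeomorphism of the 2-torus.
A = np.array([[3.0, 1.0],
              [1.0, 1.0]])

vol_expanding = abs(np.linalg.det(A)) > 1        # |det A| = 2 > 1
m_A = np.linalg.svd(A, compute_uv=False).min()   # m{A}: least singular value
expanding = m_A > 1                              # fails: m{A} ~ 0.59

print(vol_expanding, expanding)                  # True False
\end{verbatim}
This matrix is volume expanding but not expanding, illustrating once more that volume expansion alone cannot provide the expansion required outside $U_0$.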
We want to highlight that this theorem, as it is enunciated, does not assume the existence of a tangent bundle splitting (nor does it assume the lack of a dominated splitting), and it covers examples of robust transitive endomorphisms without any dominated splitting (recall example 1 in section \ref{s5}). The Main Theorem can be recast in terms of geometrical properties; see the Main Theorem Revisited in section \ref{ss17}. In section \ref{ss121}, we adapt the Main Theorem for the case where the endomorphism has a partially hyperbolic splitting; this is given in Theorem \ref{teo2}, and the proof is an adaptation of \cite{PS1}. In section \ref{s5}, we provide examples satisfying the main results. Those satisfying the Main Theorem are done in such a way that they do not have any dominated splitting and they are Derived from Expanding endomorphisms. For this case we provide two types of examples: some are built through bifurcation of periodic points and the others are ``far from" expanding endomorphisms (see examples 1 and 2). In examples 3 and 4, we show that there are open sets of endomorphisms that satisfy Theorem \ref{teo2}. Those examples are partially hyperbolic and they are not isotopic to expanding endomorphisms. \section{Proof of the Main Result}\label{ss11} Before starting the proof, we state a series of remarks that could help to understand the hypotheses of the Main Theorem, and in subsection \ref{ss12} we provide a sketch of the proof, pointing out the main details and the general strategy. \begin{obs}\label{obs7} As we said in the introduction, the condition $diam(U_0)<1$ implies that we can assume that the closure of $\widetilde{U}_0$ is contained in the interior of $[0,1]^n,$ where $\widetilde{U}_0$ is the lift of $U_0$ restricted to $[0,1]^n.$ Note that $U_0$ does not need to be simply connected and could have finitely many connected components. Actually, the important fact is that the closure of the convex hull of the lift of $U_0$ restricted to $[0,1]^n$ is still contained in $(0,1)^n.$ Observe that $diam_{int}(U_0^c)=diam_{int}(\mathfrak{U}_0^c),$ where $\mathfrak{U}_0$ is the convex hull of $\widetilde{U}_0.$ \end{obs} The Main Theorem is formulated for the $n-$dimensional torus. Some of the examples provided in section \ref{s5} are isotopic to expanding endomorphisms. Taking into account \cite{Shub}, we may formulate the following: \begin{conj} The Main Theorem holds, at least, for any manifold supporting expanding endomorphisms. \end{conj} \begin{obs}\label{obs2} Using hypothesis (3) of the Main Theorem, given any point $x\in U_1^c,$ we can construct a sequence $\{x_k\}_{k\geq 0}$ such that $x_0=x,$ $x_k\in U_1^c$ and $f(x_{k+1})=x_k$ for every $k\geq 0$. We call this sequence an \emph{inverse path}. \end{obs} \begin{obs} The hypothesis of diameter less than 1 and hypothesis (3) are technical. This means that they are necessary conditions for the present proof of our result, but we do not know if there exist weaker conditions that imply the thesis of our theorem. \end{obs} \begin{obs}\label{obs1} Observe that $\Lambda_0:=\bigcap_{n\geq 0} f^{-n}(U_0^c)$ is an expanding set. Moreover, from hypothesis (2) it follows that given any arc $\gamma$ in $U_0^c$ with diameter greater than $\delta_0$, there exists a point $x\in\gamma$ such that $f^k(x)$ is not in $U_1$ for any $k\geq 1.$ Therefore, $\gamma\cap\Lambda_0\neq\emptyset$ and in particular $\Lambda_0$ is not trivial. 
\end{obs} \begin{defi} Let $\Lambda$ be an expanding set for $f\in E^1(M).$ If there is an open neighborhood $V$ of $\Lambda$ such that $\Lambda=\bigcap_{k\geq 0} f^{-k}(\overline{V})$ then $\Lambda$ is said to be a \emph{locally maximal} (or isolated) set. $V$ is called the \emph{isolating block} of $\Lambda.$ \end{defi} All the previous remarks can be summarized and extended in the next observation. \begin{obs}\label{obs3} Let us denote $\Lambda_1:=\bigcap_{n\geq 0} f^{-n}(U_1^c).$ This set has the following pro\-per\-ties: \begin{enumerate} \item $\Lambda_1$ is an expanding set. \item By hypothesis (2) of the Main Theorem, given any arc $\gamma$ in $U_0^c$ with diameter greater than $\delta_0$, there exists a point $x\in\gamma$ such that $f(x)\in\Lambda_1.$ \item Since the hypothesis $0<\delta_0<diam_{int}(U_0^c)$ is an open condition, we may take $U_1$ an open neighborhood of $\overline{U_0}$ such that $\delta_0<diam_{int}(U_1^c)<diam_{int}(U_0^c).$ Then, for every arc $\gamma$ in $U_1^c$ with diameter greater than $\delta_0,$ it holds that $\gamma\cap \Lambda_1$ is nonempty. \item $\Lambda_1$ is invariant, i.e. $f(\Lambda_1)=\Lambda_1$. It is clear that $\Lambda_1$ is forward invariant. So let us prove that $\Lambda_1\subset f(\Lambda_1).$ Pick a point $x\in\Lambda_1$ and consider the sequence $\{x_k\}_{k\geq 0}$ given by remark (\ref{obs2}). Let us show that $x_k\not\in W$ for any $k\geq 0,$ where $W=\cup_{n\geq 0} f^{-n}(U_1)=\Lambda_1^c.$ If this is not true, there exist $k\geq 0$ and $n_k\geq 0$ such that $f^{n_k}(x_k)\in U_1.$ First, observe that remark (\ref{obs2}) implies that $f^n(x_k)=x_{k-n}$ for $0\leq n\leq k.$ In particular, $f^k(x_k)=x_0$ if $k\geq 0.$ And $f^n(x_k)=f^{n-k}(f^k(x_k))=f^{n-k}(x_0)$ for $n>k\geq 0.$ Therefore, if $-k\leq -n_k\leq 0,$ then $f^{n_k}(x_k)=x_{k-n_k}.$ Since every $x_k$ belongs to $U_1^c,$ we obtain that $f^{n_k}(x_k)$ belongs to $U_1^c,$ which is a contradiction because it was supposed that $f^{n_k}(x_k)\in U_1.$ If $-n_k< -k<0,$ then $f^{n_k}(x_k)=f^{n_k-k}(x_0).$ Since $x_0\in \Lambda_1,$ every positive iterate of $x_0$ by $f$ belongs to $U_1^c,$ thus $f^{n_k}(x_k)\in U_1^c,$ which contradicts the fact that $f^{n_k}(x_k)\in U_1.$ Thus, $x_k\in \Lambda_1$ for every $k\geq 0.$ \item In section \ref{ss13}, we prove that this set is locally maximal or it is contained in an expanding locally maximal set. \end{enumerate} \end{obs} \subsection{Sketch of the Proof of the Main Theorem}\label{ss12} Observe that if $f$ satisfies the hypotheses of the Main Theorem, then it satisfies the following property, denoted the \emph{internal radius growing} (I.R.G.) property: \begin{quote} \emph{There exists $R_0$ depending on the initial system such that given any open set $U,$ there exist $x\in U$ and $K\in \mathbb{N}$ verifying that $f^K(U)$ contains a ball of a fixed radius $R_0$ centered at $f^K(x).$} \end{quote} In fact, since $f$ is volume expanding, the lift of $f$ is a diffeomorphism in the universal covering space $\mathbb{R}^n$. In consequence, given any open set $U\subset \mathbb{T}^n,$ volume expansion implies that the diameter of the iterates by $f$ of $U$ grows on the covering space (see Lemma \ref{afir4} for details). Then, for some $N>0,$ the diameter of $f^N(U)\cap U_0^c$ is greater than or equal to $\delta_0$ (the constant in the second hypothesis). Then we can pick an arc in $f^N(U)\cap U_0^c$ of sufficiently large diameter and, using the second hypothesis, we get that there exists a point in $f^N(U)$ whose forward orbit remains in the expanding region. 
Hence, if $g$ also verifies the I.R.G. property, then the Main Theorem is proved: since every pre-orbit by $f$ is dense in the manifold, given $0<\varepsilon<R_0,$ for $g$ $\varepsilon/2-$close to $f,$ the pre-orbit of every point by $g$ is $\varepsilon-$dense (see subsection \ref{PMT}); then $g^K(U)$ intersects $\{w\in g^{-n}(z):\, n\in\mathbb{N}\}$ for any $z$. Therefore, taking pre-images by $g$, we get that $U$ intersects $\{w\in g^{-n}(z):\, n\in\mathbb{N}\}$ for any $z$. Therefore, the aim is to show that every $g$ sufficiently close to $f$ verifies the I.R.G. property; in other words, we want to show that the I.R.G. property is robust. In order to prove this statement, we use a geometrical approach: \begin{enumerate} \item Since the initial map $f$ is volume expanding, the perturbed map $g$ is also volume expanding, so its lift is also a diffeomorphism of the universal covering space $\mathbb{R}^n$. \item Hypothesis (2) implies that there is an expanding subset $\Lambda_f$ that ``separates'', meaning that a nice class of arcs in $U_0^c$ intersects this set (see remarks \ref{obs1} and \ref{obs3} and lemma \ref{afir3} in section \ref{ss15}). \item The set $\Lambda_f$ can be chosen locally maximal (see lemma \ref{lema1} in section \ref{ss13}). \item Hence the set $\Lambda_f$ has a continuation: for $g$ nearby, $f\mid_{\Lambda_f}$ is conjugate (see definition \ref{def2.5}) to $g\mid_{\Lambda_g},$ and this conjugacy extends to a neighborhood of $\Lambda_f$ and $\Lambda_g$ (see propositions \ref{afir2} and \ref{extension} in section \ref{ss14}). \item Therefore, the topological property of separation persists: every arc in the nice class intersects $\Lambda_g,$ from which it follows that the I.R.G. property holds for $g$ (see lemma \ref{lema2} in section \ref{ss15}). \end{enumerate} \B \subsection{Existence of an Expanding Locally Maximal Set for $f$}\label{ss13} In the present subsection (lemma \ref{lema1}) we show that $\Lambda_1$ (as defined in remark \ref{obs3}) is either locally maximal or contained in a locally maximal set. The third hypothesis in the Main Theorem is essential to prove this fact (see remark \ref{cro-fish} for a discussion of this issue). \begin{lema}\label{lema1} Either $\Lambda_1$ is a locally maximal set, or there exists $\Lambda^*$ an expanding locally maximal set for $f$ such that $\Lambda_1\subset\Lambda^*$ and $\Lambda^*$ verifies that every arc $\gamma$ in $U_0^c$ with diameter greater than $\delta_0$ has a point whose image by $f$ belongs to $\Lambda^*.$ Moreover, every arc $\gamma$ in $U_1^c$ with diameter greater than $\delta_0$ intersects $\Lambda^*.$ \end{lema} \text{}\\ {\bf Proof. } We may divide the proof into two cases: \textbf{Case I.} $\Lambda_1\cap \partial U_1=\emptyset.$ Let us observe that $\Lambda_1\cap \partial U_1=\emptyset$ implies that $\Lambda_1$ is contained in the open neighborhood $V=int(U_1^c).$ Then $V$ is an isolating block for $\Lambda_1,$ and therefore $\Lambda_1$ is locally maximal. \textbf{Case II.} $\Lambda_1\cap \partial U_1\neq\emptyset.$ In this case, $V$ fails to be an isolating neighborhood. To overcome this situation, we extend $V$ in a proper way and show that the extension is an isolating neighborhood of the respective maximal invariant set.
Choose $\varepsilon>0$ sufficiently small such that the open ball $\mathbb{B}_{\varepsilon}(x)$ is contained in $U_0^c$ for all $x\in \Lambda_1.$ Moreover, for every $x\in \Lambda_1,$ since $f$ is a local diffeomorphism, there exists an open set $U_x$ such that $f\mid_{U_x}:U_x\rightarrow \mathbb{B}_{\varepsilon}(x)$ is a diffeomorphism. Note that the collection $\{\mathbb{B}_{\varepsilon}(x)\}_{x\in \Lambda_1}$ is an open cover of $\Lambda_1.$ Since $\Lambda_1$ is compact, there is a finite subcover, say $\{\mathbb{B}_{\varepsilon}(x_i)\}_{i=1}^{N}.$ Fix $\lambda_0^{-1}<\lambda'<1,$ where $\lambda_0$ is the expansion constant of $f,$ and pick $N'$ greater than or equal to $N,$ the cardinality of the finite subcover of $\Lambda_1,$ such that for every $y\in\Lambda_1$ there is $i=i(y)\in\{1,\ldots, N'\}$ such that $\mathbb{B}_{\lambda'\varepsilon}(y) \Subset \mathbb{B}_{\varepsilon}(x_i),$ i.e. $\overline{\mathbb{B}_{\lambda'\varepsilon}(y)}\subset \mathbb{B}_{\varepsilon}(x_i).$ Let us define $W=\displaystyle\bigcup_{i=1}^{N'}\mathbb{B}_{\varepsilon}(x_i)$ and $\widehat{W}=\displaystyle\bigcup_{i=1}^{N'}\overline{\mathbb{B}_{\varepsilon}(x_i)}.$ By remark (\ref{obs3}), $\Lambda_1$ is invariant; hence for every $x_i$ there exists at least one $x_i^j\in \Lambda_1$ such that $f(x_i^j)=x_i.$ Let us consider, for every $1\leq i\leq N',$ all the possible pre-images by $f$ of $x_i$ that belong to $\Lambda_1$: recall that $f$ is a local diffeomorphism, hence for every point $x\in M$ the cardinality $\sharp\{f^{-1}(x)\}=N_f$ is constant; then for every $i\in \{1,\ldots,N'\}$ there exists $K_i\subset \{1,\ldots,N_f\}$ such that if $j\in K_i$ then $x_i^j\in \Lambda_1$ and $f(x_i^j)=x_i.$ Therefore, for every $i\in\{1,\ldots, N'\}$ and every $j\in K_i,$ there exist open sets $U_i^j$ such that $x_i^j\in U_i^j$ and $f\mid_{U_i^j}: U_i^j \rightarrow \mathbb{B}_{\varepsilon}(x_i)$ is a diffeomorphism. Given $i\in\{1,\ldots, N'\},$ for every $j\in K_i$ consider the inverse branches $\varphi_{i,j}:\mathbb{B}_{\varepsilon}(x_i)\rightarrow U_i^j $ such that $$ \begin{array}{rl} \varphi_{i,j}(x_i)= & x_i^j,\\ f \circ \varphi_{i,j}(x)=& x,\quad \forall \,x\in \mathbb{B}_{\varepsilon}(x_i). \end{array} $$ Now, consider $\Lambda^*=\bigcap_{n\geq 0} f^{-n}(\widehat{W}).$ Clearly, $\Lambda_1\subset \Lambda^*\subset U_0^c$ and $\Lambda^*$ is an expanding set. In order to show that $\Lambda^*$ is locally maximal, it is enough to show that $\Lambda^*\cap \partial \widehat{W}=\emptyset,$ which is equivalent to showing that $f^{-1}(\widehat{W})$ is contained in $W.$ To make what follows clearer, let us rewrite $f^{-1}(\widehat{W})$ in terms of the inverse branches: $$f^{-1}(\widehat{W})=f^{-1}(\bigcup_{i=1}^{N'}\overline{\mathbb{B}_{\varepsilon}(x_i)}) = \bigcup_{i=1}^{N'}\bigcup_{j\in K_i}\overline{ \varphi_{i,j}(\mathbb{B}_{\varepsilon}(x_i))}.$$ So, it is enough to show that $$\overline{ \varphi_{i,j}(\mathbb{B}_{\varepsilon}(x_i))}\subset \mathbb{B}_{\varepsilon}(x_{m_{i,j}}),$$ for some $x_{m_{i,j}}\in \{x_1,\ldots,x_{N'}\}.$ In fact, $$\varphi_{i,j}(\mathbb{B}_{\varepsilon}(x_i))=U_i^j\subset \mathbb{B}_{\lambda_0^{-1}\varepsilon}(\varphi_{i,j}(x_i)) \subset \mathbb{B}_{\lambda'\varepsilon}(\varphi_{i,j}(x_i))=\mathbb{B}_{\lambda'\varepsilon}(x_i^j);$$ then there exists $m_{i,j}\in\{1,\ldots,N'\}$ such that $\mathbb{B}_{\lambda'\varepsilon}(x_i^j)\Subset \mathbb{B}_{\varepsilon}(x_{m_{i,j}}),$ and the assertion holds.
Since $\Lambda_1\subset \Lambda^*\subset U_0^c$ and $\Lambda_1$ intersects the image by $f$ of every arc $\gamma$ in $U_0^c$ with diameter larger than $\delta_0,$ it follows that $\Lambda^*$ also verifies the latter property. In particular, $\Lambda^*$ intersects every arc $\gamma$ in $U_1^c$ with diameter larger than $\delta_0.$ \B \begin{figure} \caption{\textit{$\Lambda_f$ looks like a net which is an expanding set that ``separates''}} \label{graf1.3} \end{figure} \begin{obs}\label{cro-fish} We want to highlight that for diffeomorphisms there exist examples of hyperbolic sets that are not contained in any locally maximal hyperbolic set; see for instance \cite{Crovisier} and \cite{Fisher}. A similar construction seems feasible for endomorphisms. In our context, hypothesis (3) allows us to overcome this problem and guarantees that $\Lambda_1$ is an invariant set. Moreover, we can consider a finite covering $\{\mathbb{B}_{\varepsilon}(x_i)\}_{i=1}^{N'}$ of $\Lambda_1,$ with $x_i\in \Lambda_1,$ in such a way that for every point $y\in\Lambda_1$ there is $x_i$ such that $\overline{\mathbb{B}_{\lambda'\varepsilon}(y)}\subset \mathbb{B}_{\varepsilon}(x_i).$ Thus we conclude that $\Lambda^*$ is contained in the interior of $\widehat{W},$ and therefore the expanding set $\Lambda_1$ is either locally maximal or contained in a locally maximal expanding set. \end{obs} \subsection{Continuation of the Expanding Locally Maximal Set}\label{ss14} First, in proposition \ref{afir2} we prove that $g\mid_{\Lambda_g}$ is conjugate to $f\mid_{\Lambda_f},$ where $\Lambda_g$ is the maximal invariant set associated to $g,$ for $g$ sufficiently close to $f.$ This is standard in hyperbolic theory, using a shadowing lemma argument. We provide the proof for completeness and to show how the conjugacy can be extended to a neighborhood; this is done in proposition \ref{extension}. We want to remark that to construct the topological conjugacy between $g\mid_{\Lambda_g}$ and $f\mid_{\Lambda_f}$ it is not necessary that $\Lambda_f$ be locally maximal; however, this property is essential in the proof of proposition \ref{extension}. \begin{defi} The sequence $\{x_n\}_{_{n\in\mathbb{Z}}}$ is said to be a $\delta-$\emph{pseudo orbit} for $f$ if $d(f(x_n), x_{n+1})\leq \delta$ for every $n\in\mathbb{Z}.$ \end{defi} \begin{defi} We say that a $\delta-$pseudo orbit $\{x_n\}_{_{n\in\mathbb{Z}}}$ for $f$ is $\varepsilon-$\emph{shadowed} by a full orbit $\{y_n\}_{_{n\in\mathbb{Z}}}$ for $f$ if $d(y_n, x_n)\leq \varepsilon$ for every $n\in\mathbb{Z}.$ \end{defi} The \emph{Shadowing Lemma} holds for $C^1$ expanding endomorphisms: \begin{lema}\label{sl} Let $M$ be a Riemannian manifold, $U\subset M$ open, $f:U\rightarrow M$ a $C^1$ expanding endomorphism, and $\Lambda\subset U$ a compact invariant expanding set for $f$. Then there exists a neighborhood $\mathcal{U}(\Lambda)\supset \Lambda$ such that for every $\eta>0$ there is an $\varepsilon>0$ so that every $\varepsilon-$pseudo orbit for $f$ in $\mathcal{U}(\Lambda)$ is $\eta-$shadowed by a full orbit of $f.$ If $\Lambda$ is a locally maximal invariant set, then the shadowing full orbit is contained in $\Lambda$. \end{lema} \text{}\\ {\bf Proof. } For details, see for instance \cite{Liu}.
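To illustrate the mechanism behind the Shadowing Lemma, consider the following standard linear computation (our illustration; it is not part of the proof in \cite{Liu}). For the map $f(x)=2x$ on $\mathbb{R}$ (a lift of the doubling map), let $\{x_n\}_{n\geq 0}$ be a $\delta-$pseudo orbit. The sequence $2^{-n}x_n$ is Cauchy, since $|2^{-(n+1)}x_{n+1}-2^{-n}x_n|=2^{-(n+1)}|x_{n+1}-2x_n|\leq 2^{-(n+1)}\delta;$ hence $y_0=\lim_n 2^{-n}x_n$ exists and
$$|2^n y_0-x_n|\leq 2^n\sum_{k\geq n} 2^{-(k+1)}\delta=\delta,$$
so the true orbit $y_n=2^n y_0$ $\delta-$shadows the pseudo orbit. For a general expansion constant $\lambda_0>1,$ the same computation gives shadowing at scale $\delta/(\lambda_0-1).$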
\begin{defi}\label{def2.5} Let $f:M\rightarrow M$ and $g:N\rightarrow N$ be two maps and let $\Lambda_f$ and $\Lambda_g$ be invariant sets for $f$ and $g$ respectively. We say that $f:\Lambda_f \rightarrow \Lambda_f$ is \emph{topologically conjugate} to $g:\Lambda_g \rightarrow \Lambda_g$ if there exists a homeomorphism (in the relative topology) $h:\Lambda_f \rightarrow \Lambda_g$ such that $h\circ f= g\circ h.$ \end{defi} This is a typical notion for hyperbolic sets; see \cite{Shub1}. In order to fix notation for what follows, we denote by $\Lambda_f$ the expanding locally maximal set for $f$: $\Lambda_f$ is either $\Lambda_1,$ in the case it is locally maximal, or the set $\Lambda^*$ given in Lemma \ref{lema1}. We also denote by $U$ the isolating block of $\Lambda_f.$ \nocite{LC} \begin{prop}\label{afir2} There exists $\mathcal{V}_1(f),$ an open neighborhood of $f$ in $E^1(\mathbb{T}^n),$ such that if $g\in \mathcal{V}_1(f),$ then $g$ is expanding on $\Lambda_g=\bigcap_{n\geq 0} g^{-n}(U)$ and there exists a homeomorphism $h_g:\Lambda_g \rightarrow \Lambda_f$ that conjugates $f\mid_ {\Lambda_f}$ and $g\mid_ {\Lambda_g};$ moreover, $h_g$ is close to the identity. \end{prop} \text{}\\ {\bf Proof. } In order to get the conjugacy we use the Shadowing Lemma for $C^1$ expanding endomorphisms, lemma \ref{sl}. Since $\Lambda_f$ is an expanding locally maximal set for $f,$ there exists $\beta>0$ such that $f$ is expansive with constant $\beta$ on $\Lambda_f.$ Fix $0<\eta<\beta.$ By the endomorphism version of the Shadowing Lemma, there exists $\varepsilon>0$ such that any $\varepsilon-$pseudo orbit for $f$ within $\varepsilon$ of $\Lambda_f$ is uniquely $\eta-$shadowed by a full orbit in $\Lambda_f.$ Take $N$ such that $$\bigcap_{j=0}^N f^{-j}(U)\subset \{q:\,d(q,\Lambda_f)<\varepsilon/2\}.$$ There exists a $C^0$ neighborhood $\mathcal{V}(f)$ of $f$ such that for $g$ in $\mathcal{V}(f)$ $$\bigcap_{j=0}^N g^{-j}(U)\subset \{q:\,d(q,\Lambda_f)<\varepsilon/2\},$$ and for any $x\in\bigcap_{j=0}^N g^{-j}(U)$ we may consider a full orbit $\{x_n\}_{_{n\in\mathbb{Z}}}$ for $g$ with $x_0=x,$ obtaining that $\{x_n\}_n$ is an $\varepsilon-$pseudo orbit for $f.$ Let $\Lambda_g=\bigcap_{n\geq 0} g^{-n}(U).$ Taking an open subset $\mathcal{V}_1(f)$ of $\mathcal{V}(f)$ small enough in the $C^1$ topology, for $g\in\mathcal{V}_1(f)$ the set $\Lambda_g$ is an expanding locally maximal set for $g$. If $g$ is close enough to $f$, then $g$ is also expansive with constant $\beta,$ and the Shadowing Lemma also holds for $g.$ Take $g\in \mathcal{V}_1(f).$ Given $x\in\Lambda_g,$ consider a full orbit $\{x_n\}_{_{n\in\mathbb{Z}}}$ for $g$ with $x_0=x.$ As $\{x_n\}_n$ is an $\varepsilon-$pseudo orbit for $f,$ there exists a unique full orbit $\{y_n\}_{_{n\in\mathbb{Z}}}$ for $f,$ with $y_0=y\in\Lambda_f,$ that $\eta-$shadows $\{x_n\}_{_{n\in\mathbb{Z}}}$. Let us define $h_g:\Lambda_g\rightarrow\Lambda_f$ by $h_g(x)=y,$ where $y$ is given by the Shadowing Lemma. By the uniqueness of the shadowing point, this map is well defined, and its continuity follows from the shadowing lemma. Moreover, $h_g\circ g=f\circ h_g.$ In fact, consider the sequence $\{z_n\}_{_{n\in\mathbb{Z}}}$ where $z_n=g(x_n)=x_{n+1}.$ This $\varepsilon-$pseudo orbit is $\eta-$shadowed by a unique full orbit $\{w_n\}_{_{n\in\mathbb{Z}}}$ for $f,$ with $w_0=w\in\Lambda_f.$ Then, for every $n\in\mathbb{Z},$ $$ \begin{array}{ll} d(w_n,z_n)&= d(f^n(w_0),x_{n+1}) = d(f^n(h_g(z_0)),g^n(g(x_0))) \\ &= d(f^n(h_g\circ g(x_0)),g^n(g(x_0)))\\ & = d(f^{n+1}\circ f^{-1}\circ h_g\circ g(x_0),g^{n+1}(x_0))<\eta. \end{array} $$
Observe that the full orbit $\{w_{n-1}\}_{n\in\mathbb{Z}}$ $\eta-$shadows $\{x_n\}_{n\in\mathbb{Z}},$ and $w_{-1}=f^{-1}\circ h_g\circ g(x_0)$ along this orbit. So, by uniqueness, we have that $f^{-1}\circ h_g\circ g(x_0)=y_0;$ i.e. $h_g\circ g(x)=f\circ h_g(x).$ Since we can apply the Shadowing Lemma for $\Lambda_g$ using the same constants as in the construction of $h_g$, we may also define a map $h_f:\Lambda_f\rightarrow \Lambda_g$ such that $h_f\circ f=g\circ h_f.$ In fact, if $\{y_n\}_{n\in\mathbb{Z}}$ is a full orbit for $f$ with $y_0\in \Lambda_f,$ then it is an $\varepsilon-$pseudo orbit for $g.$ Hence, this pseudo orbit is uniquely shadowed by a full orbit $\{x_n\}_{n\in\mathbb{Z}}$ for $g,$ with $x_0\in \Lambda_g.$ Thus, $h_f(y_0)=x_0$ and $d(y_n,x_n)<\eta$ for every $n\in\mathbb{Z};$ moreover, $h_f$ is continuous and satisfies $h_f\circ f=g\circ h_f,$ just as $h_g$ does. Next, let us verify that $h_g$ is one-to-one. Let $p_1,\;p_2\in \Lambda_g$ be two points such that $h_g(p_1)=h_g(p_2).$ Note that $d(f^n(h_g(p_1)),g^n(p_1))<\eta$ and $d(f^n(h_g(p_2)),g^n(p_2))<\eta$ by construction. Then the orbit of $h_g(p_1)$ is $\eta-$shadowed by both $p_1$ and $p_2,$ which by uniqueness gives $p_1=p_2.$ Finally, for $y\in \Lambda_f,$ consider a full orbit of $h_f(y)$ by $g.$ Since $d(g^n(h_f(y)),f^n(y))$ is small for all $n$ and some full $f-$orbit of $y$ shadows the full $g-$orbit of $h_f(y),$ we have that $h_g(h_f(y))=y.$ Hence, $h_g$ is onto and therefore a homeomorphism. \B The next proposition is a version for expanding endomorphisms of a result already available for hyperbolic diffeomorphisms in \cite[Theorem 4.1]{Robinson}. The goal is to show that we can extend the conjugacy between $f\!\!\mid_{\Lambda_f}$ and $g\!\!\mid_{\Lambda_g}$ to an open neighborhood $U$ of $\Lambda_f$ in such a way that it is still a homeomorphism conjugating $f\!\!\mid_U$ and $g\!\!\mid_U,$ noting that the conjugacy is unique only on $\Lambda_f.$ We are going to use this extension in the next section to prove that the property of $\Lambda_f$ disconnecting a ``nice'' class of arcs is robust. \begin{prop}\label{extension} The homeomorphism $h_f:\Lambda_f\rightarrow \Lambda_g$ in proposition (\ref{afir2}) can be extended to a homeomorphism $H$ on an open neighborhood of $\Lambda_f$ such that $H\circ f=g\circ H.$ \end{prop} \text{}\\ {\bf Proof. } This proof is inspired by the geometrical approach given by Palis in \cite{Palis}, also used to prove the Grobman-Hartman Theorem in \cite[pp.96]{Shub1}. An alternative proof consists in using the inverse limit space, in such a way that the expanding set $\Lambda_f$ becomes a hyperbolic set for a diffeomorphism, so that Theorem 4.1 in \cite{Robinson} can be applied. Observe that, in order to have a well defined inverse limit such that the induced set associated to $\Lambda_f$ verifies the hypotheses of the mentioned theorem in \cite{Robinson}, it is needed that $\Lambda_f$ be locally maximal. The goal is to choose an appropriate isolating neighborhood $U$ of $\Lambda_f$ and to construct a homeomorphism from $U$ onto itself, using the inverse branches of $f$ and $g$, first defined on a fundamental domain $D_f$ for $f$ (i.e. a set $D_f$ such that for every $x\in U\setminus\Lambda_f$ there exists $n\in\mathbb{N}$ such that $f^n(x)\in D_f$) and then extended to $U$ using inverse iterates.
Observe that the isolating block of $\Lambda_f$ is also an isolating block of $\Lambda_g,$ so we can take a fundamental domain $D_g$ for $g,$ as was done for $f.$ Note that $D_f$ is defined as $U\setminus f^{-1}(U)$ and, since $f^{-1}(U)$ is properly contained in $U,$ the same holds for $g;$ therefore $D_f$ and $D_g$ are homeomorphic. Then one takes a homeomorphism $H$ between the two fundamental domains $D_f$ and $D_g.$ This homeomorphism is saturated to $U\setminus\Lambda_f$ by backward iteration: if $x\in U\setminus \Lambda_f,$ let $n$ be such that $f^n(x)\in D_f,$ take $H\circ f^n(x)$ and then $g^{-n}\circ H\circ f^n(x),$ where $g^{-n}$ is taken carefully using the corresponding inverse branches. Denote by $N_f$ the cardinality of $\{w\in f^{-1}(x)\}$. Since $f$ is a local diffeomorphism, $N_f$ is constant. Let $K\subset \{1,\ldots, N_f\}$ be such that for every $i\in K$ there exist $U_i^f\subset U$ and $\varphi_i^f:U\rightarrow U_i^f,$ an inverse branch of $f,$ such that $\varphi_i^f(U)=U_i^f$ and $f(U_i^f)=f\circ\varphi_i^f(U)=U.$ Also, for $g$ as in proposition \ref{afir2}, for every $i\in K$ there exist $U_i^g\subset U$ and $\varphi_i^g:U\rightarrow U_i^g,$ the corresponding inverse branch of $g,$ such that $\varphi_i^g(U)=U_i^g$ and $g(U_i^g)=g\circ\varphi_i^g(U)=U.$ To construct a homeomorphism $H$ on $U$ satisfying $H\circ f= g\circ H$ and $H\mid_{\Lambda_f}=h_f,$ we can begin as follows. Suppose that the restriction $H: \partial U\rightarrow \partial U$ is any well-defined orientation preserving diffeomorphism. The restriction of $H$ to $\partial U_i^f$ is then defined by $H(x)= \varphi_i^g\circ H \circ f(x)$ if $x\in \partial U_i^f,$ because $H$ must conjugate $f$ and $g$. Now we extend $H$ to a diffeomorphism which sends the region $U\setminus \bigcup_{i\in K}U_i^f,$ bounded by $\partial U$ and $\partial U_i^f,$ onto the region $U\setminus \bigcup_{i\in K}U_i^g,$ bounded by $\partial U$ and $\partial U_i^g.$ Since we may assume that the Hausdorff distance between $U$ and $\Lambda_f$ is small (see lemma \ref{lema1}), the initial $H$ is close to the identity; let us say that $d(H(x),x)<\eta,$ where $\eta>0$ is given arbitrarily. Given $i,j\in K,$ denote $U_{j,i}^f=\varphi_j^f\circ \varphi_i^f(U)$ and $U_{2\;i}^f=U_i^f\setminus \bigcup_{j\in K} U_{j,i}^f.$ If $x\in \partial U_{j,i}^f,$ then $H(x)=\varphi_j^g\circ \varphi_i^g\circ H\circ f^2(x)\in \partial U_{j,i}^g.$ If $x\in U_{2\;i}^f\setminus \Lambda_f,$ then $H(x)=\varphi_i^g \circ H\circ f(x)\in U_{2\;i}^g.$ Proceeding inductively, we have the following. Given $i_1,\ldots,i_n\in K,$ denote $U_{i_n,\ldots,i_1}^f=\varphi_{i_n}^f\circ\cdots\circ \varphi_{i_1}^f(U)$ and $U_{n\;(i_{n-1},\ldots,i_1)}^f=U_{i_{n-1},\ldots,i_1}^f\setminus \bigcup_{i_n\in K} U_{i_n,\ldots,i_1}^f.$ If $x\in \partial U_{i_n,\ldots,i_1}^f,$ then $H(x)=\varphi_{i_n}^g\circ \cdots\circ \varphi_{i_1}^g\circ H\circ f^n(x).$ If $x\in U_{n\;(i_{n-1},\ldots,i_1)}^f\setminus \Lambda_f,$ then $H(x)=\varphi_{i_{n-1}}^g\circ \cdots\circ \varphi_{i_1}^g \circ H\circ f^{n-1}(x).$ Finally, $H(x)=h_f(x)$ if $x\in\Lambda_f.$ Let us prove that $H$ is continuous.
Given $x\in\Lambda_f,$ let $(x_n)_n$ be a sequence in $U\setminus \Lambda_f$ such that $x_n\rightarrow x$ as $n\rightarrow \infty;$ let us prove that $H(x_n)\rightarrow H(x)$ as $n\rightarrow \infty.$ First, consider $\{z_k\}_{k\in\mathbb{Z}},$ an $f-$full orbit in $\Lambda_f$ with $z_0=x,$ and for every $n\in\mathbb{N}$ consider $\{z_k^n\}_{k\in\mathbb{Z}},$ a full orbit by $f$ associated to each $x_n$ using the corresponding inverse branches (for the backward iterates) given by the full orbit of $x,$ with $z_0^n=x_n.$ Since $f$ is continuous, for every $k\in\mathbb{Z}$ we have that $z_k^n\rightarrow z_k$ as $n\rightarrow \infty.$ Note that for every $n\in\mathbb{N}$ there exists $k_n>0$ such that $z_{k_n}^n\in U\setminus \bigcup_{i\in K}U_i^f;$ furthermore, $z_k^n\in U$ for every $k\in [-k_n,k_n].$ Since $H\circ f=g\circ H,$ we get that $H(x_n)\in \bigcap_{k=-k_n}^{k_n}g^k(U).$ Hence, for $\eta$ and $\varepsilon$ as in proposition \ref{afir2} and for every $n\in\mathbb{N},$ the sequence $\{z_k^n\}_{k=-k_n}^{k_n}$ is a finite $\varepsilon-$pseudo orbit for $g,$ and it is $\eta-$shadowed by a $g-$orbit of $H(x_n)$ up to time $k_n$ for forward iterates and $-k_n$ for backward iterates. Observe that as $m$ goes to infinity, the finite pseudo orbit $y_n^m=\{z_k^n\}_{k=-m}^m$ becomes longer. Consider now the sequence $\{y_n^m\}_n;$ then $y_n^m\rightarrow \{z_k\}_{k=-m}^m$ as $n\rightarrow\infty.$ Hence, the sets of shadowing points of the finite pseudo orbits $y_n^{k_n}$ converge to the shadowing point of the infinite pseudo orbit $\{z_k\}_k;$ therefore $H(x_n)\rightarrow h_f(x)=H(x)$ as $n\rightarrow \infty.$ \B \subsection{The Locally Maximal Set ``Separates''}\label{ss15} The main goal of this section is to show that the locally maximal set for $f$ has a topological property that persists under perturbation; roughly speaking, $\Lambda_f$ and $\Lambda_g$ disconnect small open sets. Using this, we prove that $\Lambda_f$ intersects a ``nice'' class of arcs in $U_1^c,$ and that these arcs also intersect $\Lambda_g$ for every $g$ near $f$. The first question that arises is: which arcs belong to this ``nice'' class? The second question, in the context of proving the Main Theorem, is: why is this property enough? The third question is: why does the ``nice'' class of arcs exist? All these questions are answered along the section, but to give some brief insight into the main ideas we make some comments: \begin{enumerate} \item Roughly speaking, these ``nice arcs'' are arcs that can be used to build ``nice cylinders'' (see definition \ref{def1.18}) containing the initial arc and such that $\Lambda_f$ ``separates'' (see definition \ref{def1.20}) the cylinder in a ``robust way'' (see lemmas \ref{afir3} and \ref{lema2}). \item It is enough to consider this ``nice'' class of arcs to finish the proof of the Main Theorem. Suppose that the existence of this class of arcs is proved, and suppose that given any open set there is an iterate by $g$ that contains a ``nice'' arc (see claim \ref{afir6} and lemma \ref{afir4}). Then there is a point in this iterate whose forward orbit stays in the expanding region and, arguing as in the beginning of subsection \ref{ss12}, the density of the pre-orbit of any point for the perturbed map is concluded. \item Therefore, to finish, we show in claim (\ref{afir6}) that every large arc admits a ``nice'' sub-arc. Later it is shown that any open set has an iterate, in the universal covering, containing an arbitrarily large arc (see lemma \ref{afir4}).
\end{enumerate} Let us define the concepts involved in this section. \begin{defi}\label{cyl}(\textbf{Cylinder}) Given a differentiable arc $\gamma$ and $r>0,$ it is said that $C(\gamma,r)$ is a \emph{cylinder} centered at $\gamma$ with radius $r$ if $$C(\gamma,r):=\bigcup_{x\in\gamma} ([T_x\gamma]^{\perp})_r,$$ where $([T_x\gamma]^{\perp})_r$ denotes the closed ball centered at $x$ with radius $r$ intersected with $[T_x\gamma]^{\perp},$ the orthogonal complement of the tangent to $\gamma$ at $x.$ \end{defi} \begin{defi}(\textbf{Simply connected cylinder}) Given a differentiable arc $\gamma$ and $r>0,$ it is said that a cylinder $C(\gamma,r)$ is \emph{simply connected} if it is contractible to a point. \end{defi} \begin{obs} For a fixed radius, a cylinder as defined in \ref{cyl} may fail to be contractible. In this case, working in the universal covering space, we consider the convex hull of its lift and then project it onto the manifold. We also call the resulting set a \emph{simply connected cylinder}, and it is denoted in the same way as above. \end{obs} \begin{defi}(\textbf{Nice cylinder})\label{def1.18} Given an arc $\gamma$ and $r>0$, it is said that a cylinder $C(\gamma,r)$ is a \emph{nice cylinder} if it is a simply connected cylinder and, denoting by $x_A$ and $x_B$ the extremal points of $\gamma,$ we have $A:=([T_{x_A}\gamma]^{\perp})_r\subset \partial C(\gamma,r)$ and $B:=([T_{x_B}\gamma]^{\perp})_r\subset \partial C(\gamma,r).$ In this case, we say that $A$ and $B$ are the \emph{top and bottom sides} of the cylinder. (See figure \ref{graf1.1a}.) \end{defi} \vspace*{-8mm} \begin{figure} \caption{\textit{Nice cylinder}} \label{graf1.1a} \end{figure} \vspace*{-3mm} \begin{obs} In general, a cylinder as defined in \ref{cyl} does not necessarily have top and bottom sides, and it may fail to be simply connected. \end{obs} Hereafter, fix an open set $U_2$ such that $\overline{U_1} \subset U_2,$ where $U_1$ is as in hypothesis (2) of the Main Theorem, and $\delta_0<diam_{int}(U_2^c)<diam_{int}(U_0^c).$ Let $d_1=d_H(U_2, U_1)>0,$ where $d_H$ denotes the Hausdorff metric, and let $k\in \mathbb{N}$ be such that $\delta'_0=\delta_0+\frac{d_1}{3k}<diam_{int}(U_2^c).$ Let us denote by $\widetilde{U}_0$ the lift of $U_0,$ by $\pi$ the projection of $\mathbb{R}^n$ onto $\mathbb{T}^n,$ and by $\mathfrak{U}_0$ the convex hull of $\widetilde{U}_0\cap [0,1]^n$. Consider $\mathrm{P}_i(\mathfrak{U}_0),$ the projection of $\mathfrak{U}_0$ onto the $i-$th coordinate of the $n-$dimensional cube $[0,1]^n.$ Since $diam(U_0)<1$ and by remark (\ref{obs7}), for every $1\leq i\leq n$ there exist $0<k_i^-<k_i^+<1$ such that $k_i^-<\mathrm{P}_i(\mathfrak{U}_0)<k_i^+.$ Note that $1+k_i^--k_i^+>\delta'_0$ for every $i$, because $1+k_i^--k_i^+> diam_{int}(U_0^c)$ by construction. Let $R_i^m=\{x\in\mathbb{R}^n: k_i^-+m<x_i<k_i^++m\},$ with $m\in\mathbb{Z}$ and $1\leq i\leq n,$ where $x_i$ is the $i-$th coordinate of $x$. Thus, the lift satisfies $\widetilde{U}_0\subset\displaystyle\bigcap_{1\leq i\leq n}\,\bigcup_{m\in\mathbb{Z}} R_i^m.$ Denote $L_i^+=\{x\in\mathbb{R}^n: x_i=k_i^+\}$ and $L_i^-=\{x\in\mathbb{R}^n: x_i=k_i^-\}.$ Let $\tilde{f}$ be the lift of $f$. The next claim answers the third question stated at the beginning of the section. \begin{afir}\label{afir6} Let $m>2\sqrt{n}$ be fixed.
Given any arc $\gamma$ in $\mathbb{R}^n$ with $diam(\gamma)>m,$ there exist an arc $\gamma'\subset \gamma,$ $1\leq i\leq n$ and $j\in\mathbb{Z}$ such that $\partial \gamma'\cap (L_i^++j)\neq \emptyset,$ $\partial \gamma'\cap (L_i^-+j+1)\neq \emptyset$ and $P_i^j(\gamma')\subset [k_i^++j, k_i^-+j+1],$ where $P_i^j(\gamma')$ denotes the projection of $\gamma'$ onto the interval $[j,j+2]$ of the $i-$th coordinate. Moreover, $\gamma'$ admits a nice cylinder, and $\gamma^*=\pi(\gamma')$ is contained in $U_2^c,$ has diameter larger than $\delta_0,$ and also admits a nice cylinder contained in $U_1^c$. (See figure \ref{graf1.1}.) \end{afir} \text{}\\ {\bf Proof. } Take $\gamma$ an arc with diameter larger than $m;$ then the projection of $\gamma$ onto the $i-$th coordinate contains an interval of the form $[k_i^++j, k_i^-+j+1]$ for some $1\leq i\leq n$ and some $j\in\mathbb{Z}.$ If this were not true, $\gamma$ would be contained in an $n-$dimensional cube with sides smaller than $2,$ and such a cube has diameter smaller than $2\sqrt{n},$ contradicting the fact that $diam(\gamma)>m>2\sqrt{n}.$ Hence, we may pick an arc $\gamma'$ in $\gamma$ such that $\partial \gamma'\cap (L_i^++j) \neq \emptyset,$ $\partial \gamma'\cap (L_i^-+j+1) \neq \emptyset$ and $P_i^j(\gamma')\subset [k_i^++j, k_i^-+j+1]$ for some $1\leq i\leq n$ and some $j\in\mathbb{Z}$. Therefore, the diameter of $\gamma'$ is greater than $\delta_0$ and, in consequence, its projection onto $\mathbb{T}^n$ also has diameter greater than $\delta_0.$ Moreover, since the projection of $\gamma'$ by $\mathrm{P}_i$ lies between $k_i^++j$ and $k_i^-+1+j,$ we may construct a cylinder centered at $\gamma'$ with radius $\frac{d_1}{2}$ such that this cylinder is ``far'' away from $\widetilde{U}_0;$ so this cylinder is either simply connected or, if it is not, its holes are different from $\widetilde{U}_0.$ In the case that the cylinder is not simply connected, we consider the convex hull of the cylinder: since the original cylinder is bounded by $L_i^++j$ and $L_i^-+j+1,$ the convex hull stays in between these two hyperplanes and therefore does not intersect $\widetilde{U}_0.$ By abuse of notation, let us denote this set by $C(\gamma',\frac{d_1}{2});$ it is a simply connected cylinder. Observe that, by construction, this cylinder has top and bottom sides; thus $C(\gamma',\frac{d_1}{2})$ is a nice cylinder. Take $\gamma^*=\pi(\gamma')$ and note that $\gamma'$ can be chosen so that $\gamma^*$ is contained in $U_2^c$ and the diameter of $\gamma^*$ is larger than $\delta_0;$ then, projecting the nice cylinder of $\gamma'$ onto $\mathbb{T}^n,$ we obtain a nice cylinder for $\gamma^*,$ denoted by $C(\gamma^*,\frac{d_1}{2}).$ This nice cylinder has the property that every arc that goes from the bottom side to the top side has diameter at least $\delta_0,$ and the whole construction can be carried out in such a way that the nice cylinder is contained in $U_1^c.$ \B \vspace*{-5mm} \begin{figure} \caption{\textit{Every arc admits a sub-arc with a nice cylinder}} \label{graf1.1} \end{figure} \begin{defi}(\textbf{Lateral border}) Given a differentiable arc $\gamma$ and $r>0,$ the \emph{lateral border} $S$ of the cylinder $C(\gamma,r)$ is $\partial C(\gamma,r)$ minus the top and bottom sides of the cylinder, if they exist. \end{defi} Observe that nice cylinders have lateral borders.
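To fix ideas, here is a schematic example (ours, only for illustration) in dimension $n=2$ with $i=1$: if $\gamma'$ is the horizontal segment $\gamma'(t)=(k_1^++j+t(1+k_1^--k_1^+),\,c),$ $t\in[0,1],$ joining $L_1^++j$ to $L_1^-+j+1$ at height $c,$ then its cylinder of radius $r=\frac{d_1}{2}$ is the rectangle
$$C(\gamma',r)=[k_1^++j,\,k_1^-+j+1]\times[c-r,\,c+r],$$
which is simply connected, has top and bottom sides $A=\{k_1^++j\}\times[c-r,c+r]$ and $B=\{k_1^-+j+1\}\times[c-r,c+r],$ and lies between the lines $L_1^++j$ and $L_1^-+j+1,$ hence away from $\widetilde{U}_0.$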
\begin{defi}(\textbf{Separated horizontally})\label{def1.20} We say that a nice cylinder $C(\gamma,r)$ is \emph{separated horizontally} by a set $\Lambda$ if there exists a connected component of $\Lambda,$ say $\Lambda_c,$ such that: \begin{itemize} \item $\Lambda_c$ intersects $C(\gamma,r)$ across the lateral border. \item $C(\gamma,r)$ minus $\Lambda_c$ has at least two connected components. \end{itemize} \end{defi} Now we are going to prove that the locally maximal set for $f,$ found in section \ref{ss13}, has the geometrical property of separating horizontally nice cylinders such as the ones in claim \ref{afir6}. \begin{lema}\label{afir3} Given any arc $\gamma$ in $U_2^c$ with diameter greater than $\delta_0$ that admits a nice cylinder as in claim (\ref{afir6}), $\Lambda_f$ separates its nice cylinder horizontally. \end{lema} \text{}\\ {\bf Proof. } Let us denote by $T$ the nice cylinder associated to $\gamma$ as in the statement, and let $A$ and $B$ denote the top and bottom sides of $T,$ respectively. Let $\varepsilon>0$ be arbitrarily small. Let $T'$ be a bigger cylinder containing $T$ together with two security regions, denoted $S_A$ and $S_B,$ such that the distance between the lateral border of $T$ and the lateral border of $T'$ is small, for instance $d_H(T, T')=\frac{d_1}{6k}$; see figure (\ref{graf1.1b}). By security regions $S_A$ and $S_B$ we mean two strips of thickness $\frac{d_1}{6k}$ glued to the sides $A$ and $B$ of $T$; in other words, $S_A$ (respectively $S_B$) is the set of points in $T^c$ whose distance to $A$ (respectively $B$) is less than or equal to $\frac{d_1}{6k}.$ The set $T'$ is constructed in such a way that its diameter is greater than $\delta'_0.$ Since $\gamma$ is in $U_2^c$ and its diameter is greater than $\delta_0,$ we can assure that $T\cap \Lambda_f$ is nonempty. Consider all the connected components of $T\cap \Lambda_f.$ For every $x\in T\cap \Lambda_f,$ we assign $\emph{K}_x,$ the connected component of $T\cap \Lambda_f$ that contains $x.$ Observe that we may define an equivalence relation: $x\sim x'$ if and only if $\emph{K}_x=\emph{K}_{x'}.$ Then we pick one component from each class; in other words, we pick just the connected components that are pairwise disjoint. We claim that $\Lambda_f$ separates $T$ horizontally, i.e., there exists one component $\emph{K}_x$ such that $K_x\cap \partial T\neq \emptyset$ and $K_x$ separates $T$ into more than one connected component. Suppose this does not happen, i.e. none of the $\emph{K}_x$ separates $T$ horizontally. Take $\emph{U}_x$ an open set in $T'$ such that $\emph{K}_x\subset \emph{U}_x,$ $\partial \emph{U}_x\cap \Lambda_f=\emptyset,$ $\partial \emph{U}_x$ is connected and $\partial \emph{U}_x$ does not divide $T$ horizontally. If there are many $\emph{K}_y$ accumulating on one $\emph{K}_x,$ then the same open set $\emph{U}_x$ may contain more than one connected component $\emph{K}_y.$ Observe that the collection $\{ \emph{U}_x\}$ is an open cover of $T\cap \Lambda_f.$ Since the latter is compact, there is a finite subcover $\{ \emph{U}_i\}_{i=1}^N,$ i.e.
$T\cap \Lambda_f \subset \mathcal{U}:=\bigcup_{i=1}^N \emph{U}_i.$ If the connected components of $\mathcal{U}$ do not separate $T$ horizontally, it is easy to construct a curve going from $A$ to $B$ with diameter greater than $\delta_0$ and empty intersection with the $\emph{U}_i$'s; hence, this curve does not intersect the set $\Lambda_f.$ But this contradicts the fact that every curve in $U_1^c$ with diameter larger than $\delta_0$ intersects $\Lambda_f.$ Hence some connected components of $\mathcal{U}$ separate $T$ horizontally; denote by $\emph{C}_j$ the connected components of $T$ minus those connected components of $\mathcal{U}$ that separate $T$ horizontally. Observe that every $\emph{C}_j$ is path connected, since they are the complement of a finite union of open sets in the simply connected set $T$. There are finitely many $\emph{C}_j,$ say $m$ of them, and we can reorder these sets, enumerating from the top side. If we denote by $V_j$ each of the connected components of $T\cap \mathcal{U}$ that separate $T$ horizontally, we have two cases: either $\emph{C}_j$ lies between two consecutive $V_j$ and $V_{j+1}$ (or $V_{j-1}$ and $V_j$), or $\emph{C}_j$ just intersects one $V_j$ on the border. The idea is to build a curve from the top to the bottom of $T,$ connecting $\emph{C}_j$ with $\emph{C}_{j+1},$ in such a way that the diameter of the arc is greater than $\delta_0$ but without intersecting $\Lambda_f;$ this is a contradiction, because such a curve is again in $U_1^c$ and has diameter greater than $\delta_0,$ so it must intersect $\Lambda_f.$ It is enough to show that we can pass from $\emph{C}_j$ to $\emph{C}_{j+1}$ without touching $\Lambda_f.$ For this, observe that every $V_j$ is a union of finitely many $\emph{U}_i,$ say $\emph{U}_{i_1},\ldots, \emph{U}_{i_j}.$ Pick a curve $\gamma_j$ in $\emph{C}_j$ going from top to bottom, i.e. $\gamma_j$ goes from $\partial V_j$ to $\partial V_{j+1}$ (or from $\partial V_{j-1}$ to $\partial V_j$) and $\gamma_j$ does not intersect the interior of $V_j$ and $V_{j+1}$ (or $V_{j-1}$ and $V_j$); then there exists $i_s\in \{i_1,\ldots,i_j\}$ such that $\gamma_j\cap \partial\emph{U}_{i_s}\neq\emptyset.$ After that, continue the arc by a curve following the border of $\emph{U}_{i_s}$ until reaching $\emph{C}_{j+1},$ which has empty intersection with $\Lambda_f$ by construction; if this is not possible in one step, pick another $\emph{U}_{i_k}$ and repeat the process. Note that this process finishes in finitely many steps. The resulting arc, obtained by joining together all these segments, has diameter greater than $\delta_0$ and empty intersection with $\Lambda_f,$ as we wanted. \B \begin{figure} \caption{\textit{$\Lambda_f$ splits ``horizontally'' every nice cylinder into at least two connected components}} \label{graf1.1b} \end{figure} \begin{obs}\label{obs4} In proposition \ref{afir2}, recalling that $d(h_g,id)<\eta,$ we may fix $\eta<\min\{\frac{d_1}{6k},\delta_0,\beta\}.$ For this $\eta,$ there exists $\varepsilon_0>0$ given by the shadowing lemma (see lemma \ref{sl} and proposition \ref{afir2}), and $\varepsilon_0$ determines $\mathcal{V}_1(f)$ in proposition \ref{afir2}. \end{obs} \begin{lema}\label{lema2} Given $g\in \mathcal{V}_1(f)$ and an arc $\gamma$ in $U_2^c$ with diameter greater than $\delta'_0$ that admits a nice cylinder $C(\gamma,\frac{d_1}{2}),$ the intersection $\gamma\cap \Lambda_g$ is not empty. \end{lema} \text{}\\ {\bf Proof. }
Let $g\in \mathcal{V}_1(f)$ and take an arc $\gamma$ in $U_2^c$ with diameter greater than $\delta'_0$ such that $C(\gamma,\frac{d_1}{2})$ is a nice cylinder. By construction, we may assume that every arc in the nice cylinder that goes from top to bottom has diameter greater than or equal to the diameter of $\gamma.$ We take two security regions inside the cylinder, at the top and bottom sides respectively, each of thickness $\frac{d_1}{6k},$ i.e. two strips glued to the top and bottom sides of the cylinder such that each one is the set of points in the cylinder whose distance to the top (respectively bottom) side is less than or equal to $\frac{d_1}{6k}$; see figure (\ref{graf1.4}). Let us denote by $C'$ the cylinder resulting from removing these two security strips from the original cylinder $C(\gamma,\frac{d_1}{2});$ the diameter of $C'$ is still greater than $\delta_0$. Hence, the diameter of $\gamma'=\gamma\cap C'$ is greater than $\delta_0$ and $\gamma'$ lies in $U_1^c.$ Lemma \ref{afir3} implies that $\Lambda_f$ separates $C'$ horizontally, hence $\gamma'$ intersects $\Lambda_f;$ let us denote by $x_f$ a point in the intersection. Since $x_f\in \Lambda_f,$ by proposition \ref{afir2} and remark (\ref{obs4}) there exists $x_g\in \Lambda_g\cap \mathbb{B}_{\eta}(x_f).$ Note that $\Lambda_f$ separates $\mathbb{B}_{\eta}(x_f)$ into at least two connected components. Because $f\mid_U$ and $g\mid_U$ are conjugate, it follows that $\Lambda_g$ separates $\mathbb{B}_{\eta}(x_f)$ into at least two connected components as well. Therefore, $\Lambda_g$ must intersect $\gamma.$\B \vspace*{-5mm} \begin{figure} \caption{\textit{$\Lambda_g$ intersects $\gamma$}} \label{graf1.4} \end{figure} \subsection{Getting Arcs of Large Diameter} In this subsection we show that, under the hypothesis of volume expansion, the diameter of the iterates of an open set grows in the covering. \begin{lema}\label{afir4} For every $g\in\mathcal{V}_1(f)$ and every open path connected set $V$ in $\mathbb{T}^n,$ there exists $m_0=m_0(V,g)\in \mathbb{N}$ such that $diam(\tilde{g}^{m_0}(\widetilde{V}))>m,$ where $\tilde{g}$ and $\widetilde{V}$ are the lifts of $g$ and $V,$ respectively, and $m$ is the constant of claim \ref{afir6}. In particular, $\tilde{g}^{m_0}(\widetilde{V})$ contains an arc with diameter greater than $m$. \end{lema} \text{}\\ {\bf Proof. } Let $g\in\mathcal{V}_1(f)$ and let $V$ be an open path connected set in $\mathbb{T}^n.$ Suppose, by contradiction, that there exists $k_0>0$ such that $d_k=diam(\tilde{g}^{k}(\widetilde{V}))<k_0$ for every $k;$ then $vol(\tilde{g}^{k}(\widetilde{V}))\leq \omega_n\left( \frac{d_k}{2}\right)^n,$ where $\omega_n$ denotes the volume of the unit ball of $\mathbb{R}^n.$ But since $g$ is volume expanding, there exists a constant $\lambda>1$ such that $vol(\tilde{g}^k(\widetilde{V}))>\lambda^k vol(\widetilde{V})$ for $k\geq 1.$ Iterating by $\tilde{g},$ and since $\tilde g$ is a diffeomorphism of the covering space, the volume increases without bound, and therefore the diameter of the iterates grows in the covering space as well. Hence, there exists $m_0\in \mathbb{N}$ such that $diam(\tilde{g}^{m_0}(\widetilde{V}))>m.$ \B \begin{obs}\label{obs6} For the case where $V$ is an open connected set, observe that given a point in $V$ there exists an open ball centered at this point and contained in $V,$ which is path connected. Then we may apply Lemma \ref{afir4} to this ball and obtain a similar statement for $V$. \end{obs}
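The contradiction in lemma \ref{afir4} can be made quantitative (a sketch in the notation above, with $\omega_n$ the volume of the unit ball): if $d_k<k_0$ for every $k,$ then
$$\lambda^k\, vol(\widetilde{V})<vol(\tilde{g}^k(\widetilde{V}))\leq \omega_n\Big(\frac{k_0}{2}\Big)^n \quad\text{for all } k,$$
which fails as soon as $k>\big(\log(\omega_n (k_0/2)^n)-\log vol(\widetilde{V})\big)/\log\lambda.$ Hence the diameters $d_k$ are unbounded.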
\subsection{Getting Sets of Large Radius} In this subsection we show that open sets intersecting $\Lambda_g,$ for $g$ close enough to $f,$ have large internal radius after sufficiently many iterates. \begin{lema}\label{lema3} There exist $\mathcal{V}_2(f)$ and $R>0$ such that for every $g\in \mathcal{V}_2(f),$ if there is $x\in M$ such that $g^n(x)\not\in U_0$ for every $n\geq 0,$ then there is $\varepsilon_0>0$ such that for every $0<\varepsilon<\varepsilon_0$ there exists $N=N(\varepsilon)\in\mathbb{N}$ such that $\mathbb{B}_R(g^N(x))\subset g^N(\mathbb{B}_{\varepsilon}(x)).$ \end{lema} \text{}\\ {\bf Proof. } We may pick an open subset $U_3$ contained in $U_0$ such that $m\{Df\mid_{U_3^c}\}>\lambda',$ with $1<\lambda'<\lambda_0.$ Take $\mathcal{V}_2(f)$ an open neighborhood of $f,$ perhaps smaller than $\mathcal{V}_1(f),$ such that $m\{Dg\mid_{U_3^c}\}>\lambda'$ holds for every $g\in \mathcal{V}_2(f).$ Let us fix $R=d_H(U_0,U_3)>0.$ Given $0<\varepsilon<R$ (so the lemma holds with $\varepsilon_0=R$), take $N\in\mathbb{N}$ such that $(\lambda')^{-N}R<\varepsilon/2.$ Then $\mathbb{B}_{(\lambda')^{-N}R}(x)\subset \mathbb{B}_{\varepsilon}(x).$ Observe that $\mathbb{B}_R(g^n(x))\cap U_3=\emptyset$ for every $n\geq 0.$ Also, for every $n\geq 1,$ let $\varphi_n$ be the inverse branch of $g$ defined on $\mathbb{B}_R(g^n(x))$ with $\varphi_n(g^n(x))=g^{n-1}(x).$ Since $\mathbb{B}_R(g^n(x))\subset U_3^c$ and $m\{Dg\mid_{U_3^c}\}>\lambda',$ each $\varphi_n$ contracts distances by a factor smaller than $(\lambda')^{-1},$ so that $\varphi_n(\mathbb{B}_{r}(g^n(x)))\subset \mathbb{B}_{r/\lambda'}(g^{n-1}(x))$ for every $0<r\leq R.$ Applying this $N$ times starting from $\mathbb{B}_R(g^N(x)),$ we get $$\varphi_1\circ\cdots\circ\varphi_N(\mathbb{B}_R(g^N(x)))\subset \mathbb{B}_{(\lambda')^{-N}R}(x)\subset \mathbb{B}_{\varepsilon}(x),$$ and applying $g^N$ to both sides, $$\mathbb{B}_R(g^N(x))\subset g^N(\mathbb{B}_{(\lambda')^{-N}R}(x))\subset g^N(\mathbb{B}_{\varepsilon}(x)).$$ \B \begin{obs}\label{obs5} Let us note that lemma \ref{lema3} holds for every point in $\Lambda_g.$ \end{obs} \subsection{End of the Proof of the Main Theorem}\label{PMT} Let $f\in E^1(\mathbb{T}^n)$ satisfy the hypotheses of the Main Theorem. Lemma \ref{lema1} implies that we may assume the existence of an expanding locally maximal set $\Lambda_f$ for $f$. Fix $0<\alpha<R$ arbitrarily small. Given $x\in \mathbb{T}^n,$ since $\{ w\in f^{-i}(x): i\in\mathbb{N}\}$ is dense, there exists $n_0\in\mathbb{N}$ such that \begin{center} $\{ w\in f^{-i}(x): 0\leq i \leq n_0\}$ is $\alpha/2$-dense. \end{center} Take a neighborhood $\mathcal{U}(f)\subset \mathcal{V}_2(f),$ where $\mathcal{V}_2(f)$ was given in lemma \ref{lema3}, such that for every $g\in \mathcal{U}(f)$ \begin{center} $\{ w\in g^{-i}(x): 0\leq i \leq n_0\}$ is $\alpha/2$-close to $\{ w\in f^{-i}(x): 0\leq i \leq n_0\}$. \end{center} Hence, $\{ w\in g^{-i}(x): 0\leq i \leq n_0\}$ is $\alpha$-dense. Let $V$ be an open connected set in $\mathbb{T}^n.$ By lemma \ref{afir4}, there exists $m_0\in \mathbb{N}$ such that $diam(\tilde{g}^{m_0}(\widetilde{V}))>m.$ Then we may pick an arc $\gamma$ in $\tilde{g}^{m_0}(\widetilde{V})$ with diameter larger than $m$ and, applying claim (\ref{afir6}), it follows that there exists a connected piece $\gamma'$ of $\gamma$ such that $\gamma^*=\pi(\gamma')$ is in $U_2^c,$ the diameter of $\gamma^*$ is larger than $\delta'_0,$ and it admits a nice cylinder $C(\gamma^*,\frac{d_1}{2}).$ By lemma \ref{lema2}, it follows that $\gamma^*\cap\Lambda_g$ is not empty; let $y$ be a point in the intersection.
Hence, for this point $y$ there exists $\varepsilon_0=\varepsilon_0(y)>0$ such that $\mathbb{B}_{\varepsilon_0}(y)\subset g^{m_0}(V).$ By lemma \ref{lema3}, taking $0<\varepsilon<\varepsilon_0,$ there exists $N=N(\varepsilon)\in\mathbb{N}$ such that $$\mathbb{B}_R(g^N(y))\subset g^N(\mathbb{B}_{\varepsilon}(y))\subset g^{m_0+N}(V).$$ Hence, $\mathbb{B}_{\alpha}(g^N(y))\subset g^{m_0+N}(V).$ By the $\alpha-$density, we have that $$\{ w\in g^{-i}(x): 0\leq i \leq n_0\}\cap \mathbb{B}_{\alpha}(g^N(y))\neq\emptyset.$$ Therefore, denoting $p=m_0+N,$ $$\{ w\in g^{-i}(x): 0\leq i \leq n_0\}\cap g^p(V)\neq\emptyset.$$ Taking the $p-$th pre-image by $g$, we obtain that there is $i_0\in\mathbb{N}$ such that $$\{ w\in g^{-i}(x): 0\leq i \leq i_0\}\cap V\neq\emptyset.$$ Thus, for every $g\in \mathcal{U}(f)$ it follows that $\{ w\in g^{-i}(x): i\in\mathbb{N}\}$ is dense in $\mathbb{T}^n$ for every $x\in \mathbb{T}^n$.\B \begin{figure} \caption{\textit{Iterations by the perturbed map}} \label{graf1.2} \end{figure} \subsection{The Main Theorem Revisited}\label{ss17} In this section, we state a general geometrical version of the Main Theorem. Observe that, using hypotheses (2) and (3) of the Main Theorem, we showed in sections \ref{ss13} and \ref{ss15} the existence of a locally maximal expanding set for $f$ which separates large nice cylinders, and in section \ref{ss14} we proved that this geometrical property persists under perturbation; i.e., there is a locally maximal set $\Lambda_f$ which intersects a nice class of arcs in $U_0^c,$ and this property also holds for the perturbed map. The hypothesis of $f$ being volume expanding guarantees that, given any open set in the covering space, we are able to choose some iterate of it containing an arc with diameter large enough to apply claim \ref{afir6} and lemma \ref{afir3}. Hence, the Main Theorem may be stated as follows: \begin{mtr} \emph{Let $f\in E^1(\mathbb{T}^n)$ be volume expanding such that the pre-orbit of every point is dense. Suppose that there exist an open set $U_0$ with $diam(U_0)<1$ and a locally maximal expanding set $\Lambda_f$ for $f$ in $U_0^c$ such that every arc $\gamma$ in $U_0^c$ with diameter large enough intersects $\Lambda_f$. Then the pre-orbit of every point is $C^1$ robustly dense.} \end{mtr} Observe that in the present version the existence of an expanding locally maximal invariant set intersecting large enough arcs is already assumed. The proof consists in showing that the separation property is robust, and this is done in the same way as in the proof of the Main Theorem. \begin{obs} Note that the Main Theorem implies the Main Theorem Revisited, but we do not know whether the converse is true. \end{obs} \section{Robust transitive endomorphisms with invariant splitting} \label{ss121} Now we consider the case where the endomorphism exhibits a type of partially hyperbolic splitting. First we give the definition of partially hyperbolic endomorphisms, which is slightly different from the one for diffeomorphisms due to the fact that a point may have different pre-images, which implies that the unstable subbundles are not unique (actually, they depend on the inverse branches). \begin{defi}(\textbf{Unstable cone family}) Given $f:M\rightarrow M$ a local diffeomorphism, let $V$ be an open subset of $M$ such that $f\!\!\mid_V$ is a diffeomorphism onto its image.
Denote by $\varphi$ the inverse branch of $f$ restricted to $V;$ more precisely, $\varphi: f(V)\rightarrow V$ is such that $f\circ\varphi (x)=x$ for $x\in f(V).$ A continuous cone field $\mathcal{C}^u=\{\mathcal{C}^u_x\}_{_x}$ defined on $V$ is called \emph{unstable} if it is forward invariant: $$Df(x')\,\mathcal{C}^u_{x'}\subset \mathcal{C}^u_{f(x')}$$ for every $x'\in V\cap \varphi(V).$ \end{defi} \begin{obs} Given a point $x,$ there is not necessarily a unique unstable subbundle; i.e., for each inverse path $\{x_k\}_{k\geq 0}$ (meaning $x_0=x$ and $f(x_{k+1})=x_k$ for $k\geq 0$), there exists an unstable direction belonging to $\mathcal{C}^u.$ \end{obs} \begin{defi}(\textbf{Complementary splitting}) We say that a splitting $\mathbb{E}^c_x+\mathcal{C}^u_x$ is \emph{complementary} if the unstable cone $\mathcal{C}^u_x$ contains an invariant subspace whose dimension is equal to the dimension of the manifold minus the dimension of the central subbundle. \end{defi} \begin{defi}(\textbf{Partially hyperbolic endomorphism with expanding extremal direction}) It is said that an endomorphism $f$ is \emph{partially hyperbolic with expanding extremal direction} provided that for every $x\in M$ there exists a complementary splitting $\mathbb{E}^c_x+\mathcal{C}^u_x,$ where $\{\mathcal{C}^u_x\}_{_x}$ is a family of unstable cones, and there exists $0<\lambda<1$ such that for every inverse branch $\varphi$ of $f$ the following hold: \begin{enumerate} \item $\|D\varphi(x) \,v\|<\lambda,$ for every unit vector $v\in \mathcal{C}^u_x.$ \item $\|Df(x')\mid_{\mathbb{E}^{^c}(x')}\| \|D\varphi(x)v\|<\lambda,$ for every unit vector $v\in \mathcal{C}^u_{x},$ where $\varphi(x)=x', \;f(x')=x.$ \end{enumerate} \end{defi} \subsection{Theorem \ref{teo2}: Splitting Version} Now we state a version of the Main Theorem for the case when the tangent bundle splits into two non-trivial subbundles, one with expanding behavior and the other with nonuniform behavior, dominated by the expanding one. \begin{teo}\label{teo2} Let $f\in E^1(\mathbb{T}^n)$ be a local diffeomorphism, partially hyperbolic with expanding extremal direction, satisfying the following properties: \begin{enumerate} \item $\{ w\in f^{-k}(x): k\in\mathbb{N}\}$ is dense for every $x\in \mathbb{T}^n.$ \item There exist $\delta_0>0,$ $\lambda_0>1$ and $k_0\in \mathbb{N}$ such that for every $x\in \mathbb{T}^n$, if $\gamma$ is a disc tangent to the unstable cone $\mathcal{C}_x^u$ with internal diameter larger than $\delta_0,$ there exists a point $y\in\gamma$ such that $m\{Df^i\mid_{\mathbb{E}^c({f^k(y)})}\}>\lambda_0^i,$ for all $i>0$ and all $k>k_0.$ \end{enumerate} Then, for every $g$ close enough to $f,$ $\{ w\in g^{-k}(x): k\in\mathbb{N}\}$ is dense for every $x\in \mathbb{T}^n.$ \end{teo} \subsection{Proof of Theorem \ref{teo2}} The proof of Theorem \ref{teo2} is similar to the one given in \cite{PS1}, where it is proved that if a partially hyperbolic diffeomorphism satisfies a hypothesis like the one stated in Theorem \ref{teo2} and its strong stable foliation is minimal, then the strong stable foliation is robustly minimal. The key hypothesis in the statement of the main theorem in \cite{PS1} says that on any compact piece of the unstable foliation there exists a point such that the central bundle has uniformly expanding behavior along the forward orbit, and this is exactly what we have. The goal consists in proving that this property is robust under perturbation.
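To fix ideas about the definition of partial hyperbolicity with expanding extremal direction, the reader may keep in mind the following toy linear model (our illustration; it is not one of the examples of section \ref{s5}). On $\mathbb{T}^2,$ take $f(x_1,x_2)=(2x_1,5x_2) \bmod \mathbb{Z}^2,$ with $\mathbb{E}^c=\langle e_1\rangle$ and cone field $\mathcal{C}^u_x=\{v=(v_1,v_2): |v_1|\leq \frac{1}{2}|v_2|\},$ which is clearly forward invariant. For any inverse branch $\varphi$ we have $D\varphi \,v=(\frac{v_1}{2},\frac{v_2}{5}),$ so for a unit vector $v\in\mathcal{C}^u_x$ (hence $v_1^2\leq\frac{1}{5}$),
$$\|D\varphi\, v\|^2\leq \frac{1}{4}\cdot\frac{1}{5}+\frac{1}{25}=\frac{9}{100}, \qquad \|Df\mid_{\mathbb{E}^c}\|\,\|D\varphi\, v\|\leq 2\cdot\frac{3}{10}=\frac{3}{5},$$
so both conditions of the definition hold with $\lambda=\frac{7}{10}.$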
Given a local diffeomorphism $f$ as in the statement of Theorem \ref{teo2}, we want to show that any small perturbation $g$ preserves the density of the pre-orbit of every point. Our strategy is to prove that any disc tangent to the unstable cones for $g$ with large enough internal diameter contains a point such that the central direction along its forward orbit by $g$ is uniformly expanding. Observe that, given any open set, since we have a direction that is indeed expanding, the diameter of its iterates along the unstable direction grows. Then we are able to pick a disc inside some iterate, tangent to the unstable cones and with diameter large enough to apply the property above. Hence, there exists a point whose forward orbit is expanding in all directions; then some iterate contains a ball of a fixed radius $\varepsilon.$ Since $g$ is close enough to $f,$ the pre-orbits by $g$ are $\varepsilon-$dense. Therefore, given any open set, by the property of the unstable discs, there exists an iterate that intersects the pre-orbit by $g$ of any point. Thus we conclude the density of the pre-orbit of any point for the perturbed map. Moreover, the proof of Theorem \ref{teo2} can also be performed in the spirit of the Main Theorem. In fact, it is possible to show that $$\bigcap_{l\geq 0} f^{-l}(\{x: m\{Df^n\mid_{\mathbb{E}^c({f^k(x)})}\}>\lambda_0^n,\;n>0,\;k>k_0\})$$ is an invariant expanding set that separates unstable discs. This provides a geometrical interpretation. \B \subsection{Remarks About the Main Theorem and Theorem \ref{teo2}} Observe that in the Main Theorem we asked that large arcs contain points whose forward iterates remain in the expanding region. The same is required in Theorem \ref{teo2}, but just for large unstable discs: there is a point whose forward iterates remain in ``an expanding region'' for the central bundle. The main difference in the proofs arises from the fact that in the version with splitting, since we know that we have uniform expansion in one direction, any disc with internal diameter larger than $\delta_0$ and tangent to this direction grows up to length $\delta_1>\delta_0$ in a bounded uniform time, independently of the disc. Note that we cannot guarantee this assuming only volume expansion. Observe that in the Main Theorem it is not assumed that $f$ has no splitting; in fact, it could also be partially hyperbolic. However, knowing in advance that the endomorphism is partially hyperbolic, it is possible to get sufficient conditions for robust transitivity weaker than the ones required by the Main Theorem. \section{$C^1$ Robust transitivity and volume expansion} \label{s4} Before showing the relation between $C^1$ robust transitivity and volume expansion (Theorem \ref{teo0}), let us recall some definitions involved in the statement. \begin{defi}\label{RT} The set $\Lambda_f(U)=\bigcap_{n\in\mathbb{Z}}f^n(\overline{U})$ is $C^r$ \emph{robustly transitive} if $\Lambda_g(U)=\bigcap_{n\in\mathbb{Z}}g^n(\overline{U})$ is transitive for every endomorphism $g$ $C^r$ close enough to $f$, where $U$ is an open set. It is said that a map $f$ is $C^r$ \emph{robustly transitive} if there exists a $C^r$ neighborhood $\mathcal{U}(f)$ such that every $g\in \mathcal{U}(f)$ is transitive.
\end{defi} \begin{defi}\label{nosp} We say that $f$ restricted to an invariant set $\Lambda$ has \emph{no dominated splitting in a $C^r$ robust way} if there exists a $C^r$ open neighborhood $\mathcal{U}(f)$ of $f$ such that for every $g\in\mathcal{U}(f)$ the tangent space $T\Lambda$ does not admit any dominated splitting. \end{defi} \begin{teo}\label{teo0} Let $f\in E^1(M)$ be a local diffeomorphism and $U$ an open set in $M$ such that $\Lambda_f(U)=\bigcap_{n\in\mathbb{Z}}f^n(\overline{U})$ is a $C^1$ robustly transitive set and has no dominated splitting in a $C^1$ robust way. Then $f$ is volume expanding. \end{teo} \text{}\\ {\bf Proof. } The proof of this theorem is similar to that of Theorem 4 in \cite[pp.361]{BDP}; nevertheless, we include its main steps. Let us consider $f\in E^1(M)$ a local diffeomorphism and denote by $\Lambda_f(U)$ a (nontrivial) $C^1$ robustly transitive set for $f$ (note that $U$ could be the entire manifold). The idea of the proof is to assume that $f$ is not volume expanding and show that for every $C^1$ neighborhood $\mathcal{U}(f)$ of $f$ in $E^1(M)$ there exists $\psi\in\mathcal{U}(f)$ such that $\psi$ has a sink, and therefore $\psi$ cannot be transitive. Suppose that $f$ is not volume expanding. Since $f$ is onto, it cannot be uniformly volume contracting on the entire manifold, so there are points at which some iterate expands volume, i.e. $1 \leq |det(Df^{k}(x))|$ for some $k\geq 0$, but not by too much, i.e. $|det(Df^{k}(x))|\leq 1+\epsilon$ with $\epsilon$ small. Then there are sequences $x_n\in \Lambda_f(U),$ $k_n\in \mathbb{N}$ and $\tau_n>1,$ with $k_n\rightarrow\infty$ and $\tau_n\rightarrow 1^+,$ such that $$1 \leq |det(Df^{k_n}(x_n))|<\tau_n^{k_n}.$$ This is equivalent to saying that $$\frac{1}{k_n}\sum_{i=0}^{k_n-1}\log(|det(Df(f^i(x_n)))|)<\log(\tau_n).$$ We may take $k_n$ such that $f^i(x_n)\neq f^j(x_n)$ for all $i\neq j,$ $i,j\in\{0,\ldots,k_n\}.$ Consider for each $n$ the probability measure $\delta_n$ supported on $\{x_n,f(x_n),\ldots,f^{k_n}(x_n)\},$ i.e. $\delta_n=\frac{1}{k_n}\sum_{i=0}^{k_n-1}\delta_{f^i(x_n)}.$ As the space of probability measures is compact in the weak star topology, there exists a subsequence of $\{\delta_n\}_n$ that converges to an invariant probability measure $\mu$ such that $$\int\log|det(Df(x))|d\mu(x)\leq 0.$$ In fact, a classical argument proves that $\mu$ is invariant by $f,$ since $f_*(\mu)-\mu$ is the weak star limit of $\frac{1}{k_{n_i}}(\delta_{f^{k_{n_i}}(x_{n_i})}-\delta_{x_{n_i}}),$ which converges to zero. Observing that $$\begin{array}{ll} \int\log|det(Df(x))|d\delta_n &= \frac{1}{k_n}\sum_{i=0}^{k_n-1}\log(|det(Df(f^i(x_n)))|)\\\\ &=\frac{1}{k_n}\log(|det(Df^{k_n}(x_n))|) \leq \log(\tau_n), \end{array}$$
and since $\tau_n\rightarrow 1^+,$ we deduce that $$\int\log|det(Df(x))|d\mu(x) \leq 0.$$ By the ergodic decomposition theorem, there is an ergodic $f-$invariant measure $\nu$ such that $$\int\log|det(Df(x))|d\nu(x)\leq 0.$$ Using the ergodic closing lemma for nonsingular endomorphisms (see \cite{Castro}), given $\varepsilon>0$ there are $g$ close to $f$ and a $g-$periodic point $y$ such that $$\frac{1}{m_{\varepsilon}}\sum_{i=0}^{m_{\varepsilon}-1}\log(|det(Dg(g^i(y)))|)<\varepsilon,$$ where $m_{\varepsilon}$ is the period of $y.$ Note that if $\varepsilon\rightarrow 0,$ then $m_{\varepsilon}\rightarrow\infty.$ So, taking $\varepsilon>0$ arbitrarily small and $m_{\varepsilon}$ large, and using Franks' Lemma \cite{Franks}, we get $\varphi$ close to $g$ such that $\varphi^{m_{\varepsilon}}(y)=y\in\Lambda_{\varphi}(U)$ and $$\frac{1}{m_{\varepsilon}}\sum_{i=0}^{m_{\varepsilon}-1}\log(|det(D\varphi(\varphi^i(y)))|)<0;$$ this means that $|det(D\varphi^{m_{\varepsilon}}(y))|<\lambda<1$ for some $\lambda.$ Observe that we are assuming the dimension of the manifold to be greater than or equal to 2, so the fact that the modulus of the Jacobian of $\varphi$ is smaller than 1 does not imply that all the eigenvalues have modulus smaller than 1. Since $\Lambda_{\varphi}(U)$ is $C^1$ robustly transitive, after a perturbation we may assume that the relative homoclinic class $H(y,\varphi,U)$ of $y$ is the whole $\Lambda_{\varphi}(U)$ (see \cite{BDP} for the definition). Now, consider the dense subset $\Sigma\subset\Lambda_{\varphi}(U)$ consisting of all the hyperbolic periodic points of $\Lambda_{\varphi}(U)$ homoclinically related to $y$. If $\varphi$ is close enough to $f,$ then the tangent bundle does not admit a dominated splitting either. Using the idea of the proof of Lemma 6.1 in \cite[pp. 407]{BDP} and, after that, Franks' Lemma, we obtain that there exist a perturbation $\psi$ of $\varphi$ and a point $p\in \Sigma$ such that all the eigenvalues of $D\psi^{m(p)}(p)$ have modulus strictly smaller than 1, where $m(p)$ is the period of $p$. This means that the maximal invariant set of $\psi$ in $U$ contains a sink, which is a contradiction, since we chose $\psi$ sufficiently close to $f$ so that $\Lambda_{\psi}(U)$ is still transitive. \B \begin{obs} If $\Lambda_f(U)$ admits a splitting, then the extremal indecomposable subbundle is volume expanding. The proof is similar to the proof of theorem \ref{teo0}, restricting $Df$ to the extremal subbundle. \end{obs} \begin{obs}\label{obs16} Theorem \ref{teo0} implies that volume expansion of the extremal bundle is a necessary condition for an endomorphism which is a local diffeomorphism to be a robustly transitive map. However, volume expansion is not a sufficient condition guaranteeing robust transitivity for a local diffeomorphism. For instance, consider the product of an expanding endomorphism with an irrational rotation: this map is volume expanding and transitive, but not robustly transitive. \end{obs} \begin{obs} It is expected that if $f$ is robustly transitive and has no invariant subbundles in a robust way, then $f$ is a local diffeomorphism. This result depends on whether the Ergodic Closing Lemma holds even if there are critical points, since for maps with critical points there already exist versions of the Connecting Lemma, the Closing Lemma and Franks' Lemma, which are the principal results involved in the proof of Theorem \ref{teo0}. \end{obs}
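To make the example in remark \ref{obs16} explicit (a sketch with concrete values chosen by us): on $\mathbb{T}^2,$ take $E(x,\theta)=(3x,\theta+\alpha)$ with $\alpha$ irrational. Then $DE(x,\theta)=\mathrm{diag}(3,1),$ hence
$$|det(DE(x,\theta))|=3>1,$$
so $E$ is volume expanding, and it is transitive since the first factor is topologically mixing and the second is minimal. However, replacing $\alpha$ by a nearby rational number gives an arbitrarily $C^1$-small perturbation which is no longer transitive.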
\section{Examples of Robust Transitive Endomorphisms}\label{s5} In this section we show that there exist examples of robust transitive endomorphisms verifying the hypotheses of our main results. The first two examples correspond to endomorphisms without any splitting, to which the Main Theorem and its revisited version apply; they can be considered as endomorphism versions of the example constructed in \cite{BV}, and they are \emph{Derived from Expanding} endomorphisms. The last two correspond to partially hyperbolic endomorphisms; they can be considered as endomorphism versions of the examples constructed in \cite{BD} and \cite{NP}, and they are not isotopic to expanding ones. \subsection{Example 1: Applying the Main Theorem}\label{ss31} Consider $\mathcal{E}:\mathbb{T}^n\rightarrow\mathbb{T}^n$ an expanding endomorphism, with $n\geq 2,$ and let us consider a Markov partition of $\mathcal{E};$ observe that its elements are $n-$dimensional closed rectangles. Note that, taking $m>1$ large, the topological degree of $\mathcal{E}^m$ equals the topological degree of $\mathcal{E}$ raised to the $m-$th power, and the Markov partition can be chosen in such a way that the number of its elements equals the topological degree of $\mathcal{E}^m;$ so, without loss of generality, we may assume that the initial map has as many elements in the partition as we want. More precisely, if $N$ denotes the topological degree of $\mathcal{E},$ we may assume that $N$ is large and that the Markov partition has $N$ elements. Denote by $R_i$ the elements of the partition, with $1\leq i \leq N$; each $R_i$ is closed, $int(R_i)$ is nonempty and $int(R_i)\cap int(R_j)=\emptyset$ if $i\neq j$. Now, consider $\psi:\mathbb{T}^n\rightarrow\mathbb{T}^n$ a map isotopic to the identity and denote $\widehat{R}_i=\psi(R_i)$ for every $i.$ The idea of using this map is to deform the elements of the Markov partition and get a new partition whose elements are not all of the same size (it could contain some very small elements and some very big ones). Set $U_0$ an open set in $\mathbb{T}^n$ such that if $\widetilde{U}$ is the convex hull of the lift of $U_0,$ then $\widetilde{U}\cap [0,1]^n$ is contained in the interior of $[0,1]^n,$ i.e. $diam(U_0)<1.$ Note that there exists $\widehat{R}_i$ such that $\widehat{R}_i\cap U_0$ is nonempty. We also require that many of the $\widehat{R}_i$ are contained in $U_0^c;$ observe that this condition is feasible, since the initial map has as many elements in the partition as we want. \begin{figure} \caption{\textit{Deforming the initial Markov Partition}} \label{graf3.1} \end{figure} Define $f_0:\mathbb{T}^n\rightarrow \mathbb{T}^n$ by $f_0=\psi \circ \mathcal{E}.$ We assume that there exist $p\in U_0$ and $q_i\in U_0^c,$ with $1\leq i\leq n-1,$ fixed points for $f_0.$ This is possible because we may start with an expanding map which has as many fixed points as we need. Let us suppose that $p$ and the $q_i$ are expanding for $f_0$ in all directions, that is, all the eigenvalues associated to these points have modulus greater than 1.
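\begin{obs} For concreteness (this is only one possible choice of initial map), one may take $\mathcal{E}(x)=Nx\;(\mbox{mod } \mathbb{Z}^n)$ with $N$ large: its topological degree is $N^n,$ the $N^n$ cubes of side $1/N$ form a Markov partition, and the $(N-1)^n$ points whose coordinates lie in $\frac{1}{N-1}\mathbb{Z}\;(\mbox{mod } \mathbb{Z})$ are fixed points, all of them expanding in all directions with every eigenvalue equal to $N.$ \end{obs}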
Pick $\varepsilon>0$ small enough such that $\mathbb{B}_\varepsilon(q_i)\cap U_0=\emptyset$ and $\mathbb{B}_\varepsilon(q_i)\cap \mathbb{B}_\varepsilon(q_j)=\emptyset$ if $i\neq j.$ Let us denote the decomposition of the tangent space as follows: $$T_x(\mathbb{T}^n)=\mathbb{E}_1^u\prec \mathbb{E}_2^u \prec \cdots\prec \mathbb{E}_{n-1}^u\prec \mathbb{E}_n^u,$$ where $\prec$ denotes that $\mathbb{E}_i^u$ dominates the expanding behavior of $\mathbb{E}_{i-1}^u.$ Next we deform $f_0$ by a smooth isotopy supported in $U_0\cup (\bigcup\mathbb{B}_\varepsilon(q_i))$ in such a way that: \begin{enumerate} \item The continuation of $p$ goes through a pitchfork bifurcation, giving rise to two periodic points $r_1, r_2,$ both repellers, while $p$ becomes a saddle point (see the remark at the end of this example for a schematic local model). The new map $f$ still expands volume in $U_0.$ \item Two expanding eigenvalues of $q_i$ become complex expanding eigenvalues. More precisely, we mix the two expanding subbundles of $T_{q_i}(\mathbb{T}^n)$ corresponding to $\mathbb{E}_i^u(q_i)$ and $\mathbb{E}_{i+1}^u(q_i),$ obtaining $T_{q_i}(\mathbb{T}^n)=\mathbb{E}_1^u\prec \mathbb{E}_2^u \prec \cdots\prec \mathbb{F}_i^u\prec\cdots\prec \mathbb{E}_{n-1}^u\prec\mathbb{E}_n^u,$ where $\mathbb{F}_i^u$ is two-dimensional and corresponds to the complex eigenvalues. \item Outside $U_0\cup (\bigcup\mathbb{B}_\varepsilon(q_i)),$ $f$ coincides with $f_0.$ \item $f$ is expanding in $U_0^c.$ \item There exists $\sigma>1$ such that $|det(Df(x))|>\sigma$ for every $x\in \mathbb{T}^n.$ \end{enumerate} \begin{figure} \caption{\textit{$f$ isotopic to $f_0$}} \label{graf3.2} \end{figure} \subsubsection{Property of Large Arcs} \begin{afir} Every large arc in $U_0^c$ has a point whose forward orbit remains in $U_0^c.$ \end{afir} \text{}\\ {\bf Proof. } Take $d$ the maximum of the diameters of the elements of the partition contained in $U_0^c.$ Note that every arc in $U_0^c$ with diameter larger than $d$ cannot be contained in the interior of any element of the partition; more precisely, it has to intersect at least two elements of the partition. Hence, the image by $f$ of such an arc $\gamma$ has diameter 1. So there exists a piece of $f(\gamma)$ in $U_0^c$ intersecting at least one element of the partition across two parallel sides; let us call it $\gamma^1$. Choose a pre-image of $\gamma^1$ in $\gamma$ and call it $\gamma_1.$ Repeating the process for $\gamma^1,$ we obtain $\gamma^2,$ a piece of $f(\gamma^1)$ verifying the same condition as $\gamma^1.$ Then choose $\gamma_2,$ a pre-image of $\gamma^2$ by $f^2$ in $\gamma.$ In this way we construct a sequence of nested arcs in $\gamma.$ Their intersection is nonempty, and any point in this intersection satisfies our claim. \B \subsubsection{Remarks and variation of Example 1} \begin{enumerate} \item The $q_i$'s are fixed points for $f$ with complex expanding eigenvalues. Note that the existence of these points ensures that the tangent bundle does not admit any invariant subbundle. We could also start with an expanding map having, besides $p,$ periodic points $q_i$ with complex eigenvalues. In such a case, it is enough to make $p$ go through a pitchfork bifurcation. \item This example shows that $U_0$ can be as big as we desire, provided it verifies the hypothesis of having diameter less than 1. \item It can be constructed in any dimension. \end{enumerate}
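\begin{obs} Concerning the pitchfork bifurcation in item (1) of the construction of $f,$ a schematic one-dimensional local model (we include it only as an illustration; the actual isotopy takes place in $\mathbb{T}^n$) is the family $g_\mu(x)=\mu x+x^3.$ For $\mu>1$ the origin is the only fixed point near $0$ and it is repelling, while for $0<\mu<1$ the origin becomes contracting in this direction and two new fixed points $\pm\sqrt{1-\mu}$ appear, with derivative $g_\mu'(\pm\sqrt{1-\mu})=3-2\mu>1.$ Taking $\mu$ close to 1 and keeping the remaining directions strongly expanding, condition (5) is not violated. \end{obs}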
\subsection{Example 2: Applying the Main Theorem Revisited} \label{ss32} Let us consider $\mathcal{E}:\mathbb{T}^n\rightarrow\mathbb{T}^n$ an expanding endomorphism, with $n\geq 2.$ Assume that the initial map has many elements in its Markov partition, say $N$ elements. Denote by $R_i$ the elements of the partition, with $1\leq i \leq N$. Since $\mathcal{E}$ is expanding, the $R_i$ are closed, the $int(R_i)$ are nonempty and $int(R_i)\cap int(R_j)=\emptyset$ if $i\neq j$. Choose finitely many of these elements, $\{R_{i_j}\}_{j=1}^k,$ such that $R_{i_j}\cap R_{i_s}=\emptyset$ if $i_j\neq i_s,$ i.e. they are pairwise disjoint. Consider the pre-images of every $R_{i_j},$ say $\mathcal{E}^{-1}(R_{i_j})=\{P_{i_j}^l\}_{l=1}^N,$ and denote $P_{i_j}^0=R_{i_j}.$ Next, we keep those $P_{i_j}^r$ such that $P_{i_j}^r\cap P_{i_{s}}^l=\emptyset$ whenever $0\leq r\neq l\leq N$ and $i_j\neq i_s.$ Finally, let us denote by $\{P_i\}_i$ the collection of these latter subsets, so that they are pairwise disjoint. See figure \ref{graf3.3}. \begin{figure}\label{graf3.3} \end{figure} Now, consider $\psi:\mathbb{T}^n\rightarrow\mathbb{T}^n$ a map isotopic to the identity and denote $\widehat{P}_i=\psi(P_i)$ for every $i.$ See figure \ref{graf3.4}. \begin{figure} \caption{\textit{Deforming the Markov partition}} \label{graf3.4} \end{figure} Choose $\widetilde{P}_i$ an open connected subset whose closure is contained in the interior of $\widehat{P}_i.$ Let $\phi_i:\mathbb{T}^n\rightarrow\mathbb{T}^n$ be a map isotopic to the identity such that \begin{itemize} \item $\phi_i\mid_{\widetilde{P}_i}$ is not expanding. \item $\phi_i\mid_{\widehat{P}_i^c}$ is the identity. \end{itemize} \vspace*{-5mm} Define $\phi:\mathbb{T}^n\rightarrow\mathbb{T}^n$ by $$\phi(x)=\left\{\begin{array}{ll} \phi_i(x),\;& \mbox{if}\; x\in\widehat{P}_i\\\\ x,\;& \mbox{if}\; x\not\in \bigcup_i\widehat{P}_i \end{array} \right.$$ Hence, $\phi$ is equal to the identity in $[\bigcup_i\widehat{P}_i]^c,$ and it expands volume but is not expanding in $\bigcup_i\widehat{P}_i.$ Once we have defined all these maps, we consider the map $f=\phi \circ\psi \circ \mathcal{E}$ from $\mathbb{T}^n$ onto itself and denote $U_0=int (\bigcup_i\widehat{P}_i).$ Observe that $f$ verifies the following: \begin{enumerate} \item[(\emph{i})] $f$ is a volume expanding endomorphism. \item[(\emph{ii})] $f$ is an expanding map in $U_0^c.$ \item[(\emph{iii})] $\Lambda_f=\bigcap_{n\geq 0}f^{-n}(U_0^c)$ is an expanding locally maximal set for $f$ which has the property of separating large nice cylinders. \end{enumerate} Since (\emph{i}) and (\emph{ii}) are immediate from the construction of $f,$ we concentrate on proving (\emph{iii}). \subsubsection{$\Lambda_f$ Separates Large Nice Cylinders} Note that, by the construction of $U_0,$ the elements of the pre-orbit of $U_0$ are pairwise disjoint. Let us consider $d_0=\max\{diam(C):\, C \mbox{ a connected component of } \bigcup_{k\geq 0} f^{-k}(U_0)\}.$ By the definition of $U_0,$ $0<d_0<1.$ Observe that $\Lambda_f$ looks like a Sierpinski set, see figure \ref{graf3.3b}. \begin{afir}\label{afir31} If $\gamma$ is an arc in $U_0^c$ with diameter $1,$ then $\gamma$ intersects $\Lambda_f.$ \end{afir}
\text{}\\ {\bf Proof. } Let $\gamma$ be an arc in $U_0^c$ such that $diam(\gamma)=1,$ and suppose that $\gamma$ does not intersect $\Lambda_f.$ Remember that $\Lambda_f=\mathbb{T}^n\setminus \bigcup_{k\geq 0} f^{-k}(U_0),$ which means that if $x\in \Lambda_f,$ then $f^k(x)\not\in U_0$ for all $k\geq 0.$ Therefore, $\gamma$ is contained in one pre-image of $U_0$ or in a union of pre-images of $U_0.$ Observe that $\gamma$ cannot be contained in just one pre-image of $U_0$: being connected, if it were contained in $f^{-k}(U_0)$ for some $k\geq 0,$ it would lie in a single connected component, so that $diam(\gamma)\leq d_0,$ which is absurd because $d_0<1=diam(\gamma).$ Hence, $\gamma$ should be contained in a union of pre-images of $U_0;$ since $\gamma$ is compact, it can be covered by finitely many pre-images of $U_0.$ But the pre-images of $U_0$ are pairwise disjoint open sets and $\gamma$ is connected, so $\gamma$ would have to be contained in just one of them, which we have already excluded. Therefore some point of $\gamma$ is not covered by the pre-images of $U_0;$ in particular, $\gamma$ intersects $\Lambda_f.$ \B \begin{figure} \caption{\textit{$\Lambda_f$ looks like a Sierpinski set}} \label{graf3.3b} \end{figure} \begin{obs} We have already proved the existence of the invariant expanding locally maximal set $\Lambda_f.$ Moreover, by claim (\ref{afir31}), this invariant set intersects every arc with large diameter. Then, by the Main Theorem Revisited, it follows that this map is robustly transitive. \end{obs} \subsubsection{Remarks About Example 2} \begin{enumerate} \item We can apply our Main Theorem Revisited to this example, obtaining in particular that $f$ is robustly transitive. \item The $\widehat{P}_i$'s can be as many and as big as we want. \item We can construct many examples starting with this initial map. In particular, we can construct examples without invariant subbundles, for instance by putting a fixed point with complex eigenvalues in the complement of $U_0$ and performing a derived-from-expanding construction inside some $\widehat{P}_i$. \end{enumerate} \subsection{Example 3: Applying Theorem \ref{teo2}} \label{ss33} The idea of the next example is to build an endomorphism of the 2-torus which is a skew-product and contains a ``blender" for endomorphisms. This example is more or less a standard adaptation for endomorphisms of the examples obtained in \cite{BD} for diffeomorphisms. The main goal is to get ``blenders" for endomorphisms, and since we do not need to deal with a stable foliation, the task is easier than in the case of diffeomorphisms. For more information about blenders see \cite{BD}. First, let us identify the 2-torus with $[0,1]^2$ and establish some notation before defining the map. Pick $0<a<b<3/4$ and $1/4<c<d<1.$ Denote $J_1=[0,b],\; J_2=[a,3/4], \;J_3=[1/4,d]$ and $J_4=[c,1].$ Note that $J_1\cap J_2=[a,b]$ and $J_3\cap J_4=[c,d].$ This decomposition is associated to the horizontal fibers. Next, fix $N>3$ and pick $0<a_1<b_1<a_2<b_2<a_3<b_3<a_4<b_4<1$ such that $b_i-a_i=1/N.$ Let us denote $I_i=[a_i,b_i],$ with $1\leq i\leq 4.$ Note that these intervals are pairwise disjoint and do not contain 0 or 1. We associate this decomposition to the vertical fibers.
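\begin{obs} For concreteness (again, one possible choice among many), since each $I_i$ has length $1/N,$ the vertical dynamics $\mathcal{E}$ defined below may be taken to be the $N$-fold covering $\mathcal{E}(y)=Ny\;(\mbox{mod } 1)$ with $N$ large enough: indeed $\mathcal{E}(I_i)=[Na_i,Na_i+1]\;(\mbox{mod } 1)=S^1,$ and the fixed points of $\mathcal{E}$ are the $N-1$ points $k/(N-1),$ $0\leq k\leq N-2,$ so, positioning the intervals $I_i$ appropriately, each $I_i$ contains one of these fixed points. \end{obs}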
Let us call $R_i=J_i\times I_i,$ where $i=1,2,3,4.$ Now, define $\Phi:\mathbb{T}^2\rightarrow \mathbb{T}^2$ by $$\Phi(x,y)=(\varphi_y(x), \mathcal{E}(y)),$$ where $\varphi_y, \mathcal{E}: S^1\rightarrow S^1$ are defined as follows: \begin{enumerate} \item $\mathcal{E}$ is an expanding endomorphism such that: \begin{itemize} \item $\mathcal{E}(I_i)=[0,1]$ for every $i.$ \item There exists $a_i<c_i<b_i$ such that $\mathcal{E}(c_i)=c_i.$ \end{itemize} \item $\varphi_y$ is defined by $\varphi_y(x)=f_i(x)$ if $y\in I_i,$ where $f_i: S^1\rightarrow S^1$ are diffeomorphisms defined as follows: \begin{itemize} \item $f_1$ and $f_2$ satisfy the following properties: \begin{enumerate} \item[(\emph{i})] $f_1$ has two fixed points, $0$ and $a'\in(3/4,1),$ where $0$ is a repeller and $a'$ is an attractor for $f_1.$ \item[(\emph{ii})] $f_2$ has two fixed points, $3/4$ and $a''\in (a',1),$ where $3/4$ is a repeller and $a''$ is an attractor for $f_2.$ \item[(\emph{iii})]$f_1(J_1)= f_2(J_2)=[0,3/4].$ \item[(\emph{iv})] $|f'_i \mid_{J_i}|>1$ for $i=1,2.$ \end{enumerate} \item $f_3$ and $f_4$ satisfy the following properties: \begin{enumerate} \item[(\emph{i}')] $f_3$ has two fixed points, $c'\in(0,1/4)$ and $1/4,$ where $c'$ is an attractor and $1/4$ is a repeller for $f_3.$ \item[(\emph{ii}')] $f_4$ has two fixed points, $c''\in(0,c')$ and $1,$ where $c''$ is an attractor and $1$ is a repeller for $f_4.$ \item[(\emph{iii}')] $f_3(J_3)= f_4(J_4)=[1/4,1].$ \item[(\emph{iv}')] $|f'_i\mid_{J_i}|>1$ for $i=3,4.$ \end{enumerate} \end{itemize} \item $|det(D\Phi)|=|\frac{\partial \varphi_y}{\partial x}\,\mathcal{E}'|>1.$ \item $\mathcal{E}'\gg \frac{\partial \varphi_y}{\partial y}.$ \end{enumerate} \begin{figure} \caption{\textit{Horizontal dynamics}} \label{ej3_1} \end{figure} Hence, the horizontal fibers $F_i=S^1\times \{c_i\}$ are invariant by $\Phi;$ see figure \ref{ej3_2}. Moreover, by condition (4), the image by $\Phi$ of every vertical fiber is almost a vertical fiber, in the sense that its tangent vectors are close to vertical ones; more precisely, the unstable cone family is almost vertical. \begin{figure} \caption{\textit{This is how the dynamics of $\Phi$ looks}} \label{ej3_2} \end{figure} Next, we consider $\Lambda_1^+=\bigcap_{n\geq 0}\Phi^{-n}(R_1\cup R_2)$ and $\Lambda_2^+=\bigcap_{n\geq 0}\Phi^{-n}(R_3\cup R_4).$ Let $\Lambda_1=\bigcap_{n\in \mathbb{Z}}\Phi^{-n}(R_1\cup R_2)$ and $\Lambda_2=\bigcap_{n\in \mathbb{Z}}\Phi^{-n}(R_3\cup R_4).$ Note that both $\Lambda_1$ and $\Lambda_2$ are expanding locally maximal invariant sets and each one contains a blender. \subsubsection{$\Lambda_1$ and $\Lambda_2$ Separate Large Vertical Segments} Let us denote by $\ell_1^u(p)$ the vertical segment passing through $p$ with length 1. \begin{afir}\label{afir33} For every $p\in R_1\cup R_2,$ it follows that $\ell_1^u(p)\cap \Lambda_1^+\neq\emptyset.$ \end{afir} \text{}\\ {\bf Proof. } Let $p\in R_1\cup R_2;$ then $L_i=\ell_1^u(p)\cap R_i$ is nonempty for some $i\in\{1,2\}.$ The image of $L_i$ by $\Phi$ has length 1 and, by property (4) of $\Phi,$ $\Phi(L_i)$ is almost vertical. Moreover, $L_i\cap F_i\neq\emptyset$ and $\Phi(L_i\cap F_i)\subset F_i\subset R_i.$ Then $\Phi(\ell_1^u(p))\cap (R_1\cup R_2)\neq\emptyset.$ Call $K_1^i$ the connected component of $\Phi(\ell_1^u(p))\cap R_i$ containing $\Phi(L_i\cap F_i).$ Note that $P_2(K_1^i)=I_i,$ where $P_2$ is the projection onto the second coordinate.
Consider the pre-image of $K_1^i$ by $\Phi$ in $L_i$ and call it $S_1^i.$ Now, iterate $K_1^i$ by $\Phi;$ proceeding as before, we obtain $K_2^i,$ the connected component of $\Phi(K_1^i)\cap R_i$ such that $P_2(K_2^i)=I_i.$ Again take a pre-image of $K_2^i$ by $\Phi^2,$ obtaining a compact segment $S_2^i\subset S_1^i.$ Repeating this process, we construct a nested sequence of compact segments $\{S_k^i\}_k$ in each $R_i.$ Thus, $\bigcap_k S_k^i$ is nonempty and contained in $\ell_1^u(p)\cap \Lambda_1^+.$\B \begin{afir}\label{afir35} For every $p\in R_1\cup R_2,$ it follows that $\ell_1^u(p)\cap \Lambda_1\neq\emptyset.$ \end{afir} \text{}\\ {\bf Proof. } By claim (\ref{afir33}), we know that there exists a point $z\in \ell_1^u(p)\cap \Lambda_1^+,$ which means that $\Phi^n(z)\in R_1\cup R_2$ for every $n\geq 0.$ Then it only remains to show that there exists a sequence $\{z_k\}_{k\geq 0}\subset R_1\cup R_2$ such that $z_0=z$ and $\Phi(z_k)=z_{k-1}.$ The idea of the construction of such a sequence is to use the overlapping property (2-\emph{iii}) of the horizontal dynamics. Knowing that $\Phi(R_1)=f_1(J_1)\times [0,1]$ and $\Phi(R_2)=f_2(J_2)\times [0,1],$ by property (2-\emph{iii}) we get that $\Phi(R_1)=\Phi(R_2)= [0,3/4]\times [0,1].$ Hence, $z_0\in (R_1\cup R_2)\cap \Phi(R_1)$ or $z_0\in (R_1\cup R_2)\cap \Phi(R_2),$ so there exists $z_1\in R_1\cup R_2$ such that $\Phi(z_1)=z_0.$ Repeating this process we construct the required sequence. \B \begin{afir}\label{afir34} For every $p\in R_3\cup R_4,$ it follows that $\ell_1^u(p)\cap \Lambda_2\neq\emptyset.$ \end{afir} \text{}\\ {\bf Proof. } The proof is similar to that of claim (\ref{afir35}), with the obvious adjustments.\B \begin{afir} For every $q\in\mathbb{T}^2,$ we have that either $\ell_1^u(q)\cap \Lambda_1\neq\emptyset$ or $\ell_1^u(q)\cap \Lambda_2\neq\emptyset.$ \end{afir} \text{}\\ {\bf Proof. } Given any point $q\in\mathbb{T}^2,$ note that $\ell_1^u(q)\cap R_i\neq\emptyset$ for some $1\leq i\leq 4.$ Hence, taking $p_i\in\ell_1^u(q)\cap R_i$ and noting that $\ell_1^u(p_i)=\ell_1^u(q),$ we may use claim (\ref{afir35}) or (\ref{afir34}) to conclude that either $\ell_1^u(q)\cap \Lambda_1\neq\emptyset$ or $\ell_1^u(q)\cap \Lambda_2\neq\emptyset.$\B \subsubsection{Remarks About Example 3} This example was constructed on the 2-torus with a one-dimensional central bundle, but it can be constructed in any $\mathbb{T}^n,$ and the dimension of the central bundle need not be 1. Also, we can use more than four horizontal dynamics, that is, more than four maps in the first variable. More precisely, the four maps above induce two blenders in the dynamics, but we can consider as many blenders as we want. \subsection{Example 4: Applying Theorem \ref{teo2}}\label{ss34} Let $\mathbb{B}_0$ be an open ball in $\mathbb{T}^m$ centered at $0$ with radius $\alpha<1$ and let $\varphi_0:\mathbb{T}^m\rightarrow \mathbb{T}^m$ be a differentiable map isotopic to the identity such that: \begin{itemize} \item $\varphi_0(0)=0.$ \item There exist $0<\lambda_0<\lambda_1<1$ such that $\lambda_0<m\{D\varphi_0\}<|D\varphi_0\mid_{\mathbb{B}_0}|<\lambda_1,$ i.e. $\varphi_0$ is a contraction in a disk.
\end{itemize} Let us consider $\mathbb{D}_0$ the lift of $\mathbb{B}_0$ to $\mathbb{R}^m$ and $\widetilde{\varphi}_0$ the lift of $\varphi_0.$ Note that $\widetilde{\varphi}_0(0)=0$ and $\lambda_0<m\{D\widetilde{\varphi}_0\}<|D\widetilde{\varphi}_0\mid_{\mathbb{D}_0}|<\lambda_1.$ By Proposition 2.3 of \cite{NP}, there exists $k\in\mathbb{N}$ such that for every small $\varepsilon>0$ there exist $c_1,\ldots,c_k\in\mathbb{B}_{\varepsilon}(0)$ and $\delta>0$ such that $\mathbb{B}_{\delta}(0)\subset \overline{Orbit_{\mathcal{G}}^+(0)},$ where $\mathcal{G}=\mathcal{G}(\widetilde{\varphi}_0, \widetilde{\varphi}_0+c_1,\ldots, \widetilde{\varphi}_0+c_k)$ and $Orbit_{\mathcal{G}}^+(0)$ is the set of points lying on some orbit of $0$ under the iterated function system (IFS) $\mathcal{G};$ more precisely, if we denote $\widetilde{\phi}_0=\widetilde{\varphi}_0$ and $\widetilde{\phi}_i=\widetilde{\varphi}_0+c_i$ for $i=1,\ldots,k,$ then $Orbit_{\mathcal{G}}^+(0)$ is the set of points $\widetilde{\phi}_{\Sigma_l}(0),$ $l\in\mathbb{N},$ where $\Sigma_l=(\sigma_1,\ldots,\sigma_l),$ $\widetilde{\phi}_{\Sigma_l}=\widetilde{\phi}_{\sigma_l}\circ\cdots\circ\widetilde{\phi}_{\sigma_1}$ and $\{\sigma_i\}_{i\in\mathbb{N}} \in \{0,\ldots,k\}^{\mathbb{N}}.$ (For more details about IFS see \cite{NP}.) Now choose $p_1,\ldots,p_r\in \mathbb{T}^m$ such that $\mathbb{T}^m\subset \bigcup_j \mathbb{B}_{\delta}(p_j).$ If $\phi_i$ is the projection of $\widetilde{\phi}_i$ on $\mathbb{T}^m,$ define for every $j$ the IFS $\mathcal{G}_j=\mathcal{G}_j(\phi_0+p_j,\phi_1+p_j,\ldots,\phi_k+p_j).$ Then $\mathbb{B}_{\delta}(p_j)\subset \overline{Orbit_{\mathcal{G}_j}^+(0)}.$ Therefore, there exists an open set $\mathbb{D}_0\subset \mathbb{B}_0$ such that $\bigcup_i \phi_i(\mathbb{D}_0)\supset \mathbb{D}_0,$ i.e. the IFS has the covering property. Hence, $\bigcup_i \phi_i(\mathbb{B}_{\delta'} (p_j))\supset \mathbb{B}_{\delta'} (p_j),$ with $0<\delta'\leq \delta.$ Moreover, $\mathcal{G}_j$ also has the overlapping property, as in Example 3 of the previous subsection. Define the skew-product $F:\mathbb{T}^m\times \mathbb{T}^n \rightarrow \mathbb{T}^m\times \mathbb{T}^n$ by $$F(x,y)=(\psi_y(x),\mathcal{E}(y)),$$ where: \begin{itemize} \item $\mathcal{E}:\mathbb{T}^n\rightarrow \mathbb{T}^n$ is an expanding map with $(k+1)r$ fixed points; let us denote the fixed points by $e_1^i,\ldots,e_r^i$ with $0\leq i\leq k.$ \item For every $y\in\mathbb{T}^n,$ $\psi_y:\mathbb{T}^m\rightarrow\mathbb{T}^m$ is a differentiable map isotopic to the identity such that $\psi_{e_j^i}=\phi_i+p_j,$ with $0\leq i\leq k$ and $1\leq j\leq r.$ \end{itemize} Hence, every fiber $\mathbb{T}^m\times\{e_j^i\}$ is invariant by $F.$ Set $R_j^i=\mathbb{B}_{\delta'} (p_j)\times Q_j^i,$ where $Q_j^i$ is a small neighborhood of $e_j^i$ in $\mathbb{T}^n$ such that $\mathcal{E}(Q_j^i)=\mathbb{T}^n,$ the $Q_j^i$ being pairwise disjoint for all $i,j.$ Note that the $R_j^i$ are the analogues of the $R_i$ in the previous example. Let $\Lambda_F:= \displaystyle\bigcap_{n\in \mathbb{Z}}F^n(\bigcup_{i,j}R_j^i).$
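\begin{obs} Such a base map is easy to produce. For instance (one possible choice among many), the linear expanding map $\mathcal{E}(y)=Ny\;(\mbox{mod } \mathbb{Z}^n)$ has exactly $(N-1)^n$ fixed points, so it suffices to take $N$ with $(N-1)^n\geq (k+1)r$ and select $(k+1)r$ of them as the points $e_j^i.$ \end{obs}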
\subsubsection{$\Lambda_F$ Separates Large Unstable Discs} \begin{afir} $\Lambda_F$ verifies that for every $z\in\bigcup_{i,j}R_j^i$ we have $\ell_1^u(z)\cap \Lambda_F\neq\emptyset,$ where $\ell_1^u(z)$ is an unstable disc of internal diameter 1 passing through $z.$ \end{afir} \text{}\\ {\bf Proof. } We may prove that there exists a point $z\in \bigcup_{i,j}R_j^i$ such that $F^n(z)\in \bigcup_{i,j}R_j^i$ for every $n\geq 0$ in a similar way as we proved claim (\ref{afir33}) in the previous example. Moreover, for this $z$ there exists $z_1\in \bigcup_{i,j}R_j^i$ such that $F(z_1)=z.$ In fact, the idea is more or less the same as in the previous example; we must note that $F(R_j^i)=\psi_{e_j^i}(\mathbb{B}_{\delta'} (p_j))\times \mathcal{E}(Q_j^i) =\phi_i(\mathbb{B}_{\delta'} (p_j))\times \mathbb{T}^n.$ On the other hand, using the covering and overlapping properties, it follows that $$\bigcup_{i,j}R_j^i= \bigcup_{i,j} \mathbb{B}_{\delta'} (p_j)\times Q_j^i\subset \bigcup_{i,j} \phi_i(\mathbb{B}_{\delta'} (p_j)) \times \mathcal{E}(Q_j^i)= F(\bigcup_{i,j}R_j^i).$$ Therefore, since $z\in \bigcup_{i,j}R_j^i,$ there exists $z_1\in \bigcup_{i,j}R_j^i$ such that $F(z_1)=z.$ Inductively we can construct a sequence $\{z_k\}_{k\geq 0}\subset \bigcup_{i,j}R_j^i$ such that $z_0=z$ and $F(z_k)=z_{k-1}.$ Thus, $z\in \ell_1^u(z)\cap \Lambda_F.$\B \subsubsection{Remarks About Example 4} This example is a generalization of Example 3. The intention here is to show that we may apply Theorem \ref{teo2} regardless of the dimension of the central bundle, which can be as large as we want. Another observation is that the existence of blenders guarantees that our examples are robustly transitive, and this example satisfies the property that unstable discs with sufficiently large internal diameter intersect the invariant expanding locally maximal set of the skew-product. \nocite{PS2} \nocite{Katok} \end{document}
Was Agent Orange used in the United States?

There is no definitive answer, but various sources suggest that Agent Orange was used in the United States at some point during the 1960s.

Why did Steve Jobs die despite his wealth?

There is no one-size-fits-all answer to this question, as the factors that contribute to someone's death can vary greatly. However, in Steve Jobs' case, it is likely that his death was due to complications from pancreatic cancer.

What can cause constant headaches and temporomandibular jaw pain to persist for eleven years after a mild brain injury when MRI scans are clear?

There is no clear answer, but it is possible that the headaches and jaw pain are being caused by damage to the trigeminal nerve, which is responsible for sensation in the face.

I have a central government job in a ministry; I want to open an eating joint alongside it. How should I go forward with it?

There is no definitive answer, and it really depends on the specific circumstances of your situation. However, some steps you may want to consider include:

1. Talk to your superiors at your government job and let them know your intentions. Get their blessing (or at least their acknowledgement) before moving forward.
2. Determine if there are any conflict-of-interest rules that would prohibit you from owning and operating a business while working for the government.
3. Draw up a business plan for your eatery, including financial projections and marketing strategies.
4. Research the local market to make sure there is demand for your type of restaurant.
5. Find a suitable location for your eatery, taking into account things like foot traffic, parking, and competition.
6. Start putting together the necessary financing for your business venture.
7. Once everything is in place, open your doors and start serving up great food!

Will Charlie Cheever remain an active Quora user after leaving the company?

It's hard to say. Cheever co-founded Quora and spent years building it, so he may have lost interest in the site after he stepped down.

I'm unhappy after my first cataract surgery (left eye). I got the IOL for longer distance. Now I can't see text on my computer or phone, and I'm on a computer and phone all day. Should I switch the lens out for close-up vision? My second eye still needs surgery.

Yes, you should. I had the same problem.

Can cataract surgery and LASIK be done at the same time?

No, cataract surgery can't be done at the same time as LASIK. Cataract surgery is an invasive procedure, where the natural lens of your eye is removed, a new artificial lens is implanted and your eye is stitched closed. LASIK uses a laser to create a thin flap in the cornea of your eye. The cornea is not removed; it's only lifted up so the doctor can have access to your cornea underneath to correct your vision with a laser. It would not be possible for the doctor to remove your natural lens and implant an artificial lens while there's a flap in your cornea. If you had LASIK first, then the cataract could be removed through a much smaller incision, so you would most likely not even need stitches after that surgery. But if you had cataract surgery first, then your natural lens would be removed, an artificial lens would be implanted, and you would probably end up needing stitches after that. So you need to choose which procedure you want to do first and then get that one done before having the other procedure.

How do TED talk speakers speak with such flow without any error?
If no media outlets covered the demonstrations outside of the Supreme Court justices' homes and elsewhere, would the demonstrators still show up?

The demonstrators would likely still show up if media outlets did not cover the demonstrations outside of the Supreme Court justices' homes and elsewhere. The demonstrators may believe that their message is more important than media coverage, or they may believe that media coverage will help to spread their message and garner support for their cause.

Did the Modi government really ignore the economy in the later part of their first term?

The Modi government did not completely ignore the economy in the later part of their first term, but their economic policies were not as effective as they could have been. In particular, the government failed to adequately address the problem of non-performing loans in the banking sector, which contributed to the slowdown in economic growth.

In the case of the decline of the Indus Valley civilisation, why did cities deteriorate while villages remained intact or unchanged?

Cities deteriorated because they were more complex, required more resources, and were less adaptable than villages.

Has anyone any views on Photocrati WordPress themes for a portfolio website? Are they industry recognised or flexible?

http://net.tutsplus.com/articles/web-roundups/photocrati-5-our-choice-of-the-best-wordpress-portfolio-themes/#more-123230

As a child I gave my mother a contusion accidentally and I couldn't stop laughing every time I saw her for the next week. I intentionally broke a hairbrush over her wrist when she didn't answer my question right away. Was something wrong with me?

There's no certain answer, but it's possible that something was wrong with you. It's possible that you had a difficult home life and acted out in order to get attention. Alternatively, you may have been acting out because you were bored or didn't have enough outlets for your energy. If you found yourself breaking things or hurting people on a regular basis, it's a good idea to talk to a therapist or counselor who can help you explore your motivations and find more constructive ways to cope with your emotions.

My grandmother has a fracture in her vertebral column according to our doctor. What suggestions can you give regarding this?

There is no one-size-fits-all answer to this question, as the best course of treatment for a fracture in the vertebral column will vary depending on the severity of the injury and the individual's overall health. However, some possible treatments for a fracture in the vertebral column may include rest, ice, heat, pain medication, physical therapy, and surgery.

What's your opinion on the Police Board President, Ghian Foreman, being assaulted by cops?

The fact that Ghian Foreman was assaulted by police officers is disturbing, and raises serious questions about the culture of the police department. It is absolutely essential that an independent investigation be conducted to determine exactly what happened, and to ensure that those responsible are held accountable.

Is it possible to open your own business as a quantity surveyor? What kind of business could it be?

Yes, it is possible to open your own business as a quantity surveyor. One option is to provide consulting services to individuals or businesses who are planning to undertake construction projects. Another option is to open a construction cost estimating firm, which would provide cost estimates for potential construction projects.

Is my opinion worth more than yours?
That is impossible to answer. Opinions are not inherently worth more or less than others, but some people may be more persuasive or have more expertise than others.

The sun subtends an angle of half a degree at the pole of a concave mirror which has a radius of curvature of 15 m. What is the size (diameter) of the image of the sun formed by the concave mirror?

The image of a distant object forms in the focal plane, at f = R/2 = 7.5 m, so its diameter is d = f × θ = 7.5 m × (0.5 × π/180 rad) ≈ 0.065 m, i.e. about 6.5 cm.

Why is "we have so much in common" so important?

"We have so much in common" is an important phrase because it is a way of saying that two people have a lot of similarities. This can be a good thing or a bad thing, depending on the context. If two people have a lot in common, it can mean that they have a lot of things to talk about and share with each other. On the other hand, if two people have too much in common, it can mean that they are too similar and might not be able to relate to each other very well.

What monument would you design that would stand the test of time?

This is a difficult question. There are many factors to consider when designing a monument, such as the purpose of the monument, the location, the materials used, etc. It is hard to say definitively which monument would stand the test of time, as there are so many variables at play. However, some possible examples of monuments that might stand the test of time include the Pyramids of Giza (which have been standing for over 4,500 years), the Parthenon in Athens (built in 447 BCE), and Stonehenge in England (built between 3100 and 2200 BCE).

How do I find the right industry for me?

If you're looking for the right industry for you, a good place to start is with your passions and interests. What are you passionate about? What are your interests? Once you know what you're passionate about and what your interests are, research different industries and decide which one is the best fit for you. For example, if you're passionate about fashion and interested in design, a career in fashion design may be the right industry for you.

What ideas really matter?

The ideas that really matter are the ones that make a difference in people's lives. The ideas that really matter are the ones that help us understand the world and make it a better place.

Why aren't most heart patients given the option of temporary artificial hearts when death is the only alternative?

Temporary artificial hearts are not used more often because they are not very effective. They do not last very long, and they do not work as well as a real heart.

What is the family history or origin of the 'Sheet' surname in West Bengal, India? They belong to the district of Bankura, but are their roots in West Bengal or somewhere else?

There is no definitive answer to this question, as the origins of the Sheet surname in West Bengal, India, are not well-documented. However, it is possible that the surname is of Bengali origin, given that the majority of the population in Bengal are Bengali Hindus. It is also possible that the surname is of Sikh or Muslim origin, as there are minority populations of both Sikhs and Muslims in Bengal.

How can you tie your shoes with the strength of a double knot without the pain of undoing a double knot?

Make a loop with one side of the laces, then make a loop with the other side of the laces. Put the first loop over the second loop and pull tight.

How is Vietnam compared to Thailand?

The two countries are very different in terms of culture, history, and geography. Thailand is a constitutional monarchy with a population of over 69 million people.
It is located in Southeast Asia and has a tropical climate. The country is known for its beaches, temples, and friendly people. Vietnam is a communist state with a population of over 90 million people. It is located in Southeast Asia and has a tropical climate. The country is known for its mountains, rivers, and cuisine.

Why is it important that the client and the counselor know and understand the rights, responsibilities and accountabilities of counselors?

The importance of the client and counselor understanding the rights, responsibilities, and accountabilities of counselors cannot be overstated. A strong counseling relationship is built on trust and mutual respect. If the counselor does not adhere to the basic principles of confidentiality, for example, the client may feel that their privacy is not being respected and may be reluctant to open up in future sessions. Conversely, if the client does not understand that the counselor is bound by ethical and legal constraints and cannot give advice or tell them what to do, they may feel that the counselor is not helpful or knowledgeable. In order to build a strong counseling relationship, it is essential that both parties understand and respect the rights, responsibilities, and accountabilities of counselors.

Can any fast-rotating object in outer space produce a magnetic field?

Rotation by itself is not enough: the object must also carry electric charge or have a conducting, convecting interior acting as a dynamo, as in planets and neutron stars. For such objects, the faster the rotation, the stronger the field tends to be, other things being equal.

Do we "move" from one house to another OR do we "shift" from one house to another?

You say "move" when you are changing your residence permanently. You say "shift" when you are changing your residence temporarily.

Why is human consciousness so difficult to grasp formally despite being pervasive subjectively?

There is no consensus on what consciousness even is, let alone how to measure or define it.

What video, production, or filming suggestions would you have for someone in fitness starting out on YouTube?

Here are a few suggestions for fitness YouTubers starting out:

1. Keep your videos short and sweet - people have short attention spans!
2. Be clear and concise in your instructions.
3. Engage with your audience by asking them to comment and giving them shoutouts.
4. Make sure your filming setup looks professional - this means investing in some good lighting and a decent camera.
5. Film a variety of content, from workout videos to vlogs to Q&As, to keep your channel fresh and interesting.

Does the NHS carry out mastectomies on healthy patients?

No, the NHS does not carry out mastectomies on healthy patients.

How have you used your consciousness today?

I have used my consciousness to interact with the world around me. I have used it to think about my day, to plan my next steps, and to communicate with others.

Will Donald Trump refuse to debate Hillary Clinton because he might not approve of the debates' moderators?

It is possible that Donald Trump will refuse to debate Hillary Clinton because he might not approve of the moderators.

Will coronavirus affect my child's India expedition for the World Challenge July 2020?

Due to coronavirus, the World Challenge expedition to India in July 2020 has been postponed.

How do students in Pakistan earn money?

There are a few ways that students in Pakistan can earn money. They can find part-time jobs in businesses and organizations, or they can start their own businesses. Additionally, some students may choose to do freelance work.
Is there a database of people sitting on the boards of directors of public and private companies?

Though there is not a definitive database of board members for all public and private companies, there are a few notable ones that provide this information. Hoovers, Bloomberg, and Reuters are all well-known business research firms that maintain databases of corporate boards and their members.

Why is the sum of norms over elements not equal to the norm over the whole $\omega$ (numerical methods, Sobolev spaces, finite element method, interpolation theory, math)?

The interpolant $P(x)$ for a function $f(x)$ over a grid $x_i$ is $P(x)=\sum_{i=1}^{n} f(x_i) l_i(x)$, where the $l_i$ are Lagrange polynomials. Taking norms of both sides only gives the triangle-inequality bound $\| P \| \leq \sum_{i=1}^{n} |f(x_i)| \, \| l_i \|$, and in general this bound is strict; that is why the sum of norms over elements need not equal the norm over the whole $\omega$.

How does a fly turn upside down to land on a ceiling?

As a fly approaches a ceiling, it flies slightly upward, extends its front legs over its head until they touch the surface, and then uses that grip to swing the rest of its body up in a half-somersault, so that all six legs end up attached and the fly hangs upside down.

Can a quia timet injunction be granted to a party for a patent whose validity is under challenge before various bodies?

The Supreme Court has ruled that a quia timet injunction can be granted to a party even when the patent's validity is under challenge before various bodies.

What is a suggested soil college for an MBA?

There is no single college that is suggested for MBA students. Many MBA programs are offered at colleges and universities across the country.

How strong are Mackenzie Dern's striking/stand-up skills?

Mackenzie Dern's stand-up skills are very strong. She has excellent technique and is very explosive with her strikes.

What do you think of Trump recently speaking to American Jews and referring to Israel's Benjamin Netanyahu as 'your Prime Minister'?

There is no one-size-fits-all answer to this question, as people's opinions on the matter will vary greatly. However, some people may view Trump's comments as being disrespectful to the office of the Prime Minister of Israel, while others may see them as a sign of support for the state of Israel.

My husband is very jealous and controlling but often insists he is not, and makes small excuses instead. Why won't he talk about it with honesty?

There could be a few reasons for this. It could be that he is unaware of how his behavior is coming across, and so doesn't realize that he appears to be jealous and controlling. Alternatively, he may be aware of how he comes across but is unwilling to acknowledge it because doing so would require him to take responsibility for his behavior and make changes. Finally, it could be that he genuinely believes he is not jealous or controlling, even though others may see his behavior differently. If your husband is unwilling to talk about the issue with honesty, it may be helpful to seek couples counseling or therapy to help address the underlying issues.

Is it acceptable to ask questions in the "more details" field?

No, questions should not be asked in the "more details" field. If you have a question, please ask it in the discussion forum.

Are there any alternatives to the right hand rule for magnetism?
According to Lenz's law, the direction of the induced current is always such that it will oppose the change in magnetic flux.

How can you burn incense sticks so you don't get that burnt smell or create toxic VOCs?

You can use an incense burner that has a small dish on top to catch the ashes, or you can put the incense stick in a bowl of sand.

As a doctor or nurse, what is the most cringeworthy thing you've ever seen another doctor do during surgery?

There are a few things that come to mind. Once, I saw a doctor reach into a patient's chest cavity with their bare hand to adjust a heart valve. It was so unsanitary and unnecessary - the patient ended up getting a serious infection as a result. Another time, I saw a doctor operate on a patient while they were awake and conscious. The poor patient was screaming in pain the entire time. It was one of the most horrific things I've ever seen.

During 2016, nearly 67 million Americans traveled internationally. Why then do we keep hearing that Americans don't like to travel out of the country?

I was once one of them. My travel trigger moment came when, believe it or not, I won the Miss Florida...

Are Black Hawk helicopters ever armed with forward machine guns and missiles?

Some Black Hawk helicopters are equipped with forward machine guns and missiles, but not all.

How do police feel about taking their squad cars home for personal use?

While some police departments allow their officers to take their squad cars home for personal use, others do not. There are pros and cons to both approaches. Supporters of taking squad cars home argue that it allows officers to be more responsive to emergencies and to build better relationships with the community. Critics argue that it is a misuse of public resources and that it can lead to corruption. Ultimately, it is up to each police department to decide whether or not to allow their officers to take their squad cars home for personal use.

Are Bollywood heroines less important than the male counterparts in Bollywood movies nowadays?

There is no definitive answer to this question since it can vary from movie to movie. In general, however, it is fair to say that Bollywood heroines are often less important than the male counterparts in Bollywood movies. This is likely due to the fact that Bollywood movies tend to focus more on the male lead character, while the heroine often plays a more supporting role.

Have you ever seen something so funny while you were driving that you had to stop your car because you had a fit of laughter?

Yes, I have seen something so funny while driving that I had to stop my car.

Why is every Star Wars scene a meme?

Some people might say that it's because the movies are so popular, but we think it's because they're just so darn meme-able. No matter what scene you're looking at, there's always something that you can find to make a meme out of. Whether it's a character's facial expression, a piece of dialogue, or even just the way a scene is set up, there's always something that you can use to create a funny meme.

What percentage of the total music sales is rap music?

It is difficult to estimate the percentage of total music sales that is rap music because not all music sales are tracked. However, according to Nielsen SoundScan, which tracks sales of music in the United States, hip-hop/rap was the second most popular genre in 2017, behind only rock music. This means that rap music likely accounts for a significant portion of total music sales.

Are accidents always bad?
No, accidents are not always bad.

Why does everyone use WordPress?

There are a few reasons why WordPress is so popular:

1. It's free and open source.
2. It's easy to use.
3. It's highly customizable.
4. It has a large community of users and developers.

When you're the big spoon, where do you put the lower arm that's not over the top of your SO?

The arm can go underneath the other person, or can be tucked in close to the body.

Why don't Democrats want to know what's on Hunter Biden's computer when they say they want transparency from all politicians including family members?

The Democratic Party has not taken an official stance on the issue of Hunter Biden's computer. Some individual members of the party have said that they do not think the contents of the computer are relevant to Hunter Biden's work as a politician, while others have said that they believe the computer should be investigated in order to ensure transparency.

If a person at a party chases me and insists on dancing with me and throws himself on me even though I already told him 5 times that I don't want to, is he harassing me?

Yes, he is harassing you.

What is the physical meaning of the term "non-local interaction" (density functional theory, Hartree-Fock, electron correlation, exchange correlation, matter modeling)?

Non-local interactions are those in which the interaction between particles is not limited to those particles that are nearest to each other in space.

Is there a falcon that will let you touch it?

The vast majority of falconers do not allow anyone to touch their birds.

Legally, can you make a show, podcast, or any form of entertainment that explicitly explains how to do a murder without getting caught as the main premise of it?

Yes, there are laws regulating what can and cannot be said on a podcast or other form of entertainment, but there are no specific laws prohibiting explaining how to commit murder without getting caught. However, if the content of the show was determined to be encouraging or assisting in criminal activity, it could be subject to civil or criminal liability.

How do I check for positive and negative in MATLAB using a while loop?

You could use an if statement within your while loop to check for positive and negative values:

while condition
    % Statements that update value
    if value > 0
        % Positive value
        disp('value is positive')
    elseif value < 0
        % Negative value
        disp('value is negative')
    end
end

(Both the if block and the while loop must be closed with end; condition and value stand for whatever loop test and variable your program uses.)

How is skiing on artificial snow different than natural snow?

Artificial snow is generally made from processed, compressed snow that is produced by machines. This type of snow is often used in ski resorts as it is more consistent and can be made in advance. Natural snow is created from frozen water droplets that fall from the sky and accumulate on the ground. Skiing on artificial snow can be more difficult as it is often harder and smoother than natural snow.

Has anyone done an M.Tech (CSE) from IIT after 3 years in the IT sector? How did it affect your career? Was the experience of 3 years mostly useless?

I cannot speak for everyone, but my experience was that the 3 years I spent in the IT sector were mostly useless in terms of my M.Tech. The curriculum was entirely different and I found myself struggling to keep up. However, I will say that the experience was still valuable in terms of learning how to work in a professional environment and developing soft skills.

How does freelancing during an undergraduate help getting a good MBA admission?
There is no one-size-fits-all answer to this question, as the value of freelancing during an undergraduate degree depends on the individual student's goals and experiences. However, freelancing can help students gain real-world experience and learn important skills such as time management and communication. Additionally, students who freelance during their undergraduate degrees may be able to network with potential employers and mentors, which could lead to better opportunities after graduation.

Is it okay for me to feel excited and cry with happy tears by seeing the clips that Russian soldiers and tanks have been whittled down by implanted mines? They are invaders though.

No, it is not okay for you to feel excited and cry with happy tears by seeing the clips that Russian soldiers and tanks have been whittled down by implanted mines.

Where can I buy local second-hand Japanese t-shirts?

There is no definitive answer, as Japanese t-shirts may be available second-hand from a variety of sources, including thrift stores, garage sales, and online auction sites. However, some good places to start looking include local Japanese consignment shops and online retailers that specialize in Japanese fashion.

Which WordPress plugin would allow star ratings to appear for my blog posts on Google Search?

There is no specific WordPress plugin that makes star ratings appear for blog posts on Google Search. However, there are some plugins, such as the WP-PostRatings plugin, that allow you to add star ratings to your blog posts.

For restaurant delivery service businesses, is revenue considered to be the value of orders generated or the commission derived from those orders?

Revenue from a restaurant delivery service can come from orders generated, commission derived from those orders, or both.

I constructed a great prototype for a new outdoor hat about 30 years ago. Every year since, I thought some company such as REI, Marmot, etc. would offer this type of hat. They still haven't produced it. How can I find someone to help me market this to them?

You could reach out to companies like REI, Marmot, and others directly to see if they would be interested in your product. You could also try pitching your product to outdoor gear retailers or wholesalers. Finally, you could try reaching out to outdoor gear magazines or websites to see if they would be interested in writing about your product.

What is your reaction to Fox's Brian Kilmeade who said (during the discussion of Mueller's report vs. Barr's summary), "I never read the Odyssey. I only read the Monarch notes. I still got a very good, uh, I still got a very good grade"?

I think that Brian Kilmeade's comments are incredibly ignorant and disrespectful. The Odyssey is one of the most important and influential works of literature in history, and to say that one can get a good grade by only reading the Monarch notes is ridiculous. It shows a complete lack of understanding of the text and its importance.

How have Anonymous' activities impacted Russia? Glad as I am for their help for Ukraine and their claims of success in hacking Russian targets, I don't understand the actual concrete impact there has been in downloading Russian emails and files.

There is no one answer to this question, as the activities of Anonymous have impacted Russia in various ways. On the one hand, their actions may have helped to expose wrongdoing or corruption on the part of the Russian government or other institutions.
On the other hand, their activities may have caused disruptions in service or security for Russian citizens.

Whenever I try to help someone on the streets (fights, money, old lady, etc.), my husband intervenes and tells me not to. He doesn't tell me why. Any idea why this could be?

There could be a variety of reasons why your husband tells you not to help people on the street. Maybe he's worried about your safety, or he doesn't want you to get involved in something that could be dangerous. Maybe he's concerned that you might not be able to help the person in the way that they need and that you would just end up feeling frustrated or helpless. Whatever the reason, it's important to talk to your husband about why he feels this way and see if there is a compromise that you can both agree on.

Should sports reporters and journalists call a professional sports team owner Mr. or Mrs., or by their first name? Why?

In general, it is considered more respectful to refer to someone as "Mr." or "Mrs." rather than by their first name. However, there may be some circumstances in which it is appropriate to call a professional sports team owner by their first name. For example, if the owner is a close personal friend of the reporter or journalist, then it may be more appropriate to use their first name.

How significant is Huawei in the US-China trade war?

Huawei is a significant company in the US-China trade war because it is one of the largest telecom equipment manufacturers in the world. The US has accused Huawei of spying for the Chinese government, and has placed restrictions on the company's products.

Do you like the new NFL rule about challenging pass interference?

I haven't really given it much thought, but I suppose it could be useful in some situations.

I have PDR and a bleed from which I'm recovering. Why, when I go out into the sun for a while and then return inside, does the fog seem to get worse, like the sunlight is preserved in my retinas?

This is a condition called "eclipse blindness" and is caused by staring at the sun during a solar eclipse. It is a type of retina damage that can be temporary or permanent.
Journal of Algebraic Geometry (Online ISSN 1534-7486; Print ISSN 1056-3911)

On the identifiability of binary Segre products
Authors: Cristiano Bocci and Luca Chiantini
Journal: J. Algebraic Geom. 22 (2013), 1-11
DOI: https://doi.org/10.1090/S1056-3911-2011-00592-4
Published electronically: November 22, 2011
Abstract: We prove that a product of $m>5$ copies of $\mathbb {P}^1$, embedded in the projective space $\mathbb {P}^r$ by the standard Segre embedding, is $k$-identifiable (i.e. a general point of the secant variety $S^k(X)$ is contained in only one $(k+1)$-secant $k$-space), for all $k$ such that $k+1\leq 2^{m-1}/m$.
Cristiano Bocci
Affiliation: Università degli Studi di Siena, Dipartimento di Scienze Matematiche e Informatiche, Pian dei Mantellini, 44, I-53100 Siena, Italy
Email: [email protected]
Luca Chiantini
MR Author ID: 194958
Email: [email protected]
Received by editor(s): December 30, 2009
Received by editor(s) in revised form: February 21, 2011, and March 9, 2011
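As an editorial aside, the identifiability range stated in the abstract is easy to tabulate. The short script below is not part of the article; it simply prints, for each small $m>5$, the largest $k$ satisfying $k+1 \leq 2^{m-1}/m$.

```python
import math

# Largest k with k + 1 <= 2^(m-1)/m, the k-identifiability range in the abstract.
for m in range(6, 13):  # the theorem assumes m > 5
    bound = 2 ** (m - 1) / m
    k_max = math.floor(bound - 1)
    print(f"m = {m:2d}: k-identifiable for k <= {k_max}")
```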
CommonCrawl
A real-time model of the human knee for application in virtual orthopaedic trainer
Peters, J., Riener, R.
In Proceedings of the 10th International Conference on BioMedical Engineering (ICBME 2000), 10, pages: 1-2, 10th International Conference on BioMedical Engineering (ICBME), December 2000 (inproceedings)
In this paper a real-time capable computational model of the human knee is presented. The model describes the passive elastic joint characteristics in six degrees-of-freedom (DOF). A black-box approach was chosen, where experimental data were approximated by piecewise polynomial functions. The knee model has been applied in the Virtual Orthopaedic Trainer, which can support training of physical knee evaluation required for diagnosis and surgical planning.
ei

A Simple Iterative Approach to Parameter Optimization
Zien, A., Zimmer, R., Lengauer, T.
Journal of Computational Biology, 7(3,4):483-501, November 2000 (article)
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a scoring function combines the values for different parameters of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, a partial ordering on optimal alignments to other structures, e.g., derived from structural comparisons, may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a heuristic approach: iterating the computation of solutions (here, threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via systematic calibration methods. For our application (i.e., threading), this iterative approach results in structurally meaningful weights that significantly improve performance on both the training and the test data sets. In addition, the optimized parameters show significant improvements on the recognition rate for a grossly enlarged comprehensive benchmark, a modified recognition protocol as well as modified alignment types (local instead of global and profiles instead of single sequences). These results show the general validity of the optimized weights for the given threading program and the associated scoring contributions.
ei

Identification of Drug Target Proteins
Zien, A., Küffner, R., Mevissen, T., Zimmer, R., Lengauer, T.
ERCIM News, 43, pages: 16-17, October 2000 (article)
ei

On Designing an Automated Malaysian Stemmer for the Malay Language
Tai, SY., Ong, CS., Abullah, NA.
In Fifth International Workshop on Information Retrieval with Asian Languages, pages: 207-208, ACM Press, New York, NY, USA, Fifth International Workshop on Information Retrieval with Asian Languages, October 2000 (inproceedings)
Online and interactive information retrieval systems are likely to play an increasing role in the Malay Language community. To facilitate and automate the process of matching morphological term variants, a stemmer focusing on common affix removal algorithms is proposed as part of the design of an information retrieval system for the Malay Language. Stemming is a morphological process of normalizing word tokens down to their essential roots. The proposed stemmer strips prefixes and suffixes off the word. The experiment conducted with web sites selected from the World Wide Web has exhibited substantial improvements in the number of words indexed.
ei

Ensemble of Specialized Networks based on Input Space Partition
Shin, H., Lee, H., Cho, S.
In Proc. of the Korean Operations Research and Management Science Conference, pages: 33-36, Korean Operations Research and Management Science Conference, October 2000 (inproceedings)
ei

DES Approach Failure Recovery of Pump-valve System
Son, HI., Kim, KW., Lee, S.
In Korean Society of Precision Engineering (KSPE) Conference, pages: 647-650, Annual Meeting of the Korean Society of Precision Engineering (KSPE), October 2000 (inproceedings)
ei

Ensemble Learning Algorithm of Specialized Networks
Shin, H., Lee, H., Cho, S.
In Proc. of the Korea Information Science Conference, pages: 308-310, Korea Information Science Conference, October 2000 (inproceedings)
ei

DES Approach Failure Diagnosis of Pump-valve System
Son, HI., Kim, KW., Lee, S.
In Korean Society of Precision Engineering (KSPE) Conference, pages: 643-646, Annual Meeting of the Korean Society of Precision Engineering (KSPE), October 2000 (inproceedings)
As many industrial systems become more complex, it becomes extremely difficult to diagnose the cause of failures. This paper presents a failure diagnosis approach based on discrete event system theory. In particular, the approach is a hybrid of event-based and state-based ones leading to a simpler failure diagnoser with supervisory control capability. The design procedure is presented along with a pump-valve system as an example.
ei
Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites
Zien, A., Rätsch, G., Mika, S., Schölkopf, B., Lengauer, T., Müller, K.
Bioinformatics, 16(9):799-807, September 2000 (article)
Motivation: In order to extract protein sequences from nucleotide sequences, it is an important step to recognize points at which regions start that code for proteins. These points are called translation initiation sites (TIS). Results: The task of finding TIS can be modeled as a classification problem. We demonstrate the applicability of support vector machines for this task, and show how to incorporate prior biological knowledge by engineering an appropriate kernel function. With the described techniques the recognition performance can be improved by 26% over leading existing approaches. We provide evidence that existing related methods (e.g. ESTScan) could profit from advanced TIS recognition.
ei

Analysis of Gene Expression Data with Pathway Scores
Zien, A., Küffner, R., Zimmer, R., Lengauer, T.
In ISMB 2000, pages: 407-417, AAAI Press, Menlo Park, CA, USA, 8th International Conference on Intelligent Systems for Molecular Biology, August 2000 (inproceedings)
We present a new approach for the evaluation of gene expression data. The basic idea is to generate biologically possible pathways and to score them with respect to gene expression measurements. We suggest sample scoring functions for different problem specifications. The significance of the scores for the investigated pathways is assessed by comparison to a number of scores for random pathways. We show that simple scoring functions can assign statistically significant scores to biologically relevant pathways. This suggests that the combination of appropriate scoring functions with the systematic generation of pathways can be used in order to select the most interesting pathways based on gene expression measurements.
ei

A Meanfield Approach to the Thermodynamics of a Protein-Solvent System with Application to the Oligomerization of the Tumour Suppressor p53
Noolandi, J., Davison, TS., Vokel, A., Nie, F., Kay, C., Arrowsmith, C.
Proceedings of the National Academy of Sciences of the United States of America, 97(18):9955-9960, August 2000 (article)
ei

Observational Learning with Modular Networks
Shin, H., Lee, H., Cho, S.
In Lecture Notes in Computer Science (LNCS 1983), LNCS 1983, pages: 126-132, Springer-Verlag, Heidelberg, International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), July 2000 (inproceedings)
Observational learning algorithm is an ensemble algorithm where each network is initially trained with a bootstrapped data set and virtual data are generated from the ensemble for training. Here we propose a modular OLA approach where the original training set is partitioned into clusters and then each network is instead trained with one of the clusters. Networks are combined with different weighting factors now that are inversely proportional to the distance from the input vector to the cluster centers. Comparison with bagging and boosting shows that the proposed approach reduces generalization error with a smaller number of networks employed.
ei

Probabilistic detection and tracking of motion boundaries
Black, M. J., Fleet, D. J.
Int. J. of Computer Vision, 38(3):231-245, July 2000 (article)
We propose a Bayesian framework for representing and recognizing local image motion in terms of two basic models: translational motion and motion boundaries. Motion boundaries are represented using a non-linear generative model that explicitly encodes the orientation of the boundary, the velocities on either side, the motion of the occluding edge over time, and the appearance/disappearance of pixels at the boundary. We represent the posterior probability distribution over the model parameters given the image data using discrete samples. This distribution is propagated over time using a particle filtering algorithm. To efficiently represent such a high-dimensional space we initialize samples using the responses of a low-level motion discontinuity detector. The formulation and computational model provide a general probabilistic framework for motion estimation with multiple, non-linear, models.
ps

The Infinite Gaussian Mixture Model
Rasmussen, CE.
In Advances in Neural Information Processing Systems 12, pages: 554-560, (Editors: Solla, S.A., T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the "right" number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling.
ei
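As an editorial illustration of the idea in the preceding abstract (not code from the paper): scikit-learn's variational Dirichlet-process mixture is a convenient stand-in for the Gibbs sampler used there. With a generous cap on the number of components, the posterior pushes the weights of unneeded components toward zero, so the "right" number is inferred rather than fixed. The data and parameter values below are invented for the demo.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy data: three well-separated 1D clusters.
X = np.concatenate([rng.normal(-5, 1, 200),
                    rng.normal(0, 1, 200),
                    rng.normal(6, 1, 200)]).reshape(-1, 1)

# Truncated Dirichlet-process mixture: cap at 10 components and let
# the posterior decide how many carry non-negligible weight.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

print(np.round(dpgmm.weights_, 3))  # most weights collapse toward zero
```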
Generalization Abilities of Ensemble Learning Algorithms
Shin, H., Jang, M., Cho, S.
In Proc. of the Korean Brain Society Conference, pages: 129-133, Korean Brain Society Conference, June 2000 (inproceedings)
ei

Support vector method for novelty detection
Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.
In Advances in Neural Information Processing Systems 12, pages: 582-588, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. We provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
ei
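The ν-style novelty detector summarized in the abstract above is implemented in scikit-learn as OneClassSVM. The following toy sketch is an editorial addition with invented data; it only illustrates the role of ν as the targeted outlier fraction.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_train = rng.normal(0, 1, size=(500, 2))               # "normal" data
X_test = np.vstack([rng.normal(0, 1, size=(50, 2)),
                    rng.uniform(-6, 6, size=(50, 2))])  # mixed with outliers

# nu upper-bounds the fraction of training points treated as outliers
# (and lower-bounds the fraction of support vectors).
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)

pred = clf.predict(X_test)   # +1 inside the estimated region S, -1 outside
print((pred == -1).mean())   # observed outlier fraction on the test set
```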
v-Arc: Ensemble Learning in the Presence of Outliers
Rätsch, G., Schölkopf, B., Smola, A., Müller, K., Onoda, T., Mika, S.
In Advances in Neural Information Processing Systems 12, pages: 561-567, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
AdaBoost and other ensemble methods have successfully been applied to a number of classification tasks, seemingly defying problems of overfitting. AdaBoost performs gradient descent in an error function with respect to the margin, asymptotically concentrating on the patterns which are hardest to learn. For very noisy problems, however, this can be disadvantageous. Indeed, theoretical analysis has shown that the margin distribution, as opposed to just the minimal margin, plays a crucial role in understanding this phenomenon. Loosely speaking, some outliers should be tolerated if this has the benefit of substantially increasing the margin on the remaining points. We propose a new boosting algorithm which allows for the possibility of a pre-specified fraction of points to lie in the margin area or even on the wrong side of the decision boundary.
ei

Invariant feature extraction and classification in kernel spaces
Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., Müller, K.
In Advances in neural information processing systems 12, pages: 526-532, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
ei

Transductive Inference for Estimating Values of Functions
Chapelle, O., Vapnik, V., Weston, J.
In Advances in Neural Information Processing Systems 12, pages: 421-427, (Editors: Solla, S.A., T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
We introduce an algorithm for estimating the values of a function at a set of test points $x_1^*,\dots,x^*_m$ given a set of training points $(x_1,y_1),\dots,(x_\ell,y_\ell)$ without estimating (as an intermediate step) the regression function. We demonstrate that this direct (transductive) way for estimating values of the regression (or classification in pattern recognition) is more accurate than the traditional one based on two steps, first estimating the function and then calculating the values of this function at the points of interest.
ei

The entropy regularization information criterion
Smola, A., Shawe-Taylor, J., Schölkopf, B., Williamson, R.
In Advances in Neural Information Processing Systems 12, pages: 342-348, (Editors: SA Solla and TK Leen and K-R Müller), MIT Press, Cambridge, MA, USA, 13th Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general linear additive models. This is achieved by a data dependent analysis of the eigenvalues of the corresponding design matrix.
ei

Model Selection for Support Vector Machines
Chapelle, O., Vapnik, V.
In Advances in Neural Information Processing Systems 12, pages: 230-236, (Editors: Solla, S.A., T.K. Leen, K-R Müller), MIT Press, Cambridge, MA, USA, Thirteenth Annual Neural Information Processing Systems Conference (NIPS), June 2000 (inproceedings)
New functionals for parameter (model) selection of Support Vector Machines are introduced based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that using these functionals, one can both predict the best choice of parameters of the model and the relative quality of performance for any value of parameter.
ei
Stochastic tracking of 3D human figures using 2D image motion (Winner of the 2010 Koenderink Prize for Fundamental Contributions in Computer Vision)
Sidenbladh, H., Black, M. J., Fleet, D.
In European Conference on Computer Vision, ECCV, pages: 702-718, LNCS 1843, Springer Verlag, Dublin, Ireland, June 2000 (inproceedings)
A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image gray level differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.
ps

New Support Vector Algorithms
Schölkopf, B., Smola, A., Williamson, R., Bartlett, P.
Neural Computation, 12(5):1207-1245, May 2000 (article)
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
ei

Functional analysis of human motion data
Ormoneit, D., Hastie, T., Black, M. J.
In Proc. 5th World Congress of the Bernoulli Society for Probability and Mathematical Statistics and 63rd Annual Meeting of the Institute of Mathematical Statistics, Guanajuato, Mexico, May 2000 (inproceedings)
ps

Generalization Abilities of Ensemble Learning Algorithms: OLA, Bagging, Boosting
Shin, H., Jang, M., Cho, S., Lee, B., Lim, Y.
In Proc. of the Korea Information Science Conference, pages: 226-228, Conference on Korean Information Science, April 2000 (inproceedings)
ei

A simple iterative approach to parameter optimization
Zien, A., Zimmer, R., Lengauer, T.
In RECOMB2000, pages: 318-327, ACM Press, New York, NY, USA, Fourth Annual Conference on Research in Computational Molecular Biology, April 2000 (inproceedings)
Various bioinformatics problems require optimizing several different properties simultaneously. For example, in the protein threading problem, a linear scoring function combines the values for different properties of possible sequence-to-structure alignments into a single score to allow for unambiguous optimization. In this context, an essential question is how each property should be weighted. As the native structures are known for some sequences, the implied partial ordering on optimal alignments may be used to adjust the weights. To resolve the arising interdependence of weights and computed solutions, we propose a novel approach: iterating the computation of solutions (here: threading alignments) given the weights and the estimation of optimal weights of the scoring function given these solutions via a systematic calibration method. We show that this procedure converges to structurally meaningful weights, that also lead to significantly improved performance on comprehensive test data sets as measured in different ways. The latter indicates that the performance of threading can be improved in general.
ei

Stochastic modeling and tracking of human motion
Ormoneit, D., Sidenbladh, H., Black, M. J., Hastie, T.
Learning 2000, Snowbird, UT, April 2000 (conference)
ps

A framework for modeling the appearance of 3D articulated figures
Sidenbladh, H., De la Torre, F., Black, M. J.
In Int. Conf. on Automatic Face and Gesture Recognition, pages: 368-375, Grenoble, France, March 2000 (inproceedings)
ps

Bounds on Error Expectation for Support Vector Machines
Vapnik, V., Chapelle, O.
Neural Computation, 12(9):2013-2036, 2000 (article)
We introduce the concept of span of support vectors (SV) and show that the generalization ability of support vector machines (SVM) depends on this new geometrical concept. We prove that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, used in previous bounds. We also demonstrate experimentally that the prediction of the test error given by the span is very accurate and has direct application in model selection (choice of the optimal parameters of the SVM).
ei
Bayesian modelling of fMRI time series
Højen-Sørensen, PADFR., Rasmussen, CE., Hansen, LK.
In pages: 754-760, (Editors: Sara A. Solla, Todd K. Leen and Klaus-Robert Müller), 2000 (inproceedings)
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments.
ei

Choosing nu in support vector regression with different noise models — theory and experiments
Chalimourda, A., Schölkopf, B., Smola, A.
In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, IJCNN 2000, Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, International Joint Conference on Neural Networks, 2000 (inproceedings)
ei

A High Resolution and Accurate Pentium Based Timer
Ong, CS., Wong, F., Lai, WK.
In 2000 (inproceedings)
ei

Robust Ensemble Learning for Data Mining
Rätsch, G., Schölkopf, B., Smola, A., Mika, S., Onoda, T., Müller, K.
In Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 1805, pages: 341-341, Lecture Notes in Artificial Intelligence, (Editors: H. Terano), Fourth Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000 (inproceedings)
ei

Sparse greedy matrix approximation for machine learning
Smola, A., Schölkopf, B.
In 17th International Conference on Machine Learning, Stanford, 2000, pages: 911-918, (Editors: P Langley), Morgan Kaufman, San Fransisco, CA, USA, 17th International Conference on Machine Learning (ICML), 2000 (inproceedings)
ei

Enhanced Password Authentication Through Typing Biometrics with K-Means Clustering Algorithm
Ong, CS., Lai, WK.
In 2000 (inproceedings)
ei

Entropy Numbers of Linear Function Classes
Williamson, R., Smola, A., Schölkopf, B.
In 13th Annual Conference on Computational Learning Theory, pages: 309-319, (Editors: N Cesa-Bianchi and S Goldman), Morgan Kaufman, San Fransisco, CA, USA, 13th Annual Conference on Computational Learning Theory (COLT), 2000 (inproceedings)
ei

Design and use of linear models for image motion analysis
Fleet, D. J., Black, M. J., Yacoob, Y., Jepson, A. D.
Int. J. of Computer Vision, 36(3):171-193, 2000 (article)
Linear parameterized models of optical flow, particularly affine models, have become widespread in image motion analysis. The linear model coefficients are straightforward to estimate, and they provide reliable estimates of the optical flow of smooth surfaces. Here we explore the use of parameterized motion models that represent much more varied and complex motions. Our goals are threefold: to construct linear bases for complex motion phenomena; to estimate the coefficients of these linear models; and to recognize or classify image motions from the estimated coefficients. We consider two broad classes of motions: i) generic "motion features" such as motion discontinuities and moving bars; and ii) non-rigid, object-specific, motions such as the motion of human mouths. For motion features we construct a basis of steerable flow fields that approximate the motion features. For object-specific motions we construct basis flow fields from example motions using principal component analysis. In both cases, the model coefficients can be estimated directly from spatiotemporal image derivatives with a robust, multi-resolution scheme. Finally, we show how these model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.
ps

Robustly estimating changes in image appearance
Black, M. J., Fleet, D. J., Yacoob, Y.
Computer Vision and Image Understanding, 78(1):8-31, 2000 (article)
We propose a generalized model of image "appearance change" in which brightness variation over time is represented as a probabilistic mixture of different causes. We define four generative models of appearance change due to (1) object or camera motion; (2) illumination phenomena; (3) specular reflections; and (4) "iconic changes" which are specific to the objects being viewed. These iconic changes include complex occlusion events and changes in the material properties of the objects. We develop a robust statistical framework for recovering these appearance changes in image sequences. This approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion in the presence of shadows and specular reflections.
ps
Quality Prediction of Steel Products using Neural Networks
Shin, H., Jhee, W.
In Proc. of the Korean Expert System Conference, pages: 112-124, Korean Expert System Society Conference, November 1996 (inproceedings)
ei

Cardboard people: A parameterized model of articulated motion
Ju, S. X., Black, M. J., Yacoob, Y.
In 2nd Int. Conf. on Automatic Face- and Gesture-Recognition, pages: 38-44, Killington, Vermont, October 1996 (inproceedings)
We extend the work of Black and Yacoob on the tracking and recognition of human facial expressions using parameterized models of optical flow to deal with the articulated motion of human limbs. We define a "cardboard person model" in which a person's limbs are represented by a set of connected planar patches. The parameterized image motion of these patches is constrained to enforce articulated motion and is solved for directly using a robust estimation technique. The recovered motion parameters provide a rich and concise description of the activity that can be used for recognition. We propose a method for performing view-based recognition of human activities from the optical flow parameters that extends previous methods to cope with the cyclical nature of human motion. We illustrate the method with examples of tracking human legs over long image sequences.
ps

Estimating optical flow in segmented images using variable-order parametric models with local deformations
Black, M. J., Jepson, A.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):972-986, October 1996 (article)
ps

Comparison of view-based object recognition algorithms using realistic 3D models
Blanz, V., Schölkopf, B., Bülthoff, H., Burges, C., Vapnik, V., Vetter, T.
In Artificial Neural Networks: ICANN 96, LNCS, vol. 1112, pages: 251-256, Lecture Notes in Computer Science, (Editors: C von der Malsburg and W von Seelen and JC Vorbrüggen and B Sendhoff), Springer, Berlin, Germany, 6th International Conference on Artificial Neural Networks, July 1996 (inproceedings)
Two view-based object recognition algorithms are compared: (1) a heuristic algorithm based on oriented filters, and (2) a support vector learning machine trained on low-resolution images of the objects. Classification performance is assessed using a high number of images generated by a computer graphics system under precisely controlled conditions. Training- and test-images show a set of 25 realistic three-dimensional models of chairs from viewing directions spread over the upper half of the viewing sphere. The percentage of correct identification of all 25 objects is measured.
ei

Incorporating invariances in support vector learning machines
Schölkopf, B., Burges, C., Vapnik, V.
In Artificial Neural Networks: ICANN 96, LNCS vol. 1112, pages: 47-52, (Editors: C von der Malsburg and W von Seelen and JC Vorbrüggen and B Sendhoff), Springer, Berlin, Germany, 6th International Conference on Artificial Neural Networks, July 1996 (inproceedings)
Developed only recently, support vector learning machines achieve high generalization ability by minimizing a bound on the expected test error; however, so far there existed no way of adding knowledge about invariances of a classification problem at hand. We present a method of incorporating prior knowledge about transformation invariances by applying transformations to support vectors, the training examples most critical for determining the classification boundary.
ei

On the unification of line processes, outlier rejection, and robust statistics with applications in early vision
Black, M., Rangarajan, A.
International Journal of Computer Vision, 19(1):57-92, July 1996 (article)
ps

A practical Monte Carlo implementation of Bayesian learning
Rasmussen, CE.
In Advances in Neural Information Processing Systems 8, pages: 598-604, (Editors: Touretzky, D.S., M.C. Mozer, M.E. Hasselmo), MIT Press, Cambridge, MA, USA, Ninth Annual Conference on Neural Information Processing Systems (NIPS), June 1996 (inproceedings)
A practical method for Bayesian training of feed-forward neural networks using sophisticated Monte Carlo methods is presented and evaluated. In reasonably small amounts of computer time this approach outperforms other state-of-the-art methods on 5 data-limited tasks from real world domains.
ei

Gaussian Processes for Regression
Williams, CKI., Rasmussen, CE.
In Advances in neural information processing systems 8, pages: 514-520, (Editors: Touretzky, D.S., M.C. Mozer, M.E. Hasselmo), MIT Press, Cambridge, MA, USA, Ninth Annual Conference on Neural Information Processing Systems (NIPS), June 1996 (inproceedings)
The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior over functions. We investigate the use of a Gaussian process prior over functions, which permits the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.
ei
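As an editorial illustration of the preceding abstract (not the authors' code): scikit-learn's GaussianProcessRegressor carries out this kind of exact GP inference for fixed hyperparameters, with the hyperparameters fit by maximizing the log marginal likelihood, corresponding to the "optimization" route mentioned in the abstract. The data below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=40)

# RBF kernel plus a noise term; hyperparameters are tuned by maximizing
# the log marginal likelihood during fit().
gpr = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
).fit(X, y)

X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)  # exact posterior via matrix algebra
print(np.round(mean, 2), np.round(std, 2))
```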
Skin and Bones: Multi-layer, locally affine, optical flow and regularization with transparency (Nominated: Best paper)
Ju, S., Black, M. J., Jepson, A. D.
In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR'96, pages: 307-314, San Francisco, CA, June 1996 (inproceedings)
ps

EigenTracking: Robust matching and tracking of articulated objects using a view-based representation
Black, M. J., Jepson, A.
In Proc. Fourth European Conf. on Computer Vision, ECCV'96, pages: 329-342, LNCS 1064, Springer Verlag, Cambridge, England, April 1996 (inproceedings)
ps

The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields
Black, M. J., Anandan, P.
Computer Vision and Image Understanding, 63(1):75-104, January 1996 (article)
ps
CommonCrawl
The line $y = 2x + c$ is tangent to the parabola $y^2 = 8x.$ Find $c.$ Rearranging $y = 2x + c$ gives $2x = y - c.$ Substituting into $y^2 = 8x,$ we get \[y^2 = 4(y - c) = 4y - 4c,\]or $y^2 - 4y + 4c = 0.$ Since we have a tangent, this quadratic will have a double root. In other words, its discriminant will be 0. Hence, $(-4)^2 - 4(4c) = 16 - 16c = 0,$ which means $c = \boxed{1}.$
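A quick symbolic check of the computation above (an editorial addition using sympy):

```python
import sympy as sp

y, c = sp.symbols("y c")
# Substitute x = (y - c)/2 from the line into the parabola y^2 = 8x.
quadratic = sp.expand(y**2 - 8 * (y - c) / 2)   # y^2 - 4y + 4c
disc = sp.discriminant(quadratic, y)            # tangency <=> discriminant = 0
print(sp.solve(sp.Eq(disc, 0), c))              # [1]
```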
Math Dataset
F. Wiener's Trick and an Extremal Problem for $H^p$

Brevig, Ole Fredrik; Grepstad, Sigrid; Instanes, Sarah May
2022-09-12

Abstract. For $0 < p \le \infty$, let $H^p$ denote the classical Hardy space of the unit disc. We consider the extremal problem of maximizing the modulus of the $k$th Taylor coefficient of a function $f \in H^p$ which satisfies $\|f\|_{H^p} \le 1$ and $f(0) = t$ for some $0 \le t \le 1$. In particular, we provide a complete solution to this problem for $k = 1$ and $0 < p < 1$. We also study F. Wiener's trick, which plays a crucial role in various coefficient-related extremal problems for Hardy spaces.

Keywords: Hardy spaces · Extremal problems · Coefficient estimates
Mathematics Subject Classification: Primary 30H10; Secondary 42A05

Communicated by Dmitri Khavinson. Sigrid Grepstad is supported by Grant 275113 of the Research Council of Norway. Sarah May Instanes is supported by the Olav Thon Foundation through the StudForsk program. Sigrid Grepstad [email protected]; Ole Fredrik Brevig [email protected]; Sarah May Instanes [email protected]. Department of Mathematics, University of Oslo, 0851 Oslo, Norway; Department of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway. Published online: 12 September 2022.

1 Introduction

Let $H^p$ denote the classical Hardy space of analytic functions in the unit disc $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$. Suppose that $k$ is a positive integer. For $0 < p \le \infty$ and $0 \le t \le 1$, consider the extremal problem
$$\Phi_k(p,t) = \sup\Big\{\operatorname{Re} \frac{f^{(k)}(0)}{k!} \,:\, \|f\|_{H^p} \le 1 \text{ and } f(0) = t\Big\}. \tag{1}$$
By a standard normal families argument, there are extremals $f \in H^p$ attaining the supremum in (1) for every $k \ge 1$ and every $0 \le t \le 1$. A general framework for a class of extremal problems for $H^p$ which includes (1) has been developed by Havinson [8], Kabaila [9], Macintyre and Rogosinski [11] and Rogosinski and Shapiro [14]. A particular consequence of this theory is that the structure of the extremals is well-known (see Lemma 4 below). For our extremal problem, it can be deduced directly from Parseval's identity that $\Phi_1(2,t) = \sqrt{1-t^2}$ and that the unique extremal is $f(z) = t + \sqrt{1-t^2}\,z$. Similarly, the Schwarz–Pick inequality (see e.g. [15, VII.17.3]) shows that $\Phi_1(\infty,t) = 1-t^2$ and that the unique extremal is $f(z) = (t+z)/(1+tz)$. This served as the starting point for Beneteau and Korenblum [1], who studied the extremal problem (1) in the range $1 \le p \le \infty$. We will enunciate their results in Sects. 4 and 5, but for now we present a brief account of their approach.

The first step in [1] is to compute $\Phi_1(p,t)$ and identify an extremal function. This is achieved by interpolating between the two cases $p = 2$ and $p = \infty$ mentioned above, facilitated by the inner-outer factorization of $H^p$ functions. It follows from the argument that the extremal function thusly obtained is unique.

The second step in [1] is to show that $\Phi_k(p,t) = \Phi_1(p,t)$ for every $k \ge 2$ using a trick attributed to F. Wiener [2], which we shall now recall. Set $\omega_k = \exp(2\pi i/k)$ and suppose that $f(z) = \sum_{n \ge 0} a_n z^n$. F. Wiener's trick is based on the transform
$$W_k f(z) = \frac{1}{k} \sum_{j=0}^{k-1} f(\omega_k^j z) = \sum_{n=0}^{\infty} a_{kn} z^{kn}. \tag{2}$$
The triangle inequality yields that $\|W_k f\|_{H^p} \le \|f\|_{H^p}$ for $f \in H^p$ if $1 \le p \le \infty$. Hence, if $f_1$ is an extremal function for $\Phi_1(p,t)$, then $f_k(z) = f_1(z^k)$ is an extremal function for $\Phi_k(p,t)$ and consequently $\Phi_k(p,t) = \Phi_1(p,t)$. Note that this argument does not guarantee that the extremal $f_k$ is unique for $\Phi_k(p,t)$.

We are interested in the extremal problem (1) for $0 < p < 1$ and whether the extremal identified using F. Wiener's trick above for $1 \le p \le \infty$ is unique. We shall obtain the following general result, which may be of independent interest.

Theorem 1. Fix $k \ge 2$ and suppose that $0 < p \le \infty$. Let $W_k$ denote the F. Wiener transform (2). The inequality
$$\|W_k f\|_{H^p} \le \max\big(k^{1/p-1}, 1\big)\, \|f\|_{H^p}$$
is sharp. Moreover, equality is attained if and only if
(a) $f \equiv 0$ when $0 < p < 1$,
(b) $W_k f = f$ when $1 < p < \infty$.
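Editorial note: the bound in Theorem 1 is easy to test numerically for polynomials. The sketch below is not from the paper; it samples boundary values on a fine grid to approximate $H^p$ integral means and compares both sides of the inequality for a random polynomial.

```python
import numpy as np

def wiener_transform(coeffs, k):
    """Keep only the coefficients a_{kn}; this realizes W_k on polynomials."""
    out = np.zeros_like(coeffs)
    out[::k] = coeffs[::k]
    return out

def hardy_norm(coeffs, p, n_theta=2**14):
    """Approximate ||f||_{H^p} for a polynomial via its boundary values."""
    # np.fft.fft zero-pads and evaluates f at n_theta points on the unit
    # circle (in the conjugate orientation, which is irrelevant for norms).
    values = np.fft.fft(coeffs, n_theta)
    return np.mean(np.abs(values) ** p) ** (1 / p)

rng = np.random.default_rng(0)
k, p = 3, 0.5
coeffs = rng.normal(size=12)  # random polynomial of degree 11

lhs = hardy_norm(wiener_transform(coeffs, k), p)
rhs = max(k ** (1 / p - 1), 1) * hardy_norm(coeffs, p)
print(lhs <= rhs + 1e-9)  # the bound of Theorem 1 holds on this sample
```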
k H H Hence, if f is an extremal function for ( p, t ), then f (z) = f (z ) is an extremal 1 1 k 1 function for ( p, t ) and consequently ( p, t ) = ( p, t ). Note that this argument k k 1 does not guarantee that the extremal f is unique for ( p, t ). k k We are interested in the extremal problem (1)for 0 < p < 1 and whether the extremal identified using F. Wiener's trick above for 1 ≤ p ≤∞ is unique. We shall obtain the following general result, which may be of independent interest. Theorem 1 Fix k ≥ 2 and suppose that 0 < p ≤∞. Let W denote the F. Wiener transform (2). The inequality 1/ p−1 W f p ≤ max k , 1 f p k H H is sharp. Moreover, equality is attained if and only if (a) f ≡ 0 when 0 < p < 1, (b) W f = f when 1 < p < ∞. 123 p F. Wiener's Trick and an Extremal Problem for H The upper bound in the estimate is easily deduced from the triangle inequality. Hence, the novelty of Theorem 1 is that the inequality is sharp for 0 < p < 1, and the statements (a) and (b). In Sect. 3, we also present examples of functions in H and H which attain equality in Theorem 1, but for which W f = f . However, we will conversely establish that if both f and W f are inner functions, then f = W f . k k To illustrate the role played by the F. Wiener transform in various coefficient related extremal problems, we first recall that the estimate W f ≤ f was originally k ∞ ∞ used by F. Wiener to resolve a problem posed by Bohr [2] and compute the so-called Bohr radius for H . We also know from [12, Sect. 1.7] that the Krzyz˙ conjecture on the maximal magnitude of the kth coefficient in the power series expansion of a non-vanishing function with f = 1 is equivalent to the assertion that if f is an extremal for the corresponding extremal problem, then f = W f . As far as we are aware, the Krzyz˙ conjecture remains open for k ≥ 6. Theorem 1 shows that the extremal for ( p, t ) is unique when 1 < p < ∞.We shall see in Sect. 5 that the extremal problem ( p, t ) with k ≥ 2 and 1 ≤ p ≤∞ has a unique extremal except for when p = 1 and 0 ≤ t < 1/2. In the range 0 < p < 1 with k = 1, the extremal problem (1) has been studied −1/ p by Connelly [4, Sect. 4], who resolved the problem in the cases 0 ≤ t < 2 and −1/ p 1/ p−1/2 2 p(2 − p) < t ≤ 1. Connelly also states conjectures on the behavior −1/ p −1/ p 1/ p−1/2 of ( p, t ) in the range 2 ≤ t ≤ 2 p(2 − p) . The conjectures are based on numerical analysis (see [4, Sect. 5]). In Sect. 4, we will extend Connelly's result to the full range 0 ≤ t ≤ 1. Our result demonstrates that for each 0 < p < 1 there is a unique 0 < t < 1/2 such that the extremal for ( p, t ) is not unique, thereby confirming the above-mentioned 1 p conjectures. Brevig and Saksman [3] have recently studied the extremal problem (k) f (0) ( p) = sup Re : f p ≤ 1 k H k! for 0 < p < 1. It is observed in [3, Sect. 5.3] that ( p) = max ( p, t ).In k 0≤t ≤1 k particular, the maxima of ( p, t ) for 0 ≤ t ≤ 1is 1/ p p 2 ( p) = 1 − √ 2 p(2 − p) 1/ p and this is attained for t = (1 − p/2) . From the main result in [1], it is easy to see that t → ( p, t ) is a decreasing function from ( p, 0) = 1to ( p, 1) = 0 1 1 1 when 1 ≤ p ≤∞. Similarly, our main result shows that ( p, t ) is increasing from ( p, 0) = 1 to the maxima mentioned above, then decreasing to ( p, 1) = 0. 1 1 Figure 1 contains the plot of t → ( p, t ) for several values 0 < p ≤∞, which illustrates this difference between 0 < p < 1 and 1 ≤ p ≤∞. 123 O. F. Brevig et al. Fig. 
1 Plot of the curves t → ( p, t ) for p = 1/2, p = 1, p = 2and p =∞ Another difference between 0 < p < 1 and 1 ≤ p ≤∞ appears when we consider k ≥ 2. Recall that in the latter case, we have ( p, t ) = ( p, t ) for every k ≥ 2 and k 1 every 0 ≤ t ≤ 1. In the former case, we only get from Theorem 1 that 1/ p−1 ( p, t ) ≤ ( p, t ) ≤ k ( p, t ). (3) 1 k 1 Theorem 1 also shows that the upper bound in (3) is attained if and only if t = 1, since trivially ( p, 1) = 0 for every 0 < p ≤∞. However, by adapting an example due to Hardy and Littlewood [7], it is easy to see that if 0 < p < 1 and 0 ≤ t < 1 are fixed, then the exponent 1/ p − 1in(3) cannot be improved as k →∞.Inthe final section of the paper, we present some evidence that the lower bound in (3) can be attained for sufficiently large t,if k ≥ 2 and 0 < p < 1 are fixed. Organization The present paper is organized into five additional sections and one appendix. In Sect. 2, we collect some preliminary results pertaining to H and the structure of extremals for ( p, t ). Section 3 is devoted to F. Wiener's trick and the proof of Theorem 1. A complete solution to the extremal problem ( p, t ) for 0 < p ≤∞ and 0 ≤ t ≤ 1 is presented in Sect. 4. In Sect. 5, we consider ( p, t ) for k ≥ 2 and 1 ≤ p ≤∞ and study when the extremal is unique. Section 6 contains some remarks on ( p, t ) for k ≥ 2 and 0 < p < 1. "Appendix A" contains the proof of a crucial lemma needed to resolve the extremal problem ( p, t ) for 0 < p < 1. 123 p F. Wiener's Trick and an Extremal Problem for H 2 Preliminaries Recall that for 0 < p < ∞, the Hardy space H consists of the analytic functions f in D for which the limit of integral means 2π dθ i θ p f = lim | f (re )| 2π r →1 is finite. H is the space of bounded analytic functions in D, endowed with the norm f = sup | f (z)|. It is well-known (see e.g. [6]) that H is a Banach space |z|<1 when 1 ≤ p ≤∞ and a quasi-Banach space when 0 < p < 1. In the Banach space range 1 ≤ p ≤∞, the triangle equality is p p p f + g ≤ f + g . (4) H H H The Hardy space H is strictly convex when 1 < p < ∞, which means that it is impossible to attain equality in (4) unless g ≡ 0or f = λg for a non-negative constant λ. H is not strictly convex for p = 1 and p =∞, so in this case there are other ways to attain equality in (4). In the range 0 < p < 1, the triangle inequality takes the form p p p f + g ≤ f + g , (5) p p p H H H so here H is not even locally convex [5]. Our first goal is to establish that the triangle inequality (5) is not attained unless f ≡ 0or g ≡ 0. This result is probably known to experts, but we have not found it in the literature. If f ∈ H for some 0 < p ≤∞, then the boundary limit function ∗ i θ i θ f (e ) = lim f (re ) (6) r →1 ∗ p p exists for almost every θ. Moreover, f ∈ L = L ([0, 2π ]) and 1/ p 2π dθ ∗ ∗ i θ p p f = f = f (e ) H L 2π ∗ i θ if 0 < p < ∞ and f ∞ = ess sup | f (e )|. For simplicity, we henceforth omit the asterisk and write f = f with the limit (6) in mind. Lemma 2 Fix 0 < p < 1 and suppose that f , g ∈ H .If p p p f + g = f + g p p p H H H then either f ≡ 0 or g ≡ 0. 123 O. F. Brevig et al. Proof We begin by looking at equality in the triangle inequality for L in the range 0 < p < 1. Here we have 2π dθ i θ i θ f + g = f (e ) + g(e ) 2π 2π dθ p p i θ p i θ p ≤ | f (e )| +|g(e )| = f + g . p p L L 2π p p p We used the elementary estimate |z + w| ≤|z| +|w| for complex numbers z,w and 0 < p < 1. It is easily verified that this estimate is attained if and only if zw = 0. 
Consequently, p p p f + g = f + g p p p L L L i θ i θ if and only if f (e )g(e ) = 0 for almost every θ.Itiswell-known (see[6,Thm.2.2]) that the only function h ∈ H whose boundary limit function (6) vanishes on a set of positive measure is h ≡ 0. Hence we conclude that either f ≡ 0or g ≡ 0. Let us next establish a standard result on the structure of the extremals for the extremal problem (1). The first step is the following basic result. Lemma 3 If f ∈ H is extremal for ( p, t ), then f = 1. k H Proof Suppose that f ∈ H is extremal for ( p, t ) but that f p < 1. For ε> 0, k H set g(z) = f (z) + εz . Note that g(0) = f (0) = t for any ε> 0. If 1 ≤ p ≤∞, then p p g ≤ f + ε< 1 H H for sufficiently small ε> 0. If 0 < p < 1, then p p g ≤ f + ε < 1, p p H H again for sufficiently small ε> 0, so g < 1. In both cases we find that (k) (k) g (0) f (0) = + ε, k! k! which contradicts the extremality of f for ( p, t ). k k Let (n ) denote a sequence of distinct non-negative integers and let (w ) j j j =1 j =1 denote a sequence of complex numbers. A special case of the Carathéodory–Fejér problem is to determine the infimum of f over all f ∈ H which satisfy (n ) (0) = w , (7) n ! 123 p F. Wiener's Trick and an Extremal Problem for H for j = 1,..., k. Set k = max n .If f is an extremal for the Carathéodory– 1≤ j ≤k j Fejér problem (7), then there are complex numbers |λ |≤ 1for j = 1,..., k and a constant C such that l k λ − z j 2/ p f (z) = C 1 − λ z (8) 1 − λ z j =1 j =1 for some 0 ≤ l ≤ k, and the strict inequality |λ | < 1 holds for 0 < j ≤ l.In(8) and in similar formulas to follow, we adopt the convention that in the case l = 0 the first product is empty and considered to be equal to 1. For 1 ≤ p ≤∞, this result is independently due to Macintyre and Rogosinski [11] and Havinson [8], while in the range 0 < p < 1 the result is due to Kabaila [9]. An exposition of these results can be found in [6, Ch. 8] and [10, pp. 82–85], respectively. Using Lemma 3, we can establish that the extremals of the extremal problem ( p, t ) have to be of the same form. Lemma 4 If f ∈ H is extremal for ( p, t ), then there are complex numbers |λ |≤ 1 k j for j = 1,..., k and a constant C such that l k λ − z j 2/ p f (z) = C 1 − λ z . 1 − λ z j =1 j =1 for some 0 ≤ l ≤ k, and the strict inequality |λ | < 1 holds for 0 < j ≤ l. Proof Suppose that f is extremal for ( p, t ) and consider the Carathéodory–Fejér problem with conditions (k) f (0) f (0) = t and = ( p, t ). (9) k! We claim that f is an extremal for the Carathéodory–Fejér problem (9). If it is not, then there must be some f ∈ H with f < 1 which satisfies (9). However, this contradicts Lemma 3. Hence the extremal is of the stated form by (8). 3 F. Wiener's Trick Recall from (2) that if f (z) = a z and ω = exp(2π i /k), then n k n≥0 k−1 ∞ kn W f (z) = f (ω z) = a z . k kn j =0 n=0 We begin by giving two examples showing that W f p = f p may occur for k H H f such that W f = f when p = 1or p =∞. 123 O. F. Brevig et al. 2k 1 Example 5 Let k ≥ 2 and consider f (z) = (1 + z) in H . By the binomial theorem, we find that 2k 2k f (z) = z , n=0 2k k 2k W f (z) = 1 + z + z . Note that f = W f since k ≥ 2. By another application of the binomial theorem and a well-known identity for the central binomial coefficient, we find that k 2k 1/2 2 f 1 = f = = . n k n=0 Moreover, 2π 2k dθ i θ ikθ = W f (e ) e ≤ W f k k k 2π by the triangle inequality. Hence 2k 2k ≤ W f ≤ f = , 1 1 H H k k so W f 1 = f 1. H H k 2 k 2 ∞ Example 6 Let k ≥ 2 and consider f (z) = (1+ z ) − z(1− z ) in H . 
Example 6. Let $k \ge 2$ and consider $f(z) = (1+z^k)^2 - z(1-z^k)^2$ in $H^\infty$. It is clear that $W_k f(z) = (1+z^k)^2 \ne f(z)$ since $k \ge 2$. Moreover, $\|W_k f\|_{H^\infty} = 4$, and the supremum is attained for $z = \omega_k^j$ for $j = 0, 1, \ldots, k-1$. We next compute
$$f(e^{i\theta}) = \left( 1 + e^{ik\theta} \right)^2 - e^{i\theta} \left( 1 - e^{ik\theta} \right)^2 = 4 e^{ik\theta} \left( \cos^2\frac{k\theta}{2} + e^{i\theta} \sin^2\frac{k\theta}{2} \right).$$
Consequently, $\|f\|_{H^\infty} = 4$ and here the supremum is attained for $z = \omega_{2k}^j$ for $j = 0, 1, \ldots, 2k-1$.

Proof of Theorem 1. It follows from the triangle inequality (4) that
$$\|W_k f\|_{H^p} \le \|f\|_{H^p} \tag{10}$$
for every $f \in H^p$ if $1 \le p \le \infty$. In the range $0 < p < 1$, we get from the triangle inequality (5) the estimate
$$\|W_k f\|_{H^p} \le k^{1/p-1} \|f\|_{H^p}. \tag{11}$$
Combining (10) and (11), we have established that
$$\|W_k f\|_{H^p} \le \max\left( k^{1/p-1}, 1 \right) \|f\|_{H^p}.$$
This is trivially attained for $f(z) = z^k$ when $1 \le p \le \infty$. To finish the proof of the first part of the theorem, we need to show that the upper bound $k^{1/p-1}$ cannot be improved when $0 < p < 1$.

Let $\varepsilon > 0$ and consider $f_\varepsilon(z) = (z - (1+\varepsilon))^{-1/p}$. Clearly $\|f_\varepsilon\|_{H^p} \to \infty$ as $\varepsilon \to 0^+$. Moreover,
$$\|f_\varepsilon\|_{H^p}^p = \int_0^{2\pi} \frac{1}{\left| e^{i\theta} - (1+\varepsilon) \right|} \frac{d\theta}{2\pi} \le \int_{|\theta| < \pi/k} \frac{1}{\left| e^{i\theta} - (1+\varepsilon) \right|} \frac{d\theta}{2\pi} + \int_{|\theta| \ge \pi/k} \frac{6}{\theta^2} \frac{d\theta}{2\pi} \le \int_{|\theta| < \pi/k} \frac{1}{\left| e^{i\theta} - (1+\varepsilon) \right|} \frac{d\theta}{2\pi} + \frac{6k}{\pi^2},$$
from which we conclude that
$$\|f_\varepsilon\|_{H^p}^p = \int_{|\theta| < \pi/k} \frac{1}{\left| e^{i\theta} - (1+\varepsilon) \right|} \frac{d\theta}{2\pi} + O(1). \tag{12}$$
Furthermore,
$$\|W_k f_\varepsilon\|_{H^p}^p = \sum_{j=0}^{k-1} \int_{|\theta - 2\pi j/k| < \pi/k} \left| \frac{1}{k} \sum_{l=0}^{k-1} f_\varepsilon\!\left( e^{i(\theta + 2\pi l/k)} \right) \right|^p \frac{d\theta}{2\pi} \ge k^{-p} \sum_{j=0}^{k-1} \left( \int_{|\theta - 2\pi j/k| < \pi/k} \left| f_\varepsilon\!\left( e^{i(\theta + 2\pi j/k)} \right) \right|^p \frac{d\theta}{2\pi} - \frac{6k^2}{\pi^2} \right) = k^{-p+1} \int_{|\theta| < \pi/k} \frac{1}{\left| e^{i\theta} - (1+\varepsilon) \right|} \frac{d\theta}{2\pi} - \frac{6k^{-p+3}}{\pi^2}.$$
By (12) we find that
$$\lim_{\varepsilon \to 0^+} \frac{\|W_k f_\varepsilon\|_{H^p}^p}{\|f_\varepsilon\|_{H^p}^p} \ge k^{1-p}.$$
Hence, the constant $k^{1/p-1}$ in (11) cannot be replaced by any smaller quantity.

We next want to show that (a) and (b) hold. For a function $f \in H^p$, define $f_j(z) = f(\omega_k^j z)$ for $j = 0, 1, \ldots, k-1$ and recall that $\|f_j\|_{H^p} = \|f\|_{H^p}$.

We begin with (a). Suppose that $\|W_k f\|_{H^p} = k^{1/p-1} \|f\|_{H^p}$, which we can reformulate as
$$\|f_0 + f_1 + \cdots + f_{k-1}\|_{H^p}^p = \|f_0\|_{H^p}^p + \|f_1\|_{H^p}^p + \cdots + \|f_{k-1}\|_{H^p}^p.$$
By Lemma 2, the triangle inequality can be attained if and only if at least $k-1$ of the $k$ functions $f_j$ are identically equal to zero. Evidently this is possible if and only if $f \equiv 0$.

For (b), we suppose that $f \in H^p$ is such that $\|W_k f\|_{H^p} = \|f\|_{H^p}$. We need to prove that $W_k f = f$. If $f \equiv 0$ there is nothing to do. As in the proof of (a), we note that $\|W_k f\|_{H^p} = \|f\|_{H^p}$ can be reformulated as
$$\|f_0 + f_1 + \cdots + f_{k-1}\|_{H^p} = \|f_0\|_{H^p} + \|f_1\|_{H^p} + \cdots + \|f_{k-1}\|_{H^p}.$$
Viewing $H^p$ as a subspace of $L^p$, the strict convexity of the latter implies that there are non-negative constants $\lambda_j$ for $j = 1, 2, \ldots, k-1$ such that
$$f_0 = \lambda_1 f_1 = \lambda_2 f_2 = \cdots = \lambda_{k-1} f_{k-1}.$$
We shall only look at $f_0 = \lambda_1 f_1$, which for $f(z) = \sum_{n \ge 0} a_n z^n$ is equivalent to
$$\sum_{n=0}^{\infty} a_n z^n = \lambda_1 \sum_{n=0}^{\infty} a_n \omega_k^n z^n.$$
Using $W_k$ on this identity we get
$$\sum_{n=0}^{\infty} a_{kn} z^{kn} = \lambda_1 \sum_{n=0}^{\infty} a_{kn} z^{kn}.$$
This is only possible if $\lambda_1 = 1$ or $W_k f \equiv 0$. The latter implies that $f \equiv 0$, since $\|W_k f\|_{H^p} = \|f\|_{H^p}$ by assumption. Therefore we can restrict our attention to the case $\lambda_1 = 1$. For all integers $n$ that are not a multiple of $k$, we now find that
$$a_n = \lambda_1 \omega_k^n a_n \implies a_n = 0,$$
since $\lambda_1 = 1$ and $\omega_k^n \ne 1$. Hence $W_k f = f$ as desired. □
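The sharpness argument can also be observed numerically. In the following sketch (our own, with hypothetical parameters $p = 1/2$ and $k = 4$) we evaluate $f_\varepsilon$ through the equivalent expression $((1+\varepsilon) - z)^{-1/p}$, which differs from $f_\varepsilon$ only by a unimodular constant factor and therefore has the same norms. Since the error terms decay only logarithmically, the ratio approaches $k^{1-p}$ slowly.

```python
import numpy as np

# For f_eps(z) = (z - (1+eps))^(-1/p), the ratio ||W_k f_eps||^p / ||f_eps||^p
# should tend to k^(1-p) as eps -> 0+ (Theorem 1, sharpness for 0 < p < 1).
p, k = 0.5, 4
N = 1 << 18
z = np.exp(2j * np.pi * np.arange(N) / N)
omega = np.exp(2j * np.pi / k)

for eps in (1e-1, 1e-2, 1e-3):
    f = ((1 + eps) - z) ** (-1 / p)     # unimodular multiple of f_eps; same norms
    Wf = np.mean([((1 + eps) - omega**j * z) ** (-1 / p) for j in range(k)], axis=0)
    ratio = np.mean(np.abs(Wf) ** p) / np.mean(np.abs(f) ** p)
    print(eps, ratio, k ** (1 - p))     # ratio creeps up towards k^(1-p) = 2.0
```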
Recall that a function $f \in H^p$ is called inner if $|f(e^{i\theta})| = 1$ for almost every $\theta$. We shall require the following simple result later on.

Lemma 7. If both $f$ and $W_k f$ are inner functions, then $f = W_k f$.

Proof. Since $|W_k f(e^{i\theta})| = |f(e^{i\theta})| = 1$ for almost every $\theta$, we get from (2) that
$$1 = \left| W_k f(e^{i\theta}) \right| = \left| \frac{1}{k} \sum_{j=0}^{k-1} f_j(e^{i\theta}) \right| \le \frac{1}{k} \sum_{j=0}^{k-1} \left| f_j(e^{i\theta}) \right| = 1, \tag{13}$$
where $f_j(z) = f(\omega_k^j z)$. The equality on the right hand side of (13) is possible if and only if
$$f(e^{i\theta}) = f_1(e^{i\theta}) = \cdots = f_{k-1}(e^{i\theta})$$
for almost every $\theta$. As in the proof of Theorem 1 (b), we find that $f = W_k f$. □

4 The Extremal Problem $\Phi_1(p,t)$ for $0 < p \le \infty$

In the present section, we resolve the extremal problem (1) in the case $k = 1$ completely. We begin with the case $1 \le p \le \infty$, which has been solved by Beneteau and Korenblum [1]. We give a different proof of their result based on Lemma 4, mainly to illustrate the differences between the cases $0 < p < 1$ and $1 \le p \le \infty$.

Theorem 8 (Beneteau–Korenblum). Fix $1 \le p \le \infty$ and consider (1) with $k = 1$.
(i) If $0 \le t < 2^{-1/p}$, let $\alpha$ denote the unique real number in the interval $0 \le \alpha < 1$ such that $t = \alpha(1+\alpha^2)^{-1/p}$. Then
$$\Phi_1(p,t) = \frac{1}{(1+\alpha^2)^{1/p}} \left( 1 + \left( \frac{2}{p} - 1 \right) \alpha^2 \right),$$
and the unique extremal is
$$f(z) = \frac{\alpha + z}{1 + \alpha z} \, \frac{(1+\alpha z)^{2/p}}{(1+\alpha^2)^{1/p}}.$$
(ii) If $2^{-1/p} \le t \le 1$, let $\beta$ denote the unique real number in the interval $0 \le \beta \le 1$ such that $t = (1+\beta^2)^{-1/p}$. Then
$$\Phi_1(p,t) = \frac{1}{(1+\beta^2)^{1/p}} \, \frac{2\beta}{p},$$
and the unique extremal is
$$f(z) = \frac{(1+\beta z)^{2/p}}{(1+\beta^2)^{1/p}}.$$

Proof. Note that since $k = 1$, there are only two possibilities for the extremals in Lemma 4. They are
$$f_1(z) = \frac{\alpha + z}{1 + \alpha z} \, \frac{(1+\alpha z)^{2/p}}{(1+\alpha^2)^{1/p}}, \qquad 0 \le \alpha < 1, \tag{14}$$
$$f_2(z) = \frac{(1+\beta z)^{2/p}}{(1+\beta^2)^{1/p}}, \qquad 0 \le \beta \le 1. \tag{15}$$
Here we have made $\alpha, \beta \ge 0$ by rotations. Note that if $p = \infty$, then $f_2$ does not depend on $\beta$. Moreover,
$$t = f_1(0) = \frac{\alpha}{(1+\alpha^2)^{1/p}}, \tag{16}$$
$$t = f_2(0) = \frac{1}{(1+\beta^2)^{1/p}}. \tag{17}$$
For $1 \le p \le \infty$ it is easy to verify that the function
$$\alpha \mapsto \frac{\alpha}{(1+\alpha^2)^{1/p}} \tag{18}$$
is strictly increasing on $0 \le \alpha < 1$ and maps $[0,1)$ to $[0, 2^{-1/p})$. Similarly, for $1 \le p < \infty$ we find that the function
$$\beta \mapsto \frac{1}{(1+\beta^2)^{1/p}} \tag{19}$$
is strictly decreasing on $0 \le \beta \le 1$ and maps $[0,1]$ to $[2^{-1/p}, 1]$. Consequently, if $0 \le t < 2^{-1/p}$, then the unique extremal is (14) with $\alpha$ given by (16), and if $2^{-1/p} \le t \le 1$, then the unique extremal is (15) with $\beta$ given by (17). The proof is completed by computing
$$f_1'(0) = \frac{1}{(1+\alpha^2)^{1/p}} \left( 1 + \left( \frac{2}{p} - 1 \right) \alpha^2 \right) = t \left( \frac{1}{\alpha} + \left( \frac{2}{p} - 1 \right) \alpha \right), \tag{20}$$
$$f_2'(0) = \frac{1}{(1+\beta^2)^{1/p}} \, \frac{2\beta}{p} = t \, \frac{2\beta}{p}, \tag{21}$$
to obtain the stated expressions for $\Phi_1(p,t)$ in (i) and (ii), respectively. □

Define $\alpha$ and $\beta$ as functions of $t$ implicitly through (16) and (17). Then $\alpha$ is increasing on $0 \le t < 2^{-1/p}$ and $\beta$ is decreasing on $2^{-1/p} \le t \le 1$. Inspecting the left hand sides of (20) and (21), we extract the following result.

Corollary 9. If $1 \le p \le \infty$, then the function $t \mapsto \Phi_1(p,t)$ is decreasing and takes every value in $[0,1]$.
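The closed forms of Theorem 8 are straightforward to evaluate numerically. The helper below (our own code; it uses a standard root-finder and is not from the paper) inverts (16) or (17) and checks the two classical cases quoted in the introduction, $\Phi_1(2,t) = \sqrt{1-t^2}$ and $\Phi_1(1,t) = 2\sqrt{t(1-t)}$ for $t \ge 1/2$.

```python
from scipy.optimize import brentq

# Evaluate Phi_1(p, t) for 1 <= p < oo from the closed forms in Theorem 8.
def phi1(p, t):
    if t < 2.0 ** (-1.0 / p):
        # case (i): invert t = alpha (1 + alpha^2)^(-1/p) on [0, 1)
        alpha = brentq(lambda a: a * (1 + a * a) ** (-1.0 / p) - t, 0.0, 1.0)
        return (1 + alpha**2) ** (-1.0 / p) * (1 + (2.0 / p - 1.0) * alpha**2)
    # case (ii): invert t = (1 + beta^2)^(-1/p) on [0, 1]
    beta = brentq(lambda b: (1 + b * b) ** (-1.0 / p) - t, 0.0, 1.0)
    return (1 + beta**2) ** (-1.0 / p) * 2.0 * beta / p

for t in (0.1, 0.3, 0.8, 0.95):
    assert abs(phi1(2, t) - (1 - t * t) ** 0.5) < 1e-9    # Phi_1(2,t) = sqrt(1-t^2)
assert abs(phi1(1, 0.8) - 2 * (0.8 * 0.2) ** 0.5) < 1e-9  # 2 sqrt(t(1-t)) at p = 1
print("Theorem 8 formulas agree with the classical cases")
```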
In the range $0 < p < 1$ a more careful analysis is required. This is due to the fact that the function (18) is increasing on the interval $0 \le \alpha \le \alpha_2$ and decreasing on the interval $\alpha_2 \le \alpha < 1$, where
$$\alpha_2 = \sqrt{\frac{p}{2-p}}. \tag{22}$$
Inspecting (16), we conclude that for each $2^{-1/p} < t < 2^{-1/p} \sqrt{p}\, (2-p)^{1/p-1/2}$ there are two possible $\alpha$-values which give the same $t = f_1(0)$. Let $\alpha_1$ denote the unique real number in the interval $(0,1)$ such that
$$1 + \alpha_1^2 = 2\alpha_1^p. \tag{23}$$
Note that $\alpha_1$ gives the value $t = 2^{-1/p}$ in (16).

Lemma 10. If $\alpha_1 < \tilde{\alpha} < \alpha_2$ and $\alpha_2 < \alpha < 1$ produce the same $t = f_1(0)$ in (16), then the quantity $f_1'(0)$ from (20) is maximized by $\tilde{\alpha}$.

Proof. Since $\tilde{\alpha}$ and $\alpha$ give the same $t = f_1(0)$ in (16), it follows from (20) that we only need to prove that
$$\frac{1}{\tilde{\alpha}} + \frac{\tilde{\alpha}}{\alpha_2^2} > \frac{1}{\alpha} + \frac{\alpha}{\alpha_2^2} \tag{24}$$
(note that $2/p - 1 = (2-p)/p = 1/\alpha_2^2$). Fix $\alpha_1 < \tilde{\alpha} < \alpha_2$. The unique number $\alpha_2 < \xi < 1$ such that
$$\frac{1}{\tilde{\alpha}} + \frac{\tilde{\alpha}}{\alpha_2^2} = \frac{1}{\xi} + \frac{\xi}{\alpha_2^2}$$
is $\xi = \alpha_2^2/\tilde{\alpha}$. Since the function
$$x \mapsto \frac{1}{x} + \frac{x}{\alpha_2^2}$$
is increasing for $x > \alpha_2$, it is sufficient to prove that $\xi > \alpha$ to obtain (24). Since (18) is decreasing for $x > \alpha_2$, we see that $\xi > \alpha$ if and only if
$$\frac{\tilde{\alpha}}{(1+\tilde{\alpha}^2)^{1/p}} > \frac{\xi}{(1+\xi^2)^{1/p}} \iff \frac{\tilde{\alpha}}{(1+\tilde{\alpha}^2)^{1/p}} > \frac{\alpha_2^2/\tilde{\alpha}}{\left( 1 + \alpha_2^4/\tilde{\alpha}^2 \right)^{1/p}}.$$
Here we used that $\tilde{\alpha}$ and $\alpha$ give the same $t = f_1(0)$ in (16) on the left hand side and the identity $\xi = \alpha_2^2/\tilde{\alpha}$ on the right hand side. We now substitute $\tilde{\alpha}^2 = \alpha_2^2 x$ for $0 < x < 1$ to obtain the equivalent inequality
$$\frac{x}{\left( 1 + \alpha_2^2 x \right)^{1/p}} > \frac{1}{\left( 1 + \alpha_2^2/x \right)^{1/p}}. \tag{25}$$
Actually, we only need to consider $(\alpha_1/\alpha_2)^2 < x < 1$, but the same proof works for $0 < x < 1$. We raise both sides of (25) to the power $p$, multiply by $x^{1-p}$ and rearrange to get the equivalent inequality $F(x) > 0$, where
$$F(x) = x - x^{1-p} + \alpha_2^2 \left( 1 - x^{2-p} \right).$$
Recalling that $\alpha_2^2 = p/(2-p)$, we compute
$$F'(x) = 1 - (1-p)x^{-p} - p x^{1-p} \qquad \text{and} \qquad F''(x) = p(1-p)x^{-p-1} - p(1-p)x^{-p}.$$
Since $F(1) = F'(1) = 0$, we get from Taylor's theorem that for every $0 < x < 1$ there is some $x < \eta < 1$ such that
$$F(x) = \frac{F''(\eta)}{2}(x-1)^2 = \frac{p(1-p)}{2}\, \eta^{-p-1}(1-\eta)(x-1)^2 > 0,$$
which completes the proof. □

By Lemma 10, we now only need to compare $f_1'(0)$ from (20) for $\alpha_1 \le \alpha \le \alpha_2$ with $f_2'(0)$ from (21) for $\beta$ such that $f_1(0) = t = f_2(0)$. Inspecting (16) and (17), we find that
$$\frac{\alpha}{(1+\alpha^2)^{1/p}} = \frac{1}{(1+\beta^2)^{1/p}} \iff \beta = \sqrt{\frac{1+\alpha^2}{\alpha^p} - 1}. \tag{26}$$
Next, we consider the equation $f_1'(0) = f_2'(0)$ with $\beta$ as in (26). Inspecting (20) and (21) and dividing by $t$, we get the equation
$$\frac{1}{\alpha} + \left( \frac{2}{p} - 1 \right)\alpha = \frac{2\beta}{p} = \frac{2}{p}\sqrt{\frac{1+\alpha^2}{\alpha^p} - 1}. \tag{27}$$
We square both sides, multiply by $p^2$ and rearrange to find that (27) is equivalent to the equation $F_p(\alpha) = 0$, where
$$F_p(\alpha) = p^2 \alpha^{-2} + 2p(2-p) + (2-p)^2 \alpha^2 - 4\left( \alpha^{-p} + \alpha^{2-p} - 1 \right). \tag{28}$$
Suppose that $\alpha_1 \le \alpha \le \alpha_2$. If
• $F_p(\alpha) > 0$, then $f_1$ from (14) is the unique extremal for $\Phi_1(p,t)$.
• $F_p(\alpha) = 0$, then $f_1$ from (14) and $f_2$ from (15) are extremals for $\Phi_1(p,t)$.
• $F_p(\alpha) < 0$, then $f_2$ from (15) is the unique extremal for $\Phi_1(p,t)$.
Note that any solutions of $F_p(\alpha) = 0$ with $0 < \alpha < \alpha_1$ are of no interest, since this implies that $\beta > 1$ by (26). Similarly, any solutions of $F_p(\alpha) = 0$ with $\alpha_2 < \alpha < 1$ can be ignored due to Lemma 10. The following result shows that there is only one solution, which is in the pertinent range.

Lemma 11. Let $F_p$ be as in (28). The equation $F_p(\alpha) = 0$ has a unique solution, denoted $\alpha_p$, on the interval $(0,1)$. Moreover,
(a) if $0 < \alpha < \alpha_p$, then $F_p(\alpha) > 0$;
(b) if $\alpha_p < \alpha < 1$, then $F_p(\alpha) < 0$;
(c) $\alpha_1 < \alpha_p < \alpha_2$, where $\alpha_1$ and $\alpha_2$ are from (23) and (22), respectively.

The proof of Lemma 11 is a rather laborious calculus exercise, which we postpone to Appendix A below. Let $\alpha_p$ be as in Lemma 11 and define
$$t_p = \frac{\alpha_p}{(1+\alpha_p^2)^{1/p}}. \tag{29}$$
Note that $2^{-1/p} < t_p < 2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2}$ by the fact that $\alpha_1 < \alpha_p < \alpha_2$. By the analysis above, Lemma 10 and Lemma 11, we obtain the following version of Theorem 8 in the range $0 < p < 1$.

Theorem 12. Fix $0 < p < 1$ and consider (1) with $k = 1$. Let $t_p$ be as in (29) and set $\alpha_2 = \sqrt{p/(2-p)}$.
(i) If $0 \le t \le t_p$, let $\alpha$ denote the unique real number in the interval $0 \le \alpha < \alpha_2$ such that $t = \alpha(1+\alpha^2)^{-1/p}$. Then
$$\Phi_1(p,t) = \frac{1}{(1+\alpha^2)^{1/p}} \left( 1 + \left( \frac{2}{p} - 1 \right)\alpha^2 \right),$$
and an extremal is
$$f(z) = \frac{\alpha+z}{1+\alpha z}\, \frac{(1+\alpha z)^{2/p}}{(1+\alpha^2)^{1/p}}.$$
(ii) If $t_p \le t \le 1$, let $\beta$ denote the unique real number in the interval $0 \le \beta \le 1$ such that $t = (1+\beta^2)^{-1/p}$. Then
$$\Phi_1(p,t) = \frac{1}{(1+\beta^2)^{1/p}}\, \frac{2\beta}{p},$$
and an extremal is
$$f(z) = \frac{(1+\beta z)^{2/p}}{(1+\beta^2)^{1/p}}.$$
The extremals are unique for $0 \le t \le 1$ with $t \ne t_p$. The only extremals for $\Phi_1(p,t_p)$ are the two functions given in (i) and (ii).
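The quantities $\alpha_p$ and $t_p$ are easy to compute numerically for any fixed $p$. The sketch below (our own verification, not the paper's numerics; the bracket $[10^{-6}, \alpha_2]$ is justified by $F_p(0^+) = \infty$ and Lemma 17 of the appendix) locates $\alpha_p$ for $p = 1/2$ and confirms Lemma 11 (c) together with the stated bounds on $t_p$.

```python
from scipy.optimize import brentq

p = 0.5

def F(a):   # F_p from (28)
    return (p**2 * a**-2 + 2 * p * (2 - p) + (2 - p)**2 * a**2
            - 4 * (a**-p + a**(2 - p) - 1))

alpha2 = (p / (2 - p)) ** 0.5                                   # from (22)
alpha1 = brentq(lambda a: 1 + a * a - 2 * a**p, 1e-9, 1 - 1e-9) # from (23)
alpha_p = brentq(F, 1e-6, alpha2)            # F > 0 near 0, F(alpha2) < 0
t_p = alpha_p * (1 + alpha_p**2) ** (-1 / p)                    # from (29)

print(alpha1 < alpha_p < alpha2)             # True, as in Lemma 11 (c)
print(2**(-1/p) < t_p < 2**(-1/p) * p**0.5 * (2 - p)**(1/p - 1/2))  # True
```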
Fig. 2 Plot of the curve $p \mapsto t_p$. Points $(p,t)$ above and below the curve correspond to the cases (i) and (ii) of Theorem 12, respectively. The estimates $2^{-1/p} < t_p < 2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2}$ are represented by dotted curves. In the shaded area and in the range $1/2 \le t \le 1$, Theorem 12 is originally due to Connelly [4].

Theorem 12 extends [4, Theorem 4.1] to general $0 \le t \le 1$. The analysis in [4] is similar to ours, and we are able to also identify the extremals in the range $2^{-1/p} \le t \le 2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2}$ due to Lemma 10 and Lemma 11. It is also demonstrated in [4, Thm. 4.1] that when $p = 1/2$ there must exist at least one value of $0 < t < 1$ for which the extremal is not unique. Theorem 12 shows that there is precisely one such $t$, and that this observation is not specific to $p = 1/2$ but in fact holds for any $0 < p < 1$. Figure 2 shows the value $t_p$ for which the extremal is not unique as a function of $p$.

Inspecting Theorem 12, we get the following result similarly to how we extracted Corollary 9 from Theorem 8.

Corollary 13. If $0 < p < 1$, then the function $t \mapsto \Phi_1(p,t)$ is increasing from $\Phi_1(p,0) = 1$ to
$$\Phi_1\left( p, \left( 1 - \frac{p}{2} \right)^{1/p} \right) = \left( 1 - \frac{p}{2} \right)^{1/p} \frac{2}{\sqrt{p(2-p)}}$$
and then decreasing to $\Phi_1(p,1) = 0$.

5 The Extremal Problem $\Phi_k(p,t)$ for $k \ge 2$ and $1 \le p \le \infty$

We begin by recalling how F. Wiener's trick was used in [1] to obtain the solution to the extremal problem $\Phi_k(p,t)$ for $k \ge 2$ from Theorem 8.

Theorem 14 (Beneteau–Korenblum). Let $k \ge 2$ be an integer. For every $1 \le p \le \infty$ and every $0 \le t \le 1$,
$$\Phi_k(p,t) = \Phi_1(p,t).$$
If $f_1$ is the extremal function for $\Phi_1(p,t)$, then $f_k(z) = f_1(z^k)$ is an extremal function for $\Phi_k(p,t)$.

Proof. Suppose that $f$ is an extremal for $\Phi_k(p,t)$. Since $\|W_k f\|_{H^p} \le \|f\|_{H^p}$,
$$f(0) = W_k f(0) \qquad \text{and} \qquad \frac{f^{(k)}(0)}{k!} = \frac{(W_k f)^{(k)}(0)}{k!},$$
we conclude that $W_k f$ is also an extremal for $\Phi_k(p,t)$. Thus we may restrict our attention to extremals of the form $f_k(z) = f(z^k)$ for $f \in H^p$. The stated claims now follow at once from Theorem 8, since $\|f_k\|_{H^p} = \|f\|_{H^p}$. □

The purpose of the present section is to answer the following question. For which trios $k \ge 2$, $1 \le p \le \infty$ and $0 \le t \le 1$ is the extremal for $\Phi_k(p,t)$ unique? Note that while Theorem 14 provides an extremal $f_k(z) = f_1(z^k)$, where $f_1$ denotes the extremal from (the statement of) Theorem 8, it might not be unique. In the case $1 < p < \infty$ it follows at once from Theorem 1 (b) that this extremal is unique, although it is perhaps easier to use the strict convexity of $H^p$ and Lemma 3 directly. Since $H^p$ is not strictly convex for $p = 1$ and $p = \infty$, these cases require further analysis. Note that the case (a) below is certainly known to experts as a consequence of the general theory developed in [8, 11, 14].

Theorem 15. Consider the extremal problem (1) for $k \ge 2$ and $1 \le p \le \infty$.
(a) If $1 < p \le \infty$, then the unique extremal is $f_k(z) = f_1(z^k)$.
(b) If $p = 1$ and $1/2 \le t \le 1$, then the unique extremal is $f_k(z) = f_1(z^k)$.
(c) If $p = 1$ and $0 \le t < 1/2$, then the extremals are the functions of the form
$$f(z) = C \prod_{j=1}^{k} (\lambda_j - z)\left( 1 - \overline{\lambda_j} z \right)$$
with $|\lambda_j| \le 1$ such that $\|f\|_{H^1} = 1$, $f(0) = t$ and $f^{(k)}(0) > 0$.
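The extremal claimed in part (a) for $p = \infty$, which the proof below identifies as $f(z) = (t+z^k)/(1+tz^k)$ (the case $\theta = \pi$ and $\lambda = -t$ in (30)), can be checked numerically. The following sketch is our own illustration; the parameters $k = 3$ and $t = 0.4$ are arbitrary choices.

```python
import numpy as np

# f(z) = (t + z^k)/(1 + t z^k) has sup-norm 1, f(0) = t, and k-th Taylor
# coefficient 1 - t^2 = Phi_k(oo, t) = Phi_1(oo, t).
k, t = 3, 0.4
N = 1 << 12
z = np.exp(2j * np.pi * np.arange(N) / N)
f = (t + z**k) / (1 + t * z**k)
coeffs = np.fft.fft(f) / N                 # Taylor coefficients a_0, a_1, ...
print(np.max(np.abs(f)))                   # ~1.0 (inner function)
print(coeffs[0].real, coeffs[k].real)      # ~t and ~1 - t^2
```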
Proof of Theorem 15 (a). In view of the discussion above, we need only consider the case $p = \infty$. By Lemma 4, we know that any extremal must be of the form
$$f(z) = e^{i\theta} \prod_{j=1}^{l} \frac{\lambda_j - z}{1 - \overline{\lambda_j} z} \tag{30}$$
for some $0 \le l \le k$, constants $\lambda_j \in \mathbb{D}$ and $\theta \in \mathbb{R}$. If $f$ is extremal for $\Phi_k(\infty,t)$, then so is $W_k f$ by (the proof of) Theorem 14. Consequently, $W_k f$ is also of the form (30). In particular, since both $f$ and $W_k f$ are inner, we get from Lemma 7 that $f = W_k f$. From the definition of $W_k$, we know that $f(z) = W_k f(z) = g(z^k)$ for some analytic function $g$. This shows that the only possibility in (30) is
$$f(z) = e^{i\theta} \frac{\lambda - z^k}{1 - \overline{\lambda} z^k}$$
for some $\lambda \in \mathbb{D}$ and $\theta \in \mathbb{R}$. The unique extremal has $\theta = \pi$ and $\lambda = -t$. □

Proof of Theorem 15 (b). Suppose that $f$ is extremal for $\Phi_k(1,t)$. By rotations, we extend our scope to functions $f$ such that $|f(0)| = t$. In this case, we can use Lemma 4 and write $f = gh$ for
$$g(z) = \sqrt{C} \prod_{j=1}^{l} (z + \alpha_j) \prod_{j=l+1}^{k} (1 + \alpha_j z), \qquad h(z) = \sqrt{C} \prod_{j=1}^{k} (1 + \alpha_j z).$$
The constant $C > 0$ satisfies
$$\frac{1}{C} = \sum_{j=0}^{k} \Big| \sum_{j_1 + j_2 + \cdots + j_k = j} \alpha_1^{j_1} \alpha_2^{j_2} \cdots \alpha_k^{j_k} \Big|^2,$$
where $j_1, j_2, \ldots, j_k$ take only the values $0$ and $1$. Evidently $\|g\|_{H^2} = \|h\|_{H^2} = 1$. Set $A_l = |\alpha_1 \cdots \alpha_l|$ and $B_l = |\alpha_{l+1} \cdots \alpha_k|$. By keeping only the terms $j = 0$ and $j = k$ we obtain the trivial estimate
$$\frac{1}{C} \ge 1 + |\alpha_1 \alpha_2 \cdots \alpha_k|^2 = 1 + A_l^2 B_l^2. \tag{31}$$
We will adapt an argument due to F. Riesz [13] to get some additional information on the relationship between $g$ and $h$. Write
$$f(z) = \sum_{j=0}^{2k} a_j z^j, \qquad g(z) = \sum_{j=0}^{k} b_j z^j, \qquad h(z) = \sum_{j=0}^{k} c_j z^j,$$
and note that $|b_0| = t/|c_0| = t/\sqrt{C}$. By the Cauchy product formula we find that
$$a_k = \sum_{j=0}^{k} b_j c_{k-j} = \frac{t}{\sqrt{C}} \frac{b_0}{|b_0|} c_k + \sum_{j=1}^{k} b_j c_{k-j}. \tag{32}$$
Suppose that $\tilde{g} \in H^2$ satisfies $|\tilde{g}(0)| = t/\sqrt{C}$ and $\|\tilde{g}\|_{H^2} \le 1$. Define $\tilde{f} = \tilde{g} h$. The Cauchy–Schwarz inequality shows that $\|\tilde{f}\|_{H^1} \le 1$, so the extremality of $f$ implies that $|\tilde{a}_k| \le |a_k|$. Inspecting (32) and using the Cauchy–Schwarz inequality, we find that the optimal $\tilde{g}$ must therefore satisfy
$$\tilde{g}(z) = \frac{t}{\sqrt{C}} \frac{\overline{c_k}}{|c_k|} + \sqrt{\frac{1 - t^2/C}{1 - |c_k|^2}} \sum_{j=1}^{k} \overline{c_{k-j}}\, z^j, \tag{33}$$
where we used that $\|h\|_{H^2} = 1$. Using that $|c_0| = \sqrt{C}$, we compare the coefficients for $z^k$ in (33) with the definition of $g$, to find that
$$\sqrt{\frac{1 - t^2/C}{1 - |c_k|^2}} \, \sqrt{C} = \sqrt{C}\, B_l \implies \frac{1 - t^2/C}{1 - |c_k|^2} = B_l^2.$$
Next we insert $t^2 = C^2 A_l^2$ from the definition of $f = gh$ and $|c_k|^2 = C A_l^2 B_l^2$ from the definition of $h$ to obtain
$$\frac{1 - C A_l^2}{1 - C A_l^2 B_l^2} = B_l^2 \iff \frac{\left( 1 - B_l^2 \right)\left( 1 - C A_l^2 \left( 1 + B_l^2 \right) \right)}{1 - C A_l^2 B_l^2} = 0. \tag{34}$$
The additional information we require is encoded in the equation on the right hand side of (34).

Suppose that $l \ge 1$. Evidently $A_l < 1$, since $|\alpha_j| < 1$ for $j = 1, \ldots, l$ by Lemma 4. It follows that the second factor on the right hand side of (34) can never be $0$, since the trivial estimate (31) implies that
$$C \le \frac{1}{1 + A_l^2 B_l^2} < \frac{1}{A_l^2 \left( 1 + B_l^2 \right)}. \tag{35}$$
From the right hand side of (34) we thus find that $B_l = 1$, so that (31) yields $C \le 1/(1+A_l^2)$. Since $t = C A_l$, we conclude that $t \le A_l/(1+A_l^2) < 1/2$, because $A_l < 1$.

By the contrapositive, we have established that if $1/2 \le t \le 1$, then the extremal for $\Phi_k(1,t)$ has $l = 0$. In this case $A_0 = 1$ by definition, which shows that $C = t$. The right hand side of (34) becomes
$$\frac{\left( 1 - B_0^2 \right)\left( 1 - t\left( 1 + B_0^2 \right) \right)}{1 - t B_0^2} = 0,$$
so either $B_0 = 1$ or $B_0^2 = 1/t - 1$. Returning to the definition of $h$ we find that $|c_0|^2 = t$ and $|c_k|^2 = t B_0^2$. Consequently,
$$1 = \|h\|_{H^2}^2 = t\left( 1 + B_0^2 \right) + \sum_{j=1}^{k-1} |c_j|^2.$$
Since $1/2 \le t \le 1$, we find that both $B_0 = 1$ and $B_0^2 = 1/t - 1$ imply that $c_j = 0$ for $j = 1, \ldots, k-1$. Thus
$$h(z) = \sqrt{t} + \sqrt{1-t}\, z^k.$$
When $l = 0$ we have $g = h$, which shows that the unique extremal is
$$f(z) = \left( \sqrt{t} + \sqrt{1-t}\, z^k \right)^2,$$
which is of the form $f_k(z) = f_1(z^k)$ as claimed. □
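The unique extremal in part (b) is also easy to verify numerically. In the sketch below (our own, with the arbitrary choices $k = 2$ and $t = 0.6$), the $H^1$ norm is approximated by an average over the circle, and the $k$-th coefficient equals $2\sqrt{t(1-t)} = \Phi_k(1,t)$.

```python
import numpy as np

# f(z) = (sqrt(t) + sqrt(1-t) z^k)^2 has ||f||_{H^1} = 1, f(0) = t, and
# f^(k)(0)/k! = 2 sqrt(t(1-t)).
k, t = 2, 0.6
N = 1 << 12
z = np.exp(2j * np.pi * np.arange(N) / N)
f = (np.sqrt(t) + np.sqrt(1 - t) * z**k) ** 2
print(np.mean(np.abs(f)))          # ~1.0
print(2 * np.sqrt(t * (1 - t)))    # the coefficient of z^k in f
```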
Proof of Theorem 15 (c). In the case $0 \le t < 1/2$, we know from Theorem 8 and Theorem 14 that $\Phi_k(1,t) = 1$. See also Figure 1. The stated claim follows from Exercise 3 on page 143 of [6] by scaling and rotating the function
$$f(z) = C \prod_{j=1}^{k} (\lambda_j - z)\left( 1 - \overline{\lambda_j} z \right)$$
to satisfy the conditions $\|f\|_{H^1} = 1$, $f(0) > 0$ and $f^{(k)}(0) > 0$. If the resulting function satisfies $f(0) = t$, then it is an extremal for $\Phi_k(p,t)$, and every extremal is obtained in this way. (This can be established similarly to the case (b) above.) □

6 The Extremal Problem $\Phi_k(p,t)$ for $k \ge 2$ and $0 < p < 1$

The purpose of this final section is to record some observations pertaining to the extremal problem (1) in the unresolved case $k \ge 2$ and $0 < p < 1$. Suppose that $k \ge 0$ and consider the related extremal problem
$$\Lambda_k(p) = \sup\left\{ \operatorname{Re} \frac{f^{(k)}(0)}{k!} \,:\, \|f\|_{H^p} \le 1 \right\}.$$
Evidently, $\Lambda_0(p) = 1$ for every $0 < p \le \infty$ and the unique extremal is $f(z) = 1$. Recall (from [3] or [9]) that the extremals for $\Lambda_k$ satisfy a structure result identical to Lemma 4. Note that the parameter $l$ in Lemma 4 describes the number of zeroes of the extremal in $\mathbb{D}$. Conjecture 1 from [3, Sect. 5] states that the extremal for $\Lambda_k(p)$ does not vanish in $\mathbb{D}$ when $0 < p < 1$. The conjecture has been verified in the cases $k = 0, 1, 2$ and for $(k,p) = (3, 2/3)$.

Let us now suppose that $k \ge 1$. There are two obvious connections between the extremal problems $\Phi_k$ and $\Lambda_k$. Namely,
$$\Phi_k(p,0) = \Lambda_{k-1}(p) \qquad \text{and} \qquad \max_{0 \le t \le 1} \Phi_k(p,t) = \Lambda_k(p).$$
Assume that the above-mentioned conjecture from [3] holds. This assumption yields that the extremal for $\Phi_k(p,0)$ has precisely one zero in $\mathbb{D}$, and that the extremal for the $t$ which maximizes $\Phi_k(p,t)$ does not vanish in $\mathbb{D}$. Note that the extremal for $\Phi_k(p,1)$, which is $f(z) = 1$, does not vanish in $\mathbb{D}$.

Question 1. Suppose that $0 < p < 1$. Is it true that the extremal for $\Phi_k(p,t)$ has at most one zero in $\mathbb{D}$?

We have verified numerically that the question has an affirmative answer for $k = 2$. Note that for $1 < p \le \infty$, the extremal for $\Phi_k(p,t)$ has either $0$ or $k$ zeroes in $\mathbb{D}$ by Theorem 15 (a). In the case $p = 1$, the extremal may have anywhere from $0$ to $k$ zeroes by Theorem 15 (b) and (c).

As mentioned in the introduction, Theorem 1 yields the estimates
$$\Phi_1(p,t) \le \Phi_k(p,t) \le k^{1/p-1} \Phi_1(p,t).$$
The upper bound is only attained if $\Phi_k(p,t) = 0$, which happens if and only if $t = 1$. Of course, since $\Phi_1(p,1) = 0$, the lower bound is then also attained.

Question 2. Fix $k \ge 2$ and $0 < p < 1$. Is there some $t_0$ such that $\Phi_k(p,t) = \Phi_1(p,t)$ holds for every $t_0 \le t \le 1$?

By a combination of numerical and analytical computations, we have strong evidence that the question has an affirmative answer for $k = 2$, and that in this case
$$t_0 = \left( 1 + \frac{p}{2-p} \right)^{-1/p}.$$
Let us close by briefly explaining our reasoning. We began by considering the case $l = 0$ in Lemma 4. Setting $f = g h^{2/p-1}$ and arguing as in the proof of Theorem 15 (b) (see also [3]), we found that if $t \ge t_0$, then the only possible extremal for $\Phi_2(p,t)$ with $l = 0$ is of the form $f_2(z) = f_1(z^2)$, where $f_1$ is the corresponding extremal for $\Phi_1(p,t)$. Next, if $l = 2$ then (as in the case $k = 1$) we can only obtain $t$-values in the range $0 \le t \le 2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2}$. However, since
$$2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2} < t_0$$
for $0 < p < 1$, we can ignore the case $l = 2$; see also the spot-check below. The case $l = 1$ was excluded by numerical computations.
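The final inequality, which allows the case $l = 2$ to be ignored, can be spot-checked directly. This is our own check; the sampled values of $p$ are arbitrary.

```python
# Compare the conjectured threshold t_0 for k = 2 with the largest t-value
# reachable in the case l = 2.
for p in (0.25, 0.5, 0.75):
    t0 = (1 + p / (2 - p)) ** (-1 / p)
    l2_max = 2 ** (-1 / p) * p**0.5 * (2 - p) ** (1 / p - 1 / 2)
    print(p, l2_max < t0)    # True: the case l = 2 never reaches t >= t_0
```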
Acknowledgements. The authors extend their gratitude to Eero Saksman for a helpful discussion pertaining to Theorem 1. They also thank the referee for a careful reading of the paper.

Funding. Open access funding provided by NTNU Norwegian University of Science and Technology (incl St. Olavs Hospital - Trondheim University Hospital).

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A: Proof of Lemma 11

We will frequently appeal to the following corollary of Rolle's theorem: Suppose that $f$ is continuously differentiable on $[a,b]$ and that $f'(x) = 0$ has precisely $n$ solutions on $(a,b)$. Then $f(x) = 0$ can have at most $n+1$ solutions on $[a,b]$.

We are interested in solutions of the equation $F_p(\alpha) = 0$ on the interval $(0,1)$, where we recall from (28) that
$$F_p(\alpha) = p^2 \alpha^{-2} + 2p(2-p) + (2-p)^2 \alpha^2 - 4\left( \alpha^{-p} + \alpha^{2-p} - 1 \right).$$
The initial step in the proof of Lemma 11 is to identify the critical points of $F_p$ on the interval $0 < \alpha < 1$. It turns out that there is only one.

Lemma 16. Fix $0 < p < 1$ and let $F_p$ be as in (28). The equation $F_p'(\alpha) = 0$ has the unique solution $\alpha = \alpha_2 = \sqrt{p/(2-p)}$ on $0 < \alpha < 1$.

Proof. We begin by computing
$$F_p'(\alpha) = -2p^2 \alpha^{-3} + 2(2-p)^2 \alpha + 4p\, \alpha^{-p-1} - 4(2-p)\, \alpha^{1-p}.$$
The solutions of the equation $F_p'(\alpha) = 0$ on $0 < \alpha < 1$ do not change if we multiply both sides by $\alpha^{1+p}/(4-2p)$. Hence, we consider the equation $G_p(\alpha) = 0$, where
$$G_p(\alpha) = \frac{\alpha^{1+p}}{2(2-p)} F_p'(\alpha) = -\frac{p^2}{2-p}\, \alpha^{p-2} + (2-p)\, \alpha^{2+p} + \frac{2p}{2-p} - 2\alpha^2.$$
Evidently,
$$G_p'(\alpha) = \alpha \left( p^2 \alpha^{p-4} + \left( 4 - p^2 \right) \alpha^p - 4 \right),$$
and the sign of $G_p'(\alpha)$ is the same as the sign of $p^2 \alpha^{p-4} + (4-p^2)\alpha^p - 4$. Since
$$\frac{d}{d\alpha} \left( p^2 \alpha^{p-4} + \left( 4 - p^2 \right) \alpha^p - 4 \right) = 0 \iff \alpha^4 = \frac{4p - p^2}{4 - p^2},$$
and since $G_p'(1) = 0$, we conclude that $G_p'$ changes sign at most once on $0 < \alpha < 1$. Since $G_p(0^+) = -\infty$, this means that $G_p(\alpha) = 0$ can have at most two solutions on $(0,1]$. Hence $F_p'(\alpha) = 0$ can have at most two solutions on $(0,1]$. It is easy to verify that these solutions are
$$\alpha = \sqrt{\frac{p}{2-p}} \qquad \text{and} \qquad \alpha = 1,$$
and hence the proof is complete. □
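Lemma 16 can be confirmed on a grid. This is our own sketch; the choice $p = 0.7$ and the grid size are arbitrary. The derivative $F_p'$ changes sign exactly once on $(0,1)$, at $\alpha_2 = \sqrt{p/(2-p)}$.

```python
import numpy as np

# Grid check of Lemma 16: on (0,1), F_p' vanishes only at alpha_2.
p = 0.7
alpha = np.linspace(1e-3, 1 - 1e-3, 100_000)
Fp_prime = (-2 * p**2 * alpha**-3 + 2 * (2 - p)**2 * alpha
            + 4 * p * alpha**(-p - 1) - 4 * (2 - p) * alpha**(1 - p))
crossings = np.nonzero(np.diff(np.sign(Fp_prime)))[0]
print(alpha[crossings])            # a single crossing...
print(np.sqrt(p / (2 - p)))        # ...near alpha_2 ~ 0.734
```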
We next want to demonstrate that $F_p(\alpha_1) > 0$ and $F_p(\alpha_2) < 0$, where $\alpha_1$ and $\alpha_2$ are from (23) and (22), respectively.

Lemma 17. Fix $0 < p < 1$. If $\alpha_2 = \sqrt{p/(2-p)}$, then $F_p(\alpha_2) < 0$.

Proof. We begin by reformulating the inequality $F_p(\alpha_2) < 0$ as $H(p) > 0$, for
$$H(p) = -\frac{2-p}{4}\, \alpha_2^p\, F_p(\alpha_2) = 2 - \left( 1 + 2p - p^2 \right) p^{p/2} (2-p)^{(2-p)/2}.$$
Since we have $H(0) = H(1) = 0$, it is sufficient to prove that the function $H$ has precisely one critical point on $0 < p < 1$ and that it is strictly positive for some $0 < p < 1$. We first check that
$$H(1/2) = \frac{16 - 7 \cdot 3^{3/4}}{8} > 0.$$
We then compute
$$H'(p) = -p^{p/2} (2-p)^{(2-p)/2} \left( 2(1-p) + \frac{1 + 2p - p^2}{2} \log \frac{p}{2-p} \right).$$
The first factor is non-zero, so we therefore need to check that the equation $I(p) = 0$ has only one solution on $0 < p < 1$, where
$$I(p) = \frac{4(1-p)}{1 + 2p - p^2} + \log \frac{p}{2-p}.$$
We compute
$$I'(p) = -4\, \frac{3 - 2p + p^2}{\left( 1 + 2p - p^2 \right)^2} + \frac{2}{p(2-p)} = \frac{2(1-p)^2 \left( 3p^2 - 6p + 1 \right)}{p(2-p)\left( 1 + 2p - p^2 \right)^2}.$$
Hence $I'(p) = 0$ has the unique solution $p_0 = 1 - \sqrt{2/3}$ on the interval $0 < p < 1$. Noting that $I(0^+) = -\infty$ and $I(1) = 0$, we conclude by verifying that
$$I(p_0) = \sqrt{6} + \log\left( 5 - 2\sqrt{6} \right) > 0,$$
which demonstrates that $I(p) = 0$ has a unique solution on $0 < p < 1$. □

Lemma 18. Fix $0 < p < 1$. Let $\alpha_1$ denote the unique solution of the equation $1 - 2\alpha^p + \alpha^2 = 0$ on the interval $(0,1)$. Then $F_p(\alpha_1) > 0$.

Proof. Using the equation defining $\alpha_1$, we see that $\alpha_1^{-p} + \alpha_1^{2-p} - 1 = 1$. Hence,
$$F_p(\alpha_1) = \frac{p^2}{\alpha_1^2} + 2p(2-p) + (2-p)^2 \alpha_1^2 - 4 = \left( \frac{p}{\alpha_1} + \alpha_1(2-p) + 2 \right) \left( \frac{1}{\alpha_1} - 1 \right) \left( p - \alpha_1(2-p) \right).$$
The first two factors are strictly positive for every $0 < \alpha_1 < 1$ and every $0 < p < 1$. Consequently, $F_p(\alpha_1) > 0$ if and only if $\alpha_1 < p/(2-p)$. The function
$$J_p(\alpha) = 1 - 2\alpha^p + \alpha^2$$
satisfies $J_p(0) = 1$ and $J_p(1) = 0$. Moreover, $J_p$ is strictly decreasing on $(0, p^{1/(2-p)})$ and strictly increasing on $(p^{1/(2-p)}, 1)$. Since $\alpha_1$ is the unique solution to $J_p(\alpha) = 0$ for $0 < \alpha < 1$, the desired inequality $\alpha_1 < p/(2-p)$ is equivalent to
$$0 > J_p\!\left( \frac{p}{2-p} \right) = 1 - 2\left( \frac{p}{2-p} \right)^p + \left( \frac{p}{2-p} \right)^2.$$
In order to establish this inequality, we multiply by $(2-p)^2/2$ on both sides to get the equivalent inequality $K(p) < 0$, where
$$K(p) = 2 - 2p + p^2 - p^p (2-p)^{2-p}.$$
Our plan is to use Taylor's theorem to write
$$K(p) = K(1) + K'(1)(p-1) + \frac{K''(\eta)}{2}(p-1)^2,$$
where $0 < p < \eta < 1$. The claim will follow if we can prove that $K(1) = K'(1) = 0$ and $K''(p) < 0$ for $0 < p < 1$. Hence we compute
$$K'(p) = -2 + 2p - p^p (2-p)^{2-p} \log \frac{p}{2-p},$$
$$K''(p) = 2 - p^p (2-p)^{2-p} \left( \log^2 \frac{p}{2-p} + \frac{2}{p(2-p)} \right).$$
Evidently, $K(1) = K'(1) = K''(1) = 0$. Hence we are done if we can prove that $K''$ is strictly increasing on $0 < p < 1$. This will follow once we verify that both
$$p^p (2-p)^{2-p} \qquad \text{and} \qquad \log^2 \frac{p}{2-p} + \frac{2}{p(2-p)}$$
are strictly positive and strictly decreasing on $0 < p < 1$. Strict positivity is obvious. The first function is strictly decreasing since
$$\frac{d}{dp}\, p^p (2-p)^{2-p} = p^p (2-p)^{2-p} \log \frac{p}{2-p}$$
and $\log(p/(2-p)) < 0$ for $0 < p < 1$. For the second function, we check that
$$\frac{d}{dp} \left( \log^2 \frac{p}{2-p} + \frac{2}{p(2-p)} \right) = \frac{4}{p(2-p)} \left( \log \frac{p}{2-p} + \frac{p-1}{p(2-p)} \right) < 0,$$
where for the final inequality we have again used that $\log(p/(2-p)) < 0$. □

We can finally wrap up the proof of Lemma 11.

Proof of Lemma 11. By Lemma 16 we know that $F_p'(\alpha) = 0$ has precisely one solution for $0 < \alpha < 1$. Since $F_p(0^+) = \infty$ and $F_p(1) = 0$, this implies that the equation $F_p(\alpha) = 0$ can have at most one solution on the interval $(0,1)$. Lemma 17 shows that there is exactly one solution, since $F_p(\alpha_2) < 0$. Let $\alpha_p$ denote this solution. Inspecting the endpoints again, we find that $F_p(\alpha) > 0$ for $0 < \alpha < \alpha_p$ and $F_p(\alpha) < 0$ for $\alpha_p < \alpha < 1$. Using Lemma 17 again we conclude that $\alpha_p < \alpha_2$, while the inequality $\alpha_1 < \alpha_p$ follows similarly from Lemma 18. □

References

1. Beneteau, C., Korenblum, B.: Some coefficient estimates for $H^p$ functions. In: Complex Analysis and Dynamical Systems, Contemporary Mathematics, vol. 364, pp. 5–14. American Mathematical Society, Providence (2004)
2. Bohr, H.: A theorem concerning power series. Proc. Lond. Math. Soc. 2(13), 1–5 (1914)
3. Brevig, O.F., Saksman, E.: Coefficient estimates for $H^p$ spaces with $0 < p < 1$. Proc. Am. Math. Soc. 148(9), 3911–3924 (2020)
4. Connelly, R.C.: Linear Extremal Problems in the Hardy Space $H^p$ for $0 < p < 1$. Master's thesis, University of South Florida (2017). http://scholarcommons.usf.edu/etd/6646
5. Duren, P.L., Romberg, B.W., Shields, A.L.: Linear functionals on $H^p$ spaces with $0 < p < 1$. J. Reine Angew. Math. 238, 32–60 (1969)
6. Duren, P.L.: Theory of $H^p$ Spaces. Pure and Applied Mathematics, vol. 38. Academic Press, New York (1970)
7. Hardy, G.H., Littlewood, J.E.: Some properties of fractional integrals. II. Math. Z. 34(1), 403–439 (1932)
8. Havinson, S.Y.: On some extremal problems of the theory of analytic functions. Moskov. Gos. Univ. Učenye Zapiski Matematika 148(4), 133–143 (1951)
9. Kabaila, V.: On some interpolation problems in the class $H^p$ for $p < 1$. Soviet Math. Dokl. 1, 690–692 (1960)
10. Khavinson, S.Y.: Two papers on extremal problems in complex analysis. Am. Math. Soc. Transl. 2(129), 1–114 (1986)
11. Macintyre, A.J., Rogosinski, W.W.: Extremum problems in the theory of analytic functions. Acta Math. 82, 275–325 (1950)
12. Martín, M.J., Sawyer, E.T., Uriarte-Tuero, I., Vukotić, D.: The Krzyż conjecture revisited. Adv. Math. 273, 716–745 (2015)
13. Riesz, F.: Über Potenzreihen mit vorgeschriebenen Anfangsgliedern. Acta Math. 42(1), 145–171 (1920)
14. Rogosinski, W.W., Shapiro, H.S.: On certain extremum problems for analytic functions. Acta Math. 90, 287–318 (1953)
15. Sarason, D.: Complex Function Theory, 2nd edn. American Mathematical Society, Providence (2007)

Publisher's Note. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
For 0 < p ≤∞ and 0 ≤ t ≤ 1, consider the extremal problem Communicated by Dmitri Khavinson. Sigrid Grepstad is supported by Grant 275113 of the Research Council of Norway. Sarah May Instanes is supported by the Olav Thon Foundation through the StudForsk program. B Sigrid Grepstad [email protected] Ole Fredrik Brevig [email protected] Sarah May Instanes [email protected] Department of Mathematics, University of Oslo, 0851 Oslo, Norway Department of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), No. 7491, Trondheim, Norway Published online: 12 September 2022 O. F. Brevig et al. (k) f (0) ( p, t ) = sup Re : f ≤ 1 and f (0) = t . (1) k H k! By a standard normal families argument, there are extremals f ∈ H attaining the supremum in (1) for every k ≥ 1 and every 0 ≤ t ≤ 1. A general framework for a class of extremal problems for H which includes (1) has been developed by Havinson [8], Kabaila [9], Macintyre and Rogosinski [11] and Rogosinski and Shapiro [14]. A particular consequence of this theory is that the structure of the extremals is well- known (see Lemma 4 below). For our extremal problem, it can be deduced directly from Parseval's identity that √ √ 2 2 (2, t ) = 1 − t and that the unique extremal is f (z) = t + 1 − t z . Similarly, the Schwarz–Pick inequality (see e.g. [15, VII.17.3]) shows that (∞, t ) = 1 − t and that the unique extremal is f (z) = (t + z)/(1 + tz). This served as the starting point for Beneteau and Korenblum [1], who studied the extremal problem (1)inthe range 1 ≤ p ≤∞. We will enunciate their results in Sects. 4 and 5,but fornow we present a brief account of their approach. Thefirststepin[1] is to compute ( p, t ) and identify an extremal function. This is achieved by interpolating between the two cases p = 2 and p =∞ mentioned above, facilitated by the inner-outer factorization of H functions. It follows from the argument that the extremal function thusly obtained is unique. The second step in [1] is to show that ( p, t ) = ( p, t ) for every k ≥ 2using k 1 a trick attributed to F. Wiener [2], which we shall now recall. Set ω = exp(2π i /k) and suppose that f (z) = a z . F. Wiener's trick is based on the transform n≥0 k−1 ∞ kn W f (z) = f (ω z) = a z . (2) k kn j =0 n=0 p p The triangle inequality yields that W f ≤ f for f ∈ H if 1 ≤ p ≤∞. k H H Hence, if f is an extremal function for ( p, t ), then f (z) = f (z ) is an extremal 1 1 k 1 function for ( p, t ) and consequently ( p, t ) = ( p, t ). Note that this argument k k 1 does not guarantee that the extremal f is unique for ( p, t ). k k We are interested in the extremal problem (1)for 0 < p < 1 and whether the extremal identified using F. Wiener's trick above for 1 ≤ p ≤∞ is unique. We shall obtain the following general result, which may be of independent interest. Theorem 1 Fix k ≥ 2 and suppose that 0 < p ≤∞. Let W denote the F. Wiener transform (2). The inequality 1/ p−1 W f p ≤ max k , 1 f p k H H is sharp. Moreover, equality is attained if and only if (a) f ≡ 0 when 0 < p < 1, (b) W f = f when 1 < p < ∞. 123 p F. Wiener's Trick and an Extremal Problem for H The upper bound in the estimate is easily deduced from the triangle inequality. Hence, the novelty of Theorem 1 is that the inequality is sharp for 0 < p < 1, and the statements (a) and (b). In Sect. 3, we also present examples of functions in H and H which attain equality in Theorem 1, but for which W f = f . However, we will conversely establish that if both f and W f are inner functions, then f = W f . 
k k To illustrate the role played by the F. Wiener transform in various coefficient related extremal problems, we first recall that the estimate W f ≤ f was originally k ∞ ∞ used by F. Wiener to resolve a problem posed by Bohr [2] and compute the so-called Bohr radius for H . We also know from [12, Sect. 1.7] that the Krzyz˙ conjecture on the maximal magnitude of the kth coefficient in the power series expansion of a non-vanishing function with f = 1 is equivalent to the assertion that if f is an extremal for the corresponding extremal problem, then f = W f . As far as we are aware, the Krzyz˙ conjecture remains open for k ≥ 6. Theorem 1 shows that the extremal for ( p, t ) is unique when 1 < p < ∞.We shall see in Sect. 5 that the extremal problem ( p, t ) with k ≥ 2 and 1 ≤ p ≤∞ has a unique extremal except for when p = 1 and 0 ≤ t < 1/2. In the range 0 < p < 1 with k = 1, the extremal problem (1) has been studied −1/ p by Connelly [4, Sect. 4], who resolved the problem in the cases 0 ≤ t < 2 and −1/ p 1/ p−1/2 2 p(2 − p) < t ≤ 1. Connelly also states conjectures on the behavior −1/ p −1/ p 1/ p−1/2 of ( p, t ) in the range 2 ≤ t ≤ 2 p(2 − p) . The conjectures are based on numerical analysis (see [4, Sect. 5]). In Sect. 4, we will extend Connelly's result to the full range 0 ≤ t ≤ 1. Our result demonstrates that for each 0 < p < 1 there is a unique 0 < t < 1/2 such that the extremal for ( p, t ) is not unique, thereby confirming the above-mentioned 1 p conjectures. Brevig and Saksman [3] have recently studied the extremal problem (k) f (0) ( p) = sup Re : f p ≤ 1 k H k! for 0 < p < 1. It is observed in [3, Sect. 5.3] that ( p) = max ( p, t ).In k 0≤t ≤1 k particular, the maxima of ( p, t ) for 0 ≤ t ≤ 1is 1/ p p 2 ( p) = 1 − √ 2 p(2 − p) 1/ p and this is attained for t = (1 − p/2) . From the main result in [1], it is easy to see that t → ( p, t ) is a decreasing function from ( p, 0) = 1to ( p, 1) = 0 1 1 1 when 1 ≤ p ≤∞. Similarly, our main result shows that ( p, t ) is increasing from ( p, 0) = 1 to the maxima mentioned above, then decreasing to ( p, 1) = 0. 1 1 Figure 1 contains the plot of t → ( p, t ) for several values 0 < p ≤∞, which illustrates this difference between 0 < p < 1 and 1 ≤ p ≤∞. 123 O. F. Brevig et al. Fig. 1 Plot of the curves t → ( p, t ) for p = 1/2, p = 1, p = 2and p =∞ Another difference between 0 < p < 1 and 1 ≤ p ≤∞ appears when we consider k ≥ 2. Recall that in the latter case, we have ( p, t ) = ( p, t ) for every k ≥ 2 and k 1 every 0 ≤ t ≤ 1. In the former case, we only get from Theorem 1 that 1/ p−1 ( p, t ) ≤ ( p, t ) ≤ k ( p, t ). (3) 1 k 1 Theorem 1 also shows that the upper bound in (3) is attained if and only if t = 1, since trivially ( p, 1) = 0 for every 0 < p ≤∞. However, by adapting an example due to Hardy and Littlewood [7], it is easy to see that if 0 < p < 1 and 0 ≤ t < 1 are fixed, then the exponent 1/ p − 1in(3) cannot be improved as k →∞.Inthe final section of the paper, we present some evidence that the lower bound in (3) can be attained for sufficiently large t,if k ≥ 2 and 0 < p < 1 are fixed. Organization The present paper is organized into five additional sections and one appendix. In Sect. 2, we collect some preliminary results pertaining to H and the structure of extremals for ( p, t ). Section 3 is devoted to F. Wiener's trick and the proof of Theorem 1. A complete solution to the extremal problem ( p, t ) for 0 < p ≤∞ and 0 ≤ t ≤ 1 is presented in Sect. 4. In Sect. 
5, we consider ( p, t ) for k ≥ 2 and 1 ≤ p ≤∞ and study when the extremal is unique. Section 6 contains some remarks on ( p, t ) for k ≥ 2 and 0 < p < 1. "Appendix A" contains the proof of a crucial lemma needed to resolve the extremal problem ( p, t ) for 0 < p < 1. 123 p F. Wiener's Trick and an Extremal Problem for H 2 Preliminaries Recall that for 0 < p < ∞, the Hardy space H consists of the analytic functions f in D for which the limit of integral means 2π dθ i θ p f = lim | f (re )| 2π r →1 is finite. H is the space of bounded analytic functions in D, endowed with the norm f = sup | f (z)|. It is well-known (see e.g. [6]) that H is a Banach space |z|<1 when 1 ≤ p ≤∞ and a quasi-Banach space when 0 < p < 1. In the Banach space range 1 ≤ p ≤∞, the triangle equality is p p p f + g ≤ f + g . (4) H H H The Hardy space H is strictly convex when 1 < p < ∞, which means that it is impossible to attain equality in (4) unless g ≡ 0or f = λg for a non-negative constant λ. H is not strictly convex for p = 1 and p =∞, so in this case there are other ways to attain equality in (4). In the range 0 < p < 1, the triangle inequality takes the form p p p f + g ≤ f + g , (5) p p p H H H so here H is not even locally convex [5]. Our first goal is to establish that the triangle inequality (5) is not attained unless f ≡ 0or g ≡ 0. This result is probably known to experts, but we have not found it in the literature. If f ∈ H for some 0 < p ≤∞, then the boundary limit function ∗ i θ i θ f (e ) = lim f (re ) (6) r →1 ∗ p p exists for almost every θ. Moreover, f ∈ L = L ([0, 2π ]) and 1/ p 2π dθ ∗ ∗ i θ p p f = f = f (e ) H L 2π ∗ i θ if 0 < p < ∞ and f ∞ = ess sup | f (e )|. For simplicity, we henceforth omit the asterisk and write f = f with the limit (6) in mind. Lemma 2 Fix 0 < p < 1 and suppose that f , g ∈ H .If p p p f + g = f + g p p p H H H then either f ≡ 0 or g ≡ 0. 123 O. F. Brevig et al. Proof We begin by looking at equality in the triangle inequality for L in the range 0 < p < 1. Here we have 2π dθ i θ i θ f + g = f (e ) + g(e ) 2π 2π dθ p p i θ p i θ p ≤ | f (e )| +|g(e )| = f + g . p p L L 2π p p p We used the elementary estimate |z + w| ≤|z| +|w| for complex numbers z,w and 0 < p < 1. It is easily verified that this estimate is attained if and only if zw = 0. Consequently, p p p f + g = f + g p p p L L L i θ i θ if and only if f (e )g(e ) = 0 for almost every θ.Itiswell-known (see[6,Thm.2.2]) that the only function h ∈ H whose boundary limit function (6) vanishes on a set of positive measure is h ≡ 0. Hence we conclude that either f ≡ 0or g ≡ 0. Let us next establish a standard result on the structure of the extremals for the extremal problem (1). The first step is the following basic result. Lemma 3 If f ∈ H is extremal for ( p, t ), then f = 1. k H Proof Suppose that f ∈ H is extremal for ( p, t ) but that f p < 1. For ε> 0, k H set g(z) = f (z) + εz . Note that g(0) = f (0) = t for any ε> 0. If 1 ≤ p ≤∞, then p p g ≤ f + ε< 1 H H for sufficiently small ε> 0. If 0 < p < 1, then p p g ≤ f + ε < 1, p p H H again for sufficiently small ε> 0, so g < 1. In both cases we find that (k) (k) g (0) f (0) = + ε, k! k! which contradicts the extremality of f for ( p, t ). k k Let (n ) denote a sequence of distinct non-negative integers and let (w ) j j j =1 j =1 denote a sequence of complex numbers. A special case of the Carathéodory–Fejér problem is to determine the infimum of f over all f ∈ H which satisfy (n ) (0) = w , (7) n ! 123 p F. Wiener's Trick and an Extremal Problem for H for j = 1,..., k. 
Set k = max n .If f is an extremal for the Carathéodory– 1≤ j ≤k j Fejér problem (7), then there are complex numbers |λ |≤ 1for j = 1,..., k and a constant C such that l k λ − z j 2/ p f (z) = C 1 − λ z (8) 1 − λ z j =1 j =1 for some 0 ≤ l ≤ k, and the strict inequality |λ | < 1 holds for 0 < j ≤ l.In(8) and in similar formulas to follow, we adopt the convention that in the case l = 0 the first product is empty and considered to be equal to 1. For 1 ≤ p ≤∞, this result is independently due to Macintyre and Rogosinski [11] and Havinson [8], while in the range 0 < p < 1 the result is due to Kabaila [9]. An exposition of these results can be found in [6, Ch. 8] and [10, pp. 82–85], respectively. Using Lemma 3, we can establish that the extremals of the extremal problem ( p, t ) have to be of the same form. Lemma 4 If f ∈ H is extremal for ( p, t ), then there are complex numbers |λ |≤ 1 k j for j = 1,..., k and a constant C such that l k λ − z j 2/ p f (z) = C 1 − λ z . 1 − λ z j =1 j =1 for some 0 ≤ l ≤ k, and the strict inequality |λ | < 1 holds for 0 < j ≤ l. Proof Suppose that f is extremal for ( p, t ) and consider the Carathéodory–Fejér problem with conditions (k) f (0) f (0) = t and = ( p, t ). (9) k! We claim that f is an extremal for the Carathéodory–Fejér problem (9). If it is not, then there must be some f ∈ H with f < 1 which satisfies (9). However, this contradicts Lemma 3. Hence the extremal is of the stated form by (8). 3 F. Wiener's Trick Recall from (2) that if f (z) = a z and ω = exp(2π i /k), then n k n≥0 k−1 ∞ kn W f (z) = f (ω z) = a z . k kn j =0 n=0 We begin by giving two examples showing that W f p = f p may occur for k H H f such that W f = f when p = 1or p =∞. 123 O. F. Brevig et al. 2k 1 Example 5 Let k ≥ 2 and consider f (z) = (1 + z) in H . By the binomial theorem, we find that 2k 2k f (z) = z , n=0 2k k 2k W f (z) = 1 + z + z . Note that f = W f since k ≥ 2. By another application of the binomial theorem and a well-known identity for the central binomial coefficient, we find that k 2k 1/2 2 f 1 = f = = . n k n=0 Moreover, 2π 2k dθ i θ ikθ = W f (e ) e ≤ W f k k k 2π by the triangle inequality. Hence 2k 2k ≤ W f ≤ f = , 1 1 H H k k so W f 1 = f 1. H H k 2 k 2 ∞ Example 6 Let k ≥ 2 and consider f (z) = (1+ z ) − z(1− z ) in H . It is clear that k 2 W f (z) = (1 + z ) = f (z) since k ≥ 2. Moreover W f = 4. The supremum k k H is attained for z = ω for j = 0, 1,..., k − 1. We next compute 2 2 kθ kθ i θ ikθ i θ ikθ ikθ 2 i θ 2 f (e ) = 1 + e − e 1 − e = 4e cos + e sin . 2 2 Consequently, f = 4 and here the supremum is attained for z = ω for 2k j = 0, 1,..., 2k − 1. Proof of Theorem 1 It follows from the triangle inequality (4) that p p W f ≤ f (10) k H H for every f ∈ H if 1 ≤ p ≤∞. In the range 0 < p < 1, we get from the triangle inequality (5) the estimate 1/ p−1 W f p ≤ k f p (11) k H H 123 p F. Wiener's Trick and an Extremal Problem for H for every f ∈ H . Combining (10) and (11), we have established that 1/ p−1 p p W f ≤ max k , 1 f . k H H This is trivially attained for f (z) = z when 1 ≤ p ≤∞. We need to show that the 1/ p−1 upper bound k cannot be improved when 0 < p < 1 to finish proof of the first part of the theorem. −1/ p Let ε> 0 and consider f (z) = (z − (1 + ε)) . Clearly f p →∞ as ε ε H ε → 0 . Moreover 2π 1 dθ f = i θ |e − (1 + ε)| 2π 1 dθ 6 dθ ≤ + i θ 2 |e − (1 + ε)| 2π θ 2π |θ |<π/k |θ |≥π/k 1 dθ 6k ≤ + , i θ 2 |e − (1 + ε)| 2π π |θ |<π/k from which we conclude that 1 dθ f = + O(1). 
(12) i θ |e − (1 + ε)| 2π |θ |<π/k Furthermore, k−1 k−1 i (θ +2πl/k) f e dθ W f = k ε 2π |θ −2π j /k|<π/k j =0 l=0 k−1 dθ 6k − p i (θ +2π j /k) ≥ k f e 2π π |θ −2π j /k|<π/k j =0 − p+3 1 dθ 6k − p+1 = k − . i θ 2 |e − (1 + ε)| 2π π |θ |<π/k By (12) we find that W f k ε p H 1− p lim ≥ k . ε→0 f ε p 1/ p−1 Hence, the constant k in (11) cannot be replaced by any smaller quantity. We next want to show that (a) and (b) holds. For a function f ∈ H , define p p f (z) = f (ω z) for j = 0, 1,..., k − 1 and recall that f = f . j H j H 1/ p−1 p p We begin with (a). Suppose that W f = k f , which we can refor- k H H mulate as p p p p f + f + ··· + f = f + f + ··· + f . p p p p 0 1 k−1 0 1 k−1 H H H H 123 O. F. Brevig et al. By Lemma 2, the triangle inequality can be attained if and only if at least k − 1ofthe k functions f are identically equal to zero. Evidently this is possible if and only if f ≡ 0. p p For (b), we suppose that f ∈ H is such that W f = f . We need to k H H prove that W f = f .If f ≡ 0 there is nothing to do. As in the proof of (a), we note p p that W f = f can be reformulated as k H H f + f + ··· + f p = f p + f p + ··· + f p . 0 1 k−1 H 0 H 1 H k−1 H p p Viewing H as a subspace of L , the strict convexity of the latter implies that there are non-negative constants λ for j = 1, 2,..., k − 1 such that f = f = λ f = ... = λ f . 0 1 1 k−1 k−1 We shall only look at f = λ f which for f (z) = a z is equivalent to 1 1 n n≥0 ∞ ∞ n n n a z = λ a ω z . n 1 n n=0 n=0 Using W on this identity we get ∞ ∞ kn kn a z = λ a z . kn 1 kn n=0 n=0 This is only possible if λ = 1or W f ≡ 0. The latter implies that f ≡ 0 since 1 k p p W f = f by assumption. Therefore we can restrict our attention to the k H H case λ = 1. For all integers n that are not a multiple of k, we now find that a = λ ω a ⇒ a = 0, n 1 n n since λ = 1 and ω = 1. Hence W f = f as desired. 1 k p i θ Recall that a function f ∈ H is called inner if | f (e )|= 1 for almost every θ. We shall require the following simple result later on. Lemma 7 If both f and W f are inner functions, then f = W f. k k i θ i θ Proof Since |W f (e )|=| f (e )|= 1 for almost every θ, we get from (2) that k−1 k−1 1 1 i θ i θ i θ 1 = W f (e ) = f (e ) = | f (e )|, (13) k j j k k j =0 j =0 where f (z) = f (ω z). The equality on the right hand side of (13) is possible if and only if i θ i θ i θ f (e ) = f (e ) = ... = f (e ) 1 k−1 123 p F. Wiener's Trick and an Extremal Problem for H for almost every θ. As in the proof of Theorem 1 (b), we find that f = W f . 4 The Extremal Problem 8 (p, t) for 0 <p ≤∞ In the present section, we resolve the extremal problem (1) in the case k = 1 com- pletely. We begin with the case 1 ≤ p ≤∞ which has been solved by Beneteau and Korenblum [1]. We give a different proof of their result based on Lemma 4, mainly to illustrate the differences between the cases 0 < p < 1 and 1 ≤ p ≤∞. Theorem 8 (Beneteau–Korenblum) Fix 1 ≤ p ≤∞ and consider (1) with k = 1. −1/ p (i) If 0 ≤ t < 2 ,let α denote the unique real number in the interval 0 ≤ α< 1 2 −1/ p such that t = α(1 + α ) . Then 1 2 ( p, t ) = 1 + − 1 α , 1/ p 2 p 1 + α and the unique extremal is 2/ p α + z (1 + αz) f (z) = . 1/ p 1 + αz 2 1 + α −1/ p (ii) If 2 ≤ t ≤ 1,let β denote the unique real number in the interval 0 ≤ β ≤ 1 2 −1/ p such that t = (1 + β ) . Then 1 2β ( p, t ) = , 1/ p 2 p 1 + β and the unique extremal is 2/ p (1 + βz) f (z) = . 
1/ p 1 + β Proof Note that since k = 1, there are only two possibilities for the extremals in Lemma 4.Theyare 2/ p α + z (1 + αz) f (z) = , 0 ≤ α< 1, (14) 1/ p 1 + αz 2 1 + α 2/ p (1 + βz) f (z) = , 0 ≤ β ≤ 1. (15) 1/ p 1 + β 123 O. F. Brevig et al. Here we have made α, β ≥ 0 by rotations. Note that if p =∞, then f does not depend on β. Moreover, t = f (0) = , (16) 2 1/ p (1 + α ) t = f (0) = . (17) 2 1/ p (1 + β ) For 1 ≤ p ≤∞ it is easy to verify that the function → (18) 2 1/ p (1 + α ) −1/ p is strictly increasing on 0 ≤ α< 1 and maps [0, 1) to [0, 2 ). Similarly, for 1 ≤ p < ∞ we find that the function → (19) 2 1/ p (1 + β ) −1/ p is strictly decreasing on 0 ≤ β ≤ 1 and maps [0, 1] to [2 , 1]. Consequently, −1/ p if 0 ≤ t < 2 , then the unique extremal is (14) with α given by (16), and if −1/ p 2 ≤ t ≤ 1, then the unique extremal is (15) with β given by (17). The proof is completed by computing 1 2 1 2 f (0) = 1 + α − 1 = t + α − 1 , (20) 1/ p p α p 1 + α 1 2β 2β f (0) = = t , (21) 2 1/ p (1 + β ) p p to obtain the stated expressions for ( p, t ) in (i) and (ii), respectively. Define α and β as functions of t implicitly through (16) and (17). Then α is increas- −1/ p −1/ p ingon0 ≤ t < 2 and β is decreasing on 2 ≤ t ≤ 1. Inspecting the left hand side of (20) and (21), we extract the following result. Corollary 9 If 1 ≤ p ≤∞, then the function t → ( p, t ) is decreasing and takes the values [0, 1]. In the range 0 < p < 1 a more careful analysis is required. This is due to the fact that the function (18) is increasing on the interval 0 ≤ α ≤ α and decreasing on the interval α ≤ α< 1, where α = . (22) 2 − p −1/ p −1/ p 1/ p−1/2 Inspecting (16), we conclude that for each 2 < t < 2 p(2 − p) there are two possible α-values which give the same t = f (0).Let α denote the 1 1 123 p F. Wiener's Trick and an Extremal Problem for H unique real number in the interval (0, 1) such that 1 + α = 2α . (23) −1/ p Note that α gives the value t = 2 in (16). Lemma 10 If α <α <α and α < α< 1 produce the same t = f (0) in (16), then 1 2 2 1 the quantity f (0) from (20) is maximized by α. Proof Since α and α give the same t = f (0) in (20), we only need to prove that 1 α 1 α + > + . (24) 2 2 α α α α 2 2 Fix α <α <α . The unique number α <ξ < 1 such that 1 2 2 1 α 1 ξ + = + 2 2 α ξ α α 2 2 is ξ = α /α. Since the function 1 x → + is increasing for x >α it is sufficient to prove that ξ> α to obtain (24). Since 2 1/ p (1 + x ) is decreasing for x >α , we see that ξ> α if and only if α ξ α > ⇐⇒ > . 2 1/ p 2 1/ p 2 1/ p 1/ p (1 + α ) (1 + ξ ) (1 + α ) 2 1 + Here we used that α and α give the same t = f (0) in (16) on the left hand side and the identity ξ = α /α on the right hand side. We now substitute α = α x for 0 < x < 1 to obtain the equivalent inequality x 1 > . (25) 2 1/ p 1/ p (1 + α x ) 1 + Actually, we only need to consider (α /α ) < x < 1, but the same proof works for 1 2 1− p 0 < x < 1. We raise both sides of (25) to the power p, multiply by x and rearrange 123 O. F. Brevig et al. to get the equivalent inequality F (x)> 0 where 1− p 2 2− p F (x ) = x − x + α 1 − x . Recalling that α = p/(2 − p), we compute − p 1− p − p−1 − p F (x )= 1 − (1 − p)x − px and F (x )= p(1 − p)x − p(1 − p)x . Since F (1) = F (1) = 0, we get from Taylor's theorem that for every 0 < x < 1 there is some x <η < 1 such that F (η) p(1 − p) 2 − p −1 2 F (x ) = (x − 1) = η η − 1 (x − 1) > 0, 2 2 which completes the proof. 
By Lemma 10, we now only need to compare f (0) from (20)for α ≤ α ≤ α 1 2 with f (0) from (21)for β such that f (0) = t = f (0). Inspecting (16) and (17), we 1 2 find that α 1 1 + α = ⇐⇒ β = − 1. (26) 1/ p 1/ p p 2 2 α 1 + α 1 + β Next, we consider the equation f (0) = f (0) with β as in (26). Inspecting (20) and 1 2 (21) and dividing by t, we get the equation 1 2 2β 2 1 + α + α − 1 = = − 1. (27) α p p p α We square both sides, multiply by p and rearrange to find that (27) is equivalent to the equation F (α) = 0, where 2 −2 2 2 − p 2− p F (α) = p α + 2 p(2 − p) + (2 − p) α − 4 α + α − 1 . (28) Suppose that α ≤ α ≤ α .If 1 2 • F (α) > 0, then f from (14) is the unique extremal for ( p, t ). p 1 1 • F (α) = 0, then f from (14) and f from (15) are extremals for ( p, t ). p 1 2 1 • F (α) < 0, then f from (15) is the unique extremal for ( p, t ). p 2 1 Note that any solutions of F (α) = 0 with 0 <α <α are of no interest since this p 1 implies that β> 1by(26). Similarly, any solutions of F (α) = 0 with α <α < 1 p 2 can be ignored due to Lemma 10. The following result shows that there is only one solution, which is in the pertinent range. 123 p F. Wiener's Trick and an Extremal Problem for H Lemma 11 Let F be as in (28). The equation F (α) = 0 has a unique solution, p p denoted α , on the interval (0, 1). Moreover, (a) if 0 <α <α , then F (α) > 0. p p (b) if α <α < 1, then F (α) < 0. p p (c) α <α <α where α and α are from (23) and (22), respectively. 1 p 2 1 2 The proof of Lemma 11 is a rather laborious calculus exercise, which we postpone to "Appendix A" below. Let α be as in Lemma 11 and define t = . (29) 1/ p 1 + α −1/ p −1/ p 1/ p−1/2 Note that 2 < t < 2 p(2 − p) by the fact that α <α <α . p 1 p 2 By the analysis above, Lemma 10 and Lemma 11, we obtain the following version of Theorem 8 in the range 0 < p < 1. Theorem 12 Fix 0 < p < 1 and consider (1) with k = 1. Let t be as in (29) and set α = p/(2 − p). (i) If 0 ≤ t ≤ t ,let α denote the unique real number in the interval 0 ≤ α< α p 2 2 −1/ p such that t = α(1 + α ) . Then 1 2 ( p, t ) = 1 + − 1 α , 1/ p 2 p 1 + α and an extremal is 2/ p α + z (1 + αz) f (z) = . 1/ p 1 + αz 1 + α (ii) If t ≤ t ≤ 1,let β denote the unique real number in the interval 0 ≤ β ≤ 1 such 2 −1/ p that t = (1 + β ) . Then 1 2β ( p, t ) = , 1/ p 2 p 1 + β and an extremal is 2/ p (1 + βz) f (z) = . 1/ p 1 + β The extremals are unique for 0 ≤ t = t ≤ 1. The only extremals for ( p, t ) are p 1 p the functions given in (i) and (ii). 123 O. F. Brevig et al. Fig. 2 Plot of the curve p → t . Points ( p, t ) above and below the curve correspond to the cases (i) and (ii) −1/ p −1/ p 1/ p−1/2 of Theorem 12, respectively. The estimates 2 < t < 2 p(2 − p) are represented by dotted curves. In the shaded area and in the range 1/2 ≤ t ≤ 1, Theorem 12 is originally due to Connelly [4] Theorem 12 extends [4, Theorem 4.1] to general 0 ≤ t ≤ 1. The analysis in [4]is similar to ours, and we are able to also identify the extremals in the range −1/ p −1/ p 1/ p−1/2 2 ≤ t ≤ 2 p(2 − p) due to Lemma 10 and Lemma 11. It is also demonstrated in [4, Thm. 4.1] that when p = 1/2 there must exist at least one value of 0 < t < 1 for which the extremal is not unique. Theorem 12 shows that there is precisely one such t and that this observation is not specific to p = 1/2, but in fact holds for any 0 < p < 1. Figure 2 shows the value t for which the extremal is not unique as a function of p. Inspecting Theorem 12, we get the following result similarly to how we extracted Corollary 9 from Theorem 8. 
Corollary 13 If 0 < p < 1, then the function t → ( p, t ) is increasing from ( p, 0) = 1 to 1/ p 1/ p p p 2 p, 1 − = 1 − √ 2 2 p(2 − p) and then decreasing to ( p, 1) = 0. 123 p F. Wiener's Trick and an Extremal Problem for H 5 The Extremal Problem 8 (p, t) for k ≥ 2and 1 ≤ p ≤∞ We begin by recalling how F. Wiener's trick was used in [1] to obtain the solution to the extremal problem ( p, t ) for k ≥ 2 from Theorem 8. Theorem 14 (Benetau–Korenblum) Let k ≥ 2 be an integer. For every 1 ≤ p ≤∞ and every 0 ≤ t ≤ 1, ( p, t ) = ( p, t ). k 1 If f is the extremal function for ( p, t ), then f (z) = f (z ) is an extremal function 1 1 k 1 for ( p, t ). p p Proof Suppose that f is an extremal for ( p, t ). Since W f ≤ f , k k H H (k) (k) f (0) (W f ) (0) f (0) = W f (0) and = , k! k! we conclude that W f is also an extremal for ( p, t ). Thus we may restrict our k k k p attention to extremals f of the form f (z) = f (z ) for f ∈ H . The stated claims k k p p now follow at once from Theorem 8, since f = f . k H H The purpose of the present section is to answer the following question. For which trios k ≥ 2, 1 ≤ p ≤∞ and 0 ≤ t ≤ 1 is the extremal for ( p, t ) unique? Note that while Theorem 14 provides an extremal f (z) = f (z ) where f denotes the k 1 1 extremal from (the statement of) Theorem 8, it might not be unique. In the case 1 < p < ∞ it follows at once from Theorem 1 (b) that this extremal is unique, although it is perhaps easier to use the strict convexity of H and Lemma 3 directly. Since H is not strictly convex for p = 1 and p =∞, these cases require further analysis. Note that the case (a) below is certainly known to experts as a conse- quence of the general theory developed in [8, 11, 14]. Theorem 15 Consider the extremal problem (1) for k ≥ 2 and 1 ≤ p ≤∞. (a) If 1 < p ≤∞, then the unique extremal is f (z) = f (z ). k 1 (b) If p = 1 and 1/2 ≤ t ≤ 1, then the unique extremal is f (z) = f (z ). k 1 (c) If p = 1 and 0 ≤ t < 1/2, then the extremals are the functions of the form f (z) = C λ − z 1 − λ z j j j =1 (k) with |λ |≤ 1 such that f 1 = 1,f (0) = t and f (0)> 0. Proof of Theorem 15(a) In view of the discussion above, we need only consider the case p =∞. By Lemma 4, we know that any extremal must be of the form λ − z i θ f (z) = e (30) 1 − λ z j =1 123 O. F. Brevig et al. for some 0 ≤ l ≤ k, constants λ ∈ D and θ ∈ R.If f is extremal for (∞, t ), then j k so is W f by Theorem 14. Consequently, W f is also of the form (30). In particular, k k since both f and W f are inner, we get from Lemma 7 that f = W f .Fromthe k k definition of W , we know that f (z) = W f (z) = g(z ) for some analytic function k k g. This shows that the only possibility in (30)is λ − z i θ f (z) = e 1 − λz for some λ ∈ D and θ ∈ R. The unique extremal has θ = π and λ =−t. Proof of Theorem 15(b) Suppose that f is extremal for (1, t ). By rotations, we extend our scope to functions f such that | f (0)|= t. In this case, we can use Lemma 4 and write f = gh for l k g(z) = C (z + α ) (1 + α z), j j j =1 j =l+1 h(z) = C (1 + α z). j =1 The constant C > 0 satisfies j j j 1 2 k = α α ··· α , 1 2 k j =0 j + j +···+ j = j 1 2 k where j , j ,..., j take only the values 0 and 1. Evidently g 2 = h 2 = 1. Set 1 2 k H H A =|α ··· α | and B =|α ··· α |. By keeping only the terms j = 0 and j = k l 1 l l l+1 k we obtain the trivial estimate 2 2 2 ≥ 1 +|α α ··· α | = 1 + A B . (31) 1 2 k l l We will adapt an argument due to F. Riesz [13] to get some additional information on the relationship between g and h. 
Write
\[
f(z) = \sum_{j=0}^{2k} a_j z^j, \qquad g(z) = \sum_{j=0}^{k} b_j z^j \qquad\text{and}\qquad h(z) = \sum_{j=0}^{k} c_j z^j,
\]
and note that $|b_0| = t/|c_0| = t/C$. By the Cauchy product formula we find that
\[
a_k = \sum_{j=0}^{k} b_j c_{k-j} = \frac{c_k b_0}{C\,|b_0|}\, t + \sum_{j=1}^{k} b_j c_{k-j}. \tag{32}
\]
Suppose that $\tilde g \in H^2$ satisfies $|\tilde g(0)| = t/C$ and $\|\tilde g\|_2 \le 1$. Define $\tilde f = \tilde g h$. The Cauchy–Schwarz inequality shows that $\|\tilde f\|_1 \le 1$, so the extremality of $f$ implies that $|\tilde a_k| \le |a_k|$. Inspecting (32) and using the Cauchy–Schwarz inequality, we find that the optimal $\tilde g$ must therefore satisfy
\[
\tilde g(z) = \frac{t}{C}\,\frac{\overline{c_k}}{|c_k|} + \sqrt{\frac{1 - \frac{t^2}{C^2}}{1 - |c_k|^2}}\, \sum_{j=1}^{k} \overline{c_{k-j}}\, z^j, \tag{33}
\]
where we used that $\|h\|_2 = 1$. Using that $c_0 = C$, we compare the coefficients for $z^k$ in (33) with the definition of $g$, to find that
\[
\sqrt{\frac{1 - \frac{t^2}{C^2}}{1 - |c_k|^2}}\; C = C \prod_{j=l+1}^{k} \alpha_j \qquad\Longrightarrow\qquad \frac{1 - \frac{t^2}{C^2}}{1 - |c_k|^2} = B_l^2.
\]
Next we insert $t = C^2 A_l$ from the definition of $f = gh$ and $|c_k| = C A_l B_l$ from the definition of $h$ to obtain
\[
\frac{1 - C^2 A_l^2}{1 - C^2 A_l^2 B_l^2} = B_l^2 \qquad\Longleftrightarrow\qquad \frac{(1 - B_l^2)\left(1 - C^2 A_l^2 (1 + B_l^2)\right)}{1 - C^2 A_l^2 B_l^2} = 0. \tag{34}
\]
The additional information we require is encoded in the equation on the right hand side of (34).

Suppose that $l \ge 1$. Evidently $A_l < 1$, since $|\alpha_j| < 1$ for $j = 1, \ldots, l$ by Lemma 4. It follows that the second factor on the right hand side of (34) can never be 0, since the trivial estimate (31) implies that
\[
C^2 \le \frac{1}{1 + A_l^2 B_l^2} < \frac{1}{A_l^2 (1 + B_l^2)}. \tag{35}
\]
From the right hand side of (34) we thus find that $B_l = 1$, which shows that $C^2 < 1/(2A_l)$ by (35). Since $t = C^2 A_l$, we conclude that $0 \le t < 1/2$. By the contrapositive, we have established that if $1/2 \le t \le 1$, then the extremal for $\Phi_k(1,t)$ has $l = 0$. In this case $A_0 = 1$ by definition, which shows that $C^2 = t$. The right hand side of (34) becomes
\[
\frac{(1 - B_0^2)\left(1 - t(1 + B_0^2)\right)}{1 - t B_0^2} = 0,
\]
so either $B_0 = 1$ or $B_0^2 = 1/t - 1$. Returning to the definition of $h$ we find that $|c_0|^2 = t$ and $|c_k|^2 = t B_0^2$. Consequently,
\[
1 = \|h\|_2^2 = t\left(1 + B_0^2\right) + \sum_{j=1}^{k-1} |c_j|^2.
\]
Since $1/2 \le t \le 1$, we find that both $B_0 = 1$ and $B_0^2 = 1/t - 1$ will imply that $c_j = 0$ for $j = 1, \ldots, k-1$. Thus $h(z) = \sqrt{t} + \sqrt{1-t}\, z^k$. When $l = 0$ we have $g = h$, which shows that the unique extremal is
\[
f(z) = \left(\sqrt{t} + \sqrt{1-t}\, z^k\right)^2,
\]
which is of the form $f_k(z) = f_1(z^k)$ as claimed. □

Proof of Theorem 15 (c) In the case $0 \le t < 1/2$, we know from Theorem 8 and Theorem 14 that $\Phi_k(1,t) = 1$. See also Figure 1. The stated claim follows from Exercise 3 on page 143 of [6] by scaling and rotating the function
\[
f(z) = C \prod_{j=1}^{k} (\lambda_j - z)\left(1 - \overline{\lambda_j}\, z\right)
\]
to satisfy the conditions $\|f\|_1 = 1$, $f^{(k)}(0) > 0$ and $f(0) > 0$. If the resulting function satisfies $f(0) = t$, then it is an extremal for $\Phi_k(p,t)$ and every extremal is obtained in this way. (This can be established similarly to the case (b) above.) □

6 The Extremal Problem $\Phi_k(p,t)$ for $k \ge 2$ and $0 < p < 1$

The purpose of this final section is to record some observations pertaining to the extremal problem (1) in the unresolved case $k \ge 2$ and $0 < p < 1$. Suppose that $k \ge 0$ and consider the related extremal problem
\[
\Psi_k(p) = \sup\left\{ \operatorname{Re} \frac{f^{(k)}(0)}{k!} \;:\; \|f\|_{H^p} \le 1 \right\}.
\]
Evidently, $\Psi_0(p) = 1$ for every $0 < p \le \infty$ and the unique extremal is $f(z) = 1$. Recall (from [3] or [9]) that the extremals for $\Psi_k$ satisfy a structure result identical to Lemma 4. Note that the parameter $l$ in Lemma 4 describes the number of zeroes of the extremal in $\mathbb{D}$. Conjecture 1 from [3, Sect. 5] states that the extremal for $\Psi_k(p)$ does not vanish in $\mathbb{D}$ when $0 < p < 1$. The conjecture has been verified in the cases $k = 0, 1, 2$ and for $(k,p) = (3, 2/3)$.
Let us now suppose that $k \ge 1$. There are two obvious connections between the extremal problems $\Phi_k$ and $\Psi_k$. Namely,
\[
\Phi_k(p,0) = \Psi_{k-1}(p) \qquad\text{and}\qquad \max_{0 \le t \le 1} \Phi_k(p,t) = \Psi_k(p).
\]
Assume that the above-mentioned conjecture from [3] holds. This assumption yields that the extremal for $\Phi_k(p,0)$ has precisely one zero in $\mathbb{D}$ and the extremal for the $t$ which maximizes $\Phi_k(p,t)$ does not vanish in $\mathbb{D}$. Note that the extremal for $\Phi_k(p,1)$, which is $f(z) = 1$, does not vanish in $\mathbb{D}$.

Question 1 Suppose that $0 < p < 1$. Is it true that the extremal for $\Phi_k(p,t)$ has at most one zero in $\mathbb{D}$?

We have verified numerically that the question has an affirmative answer for $k = 2$. Note that for $1 < p \le \infty$, the extremal for $\Phi_k(p,t)$ either has 0 or $k$ zeroes in $\mathbb{D}$ by Theorem 15 (a). In the case $p = 1$, the extremal may have anywhere from 0 to $k$ zeroes by Theorem 15 (b) and (c). As mentioned in the introduction, Theorem 1 yields the estimates
\[
\Phi_1(p,t) \le \Phi_k(p,t) \le k^{1/p-1}\, \Phi_1(p,t).
\]
The upper bound is only attained if $\Phi_1(p,t) = 0$, which happens if and only if $t = 1$. Of course, since $\Phi_k(p,1) = 0$ the lower bound is also attained.

Question 2 Fix $k \ge 2$ and $0 < p < 1$. Is there some $t_0$ such that $\Phi_k(p,t) = \Phi_1(p,t)$ holds for every $t_0 \le t \le 1$?

By a combination of numerical and analytical computations, we have strong evidence that the question has an affirmative answer for $k = 2$ and that in this case
\[
t_2 = \left(1 + \frac{p}{2-p}\right)^{-1/p}.
\]
Let us close by briefly explaining our reasoning. We began by considering the case $l = 0$ in Lemma 4. Setting $f_2 = g h^{2/p-1}$ and arguing as in the proof of Theorem 15 (b) (see also [3]), we found that if $t \ge t_2$, then the only possible extremal for $\Phi_2(p,t)$ with $l = 0$ is of the form $f_2(z) = f_1(z^2)$, where $f_1$ is the corresponding extremal for $\Phi_1(p,t)$. Next, if $l = 2$ then (as in the case $k = 1$) we can only obtain $t$-values in the range $0 \le t \le 2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2}$. However, since
\[
2^{-1/p}\sqrt{p}\,(2-p)^{1/p-1/2} < t_2
\]
for $0 < p < 1$ we can ignore the case $l = 2$. The case $l = 1$ was excluded by numerical computations.

Acknowledgements The authors extend their gratitude to Eero Saksman for a helpful discussion pertaining to Theorem 1. They also thank the referee for a careful reading of the paper.

Funding Open access funding provided by NTNU Norwegian University of Science and Technology (incl St. Olavs Hospital - Trondheim University Hospital).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A: Proof of Lemma 11

We will frequently appeal to the following corollary of Rolle's theorem: Suppose that $f$ is continuously differentiable on $[a,b]$ and that $f'(x) = 0$ has precisely $n$ solutions on $(a,b)$.
Then $f(x) = 0$ can have at most $n + 1$ solutions on $[a,b]$.

We are interested in solutions of the equation $F_p(\alpha) = 0$ on the interval $(0,1)$, where we recall from (28) that
\[
F_p(\alpha) = p^2\alpha^{-2} + 2p(2-p) + (2-p)^2\alpha^2 - 4\left(\alpha^{-p} + \alpha^{2-p} - 1\right).
\]
The initial step in the proof of Lemma 11 is to identify the critical points of $F_p$ on the interval $0 < \alpha < 1$. It turns out that there is only one.

Lemma 16 Fix $0 < p < 1$ and let $F_p$ be as in (28). The equation $F_p'(\alpha) = 0$ has the unique solution $\alpha = \alpha_2 = \sqrt{p/(2-p)}$ on $0 < \alpha < 1$.

Proof We begin by computing
\[
F_p'(\alpha) = -2p^2\alpha^{-3} + 2(2-p)^2\alpha + 4p\,\alpha^{-p-1} - 4(2-p)\,\alpha^{1-p}.
\]
The solutions of the equation $F_p'(\alpha) = 0$ on $0 < \alpha < 1$ do not change if we multiply both sides by $\alpha^{1+p}/(4-2p)$. Hence, we consider the equation $G_p(\alpha) = 0$, where
\[
G_p(\alpha) = \frac{\alpha^{1+p}}{2(2-p)}\, F_p'(\alpha) = -\frac{p^2}{2-p}\,\alpha^{p-2} + (2-p)\,\alpha^{2+p} + \frac{2p}{2-p} - 2\alpha^2.
\]
Evidently,
\[
G_p'(\alpha) = \alpha\left( p^2\alpha^{p-4} + (4-p^2)\,\alpha^{p} - 4 \right),
\]
and the sign of $G_p'(\alpha)$ is the same as the sign of $p^2\alpha^{p-4} + (4-p^2)\alpha^{p} - 4$. Since
\[
\frac{d}{d\alpha}\left( p^2\alpha^{p-4} + (4-p^2)\,\alpha^{p} - 4 \right) = 0 \qquad\Longleftrightarrow\qquad \alpha^4 = \frac{4p - p^2}{4 - p^2},
\]
and since $G_p'(1) = 0$, we conclude that $G_p'$ changes sign at most once on $0 < \alpha < 1$. Since $G_p(0) = -\infty$, this means that $G_p(\alpha) = 0$ can have at most two solutions on $(0,1]$. Hence $F_p'(\alpha) = 0$ can have at most two solutions on $(0,1]$. It is easy to verify that these solutions are $\alpha = \sqrt{p/(2-p)}$ and $\alpha = 1$, and hence the proof is complete. □

We next want to demonstrate that $F_p(\alpha_1) > 0$ and $F_p(\alpha_2) < 0$, where $\alpha_1$ and $\alpha_2$ are from (23) and (22), respectively.

Lemma 17 Fix $0 < p < 1$. If $\alpha_2 = \sqrt{p/(2-p)}$, then $F_p(\alpha_2) < 0$.

Proof We begin by reformulating the inequality $F_p(\alpha_2) < 0$ as $H(p) > 0$, for
\[
H(p) = -\frac{2-p}{4}\,\alpha_2^{p}\, F_p(\alpha_2) = 2 - \left(1 + 2p - p^2\right) p^{p/2}\,(2-p)^{(2-p)/2}.
\]
Since we have $H(0) = H(1) = 0$, it is sufficient to prove that the function $H$ has precisely one critical point on $0 < p < 1$ and that it is strictly positive for some $0 < p < 1$. We first check that
\[
H(1/2) = \frac{16 - 7 \cdot 3^{3/4}}{8} > 0.
\]
We then compute
\[
H'(p) = -p^{p/2}\,(2-p)^{(2-p)/2} \left( 2(1-p) + \frac{1 + 2p - p^2}{2} \log \frac{p}{2-p} \right).
\]
The first factor is non-zero, so we therefore need to check that the equation $I(p) = 0$ has only one solution on $0 < p < 1$, where
\[
I(p) = \frac{4(1-p)}{1 + 2p - p^2} + \log \frac{p}{2-p}.
\]
We compute
\[
I'(p) = \frac{-4\left(3 - 2p + p^2\right)}{\left(1 + 2p - p^2\right)^2} + \frac{2}{p(2-p)} = \frac{2(1-p)^2\left(3p^2 - 6p + 1\right)}{p(2-p)\left(1 + 2p - p^2\right)^2}.
\]
Hence $I'(p) = 0$ has the unique solution $p_0 = 1 - \sqrt{2/3}$ on the interval $0 < p < 1$. Noting that $I(0) = -\infty$ and $I(1) = 0$, we conclude by verifying that
\[
I(p_0) = \sqrt{6} + \log\left(5 - 2\sqrt{6}\right) > 0,
\]
which demonstrates that $I(p) = 0$ has a unique solution on $0 < p < 1$. □

Lemma 18 Fix $0 < p < 1$. Let $\alpha_1$ denote the unique solution of the equation $1 - 2\alpha^p + \alpha^2 = 0$ on the interval $(0,1)$. Then $F_p(\alpha_1) > 0$.

Proof Using the equation defining $\alpha_1$, we see that $\alpha_1^{-p} + \alpha_1^{2-p} - 1 = 1$. Hence,
\[
F_p(\alpha_1) = \frac{p^2}{\alpha_1^2} + 2p(2-p) + (2-p)^2\alpha_1^2 - 4
= \left( \frac{p}{\alpha_1} + \alpha_1(2-p) + 2 \right) \left( \frac{1}{\alpha_1} - 1 \right) \big( p - \alpha_1(2-p) \big).
\]
The first two factors are strictly positive for every $0 < \alpha_1 < 1$ and every $0 < p < 1$. Consequently, $F_p(\alpha_1) > 0$ if and only if $\alpha_1 < p/(2-p)$. The function
\[
J_p(\alpha) = 1 - 2\alpha^p + \alpha^2
\]
satisfies $J_p(0) = 1$ and $J_p(1) = 0$. Moreover, $J_p$ is strictly decreasing on $(0, p^{1/(2-p)})$ and strictly increasing on $(p^{1/(2-p)}, 1)$. Since $\alpha_1$ is the unique solution to $J_p(\alpha) = 0$ for $0 < \alpha < 1$, the desired inequality $\alpha_1 < p/(2-p)$ is equivalent to
\[
0 > J_p\!\left( \frac{p}{2-p} \right) = 1 - 2\left( \frac{p}{2-p} \right)^{p} + \left( \frac{p}{2-p} \right)^{2}.
\]
In order to establish this inequality, we multiply by $(2-p)^2/2$ on both sides to get the equivalent inequality $K(p) < 0$, where
\[
K(p) = 2 - 2p + p^2 - p^{p}\,(2-p)^{2-p}.
\]
Our plan is to use Taylor's theorem to write
\[
K(p) = K(1) + K'(1)(p-1) + \frac{K''(\eta)}{2}(p-1)^2, \qquad\text{where } 0 < p < \eta < 1.
\]
The claim will follow if we can prove that $K(1) = K'(1) = 0$ and $K''(p) < 0$ for $0 < p < 1$. Hence we compute
\[
K'(p) = -2 + 2p - p^{p}\,(2-p)^{2-p} \log \frac{p}{2-p},
\]
\[
K''(p) = 2 - p^{p}\,(2-p)^{2-p} \left( \log^2 \frac{p}{2-p} + \frac{2}{p(2-p)} \right).
\]
Evidently, $K(1) = K'(1) = K''(1) = 0$. Hence we are done if we can prove that $K''$ is strictly increasing on $0 < p < 1$. This will follow once we verify that both
\[
p^{p}\,(2-p)^{2-p} \qquad\text{and}\qquad \log^2 \frac{p}{2-p} + \frac{2}{p(2-p)}
\]
are strictly positive and strictly decreasing on $0 < p < 1$. Strict positivity is obvious. The first function is strictly decreasing since
\[
\frac{d}{dp}\, p^{p}\,(2-p)^{2-p} = p^{p}\,(2-p)^{2-p} \log \frac{p}{2-p}
\]
and $\log(p/(2-p)) < 0$ for $0 < p < 1$. For the second function, we check that
\[
\frac{d}{dp} \left( \log^2 \frac{p}{2-p} + \frac{2}{p(2-p)} \right) = \frac{4}{p(2-p)} \left( \log \frac{p}{2-p} + \frac{p-1}{p(2-p)} \right) < 0,
\]
where for the final inequality we have again used that $\log(p/(2-p)) < 0$. □

We can finally wrap up the proof of Lemma 11.

Proof of Lemma 11 By Lemma 16 we know that $F_p'(\alpha) = 0$ has precisely one solution for $0 < \alpha < 1$. Since $F_p(0) = \infty$ and $F_p(1) = 0$, this implies that the equation $F_p(\alpha) = 0$ can have at most one solution on the interval $(0,1)$. Lemma 17 shows that there is exactly one solution, since $F_p(\alpha_2) < 0$. Let $\alpha_p$ denote this solution. Inspecting the endpoints again, we find that $F_p(\alpha) > 0$ for $0 < \alpha < \alpha_p$ and $F_p(\alpha) < 0$ for $\alpha_p < \alpha < 1$. Using Lemma 17 again we conclude that $\alpha_p < \alpha_2$, while the inequality $\alpha_1 < \alpha_p$ follows similarly from Lemma 18. □

References

1. Beneteau, C., Korenblum, B.: Some Coefficient Estimates for $H^p$ Functions. Complex Analysis and Dynamical Systems, Contemporary Mathematics, vol. 364, pp. 5–14. American Mathematical Society, Providence (2004)
2. Bohr, H.: A theorem concerning power series. Proc. Lond. Math. Soc. 2(13), 1–5 (1914)
3. Brevig, O.F., Saksman, E.: Coefficient estimates for $H^p$ spaces with $0 < p < 1$. Proc. Am. Math. Soc. 148(9), 3911–3924 (2020)
4. Connelly, R.C.: Linear Extremal Problems in the Hardy Space $H^p$ for $0 < p < 1$. Master's thesis, University of South Florida (2017). http://scholarcommons.usf.edu/etd/6646
5. Duren, P.L., Romberg, B.W., Shields, A.L.: Linear functionals on $H^p$ spaces with $0 < p < 1$. J. Reine Angew. Math. 238, 32–60 (1969)
6. Duren, P.L.: Theory of $H^p$ Spaces. Pure and Applied Mathematics, vol. 38. Academic Press, New York (1970)
7. Hardy, G.H., Littlewood, J.E.: Some properties of fractional integrals. II. Math. Z. 34(1), 403–439 (1932)
8. Havinson, S.Y.: On some extremal problems of the theory of analytic functions. Moskov. Gos. Univ. Učenye Zapiski Matematika 148(4), 133–143 (1951)
9. Kabaila, V.: On some interpolation problems in the class $H_p$ for $p < 1$. Soviet Math. Dokl. 1, 690–692 (1960)
10. Khavinson, S.Y.: Two papers on extremal problems in complex analysis. Am. Math. Soc. Transl. 2(129), 1–114 (1986)
11. Macintyre, A.J., Rogosinski, W.W.: Extremum problems in the theory of analytic functions. Acta Math. 82, 275–325 (1950)
12. Martín, M.J., Sawyer, E.T., Uriarte-Tuero, I., Vukotić, D.: The Krzyż conjecture revisited. Adv. Math. 273, 716–745 (2015)
13. Riesz, F.: Über Potenzreihen mit vorgeschriebenen Anfangsgliedern. Acta Math. 42(1), 145–171 (1920)
14. Rogosinski, W.W., Shapiro, H.S.: On certain extremum problems for analytic functions. Acta Math. 90, 287–318 (1953)
15. Sarason, D.: Complex Function Theory, 2nd edn. American Mathematical Society, Providence (2007)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Keywords: Hardy spaces; Extremal problems; Coefficient estimates; Primary 30H10; Secondary 42A05
Cokernel

The cokernel of a linear mapping of vector spaces f : X → Y is the quotient space Y / im(f) of the codomain of f by the image of f. The dimension of the cokernel is called the corank of f.

Cokernels are dual to the kernels of category theory, hence the name: the kernel is a subobject of the domain (it maps to the domain), while the cokernel is a quotient object of the codomain (it maps from the codomain).

Intuitively, given an equation f(x) = y that one is seeking to solve, the cokernel measures the constraints that y must satisfy for this equation to have a solution – the obstructions to a solution – while the kernel measures the degrees of freedom in a solution, if one exists. This is elaborated in intuition, below.

More generally, the cokernel of a morphism f : X → Y in some category (e.g. a homomorphism between groups or a bounded linear operator between Hilbert spaces) is an object Q and a morphism q : Y → Q such that the composition q f is the zero morphism of the category, and furthermore q is universal with respect to this property. Often the map q is understood, and Q itself is called the cokernel of f.

In many situations in abstract algebra, such as for abelian groups, vector spaces or modules, the cokernel of the homomorphism f : X → Y is the quotient of Y by the image of f. In topological settings, such as with bounded linear operators between Hilbert spaces, one typically has to take the closure of the image before passing to the quotient.

Formal definition

One can define the cokernel in the general framework of category theory. In order for the definition to make sense the category in question must have zero morphisms. The cokernel of a morphism f : X → Y is defined as the coequalizer of f and the zero morphism $0_{XY} : X \to Y$.

Explicitly, this means the following. The cokernel of f : X → Y is an object Q together with a morphism q : Y → Q such that the diagram commutes. Moreover, the morphism q must be universal for this diagram, i.e. any other such q′ : Y → Q′ can be obtained by composing q with a unique morphism u : Q → Q′.

As with all universal constructions the cokernel, if it exists, is unique up to a unique isomorphism, or more precisely: if q : Y → Q and q′ : Y → Q′ are two cokernels of f : X → Y, then there exists a unique isomorphism u : Q → Q′ with q′ = u q.

Like all coequalizers, the cokernel q : Y → Q is necessarily an epimorphism. Conversely an epimorphism is called normal (or conormal) if it is the cokernel of some morphism. A category is called conormal if every epimorphism is normal (e.g. the category of groups is conormal).

Examples

In the category of groups, the cokernel of a group homomorphism f : G → H is the quotient of H by the normal closure of the image of f. In the case of abelian groups, since every subgroup is normal, the cokernel is just H modulo the image of f: $\operatorname {coker} (f)=H/\operatorname {im} (f).$

Special cases

In a preadditive category, it makes sense to add and subtract morphisms. In such a category, the coequalizer of two morphisms f and g (if it exists) is just the cokernel of their difference: $\operatorname {coeq} (f,g)=\operatorname {coker} (g-f).$

In an abelian category (a special kind of preadditive category) the image and coimage of a morphism f are given by ${\begin{aligned}\operatorname {im} (f)&=\ker(\operatorname {coker} f),\\\operatorname {coim} (f)&=\operatorname {coker} (\ker f).\end{aligned}}$

In particular, every abelian category is normal (and conormal as well).
That is, every monomorphism m can be written as the kernel of some morphism. Specifically, m is the kernel of its own cokernel: $m=\ker(\operatorname {coker} (m))$

Intuition

The cokernel can be thought of as the space of constraints that an equation must satisfy, as the space of obstructions, just as the kernel is the space of solutions. Formally, one may connect the kernel and the cokernel of a map T: V → W by the exact sequence

$0\to \ker T\to V{\overset {T}{\longrightarrow }}W\to \operatorname {coker} T\to 0.$

These can be interpreted thus: given a linear equation T(v) = w to solve,

• the kernel is the space of solutions to the homogeneous equation T(v) = 0, and its dimension is the number of degrees of freedom in solutions to T(v) = w, if they exist;

• the cokernel is the space of constraints on w that must be satisfied if the equation is to have a solution, and its dimension is the number of independent constraints that must be satisfied for the equation to have a solution.

The dimension of the cokernel plus the dimension of the image (the rank) add up to the dimension of the target space, as the dimension of the quotient space W / T(V) is simply the dimension of the space minus the dimension of the image.

As a simple example, consider the map T: R² → R², given by T(x, y) = (0, y). Then for an equation T(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b), or equivalently, (0, b) + (x, 0), (one degree of freedom). The kernel may be expressed as the subspace (x, 0) ⊆ V: the value of x is the freedom in a solution. The cokernel may be expressed via the real valued map W: (a, b) → (a): given a vector (a, b), the value of a is the obstruction to there being a solution.

Additionally, the cokernel can be thought of as something that "detects" surjections in the same way that the kernel "detects" injections. A map is injective if and only if its kernel is trivial, and a map is surjective if and only if its cokernel is trivial, or in other words, if W = im(T).
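The following short program is an addition of ours, not part of the original article: a minimal sketch that makes the example above concrete by computing the rank and corank of the matrix of T(x, y) = (0, y) via Gaussian elimination. The function names are our own.

```go
package main

import "fmt"

// rank computes the rank of a matrix by Gaussian elimination
// with partial pivoting over float64 entries.
func rank(m [][]float64) int {
	const eps = 1e-9
	rows, cols := len(m), len(m[0])
	r := 0
	for c := 0; c < cols && r < rows; c++ {
		// find a pivot row for column c
		p := -1
		for i := r; i < rows; i++ {
			if m[i][c] > eps || m[i][c] < -eps {
				p = i
				break
			}
		}
		if p < 0 {
			continue // column is already zeroed out
		}
		m[r], m[p] = m[p], m[r]
		// eliminate column c below the pivot
		for i := r + 1; i < rows; i++ {
			f := m[i][c] / m[r][c]
			for j := c; j < cols; j++ {
				m[i][j] -= f * m[r][j]
			}
		}
		r++
	}
	return r
}

func main() {
	// T(x, y) = (0, y) written as a 2x2 matrix acting on column vectors.
	T := [][]float64{
		{0, 0},
		{0, 1},
	}
	rk := rank(T)
	dimW := 2                       // dimension of the codomain W
	fmt.Println("rank:  ", rk)      // 1
	fmt.Println("corank:", dimW-rk) // 1 = dim coker(T): one constraint on (a, b)
}
```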
Evaluating Automatic Fault Localization Using Markov Processes

by Tim Henderson, Yiğit Küçük, and Andy Podgurski

Mon 30 September 2019

Tim A. D. Henderson, Yiğit Küçük, and Andy Podgurski Evaluating Automatic Fault Localization Using Markov Processes. SCAM 2019. DOI. PDF. SUPPLEMENT. WEB.

This is a conversion from a latex paper I wrote. If you want all formatting correct you should read the pdf version.

Statistical fault localization (SFL) techniques are commonly compared and evaluated using a measure known as "Rank Score" and its associated evaluation process. In the latter process each SFL technique under comparison is used to produce a list of program locations, ranked by their suspiciousness scores. Each technique then receives a Rank Score for each faulty program it is applied to, which is equal to the rank of the first faulty location in the corresponding list. The SFL technique whose average Rank Score is lowest is judged the best overall, based on the assumption that a programmer will examine each location in rank order until a fault is found. However, this assumption *oversimplifies* how an SFL technique would be used in practice. Programmers are likely to regard suspiciousness ranks as just one source of information among several that are relevant to locating faults. This paper provides a new evaluation approach using first-order Markov models of debugging processes, which can incorporate multiple additional kinds of information, e.g., about code locality, dependences, or even intuition. Our approach, \( \textrm{HT}_{\textrm{Rank}} \), scores SFL techniques based on the expected number of steps a programmer would take through the Markov model before reaching a faulty location. Unlike previous evaluation methods, \(\textrm{HT}_{\textrm{Rank}}\) can compare techniques even when they produce fault localization reports differing in structure or information granularity. To illustrate the approach, we present a case study comparing two existing fault localization techniques that produce results varying in form and granularity.

Automatic fault localization is a software engineering technique to assist a programmer during the debugging process by suggesting "suspicious" locations that may contain or overlap with a fault (bug, defect) that is the root cause of observed failures. The big idea behind automatic fault localization (or just fault localization) is that pointing the programmer towards the right area of the program will enable them to find the relevant fault more quickly.

A much-investigated approach to fault localization is Coverage-Based Statistical Fault Localization (CBSFL), which is also known as Spectrum-Based Fault Localization [1]-[3]. This approach uses code-coverage profiles and success/failure information from testing or field use of software to rank statements or other generic program locations (e.g., basic blocks, methods, or classes) from most "suspicious" (likely to be faulty) to least suspicious. To perform CBSFL, each test case is run using a version of the program being debugged that has been instrumented to record which potential fault locations were actually executed on that test. A human or automated test oracle labels each test to indicate whether it passed or failed. The coverage profiles are also referred to as coverage spectra.

In the usage scenario typically envisioned for CBSFL, a programmer uses the ranked list of program locations to guide debugging.
Starting at the top of the list and moving down, they examine each location to determine if it is faulty. If the location of a fault is near the top of the list, the programmer saves time by avoiding consideration of most of the non-faulty locations in the program. However, if there is no fault near the top of the list, the programmer instead wastes time examining many more locations than necessary. CBSFL techniques are typically evaluated empirically in terms of their ability to rank faulty locations near the top of the list [4], [5], as measured by each technique's "Rank Score", which is the rank of the first faulty location in the list. A CBSFL technique that consistently ranks faulty statements from a wide range of faulty programs near the top of the corresponding lists is considered a good technique.

A first pitfall of using the ranked-list evaluation regime outlined above is that it can be applied fairly only when the techniques being compared provide results as a prioritized list of program elements of the same granularity. This means that if technique A produces a prioritized list of basic blocks, technique B produces an unordered set of sub-expressions, and technique C produces a prioritized list of classes then it is not valid to use the Standard Rank Score to compare them. The Standard Rank Score can only be applied to ordered lists and thus cannot be used directly to evaluate technique B, and it requires the techniques being compared to have the same granularity. Our new evaluation metric (called \( \textrm{HT}_{\textrm{Rank}} \)) accounts for these differences in granularity and report construction, allowing a direct comparison between different styles of fault localization. We present a case study in Section 27 comparing behavioral fault localization (which produces fragments of the dynamic control flow graph) to standard CBSFL.

A second pitfall is that the imagined usage scenario for CBSFL, in which a programmer examines each possible fault location in rank order until a fault is found, is an oversimplification of programmer behavior [6]. Programmers are likely to deviate from this scenario, e.g.: by choosing not to re-examine locations that have already been carefully examined and have not changed; by examining the locations around a highly ranked one, regardless of their ranks; by examining locations that a highly ranked location is dependent upon or that are dependent upon it [7]; by employing a conventional debugger (such as gdb, WinDbg, or Visual Studio's debugger); or simply by using their knowledge and intuition about the program. To support more flexible and nuanced evaluation criteria for CBSFL and other fault localization techniques, we present a new approach to evaluating them that is based on constructing and analyzing first-order Markov models of debugging processes and that uses a new evaluation metric (\(\textrm{HT}_{\textrm{Rank}}\)) based on the "hitting time" of a Markov process. This approach allows researchers to directly compare different fault localization techniques by incorporating their results and other relevant information into an appropriate model. To illustrate the approach, we present models for two classes of fault localization techniques: CBSFL and Suspicious Behavior Based Fault Localization (SBBFL) [8], [9]. The models we present are also easy to update, allowing researchers to incorporate results of future studies of programmers' behavior during debugging.
Our new debugging model (and its \( \textrm{HT}_{\textrm{Rank}} \) metric) can be thought of as a first-order simulation of the debugging process as conducted by a programmer. As such, we intend for it to be a practical alternative to conducting an expensive user study. The model is capable of incorporating a variety of behaviors a programmer may exhibit while debugging, allowing researchers to evaluate the performance of their tool against multiple debugging "styles."

Background and Related Work

Coverage Based Statistical Fault Localization (CBSFL) [1], [4] techniques attempt to quantify the likelihood that individual program locations are faulty using sample statistics, called suspiciousness metrics or fault localization metrics, which are computed from PASS/FAIL labels assigned to test executions and from coverage profiles (coverage spectra) collected from those executions. A CBSFL suspiciousness metric (of which there are a great many [2], [3]) measures the statistical association between the occurrence of test failures and the coverage of individual program locations (program elements) of a certain kind.

Some statistical fault localization techniques use additional information beyond basic coverage information to either improve accuracy or provide more explainable results. For instance, work on Causal Statistical Fault Localization uses information about the execution of program dependence predecessors of a target statement to adjust for confounding bias that can distort suspiciousness scores [10]. By contrast, Suspicious-Behavior-Based Fault Localization (SBBFL) techniques use runtime control-flow information (the behavior) to identify groups of "collaborating" suspicious elements [9]. These techniques typically leverage data mining techniques [11] such as frequent pattern mining [12]-[14] or significant pattern mining [9], [15]. Unlike CBSFL techniques, SBBFL techniques output a ranked list of patterns (subgraphs, itemsets, trees), each of which contains multiple program locations. This makes it difficult to directly compare SBBFL and CBSFL techniques using traditional evaluation methods.

Finally, a variety of non-statistical (or hybrid) approaches to fault localization have been explored [16]-[19]. These approaches range from delta debugging [20] to nearest neighbor queries [7] to program slicing [21], [22] to information retrieval [23]-[25] to test case generation [26]-[28]. Despite technical and theoretical differences in these approaches, they all suggest locations (or groups of locations) for programmers to consider when debugging.

The Tarantula Evaluation

Some of the earliest papers on fault localization do not provide a quantitative method for evaluating performance (as is seen in later papers [5]). For instance, the earliest CBSFL paper [1], by Jones et al., proposes a technique and evaluates it qualitatively using data visualization. At the time, this was entirely appropriate as Jones was proposing a technique for visualizing the relative suspiciousness of different statements, as estimated with what is now called a suspiciousness metric (Tarantula). The visualization used for evaluating this technique aggregated the visualizations for all of the subject programs included in the study. While the evaluation method used in the Jones et al. paper [1] effectively communicated the potential of CBSFL (and interested many researchers in the idea) it was not a good way to compare multiple fault localization techniques.
In 2005 Jones and Harrold [4] published a study that compared their Tarantula technique to three other techniques: Set Union and Intersection [29], Nearest Neighbor [7], and Cause-Transitions [30]. These techniques involved different approaches toward the fault localization problem and originally had been evaluated in different ways. Jones and Harrold re-evaluated all of the techniques under a new common evaluation framework.

In their 2005 paper, Jones and Harrold evaluated the effectiveness of each fault localization technique by using it to rank the statements in each subject program version from most likely to be the root cause of observed program failures to least likely. For their technique Tarantula, the statements were ranked using the Tarantula suspiciousness score. To compare the effectiveness of the techniques, another kind of score was assigned to each faulty version of each subject program. This score is based on the "rank score":

Definition 14. Tarantula Rank Score [4] Given a set of locations \(L\) with their suspiciousness scores \(s(l)\) for \(l \in L\) the Rank Score \(r(l)\) for a faulty location \(l \in L\) is: \[\begin{aligned} {{\left|{ \left\{ x ~:~ x \in L \wedge s(x) \ge s(l) \right\} }\right|}} \end{aligned}\]

For Set Union and Intersection, Nearest Neighbor, and Cause-Transitions, Jones and Harrold used an idea of Renieres and Reiss [7] and ranked a program's statements based on consideration of its System Dependence Graph (SDG) [31]. The surrogate suspiciousness score of a program location \(L\) is the inverse of the size of the smallest dependence sphere around \(L\) that contains a faulty location. The surrogate scores are then used to calculate the Tarantula Rank Score (Def. 14).

In Jones's and Harrold's evaluation the authors did not use the Tarantula Rank Score directly but instead used a version of it that is normalized by program size:

Definition 15. Tarantula Effectiveness Score (Expense) [4] This score is the proportion of program locations that do not need to be examined to find a fault when the locations are examined in rank order. Formally, let \(n\) be the total number of program locations, and let \(r(f)\) be the Tarantula Rank Score (Def. 14) of the faulty location \(f\). Then the score is: \[\begin{aligned} \frac{n-r(f)}{n} \end{aligned}\]

Using the normalized effectiveness score, Jones and Harrold directly compared the fault localization effectiveness of the techniques they considered. They did this in two ways. First, they presented a table that bucketed all the buggy versions of all the programs by their Tarantula Effectiveness Scores (given as percentages). Second, they presented a figure that showed the same data as a cumulative curve.

The core ideas of Jones' and Harrold's Effectiveness/Expense score now underlie most evaluations of CBSFL techniques. Faulty statements are scored, ranked, rank-scored, normalized, and then aggregated over all programs and versions to provide an overall representation of a fault localization method's performance (e.g., [53], [2], [3], [32]-[34]). However, some refinements have been made to both the Rank Score and the Effectiveness Score.
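To make Definitions 14 and 15 concrete, here is a small sketch of our own (it is not from the paper, and the location names are invented) that computes a faulty location's Tarantula Rank Score and its Expense score:

```go
package main

import "fmt"

// tarantulaRank computes Definition 14: the number of locations whose
// suspiciousness is greater than or equal to that of the given location.
func tarantulaRank(scores map[string]float64, loc string) int {
	rank := 0
	for _, s := range scores {
		if s >= scores[loc] {
			rank++
		}
	}
	return rank
}

// expense computes Definition 15: the proportion of program locations
// that need not be examined when walking down the ranked list.
func expense(scores map[string]float64, fault string) float64 {
	n := float64(len(scores))
	return (n - float64(tarantulaRank(scores, fault))) / n
}

func main() {
	scores := map[string]float64{
		"main.go:3":  0.98,
		"main.go:7":  0.95, // suppose this is the faulty location
		"main.go:12": 0.95,
		"main.go:20": 0.41,
	}
	fmt.Println("rank:   ", tarantulaRank(scores, "main.go:7")) // 3
	fmt.Println("expense:", expense(scores, "main.go:7"))       // 0.25
}
```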
The Implied Debugging Models

It is worth (re)stating here the debugging model implied in the Jones and Harrold evaluation [4]. The programmer receives from the fault localization tool a ranked list of statements with the most suspicious statements at the top. The programmer then moves down the list examining each location in turn. If multiple statements have the same rank (the same suspiciousness score) all of those statements are examined before the programmer makes a determination on whether or not the bug has been located. This rule is captured in the mathematical definition of the Tarantula Rank Score (Definition 14).

For the non-CBSFL methods that Jones and Harrold compared CBSFL against, the ranks of the program locations were once again compared using the method of Renieres and Reiss [7], [30], [35], which is sometimes called T-Score. As a reminder, this method computes a surrogate suspiciousness score based on the size of the smallest dependence sphere centered around the locations indicated in the fault localization report that contain the faulty code. This implies a debugging model in which the programmer examines each "shell" of the dependence sphere in turn before moving onto the next larger shell (see Figure 7 in [30] for a visualization).

Neither of these debugging models is realistic. Programmers may be reasonably expected to deviate from the ordering implied by the ranking. During the debugging process a programmer may use a variety of information sources — including intuition — to decide on the next element to examine. They may examine the same element multiple times. They may take a highly circuitous route to the buggy code or via intuition jump immediately to the fault. The models described above allow for none of these subtleties.

Refinements to the Evaluation Framework

Wong et al. [32] introduced the most commonly used effectiveness score, which is called the \(\mathcal{EXAM}\) score. This score is essentially the same as the Expense score (Def. 15) except that it gives the percentage of locations that need to be examined rather than those avoided.

Definition 16. \(\mathcal{EXAM}\) Score [32] \[\begin{aligned} \frac{r(f)}{n} \end{aligned}\]

Ali et al. [37] identified an important problem with Jones' and Harrold's evaluation method: some fault localization techniques always assign different locations distinct suspiciousness scores, but others do not. Ali et al. pointed out that when comparing techniques, the Tarantula Effectiveness Score may favor a technique that generates more distinct suspiciousness scores than the other techniques. The fix they propose is to assign to a location in a group of locations with the same suspiciousness score a rank score that reflects developers having to examine half the locations in the group on average.

Definition 17. Standard Rank Score This score is the expected number of locations a programmer would inspect before locating a fault. Formally, given a set of locations \(L\) with their suspiciousness scores \(s(l)\) for \(l \in L\), the Rank Score for a location \(l \in L\) is [37]: \[\begin{aligned} {{\left|{ \left\{ x ~:~ x \in L \wedge s(x) > s(l) \right\} }\right|}} + \frac{ {{\left|{ \left\{ x ~:~ x \in L \wedge s(x) = s(l) \right\} }\right|}} }{ 2 } \end{aligned}\]

Note: when we refer to the "Standard Rank Score" this is the definition we are referring to.

Parnin and Orso [6] conducted a study of programmers' actual use of a statistical fault localization tool (Tarantula [1]). One of their findings was that programmers did not look deeply through the ranked list of locations and would instead only consider the first few locations. Consequently, they encouraged researchers to no longer report effectiveness scores as percentages. Most CBSFL studies now report absolute (non-percentage) rank scores.
This is desirable for another reason: larger programs can have much larger absolute ranks than small programs, for the same percentage rank. Consider, for instance, a program with 100,000 lines. If a fault's Rank Score is 10,000 its percentage Exam Score would be 10%. A 10% Exam Score might look like a reasonably good localization (and would be if the program had 100 lines) but no programmer will be willing to look through 10,000 lines. By themselves, percentage evaluation metrics (like Exam Score) produce inherently misleading results for large programs.

Steimann et al. [53] identified a number of threats to validity in CBSFL studies, including: heterogeneous subject programs, poor test suites, small sample sizes, unclear sample spaces, flaky tests, total number of faults, and masked faults. For evaluation they used the Standard Rank Score of Definition 17 modified to deal with \(k\) faults tied at the same rank.

Definition 18. Steimann Rank Score This score is the expected number of locations a programmer would inspect before finding a fault when multiple faulty statements may have the same rank. Formally, given a set of locations \(L\) with their suspiciousness scores \(s(l)\) for \(l \in L\), the Rank Score for a location \(l \in L\) is [53]: \[\begin{aligned} & {{\left|{ \left\{ x ~:~ x \in L \wedge s(x) > s(l) \right\} }\right|}}\\ & + \frac{ {{\left|{ \left\{ x ~:~ x \in L \wedge s(x) = s(l) \right\} }\right|}} + 1 }{ {{\left|{ \left\{ x ~:~ x \in L \wedge s(x) = s(l) \wedge x \text{ is a faulty location} \right\}}\right|}} + 1 } \end{aligned}\]

Moon et al. [38] proposed Locality Information Loss (LIL) as an alternative evaluation framework. LIL models the localization result as a probability distribution constructed from the suspiciousness scores:

Definition 19. LIL Probability Distribution Let \(\tau\) be a suspiciousness metric normalized to the \([0,1]\) range of reals. Let \(n\) be the number of locations in the program and let \(L = \{l_1,\ldots, l_n\}\) be the set of locations. The constructed probability distribution is given by: \[\begin{aligned} P_{\tau}(l_i) = \frac{\tau(l_i)}{\sum^{n}_{j=1} \tau(l_j)} \end{aligned}\]

LIL uses the Kullback-Leibler measure of divergence between distributions to compute a score indicating how different the distribution constructed for a suspiciousness metric of interest is from the distribution constructed from an "ideal" metric, which gives a score of 1 to the faulty location(s) and gives negligible scores to every other location. The advantage of the LIL framework is that it does not depend on a list of ranked statements and can be applied to non-statistical methods (using a synthetic \(\tau\)). The disadvantage of LIL is that it does not reflect programmer effort (as the Rank Scores do). However, it may be a better metric to use when evaluating fault localization systems as components of automated fault repair systems.

Pearson et al. [5] re-evaluated a number of previous results using new real world subject programs with real defects and test suites. In contrast to previous work they made use of statistical hypothesis testing and confidence intervals to characterize the uncertainty of the results. To evaluate the performance of each technique under study they used the \(\mathcal{EXAM}\) score, reporting best, average, and worst case results for multi-statement faults.

A New Approach to Evaluation

Programmers consider multiple sources of information when performing debugging tasks and use them to guide their exploration of the source code.
In our new approach to evaluating fault localization techniques, a model is constructed for each technique \(T\) and each program \(P\) of how a programmer using \(T\) might move from examining one location in \(P\) to examining another. The model for \(T\) and \(P\) is used to compute a statistical estimate of the expected number of moves a programmer using \(T\) would make before encountering a fault in \(P\). This estimate is used to compute a "hitting-time rank score" for technique \(T\). The scores for all the techniques can then be compared to determine which performed best on program \(P\). This section presents the general approach and specific example models. The models make use of CBSFL reports and information about static program structure and dynamic control flow. In order to support very flexible modeling and tractable analysis of debugging processes, we use first-order Markov chains (described below) to model them. Our first example models the debugging process assumed in previous work, in which a programmer follows a ranked list of suspicious program locations until a fault is found. Then we describe how to incorporate structural information about the program (which could influence a programmer's debugging behavior). Finally, we show how to model and compare CBSFL to a recent Suspicious Behavioral Based Fault Localization (SBBFL) algorithm [9] that identifies suspicious subgraphs of dynamic control flow graphs. This third model demonstrates the flexibility of our approach and could be adapted to evaluate other SBBFL techniques [8], [39]-[47]. It also demonstrates that our approach can be used to compare statistical and non-statistical fault localization techniques under the same assumptions about programmer debugging behavior. It is important to emphasize that the quality and value of the evaluation results obtained with our approach depend primarily on the appropriateness of the model of the debugging process that is created. This model represents the evaluators' knowledge about likely programmer behavior during debugging and the factors that influence it. To avoid biasing their evaluation, evaluators must commit to an evaluation model and refrain from "tweaking" it after applying it to the data. [48]. Background on Ergodic Markov Chains A finite state Markov chain consists of a set of states \(S = \{s_1, s_2, ..., s_n \}\) and an \(n \times n\) matrix \({\bf P}\), called the transition matrix [49]. Entry \({\bf P}_{i,j}\) gives the probability for a Markov process in state \(s_i\) to move to state \(s_j\). The probability of a Markov process moving from one state to another only depends on the state the process is currently in. This is known as the Markov property. A Markov chain is said to be ergodic if, given enough steps, it can move from any state \(s_i\) to any state \(s_j\), i.e. \({\textrm{Pr}\left[{s_i \xrightarrow{*} s_j}\right]} > 0\). Thus, there are no states in an ergodic chain that the process can never leave. Ergodic Markov chains have stationary distributions. Let \({\bf v}\) be an arbitrary probability vector. The stationary distribution is a probability vector \({\bf w}\) such that \[\lim_{n \rightarrow \infty} {\bf v}{\bf P}^{n} = {\bf w}\] The vector \({\bf w}\) is a fixed point on \({\bf P}\) implying \({\bf w}{\bf P} = {\bf w}\). Stationary distributions give the long term behavior of a Markov chain - meaning that after many steps the chance a Markov process ends in state \(s_i\) is given by \({\bf w}_i\). 
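As an aside for readers who want to experiment, the sketch below (our own illustration, not part of the paper; the toy chain is invented) approximates the stationary distribution \({\bf w}\) of a small ergodic chain by power iteration, i.e., by repeatedly applying the transition matrix to an arbitrary starting vector as in the limit above:

```go
package main

import "fmt"

// stationary approximates the stationary distribution of an ergodic
// Markov chain by power iteration: v <- vP, repeated iters times.
func stationary(P [][]float64, iters int) []float64 {
	n := len(P)
	v := make([]float64, n)
	for i := range v {
		v[i] = 1.0 / float64(n) // arbitrary starting probability vector
	}
	for k := 0; k < iters; k++ {
		next := make([]float64, n)
		for i := 0; i < n; i++ {
			for j := 0; j < n; j++ {
				next[j] += v[i] * P[i][j]
			}
		}
		v = next
	}
	return v
}

func main() {
	// A small ergodic chain on three states.
	P := [][]float64{
		{0.0, 0.5, 0.5},
		{0.5, 0.0, 0.5},
		{0.5, 0.5, 0.0},
	}
	fmt.Println(stationary(P, 100)) // approximately [1/3 1/3 1/3]
}
```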
The expected hitting time of a state in a Markov chain is the expected number of steps (transitions) a Markov process will make before it encounters the state for the first time. Our new evaluation metric (\(\text{HT}_{\text{Rank}}\)) uses the expected hitting time of the state representing a faulty program location to score a fault localization technique's performance. Lower expected hitting times yield better localization scores.

Definition 22. Expected Hitting Time Consider a Markov chain with transition matrix \({\bf P}\). Let \(T_{i,j}\) be a random variable denoting the time at which a Markov process that starts at state \(s_i\) reaches state \(s_j\). The expected hitting time (or just hitting time) of state \(s_j\) for such a process is the expected value of \(T_{i,j}\) \[\begin{aligned} {\textrm{E}\left[{T_{i,j}}\right]} = \sum_{k=1}^{\infty} k \cdot {\textrm{Pr}\left[{T_{i,j}=k}\right]} \end{aligned}\]

In general, a hitting time for a single state may be computed in \(\mathcal{O}(n^3)\) steps [50]. Somewhat less time is required for sparse transition matrices [51]. Chapter 11 of Grinstead and Snell [49] provides an accessible introduction to hitting time computation. Some programs may have too many elements for exact hitting time computations (our case study contains one such program). To deal with large programs the expected hitting time can also be estimated by taking repeated random walks through the Markov chain to obtain a sample of hitting times. The sample can then be used to estimate the expected hitting time by computing the sample mean.

Expected Hitting Time Rank Score (\(\textrm{HT}_{\textrm{Rank}}\))

```go
func (n *Node) Has(k int) bool {
	if n == nil {
		return false
--	} else if k == n.Key {
++	} else if k != n.Key {
		return true
	} else if (k < n.Key) {
		return n.left.Has(k)
	}
	return n.right.Has(k)
}
```

Listing 1. A bug in the implementation of the "Has" method in an AVL tree (the line marked ++ was inserted by the fault-injecting mutation, replacing the line marked --).

Table I: Reduced CBSFL Results for Listing 1

    Rank   F1     Function (Basic Block)
    1.5    0.98   Node.Has (2)
    1.5    0.98   Node.String (3)
    1.5    0.98   Node.Verify (4)
    4      0.97   Node.Has (4)
    4      0.97   Node.Has (6)
    6      0.95   Node.Has (3)
    7.5    0.94   main (1)
    7.5    0.94   Scanner.Scan (24)
    7.5    0.94   Node.Verify (0)

Fig. 1: A simplified version of the Markov model for evaluating the ranked list of suspicious locations for the bug in Listing 1.

Fig. 2: An example Markov model showing how "jump" edges can be added to represent how a programmer might examine locations which are near the location they are currently reviewing. Compare to Figure 1.

This section introduces our new evaluation metric \(\textrm{HT}_{\textrm{Rank}}\). \(\textrm{HT}_{\textrm{Rank}}\) produces a "rank score" similar to the score produced by the Standard Rank Score of Definition 17. In the standard score, program locations are ranked by their CBSFL suspiciousness scores. A location's position in the ordered list is that location's Rank Score (see the definition for details). The new \(\textrm{HT}_{\textrm{Rank}}\) score is obtained as follows:

1. A Markov debugging model is supplied (as a Markov chain).
2. The expected hitting times (Def. 22) for each location in the program are computed.
3. The locations are ordered by their expected hitting times.
4. The \(\textrm{HT}_{\textrm{Rank}}\) for a location is its position in the ordered list. (A sketch of steps 3 and 4 follows the list.)
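As promised above, here is a sketch of our own (the hitting-time values are invented for illustration) of steps 3 and 4: ordering locations by expected hitting time and assigning tie-aware ranks in the manner of the definition that follows:

```go
package main

import (
	"fmt"
	"sort"
)

// htRank implements the tie-aware rank of the HT_Rank definition below:
// locations with strictly smaller expected hitting time count fully,
// and locations with equal expected hitting time count half.
func htRank(hit map[string]float64, loc string) float64 {
	smaller, ties := 0, 0
	for _, h := range hit {
		switch {
		case h < hit[loc]:
			smaller++
		case h == hit[loc]:
			ties++
		}
	}
	return float64(smaller) + float64(ties)/2.0
}

func main() {
	// Hypothetical expected hitting times E[T_{0,l}] for four locations.
	hit := map[string]float64{
		"Node.Has (2)":    12.5,
		"Node.Has (3)":    40.0,
		"Node.String (3)": 12.5,
		"main (1)":        95.0,
	}
	// Order locations by expected hitting time (step 3) ...
	locs := make([]string, 0, len(hit))
	for l := range hit {
		locs = append(locs, l)
	}
	sort.Slice(locs, func(i, j int) bool { return hit[locs[i]] < hit[locs[j]] })
	// ... and report each location's HT_Rank (step 4).
	for _, l := range locs {
		fmt.Printf("%-16s HT_Rank = %.1f\n", l, htRank(hit, l))
	}
}
```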
Definition 23. \(\textrm{HT}_{\textrm{Rank}}\) Given a set of locations \(L\) and a Markov chain \((S, {\bf P})\) that represents the debugging process and has start state \(0\), the Hitting-Time Rank Score \(\textrm{HT}_{\textrm{Rank}}\) for a location \(l \in L\cap S\) is: \[\begin{aligned} & {{\left|{ \left\{ x ~:~ x \in L \cap S \wedge {\textrm{E}\left[{T_{0,x}}\right]} < {\textrm{E}\left[{T_{0,l}}\right]} \right\} }\right|}} + \\ & \frac{ {{\left|{ \left\{ x ~:~ x \in L \cap S \wedge {\textrm{E}\left[{T_{0,x}}\right]} = {\textrm{E}\left[{T_{0,l}}\right]} \right\} }\right|}} }{ 2 } \end{aligned}\]

Note: this is almost identical to Definition 17, but it replaces the suspiciousness score with the expected hitting time. Definition 18 can also be modified in a similar way for multi-fault programs.

Markov Debugging Models

\(\textrm{HT}_{\textrm{Rank}}\) is parameterized by a "debugging model" expressed as a Markov chain. As noted above, a Markov chain is made up of a set of states \(S\) and a transition matrix \({\bf P}\). In a debugging model, there are two types of states: 1) textual locations in the source code of a program and 2) synthetic states. Figures 1 and 2 show examples of debugging models constructed for an implementation of an AVL tree. In the figures, the square nodes are Markov states representing basic blocks in the source code. (Note that CBSFL techniques typically compute the same suspiciousness score for all statements in a basic block.) The smaller, circular nodes are Markov states which are synthetic. They are there for structural reasons but do not represent particular locations in the source code. The edges in the graphs represent possible transitions between states, and they are annotated with transition probabilities. The probabilities on the outgoing edges of each node sum to 1.

In a Markov debugging model a programmer is represented by the Markov process. When the Markov process is simulated the debugging actions of a programmer are being simulated. This is a "first order" simulation, which means the actions of the simulated programmer (Markov process) only depend on the current location being examined. Thus, the Markov model provides a simple and easy-to-construct mathematical model of a programmer looking through the source code of a program to find the faulty statement(s). The simulated programmer begins at some starting state and moves from state to state until the faulty location is found. We require that all Markov models are ergodic, ensuring that every state (program location) is eventually reachable in the model.

An Extensible Markov Model for CBSFL

As described in Section 12 a CBSFL report is made up of a ranked list of locations in the subject program. Our extensible Markov model includes a representation of the CBSFL ranked list. By itself, using \(\textrm{HT}_{\textrm{Rank}}\) with a Markov model of a ranked list is just a mathematically complex way of restating Definition 17. However, with a Markov model of a CBSFL report in hand we can add further connections between program locations (represented as Markov states) to represent other actions a programmer might take besides traversing down the ranked list. For instance, in Section 25 we note that programmers use graphical debuggers to traverse the dynamic control flow in a program - allowing them to visit statements in the order they are executed. We embed the dynamic control flow graph into the transition matrix of the Markov model to reflect this observation.
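To connect the simulated-programmer view to the sampling procedure used later in the case study, the following sketch of ours (the toy matrix is invented) estimates an expected hitting time by averaging truncated random walks, which is the random-walk estimation method described above for large programs:

```go
package main

import (
	"fmt"
	"math/rand"
)

// step samples the next state from row P[s] of the transition matrix.
func step(P [][]float64, s int, rng *rand.Rand) int {
	r := rng.Float64()
	acc := 0.0
	for j, p := range P[s] {
		acc += p
		if r < acc {
			return j
		}
	}
	return len(P[s]) - 1 // guard against floating-point rounding
}

// estimateHittingTime averages the number of steps that walks starting
// at `start` take to first reach `fault`, truncating at maxSteps.
func estimateHittingTime(P [][]float64, start, fault, walks, maxSteps int) float64 {
	rng := rand.New(rand.NewSource(1))
	total := 0.0
	for w := 0; w < walks; w++ {
		s, steps := start, 0
		for s != fault && steps < maxSteps {
			s = step(P, s, rng)
			steps++
		}
		total += float64(steps)
	}
	return total / float64(walks) // the sample mean of the hitting times
}

func main() {
	// A toy 3-state debugging model; state 2 plays the faulty location.
	P := [][]float64{
		{0.0, 0.5, 0.5},
		{0.5, 0.0, 0.5},
		{0.5, 0.5, 0.0},
	}
	fmt.Println(estimateHittingTime(P, 0, 2, 10000, 1000)) // approx 2.0
}
```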
A Ranked List as a Markov Chain

Figure 1 provides a graphical example of a Markov chain for a ranked list. Since the nodes in a graphical representation of a Markov chain represent states and the edges represent the transition matrix, the probabilities associated with outgoing edges of each node should sum to 1. In Figure 1, each circular node represents a rank in the list and the square nodes represent associated program locations, which are identified by their function names and static basic-block id number. The square nodes that are grouped together all have the same suspiciousness scores.

We will provide here a brief, informal description of the structure of the transition matrix. A formal description of the chain is provided in Definition 28 in the Appendix. The exact choice of transition matrix in the formal chain was driven by a proof of equivalence between \(\textrm{HT}_{\textrm{Rank}}\) with this Markov model (as defined in Definition 28) and the Standard Rank Score (Definition 17).

The transition matrix boils down to a couple of simple connections. The nodes representing groups form a doubly linked list (see the circular nodes in Figure 1). The ordering of the "group nodes" matches the ordering of the ranks in the ranked list. The links between the nodes are weighted so that a Markov process will tend to end up in a highly ranked node. More formally, the model was constructed so that if you ordered the Markov states by the probabilities in the stationary distribution (which characterizes the long term behavior of the Markov process) from highest to lowest that ordering would match the order of the ranked list. The second type of connection is from a group node to its program location nodes. Each location node connects to exactly one group node. The transition probabilities (as shown in Figure 1) are set up such that there is an equal chance of moving to any of the locations in the group. The final connection is from a location node back to its group node. This is always assigned probability \(1\) (see Figure 1). Again, see Definition 28 in the Appendix for the formal description. (A sketch of this construction is given below.)

Adding Local Jumps to the CBSFL Chain

Figure 2 shows a modified version of the model in Figure 1. The modified model allows the Markov process to jump or "teleport" between program locations which are not adjacent in the CBSFL Ranked List. These "jump" connections are set up to model other ways a programmer might move through the source code of a program when debugging (see below). In the figure, these jumps are shown with added red and blue edges. For the Markov model depicted in the figure, the Markov process will with probability \(\frac{1}{2}\) move back to the rank list and with probability \(\frac{1}{2}\) teleport or jump to an alternative program location that is structurally adjacent (in the program's source code) to the current one.

Informally, the modified model is set up so that if the Markov process is in a state \(s_i\) which represents program location \(l_x\) it will with probability \(p_{\text{jump}}\) move to a state \(s_j\) which represents program location \(l_y\) instead of returning to the rank list. The locations that the process can move to from \(s_i\) are defined in a jump matrix \({\bf J}\), which is a parameter of the Markov model. The matrix \({\bf J}\) encodes assumptions or observations about how programmers behave when they are debugging. A formal definition of the CBSFL chain with jumps is presented in the Appendix (see Definition 29).
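The sketch below is our own illustration of the informal description above: group states form a doubly linked list, each group spreads its remaining probability uniformly over its member locations, and each location returns to its group with probability 1. The exact up/down weights are given by Definition 28 in the appendix, which is not reproduced here, so the `up` and `down` values in this sketch are merely illustrative stand-ins biased toward higher ranks:

```go
package main

import "fmt"

// buildChain assembles a transition matrix for a ranked list. groups[i]
// holds the indices (0..nLocs-1) of the locations in rank group i. The
// first m states are the synthetic group states; location states follow.
func buildChain(groups [][]int, nLocs int, up, down float64) [][]float64 {
	m := len(groups)
	n := m + nLocs
	P := make([][]float64, n)
	for i := range P {
		P[i] = make([]float64, n)
	}
	for g, members := range groups {
		rest := 1.0
		if g > 0 {
			P[g][g-1] = up // toward more highly ranked groups
			rest -= up
		}
		if g < m-1 {
			P[g][g+1] = down // toward less highly ranked groups
			rest -= down
		}
		for _, loc := range members {
			P[g][m+loc] = rest / float64(len(members))
			P[m+loc][g] = 1.0 // each location returns to its group
		}
	}
	return P
}

func main() {
	// Three rank groups over five locations (location indices 0..4).
	groups := [][]int{{0, 1}, {2}, {3, 4}}
	P := buildChain(groups, 5, 0.4, 0.1)
	for _, row := range P {
		fmt.Println(row)
	}
}
```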
Definition 26 defines one general-purpose jump matrix \({\bf J}\). It encodes two assumptions about programmer behavior. First, when a programmer considers a location inside a function they will also potentially examine other locations in that function. Second, when a programmer is examining a location they may examine locations preceding or succeeding it in the dynamic control flow (e.g., with the assistance of a graphical debugger). Definition 26 encodes both assumptions by setting the relevant \({\bf J}_{i,j}\) entries to \(1\). Definition. Spatial + Behavioral Jumps \[\begin{aligned} {\bf J}_{i,j} &= \left\{ \begin{array}{cll} 1 & \text{if} & \text{$s_i$ and $s_j$ represent locations} \\ & & \text{in the same function} \\ \\ 1 & \text{if} & \text{$s_i$ and $s_j$ are adjacent locations in the} \\ & & \text{program's dynamic control flow graph} \\ \\ 0 & \text{otherwise} \end{array} \right. \end{aligned}\] In addition to \({\bf J}\), the new chain is parameterized by the probability \(p_{\text{jump}}\) of a jump occurring when the process visits a state representing a program location. As \(p_{\text{jump}} \rightarrow 0\) the transition matrix of the new chain approaches the transition matrix for the chain in Definition 28 (see the Appendix). We suggest setting \(p_{\text{jump}}\) to \(0.5\) in the absence of data from a user study. Modeling SBBFL As noted earlier, Markov models can be constructed for alternative fault localization techniques. Suspicious Behavior Based Fault Localization (SBBFL) [8], [9] techniques return a report containing a ranked list of subgraphs or subtrees. Each subgraph contains multiple program locations usually drawn from a dynamic control flow graph of the program. Comparing this output to CBSFL using the Standard Rank Score can be difficult as a location may appear multiple times in graphs returned by the SBBFL algorithm. However, \(\textrm{HT}_{\textrm{Rank}}\) can produce an accurate and comparable score by utilizing the expected hitting times of the states representing the faulty location. Definition 30 in the Appendix provides an example Markov chain which models a ranked list of suspicious subgraphs. It can be extended (not shown due to space constraints) to add a Jump matrix in the manner of Definition 29 (see the Appendix). To illustrate our new approach to evaluation, we performed a case study in which it was used to evaluate several fault localization techniques of two different types: CBSFL [3], [4] and Suspicious-Behavior-Based Fault Localization (SBBFL) [8], [9]. We investigated the following research questions: RQ1: How Accurate is \(\textrm{HT}_{\textrm{Rank}}\) Estimation? (Table III) RQ2: Does it make a difference whether \(\textrm{HT}_{\textrm{Rank}}\) or the Standard Rank Score is used? (Table IV, Figs. 5 and 4) We also considered an important general question about fault localization that a typical research study might investigate. RQ3: Which kind of fault localization technique performs better, SBBFL or CBSFL? (Table IV) In order to use an SBBFL technique, more complex profiling data (dynamic control flow graphs or DCFGs) are needed than for CBSFL (which requires only coverage data). Therefore, a more specialized profiler was required for this study. We used Dynagrok (https://github.com/timtadh/dynagrok) [9] - a profiling tool for the Go Programming Language. We also reused subject programs used in a previous study [9] (see Table II). They are all real-world Go programs of various sizes that were injected with mutation faults.
The test cases for the programs are all either real-world inputs or test cases from the system regression testing suites that are distributed with the programs. Six representative suspiciousness metrics were considered: Ochiai, F1, Jaccard, Relative Ochiai, Relative F1, and Relative Jaccard [3]. These measures were all previously adapted to the SBBFL context [9].

Table II: Datasets used in the evaluation

    Program                                         L.O.C.  Faults  Description
    AVL (github.com/timtadh/dynagrok)                  483      19  An AVL tree
    Blackfriday (github.com/russross/blackfriday)    8,887      19  Markdown processor
    HTML (golang.org/x/net/html)                     9,540      20  An HTML parser
    Otto (github.com/robertkrimen/otto)             39,426      20  Javascript interpreter
    gc (go.googlesource.com/go)                     51,873      16  The Go compiler

Note: The AVL tree is in the examples directory. All applications of techniques were replicated to address random variation in the results, as the SBBFL algorithm that we used employs sampling [9]. The results from the replications were averaged and, unless otherwise noted, the average Rank Scores are presented. Finally, the exact \(\textrm{HT}_{\textrm{Rank}}\) score was computed for 4 of the 5 programs. For the fifth program, the Go compiler, although \(\textrm{HT}_{\textrm{Rank}}\) can be computed exactly, doing so requires so much time for each run to complete (\(\approx\) 4 hours) that it precluded computing exact results for each program version (including all models for both SBBFL and CBSFL). Therefore, expected hitting times for the Go compiler were estimated using the method outlined in Section 21. Specifically, we collected 500 samples of hitting times of all states in the Markov debugging model by taking random walks (with a maximum walk length of 1,000,000 steps). The expected hitting times were estimated by taking the sample mean. The Chosen \(\textrm{HT}_{\textrm{Rank}}\) Model To avoid bias, it was important to choose the \(\textrm{HT}_{\textrm{Rank}}\) model before the case study was conducted, which we did. We used \(\textrm{HT}_{\textrm{Rank}}\) with jumps (Definition 29) together with the jump matrix specified in Definition 26. We chose this matrix because we believe the order in which programmers examine program elements during debugging is driven in part by the structure of the program, even when they are debugging with the assistance of a fault localization report. The \(p_{\text{jump}}\) probability was set to \(0.5\) to indicate an equal chance of the programmer using or not using the fault localization report. A future large-scale user study could empirically characterize the behavior of programmers while performing debugging to inform the choice of the \(p_{\text{jump}}\) parameter. However, without such a study it is reasonable to use \(0.5\), which indicates "no information." RQ1: How Accurate is \(\textrm{HT}_{\textrm{Rank}}\) Estimation?

Table III: \(\textrm{HT}_{\textrm{Rank}}\) Estimation Error

    (columns: the four subject programs; rows: mean, median, stdev, p-value)
    p-value row: 0.781, 0.469, 0.473, 0.476

Percentage error of the estimated \(\textrm{HT}_{\textrm{Rank}}\) versus the exact \(\textrm{HT}_{\textrm{Rank}}\) (see note under Def 22). Letting \(y\) be the exact \(\textrm{HT}_{\textrm{Rank}}\) and \(\hat{y}\) be the estimated \(\textrm{HT}_{\textrm{Rank}}\), the percentage error is \(\frac{|\hat{y} - y|}{y} \times 100\). For large programs the cost of computing the expected hitting times for \(\textrm{HT}_{\textrm{Rank}}\) may be too high. Therefore, we have suggested estimating the expected hitting times (see note under Definition 22).
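The sampling procedure just described is straightforward to sketch. This is our illustration, not the study's code; the parameter defaults mirror the numbers stated above (500 walks capped at 1,000,000 steps).

```python
import numpy as np

rng = np.random.default_rng(13)

def sample_hitting_time(P, target, start=0, max_steps=1_000_000):
    # One random-walk sample of the time to first reach `target`
    # from `start` in a chain with transition matrix P.
    s, steps = start, 0
    while s != target and steps < max_steps:
        s = rng.choice(P.shape[0], p=P[s])
        steps += 1
    return steps

def estimate_hitting_time(P, target, samples=500):
    # Sample-mean estimate of E[T_{0,target}].
    return np.mean([sample_hitting_time(P, target) for _ in range(samples)])
```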
To assess the accuracy of estimating the expected hitting time instead of computing it exactly, we did both for four subject programs, using the Relative F1 measure. For each program version, SFL technique, and Markov chain type the estimation error was computed as a percentage of the true value. Table III presents descriptive statistics characterizing these errors for each subject program. The last row of the table gives p-values from an independent two-sample t-test comparing the estimated \(\textrm{HT}_{\textrm{Rank}}\) values and the exact \(\textrm{HT}_{\textrm{Rank}}\) values. The null hypothesis is that the expected estimated \(\textrm{HT}_{\textrm{Rank}}\) is the same as the expected exact \(\textrm{HT}_{\textrm{Rank}}\). All p-values are well above the \(0.05\) significance level that would suggest rejecting the null hypothesis. Therefore, we accept the null hypothesis that the \(\textrm{HT}_{\textrm{Rank}}\) estimates are not significantly different from the exact \(\textrm{HT}_{\textrm{Rank}}\) values. As indicated in the table, the maximum estimation error was around 3.7%. Therefore, if estimation is used we recommend considering \(\textrm{HT}_{\textrm{Rank}}\) scores that are within 5% of each other to be equivalent. RQ2: Does it make a difference whether \(\textrm{HT}_{\textrm{Rank}}\) or the Standard Rank Score is used?

Table IV: Fault Localization Performance

                                   Standard Rank Score       \(\textrm{HT}_{\textrm{Rank}}\) Score
    Metric           Program      CBSFL    SBBFL     %Δ      CBSFL     SBBFL      %Δ
    RelativeF1       avl            4.6      2.0    -56 %      9.1       4.9    -47 %
    RelativeF1       blackfriday    8.8      4.4    -50 %     42.6      11.0    -74 %
    RelativeF1       html          12.8      5.0    -61 %     40.7      25.3    -38 %
    RelativeF1       otto           8.2      6.2    -24 %    102.2      21.5    -79 %
    RelativeF1       compiler     264.8     98.9    -63 %   1148.9    1475.1     28 %
    RelativeOchiai   avl            4.6      2.0    -56 %      7.6       6.3    -16 %
    RelativeOchiai   blackfriday    8.8      4.4    -50 %     43.6       9.8    -78 %
    RelativeOchiai   html          12.8      5.0    -61 %     38.2      22.4    -41 %
    RelativeOchiai   otto           8.7      6.8    -22 %     99.2     102.6      3 %
    RelativeOchiai   compiler     262.2    101.6    -61 %   2888.8    2984.1      3 %
    RelativeJaccard  avl            4.6      2.0    -56 %     10.1       5.1    -50 %
    RelativeJaccard  blackfriday   16.0     12.1    -24 %     80.0     176.2    120 %
    RelativeJaccard  html          12.8      5.0    -61 %     33.3      24.4    -27 %
    RelativeJaccard  otto           8.2      6.3    -23 %     75.2      19.9    -74 %
    RelativeJaccard  compiler     747.9    482.0    -36 %   1074.1    1540.5     43 %
    F1               avl            4.6      2.0    -56 %     10.4       5.1    -51 %
    F1               blackfriday   16.0     12.1    -24 %     81.9     218.8    167 %
    F1               html          12.8      5.0    -61 %     33.9      18.6    -45 %
    F1               otto           8.2      6.2    -24 %     81.0      28.5    -65 %
    F1               compiler     746.8    470.4    -37 %   1047.0    1537.9     47 %
    Ochiai           avl            4.6      2.0    -56 %      5.9       6.3      7 %
    Ochiai           blackfriday   14.8     10.3    -30 %     51.2     108.7    112 %
    Ochiai           html          12.8      5.0    -61 %     30.7      25.3    -18 %
    Ochiai           otto          10.3      8.4    -19 %    312.8     607.7     94 %
    Ochiai           compiler    1091.6    773.5    -29 %   3137.2    2988.8     -5 %
    Jaccard          avl            4.6      2.0    -56 %     10.1       5.1    -50 %
    Jaccard          blackfriday   16.0     12.1    -24 %     77.8     187.7    141 %
    Jaccard          html          12.8      5.0    -61 %     33.3      24.6    -26 %
    Jaccard          otto           8.2      6.2    -24 %     75.3      24.7    -67 %
    Jaccard          compiler     747.9    485.4    -35 %   1078.8    1435.6     33 %

Summarizes the fault localization performance for CBSFL and SBBFL. Each fault localization technique is evaluated using both the Standard Rank Score and the \(\textrm{HT}_{\textrm{Rank}}\) Score (Defs. 28 and 25). The mean rank scores are shown as well as the percentage changes from CBSFL to SBBFL. Lower rank scores indicate better fault localization performance. A negative percentage difference (%\(\Delta\)) indicates SBBFL improved on CBSFL. A positive %\(\Delta\) indicates CBSFL outperformed SBBFL.
Fig 5: Empirical probability density plots for the subject program Otto - showing the distributions of both the Standard Rank Scores and the \(\textrm{HT}_{\textrm{Rank}}\) Scores across all runs of all versions of Otto. The CBSFL technique is shown in blue and the SBBFL technique is shown in orange. The only suspiciousness score used in this plot is RelativeF1. Fig 4: Comparison of CBSFL vs SBBFL average ranks (on log scale), under the Standard Rank Score (top) and \(\textrm{HT}_{\textrm{Rank}}\) (bottom), for each version of the program Otto; suspiciousness metric is RelativeF1. Table IV details the fault localization performance we observed for CBSFL and SBBFL under the Standard Rank Score and \(\textrm{HT}_{\textrm{Rank}}\) using six different suspiciousness metrics. The results obtained with \(\textrm{HT}_{\textrm{Rank}}\) were substantially different from those obtained with the Standard Rank Score, in terms of absolute ranks and percentage differences (%\(\Delta\)). The mean Standard Rank Score for SBBFL is lower than that for CBSFL for every suspiciousness metric and subject program. By contrast, for some programs and metrics, the mean \(\textrm{HT}_{\textrm{Rank}}\) scores for CBSFL are lower than those for SBBFL. The mean \(\textrm{HT}_{\textrm{Rank}}\) scores are also higher than the mean Standard Rank Scores overall. Another way to look at the same phenomenon is shown in Figure 5, which displays empirical probability density plots for the program Otto. Each plot compares the performance of CBSFL to SBBFL using a different evaluation method. The top plot uses the Standard Rank Score while the other plot uses \(\textrm{HT}_{\textrm{Rank}}\). As shown in the top plot, the Standard Rank Scores for both CBSFL and SBBFL are concentrated mainly between 0 and 15, with no values over 60. In the \(\textrm{HT}_{\textrm{Rank}}\) plot, by contrast, the scores for both CBSFL and SBBFL are much more widely dispersed. Figure 4 compares CBSFL to SBBFL with respect to average ranks (on a log scale), under the Standard Rank Score (top) and \(\textrm{HT}_{\textrm{Rank}}\) (bottom), for each version of the program Otto. The suspiciousness metric is RelativeF1. The results for \(\textrm{HT}_{\textrm{Rank}}\) are quite distinct from those for the Standard Rank Score, with CBSFL showing much more variability under \(\textrm{HT}_{\textrm{Rank}}\). These results provide confirmation for the theoretical motivation for \(\textrm{HT}_{\textrm{Rank}}\). Recall that one of the problems with the Standard Rank Score is that it does not account for either the differing structures of fault localization reports or differences in report granularity. SBBFL and CBSFL differ in both structure (ranked CFG fragments vs. ranked basic blocks) and granularity (multiple basic blocks vs. lone basic blocks). We expected the Standard Rank Score to unfairly favor SBBFL because it does not account for these differences and that is exactly what we see in Table IV. Under the Standard Rank Score SBBFL outperforms CBSFL on every program using every suspiciousness score. To be explicit, SBBFL reports ranked CFG fragments, each of which contains multiple basic blocks. Under the Standard Rank Score those fragments are ranked and the score is the rank of the first fragment in which the bug appears. CBSFL will, by contrast, rank each basic block independently. Now, consider the case where SBBFL reports a CFG fragment at rank 1 that contains 10 basic blocks, one of which contains the bug.
Suppose CBSFL also reports each of those basic blocks as maximally suspicious. The Standard Rank Score for CBSFL will be 5 while for SBBFL it will be 1. Thus, the Standard Rank Score is unfairly biased toward SBBFL and against CBSFL. This is once again reflected in Table IV. The new metric \(\textrm{HT}_{\textrm{Rank}}\) does not suffer from this problem. As shown in the table and discussed below, SBBFL often but not always outperforms CBSFL under \(\textrm{HT}_{\textrm{Rank}}\), suggesting that the theoretical correction we expect is indeed occurring. We therefore conclude that \(\textrm{HT}_{\textrm{Rank}}\) provides a better metric for comparison when reports differ in structure and granularity. RQ3: Which kind of fault localization technique performs better, SBBFL or CBSFL? Referring once again to Table IV, the \(\textrm{HT}_{\textrm{Rank}}\) results indicate that SBBFL often but not always outperformed CBSFL. In particular, when the measure used was Relative F1 (which was found to be the best-performing measure for SBBFL in [9]), SBBFL performed better for all programs but the compiler. However, when Ochiai was used CBSFL outperformed SBBFL, although CBSFL with Relative F1 outperforms CBSFL with Ochiai. This indicates that SBBFL and Relative F1 may be the best combination tested, with one caveat. For the compiler, SBBFL never outperforms CBSFL. However, CBSFL also performs very badly on this large program. This indicates that while CBSFL beats SBBFL on this program neither technique is effective at localizing faults in it. Study Limitations First, the purpose of our case study is to illustrate the application of \(\textrm{HT}_{\textrm{Rank}}\); it was not designed to settle controversies about the suspiciousness metrics we employed. Second, this study used real programs and tests but synthetic bugs. In the future, we intend to provide an automated analysis system that builds upon the Defects4J dataset [52]. Third, our study included a representative but not exhaustive set of suspiciousness metrics [2] and may not generalize to other metrics. Finally, no user study was performed to validate the chosen debugging model against actions an actual programmer would take. Although we would have liked to perform such a study at this time, our resources do not permit us to do so. To be conclusive, such a study would need to be large-scale and to utilize professional programmers who are skilled in traditional debugging and are able to spend significant time learning to use SFL techniques with comparable skill. A new, flexible approach to evaluating automatic fault localization techniques was presented. The new \(\textrm{HT}_{\textrm{Rank}}\) Score provides a principled and flexible way of comparing different fault localization techniques. It is robust to differences in granularity and allows complex fault localization reports (such as those produced by SBBFL) to be incorporated. Unlike previous attempts at cross-technique comparison (see Section 12), the final scores are based on the expected number of steps through a Markov model. The model can incorporate both information from the chosen fault localization technique as well as other information available to the programmer (such as the structure of the program). The \(\textrm{HT}_{\textrm{Rank}}\) Score is sensitive to the model used (see Fig 5) and hence the choice of model is important. The choice should be made based on 1) the fault localization technique(s) being evaluated and 2) observations of programmer behavior.
Choosing a model after viewing the results (and picking the model that gives the "best" results) leads to a biased outcome. Recommendations for Researchers Report both Standard Rank Score and \(\textrm{HT}_{\textrm{Rank}}\) Score. If evaluating multi-line faults, the Steimann Rank Score (Def. 18) should be used as the basis for defining the \(\textrm{HT}_{\textrm{Rank}}\) Score. Report which model is being used and why it was chosen, and include a description of the model. In the absence of user study data, set \(p_{\text{jump}} = 0.5\) and set the weights in \({\bf J}\) uniformly. By evaluating fault localization methods with \(\textrm{HT}_{\textrm{Rank}}\) in addition to the Standard Rank Score researchers will be able to make valid cross-technique comparisons. This will enable the community to better understand the relationships between techniques and report granularity while taking into account potential programmer behavior during debugging. This work was partially supported by NSF award CCF-1525178 to Case Western Reserve University. Markov Chain Definitions Definition. CBSFL Ranked List Markov Chain To construct a Markov chain representing a list of program locations ranked in descending order by their suspiciousness scores: Let \(L\) be the set of locations in the program. For a location \(l \in L\) let \(s(l)\) be its CBSFL suspiciousness score. Partition the locations in \(L\) into a list of groups \(G = \left\{ g_0 \subseteq L, g_1 \subseteq L, ..., g_n \subseteq L \right\}\) such that for each group \(g_i\) all of the locations it contains have the same score: \( \forall ~ {g_i \in G}, ~ \forall ~ {l, l' \in g_i} \left[ s(l) = s(l') \right]\) The score of a group \(s(g_i)\) is defined to be the common score of its members: \( \forall ~ {l \in g_i} \left[ s(g_i) = s(l) \right]\) Order \(G\) by the scores of its groups, such that \(g_0\) has the highest score and \(g_n\) has the lowest: \(s(g_0) > s(g_1) > ... > s(g_n)\) Now construct the set of states. There is one state for each group \(g \in G\) and for each location \(l \in L\). \[S = \left\{ g : g \in G \right\} \cup \left\{ l : l \in L \right\}\] Finally construct the transition matrix \({\bf P}\) for the states \(S\). \[{\bf P}_{i,j} = \left\{ \begin{array}{lll} 1 & \text{if} & s_i \in L \wedge s_j \in G \wedge s_i \in s_j \\ \\ \frac{{{\left|{L}\right|}} - 1}{2{{\left|{L}\right|}}} & \text{if} & s_i = g_0 \wedge s_j = s_i \\ \\ \frac{1}{2{{\left|{L}\right|}}} & \text{if} & s_i = g_n \wedge s_j = s_i \\ \\ \frac{{{\left|{L}\right|}} - 1}{2{{\left|{L}\right|}}} & \text{if} & s_i \in G \wedge s_j \in G \wedge s_i - 1 = s_j \\ \\ \frac{1}{2{{\left|{L}\right|}}} & \text{if} & s_i \in G \wedge s_j \in G \wedge s_i + 1 = s_j \\ \\ \frac{1}{2{{\left|{s_i}\right|}}} & \text{if} & s_i \in G \wedge s_j \in L \wedge s_j \in s_i \\ \\ 0 & \text{otherwise} \end{array} \right.\] Definition. CBSFL with Jumps Markov Chain This definition augments Definition 28. Let \({\bf J}\) be a "jump" matrix representing ways a programmer might move through the program during debugging. If \({\bf J}_{x,y} > 0\) then locations \(x,y \in L\) are "connected" by \({\bf J}\). Let \(p_{\text{jump}}\) be the probability that when visiting a location \(l \in L\) the Markov process "jumps" to another location. Let \(1 - p_{\text{jump}}\) be the probability that the process returns to the state which represents its group instead of jumping. As \(p_{\text{jump}} \rightarrow 0\) the behavior of the chain approaches the behavior of the chain in Definition 28.
The new transition matrix \({\bf P}\) for the states \(S\) is \[\begin{aligned} {\bf P}_{i,j} &= \left\{ \begin{array}{cll} 1 - p_{\text{jump}} & \text{if} & s_i \in L \wedge s_j \in G \wedge s_i \in s_j \\ & & \wedge \left( \sum_{k}{{\bf J}_{i,k}} \right) > 0 \\ \\ p_{\text{jump}} \left( \frac{{\bf J}_{i,j}}{\sum_{k}{{\bf J}_{i,k}}} \right) & \text{if} & s_i \in L \wedge s_j \in L \wedge {\bf J}_{i,j} > 0 \\ \\ \frac{{{\left|{L}\right|}} - 1}{2{{\left|{L}\right|}}} & \text{if} & s_i = g_0 \wedge s_j = s_i \\ \\ \frac{1}{2{{\left|{L}\right|}}} & \text{if} & s_i = g_n \wedge s_j = s_i \\ \\ \frac{{{\left|{L}\right|}} - 1}{2{{\left|{L}\right|}}} & \text{if} & s_i \in G \wedge s_j \in G \wedge s_i - 1 = s_j \\ \\ \frac{1}{2{{\left|{L}\right|}}} & \text{if} & s_i \in G \wedge s_j \in G \wedge s_i + 1 = s_j \\ \\ \frac{1}{2{{\left|{s_i}\right|}}} & \text{if} & s_i \in G \wedge s_j \in L \wedge s_j \in s_i \\ \\ 0 & \text{otherwise} \end{array} \right. \end{aligned}\] Definition. Suspicious Behavior Markov Chain A chain that models a ranked list of suspicious subgraphs: Let \(H\) be a set of suspicious subgraphs (behaviors). For a subgraph \(h \in H\) let \(\varsigma(h)\) be its suspiciousness score [9]. Partition the subgraphs in \(H\) into a list of groups \(G = \left\{ g_0 \subseteq H, g_1 \subseteq H, ..., g_n \subseteq H \right\}\) such that for each group \(g_i\) all of the subgraphs in \(g_i\) have the same score: \(\forall ~ {g_i \in G} ~ \forall ~ {a, b \in g_i} \left[ \varsigma(a) = \varsigma(b) \right]\) Let the score of a group \(\varsigma(g_i)\) be the same as the scores of its members: \( \forall ~ {g_i \in G} ~ \forall ~ {h \in g_i} \left[ \varsigma(g_i) = \varsigma(h) \right]\) Order \(G\) by the scores of its groups, such that \(g_0\) has the highest score and \(g_n\) has the lowest: \( \varsigma(g_0) > \varsigma(g_1) > ... > \varsigma(g_n)\) Now construct the set of states. There is one state for each group \(g \in G\), one state for each subgraph \(h \in H\), and one state for each location \(l \in V_{h}\) for all \(h \in H\). \[S = \left\{ g : g \in G \right\} \cup \left\{ h : h \in H \right\} \cup \left\{ l : l \in V_h,~ \forall~ h \in H \right\}\] Let \(c: L \rightarrow \mathbb{N}^{+}\) be a function that gives the number of subgraphs \(h \in H\) in which a location \(l\) appears. Finally construct the transition matrix \({\bf P}\) for the states \(S\). \[{\bf P}_{i,j} = \left\{ \begin{array}{cll} \frac{1}{c(s_i)} & \text{if} & s_i \in L \wedge s_j \in H \wedge s_i \in V_{s_j} \\ \\ \frac{1}{2}\frac{1}{{{\left|{V_{s_i}}\right|}}} & \text{if} & s_i \in H \wedge s_j \in L \wedge s_j \in V_{s_i} \\ \\ \frac{1}{2} & \text{if} & s_i \in H \wedge s_j \in G \wedge s_i \in s_j \\ \\ \frac{{{\left|{H}\right|}} - 1}{2{{\left|{H}\right|}}} & \text{if} & s_i = g_0 \wedge s_j = s_i \\ \\ \frac{1}{2{{\left|{H}\right|}}} & \text{if} & s_i = g_n \wedge s_j = s_i \\ \\ \frac{{{\left|{H}\right|}} - 1}{2{{\left|{H}\right|}}} & \text{if} & s_i \in G \wedge s_j \in G \wedge s_i - 1 = s_j \\ \\ \frac{1}{2{{\left|{H}\right|}}} & \text{if} & s_i \in G \wedge s_j \in G \wedge s_i + 1 = s_j \\ \\ \frac{1}{2{{\left|{s_i}\right|}}} & \text{if} & s_i \in G \wedge s_j \in H \wedge s_j \in s_i \\ \\ 0 & \text{otherwise} \end{array} \right.\] [1] J. Jones, M. Harrold, and J. Stasko, "Visualization of test information to assist fault localization," Proceedings of the 24th International Conference on Software Engineering. ICSE 2002, 2002, doi:10.1145/581339.581397. [2] Lucia, D. Lo, L. Jiang, F. Thung, and A.
Budi, "Extended comprehensive study of association measures for fault localization," Journal of Software: Evolution and Process, vol. 26, no. 2, pp. 172-219, Feb. 2014, doi:10.1002/smr.1616. [3] S.-F. Sun and A. Podgurski, "Properties of Effective Metrics for Coverage-Based Statistical Fault Localization," in 2016 ieee international conference on software testing, verification and validation (icst), 2016, pp. 124-134, doi:10.1109/ICST.2016.31. [4] J. A. Jones and M. J. Harrold, "Empirical Evaluation of the Tarantula Automatic Fault-localization Technique," in Proceedings of the 20th ieee/acm international conference on automated software engineering, 2005, pp. 273-282, doi:10.1145/1101908.1101949. [5] S. Pearson, J. Campos, R. Just, G. Fraser, R. Abreu, M. D. Ernst, D. Pang, and B. Keller, "Evaluating and Improving Fault Localization," in Proceedings of the 39th international conference on software engineering, 2017, pp. 609-620, doi:10.1109/ICSE.2017.62. [6] C. Parnin and A. Orso, "Are Automated Debugging Techniques Actually Helping Programmers?" in ISSTA, 2011, pp. 199-209. [7] M. Renieres and S. Reiss, "Fault localization with nearest neighbor queries," in 18th ieee international conference on automated software engineering, 2003. proceedings., 2003, pp. 30-39, doi:10.1109/ASE.2003.1240292. [8] H. Cheng, D. Lo, Y. Zhou, X. Wang, and X. Yan, "Identifying Bug Signatures Using Discriminative Graph Mining," in Proceedings of the eighteenth international symposium on software testing and analysis, 2009, pp. 141-152, doi:10.1145/1572272.1572290. [9] T. A. D. Henderson and A. Podgurski, "Behavioral Fault Localization by Sampling Suspicious Dynamic Control Flow Subgraphs," in IEEE conference on software testing, validation and verification, 2018. [10] G. G. K. Baah, A. Podgurski, and M. J. M. Harrold, "Causal inference for statistical fault localization," in Proceedings of the 19th international symposium on software testing and analysis, 2010, pp. 73-84, doi:10.1145/1831708.1831717. [11] C. C. Aggarwal and J. Han, Eds., Frequent Pattern Mining. Cham: Springer International Publishing, 2014. [12] R. Agrawal, T. Imieliński, and A. Swami, "Mining association rules between sets of items in large databases," ACM SIGMOD Record, vol. 22, no. 2, pp. 207-216, Jun. 1993, doi:10.1145/170036.170072. [13] X. Yan and J. Han, "gSpan: graph-based substructure pattern mining," in 2002 ieee international conference on data mining, 2002. proceedings., 2002, pp. 721-724, doi:10.1109/ICDM.2002.1184038. [14] C. C. Aggarwal, M. A. Bhuiyan, and M. A. Hasan, "Frequent Pattern Mining Algorithms: A Survey," in Frequent pattern mining, Cham: Springer International Publishing, 2014, pp. 19-64. [15] X. Yan, H. Cheng, J. Han, and P. S. Yu, "Mining Significant Graph Patterns by Leap Search," in Proceedings of the 2008 acm sigmod international conference on management of data, 2008, pp. 433-444, doi:10.1145/1376616.1376662. [16] R. Abreu, P. Zoeteweij, and A. Van Gemund, "An Evaluation of Similarity Coefficients for Software Fault Localization," in 2006 12th pacific rim international symposium on dependable computing (prdc'06), 2006, pp. 39-46, doi:10.1109/PRDC.2006.18. [17] R. Abreu, P. Zoeteweij, R. Golsteijn, and A. J. C. van Gemund, "A practical evaluation of spectrum-based fault localization," Journal of Systems and Software, vol. 82, no. 11, pp. 1780-1792, 2009, doi:10.1016/j.jss.2009.06.035. [18] P. Agarwal and A. P. Agrawal, "Fault-localization Techniques for Software Systems: A Literature Review," SIGSOFT Softw. Eng. 
Notes, vol. 39, no. 5, pp. 1-8, Sep. 2014, doi:10.1145/2659118.2659125. [19] W. E. Wong, R. Gao, Y. Li, R. Abreu, and F. Wotawa, "A Survey on Software Fault Localization," IEEE Transactions on Software Engineering, vol. 42, no. 8, pp. 707-740, Aug. 2016, doi:10.1109/TSE.2016.2521368. [20] A. Zeller, "Yesterday, My Program Worked. Today, It Does Not. Why?" SIGSOFT Softw. Eng. Notes, vol. 24, no. 6, pp. 253-267, Oct. 1999, doi:10.1145/318774.318946. [21] F. Tip, "A survey of program slicing techniques," Journal of programming languages, vol. 3, no. 3, pp. 121-189, 1995. [22] X. Mao, Y. Lei, Z. Dai, Y. Qi, and C. Wang, "Slice-based statistical fault localization," Journal of Systems and Software, vol. 89, no. 1, pp. 51-62, 2014, doi:10.1016/j.jss.2013.08.031. [23] A. Marcus, A. Sergeyev, V. Rajlich, and J. I. Maletic, "An information retrieval approach to concept location in source code," Proceedings - Working Conference on Reverse Engineering, WCRE, pp. 214-223, 2004, doi:10.1109/WCRE.2004.10. [24] J. Zhou, H. Zhang, and D. Lo, "Where should the bugs be fixed? More accurate information retrieval-based bug localization based on bug reports," Proceedings - International Conference on Software Engineering, pp. 14-24, 2012, doi:10.1109/ICSE.2012.6227210. [25] T.-D. B. Le, R. J. Oentaryo, and D. Lo, "Information retrieval and spectrum based bug localization: better together," in Proceedings of the 2015 10th joint meeting on foundations of software engineering - esec/fse 2015, 2015, pp. 579-590, doi:10.1145/2786805.2786880. [26] S. Artzi, J. Dolby, F. Tip, and M. Pistoia, "Directed Test Generation for Effective Fault Localization," in Proceedings of the 19th international symposium on software testing and analysis, 2010, pp. 49-60, doi:10.1145/1831708.1831715. [27] S. K. Sahoo, J. Criswell, C. Geigle, and V. Adve, "Using likely invariants for automated software fault localization," in Proceedings of the eighteenth international conference on architectural support for programming languages and operating systems, 2013, vol. 41, p. 139, doi:10.1145/2451116.2451131. [28] A. Perez, R. Abreu, and A. Riboira, "A Dynamic Code Coverage Approach to Maximize Fault Localization Efficiency," J. Syst. Softw., vol. 90, pp. 18-28, Apr. 2014, doi:10.1016/j.jss.2013.12.036. [29] H. Agrawal, J. Horgan, S. London, and W. Wong, "Fault localization using execution slices and dataflow tests," in Proceedings of sixth international symposium on software reliability engineering. issre'95, 1995, pp. 143-151, doi:10.1109/ISSRE.1995.497652. [30] H. Cleve and A. Zeller, "Locating causes of program failures," Proceedings of the 27th international conference on Software engineering - ICSE '05, p. 342, 2005, doi:10.1145/1062455.1062522. [31] S. Horwitz, "Identifying the Semantic and Textual Differences Between Two Versions of a Program," SIGPLAN Not., vol. 25, no. 6, pp. 234-245, Jun. 1990, doi:10.1145/93548.93574. [32] E. Wong, T. Wei, Y. Qi, and L. Zhao, "A Crosstab-based Statistical Method for Effective Fault Localization," in 2008 international conference on software testing, verification, and validation, 2008, pp. 42-51, doi:10.1109/ICST.2008.65. [33] D. Landsberg, H. Chockler, D. Kroening, and M. Lewis, "Evaluation of Measures for Statistical Fault Localisation and an Optimising Scheme," in International conference on fundamental approaches to software engineering, 2015, vol. 9033, pp. 115-129, doi:10.1007/978-3-662-46675-9. [34] Y. Zheng, Z. Wang, X. Fan, X. Chen, and Z.
Yang, "Localizing multiple software faults based on evolution algorithm," Journal of Systems and Software, vol. 139, pp. 107-123, 2018, doi:10.1016/j.jss.2018.02.001. [35] C. Liu, L. Fei, X. Yan, J. Han, and S. P. Midkiff, "Statistical debugging: A hypothesis testing-based approach," IEEE Transactions on Software Engineering, vol. 32, no. 10, pp. 831-847, Oct. 2006, doi:10.1109/TSE.2006.105. [36] J. Ferrante, K. J. Ottenstein, and J. D. Warren, "The program dependence graph and its use in optimization," vol. 9. pp. 319-349, Jul-1987. [37] S. Ali, J. H. Andrews, T. Dhandapani, and W. Wang, "Evaluating the Accuracy of Fault Localization Techniques," 2009 IEEE/ACM International Conference on Automated Software Engineering, pp. 76-87, 2009, doi:10.1109/ASE.2009.89. [38] S. Moon, Y. Kim, M. Kim, and S. Yoo, "Ask the Mutants: Mutating faulty programs for fault localization," Proceedings - IEEE 7th International Conference on Software Testing, Verification and Validation, ICST 2014, pp. 153-162, 2014, doi:10.1109/ICST.2014.28. [39] C. Liu, H. Yu, P. S. Yu, X. Yan, H. Yu, J. Han, and P. S. Yu, "Mining Behavior Graphs for 'Backtrace' of Noncrashing Bugs," in Proceedings of the 2005 siam international conference on data mining, 2005, pp. 286-297, doi:10.1137/1.9781611972757.26. [40] G. Di Fatta, S. Leue, and E. Stegantova, "Discriminative Pattern Mining in Software Fault Detection," in Proceedings of the 3rd international workshop on software quality assurance, 2006, pp. 62-69, doi:10.1145/1188895.1188910. [41] F. Eichinger, K. Böhm, and M. Huber, "Mining Edge-Weighted Call Graphs to Localise Software Bugs," in European conference machine learning and knowledge discovery in databases, 2008, pp. 333-348, doi:10.1007/978-3-540-87479-9_40. [42] F. Eichinger, K. Krogmann, R. Klug, and K. Böhm, "Software-defect Localisation by Mining Dataflow-enabled Call Graphs," in Proceedings of the 2010 european conference on machine learning and knowledge discovery in databases: Part i, 2010, pp. 425-441. [43] Z. Mousavian, M. Vahidi-Asl, and S. Parsa, "Scalable Graph Analyzing Approach for Software Fault-localization," in Proceedings of the 6th international workshop on automation of software test, 2011, pp. 15-21, doi:10.1145/1982595.1982599. [44] F. Eichinger, C. Oßner, and K. Böhm, "Scalable software-defect localisation by hierarchical mining of dynamic call graphs," Proceedings of the 11th SIAM International Conference on Data Mining, SDM 2011, no. c, pp. 723-734, 2011. [45] S. Parsa, S. A. Naree, and N. E. Koopaei, "Software Fault Localization via Mining Execution Graphs," in Proceedings of the 2011 international conference on computational science and its applications - volume part ii, 2011, pp. 610-623. [46] L. Mariani, F. Pastore, and M. Pezze, "Dynamic Analysis for Diagnosing Integration Faults," IEEE Trans. Softw. Eng., vol. 37, no. 4, pp. 486-508, Jul. 2011, doi:10.1109/TSE.2010.93. [47] A. Yousefi and A. Wassyng, "A Call Graph Mining and Matching Based Defect Localization Technique," in 2013 ieee sixth international conference on software testing, verification and validation workshops, 2013, pp. 86-95, doi:10.1109/ICSTW.2013.17. [48] J. P. A. Ioannidis, "Why most published research findings are false," PLoS Medicine, vol. 2, no. 8, pp. 0696-0701, 2005, doi:10.1371/journal.pmed.0020124. [49] C. M. Grinstead and J. L. Snell, Introduction to Probability, 2nd ed. Providence, RI: American Mathematical Society, 1997. [50] J. G. Kemeny and J. L. Snell, Finite Markov Chains, First. 
Princeton, NJ: Van Nostrand, 1960. [51] T. A. Davis, "Algorithm 832: UMFPACK V4.3—an Unsymmetric-pattern Multifrontal Method," ACM Trans. Math. Softw., vol. 30, no. 2, pp. 196-199, Jun. 2004, doi:10.1145/992200.992206. [52] R. Just, D. Jalali, and M. D. Ernst, "Defects4J: A Database of Existing Faults to Enable Controlled Testing Studies for Java Programs," in Proceedings of the 2014 international symposium on software testing and analysis, 2014, pp. 437-440, doi:10.1145/2610384.2628055. [53] F. Steimann, M. Frenkel, and R. Abreu, "Threats to the validity and value of empirical assessments of the accuracy of coverage-based fault locators," Proceedings of the 2013 International Symposium on Software Testing and Analysis - ISSTA 2013, p. 314, 2013, doi:10.1145/2483760.2483767. Jones et al. did not use the term "suspiciousness score" or "suspiciousness metric" in their 2002 paper [1]. They introduced the term "suspiciousness score" in their 2005 paper [4], in the context of ranking statements. Both terms are now in common use.↩ A dependence sphere is computed from the Program Dependence Graph (PDG) [31], [36]. In a PDG, program elements are nodes and their control and data dependencies are represented as edges. Given two nodes \(\alpha\) and \(\beta\) in a graph with a shortest path \(\pi\) (ignoring edge directionality) between them, the sphere is all those nodes in the graph that have a path from \(\alpha\) as short as (or shorter than) \(\pi\) (once again ignoring edge directionality).↩ Implementations of these two computations may be found at https://github.com/timtadh/expected-hitting-times.↩ See https://hackthology.com/pdfs/scam-2019-supplement.pdf↩
What are the limitations of the analogy between a complex classical oscillator with imaginary energy and a quantum system? In this video lecture about mathematical physics (after 1:20:00) Carl Bender explains that allowing the energy of a complex classical oscillator to become imaginary can in some sense mimic quantum behavior of the system. To explain this, he uses the example of a classical particle with the potential energy given by \(V(x) = x^4 -x^2\) If the energy of the particle is larger than the energy of the ground state (but smaller than the barrier between the two potential wells), $E_1 > E_0$, it can oscillate in one of the two potential wells centered around the two minima of $V(x)$, but not travel between them. However, allowing the energy of the particle to become imaginary, say \(E = E_0 + \varepsilon i\) the trajectories of the particle in the complex plane are no longer closed and it can in some sense "tunnel" between the two potential wells. The "tunneling time" $T$ is related to the imaginary part of the energy as \(T \sim \frac{1}{\varepsilon}\) In addition, Carl Bender mentions that taking the limit $\varepsilon \rightarrow 0$ of such a complex oscillator system in some sense mimics the process of taking the classical limit of a quantum system. What are the limitations of this analogy between a complex oscillator with imaginary energy and a "real" quantum system (no pun intended ;-) ...)? An aside: the jumping between the two potential wells of a complex oscillator with imaginary energy reminded me of a Lorenz Attractor ... mathematical-physics quantisation asked Apr 18, 2014 in Theoretical Physics by Dilaton (5,240 points) [ revision history ] edited Apr 18, 2014 by Dilaton Not related to your question, just want to say Carl Bender did show some surprising stuff in these lectures, I remember in lecture 3(?) he claimed that the system with repulsive potential $V(x)=-x^4$ has discrete energy levels. I'm still puzzled by this even today, maybe I should post it as another question sometime. commented Apr 18, 2014 by Jia Yiyang (2,640 points) [ no revision ] The true analogy is between a classical stochastic oscillator (which is always given by a complex frequency) and a quantum system. Classically, the ''tunneling'' is visualized as climbing over the barrier (which I believe is also the correct way to view the quantum tunneling).
The deterministic limit of a classical stochastic system, where the noise (hence the imaginary part) goes to zero, is analogous to the deterministic limit of a quantum system, where Planck's constant goes to zero. Indeed, classical stochastic systems and quantum systems are analogous from many points of view. They both have a probabilistic interpretation in terms of densities (functions in the classical case, matrices/operators in the quantum case), and they both can be handled with path integrals. The main difference is that stochastic path integrals are well-defined as they do not have the factor i in the exponent that causes severe oscillations in the quantum case. Chaotic systems are different; they are classical _and_ deterministic! In particular, the jump between the two sides of a Lorenz attractor happens frequently, while tunneling = jumps over barriers is quite rare (unless the barriers are very low). answered Apr 18, 2014 by Arnold Neumaier (13,989 points) [ no revision ]
\begin{definition}[Definition:Golden Mean Number System/Simplification] Consider the golden mean number system. Let $x \in \R_{\ge 0}$ have a representation which includes the string $011$, say: :$x = p011q$ where $p$ and $q$ are strings in $\left\{ {0, 1}\right\}$. From 100 in Golden Mean Number System is Equivalent to 011, $x$ can also be written as: :$x = p100q$ The expression $p100q$ is a '''simplification''' of $p011q$. \end{definition}
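A one-line justification (our note, not part of the ProofWiki entry): the replacement is value-preserving because the base $\phi$ satisfies $\phi^2 = \phi + 1$.

```latex
% Why p100q and p011q denote the same number in base \phi:
% \phi satisfies \phi^2 = \phi + 1, so at every digit position k,
\phi^{k+2} = \phi^{k+1} + \phi^{k}
% and hence the digit pattern 100 has the same value as the
% pattern 011 occupying the same three positions.
```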
Quantum algorithm for linear systems of equations The quantum algorithm for linear systems of equations, also called HHL algorithm, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd, is a quantum algorithm published in 2008 for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.[1] The algorithm is one of the main fundamental algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm, Grover's search algorithm, and the quantum Fourier transform. Provided the linear system is sparse and has a low condition number $\kappa $, and that the user is interested in the result of a scalar measurement on the solution vector, instead of the values of the solution vector itself, then the algorithm has a runtime of $O(\log(N)\kappa ^{2})$, where $N$ is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa )$ (or $O(N{\sqrt {\kappa }})$ for positive semidefinite matrices). An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by Cai et al., Barz et al. and Pan et al. in parallel. The demonstrations consisted of simple linear equations on specially designed quantum devices.[2][3][4] The first demonstration of a general-purpose version of the algorithm appeared in 2018 in the work of Zhao et al.[5] Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability.[6] Procedure The HHL algorithm tackles the following problem: given an $N\times N$ Hermitian matrix $A$ and a unit vector ${\vec {b}}\in \mathbb {R} ^{N}$, prepare the quantum state $|x\rangle $ corresponding to the vector ${\vec {x}}\in \mathbb {R} ^{N}$ that solves the linear system $A{\vec {x}}={\vec {b}}$. More precisely, the goal is to prepare a state $|x\rangle $ whose amplitudes equal the elements of ${\vec {x}}$. This means, in particular, that the algorithm cannot be used to efficiently retrieve the vector ${\vec {x}}$ itself. It does, however, allow one to efficiently compute expectation values of the form $\langle x|M|x\rangle $ for some observable $M$. First, the algorithm represents the vector ${\vec {b}}$ as a quantum state of the form: $|b\rangle =\sum _{i\mathop {=} 1}^{N}b_{i}|i\rangle .$ Next, Hamiltonian simulation techniques are used to apply the unitary operator $e^{iAt}$ to $|b\rangle $ for a superposition of different times $t$. The ability to decompose $|b\rangle $ into the eigenbasis of $A$ and to find the corresponding eigenvalues $\lambda _{j}$ is facilitated by the use of quantum phase estimation. The state of the system after this decomposition is approximately: $\sum _{j\mathop {=} 1}^{N}\beta _{j}|u_{j}\rangle |\lambda _{j}\rangle ,$ where $u_{j}$ is the eigenvector basis of $A$, and $|b\rangle =\sum _{j\mathop {=} 1}^{N}\beta _{j}|u_{j}\rangle $. We would then like to perform the linear map taking $|\lambda _{j}\rangle $ to $C\lambda _{j}^{-1}|\lambda _{j}\rangle $, where $C$ is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions as it has some probability of failing.
After it succeeds, we uncompute the $|\lambda _{j}\rangle $ register and are left with a state proportional to: $\sum _{j\mathop {=} 1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle ,$ where $|x\rangle $ is a quantum-mechanical representation of the desired solution vector x. To read out all components of x would require the procedure be repeated at least N times. However, it is often the case that one is not interested in $x$ itself, but rather some expectation value of a linear operator M acting on x. By mapping M to a quantum-mechanical operator and performing the quantum measurement corresponding to M, we obtain an estimate of the expectation value $\langle x|M|x\rangle $. This allows for a wide variety of features of the vector x to be extracted including normalization, weights in different parts of the state space, and moments without actually computing all the values of the solution vector x. Explanation of the algorithm Initialization Firstly, the algorithm requires that the matrix $A$ be Hermitian so that it can be converted into a unitary operator. In the case where $A$ is not Hermitian, define $\mathbf {C} ={\begin{bmatrix}0&A\\A^{\dagger }&0\end{bmatrix}}.$ As $C$ is Hermitian, the algorithm can now be used to solve $Cy={\begin{bmatrix}b\\0\end{bmatrix}}.$ to obtain $y={\begin{bmatrix}0\\x\end{bmatrix}}$. Secondly, the algorithm requires an efficient procedure to prepare $|b\rangle $, the quantum representation of b. It is assumed that there exists some linear operator $B$ that can take some arbitrary quantum state $|\mathrm {initial} \rangle $ to $|b\rangle $ efficiently or that this algorithm is a subroutine in a larger algorithm and is given $|b\rangle $ as input. Any error in the preparation of state $|b\rangle $ is ignored. Finally, the algorithm assumes that the state $|\psi _{0}\rangle $ can be prepared efficiently, where $|\psi _{0}\rangle :={\sqrt {2/T}}\sum _{\tau \mathop {=} 0}^{T-1}\sin \pi \left({\tfrac {\tau +{\tfrac {1}{2}}}{T}}\right)|\tau \rangle $ for some large $T$. The coefficients of $|\psi _{0}\rangle $ are chosen to minimize a certain quadratic loss function which induces error in the $U_{\mathrm {invert} }$ subroutine described below. Hamiltonian simulation Hamiltonian simulation is used to transform the Hermitian matrix $A$ into a unitary operator, which can then be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and given a row index these entries can be computed in time O(s). Under these assumptions, quantum Hamiltonian simulation allows $e^{iAt}$ to be simulated in time $O(\log(N)s^{2}t)$. Uinvert subroutine The key subroutine to the algorithm, denoted $U_{\mathrm {invert} }$, is defined as follows and incorporates a phase estimation subroutine: 1. Prepare $|\psi _{0}\rangle ^{C}$ on register C 2. Apply the conditional Hamiltonian evolution $\sum _{\tau \mathop {=} 0}^{T-1}|\tau \rangle \langle \tau |^{C}\otimes e^{iA\tau t_{0}/T}$ 3. Apply the Fourier transform to the register C. Denote the resulting basis states with $|k\rangle $ for k = 0, ..., T − 1. Define $\lambda _{k}:=2\pi k/t_{0}$. 4. Adjoin a three-dimensional register S in the state $|h(\lambda _{k})\rangle ^{S}:={\sqrt {1-f(\lambda _{k})^{2}-g(\lambda _{k})^{2}}}|\mathrm {nothing} \rangle ^{S}+f(\lambda _{k})|\mathrm {well} \rangle ^{S}+g(\lambda _{k})|\mathrm {ill} \rangle ^{S},$ 5.
Reverse steps 1–3, uncomputing any garbage produced along the way. The phase estimation procedure in steps 1-3 allows for the estimation of eigenvalues of A up to error $\epsilon $. The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of A. In this register, the functions f, g, are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of $|b\rangle $ is in the ill-conditioned subspace of A and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule. Main loop The body of the algorithm follows the amplitude amplification procedure: starting with $U_{\mathrm {invert} }B|\mathrm {initial} \rangle $, the following operation is repeatedly applied: $U_{\mathrm {invert} }BR_{\mathrm {init} }B^{\dagger }U_{\mathrm {invert} }^{\dagger }R_{\mathrm {succ} },$ where $R_{\mathrm {succ} }=I-2|\mathrm {well} \rangle \langle \mathrm {well} |$ and $R_{\mathrm {init} }=I-2|\mathrm {initial} \rangle \langle \mathrm {initial} |.$ After each repetition, $S$ is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until $|\mathrm {well} \rangle $ is measured, which occurs with a probability $p$. Rather than repeating ${\frac {1}{p}}$ times to minimize error, amplitude amplification is used to achieve the same error resilience using only $O\left({\frac {1}{\sqrt {p}}}\right)$ repetitions. Scalar measurement After successfully measuring 'well' on $S$, the system will be in a state proportional to: $\sum _{j\mathop {=} 1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle .$ Finally, we perform the quantum-mechanical operator corresponding to M and obtain an estimate of the value of $\langle x|M|x\rangle $. Run time analysis Classical efficiency The best classical algorithm which produces the actual solution vector ${\overrightarrow {x}}$ is Gaussian elimination, which runs in $O(N^{3})$ time. If A is s-sparse and positive semi-definite, then the Conjugate Gradient method can be used to find the solution vector ${\overrightarrow {x}}$, which can be found in $O(Ns\kappa )$ time by minimizing the quadratic function $|A{\overrightarrow {x}}-{\overrightarrow {b}}|^{2}$. When only a summary statistic of the solution vector ${\overrightarrow {x}}$ is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of ${\overrightarrow {x}}^{\dagger }M{\overrightarrow {x}}$ in $O(N{\sqrt {\kappa }})$. Quantum efficiency The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be $O(\kappa ^{2}\log N/\varepsilon )$, where $\varepsilon >0$ is the error parameter and $\kappa $ is the condition number of $A$.
This was subsequently improved to $O(\kappa \log ^{3}\kappa \log N/\varepsilon ^{3})$ by Andris Ambainis[7] and a quantum algorithm with runtime polynomial in $\log(1/\varepsilon )$ was developed by Childs et al.[8] Since the HHL algorithm maintains its logarithmic scaling in $N$ only for sparse or low rank matrices, Wossnig et al.[9] extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in $O({\sqrt {N}}\log N\kappa ^{2})$ time compared to the $O(N\log N\kappa ^{2})$ of the standard HHL algorithm. Optimality An important factor in the performance of the matrix inversion algorithm is the condition number $\kappa $, which represents the ratio of $A$'s largest and smallest eigenvalues. As the condition number increases, the ease with which the solution vector can be found using gradient descent methods such as the conjugate gradient method decreases, as $A$ becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. This algorithm assumes that all singular values of the matrix $A$ lie between ${\frac {1}{\kappa }}$ and 1, in which case the claimed run-time proportional to $\kappa ^{2}$ will be achieved. Therefore, the speedup over classical algorithms is increased further when $\kappa $ scales as $\mathrm {poly} (\log(N))$.[1] If the run-time of the algorithm were made poly-logarithmic in $\kappa $ then problems solvable on n qubits could be solved in poly(n) time, causing the complexity class BQP to be equal to PSPACE.[1] Error analysis Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating $e^{iAt}$. Assuming that $A$ is s-sparse, this can be done with an error bounded by a constant $\varepsilon $, which will translate to the additive error achieved in the output state $|x\rangle $. The phase estimation step errs by $O\left({\frac {1}{t_{0}}}\right)$ in estimating $\lambda $, which translates into a relative error of $O\left({\frac {1}{\lambda t_{0}}}\right)$ in $\lambda ^{-1}$. If $\lambda \geq 1/\kappa $, taking $t_{0}=O(\kappa /\varepsilon )$ induces a final error of $\varepsilon $. This requires that the overall run-time efficiency be increased proportional to $O\left({\frac {1}{\varepsilon }}\right)$ to minimize error. Experimental realization While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal until 2013 when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel. Cai et al. Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving $2\times 2$ linear equations for various input vectors. The quantum circuit is optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which is used to coherently implement every subroutine for this algorithm. For various input vectors, the quantum computer gives solutions for the linear equations with reasonably high precision, ranging from fidelities of 0.825 to 0.993.[10] Barz et al.
On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. found that the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion.[3] Pan et al. On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtain the solution vector with over 96% fidelity.[4] Wen et al. Another experimental demonstration using NMR for solving an 8×8 system was reported by Wen et al.[11] in 2018 using the algorithm developed by Subaşı et al.[12] Applications Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited use outside the domain of quantum mechanics. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, which is a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems. Electromagnetic scattering Clader et al. provided a preconditioned version of the linear systems algorithm that offered two advances. First, they demonstrated how a preconditioner could be included within the quantum algorithm. This expands the class of problems that can achieve the promised exponential speedup, since the scaling of HHL and the best classical algorithms are both polynomial in the condition number. The second advance was the demonstration of how to use HHL to solve for the radar cross-section of a complex shape. This was one of the first end to end examples of how to use HHL to solve a concrete problem exponentially faster than the best known classical algorithm.[13] Linear differential equation solving Dominic Berry proposed a new algorithm for solving linear time-dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full-time evolution under sparse linear differential equations on a quantum computer.[14] Finite element method The Finite Element Method uses large systems of linear equations to find approximate solutions to various physical and mathematical models. Montanaro and Pallister demonstrate that the HHL algorithm, when applied to certain FEM problems, can achieve a polynomial quantum speedup. They suggest that an exponential speedup is not possible in problems with fixed dimensions, and for which the solution meets certain smoothness conditions. Quantum speedups for the finite element method are higher for problems which include solutions with higher-order derivatives and large spatial dimensions. For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black-Scholes models, require large spatial dimensions.[15]
For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black-Scholes models, require large spatial dimensions.[15]

Least-squares fitting

Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit, in which a continuous function is used to approximate a set of discrete points, by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm.[16]

Machine learning and big data analysis

Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and are thus well-suited platforms for machine learning algorithms.[17] The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which a training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers.[18]

In June 2018, Zhao et al. developed an algorithm for performing Bayesian training of deep neural networks on quantum computers with an exponential speedup over classical training, due to the use of the quantum algorithm for linear systems of equations,[5] also providing the first general-purpose implementation of the algorithm to be run on cloud-based quantum computers.[19]

Critique

Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson[20] analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm.

1. The input vector, $|b\rangle$, has to be efficiently prepared as a quantum state. If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes $O(n^{c})$ steps the exponential advantage of HHL would vanish.
2. The quantum phase estimation step calls for the generation of the unitary $e^{iAt}$ and its controlled application. The efficiency of this step depends on the matrix $A$ being sparse and "well conditioned" (low $\kappa$). Otherwise, the application of $e^{iAt}$ would grow as $O(n^{c})$ and, once again, the algorithm's quantum advantage would vanish.
3. Lastly, the solution vector, $|x\rangle$, is not readily accessible. The HHL algorithm enables learning a "summary" of the vector, namely the result of measuring the expectation of an operator, $\langle x|M|x\rangle$. If actual values of $\vec{x}$ are needed, then HHL would need to be repeated $O(n)$ times, killing the exponential speed-up (see the sketch following this list).
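The third caveat can be made concrete with a small classical sketch: given a normalized solution state, a single expectation value $\langle x|M|x\rangle$ is cheap to estimate, whereas reading out all $n$ amplitudes requires $O(n)$ repetitions. The matrix, right-hand side, and observable below are hypothetical stand-ins:

import numpy as np

# Sketch of caveat 3: HHL yields |x> only implicitly; one measures the
# expectation <x|M|x> of an observable M rather than reading out x itself.
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
A = (A + A.T) / 2 + n * np.eye(n)           # well-conditioned Hermitian matrix
b = rng.normal(size=n)

x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)             # normalized solution state |x>

M = np.diag(np.arange(n, dtype=float))      # some Hermitian observable
expectation = x_state @ M @ x_state         # the 'summary' <x|M|x>

# Recovering all n amplitudes instead would need O(n) repeated runs,
# erasing the exponential speed-up.
print(f"<x|M|x> = {expectation:.4f}")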
See also

• Differentiable programming

References

1. Harrow, Aram W; Hassidim, Avinatan; Lloyd, Seth (2008). "Quantum algorithm for solving linear systems of equations". Physical Review Letters. 103 (15): 150502. arXiv:0811.3171. Bibcode:2009PhRvL.103o0502H. doi:10.1103/PhysRevLett.103.150502. PMID 19905613. S2CID 5187993.
2. Cai, X.-D; Weedbrook, C; Su, Z.-E; Chen, M.-C; Gu, Mile; Zhu, M.-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei (2013). "Experimental Quantum Computing to Solve Systems of Linear Equations". Physical Review Letters. 110 (23): 230501. arXiv:1302.4310. Bibcode:2013PhRvL.110w0501C. doi:10.1103/PhysRevLett.110.230501. PMID 25167475. S2CID 20427454.
3. Barz, Stefanie; Kassal, Ivan; Ringbauer, Martin; Lipp, Yannick Ole; Dakić, Borivoje; Aspuru-Guzik, Alán; Walther, Philip (2014). "A two-qubit photonic quantum processor and its application to solving systems of linear equations". Scientific Reports. 4: 6115. arXiv:1302.1210. Bibcode:2014NatSR...4E6115B. doi:10.1038/srep06115. ISSN 2045-2322. PMC 4137340. PMID 25135432.
4. Pan, Jian; Cao, Yudong; Yao, Xiwei; Li, Zhaokai; Ju, Chenyong; Peng, Xinhua; Kais, Sabre; Du, Jiangfeng (2014). "Experimental realization of quantum algorithm for solving linear systems of equations". Physical Review A. 89 (2): 022313. arXiv:1302.1946. Bibcode:2014PhRvA..89b2313P. doi:10.1103/PhysRevA.89.022313. S2CID 14303240.
5. Zhao, Zhikuan; Pozas-Kerstjens, Alejandro; Rebentrost, Patrick; Wittek, Peter (2019). "Bayesian Deep Learning on a Quantum Computer". Quantum Machine Intelligence. 1 (1–2): 41–51. arXiv:1806.11463. doi:10.1007/s42484-019-00004-7. S2CID 49554188.
6. Quantum Computer Runs The Most Practically Useful Quantum Algorithm, by Lu and Pan.
7. Ambainis, Andris (2010). "Variable time amplitude amplification and a faster quantum algorithm for solving systems of linear equations". arXiv:1010.4458 [quant-ph].
8. Childs, Andrew M.; Kothari, Robin; Somma, Rolando D. (2017). "Quantum Algorithm for Systems of Linear Equations with Exponentially Improved Dependence on Precision". SIAM Journal on Computing. 46 (6): 1920–1950. arXiv:1511.02306. doi:10.1137/16m1087072. ISSN 0097-5397. S2CID 3834959.
9. Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam (2018). "A quantum linear system algorithm for dense matrices". Physical Review Letters. 120 (5): 050502. arXiv:1704.06174. Bibcode:2018PhRvL.120e0502W. doi:10.1103/PhysRevLett.120.050502. PMID 29481180. S2CID 3714239.
10. Cai, X.-D; Weedbrook, Christian; Su, Z.-E; Chen, M.-C; Gu, Mile; Zhu, M.-J; Li, L; Liu, N.-L; Lu, Chao-Yang; Pan, Jian-Wei (2013). "Experimental Quantum Computing to Solve Systems of Linear Equations". Physical Review Letters. 110 (23): 230501. arXiv:1302.4310. Bibcode:2013PhRvL.110w0501C. doi:10.1103/PhysRevLett.110.230501. PMID 25167475. S2CID 20427454.
11. Wen, Jingwei; Kong, Xiangyu; Wei, Shijie; Wang, Bixue; Xin, Tao; Long, Guilu (2019). "Experimental realization of quantum algorithms for a linear system inspired by adiabatic quantum computing". Physical Review A. 99 (1): 012320.
12. Subaşı, Yiğit; Somma, Rolando D.; Orsucci, Davide (2019-02-14). "Quantum Algorithms for Systems of Linear Equations Inspired by Adiabatic Quantum Computing". Physical Review Letters. 122 (6): 060504. arXiv:1805.10549. Bibcode:2019PhRvL.122f0504S. doi:10.1103/physrevlett.122.060504. ISSN 0031-9007. PMID 30822089. S2CID 73493666.
13. Clader, B. D; Jacobs, B. C; Sprouse, C. R (2013). "Preconditioned Quantum Linear System Algorithm". Physical Review Letters. 110 (25): 250504. arXiv:1301.2340. Bibcode:2013PhRvL.110y0504C. doi:10.1103/PhysRevLett.110.250504. PMID 23829722. S2CID 33391978.
14. Berry, Dominic W (2010). "High-order quantum algorithm for solving linear differential equations". Journal of Physics A: Mathematical and Theoretical. 47 (10): 105301. arXiv:1010.2745. Bibcode:2014JPhA...47j5301B. doi:10.1088/1751-8113/47/10/105301. S2CID 17623971.
15. Montanaro, Ashley; Pallister, Sam (2016). "Quantum Algorithms and the Finite Element Method". Physical Review A. 93 (3): 032324. arXiv:1512.05903. Bibcode:2016PhRvA..93c2324M. doi:10.1103/PhysRevA.93.032324. S2CID 44004935.
16. Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2012). "Quantum Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID 23006156. S2CID 118439810.
17. Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2013). "Quantum algorithms for supervised and unsupervised machine learning". arXiv:1307.0411 [quant-ph].
18. Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth (2013). "Quantum support vector machine for big feature and big data classification". arXiv:1307.0471v2 [quant-ph].
19. "apozas/bayesian-dl-quantum". GitLab. Retrieved 30 October 2018.
20. Aaronson, Scott (2015). "Read the fine print". Nature Physics. 11 (4): 291–293. Retrieved 2023-05-09.
Formalized Mathematics (ISSN 0777-4028), Volume 1, Number 2 (1990).

Agata Darmochwal. Families of Subsets, Subspaces and Mappings in Topological Spaces, Formalized Mathematics 1(2), pages 257-261, 1990. MML Identifier: TOPS_2
Summary: This article is a continuation of \cite{TOPS_1.ABS}. Some basic theorems about families of sets in a topological space have been proved. The following redefinitions have been made: the singleton of a set as a family in the topological space, and results of boolean operations on families as a family of the topological space. The notions of a family of complements of sets and of a closed (open) family have also been introduced. Next, some theorems refer to subspaces in a topological space: some facts about types in a subspace, theorems about open and closed sets and families in a subspace. A notion of restriction of a family has also been introduced and basic properties of this notion have been proved. The last part of the article is about mappings. Necessary and sufficient conditions for a mapping to be continuous are proved. A notion of homeomorphism has been defined next. Theorems about homeomorphisms of topological spaces have also been proved.

Jan Popiolek. Some Properties of Functions Modul and Signum, Formalized Mathematics 1(2), pages 263-264, 1990. MML Identifier: ABSVALUE
Summary: The article includes definitions and theorems concerning basic properties of the following functions: $|x|$ -- modul of a real number, sgn $x$ -- signum of a real number.

Grzegorz Bancerek. Zermelo Theorem and Axiom of Choice, Formalized Mathematics 1(2), pages 265-267, 1990. MML Identifier: WELLORD2
Summary: The article is a continuation of \cite{WELLORD1.ABS} and \cite{ORDINAL1.ABS}, and its goal is to show that the Zermelo theorem (every set has a relation which well orders it - proposition (26)) and the axiom of choice (for every non-empty family of non-empty and separate sets there is a set which has exactly one common element with an arbitrary family member - proposition (27)) are true. It is a result of Tarski's axiom A introduced in \cite{TARSKI:1} and repeated in \cite{TARSKI.ABS}. Inclusion as a set-theoretical binary relation is introduced, the correspondence of well ordering relations to ordinal numbers is shown, and basic properties of equinumerosity are presented. Some facts are based on \cite{KURAT-MOST:1}.

Jaroslaw Kotowicz. Real Sequences and Basic Operations on Them, Formalized Mathematics 1(2), pages 269-272, 1990. MML Identifier: SEQ_1
Summary: The definition of a real sequence and operations on sequences (multiplication of sequences and multiplication by a real number, addition, subtraction, division and absolute value of a sequence) are given.

Jaroslaw Kotowicz. Convergent Sequences and the Limit of Sequences, Formalized Mathematics 1(2), pages 273-275, 1990. MML Identifier: SEQ_2
Summary: The article contains definitions and some basic properties of bounded sequences (above and below), convergent sequences and the limit of sequences. The article also includes some properties of real numbers useful in the other theorems of this article.

Grzegorz Bancerek. Properties of ZF Models, Formalized Mathematics 1(2), pages 277-280, 1990. MML Identifier: ZFMODEL1
Summary: The article deals with the concepts of satisfiability of ZF set theory language formulae in a model (a non-empty family of sets) and the axioms of ZF theory introduced in \cite{MOST:1}.
It is shown that the transitive model satisfies the axiom of extensionality and that it satisfies the axiom of pairs if and only if it is closed under the pair operation; it satisfies the axiom of unions if and only if it is closed under the union operation, etc. The conditions which are satisfied by an arbitrary model of ZF set theory are also shown. In addition, definable and parametrically definable functions are introduced.

Grzegorz Bancerek. Sequences of Ordinal Numbers, Formalized Mathematics 1(2), pages 281-290, 1990. MML Identifier: ORDINAL2
Summary: In the first part of the article we introduce the following operations: On $X$, which yields the set of all ordinals which belong to the set $X$; Lim $X$, which yields the set of all limit ordinals which belong to $X$; and inf $X$ and sup $X$, which yield the minimal ordinal belonging to $X$ and the minimal ordinal greater than all ordinals belonging to $X$, respectively. The second part of the article starts with schemes that can be used to justify the correctness of definitions based on transfinite induction (see \cite{ORDINAL1.ABS} or \cite{KURAT-MOST:1}). The schemes are used to define addition, product and power of ordinal numbers. The operations of limes inferior and limes superior of sequences of ordinals are defined, and the concepts of the limit of an ordinal sequence and of increasing and continuous sequences are introduced.

Wojciech A. Trybulec. Vectors in Real Linear Space, Formalized Mathematics 1(2), pages 291-296, 1990. MML Identifier: RLVECT_1
Summary: In this article we introduce the notion of real linear space and operations on vectors: addition, multiplication by a real number, inverse vector, subtraction. The sum of a finite sequence of vectors is also defined. Theorems that belong rather to \cite{NAT_1.ABS} or \cite{FINSEQ_1.ABS} are proved.

Wojciech A. Trybulec. Subspaces and Cosets of Subspaces in Real Linear Space, Formalized Mathematics 1(2), pages 297-301, 1990. MML Identifier: RLSUB_1
Summary: The following notions are introduced in the article: subspace of a real linear space, zero subspace and improper subspace, coset of a subspace. The relation of a subset of the vectors being linearly closed is also introduced. Basic theorems concerning those notions are proved in the article.

Piotr Rudnicki, Andrzej Trybulec. A First Order Language, Formalized Mathematics 1(2), pages 303-311, 1990. MML Identifier: QC_LANG1
Summary: In the paper a first order language is constructed. It includes the universal quantifier and the following propositional connectives: truth, negation, and conjunction. The variables are divided into three kinds: bound variables, fixed variables, and free variables. An infinite number of predicates for each arity is provided. Schemes of structural induction and schemes justifying definitions by structural induction have been proved. The concept of a closed formula (a formula without free occurrences of bound variables) is introduced.

Wojciech A. Trybulec. Partially Ordered Sets, Formalized Mathematics 1(2), pages 313-319, 1990. MML Identifier: ORDERS_1
Summary: In the beginning of this article we define the choice function of a non-empty set family that does not contain $\emptyset$, as introduced in \cite[pages 88--89]{KURAT:1}. We define an order of a set as a relation being reflexive, antisymmetric and transitive in the set, a partially ordered set as a structure consisting of a non-empty set and an order of the set, chains, lower and upper cones of a subset, and initial segments of an element and of a subset of a partially ordered set.
Some theorems that belong rather to \cite{ZFMISC_1.ABS} or \cite{RELAT_2.ABS} are proved.

Krzysztof Hryniewiecki. Recursive Definitions, Formalized Mathematics 1(2), pages 321-328, 1990. MML Identifier: RECDEF_1
Summary: The text contains some schemes which allow the elimination of definitions by recursion.

Andrzej Trybulec. Binary Operations Applied to Functions, Formalized Mathematics 1(2), pages 329-334, 1990. MML Identifier: FUNCOP_1
Summary: In the article we introduce functors yielding to a binary operation its composition with arbitrary functions on its left side, its right side, or both. We prove theorems describing the basic properties of these functors. We also introduce constant functions and the converse of a function. The latter concept is defined for an arbitrary function, but is meaningful in the case of functions whose range is a subset of a Cartesian product of two sets. Then the converse of a function has the same domain as the function itself and assigns to an element of the domain the mirror image of the ordered pair assigned by the function. In the case of functions defined on a non-empty set we redefine the above-mentioned functors and prove simplified versions of theorems proved in the general case. We also prove theorems stating relationships between the introduced concepts and such properties of binary operations as commutativity and associativity.

Eugeniusz Kusak, Wojciech Leonczuk, Michal Muzalewski. Abelian Groups, Fields and Vector Spaces, Formalized Mathematics 1(2), pages 335-342, 1990. MML Identifier: VECTSP_1
Summary: This text includes definitions of the Abelian group, field and vector space over a field, and some elementary theorems about them.

Eugeniusz Kusak, Wojciech Leonczuk, Michal Muzalewski. Parallelity Spaces, Formalized Mathematics 1(2), pages 343-348, 1990. MML Identifier: PARSP_1
Summary: In the monograph \cite{SZMIELEW:1} W. Szmielew introduced the parallelity planes $\langle S$; $\parallel \rangle$, where $\parallel \subseteq S\times S\times S\times S$. In this text we omit the upper bound axiom which must be satisfied by the parallelity planes (see also E. Kusak \cite{KUSAK:1}). Further we will list those theorems which remain true when we pass from the parallelity planes to the parallelity spaces. We construct a model of the parallelity space in the Abelian group $\langle F\times F\times F; +_F, -_F, {\bf 0}_F \rangle$, where $F$ is a field.

Eugeniusz Kusak, Wojciech Leonczuk, Michal Muzalewski. Construction of a bilinear antisymmetric form in simplectic vector space, Formalized Mathematics 1(2), pages 349-352, 1990. MML Identifier: SYMSP_1
Summary: In this text we present unpublished results by Eugeniusz Kusak. It contains an axiomatic description of the class of all spaces $\langle V$; $\perp_\xi \rangle$, where $V$ is a vector space over a field $F$, $\xi: V \times V \to F$ is a bilinear antisymmetric form, i.e. $\xi(x,y) = -\xi(y,x)$, and $x \perp_\xi y$ iff $\xi(x,y) = 0$ for $x$, $y \in V$. It also contains an effective construction of a bilinear antisymmetric form $\xi$ for a given symplectic space $\langle V$; $\perp \rangle$ such that $\perp = \perp_\xi$. The basic tool used in this method is the notion of orthogonal projection J$(a,b,x)$ for $a,b,x \in V$. We should stress the fact that the axioms of orthogonal and symplectic spaces differ only by one axiom, namely: $x\perp y+\varepsilon z \>\&\> y\perp z+\varepsilon x \Rightarrow z\perp x+\varepsilon y$. For $\varepsilon=+1$ we get the axiom characterizing symplectic geometry.
For $\varepsilon=-1$ we get the axiom on three perpendiculars characterizing orthogonal geometry - see \cite{ORTSP_1.ABS}.

Eugeniusz Kusak, Wojciech Leonczuk, Michal Muzalewski. Construction of a bilinear symmetric form in orthogonal vector space, Formalized Mathematics 1(2), pages 353-356, 1990. MML Identifier: ORTSP_1
Summary: In this text we present unpublished results by Eugeniusz Kusak and Wojciech Leończuk. They contain an axiomatic description of the class of all spaces $\langle V$; $\perp_\xi \rangle$, where $V$ is a vector space over a field $F$, $\xi: V \times V \to F$ is a bilinear symmetric form, i.e. $\xi(x,y) = \xi(y,x)$, and $x \perp_\xi y$ iff $\xi(x,y) = 0$ for $x$, $y \in V$. They also contain an effective construction of a bilinear symmetric form $\xi$ for a given orthogonal space $\langle V$; $\perp \rangle$ such that $\perp = \perp_\xi$. The basic tool used in this method is the notion of orthogonal projection J$(a,b,x)$ for $a,b,x \in V$. We should stress the fact that the axioms of orthogonal and symplectic spaces differ only by one axiom, namely: $x\perp y+\varepsilon z \>\&\> y\perp z+\varepsilon x \Rightarrow z\perp x+\varepsilon y$. For $\varepsilon=-1$ we get the axiom on three perpendiculars characterizing orthogonal geometry. For $\varepsilon=+1$ we get the axiom characterizing symplectic geometry - see \cite{SYMSP_1.ABS}.

Czeslaw Bylinski. Partial Functions, Formalized Mathematics 1(2), pages 357-367, 1990. MML Identifier: PARTFUN1
Summary: In the article we define partial functions. We also define the following notions related to partial functions and functions themselves: the empty function, the restriction of a function to a partial function from a set into a set, the set of all partial functions from a set into a set, the total functions, the relation of tolerance of two functions, and the set of all total functions which are tolerated by a partial function. Some simple propositions related to the introduced notions are proved. In the beginning of this article we prove some auxiliary theorems and schemes related to the articles \cite{FUNCT_1.ABS} and \cite{FUNCT_2.ABS}.

Andrzej Trybulec. Semilattice Operations on Finite Subsets, Formalized Mathematics 1(2), pages 369-376, 1990. MML Identifier: SETWISEO
Summary: In the article we deal with a binary operation that is associative and commutative. For such an operation we define a functor that depends on two more arguments: a finite set of indices and a function indexing elements of the domain of the operation, and that yields the result of applying the operation to all indexed elements. The definition has a restriction that requires that either the set of indices is non-empty or the operation has a unity. We prove theorems describing some properties of the functor introduced. Most of them we prove in two versions, depending on which requirement is fulfilled. In the second part we deal with the union of finite sets, which enjoys the properties mentioned above. We prove analogs of the theorems proved in the first part. We precede the main part of the article with auxiliary theorems related to boolean properties of sets, enumerated sets, finite subsets, and functions. We define a casting function that yields to a set the empty set typed as a finite subset of the set. We also prove two schemes of induction on finite sets.

Grzegorz Bancerek. Cardinal Numbers, Formalized Mathematics 1(2), pages 377-382, 1990. MML Identifier: CARD_1
Summary: We present the choice function rule in the beginning of the article.
In the main part of the article we formalize the basis of cardinal theory. In the first section we introduce the concept of cardinal numbers and order relations between them. We present here the Cantor-Bernstein theorem and other properties of the order relation of cardinals. In the second section we show that every set has a cardinal number equipotent to it. We introduce the notion of alephs and deal with the concept of a finite set. At the end of the article we show two schemes of cardinal induction. Some definitions are based on \cite{GUZ-ZBIER:1} and \cite{KURAT-MOST:1}.

Agata Darmochwal. Compact Spaces, Formalized Mathematics 1(2), pages 383-386, 1990. MML Identifier: COMPTS_1
Summary: The article contains the definition of a compact space and some theorems about compact spaces. The notions of a cover of a set and of a centered family are defined in the article to be used in these theorems. A set is compact in a topological space if and only if every open cover of the set has a finite subcover. This definition is equivalent, as is shown next, to the following definition: a set is compact if and only if the subspace generated by that set is compact. Some theorems about mappings and homeomorphisms of compact spaces have also been proved. The following schemes used in proofs of theorems have been proved in the article: FuncExChoice -- the scheme of choice of a function, BiFuncEx -- the scheme of parallel choice of two functions, and the theorem about the choice of a finite counter image of a finite image.

Wojciech A. Trybulec, Grzegorz Bancerek. Kuratowski -- Zorn Lemma, Formalized Mathematics 1(2), pages 387-393, 1990. MML Identifier: ORDERS_2
Summary: The goal of this article is to prove the Kuratowski-Zorn lemma. We prove it in a number of forms (theorems and schemes). We introduce the following notions: a relation is a quasi (or partial, or linear) order, a relation quasi (or partially, or linearly) orders a set, minimal and maximal elements in a relation, inferior and superior elements of a relation, a set has the lower (or upper) Zorn property w.r.t. a relation. We prove basic theorems concerning those notions and theorems that relate them to the notions introduced in \cite{ORDERS_1.ABS}. At the end of the article we prove some theorems that belong rather to \cite{RELAT_1.ABS}, \cite{RELAT_2.ABS} or \cite{WELLORD1.ABS}.

Wojciech A. Trybulec. Operations on Subspaces in Real Linear Space, Formalized Mathematics 1(2), pages 395-399, 1990. MML Identifier: RLSUB_2
Summary: In this article the following operations on subspaces of a real linear space are introduced: sum, intersection and direct sum. Some theorems about those notions are proved. We define the linear complement of a subspace. Some theorems about the decomposition of a vector onto two subspaces, and onto a subspace and its linear complement, are proved. We also show that the set of subspaces with the operations sum and intersection is a lattice. At the end of the article theorems that belong rather to \cite{BOOLE.ABS}, \cite{RLVECT_1.ABS}, \cite{RLSUB_1.ABS} or \cite{LATTICES.ABS} are proved.

Andrzej Ndzusiak. $\sigma$-Fields and Probability, Formalized Mathematics 1(2), pages 401-407, 1990. MML Identifier: PROB_1
Summary: This article contains definitions and theorems concerning basic properties of the following objects: a field of subsets of a given nonempty set; a sequence of subsets of a given nonempty set; a $\sigma$-field of subsets of a given nonempty set and events from this $\sigma$-field; a probability, i.e.
a $\sigma$-additive normed measure defined on the previously introduced $\sigma$-field; a $\sigma$-field generated by a family of subsets of a given set; the family of Borel sets.

Czeslaw Bylinski. Introduction to Categories and Functors, Formalized Mathematics 1(2), pages 409-420, 1990. MML Identifier: CAT_1
Summary: The category is introduced as an ordered 6-tuple of the form $\langle O, M, dom, cod, \cdot, id \rangle$, where $O$ (objects) and $M$ (morphisms) are arbitrary nonempty sets, $dom$ and $cod$ map $M$ onto $O$ and assign to a morphism its domain and codomain, $\cdot$ is a partial binary map from $M \times M$ to $M$ (composition of morphisms), and $id$ applied to an object yields the identity morphism. We define the basic notions of category theory such as hom, monic, epi, invertible. We next define functors, the composition of functors, faithfulness and fullness of functors, isomorphism between categories and the identity functor.

Grzegorz Bancerek. Introduction to Trees, Formalized Mathematics 1(2), pages 421-427, 1990. MML Identifier: TREES_1
Summary: The article consists of two parts: the first one deals with the concept of the prefixes of a finite sequence, the second one introduces and deals with the concept of a tree. In addition, some auxiliary propositions concerning finite sequences are presented. The trees are introduced as non-empty sets of finite sequences of natural numbers which are closed under prefixes and under sequences of smaller numbers (i.e. if $\langle n_1$, $n_2$, $\dots$, $n_k\rangle$ is a vertex (element) of a tree and $m_i \leq n_i$ for $i = 1$, $2$, $\dots$, $k$, then $\langle m_1$, $m_2$, $\dots$, $m_k\rangle$ also is). Finite trees, elementary trees with $n$ leaves, the leaves and the subtrees of a tree, the insertion of a tree into another tree, with a node used for determining the place of insertion, antichains of prefixes, and the height and width of finite trees are introduced.
Kinetic & Related Models, September 2011, 4(3): 767-783. doi: 10.3934/krm.2011.4.767

Asymptotic limit of nonlinear Schrödinger-Poisson system with general initial data

Qiangchang Ju (Institute of Applied Physics and Computational Mathematics, Box 8009-28, Beijing 100088, China), Fucai Li (Department of Mathematics, Nanjing University, Nanjing 210093) and Hailiang Li (Department of Mathematics and Institute of Mathematics and Interdisciplinary Science, Capital Normal University, Beijing 100037, China)

Received January 2011; Revised May 2011; Published August 2011

The asymptotic limit of the nonlinear Schrödinger-Poisson system with general WKB initial data is studied in this paper. It is proved that the current, defined by the smooth solution of the nonlinear Schrödinger-Poisson system, converges to the strong solution of the incompressible Euler equations plus a term of fast singular oscillating gradient vector fields when both the Planck constant $\hbar$ and the Debye length $\lambda$ tend to zero. The proof involves homogenization techniques, theories of symmetric quasilinear hyperbolic systems and elliptic estimates, and the key point is to establish uniformly bounded estimates with respect to both the Planck constant and the Debye length.

Keywords: semi-classical limit, incompressible Euler equations, quasi-neutral limit, nonlinear Schrödinger-Poisson system.

Mathematics Subject Classification: Primary: 35Q55; Secondary: 35B40.

Citation: Qiangchang Ju, Fucai Li, Hailiang Li. Asymptotic limit of nonlinear Schrödinger-Poisson system with general initial data. Kinetic & Related Models, 2011, 4 (3) : 767-783. doi: 10.3934/krm.2011.4.767
Journal of Biomolecular NMR, pp 1–13

NMR characterization of solvent accessibility and transient structure in intrinsically disordered proteins

Christoph Hartlmüller, Emil Spreitzer, Christoph Göbl, Fabio Falsone, Tobias Madl

In order to understand the conformational behavior of intrinsically disordered proteins (IDPs) and their biological interaction networks, the detection of residual structure and long-range interactions is required. However, the large number of degrees of conformational freedom of disordered proteins requires the integration of extensive sets of experimental data, which are difficult to obtain. Here, we provide a straightforward approach for the detection of residual structure and long-range interactions in IDPs under near-native conditions using solvent paramagnetic relaxation enhancement (sPRE). Our data indicate that for the general case of an unfolded chain, with a local flexibility described by the overwhelming majority of available combinations, sPREs of non-exchangeable protons can be accurately predicted through an ensemble-based fragment approach. We show for the disordered protein α-synuclein and disordered regions of the proteins FOXO4 and p53 that deviation from random coil behavior can be interpreted in terms of an intrinsic propensity to populate local structure in interaction sites of these proteins and to adopt transient long-range structure. The presented modification-free approach promises to be applicable to the study of conformational dynamics of IDPs and other dynamic biomolecules in an integrative approach.

Keywords: Solvent paramagnetic relaxation enhancement, Intrinsically disordered proteins, Residual structure, p53, FOXO4, α-Synuclein

Christoph Hartlmüller and Emil Spreitzer are the shared first authors. The online version of this article (https://doi.org/10.1007/s10858-019-00248-2) contains supplementary material, which is available to authorized users.

Introduction

The well-established structure–function paradigm has been challenged by the discovery of intrinsically disordered proteins (IDPs) (Dyson and Wright 2005). It is suggested that about 40% of all proteins have disordered regions of 40 or more residues, with many proteins existing solely in the unfolded state (Tompa 2012; Romero et al. 1998). Although they lack stable secondary or tertiary structure elements, this large class of proteins plays a crucial role in various cellular processes (Theillet et al. 2014; Wright and Dyson 2015; van der Lee et al. 2014; Uversky et al. 2014). Disorder serves a biological role, where conformational heterogeneity granted by disordered regions enables proteins to exert diverse functions in response to various stimuli. Unlike structured proteins, which are essential for catalysis and transport, disordered proteins are crucial for regulation and signaling. Due to their intrinsic flexibility they can act as network hubs interacting with a wide range of biomolecules, forming dynamic regulatory networks (Dyson and Wright 2005; Tompa 2012; Babu et al. 2011; Flock et al. 2014; Wright and Dyson 1999; Uversky 2011; Habchi et al. 2014). Given the plethora of potential interaction partners, it is not surprising that the interactions of IDPs with binding partners are often tightly regulated via an intricate 'code' of post-translational modifications, including phosphorylation, methylation, acetylation, and various others (Wright and Dyson 2015; Bah and Forman-Kay 2016).
These proteins, and distortions in their interaction networks, for example by mutations and aberrant post-translational modifications (PTMs), are closely linked to a range of human diseases, including cancers, neurodegeneration, cardiovascular disorders and diabetes; nevertheless, they are currently considered difficult to study (Dyson and Wright 2005; Tompa 2012; Babu et al. 2011; Habchi et al. 2014; Metallo 2010; Uversky et al. 2008; Dyson and Wright 2004). Complications arise from the following factors: these proteins lack well-defined stable structure, they exist in a dynamic equilibrium of distinct conformational states, and the limited number of experimental techniques and observables renders IDP conformational characterization underdetermined (Mittag and Forman-Kay 2007; Eliezer 2009). Thus, the integration of new sets of experimental and analytical techniques is required to characterize the conformational behavior of IDPs.

Although IDPs are highly dynamic, they often contain transiently-folded regions, such as transiently populated secondary or tertiary structure, transient long-range interactions or transient aggregation (Marsh et al. 2007; Shortle and Ackerman 2001; Bernado et al. 2005; Mukrasch et al. 2007; Wells et al. 2008). These transiently-structured regions are of particular interest for studying the biological function of IDPs, as they can report on biologically-relevant interactions and encode biological function. Examples are aggregation, liquid–liquid phase separation, binding to folded co-factors, or modifying enzymes (Yuwen et al. 2018; Brady et al. 2017; Choy et al. 2012; Maji et al. 2009; Putker et al. 2013). NMR spectroscopy is exceptionally well-suited to study IDPs, and in particular to detect transiently folded regions (Meier et al. 2008; Wright and Dyson 2009; Jensen et al. 2009). Several NMR observables provide atomic-resolution, ensemble-averaged information reporting on the conformational energy landscape sampled by each amino acid, including chemical shifts, residual dipolar couplings (RDCs), and paramagnetic relaxation enhancement (PRE) (Dyson and Wright 2004; Eliezer 2009; Marsh et al. 2007; Shortle and Ackerman 2001; Meier et al. 2008; Gobl et al. 2014; Gillespie and Shortle 1997; Clore et al. 2007; Huang et al. 2014; Ozenne et al. 2012; Clore and Iwahara 2009; Otting 2010; Hass and Ubbink 2014; Gobl et al. 2016). RDCs and PREs, either alone or in combination, have been used successfully in recent years to characterize the conformations and long-range interactions of IDPs (Bernado et al. 2005; Ozenne et al. 2012; Dedmon et al. 2005; Bertoncini et al. 2005; Parigi et al. 2014; Rezaei-Ghaleh et al. 2018). However, both techniques rely on a modification of the IDP of interest, either by external alignment media in the case of RDCs or by the covalent incorporation of paramagnetic tags in the case of PREs. We and others have proposed applications of soluble paramagnetic agents to obtain structural information by NMR without any modifications of the molecules of interest (Gobl et al. 2014; Guttler et al. 2010; Hartlmuller et al. 2016; Hocking et al. 2013; Madl et al. 2009, 2011; Respondek et al. 2007; Zangger et al. 2009; Pintacuda and Otting 2002; Bernini et al. 2009; Wang et al. 2012; Sun et al. 2011; Gong et al. 2017; Gu et al. 2014; Hartlmuller et al. 2017).
The addition of soluble paramagnetic compounds leads to a concentration-dependent and therefore tunable increase of relaxation rates, the so-called paramagnetic relaxation enhancement (here denoted as solvent PRE, sPRE; also known as co-solute PRE, Fig. 1a). This effect depends on the distance of the spins of interest (e.g. 1H, 13C) to the biomolecular surface. The nuclei on the surface are affected most strongly by the sPRE effect, and this approach has been shown to correlate well with biomolecular structure in the case of proteins and RNA (Madl et al. 2009; Pintacuda and Otting 2002; Bernini et al. 2009; Hartlmuller et al. 2017). sPREs have gained popularity for structural studies of biomolecules, including in the structure determination of proteins (Madl et al. 2009; Wang et al. 2012), docking of protein complexes (Madl et al. 2011), and qualitative detection of dynamics (Hocking et al. 2013; Sun et al. 2011; Gong et al. 2017; Gu et al. 2014).

Fig. 1 Principle and workflow for solvent PRE. (a) Transient secondary structures of IDPs are characteristic for protein–protein interaction sites and are therefore crucial for various cellular functions. NMR sPRE data provide quantitative and residue-specific information on the solvent accessibility, as the effect of paramagnetic probes such as Gd(DTPA-BMA) is distance-dependent; this can be used to detect secondary structures within otherwise unfolded regions and long-range contacts within a protein. (b) Prediction of the sPRE is based on an ensemble approach using a library of peptides. Each peptide has a length of 5 residues and is flanked by triple-Ala on both termini (e.g. AAAXXXXXAAA, where XXXXX is a 5-mer fragment of the target primary sequence). Following water refinement using ARIA/CNS, sPRE values of all conformations are calculated and the average solvent PRE value of the ensemble is returned. (c) Predicted Cα sPRE (blue) and standard deviation (red) of AAAVVAVVAAA ensembles consisting of 99,000 down to 48 structural conformations. The green-dotted line indicates 5% deviation from the ensemble with 99,000 conformations. (d) Histograms of different ensemble sizes showing the distribution of predicted sPRE values.

The most commonly used paramagnetic agent for measuring sPRE data is the inert complex Gd(DTPA-BMA) (gadolinium diethylenetriaminepenta-acetic acid bismethylamide, commercially available as 'Omniscan'), which is known not to interact specifically with protein surfaces (Guttler et al. 2010; Madl et al. 2009, 2011; Pintacuda and Otting 2002; Wang et al. 2012; Respondek et al. 2007; Zangger et al. 2009; Göbl et al. 2010). Previously, we and others were able to show that sPRE data provide in-depth structural and dynamic information for IDP analysis (Madl et al. 2009; Sun et al. 2011; Gong et al. 2017; Emmanouilidis et al. 2017; Johansson et al. 2014). For example, sPRE data helped to characterize α-helical propensity in a previously postulated flexible region of the folded 42 kDa maltodextrin binding protein (Madl et al. 2009), and dynamic ligand binding to the human "survival of motor neuron" protein (Emmanouilidis et al. 2017). While this manuscript was being written, the Tjandra lab showed, based on sPRE data for exchangeable amide protons, that sPREs can detect native-like structure in denatured ubiquitin (Kooshapur et al. 2018). Here, we present an integrative ensemble approach to predict the sPREs of IDPs. This ensemble representation is used to calculate conformationally averaged sPREs, which fit remarkably well to the experimentally measured sPREs.
We show for the disordered protein α-synuclein, and disordered regions of the proteins FOXO4 and p53, that deviation from random coil behavior can indicate an intrinsic propensity to populate transient local structures and long-range interactions. In summary, this method provides a unique modification-free approach for studying IDPs that is compatible with a wide range of NMR pulse sequences and biomolecules.

Protein expression and purification

For expression of human FOXO4TAD (residues 198–505) and p53TAD (residues 1–94), pETM11-His6-ZZ vectors containing cDNA coding for the respective proteins and including an N-terminal TEV protease cleavage site were transformed into E. coli BL21-DE3. To obtain 13C/15N isotope-labeled protein, cells were grown for 1 day at 37 °C in minimal medium (100 mM KH2PO4, 50 mM K2HPO4, 60 mM Na2HPO4, 14 mM K2SO4, 5 mM MgCl2; pH 7.2 adjusted with HCl and NaOH) with a 0.1× dilution of trace element solution (41 mM CaCl2, 22 mM FeSO4, 6 mM MnCl2, 3 mM CoCl2, 2 mM ZnSO4, 0.1 mM CuCl2, 0.2 mM (NH4)6Mo7O24, 24 mM EDTA), supplemented with 2 g of 13C6H12O6 (Cambridge Isotope) and 1 g of 15NH4Cl (Sigma). At an OD (600 nm) of 0.8, cells were induced with 0.5 mM isopropyl-β-d-thiogalactopyranoside (IPTG) for 16 h at 20 °C. Cell pellets were harvested and sonicated in denaturing buffer containing 50 mM Tris–HCl pH 7.5, 150 mM NaCl, 20 mM imidazole, 2 mM tris(2-carboxyethyl)phosphine (TCEP), 20% glycerol and 6 M urea. His6-ZZ proteins were purified using Ni–NTA agarose (QIAGEN), eluted in 50 mM Tris–HCl pH 7.5, 150 mM NaCl, 200 mM imidazole, 2 mM TCEP, and subjected to TEV protease cleavage. Untagged proteins were then isolated by performing a second affinity purification step using Ni–NTA beads for removal of TEV and uncleaved substrate. A final size-exclusion chromatography purification step was performed in the buffer of interest on a gel filtration column (Superdex Peptide (10/300) for p53 and Superdex 75 (16/600) for FOXO4; GE Healthcare).

α-Synuclein was expressed and purified as described (Falsone et al. 2011). Briefly, the pRSETB vector containing the human AS gene was transformed into BL21 (DE3) Star cells. 13C/15N-labeled α-synuclein was expressed in minimal medium (6.8 g/l Na2HPO4, 4 g/l KH2PO4, 0.5 g/l NaCl, 1.5 g/l (15NH4)2SO4, 4 g/l 13C glucose, 1 μg/l biotin, 1 μg/l thiamin, 100 μg/ml ampicillin, and 1 ml 1000× microsalts). Cells were grown to an OD (600 nm) of 0.7. Protein expression was induced by addition of 1 mM IPTG for 4 h. After harvesting, cells were resuspended in 20 mM Tris–HCl, 50 mM NaCl, pH 7.4, supplemented with a Complete® protease inhibitor mix (Roche, Basel, Switzerland). Protein purification was then achieved using a Resource Q column followed by gel filtration on a Superdex 75 column (GE Healthcare, Uppsala, Sweden).

Generation of conformational ensembles

Conformational ensembles were generated using the ARIA/CNS software package, comprising 1500 random backbone conformations of all possible 5-mer peptides of the protein of interest, each flanked by triple-alanine. Every backbone conformation served as the starting structure in a full-atom water refinement using ARIA (Bardiaux et al. 2012). For every refined structure the solvent PRE was calculated, and the averaged solvent PRE of the central residue was stored in the database. To predict sPRE data, a previously published grid-based approach was used (Hartlmuller et al. 2016; Pintacuda and Otting 2002).
Briefly, the structural model was placed in a regularly-spaced grid representing the uniformly distributed paramagnetic compound. The grid was built with a point-to-point distance of 0.5 Å and a minimum distance of 10 Å between the protein model and the outer border of the grid. Next, grid points that overlap with the protein model were removed, assuming a molecular radius of 3.5 Å for the paramagnetic compound. To compute the sPRE for a given protein proton, $\text{sPRE}_{\text{predicted}}^{i}$, the distance-dependent paramagnetic effect (Hartlmuller et al. 2016; Hocking et al. 2013; Pintacuda and Otting 2002) was numerically integrated over all remaining grid points according to Eq. (1):

$$\text{sPRE}_{\text{predicted}}^{i} = c \cdot \sum_{d_{i,j} < 10\,\text{Å}} \frac{1}{d_{i,j}^{6}} \tag{1}$$

where $i$ is the index of a proton of the protein, $j$ is the index of the grid point, $d_{i,j}$ is the distance between the $i$th proton and the $j$th grid point, and $c$ is an arbitrary constant used to scale the sPRE values (set to 1000). Theoretical sPRE values were normalized by calculating the linear fit of experimental and predicted sPRE, followed by shifting and scaling of the theoretical sPRE. To predict the solvent PRE of the entire IDP sequence, for each position the peptide with the five matching amino acids is looked up and the corresponding solvent PRE values are combined. sPRE data of the two N- and C-terminal residues were not predicted in this setup. All scripts and sample runs can be downloaded from the homepage of the authors (https://mbbc.medunigraz.at/forschung/forschungseinheiten-und-gruppen/forschungsgruppe-tobias-madl/software/).

NMR experiments

The setup of sPRE measurements using NMR spectroscopy was performed as described previously (Hartlmuller et al. 2016, 2017). To obtain sPRE data, a saturation-based approach was used. The 1H-R1 relaxation rates were determined by a saturation-recovery scheme followed by a read-out experiment such as a 1H,15N HSQC, 1H,13C HSQC or a 3D CBCA(CO)NH experiment. The read-out experiments were combined with the saturation-recovery scheme in a pseudo-3D (HSQCs) or pseudo-4D [CBCA(CO)NH] experiment, with the recovery time as an additional dimension. The CBCA(CO)NH was recorded using non-uniform sampling. Alternatively, 1H-R2 relaxation rates can be measured as described (Clore and Iwahara 2009). A 7.5 ms 1H trim pulse followed by a gradient was applied for proton saturation. During the recovery delay, ranging from several milliseconds up to several seconds, z-magnetization builds up. The individual recovery delays are applied in an interleaved manner, with short and long delays occurring in alternating fashion. For every 1H-R1 measurement, 10 delay times were recorded and, for error estimation, one delay time was recorded in duplicate. Measurements of 1H-R1 rates were repeated for increasing concentrations of the relaxation-enhancing agent Gd(DTPA-BMA)/Omniscan (GE Healthcare, Vienna, Austria), and the solvent PRE was obtained as the average change of the proton R1 rate per concentration of the paramagnetic agent. After each addition of Gd(DTPA-BMA), the recovery delays were shortened such that for the longest delay all NMR signals were sufficiently recovered. The interscan delay was set to 50 ms, as the saturation-recovery scheme does not rely on an equilibrium z-magnetization at the start of each scan. All NMR samples contained 10% 2H2O.
NMR experiments
The setup of sPRE measurements using NMR spectroscopy was performed as described previously (Hartlmuller et al. 2016, 2017). To obtain sPRE data, a saturation-based approach was used. The 1H-R1 relaxation rates were determined by a saturation-recovery scheme followed by a read-out experiment such as a 1H, 15N HSQC, 1H, 13C HSQC or a 3D CBCA(CO)NH experiment. The read-out experiments were combined with the saturation-recovery scheme in a pseudo-3D (HSQCs) or pseudo-4D [CBCA(CO)NH] experiment, with the recovery time as an additional dimension. The CBCA(CO)NH was recorded using non-uniform sampling. Alternatively, 1H-R2 relaxation rates can be measured as described (Clore and Iwahara 2009). A 7.5 ms 1H trim pulse followed by a gradient was applied for proton saturation. During the recovery delay, ranging from several milliseconds up to several seconds, z-magnetization is built up. The individual recovery delays are applied in an interleaved manner, with short and long delays occurring in alternating fashion. For every 1H-R1 measurement, 10 delay times were recorded and, for error estimation, at least 1 delay time was recorded as a duplicate. Measurements of 1H-R1 rates were repeated for increasing concentrations of the relaxation-enhancing agent Gd(DTPA-BMA)/Omniscan (GE Healthcare, Vienna, Austria), and the solvent PRE was obtained as the average change of the proton R1 rate per concentration of the paramagnetic agent. After each addition of Gd(DTPA-BMA), the recovery delays were shortened such that for the longest delay all NMR signals were sufficiently recovered. The interscan delay was set to 50 ms, as the saturation-recovery scheme does not rely on an equilibrium z-magnetization at the start of each scan. All NMR samples contained 10% 2H2O. Spectra were processed using NMRPipe and analyzed with the NMRView and CcpNmr Analysis software packages (Johnson 2004; Delaglio et al. 1995; Skinner et al. 2016).
Measurement of sPRE data used in this study
Assignment of p53TAD was achieved using HNCACB, CBCA(CO)NH and HCAN spectra and analyzed using CcpNmr (Skinner et al. 2016). sPRE data of 300 µM samples of uniformly 13C/15N labeled p53TAD were measured on a 600 MHz Bruker Avance Neo NMR spectrometer equipped with a TXI probehead at 298 K in a buffer containing 50 mM sodium phosphate, 0.04% sodium azide, pH 7.5. 1H-R1 rates of 1HN, Hα and Hβ were determined using 1H, 13C HSQC and 1H, 15N HSQC as read-out spectra (4/4 scans, 200/128 complex points in F2). For assignment of α-synuclein, previously reported chemical shifts were obtained from the BMRB (accession code 6968) and the assignment was confirmed using HNCACB and CBCA(CO)NH spectra. 1H-R1 rates of aliphatic protons and amide protons of a 100 µM sample (50 mM bis(2-hydroxyethyl)amino-tris(hydroxymethyl)methane (bis–Tris), 20 mM NaCl, 3 mM sodium azide, pH 6.8) were determined using 1H, 13C HSQC and 1H, 15N HSQC read-out spectra, respectively, at 282 K in the presence of 0, 1, 2, 3, 4 and 5 mM Gd(DTPA-BMA). For assignment of FOXO4TAD, HNCACB, CBCA(CO)NH and HCAN spectra were recorded and assigned using CcpNmr (Skinner et al. 2016). Measurements of 13C, 15N labeled FOXO4TAD at 390 µM in 20 mM sodium phosphate buffer, pH 6.8, 50 mM NaCl, 1 mM DTT were performed on a 600 MHz magnet (Oxford Instruments) equipped with an AV III console and a cryogenic TCI probe head (Bruker Biospin). Pseudo-4D CBCA(CO)NH spectra served as read-out for 1H-R1 rates and were recorded on a 250 µM sample on a 900 MHz Bruker Avance III spectrometer equipped with a TCI cryoprobe using non-uniform sampling (4 scans, 168/104 complex points in F1 (13C)/F2 (15N), sampled at 13.7% and resulting in a total number of 600 complex points). Spectra were processed using hmsIST/NMRPipe (Hyberts et al. 2014).
Analysis of NMR data
sPRE data of the model proteins were analyzed as described previously. Briefly, peak intensities were extracted using the nmrglue python package and fitted to a mono-exponential build-up curve using the SciPy python package and Eq. (2):
$$I(t) = -A \cdot e^{-R_{1} t} + C \tag{2}$$
where $I(t)$ is the peak intensity of the saturation-recovery experiment, $t$ is the recovery delay, $A$ is the amplitude of the z-magnetization build-up, $C$ is the plateau of the curve and $R_1$ is the longitudinal relaxation rate. Duplicate recovery delays were used to determine the error for the fitted rates $R_1$:
$$\varepsilon_{\exp} = \sqrt{\frac{1}{2N} \cdot \sum_{i=1}^{N} \delta_{i}^{2}}$$
where $N$ is the number of peaks in the spectrum, $i$ is the index of the peak, and $\delta_i$ is the difference between the duplicates for the $i$th peak. The error of the rates $R_1$ was then obtained using a Monte Carlo-type resampling strategy. The solvent PRE is obtained by performing a weighted linear regression using the equation
$$R_{1}(c) = \text{sPRE} \cdot c + R_{1}^{0}$$
where $c$ is the concentration of Gd(DTPA-BMA), $R_1(c)$ is the fitted $R_1$ rate in the presence of Gd(DTPA-BMA) at concentration $c$, $R_{1}^{0}$ is the $R_1$ rate in the absence of Gd(DTPA-BMA), and sPRE is the slope, i.e., the desired sPRE value. For the weighted linear regression, the previously determined errors $\Delta R_1$ of the $R_1$ rates were used, and the error on the concentration $c$ was neglected.
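A minimal sketch of this analysis chain is given below, assuming peak intensities per recovery delay have already been extracted (e.g., with nmrglue). The array layouts, starting values, and the use of np.polyfit for the weighted regression are illustrative choices rather than the authors' exact scripts.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, A, R1, C):
    # Eq. (2): mono-exponential z-magnetization build-up.
    return -A * np.exp(-R1 * t) + C

def fit_r1(delays, intensities):
    """Fit one saturation-recovery curve and return the rate R1."""
    p0 = (np.ptp(intensities), 1.0, np.max(intensities))
    popt, _ = curve_fit(recovery, delays, intensities, p0=p0, maxfev=10000)
    return popt[1]

def duplicate_error(first, second):
    """Intensity error from duplicate delays: eps = sqrt(sum(delta_i^2) / (2N)).

    The error of R1 itself follows by refitting synthetic curves that are
    resampled with noise of size eps (Monte Carlo-type resampling).
    """
    delta = np.asarray(first) - np.asarray(second)
    return np.sqrt((delta ** 2).sum() / (2 * delta.size))

def spre_from_titration(conc, r1, r1_err):
    """Weighted linear fit R1(c) = sPRE * c + R1_0; errors on c are neglected."""
    coeff, cov = np.polyfit(conc, r1, 1, w=1.0 / np.asarray(r1_err), cov=True)
    spre, r1_0 = coeff
    return spre, np.sqrt(cov[0, 0]), r1_0
```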
To detect transient structural elements in IDPs, an efficient back-calculation of sPREs of IDPs is essential. Whereas back-calculation of sPREs is relatively straightforward for folded, rigid structures and can be carried out efficiently using a grid-based approach by integration of the solvent environment (Hartlmuller et al. 2016, 2017), this approach fails in the case of highly conformationally heterogeneous IDPs. In our approach, sPREs are therefore represented as the average sPRE of an ensemble. NMR observables and nuclear spin relaxation phenomena, including sPREs, directly sense chemical exchange through the distinct magnetic environments that nuclear spins experience while undergoing those exchange processes. The effects of the dynamic exchange on the NMR signals can be described by the McConnell equations (McConnell 1958). In the case of a two-site exchange process, and assuming that the exchange rate is faster than the difference in the sPREs observed in both states, the observed sPRE is a linear, population-weighted average of the sPRE observed in both states, as seen for covalent paramagnetic labels (Clore and Iwahara 2009). Moreover, the correlation time for relaxation is assumed to be faster than the exchange time among different conformations within the IDP (Jensen et al. 2014; Iwahara and Clore 2010). The effective correlation time for longitudinal relaxation depends on the rotational correlation time of the biomolecule, the electron relaxation time and the lifetime of the rotationally correlated complex of the biomolecule and the paramagnetic agent (Madl et al. 2009; Eletsky et al. 2003). For ubiquitin, the effective correlation time for longitudinal relaxation was found to be on the sub-ns time scale (Pintacuda and Otting 2002), whereas conformational exchange in IDPs typically appears on slower timescales (Jensen et al. 2014). Calculating the average of sPREs over an ensemble of protein conformations presents serious practical difficulties that affect both the accuracy and the portability of the calculation. For RDCs it has been shown that convergence to the average requires an unmanageably large number of structures (e.g., 100,000 models for a protein with 100 amino acids), and that the convergence strictly depends on the length of the protein (Bernado et al. 2005; Nodet et al. 2009). To simplify the back-calculation of sPREs, we use a strategy proposed for RDCs by the Forman-Kay and Blackledge groups (Marsh et al. 2008; Huang et al. 2013). To back-calculate the sPRE from a given primary sequence of an IDP, we generated fragments of five amino acids from the sequence of interest and flanked them with triple-alanine sequences at the N- and C-termini to simulate the presence of upstream/downstream amino acids (Fig. 1b). An ensemble of structures for these sequences is then generated using ARIA/CNS, including water refinement (Bardiaux et al. 2012). To predict the solvent PRE of the entire IDP, the peptide with the five matching residues is looked up and the corresponding solvent PREs, averaged over the entire conformational ensemble, are returned. This approach is highly parallelizable and dramatically reduces the computational effort compared to simulating the conformations of the full-length IDP.
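A minimal sketch of the fragment-lookup step, assuming a precomputed dictionary that maps each 5-mer to the ensemble-averaged sPRE of its central residue (the database format is an illustrative assumption):

```python
def predict_idp_spre(sequence, fragment_db):
    """Assemble per-residue sPREs for a full IDP sequence from 5-mer fragments.

    fragment_db maps a 5-mer string to the ensemble-averaged sPRE of its
    central residue, computed from the water-refined ARIA/CNS conformers.
    The two N- and C-terminal residues have no centered 5-mer and stay None,
    matching the fact that they are not predicted in this setup.
    """
    spre = [None] * len(sequence)
    for i in range(2, len(sequence) - 2):
        spre[i] = fragment_db.get(sequence[i - 2:i + 3])  # 5-mer centered on i
    return spre
```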
To determine the number of conformers necessary to converge the back-calculated sPRE of the defined 11-mers, we generated an ensemble of 100,000 structures for an 11-mer AAAVVAVVAAA peptide using ARIA/CNS (Bardiaux et al. 2012) and back-calculated the sPRE for subsets with decreasing numbers of structures. We find that 1500 conformers are sufficient to reproduce the sPRE with a deviation of less than 5% compared to the maximum ensemble (Fig. 1c, d). Back-calculation of the sPRE by fast grid-based integration has some advantages compared to alternative approaches relying on surface accessibility (Kooshapur et al. 2018). First, sPREs can be obtained even for atoms without any surface accessibility, since grid-based integration still takes into account the distance-dependent paramagnetic effect. This is expected to provide more accurate predictions for regions with a high degree of bulky side chains or transient folding. To validate our computational approach, we recorded several sets of experimental 1H-sPREs for the disordered regions of the human proteins FOXO4, p53, and α-synuclein. Similar to many other transcription factors, p53 and FOXO4 are largely disordered outside their DNA-binding domains. In order to demonstrate that surface accessibility data can be obtained for a challenging IDP, we recorded sPRE data for the 307-residue transactivation domain of FOXO4. The FOXO4 transcription factor is a member of the forkhead box O family of proteins that share a highly conserved DNA-binding motif, the forkhead box domain (FH). The FH domain is surrounded by large N- and C-terminal intrinsically disordered regions which are essential for the regulation of FOXO function (Weigel and Jackle 1990). FOXOs control a plethora of cellular functions, such as cell growth, survival, metabolism and oxidative stress, by regulating the expression of hundreds of target genes (Burgering and Medema 2003; Hornsveld et al. 2018). Expression and activity of FOXOs are tightly controlled by PTMs such as phosphorylation, acetylation, methylation and ubiquitination, and these modifications impact FOXO stability, sub-cellular localization and transcriptional activity (Essers et al. 2004; de Keizer et al. 2010; van den Berg et al. 2013). Because of their anti-proliferative and pro-apoptotic functions, FOXOs have been considered bona fide tumor suppressors. However, FOXOs can also support tumor development and progression by maintaining cellular homeostasis, facilitating metastasis and inducing therapeutic resistance (Hornsveld et al. 2018). Thus, targeting FOXO activity might hold promise in cancer therapy. The C-terminal FOXO4 transactivation domain has been suggested to be largely disordered and to be the binding site for many cofactors. Because it also harbors most of the post-translational modifications (Putker et al. 2013; Burgering and Medema 2003; Hornsveld et al. 2018; Bourgeois and Madl 2018), we set out to study this biologically important domain using our sPRE approach. 1H, 15N and 1H, 13C HSQC NMR spectra of FOXO4TAD are of high quality and showed no detectable 1H, 13C, or 15N chemical shift changes between the spectra recorded in the absence or presence of Gd(DTPA-BMA) (Fig. 2a). sPRE data of FOXO4 had to be recorded using pseudo-4D saturation-recovery CBCA(CO)NH spectra due to the severe signal overlap observed in the 2D HSQC spectra. It should be noted that, in principle, any kind of NMR experiment could be combined with an sPRE saturation-recovery measurement block in order to obtain 1H- or 13C-sPRE data. The sPRE data of FOXO4TAD yield differential solvent accessibilities in a residue-specific manner (Fig. 2b, c). Hα atoms located in regions rich in bulky residues show lower sPREs, and Hα atoms located in more exposed glycine-rich regions display higher sPREs.
Hβ sPRE data were obtained for a limited number of residues and show overall elevated sPREs, due to the higher degree of exposure, and a reasonable agreement between predicted and experimental data (Supporting Fig. 1). A comparison of the predicted sPRE data with a bioinformatics bulkiness prediction shows that some features are reproduced by the bioinformatics prediction (Supporting Fig. 2A). However, the experimental sPRE is better described by our approach. Strikingly, the predicted sPRE pattern reproduces the experimental sPRE pattern exceptionally well, indicating that the FOXO4TAD is largely disordered and does not adopt any stable or transient tertiary structure in the regions for which sPRE data could be obtained. Comparison of predicted and measured solvent PRE of FOXO4TAD. a Overlay of 1H, 13C HSQC spectra, with full recovery time, of a 390 µM 13C, 15N labeled FOXO4TAD sample in the absence (blue) and presence of 3.25 mM Gd(DTPA-BMA) (orange). b 1H-R1 rates of two selected residues of FOXO4TAD at different Gd(DTPA-BMA) concentrations. c Predicted (red) and experimentally determined (blue) solvent PRE values, using CBCA(CO)NH as read-out spectrum, of assigned Hα peaks of FOXO4TAD. Experimental sPRE values are calculated by fitting the data with a linear regression equation. Predicted sPRE values are based on the previously described ensemble approach. Residues with bulky side chains (Phe, Trp, Tyr) are labeled with #, and exposed glycine residues are labeled with * (see Supporting Fig. 2A for a bulkiness profile). Errors of the measured 1H-R1 rates were calculated using a Monte Carlo-type resampling strategy and are shown in the diagram as error bars. In order to demonstrate that surface accessibility data can be obtained for an IDP with potential formation of transient local secondary structure, we recorded sPRE data for the 94-residue transactivation domain of p53. p53 is a homo-tetrameric transcription factor composed of an N-terminal transactivation domain, a proline-rich domain, a central DNA-binding domain followed by a tetramerization domain and the C-terminal negative regulatory domain. p53 is involved in the regulation of more than 500 target genes and thereby controls a broad range of cellular processes, including apoptosis, metabolic adaptation, DNA repair, cell cycle arrest, and senescence (Vousden and Prives 2009). The disordered N-terminal p53 transactivation domain (p53TAD) is a key interaction motif for regulatory protein–protein interactions (Fernandez-Fernandez and Sot 2011): it possesses two binding motifs with α-helical propensity, named p53TAD1 (residues 17–29) and p53TAD2 (residues 40–57). These two motifs act independently or in combination in order to allow p53 to bind to several proteins regulating either p53 stability or transcriptional activity (Shan et al. 2012; Jenkins et al. 2009; Rowell et al. 2012). Because of its pro-apoptotic function, p53 is recognized as a tumor suppressor, and it is found mutated in more than half of all human cancers, affecting a wide variety of tissues (Olivier et al. 2010). Within this biological and disease context the N-terminal p53TAD plays a key role: it mediates the interaction with folded co-factors, and it comprises most of the regulatory phosphorylation sites. 1H, 15N and 1H, 13C HSQC NMR spectra recorded of p53TAD are of high quality and showed no detectable 1H, 13C, or 15N chemical shift changes between the spectra recorded in the absence or presence of Gd(DTPA-BMA) (Fig. 3a, Supporting Fig. 3A).
The sPRE data of p53TAD display differential solvent accessibilities in a residue-specific manner: due to different excluded volumes for the paramagnetic agent, Hα atoms located in regions rich in bulky residues show lower sPREs and Hα atoms located in more exposed regions show higher sPREs (Fig. 3b, c, Supporting Fig. 2B). Comparison of predicted and measured solvent PRE of p53TAD. a Overlay of 1H, 13C HSQC read-out spectra, with full recovery time, of a 300 µM 13C, 15N labeled p53TAD in the absence (black) and presence of 5 mM Gd(DTPA-BMA) (orange). b Gd(DTPA-BMA)-concentration-dependent R1 rates of two selected residues. c Diagram showing predicted (red) and measured (blue) solvent PRE values of each Hα atom of p53TAD. Experimental sPRE values are calculated by fitting the data with a linear regression equation. Predicted sPRE values are based on the previously described ensemble approach. Regions binding to co-factors (TAD1, TAD2) and the proline-rich region are labeled. Residues with bulky side chains (Phe, Trp, Tyr) are labeled with #, and exposed glycine residues are labeled with * (see Supporting Fig. 2B for a bulkiness profile). Errors of the measured 1H-R1 rates were calculated using a Monte Carlo-type resampling strategy and are shown in the diagram as error bars. sPRE data of structured proteins are often recorded for amide protons. However, chemical exchange of the amide proton with fast-relaxing water solvent protons might lead to an increase of the experimental sPRE, as has been observed for disordered linker regions in folded proteins and in RNA (Hartlmuller et al. 2017; Gobl et al. 2017). For imino and amino protons of the UUCG tetraloop RNA and a GTP class II aptamer, for example, the increase of 1H-R1 rates is larger at small concentrations of the paramagnetic compound and becomes linear at higher concentrations. Thus, we decided to focus here on experimental and back-calculated sPRE data of Hα protons. Nevertheless, 1HN-sPREs are shown for comparison in the supporting information (Supporting Fig. 4A). Comparison of the back-calculated and experimental p53TAD-sPREs shows that several regions within p53TAD yield lower sPREs than predicted, indicating that p53TAD populates residual local structure or shows long-range tertiary interactions. In line with this, 15N NMR relaxation data and 13C secondary chemical shift data display reduced flexibility of p53TAD and transient α-helical structure (Supporting Fig. 5). This is in line with previous studies, which found that the p53TAD1 domain adopts a transiently populated α-helical structure formed by residues Phe19-Leu26 and that the p53TAD2 domain adopts a transiently populated turn-like structure formed by residues Met40-Met44 and Asp48-Trp53 (Lee et al. 2000). Given that p53TAD has been reported to interact with several co-factors, our data indicate that sPRE data can indeed provide important insight into the residual structure of this key interaction motif (Bourgeois and Madl 2018; Raj and Attardi 2017). In order to address the question of whether sPREs can be used to detect transient long-range interactions in disordered proteins, we recorded 1H sPRE data for the 141-residue IDP α-synuclein using 1H, 13C and 1H, 15N HSQC-based saturation recovery experiments at increasing concentrations of Gd(DTPA-BMA). α-Synuclein controls the assembly of presynaptic vesicles in neurons and is required for the release of the neurotransmitter dopamine (Burre et al. 2010).
The aggregation of α-synuclein into intracellular amyloid inclusions coincides with the death of dopaminergic neurons and therefore constitutes a pathologic signature of synucleinopathies such as Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy (Alafuzoff and Hartikainen 2017). Formation of transient long-range interactions has been proposed to protect α-synuclein from aggregation. 1H, 15N and 1H, 13C HSQC NMR spectra of α-synuclein are of high quality and showed no detectable 1H, 13C, or 15N chemical shift changes between the spectra recorded in the absence or presence of 5 mM Gd(DTPA-BMA) (Fig. 4a). The sPRE data of α-synuclein display variable solvent accessibilities in a residue-specific manner (Fig. 4b), with Hα atoms located in regions rich in bulky residues showing lower sPREs and Hα atoms located in more exposed regions showing higher sPREs (see also Supporting Fig. 2C for a comparison with the bioinformatics bulkiness profile and Supporting Fig. 4B for the 1HN sPRE data). Thus, the sPRE value provides local structural information about the disordered ensemble. Strikingly, we observed decreased sPREs, and therefore lower surface accessibility, in several regions, such as between residues 15–20, 26–30, 52–57, 74–79, 87–92, 102–110, and 112–121 (Fig. 4c). Comparison of these regions with recently published ensemble modeling using extensive sets of RDC and PRE data (Salmon et al. 2010) shows that the previously observed transient intra-molecular long-range contacts, involving mainly the regions 1–40, 70–90, and 120–140 within α-synuclein, are reproduced by the sPRE data. Thus, sPRE data are highly sensitive to low populations of residual structure in disordered proteins. Comparison of predicted and measured solvent PRE of α-synuclein. a Overlay of 1H, 13C HSQC read-out spectra, with full recovery time, of 100 µM 13C, 15N labeled α-synuclein in the absence (violet) and presence of 5 mM Gd(DTPA-BMA) (orange). b Linear fit of the relaxation rate 1H-R1 against Gd(DTPA-BMA) concentration for two selected residues of α-synuclein. c Predicted (red) and experimentally determined (blue) sPRE values from 1H, 13C HSQC read-out spectra. Regions of strong variation between predicted and measured sPRE values are highlighted by grey boxes. Experimental sPRE values are calculated by fitting the data with a linear regression equation. Predicted sPRE values are based on the previously described ensemble approach. Residues with bulky side chains (Phe, Trp, Tyr) are labeled with #, and exposed glycine residues are labeled with * (see Supporting Fig. 2C for a bulkiness profile). Errors of the measured 1H-R1 rates were calculated using a Monte Carlo-type resampling strategy and are shown in the diagram as error bars. In order to understand the conformational behavior of IDPs and their biological interaction networks, the detection of residual structure and long-range interactions is required. The large number of degrees of conformational freedom of IDPs requires extensive sets of experimental data. Here, we provide a straightforward approach for the detection of residual structure and long-range interactions in IDPs and show that sPRE data contribute important and easily accessible restraints for the investigation of IDPs. Our data indicate that for the general case of an unfolded chain, with a local flexibility described by the overwhelming majority of available combinations, sPREs can be accurately predicted through our approach.
It can be envisaged that a database of all potential combinations of the 20 amino acids within the central 5-mer peptide can be generated in the future. Nevertheless, generation of sPRE datasets for the entire 3.2 million possible combinations is beyond the current computing capabilities. Our approach promises to be a straightforward screening tool to exclude potential specific interactions of the soluble paramagnetic agent with IDPs and to guide the positioning of covalent paramagnetic spin labels, which are often used to detect long-range interactions within IDPs (Gobl et al. 2014; Clore and Iwahara 2009; Otting 2010; Jensen et al. 2014). Paramagnetic spin labels are preferably placed close to, but not within, regions involved in transient interactions in order to avoid potential interference of the spin label with weak and dynamic interactions. In summary, we used three highly disease-relevant biological model systems for determining the solvent accessibility information provided by sPREs. This information can be easily determined experimentally and agrees well with the sPREs predicted for non-exchangeable protons using our grid-based approach. Our method proves to be highly sensitive to low populations of residual structure and long-range contacts in disordered proteins. This approach can be easily combined with ensemble-based calculations such as implemented in flexible-meccano/ASTEROIDS (Mukrasch et al. 2007; Nodet et al. 2009), Xplor-NIH (Kooshapur et al. 2018), or other programs (Estana et al. 2019) to interpret residual structure of IDPs quantitatively and in combination with complementary restraints obtained from RDCs and PREs. In particular, for IDP ensemble calculations relying on sPRE data it is essential to exclude specific interactions of the paramagnetic agent with the IDP of interest, which would lead to an enhanced experimental sPRE compared to the predicted sPRE. Open access funding provided by Austrian Science Fund (FWF). This research was supported by the Austrian Science Foundation (P28854, I3792, DK-MCD W1226 to TM), the President's International Fellowship Initiative of CAS (No. 2015VBB045, to TM), the National Natural Science Foundation of China (No. 31450110423, to TM), the Austrian Research Promotion Agency (FFG: 864690, 870454), the Integrative Metabolism Research Center Graz, the Austrian infrastructure program 2016/2017, the Styrian government (Zukunftsfonds) and BioTechMed-Graz. E.S. was trained within the frame of the PhD program Molecular Medicine. We thank Dr. Vanessa Morris for carefully reading the manuscript. Supporting material 1 (PDF 908 kb). Alafuzoff I, Hartikainen P (2017) Alpha-synucleinopathies. Handb Clin Neurol 145:339–353. Babu MM, van der Lee R, de Groot NS, Gsponer J (2011) Intrinsically disordered proteins: regulation and disease. Curr Opin Struct Biol 21:432–440. Bah A, Forman-Kay JD (2016) Modulation of intrinsically disordered protein function by post-translational modifications. J Biol Chem 291:6696–6705. Bardiaux B, Malliavin T, Nilges M (2012) ARIA for solution and solid-state NMR. Methods Mol Biol 831:453–483. Bernado P, Bertoncini CW, Griesinger C, Zweckstetter M, Blackledge M (2005) Defining long-range order and local disorder in native alpha-synuclein using residual dipolar couplings.
J Am Chem Soc 127:17968–17969. Bernini A, Venditti V, Spiga O, Niccolai N (2009) Probing protein surface accessibility with solvent and paramagnetic molecules. Prog Nucl Magn Reson Spectrosc 54:278–289. Bertoncini CW et al (2005) Release of long-range tertiary interactions potentiates aggregation of natively unstructured alpha-synuclein. Proc Natl Acad Sci USA 102:1430–1435. Bourgeois B, Madl T (2018) Regulation of cellular senescence via the FOXO4-p53 axis. FEBS Lett 592:2083–2097. Brady JP et al (2017) Structural and hydrodynamic properties of an intrinsically disordered region of a germ cell-specific protein on phase separation. Proc Natl Acad Sci USA 114:E8194–E8203. Burgering BM, Medema RH (2003) Decisions on life and death: FOXO Forkhead transcription factors are in command when PKB/Akt is off duty. J Leukoc Biol 73:689–701. Burre J et al (2010) Alpha-synuclein promotes SNARE-complex assembly in vivo and in vitro. Science 329:1663–1667. Choy MS, Page R, Peti W (2012) Regulation of protein phosphatase 1 by intrinsically disordered proteins. Biochem Soc Trans 40:969–974. Clore GM, Iwahara J (2009) Theory, practice, and applications of paramagnetic relaxation enhancement for the characterization of transient low-population states of biological macromolecules and their complexes. Chem Rev 109:4108–4139. Clore GM, Tang C, Iwahara J (2007) Elucidating transient macromolecular interactions using paramagnetic relaxation enhancement. Curr Opin Struct Biol 17:603–616. de Keizer PL et al (2010) Activation of forkhead box O transcription factors by oncogenic BRAF promotes p21cip1-dependent senescence. Cancer Res 70:8526–8536. Dedmon MM, Lindorff-Larsen K, Christodoulou J, Vendruscolo M, Dobson CM (2005) Mapping long-range interactions in alpha-synuclein using spin-label NMR and ensemble molecular dynamics simulations. J Am Chem Soc 127:476–477. Delaglio F et al (1995) NMRPipe: a multidimensional spectral processing system based on UNIX pipes. J Biomol NMR 6:277–293. Dyson HJ, Wright PE (2004) Unfolded proteins and protein folding studied by NMR. Chem Rev 104:3607–3622. Dyson HJ, Wright PE (2005) Intrinsically unstructured proteins and their functions. Nat Rev Mol Cell Biol 6:197–208. Eletsky A, Moreira O, Kovacs H, Pervushin K (2003) A novel strategy for the assignment of side-chain resonances in completely deuterated large proteins using 13C spectroscopy. J Biomol NMR 26:167–179. Eliezer D (2009) Biophysical characterization of intrinsically disordered proteins. Curr Opin Struct Biol 19:23–30. Emmanouilidis L et al (2017) Allosteric modulation of peroxisomal membrane protein recognition by farnesylation of the peroxisomal import receptor PEX19. Nat Commun 8:14635. Essers MA et al (2004) FOXO transcription factor activation by oxidative stress mediated by the small GTPase Ral and JNK. EMBO J 23:4802–4812. Estana A et al (2019) Realistic ensemble models of intrinsically disordered proteins using a structure-encoding coil database. Structure 27:381–391.e2. Falsone SF et al (2011) The neurotransmitter serotonin interrupts alpha-synuclein amyloid maturation.
Biochim Biophys Acta 1814:553–561. Fernandez-Fernandez MR, Sot B (2011) The relevance of protein-protein interactions for p53 function: the CPE contribution. Protein Eng Des Sel 24:41–51. Flock T, Weatheritt RJ, Latysheva NS, Babu MM (2014) Controlling entropy to tune the functions of intrinsically disordered regions. Curr Opin Struct Biol 26:62–72. Gillespie JR, Shortle D (1997) Characterization of long-range structure in the denatured state of staphylococcal nuclease. I. Paramagnetic relaxation enhancement by nitroxide spin labels. J Mol Biol 268:158–169. Gobl C, Madl T, Simon B, Sattler M (2014) NMR approaches for structural analysis of multidomain proteins and complexes in solution. Prog Nucl Magn Reson Spectrosc 80:26–63. Gobl C et al (2016) Increasing the chemical-shift dispersion of unstructured proteins with a covalent lanthanide shift reagent. Angew Chem Int Ed Engl 55:14847–14851. Gobl C et al (2017) Flexible IgE epitope-containing domains of Phl p 5 cause high allergenic activity. J Allergy Clin Immunol 140:1187–1191. Göbl C, Kosol S, Stockner T, Rückert HM, Zangger K (2010) Solution structure and membrane binding of the toxin fst of the par addiction module. Biochemistry 49:6567–6575. Gong Z, Gu XH, Guo DC, Wang J, Tang C (2017) Protein structural ensembles visualized by solvent paramagnetic relaxation enhancement. Angew Chem Int Ed Engl 56:1002–1006. Gu XH, Gong Z, Guo DC, Zhang WP, Tang C (2014) A decadentate Gd(III)-coordinating paramagnetic cosolvent for protein relaxation enhancement measurement. J Biomol NMR 58:149–154. Guttler T et al (2010) NES consensus redefined by structures of PKI-type and Rev-type nuclear export signals bound to CRM1. Nat Struct Mol Biol 17:1367–1376. Habchi J, Tompa P, Longhi S, Uversky VN (2014) Introducing protein intrinsic disorder. Chem Rev 114:6561–6588. Hartlmuller C, Gobl C, Madl T (2016) Prediction of protein structure using surface accessibility data. Angew Chem Int Ed Engl 55:11970–11974. Hartlmuller C et al (2017) RNA structure refinement using NMR solvent accessibility data. Sci Rep 7:5393. Hass MA, Ubbink M (2014) Structure determination of protein-protein complexes with long-range anisotropic paramagnetic NMR restraints. Curr Opin Struct Biol 24:45–53. Hocking HG, Zangger K, Madl T (2013) Studying the structure and dynamics of biomolecules by using soluble paramagnetic probes. ChemPhysChem 14:3082–3094. Hornsveld M, Dansen TB, Derksen PW, Burgering BMT (2018) Re-evaluating the role of FOXOs in cancer. Semin Cancer Biol 50:90–100. Huang JR, Ozenne V, Jensen MR, Blackledge M (2013) Direct prediction of NMR residual dipolar couplings from the primary sequence of unfolded proteins. Angew Chem Int Ed Engl 52:687–690. Huang JR et al (2014) Transient electrostatic interactions dominate the conformational equilibrium sampled by multidomain splicing factor U2AF65: a combined NMR and SAXS study. J Am Chem Soc 136:7068–7076. Hyberts SG, Arthanari H, Robson SA, Wagner G (2014) Perspectives in magnetic resonance: NMR in the post-FFT era.
J Magn Reson 241:60–73. Iwahara J, Clore GM (2010) Structure-independent analysis of the breadth of the positional distribution of disordered groups in macromolecules from order parameters for long, variable-length vectors using NMR paramagnetic relaxation enhancement. J Am Chem Soc 132:13346–13356. Jenkins LM et al (2009) Two distinct motifs within the p53 transactivation domain bind to the Taz2 domain of p300 and are differentially affected by phosphorylation. Biochemistry 48:1244–1255. Jensen MR et al (2009) Quantitative determination of the conformational properties of partially folded and intrinsically disordered proteins using NMR dipolar couplings. Structure 17:1169–1185. Jensen MR, Zweckstetter M, Huang JR, Blackledge M (2014) Exploring free-energy landscapes of intrinsically disordered proteins at atomic resolution using NMR spectroscopy. Chem Rev 114:6632–6660. Johansson H et al (2014) Specific and nonspecific interactions in ultraweak protein-protein associations revealed by solvent paramagnetic relaxation enhancements. J Am Chem Soc 136:10277–10286. Johnson BA (2004) Using NMRView to visualize and analyze the NMR spectra of macromolecules. Methods Mol Biol 278:313–352. Kooshapur H, Schwieters CD, Tjandra N (2018) Conformational ensemble of disordered proteins probed by solvent paramagnetic relaxation enhancement (sPRE). Angew Chem Int Ed Engl 57:13519–13522. Lee H et al (2000) Local structural elements in the mostly unstructured transcriptional activation domain of human p53. J Biol Chem 275:29426–29432. Madl T, Bermel W, Zangger K (2009) Use of relaxation enhancements in a paramagnetic environment for the structure determination of proteins using NMR spectroscopy. Angew Chem Int Ed Engl 48:8259–8262. Madl T, Guttler T, Gorlich D, Sattler M (2011) Structural analysis of large protein complexes using solvent paramagnetic relaxation enhancements. Angew Chem Int Ed Engl 50:3993–3997. Maji SK et al (2009) Functional amyloids as natural storage of peptide hormones in pituitary secretory granules. Science 325:328–332. Marsh JA et al (2007) Improved structural characterizations of the drkN SH3 domain unfolded state suggest a compact ensemble with native-like and non-native structure. J Mol Biol 367:1494–1510. Marsh JA, Baker JM, Tollinger M, Forman-Kay JD (2008) Calculation of residual dipolar couplings from disordered state ensembles using local alignment. J Am Chem Soc 130:7804–7805. McConnell HM (1958) Reaction rates by nuclear magnetic resonance. J Chem Phys 28:430–431. Meier S, Blackledge M, Grzesiek S (2008) Conformational distributions of unfolded polypeptides from novel NMR techniques. J Chem Phys 128:052204. Metallo SJ (2010) Intrinsically disordered proteins are potential drug targets. Curr Opin Chem Biol 14:481–488. Mittag T, Forman-Kay JD (2007) Atomic-level characterization of disordered protein ensembles.
Curr Opin Struct Biol 17:3–14. Mukrasch MD et al (2007) Highly populated turn conformations in natively unfolded tau protein identified from residual dipolar couplings and molecular simulation. J Am Chem Soc 129:5235–5243. Nodet G et al (2009) Quantitative description of backbone conformational sampling of unfolded proteins at amino acid resolution from NMR residual dipolar couplings. J Am Chem Soc 131:17908–17918. Olivier M, Hollstein M, Hainaut P (2010) TP53 mutations in human cancers: origins, consequences, and clinical use. Cold Spring Harb Perspect Biol 2:a001008. Otting G (2010) Protein NMR using paramagnetic ions. Annu Rev Biophys 39:387–405. Ozenne V et al (2012) Flexible-meccano: a tool for the generation of explicit ensemble descriptions of intrinsically disordered proteins and their associated experimental observables. Bioinformatics 28:1463–1470. Parigi G et al (2014) Long-range correlated dynamics in intrinsically disordered proteins. J Am Chem Soc 136:16201–16209. Pintacuda G, Otting G (2002) Identification of protein surfaces by NMR measurements with a paramagnetic Gd(III) chelate. J Am Chem Soc 124:372–373. Putker M et al (2013) Redox-dependent control of FOXO/DAF-16 by transportin-1. Mol Cell 49:730–742. Raj N, Attardi LD (2017) The transactivation domains of the p53 protein. Cold Spring Harb Perspect Med 7:a026047. Respondek M, Madl T, Gobl C, Golser R, Zangger K (2007) Mapping the orientation of helices in micelle-bound peptides by paramagnetic relaxation waves. J Am Chem Soc 129:5228–5234. Rezaei-Ghaleh N et al (2018) Local and global dynamics in intrinsically disordered synuclein. Angew Chem Int Ed Engl 57:15262–15266. Romero P et al (1998) Thousands of proteins likely to have long disordered regions. Pac Symp Biocomput 3:437–448. Rowell JP, Simpson KL, Stott K, Watson M, Thomas JO (2012) HMGB1-facilitated p53 DNA binding occurs via HMG-Box/p53 transactivation domain interaction, regulated by the acidic tail. Structure 20:2014–2024. Salmon L et al (2010) NMR characterization of long-range order in intrinsically disordered proteins. J Am Chem Soc 132:8407–8418. Shan B, Li DW, Bruschweiler-Li L, Bruschweiler R (2012) Competitive binding between dynamic p53 transactivation subdomains to human MDM2 protein: implications for regulating the p53-MDM2/MDMX interaction. J Biol Chem 287:30376–30384. Shortle D, Ackerman MS (2001) Persistence of native-like topology in a denatured protein in 8 M urea. Science 293:487–489. Skinner SP et al (2016) CcpNmr AnalysisAssign: a flexible platform for integrated NMR analysis. J Biomol NMR 66:111–124. Sun Y, Friedman JI, Stivers JT (2011) Cosolute paramagnetic relaxation enhancements detect transient conformations of human uracil DNA glycosylase (hUNG).
Biochemistry 50:10724–10731. Theillet FX et al (2014) Physicochemical properties of cells and their effects on intrinsically disordered proteins (IDPs). Chem Rev 114:6661–6714. Tompa P (2012) Intrinsically disordered proteins: a 10-year recap. Trends Biochem Sci 37:509–516. Uversky VN (2011) Intrinsically disordered proteins from A to Z. Int J Biochem Cell Biol 43:1090–1103. Uversky VN, Oldfield CJ, Dunker AK (2008) Intrinsically disordered proteins in human diseases: introducing the D2 concept. Annu Rev Biophys 37:215–246. Uversky VN et al (2014) Pathological unfoldomics of uncontrolled chaos: intrinsically disordered proteins and human diseases. Chem Rev 114:6844–6879. van den Berg MC et al (2013) The small GTPase RALA controls c-Jun N-terminal kinase-mediated FOXO activation by regulation of a JIP1 scaffold complex. J Biol Chem 288:21729–21741. van der Lee R et al (2014) Classification of intrinsically disordered regions and proteins. Chem Rev 114:6589–6631. Vousden KH, Prives C (2009) Blinded by the light: the growing complexity of p53. Cell 137:413–431. Wang Y, Schwieters CD, Tjandra N (2012) Parameterization of solvent-protein interaction and its use on NMR protein structure determination. J Magn Reson 221:76–84. Weigel D, Jackle H (1990) The fork head domain: a novel DNA binding motif of eukaryotic transcription factors? Cell 63:455–456. Wells M et al (2008) Structure of tumor suppressor p53 and its intrinsically disordered N-terminal transactivation domain. Proc Natl Acad Sci USA 105:5762–5767. Wright PE, Dyson HJ (1999) Intrinsically unstructured proteins: re-assessing the protein structure-function paradigm. J Mol Biol 293:321–331. Wright PE, Dyson HJ (2009) Linking folding and binding. Curr Opin Struct Biol 19:31–38. Wright PE, Dyson HJ (2015) Intrinsically disordered proteins in cellular signalling and regulation. Nat Rev Mol Cell Biol 16:18–29. Yuwen T et al (2018) Measuring solvent hydrogen exchange rates by multifrequency excitation 15N CEST: application to protein phase separation. J Phys Chem B 122:11206–11217. Zangger K et al (2009) Positioning of micelle-bound peptides by paramagnetic relaxation enhancements. J Phys Chem B 113:4400–4406. 1. Center for Integrated Protein Science Munich (CIPSM) at the Department of Chemistry, Technische Universität München, Garching, Germany 2. Gottfried Schatz Research Center for Cell Signaling, Metabolism and Aging, Institute of Molecular Biology & Biochemistry, Medical University of Graz, Graz, Austria 3. The Campbell Family Institute for Breast Cancer Research at Princess Margaret Cancer Centre, Toronto, Canada 4. Institute of Pharmaceutical Sciences, University of Graz, Graz, Austria 5. BioTechMed-Graz, Graz, Austria Hartlmüller, C., Spreitzer, E., Göbl, C. et al. J Biomol NMR (2019).
https://doi.org/10.1007/s10858-019-00248-2 Accepted 11 April 2019
Quantum interpretation of light coherence When I studied interference, I saw that only coherent sources could interfere. In physics, two wave sources are perfectly coherent if they have a constant phase difference and the same frequency, and the same waveform. Coherence is an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference. https://en.wikipedia.org/wiki/Coherence_(physics) What we saw is basically that light is emitted in wave packets. Each wave packet is coherent with itself but not with the others. I cannot figure out what the interpretation of such packets is at a quantum scale. Light is supposed to be emitted when excited atoms emit photons. What makes some photons coherent and others not? For example, let us take a double-slit experiment with electrons or atoms. What does it mean for electrons or atoms to be emitted by coherent sources? More specifically, what condition should their wave function satisfy? What would be a wave packet of electrons? Is there a link with the idea that in order to have interference, we must not be able to tell through which slit the electron went? Thanks to your answers, I almost understood how a single photon can be coherent with itself: its wave function $\Psi$ is spread in space and time. For example, an atom that emits a photon has a certain probability to emit the photon at each time. At a given time, there is a certain probability that the photon is located around the atom, and then it propagates. Then we split $\Psi$ into $\Psi_1 + \Psi_2$, which correspond to the photon going through one arm (or one slit) rather than through the other. Finally, on the screen, the probability to observe the photon at $\mathbf{x}$ at time $t$ is $$|\Psi_1(\mathbf{x}, t) + \Psi_2(\mathbf{x}, t)|^2$$ and the interference term is non-zero if the two wave functions overlap. However, it seems that independent sources can also interfere (G. Magyar and L. Mandel, Nature (London) 198, 255 (1963); I couldn't find the original, but it was reprinted in Concepts of Quantum Optics by P. L. Knight and L. Allen). In another article (Interference of Independent Photon Beams, R. L. Pfleegor and L. Mandel, Phys. Rev. 159, July 1967), they reproduce the result and explain in the discussion that this must not be taken as independent photons interfering but is linked to the detection process. I might not know enough quantum optics to understand this properly. It seems that I'll have to wait before I can understand. Nevertheless, would it be possible to make two independent sources of particles like electrons or atoms interfere? quantum-mechanics visible-light interference coherence Cabirto
en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics – Solomon Slow May 31 '18 at 13:50
I cannot figure out in which section I could find the answer. My question was not about the interpretation of quantum mechanics itself, but more about how a macroscopic field property could appear in the quantum formalism. I edited the question to be more specific. – Cabirto Jun 5 '18 at 14:29
Charged particles like electrons emit same-frequency photons when accelerated through a double-slit experiment. The geometry between slits and detection screen is set. This sets up a coherent situation. – Bill Alsept Jun 5 '18 at 14:51
I guess we are not talking about the same experiment.
I was thinking of this one: en.wikipedia.org/wiki/Double-slit_experiment#/media/… – Cabirto Jun 6 '18 at 8:35
Coherence in quantum optics is a liiiittle more complicated than wavepackets that are coherent with themselves but not with their neighbours. For an excellent write-up, try Roy Glauber's Nobel-prize lecture. – Emilio Pisanty Jun 6 '18 at 21:07
That's a very good question, but it is actually very difficult to answer. The problem is that to understand the quantum mechanics of light, you really have to understand quantum field theory, not just quantum mechanics. Quantum field theory is necessary to reconcile relativity (as light is inherently relativistic) and quantum mechanics. If you look around the literature, you won't really see anybody writing out the wave-functions of a single photon. People will write wave functions of electrons when they are moving much slower than the speed of light (as they are able to move slowly because they have a mass), but never of photons. The reason for this is that, in quantum field theory, you really have to think about all the photons at once, not just one photon at a time. When an atom emits a lone photon, there is no conception of whether that photon is "coherent" or not. Coherence is a property that many photons share with each other. Usually, when light is emitted, like in a light bulb or from the sun, the light is not coherent. To get single-color coherent light, as described by Maxwell's laws before people knew about quantum mechanics, you perversely need to use a quantum mechanical mechanism, namely a laser. I do not really understand how a laser works, but somehow it exploits multiple energy levels in an atom to release light that is coherent with all the other light around it. The quantum field states that most closely resemble classical waves are called "coherent states." Coherent states are states of the quantum photon field, for example, that are as close to the classical electromagnetic waves as is allowed by a generalized notion of Heisenberg's uncertainty principle. These are the states that a laser produces. So it's weird: photons are quantum mechanical, but a laser can produce them coherently in a way that mimics classical mechanics. Interference is a broader concept even than "having a constant phase difference". For example, if the phase difference between the two sources shifts at a constant rate, interference fringes still appear, but they move at a constant speed. If a single photon is considered, the concept gets both a bit clearer and a bit fuzzier. We can't measure the shape and frequency content of a quantum wavepacket, but if we can make a lot of identical wavepackets we can build up a statistical measurement of their shape. It turns out that, by that method, we can show that individual photons have a coherence length: delay the photons in one arm of an interferometer by more than that length, and no fringes will form. The coherence length relates to the frequency bandwidth of the photon, which can be measured by analogous statistical means. For most practical purposes, interference only occurs between different parts of a single particle's wavefunction overlapping, and interference fringes that appear in an interference experiment depend on the particles all being prepared so that their wave packets have the same properties, whether the particles are photons, electrons, neutrons, or whatever.
Each detection event just puts another dot into a pointillist representation of the fringe pattern. S. McGrew
Is there a link with the idea that to have interference we must not be able to tell through which slit the electron went?
As clearly described in this paper, when particles (like unbound electrons) are sent individually through a pair of slits, a wave-like interference pattern develops, but no such interference is found when one observes which "path" the particles take. Observing which way the particles went constitutes a measurement, and quantum coherence is lost through decoherence. In the mentioned paper, a measurement precision term $\sigma$ is introduced to quantify the extent of measurement. For example, $\sigma \rightarrow \infty$ implies a measurement with no precision, while $\sigma \rightarrow 0$ corresponds to a measurement tending to perfect precision. For small values of $\sigma$, the interference effects are suppressed. (Figure: interference effects at different times when the particles are left unobserved, i.e., no measurement, or $\sigma \rightarrow \infty$.) These figures show the effect of $\sigma$ at a fixed time (here $t=30\,\mathrm{s}$). $\sigma = 0$ indicates a perfect measurement, coherence is lost and only a broad fringe appears. As $\sigma$ increases, one can see the interference being restored. Insets show simulated detection screens. Hence, there exists a clear link between the measurement (our ability to distinguish the path taken by a particle) and the visibility of the interference fringes. exp ikx
Let's start with first quantization and the quantum mechanical wave function of a photon, before we start on wave packets, which belong to second quantization. The quantum mechanical equation for photons is a quantized Maxwell's equation, in various forms; an example is Eq. (11) in this paper: Now write the complex wave function as a sum of real and imaginary parts ${\vec{E}}_T\left(\vec{r},t\right)$ and ${\vec{B}}_T\left(\vec{r},t\right)$, $$ {\vec{\varPsi}}_T \left(\vec{r},t\right)={2}^{-1/2}\left(\vec{E}_T\left(\vec{r},t\right)+i \vec{B}_T\left(\vec{r},t\right)\right). \tag{11} $$ The $\Psi^*\Psi$ of this wavefunction is the probability of a photon to manifest at $\left(x,y,z,t\right)$. Note that the $\vec{E}$ and $\vec{B}$ fields are the ones that will appear in the classical beam made up of zillions of such photons by superposition of their wavefunctions. As complex numbers, phases will appear which will carry the possibility of constructive and/or destructive interference patterns. Now we come to wavepackets and second quantization. Second quantization uses the plane wave solutions of the photon wavefunction as the field on which creation and annihilation operators will operate to describe a real photon in $\left(x,y,z,t\right)$. A single plane wave is not localized, and one has to use the wavepacket mathematics to describe real photons, with adjacent frequencies in the field functions. These obey a form of the Heisenberg uncertainty principle. The way the superposition of photons generates the classical electromagnetic waves in quantum field theory can be seen in this blog post by Motl. The interference patterns have been addressed in the other answers. anna v
Second quantization has nothing directly to do with wave packets, and wave packets do not describe photons. A wave packet is a superposition of excitations of many modes of the field ... many photons.
A photon is not a wave packet, and a wave packet is not a photon. – garyp Jun 9 '18 at 11:28
@garyp you are wrong, have a look at sci.hokudai.ac.jp/grp/hep/web/workshops/winter_school/.../… . No, not many photons: single photons, single neutrinos, etc. You are just not familiar with this. See also physics.indiana.edu/~dermisek/QFT_08/qft-II-1-2p.pdf page 2 – anna v Jun 9 '18 at 13:15
Anna: first link is broken. What do you want me to look at in the second? – garyp Jun 10 '18 at 12:07
@garyp on page 2, the formula "wave packet with width" for a particle. Also page 25 here, constructing a wave packet for a particle: www1.jinr.ru/publish/Pepan/v-46-1/02_lev.pdf . Also page 2 here: "which generalizes the standard Gaussian wave packet describing a free non-relativistic particle to other minimal position-velocity uncertainty wave packets" wiese.itp.unibe.ch/topics/spreading.pdf – anna v Jun 10 '18 at 12:48
Which page in the second reference? There is no page 25. The first and third describe wave packets which could represent localized particles, but these states are not energy eigenstates, so cannot represent photons. At least not photons as they are commonly used, which are energy eigenstates ($E=h\nu$). – garyp Jun 10 '18 at 13:08
Coherence can sort of loosely be defined as when something has the "ability to interfere". When we look at the intensity of the light (which is the square of the $E$-field), we can see this interference: $$ I_{\text{total}} ~=~\left(E_1+E_2\right)^2 ~=~E_1^2+E_2^2+2 E_1 E_2 $$ Something that is incoherent misses that last "interference" term: $$ I_{\text{total}} ~=~ E_1^2+E_2^2 \,.$$ The easiest example to see the above is when considering a combination of two light-fields of orthogonal polarizations. You'll see that since $I = E \cdot E$, and since the vectors are orthogonal, you'll end up not having any interference terms. In classical physics, the electric field is solved by Maxwell's equations, which have solutions that behave like traveling waves with these properties. In quantum physics, we discover that everything has probability waves that describe it. Now these probability waves can also be coherent or incoherent in the same way that classical waves can be (replacing $E$ with $\psi$ and ignoring complex values for simplicity): $$ P_{\text{total}} ~=~\left| \psi_1 + \psi_2 \right|^2 ~=~\left|\psi_1\right|^2 + \left|\psi_2\right|^2 + 2\psi_1 \psi_2 $$ When two things are quantum mechanically "coherent", we obtain these "quantum interference terms" at the end. I think a lot of people consider this to be the "essence" of quantum mechanics. When two things are quantum mechanically "incoherent," we don't have these terms: $$ P_{\text{total}} ~=~ \left|\psi_1\right|^2 + \left|\psi_2\right|^2 \,.$$ Just like with the case of light, if the two states that want to interfere quantum mechanically are not initially identical, they will not be able to interfere coherently. You can study more about these "incoherent states", as they can be represented in the density matrix formalism of quantum mechanics (which is unfortunately not often taught in quantum courses). So now answering your questions: What does it mean for electrons or atoms to be emitted by coherent sources?
Is there a link with the idea that in order to have interference, we must not be able to tell through which slit the electron went? In the double-slit experiment, you can obtain an interference pattern even if you send the particles one by one. The coherent interference occurs between the two slit possibilities that the single particle can travel through. So the states after traveling through each slit possibility must look identical in order to interfere coherently. For example, if one of the slits changed one property of the electron (like its spin) while the other slit didn't, then these two "probability amplitudes" will not coherently interfere because they aren't describing the same possibility states. More specifically, what condition should their wave function satisfy? For the double slit, the wave-functions of slit 1 and 2, $\psi_1(x)$ and $\psi_2(x)$, need to have a point in space where they overlap, and they need to describe the wavefunction of a state with identical properties between 1 and 2. What would be a wave packet of electrons? Wavepacket? I'm not sure this is necessary for your understanding of your question, but a wavepacket (intuitively speaking) is basically when your wavefunction is localized to a short burst (a pulse) in time. A wavefunction of an electron describes the probability of finding that electron (usually with respect to its physical location at a point in time). You could perform a bunch of separate measurements to find what the probability distribution of your electrons is (so that you can find what this wavefunction is). A wavefunction of electrons (plural) would imply a lot of electrons all interfering together, which is actually very complicated, and I don't think this is what you're interested in. Light is supposed to be emitted when excited atoms emit photons. What makes some photons coherent and others not? Photons are considered "coherent" when they are identical and consequently can interfere completely. The simplest counter-example (similar to the light case) is when the photons have orthogonal polarizations: then the particles are completely distinguishable, and they will not have that extra interference term. Making a single-photon source experimentally that produces this "quantum interference" is tricky because it means that the photons have to have identical properties. For example, atoms that emit single photons often emit photons of a frequency that depends on their temperature. So if you want to see quantum interference between these two sources, you will find that you have to control their temperatures precisely to ensure the photons' properties are identical so they can interfere. Steven Sagona
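A quick numerical illustration of the coherent versus incoherent sums discussed in this answer (my own sketch, not part of the original post): adding the amplitudes before squaring produces fringes, while adding the intensities does not.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)            # screen position (arbitrary units)
phase = 10.0 * x                        # toy phase difference between the two paths
psi1 = np.exp(+0.5j * phase)            # amplitude via slit 1
psi2 = np.exp(-0.5j * phase)            # amplitude via slit 2

coherent = np.abs(psi1 + psi2) ** 2     # |psi1|^2 + |psi2|^2 + cross term: fringes
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # no cross term: flat

print(coherent.min(), coherent.max())       # ~0 and 4: full-contrast fringes
print(incoherent.min(), incoherent.max())   # 2 and 2: no fringes
```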
CommonCrawl
Axiom schema of replacement/specification (separation) as second-order A question to check whether my confusion has been correctly removed: how would the axiom schemas of replacement and specification be written in second-order logic? By the way, I know that second-order logic is not reducible to first-order logic, and those matters are irrelevant to this question. Just show me how these axiom schemas would be written in second-order form and I would really, really appreciate it. logic elementary-set-theory Asaf Karagila♦ The replacement axiom as a second-order statement would be something like this: $$\begin{align} \forall F\forall p_1\ldots\forall p_n\forall A\Bigl(&\forall x\bigl(x\in A\rightarrow\exists y(F(x,y,p_1,\ldots,p_n)\land\forall z(F(x,z,p_1,\ldots,p_n)\rightarrow z=y))\bigr)\rightarrow\\ &\exists B\forall u\bigl(u\in B\leftrightarrow\exists v(v\in A\land F(v,u,p_1,\ldots,p_n))\bigr)\Bigr) \end{align}$$ We say that for every $F$ which defines a function (up to parameters $p_i$) on a set $A$, there is a set $B$ which is the image of $A$ under the function $F$ (with the parameters $p_i$). The difference between the first-order schema and the second-order axiom is that $F$ quantifies over all classes, even those not defined by a formula. The result is that we have a single axiom instead of "a lot of them". Note, however, that second-order logic relies on the notion of a set being well-defined, so doing second-order set theory is a bit... awkward, because we usually define sets as objects in a universe of set theory, and this causes a bit of circularity. This is somewhat similar to the difference between first-order and second-order induction in the Peano Axioms; you can read more about this here: Axiom schema and the definition of natural numbers $\begingroup$ Thanks, Asaf. Now my confusion is all cleared :) $\endgroup$ – user32751 Jun 3 '12 at 10:21
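As a postscript, for readers who like to see such statements machine-checkable: here is one schematic way (my own rendering, not from the answer above) to phrase the second-order replacement statement in Lean 4, where V stands for an arbitrary universe of sets, mem for its membership relation, and the parameters $p_i$ become unnecessary because F already ranges over all binary relations:

    def secondOrderReplacement (V : Type) (mem : V → V → Prop) : Prop :=
      ∀ (F : V → V → Prop) (A : V),
        (∀ x, mem x A → ∃! y, F x y) →      -- F is functional on A
        ∃ B, ∀ u, mem u B ↔ ∃ v, mem v A ∧ F v u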
CommonCrawl
Kim has exactly enough money to buy 40 oranges at $3x$ cents each. If the price rose to $4x$ cents per orange, how many oranges could she buy? If the total cost is fixed, then the cost per item and the number of items are inversely proportional. Since each orange costs $\frac{4}{3}$ as much, the same amount of money purchases $\frac{3}{4}$ as many of them (equivalently, Kim has $40 \cdot 3x = 120x$ cents, and $\frac{120x}{4x} = 30$). Taking three-fourths of 40, we find that Kim could buy $\boxed{30}$ oranges.
Math Dataset
Fair Visit Given a set of areas the robot(s) should visit all areas in a fair way. Given a set of locations the robot should visit all the locations in a fair way. $\overset{n}{\underset{i=1}{\bigwedge}} \mathcal{F} (l_i) $ $\overset{n}{\underset{i=1}{\bigwedge}} \mathcal{G} (l_{i} \rightarrow \mathcal{X} ((\neg l_i)\ \mathcal{W}\ l_{(i+1)\%n}))$, where $l_1, l_2, \ldots$ are location propositions, i.e., expressions indicating that a robot $r$ is in a specific area or at a given point. Note that the pattern is general and considers the case in which a robot can be in two locations at the same time. For example, a robot can be in an area of a building indicated as $l_1$ (e.g., area 01) and at the same time in a room of that area indicated as $l_2$ (e.g., room 002). If the topological intersection of the considered locations is empty, then the robot cannot be in two locations at the same time and the transitions labeled with both $l_1$ and $l_2$ cannot be fired. Locations $l_1$, $l_2$, $l_3$ must be covered in a fair way. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_{\# \setminus \{1,2,3\}})^\omega$ does not perform a fair visit since it visits $l_1$ three times while $l_2$ and $l_3$ are visited once. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_3 \rightarrow l_4 \rightarrow l_2 \rightarrow l_2 \rightarrow l_4 \rightarrow (l_{\# \setminus\{1,2,3\}})^\omega$ performs a fair visit since it visits each of the locations $l_1$, $l_2$, and $l_3$ twice (a small script for checking this counting notion of fairness on a finite trace prefix is sketched below, after the pattern description). The Fair Visit pattern specializes the Visit pattern by further constraining how locations are visited to ensure a fair visit. Smith et al. proposed an LTL mission specification for the mission requirement that "an equal number of visits to each data-gather location" is required. The LTL mission specification is obtained by forcing an order on how the data-gather locations are visited. However, fair visiting may be required even without the specification of an order in which the locations must be visited. $\overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{F} (l_i) $\\ $\overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{G} (l_{i} \rightarrow \forall \mathcal{X} (\forall (\neg l_i) \mathcal{W} l_{(i+1)\%n}))$ Tagged: coverage
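The counting notion of fairness used in the trace examples above can be checked mechanically on a finite trace prefix; the following Python sketch (ours, not part of the pattern catalog) does exactly that:

    from collections import Counter

    def fair_visit(trace, locations):
        """True if every location of interest is visited the same number of times."""
        counts = Counter(step for step in trace if step in locations)
        return len({counts[l] for l in locations}) == 1

    trace = ["l1", "l4", "l3", "l1", "l3", "l4", "l2", "l2", "l4"]
    print(fair_visit(trace, {"l1", "l2", "l3"}))   # True: each visited twice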
CommonCrawl
Improvement of fishing efficiency of Danish seine to ratio of buoyancy by sinking force Lee, Hye-Ok;Lee, Ju-Hee;Kwon, Byeong-Guk;Kim, Bu-Yeong;Kim, Byung-Soo;Yoo, Je-Bum 87 This study was carried out to provide fundamental data for improving the fishing efficiency of the Danish seine. The net height and the shape of the net in the water were measured to analyze the efficiency of the existing Danish seine. An improved fishing gear was then developed based on the results and tested in the field. Measuring devices were attached at the center of the ground rope and the head rope. The net height, i.e., the spread distance between the ground rope and the head rope, was measured for different ratios of buoyancy. The results are as follows. The net height estimated from the design plan with horizontal hanging ratio 0.40 was 4.94m for both of the existing Danish seines A and B. The measured net heights of the existing Danish seines A and B were 1.8m and 2.3m, respectively, corresponding to 36.4% and 46.2% of the net height estimated from the design plan. The buoyancy of the existing Danish seine was then changed to 79.5% and 96.2% of the sinking force. At a ratio of 79.5%, the net height was 3.95m, an increase to 80% of the estimated net height. The other case showed the same result as the first. A higher buoyancy/sinking-force ratio does not necessarily produce a greater net height; 80% is adequate as the buoyancy/sinking-force ratio. In the case of the improved Danish seine, the mean net height was about 5.0m, i.e., 58.3% of the estimated net height of 8.58m. Shape of the model pound net affected by wave and fish behavior to the net - Shape and tension of the model pound net affected by wave - Lee, Ju-Hee;Kwon, Byeong-Guk;Yun, Il-Bu;Kim, Sam-Kon;Yoo, Je-Bum;Kim, Boo-Young;Kim, Byung-Soo;Lee, Hye-Ok 101 The pound net fishery is a very important one in the Korean coastal fishery, and it is necessary to grasp the characteristics of the net as affected by many factors. The structure and shape of the pound net can be changed by the direction and speed of the current, the wave height, the depth and the conditions of the sea bed. Above all, however, the speed of the current and the wave height influence the pound net more than any other factors, causing it to deform and flutter. In this study, the authors carried out experiments with a model of a double one-side pound net built at 1:100 scale according to the similarity law at a real experimental area; additional model net experiments were conducted in the circulating water channel of Pukyong National University. The authors analyzed the deformation of the shape and the tension of the model pound net to characterize the action of the current and waves on it. Regardless of the direction of flow acting on the fish court net or the bag net, the deformation angle and depth of the side panel and bottom of the box nets become larger as the wave height increases and the wave period shortens. The tension, both upward and downward, tends to change with the wave speed. These changes occurred similarly in either the fish court net or the bag net. Generally, when the bag net is located upstream of the flow, the tension was about 10% larger than in any other location or net. Regardless of the setting direction, the tension of the pound net increases in proportion to the flow speed, wave height and wave period, and it becomes about 15-30% larger upstream of the flow than downstream.
Where the flow is upstream of the court net, the tension under waves increased by 37% compared to that under flow alone, for flows of 0.1-0.3m/s. Where the flow is upstream of the bag net, the tension under waves increased by 52% at a flow of 0.1m/s, and by 48% at flows of 0.2-0.3m/s. Acoustic analysis on the shape of gill-net in the current Han, Jin-Seok;Shin, Hyeon-Ok 116 An experiment to acoustically analyze the shape of a gill-net in the current was conducted in Jaran Bay, Gosung, Korea, on 9-10 September (spring tide) and 28-29 September (neap tide) 2006. The net shape was measured by a 3D underwater positioning system with radio-acoustic linked positioning buoys. Six of the 7 acoustic transmitters used in the experiment were attached to the float line of the gill-net and the other was fixed on the sea bed. During spring tide, the maximum movement of the gill-net was 27.0m (22:00) to the west (4.4cm/s, $311.9^{\circ}$) and 20.6m (04:00) to the east (3.9cm/s, $66.5^{\circ}$). The maximum extension of the gill-net (the distance between P1 and P6) was 119.8m (21:00, 11.6cm/s, $321.9^{\circ}$) and the minimum was 109.9m (23:00, 16.1cm/s, $88.5^{\circ}$). During neap tide, the maximum movement was 38.0m (20:00) to the east (9.6cm/s, $278.2^{\circ}$) and 11.0m (12:00) to the west (1.9cm/s, $232.1^{\circ}$). The maximum extension was 99.6m (14:00, 12.5cm/s, $94.7^{\circ}$) and the minimum was 85.0m (06:00, 9.0cm/s, $265.8^{\circ}$). During spring tide, the maximum height of the gill-net from the sea bed was 3.7m (02:00, 7.4cm/s, $151.6^{\circ}$), and the minimum of 1.5m occurred three times. At those times, the current speed and direction were 17.9cm/s and $85.3^{\circ}$ (23:30), 16.1cm/s and $249.4^{\circ}$ (05:00), and 13.7cm/s and $291.4^{\circ}$ (06:30), respectively. During neap tide, the maximum height was 3.6m (12:30, 2.1cm/s, $242.3^{\circ}$) and the minimum was 1.5m (14:00, 12.5cm/s, $94.7^{\circ}$). Evaluation of the effect of cubic artificial reefs in Kyonggi Bay, west coast of Korea by using fish trap Yoo, Jae-Won;Lee, Man-Woo;Lee, Chang-Gun;Kim, Chang-Soo;Kim, Jung-Soo;Hong, Jae-Sang 126 In the autumn of 2000 and the spring of 2001, field surveys were conducted to estimate the effectiveness of artificial reefs (cube type, $2{\times}2{\times}2m^3$) that were established near the four islands of Bangnyeong, Socheong, Daeyeonpyeong and Ganghwa in Kyonggi Bay, on the west coast of Korea, during 1995 and 1996. The condition of the reefs was examined through SCUBA diving and a side-scan sonar. Many of the reefs in the Daeyeonpyeong and Ganghwa areas were buried in bottom sediment. Despite an intensive search in the Bangnyeong area, not even a single cluster of reefs was found, and most of them seemed to have been buried by sand waves. Thus an appropriate investigation of sediment transport should be included in the pre-assessment of the expected performance and protection of artificial reefs. The distribution of average CPUE in the natural fishing ground (control) was estimated by bootstrapping simulation, and the possible comparisons of CPUE between control and reef areas (treatment) were made in Bangnyeong and Socheong (Experiment I). A positive reef effect was detected in Socheong, but the CPUE of the treatment in Bangnyeong varied between, or fell below, the 99% CPUE confidence intervals of the control. Control/treatment abundance and biomass of fishes and invertebrates were tested by paired t-test and sign test (Experiment II). Only four cases among 22 showed a significant positive effect.
Based on the results, the cube artificial reef in Socheong was inferred to be effective. Floor type was hypothesized to be one of the probable agents determining the effectiveness of artificial reefs. Real-time monitoring of grab dredging operation using ECDIS Jung, Ki-Won;Lee, Dae-Jae;Jeong, Bong-Kyu;Lee, Yoo-Won 140 This paper describes the real-time monitoring of dredging information for a grab bucket dredger equipped with winch control sensors and a differential global positioning system (DGPS), using an electronic chart display and information system (ECDIS). The experiment was carried out at Gwangyang Hang and Gangwon-do Oho-ri on board the M/V Kunwoong G-16. The ECDIS system continuously monitors the dredger's position, heading and the shooting point of the grab bucket in real time through 3 DGPS units attached to the top bridge of the dredger and the crane frame. Dredging depth was measured by 2 up/down counters fitted to the crane winch of the dredger. The depth and area of dredging at each shooting point of the grab bucket are displayed in color bands. The efficiency of the operation can be ensured by adjusting the tidal data in real time and displaying the depth of dredging on the ECDIS monitor. The reliability of the verification of the dredging operation, as well as of the supervision of the dredging process, was greatly enhanced by providing a three-dimensional map with the variation of dredging depth in real time. The results will contribute to establishing a system which can monitor and record whole dredging operations in real time as well as verify the result of dredging quantitatively. Safety countermeasures for the marine casualties of fishing vessels in Korea Kang, Il-Kwon;Kim, Hyung-Seok;Shin, Hyeong-Il;Lee, Yoo-Won;Kim, Jeong-Chang;Jo, Hyo-Jae 149 Marine casualties of fishing vessels were analyzed, using data of the Korean Maritime Safety Tribunal from 1995 to 2004, in order to reduce the loss of human life. The number of fishing vessel casualties was higher than that of non-fishing vessels, but the occurrence ratio of fishing vessel casualties was 2.96 times lower than that of non-fishing vessels. The occurrence ratios for bigger fishing vessels were higher than those for smaller ones. Most marine casualties resulted from human factors such as poor watchkeeping, negligent handling of engines, etc. The trend of marine casualties showed that machinery damage ranked first and collision accidents second; in terms of causes, operating errors ranked first and poor handling or inspection of machinery second. Because these two kinds of casualties account for a major portion of the total and are very important problems for the safety of fishing vessels, we ought to try to reduce these factors before anything else. In addition, since collision, sinking and capsizing among marine casualties have led to death, missing persons and injuries, it is necessary for navigation operators to receive more education and training intended to reduce marine casualties systematically and continuously.
CommonCrawl
\begin{document} \title[]{String-Averaging Incremental Subgradients for Constrained Convex Optimization with Applications to Reconstruction of Tomographic Images} \author{Rafael Massambone de Oliveira$^1$, Elias Salom\~ao Helou$^1$ and Eduardo Fontoura Costa$^1$} \address{$^1$ University of S\~ao Paulo - Institute of Mathematics and Computer Sciences, Department of Applied Mathematics and Statistics, S\~ao Carlos-SP, CEP 13566-590, Brazil} \ead{[email protected], [email protected], [email protected]} \begin{abstract} We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance in convergence speed, measured as the decrease ratio of the objective function, in comparison to the classical ISM. \end{abstract} \noindent{\it Keywords:} convex optimization, incremental algorithms, subgradient methods, projection methods, string-averaging algorithms. \maketitle \section{Introduction} \label{intro} A fruitful approach to solve an inverse problem is to recast it as an optimization problem, leading to a more flexible formulation that can be handled with different techniques. The \textit{reconstruction of tomographic images} is a classical example of a problem that has been explored by optimization methods, among which the well-known \textit{incremental subgradient method} (ISM) \cite{solodov98, nedic01, helou09}, which is a variation of the \textit{subgradient method} \cite{dev85, shor85, polyak87}, features nice performance in terms of convergence speed. There are many papers that discuss incremental gradient/subgradient algorithms for convex/non-convex objective functions (smooth or not) with applications to several fields \cite{bertsekas1997new,bertsekas2000gradient, blatt2007convergent,nedic01,sundhar2010distributed,sundhar2009incremental,solodov1998incremental,solodov98, tanaka2003subset,tseng1998incremental}. Some examples of applications to tomographic image reconstruction are found in \cite{ahn2003globally,browne1996row,de2001fast,hudson1994accelerated, tanaka2003subset}. In this paper, we consider a rather general optimization problem that can be addressed by ISM and is useful for tomographic reconstruction and other problems, including finding solutions of ill-conditioned and/or large-scale linear systems.
This problem consists of determining: \begin{equation} \label{1-intro} \begin{array}{c} \displaystyle \mathbf{x} \in \arg \min f(\mathbf{x}) \\ \mbox{s.t.} \quad \mathbf{x} \in \mathbf{X} \subset \mathbb{R}^n, \end{array} \end{equation} where: \begin{description} \item[(i)] $f(\mathbf{x}) := P\sum_{i=1}^{m} \xi_i f_{i}(\mathbf{x})$ in which $f_i : \mathbb{R}^n \rightarrow \mathbb{R}$ are convex (and possibly non-differentiable) functions; \item[(ii)] $\mathbf{X}$ is a non-empty, convex and closed set; \item[(iii)] The set $\mathcal{I} = \{ S_1, \dots, S_P \}$ is a \textit{partition} of $\{1, \dots, m \}$, i.e., $S_{\ell} \cap S_{j} = \emptyset$ for any $\ell, j \in \{ 1, \dots, P \}$ with $\ell \neq j$ and $\bigcup_{\ell=1}^{P} S_{\ell} = \left\{1, \dots, m\right\}$; \item[(iv)] Given $w_\ell\in [0,1]$ and $S_\ell \in \mathcal{I}$, $\ell=1,\dots,P$, the weights $\xi_i$ satisfy $\xi_i = w_\ell$ for all $i \in S_\ell$; \item[(v)] $\sum_{\ell=1}^{P} w_{\ell} = 1$. \end{description} Problem (\ref{1-intro}) with conditions \textbf{(i)}-\textbf{(v)} is reduced to the classical problem of minimizing $\sum_{i=1}^{m} f_i(\mathbf{x})$, s.t. $\mathbf{x} \in \mathbf{X}$, when $w_\ell = 1/P$ for all $\ell = 1, \dots, P$. The reason why we write the problem in this more complex way is twofold. On one hand, it is common to find problems in which a set of fixed weights is used to prioritize the contribution of some component functions. For instance, in the context of distributed networks, the component functions $f_{i}$ (also called ``agents'') can be affected by external conditions, network topology, traffic, etc., making it possible for some sets of agents to play a prevalent role in the network, which can be modeled by the weights (related to the corresponding subnets). On the other hand, the weights and the partition, which bring flexibility to the model and could possibly be exploited, aiming for instance at faster convergence, will fit naturally in our algorithmic framework. We consider an approach that mixes ISM and \textit{string-averaging algorithms} (SA algorithms). The general form of the SA algorithm was proposed initially in \cite{censor01} and applied in solving \textit{convex feasibility problems} (CFPs) with algorithms that use {\it projection methods} \cite{censor01,censor03,penfold10}. Strings are created so that ISM (more generally, any $\epsilon$-incremental subgradient method) can be applied independently along each string (by step operators). Then, an average of the string iterations is computed (combination operator), guiding the iterations towards the solution. To conclude the iteration, approximate projections are used to maintain feasibility. We provide an analysis of convergence under reasonable assumptions, such as a diminishing step-size rule and boundedness of the subgradients. Some previous works in the literature have improved the understanding and practical efficiency of ISM by creating more general algorithmic structures, enabling a broader analysis of convergence and making them more robust and accurate \cite{nedic01, helou09, sundhar2009incremental, sundhar2010distributed, johansson2010}. We improve on those results by adding a string-averaging structure to the ISM that allows for an efficient parallel processing of a complete iteration which, consequently, can lead to fast convergence and suitable approximate solutions. Furthermore, the presented techniques exhibit better smoothing properties in practice, which is good for imaging tasks.
These features are desirable, especially when we seek to solve ill-conditioned/large-scale problems. As mentioned at the beginning of this section, one of our goals is to obtain an efficient method for solving problems of reconstruction of tomographic images from incompletely sampled data. Although our work is closely linked to ISM, it is important to mention other classes of methods that can be applied to convex optimization problems. Under reasonable assumptions, problem (\ref{1-intro}) can be solved using \textit{proximal-type algorithms} (for a description of some of the main methods, see \cite{combettes2011}). For instance, in \cite{combettes2008} the authors propose a proximal decomposition method derived from the Douglas-Rachford algorithm and establish its weak convergence in Hilbert spaces. Some variants and generalizations of such methods can be found in \cite{bredies2009, raguet2013,combettes2016, chen1997, combettes2011}. The \textit{bundle approach}~\cite{hil93} is often used for numerically solving non-differentiable convex optimization problems. Also, \textit{first order accelerated techniques}~\cite{bet09,bet09b} form yet another family of popular techniques for convex optimization problems endowed with a certain separability property. The advantage of ISM over the aforementioned techniques lies in its lightweight iterations from the computational viewpoint, not even requiring sufficient decrease properties to be checked. Besides, ISM presents a fast practical convergence rate in the first iterations, which enables this technique to achieve good reconstructions within a small amount of time even for the huge problem sizes that appear, for example, in tomography. The {\it tomographic image reconstruction problem} consists in finding an image described by a function $\psi : \mathbb R^2 \to \mathbb R$ from samples of its {\it Radon Transform}, which can be recast into solving the linear system \begin{equation} \label{image system} R \mathbf{x} = \mathbf{b}, \end{equation} where $R$ is the discretization of the Radon Transform operator, $\mathbf{b}$ contains the collected samples of the Radon Transform and the desired image is represented by the vector $\mathbf{x}$. We consider solving the problem (\ref{image system}) by rewriting it as a minimization problem, as in (\ref{1-intro}). Solving problem (\ref{image system}) from an optimization standpoint is not a new idea. In particular, \cite{helou09} illustrates the application of some of the methods arising from a general framework to tomographic image reconstruction with a limited number of views. For the discretized problem (\ref{image system}), iterative methods such as ART (Algebraic Reconstruction Technique) \cite{natterer2001mathematical}, POCS (Projection Onto Convex Sets) \cite{bauschke1996projection, combettes97}, and Cimmino \cite{combettes1994inconsistent} have been widely used in the past. Tomographic image reconstruction is an inverse problem in the sense that the image $\psi$ is to be obtained from the inversion of a linear compact operator, which is well known to be an ill-conditioned problem. While the specific case of Radon inversion has an analytical solution, which was published in 1917 by Johann Radon (for details see \cite{natterer86}), both such analytical techniques and the aforementioned iterative methods for linear systems of equations suffer from amplification of the statistical noise which, in practice, is always present in the right-hand side of~(\ref{image system}).
Therefore, methods designed to deal with noisy data have been developed, based on a maximum likelihood approach, among which EM (Expectation Ma\-xi\-mi\-za\-ti\-on) \cite{shepp1982maximum, vardi1985statistical}, OS-EM (Ordered Subsets Expectation Ma\-xi\-mi\-za\-ti\-on) \cite{hudson1994accelerated}, RAMLA (Row-Action Maximum Likelihood Algorithm) \cite{browne1996row}, BSREM (Block Sequential Regularized Expectation Ma\-xi\-mi\-za\-ti\-on) \cite{de2001fast}, DRAMA (Dynamic RAMLA) \cite{tanaka2003subset, helou05}, modified BSREM and relaxed OS-SPS (Ordered Subset-Se\-pa\-ra\-ble Paraboloidal Surrogates) \cite{ahn2003globally} are some of the best known in the literature. In \cite{hcc14}, a variant of the EM algorithm was introduced, called the \textit{String-Averaging Expectation-Maximization} (SAEM) \textit{algorithm}. The SAEM algorithm was used in problems of image reconstruction in {\it Single-Photon Emission Computerized To\-mo\-gra\-phy} (SPECT) and showed good performance in simulated and real data studies. High-contrast images, with less noise and clearer object boundaries, were reconstructed without incurring more computation time. Besides the BSREM, DRAMA, modified BSREM and relaxed OS-SPS, which are relaxed algorithms for (penalized) maximum-likelihood image reconstruction in tomography, the method introduced in \cite{dewaraja2010} considers an approach, based on OS-SPS, in which extra anatomical boundary information is used. Other methods that use penalized models can be found in \cite{harmany2012, chouzenoux2013}. Proximal methods were used in \cite{anthoine2012} to reconstruct images obtained via {\it Cone Beam Computerized Tomography} (CBCT) and {\it Positron Emission Tomography} (PET). In \cite{chouzenoux2013}, the {\it Majorize-Minimize Memory Gradient algorithm} \cite{chouzenoux2011, chouzenoux2013siam} is studied and applied to imaging tasks. The paper is organized as follows: Section \ref{sec.2} contains some preliminary theory involving incremental subgradient methods, optimality and feasibility operators and the string-averaging algorithm; Section \ref{sec.3} discusses the proposed algorithm to solve (\ref{1-intro}), \textbf{(i)}-\textbf{(v)}; Section \ref{sec.4} shows theoretical convergence results; in Section \ref{sec.5} numerical tests are performed with reconstruction of tomographic images. Final considerations are given in Section \ref{sec.6}. \section{Preliminary theory} \label{sec.2} Throughout the text, we will use the following notation: bold-type symbols, e.g. $\mathbf{x}$, $\mathbf{x}_i$ and $\mathbf{x}_{i}^{k}$, denote vectors, whereas $x$ is a number. We denote $x_i$ as the $i$th coordinate of vector $\mathbf{x}$. Moreover, \begin{eqnarray*} \displaystyle \mathcal{P}_{\mathbf{X}}(\mathbf{x}):= \arg \min_{\mathbf{y} \in \mathbf{X}} \left\| \mathbf{y} - \mathbf{x} \right\|, \quad d_{\mathbf{X}}(\mathbf{x}):= \left\|\mathbf{x} - \mathcal{P}_{\mathbf{X}}(\mathbf{x}) \right\|, \\ \left[ x \right]_{+} := \max\left\{0,x\right\}, \quad f^{\ast} = \inf_{\mathbf{x} \in \mathbf{X}} f(\mathbf{x}) \quad \mbox{and} \quad \mathbf{X}^{\ast} = \left\{\mathbf{x} \in \mathbf{X} \, | \, f(\mathbf{x}) = f^{\ast}\right\},\end{eqnarray*} where we assume that $\mathbf{X}^{\ast} \neq \emptyset$.
One of the main methods for solving (\ref{1-intro}) is the \textit{subgradient method}, whose extensive theory can be found in \cite{dev85, shor85, polyak87, hil93, bertsekas1999nonlinear}, \begin{equation} \label{subgrad-method} \mathbf{x}^{k+1} = \mathcal{P}_{\mathbf{X}}\left(\mathbf{x}^k - \lambda_k \sum_{i=1}^{m} \mathbf{g}_{i}^{k}\right), \quad \lambda_k > 0, \quad \mathbf{g}_{i}^{k} \in \partial f_i(\mathbf{x}^k),\end{equation} where the {\it subdifferential} of $f: \mathbb{R}^n \rightarrow \mathbb{R}$ at $\mathbf{x}$ (the set of all subgradients) can be defined by \begin{equation} \displaystyle \partial f(\mathbf{x}) := \left\{\mathbf{g} \, | \, f(\mathbf{x}) + \left\langle \mathbf{g}, \mathbf{z} - \mathbf{x} \right\rangle \leq f(\mathbf{z}), \,\, \forall \mathbf{z}\right\}. \end{equation} A similar approach to (\ref{subgrad-method}), known as the \textit{incremental subgradient method}, was first studied by Kibardin in \cite{kibardin1979decomposition} and then analyzed by Solodov and Zavriev in \cite{solodov98}, in which a complete iteration of the algorithm can be described as follows: \begin{eqnarray} \mathbf{x}_{0}^{k} &=& \mathbf{x}^{k} \nonumber \\ \label{sub-iterate} \mathbf{x}_{i}^{k} &=& \mathbf{x}_{i-1}^{k} - \lambda_k \mathbf{g}_{i}^{k}, \quad i = 1, \dots, m, \quad \mathbf{g}_{i}^{k} \in \partial f_{i}(\mathbf{x}_{i-1}^{k}) \\ \displaystyle \mathbf{x}^{k+1} &=& \mathcal{P}_{\mathbf{X}}\left(\mathbf{x}_{m}^{k}\right). \nonumber \end{eqnarray} A variant of this algorithm that uses projection onto $\mathbf{X}$ to compute the sub-iterations $\mathbf{x}_{i}^{k}$ was analyzed in \cite{nedic01}. The method we propose in this paper for solving the problem given in (\ref{1-intro}), \textbf{(i)}-\textbf{(v)} has the following general form described in \cite{helou09}: \begin{equation} \label{ov} \begin{array}{rcl} \mathbf{x}^{k+1/2} &=& \displaystyle \mathcal{O}_f(\lambda_k, \mathbf{x}^k); \\ \mathbf{x}^{k+1} &=& \displaystyle \mathcal{V}_{\mathbf{X}}(\mathbf{x}^{k+1/2}). \end{array} \end{equation} In the above equations, $\mathcal{O}_f$ is called the {\it optimality operator} and $\mathcal{V}_{\mathbf{X}}$ is the {\it feasibility operator}. This framework was created to handle quite general algorithms for convex optimization problems. The basic idea consists in dividing an iteration into two parts: an optimality step, which tries to guide the iterate towards the minimizer of the objective function (but not necessarily in a descent direction), followed by a feasibility step that drives the iterate in the direction of feasibility. Next we enunciate a result due to Helou and De Pierro (see \cite[Theorem 2.5]{helou09}), establishing convergence of the method (\ref{ov}) under some conditions. This result is the key for the convergence analysis of the algorithm we propose in section \ref{sec.3}.
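Before proceeding, we remark that the abstract recursion (\ref{ov}) is straightforward to implement once the two operators are available; the following Python fragment is a minimal illustrative sketch (the operator arguments are placeholders, not the code used in our experiments), and concrete realizations of both operators are sketched after their definitions in the next section.
\begin{verbatim}
def solve(x0, opt_op, feas_op, steps, n_iter):
    # opt_op(lam, x) plays the role of O_f(lam, x);
    # feas_op(x) plays the role of V_X(x).
    x = x0
    for k in range(n_iter):
        x_half = opt_op(steps[k], x)   # optimality step
        x = feas_op(x_half)            # feasibility step
    return x
\end{verbatim}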
\begin{theorem} \label{conv1} The sequence $\{\mathbf{x}^k \}$ generated by the method described in \emph{(\ref{ov})} converges in the sense that \begin{equation*} \displaystyle d_{\mathbf{X}^{\ast}}(\mathbf{x}^k) \rightarrow 0 \qquad \mbox{and} \qquad \lim_{k \rightarrow \infty} f(\mathbf{x}^k) = f^{\ast}, \end{equation*} if all of the following conditions hold: \begin{condition}[Properties of optimality operator] \label{cond1} For every $\mathbf{x} \in \mathbf{X}$ and for every sequence $\lambda_k \geq 0$, there exist $\alpha > 0$ and a sequence $\rho_k \geq 0$ such that the optimality operator $\mathcal{O}_f$ satisfies for all $k \geq 0$ \begin{equation} \label{cond1-eq1} \displaystyle \|\mathcal{O}_f(\lambda_k, \mathbf{x}^k) - \mathbf{x} \|^2 \leq \|\mathbf{x}^k - \mathbf{x} \|^2 - \alpha \lambda_k (f(\mathbf{x}^k) - f(\mathbf{x})) + \lambda_k \rho_k. \end{equation} We further assume that the error term in the above inequality vanishes, i.e., $\rho_k \to 0$, and we consider a boundedness property for the optimality operator: there is $\gamma > 0$ such that \begin{equation} \label{cond1-eq2} \displaystyle \| \mathbf{x}^k - \mathcal{O}_f (\lambda_k, \mathbf{x}^k) \| \leq \lambda_k \gamma. \end{equation} \end{condition} \begin{condition}[Property of feasibility operator] \label{cond2} For the feasibility operator $\mathcal{V}_{\mathbf{X}}$, we impose that for all $\delta > 0$ there exists $\epsilon_{\delta} > 0$ such that, if $d_{\mathbf{X}}(\mathbf{x}^{k+1/2}) \geq \delta$ and $\mathbf{x} \in \mathbf{X}$, we have \begin{equation} \displaystyle \|\mathcal{V}_{\mathbf{X}}(\mathbf{x}^{k+1/2}) - \mathbf{x} \|^2 \leq \| \mathbf{x}^{k+1/2} - \mathbf{x} \|^2 - \epsilon_{\delta}. \end{equation} Moreover, for all $\mathbf{x} \in \mathbf{X}$, $\mathcal{V}_{\mathbf{X}}(\mathbf{x}) = \mathbf{x}$, i.e., $\mathbf{x}$ is a fixed point of $\mathcal{V}_{\mathbf{X}}$. \end{condition} \begin{condition}[Diminishing step-size rule] \label{cond3} The sequence $\{ \lambda_k \}$ satisfies \begin{equation} \label{hippasso} \displaystyle \lambda_k \rightarrow 0^{+}, \qquad \sum_{k=0}^{\infty} \lambda_k = \infty.\end{equation} \end{condition} \begin{condition} \label{cond4} The optimal set $\mathbf{X}^{\ast}$ is bounded, $\{ d_{\mathbf{X}}(\mathbf{x}^k) \}$ is bounded and \begin{equation*} [f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{x}^k)]_{+} \rightarrow 0. \end{equation*} \end{condition} \end{theorem} \begin{remark} \label{rmk1} \normalfont Regarding the requirement $[f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{x}^k)]_{+} \rightarrow 0$, it holds if there is a bounded sequence $\{ \mathbf{v}^k \}$ where $\mathbf{v}^k \in \partial f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k))$ and $d_{\mathbf{X}}(\mathbf{x}^k) \to 0$. Indeed, \begin{equation*} \langle \mathbf{v}^k, \mathbf{y} - \mathcal{P}_{\mathbf{X}}(\mathbf{x}^k) \rangle \leq f(\mathbf{y}) - f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)), \quad \forall \, \mathbf{y} \in \mathbb{R}^n. \end{equation*} By the Cauchy-Schwarz inequality, we have $\| \mathbf{v}^k \| \|\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k) - \mathbf{y} \| \geq [f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{y})]_{+}$. Taking $\mathbf{y} = \mathbf{x}^k$, we see that $d_{\mathbf{X}}(\mathbf{x}^k) \to 0$ ensures that $[f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{x}^k)]_{+} \to 0$.
Therefore, under this mild boundedness assumption on the subdifferentials $\partial f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k))$, proving that $d_{\mathbf{X}}(\mathbf{x}^k) \to 0$ also ensures that $[f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{x}^k)]_{+} \rightarrow 0$. Concerning the assumption $d_{\mathbf{X}}(\mathbf{x}^k) \to 0$, Proposition 2.1 in \cite{helou09} shows that it holds if $\{d_{\mathbf{X}}(\mathbf{x}^k) \}$ is bounded, $\lambda_k \to 0^{+}$, and Equation (\ref{cond1-eq2}) plus Condition \ref{cond2} hold. Since Condition \ref{cond4} requires $\{d_{\mathbf{X}}(\mathbf{x}^k) \}$ to be bounded, we have that $[f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{x}^k)]_{+} \to 0$ just under the boundedness assumption on $\partial f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k))$. Furthermore, Corollary 2.7 in \cite{helou09} states that $d_{\mathbf{X}}(\mathbf{x}^k) \to 0$ if $\lambda_k \to 0^{+}$, Conditions \ref{cond1} and \ref{cond2} hold and there is $f_l$ such that $f(\mathbf{x}^k) \geq f_l$ for all $k$. Basically, the hypotheses of this corollary allow one to show that $\{ d_{\mathbf{X}}(\mathbf{x}^k) \}$ is bounded, and the result follows from Proposition 2.1. This is the situation that occurs in our numerical experiment. Such remarks are important to show that the hypotheses of our main convergence result (see Corollary \ref{convSA} in section \ref{sec.4}) are reasonable. \qed \end{remark} To state our algorithm in the next section, we need to define the operators $\mathcal{O}_f$ and $\mathcal{V}_{\mathbf{X}}$. Below we present the last ingredient of our operator $\mathcal{O}_f$, the \textit{String-Averaging} (SA) \textit{algorithm}. Originally formulated in \cite{censor01}, the SA algorithm consists of dividing an index set $\mathrm{I} = \left\{1,2, \dots, \eta \right\}$ into {\it strings} in the following manner \begin{equation} \displaystyle \Delta_{\ell} := \left\{i_{1}^{\ell}, i_{2}^{\ell}, \dots, i_{m(\ell)}^{\ell}\right\}, \end{equation} where $m(\ell)$ represents the number of elements in the string $\Delta_{\ell}$ and $\ell \in \{ 1,2, \dots, N \}$. Let us consider $\mathcal{X}$ and $\mathcal{Y}$ as subsets of $\mathbb{R}^{n}$ where $\mathcal{Y} \subseteq \mathcal{X}$. The basic idea behind the method consists in the sequential application of {\it step operators} $\mathcal{F}^{i_{s}^{\ell}}: \mathcal{X} \to \mathcal{Y}$, for each $s = 1,2, \dots, m(\ell)$ over each string $\Delta_{\ell}$, producing $N$ vectors $\mathbf{y}_{\ell}^{k} \in \mathcal{Y}$. Next, a {\it combination operator} $\mathcal{F}: \mathcal{Y}^N \to \mathcal{Y}$ mixes, usually by weighted average, all vectors $\mathbf{y}_{\ell}^{k}$ to obtain $\mathbf{y}^{k+1}$. We refer to the index $s$ as the \emph{step} and the index $k$ as the \emph{iteration}. Therefore, given $\mathbf{x}^0 \in \mathcal{X}$ and strings $\Delta_1, \dots, \Delta_N$ of $\mathrm{I}$, a complete iteration of the SA algorithm is computed, for each $k \geq 0$, by equations \begin{equation} \label{step_op} \displaystyle \mathbf{y}_{\ell}^{k} := \mathcal{F}^{i_{m(\ell)}^{\ell}} \circ \dots \circ \mathcal{F}^{i_{2}^{\ell}} \circ \mathcal{F}^{i_{1}^{\ell}}\left(\mathbf{x}^k\right), \end{equation} \begin{equation} \label{comb_op} \displaystyle \mathbf{y}^{k+1} := \mathcal{F}((\mathbf{y}_{1}^{k}, \dots, \mathbf{y}_{N}^{k})).
\end{equation} The main advantage of this approach is to allow for computation of each vector $\mathbf{y}_{\ell}^{k}$ \emph{in parallel} at each iteration $k$, which is possible because the step operators $\mathcal{F}^{i_{1}^{\ell}}, \dots, \mathcal{F}^{i_{m(\ell)}^{\ell}}$ act along each string independently. \section{Proposed algorithm} \label{sec.3} Now we are ready to define $\mathcal{O}_f$ and $\mathcal{V}_{\mathbf{X}}$. Let us start by defining the optimality operator $\mathcal{O}_f: \mathbb{R}_{+} \times \mathbf{Y} \to \mathbf{Y}$, where $\mathbf{Y}$ is a non-empty, closed and convex set such that $\mathbf{X} \subset \mathbf{Y} \subseteq \mathbb{R}^{n}$. For this, let $ \mathcal{F}^{i_{s}^{\ell}}: \mathbb{R}_{+} \times \mathbf{Y} \to \mathbf{Y}$ and $\mathcal{F}: \mathbf{Y}^P \to \mathbf{Y}$. Consider the set of strings $\Delta_1 = S_1, \dots, \Delta_P = S_P$ and the weight set $\{ w_\ell \}_{\ell=1}^{P}$ as defined in the problem given in (\ref{1-intro}) with conditions \textbf{(iii)}-\textbf{(v)}. Then, given $\mathbf{x} \in \mathbf{Y}$ and $\lambda \in \mathbb{R}_{+}$, we define \begin{equation} \label{initial_opt_op} \mathbf{x}_{i_{0}^{\ell}} := \mathbf{x}, \quad \mbox{for all} \quad \ell = 1, \dots, P,\end{equation} \begin{equation} \label{step.operators} \displaystyle \mathbf{x}_{i_{s}^{\ell}} := \mathcal{F}^{i_{s}^{\ell}}(\lambda, \mathbf{x}_{i_{s-1}^{\ell}}) := \mathbf{x}_{i_{s-1}^{\ell}} - \lambda \mathbf{g}_{i_{s}^{\ell}}, \quad s = 1, \dots, m(\ell), \end{equation} \begin{equation} \label{end.points} \mathbf{x}_{\ell} := \mathbf{x}_{i_{m(\ell)}^{\ell}}, \quad \ell = 1, \dots, P, \end{equation} \begin{equation} \label{SA_isoo} \mathcal{O}_f(\lambda, \mathbf{x}) := \mathcal{F}((\mathbf{x}_1, \dots, \mathbf{x}_P)) := \sum_{\ell=1}^{P} w_{\ell} \mathbf{x}_{\ell}, \end{equation} where $\mathbf{g}_{i_{s}^{\ell}} \in \partial f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}})$. Operators $\mathcal{F}^{i_{s}^{\ell}}$ in (\ref{step.operators}) correspond to the step operators in equation (\ref{step_op}) of the SA algorithm and their definition is motivated by equation (\ref{sub-iterate}) of the incremental subgradient method. Function $\mathcal{F}$ in (\ref{SA_isoo}) corresponds to the combination operator in (\ref{comb_op}) and performs a weighted average of the end-points $\mathbf{x}_{\ell}$, completing the definition of the operator $\mathcal{O}_f$. Now we need to define a feasibility operator $\mathcal{V}_{\mathbf{X}}$. For that, we use the {\it subgradient projection} \cite{bauschke06,combettes97,yamada2005adaptive,yamada2005hybrid}. Let us start by noticing that every convex set $\mathbf{X} \neq \emptyset$ can be written as \begin{equation} \label{feasible.set} \displaystyle \mathbf{X} = \bigcap_{i=1}^{t} \, lev_0(h_i),\end{equation} where $lev_0(h_i) := \left\{\mathbf{x} \,| \, h_{i}(\mathbf{x}) \leq 0\right\}$. Each function $h_i : \mathbb{R}^n \rightarrow \mathbb{R}$ ($t$ is finite) is supposed to be convex. The feasibility operator $\mathcal{V}_{\mathbf{X}}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is defined in \cite{helou09} in the following form: \begin{equation} \label{viabilidade} \displaystyle \mathcal{V}_{\mathbf{X}}:= \mathcal{S}_{h_t}^{\nu_t} \circ \mathcal{S}_{h_{t-1}}^{\nu_{t-1}} \circ \dots \circ \mathcal{S}_{h_1}^{\nu_1}. \end{equation} This definition assumes that there is $\sigma \in (0,1]$ such that $\nu_i \in [\sigma, 2 - \sigma]$ for all $i$.
Each operator $\mathcal{S}_{h}^{\nu}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ in the previous definition is constructed using a $\nu$-relaxed version of the subgradient projection with Polyak-type step-sizes, i.e., \begin{equation} \label{subgrad_projection} \displaystyle \mathcal{S}_{h}^{\nu}(\mathbf{x}) := \left\{ \begin{array}{ccl} \mathbf{x} - \nu \frac{[h(\mathbf{x})]_{+}}{\left\|\mathbf{h}\right\|^2} \mathbf{h}, & \mbox{if} & \displaystyle \mathbf{h} \neq \mathbf{0}; \\ \mathbf{x}, & & \mbox{otherwise}, \end{array} \right. \end{equation} where $\nu \in (0,2)$ and $\mathbf{h} \in \partial h(\mathbf{x})$. In order to get a better understanding of the behavior of our feasibility operator, Figure \ref{feasibility_ex} shows the trajectory taken by successive applications of the operator $\mathcal{V}_{\mathbf{X}}$. The feasible set $\mathbf{X}$ is the intersection of the zero sublevel sets of the following convex functions: $h_1(\mathbf{x}) = \langle \mathbf{a}, \mathbf{x} \rangle + 2\| \mathbf{x} \|_1 - 1$, $h_2(\mathbf{x}) = 3 \| \mathbf{x} \|_{\infty} - 2.5$ and $h_3(\mathbf{x}) = \| A \mathbf{x} - \mathbf{a} \|_1 + 2\|B \mathbf{x} - \mathbf{c} \|_2 - 10$ where \begin{equation*} A = \left[ \begin{array}{cc} 2 & 1 \\ -1 & 3 \end{array} \right], \quad B = \left[ \begin{array}{cc} 1 & 0 \\ -2 & 2 \end{array} \right], \quad \mathbf{a} = [2 \quad 1]^T \quad \mbox{and} \quad \mathbf{c} = [1 \,\, -2]^T. \end{equation*} To obtain $\mathcal{V}_{\mathbf{X}}(\mathbf{x}) = \mathcal{S}_{\mathbf{h}_3}^{\nu_3} \circ \mathcal{S}_{\mathbf{h}_2}^{\nu_2} \circ \mathcal{S}_{\mathbf{h}_1}^{\nu_1}(\mathbf{x})$, we compute the subgradients $\mathbf{h}_i \in \partial h_i(\mathbf{s}_{i-1})$, $i = 1,\dots,3$, where $\mathbf{s}_0 := \mathbf{x}$ and $\mathbf{s}_i := \mathcal{S}_{\mathbf{h}_i}^{\nu_i}(\mathbf{s}_{i-1})$. We choose $[-3 \,\, -2.5]^T$ as an initial point and the following relaxation parameters: $\nu_1 = 0.5$, $\nu_2 = 0.6$ and $\nu_3 = 0.7$. \begin{figure} \caption{Trajectory generated by successive applications of the feasibility operator $\mathcal{V}_{\mathbf{X}}$, starting from the initial point $[-3 \,\, -2.5]^T$.} \label{feasibility_ex} \end{figure}
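To make the constructions of this section concrete, we note that both operators admit a direct implementation. The following Python sketch is purely illustrative and is not the code used in our numerical experiments: \texttt{subgrads[i](x)} is a user-supplied oracle returning some element of $\partial f_i(\mathbf{x})$ as a NumPy array, and each constraint of the feasible set is given as a pair \texttt{(h, dh)} of callables for $h_i$ and a subgradient oracle of $h_i$.
\begin{verbatim}
import numpy as np

def opt_op(lam, x, strings, weights, subgrads):
    # Optimality operator O_f: incremental subgradient steps
    # along each string, followed by a weighted average.
    end_points = []
    for S in strings:              # each string is independent (parallelizable)
        y = x.copy()
        for i in S:
            y = y - lam * subgrads[i](y)
        end_points.append(y)
    return sum(w * y for w, y in zip(weights, end_points))

def subgrad_proj(x, h, dh, nu):
    # nu-relaxed subgradient projection S_h^nu with Polyak-type step-size.
    g = dh(x)
    if not np.any(g):
        return x
    return x - nu * max(h(x), 0.0) / g.dot(g) * g

def feas_op(x, constraints, relaxations):
    # Feasibility operator V_X: composition of the S_{h_i}^{nu_i}.
    for (h, dh), nu in zip(constraints, relaxations):
        x = subgrad_proj(x, h, dh, nu)
    return x
\end{verbatim}
Note that the loop over strings in \texttt{opt\_op} is embarrassingly parallel, which is precisely the practical appeal of the string-averaging structure.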
Making use of the triangle inequality and $(\sum_{i = 1}^n a_i)^2 \leq n\sum_{i = 1}^n a_i^2$, we have \begin{eqnarray*} \| \tilde{\mathcal V}_{\mathbf{X}}( \mathbf x ) - \mathbf y \|^2 & {}= \left\| \frac1Q\sum_{j = 1}^Q\left[ \mathcal V_j( \mathbf x ) - \mathbf y \right]\right \|^2=\frac1{Q^2}\left\| \sum_{j = 1}^Q\left[ \mathcal V_j( \mathbf x ) - \mathbf y \right]\right \|^2 \\ & {}\leq \frac1{Q^2}\left(\sum_{j = 1}^Q\left\| \mathcal V_j( \mathbf x ) - \mathbf y \right \| \right)^2\leq \frac1Q\sum_{j = 1}^Q\left\| \mathcal V_j( \mathbf x ) - \mathbf y \right \|^2. \end{eqnarray*} Now we notice that if $\mathbf X$ is bounded, $d_{\mathbf X}( \mathbf x ) \geq \delta$ implies that $\max\{ d_{\mathbf X_j}( \mathbf x ) \} \geq \tilde\delta > 0$ (for weaker conditions under which the same holds, see the results in~\cite{hof92}). Therefore, by using~(\ref{eq:veiaString}) in the inequality above we obtain, for $\mathbf y \in \mathbf X = \bigcap_{j = 1}^Q\mathbf X_j$ and every $\mathbf x$ with $d_{\mathbf X}( \mathbf x ) \geq \delta$: \begin{eqnarray*} \| \tilde{\mathcal V}_{\mathbf{X}}( \mathbf x ) - \mathbf y \|^2 & {}\leq \frac1Q\sum_{j = 1}^Q\left\{\left\| \mathbf x - \mathbf y \right \|^2 - \epsilon^j_{\delta_j} \right\}, \end{eqnarray*} where $\delta_j := d_{\mathbf X_j}( \mathbf x )$. Therefore: \begin{eqnarray*} \| \tilde{\mathcal V}_{\mathbf{X}}( \mathbf x ) - \mathbf y \|^2 & {}\leq \left\| \mathbf x - \mathbf y \right \|^2 - \tilde\epsilon_{\delta}, \end{eqnarray*} where $ \tilde\epsilon_\delta = \min_{j \in \{ 1, \dots, Q\}}\{\epsilon^j_{\tilde\delta}\}$. The above argument suggests that if the operators $\mathcal V_j$ satisfy Condition~\ref{cond2} with $\mathbf X$ replaced by $\mathbf X_j$, then its average also will satisfy Condition~\ref{cond2} with $\mathbf X$ replaced by $\bigcap_{j=1}^{Q} \mathbf{X}_j$. To the best of our knowledge, the previous discussion presents a first step to generalize some of the results from~\cite{censor01,censor2013convergence,censor2014string} towards averaging strings of inexact projections, or more specifically, averaging of Fej\'er-monotone operators. We do not make use of averaged feasibility operators in this paper for clarity of presentation and also because our numerical examples can be handled in the classical way, without string averaging, since our model has few constraints. \qed \end{remark} With the optimality and feasibility operators already defined, we present a complete description of the algorithm we propose to solve the problem defined in (\ref{1-intro}), \textbf{(i)}-\textbf{(v)}. \begin{algorithm}[String-averaging incremental subgradient method] \label{alg-proposed} $ $ \begin{description} \item[\textbf{Input:}] Choose an initial vector $\mathbf{x}^0 \in \mathbf{Y}$ and a sequence of step-sizes $\lambda_k \geq 0$. \item[\textbf{Iteration:}] Given the current iteration $\mathbf{x}^k$, do \begin{description} \item[Step 1.] \emph{(Step operators)} Compute independently for each $\ell=1,\dots,P$: \begin{eqnarray} \label{step.operators.alg} \displaystyle \mathbf{x}_{i_{0}^{\ell}}^{k} &=& \mathbf{x}^k, \nonumber \\ \displaystyle \mathbf{x}_{i_{s}^{\ell}}^{k} &=& \mathcal{F}^{i_{s}^{\ell}}(\lambda_k, \mathbf{x}_{i_{s-1}^{\ell}}^{k}), \quad s = 1, \dots, m(l), \nonumber \\ \displaystyle \mathbf{x}_{\ell}^{k} &=& \mathbf{x}_{i_{m(\ell)}^{\ell}}^{k}, \end{eqnarray} where $i_{s}^{\ell} \in \Delta_\ell := S_\ell$ for each $s=1,\dots,m(\ell)$ and $\mathcal{F}^{i_{s}^{\ell}}$ is defined in \emph{(\ref{step.operators})}. \item[Step 2.] 
\emph{(Combination operator)} Use the end-points $\mathbf{x}_{\ell}^{k}$ obtained in Step \emph{1} and the optimality operator $\mathcal{O}_f$ defined in \emph{(\ref{SA_isoo})} to obtain: \begin{equation} \label{opt.operator.alg} \mathbf{x}^{k+1/2} = \mathcal{O}_f(\lambda_k, \mathbf{x}^k).\end{equation} \item[Step 3.] Apply the feasibility operator $\mathcal{V}_{\mathbf{X}}$ defined in \emph{(\ref{viabilidade})} to the sub-iteration $\mathbf{x}^{k+1/2}$ to obtain: \begin{equation} \label{feas.operator.alg} \mathbf{x}^{k+1} = \mathcal{V}_{\mathbf{X}}(\mathbf{x}^{k+1/2}).\end{equation} \item[Step 4.] Update $k$ and return to Step \emph{1}. \end{description} \end{description} \end{algorithm} \section{Convergence analysis} \label{sec.4} Throughout this section, we denote $ F_{S_\ell}(\mathbf{x}) = \sum_{s=1}^{m(\ell)} f_{i_{s}^{\ell}}(\mathbf{x})$ for each $\ell = 1, \dots, P$. The following subgradient boundedness assumption is key in this paper: for all $\ell$ and $s$, \begin{equation} \label{limitacao} \displaystyle C_{i_{s}^{\ell}} = \sup_{k \geq 0}\left\{\left\|\mathbf{g} \right\| \, | \, \mathbf{g} \in \partial f_{i_{s}^{\ell}}(\mathbf{x}^k) \cup \partial f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k}) \right\} < \infty.\end{equation} Recall that Theorem \ref{conv1} is the main tool for the convergence analysis, so we will show that each of its conditions is valid under assumption (\ref{limitacao}). We present auxiliary results in the next two lemmas. \begin{lemma} \label{lem1} Let $\{\mathbf{x}^k \}$ be the sequence generated by Algorithm {\em \ref{alg-proposed}} and suppose that subgradient boundedness assumption \emph{(\ref{limitacao})} holds. Then, for each $\ell$ and $s$ and for all $k \geq 0$, we have \begin{description} \item[(i)] \begin{equation} \label{ineq1} \displaystyle f_{i_{s}^{\ell}}(\mathbf{x}^k) - f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k}) \leq C_{i_{s}^{\ell}} \|\mathbf{x}_{i_{s-1}^{\ell}}^{k} - \mathbf{x}^k \|. \end{equation} \item[(ii)] \begin{equation} \label{ineq2} \displaystyle \|\mathbf{x}_{i_{s}^{\ell}}^{k} - \mathbf{x}^k \| \leq \lambda_k \sum_{r=1}^{s} C_{i_{r}^{\ell}}.\end{equation} \item[(iii)] For all $\mathbf{y} \in \mathbb{R}^n$, we have \begin{equation} \label{ineq3} \displaystyle \left\langle \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}, \mathbf{y} - \mathbf{x}^k \right\rangle \leq F_{S_{\ell}}(\mathbf{y}) - F_{S_{\ell}}(\mathbf{x}^k) + 2\lambda_k \sum_{s=2}^{m(\ell)} C_{i_{s}^{\ell}}\left(\sum_{r=1}^{s-1} C_{i_{r}^{\ell}}\right), \end{equation} where $\mathbf{g}_{i_{s}^{\ell}}^{k} \in \partial f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k})$. \end{description} \end{lemma} \begin{proof} \begin{description} \item[(i)] By definition of the subdifferential $\displaystyle \partial f_{i_{s}^{\ell}}(\mathbf{x}^{k})$, we have \begin{equation*} \displaystyle f_{i_{s}^{\ell}}(\mathbf{x}^k) - f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k}) \leq - \langle \mathbf{v}_{i_{s}^{\ell}}^{k}, \mathbf{x}_{i_{s-1}^{\ell}}^{k} - \mathbf{x}^k \rangle,\end{equation*} where $\mathbf{v}_{i_{s}^{\ell}}^{k} \in \partial f_{i_{s}^{\ell}}(\mathbf{x}^k)$. The result follows from the Cauchy-Schwarz inequality and the subgradient boundedness assumption (\ref{limitacao}).
\item[(ii)] Unfolding the recursion $\displaystyle \mathbf{x}_{i_{s}^{\ell}}^{k} = \mathbf{x}_{i_{s-1}^{\ell}}^{k} - \lambda_k \mathbf{g}_{i_{s}^{\ell}}^{k}$ for each $s=1, \dots,m(\ell)$ yields, \begin{equation*}\begin{array}{lllll} \displaystyle \|\mathbf{x}_{i_{1}^{\ell}}^{k} - \mathbf{x}^k \| &=& \displaystyle \| \mathbf{x}^k - \lambda_k \mathbf{g}_{i_{1}^{\ell}}^{k} - \mathbf{x}^k \| &\leq& \lambda_k C_{i_{1}^{\ell}}, \\ \displaystyle \| \mathbf{x}_{i_{2}^{\ell}}^{k} - \mathbf{x}^k \| &\leq& \displaystyle \| \mathbf{x}_{i_{1}^{\ell}}^{k} - \mathbf{x}^k \| + \lambda_k \| \mathbf{g}_{i_{2}^{\ell}}^{k} \| &\leq& \lambda_k (C_{i_{1}^{\ell}} + C_{i_{2}^{\ell}}), \\ \qquad \vdots & & \qquad \qquad \vdots & & \qquad \vdots \\ \displaystyle \| \mathbf{x}_{i_{s}^{\ell}}^{k} - \mathbf{x}^k \| &\leq& \| \mathbf{x}_{i_{s-1}^{\ell}}^{k} - \mathbf{x}^k \| + \lambda_k \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| &\leq& \displaystyle \lambda_k \sum_{r=1}^{s} C_{i_{r}^{\ell}}. \end{array} \end{equation*} \item[(iii)] By the Cauchy-Schwarz inequality and the definition of the subdifferential $\displaystyle \partial f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k})$ we have, $$\begin{array}{lll} \displaystyle \left\langle \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}, \mathbf{y} - \mathbf{x}^k \right\rangle &=& \displaystyle \sum_{s=1}^{m(\ell)} \langle \mathbf{g}_{i_{s}^{\ell}}^{k}, \mathbf{x}_{i_{s-1}^{\ell}}^{k} - \mathbf{x}^k \rangle + \sum_{s=1}^{m(\ell)} \langle \mathbf{g}_{i_{s}^{\ell}}^{k}, \mathbf{y} - \mathbf{x}_{i_{s-1}^{\ell}}^{k} \rangle \\ &\leq& \displaystyle \sum_{s=1}^{m(\ell)} \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| \|\mathbf{x}^k - \mathbf{x}_{i_{s-1}^{\ell}}^{k} \| + \sum_{s=1}^{m(\ell)} ( f_{i_{s}^{\ell}}(\mathbf{y}) - f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k}) ) \\ &=& \displaystyle \sum_{s=2}^{m(\ell)} \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| \|\mathbf{x}^k - \mathbf{x}_{i_{s-1}^{\ell}}^{k} \| + F_{S_{\ell}}(\mathbf{y}) - F_{S_{\ell}}(\mathbf{x}^k) \\ && \qquad \displaystyle - \sum_{s=2}^{m(\ell)} (f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k}) - f_{i_{s}^{\ell}}(\mathbf{x}^k) ). \end{array}$$ By eqs. (\ref{ineq1}), (\ref{ineq2}) and the subgradient boundedness assumption (\ref{limitacao}) we obtain, $$\begin{array}{lll} \displaystyle \left\langle \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}, \mathbf{y} - \mathbf{x}^k \right\rangle &\leq& \displaystyle \sum_{s=2}^{m(\ell)} \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| \| \mathbf{x}^k - \mathbf{x}_{i_{s-1}^{\ell}}^{k} \| + F_{S_{\ell}}(\mathbf{y}) - F_{S_{\ell}}(\mathbf{x}^k) \\ && \qquad \displaystyle + \sum_{s=2}^{m(\ell)} \|\mathbf{v}_{i_{s}^{\ell}}^{k} \| \| \mathbf{x}^k - \mathbf{x}_{i_{s-1}^{\ell}}^{k} \| \\ &\leq& \displaystyle F_{S_{\ell}}(\mathbf{y}) - F_{S_{\ell}}(\mathbf{x}^k) + \sum_{s=2}^{m(\ell)} ( \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| + \| \mathbf{v}_{i_{s}^{\ell}}^{k} \| )\left(\lambda_k \sum_{r=1}^{s-1} C_{i_{r}^{\ell}}\right) \\ &\leq& \displaystyle F_{S_{\ell}}(\mathbf{y}) - F_{S_{\ell}}(\mathbf{x}^k) + 2 \lambda_k \sum_{s=2}^{m(\ell)} C_{i_{s}^{\ell}}\left(\sum_{r=1}^{s-1} C_{i_{r}^{\ell}}\right). \end{array}$$ \end{description} \end{proof} The following lemma is useful to analyze the convergence of Algorithm \ref{alg-proposed}. \begin{lemma} \label{lem2} Let $\{\mathbf{x}^k \}$ be the sequence generated by Algorithm {\em \ref{alg-proposed}} and suppose that assumption \emph{(\ref{limitacao})} holds.
Then, there is a positive constant $C$ such that, for all $\mathbf{y} \in \mathbf{Y} \supset \mathbf{X}$ and for all $k \geq 0$ we have \begin{equation} \label{desig} \displaystyle \| \mathcal{O}_f(\lambda_k, \mathbf{x}^k) - \mathbf{y} \|^2 \leq \| \mathbf{x}^{k} - \mathbf{y} \|^2 - \frac{2}{P} \lambda_{k} (f(\mathbf{x}^{k}) - f(\mathbf{y})) + C \lambda_{k}^2, \end{equation} \end{lemma} \begin{proof} Initially, we can develop equation (\ref{step.operators.alg}) for each $\ell = 1, \dots, P$ and obtain $\displaystyle \mathbf{x}_{\ell}^{k} = \mathbf{x}^k - \lambda_k \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}$, where $\mathbf{g}_{i_{s}^{\ell}}^{k} \in \partial f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k})$. Thus, from equation (\ref{opt.operator.alg}) we have for all $k \geq 0$, \begin{equation*} \begin{array}{lll} \displaystyle \mathcal{O}_f(\lambda_k, \mathbf{x}^k) &=& \displaystyle \sum_{\ell=1}^{P} w_{\ell} \mathbf{x}_{\ell}^{k} \\ &=& \displaystyle \sum_{\ell=1}^{P} w_{\ell} \left(\mathbf{x}^k - \lambda_k \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} \right) \\ &=& \displaystyle \mathbf{x}^k - \lambda_k \sum_{\ell=1}^{P} w_{\ell} \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}. \end{array}\end{equation*} Using the above equation we obtain for all $\mathbf{y} \in \mathbf{Y}$ and for all $k \geq 0$, $$\begin{array}{lll} \displaystyle \| \mathcal{O}_f(\lambda_k, \mathbf{x}^k) - \mathbf{y} \|^2 &=& \displaystyle \left\| \mathbf{x}^k - \mathbf{y} - \lambda_k \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} \right\|^2 \\ &=& \displaystyle \|\mathbf{x}^k - \mathbf{y} \|^2 - 2 \left\langle \mathbf{x}^k - \mathbf{y}, \, \lambda_k \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} \right\rangle + \left\| \lambda_k \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} \right\|^2 \\ &=& \displaystyle \|\mathbf{x}^k - \mathbf{y} \|^2 + 2 \lambda_k \sum_{\ell=1}^{P} w_\ell \left\langle \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}, \, \mathbf{y} - \mathbf{x}^k \right\rangle + \lambda_{k}^{2} \left\| \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} \right\|^2. 
\end{array}$$ Now, using Lemma \ref{lem1} \textbf{(iii)}, triangle inequality and $P\sum_{\ell=1}^{P} w_\ell F_{S_{\ell}}(\mathbf{x}) = f(\mathbf{x})$ we have, $$\begin{array}{lll} \displaystyle \|\mathcal{O}_f(\lambda_k, \mathbf{x}^k) - \mathbf{y} \|^2 &\leq& \displaystyle \| \mathbf{x}^k - \mathbf{y} \|^2 - 2\lambda_k \sum_{\ell=1}^{P} w_\ell \left[F_{S_{\ell}}(\mathbf{x}^k) - F_{S_{\ell}}(\mathbf{y}) - 2\lambda_k\sum_{s=2}^{m(\ell)} C_{i_{s}^{\ell}} \left(\sum_{r=1}^{s-1} C_{i_{r}^{\ell}}\right)\right] \\ && \displaystyle \qquad + \lambda_{k}^{2} \left\| \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k}\right\|^2 \\ &\leq& \displaystyle \| \mathbf{x}^k - \mathbf{y} \|^2 - 2 \lambda_k \left(\sum_{\ell=1}^{P} w_\ell F_{S_{\ell}}(\mathbf{x}^k) - \sum_{\ell=1}^{P} w_\ell F_{S_{\ell}}(\mathbf{y}) \right) \\ && \displaystyle \qquad + 4 \lambda_{k}^{2} \sum_{\ell=1}^{P} w_\ell \left[\sum_{s=2}^{m(\ell)} C_{i_{s}^{\ell}} \left(\sum_{r=1}^{s-1} C_{i_{r}^{\ell}}\right)\right] + \lambda_{k}^{2} \left( \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| \right)^2 \\ &=& \displaystyle \| \mathbf{x}^k - \mathbf{y} \|^2 - 2\frac{\lambda_k}{P} (f(\mathbf{x}^k) - f(\mathbf{y})) + 4\lambda_{k}^{2}\sum_{\ell=1}^{P} w_\ell \left[\sum_{s=2}^{m(\ell)} C_{i_{s}^{\ell}}\left(\sum_{r=1}^{s-1} C_{i_{r}^{\ell}}\right)\right] \\ && \displaystyle \qquad + \lambda_{k}^{2} \left( \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \| \mathbf{g}_{i_{s}^{\ell}}^{k} \| \right)^2. \end{array}$$ Finally, by subgradient boundedness assumption (\ref{limitacao}), we obtain for all $\mathbf{y} \in \mathbf{Y}$ and for all $k \geq 0$, $$\begin{array}{lll} \displaystyle \| \mathcal{O}_f(\lambda_k, \mathbf{x}^k) - \mathbf{y} \|^2 &\leq& \displaystyle \| \mathbf{x}^k - \mathbf{y} \|^2 - 2\frac{\lambda_k}{P} (f(\mathbf{x}^k)- f(\mathbf{y})) \\ && \qquad + \displaystyle \lambda_{k}^{2} \left[ 4\sum_{\ell=1}^{P} w_\ell \left[\sum_{s=2}^{m(\ell)} C_{i_{s}^{\ell}}\left(\sum_{r=1}^{s-1} C_{i_{r}^{\ell}} \right) \right] + \left( \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} C_{i_{s}^{\ell}} \right)^2 \right] \\ &=& \displaystyle \| \mathbf{x}^{k} - \mathbf{y} \|^2 - \frac{2}{P} \lambda_{k} (f(\mathbf{x}^{k}) - f(\mathbf{y})) + C \lambda_{k}^2. \end{array}$$ \end{proof} The next two propositions aim at showing that, under some mild additional hypothesis, $\mathcal{O}_f$ and $\mathcal{V}_{\mathbf{X}}$ satisfy Conditions~\ref{cond1}-\ref{cond2}. \begin{proposition} \label{prop otim} Let $\{\mathbf{x}^k \}$ be the sequence generated by Algorithm {\em \ref{alg-proposed}} and suppose that subgradient boundedness assumption \emph{(\ref{limitacao})} holds. Then, if $\lambda_k \rightarrow 0^{+}$, the optimality operator $\mathcal{O}_f$ satisfies Condition \emph{\ref{cond1}} of Theorem \emph{\ref{conv1}}. \end{proposition} \begin{proof} Lemma \ref{lem2} ensures that for all $\mathbf{x} \in \mathbf{X} \subset \mathbf{Y}$ we have, \begin{equation*} \displaystyle \| \mathcal{O}_f (\lambda_k, \mathbf{x}^k) - \mathbf{x} \|^2 \leq \| \mathbf{x}^k - \mathbf{x} \|^2 - \frac{2}{P} \lambda_k (f(\mathbf{x}^k) - f(\mathbf{x})) + C \lambda_{k}^{2}. \end{equation*} Defining $\alpha = 2 / P$ and $\rho_k = \lambda_k C \geq 0$, equation~(\ref{cond1-eq1}) is satisfied and $\rho_k \to 0$. 
Furthermore, by the triangle inequality and the subgradient boundedness assumption (\ref{limitacao}) we have, \begin{eqnarray*} \displaystyle \| \mathcal{O}_f (\lambda_k, \mathbf{x}^k) - \mathbf{x}^k \| &=& \displaystyle \left\|\sum_{\ell=1}^{P} w_{\ell} \mathbf{x}_{\ell}^{k} - \mathbf{x}^k\right\| \\ &=& \displaystyle \left\| \mathbf{x}^k - \lambda_k \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} - \mathbf{x}^k \right\| \\ &=& \displaystyle \lambda_k \left\|\sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} \mathbf{g}_{i_{s}^{\ell}}^{k} \right\| \\ &\leq& \displaystyle \lambda_k \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} C_{i_{s}^{\ell}}, \end{eqnarray*} implying that equation~(\ref{cond1-eq2}) is satisfied with $\gamma = \sum_{\ell=1}^{P} w_\ell \sum_{s=1}^{m(\ell)} C_{i_{s}^{\ell}}$. Therefore, Condition~\ref{cond1} is satisfied. \end{proof} \begin{proposition} \emph{(\cite{helou09}, Proposition 3.4)} \label{prop viab} Let $\mathbf{x}^{k+1/2}$, given in \emph{(\ref{opt.operator.alg})}, be the first element $\mathbf{s}_{0}^{k}$ of the sequence $\{ \mathbf{s}_{i}^{k} \}$, $i = 1, \dots, t$, given as $\mathbf{s}_{i}^{k}:= \mathcal{S}_{h_i}^{\nu_i}(\mathbf{s}_{i-1}^{k})$. In this setting, consider $\mathbf{h}_{i}^{k} \in \partial h_i(\mathbf{s}_{i-1}^{k})$. Suppose that for some index $j$, the set $lev_0(h_j)$ is bounded. In addition, consider that all sequences $\left\{\mathbf{h}_{i}^{k}\right\}$ are bounded. Then, $\mathcal{V}_{\mathbf{X}}$ satisfies Condition \emph{\ref{cond2}} of Theorem \emph{\ref{conv1}}. \end{proposition} The main result of the paper is given next. \begin{corollary} \label{convSA} Let $\{ \mathbf{x}^k \}$ be the sequence generated by Algorithm \emph{\ref{alg-proposed}} and suppose that the subgradient boundedness assumption \emph{(\ref{limitacao})} holds. In addition, suppose that $lev_0(h_j)$ is bounded for some $j$ and all sequences $\{ \mathbf{h}_{i}^{k} \}$ are bounded. Then, if Conditions \emph{\ref{cond3}}-\emph{\ref{cond4}} of Theorem \emph{\ref{conv1}} hold, we have \begin{equation*} \displaystyle d_{\mathbf{X}^{\ast}}(\mathbf{x}^k) \rightarrow 0 \qquad \mbox{and} \qquad \lim_{k \rightarrow \infty} f(\mathbf{x}^k) = f^{\ast}.\end{equation*} \end{corollary} \begin{proof} Propositions \ref{prop otim} and \ref{prop viab} state that the operators $\mathcal{O}_f$ and $\mathcal{V}_{\mathbf{X}}$ satisfy Conditions \ref{cond1}-\ref{cond2}. Therefore, the result is obtained by applying Theorem \ref{conv1}. \end{proof} Recall that we discuss the reasonableness of Condition \ref{cond4} as a hypothesis for this corollary in Remark \ref{rmk1}. \section{Numerical experiments} \label{sec.5} In this section, we apply the problem formulation (\ref{1-intro}), \textbf{(i)}-\textbf{(v)} and the method given in Algorithm \ref{alg-proposed} to the reconstruction of tomographic images from few views, and we explore results obtained from simulated and real data to show that the method is competitive when compared with the classic incremental subgradient algorithm. Let us start with a brief description of the problem. The task of reconstructing tomographic images is related to the mathematical problem of finding a function $\psi:\mathbb{R}^2 \rightarrow \mathbb{R}$ from its integrals along straight lines. More specifically, we wish to find $\psi$ given the following function: \begin{equation} \label{radon_transform} \displaystyle \mathcal{R}[\psi](\theta, t) := \int_{\mathbb{R}} \psi(t(\cos \theta, \sin \theta)^T + s(- \sin \theta, \cos \theta)^T) \, ds.
\end{equation} The application $\psi \mapsto \mathcal{R}[\psi]$ is the so-called \textit{Radon transform} and, for a fixed $\theta$, $\mathcal{R}_{\theta}[\psi](t)$ is known as a \textit{projection} of $\psi$. For a detailed discussion of the physical and mathematical aspects of tomography and the definition in (\ref{radon_transform}), see, for example, \cite{natterer86, natterer2001mathematical, herman2009fundamentals}. We now provide an example to better understand the geometric meaning of the definition of the Radon transform. We can display $\psi$ as a picture if we assign a grayscale value to each value in $[0,1]$, as in Figure \ref{example_radon}-(a). Here we use an artificial image made up of a sum of indicator functions of ellipses. The bar on the right indicates the grayscale used. We also show the axes $t$, $x$, $y$ and the integration path for a given pair $(\theta, t')$, which appears as the dashed line segment. The $t$-axis directions vary according to the number of angles adopted for the reconstruction process. In general, $\theta \in [0, \pi)$ because $\mathcal{R}[\psi](\theta + \pi, -t) = \mathcal{R}[\psi](\theta,t)$. For a fixed angle $\theta$, the projection $\mathcal{R}_{\theta}[\psi](t)$ is computed for each $t' \in [-1,1]$. Figure \ref{example_radon}-(b) shows the projections obtained for three fixed angles $\theta$: $\theta_1$, $\theta_2$ and $\theta_3$. Its representation in Figure \ref{example_radon}-(c), as an image in the $\theta \times t$ coordinate system, is called a \textit{sinogram}. We also call the Radon transform at a fixed angle a \emph{view} or \emph{projection}. \begin{figure} \caption{(a) An artificial image and the integration path for a given $(\theta, t')$ used to compute a projection $\mathcal{R}_{\theta}[\psi](t)$. (b) Projections for three fixed angles. (c) Sinogram obtained from the image in (a).} \label{example_radon} \end{figure} The Radon transform models the data in a tomographic image reconstruction problem. That is, to reconstruct the function $\psi$, we must go from Figure~\ref{example_radon}-(c) to the desired image in Figure~\ref{example_radon}-(a), i.e., it would be desirable to compute the inverse $\mathcal{R}^{-1}$. However, as already mentioned, the Radon transform is a compact operator and therefore its inversion is an ill-conditioned problem. In fact, for $n=2$ and $n=3$, Radon obtained inversion formulas involving first and second order differentiation of the data \cite{natterer2001mathematical}, respectively, implying an unstable process due to increased error propagation in the presence of perturbed data (noise may occur due to the width of the x-ray beam, scatter, hardening of the beam, photon statistics, detector inaccuracies, etc.\ \cite{herman2009fundamentals}). Other difficulties arise when using analytical solutions in practice due to, for example, the limited number of views that often occurs. This is why more sophisticated optimization models are useful, and it is desirable to use an objective function and constraints that force consistency of the solution with the data and guarantee stability of the solution. \subsection{Experimental work} In what follows we provide a detailed description of the experimental setup. \begin{description} \item a) \textit{The problem}: we consider the task of reconstructing an image from few views.
We use a model based on the $\ell_1$-norm of the residual associated with the linear system $R \mathbf{x} = \mathbf{b}$, where $R$ is the $m \times n$ Radon matrix, obtained through discretization of the Radon transform in (\ref{radon_transform}), $\mathbf{x} \in \mathbb{R}^n$ is the solution that we want to find, $\mathbf{b} \approx R \mathbf{x}^{\ast} \in \mathbb{R}^m$ represents the data that we have for the reconstruction (sinogram), $\mathbf{x}^{\ast} \in \mathbb{R}_{+}^{n}$ is the original image and $m \ll n$. The choice of the $\ell_1$-norm serves to promote robustness to the error $\mathbf{b} - R \mathbf{x}^{\ast}$, which in the case of synchrotron illuminated tomography has relatively few very large components and many smaller ones. The small errors are related to the Poisson nature of the data, while the outliers happen because of systematic detection failure, either due to dust in the ray path or to, e.g., failed detector pixels. In this manner, the following optimization problem has suitable features for the use of Algorithm \ref{alg-proposed}: \begin{equation} \label{problem_rn1} \begin{array}{l} \min \, \, f(\mathbf{x}) = \left\| R \mathbf{x} - \mathbf{b} \right\|_1 \\ \mbox{s.t.} \quad h(\mathbf{x}) = TV(\mathbf{x}) - \tau \leq 0, \\ \qquad \, \mathbf{x} \in \mathbb{R}_{+}^{n}. \end{array} \end{equation} Note that the objective function satisfies $f(\mathbf{x}) = \sum_{i=1}^{m} | \langle \mathbf{r}_i, \mathbf{x} \rangle - b_i| = \sum_{i=1}^{m} f_i(\mathbf{x})$, where $\mathbf{r}_i$ represents the $i$-th row of $R$. In comparison to (\ref{1-intro}) \textbf{(i)}-\textbf{(v)}, model (\ref{problem_rn1}) suggests constant weights $w_\ell = 1 / P$ for all $\ell = 1, \dots, P$ to satisfy conditions \textbf{(iv)} and \textbf{(v)}. In our tests, we use $P = 1, \dots, 6$ and, to build the sets $S_\ell$, we ordered the indices of the data randomly and then distributed them into $P$ equally sized sets (or as close to that as possible), aiming to satisfy condition \textbf{(iii)}. We also assume that the image $\mathbf{x}^{\ast}$ to be reconstructed has large approximately constant areas, as is often the case in tomographic images. The operator $TV: \mathbb{R}^n \rightarrow \mathbb{R}_{+}$ is called the \textit{total variation} and is defined by \begin{equation*} TV(\mathbf{x}) = \sum_{i=1}^{r_2} \sum_{j=1}^{r_1} \sqrt{\left(x_{i,j} - x_{i-1,j}\right)^2 + \left( x_{i,j} - x_{i,j-1}\right)^2},\end{equation*} where $\mathbf{x} = [x_q]^T$, $q \in \{1, \dots, n \}$, $n = r_1 r_2$ and $x_{i,j} := x_{(i-1)r_1 + j}$. We have also used the boundary conditions $x_{0,j} = x_{i,0} = 0$ and $\tau = TV(\mathbf{x}^{\ast})$. \item b) \textit{Data generation}: for this simulated experiment, we have considered the reconstruction of the Shepp-Logan phantom \cite{kas88}. In Figure \ref{phantom}, we show this image using a grayscale version with a resolution of $256 \times 256$. This resolution will also be used for the reconstruction. \begin{figure} \caption{Shepp-Logan phantom with resolution $256 \times 256$.} \label{phantom} \end{figure} For the vector $\mathbf{b}$ that contains the data to be used in the reconstruction, we need an efficient routine for calculating the product $R \mathbf{x}$. We consider $24$ equally spaced angular projections, each sampled at $256$ equally spaced points.
We also consider the reconstruction of images affected by Poisson noise, i.e., we execute the algorithms using data that was generated as samples of a Poisson random variable whose parameter is the exact Radon transform of the scaled phantom: \begin{equation} \mathbf{b} \sim Poisson \left( \kappa \mathcal{R}[\psi](\theta, t) \right), \end{equation} where the scale factor $\kappa > 0$ is used to control the simulated photon count, i.e., the noise level. Figure \ref{data} shows the result obtained for $\mathbf{b} = R \mathbf{x}^{\ast}$, where $\mathbf{x}^{\ast}$ is the Shepp-Logan phantom, in both cases, i.e., with and without noise in the data. \begin{figure} \caption{Sinograms obtained from the Radon transform of the Shepp-Logan phantom. Only $24$ equally spaced angular projections are taken, each sampled at $256$ equally spaced points.} \label{data} \end{figure} \item c) \textit{Initial image}: for the initial image $\mathbf{x}^0$, we seek a uniform vector that incorporates information from the data obtained by the Radon transform of the Shepp-Logan head phantom. For that, we use an initial image that satisfies $ \sum_{i=1}^{m} \langle \mathbf{r}_i, \mathbf{x}^0 \rangle = \sum_{i=1}^{m} b_i$. Therefore, supposing $x_{j}^{0} = \zeta$ for all $j = 1, \dots, n$, we can compute $\zeta$ by \begin{equation} \label{initial_image} \displaystyle \zeta = \frac{\sum_{i=1}^{m} b_i}{\sum_{i=1}^{m} \langle \mathbf{r}_i, \mathbf{1} \rangle},\end{equation} where $\mathbf{1}$ is the $n$-vector whose components are all equal to $1$. \item d) \textit{Applying Algorithm \emph{\ref{alg-proposed}}}: the step-size sequence $\left\{ \lambda_k \right\}$ was determined by the formula \begin{equation} \label{stepsize} \displaystyle \lambda_k = (1 - \rho c_k) \frac{\lambda_0}{\alpha k^s / P + 1},\end{equation} where the sequence $c_k$ starts with $c_0 = 0$ and subsequent terms are given by \begin{equation*} \displaystyle c_k = \frac{\left\langle \mathbf{x}^{k-1/2} - \mathbf{x}^{k-1}, \mathbf{x}^k - \mathbf{x}^{k-1/2} \right\rangle}{\left\|\mathbf{x}^{k-1/2} - \mathbf{x}^{k-1} \right\| \left\| \mathbf{x}^k - \mathbf{x}^{k-1/2} \right\|}.\end{equation*} Each $c_k$ is the cosine of the angle between the directions taken by the optimality and feasibility operators in the previous iteration. Thus, the factor $(1 - \rho c_k)$ serves as an empirical way to prevent oscillations. Finally, we use $\lambda_0 = \mu \left\|R \mathbf{x}^0 - \mathbf{b} \right\|_1 / \left\| \mathbf{g}^0 \right\|^2$, where $\mu$ is the number of terms into which the sum is divided and $\mathbf{g}^0$ is a subgradient of the objective function at $\mathbf{x}^0$. The other free parameters in (\ref{stepsize}) were tuned and set to: $\rho = 0.999$, $s = 0.51$ and $\alpha = 1.0$. \hspace{0.7cm} Now we need to calculate the subgradients of the objective function and of $TV$. Since the vector \begin{equation*} \displaystyle \mathbf{sign}(\mathbf{x}) = [u_i]^T, \quad \mbox{such that} \quad u_i := \left\{ \begin{array}{ccc} 1, & \mbox{if} & x_i > 0, \\ 0, & \mbox{if} & x_i = 0, \\ -1, & & \mbox{otherwise} \end{array} \right. \end{equation*} belongs to the set $\partial \left\| \mathbf{x} \right\|_1$, Theorem 4.2.1, p. 263 in \cite{hil93} guarantees that \begin{equation*} R^T \mathbf{sign}(R \mathbf{x} - \mathbf{b}) \in \partial \left\| R \mathbf{x} - \mathbf{b} \right\|_1,\end{equation*} and this subgradient will be used in our experiments.
In particular, for each $k \geq 0$, $\ell = 1, \dots, P$ and $s = 1, \dots, m(\ell)$ we use \begin{equation} \label{subgrad_objfunc} \mathbf{g}_{i_{s}^{\ell}}^{k} = \left\{ \begin{array}{ccc} \mathbf{r}_{i_{s}^{\ell}}, & \mbox{if} & \langle \mathbf{r}_{i_{s}^{\ell}}, \mathbf{x}_{i_{s-1}^{\ell}}^{k} \rangle - b_{i_{s}^{\ell}} > 0, \\ - \mathbf{r}_{i_{s}^{\ell}}, & \mbox{if} & \langle \mathbf{r}_{i_{s}^{\ell}}, \mathbf{x}_{i_{s-1}^{\ell}}^{k} \rangle - b_{i_{s}^{\ell}} < 0, \\ 0, & & \mbox{otherwise.} \end{array} \right. \end{equation} A subgradient $\mathbf{h} = [t_{i,j}]$ for $h(\mathbf{x}) = TV(\mathbf{x}) - \tau$ can be computed by \begin{equation} \label{subgrad_tv} \begin{array}{rcl} \displaystyle t_{i,j} & = & \displaystyle \frac{2x_{i,j} - x_{i,j-1} - x_{i-1,j}}{\sqrt{(x_{i,j} - x_{i,j-1})^2 + (x_{i,j} - x_{i-1,j})^2}} \\ & & \displaystyle \qquad + \frac{x_{i,j} - x_{i,j+1}}{\sqrt{(x_{i,j+1} - x_{i,j})^2 + (x_{i,j+1} - x_{i-1,j+1})^2}} \\ & & \displaystyle \qquad \qquad + \frac{x_{i,j} - x_{i+1,j}}{\sqrt{(x_{i+1,j} - x_{i,j})^2 + (x_{i+1,j} - x_{i+1,j-1})^2}}, \end{array} \end{equation} where, if any denominator is zero, we set the corresponding term to zero. \end{description} Once we have determined the strings by setting $\Delta_1 = S_1, \dots, \Delta_P = S_P$ and the weights ($w_\ell = 1/P$ for all $\ell$), the step-size sequence $\lambda_k$ in (\ref{stepsize}), the initial image $\mathbf{x}^0$ in (\ref{initial_image}) and the subgradients $\mathbf{g}_{i_{s}^{\ell}}^{k} \in \partial f_{i_{s}^{\ell}}(\mathbf{x}_{i_{s-1}^{\ell}}^{k})$ in (\ref{subgrad_objfunc}), the optimality operator $\mathcal{O}_f$ (\ref{initial_opt_op})-(\ref{SA_isoo}) can be applied. By considering the subdifferential of $\| \mathbf{x} \|_1$, it is clear that $\partial f(\mathbf{x})$ is uniformly bounded, ensuring that assumption (\ref{limitacao}) holds. Moreover, since $\rho \in [0,1)$, $\alpha > 0$, $s \in (0, 1]$ and $c_k \in [-1,1]$, by the Cauchy-Schwarz inequality we can ensure that $\lambda_k > 0$ and that Condition \ref{cond3} of Theorem \ref{conv1} holds. Using $\mathbf{h} \in \partial h(\mathbf{x})$ defined in (\ref{subgrad_tv}), the operator $\mathcal{S}_{h}^{\nu}$ can be computed by equation (\ref{subgrad_projection}); in our tests, we use $\nu = 1$. The feasibility operator is thus given by $\displaystyle \mathcal{V}_{\mathbf{X}} = \mathcal{P}_{\mathbb{R}_{+}^{n}} \circ \mathcal{S}_{h}^{\nu}$ (see equation (\ref{viabilidade})). The projection step can be regarded as a special case of the operator $\mathcal{S}_{g}^{\nu}$ with $\nu=1$ and $g = d_{\mathbb{R}_{+}^{n}}$. It is easy to see that $\mathcal{V}_{\mathbf{X}}$ defined in this way satisfies the conditions established in Proposition \ref{prop viab}. In conclusion, since $\left\| R \mathbf{x} - \mathbf{b} \right\|_1 \geq 0$, Corollary 2.7 in \cite{helou09} implies that $\left\{d_{\mathbf{X}}(\mathbf{x}^k)\right\} \rightarrow 0$ (the sequence is bounded). Since $\partial f(\mathbf{x})$ is uniformly bounded, we have that $\left[ f(\mathcal{P}_{\mathbf{X}}(\mathbf{x}^k)) - f(\mathbf{x}^k) \right]_{+} \rightarrow 0$ and Condition \ref{cond4} of Theorem \ref{conv1} holds. Thus, Corollary \ref{convSA} can be applied, ensuring convergence of Algorithm \ref{alg-proposed}. \subsection{Image reconstruction analysis} To run the experiments, we used a computer with an Intel Core i7-4790 CPU @ 3.60 GHz x8 processor and 16 GB of memory. The operating system used was Linux and the implementation was written in C$++$.
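Before turning to the results, the following minimal Python sketch (purely illustrative; it is not the C$++$ code used in the experiments, it omits the feasibility operator $\mathcal{V}_{\mathbf{X}}$, and all names are ours) shows one string-averaged incremental subgradient step for the $\ell_1$ model in (\ref{problem_rn1}): \begin{verbatim}
import numpy as np

def string_averaging_step(x, R, b, strings, lam):
    # One optimality step: an incremental subgradient pass along each
    # string, followed by averaging with equal weights w_l = 1/P.
    ends = []
    for string in strings:          # string = ordered list of row indices
        z = x.copy()
        for i in string:
            r = R[i] @ z - b[i]     # residual of the i-th measurement
            g = np.sign(r) * R[i]   # subgradient of |<r_i, z> - b_i|
            z = z - lam * g
        ends.append(z)
    return sum(ends) / len(strings)

# Toy usage: random nonnegative data standing in for a Radon matrix.
rng = np.random.default_rng(0)
R = rng.random((12, 8)); b = R @ rng.random(8)
x = np.full(8, b.sum() / R.sum())   # uniform initial image, as in item c)
strings = np.array_split(rng.permutation(12), 3)
for k in range(200):
    x = string_averaging_step(x, R, b, strings, lam=1.0 / (k + 2))
print(np.abs(R @ x - b).sum())      # final l1 residual
\end{verbatim}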
Figure \ref{graficos} shows the decrease of the objective function with respect to computation time, in order to compare convergence speed and image quality in the performed reconstructions. Furthermore, to obtain a more meaningful analysis, we consider the graph of the total variation $TV(\mathbf{x})$ and the \textit{relative squared error}, \begin{equation*} \displaystyle RSE(\mathbf{x}) = \frac{\left\|\mathbf{x} - \mathbf{x}^{\ast} \right\|^{2}}{\left\| \mathbf{x}^{\ast} \right\|^2}.\end{equation*} Note that the $RSE$ metric requires information on the desired image. We also show graphs of $TV(\mathbf{x}^k)$ as a function of $f(\mathbf{x}^k)$. \begin{figure} \caption{Decrease of the objective function, $TV$ and $RSE$ in a noise-free condition. Comparing ISM (1 string) with algorithms that use 2-6 strings, executed in parallel, lower values are reached for the objective and $RSE$ functions. For $TV$, we get an oscillation with lower intensity (especially when we used 4-6 strings). Note that the solid horizontal lines on the $TV$ graph represent the target value $\tau = TV(\mathbf{x}^{\ast})$. The solid vertical lines show, for a fixed computation time, that both function values decrease with respect to the number of strings $P$. Figure \ref{imagens} shows the images reconstructed by the algorithms for this fixed computation time. In the bottom right figure, note that the $TV$ values appear to represent, at many fixed levels of the residual $\ell_1$-norm, a decreasing function of the number of strings.} \label{graficos} \end{figure} When we use $P = 2, \dots, 6$ (the algorithm is executed with 2-6 strings), we observe a faster decrease in the values of the objective function $f(\mathbf{x}^k)$ as the running time increases, compared to the case $P=1$. Since there is no guarantee that the algorithm produces a descent direction in each iteration, it is important to note that, in some of the tests, the intensity of the oscillation, i.e., $f(\mathbf{x}^{k+1}) - f(\mathbf{x}^k)$ for $f(\mathbf{x}^{k+1}) > f(\mathbf{x}^{k})$, decreases as the number of strings increases (note, for example, the algorithm performance with 5 and 6 strings, from $0$ to about $40$ seconds). A similar behavior can be noted for the values of $TV(\mathbf{x}^k)$. For 4-6 strings, the algorithm is able to provide images with a more appropriate $TV$ level, approaching the feasible region more quickly. Moreover, besides coming closer to satisfying the constraints, the methods with a larger number of strings yield values of $RSE(\mathbf{x}^k)$ and of the objective function that decrease with lower intensity oscillations and reach lower values within the same computation time. Interestingly, the experiments with noisy data show that the algorithm generates a sharp decrease in the intensity of oscillation precisely where the image quality seems to reach a good level. The study of conditions under which we can establish a stopping criterion for the algorithm is left for future research, perhaps taking advantage of this kind of phenomenon. The quality of the reconstruction improves significantly as the number of strings increases. Figure \ref{imagens} shows the reconstructed images obtained in the experiments. There is a clear difference in reconstruction quality between the ISM and the algorithms with 2-6 strings. For 5 and 6 strings the reconstruction is visually perfect.
\begin{figure} \caption{Reconstructed images obtained in the computation time mentioned in Figure \ref{graficos}.} \label{imagens} \end{figure} Figures \ref{ruido1}, \ref{ruido3} and \ref{ruido2} show plots similar to those in Figure~\ref{graficos}, but now under different relative noise levels, computed as $\left\| \mathbf{b} - \mathbf{b}^{\dagger} \right\| / \left\| \mathbf{b}^{\dagger} \right\|,$ where $\mathbf{b}^{\dagger}$ is the vector that contains the ideal data. \begin{figure} \caption{Test with $17.8\%$ relative noise.} \label{ruido1} \end{figure} \begin{figure} \caption{Test with $8.78\%$ relative noise.} \label{ruido3} \end{figure} \begin{figure} \caption{Test with $5.65\%$ relative noise.} \label{ruido2} \end{figure} We can note that the behavior of the algorithm is similar to the previous case. Algorithms with a larger number of strings reach results that the ISM takes much longer to attain. Furthermore, oscillations with lower intensity can be noted, especially for 5 and 6 strings. Figure \ref{imagens_ruido} shows the reconstructed images obtained by the ISM and by the method with $6$ strings according to the following rule: we set an objective function value and seek, for each method, the first iterate to fall below this threshold. Table \ref{tab1} provides the running time and total variation for each case. These data confirm the good performance of the algorithm with string averaging, in the sense that, for fixed values of the objective function, the algorithm running with $6$ strings provides images whose quality appears to be improved (or at least similar) at a lower running time when compared against the ISM. \begin{table}[htbp!] \centering \setlength{\arrayrulewidth}{2\arrayrulewidth} \setlength{\belowcaptionskip}{10pt} \begin{tabular}{|c|c|c|} \hline Method / Noise / $f(\mathbf{x})$ & Time ($s$) & $TV(\mathbf{x})$ \\ \hline \hline ISM / $17.8\%$ / $3.194\times10^4$ & $2.45\times10^2$ & $2.7\times10^5$ \\ \hline $P=6$ / $17.8\%$ / $3.191\times10^4$ & $6.0\times10^1$ & $1.82\times10^5$ \\ \hline ISM / $8.78\%$ / $6.070\times10^4$ & $7.11\times10^2$ & $6.91\times10^5$ \\ \hline $P=6$ / $8.78\%$ / $6.093\times10^4$ & $1.5\times10^2$ & $5.87\times10^5$ \\ \hline ISM / $5.65\%$ / $9.889\times10^4$ & $1.87\times10^3$ & $1.48\times10^6$ \\ \hline $P=6$ / $5.65\%$ / $9.889\times10^4$ & $2.2\times10^2$ & $1.36\times10^6$ \\ \hline \end{tabular} \caption{Running time and total variation obtained by the ISM and by the method with $6$ strings under the conditions of Poisson relative noise used in the tests, for some fixed values of the objective function.} \label{tab1} \end{table} \begin{figure} \caption{Reconstructed images obtained in the tests with Poisson noise. Items (a)-(f) exhibit: method / relative noise / $f(\mathbf{x})$. Table \ref{tab1} shows the running time and total variation obtained in each case.} \label{imagens_ruido} \end{figure} \subsection{Tests with real data} Tomographic data were obtained by synchrotron radiation illumination of eggs of fish of the species \emph{Prochilodus lineatus}, collected from the bed of the Madeira river, at the Brazilian National Synchrotron Light Laboratory (LNLS) facility. The eggs had been previously embedded in formaldehyde in order to prevent decay, but were later fixed in water within a borosilicate capillary tube for the scan. The sample was placed between the x-ray source and a photomultiplier coupled to a CCD capable of recording the images. After each radiographic measurement, the sample was rotated by a fixed amount and a new measurement was made.
A monochromator was added to the experiment to filter out low energy photons and avoid overheating of the soft eggs and the embedding water. This led to a low photon flux, which increased the required exposure time to $20$ seconds for each projection measurement. Each of these radiographic images was $2048 \times 2048$ pixels, covering an area of $0.76 \times 0.76$mm${}^2$. Given the long measurement duration of each projection, in order for the experiment to have a reasonable time span we collected only $200$ views in the $180^{\circ}$ range, leading to slice tomographic datasets (sinograms) each of dimension $2048 \times 200$ (see Figure \ref{ovas_sinograma}). \begin{figure} \caption{Sinogram obtained by synchrotron radiation illumination of fish eggs.} \label{ovas_sinograma} \end{figure} In this experiment, we use $\tau = 5 \times 10^4$, $\nu = 1.5$, and the other parameters $\rho$, $\alpha$ and $s$ were the same as in the previous experiment. Furthermore, to avoid high step-size values, we multiply the initial step-size $\lambda_0$ by $0.25$. Figure \ref{ovas1} shows the plot of $TV$ as a function of the residual $\ell_1$-norm. Better quality reconstructions are generated by the algorithm that uses 6 strings. Figure \ref{ovas_imagens} shows the images obtained by the reconstruction. Considering that the eggs were immersed in water, which has a homogeneous attenuation value, we can conclude that the image in Figure \ref{ovas_imagens}-(b) has fewer artifacts. Figure \ref{profile} shows profile lines of the reconstructed images in Figure \ref{ovas_imagens}. Note that the algorithm running with $6$ strings presents a reconstruction with less overshoot and more smoothness than the ISM. \begin{figure} \caption{Total variation as a function of the residual $\ell_1$-norm for the experiment using fish eggs. Note that the method with 6 strings provides lower values for the total variation with a lower oscillation level.} \label{ovas1} \end{figure} \begin{figure} \caption{Reconstructed images from the sinogram given in Figure \ref{ovas_sinograma}. The vertical solid red lines show where the profiles of Figure \ref{profile} were taken from.} \label{ovas_imagens} \end{figure} \begin{figure} \caption{Profile lines from the images in Figure \ref{ovas_imagens}.} \label{profile} \end{figure} \section{Final comments} \label{sec.6} We have presented a new String-Averaging Incremental Subgradients family of algorithms. The theoretical convergence analysis of the method was established and experiments were performed in order to assess the effectiveness of the algorithm. The method exhibited good performance in practice, being able to reconstruct images with superior quality when compared to classical incremental subgradient algorithms. Furthermore, algorithmic parameter selection was shown to be robust across a range of tomographic experiments. The theory discussed involves solving non-smooth constrained convex optimization problems and, in this sense, more general models can be numerically addressed by the presented method. Future work may be related to the application of the string-averaging technique in incremental subgradient algorithms with stochastic errors, such as those that appear in~\cite{sundhar2009incremental}~and~\cite{sundhar2010distributed}. \end{document}
arXiv
Applying dynamic programming to a simple two-person game of perfect information A natural number n represents the initial position in the game. When it is a player's turn he/she is allowed to I) Subtract 2 from n II) Subtract 3 from n III) Subtract 5 from n We call the player who begins the game Adam and the other player Berta. The players alternate, each applying one of the three rules to the number received from his/her opponent. If a player manages to produce the number 0 or a negative number he/she wins the game. Here is an example of a game played by Adam and Berta (for n=15) 15 is given to Adam. He decides to subtract 5 leaving 15-5 = 10 to Berta 10 is given to Berta. She decides to subtract 2 leaving 10-2=8 to Adam 8 is given to Adam. He decides to subtract 2 leaving 8-2=6 to Berta 6 is given to Berta. She decides to subtract 2 leaving 6-2=4 to Adam 4 is given to Adam. He decides to subtract 5 producing -1, a negative number. Adam wins! b) we define a one dimensional array X(1), X(2),X(3),..,X(n) i) X(j) =1 if Adam has a method to win when given the number j ii) X(j)=0 if Adam has no method that guarantees that he wins when given the number j Calculate X(1),X(2),X(3),…,X(24),X(25) What is X(8), X(13) and X(24)? Answer should be of the form boolean boolean boolean so if X(8)=0 , X(13)=1 and X(24)=1 the correct answer is 011 Thus the correct answer is one of the following 000 001 010 011 100 101 110 111 My attempt is Adam: 8-5=3 Berta: 3-3 =0 Berta wins 0 Adam=13-5=8 Berta: 8-3=5 Adam 5-5 Adam wins 1 I get really stuck with 24; so far I have 01. Is there a method for this type of problem? I have been stuck on it for ages now. Thanks in advance algorithms complexity-theory D.W.♦ jokerjoker $\begingroup$ You're supposed to express $X(i)$ in terms of $X(i-2), X(i-3), X(i-5)$, then go from $1$ to $25$, applying the rule, not just using brute force. $\endgroup$ – Karolis Juodelė Nov 16 '13 at 21:39 $\begingroup$ why X(i) in terms of X(i−2),X(i−3),X(i−5) ? $\endgroup$ – joker Nov 16 '13 at 22:01 $\begingroup$ You're right, there should be more terms in that expression. $\endgroup$ – Karolis Juodelė Nov 16 '13 at 22:10 $\begingroup$ Thanks for replying, but is there a rule which states it must be X(i-2) ? why 2? What does i represent in this case? $\endgroup$ – joker Nov 16 '13 at 22:12 $\begingroup$ What have you tried? Where did you get stuck? We expect you to make a serious effort before asking. This is a nice exercise -- but you should do it for yourself. (If you have us solve it for you, you won't learn the material for yourself.) What chapter in your textbook are you studying now? What topic are you studying in your class? Does that give you any hints on how you might approach this problem? Can you see how to solve it, if you could use exponential time? That would be a good start... $\endgroup$ – D.W.♦ Nov 17 '13 at 1:35 Since you want to calculate $X(1), X(2), X(3)$,... and not just some $X(i)$, you can start from $X(1)$. Let's denote the case where there is a winning strategy for the first player by $1$ and the other case by $0$. Clearly, the value of $X(i)$ for $i \in \{1,2,3,4,5\}$ is $1$. Let's store the values of $X(i)$ that we computed in an array. For $i>5$, if at least one of the values of $X(i-2), X(i-3), X(i-5)$ is $0$, then the value of $X(i)$ will be $1$. If none of them is $0$, then the value of $X(i)$ will be $0$. nitishchnitishch Hint: A position with $n \leq 5$ is a win for the first player (the player to go); let's call such a position P1.
If every move from a given position leads to a P1 position, then the position is a win for the second player (the player not to go); let's call such a position P2. If there is a move from a given position to a P2 position, then it's a P1 position. Use dynamic programming to determine who wins for a given $n$. More generally, you can compute the Sprague-Grundy function. That will tell you who wins in the simultaneous game when there are several positions $n_1,\ldots,n_k$, and on each turn the player to move selects a position and makes a move there; a player who can't make a move (all $n_i$ are negative or zero) loses.
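Putting the two answers together, here is a short Python sketch (illustrative; the names are mine) that fills the array bottom-up:

def solve(limit=25):
    X = [None] * (limit + 1)
    for n in range(1, limit + 1):
        if n <= 5:
            X[n] = 1  # the mover can reach 0 or a negative number in one move
        else:
            # winning exactly when some move hands the opponent a losing number
            X[n] = 1 if 0 in (X[n - 2], X[n - 3], X[n - 5]) else 0
    return X

X = solve()
print(X[8], X[13], X[24])  # prints 1 0 1

Running it, the losing positions up to 25 are 6, 7, 13, 14, 20 and 21, so X(8)=1 (Adam can play 8-2=6, handing Berta a losing number), X(13)=0 and X(24)=1.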
CommonCrawl
\begin{definition}[Definition:Harmonic Numbers/General Definition] Let $r \in \R_{>0}$. For $n \in \N_{> 0}$ the '''harmonic numbers of order $r$''' are defined as follows: :$\ds H_n^{\paren r} = \sum_{k \mathop = 1}^n \frac 1 {k^r}$ \end{definition}
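For example, with $r = 2$ and $n = 3$: :$\ds H_3^{\paren 2} = \frac 1 {1^2} + \frac 1 {2^2} + \frac 1 {3^2} = \frac {49} {36}$ The case $r = 1$ recovers the usual harmonic numbers $H_n$.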
ProofWiki
\begin{document} \title{A Smoothed P-Value Test When There is a Nuisance Parameter under the Alternative} \author{Jonathan B.~Hill\thanks{ Dept. of Economics, University of North Carolina, Chapel Hill; [email protected]; https://jbhill.web.unc.edu. \ This article benefited from expert commentary from two referees.} \\ University of North Carolina -- Chapel Hill} \date{{\normalsize \today}} \maketitle \begin{abstract} \setstretch{1}We present a new test when there is a nuisance parameter $ \lambda $ under the alternative hypothesis. The test exploits the p-value occupation time [PVOT], the measure of the subset of $\lambda $ on which a p-value test based on a test statistic $\mathcal{T}_{n}(\lambda )$ rejects the null hypothesis. Key contributions are: (\textit{i}) An asymptotic critical value upper bound for our test is the significance level $\alpha $, making inference easy. (\textit{ii}) We only require $\mathcal{T} _{n}(\lambda )$ to have a known or bootstrappable limit distribution, hence we do not require $\sqrt{n}$-Gaussian asymptotics, allowing for weak or non-identification, boundary values, heavy tails, infill asymptotics, and so on. (\textit{iii}) A test based on the transformed p-value $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$\ may be conservative and in some cases have trivial power, while the PVOT naturally controls for this by smoothing over the nuisance parameter space. Finally, (iv) the PVOT uniquely allows for bootstrap inference in the presence of nuisance parameters when some estimated parameters may not be identified. \newline \textbf{Key words and phrases}: p-value test, empirical process test, nuisance parameter, weighted average power, GARCH test, omitted nonlinearity test. \newline \textbf{AMS classifications} : 62G10, 62M99, 62F35. \end{abstract} \setstretch{1.4} \section{Introduction\label{sec:intro}} This paper develops a test for cases when a nuisance parameter $\lambda $ $ \in $ $\mathbb{R}^{k}$\ is present under the alternative hypothesis $H_{1}$, where $k$ $\geq $ $1$ is finite. Let $\mathcal{S}_{n}$ $\equiv $ $ \{z_{t}\}_{t=1}^{n}$ be the observed sample of random variables $z_{t}$ $\in $ $\mathbb{R}^{q},$ $q$ $\geq $ $1$, with sample size $n$ $\geq $ $1$ and joint distribution $P$ $\subset $ $\mathcal{P}$ from some collection of distributions $\mathcal{P}$. We want to test the hypothesis $H_{0}$ $:$ $P$ $ \in $ $\mathcal{P}_{0}$ against $H_{1}$ $:$ $P$ $\notin $ $\mathcal{P}_{0}$ for some subset $\mathcal{P}_{0}$ $\subset $ $\mathcal{P}$. Let $\mathcal{T} _{n}(\lambda )$ $\equiv $ $\mathcal{T}(\mathcal{S}_{n},\lambda )$ be a test statistic function of $\lambda $ for testing a $H_{0}$. We assume $\mathcal{T }_{n}(\lambda )$ $\geq $ $0$, and that large values are indicative of $H_{1}$ .\ We present a simple smoothed p-value test based on the Lebesgue measure of the subset of $\lambda ^{\prime }s$ on which we reject $H_{0}$ based on $ \mathcal{T}_{n}(\lambda )$, defined as the \textit{P-Value Occupation Time} [PVOT]. In order to focus ideas, we ignore cases where $\lambda $ may be a set, interval, or function, or infinite dimensional as in nonparametric estimation problems. The PVOT was originally explored in \cite{HillAguilar13} and \cite {Hill_white_2012} as a way to gain inference in the presence of a trimming tuning parameter. We extend the idea to test problems where $\lambda $ is a nuisance parameter under $H_{1}$, and provide a complete asymptotic theory for the first time. Nuisance parameters under $H_{1}$ arise in two over-lapping cases. 
First, $ \lambda $ is part of the data generating process under $H_{1}$, e.g. ARMA models with common roots \citep{AndrewsCheng2012}; tests of no GARCH effects \citep{Engle_etal87, Andrews2001}; tests for common factors \citep{AndrewsPloberger1994}; tests for a Box-Cox transformation \citep{AT_Gallant1983}; and structural change tests \citep{Andrews1993}, to name a few. A standard example is the regression of scalar $y_{t}$ $=$ $\beta _{0}^{\prime }x_{t}$ $+$ $\gamma _{0}h(\lambda ,x_{t})$ $+$ $\epsilon _{t}$ where $x_{t}$ are covariates, $h$ is a known function, and $E[\epsilon _{t}|x_{t}]$ $=$ $0$ $a.s.$ for unique $ (\beta _{0},\gamma _{0})$. If $H_{0}$ $:$ $\gamma _{0}$ $=$ $0$ is true then $\lambda $ is not identified. In this case $z_{t}$ $=$ $[x_{t}^{\prime },y_{t}]^{\prime }$ and the null distribution subset $\mathcal{P}_{0}$ contains all joint distributions of $\{x_{t},y_{t}\}_{t=1}^{n}$ such that $ E[y_{t}|x_{t}]$ $=$ $\beta _{0}^{\prime }x_{t}$ $a.s.$, and under $H_{1}$ the joint distribution $P$ depends on $\lambda $. This test class includes the Box-Cox transform, neural networks, flexible functional forms, and regime switching models. See, e.g., \citet{Gallant1981,Gallant1984}, \cite {Gallant_Golub1984}, \cite{White1989}, \cite{AndrewsPloberger1994}, \cite {Terasvirta1994}, \cite{Hansen1996}, \cite{LiLi2011}, \cite{AndrewsCheng2012} and \cite{Goracci_etal2021}. Second, $\lambda $ is used to compute an estimator, or perform a general model specification test, and therefore need not appear in the joint distribution $P$ under either hypothesis. This includes tests of omitted nonlinearity against general alternatives \citep[see][amongst many others]{White1989,Bierens1990,BierensPloberger1997,StinchWhite1998,Hill_white_2012} ; and tests of marginal effects in models with mixed frequency data where $ \lambda $ is used to reduce regressor dimensionality \citep{GhyselsHillMotegi2016}. An example is the regression $y_{t}$ $=$ $ \beta _{0}^{\prime }x_{t}$ $+$ $\epsilon _{t}$ where we want to test $H_{0}$ $:$ $E[\epsilon _{t}|x_{t}]$ $=$ $0$ $a.s.$ We again have $z_{t}$ $=$ $ [x_{t}^{\prime },y_{t}]^{\prime }$ and $\mathcal{P}_{0}$ $=$ $\{P$ $:$ $ E[y_{t}|x_{t}]$ $=$ $\beta _{0}^{\prime }x_{t}$ $a.s.\}$. This is fundamentally different from the preceding example where $E[\epsilon _{t}|x_{t}]$ $=$ $0$ $a.s.$ was \textit{assumed}. A parametric test statistic can be based on the fact that $E[\epsilon _{t}F(\lambda ^{\prime }x_{t})]$ $\neq $ $0$ \textit{if and only if} $E[\epsilon _{t}|x_{t}]$ $=$ $ 0 $ $a.s.$ is false, for all $\lambda $ in any compact set $\Lambda $ outside of a measure zero subset, provided $F$ $:$ $\mathbb{R}$ $\rightarrow $ $\mathbb{R}$ is exponential \citep{Bierens1990}, logistic \citep{White1989} , or any real analytic non-polynomial \citep{StinchWhite1998}, or multinomials of $x_{t}$ \citep{Bierens1982}. Notice that $\lambda $ need not be part of the data generating process since $E[y_{t}|x_{t}]$ $=$ $\beta _{0}^{\prime }x_{t}$ $+$ $\gamma _{0}F(\lambda ^{\prime }x_{t})$ $a.s.$ may not be true under $H_{1}$. Detailed examples involving a test of function form where weak identification is possible, and a test of no GARCH effects, are presented in Sections \ref{sec:ex_start}, \ref{ex:omitted_nl} and \ref {sec:examples}. A classic approach for handling nuisance parameters in the broad sense is to compute a p-value $p_{n}(\lambda )$ $\equiv $ $p(\mathcal{S}_{n},\lambda )$. 
One then uses $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$ for some compact subset $\Lambda $ of $\mathbb{R}^{k}$\citep[see][Chap. 3.1]{Lehmann1994}. This may lead to a conservative test, although it promotes a test with the correct asymptotic level.\footnote{ Let $\tau _{n}$ $\in $ $[0,1]$ be a test statistic, and suppose we reject a null hypothesis at nominal significance level $\alpha $ when $\tau _{n}$ $>$ $\alpha $. Recall that the asymptotic \textit{level} of the test is $\alpha $ if $\lim_{n\rightarrow \infty }P(\tau _{n}$ $>$ $\alpha |H_{0})$ $\leq $ $ \alpha $, and if $\lim_{n\rightarrow \infty }P(\tau _{n}$ $>$ $\alpha |H_{0}) $ $=$ $\alpha $ then $\alpha $ is the asymptotic size \citep[cf.][]{Lehmann1994}.} Further, $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$ may not promote a consistent test even when $\mathcal{T} _{n}(\lambda )$ and its transforms like $\sup_{\lambda \in \Lambda }\mathcal{ T}_{n}(\lambda )$ do. An example is a \cite{Bierens1990}-type test which is known to be consistent $\forall \lambda $ $\in $ $\Lambda /S$ where $S$ has measure zero. This means $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$ $ \overset{p}{\rightarrow }$ $(0,1)$ under $H_{1}$ is possible despite $ p_{n}(\lambda )$ $\overset{p}{\rightarrow }$ $0$ $\forall \lambda $ $\in $ $ \Lambda /S$. We find that the test where $H_{0}$ is rejected at nominal level $ \alpha $ when $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$ $<$ $\alpha $ leads to profound size distortions and trivial power for a test of no GARCH effects, and is relatively conservative as a test of omitted nonlinearity. In the case where $\lambda $ is identified under either hypothesis, \cite {Silvapule1996} proposes an improvement with better size and power properties. The challenge of constructing valid tests in the presence of nuisance parameters under $H_{1}$ dates at least to \cite{ChernoffZacks1964} and \citet{Davies77,Davies87}. Recent contributions include \cite{Nyblom1989}, \cite{Andrews1993}, \cite{Dufour1997}, \citet{AndrewsPloberger1994,AndrewsPloberger1995}, \cite{Hansen1996}, and \citet{AndrewsCheng2012,AndrewsCheng2013,AndrewsCheng2014} to name but a few. Nuisance parameters that are not identified under $H_{1}$ are either chosen at random, thereby sacrificing power \citep[e.g.][]{White1989}; or $ \mathcal{T}_{n}(\lambda )$ is smoothed over $\Lambda $, resulting in a non-standard limit distribution and in general the necessity of a bootstrap step \citep[e.g.][]{ChernoffZacks1964,Davies77,AndrewsPloberger1994}. Examples are the average $\int_{\Lambda }\mathcal{T}_{n}(\lambda )\mu (d\lambda )$ and supremum $\sup_{\lambda \in \Lambda }\mathcal{T} _{n}(\lambda )$, where $\mu (\lambda )$ is an absolutely continuous probability measure \citep{ChernoffZacks1964,Davies77,AndrewsPloberger1994}. The non-standard limit distribution, moreover, cannot be bootstrapped using conventional methods when some parameters may be weakly or non-identified. See \cite{Hill2021_weak}, and see below for discussion. Further, even if bootstrapping is valid, it adds significant computation time due to the many bootstrap samples that must be repeatedly generated. Let $p_{n}(\lambda )$ be a p-value or asymptotic p-value based on $\mathcal{T }_{n}(\lambda )$: $p_{n}(\lambda )$ may be based on a known limit distribution, or if the limit distribution is non-standard then a bootstrap or simulation method is assumed available for computing an asymptotically valid approximation to $p_{n}(\lambda )$.
Assume that $p_{n}(\lambda )$ leads to an asymptotically correctly sized test, uniformly on compact $ \Lambda $ $\subset $ $\mathbb{R}^{k}$: \begin{equation} \sup_{\lambda \in \Lambda }\left\vert P\left( p_{n}(\lambda )<\alpha |H_{0}\right) -\alpha \right\vert \rightarrow 0\text{ for any }\alpha \in \left( 0,1\right) . \label{pn} \end{equation} If $p_{n}(\lambda )$ is uniformly distributed then $\alpha $ is the size of the test, else by (\ref{pn}) $\alpha $ is the asymptotic size. The terms ``asymptotic p-value'' and ``asymptotic size'' are correct when convergence in (\ref{pn}) is uniform over the null distribution subset $\mathcal{P}_{0}$. The latter is not possible here because for generality we do not specify a model or $H_{0}$. If $p_{n}(\lambda )$ is asymptotically free of any other nuisance parameters then uniform convergence over $H_{0}$ is immediate given that (\ref{pn}) is uniform over $\Lambda $ \citep[e.g.][p. 417]{Hansen1996}. Since this problem is common, we will not focus on it, and will simply call $p_{n}(\lambda )$ a ``p-value'' for brevity. The p-value [PV] test with nominal level $\alpha $ for a chosen value of $ \lambda $, based on (\ref{pn}), is: \begin{equation} \text{\textbf{PV Test:} reject }H_{0}\text{ if }p_{n}(\lambda )<\alpha \text{ , otherwise fail to reject }H_{0}. \label{T_test} \end{equation} Now assume $\Lambda $ has unit Lebesgue measure $\int_{\Lambda }d\lambda $ $ = $ $1$, and compute the \textit{p-value occupation time} [PVOT] of $ p_{n}(\lambda )$ below the nominal level $\alpha $ $\in $ $(0,1)$: \begin{equation} \text{\textbf{PVOT:} }\mathcal{P}_{n}^{\ast }(\alpha )\equiv \int_{\Lambda }I\left( p_{n}(\lambda )<\alpha \right) d\lambda , \label{PVOT} \end{equation} where $I(\cdot )$ is the indicator function. If $\int_{\Lambda }d\lambda $ $ \neq $ $1$ then we use $\mathcal{P}_{n}^{\ast }(\alpha )$ $\equiv $ $ \int_{\Lambda }I(p_{n}(\lambda )$ $<$ $\alpha )d\lambda /\int_{\Lambda }d\lambda $. $\mathcal{P}_{n}^{\ast }(\alpha )$ is just the Lebesgue measure of the subset of $\lambda ^{\prime }s$ on which we reject $H_{0}$. Thus, a large occupation time in the rejection region asymptotically indicates $ H_{0} $ is false. As long as $\{\mathcal{T}_{n}(\lambda )$ $:$ $\lambda $ $\in $ $\Lambda \}$ converges weakly under $H_{0}$ to a stochastic process $\{\mathcal{T}(\lambda )$ $:$ $\lambda $ $\in $ $\Lambda \}$ on a space endowed with, e.g., the uniform metric (sup-norm), and $\mathcal{T}(\lambda )$ has a continuous distribution for all $\lambda $ outside a set of measure zero, then asymptotically $\mathcal{P}_{n}^{\ast }(\alpha )$ has a mean $\alpha $ and the probability that $\mathcal{P}_{n}^{\ast }(\alpha )$ $>$ $\alpha $ is not greater than $\alpha $. Evidence against $H_{0}$ is therefore simply $ \mathcal{P}_{n}^{\ast }(\alpha )$ $>$ $\alpha $. Further, if asymptotically with probability approaching one the PV test (\ref{T_test}) rejects $H_{0}$ for each $\lambda $ in a subset of $\Lambda $ that has Lebesgue measure greater than $\alpha $, then $\mathcal{P}_{n}^{\ast }(\alpha )$ $>$ $\alpha $ asymptotically with probability one. The PVOT test at the chosen level $ \alpha $ is then: \begin{equation} \text{\textbf{PVOT Test}: reject }H_{0}\text{ if }\mathcal{P}_{n}^{\ast }(\alpha )>\alpha \text{, otherwise fail to reject }H_{0}. \label{PVOT_test} \end{equation} These results are formally derived in Section \ref{sec:asym_theory}.
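In practice $\mathcal{P}_{n}^{\ast }(\alpha )$ is computed by discretizing $\Lambda $. The following minimal sketch (ours, purely for illustration; the stand-in p-values below ignore the dependence across $\lambda $ that an actual p-value process exhibits) shows the grid approximation of (\ref{PVOT}) and the decision rule (\ref{PVOT_test}): \begin{verbatim}
import numpy as np

def pvot_decision(p_values, alpha=0.05):
    # Riemann (grid) approximation of the PVOT over an equally spaced
    # grid on Lambda, followed by: reject H0 when PVOT > alpha.
    occupation = np.mean(np.asarray(p_values) < alpha)
    return occupation, occupation > alpha

# Stand-in p-values on a grid of 100 points over Lambda: under H0 a
# pointwise p-value is (asymptotically) uniform on [0,1]; this toy draw
# ignores dependence across lambda.
rng = np.random.default_rng(1)
p_grid = rng.uniform(size=100)
print(pvot_decision(p_grid))
\end{verbatim} Any procedure delivering $p_{n}(\lambda )$ on the grid, whether from a known limit law, simulation, or a bootstrap, can be plugged into the same rule.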
Thus, an asymptotic level $\alpha $ critical value is simply $\alpha $, a useful simplification over transforms with non-standard asymptotic distributions, like $\int_{\Lambda }\mathcal{T}_{n}(\lambda )\mu (d\lambda )$ and $ \sup_{\lambda \in \Lambda }\mathcal{T}_{n}(\lambda )$. A simulation study in Section \ref{sec:sim} suggests the critical value $\alpha $ leads to an asymptotically correctly sized test for tests of omitted nonlinearity and GARCH effects, and strong power in each case. We may therefore expect that similar tests have this property. The PVOT yields several useful innovations. First, when $\mathcal{T} _{n}(\lambda )$ is derived from a regression model in which some parameters may be weakly or non-identified, there is no known valid standard bootstrap or simulation approach for approximating the limit distribution of smoothed test statistics in the class considered in \cite{AndrewsPloberger1994}, including $\int_{\Lambda }\mathcal{T}_{n}(\lambda )\mu (d\lambda )$ and $ \sup_{\lambda \in \Lambda }\mathcal{T}_{n}(\lambda )$. This is because a valid bootstrap, for example, must approximate the covariance structure of the limit process $\{\mathcal{T}(\lambda )$ $:$ $\lambda $ $\in $ $\Lambda \} $ which generally requires consistent estimates of model parameters. If some parameters are weakly or non-identified, then they cannot be consistently estimated \citep[see, e.g.,][]{Gallant1977,AndrewsCheng2012}. \cite{Hill2021_weak} presents an asymptotically valid bootstrap method for the non-smoothed $\mathcal{T}_{n}(\lambda )$ for any degree of (non)identification. The resulting bootstrapped p-value leads to a valid smoothed p-value test, even though smoothed test statistics \textit{cannot} be consistently bootstrapped. See Example \ref{sec:ex_start} in Section \ref {sec:ex_start} below. Second, since the PVOT critical value upper bound is simply $\alpha $ under any asymptotic theory for $\mathcal{T}_{n}(\lambda )$, we only require $ \mathcal{T}_{n}(\lambda )$ to have a known or bootstrappable limit distribution. Thus, $\sqrt{n}$-Gaussian asymptotics is not required as is nearly always assumed \citep[e.g.][]{AndrewsPloberger1994,Hansen1996,AndrewsCheng2012}. Non-standard asymptotics are therefore allowed, including inference concerning parameters that lie on the parameter space boundary \citep{Andrews2001,CavaliereNielsenRahbek2017}, test statistics when a parameter is weakly identified, GARCH tests \citep[e.g.][]{Andrews2001}; inference under heavy tails; and non-$\sqrt{n}$ asymptotics are covered, as in heavy tail robust tests \citep[e.g.][]{Hill_white_2012,HillAguilar13,AguilarHill15}, or when infill asymptotics or nonparametric estimators are involved \citep[e.g.][]{BandiPhillips2007}, or in high dimensional settings when a regularized estimator is used. Third, the local power properties of specific PVOT tests appear to be on par with the power optimal exponential class developed in \cite {AndrewsPloberger1994}. We derive general results, and apply them to a test of omitted nonlinearity. We show in a numerical experiment that the PVOT test achieves local power on par with the highest achievable (weighted average) power. In view of the general result, the local power merits of the PVOT test appear to extend to any consistent test on $\Lambda $, but any such claim requires a specific test statistic and numerical exercise to verify. 
Fourth, although we focus on the PVOT test, in Appendix B of the supplemental material \cite{Hill_supp_ot} [SM] we show the PVOT naturally arises as a measure of test optimality when $\lambda $ is part of the true data generating process under $H_{1}$. This requires Andrews and Ploberger's \citeyear{AndrewsPloberger1994} notion of weighted average local power of a test based on $\mathcal{T}_{n}(\lambda )$, where the average is computed over $\lambda $ and a drift parameter \citep[cf.][]{Wald1943}. In this environment, the PVOT is just a point estimate of the weighted average probability of PV test rejection evaluated under $H_{0}$. Since that probability is asymptotically no larger than $\alpha $ when the null is true, the PVOT test rejects $H_{0}$ when the PVOT is larger than $\alpha $. Thus, the PVOT is a natural way to transform a test statistic. Fifth, when $\mathcal{T}_{n}(\lambda )$ has a known distribution limit (e.g. standard normal, chi-squared) then performing the PVOT test is significantly computationally faster than bootstrapping a smoothed test statistic (e.g. $\sup_{\lambda \in \Lambda }\mathcal{T}_{n}(\lambda )$). Indeed, if $\mathcal{M}$ bootstrap samples are required then the PVOT test is trivially $\mathcal{M}$-times faster. The relevant literature also includes \cite{King_Shiv93} whose re-parameterization leads to a conventional, but not general, test. \cite {Hansen1996} presents a wild bootstrap for computing the p-value for a smoothed LM statistic when $\lambda $ is part of the data generating process, extending ideas in \cite{Wu1986} and \cite{Liu1988}. The method implicitly requires strong identification of regression model parameters. Our simulation study for tests of functional form and GARCH effects shows the PVOT test performs on par with, or is better than, the average and supremum test. Moreover, when model parameters are weakly or non-identified, a PVOT test of functional form substantially dominates $p_{n}(\lambda ^{\ast })$ with randomly selected $\lambda ^{\ast }$, $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$, and bootstrapped $\sup_{\lambda \in \Lambda }\mathcal{T} _{n}(\lambda )$ and $\int_{\Lambda }\mathcal{T}_{n}(\lambda )\mu (d\lambda )$ . Indeed, the latter two fail to be valid for the reasons explained above. \cite{Bierens1990} creatively compares supremum and pointwise statistics to achieve standard asymptotics for a functional form test, while \cite{BierensPloberger1997} compute a critical value upper bound for their integrated conditional moment statistic. We show that the latter upper bound leads to an under-sized test and potentially low power in a local power numerical exercise and a simulation study presented below. The remainder of the paper is as follows. In Section \ref{sec:ex_start} we introduce examples showcasing uses of the PVOT test: tests of omitted nonlinearity (with possibly weakly identified parameters), and GARCH effects. We then present the formal list of assumptions and the main results for the PVOT test in Section \ref{sec:asym_theory}. Local power is analyzed in general terms in Section \ref{app:locpow}, and applied to a test of functional form with a numerical exercise. Section \ref{sec:examples} continues the Section \ref{sec:ex_start} examples with complete theory verifying the main assumptions. We perform a simulation study in Section \ref{sec:sim} involving tests of omitted nonlinearity and GARCH effects. Concluding remarks are left for Section \ref{sec:conclusion}.
\subsection{Examples\label{sec:ex_start}} We discuss examples showcasing the use of the PVOT test. \begin{example}[\textbf{Test of Functional Form with Possible Weak Identification}] \label{ex_weak}\normalfont This example showcases a unique advantage of the PVOT test: it allows for robust bootstrap inference when weak identification is possible and a nuisance parameter is present, \textit{and} it promotes a consistent test. Conversely, test statistic functionals like the supremum $ \sup_{\lambda \in \Lambda }\mathcal{T}_{n}(\lambda )$ and average $ \int_{\Lambda }\mathcal{T}_{n}(\lambda )\mu (d\lambda )$ cannot be validly bootstrapped asymptotically when weak identification is possible \citep[see][]{Hill2021_weak}, and $\sup_{\lambda \in \Lambda }p_{n}(\lambda ) $ with a weak identification robust $p_{n}(\lambda )$ need not be consistent. The following is based on ideas developed in \cite{Hill2021_weak} ; consult that source for more details and references. We work with the following model: \begin{equation} y_{t}=\zeta _{0}^{\prime }x_{t}+\beta _{0}^{\prime }g(x_{t},\pi _{0})+\epsilon _{t}=f(\theta _{0},x_{t})+\epsilon _{t}\text{ where }x_{t}\in \mathbb{R}^{k_{x}}\text{ and }\theta _{0}=\left[ \zeta _{0}^{\prime },\beta _{0}^{\prime },\pi _{0}^{\prime }\right] ^{\prime }\in \Theta \text{,} \end{equation} where $g$ is a known function, and $E[\epsilon _{t}]$ $=$ $0$ and $ E[\epsilon _{t}^{2}]$ $\in $ $\left( 0,\infty \right) $ for unique $\theta _{0}$ $\in $ $\Theta $ and compact $\Theta $. We want to test $H_{0}$ $:$ $ E[y_{t}|x_{t}]$ $=$ $f(\theta _{0},x_{t})$ $a.s$. against $ H_{1}:\sup_{\theta \in \Theta }P(E[y_{t}|x_{t}]$ $=$ $f(\theta ,x_{t}))$ $<$ $1$. Let $\{x_{t},y_{t}\}_{t=1}^{n}$ have joint distribution $P$ $\in $ $ \mathcal{P}$, a collection of joint distributions, and let $\mathcal{P}_{0}$ $\subset $ $\mathcal{P}$ be all distributions consistent with $ E[y_{t}|x_{t}] $ $=$ $f(\theta _{0},x_{t})$ $a.s.$ The null coincides with $P $ $\in $ $\mathcal{P}_{0}$. Let $\Psi $ be a $1$-$1$ bounded mapping from $\mathbb{R}^{k}$ to $\mathbb{R} ^{k}$, let $\mathcal{F}$ $:$ $\mathbb{R}$ $\rightarrow $ $\mathbb{R}$ be analytic and non-polynomial (e.g. exponential or logistic), and assume $ \lambda $ $\in $ $\Lambda $, a compact subset of $\mathbb{R}^{k}$. Mis-specification \linebreak $\sup_{\theta \in \Theta }P(E[y_{t}|x_{t}]$ $=$ $f(\theta ,x_{t}))$ $<$ $1$ implies $E[e_{t}\mathcal{F}(\lambda ^{\prime }\Psi (x_{t}))]$ $\neq $ $0$ $\forall \lambda $ $\in $ $\Lambda /\mathcal{S}$ , where $\mathcal{S}$ has Lebesgue measure zero. See \cite{White1989}, \cite {Bierens1990} and \cite{StinchWhite1998} for seminal results for iid data, and see \cite{deJong1996} and \cite{Hill2008} for dependent data. Thus a LM-type statistic based on a sample version of $E[e_{t}\mathcal{F}(\lambda ^{\prime }\Psi (x_{t}))]$ can be used to test $H_{0}.$ If $\beta _{0}$ $=$ $0$ then $\pi _{0}$ is not identified. Or if there is local drift $\beta _{0}$ $=$ $\beta _{n}$ $\rightarrow $ $0$ with $\sqrt{n} ||\beta _{n}||$ $\rightarrow $ $[0,\infty )$, then estimators of $\pi _{0}$ have random probability limits, and estimators for $\theta _{0}$ have nonstandard limit distributions \citep{AndrewsCheng2012}. In either case we say $\pi _{0}$ is \emph{weakly identified}. 
The literature on consistent specification testing generally assumes strong identification \citep[e.g.][]{Bierens1982,White1989,Bierens1990,HongWhite1995,deJong1996,BierensPloberger1997,Hill2008}, while the weak identification literature presumes model correctness $E[y_{t}|x_{t}]=f(\theta_{0},x_{t})$ $a.s.$ \citep[e.g.][]{AndrewsCheng2012,AndrewsCheng2013,AndrewsCheng2014}. \cite{Hill2021_weak} allows for both weak identification \textit{and} model mis-specification. There, a modified Conditional Moment [CM] test statistic and bootstrap procedure are developed, both accounting for possible weak identification. Let $\hat{\theta}_{n}$ be the nonlinear least squares estimator of $\theta_{0}$. The CM statistic is:
\begin{equation*}
\mathcal{T}_{n}(\lambda)\equiv\left(\frac{1}{\hat{v}_{n}(\hat{\theta}_{n},\lambda)}\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\epsilon_{t}(\hat{\theta}_{n})F\left(\lambda^{\prime}\Psi(x_{t})\right)\right)^{2}
\end{equation*}
where $\hat{v}_{n}(\theta,\lambda)$ is a scale estimator.

Under strong identification, $\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\}$ converges weakly to a chi-squared process. Under weak identification the limit process is non-standard with nuisance parameter $\lambda$, and other nuisance parameters $h$ containing distribution parameters (e.g. $\pi_{0}$ and $E[\epsilon_{t}^{2}]$). Let $\{\mathcal{T}(\lambda):\lambda\in\Lambda\}$ denote either limit process.

Test statistic transforms like $\sup_{\lambda\in\Lambda}\mathcal{T}_{n}(\lambda)$ and $\int_{\Lambda}\mathcal{T}_{n}(\lambda)\mu(d\lambda)$ cannot be consistently bootstrapped or simulated if weak identification is possible. The reason is that a consistent estimate of the covariance kernel for $\{\mathcal{T}(\lambda):\lambda\in\Lambda\}$ is required, which depends on $\pi_{0}$. The latter cannot be consistently estimated under weak identification \citep{AndrewsCheng2012}. Invalidity of the bootstrap is easily demonstrated by simulation: see \cite{Hill2021_weak}, and see Section \ref{sec:sim_weak} below.

\citet[Sections 5 and 6]{Hill2021_weak} therefore takes a different approach by bootstrapping a p-value $p_{n}(\lambda)$ for $\mathcal{T}_{n}(\lambda)$ that is consistent for the asymptotic p-value, under any degree of (non)identification. The key steps involve computing (or bootstrapping) the asymptotic p-value under strong identification, wild bootstrapping the p-value under weak identification, and then combining the two in a way that promotes valid inference asymptotically under any degree of identification.\footnote{\cite{Hill2021_weak} uses the \textit{least favorable} and \textit{identification category selection} constructions from \cite{AndrewsCheng2012} as the basis for p-value combinations. \cite{AndrewsCheng2012} use those notions for critical value combinations under assumed model correctness and without a nuisance parameter under a specific hypothesis.} Let $\hat{p}_{n,\mathcal{M}}(\lambda)$ be the resulting combined wild bootstrapped p-value based on $\mathcal{M}$ independently drawn bootstrap samples, hence the PVOT is $\mathcal{\hat{P}}_{n,\mathcal{M}}(\alpha)\equiv\int_{\Lambda}I(\hat{p}_{n,\mathcal{M}}(\lambda)<\alpha)d\lambda$. The test rejects $H_{0}$ when $\mathcal{\hat{P}}_{n,\mathcal{M}}(\alpha)>\alpha$.
\end{example}

\begin{example}[\textbf{Test of GARCH Effects}]
\label{ex_GARCH}\normalfont We want to test the hypothesis that a random variable $y_{t}$ is not governed by a GARCH process. Consider a stationary GARCH(1,1) model \citep{Boller86, Nelson90}:
\begin{eqnarray}
&&y_{t}=\sigma_{t}\epsilon_{t}\text{ where }\epsilon_{t}\text{ is iid, }E[\epsilon_{t}]=0\text{, }E[\epsilon_{t}^{2}]=1\text{, and }E\left\vert\epsilon_{t}\right\vert^{r}<\infty\text{ for }r>4  \label{garch} \\
&&\sigma_{t}^{2}=\omega_{0}+\delta_{0}y_{t-1}^{2}+\lambda_{0}\sigma_{t-1}^{2}\text{ where }\omega_{0}>0\text{, }\delta_{0},\lambda_{0}\in\lbrack 0,1)\text{, and }E\left[\ln\left(\delta_{0}\epsilon_{t}^{2}+\lambda_{0}\right)\right]<0.  \notag
\end{eqnarray}
Under $H_{0}:\delta_{0}=0$, if the starting value is $\sigma_{0}^{2}=\tilde{\omega}\equiv\omega_{0}/(1-\lambda_{0})>0$ then $\sigma_{1}^{2}=\omega_{0}+\lambda_{0}\omega_{0}/(1-\lambda_{0})=\tilde{\omega}$, and so on, hence $\sigma_{t}^{2}=\tilde{\omega}$ $\forall t\geq 0$, which means there are no GARCH effects. In this case the $\sigma_{t-1}^{2}$ marginal effect $\lambda_{0}$ is not identified. Further, $\delta_{0},\lambda_{0}\geq 0$ must be maintained during estimation to ensure a positive conditional variance, and because this includes a boundary value, QML asymptotics are non-standard \citep{Andrews1999,Andrews2001}.

Let $\theta=[\omega,\delta,\lambda]^{\prime}$, and define the parameter subset $\pi=[\omega,\delta]^{\prime}\in\Pi\equiv[\iota_{\omega},u_{\omega}]\times[0,1-\iota_{\delta}]$ for tiny $(\iota_{\omega},\iota_{\delta})>0$ and some $u_{\omega}>0$. Express the volatility process as $\sigma_{t}^{2}(\pi,\lambda)=\omega+\delta y_{t-1}^{2}+\lambda\sigma_{t-1}^{2}(\pi,\lambda)$ for an imputed $\lambda\in\Lambda\equiv[0,1-\iota_{\lambda}]$ and tiny $\iota_{\lambda}>0$, with initial condition $\sigma_{0}^{2}(\pi,\lambda)=\omega/(1-\lambda)$. Denote the unrestricted QML estimator of $\pi_{0}$ for a given $\lambda\in\Lambda$: $\hat{\pi}_{n}(\lambda)=[\hat{\omega}_{n}(\lambda),\hat{\delta}_{n}(\lambda)]^{\prime}\equiv\arg\min_{\pi\in\Pi}1/n\sum_{t=1}^{n}\{\ln(\sigma_{t}^{2}(\pi,\lambda))+y_{t}^{2}/\sigma_{t}^{2}(\pi,\lambda)\}$. Andrews' (\citeyear{Andrews2001}) test statistic is:
\begin{equation}
\mathcal{T}_{n}(\lambda)=n\hat{\delta}_{n}^{2}(\lambda).  \label{T_no_garch}
\end{equation}
The process $\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\}$ has a well defined limit that can be easily simulated, resulting in a simulation-based p-value approximation $\hat{p}(\lambda)$. The PVOT is therefore $\int_{\Lambda}I(\hat{p}(\lambda)<\alpha)d\lambda$.
\end{example}

\section{Asymptotic Theory\label{sec:asym_theory}}

The following notation is used. $[z]$ rounds $z$ to the nearest integer. $|\cdot|$ is the $l_{1}$-matrix norm, and $||\cdot||$ is the Euclidean norm, unless otherwise noted. Assume the sample $\mathcal{S}_{n}\equiv\{z_{t}\}_{t=1}^{n}$ lies in $\mathbb{R}^{n\times q}$ for some $q\in\mathbb{N}$. We require a notion of weak convergence and an accompanying metric that can handle a range of applications.
A fundamental concern is that the test statistic and p-value mappings $\mathcal{T}:\Lambda\times\mathbb{R}^{n\times q}\rightarrow[0,\infty)$ and $p:\Lambda\times\mathbb{R}^{n\times q}\rightarrow[0,1]$ are not defined here, making measurability a challenge for their sample paths $\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\}$ and $\{p_{n}(\lambda):\lambda\in\Lambda\}$ and their transforms like $\sup_{\lambda\in\Lambda}p_{n}(\lambda)$ and $\int_{\Lambda}I(p_{n}(\lambda)<\alpha)d\lambda$. Even with $\mathcal{T}$ and $p$ in hand, measurability may need to be assumed due to iterative estimation algorithms (e.g. the GARCH test). Let $\mathcal{B}(\mathcal{A})$ be the Borel $\sigma$-field on $\mathcal{A}$. We therefore assume $\mathcal{T}(\mathcal{S}_{n},\lambda)$ and $p(\mathcal{S}_{n},\lambda)$ are $\sigma(\mathcal{S}_{n})\otimes\mathcal{B}(\Lambda)$ measurable and exist on a complete measure space.\footnote{Completeness is not trivial because $\mathcal{B}(\Lambda)$ is not complete for any $\sigma$-finite measure, and even if extended to be complete under Lebesgue measure, the product $\sigma(\mathcal{S}_{n})\otimes\mathcal{B}(\Lambda)$ need not be complete under, e.g., any $\sigma$-finite measure. Thus $\sigma(\mathcal{S}_{n})\otimes\mathcal{B}(\Lambda)$ measurability and completeness implies we operate on the completed $\sigma(\mathcal{S}_{n})\otimes\mathcal{B}(\Lambda)$ and associated product measure.} Now majorants and integrals over uncountable families of measurable functions like $\{p_{n}(\lambda):\lambda\in\Lambda\}$ are measurable, and probabilities where applicable are outer probability measures. See especially Pollard's (\citeyear{Pollard1984}, Appendix C) \textit{permissibility} criteria based on the notion of analytic sets in \cite{DellacherieMeyer1978}. Under completeness, permissibility necessarily holds (e.g. Dellacherie and Meyer \citeyear{DellacherieMeyer1978}, Section 33; cf. Pollard \citeyear{Pollard1984}, pp. 195-196). See also \citet[Section 3]{Dudley1978} for the closely related \textit{Souslin} measurability \citep[cf.][Section 16]{DellacherieMeyer1978}.

We use weak convergence on $l_{\infty}(\Lambda)$, the space of bounded functions on $\Lambda$ with sup-norm topology, in the sense of \citet{HoffJorg1991}:
\begin{equation*}
\left\{\mathcal{T}_{n}(\lambda)\right\}\Rightarrow^{\ast}\left\{\mathcal{T}(\lambda)\right\}\text{ in }l_{\infty}(\Lambda)\text{, where }\left\{\mathcal{T}_{n}(\lambda)\right\}=\left\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\right\}\text{, etc.}
\end{equation*}
If, for instance, the sample is $\mathcal{S}_{n}\equiv\{x_{t},y_{t}\}_{t=1}^{n}\in\mathbb{R}^{n\times q}$, and $\mathcal{T}_{n}(\lambda)$ is a measurable mapping $h(\mathcal{Z}(\mathcal{S}_{n},\lambda))$ of a function $\mathcal{Z}:\mathbb{R}^{n\times q}\times\Lambda\rightarrow\mathbb{R}$, then $h(\mathcal{Z}(s,\lambda))\in l_{\infty}(\Lambda)$ requires the uniform bound $\sup_{\lambda\in\Lambda}|h(\mathcal{Z}(s,\lambda))|<\infty$ for each $s\in\mathbb{R}^{n\times q}$.\footnote{If more details are available, then boundedness can be refined.
For example, if $\mathcal{T}_{n}(\lambda)=(1/\sqrt{n}\sum_{t=1}^{n}z(y_{t},\lambda))^{2}$ where $z:\mathbb{R}\times\Lambda\rightarrow\mathbb{R}$, then we need $\sup_{\lambda\in\Lambda}|z(y,\lambda)|<\infty$ for each $y$.} Sufficient conditions for weak convergence to a Gaussian process, for example, are convergence in finite dimensional distributions, and stochastic equicontinuity: $\forall\epsilon>0$ and $\eta>0$ there exists $\delta>0$ such that $\lim_{n\rightarrow\infty}P(\sup_{||\lambda-\tilde{\lambda}||\leq\delta}|\mathcal{T}_{n}(\lambda)-\mathcal{T}_{n}(\tilde{\lambda})|>\eta)<\epsilon$. Consult, e.g., \cite{Dudley1978}, \cite{GineZinn84}, and \cite{Pollard1984}.

A large variety of test statistics are known to converge weakly under regularity conditions. In many cases $\mathcal{T}_{n}(\lambda)$ is a continuous function $h(\mathcal{Z}_{n}(\lambda))$ of a sequence of sample mappings $\{\mathcal{Z}_{n}(\lambda)\}_{n\geq 1}$ such that $\sup_{x\in A}|h(x)|<\infty$ on every compact subset $A\subset\mathbb{R}$, and $\{\mathcal{Z}_{n}(\lambda)\}\Rightarrow^{\ast}\{\mathcal{Z}(\lambda)\}$, a Gaussian process. Two examples of $h$ are $h(x)=x^{2}$ for pointwise asymptotic chi-squared tests of functional form or structural change; or $h(x)=\max\{0,x\}$ for a GARCH test \citep{Andrews2001}.

A \textit{version} is a process with the same finite dimensional distributions. If $\{\mathcal{Z}(\lambda)\}$ is Gaussian, then any other Gaussian process with the same mean $E[\mathcal{Z}(\lambda)]$ and covariance kernel $E[\mathcal{Z}(\lambda_{1})\mathcal{Z}(\lambda_{2})]$ is a version of $\{\mathcal{Z}(\lambda)\}$.\footnote{Even in the Gaussian case it is not true that all versions have continuous sample paths, but if a version of $\{\mathcal{Z}(\lambda)\}$ has continuous paths then this is enough to apply the continuous mapping theorem to transforms of $\mathcal{Z}_{n}(\lambda)$ over $\Lambda$. See \citet{Dudley67, Dudley1978}.}

\begin{assumption}[weak convergence]
\label{assum:main}Let $H_{0}$ be true.\newline
$a.$ $\{\mathcal{T}_{n}(\lambda)\}\Rightarrow^{\ast}\{\mathcal{T}(\lambda)\}$, a process with a version that has \emph{almost surely} bounded uniformly continuous sample paths (with respect to the sup-norm). $\mathcal{T}(\lambda)\geq 0$ $a.s.$, $\sup_{\lambda\in\Lambda}\mathcal{T}(\lambda)<\infty$ $a.s.$, and $\mathcal{T}(\lambda)$ has an absolutely continuous distribution function $F_{0}(c)\equiv P(\mathcal{T}(\lambda)\leq c)$ that is not a function of $\lambda$.\newline
$b.$ $\sup_{\lambda\in\Lambda}|p_{n}(\lambda)-\bar{F}_{0}(\mathcal{T}_{n}(\lambda))|\overset{p}{\rightarrow}0$, where $\bar{F}_{0}(c)\equiv P(\mathcal{T}(\lambda)>c)$.
\end{assumption}

\begin{remark}
\normalfont$(a)$ is broadly applicable (see Section \ref{sec:examples}). Continuity of the distribution of $\mathcal{T}(\lambda)$ and $(b)$ ensure $p_{n}(\lambda)$ asymptotically has a uniform limit distribution under $H_{0}$. This is mild since often $\mathcal{T}_{n}(\lambda)$ is a continuous transformation of a standardized sample analogue to a population moment. In a great variety of settings a standardized sample moment has a Gaussian or stable distribution limit, or converges to a function of a Gaussian or stable law.
See \cite{GineZinn84} and \cite{Pollard1984} for weak convergence to stochastic processes, exemplified with Gaussian functional asymptotics, and see \cite{Bartkiewicz10} for weak convergence to a Stable process for a (possibly dependent) heavy tailed process.
\end{remark}

\begin{remark}
\normalfont$(b)$ is required when $p_{n}(\lambda)$ is not computed as the asymptotic p-value $\bar{F}_{0}(\mathcal{T}_{n}(\lambda))$, for example when a simulation or bootstrap method is used because $\bar{F}_{0}$ is unknown or a better small sample approximation is desired. Thus, in order to obtain lower level conditions we need to know how $p_{n}(\lambda)$ was computed. In Section \ref{sec:func_form_weak}, for example, we use Hill's (\citeyear{Hill2021_weak}) weak identification robust bootstrap method for p-value computation; and in Section \ref{sec:garch} we use Andrews' (\citeyear{Andrews2001}) simulation method for p-value computation for a GARCH test.
\end{remark}

All proofs are presented in Appendix \ref{sec:proofs}.

\begin{theorem}
\label{th:main}Let Assumption \ref{assum:main} hold.\newline
$a.$ In general $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)\leq\alpha$.\newline
$b.$ The asymptotic size is exactly $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)=\alpha$ when $\mathcal{T}(\lambda)=\mathcal{T}(\lambda^{\ast})$ $a.s.$ $\forall\lambda\in\Lambda$ and some $\lambda^{\ast}\in\Lambda$.\newline
$c.$ $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)>0$ under the following condition: $\{\bar{F}_{0}(\mathcal{T}(\lambda))\}$ is weakly dependent in the sense that $P(\bar{F}_{0}(\mathcal{T}(\lambda))<\alpha,\bar{F}_{0}(\mathcal{T}(\tilde{\lambda}))<\alpha)>\alpha^{2}$ for each couplet $\{\lambda,\tilde{\lambda}\}$ on a subset of $\Lambda\times\Lambda$ with positive measure.
\end{theorem}

\begin{remark}
\normalfont Under $H_{0}$ the pointwise PV test rejects $H_{0}$ asymptotically with probability $\alpha$. The above theorem proves this implies that asymptotically no more than an $\alpha$-fraction of all $\lambda$'s leads to a rejection.
\end{remark}

\begin{remark}
\normalfont In general the asymptotic \emph{level} of the test is $\alpha$ when the critical value is itself $\alpha$ \citep[cf.][eq. (3.1)]{Lehmann1994}. The proof reveals that if $\mathcal{T}(\lambda)=\mathcal{T}(\lambda^{\ast})$ $a.s.$ for some $\lambda^{\ast}$ and all $\lambda$, such that they are perfectly dependent, then $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)=\alpha$ and the asymptotic size is $\alpha$. This occurs when $\lambda$ is a tuning parameter since such parameters do not appear in the limit process \citep[see][]{HillAguilar13}.
\end{remark}

Next, asymptotic global power of the PV test (\ref{T_test}) translates to global power for the PVOT test (\ref{PVOT_test}).

\begin{theorem}
\label{th:main_h1} $\ \ $\newline
$a$. $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)>0$ \emph{if and only if} there exists a subset $\tilde{\Lambda}\subset\Lambda$ with Lebesgue measure \emph{greater} than $\alpha$ ($\int_{\tilde{\Lambda}}1d\lambda>\alpha$) such that $\lim\inf_{n\rightarrow\infty}P(p_{n}(\lambda)<\alpha)>0$ for each $\lambda\in\tilde{\Lambda}$.\newline
$b$.
The PVOT test is consistent, $P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)\rightarrow 1$, if the PV test is consistent, $P(p_{n}(\lambda)<\alpha)\rightarrow 1$, on a subset of $\Lambda$ with Lebesgue measure greater than $\alpha$.
\end{theorem}

\begin{remark}
\normalfont As long as the PV test is consistent on a subset of $\Lambda$ with measure greater than $\alpha$, then the PVOT test is consistent. In view of $\int_{\Lambda}d\lambda=1$ this trivially holds when the PV test is consistent for any $\lambda$ outside a set with measure zero, including Andrews' (\citeyear{Andrews2001}) GARCH test which is consistent on a known compact $\Lambda$; the \cite{White1989}, \cite{Bierens1990} and \cite{BierensPloberger1997} tests (and others) of omitted nonlinearity; Andrews' (\citeyear{Andrews1993}) structural break test; and a test of an omitted Box-Cox transformation. See Section \ref{sec:examples}. At the risk of abusing terminology, we will say a test based on $\mathcal{T}_{n}(\lambda)$ is \emph{randomized} when $\lambda$ is drawn from a uniform distribution on $\Lambda$ independent of the data. The randomized test is consistent only if the PV test is consistent for every $\lambda$ outside a set with measure zero.\footnote{Here and elsewhere we refer to a test based on $\mathcal{T}_{n}(\lambda_{\ast})$ as a \textit{randomized test}, which is generally different from the classical definition of a randomized test \citep[cf.][]{Lehmann1994}.} The transforms $\int_{\Lambda}\mathcal{T}_{n}(\lambda)\mu(d\lambda)$ and $\sup_{\lambda\in\Lambda}\mathcal{T}_{n}(\lambda)$, however, are consistent if the PV test is consistent on a subset of $\Lambda$ with positive measure. Thus, the PVOT test ranks above the randomized test, but below average and supremum tests, in terms of required PV test asymptotic power over $\Lambda$. As we discussed in Section \ref{sec:intro}, it is difficult to find a relevant example in which this matters, outside a toy example. We give such an example below.
\end{remark}

The following shows how PV test power transfers to the PVOT test.

\begin{example}
\label{ex:consist_test}\normalfont Let $\lambda_{\ast}$ be a random draw from a uniform distribution on $\Lambda$. The parameter space is $\Lambda=[0,1]$, $\mathcal{T}_{n}(\lambda)\overset{p}{\rightarrow}\infty$ for $\lambda\in[.5,.56]$ such that the PV test is consistent on a subset with measure $\beta=.06$, and $\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda/[.5,.56]\}\Rightarrow^{\ast}\{\mathcal{T}(\lambda):\lambda\in\Lambda/[.5,.56]\}$ such that there is only trivial power. Thus, $\int_{\Lambda}\mathcal{T}_{n}(\lambda)\mu(d\lambda)$ and $\sup_{\lambda\in\Lambda}\mathcal{T}_{n}(\lambda)$ have asymptotic power of one. A uniformly randomized PV test is not consistent at any level, and at level $\alpha<.06$ has trivial power.
In the PVOT case, however, by applying arguments in the proof of Theorem \ref{th:main}, we can show $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)$ is identically
\begin{equation*}
P\left(\int_{\lambda\in\lbrack .5,.56]}d\lambda+\int_{\lambda\notin\lbrack .5,.56]}I\left(\mathcal{U}(\lambda)<\alpha\right)d\lambda>\alpha\right)=P\left(\int_{\lambda\notin\lbrack .5,.56]}I\left(\mathcal{U}(\lambda)<\alpha\right)d\lambda>\alpha-.06\right)
\end{equation*}
for some process $\{\mathcal{U}(\lambda):\lambda\in\Lambda/[.5,.56]\}$ where $\mathcal{U}(\lambda)$ is uniform on $[0,1]$. This implies the PVOT test is consistent at any level $\alpha\leq .06$ since $\int_{\lambda\notin\lbrack .5,.56]}I(\mathcal{U}(\lambda)<\alpha)d\lambda>0$ $a.s.$
\end{example}

\section{Local Power \label{app:locpow}}

A characterization of local power requires an explicit hypothesis and some information on the construction of $\mathcal{T}_{n}(\lambda)$. Assume an observed sequence $\{y_{t}\}_{t=1}^{n}$ has a parametric joint distribution $f(y;\theta_{0})$, where $\theta_{0}=[\beta_{0}^{\prime},\delta_{0}^{\prime}]^{\prime}$ and $\beta_{0}\in\mathbb{R}^{r}$, $r\geq 1$. Consider testing whether the subvector $\beta_{0}=0$, while $\delta_{0}$ may contain other distribution parameters. If some additional parameter $\lambda$ is part of $\delta_{0}$ only when $\beta_{0}\neq 0$, and therefore not identified under $H_{0}$, then we have Andrews and Ploberger's (\citeyear{AndrewsPloberger1994}) setting, but in general $\lambda$ need not be part of the true data generating process. We first treat a general environment that includes each test example mentioned in this paper. We then study a test of omitted nonlinearity, and perform a numerical experiment in order to compare local power.

\subsection{Local Power : General Case}

The sequence of local alternatives we consider is:
\begin{equation}
H_{1}^{L}:\beta_{0}=\mathcal{N}_{n}^{-1}b\text{ for some }b\in\mathbb{R}^{r},  \label{H1L}
\end{equation}
where $\{\mathcal{N}_{n}\}$ is a sequence of diagonal matrices $[\mathcal{N}_{n,i,j}]_{i,j=1}^{r}$, $\mathcal{N}_{n,i,i}\rightarrow\infty$. The test statistic is $\mathcal{T}_{n}(\lambda)\equiv h(\mathcal{Z}_{n}(\lambda))$ for a sequence of random functions $\{\mathcal{Z}_{n}(\lambda)\}$ on $\mathbb{R}^{q}$, $q\geq 1$, and a measurable function $h:\mathbb{R}^{q}\rightarrow[0,\infty)$ where $h(x)$ is monotonically increasing in $||x||$, and $h(x)\rightarrow\infty$ as $||x||\rightarrow\infty$. This covers LM and Wald statistics, and each test statistic discussed in this paper. We assume regularity conditions apply such that under $H_{1}^{L}$
\begin{equation}
\left\{\mathcal{Z}_{n}(\lambda):\lambda\in\Lambda\right\}\Rightarrow^{\ast}\left\{\mathcal{Z}(\lambda)+c(\lambda)b:\lambda\in\Lambda\right\},  \label{Zn_weak}
\end{equation}
for some matrix $c(\lambda)\in\mathbb{R}^{q\times r}$, where $\{\mathcal{Z}(\lambda)\}$ is a zero mean process on $\mathbb{R}^{q}$ with a version that has \textit{almost surely} uniformly continuous sample paths (with respect to some norm $||\cdot||$). In many cases in the literature $\{\mathcal{Z}(\lambda)\}$ is a Gaussian process with $E[\mathcal{Z}(\lambda)\mathcal{Z}(\lambda)^{\prime}]=I_{q}$.
Combine (\ref{Zn_weak}) and the continuous mapping theorem to deduce under $H_{0}$ the limiting distribution function $F_{0}(x)\equiv P(h(\mathcal{Z}(\lambda))\leq x)$ for $\mathcal{T}_{n}(\lambda)$, cf. \citet[Theorem 2.7]{Billingsley1999}. An asymptotic p-value is $p_{n}(\lambda)=\bar{F}_{0}(\mathcal{T}_{n}(\lambda))\equiv 1-F_{0}(\mathcal{T}_{n}(\lambda))$, hence $\int_{\Lambda}I(p_{n}(\lambda)<\alpha)d\lambda\overset{d}{\rightarrow}\int_{\Lambda}I(\bar{F}_{0}(h(\mathcal{Z}(\lambda)+c(\lambda)b))<\alpha)d\lambda$ under $H_{1}^{L}$. Similarly, any continuous mapping $g$ over $\Lambda$ satisfies $g(\mathcal{T}_{n}(\lambda))\overset{d}{\rightarrow}g(h(\mathcal{Z}(\lambda)+c(\lambda)b))$, including $\int_{\Lambda}\mathcal{T}_{n}(\lambda)\mu(d\lambda)$ and $\sup_{\lambda\in\Lambda}\mathcal{T}_{n}(\lambda)$. Obviously if $c(\lambda)b=0$ when $b\neq 0$ then local power is trivial at $\lambda$. Whether any of the above tests has non-trivial asymptotic local power depends on the measure of the subset of $\Lambda$ on which $\inf_{\xi^{\prime}\xi=1}||\xi^{\prime}c(\lambda)||>0$. In order to make a fair comparison across tests, we assume each is asymptotically correctly sized for a nominal level $\alpha$ test. The next result follows from the preceding properties, hence a proof is omitted.

\begin{theorem}
\label{th:local_pow}Let (\ref{H1L}), (\ref{Zn_weak}) and $b\neq 0$ hold, and define $\mathfrak{c}(\lambda)\equiv\inf_{\xi^{\prime}\xi=1}||\xi^{\prime}c(\lambda)||$. Assume the randomized statistic $\mathcal{T}_{n}(\lambda^{\ast})$ uses a draw $\lambda^{\ast}$ from a uniform distribution on $\Lambda$.\newline
$a$. Asymptotic local power is non-trivial for (i) the PVOT test when $\mathfrak{c}(\lambda)>0$ on a subset of $\Lambda$ with measure greater than $\alpha$; and (ii) the uniformly randomized, average and supremum tests when $\mathfrak{c}(\lambda)>0$ on a subset of $\Lambda$ with positive measure.\newline
$b$. Under cases (i) and (ii), asymptotic local power is monotonically increasing in $|b|$ and converges to one as $|b|\rightarrow\infty$.
\end{theorem}

\begin{remark}
\normalfont The PVOT test ranks lower than randomized, average and supremum tests because it rejects only when the PV test rejects on a subset of $\Lambda$ with measure greater than $\alpha$. Indeed, the PVOT test cannot asymptotically distinguish between PV tests that are consistent on a subset with measure less than $\alpha$ and have trivial power otherwise, or have trivial power everywhere. This cost is slight since a meaningful example in which it matters is difficult to find. The previously cited tests of omitted nonlinearity and GARCH effects all have randomized, PVOT, average and supremum versions with non-trivial local power, although we only give complete details for a test of omitted nonlinearity below.
\end{remark}

\subsection{Example : Test of Omitted Nonlinearity\label{ex:omitted_nl}}

The proposed model to be tested is
\begin{equation*}
y_{t}=f\left(x_{t},\zeta_{0}\right)+e_{t},
\end{equation*}
where $\zeta_{0}$ lies in the interior of $\mathfrak{Z}$, a compact subset of $\mathbb{R}^{q}$, $x_{t}\in\mathbb{R}^{k}$ contains a constant term and may contain lags of $y_{t}$, and $f:\mathbb{R}^{k}\times\mathfrak{Z}\rightarrow\mathbb{R}$ is a known response function.
The null is $H_{0}:E[y_{t}|x_{t}]=f(x_{t},\zeta_{0})$ $a.s.$ Assume $\{e_{t},x_{t},y_{t}\}$ are stationary for simplicity.

Let $\Psi$ be a $1$-$1$ bounded mapping from $\mathbb{R}^{k}$ to $\mathbb{R}^{k}$, let $\mathcal{F}:\mathbb{R}\rightarrow\mathbb{R}$ be analytic and non-polynomial (e.g. exponential or logistic), and assume $\lambda\in\Lambda$, a compact subset of $\mathbb{R}^{k}$. Mis-specification $\sup_{\zeta\in\mathbb{R}^{q}}P(E[y_{t}|x_{t}]=f(x_{t},\zeta))<1$ implies $E[e_{t}\mathcal{F}(\lambda^{\prime}\Psi(x_{t}))]\neq 0$ $\forall\lambda\in\Lambda/\mathcal{S}$, where $\mathcal{S}$ has Lebesgue measure zero. See \cite{White1989}, \cite{Bierens1990} and \cite{StinchWhite1998} for seminal results for iid data, and see \cite{deJong1996} and \cite{Hill2008} for dependent data.

The test statistic for testing $H_{0}:E[y_{t}|x_{t}]=f(x_{t},\zeta_{0})$ $a.s.$ is
\begin{equation}
\mathcal{T}_{n}(\lambda)=\left(\frac{1}{\hat{v}_{n}(\lambda)}\frac{1}{\sqrt{n}}\sum_{t=1}^{n}e_{t}(\hat{\zeta}_{n})\mathcal{F}\left(\lambda^{\prime}\Psi(x_{t})\right)\right)^{2}\text{ where }e_{t}(\zeta)\equiv y_{t}-f(x_{t},\zeta).  \label{Tn_CM}
\end{equation}
The estimator $\hat{\zeta}_{n}$ is assumed $\sqrt{n}$-consistent for a strongly identified $\zeta_{0}$, and $\hat{v}_{n}^{2}(\lambda)$ is a consistent estimator of $E[\{1/\sqrt{n}\sum_{t=1}^{n}e_{t}(\hat{\zeta}_{n})\mathcal{F}(\lambda^{\prime}\Psi(x_{t}))\}^{2}]$. By application of Theorem \ref{lm:locpow_suff}, below, the asymptotic p-value is $p_{n}(\lambda)\equiv 1-F_{0}\left(\mathcal{T}_{n}(\lambda)\right)\equiv\bar{F}_{0}\left(\mathcal{T}_{n}(\lambda)\right)$ where $F_{0}$ is the $\chi^{2}(1)$ distribution function.

In view of $\sqrt{n}$-asymptotics, a sequence of local-to-null alternatives is
\begin{equation}
H_{1}^{L}:\beta_{0}=b/n^{1/2}\text{ for }b\in\mathbb{R},  \label{H1_L}
\end{equation}
where $\beta_{0}$ scales an omitted (nonlinear) component of the true conditional mean. We assume for now that regularity conditions apply such that, for some sequence of positive finite non-random numbers $\{c(\lambda)\}$:
\begin{equation}
\text{under }H_{1}^{L}\text{ : }\left\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\right\}\Rightarrow^{\ast}\{\left(\mathcal{Z}(\lambda)+bc(\lambda)\right)^{2}:\lambda\in\Lambda\},  \label{TZc}
\end{equation}
where $\{\mathcal{Z}(\lambda)+c(\lambda)b\}$ is a Gaussian process with mean $\{c(\lambda)b\}$, and \emph{almost surely} uniformly continuous sample paths. See below for low level assumptions that imply (\ref{TZc}). The latter implies by Theorem \ref{th:main} that the PVOT asymptotic probability of rejection $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)$, under $H_{0}$, lies in $(0,\alpha]$.

Let $F_{J,\nu}(c)$ denote a noncentral $\chi^{2}(J)$ law with noncentrality $\nu$, hence $(\mathcal{Z}(\lambda)+c(\lambda)b)^{2}$ is distributed $F_{1,b^{2}c(\lambda)^{2}}$. Under the null $b=0$ by construction, hence $p_{n}(\lambda)\overset{d}{\rightarrow}\bar{F}_{1,0}(\mathcal{Z}(\lambda)^{2})$, which is uniformly distributed on $[0,1]$.
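For intuition, pointwise asymptotic local power at a given $\lambda$ is simply an upper tail probability of $F_{1,b^{2}c(\lambda)^{2}}$. The following minimal Python sketch (ours; for illustration it borrows the scale $c(\lambda)=\exp\{\lambda^{2}\}$ derived in the numerical experiment of Section \ref{sec:local_num}) evaluates that probability:
\begin{verbatim}
# Pointwise asymptotic local power P(F_{1, b^2 c(lam)^2} > chi2_{1,1-alpha}),
# a sketch using c(lam) = exp(lam^2) from the numerical experiment below.
import numpy as np
from scipy.stats import chi2, ncx2

alpha, lam = 0.05, 0.5
crit = chi2.ppf(1 - alpha, df=1)          # central chi-squared(1) critical value
for b in (0.0, 1.0, 2.0):
    nc = (b * np.exp(lam**2)) ** 2        # noncentrality b^2 c(lam)^2
    print(b, round(ncx2.sf(crit, 1, nc), 4))  # equals alpha at b = 0
\end{verbatim}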
Under the global alternative $\sup_{\zeta\in\mathbb{R}^{q}}P(E[y_{t}|x_{t}]=f(x_{t},\zeta))<1$, notice $\mathcal{T}_{n}(\lambda)\overset{p}{\rightarrow}\infty$ $\forall\lambda\in\Lambda/S$ implies $p_{n}(\lambda)\overset{p}{\rightarrow}0$ $\forall\lambda\in\Lambda/S$, hence $\mathcal{P}_{n}^{\ast}(\alpha)\overset{p}{\rightarrow}1$ by Theorem \ref{th:main_h1}. The latter implies the PVOT test of $E[y_{t}|x_{t}]=f(x_{t},\zeta_{0})$ $a.s.$ is consistent. The following contains the result under $H_{1}^{L}$.

\begin{theorem}
\label{th:local_pow_nl}Under (\ref{TZc}), asymptotic local power of the PVOT test is $P(\int_{\Lambda}I(\bar{F}_{1,0}(\{\mathcal{Z}(\lambda)+c(\lambda)b\}^{2})<\alpha)d\lambda>\alpha)$. Hence, under $H_{1}^{L}$ the probability the PVOT test rejects $H_{0}$ increases to unity monotonically as the drift parameter $|b|\rightarrow\infty$, for any nominal level $\alpha\in[0,1)$.
\end{theorem}

The following assumptions detail sufficient conditions leading to (\ref{TZc}). These are not the most general possible, but are fairly compact for the sake of brevity.

\begin{assumption}[nonlinear regression and functional form test]
\label{assum:locpow_suff}\ \ \newline
a. \emph{Memory and Moments}: All random variables lie on the same complete measure space. $\{y_{t},x_{t},\epsilon_{t}\}$ are stationary; $E|y_{t}|^{4+\iota}<\infty$ and $E|\epsilon_{t}|^{4+\iota}<\infty$ for tiny $\iota>0$; $E[\epsilon_{t}|x_{t}]=0$ $a.s.$ under $H_{1}^{L}$; $E[\inf_{\lambda\in\Lambda}w_{t}^{2}(\lambda)]>0$, $E[\epsilon_{t}^{2}\inf_{\lambda\in\Lambda}w_{t}^{2}(\lambda)]>0$, and $\inf_{\lambda\in\Lambda}||(\partial/\partial\lambda)E[\epsilon_{t}^{2}F(\lambda^{\prime}\Psi(x_{t}))^{2}]||>0$; $\{x_{t},\epsilon_{t}\}$ are $\beta$-mixing with mixing coefficients $\beta_{h}=O(h^{-4-\delta})$ for tiny $\delta>0$.\newline
b. \emph{Response Function}: $f:\mathbb{R}^{k}\times\mathfrak{Z}\rightarrow\mathbb{R}$; $f(\cdot,\zeta)$ is twice continuously differentiable; $(\partial/\partial\zeta)^{i}f(x,\zeta)$ are Borel measurable for each $\zeta\in\mathfrak{Z}$ and $i=0,1,2$; write $h_{t}^{(i)}(\zeta)=(\partial/\partial\zeta)^{i}f(x_{t},\zeta)$ for $i=0,1,2$: $E[\sup_{\zeta\in\mathfrak{Z}}|h_{t}^{(i)}(\zeta)|^{4+\delta}]<\infty$ for tiny $\delta>0$ and each $i$; $(\partial/\partial\zeta)f(x_{t},\zeta_{0})$ has full column rank.\newline
c. \emph{Test Weight}: $F(\cdot)$ is analytic, nonpolynomial, and $(\partial/\partial c)^{i}F(c)$ is bounded for $i=0,1,2$ uniformly on any compact subset; $\Psi$ is one-to-one and bounded.\newline
d. \emph{Variance Estimator}: $\hat{v}_{n}^{2}(\lambda)\equiv 1/n\sum_{s,t=1}^{n}\mathcal{K}((s-t)/\gamma_{n})e_{s}(\hat{\zeta}_{n})e_{t}(\hat{\zeta}_{n})\hat{w}_{n,s}(\lambda,\hat{\zeta}_{n})\hat{w}_{n,t}(\lambda,\hat{\zeta}_{n})$ with kernel $\mathcal{K}$ and bandwidth $\gamma_{n}\rightarrow\infty$ and $\gamma_{n}=o(\sqrt{n})$.
$\mathcal{K}$ is continuous at $0$ and at all but a finite number of points, $\mathcal{K}:\mathbb{R}\rightarrow[-1,1]$, $\mathcal{K}(0)=1$, $\mathcal{K}(x)=\mathcal{K}(-x)$ $\forall x\in\mathbb{R}$, $\int_{-\infty}^{\infty}|\mathcal{K}(x)|dx<\infty$; and there exists $\{\delta_{n}\}$, $\delta_{n}>0$, $\delta_{n}/\sqrt{n}\rightarrow\infty$, such that $\int_{\delta_{n}}^{\infty}\{|\mathcal{K}(x)|+|\mathcal{K}(-x)|\}dx=o(1/\sqrt{n})$.\newline
e. \emph{Plug-In}: $\zeta_{0}$ is an interior point of $\mathfrak{Z}$, and $\hat{\zeta}_{n}\equiv\argmin_{\zeta\in\mathfrak{Z}}\{1/n\sum_{t=1}^{n}(y_{t}-f(x_{t},\zeta))^{2}\}$.
\end{assumption}

\begin{remark}
\normalfont The form of the kernel variance estimator $\hat{v}_{n}^{2}(\lambda)$ follows from a standard expansion of $1/\sqrt{n}\sum_{t=1}^{n}e_{t}(\hat{\zeta}_{n})\mathcal{F}(\lambda^{\prime}\Psi(x_{t}))$ around $\zeta_{0}$ under $H_{0}$. We exploit a kernel estimator in order to prove uniform convergence of $\hat{v}_{n}^{2}(\lambda)$ without the assumption that $H_{0}$ is true, a generality that may be of separate interest. See Lemma C.1 in \citet[Appendix C]{Hill_supp_ot}.
\end{remark}

\begin{remark}
\normalfont Property (d), other than the requirement that $\mathcal{I}_{n}\equiv\int_{\delta_{n}}^{\infty}\{|\mathcal{K}(x)|+|\mathcal{K}(-x)|\}dx=o(1/\sqrt{n})$ for $\delta_{n}/\sqrt{n}\rightarrow\infty$, is similar to properties in \cite{AndrewsHAC91} and elsewhere, covering Bartlett, Parzen, Tukey-Hanning and Quadratic-Spectral kernels. We use $\mathcal{I}_{n}=o(1/\sqrt{n})$ with $\delta_{n}/\sqrt{n}\rightarrow\infty$ to prove uniform convergence $\sup_{\lambda\in\Lambda}|\hat{v}_{n}^{2}(\lambda)-v^{2}(\lambda)|\overset{p}{\rightarrow}0$. The bound $\mathcal{I}_{n}=o(1/\sqrt{n})$ is trivially satisfied for any $\delta_{n}\geq K$ and some finite $K>0$ for Bartlett, Parzen, and Tukey-Hanning kernels, while the Quadratic-Spectral kernel obtains $\mathcal{I}_{n}\leq K\int_{\delta_{n}}^{\infty}x^{-2}dx=K\delta_{n}^{-1}$, hence $\mathcal{I}_{n}=o(1/\sqrt{n})$ for any $\delta_{n}/\sqrt{n}\rightarrow\infty$.
\end{remark}

The next claim is proven in Appendix C of the SM since it follows from standard arguments.

\begin{theorem}
\label{lm:locpow_suff} $\ \ \ $\newline
$a.$ Assumption \ref{assum:locpow_suff} implies Assumption \ref{assum:main}. In particular, under $H_{0}$ we have $\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\}\Rightarrow^{\ast}\{\mathcal{Z}(\lambda)^{2}:\lambda\in\Lambda\}$ where $\{\mathcal{Z}(\lambda):\lambda\in\Lambda\}$ is a zero mean Gaussian process with a version that has \emph{almost surely} uniformly continuous sample paths, and covariance kernel
\begin{equation}
E\left[\mathcal{Z}(\lambda)\mathcal{Z}(\tilde{\lambda})\right]=\frac{E\left[\epsilon_{t}^{2}w_{t}(\lambda)w_{t}(\tilde{\lambda})\right]}{\left(E[\epsilon_{t}^{2}w_{t}^{2}(\lambda)]E[\epsilon_{t}^{2}w_{t}^{2}(\tilde{\lambda})]\right)^{1/2}}.
\label{cov_kern}
\end{equation}
$b.$ Under $H_{1}^{L}$ weak convergence (\ref{TZc}) is valid with $c(\lambda)=E[w_{t}^{2}(\lambda)]/(E[\epsilon_{t}^{2}w_{t}^{2}(\lambda)])^{1/2}>0$, where $w_{t}(\lambda)\equiv F_{t}(\lambda)-E[F_{t}(\lambda)g_{t}(\zeta_{0})^{\prime}]\times(E[g_{t}(\zeta_{0})g_{t}(\zeta_{0})^{\prime}])^{-1}g_{t}(\zeta_{0})$, with $F_{t}(\lambda)\equiv F(\lambda^{\prime}\Psi(x_{t}))$ and $g_{t}(\zeta)\equiv(\partial/\partial\zeta)f(x_{t},\zeta)$.
\end{theorem}

Theorem \ref{lm:locpow_suff}.a implies under $H_{0}$ the test statistic converges weakly, $\{\mathcal{T}_{n}(\lambda):\lambda\in\Lambda\}\Rightarrow^{\ast}\{\mathcal{Z}(\lambda)^{2}:\lambda\in\Lambda\}$, where $\{\mathcal{Z}(\lambda)\}$ is weakly dependent in the sense of Theorem \ref{th:main}: $P(\bar{F}_{0}(\mathcal{T}(\lambda))<\alpha,\bar{F}_{0}(\mathcal{T}(\tilde{\lambda}))<\alpha)>\alpha^{2}$ on a subset of $\Lambda\times\Lambda$ with positive measure. This follows instantly from Gaussianity of $\{\mathcal{Z}(\lambda)\}$ and its continuous covariance kernel (\ref{cov_kern}). This in turn implies by Theorem \ref{th:main} that the PVOT $\mathcal{P}_{n}^{\ast}(\alpha)\equiv\int_{\Lambda}I(p_{n}(\lambda)<\alpha)d\lambda$ does not have a degenerate limit distribution, which yields the following result by invoking Theorems \ref{th:main} and \ref{lm:locpow_suff}.a.

\begin{theorem}
\label{lm:omitted_nl_null}Let Assumption \ref{assum:locpow_suff} and $H_{0}$ hold. Then $\lim_{n\rightarrow\infty}P(\mathcal{P}_{n}^{\ast}(\alpha)>\alpha)\in(0,\alpha].$
\end{theorem}

\subsection{Numerical Experiment : Test of Omitted Nonlinearity\label{sec:local_num}}

Our final goal in this section is to compare asymptotic local power for tests based on the PVOT, average $\int_{\Lambda}\mathcal{T}_{n}(\lambda)\mu(d\lambda)$ with uniform measure $\mu(\lambda)$, supremum $\sup_{\lambda\in\Lambda}\mathcal{T}_{n}(\lambda)$, and Bierens and Ploberger's (\citeyear{BierensPloberger1997}) Integrated Conditional Moment [ICM] statistics. We work with a simple model $y_{t}=\zeta_{0}x_{t}+\beta_{0}\exp\{\lambda x_{t}\}+\epsilon_{t}$, where $\zeta_{0}=1$, $\beta_{0}=b/\sqrt{n}$, and $\{\epsilon_{t},x_{t}\}$ are iid $N(0,1)$ distributed. We omit a constant term entirely for simplicity. In order to abstract from the impact of sampling error on asymptotics, we assume $\zeta_{0}=1$ is known, hence the test statistic is
\begin{equation*}
\mathcal{T}_{n}(\lambda)\equiv\frac{\hat{z}_{n}^{2}(\lambda)}{\hat{v}_{n}^{2}(\lambda)}\text{ where }\hat{z}_{n}(\lambda)\equiv\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\left(y_{t}-\zeta_{0}x_{t}\right)\exp\{\lambda x_{t}\}\text{, }\hat{v}_{n}^{2}(\lambda)\equiv\frac{1}{n}\sum_{t=1}^{n}\left(y_{t}-\zeta_{0}x_{t}\right)^{2}\exp\{2\lambda x_{t}\}.
\end{equation*}
The nuisance parameter space is $\Lambda=[0,1]$. A Gaussian setting implies the main results of \cite{AndrewsPloberger1994} apply: the average $\int_{\Lambda}\mathcal{T}_{n}(\lambda)\mu(d\lambda)$ has the highest weighted average local power for alternatives close to the null.
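As a concrete illustration (ours, not part of the experiment), the following Python sketch computes $\mathcal{T}_{n}(\lambda)$ on a grid and the PVOT decision for one sample drawn under $H_{0}$, using the asymptotic $\chi^{2}(1)$ p-value:
\begin{verbatim}
# A minimal finite-sample sketch (ours) of the PVOT test for the toy model
# under H0 (beta_0 = 0), with zeta_0 = 1 treated as known.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, alpha = 500, 0.05
x = rng.normal(size=n)
y = x + rng.normal(size=n)                  # H0: y_t = zeta_0 x_t + eps_t
u = y - x                                   # residuals with zeta_0 known

lam_grid = (np.arange(1000) + 0.5) / 1000   # midpoint grid on Lambda = [0,1]
pvot = 0.0
for lam in lam_grid:
    z = np.sum(u * np.exp(lam * x)) / np.sqrt(n)
    v2 = np.mean(u**2 * np.exp(2.0 * lam * x))
    p = chi2.sf(z**2 / v2, df=1)            # asymptotic p-value
    pvot += (p < alpha) / lam_grid.size     # midpoint integration
print(round(pvot, 3), "reject H0" if pvot > alpha else "fail to reject H0")
\end{verbatim}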
In view of Gaussianity, and Theorem \ref{lm:locpow_suff}, it can be shown $\{\mathcal{T}_{n}(\lambda)\}\Rightarrow^{\ast}\{(\mathcal{Z}(\lambda)+c(\lambda)b)^{2}\}$, where $c(\lambda)=E[\exp\{2\lambda x_{t}\}]/(E[\epsilon_{t}^{2}\exp\{2\lambda x_{t}\}])^{1/2}=(E[\exp\{2\lambda x_{t}\}])^{1/2}=\exp\{\lambda^{2}\}$, and $\{\mathcal{Z}(\lambda)\}$ is a zero mean Gaussian process with \emph{almost surely} uniformly continuous sample paths, and covariance function $E[\mathcal{Z}(\lambda)\mathcal{Z}(\tilde{\lambda})]=\exp\{-.5(\lambda-\tilde{\lambda})^{2}\}$. Local asymptotic power is therefore:
\begin{eqnarray*}
&&\text{PVOT}\text{: }P\left(\int_{0}^{1}I\left(\bar{F}_{1,0}\left(\left\{\mathcal{Z}(\lambda)+b\exp\{\lambda^{2}\}\right\}^{2}\right)<\alpha\right)d\lambda>c_{\alpha}^{(pvot)}\right) \\
&&\text{randomized: }P\left(\left\{\mathcal{Z}(\lambda_{\ast})+b\exp\{\lambda_{\ast}^{2}\}\right\}^{2}>c_{\alpha}^{(rand)}\right) \\
&&\text{average: }P\left(\int_{0}^{1}\left\{\mathcal{Z}(\lambda)+b\exp\{\lambda^{2}\}\right\}^{2}d\lambda>c_{\alpha}^{(ave)}\right) \\
&&\text{supremum}\text{: }P\left(\sup_{\lambda\in\lbrack 0,1]}\left\{\mathcal{Z}(\lambda)+b\exp\{\lambda^{2}\}\right\}^{2}>c_{\alpha}^{(\sup)}\right),
\end{eqnarray*}
where $\bar{F}_{1,0}$ is the upper tail probability of a $\chi^{2}(1)$ distribution; $\lambda_{\ast}$ is a uniform random variable on $\Lambda$, independent of $\{\epsilon_{t},x_{t}\}$; and $c_{\alpha}^{(\cdot)}$ are level $\alpha$ asymptotic critical values under the null: $c_{\alpha}^{(pvot)}\equiv\alpha$, and $c_{\alpha}^{(rand)}$ is the $1-\alpha$ quantile from a $\chi^{2}(1)$ distribution. See below for approximating $\{c_{\alpha}^{(ave)},c_{\alpha}^{(\sup)}\}$.

Local power for Bierens and Ploberger's (\citeyear{BierensPloberger1997}) ICM statistic $\widehat{\mathcal{I}}_{n}\equiv\int_{0}^{1}\hat{z}_{n}^{2}(\lambda)\mu(d\lambda)$ is based on their Theorem 7 critical value upper bound $\lim_{n\rightarrow\infty}P(\widehat{\mathcal{I}}_{n}\geq u_{\alpha}\int_{0}^{1}v_{n}^{2}(\lambda)\mu(d\lambda))\leq\alpha$, where $v_{n}^{2}(\lambda)=\exp\{2\lambda^{2}\}$ satisfies $\sup_{\lambda\in\lbrack 0,1]}|\hat{v}_{n}^{2}(\lambda)-v_{n}^{2}(\lambda)|\overset{p}{\rightarrow}0$, and $\{u_{.01},u_{.05},u_{.10}\}=\{6.81,4.26,3.23\}$. We use a uniform measure $\mu(\lambda)=\lambda$ since this promotes the highest weighted average local power for alternatives near $H_{0}$ \citep{AndrewsPloberger1994,Boning_Sowell_99}. Under $H_{1}^{L}$ we have $\{\hat{z}_{n}(\lambda)\}\Rightarrow^{\ast}\{z(\lambda)+b\exp\{\lambda^{2}\}\}$ for some zero mean Gaussian process $\{z(\lambda)\}$ with \emph{almost surely} uniformly continuous sample paths, and $\int_{0}^{1}v_{n}^{2}(\lambda)d\lambda=\int_{0}^{1}\exp\{2\lambda^{2}\}d\lambda=2.3645$. This yields local asymptotic power:
\begin{equation*}
\text{ICM: }P\left(\int_{0}^{1}\left\{z(\lambda)+b\exp\{\lambda^{2}\}\right\}^{2}d\lambda>c_{\alpha}^{(icm)}\right)\text{ where }c_{\alpha}^{(icm)}\equiv 2.3645\times u_{\alpha}.
\end{equation*}
Asymptotically valid critical values can be easily computed for the present experiment by mimicking the steps below, in which case PVOT, average, supremum, and ICM tests are essentially identical.
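(A quick check of the constant above: by the $N(0,1)$ moment generating function $E[\exp\{2\lambda x_{t}\}]=\exp\{2\lambda^{2}\}$, and a midpoint-rule evaluation in Python confirms $\int_{0}^{1}\exp\{2\lambda^{2}\}d\lambda\approx 2.3645$.)
\begin{verbatim}
# Verify int_0^1 exp(2*lam^2) d(lam) = 2.3645 (to four decimals) by the
# midpoint rule; E[exp(2*lam*x)] = exp(2*lam^2) follows from the N(0,1) MGF.
import numpy as np
lam = (np.arange(10**6) + 0.5) / 10**6
print(np.mean(np.exp(2.0 * lam**2)))   # ~ 2.36445
\end{verbatim}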
We are, however, interested in how well Bierens and Ploberger's (\citeyear{BierensPloberger1997}) solution to the problem of non-standard inference compares to existing methods.

Local power is computed as follows. We draw $R$ samples $\{\epsilon_{i,t},x_{i,t}\}_{t=1}^{T}$, $i=1,...,R$, of iid random variables $(\epsilon_{i,t},x_{i,t})$ from $N(0,1)$, and draw iid $\lambda_{\ast,i}$, $i=1,...,R$, from a uniform distribution on $\Lambda$. Then $\{\mathcal{Z}_{T,i}(\lambda)\}\equiv\{1/\sqrt{T}\sum_{t=1}^{T}\epsilon_{i,t}\exp\{\lambda x_{i,t}-\lambda^{2}\}\}$ becomes a draw from the limit process $\{\mathcal{Z}(\lambda)\}$ as $T\rightarrow\infty$. We draw $R=100,000$ samples of size $T=100,000$, and compute $\mathcal{T}_{T,i}^{(PVOT)}(b)\equiv\int_{0}^{1}I(\bar{F}_{1,0}(\{\mathcal{Z}_{T,i}(\lambda)+b\exp\{\lambda^{2}\}\}^{2})<\alpha)d\lambda$, $\mathcal{T}_{T,i}^{(ave)}(b)\equiv\int_{0}^{1}\{\mathcal{Z}_{T,i}(\lambda)+b\exp\{\lambda^{2}\}\}^{2}d\lambda$, $\mathcal{T}_{T,i}^{(\sup)}(b)\equiv\sup_{\lambda\in\lbrack 0,1]}\{\mathcal{Z}_{T,i}(\lambda)+b\exp\{\lambda^{2}\}\}^{2}$ and $\mathcal{T}_{T,i}^{(rand)}(b)\equiv\{\mathcal{Z}_{T,i}(\lambda_{\ast,i})+b\exp\{\lambda_{\ast,i}^{2}\}\}^{2}$. The critical values $\{c_{\alpha}^{(ave)},c_{\alpha}^{(\sup)}\}$ are the $1-\alpha$ quantiles of $\{\mathcal{T}_{T,i}^{(ave)}(0),\mathcal{T}_{T,i}^{(\sup)}(0)\}_{i=1}^{R}$. In the ICM case $\{z_{T,i}(\lambda)\}\equiv\{1/\sqrt{T}\sum_{t=1}^{T}\epsilon_{i,t}\exp\{\lambda x_{i,t}\}\}$ becomes a draw from $\{z(\lambda)\}$ as $T\rightarrow\infty$, hence we compute $\mathcal{T}_{T,i}^{(icm)}(b)\equiv\int_{0}^{1}\{z_{T,i}(\lambda)+b\exp\{\lambda^{2}\}\}^{2}d\lambda$. Local power is $1/R\sum_{i=1}^{R}I(\mathcal{T}_{T,i}^{(\cdot)}(b)>c_{\alpha}^{(\cdot)})$. Integrals are computed by the midpoint method based on the discretization $\lambda\in\{.001,.002,...,.999,1\}$, hence there are $1000$ points ($\lambda=0$ is excluded because power is trivial in that case).

Figure E.1 in the SM contains local power plots at level $\alpha=.05$ over drift parameters $b\in[0,2]$ and $b\in[0,7]$. Notice that under the null $b=0$ each test, except ICM, achieves power of nearly exactly $.05$ (PVOT, average and supremum are $.0499$, and randomized is $.0511$), providing numerical verification that the correct critical value for the PVOT test at level $\alpha$ is simply $\alpha$. The ICM critical value upper bound leads to an under-sized test with asymptotic size $.0365$. Second, local power is virtually identical across PVOT, randomized, average and supremum tests. This is logical since the underlying PV test is consistent on any compact $\Lambda$ outside of a measure zero subset, it has non-trivial local power, and local power is asymptotic. Since the average test has the highest weighted average power aimed at alternatives near the null \citep[eq. (2.5)]{AndrewsPloberger1994}, we have evidence that PVOT test power is at the highest possible level. The randomized test has slightly lower power for deviations far from the null $b\geq 2.5$, ostensibly because for large $b$ larger values of $\lambda$ lead to a higher power test, while the randomized $\lambda$ may be small.
Finally, ICM power is lower near the null $b\in(0,1.5]$ since these alternatives are the most difficult to detect, and the test is conservative, but power is essentially identical to the remaining tests for drift $b\geq 1.5$.

\section{Examples \protect\ref{ex_weak} and \protect\ref{ex_GARCH} Continued\label{sec:examples}}

We complete the Section \ref{sec:ex_start} examples by providing relevant theory results that verify Assumption \ref{assum:main}.

\subsection{Example \protect\ref{ex_weak}: Test of Functional Form with Possible Weak Identification \label{sec:func_form_weak}}

Recall the regression model is $y_{t}=\zeta^{\prime}x_{t}+\beta^{\prime}g(x_{t},\pi)+\epsilon_{t}=f(\theta,x_{t})+\epsilon_{t}$. We want to test $H_{0}:E[y_{t}|x_{t}]=f(\theta_{0},x_{t})$ $a.s.$ for unique $\theta_{0}\in\Theta$ against $H_{1}:\sup_{\theta\in\Theta}P(E[y_{t}|x_{t}]=f(\theta,x_{t}))<1$. If $\beta_{0}=0$ then $\pi_{0}$ is not identified. If there is local drift $\beta_{0}=\beta_{n}\rightarrow 0$ with $\sqrt{n}||\beta_{n}||\rightarrow c\in[0,\infty)$, then estimators of $\pi_{0}$ have random probability limits, and estimators for $\theta_{0}$ have nonstandard limit distributions \citep{AndrewsCheng2012}. Let $\hat{\theta}_{n}$ be the nonlinear least squares estimator of $\theta_{0}$ and define
\begin{eqnarray*}
&&d_{\theta,t}(\omega,\pi)\equiv\left[g(x_{t},\pi)^{\prime},x_{t}^{\prime},\omega^{\prime}\frac{\partial}{\partial\pi}g(x_{t},\pi)\right]^{\prime}\text{ and }\mathfrak{\hat{b}}_{\theta,n}(\omega,\pi,\lambda)\equiv\frac{1}{n}\sum_{t=1}^{n}F\left(\lambda^{\prime}\Psi(x_{t})\right)d_{\theta,t}(\omega,\pi) \\
&&\widehat{\mathcal{H}}_{n}=\frac{1}{n}\sum_{t=1}^{n}d_{\theta,t}(\omega(\hat{\beta}_{n}),\hat{\pi}_{n})d_{\theta,t}(\omega(\hat{\beta}_{n}),\hat{\pi}_{n})^{\prime}\text{ where }\omega(\beta)\equiv\left\{
\begin{array}{ll}
\beta/\left\Vert\beta\right\Vert & \text{if }\beta\neq 0 \\
1_{k_{\beta}}/\left\Vert 1_{k_{\beta}}\right\Vert & \text{if }\beta=0
\end{array}\right. \\
&&\hat{v}_{n}^{2}(\hat{\theta}_{n},\lambda)\equiv\frac{1}{n}\sum_{t=1}^{n}\epsilon_{t}^{2}(\hat{\theta}_{n})\left\{F\left(\lambda^{\prime}\Psi(x_{t})\right)-\mathfrak{\hat{b}}_{\theta,n}(\omega(\hat{\beta}_{n}),\hat{\pi}_{n},\lambda)^{\prime}\widehat{\mathcal{H}}_{n}^{-1}d_{\theta,t}(\omega(\hat{\beta}_{n}),\hat{\pi}_{n})\right\}^{2}.
\end{eqnarray*}
The CM statistic is $\mathcal{T}_{n}(\lambda)\equiv\{\hat{v}_{n}^{-1}(\hat{\theta}_{n},\lambda)\sum_{t=1}^{n}\epsilon_{t}(\hat{\theta}_{n})F\left(\lambda^{\prime}\Psi(x_{t})\right)/\sqrt{n}\}^{2}$, which is similar to statistics in \cite{Bierens1990} and \cite{StinchWhite1998}. The scale $\hat{v}_{n}(\hat{\theta}_{n},\lambda)$, however, has been altered by dividing by $||\beta||$ in order to avoid a singular Hessian matrix under semi-strong identification $\beta_{0}=0$ and $\sqrt{n}||\beta_{n}||\rightarrow\infty$ \citep[cf.][Section 3.5]{AndrewsCheng2012}.
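To make the construction concrete, the following minimal Python sketch (ours; scalar case with a hypothetical $g(x,\pi)=\exp\{\pi x\}$, logistic $F$ and $\Psi(x)=\arctan(x)$) computes $d_{\theta,t}$, $\widehat{\mathcal{H}}_{n}$, the scale $\hat{v}_{n}^{2}(\theta,\lambda)$ and the CM statistic:
\begin{verbatim}
# A minimal sketch (ours) of the CM statistic with the identification-
# robust scale above; hypothetical g(x, pi) = exp(pi * x), scalar case.
import numpy as np

def cm_statistic(y, x, theta, lam):
    zeta, beta, pi = theta
    g = np.exp(pi * x)                       # hypothetical g(x, pi)
    dg = x * np.exp(pi * x)                  # (d/dpi) g(x, pi)
    eps = y - zeta * x - beta * g            # residuals eps_t(theta)
    omega = beta / abs(beta) if beta != 0 else 1.0
    w = 1.0 / (1.0 + np.exp(lam * np.arctan(x)))  # F(lam * Psi(x_t))
    d = np.column_stack([g, x, omega * dg])  # d_{theta,t}(omega, pi)
    b_hat = (w[:, None] * d).mean(axis=0)    # b_hat_{theta,n}
    H = (d.T @ d) / y.size                   # H_hat_n
    proj = w - d @ np.linalg.solve(H, b_hat)
    v2 = np.mean(eps**2 * proj**2)           # v_hat_n^2(theta, lam)
    z = np.sum(eps * w) / np.sqrt(y.size)
    return z**2 / v2                         # T_n(lam)

rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = 0.5 * x + rng.normal(size=400)
print(cm_statistic(y, x, (0.5, 0.0, 1.0), lam=0.5))
\end{verbatim}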
Technical results are derived under two overlapping identification cases: under case $\mathcal{C}(i,b)$ there is $\beta_{n}\rightarrow\beta_{0}=0$ and $\sqrt{n}\beta_{n}\rightarrow b$ where $b\in(\mathbb{R}\cup\{\pm\infty\})^{k_{\beta}}$; and under case $\mathcal{C}(ii,\omega_{0})$, $\beta_{n}\rightarrow\beta_{0}$ where $\beta_{0}\gtreqless 0$, $\sqrt{n}\left\Vert\beta_{n}\right\Vert\rightarrow\infty$, and $\beta_{n}/\left\Vert\beta_{n}\right\Vert\rightarrow\omega_{0}$ where $\left\Vert\omega_{0}\right\Vert=1$. Case $\mathcal{C}(i,b)$ contains sequences $\beta_{n}$ close to zero, and when $||b||<\infty$ then $\pi_{0}$ is either weakly identified or non-identified. Case $\mathcal{C}(ii,\omega_{0})$ contains sequences $\beta_{n}$ farther from zero, covering semi-strong ($\beta_{0}=0$ and $\sqrt{n}||\beta_{n}||\rightarrow\infty$) and strong ($\beta_{0}\neq 0$) identification for $\pi_{0}$. Cf. \cite{AndrewsCheng2012}.

Let $\hat{p}_{n,\mathcal{M}}(\lambda)$ be the weak identification robust bootstrapped p-value in \cite{Hill2021_weak} based on $\mathcal{M}$ independently drawn bootstrap samples. The PVOT is $\mathcal{\hat{P}}_{n,\mathcal{M}}(\alpha)\equiv\int_{\Lambda}I(\hat{p}_{n,\mathcal{M}}(\lambda)<\alpha)d\lambda$. The PVOT test has the correct asymptotic level and is consistent. See \citet[Theorem 6.3]{Hill2021_weak} for a proof of the following result.

\begin{theorem}
\label{th:pvot}Let $\mathcal{M}=\mathcal{M}_{n}\rightarrow\infty$ as $n\rightarrow\infty$. Under regularity conditions presented in \citet[Theorem 6.3]{Hill2021_weak}, if $H_{0}$ is true then $\lim_{n\rightarrow\infty}P(\mathcal{\hat{P}}_{n,\mathcal{M}}(\alpha)>\alpha)\leq\alpha$, and otherwise $P(\mathcal{\hat{P}}_{n,\mathcal{M}}(\alpha)>\alpha)\rightarrow 1$.
\end{theorem}

\begin{remark}
\normalfont As stated above, there does not exist a valid bootstrap method for handling \emph{test statistic} functionals like the average and supremum. The bootstrap method developed in \cite{Hill2021_weak} is only valid for computing an approximate p-value for the non-smoothed $\mathcal{T}_{n}(\lambda)$ that is consistent for the asymptotic p-value \citep[Theorem 6.2]{Hill2021_weak}. The practitioner is therefore left with smoothing such a p-value approximation $\hat{p}_{n,\mathcal{M}}(\lambda)$. The supremum $\sup_{\lambda\in\Lambda}\hat{p}_{n,\mathcal{M}}(\lambda)$, however, promotes a conservative test that is not consistent. Even though $\hat{p}_{n,\mathcal{M}}(\lambda)\overset{p}{\rightarrow}0$ $\forall\lambda\in\Lambda/S$ where $S$ has Lebesgue measure zero, as long as there exists $\lambda\in\Lambda$ at which a Type II error occurs, i.e. $\hat{p}_{n,\mathcal{M}}(\lambda)$ has a probability limit in $(0,1]$, then $\sup_{\lambda\in\Lambda}\hat{p}_{n,\mathcal{M}}(\lambda)$ also has a probability limit in $(0,1]$ and the sup-p-value test is inconsistent. Conversely, the PVOT test based on $\hat{p}_{n,\mathcal{M}}(\lambda)$ is both consistent and robust to weak identification.
\end{remark}

\subsection{Example \protect\ref{ex_GARCH}: Test of GARCH Effects \label{sec:garch}}

Recall the GARCH process $y_{t}=\sigma_{t}\epsilon_{t}$ where $\sigma_{t}^{2}=\omega_{0}+\delta_{0}y_{t-1}^{2}+\lambda_{0}\sigma_{t-1}^{2}$, $\omega_{0}>0$, and $\delta_{0},\lambda_{0}\in\lbrack 0,1)$. The unrestricted QML estimator of $\delta_{0}$ for a given $\lambda\in\Lambda$ is $\hat{\delta}_{n}(\lambda)$, and the test statistic is $\mathcal{T}_{n}(\lambda)=n\hat{\delta}_{n}^{2}(\lambda)$ \citep{Andrews2001}. We first show the limit distribution for $\mathcal{T}_{n}(\lambda)$ is that of a squared one-sided normal.

\begin{theorem}
\label{th:garch}Let $\{y_{t}\}$ be generated by process (\ref{garch}). Assumption \ref{assum:main} applies where $\mathcal{T}(\lambda)=(\max\{0,\mathcal{Z}(\lambda)\})^{2}$, and $\{\mathcal{Z}(\lambda)\}$ is a zero mean Gaussian process with a version that has \emph{almost surely} uniformly continuous sample paths, and covariance function $E[\mathcal{Z}(\lambda_{1})\mathcal{Z}(\lambda_{2})]=(1-\lambda_{1}^{2})(1-\lambda_{2}^{2})/(1-\lambda_{1}\lambda_{2})$.
\end{theorem}

A simulation procedure can be used to approximate the asymptotic p-value \citep[cf.][]{Andrews2001}. Draw $\widetilde{\mathcal{M}}\in\mathbb{N}$ samples of iid standard normal random variables $\{Z_{j,i}\}_{j=0}^{\widetilde{\mathcal{R}}}$, $i=1,...,\widetilde{\mathcal{M}}$, and compute $\mathfrak{Z}_{\widetilde{\mathcal{R}},i}(\lambda)\equiv(1-\lambda^{2})\sum_{j=0}^{\widetilde{\mathcal{R}}}\lambda^{j}Z_{j,i}$ and $\mathcal{T}_{\widetilde{\mathcal{R}},i}(\lambda)\equiv(\max\{0,\mathfrak{Z}_{\widetilde{\mathcal{R}},i}(\lambda)\})^{2}$. Notice $\mathfrak{Z}_{\widetilde{\mathcal{R}}}(\lambda)\equiv(1-\lambda^{2})\sum_{j=0}^{\widetilde{\mathcal{R}}}\lambda^{j}Z_{j}$ is zero mean Gaussian with the same covariance function as $\mathcal{Z}(\lambda)$ when $\widetilde{\mathcal{R}}=\infty$, hence $\{\mathcal{T}_{\infty,i}(\lambda):\lambda\in\Lambda\}$ is an independent draw from the limit process $\{\mathcal{T}(\lambda):\lambda\in\Lambda\}$. The p-value approximation is $\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}},n}(\lambda)\equiv 1/\widetilde{\mathcal{M}}\sum_{i=1}^{\widetilde{\mathcal{M}}}I(\mathcal{T}_{\widetilde{\mathcal{R}},i}(\lambda)>\mathcal{T}_{n}(\lambda))$. Since we can choose $\widetilde{\mathcal{M}}$ and $\widetilde{\mathcal{R}}$ to be arbitrarily large, we can make $\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}},n}(\lambda)$ arbitrarily close (in probability) to the asymptotic p-value by the Glivenko-Cantelli theorem. Now compute the PVOT $\mathcal{P}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}},n}^{\ast}(\alpha)\equiv\int_{\Lambda}I(\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}},n}(\lambda)<\alpha)d\lambda$.

\begin{theorem}
\label{th:garch_p_val}Let $\{y_{t}\}$ be generated by (\ref{garch}), and let $\{\widetilde{\mathcal{R}}_{n},\widetilde{\mathcal{M}}_{n}\}_{n\geq 1}$ be sequences of positive integers, $\widetilde{\mathcal{R}}_{n}\rightarrow\infty$ and $\widetilde{\mathcal{M}}_{n}\rightarrow\infty$. If $H_{0}$: $\delta_{0}=0$ is true then $\lim_{n\rightarrow\infty}P(\mathcal{P}_{\widetilde{\mathcal{R}}_{n},\widetilde{\mathcal{M}}_{n},n}^{\ast}(\alpha)>\alpha)\in(0,\alpha]$.
Otherwise if $\delta_{0}>0$ then $P(\mathcal{P}_{\widetilde{\mathcal{R}}_{n},\widetilde{\mathcal{M}}_{n},n}^{\ast}(\alpha)>\alpha)\rightarrow 1$.
\end{theorem}

\begin{remark}
\normalfont\label{rm:p_boot_h}Under $H_{0}$, $h(\mathcal{T}_{n}(\lambda))\overset{d}{\rightarrow}h(\mathcal{T}(\lambda))$ for mappings $h:\mathbb{R}\rightarrow\mathbb{R}$, continuous \emph{a.e.}, by exploiting theory in \citet[Section 4]{Andrews2001}. The relevant simulated p-value is $\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}},n}^{(h)}\equiv 1/\widetilde{\mathcal{M}}\sum_{i=1}^{\widetilde{\mathcal{M}}}I(h(\mathcal{T}_{\widetilde{\mathcal{R}},i}(\lambda))>h\left(\mathcal{T}_{n}(\lambda)\right))$. Arguments used to prove Theorem \ref{th:garch_p_val} easily lead to a proof that $\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}},n}^{(h)}$ is consistent for the corresponding asymptotic p-value.
\end{remark}

\section{Simulation Study\label{sec:sim}}

We perform three Monte Carlo experiments concerning tests of functional form with and without the possibility of weak identification, and GARCH effects. The same discretized $\Lambda$ is used for PVOT and bootstrap p-value tests, and integrals are discretized using the midpoint method. Wild bootstrapped p-values are computed with $R=1000$ samples of iid standard normal random variables $\{z_{t,i}\}_{t=1}^{n}$. Sample sizes are $n\in\{100,250,500\}$ and $10,000$ samples $\{y_{t}\}_{t=1}^{n}$ are independently drawn in each case. Nominal levels are $\alpha\in\{.01,.05,.10\}$.

\subsection{Test of Functional Form}

We first work with data generating processes in which all parameters are strongly identified.

\subparagraph{Set-Up}

Samples $\{y_{t}\}_{t=1}^{n}$ are drawn from one of four data generating processes: linear $y_{t}=2x_{t}+\epsilon_{t}$ or quadratic $y_{t}=2x_{t}+.1x_{t}^{2}+\epsilon_{t}$, where $\{x_{t},\epsilon_{t}\}$ are iid standard normal random variables; and AR(1) $y_{t}=.9x_{t}+\epsilon_{t}$ or Self-Exciting Threshold AR(1) $y_{t}=.9x_{t}-.4x_{t}I(x_{t}>0)+\epsilon_{t}$, where $x_{t}=y_{t-1}$ and $\epsilon_{t}$ is iid standard normal. In the time series cases we draw $2n$ observations with starting value $y_{1}=\epsilon_{1}$ and retain the last $n$ observations. Now write $\sum$ for sample summations: for iid data $\sum=\sum_{t=1}^{n}$ and for time series $\sum=\sum_{t=2}^{n}$. The estimated model is $y_{t}=\beta x_{t}+\epsilon_{t}$, and we test $H_{0}:E[y_{t}|x_{t}]=\beta_{0}x_{t}$ $a.s.$ for some $\beta_{0}$.

We compute $\mathcal{T}_{n}(\lambda)$ in (\ref{Tn_CM}) with logistic $F(c)=(1+\exp\{c\})^{-1}$ and $\Psi(x_{t})=\arctan(x_{t}^{\ast})$, where $x_{t}^{\ast}\equiv x_{t}-1/n\sum x_{t}$. Write $F_{t}(\lambda)=F(\lambda\Psi(x_{t}))$, let $\hat{\beta}_{n}$ be the least squares estimator, and define $\hat{z}_{n}(\lambda)\equiv 1/n^{1/2}\sum(y_{t}-\hat{\beta}_{n}x_{t})F_{t}(\lambda)$.
\subparagraph{Tests}

We perform four tests. First, the PVOT over $\Lambda $ $=$ $[.0001,1]$ based
on the asymptotic p-value for $\mathcal{T}_{n}(\lambda )$. The discretized
set is $\Lambda _{n}$ $\equiv $ $\{.0001$ $+$ $1/(\varpi n)$, $.0001$ $+$ $
2/(\varpi n),$ $...,$ $.0001$ $+$ $\bar{\imath}_{n}(\varpi )/(\varpi n)\}$
where $\bar{\imath}_{n}(\varpi )$ $\equiv $ $\argmax\{1$ $\leq $ $i$ $\leq $
$\varpi n$ $:$ $i$ $\leq $ $.9999\varpi n\}$, with a coarseness parameter $
\varpi $ $=$ $100$. We can use a much smaller $\varpi $ if the sample size
is large enough (e.g. $\varpi $ $=$ $10$\ when $n$ $=$ $250$, or $\varpi $ $
= $ $1$\ when $n$ $\geq $ $500$), but in general small $\varpi n$ leads to
over-rejection of $H_{0}$. Second, we use $\mathcal{T}_{n}(\lambda _{\ast })$
with a uniformly randomized $\lambda _{\ast }$ $\in $ $\Lambda $ and an
asymptotic p-value. Third, $\sup_{\lambda \in \Lambda _{n}}\mathcal{T}
_{n}(\lambda )$ and $\int_{\Lambda _{n}}\mathcal{T}_{n}(\lambda )\mu
(d\lambda )$\ with uniform measure $\mu (\lambda )$, and wild bootstrapped
p-values. Fourth, Bierens and Ploberger's (\citeyear{BierensPloberger1997})
ICM $\widehat{\mathcal{I}}_{n}$ $\equiv $ $\int_{\Lambda _{n}}\hat{z}
_{n}^{2}(\lambda )\mu (d\lambda )$ with uniform $\mu (\lambda )$, and the
critical value upper bound $c_{\alpha }$ $\int_{\Lambda }\hat{v}
_{n}^{2}(\lambda )\mu (d\lambda )$, where $\{c_{.01},c_{.05},c_{.10}\}$ $=$ $
\{6.81$, $4.26$, $3.23\}$ \citep[Section 6]{BierensPloberger1997}.

\subparagraph{Results}

Rejection frequencies for $\alpha $ $\in $ $\{.01,.05,.10\}$ are reported in
Table \ref{tbl:funcform}. The ICM test tends to be under-sized, which is
expected due to the critical value upper bound. Randomized, average and
supremum tests have accurate empirical size for iid data, but exhibit size
distortions for time series data when $n$ $\in $ $\{100,250\}$. The PVOT
test has relatively sharp size in nearly every case, but is slightly
over-sized for time series data when $n$ $=$ $100$. All tests except the
supremum test have comparable power, while the ICM test has low power at the
$1\%$ level. The supremum test has the lowest power, although its local
power was essentially identical to the average and PVOT tests for a similar
test of omitted nonlinearity (see Section \ref{sec:local_num}). In the time
series case, however, PVOT power when $n$ $=$ $100$ is lower than all other
tests, except the supremum test in general and the ICM test at level $\alpha
$ $=$ $.01$.
PVOT rejection frequencies are $\{.135,.206,.645\}$ for tests at levels $
\{.01,.05,.10\}$, while randomized, average, supremum and ICM power are $
\{.135,.592,.846\}$, $\{.062,.412,.726\}$, $\{.021,.209,.561\}$ and $
\{.004,.643,.866\}$\ respectively. These discrepancies, however, vanish when
$n$ $\in $ $\{250,500\}$. The ICM test has dismal power at the $1\%$ level
when $n$ $\leq $ $250$ and much lower power than all other tests when $n$ $=$
$500$, but comparable or better power at levels $5\%$ and $10\%$.

In summary, across cases the various tests are comparable; supremum test
power is noticeably lower in many cases; and the PVOT test generally
exhibits fewer size distortions, and competitive or high power in nearly
every case. Of particular note, the accuracy of PVOT size provides further
evidence that the PVOT asymptotic critical value is identically $\alpha $.
Finally, when $n$ $=$ $100$ the PVOT test took on average $.0085$ minutes ($
.51$ seconds), while the bootstrapped average or supremum test took $8.07$
minutes on average. The roughly 1000-fold increase is due to the number of
bootstrap samples. This demonstrates the PVOT test's computational
convenience, arising entirely from its asymptotic critical value (upper
bound) being the test level $\alpha $.

\subsection{Test of Functional Form with Weak Identification\label
{sec:sim_weak}}

We now work with a Smooth Transition Autoregression [STAR], allowing for
weak identification. The following summarizes the Monte Carlo study in \cite
{Hill2021_weak}.

\subparagraph{Set-Up}

The data are drawn from:
\begin{equation*}
y_{t}=\zeta _{0}y_{t-1}+\beta _{n}y_{t-1}\frac{1}{1+\exp \left\{ -10\left(
y_{t-1}-\pi _{0}\right) \right\} }+\varpi _{0}\frac{1}{1+y_{t-1}^{2}}
+\epsilon _{t},
\end{equation*}
where $\epsilon _{t}$ is iid $N(0,1)$. If $\varpi _{0}$ $=$ $0$ then $y_{t}$
is a Logistic STAR process and the null hypothesis is true. If $\beta _{n}$ $
\rightarrow $ $0$ too quickly then $\pi _{0}$ cannot be identified and
estimation asymptotics are non-standard. We use $\zeta _{0}$ $=$ $.6$, $\pi
_{0}$ $=$ $0$ and $\varpi _{0}$ $\in $ $\{0,.03,.3\}$, the latter two values
allowing for weak and strong deviations from the null. We use $\beta _{n}$ $
\in $ $\{.3,.3/\sqrt{n},0\}$ representing strong identification, weak
identification with $\sqrt{n}\beta _{n}$ $=$ $.3$ and $\beta _{n}$ $
\rightarrow $ $\beta _{0}$ $=$ $0$, and non-identification with $\beta _{n}$
$=$ $\beta _{0}$ $=$ $0$.

Let $\iota $ $=$ $10^{-10}$. The true parameters satisfy $\beta _{n}$ $\in $
$\mathcal{B}^{\ast }$, $\zeta _{0}$ $\in $ $\mathcal{Z}^{\ast }(\beta )$ and
$\pi _{0}$ $\in $ $\Pi ^{\ast }$. The true parameter spaces are $\mathcal{B}
^{\ast }$ $=$ $[-1$ $+$ $2\iota ,1$ $-$ $2\iota ]$, $\mathcal{Z}^{\ast
}(\beta )$ $=$ $\{\zeta $ $:$ $-1-\beta $ $+$ $\iota <\zeta <1-\beta $ $-$ $
\iota \}$, and $\Pi ^{\ast }$ $=$ $[-1,1]$. The estimation spaces are $
\mathcal{B}$ $=$ $[-1$ $+$ $\iota ,1$ $-$ $\iota ]$, $\mathcal{Z}(\beta )$ $
=$ $\{\zeta $ $:$ $-1-\beta <\zeta <1-\beta \}$, and $\Pi $ $=$ $[-2,2]$.
Thus $|\zeta $ $+$ $\beta |$ $<$ $1$ on $\Theta $ $\equiv $ $\mathcal{B}$ $
\times $ $\mathcal{Z}(\beta )$ $\times $ $\Pi $, which ensures stationarity
\citep[see][Theorem 1]{BhattacharyaLee1995}.

We draw $100$ start values uniformly on $\Theta $ and estimate $\theta
_{0}=[\zeta _{0},\beta _{0},\pi _{0}]^{\prime }$ by least squares for each
start value, resulting in $\{\hat{\theta}_{n,i}\}_{i=1}^{100}$.
The final $\hat{\theta}_{n}$ minimizes the least squares criterion over $\{
\hat{\theta}_{n,i}\}_{i=1}^{100}$.\footnote{
Computation is performed using Matlab R2016. An analytic gradient is used
for optimization. The criterion tolerance for ceasing iterations is $10^{-8}$
, and the maximum number of allowed iterations is $20,000$.} We also require
$\hat{\sigma}_{n}^{2}$ $=$ $1/n\sum_{t=2}^{n}(y_{t}-\hat{\zeta}_{n}y_{t-1}$ $
-$ $\hat{\beta}_{n}y_{t-1}(1$ $+$ $\exp \left\{ -10\left( y_{t-1}-\hat{\pi}
_{n}\right) \right\} )^{-1})^{2}$. Notice $\hat{\sigma}_{n}^{2}$ $\overset{p}
{\rightarrow }$ $\sigma ^{2}$ under mild conditions and any degree of
(non)identification: if $\hat{\beta}_{n}$ $\overset{p}{\rightarrow }$ $0$
fast enough then the non-standard limit properties of $\hat{\pi}_{n}$\ are
irrelevant \citep[see][Theorem 4.1 and Remark 7]{Hill2021_weak}.

The test weight is $\mathcal{F}(u)$ $=$ $1/(1$ $+$ $\exp \{u\})$, and $
\mathcal{F}(\lambda ^{\prime }\Psi (x_{t}))$ uses the bounded one-to-one
transform $\Psi (x)$ $=$ atan$(x)$ \citep[e.g.][p. 1445, 1453]{Bierens1990}.
The parameter space is $\Lambda $ $=$ $[1,5]$, discretized as $\Lambda _{n}$
with endpoints $\{1,5\}$ and equal increments with $n$ elements (e.g. $
\Lambda _{100}$ $=$ $\{1,$ $1.04,$ $1.08,...,$ $5\}$).
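For concreteness, the following is a minimal sketch of the STAR data
generating process above (Python with \texttt{numpy}; illustrative only, not
the simulation code; the burn-in length and seed are arbitrary choices):

\begin{verbatim}
# Hedged sketch of the (L)STAR draw used in this experiment.
import numpy as np

def draw_star(n, zeta=0.6, beta=0.3, pi=0.0, varpi=0.0, burn=100, seed=0):
    """y_t = zeta*y_{t-1} + beta*y_{t-1}/(1 + exp(-10*(y_{t-1} - pi)))
           + varpi/(1 + y_{t-1}^2) + eps_t,  eps_t iid N(0,1)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + burn)
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        logistic = 1.0 / (1.0 + np.exp(-10.0 * (y[t - 1] - pi)))
        y[t] = (zeta * y[t - 1] + beta * y[t - 1] * logistic
                + varpi / (1.0 + y[t - 1] ** 2) + eps[t])
    return y[burn:]

# e.g. weak identification with sqrt(n)*beta_n = .3 and a weak deviation:
# y = draw_star(100, beta=0.3 / np.sqrt(100), varpi=0.03)
\end{verbatim}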
\subparagraph{Tests}

We perform eleven tests. The first five are not robust to weak
identification: $(i)$ uniformly randomize $\lambda ^{\ast }$ on $\Lambda $,
compute $\mathcal{T}_{n}(\lambda ^{\ast })$ and use $\chi ^{2}(1)$ for
p-value computation; $(ii)$ $\sup_{\lambda \in \Lambda _{n}}p_{n}(\lambda )$
; $(iii)$ $\sup_{\lambda \in \Lambda _{n}}\mathcal{T}_{n}(\lambda )$ and $
(iv)$ $\int_{\Lambda _{n}}\mathcal{T}_{n}(\lambda )\mu (d\lambda )$\ where $
\mu $ is the uniform measure on $\Lambda $, and p-values are computed by
wild bootstrap; and $(v)$ the PVOT test using $\Lambda _{n}$, and a p-value
computed from the $\chi ^{2}(1)$ distribution.

The final six tests are robust, based on the bootstrapped p-value procedure
in \cite{Hill2021_weak}. We compute $\mathcal{T}_{n}(\lambda ^{\ast })$
using ($vi$) the plug-in least-favorable [LF] and ($vii$) plug-in
Identification Category Selection Type 1 [ICS-1] p-values from
\citet[Sections 5 and 6]{Hill2021_weak}; $\sup_{\lambda \in \Lambda
_{n}}p_{n}(\lambda )$ using ($viii$) the plug-in LF and ($ix$) plug-in ICS-1
p-values; and PVOT using ($x$)\ the plug-in LF and ($xi$) plug-in ICS-1
p-values. See \citet[Section 7]{Hill2021_weak} for details on p-value
computation for the present experiment.

\subparagraph{Results}

Table \ref{tbl:starn100} contains rejection frequencies. All tests are
fairly comparable under strong identification $\beta _{n}$ $=$ $.3$. By
construction the LF p-values are larger than the ICS-1 p-values, which are
larger than the $\chi ^{2}$ p-values. This results in lower rejection rates
even under strong identification. The sup-p-value test is conservative by
construction, with comparatively smaller rejection rates.

Under weak and non-identification most non-robust tests over-reject the
null hypothesis, and most distortions are comparatively large. Ironically,
the non-robust $\sup_{\lambda \in \Lambda _{n}}p_{n}(\lambda )$ is
relatively large, which pushes that test's rejection frequencies down. While
this inadvertently compensates for a potentially large size distortion, it
leads to lower empirical power. The sole test that both controls for weak
identification and obtains relatively high power is the PVOT test with
ICS-1 p-values. The PVOT test with LF p-values also works well, but tends to
have lower power than the ICS-1 based PVOT test. This follows since the LF
p-values are larger than the ICS-1 p-values.

\subsection{Test of GARCH Effects\label{sec:sim_garch}}

\subparagraph{Set-Up}

Samples $\{y_{t}\}_{t=1}^{n}$ are drawn from a GARCH process $y_{t}$ $=$ $
\sigma _{t}\epsilon _{t}$ and $\sigma _{t}^{2}$ $=$ $\omega _{0}$ $+$ $
\delta _{0}y_{t-1}^{2}$ $+$ $\lambda _{0}\sigma _{t-1}^{2}$ with parameter
values $\omega _{0}$ $=$ $1$, $\lambda _{0}$ $=$ $.6$, and $\delta _{0}$ $=0$
or $.3$, where $\epsilon _{t}$ is iid $N(0,1)$. The initial condition is $
\sigma _{0}^{2}$ $=$ $\omega _{0}/(1$ $-$ $\lambda _{0})$ $=$ $2.5$.
Simulation results are qualitatively similar for other values $\lambda _{0}$
$\in $ $(0,1)$. Put $\Lambda $ $=$ $[.01,.99]$ with discretization $\Lambda
_{n}$ $\equiv $ $\{.01+1/(\varpi n),.01+2/(\varpi n),...,.01$ $+$ $\bar{
\imath}_{n}(\varpi )/(\varpi n)\}$, where $\bar{\imath}_{n}(\varpi )$ $
\equiv $ $\argmax\{1$ $\leq $ $i$ $\leq $ $\varpi n$ $:$ $i$ $\leq $ $
.98\varpi n\}$, with coarseness $\varpi $ $=$ $1$. Hence there are $\mathcal{
N}_{n}$ $\approx $ $n$ $-$ $1$ points in $\Lambda _{n}$. A finer grid based
on $\varpi $ $=$ $10$ or $100$, for example, leads to improved empirical
size at the 1\% level for the PVOT test, and more severe size distortions
for the supremum test. The cost, however, is computation time since a QML
estimator \textit{and} bootstrapped p-value are required for each sample.

We estimate $\pi _{0}$ $=$ $[\omega _{0},\delta _{0}]^{\prime }$ by QML for
fixed $\lambda $ $\in $ $\Lambda _{n}$, with criterion $Q_{n}(\pi ,\lambda )$
$=$ $\sum \{\ln \sigma _{t}^{2}(\pi ,\lambda )$ $+$ $y_{t}^{2}/\sigma
_{t}^{2}(\pi ,\lambda )\}$ where $\sigma _{t}^{2}(\pi ,\lambda )$ $=$ $
\omega $ $+$ $\delta y_{t-1}^{2}$ $+$ $\lambda \sigma _{t-1}^{2}(\pi
,\lambda )$, and $\sigma _{0}^{2}(\pi ,\lambda )$ $=$ $\omega /(1$ $-$ $
\lambda )$. The estimator is $\hat{\pi}_{n}(\lambda )=[\hat{\omega}
_{n}(\lambda ),\hat{\delta}_{n}(\lambda )]^{\prime }$ $=$ $\arg \min_{\pi
\in \Pi }Q_{n}(\pi ,\lambda )$ with space $\Pi $ $=$ $[.001,2]$ $\times $ $
[0,.99]$.\footnote{
We compute $\hat{\pi}_{n}(\lambda )$ using Matlab's built-in \textit{fmincon}
routine for constrained optimization, with numerical approximations for the
first and second derivatives. We cease iterations when the numerical
gradient, or the difference in the current and previous iteration of $\hat{
\pi}_{n}(\lambda )$, is less than $.0001$. The initial parameter value is a
uniform random draw on $\Pi $.} The test statistic is $\mathcal{T}
_{n}(\lambda )$ $=$ $n\hat{\delta}_{n}(\lambda )^{2}$, and the p-value
approximation $\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}}
,n}(\lambda )$ is computed by the method in Section \ref{sec:garch} with $
\widetilde{\mathcal{M}}$ $=$ $10,000$ simulated samples of size $\widetilde{
\mathcal{R}}$ $=$ $25,000$.
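A minimal sketch of the profiled QML criterion follows (Python with
\texttt{numpy}; illustrative only: the reported results use Matlab's
\textit{fmincon}, and the initialization $y_{0}$ $=$ $0$ is our simplifying
assumption):

\begin{verbatim}
# Hedged sketch of Q_n(pi, lambda) for fixed lambda.
import numpy as np

def garch_criterion(omega, delta, lam, y):
    """Q_n = sum_t { ln sigma_t^2 + y_t^2 / sigma_t^2 }, where
    sigma_t^2 = omega + delta*y_{t-1}^2 + lam*sigma_{t-1}^2 and
    sigma_0^2 = omega / (1 - lam); y_0 = 0 is assumed here."""
    s2 = np.empty(len(y))
    s2_prev, y_prev = omega / (1.0 - lam), 0.0
    for t in range(len(y)):
        s2[t] = omega + delta * y_prev**2 + lam * s2_prev
        s2_prev, y_prev = s2[t], y[t]
    return float(np.sum(np.log(s2) + y**2 / s2))

# Profiling over a lambda grid: minimize over (omega, delta) for each lam,
# e.g. with scipy.optimize.minimize, and set T_n(lam) = n * delta_hat**2.
\end{verbatim}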
\subparagraph{Tests}

We handle the nuisance parameter $\lambda $ by uniformly randomizing it on $
\Lambda $; computing the PVOT; and computing $\sup_{\lambda \in \Lambda }
\mathcal{T}_{n}(\lambda )$ and $\int_{\Lambda }\mathcal{T}_{n}(\lambda )\mu
(d\lambda )$, along with corresponding simulation-based bootstrapped
p-values $\hat{p}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}}
,n}^{(\cdot )}$ detailed in Remark \ref{rm:p_boot_h}.

\subparagraph{Results}

Consult Table \ref{tbl:garch} for simulation results. The randomized test
under-rejects the null, and has lower size-adjusted power than the remaining
tests. The supremum test proposed by Andrews
(2001)\nocite{Andrews2001} is highly over-sized, resulting in relatively low
size-adjusted power. The best tests in terms of size and size-adjusted power
are the PVOT and average tests. The average test tends to under-reject the
null at each level for sample sizes $n$ $\in $ $\{100,250\}$, and the PVOT
test tends to over-reject the null at the $1\%$ level for $n$ $\in $ $
\{100,250\}$. Recall the average test has the highest weighted average power
for alternatives near the null \citep{AndrewsPloberger1994}, hence the PVOT
test performs on par with, or is slightly better than, an optimal test
(depending on $n$ and $\alpha $). Finally, the PVOT size performance
suggests the asymptotic critical value is $\alpha $. The PVOT, average and
supremum tests are roughly equal in terms of computational cost due to the
simulation procedure required for computing the p-value. See Remark \ref
{rm:p_boot_h}.

\section{Conclusion\label{sec:conclusion}}

\cite{HillAguilar13} and \cite{Hill_white_2012} develop the p-value
occupation time [PVOT] to smooth over a trimming tuning parameter. The idea
is extended here to tests when a nuisance parameter is present under the
alternative, and complete asymptotic theory is developed for the first time.
In the SM we show in a likelihood setting that the PVOT is a point estimate
of the weighted average rejection probability of the PV test, evaluated
under the null, making the PVOT a natural object of interest for hypothesis
testing when nuisance parameters are present. By construction, a critical
value upper bound for the PVOT test is the nominal significance level $
\alpha $, making computation and interpretation very simple, and much easier
to perform than standard transforms like the average or supremum, since
these typically require a bootstrapped p-value. If the original test is
consistent on a subset of $\Lambda $ with Lebesgue measure greater than $
\alpha $ then so is the PVOT test. Moreover, the PVOT form of smoothing
naturally accepts weak identification robust p-values, while conventionally
smoothed test statistics cannot be consistently bootstrapped under weak
identification. Indeed, evidently only the PVOT test with a weak
identification robust p-value achieves both accurate level and high power.
We are not aware of any other test statistic construction that allows for
nuisance parameter smoothing that is both robust to weak identification
\textit{and} not conservative. Interesting future work may include studying
the PVOT test when the data generating process is not encompassed under
either hypothesis, or looking at how pre-order selection, or the particular
model filter/estimator, may affect its performance.

\setcounter{equation}{0} \renewcommand{\theequation}{{\thesection}.
\arabic{equation}} \appendix\onehalfspacing

\section{Appendix: Proofs\label{sec:proofs}}

\noindent \textbf{Proof of Theorem \ref{th:main}.}\qquad Write $\{\mathcal{T}
_{n}(\lambda )\}$ $=$ $\{\mathcal{T}_{n}(\lambda )$ $:$ $\lambda $ $\in $ $
\Lambda \}$, etc.
By Assumption \ref{assum:main}, $\{\mathcal{T}_{n}(\lambda )\}$ $\Rightarrow
^{\ast }$ $\{\mathcal{T}(\lambda )\}$ under $H_{0}$, a process with a
version that has \emph{almost surely} bounded uniformly continuous sample
paths with respect to the sup-norm, where $\mathcal{T}(\lambda )$ has a
continuous distribution function $F_{0}(c)$ $\equiv $ $P(\mathcal{T}(\lambda
)$ $\leq $ $c)$ that is not a function of $\lambda $. Hence by the
continuous mapping theorem $\{\bar{F}_{0}(\mathcal{T}_{n}(\lambda ))\}$ $
\Rightarrow ^{\ast }$ $\{\bar{F}_{0}(\mathcal{T}(\lambda ))\}$, where $\bar{F
}_{0}(\cdot )$ $\equiv $ $1$ $-$ $F_{0}(\cdot )$, and $\{\bar{F}_{0}(
\mathcal{T}(\lambda ))\}$ has a version with \emph{almost surely} bounded
uniformly continuous sample paths with respect to the sup-norm
\citep[e.g.][Theorem 2.7]{Billingsley1999}.

Furthermore, $\sup_{\lambda \in \Lambda }|p_{n}(\lambda )$ $-$ $\bar{F}_{0}(
\mathcal{T}_{n}(\lambda ))|$ $\overset{p}{\rightarrow }$ $0$ by Assumption
\ref{assum:main}.b, hence $\{p_{n}(\lambda )\}$ $\Rightarrow ^{\ast }$ $\{
\bar{F}_{0}(\mathcal{T}(\lambda ))\}$. By distribution continuity, $\mathcal{
U}(\lambda )$ $\equiv $ $\bar{F}_{0}(\mathcal{T}(\lambda ))$ is for each $
\lambda $ $\in $ $\Lambda $ uniformly distributed on $[0,1]$, and from above
$\{\mathcal{U}(\lambda )\}$ has a version with \emph{almost surely} bounded
uniformly continuous sample paths. Therefore the mapping $\mathcal{U}(\cdot
) $ $\mapsto $ $\int_{\Lambda }I(\mathcal{U}(\lambda )$ $<$ $\alpha
)d\lambda $ is continuous with probability one, due to \emph{almost sure}
bounded continuity of the sample paths of $\{\mathcal{U}(\lambda )\}$ and
the fact that weak convergence is on $l_{\infty }(\Lambda )$, which is
endowed with the sup-norm. The continuous mapping theorem therefore yields $
\mathcal{P}_{n}^{\ast }(\alpha )$ $=$ $\int_{\Lambda }I(p_{n}(\lambda )$ $<$
$\alpha )d\lambda $ $\overset{d}{\rightarrow }$ $\int_{\Lambda }I(\mathcal{U}
(\lambda )$ $<$ $\alpha )d\lambda $
\citep[Theorem IV.2.12, cf. p.66-70]{Pollard1984}. Now use Lemma \ref{lm:U},
below, to yield $P(\int_{\Lambda }I(\mathcal{U}(\lambda )$ $<$ $\alpha
)d\lambda $ $>$ $\alpha )$ $\leq $ $\alpha $ and each remaining claim. $
\mathcal{QED}$.

\begin{lemma}
\label{lm:U}Let $\{\mathcal{U}(\lambda )$ $:$ $\lambda $ $\in $ $\Lambda \}$
be a stochastic process where $\mathcal{U}(\lambda )$ is distributed uniform
on $[0,1]$, and $\int_{\Lambda }d\lambda $ $=$ $1$. Then $(a)$ $
P(\int_{\Lambda }I(\mathcal{U}(\lambda )$ $<$ $\alpha )d\lambda $ $>$ $
\alpha )$ $\leq $ $\alpha $. In particular, $(b)$ if $\mathcal{U}(\lambda )$
$=$ $\mathcal{U}(\lambda ^{\ast })$ $a.s.$ $\forall \lambda $ $\in $ $
\Lambda $ and some $\lambda ^{\ast }$ $\in $ $\Lambda $\ then $
P(\int_{\Lambda }I(\mathcal{U}(\lambda )$ $<$ $\alpha )d\lambda $ $>$ $
\alpha )$ $=$ $\alpha $. Finally, $(c)$ if $P(\mathcal{U}(\lambda )$ $<$ $
\alpha ,\mathcal{U}(\tilde{\lambda})$ $<$ $\alpha )$ $>$ $\alpha ^{2}$ for
all couplets $(\lambda ,\tilde{\lambda})$ on a subset of $\Lambda $ $\times
$ $\Lambda $ with positive measure, then $P(\int_{\Lambda }I(\mathcal{U}
(\lambda )$ $<$ $\alpha )d\lambda $ $>$ $\alpha )$ $>0$.
\end{lemma}

\begin{remark}
\normalfont The key step in the proof, showing $P(\int_{\Lambda }I(\mathcal{U
}(\lambda )$ $<$ $\alpha )d\lambda $ $>$ $\alpha )$ $\leq $ $\alpha $,
exploits a variation of the Bernstein inequalities. If we know $\mathcal{U}
(\lambda )$ is perfectly dependent across $\lambda $ then the bound is
exact.
\end{remark}
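The following toy Monte Carlo illustrates Lemma \ref{lm:U} (a Python sketch,
assuming \texttt{numpy} and \texttt{scipy}; the equicorrelated Gaussian
copula is purely an illustrative choice, not part of the theory):

\begin{verbatim}
# Estimate P( int I(U(lam) < alpha) d lam > alpha ) on a grid of size L.
import numpy as np
from scipy.stats import norm

def occupation_exceed_prob(rho, alpha=0.05, L=50, M=20000, seed=0):
    rng = np.random.default_rng(seed)
    common = rng.standard_normal((M, 1))
    idio = rng.standard_normal((M, L))
    X = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio  # corr = rho
    U = norm.cdf(X)                        # uniform [0,1] marginals
    P = (U < alpha).mean(axis=1)           # grid occupation time
    return float((P > alpha).mean())

# rho = 1: U is perfectly dependent, so the probability is alpha (case (b));
# rho < 1: the probability is positive but at most alpha (cases (a), (c)).
print(occupation_exceed_prob(1.0), occupation_exceed_prob(0.5))
\end{verbatim}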
\noindent \textbf{Proof.\qquad }Let $\mathcal{P}$ $\equiv $ $\int_{\Lambda
}I(\mathcal{U}(\lambda )$ $<$ $\alpha )d\lambda $, where $\mathcal{P}$ $\in
$ $[0,1]$ since $\int_{\Lambda }d\lambda $ $=$ $1$. In order to prove ($a$),
use Markov's inequality (cf. the Chernoff bound variation of the Bernstein
inequality) to yield
\begin{equation*}
P\left( \mathcal{P}>\alpha \right) \leq \inf_{k\geq 0}\left\{ e^{-k\alpha }E
\left[ e^{k\mathcal{P}}\right] \right\} .
\end{equation*}
Note that $E[\mathcal{P}^{i}]$ $\leq $ $E[\mathcal{P}]$ for all $i$ $\geq $ $
1$ due to $\mathcal{P}$ $\in $ $[0,1]$. Now invoke Fubini's theorem, the
fact that $\mathcal{U}(\lambda )$\ is uniformly distributed on $[0,1]$, and $
\int_{\Lambda }d\lambda $ $=$ $1$ to deduce:
\begin{equation*}
E[\mathcal{P}]=E\left[ \int_{\Lambda }I(\mathcal{U}(\lambda )<\alpha
)d\lambda \right] =\int_{\Lambda }P(\mathcal{U}(\lambda )<\alpha )d\lambda
=\alpha \int_{\Lambda }d\lambda =\alpha \text{.}
\end{equation*}
Expanding $E[e^{k\mathcal{P}}]$ around $k$ $=$ $0$, and exploiting $E[
\mathcal{P}^{i}]$ $\leq $ $\alpha $, yields:
\begin{equation*}
P\left( \mathcal{P}>\alpha \right) \leq \inf_{k\geq 0}\left\{ e^{-k\alpha }E
\left[ e^{k\mathcal{P}}\right] \right\} =\inf_{k\geq 0}\left\{ e^{-k\alpha
}\sum_{i=0}^{\infty }\frac{1}{i!}k^{i}E\left[ \mathcal{P}^{i}\right]
\right\} \leq \alpha \inf_{k\geq 0}\left\{ e^{-k\alpha }\sum_{i=0}^{\infty }
\frac{1}{i!}k^{i}\right\} .
\end{equation*}
Since $\alpha $ $\in $ $[0,1]$ and therefore $e^{k\left( 1-\alpha \right) }$
$\geq $ $1$ $\forall k$ $\geq $ $0$, trivially
\begin{equation*}
\inf_{k\geq 0}\{e^{-k\alpha }\sum_{i=0}^{\infty }k^{i}/i!\}=\inf_{k\geq
0}\{e^{-k\alpha }e^{k}\}=\inf_{k\geq 0}e^{k\left( 1-\alpha \right) }=1.
\end{equation*}
This proves $P(\mathcal{P}$ $>$ $\alpha )$ $\leq $ $\alpha $\ as required.

Consider ($b$). If $P(\mathcal{U}(\lambda )$ $=$ $\mathcal{U}(\lambda ^{\ast
}))$ $=$ $1$ $\forall \lambda $ $\in $ $\Lambda $ and some $\lambda ^{\ast }$
, then $P(\mathcal{P}$ $=$ $\int_{\Lambda }I(\mathcal{U}(\lambda ^{\ast
})<\alpha )d\lambda )$ $=$ $1$. Hence $P(\mathcal{P}$ $>$ $\alpha )$ $=$ $
P(I(\mathcal{U}(\lambda ^{\ast })$ $<$ $\alpha )$ $>$ $\alpha )$ $=$ $P(
\mathcal{U}(\lambda ^{\ast })<\alpha )$. The latter is identically $\alpha $
by uniform distributedness.

Finally, for ($c$) if $P(\mathcal{U}(\lambda )$ $<$ $\alpha ,\mathcal{U}(
\tilde{\lambda})$ $<$ $\alpha )$ $>$ $\alpha ^{2}$ on a subset of $\Lambda $
$\times $ $\Lambda $ with positive measure, then $E[\mathcal{P}^{2}]$ $>$ $
(E[\mathcal{P}])^{2}$ $=$ $\alpha ^{2}$. Since $E[\mathcal{P}^{2}]$ $=$ $E[
\mathcal{P}^{2}I(\mathcal{P}^{2}$ $>$ $\alpha ^{2})]$ $+$ $E[\mathcal{P}
^{2}I(\mathcal{P}^{2}$ $\leq $ $\alpha ^{2})]$, and $\mathcal{P}$\ is
bounded, by a variant of the second moment method $P(\mathcal{P}$ $>$ $
\alpha )$ $\geq $ $(E[\mathcal{P}^{2}]$ $-$ $\alpha ^{2})^{2}/E[\mathcal{P}
^{4}]$ $>$ $0$. $\mathcal{QED}$.

\noindent \newline \textbf{Proof of Theorem \ref{th:main_h1}.}\qquad \newline
\textbf{Claim (a).}\qquad Let $H_{0}$ be false, and define the set of $
\lambda ^{\prime }s$ such that the PV test rejects for sample size $n$: $
\Lambda _{n,\alpha }$ $\equiv $ $\{\lambda $ $\in $ $\Lambda $ $:$ $
p_{n}(\lambda )$ $<$ $\alpha \}$. By assumption $\{p_{n}(\lambda )$ $:$ $
\lambda $ $\in $ $\Lambda \}$ lies in a complete measure space, hence $
\Lambda _{n,\alpha }$ is $\sigma (\mathcal{S}_{n})$-measurable \citep[see][p.
195-198]{Pollard1984}.
Similarly, the Lebesgue measure $\int_{\Lambda _{n,\alpha }}d\lambda $ of $
\Lambda _{n,\alpha }$ is $\sigma (\mathcal{S}_{n})$-measurable. By
construction
\begin{equation*}
\mathcal{P}_{n}^{\ast }(\alpha )\equiv \int_{\Lambda _{n,\alpha }}I\left(
p_{n}(\lambda )<\alpha \right) d\lambda +\int_{\Lambda /\Lambda _{n,\alpha
}}I\left( p_{n}(\lambda )<\alpha \right) d\lambda =\int_{\Lambda _{n,\alpha
}}d\lambda .
\end{equation*}
Hence $\lim_{n\rightarrow \infty }P(\mathcal{P}_{n}^{\ast }(\alpha )$ $>$ $
\alpha )$ $=$ $\lim_{n\rightarrow \infty }P(\int_{\Lambda _{n,\alpha
}}d\lambda $ $>$ $\alpha )$. Therefore $\lim_{n\rightarrow \infty }P(
\mathcal{P}_{n}^{\ast }(\alpha )$ $>$ $\alpha )$ $>$ $0$ \textit{if and only
if} $\lim_{n\rightarrow \infty }P(\int_{\Lambda _{n,\alpha }}d\lambda $ $>$ $
\alpha )$ $>$ $0$, \textit{if and only if} $\lim_{n\rightarrow \infty
}P(p_{n}(\lambda )$ $<$ $\alpha )$ $>$ $0$ on some subset with measure
greater than $\alpha $. \newline
\textbf{Claim (b).}\qquad Let $\Lambda _{\alpha }$ denote the set of $
\lambda ^{\prime }s$ such that $\lim_{n\rightarrow \infty }P(p_{n}(\lambda )$
$<$ $\alpha )$ $=$ $1$, hence $\lim_{n\rightarrow \infty }P(p_{n}(\lambda )$
$<$ $\alpha )$ $<$ $1$ on $\Lambda /\Lambda _{\alpha }$. Then by dominated
convergence $\lim_{n\rightarrow \infty }P(\mathcal{P}_{n}^{\ast }(\alpha )$ $
>$ $\alpha )$ $=$ $\lim_{n\rightarrow \infty }P(\int_{\Lambda _{\alpha
}}d\lambda $ $+$ $\int_{\Lambda /\Lambda _{\alpha }}I(p_{n}(\lambda )$ $<$ $
\alpha )d\lambda $ $>$ $\alpha )$. If $\Lambda _{\alpha }$ has measure
greater than $\alpha $ then $\lim_{n\rightarrow \infty }P(\mathcal{P}
_{n}^{\ast }(\alpha )$ $>$ $\alpha )$ $=$ $1$. $\mathcal{QED}$. \newline

\textbf{Proof of Theorem \ref{th:local_pow_nl}.}\qquad Recall $F_{1}$ is a $
\chi ^{2}(1)$ distribution, $\bar{F}_{1}$ $\equiv $ $1$ $-$ $F_{1}$, and $
F_{1,v}$ is a noncentral chi-squared distribution with noncentrality $v$. By
construction $p_{n}(\lambda )$ $=$ $\bar{F}_{1}(\mathcal{T}_{n}(\lambda ))$.
In view of (\ref{TZc}), under $H_{1}^{L}$ it follows $p_{n}(\lambda )$ $
\overset{d}{\rightarrow }$ $\bar{F}_{1}(\mathfrak{T}_{b})$, a law on $[0,1]$
where $\mathfrak{T}_{b}$ is distributed $F_{1,b^{2}c(\lambda )^{2}}$. Hence $
\bar{F}_{1}(\mathfrak{T}_{b})$ is skewed left for $b$ $\neq $ $0$. Let $
\mathcal{U}_{b}(\lambda )$ be distributed $\bar{F}_{1}(\mathfrak{T}_{b})$.
Then $\mathcal{U}_{0}(\lambda )$ is a uniform random variable, and in
general $P(\mathcal{U}_{b}(\lambda )$ $\leq $ $a)$ $-$ $P(\mathcal{U}
_{0}(\lambda )$ $\leq $ $a)$ $>$ $0$ is monotonically increasing in $|b|$
since $P(\mathcal{U}_{b}(\lambda )$ $\leq $ $a)$ $\rightarrow $ $1$
monotonically as $|b|$ $\rightarrow $ $\infty $ for any $a$. Now, by
construction $\{\mathcal{U}_{b}(\lambda )\}$ has \emph{almost surely}
continuous sample paths with $\mathcal{U}_{b}(\lambda )$ distributed $\bar{F}
_{1}(\mathfrak{T}_{b})$. Hence under $H_{1}^{L}$ by (\ref{TZc}), and the
continuous mapping theorem:
\begin{equation*}
\mathcal{P}_{n}^{\ast }(\alpha )=\int_{\Lambda }I\left( p_{n}(\lambda
)<\alpha \right) d\lambda \overset{d}{\rightarrow }\int_{\Lambda }I\left(
\mathcal{U}_{b}(\lambda )<\alpha \right) d\lambda .
\end{equation*}
By construction $\int_{\Lambda }I(\mathcal{U}_{b}(\lambda )$ $<$ $\alpha
)d\lambda $ $\geq $ $\int_{\Lambda }I(\mathcal{U}_{0}(\lambda )$ $<$ $\alpha
)d\lambda $ with equality only if $b$ $=$ $0$: the asymptotic occupation
time of a p-value rejection $p_{n}(\lambda )$ $<$ $\alpha $ is higher under
any sequence of non-trivial local alternatives $H_{1}^{L}$ $:$ $\beta _{0}$ $
=$ $b/n^{1/2}$, $b$ $\neq $ $0$. Further, $\int_{\Lambda }I(\mathcal{U}
_{b}(\lambda )$ $<$ $\alpha )d\lambda $ $\rightarrow $ $1$ as $|b|$ $
\rightarrow $ $\infty $. Hence as the local deviation from the null
increases, the probability of a PVOT test rejection under $H_{1}^{L}$
approaches one: $\lim_{n\rightarrow \infty }P(\mathcal{P}_{n}^{\ast }(\alpha
)$ $>$ $\alpha )$ $\nearrow $ $1$ as $|b|$ $\rightarrow $ $\infty $, for any
nominal level $\alpha $ $\in $ $[0,1)$. $\mathcal{QED}$.

\noindent \textbf{Proof of Theorem \ref{th:garch}.}\qquad The GARCH process
is stationary and has an iid error with a finite fourth moment. The claim
therefore follows from arguments in \citet[Section 4.1]{Andrews2001}. $
\mathcal{QED}$.

\noindent \textbf{Proof of Theorem \ref{th:garch_p_val}.}\qquad By Theorem
\ref{th:garch}, the limit process of $\{\mathcal{T}_{n}(\lambda )\}$ under $
H_{0}$\ is $\{\mathcal{T}(\lambda )\}$, where $\mathcal{T}(\lambda )$ $=$ $
(\max \{0,\mathcal{Z}(\lambda )\})^{2}$ and $\{\mathcal{Z}(\lambda )\}$ is
Gaussian with covariance $E[\mathcal{Z(}\lambda _{1})\mathcal{Z(}\lambda
_{2})]$ $=$ $(1$ $-$ $\lambda _{1}^{2})(1$ $-$ $\lambda _{2}^{2})/(1$ $-$ $
\lambda _{1}\lambda _{2})$. Define $\bar{F}_{0}(c)=P(\mathcal{T}(\lambda
)\geq c)$ and $p_{n}(\lambda )\equiv \bar{F}_{0}(\mathcal{T}_{n}(\lambda ))$
, the asymptotic p-value. Define $\mathcal{D}_{n}$ $\equiv $ $\sup_{\lambda
\in \Lambda }|\hat{p}_{\widetilde{\mathcal{R}}_{n},\widetilde{\mathcal{M}}
_{n},n}(\lambda )$ $-$ $p_{n}(\lambda )|$. Theorems \ref{th:main} and \ref
{th:main_h1} apply\ by Theorem \ref{th:garch}. Hence, by Lemma \ref
{lm:boot_p_ulln}, below, and weak convergence arguments developed in the
proof of Theorem \ref{th:main}, under $H_{0}$ for some uniform process $\{
\mathcal{U}(\lambda )\}$:
\begin{eqnarray*}
&\int_{\Lambda }I\left( \mathcal{U}(\lambda )<\alpha \right) d\lambda &
\overset{d}{\leftarrow }\int_{\Lambda }I\left( p_{n}(\lambda )-\mathcal{D}
_{n}<\alpha \right) d\lambda \text{ }\geq \int_{\Lambda }I\left( \hat{p}_{
\widetilde{\mathcal{R}}_{n},\widetilde{\mathcal{M}}_{n},n}(\lambda )<\alpha
\right) d\lambda \\
&&\text{ }\geq \int_{\Lambda }I\left( p_{n}(\lambda )+\mathcal{D}_{n}<\alpha
\right) d\lambda \overset{d}{\rightarrow }\int_{\Lambda }I\left( \mathcal{U}
(\lambda )<\alpha \right) d\lambda .
\end{eqnarray*}
Therefore $\int_{\Lambda }I(\hat{p}_{\widetilde{\mathcal{R}}_{n},\widetilde{
\mathcal{M}}_{n},n}(\lambda )$ $<$ $\alpha )d\lambda $ $\overset{d}{
\rightarrow }$ $\int_{\Lambda }I(\mathcal{U}(\lambda )$ $<$ $\alpha
)d\lambda $. The claim now follows from the proof of Theorem \ref{th:main}
and the fact that $\{\mathcal{T}(\lambda )\}$\ is weakly dependent in the
sense of Lemma \ref{lm:U}.c. $\mathcal{QED}$.
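As a numerical sanity check on the construction used above and in the next
lemma, the truncated Gaussian draws can be verified to reproduce the limit
covariance (a Python sketch, assuming \texttt{numpy}; the grid points and
sizes are arbitrary):

\begin{verbatim}
# Check E[frakZ(l1) frakZ(l2)] ~ (1-l1^2)(1-l2^2)/(1-l1*l2) by simulation.
import numpy as np

rng = np.random.default_rng(1)
l1, l2, R, M = 0.3, 0.7, 200, 50000
Z = rng.standard_normal((M, R + 1))                  # Z_j, j = 0,...,R
z1 = (1 - l1**2) * (Z @ l1 ** np.arange(R + 1))
z2 = (1 - l2**2) * (Z @ l2 ** np.arange(R + 1))
print(np.mean(z1 * z2))                              # Monte Carlo estimate
print((1 - l1**2) * (1 - l2**2) / (1 - l1 * l2))     # theory: ~ 0.5875
\end{verbatim}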
\begin{lemma}
\label{lm:boot_p_ulln}$\sup_{\lambda \in \Lambda }|\hat{p}_{\widetilde{
\mathcal{R}}_{n},\widetilde{\mathcal{M}}_{n},n}(\lambda )$ $-$ $
p_{n}(\lambda )|$ $\overset{p}{\rightarrow }$ $0$ where $\hat{p}_{\widetilde{
\mathcal{R}}_{n},\widetilde{\mathcal{M}}_{n},n}(\lambda )$ $\equiv $ $1/
\widetilde{\mathcal{M}}_{n}\sum_{i=1}^{\widetilde{\mathcal{M}}_{n}}I(
\mathcal{T}_{\widetilde{\mathcal{R}}_{n},i}(\lambda )$ $\geq $ $\mathcal{T}
_{n}(\lambda ))$.
\end{lemma}

\noindent \textbf{Proof.}\qquad We first state known properties and define
some terms. Assumption \ref{assum:main} applies to $\mathcal{T}_{n}(\lambda
) $ by Theorem \ref{th:garch}, where $\{\mathcal{T}_{n}(\lambda )\}$ $
\Rightarrow ^{\ast }$ $\{\mathcal{T}(\lambda )\}$, $\mathcal{T}(\lambda )$ $
= $ $(\max \{0,\mathcal{Z}(\lambda )\})^{2}$, and $\{\mathcal{Z}(\lambda )\}$
is a zero mean Gaussian process with a version that has \emph{almost surely}
continuous sample paths, and covariance function $(1$ $-$ $\lambda
_{1}^{2})(1$ $-$ $\lambda _{2}^{2})/(1$ $-$ $\lambda _{1}\lambda _{2})$ for $
\lambda _{1},\lambda _{2}$ $\in $ $\Lambda $. Recall we have samples $
\{Z_{j,i}\}_{j=0}^{\widetilde{\mathcal{R}}}$ where $Z_{j,i}$ $\overset{iid}{
\sim }$ $N(0,1)$, and for $(\widetilde{\mathcal{R}},\widetilde{\mathcal{M}})$
$\in $ $\mathbb{N}$:
\begin{equation*}
\mathfrak{Z}_{\widetilde{\mathcal{R}},i}(\lambda )\equiv (1-\lambda
^{2})\sum_{j=0}^{\widetilde{\mathcal{R}}}\lambda ^{j}Z_{j,i}\text{ and }
\mathcal{T}_{\widetilde{\mathcal{R}},i}(\lambda )\equiv \left( \max \{0,
\mathfrak{Z}_{\widetilde{\mathcal{R}},i}(\lambda )\}\right) ^{2}\text{ for }
i=1,...,\widetilde{\mathcal{M}}.
\end{equation*}
$\mathfrak{Z}_{\infty }(\lambda )$\ has the same functional Gaussian
distribution as $\mathcal{Z}(\lambda )$, and therefore $(\max \{0,\mathfrak{Z
}_{\infty }(\lambda )\})^{2}$ is a random draw from the distribution of $
\mathcal{T}(\lambda )$. The distribution $\bar{F}_{0}(c)$ $\equiv $ $P(
\mathcal{T}(\lambda )$ $\geq $ $c)$ is continuous and not a function of $
\lambda $ under Assumption \ref{assum:main}. Hence, the p-value is
identically $p_{n}(\lambda )$ $=$ $\bar{F}_{0}(\mathcal{T}_{n}(\lambda ))$.

Let $\{\mathcal{T}_{1,i}(\lambda )\}_{i=1}^{\widetilde{\mathcal{M}}}$ and $
\mathcal{T}_{2}(\lambda )$ be iid copies of $\mathcal{T}(\lambda )$, and
define
\begin{equation*}
\mathcal{T}_{\widetilde{\mathcal{R}}}^{(\widetilde{\mathcal{M}})}(\lambda
)\equiv \left[ \mathcal{T}_{\widetilde{\mathcal{R}},1}(\lambda ),...,
\mathcal{T}_{\widetilde{\mathcal{R}},\widetilde{\mathcal{M}}}(\lambda )
\right] ^{\prime }\text{ \ and }\mathcal{T}_{1}^{(\widetilde{\mathcal{M}}
)}(\lambda )\equiv \left[ \mathcal{T}_{1,1}(\lambda ),...,\mathcal{T}_{1,
\widetilde{\mathcal{M}}}(\lambda )\right] ^{\prime }.
\end{equation*}
The arguments in \citet[Section 4.1]{Andrews2001} for weak convergence of $\{
\mathcal{T}_{n}(\lambda )\}$ trivially extend to $[\mathcal{T}_{\widetilde{
\mathcal{R}}_{n}}^{(\widetilde{\mathcal{M}})}(\lambda )^{\prime },\mathcal{T}
_{n}(\lambda )]^{\prime }$ in view of independence of the individual
processes, and normality and smoothness of $\mathfrak{Z}_{\widetilde{
\mathcal{R}}_{n},i}(\lambda )$.
Specifically, there exist $\mathcal{T}_{1}^{( \widetilde{\mathcal{M}})}(\lambda )$ and $\mathcal{T}_{2}(\lambda )$ such that: \begin{equation*} \left\{ \left[ \begin{array}{c} \mathcal{T}_{\widetilde{\mathcal{R}}_{n}}^{(\widetilde{\mathcal{M}} )}(\lambda ) \\ \mathcal{T}_{n}(\lambda ) \end{array} \right] :\lambda \in \Lambda \right\} \Rightarrow ^{\ast }\left\{ \left[ \begin{array}{c} \mathcal{T}_{1}^{(\widetilde{\mathcal{M}})}(\lambda ) \\ \mathcal{T}_{2}(\lambda ) \end{array} \right] :\lambda \in \Lambda \right\} \text{ as }n\rightarrow \infty \text{ for each }\widetilde{\mathcal{M}}\in \mathbb{N}. \end{equation*} Hence, by two applications of the continuous mapping theorem, for each $ \widetilde{\mathcal{M}}$ $\in $ $\mathbb{N}$ as $n$ $\rightarrow $ $\infty $: \begin{eqnarray*} &&\left\{ \hat{p}_{\widetilde{\mathcal{R}}_{n},\widetilde{\mathcal{M}} ,n}(\lambda )-\bar{F}_{0}(\mathcal{T}_{n}(\lambda )):\lambda \in \Lambda \right\} =\left\{ \frac{1}{\widetilde{\mathcal{M}}}\sum_{i=1}^{\widetilde{ \mathcal{M}}}I\left( \mathcal{T}_{\widetilde{\mathcal{R}}_{n},i}(\lambda )\geq \mathcal{T}_{n}(\lambda )\right) -\bar{F}_{0}(\mathcal{T}_{n}(\lambda )):\lambda \in \Lambda \right\} \\ &&\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\Rightarrow ^{\ast }\left\{ \frac{1}{\widetilde{ \mathcal{M}}}\sum_{i=1}^{\widetilde{\mathcal{M}}}I\left( \mathcal{T} _{1,i}(\lambda )\geq \mathcal{T}_{2}(\lambda )\right) -\bar{F}_{0}(\mathcal{T }_{2}(\lambda )):\lambda \in \Lambda \right\} \end{eqnarray*} and \begin{equation*} \sup_{\lambda \in \Lambda }\left\vert \hat{p}_{\widetilde{\mathcal{R}}_{n}, \widetilde{\mathcal{M}},n}(\lambda )-\bar{F}_{0}(\mathcal{T}_{n}(\lambda ))\right\vert \overset{d}{\rightarrow }\sup_{\lambda \in \Lambda }\left\vert \frac{1}{\widetilde{\mathcal{M}}}\sum_{i=1}^{\widetilde{\mathcal{M}}}I\left( \mathcal{T}_{1,i}(\lambda )\geq \mathcal{T}_{2}(\lambda )\right) -\bar{F} _{0}(\mathcal{T}_{2}(\lambda ))\right\vert \text{ as }n\rightarrow \infty . \end{equation*} The proof is complete if we show \begin{equation} \sup_{\lambda \in \Lambda }\left\vert \frac{1}{\widetilde{\mathcal{M}}} \sum_{i=1}^{\widetilde{\mathcal{M}}}I(\mathcal{T}_{1,i}(\lambda )\geq \mathcal{T}_{2}(\lambda ))-\bar{F}_{0}(\mathcal{T}_{2}(\lambda ))\right\vert \overset{p}{\rightarrow }0\text{ as }\widetilde{\mathcal{M}}\rightarrow \infty , \label{ulln_I} \end{equation} since this means $\sup_{\lambda \in \Lambda }|\hat{p}_{\widetilde{\mathcal{R} }_{n},\widetilde{\mathcal{M}},n}(\lambda )$ $-$ $\bar{F}_{0}(\mathcal{T} _{n}(\lambda ))|$ can be made arbitrarily close to zero in probability by choice of $\widetilde{\mathcal{M}}$. Note that by construction $\bar{F}_{0}( \mathcal{T}_{2}(\lambda ))$ $=$ $E[I(\mathcal{T}_{1,i}(\lambda )$ $\geq $ $ \mathcal{T}_{2}(\lambda ))|\mathcal{T}_{2}(\lambda )]$ since $\mathcal{T} _{1,i}(\lambda )$ and $\mathcal{T}_{2}(\lambda )$\ are iid copies of $ \mathcal{T}(\lambda )$. We therefore derive a uniform LLN for \begin{equation*} \mathcal{I}_{i}(\lambda )\equiv I\left( \mathcal{T}_{1,i}(\lambda )\geq \mathcal{T}_{2}(\lambda )\right) -E\left[ I\left( \mathcal{T}_{1,i}(\lambda )\geq \mathcal{T}_{2}(\lambda )\right) |\mathcal{T}_{2}(\lambda )\right] . 
\end{equation*}
Since $(\mathcal{T}_{1,i}(\lambda ),\mathcal{T}_{2}(\lambda ))$\ are iid
copies of $\mathcal{T}(\lambda )$, it follows $E[\bar{F}_{0}(\mathcal{T}
_{2}(\lambda ))]$ $=$ $P(\mathcal{T}_{1,i}(\lambda )$ $\geq $ $\mathcal{T}
_{2}(\lambda ))$ hence:
\begin{equation*}
E\left[ \mathcal{I}_{i}(\lambda )\right] =P\left( \mathcal{T}_{1,i}(\lambda
)\geq \mathcal{T}_{2}(\lambda )\right) -E\left[ \bar{F}_{0}(\mathcal{T}
_{2}(\lambda ))\right] =P\left( \mathcal{T}_{1,i}(\lambda )\geq \mathcal{T}
_{2}(\lambda )\right) -P\left( \mathcal{T}_{1,i}(\lambda )\geq \mathcal{T}
_{2}(\lambda )\right) =0.
\end{equation*}
Second, $1/\widetilde{\mathcal{M}}\sum_{i=1}^{\widetilde{\mathcal{M}}}
\mathcal{I}_{i}(\lambda )$ $\overset{p}{\rightarrow }$ $0$ as $\widetilde{
\mathcal{M}}$ $\rightarrow $ $\infty $ pointwise on $\Lambda $ since $
\mathcal{I}_{i}(\lambda )$ is iid, and $E[\mathcal{I}_{i}(\lambda )]$ $=$ $0$
.

It remains to demonstrate $\{\mathcal{I}_{i}(\lambda )$ $:$ $\lambda $ $\in
$ $\Lambda \}$ is stochastically equicontinuous: $\forall (\epsilon ,\eta )$
$>$ $0$ there exists $\delta $ $>$ $0$ such that (see, e.g., Pollard
\citeyear{Pollard1984}, and Billingsley \citeyear{Billingsley1999}, Chap.
7):
\begin{equation*}
P\left( \sup_{\lambda ,\tilde{\lambda}\in \Lambda :||\lambda -\tilde{\lambda}
||\leq \delta }\left\vert \frac{1}{\widetilde{\mathcal{M}}}\sum_{i=1}^{
\widetilde{\mathcal{M}}}\left\{ \mathcal{I}_{i}(\lambda )-\mathcal{I}_{i}(
\tilde{\lambda})\right\} \right\vert >\eta \right) <\varepsilon .
\end{equation*}
The function $\mathcal{I}_{i}$ $:$ $\Lambda $ $\rightarrow $ $[-1,1]$ is not
continuous. We therefore adapt arguments developed in \citet[proof of
Theorem 2.1 and Lemma 2.1]{ArconesYu1994}, which requires the notion of the
\textit{V-C subgraph} class of functions, denoted $\mathcal{V}(\mathcal{C})$.
See \cite{VC1971} and \citet[Section 7]{Dudley1978}, and see \citet[Chap.
II.4]{Pollard1984} for the closely related \textit{polynomial discrimination}
class. We use the following well known properties: $\mathcal{V}(\mathcal{C})$
contains continuous functions and the indicator function; $\mathcal{V}(
\mathcal{C})$ contains linear combinations of $\mathcal{V}(\mathcal{C})$
functions; and $\mathcal{V}(\mathcal{C})$ transforms of $\mathcal{V}(
\mathcal{C})$ functions are in $\mathcal{V}(\mathcal{C})$. Cf. \cite{VC1971}
, \citet[Section 7]{Dudley1978} and \cite{Pollard1990}.

By using the approach of \cite{ArconesYu1994}, we may show that $1/
\widetilde{\mathcal{M}}\sum_{i=1}^{\widetilde{\mathcal{M}}}\mathcal{I}
_{i}(\lambda )$ is stochastically equicontinuous. $\mathcal{T}_{1,i}(\lambda
)$ and $\mathcal{T}_{2}(\lambda )$ are, respectively, versions of $(\max \{0,
\mathfrak{Z}_{1,\infty ,i}(\lambda )\})^{2}$ and \linebreak $(\max \{0,
\mathfrak{Z}_{2,\infty }(\lambda )\})^{2}$, where $\mathfrak{Z}_{1,\infty
,i}(\lambda )$ and $\mathfrak{Z}_{2,\infty }(\lambda )$ are independent
copies of $\mathfrak{Z}_{\infty }(\lambda )$, and $\mathfrak{Z}_{\infty
}(\lambda )$ $\equiv $ $(1$ $-$ $\lambda ^{2})\sum_{j=0}^{\infty }\lambda
^{j}Z_{j}$ is zero mean Gaussian with the same covariance function as $
\mathcal{Z}(\lambda )$. By construction $\mathfrak{Z}_{\infty }(\lambda )$
is continuous in $\lambda $, hence it lies in $\mathcal{V}(\mathcal{C})$.
Further, $(\max \{0,\cdot \})^{2}$ lies in $\mathcal{V}(\mathcal{C})$.
Therefore $(\max \{0,\mathfrak{Z}_{\infty }(\lambda )\})^{2}$ lies in $
\mathcal{V}(\mathcal{C})$, which implies $\mathcal{T}_{1,i}(\lambda )$ and $
\mathcal{T}_{2}(\lambda )$ have versions that lie in $\mathcal{V}(\mathcal{C}
)$. Hence $\mathcal{T}_{1,i}(\lambda )$ $-$ $\mathcal{T}_{2}(\lambda )$ has
a version in $\mathcal{V}(\mathcal{C})$. Therefore $I(\mathcal{T}
_{1,i}(\lambda )$ $-$ $\mathcal{T}_{2}(\lambda )$ $\geq $ $0)$ has a version
in $\mathcal{V}(\mathcal{C})$. Moreover, the continuous transform $\bar{F}
_{0}(\mathcal{T}_{2}(\lambda ))$ lies in $\mathcal{V}(\mathcal{C})$. Hence
the difference $\mathcal{I}_{i}(\lambda )$ $\equiv $ $I(\mathcal{T}
_{1,i}(\lambda )$ $\geq $ $\mathcal{T}_{2}(\lambda ))$ $-$ $\bar{F}_{0}(
\mathcal{T}_{2}(\lambda ))$ lies in $\mathcal{V}(\mathcal{C})$. This, and
boundedness of $\mathcal{I}_{i}(\lambda )$, imply that the covering numbers\
with respect to the $L_{p}$-metric satisfy, for any $p$ $>$ $2$, $\mathcal{N}
(\varepsilon ,\Lambda ,||\cdot ||_{p})$ $<$ $a\varepsilon ^{-b}$ for all $
\varepsilon $ $\in $ $(0,1)$ and some $a,b$ $>$ $0$ that may depend on $p$
(e.g. Lemma 7.13 in Dudley, \citeyear{Dudley1978}, and Lemma II.25 in
Pollard, \citeyear{Pollard1984}). Further, $\mathcal{I}_{i}(\lambda )$ is
uniformly bounded and iid. Therefore $\{\mathcal{I}_{i}(\lambda )$ $:$ $
\lambda $ $\in $ $\Lambda \}$ is stochastically equicontinuous by adapting
the proof of Lemma 2.1 in \cite{ArconesYu1994}: see especially \citet[eq.
(2.13)]{ArconesYu1994}. $\mathcal{QED}.$

\setstretch{1}

\begin{table}[h]
\caption{Functional Form Test Rejection Frequencies}
\label{tbl:funcform}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{lccccccccccccc}
\hline\hline
\multicolumn{14}{c}{iid data: linear vs. quadratic} \\ \hline\hline
& \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=100$}
& \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=250$} &
\multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=500$} \\ \hline\hline
Hyp$^{a}$ & \multicolumn{1}{|c}{Test} & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ \\ \hline
& \multicolumn{1}{|l|}{sup-$p_{n}$ $^{b}$} & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.008$^{c}$} & .058 & .108 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.000} & .039 & .094 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.009} & .043 & .091 \\ \cline{2-14}
& \multicolumn{1}{|l|}{sup-$\mathcal{T}_{n}$ $^{d}$} & \multicolumn{1}{|c}{}
& \multicolumn{1}{|c}{.004} & .037 & .097 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.008} & .041 & .083 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.019} & .058 & .096 \\
$H_{0}$ & \multicolumn{1}{|l|}{aver-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{
} & \multicolumn{1}{|c}{.014} & .057 & .116 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.007} & .040 & .088 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.018} & .071 & .109 \\
& \multicolumn{1}{|l|}{rand-$\mathcal{T}_{n}$ $^{e}$} & \multicolumn{1}{|c}{}
& \multicolumn{1}{|c}{.014} & .056 & .117 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.011} & .045 & .094 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.021} & .059 & .109 \\ \cline{2-14}
& \multicolumn{1}{|l|}{ICM$^{f}$} & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.001} & .033 & .086 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.001} & .014 & .075 & \multicolumn{1}{|c}{} &
\multicolumn{1}{|c}{.003} & .062 & .086 \\ \cline{2-14}
&
\multicolumn{1}{|l|}{PVOT$^{g}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.013} & .056 & .116 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.010} & .044 & .092 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.014} & .063 & .108 \\ \hline\hline & \multicolumn{1}{|l|}{sup-$p_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.042} & .162 & .258 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.137} & .337 & .473 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.339} & .597 & .695 \\ \cline{2-14} & \multicolumn{1}{|l|}{sup-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.051} & .156 & .251 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.160} & .331 & .512 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.354} & .539 & .743 \\ $H_{1}$ & \multicolumn{1}{|l|}{aver-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{ } & \multicolumn{1}{|c}{.051} & .211 & .316 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.193} & .377 & .576 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.412} & .643 & .776 \\ & \multicolumn{1}{|l|}{rand-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.051} & .221 & .316 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.212} & .392 & .586 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.404} & .668 & .798 \\ \cline{2-14} & \multicolumn{1}{|l|}{ICM} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.001} & .149 & .329 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.043} & .330 & .606 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.163} & .678 & .809 \\ \cline{2-14} & \multicolumn{1}{|l|}{PVOT} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.058} & .224 & .320 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.232} & .391 & .604 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.404} & .614 & .783 \\ \hline\hline & & & & & & & & & & & & & \\ \hline\hline \multicolumn{14}{c}{time series data: AR vs. 
SETAR} \\ \hline\hline & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=100$} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=250$} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=500$} \\ \hline\hline Hyp & \multicolumn{1}{|c}{Test} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ \\ \hline & \multicolumn{1}{|l|}{sup-$p_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.022} & .075 & .158 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.008} & .052 & .113 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.020} & .064 & .116 \\ \cline{2-14} & \multicolumn{1}{|l|}{sup-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.001} & .003 & .039 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.002} & .012 & .036 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.003} & .052 & .124 \\ $H_{0}$ & \multicolumn{1}{|l|}{aver-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{ } & \multicolumn{1}{|c}{.002} & .022 & .082 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.002} & .013 & .066 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.008} & .072 & .132 \\ & \multicolumn{1}{|l|}{rand-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.021} & .113 & .193 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.001} & .03 & .114 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.018} & .082 & .143 \\ \cline{2-14} & \multicolumn{1}{|l|}{ICM$^{f}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.002} & .058 & .132 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.000} & .030 & .066 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.005} & .038 & .089 \\ \cline{2-14} & \multicolumn{1}{|l|}{PVOT$^{g}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.016} & .076 & .145 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.011} & .047 & .115 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.016} & .061 & .114 \\ \hline\hline & \multicolumn{1}{|l|}{sup-$p_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.108} & .596 & .845 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.925} & 1.00 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ \cline{2-14} & \multicolumn{1}{|l|}{sup-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.021} & .209 & .561 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.685} & 1.00 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ $H_{1}$ & \multicolumn{1}{|l|}{aver-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{ } & \multicolumn{1}{|c}{.062} & .412 & .726 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.888} & 1.00 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ & \multicolumn{1}{|l|}{rand-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.135} & .592 & .846 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.960} & 1.00 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ \cline{2-14} & \multicolumn{1}{|l|}{ICM} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.004} & .643 & .866 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.108} & .928 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.712} & 1.00 & 1.00 \\ \cline{2-14} & \multicolumn{1}{|l|}{PVOT} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.135} & .647 & .883 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.957} & 1.00 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ \hline\hline \end{tabular}} \end{center} \par {\small a. 
$H_{0}$ is $E[\epsilon |x]=0$. b. sup-$p_{n}$ is the $\sup_{\lambda \in
\Lambda }p_{n}(\lambda )$ test. c. Rejection frequency at the given level.
Empirical power is not size-adjusted. d. sup-$\mathcal{T}_{n}$ and aver-$
\mathcal{T}_{n}$ tests are based on a wild bootstrapped p-value. e. rand-$
\mathcal{T}_{n}$ is an asymptotic $\chi^{2}$ test based on $\mathcal{T}
_{n}(\lambda )$ with randomized $\lambda $\ on [0,1]. f. The ICM test is
based on critical value upper bounds in Bierens and Ploberger (1997). g.
PVOT: \textit{p-value occupation time} test.}
\end{table}

\begin{table}[h]
\caption{A. STAR Test Rejection Frequencies: Sample Size $n=100$}
\label{tbl:starn100}
\begin{center}
{\small
\begin{tabular}{l|ccc|c|ccc|c|ccc}
\hline\hline
& \multicolumn{3}{|c|}{$H_{0}$: LSTAR} & & \multicolumn{3}{|c|}{$H_{1}$-weak}
& & \multicolumn{3}{|c}{$H_{1}$-strong} \\ \hline
& 1\% & 5\% & 10\% & & 1\% & 5\% & 10\% & & 1\% & 5\% & 10\% \\ \hline\hline
& \multicolumn{11}{c}{Strong Identification: $\beta _{n}=.3$} \\ \hline\hline
sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.025} & \multicolumn{1}{l}{.094}
& \multicolumn{1}{l|}{.163} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{
.147} & \multicolumn{1}{l}{.280} & \multicolumn{1}{l|}{.365} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.757} & \multicolumn{1}{l}{.872}
& \multicolumn{1}{l}{.907} \\
aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.025} & \multicolumn{1}{l}{.078}
& \multicolumn{1}{l|}{.135} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{
.087} & \multicolumn{1}{l}{.209} & \multicolumn{1}{l|}{.289} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.552} & \multicolumn{1}{l}{.726}
& \multicolumn{1}{l}{.804} \\ \hline
rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.011} & \multicolumn{1}{l}{.052}
& \multicolumn{1}{l|}{.096} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{
.053} & \multicolumn{1}{l}{.143} & \multicolumn{1}{l|}{.232} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.446} & \multicolumn{1}{l}{.635}
& \multicolumn{1}{l}{.732} \\
rand LF & \multicolumn{1}{|l}{.007} & \multicolumn{1}{l}{.015} &
\multicolumn{1}{l|}{.038} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.013
} & \multicolumn{1}{l}{.066} & \multicolumn{1}{l|}{.141} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.442} & \multicolumn{1}{l}{.553}
& \multicolumn{1}{l}{.661} \\
rand ICS-1 & \multicolumn{1}{|l}{.013} & \multicolumn{1}{l}{.050} &
\multicolumn{1}{l|}{.089} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.028
} & \multicolumn{1}{l}{.089} & \multicolumn{1}{l|}{.170} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.379} & \multicolumn{1}{l}{.593}
& \multicolumn{1}{l}{.692} \\ \hline
sup $p_{n}$ & .009 & .039 & .068 & & .036 & .118 & .209 & & .378 & .554 &
.656 \\
sup $p_{n}$ LF & .006 & .009 & .032 & & .012 & .057 & .120 & & .262 & .457 &
.572 \\
sup $p_{n}$ ICS-1 & .006 & .036 & .061 & & .020 & .081 & .138 & & .310 &
.506 & .617 \\ \hline
PVOT & \multicolumn{1}{|l}{.015} & \multicolumn{1}{l}{.065} &
\multicolumn{1}{l|}{.124} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.101
} & \multicolumn{1}{l}{.257} & \multicolumn{1}{l|}{.335} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.727} & \multicolumn{1}{l}{.859}
& \multicolumn{1}{l}{.883} \\
PVOT LF & \multicolumn{1}{|l}{.007} & \multicolumn{1}{l}{.014} &
\multicolumn{1}{l|}{.052} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.026
} & \multicolumn{1}{l}{.121} & \multicolumn{1}{l|}{.208} &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.552} & \multicolumn{1}{l}{.781}
& \multicolumn{1}{l}{.817} \\
PVOT ICS-1 &
\multicolumn{1}{|l}{.007} & \multicolumn{1}{l}{.043} & \multicolumn{1}{l|}{.073} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.042 } & \multicolumn{1}{l}{.153} & \multicolumn{1}{l|}{.237} & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l}{.622} & \multicolumn{1}{l}{.815} & \multicolumn{1}{l}{.842} \\ \hline\hline & \multicolumn{11}{c}{Weak Identification: $\beta _{n}=.3/\sqrt{n}$} \\ \hline\hline sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.064} & \multicolumn{1}{l}{.155} & \multicolumn{1}{l|}{.239} & & \multicolumn{1}{|l}{.337} & \multicolumn{1}{l}{.574} & \multicolumn{1}{l|}{.681} & & \multicolumn{1}{|l}{.929} & \multicolumn{1}{l}{.978} & \multicolumn{1}{l}{ .993} \\ aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.057} & \multicolumn{1}{l}{.146} & \multicolumn{1}{l|}{.219} & & \multicolumn{1}{|l}{.215} & \multicolumn{1}{l}{.430} & \multicolumn{1}{l|}{.554} & & \multicolumn{1}{|l}{.739} & \multicolumn{1}{l}{.888} & \multicolumn{1}{l}{ .932} \\ \hline rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.027} & \multicolumn{1}{l}{.083} & \multicolumn{1}{l|}{.175} & & \multicolumn{1}{|l}{.164} & \multicolumn{1}{l}{.343} & \multicolumn{1}{l|}{.474} & & \multicolumn{1}{|l}{.604} & \multicolumn{1}{l}{.810} & \multicolumn{1}{l}{ .870} \\ rand LF & .012 & .042 & .093 & & .060 & .161 & .308 & & .467 & .685 & .794 \\ rand ICS-1 & .012 & .046 & .104 & & .116 & .261 & .382 & & .545 & .749 & .841 \\ \hline sup $p_{n}$ & .019 & .087 & .145 & & .107 & .253 & .411 & & .493 & .700 & .785 \\ sup $p_{n}$ LF & .001 & .061 & .084 & & .036 & .124 & .230 & & .351 & .598 & .698 \\ sup $p_{n}$ ICS-1 & .001 & .065 & .085 & & .088 & .193 & .335 & & .454 & .663 & .756 \\ \hline PVOT & \multicolumn{1}{|l}{.038} & \multicolumn{1}{l}{.127} & \multicolumn{1}{l|}{.196} & & \multicolumn{1}{|l}{.328} & \multicolumn{1}{l}{.542} & \multicolumn{1}{l|}{.591} & & \multicolumn{1}{|l}{.893} & \multicolumn{1}{l}{.968} & \multicolumn{1}{l}{ .950} \\ PVOT LF & .015 & .049 & .108 & & \multicolumn{1}{|l}{.108} & \multicolumn{1}{l}{.320} & \multicolumn{1}{l|}{.398} & & \multicolumn{1}{|l}{.710} & \multicolumn{1}{l}{.911} & \multicolumn{1}{l}{ .916} \\ PVOT ICS-1 & \multicolumn{1}{|l}{.014} & \multicolumn{1}{l}{.049} & \multicolumn{1}{l|}{.107} & & \multicolumn{1}{|l}{.221} & \multicolumn{1}{l}{.435} & \multicolumn{1}{l|}{.486} & & \multicolumn{1}{|l}{.830} & \multicolumn{1}{l}{.942} & \multicolumn{1}{l}{ .932} \\ \hline\hline & \multicolumn{11}{c}{Non-Identification: $\beta _{n}=\beta _{0}=0$} \\ \hline\hline sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.066} & \multicolumn{1}{l}{.164} & \multicolumn{1}{l|}{.249} & & \multicolumn{1}{|l}{.358} & \multicolumn{1}{l}{.584} & \multicolumn{1}{l|}{.696} & & \multicolumn{1}{|l}{.902} & \multicolumn{1}{l}{.970} & \multicolumn{1}{l}{ .983} \\ aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.062} & \multicolumn{1}{l}{.148} & \multicolumn{1}{l|}{.226} & & \multicolumn{1}{|l}{.233} & \multicolumn{1}{l}{.438} & \multicolumn{1}{l|}{.548} & & \multicolumn{1}{|l}{.716} & \multicolumn{1}{l}{.872} & \multicolumn{1}{l}{ .911} \\ \hline rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.044} & \multicolumn{1}{l}{.107} & \multicolumn{1}{l|}{.186} & & \multicolumn{1}{|l}{.184} & \multicolumn{1}{l}{.380} & \multicolumn{1}{l|}{.505} & & \multicolumn{1}{|l}{.634} & \multicolumn{1}{l}{.793} & \multicolumn{1}{l}{ .864} \\ rand LF & .013 & .046 & .115 & & .069 & .191 & .327 & & .498 & .725 & .818 \\ rand ICS-1 & .013 & .047 & .116 & & .137 & .298 & .481 & & .583 & .769 & .847 \\ \hline sup $p_{n}$ & .018 & .080 & .167 & & .117 & .272 
& .363 & & .514 & .710 & .807 \\ sup $p_{n}$ LF & .011 & .043 & .083 & & .042 & .122 & .221 & & .383 & .612 & .740 \\ sup $p_{n}$ ICS-1 & .011 & .044 & .086 & & .093 & .205 & .293 & & .464 & .683 & .783 \\ \hline PVOT & \multicolumn{1}{|l}{.049} & \multicolumn{1}{l}{.134} & \multicolumn{1}{l|}{.190} & & \multicolumn{1}{|l}{.322} & \multicolumn{1}{l}{.554} & \multicolumn{1}{l|}{.624} & & \multicolumn{1}{|l}{.890} & \multicolumn{1}{l}{.962} & \multicolumn{1}{l}{ .957} \\ PVOT LF & .015 & .061 & .117 & & .122 & .322 & .415 & & .740 & .911 & .936 \\ PVOT ICS-1 & .015 & .057 & .116 & & .253 & .464 & .570 & & .847 & .939 & .954 \\ \hline\hline \end{tabular} } \end{center} \par {\small Numerical values are rejection frequency at the given level. LSTAR is Logistic STAR. Empirical power is not size-adjusted. \textit{sup} $ \mathcal{T}_{n}$ and \textit{ave} $\mathcal{T}_{n}$ tests are based on a wild bootstrapped p-value. \textit{rand} $\mathcal{T}_{n}$: $\mathcal{T} _{n}(\lambda )$ with randomized $\lambda $\ on [1,5]. \textit{sup} ${p}_{n}$ is the supremum p-value test where p-values are computed from the chi-squared distribution. PVOT uses the chi-squared distribution. LF implies the least favorable p-value is used, and ICS-$1$ implies the type 1 identification category selection p-value is used with threshold $\kappa _{n} $ $=$ $\ln (\ln (n))$.} \end{table} \addtocounter{table}{-1} \begin{table}[h] \caption{B. STAR Test Rejection Frequencies: Sample Size $n=250$} \label{tbl:starn250} \begin{center} {\small \begin{tabular}{l|ccc|c|ccc|c|ccc} \hline\hline & \multicolumn{3}{|c|}{$H_{0}$: LSTAR} & & \multicolumn{3}{|c|}{$H_{1}$-weak } & & \multicolumn{3}{|c}{$H_{1}$-strong} \\ \hline & 1\% & 5\% & 10\% & & 1\% & 5\% & 10\% & & 1\% & 5\% & 10\% \\ \hline\hline & \multicolumn{11}{c}{Strong Identification: $\beta _{n}=.3$} \\ \hline\hline \multicolumn{1}{l|}{sup $\mathcal{T}_{n}$} & \multicolumn{1}{|l}{.018} & \multicolumn{1}{l}{.088} & \multicolumn{1}{l|}{.163} & & \multicolumn{1}{|l}{.359} & \multicolumn{1}{l}{.468} & \multicolumn{1}{l|}{ .551} & & \multicolumn{1}{|l}{.953} & \multicolumn{1}{l}{.984} & \multicolumn{1}{l}{.990} \\ \multicolumn{1}{l|}{aver $\mathcal{T}_{n}$} & \multicolumn{1}{|l}{.014} & \multicolumn{1}{l}{.077} & \multicolumn{1}{l|}{.133} & & \multicolumn{1}{|l}{.262} & \multicolumn{1}{l}{.387} & \multicolumn{1}{l|}{ .468} & & \multicolumn{1}{|l}{.873} & \multicolumn{1}{l}{.949} & \multicolumn{1}{l}{.975} \\ \hline \multicolumn{1}{l|}{rand $\mathcal{T}_{n}$} & \multicolumn{1}{|l}{.014} & \multicolumn{1}{l}{.064} & \multicolumn{1}{l|}{.126} & & \multicolumn{1}{|l}{.165} & \multicolumn{1}{l}{.299} & \multicolumn{1}{l|}{ .396} & & \multicolumn{1}{|l}{.793} & \multicolumn{1}{l}{.912} & \multicolumn{1}{l}{.952} \\ \multicolumn{1}{l|}{rand LF} & .001 & .010 & .025 & & .067 & .235 & .368 & & .688 & .888 & .936 \\ \multicolumn{1}{l|}{rand ICS-1} & .008 & .031 & .077 & & .076 & .244 & .375 & & .762 & .902 & .947 \\ \hline sup $p_{n}$ & .003 & .039 & .066 & & .103 & .264 & .358 & & .743 & .876 & .917 \\ sup $p_{n}$ LF & .000 & .007 & .021 & & .032 & .214 & .303 & & .605 & .838 & .899 \\ \multicolumn{1}{l|}{sup $p_{n}$ ICS-1} & .003 & .035 & .063 & & .038 & .217 & .316 & & .714 & .870 & .912 \\ \hline \multicolumn{1}{l|}{PVOT} & \multicolumn{1}{|l}{.016} & \multicolumn{1}{l}{ .067} & \multicolumn{1}{l|}{.125} & & \multicolumn{1}{|l}{.328} & \multicolumn{1}{l}{.437} & \multicolumn{1}{l|}{.517} & & \multicolumn{1}{|l}{.952} & \multicolumn{1}{l}{.983} & \multicolumn{1}{l}{ .991} \\ 
\multicolumn{1}{l|}{PVOT LF} & .004 & .020 & .041 & & .132 & .348 & .417 & & .938 & .972 & .976 \\ \multicolumn{1}{l|}{PVOT ICS-1} & .011 & .051 & .108 & & .147 & .370 & .433 & & .947 & .978 & .985 \\ \hline\hline & \multicolumn{11}{c}{Weak Identification: $\beta _{n}=.3/\sqrt{n}$} \\ \hline\hline \multicolumn{1}{l|}{sup $\mathcal{T}_{n}$} & \multicolumn{1}{|l}{.051} & \multicolumn{1}{l}{.139} & \multicolumn{1}{l|}{.224} & & \multicolumn{1}{|l}{.764} & \multicolumn{1}{l}{.922} & \multicolumn{1}{l|}{ .957} & & \multicolumn{1}{|l}{.992} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{1.00} \\ \multicolumn{1}{l|}{aver $\mathcal{T}_{n}$} & \multicolumn{1}{|l}{.046} & \multicolumn{1}{l}{.118} & \multicolumn{1}{l|}{.215} & & \multicolumn{1}{|l}{.539} & \multicolumn{1}{l}{.779} & \multicolumn{1}{l|}{ .853} & & \multicolumn{1}{|l}{.969} & \multicolumn{1}{l}{.992} & \multicolumn{1}{l}{.998} \\ \hline \multicolumn{1}{l|}{rand $\mathcal{T}_{n}$} & \multicolumn{1}{|l}{.027} & \multicolumn{1}{l}{.086} & \multicolumn{1}{l|}{.169} & & \multicolumn{1}{|l}{.451} & \multicolumn{1}{l}{.695} & \multicolumn{1}{l|}{ .785} & & \multicolumn{1}{|l}{.911} & \multicolumn{1}{l}{.979} & \multicolumn{1}{l}{.993} \\ \multicolumn{1}{l|}{rand LF} & .018 & .060 & .097 & & .180 & .481 & .641 & & .851 & .961 & .980 \\ \multicolumn{1}{l|}{rand ICS-1} & .018 & .058 & .098 & & .298 & .633 & .770 & & .926 & .975 & .991 \\ \hline sup $p_{n}$ & .017 & .056 & .097 & & .330 & .615 & .712 & & .858 & .975 & .991 \\ sup $p_{n}$ LF & .008 & .026 & .067 & & .115 & .416 & .587 & & .698 & .926 & .978 \\ \multicolumn{1}{l|}{sup $p_{n}$ ICS-1} & .008 & .030 & .072 & & .294 & .580 & .687 & & .852 & .975 & .991 \\ \hline \multicolumn{1}{l|}{PVOT} & \multicolumn{1}{|l}{.051} & \multicolumn{1}{l}{ .122} & \multicolumn{1}{l|}{.201} & & \multicolumn{1}{|l}{.740} & \multicolumn{1}{l}{.894} & \multicolumn{1}{l|}{.934} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ \multicolumn{1}{l|}{PVOT LF} & .014 & .061 & .110 & & .380 & .708 & .805 & & .990 & 1.00 & 1.00 \\ \multicolumn{1}{l|}{PVOT ICS-1} & .015 & .060 & .111 & & .618 & .848 & .878 & & .999 & 1.00 & 1.00 \\ \hline\hline & \multicolumn{11}{c}{Non-Identification: $\beta _{n}=\beta _{0}=0$} \\ \hline\hline sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.061} & \multicolumn{1}{l}{.152} & \multicolumn{1}{l|}{.223} & & \multicolumn{1}{|l}{.751} & \multicolumn{1}{l}{.922} & \multicolumn{1}{l|}{.956} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.054} & \multicolumn{1}{l}{.145} & \multicolumn{1}{l|}{.200} & & \multicolumn{1}{|l}{.526} & \multicolumn{1}{l}{.765} & \multicolumn{1}{l|}{.849} & & \multicolumn{1}{|l}{.975} & \multicolumn{1}{l}{.996} & \multicolumn{1}{l}{ .999} \\ \hline rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.036} & \multicolumn{1}{l}{.123} & \multicolumn{1}{l|}{.184} & & \multicolumn{1}{|l}{.417} & \multicolumn{1}{l}{.696} & \multicolumn{1}{l|}{.803} & & \multicolumn{1}{|l}{.025} & \multicolumn{1}{l}{.976} & \multicolumn{1}{l}{ .988} \\ rand LF & .008 & .047 & .108 & & .205 & .504 & .655 & & .838 & .955 & .973 \\ rand ICS-1 & .008 & .049 & .109 & & .411 & .653 & .770 & & .923 & .977 & .989 \\ \hline sup $p_{n}$ & .026 & .068 & .123 & & .380 & .650 & .772 & & .850 & .946 & .968 \\ sup $p_{n}$ LF & .008 & .038 & .079 & & .132 & .430 & .592 & & .728 & .915 & .946 \\ sup $p_{n}$ ICS-1 & .008 & .004 & .081 & & .340 & .629 & .750 & & .842 & .945 & .968 \\ \hline PVOT 
& \multicolumn{1}{|l}{.036} & \multicolumn{1}{l}{.145} & \multicolumn{1}{l|}{.211} & & \multicolumn{1}{|l}{.732} & \multicolumn{1}{l}{.885} & \multicolumn{1}{l|}{.930} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ PVOT LF & .010 & .058 & .114 & & .373 & .717 & .806 & & .990 & 1.00 & 1.00 \\ PVOT ICS-1 & .010 & .059 & .116 & & .682 & .853 & .898 & & 1.00 & 1.00 & 1.00 \\ \hline\hline \end{tabular} } \end{center} \par {\small Numerical values are rejection frequency at the given level. LSTAR is Logistic STAR. Empirical power is not size-adjusted. \textit{sup} $ \mathcal{T}_{n}$ and \textit{ave} $\mathcal{T}_{n}$ tests are based on a wild bootstrapped p-value. \textit{rand} $\mathcal{T}_{n}$: $\mathcal{T} _{n}(\lambda )$ with randomized $\lambda $\ on [1,5]. \textit{sup} ${p}_{n}$ is the supremum p-value test where p-values are computed from the chi-squared distribution. PVOT uses the chi-squared distribution. LF implies the least favorable p-value is used, and ICS-$1$ implies the type 1 identification category selection p-value is used with threshold $\kappa _{n} $ $=$ $\ln (\ln (n))$.} \end{table} \addtocounter{table}{-1} \begin{table}[h] \caption{C. STAR Test Rejection Frequencies: Sample Size $n=500$} \label{tbl:starn500} \begin{center} {\small \begin{tabular}{l|ccc|c|ccc|c|ccc} \hline\hline & \multicolumn{3}{|c|}{$H_{0}$: LSTAR} & & \multicolumn{3}{|c|}{$H_{1}$-weak } & & \multicolumn{3}{|c}{$H_{1}$-strong} \\ \hline & 1\% & 5\% & 10\% & & 1\% & 5\% & 10\% & & 1\% & 5\% & 10\% \\ \hline\hline & \multicolumn{11}{c}{Strong Identification: $\beta _{n}=.3$} \\ \hline\hline sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.029} & \multicolumn{1}{l}{.069} & \multicolumn{1}{l|}{.153} & & \multicolumn{1}{|l}{.441} & \multicolumn{1}{l}{.590} & \multicolumn{1}{l|}{.676} & & \multicolumn{1}{|l}{.997} & \multicolumn{1}{l}{.999} & \multicolumn{1}{l}{ .999} \\ aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.022} & \multicolumn{1}{l}{.055} & \multicolumn{1}{l|}{.120} & & \multicolumn{1}{|l}{.382} & \multicolumn{1}{l}{.546} & \multicolumn{1}{l|}{.624} & & \multicolumn{1}{|l}{.988} & \multicolumn{1}{l}{.996} & \multicolumn{1}{l}{ .997} \\ \hline rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.008} & \multicolumn{1}{l}{.049} & \multicolumn{1}{l|}{.098} & & \multicolumn{1}{|l}{.328} & \multicolumn{1}{l}{.488} & \multicolumn{1}{l|}{.598} & & \multicolumn{1}{|l}{.976} & \multicolumn{1}{l}{.999} & \multicolumn{1}{l}{ .996} \\ rand LF & .001 & .018 & .042 & & .227 & .450 & .565 & & .967 & .989 & .998 \\ rand ICS-1 & .009 & .046 & .096 & & .230 & .449 & .565 & & .974 & .990 & .998 \\ \hline sup $p_{n}$ & .005 & .039 & .078 & & .295 & .457 & .536 & & .961 & .990 & .997 \\ sup $p_{n}$ LF & .002 & .010 & .033 & & .223 & .427 & .528 & & .949 & .985 & .997 \\ sup $p_{n}$ ICS-1 & .005 & .039 & .077 & & .228 & .432 & .528 & & .962 & .990 & .997 \\ \hline PVOT & \multicolumn{1}{|l}{.014} & \multicolumn{1}{l}{.055} & \multicolumn{1}{l|}{.115} & & \multicolumn{1}{|l}{.423} & \multicolumn{1}{l}{.568} & \multicolumn{1}{l|}{.655} & & \multicolumn{1}{|l}{.996} & \multicolumn{1}{l}{.999} & \multicolumn{1}{l}{ .999} \\ PVOT LF & .002 & .023 & .051 & & .311 & .509 & .618 & & .995 & .998 & 1.00 \\ PVOT ICS-1 & .013 & .058 & .106 & & .314 & .510 & .618 & & .995 & .998 & 1.00 \\ \hline\hline & \multicolumn{11}{c}{Weak Identification: $\beta _{n}=.3/\sqrt{n}$} \\ \hline\hline sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.044} & \multicolumn{1}{l}{.134} & \multicolumn{1}{l|}{.184} & & 
\multicolumn{1}{|l}{.984} & \multicolumn{1}{l}{.998} & \multicolumn{1}{l|}{1.00} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.029} & \multicolumn{1}{l}{.125} & \multicolumn{1}{l|}{.176} & & \multicolumn{1}{|l}{.883} & \multicolumn{1}{l}{.968} & \multicolumn{1}{l|}{.989} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ \hline rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.032} & \multicolumn{1}{l}{.096} & \multicolumn{1}{l|}{.162} & & \multicolumn{1}{|l}{.817} & \multicolumn{1}{l}{.929} & \multicolumn{1}{l|}{.970} & & \multicolumn{1}{|l}{.995} & \multicolumn{1}{l}{.998} & \multicolumn{1}{l}{ .998} \\ rand LF & .009 & .051 & .108 & & .519 & .835 & .914 & & .984 & .996 & .998 \\ rand ICS-1 & .009 & .051 & .120 & & .785 & .921 & .954 & & .990 & .998 & 1.00 \\ \hline sup $p_{n}$ & .020 & .047 & .093 & & .721 & .892 & .943 & & .985 & .998 & 1.00 \\ sup $p_{n}$ LF & .015 & .025 & .054 & & .451 & .772 & .883 & & .961 & .992 & 1.00 \\ sup $p_{n}$ ICS-1 & .014 & .026 & .056 & & .710 & .890 & .940 & & .986 & .998 & 1.00 \\ \hline PVOT & \multicolumn{1}{|l}{.050} & \multicolumn{1}{l}{.118} & \multicolumn{1}{l|}{.194} & & \multicolumn{1}{|l}{.981} & \multicolumn{1}{l}{.995} & \multicolumn{1}{l|}{1.00} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ PVOT LF & .012 & .053 & .109 & & .823 & .965 & .975 & & 1.00 & 1.00 & 1.00 \\ PVOT ICS-1 & .012 & .054 & .109 & & .958 & .987 & .993 & & 1.00 & 1.00 & 1.00 \\ \hline\hline & \multicolumn{11}{c}{Non-Identification: $\beta _{n}=\beta _{0}=0$} \\ \hline\hline sup $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.051} & \multicolumn{1}{l}{.151} & \multicolumn{1}{l|}{.196} & & \multicolumn{1}{|l}{.981} & \multicolumn{1}{l}{.998} & \multicolumn{1}{l|}{.998} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ aver $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.043} & \multicolumn{1}{l}{.136} & \multicolumn{1}{l|}{.189} & & \multicolumn{1}{|l}{.886} & \multicolumn{1}{l}{.968} & \multicolumn{1}{l|}{.984} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ \hline rand $\mathcal{T}_{n}$ & \multicolumn{1}{|l}{.047} & \multicolumn{1}{l}{.111} & \multicolumn{1}{l|}{.177} & & \multicolumn{1}{|l}{.826} & \multicolumn{1}{l}{.938} & \multicolumn{1}{l|}{.967} & & \multicolumn{1}{|l}{.997} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ rand LF & .006 & .058 & .110 & & .549 & .859 & .926 & & 1.00 & 1.00 & 1.00 \\ rand ICS-1 & .006 & .058 & .109 & & .827 & .940 & .973 & & 1.00 & 1.00 & 1.00 \\ \hline sup $p_{n}$ & .032 & .081 & .126 & & .718 & .904 & .934 & & .995 & .999 & .999 \\ sup $p_{n}$ LF & .013 & .051 & .085 & & .414 & .778 & .875 & & .965 & .999 & 1.00 \\ sup $p_{n}$ ICS-1 & .013 & .051 & .086 & & .704 & .903 & .934 & & .995 & .999 & 1.00 \\ \hline PVOT & \multicolumn{1}{|l}{.061} & \multicolumn{1}{l}{.148} & \multicolumn{1}{l|}{.208} & & \multicolumn{1}{|l}{.977} & \multicolumn{1}{l}{.993} & \multicolumn{1}{l|}{.996} & & \multicolumn{1}{|l}{1.00} & \multicolumn{1}{l}{1.00} & \multicolumn{1}{l}{ 1.00} \\ PVOT LF & .014 & .058 & .108 & & .853 & .970 & .989 & & 1.00 & 1.00 & 1.00 \\ PVOT ICS-1 & .013 & .057 & .107 & & .978 & .996 & .998 & & 1.00 & 1.00 & 1.00 \\ \hline\hline \end{tabular} } \end{center} \par {\small Numerical values are rejection frequency at the given level. LSTAR is Logistic STAR. Empirical power is not size-adjusted.
\textit{sup} $ \mathcal{T}_{n}$ and \textit{ave} $\mathcal{T}_{n}$ tests are based on a wild bootstrapped p-value. \textit{rand} $\mathcal{T}_{n}$: $\mathcal{T} _{n}(\lambda )$ with randomized $\lambda $\ on [1,5]. \textit{sup} ${p}_{n}$ is the supremum p-value test where p-values are computed from the chi-squared distribution. PVOT uses the chi-squared distribution. LF implies the least favorable p-value is used, and ICS-$1$ implies the type 1 identification category selection p-value is used with threshold $\kappa _{n} $ $=$ $\ln (\ln (n))$.} \end{table} \begin{table}[t] \caption{GARCH Effects Test Rejection Frequencies} \label{tbl:garch} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccccccc} \hline\hline & \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=100$} & \multicolumn{1}{|c}{ } & \multicolumn{3}{|c}{$n=250$} & \multicolumn{1}{|c}{} & \multicolumn{3}{|c}{$n=500$} \\ \hline Test & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$1\%$} & $5\%$ & $10\%$ \\ \hline\hline \multicolumn{13}{c}{No GARCH Effects (empirical size)$^{a}$} \\ \hline\hline \multicolumn{1}{l}{sup-$p_{n}$ $^{b}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.000$^{c}$} & .000 & .000 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.000} & .000 & .000 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.000} & .000 & .000 \\ \hline \multicolumn{1}{l}{sup-$\mathcal{T}_{n}$ $^{d}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.160} & .198 & .248 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.148} & .188 & .224 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.241} & .294 & .321 \\ \multicolumn{1}{l}{ave-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.004} & .032 & .052 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.005} & .031 & .059 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.008} & .053 & .107 \\ \multicolumn{1}{l}{rand-$\mathcal{T}_{n}$ $^{e}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.004} & .004 & .012 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.007} & .017 & .027 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.003} & .028 & .038 \\ \hline \multicolumn{1}{l}{PVOT$^{f}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{ .015} & .059 & .096 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.019} & .059 & .091 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.015} & .063 & .111 \\ \hline\hline & & & & & & & & & & & & \\ \hline\hline \multicolumn{13}{c}{GARCH Effects (empirical power)} \\ \hline\hline \multicolumn{1}{l}{sup-$p_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.006} & .014 & .017 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.000} & .010 & .017 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.003} & .011 & .015 \\ \hline \multicolumn{1}{l}{sup-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.848} & .934 & .934 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.976} & .979 & .988 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ \multicolumn{1}{l}{ave-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.733} & .891 & .904 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.974} & .978 & .986 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ \multicolumn{1}{l}{rand-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.446} & .555 & .633 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.756} & .818 & .846 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.873} & .923 & .935 \\ \hline 
\multicolumn{1}{l}{PVOT} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.788} & .914 & .914 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.975} & .988 & .988 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & 1.00 & 1.00 \\ \hline\hline & & & & & & & & & & & & \\ \hline\hline \multicolumn{13}{c}{GARCH Effects (size-adjusted power)} \\ \hline\hline \multicolumn{1}{l}{sup-$p_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.006} & .014 & .017 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.000} & .010 & .017 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.003} & .011 & .015 \\ \hline \multicolumn{1}{l}{sup-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.698} & .786 & .786 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.838} & .841 & .864 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.769} & .756 & .779 \\ \multicolumn{1}{l}{ave-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.739} & .909 & .952 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.979} & .997 & 1.00 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{1.00} & .997 & .993 \\ \multicolumn{1}{l}{rand-$\mathcal{T}_{n}$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.452} & .601 & .721 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.759} & .851 & .919 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.880} & .945 & .997 \\ \hline \multicolumn{1}{l}{PVOT} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.774} & .902 & .902 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.966} & .979 & .997 & \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{.995} & .987 & .989 \\ \hline\hline \end{tabular}} \end{center} \par {\small a. The GARCH volatility process is $\sigma _{t}^{2}$ $=$ $\omega _{0}+\delta _{0}y_{t-1}^{2}+\lambda _{0}\sigma _{t-1}^{2}$ with initial condition $\sigma _{t}^{2}$ $=$ $\omega _{0}/(1-\lambda _{0})$. The null hypothesis is no GARCH effects $\delta _{0}=0$, and under the alternative $\delta _{0}=.3$. In all cases the true $\lambda _{0}=.6.$ b. sup-$p_{n}$ is the $\sup_{\lambda \in \Lambda }p_{n}(\lambda )$ test. c. Rejection frequency at the given significance level. d. sup-$\mathcal{T}_{n}$ and ave-$\mathcal{T}_{n}$ tests are based on a wild bootstrapped p-value. e. rand-$\mathcal{T}_{n}$ is an asymptotic $\chi ^{2}$ test based on $\mathcal{T}_{n}(\lambda )$ with randomized $\lambda $\ on [.01,.99]. f. PVOT: \textit{p-value occupation time} test. } \end{table}
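{\small To fix ideas, the following is a minimal sketch of the PVOT decision rule in Python. It is our own illustration, not code from this paper: it assumes the pointwise statistic is compared with a $\chi ^{2}$ distribution whose degrees of freedom the user supplies, and that the test rejects at level $\alpha $ when the occupation time of $\{p_{n}(\lambda )<\alpha \}$ over the $\lambda $ grid exceeds $\alpha $.}

\begin{verbatim}
# Minimal PVOT sketch (illustrative assumptions; see note above).
import numpy as np
from scipy.stats import chi2

def pvot_test(stats, alpha=0.05, df=1):
    """stats: test statistics T_n(lam) over a grid of nuisance parameters."""
    pvals = chi2.sf(stats, df)     # pointwise chi-squared p-values
    pvot = np.mean(pvals < alpha)  # p-value occupation time on the grid
    return pvot, pvot > alpha      # reject H0 when PVOT exceeds alpha
\end{verbatim}

\thispagestyle{empty}

\end{document}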
\begin{document} \begin{frontmatter} \title{A Formalised Theorem in the Partition Calculus} \author{Lawrence C. Paulson FRS} \address{Computer Laboratory, University of Cambridge\\ 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK} \begin{abstract} A paper on ordinal partitions by Erd\H{o}s{} and Milner~\cite{erdos-theorem-partition} has been formalised using the proof assistant Isabelle/HOL, augmented with a library for Zermelo--Fraenkel set theory. The work is part of a project on formalising the partition calculus. The chosen material is particularly appropriate in view of the substantial corrections~\cite{erdos-theorem-partition-corr} later published by its authors, illustrating the potential value of formal verification. \end{abstract} \begin{keyword} ordinal partition relations\sep set theory\sep interactive theorem proving\sep Isabelle\sep proof assistants \MSC[2020] 03E02\sep 03E05\sep 03E10\sep 03B35\sep 68V15\sep 68V20\sep 68V35. \end{keyword} \end{frontmatter} \section{Introduction} Formal logic was developed to strengthen the foundations of mathematics. Whitehead and Russell's magnum opus~\cite{principia} may have been intended to show that all of mathematics could be formalised, but rather suggested the opposite: they managed to prove $1+1 = 2$ only on page 360. The highly formal mathematics of Bourbaki has been sharply criticised by A.~R.~D. Mathias, who (among other things) points out \cite{Mathias2002} that their definition of the number 1 expands to some $4.5\times 10^{12}$ symbols. While many mathematicians are indifferent to logic, Mathias' criticism is particularly trenchant in that he himself is a logician. For all that, researchers today are trying to formalise mathematics using formal deductive logic, with the help of software called \textit{interactive theorem provers} (or \textit{proof assistants}). The field of automated theorem proving was initiated by logicians and philosophers such as Martin Davis and Hilary Putnam (whose early work~\cite{davis-putnam} eventually led to today's powerful DPLL procedure for propositional logic) and Alan Robinson (the father of the resolution method for theorem proving in first-order logic~\cite{robinson65}). One might have expected the next step to be the automation of set theory, but instead the field took a sharp turn: away from full automation to interaction, with a focus on problems in computer science. A milestone was Michael J. C. Gordon's focus on hardware verification and his choice of higher-order logic~\cite{mgordon86}, a choice that logicians would not have made, as it had ``no coherent well-established theory''~\cite[p.\ts241]{van-benthem-higher} compared with first-order logic. Soon, researchers around the world were experimenting with Gordon's interactive theorem prover, HOL~\cite{mgordon-hol}. Other implementations of higher-order logic soon appeared, such as HOL Light \cite{harrison-hol-light} and Isabelle/HOL~\cite{isa-tutorial}. The scope of higher-order logic turned out to be much greater than hardware verification. Many researchers turned to the formalisation of well-known mathematical results~\cite{harrison-pnt} and even to the verification of a contested result: Hales' computer-assisted proof of the Kepler conjecture~\cite{hales-formal-Kepler}. A separate strand of research based upon constructive type theories also led to the formalisation of deep mathematical results, such as the odd order theorem~\cite{gonthier-oot}. 
The importance of such achievements has been recognised by the Isaac Newton Institute's programme entitled \textit{Computer-aided Mathematical Proof} (2017) and in the 2020 Mathematics Subject Classification \cite{dunne-mathematics}, which introduces class~68V (\textit{Computer science support for mathematical research and practice}) and in particular 68V20 (\textit{Formalization of mathematics}). Kunen's interest in the area of computational logic dates back to the 1980s. He published papers on the theory of logic programming \cite{kunen-negation,kunen-signed} and on resolution theorem proving~\cite{hart-single,kunen-semantics-answer}. He later became interested in the Boyer--Moore theorem prover, a distinctive semi-automatic system based on a quirky, quantifier free first-order logic; his aim seems to have been to examine the strength of that logic \cite{kunen-ramsey,kunen-nonconstructive}. He was also aware of my own work on formalising G\"odel's constructible universe using Isabelle/ZF \cite{paulson-consistency}. I'd like to think that he would take an interest in a formalisation of work by Erd\H{o}s{} and Milner~\cite{erdos-theorem-partition} on the partition calculus. \section{Ordinal partition relations}\label{sec:ordinal_partitions} Erd\H{o}s{} and Rado introduced the \textit{partition calculus} in 1952 to investigate a family of problems related to Ramsey's theorem~\cite{erdos-partition,Hajnal-Larson}. Let $i$, $j$, \ldots{} denote natural numbers while $\alpha$, $\beta$, \ldots{} denote set-theoretic ordinals. Let $[A]^n$ denote the set of $n$-element subsets of a given set~$A$. Write $\tp A$ for the order type of~$A$. Now we can define \textit{partition notation}: $\alpha\longrightarrow (\beta_0, \ldots,\beta_{k-1})^n$ means that for every partition of the set $[\alpha]^n$ into $k$ parts or ``colours'' $C_0$, \ldots, $C_{k-1}$, there exists $i<k$ and a subset $B\subseteq\alpha$ of order type $\beta_i$ such that $[B]^n\subseteq C_i$. Such a $B$ is said to be \textit{$i$-monochromatic}. The same notation can be used with order types replaced by cardinalities. Below we consider only the special case $\alpha\longrightarrow (\beta, \gamma)^2$ and omit the superscript. The negation of $\alpha\longrightarrow (\beta, \gamma)$ is written $\alpha\narrows (\beta, \gamma)$. In this notation, the infinite Ramsey theorem becomes $\omega\longrightarrow (\omega, \omega)$. A straightforward construction proves $\alpha\narrows (|\alpha|+1,\, \omega)$ for $\alpha>\omega$, while $\alpha\longrightarrow (\alpha,2)$ is trivial. If $\alpha$ is not a power of $\omega$ (or zero) then there exist ordinals $\beta$, $\gamma<\alpha$ such that $\alpha=\beta+\gamma$ \cite[p.\ts43]{kunen80}, from which it easily follows that $\alpha\narrows(\alpha,3)$. These and other facts raise the question~\cite[\S3.2]{erdos-unsolved}, for which $m$ and countable ordinals $\alpha$ do we have $\alpha\longrightarrow (\alpha,m)$? Kunen's interest in partition theorems is clear in a result he announced in 1971, which is equivalent to $\kappa \longrightarrow (\kappa, \alpha)$: \begin{quote} Let $\kappa$ be a real-valued measurable cardinal and $\mu$ a normal measure on~$\kappa$. Let $A\subseteq [\kappa]^2$. Then either (i) there is a subset, $X\subseteq\kappa$, such that $\mu(X)>0$ and $[X]^2\subseteq A$, or (ii) for all countable ordinals~$\alpha$, there is an $X\subseteq\kappa$ such that $X$ has order type~$\alpha$ and $[X]^2\cap A=\emptyset$. 
The proof uses a generalization of the zero-one law.~\cite{kunen-partition-theorem} \end{quote} \section{Introduction to Isabelle}\label{sec:isabelle} Isabelle is an interactive theorem prover based on a logical framework: a minimal formalism intended for representing formal proofs in a variety of logics~\cite{paulson-found}. Isabelle/ZF supports first-order logic and set theory, and has been used to formalise the constructible universe~\cite{paulson-consistency} and forcing~\cite{gunther-forcing}. But its most popular instance by far is Isabelle/HOL \cite{isa-tutorial}, supporting higher-order logic. All versions of Isabelle share a substantial code base, including a sophisticated interactive environment and tools for automatic simplification and logical reasoning. However, Isabelle/HOL extends all that with specialised, powerful automation for proving theorems and detecting counterexamples~\cite{paulson-from-lcf}. Isabelle's higher-order logic is an extension of Church's simple type theory~\cite{church40}. It assumes the axiom of choice. It has basic types such as \isa{nat} (the natural numbers) and \isa{bool} (the truth values, and hence the type of formulas). It has function types such as \isa{$\alpha$\isasymRightarrow$\beta$} and (postfix) type operators such as {$\alpha$\,set}, sets over type~$\alpha$. Thus, types can take other types as parameters, but they can't take other values as parameters: there are no ``dependent types''. My colleagues and I are pursuing the thesis that simple type theory is not merely sufficient to formalise mathematics~\cite{bordg-simple-tr} but superior to strong type theories that make automation difficult and introduce complications such as intensional equality. The set theoretic developments reported here were actually undertaken using Isabelle/HOL, augmented with the ZF axioms; they would have been harder in the more basic proof environment of Isabelle/ZF\@. The axiomatisation of ZF in HOL~\cite{ZFC_in_HOL-AFP} introduces a type \isa{V}, the type of all ZF sets. Type \isa{V set} is effectively the type of classes, and any \isa{small} class can be mapped to the corresponding element of~\isa{V}\@. Transfinite recursion is easily obtained from Isabelle/HOL's support of recursive function definitions, and the formal development of set theory includes ordinals, cardinals, order types of well-founded relations, Cantor normal form and other essential material, the proofs mostly taken from Kunen's well-known textbook~\cite{kunen80}. An order type is always an ordinal in this formalisation of ZF\@. In the general case, \isa{ordertype} applies to any set~\isa{A} and well-founded relation~\isa{r}, but here that relation is always set membership. Erd\H{o}s{} and Milner \cite{erdos-theorem-partition} actually considered order types of arbitrary orderings, but the special case of ordinals is sufficient for our application of formalising Larson~\cite{larson-short-proof}. \section{Outline of the proof} Erd\H{o}s{} and Milner \cite{erdos-theorem-partition} proved that if $\nu$ is a countable ordinal and $n<\omega$ then \begin{equation} \omega^{1+\nu n}\longrightarrow(2^n, \omega^{1+\nu}). \label{eqn:thm} \end{equation} They claim to have known the result since 1959, from Milner's PhD work. Remarkably, the published proof of the main lemma contained so many errors that their five page paper needed a full-page correction~\cite{erdos-theorem-partition-corr}, replacing the core of the original proof. 
These errors somehow escaped the notice of the authors, the PhD examiners and the original referee. That so many pairs of eyes could overlook so many errors is evidence of the need for more formal scrutiny of published mathematics. The proof is highly technical, and for the full details, readers should consult the Erd\H{o}s--Milner paper itself~\cite{erdos-theorem-partition} and crucially, the corrections~\cite{erdos-theorem-partition-corr}. Below we shall simply examine the milestones of the proof, with comments on the special difficulties occasioned by their formalisation. Erd\H{o}s{} and Milner rely on a more general theorem: that if $\alpha\longrightarrow(k, \gamma)$ and $k\ge2$, then \[ \alpha\beta\longrightarrow(2k,\,\gamma\vee \omega\beta). \] Already some complications are evident. In this statement of the theorem, the symbol~$\vee$ extends the partition notation to allow a choice between a 1-monochromatic set of order type $\gamma$ or one of type $\omega\beta$, and the Greek letters refer to order types in the general sense: where two orderings have the same order type when there exists an order-preserving bijection between them. It is not even clear how to formalise this general statement in ZFC, where an order type is a proper class. So the first step is to reformulate the theorem (including the technical condition that $\beta$ is a ``strong type'', and others) for ordinals: this special case suffices for the main result. Here is the statement above for the case when $\alpha$, $\beta$ and $\gamma$ range over ordinals, $\alpha$ is indecomposable and $\beta$ is countable. \begin{equation} \text{If } \alpha\longrightarrow(k, \gamma) \text{ and } k\ge2 \text{ then } \alpha\beta\longrightarrow(2k,\,\min(\gamma, \omega\beta)). \label{eqn:thm2} \end{equation} Because the ordinals are linearly ordered, the choice between finding a set of type~$\gamma$ or a set of type~$\omega\beta$ no longer requires a disjunction but just taking their minimum. This statement~(\ref{eqn:thm2}) suffices to prove the original claim, $\omega^{1+\nu n}\longrightarrow(2^n, \omega^{1+\nu})$. Erd\H{o}s{} and Milner provide a full inductive argument, reproduced below with trivial substitutions: \begin{quote} Suppose (\ref{eqn:thm}) holds for some integer $n\ge1$. Applying the above theorem with $k=2^n$, $\alpha = \omega^{1+\nu n}$, $\beta=\omega^\nu$, $\gamma=\omega^{1+\nu}$, we see that (\ref{eqn:thm}) also holds with $n$ replaced by $n+1$. Since (\ref{eqn:thm}) holds trivially for $n=0$, it follows that (\ref{eqn:thm}) holds for all $n<\omega$. \cite[p.\ts503]{erdos-theorem-partition} \end{quote} This proof was easy to formalise and is presented in full (Fig.\ts\ref{fig:main}). The assumption $n\ge1$ turns out to be unnecessary. Some notes on the syntax: to formalise $\omega^{1+\nu n}\longrightarrow(2^n, \omega^{1+\nu})^2$ requires the constant for the partition relation, \isa{partn\_lst\_VWF}, and an explicit conversion from natural numbers to the corresponding finite ordinals, \isa{ord\_of\_nat}. Key claims are labelled with \isakeyword{shows} and intermediate claims with \isakeyword{have}. Justifications begin with \isakeyword{using} followed by the names of prior results or with \isakeyword{by}, followed by a proof method such as \isa{auto}. \begin{isabelle} \ \ \ \ "partn\_lst\_VWF\ (\isasymomega \isasymup (1\ +\ \isasymnu *n))\ [ord\_of\_nat\ (2\isacharcircum n),\ \isasymomega \isasymup (1+\isasymnu )]\ 2" \end{isabelle} In the sequel, we shall only be concerned with proving the theorem~(\ref{eqn:thm2}). 
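For the record, here is the ordinal arithmetic behind the quoted substitutions (a routine check which we spell out here; it is not in the paper):
\[
\alpha\beta = \omega^{1+\nu n}\cdot\omega^{\nu} = \omega^{(1+\nu n)+\nu} = \omega^{1+\nu(n+1)},
\qquad
\omega\beta = \omega\cdot\omega^{\nu} = \omega^{1+\nu} = \gamma,
\]
so that $2k=2^{n+1}$ and $\min(\gamma,\,\omega\beta)=\omega^{1+\nu}$, and the conclusion of~(\ref{eqn:thm2}) is exactly~(\ref{eqn:thm}) with $n$ replaced by $n+1$.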
\begin{figure*} \caption{Isabelle/HOL formal proof of the main inductive argument} \label{fig:main} \end{figure*} \subsection{Preliminaries} First, some notation and conventions. The paper~\cite[p.\ts503]{erdos-theorem-partition} refers to fixed sets $S$ of type $\alpha\beta$ and $B$ of type~$\beta$. But since ordinals are sets, working with ordinals rather than order types allows us to use $\alpha\beta$ and~$\beta$ as the required sets. $A<B$ means if $x\in A$ and $y\in B$ then $x<y$. The theorem (\ref{eqn:thm2}) is trivial unless $\alpha>1$ and $\beta\not=0$, which we assume below. By convention, $A$, $A'$, $A_1$, etc.\ denote subsets of $\alpha\beta$ having type $\alpha$. \subsection{Every ordinal is strong} \label{sec:strong} The property that $\beta$ is a strong type does not have to be assumed because every ordinal is strong, meaning if $D \subseteq \beta$ then there are sets $D_1$, \ldots, $D_n \subseteq D$ such that \begin{itemize} \item $\tp D_i$ is indecomposable for $i = 1$, \ldots,~$n$ \item if $M \subseteq D$ and $\tp (M \cap D_i) \ge \tp D_i$ for $i = 1$, \ldots,~$n$, then $\tp M = \tp D$. \end{itemize} In Isabelle/HOL, the theorem statement looks like this, where \isa{L} is a list of sets and \isa{List.set\ L} stands for the set $\{D_1,\ldots,D_n\}$: \begin{isabelle} \isacommand{proposition}\ strong\_ordertype\_eq:\isanewline \ \ \isakeyword{assumes}\ "D\ \isasymsubseteq \ elts\ \isasymbeta "\ \isakeyword{and}\ "Ord\ \isasymbeta "\isanewline \ \ \isakeyword{obtains}\ L\ \isakeyword{where}\ "\isasymUnion (List.set\ L)\ =\ D"\isanewline \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "\isasymAnd X.\ X\ \isasymin \ List.set\ L\ \isasymLongrightarrow \ indecomposable\ (tp\ X)"\isanewline \ \ \ \ \isakeyword{and}\ "\isasymAnd M.\ \isasymlbrakk M\ \isasymsubseteq \ D;\ \isasymAnd X.\ X\ \isasymin \ List.set\ L\ \isasymLongrightarrow \ tp\ (M\ \isasyminter \ X)\ \isasymge \ tp\ X\isasymrbrakk\isanewline \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \isasymLongrightarrow \ tp\ M\ =\ tp\ D" \end{isabelle} The proof involves writing $\tp D$ in Cantor normal form: \[ \tp D = \omega^{\beta_1}\cdot l_1 + \cdots + \omega^{\beta_n}\cdot l_n \] where $\tp D\ge \beta_1>\cdots>\beta_n$ and $1 \le l_i < \omega$ for $i = 1$, \ldots,~$n$. Through the bijection between $\tp D$ and $D$, this divides $D$ into the desired $D_1$, \ldots, $D_n$. The proof is straightforward in principle, but somehow the formalisation is 200 lines long. The paper mentions Cantor normal form \cite[p.\ts502]{erdos-theorem-partition} but gives no other hints. \subsection{A remark about indecomposable ordinals} \label{sec:remark} The proof relies on the following observation~\cite{erdos-theorem-partition-corr}: if $x\in A$ and $A_1\subseteq A$, then there is $A_2\subseteq A_1$ such that $\{x\}<A_2$. Recalling that $\tp A=\alpha$, consider the bijection~$\phi$ between $A$ and~$\alpha$. Then $\phi(x)<\alpha$ and we can define $A_x=\{y\in A:\phi(y)\le \phi(x)\}$ and $A_2 = A_1 \setminus A_x$. Then $\{x\}<A_2$ by construction and $\tp A_2=\alpha$ follows because $\alpha$ is indecomposable. The formalisation is a fairly routine 60 lines, not difficult but tiresome for a straightforward claim stated without proof. 
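For a concrete instance of the observation (our own toy case, with $\alpha=\omega$): take $A=\omega$, let $A_1$ be the set of even numbers and $x=10$. Then $\phi$ is the identity, $A_x=\{0,1,\ldots,10\}$, and $A_2$ consists of the even numbers greater than $10$; this set again has order type $\omega$ and satisfies $\{x\}<A_2$.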
Here is the formal version of the theorem statement: \begin{isabelle} \isacommand{proposition}\ indecomposable\_imp\_Ex\_less\_sets:\isanewline \ \ \isakeyword{assumes}\ "indecomposable\ \isasymalpha "\ \isakeyword{and}\ "\isasymalpha \ >\ 1"\ \isanewline \ \ \ \ \isakeyword{and}\ "tp\ A\ =\ \isasymalpha "\ "small\ A"\ "A\ \isasymsubseteq \ ON"\isanewline \ \ \ \ \isakeyword{and}\ "x\ \isasymin \ A"\ \isakeyword{and}\ "tp\ A1\ =\ \isasymalpha "\ "A1\ \isasymsubseteq \ A"\isanewline \ \ \isakeyword{obtains}\ A2\ \isakeyword{where}\ "tp\ A2\ =\ \isasymalpha "\ "A2\ \isasymsubseteq \ A1"\ "\{x\}\ \isasymlless \ A2" \end{isabelle} Note that the keyword \isakeyword{obtains} is a way of expressing an existential conclusion, and that the implicit order types of~$A$, $A_1$, $A_2$ need to be written out. \section{Proving the theorem} Recall that our task is to prove that if $\alpha\longrightarrow(k, \gamma)$ and $k\ge2$ then \[ \alpha\beta\longrightarrow(2k,\,\min(\gamma, \omega\beta)) \] for ordinals $\alpha$, $\beta$, $\gamma$ where $\alpha$ is indecomposable and $\beta$ is countable. Here is the formal version of the statement above, where \isa{\isasymbeta \ \isasymin \ elts\ \isasymomega 1} means $\beta<\omega_1$. \begin{isabelle} \isacommand{theorem}\ Erdos\_Milner\_aux:\isanewline \ \ \isakeyword{assumes}\ "partn\_lst\_VWF\ \isasymalpha \ [k,\ \isasymgamma ]\ 2"\isanewline \ \ \ \ \isakeyword{and}\ "indecomposable\ \isasymalpha "\ \isakeyword{and}\ "k\ >\ 1"\ "Ord\ \isasymgamma "\ \isakeyword{and}\ \isasymbeta :\ "\isasymbeta \ \isasymin \ elts\ \isasymomega 1"\isanewline \ \ \isakeyword{shows}\ "partn\_lst\_VWF\ (\isasymalpha *\isasymbeta )\ [ord\_of\_nat\ (2*k),\ min\ \isasymgamma \ (\isasymomega *\isasymbeta )]\ 2" \end{isabelle} The proof considers the set $[\alpha\beta]^2$ partitioned into sets $K_0$ and $K_1$ by a colouring function $f:[\alpha\beta]^2\to \{0,1\}$. Then either (again paraphrasing the authors~\cite[p.\ts503]{erdos-theorem-partition}) \begin{enumerate}[(i)] \item there is $X\in [\alpha\beta]^{2k}$ such that $[X]^2 \subseteq K_0$, or \item there is $C\subseteq \alpha\beta$ such that $\tp C = \gamma$ and $[C]^2 \subseteq K_1$, or \item there is $Z\subseteq \alpha\beta$ such that $\tp Z = \omega\beta$ and $[Z]^2 \subseteq K_1$. \end{enumerate} The proof assumes that (i) and (ii) above are both false and deduces~(iii). Let $\{\gamma_m:m<\omega\}$ be an enumeration of~$\beta$ that repeats every element infinitely often. The 1-monochromatic set $Z$ is constructed by an elaborate enumeration along with increasing families of sets~$\{A^{(n)}_\nu\}_{\nu<\beta}$ satisfying $\tp(Z\cap A^{(m)}_{\gamma_m})=\omega$ for $m<\omega$, from which it can be shown that $\tp Z=\omega\beta$. The formalisation of the full argument takes nearly a thousand lines, and here we look at some milestones. Near the start of the proof, we find the claim \begin{quote} (8)\quad If $A\subseteq \alpha\beta$, then there is $X\in[A]^k$ such that $[X]^2\subseteq K_0$. This follows from the hypothesis $\alpha\longrightarrow(k, \gamma)$ and the assumed falsity of~(ii).~\cite[p.\ts503]{erdos-theorem-partition} \end{quote} The claim seems obvious enough and no further details are given; its formal proof of about 50 lines could possibly be streamlined with the help of higher-level lemmas about partition relations. 
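To unpack the one-line justification (our gloss of the authors' hint, recalling the convention that $\tp A=\alpha$): apply $\alpha\longrightarrow(k,\gamma)$ to the partition of $[A]^2$ induced by~$f$. This yields either some $X\in[A]^k$ with $[X]^2\subseteq K_0$, as required, or a set $C\subseteq A$ of type $\gamma$ with $[C]^2\subseteq K_1$, contradicting the assumed falsity of~(ii).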
The claim is embedded in the main proof: \begin{isabelle} \ \ \ \ \isacommand{have}\ Ak0:\ "\isasymexists X\ \isasymin \ [A]\isactrlbsup k\isactrlesup .\ f\ `\ [X]\isactrlbsup 2\isactrlesup \ \isasymsubseteq \ \{0\}"\isanewline \ \ \ \ \ \ \isakeyword{if}\ "A\ \isasymsubseteq \ elts\ (\isasymalpha *\isasymbeta )"\ \isakeyword{and}\ "tp\ A\ \isasymge \ \isasymalpha "\ \isakeyword{for}\ A \end{isabelle} Here, \isa{f\ `\ [X]\isactrlbsup 2\isactrlesup \ \isasymsubseteq \ \{0\}} expresses $[X]^2\subseteq K_0$ in terms of an image involving the colouring function, $f$. The next stage of the proof requires a new definition: \[ K_i(x)=\{y\in\alpha\beta:\{x,y\}\in K_i\} \] for $x\in\alpha\beta$ and $i<2$, and the claim is~\cite[p.\ts503]{erdos-theorem-partition} \begin{quote} (9)\quad Suppose $D\subseteq\beta$, $A_\nu\subseteq\alpha\beta$ (for $\nu\in D$), $A\subseteq\alpha\beta$. For $x\in A$ let \[ M(x)=\{\nu\in D:\tp(K_1(x)\cap A_\nu)\ge\alpha\}. \] Then $\tp\{x\in A:\tp M(x) \ge \tp D\} \ge \alpha$. \end{quote} This claim is first proved in a special (weaker) form, assuming that $\tp D$ is indecomposable, and then in a general form, dropping that assumption. The authors prove the specialised version in half a dozen lines using their claim~(8) and take a further five lines, using the property that $\beta$ is strong, to achieve the general version. The formal proof of the special version is 70 lines long, including a lengthy calculation, while that for the general form is 40 lines, with an induction on the decomposition of~$\tp D$. Here is the statement of the special version: \begin{isabelle} \ \ \ \ \isacommand{have}\ 9:\ "tp\ \{x\ \isasymin \ A.\ tp\ (M\ D\ \isasymAA \ x)\ \isasymge \ tp\ D\}\ \isasymge \ \isasymalpha "\isanewline \ \ \ \ \ \ \isakeyword{if}\ "indecomposable\ (tp\ D)"\ \isakeyword{and}\ "D\ \isasymsubseteq \ elts\ \isasymbeta "\isanewline \ \ \ \ \ \ \ \ \isakeyword{and}\ "A\ \isasymsubseteq \ elts\ (\isasymalpha *\isasymbeta )"\ \isakeyword{and}\ "tp\ A\ =\ \isasymalpha "\isanewline \ \ \ \ \ \ \ \ \isakeyword{and}\ "\isasymAA \ \isasymin \ D\ \isasymrightarrow \ \{X.\ X\ \isasymsubseteq \ elts\ (\isasymalpha *\isasymbeta )\ \isasymand \ tp\ X\ =\ \isasymalpha \}"\ \ \isakeyword{for}\ D\ A\ \isasymAA \end{isabelle} They continue~\cite[p.\ts504]{erdos-theorem-partition} with a not-quite-trivial instance: \begin{quote} As a special case of (9) (with $\tp D=1$) we have:\\ (9')\quad If $A$, $A'\subseteq\alpha\beta$, then $\tp\{x\in A': \tp(K_1(x)\cap A)\ge\alpha\} \ge \alpha$. \end{quote} The preliminaries of the proof conclude with a claim that follows with the help of (9) and (9') in seven lines of text. The formalisation was tough, 300 lines, but identified some small errors. \begin{quote} (10)\quad Let $F$ be a finite subset of~$\beta$ and let $\{A_\nu\}_{\nu<\beta}$, $A$ be such that $\bigcup_{\nu<\beta} A_\nu\subseteq \alpha\beta$ and $A\subseteq \alpha\beta$. Then there are $x_0\in A$ and a strictly increasing map $g:\beta\to\beta$ such that $g(\nu)=\nu$ ($\nu\in F$) and $\tp (K_1(x_0)\cap A_{g(\nu)}) \ge \alpha$ ($\nu\in\beta$).
\end{quote} Here is the corresponding formal statement, where \isa{\isasymAA} is the family~$\{A_\nu\}_{\nu<\beta}$: \begin{isabelle} \ \ \ \ \isacommand{have}\ 10:\ "\isasymexists x0\isasymin A.\ \isasymexists g\ \isasymin \ elts\ \isasymbeta \ \isasymrightarrow \ elts\ \isasymbeta .\ strict\_mono\_on\ g\ (elts\ \isasymbeta )\isanewline \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \isasymand\ (\isasymforall \isasymnu \ \isasymin \ F.\ g\ \isasymnu \ =\ \isasymnu )\isanewline \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \isasymand \ (\isasymforall \isasymnu \ \isasymin \ elts\ \isasymbeta .\ tp\ (K\ 1\ x0\ \isasyminter \ \isasymAA \ (g\ \isasymnu ))\ \isasymge \ \isasymalpha )"\isanewline \ \ \ \ \ \ \isakeyword{if}\ F:\ "finite\ F"\ "F\ \isasymsubseteq \ elts\ \isasymbeta "\isanewline \ \ \ \ \ \ \ \ \isakeyword{and}\ A:\ "A\ \isasymsubseteq \ elts\ (\isasymalpha *\isasymbeta )"\ "tp\ A\ =\ \isasymalpha "\isanewline \ \ \ \ \ \ \ \ \isakeyword{and}\ \isasymAA :\ "\isasymAA \ \isasymin \ elts\ \isasymbeta \ \isasymrightarrow \ \{X.\ X\ \isasymsubseteq \ elts\ (\isasymalpha *\isasymbeta)\ \isasymand \ tp\ X\ =\ \isasymalpha \}"\isanewline \ \ \ \ \ \ \isakeyword{for}\ F\ A\ \isasymAA \end{isabelle} One source of difficulty is that the proof expresses $\beta$ as a union of increasing sets $D_0<\{\nu_1\}<D_1<\cdots<\{\nu_p\}<D_p$, where $F=\{\nu_1,\ldots,\nu_p\}$: \begin{equation} \beta = D_0\cup\{\nu_1\}\cup D_1\cup\cdots\cup \{\nu_p\}\cup D_p. \label{eqn:beta} \end{equation} Our intuition that $F$ ``obviously'' cuts $\beta$ into segments does not help; every detail of the relationship between $\beta$ and the $D_i$ must be formalised. Now the stage is set for the construction of the required 1-monochromatic set $Z=\{x_0,x_1,\ldots,x_n,\ldots\}$ and the increasing family $A^{(n)}_\nu$, by complete induction on~$n$. Here we must switch to the corrigendum~\cite{erdos-theorem-partition-corr}, which replaces claims (12)--(17) of the original proof. The ``remark'' noted above (\S\ref{sec:remark}) and claim~(10) are used here to yield $x_n\in A$, a strictly increasing map $g_n:\beta\to\beta$ and sets $\{A^{(n+1)}_\nu\}_{\nu<\beta}$ satisfying certain properties. The formalisation of this section is about 360 lines. These inductive constructions are typical of Ramsey theory, and better ways of formalising them are needed. However, many of the technical complications are inherent in the mathematics itself. \section{Observations and Conclusions} The task of formalising Erd\H{o}s{} and Milner's paper arose in the context of a larger project, with D\v zamonja and Koutsoukou-Argyraki~\cite{dzamonja-formalising-arxiv}, to formalise Larson's proof~\cite{larson-short-proof} that $\omega^\omega\longrightarrow(\omega^\omega, m)$ for all $m<\omega$. Her proof relies on \begin{equation} \omega^{n \cdot k} \longrightarrow (\omega^n, k), \label{eqn:erdos} \end{equation} which she credits to Erd\H{o}s{}, remarking that the Erd\H{o}s--Milner paper ``has a sharper result''~\cite[p.\ts134]{larson-short-proof}. The claim~(\ref{eqn:erdos}) is trivial if $k=0$ or $n=0$; otherwise, put $\nu=n-1$ and replace $n$ by $k-1$ in $\omega^{1+\nu n}\longrightarrow(2^n, \omega^{1+\nu})$. This is a routine calculation and the formalisation is just 30 lines long. The revision control logs hold the detailed history of the formal development of Erd\H{o}s{}--Milner~\cite{erdos-theorem-partition,erdos-theorem-partition-corr}. I started scrutinising the paper on 3 February 2020.
By 7 February, I had proved that the Theorem implied the paper's headline result. By 12 February, I had proved (8) and had started~(9).\footnote{The equation numbers here refer to Erd\H{o}s{} and Milner~\cite{erdos-theorem-partition}.} The general case of the latter required first proving the lemma about strong types (\S\ref{sec:strong}), and it was not until 24 February that I managed to prove~(9) and the corollary (9'). By 1 March, I had proved (10). Next on the agenda was the inductive construction of $Z$ and the $A^{(n)}_\nu$ families, which was a struggle. The logs refer to a number of unsuccessful attempts to do a ``big induction''. The entry for 12 March says ``replaced the big induction by a primitive recursion setup'' and that (12)--(16) had been proved. That would have used the remark discussed in~\S\ref{sec:remark} above and included the construction of the 1-monochromatic set~$Z$\@. The last step was to establish $Z$'s order type; (17), (18) and the necessary half\footnote{That is, $\tp(Z\cap A^{(m)}_{\gamma_m})\ge\omega$; the other direction wasn't clear to me.} of (19) were done on successive days. The formalisation was complete by 17 March. It can be found online~\cite{Ordinal_Partitions-AFP}, within the formalisation of Larson. Was it worth it? The process took 44 days, during the middle of a normal University term with the usual schedule of teaching and administrative duties. Recalling that the paper was only five pages long, this equates to nine days per page to understand and formalise the material. And while the headline result was obtained in full generality, it was proved from a theorem about order types which here was formalised only for the special case of ordinals. The effort required to formalise mathematics is surely still prohibitive. Nevertheless, the numerous errors in the original paper are a reminder of how easy it is to make mistakes, especially perhaps in combinatorial proofs. Every formalisation of a technically difficult piece of mathematics provides strong assurance of its correctness. In the case of Isabelle/HOL, it also often yields a document that is readable enough to be examined by anybody who still has doubts. Every time we formalise something in a new area of mathematics, we discover certain things that are particularly hard to formalise. Formalisation will never catch up with mathematical intuition. Many obvious deductions---such as the partition of~$\beta$, our~(\ref{eqn:beta}) above---seem to take a disproportionate amount of work, and our only consolation is that some obvious deductions are false. The inductive constructions typical of Ramsey and partition theory are an area where we need further work to find more compact, natural and readable formal proofs. \paragraph*{Acknowledgements} Thanks to Mirna D{\v z}amonja (who proposed the project in the first place) and to Angeliki Koutsoukou-Argyraki for discussions and help. The ERC supported this research through the Advanced Grant ALEXANDRIA (Project GA 742178). \section*{References} \end{document}
# Ordinary Differential Equations

Alexander Grigorian<br>University of Bielefeld<br>Lecture Notes, April - July 2008

## Introduction: the notion of ODEs and examples

A differential equation (Differentialgleichung) is an equation for an unknown function that contains not only the function but also its derivatives (Ableitung). In general, the unknown function may depend on several variables and the equation may include various partial derivatives. However, in this course we consider only differential equations for a function of a single real variable. Such equations are called ordinary differential equations${ }^{1}$, shortly ODEs (die gewöhnlichen Differentialgleichungen). The most general ODE has the form

$$ F\left(x, y, y^{\prime}, \ldots, y^{(n)}\right)=0 $$

where $F$ is a given function of $n+2$ variables and $y=y(x)$ is an unknown function of a real variable $x$. The maximal order $n$ of the derivative $y^{(n)}$ in (1.1) is called the order of the ODE.

ODEs arise in many areas of Mathematics, as well as in Sciences and Engineering. In most applications, one needs to find explicitly or numerically a solution $y(x)$ of (1.1) satisfying some additional conditions. There are only a few types of ODE for which one can find all the solutions. In this Introduction we will be concerned with various examples and specific classes of ODEs of the first and second order, postponing the general theory to the next Chapters.

Consider the differential equation of the first order

$$ y^{\prime}=f(x, y) $$

where $y=y(x)$ is the unknown real-valued function of a real argument $x$, and $f(x, y)$ is a given function of two real variables. Consider the pair $(x, y)$ as a point in $\mathbb{R}^{2}$ and assume that the function $f$ is defined on a set $D \subset \mathbb{R}^{2}$, which is called the domain (Definitionsbereich) of the function $f$ and of the equation (1.2). Then the expression $f(x, y)$ makes sense whenever $(x, y) \in D$.

Definition. A real-valued function $y(x)$, defined on an interval${ }^{2}$ $I \subset \mathbb{R}$, is called a (particular) solution of (1.2) if $y(x)$ is differentiable at any $x \in I$, the point $(x, y(x))$ belongs to $D$ for any $x \in I$ and the identity $y^{\prime}(x)=f(x, y(x))$ holds for all $x \in I$.

The family of all particular solutions of (1.2) is called the general solution. The graph of a particular solution is called an integral curve of the equation. Obviously, any integral curve is contained in the domain $D$.

Usually a given ODE cannot be solved explicitly. We will consider some classes of $f(x, y)$ for which one can find the general solution to (1.2) in terms of indefinite integration.

${ }^{1}$ The theory of partial differential equations, that is, the equations containing partial derivatives, is a topic of another lecture course.

${ }^{2}$ Here and below by an interval we mean any set of the form

$$ \begin{aligned} & (a, b)=\{x \in \mathbb{R}: a<x<b\} \\ & {[a, b]=\{x \in \mathbb{R}: a \leq x \leq b\}} \\ & {[a, b)=\{x \in \mathbb{R}: a \leq x<b\}} \\ & (a, b]=\{x \in \mathbb{R}: a<x \leq b\}, \end{aligned} $$

where $a, b$ are real or $\pm \infty$ and $a<b$.

Example. Assume that the function $f$ does not depend on $y$ so that (1.2) becomes $y^{\prime}=f(x)$. Hence, $y$ must be a primitive function${ }^{3}$ of $f$. Assuming that $f$ is a continuous (stetig) function on an interval $I$, we obtain the general solution on $I$ by means of the indefinite integration:

$$ y=\int f(x) d x=F(x)+C, $$

where $F(x)$ is a primitive of $f(x)$ on $I$ and $C$ is an arbitrary constant.
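As a quick computer-algebra illustration of this case (a sketch using SymPy's dsolve, with the particular choice $f(x)=\cos x$ made by us; it is not part of the original notes):

```python
# Direct integration: the general solution of y' = cos(x) is sin(x) + C.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

print(sp.dsolve(sp.Eq(y(x).diff(x), sp.cos(x))))  # Eq(y(x), C1 + sin(x))
```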
Example. Consider the ODE

$$ y^{\prime}=y $$

Let us first find all positive solutions, that is, assume that $y(x)>0$. Dividing the ODE by $y$ and noticing that

$$ \frac{y^{\prime}}{y}=(\ln y)^{\prime} $$

we obtain the equivalent equation

$$ (\ln y)^{\prime}=1 $$

Solving this as in the previous example, we obtain

$$ \ln y=\int d x=x+C $$

whence

$$ y=e^{C} e^{x}=C_{1} e^{x}, $$

where $C_{1}=e^{C}$. Since $C \in \mathbb{R}$ is arbitrary, $C_{1}=e^{C}$ is any positive number. Hence, any positive solution $y$ has the form

$$ y=C_{1} e^{x}, \quad C_{1}>0 . $$

If $y(x)<0$ for all $x$, then use

$$ \frac{y^{\prime}}{y}=(\ln (-y))^{\prime} $$

and obtain in the same way

$$ y=-C_{1} e^{x}, $$

where $C_{1}>0$. Combining these two cases, we obtain that any solution $y(x)$ that remains positive or negative has the form

$$ y(x)=C e^{x}, $$

where $C>0$ or $C<0$. Clearly, $C=0$ suits as well since $y=0$ is a solution. The next plot contains the integral curves of such solutions:

${ }^{3}$ By definition, a primitive function of $f$ is any function whose derivative is equal to $f$.

Let us show that the family of solutions $y=C e^{x}, C \in \mathbb{R}$, is the general solution. Indeed, if $y(x)$ is a solution that takes a positive value somewhere, then it is positive in some open interval, say $I$. By the above argument, $y(x)=C e^{x}$ in $I$, where $C>0$. Since $e^{x} \neq 0$, this solution does not vanish at the endpoints of $I$ either. This implies that the solution must be positive on the whole interval where it is defined. It follows that $y(x)=C e^{x}$ in the domain of $y(x)$. The same applies if $y(x)<0$ for some $x$. Hence, the general solution of the ODE $y^{\prime}=y$ is $y(x)=C e^{x}$ where $C \in \mathbb{R}$. The constant $C$ is referred to as a parameter. It is clear that the particular solutions are distinguished by the values of the parameter.

### Separable ODE

Consider a separable ODE, that is, an ODE of the form

$$ y^{\prime}=f(x) g(y) . $$

Any separable equation can be solved by means of the following theorem.

Theorem 1.1 (The method of separation of variables) Let $f(x)$ and $g(y)$ be continuous functions on open intervals $I$ and $J$, respectively, and assume that $g(y) \neq 0$ on $J$. Let $F(x)$ be a primitive function of $f(x)$ on $I$ and $G(y)$ be a primitive function of $\frac{1}{g(y)}$ on $J$. Then a function $y$ defined on some subinterval of $I$ solves the differential equation (1.3) if and only if it satisfies the identity

$$ G(y(x))=F(x)+C, $$

for all $x$ in the domain of $y$, where $C$ is a real constant.

For example, consider again the ODE $y^{\prime}=y$ in the domain $x \in \mathbb{R}, y>0$. Then $f(x)=1$ and $g(y)=y \neq 0$ so that Theorem 1.1 applies. We have

$$ F(x)=\int f(x) d x=\int d x=x $$

and

$$ G(y)=\int \frac{d y}{g(y)}=\int \frac{d y}{y}=\ln y $$

where we do not write the constant of integration because we need only one primitive function. The equation (1.4) becomes

$$ \ln y=x+C, $$

whence we obtain $y=C_{1} e^{x}$ as in the previous example. Note that Theorem 1.1 does not cover the case when $g(y)$ may vanish, which must be analyzed separately when needed.

Proof. Let $y(x)$ solve (1.3).
Since $g(y) \neq 0$, we can divide (1.3) by $g(y)$, which yields

$$ \frac{y^{\prime}}{g(y)}=f(x) $$

Observe that by the hypothesis $f(x)=F^{\prime}(x)$ and $\frac{1}{g(y)}=G^{\prime}(y)$, which implies by the chain rule

$$ \frac{y^{\prime}}{g(y)}=G^{\prime}(y) y^{\prime}=(G(y(x)))^{\prime} $$

Hence, the equation (1.3) is equivalent to

$$ (G(y(x)))^{\prime}=F^{\prime}(x), $$

which implies (1.4). Conversely, if a function $y$ satisfies (1.4) and is known to be differentiable in its domain, then differentiating (1.4) in $x$, we obtain (1.6); arguing backwards, we arrive at (1.3). The only question that remains to be answered is why $y(x)$ is differentiable. Since the function $g(y)$ does not vanish, it is either positive or negative in the whole domain. Then the function $G(y)$, whose derivative is $\frac{1}{g(y)}$, is either strictly increasing or strictly decreasing in the whole domain. In both cases, the inverse function $G^{-1}$ is defined and is differentiable. It follows from (1.4) that

$$ y(x)=G^{-1}(F(x)+C) . $$

Since both $F$ and $G^{-1}$ are differentiable, we conclude by the chain rule that $y$ is also differentiable, which finishes the proof.

Corollary. Under the conditions of Theorem 1.1, for all $x_{0} \in I$ and $y_{0} \in J$ there exists a unique value of the constant $C$ such that the solution $y(x)$ defined by (1.7) satisfies the condition $y\left(x_{0}\right)=y_{0}$.

The condition $y\left(x_{0}\right)=y_{0}$ is called the initial condition (Anfangsbedingung).

Proof. Setting in (1.4) $x=x_{0}$ and $y=y_{0}$, we obtain $G\left(y_{0}\right)=F\left(x_{0}\right)+C$, which allows us to determine the value of $C$ uniquely, namely $C=G\left(y_{0}\right)-F\left(x_{0}\right)$. Conversely, assume that $C$ is given by this formula, and let us prove that it determines a solution $y(x)$ by (1.7). If the right hand side of (1.7) is defined on an interval containing $x_{0}$, then by Theorem 1.1 it defines a solution $y(x)$, and this solution satisfies $y\left(x_{0}\right)=y_{0}$ by the choice of $C$. We only have to make sure that the domain of the right hand side of (1.7) contains an interval around $x_{0}$ (a priori it may happen that the composite function $G^{-1}(F(x)+C)$ has empty domain). For $x=x_{0}$ the right hand side of (1.7) is

$$ G^{-1}\left(F\left(x_{0}\right)+C\right)=G^{-1}\left(G\left(y_{0}\right)\right)=y_{0} $$

so that the function $y(x)$ is defined at $x=x_{0}$. Since both functions $G^{-1}$ and $F+C$ are continuous and defined on open intervals, their composition is defined on an open set. Since this set contains $x_{0}$, it contains also an interval around $x_{0}$. Hence, the function $y$ is defined on an interval around $x_{0}$, which finishes the proof.

One can rephrase the statement of the Corollary as follows: for all $x_{0} \in I$ and $y_{0} \in J$ there exists a unique solution $y(x)$ of (1.3) that satisfies in addition the initial condition $y\left(x_{0}\right)=y_{0}$; that is, for every point $\left(x_{0}, y_{0}\right) \in I \times J$ there is exactly one integral curve of the ODE that goes through this point. However, the meaning of the uniqueness claim in this form is a bit ambiguous because out of any solution $y(x)$, one can make another solution just by slightly reducing the domain, and if the reduced domain still contains $x_{0}$ then the initial condition will also be satisfied by the new solution.
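Before making the uniqueness claim precise, here is a small computational illustration of the Corollary (a sketch using SymPy; the equation $y^{\prime}=y$ and the initial condition $y(0)=2$ are our own choices, not from the notes):

```python
# The unique solution of y' = y through the point (x0, y0) = (0, 2).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x)), ics={y(0): 2})
print(sol)  # Eq(y(x), 2*exp(x))
```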
The precise uniqueness claim means that any two solutions satisfying the same initial condition coincide on the intersection of their domains; also, such solutions correspond to the same value of the parameter $C$.

In applications of Theorem 1.1, it is necessary to find the functions $F$ and $G$. Technically it is convenient to combine the evaluation of $F$ and $G$ with other computations as follows. The first step is always dividing (1.3) by $g$ to obtain (1.5). Then integrate both sides in $x$ to obtain $$ \int \frac{y^{\prime} d x}{g(y)}=\int f(x) d x $$ Then we need to evaluate the integral on the right hand side. If $F(x)$ is a primitive of $f$ then we write $$ \int f(x) d x=F(x)+C . $$ In the left hand side of (1.8), we have $y^{\prime} d x=d y$. Hence, we can change variables in the integral, replacing the function $y(x)$ by an independent variable $y$. We obtain $$ \int \frac{y^{\prime} d x}{g(y)}=\int \frac{d y}{g(y)}=G(y)+C . $$ Combining the above lines, we obtain the identity (1.4).

If in the equation $y^{\prime}=f(x) g(y)$ the function $g(y)$ vanishes at a sequence of points, say $y_{1}, y_{2}, \ldots$, enumerated in increasing order, then we have a family of constant solutions $y(x)=y_{k}$. The method of separation of variables provides solutions in any domain $y_{k}<y<y_{k+1}$. The integral curves in the domains $y_{k}<y<y_{k+1}$ can in general touch the constant solutions, as will be shown in the next example.

Example. Consider the equation $$ y^{\prime}=\sqrt{|y|} $$ which is defined for all $y \in \mathbb{R}$. Since the right hand side vanishes at $y=0$, the constant function $y \equiv 0$ is a solution. In the domains $y>0$ and $y<0$, the equation can be solved using separation of variables. For example, in the domain $y>0$, we obtain $$ \int \frac{d y}{\sqrt{y}}=\int d x $$ whence $$ 2 \sqrt{y}=x+C $$ and $$ y=\frac{1}{4}(x+C)^{2}, \quad x>-C $$ (the restriction $x>-C$ comes from the previous line). Similarly, in the domain $y<0$, we obtain $$ \int \frac{d y}{\sqrt{-y}}=\int d x $$ whence $$ -2 \sqrt{-y}=x+C $$ and $$ y=-\frac{1}{4}(x+C)^{2}, \quad x<-C . $$ We obtain the following integral curves. We see that the integral curves in the domain $y>0$ touch the curve $y=0$ and so do the integral curves in the domain $y<0$. This allows us to construct more solutions as follows: take a solution $y_{1}(x)<0$ that vanishes at $x=a$ and a solution $y_{2}(x)>0$ that vanishes at $x=b$ where $a<b$ are arbitrary reals. Then define a new solution: $$ y(x)= \begin{cases}y_{1}(x), & x<a \\ 0, & a \leq x \leq b \\ y_{2}(x), & x>b\end{cases} $$ Note that such solutions are not obtained automatically by the method of separation of variables. It follows that through any point $\left(x_{0}, y_{0}\right) \in \mathbb{R}^{2}$ there are infinitely many integral curves of the given equation.

### Linear ODE of 1st order

Consider the ODE of the form $$ y^{\prime}+a(x) y=b(x) $$ where $a$ and $b$ are given functions of $x$, defined on a certain interval $I$. This equation is called linear because it depends linearly on $y$ and $y^{\prime}$. A linear ODE can be solved as follows.

Theorem 1.2 (The method of variation of parameter) Let functions $a(x)$ and $b(x)$ be continuous in an interval $I$. Then the general solution of the linear ODE (1.9) has the form $$ y(x)=e^{-A(x)} \int b(x) e^{A(x)} d x, $$ where $A(x)$ is a primitive of $a(x)$ on $I$.

Note that the function $y(x)$ given by (1.10) is defined on the full interval $I$.

Proof.
Let us make the change of the unknown function $u(x)=y(x) e^{A(x)}$, that is, $$ y(x)=u(x) e^{-A(x)} . $$ Substituting this to the equation (1.9) we obtain $$ \begin{gathered} \left(u e^{-A}\right)^{\prime}+a u e^{-A}=b, \\ u^{\prime} e^{-A}-u e^{-A} A^{\prime}+a u e^{-A}=b . \end{gathered} $$ Since $A^{\prime}=a$, we see that the two terms in the left hand side cancel out, and we end up with a very simple equation for $u(x)$ : $$ u^{\prime} e^{-A}=b $$ whence $u^{\prime}=b e^{A}$ and $$ u=\int b e^{A} d x . $$ Substituting into (1.11), we finish the proof.

One may wonder how one could guess to make the change (1.11). Here is the motivation. Consider first the case when $b(x) \equiv 0$. In this case, the equation (1.9) becomes $$ y^{\prime}+a(x) y=0 $$ and it is called homogeneous. Clearly, the homogeneous linear equation is separable. In the domains $y>0$ and $y<0$ we have $$ \frac{y^{\prime}}{y}=-a(x) $$ and $$ \int \frac{d y}{y}=-\int a(x) d x=-A(x)+C . $$ Then $\ln |y|=-A(x)+C$ and $$ y(x)=C e^{-A(x)} $$ where $C$ can be any real (including $C=0$ that corresponds to the solution $y \equiv 0$). For the general equation (1.9), take the above solution to the homogeneous equation and replace the constant $C$ by a function $C(x)$ (which was denoted by $u(x)$ in the proof); this results in the above change. Since we have replaced a constant parameter by a function, this method is called the method of variation of parameter. It applies to linear equations of higher order as well.

Example. Consider the equation $$ y^{\prime}+\frac{1}{x} y=e^{x^{2}} $$ in the domain $x>0$. Then $$ A(x)=\int a(x) d x=\int \frac{d x}{x}=\ln x $$ (we do not add a constant $C$ since $A(x)$ is one of the primitives of $a(x)$), $$ y(x)=\frac{1}{x} \int e^{x^{2}} x d x=\frac{1}{2 x} \int e^{x^{2}} d x^{2}=\frac{1}{2 x}\left(e^{x^{2}}+C\right), $$ where $C$ is an arbitrary constant.

Alternatively, one can solve first the homogeneous equation $$ y^{\prime}+\frac{1}{x} y=0 $$ using separation of variables: $$ \begin{aligned} \frac{y^{\prime}}{y} & =-\frac{1}{x} \\ (\ln y)^{\prime} & =-(\ln x)^{\prime} \\ \ln y & =-\ln x+C_{1} \\ y & =\frac{C}{x} . \end{aligned} $$ Next, replace the constant $C$ by a function $C(x)$ and substitute into (1.12): $$ \begin{aligned} \left(\frac{C(x)}{x}\right)^{\prime}+\frac{1}{x} \frac{C}{x} & =e^{x^{2}} \\ \frac{C^{\prime} x-C}{x^{2}}+\frac{C}{x^{2}} & =e^{x^{2}} \\ \frac{C^{\prime}}{x} & =e^{x^{2}} \\ C^{\prime} & =e^{x^{2}} x \\ C(x) & =\int e^{x^{2}} x d x=\frac{1}{2}\left(e^{x^{2}}+C_{0}\right) . \end{aligned} $$ Hence, $$ y=\frac{C(x)}{x}=\frac{1}{2 x}\left(e^{x^{2}}+C_{0}\right) $$ where $C_{0}$ is an arbitrary constant.

Corollary. Under the conditions of Theorem 1.2, for any $x_{0} \in I$ and any $y_{0} \in \mathbb{R}$ there exists exactly one solution $y(x)$ defined on $I$ such that $y\left(x_{0}\right)=y_{0}$. That is, through any point $\left(x_{0}, y_{0}\right) \in I \times \mathbb{R}$ there goes exactly one integral curve of the equation.

Proof. Let $B(x)$ be a primitive of $b e^{A}$ so that the general solution can be written in the form $$ y=e^{-A(x)}(B(x)+C) $$ with an arbitrary constant $C$. Obviously, any such solution is defined on $I$. The condition $y\left(x_{0}\right)=y_{0}$ allows us to determine $C$ uniquely from the equation: $$ C=y_{0} e^{A\left(x_{0}\right)}-B\left(x_{0}\right), $$ whence the claim follows.

### Quasi-linear ODEs and differential forms

Let $F(x, y)$ be a real valued function defined in an open set $\Omega \subset \mathbb{R}^{2}$.
Recall that $F$ is differentiable at a point $(x, y) \in \Omega$ if there exist real numbers $a, b$ such that $$ F(x+d x, y+d y)-F(x, y)=a d x+b d y+o(|d x|+|d y|), $$ as $|d x|+|d y| \rightarrow 0$. Here $d x$ and $d y$ are the increments of $x$ and $y$, respectively, which are considered as new independent variables (the differentials). The linear function $a d x+b d y$ of the variables $d x, d y$ is called the differential of $F$ at $(x, y)$ and is denoted by $d F$, that is, $$ d F=a d x+b d y . $$ In general, $a$ and $b$ are functions of $(x, y)$. Recall also the following relations between the notion of a differential and partial derivatives:

1. If $F$ is differentiable at some point $(x, y)$ and its differential is given by (1.13) then the partial derivatives $F_{x}=\frac{\partial F}{\partial x}$ and $F_{y}=\frac{\partial F}{\partial y}$ exist at this point and $$ F_{x}=a, \quad F_{y}=b . $$
2. If $F$ is continuously differentiable in $\Omega$, that is, the partial derivatives $F_{x}$ and $F_{y}$ exist in $\Omega$ and are continuous functions, then $F$ is differentiable at any point in $\Omega$.

Definition. Given two functions $a(x, y)$ and $b(x, y)$ in $\Omega$, consider the expression $$ a(x, y) d x+b(x, y) d y $$ which is called a differential form. The differential form is called exact in $\Omega$ if there is a differentiable function $F$ in $\Omega$ such that $$ d F=a d x+b d y $$ and inexact otherwise. If the form is exact then the function $F$ from (1.14) is called the integral of the form.

Observe that not every differential form is exact, as one can see from the following statement.

Lemma 1.3 If functions $a, b$ are continuously differentiable in $\Omega$ then a necessary condition for the form $a d x+b d y$ to be exact is the identity $$ a_{y}=b_{x} $$

Proof. Indeed, if $F$ is an integral of the form $a d x+b d y$ then $F_{x}=a$ and $F_{y}=b$, whence it follows that the derivatives $F_{x}$ and $F_{y}$ are continuously differentiable. By a well-known fact from Analysis, this implies that $F_{x y}=F_{y x}$, whence $a_{y}=b_{x}$.

Example. The form $y d x-x d y$ is inexact because $a_{y}=1$ while $b_{x}=-1$. The form $y d x+x d y$ is exact because it has an integral $F(x, y)=x y$. The form $2 x y d x+\left(x^{2}+y^{2}\right) d y$ is exact because it has an integral $F(x, y)=x^{2} y+\frac{y^{3}}{3}$ (it will be explained later how one can obtain an integral).

If the differential form $a d x+b d y$ is exact then this allows one to solve easily the following differential equation: $$ a(x, y)+b(x, y) y^{\prime}=0 $$ This ODE is called quasi-linear because it is linear with respect to $y^{\prime}$ but not necessarily linear with respect to $y$. Using $y^{\prime}=\frac{d y}{d x}$, one can write (1.15) in the form $$ a(x, y) d x+b(x, y) d y=0 $$ which explains why the equation (1.15) is related to the differential form $a d x+b d y$. We say that the equation (1.15) is exact if the form $a d x+b d y$ is exact.

Theorem 1.4 Let $\Omega$ be an open subset of $\mathbb{R}^{2}$, let $a, b$ be continuous functions on $\Omega$ such that the form $a d x+b d y$ is exact, and let $F$ be an integral of this form. Consider a differentiable function $y(x)$ defined on an interval $I \subset \mathbb{R}$ such that the graph of $y$ is contained in $\Omega$. Then $y$ solves the equation (1.15) if and only if $$ F(x, y(x))=\text { const on } I . $$

Proof. The hypothesis that the graph of $y(x)$ is contained in $\Omega$ implies that the composite function $F(x, y(x))$ is defined on $I$.
By the chain rule, we have $$ \frac{d}{d x} F(x, y(x))=F_{x}+F_{y} y^{\prime}=a+b y^{\prime} . $$ Hence, the equation $a+b y^{\prime}=0$ is equivalent to $\frac{d}{d x} F(x, y(x))=0$, and the latter is equivalent to $F(x, y(x))=$ const.

Example. The equation $y+x y^{\prime}=0$ is exact and is equivalent to $x y=C$ because $y d x+x d y=d(x y)$. The same can be obtained using the method of separation of variables. The equation $2 x y+\left(x^{2}+y^{2}\right) y^{\prime}=0$ is exact and is equivalent to $$ x^{2} y+\frac{y^{3}}{3}=C . $$ Below are some integral curves of this equation.

How can one decide whether a given differential form is exact or not? A partial answer is given by the following theorem. We say that a set $\Omega \subset \mathbb{R}^{2}$ is a rectangle (box) if it has the form $I \times J$ where $I$ and $J$ are intervals in $\mathbb{R}$.

Theorem 1.5 (The Poincaré lemma) Let $\Omega$ be an open rectangle in $\mathbb{R}^{2}$. Let $a, b$ be continuously differentiable functions on $\Omega$ such that $a_{y} \equiv b_{x}$. Then the differential form $a d x+b d y$ is exact in $\Omega$.

Proof of Theorem 1.5. Assume first that the integral $F$ exists and $F\left(x_{0}, y_{0}\right)=0$ for some point $\left(x_{0}, y_{0}\right) \in \Omega$ (the latter can always be achieved by adding a constant to $F$). For any point $(x, y) \in \Omega$, the point $\left(x, y_{0}\right)$ also lies in $\Omega$; moreover, the intervals $\left[\left(x_{0}, y_{0}\right),\left(x, y_{0}\right)\right]$ and $\left[\left(x, y_{0}\right),(x, y)\right]$ are contained in $\Omega$ because $\Omega$ is a rectangle. Since $F_{x}=a$ and $F_{y}=b$, we obtain by the fundamental theorem of calculus that $$ F\left(x, y_{0}\right)=F\left(x, y_{0}\right)-F\left(x_{0}, y_{0}\right)=\int_{x_{0}}^{x} F_{x}\left(s, y_{0}\right) d s=\int_{x_{0}}^{x} a\left(s, y_{0}\right) d s $$ and $$ F(x, y)-F\left(x, y_{0}\right)=\int_{y_{0}}^{y} F_{y}(x, t) d t=\int_{y_{0}}^{y} b(x, t) d t $$ whence $$ F(x, y)=\int_{x_{0}}^{x} a\left(s, y_{0}\right) d s+\int_{y_{0}}^{y} b(x, t) d t . $$ Now use the formula (1.16) to define the function $F(x, y)$. Let us show that $F$ is indeed the integral of the form $a d x+b d y$. Since $a$ and $b$ are continuous, it suffices to verify that $$ F_{x}=a \text { and } F_{y}=b . $$ It is easy to see from (1.16) that $$ F_{y}=\frac{\partial}{\partial y} \int_{y_{0}}^{y} b(x, t) d t=b(x, y) . $$ Next, we have $$ \begin{aligned} F_{x} & =\frac{\partial}{\partial x} \int_{x_{0}}^{x} a\left(s, y_{0}\right) d s+\frac{\partial}{\partial x} \int_{y_{0}}^{y} b(x, t) d t \\ & =a\left(x, y_{0}\right)+\int_{y_{0}}^{y} \frac{\partial}{\partial x} b(x, t) d t . \end{aligned} $$ The fact that the integral and the derivative $\frac{\partial}{\partial x}$ can be interchanged will be justified below (see Lemma 1.6). Using the hypothesis $b_{x}=a_{y}$, we obtain from (1.17) $$ \begin{aligned} F_{x} & =a\left(x, y_{0}\right)+\int_{y_{0}}^{y} a_{y}(x, t) d t \\ & =a\left(x, y_{0}\right)+\left(a(x, y)-a\left(x, y_{0}\right)\right) \\ & =a(x, y), \end{aligned} $$ which finishes the proof.

Now we prove the lemma which is needed to justify (1.17).

Lemma 1.6 Let $g(x, t)$ be a continuous function on $I \times J$ where $I$ and $J$ are bounded closed intervals in $\mathbb{R}$. Consider the function $$ f(x)=\int_{\alpha}^{\beta} g(x, t) d t $$ where $[\alpha, \beta]=J$, which is defined for all $x \in I$.
If the partial derivative $g_{x}$ exists and is continuous on $I \times J$ then $f$ is continuously differentiable on $I$ and, for any $x \in I$, $$ f^{\prime}(x)=\int_{\alpha}^{\beta} g_{x}(x, t) d t . $$ In other words, the operations of differentiation in $x$ and integration in $t$, when applied to $g(x, t)$, are interchangeable.

Proof of Lemma 1.6. We need to show that, for all $x \in I$, $$ \frac{f\left(x^{\prime}\right)-f(x)}{x^{\prime}-x} \rightarrow \int_{\alpha}^{\beta} g_{x}(x, t) d t \text { as } x^{\prime} \rightarrow x, $$ which amounts to $$ \int_{\alpha}^{\beta} \frac{g\left(x^{\prime}, t\right)-g(x, t)}{x^{\prime}-x} d t \rightarrow \int_{\alpha}^{\beta} g_{x}(x, t) d t \text { as } x^{\prime} \rightarrow x . $$ Note that by the definition of a partial derivative, for any $t \in[\alpha, \beta]$, $$ \frac{g\left(x^{\prime}, t\right)-g(x, t)}{x^{\prime}-x} \rightarrow g_{x}(x, t) \text { as } x^{\prime} \rightarrow x . $$ Consider all parts of (1.18) as functions of $t$, with fixed $x$ and with $x^{\prime}$ as a parameter. Then we have a convergence of a sequence of functions, and we would like to deduce that their integrals converge as well. By a result from Analysis II, this is the case if the convergence is uniform (gleichmässig) in the whole interval $[\alpha, \beta]$, that is, if $$ \sup _{t \in[\alpha, \beta]}\left|\frac{g\left(x^{\prime}, t\right)-g(x, t)}{x^{\prime}-x}-g_{x}(x, t)\right| \rightarrow 0 \quad \text { as } x^{\prime} \rightarrow x . $$ By the mean value theorem, for any $t \in[\alpha, \beta]$, there is $\xi \in\left[x, x^{\prime}\right]$ such that $$ \frac{g\left(x^{\prime}, t\right)-g(x, t)}{x^{\prime}-x}=g_{x}(\xi, t) . $$ Hence, the difference quotient in (1.19) can be replaced by $g_{x}(\xi, t)$. To proceed further, recall that a continuous function on a compact set is uniformly continuous. In particular, the function $g_{x}(x, t)$ is uniformly continuous on $I \times J$, that is, for any $\varepsilon>0$ there is $\delta>0$ such that $$ x, \xi \in I,|x-\xi|<\delta \text { and } t, s \in J,|t-s|<\delta \Rightarrow\left|g_{x}(x, t)-g_{x}(\xi, s)\right|<\varepsilon . $$ If $\left|x-x^{\prime}\right|<\delta$ then also $|x-\xi|<\delta$ and, by (1.20) with $s=t$, $$ \left|g_{x}(\xi, t)-g_{x}(x, t)\right|<\varepsilon \text { for all } t \in J . $$ In other words, $\left|x-x^{\prime}\right|<\delta$ implies that $$ \sup _{t \in J}\left|\frac{g\left(x^{\prime}, t\right)-g(x, t)}{x^{\prime}-x}-g_{x}(x, t)\right| \leq \varepsilon, $$ whence (1.19) follows.

Consider some examples illustrating Theorem 1.5.

Example. Consider again the differential form $2 x y d x+\left(x^{2}+y^{2}\right) d y$ in $\Omega=\mathbb{R}^{2}$. Since $$ a_{y}=(2 x y)_{y}=2 x=\left(x^{2}+y^{2}\right)_{x}=b_{x}, $$ we conclude by Theorem 1.5 that the given form is exact. The integral $F$ can be found by (1.16) taking $x_{0}=y_{0}=0$ : $$ F(x, y)=\int_{0}^{x} 2 s \cdot 0 \, d s+\int_{0}^{y}\left(x^{2}+t^{2}\right) d t=x^{2} y+\frac{y^{3}}{3}, $$ as was observed above.

Example. Consider the differential form $$ \frac{-y d x+x d y}{x^{2}+y^{2}} $$ in $\Omega=\mathbb{R}^{2} \backslash\{0\}$. This form satisfies the condition $a_{y}=b_{x}$ because $$ a_{y}=-\left(\frac{y}{x^{2}+y^{2}}\right)_{y}=-\frac{\left(x^{2}+y^{2}\right)-2 y^{2}}{\left(x^{2}+y^{2}\right)^{2}}=\frac{y^{2}-x^{2}}{\left(x^{2}+y^{2}\right)^{2}} $$ and $$ b_{x}=\left(\frac{x}{x^{2}+y^{2}}\right)_{x}=\frac{\left(x^{2}+y^{2}\right)-2 x^{2}}{\left(x^{2}+y^{2}\right)^{2}}=\frac{y^{2}-x^{2}}{\left(x^{2}+y^{2}\right)^{2}} . $$ By Theorem 1.5 we conclude that the given form is exact in any rectangular domain in $\Omega$. However, let us show that the form is inexact in $\Omega$. Consider the function $\theta(x, y)$, the polar angle, that is defined in the domain $$ \Omega^{\prime}=\mathbb{R}^{2} \backslash\{(x, 0): x \leq 0\} $$ by the conditions $$ \sin \theta=\frac{y}{r}, \quad \cos \theta=\frac{x}{r}, \quad \theta \in(-\pi, \pi), $$ where $r=\sqrt{x^{2}+y^{2}}$. Let us show that in $\Omega^{\prime}$ $$ d \theta=\frac{-y d x+x d y}{x^{2}+y^{2}} . $$ In the half-plane $\{x>0\}$ we have $\tan \theta=\frac{y}{x}$ and $\theta \in(-\pi / 2, \pi / 2)$ whence $$ \theta=\arctan \frac{y}{x} . $$ Then (1.22) follows by differentiation of the arctan: $$ d \theta=\frac{1}{1+(y / x)^{2}} \frac{x d y-y d x}{x^{2}}=\frac{-y d x+x d y}{x^{2}+y^{2}} . $$ In the half-plane $\{y>0\}$ we have $\cot \theta=\frac{x}{y}$ and $\theta \in(0, \pi)$ whence $$ \theta=\operatorname{arccot} \frac{x}{y} $$ and (1.22) follows again. Finally, in the half-plane $\{y<0\}$ we have $\cot \theta=\frac{x}{y}$ and $\theta \in(-\pi, 0)$ whence $$ \theta=-\operatorname{arccot}\left(-\frac{x}{y}\right) $$ and (1.22) follows again. Since $\Omega^{\prime}$ is the union of the three half-planes $\{x>0\},\{y>0\}$, $\{y<0\}$, we conclude that (1.22) holds in $\Omega^{\prime}$ and, hence, the form (1.21) is exact in $\Omega^{\prime}$.

Why is the form (1.21) inexact in $\Omega$? Assume, on the contrary, that the form (1.21) is exact in $\Omega$ and that $F$ is its integral in $\Omega$, that is, $$ d F=\frac{-y d x+x d y}{x^{2}+y^{2}} . $$ Then $d F=d \theta$ in $\Omega^{\prime}$, whence it follows that $d(F-\theta)=0$ and, hence,${ }^{4}$ $F=\theta+$ const in $\Omega^{\prime}$. It follows from this identity that the function $\theta$ can be extended from $\Omega^{\prime}$ to a continuous function on $\Omega$, which however is not true, because the limits of $\theta$ when approaching the point $(-1,0)$ (or any other point $(x, 0)$ with $x<0$) from above and below are different.

${ }^{4}$ We use the following fact from Analysis II: if the differential of a function is identically zero in a connected open set $U \subset \mathbb{R}^{n}$ then the function is constant in this set. Recall that the set $U$ is called connected if any two points from $U$ can be connected by a polygonal line that is contained in $U$. The set $\Omega^{\prime}$ is obviously connected.

The moral of this example is that the statement of Theorem 1.5 is not true for an arbitrary open set $\Omega$. It is possible to show that the statement of Theorem 1.5 is true if and only if the set $\Omega$ is simply connected, that is, if any closed curve in $\Omega$ can be continuously deformed to a point while staying in $\Omega$. Obviously, the rectangles are simply connected (as well as $\Omega^{\prime}$), while the set $\Omega=\mathbb{R}^{2} \backslash\{0\}$ is not simply connected.

### Integrating factor

Consider again the quasi-linear equation $$ a(x, y)+b(x, y) y^{\prime}=0 $$ and assume that it is inexact. Write this equation in the form $$ a d x+b d y=0 . $$ After multiplying by a non-zero function $M(x, y)$, we obtain an equivalent equation $$ M a d x+M b d y=0, $$ which may become exact, provided the function $M$ is suitably chosen.

Definition. A function $M(x, y)$ is called an integrating factor for the differential equation (1.23) in $\Omega$ if $M$ is a non-zero function in $\Omega$ such that the form $M a d x+M b d y$ is exact in $\Omega$.
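As a computational aside, the exactness test of Lemma 1.3 and the effect of an integrating factor can be checked symbolically. The following minimal sketch (assuming the sympy library is available; it is an illustration, not part of the formal development) verifies that the form $y\,dx - x\,dy$, shown above to be inexact, becomes exact after multiplication by the candidate factor $M=1/x^{2}$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# the form y dx - x dy, shown above to be inexact
a, b = y, -x
print(sp.diff(a, y), sp.diff(b, x))      # prints: 1 -1  (a_y != b_x, inexact)

# multiply by the candidate integrating factor M = 1/x^2
M = 1 / x**2
exactness = sp.simplify(sp.diff(M * a, y) - sp.diff(M * b, x))
print(exactness)                          # prints: 0  (the new form is exact)
```

An integral of the resulting exact form $\frac{y}{x^{2}}\,dx - \frac{1}{x}\,dy$ is $F(x, y)=-\frac{y}{x}$, so by Theorem 1.4 the integral curves are the lines $y=Cx$ (in a domain with $x \neq 0$).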
If one has found an integrating factor then, after multiplying (1.23) by $M$, the problem reduces to the case of Theorem 1.4.

Example. Consider the ODE $$ y^{\prime}=\frac{y}{4 x^{2} y+x}, $$ in the domain $\{x>0, y>0\}$ and write it in the form $$ y d x-\left(4 x^{2} y+x\right) d y=0 . $$ Clearly, this equation is not exact. However, dividing it by $x^{2}$, we obtain the equation $$ \frac{y}{x^{2}} d x-\left(4 y+\frac{1}{x}\right) d y=0 $$ which is already exact in any rectangular domain because $$ \left(\frac{y}{x^{2}}\right)_{y}=\frac{1}{x^{2}}=-\left(4 y+\frac{1}{x}\right)_{x} . $$ Taking in (1.16) $x_{0}=y_{0}=1$, we obtain the integral of the form as follows: $$ F(x, y)=\int_{1}^{x} \frac{1}{s^{2}} d s-\int_{1}^{y}\left(4 t+\frac{1}{x}\right) d t=3-2 y^{2}-\frac{y}{x} . $$ By Theorem 1.4, the general solution is given by the identity $$ 2 y^{2}+\frac{y}{x}=C . $$

### Second order ODE

A general second order ODE, resolved with respect to $y^{\prime \prime}$, has the form $$ y^{\prime \prime}=f\left(x, y, y^{\prime}\right), $$ where $f$ is a given function of three variables and $y=y(x)$ is an unknown function. We consider here some problems that amount to a second order ODE.

#### Newton's second law

Consider the movement of a point particle along a straight line and let its coordinate at time $t$ be $x(t)$. The velocity (Geschwindigkeit) of the particle is $v(t)=x^{\prime}(t)$ and the acceleration (Beschleunigung) is $a(t)=x^{\prime \prime}(t)$. Newton's second law says that at any time $$ m x^{\prime \prime}=F, $$ where $m$ is the mass of the particle and $F$ is the force (Kraft) acting on the particle. In general, $F$ is a function of $t, x, x^{\prime}$ so that (1.24) can be regarded as a second order ODE for $x(t)$.

The force $F$ is called conservative if $F$ depends only on the position $x$. For example, the gravitational force, the spring force, and the electrostatic force are conservative, while friction and air resistance are non-conservative as they depend on the velocity $v$. Assuming $F=F(x)$, denote by $U(x)$ a primitive function of $-F(x)$. The function $U$ is called the potential of the force $F$. Multiplying the equation (1.24) by $x^{\prime}$ and integrating in $t$, we obtain $$ \begin{gathered} m \int x^{\prime \prime} x^{\prime} d t=\int F(x) x^{\prime} d t \\ \frac{m}{2} \int \frac{d}{d t}\left(x^{\prime}\right)^{2} d t=\int F(x) d x \\ \frac{m v^{2}}{2}=-U(x)+C \end{gathered} $$ and $$ \frac{m v^{2}}{2}+U(x)=C . $$ The sum $\frac{m v^{2}}{2}+U(x)$ is called the total energy of the particle (the sum of the kinetic energy and the potential energy). Hence, we have obtained the law of conservation of energy: the total energy of the particle in a conservative field remains constant.

#### Electrical circuit

Consider an $RLC$-circuit, that is, an electrical circuit (Schaltung) where a resistor, an inductor and a capacitor are connected in series. Denote by $R$ the resistance (Widerstand) of the resistor, by $L$ the inductance (Induktivität) of the inductor, and by $C$ the capacitance (Kapazität) of the capacitor. Let the circuit contain a power source with the voltage $V(t)$ (Spannung) where $t$ is time. Denote by $I(t)$ the current (Strom) in the circuit at time $t$. Using the laws of electromagnetism, we obtain that the potential difference $v_{R}$ on the resistor $R$ is equal to $$ v_{R}=R I $$ (Ohm's law), and the potential difference $v_{L}$ on the inductor is equal to $$ v_{L}=L \frac{d I}{d t} $$ (Faraday's law).
The potential difference $v_{C}$ on the capacitor is equal to $$ v_{C}=\frac{Q}{C} $$ where $Q$ is the charge (Ladungsmenge) of the capacitor; also we have $Q^{\prime}=I$. By Kirchhoff's law, we have $$ v_{R}+v_{L}+v_{C}=V(t) $$ whence $$ R I+L I^{\prime}+\frac{Q}{C}=V(t) . $$ Differentiating in $t$, we obtain $$ L I^{\prime \prime}+R I^{\prime}+\frac{I}{C}=V^{\prime}, $$ which is a second order ODE with respect to $I(t)$. We will come back to this equation after having developed the theory of linear ODEs.

## Existence and uniqueness theorems

### 1st order ODE

We change notation, denoting the independent variable by $t$ and the unknown function by $x(t)$. Hence, we write an ODE in the form $$ x^{\prime}=f(t, x), $$ where $f$ is a real valued function on an open set $\Omega \subset \mathbb{R}^{2}$ and a pair $(t, x)$ is considered as a point in $\mathbb{R}^{2}$. Let us associate with the given ODE the initial value problem (Anfangswertproblem), shortly IVP, which is the problem of finding a solution that satisfies in addition the initial condition $x\left(t_{0}\right)=x_{0}$ where $\left(t_{0}, x_{0}\right)$ is a given point in $\Omega$. We write the IVP in a compact form as follows: $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ A solution to the IVP is a differentiable function $x(t): I \rightarrow \mathbb{R}$ where $I$ is an open interval containing $t_{0}$, such that $(t, x(t)) \in \Omega$ for all $t \in I$, which satisfies the ODE in $I$ and the initial condition. Geometrically, the graph of the function $x(t)$ is contained in $\Omega$ and goes through the point $\left(t_{0}, x_{0}\right)$.

In order to state the main result, we need the following definitions.

Definition. We say that a function $f: \Omega \rightarrow \mathbb{R}$ is Lipschitz in $x$ if there is a constant $L$ such that $$ |f(t, x)-f(t, y)| \leq L|x-y| $$ for all $t, x, y$ such that $(t, x) \in \Omega$ and $(t, y) \in \Omega$. The constant $L$ is called the Lipschitz constant of $f$ in $\Omega$. We say that a function $f: \Omega \rightarrow \mathbb{R}$ is locally Lipschitz in $x$ if, for any point $\left(t_{0}, x_{0}\right) \in \Omega$ there exist positive constants $\varepsilon, \delta$ such that the rectangle $$ R=\left[t_{0}-\delta, t_{0}+\delta\right] \times\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right] $$ is contained in $\Omega$ and the function $f$ is Lipschitz in $R$; that is, there is a constant $L$ such that for all $t \in\left[t_{0}-\delta, t_{0}+\delta\right]$ and $x, y \in\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$, $$ |f(t, x)-f(t, y)| \leq L|x-y| . $$ Note that in the latter case the constant $L$ may be different for different rectangles.

Lemma 2.1 (a) If the partial derivative $f_{x}$ exists and is bounded in a rectangle $R \subset \mathbb{R}^{2}$ then $f$ is Lipschitz in $x$ in $R$. (b) If the partial derivative $f_{x}$ exists and is continuous in an open set $\Omega \subset \mathbb{R}^{2}$ then $f$ is locally Lipschitz in $x$ in $\Omega$.

Proof. (a) If $(t, x)$ and $(t, y)$ belong to $R$ then the whole interval between these points is also in $R$, and we have by the mean value theorem $$ f(t, x)-f(t, y)=f_{x}(t, \xi)(x-y), $$ for some $\xi \in[x, y]$. By hypothesis, $f_{x}$ is bounded in $R$, that is, $$ L:=\sup _{R}\left|f_{x}\right|<\infty, $$ whence we obtain $$ |f(t, x)-f(t, y)| \leq L|x-y| . $$ Hence, $f$ is Lipschitz in $R$ with the Lipschitz constant (2.3).
(b) Fix a point $\left(t_{0}, x_{0}\right) \in \Omega$ and choose positive $\varepsilon, \delta$ so small that the rectangle $R$ defined by (2.2) is contained in $\Omega$ (which is possible because $\Omega$ is an open set). Since $R$ is a bounded closed set, the continuous function $f_{x}$ is bounded on $R$. By part (a) we conclude that $f$ is Lipschitz in $R$, which means that $f$ is locally Lipschitz in $\Omega$.

Example. The function $f(t, x)=|x|$ is Lipschitz in $x$ in $\mathbb{R}^{2}$ because $$ \left|\, |x|-|y| \,\right| \leq|x-y| $$ by the triangle inequality. Clearly, $f$ is not differentiable in $x$ at $x=0$. Hence, the continuous differentiability of $f$ is sufficient for $f$ to be Lipschitz in $x$ but not necessary.

The next theorem is one of the main results of this course.

Theorem 2.2 (The Picard - Lindelöf theorem) Let $\Omega$ be an open set in $\mathbb{R}^{2}$ and $f(t, x)$ be a continuous function in $\Omega$ that is locally Lipschitz in $x$. (Existence) Then, for any point $\left(t_{0}, x_{0}\right) \in \Omega$, the initial value problem IVP (2.1) has a solution. (Uniqueness) If $x_{1}(t)$ and $x_{2}(t)$ are two solutions of the same IVP then $x_{1}(t)=x_{2}(t)$ in their common domain.

Remark. By Lemma 2.1, the hypothesis of Theorem 2.2 that $f$ is locally Lipschitz in $x$ could be replaced by the simpler hypothesis that $f_{x}$ is continuous. However, as we have seen above, there are examples of functions that are Lipschitz but not differentiable, and Theorem 2.2 applies to such functions. If we completely drop the Lipschitz condition and assume only that $f$ is continuous in $(t, x)$ then a solution still exists (Peano's theorem) while the uniqueness fails in general, as will be seen in the next example.

Example. Consider the equation $x^{\prime}=\sqrt{|x|}$ which was already solved before by separation of variables. The function $x(t) \equiv 0$ is a solution, and the following two functions $$ \begin{aligned} & x(t)=\frac{1}{4} t^{2}, t>0, \\ & x(t)=-\frac{1}{4} t^{2}, t<0 \end{aligned} $$ are also solutions (this can also be trivially verified by substituting them into the ODE). Gluing together these two functions and extending the resulting function to $t=0$ by setting $x(0)=0$, we obtain a new solution defined for all real $t$ (see the diagram below). Hence, there are at least two solutions that satisfy the initial condition $x(0)=0$. The uniqueness breaks down because the function $\sqrt{|x|}$ is not Lipschitz near $0$.

Proof of existence in Theorem 2.2. We start with the following observation.

Claim. Let $x(t)$ be a function defined on an open interval $I \subset \mathbb{R}$ containing $t_{0}$. The function $x(t)$ solves the IVP if and only if $x(t)$ is continuous, $(t, x(t)) \in \Omega$ for all $t \in I$, and $$ x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s . $$ Indeed, if $x$ solves the IVP then (2.4) follows from $x^{\prime}=f(t, x(t))$ just by integration: $$ \int_{t_{0}}^{t} x^{\prime}(s) d s=\int_{t_{0}}^{t} f(s, x(s)) d s $$ whence $$ x(t)-x_{0}=\int_{t_{0}}^{t} f(s, x(s)) d s . $$ Conversely, if $x$ is a continuous function that satisfies (2.4) then the right hand side of (2.4) is differentiable in $t$, whence it follows that $x(t)$ is differentiable. It is trivial that $x\left(t_{0}\right)=x_{0}$, and after differentiating (2.4) we obtain the ODE $x^{\prime}=f(t, x)$. This claim reduces the problem of solving the IVP to the integral equation (2.4).
Fix a point $\left(t_{0}, x_{0}\right) \in \Omega$ and let $\varepsilon, \delta$ be the parameters from the local Lipschitz condition at this point; that is, there is a constant $L$ such that $$ |f(t, x)-f(t, y)| \leq L|x-y| $$ for all $t \in\left[t_{0}-\delta, t_{0}+\delta\right]$ and $x, y \in\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$. Set $$ J=\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right] \text { and } I=\left[t_{0}-r, t_{0}+r\right] $$ where $0<r \leq \delta$ is a new parameter, whose value will be specified later on. By construction, $I \times J \subset \Omega$. Denote by $X$ the family of all continuous functions $x(t): I \rightarrow J$, that is, $$ X=\{x: I \rightarrow J \mid x \text { is continuous }\} $$ (see the diagram below). Consider the integral operator $A$ defined on functions $x \in X$ by $$ A x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s, $$ which is obviously motivated by (2.4). We would like to ensure that $x \in X$ implies $A x \in X$. Note that, for any $x \in X$, the point $(s, x(s))$ belongs to $\Omega$ so that the above integral makes sense and the function $A x$ is defined on $I$. This function is obviously continuous. We are left to verify that the image of $A x$ is contained in $J$. Indeed, the latter condition means that $$ \left|A x(t)-x_{0}\right| \leq \varepsilon \text { for all } t \in I . $$ We have, for any $t \in I$, $$ \left|A x(t)-x_{0}\right|=\left|\int_{t_{0}}^{t} f(s, x(s)) d s\right| \leq \sup _{s \in I, x \in J}|f(s, x)|\left|t-t_{0}\right| \leq M r, $$ where $$ M=\sup _{\substack{s \in\left[t_{0}-\delta, t_{0}+\delta\right] \\ x \in\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]}}|f(s, x)|<\infty . $$ Hence, if $r$ is so small that $M r \leq \varepsilon$ then (2.5) is satisfied and, hence, $A x \in X$.

To summarize the above argument, we have defined a function family $X$ and a mapping $A: X \rightarrow X$. By the above Claim, a function $x \in X$ solves the IVP if $x$ is a fixed point of the mapping $A$, that is, if $x=A x$. The existence of a fixed point will be obtained using the Banach fixed point theorem: If $(X, d)$ is a complete metric space (vollständiger metrischer Raum) and $A: X \rightarrow X$ is a contraction mapping (Kontraktionsabbildung), that is, $$ d(A x, A y) \leq q d(x, y) $$ for some $q \in(0,1)$ and all $x, y \in X$, then $A$ has a fixed point. By the proof of this theorem, one starts with any element $x_{0} \in X$, constructs the sequence of iterations $\left\{x_{n}\right\}_{n=1}^{\infty}$ using the rule $x_{n+1}=A x_{n}, n=0,1, \ldots$, and shows that the sequence $\left\{x_{n}\right\}_{n=1}^{\infty}$ converges in $X$ to a fixed point.

In order to be able to apply this theorem, we must introduce a distance function $d$ (Abstand) on $X$ so that $(X, d)$ is a complete metric space and $A$ is a contraction mapping with respect to this distance. Let $d$ be the sup-distance, that is, for any two functions $x, y \in X$, set $$ d(x, y)=\sup _{t \in I}|x(t)-y(t)| . $$ Using the fact that the convergence in $(X, d)$ is the uniform convergence of functions and the uniform limit of continuous functions is continuous, one can show that the metric space $(X, d)$ is complete (see Exercise 16). How can we ensure that the mapping $A: X \rightarrow X$ is a contraction?
For any two functions $x, y \in X$ and any $t \in I$, we have $x(t), y(t) \in J$, whence by the Lipschitz condition $$ \begin{aligned} |A x(t)-A y(t)| & =\left|\int_{t_{0}}^{t} f(s, x(s)) d s-\int_{t_{0}}^{t} f(s, y(s)) d s\right| \\ & \leq\left|\int_{t_{0}}^{t}\left|f(s, x(s))-f(s, y(s))\right| d s\right| \\ & \leq\left|\int_{t_{0}}^{t} L\left|x(s)-y(s)\right| d s\right| \\ & \leq L\left|t-t_{0}\right| \sup _{s \in I}|x(s)-y(s)| . \end{aligned} $$ Therefore, $$ \sup _{t \in I}|A x(t)-A y(t)| \leq L r \sup _{s \in I}|x(s)-y(s)|, $$ whence $$ d(A x, A y) \leq L r \, d(x, y) . $$ Hence, choosing $r<1 / L$, we obtain that $A$ is a contraction, which finishes the proof of the existence.

Remark. Let us summarize the proof of the existence of solutions as follows. Let $\varepsilon, \delta, L$ be the parameters from the local Lipschitz condition at the point $\left(t_{0}, x_{0}\right)$, that is, $$ |f(t, x)-f(t, y)| \leq L|x-y| $$ for all $t \in\left[t_{0}-\delta, t_{0}+\delta\right]$ and $x, y \in\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$. Let $$ M=\sup \left\{|f(t, x)|: t \in\left[t_{0}-\delta, t_{0}+\delta\right], x \in\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]\right\} . $$ Then the IVP has a solution on an interval $\left[t_{0}-r, t_{0}+r\right]$ provided $r$ is a positive number that satisfies the following conditions: $$ r \leq \delta, \quad r \leq \frac{\varepsilon}{M}, \quad r<\frac{1}{L} . $$ For some applications, it is important that $r$ can be determined as a function of $\varepsilon, \delta, M, L$.

Example. The method of the proof of the existence in Theorem 2.2 suggests the following procedure for computing the solution of the IVP. We start with any function $x_{0} \in X$ (using the same notation as in the proof) and construct the sequence $\left\{x_{n}\right\}_{n=0}^{\infty}$ of functions in $X$ using the rule $x_{n+1}=A x_{n}$. The sequence $\left\{x_{n}\right\}$ is called the Picard iterations, and it converges uniformly to the solution $x(t)$. Let us illustrate this method on the following example: $$ \left\{\begin{array}{l} x^{\prime}=x \\ x(0)=1 \end{array}\right. $$ The operator $A$ is given by $$ A x(t)=1+\int_{0}^{t} x(s) d s, $$ whence, setting $x_{0}(t) \equiv 1$, we obtain $$ \begin{gathered} x_{1}(t)=1+\int_{0}^{t} x_{0} d s=1+t, \\ x_{2}(t)=1+\int_{0}^{t} x_{1} d s=1+t+\frac{t^{2}}{2}, \\ x_{3}(t)=1+\int_{0}^{t} x_{2} d s=1+t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}, \end{gathered} $$ and by induction $$ x_{n}(t)=1+t+\frac{t^{2}}{2 !}+\frac{t^{3}}{3 !}+\ldots+\frac{t^{n}}{n !} . $$ Clearly, $x_{n} \rightarrow e^{t}$ as $n \rightarrow \infty$, and the function $x(t)=e^{t}$ indeed solves the above IVP.

For the proof of the uniqueness, we need the following two lemmas.

Lemma 2.3 (The Gronwall inequality) Let $z(t)$ be a non-negative continuous function on $\left[t_{0}, t_{1}\right]$ where $t_{0}<t_{1}$. Assume that there are constants $C, L \geq 0$ such that $$ z(t) \leq C+L \int_{t_{0}}^{t} z(s) d s $$ for all $t \in\left[t_{0}, t_{1}\right]$. Then $$ z(t) \leq C \exp \left(L\left(t-t_{0}\right)\right) $$ for all $t \in\left[t_{0}, t_{1}\right]$.

Proof. We can assume that $C$ is strictly positive. Indeed, if (2.7) holds with $C=0$ then it holds with any $C>0$. Therefore, (2.8) holds with any $C>0$, whence it follows, by letting $C \rightarrow 0+$, that it holds with $C=0$ as well. Hence, assume in the sequel that $C>0$. This implies that the right hand side of (2.7) is positive. Set $$ F(t)=C+L \int_{t_{0}}^{t} z(s) d s $$ and observe that $F$ is differentiable and $F^{\prime}=L z$.
It follows from (2.7) that $z \leq F$ whence $$ F^{\prime}=L z \leq L F . $$ This is a differential inequality for $F$ that can be solved similarly to a separable ODE. Since $F>0$, dividing by $F$ we obtain $$ \frac{F^{\prime}}{F} \leq L, $$ whence by integration $$ \ln \frac{F(t)}{F\left(t_{0}\right)}=\int_{t_{0}}^{t} \frac{F^{\prime}(s)}{F(s)} d s \leq \int_{t_{0}}^{t} L d s=L\left(t-t_{0}\right), $$ for all $t \in\left[t_{0}, t_{1}\right]$. It follows that $$ F(t) \leq F\left(t_{0}\right) \exp \left(L\left(t-t_{0}\right)\right)=C \exp \left(L\left(t-t_{0}\right)\right) . $$ Using again (2.7), that is, $z \leq F$, we obtain (2.8).

Lemma 2.4 If $S$ is a subset of an interval $U \subset \mathbb{R}$ that is both open (offen) and closed (abgeschlossen) in $U$ then either $S$ is empty or $S=U$.

Any set $U$ that satisfies the conclusion of Lemma 2.4 is called connected (zusammenhängend). Hence, Lemma 2.4 says that any interval is a connected set.

Proof. Set $S^{c}=U \backslash S$ so that $S^{c}$ is closed in $U$. Assume that both $S$ and $S^{c}$ are non-empty and choose some points $a_{0} \in S, b_{0} \in S^{c}$. Set $c=\frac{a_{0}+b_{0}}{2}$ so that $c \in U$ and, hence, $c$ belongs to $S$ or $S^{c}$. Out of the intervals $\left[a_{0}, c\right],\left[c, b_{0}\right]$ choose the one whose endpoints belong to different sets $S, S^{c}$ and denote it by $\left[a_{1}, b_{1}\right]$, say $a_{1} \in S$ and $b_{1} \in S^{c}$. Considering the point $c=\frac{a_{1}+b_{1}}{2}$, we repeat the same argument and construct an interval $\left[a_{2}, b_{2}\right]$, being one of the two halves of $\left[a_{1}, b_{1}\right]$, such that $a_{2} \in S$ and $b_{2} \in S^{c}$. Continuing further, we obtain a nested sequence $\left\{\left[a_{k}, b_{k}\right]\right\}_{k=0}^{\infty}$ of intervals such that $a_{k} \in S, b_{k} \in S^{c}$ and $\left|b_{k}-a_{k}\right| \rightarrow 0$. By the principle of nested intervals (Intervallschachtelungsprinzip), there is a common point $x \in\left[a_{k}, b_{k}\right]$ for all $k$. Note that $x \in U$. Since $a_{k} \rightarrow x$, we must have $x \in S$, and since $b_{k} \rightarrow x$, we must have $x \in S^{c}$, because both sets $S$ and $S^{c}$ are closed in $U$. This contradiction finishes the proof.

Proof of the uniqueness in Theorem 2.2. Assume that $x_{1}(t)$ and $x_{2}(t)$ are two solutions of the same IVP both defined on an open interval $U \subset \mathbb{R}$, and let us prove that they coincide on $U$. We first prove that the two solutions coincide in some interval around $t_{0}$. Let $\varepsilon$ and $\delta$ be the parameters from the Lipschitz condition at the point $\left(t_{0}, x_{0}\right)$ as above. Choose $0<r<\delta$ so small that both functions $x_{1}(t)$ and $x_{2}(t)$ restricted to $I=\left[t_{0}-r, t_{0}+r\right]$ take values in $J=\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$ (which is possible because both $x_{1}(t)$ and $x_{2}(t)$ are continuous functions). As in the proof of the existence, both solutions satisfy the integral identity $$ x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s $$ for all $t \in I$. Hence, for the difference $z(t):=\left|x_{1}(t)-x_{2}(t)\right|$, we have $$ z(t)=\left|x_{1}(t)-x_{2}(t)\right| \leq \int_{t_{0}}^{t}\left|f\left(s, x_{1}(s)\right)-f\left(s, x_{2}(s)\right)\right| d s, $$ assuming for definiteness that $t_{0} \leq t \leq t_{0}+r$.
Since both points $\left(s, x_{1}(s)\right)$ and $\left(s, x_{2}(s)\right)$ in the given range of $s$ are contained in $I \times J$, we obtain by the Lipschitz condition $$ \left|f\left(s, x_{1}(s)\right)-f\left(s, x_{2}(s)\right)\right| \leq L\left|x_{1}(s)-x_{2}(s)\right|, $$ whence $$ z(t) \leq L \int_{t_{0}}^{t} z(s) d s . $$ Applying the Gronwall inequality with $C=0$ we obtain $z(t) \leq 0$. Since $z \geq 0$, we conclude that $z(t) \equiv 0$ for all $t \in\left[t_{0}, t_{0}+r\right]$. In the same way, one gets that $z(t) \equiv 0$ for $t \in\left[t_{0}-r, t_{0}\right]$, which proves that the solutions $x_{1}(t)$ and $x_{2}(t)$ coincide on the interval $I$.

Now we prove that they coincide on the full interval $U$. Consider the set $$ S=\left\{t \in U: x_{1}(t)=x_{2}(t)\right\} $$ and let us show that the set $S$ is both closed and open in $U$. The closedness is obvious: if $x_{1}\left(t_{k}\right)=x_{2}\left(t_{k}\right)$ for a sequence $\left\{t_{k}\right\}$ and $t_{k} \rightarrow t \in U$ as $k \rightarrow \infty$ then passing to the limit and using the continuity of the solutions, we obtain $x_{1}(t)=x_{2}(t)$, that is, $t \in S$. Let us prove that the set $S$ is open. Fix some $t_{1} \in S$. Since $x_{1}\left(t_{1}\right)=x_{2}\left(t_{1}\right)$, both functions $x_{1}(t)$ and $x_{2}(t)$ solve the same IVP with the initial condition at $t_{1}$. By the above argument, $x_{1}(t)=x_{2}(t)$ in some interval $I=\left[t_{1}-r, t_{1}+r\right]$ with $r>0$. Hence, $I \subset S$, which implies that $S$ is open. Since the set $S$ is non-empty (it contains $t_{0}$) and is both open and closed in $U$, we conclude by Lemma 2.4 that $S=U$, which finishes the proof of uniqueness.

### Dependence on the initial value

Consider the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=s \end{array}\right. $$ where the initial value is denoted by $s$ instead of $x_{0}$ to emphasize that we now allow $s$ to vary. Hence, the solution can be considered as a function of two variables: $x=x(t, s)$. Our aim is to investigate the dependence on $s$.

As before, assume that $f$ is continuous in an open set $\Omega \subset \mathbb{R}^{2}$ and is locally Lipschitz in $x$ in this set. Fix a point $\left(t_{0}, x_{0}\right) \in \Omega$ and let $\varepsilon, \delta, L$ be the parameters from the local Lipschitz condition at this point, that is, the rectangle $$ R=\left[t_{0}-\delta, t_{0}+\delta\right] \times\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right] $$ is contained in $\Omega$ and, for all $(t, x),(t, y) \in R$, $$ |f(t, x)-f(t, y)| \leq L|x-y| . $$ Let $M$ be the supremum of $|f(t, x)|$ in $R$. By the proof of Theorem 2.2, the solution $x(t)$ with the initial condition $x\left(t_{0}\right)=x_{0}$ is defined in the interval $\left[t_{0}-r, t_{0}+r\right]$ where $r$ is any positive number that satisfies (2.6), and $x(t)$ takes values in $\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$ for all $t \in\left[t_{0}-r, t_{0}+r\right]$. Let us choose $r$ as follows: $$ r=\min \left(\delta, \frac{\varepsilon}{M}, \frac{1}{2 L}\right) . $$ For what follows, it is only important that $r$ can be determined as a function of $\varepsilon, \delta, L, M$. Now consider the IVP with the condition $x\left(t_{0}\right)=s$ where $s$ is close enough to $x_{0}$, say $$ s \in\left[x_{0}-\varepsilon / 2, x_{0}+\varepsilon / 2\right] . $$ Then the rectangle $$ R^{\prime}=\left[t_{0}-\delta, t_{0}+\delta\right] \times[s-\varepsilon / 2, s+\varepsilon / 2] $$ is contained in $R$.
Therefore, the Lipschitz condition holds in $R^{\prime}$ also with the constant $L$, and $\sup _{R^{\prime}}|f| \leq M$. Hence, the solution $x(t, s)$ with the initial condition $x\left(t_{0}\right)=s$ is defined in $\left[t_{0}-r(s), t_{0}+r(s)\right]$ and takes values in $[s-\varepsilon / 2, s+\varepsilon / 2] \subset\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$ provided $$ r(s) \leq \min \left(\delta, \frac{\varepsilon}{2 M}, \frac{1}{2 L}\right) $$ (in comparison with (2.10), here $\varepsilon$ is replaced by $\varepsilon / 2$ in accordance with the definition of $R^{\prime}$). Clearly, if $r$ satisfies (2.10) then the value $$ r(s)=\frac{r}{2} $$ satisfies (2.12). Let us state the result of this argument as follows.

Claim. Fix a point $\left(t_{0}, x_{0}\right) \in \Omega$ and choose $\varepsilon, \delta>0$ from the local Lipschitz condition at $\left(t_{0}, x_{0}\right)$. Let $L$ be the Lipschitz constant in $R=\left[t_{0}-\delta, t_{0}+\delta\right] \times\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$, let $M=\sup _{R}|f|$, and define $r=r(\varepsilon, \delta, L, M)$ by (2.10). Then, for any $s \in\left[x_{0}-\varepsilon / 2, x_{0}+\varepsilon / 2\right]$, the solution $x(t, s)$ of (2.9) is defined in $\left[t_{0}-r / 2, t_{0}+r / 2\right]$ and takes values in $\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$.

In particular, we can compare solutions with different initial values $s$ since they have the common domain $\left[t_{0}-r / 2, t_{0}+r / 2\right]$ (see the diagram below).

Theorem 2.5 (Continuous dependence on the initial value) Let $\Omega$ be an open set in $\mathbb{R}^{2}$ and $f(t, x)$ be a continuous function in $\Omega$ that is locally Lipschitz in $x$. Let $\left(t_{0}, x_{0}\right)$ be a point in $\Omega$ and let $\varepsilon, r$ be as above. Then, for all $s^{\prime}, s^{\prime \prime} \in\left[x_{0}-\varepsilon / 2, x_{0}+\varepsilon / 2\right]$ and $t \in\left[t_{0}-r / 2, t_{0}+r / 2\right]$, $$ \left|x\left(t, s^{\prime}\right)-x\left(t, s^{\prime \prime}\right)\right| \leq 2\left|s^{\prime}-s^{\prime \prime}\right| . $$ Consequently, the function $x(t, s)$ is continuous in $(t, s)$.

Proof. Consider again the integral equations $$ x\left(t, s^{\prime}\right)=s^{\prime}+\int_{t_{0}}^{t} f\left(\tau, x\left(\tau, s^{\prime}\right)\right) d \tau $$ and $$ x\left(t, s^{\prime \prime}\right)=s^{\prime \prime}+\int_{t_{0}}^{t} f\left(\tau, x\left(\tau, s^{\prime \prime}\right)\right) d \tau . $$ It follows that, for all $t \in\left[t_{0}, t_{0}+r / 2\right]$, $$ \begin{aligned} \left|x\left(t, s^{\prime}\right)-x\left(t, s^{\prime \prime}\right)\right| & \leq\left|s^{\prime}-s^{\prime \prime}\right|+\int_{t_{0}}^{t}\left|f\left(\tau, x\left(\tau, s^{\prime}\right)\right)-f\left(\tau, x\left(\tau, s^{\prime \prime}\right)\right)\right| d \tau \\ & \leq\left|s^{\prime}-s^{\prime \prime}\right|+\int_{t_{0}}^{t} L\left|x\left(\tau, s^{\prime}\right)-x\left(\tau, s^{\prime \prime}\right)\right| d \tau, \end{aligned} $$ where we have used the Lipschitz condition, which applies because by the above Claim $(\tau, x(\tau, s)) \in\left[t_{0}-\delta, t_{0}+\delta\right] \times\left[x_{0}-\varepsilon, x_{0}+\varepsilon\right]$ for all $s \in\left[x_{0}-\varepsilon / 2, x_{0}+\varepsilon / 2\right]$.
Setting $z(t)=\left|x\left(t, s^{\prime}\right)-x\left(t, s^{\prime \prime}\right)\right|$ we obtain $$ z(t) \leq\left|s^{\prime}-s^{\prime \prime}\right|+L \int_{t_{0}}^{t} z(\tau) d \tau, $$ which implies by Lemma 2.3 $$ z(t) \leq\left|s^{\prime}-s^{\prime \prime}\right| \exp \left(L\left(t-t_{0}\right)\right) . $$ Since $t-t_{0} \leq r / 2$ and, by (2.10), $L \leq \frac{1}{2 r}$, we see that $L\left(t-t_{0}\right) \leq \frac{1}{4}$ and $$ \exp \left(L\left(t-t_{0}\right)\right) \leq e^{1 / 4}<2, $$ which proves (2.13) for $t \geq t_{0}$. Similarly one obtains the same for $t \leq t_{0}$.

Let us prove that $x(t, s)$ is continuous in $(t, s)$. Fix a point $(t, s)$ and prove that $x(t, s)$ is continuous at this point, that is, $$ x\left(t_{n}, s_{n}\right) \rightarrow x(t, s) $$ if $\left(t_{n}, s_{n}\right) \rightarrow(t, s)$ as $n \rightarrow \infty$. Then by (2.13) $$ \begin{aligned} \left|x\left(t_{n}, s_{n}\right)-x(t, s)\right| & \leq\left|x\left(t_{n}, s_{n}\right)-x\left(t_{n}, s\right)\right|+\left|x\left(t_{n}, s\right)-x(t, s)\right| \\ & \leq 2\left|s_{n}-s\right|+\left|x\left(t_{n}, s\right)-x(t, s)\right|, \end{aligned} $$ and this goes to 0 as $n \rightarrow \infty$ by the continuity of $x(t, s)$ in $t$ for a fixed $s$.

Remark. The same argument shows that if a function $x(t, s)$ is continuous in $t$ for any fixed $s$ and uniformly continuous in $s$ (uniformly in $t$), then $x(t, s)$ is jointly continuous in $(t, s)$.

### Higher order ODE and reduction to the first order system

A general ODE of the order $n$ resolved with respect to the highest derivative can be written in the form $$ y^{(n)}=F\left(t, y, \ldots, y^{(n-1)}\right), $$ where $t$ is an independent variable and $y(t)$ is an unknown function. It is sometimes more convenient to replace this equation by a system of ODEs of the $1^{\text {st }}$ order.

Let $x(t)$ be a vector function of a real variable $t$, which takes values in $\mathbb{R}^{n}$. Denote by $x_{k}$ the components of $x$. Then the derivative $x^{\prime}(t)$ is defined component-wise by $$ x^{\prime}=\left(x_{1}^{\prime}, x_{2}^{\prime}, \ldots, x_{n}^{\prime}\right) . $$ Consider now a vector ODE of the first order $$ x^{\prime}=f(t, x) $$ where $f$ is a given function of $n+1$ variables, which takes values in $\mathbb{R}^{n}$, that is, $f: \Omega \rightarrow \mathbb{R}^{n}$ where $\Omega$ is an open subset of $\mathbb{R}^{n+1}$ (so that the couple $(t, x)$ is considered as a point in $\Omega$). Denoting by $f_{k}$ the components of $f$, we can rewrite the vector equation (2.15) as a system of $n$ scalar equations $$ \left\{\begin{array}{l} x_{1}^{\prime}=f_{1}\left(t, x_{1}, \ldots, x_{n}\right) \\ \ldots \\ x_{k}^{\prime}=f_{k}\left(t, x_{1}, \ldots, x_{n}\right) \\ \ldots \\ x_{n}^{\prime}=f_{n}\left(t, x_{1}, \ldots, x_{n}\right) \end{array}\right. $$ A system of ODEs of the form (2.15) is called a normal system. Let us show how the equation (2.14) can be reduced to the normal system (2.16). Indeed, with any function $y(t)$ let us associate the vector function $$ x=\left(y, y^{\prime}, \ldots, y^{(n-1)}\right), $$ which takes values in $\mathbb{R}^{n}$. That is, we have $$ x_{1}=y, \quad x_{2}=y^{\prime}, \ldots, x_{n}=y^{(n-1)} . $$ Obviously, $$ x^{\prime}=\left(y^{\prime}, y^{\prime \prime}, \ldots, y^{(n)}\right), $$ and using (2.14) we obtain the system of equations $$ \left\{\begin{array}{l} x_{1}^{\prime}=x_{2} \\ x_{2}^{\prime}=x_{3} \\ \ldots \\ x_{n-1}^{\prime}=x_{n} \\ x_{n}^{\prime}=F\left(t, x_{1}, \ldots, x_{n}\right) \end{array}\right. $$
Obviously, we can rewrite this system as a vector equation (2.15) where $$ f(t, x)=\left(x_{2}, x_{3}, \ldots, x_{n}, F\left(t, x_{1}, \ldots, x_{n}\right)\right) . $$ Conversely, the system (2.17) implies $$ x_{1}^{(n)}=x_{n}^{\prime}=F\left(t, x_{1}, x_{1}^{\prime}, \ldots, x_{1}^{(n-1)}\right) $$ so that we obtain equation (2.14) with respect to $y=x_{1}$. Hence, the equation (2.14) is equivalent to the vector equation (2.15) with function $f$ defined by (2.18).

Example. For example, consider the second order equation $$ y^{\prime \prime}=F\left(t, y, y^{\prime}\right) . $$ Setting $x=\left(y, y^{\prime}\right)$ we obtain $$ x^{\prime}=\left(y^{\prime}, y^{\prime \prime}\right) $$ whence $$ \left\{\begin{array}{l} x_{1}^{\prime}=x_{2} \\ x_{2}^{\prime}=F\left(t, x_{1}, x_{2}\right) \end{array}\right. $$ Hence, we obtain the normal system (2.15) with $$ f(t, x)=\left(x_{2}, F\left(t, x_{1}, x_{2}\right)\right) . $$

What initial value problem is associated with the vector equation (2.15) and the scalar higher order equation (2.14)? Motivated by the study of the 1st order ODE, one can presume that it makes sense to consider the following IVP for the vector 1st order ODE: $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ where $x_{0} \in \mathbb{R}^{n}$ is a given initial value of $x(t)$. For the equation (2.14), this means that the initial conditions should prescribe the value of the vector $x=\left(y, y^{\prime}, \ldots, y^{(n-1)}\right)$ at some $t_{0}$, which amounts to $n$ scalar conditions $$ \left\{\begin{array}{l} y\left(t_{0}\right)=y_{0} \\ y^{\prime}\left(t_{0}\right)=y_{1} \\ \cdots \\ y^{(n-1)}\left(t_{0}\right)=y_{n-1} \end{array}\right. $$ where $y_{0}, \ldots, y_{n-1}$ are given values. Hence, the initial value problem IVP for the scalar equation of order $n$ can be stated as follows: $$ \left\{\begin{array}{l} y^{(n)}=F\left(t, y, y^{\prime}, \ldots, y^{(n-1)}\right) \\ y\left(t_{0}\right)=y_{0} \\ y^{\prime}\left(t_{0}\right)=y_{1} \\ \cdots \\ y^{(n-1)}\left(t_{0}\right)=y_{n-1} \end{array}\right. $$

### Norms in $\mathbb{R}^{n}$

Recall that a norm in $\mathbb{R}^{n}$ is a function $N: \mathbb{R}^{n} \rightarrow \mathbb{R}$ with the following properties:

1. $N(x) \geq 0$ for all $x \in \mathbb{R}^{n}$ and $N(x)=0$ if and only if $x=0$.
2. $N(c x)=|c| N(x)$ for all $x \in \mathbb{R}^{n}$ and $c \in \mathbb{R}$.
3. $N(x+y) \leq N(x)+N(y)$ for all $x, y \in \mathbb{R}^{n}$.

For example, the function $|x|$ is a norm in $\mathbb{R}$. Usually one uses the notation $\|x\|$ for a norm instead of $N(x)$.

Example. For any $p \geq 1$, the $p$-norm in $\mathbb{R}^{n}$ is defined by $$ \|x\|_{p}=\left(\sum_{k=1}^{n}\left|x_{k}\right|^{p}\right)^{1 / p} . $$ In particular, for $p=1$ we have $$ \|x\|_{1}=\sum_{k=1}^{n}\left|x_{k}\right|, $$ and for $p=2$ $$ \|x\|_{2}=\left(\sum_{k=1}^{n} x_{k}^{2}\right)^{1 / 2} . $$ For $p=\infty$ set $$ \|x\|_{\infty}=\max _{1 \leq k \leq n}\left|x_{k}\right| . $$ It is known that the $p$-norm for any $p \in[1, \infty]$ is indeed a norm.

It follows from the definition of a norm that in $\mathbb{R}$ any norm has the form $\|x\|=c|x|$ where $c$ is a positive constant. In $\mathbb{R}^{n}, n \geq 2$, there is a great variety of non-proportional norms.
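As a small numerical aside (a minimal sketch assuming the numpy library is available; it is not needed for the theory), the $p$-norms defined above are computed by `numpy.linalg.norm`, and one can spot-check the comparison between $\|x\|_{1}$ and $\|x\|_{\infty}$ that is stated next:

```python
import numpy as np

v = np.array([3.0, -4.0, 1.0])

n1   = np.linalg.norm(v, 1)         # ||v||_1   = 3 + 4 + 1 = 8
n2   = np.linalg.norm(v, 2)         # ||v||_2   = sqrt(26) ~ 5.10
ninf = np.linalg.norm(v, np.inf)    # ||v||_inf = 4

# spot-check the bounds ||v||_inf <= ||v||_1 <= n * ||v||_inf
assert ninf <= n1 <= len(v) * ninf
print(n1, n2, ninf)
```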
However, it is known that all possible norms in $\mathbb{R}^{n}$ are equivalent in the following sense: if $N_{1}(x)$ and $N_{2}(x)$ are two norms in $\mathbb{R}^{n}$ then there are positive constants $C^{\prime}$ and $C^{\prime \prime}$ such that $$ C^{\prime \prime} \leq \frac{N_{1}(x)}{N_{2}(x)} \leq C^{\prime} \text { for all } x \neq 0 . $$ For example, it follows from the definitions of $\|x\|_{1}$ and $\|x\|_{\infty}$ that $$ 1 \leq \frac{\|x\|_{1}}{\|x\|_{\infty}} \leq n . $$ For most applications, the relation (2.19) means that the choice of a specific norm is not important.

The notion of a norm is used in order to define the Lipschitz condition for functions in $\mathbb{R}^{n}$. Let us fix some norm $\|x\|$ in $\mathbb{R}^{n}$. For any $x \in \mathbb{R}^{n}$ and $r>0$, define the closed ball $\bar{B}(x, r)$ by $$ \bar{B}(x, r)=\left\{y \in \mathbb{R}^{n}:\|x-y\| \leq r\right\} . $$ For example, in $\mathbb{R}$ with $\|x\|=|x|$ we have $\bar{B}(x, r)=[x-r, x+r]$. Similarly, one defines the open ball $$ B(x, r)=\left\{y \in \mathbb{R}^{n}:\|x-y\|<r\right\} . $$ The shape of the ball $B(0,1)$ in $\mathbb{R}^{2}$ depends on the norm: for the 1-norm it is a diamond, for the 2-norm a round ball, for the 4-norm a rounded square, and for the $\infty$-norm a box.

### Existence and uniqueness for a system of ODEs

Let $\Omega$ be an open subset of $\mathbb{R}^{n+1}$ and $f=f(t, x)$ be a mapping from $\Omega$ to $\mathbb{R}^{n}$. Fix a norm $\|x\|$ in $\mathbb{R}^{n}$.

Definition. The function $f(t, x)$ is called Lipschitz in $x$ in $\Omega$ if there is a constant $L$ such that for all $(t, x),(t, y) \in \Omega$ $$ \|f(t, x)-f(t, y)\| \leq L\|x-y\| . $$ In view of the equivalence of any two norms in $\mathbb{R}^{n}$, the property of being Lipschitz does not depend on the choice of the norm (but the value of the Lipschitz constant $L$ does).

A subset $K$ of $\mathbb{R}^{n+1}$ will be called a cylinder if it has the form $K=I \times B$ where $I$ is an interval in $\mathbb{R}$ and $B$ is a ball (open or closed) in $\mathbb{R}^{n}$. The cylinder is closed if both $I$ and $B$ are closed, and open if both $I$ and $B$ are open.

Definition. The function $f(t, x)$ is called locally Lipschitz in $x$ in $\Omega$ if for any $\left(t_{0}, x_{0}\right) \in \Omega$ there exist constants $\varepsilon, \delta>0$ such that the cylinder $$ K=\left[t_{0}-\delta, t_{0}+\delta\right] \times \bar{B}\left(x_{0}, \varepsilon\right) $$ is contained in $\Omega$ and $f$ is Lipschitz in $x$ in $K$.

Lemma 2.6 (a) If all components $f_{k}$ of $f$ are differentiable functions in a cylinder $K$ and all the partial derivatives $\frac{\partial f_{k}}{\partial x_{j}}$ are bounded in $K$ then the function $f(t, x)$ is Lipschitz in $x$ in $K$. (b) If all partial derivatives $\frac{\partial f_{k}}{\partial x_{j}}$ exist and are continuous in $\Omega$ then $f(t, x)$ is locally Lipschitz in $x$ in $\Omega$.

Proof. Let us use the following mean value property of functions in $\mathbb{R}^{n}$: if $g$ is a differentiable real valued function in a ball $B \subset \mathbb{R}^{n}$ then, for all $x, y \in B$, there is $\xi \in[x, y]$ such that $$ g(y)-g(x)=\sum_{j=1}^{n} \frac{\partial g}{\partial x_{j}}(\xi)\left(y_{j}-x_{j}\right) $$ (note that the interval $[x, y]$ is contained in the ball $B$ so that $\frac{\partial g}{\partial x_{j}}(\xi)$ makes sense). Indeed, consider the function $$ h(t)=g(x+t(y-x)) \text { where } t \in[0,1] . $$
$$ The function $h(t)$ is differentiable on $[0,1]$ and, by the mean value theorem in $\mathbb{R}$, there is $\tau \in(0,1)$ such that $$ g(y)-g(x)=h(1)-h(0)=h^{\prime}(\tau) . $$ Noticing that by the chain rule $$ h^{\prime}(\tau)=\sum_{j=1}^{n} \frac{\partial g}{\partial x_{j}}(x+\tau(y-x))\left(y_{j}-x_{j}\right) $$ and setting $\xi=x+\tau(y-x)$, we obtain $(2.21)$.

(a) Let $K=I \times B$ where $I$ is an interval in $\mathbb{R}$ and $B$ is a ball in $\mathbb{R}^{n}$. If $(t, x),(t, y) \in K$ then $t \in I$ and $x, y \in B$. Applying the above mean value property to the $k$-th component $f_{k}$ of $f$, we obtain that $$ f_{k}(t, x)-f_{k}(t, y)=\sum_{j=1}^{n} \frac{\partial f_{k}}{\partial x_{j}}(t, \xi)\left(x_{j}-y_{j}\right), $$ where $\xi$ is a point in the interval $[x, y] \subset B$. Set $$ C=\max _{k, j} \sup _{K}\left|\frac{\partial f_{k}}{\partial x_{j}}\right| $$ and note that by the hypothesis $C<\infty$. Hence, by $(2.22)$ $$ \left|f_{k}(t, x)-f_{k}(t, y)\right| \leq C \sum_{j=1}^{n}\left|x_{j}-y_{j}\right|=C\|x-y\|_{1} . $$ Taking max in $k$, we obtain $$ \|f(t, x)-f(t, y)\|_{\infty} \leq C\|x-y\|_{1} . $$ Passing on both sides to the given norm $\|\cdot\|$ and using the equivalence of all norms, we obtain that $f$ is Lipschitz in $x$ in $K$.

(b) Given a point $\left(t_{0}, x_{0}\right) \in \Omega$, choose positive $\varepsilon$ and $\delta$ so that the cylinder $$ K=\left[t_{0}-\delta, t_{0}+\delta\right] \times \bar{B}\left(x_{0}, \varepsilon\right) $$ is contained in $\Omega$, which is possible by the openness of $\Omega$. Since the components $f_{k}$ are continuously differentiable, they are differentiable. Since $K$ is a closed bounded set and the partial derivatives $\frac{\partial f_{k}}{\partial x_{j}}$ are continuous, they are bounded on $K$. By part $(a)$ we conclude that $f$ is Lipschitz in $x$ in $K$, which finishes the proof.

Definition. Given a function $f: \Omega \rightarrow \mathbb{R}^{n}$, where $\Omega$ is an open set in $\mathbb{R}^{n+1}$, consider the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(t, x), \\ x\left(t_{0}\right)=x_{0}, \end{array}\right. $$ where $\left(t_{0}, x_{0}\right)$ is a given point in $\Omega$. A function $x(t): I \rightarrow \mathbb{R}^{n}$ is called a solution of $(2.23)$ if the domain $I$ is an open interval in $\mathbb{R}$ containing $t_{0}$, $x(t)$ is differentiable in $t$ in $I$, $(t, x(t)) \in \Omega$ for all $t \in I$, and $x(t)$ satisfies the ODE $x^{\prime}=f(t, x)$ in $I$ and the initial condition $x\left(t_{0}\right)=x_{0}$. The graph of the function $x(t)$, that is, the set of points $(t, x(t))$, is hence a curve in $\Omega$ that goes through the point $\left(t_{0}, x_{0}\right)$. It is also called the integral curve of the ODE $x^{\prime}=f(t, x)$.

Theorem 2.7 (Picard–Lindelöf Theorem) Consider the equation $$ x^{\prime}=f(t, x) $$ where $f: \Omega \rightarrow \mathbb{R}^{n}$ is a mapping from an open set $\Omega \subset \mathbb{R}^{n+1}$ to $\mathbb{R}^{n}$. Assume that $f$ is continuous on $\Omega$ and locally Lipschitz in $x$. Then, for any point $\left(t_{0}, x_{0}\right) \in \Omega$, the initial value problem (2.23) has a solution. Furthermore, if $x(t)$ and $y(t)$ are two solutions to the same IVP then $x(t)=y(t)$ in their common domain.

Proof. The proof is very similar to the case $n=1$ considered in Theorem 2.2. We start with the following claim.
Claim. A function $x(t)$ solves the IVP (2.23) if and only if $x(t)$ is a continuous function on an open interval $I$ such that $t_{0} \in I$, $(t, x(t)) \in \Omega$ for all $t \in I$, and $$ x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s . $$ Here the integral of the vector valued function is understood component-wise. If $x$ solves the IVP then $(2.24)$ follows from $x_{k}^{\prime}=f_{k}(t, x(t))$ just by integration: $$ \int_{t_{0}}^{t} x_{k}^{\prime}(s) d s=\int_{t_{0}}^{t} f_{k}(s, x(s)) d s $$ whence $$ x_{k}(t)-\left(x_{0}\right)_{k}=\int_{t_{0}}^{t} f_{k}(s, x(s)) d s $$ and (2.24) follows. Conversely, if $x$ is a continuous function that satisfies (2.24) then $$ x_{k}=\left(x_{0}\right)_{k}+\int_{t_{0}}^{t} f_{k}(s, x(s)) d s . $$ The right hand side here is differentiable in $t$, whence it follows that $x_{k}(t)$ is differentiable. It is trivial that $x_{k}\left(t_{0}\right)=\left(x_{0}\right)_{k}$, and after differentiation we obtain $x_{k}^{\prime}=f_{k}(t, x)$ and, hence, $x^{\prime}=f(t, x)$.

Fix a point $\left(t_{0}, x_{0}\right) \in \Omega$ and let $\varepsilon, \delta$ be the parameters from the local Lipschitz condition at this point, that is, there is a constant $L$ such that $$ \|f(t, x)-f(t, y)\| \leq L\|x-y\| $$ for all $t \in\left[t_{0}-\delta, t_{0}+\delta\right]$ and $x, y \in \bar{B}\left(x_{0}, \varepsilon\right)$. Choose some $r \in(0, \delta]$ to be specified later on, and set $$ I=\left[t_{0}-r, t_{0}+r\right] \text { and } J=\bar{B}\left(x_{0}, \varepsilon\right) . $$ Denote by $X$ the family of all continuous functions $x(t): I \rightarrow J$, that is, $$ X=\{x: I \rightarrow J: x \text { is continuous }\} . $$ Consider the integral operator $A$ defined on functions $x(t)$ by $$ A x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s . $$ We would like to ensure that $x \in X$ implies $A x \in X$. Note that, for any $x \in X$, the point $(s, x(s))$ belongs to $\Omega$ so that the above integral makes sense and the function $A x$ is defined on $I$. This function is obviously continuous. We are left to verify that the image of $A x$ is contained in $J$. Indeed, the latter condition means that $$ \left\|A x(t)-x_{0}\right\| \leq \varepsilon \text { for all } t \in I . $$ We have, for any $t \in I$, $$ \begin{aligned} \left\|A x(t)-x_{0}\right\| & =\left\|\int_{t_{0}}^{t} f(s, x(s)) d s\right\| \\ & \leq \int_{t_{0}}^{t}\|f(s, x(s))\| d s \quad \text { (see Exercise 15) } \\ & \leq \sup _{s \in I, x \in J}\|f(s, x)\|\left|t-t_{0}\right| \leq M r, \end{aligned} $$ where $$ M=\sup _{\substack{s \in\left[t_{0}-\delta, t_{0}+\delta\right] \\ x \in \bar{B}\left(x_{0}, \varepsilon\right)}}\|f(s, x)\|<\infty . $$ Hence, if $r$ is so small that $M r \leq \varepsilon$ then this condition is satisfied and, hence, $A x \in X$.

Define a distance function on the function family $X$ as follows: for all $x, y \in X$, $$ d(x, y)=\sup _{t \in I}\|x(t)-y(t)\| . $$ Then $(X, d)$ is a complete metric space (see Exercise 16). We are left to ensure that the mapping $A: X \rightarrow X$ is a contraction. For any two functions $x, y \in X$ and any $t \in I$, $t \geq t_{0}$, we have $x(t), y(t) \in J$ whence by the Lipschitz condition $$ \begin{aligned} \|A x(t)-A y(t)\| & =\left\|\int_{t_{0}}^{t} f(s, x(s)) d s-\int_{t_{0}}^{t} f(s, y(s)) d s\right\| \\ & \leq \int_{t_{0}}^{t}\|f(s, x(s))-f(s, y(s))\| d s \\ & \leq \int_{t_{0}}^{t} L\|x(s)-y(s)\| d s \\ & \leq L\left(t-t_{0}\right) \sup _{s \in I}\|x(s)-y(s)\| \\ & \leq L r \, d(x, y) . \end{aligned} $$ The same inequality holds for $t \leq t_{0}$.
Taking sup in $t \in I$, we obtain $$ d(A x, A y) \leq L r \, d(x, y) . $$ Hence, choosing $r<1 / L$, we obtain that $A$ is a contraction. By the Banach fixed point theorem, we conclude that the equation $A x=x$ has a solution $x \in X$, which hence solves the IVP.

Assume that $x(t)$ and $y(t)$ are two solutions of the same IVP both defined on an open interval $U \subset \mathbb{R}$ and prove that they coincide on $U$. We first prove that the two solutions coincide in some interval around $t_{0}$. Let $\varepsilon$ and $\delta$ be the parameters from the Lipschitz condition at the point $\left(t_{0}, x_{0}\right)$ as above. Choose $0<r<\delta$ so small that both functions $x(t)$ and $y(t)$ restricted to $I=\left[t_{0}-r, t_{0}+r\right]$ take values in $J=\bar{B}\left(x_{0}, \varepsilon\right)$ (which is possible because both $x(t)$ and $y(t)$ are continuous functions). As in the proof of existence, both solutions satisfy the integral identity $$ x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s $$ for all $t \in I$. Hence, for the difference $z(t):=\|x(t)-y(t)\|$, we have $$ z(t)=\|x(t)-y(t)\| \leq \int_{t_{0}}^{t}\|f(s, x(s))-f(s, y(s))\| d s, $$ assuming for definiteness that $t_{0} \leq t \leq t_{0}+r$. Since both points $(s, x(s))$ and $(s, y(s))$ in the given range of $s$ are contained in $I \times J$, we obtain by the Lipschitz condition $$ \|f(s, x(s))-f(s, y(s))\| \leq L\|x(s)-y(s)\| $$ whence $$ z(t) \leq L \int_{t_{0}}^{t} z(s) d s . $$ Applying the Gronwall inequality with $C=0$ we obtain $z(t) \leq 0$. Since $z \geq 0$, we conclude that $z(t) \equiv 0$ for all $t \in\left[t_{0}, t_{0}+r\right]$. In the same way, one gets that $z(t) \equiv 0$ for $t \in\left[t_{0}-r, t_{0}\right]$, which proves that the solutions $x(t)$ and $y(t)$ coincide on the interval $I$.

Now we prove that they coincide on the full interval $U$. Consider the set $$ S=\{t \in U: x(t)=y(t)\} $$ and let us show that the set $S$ is both closed and open in $U$. The closedness is obvious: if $x\left(t_{k}\right)=y\left(t_{k}\right)$ for a sequence $\left\{t_{k}\right\}$ and $t_{k} \rightarrow t \in U$ as $k \rightarrow \infty$ then, passing to the limit and using the continuity of the solutions, we obtain $x(t)=y(t)$, that is, $t \in S$. Let us prove that the set $S$ is open. Fix some $t_{1} \in S$. Since $x\left(t_{1}\right)=y\left(t_{1}\right)=: x_{1}$, both functions $x(t)$ and $y(t)$ solve the same IVP with the initial data $\left(t_{1}, x_{1}\right)$. By the above argument, $x(t)=y(t)$ in some interval $I=\left[t_{1}-r, t_{1}+r\right]$ with $r>0$. Hence, $I \subset S$, which implies that $S$ is open. Since the set $S$ is non-empty (it contains $t_{0}$) and is both open and closed in $U$, we conclude by Lemma 2.4 that $S=U$, which finishes the proof of uniqueness.
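The existence part of the proof is constructive and can be imitated numerically. Below is a minimal Python sketch (an added illustration, not part of the notes: the model problem $x^{\prime}=x$, $x(0)=1$ on $[0,1]$ and the grid-based trapezoidal integration are our own choices) that iterates the integral operator $A$ until the sup-distance between consecutive iterates is negligible. For this model problem the Picard iterates are the Taylor partial sums of $e^{t}$, so they converge on the whole of $[0,1]$, even beyond the small interval that the contraction argument guarantees:

```python
import numpy as np

# Picard iteration x_{k+1} = A x_k for the model IVP x' = x, x(0) = 1 on [0, 1].
# Functions are represented by their values on a grid, and the integral
# operator (Ax)(t) = x0 + \int_0^t f(s, x(s)) ds, with f(s, x) = x, is
# computed by the cumulative trapezoidal rule.

t = np.linspace(0.0, 1.0, 1001)

def A(x):
    increments = (x[1:] + x[:-1]) / 2 * np.diff(t)   # trapezoidal pieces
    return 1.0 + np.concatenate(([0.0], np.cumsum(increments)))

x = np.ones_like(t)              # initial guess: the constant function x_0
for k in range(50):
    x_new = A(x)
    if np.max(np.abs(x_new - x)) < 1e-12:   # sup-distance d(Ax, x)
        break
    x = x_new

print(k, np.max(np.abs(x - np.exp(t))))  # a few dozen iterations, small error
```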
Remark. Let us summarize the proof of the existence part of Theorem 2.7 as follows. For any point $\left(t_{0}, x_{0}\right) \in \Omega$, we first choose positive constants $\varepsilon, \delta, L$ from the Lipschitz condition, that is, the cylinder $$ G=\left[t_{0}-\delta, t_{0}+\delta\right] \times \bar{B}\left(x_{0}, \varepsilon\right) $$ is contained in $\Omega$ and, for any two points $(t, x)$ and $(t, y)$ from $G$ with the same $t$, $$ \|f(t, x)-f(t, y)\| \leq L\|x-y\| . $$ Let $$ M=\sup _{G}\|f(t, x)\| $$ and choose any positive $r$ to satisfy $$ r \leq \delta, \quad r \leq \frac{\varepsilon}{M}, \quad r<\frac{1}{L} . $$ Then there exists a solution $x(t)$ to the IVP, which is defined on the interval $\left[t_{0}-r, t_{0}+r\right]$ and takes values in $\bar{B}\left(x_{0}, \varepsilon\right)$. The fact that the domain of the solution admits the explicit estimates (2.26) can be used as follows.

Corollary. Under the conditions of Theorem 2.7, for any point $\left(t_{0}, x_{0}\right) \in \Omega$ there are positive constants $\varepsilon$ and $r$ such that, for any $t_{1} \in\left[t_{0}-r / 2, t_{0}+r / 2\right]$ and $x_{1} \in \bar{B}\left(x_{0}, \varepsilon / 2\right)$, the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{1}\right)=x_{1} \end{array}\right. $$ has a solution $x(t)$ which is defined for all $t \in\left[t_{0}-r / 2, t_{0}+r / 2\right]$ and takes values in $\bar{B}\left(x_{0}, \varepsilon\right)$.

Proof. Let $\varepsilon, \delta, L, M$ be as in the proof of Theorem 2.7. Assuming that $t_{1} \in\left[t_{0}-\delta / 2, t_{0}+\delta / 2\right]$ and $x_{1} \in \bar{B}\left(x_{0}, \varepsilon / 2\right)$, we obtain that the cylinder $$ G_{1}=\left[t_{1}-\delta / 2, t_{1}+\delta / 2\right] \times \bar{B}\left(x_{1}, \varepsilon / 2\right) $$ is contained in $G$. Hence, the values of $L$ and $M$ for the cylinder $G_{1}$ can be taken the same as those for $G$. Therefore, the IVP (2.27) has a solution $x(t)$ in the interval $\left[t_{1}-r, t_{1}+r\right]$, and $x(t)$ takes values in $\bar{B}\left(x_{1}, \varepsilon / 2\right) \subset \bar{B}\left(x_{0}, \varepsilon\right)$ provided $$ r \leq \delta / 2, \quad r \leq \frac{\varepsilon}{2 M}, \quad r<\frac{1}{L} . $$ For example, take $$ r=\min \left(\frac{\delta}{2}, \frac{\varepsilon}{2 M}, \frac{1}{2 L}\right) . $$ If $t_{1} \in\left[t_{0}-r / 2, t_{0}+r / 2\right]$ then $\left[t_{0}-r / 2, t_{0}+r / 2\right] \subset\left[t_{1}-r, t_{1}+r\right]$ so that the solution $x(t)$ of $(2.27)$ is defined on $\left[t_{0}-r / 2, t_{0}+r / 2\right]$ and takes values in $\bar{B}\left(x_{0}, \varepsilon\right)$, which was to be proved.

### Maximal solutions

Consider again the ODE $$ x^{\prime}=f(t, x) $$ where $f: \Omega \rightarrow \mathbb{R}^{n}$ is a mapping from an open set $\Omega \subset \mathbb{R}^{n+1}$ to $\mathbb{R}^{n}$, which is continuous on $\Omega$ and locally Lipschitz in $x$. Although the uniqueness part of Theorem 2.7 says that any two solutions coincide in their common interval, there are still many different solutions to the same IVP because, strictly speaking, functions defined on different domains are different, even though they coincide on the intersection of the domains. The purpose of what follows is to define the maximal possible domain where the solution to the IVP exists.

We say that a solution $y(t)$ of the ODE is an extension of a solution $x(t)$ if the domain of $y(t)$ contains the domain of $x(t)$ and the solutions coincide in the common domain.

Definition. A solution $x(t)$ of the ODE is called maximal if it is defined on an open interval and cannot be extended to any larger open interval.
Theorem 2.8 Assume that the conditions of Theorem 2.7 are satisfied. Then the following is true. (a) Any IVP has a unique maximal solution. (b) If $x(t)$ and $y(t)$ are two maximal solutions to the same ODE and $x(t)=y(t)$ for some value of $t$, then $x$ and $y$ are identically equal, including the identity of their domains. (c) If $x(t)$ is a maximal solution with the domain $(a, b)$ then $x(t)$ leaves any compact set $K \subset \Omega$ as $t \rightarrow a$ and as $t \rightarrow b$.

Here the phrase "$x(t)$ leaves any compact set $K$ as $t \rightarrow b$" means the following: there is $T \in(a, b)$ such that for any $t \in(T, b)$, the point $(t, x(t))$ does not belong to $K$. Similarly, the phrase "$x(t)$ leaves any compact set $K$ as $t \rightarrow a$" means that there is $T \in(a, b)$ such that for any $t \in(a, T)$, the point $(t, x(t))$ does not belong to $K$.

Example. 1. Consider the ODE $x^{\prime}=x^{2}$ in the domain $\Omega=\mathbb{R}^{2}$. This is a separable equation and can be solved as follows. Obviously, $x \equiv 0$ is a constant solution. In the domains where $x \neq 0$ we have $$ \int \frac{x^{\prime} d t}{x^{2}}=\int d t $$ whence $$ -\frac{1}{x}=\int \frac{d x}{x^{2}}=\int d t=t+C $$ and $x(t)=-\frac{1}{t-C}$ (where we have replaced $C$ by $-C$). Hence, the family of all solutions consists of a straight line $x(t)=0$ and hyperbolas $x(t)=\frac{1}{C-t}$ with the maximal domains $(C,+\infty)$ and $(-\infty, C)$ (see the diagram below). Each of these solutions leaves any compact set $K$, but in different ways: the solution $x(t)=0$ leaves $K$ as $t \rightarrow \pm \infty$ because $K$ is bounded, while $x(t)=\frac{1}{C-t}$ leaves $K$ as $t \rightarrow C$ because $x(t) \rightarrow \pm \infty$.

2. Consider the ODE $x^{\prime}=\frac{1}{x}$ in the domain $\Omega=\mathbb{R} \times(0,+\infty)$ (that is, $t \in \mathbb{R}$ and $x>0$). By the separation of variables, we obtain $$ \frac{x^{2}}{2}=\int x d x=\int x x^{\prime} d t=\int d t=t+C $$ whence $$ x(t)=\sqrt{2(t-C)}, \quad t>C $$ (see the diagram below). Obviously, the maximal domain of the solution is $(C,+\infty)$. The solution leaves any compact $K \subset \Omega$ as $t \rightarrow C$ because $(t, x(t))$ tends to the point $(C, 0)$ at the boundary of $\Omega$.
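A quick numerical experiment (an added illustration, not part of the notes: the initial value $x(0)=1$, the step size and the escape threshold are arbitrary choices) shows how the solution in Example 1 escapes every bounded set: with $x(0)=1$ the exact solution is $x(t)=\frac{1}{1-t}$ with maximal domain $(-\infty, 1)$, and the numerical trajectory blows up near $t=1$:

```python
# Integrate x' = x^2, x(0) = 1, by explicit Euler until the trajectory
# leaves the (large but bounded) set |x| <= 10^6.

t, x, h = 0.0, 1.0, 1e-6
while x < 1e6 and t < 2.0:
    x += h * x * x
    t += h
print(t)  # close to 1: the trajectory leaves every compact set as t -> 1
```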
The proof of Theorem 2.8 will be preceded by a lemma.

Lemma 2.9 Let $\left\{x_{\alpha}(t)\right\}_{\alpha \in A}$ be a family of solutions to the same IVP where $A$ is any index set, and let the domain of $x_{\alpha}$ be an open interval $I_{\alpha}$. Set $I=\bigcup_{\alpha \in A} I_{\alpha}$ and define a function $x(t)$ on $I$ as follows: $$ x(t)=x_{\alpha}(t) \text { if } t \in I_{\alpha} . $$ Then $I$ is an open interval and $x(t)$ is a solution to the same IVP on $I$. The function $x(t)$ defined by (2.28) is referred to as the union of the family $\left\{x_{\alpha}(t)\right\}$.

Proof. First of all, let us verify that the identity (2.28) defines $x(t)$ correctly, that is, the right hand side does not depend on the choice of $\alpha$. Indeed, if also $t \in I_{\beta}$ then $t$ belongs to the intersection $I_{\alpha} \cap I_{\beta}$ and, by the uniqueness theorem, $x_{\alpha}(t)=x_{\beta}(t)$. Hence, the value of $x(t)$ is independent of the choice of the index $\alpha$. Note that the graph of $x(t)$ is the union of the graphs of all functions $x_{\alpha}(t)$. Set $a=\inf I$, $b=\sup I$ and show that $I=(a, b)$. Let us first verify that $(a, b) \subset I$, that is, any $t \in(a, b)$ belongs also to $I$. Assume for definiteness that $t \geq t_{0}$. Since $b=\sup I$, there is $t_{1} \in I$ such that $t<t_{1}<b$. There exists an index $\alpha$ such that $t_{1} \in I_{\alpha}$. Since also $t_{0} \in I_{\alpha}$, the entire interval $\left[t_{0}, t_{1}\right]$ is contained in $I_{\alpha}$. Since $t \in\left[t_{0}, t_{1}\right]$, we conclude that $t \in I_{\alpha}$ and, hence, $t \in I$. It follows that $I$ is an interval with the endpoints $a$ and $b$. Since $I$ is the union of open intervals, $I$ is an open subset of $\mathbb{R}$, whence it follows that $I$ is an open interval, that is, $I=(a, b)$. Finally, let us verify that $x(t)$ solves the given IVP. We have $x\left(t_{0}\right)=x_{0}$ because $t_{0} \in I_{\alpha}$ for any $\alpha$ and $$ x\left(t_{0}\right)=x_{\alpha}\left(t_{0}\right)=x_{0}, $$ so that $x(t)$ satisfies the initial condition. Why does $x(t)$ satisfy the ODE at any $t \in I$? Any given $t \in I$ belongs to some $I_{\alpha}$. Since $x_{\alpha}$ solves the ODE in $I_{\alpha}$ and $x \equiv x_{\alpha}$ on $I_{\alpha}$, we conclude that $x$ satisfies the ODE at $t$, which finishes the proof.

Proof of Theorem 2.8. (a) Consider the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ and let $S$ be the set of all possible solutions to this IVP defined on open intervals. Let $x(t)$ be the union of all solutions from $S$. By Lemma 2.9, the function $x(t)$ is also a solution to the IVP and, hence, $x(t) \in S$. Moreover, $x(t)$ is a maximal solution because the domain of $x(t)$ contains the domains of all other solutions from $S$ and, hence, $x(t)$ cannot be extended to a larger open interval. This proves the existence of a maximal solution. Let $y(t)$ be another maximal solution to the IVP and let $z(t)$ be the union of the solutions $x(t)$ and $y(t)$. By Lemma 2.9, $z(t)$ solves the IVP and extends both $x(t)$ and $y(t)$, which implies by the maximality of $x$ and $y$ that $z$ is identical to both $x$ and $y$. Hence, $x$ and $y$ are identical (including the identity of the domains), which proves the uniqueness of a maximal solution.

(b) Let $x(t)$ and $y(t)$ be two maximal solutions that coincide at some $t$, say $t=t_{1}$. Set $x_{1}=x\left(t_{1}\right)=y\left(t_{1}\right)$. Then both $x$ and $y$ are solutions to the same IVP with the initial point $\left(t_{1}, x_{1}\right)$ and, hence, they coincide by part $(a)$.

(c) Let $x(t)$ be a maximal solution defined on $(a, b)$ where $a<b$, and assume that $x(t)$ does not leave a compact $K \subset \Omega$ as $t \rightarrow a$. Then there is a sequence $t_{k} \rightarrow a$ such that $\left(t_{k}, x_{k}\right) \in K$ where $x_{k}=x\left(t_{k}\right)$. By a property of compact sets, any sequence in $K$ has a convergent subsequence whose limit is in $K$. Hence, passing to a subsequence, we can assume that the sequence $\left\{\left(t_{k}, x_{k}\right)\right\}_{k=1}^{\infty}$ converges to a point $\left(t_{0}, x_{0}\right) \in K$ as $k \rightarrow \infty$. Clearly, we have $t_{0}=a$, which in particular implies that $a$ is finite. By Corollary to Theorem 2.7, for the point $\left(t_{0}, x_{0}\right)$, there exist $r, \varepsilon>0$ such that the IVP with the initial point inside the cylinder $$ G=\left[t_{0}-r / 2, t_{0}+r / 2\right] \times \bar{B}\left(x_{0}, \varepsilon / 2\right) $$ has a solution defined for all $t \in\left[t_{0}-r / 2, t_{0}+r / 2\right]$.
In particular, if $k$ is large enough then $\left(t_{k}, x_{k}\right) \in G$, which implies that the solution $y(t)$ to the following IVP $$ \left\{\begin{array}{l} y^{\prime}=f(t, y), \\ y\left(t_{k}\right)=x_{k}, \end{array}\right. $$ is defined for all $t \in\left[t_{0}-r / 2, t_{0}+r / 2\right]$ (see the diagram below). Since $x(t)$ also solves this IVP, the union $z(t)$ of $x(t)$ and $y(t)$ solves the same IVP. Note that $x(t)$ is defined only for $t>t_{0}$ while $z(t)$ is defined also for $t \in\left[t_{0}-r / 2, t_{0}\right]$. Hence, the solution $x(t)$ can be extended to a larger interval, which contradicts the maximality of $x(t)$.

Remark. By definition, a maximal solution $x(t)$ is defined on an open interval, say $(a, b)$, and it cannot be extended to a larger open interval. One may wonder if $x(t)$ can be extended at least to the endpoints $t=a$ or $t=b$. It turns out that this is never the case (unless the domain $\Omega$ of the function $f(t, x)$ can be enlarged). Indeed, if $x(t)$ can be defined as a solution to the ODE also for $t=a$ then $(a, x(a)) \in \Omega$ and, hence, there is a ball $B$ in $\mathbb{R}^{n+1}$ centered at the point $(a, x(a))$ such that $B \subset \Omega$. By shrinking the radius of $B$, we can assume that the corresponding closed ball $\bar{B}$ is also contained in $\Omega$. Since $x(t) \rightarrow x(a)$ as $t \rightarrow a$, we obtain that $(t, x(t)) \in \bar{B}$ for all $t$ close enough to $a$. Therefore, the solution $x(t)$ does not leave the compact set $\bar{B} \subset \Omega$ as $t \rightarrow a$, which contradicts part (c) of Theorem 2.8.

### Continuity of solutions with respect to $f(t, x)$

Consider the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ In Section 2.2, we investigated, in the one-dimensional case, the dependence of the solution $x(t)$ upon the initial value $x_{0}$. A more general question, which will be treated here, is how the solution $x(t)$ depends on the right hand side $f(t, x)$. The dependence on the initial condition can be reduced to the dependence on the right hand side as follows. Consider the function $y(t)=x(t)-x_{0}$, which obviously solves the IVP $$ \left\{\begin{array}{l} y^{\prime}=f\left(t, y+x_{0}\right) \\ y\left(t_{0}\right)=0 . \end{array}\right. $$ Hence, if we know that the solution $y(t)$ of (2.31) depends continuously on the right hand side, then it will follow that $y(t)$ is continuous in $x_{0}$, which implies that also the solution $x(t)$ of $(2.30)$ is continuous in $x_{0}$.

Let $\Omega$ be an open set in $\mathbb{R}^{n+1}$ and $f, g$ be two functions from $\Omega$ to $\mathbb{R}^{n}$. Assume in what follows that both $f, g$ are continuous and locally Lipschitz in $x$ in $\Omega$, and consider two initial value problems $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ and $$ \left\{\begin{array}{l} y^{\prime}=g(t, y) \\ y\left(t_{0}\right)=x_{0} \end{array}\right. $$ where $\left(t_{0}, x_{0}\right)$ is a fixed point in $\Omega$. Assume that the function $f$ is fixed and $x(t)$ is a fixed solution of (2.32), while the function $g$ will be treated as variable. Our purpose is to show that if $g$ is chosen close enough to $f$ then the solution $y(t)$ of $(2.33)$ is close enough to $x(t)$. Apart from the theoretical interest, this question has significant practical consequences.
For example, if one knows the function $f(t, x)$ only approximately (which is always the case in applications in Sciences and Engineering) then solving (2.32) approximately means solving another problem (2.33) where $g$ is an approximation to $f$. Hence, it is important to know that the solution $y(t)$ of $(2.33)$ is actually an approximation of $x(t)$.

Theorem 2.10 Let $x(t)$ be a solution to the IVP (2.32) defined on an interval $(a, b)$. Then, for all real $\alpha, \beta$ such that $a<\alpha<t_{0}<\beta<b$ and for any $\varepsilon>0$, there is $\eta>0$ such that, for any function $g: \Omega \rightarrow \mathbb{R}^{n}$ with $$ \sup _{\Omega}\|f-g\| \leq \eta, $$ there is a solution $y(t)$ of the IVP (2.33) defined on $[\alpha, \beta]$, and this solution satisfies the inequality $$ \sup _{[\alpha, \beta]}\|x(t)-y(t)\| \leq \varepsilon . $$

Proof. For any $\varepsilon \geq 0$, consider the set $$ K_{\varepsilon}=\left\{(t, x) \in \mathbb{R}^{n+1}: \alpha \leq t \leq \beta,\|x-x(t)\| \leq \varepsilon\right\}, $$ which can be regarded as the $\varepsilon$-neighborhood in $\mathbb{R}^{n+1}$ of the graph of the function $t \mapsto x(t)$ where $t \in[\alpha, \beta]$. In particular, $K_{0}$ is the graph of the function $x(t)$ on $[\alpha, \beta]$ (see the diagram below). The set $K_{0}$ is compact because it is the image of the compact interval $[\alpha, \beta]$ under the continuous mapping $t \mapsto(t, x(t))$. Hence, $K_{0}$ is bounded and closed, which implies that $K_{\varepsilon}$ is also bounded and closed for any $\varepsilon>0$. Thus, $K_{\varepsilon}$ is a compact subset of $\mathbb{R}^{n+1}$ for any $\varepsilon \geq 0$.

Claim 1. There is $\varepsilon>0$ such that $K_{\varepsilon} \subset \Omega$ and $f$ is Lipschitz in $x$ in $K_{\varepsilon}$. Indeed, by the local Lipschitz condition, for any point $\left(t_{*}, x_{*}\right) \in \Omega$ (in particular, for any $\left(t_{*}, x_{*}\right) \in K_{0}$), there are constants $\varepsilon, \delta>0$ such that the cylinder $$ G=\left[t_{*}-\delta, t_{*}+\delta\right] \times \bar{B}\left(x_{*}, \varepsilon\right) $$ is contained in $\Omega$ and $f$ is Lipschitz in $x$ in $G$ (see the diagram below). Varying the point $\left(t_{*}, x_{*}\right)$ in $K_{0}$, we obtain a cover of $K_{0}$ by the family of the open cylinders $H=\left(t_{*}-\delta, t_{*}+\delta\right) \times B\left(x_{*}, \varepsilon / 2\right)$ where $\varepsilon, \delta$ depend on $\left(t_{*}, x_{*}\right)$. Since $K_{0}$ is compact, there is a finite subcover, that is, a finite number of points $\left\{\left(t_{i}, x_{i}\right)\right\}_{i=1}^{m}$ on $K_{0}$ and the corresponding numbers $\varepsilon_{i}, \delta_{i}>0$, such that the cylinders $$ H_{i}=\left(t_{i}-\delta_{i}, t_{i}+\delta_{i}\right) \times B\left(x_{i}, \varepsilon_{i} / 2\right) $$ cover all of $K_{0}$. Set $$ G_{i}=\left[t_{i}-\delta_{i}, t_{i}+\delta_{i}\right] \times \bar{B}\left(x_{i}, \varepsilon_{i}\right) $$ and let $L_{i}$ be the Lipschitz constant of $f$ in $G_{i}$, which exists by the choice of $\varepsilon_{i}, \delta_{i}$. Set $$ \varepsilon=\frac{1}{2} \min _{1 \leq i \leq m} \varepsilon_{i} \text { and } L=\max _{1 \leq i \leq m} L_{i} $$ and let us prove that $K_{\varepsilon} \subset \Omega$ and that the function $f$ is Lipschitz in $x$ in $K_{\varepsilon}$ with the constant $L$.
For any point $(t, x) \in K_{\varepsilon}$, we have by the definition of $K_{\varepsilon}$ that $t \in[\alpha, \beta]$, $(t, x(t)) \in K_{0}$ and $$ \|x-x(t)\| \leq \varepsilon . $$ The point $(t, x(t))$ belongs to one of the cylinders $H_{i}$ so that $$ t \in\left(t_{i}-\delta_{i}, t_{i}+\delta_{i}\right) \quad \text { and } \quad\left\|x(t)-x_{i}\right\|<\varepsilon_{i} / 2 $$ (see the diagram below). By the triangle inequality, we have $$ \left\|x-x_{i}\right\| \leq\|x-x(t)\|+\left\|x(t)-x_{i}\right\|<\varepsilon+\varepsilon_{i} / 2 \leq \varepsilon_{i}, $$ where we have used that, by $(2.36)$, $\varepsilon \leq \varepsilon_{i} / 2$. Therefore, $x \in B\left(x_{i}, \varepsilon_{i}\right)$ whence it follows that $(t, x) \in G_{i}$ and, hence, $(t, x) \in \Omega$. Hence, we have shown that any point from $K_{\varepsilon}$ belongs to $\Omega$, which proves that $K_{\varepsilon} \subset \Omega$. If $(t, x),(t, y) \in K_{\varepsilon}$ then by the above argument both points $x, y$ belong to the same ball $B\left(x_{i}, \varepsilon_{i}\right)$ that is determined by the condition $(t, x(t)) \in H_{i}$. Then $(t, x),(t, y) \in G_{i}$ and, since $f$ is Lipschitz in $G_{i}$ with the constant $L_{i}$, we obtain $$ \|f(t, x)-f(t, y)\| \leq L_{i}\|x-y\| \leq L\|x-y\|, $$ where we have used the definition (2.36) of $L$. This shows that $f$ is Lipschitz in $x$ in $K_{\varepsilon}$ and finishes the proof of Claim 1.

Observe that if the statement of Claim 1 holds for some value of $\varepsilon$ then it holds for all smaller values of $\varepsilon$ as well, with the same $L$. Hence, we can assume that the value of $\varepsilon$ from Theorem 2.10 is small enough so that it satisfies Claim 1.

Let now $y(t)$ be the maximal solution to the IVP $(2.33)$, and let $\left(a^{\prime}, b^{\prime}\right)$ be its domain. By Theorem 2.8, the graph of $y(t)$ leaves $K_{\varepsilon}$ when $t \rightarrow a^{\prime}$ and when $t \rightarrow b^{\prime}$. Let $\left(\alpha^{\prime}, \beta^{\prime}\right)$ be the maximal interval such that the graph of $y(t)$ on this interval is contained in $K_{\varepsilon}$; that is, $$ \alpha^{\prime}=\inf \left\{t \in(\alpha, \beta) \cap\left(a^{\prime}, b^{\prime}\right):(s, y(s)) \in K_{\varepsilon} \text { for all } s \in\left[t, t_{0}\right]\right\} $$ and $\beta^{\prime}$ is defined similarly with inf replaced by sup (see the diagrams below for the cases $\alpha^{\prime}>\alpha$ and $\alpha^{\prime}=\alpha$, respectively). In particular, $\left(\alpha^{\prime}, \beta^{\prime}\right)$ is contained in $\left(a^{\prime}, b^{\prime}\right) \cap(\alpha, \beta)$, the function $y(t)$ is defined on $\left(\alpha^{\prime}, \beta^{\prime}\right)$ and $$ (t, y(t)) \in K_{\varepsilon} \text { for all } t \in\left(\alpha^{\prime}, \beta^{\prime}\right) . $$

Claim 2. We have $\left[\alpha^{\prime}, \beta^{\prime}\right] \subset\left(a^{\prime}, b^{\prime}\right)$; in particular, $y(t)$ is defined on the closed interval $\left[\alpha^{\prime}, \beta^{\prime}\right]$. Moreover, the following is true: either $\alpha^{\prime}=\alpha$ or $$ \alpha^{\prime}>\alpha \text { and }\|x(t)-y(t)\|=\varepsilon \text { for } t=\alpha^{\prime} . $$ Similarly, either $\beta^{\prime}=\beta$ or $$ \beta^{\prime}<\beta \quad \text { and } \quad\|x(t)-y(t)\|=\varepsilon \text { for } t=\beta^{\prime} . $$ By Theorem 2.8, $y(t)$ leaves $K_{\varepsilon}$ as $t \rightarrow a^{\prime}$. Hence, for all values of $t$ close enough to $a^{\prime}$ we have $(t, y(t)) \notin K_{\varepsilon}$.
For any such $t$ we have by (2.37) $t \leq \alpha^{\prime}$, whence $a^{\prime}<t \leq \alpha^{\prime}$ and, hence, $a^{\prime}<\alpha^{\prime}$. Similarly, one shows that $b^{\prime}>\beta^{\prime}$, whence the inclusion $\left[\alpha^{\prime}, \beta^{\prime}\right] \subset\left(a^{\prime}, b^{\prime}\right)$ follows. To prove the second part, assume that $\alpha^{\prime} \neq \alpha$, that is, $\alpha^{\prime}>\alpha$, and prove that $$ \|x(t)-y(t)\|=\varepsilon \text { for } t=\alpha^{\prime} . $$ The condition $\alpha^{\prime}>\alpha$ together with $\alpha^{\prime}>a^{\prime}$ implies that $\alpha^{\prime}$ belongs to the open interval $(\alpha, \beta) \cap\left(a^{\prime}, b^{\prime}\right)$. It follows that, for $\tau>0$ small enough, $$ \left(\alpha^{\prime}-\tau, \alpha^{\prime}+\tau\right) \subset(\alpha, \beta) \cap\left(a^{\prime}, b^{\prime}\right) . $$ For any $t \in\left(\alpha^{\prime}, \beta^{\prime}\right)$, we have $$ \|x(t)-y(t)\| \leq \varepsilon . $$ By continuity, this inequality extends also to $t=\alpha^{\prime}$. We need to prove that, for $t=\alpha^{\prime}$, equality is attained here. Indeed, if $$ \|x(t)-y(t)\|<\varepsilon \text { for } t=\alpha^{\prime} $$ then, by the continuity of $x(t)$ and $y(t)$, the same inequality holds for all $t \in\left(\alpha^{\prime}-\tau, \alpha^{\prime}+\tau\right)$ provided $\tau>0$ is small enough. Choosing $\tau$ to satisfy also (2.40), we obtain that $(t, y(t)) \in K_{\varepsilon}$ for all $t \in\left(\alpha^{\prime}-\tau, \alpha^{\prime}\right]$, which contradicts the definition of $\alpha^{\prime}$.

Claim 3. For any given $\alpha, \beta, \varepsilon$ as above, there exists $\eta>0$ such that if $$ \sup _{K_{\varepsilon}}\|f-g\| \leq \eta $$ then $\left[\alpha^{\prime}, \beta^{\prime}\right]=[\alpha, \beta]$. In fact, Claim 3 will finish the proof of Theorem 2.10. Indeed, Claims 2 and 3 imply that $y(t)$ is defined on $[\alpha, \beta]$; by the definition of $\alpha^{\prime}$ and $\beta^{\prime}$ (see (2.38)), we obtain $(t, y(t)) \in K_{\varepsilon}$ for all $t \in(\alpha, \beta)$, and by continuity, the same holds for $t \in[\alpha, \beta]$. By the definition $(2.35)$ of $K_{\varepsilon}$, this means $$ \|y(t)-x(t)\| \leq \varepsilon \text { for all } t \in[\alpha, \beta], $$ which was the claim of Theorem 2.10.

To prove Claim 3, for any $t \in\left[\alpha^{\prime}, \beta^{\prime}\right]$ write the integral identities $$ x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s $$ and $$ y(t)=x_{0}+\int_{t_{0}}^{t} g(s, y(s)) d s . $$ Assuming for simplicity that $t \geq t_{0}$ and using the triangle inequality, we obtain $$ \begin{aligned} \|x(t)-y(t)\| & \leq \int_{t_{0}}^{t}\|f(s, x(s))-g(s, y(s))\| d s \\ & \leq \int_{t_{0}}^{t}\|f(s, x(s))-f(s, y(s))\| d s+\int_{t_{0}}^{t}\|f(s, y(s))-g(s, y(s))\| d s . \end{aligned} $$ Since the points $(s, x(s))$ and $(s, y(s))$ are in $K_{\varepsilon}$, we obtain by the Lipschitz condition in $K_{\varepsilon}$ (Claim 1) that $$ \|x(t)-y(t)\| \leq L \int_{t_{0}}^{t}\|x(s)-y(s)\| d s+\sup _{K_{\varepsilon}}\|f-g\|\,(\beta-\alpha) . $$ Hence, by the Gronwall lemma applied to the function $z(t)=\|x(t)-y(t)\|$, $$ \begin{aligned} \|x(t)-y(t)\| & \leq(\beta-\alpha) \exp \left(L\left(t-t_{0}\right)\right) \sup _{K_{\varepsilon}}\|f-g\| \\ & \leq(\beta-\alpha) \exp (L(\beta-\alpha)) \sup _{K_{\varepsilon}}\|f-g\| . \end{aligned} $$ In the same way, (2.43) holds for $t \leq t_{0}$ so that it is true for all $t \in\left[\alpha^{\prime}, \beta^{\prime}\right]$.
Now choose $\eta$ in (2.41) as follows: $$ \eta=\frac{\varepsilon}{2(\beta-\alpha)} e^{-L(\beta-\alpha)} . $$ Then it follows from $(2.43)$ that $$ \|x(t)-y(t)\| \leq \varepsilon / 2<\varepsilon \text { for all } t \in\left[\alpha^{\prime}, \beta^{\prime}\right] . $$ By Claim 2, we conclude that $\alpha^{\prime}=\alpha$ and $\beta^{\prime}=\beta$, which finishes the proof.

Using the proof of Theorem 2.10, we can refine its statement as follows.

Corollary. Under the hypotheses of Theorem 2.10, let $x(t)$ be a solution to the IVP (2.32) defined on an interval $(a, b)$, and let $\alpha, \beta$ be such that $a<\alpha<t_{0}<\beta<b$. Let $\varepsilon>0$ be sufficiently small so that $f(t, x)$ is Lipschitz in $x$ in $K_{\varepsilon}$ with the Lipschitz constant $L$. If $\sup _{K_{\varepsilon}}\|f-g\|$ is sufficiently small, then the IVP (2.33) has a solution $y(t)$ defined on $[\alpha, \beta]$, and the following estimate holds: $$ \sup _{[\alpha, \beta]}\|x(t)-y(t)\| \leq(\beta-\alpha) e^{L(\beta-\alpha)} \sup _{K_{\varepsilon}}\|f-g\| . $$

Proof. By Claim 2 of the above proof, the maximal solution $y(t)$ of $(2.33)$ is defined on $\left[\alpha^{\prime}, \beta^{\prime}\right]$. Also, the difference $\|x(t)-y(t)\|$ satisfies (2.43) for all $t \in\left[\alpha^{\prime}, \beta^{\prime}\right]$. If $\sup _{K_{\varepsilon}}\|f-g\|$ is small enough then, by Claim 3, $\left[\alpha^{\prime}, \beta^{\prime}\right]=[\alpha, \beta]$. It follows that $y(t)$ is defined on $[\alpha, \beta]$ and satisfies (2.45).

### Continuity of solutions with respect to a parameter

Consider the IVP with a parameter $s \in \mathbb{R}^{m}$ $$ \left\{\begin{array}{l} x^{\prime}=f(t, x, s) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ where $f: \Omega \rightarrow \mathbb{R}^{n}$ and $\Omega$ is an open subset of $\mathbb{R}^{n+m+1}$. Here the triple $(t, x, s)$ is identified as a point in $\mathbb{R}^{n+m+1}$ as follows: $$ (t, x, s)=\left(t, x_{1}, \ldots, x_{n}, s_{1}, \ldots, s_{m}\right) . $$ How do we understand (2.46)? For any $s \in \mathbb{R}^{m}$, consider the open set $$ \Omega_{s}=\left\{(t, x) \in \mathbb{R}^{n+1}:(t, x, s) \in \Omega\right\} . $$ Denote by $S$ the set of those $s$, for which $\Omega_{s}$ contains $\left(t_{0}, x_{0}\right)$, that is, $$ S=\left\{s \in \mathbb{R}^{m}:\left(t_{0}, x_{0}\right) \in \Omega_{s}\right\}=\left\{s \in \mathbb{R}^{m}:\left(t_{0}, x_{0}, s\right) \in \Omega\right\} . $$ Then the IVP (2.46) can be considered in the domain $\Omega_{s}$ for any $s \in S$. We always assume that the set $S$ is non-empty. Assume also in the sequel that $f(t, x, s)$ is a continuous function in $(t, x, s) \in \Omega$ and is locally Lipschitz in $x$ for any $s \in S$. For any $s \in S$, denote by $x(t, s)$ the maximal solution of (2.46) and let $I_{s}$ be its domain (that is, $I_{s}$ is an open interval on the axis $t$). Hence, $x(t, s)$ as a function of $(t, s)$ is defined in the set $$ U=\left\{(t, s) \in \mathbb{R}^{m+1}: s \in S, t \in I_{s}\right\} . $$

Theorem 2.11 Under the above assumptions, the set $U$ is an open subset of $\mathbb{R}^{m+1}$ and the function $x(t, s): U \rightarrow \mathbb{R}^{n}$ is continuous in $(t, s)$.

Proof. Fix some $s_{0} \in S$ and consider the solution $x(t)=x\left(t, s_{0}\right)$ defined for $t \in I_{s_{0}}$. Choose some interval $[\alpha, \beta] \subset I_{s_{0}}$ such that $t_{0} \in[\alpha, \beta]$.
We will prove that there is $\varepsilon>0$ such that $$ [\alpha, \beta] \times \bar{B}\left(s_{0}, \varepsilon\right) \subset U, $$ which will imply that $U$ is open. Here $\bar{B}\left(s_{0}, \varepsilon\right)$ is a closed ball in $\mathbb{R}^{m}$ with respect to the $\infty$-norm (we can assume that all the norms in the various spaces $\mathbb{R}^{k}$ are the $\infty$-norms). As in the proof of Theorem 2.10, consider a set $$ K_{\varepsilon}=\left\{(t, x) \in \mathbb{R}^{n+1}: \alpha \leq t \leq \beta,\|x-x(t)\| \leq \varepsilon\right\} $$ and its extension in $\mathbb{R}^{n+m+1}$ defined by $$ \widetilde{K}_{\varepsilon}=K_{\varepsilon} \times \bar{B}\left(s_{0}, \varepsilon\right)=\left\{(t, x, s) \in \mathbb{R}^{n+m+1}: \alpha \leq t \leq \beta,\|x-x(t)\| \leq \varepsilon,\left\|s-s_{0}\right\| \leq \varepsilon\right\} $$ (see the diagram below). If $\varepsilon$ is small enough then $\widetilde{K}_{\varepsilon}$ is contained in $\Omega$ (cf. the proof of Theorem 2.10 and Exercise 26). Hence, for any $s \in \bar{B}\left(s_{0}, \varepsilon\right)$, the function $f(t, x, s)$ is defined for all $(t, x) \in K_{\varepsilon}$. Since the function $f$ is continuous on $\Omega$, it is uniformly continuous on the compact set $\widetilde{K}_{\varepsilon}$, whence it follows that $$ \sup _{(t, x) \in K_{\varepsilon}}\left\|f\left(t, x, s_{0}\right)-f(t, x, s)\right\| \rightarrow 0 \text { as } s \rightarrow s_{0} . $$ Using Corollary to Theorem 2.10 with $f(t, x)=f\left(t, x, s_{0}\right)$ and $g(t, x)=f(t, x, s)$ where $s \in \bar{B}\left(s_{0}, \varepsilon\right)$ (since the common domain of the functions $f(t, x, s)$ and $f\left(t, x, s_{0}\right)$ is $(t, x) \in \Omega_{s_{0}} \cap \Omega_{s}$, the Corollary should be applied with this domain), we obtain that if $$ \sup _{(t, x) \in K_{\varepsilon}}\left\|f(t, x, s)-f\left(t, x, s_{0}\right)\right\| $$ is small enough then the solution $y(t)=x(t, s)$ is defined on $[\alpha, \beta]$. In particular, this implies $(2.47)$ for small enough $\varepsilon$. Furthermore, by Corollary to Theorem 2.10 we also obtain that $$ \sup _{t \in[\alpha, \beta]}\left\|x(t, s)-x\left(t, s_{0}\right)\right\| \leq C \sup _{(t, x) \in K_{\varepsilon}}\left\|f\left(t, x, s_{0}\right)-f(t, x, s)\right\|, $$ where the constant $C$ depends only on $\alpha, \beta, \varepsilon$ and the Lipschitz constant $L$ of the function $f\left(t, x, s_{0}\right)$ in $K_{\varepsilon}$. Letting $s \rightarrow s_{0}$, we obtain that $$ \sup _{t \in[\alpha, \beta]}\left\|x(t, s)-x\left(t, s_{0}\right)\right\| \rightarrow 0 \text { as } s \rightarrow s_{0}, $$ so that $x(t, s)$ is continuous in $s$ at $s_{0}$ uniformly in $t \in[\alpha, \beta]$. Since $x(t, s)$ is continuous in $t$ for any fixed $s$, we conclude that $x$ is continuous in $(t, s)$ (see Exercise 28), which finishes the proof.
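The statement of Theorem 2.11 can be observed numerically. In the sketch below (an added illustration, not part of the notes: the family $x^{\prime}=x+s \sin t$, $x(0)=1$, with $x(t, 0)=e^{t}$, is our own test case) the sup-distance $\sup_{[0,1]}\left\|x(t, s)-x(t, 0)\right\|$ shrinks roughly linearly as $s \rightarrow 0$:

```python
import math

# For x' = x + s*sin(t), x(0) = 1, compare the Euler solution x(t, s)
# with the unperturbed solution x(t, 0) = e^t on [0, 1].

def sup_distance(s, n=100000):
    t, x, h = 0.0, 1.0, 1.0 / n
    sup = 0.0
    for _ in range(n):
        x += h * (x + s * math.sin(t))
        t += h
        sup = max(sup, abs(x - math.exp(t)))
    return sup

for s in (0.1, 0.01, 0.001):
    print(s, sup_distance(s))  # the sup-distance decays with s
```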
### Global existence

Theorem 2.12 Let $I$ be an open interval in $\mathbb{R}$ and let $f(t, x): I \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ be a continuous function that is locally Lipschitz in $x$ and satisfies the inequality $$ \|f(t, x)\| \leq a(t)\|x\|+b(t) $$ for all $t \in I$ and $x \in \mathbb{R}^{n}$, where $a(t)$ and $b(t)$ are some continuous non-negative functions of $t$. Then, for all $t_{0} \in I$ and $x_{0} \in \mathbb{R}^{n}$, the initial value problem $$ \left\{\begin{array}{l} x^{\prime}=f(t, x) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ has a (unique) solution $x(t)$ on $I$. In other words, under the specified conditions, the maximal solution of (2.49) is defined on $I$.

Proof. Let $x(t)$ be the maximal solution to the problem $(2.49)$, and let $J=(\alpha, \beta)$ be the open interval where $x(t)$ is defined. We will show that $J=I$. Assume on the contrary that this is not the case. Then one of the points $\alpha, \beta$ is contained in $I$, say $\beta \in I$. Let us investigate the behavior of $\|x(t)\|$ as $t \rightarrow \beta$. By Theorem 2.8, $(t, x(t))$ leaves any compact $K \subset \Omega:=I \times \mathbb{R}^{n}$. Consider a compact set $$ K=[\beta-\varepsilon, \beta] \times \bar{B}(0, r) $$ where $\varepsilon>0$ is so small that $[\beta-\varepsilon, \beta] \subset I$. Clearly, $K \subset \Omega$. If $t$ is close enough to $\beta$ then $t \in[\beta-\varepsilon, \beta]$. Since $(t, x(t))$ must be outside $K$, we conclude that $x(t) \notin \bar{B}(0, r)$, that is, $\|x(t)\|>r$. Since $r$ is arbitrary, we have proved that $\|x(t)\| \rightarrow \infty$ as $t \rightarrow \beta$. On the other hand, let us show that the solution $x(t)$ must remain bounded as $t \rightarrow \beta$. From the integral equation $$ x(t)=x_{0}+\int_{t_{0}}^{t} f(s, x(s)) d s, $$ we obtain, for any $t \in\left[t_{0}, \beta\right)$, $$ \begin{aligned} \|x(t)\| & \leq\left\|x_{0}\right\|+\int_{t_{0}}^{t}\|f(s, x(s))\| d s \\ & \leq\left\|x_{0}\right\|+\int_{t_{0}}^{t}(a(s)\|x(s)\|+b(s)) d s \\ & \leq C+A \int_{t_{0}}^{t}\|x(s)\| d s, \end{aligned} $$ where $$ A=\sup _{\left[t_{0}, \beta\right]} a(s) \text { and } C=\left\|x_{0}\right\|+\int_{t_{0}}^{\beta} b(s) d s . $$ Since $\left[t_{0}, \beta\right] \subset I$ and the functions $a(s)$ and $b(s)$ are continuous in $\left[t_{0}, \beta\right]$, the values of $A$ and $C$ are finite. The Gronwall lemma yields $$ \|x(t)\| \leq C \exp \left(A\left(t-t_{0}\right)\right) \leq C \exp \left(A\left(\beta-t_{0}\right)\right) . $$ Since the right hand side here does not depend on $t$, the function $\|x(t)\|$ remains bounded as $t \rightarrow \beta$, which contradicts $\|x(t)\| \rightarrow \infty$ and finishes the proof.

Example. We have considered above the ODE $x^{\prime}=x^{2}$ defined in $\mathbb{R} \times \mathbb{R}$ and have seen that the solution $x(t)=\frac{1}{C-t}$ cannot be defined on the whole of $\mathbb{R}$. The same occurs for the equation $x^{\prime}=x^{\alpha}$ for $\alpha>1$. The reason is that the function $f(t, x)=x^{\alpha}$ does not admit the estimate $(2.48)$ for large $x$, due to $\alpha>1$. This example also shows that the condition (2.48) is rather sharp.

A particularly important application of Theorem 2.12 is the case of the linear equation $$ x^{\prime}=A(t) x+B(t), $$ where $x \in \mathbb{R}^{n}$, $t \in I$ (where $I$ is an open interval in $\mathbb{R}$), $B: I \rightarrow \mathbb{R}^{n}$, $A: I \rightarrow \mathbb{R}^{n \times n}$. Here $\mathbb{R}^{n \times n}$ is the space of all $n \times n$ matrices (that can be identified with $\mathbb{R}^{n^{2}}$). In other words, for each $t \in I$, $A(t)$ is an $n \times n$ matrix, and $A(t) x$ is the product of the matrix $A(t)$ and the column vector $x$. In the coordinate form, one has a system of linear equations $$ x_{k}^{\prime}=\sum_{l=1}^{n} A_{k l}(t) x_{l}+B_{k}(t) $$ for any $k=1, \ldots, n$.

Theorem 2.13 In the above notation, let $A(t)$ and $B(t)$ be continuous in $t \in I$. Then, for any $t_{0} \in I$ and $x_{0} \in \mathbb{R}^{n}$, the IVP $$ \left\{\begin{array}{l} x^{\prime}=A(t) x+B(t) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ has a (unique) solution $x(t)$ defined on $I$.

Proof.
It suffices to check that the function $f(t, x)=A(t) x+B(t)$ satisfies the conditions of Theorem 2.12. This function is obviously continuous in $(t, x)$ and continuously differentiable in $x$, which implies by Lemma 2.6 that $f(t, x)$ is locally Lipschitz in $x$. We are left to verify (2.48). By the triangle inequality, we have $$ \|f(t, x)\| \leq\|A(t) x\|+\|B(t)\| . $$ Let all the norms be the $\infty$-norm. Then $$ b(t):=\|B(t)\|_{\infty}=\max _{k}\left|B_{k}(t)\right| $$ is a continuous function of $t$. Next, $$ \|A(t) x\|_{\infty}=\max _{k}\left|(A(t) x)_{k}\right|=\max _{k}\left|\sum_{l=1}^{n} A_{k l}(t) x_{l}\right| \leq\left(\max _{k} \sum_{l=1}^{n}\left|A_{k l}(t)\right|\right) \max _{l}\left|x_{l}\right|=a(t)\|x\|_{\infty}, $$ where $$ a(t)=\max _{k} \sum_{l=1}^{n}\left|A_{k l}(t)\right| $$ is a continuous function. Hence, we obtain from (2.50) $$ \|f(t, x)\| \leq a(t)\|x\|+b(t), $$ which finishes the proof.

### Differentiability of solutions in parameter

Consider again the initial value problem with parameter $$ \left\{\begin{array}{l} x^{\prime}=f(t, x, s) \\ x\left(t_{0}\right)=x_{0} \end{array}\right. $$ where $f: \Omega \rightarrow \mathbb{R}^{n}$ is a continuous function defined on an open set $\Omega \subset \mathbb{R}^{n+m+1}$ and where $(t, x, s)=\left(t, x_{1}, \ldots, x_{n}, s_{1}, \ldots, s_{m}\right)$. Let us use the following notation for the Jacobian matrices of $f$ with respect to $x$ and $s$. Set $$ f_{x}=\partial_{x} f=\frac{\partial f}{\partial x}:=\left(\frac{\partial f_{i}}{\partial x_{k}}\right) $$ where $i=1, \ldots, n$ is the row index and $k=1, \ldots, n$ is the column index, so that $f_{x}$ is an $n \times n$ matrix. Similarly, set $$ f_{s}=\frac{\partial f}{\partial s}=\partial_{s} f=\left(\frac{\partial f_{i}}{\partial s_{l}}\right) $$ where $i=1, \ldots, n$ is the row index and $l=1, \ldots, m$ is the column index, so that $f_{s}$ is an $n \times m$ matrix. If $f_{x}$ is continuous in $\Omega$ then, by Lemma 2.6, $f$ is locally Lipschitz in $x$ so that all the previous results apply. Let $x(t, s)$ be the maximal solution to $(2.51)$. Recall that, by Theorem 2.11, the domain $U$ of $x(t, s)$ is an open subset of $\mathbb{R}^{m+1}$ and $x: U \rightarrow \mathbb{R}^{n}$ is continuous.

Theorem 2.14 Assume that the function $f(t, x, s)$ is continuous and $f_{x}$ and $f_{s}$ exist and are also continuous in $\Omega$. Then $x(t, s)$ is continuously differentiable in $(t, s) \in U$ and the Jacobian matrix $y=\partial_{s} x$ solves the initial value problem $$ \left\{\begin{array}{l} y^{\prime}=f_{x}(t, x(t, s), s) y+f_{s}(t, x(t, s), s) \\ y\left(t_{0}\right)=0 . \end{array}\right. $$ Here $\partial_{s} x=\left(\frac{\partial x_{k}}{\partial s_{l}}\right)$ is an $n \times m$ matrix where $k=1, \ldots, n$ is the row index and $l=1, \ldots, m$ is the column index. Hence, $y=\partial_{s} x$ can be considered as a vector in $\mathbb{R}^{n \times m}$ depending on $t$ and $s$. Both terms on the right hand side of (2.52) are also $n \times m$ matrices so that (2.52) makes sense. Indeed, $f_{s}$ is an $n \times m$ matrix, and $f_{x} y$ is the product of the $n \times n$ and $n \times m$ matrices, which is again an $n \times m$ matrix. The ODE in (2.52) is called the variational equation for (2.51) along the solution $x(t, s)$ (or the equation in variations). Note that the variational equation is linear.
Indeed, for any fixed $s$, its right hand side can be written in the form $$ y^{\prime}=A(t) y+B(t), $$ where $A(t)=f_{x}(t, x(t, s), s)$ and $B(t)=f_{s}(t, x(t, s), s)$. Since $f$ is continuous and $x(t, s)$ is continuous by Theorem 2.11, the functions $A(t)$ and $B(t)$ are continuous in $t$. If the domain in $t$ of the solution $x(t, s)$ is $I_{s}$ then the domain of the variational equation is $I_{s} \times \mathbb{R}^{n \times m}$. By Theorem 2.13, the solution $y(t)$ of (2.52) exists in the full interval $I_{s}$. Hence, Theorem 2.14 can be stated as follows: if $x(t, s)$ is the solution of $(2.51)$ on $I_{s}$ and $y(t)$ is the solution of $(2.52)$ on $I_{s}$ then we have the identity $y(t)=\partial_{s} x(t, s)$ for all $t \in I_{s}$. This provides a method of evaluating $\partial_{s} x(t, s)$ for a fixed $s$ without finding $x(t, s)$ for all $s$.

Example. Consider the IVP with parameter $$ \left\{\begin{array}{l} x^{\prime}=x^{2}+2 s / t \\ x(1)=-1 \end{array}\right. $$ in the domain $(0,+\infty) \times \mathbb{R} \times \mathbb{R}$ (that is, $t>0$ and $x, s$ are arbitrary reals). Let us evaluate $x(t, s)$ and $\partial_{s} x$ for $s=0$. Obviously, the function $f(t, x, s)=x^{2}+2 s / t$ is continuously differentiable in $(x, s)$, whence it follows that the solution $x(t, s)$ is continuously differentiable in $(t, s)$. For $s=0$ we have the IVP $$ \left\{\begin{array}{l} x^{\prime}=x^{2} \\ x(1)=-1 \end{array}\right. $$ whence we obtain $x(t, 0)=-\frac{1}{t}$. Noticing that $f_{x}=2 x$ and $f_{s}=2 / t$, we obtain the variational equation along this solution $$ y^{\prime}=\left(\left.f_{x}(t, x, s)\right|_{x=-\frac{1}{t}, s=0}\right) y+\left(\left.f_{s}(t, x, s)\right|_{x=-\frac{1}{t}, s=0}\right)=-\frac{2}{t} y+\frac{2}{t} . $$ This is a linear equation of the form $y^{\prime}=a(t) y+b(t)$ which is solved by the formula $$ y=e^{A(t)} \int e^{-A(t)} b(t) d t, $$ where $A(t)$ is a primitive of $a(t)=-2 / t$, that is, $A(t)=-2 \ln t$. Hence, $$ y(t)=t^{-2} \int t^{2} \frac{2}{t} d t=t^{-2}\left(t^{2}+C\right)=1+C t^{-2} . $$ The initial condition $y(1)=0$ is satisfied for $C=-1$ so that $y(t)=1-t^{-2}$. By Theorem 2.14, we conclude that $\partial_{s} x(t, 0)=1-t^{-2}$. Expanding $x(t, s)$ as a function of $s$ by the Taylor formula of order 1, we obtain $$ x(t, s)=x(t, 0)+\partial_{s} x(t, 0) s+o(s) \text { as } s \rightarrow 0, $$ that is, $$ x(t, s)=-\frac{1}{t}+\left(1-\frac{1}{t^{2}}\right) s+o(s) \text { as } s \rightarrow 0 . $$ In particular, we obtain for small $s$ an approximation $$ x(t, s) \approx-\frac{1}{t}+\left(1-\frac{1}{t^{2}}\right) s . $$ Later we will be able to obtain more terms in the Taylor formula and, hence, to get a better approximation for $x(t, s)$.

Remark. It is easy to deduce the variational equation (2.52) provided we know that the function $x(t, s)$ is sufficiently many times differentiable. Assume that the mixed partial derivatives $\partial_{s} \partial_{t} x$ and $\partial_{t} \partial_{s} x$ exist and are equal (for example, this is the case when $x(t, s) \in C^{2}(U)$). Then, differentiating (2.51) in $s$ and using the chain rule, we obtain $$ \partial_{t} \partial_{s} x=\partial_{s}\left(\partial_{t} x\right)=\partial_{s}[f(t, x(t, s), s)]=f_{x}(t, x(t, s), s) \partial_{s} x+f_{s}(t, x(t, s), s), $$ which implies (2.52) after the substitution $\partial_{s} x=y$. Although this argument is not a proof of Theorem 2.14, it provides a convenient way to memorize the variational equation.
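The value $\partial_{s} x(t, 0)=1-t^{-2}$ can be checked numerically. The sketch below (an added illustration, not part of the notes: the Euler integrator, the evaluation point $t=2$ and the increment are our own choices) approximates the derivative by a central finite difference of two numerically integrated solutions and compares it with $y(2)=1-2^{-2}=0.75$ obtained from the variational equation:

```python
# Finite-difference check of the variational equation for
# x' = x^2 + 2s/t, x(1) = -1.

def solve(s, T=2.0, n=200000):
    # explicit Euler from t = 1 to t = T
    t, x = 1.0, -1.0
    h = (T - 1.0) / n
    for _ in range(n):
        x += h * (x * x + 2 * s / t)
        t += h
    return x

ds = 1e-4
fd = (solve(ds) - solve(-ds)) / (2 * ds)  # approximates d/ds x(2, s) at s = 0
print(fd, 1 - 1 / 2**2)                   # both values are close to 0.75
```

Note the design trade-off visible here: the variational equation yields $\partial_{s} x(\cdot, 0)$ on the whole interval from a single linear ODE, whereas the finite difference requires re-solving the nonlinear IVP for every increment of $s$.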
The main technical difficulty in the proof of Theorem 2.14 is verifying the differentiability of $x$ in $s$. How can one evaluate the higher derivatives of $x(t, s)$ in $s$? Let us show how to find the ODE for the second derivative $z=\partial_{s s} x$, assuming for simplicity that $n=m=1$, that is, both $x$ and $s$ are one-dimensional. For the derivative $y=\partial_{s} x$ we have the IVP $(2.52)$, which we write in the form $$ \left\{\begin{array}{l} y^{\prime}=g(t, y, s) \\ y\left(t_{0}\right)=0 \end{array}\right. $$ where $$ g(t, y, s)=f_{x}(t, x(t, s), s) y+f_{s}(t, x(t, s), s) . $$ For what follows we use the notation $F(a, b, c, \ldots) \in C^{k}(a, b, c, \ldots)$ if all the partial derivatives of order up to $k$ of the function $F$ with respect to the specified variables $a, b, c, \ldots$ exist and are continuous functions, in the domain of $F$. For example, the condition in Theorem 2.14 that $f_{x}$ and $f_{s}$ are continuous can be shortly written as $f \in C^{1}(x, s)$, and the claim of Theorem 2.14 is that $x(t, s) \in C^{1}(t, s)$. Assume now that $f \in C^{2}(x, s)$. Then by (2.54) we obtain that $g$ is continuous and $g \in C^{1}(y, s)$, whence by Theorem 2.14 $y \in C^{1}(s)$. In particular, the function $z=\partial_{s} y=\partial_{s s} x$ is defined. Applying the variational equation to the problem (2.53), we obtain the equation for $z$ $$ z^{\prime}=g_{y}(t, y(t, s), s) z+g_{s}(t, y(t, s), s) . $$ Since $g_{y}=f_{x}(t, x, s)$, $$ g_{s}(t, y, s)=f_{x x}(t, x, s)\left(\partial_{s} x\right) y+f_{x s}(t, x, s) y+f_{s x}(t, x, s) \partial_{s} x+f_{s s}(t, x, s), $$ and $\partial_{s} x=y$, we conclude that $$ \left\{\begin{array}{l} z^{\prime}=f_{x}(t, x, s) z+f_{x x}(t, x, s) y^{2}+2 f_{x s}(t, x, s) y+f_{s s}(t, x, s) \\ z\left(t_{0}\right)=0 . \end{array}\right. $$ Note that here $x$ must be substituted by $x(t, s)$ and $y$ by $y(t, s)$. The equation (2.55) is called the variational equation of the second order, or the second variational equation. It is a linear ODE and it has the same coefficient $f_{x}(t, x(t, s), s)$ in front of the unknown function as the first variational equation. Similarly one finds the variational equations of the higher orders.

Example. This is a continuation of the previous example of the IVP with parameter $$ \left\{\begin{array}{l} x^{\prime}=x^{2}+2 s / t \\ x(1)=-1 \end{array}\right. $$ where we have computed that $$ x(t):=x(t, 0)=-\frac{1}{t} \text { and } y(t):=\partial_{s} x(t, 0)=1-\frac{1}{t^{2}} . $$ Let us now evaluate $z=\partial_{s s} x(t, 0)$. Since $$ f_{x}=2 x, \quad f_{x x}=2, \quad f_{x s}=0, \quad f_{s s}=0, $$ we obtain the second variational equation $$ \begin{aligned} z^{\prime} & =\left(\left.f_{x}\right|_{x=-\frac{1}{t}, s=0}\right) z+\left(\left.f_{x x}\right|_{x=-\frac{1}{t}, s=0}\right) y^{2} \\ & =-\frac{2}{t} z+2\left(1-t^{-2}\right)^{2} .
\end{aligned} $$ Solving this equation similarly to the first variational equation, with the same $a(t)=-\frac{2}{t}$ and with $b(t)=2\left(1-t^{-2}\right)^{2}$, we obtain $$ \begin{aligned} z(t) & =e^{A(t)} \int e^{-A(t)} b(t) d t=t^{-2} \int 2 t^{2}\left(1-t^{-2}\right)^{2} d t \\ & =t^{-2}\left(\frac{2}{3} t^{3}-\frac{2}{t}-4 t+C\right)=\frac{2}{3} t-\frac{2}{t^{3}}-\frac{4}{t}+\frac{C}{t^{2}} . \end{aligned} $$ The initial condition $z(1)=0$ yields $C=\frac{16}{3}$, whence $$ z(t)=\frac{2}{3} t-\frac{2}{t^{3}}-\frac{4}{t}+\frac{16}{3 t^{2}} . $$ Expanding $x(t, s)$ at $s=0$ by the Taylor formula of the second order, we obtain, as $s \rightarrow 0$, $$ \begin{aligned} x(t, s) & =x(t)+y(t) s+\frac{1}{2} z(t) s^{2}+o\left(s^{2}\right) \\ & =-\frac{1}{t}+\left(1-t^{-2}\right) s+\left(\frac{1}{3} t-\frac{2}{t}+\frac{8}{3 t^{2}}-\frac{1}{t^{3}}\right) s^{2}+o\left(s^{2}\right) . \end{aligned} $$ For comparison, the plots below show for $s=0.1$ the solution $x(t, s)$ (yellow) found by numerical methods (MAPLE), the first order approximation $u(t)=-\frac{1}{t}+\left(1-t^{-2}\right) s$ (green) and the second order approximation $v(t)=-\frac{1}{t}+\left(1-t^{-2}\right) s+\left(\frac{1}{3} t-\frac{2}{t}+\frac{8}{3 t^{2}}-\frac{1}{t^{3}}\right) s^{2}$ (red).

Let us discuss an alternative method of obtaining the equations for the derivatives of $x(t, s)$ in $s$. As above, let $x(t), y(t), z(t)$ be respectively $x(t, 0)$, $\partial_{s} x(t, 0)$ and $\partial_{s s} x(t, 0)$, so that by the Taylor formula $$ x(t, s)=x(t)+y(t) s+\frac{1}{2} z(t) s^{2}+o\left(s^{2}\right) . $$ Let us write a similar expansion for $x^{\prime}=\partial_{t} x$, assuming that the derivatives $\partial_{t}$ and $\partial_{s}$ commute on $x$. We have $$ \partial_{s} x^{\prime}=\partial_{t} \partial_{s} x=y^{\prime} $$ and in the same way $$ \partial_{s s} x^{\prime}=\partial_{s} y^{\prime}=\partial_{t} \partial_{s} y=z^{\prime} . $$ Hence, $$ x^{\prime}(t, s)=x^{\prime}(t)+y^{\prime}(t) s+\frac{1}{2} z^{\prime}(t) s^{2}+o\left(s^{2}\right) . $$ Substituting this into the equation $$ x^{\prime}=x^{2}+2 s / t $$ we obtain $$ x^{\prime}(t)+y^{\prime}(t) s+\frac{1}{2} z^{\prime}(t) s^{2}+o\left(s^{2}\right)=\left(x(t)+y(t) s+\frac{1}{2} z(t) s^{2}+o\left(s^{2}\right)\right)^{2}+2 s / t $$ whence $$ x^{\prime}(t)+y^{\prime}(t) s+\frac{1}{2} z^{\prime}(t) s^{2}=x^{2}(t)+2 x(t) y(t) s+\left(y(t)^{2}+x(t) z(t)\right) s^{2}+2 s / t+o\left(s^{2}\right) . $$ Equating the terms with the same powers of $s$ (which can be done by the uniqueness of the Taylor expansion), we obtain the equations $$ \begin{aligned} & x^{\prime}(t)=x^{2}(t) \\ & y^{\prime}(t)=2 x(t) y(t)+2 / t \\ & z^{\prime}(t)=2 x(t) z(t)+2 y^{2}(t) . \end{aligned} $$ From the initial condition $x(1, s)=-1$ we obtain $$ -1=x(1)+s y(1)+\frac{s^{2}}{2} z(1)+o\left(s^{2}\right), $$ whence $x(1)=-1$, $y(1)=z(1)=0$. Solving successively the above equations with these initial conditions, we obtain the same result as above.

Before we prove Theorem 2.14, let us prove some auxiliary statements from Analysis.

Definition. A set $K \subset \mathbb{R}^{n}$ is called convex if for any two points $x, y \in K$, also the full interval $[x, y]$ is contained in $K$, that is, the point $(1-\lambda) x+\lambda y$ belongs to $K$ for any $\lambda \in[0,1]$.

Example. Let us show that any ball $B(z, r)$ in $\mathbb{R}^{n}$ with respect to any norm is convex. Indeed, it suffices to treat the case $z=0$.
Before we prove Theorem 2.14, let us prove some auxiliary statements from Analysis.

Definition. A set $K \subset \mathbb{R}^{n}$ is called convex if for any two points $x, y \in K$, also the full interval $[x, y]$ is contained in $K$, that is, the point $(1-\lambda) x+\lambda y$ belongs to $K$ for any $\lambda \in[0,1]$.

Example. Let us show that any ball $B(z, r)$ in $\mathbb{R}^{n}$ with respect to any norm is convex. Indeed, it suffices to treat the case $z=0$. If $x, y \in B(0, r)$ then $\|x\|<r$ and $\|y\|<r$, whence for any $\lambda \in[0,1]$
$$
\|(1-\lambda) x+\lambda y\| \leq(1-\lambda)\|x\|+\lambda\|y\|<r .
$$
It follows that $(1-\lambda) x+\lambda y \in B(0, r)$, which was to be proved.

Lemma 2.15 (The Hadamard lemma) Let $f(t, x)$ be a continuous mapping from $\Omega$ to $\mathbb{R}^{l}$ where $\Omega$ is an open subset of $\mathbb{R}^{n+1}$ such that, for any $t \in \mathbb{R}$, the set
$$
\Omega_{t}=\left\{x \in \mathbb{R}^{n}:(t, x) \in \Omega\right\}
$$
is convex. Assume that $f_{x}(t, x)$ exists and is also continuous in $\Omega$. Consider the domain
$$
\begin{aligned}
\Omega^{\prime} & =\left\{(t, x, y) \in \mathbb{R}^{2 n+1}: t \in \mathbb{R}, x, y \in \Omega_{t}\right\} \\
& =\left\{(t, x, y) \in \mathbb{R}^{2 n+1}:(t, x) \text { and }(t, y) \in \Omega\right\} .
\end{aligned}
$$
Then there exists a continuous mapping $\varphi(t, x, y): \Omega^{\prime} \rightarrow \mathbb{R}^{l \times n}$ such that the following identity holds:
$$
f(t, y)-f(t, x)=\varphi(t, x, y)(y-x)
$$
for all $(t, x, y) \in \Omega^{\prime}$ (here $\varphi(t, x, y)(y-x)$ is the product of the $l \times n$ matrix and the column-vector). Furthermore, we have for all $(t, x) \in \Omega$ the identity
$$
\varphi(t, x, x)=f_{x}(t, x) .
$$

Remark. The variable $t$ can be multi-dimensional, and the proof goes through without changes.

Since $f(t, x)$ is continuously differentiable in $x$, we have
$$
f(t, y)-f(t, x)=f_{x}(t, x)(y-x)+o(\|y-x\|) \text { as } y \rightarrow x .
$$
The point of the above Lemma is that the term $o(\|y-x\|)$ can be eliminated if one replaces $f_{x}(t, x)$ by a continuous function $\varphi(t, x, y)$.

Example. Consider some simple examples of functions $f(x)$ with $n=l=1$ and without dependence on $t$. Say, if $f(x)=x^{2}$ then we have
$$
f(y)-f(x)=(y+x)(y-x),
$$
so that $\varphi(x, y)=y+x$. In particular, $\varphi(x, x)=2 x=f^{\prime}(x)$. A similar formula holds for $f(x)=x^{k}$ with any $k \in \mathbb{N}$:
$$
f(y)-f(x)=\left(x^{k-1}+x^{k-2} y+\ldots+y^{k-1}\right)(y-x) .
$$
For any continuously differentiable function $f(x)$, one can define $\varphi(x, y)$ as follows:
$$
\varphi(x, y)= \begin{cases}\frac{f(y)-f(x)}{y-x}, & y \neq x, \\ f^{\prime}(x), & y=x .\end{cases}
$$
It is obviously continuous in $(x, y)$ for $x \neq y$, and it is continuous at $(x, x)$ because if $\left(x_{k}, y_{k}\right) \rightarrow(x, x)$ as $k \rightarrow \infty$ then
$$
\frac{f\left(y_{k}\right)-f\left(x_{k}\right)}{y_{k}-x_{k}}=f^{\prime}\left(\xi_{k}\right),
$$
where $\xi_{k} \in\left(x_{k}, y_{k}\right)$, which implies that $\xi_{k} \rightarrow x$ and hence $f^{\prime}\left(\xi_{k}\right) \rightarrow f^{\prime}(x)$, where we have used the continuity of the derivative $f^{\prime}(x)$. Clearly, this argument does not work in the case $n>1$ since one cannot divide by $y-x$. In the general case, we use a different approach.

Proof of Lemma 2.15. It suffices to prove this lemma for each component $f_{i}$ separately. Hence, we can assume that $l=1$, so that $\varphi$ is a row $\left(\varphi_{1}, \ldots, \varphi_{n}\right)$. We need to prove the existence of $n$ real-valued continuous functions $\varphi_{1}, \ldots, \varphi_{n}$ of $(t, x, y)$ such that the following identity holds:
$$
f(t, y)-f(t, x)=\sum_{i=1}^{n} \varphi_{i}(t, x, y)\left(y_{i}-x_{i}\right) .
$$
Fix a point $(t, x, y) \in \Omega^{\prime}$ and consider the function
$$
F(\lambda)=f(t, x+\lambda(y-x))
$$
on the interval $\lambda \in[0,1]$.
Since $x, y \in \Omega_{t}$ and $\Omega_{t}$ is convex, the point $x+\lambda(y-x)$ belongs to $\Omega_{t}$. Therefore, $(t, x+\lambda(y-x)) \in \Omega$ and the function $F(\lambda)$ is indeed defined for all $\lambda \in[0,1]$. Clearly, $F(0)=f(t, x)$, $F(1)=f(t, y)$. By the chain rule, $F(\lambda)$ is continuously differentiable and
$$
F^{\prime}(\lambda)=\sum_{i=1}^{n} f_{x_{i}}(t, x+\lambda(y-x))\left(y_{i}-x_{i}\right) .
$$
By the fundamental theorem of calculus, we obtain
$$
\begin{aligned}
f(t, y)-f(t, x) & =F(1)-F(0) \\
& =\int_{0}^{1} F^{\prime}(\lambda) d \lambda \\
& =\sum_{i=1}^{n} \int_{0}^{1} f_{x_{i}}(t, x+\lambda(y-x))\left(y_{i}-x_{i}\right) d \lambda \\
& =\sum_{i=1}^{n} \varphi_{i}(t, x, y)\left(y_{i}-x_{i}\right),
\end{aligned}
$$
where
$$
\varphi_{i}(t, x, y)=\int_{0}^{1} f_{x_{i}}(t, x+\lambda(y-x)) d \lambda .
$$
We are left to verify that $\varphi_{i}$ is continuous. Observe first that the domain $\Omega^{\prime}$ of $\varphi_{i}$ is an open subset of $\mathbb{R}^{2 n+1}$. Indeed, if $(t, x, y) \in \Omega^{\prime}$ then $(t, x)$ and $(t, y) \in \Omega$, which implies by the openness of $\Omega$ that there is $\varepsilon>0$ such that the balls $B((t, x), \varepsilon)$ and $B((t, y), \varepsilon)$ in $\mathbb{R}^{n+1}$ are contained in $\Omega$. Assuming the norm in all spaces in question is the $\infty$-norm, we obtain that $B((t, x, y), \varepsilon) \subset \Omega^{\prime}$. The continuity of $\varphi_{i}$ follows from the following general statement.

Lemma 2.16 Let $f(\lambda, u)$ be a continuous real-valued function on $[a, b] \times U$ where $U$ is an open subset of $\mathbb{R}^{k}$, $\lambda \in[a, b]$ and $u \in U$. Then the function
$$
\varphi(u)=\int_{a}^{b} f(\lambda, u) d \lambda
$$
is continuous in $u \in U$.

Proof of Lemma 2.16. Let $\left\{u_{k}\right\}_{k=1}^{\infty}$ be a sequence in $U$ that converges to some $u \in U$. Then all $u_{k}$ with large enough index $k$ are contained in a closed ball $\bar{B}(u, \varepsilon) \subset U$. Since $f(\lambda, u)$ is continuous in $[a, b] \times U$, it is uniformly continuous on any compact set in this domain, in particular, in $[a, b] \times \bar{B}(u, \varepsilon)$. Hence, the convergence
$$
f\left(\lambda, u_{k}\right) \rightarrow f(\lambda, u) \text { as } k \rightarrow \infty
$$
is uniform in $\lambda \in[a, b]$. Since integration and uniform convergence are interchangeable, we conclude that $\varphi\left(u_{k}\right) \rightarrow \varphi(u)$, which proves the continuity of $\varphi$.

The proof of Lemma 2.15 is finished as follows. Consider $f_{x_{i}}(t, x+\lambda(y-x))$ as a function of $(\lambda, t, x, y) \in[0,1] \times \Omega^{\prime}$. This function is continuous in $(\lambda, t, x, y)$, which implies by Lemma 2.16 that also $\varphi_{i}(t, x, y)$ is continuous in $(t, x, y)$. Finally, if $x=y$ then $f_{x_{i}}(t, x+\lambda(y-x))=f_{x_{i}}(t, x)$, which implies by (2.58) that
$$
\varphi_{i}(t, x, x)=f_{x_{i}}(t, x)
$$
and, hence, $\varphi(t, x, x)=f_{x}(t, x)$, that is, (2.57).
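In dimension one the construction (2.58) can be carried out explicitly by a computer algebra system. A small sketch (assuming Python with SymPy; the test function $f(x)=x^{3}$ and the absence of the $t$-variable are illustrative assumptions, not taken from the notes):

```python
import sympy as sp

x, y, lam, u = sp.symbols('x y lam u')
f = u**3                                   # test function f(u) = u^3

# Hadamard's construction: phi(x, y) = integral_0^1 f'(x + lam*(y - x)) dlam
phi = sp.integrate(sp.diff(f, u).subs(u, x + lam*(y - x)), (lam, 0, 1))

print(sp.factor(phi))                      # x**2 + x*y + y**2
print(sp.simplify(phi*(y - x) - (f.subs(u, y) - f.subs(u, x))))   # 0, i.e. (2.56)
print(phi.subs(y, x))                      # 3*x**2, i.e. f'(x), as in (2.57)
```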
Now we are in a position to prove Theorem 2.14.

Proof of Theorem 2.14. In the main part of the proof, we show that the partial derivative $\partial_{s_{i}} x$ exists. Since this can be done separately for any component $s_{i}$, in this part we can and will assume that $s$ is one-dimensional (that is, $m=1$). Fix some $\left(t_{*}, s_{*}\right) \in U$ and let us prove that $\partial_{s} x$ exists at this point. Since the differentiability is a local property, we can restrict the domain of the variables $(t, s)$ as follows. Choose $[\alpha, \beta]$ to be any interval in $I_{s_{*}}$ containing both $t_{0}$ and $t_{*}$. By Theorem 2.11, for any $\varepsilon>0$ there is $\delta>0$ such that the rectangle $(\alpha, \beta) \times\left(s_{*}-\delta, s_{*}+\delta\right)$ is contained in $U$ and, for all $s \in\left(s_{*}-\delta, s_{*}+\delta\right)$,
$$
\sup _{t \in(\alpha, \beta)}\left\|x(t, s)-x\left(t, s_{*}\right)\right\|<\varepsilon .
$$
Besides, by the openness of $\Omega$, $\varepsilon$ and $\delta$ can be chosen so small that the following condition is satisfied:
$$
\widetilde{\Omega}:=\left\{(t, x, s) \in \mathbb{R}^{n+m+1}: \alpha<t<\beta,\left\|x-x\left(t, s_{*}\right)\right\|<\varepsilon,\left|s-s_{*}\right|<\delta\right\} \subset \Omega
$$
(cf. the proof of Theorem 2.11). In particular, for all $t \in(\alpha, \beta)$ and $s \in\left(s_{*}-\delta, s_{*}+\delta\right)$, the solution $x(t, s)$ is defined and $(t, x(t, s), s) \in \widetilde{\Omega}$. In what follows, we restrict the domain of the variables $(t, x, s)$ to $\widetilde{\Omega}$. Note that this domain is convex with respect to the variable $(x, s)$, for any fixed $t$. Indeed, for a fixed $t$, $x$ varies in the ball $B\left(x\left(t, s_{*}\right), \varepsilon\right)$ and $s$ varies in the interval $\left(s_{*}-\delta, s_{*}+\delta\right)$, which are both convex sets. Applying the Hadamard lemma to the function $f(t, x, s)$ in this domain and using the fact that $f$ is continuously differentiable with respect to $(x, s)$, we obtain the identity
$$
f(t, y, s)-f(t, x, \sigma)=\varphi(t, x, \sigma, y, s)(y-x)+\psi(t, x, \sigma, y, s)(s-\sigma),
$$
where $\varphi$ and $\psi$ are continuous functions on the appropriate domains. In particular, substituting $\sigma=s_{*}$, $x=x\left(t, s_{*}\right)$ and $y=x(t, s)$, we obtain
$$
\begin{aligned}
f(t, x(t, s), s)-f\left(t, x\left(t, s_{*}\right), s_{*}\right)= & \varphi\left(t, x\left(t, s_{*}\right), s_{*}, x(t, s), s\right)\left(x(t, s)-x\left(t, s_{*}\right)\right) \\
& +\psi\left(t, x\left(t, s_{*}\right), s_{*}, x(t, s), s\right)\left(s-s_{*}\right) \\
= & a(t, s)\left(x(t, s)-x\left(t, s_{*}\right)\right)+b(t, s)\left(s-s_{*}\right),
\end{aligned}
$$
where the functions
$$
a(t, s)=\varphi\left(t, x\left(t, s_{*}\right), s_{*}, x(t, s), s\right) \text { and } b(t, s)=\psi\left(t, x\left(t, s_{*}\right), s_{*}, x(t, s), s\right)
$$
are continuous in $(t, s) \in(\alpha, \beta) \times\left(s_{*}-\delta, s_{*}+\delta\right)$ (the dependence on $s_{*}$ is suppressed because $s_{*}$ is fixed).

Set for any $s \in\left(s_{*}-\delta, s_{*}+\delta\right) \backslash\left\{s_{*}\right\}$
$$
z(t, s)=\frac{x(t, s)-x\left(t, s_{*}\right)}{s-s_{*}}
$$
and observe that
$$
\begin{aligned}
z^{\prime} & =\frac{x^{\prime}(t, s)-x^{\prime}\left(t, s_{*}\right)}{s-s_{*}}=\frac{f(t, x(t, s), s)-f\left(t, x\left(t, s_{*}\right), s_{*}\right)}{s-s_{*}} \\
& =a(t, s) z+b(t, s) .
\end{aligned}
$$
Note also that $z\left(t_{0}, s\right)=0$ because both $x(t, s)$ and $x\left(t, s_{*}\right)$ satisfy the same initial condition. Hence, the function $z(t, s)$ solves, for any fixed $s \in\left(s_{*}-\delta, s_{*}+\delta\right) \backslash\left\{s_{*}\right\}$, the IVP
$$
\left\{\begin{array}{l}
z^{\prime}=a(t, s) z+b(t, s) \\
z\left(t_{0}, s\right)=0 .
\end{array}\right.
$$
Since this ODE is linear and the functions $a$ and $b$ are continuous in $(t, s) \in(\alpha, \beta) \times\left(s_{*}-\delta, s_{*}+\delta\right)$, we conclude by Theorem 2.13 that the solution to this IVP exists for all $s \in\left(s_{*}-\delta, s_{*}+\delta\right)$ and $t \in(\alpha, \beta)$ and, by Theorem 2.11, the solution is continuous in $(t, s) \in(\alpha, \beta) \times\left(s_{*}-\delta, s_{*}+\delta\right)$. Hence, we can define $z(t, s)$ also at $s=s_{*}$ as the solution of the IVP (2.60). In particular, using the continuity of $z(t, s)$ in $s$, we obtain
$$
\lim _{s \rightarrow s_{*}} z(t, s)=z\left(t, s_{*}\right),
$$
that is,
$$
\partial_{s} x\left(t, s_{*}\right)=\lim _{s \rightarrow s_{*}} \frac{x(t, s)-x\left(t, s_{*}\right)}{s-s_{*}}=\lim _{s \rightarrow s_{*}} z(t, s)=z\left(t, s_{*}\right) .
$$
Hence, the derivative $y(t)=\partial_{s} x\left(t, s_{*}\right)$ exists and is equal to $z\left(t, s_{*}\right)$, that is, $y(t)$ satisfies the IVP
$$
\left\{\begin{array}{l}
y^{\prime}=a\left(t, s_{*}\right) y+b\left(t, s_{*}\right) \\
y\left(t_{0}\right)=0 .
\end{array}\right.
$$
Note that by (2.59) and Lemma 2.15
$$
a\left(t, s_{*}\right)=\varphi\left(t, x\left(t, s_{*}\right), s_{*}, x\left(t, s_{*}\right), s_{*}\right)=f_{x}\left(t, x\left(t, s_{*}\right), s_{*}\right)
$$
and
$$
b\left(t, s_{*}\right)=\psi\left(t, x\left(t, s_{*}\right), s_{*}, x\left(t, s_{*}\right), s_{*}\right)=f_{s}\left(t, x\left(t, s_{*}\right), s_{*}\right) .
$$
Hence, we obtain that $y(t)$ satisfies the variational equation (2.52).

To finish the proof, we have to verify that $x(t, s)$ is continuously differentiable in $(t, s)$. Here we come back to the general case $s \in \mathbb{R}^{m}$. The derivative $\partial_{s} x=y$ satisfies the IVP (2.52) and, hence, is continuous in $(t, s)$ by Theorem 2.11. Finally, for the derivative $\partial_{t} x$ we have the identity
$$
\partial_{t} x=f(t, x(t, s), s),
$$
which implies that $\partial_{t} x$ is also continuous in $(t, s)$. Hence, $x$ is continuously differentiable in $(t, s)$.

Remark. It follows from (2.61) that $\partial_{t} x$ is differentiable in $s$ and, by the chain rule,
$$
\partial_{s}\left(\partial_{t} x\right)=\partial_{s}[f(t, x(t, s), s)]=f_{x}(t, x(t, s), s) \partial_{s} x+f_{s}(t, x(t, s), s) .
$$
On the other hand, it follows from (2.52) that
$$
\partial_{t}\left(\partial_{s} x\right)=\partial_{t} y=f_{x}(t, x(t, s), s) \partial_{s} x+f_{s}(t, x(t, s), s),
$$
whence we conclude that
$$
\partial_{s} \partial_{t} x=\partial_{t} \partial_{s} x .
$$
Hence, the derivatives $\partial_{s}$ and $\partial_{t}$ commute ${ }^{6}$ on $x$. As we have seen above, if one knew the identity (2.64) a priori then the derivation of the variational equation (2.52) would have been easy. However, in the present proof the identity (2.64) comes after the variational equation.

${ }^{6}$ The equality of the mixed derivatives can be concluded by a theorem from Analysis II if one knows that both $\partial_{s} \partial_{t} x$ and $\partial_{t} \partial_{s} x$ are continuous. Their continuity follows from the identities (2.62) and (2.63), which prove at the same time also their equality.

Theorem 2.17 Under the conditions of Theorem 2.14, assume that, for some $k \in \mathbb{N}$, $f(t, x, s) \in C^{k}(x, s)$. Then the maximal solution $x(t, s)$ belongs to $C^{k}(s)$. Moreover, for any multiindex $\alpha$ of the order $|\alpha| \leq k$ and of the dimension $m$ (the same as that of $s$), we have
$$
\partial_{t} \partial_{s}^{\alpha} x=\partial_{s}^{\alpha} \partial_{t} x .
$$
Here $\alpha=\left(\alpha_{1}, \ldots, \alpha_{m}\right)$ where $\alpha_{i}$ are non-negative integers, $|\alpha|=\alpha_{1}+\ldots+\alpha_{m}$, and
$$
\partial_{s}^{\alpha}=\frac{\partial^{|\alpha|}}{\partial s_{1}^{\alpha_{1}} \ldots \partial s_{m}^{\alpha_{m}}} .
$$
## Linear equations and systems

A linear (system of) ODE of the first order is a (vector) ODE of the form
$$
x^{\prime}=A(t) x+B(t)
$$
where $A(t): I \rightarrow \mathbb{R}^{n \times n}$ and $B(t): I \rightarrow \mathbb{R}^{n}$, with $I$ an open interval in $\mathbb{R}$. If $A(t)$ and $B(t)$ are continuous in $t$ then, for any $t_{0} \in I$ and $x_{0} \in \mathbb{R}^{n}$, the IVP
$$
\left\{\begin{array}{l}
x^{\prime}=A(t) x+B(t) \\
x\left(t_{0}\right)=x_{0}
\end{array}\right.
$$
has a unique solution defined on the full interval $I$ (cf. Theorem 2.13). In the sequel, we always assume that $A(t)$ and $B(t)$ are continuous on $I$ and consider only solutions defined on the entire interval $I$.

### Space of solutions of homogeneous systems

The linear ODE is called homogeneous if $B(t) \equiv 0$, and inhomogeneous otherwise. In this Section, we consider a homogeneous equation, that is, the equation $x^{\prime}=A(t) x$. Denote by $\mathcal{A}$ the set of all solutions of this ODE.

Theorem 3.1 $\mathcal{A}$ is a linear space and $\operatorname{dim} \mathcal{A}=n$. Consequently, if $x_{1}, \ldots, x_{n}$ are $n$ linearly independent solutions to $x^{\prime}=A(t) x$ then the general solution has the form
$$
x(t)=C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t),
$$
where $C_{1}, \ldots, C_{n}$ are arbitrary constants.

Proof. The set of all functions $I \rightarrow \mathbb{R}^{n}$ is a linear space with respect to the operations of addition and multiplication by a constant. The zero element is the function that is constant $0$ on $I$. We need to prove that the set of solutions $\mathcal{A}$ is a linear subspace of the space of all functions. It suffices to show that $\mathcal{A}$ is closed under the operations of addition and multiplication by a constant. If $x$ and $y \in \mathcal{A}$ then also $x+y \in \mathcal{A}$ because
$$
(x+y)^{\prime}=x^{\prime}+y^{\prime}=A x+A y=A(x+y),
$$
and similarly $\lambda x \in \mathcal{A}$ for any $\lambda \in \mathbb{R}$. Hence, $\mathcal{A}$ is a linear space.

Fix $t_{0} \in I$ and consider the mapping $\Phi: \mathcal{A} \rightarrow \mathbb{R}^{n}$ given by $\Phi(x)=x\left(t_{0}\right)$. This mapping is obviously linear. It is surjective since for any $v \in \mathbb{R}^{n}$ there is a solution $x(t)$ with the initial condition $x\left(t_{0}\right)=v$. Also, this mapping is injective because $x\left(t_{0}\right)=0$ implies $x(t) \equiv 0$ by the uniqueness of the solution. Hence, $\Phi$ is a linear isomorphism between $\mathcal{A}$ and $\mathbb{R}^{n}$, whence it follows that $\operatorname{dim} \mathcal{A}=\operatorname{dim} \mathbb{R}^{n}=n$. Consequently, if $x_{1}, \ldots, x_{n}$ are linearly independent functions from $\mathcal{A}$ then they form a basis in $\mathcal{A}$. It follows that any element of $\mathcal{A}$ is a linear combination of $x_{1}, \ldots, x_{n}$, that is, any solution to $x^{\prime}=A(t) x$ has the form (3.2).

Consider now a scalar linear homogeneous ODE of the order $n$, that is, the ODE
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=0,
$$
where all functions $a_{k}(t)$ are defined on an open interval $I \subset \mathbb{R}$ and are continuous on $I$. As we know, such an ODE can be reduced to a vector ODE of the 1st order as follows.
Consider the vector function
$$
\mathbf{x}(t)=\left(x(t), x^{\prime}(t), \ldots, x^{(n-1)}(t)\right),
$$
so that
$$
\mathbf{x}_{1}=x, \quad \mathbf{x}_{2}=x^{\prime}, \quad \ldots, \quad \mathbf{x}_{n-1}=x^{(n-2)}, \quad \mathbf{x}_{n}=x^{(n-1)} .
$$
Then (3.3) is equivalent to the system
$$
\begin{aligned}
\mathbf{x}_{1}^{\prime} & =\mathbf{x}_{2} \\
\mathbf{x}_{2}^{\prime} & =\mathbf{x}_{3} \\
& \ldots \\
\mathbf{x}_{n-1}^{\prime} & =\mathbf{x}_{n} \\
\mathbf{x}_{n}^{\prime} & =-a_{1} \mathbf{x}_{n}-a_{2} \mathbf{x}_{n-1}-\ldots-a_{n} \mathbf{x}_{1},
\end{aligned}
$$
that is,
$$
\mathbf{x}^{\prime}=A(t) \mathbf{x},
$$
where
$$
A=\left(\begin{array}{ccccc}
0 & 1 & 0 & \ldots & 0 \\
0 & 0 & 1 & \ldots & 0 \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
0 & 0 & 0 & \ldots & 1 \\
-a_{n} & -a_{n-1} & -a_{n-2} & \ldots & -a_{1}
\end{array}\right) .
$$
Since $A(t)$ is continuous in $t$ on $I$, any solution $\mathbf{x}(t)$ of (3.5) is defined on the entire interval $I$ and, hence, the same is true for any solution $x(t)$ of (3.3). Denote now by $\widetilde{\mathcal{A}}$ the set of all solutions of (3.3) defined on $I$.

Corollary. $\widetilde{\mathcal{A}}$ is a linear space and $\operatorname{dim} \widetilde{\mathcal{A}}=n$. Consequently, if $x_{1}, \ldots, x_{n}$ are $n$ linearly independent solutions to $x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=0$ then the general solution has the form
$$
x(t)=C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t),
$$
where $C_{1}, \ldots, C_{n}$ are arbitrary constants.

Proof. The fact that $\widetilde{\mathcal{A}}$ is a linear space is obvious (cf. the proof of Theorem 3.1). The relation (3.4) defines a linear mapping from $\widetilde{\mathcal{A}}$ to $\mathcal{A}$. This mapping is obviously injective (if $\mathbf{x}(t) \equiv 0$ then $x(t) \equiv 0$) and surjective, because any solution $\mathbf{x}(t)$ of (3.5) gives back a solution $x(t)$ of (3.3). Hence, $\widetilde{\mathcal{A}}$ and $\mathcal{A}$ are linearly isomorphic, whence $\operatorname{dim} \widetilde{\mathcal{A}}=\operatorname{dim} \mathcal{A}=n$.

### Linear homogeneous ODEs with constant coefficients

Consider the methods of finding $n$ independent solutions to the ODE
$$
x^{(n)}+a_{1} x^{(n-1)}+\ldots+a_{n} x=0,
$$
where $a_{1}, \ldots, a_{n}$ are real constants. It will be convenient to obtain the complex valued general solution $x(t)$ and then to extract the real valued general solution. The idea is very simple. Let us look for a solution in the form $x(t)=e^{\lambda t}$ where $\lambda$ is a complex number to be determined. Substituting this function into (3.6) and noticing that $x^{(k)}=\lambda^{k} e^{\lambda t}$, we obtain the equation for $\lambda$ (after cancellation by $e^{\lambda t}$):
$$
\lambda^{n}+a_{1} \lambda^{n-1}+\ldots+a_{n}=0 .
$$
This equation is called the characteristic equation of (3.6) and the polynomial $P(\lambda)=\lambda^{n}+a_{1} \lambda^{n-1}+\ldots+a_{n}$ is called the characteristic polynomial of (3.6). Hence, if $\lambda$ is a root of the characteristic polynomial then the function $e^{\lambda t}$ solves (3.6). We try to obtain in this way $n$ independent solutions.

Theorem 3.2 If the characteristic polynomial $P(\lambda)$ of (3.6) has $n$ distinct complex roots $\lambda_{1}, \ldots, \lambda_{n}$, then the $n$ functions $e^{\lambda_{1} t}, \ldots, e^{\lambda_{n} t}$ are linearly independent solutions of (3.6).
Consequently, the general complex solution of (3.6) is given by
$$
x(t)=C_{1} e^{\lambda_{1} t}+\ldots+C_{n} e^{\lambda_{n} t},
$$
where $C_{j}$ are arbitrary complex numbers. If $\lambda=\alpha+i \beta$ is a non-real root of $P(\lambda)$ then $\bar{\lambda}=\alpha-i \beta$ is also a root, and the functions $e^{\lambda t}, e^{\bar{\lambda} t}$ in the above sequence can be replaced by the real-valued functions $e^{\alpha t} \cos \beta t$, $e^{\alpha t} \sin \beta t$.

Proof. Let us prove by induction on $n$ that the functions $e^{\lambda_{1} t}, \ldots, e^{\lambda_{n} t}$ are linearly independent provided $\lambda_{1}, \ldots, \lambda_{n}$ are distinct complex numbers. If $n=1$ then the claim is trivial, just because the exponential function is not identically zero.

Inductive step from $n-1$ to $n$: Assume that, for some complex constants $C_{1}, \ldots, C_{n}$ and all $t \in \mathbb{R}$,
$$
C_{1} e^{\lambda_{1} t}+\ldots+C_{n} e^{\lambda_{n} t}=0,
$$
and prove that $C_{1}=\ldots=C_{n}=0$. Dividing (3.7) by $e^{\lambda_{n} t}$ and setting $\mu_{j}=\lambda_{j}-\lambda_{n}$, we obtain
$$
C_{1} e^{\mu_{1} t}+\ldots+C_{n-1} e^{\mu_{n-1} t}+C_{n}=0 .
$$
Differentiating in $t$, we obtain
$$
C_{1} \mu_{1} e^{\mu_{1} t}+\ldots+C_{n-1} \mu_{n-1} e^{\mu_{n-1} t}=0 .
$$
By the inductive hypothesis, we conclude that $C_{j} \mu_{j}=0$, whence, since $\mu_{j} \neq 0$, we conclude that $C_{j}=0$ for all $j=1, \ldots, n-1$. Substituting into (3.7), we obtain also $C_{n}=0$.

Since complex conjugation commutes with addition and multiplication of numbers, the identity $P(\lambda)=0$ implies $P(\bar{\lambda})=0$ (since $a_{k}$ are real, we have $\bar{a}_{k}=a_{k}$). Next, we have
$$
e^{\lambda t}=e^{\alpha t}(\cos \beta t+i \sin \beta t) \text { and } e^{\bar{\lambda} t}=e^{\alpha t}(\cos \beta t-i \sin \beta t),
$$
so that $e^{\lambda t}$ and $e^{\bar{\lambda} t}$ are linear combinations of $e^{\alpha t} \cos \beta t$ and $e^{\alpha t} \sin \beta t$. The converse is true also, because
$$
e^{\alpha t} \cos \beta t=\frac{1}{2}\left(e^{\lambda t}+e^{\bar{\lambda} t}\right) \text { and } e^{\alpha t} \sin \beta t=\frac{1}{2 i}\left(e^{\lambda t}-e^{\bar{\lambda} t}\right) .
$$
Hence, replacing in the sequence $e^{\lambda_{1} t}, \ldots, e^{\lambda_{n} t}$ the functions $e^{\lambda t}$ and $e^{\bar{\lambda} t}$ by $e^{\alpha t} \cos \beta t$ and $e^{\alpha t} \sin \beta t$ preserves the linear independence of the sequence.

Example. Consider the ODE
$$
x^{\prime \prime}-3 x^{\prime}+2 x=0 .
$$
The characteristic polynomial is $P(\lambda)=\lambda^{2}-3 \lambda+2$, which has the roots $\lambda_{1}=2$ and $\lambda_{2}=1$. Hence, the linearly independent solutions are $e^{2 t}$ and $e^{t}$, and the general solution is $C_{1} e^{2 t}+C_{2} e^{t}$.

Example. Consider the ODE $x^{\prime \prime}+x=0$. The characteristic polynomial is $P(\lambda)=\lambda^{2}+1$, which has the complex roots $\lambda_{1}=i$ and $\lambda_{2}=-i$. Hence, we obtain the complex solutions $e^{i t}$ and $e^{-i t}$. Out of them, we can get also real linearly independent solutions. Indeed, just replace these two functions by their two linear combinations (which corresponds to a change of the basis in the space of solutions)
$$
\frac{e^{i t}+e^{-i t}}{2}=\cos t \text { and } \frac{e^{i t}-e^{-i t}}{2 i}=\sin t .
$$
Hence, we conclude that $\cos t$ and $\sin t$ are linearly independent solutions, and the general solution is $C_{1} \cos t+C_{2} \sin t$.

Example. Consider the ODE $x^{\prime \prime \prime}-x=0$. The characteristic polynomial is $P(\lambda)=\lambda^{3}-1=(\lambda-1)\left(\lambda^{2}+\lambda+1\right)$, which has the roots $\lambda_{1}=1$ and $\lambda_{2,3}=-\frac{1}{2} \pm i \frac{\sqrt{3}}{2}$. Hence, we obtain the three linearly independent real solutions
$$
e^{t}, \quad e^{-\frac{1}{2} t} \cos \frac{\sqrt{3}}{2} t, \quad e^{-\frac{1}{2} t} \sin \frac{\sqrt{3}}{2} t,
$$
and the real general solution is
$$
C_{1} e^{t}+e^{-\frac{1}{2} t}\left(C_{2} \cos \frac{\sqrt{3}}{2} t+C_{3} \sin \frac{\sqrt{3}}{2} t\right) .
$$
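For constant-coefficient equations the reduction (3.5) ties this computation to linear algebra: the characteristic polynomial of the ODE is exactly the characteristic polynomial of the companion matrix $A$. A quick numerical cross-check of the last example (a sketch assuming Python with NumPy, not part of the notes):

```python
import numpy as np

# companion matrix (3.5) for x''' - x = 0, i.e. a1 = a2 = 0, a3 = -1
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

print(np.linalg.eigvals(A))           # 1 and -1/2 +- i*sqrt(3)/2
print(np.roots([1., 0., 0., -1.]))    # the same roots of P(lambda) = lambda^3 - 1
```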
What to do when $P(\lambda)$ has fewer than $n$ distinct roots? Recall the fundamental theorem of algebra (which is normally proved in a course of Complex Analysis): any polynomial $P(\lambda)$ of degree $n$ with complex coefficients has exactly $n$ complex roots counted with multiplicity. What is the multiplicity of a root? If $\lambda_{0}$ is a root of $P(\lambda)$ then its multiplicity is the maximal natural number $m$ such that $P(\lambda)$ is divisible by $\left(\lambda-\lambda_{0}\right)^{m}$, that is, the following identity holds:
$$
P(\lambda)=\left(\lambda-\lambda_{0}\right)^{m} Q(\lambda),
$$
where $Q(\lambda)$ is another polynomial of $\lambda$. Note that $P(\lambda)$ is always divisible by $\lambda-\lambda_{0}$, so that $m \geq 1$. The fundamental theorem of algebra can be stated as follows: if $\lambda_{1}, \ldots, \lambda_{r}$ are all the distinct roots of $P(\lambda)$ and the multiplicity of $\lambda_{j}$ is $m_{j}$ then
$$
m_{1}+\ldots+m_{r}=n
$$
and, hence,
$$
P(\lambda)=\left(\lambda-\lambda_{1}\right)^{m_{1}} \ldots\left(\lambda-\lambda_{r}\right)^{m_{r}} .
$$
In order to obtain $n$ independent solutions to the ODE (3.6), each root $\lambda_{j}$ should give rise to $m_{j}$ independent solutions.

Theorem 3.3 Let $\lambda_{1}, \ldots, \lambda_{r}$ be all the distinct complex roots of the characteristic polynomial $P(\lambda)$ with the multiplicities $m_{1}, \ldots, m_{r}$, respectively. Then the following $n$ functions are linearly independent solutions of (3.6):
$$
\left\{t^{k} e^{\lambda_{j} t}\right\}, \quad j=1, \ldots, r, \quad k=0, \ldots, m_{j}-1 .
$$
Consequently, the general solution of (3.6) is
$$
x(t)=\sum_{j=1}^{r} \sum_{k=0}^{m_{j}-1} C_{j k} t^{k} e^{\lambda_{j} t},
$$
where $C_{j k}$ are arbitrary complex constants. If $\lambda=\alpha+i \beta$ is a non-real root of $P$ of multiplicity $m$, then $\bar{\lambda}=\alpha-i \beta$ is also a root of the same multiplicity $m$, and the functions $t^{k} e^{\lambda t}, t^{k} e^{\bar{\lambda} t}$ in the sequence (3.10) can be replaced by the real-valued functions $t^{k} e^{\alpha t} \cos \beta t, t^{k} e^{\alpha t} \sin \beta t$, for any $k=0, \ldots, m-1$.

Remark. Setting
$$
P_{j}(t)=\sum_{k=0}^{m_{j}-1} C_{j k} t^{k},
$$
we obtain from (3.11)
$$
x(t)=\sum_{j=1}^{r} P_{j}(t) e^{\lambda_{j} t} .
$$
Hence, any solution to (3.6) has the form (3.12) where $P_{j}$ is an arbitrary polynomial of $t$ of degree at most $m_{j}-1$.

Example. Consider the ODE $x^{\prime \prime}-2 x^{\prime}+x=0$, which has the characteristic polynomial
$$
P(\lambda)=\lambda^{2}-2 \lambda+1=(\lambda-1)^{2} .
$$
Obviously, $\lambda=1$ is the root of multiplicity 2. Hence, by Theorem 3.3, the functions $e^{t}$ and $t e^{t}$ are linearly independent solutions, and the general solution is
$$
x(t)=\left(C_{1}+C_{2} t\right) e^{t} .
$$

Example. Consider the ODE $x^{V}+x^{I V}-2 x^{\prime \prime \prime}-2 x^{\prime \prime}+x^{\prime}+x=0$. The characteristic polynomial is
$$
P(\lambda)=\lambda^{5}+\lambda^{4}-2 \lambda^{3}-2 \lambda^{2}+\lambda+1=(\lambda-1)^{2}(\lambda+1)^{3} .
$$
Hence, the roots are $\lambda_{1}=1$ with $m_{1}=2$ and $\lambda_{2}=-1$ with $m_{2}=3$. We conclude that the following 5 functions are linearly independent solutions:
$$
e^{t}, \quad t e^{t}, \quad e^{-t}, \quad t e^{-t}, \quad t^{2} e^{-t} .
$$
The general solution is
$$
x(t)=\left(C_{1}+C_{2} t\right) e^{t}+\left(C_{3}+C_{4} t+C_{5} t^{2}\right) e^{-t} .
$$
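Claims like this are easy to sanity-check symbolically. A small sketch (assuming Python with SymPy, not part of the notes) substitutes each of the five functions into the left hand side of the ODE:

```python
import sympy as sp

t = sp.symbols('t')
solutions = [sp.exp(t), t*sp.exp(t), sp.exp(-t), t*sp.exp(-t), t**2*sp.exp(-t)]

for x in solutions:
    lhs = (x.diff(t, 5) + x.diff(t, 4) - 2*x.diff(t, 3)
           - 2*x.diff(t, 2) + x.diff(t) + x)
    print(sp.simplify(lhs))   # prints 0 five times
```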
Example. Consider the ODE $x^{V}+2 x^{\prime \prime \prime}+x^{\prime}=0$. Its characteristic polynomial is
$$
P(\lambda)=\lambda^{5}+2 \lambda^{3}+\lambda=\lambda\left(\lambda^{2}+1\right)^{2}=\lambda(\lambda+i)^{2}(\lambda-i)^{2},
$$
and it has the roots $\lambda_{1}=0$, $\lambda_{2}=i$ and $\lambda_{3}=-i$, where $\lambda_{2}$ and $\lambda_{3}$ have multiplicity 2. The following 5 functions are linearly independent solutions:
$$
1, \quad e^{i t}, \quad t e^{i t}, \quad e^{-i t}, \quad t e^{-i t} .
$$
The general complex solution is then
$$
C_{1}+\left(C_{2}+C_{3} t\right) e^{i t}+\left(C_{4}+C_{5} t\right) e^{-i t} .
$$
Replacing in the sequence (3.13) $e^{i t}, e^{-i t}$ by $\cos t, \sin t$ and $t e^{i t}, t e^{-i t}$ by $t \cos t, t \sin t$, we obtain the linearly independent real solutions
$$
1, \quad \cos t, \quad t \cos t, \quad \sin t, \quad t \sin t,
$$
and the general real solution
$$
C_{1}+\left(C_{2}+C_{3} t\right) \cos t+\left(C_{4}+C_{5} t\right) \sin t .
$$

We make some preparation for the proof of Theorem 3.3. Given a polynomial $P(\lambda)=a_{0} \lambda^{n}+a_{1} \lambda^{n-1}+\ldots+a_{n}$ with complex coefficients, associate with it the differential operator
$$
\begin{aligned}
P\left(\frac{d}{d t}\right) & =a_{0}\left(\frac{d}{d t}\right)^{n}+a_{1}\left(\frac{d}{d t}\right)^{n-1}+\ldots+a_{n} \\
& =a_{0} \frac{d^{n}}{d t^{n}}+a_{1} \frac{d^{n-1}}{d t^{n-1}}+\ldots+a_{n},
\end{aligned}
$$
where we use the convention that the "product" of differential operators is the composition. That is, the operator $P\left(\frac{d}{d t}\right)$ acts on a smooth enough function $f(t)$ by the rule
$$
P\left(\frac{d}{d t}\right) f=a_{0} f^{(n)}+a_{1} f^{(n-1)}+\ldots+a_{n} f
$$
(here the constant term $a_{n}$ is understood as a multiplication operator). For example, the ODE
$$
x^{(n)}+a_{1} x^{(n-1)}+\ldots+a_{n} x=0
$$
can be written shortly in the form
$$
P\left(\frac{d}{d t}\right) x=0,
$$
where $P(\lambda)=\lambda^{n}+a_{1} \lambda^{n-1}+\ldots+a_{n}$ is the characteristic polynomial of (3.14).

Example. Let us prove the following identity:
$$
P\left(\frac{d}{d t}\right) e^{\lambda t}=P(\lambda) e^{\lambda t} .
$$
It suffices to verify it for $P(\lambda)=\lambda^{k}$ and then use the linearity of this identity. For such $P(\lambda)=\lambda^{k}$, we have
$$
P\left(\frac{d}{d t}\right) e^{\lambda t}=\frac{d^{k}}{d t^{k}} e^{\lambda t}=\lambda^{k} e^{\lambda t}=P(\lambda) e^{\lambda t},
$$
which was to be proved.

Lemma 3.4 If $f(t), g(t)$ are $n$ times differentiable functions on an open interval then, for any polynomial $P$ of degree at most $n$, the following identity holds:
$$
P\left(\frac{d}{d t}\right)(f g)=\sum_{j=0}^{n} \frac{1}{j !} f^{(j)} P^{(j)}\left(\frac{d}{d t}\right) g .
$$

Example. Let $P(\lambda)=\lambda^{2}+\lambda+1$. Then $P^{\prime}(\lambda)=2 \lambda+1$, $P^{\prime \prime}=2$, and (3.16) becomes
$$
\begin{aligned}
(f g)^{\prime \prime}+(f g)^{\prime}+f g & =f P\left(\frac{d}{d t}\right) g+f^{\prime} P^{\prime}\left(\frac{d}{d t}\right) g+\frac{1}{2} f^{\prime \prime} P^{\prime \prime}\left(\frac{d}{d t}\right) g \\
& =f\left(g^{\prime \prime}+g^{\prime}+g\right)+f^{\prime}\left(2 g^{\prime}+g\right)+f^{\prime \prime} g .
\end{aligned}
$$
It is an easy exercise to see directly that this identity is correct.
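The check can also be delegated to a computer algebra system. A short sketch (assuming Python with SymPy) compares both sides of the identity for arbitrary smooth $f$ and $g$:

```python
import sympy as sp

t = sp.symbols('t')
f, g = sp.Function('f')(t), sp.Function('g')(t)

lhs = (f*g).diff(t, 2) + (f*g).diff(t) + f*g        # P(d/dt)(fg) for P = l^2 + l + 1
rhs = (f*(g.diff(t, 2) + g.diff(t) + g)             # f * P(d/dt) g
       + f.diff(t)*(2*g.diff(t) + g)                # f' * P'(d/dt) g
       + f.diff(t, 2)*g)                            # (1/2) f'' * P''(d/dt) g
print(sp.simplify(lhs - rhs))                       # 0
```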
Proof. It suffices to prove the identity (3.16) in the case when $P(\lambda)=\lambda^{k}$, $k \leq n$, because then for a general polynomial (3.16) will follow by taking linear combinations of the identities for $\lambda^{k}$. If $P(\lambda)=\lambda^{k}$ then, for $j \leq k$,
$$
P^{(j)}=k(k-1) \ldots(k-j+1) \lambda^{k-j},
$$
and $P^{(j)} \equiv 0$ for $j>k$. Hence,
$$
\begin{aligned}
& P^{(j)}\left(\frac{d}{d t}\right)=k(k-1) \ldots(k-j+1)\left(\frac{d}{d t}\right)^{k-j}, \quad j \leq k, \\
& P^{(j)}\left(\frac{d}{d t}\right)=0, \quad j>k,
\end{aligned}
$$
and (3.16) becomes
$$
(f g)^{(k)}=\sum_{j=0}^{k} \frac{k(k-1) \ldots(k-j+1)}{j !} f^{(j)} g^{(k-j)}=\sum_{j=0}^{k}\left(\begin{array}{c}
k \\
j
\end{array}\right) f^{(j)} g^{(k-j)} .
$$
The latter identity is known from Analysis and is called the Leibniz formula ${ }^{7}$.

${ }^{7}$ If $k=1$ then (3.17) amounts to the familiar product rule
$$
(f g)^{\prime}=f^{\prime} g+f g^{\prime} .
$$
For arbitrary $k \in \mathbb{N}$, (3.17) is proved by induction in $k$.

Lemma 3.5 A complex number $\lambda$ is a root of a polynomial $P$ with the multiplicity $m$ if and only if
$$
P^{(k)}(\lambda)=0 \text { for all } k=0, \ldots, m-1 \text { and } P^{(m)}(\lambda) \neq 0 .
$$

Proof. If $P$ has a root $\lambda$ with multiplicity $m$ then we have the identity for all $z \in \mathbb{C}$
$$
P(z)=(z-\lambda)^{m} Q(z),
$$
where $Q$ is a polynomial such that $Q(\lambda) \neq 0$. For any natural $k$, we have by the Leibniz formula
$$
P^{(k)}(z)=\sum_{j=0}^{k}\left(\begin{array}{c}
k \\
j
\end{array}\right)\left((z-\lambda)^{m}\right)^{(j)} Q^{(k-j)}(z) .
$$
If $k<m$ then also $j<m$ and
$$
\left((z-\lambda)^{m}\right)^{(j)}=\operatorname{const}(z-\lambda)^{m-j},
$$
which vanishes at $z=\lambda$. Hence, for $k<m$, we have $P^{(k)}(\lambda)=0$. For $k=m$ we have again that all the derivatives $\left((z-\lambda)^{m}\right)^{(j)}$ vanish at $z=\lambda$ provided $j<k$, while for $j=k$ we obtain
$$
\left((z-\lambda)^{m}\right)^{(k)}=\left((z-\lambda)^{m}\right)^{(m)}=m ! \neq 0 .
$$
Hence,
$$
P^{(m)}(\lambda)=\left((z-\lambda)^{m}\right)^{(m)} Q(\lambda) \neq 0 .
$$
Conversely, if (3.18) holds then by the Taylor formula for a polynomial at $\lambda$, we have
$$
\begin{aligned}
P(z) & =P(\lambda)+\frac{P^{\prime}(\lambda)}{1 !}(z-\lambda)+\ldots+\frac{P^{(n)}(\lambda)}{n !}(z-\lambda)^{n} \\
& =\frac{P^{(m)}(\lambda)}{m !}(z-\lambda)^{m}+\ldots+\frac{P^{(n)}(\lambda)}{n !}(z-\lambda)^{n} \\
& =(z-\lambda)^{m} Q(z),
\end{aligned}
$$
where
$$
Q(z)=\frac{P^{(m)}(\lambda)}{m !}+\frac{P^{(m+1)}(\lambda)}{(m+1) !}(z-\lambda)+\ldots+\frac{P^{(n)}(\lambda)}{n !}(z-\lambda)^{n-m} .
$$
Obviously, $Q(\lambda)=\frac{P^{(m)}(\lambda)}{m !} \neq 0$, which implies that $\lambda$ is a root of multiplicity $m$.

Lemma 3.6 If $\lambda_{1}, \ldots, \lambda_{r}$ are distinct complex numbers and if, for some polynomials $P_{j}(t)$,
$$
\sum_{j=1}^{r} P_{j}(t) e^{\lambda_{j} t}=0 \text { for all } t \in \mathbb{R},
$$
then $P_{j}(t) \equiv 0$ for all $j$.

Proof. Induction in $r$. If $r=1$ then the claim is obvious since the exponential $e^{\lambda_{1} t}$ never vanishes.
Let us prove the inductive step from $r-1$ to $r$. Dividing (3.19) by $e^{\lambda_{r} t}$ and setting $\mu_{j}=\lambda_{j}-\lambda_{r}$, we obtain the identity
$$
\sum_{j=1}^{r-1} P_{j}(t) e^{\mu_{j} t}+P_{r}(t)=0 .
$$
Choose some integer $k>\operatorname{deg} P_{r}$, where $\operatorname{deg} P$ denotes the maximal power of $t$ that enters $P$ with a non-zero coefficient. Differentiating the above identity $k$ times, we obtain
$$
\sum_{j=1}^{r-1} Q_{j}(t) e^{\mu_{j} t}=0,
$$
where we have used the fact that $\left(P_{r}\right)^{(k)}=0$ and
$$
\left(P_{j}(t) e^{\mu_{j} t}\right)^{(k)}=Q_{j}(t) e^{\mu_{j} t}
$$
for some polynomial $Q_{j}$ (this, for example, follows from the Leibniz formula). By the inductive hypothesis, we conclude that all $Q_{j} \equiv 0$, which implies that
$$
\left(P_{j} e^{\mu_{j} t}\right)^{(k)}=0 .
$$
Hence, the function $P_{j} e^{\mu_{j} t}$ must be equal to a polynomial of degree at most $k-1$, which, since $\mu_{j} \neq 0$, is only possible if $P_{j} \equiv 0$. Substituting into (3.20), we obtain that also $P_{r} \equiv 0$.

Proof of Theorem 3.3. Let $P(\lambda)$ be the characteristic polynomial of (3.14). We first prove that if $\lambda$ is a root of multiplicity $m$ then the function $t^{k} e^{\lambda t}$ solves (3.14) for any $k=0, \ldots, m-1$. By Lemma 3.4, we have
$$
\begin{aligned}
P\left(\frac{d}{d t}\right)\left(t^{k} e^{\lambda t}\right) & =\sum_{j=0}^{n} \frac{1}{j !}\left(t^{k}\right)^{(j)} P^{(j)}\left(\frac{d}{d t}\right) e^{\lambda t} \\
& =\sum_{j=0}^{n} \frac{1}{j !}\left(t^{k}\right)^{(j)} P^{(j)}(\lambda) e^{\lambda t} .
\end{aligned}
$$
If $j>k$ then $\left(t^{k}\right)^{(j)} \equiv 0$. If $j \leq k$ then $j<m$ and, hence, $P^{(j)}(\lambda)=0$ by Lemma 3.5. Hence, all the terms in the above sum vanish, whence
$$
P\left(\frac{d}{d t}\right)\left(t^{k} e^{\lambda t}\right)=0,
$$
that is, the function $x(t)=t^{k} e^{\lambda t}$ solves (3.14).

If $\lambda_{1}, \ldots, \lambda_{r}$ are all the distinct complex roots of $P(\lambda)$ and $m_{j}$ is the multiplicity of $\lambda_{j}$ then it follows that each function in the following sequence
$$
\left\{t^{k} e^{\lambda_{j} t}\right\}, \quad j=1, \ldots, r, \quad k=0, \ldots, m_{j}-1,
$$
is a solution of (3.14). Let us show that these functions are linearly independent. Clearly, each linear combination of the functions (3.21) has the form
$$
\sum_{j=1}^{r} \sum_{k=0}^{m_{j}-1} C_{j k} t^{k} e^{\lambda_{j} t}=\sum_{j=1}^{r} P_{j}(t) e^{\lambda_{j} t},
$$
where $P_{j}(t)=\sum_{k=0}^{m_{j}-1} C_{j k} t^{k}$ are polynomials. If the linear combination is identically zero then by Lemma 3.6 $P_{j} \equiv 0$, which implies that all $C_{j k}$ are 0. Hence, the functions (3.21) are linearly independent, and by Theorem 3.1 the general solution of (3.14) has the form (3.22).

Let us show that if $\lambda=\alpha+i \beta$ is a complex (non-real) root of multiplicity $m$ then $\bar{\lambda}=\alpha-i \beta$ is also a root of the same multiplicity $m$. Indeed, by Lemma 3.5, $\lambda$ satisfies the relations (3.18). Applying the complex conjugation and using the fact that the coefficients of $P$ are real, we obtain that the same relations hold for $\bar{\lambda}$ instead of $\lambda$, which implies that $\bar{\lambda}$ is also a root of multiplicity $m$.
The last claim, that every couple $t^{k} e^{\lambda t}, t^{k} e^{\bar{\lambda} t}$ in (3.21) can be replaced by the real-valued functions $t^{k} e^{\alpha t} \cos \beta t$, $t^{k} e^{\alpha t} \sin \beta t$, follows from the observation that the functions $t^{k} e^{\alpha t} \cos \beta t$, $t^{k} e^{\alpha t} \sin \beta t$ are linear combinations of $t^{k} e^{\lambda t}, t^{k} e^{\bar{\lambda} t}$, and vice versa, which one sees from the identities
$$
\begin{gathered}
e^{\alpha t} \cos \beta t=\frac{1}{2}\left(e^{\lambda t}+e^{\bar{\lambda} t}\right), \quad e^{\alpha t} \sin \beta t=\frac{1}{2 i}\left(e^{\lambda t}-e^{\bar{\lambda} t}\right), \\
e^{\lambda t}=e^{\alpha t}(\cos \beta t+i \sin \beta t), \quad e^{\bar{\lambda} t}=e^{\alpha t}(\cos \beta t-i \sin \beta t),
\end{gathered}
$$
multiplied by $t^{k}$ (compare the proof of Theorem 3.2).

### Space of solutions of inhomogeneous systems

Consider now an inhomogeneous linear ODE
$$
x^{\prime}=A(t) x+B(t),
$$
where $A(t): I \rightarrow \mathbb{R}^{n \times n}$ and $B(t): I \rightarrow \mathbb{R}^{n}$ are continuous mappings on an open interval $I \subset \mathbb{R}$.

Theorem 3.7 If $x_{0}(t)$ is a particular solution of (3.23) and $x_{1}(t), \ldots, x_{n}(t)$ is a sequence of $n$ linearly independent solutions of the homogeneous ODE $x^{\prime}=A x$ then the general solution of (3.23) is given by
$$
x(t)=x_{0}(t)+C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t) .
$$

Proof. If $x(t)$ is any solution of (3.23) then the function $y(t)=x(t)-x_{0}(t)$ solves $y^{\prime}=A y$, whence by Theorem 3.1
$$
y=C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t),
$$
and $x(t)$ satisfies (3.24). Conversely, for all $C_{1}, \ldots, C_{n}$, the function (3.25) solves $y^{\prime}=A y$, whence it follows that the function $x(t)=x_{0}(t)+y(t)$ solves (3.23).

Consider now a scalar ODE
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=f(t),
$$
where all the functions $a_{1}, \ldots, a_{n}, f$ are continuous on an interval $I$.

Corollary. If $x_{0}(t)$ is a particular solution of (3.26) and $x_{1}(t), \ldots, x_{n}(t)$ is a sequence of $n$ linearly independent solutions of the homogeneous ODE
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=0,
$$
then the general solution of (3.26) is given by
$$
x(t)=x_{0}(t)+C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t) .
$$
The proof is trivial and is omitted.

### Linear inhomogeneous ODEs with constant coefficients

Here we consider the ODE
$$
x^{(n)}+a_{1} x^{(n-1)}+\ldots+a_{n} x=f(t),
$$
where the function $f(t)$ is a quasi-polynomial, that is, $f$ has the form
$$
f(t)=\sum_{j} R_{j}(t) e^{\mu_{j} t},
$$
where $R_{j}(t)$ are polynomials, $\mu_{j}$ are complex numbers, and the sum is finite. It is obvious that the sum and the product of two quasi-polynomials are again quasi-polynomials. In particular, the following functions are quasi-polynomials:
$$
t^{k} e^{\alpha t} \cos \beta t \quad \text { and } \quad t^{k} e^{\alpha t} \sin \beta t
$$
(where $k$ is a non-negative integer and $\alpha, \beta \in \mathbb{R}$), because
$$
\cos \beta t=\frac{e^{i \beta t}+e^{-i \beta t}}{2} \text { and } \sin \beta t=\frac{e^{i \beta t}-e^{-i \beta t}}{2 i} .
$$
As we know, the general solution of the inhomogeneous equation (3.27) is obtained as a sum of the general solution of the homogeneous equation and a particular solution of (3.27). Hence, we focus on finding a particular solution of (3.27). As before, denote by $P(\lambda)$ the characteristic polynomial of (3.27), that is,
$$
P(\lambda)=\lambda^{n}+a_{1} \lambda^{n-1}+\ldots+a_{n} .
$$
Then the equation (3.27) can be written shortly in the form $P\left(\frac{d}{d t}\right) x=f$, which will be used below. We start with the following observation.

Claim. If $f=c_{1} f_{1}+\ldots+c_{k} f_{k}$ and $x_{1}(t), \ldots, x_{k}(t)$ are solutions to the equations $P\left(\frac{d}{d t}\right) x_{j}=f_{j}$, then $x=c_{1} x_{1}+\ldots+c_{k} x_{k}$ solves the equation $P\left(\frac{d}{d t}\right) x=f$.

Proof. This is trivial because
$$
P\left(\frac{d}{d t}\right) x=P\left(\frac{d}{d t}\right) \sum_{j} c_{j} x_{j}=\sum_{j} c_{j} P\left(\frac{d}{d t}\right) x_{j}=\sum_{j} c_{j} f_{j}=f .
$$
Hence, we can assume that the function $f$ in (3.27) is of the form $f(t)=R(t) e^{\mu t}$ where $R(t)$ is a polynomial. To illustrate the method which will be used in this Section, consider first the following example.

Example. Consider the ODE
$$
P\left(\frac{d}{d t}\right) x=e^{\mu t},
$$
where $\mu$ is not a root of the characteristic polynomial $P(\lambda)$ (the non-resonant case). We claim that (3.28) has a particular solution in the form $x(t)=a e^{\mu t}$ where $a$ is a complex constant to be chosen. Indeed, we have by (3.15)
$$
P\left(\frac{d}{d t}\right)\left(e^{\mu t}\right)=P(\mu) e^{\mu t},
$$
whence
$$
P\left(\frac{d}{d t}\right)\left(a e^{\mu t}\right)=e^{\mu t}
$$
provided
$$
a=\frac{1}{P(\mu)} .
$$
Consider some concrete examples of ODEs. Let us find a particular solution to the ODE
$$
x^{\prime \prime}+2 x^{\prime}+x=e^{t} .
$$
Note that $P(\lambda)=\lambda^{2}+2 \lambda+1$ and $\mu=1$ is not a root of $P$. Look for a solution in the form $x(t)=a e^{t}$. Substituting into the equation, we obtain
$$
a e^{t}+2 a e^{t}+a e^{t}=e^{t},
$$
whence we obtain the equation for $a$:
$$
4 a=1, \quad a=\frac{1}{4} .
$$
Alternatively, we can obtain $a$ from (3.29), that is,
$$
a=\frac{1}{P(\mu)}=\frac{1}{1+2+1}=\frac{1}{4} .
$$
Hence, the answer is $x(t)=\frac{1}{4} e^{t}$.

Consider another equation:
$$
x^{\prime \prime}+2 x^{\prime}+x=\sin t .
$$
Note that $\sin t$ is the imaginary part of $e^{i t}$. So, we first solve
$$
x^{\prime \prime}+2 x^{\prime}+x=e^{i t}
$$
and then take the imaginary part of the solution. Looking for a solution in the form $x(t)=a e^{i t}$, we obtain
$$
a=\frac{1}{P(\mu)}=\frac{1}{i^{2}+2 i+1}=\frac{1}{2 i}=-\frac{i}{2} .
$$
Hence, the solution is
$$
x=-\frac{i}{2} e^{i t}=-\frac{i}{2}(\cos t+i \sin t)=\frac{1}{2} \sin t-\frac{i}{2} \cos t .
$$
Therefore, its imaginary part $x(t)=-\frac{1}{2} \cos t$ solves the equation (3.30).

Consider yet another ODE
$$
x^{\prime \prime}+2 x^{\prime}+x=e^{-t} \cos t .
$$
Here $e^{-t} \cos t$ is the real part of $e^{\mu t}$ where $\mu=-1+i$. Hence, first solve
$$
x^{\prime \prime}+2 x^{\prime}+x=e^{\mu t} .
$$
Setting $x(t)=a e^{\mu t}$, we obtain
$$
a=\frac{1}{P(\mu)}=\frac{1}{(-1+i)^{2}+2(-1+i)+1}=-1 .
$$
Hence, the complex solution is $x(t)=-e^{(-1+i) t}=-e^{-t} \cos t-i e^{-t} \sin t$, and the solution to (3.31) is $x(t)=-e^{-t} \cos t$.

Finally, let us combine the above examples into one:
$$
x^{\prime \prime}+2 x^{\prime}+x=2 e^{t}-\sin t+e^{-t} \cos t .
$$
A particular solution is obtained by combining the above particular solutions:
$$
\begin{aligned}
x(t) & =2\left(\frac{1}{4} e^{t}\right)-\left(-\frac{1}{2} \cos t\right)+\left(-e^{-t} \cos t\right) \\
& =\frac{1}{2} e^{t}+\frac{1}{2} \cos t-e^{-t} \cos t .
\end{aligned}
$$
Since the general solution to the homogeneous ODE $x^{\prime \prime}+2 x^{\prime}+x=0$ is
$$
x(t)=\left(C_{1}+C_{2} t\right) e^{-t},
$$
we obtain the general solution to (3.32):
$$
x(t)=\left(C_{1}+C_{2} t\right) e^{-t}+\frac{1}{2} e^{t}+\frac{1}{2} \cos t-e^{-t} \cos t .
$$
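As a check of the combined answer, one can substitute the particular solution back into (3.32). A minimal sketch (assuming Python with SymPy, not part of the notes):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.exp(t)/2 + sp.cos(t)/2 - sp.exp(-t)*sp.cos(t)     # particular solution above
f = 2*sp.exp(t) - sp.sin(t) + sp.exp(-t)*sp.cos(t)       # right hand side of (3.32)
print(sp.simplify(x.diff(t, 2) + 2*x.diff(t) + x - f))   # 0
```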
Consider one more equation
$$
x^{\prime \prime}+2 x^{\prime}+x=e^{-t} .
$$
This time $\mu=-1$ is a root of $P(\lambda)=\lambda^{2}+2 \lambda+1$ and the above method does not work. Indeed, if we look for a solution in the form $x=a e^{-t}$ then after substitution we get 0 in the left hand side, because $e^{-t}$ solves the homogeneous equation. The case when $\mu$ is a root of $P(\lambda)$ is referred to as a resonance. This case, as well as the case of a general quasi-polynomial on the right hand side, is treated in the following theorem.

Theorem 3.8 Let $R(t)$ be a non-zero polynomial of degree $k \geq 0$ and $\mu$ be a complex number. Let $m$ be the multiplicity of $\mu$ if $\mu$ is a root of $P$ and $m=0$ if $\mu$ is not a root of $P$. Then the equation
$$
P\left(\frac{d}{d t}\right) x=R(t) e^{\mu t}
$$
has a solution of the form
$$
x(t)=t^{m} Q(t) e^{\mu t},
$$
where $Q(t)$ is a polynomial of degree $k$ (which is to be found).

Example. Let us come back to the equation
$$
x^{\prime \prime}+2 x^{\prime}+x=e^{-t} .
$$
Here $\mu=-1$ is a root of multiplicity $m=2$ and $R(t)=1$ is a polynomial of degree 0. Hence, the solution should be sought in the form
$$
x(t)=a t^{2} e^{-t},
$$
where $a$ is a constant that replaces $Q$ (indeed, $Q$ must have degree 0 and, hence, is a constant). Substituting this into the equation, we obtain
$$
a\left(\left(t^{2} e^{-t}\right)^{\prime \prime}+2\left(t^{2} e^{-t}\right)^{\prime}+t^{2} e^{-t}\right)=e^{-t} .
$$
Expanding the expression in the brackets, we obtain the identity
$$
\left(t^{2} e^{-t}\right)^{\prime \prime}+2\left(t^{2} e^{-t}\right)^{\prime}+t^{2} e^{-t}=2 e^{-t},
$$
so that (3.33) becomes $2 a=1$ and $a=\frac{1}{2}$. Hence, a particular solution is
$$
x(t)=\frac{1}{2} t^{2} e^{-t} .
$$
Consider one more example,
$$
x^{\prime \prime}+2 x^{\prime}+x=t e^{-t},
$$
with the same $\mu=-1$ and $R(t)=t$. Since $\operatorname{deg} R=1$, the polynomial $Q$ must have degree 1, that is, $Q(t)=a t+b$. The coefficients $a$ and $b$ can be determined as follows. Substituting
$$
x(t)=(a t+b) t^{2} e^{-t}=\left(a t^{3}+b t^{2}\right) e^{-t}
$$
into the equation, we obtain
$$
\begin{aligned}
x^{\prime \prime}+2 x^{\prime}+x & =\left(\left(a t^{3}+b t^{2}\right) e^{-t}\right)^{\prime \prime}+2\left(\left(a t^{3}+b t^{2}\right) e^{-t}\right)^{\prime}+\left(a t^{3}+b t^{2}\right) e^{-t} \\
& =(2 b+6 a t) e^{-t} .
\end{aligned}
$$
Hence, comparing with the equation, we obtain
$$
2 b+6 a t=t,
$$
so that $b=0$ and $a=\frac{1}{6}$. The final answer is
$$
x(t)=\frac{t^{3}}{6} e^{-t} .
$$

Proof of Theorem 3.8. Let us prove that the equation
$$
P\left(\frac{d}{d t}\right) x=R(t) e^{\mu t}
$$
has a solution in the form
$$
x(t)=t^{m} Q(t) e^{\mu t},
$$
where $m$ is the multiplicity of $\mu$ and $\operatorname{deg} Q=k=\operatorname{deg} R$.
Using Lemma 3.4, we have
$$
\begin{aligned}
P\left(\frac{d}{d t}\right) x & =P\left(\frac{d}{d t}\right)\left(t^{m} Q(t) e^{\mu t}\right)=\sum_{j \geq 0} \frac{1}{j !}\left(t^{m} Q(t)\right)^{(j)} P^{(j)}\left(\frac{d}{d t}\right) e^{\mu t} \\
& =\sum_{j \geq 0} \frac{1}{j !}\left(t^{m} Q(t)\right)^{(j)} P^{(j)}(\mu) e^{\mu t} .
\end{aligned}
$$
By Lemma 3.4, the summation here runs from $j=0$ to $j=n$, but we can allow any $j \geq 0$ because for $j>n$ the derivative $P^{(j)}$ is identically zero anyway. Furthermore, since $P^{(j)}(\mu)=0$ for all $j \leq m-1$, we can restrict the summation to $j \geq m$. Set
$$
y(t)=\left(t^{m} Q(t)\right)^{(m)}
$$
and observe that $y(t)$ is a polynomial of degree $k$, provided so is $Q(t)$. Conversely, for any polynomial $y(t)$ of degree $k$, there is a polynomial $Q(t)$ of degree $k$ such that (3.35) holds. Indeed, integrating (3.35) $m$ times without adding constants and then dividing by $t^{m}$, we obtain $Q(t)$ as a polynomial of degree $k$. It follows from (3.34) that $y$ must satisfy the ODE
$$
\frac{P^{(m)}(\mu)}{m !} y+\frac{P^{(m+1)}(\mu)}{(m+1) !} y^{\prime}+\ldots+\frac{P^{(m+i)}(\mu)}{(m+i) !} y^{(i)}+\ldots=R(t),
$$
which we rewrite in the form
$$
b_{0} y+b_{1} y^{\prime}+\ldots+b_{i} y^{(i)}+\ldots=R(t),
$$
where $b_{i}=\frac{P^{(m+i)}(\mu)}{(m+i) !}$ (in fact, the index $i$ in the left hand side of (3.36) can be restricted to $i \leq k$ since $y^{(i)} \equiv 0$ for $i>k$). Note that
$$
b_{0}=\frac{P^{(m)}(\mu)}{m !} \neq 0 .
$$
Hence, the problem amounts to the following: given a polynomial
$$
R(t)=r_{0} t^{k}+r_{1} t^{k-1}+\ldots+r_{k}
$$
of degree $k$, prove that there exists a polynomial $y(t)$ of degree $k$ that satisfies (3.36). Let us prove the existence of $y$ by induction in $k$.

The inductive basis. If $k=0$, then $R(t) \equiv r_{0}$ and $y(t) \equiv a$, so that (3.36) becomes $a b_{0}=r_{0}$, whence $a=r_{0} / b_{0}$ (where we use that $b_{0} \neq 0$).

The inductive step from the values smaller than $k$ to $k$. Represent $y$ in the form
$$
y=a t^{k}+z(t),
$$
where $z$ is a polynomial of degree $<k$. Substituting (3.38) into (3.36), we obtain the equation for $z$
$$
b_{0} z+b_{1} z^{\prime}+\ldots+b_{i} z^{(i)}+\ldots=R(t)-\left(a b_{0} t^{k}+a b_{1}\left(t^{k}\right)^{\prime}+\ldots+a b_{k}\left(t^{k}\right)^{(k)}\right)=: \widetilde{R}(t) .
$$
Choosing $a$ from the equation $a b_{0}=r_{0}$, we obtain that the term $t^{k}$ on the right hand side cancels out, whence it follows that $\widetilde{R}(t)$ is a polynomial of degree $<k$. By the inductive hypothesis, the equation
$$
b_{0} z+b_{1} z^{\prime}+\ldots+b_{i} z^{(i)}+\ldots=\widetilde{R}(t)
$$
has a solution $z(t)$ which is a polynomial of degree $<k$. Hence, the function $y=a t^{k}+z$ solves (3.36) and is a polynomial of degree $k$.

Remark. If $k=0$, that is, $R(t) \equiv r_{0}$ is a constant, then (3.36) yields
$$
y=\frac{r_{0}}{b_{0}}=\frac{m ! r_{0}}{P^{(m)}(\mu)} .
$$
The equation (3.35) becomes
$$
\left(t^{m} Q(t)\right)^{(m)}=\frac{m ! r_{0}}{P^{(m)}(\mu)},
$$
whence after $m$ integrations we find
$$
Q(t)=\frac{r_{0}}{P^{(m)}(\mu)} .
$$
Therefore, the ODE $P\left(\frac{d}{d t}\right) x=r_{0} e^{\mu t}$ has a particular solution
$$
x(t)=\frac{r_{0}}{P^{(m)}(\mu)} t^{m} e^{\mu t} .
$$

Example. Consider again the ODE $x^{\prime \prime}+2 x^{\prime}+x=e^{-t}$. Then $\mu=-1$ has multiplicity $m=2$, and $R(t) \equiv 1$. Hence, by the above Remark, we find a particular solution
$$
x(t)=\frac{1}{P^{\prime \prime}(-1)} t^{2} e^{-t}=\frac{1}{2} t^{2} e^{-t} .
$$
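Again this is easy to confirm by direct substitution, e.g. with the following sketch (assuming Python with SymPy):

```python
import sympy as sp

t = sp.symbols('t')
x = t**2*sp.exp(-t)/2                                             # resonant solution
print(sp.simplify(x.diff(t, 2) + 2*x.diff(t) + x - sp.exp(-t)))   # 0
```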
### Second order ODE with periodic right hand side

Consider a second order ODE
$$
x^{\prime \prime}+p x^{\prime}+q x=f(t),
$$
which occurs in various physical phenomena. For example, (3.40) describes the movement of a point body of mass $m$ along the axis $x$, where the term $p x^{\prime}$ comes from the friction forces, the term $q x$ from the elastic forces, and $f(t)$ is an external time-dependent force. Another physical situation that is described by (3.40) is an electrical circuit. As before, let $R$ be the resistance, $L$ be the inductance, and $C$ be the capacitance of the circuit. Let $V(t)$ be the voltage of the power source in the circuit and $x(t)$ be the current in the circuit at time $t$. Then we have seen that the equation for $x(t)$ is
$$
L x^{\prime \prime}+R x^{\prime}+\frac{x}{C}=V^{\prime} .
$$
If $L>0$ then, dividing by $L$, we obtain an ODE of the form (3.40). As an example of application of the above methods of solving such ODEs, we investigate here the case when the function $f(t)$ is periodic. More precisely, consider the ODE
$$
x^{\prime \prime}+p x^{\prime}+q x=A \sin \omega t,
$$
where $A, \omega$ are given positive reals. The function $A \sin \omega t$ is a model for a more general periodic force, which makes good physical sense in all the above examples. For example, in the case of an electrical circuit the external force has the form $A \sin \omega t$ if the power source is an electrical socket with alternating current (AC). The number $\omega$ is called the frequency of the external force (note that the period is $\frac{2 \pi}{\omega}$), or the external frequency, and the number $A$ is called the amplitude (the maximum value) of the external force. Assume in the sequel that $p \geq 0$ and $q>0$, which is the physically most interesting case.

To find a particular solution of (3.41), let us consider the ODE with complex right hand side:
$$
x^{\prime \prime}+p x^{\prime}+q x=A e^{i \omega t} .
$$
Consider first the non-resonant case when $i \omega$ is not a root of the characteristic polynomial $P(\lambda)=\lambda^{2}+p \lambda+q$. Searching for the solution in the form $c e^{i \omega t}$, we obtain
$$
c=\frac{A}{P(i \omega)}=\frac{A}{-\omega^{2}+p i \omega+q}=: a+i b,
$$
and the particular solution of (3.42) is
$$
(a+i b) e^{i \omega t}=(a \cos \omega t-b \sin \omega t)+i(a \sin \omega t+b \cos \omega t) .
$$
Taking its imaginary part, we obtain a particular solution to (3.41):
$$
x(t)=a \sin \omega t+b \cos \omega t=B \sin (\omega t+\varphi),
$$
where
$$
B=\sqrt{a^{2}+b^{2}}=|c|=\frac{A}{\sqrt{\left(q-\omega^{2}\right)^{2}+\omega^{2} p^{2}}}
$$
and $\varphi \in[0,2 \pi)$ is determined from the identities
$$
\cos \varphi=\frac{a}{B}, \quad \sin \varphi=\frac{b}{B} .
$$
The number $B$ is the amplitude of the solution and $\varphi$ is the phase.

To obtain the general solution to (3.41), we need to add to (3.43) the general solution to the homogeneous equation
$$
x^{\prime \prime}+p x^{\prime}+q x=0 .
$$
Let $\lambda_{1}$ and $\lambda_{2}$ be the roots of $P(\lambda)$, that is,
$$
\lambda_{1,2}=-\frac{p}{2} \pm \sqrt{\frac{p^{2}}{4}-q} .
$$
Consider the following possibilities for the roots.

$\lambda_{1}$ and $\lambda_{2}$ are real. Since $p \geq 0$ and $q>0$, we see that both $\lambda_{1}$ and $\lambda_{2}$ are strictly negative. The general solution of the homogeneous equation has the form
$$
\begin{aligned}
C_{1} e^{\lambda_{1} t}+C_{2} e^{\lambda_{2} t} & \text { if } \lambda_{1} \neq \lambda_{2}, \\
\left(C_{1}+C_{2} t\right) e^{\lambda_{1} t} & \text { if } \lambda_{1}=\lambda_{2} .
\end{aligned}
$$
In both cases, it decays exponentially as $t \rightarrow+\infty$. Hence, the general solution of (3.41) has the form
$$
x(t)=B \sin (\omega t+\varphi)+\text { exponentially decaying terms. }
$$
As we see, when $t \rightarrow \infty$ the leading term of $x(t)$ is the above particular solution $B \sin (\omega t+\varphi)$. For the electrical circuit this means that the current quickly stabilizes and becomes also periodic, with the same frequency $\omega$ as the external force.

$\lambda_{1}$ and $\lambda_{2}$ are complex. Let $\lambda_{1,2}=\alpha \pm i \beta$ where
$$
\alpha=-p / 2 \leq 0 \text { and } \beta=\sqrt{q-\frac{p^{2}}{4}}>0 .
$$
The general solution to the homogeneous equation is
$$
e^{\alpha t}\left(C_{1} \cos \beta t+C_{2} \sin \beta t\right)=C e^{\alpha t} \sin (\beta t+\psi) .
$$
The number $\beta$ is called the natural frequency of the physical system in question (pendulum, electrical circuit, spring) for the obvious reason: in the absence of the external force, the system oscillates with the natural frequency $\beta$. Hence, the general solution to (3.41) is
$$
x(t)=B \sin (\omega t+\varphi)+C e^{\alpha t} \sin (\beta t+\psi) .
$$
If $\alpha<0$ then the leading term is again $B \sin (\omega t+\varphi)$. Here is a particular example of such a function: $\sin t+2 e^{-t / 4} \sin \pi t$.

$\lambda_{1}$ and $\lambda_{2}$ are purely imaginary, that is, $\alpha=0$. In this case, $p=0$, $q=\beta^{2}$, and the equation has the form
$$
x^{\prime \prime}+\beta^{2} x=A \sin \omega t .
$$
The assumption that $i \omega$ is not a root implies $\omega \neq \beta$. The general solution is
$$
x(t)=B \sin (\omega t+\varphi)+C \sin (\beta t+\psi),
$$
which is the sum of two sine waves with different frequencies: the natural frequency and the external frequency. Here is a particular example of such a function: $\sin t+2 \sin \pi t$. Strictly speaking, in practice such electrical circuits do not occur, since the resistance is always positive.

Let us come back to the formula (3.44) for the amplitude $B$ and, as an example of its application, consider the following question: for what value of the external frequency $\omega$ is the amplitude $B$ maximal? Assuming that $A$ does not depend on $\omega$ and using the identity
$$
B^{2}=\frac{A^{2}}{\omega^{4}+\left(p^{2}-2 q\right) \omega^{2}+q^{2}},
$$
we see that the maximum of $B$ occurs when the denominator takes its minimum value. If $p^{2} \geq 2 q$ then the minimum value occurs at $\omega=0$, which is not very interesting physically. Assume that $p^{2}<2 q$ (in particular, this implies that $p^{2}<4 q$ and, hence, $\lambda_{1}$ and $\lambda_{2}$ are complex). Then the maximum of $B$ occurs when
$$
\omega^{2}=-\frac{1}{2}\left(p^{2}-2 q\right)=q-\frac{p^{2}}{2} .
$$
The value
$$
\omega_{0}:=\sqrt{q-p^{2} / 2}
$$
is called the resonant frequency of the physical system in question. If the external force has the resonant frequency then the system exhibits the highest response to this force. This phenomenon is called a resonance. Note for comparison that the natural frequency is equal to $\beta=\sqrt{q-p^{2} / 4}$, which is in general different from $\omega_{0}$.
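The location of the maximum of $B$ is easy to confirm numerically. A brief sketch (assuming Python with NumPy; the coefficients $A=1$, $p=0.4$, $q=2$ are arbitrary sample values with $p^{2}<2 q$, not taken from the notes):

```python
import numpy as np

A, p, q = 1.0, 0.4, 2.0                    # sample data with p**2 < 2*q
omega = np.linspace(0.01, 3.0, 100_000)
B = A/np.sqrt((q - omega**2)**2 + (p*omega)**2)

print(omega[np.argmax(B)])                 # ~1.3856, found by brute force
print(np.sqrt(q - p**2/2))                 # resonant frequency omega_0 = 1.3856...
```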
In terms of $\omega_{0}$ and $\beta$, we can write
$$
\begin{aligned}
B^{2} & =\frac{A^{2}}{\omega^{4}-2 \omega_{0}^{2} \omega^{2}+q^{2}}=\frac{A^{2}}{\left(\omega^{2}-\omega_{0}^{2}\right)^{2}+q^{2}-\omega_{0}^{4}} \\
& =\frac{A^{2}}{\left(\omega^{2}-\omega_{0}^{2}\right)^{2}+p^{2} \beta^{2}},
\end{aligned}
$$
where we have used that
$$
q^{2}-\omega_{0}^{4}=q^{2}-\left(q-\frac{p^{2}}{2}\right)^{2}=q p^{2}-\frac{p^{4}}{4}=p^{2} \beta^{2} .
$$
In particular, the maximum amplitude, which occurs when $\omega=\omega_{0}$, is $B_{\max }=\frac{A}{p \beta}$.

In conclusion, consider the case when $i \omega$ is a root of $P(\lambda)$, that is,
$$
(i \omega)^{2}+p i \omega+q=0,
$$
which implies $p=0$ and $q=\omega^{2}$. In this case $\alpha=0$ and $\omega=\omega_{0}=\beta=\sqrt{q}$, and the equation has the form
$$
x^{\prime \prime}+\omega^{2} x=A \sin \omega t .
$$
Considering the ODE
$$
x^{\prime \prime}+\omega^{2} x=A e^{i \omega t},
$$
and searching for a particular solution in the form $x(t)=c t e^{i \omega t}$, we obtain by (3.39)
$$
c=\frac{A}{P^{\prime}(i \omega)}=\frac{A}{2 i \omega} .
$$
Hence, the complex particular solution is
$$
x(t)=\frac{A t}{2 i \omega} e^{i \omega t}=-i \frac{A t}{2 \omega} \cos \omega t+\frac{A t}{2 \omega} \sin \omega t
$$
and its imaginary part is
$$
x(t)=-\frac{A t}{2 \omega} \cos \omega t .
$$
Hence, the general solution is
$$
x(t)=-\frac{A t}{2 \omega} \cos \omega t+C \sin (\omega t+\psi) .
$$
Here is an example of such a function: $-t \cos t+2 \sin t$.

Hence, we have a complete resonance: the external frequency $\omega$ is simultaneously equal to the natural frequency and the resonant frequency. In the case of a complete resonance, the amplitude increases unboundedly in time. Since unbounded oscillations are physically impossible, either the system breaks down over time or the mathematical model becomes unsuitable for describing the physical system.
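The linear growth of the oscillations in the complete resonance case can be observed numerically. A minimal sketch (assuming $\omega=1$, $A=1$, zero initial data, and SciPy's `solve_ivp` as the integrator, all our own choices): it integrates $x''+x=\sin t$ and compares the result with the exact solution $-\frac{t}{2}\cos t+\frac{1}{2}\sin t$ obtained by fitting the constants of the general solution to the initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Complete resonance: x'' + x = sin t, written as a first order system.
def rhs(t, y):
    x, v = y
    return [v, np.sin(t) - x]

t_eval = np.linspace(0.0, 50.0, 2001)
sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Exact solution for x(0) = x'(0) = 0: x(t) = -(t/2) cos t + (1/2) sin t,
# whose amplitude grows linearly in t.
exact = -0.5 * t_eval * np.cos(t_eval) + 0.5 * np.sin(t_eval)
print(np.max(np.abs(sol.y[0] - exact)))   # prints a small integration error
```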
### The method of variation of parameters

#### A system of the 1st order

We present here the method of variation of parameters in order to solve a general linear system
$$
x^{\prime}=A(t) x+B(t),
$$
where as before $A(t): I \rightarrow \mathbb{R}^{n \times n}$ and $B(t): I \rightarrow \mathbb{R}^{n}$ are continuous. Let $x_{1}(t), \ldots, x_{n}(t)$ be $n$ linearly independent solutions of the homogeneous system $x^{\prime}=A(t) x$, defined on $I$. We start with the following observation.

Lemma 3.9 If the solutions $x_{1}(t), \ldots, x_{n}(t)$ of the system $x^{\prime}=A(t) x$ are linearly independent then, for any $t_{0} \in I$, the vectors $x_{1}\left(t_{0}\right), \ldots, x_{n}\left(t_{0}\right)$ are linearly independent.

Proof. Indeed, assume that for some constants $C_{1}, \ldots, C_{n}$
$$
C_{1} x_{1}\left(t_{0}\right)+\ldots+C_{n} x_{n}\left(t_{0}\right)=0 .
$$
Consider the function $x(t)=C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t)$. Then $x(t)$ solves the IVP
$$
\left\{\begin{array}{l}
x^{\prime}=A(t) x \\
x\left(t_{0}\right)=0,
\end{array}\right.
$$
whence by the uniqueness theorem $x(t) \equiv 0$. Since the solutions $x_{1}, \ldots, x_{n}$ are independent, it follows that $C_{1}=\ldots=C_{n}=0$, whence the independence of the vectors $x_{1}\left(t_{0}\right), \ldots, x_{n}\left(t_{0}\right)$ follows.

Example. Consider two vector functions
$$
x_{1}(t)=\left(\begin{array}{c}
\cos t \\
\sin t
\end{array}\right) \text { and } x_{2}(t)=\left(\begin{array}{c}
\sin t \\
\cos t
\end{array}\right),
$$
which are obviously linearly independent. However, for $t=\pi / 4$, we have
$$
x_{1}(t)=\left(\begin{array}{c}
\sqrt{2} / 2 \\
\sqrt{2} / 2
\end{array}\right)=x_{2}(t),
$$
so that the vectors $x_{1}(\pi / 4)$ and $x_{2}(\pi / 4)$ are linearly dependent. Hence, $x_{1}(t)$ and $x_{2}(t)$ cannot be solutions of the same system $x^{\prime}=A x$. For comparison, the functions
$$
x_{1}(t)=\left(\begin{array}{c}
\cos t \\
\sin t
\end{array}\right) \text { and } x_{2}(t)=\left(\begin{array}{c}
-\sin t \\
\cos t
\end{array}\right)
$$
are solutions of the same system
$$
x^{\prime}=\left(\begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}\right) x
$$
and, hence, the vectors $x_{1}(t)$ and $x_{2}(t)$ are linearly independent for any $t$. This follows also from
$$
\operatorname{det}\left(x_{1} \mid x_{2}\right)=\operatorname{det}\left(\begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t
\end{array}\right)=1 \neq 0 .
$$

Given $n$ linearly independent solutions to $x^{\prime}=A(t) x$, form an $n \times n$ matrix
$$
X(t)=\left(x_{1}(t)\left|x_{2}(t)\right| \ldots \mid x_{n}(t)\right),
$$
where the $k$-th column is the column-vector $x_{k}(t), k=1, \ldots, n$. The matrix $X$ is called the fundamental matrix of the system $x^{\prime}=A x$. It follows from Lemma 3.9 that the columns of $X(t)$ are linearly independent for any $t \in I$, which in particular means that the inverse matrix $X^{-1}(t)$ is also defined for all $t \in I$. This allows us to solve the inhomogeneous system as follows.

Theorem 3.10 The general solution to the system
$$
x^{\prime}=A(t) x+B(t)
$$
is given by
$$
x(t)=X(t) \int X^{-1}(t) B(t) d t,
$$
where $X$ is the fundamental matrix of the system $x^{\prime}=A x$. Note that $X^{-1} B$ is a time dependent $n$-dimensional vector, which can be integrated in $t$ componentwise.

Proof. Observe first that the matrix $X$ satisfies the following ODE
$$
X^{\prime}=A X .
$$
Indeed, this identity holds for any column $x_{k}$ of $X$, whence it follows for the whole matrix. Differentiating (3.46) in $t$ and using the product rule, we obtain
$$
\begin{aligned}
x^{\prime} & =X^{\prime}(t) \int X^{-1}(t) B(t) d t+X(t)\left(X^{-1}(t) B(t)\right) \\
& =A X \int X^{-1} B(t) d t+B(t) \\
& =A x+B(t) .
\end{aligned}
$$
Hence, $x(t)$ solves (3.45). Let us show that (3.46) gives all the solutions. Note that the integral in (3.46) is indefinite, so that it can be presented in the form
$$
\int X^{-1}(t) B(t) d t=V(t)+C,
$$
where $V(t)$ is a vector function and $C=\left(C_{1}, \ldots, C_{n}\right)$ is an arbitrary constant vector. Hence, (3.46) gives
$$
\begin{aligned}
x(t) & =X(t) V(t)+X(t) C \\
& =x_{0}(t)+C_{1} x_{1}(t)+\ldots+C_{n} x_{n}(t),
\end{aligned}
$$
where $x_{0}(t)=X(t) V(t)$ is a solution of (3.45). By Theorem 3.7 we conclude that $x(t)$ is indeed the general solution.

Second proof. Let us show a different way of deriving (3.46) that is convenient in practical applications and also explains the term "variation of parameters". Let us look for a solution to (3.45) in the form
$$
x(t)=C_{1}(t) x_{1}(t)+\ldots+C_{n}(t) x_{n}(t),
$$
where $C_{1}, C_{2}, \ldots, C_{n}$ are now unknown real-valued functions to be determined. Since $x_{1}(t), \ldots, x_{n}(t)$ are for any $t$ linearly independent vectors, any $\mathbb{R}^{n}$-valued function $x(t)$ can be represented in the form (3.47). The identity (3.47) can be considered as a linear system of algebraic equations with respect to the unknowns $C_{1}, \ldots, C_{n}$. Solving it by Cramer's rule, we obtain $C_{1}, \ldots, C_{n}$ in terms of rational functions of $x_{1}, \ldots, x_{n}, x$.
Since the latter functions are all differentiable in $t$, we obtain that also $C_{1}, \ldots, C_{n}$ are differentiable in $t$. Differentiating the identity (3.47) in time and using $x_{k}^{\prime}=A x_{k}$, we obtain
$$
\begin{aligned}
x^{\prime}= & C_{1} x_{1}^{\prime}+C_{2} x_{2}^{\prime}+\ldots+C_{n} x_{n}^{\prime} \\
& +C_{1}^{\prime} x_{1}+C_{2}^{\prime} x_{2}+\ldots+C_{n}^{\prime} x_{n} \\
= & C_{1} A x_{1}+C_{2} A x_{2}+\ldots+C_{n} A x_{n} \\
& +C_{1}^{\prime} x_{1}+C_{2}^{\prime} x_{2}+\ldots+C_{n}^{\prime} x_{n} \\
= & A x+C_{1}^{\prime} x_{1}+C_{2}^{\prime} x_{2}+\ldots+C_{n}^{\prime} x_{n} .
\end{aligned}
$$
Hence, the equation $x^{\prime}=A x+B$ becomes
$$
C_{1}^{\prime} x_{1}+C_{2}^{\prime} x_{2}+\ldots+C_{n}^{\prime} x_{n}=B .
$$
If $C(t)$ denotes the column-vector with components $C_{1}(t), \ldots, C_{n}(t)$ then (3.48) can be written in the form
$$
X C^{\prime}=B,
$$
whence
$$
\begin{gathered}
C^{\prime}=X^{-1} B, \\
C(t)=\int X^{-1}(t) B(t) d t,
\end{gathered}
$$
and
$$
x(t)=X C=X(t) \int X^{-1}(t) B(t) d t .
$$
The term "variation of parameters" comes from the identity (3.47). Indeed, if $C_{1}, \ldots, C_{n}$ are constant parameters then this identity determines the general solution of the homogeneous ODE $x^{\prime}=A x$. By allowing $C_{1}, \ldots, C_{n}$ to be variable, we obtain the general solution to $x^{\prime}=A x+B$.

Example. Consider the system
$$
\left\{\begin{array}{l}
x_{1}^{\prime}=-x_{2} \\
x_{2}^{\prime}=x_{1}
\end{array}\right.
$$
or, in the vector form,
$$
x^{\prime}=\left(\begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}\right) x .
$$
It is easy to see that this system has two independent solutions
$$
x_{1}(t)=\left(\begin{array}{c}
\cos t \\
\sin t
\end{array}\right) \quad \text { and } \quad x_{2}(t)=\left(\begin{array}{c}
-\sin t \\
\cos t
\end{array}\right) .
$$
Hence, the corresponding fundamental matrix is
$$
X=\left(\begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t
\end{array}\right)
$$
and
$$
X^{-1}=\left(\begin{array}{cc}
\cos t & \sin t \\
-\sin t & \cos t
\end{array}\right) .
$$
Consider now the ODE
$$
x^{\prime}=A x+B(t),
$$
where $A$ is the above matrix and $B(t)=\left(\begin{array}{c}b_{1}(t) \\ b_{2}(t)\end{array}\right)$. By (3.46), we obtain the general solution
$$
\begin{aligned}
x & =\left(\begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t
\end{array}\right) \int\left(\begin{array}{cc}
\cos t & \sin t \\
-\sin t & \cos t
\end{array}\right)\left(\begin{array}{c}
b_{1}(t) \\
b_{2}(t)
\end{array}\right) d t \\
& =\left(\begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t
\end{array}\right) \int\left(\begin{array}{c}
b_{1}(t) \cos t+b_{2}(t) \sin t \\
-b_{1}(t) \sin t+b_{2}(t) \cos t
\end{array}\right) d t .
\end{aligned}
$$
Consider a particular example $B(t)=\left(\begin{array}{c}1 \\ -t\end{array}\right)$. Then the integral is
$$
\int\left(\begin{array}{c}
\cos t-t \sin t \\
-\sin t-t \cos t
\end{array}\right) d t=\left(\begin{array}{c}
t \cos t+C_{1} \\
-t \sin t+C_{2}
\end{array}\right),
$$
whence
$$
\begin{aligned}
x & =\left(\begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t
\end{array}\right)\left(\begin{array}{c}
t \cos t+C_{1} \\
-t \sin t+C_{2}
\end{array}\right) \\
& =\left(\begin{array}{c}
C_{1} \cos t-C_{2} \sin t+t \\
C_{1} \sin t+C_{2} \cos t
\end{array}\right) \\
& =\left(\begin{array}{c}
t \\
0
\end{array}\right)+C_{1}\left(\begin{array}{c}
\cos t \\
\sin t
\end{array}\right)+C_{2}\left(\begin{array}{c}
-\sin t \\
\cos t
\end{array}\right) .
\end{aligned}
$$
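This closed form can be double-checked with a computer algebra system. A minimal sketch in Python with SymPy (the library choice is ours, not the text's): it rebuilds the fundamental matrix, applies the formula of Theorem 3.10 to $B(t)=(1,-t)^T$ with the integration constants set to zero, and verifies that the resulting particular solution satisfies $x'=Ax+B$.

```python
import sympy as sp

t = sp.symbols('t')

A = sp.Matrix([[0, -1], [1, 0]])
X = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])          # fundamental matrix
B = sp.Matrix([1, -t])

# Particular solution x = X * integral(X^{-1} B dt), constants set to zero.
integrand = sp.simplify(X.inv() * B)
x = sp.simplify(X * integrand.applyfunc(lambda f: sp.integrate(f, t)))

print(x)                                          # Matrix([[t], [0]])
residual = sp.simplify(sp.diff(x, t) - A * x - B)
assert residual == sp.Matrix([0, 0])
```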
#### A scalar ODE of $n$-th order

Consider now a scalar ODE of order $n$
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=f(t),
$$
where $a_{k}(t)$ and $f(t)$ are continuous functions on some interval $I$. Recall that it can be reduced to the vector ODE
$$
\mathbf{x}^{\prime}=A(t) \mathbf{x}+B(t),
$$
where
$$
\mathbf{x}(t)=\left(\begin{array}{c}
x(t) \\
x^{\prime}(t) \\
\cdots \\
x^{(n-1)}(t)
\end{array}\right)
$$
and
$$
A=\left(\begin{array}{ccccc}
0 & 1 & 0 & \ldots & 0 \\
0 & 0 & 1 & \ldots & 0 \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
0 & 0 & 0 & \ldots & 1 \\
-a_{n} & -a_{n-1} & -a_{n-2} & \ldots & -a_{1}
\end{array}\right) \text { and } B=\left(\begin{array}{c}
0 \\
0 \\
\ldots \\
f
\end{array}\right) .
$$
If $x_{1}, \ldots, x_{n}$ are $n$ linearly independent solutions to the homogeneous ODE
$$
x^{(n)}+a_{1} x^{(n-1)}+\ldots+a_{n}(t) x=0
$$
then, denoting by $\mathbf{x}_{1}, \ldots, \mathbf{x}_{n}$ the corresponding vector solutions, we obtain the fundamental matrix
$$
X=\left(\mathbf{x}_{1}\left|\mathbf{x}_{2}\right| \ldots \mid \mathbf{x}_{n}\right)=\left(\begin{array}{cccc}
x_{1} & x_{2} & \ldots & x_{n} \\
x_{1}^{\prime} & x_{2}^{\prime} & \ldots & x_{n}^{\prime} \\
\ldots & \ldots & \ldots & \ldots \\
x_{1}^{(n-1)} & x_{2}^{(n-1)} & \ldots & x_{n}^{(n-1)}
\end{array}\right) .
$$
We need to multiply $X^{-1}$ by $B$. Denote by $y_{i k}$ the element of $X^{-1}$ at position $i, k$ where $i$ is the row index and $k$ is the column index. Denote also by $y_{k}$ the $k$-th column of $X^{-1}$, that is, $y_{k}=\left(\begin{array}{c}y_{1 k} \\ \ldots \\ y_{n k}\end{array}\right)$. Then
$$
X^{-1} B=\left(\begin{array}{ccc}
y_{11} & \ldots & y_{1 n} \\
\ldots & \ldots & \ldots \\
y_{n 1} & \ldots & y_{n n}
\end{array}\right)\left(\begin{array}{c}
0 \\
\ldots \\
f
\end{array}\right)=\left(\begin{array}{c}
y_{1 n} f \\
\ldots \\
y_{n n} f
\end{array}\right)=f y_{n}
$$
and the general vector solution is
$$
\mathbf{x}=X(t) \int f(t) y_{n}(t) d t .
$$
We need the function $x(t)$, which is the first component of $\mathbf{x}$. Therefore, we need only to take the first row of $X$ and multiply it by the column vector $\int f(t) y_{n}(t) d t$, whence
$$
x(t)=\sum_{j=1}^{n} x_{j}(t) \int f(t) y_{j n}(t) d t .
$$
Hence, we have proved the following.

Corollary. Let $x_{1}, \ldots, x_{n}$ be $n$ linearly independent solutions to
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=0
$$
and $X$ be the corresponding fundamental matrix. Then, for any continuous function $f(t)$, the general solution to the ODE
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=f(t)
$$
is given by
$$
x(t)=\sum_{j=1}^{n} x_{j}(t) \int f(t) y_{j n}(t) d t,
$$
where $y_{j k}$ are the entries of the matrix $X^{-1}$.
Example. Consider the ODE
$$
x^{\prime \prime}+x=\sin t .
$$
The independent solutions are $x_{1}(t)=\cos t$ and $x_{2}(t)=\sin t$, so that
$$
X=\left(\begin{array}{cc}
\cos t & \sin t \\
-\sin t & \cos t
\end{array}\right) .
$$
The inverse is
$$
X^{-1}=\left(\begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t
\end{array}\right) .
$$
Hence, the solution is
$$
\begin{aligned}
x(t) & =x_{1}(t) \int f(t) y_{12}(t) d t+x_{2}(t) \int f(t) y_{22}(t) d t \\
& =\cos t \int \sin t(-\sin t) d t+\sin t \int \sin t \cos t d t \\
& =-\cos t \int \sin ^{2} t d t+\frac{1}{2} \sin t \int \sin 2 t d t \\
& =-\cos t\left(\frac{1}{2} t-\frac{1}{4} \sin 2 t+C_{1}\right)+\frac{1}{4} \sin t\left(-\cos 2 t+C_{2}\right) \\
& =-\frac{1}{2} t \cos t+\frac{1}{4}(\sin 2 t \cos t-\sin t \cos 2 t)+C_{3} \cos t+C_{4} \sin t \\
& =-\frac{1}{2} t \cos t+C_{3} \cos t+C_{5} \sin t .
\end{aligned}
$$
Of course, the same result can be obtained by Theorem 3.8.

Consider one more example, where the right hand side is not a quasi-polynomial:
$$
x^{\prime \prime}+x=\tan t .
$$
Then as above we obtain ${ }^{8}$
$$
\begin{aligned}
x & =\cos t \int \tan t(-\sin t) d t+\sin t \int \tan t \cos t d t \\
& =\cos t\left(\frac{1}{2} \ln \left(\frac{1-\sin t}{1+\sin t}\right)+\sin t\right)-\sin t \cos t+C_{1} \cos t+C_{2} \sin t \\
& =\frac{1}{2} \cos t \ln \left(\frac{1-\sin t}{1+\sin t}\right)+C_{1} \cos t+C_{2} \sin t .
\end{aligned}
$$

${ }^{8}$ The integral $\int \tan t \sin t \, d t$ is taken as follows:
$$
\int \tan t \sin t \, d t=\int \frac{\sin ^{2} t}{\cos t} d t=\int \frac{1-\cos ^{2} t}{\cos t} d t=\int \frac{d t}{\cos t}-\sin t .
$$
Next, we have
$$
\int \frac{d t}{\cos t}=\int \frac{d \sin t}{\cos ^{2} t}=\int \frac{d \sin t}{1-\sin ^{2} t}=\frac{1}{2} \ln \frac{1+\sin t}{1-\sin t} .
$$
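As a sanity check, one can verify symbolically that the above particular solution of $x''+x=\tan t$ really solves the equation. A brief sketch with SymPy (our choice of tool, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')

# Particular solution found above (constants C1 = C2 = 0).
x = sp.Rational(1, 2) * sp.cos(t) * sp.log((1 - sp.sin(t)) / (1 + sp.sin(t)))

residual = sp.simplify(sp.diff(x, t, 2) + x - sp.tan(t))
print(residual)   # 0
```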
Let us show how one can use the method of variation of parameters directly, without using the formula (3.49). Consider the ODE
$$
x^{\prime \prime}+x=f(t) .
$$
The general solution to the homogeneous ODE $x^{\prime \prime}+x=0$ is
$$
x(t)=C_{1} \cos t+C_{2} \sin t,
$$
where $C_{1}$ and $C_{2}$ are constant parameters. Let us look for the solution of (3.50) in the form
$$
x(t)=C_{1}(t) \cos t+C_{2}(t) \sin t,
$$
which is obtained from (3.52) by replacing the constants by functions (hence the name of the method, "variation of parameters"). To obtain the equations for the unknown functions $C_{1}(t), C_{2}(t)$, differentiate (3.53):
$$
\begin{aligned}
x^{\prime}(t)= & -C_{1}(t) \sin t+C_{2}(t) \cos t \\
& +C_{1}^{\prime}(t) \cos t+C_{2}^{\prime}(t) \sin t .
\end{aligned}
$$
The first equation for $C_{1}, C_{2}$ comes from the requirement that the second line here (that is, the sum of the terms with $C_{1}^{\prime}$ and $C_{2}^{\prime}$) must vanish, that is,
$$
C_{1}^{\prime} \cos t+C_{2}^{\prime} \sin t=0 .
$$
The motivation for this choice is as follows. Switching to the normal system, one must have the identity
$$
\mathbf{x}(t)=C_{1}(t) \mathbf{x}_{1}(t)+C_{2}(t) \mathbf{x}_{2}(t),
$$
which componentwise is
$$
\begin{aligned}
x(t) & =C_{1}(t) \cos t+C_{2}(t) \sin t \\
x^{\prime}(t) & =C_{1}(t)(\cos t)^{\prime}+C_{2}(t)(\sin t)^{\prime} .
\end{aligned}
$$
Differentiating the first line and subtracting the second line, we obtain (3.55). It follows from (3.54) and (3.55) that
$$
\begin{aligned}
x^{\prime \prime}= & -C_{1} \cos t-C_{2} \sin t \\
& -C_{1}^{\prime} \sin t+C_{2}^{\prime} \cos t,
\end{aligned}
$$
whence
$$
x^{\prime \prime}+x=-C_{1}^{\prime} \sin t+C_{2}^{\prime} \cos t
$$
(note that the terms with $C_{1}$ and $C_{2}$ cancel out, and that this will always be the case provided all computations are done correctly). Hence, the second equation for $C_{1}^{\prime}$ and $C_{2}^{\prime}$ is
$$
-C_{1}^{\prime} \sin t+C_{2}^{\prime} \cos t=f(t) .
$$
Solving the system of linear algebraic equations
$$
\left\{\begin{array}{l}
C_{1}^{\prime} \cos t+C_{2}^{\prime} \sin t=0 \\
-C_{1}^{\prime} \sin t+C_{2}^{\prime} \cos t=f(t),
\end{array}\right.
$$
we obtain
$$
C_{1}^{\prime}=-f(t) \sin t, \quad C_{2}^{\prime}=f(t) \cos t,
$$
whence
$$
C_{1}=-\int f(t) \sin t d t, \quad C_{2}=\int f(t) \cos t d t
$$
and
$$
x(t)=-\cos t \int f(t) \sin t d t+\sin t \int f(t) \cos t d t .
$$

### Wronskian and the Liouville formula

Let $I$ be an open interval in $\mathbb{R}$.

Definition. Given a sequence of $n$ vector functions $x_{1}, \ldots, x_{n}: I \rightarrow \mathbb{R}^{n}$, define their Wronskian $W(t)$ as a real valued function on $I$ by
$$
W(t)=\operatorname{det}\left(x_{1}(t)\left|x_{2}(t)\right| \ldots \mid x_{n}(t)\right),
$$
where the matrix on the right hand side is formed by the column-vectors $x_{1}, \ldots, x_{n}$. Hence, $W(t)$ is the determinant of an $n \times n$ matrix.

Definition. Let $x_{1}, \ldots, x_{n}$ be $n$ real-valued functions on $I$ which are $n-1$ times differentiable on $I$. Then their Wronskian is defined by
$$
W(t)=\operatorname{det}\left(\begin{array}{cccc}
x_{1} & x_{2} & \ldots & x_{n} \\
x_{1}^{\prime} & x_{2}^{\prime} & \ldots & x_{n}^{\prime} \\
\ldots & \ldots & \ldots & \ldots \\
x_{1}^{(n-1)} & x_{2}^{(n-1)} & \ldots & x_{n}^{(n-1)}
\end{array}\right) .
$$

Lemma 3.11 (a) Let $x_{1}, \ldots, x_{n}$ be a sequence of $\mathbb{R}^{n}$-valued functions that solve a linear system $x^{\prime}=A(t) x$, and let $W(t)$ be their Wronskian. Then either $W(t) \equiv 0$ for all $t \in I$ and the functions $x_{1}, \ldots, x_{n}$ are linearly dependent, or $W(t) \neq 0$ for all $t \in I$ and the functions $x_{1}, \ldots, x_{n}$ are linearly independent.

(b) Let $x_{1}, \ldots, x_{n}$ be a sequence of real-valued functions that solve a linear ODE
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=0,
$$
and let $W(t)$ be their Wronskian. Then either $W(t) \equiv 0$ for all $t \in I$ and the functions $x_{1}, \ldots, x_{n}$ are linearly dependent, or $W(t) \neq 0$ for all $t \in I$ and the functions $x_{1}, \ldots, x_{n}$ are linearly independent.

Proof. (a) Indeed, if the functions $x_{1}, \ldots, x_{n}$ are linearly independent then, by Lemma 3.9, the vectors $x_{1}(t), \ldots, x_{n}(t)$ are linearly independent for any value of $t$, which implies $W(t) \neq 0$. If the functions $x_{1}, \ldots, x_{n}$ are linearly dependent then also the vectors $x_{1}(t), \ldots, x_{n}(t)$ are linearly dependent for any $t$, whence $W(t) \equiv 0$.

(b) Define the vector function
$$
\mathbf{x}_{k}=\left(\begin{array}{c}
x_{k} \\
x_{k}^{\prime} \\
\cdots \\
x_{k}^{(n-1)}
\end{array}\right)
$$
so that $\mathbf{x}_{1}, \ldots, \mathbf{x}_{n}$ is a sequence of vector functions that solve the vector ODE $\mathbf{x}^{\prime}=A(t) \mathbf{x}$.
The Wronskian of $\mathbf{x}_{1}, \ldots, \mathbf{x}_{n}$ is obviously the same as the Wronskian of $x_{1}, \ldots, x_{n}$, and the sequence $\mathbf{x}_{1}, \ldots, \mathbf{x}_{n}$ is linearly independent if and only if so is $x_{1}, \ldots, x_{n}$. Hence, the rest follows from part (a).

Theorem 3.12 (The Liouville formula) Let $\left\{x_{i}\right\}_{i=1}^{n}$ be a sequence of $n$ solutions of the ODE $x^{\prime}=A(t) x$, where $A: I \rightarrow \mathbb{R}^{n \times n}$ is continuous. Then the Wronskian $W(t)$ of this sequence satisfies the identity
$$
W(t)=W\left(t_{0}\right) \exp \left(\int_{t_{0}}^{t} \operatorname{trace} A(\tau) d \tau\right)
$$
for all $t, t_{0} \in I$.

Recall that the trace (German: Spur) $\operatorname{trace} A$ of the matrix $A$ is the sum of all the diagonal entries of the matrix.

Proof. Let the entries of the matrix $\left(x_{1}\left|x_{2}\right| \ldots \mid x_{n}\right)$ be $x_{i j}$ where $i$ is the row index and $j$ is the column index; in particular, the components of the vector $x_{j}$ are $x_{1 j}, x_{2 j}, \ldots, x_{n j}$. Denote by $r_{i}$ the $i$-th row of this matrix, that is, $r_{i}=\left(x_{i 1}, x_{i 2}, \ldots, x_{i n}\right)$; then
$$
W=\operatorname{det}\left(\begin{array}{c}
r_{1} \\
r_{2} \\
\cdots \\
r_{n}
\end{array}\right) .
$$
We use the following formula for differentiation of the determinant, which follows from the full expansion of the determinant and the product rule:
$$
W^{\prime}(t)=\operatorname{det}\left(\begin{array}{c}
r_{1}^{\prime} \\
r_{2} \\
\ldots \\
r_{n}
\end{array}\right)+\operatorname{det}\left(\begin{array}{c}
r_{1} \\
r_{2}^{\prime} \\
\ldots \\
r_{n}
\end{array}\right)+\ldots+\operatorname{det}\left(\begin{array}{c}
r_{1} \\
r_{2} \\
\ldots \\
r_{n}^{\prime}
\end{array}\right) .
$$
Indeed, if $f_{1}(t), \ldots, f_{n}(t)$ are real-valued differentiable functions then the product rule implies by induction
$$
\left(f_{1} \ldots f_{n}\right)^{\prime}=f_{1}^{\prime} f_{2} \ldots f_{n}+f_{1} f_{2}^{\prime} \ldots f_{n}+\ldots+f_{1} f_{2} \ldots f_{n}^{\prime} .
$$
Hence, when differentiating the full expansion of the determinant, each term of the determinant gives rise to $n$ terms where one of the multiples is replaced by its derivative. Combining properly all such terms, we obtain that the derivative of the determinant is the sum of $n$ determinants where one of the rows is replaced by its derivative, that is, (3.57).

The fact that each vector $x_{j}$ satisfies the equation $x_{j}^{\prime}=A x_{j}$ can be written in the coordinate form as follows:
$$
x_{i j}^{\prime}=\sum_{k=1}^{n} A_{i k} x_{k j} .
$$
For any fixed $i$, the sequence $\left\{x_{i j}\right\}_{j=1}^{n}$ is nothing other than the components of the row $r_{i}$. Since the coefficients $A_{i k}$ do not depend on $j$, (3.58) implies the same identity for the rows:
$$
r_{i}^{\prime}=\sum_{k=1}^{n} A_{i k} r_{k} .
$$
That is, the derivative $r_{i}^{\prime}$ of the $i$-th row is a linear combination of all rows $r_{k}$. For example,
$$
r_{1}^{\prime}=A_{11} r_{1}+A_{12} r_{2}+\ldots+A_{1 n} r_{n},
$$
which implies that
$$
\operatorname{det}\left(\begin{array}{c}
r_{1}^{\prime} \\
r_{2} \\
\ldots \\
r_{n}
\end{array}\right)=A_{11} \operatorname{det}\left(\begin{array}{c}
r_{1} \\
r_{2} \\
\ldots \\
r_{n}
\end{array}\right)+A_{12} \operatorname{det}\left(\begin{array}{c}
r_{2} \\
r_{2} \\
\ldots \\
r_{n}
\end{array}\right)+\ldots+A_{1 n} \operatorname{det}\left(\begin{array}{c}
r_{n} \\
r_{2} \\
\ldots \\
r_{n}
\end{array}\right) .
$$
All the determinants except for the first one vanish since they have equal rows. Hence,
$$
\operatorname{det}\left(\begin{array}{c}
r_{1}^{\prime} \\
r_{2} \\
\cdots \\
r_{n}
\end{array}\right)=A_{11} \operatorname{det}\left(\begin{array}{c}
r_{1} \\
r_{2} \\
\cdots \\
r_{n}
\end{array}\right)=A_{11} W(t) .
$$
Evaluating similarly the other terms in (3.57), we obtain
$$
W^{\prime}(t)=\left(A_{11}+A_{22}+\ldots+A_{n n}\right) W(t)=(\operatorname{trace} A) W(t) .
$$
By Lemma 3.11, $W(t)$ is either identically 0 or never zero. In the first case there is nothing to prove. In the second case, we can solve the above ODE using the method of separation of variables. Indeed, dividing it by $W(t)$ and integrating in $t$, we obtain
$$
\ln \frac{W(t)}{W\left(t_{0}\right)}=\int_{t_{0}}^{t} \operatorname{trace} A(\tau) d \tau
$$
(note that $W(t)$ and $W\left(t_{0}\right)$ have the same sign so that the argument of $\ln$ is positive), whence (3.56) follows.

Corollary. Consider a scalar ODE
$$
x^{(n)}+a_{1}(t) x^{(n-1)}+\ldots+a_{n}(t) x=0,
$$
where $a_{k}(t)$ are continuous functions on an interval $I \subset \mathbb{R}$. If $x_{1}(t), \ldots, x_{n}(t)$ are $n$ solutions to this equation then their Wronskian $W(t)$ satisfies the identity
$$
W(t)=W\left(t_{0}\right) \exp \left(-\int_{t_{0}}^{t} a_{1}(\tau) d \tau\right) .
$$

Proof. The scalar ODE is equivalent to the normal system $\mathbf{x}^{\prime}=A \mathbf{x}$ where
$$
A=\left(\begin{array}{ccccc}
0 & 1 & 0 & \ldots & 0 \\
0 & 0 & 1 & \ldots & 0 \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
0 & 0 & 0 & \ldots & 1 \\
-a_{n} & -a_{n-1} & -a_{n-2} & \ldots & -a_{1}
\end{array}\right) \text { and } \mathbf{x}=\left(\begin{array}{c}
x \\
x^{\prime} \\
\ldots \\
x^{(n-1)}
\end{array}\right) .
$$
Since the Wronskian of the normal system coincides with $W(t)$, (3.59) follows from (3.56) because trace $A=-a_{1}$.

In the case of an ODE of the 2nd order
$$
x^{\prime \prime}+a_{1}(t) x^{\prime}+a_{2}(t) x=0,
$$
the Liouville formula can help in finding the general solution if a particular solution is known. Indeed, if $x_{0}(t)$ is a particular non-zero solution and $x(t)$ is any other solution then we have by (3.59)
$$
\operatorname{det}\left(\begin{array}{cc}
x_{0} & x \\
x_{0}^{\prime} & x^{\prime}
\end{array}\right)=C \exp \left(-\int a_{1}(t) d t\right),
$$
that is,
$$
x_{0} x^{\prime}-x x_{0}^{\prime}=C \exp \left(-\int a_{1}(t) d t\right) .
$$
Using the identity
$$
\frac{x_{0} x^{\prime}-x x_{0}^{\prime}}{x_{0}^{2}}=\left(\frac{x}{x_{0}}\right)^{\prime},
$$
we obtain the ODE
$$
\left(\frac{x}{x_{0}}\right)^{\prime}=\frac{C \exp \left(-\int a_{1}(t) d t\right)}{x_{0}^{2}},
$$
and by integrating it we obtain $\frac{x}{x_{0}}$ and, hence, $x$ (cf. Exercise 35).

Example. Consider the ODE
$$
x^{\prime \prime}-2\left(1+\tan ^{2} t\right) x=0 .
$$
One solution can be guessed: $x_{0}(t)=\tan t$, using the fact that
$$
\frac{d}{d t} \tan t=\frac{1}{\cos ^{2} t}=\tan ^{2} t+1
$$
and
$$
\frac{d^{2}}{d t^{2}} \tan t=2 \tan t\left(\tan ^{2} t+1\right) .
$$
Hence, for $x(t)$ we obtain from (3.60)
$$
\left(\frac{x}{\tan t}\right)^{\prime}=\frac{C}{\tan ^{2} t},
$$
whence ${ }^{9}$
$$
x=C \tan t \int \frac{d t}{\tan ^{2} t}=C \tan t\left(-t-\cot t+C_{1}\right) .
$$
Renaming the constants, we obtain the answer
$$
x(t)=C_{1}(t \tan t+1)+C_{2} \tan t .
$$

${ }^{9}$ To evaluate the integral $\int \frac{d t}{\tan ^{2} t}=\int \cot ^{2} t d t$, use the identity
$$
(\cot t)^{\prime}=-\cot ^{2} t-1,
$$
which yields
$$
\int \cot ^{2} t d t=-t-\cot t+C .
$$
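The Liouville formula also lends itself to a quick numerical test. The following sketch is our own illustration (the time-dependent matrix $A(t)$ is an arbitrary choice): it integrates two solutions of a planar system $x'=A(t)x$ with SciPy and compares the Wronskian at $t=1$ with $W(0)\exp\left(\int_0^1 \operatorname{trace} A(\tau)\,d\tau\right)$.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# A time-dependent 2x2 matrix (arbitrary choice for the test).
def A(t):
    return np.array([[np.sin(t), 1.0],
                     [0.5, t]])

def rhs(t, y):
    return A(t) @ y

# Two solutions with linearly independent initial vectors e1 and e2.
s1 = solve_ivp(rhs, (0, 1), [1.0, 0.0], rtol=1e-10, atol=1e-12)
s2 = solve_ivp(rhs, (0, 1), [0.0, 1.0], rtol=1e-10, atol=1e-12)

W0 = 1.0                                   # det of (e1 | e2)
W1 = np.linalg.det(np.column_stack([s1.y[:, -1], s2.y[:, -1]]))

trace_integral, _ = quad(lambda t: np.sin(t) + t, 0, 1)   # trace A = sin t + t
print(W1, W0 * np.exp(trace_integral))     # the two numbers agree
```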
### Linear homogeneous systems with constant coefficients

Here we will be concerned with finding the general solution to linear systems of the form $x^{\prime}=A x$ where $A \in \mathbb{C}^{n \times n}$ is a constant $n \times n$ matrix with complex entries and $x(t)$ is a function from $\mathbb{R}$ to $\mathbb{C}^{n}$. As we know, it suffices to find $n$ linearly independent solutions and then take their linear combination.

We start with a simple observation. Let us try to find a solution in the form $x=e^{\lambda t} v$ where $v$ is a non-zero vector in $\mathbb{C}^{n}$ that does not depend on $t$. Then the equation $x^{\prime}=A x$ becomes
$$
\lambda e^{\lambda t} v=e^{\lambda t} A v,
$$
that is, $A v=\lambda v$. Recall that any non-zero vector $v$ that satisfies the identity $A v=\lambda v$ for some constant $\lambda$ is called an eigenvector of $A$, and $\lambda$ is called the eigenvalue. Hence, the function $x(t)=e^{\lambda t} v$ is a non-trivial solution to $x^{\prime}=A x$ provided $v$ is an eigenvector of $A$ and $\lambda$ is the corresponding eigenvalue.

The fact that $\lambda$ is an eigenvalue means that the matrix $A-\lambda \mathrm{id}$ is not invertible, that is,
$$
\operatorname{det}(A-\lambda \mathrm{id})=0 .
$$
This equation is called the characteristic equation of the matrix $A$ and can be used to determine the eigenvalues. Then the eigenvector is determined from the equation
$$
(A-\lambda \mathrm{id}) v=0 .
$$
Note that the eigenvector is not unique; for example, if $v$ is an eigenvector then $c v$ is also an eigenvector for any constant $c$. The function
$$
P(\lambda):=\operatorname{det}(A-\lambda \mathrm{id})
$$
is clearly a polynomial of $\lambda$ of order $n$. It is called the characteristic polynomial of the matrix $A$. Hence, the eigenvalues of $A$ are the roots of the characteristic polynomial $P(\lambda)$.

Lemma 3.13 If an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $v_{1}, \ldots, v_{n}$ with the (complex) eigenvalues $\lambda_{1}, \ldots, \lambda_{n}$ then the general complex solution of the ODE $x^{\prime}=A x$ is given by
$$
x(t)=\sum_{k=1}^{n} C_{k} e^{\lambda_{k} t} v_{k},
$$
where $C_{1}, \ldots, C_{n}$ are arbitrary complex constants. If $A$ is a real matrix and $\lambda$ is a non-real eigenvalue of $A$ with an eigenvector $v$ then $\bar{\lambda}$ is an eigenvalue with eigenvector $\bar{v}$, and the terms $e^{\lambda t} v, e^{\bar{\lambda} t} \bar{v}$ in (3.63) can be replaced by the couple $\operatorname{Re}\left(e^{\lambda t} v\right), \operatorname{Im}\left(e^{\lambda t} v\right)$.

Proof. As we have seen already, each function $e^{\lambda_{k} t} v_{k}$ is a solution. Since the vectors $\left\{v_{k}\right\}_{k=1}^{n}$ are linearly independent, the functions $\left\{e^{\lambda_{k} t} v_{k}\right\}_{k=1}^{n}$ are linearly independent, whence the first claim follows from Theorem 3.1.

If $A v=\lambda v$ then, applying the complex conjugation and using the fact that the entries of $A$ are real, we obtain $A \bar{v}=\bar{\lambda} \bar{v}$, so that $\bar{\lambda}$ is an eigenvalue with eigenvector $\bar{v}$. Since the functions $e^{\lambda t} v$ and $e^{\bar{\lambda} t} \bar{v}$ are solutions, their linear combinations
$$
\operatorname{Re} e^{\lambda t} v=\frac{e^{\lambda t} v+e^{\bar{\lambda} t} \bar{v}}{2} \text { and } \operatorname{Im} e^{\lambda t} v=\frac{e^{\lambda t} v-e^{\bar{\lambda} t} \bar{v}}{2 i}
$$
are also solutions.
Since $e^{\lambda t} v$ and $e^{\bar{\lambda} t} \bar{v}$ can also be expressed via these solutions:
$$
\begin{aligned}
e^{\lambda t} v & =\operatorname{Re} e^{\lambda t} v+i \operatorname{Im} e^{\lambda t} v, \\
e^{\bar{\lambda} t} \bar{v} & =\operatorname{Re} e^{\lambda t} v-i \operatorname{Im} e^{\lambda t} v,
\end{aligned}
$$
replacing in (3.63) the terms $e^{\lambda t} v, e^{\bar{\lambda} t} \bar{v}$ by the couple $\operatorname{Re}\left(e^{\lambda t} v\right), \operatorname{Im}\left(e^{\lambda t} v\right)$ does not change the set of functions, which finishes the proof.

It is known from Linear Algebra that if $A$ has $n$ distinct eigenvalues then their eigenvectors are automatically linearly independent, and Lemma 3.13 applies. Also, if $A$ is a symmetric matrix then there is a basis of eigenvectors, and Lemma 3.13 applies.

Example. Consider the system
$$
\left\{\begin{array}{l}
x^{\prime}=y \\
y^{\prime}=x .
\end{array}\right.
$$
The vector form of this system is $\mathbf{x}^{\prime}=A \mathbf{x}$ where $A=\left(\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right)$. The characteristic polynomial is
$$
P(\lambda)=\operatorname{det}\left(\begin{array}{cc}
-\lambda & 1 \\
1 & -\lambda
\end{array}\right)=\lambda^{2}-1,
$$
the characteristic equation is $\lambda^{2}-1=0$, whence the eigenvalues are $\lambda_{1}=1, \lambda_{2}=-1$. For $\lambda=\lambda_{1}=1$ we obtain the equation (3.62) for $v=\left(\begin{array}{l}a \\ b\end{array}\right)$:
$$
\left(\begin{array}{cc}
-1 & 1 \\
1 & -1
\end{array}\right)\left(\begin{array}{l}
a \\
b
\end{array}\right)=0,
$$
which gives only one independent equation $a-b=0$. Choosing $a=1$, we obtain $b=1$, whence
$$
v_{1}=\left(\begin{array}{l}
1 \\
1
\end{array}\right) .
$$
Similarly, for $\lambda=\lambda_{2}=-1$ we have the equation for $v=\left(\begin{array}{l}a \\ b\end{array}\right)$
$$
\left(\begin{array}{ll}
1 & 1 \\
1 & 1
\end{array}\right)\left(\begin{array}{l}
a \\
b
\end{array}\right)=0,
$$
which amounts to $a+b=0$. Hence, the eigenvector for $\lambda_{2}=-1$ is
$$
v_{2}=\left(\begin{array}{c}
1 \\
-1
\end{array}\right) .
$$
Since the vectors $v_{1}$ and $v_{2}$ are independent, we obtain the general solution in the form
$$
\mathbf{x}(t)=C_{1} e^{t}\left(\begin{array}{l}
1 \\
1
\end{array}\right)+C_{2} e^{-t}\left(\begin{array}{c}
1 \\
-1
\end{array}\right)=\left(\begin{array}{c}
C_{1} e^{t}+C_{2} e^{-t} \\
C_{1} e^{t}-C_{2} e^{-t}
\end{array}\right),
$$
that is, $x(t)=C_{1} e^{t}+C_{2} e^{-t}$ and $y(t)=C_{1} e^{t}-C_{2} e^{-t}$.
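In practice the eigenvalues and eigenvectors are computed numerically. A minimal sketch with NumPy (our own illustration) reproduces the above example and verifies that each pair $(\lambda_k, v_k)$ yields a solution $e^{\lambda_k t}v_k$ of $x'=Ax$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Columns of V are the eigenvectors (normalized, hence scalar multiples
# of (1,1) and (1,-1) from the text).
lam, V = np.linalg.eig(A)
print(lam)            # eigenvalues 1 and -1 (order may vary)

# Check that x(t) = exp(lam_k t) v_k solves x' = A x at a sample time t:
t = 0.7
for k in range(2):
    x = np.exp(lam[k] * t) * V[:, k]
    x_prime = lam[k] * np.exp(lam[k] * t) * V[:, k]
    assert np.allclose(x_prime, A @ x)
```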
Example. Consider the system
$$
\left\{\begin{array}{l}
x^{\prime}=-y \\
y^{\prime}=x .
\end{array}\right.
$$
The matrix of the system is $A=\left(\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right)$, and the characteristic polynomial is
$$
P(\lambda)=\operatorname{det}\left(\begin{array}{cc}
-\lambda & -1 \\
1 & -\lambda
\end{array}\right)=\lambda^{2}+1 .
$$
Hence, the characteristic equation is $\lambda^{2}+1=0$, whence $\lambda_{1}=i$ and $\lambda_{2}=-i$. For $\lambda=\lambda_{1}=i$ we obtain the equation for the eigenvector $v=\left(\begin{array}{l}a \\ b\end{array}\right)$
$$
\left(\begin{array}{cc}
-i & -1 \\
1 & -i
\end{array}\right)\left(\begin{array}{l}
a \\
b
\end{array}\right)=0,
$$
which amounts to the single equation $i a+b=0$. Choosing $a=i$, we obtain $b=1$, whence
$$
v_{1}=\left(\begin{array}{c}
i \\
1
\end{array}\right),
$$
and the corresponding solution of the ODE is
$$
\mathbf{x}_{1}(t)=e^{i t}\left(\begin{array}{c}
i \\
1
\end{array}\right)=\left(\begin{array}{c}
-\sin t+i \cos t \\
\cos t+i \sin t
\end{array}\right) .
$$
Since this solution is complex, we obtain the general solution using the second claim of Lemma 3.13:
$$
\mathbf{x}(t)=C_{1} \operatorname{Re} \mathbf{x}_{1}+C_{2} \operatorname{Im} \mathbf{x}_{1}=C_{1}\left(\begin{array}{c}
-\sin t \\
\cos t
\end{array}\right)+C_{2}\left(\begin{array}{c}
\cos t \\
\sin t
\end{array}\right)=\left(\begin{array}{c}
-C_{1} \sin t+C_{2} \cos t \\
C_{1} \cos t+C_{2} \sin t
\end{array}\right) .
$$

Example. Consider a normal system
$$
\left\{\begin{array}{l}
x^{\prime}=y \\
y^{\prime}=0 .
\end{array}\right.
$$
This system is trivially solved to obtain $y=C_{1}$ and $x=C_{1} t+C_{2}$. However, if we try to solve it using the above method, we fail. Indeed, the matrix of the system is $A=\left(\begin{array}{ll}0 & 1 \\ 0 & 0\end{array}\right)$, the characteristic polynomial is
$$
P(\lambda)=\operatorname{det}\left(\begin{array}{cc}
-\lambda & 1 \\
0 & -\lambda
\end{array}\right)=\lambda^{2},
$$
and the characteristic equation $P(\lambda)=0$ yields only one eigenvalue $\lambda=0$. The eigenvector $v=\left(\begin{array}{l}a \\ b\end{array}\right)$ satisfies the equation
$$
\left(\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right)\left(\begin{array}{l}
a \\
b
\end{array}\right)=0,
$$
whence $b=0$. That is, the only eigenvector (up to a constant multiple) is $v=\left(\begin{array}{l}1 \\ 0\end{array}\right)$, and the only solution we obtain in this way is the constant solution $\mathbf{x}(t)=\left(\begin{array}{l}1 \\ 0\end{array}\right)$. The problem lies in the properties of this matrix - it does not have a basis of eigenvectors, which is needed for this method. In order to handle such cases, we use a different approach.

#### Functions of operators and matrices

Recall that a scalar ODE $x^{\prime}=A x$ has the solution $x(t)=C e^{A t}$. Now if $A$ is an $n \times n$ matrix, we may be able to use this formula if we define what $e^{A t}$ is. It suffices to define what $e^{A}$ is for any matrix $A$. It is convenient to do this for linear operators acting in $\mathbb{C}^{n}$. Recall that a linear operator in $\mathbb{C}^{n}$ is a mapping $A: \mathbb{C}^{n} \rightarrow \mathbb{C}^{n}$ such that, for all $x, y \in \mathbb{C}^{n}$ and $\lambda \in \mathbb{C}$,
$$
\begin{aligned}
A(x+y) & =A x+A y, \\
A(\lambda x) & =\lambda A x .
\end{aligned}
$$
Any $n \times n$ matrix defines a linear operator in $\mathbb{C}^{n}$ using multiplication of column-vectors by this matrix. Moreover, any linear operator can be represented in this form, so that there is a one-to-one correspondence ${ }^{10}$ between linear operators and matrices.

${ }^{10}$ This correspondence depends on the choice of a basis in $\mathbb{C}^{n}$.

Denote the family of all linear operators in $\mathbb{C}^{n}$ by $\mathcal{L}\left(\mathbb{C}^{n}\right)$. For any two operators $A, B$, define their sum $A+B$ by
$$
(A+B) x=A x+B x
$$
and the product by a scalar $\lambda \in \mathbb{C}$ by
$$
(\lambda A)(x)=\lambda A x
$$
for all $x \in \mathbb{C}^{n}$. With these operations, $\mathcal{L}\left(\mathbb{C}^{n}\right)$ is a linear space over $\mathbb{C}$. Since any operator can be identified with an $n \times n$ matrix, the dimension of the linear space $\mathcal{L}\left(\mathbb{C}^{n}\right)$ is $n^{2}$. Apart from the linear structure, the product $A B$ of operators is defined in $\mathcal{L}\left(\mathbb{C}^{n}\right)$ as composition, that is,
$$
(A B) x=A(B x) .
$$
Fix a norm $\|\cdot\|$ in $\mathbb{C}^{n}$, for example, the $\infty$-norm
$$
\|x\|_{\infty}:=\max _{1 \leq k \leq n}\left|x_{k}\right|,
$$
where $x_{1}, \ldots, x_{n}$ are the components of the vector $x$.
Define the associated operator norm in $\mathcal{L}\left(\mathbb{C}^{n}\right)$ by
$$
\|A\|=\sup _{x \in \mathbb{C}^{n} \backslash\{0\}} \frac{\|A x\|}{\|x\|} .
$$

Claim. The operator norm is a norm in $\mathcal{L}\left(\mathbb{C}^{n}\right)$.

Proof. 1. Let us first show that $\|A\|$ is finite. Represent $A$ as a matrix $\left(A_{k j}\right)$ in the standard basis. Since all norms in any finite-dimensional linear space are equivalent, we can assume in the sequel that $\|x\|=\|x\|_{\infty}$. Then
$$
\begin{aligned}
\|A x\|_{\infty} & =\max _{k}\left|(A x)_{k}\right|=\max _{k}\left|\sum_{j} A_{k j} x_{j}\right| \\
& \leq\left(\max _{k} \sum_{j}\left|A_{k j}\right|\right) \max _{j}\left|x_{j}\right|=C\|x\|_{\infty},
\end{aligned}
$$
where $C<\infty$. Therefore, $\|A\| \leq C<\infty$.

2. Clearly, $\|A\| \geq 0$. Let us show that $\|A\|>0$ if $A \neq 0$. Indeed, if $A \neq 0$ then there is $x \in \mathbb{C}^{n}$ such that $A x \neq 0$ and $\|A x\|>0$, whence
$$
\|A\| \geq \frac{\|A x\|}{\|x\|}>0 .
$$

3. Let us prove the triangle inequality: $\|A+B\| \leq\|A\|+\|B\|$. Indeed, by definition (3.64)
$$
\begin{aligned}
\|A+B\| & =\sup _{x} \frac{\|(A+B) x\|}{\|x\|} \leq \sup _{x} \frac{\|A x\|+\|B x\|}{\|x\|} \\
& \leq \sup _{x} \frac{\|A x\|}{\|x\|}+\sup _{x} \frac{\|B x\|}{\|x\|} \\
& =\|A\|+\|B\| .
\end{aligned}
$$

4. Let us prove the scaling property: $\|\lambda A\|=|\lambda|\|A\|$ for any $\lambda \in \mathbb{C}$. Indeed, by (3.64)
$$
\|\lambda A\|=\sup _{x} \frac{\|(\lambda A) x\|}{\|x\|}=\sup _{x} \frac{|\lambda|\|A x\|}{\|x\|}=|\lambda|\|A\| .
$$

In addition to the general properties of a norm, the operator norm satisfies the inequality
$$
\|A B\| \leq\|A\|\|B\| .
$$
Indeed, it follows from (3.64) that $\|A x\| \leq\|A\|\|x\|$, whence
$$
\|(A B) x\|=\|A(B x)\| \leq\|A\|\|B x\| \leq\|A\|\|B\|\|x\|,
$$
which yields (3.65). Hence, $\mathcal{L}\left(\mathbb{C}^{n}\right)$ is a normed linear space.
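For the $\infty$-norm on $\mathbb{C}^n$, the induced operator norm is exactly the maximal row sum $\max_k\sum_j|A_{kj}|$ appearing in step 1 above. A quick numerical illustration (our own sketch; `np.linalg.norm(A, np.inf)` computes precisely this row-sum norm) checks the triangle inequality and the submultiplicativity (3.65) on random real matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
op_norm = lambda M: np.linalg.norm(M, np.inf)   # max row sum = induced inf-norm

for _ in range(1000):
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(4, 4))
    assert op_norm(A + B) <= op_norm(A) + op_norm(B) + 1e-12
    assert op_norm(A @ B) <= op_norm(A) * op_norm(B) + 1e-12
```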
Since this space is finite-dimensional, it is complete as a normed space. As in any complete normed linear space, one can define in $\mathcal{L}\left(\mathbb{C}^{n}\right)$ the notion of the limit of a sequence of operators. Namely, we say that a sequence $\left\{A_{k}\right\}$ of operators converges to an operator $A$ if
$$
\left\|A_{k}-A\right\| \rightarrow 0 \text { as } k \rightarrow \infty .
$$
Representing an operator $A$ as a matrix $\left(A_{i j}\right)_{i, j=1}^{n}$, one can consider the $\infty$-norm on operators defined by
$$
\|A\|_{\infty}=\max _{1 \leq i, j \leq n}\left|A_{i j}\right| .
$$
Clearly, the convergence in the $\infty$-norm is equivalent to the convergence of each component $A_{i j}$ separately. Since all norms in $\mathcal{L}\left(\mathbb{C}^{n}\right)$ are equivalent, we see that convergence of a sequence of operators in any norm is equivalent to the convergence of the individual components of the operators.

Given a series $\sum_{k=1}^{\infty} A_{k}$ of operators, the sum of the series is defined as the limit of the sequence of partial sums $\sum_{k=1}^{N} A_{k}$ as $N \rightarrow \infty$. That is, $S=\sum_{k=1}^{\infty} A_{k}$ if
$$
\left\|S-\sum_{k=1}^{N} A_{k}\right\| \rightarrow 0 \quad \text { as } N \rightarrow \infty .
$$

Claim. Assume that
$$
\sum_{k=1}^{\infty}\left\|A_{k}\right\|<\infty .
$$
Then the series $\sum_{k=1}^{\infty} A_{k}$ converges.

Proof. Indeed, since all norms in $\mathcal{L}\left(\mathbb{C}^{n}\right)$ are equivalent, we can assume that the norm in (3.66) is the $\infty$-norm. Denoting by $\left(A_{k}\right)_{i j}$ the $i j$-components of the matrix $A_{k}$, we obtain that the condition (3.66) is equivalent to
$$
\sum_{k=1}^{\infty}\left|\left(A_{k}\right)_{i j}\right|<\infty
$$
for any indices $1 \leq i, j \leq n$. Then (3.67) implies that the numerical series
$$
\sum_{k=1}^{\infty}\left(A_{k}\right)_{i j}
$$
converges absolutely, which implies that the operator series $\sum_{k=1}^{\infty} A_{k}$ also converges.

If the condition (3.66) is satisfied then the series $\sum_{k=1}^{\infty} A_{k}$ is called absolutely convergent. Hence, the above Claim means that absolute convergence of an operator series implies the usual convergence.

Definition. If $A \in \mathcal{L}\left(\mathbb{C}^{n}\right)$ then define $e^{A} \in \mathcal{L}\left(\mathbb{C}^{n}\right)$ by means of the identity
$$
e^{A}=\mathrm{id}+A+\frac{A^{2}}{2 !}+\ldots+\frac{A^{k}}{k !}+\ldots=\sum_{k=0}^{\infty} \frac{A^{k}}{k !},
$$
where id is the identity operator. Of course, in order to justify this definition, we need to verify the convergence of the series (3.68).

Lemma 3.14 The exponential series (3.68) converges for any $A \in \mathcal{L}\left(\mathbb{C}^{n}\right)$.

Proof. It suffices to show that the series converges absolutely, that is,
$$
\sum_{k=0}^{\infty}\left\|\frac{A^{k}}{k !}\right\|<\infty .
$$
It follows from (3.65) that $\left\|A^{k}\right\| \leq\|A\|^{k}$, whence
$$
\sum_{k=0}^{\infty}\left\|\frac{A^{k}}{k !}\right\| \leq \sum_{k=0}^{\infty} \frac{\|A\|^{k}}{k !}=e^{\|A\|}<\infty,
$$
and the claim follows.

Theorem 3.15 For any $A \in \mathcal{L}\left(\mathbb{C}^{n}\right)$ the function $F(t)=e^{t A}$ satisfies the ODE $F^{\prime}=A F$. Consequently, the general solution of the ODE $x^{\prime}=A x$ is given by $x=e^{t A} v$ where $v \in \mathbb{C}^{n}$ is an arbitrary vector.

Here $x=x(t)$ is, as usual, a $\mathbb{C}^{n}$-valued function on $\mathbb{R}$, while $F(t)$ is an $\mathcal{L}\left(\mathbb{C}^{n}\right)$-valued function on $\mathbb{R}$. Since $\mathcal{L}\left(\mathbb{C}^{n}\right)$ is linearly isomorphic to $\mathbb{C}^{n^{2}}$, we can also say that $F(t)$ is a $\mathbb{C}^{n^{2}}$-valued function on $\mathbb{R}$, which allows us to understand the ODE $F^{\prime}=A F$ in the same sense as a general vector ODE. The novelty here is that we regard $A \in \mathcal{L}\left(\mathbb{C}^{n}\right)$ as an operator in $\mathcal{L}\left(\mathbb{C}^{n}\right)$ (that is, an element of $\mathcal{L}\left(\mathcal{L}\left(\mathbb{C}^{n}\right)\right)$) by means of the operator multiplication.

Proof. We have by definition
$$
F(t)=e^{t A}=\sum_{k=0}^{\infty} \frac{t^{k} A^{k}}{k !} .
$$
Consider the series of the derivatives:
$$
G(t):=\sum_{k=0}^{\infty} \frac{d}{d t}\left(\frac{t^{k} A^{k}}{k !}\right)=\sum_{k=1}^{\infty} \frac{t^{k-1} A^{k}}{(k-1) !}=A \sum_{k=1}^{\infty} \frac{t^{k-1} A^{k-1}}{(k-1) !}=A F .
$$
It is easy to see (in the same way as Lemma 3.14) that this series converges locally uniformly in $t$, which implies that $F$ is differentiable in $t$ and $F^{\prime}=G$. It follows that $F^{\prime}=A F$. For the function $x(t)=e^{t A} v$, we have
$$
x^{\prime}=\left(e^{t A}\right)^{\prime} v=\left(A e^{t A}\right) v=A x,
$$
so that $x(t)$ solves the ODE $x^{\prime}=A x$ for any $v$.
If $x(t)$ is any solution to $x^{\prime}=A x$ then set $v=x(0)$ and observe that the function $e^{t A} v$ satisfies the same ODE and the initial condition
$$
\left.e^{t A} v\right|_{t=0}=\operatorname{id} v=v .
$$
Hence, both $x(t)$ and $e^{t A} v$ solve the same initial value problem, whence the identity $x(t)=e^{t A} v$ follows by the uniqueness theorem.

Remark. If $v_{1}, \ldots, v_{n}$ are linearly independent vectors in $\mathbb{C}^{n}$ then the solutions $e^{t A} v_{1}, \ldots, e^{t A} v_{n}$ are also linearly independent and, hence, can be used to form the fundamental matrix. In particular, choosing $v_{1}, \ldots, v_{n}$ to be the canonical basis in $\mathbb{C}^{n}$, we obtain that $e^{t A} v_{k}$ is the $k$-th column of the matrix $e^{t A}$. Hence, the matrix $e^{t A}$ is itself a fundamental matrix of the system $x^{\prime}=A x$.

Example. Let $A$ be the diagonal matrix
$$
A=\operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{n}\right) .
$$
Then
$$
A^{k}=\operatorname{diag}\left(\lambda_{1}^{k}, \ldots, \lambda_{n}^{k}\right)
$$
and
$$
e^{t A}=\operatorname{diag}\left(e^{\lambda_{1} t}, \ldots, e^{\lambda_{n} t}\right) .
$$
Let
$$
A=\left(\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right) .
$$
Then $A^{2}=0$ and all higher powers of $A$ are also 0, and we obtain
$$
e^{t A}=\mathrm{id}+t A=\left(\begin{array}{cc}
1 & t \\
0 & 1
\end{array}\right) .
$$
Hence, the general solution to $x^{\prime}=A x$ is
$$
x(t)=e^{t A} v=\left(\begin{array}{ll}
1 & t \\
0 & 1
\end{array}\right)\left(\begin{array}{c}
C_{1} \\
C_{2}
\end{array}\right)=\left(\begin{array}{c}
C_{1}+C_{2} t \\
C_{2}
\end{array}\right),
$$
where $C_{1}, C_{2}$ are the components of $v$.
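Numerically, $e^{tA}$ is available as `scipy.linalg.expm`. A minimal sketch (our illustration, with arbitrary sample values) reproduces the two examples above:

```python
import numpy as np
from scipy.linalg import expm

t = 2.0

# Diagonal matrix: e^{tA} = diag(e^{lambda_k t}).
A = np.diag([1.0, -3.0])
assert np.allclose(expm(t * A), np.diag(np.exp(t * np.array([1.0, -3.0]))))

# Nilpotent matrix: the series terminates, so e^{tA} = id + tA.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(expm(t * N), np.eye(2) + t * N)
```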
Definition. Operators $A, B \in \mathcal{L}\left(\mathbb{C}^{n}\right)$ are said to commute if $A B=B A$.

In general, operators do not have to commute. If $A$ and $B$ commute then various nice formulas take place, for example,
$$
(A+B)^{2}=A^{2}+2 A B+B^{2} .
$$
Indeed, in general we have
$$
(A+B)^{2}=(A+B)(A+B)=A^{2}+A B+B A+B^{2},
$$
which yields (3.69) if $A B=B A$.

Lemma 3.16 If $A$ and $B$ commute then
$$
e^{A+B}=e^{A} e^{B} .
$$

Proof. Let us prove a sequence of claims.

Claim 1. If $A, B, C$ commute pairwise then so do $A C$ and $B$. Indeed,
$$
(A C) B=A(C B)=A(B C)=(A B) C=(B A) C=B(A C) .
$$

Claim 2. If $A$ and $B$ commute then so do $e^{A}$ and $B$. Indeed, it follows from Claim 1 that $A^{k}$ and $B$ commute for any natural $k$, whence
$$
e^{A} B=\left(\sum_{k=0}^{\infty} \frac{A^{k}}{k !}\right) B=B\left(\sum_{k=0}^{\infty} \frac{A^{k}}{k !}\right)=B e^{A} .
$$

Claim 3. If $A(t)$ and $B(t)$ are differentiable functions from $\mathbb{R}$ to $\mathcal{L}\left(\mathbb{C}^{n}\right)$ then
$$
(A(t) B(t))^{\prime}=A^{\prime}(t) B(t)+A(t) B^{\prime}(t) .
$$
Warning: watch the correct order of the factors. Indeed, we have for any component
$$
(A B)_{i j}^{\prime}=\left(\sum_{k} A_{i k} B_{k j}\right)^{\prime}=\sum_{k} A_{i k}^{\prime} B_{k j}+\sum_{k} A_{i k} B_{k j}^{\prime}=\left(A^{\prime} B\right)_{i j}+\left(A B^{\prime}\right)_{i j}=\left(A^{\prime} B+A B^{\prime}\right)_{i j},
$$
whence (3.70) follows.

Now we can finish the proof of the lemma. Consider the function $F: \mathbb{R} \rightarrow \mathcal{L}\left(\mathbb{C}^{n}\right)$ defined by
$$
F(t)=e^{t A} e^{t B} .
$$
Differentiating it using Theorem 3.15, Claims 2 and 3, we obtain
$$
F^{\prime}(t)=\left(e^{t A}\right)^{\prime} e^{t B}+e^{t A}\left(e^{t B}\right)^{\prime}=A e^{t A} e^{t B}+e^{t A} B e^{t B}=A e^{t A} e^{t B}+B e^{t A} e^{t B}=(A+B) F(t) .
$$
On the other hand, by Theorem 3.15, the function $G(t)=e^{t(A+B)}$ satisfies the same equation
$$
G^{\prime}=(A+B) G .
$$
Since $G(0)=F(0)=\mathrm{id}$, we obtain that the functions $F(t)$ and $G(t)$ solve the same IVP, whence by the uniqueness theorem they are identically equal. In particular, $F(1)=G(1)$, which means $e^{A} e^{B}=e^{A+B}$.

Alternative proof. Let us briefly discuss a direct algebraic proof of $e^{A+B}=e^{A} e^{B}$. One first proves the binomial formula
$$
(A+B)^{n}=\sum_{k=0}^{n}\left(\begin{array}{l}
n \\
k
\end{array}\right) A^{k} B^{n-k},
$$
using the fact that $A$ and $B$ commute (this can be done by induction in the same way as for numbers). Then we have
$$
e^{A+B}=\sum_{n=0}^{\infty} \frac{(A+B)^{n}}{n !}=\sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{A^{k} B^{n-k}}{k !(n-k) !}
$$
and, using the Cauchy product formula,
$$
e^{A} e^{B}=\sum_{m=0}^{\infty} \frac{A^{m}}{m !} \sum_{l=0}^{\infty} \frac{B^{l}}{l !}=\sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{A^{k} B^{n-k}}{k !(n-k) !} .
$$
Of course, one needs to justify the Cauchy product formula for absolutely convergent series of operators.

#### Jordan cells

Definition. An $n \times n$ matrix $A$ is called a Jordan cell if it has the form
$$
A=\left(\begin{array}{ccccc}
\lambda & 1 & 0 & \cdots & 0 \\
0 & \lambda & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & & \ddots & \lambda & 1 \\
0 & \cdots & \cdots & 0 & \lambda
\end{array}\right),
$$
where $\lambda$ is any complex number. Here all the entries on the main diagonal are $\lambda$, all the entries just above the main diagonal are 1, and all other entries are 0.

Let us use Lemma 3.16 in order to evaluate $e^{t A}$ where $A$ is a Jordan cell. Clearly, we have $A=\lambda \mathrm{id}+N$ where
$$
N=\left(\begin{array}{ccccc}
0 & 1 & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
\vdots & & & \ddots & 1 \\
0 & \cdots & \cdots & \cdots & 0
\end{array}\right) .
$$
The matrix (3.72) is called a nilpotent Jordan cell. Since the matrices $\lambda \mathrm{id}$ and $N$ commute (because id commutes with anything), Lemma 3.16 yields
$$
e^{t A}=e^{t \lambda \mathrm{id}} e^{t N}=e^{t \lambda} e^{t N} .
$$
Hence, we need to evaluate $e^{t N}$, and for that we first evaluate the powers $N^{2}, N^{3}$, etc. Observe that the components of the matrix $N$ are as follows:
$$
N_{i j}=\left\{\begin{array}{ll}
1, & \text { if } j=i+1 \\
0, & \text { otherwise, }
\end{array}\right.
$$
where $i$ is the row index and $j$ is the column index. It follows that
$$
\left(N^{2}\right)_{i j}=\sum_{k=1}^{n} N_{i k} N_{k j}= \begin{cases}1, & \text { if } j=i+2 \\ 0, & \text { otherwise, }\end{cases}
$$
that is,
$$
N^{2}=\left(\begin{array}{ccccc}
0 & 0 & 1 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots \\
\vdots & & \ddots & \ddots & 1 \\
\vdots & & & \ddots & 0 \\
0 & \ldots & \ldots & \cdots & 0
\end{array}\right) .
$$
Here the entries with value 1 are located on the diagonal that is two positions above the main diagonal. Similarly, we obtain
$$
N^{k}=\left(\begin{array}{ccccc}
0 & \ddots & 1 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots \\
\vdots & & \ddots & \ddots & 1 \\
\vdots & & & \ddots & \ddots \\
0 & \cdots & \cdots & \cdots & 0
\end{array}\right),
$$
where the entries with value 1 are located on the diagonal that is $k$ positions above the main diagonal, provided $k<n$, and $N^{k}=0$ if $k \geq n$.
Any matrix $A$ with the property that $A^{k}=0$ for some natural $k$ is called nilpotent. Hence, $N$ is a nilpotent matrix, which explains the term "nilpotent Jordan cell". It follows that
$$
e^{t N}=\mathrm{id}+\frac{t}{1 !} N+\frac{t^{2}}{2 !} N^{2}+\ldots+\frac{t^{n-1}}{(n-1) !} N^{n-1}=\left(\begin{array}{ccccc}
1 & \frac{t}{1 !} & \frac{t^{2}}{2 !} & \ddots & \frac{t^{n-1}}{(n-1) !} \\
0 & \ddots & \ddots & \ddots & \ddots \\
\vdots & \ddots & \ddots & \ddots & \frac{t^{2}}{2 !} \\
\vdots & & \ddots & \ddots & \frac{t}{1 !} \\
0 & \cdots & \cdots & 0 & 1
\end{array}\right) .
$$
Combining with (3.73), we obtain the following statement.

Lemma 3.17 If $A$ is a Jordan cell (3.71) then, for any $t \in \mathbb{R}$,
$$
e^{t A}=\left(\begin{array}{ccccc}
e^{\lambda t} & \frac{t}{1 !} e^{t \lambda} & \frac{t^{2}}{2 !} e^{t \lambda} & \ddots & \frac{t^{n-1}}{(n-1) !} e^{t \lambda} \\
0 & e^{t \lambda} & \frac{t}{1 !} e^{t \lambda} & \ddots & \ddots \\
\vdots & \ddots & \ddots & \ddots & \frac{t^{2}}{2 !} e^{t \lambda} \\
\vdots & & \ddots & \ddots & \frac{t}{1 !} e^{t \lambda} \\
0 & \cdots & \cdots & 0 & e^{t \lambda}
\end{array}\right) .
$$

By Theorem 3.15, the general solution of the system $x^{\prime}=A x$ is $x(t)=e^{t A} v$ where $v$ is an arbitrary vector from $\mathbb{C}^{n}$. Setting $v=\left(C_{1}, \ldots, C_{n}\right)$, we obtain that the general solution is
$$
x(t)=C_{1} x_{1}+\ldots+C_{n} x_{n},
$$
where $x_{1}, \ldots, x_{n}$ are the columns of the matrix $e^{t A}$ (which form a sequence of $n$ linearly independent solutions). Using (3.75), we obtain
$$
\begin{aligned}
x_{1}(t) & =e^{\lambda t}(1,0, \ldots, 0) \\
x_{2}(t) & =e^{\lambda t}\left(\frac{t}{1 !}, 1,0, \ldots, 0\right) \\
x_{3}(t) & =e^{\lambda t}\left(\frac{t^{2}}{2 !}, \frac{t}{1 !}, 1,0, \ldots, 0\right) \\
\ldots & \\
x_{n}(t) & =e^{\lambda t}\left(\frac{t^{n-1}}{(n-1) !}, \ldots, \frac{t}{1 !}, 1\right) .
\end{aligned}
$$
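Lemma 3.17 can be cross-checked against SciPy's matrix exponential. A small sketch (our illustration, for a $3\times 3$ Jordan cell with the arbitrary values $\lambda=2$, $t=0.5$):

```python
import numpy as np
from scipy.linalg import expm

lam, t = 2.0, 0.5
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

# Closed form from Lemma 3.17: e^{tJ} = e^{lam t} (id + tN + t^2/2! N^2).
closed = np.exp(lam * t) * np.array([[1.0, t, t**2 / 2],
                                     [0.0, 1.0, t],
                                     [0.0, 0.0, 1.0]])
assert np.allclose(expm(t * J), closed)
```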
#### Jordan normal form

Definition. If $A$ is an $m \times m$ matrix and $B$ is an $l \times l$ matrix then their tensor product is an $n \times n$ matrix $C$ where $n=m+l$ and
$$
C=\left(\begin{array}{cc}
A & 0 \\
0 & B
\end{array}\right) .
$$
That is, the matrix $C$ consists of the two blocks $A$ and $B$ located on the main diagonal, and all other terms are 0. Notation for the tensor product: $C=A \otimes B$.

Lemma 3.18 The following identity is true:
$$
e^{A \otimes B}=e^{A} \otimes e^{B} .
$$
In extended notation, (3.76) means that
$$
e^{C}=\left(\begin{array}{cc}
e^{A} & 0 \\
0 & e^{B}
\end{array}\right) .
$$

Proof. Observe first that if $A_{1}, A_{2}$ are $m \times m$ matrices and $B_{1}, B_{2}$ are $l \times l$ matrices then
$$
\left(A_{1} \otimes B_{1}\right)\left(A_{2} \otimes B_{2}\right)=\left(A_{1} A_{2}\right) \otimes\left(B_{1} B_{2}\right) .
$$
Indeed, in the extended form this identity means
$$
\left(\begin{array}{cc}
A_{1} & 0 \\
0 & B_{1}
\end{array}\right)\left(\begin{array}{cc}
A_{2} & 0 \\
0 & B_{2}
\end{array}\right)=\left(\begin{array}{cc}
A_{1} A_{2} & 0 \\
0 & B_{1} B_{2}
\end{array}\right),
$$
which follows easily from the rule of multiplication of matrices. Hence, the tensor product commutes with the matrix multiplication. It is also obvious that the tensor product commutes with addition of matrices and taking limits. Therefore, we obtain
$$
e^{A \otimes B}=\sum_{k=0}^{\infty} \frac{(A \otimes B)^{k}}{k !}=\sum_{k=0}^{\infty} \frac{A^{k} \otimes B^{k}}{k !}=\left(\sum_{k=0}^{\infty} \frac{A^{k}}{k !}\right) \otimes\left(\sum_{k=0}^{\infty} \frac{B^{k}}{k !}\right)=e^{A} \otimes e^{B} .
$$

Definition. A tensor product of a finite number of Jordan cells is called a Jordan normal form. That is, a Jordan normal form is a matrix as follows:
$$
J_{1} \otimes J_{2} \otimes \cdots \otimes J_{k}=\left(\begin{array}{ccccc}
J_{1} & & & & \\
& J_{2} & & 0 & \\
& & \ddots & & \\
& 0 & & J_{k-1} & \\
& & & & J_{k}
\end{array}\right),
$$
where the $J_{j}$ are Jordan cells.

Lemmas 3.17 and 3.18 allow us to evaluate $e^{t A}$ if $A$ is a Jordan normal form.

Example. Solve the system $x^{\prime}=A x$ where
$$
A=\left(\begin{array}{llll}
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 2 & 1 \\
0 & 0 & 0 & 2
\end{array}\right) .
$$
Clearly, the matrix $A$ is the tensor product of two Jordan cells:
$$
J_{1}=\left(\begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}\right) \text { and } J_{2}=\left(\begin{array}{cc}
2 & 1 \\
0 & 2
\end{array}\right) .
$$
By Lemma 3.17, we obtain
$$
e^{t J_{1}}=\left(\begin{array}{cc}
e^{t} & t e^{t} \\
0 & e^{t}
\end{array}\right) \text { and } e^{t J_{2}}=\left(\begin{array}{cc}
e^{2 t} & t e^{2 t} \\
0 & e^{2 t}
\end{array}\right),
$$
whence by Lemma 3.18,
$$
e^{t A}=\left(\begin{array}{cccc}
e^{t} & t e^{t} & 0 & 0 \\
0 & e^{t} & 0 & 0 \\
0 & 0 & e^{2 t} & t e^{2 t} \\
0 & 0 & 0 & e^{2 t}
\end{array}\right) .
$$
The columns of this matrix form 4 linearly independent solutions
$$
\begin{aligned}
& x_{1}=\left(e^{t}, 0,0,0\right) \\
& x_{2}=\left(t e^{t}, e^{t}, 0,0\right) \\
& x_{3}=\left(0,0, e^{2 t}, 0\right) \\
& x_{4}=\left(0,0, t e^{2 t}, e^{2 t}\right)
\end{aligned}
$$
and the general solution is
$$
\begin{aligned}
x(t) & =C_{1} x_{1}+C_{2} x_{2}+C_{3} x_{3}+C_{4} x_{4} \\
& =\left(C_{1} e^{t}+C_{2} t e^{t}, C_{2} e^{t}, C_{3} e^{2 t}+C_{4} t e^{2 t}, C_{4} e^{2 t}\right) .
\end{aligned}
$$
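Again this is easy to confirm numerically; a short sketch (our own illustration, with `scipy.linalg.block_diag` building the block matrix) checks Lemma 3.18 on the $4\times 4$ example:

```python
import numpy as np
from scipy.linalg import expm, block_diag

t = 1.3
J1 = np.array([[1.0, 1.0], [0.0, 1.0]])
J2 = np.array([[2.0, 1.0], [0.0, 2.0]])
A = block_diag(J1, J2)

# e^{tA} equals the block-diagonal matrix of the cell exponentials.
assert np.allclose(expm(t * A), block_diag(expm(t * J1), expm(t * J2)))
```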
#### Transformation of an operator to a Jordan normal form

Given a basis $b=\left\{b_{1}, b_{2}, \ldots, b_{n}\right\}$ in $\mathbb{C}^{n}$ and a vector $x \in \mathbb{C}^{n}$, denote by $x^{b}$ the column vector that represents $x$ in this basis. That is, if $x_{i}^{b}$ is the $i$-th component of $x^{b}$ then $$ x=x_{1}^{b} b_{1}+x_{2}^{b} b_{2}+\ldots+x_{n}^{b} b_{n}=\sum_{i=1}^{n} x_{i}^{b} b_{i} . $$ Similarly, if $A$ is a linear operator in $\mathbb{C}^{n}$ then denote by $A^{b}$ the matrix that represents $A$ in the basis $b$. It is determined by the identity $$ (A x)^{b}=A^{b} x^{b}, $$ which should be true for all $x \in \mathbb{C}^{n}$, where in the right hand side we have the product of the $n \times n$ matrix $A^{b}$ and the column vector $x^{b}$. Clearly, $\left(b_{i}\right)^{b}=(0, \ldots, 1, \ldots, 0)$ where 1 is at position $i$, which implies that $\left(A b_{i}\right)^{b}=A^{b}\left(b_{i}\right)^{b}$ is the $i$-th column of $A^{b}$. In other words, we have the identity $$ A^{b}=\left(\left(A b_{1}\right)^{b}\left|\left(A b_{2}\right)^{b}\right| \cdots \mid\left(A b_{n}\right)^{b}\right), $$ which can be stated as the following rule: the $i$-th column of $A^{b}$ is the column vector $A b_{i}$ written in the basis $b_{1}, \ldots, b_{n}$.

Example. Consider the operator $A$ in $\mathbb{C}^{2}$ that is given in the canonical basis $e=\left\{e_{1}, e_{2}\right\}$ by the matrix $$ A^{e}=\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) . $$ Consider another basis $b=\left\{b_{1}, b_{2}\right\}$ defined by $$ b_{1}=e_{1}-e_{2}=\left(\begin{array}{c} 1 \\ -1 \end{array}\right) \text { and } b_{2}=e_{1}+e_{2}=\left(\begin{array}{l} 1 \\ 1 \end{array}\right) . $$ Then $$ \left(A b_{1}\right)^{e}=\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right)\left(\begin{array}{c} 1 \\ -1 \end{array}\right)=\left(\begin{array}{c} -1 \\ 1 \end{array}\right) $$ and $$ \left(A b_{2}\right)^{e}=\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right)\left(\begin{array}{l} 1 \\ 1 \end{array}\right)=\left(\begin{array}{l} 1 \\ 1 \end{array}\right) . $$ It follows that $A b_{1}=-b_{1}$ and $A b_{2}=b_{2}$, whence $$ A^{b}=\left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array}\right) . $$

The following theorem is proved in Linear Algebra courses.

Theorem. For any operator $A \in \mathcal{L}\left(\mathbb{C}^{n}\right)$ there is a basis $b$ in $\mathbb{C}^{n}$ such that the matrix $A^{b}$ is a Jordan normal form.

Let $J$ be a Jordan cell of $A^{b}$ with $\lambda$ on the diagonal and suppose that the rows (and columns) of $J$ in $A^{b}$ are indexed by $j, j+1, \ldots, j+p-1$, so that $J$ is a $p \times p$ matrix. Then the sequence of vectors $b_{j}, \ldots, b_{j+p-1}$ is referred to as the Jordan chain of the given Jordan cell. In particular, the basis $b$ is the disjoint union of the Jordan chains. Since the columns $j, \ldots, j+p-1$ of $A^{b}-\lambda\, \mathrm{id}$ contain the nilpotent cell $J-\lambda\, \mathrm{id}$ (which has 1 on the superdiagonal and 0 everywhere else), and the $k$-th column of $A^{b}-\lambda\, \mathrm{id}$ is the vector $(A-\lambda\, \mathrm{id}) b_{k}$ written in the basis $b$, we conclude that $$ \begin{aligned} & (A-\lambda \mathrm{id}) b_{j}=0 \\ & (A-\lambda \mathrm{id}) b_{j+1}=b_{j} \\ & (A-\lambda \mathrm{id}) b_{j+2}=b_{j+1} \\ & \quad \cdots \\ & (A-\lambda \mathrm{id}) b_{j+p-1}=b_{j+p-2} . \end{aligned} $$ In particular, $b_{j}$ is an eigenvector of $A$ with the eigenvalue $\lambda$. The vectors $b_{j+1}, \ldots, b_{j+p-1}$ are called the generalized eigenvectors of $A$ (more precisely, $b_{j+1}$ is the first generalized eigenvector, $b_{j+2}$ is the second generalized eigenvector, etc.). Hence, any Jordan chain contains exactly one eigenvector, and the remaining vectors are the generalized eigenvectors.

Theorem 3.19 Consider the system $x^{\prime}=A x$ with a constant linear operator $A$ and let $A^{b}$ be the Jordan normal form of $A$. Then each Jordan cell $J$ of $A^{b}$ of dimension $p$ with $\lambda$ on the diagonal gives rise to $p$ linearly independent solutions as follows: $$ \begin{aligned} x_{1}(t) & =e^{\lambda t} v_{1} \\ x_{2}(t) & =e^{\lambda t}\left(\frac{t}{1 !} v_{1}+v_{2}\right) \\ x_{3}(t) & =e^{\lambda t}\left(\frac{t^{2}}{2 !} v_{1}+\frac{t}{1 !} v_{2}+v_{3}\right) \\ & \ldots \\ x_{p}(t) & =e^{\lambda t}\left(\frac{t^{p-1}}{(p-1) !} v_{1}+\ldots+\frac{t}{1 !} v_{p-1}+v_{p}\right), \end{aligned} $$ where $\left\{v_{1}, \ldots, v_{p}\right\}$ is the Jordan chain of $J$. The set of all $n$ solutions obtained across all Jordan cells is linearly independent.

Proof.
In the basis $b$, we have by Lemmas 3.17 and 3.18 $$ e^{t A^{b}}=\left(\begin{array}{cccccc} \ddots & & & & & \\ & e^{\lambda t} & \frac{t}{1 !} e^{\lambda t} & \cdots & \frac{t^{p-1}}{(p-1) !} e^{\lambda t} & \\ & 0 & e^{\lambda t} & \ddots & \vdots & \\ & \vdots & \ddots & \ddots & \frac{t}{1 !} e^{\lambda t} & \\ & 0 & \cdots & 0 & e^{\lambda t} & \\ & & & & & \ddots \end{array}\right), $$ where the block in the middle is $e^{t J}$. By Lemma 3.15, the columns of this matrix give $n$ linearly independent solutions to the ODE $x^{\prime}=A x$. Out of these solutions, select the $p$ solutions that correspond to the $p$ columns of the cell $e^{t J}$, that is, $$ \begin{aligned} & x_{1}(t)=(\ldots \underbrace{e^{\lambda t}, 0, \ldots, 0}_{p} \ldots) \\ & x_{2}(t)=(\ldots \underbrace{\frac{t}{1 !} e^{\lambda t}, e^{\lambda t}, 0, \ldots, 0}_{p} \ldots) \\ & \quad \cdots \\ & x_{p}(t)=(\ldots \underbrace{\frac{t^{p-1}}{(p-1) !} e^{\lambda t}, \ldots, \frac{t}{1 !} e^{\lambda t}, e^{\lambda t}}_{p} \ldots), \end{aligned} $$ where all the vectors are written in the basis $b$, the horizontal braces mark the columns of the cell $J$, and all the terms outside the horizontal braces are zeros. Representing these vectors in the coordinateless form via the Jordan chain $v_{1}, \ldots, v_{p}$, we obtain the solutions as in the statement of Theorem 3.19.

Let $\lambda$ be an eigenvalue of an operator $A$. Denote by $m$ the algebraic multiplicity of $\lambda$, that is, its multiplicity as a root of the characteristic polynomial${ }^{11}$ $P(\lambda)=\operatorname{det}(A-\lambda \mathrm{id})$. Denote by $g$ the geometric multiplicity of $\lambda$, that is, the dimension of the eigenspace of $\lambda$: $$ g=\operatorname{dim} \operatorname{ker}(A-\lambda \mathrm{id}) . $$ In other words, $g$ is the maximal number of linearly independent eigenvectors of $\lambda$. The numbers $m$ and $g$ can be characterized in terms of the Jordan normal form $A^{b}$ of $A$ as follows: $m$ is the total number of occurrences of $\lambda$ on the diagonal${ }^{12}$ of $A^{b}$, whereas $g$ is equal to the number of the Jordan cells with $\lambda$ on the diagonal${ }^{13}$. It follows that $g \leq m$, and the equality occurs if and only if all the Jordan cells with the eigenvalue $\lambda$ have dimension 1. Despite this relation to the Jordan normal form, $m$ and $g$ can be determined without a priori finding the Jordan normal form, as is clear from the definitions of $m$ and $g$.

Theorem 3.19' Let $\lambda \in \mathbb{C}$ be an eigenvalue of an operator $A$ with the algebraic multiplicity $m$ and the geometric multiplicity $g$. Then $\lambda$ gives rise to $m$ linearly independent solutions of the system $x^{\prime}=A x$ that can be found in the form $$ x(t)=e^{\lambda t}\left(u_{1}+u_{2} t+\ldots+u_{s} t^{s-1}\right), $$ where $s=m-g+1$ and $u_{j}$ are vectors that can be determined by substituting the above function into the equation $x^{\prime}=A x$. The set of all $n$ solutions obtained in this way using all the eigenvalues of $A$ is linearly independent.

Remark. For practical use, one should substitute (3.78) into the system $x^{\prime}=A x$ considering $u_{i j}$ as unknowns (where $u_{i j}$ is the $i$-th component of the vector $u_{j}$) and solve the resulting linear algebraic system with respect to $u_{i j}$. The result will contain $m$ arbitrary constants, and the solution in the form (3.78) will appear as a linear combination of $m$ independent solutions.
Proof. Let $p_{1}, \ldots, p_{g}$ be the dimensions of all the Jordan cells with the eigenvalue $\lambda$ (as we know, the number of such cells is $g$). Then $\lambda$ occurs $p_{1}+\ldots+p_{g}$ times on the diagonal of the Jordan normal form, which implies $$ \sum_{j=1}^{g} p_{j}=m . $$

${ }^{11}$ To compute $P(\lambda)$, one needs to write the operator $A$ in some basis $b$ as a matrix $A^{b}$ and then evaluate $\operatorname{det}\left(A^{b}-\lambda \mathrm{id}\right)$. The characteristic polynomial does not depend on the choice of the basis $b$. Indeed, if $b^{\prime}$ is another basis then the relation between the matrices $A^{b}$ and $A^{b^{\prime}}$ is given by $A^{b}=C A^{b^{\prime}} C^{-1}$ where $C$ is the matrix of the change of basis. It follows that $A^{b}-\lambda \mathrm{id}=C\left(A^{b^{\prime}}-\lambda \mathrm{id}\right) C^{-1}$, whence $\operatorname{det}\left(A^{b}-\lambda \mathrm{id}\right)=\operatorname{det} C \operatorname{det}\left(A^{b^{\prime}}-\lambda \mathrm{id}\right) \operatorname{det} C^{-1}=\operatorname{det}\left(A^{b^{\prime}}-\lambda \mathrm{id}\right)$.

${ }^{12}$ If $\lambda$ occurs $k$ times on the diagonal of $A^{b}$ then $\lambda$ is a root of multiplicity $k$ of the characteristic polynomial of $A^{b}$, which coincides with that of $A$. Hence, $k=m$.

${ }^{13}$ Note that each Jordan cell corresponds to exactly one eigenvector.

Hence, the total number of linearly independent solutions that are given by Theorem 3.19 for the eigenvalue $\lambda$ is equal to $m$. Let us show that each of the solutions of Theorem 3.19 has the form (3.78). Indeed, each solution of Theorem 3.19 is already in the form $$ e^{\lambda t} \text { times a polynomial of } t \text { of degree } \leq p_{j}-1 . $$ To ensure that these solutions can be represented in the form (3.78), we only need to verify that $p_{j}-1 \leq s-1$. Indeed, we have $$ \sum_{j=1}^{g}\left(p_{j}-1\right)=\left(\sum_{j=1}^{g} p_{j}\right)-g=m-g=s-1, $$ and since every term $p_{j}-1$ is non-negative, each of them is bounded by the total sum, whence the inequality $p_{j}-1 \leq s-1$ follows. In particular, if $m=g$, that is, $s=1$, then $m$ independent solutions can be found in the form $x(t)=e^{\lambda t} v$, where $v$ is one of $m$ independent eigenvectors of $\lambda$. This case has already been discussed above. Consider now some examples where $g<m$.

Example. Solve the system $$ x^{\prime}=\left(\begin{array}{cc} 2 & 1 \\ -1 & 4 \end{array}\right) x . $$ The characteristic polynomial is $$ P(\lambda)=\operatorname{det}(A-\lambda \mathrm{id})=\operatorname{det}\left(\begin{array}{cc} 2-\lambda & 1 \\ -1 & 4-\lambda \end{array}\right)=\lambda^{2}-6 \lambda+9=(\lambda-3)^{2}, $$ and the only eigenvalue is $\lambda_{1}=3$ with the algebraic multiplicity $m_{1}=2$. The equation for an eigenvector $v$ is $$ (A-\lambda \mathrm{id}) v=0, $$ that is, for $v=(a, b)$, $$ \left(\begin{array}{ll} -1 & 1 \\ -1 & 1 \end{array}\right)\left(\begin{array}{l} a \\ b \end{array}\right)=0, $$ which is equivalent to $-a+b=0$. Setting $a=1$ and $b=1$, we obtain the unique (up to a constant multiple) eigenvector $$ v_{1}=\left(\begin{array}{l} 1 \\ 1 \end{array}\right) . $$ Hence, the geometric multiplicity is $g_{1}=1$, so that there is only one Jordan cell with the eigenvalue $\lambda_{1}$, which allows us to determine immediately the Jordan normal form of the given matrix: $$ \left(\begin{array}{ll} 3 & 1 \\ 0 & 3 \end{array}\right) . $$ By Theorem 3.19, we obtain the solutions $$ \begin{aligned} & x_{1}(t)=e^{3 t} v_{1} \\ & x_{2}(t)=e^{3 t}\left(t v_{1}+v_{2}\right), \end{aligned} $$ where $v_{2}$ is the first generalized eigenvector, which can be determined from the equation $$ (A-\lambda \mathrm{id}) v_{2}=v_{1} . $$ Setting $v_{2}=(a, b)$, we obtain the equation $$ \left(\begin{array}{ll} -1 & 1 \\ -1 & 1 \end{array}\right)\left(\begin{array}{l} a \\ b \end{array}\right)=\left(\begin{array}{l} 1 \\ 1 \end{array}\right), $$ which is equivalent to $-a+b=1$. Hence, setting $a=0$ and $b=1$, we obtain $$ v_{2}=\left(\begin{array}{l} 0 \\ 1 \end{array}\right), $$ whence $$ x_{2}(t)=e^{3 t}\left(\begin{array}{c} t \\ t+1 \end{array}\right) . $$ Finally, the general solution is $$ x(t)=C_{1} x_{1}+C_{2} x_{2}=e^{3 t}\left(\begin{array}{l} C_{1}+C_{2} t \\ C_{1}+C_{2}(t+1) \end{array}\right) . $$
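Remark. The eigenvector and the generalized eigenvector of this example can also be obtained numerically; a sketch (assuming NumPy), where the singular but consistent system $(A-3\,\mathrm{id}) v_{2}=v_{1}$ is solved in the least-squares sense:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 4.0]])
lam = 3.0
M = A - lam * np.eye(2)

v1 = np.array([1.0, 1.0])               # eigenvector: M v1 = 0
assert np.allclose(M @ v1, 0.0)

# generalized eigenvector: M v2 = v1 (singular system, minimum-norm solution)
v2, *_ = np.linalg.lstsq(M, v1, rcond=None)
assert np.allclose(M @ v2, v1)

# check that x2(t) = e^{3t}(t v1 + v2) solves x' = A x
t = 0.3
x2  = np.exp(lam * t) * (t * v1 + v2)
dx2 = np.exp(lam * t) * (lam * (t * v1 + v2) + v1)   # product rule
assert np.allclose(A @ x2, dx2)
```

Note that `lstsq` returns one particular solution of $-a+b=1$, which differs from the choice $v_{2}=(0,1)$ above by a multiple of $v_{1}$; any such choice is admissible.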
Example. Solve the system $$ x^{\prime}=\left(\begin{array}{ccc} 2 & 1 & 1 \\ -2 & 0 & -1 \\ 2 & 1 & 2 \end{array}\right) x . $$ The characteristic polynomial is $$ \begin{aligned} P(\lambda) & =\operatorname{det}(A-\lambda \mathrm{id})=\operatorname{det}\left(\begin{array}{ccc} 2-\lambda & 1 & 1 \\ -2 & -\lambda & -1 \\ 2 & 1 & 2-\lambda \end{array}\right) \\ & =-\lambda^{3}+4 \lambda^{2}-5 \lambda+2=(2-\lambda)(\lambda-1)^{2} . \end{aligned} $$ The roots are $\lambda_{1}=2$ with $m_{1}=1$ and $\lambda_{2}=1$ with $m_{2}=2$. The eigenvectors $v$ for $\lambda_{1}$ are determined from the equation $$ \left(A-\lambda_{1} \mathrm{id}\right) v=0, $$ whence, for $v=(a, b, c)$, $$ \left(\begin{array}{ccc} 0 & 1 & 1 \\ -2 & -2 & -1 \\ 2 & 1 & 0 \end{array}\right)\left(\begin{array}{l} a \\ b \\ c \end{array}\right)=0, $$ that is, $$ \left\{\begin{array}{l} b+c=0 \\ -2 a-2 b-c=0 \\ 2 a+b=0 . \end{array}\right. $$ The second equation is a linear combination of the first and the last ones. Setting $a=1$, we find $b=-2$ and $c=2$, so that the unique (up to a constant multiple) eigenvector is $$ v=\left(\begin{array}{c} 1 \\ -2 \\ 2 \end{array}\right), $$ which gives the first solution $$ x_{1}(t)=e^{2 t}\left(\begin{array}{c} 1 \\ -2 \\ 2 \end{array}\right) . $$ The eigenvectors for $\lambda_{2}=1$ satisfy the equation $$ \left(A-\lambda_{2} \mathrm{id}\right) v=0, $$ whence, for $v=(a, b, c)$, $$ \left(\begin{array}{ccc} 1 & 1 & 1 \\ -2 & -1 & -1 \\ 2 & 1 & 1 \end{array}\right)\left(\begin{array}{l} a \\ b \\ c \end{array}\right)=0, $$ whence $$ \left\{\begin{array}{l} a+b+c=0 \\ -2 a-b-c=0 \\ 2 a+b+c=0 . \end{array}\right. $$ Solving the system, we obtain a unique (up to a constant multiple) solution $a=0$, $b=1$, $c=-1$. Hence, we obtain only one eigenvector $$ v_{1}=\left(\begin{array}{c} 0 \\ 1 \\ -1 \end{array}\right) . $$ Therefore, $g_{2}=1$, that is, there is only one Jordan cell with the eigenvalue $\lambda_{2}$, which implies that the Jordan normal form of the given matrix is as follows: $$ \left(\begin{array}{lll} 2 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array}\right) . $$ By Theorem 3.19, the cell with $\lambda_{2}=1$ gives rise to two more solutions $$ x_{2}(t)=e^{t} v_{1}=e^{t}\left(\begin{array}{c} 0 \\ 1 \\ -1 \end{array}\right) $$ and $$ x_{3}(t)=e^{t}\left(t v_{1}+v_{2}\right), $$ where $v_{2}$ is the first generalized eigenvector, to be determined from the equation $$ \left(A-\lambda_{2} \mathrm{id}\right) v_{2}=v_{1} . $$ Setting $v_{2}=(a, b, c)$, we obtain $$ \left(\begin{array}{ccc} 1 & 1 & 1 \\ -2 & -1 & -1 \\ 2 & 1 & 1 \end{array}\right)\left(\begin{array}{l} a \\ b \\ c \end{array}\right)=\left(\begin{array}{c} 0 \\ 1 \\ -1 \end{array}\right), $$ that is, $$ \left\{\begin{array}{l} a+b+c=0 \\ -2 a-b-c=1 \\ 2 a+b+c=-1 . \end{array}\right. $$ This system has a solution $a=-1$, $b=0$ and $c=1$. Hence, $$ v_{2}=\left(\begin{array}{c} -1 \\ 0 \\ 1 \end{array}\right), $$ and the third solution is $$ x_{3}(t)=e^{t}\left(t v_{1}+v_{2}\right)=e^{t}\left(\begin{array}{c} -1 \\ t \\ 1-t \end{array}\right) . $$ Finally, the general solution is $$ x(t)=C_{1} x_{1}+C_{2} x_{2}+C_{3} x_{3}=\left(\begin{array}{l} C_{1} e^{2 t}-C_{3} e^{t} \\ -2 C_{1} e^{2 t}+\left(C_{2}+C_{3} t\right) e^{t} \\ 2 C_{1} e^{2 t}+\left(C_{3}-C_{2}-C_{3} t\right) e^{t} \end{array}\right) . $$
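Remark. The general solution just obtained can be validated against a numerical integrator; a sketch (assuming NumPy and SciPy), with arbitrarily chosen constants $C_{1}, C_{2}, C_{3}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[ 2.0, 1.0,  1.0],
              [-2.0, 0.0, -1.0],
              [ 2.0, 1.0,  2.0]])

v  = np.array([ 1.0, -2.0,  2.0])   # eigenvector for lambda = 2
v1 = np.array([ 0.0,  1.0, -1.0])   # eigenvector for lambda = 1
v2 = np.array([-1.0,  0.0,  1.0])   # generalized eigenvector: (A - id) v2 = v1

def exact(t, C1=1.0, C2=-2.0, C3=0.5):
    return (C1 * np.exp(2 * t) * v
            + C2 * np.exp(t) * v1
            + C3 * np.exp(t) * (t * v1 + v2))

sol = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), exact(0.0),
                dense_output=True, rtol=1e-10, atol=1e-12)
for t in np.linspace(0.0, 1.0, 5):
    assert np.allclose(sol.sol(t), exact(t), atol=1e-6)
```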
## Qualitative analysis of ODEs

### Autonomous systems

Consider a vector ODE $$ x^{\prime}=f(x), $$ where the right hand side does not depend on $t$. Such equations are called autonomous. Here $f$ is defined on an open set $\Omega \subset \mathbb{R}^{n}$ (or $\Omega \subset \mathbb{C}^{n}$) and takes values in $\mathbb{R}^{n}$ (resp., $\mathbb{C}^{n}$), so that the domain of the ODE is $\mathbb{R} \times \Omega$.

Definition. The set $\Omega$ is called the phase space of the ODE and any path $x: I \rightarrow \Omega$, where $x(t)$ is a solution of the ODE on an interval $I$, is called a phase trajectory. A plot of all phase trajectories is called a phase diagram or a phase portrait.

Recall that the graph of a solution (or the integral curve) is the set of points $(t, x(t))$ in $\mathbb{R} \times \Omega$. Hence, a phase trajectory can be regarded as the projection of an integral curve onto $\Omega$. Assume in the sequel that $f$ is continuously differentiable in $\Omega$. For any $y \in \Omega$, denote by $x(t, y)$ the maximal solution to the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(x) \\ x(0)=y . \end{array}\right. $$ Recall that, by Theorem 2.14, the domain of the function $x(t, y)$ is an open subset of $\mathbb{R}^{n+1}$ and $x(t, y)$ is continuously differentiable in this domain. The fact that $f$ does not depend on $t$ implies the following two consequences.

1. If $x(t)$ is a solution of (4.1) then also $x(t-a)$ is a solution of (4.1), for any $a \in \mathbb{R}$. In particular, the function $x\left(t-t_{0}, y\right)$ solves the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(x) \\ x\left(t_{0}\right)=y . \end{array}\right. $$
2. If $f\left(x_{0}\right)=0$ for some $x_{0} \in \Omega$ then the constant function $x(t) \equiv x_{0}$ is a solution of $x^{\prime}=f(x)$. Conversely, if $x(t) \equiv x_{0}$ is a constant solution then $f\left(x_{0}\right)=0$.

Definition. If $f\left(x_{0}\right)=0$ at some point $x_{0} \in \Omega$ then $x_{0}$ is called a stationary point${ }^{14}$ of the ODE $x^{\prime}=f(x)$.

It follows from the above observations that $x_{0}$ is a stationary point if and only if $x\left(t, x_{0}\right) \equiv x_{0}$.
Definition. A stationary point $x_{0}$ is called Lyapunov stable for the system $x^{\prime}=f(x)$ (or the system is called stable at $x_{0}$) if, for any $\varepsilon>0$, there exists $\delta>0$ with the following property: for all $y \in \Omega$ such that $\left\|y-x_{0}\right\|<\delta$, the solution $x(t, y)$ is defined for all $t>0$ and $$ \sup _{t \in(0,+\infty)}\left\|x(t, y)-x_{0}\right\|<\varepsilon . $$ In other words, Lyapunov stability means that if $x(0)$ is close enough to $x_{0}$ then the solution $x(t)$ is defined for all $t>0$ and $$ x(0) \in B\left(x_{0}, \delta\right) \Longrightarrow x(t) \in B\left(x_{0}, \varepsilon\right) \text { for all } t>0 . $$ If we replace in (4.2) the interval $(0,+\infty)$ by any bounded interval $[a, b]$ containing 0 then, by the continuity of $x(t, y)$, $$ \sup _{t \in[a, b]}\left\|x(t, y)-x_{0}\right\|=\sup _{t \in[a, b]}\left\|x(t, y)-x\left(t, x_{0}\right)\right\| \rightarrow 0 \text { as } y \rightarrow x_{0} . $$ Hence, the main issue for the stability is the behavior of solutions as $t \rightarrow+\infty$.

Definition. A stationary point $x_{0}$ is called asymptotically stable for the system $x^{\prime}=f(x)$ (or the system is called asymptotically stable at $x_{0}$) if it is Lyapunov stable and, in addition, $$ \left\|x(t, y)-x_{0}\right\| \rightarrow 0 \text { as } t \rightarrow+\infty, $$ provided $\left\|y-x_{0}\right\|$ is small enough.

Observe that the stability and the asymptotic stability do not depend on the choice of the norm in $\mathbb{R}^{n}$, because all norms in $\mathbb{R}^{n}$ are equivalent.

${ }^{14}$ In the literature one can find the following synonyms for the term "stationary point": rest point, singular point, equilibrium point, fixed point.

### Stability for a linear system

Consider a linear system $x^{\prime}=A x$ in $\mathbb{R}^{n}$ where $A$ is a constant operator. Clearly, $x=0$ is a stationary point.

Theorem 4.1 If for all complex eigenvalues $\lambda$ of $A$ we have $\operatorname{Re} \lambda<0$ then 0 is asymptotically stable for the system $x^{\prime}=A x$. If, for some eigenvalue $\lambda$ of $A$, $\operatorname{Re} \lambda>0$ then 0 is unstable.

Proof. By Theorem 3.19', the general complex solution of $x^{\prime}=A x$ has the form $$ x(t)=\sum_{k=1}^{n} C_{k} e^{\lambda_{k} t} P_{k}(t), $$ where $C_{k}$ are arbitrary complex constants, $\lambda_{1}, \ldots, \lambda_{n}$ are all the eigenvalues of $A$ listed with the algebraic multiplicity, and $P_{k}(t)$ are some vector valued polynomials of $t$. The latter means that $P_{k}(t)=u_{1}+u_{2} t+\ldots+u_{s} t^{s-1}$ for some $s \in \mathbb{N}$ and for some vectors $u_{1}, \ldots, u_{s}$. Note that this solution is obtained by taking a linear combination of $n$ independent solutions $e^{\lambda_{k} t} P_{k}(t)$. Since $$ x(0)=\sum_{k=1}^{n} C_{k} P_{k}(0), $$ we see that the coefficients $C_{k}$ are the components of $x(0)$ in the basis $\left\{P_{k}(0)\right\}_{k=1}^{n}$. It follows from (4.3) that $$ \|x(t)\| \leq \sum_{k=1}^{n}\left|C_{k} e^{\lambda_{k} t}\right|\left\|P_{k}(t)\right\| \leq \max _{k}\left|C_{k}\right| \sum_{k=1}^{n} e^{\left(\operatorname{Re} \lambda_{k}\right) t}\left\|P_{k}(t)\right\| . $$ Set $$ \alpha=\max _{k} \operatorname{Re} \lambda_{k}<0 . $$ Observe that the polynomials admit estimates of the type $$ \left\|P_{k}(t)\right\| \leq C\left(1+t^{N}\right) $$ for all $t>0$ and for some large enough constants $C$ and $N$. Hence, it follows that $$ \|x(t)\| \leq C e^{\alpha t}\left(1+t^{N}\right)\|x(0)\|_{\infty}, $$ where $\|x(0)\|_{\infty}=\max _{k}\left|C_{k}\right|$ denotes the $\infty$-norm of $x(0)$ in the basis $\left\{P_{k}(0)\right\}$. Clearly, by adjusting the constant $C$, we can replace $\|x(0)\|_{\infty}$ by $\|x(0)\|$. Since the function $\left(1+t^{N}\right) e^{\alpha t}$ is bounded on $(0,+\infty)$, we obtain that there is a constant $K$ such that, for all $t>0$, $$ \|x(t)\| \leq K\|x(0)\|, $$ whence it follows that the stationary point 0 is Lyapunov stable. Moreover, since $$ \left(1+t^{N}\right) e^{\alpha t} \rightarrow 0 \text { as } t \rightarrow+\infty, $$ we conclude from (4.4) that $\|x(t)\| \rightarrow 0$ as $t \rightarrow \infty$, that is, the stationary point 0 is asymptotically stable.

Let now $\operatorname{Re} \lambda>0$ for some eigenvalue $\lambda$. To prove that 0 is unstable, it suffices to show that there exists an unbounded real solution $x(t)$, that is, a solution for which $\|x(t)\|$ is not bounded on $(0,+\infty)$ as a function of $t$. Indeed, if such a solution exists then the function $\varepsilon x(t)$ is also an unbounded solution for any $\varepsilon>0$, while its initial value $\varepsilon x(0)$ can be made arbitrarily small by choosing $\varepsilon$ appropriately. To construct an unbounded solution, consider an eigenvector $v$ of the eigenvalue $\lambda$. It gives rise to the solution $$ x(t)=e^{\lambda t} v, $$ for which $$ \|x(t)\|=\left|e^{\lambda t}\right|\|v\|=e^{t \operatorname{Re} \lambda}\|v\| . $$ Hence, $\|x(t)\|$ is unbounded. If $x(t)$ is a real solution then this finishes the proof. In general, if $x(t)$ is a complex solution then either $\operatorname{Re} x(t)$ or $\operatorname{Im} x(t)$ is unbounded (in fact, both are), whence the instability of 0 follows.
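Remark. The criterion of Theorem 4.1 reduces to inspecting $\alpha=\max _{k} \operatorname{Re} \lambda_{k}$; a minimal sketch (assuming NumPy):

```python
import numpy as np

def classify_linear(A, tol=1e-12):
    """Stability of 0 for x' = Ax by Theorem 4.1 (inconclusive when max Re = 0)."""
    alpha = max(np.linalg.eigvals(A).real)
    if alpha < -tol:
        return "asymptotically stable"
    if alpha > tol:
        return "unstable"
    return "Re lambda = 0: Theorem 4.1 does not decide"

print(classify_linear(np.array([[-2.0, -1.0], [3.0, -4.0]])))  # asymptotically stable
print(classify_linear(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # not decided (a center)
```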
This theorem does not answer the question of what happens when $\operatorname{Re} \lambda=0$. We will investigate this for the case $n=2$, where we also give a more detailed description of the phase diagrams.

Consider now a linear system $x^{\prime}=A x$ in $\mathbb{R}^{2}$ where $A$ is a constant operator in $\mathbb{R}^{2}$. Let $b=\left\{b_{1}, b_{2}\right\}$ be the Jordan basis of $A$, so that $A^{b}$ has the Jordan normal form. Consider first the case when the Jordan normal form of $A$ has two Jordan cells, that is, $$ A^{b}=\left(\begin{array}{cc} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{array}\right) . $$ Then $b_{1}$ and $b_{2}$ are the eigenvectors of the eigenvalues $\lambda_{1}$ and $\lambda_{2}$, respectively, and the general solution is $$ x(t)=C_{1} e^{\lambda_{1} t} b_{1}+C_{2} e^{\lambda_{2} t} b_{2} . $$ In other words, in the basis $b$, $$ x(t)=\left(C_{1} e^{\lambda_{1} t}, C_{2} e^{\lambda_{2} t}\right) $$ and $x(0)=\left(C_{1}, C_{2}\right)$. It follows that $$ \|x(t)\|_{\infty}=\max \left(\left|C_{1} e^{\lambda_{1} t}\right|,\left|C_{2} e^{\lambda_{2} t}\right|\right)=\max \left(\left|C_{1}\right| e^{\operatorname{Re} \lambda_{1} t},\left|C_{2}\right| e^{\operatorname{Re} \lambda_{2} t}\right) \leq\|x(0)\|_{\infty} e^{\alpha t}, $$ where $$ \alpha=\max \left(\operatorname{Re} \lambda_{1}, \operatorname{Re} \lambda_{2}\right) . $$ If $\alpha \leq 0$ then $$ \|x(t)\|_{\infty} \leq\|x(0)\|_{\infty}, $$ which implies the Lyapunov stability. As we know from Theorem 4.1, if $\alpha>0$ then the stationary point 0 is unstable. Hence, in this particular situation, the Lyapunov stability is equivalent to $\alpha \leq 0$.

Let us construct the phase diagrams of the system $x^{\prime}=A x$ under the above assumptions.

Case $\lambda_{1}, \lambda_{2}$ are real.
Let $x_{1}(t)$ and $x_{2}(t)$ be the components of the solution $x(t)$ in the basis $\left\{b_{1}, b_{2}\right\}$. Then $$ x_{1}=C_{1} e^{\lambda_{1} t} \text { and } x_{2}=C_{2} e^{\lambda_{2} t} . $$ Assuming that $\lambda_{1}, \lambda_{2} \neq 0$, we obtain the relation between $x_{1}$ and $x_{2}$ as follows: $$ x_{2}=C\left|x_{1}\right|^{\gamma}, $$ where $\gamma=\lambda_{2} / \lambda_{1}$. Hence, the phase diagram consists of all curves of this type, as well as of the half-axes $x_{1}>0$, $x_{1}<0$, $x_{2}>0$, $x_{2}<0$. If $\gamma>0$ (that is, $\lambda_{1}$ and $\lambda_{2}$ are of the same sign) then the phase diagram (or the stationary point) is called a node. One distinguishes a stable node, when $\lambda_{1}, \lambda_{2}<0$, and an unstable node, when $\lambda_{1}, \lambda_{2}>0$. (The omitted figures show the phase portraits of a node with $\gamma>1$ and of a node with $\gamma=1$.) If one or both of $\lambda_{1}, \lambda_{2}$ is 0 then we have a degenerate phase diagram (horizontal or vertical straight lines or just dots). If $\gamma<0$ (that is, $\lambda_{1}$ and $\lambda_{2}$ are of different signs) then the phase diagram is called a saddle (figure omitted). Of course, the saddle is always unstable.

Case $\lambda_{1}$ and $\lambda_{2}$ are complex, say $\lambda_{1}=\alpha-i \beta$ and $\lambda_{2}=\alpha+i \beta$ with $\beta \neq 0$.

Then we rewrite the general solution in the real form $$ x(t)=C_{1} \operatorname{Re}\left(e^{(\alpha-i \beta) t} b_{1}\right)+C_{2} \operatorname{Im}\left(e^{(\alpha-i \beta) t} b_{1}\right) . $$ Note that $b_{1}$ is an eigenvector of $\lambda_{1}$ and, hence, must have a non-trivial imaginary part in any real basis. We claim that in some real basis $b_{1}$ has the form $(1, i)$. Indeed, if $b_{1}=(p, q)$ in the canonical basis $e_{1}, e_{2}$ then, by rotating the basis, we can assume $p, q \neq 0$. Since $b_{1}$ is an eigenvector, it is defined up to a constant multiple, so that we can take $p=1$. Then, setting $q=q_{1}+i q_{2}$, we obtain $$ b_{1}=e_{1}+\left(q_{1}+i q_{2}\right) e_{2}=\left(e_{1}+q_{1} e_{2}\right)+i q_{2} e_{2}=e_{1}^{\prime}+i e_{2}^{\prime}, $$ where $e_{1}^{\prime}=e_{1}+q_{1} e_{2}$ and $e_{2}^{\prime}=q_{2} e_{2}$ form a new basis (the latter uses the fact that $q$ is not real, so that $q_{2} \neq 0$; a real eigenvector is impossible for a non-real eigenvalue). Hence, in the basis $e^{\prime}=\left\{e_{1}^{\prime}, e_{2}^{\prime}\right\}$ we have $b_{1}=(1, i)$. It follows that in the basis $e^{\prime}$ $$ e^{(\alpha-i \beta) t} b_{1}=e^{\alpha t}(\cos \beta t-i \sin \beta t)\left(\begin{array}{c} 1 \\ i \end{array}\right)=\left(\begin{array}{c} e^{\alpha t} \cos \beta t-i e^{\alpha t} \sin \beta t \\ e^{\alpha t} \sin \beta t+i e^{\alpha t} \cos \beta t \end{array}\right) $$ and $$ x(t)=C_{1}\left(\begin{array}{c} e^{\alpha t} \cos \beta t \\ e^{\alpha t} \sin \beta t \end{array}\right)+C_{2}\left(\begin{array}{c} -e^{\alpha t} \sin \beta t \\ e^{\alpha t} \cos \beta t \end{array}\right)=C\left(\begin{array}{c} e^{\alpha t} \cos (\beta t+\psi) \\ e^{\alpha t} \sin (\beta t+\psi) \end{array}\right), $$ where $C=\sqrt{C_{1}^{2}+C_{2}^{2}}$ and $$ \cos \psi=\frac{C_{1}}{C}, \quad \sin \psi=\frac{C_{2}}{C} . $$ If $(r, \theta)$ are the polar coordinates on the plane in the basis $e^{\prime}$ then the polar coordinates for the solution $x(t)$ are $$ r(t)=C e^{\alpha t} \quad \text { and } \quad \theta(t)=\beta t+\psi . $$ If $\alpha \neq 0$ then these equations define a logarithmic spiral, and the phase diagram is called a focus or a spiral (figure omitted). The focus is stable if $\alpha<0$ and unstable if $\alpha>0$.
If $\alpha=0$ (that is, both eigenvalues $\lambda_{1}$ and $\lambda_{2}$ are purely imaginary), then $r(t)=C$, that is, we get a family of concentric circles around 0, and this phase diagram is called a center (figure omitted). In this case, the stationary point is stable but not asymptotically stable.

Consider now the case when the Jordan normal form of $A$ has only one Jordan cell, that is, $$ A^{b}=\left(\begin{array}{ll} \lambda & 1 \\ 0 & \lambda \end{array}\right) . $$ In this case, $\lambda$ must be real, because if $\lambda$ were a non-real root of the characteristic polynomial then $\bar{\lambda}$ would also be a root, which is not possible since $\bar{\lambda}$ does not occur on the diagonal of $A^{b}$. Then the general solution is $$ x(t)=C_{1} e^{\lambda t} b_{1}+C_{2} e^{\lambda t}\left(b_{1} t+b_{2}\right)=\left(C_{1}+C_{2} t\right) e^{\lambda t} b_{1}+C_{2} e^{\lambda t} b_{2}, $$ whence $x(0)=C_{1} b_{1}+C_{2} b_{2}$. That is, in the basis $b$, we can write $x(0)=\left(C_{1}, C_{2}\right)$ and $$ x(t)=\left(e^{\lambda t}\left(C_{1}+C_{2} t\right), e^{\lambda t} C_{2}\right), $$ whence $$ \|x(t)\|_{1}=e^{\lambda t}\left|C_{1}+C_{2} t\right|+e^{\lambda t}\left|C_{2}\right| . $$ If $\lambda<0$ then we obtain again the asymptotic stability (which follows also from Theorem 4.1), while in the case $\lambda \geq 0$ the stationary point 0 is unstable. Indeed, taking $C_{1}=0$ and $C_{2}=1$, we obtain a particular solution with the norm $$ \|x(t)\|_{1}=e^{\lambda t}(t+1), $$ which is unbounded. If $\lambda \neq 0$ then it follows from (4.5) that the components $x_{1}, x_{2}$ of $x$ are related as follows: $$ \frac{x_{1}}{x_{2}}=\frac{C_{1}}{C_{2}}+t \quad \text { and } \quad t=\frac{1}{\lambda} \ln \frac{x_{2}}{C_{2}}, $$ whence $$ x_{1}=C x_{2}+\frac{x_{2} \ln \left|x_{2}\right|}{\lambda} $$ for some constant $C$. The phase diagram in this case (figure omitted) is also called a node. It is stable if $\lambda<0$ and unstable if $\lambda>0$. If $\lambda=0$ then we obtain a degenerate phase diagram: parallel straight lines.

Hence, the main types of the phase diagrams are the node ($\lambda_{1}, \lambda_{2}$ are real, non-zero and of the same sign), the saddle ($\lambda_{1}, \lambda_{2}$ are real, non-zero and of opposite signs), the focus/spiral ($\lambda_{1}, \lambda_{2}$ are non-real and $\operatorname{Re} \lambda \neq 0$) and the center ($\lambda_{1}, \lambda_{2}$ are purely imaginary). Otherwise, the phase diagram consists of parallel straight lines or just dots, and is referred to as degenerate. To summarize the stability investigation, let us emphasize that in the case $\operatorname{Re} \lambda=0$ both stability and instability can happen, depending on the structure of the Jordan normal form.
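Remark. This case analysis is easy to mechanize; a minimal sketch (assuming NumPy) that classifies the phase diagram of a planar linear system from the eigenvalues:

```python
import numpy as np

def phase_type(A, tol=1e-12):
    """Type of the stationary point 0 of a planar linear system x' = Ax."""
    l1, l2 = np.linalg.eigvals(A)
    if abs(l1.imag) > tol:                        # non-real pair alpha +/- i*beta
        return "center" if abs(l1.real) <= tol else "focus/spiral"
    l1, l2 = l1.real, l2.real
    if abs(l1) <= tol or abs(l2) <= tol:          # a zero eigenvalue
        return "degenerate"
    return "node" if l1 * l2 > 0 else "saddle"

print(phase_type(np.array([[-1.0, 0.0], [0.0, -3.0]])))  # node
print(phase_type(np.array([[ 1.0, 0.0], [0.0, -1.0]])))  # saddle
print(phase_type(np.array([[ 0.0, 1.0], [-1.0, 0.0]])))  # center
```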
### Lyapunov's theorem

Consider again an autonomous ODE $x^{\prime}=f(x)$ where $f: \Omega \rightarrow \mathbb{R}^{n}$ is continuously differentiable and $\Omega$ is an open set in $\mathbb{R}^{n}$. Let $x_{0}$ be a stationary point of the system $x^{\prime}=f(x)$, that is, $f\left(x_{0}\right)=0$. We investigate the stability of the stationary point $x_{0}$.

Theorem 4.2 (Lyapunov's theorem) Assume that $f \in C^{2}(\Omega)$ and set $A=f^{\prime}\left(x_{0}\right)$ (that is, $A$ is the Jacobian matrix of $f$ at $x_{0}$). If $\operatorname{Re} \lambda<0$ for all eigenvalues $\lambda$ of $A$ then the stationary point $x_{0}$ is asymptotically stable for the system $x^{\prime}=f(x)$.

Remark. This theorem has a second part, which says the following: if $\operatorname{Re} \lambda>0$ for some eigenvalue $\lambda$ of $A$ then $x_{0}$ is unstable for $x^{\prime}=f(x)$. However, the proof of that part is somewhat lengthy and will not be presented here.

Example. Consider the system $$ \left\{\begin{array}{l} x^{\prime}=\sqrt{4+4 y}-2 e^{x+y} \\ y^{\prime}=\sin 3 x+\ln (1-4 y) . \end{array}\right. $$ It is easy to see that the right hand side vanishes at $(0,0)$, so that $(0,0)$ is a stationary point. Setting $$ f(x, y)=\left(\begin{array}{c} \sqrt{4+4 y}-2 e^{x+y} \\ \sin 3 x+\ln (1-4 y) \end{array}\right), $$ we obtain $$ A=f^{\prime}(0,0)=\left(\begin{array}{ll} \partial_{x} f_{1} & \partial_{y} f_{1} \\ \partial_{x} f_{2} & \partial_{y} f_{2} \end{array}\right)=\left(\begin{array}{cc} -2 & -1 \\ 3 & -4 \end{array}\right) . $$ Another way to obtain this matrix is to expand each component of $f(x, y)$ by the Taylor formula: $$ \begin{aligned} f_{1}(x, y) & =2 \sqrt{1+y}-2 e^{x+y}=2\left(1+\frac{y}{2}+o(y)\right)-2(1+(x+y)+o(|x|+|y|)) \\ & =-2 x-y+o(|x|+|y|) \end{aligned} $$ and $$ \begin{aligned} f_{2}(x, y) & =\sin 3 x+\ln (1-4 y)=3 x+o(x)-4 y+o(y) \\ & =3 x-4 y+o(|x|+|y|) . \end{aligned} $$ Hence, $$ f(x, y)=\left(\begin{array}{cc} -2 & -1 \\ 3 & -4 \end{array}\right)\left(\begin{array}{l} x \\ y \end{array}\right)+o(|x|+|y|), $$ whence we obtain the same matrix $A$. The characteristic polynomial of $A$ is $$ \operatorname{det}\left(\begin{array}{cc} -2-\lambda & -1 \\ 3 & -4-\lambda \end{array}\right)=\lambda^{2}+6 \lambda+11, $$ and the eigenvalues are $$ \lambda_{1,2}=-3 \pm i \sqrt{2} . $$ Hence, $\operatorname{Re} \lambda<0$ for all $\lambda$, whence we conclude that 0 is asymptotically stable.
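Remark. The matrix $A=f^{\prime}(0,0)$ of this example can also be recovered numerically by central finite differences; a sketch (assuming NumPy), where the step size $h$ is an arbitrary choice:

```python
import numpy as np

def f(u):
    x, y = u
    return np.array([np.sqrt(4 + 4 * y) - 2 * np.exp(x + y),
                     np.sin(3 * x) + np.log(1 - 4 * y)])

def jacobian(f, u0, h=1e-6):
    n = len(u0)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(u0 + e) - f(u0 - e)) / (2 * h)   # central differences
    return J

A = jacobian(f, np.zeros(2))
print(np.round(A, 5))          # approximately [[-2, -1], [3, -4]]
print(np.linalg.eigvals(A))    # approximately -3 +/- i*sqrt(2)
```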
The main tool for the proof of Theorem 4.2 is the following lemma, which is also of independent interest. Recall that for any vector $v \in \mathbb{R}^{n}$ and a differentiable function $F$ in a domain in $\mathbb{R}^{n}$, the directional derivative $\partial_{v} F$ can be determined by $$ \partial_{v} F(x)=F^{\prime}(x) v=\sum_{k=1}^{n} \frac{\partial F}{\partial x_{k}}(x) v_{k} . $$

Lemma 4.3 (Lyapunov's lemma) Consider the system $x^{\prime}=f(x)$ where $f \in C^{1}(\Omega)$, and let $x_{0}$ be a stationary point of it. Let $V(x)$ be a $C^{1}$ scalar function in an open set $U$ such that $x_{0} \in U \subset \Omega$ and the following conditions hold:

1. $V(x)>0$ for any $x \in U \backslash\left\{x_{0}\right\}$ and $V\left(x_{0}\right)=0$.
2. For all $x \in U$, $$ \partial_{f(x)} V(x) \leq 0 . $$

Then the stationary point $x_{0}$ is stable. Furthermore, if for all $x \in U$ $$ \partial_{f(x)} V(x) \leq-W(x), $$ where $W(x)$ is a continuous function on $U$ such that $W(x)>0$ for $x \in U \backslash\left\{x_{0}\right\}$, then the stationary point $x_{0}$ is asymptotically stable.

A function $V$ with the properties 1–2 is called a Lyapunov function. Note that the vector field $f(x)$ in the expression $\partial_{f(x)} V(x)$ depends on $x$. By definition, we have $$ \partial_{f(x)} V(x)=\sum_{k=1}^{n} \frac{\partial V}{\partial x_{k}}(x) f_{k}(x) . $$ In this context, $\partial_{f} V$ is also called the orbital derivative of $V$ with respect to the ODE $x^{\prime}=f(x)$. Before the proof, let us show examples of Lyapunov functions.

Example. Consider the system $x^{\prime}=A x$ where $A \in \mathcal{L}\left(\mathbb{R}^{n}\right)$. In order to investigate the stability of the stationary point 0, consider the function $$ V(x)=\|x\|_{2}^{2}=\sum_{k=1}^{n} x_{k}^{2}, $$ which is positive in $\mathbb{R}^{n} \backslash\{0\}$ and vanishes at 0. Setting $f(x)=A x$, we obtain for the components $$ f_{k}(x)=\sum_{j=1}^{n} A_{k j} x_{j} . $$ Since $\frac{\partial V}{\partial x_{k}}=2 x_{k}$, it follows that $$ \partial_{f} V=\sum_{k=1}^{n} \frac{\partial V}{\partial x_{k}} f_{k}=2 \sum_{j, k=1}^{n} A_{k j} x_{j} x_{k} . $$ The matrix $\left(A_{k j}\right)$ is called non-positive definite if $$ \sum_{j, k=1}^{n} A_{k j} x_{j} x_{k} \leq 0 \text { for all } x \in \mathbb{R}^{n} . $$ Hence, in the case when $A$ is non-positive definite, we have $\partial_{f} V \leq 0$, so that $V$ is a Lyapunov function. It follows that in this case 0 is Lyapunov stable. The matrix $A$ is called negative definite if $$ \sum_{j, k=1}^{n} A_{k j} x_{j} x_{k}<0 \text { for all } x \in \mathbb{R}^{n} \backslash\{0\} . $$ Then, setting $W(x)=-\sum_{j, k=1}^{n} A_{k j} x_{j} x_{k}$, we obtain $\partial_{f} V=-W$, so that by the second part of Lemma 4.3, 0 is asymptotically stable. For example, if $A=\operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{n}\right)$ then $A$ is negative definite if all $\lambda_{k}<0$, and $A$ is non-positive definite if all $\lambda_{k} \leq 0$.

Example. Consider the second order scalar ODE $x^{\prime \prime}+k x^{\prime}=F(x)$, which describes the motion of a body under the external potential force $F(x)$ and friction with the coefficient $k$. This can be written as the system $$ \left\{\begin{array}{l} x^{\prime}=y \\ y^{\prime}=-k y+F(x) . \end{array}\right. $$ Note that the phase space is $\mathbb{R}^{2}$ (assuming that $F$ is defined on $\mathbb{R}$) and a point $(x, y)$ in the phase space is a pair consisting of the position and the velocity. Assume $F(0)=0$, so that $(0,0)$ is a stationary point. We would like to answer the question whether $(0,0)$ is stable or not. The Lyapunov function can be constructed in this case as the full energy $$ V(x, y)=\frac{y^{2}}{2}+U(x), $$ where $$ U(x)=-\int F(x) d x $$ is the potential energy and $\frac{y^{2}}{2}$ is the kinetic energy. More precisely, assume that $k \geq 0$ and $$ F(x)<0 \text { for } x>0, \quad F(x)>0 \text { for } x<0, $$ and set $$ U(x)=-\int_{0}^{x} F(s) d s, $$ so that $U(0)=0$ and $U(x)>0$ for $x \neq 0$. Then the function $V(x, y)$ is positive away from $(0,0)$ and vanishes at $(0,0)$. Setting $$ f(x, y)=(y,-k y+F(x)), $$ let us compute the orbital derivative $\partial_{f} V$: $$ \begin{aligned} \partial_{f} V & =y \frac{\partial V}{\partial x}+(-k y+F(x)) \frac{\partial V}{\partial y} \\ & =y U^{\prime}(x)+(-k y+F(x)) y \\ & =-y F(x)-k y^{2}+F(x) y=-k y^{2} \leq 0 . \end{aligned} $$ Hence, $V$ is indeed a Lyapunov function, and by Lemma 4.3 the stationary point $(0,0)$ is Lyapunov stable. Physically this has a simple meaning. The fact that $F(x)<0$ for $x>0$ and $F(x)>0$ for $x<0$ means that the force always acts in the direction of the origin, thus trying to return the displaced body to the stationary point, which causes the stability.
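Remark. For the energy function above, the identity $\partial_{f} V=-k y^{2}$ is easy to confirm numerically; a sketch (assuming NumPy), with the concrete choices $F(x)=-x$ and $k=1/2$ (both hypothetical, chosen to satisfy the sign assumptions):

```python
import numpy as np

k = 0.5
F = lambda x: -x                    # F < 0 for x > 0, F > 0 for x < 0
U = lambda x: 0.5 * x ** 2          # U(x) = -integral of F from 0 to x
V = lambda x, y: 0.5 * y ** 2 + U(x)

def orbital_derivative(x, y, h=1e-6):
    # directional derivative of V along the field f(x, y) = (y, -k*y + F(x))
    dVdx = (V(x + h, y) - V(x - h, y)) / (2 * h)
    dVdy = (V(x, y + h) - V(x, y - h)) / (2 * h)
    return y * dVdx + (-k * y + F(x)) * dVdy

rng = np.random.default_rng(0)
for x, y in rng.uniform(-2.0, 2.0, size=(100, 2)):
    assert abs(orbital_derivative(x, y) + k * y ** 2) < 1e-5   # equals -k*y^2
```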
Proof of Lemma 4.3. By shrinking $U$, we can assume that $U$ is bounded and that $V$ is defined on $\bar{U}$. Set $$ B_{r}=B\left(x_{0}, r\right)=\left\{x \in \mathbb{R}^{n}:\left\|x-x_{0}\right\|<r\right\} $$ and observe that, by the openness of $U$, $B_{\varepsilon} \subset U$ provided $\varepsilon>0$ is small enough. For any such $\varepsilon$, set $$ m(\varepsilon)=\inf _{x \in \bar{U} \backslash B_{\varepsilon}} V(x) . $$ Since $V$ is continuous and $\bar{U} \backslash B_{\varepsilon}$ is a compact set (bounded and closed), by the extreme value theorem the infimum of $V$ is attained at some point. Since $V$ is positive away from $x_{0}$, we obtain $m(\varepsilon)>0$. It follows from the definition of $m(\varepsilon)$ that $$ V(x) \geq m(\varepsilon) \text { for all } x \in \bar{U} \backslash B_{\varepsilon} . $$ Since $V\left(x_{0}\right)=0$, for any given $\varepsilon>0$ there is $\delta>0$ so small that $$ V(x)<m(\varepsilon) \text { for all } x \in B_{\delta} . $$ Fix $y \in B_{\delta}$ and let $x(t)$ be the maximal solution in $\mathbb{R} \times U$ of the IVP $$ \left\{\begin{array}{l} x^{\prime}=f(x) \\ x(0)=y . \end{array}\right. $$ We will show that $x(t) \in B_{\varepsilon}$ for all $t>0$, which means that the system is Lyapunov stable at $x_{0}$. For any solution $x(t)$ in $U$, we have by the chain rule $$ \frac{d}{d t} V(x(t))=V^{\prime}(x) x^{\prime}(t)=V^{\prime}(x) f(x)=\partial_{f(x)} V(x) \leq 0 . $$ Therefore, the function $V$ is decreasing along any solution $x(t)$ as long as $x(t)$ remains inside $U$. If the initial point $y$ is in $B_{\delta}$ then $V(y)<m(\varepsilon)$ and, hence, $V(x(t))<m(\varepsilon)$ for $t>0$ as long as $x(t)$ is defined in $U$. It follows from (4.8) that $x(t) \in B_{\varepsilon}$. It remains to verify that $x(t)$ is defined${ }^{15}$ for all $t>0$. Indeed, assume that $x(t)$ is defined only for $t<T$ where $T$ is finite. By Theorem 2.8, if $t \rightarrow T-$, then the graph of the solution $x(t)$ must leave any compact subset of $\mathbb{R} \times U$, whereas the graph is contained in the set $[0, T] \times \overline{B_{\varepsilon}}$. This contradiction shows that $T=+\infty$, which finishes the proof of the first part. For the second part, we obtain by (4.7) and (4.9) $$ \frac{d}{d t} V(x(t)) \leq-W(x(t)) . $$

${ }^{15}$ Since $x(t)$ has been defined as the maximal solution in the domain $\mathbb{R} \times U$, the solution $x(t)$ is always contained in $U$ as long as it is defined.

It suffices to show that $$ V(x(t)) \rightarrow 0 \text { as } t \rightarrow \infty, $$ since this will imply that $x(t) \rightarrow x_{0}$ (recall that $x_{0}$ is the only point where $V$ vanishes). Since $V(x(t))$ is decreasing in $t$, the limit $$ L=\lim _{t \rightarrow+\infty} V(x(t)) $$ exists. Assume on the contrary that $L>0$. Then, for all $t>0$, $V(x(t)) \geq L$. By the continuity of $V$, there is $r>0$ such that $$ V(y)<L \text { for all } y \in B_{r} . $$ Hence, $x(t) \notin B_{r}$ for all $t>0$. Set $$ m=\inf _{y \in \bar{U} \backslash B_{r}} W(y)>0 . $$ It follows that $W(x(t)) \geq m$ for all $t>0$, whence $$ \frac{d}{d t} V(x(t)) \leq-W(x(t)) \leq-m $$ for all $t>0$. However, this implies upon integration in $t$ that $$ V(x(t)) \leq V(x(0))-m t, $$ whence it follows that $V(x(t))<0$ for large enough $t$. This contradiction finishes the proof.

Proof of Theorem 4.2. Without loss of generality, set $x_{0}=0$. Using that $f \in C^{2}$, we obtain by the Taylor formula, for any component $f_{k}$ of $f$, $$ f_{k}(x)=f_{k}(0)+\sum_{i=1}^{n} \partial_{i} f_{k}(0) x_{i}+\frac{1}{2} \sum_{i, j=1}^{n} \partial_{i j} f_{k}(0) x_{i} x_{j}+o\left(\|x\|^{2}\right) \text { as } x \rightarrow 0 . $$ Noticing that $\partial_{i} f_{k}(0)=A_{k i}$, write $$ f(x)=A x+h(x), $$ where $h(x)$ is defined by $$ h_{k}(x)=\frac{1}{2} \sum_{i, j=1}^{n} \partial_{i j} f_{k}(0) x_{i} x_{j}+o\left(\|x\|^{2}\right) .
$$ Setting $B=\max _{i, j, k}\left|\partial_{i j} f_{k}(0)\right|$, we obtain $$ \|h(x)\|_{\infty}=\max _{1 \leq k \leq n}\left|h_{k}(x)\right| \leq B \sum_{i, j=1}^{n}\left|x_{i} x_{j}\right|+o\left(\|x\|^{2}\right)=B\|x\|_{1}^{2}+o\left(\|x\|^{2}\right) . $$ Hence, for any choice of the norms, there is a constant $C$ such that $$ \|h(x)\| \leq C\|x\|^{2} $$ provided $\|x\|$ is small enough. Assuming that $\operatorname{Re} \lambda<0$ for all eigenvalues of $A$, consider the following function $$ V(x)=\int_{0}^{\infty}\left\|e^{s A} x\right\|_{2}^{2} d s $$ and prove that $V(x)$ is the Lyapunov function. Let us first verify that $V(x)$ is finite, that is, the integral in (4.11) converges. Indeed, in the proof of Theorem 4.1 we have established the inequality $$ \left\|e^{t A} x\right\| \leq C e^{\alpha t}\left(t^{N}+1\right)\|x\| $$ where $C, N$ are some positive numbers (depending on $A$ ) and $$ \alpha=\max \operatorname{Re} \lambda, $$ where max is taken over all eigenvalues $\lambda$ of $A$. Since by hypothesis $\alpha<0$, (4.12) implies that $\left\|e^{s A} x\right\|$ decays exponentially as $s \rightarrow+\infty$, whence the convergence of the integral in (4.11) follows. Next, let us show that $V(x)$ is of the class $C^{1}$ (in fact, $C^{\infty}$ ). For that, represent $x$ in the canonical basis $v_{1}, \ldots, v_{n}$ as $x=\sum x_{i} v_{i}$ and notice that $$ \|x\|_{2}^{2}=\sum_{i=1}^{n}\left|x_{i}\right|^{2}=x \cdot x . $$ Therefore, $$ \begin{aligned} \left\|e^{s A} x\right\|_{2}^{2} & =e^{s A} x \cdot e^{s A} x=\left(\sum_{i} x_{i}\left(e^{s A} v_{i}\right)\right) \cdot\left(\sum_{j} x_{j}\left(e^{s A} v_{j}\right)\right) \\ & =\sum_{i, j} x_{i} x_{j}\left(e^{s A} v_{i} \cdot e^{s A} v_{j}\right) \end{aligned} $$ Integrating in $s$, we obtain $$ V(x)=\sum_{i, j} b_{i j} x_{i} x_{j} $$ where $b_{i j}=\int_{0}^{\infty}\left(e^{s A} v_{i} \cdot e^{s A} v_{j}\right) d s$ are constants, which clearly implies that $V(x)$ is infinitely many times differentiable in $x$. Remark. Usually we work with any norm in $\mathbb{R}^{n}$. In the definition (4.11) of $V(x)$, we have specifically chosen the 2-norm to ensure the smoothness of $V(x)$. Function $V(x)$ is obviously non-negative and $V(x)=0$ if and only if $x=0$. In order to complete the proof of the fact that $V(x)$ is the Lyapunov function, we need to estimate $\partial_{f(x)} V(x)$. Let us first evaluate $\partial_{A x} V(x)$ for any $x \in U$. Since the function $y(t)=e^{t A} x$ solves the ODE $y^{\prime}=A y$, we have by (4.9) $$ \partial_{A y(t)} V(y(t))=\frac{d}{d t} V(y(t)) $$ Setting $t=0$ and noticing that $y(0)=x$, we obtain $$ \partial_{A x} V(x)=\left.\frac{d}{d t} V\left(e^{t A} x\right)\right|_{t=0} $$ On the other hand, $$ V\left(e^{t A} x\right)=\int_{0}^{\infty}\left\|e^{s A}\left(e^{t A} x\right)\right\|_{2}^{2} d s=\int_{0}^{\infty}\left\|e^{(s+t) A} x\right\|_{2}^{2} d s=\int_{t}^{\infty}\left\|e^{\tau A} x\right\|_{2}^{2} d \tau $$ where we have made the change $\tau=s+t$. 
Therefore, differentiating this identity in $t$, we obtain $$ \frac{d}{d t} V\left(e^{t A} x\right)=-\left\|e^{t A} x\right\|_{2}^{2} . $$ Setting $t=0$ and combining with (4.13), we obtain $$ \partial_{A x} V(x)=\left.\frac{d}{d t} V\left(e^{t A} x\right)\right|_{t=0}=-\|x\|_{2}^{2} . $$ Now we can estimate $\partial_{f(x)} V(x)$ as follows: $$ \begin{aligned} \partial_{f(x)} V(x) & =\partial_{A x} V(x)+\partial_{h(x)} V(x) \\ & =-\|x\|_{2}^{2}+V^{\prime}(x) \cdot h(x) \\ & \leq-\|x\|_{2}^{2}+\left\|V^{\prime}(x)\right\|_{2}\|h(x)\|_{2}, \end{aligned} $$ where we have used the Cauchy–Schwarz inequality $u \cdot v \leq\|u\|_{2}\|v\|_{2}$ for all $u, v \in \mathbb{R}^{n}$. Next, let us use the estimate (4.10) in the form $$ \|h(x)\|_{2} \leq C\|x\|_{2}^{2}, $$ which is true provided $\|x\|_{2}$ is small enough. Observe also that the function $V(x)$ has a minimum at 0, which implies that $V^{\prime}(0)=0$. Hence, if $\|x\|_{2}$ is small enough then $$ \left\|V^{\prime}(x)\right\|_{2} \leq \frac{1}{2} C^{-1} . $$ Combining together the above three lines, we obtain that, in a small neighborhood $U$ of 0, $$ \partial_{f(x)} V(x) \leq-\|x\|_{2}^{2}+\frac{1}{2}\|x\|_{2}^{2}=-\frac{1}{2}\|x\|_{2}^{2} . $$ Setting $W(x)=\frac{1}{2}\|x\|_{2}^{2}$, we conclude by Lemma 4.3 that the ODE $x^{\prime}=f(x)$ is asymptotically stable at 0.

Now consider some examples of investigation of stationary points of an autonomous system $x^{\prime}=f(x)$. The first step is to find the stationary points, that is, to solve the equation $f(x)=0$. In general, it may have many roots; each root then requires a separate investigation. Let $x_{0}$ denote, as before, one of the stationary points of the system. The second step is to compute the matrix $A=f^{\prime}\left(x_{0}\right)$. Of course, the matrix $A$ can be found as the Jacobian matrix componentwise by $A_{k j}=\partial_{x_{j}} f_{k}\left(x_{0}\right)$. However, in practice it is frequently more convenient to proceed as follows. Setting $X=x-x_{0}$, we obtain that the system $x^{\prime}=f(x)$ transforms to $$ X^{\prime}=f(x)=f\left(x_{0}+X\right)=f\left(x_{0}\right)+f^{\prime}\left(x_{0}\right) X+o(\|X\|) $$ as $X \rightarrow 0$, that is, to $$ X^{\prime}=A X+o(\|X\|) . $$ Hence, the linear term $A X$ appears in the right hand side if we throw away the terms of the order $o(\|X\|)$. The equation $X^{\prime}=A X$ is called the linearized system for $x^{\prime}=f(x)$ at $x_{0}$. The third step is the investigation of the stability of the linearized system, which amounts to evaluating the eigenvalues of $A$ and, possibly, the Jordan normal form. The fourth step is the conclusion about the stability of the non-linear system $x^{\prime}=f(x)$ using Lyapunov's theorem or Lyapunov's lemma. If $\operatorname{Re} \lambda<0$ for all eigenvalues $\lambda$ of $A$ then both the linearized and the non-linear system are asymptotically stable at $x_{0}$, and if $\operatorname{Re} \lambda>0$ for some eigenvalue $\lambda$ then both are unstable. The other cases require additional investigation.

Example. Consider the system $$ \left\{\begin{array}{l} x^{\prime}=y+x y \\ y^{\prime}=-x-x y . \end{array}\right. $$ For the stationary points we have the equations $$ \left\{\begin{array}{l} y+x y=0 \\ x+x y=0, \end{array}\right. $$ whence we obtain two roots: $(x, y)=(0,0)$ and $(x, y)=(-1,-1)$. Consider first the stationary point $(-1,-1)$.
Setting $X=x+1$ and $Y=y+1$, we obtain the system $$ \left\{\begin{array}{l} X^{\prime}=(Y-1) X=-X+X Y=-X+o(\|(X, Y)\|) \\ Y^{\prime}=-(X-1) Y=Y-X Y=Y+o(\|(X, Y)\|), \end{array}\right. $$ whose linearization is $$ \left\{\begin{array}{l} X^{\prime}=-X \\ Y^{\prime}=Y . \end{array}\right. $$ Hence, the matrix is $$ A=\left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array}\right), $$ and the eigenvalues are $-1$ and $+1$, so that the type of the stationary point is a saddle. The linearized and the non-linear systems are unstable at $(-1,-1)$ because one of the eigenvalues is positive.

Consider now the stationary point $(0,0)$. Near this point, the system can be written in the form $$ \left\{\begin{array}{l} x^{\prime}=y+o(\|(x, y)\|) \\ y^{\prime}=-x+o(\|(x, y)\|), \end{array}\right. $$ so that the linearized system is $$ \left\{\begin{array}{l} x^{\prime}=y \\ y^{\prime}=-x . \end{array}\right. $$ Hence, the matrix is $$ A=\left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right), $$ and the eigenvalues are $\pm i$. Since they are purely imaginary, the type of the stationary point $(0,0)$ is a center. Hence, the linearized system is stable at $(0,0)$ but not asymptotically stable. For the non-linear system (4.14), no conclusion can be drawn just from the eigenvalues. In this case, one can use the following Lyapunov function: $$ V(x, y)=x-\ln (x+1)+y-\ln (y+1), $$ which is defined for $x>-1$ and $y>-1$. Indeed, the function $x-\ln (x+1)$ takes its minimum 0 at $x=0$ and is positive for $x \neq 0$. It follows that $V(x, y)$ takes the minimal value 0 at $(0,0)$ and is positive away from $(0,0)$. The orbital derivative of $V$ is $$ \begin{aligned} \partial_{f} V & =(y+x y) \partial_{x} V-(x+x y) \partial_{y} V \\ & =(y+x y)\left(1-\frac{1}{x+1}\right)-(x+x y)\left(1-\frac{1}{y+1}\right) \\ & =x y-x y=0 . \end{aligned} $$ Hence, $V$ is a Lyapunov function, which implies that $(0,0)$ is stable for the non-linear system. Since $\partial_{f} V=0$, it follows from (4.9) that $V$ remains constant along the trajectories of the system. Using this, one can easily show that $(0,0)$ is not asymptotically stable and that the type of the stationary point $(0,0)$ for the non-linear system is also a center. The phase trajectories of this system around $(0,0)$ are closed curves (the corresponding phase diagram is omitted here).
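Remark. The whole analysis of this example can be cross-checked numerically; a sketch (assuming NumPy and SciPy), with arbitrarily chosen initial guesses and initial condition:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def f(u):
    x, y = u
    return np.array([y + x * y, -x - x * y])

# the two stationary points, found from nearby initial guesses
for guess in ([0.2, 0.2], [-1.2, -0.8]):
    print(np.round(fsolve(f, guess), 8))        # (0, 0) and (-1, -1)

# V is constant along trajectories near (0, 0), as the zero orbital
# derivative predicts, so the orbits are closed curves around a center
V = lambda x, y: x - np.log1p(x) + y - np.log1p(y)
sol = solve_ivp(lambda t, u: f(u), (0.0, 20.0), [0.3, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
vals = [V(*sol.sol(t)) for t in np.linspace(0.0, 20.0, 50)]
print(max(vals) - min(vals))                    # approximately 0
```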
From squiggle to basepair: computational approaches for improving nanopore sequencing read accuracy

Franka J. Rang, Wigard P. Kloosterman & Jeroen de Ridder

Nanopore sequencing is a rapidly maturing technology delivering long reads in real time on a portable instrument at low cost. Not surprisingly, the community has rapidly taken up this new way of sequencing and has used it successfully for a variety of research applications. A major limitation of nanopore sequencing is its high error rate, which, despite recent improvements to the nanopore chemistry and computational tools, still ranges between 5% and 15%. Here, we review the computational approaches that determine the nanopore sequencing error rate, and we outline strategies for translating raw sequencing data into base calls, for detecting base modifications, and for obtaining consensus sequences.

The nanopore sequencing concept was first proposed in the 1980s and has been developed and refined over the past three decades (reviewed in [1]). Rather than relying on the commonly used sequencing-by-synthesis approach, nanopore sequencing directly senses DNA or RNA bases by means of pores that are embedded in a membrane separating two compartments. An electric potential is applied over the membrane, resulting in an ion current and a flow of DNA through the pore. Nucleotides in the pore change the ion flow, causing distinct current signals that can be used to infer the DNA sequence.

In 2014, Oxford Nanopore Technologies (ONT) released the MinION as the first commercially available nanopore sequencing device. MinION nanopore sequencing offers several advantages over short-read sequencing technologies such as the Illumina MiSeq (Additional file 1: Table S1). First, the MinION produces reads in real time from single molecules. In combination with rapid library preparation, this dramatically shortens the time between sample collection and data analysis. Moreover, the MinION can also be used for direct RNA sequencing without prior reverse transcription or amplification [2]. Second, DNA molecules of any length can be sequenced, and reports have been made of reads longer than 800 kb [3] and even exceeding 2 Mb [4]. Long reads are extremely valuable because they provide information on how distal sequences are spatially related. Consequently, they ease genome assembly and structural variant detection [3, 5]. Finally, the MinION is considerably smaller and cheaper than the current short-read platforms, enabling sequencing outside the traditional laboratory context [6, 7]. The key features and applications of MinION sequencing have previously been reviewed by Jain et al. [8].

Following the introduction of the MinION, ONT has commercially released the GridION, which is essentially one instrument with slots for five MinION flow cells and an integrated compute module for base calling. In addition, the PromethION, a high-throughput nanopore platform, is currently being tested by early-access users.

A major limitation of MinION sequencing is its lower read accuracy when compared with short-read technologies. When the MinION was first introduced, reads showed an accuracy of less than 60% [9, 10]. This accuracy has improved over recent years to reach approximately 85% [3, 5, 11, 12] (Fig. 1), similar to that of the long-read sequencing technology of PacBio (Additional file 1: Table S1), but it still falls short of the more than 99% accuracy offered by short-read platforms.
The advantages of long reads outweigh the low read accuracy for some applications, such as structural variant detection [5]. Furthermore, consensus sequences can be obtained from homogenous DNA samples by (genome) assembly, resulting in accuracies of more than 99% [13,14,15,16]. However, the MinION's low read accuracy complicates the analysis of complex samples for the detection of single nucleotide variations (SNVs) or indels. Successful SNV genotyping based on nanopore reads has been demonstrated [17], but MinION-based SNV calling requires relatively high-coverage sequencing of the variant, for example through targeted sequencing [3, 6, 7, 18].

Fig. 1 Timeline of reported MinION read accuracies and Oxford Nanopore Technologies (ONT) technological developments. Nanopore chemistry updates and advances in base-caller software are represented as colored bars. The plotted accuracies are ordered on the basis of the chemistry and base-calling software used, not according to publication date. Based on data from 1 [9]; 2 [10]; 3 [50]; 4 [51]; 5 [33]; 6 [28]; 7 [52]; 8 [53]; 9 [54]; 10 [29]; 11 [31]; 12 [48]; 13 [46]; 14 [55]; 15 [11]; 16 [5]; 17 [13]; 18 [3]. HMM Hidden Markov Model, RNN Recurrent Neural Network

Since the first release of the MinION, the error rate has considerably improved due to changes in sequencing chemistry. The first MinION flow cells made use of a nanopore called R6, which provided mediocre accuracy. ONT has revealed that the current pore versions (R9.4 and R9.5) are derived from the Escherichia coli Curlin sigma S-dependent growth (CsgG) pore [19, 20], and these achieve greatly reduced error rates (Fig. 1). ONT has further improved accuracy by offering the possibility of sequencing both template and complementary strands to obtain a more accurate consensus read. When the double-stranded DNA (dsDNA) is recruited to the nanopore, a motor protein unzips the double strand and passes a single strand through the pore, giving rise to a so-called 1D read (Additional file 1: Figure S1A). Early versions of MinION sequencing offered 2D sequencing, involving the reading of both strands, which was enabled by ligation of a hairpin to the DNA (Additional file 1: Figure S1B). The accuracy of 2D consensus reads has generally been more than 5% higher than the accuracy of the template (1D) read alone (Fig. 1). Recently, 2D sequencing was replaced by a new approach termed 1D2, which enables the sequencing of template and complementary strands without physical ligation (Additional file 1: Figure S1C). According to ONT, 1D2 sequencing can be successful for up to 60% of DNA molecules, and the resulting consensus sequences reach a modal (i.e., most commonly observed) accuracy of ∼97%, compared with the ∼90% accuracy of the 1D reads alone [21, 22]. Research by independent investigators will have to show whether the 1D2 chemistry lives up to this promise.

In addition to the chemistry updates released by ONT, computational tools to process the MinION sequencing data and to improve accuracy have been developed, tested, and compared by the scientific community. At the moment, however, an overview of these strategies and a delineation of their contributions are lacking. In this review, we discuss computational approaches to improve the accuracy of nanopore sequencing data by focusing on (i) advances in the computational methods for base calling and (ii) the use of postsequencing correction tools (Fig. 2).
Fig. 2 Overview of MinION nanopore sequencing. The left panel shows sources of errors during MinION sequencing and base calling. The right panel shows computational strategies that have been used to improve accuracy. HMM Hidden Markov Model, RNN Recurrent Neural Network

### Sources of errors in nanopore sequencing data

There appear to be two distinct steps at which errors can arise in Oxford Nanopore sequencing data. First, we can reasonably assume that errors can occur during sequencing and thus be inherent to the raw data. In this case, the inherent limitations of the technology result in a low signal-to-noise ratio, making it impossible to determine the underlying DNA sequence. Second, errors could be made in the process of translating the raw electric current signal into a DNA sequence. Here, the information about the DNA sequence is actually present in the data, but shortcomings in the analysis prevent its correct interpretation. The influence of these two steps on the error rate seems to be supported by improvements in accuracy following upgrades in both nanopore chemistry and base-calling software (Fig. 1).

There are several factors in play during sequencing that may contribute to a low signal-to-noise ratio: (i) the structural similarity of the nucleotides; (ii) the simultaneous influence of multiple nucleotides on the signal [23]; (iii) the nonuniform speed at which nucleotides pass through the pore [24,25,26]; and (iv) the fact that the signal does not change within homopolymers [26] (Fig. 2).

In earlier MinION nanopores (R7, R7.3), the raw current signal was mainly influenced by five or six nucleotides that occupied the pore at any given time point. One measurement thus corresponds to 1024 or 4096 possible k-mers. In the latest pores (R9, R9.4), ONT reports that the three central nucleotides mainly determine the signal, with a smaller influence from more distal nucleotides within the pore [23]. When more nucleotides reside in the pore, one measurement can correspond to even more k-mers, and thus more unique signal levels are required to differentiate between them. Consequently, it is more difficult to achieve good signal-to-noise ratios for pores that are influenced by long k-mers compared with those that are occupied by shorter k-mers. Moreover, nucleotides may harbor chemical modifications, such as methyl groups, that affect the signal and effectively increase the number of unique signal levels.

In order to improve signal robustness, the k-mers have to reside within the pore long enough to differentiate signal from noise. The speed at which DNA translocates through a pore under the influence of an electric potential alone is too high to allow reliable detection of each signal [27]. Therefore, Oxford Nanopore chemistry involves the attachment of a motor protein to the DNA, which slows down the translocation and improves the quality of the signal [24, 25]. Nevertheless, despite a reduced translocation speed, it is difficult to detect the transition between two identical k-mers, complicating the detection of homopolymers that are longer than the k-mer. One way to tackle the problem is to infer homopolymer length from the duration of the measured signal. Problematically, the translocation speed of motor proteins is generally nonuniform, disrupting the relationship between homopolymer length and detection time [24,25,26], a problem that has also been reported by ONT [23]. Consequently, many deletion errors in MinION reads occur in homopolymers [3, 5, 28].
For example, one study reported a 2.6-fold increase in deletion errors for sequences that overlap homopolymers [5]. Errors that arise during signal interpretation, on the other hand, may result from heuristics in the algorithm necessary to bring down the computational costs. For instance, some of the base-calling algorithms assume that at most two consecutive k-mers may be undetected [29], even though larger skips can occur. In addition, the performance of the base callers is influenced by the datasets that are used to train the parameters of the model [13, 30] (Table 1). Biases in the training data—such as type of species or the balance between amplified and nascent DNA (which may contain base modifications such as methylation)—could thus result in errors when applying the resulting parameters to new data.

Table 1 Explanation of technical terms

Defining read accuracy and error rate

To get a good view of how technological developments impact the accuracy of MinION sequencing data, clear definitions of accuracy and error rate are essential. A wide range of definitions has been used throughout recent publications. For accuracy, these definitions include percent identity to a reference sequence relative to read length [31], alignment length [5, 9, 13, 28], and reference length [32]. Equivalent definitions are used for error rate. Unfortunately, the formulas and tools used to calculate these metrics are often not clearly stated. Probably the most commonly used definition of read accuracy is the percentage of bases in a segment of a read that match the reference, relative to the length of the read segment–reference alignment:

$$ \mathrm{accuracy}=\frac{\mathrm{matches}}{\mathrm{matches}+\mathrm{mismatches}+\sum \mathrm{length}(\mathrm{insertions\ in\ read})+\sum \mathrm{length}(\mathrm{deletions\ in\ read})}\times 100\%. $$

Concordantly, the error rate constitutes the percentage of unmatched bases in the alignment and can be subdivided into substitution, insertion, and deletion rates:

$$ \mathrm{error\ rate}=\frac{\mathrm{mismatches}+\sum \mathrm{length}(\mathrm{insertions\ in\ read})+\sum \mathrm{length}(\mathrm{deletions\ in\ read})}{\mathrm{matches}+\mathrm{mismatches}+\sum \mathrm{length}(\mathrm{insertions\ in\ read})+\sum \mathrm{length}(\mathrm{deletions\ in\ read})}\times 100\% = \frac{\mathrm{errors}}{\mathrm{alignment\ length}}\times 100\%. $$

It is difficult to measure the impact of technological developments on data quality on the basis of literature reports, as there are differences between publications in the ways that the accuracy and error rate are reported. Many researchers report the average read accuracy, whereas others report the median or provide a distribution. A second complicating factor in the comparison of read accuracies is that they depend directly on the performance of the alignment algorithm. Different alignment tools may result in different reported accuracies [33], although they have been reported to yield similar results [3, 28]. Finally, often only a subset of the reads is used to calculate the accuracy and error rate. After reads have been base called, they are divided into high-quality (pass) reads, low-quality (fail) reads, and a subset of reads that cannot be base called. It is not always clear whether pass reads or both pass and fail reads are used to calculate the accuracy of the data. Moreover, the calculated accuracy generally does not take into account the reads that could not be aligned.
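To make the definitions above concrete, the following minimal Python sketch (our illustration, not a tool from the literature) computes both metrics from the counts of a single read–reference alignment, before any pass/fail filtering is applied:

```python
def accuracy_and_error_rate(matches, mismatches, insertions, deletions):
    """Read accuracy and error rate (in %) from alignment counts.

    `insertions` and `deletions` are the summed lengths of the inserted
    and deleted segments of the read relative to the reference.
    """
    alignment_length = matches + mismatches + insertions + deletions
    accuracy = 100.0 * matches / alignment_length
    error_rate = 100.0 * (mismatches + insertions + deletions) / alignment_length
    return accuracy, error_rate

# Example: 890 matches, 60 mismatches, 25 inserted and 25 deleted bases
acc, err = accuracy_and_error_rate(890, 60, 25, 25)
print(f"accuracy = {acc:.1f}%, error rate = {err:.1f}%")  # 89.0% and 11.0%
```

By construction the two values sum to 100% for a given alignment; the ambiguity discussed in the text comes from which reads (pass only, or pass and fail; aligned only, or all) enter the computation.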
These filtering steps have a direct impact on the reported accuracy, as there is a trade-off between the accuracy and the yield of the run, i.e., the fraction of reads that are considered useful data. In order to enable better comparisons of accuracy, it would be advisable to report the accuracy along with the yield of the run, or to report the equivalent of a precision–recall curve in which the accuracy for a range of yields is plotted. In this review, the mean accuracies of 1D pass reads are reported unless stated otherwise. Figure 1 shows some of the accuracies that have been reported in the literature over the past 3 years, as well as important updates in nanopore chemistry and base-calling algorithms. Owing to the difficulties listed above, there are inconsistencies between the methods by which the accuracies have been obtained. Nevertheless, when combined, these data show a clear trend of improved accuracy since the release of the MinION.

Base calling

For the current MinION chemistry, ONT reports that single DNA strands are pulled through the pore at an average speed of 450 bp/s, while the electric current is sampled at a frequency of 4 kHz [34]. This means that there are on average nine discrete measurements per k-mer, although the number varies because of the fluctuating translocation speed of the motor protein. In order to translate this raw electric current signal to a DNA sequence, sophisticated base-calling software is required. In the early days of the MinION, base calling was performed by the cloud-based EPI2ME platform provided by Metrichor Ltd., but this feature was discontinued in March 2017. In August 2016, base calling became available in the software program MinKNOW, which runs on the local machine connected to the sequencer to monitor and control MinION sequencing. In addition to the MinKNOW integrated base caller, ONT now offers several other base-calling programs, including the command-line base caller Albacore, and the research base callers Nanonet and Scrappie, which have mainly been used as a testing ground for new features. In addition to the ONT base callers, several independent base callers have been developed by researchers in the past 2 years, including Nanocall [29], DeepNano [35], Chiron [32], and BasecRAWller [30]. The ONT base callers have evolved rapidly: Albacore alone was updated at least 12 times between January and September 2017. The rapid succession and improvement of base callers demonstrates that their performance is an important determinant of the quality of the base pair sequence that is retrieved from the raw signal. In this section, we discuss different approaches to base calling and the most notable improvements that have been made in recent years.

Hidden Markov models versus recurrent neural networks

To deal with the oversampling, the initial MinION base callers required segmentation of the raw signals into discrete events before base identification. This process reduced the size of the input dataset and combined the redundant measurements into a supposedly more reliable, event-based signal. According to ONT, MinKNOW (up to v1.9) performed segmentation by calculating t statistics over two pairs of adjacent sliding windows in the raw signal [19]. These statistics were then combined to determine event boundaries. For each event, the mean, standard deviation, and duration of the raw signal were reported and used for further base calling. The resulting sequences of events are often referred to as 'squiggles'.
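The idea behind this segmentation step can be illustrated with a deliberately simplified sketch (our construction, not ONT's implementation; window size and threshold are arbitrary): a two-sample t statistic computed over the windows on either side of each position peaks where the current level changes, and thresholded peaks are taken as candidate event boundaries.

```python
import numpy as np

def t_statistic_boundaries(signal, w=5, threshold=4.0):
    """Toy event segmentation: flag positions where a two-sample t statistic
    between the w samples before and after the position exceeds a threshold."""
    boundaries = []
    for i in range(w, len(signal) - w):
        left, right = signal[i - w:i], signal[i:i + w]
        pooled = np.sqrt((left.var(ddof=1) + right.var(ddof=1)) / w) + 1e-12
        t = abs(left.mean() - right.mean()) / pooled
        if t > threshold:
            boundaries.append(i)
    return boundaries

# Synthetic squiggle: three current levels with Gaussian noise
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(mu, 0.3, 40) for mu in (80.0, 95.0, 70.0)])
print(t_statistic_boundaries(signal))  # clusters of candidate boundaries near 40 and 80
```

In a real implementation, nearby candidate positions would be merged into a single boundary, and each resulting event would be summarized by its mean, standard deviation, and duration, as described above.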
To interpret the sequence of events, MinKNOW offers pore models and scaling parameters. The pore models provide distributions of the mean signal and standard deviations that can be expected for each k-mer, while the scaling parameters help to correct for differences in signal that may occur between different wells or over the course of a sequencing run [36]. The first generations of ONT base callers used Hidden Markov Models (HMMs) (Table 1) to predict the DNA sequence on the basis of the event data, pore models, and scaling parameters. The first open-source base caller, Nanocall, employed the same principle [29] (Fig. 3a). In the Nanocall HMM, the hidden states represent all possible k-mers, with emission probabilities that are based on the pore models. The transition probabilities, on the other hand, are determined on the basis of a training dataset (Table 1). They mirror the possible event transitions, in which a consecutive event can refer to a k-mer shifted by one position in the DNA sequence (step), a k-mer shifted by more than one position (skip), or the same k-mer (stay). To speed up computation, skips with a size larger than one are not allowed in the HMM. During base calling, the most probable path through the hidden states is calculated by Viterbi decoding (Table 1). The path is converted to the final base sequence by merging the sequences corresponding to consecutive states according to their maximal overlap. The consequence of this heuristic is that homopolymer repeats of a length greater than the size of the k-mer cannot be detected.

Fig. 3 Schematic overview of the algorithms underlying nanopore base callers. a Nanocall uses a Hidden Markov Model (HMM) for base calling. b DeepNano was the first base caller to use Recurrent Neural Networks (RNN). h1–h3 represent three hidden layers in the RNN. c BasecRAWller uses two RNNs, one to segment the raw measurements and one to infer k-mer probabilities. d Chiron makes use of a Convolutional Neural Network (CNN) to detect patterns in the data, followed by an RNN to predict k-mer probabilities, which are evaluated by a Connectionist Temporal Classification (CTC) decoder. LSTM long short-term memory

Soon after the publication of Nanocall, the first version of the base caller DeepNano was published [35]. Rather than using HMMs, DeepNano uses Recurrent Neural Networks (RNNs) (Table 1), which do not explicitly rely on k-mer length and are able to take longer range information (i.e., >k bp) into account. Since information about the DNA sequence is contained in events both upstream and downstream of the current event, DeepNano uses a bidirectional RNN that makes predictions for events in each direction and combines the two predictions for each event in the next layer of the neural network (Fig. 3b). In terms of performance, the RNN-based DeepNano achieves a substantial improvement over the HMM-based callers. On R7.3 data, Metrichor called 1D reads with an accuracy of 70–71% and Nanocall with an accuracy of 68%, whereas DeepNano reached accuracies of up to 77% [29, 35]. For 2D reads, this difference was less pronounced, with Metrichor reaching an accuracy of 87% and DeepNano an accuracy of 89%. Nanocall does not provide an option to call 2D reads. Before the final version of the DeepNano paper was published, ONT released their own RNN-based base caller, Nanonet. The general principle is similar to that of DeepNano.
Nanonet employs bidirectional long short-term memory (LSTM) units (Table 1) to utilize information from both upstream and downstream states. In the final publication of DeepNano, the authors compare the accuracy of the two RNN-based base callers on an E. coli dataset produced with R9 chemistry and show that they perform similarly on 1D reads (DeepNano ~ 81%, Nanonet ~ 83%) [35]. Given the superior performance of RNN base callers when compared with HMM base callers, algorithms similar to those of Nanonet have been adopted in newer versions of the MinKNOW base caller and in all versions of the ONT base callers Albacore and Scrappie.

Base calling using raw signal

Although initial versions of early base callers used the segmented event data provided by MinKNOW as input to determine the DNA sequence, current base callers use the raw current signal as input. The Scrappie base caller (which is available through a developer license from ONT) was the first base caller to employ raw current signal. In addition, the recent introduction of BasecRAWller provides a well-documented base-calling method for raw nanopore data [30]. BasecRAWller employs two separate RNNs (Fig. 3c). The first RNN uses each measurement point to predict simultaneously the probability that a signal corresponds to a new k-mer and the probability of the k-mer identity. On the basis of the probabilities of k-mer transitions, the raw signal is segmented and the k-mer probabilities are averaged over the segments. These probabilities are then fed into the second RNN, which predicts the final DNA sequence. Importantly, both RNNs use LSTMs to pass contextual information forward but not backward, making the technology fast enough to base call reads as they pass through the pore. However, the increase in processing speed comes at the cost of accuracy. Although BasecRAWller uses the raw current signal rather than the events detected by MinKNOW, it still performs an internal segmentation step after the first RNN. The recently introduced base caller Chiron, on the other hand, is capable of translating the raw signal into a DNA sequence without an intermediate segmentation step [32] (Fig. 3d). In Chiron, the raw data are first fed through a Convolutional Neural Network (CNN) (Table 1), which detects local structures in the signal. The output of the CNN is used as input for an RNN that makes use of bidirectional LSTMs. The RNN outputs base call probabilities that are evaluated by a Connectionist Temporal Classification (CTC) decoder (Table 1) and are converted to a sequence of bases by a beam search algorithm (Table 1). Despite being trained on limited amounts of data, Chiron achieved accuracies similar to those of Albacore v2.0.1 and outperformed the segmentation-based Albacore v1.1. This was also the case for human sequencing data, even though Chiron was trained on nonhuman data only. Around the same time that the first version of Chiron was published, ONT transitioned to raw base calling in a new update of Albacore (v2.0.1). Internal testing performed by ONT showed that raw base calling improves the modal read accuracy by 1% over that achieved by event-based base calling [37]. The increase in accuracy is due to the fact that mistakes made during segmentation are hard to correct later on, as information is lost when the raw data are reduced to mean, deviation, and duration values alone.
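The Chiron-style architecture can be sketched in a few lines of PyTorch (an untrained toy of our own; the layer sizes are illustrative and are not Chiron's actual configuration): a 1D CNN extracts local features from the raw samples, a bidirectional LSTM adds context, and per-frame logits are scored with a CTC loss.

```python
import torch
import torch.nn as nn

class TinyRawBasecaller(nn.Module):
    """Sketch of a Chiron-style raw-signal model: CNN for local features,
    bidirectional LSTM for context, and per-frame logits for CTC decoding."""
    def __init__(self, n_classes=5):  # 4 bases + 1 CTC blank symbol
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=11, padding=5), nn.ReLU(),
        )
        self.rnn = nn.LSTM(32, 64, num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, raw):                 # raw: (batch, time)
        x = self.cnn(raw.unsqueeze(1))      # -> (batch, 32, time)
        x, _ = self.rnn(x.transpose(1, 2))  # -> (batch, time, 128)
        return self.head(x)                 # -> (batch, time, 5) per-frame logits

model = TinyRawBasecaller()
logits = model(torch.randn(2, 300))                  # two fake reads of 300 samples
log_probs = logits.log_softmax(-1).transpose(0, 1)   # (time, batch, classes) for CTCLoss
loss = nn.CTCLoss(blank=4)(log_probs,
                           torch.randint(0, 4, (2, 30)),              # fake base labels
                           torch.full((2,), 300, dtype=torch.long),   # input lengths
                           torch.full((2,), 30, dtype=torch.long))    # target lengths
print(loss.item())
```

The key point the sketch demonstrates is that no explicit segmentation is needed: the CTC loss aligns the variable-length base sequence to the much longer sample sequence during training, which is exactly what lets Chiron skip the event-detection step.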
Training of base callers for base composition and modifications

An important aspect of the current base callers is that they require training to optimize the parameters of the HMM or RNN. Consequently, the nature of the training dataset is crucial in determining base caller performance on sequencing data from different biological samples. Depending on the source of the DNA and the sample preparation, sequencing datasets may have different characteristics, such as different base composition or base modifications, that should be sufficiently accounted for during training. Genome structure may vary between species with regard to GC content [38], codon usage [39], and the nature of DNA modifications. While testing BasecRAWller, it became apparent that using an E. coli training set resulted in much higher accuracies on new E. coli sequencing data than on human data [30]. Interestingly, when human training data were used, the accuracies for E. coli and human data were more comparable. In another comparison of base callers, Scrappie v1.1.1 achieved a read accuracy that was more than 2% higher than that achieved by Scrappie v1.1.0 for a bacterial Klebsiella pneumoniae dataset [13]. This improvement in accuracy can most probably be attributed to the fact that v1.1.1 was trained on a mixed set of genomes, whereas v1.1.0 was trained on human data only. Together, these observations indicate that the nature and originating species of the training data play an important role in base caller performance. For now, it remains unclear whether the broad applicability of training data depends on their k-mer diversity or on similarity between species. The latter case may be problematic because it would mean that suboptimal performance can be expected for species for which no training data are available.

As the MinION effectively probes nucleotide structure, chemical modifications such as methyl groups influence the signal. If such DNA modifications are not represented in the training data, they may result in erroneous base calls. This is one of the reasons why a substantial difference in base quality is observed between sequencing runs that use nascent DNA and those that use PCR-amplified DNA. The problem may be solved either by training the parameters to recognize modified bases and to call them as their canonical nucleotide (e.g., 5-mC as C), or by treating them as distinct bases. At the moment, the option to take DNA modifications into account in the base calling has not yet been incorporated into the ONT base callers [37]. Using the open-source base caller Nanonet, however, efforts are being made by the ONT community to include modified bases in the RNN [40]. The feasibility of calling base modifications from MinION data has already been demonstrated by the fact that DNA modifications have successfully been derived for nascent DNA sequencing data after base calling [41,42,43,44].

Modeling strand progression

The detection of homopolymers with nanopores is particularly challenging because consecutive k-mers are identical. As the segmentation step often resulted in more events than actual bases, initial base callers assumed that identical signals were the result of stalling in the pore rather than signals originating from a homopolymer. In order to improve homopolymer calling, a so-called transducer has recently been included in the ONT base caller Scrappie. According to ONT, the transducer enables the separate prediction of k-mer identity and movement [45].
Early results indicate that Scrappie is indeed more successful at calling homopolymers than ONT base callers without the transducer: Scrappie could call homopolymers of up to 20 bases correctly, whereas Nanonet and the Metrichor base caller consistently predicted a length of ∼ 5 bases for all homopolymers with a length greater than 5 bases [3]. The transducer has subsequently been adopted in both the MinKNOW base caller (as of v1.6.11) and in Albacore (as of v1.0.1).

Postsequencing correction

Developments in nanopore chemistry and base-calling algorithms have resulted in a considerable increase in read accuracy over the past few years. Depending on the nature of the sample and the desired application, further improvements in accuracy can be made by performing postsequencing correction. Several correction algorithms are available that make use of three (not mutually exclusive) approaches: (i) consensus finding, (ii) polishing based on raw data, and (iii) hybrid error correction. The last uses short-read data to correct MinION reads or an assembly that is based on MinION reads [9]. As hybrid tools do not use information inherent to nanopore data to improve accuracy, they are not included in this review.

Consensus calling

The generation of multiple alignments of nanopore reads and the extraction of consensus sequences has the potential to eliminate all random errors, leaving only systematic errors that are introduced during sequencing or base calling. As long reads are very useful in genome assembly, several tools that call consensus sequences from MinION data have been developed specifically for this purpose. These postsequencing tools generally perform consensus calling either on reads or on genome assemblies by constructing Partial Order Alignment (POA) graphs (Table 1). Genome assembly tools that implement POA graphs for nanopore consensus calling and read correction include Nanocorrect [15], Racon [46], and Canu [14]. Nanocorrect was shown to improve read accuracy from 80.5% to 97.7% based on 29× coverage [15]. Despite this success, Nanocorrect has been deprecated because it is rather slow, and better-performing assembly pipelines have become available [47]. Racon can either be used for read correction or paired with genome assemblers that do not perform prior read correction [46]. 2D R7.3 reads with a coverage of 54× and a median accuracy of 89.8% were corrected to an accuracy level of 99.25%. When Racon was used to improve genome assemblies computed with Miniasm [16], the assembly accuracy varied between 97.7% (30× coverage) and 99.32% (54× coverage) [46]. Finally, Canu is a genome assembly tool that incorporates POA graphs for read correction [14]. On the same datasets as those used for genome assembly with Miniasm+Racon, Canu obtains accuracies ranging between 96.87% (30× coverage) and 98.61% (54× coverage) [46].

The consensus tools developed for genome assembly rely on the assumption that all reads in a dataset are derived from one homogeneous genetic source. In the case of mixed samples or polyploidy, consensus calling should only be performed on reads known to stem from the same source. Multiple reads derived from the same genetic material can be obtained by experimental methods such as INC-seq, in which tandem copies of a target sequence are generated with circular amplification prior to sequencing [48, 49].
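As a toy illustration of the consensus principle (our code, and a drastic simplification of the POA-based approach used by Racon and Canu; it assumes the reads have already been aligned into gap-padded columns), a per-column majority vote removes random errors:

```python
from collections import Counter

def column_consensus(aligned_reads):
    """Majority vote per column of a gap-padded multiple alignment.
    '-' denotes a gap; a gap majority drops the column from the consensus."""
    consensus = []
    for column in zip(*aligned_reads):
        base, _ = Counter(column).most_common(1)[0]
        if base != '-':
            consensus.append(base)
    return ''.join(consensus)

reads = ["ACGT-ACCT",
         "ACGTTACGT",
         "AC-TTACGT",
         "ACGTTAC-T"]
print(column_consensus(reads))  # ACGTTACGT — random errors are voted out
```

Systematic errors, by contrast, are shared by all reads at the same position and survive the vote, which is why consensus accuracy plateaus and why raw-signal polishing (next section) adds value.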
Consensus polishing from raw signal

Although each base-called read represents the most likely prediction of the underlying nucleotide sequence based on the observed event sequence or raw signal, the raw data retain more information than is represented in the final sequence. For this reason, squiggle or raw data describing overlapping reads can be combined to assess and correct a proposed assembly sequence. Loman et al. [15] used this principle in Nanopolish, a tool aimed at improving (i.e., polishing) draft genome assemblies that were based on nanopore event data. Nanopolish starts by mapping the uncorrected reads to the draft assembly, which represents the initial consensus that is based on all base-called reads. Subsequently, the assembly is divided into overlapping segments that can be processed in parallel. Within each segment, the aligned reads are reverted to their squiggle counterparts as observed during sequencing, defined by the mean current per k-mer. A series of slightly altered candidate sequences is then proposed, and their probabilities given the set of event sequences are compared. These probabilities are obtained by applying the Forward algorithm (Table 1) to an HMM that is structured similarly to the Nanocall HMM. The sequence with the highest probability replaces the segment of the initial assembly, and a new set of modifications is proposed. The process stops after a set number of iterations or when the consensus no longer changes. Finally, all overlapping segments are combined into the final assembly.

Nanopolish has evolved since its initial release and is compatible with the newest sequencing kits and base-calling tools. In addition, Nanopolish has recently obtained a new functionality that allows it to detect and call methylated bases [42]. An online comparison of base callers shows that the methylation-aware option reduces errors in assemblies that are based on nascent DNA sequencing data, since chemical modifications affect the raw signal [13]. Nanopolish is now commonly used to finalize genome assemblies that are based on nanopore data. In general, the application of Nanopolish results in improvements of around 0.1–0.5% [13, 15], but in some cases it may result in an increase of > 2% [13, 14]. Interestingly, draft assemblies that are constructed using base callers that have vastly different performances can be polished to a similar level of accuracy by Nanopolish [13], implying that base-caller performance may not be the limiting factor in genome assembly applications.

Discussion and outlook

Nanopore sequencing offers the possibility of producing long reads from single DNA molecules in real time and has the potential to open up the field of sequencing to many new applications. Since its initial release, the technology marketed by ONT in the form of the MinION has suffered from a high error rate. Thanks to several updates in chemistry and software tools, the raw read accuracy has already increased from < 60% to > 85%. At the moment, data produced by the MinION are sufficiently accurate to create consensus (genome) assemblies of > 99% accuracy. Despite these advances, however, the current nanopore read accuracy still limits robust calling of SNVs and indels, especially in complex samples such as tumors. Here, we have discussed several of the computational strategies that have been used to improve read accuracy since the release of the MinION in 2014.
With regard to base calling, the most notable developments include the switch from HMMs to RNNs and the use of raw current signal instead of segmented signal as input. Moreover, the first steps towards modeling strand progression through the pore in order to achieve better estimates of homopolymer length have shown promising results. Other notable innovations include the use of raw current signal to improve consensus sequences, as implemented by Nanopolish. Recently, ONT released an early version of their own postassembly polishing tool, Medaka (https://nanoporetech.github.io/medaka/index.html), indicating that this is an active field of research that will probably lead to further improvements in postsequencing correction.

Despite the many different analytical tools that have been developed over the past few years, it remains difficult to clearly establish all determinants of read accuracy and error rate. In part, this ambiguity can be attributed to the fact that variable definitions of accuracy and error rate are being used, definitions are not always clearly stated, and results are reported inconsistently (for example, some studies report the median, some the mean, and some a distribution of the error or accuracy). Moreover, accuracy is usually calculated only for a subset of the data, that is, only for alignable, high-quality reads. In order to obtain a full picture of developments in read accuracy, the percentage of alignable reads and the accuracy over all aligned reads need to be reported. Problematically, this is only possible when a high-quality reference genome is available, which may not be the case for many important applications of the MinION. Nevertheless, consistent reports of accuracy will be important to show which contributions are most successful in improving the error rate of nanopore sequencing. The blog post by Wick et al. [13] is a prime example of how the community can make valuable contributions by systematically comparing computational strategies.

To date, improvements in read accuracy have been achieved through four general strategies: (i) improvement of the pore itself (e.g., the evolution from R6 to R9); (ii) the use of library preparation methods that allow for a piece of DNA to be read multiple times (e.g., 2D and 1D2 sequencing); (iii) innovations in base-calling algorithms (e.g., from HMM to RNN); and (iv) the development of postsequencing correction tools (e.g., Nanopolish). Error rates are likely to decrease further with improved pore chemistries, innovative library preparation methods, and better software. For example, rolling circle amplification can be used to create tandem copies of DNA templates [48]; because the copies remain physically linked, this approach is especially suitable for complex samples in which sequence information must be tied to a specific allele, cell, or species. Meanwhile, base callers have developed at a rapid pace in recent years, and they continue to improve. An important future avenue for further improvement may be the use of species- and library preparation-specific training data in the base-calling algorithm. With these prospects in mind, the question arises: is there an inherent ceiling to nanopore read accuracy? It is certainly true that accuracy has improved markedly over the past few years, but sequences with low complexity such as homopolymers are still notoriously difficult to call accurately. Reassuringly, the recent version of the ONT base caller Scrappie demonstrates that homopolymer calling is at least not inherently impossible for the MinION.
Nevertheless, it is unlikely that systematic errors can be abolished completely. This also seems to be acknowledged by ONT, who are actively working on improved pore designs. A particularly promising direction of research is to use pores with multiple recognition sites that are separated by a distance of ~ 15 bp, which would allow variable signal to be detected within homopolymers of up to 30 bp [37]. By clearing the last few percent of errors, these developments may in the near future allow nanopore sequencing to begin to compete seriously with short-read platforms in the robust detection of SNVs and indels in complex samples, and in many more applications.

1. Deamer D, Akeson M, Branton D. Three decades of nanopore sequencing. Nat Biotechnol. 2016;34:518–24.
2. Garalde DR, Snell EA, Jachimowicz D, Sipos B, Lloyd JH, Bruce M, et al. Highly parallel direct RNA sequencing on an array of nanopores. Nat Methods. 2018;15:201–6.
3. Jain M, Koren S, Miga KH, Quick J, Rand AC, Sasani TA, et al. Nanopore sequencing and assembly of a human genome with ultra-long reads. Nat Biotechnol. 2018;36:338–45.
4. Payne A, Holmes N, Rakyan V, Loose M. Whale watching with BulkVis: a graphical viewer for Oxford Nanopore bulk fast5 files. bioRxiv. https://www.biorxiv.org/content/early/2018/05/03/312256
5. Cretu Stancu M, van Roosmalen MJ, Renkens I, Nieboer M, Middelkamp S, et al. Mapping and phasing of structural variation in patient genomes using nanopore sequencing. Nat Commun. 2017;8:1326.
6. Quick J, Loman NJ, Duraffour S, Simpson JT, Severi E, Cowley L, et al. Real-time, portable genome sequencing for Ebola surveillance. Nature. 2016;530:228–32.
7. Faria NR, Sabino EC, Nunes MRT, Alcantara LCJ, Loman NJ, Pybus OG. Mobile real-time surveillance of Zika virus in Brazil. Genome Med. 2016;8:97.
8. Jain M, Olsen HE, Paten B, Akeson M. The Oxford Nanopore MinION: delivery of nanopore sequencing to the genomics community. Genome Biol. 2016;17:239.
9. Goodwin S, Gurtowski J, Ethe-Sayers S, Deshpande P, Schatz MC, McCombie WR. Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic genome. Genome Res. 2015;25:1750–6.
10. Laver T, Harrison J, O'Neill PA, Moore K, Farbos A, Paszkiewicz K, et al. Assessing the performance of the Oxford Nanopore Technologies MinION. Biomol Detect Quantif. 2015;3:1–8.
11. Jain M, Tyson JR, Loose M, Ip CLC, Eccles DA, O'Grady J, et al. MinION analysis and reference consortium: phase 2 data release and analysis of R9.0 chemistry. F1000Res. 2017;6:760.
12. Tyson JR, O'Neil NJ, Jain M, Olsen HE, Hieter P, Snutch TP. Whole genome sequencing and assembly of a Caenorhabditis elegans genome with complex genomic rearrangements using the MinION sequencing device. bioRxiv. 2017:099143. https://doi.org/10.1101/099143
13. Wick RR, Judd LM, Holt KE. Comparison of Oxford Nanopore basecalling tools. Zenodo. 2018. https://zenodo.org/record/1188469#.Ww0upI-cGM8. Accessed 29 May 2018.
14. Koren S, Walenz BP, Berlin K, Miller JR, Bergman NH, Phillippy AM. Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation. Genome Res. 2017;27:722–36.
15. Loman NJ, Quick J, Simpson JT. A complete bacterial genome assembled de novo using only nanopore sequencing data. Nat Methods. 2015;12:733–5.
16. Li H. Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences. Bioinformatics. 2016;32:2103–10.
17. Ebler J, Haukness M, Pesout T, Marschall T, Paten B. Haplotype-aware genotyping from noisy long reads. bioRxiv. 2018:293944. https://doi.org/10.1101/293944
18. Euskirchen P, Bielle F, Labreche K, Kloosterman WP, Rosenberg S, Daniau M, et al. Same-day genomic and epigenomic diagnosis of brain tumors using real-time nanopore sequencing. Acta Neuropathol. 2017;134:691–703.
19. Brown CG. Oxford Nanopore Technologies: "No Thanks, I've Already Got One." https://www.youtube.com/watch?v=nizGyutn6v4. Streamed live 8 March 2016. Accessed 29 May 2018.
20. Goyal P, Krasteva PV, Van Gerven N, Gubellini F, Van den Broeck I, Troupiotis-Tsaïlaki A, et al. Structural and mechanistic insights into the bacterial amyloid secretion channel CsgG. Nature. 2014;516:250–3.
21. Oxford Nanopore Technologies. 1Dsquared kit available in the store: boost accuracy, simple prep. 2017. https://nanoporetech.com/about-us/news/1d-squared-kit-available-store-boost-accuracy-simple-prep. Accessed 20 Apr 2018.
22. Brown CG. Oxford Nanopore Technologies: GridION X5 the sequel. https://www.youtube.com/results?search_query=Oxford+Nanopore+Technologies%3A+GridION+X5+The+Sequel+. Streamed live March 2017. Accessed 29 May 2018.
23. Brown CG. Oxford Nanopore Technologies: owl stretching with examples. https://www.youtube.com/watch?v=JmncdnQgaIE. Streamed live Feb 2016. Accessed 29 May 2018.
24. Manrao EA, Derrington IM, Laszlo AH, Langford KW, Hopper MK, Gillgren N, et al. Reading DNA at single-nucleotide resolution with a mutant MspA nanopore and phi29 DNA polymerase. Nat Biotechnol. 2012;30:349–53.
25. Cherf GM, Lieberman KR, Rashid H, Lam CE, Karplus K, Akeson M. Automated forward and reverse ratcheting of DNA in a nanopore at 5-Å precision. Nat Biotechnol. 2012;30:344–8.
26. Sarkozy P, Jobbágy Á, Antal P. Calling homopolymer stretches from raw nanopore reads by analyzing k-mer dwell times. In: Eskola H, Väisänen O, Viik J, Hyttinen J, editors. EMBEC & NBC 2017. Singapore: Springer Singapore; 2018. p. 241–4.
27. Butler TZ, Pavlenok M, Derrington IM, Niederweis M, Gundlach JH. Single-molecule DNA detection with an engineered MspA protein nanopore. Proc Natl Acad Sci U S A. 2008;105(52):20647.
28. Ip CLC, Loose M, Tyson JR, de Cesare M, Brown BL, Jain M, et al. MinION analysis and reference consortium: phase 1 data release and analysis. F1000Res. 2015;4:1075.
29. David M, Dursi LJ, Yao D, Boutros PC, Simpson JT. Nanocall: an open source basecaller for Oxford Nanopore sequencing data. Bioinformatics. 2017;33:49–55.
30. Stoiber M, Brown J. BasecRAWller: streaming nanopore basecalling directly from raw signal. bioRxiv. 2017:133058. https://www.biorxiv.org/content/early/2017/05/01/133058
31. Deschamps S, Mudge J, Cameron C, Ramaraj T, Anand A, Fengler K, et al. Characterization, correction and de novo assembly of an Oxford Nanopore genomic dataset from Agrobacterium tumefaciens. Sci Rep. 2016;6:28625.
32. Teng H, Cao MD, Hall MB, Duarte T, Wang S, Coin LJM. Chiron: translating nanopore raw signal directly into nucleotide sequence using deep learning. GigaScience. 2018;7:giy037. https://doi.org/10.1093/gigascience/giy037
33. Kilianski A, Haas JL, Corriveau EJ, Liem AT, Willis KL, Kadavy DR, et al. Bacterial and viral identification and differentiation by amplicon sequencing on the MinION nanopore sequencer. GigaScience. 2015;4:12.
34. Brown CG. Oxford Nanopore Technologies: a wafer thin update. 2016. https://nanoporetech.com/resource-centre/videos/wafer-thin-update. Accessed 29 May 2018.
35. Boža V, Brejová B, Vinař T. DeepNano: deep recurrent neural networks for base calling in MinION nanopore reads. PLoS One. 2017;12:e0178751.
36. Loose M, Malla S, Stout M. Real-time selective sequencing using nanopore technology. Nat Methods. 2016;13:751–4.
37. Brown CG. Oxford Nanopore Technologies: some mundane and fundamental updates. https://www.youtube.com/watch?v=7pIpf-jj-7w. Streamed live 18 June 2017. Accessed 29 May 2018.
38. Sueoka N. On the genetic basis of variation and heterogeneity of DNA base composition. Proc Natl Acad Sci U S A. 1962;48:582–92.
39. Grantham R, Gautier C, Gouy M, Jacobzone M, Mercier R. Codon catalog usage is a genome strategy modulated for gene expressivity. Nucleic Acids Res. 1981;9:r43–74.
40. Gigante S. In-house training of the nanonet local basecaller: opportunities and challenges. Oxford Nanopore Technologies. 2017. https://nanoporetech.com/resource-centre/talk/house-training-nanonet-local-basecaller-opportunities-and-challenges. Accessed 20 Apr 2018.
41. Stoiber MH, Quick J, Egan R, Lee JE, Celniker SE, Neely R, et al. De novo identification of DNA modifications enabled by genome-guided nanopore signal processing. bioRxiv. 2017:094672. https://doi.org/10.1101/094672
42. Simpson JT, Workman RE, Zuzarte PC, David M, Dursi LJ, Timp W. Detecting DNA cytosine methylation using nanopore sequencing. Nat Methods. 2017;14:407–10.
43. Rand AC, Jain M, Eizenga JM, Musselman-Brown A, Olsen HE, Akeson M, et al. Mapping DNA methylation with high-throughput nanopore sequencing. Nat Methods. 2017;14:411–3.
44. Oxford Nanopore Technologies. Tombo: detection of non-standard nucleotides using the genome-resolved raw nanopore signal. https://nanoporetech.com/resource-centre/posters/tombo-detection-non-standard-nucleotides-using-genome-resolved-raw-nanopore. Accessed Apr 2018.
45. Brown CG. Oxford Nanopore Technologies: Nanopore community meeting plenary talk. 2016. https://nanoporetech.com/resource-centre/videos/we-need-better-name-follow-through. Accessed 29 May 2018.
46. Vaser R, Sović I, Nagarajan N, Šikić M. Fast and accurate de novo genome assembly from long uncorrected reads. Genome Res. 2017;27:737–46.
47. Simpson J. Deprecating Nanocorrect. 2016. http://simpsonlab.github.io/2016/02/25/deprecating-nanocorrect/. Accessed 20 Apr 2018.
48. Li C, Chng KR, Boey EJH, Ng AHQ, Wilm A, Nagarajan N. INC-Seq: accurate single molecule reads using nanopore sequencing. Gigascience. 2016;5:34.
49. Salk JJ, Schmitt MW, Loeb LA. Enhancing the accuracy of next-generation sequencing for detecting rare and subclonal mutations. Nat Rev Genet. 2018;19:269–85.
50. Timp W, Nice AM, Nelson EM, Kurz V, McKelvey K, Timp G. Think small: nanopores for sensing and synthesis. IEEE Access. 2014;2:1396–408.
51. Ashton PM, Nair S, Dallman T, Rubino S, Rabsch W, Mwaigwisya S, et al. MinION nanopore sequencing identifies the position and structure of a bacterial antibiotic resistance island. Nat Biotechnol. 2015;33:296–300.
52. Jain M, Fiddes IT, Miga KH, Olsen HE, Paten B, Akeson M. Improved data analysis for the MinION nanopore sequencer. Nat Methods. 2015;12:351–6.
53. Hargreaves AD, Mulley JF. Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing. PeerJ. 2015;3:e1441.
54. Norris AL, Workman RE, Fan Y, Eshleman JR, Timp W. Nanopore sequencing detects structural variants in cancer. Cancer Biol Ther. 2016;17:246–53.
55. Suzuki A, Suzuki M, Mizushima-Sugano J, Frith MC, Makalowski W, Kohno T, et al. Sequencing and phasing cancer mutations in lung cancers using a long-read portable sequencer. DNA Res. 2017;24:585–96.
56. Graves A, Fernández S, Gomez F, Schmidhuber J. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In: ICML '06: Proceedings of the 23rd International Conference on Machine Learning. Association for Computing Machinery (ACM); 2006. p. 369–76.
57. Kim P. Convolutional neural network. In: Kim P, editor. MATLAB deep learning: with machine learning, neural networks and artificial intelligence. Berkeley: Apress; 2017. p. 121–47.
58. Durbin R, Eddy SR, Krogh A, Mitchison G. Biological sequence analysis: probabilistic models of proteins and nucleic acids. Cambridge: Cambridge University Press; 1998.
59. Eddy SR. What is a hidden Markov model? Nat Biotechnol. 2004;22:1315–6.
60. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–80.
61. Gers FA, Schmidhuber J, Cummins F. Learning to forget: continual prediction with LSTM. Neural Comput. 2000;12:2451–71.
62. Lee C, Grasso C, Sharlow MF. Multiple sequence alignment using partial order graphs. Bioinformatics. 2002;18:452–64.
63. Medsker L, Jain LC. Recurrent neural networks: design and applications. Boca Raton: CRC Press; 1999.

JdR is supported by The Netherlands Organization for Scientific Research (NWO-Vidi: 639.072.715). This work is supported by a grant from Utrecht University for establishing a single-molecule sequencing facility.

Department of Genetics, Center for Molecular Medicine, University Medical Center Utrecht, Utrecht University, 3584 CG, Utrecht, The Netherlands
Franka J. Rang, Wigard P. Kloosterman & Jeroen de Ridder

FJR drafted the first version of the manuscript with guidance from JdR. WPK and JdR contributed major parts of the manuscript and revised the manuscript. All authors read and approved the final manuscript.

Correspondence to Wigard P. Kloosterman or Jeroen de Ridder.

WPK and JdR have received reimbursement of travel and accommodation expenses to speak at meetings organized by Oxford Nanopore Technologies. FJR declares that they have no competing interests.

Additional file: Supplemental Figure S1 and Table S1. (DOCX 75 kb)

Rang, F.J., Kloosterman, W.P. & de Ridder, J. From squiggle to basepair: computational approaches for improving nanopore sequencing read accuracy. Genome Biol 19, 90 (2018). https://doi.org/10.1186/s13059-018-1462-9
Stirling numbers of the second kind - proof

For a fixed integer $k$, how would I prove that $$\sum_{n\ge k} \left\{n \atop k\right\}x^n= \frac{x^k}{(1-x)(1-2x)\cdots(1-kx)},$$ where $\left\{n \atop k\right\}=k\left\{n-1 \atop k\right\}+\left\{n-1 \atop k-1 \right\}$?

Comments:
– Use the recurrence for the Stirling numbers. (Angina Seng)
– There are some basic proofs at the following MSE link. (Marko Riedel)

Answer (Jack D'Aurizio):

The definition of ${n\brace k}$ through the recursion is easily seen to be equivalent to the combinatorial definition: "the number of ways of partitioning a set with $n$ elements into $k$ non-empty subsets".

$m^n$ can be interpreted as the number of functions from $[1,n]$ to $[1,m]$: if we classify them according to the cardinality of their range, we may easily check that
$$ m^n =\sum_{k=1}^{n}{n\brace k}k! \binom{m}{k} \tag{1}$$
i.e. Stirling numbers of the second kind allow one to decompose monomials into linear combinations of binomial coefficients. In equivalent terms, ${n\brace k}k!$ is the number of surjective functions from $[1,n]$ to $[1,k]$, and
$${n \brace k}k!=\sum_{j=0}^{k}\binom{k}{j}j^n (-1)^{k-j} \tag{2}$$
follows by inclusion-exclusion. $(2)$ is a natural counterpart of $(1)$, and it leads to
$$ {n\brace k} x^n = \sum_{j=0}^{k}\frac{x^n j^n (-1)^{k-j}}{j!(k-j)!} \tag{3}$$
and then, summing the geometric series over $n\geq k$, to:
$$ \sum_{n\geq k}{n\brace k}x^n = x^k\sum_{j=0}^{k}\frac{j^k (-1)^{k-j}}{j!(k-j)!(1-jx)}=x^k\sum_{j=1}^{k}\frac{j^k}{k!}\binom{k}{j}\frac{(-1)^{k-j}}{1-jx}.\tag{4}$$
By residues or equivalent techniques, the last sum can easily be checked to be the partial fraction decomposition of $\frac{1}{(1-x)(1-2x)\cdot\ldots\cdot(1-kx)}$, proving the claim.
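Not part of the answer above, but as a quick numeric sanity check one can compare both sides of the identity in a few lines of Python, using the recurrence from the question and sympy's series expansion:

```python
from sympy import symbols, series

x = symbols('x')

def stirling2(n, k):
    """Stirling numbers of the second kind via the recurrence in the question."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

k, N = 3, 10
rhs = x**k
for i in range(1, k + 1):
    rhs /= (1 - i * x)  # x^k / ((1-x)(1-2x)...(1-kx))
expansion = series(rhs, x, 0, N + 1).removeO()
for n in range(k, N + 1):
    assert expansion.coeff(x, n) == stirling2(n, k)
print(f"identity verified up to x^{N} for k={k}")
```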
Product (mathematics)

In mathematics, a product is the result of multiplication, or an expression that identifies objects (numbers or variables) to be multiplied, called factors. For example, 21 is the product of 3 and 7 (the result of multiplication), and $x\cdot (2+x)$ is the product of $x$ and $(2+x)$ (indicating that the two factors should be multiplied together). When one factor is an integer, the product is called a multiple.

Terminology of the basic arithmetic operations:
• Addition (+): term + term, summand + summand, addend + addend, or augend + addend = sum
• Subtraction (−): term − term, or minuend − subtrahend = difference
• Multiplication (×): factor × factor, or multiplier × multiplicand = product
• Division (÷): dividend/divisor, or numerator/denominator = fraction, quotient, or ratio
• Exponentiation (^): base^exponent = power
• nth root (√): the degree-th root of the radicand = root
• Logarithm (log): log_base(anti-logarithm) = logarithm

The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is non-commutative, and so is multiplication in other algebras in general as well.

There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures.

Product of two numbers

This section is an excerpt from Multiplication § Definitions. The product of two numbers or the multiplication between two numbers can be defined for common special cases: integers, natural numbers, fractions, real numbers, complex numbers, and quaternions.

Product of a sequence

See also: Multiplication § Product of a sequence

The product operator for the product of a sequence is denoted by the capital Greek letter pi Π (in analogy to the use of the capital Sigma Σ as summation symbol).[1] For example, the expression $\textstyle \prod _{i=1}^{6}i^{2}$ is another way of writing $1\cdot 4\cdot 9\cdot 16\cdot 25\cdot 36$.[2] The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1.
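A short illustration of the sequence-product notation and the empty-product convention just described (our example, not part of the article; `math.prod` has been in Python's standard library since version 3.8):

```python
import math

# Π_{i=1}^{6} i²  =  1·4·9·16·25·36
print(math.prod(i**2 for i in range(1, 7)))  # 518400

# The empty product is the multiplicative identity, 1
print(math.prod([]))  # 1
```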
Commutative rings

Commutative rings have a product operation.

Residue classes of integers

Main article: residue class

Residue classes in the rings $\mathbb {Z} /N\mathbb {Z} $ can be added: $(a+N\mathbb {Z} )+(b+N\mathbb {Z} )=a+b+N\mathbb {Z} $ and multiplied: $(a+N\mathbb {Z} )\cdot (b+N\mathbb {Z} )=a\cdot b+N\mathbb {Z} $

Convolution

Main article: convolution

Two functions from the reals to the reals can be multiplied in another way, called the convolution. If

$\int \limits _{-\infty }^{\infty }|f(t)|\,\mathrm {d} t<\infty \qquad {\mbox{and}}\qquad \int \limits _{-\infty }^{\infty }|g(t)|\,\mathrm {d} t<\infty ,$

then the integral

$(f*g)(t)\;:=\int \limits _{-\infty }^{\infty }f(\tau )\cdot g(t-\tau )\,\mathrm {d} \tau $

is well defined and is called the convolution. Under the Fourier transform, convolution becomes point-wise function multiplication.

Polynomial rings

Main article: polynomial ring

The product of two polynomials is given by the following:

$\left(\sum _{i=0}^{n}a_{i}X^{i}\right)\cdot \left(\sum _{j=0}^{m}b_{j}X^{j}\right)=\sum _{k=0}^{n+m}c_{k}X^{k}$

with

$c_{k}=\sum _{i+j=k}a_{i}\cdot b_{j}$

Products in linear algebra

There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections.

Scalar multiplication

Main article: scalar multiplication

By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map $\mathbb {R} \times V\rightarrow V$.

Scalar product

Main article: scalar product

A scalar product is a bi-linear map $\cdot :V\times V\rightarrow \mathbb {R} $ with the additional condition that $v\cdot v>0$ for all $0\not =v\in V$. From the scalar product, one can define a norm by letting $\|v\|:={\sqrt {v\cdot v}}$. The scalar product also allows one to define an angle between two vectors:

$\cos \angle (v,w)={\frac {v\cdot w}{\|v\|\cdot \|w\|}}$

In $n$-dimensional Euclidean space, the standard scalar product (called the dot product) is given by:

$\left(\sum _{i=1}^{n}\alpha _{i}e_{i}\right)\cdot \left(\sum _{i=1}^{n}\beta _{i}e_{i}\right)=\sum _{i=1}^{n}\alpha _{i}\,\beta _{i}$

Cross product in 3-dimensional space

Main article: cross product

The cross product of two vectors in 3 dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors. The cross product can also be expressed as the formal[note 1] determinant:

$\mathbf {u\times v} ={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\u_{1}&u_{2}&u_{3}\\v_{1}&v_{2}&v_{3}\\\end{vmatrix}}$

Composition of linear mappings

Main article: function composition

A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying[3]

$f(t_{1}x_{1}+t_{2}x_{2})=t_{1}f(x_{1})+t_{2}f(x_{2}),\forall x_{1},x_{2}\in V,\forall t_{1},t_{2}\in \mathbb {F} .$

If one only considers finite dimensional vector spaces, then

$f(\mathbf {v} )=f\left(v_{i}\mathbf {b_{V}} ^{i}\right)=v_{i}f\left(\mathbf {b_{V}} ^{i}\right)={f^{i}}_{j}v_{i}\mathbf {b_{W}} ^{j},$

in which bV and bW denote the bases of V and W, and vi denotes the component of v on bVi, and the Einstein summation convention is applied.

Now we consider the composition of two linear mappings between finite dimensional vector spaces.
Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then one can get

$g\circ f(\mathbf {v} )=g\left({f^{i}}_{j}v_{i}\mathbf {b_{W}} ^{j}\right)={g^{j}}_{k}{f^{i}}_{j}v_{i}\mathbf {b_{U}} ^{k}.$

Or in matrix form:

$g\circ f(\mathbf {v} )=\mathbf {G} \mathbf {F} \mathbf {v} ,$

in which the i-row, j-column element of F, denoted by Fij, is fji, and Gij=gji. The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplication.

Product of two matrices

Main article: matrix product

Given two matrices $A=(a_{i,j})_{i=1\ldots s;j=1\ldots r}\in \mathbb {R} ^{s\times r}$ and $B=(b_{j,k})_{j=1\ldots r;k=1\ldots t}\in \mathbb {R} ^{r\times t}$, their product is given by

$A\cdot B=\left(\sum _{j=1}^{r}a_{i,j}\cdot b_{j,k}\right)_{i=1\ldots s;k=1\ldots t}\;\in \mathbb {R} ^{s\times t}$

Composition of linear functions as matrix product

There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of vector spaces U, V and W. Let ${\mathcal {U}}=\{u_{1},\ldots ,u_{r}\}$ be a basis of U, ${\mathcal {V}}=\{v_{1},\ldots ,v_{s}\}$ be a basis of V and ${\mathcal {W}}=\{w_{1},\ldots ,w_{t}\}$ be a basis of W. In terms of these bases, let $A=M_{\mathcal {V}}^{\mathcal {U}}(f)\in \mathbb {R} ^{s\times r}$ be the matrix representing f : U → V and $B=M_{\mathcal {W}}^{\mathcal {V}}(g)\in \mathbb {R} ^{t\times s}$ be the matrix representing g : V → W. Then

$B\cdot A=M_{\mathcal {W}}^{\mathcal {U}}(g\circ f)\in \mathbb {R} ^{t\times r}$

is the matrix representing $g\circ f:U\rightarrow W$. In other words: the matrix product is the description in coordinates of the composition of linear functions.

Tensor product of vector spaces

Main article: Tensor product

Given two finite dimensional vector spaces V and W, the tensor product of them can be defined as a (2,0)-tensor satisfying:

$V\otimes W(v,w)=V(v)W(w),\forall v\in V^{*},\forall w\in W^{*},$

where V* and W* denote the dual spaces of V and W.[4]

For infinite-dimensional vector spaces, one also has the:
• Tensor product of Hilbert spaces
• Topological tensor product.

The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously-fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices).

The class of all objects with a tensor product

In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why it is that tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product.

Other products in linear algebra

Other kinds of products in linear algebra include:
• Hadamard product
• Kronecker product
• The product of tensors:
  • Wedge product or exterior product
  • Interior product
  • Outer product
  • Tensor product

Cartesian product

In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets.
That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b)—where a ∈ A and b ∈ B.[5]

The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects.

Empty product

The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory.

Products over other algebraic structures

Products over other kinds of algebraic structures include:
• the Cartesian product of sets
• the direct product of groups, and also the semidirect product, knit product and wreath product
• the free product of groups
• the product of rings
• the product of ideals
• the product of topological spaces[1]
• the Wick product of random variables
• the cap, cup, Massey and slant product in algebraic topology
• the smash product and wedge sum (sometimes called the wedge product) in homotopy

A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory.

Products in category theory

All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. But also, in category theory, one has:
• the fiber product or pullback,
• the product category, a category that is the product of categories,
• the ultraproduct, in model theory,
• the internal product of a monoidal category, which captures the essence of a tensor product.

Other products

• A function's product integral (as a continuous equivalent to the product of a sequence, or as the multiplicative version of the normal/standard/additive integral). The product integral is also known as "continuous product" or "multiplical".
• Complex multiplication, a theory of elliptic curves.

See also

• Deligne tensor product of abelian categories
• Indefinite product
• Infinite product
• Iterated binary operation – Repeated application of an operation to a sequence
• Multiplication – Arithmetical operation

Notes

1. Here, "formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the cross product.

References

1. Weisstein, Eric W. "Product". mathworld.wolfram.com. Retrieved 2020-08-16.
2. "Summation and Product Notation". math.illinoisstate.edu. Retrieved 2020-08-16.
3. Clarke, Francis (2013). Functional analysis, calculus of variations and optimal control. Dordrecht: Springer. pp. 9–10. ISBN 978-1447148203.
4. Boothby, William M. (1986). An introduction to differentiable manifolds and Riemannian geometry (2nd ed.). Orlando: Academic Press. p. 200. ISBN 0080874398.
5. Moschovakis, Yiannis (2006). Notes on set theory (2nd ed.). New York: Springer. p. 13. ISBN 0387316094.
\begin{document}
\title{Position-momentum correlations in matter waves double-slit experiment}
\author{J. S. M. Neto, I. G. da Paz\footnote{Corresponding author.}}
\affiliation{Departamento de F\'{\i}sica, Universidade Federal do Piau\'{\i}, Campus Ministro Petr\^{o}nio Portela, CEP 64049-550, Teresina, PI, Brazil.}
\author{L. A. Cabral}
\affiliation{Curso de F\'{\i}sica, Universidade Federal do Tocantins, Caixa Postal 132, CEP 77804-970, Aragua\'{\i}na, TO, Brazil.}
\begin{abstract}
We present a treatment of the double-slit interference of matter waves represented by Gaussian wavepackets. The interference pattern is modelled with the Green's function propagator, which emphasizes the coordinate correlations and phases. We explore the connection between phases and position-momentum correlations in the intensity, visibility, and predictability of the wavepacket interference. This formulation indicates aspects that can be useful for the theoretical and experimental treatment of particle, atom, or molecule interferometry.
\end{abstract}
\pacs{03.65.Xp; 03.65.Yz; 32.80.-t \\ \\ {\it Keywords}: Matter waves, Double-slit experiment, Position-momentum correlations}
\maketitle

\section{Introduction}

The double-slit experiment illustrates the essential mystery of quantum mechanics \cite{Faynman}. Under different circumstances, the same physical system can exhibit either a particle-like or a wave-like behaviour, otherwise known as wave-particle duality \cite{Bohr}. Double-slit experiments with matter waves were performed by M\"{o}llenstedt and J\"{o}nsson for electrons \cite{Jonsson}, by Zeilinger et al. for neutrons \cite{Zeilinger1}, by Carnal and Mlynek for atoms \cite{Carnal}, by Sch\"{o}llkopf and Toennies for small molecules \cite{Toennies}, and by Zeilinger et al. for macromolecules \cite{Zeilinger2}.

Position-momentum correlations have been studied and interpreted in several textbooks, where the most frequently treated example is the simple Gaussian, minimum-uncertainty wavepacket solution of the Schr\"{o}dinger equation for a free particle. Such a wavepacket exhibits no position-momentum correlations at $t=0$; the correlations appear only with the passage of time \cite{Bohm,Saxon}. How the phases of the wave function influence the existence of position-momentum correlations is also explained in Ref. \cite{Bohm}. Later, it was shown that squeezed states or linear combinations of Gaussian states can exhibit initial correlations, i.e., correlations that do not depend on the time evolution \cite{Robinett,Riahi,Dodonov,Campos}. The qualitative changes in the interference pattern as a function of increasing position-momentum correlations were studied in Ref. \cite{Carol}. In addition, it was shown that the Gouy phase of matter waves is directly related to the position-momentum correlations, as first studied in Refs. \cite{Paz1,Paz2}. The Gouy phase of matter waves was experimentally observed in different systems, such as Bose-Einstein condensates \cite{cond}, electron vortex beams \cite{elec2}, and astigmatic electron matter waves using in-line holography \cite{elec1}. More recently, it was observed that the position-momentum correlations can provide further insight into the formation of above-threshold ionization (ATI) spectra in electron-ion scattering in strong laser fields \cite{Kull}.
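As a minimal worked example (a standard free-particle textbook result, added here for orientation; it is not taken from the notation of the cited works): for the minimum-uncertainty Gaussian state $\psi_{0}(x)\propto\exp(-x^{2}/2\sigma_{0}^{2})$ one has $\langle \hat{x}^{2}\rangle=\sigma_{0}^{2}/2$ and $\langle \hat{p}^{2}\rangle=\hbar^{2}/2\sigma_{0}^{2}$, and under free evolution Ehrenfest's theorem gives
\begin{equation}
\sigma_{xp}(t)=\frac{1}{2}\langle \hat{x}\hat{p}+\hat{p}\hat{x}\rangle=\frac{\langle \hat{p}^{2}\rangle}{m}\,t=\frac{\hbar^{2}t}{2m\sigma_{0}^{2}},
\end{equation}
so the position-momentum correlations vanish at $t=0$ and grow linearly in time, as anticipated above.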
In this work, we use the previously developed ideas on position-momentum correlations to analyze the Gaussian features of the wavepacket and the interference pattern, as well as the wave-like and particle-like behavior, in the double-slit experiment with matter waves. Before reaching the double-slit setup, the particle is represented by a simple Gaussian wavepacket and, after the double-slit apparatus, the particle is represented by a linear combination of two identical Gaussian wavepackets coming from the two slits. After the double-slit, the position and momentum of the particle will be correlated even if the time evolution from the source to the double-slit is zero. The correlations will be changed by the evolution, enabling us to extract some information about the interference pattern. In section II we present the model for the double-slit experiment, considering that the matter wave propagates for a time $t$ from the source to the double-slit and for a time $\tau$ from the double-slit to the screen. Further, we calculate the wave functions for the passage through each slit using the Green's function for the free particle. In section III, we calculate the position-momentum correlations and the generalized Robertson-Schrödinger uncertainty relation for the state that is a linear combination of the states which passed through each slit. In section IV, we calculate the intensity, visibility and predictability to analyze the interference pattern in terms of the position-momentum correlations. \section{Double-Slit Experiment}\label{model} In this section we return to the double-slit experiment and analyze the effect of the position-momentum correlations on the interference pattern. We consider that a coherent Gaussian wavepacket of initial width $\sigma_{0}$ propagates during a time $t$ before arriving at a double-slit that divides it into two Gaussian wavepackets. After the double-slit, the two wavepackets propagate during a time $\tau$ until they reach the detection screen, where they are recombined and the interference pattern is observed as a function of the transverse coordinate $x$. As we will see, the number of interference fringes and their quality are dramatically influenced by the propagation times $t$ and $\tau$. In particular, there is a value of time $t_{max}(\tau)$ for which the number of fringes tends to a minimum. This value of time corresponds to a maximum separation of the wavepackets on the screen and is associated with a maximum of the position-momentum correlations. On the other hand, if the source of particles is positioned in such a way that, before arriving at the screen, the particles travel during a time interval which is not close to $t_{max}(\tau)$, the number of interference fringes and their quality increase significantly.
The wavefunction at the screen for the wave that passes through slit $1$ $(+)$ or slit $2$ $(-)$ is given by \begin{equation} \psi_{1,2}(x,t,\tau) =\int_{-\infty}^{+\infty} dx_{j}\int_{-\infty}^{+\infty}dx_{i}G_{2}(x,t+\tau;x_{j},t)F(x_{j}\pm d/2)G_{1}(x_{j},t;x_{i},0)\psi_{0}(x_{i}), \end{equation} where \begin{equation} G_{1}(x_{j},t;x_{i},0)=\sqrt{\frac{m}{2\pi i\hbar t}}\exp\left[\frac{im(x_{j}-x_{i})^{2}}{2\hbar t}\right], \end{equation} \begin{equation} G_{2}(x,t+\tau;x_{j},t)=\sqrt{\frac{m}{2\pi i\hbar\tau}}\exp\left[\frac{im(x-x_{j})^{2}}{2\hbar\tau}\right], \end{equation} \begin{equation} F(x_{j}\pm d/2)=\frac{1}{\sqrt{\beta\sqrt{\pi}}}\exp\left[-\frac{(x_{j}\pm d/2)^{2}}{2\beta^{2}}\right], \end{equation} and \begin{equation} \psi_{0}(x_{i})=\frac{1}{\sqrt{\sigma_{0}\sqrt{\pi}}}\exp\left(-\frac{x_{i}^{2}}{2\sigma_{0}^{2}}\right). \end{equation} The kernels $G_{1}(x_{j},t;x_{i},0)$ and $G_{2}(x,t+\tau;x_{j},t)$ are the free propagators for the particle, and the functions $F(x_{j}\pm d/2)$ describe the double-slit apertures, which are taken to be Gaussians of width $\beta$ separated by a distance $d$; $\sigma_{0}$ is the transverse width of the first slit, where the packet was prepared, $m$ is the mass of the particle, and $t$ ($\tau$) is the time of flight from the first slit (double-slit) to the double-slit (screen). We will also consider that the energy associated with the momentum of the atoms in the $z$-direction is very high, so that we can treat the motion in this direction classically, with the time component given by $z/v_{z}$. This model is presented in Fig. 1 together with a qualitative illustration of the interference pattern for three different values of time $t$, maintaining $\tau$ constant. \begin{figure} \caption{Sketch of the double-slit experiment. A Gaussian wavepacket of transverse width $\sigma_{0}$ propagates for a time $t$ before reaching the double-slit and for a time $\tau$ from the double-slit to the screen. The slit apertures are taken to be Gaussian of width $\beta$ and separated by a distance $d$. We also show the qualitative interference pattern considering that the wavepacket propagates for the time $t$ (red), $t_{max}$ (blue) or $t^{\prime}$ (purple).
For $t_{max}$ the number of interference fringes is minimum.} \label{Figure1} \end{figure} After some algebraic manipulations, we obtain for the wave that passed through slit $1$ the following result \begin{equation} \psi_{1}(x,t,\tau) = \frac{1}{\sqrt{B\sqrt{\pi}}}\exp \left[-\frac{(x+D/2)^{2}}{2B^{2}}\right]\exp \left(\frac{imx^2}{2\hbar R} + i\Delta x + i\theta+ i\mu\right), \end{equation} where \begin{equation} B^{2}(t,\tau) =\frac{\left(\frac{1}{\beta^{2}}+\frac{1}{b^{2}}\right)^{2}+\frac{m^{2}}{\hbar^{2}}\left(\frac{1}{\tau}+\frac{1}{r}\right)^{2}} {\left(\frac{m}{\hbar\tau}\right)^{2}\left(\frac{1}{\beta^{2}}+\frac{1}{b^{2}}\right)}, \end{equation} \begin{equation} R(t,\tau)=\tau\frac{\left(\frac{1}{\beta^{2}}+\frac{1}{b^{2}}\right)^{2}+\frac{m^{2}}{\hbar^{2}}\left(\frac{1}{\tau}+\frac{1}{r}\right)^{2}} {\left(\frac{1}{\beta^{2}}+\frac{1}{b^{2}}\right)^{2}+\frac{t}{\sigma_{0}^{2}b^{2}}\left(\frac{1}{\tau}+\frac{1}{r}\right)}, \end{equation} \begin{equation} \Delta(t,\tau) = \dfrac{\tau\sigma_{0}^{2}d}{2\tau_0\beta^{2}B^{2}}, \end{equation} \begin{equation} D(t,\tau)=\frac{\left(1+\frac{\tau}{r}\right)}{\left(1+\frac{\beta^{2}}{b^{2}}\right)}d, \end{equation} \begin{equation} \theta(t,\tau)=\frac{md^{2}\left(\frac{1}{\tau}+\frac{1}{r}\right)} {8\hbar \beta^{4}\left[\left(\frac{1}{\beta^{2}}+\frac{1}{b^{2}}\right)^{2}+\frac{m^{2}}{\hbar^{2}}\left(\frac{1}{\tau}+\frac{1}{r}\right)^{2}\right]}, \end{equation} \begin{equation} \mu(t,\tau) = -\dfrac{1}{2}\arctan\left[\frac{(\frac{t}{\tau_{0}})+\frac{1}{m}(\frac{\hbar \tau r}{\tau+r})(\frac{1}{\beta^{2}}+\frac{1}{b^{2}})}{1-\frac{1}{m}(\frac{t}{\tau_{0}})(\frac{\hbar \tau r}{\tau+r})(\frac{1}{\beta^{2}}+\frac{1}{b^{2}})}\right], \end{equation} \begin{equation} b^2(t) = \sigma_{0}^{2}\left[ 1 + \left(\dfrac{t}{\tau_0}\right)^2 \right], \end{equation} and \begin{equation} r(t)=t\left[1+\left(\frac{\tau_{0}}{t}\right)^{2}\right]. \end{equation} In order to obtain the expressions for the wave passing through slit $2$, we just have to replace the parameter $d$ by $-d$ in the expressions corresponding to the wave passing through the first slit. Here, the parameter $B(t,\tau)$ is the beam width for the propagation through one slit, $R(t,\tau)$ is the radius of curvature of the wavefronts for the propagation through one slit, $b(t)$ is the beam width for the free propagation and $r(t)$ is the radius of curvature of the wavefronts for the free propagation. $D(t,\tau)$ is the separation between the wavepackets produced in the double-slit. $\Delta(t,\tau)x$ is a phase which varies linearly with the transverse coordinate. $\theta(t,\tau)$ and $\mu(t,\tau)$ are time-dependent phases and they are relevant only if the slits have different widths. $\mu(t,\tau)$ is the Gouy phase for the propagation through one slit. Knowing how this phase depends on time, and particularly on the slit width, can provide some guidance in the design of new double-slit experiments with matter waves. $\tau_{0}=m\sigma_{0}^{2}/\hbar$ is an intrinsic time scale which essentially corresponds to the time during which a distance of the order of the wavepacket extension is traversed with a speed corresponding to the dispersion in velocity. It is viewed as a characteristic time for the ``aging'' of the initial state \cite{Carol}.
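The closed-form expressions above are straightforward to evaluate numerically. The following sketch is ours, not part of the original paper: it transcribes the formulas for $b^{2}(t)$, $r(t)$, $B^{2}(t,\tau)$, $R(t,\tau)$, $D(t,\tau)$ and $\Delta(t,\tau)$ directly into Python, with function and variable names, SI units, and the neutron parameters quoted in the next section as our own choices.
\begin{verbatim}
import numpy as np

# Parameters quoted in the next section (SI units); hbar is CODATA.
hbar = 1.054571817e-34           # J s
m = 1.67e-27                     # kg, neutron mass
sigma0 = 7.8e-6                  # m, initial transverse width
beta = 7.8e-6                    # m, slit width
d = 125e-6                       # m, slit separation
tau0 = m * sigma0**2 / hbar      # intrinsic time scale tau_0

def packet_parameters(t, tau):
    """Return b^2(t), r(t), B^2(t,tau), R(t,tau), D(t,tau), Delta(t,tau).

    Direct transcription of the closed-form expressions above; requires t > 0
    since r(t) diverges as t -> 0.
    """
    b2 = sigma0**2 * (1.0 + (t / tau0)**2)
    r = t * (1.0 + (tau0 / t)**2)
    u = 1.0 / beta**2 + 1.0 / b2        # recurring factor (1/beta^2 + 1/b^2)
    v = 1.0 / tau + 1.0 / r             # recurring factor (1/tau + 1/r)
    num = u**2 + (m / hbar)**2 * v**2
    B2 = num / ((m / (hbar * tau))**2 * u)
    R = tau * num / (u**2 + (t / (sigma0**2 * b2)) * v)
    D = d * (1.0 + tau / r) / (1.0 + beta**2 / b2)
    Delta = tau * sigma0**2 * d / (2.0 * tau0 * beta**2 * B2)
    return b2, r, B2, R, D, Delta
\end{verbatim}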
\section{Phase of the Wavefunction and Position-Momentum Correlations} In this section we calculate the position-momentum correlations $\sigma_{xp}$ at the screen and study how they behave as a function of the propagation times $t$ and $\tau$. We find that the correlations present a maximum as a function of the propagation time $t$ from the source to the double-slit, whose location depends on $\tau$, the propagation time from the double-slit to the screen. This maximum expresses an instability of the phases of the wave function, which we can associate with incoherence and lack of interference. Also, we find that the higher the correlations are, the smaller the region of overlap between the packets sent from each slit will be, i.e., the maximum of the correlations is associated with a maximum separation between the two wavepackets when they arrive at the screen. The normalized wavefunction at the screen is given by \begin{equation} \psi(x,t,\tau) = \frac{\psi_{1}(x,t,\tau) + \psi_{2}(x,t,\tau)}{\sqrt{2+2\exp[-(\frac{D}{2B})^{2}-(\Delta B)^{2}]}}. \label{psitotal} \end{equation} The state \eqref{psitotal} is a superposition of two Gaussians and therefore presents position-momentum correlations even when $t=0$ \cite{Robinett, Riahi}. For this state we calculate the correlations and obtain \begin{eqnarray} \sigma_{xp}(t,\tau)&=&\frac{1}{2}\langle \hat{x}\hat{p}+\hat{p}\hat{x}\rangle-\langle \hat{x}\rangle \langle \hat{p}\rangle\nonumber\\ &=&\frac{mB^{2}}{2R}+\frac{(mD^{2}/R)}{4+4 \exp\left[-\left(\frac{D}{2B}\right)^{2}-\left(\Delta B\right)^{2}\right]}\nonumber\\ &-&\frac{\hbar\Delta D}{2}-\frac{(m\Delta^{2}B^{4}/R)}{1+ \exp\left[\left(\frac{D}{2B}\right)^{2}+\left(\Delta B\right)^{2}\right]}. \label{correlacao} \end{eqnarray} We observe that the position-momentum correlations do not depend on the terms $\theta$ and $\mu$; they arise exclusively from the phases that depend on the transverse position $x$. As first pointed out by Bohm \cite{Bohm}, the four terms appearing in the expression for the correlations can each be understood as the product of one ``momentum'' by one ``position'' for each time $t$ and $\tau$. For example, the first term is the product of the momentum $(mB/R)$ by the position $B$. The second term is the product of the momentum $(mD/R)$ by the position $D$. The third term is the product of the momentum $(\hbar \Delta)$ by the position $D$ and the fourth term is the product of the momentum $(m\Delta^{2}B^{3}/R)$ by the position $B$. This connection allows us to understand that the larger the ``position'' $B$ or $D$ is, the larger the associated ``momentum'' and its contribution to the position-momentum correlations will be. Therefore this appears as a very simple way to characterize the particle when it arrives at the screen, allowing us to extract considerable information about its behavior. In the following, we plot the curves for the position-momentum correlations as a function of the times $t$ and $\tau$ for neutrons. The reason for considering neutrons lies in the available experiments, which are closest to our model of interference with completely coherent matter waves. We adopt the following parameters: mass $m=1.67\times10^{-27}\;\mathrm{kg}$, initial width of the packet $\sigma_{0}=7.8\;\mathrm{\mu m}$ (which corresponds to an effective width of $2\sqrt{2}\sigma_{0}\approx22\;\mathrm{\mu m}$), slit width $\beta=7.8\;\mathrm{\mu m}$, separation between the slits $d=125\;\mathrm{\mu m}$ and de Broglie wavelength $\lambda=2\;\mathrm{nm}$.
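As a numerical companion (ours, not the authors'; it assumes the packet_parameters helper and the module-level constants from the sketch at the end of Section II), the correlation \eqref{correlacao} can be transcribed and scanned over $t$ to locate the maximum discussed below:
\begin{verbatim}
import numpy as np  # continues the Section II sketch (packet_parameters etc.)

def sigma_xp(t, tau):
    """Position-momentum correlation sigma_xp(t, tau) of the two-packet state."""
    b2, r, B2, R, D, Delta = packet_parameters(t, tau)
    B = B2**0.5
    g = np.exp(-(D / (2.0 * B))**2 - (Delta * B)**2)     # Gaussian overlap factor
    term1 = m * B2 / (2.0 * R)
    term2 = (m * D**2 / R) / (4.0 + 4.0 * g)
    term3 = hbar * Delta * D / 2.0
    # g / (1 + g) equals 1 / (1 + exp[+(D/2B)^2 + (Delta B)^2]), overflow-safe
    term4 = (m * Delta**2 * B2**2 / R) * g / (1.0 + g)
    return term1 + term2 - term3 - term4

# Scanning t for fixed tau = 18 tau_0 locates the maximum discussed below.
ts = np.linspace(0.05, 10.0, 2000) * tau0
t_max = ts[int(np.argmax([sigma_xp(t, 18.0 * tau0) for t in ts]))]
\end{verbatim}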
These same parameters were used previously in double-slit experiments with neutrons by A. Zeilinger et al. \cite{Zeilinger1}. In Fig. 2a, we show the correlations as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$, where we observe the existence of a point of maximum. In Fig. 2b, we show the absolute value of each term from equation \eqref{correlacao} as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$, where we see that the largest contribution to the position-momentum correlations comes from the second term, which is directly dependent on the separation $D(t,\tau)$ between the wavepackets at the screen. Therefore, a larger separation between the wavepackets at the screen implies larger position-momentum correlations, i.e., the maximum of the correlations is associated with a small region of overlap between the two packets. \begin{figure} \caption{(a) Position-momentum correlations as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$. (b) Absolute value of the first (dotted line), second (solid line), third (dashed line) and fourth (dash-dotted line) term of equation \eqref{correlacao} as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$. We observe a point of maximum and that the largest contribution to the correlations comes from the second term (solid line) of equation \eqref{correlacao}.} \label{Figure2} \end{figure} In Fig. 3, we show the position-momentum correlations as a function of $t/\tau_{0}$ and $\tau/\tau_{0}$. We observe that the region around the point of maximum, or region of phase instability, becomes narrower when the propagation time $\tau$ from the double-slit to the screen increases. We also observe that the point of maximum is displaced to the left when $\tau$ increases. In the next section we will show a table in which we clearly see the dependence of the time $t_{max}$ of the maximum of the correlations on the value of $\tau$, i.e., $t_{max}=t_{max}(\tau)$. Therefore, the dynamics after the double-slit also influences the interference pattern and should be taken into account in the analysis of double-slit experiments. Taking into account only the dynamics before the double-slit is not sufficient to obtain all the information about the interference pattern on the screen. \begin{figure} \caption{Position-momentum correlation as a function of $t/\tau_{0}$ and $\tau/\tau_{0}$. The maximum is displaced to the left and the region around it becomes narrower when $\tau$ increases. } \label{Figure3} \end{figure} \section{Schr\"odinger Uncertainty Relation} It is known that uncorrelated free-particle Gaussian wavepackets are states of minimum uncertainty both in position and in momentum. In this case the position-momentum correlations appear only with the time evolution and are accompanied by a spreading of the associated position distribution, while the momentum uncertainty remains constant for all time. For the most general Gaussian wavepacket, in which initial position-momentum correlations are present, the uncertainty in position is minimum at $t=0$ but this is not true for the uncertainty in momentum \cite{Riahi}. Therefore, the position-momentum correlations indicate that the uncertainty in one or in both quadratures is not a minimum. For the problem treated here we have a superposition of two Gaussian wavepackets at the screen, for which the position-momentum correlations are present, indicating that the uncertainties in both quadratures are not minimal.
To study the behavior of the correlations together with the behavior of the uncertainties in position and in momentum, we calculate in this section the determinant of the covariance matrix defined by \begin{equation}\label{Mc} M_C= \left(\begin{array}{cc} \sigma_{xx}^{2} & \sigma_{xp} \\ \sigma_{xp} & \sigma_{pp}^{2} \end{array}\right),\end{equation} where $\sigma_{xx}^{2}=\langle \hat{x}^{2}\rangle-\langle \hat{x}\rangle^{2}$ and $\sigma_{pp}^{2}=\langle \hat{p}^{2}\rangle-\langle \hat{p}\rangle^{2}$ are the squared variances in position and momentum, respectively, and $\sigma_{xp}$ is the position-momentum correlation. The expression for $\sigma_{xp}$ was obtained previously in equation \eqref{correlacao}, and for the other quantities we obtain the following results \begin{equation} \sigma^{2}_{xx}(t,\tau)=\frac{B^{2}}{2}+\frac{D^{2}-4\Delta^{2}B^{4}\exp\left[-\left(\frac{D}{2B}\right)^{2}-\left(\Delta B\right)^{2}\right]}{4+4\exp\left[-\left(\frac{D}{2B}\right)^{2}-\left(\Delta B\right)^{2}\right]}, \end{equation} and \begin{eqnarray} \frac{\sigma^{2}_{pp}(t,\tau)}{\hbar^{2}}&=&\left(\frac{1}{2B^{2}}+\frac{m^{2}B^{2}}{2\hbar^{2} R^{2}}\right)+\frac{\left(\frac{mD}{\hbar R}-2\Delta\right)^{2}}{4+4\exp\left[-\left(\frac{D}{2B}\right)^{2}-\left(\Delta B\right)^{2}\right]}\nonumber\\ &-&\frac{\left[\frac{D^{2}}{B^{4}}+2\Delta\left(\Delta+\frac{mD}{\hbar R}\right)\right]}{1+\exp\left[\left(\frac{D}{2B}\right)^{2}+\left(\Delta B\right)^{2}\right]}. \end{eqnarray} The determinant of the covariance matrix, equation (\ref{Mc}), is the generalized Robertson-Schr\"odinger uncertainty relation and it is given by \begin{equation} D_{C}=\sigma_{xx}^{2}\sigma_{pp}^{2}-\sigma_{xp}^{2}. \end{equation} In Fig. 4a we show the curves of the uncertainties $\sigma_{xx}$, $\sigma_{pp}$ and the correlations $\sigma_{xp}$ normalized to the same scale as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$, and in Fig. 4b we show the determinant $D_{C}/\hbar^{2}$ (solid line) as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$, where we compare it with the value $1/4$ (dashed line). As the position-momentum correlations mean that both uncertainties are not minima, we see that this behavior is manifested in the determinant as a fast increase in the region around the maximum of the correlations. The point of maximum is located between the maxima of the uncertainties in position and in momentum, and the region in which we can consider the correlations as maximal covers the interval $0.52\tau_{0}<t<4\tau_{0}$, where $t=0.52\tau_{0}$ is the inflexion point of the curve of $\sigma_{xp}$ and the other extreme, $t\approx4\tau_{0}$, corresponds to the point at which the correlations have the same value as at $t=0.52\tau_{0}$, i.e., $\sigma_{xp}(t=0.52\tau_{0})\approx\sigma_{xp}(t=4\tau_{0})\approx82\hbar$. On the other hand, the determinant varies slowly in the regions where the correlations tend to be minimal, more specifically the regions $0<t<0.52\tau_{0}$ and $t>4\tau_{0}$. In the interval $0<t<0.52\tau_{0}$ the uncertainties in position and in momentum increase at practically the same rate, and in the interval $t>4\tau_{0}$ the uncertainty in position decreases more slowly than the uncertainty in momentum. The determinant tends to a constant value in both intervals, but in the first interval, $0<t<0.52\tau_{0}$, the curve of correlations is concave upwards and the value of the determinant tends to the minimum value $D_{C}\approx16\hbar^{2}$.
In the second interval, $t>4\tau_{0}$, the curve of correlations is concave downwards (tending to a constant function for $t\gg t_{max}$) and the determinant tends to the maximum value $D_{C}\approx33\hbar^{2}$. We thus observe that $D_{C}>\hbar^{2}/4$ for all time. This characterizes the non-gaussianity of the state \eqref{psitotal}, since for Gaussian states, initially correlated or not, the generalized Robertson-Schr\"odinger uncertainty relation is constant and equal to $\hbar^{2}/4$ for all time. Therefore, for states obtained from the superposition of two Gaussian states, as in the case treated here, the determinant of the covariance matrix is larger than $\hbar^{2}/4$ for all time, and it is practically constant only for values of time outside the region around which the correlations have a point of maximum, showing that the Gaussian features are strongly altered by the evolution of the position-momentum correlations. Thus, if we construct a state that has correlations with a point of minimum, for which the determinant can tend to the value $\hbar^{2}/4$ at the screen, the number of interference fringes and their visibility can be increased significantly. It is possible to do this by considering a double-slit experiment in which the initial state is a correlated Gaussian state, or by placing an atomic convergent lens next to the double-slit, as has been proposed similarly for light waves \cite{Bartell}. \begin{figure} \caption{(a) Curves of the uncertainties $\sigma_{xx}$, $\sigma_{pp}$ and the correlations $\sigma_{xp}$ at the same scale as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$. (b) Determinant $D_{C}/\hbar^{2}$ (solid line) as a function of $t/\tau_{0}$ for $\tau=18\tau_{0}$ compared with the value $1/4$ (dashed line). The determinant is practically constant at the extremes, but different from the value $\hbar^{2}/4$, and varies rapidly in the region where the position-momentum correlations have a maximum.} \label{Figure4} \end{figure} In Table I we show the numerically calculated values of time $t_{max}$ for which the correlations $\sigma_{xp}$, the uncertainty in position $\sigma_{xx}$ and the uncertainty in momentum $\sigma_{pp}$ are maxima, together with the inflexion point of the correlations, as functions of the time $\tau$. We observe that when $\tau$ increases, the time $t_{max}$ of the correlations is displaced to the left, and that this time is always located between the times for which the uncertainties in position and in momentum are maxima. We also observe that the times of maxima tend to coincide for $\tau>1000\tau_{0}$ and that the time of maximum for $\sigma_{pp}$ is independent of $\tau$, as a consequence of the free propagation from the double-slit to the screen. \begin{table}[ht] \caption{Times of maxima $t_{max}$ and inflexion $t_{inf}$ as a function of $\tau$.
All times are in units of $\tau_{0}$.} \centering \begin{tabular}{p{80pt} p{80pt} p{80pt} p{80pt} p{80pt}} \hline\hline $\tau$ & $t_{max}$ of $\sigma_{xp}$ & $t_{max}$ of $\sigma_{xx}$ & $t_{max}$ of $\sigma_{pp}$ & $t_{inf}$ of $\sigma_{xp}$\\ [0.5ex] \hline 2 & 1.568109061 & 1.984545314 & 1.392356020 & 0.4720349103\\ 8 & 1.450312552 & 1.525841616 & 1.392356020 & 0.4990240822\\ 18 & 1.419651602 & 1.450522331 & 1.392356020 & 0.5049187153\\ 50 & 1.402487095 & 1.413088513 & 1.392356020 & 0.5080737518\\ 100 & 1.397465783 & 1.402693625 & 1.392356020 & 0.5089789150\\ 1000 & 1.392871030 & 1.393387225 & 1.392356020 & 0.5098004574 \\ [1ex] \hline \end{tabular} \label{table} \end{table} \section{Intensity, Visibility and Predictability} In this section we calculate the relative intensity, visibility and predictability to analyze the interference pattern and the wave-like and particle-like behavior from the knowledge of the position-momentum correlations. Such analysis is very important because it allows us to choose the set of parameters that provides the best interference pattern in the double-slit experiment. The knowledge of the correlations tells us whether the particle sent by the source will behave in a more wave-like or more particle-like manner at the screen. In other words, if the particle is sent from a position for which the time of flight to the double-slit lies in the interval around the maximum of the correlations, it will behave mostly as a particle for most values of $x$, excluding only the values near $x=0$. The intensity on the screen, defined as $I(x,t,\tau)=|\psi(x,t,\tau)|^{2}$, is given by \begin{equation} I(x,t,\tau)=F(x,t,\tau)\left[1+\frac{\cos(2\Delta x)}{\cosh(\frac{D x}{B^{2}})}\right], \end{equation} where \begin{equation} F(x,t,\tau)=I_{0}\exp\left[-\frac{x^{2}+(\frac{D}{2})^{2}}{B^{2}}\right]\cosh\left(\frac{D x}{B^{2}}\right). \end{equation} The visibility and predictability are given, respectively, by \begin{equation} \mathcal{V}=\frac{I_{max}-I_{min}}{I_{max}+I_{min}}=\frac{1}{\cosh(\frac{Dx}{B^{2}})}, \end{equation} and \begin{equation} \mathcal{P}=\left|\frac{|\psi_{1}|^{2}-|\psi_{2}|^{2}}{|\psi_{1}|^{2}+|\psi_{2}|^{2}}\right|=\left|\tanh\left(\frac{Dx}{B^{2}}\right)\right|. \end{equation} Bohr's complementarity principle is expressed, through the relation of Greenberger and Yasin for pure quantum-mechanical states, by $\mathcal{P}^{2}+\mathcal{V}^{2}=1$, which is satisfied for all values of $x$ \cite{Greenberger}; indeed, $\tanh^{2}y+\cosh^{-2}y=1$ for any argument $y=Dx/B^{2}$. The visibility and predictability depend on the ratio $D/B^{2}$, showing the influence of the parameter $D$ (the separation between the wavepackets at the screen), or equivalently of the position-momentum correlations, on the interference pattern. Therefore, for higher values of $D$ and smaller values of $B$, the particle-like behavior will be dominant and the interference fringes will be less visible. As we will see, there is a value of time $t$, within the interval of maximum correlations, for which the visibility is minimum and the predictability is maximum. Previously, the effective number of fringes for light waves in the double-slit was characterized in Ref. \cite{Bramon} for a given distance (or time) of propagation from the double-slit to the screen, while neglecting the propagation from the source to the double-slit. According to \cite{Bramon}, the number of fringes can be estimated by an index defined by $\nu=0.264/\mathcal{R}$.
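Before specializing $\mathcal{R}$ to our setting, we note that the expressions above are easy to check numerically. The following sketch (ours, again assuming the packet_parameters helper from the Section II sketch) evaluates $I$, $\mathcal{V}$ and $\mathcal{P}$ and verifies the Greenberger-Yasin relation at machine precision:
\begin{verbatim}
import numpy as np  # continues the Section II sketch (packet_parameters)

def intensity_V_P(x, t, tau, I0=1.0):
    """Relative intensity I(x), visibility V(x), predictability P(x)."""
    _, _, B2, _, D, Delta = packet_parameters(t, tau)
    arg = D * x / B2
    F = I0 * np.exp(-(x**2 + (D / 2.0)**2) / B2) * np.cosh(arg)
    I = F * (1.0 + np.cos(2.0 * Delta * x) / np.cosh(arg))
    V = 1.0 / np.cosh(arg)
    P = np.abs(np.tanh(arg))
    # Greenberger-Yasin relation: sech^2(y) + tanh^2(y) = 1 for every x
    assert np.allclose(P**2 + V**2, 1.0)
    return I, V, P
\end{verbatim}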
For the problem treated here, we have $\mathcal{R}=D/(2\Delta B^{2})$, indicating that the higher the value of $D$, the smaller the number of fringes. In Fig. 5a, we show half of the symmetric plot of the relative intensity (black line) and in Fig. 5b we show half of the symmetric plot of the visibility (blue line) and predictability (red line) as a function of $x$ for three different values of $t$, one of them being the time for which the correlations have a maximum, with $\tau$ fixed at $\tau=18\tau_{0}$. The corresponding values of $t$ are, respectively, $t=0.2\tau_{0}$ (solid line), $t_{max}\approx1.42\tau_{0}$ (dotted line) and $t=18\tau_{0}$ (dashed line). We observe that for $t_{max}\approx1.42\tau_{0}$ the number of interference fringes is a minimum and the visibility extends over only a small range of the $x$ axis behind the double-slit. In addition, the predictability dominates, extending over a wide range of the $x$ axis. For $t=0.2\tau_{0}$ or $t=18\tau_{0}$ we have a large number of fringes and the visibility extends over a larger range of the $x$ axis behind the double-slit. The predictability dominates only in a range outside the region immediately behind the double-slit. This shows that if the source is displaced either to the left or to the right, so that the particles fly for a time away from the times around which the correlations have a maximum $t_{max}$, more specifically the times in the interval $0.52\tau_{0}<t<4\tau_{0}$, the number of fringes increases and the interference pattern presents a better quality. We have to focus on the region in which the correlations are maximal, and not specifically on the time of maximum: although $t_{max}$ indeed appears as the time for which the number of fringes is a minimum, the visibility has a minimum in the region of maximum correlations which does not coincide with $t_{max}$, being displaced a little to the right of this point, as we can see in Fig. 6. In fact, for $t=0.2\tau_{0}$ and $t=18\tau_{0}$ the position-momentum correlations assume values close to each other, and the number of fringes is nearly the same. However, the visibility is larger for $t=0.2\tau_{0}$, suggesting that the wave-like behavior will be more evident when the particle is released closer to the double-slit. Put differently, our ignorance about which slit the particle passed through increases when the particle is released closer to the double-slit. Therefore, although the complementarity relation $\mathcal{P}^{2}+\mathcal{V}^{2}=1$ is valid for all $x$ independent of the time (or distance) of propagation, the quantities $\mathcal{P}(x,t,\tau)$ and $\mathcal{V}(x,t,\tau)$ are substantially altered at each point $x$ by the propagation times $t$ and $\tau$, as quantitatively shown in Fig. 6. \begin{figure} \caption{(a) Relative intensity (black line) and (b) visibility (blue line) and predictability (red line) as a function of $x$ for three different values of $t$ and $\tau=18\tau_{0}$. The corresponding values of $t$ are, respectively, $t=0.2\tau_{0}$ (solid line), $t_{max}\approx1.42\tau_{0}$ (dotted line) and $t=18\tau_{0}$ (dashed line). Among these values of time, $t_{max}\approx1.42\tau_{0}$, for which the correlations have a maximum, presents the smallest number of fringes and the lowest visibility.
Moving the source of particles to the left or to the right of the region around the maximum of the correlations, the number of fringes and the visibility increase.} \label{Figure5} \end{figure} \begin{figure} \caption{Visibility (blue line) and predictability (red line) as a function of $t/\tau_{0}$ for three different values of $x$. The corresponding values of $x$ are $x=0.01\;\mathrm{mm}$ (dotted line), $x=0.05\;\mathrm{mm}$ (solid line) and $x=0.1\;\mathrm{mm}$ (dashed line). We present figures for $\tau=10\tau_{0}$, $\tau=30\tau_{0}$ and $\tau=60\tau_{0}$. The values of $\mathcal{V}$ and $\mathcal{P}$ for each value of $x$ are strongly altered by the values of $t$ and $\tau$. For example, there exists a value of time $t$ for which the visibility is minimum and the predictability is maximum, and for $\tau>60\tau_{0}$ the values of $\mathcal{V}$ are higher than the values of $\mathcal{P}$.} \label{Figure6} \end{figure} In Fig. 7a, we show half of the symmetric plot of the relative intensity (black line) and in Fig. 7b we show half of the symmetric plot of the visibility (blue line) and predictability (red line) as a function of $x$ for two different values of $\tau$, fixing $t$ at $t=8\tau_{0}$. The corresponding values of $\tau$ are, respectively, $\tau=10\tau_{0}$ (dashed line) and $\tau=30\tau_{0}$ (solid line). For $\tau=30\tau_{0}$, we have a larger number of fringes with a better visibility because, according to Table I, the region of maximum correlations is farther from $t=8\tau_{0}$ for $\tau=30\tau_{0}$ than for $\tau=10\tau_{0}$. In this case we observe that the displacement of the maximum of the correlations implies an increase in the transverse spatial coherence with time. In fact, the number of interference fringes is nearly the same for both values of $\tau$, but the visibility is larger for $\tau=30\tau_{0}$ in comparison with $\tau=10\tau_{0}$. This shows that the wave-like behavior becomes more evident, comparatively, when the particle is launched from a position such that the flight time to the double-slit is farthest from the time for which the correlations have a maximum. On the other hand, we can say that our ignorance about which slit the particle passed through, when it is launched from the position $z=v_{z}(t=8\tau_{0})$, is smaller when the screen is positioned at $z_{1}=v_{z}(\tau_{1}=10\tau_{0})$ than when the screen is positioned at $z_{2}=v_{z}(\tau_{2}=30\tau_{0})$. Again, we see the influence of the times $t$ and $\tau$ on the quantities $\mathcal{P}(x,t,\tau)$ and $\mathcal{V}(x,t,\tau)$, although the result $\mathcal{P}^{2}+\mathcal{V}^{2}=1$ is maintained for all $x$ values independent of the time. \begin{figure} \caption{(a) Relative intensity (black line) and (b) visibility (blue line) and predictability (red line) as a function of $x$ for two different values of $\tau$ and $t$ fixed at $t=8\tau_{0}$. The corresponding values of $\tau$ are, respectively, $\tau=10\tau_{0}$ (dashed line) and $\tau=30\tau_{0}$ (solid line). For $\tau=30\tau_{0}$, we have the largest number of interference fringes with a better visibility because the point at which the correlations have a maximum is more distant from $t=8\tau_{0}$ for $\tau=30\tau_{0}$, according to Table I. In fact, the number of fringes is practically the same but the visibility is considerably larger for $\tau=30\tau_{0}$.} \label{Figure7} \end{figure} The results above were obtained for neutrons treated as wavepackets of initial transverse width $\sigma_{0}=7.8\;\mathrm{\mu m}$.
For these parameters, the time scale is given by $\tau_{0}=m\sigma_{0}^{2}/\hbar=1.02\;\mathrm{ms}$. We note a good quality of the interference pattern for $t=18\tau_{0}\approx18.4\;\mathrm{ms}$ and $\tau=18\tau_{0}\approx18.4\;\mathrm{ms}$, which, for a velocity around $v=200\;\mathrm{m/s}$, correspond to distances $z_{t}\approx3.7\;\mathrm{m}$ and $z_{\tau}\approx3.7\;\mathrm{m}$. These parameters were used by A. Zeilinger et al. and they correspond to distances within experimental viability \cite{Zeilinger1}. Now, if we take, for instance, a mass of the order of $m=1.2\times10^{-24}\;\mathrm{kg}$, which is close to the mass of the fullerene molecules, and build a wavepacket of the same width as for the neutrons, we will have $\tau_{0}=0.73\;\mathrm{s}$. In this case, $t=18\tau_{0}=13.14\;\mathrm{s}$ and $\tau=18\tau_{0}=13.14\;\mathrm{s}$. Considering a velocity of $200\;\mathrm{m/s}$, we will have $z_{t}=2.63\times10^{3}\;\mathrm{m}$ and $z_{\tau}=2.63\times10^{3}\;\mathrm{m}$, which are distances beyond experimental reach. Therefore, by analyzing the behavior of the correlations, we can also capture information about the difficulty of observing interference with macroscopic objects. In Ref. \cite{Carol} the authors explore the effect of the position-momentum correlations on the interference pattern, but they do not take into account the influence of the propagation time from the double-slit to the screen. They also do not discuss the behavior of the correlations as a function of the propagation time from the source to the double-slit (or, equivalently, the behavior of the correlations as a function of the parameter $\sigma_{0}$). We observe that for the parameters used in this reference, the correlations are maximal for $0.013\;\mathrm{\mu m}\leq\sigma_{0}\leq0.02\;\mathrm{\mu m}$ and minimal for $\sigma_{0}>1.0\;\mathrm{\mu m}$, which justifies the poor interference pattern for $0.013\;\mathrm{\mu m}\leq\sigma_{0}\leq0.02\;\mathrm{\mu m}$ and a rich interference pattern for $\sigma_{0}=6.0\;\mathrm{\mu m}$. \section{Conclusions} In this contribution, we studied the double-slit experiment in an attempt to find parameters that produce the maximum number of interference fringes with the highest possible quality on the screen. Our results show that we can obtain information about the interference pattern by looking at the behavior of the position-momentum correlations, which are established by the quantum dynamics. We observe that both the dynamics before and after the double-slit are important for the existence and quality of the interference fringes on the screen. In particular, we observe that there is a value of the propagation time from the source to the double-slit for which the correlations have a point of maximum, so that particles released by a source in the region around this point produce interference fringes of the worst quality on the screen. The wave-like and particle-like behavior expressed by the complementarity relation of Greenberger and Yasin, $\mathcal{P}^{2}+\mathcal{V}^{2}=1$, is also strongly influenced at each point $x$ by the times $t$ and $\tau$, i.e., depending on where the particle came from and where the screen is positioned, it will behave mostly as a wave or mostly as a particle at the screen. The knowledge of the point of maximum of the position-momentum correlations can also help us to choose the best parameters to observe interference effects with macromolecules, such as fullerenes.
From the determinant of the covariance matrix it was possible to observe how the Gaussian properties of the state produced on the screen by the superposition of two Gaussians are altered when the uncertainties in position and in momentum and the position-momentum correlations vary with the times $t$ and $\tau$. \begin{acknowledgments} We would like to thank Professor E. C. Girão for a careful reading of the manuscript. I. G. da Paz and L. A. Cabral acknowledge useful discussions with M. C. Nemes. J. S. M. Neto thanks CAPES for financial support under grant number 210010114016P3. I. G. da Paz acknowledges support from the program PROPESQ (UFPI/PI) under grant number PROPESQ 23111.011083/2012-27. \end{acknowledgments} \end{document}
arXiv
What is the tens digit in the sum $11^1 + 11^2 + 11^3 + \ldots + 11^9$? First of all, we notice that $11 = 1 + 10,$ and so we write $11^n$ as follows: $$(1 + 10)^n = \binom{n}{0} \cdot 1^n + \binom{n}{1} \cdot 1^{n-1} \cdot 10^{1} + \binom{n}{2} \cdot 1^{n-2} \cdot 10^{2} + \cdots$$ Every term after the first two in this expansion contains at least two factors of $10,$ so those terms are divisible by $100$ and do not affect the tens digit. Meanwhile, the first term is always $1,$ and the second term simplifies to $10n.$ Therefore, we have: \begin{align*} 11^1 + 11^2 + 11^3 + \cdots + 11^9 &\equiv (1 + 10) + (1 + 20) + \cdots + (1 + 90) \pmod{100} \\ &\equiv 9 + 10(1 + 2 + \cdots + 9) \equiv 459 \equiv 59 \pmod{100}. \end{align*} Thus, the tens digit must be $\boxed{5}.$
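As an independent check (ours, not part of the original solution), the sum is a finite geometric series and can be computed exactly: $$11^1 + 11^2 + \cdots + 11^9 = \frac{11^{10} - 11}{10} = \frac{25937424601 - 11}{10} = 2593742459,$$ which indeed ends in $59,$ confirming that the tens digit is $5.$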
Math Dataset
\begin{document} \title{Continual Auxiliary Task Learning} \begin{abstract} Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behavior to gather useful data for those off-policy predictions. In this work, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behavior policy learning to take actions to improve those auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both prediction learners and the behavior learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards, and prove that the separation into learning successor features and rewards provides convergence rate improvements. We conduct an in-depth study into the resulting multi-prediction learning system. \end{abstract} \section{Introduction} \label{sec:introduction} In never-ending learning systems, the agent often faces long periods of time when the external reward is uninformative. A smart agent should use this time to practice reaching subgoals, learning new skills, and refining model predictions. Later, the agent should use this prior learning to efficiently maximize external reward. The agent engages in this self-directed learning during times when the primary drives of the agent (e.g., hunger) are satisfied. At other times, the agent might have to trade off directly acting towards internal auxiliary learning objectives and taking actions that maximize reward. In this paper we investigate how an agent should select actions to balance the needs of several auxiliary learning objectives in a {\em no-reward setting} where no external reward is present. In particular, we assume the agent's auxiliary objectives are to learn a diverse set of value functions corresponding to a set of fixed policies. Our solution, at a high level, is straightforward. Each auxiliary value function is learned in parallel and off-policy, and the behavior selects actions to maximize learning progress. Prior work investigated similar questions in a stateless, bandit-like setting, where neither off-policy learning nor function approximation is required~\citep{linke2020adapting}. Otherwise, the majority of prior work has focused on how the agent could make use of auxiliary learning objectives, not on how behavior could be used to improve auxiliary task learning. Some work has looked at defining (predictive) features, such as successor features and a basis of policies \citep{barreto2018transfer,borsa2018universal,barreto2020fast,NEURIPS2019_251c5ffd}; universal value function approximators \citep{SchaulICML2015}; and features based on value predictions \citep{schaul2013better,schlegel2021general}. The other focus has been exploration: using auxiliary learning objectives to generate bonuses that aid exploration on the main task \citep{pathak2017curiosity,stadie2015incentivizing,puigdomenech2020never,burda2018exploration}; using a given set of policies in a call-return fashion for scheduled auxiliary control \citep{riedmiller2018learning}; and discovering subgoals in environments where it is difficult for the agent to reach particular parts of the state-action space \citep{machado2017laplacian,colas2019curious,zhang2020generating,andrychowicz2017hindsight,pong2019skew}.
In all of these works, the behavior was either fixed or optimized for the main task. The problem of adapting the behavior to optimize many auxiliary predictions in the absence of external reward is sufficiently complex to merit study in isolation. It involves several inter-dependent learning mechanisms, multiple sources of non-stationarity, and high-variance due to off-policy updating. If we cannot design learning systems that efficiently learn their auxiliary objectives in isolation, then the agent is unlikely to learn its auxiliary tasks while additionally balancing external reward maximization. Further, understanding how to efficiently learn a collection of auxiliary objectives is complementary to the goals of using those auxiliary objectives. It could amplify the auxiliary task effect in UNREAL \citep{jaderberg2016reinforcement}, improve the efficiency and accuracy of learning successor features and universal value function approximators, and improve the quality of the sub-policies used in scheduled auxiliary control. It can also benefit the numerous systems that discover options, skills, and subgoals ~\citep{gregor2016variational,eysenbach2018diversity,veeriah2019discovery,pitis2020maximum,nair2020contextual,pertsch2020long,colas2019curious,eysenbach2019search}, by providing improved algorithms to learn the resulting auxiliary tasks. For example, for multiple discovered subgoals, the agent can adapt its behavior to efficiently learn policies to reach each subgoal. In this paper we introduce an architecture for parallel auxiliary task learning. As the first such work to tackle this question in reinforcement learning with function approximation, numerous algorithmic challenges arise. We first formalize the problem of learning multiple predictions as a reinforcement learning problem, and highlight that the rewards for the behavior policy are inherently non-stationary due to changes in learning progress over time. We develop a strategy to use successor features to exploit the stationarity of the dynamics, whilst allowing for fast tracking of changes in the rewards, and prove that this separation provides a faster convergence rate than standard value function algorithms like temporal difference learning. We empirically show that this separation facilitates tracking both for prediction learners with non-stationary targets as well as the behavior. \section{Problem Formulation} We consider the \emph{multi-prediction problem}, in which an agent continually interacts with an environment to obtain accurate predictions. This interaction is formalized as a Markov decision process (MDP), defined by a set of states $\mathcal{S}$, a set of actions $\mathcal{A}$, and a transition probability function $ \mathcal{P}(s, a, s')$. The agent's goal, when taking actions, is to gather data that is useful for learning $N$ predictions, where each prediction corresponds to a general value function (GVF) \citep{sutton2011horde}. A GVF question is formalized as a three tuple $(\pi, \gamma, c)$, where the target is the expected return of the cumulant, defined by $c: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$, when following policy $\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$, discounted by $\gamma: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$. 
More precisely, the target is the action-value \begin{align*} Q(s, a) &\defeq \mathbb{E}_{\pi}\left[G_t | S_t = s, A_t = a\right] \quad \text{ for } G_t \defeq C_{t+1} + \gamma_{t+1} G_{t+1} \end{align*} where $C_{t+1} \defeq c(S_t, A_t, S_{t+1})$ and $\gamma_{t+1} \defeq \gamma(S_t, A_t, S_{t+1})$. The extension of $\gamma$ to transitions allows for a broader class of problems, including easily specifying termination, without complicating the theory \citep{white2017unifying}. The expectation is under policy $\pi$, with transitions according to $\mathcal{P}$. The prediction targets could also be state-value functions; we assume the targets are action-values in this work to provide a unified discussion of successor features for both the GVF and behavior learners. At each time step, the agent produces $N$ predictions, a $\hat{Q}_t^{(j)}(S_t, A_t)$ for prediction $j$ with true targets $Q^{(j)}_t(S_t, A_t)$. We assume the GVF question can change over time, and so $Q$ can change with time. The goal is to have low error in the prediction, in terms of the root mean-squared value error (RMSVE), under state-action weighting $d: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$: \begin{equation} \text{RMSVE}(\hat{Q}, Q) \defeq \sqrt{\sum_{s \in \mathcal{S}} \sum_{a \in \mathcal{A}} d(s, a) (\hat{Q}(s, a) - Q(s, a))^2} \end{equation} The total error up to time step $t$, across all predictions, is $\text{TE} \ \defeq \sum_{i=1}^t \sum_{j=1}^N \text{RMSVE}(\hat{Q}_i^{(j)}, Q^{(j)}_i)$. \begin{wrapfigure}[10]{r}{0.24\textwidth} \centering \includegraphics[width=0.25\textwidth]{figures/sys_diag.pdf} \label{fig:system} \end{wrapfigure} The agent's goal is to gather data and update its predictions to make \text{TE} ~small. This goal can itself be formalized as a reinforcement learning problem, by defining rewards for the behavior policy that depend on the agent's predictions. Such rewards are often called \emph{intrinsic rewards}. For example, if we could directly measure the RMSVE, one potential intrinsic reward would be the decrease in the RMSVE after taking action $A_t$ from state $S_t$ and transitioning to $S_{t+1}$. This reflects the agent's learning progress---how much it was able to learn---due to that new experience. The reward is high if the action generated data that resulted in substantial learning. While the RMSVE is the most direct measure of learning progress, it cannot be calculated without the true values. 
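To make the objective concrete, the RMSVE defined above and the weight-change intrinsic reward formalized just below can each be written in a few lines. This is a sketch of ours, not from the paper, and the array shapes are our assumptions:
\begin{verbatim}
import numpy as np

def rmsve(q_hat, q, d):
    """RMSVE under a state-action weighting d; arrays of shape (|S|, |A|)."""
    return np.sqrt(np.sum(d * (q_hat - q)**2))

def weight_change_reward(weights_before, weights_after):
    """Intrinsic reward R_{t+1}: summed l1 change over all N GVF learners."""
    return sum(np.sum(np.abs(w1 - w0))
               for w0, w1 in zip(weights_before, weights_after))
\end{verbatim}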
\begin{wrapfigure}[19]{r}{0.45\textwidth} \begin{minipage}{0.48\textwidth} \begin{algorithm}[H] \caption{Multi-Prediction Learning System} \label{alg:generic} {\bfseries Input:} $N$ GVF questions \begin{algorithmic} \STATE {Initialize behavior policy parameters } $\theta_0$\\ \text{ and GVF learners } $w^{(1)}_{0}, \ldots, w^{(N)}_0$ \STATE {Obtain initial observation } $S_0$ \FOR{$t = 0, 1, \ldots$} \STATE Choose action $A_t$ according to $\pi_{\theta_t}(\cdot | S_t)$ \STATE Observe next state vector $S_{t+1}$ and $\gamma_{t+1}$ \STATE // Update predictions with new data \FOR{$j=1$ {\bfseries to} $N$} \STATE $c \gets c^{(j)}(S_t, A_t, S_{t+1})$ \STATE $\gamma \gets \gamma^{(j)}(S_t, A_t, S_{t+1})$ \STATE Update $w_t^{(j)}$ with $(S_t, A_t, c, S_{t+1}, \gamma)$ \ENDFOR \STATE \!\!\!// Compute intrinsic reward, update behavior \STATE $R_{t+1} \gets \sum_{j=1}^N \| w^{(j)}_{t+1} - w^{(j)}_{t} \|_1$ \STATE Update $\theta_t$ with $(S_t, A_t, R_{t+1}, S_{t+1}, \gamma_{t+1})$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \end{wrapfigure} Many intrinsic rewards have been considered to estimate the learning progress of predictions. A recent work provided a thorough survey of different options, as well as an empirical study \citep{linke2020adapting}. Their conclusion was that, for reasonable prediction learners, simple learning progress measures---like the change in weights---were effective for driving useful data gathering. We rely on this conclusion here, and formalize the problem using the $\ell_1$ norm of the change in weights. Other intrinsic rewards could be swapped into the framework, but because our focus is on the non-stationarity in the system, and because empirically we found this weight-change intrinsic reward to be effective, we opt for this simple choice upfront. We provide the generic pseudocode for a multi-prediction reinforcement learning system in Algorithm \ref{alg:generic}. Note that the behavior agent also has a separate transition-based $\gamma$, which enables us to encode both continuing and episodic problems. For example, the pseudo-termination for a GVF could be a particular state in the environment, such as a doorway. The discount for the GVF would be zero in that state, even though it is not a true terminal state; the behavior discount $\gamma_{t+1}$ would not be zero. \section{Non-stationarity Induced by Learning} \label{sec:nonstationarity-in-learning} On the surface, the multi-prediction problem outlined in the previous section is a relatively straightforward reinforcement learning problem. The behavior policy learns to maximize cumulative reward, and simultaneously learns predictions about its environment. Many RL systems incorporate prediction learning, either as auxiliary tasks or to learn a model. However, unlike standard RL problems, the rewards for the behavior are non-stationary when using intrinsic rewards, even under stationary dynamics. Further, the prediction problems themselves are non-stationary due to a changing behavior. To understand this more deeply, consider first the behavior rewards. On each time step, the predictions are updated. Progressively, they get more and more accurate. Imagine a scenario where they can become perfectly accurate, such as in the tabular setting with stationary cumulants. The behavior rewards are high in early learning, when predictions are inaccurate. As predictions become more and more accurate, the change in weights gets smaller until eventually the behavior rewards are near zero.
This means that when the behavior revisits a state, the reward distribution has actually changed. More generally, in the function approximation setting, the behavior rewards will continue to change with time, not necessarily decay to zero. The prediction problems are also non-stationary, for two reasons. First, the cumulants themselves might be non-stationary, even if the transition dynamics are stationary. For example, the cumulant could correspond to the amount of food in a location in the environment, which slowly gets depleted. Or, the cumulant could depend on a hidden variable that makes the outcome appear non-stationary. Even with a stationary cumulant, the prediction learning problem can be non-stationary due to a changing behavior policy. As the behavior policy changes, the state distribution changes. Implicitly, when learning off-policy, the predictions are minimizing an objective weighted by the state visitation under the behavior policy. As the behavior changes, the underlying objective is actually changing, resulting in a non-stationary prediction problem. Though there has been some work on learning under non-stationarity in RL and bandits, none to our knowledge has addressed the multi-prediction setting in MDPs. There has been some work developing reinforcement learning algorithms for non-stationary MDPs, but largely for the tabular setting \citep{suttonbartobook,da2006improving,abdallah2016addressing,cheung2020reinforcement} or assuming periodic shifts \citep{chandak2020towards,chandak2020optimizing,padakandla2020reinforcement}. There has also been some work in the non-stationary multi-armed bandit setting \citep{garivier2008upper, koulouriotis2008reinforcement, besbes2014stochastic}. Non-stationary behavior rewards that decay over time have been considered in the bandit setting, under rotting bandits \citep{levine2017rotting,seznec2019rotting}; these algorithms do not obviously extend to the RL setting. \section{Handling the Non-Stationarity in a Multi-prediction System} In this section, we describe a unified approach to handle non-stationarity in both the GVF and behavior learners, using successor features. We first discuss how to use successor features to learn under non-stationary cumulants, for prediction. Then we discuss using successor features for control, which allows us to leverage this approach for the non-stationary rewards of the behavior. We then discuss state reweightings, and how to mitigate non-stationarity due to a changing behavior. \subsection{Successor Features for Non-stationary Rewards}\label{sec:nr-sf} Successor features provide an elegant way to learn value functions under non-stationarity. The separation of learning stationary successor features and rewards enables more effective tracking of non-stationary rewards, as we explain in this section and formally prove in Section \ref{sec:theory}. Assume that there is a weight vector $\mathbf{w}^* \in \mathbb{R}^d$ and features $\mathbf{x}(s,a) \in \mathbb{R}^d$ for each state and action $(s,a)$ such that $r(s,a) = \langle \mathbf{x}(s,a), \mathbf{w}^*\rangle$. Recursively define \begin{align*} \vec{\psi}(s,a) = \mathbb{E}_\pi[\mathbf{x}(S_t, A_t) + \gamma_{t+1} \vec{\psi}(S_{t+1}, A_{t+1}) | S_t =s, A_t =a] \end{align*} $\vec{\psi}(s,a)$ is called the \emph{successor features}: the discounted cumulative sum of feature vectors if we follow policy $\pi$.
For $\vec{\psi}_t \defeq \vec{\psi}(S_t,A_t)$ and $\mathbf{x}_t \defeq \mathbf{x}(S_t,A_t)$, we can see $Q(s,a) = \langle \vec{\psi}(s,a), \mathbf{w}^*\rangle$ \begin{align*} &\langle \vec{\psi}(s,a), \mathbf{w}^*\rangle = \mathbb{E}_\pi[\langle \mathbf{x}_t, \mathbf{w}^*\rangle | S_t =s, A_t =a] + \mathbb{E}_\pi[\gamma_{t+1} \langle \vec{\psi}_{t+1}, \mathbf{w}^* \rangle | S_t =s, A_t =a]\\ &= r(s, a) + \mathbb{E}_\pi[\gamma_{t+1} \langle \mathbf{x}_{t+1}, \mathbf{w}^*\rangle | S_t =s, A_t =a]+ \mathbb{E}_\pi[\gamma_{t+1} \gamma_{t+2} \langle \vec{\psi}_{t+2}, \mathbf{w}^* \rangle | S_t =s, A_t =a]\\ &= r(s, a) + \mathbb{E}_\pi[\gamma_{t+1} r_{t+1} | S_t =s, A_t =a] + \mathbb{E}_\pi[\gamma_{t+1} \gamma_{t+2} \langle \vec{\psi}_{t+2}, \mathbf{w}^* \rangle | S_t =s, A_t =a]\\ &= \quad \ldots \quad = \mathbb{E}_\pi[r(s, a) + \gamma_{t+1} r_{t+1} + \gamma_{t+1} \gamma_{t+2} r_{t+2} + \ldots | S_t =s, A_t =a]\quad = Q(s,a). \end{align*} If we have features $\mathbf{x}(s,a) \in \mathbb{R}^d$ which allow us to represent the immediate reward, then successor features provide a good representation to approximate the GVF. We simply learn another set of parameters $\mathbf{w}_c \in \mathbb{R}^d$ that predict the immediate cumulant (or reward): $c(s,a) \approx \langle \mathbf{x}(s,a), \mathbf{w}_c \rangle$. These parameters $\mathbf{w}_c$ are updated using a standard regression update, and $Q(s,a) \approx \langle \vec{\psi}(s,a), \mathbf{w}_c\rangle$. The successor features $\vec{\psi}(s,a)$ themselves, however, also need to be approximated. In most cases, we cannot explicitly maintain a separate $\vec{\psi}(s,a)$ for each $(s,a)$, outside of the tabular setting. Notice that each element in $\vec{\psi}(s,a)$ corresponds to a true expected return: the cumulative discounted sum of a reward feature into the future. Therefore, $\vec{\psi}(s,a)$ can be approximated using any value function approximation method, such as temporal difference (TD) learning. We learn parameters $\mathbf{w}_\psi$ for the approximation $\hat\vec{\psi}(s,a; \mathbf{w}_\psi) = [\hat\vec{\psi}_1(s,a; \mathbf{w}_\psi), ..., \hat\vec{\psi}_d(s,a; \mathbf{w}_\psi)]^\top \in \mathbb{R}^d$ where $\hat\vec{\psi}_m(s,a; \mathbf{w}_\psi) \approx \vec{\psi}_m(s,a)$. We can use any function approximator for $\hat\vec{\psi}(s,a; \mathbf{w}_\psi)$: for example, linear function approximation with tile coding, where $\mathbf{w}_\psi$ linearly weights the tile-coding features to produce $\hat\vec{\psi}(s,a; \mathbf{w}_\psi)$, or neural networks, where $\mathbf{w}_\psi$ are the parameters of the network. We summarize the algorithm using successor features for non-stationary rewards/cumulants, called SF-NR, in Algorithm \ref{alg:sf-nr}. We provide an update formula for the approximate SF using Expected Sarsa for prediction \citep{suttonbartobook} for simplicity, but note that any value learning algorithm can be used here. In our experiments, we use Tree-Backup \citep{precup2000eligibility} because it reduces variance from off-policy learning; we provide the pseudocode in Appendix~\ref{app:algs}. Algorithm \ref{alg:sf-nr} assumes that the reward features $\mathbf{x}(s,a)$ are given, but of course these can be learned as well. Ideally, we would learn a compact set of reward features that provide accurate estimates as a linear function of these reward features. A compact (smaller) set of reward features is preferred because it makes the SF more computationally efficient to learn.
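For concreteness, the following is a linear function approximation version of the SF-NR update, mirroring Algorithm \ref{alg:sf-nr} with the Expected Sarsa target. This is our sketch, not the paper's notation: the matrix shapes and names are assumptions.
\begin{verbatim}
import numpy as np

def sf_nr_update(W_psi, w_c, phi, phi_next_pi, x, c, gamma, alpha):
    """One SF-NR step with linear function approximation.

    W_psi       : (d, k) float array; row m approximates SF component psi_m
    w_c         : (d,) cumulant-model weights
    phi         : (k,) FA features of (S_t, A_t)
    phi_next_pi : (k,) expected next features under pi,
                  sum_a' pi(a'|S_{t+1}) phi(S_{t+1}, a')
    x           : (d,) reward features of (S_t, A_t)
    c, gamma    : cumulant C_{t+1} and discount gamma_{t+1}
    """
    psi = W_psi @ phi                       # current SF estimates
    psi_next = W_psi @ phi_next_pi          # expected next SF estimates
    delta = x + gamma * psi_next - psi      # d per-component TD errors
    W_psi += alpha * np.outer(delta, phi)   # TD update for every psi_m
    w_c += alpha * (c - x @ w_c) * x        # regression on the cumulant
    return (W_psi @ phi) @ w_c              # current GVF estimate Q(s, a)
\end{verbatim}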
\begin{wrapfigure}[13]{r}{0.43\textwidth} \begin{minipage}{0.43\textwidth} \begin{algorithm}[H] \caption{Successor Features for \\Non-stationary Rewards (SF-NR)} \label{alg:sf-nr} \! {\bfseries Input:} \!\!$(S_t, \!A_t, \!S_{t+1}, \!C_{t+1}, \!\gamma_{t+1})$, \!$\pi$, \!$\mathbf{w}_\psi$, \!$\mathbf{w}_c$ \begin{algorithmic} \STATE $\mathbf{x} \gets \mathbf{x}(S_t,A_t)$ \STATE $\hat\vec{\psi} \gets \hat\vec{\psi}(S_t,A_t;\mathbf{w}_\psi)$ \STATE $\hat\vec{\psi}' \gets \sum_{a'} \pi(a'|S_{t+1}) \hat\vec{\psi}(S_{t+1},a';\mathbf{w}_\psi)$ \STATE $\Delta \gets \vec{0}$ \FOR{$m=1$ {\bfseries to} $d$} \STATE $\delta_m \gets \mathbf{x}_m + \gamma_{t+1} \hat\vec{\psi}'_m - \hat\vec{\psi}_m$ \STATE $\Delta \gets \Delta + \delta_m \nabla \hat\vec{\psi}_m$ \ENDFOR \STATE $\mathbf{w}_\psi \gets \mathbf{w}_\psi + \alpha \Delta$ \STATE $\mathbf{w}_c \gets \mathbf{w}_c + \alpha (C_{t+1} - \langle \mathbf{x}, \mathbf{w}_c \rangle) \mathbf{x}$ \end{algorithmic} \end{algorithm} \end{minipage} \end{wrapfigure} There are two key advantages from the separation into learning successor features and immediate cumulant estimates. First, it easily allows different or changing cumulants to be used, for the same policy, using the same successor features. The transition dynamics summarized in the stationary successor features can be learned slowly to high accuracy and re-used. This re-use property is why these representations have been used for transfer \citep{barreto2017successor,barreto2018transfer,barreto2020fast}. This property is pertinent for us, because it allows us to more easily track changes in the cumulant. The regression updates can quickly update the parameters $\mathbf{w}_c$, and exploit the already learned successor features to more quickly track value estimates. Small changes in the rewards can result in large changes in the values; without the separation, therefore, it can be more difficult to directly track the value estimates. Second, the separation allows us to take advantage of online regression algorithms with strong convergence guarantees. Many optimizers and accelerations are designed for a supervised setting, rather than for temporal difference algorithms. Once the successor features are learned, the prediction problem reduces to a supervised learning problem. We can therefore even further improve tracking by leveraging these algorithms to learn and track the immediate cumulant. We formalize the convergence rate improvements, from this separation, in Section \ref{sec:theory}. \subsection{GPI with Successor Features for Control} In this section we outline a control algorithm under non-stationary rewards. SF-NR provides a method for updating the value estimate due to changing rewards. The behavior for the multi-prediction problem has changing rewards, and so could benefit from SF-NR. But SF-NR only provides a mechanism to efficiently track action-values for a fixed policy, not for a changing policy. Instead, we turn to the idea of constraining the behavior to act greedily with respect to the values for a set of policies, introduced as Generalized Policy Improvement (GPI) \cite{barreto2018transfer,barreto2020fast}. For our system, this is particularly natural, as we are already learning successor features for a collection of policies. Let us start there, where we assume our set of policies is $\Pi = \{\pi_1, \ldots, \pi_N\}$. 
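Anticipating the formal definition just below, the GPI action selection over the $N$ learned successor features can be sketched as follows (ours; it assumes the same linear SF parameterization as the SF-NR sketch above):
\begin{verbatim}
import numpy as np

def gpi_action(phi_per_action, W_psi_list, theta_r):
    """argmax_a max_j <psi^(j)(s, a), theta_r>.

    phi_per_action : (|A|, k) FA features of state s, one row per action
    W_psi_list     : N SF weight matrices, each (d, k), one per policy pi_j
    theta_r        : (d,) weights of the (intrinsic) reward model
    """
    # Stack Q^(j)(s, a) over policies j; each row has one entry per action.
    q = np.array([phi_per_action @ W.T @ theta_r for W in W_psi_list])
    return int(np.argmax(q.max(axis=0)))    # greedy over actions
\end{verbatim}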
Assume also that we have learned the successor features for these policies, $\hat\vec{\psi}(s,a; \mathbf{w}_\psi^{(j)})$, and that we have weights $\theta_r \in \mathbb{R}^d$ such that $\langle \mathbf{x}(s,a), \theta_r \rangle \approx \mathbb{E}[R_{t+1} | S_t = s, A_t = a]$ for behavior reward $R_{t+1}$. Then on each step, the behavior policy takes the following greedy action
\begin{align*}
{\mu}(s) = \argmax_{a} \max_{j \in \{1, \ldots, N\}} \hat Q^{(j)}_r(s,a) = \argmax_{a} \max_{j \in \{1, \ldots, N\}} \langle \hat\vec{\psi}(s,a; \mathbf{w}_\psi^{(j)}), \theta_r \rangle.
\end{align*}
The resulting policy is guaranteed to be an improvement: in every state the new policy has a value at least as good as any of the policies in the set \citep[Theorem 1]{barreto2017successor}. Later work also showed the sample efficiency of GPI when combining known reward weights to solve novel tasks \citep{barreto2020fast}.

The use of successor features has similar benefits as discussed above: the estimates can adapt more rapidly as the rewards change, due to learning progress changing over time. The separation is even more important here, as we know the rewards are constantly drifting, so tracking quickly is critical. We could adapt even more aggressively to these non-stationary rewards by anticipating trends. For example, instead of a regression update, we can model the trend (up or down) in the reward for a state and action. If the reward has been decreasing over time, then likely it will continue to decrease. Stochastic gradient descent will put more weight on recent points, but would likely predict a higher expected reward than is actually observed. For simplicity here, we still choose to use stochastic gradient descent, as it is a reasonably effective tracking algorithm, but note that performance improvements could likely be obtained by exploiting this structure in the problem.

We could consider different sets of policies for the GVFs and the behavior. However, the two are naturally coupled. First, the GPI theory shows that greedifying over a larger collection of policies provides better policies. It is sensible then to at least include the GVF policies in the set for the behavior. Second, the behavior needs to learn the successor features for the additional policies. Arguably, it should try to gather data to learn these well, so as to facilitate its own policy improvement. It should therefore also incorporate the learning progress for these successor features into the intrinsic reward. For this work, therefore, we assume that the behavior uses the set of GVF policies. Note that the weight change intrinsic reward uses the concatenation of $\mathbf{w}_\psi$ and $\mathbf{w}_c$.
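As a concrete illustration, below is a minimal sketch of this GPI action selection. The names, the callable representation of the SFs, and the toy usage are illustrative assumptions, not our implementation.
\begin{verbatim}
import numpy as np

def gpi_action(s, actions, sf_list, theta_r):
    """argmax_a max_j <psi_hat_j(s, a), theta_r> over a set of policies.

    sf_list: one learned SF per policy pi_j, as a callable (s, a) -> R^d
    theta_r: learned reward weights for the behavior reward, in R^d
    """
    best_a, best_q = None, -np.inf
    for a in actions:
        q = max(psi_j(s, a) @ theta_r for psi_j in sf_list)
        if q > best_q:
            best_a, best_q = a, q
    return best_a

# Toy usage with random tabular SFs (two policies, 5 states, 3 actions, d=4).
rng = np.random.default_rng(0)
tables = [rng.random((5, 3, 4)) for _ in range(2)]
sf_list = [lambda s, a, tab=tab: tab[s, a] for tab in tables]
print(gpi_action(2, range(3), sf_list, rng.random(4)))
\end{verbatim}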
\subsection{Interest and prior corrections for the changing state distribution}
\label{sec:prior_corrections}

The final source of non-stationarity is in the state distribution. As the behavior ${\mu}$ changes, the state-action visitation distribution ${d_{\mu_t}}: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$ changes. The state distribution implicitly weights the relative importance of states in the GVF objective, called the projected Bellman error (PBE). Correspondingly, the optimal SF solution could be changing, since the objective is changing. The impact of a changing state-weighting depends on function approximation capacity, because the weighting indicates how to trade off function approximation error across states. When approximation error is low or zero---such as in the tabular setting---the weighting has little impact on the solution. Generally, however, we expect some approximation error and so a non-negligible impact.

We can completely remove this source of non-stationarity by using \emph{prior corrections}. These are products of importance sampling ratios that reweight the trajectory to match the probability of seeing that trajectory under the target policy $\pi$. Namely, they modify the state weighting to $d_\pi$, the state-action visitation distribution under $\pi$. We explicitly show this in Appendix~\ref{app:prior-correction-pbe}. Unfortunately, prior corrections can be highly problematic in a system where the behavior policy takes exploratory actions and target policies are nearly deterministic. It is likely that these corrections will often be zero, or near zero, resulting in almost no learning.

To overcome this inherent difficulty, we restrict which states are important for each predictive question. Likely, when creating a GVF, the agent is interested in predictions for that GVF only in certain parts of the space. This is similar to the idea of initiation sets for options, where an option is only executed from a small set of relevant states. We can ask: what is the GVF answer, from this smaller set of states of interest? This can be encoded with a non-negative interest function, $i(s,a)$, where some (or even many) states have an interest of zero. This interest is incorporated into the state-weighting in the objective, so the agent can focus function approximation resources on states of interest.

When using interest, it is sensible to use emphatic weightings \citep{sutton2016emphatic}. Emphatic weightings are a prior correction method, used under the excursions model \citep{patterson2021generalized}. They reweight to a discounted state-action visitation under $\pi$ when starting from states proportional to ${d_\mu}$. Further, they ensure states inherit the interest of any states that bootstrap off of them. Even if a state has an interest of zero, we want to accurately estimate its value if an important state bootstraps off of its value. The combination of interest and emphatic weightings---which shift the state-action weighting to visitation under $\pi$---means that we mitigate much of the non-stationarity in the state-action weighting. We provide the pseudocode for this Emphatic TB (ETB) algorithm in Appendix~\ref{app:algs}.

\section{Sample Efficiency of SF-NR}
\label{sec:theory}

As suggested in Section~\ref{sec:nr-sf}, the use of successor features makes SF-NR particularly well-suited to our multi-prediction problem setting. The reason for this is simple: given access to an accurate SF matrix, value function estimation reduces to a \textit{fundamentally simpler} linear prediction problem. Indeed, access to an accurate SF enables one to sidestep known lower bounds on PBE estimation. For simplicity, we prove the result for value functions; the result easily extends to action-values. Denote by $\mathbf{v}^{\pi}\in\mathbb{R}^{\Abs{\mathcal{S}}}$ the vector with entries $v^{\pi}(s)$, $\mathbf{r}^{\pi}\in\mathbb{R}^{\Abs{\mathcal{S}}}$ the vector of expected immediate rewards in each state, and $\mathbf{P}\in\mathbb{R}^{\Abs{\mathcal{S}}\times\Abs{\mathcal{S}}}$ the matrix of transition probabilities. The following lemma, proven in Appendix~\ref{app:msve-bound}, relates the mean squared value error (VE) to one-step reward prediction error.
\begin{restatable}{lemma}{MSVEBound}\label{lemma:msve-bound} Assume there exists a $\mathbf{w}^{*}\in\mathbb{R}^{d}$ such that $\mathbf{r}^{\pi}=\mathbf{X} \mathbf{w}^{*}$. Let $\hat \mathbf{r}\defeq\mathbf{X}\mathbf{w}$ for some $\mathbf{w}\in\mathbb{R}^{d}$, and let $\mathbf{D}=\text{Diag}(\{d(s)\}_{s\in\mathcal{S}})$ for distribution $d$ fully supported on $\mathcal{S}$, with $\norm{\cdot}_{\mathbf{D}}$ the weighted norm under $\mathbf{D}$. Then the value estimate $\hat \mathbf{v}\defeq\Psi\mathbf{w}$ satisfies $\half\norm{\mathbf{v}^{\pi}-\hat\mathbf{v}}^{2}_{\mathbf{D}}\le\frac{\norm{\mathbf{r}^{\pi}-\hat\mathbf{r}}_{\mathbf{D}}^{2}}{2(1-\gamma)^{2}}$. \end{restatable}

Thus we can ensure that $\text{VE}(\mathbf{v}^{\pi},\hat \mathbf{v})\le \varepsilon$ by ensuring that $\norm{\mathbf{r}^{\pi}-\hat\mathbf{r}}_{\mathbf{D}}^{2}\le\varepsilon(1-\gamma)^{2}$. This is promising, as this latter expression is readily expressed as the objective of a linear regression problem. To illustrate the utility of this, let us consider a concrete example: suppose the agent has an accurate SF matrix $\Psi$ and that the reward function changes at some point in the agent's deployment. Suppose we have access to a batch of transitions $\mathcal{D}\defeq\Set{S_{t},A_{t},S_{t}^{\prime},r_{t},\rho_{t}}_{t=1}^{T}$ with which we can correct our estimate of $v^{\pi}$, where each $(s,a,s^{\prime},r,\rho)\in\mathcal{D}$ is such that $s\sim {d_\mu}$ for some known behavior policy ${\mu}$, $a\sim\pi(\cdot|s)$, $s^{\prime}\sim P(\cdot|s,a)$ and $r = r(s,a,s^{\prime})$. Assume for simplicity that $\rho_{t}\le\rho_{\max{}}$, $\norm{x(S_{t})}_{\infty}\le L$ and $\abs{r_{t}}\le R_{\max{}}$ for some finite $\rho_{\max{}},R_{\max{}},L\in\mathbb{R}_{+}$. Then we can get the following result, proven in Appendix~\ref{app:regret-bound}, which is a straightforward application of \citet[Theorem 7.26]{orabona2019modern}.

\begin{restatable}{proposition}{RegretBound}\label{prop:regret-bound} Define $\ell_{t}(w)\defeq \frac{\rho_{t}}{2}\brac{r_{t}-\inner{x(S_{t})}{w}}^{2}$. Suppose we apply a basic recursive least-squares estimator to minimize regret on this loss sequence, producing a sequence of iterates $w_{t}$. Let $\overline{w}_{T}\defeq\frac{1}{T}\sum_{t=1}^{T}w_{t}$ denote the average iterate. For $\hat v(s) = \inner{\psi(s)}{\overline{w}_{T}}$, we have that
\begin{align}
\norm{\mathbf{v}^{\pi}-\hat\mathbf{v}}^{2}_{\mathbf{D}} \le O\brac{\frac{d \rho_{\max}R_{\max{}}^{2}\Log{1+\rho_{\max{}}L^{2}T}}{(1-\gamma)^{2}T}} \label{eq:sf-bound}.
\end{align}
\end{restatable}

In contrast, without the SF we are faced with minimizing a harder objective: the PBE. It can be shown that minimizing the PBE is equivalent to solving a stochastic saddle-point problem, and convergence to the saddle-point of this problem has an unimprovable rate of $O\brac{\frac{\tau}{T^{2}}+\frac{(1+\gamma)\rho_{\max{}}L^{2}d}{T} + \frac{\sigma}{\sqrt{T}}}$, where $\tau$ is the maximum eigenvalue of the covariance matrix and $\sigma$ bounds the gradient stochasticity; this convergence rate translates into the performance bound $\half\norm{\mathbf{v}^{\pi}-\hat \mathbf{v}}^{2}_{\mathbf{D}}\le O\brac{\sqrt{\frac{\tau}{T^{2}}+\frac{(1+\gamma)\rho_{\max{}}L^{2}d}{T}+\frac{\sigma}{\sqrt{T}}}}$ \citep[Proposition 5]{liu2018proximal}.
Comparing with Equation \ref{eq:sf-bound}, we observe an additional dependence of $O(\sqrt{\tau}/T)$, as well as a worse dependence of at least $O(1/\sqrt{T})$, compared to $O(\Log{T}/T)$, on all other quantities of interest. For instance, at $T = 10^{6}$ we have $1/\sqrt{T} = 10^{-3}$, whereas $\ln(T)/T \approx 1.4 \times 10^{-5}$. This reinforces the intuition that access to the SF enables us to more efficiently re-evaluate the value function.

\section{A First Experiment Testing the Multi-prediction System}

\begin{wrapfigure}[14]{l}{0.44\textwidth} \begin{centering} \includegraphics[width=0.4\textwidth]{figures/combined.pdf} \end{centering} \caption{\label{fig:objectives} \textbf{Tabular TMaze} with 4 GVFs, with cumulants of zero except in the goals. The right plot shows the cumulants in the goals. G2 and G4 have constant cumulants, G1 has a distractor cumulant and G3 a drifter. }\label{fig_tmaze} \end{wrapfigure}

In this section, we investigate the utility of using SF-NR under non-stationary cumulants and rewards, both for prediction and control. We conduct the experiment in a TMaze environment, inspired by the environments used to test animal cognition \citep{tolman1930introduction}. The environment, depicted in Figure \ref{fig_tmaze}, has four GVFs where each policy takes the fastest route to its corresponding goal. The cumulants are zero everywhere except at the goals. The cumulant can be of three types: a constant fixed value (\textbf{constant}), a fixed-mean, high-variance value (\textbf{distractor}), or a non-stationary random walk with low-variance, zero-mean increments (\textbf{drifter}). Exact formulas for these cumulants are in Appendix~\ref{app:tmazedetails}.

\textbf{Utility of SF-NR for a Fixed Behavior Policy} We start by testing the utility of SF-NR for GVF learning, under a fixed policy that provides good data coverage for every GVF. The \emph{Fixed-Behavior Policy} is started from random states in the TMaze, and moves towards the closest goal, with a 50/50 chance of going either direction if there is a tie. This policy is like a round robin policy, in that one of the GVF policies is executed each episode and, in expectation, all four policies are executed the same number of times. We compare an agent that uses SF-NR and one that learns the approximate GVFs using Tree-Backup (TB). TB is an off-policy temporal difference (TD) algorithm that reduces variance in the eligibility trace. We also use TB to learn the successor features in SF-NR. Both use $\lambda = 0.9$ and a stepsize method called Auto \citep{mahmood2012tuning}, designed for online learning. We sweep the initial stepsize and meta stepsizes for Auto. For further details about the agents and optimizer, see Appendix~\ref{app:algs}. We additionally compare to least squares TD (LSTD), with $\lambda = 0.9$, particularly as it computes a matrix similar to the SF, but does not separate out cumulant learning (see Appendix~\ref{app:relationship-srnf-td} for this connection). In Figure \ref{fig:round_robin_tmaze_rmse}, we can see SF-NR allows for much more effective learning, particularly later in learning when it more effectively tracks the non-stationary signals. LSTD performs much more poorly, likely because it corresponds to a closed-form batch solution, which uses old cumulants that are no longer reflective of the current cumulant distribution.

\textbf{Investigating GPI for Learning the Behavior} Next we investigate if SF-NR improves learning of the whole system, both for the GVFs and for the behavior policy. We use SF-NR and TB for the GVF learners, and Expected Sarsa (Sarsa) and GPI for the behavior.
The GPI agent uses the GVF policies for its set of policies. The reward features for the behavior are likely different from those for the GVF learners, because the cumulants are zero in most states whereas intrinsic rewards are likely non-zero in most states. The GPI agent, therefore, learns its own SFs for each policy, also using TB. The reward weights that estimate the (changing) intrinsic rewards are learned using Auto, as are the SFs. Note that the behavior and GVF learners all share the same meta-step size---namely, only one shared parameter is swept. The results highlight that SF for the GVFs is critical for effective learning, though GPI and Sarsa perform similarly, as shown in Figure \ref{fig:control_tmaze_rmse}. The utility of SF is even greater here, with TB GVF learners inducing much worse performance than SF GVF learners. GPI and Sarsa are similar, which is likely due to the fact that Sarsa uses traces with tabular features, which allow states along the trajectory to the drifter goal to update quickly. In the following sections, we find a bigger distinction between the two.

We visualize the goal visitation of GPI in Figure \ref{fig:visitation_tmaze}. Once the GVF learners have a good estimate for the \textit{constant} cumulant signals and the \textit{distractor} cumulant signal, the agent behavior should switch to visiting only the \textit{drifter} goal, as that is the only goal where visiting would improve the GVF prediction. When using SF GVF learners, this behavior emerges, but under TB GVF learners the agent incorrectly focuses on the distractor. This is even more pronounced for Sarsa (see Appendix~\ref{app:sarsa-tmaze}).

\begin{figure} \caption{Fixed-Behavior Policy} \caption{Learned Behavior Policy} \caption{Visitation Plot Comparison} \caption{Performance in \textbf{Tabular TMaze}, with averages over 30 runs. \textbf{(a)} and \textbf{(b)} show average off-policy prediction RMSE, with standard errors, where the error is weighted by (a) the state distribution ${d_\mu}$ for the Fixed-Behavior policy and (b) a uniform state weighting when learning the behavior. \textbf{(c)} Goal visitation plots for GPI with SF and TB.} \label{fig:round_robin_tmaze_rmse} \label{fig:control_tmaze_rmse} \label{fig:visitation_tmaze} \end{figure}

\section{Experiments under Function Approximation}

We evaluate our system in a similar fashion to the last section, but now under function approximation. We use a benchmark problem at the end of this section, but start with experiments in the TMaze modified to be a continuous environment, with full details described in Appendix~\ref{app:tmazedetails}. The environment observation $o_t \in \mathbb{R}^2$ corresponds to the xy coordinates of the agent. We use tile-coded features of 2 tilings of 8 tiles for the state representation, both for TB and to learn the SF. The reward features for the GVF learners can be much simpler than the state-action features, because they only need to estimate the cumulants, which are zero in every state except the goals. The reward features are a one-hot encoding indicating if $s'$ in the tuple $(s,a,s')$ is in the pseudo-termination goals of the GVFs. For the Continuous TMaze, this gives a 4-dimensional vector. The reward features for GPI are state aggregation applied along the one-dimensional line components. Appendix~\ref{app:tmazedetails} contains more details on the reward features for the GVF and behavior learners.
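To make this concrete, below is a minimal sketch of such one-hot reward features. The goal centers, the box test, and the half-width parameter are illustrative assumptions; the environment's actual goal regions are described in Appendix~\ref{app:tmazedetails}.
\begin{verbatim}
import numpy as np

def reward_features(s_next, goal_centers, eps):
    """One-hot x(s, a, s'): indicator of the GVF goal region containing s'.

    s_next:       xy coordinates of s'
    goal_centers: list of (x, y) goal centers, one per GVF
    eps:          goal half-width (the environment's epsilon)
    """
    x = np.zeros(len(goal_centers))
    for i, (gx, gy) in enumerate(goal_centers):
        if abs(s_next[0] - gx) <= eps and abs(s_next[1] - gy) <= eps:
            x[i] = 1.0
            break  # goal regions are disjoint
    return x
\end{verbatim}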
\textbf{Results for a Fixed Behavior and Learned Behaviors} Under function approximation, SF-NR continues to enable more effective tracking of the cumulants than the other methods. For control, GPI is notably better than Sarsa, potentially because under function approximation eligibility traces are not as effective at sweeping back changes in behavior rewards, and so the separation is more important. We include visitation plots in Appendix~\ref{app:gpi-tmaze}, which are similar to the tabular setting.

\begin{figure} \caption{Fixed-Behavior Policy} \caption{Learned Behavior Policy} \caption{Replay with Fixed-Behavior} \caption{Performance in \textbf{Continuous TMaze}, with averages over 30 runs. \textbf{(a)} and \textbf{(b)} show average off-policy prediction RMSE, with standard errors, where the error is weighted by (a) the state distribution ${d_\mu}$ for the Fixed-Behavior policy and (b) a uniform state weighting when learning the behavior. \textbf{(c)} RMSE in Continuous TMaze with a Fixed Behavior when incorporating replay.} \label{fig:round_robin_rmse} \label{fig:control_tmaze_rmse} \label{fig_replay} \end{figure}

Note that the efficacy of SF-NR and GPI relied on having reward features that did not overly generalize. The SF learns the expected feature vector when following the target policy. For the GVF learners, if states on the trajectory share features with states near the goal, then the value estimates will likely be higher for those states. The rewards are learned using the squared error, which, unlike other losses, is likely only to bring cumulant estimates near zero. These small non-zero cumulant estimates are accumulated by the SF over the entire trajectory, resulting in higher error than TB. We demonstrate this in Appendix~\ref{app:reward-features}. We designed reward features to avoid this problem for our experiments, knowing that effective reward features can and have been learned for SF \citep{barreto2020fast}.

\textbf{Results using Replay} The above results use completely online learning, with eligibility traces. A natural question is if the more modern approach of using replay could significantly change the results. In fact, early versions of the system included replay but had surprisingly negative results, which we later realized was due to the inherent non-stationarity in the system. Replaying old cumulants and rewards that have become outdated actually harms performance of the system. Once we have the separation with the SF, however, we can actually benefit from replay for this stationary component. We demonstrate this result in Figure \ref{fig_replay}. We use $\lambda = 0$ for this result, because we use replay. The settings are otherwise the same as above, and we resweep hyperparameters for this experiment. SF-NR benefits from replay, because it only uses it for its stationary component: the SF. TB, on the other hand, actually performs more poorly with replay. As before, LSTD, which similarly uses old cumulants, also performs poorly.

\textbf{Incorporating Interest}
\begin{wrapfigure}[10]{r}{0.34\textwidth} \includegraphics[width=0.34\textwidth]{figures/state_reweighting/TwoD_RMSE_fixed.pdf} \caption{Using interest: shading is standard error over 30 runs.} \label{fig:state_reweighting_rmse} \end{wrapfigure}
To study the effects of interest, a more open-world environment is needed. We use the \emph{Open 2D World}, described in Appendix \ref{app:2dworlddetails}, to analyze this problem. At the start of each episode, the agent begins in the center of the environment.
The interest for each GVF is one if the state is in the same quadrant as the GVF's respective goal, and zero otherwise. This enables the GVFs to focus their learning on a subset of the entire space, and thus use the function approximation resources more wisely and give a better weight change profile as an intrinsic reward to the behavior learner. Each GVF prediction $i$ is evaluated under the state-action weighting induced by running $\pi_i$, with results in Figure \ref{fig:state_reweighting_rmse}. Both TB with interest and ETB reweight states to focus more on state visitation under the policy. Both significantly improve performance over not using interest, allowing faster learning and reaching a lower error. The reweighting under ETB more closely matches state visitation under the policy, and accounts for the impacts of bootstrapping. We find that ETB does provide some initial learning benefits. The original ETD algorithm is known to suffer from variance issues; we may find with variance reduction that the utility of ETB is even more pronounced.

\textbf{Validation of the Multi-Prediction System in a Standard Benchmark Problem} Finally, we investigate multi-prediction learning in Mountain Car, a standard benchmark with more complex transition dynamics that was not designed for our setting. The goal here is to show that multi-prediction learning is natural in many problem settings. In the usual formulation, the agent must learn to rock back and forth, building up momentum to reach the top of the hill on the right---a classic cost-to-goal problem. This is a hard exploration task, where a random agent requires thousands of steps to reach the top of the hill from the bottom of the valley. Here we use Mountain Car to see if our approach can learn about more than just getting out of the valley quickly. We specified one GVF whose termination and policy focus on reaching the top of the left hill, and a second GVF about reaching the top of the other side. The full details of the GVFs and the setup of this task can be found in Appendix~\ref{app:mcdetails}.

Figure \ref{fig:mc_rmse} shows how GPI and Sarsa compare against a baseline random policy. GPI provides much better data for GVF learning than the random policy and Sarsa, significantly reducing the RMSE of the learned GVFs. The goal visitation plots show GPI explores the domain and visits both GVFs' goals far more often than random, and more effectively than Sarsa.

\begin{figure} \caption{RMSE for each GVF} \caption{GVF\#2 goal visits: left hill top} \caption{GVF\#1 goal visits: right hill top} \caption{Performance in \textbf{Mountain Car} averaged over 30 runs, with standard errors. \textbf{(a)} Learning curves for RMSE, with a uniform weighting over states and actions. \textbf{(b)}, \textbf{(c)} show the number of times that the agent reached the termination for each GVF.} \label{fig:mc_rmse} \label{fig:right_wall_terms} \label{fig:hill_terms} \end{figure}

\section{Conclusion}
\label{sec:conclusion}

In this work, we take the first few steps towards building an effective multi-prediction learning system. We highlight the inherent non-stationarity in the problem and design algorithms based on successor features (SF) to better adapt to this non-stationarity.
We show that (1) temporally consistent behavior emerges from optimizing the amount of learning across diverse GVF questions; (2) successor features are useful for tracking non-stationary rewards and cumulants, both in theory and empirically; (3) replay is well suited for learning the stationary component (the successor features), while meta-learning works well for the non-stationary components; and (4) interest functions can improve the performance of the entire system, by focusing learning on a subset of states for each prediction.

Our work also highlights several critical open questions. (1) The utility of SFs is tied to the quality of the reward features; a better understanding of how to learn these reward features is essential. (2) Continual Auxiliary Task Learning is an RL problem, and requires effective exploration approaches to find and maximize intrinsic rewards---the intrinsic rewards do not themselves provide a solution to exploration. Never-ending exploration is needed. (3) The interaction between discovering predictive questions and learning them effectively remains largely unexplored. In this work, we focused on learning, for a given set of GVFs. Other work has focused on discovering useful GVFs \citep{veeriah2019discovery,veeriah2021discovery, nair2020contextual,zahavy2021discovering}. The interaction between the two is likely to introduce additional complexity in learning behavior, including producing the automatic curricula observed in previous work \citep{oudeyer2007intrinsic, chentanez2005intrinsically}.

This work demonstrates the utility of several ideas in RL that are conceptually compelling, but not widely used in RL systems, namely SFs and GVFs, GPI with SFs for control, meta-descent step-size adaptation, and interest functions. The trials and tribulations that led to this work involved many failures using classic algorithms in RL, like replay, and, in the end, provided evidence for the utility of these newer ideas. Our journey highlights the importance of building and analyzing complete RL systems, where the interacting parts---with different timescales of learning and complex interdependencies---necessitate incorporating these conceptually important ideas. Solving these integration problems represents the next big step for RL research.

\end{ack}

\appendix

\section{Sample Efficiency of SF-NR}
\label{app:sample-eff}

\subsection{Proof of Lemma~\ref{lemma:msve-bound}}\label{app:msve-bound}

In this section we provide a proof of Lemma~\ref{lemma:msve-bound}. The result is included for completeness; results of a similar form can be found throughout the literature (for example, similar steps are used in the proof of Lemma 3 of \citet{scherrer2016improved}). The lemma is repeated for convenience below.
\MSVEBound*

\textit{Proof:} Given access to an estimate $\hat \mathbf{r}$ of $\mathbf{r}^{\pi}$, we can bound the MSVE as
\begin{align}
\half\norm{\mathbf{v}^{\pi}-\hat \mathbf{v}}_{\mathbf{D}}^{2} &\overset{(a)}{=} \half\norm{(\mathbf{I}-\gamma \mathbf{P}_{\pi})^{\inv}\brac{\mathbf{r} ^{\pi} - \hat \mathbf{r} }}^{2}_{\mathbf{D}}\nonumber\\
&\overset{(b)}{\le}\half\norm{(\mathbf{I}-\gamma \mathbf{P}_{\pi})^{\inv}}^{2}_{\mathbf{D}}\norm{\mathbf{r} ^{\pi}-\hat \mathbf{r} }^{2}_{\mathbf{D}}\nonumber\\
&\overset{(c)}{\le}\half\brac{\sum_{t=0}^{\infty}\gamma^{t}\norm{\mathbf{P}_{\pi}^{t}}_{\mathbf{D}}}^{2}\norm{\mathbf{r} ^{\pi}-\hat \mathbf{r} }^{2}_{\mathbf{D}}\nonumber\\
&\overset{(d)}{\le}\frac{\norm{\mathbf{r} ^{\pi}-\hat \mathbf{r} }_{\mathbf{D}}^{2}}{2(1-\gamma)^{2}}\nonumber
\end{align}
where $(a)$ uses the decomposition $\mathbf{v}^{\pi}=(\mathbf{I}-\gamma \mathbf{P}_{\pi} )^{\inv}\mathbf{r}^{\pi}$, $(b)$ uses sub-multiplicativity of the matrix norm induced by $\norm{\cdot}_{\mathbf{D}}$, $(c)$ uses the von Neumann expansion $(\mathbf{I}-\gamma \mathbf{P}_{\pi})^{\inv}=\sum_{t=0}^{\infty}\gamma^{t}\mathbf{P}_{\pi}^{t}$ and the triangle inequality, and $(d)$ uses that $\norm{\mathbf{P}_{\pi}^{t}}_{\mathbf{D}} =\lambda_{\max{}}\brac{(\mathbf{P}_{\pi}^{t})^{\top}\mathbf{D}\mathbf{P}_{\pi}^{t}}\le 1$, since both $\mathbf{D}$ and $\mathbf{P}_{\pi}^{t}$ have eigenvalues of at most 1, followed by $\sum_{t=0}^{\infty}\gamma^{t}=\frac{1}{1-\gamma}$. $\blacksquare$

\subsection{Proof of Proposition~\ref{prop:regret-bound}}\label{app:regret-bound}

\RegretBound*

\textit{Proof:}
\begin{align*}
\mathcal{L}(\overline{w}_{T}) &= \mathbb{E}_{d_{\pi}}\sbrac{\half\brac{r_{t}-\inner{x(S_{t})}{\overline{w}_{T}}}^{2}}\\
&=\mathbb{E}_{{d_\mu}}\sbrac{\frac{\rho_{t}}{2}\brac{r_{t}-\inner{x(S_{t})}{\overline{w}_{T}}}^{2}}\\
&\le\mathbb{E}_{{d_\mu}}\sbrac{\frac{1}{T}\sum_{t=1}^{T}\ell_{t}(w_{t})}\\
&\le\frac{\norm{w^{*}}^{2}+d \rho_{\max{}}R_{\max{}}^{2}\Log{1+\rho_{\max}L^{2}T}}{2T},
\end{align*}
where the first inequality uses convexity of the loss and a standard online-to-batch conversion (each $w_{t}$ is independent of the $t$-th sample), and in the last line we applied the regret guarantee of the RLS estimator with regularization parameter $\lambda=1$ (see \citet[Theorem 7.26]{orabona2019modern}) and used that $\max_{s}\norm{x(s)}_{2}\le\sqrt{d}\max_{s}\norm{x(s)}_{\infty}=\sqrt{d}L$. Following Lemma \ref{lemma:msve-bound}, by taking $\hat v(s) = \inner{\psi(s)}{\overline{w}_{T}}$, we have that
\begin{align}
\norm{\mathbf{v}^{\pi}-\hat\mathbf{v}}^{2}_{\mathbf{D}} \le O\brac{\frac{d \rho_{\max}R_{\max{}}^{2}\Log{1+\rho_{\max{}}L^{2}T}}{(1-\gamma)^{2}T}}\label{eq:sf-bound-app}.
\end{align}
$\blacksquare$

\section{Relationship between SF-NR and TD solutions}
\label{app:relationship-srnf-td}

Let $\wvec^\pi$ be the fixed-point solution for the projected Bellman operator with respect to the $\lambda$-return, which is estimated by LSTD($\lambda$) for policy $\pi$ \citep{white2017unifying}. It is well known that TD($\lambda$) converges to this solution under the right conditions.
The components of the solution $\wvec^\pi = \mathbf{A_\pi}^\inv \mathbf{b_\pi}$ are as follows
\begin{align*}
\mathbf{A_\pi} &= \mathbf{X}^\top \mathbf{D} (\mathbf{I} - \lambda \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv (\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})\mathbf{X} \\
\mathbf{b_\pi} &= \mathbf{X}^\top \mathbf{D} (\mathbf{I} - \lambda \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \mathbf{r}
\end{align*}
where $\mathbf{X}\in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times d}$ is the feature matrix with $\mathbf{x}(s,a)^\top$ along its rows, $\mathbf{r} \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ is the expected immediate reward \big($\mathbf{r}((s,a))=\sum_{s'\in\mathcal{S}} P(s,a,s') R(s,a,s')$\big), $\mathbf{P}_\gamma \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}|}$ is a sub-stochastic matrix that represents the transition process \big($\mathbf{P}_\gamma((s,a),s')= P(s,a,s')\gamma(s,a,s')$\big), $\mathbf{\Pi_\pi} \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{S}||\mathcal{A}|}$ is a stochastic matrix that represents $\pi$ \big($\mathbf{\Pi_\pi}(s,(s,a)) = \pi(a|s)$\big), and $\mathbf{D} \in \mathbb{R}^{|\mathcal{S}||\mathcal{A}| \times |\mathcal{S}||\mathcal{A}|}$ is a diagonal matrix with the stationary state-action distribution induced by $\pi$ on its diagonal, which determines how approximation error is traded off across state-action pairs.

Let us consider the $\lambda=1$ case for simplicity. In this case
\begin{align*}
\mathbf{A_\pi} &= \mathbf{X}^\top \mathbf{D} \mathbf{X} \\
\mathbf{b_\pi} &= \mathbf{X}^\top \mathbf{D} (\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \mathbf{r}
\end{align*}
The predicted values correspond to $\hat{\mathbf{Q}} = \mathbf{X} \wvec^\pi$. This is the projection of the true values $\mathbf{Q}^* = (\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \mathbf{r}$ onto the space spanned by $\mathbf{X}$, where the projection operator is $\mathbf{\Pi} = \mathbf{X} (\mathbf{X}^\top \mathbf{D} \mathbf{X})^\inv \mathbf{X}^\top \mathbf{D}$ -- the TD(1) solution. Now, depending on the form of the reward $\mathbf{r}$, we have the following three cases.

\textit{Case 1: $\mathbf{r} = \mathbf{X} \mathbf{w}$} The LSTD estimate can be written as
\begin{align*}
\theta_{\text{LSTD}} = (\mathbf{X}^\top \mathbf{D} \mathbf{X})^\inv \mathbf{X}^\top \mathbf{D} (\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \mathbf{X} \mathbf{w}
\end{align*}
where the component $\mathbf{\Psi}=(\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \mathbf{X}$ corresponds to the successor features. Therefore, if the space $\mathbf{X}$ is used for learning both the successor features and the reward, the solution corresponding to SF-NR would be equivalent to the solution obtained by TD.

\textit{Case 2: $\mathbf{r} = \boldsymbol{\Phi} \mathbf{w}$} The LSTD estimate can be written as
\begin{align*}
\theta_{\text{LSTD}} = (\mathbf{X}^\top \mathbf{D} \mathbf{X})^\inv \mathbf{X}^\top \mathbf{D} (\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \boldsymbol{\Phi} \mathbf{w}
\end{align*}
where the component $\mathbf{\Psi}=(\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \boldsymbol{\Phi}$ corresponds to the successor features. Therefore, if the space $\mathbf{X}$ is used for learning the successor features, which correspond to weighted sums of $\boldsymbol{\Phi}$, and $\boldsymbol{\Phi}$ is used for learning the reward, the solution corresponding to SF-NR would be equivalent to the solution obtained by TD.
\textit{Case 3: $\mathbf{r} = \mathbf{X} \mathbf{w} + \mathbf{\boldsymbol{\eta}_r}$ or $\mathbf{r} = \boldsymbol{\Phi} \mathbf{w} + \mathbf{\boldsymbol{\eta}_r}$} where $\mathbf{\boldsymbol{\eta}_r}$ is the model misspecification error for predicting the immediate reward. The LSTD estimate would correspond to
\begin{align*}
\theta_{\text{LSTD}} = (\mathbf{X}^\top \mathbf{D} \mathbf{X})^\inv \mathbf{X}^\top \mathbf{D} (\mathbf{I} - \mathbf{P_\gamma} \mathbf{\Pi_\pi})^\inv \mathbf{r}
\end{align*}
whereas the SF-NR estimate would capture the same component as in case (1) or case (2). Therefore, if there is a misspecification error for learning $\mathbf{r}$, the two solutions would differ.

Hence, the decomposition in SF-NR does not reduce representability of the TD(1) solution if the reward is linear in some features. More generally, we could introduce $\lambda < 1$ to provide a bias-variance trade-off for learning the SR as well.

\section{Prior Corrections and the Projected Bellman Error}
\label{app:prior-correction-pbe}

Let us first consider the SR objective under a fixed behavior, ${\mu}$, with stationary distribution ${d_\mu}$ over states and actions. When using TD for action-values, with covariance $\mathbf{C} = \mathbb{E}[\mathbf{x}(S,A) \mathbf{x}(S,A)^\top] = \sum_{s,a} {d_\mu}(s,a)\mathbf{x}(s,a) \mathbf{x}(s,a)^\top$, the underlying objective is the mean-squared projected Bellman error (MSPBE):
\begin{align*}
\text{MSPBE}(w) &= \Big\| \sum_{s,a} {d_\mu}(s,a) \mathbb{E}_\pi[\delta(w) \mathbf{x}(s,a) | S = s, A = a] \Big\|^2_{\mathbf{C}^{-1}}\\
&= \mathbb{E}_\pi[\delta(w) \mathbf{x}(S,A)]^\top \mathbf{C}^{-1} \mathbb{E}_\pi[\delta(w) \mathbf{x}(S,A)]
\end{align*}
The TD fixed point corresponds to the $w$ such that $\mathbb{E}_\pi[\delta(w) \mathbf{x}(S,A)] = 0$, which is defined based on the state-action weighting ${d_\mu}$. Different weightings result in different solutions. The weighting is implicit in the TD update, when updating from states and actions visited under the behavior policy. The predictions are updated more frequently in the more frequently visited state-action pairs, giving them higher weighting in the objective. However, we can change the weighting using importance sampling. For example, if we pre-multiply the TD update with $d(s,a)/{d_\mu}(s,a)$ for some weighting $d$, then this changes the state-action weighting in the objective to $d(s,a)$ instead of ${d_\mu}(s,a)$.

The issue, though, is not that the objective is weighted by ${d_\mu}$, but rather that ${d_\mu}$ is changing as ${\mu}$ is changing. Correspondingly, the optimal SR solution could be changing, since the objective is changing. The impact of this changing state distribution depends on the function approximation capacity. The weighting indicates how to trade off function approximation error across states; when approximation error is low or zero, the weighting has no impact on the TD fixed point. For example, in a tabular setting, the agent can achieve $\mathbb{E}_\pi[\delta(w) \mathbf{x}(s,a) | S = s, A = a] = 0$ for every $(s,a)$. Regardless of the weighting---as long as it is non-zero---the TD fixed point is the same. Generally, however, there will be some approximation error and so some level of non-stationarity. This pre-multiplication provides us with a mechanism to keep the objective stationary.
If we could track the changing ${d_\mu}_t$ with time, and identify a desired weighting $d$, then we could pre-multiply each update with $d(s_t,a_t)/{d_\mu}_t(s_t,a_t)$ to ensure we correct the state-action distribution to be $d$. There have been some promising strategies developed to estimate a stationary ${d_\mu}$ \citep{hallak2017consistent,liu2018breaking,liu2020offpolicy}, though here they would have to be adapted to constantly track ${d_\mu}_t$.

Another option is to use prior corrections to reweight the entire trajectory up to a state. Prior corrections were introduced to ensure convergence of off-policy TD \citep{precup2000eligibility}. For a fixed behavior, the algorithm pre-multiplies with a product of importance sampling ratios, with $\rho(a | s) \defeq \frac{\pi(a | s)}{{\mu}(a | s)}$
\begin{align*}
w = w + \alpha \left[\Pi_{i=0}^{t}\rho(a_i | s_i) \right]\delta \mathbf{x}(s_t, a_t)
\end{align*}
This shifts the weight from state-actions visited under ${\mu}$ to state-actions visited under $\pi$, because
\begin{align*}
&\mathbb{E}_{{\mu}}\left[\Pi_{i=0}^{t}\rho(A_i | S_i) \delta \mathbf{x}(S_t, A_t) | S_t = s, A_t = a \right]\\
&=\mathbb{E}_{{\mu}}\left[\Pi_{i=0}^{t}\rho(A_i | S_i) | S_t = s, A_t = a \right] \mathbb{E}[\delta \mathbf{x}(S_t, A_t) | S_t = s, A_t = a]
\end{align*}
and, when considering the expectation across the time steps $t$ at which $s, a$ are observed,
\begin{align*}
\mathbb{E}_{{\mu}}\left[\Pi_{i=0}^{t}\rho(A_i | S_i) | S_t = s, A_t = a\right] &= \frac{d_\pi(s,a)}{{d_\mu}(s,a)}
\end{align*}
These prior corrections also correct the state-action distribution even with ${d_\mu}$ changing on each step, because the numerator reflects the probability of reaching $s,a$ under policy $\pi$ and the denominator reflects the probability of reaching $s,a$ under the sequence of behavior policies. For $\rho_i(a | s) \defeq \frac{\pi(a | s)}{{\mu}_i(a | s)}$
\begin{align*}
\Pi_{i=0}^{t}\rho_i(A_i | S_i) &= \frac{\pi(A_0 | S_0) \pi(A_1 | S_1) \ldots \pi(A_t | S_t)}{{\mu}_0(A_0 | S_0) {\mu}_1(A_1 | S_1) \ldots {\mu}_t(A_t | S_t)}\\
&= \frac{\pi(A_0 | S_0) P(S_1 | S_0, A_0) \ldots P(S_t | S_{t-1}, A_{t-1}) \pi(A_t | S_t)}{{\mu}_0(A_0 | S_0) P(S_1 | S_0, A_0) \ldots P(S_t | S_{t-1}, A_{t-1}) {\mu}_t(A_t | S_t)}
\end{align*}
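To make the mechanics concrete, below is a minimal sketch of maintaining this product incrementally and pre-multiplying each update. The names, and the assumption that the TD errors and features are computed elsewhere, are illustrative.
\begin{verbatim}
import numpy as np

def prior_corrected_updates(w, steps, alpha):
    """Prior-corrected TD updates over one trajectory.

    steps: iterable of (pi_prob, mu_prob, delta, x_sa) per time step t, with
           pi_prob = pi(A_t|S_t), mu_prob = mu_t(A_t|S_t), delta the TD error,
           and x_sa = x(S_t, A_t).
    """
    rho_prod = 1.0
    for pi_prob, mu_prob, delta, x_sa in steps:
        rho_prod *= pi_prob / mu_prob   # running product of per-step ratios
        w = w + alpha * rho_prod * delta * x_sa
    return w
\end{verbatim}
The sketch also makes the failure mode plain: with a nearly deterministic $\pi$ and an exploratory ${\mu}$, the running product rapidly collapses to zero, which is exactly the issue motivating interest and emphatic weightings in Section~\ref{sec:prior_corrections}.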
\section{Algorithms}\label{app:algs}

The algorithm for Tree-Backup($\lambda$) is from \cite{precup2000eligibility}.

\begin{algorithm}[H] \caption{TB($\lambda$) Update} \label{alg:TB} \begin{algorithmic} \STATE {$\mathbf{z}_t = \gamma_t \pi(A_t|S_t) \lambda \mathbf{z}_{t-1} + \mathbf{x}(S_t, A_t)$} \STATE{$\delta_t = c_t + \gamma_{t+1} \sum_{a'} \pi(a'|S_{t+1})\hat{q}(S_{t+1}, a')- \hat{q}(S_t, A_t)$} \STATE{$\mathbf{w}_{t+1} = \mathbf{w}_t + \eta_t \delta_t \mathbf{z}_t$} \end{algorithmic} \end{algorithm}

Algorithm \ref{alg:interestTB} is the online TB with interest update. The derivation of the online update rule from the forward view is in the next section.

\begin{algorithm}[H] \caption{TB($\lambda$) with Interest Update} \label{alg:interestTB} \begin{algorithmic} \STATE {$\mathbf{z}_t = \gamma_t \pi(A_t|S_t) \lambda \mathbf{z}_{t-1} + I_t \mathbf{x}(S_t, A_t)$} \STATE{$\delta_t = c_t + \gamma_{t+1} \sum_{a'} \pi(a'|S_{t+1})\hat{q}(S_{t+1}, a')- \hat{q}(S_t, A_t)$} \STATE{$\mathbf{w}_{t+1} = \mathbf{w}_t + \eta_t \delta_t \mathbf{z}_t$} \end{algorithmic} \end{algorithm}

ETB($\lambda$) is a modified version of ETD($\lambda$) \citep{sutton2016emphatic} using TB($\lambda$) instead of TD($\lambda$). This modification relies on the correspondence between TB and TD, where TB is a version of TD with the variable trace parameter $\lambda_t = b(a_t | s_t) \lambda$ \citep{mahmood2017multi, ghiassian2018online}.

\begin{algorithm}[H] \caption{Emphatic TB($\lambda$) Update} \label{alg:ETB} \begin{algorithmic} \STATE{$F_t = \rho_{t-1} \gamma_t F_{t-1} + I_t$} \STATE{$M_t = \rho_t \Big[\lambda b(A_t| S_t) I_t + \big(1 - \lambda b(A_t|S_t) \big) F_t\Big]$} \STATE {$\mathbf{z}_t = \gamma_t \pi(A_t|S_t) \lambda \mathbf{z}_{t-1} + M_t \mathbf{x}(S_t, A_t)$} \STATE{$\delta_t = c_t + \gamma_{t+1} \sum_{a'} \pi(a'|S_{t+1})\hat{q}(S_{t+1}, a')- \hat{q}(S_t, A_t)$} \STATE{$\mathbf{w}_{t+1} = \mathbf{w}_t + \eta_t \delta_t \mathbf{z}_t$} \end{algorithmic} \end{algorithm}

\subsection{Online Interest TB Derivation}

The forward view update that uses interest at each time-step is of the form
\begin{align*}
\mathbf{w}_{t+1} = \mathbf{w}_t + \alpha I_t (G_t - \hat{q}(S_t, A_t, \mathbf{w}_t)) \nabla \hat{q}(S_t, A_t, \mathbf{w}_t).
\end{align*}
According to \citet{suttonbartobook} (page 313), ignoring the changes in the approximate value function, the TB return can be written as
\begin{equation}
G_t \approx \hat{q}(S_t, A_t, \mathbf{w}_t) + \sum_{k=t}^\infty \delta_k \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i). \label{eq:tb_return}
\end{equation}
Substituting Equation~\ref{eq:tb_return} for $G_t$ in the forward view update, we get
\begin{align*}
\mathbf{w}_{t+1} \approx \mathbf{w}_t + \alpha I_t \sum_{k=t}^\infty \delta_k \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i) \nabla \hat{q}(S_t, A_t, \mathbf{w}_t).
\end{align*}
The sum of the forward view updates over time is
\begin{align*}
\sum_{t=1}^\infty (\mathbf{w}_{t+1} - \mathbf{w}_t) &\approx \sum_{t=1}^\infty \sum_{k=1}^\infty \alpha I_t \delta_k \nabla \hat{q}(S_t, A_t, \mathbf{w}_t) \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i) \\
&= \sum_{k=1}^\infty \sum_{t=1}^k \alpha I_t \nabla \hat{q}(S_t, A_t, \mathbf{w}_t) \delta_k \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i) \\
&= \sum_{k=1}^\infty \alpha \delta_k \sum_{t=1}^k I_t \nabla \hat{q}(S_t, A_t, \mathbf{w}_t) \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i).
\end{align*}
This can be a backward-view TD update if the entire expression in the second sum can be estimated incrementally as an eligibility trace. Therefore
\begin{align*}
\mathbf{z}_k &= \sum_{t=1}^k I_t \nabla \hat{q}(S_t, A_t, \mathbf{w}_t) \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i) \\
&= \sum_{t=1}^{k-1} I_t \nabla \hat{q}(S_t, A_t, \mathbf{w}_t) \prod_{i=t+1}^k \gamma_i \lambda_i \pi(A_i|S_i) + I_k \nabla \hat{q}(S_k, A_k, \mathbf{w}_k) \\
&= \gamma_k \lambda_k \pi(A_k|S_k) \sum_{t=1}^{k-1} I_t \nabla \hat{q}(S_t, A_t, \mathbf{w}_t) \prod_{i=t+1}^{k-1} \gamma_i \lambda_i \pi(A_i|S_i) + I_k \nabla \hat{q}(S_k, A_k, \mathbf{w}_k) \\
&= \gamma_k \lambda_k \pi(A_k|S_k) \mathbf{z}_{k-1} + I_k \nabla \hat{q}(S_k, A_k, \mathbf{w}_k).
\end{align*}
Changing the index from $k$ to $t$, the accumulating trace update can be written as
\begin{align*}
\mathbf{z}_t &= \gamma_t \lambda_t \pi(A_t|S_t) \mathbf{z}_{t-1} + I_t \nabla \hat{q}(S_t, A_t, \mathbf{w}_t),
\end{align*}
leading to the incremental update for estimating $\mathbf{w}_{t+1}$.
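As a sanity check on this derivation, the resulting update is straightforward to implement. Below is a minimal sketch for the linear case, where $\nabla \hat{q} = \mathbf{x}(S_t, A_t)$; all names are illustrative.
\begin{verbatim}
import numpy as np

def interest_tb_step(w, z, x_sa, q_next_expected, c, gamma_t, gamma_next,
                     lam, pi_prob, interest, alpha):
    """One TB(lambda)-with-interest update for a linear q-hat.

    x_sa:            x(S_t, A_t)
    q_next_expected: sum_a' pi(a'|S_{t+1}) q_hat(S_{t+1}, a')
    pi_prob:         pi(A_t|S_t);  interest: I_t
    """
    z = gamma_t * pi_prob * lam * z + interest * x_sa     # trace with interest
    delta = c + gamma_next * q_next_expected - x_sa @ w   # TB TD error
    w = w + alpha * delta * z
    return w, z
\end{verbatim}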
\subsection{Auto Optimizer}

We use a variant of the Autostep optimizer throughout our experiments. Adam and RMSProp are global update-scaling methods and do not adapt step-sizes on a per-feature basis \citep{kingma2014adam}, unlike meta-descent methods like IDBD, Autostep \citep{mahmood2012tuning}, and AdaGain \citep{jacobsen2019meta}---this is critical for achieving introspective learners. Meta-descent methods like Autostep have been shown to be very effective with linear function approximation \citep{jacobsen2019meta}. The AdaGain algorithm is rather complex, requiring finite differencing, whereas Auto is a simple method that works nearly as well in practice. In our own preliminary experiments, we found Adam to be much less effective at tracking non-stationary learning targets, even when we adapted all three hyperparameters of the method. Finally, Auto can be seen as optimizing a meta-objective for the step-size, and thus is a specialization of Meta-RL to online step-size adaptation in RL. There have been attempts to apply the Autostep algorithm to TD and Sarsa [Dabney and Barto, 2012]. Auto represents another attempt to use Autostep in the reinforcement learning setting. The modifications to the Autostep algorithm come from personal communications with an author of the original work \citep{mahmood2012tuning}, on how to make it more effective in practice in the reinforcement learning setting.

\begin{algorithm}[H] \caption{Auto Update} \label{alg:AutoUpdate} {\bfseries Input:} $\delta, \boldsymbol{\phi}, \mathbf{z}$ \begin{algorithmic} \STATE $\textbf{n}_j \leftarrow \textbf{n}_j + {\tau}^{-1} \mathbf{\boldsymbol{\alpha}}_j\abs{\boldsymbol{\phi}_j} \cdot \left(\abs{\textbf{h}_j \cdot{\delta \boldsymbol{\phi}_j}} - \textbf{n}_j \right) \quad \forall j \in \{1, ..., d\}$ \FOR{\textbf{all} $i$ such that $\boldsymbol{\phi}_i \neq 0$} \STATE {$\Delta \beta_i \leftarrow \text{clip}\left(-M_\Delta,\abs{\dfrac{\textbf{h}_i \delta \boldsymbol{\phi}_i}{\textbf{n}_i}}, M_\Delta \right)$} \STATE {$\mathbf{\boldsymbol{\alpha}}_i \leftarrow \text{clip}\left(\kappa, \mathbf{\boldsymbol{\alpha}}_i e ^{\mu \Delta\beta_i}, \dfrac{1}{\abs{\boldsymbol{\phi}_i}}\right)$} \ENDFOR \IF {$\mathbf{\boldsymbol{\alpha}}^T \textbf{z} > 1$} \STATE $\mathbf{\boldsymbol{\alpha}}_i \leftarrow \min(\mathbf{\boldsymbol{\alpha}}_i, \dfrac{1}{||\textbf{z}||_1}) \quad \forall i \ \mathbf{z}_i \neq 0$ \ENDIF \STATE $\mathbf{\boldsymbol{\theta}} \leftarrow \mathbf{\boldsymbol{\theta}} + \mathbf{\boldsymbol{\alpha}} \cdot{\delta\boldsymbol{\phi}}$ \STATE $\textbf{h} \leftarrow \textbf{h}(1 - \mathbf{\boldsymbol{\alpha}} \cdot{\abs{\boldsymbol{\phi}}}) + \mathbf{\boldsymbol{\alpha}} \cdot \delta\boldsymbol{\phi}$ \end{algorithmic} \end{algorithm}

\pagebreak

where:
\begin{itemize} \setlength{\parskip}{0pt} \setlength{\itemsep}{0pt plus 1pt} \item $\mu$ is the meta-step size parameter. \item $\mathbf{\boldsymbol{\alpha}}$ is the vector of step sizes. \item $\delta$ is the scalar error. \item $\boldsymbol{\phi}$ is the feature vector. \item $\textbf{z}$ is the step-size truncation vector. \item $\mathbf{\boldsymbol{\theta}}$ is the weight vector. \item $\textbf{h}$ is the decaying trace. \item $\textbf{n}$ maintains the estimate of $\abs{\textbf{h} \cdot \delta \boldsymbol{\phi}}$. \item $\tau$ is the step-size normalization parameter. \item $M_\Delta$ is the maximum update parameter for $\mathbf{\boldsymbol{\alpha}}_i$. \item $\kappa$ is the minimum step size. \end{itemize}

In all experiments, $M_\Delta = 1$, $\tau = 10^4$, $\kappa = 10^{-6}$.
In the reinforcement learning setting, $\boldsymbol{\phi}$ is the eligibility trace, $\delta$ is the TD error, and $\textbf{z}$ is the overshoot vector. $\textbf{z}$ is calculated as $\abs{\boldsymbol{\phi}} \cdot \max(\abs{\boldsymbol{\phi}},\abs{\textbf{x} - \gamma \textbf{x}^\prime})$, where $\textbf{x}$ is the state representation at timestep $t$ and $\textbf{x}^\prime$ is the state representation at timestep $t+1$.

\section{Experiment Details}
\label{app:exp_details}

This section provides additional details about the experiments in the main body, and the additional experiments in this appendix. All the experiments in this work used a combined compute usage of approximately five CPU months.

\subsection{TMaze Details}\label{app:tmazedetails}

Tabular TMaze is a deterministic gridworld with four actions \{up, down, left, right\}. There are four GVFs being learned, and each corresponds to a goal as depicted in Figure \ref{fig_tmaze}. For GVF $i$ and the corresponding goal state $G_i$, $\pi_i$, $\gamma_i$ and $c_i$ are defined as:
\begin{itemize}
\item $\pi_i(a | s)$: deterministic policy that directs the agent towards $G_i$
\item $\gamma_i(G_i) = 0, \ \gamma_i(s) = 0.9 \ \forall s \neq G_i \in \mathcal{S}$
\item \( c_i^t(s, a, s') = \begin{cases} 0 & s' \neq G_i \\ C^t_i & s' = G_i \\ \end{cases} \)
\end{itemize}
where $C_i^t$ can follow one of the following three different, and possibly non-stationary, cumulant schedules (a minimal sketch of these schedules is given below):
\begin{itemize}
\item Constant: $C_i^t = C_i$
\item Distractor: $C_i^t = N(\mu_i, \sigma_i)$
\item Drifter: $C_i^t = C_i^{t - 1} + N(\mu_i, \sigma_i), C_i^0 = 1$
\end{itemize}
As discussed in Section \ref{sec:nonstationarity-in-learning}, the cumulants of the GVFs can be stationary or non-stationary signals. The cumulant of each GVF has a non-zero value at its respective goal. In the Tabular TMaze, the top left goal is a \textit{distractor} cumulant, which is an unlearnable noisy signal. The \textit{distractor} has $\mu = 1$ and $\sigma^2 = 25$. The cumulants corresponding to the lower left goal and upper right goal are \textit{constant} cumulants, uniformly selected at the start of each run from $[-10,10]$. The cumulant corresponding to the lower right goal is a \textit{drifter} signal with $\sigma^2=0.01$, and represents a learnable non-stationary signal.
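For concreteness, below is a minimal sketch of the three cumulant schedules as Python generators. The generator structure and seeding are illustrative assumptions; note that $\sigma$ below is a standard deviation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)  # illustrative seeding

def constant(c):                 # Constant: C_i^t = C_i
    while True:
        yield c

def distractor(mu, sigma):       # Distractor: C_i^t ~ N(mu, sigma^2)
    while True:
        yield rng.normal(mu, sigma)

def drifter(mu, sigma, c0=1.0):  # Drifter: C_i^t = C_i^{t-1} + N(mu, sigma^2)
    c = c0                       # mu = 0 for the zero-mean walk used here
    while True:
        yield c
        c += rng.normal(mu, sigma)
\end{verbatim}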
The \emph{Continuous TMaze} follows the same design as the Tabular TMaze, except it is embedded in a continuous 2D plane between 0 and 1 on both axes. Each hallway is a line with no width, allowing the agent to move along the hallway, but not perpendicular to it. The main vertical hallway spans $[0, 0.8]$ on the y-axis and is located at $x = 0.5$. The main horizontal hallway spans $[0, 1]$ on the x-axis and is located at $y = 0.8$. Finally, the two vertical side hallways span $[0.6, 1.0]$. Junctions and goal locations occupy a $2\epsilon \times 2\epsilon$ space at the end of each hallway. For example, the middle junction spans $x = 0.5 \pm \epsilon$ and $y = 0.8 \pm \epsilon$. The agent can take one of four actions \{up, down, left, right\}. The agent moves in the corresponding direction with a step size of 0.08 and noise altering movement by $\text{Uniform}(-0.01, 0.01)$. Figure \ref{fig:conttmaze-env} summarizes the environment set-up.

\begin{wrapfigure}{R}{0.4\textwidth} \includegraphics[width=0.4\textwidth]{figures/environments/ContinuousTMaze.pdf} \caption{Continuous TMaze with the 4 GVFs. S$_0$, the grey shaded region, is the uniformly weighted starting state distribution after a goal visit.} \label{fig:conttmaze-env} \end{wrapfigure}

\textbf{Reward features} As discussed in the main paper, SF-NR requires reward features $\phi(s,a,s')$. In the Tabular TMaze, the reward features for both the GVFs and the behavior learner are the tabular representation of $\phi(s,a,s')$. For the Continuous TMaze, the reward features for the SF-NR GVF learners are an indicator function for whether $s'$ in the tuple $(s,a,s')$ is in the GVF's goal state. Since the Continuous TMaze has four goals, $\phi$ for the GVF learners is a four-dimensional vector. This is a reasonable feature vector, as the reward features should be related to rewarding transitions. For GPI, it is unclear a priori what constitutes a rewarding transition. Therefore, the reward feature is the action-feature vector of state aggregation applied to the Continuous TMaze. This is a general yet compact feature representation. Each line segment of the Continuous TMaze is broken up into thirds, and state aggregation is applied to each part.

\textbf{Algorithm parameters} For the fixed behavior experiment in Tabular TMaze, the GVFs using TB($\lambda$) learners and SF-NR learners had their meta-step size swept over $[5^{-4},...,5^{0}]$ and initial step size tested over $[0.1,1.0]$. For both TB($\lambda$) and SF-NR, the optimal meta-step size was $5^{-1}$ with an initial step size of $1.0$. For the learned behavior experiment, the behavior learner and GVF learner share the same meta-step size parameter and initial step size. The meta-step size was swept over $[5^{-4},...,5^0]$ and initial step sizes were $[0.1,1.0]$. All four agents (a GPI behavior learner with TB($\lambda$) or SF-NR GVF learners, and a Sarsa behavior learner with TB($\lambda$) or SF-NR GVF learners) had an optimal meta-step size of $5^{-2}$. For agents using SF-NR GVF learners, the optimal initial step size was $1.0$. For agents using TB($\lambda$) GVF learners, the initial step size was $0.1$.

The behavior learner is optimistically initialized to ensure that the agent will visit each of the four goals at least once. To the best of our knowledge, optimistic initialization has not been tried with successor features before. To perform optimistic initialization for GPI, we initialized all successor feature estimates, $\psi$, to $\textbf{1}$. We initialized the immediate reward estimates, $\textbf{w}$, to the desired optimistic initialization threshold normalized by the number of reward features. We believe this to be an approximate version of optimistic initialization, allowing comparisons to the Sarsa agent. All behaviors used a fixed $\epsilon$ of 0.1 for the runs. The agent's performance was evaluated on the TE for the last 10\% of the runs.

For the fixed behavior experiment in Continuous TMaze, the meta-step size parameter was swept over $[5^{-3},...,5^{0}]$ and the initial step size was swept over $[0.01,0.1,0.2]$. The initial step size was then divided by the number of tilings of the tile coder to ensure proper scaling. For the fixed behavior experiment, the optimal meta-step size for SF-NR and TB($\lambda$) GVF learners was $5^{-2}$ with an initial step size of $0.2$. For the learned behavior experiment in Continuous TMaze, the behavior learners and GVF learners shared the same meta-step size parameter and initial step size. The meta-step size parameter was swept over $[5^{-4},...,5^{0}]$ and the initial step size was swept over $[0.01,0.1,0.2]$.
For GPI, the optimal initial step size for both types of GVF learners was 0.2. For SF-NR learners, the optimal meta-step size was $5^{-3}$, while it was $5^{-2}$ for the TB($\lambda$) GVF learners. For the Sarsa behavior learner, the optimal meta-step size was $5^{-2}$; the optimal initial step size was $0.2$ with SF-NR GVF learners and $0.1$ with TB($\lambda$) learners. The behavior learner was optimistically initialized and used an $\epsilon$ of $0.1$. The agent's performance was evaluated on the TE for the last 10\% of the runs. Since the intrinsic reward (weight change) is $\geq 0$, the intrinsic reward is augmented with a modest $-0.01$ reward per step to encourage the agent to seek out new experiences.

\subsection{Open 2D World Details}\label{app:2dworlddetails}

The Open 2D World is an open continuous grid world with boundaries defined by a square of dimensions $10\times10$, with goals in each of the four corners. The goals follow the same schedules as defined for the TMaze experiments, with \textit{constants} sampled from $[-10,10]$, \textit{drifter} parameters of $\sigma^2 = 0.005$ and an initial value of $1$, and \textit{distractor} parameters of $N(\mu = 1, \sigma^2 = 1)$. On each step, the agent can select between the four compass actions. This moves the agent $0.5$ units in the chosen direction, with uniform noise in $[-0.1,0.1]$. A uniform $[-0.001,0.001]$ orthogonal drift is also applied. The goals are $1 \times 1$ squares in the corners. The start state distribution is the center of the environment, $(x,y) \in [0.45, 0.55]^2$. A summary of the environment is shown in Figure \ref{fig:2d-env}.

\begin{wrapfigure}{R}{0.4\textwidth} \includegraphics[width=0.4\textwidth]{figures/environments/Open2DWorld-Simple.pdf} \caption{Open 2D World with the 4 GVFs situated at each corner. S$_0$, the grey shaded region, is the uniformly weighted starting state distribution after a goal visit.} \label{fig:2d-env} \end{wrapfigure}

Similar to the TMaze variants, the GVF policies are defined as the shortest path to their respective goal. When there are multiple actions at a state that are part of a shortest path, these actions are equally weighted. The discount for the GVFs is $\gamma = 0.95$ for all states other than the goal states.

\textbf{Reward features} The GVF reward features for SF-NR are defined similarly to the reward features in the TMaze, as described in Appendix \ref{app:tmazedetails}. Since there are four goals, $\phi \in \mathbb{R}^4$, where $\phi_i$ is the indicator for whether $s' \in G_i$. This is a reasonable feature, as it is focused on rewarding events. For the reward features of GPI, state aggregation is applied with tiles of size $(2,2)$ and is augmented to be a state-action feature.

\textbf{Algorithm parameters} The meta-step sizes for the behavior learners and the GVF learners were swept independently. The behavior learner's meta-step size was swept over $[5^{-5},...,5^0]$ while the GVF learner's meta-step size was swept over $[5^{-4},...,5^0]$. An initial step size of $0.1$, scaled down by the number of tilings, was used for all learners. The optimal meta-step size for the GVF learners that used ETB($\lambda$) was $5^{-3}$, and the meta-step size for the corresponding GPI learner was $5^{-1}$. Variance was an issue for learning successor features through ETB($\lambda$), so the emphasis was clipped at 1.
\textbf{Algorithm parameters} The meta-step sizes for the behavior learners and the GVF learners were swept independently. The behavior learner's meta-step size was swept over $[5^{-5},...,5^0]$, while the GVF learner's meta-step size was swept over $[5^{-4},...,5^0]$. An initial step size of $0.1$, scaled down by the number of tilings, was used for all learners. The optimal meta-step size for the GVF learners that used ETB($\lambda$) was $5^{-3}$, and the meta-step size for the corresponding GPI learner was $5^{-1}$. Variance was an issue when learning successor features through ETB($\lambda$), so the emphasis was clipped at 1. For the agents using TB with Interest, the optimal meta-step size for the behavior learner was $5^{-1}$ and for the GVF learners was $5^{-2}$. For the method using no prior corrections, the behavior learner used a meta-step size of $5^{-4}$ and the GVF learner used a meta-step size of $5^{-2}$. Since the intrinsic reward (weight change) is $\geq 0$, the intrinsic reward is augmented with a modest $-0.05$ reward per step to encourage the agent to seek out new experiences.

\subsection{Mountain Car Details}\label{app:mcdetails} We use the standard mountain car environment \citep{suttonbartobook}, defined through the system of equations \begin{align*} A_t &\in \{\text{Reverse}=-1, \text{Neutral}=0, \text{Throttle}=1\}\\ \dot{x}_{t+1} &= \dot{x}_t + 0.001 A_t - 0.0025 \cos(3x_t)\\ x_{t+1} &= x_t + \dot{x}_{t+1}. \end{align*} We define two GVFs as auxiliary tasks. The first GVF receives a non-zero cumulant of value $1$ when the agent reaches the left wall. It has a discount $\gamma=0.99$ that terminates when the left wall is touched, and a policy that is learned offline to maximize the cumulant. The second GVF is similar, but receives a non-zero cumulant of $1$ when reaching the top of the hill (the typical goal state). Each policy is learned offline for 500k steps on a transformed problem with a cumulant of $-1$ per step, using ESARSA($\lambda$) with $\epsilon = 0.1$. This provides a denser reward signal for learning a high-quality policy; note that the final policies for maximizing this cumulant signal and the sparse reward signal are the same. The state representation used for these policies is an independent tile coder of 16 tilings with 2 tiles per dimension. The fixed policy after offline learning for each of the GVFs is greedy with respect to the learned offline action values.

\textbf{Reward features} The reward features for the SF-NR learners are defined similarly to the reward features in the Continuous TMaze: $\phi(s,a,s')_i$ is 1 if and only if $s'$ is in the termination zone of GVF$_i$. The reward feature for GPI is a tile coder of 8 tilings with 2 tiles per dimension.

\textbf{Algorithm parameters} The SF-NR and behavior learners are optimized with stochastic gradient descent in an online learning setting. The behavior step size and GVF step size were swept independently over $[3^{-2},...,3^0]$. The values were then divided by the number of tilings to ensure proper scaling of the step size. The behavior was $\epsilon$-greedy with $\epsilon$ swept over $[0.1,0.3,0.5]$; for both GPI and Sarsa, $\epsilon = 0.1$ performed best. For agents using GPI as the behavior learner, the optimal behavior step size was $3^0$ with an optimal GVF learner step size of $3^{-2}$. For the agents using a Sarsa behavior learner, the optimal parameters were a step size of $3^0$ for the behavior learner and $3^{-1}$ for the GVF learner. For the baseline agent whose behavior was random actions, the optimal GVF step size was $3^{-1}$. Since the intrinsic reward (weight change) is $\geq 0$, the intrinsic reward is augmented with a modest $-0.01$ reward per step to encourage the agent to seek out new experiences.
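For reference, the mountain car dynamics above translate directly into code. The position and velocity bounds below are the standard ones from Sutton and Barto \citep{suttonbartobook} and are an assumption, since they are not restated in the equations:

\begin{verbatim}
import math

def mountain_car_step(x, xdot, a):
    # a in {-1, 0, 1}: Reverse, Neutral, Throttle.
    xdot = xdot + 0.001 * a - 0.0025 * math.cos(3 * x)
    xdot = max(-0.07, min(0.07, xdot))  # standard velocity bounds
    x = x + xdot
    if x <= -1.2:                       # left wall: clamp and stop the car
        x, xdot = -1.2, 0.0
    at_goal = x >= 0.5                  # hilltop at the right edge
    return x, xdot, at_goal
\end{verbatim}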
\section{Additional Experiments}

\subsection{Goal Visitation in Continuous TMaze}\label{app:gpi-tmaze} Figure \ref{fig:goal_visits_CTMaze} shows the goal visitation plots for the Continuous TMaze for GPI and Sarsa. Using either GPI or SF-NR results in significantly faster identification of the \textit{drifter} signal, and using them together results in the fastest identification. Prioritizing visits to the drifter is the preferred behavior, as it is the only learnable signal. It is important that the agent does not get confused by the \textit{distractor}, as it is not a learnable signal. \begin{figure} \caption{Goal visitation for GPI and Sarsa with the GVF learners using SF-NR and TB in the Continuous TMaze. Episode counts are shown for the first N episodes, up to the minimum number of episodes per run for each algorithm.} \label{fig:goal_visits_CTMaze} \end{figure}

\subsection{Goal Visitation in Tabular TMaze}\label{app:sarsa-tmaze} \begin{figure} \caption{Goal visitation for Sarsa with the GVF learners using SF-NR and TB in the Tabular TMaze.} \label{fig:goal_visits_TMaze} \end{figure} Figure \ref{fig:goal_visits_TMaze} shows the goal visitation plots for the Tabular TMaze.

\subsection{The Effect of Generalization in the Reward Features}\label{app:reward-features} \begin{figure} \caption{GPI with the same input features but different reward features in the Continuous TMaze environment. $\mu(GPI), \pi(SR)$ uses the reward features detailed in Appendix \ref{app:tmazedetails}. $\mu(GPI), \pi(SR)$ Tile Coding uses the reward features of a tile coder with 8 tilings of 2 tiles.} \label{fig:gpi_sensitivity} \end{figure} GPI is sensitive to the reward feature representation that is used. Figure~\ref{fig:gpi_sensitivity} shows what happens when reward features of 8 tilings with $2\times2$ tiles are used in the Continuous TMaze for GPI. The agents are swept over the same interval of hyperparameters as described in Appendix~\ref{app:tmazedetails} for the Continuous TMaze. When this reward feature was used, the optimal meta-step size was $5^{-3}$, with an initial step size of $0.2$ that was then scaled down by the number of tilings. GPI, with these reward features, was unable to learn an effective policy to reduce the TE in the Continuous TMaze. The tile-coded reward feature representation has approximately three times as many features as the handcrafted representation, yet it results in much worse performance. This highlights the need for the reward features to be learnable by an algorithm rather than being predefined. \end{document}
Epidemiological survey of echinococcosis in Tibet Autonomous Region of China

Bin Li†1, Gongsang Quzhen†1, Chui-Zhao Xue†2, Shuai Han2, Wei-Qi Chen3, Xin-Liu Yan4, Zhong-Jie Li5, M. Linda Quick6, Yong Huang7, Ning Xiao2, Ying Wang2, Li-Ying Wang2, Gesang Zuoga8, Bianba9, Gangzhu10, Bing-Cheng Ma11, Gasong12, Xiao-Gang Wei13, Niji14, Can-Jun Zheng6,15, Wei-Ping Wu3 and Xiao-Nong Zhou3

Received: 24 May 2018

Echinococcosis is prevalent in 10 provinces/autonomous regions in western and northern China. An epidemiological survey of echinococcosis in China in 2012 showed that the average prevalence in four counties of the Tibet Autonomous Region (TAR) was 4.23%, much higher than the average prevalence in China (0.24%). It is therefore important to understand the transmission risks and the prevalence of echinococcosis in humans and animals in TAR. A stratified and proportionate sampling method was used to select samples in TAR. The selected residents were examined by B-ultrasonography, and the faeces of dogs were tested for canine coproantigen against Echinococcus spp. using an enzyme-linked immunosorbent assay. The internal organs of slaughtered domestic animals were examined by visual examination and palpation. Awareness of the prevention and control of echinococcosis among residents and students was investigated using a questionnaire. All data were inputted using double entry in an Epi Info database, with error correction by double-entry comparison; the statistical analysis was performed using SPSS 21.0, the map was produced using ArcGIS 10.1, and the data were tested by the Chi-square test and the Cochran-Armitage trend test. A total of 80 384 people, 7564 dog faecal samples, and 2103 internal organs of slaughtered domestic animals were examined. The prevalence of echinococcosis in humans in TAR was 1.66%; the positive rate in females (1.92%) was significantly higher than that in males (1.41%) (χ2 = 30.31, P < 0.01); the positive rate of echinococcosis was positively associated with age (χ2trend = 423.95, P < 0.01); and the occupational groups with the highest positive rates were herdsmen (3.66%) and monks (3.48%). The average positive rate of Echinococcus coproantigen in TAR was 7.30%. The positive rate of echinococcosis in livestock for the whole region was 11.84%. The average awareness rate of echinococcosis across the region was 33.39%. A high prevalence of echinococcosis was found across the TAR, representing a very serious concern for human health. Efforts should be made to develop an action plan for echinococcosis prevention and control as soon as possible, so as to control the endemic of echinococcosis and reduce the medical burden on the population. Please see Additional file 1 for translations of the abstract into the five official working languages of the United Nations.

Echinococcosis is a zoonotic parasitic disease caused by Echinococcus spp. parasitizing humans or animals, and it has a global distribution [1, 2]. Echinococcosis is one of the 17 neglected tropical diseases recognized by the World Health Organization (WHO) and one of the neglected zoonoses that the WHO prioritizes in its support of prevention and control. According to the WHO, the global disease burden caused by echinococcosis was approximately 871 000 disability-adjusted life-years (DALYs) in 2010 [3]. In addition, cystic echinococcosis causes an annual economic loss of approximately USD 2 billion in the livestock farming industry [4].
Echinococcosis in humans presents predominantly as the cystic or alveolar type. Cystic echinococcosis (CE), caused by Echinococcus granulosus, is mainly transmitted between livestock (intermediate hosts), such as yaks and sheep, and dogs (definitive hosts). Dogs are infected after eating the viscera of diseased livestock and excrete Echinococcus eggs in their faeces, which may cause disease in livestock such as yaks and sheep after ingestion of the eggs. Alveolar echinococcosis (AE), caused by Echinococcus multilocularis, mainly circulates among rodents, foxes, and wolves. People can be infected with AE by ingesting eggs of Echinococcus multilocularis. CE is widely distributed, primarily in South America, southern and eastern Africa, Australia, the Mediterranean Basin, central Asia, and China. In some of the areas with the greatest prevalence, such as Peru, Argentina, eastern Africa, and central Asia, the prevalence in localized areas is as high as 5–10% [5]. AE is mainly constrained to the northern hemisphere; the areas with high incidence are mainly Alaska, northern and central Europe, Central Asia, Siberia, China and Japan [6, 7]. The epidemic of hydatidosis is serious in western and northern China [8–10].

The Tibet Autonomous Region (TAR) is located in the south-western part of the Qinghai-Tibet Plateau, which has an average elevation of over 4000 m. The region has seven prefectures and a total of 74 counties under its jurisdiction, with a grassland area of 650 000 ha, accounting for 56.72% of the total land area in the region. Farming and grazing yaks, sheep and other livestock are the primary livelihoods. TAR is a heavily endemic area for echinococcosis. B-ultrasonography of self-selected study subjects in Dingqing County and Dangxiong County in Tibet in 2007 showed prevalences of echinococcosis of 4.7% and 9.9% [11], respectively. Although rapid tests for echinococcosis are under development, the gold standard for diagnosis still relies on ultrasonography. However, due to the lack of technology, poor diagnostic availability, the vast territory, and inconvenient transportation in Tibet, only sporadic cases are reported by the local medical institutions, and no large-scale survey had been conducted until now. The earlier results may also have been artificially inflated due to selection bias. In 2012, a stratified random sampling method was used to screen for echinococcosis in Baqin, Cuoqin, Yadong and Nyingchi counties in Tibet, and the results showed that the average prevalence in the four counties was 4.23%.

Survey on echinococcosis prevalence in humans

The Tibet region has a total of 74 counties in seven prefectures, comprising 692 townships and 5260 villages, with a total population of 3 million. After consulting a senior statistician, the sample size was set at about 2–3% of the total population of TAR. In accordance with the main mode of production of the local residents, all villages were classified into pastoral areas (animal husbandry only), semi-pastoral and semi-farming areas (both animal husbandry and farming), farming areas (farming only), and urban areas. A stratified and proportionate sampling method was adopted, and the survey was conducted from August to November 2016. A total of 380 villages were selected in Tibet based on county population: 16 villages, eight villages, four villages and two villages were selected, respectively, in counties with populations of more than 100 000, 50 000–100 000, 10 000–50 000, and below 10 000. In each county, the villages were randomly selected, and the number of villages (or subdistricts) of each type was determined based on the proportion of the four village modes. At least 200 people in each selected village were examined at random, with the total participants in the survey accounting for 2.5% of the total population of Tibet.
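As an illustration only (not part of the original study protocol), the village-allocation rule can be written as a short Python function; the handling of the boundary values at exactly 100 000, 50 000 and 10 000 is our assumption, since the paper does not specify it:

import math  # not strictly needed; kept for a self-contained example

def villages_to_sample(county_population):
    # Villages sampled per county under the stated allocation rule.
    if county_population > 100_000:
        return 16
    elif county_population >= 50_000:
        return 8
    elif county_population >= 10_000:
        return 4
    else:
        return 2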
Population screening and case diagnosis

The survey team, with help from the local government, set up diagnostic ultrasound sites in the clinics of the local villages to organize the surveying of residents. For residents living in more rural areas, surveys were conducted at their homes. As far as possible, all residents in the surveyed villages were included in the survey. More than 200 people in each surveyed village received an abdominal B-ultrasonography examination. Cases were diagnosed and classified according to the "Diagnostic criteria for echinococcosis" of China (WS 257–2006; the standard is in line with that of the WHO), and the imaging data were saved. Serum samples were collected from the cases suspected by B-ultrasonography, and Echinococcus antibodies were tested for by enzyme-linked immunosorbent assay (ELISA; Zhuhai Hai Tai Biopharmaceutical Co., Ltd., Zhuhai, China) as an auxiliary diagnostic test; the specificity and sensitivity of this ELISA are known to be lower than needed for use as a single diagnostic test.

Survey of infections among dogs

In each of the selected 380 villages (or subdistricts), 20 households with dogs were randomly selected, guided by the village head. Only one fresh dog faecal sample was collected at random from each selected household. The collected faecal samples were stored in a freezer at −70 °C for at least 72 h to inactivate potential eggs. All dog faeces were tested for Echinococcus coproantigens by sandwich ELISA; the test kit had been validated with sensitivity and specificity over 80% (43/45 and 45/45, respectively).

Survey of echinococcosis among livestock

In the selected 380 villages, yaks, sheep or pigs were tested. During the slaughtering season (October–November), either five yaks or 10 pigs/sheep slaughtered by villagers were selected from each village for testing. A clinical examination using visual inspection and palpation of the livers, lungs and other organs of the animals was carried out by veterinarians to determine the likelihood of echinococcosis. Animals positive for echinococcosis commonly have cysts on the surface of their liver and lung lobes. The parenchyma was palpated and dissected to check for the cuticle or protoscoleces under a microscope.

Survey of knowledge regarding the prevention and control of echinococcosis

In the selected 380 villages, at least 20 local residents in each village were surveyed at random using a short questionnaire during home visits. Simultaneously, in each county we randomly surveyed 50 students in grades 4, 5 and 6 of primary school with the questionnaire. The questionnaire evaluated knowledge regarding the prevention and control of echinococcosis. Before the field survey, all investigators involved were trained in B-ultrasonography diagnosis, administering questionnaires, conducting laboratory tests and performing the diagnostic methods for recognizing echinococcosis in livestock.

Determining the surveyed subjects

For the surveyed villages, if the population of the administrative village was large, a smaller village was selected as the surveyed area.
If the population of the administrative village was too small and could not meet the requirement of 200 people in the survey, people from the adjacent village were added to the survey subjects. The diagnosis of a case of echinococcosis was confirmed, based on the imaging and the result of the serological test, by an expert team formed of Chinese clinical and imaging experts on echinococcosis. A confirmed case was a person who had a typical image of echinococcosis, or who had an atypical image but a positive serological test result.

Data processing and analysis

The positive rate of echinococcosis in humans = the number of diagnosed patients / the number of people examined × 100%. The positive rate of echinococcosis in livestock = the number of diseased animals / the number of livestock checked × 100%. The positive rate of the Echinococcus coproantigen test = the number of dogs with positive results / the number of dogs tested × 100%. The passed rate for knowledge of the prevention and control of echinococcosis = the number of people who passed the test / the number of people included in the test × 100%. Individuals passed the test if, of the five questions about the prevention and treatment of echinococcosis, three responses were correct. The prevalence of the population was calculated according to the following equation:

$$ p=\sum \limits_{j=1}\frac{n_j w_j}{N_j}=\sum \limits_{j=1} p_j w_j $$

where p is the prevalence of the population in the surveyed area, n_j is the number of patients detected in stratum j, N_j is the number of surveyed people included in stratum j, j is the rank of the stratification, w_j is the weight of the jth stratum (the proportion of the population in the jth stratum to the total population of the region), and p_j is the positive rate for the jth stratum. All data were inputted using double entry in an Epi Info database, with error correction by double-entry comparison. The statistical analysis was performed using SPSS 21.0 (IBM, New York, USA), the map was produced using ArcGIS 10.1 (ESRI, Redlands, USA), and the data were tested by the Chi-square test and the Cochran-Armitage trend test, with the significance level set at P < 0.05.
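For clarity, the weighted-prevalence formula amounts to the following short Python computation (an illustrative sketch, not the authors' code):

def stratified_prevalence(n, N, w):
    # n: patients detected per stratum; N: people surveyed per stratum;
    # w: stratum weights (population shares), which should sum to 1.
    return sum(nj / Nj * wj for nj, Nj, wj in zip(n, N, w))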
B-ultrasonography screening of echinococcosis in humans

We conducted B-ultrasonography examinations of 80 384 people in 380 villages of 74 counties in Tibet, including 34 297 males and 46 087 females; the average age was 36 years (range: 1–99). Of the individuals surveyed, 99% were ethnically Tibetan. Echinococcosis was diagnosed in 1371 patients, and the overall prevalence in the population of the 74 counties in Tibet was 1.66%. Among them, the case numbers of CE, AE and unidentified type were 1202, 153, and 16, respectively; CE cases accounted for 87.67% and AE cases for 11.16%. Among the six types of CE cases, active CL and CE1–CE2, transitional CE3 and inactive CE4–CE5 cases accounted for 36.2%, 13.7% and 50.1%, respectively (Table 1, Table 2).

[Table 1: Number of villages and number of villages selected in each county; the county list is omitted here.]

[Table 2: Prevalence of AE and CE echinococcosis across populations in Tibet Autonomous Region, 2016. CE: cystic echinococcosis; AE: alveolar echinococcosis; CI: confidence interval.]

Echinococcosis was detected in all 7 prefectures across the region, with the highest prevalence in Naqu Prefecture (3.37%) and the lowest in Shigatse Prefecture (1.11%). At the county level, the highest prevalence was found in Zuogong County of Changdu Prefecture (7.82%), while the lowest was 0.2%. The numbers of counties with prevalences of 0–0.1%, 0.1–1%, 1–2%, and 2% or over were 0, 24, 30, and 20, respectively. CE was found in all 74 counties, whereas AE was detected in 47 counties (64%) (Fig. 1, Table 2).

[Fig. 1: Prevalence distribution by county of AE and CE echinococcosis in Tibet Autonomous Region, 2016. Yellow dots represent counties where AE was found.]

Distribution by gender and age

A total of 46 087 females in Tibet were examined, and echinococcosis was detected in 886, for a positive rate of 1.92%. A total of 34 297 males were examined, and echinococcosis was detected in 485, for a positive rate of 1.41%. The proportion of cases aged 30–59 among both males and females was 55%. The highest proportions of male and female cases were in the 50–59 and 40–49 age groups, respectively (Fig. 2). Statistical testing showed that the positive rate in females was significantly higher than that in males (χ2 = 30.31, P < 0.01). Among the echinococcosis patients, the youngest was 2 years old and the oldest 93 years old, with a median age of 46 years. The positive rate of echinococcosis in both males and females increased with age (χ2trend = 423.95, P < 0.01).

[Fig. 2: Distribution of AE and CE echinococcosis across different gender and age groups in Tibet Autonomous Region, 2016.]

Distribution by occupation, education level and type of production mode

Among the occupational groups, herdsmen (3.66%) and monks (3.48%) showed higher positive rates. The positive rates of echinococcosis across populations with different education levels were significantly different (χ2 = 103.19, P < 0.01), with the positive rate being highest in the illiterate group (2.10%). The positive rates of echinococcosis across the different types of area were also significantly different (χ2 = 168.134, P < 0.01), with pastoral areas showing the highest rate (2.63%) (Table 3).

[Table 3: Positive rates of AE and CE echinococcosis among different occupations, education levels, and types of area in Tibet Autonomous Region, 2016.]
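As a check on the reported gender comparison, the chi-square statistic can be reproduced from the counts above in a few lines of Python (an illustrative sketch; without continuity correction, scipy gives a value close to the reported 30.31):

from scipy.stats import chi2_contingency

table = [[485, 34297 - 485],    # males: positive, negative
         [886, 46087 - 886]]    # females: positive, negative
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(round(chi2, 2), p)        # chi2 is approximately 30.4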
Infection in dogs

A total of 7564 faecal samples were collected from dogs in the 74 counties, with 552 positive samples detected, for an Echinococcus coproantigen positive rate of 7.30%. There were significant differences across areas (χ2 = 44.67, P < 0.01); the positive rate was highest in Naqu (11.36%) and lowest in Shigatse (4.83%). At the county level, the positive rate of Echinococcus coproantigen was highest in Baqing County of Naqu (41.30%). The positive rates of Echinococcus coproantigen in different types of area were significantly different, with the positive rate in pastoral areas being the highest, at 8.41% (Table 4).

[Table 4: Echinococcus coproantigen ELISA positive rates among dogs by prefecture and type of area in Tibet Autonomous Region, 2016.]

The prevalence in livestock

A total of 2103 livestock, including yaks, sheep and pigs, from the 74 counties in TAR were examined. All surveyed livestock had been slaughtered by local residents. The average positive rate of echinococcosis was 11.84%. The highest positive rate was in Ali Prefecture (28.82%), and the lowest in Linzhi Prefecture (0.71%). A total of 995 yaks and 1007 sheep were examined, with positive rates of 9.15% and 15.59%, respectively. The median tooth age of the slaughtered yaks was 7 years, and the maximum was 20 years; the positive rate increased with tooth age (χ2trend = 57.02, P < 0.01). Of the 1007 slaughtered sheep, the median tooth age was 5 years and the maximum was 11 years; the positive rate again increased with tooth age (χ2trend = 13.99, P < 0.01). A total of 101 pigs were examined, and one was positive (Table 5).

[Table 5: Positive rates of echinococcosis among livestock by clinical examination in Tibet Autonomous Region, 2016. −: not applicable; CI: confidence interval.]

Knowledge of prevention and control

In this survey, a total of 10 799 students (mostly boarding students) and 7279 local adult residents in Tibet were enrolled. In total, 6036 people passed the test of knowledge of the prevention and control of echinococcosis, for an awareness rate of 33.39%. The awareness rates were 38.04% for the students and 26.49% for the local adult residents, a significant difference between the two groups (χ2 = 1083.40, P < 0.01).

Echinococcosis is globally distributed, and it is estimated that 91% of the world's AE cases occur in China [12]. Echinococcosis is severely endemic in China, mainly in the western regions, including Inner Mongolia, Tibet, Gansu, Qinghai, Ningxia, Xinjiang, Sichuan, Yunnan, and Shanxi [13–15]. The results of a sampling survey in 2012 showed that the prevalence of echinococcosis in the higher-prevalence areas of China, excluding Tibet, was 0.24% [16]. In terms of the geographical environment, the Qinghai-Tibet Plateau in southern Qinghai and western Sichuan, dominated by animal husbandry, is the most endemic area for echinococcosis. B-ultrasonography of study participants in Qinghai and the Tibetan areas of Sichuan showed prevalences of CE and AE of 3.2% and 3.1%, respectively [17]. This survey used stratified sampling to identify the range and prevalence of echinococcosis in TAR. Our results showed that the prevalence of echinococcosis among people living in the region is 1.66%, which is the highest in China and much higher than the average prevalence of echinococcosis in other parts of China (0.24%).
CE was detected in all 74 counties in the region, whereas AE was found in 47 (64%) of these counties, accounting for 11.16% of cases. The whole TAR belongs to the Qinghai-Tibet Plateau, which is characterized by high elevation, a vast territory, and a large number of wild animals. The residents are mainly engaged in pastoral work, and there are many domesticated dogs and livestock on the farms. Stray dogs are ubiquitous, which is an important factor in increased transmission and the resulting high incidence of echinococcosis in this region [18, 19]. At the same time, sanitation facilities in this region, such as those supplying safe water, are lacking. Medical facilities are limited, and qualified medical and health care personnel are few. Living conditions of the residents are generally poor, and the residents have poor hygiene habits; for example, dogs are fed raw liver, lungs and other organs from slaughtered livestock. Residents know little about echinococcosis prevention and treatment (awareness rate: 26.49%). These social factors are also important and challenging issues for the long-term control of echinococcosis in TAR [20–22].

The geographic, natural and social conditions in the TAR vary greatly, and the prevalence of echinococcosis differed significantly across areas. For example, Naqu and Ali Prefectures, in the northern Tibetan Plateau, have an average elevation of more than 4500 m. These areas are the major pastoral areas of Tibet, showing prevalences of echinococcosis of 3.37% and 2.31%, respectively, ranking them as the top two areas in the region. In contrast, Shannan City is located in southern Tibet and is a major agricultural area of Tibet, with an average elevation of 3500 m and a lower prevalence of echinococcosis (1.35%).

The survey found that the positive rate among females was significantly higher than that among males (χ2 = 30.31, P < 0.01), which is broadly consistent with the results of the survey in the Sichuan Tibetan areas and some other studies [17, 23–25]. However, there was no significant difference in the positive rates of male and female children under 12 years old in this survey. This may be partly related to the higher exposure of women in TAR, who, by local social custom, are principally responsible for domestic activities. Women are in charge of all the major housework, including grazing animals, feeding dogs, milking and collecting cow dung, which increases their exposure to contamination by Echinococcus eggs in the environment, resulting in a higher risk of echinococcosis.

The results of this survey showed that the positive rates of both males and females increased with age (χ2trend = 423.95, P < 0.01), which is in line with the result of a study on volunteers in the eastern Tibetan Plateau, northwest Sichuan/southeast Qinghai, China, indicating that age and the risk of echinococcosis are related. Echinococcosis is a chronic infectious disease with a long course; symptoms may develop 5–20 years after infection, and patients may survive for many years after exposure. The elderly may have been exposed to an environment contaminated by Echinococcus eggs for an extended period of time, and the resulting cumulative risk increases with age, leading to higher prevalence among the elderly. In this survey, although the positive rate increased with age, the highest proportion of cases was aged 40–60, mainly owing to the larger number of people and the higher positive rate in this age group.
In CE, the active, transitional and inactive cases show time-related development; over time, some transitional cases gradually transform into inactive cases [26]. The cases in this survey in Tibet were sampled on the basis of population. Among the detected CE cases, the active, transitional and inactive cases accounted for 36.2%, 13.7% and 50.1%, respectively. This finding differs markedly from the results of a survey in Egypt, in which inactive CE4–5 cases accounted for 17.4% [27]. This may be related to the fact that the cases in the Egyptian survey were collected from hospitals (the lesions of CE4–5 cases tend to calcify, and such cases are less likely to be treated in a hospital), in comparison to this population-based study. TAR had not previously taken any large-scale action to prevent, control or treat echinococcosis, and the pattern of progression of the disease is still unclear. However, the results of this study may partly reflect the pattern of CE lesions under very limited prior intervention.

The domestic dog is the most important definitive host for cystic echinococcosis, and raising dogs and contact with dogs are key risk factors for human CE [28–31]. In TAR, dogs are common among herdsman families, with 62% of households raising dogs and an average of 1.3 dogs per household. There are many stray dogs in both rural and urban areas. Our study found that the positive rates of echinococcosis among herdsmen (3.66%) and monks (3.48%) were significantly higher than those of other occupational groups, such as farmers; monks commonly raise dogs around the temples. The stratified data showed that the positive rate of echinococcosis and the positive rate of Echinococcus coproantigen in pastoral areas were significantly higher than those in the semi-pastoral and semi-farming, farming, and urban areas. This indicates a correlation between the positive rate in dogs and the positive rate of echinococcosis in humans.

The present study revealed the characteristics of livestock breeding and the different prevalences among slaughtered yaks and sheep in Tibet. Most of the livestock raised in Tibet are yaks and sheep. Many local people believe in Tibetan Buddhism and thus will not kill livestock; most livestock are mainly used for milking. Large herds are a sign of prosperity, and very few livestock are slaughtered or sold; thus, livestock are usually kept for long periods of time. The survey showed a high average tooth age of slaughtered livestock, with 7 years for yaks and 5 years for sheep. This increases the livestock's risk of exposure to Echinococcus eggs and of developing echinococcosis. In this study, the surveyed livestock were locally slaughtered domestic animals, with an average positive rate of 11.84%; the echinococcosis positive rates of yaks and sheep were 9.15% and 15.59%, respectively. As unique livestock raised on the Qinghai-Tibet Plateau, yaks have shown a lower echinococcosis prevalence than sheep in several surveys, in spite of their longer life span, which may be related to characteristics of the species [32, 33]. This study showed that increasing age of yaks and sheep was associated with increased risk of echinococcosis; some other surveys have also shown this association [33–36].

Some limitations should be noted. This survey was carried out using portable B-ultrasonography to examine the subjects.
Only abdominal lesions of CE and AE could be detected, whereas lesions in the lungs, brain and other areas outside the abdomen could not; thus, the prevalence detected among people in this survey may differ from the actual one, resulting in an artificially lower numerator. The age and gender composition of the surveyed population may differ from the actual situation of the local population, because some local people work as migrant workers and might not have been included in the survey. Most of the migrant workers were men between the ages of 30 and 50 years, so the estimate of the prevalence of the population may be subject to selection bias. The survey was done during the cold season in Tibet, with snow accumulation in some areas; it was therefore difficult to investigate the presence of rodents as intermediate hosts, and consequently data on the prevalence in rodents in this region could not be fully obtained. The livestock were selectively slaughtered, so the positive rate found in this survey may differ slightly from the actual prevalence in livestock.

Echinococcosis is one of the major public health problems in TAR. It has a high prevalence, a wide range and a mixed typology of the cystic and alveolar types. This seriously threatens the health of the local people and constrains economic development in TAR. Knowledge of the prevention and treatment of echinococcosis among local people is low, and there is a need for better hygienic conditions. This, combined with the large numbers of intermediate and definitive hosts, increases the difficulty of the prevention and control of echinococcosis. Rigorous, targeted action plans are needed for the prevention and control of echinococcosis in Tibet.

Bin Li, Gongsang Quzhen and Chui-Zhao Xue contributed equally to this study.

AE: Alveolar echinococcosis; CE: Cystic echinococcosis; DALYs: Disability-adjusted life-years; ELISA: Enzyme-linked immunosorbent assay; TAR: Tibet Autonomous Region

The survey was conducted from August to November 2016 by the National Health and Family Planning Commission of China, the Tibet Health and Family Planning Commission, the China CDC and the Tibetan CDC, and was supported by the National Health and Family Planning Commission of China and the Health and Family Planning Commission of TAR. We sincerely thank all the participants involved in this investigation, including the CDCs and related hospitals in Beijing, Tianjin, Shanghai, Chongqing, Shandong, Hebei, Shaanxi, Anhui, Heilongjiang, Liaoning, Jilin, Fujian, Guangdong, Hunan, Hubei, Zhejiang and Jiangsu provinces, as well as the China CDC, the CDC of Tibet and the related hospitals at all levels in Tibet. The survey was funded by the public health project of the central government transfer payment of China. If data and materials are needed, please contact the corresponding author.

BL, GQ, C-JZ, W-PW and X-NZ participated in project design, project preparation and organization. BL, GQ, C-ZX, SH, W-QC, X-LY, Z-JL, YH, NX, YW, L-YW, GZ, B, G, B-CM, G, X-GW, N, C-JZ, W-PW and X-NZ participated in the field survey. BL, MLQ, C-JZ, W-PW and X-NZ participated in the supervision of this study. GQ, C-ZX, SH, C-JZ, W-PW and X-NZ participated in data collection and analysis. C-ZX, C-JZ and W-PW participated in paper writing and revision. All authors read and approved the final manuscript. This survey was approved by the Ethics Review Committee of the National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (No. 20160810). All study subjects received their test results.
Patients who were diagnosed with echinococcosis were provided free drug treatment or were subsidized for part of the cost of surgery. All the participants were told about the content and purpose of the examination, its complications, and the consequences and benefits. The publication of this article has been agreed to by the authors, participants and others. Prof. Xiao-Nong Zhou is the Editor-in-Chief of Infectious Diseases of Poverty.

Additional file 1: Multilingual abstracts in the five official working languages of the United Nations. (PDF 204 kb)

Tibet Autonomous Region Center for Diseases Control and Prevention, Lhasa, 850 000, Tibet Autonomous Region, China
National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention, Chinese Center for Tropical Diseases Research, WHO Collaborating Centre for Tropical Diseases, National Center for International Research on Tropical Diseases, Ministry of Science and Technology, Key Laboratory of Parasite and Vector Biology, MOH, Huangpu District, Shanghai, 200 025, China
Henan Center for Diseases Control and Prevention, Zhengzhou, 450 000, Henan, China
Yunnan Institute of Diseases Control and Prevention, Kunming, 650 000, Yunnan, China
Chinese Center for Diseases Control and Prevention, Changping, Beijing, 102 200, China
Center for Diseases Control and Prevention, Atlanta, GA 30 328, USA
Shandong Institute of Parasitic Diseases, Jining, 272 033, Shandong, China
Lhasa Center for Diseases Control and Prevention, Lhasa, 850 000, Tibet Autonomous Region, China
Shigatse Center for Diseases Control and Prevention, Sangzhuzi District, 857 000, Tibet Autonomous Region, China
Shannan Center for Diseases Control and Prevention, Shannan, 856 000, Tibet Autonomous Region, China
Linzhi Center for Diseases Control and Prevention, Linzhi, 860 000, Tibet Autonomous Region, China
Changdu Center for Diseases Control and Prevention, Changdu, 854 000, Tibet Autonomous Region, China
Naqu Center for Diseases Control and Prevention, Naqu, 852 000, Tibet Autonomous Region, China
Ali Center for Diseases Control and Prevention, Ali, 859 000, Tibet Autonomous Region, China

Xiao N, Yao JW, Ding W, Giraudoux P, Craig PS, Ito A. Priorities for research and control of cestode zoonoses in Asia. Infect Dis Poverty. 2012;2(1):16–26.
Kirigia JM, Mburugu GN. The monetary value of human lives lost due to neglected tropical diseases in Africa. Infect Dis Poverty. 2017;6(1):165–80.
Budke CM, Casulli A, Kern P, Vuitton DA. Cystic and alveolar echinococcosis: successes and continuing challenges. PLoS Neglect Trop D. 2017;11(4):54–77.
Budke CM, Deplazes P, Torgerson PR. Global socioeconomic impact of cystic echinococcosis. Emerg Infect Dis. 2006;12(2):296–303.
Craig PS, McManus DP, Lightowlers MW, Chabalgoity JA, Garcia HH, Gavidia CM, et al. Prevention and control of cystic echinococcosis. Lancet Infect Dis. 2007;7(6):385–94.
McManus DP. The molecular epidemiology of Echinococcus granulosus and cystic hydatid disease. T Roy Soc Med H. 2002;96(Suppl 1):151–7.
Craig P. Echinococcus multilocularis. Curr Opin Infect Dis. 2003;16(5):437–44.
Han XM, Cai QG, Wang W, Wang H, Zhang Q, Wang YS. Childhood suffering: hyper endemic echinococcosis in Qinghai-Tibetan primary school students, China. Infect Dis Poverty.
2018;7(1):71–80.
Liu L, Guo B, Li W, Zhong B, Yang W, Li SC, et al. Geographic distribution of echinococcosis in Tibetan region of Sichuan Province, China. Infect Dis Poverty. 2018;7(1):104–12.
Wang Q, Yu WJ, Zhong B, Shang JY, Huang L, Mastin A, Renqingpengcuo HY, Zhang GJ, He W, et al. Seasonal pattern of Echinococcus re-infection in owned dogs in Tibetan communities of Sichuan, China and its implications for control. Infect Dis Poverty. 2016;5(1):60–7.
Feng X, Qi X, Yang L, Duan X, Fang B, Gongsang Q, et al. Human cystic and alveolar echinococcosis in the Tibet autonomous region (TAR), China. J Helminthol. 2015;89(6):671–9.
Torgerson PR, Keller K, Magnotta M, Ragland N. The global burden of alveolar echinococcosis. PLoS Neglect Trop D. 2010;4(6):722.
Zhang WB, Zhang ZZ, Wu WP, Shi BX, Li J, Zhou XN, et al. Epidemiology and control of echinococcosis in Central Asia, with particular reference to the People's Republic of China. Acta Trop. 2015;141:235–43.
Wang Z, Wang X, Liu X. Echinococcosis in China, a review of the epidemiology of Echinococcus spp. EcoHealth. 2008;5(2):115–26.
Chai JJ. Echinococcosis control in China: challenges and research needs. Chin J Parasitol Parasit Dis. 2009;27(5):379–83. (in Chinese)
Wu WP. Report on the epidemiology and distribution of two types of echinococcosis in China. Chin Anim Health. 2017. https://doi.org/10.3969/j.issn.1008-4754.2017.07.002. (in Chinese)
Li TY, Chen XW, Zhen R, Qiu JM, Qiu DC, Xiao N, et al. Widespread co-endemicity of human cystic and alveolar echinococcosis on the eastern Tibetan plateau, Northwest Sichuan/Southeast Qinghai, China. Acta Trop. 2010;113(3):248–56.
Wang Q, Raoul F, Budke C, Craig PS, Xiao YF, Vuitton DA, et al. Grass height and transmission ecology of Echinococcus multilocularis in Tibetan communities, China. Chinese Med J-Peking. 2010;123(1):61–7. (in Chinese)
Giraudoux P, Pleydell D, Raoul F, Quere JP, Wang Q, Yang YR, et al. Transmission ecology of Echinococcus multilocularis: what are the ranges of parasite stability among various host communities in China. Parasitol Int. 2006;55(Suppl):237–46.
Ito A, Urbani C, Jiamin Q, Vuitton DA, Qiu DC, Heath DD, et al. Control of echinococcosis and cysticercosis: a public health challenge to international cooperation in China. Acta Trop. 2003;86(1):3–17.
Ertabaklar H, Dayanir Y, Ertug S. Research to investigate human cystic echinococcosis with ultrasound and serologic methods and educational studies in different provinces of Aydin/Turkey. Turkiye Parazitol Derg. 2012;36(3):142–6. (in Turkish)
Chahed MK, Bellali H, Touinsi H, Cherif R, Ben SZ, Essoussi M, et al. Distribution of surgical hydatidosis in Tunisia, results of 2001-2005 study and trends between 1977 and 2005. Arch Inst Pasteur Tunis. 2010;87(1–2):43–52.
Golab E, Czarkowski MP. Echinococcosis and cysticercosis in Poland in 2012. Przegl Epidemiol. 2014;68(2):279–82, 379–81.
Abdi J, Taherikalani M, Asadolahi K, Emaneini M. Echinococcosis/Hydatidosis in Ilam province, Western Iran. Iran J Parasitol.
2013;8(3):417–22.
Al-Jawabreh A, Ereqat S, Dumaidi K, Nasereddin A, Al-Jawabreh H, Azmi K, et al. The clinical burden of human cystic echinococcosis in Palestine, 2010-2015. PLoS Neglect Trop D. 2017;11(7):571–7.
Agudelo Higuita NI, Brunetti E, McCloskey C. Cystic echinococcosis. J Clin Microbiol. 2016;54(3):518–23.
Salama AA, AAO HAZ. Cystic echinococcosis in the middle region of the Nile Delta, Egypt: clinical and radiological characteristics. Egyptian J Radiol Nuclear Med. 2014;45:641–9.
Wang Q, Huang Y, Huang L, Yu WJ, He W, Zhong B, et al. Review of risk factors for human echinococcosis prevalence on the Qinghai-Tibet plateau, China: a prospective for control options. Infect Dis Poverty. 2014;3(1):3.
Mastin A, van Kesteren F, Torgerson PR, Ziadinov I, Mytynova B, Rogan MT, et al. Risk factors for Echinococcus coproantigen positivity in dogs from the Alay valley, Kyrgyzstan. J Helminthol. 2015;89(6):655–63.
Otero-Abad B, Torgerson PR. A systematic review of the epidemiology of echinococcosis in domestic and wild animals. PLoS Neglect Trop D. 2013. https://doi.org/10.1371/journal.pntd.0002249.
Liu CN, Xu YY, Cadavid-Restrepo AM, Lou ZZ, Yan HB, Li L, et al. Estimating the prevalence of Echinococcus in domestic dogs in areas highly endemic for echinococcosis. Infect Dis Poverty. 2018;7(1):77–86.
Tashani OA, Zhang LH, Boufana B, Jegi A, McManus DP. Epidemiology and strain characteristics of Echinococcus granulosus in the Benghazi area of eastern Libya. Ann Trop Med Parasitol. 2002;96(4):369–81.
Erbeto K, Zewde G, Kumsa B. Hydatidosis of sheep and goats slaughtered at Addis Ababa abattoir: prevalence and risk factors. Trop Anim Health Pro. 2010;42(5):803–5.
Christodoulopoulos G, Theodoropoulos G, Petrakos G. Epidemiological survey of cestode-larva disease in Greek sheep flocks. Vet Parasitol. 2008. https://doi.org/10.1016/j.vetpar.2008.02.002.
Bruzinskaite R, Sarkunas M, Torgerson PR, Mathis A, Deplazes P. Echinococcosis in pigs and intestinal infection with Echinococcus spp. in dogs in southwestern Lithuania. Vet Parasitol. 2009. https://doi.org/10.1016/j.vetpar.2008.11.011.
Lahmar S, Ben Chehida F, Petavy AF, Hammou A, Lahmar J, Ghannay A, et al. Ultrasonographic screening for cystic echinococcosis in sheep in Tunisia. Vet Parasitol. 2007;143(1):42–9.
Streamlines, streaklines, and pathlines Streamlines, streaklines and pathlines are field lines in a fluid flow. They differ only when the flow changes with time, that is, when the flow is not steady.[1][2] Considering a velocity vector field in three-dimensional space in the framework of continuum mechanics, we have that: • Streamlines are a family of curves whose tangent vectors constitute the velocity vector field of the flow. These show the direction in which a massless fluid element will travel at any point in time.[3] • Streaklines are the loci of points of all the fluid particles that have passed continuously through a particular spatial point in the past. Dye steadily injected into the fluid at a fixed point extends along a streakline. • Pathlines are the trajectories that individual fluid particles follow. These can be thought of as "recording" the path of a fluid element in the flow over a certain period. The direction the path takes will be determined by the streamlines of the fluid at each moment in time. • Timelines are the lines formed by a set of fluid particles that were marked at a previous instant in time, creating a line or a curve that is displaced in time as the particles move. By definition, different streamlines at the same instant in a flow do not intersect, because a fluid particle cannot have two different velocities at the same point. However, pathlines are allowed to intersect themselves or other pathlines (except the starting and end points of the different pathlines, which need to be distinct). Streaklines can also intersect themselves and other streaklines. Streamlines and timelines provide a snapshot of some flowfield characteristics, whereas streaklines and pathlines depend on the full time-history of the flow. However, often sequences of timelines (and streaklines) at different instants—being presented either in a single image or with a video stream—may be used to provide insight into the flow and its history. If a line, curve or closed curve is used as the starting point for a continuous set of streamlines, the result is a stream surface. In the case of a closed curve in a steady flow, fluid that is inside a stream surface must remain forever within that same stream surface, because the streamlines are tangent to the flow velocity. A scalar function whose contour lines define the streamlines is known as the stream function. Dye line may refer either to a streakline: dye released gradually from a fixed location during time; or it may refer to a timeline: a line of dye applied instantaneously at a certain moment in time, and observed at a later instant. Mathematical description Streamlines Streamlines are defined by[4] ${d{\vec {x}}_{S} \over ds}\times {\vec {u}}({\vec {x}}_{S})=0,$ where "$\times $" denotes the vector cross product and ${\vec {x}}_{S}(s)$ is the parametric representation of just one streamline at one moment in time. If the components of the velocity are written ${\vec {u}}=(u,v,w),$ and those of the streamline as ${\vec {x}}_{S}=(x_{S},y_{S},z_{S}),$ we deduce[4] ${dx_{S} \over u}={dy_{S} \over v}={dz_{S} \over w},$ which shows that the curves are parallel to the velocity vector. Here $s$ is a variable which parametrizes the curve $s\mapsto {\vec {x}}_{S}(s).$ Streamlines are calculated instantaneously, meaning that at one instant in time they are calculated throughout the fluid from the instantaneous flow velocity field. A streamtube consists of a bundle of streamlines, much like a communication cable.
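A streamline can be traced numerically from the defining relation above by integrating dx/ds = u(x) for a velocity field frozen at one instant (for a pathline in an unsteady flow, one would integrate dx/dt = u(x, t) instead). A minimal Python sketch, with the field, step size and step count as assumed inputs:

import numpy as np

def streamline(u, x0, ds=0.01, n_steps=1000):
    # Forward-Euler tracing of dx/ds = u(x), rescaled to unit speed.
    pts = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        v = np.asarray(u(pts[-1]), dtype=float)
        speed = np.linalg.norm(v)
        if speed == 0.0:          # stagnation point: stop tracing
            break
        pts.append(pts[-1] + ds * v / speed)
    return np.array(pts)

# Example: rigid-body rotation u = (-y, x) yields circular streamlines.
path = streamline(lambda x: (-x[1], x[0]), [1.0, 0.0])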
The equation of motion of a fluid on a streamline for a flow in a vertical plane is:[5] ${\frac {\partial c}{\partial t}}+c{\frac {\partial c}{\partial s}}=\nu {\frac {\partial ^{2}c}{\partial r^{2}}}-{\frac {1}{\rho }}{\frac {\partial p}{\partial s}}-g{\frac {\partial z}{\partial s}}$ The flow velocity in the direction $s$ of the streamline is denoted by $c$. $r$ is the radius of curvature of the streamline. The density of the fluid is denoted by $\rho $ and the kinematic viscosity by $\nu $. ${\frac {\partial p}{\partial s}}$ is the pressure gradient and ${\frac {\partial c}{\partial s}}$ the velocity gradient along the streamline. For a steady flow, the time derivative of the velocity is zero: ${\frac {\partial c}{\partial t}}=0$. $g$ denotes the gravitational acceleration. Pathlines Pathlines are defined by ${\begin{cases}\displaystyle {\frac {d{\vec {x}}_{P}}{dt}}(t)={\vec {u}}_{P}({\vec {x}}_{P}(t),t)\\[1.2ex]{\vec {x}}_{P}(t_{0})={\vec {x}}_{P0}\end{cases}}$ The suffix $P$ indicates that we are following the motion of a fluid particle. Note that at point ${\vec {x}}_{P}$ the curve is parallel to the flow velocity vector ${\vec {u}}$, where the velocity vector is evaluated at the position of the particle ${\vec {x}}_{P}$ at that time $t$. Streaklines Streaklines can be expressed as, ${\begin{cases}\displaystyle {\frac {d{\vec {x}}_{str}}{dt}}={\vec {u}}_{P}({\vec {x}}_{str},t)\\[1.2ex]{\vec {x}}_{str}(t=\tau _{P})={\vec {x}}_{P0}\end{cases}}$ where, ${\vec {u}}_{P}({\vec {x}},t)$ is the velocity of a particle $P$ at location ${\vec {x}}$ and time $t$. The parameter $\tau _{P}$, parametrizes the streakline ${\vec {x}}_{str}(t,\tau _{P})$ and $t_{0}\leq \tau _{P}\leq t$, where $t$ is a time of interest. Steady flows In steady flow (when the velocity vector-field does not change with time), the streamlines, pathlines, and streaklines coincide. This is because when a particle on a streamline reaches a point, $a_{0}$, further on that streamline the equations governing the flow will send it in a certain direction ${\vec {x}}$. As the equations that govern the flow remain the same when another particle reaches $a_{0}$ it will also go in the direction ${\vec {x}}$. If the flow is not steady then when the next particle reaches position $a_{0}$ the flow would have changed and the particle will go in a different direction. This is useful, because it is usually very difficult to look at streamlines in an experiment. However, if the flow is steady, one can use streaklines to describe the streamline pattern. Frame dependence Streamlines are frame-dependent. That is, the streamlines observed in one inertial reference frame are different from those observed in another inertial reference frame. For instance, the streamlines in the air around an aircraft wing are defined differently for the passengers in the aircraft than for an observer on the ground. In the aircraft example, the observer on the ground will observe unsteady flow, and the observers in the aircraft will observe steady flow, with constant streamlines. When possible, fluid dynamicists try to find a reference frame in which the flow is steady, so that they can use experimental methods of creating streaklines to identify the streamlines. Application Knowledge of the streamlines can be useful in fluid dynamics. The curvature of a streamline is related to the pressure gradient acting perpendicular to the streamline. The center of curvature of the streamline lies in the direction of decreasing radial pressure. 
The magnitude of the radial pressure gradient can be calculated directly from the density of the fluid, the curvature of the streamline and the local velocity. Dye can be used in water, or smoke in air, in order to see streaklines, from which pathlines can be calculated. Streaklines are identical to streamlines for steady flow. Further, dye can be used to create timelines.[6] In engineering, the observed flow patterns guide design modifications aimed at reducing drag. This task is known as streamlining, and the resulting design is referred to as being streamlined. Streamlined objects and organisms, like airfoils, streamliners, cars and dolphins, are often aesthetically pleasing to the eye. The Streamline Moderne style, a 1930s and 1940s offshoot of Art Deco, brought flowing lines to architecture and design of the era. The canonical example of a streamlined shape is a chicken egg with the blunt end facing forwards. This shows clearly that the curvature of the front surface can be much steeper than the back of the object. Most drag is caused by eddies in the fluid behind the moving object, and the objective should be to allow the fluid to slow down after passing around the object, and regain pressure, without forming eddies. The same terms have since become common vernacular to describe any process that smooths an operation. For instance, it is common to hear references to streamlining a business practice, or operation. See also • Drag coefficient • Elementary flow • Equipotential surface • Flow visualization • Flow velocity • Scientific visualization • Seeding (fluid dynamics) • Stream function • Streamsurface • Streamlet (scientific visualization) Notes and references Notes 1. Batchelor, G. (2000). Introduction to Fluid Mechanics. 2. Kundu P and Cohen I. Fluid Mechanics. 3. "Definition of Streamlines". www.grc.nasa.gov. Archived from the original on 18 January 2017. Retrieved 26 April 2018. 4. Granger, R.A. (1995). Fluid Mechanics. Dover Publications. ISBN 0-486-68356-7., pp. 422–425. 5. tec-science (2020-04-22). "Equation of motion of a fluid on a streamline". tec-science. Retrieved 2020-05-07. 6. "Flow visualisation". National Committee for Fluid Mechanics Films (NCFMF). Archived from the original (RealMedia) on 2006-01-03. Retrieved 2009-04-20. References • Faber, T.E. (1995). Fluid Dynamics for Physicists. Cambridge University Press. ISBN 0-521-42969-2. External links • Streamline illustration • Tutorial - Illustration of Streamlines, Streaklines and Pathlines of a Velocity Field (with applet) • Joukowsky Transform Interactive WebApp
How should the mixed results just summarized be interpreted vis-à-vis the cognitive-enhancing potential of prescription stimulants? One possibility is that d-AMP and MPH enhance cognition, including the retention of just-acquired information and some or all forms of executive function, but that the enhancement effect is small. If this were the case, then many of the published studies were underpowered for detecting enhancement, with most sample sizes under 50. It follows that the observed effects would be inconsistent, a mix of positive and null findings.

I can't try either of the products myself – I am pregnant and my doctor doesn't recommend it – but my husband agrees to. He describes the effect of the Nootrobox product as like having a cup of coffee but not feeling as jittery. "I had a very productive day, but I don't know if that was why," he says. His Nootroo experience ends after one capsule. He gets a headache, which he is convinced is related, and refuses to take more. "It is just not a beginner-friendly cocktail," offers Noehr.

I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS.

Another popular option is nicotine. Scientists are increasingly realising that this drug is a powerful nootropic, with the ability to improve a person's memory and help them to focus on certain tasks – though it also comes with well-documented obvious risks and side effects. "There are some very famous neuroscientists who chew Nicorette in order to enhance their cognitive functioning. But they used to smoke and that's their substitute," says Huberman.

Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $X a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5.
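The net-present-value one-liner above is just a discounted sum; the same calculation in Python (an illustration, with the 70/year figure taken from the Haskell expression):

def npv(annual_cost, rate=0.05, years=10):
    # Present value of a constant annual cost, discounted at `rate`.
    return sum(annual_cost / (1 + rate) ** n for n in range(1, years + 1))

print(npv(70))  # ~540.5, matching the $540.5 above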
but whether it works better for me than my cheap legal standbys (piracetam & caffeine)? (Does Adderall have a marginal advantage for me?) Hence, I want to know whether Adderall is better than my piracetam mix. People frequently underestimate the power of placebo effects, so it's worth testing. (Unfortunately, it seems that there is experimental evidence that people on Adderall know they are on Adderall and also believe they have improved performance when they do not [5]. So the blind testing does not buy me as much as it could.)

In the United States, people consume more coffee than fizzy drink, tea and juice combined. Alas, no one has ever estimated its impact on economic growth - but plenty of studies have found myriad other benefits. Somewhat embarrassingly, caffeine has been proven to be better than the caffeine-based commercial supplement that Woo's company came up with, which is currently marketed at $17.95 for 60 pills.

As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. (2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP.
We've talked about how caffeine affects the body in great detail, but the basic idea is that it can improve your motivation and focus by increasing catecholamine signaling. Its effects can be dampened over time, however, as you start to build a caffeine tolerance. Research on L-theanine, a common amino acid, suggests it promotes neuronal health and can decrease the incidence of cold and flu symptoms by strengthening the immune system. And one study, published in the journal Biological Psychology, found that L-theanine reduces psychological and physiological stress responses—which is why it's often taken with caffeine. In fact, in a 2014 systematic review of 11 different studies, published in the journal Nutrition Reviews, researchers found that use of caffeine in combination with L-theanine promoted alertness, task switching, and attention. The reviewers note the effects are most pronounced during the first two hours post-dose, and they also point out that caffeine is the major player here, since larger caffeine doses were found to have more of an effect than larger doses of L-theanine.

The miniaturization of electronic components has been crucial to smart pill design. As cloud computing and wireless communication platforms are integrated into the health care system, the use of smart pills for monitoring vital signs and medication compliance is likely to increase. In the long term, smart pills are expected to be an integral component of remote patient monitoring and telemedicine. As the call for noninvasive point-of-care testing increases, smart pills will become mainstream devices.

Not that everyone likes to talk about using the drugs. People don't necessarily want to reveal how they get their edge and there is stigma around people trying to become smarter than their biology dictates, says Lawler. Another factor is undoubtedly the risks associated with ingesting substances bought on the internet and the confusing legal statuses of some. Phenylpiracetam, for example, is a prescription drug in Russia. It isn't illegal to buy in the US, but the man-made chemical exists in a no man's land where it is neither approved nor outlawed for human consumption, notes Lawler.

DNB-wise, eyeballing my stats file seems to indicate a small increase: when I compare peak D4B scores, I see mostly 50s and a few 60s before piracetam, and after starting piracetam, a few 70s mixed into the 50s and 60s. Natural increase from training? Dunno - I've been stuck on D4B since June, so 5 or 10% in a week or 3 seems a little suspicious. A graph of the score series [26]:

28,61,36,25,61,57,39,56,23,37,24,50,54,32,50,33,16,42,41,40,34,33,31,65,23,36,29,51,46,31,45,52,30, 50,29,36,57,60,34,48,32,41,48,34,51,40,53,73,56,53,53,57,46,50,35,50,60,62,30,60,48,46,52,60,60,48, 47,34,50,51,45,54,70,48,61,43,53,60,44,57,50,50,52,37,55,40,53,48,50,52,44,50,50,38,43,66,40,24,67, 60,71,54,51,60,41,58,20,28,42,53,59,42,31,60,42,58,36,48,53,46,25,53,57,60,35,46,32,26,68,45,20,51, 56,48,25,62,50,54,47,42,55,39,60,44,32,50,34,60,47,70,68,38,47,48,70,51,42,41,35,36,39,23,50,46,44,56,50,39

Racetams are the best-known smart drugs on the market, and have decades of widespread use behind them. Piracetam is a leading smart drug, commonly prescribed to seniors with Alzheimer's or pre-dementia symptoms - but studies have shown Piracetam's beneficial effects extend to people of all ages, as young as university students. The Racetams speed up chemical exchange between brain cells.
Effects include increases in verbal learning, mental clarity, and general IQ. Other members of the Racetam family include Pramiracetam, Oxiracetam, and Aniracetam, which differ from Piracetam primarily in their potency, not their actual effects.

Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half-life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.

As professionals and aging baby boomers alike become more interested in enhancing their own brain power (either to achieve more in a workday or to stave off cognitive decline), a huge market has sprung up for nonprescription nootropic supplements. These products don't convince Sahakian: "As a clinician scientist, I am interested in evidence-based cognitive enhancement," she says. "Many companies produce supplements, but few, if any, have double-blind, placebo-controlled studies to show that these supplements are cognitive enhancers." Plus, supplements aren't regulated by the U.S. Food and Drug Administration (FDA), so consumers don't have that assurance as to exactly what they are getting.

In fact, some of these so-called "smart drugs" are already remarkably popular. One recent survey involving tens of thousands of people found that 30% of Americans who responded had taken them in the last year. It seems as though we may soon all be partaking - and it's easy to get carried away with the consequences. Will this new batch of intellectual giants lead to dazzling, space-age inventions? Or perhaps an explosion in economic growth? Might the working week become shorter, as people become more efficient?

Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds, on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated the ALA component of human-edible flaxseed to be around 20%. So Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil.

(In particular, I don't think it's because there's a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don't even know, piracetam was the '60s, modafinil was '70s or '80s, ALCAR was '80s AFAIK, Noopept & coluracetam were '90s, and so on.)

Perceptual–motor congruency was the basis of a study by Fitzpatrick et al.
(1988) in which subjects had to press buttons to indicate the location of a target stimulus in a display. In the simple condition, the left-to-right positions of the buttons are used to indicate the left-to-right positions of the stimuli, a natural mapping that requires little cognitive control. In the rotation condition, the mapping between buttons and stimulus positions is shifted to the right by one and wrapped around, such that the left-most button is used to indicate the right-most position. Cognitive control is needed to resist responding with the other, more natural mapping. MPH was found to speed responses in this task, and the speeding was disproportionate for the rotation condition, consistent with enhancement of cognitive control.

Regardless, while in the absence of piracetam I did notice some stimulant effects (somewhat negative - more aggressive than usual while driving) and effects similar to piracetam, I did not notice any mental performance beyond piracetam when using them both. The most I can say is that on some nights, I seemed to be less easily tired when writing or editing or n-backing (and I felt less tired at ICON 2011 than at ICON 2010), but those were also often nights I was also trying out all the other things I had gotten in that order from Smart Powders, and I am still dis-entangling what was responsible. (Probably the l-theanine or sulbutiamine.)

A key ingredient of Noehr's chemical "stack" is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule).

Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human.

Compared with those reporting no use, subjects drinking >4 cups/day of decaffeinated coffee were at increased risk of RA [rheumatoid arthritis] (RR 2.58, 95% CI 1.63-4.06). In contrast, women consuming >3 cups/day of tea displayed a decreased risk of RA (RR 0.39, 95% CI 0.16-0.97) compared with women who never drank tea. Caffeinated coffee and daily caffeine intake were not associated with the development of RA.

Sleep itself is an underrated cognition enhancer.
It is involved in enhancing long-term memories as well as creativity. For instance, it is well established that during sleep memories are consolidated, a process that "fixes" newly formed memories and determines how they are shaped. Indeed, not only does lack of sleep make most of us moody and low on energy, cutting back on those precious hours also greatly impairs cognitive performance. Exercise and eating well also enhance aspects of cognition. It turns out that both drugs and "natural" enhancers produce similar physiological changes in the brain, including increased blood flow and neuronal growth in structures such as the hippocampus. Thus, cognition enhancers should be welcomed but not at the expense of our health and well-being.

Amongst the brain focus supplements that are currently available in the nootropic drug market, Modafinil is probably the most common focus drug used by people, and it is often praised as the best nootropic available today. It is a powerful cognitive enhancer that is great for boosting your overall alertness with few side effects. However, to get your hands on this drug, you need a prescription.

Taking the tryptophan is fairly difficult. The powder as supplied by Bulk Nutrition is extraordinarily dry and fine; it seems to be positively hydrophobic. The first time I tried to swallow a teaspoon, I nearly coughed it out - the powder had seemed to explode in my mouth and go down my lungs. Thenceforth I made sure to have a mouthful of water first. After a while, I took a different tack: I mixed in as much Hericium as would fit in the container. The mushroom powder is wetter and chunkier than the tryptophan, and seems to reduce the problem. Combining the mix with chunks of melatonin inside a pill works even better.

Related to the famous -racetams but reportedly better (and much less bulky), Noopept is one of the many obscure Russian nootropics. (Further reading: Google Scholar, Examine.com, Reddit, Longecity, Bluelight.ru.) Its advantages seem to be that it's far more compact than piracetam and doesn't taste awful so it's easier to store and consume; doesn't have the cloud hanging over it that piracetam does due to the FDA letters, so it's easy to purchase through normal channels; is cheap on a per-dose basis; and it has fans claiming it is better than piracetam.

Some supplement blends, meanwhile, claim to work by combining ingredients – bacopa, cat's claw, huperzia serrata and oat straw in the case of Alpha Brain, for example – that have some support for boosting cognition and other areas of nervous system health. One 2014 study in Frontiers in Aging Neuroscience suggested that huperzia serrata, which is used in China to fight Alzheimer's disease, may help slow cell death and protect against (or slow the progression of) neurodegenerative diseases. The Alpha Brain product itself has also been studied in a company-funded small randomized controlled trial, which found Alpha Brain significantly improved verbal memory when compared to adults who took a placebo.

Besides Adderall, I also purchased on Silk Road 5x250mg pills of armodafinil. The price was extremely reasonable, 1.5btc or roughly $23 at that day's exchange rate; I attribute the low price to the seller being new and needing feedback, and offering a discount to induce buyers to take a risk on him. (Buyers bear a large risk on Silk Road since sellers can easily physically anonymize themselves from their shipment, but a buyer can be found just by following the package.)
Because of the longer active-time, I resolved to test the armodafinil not during the day, but with an all-nighter. On the plus side:
- I noticed the less-fatigue thing to a greater extent, getting out of my classes much less tired than usual. (Caveat: my sleep schedule recently changed for the saner, so it's possible that's responsible. I think it's more the piracetam+choline, though.)
- One thing I wasn't expecting was a decrease in my appetite - nobody had mentioned that in their reports. I don't like being bothered by my appetite (I know how to eat fine without it reminding me), so I count this as a plus.
- Fidgeting was reduced further.

Though coffee gives instant alertness, the effect lasts only for a short while. People who drink coffee every day may develop caffeine tolerance; this is the reason why it is still important to control your daily intake. It is advisable that an individual not consume more than 300 mg of caffeine a day. Caffeine, the world's favorite nootropic, has fewer side effects, but if consumed abnormally in excess, it can result in nausea, restlessness, nervousness, and hyperactivity. This is the reason why people who need increased sharpness would instead take L-theanine, or some other nootropic, along with caffeine. Today, you can find various smart drugs that contain caffeine in them. OptiMind, one of the best and most sought-after nootropics in the U.S., containing caffeine, is considered one of the best brain supplements for adults and kids when compared to other focus drugs present in the market today.

Two increasingly popular options are amphetamines and methylphenidate, which are prescription drugs sold under the brand names Adderall and Ritalin. In the United States, both are approved as treatments for people with ADHD, a behavioural disorder which makes it hard to sit still or concentrate. Now they're also widely abused by people in highly competitive environments, looking for a way to remain focused on specific tasks.

It was a productive hour, sure. But it also bore a remarkable resemblance to the normal editing process. I had imagined that the magical elixir coursing through my bloodstream would create towering storm clouds in my brain which, upon bursting, would rain cinematic adjectives onto the page as fast as my fingers could type them. Unfortunately, the only thing that rained down were Google searches that began with the words "synonym for"—my usual creative process.

Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. In contrast, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion.
Harrisburg, NC -- (SBWIRE) -- 02/18/2019 -- Global Smart Pills Technology Market - Segmented by Technology, Disease Indication, and Geography - Growth, Trends, and Forecast (2019 - 2023). The smart pill is a wireless capsule that can be swallowed, and with the help of a receiver (worn by patients) and software that analyzes the pictures captured by the smart pill, the physician is effectively able to examine the gastrointestinal tract. Gastrointestinal disorders have become very common, but recently there has been increasing incidence of colorectal cancer, inflammatory bowel disease, and Crohn's disease as well.

At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual, or less work done that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful).

I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here, as demonstrated by the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and high-quality interface systems such as their Next-Generation Nonsurgical Neurotechnology (N3).

Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA nootropic is to reduce the additional activity of the nerve cells and help calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropics for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors, which reduces anxiety and craving levels in the absence of addictive substances.

Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might help over a longer period like several hours or a whole day, compared to the shorter-acting nicotine gum, which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment.

Actually, researchers are studying substances that may improve mental abilities.
These substances are called "cognitive enhancers" or "smart drugs" or "nootropics". ("Nootropic" comes from Greek: "noos" = mind and "tropos" = changed, toward, turn.) The supposed effects of cognitive enhancement can be several things. For example, it could mean improvement of memory, learning, attention, concentration, problem solving, reasoning, social skills, decision making and planning.

Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shamans are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant, at normal levels of gastric pH nicotine, like other weak bases, is not significantly absorbed.

But perhaps the biggest difference between Modafinil and other nootropics like Piracetam, according to Patel, is that Modafinil studies show more efficacy in young, healthy people, not just the elderly or those with cognitive deficits. That's why it's great for (and often prescribed to) military members who are on an intense tour, or for those who can't get enough sleep for physiological reasons. One study, by researchers at Imperial College London, and published in Annals of Surgery, even showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions.

70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of:

20 March, 2x 13mg; first time, took around 11:30AM, half-life 3 hours, so halved by 2:30PM. Initial reaction: within 20 minutes, started to feel light-headed, experienced a bit of physical clumsiness while baking bread (dropped things or poured too much thrice); that began to pass in an hour, leaving what felt like a cheerier mood and less anxiety. Seems like it mostly wore off by 6PM. Redosed at 8PM. TODO: maybe take a look at the HRV data?
looks interestingly like HRV increased thanks to the tianeptine 21 March, 2x17mg; seemed to buffer effects of FBI visit 22 March, 2x 23 March, 2x 24 March, 2x 25 March, 2x 26 March, 2x 27 March, 2x 28 March, 2x 7 April, 2x 8 April, 2x 9 April, 2x 10 April, 2x 11 April, 2x 12 April, 2x 23 April, 2x 24 April, 2x 25 April, 2x 26 April, 2x 27 April, 2x 28 April, 2x 29 April, 2x 7 May, 2x 8 May, 2x 9 May, 2x 10 May, 2x 3 June, 2x 4 June, 2x 5 June, 2x 30 June, 2x 30 July, 1x 31 July, 1x 1 August, 2x 2 August, 2x 3 August, 2x 5 August, 2x 6 August, 2x 8 August, 2x 10 August, 2x 12 August: 2x 14 August: 2x 15 August: 2x 16 August: 1x 18 August: 2x 19 August: 2x 21 August: 2x 23 August: 1x 24 August: 1x 25 August: 1x 26 August: 2x 27 August: 1x 29 August: 2x 30 August: 1x 02 September: 1x 04 September: 1x 07 September: 2x 20 September: 1x 21 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 28 September: 2x 29 September: 2x 5 October: 2x 6 October: 1x 19 October: 1x 20 October: 1x 27 October: 1x 4 November: 1x 5 November: 1x 8 November: 1x 9 November: 2x 10 November: 1x 11 November: 1x 12 November: 1x 25 November: 1x 26 November: 1x 27 November: 1x 4 December: 2x 27 December: 1x 28 December: 1x 2017 7 January: 1x 8 January: 2x 10 January: 1x 16 January: 1x 17 January: 1x 20 January: 1x 24 January: 1x 25 January: 2x 27 January: 2x 28 January: 2x 1 February: 2x 3 February: 2x 8 February: 1x 16 February: 2x 17 February: 2x 18 February: 1x 22 February: 1x 27 February: 2x 14 March: 1x 15 March: 1x 16 March: 2x 17 March: 2x 18 March: 2x 19 March: 2x 20 March: 2x 21 March: 2x 22 March: 2x 23 March: 1x 24 March: 2x 25 March: 2x 26 March: 2x 27 March: 2x 28 March: 2x 29 March: 2x 30 March: 2x 31 March: 2x 01 April: 2x 02 April: 1x 03 April: 2x 04 April: 2x 05 April: 2x 06 April: 2x 07 April: 2x 08 April: 2x 09 April: 2x 10 April: 2x 11 April: 2x 20 April: 1x 21 April: 1x 22 April: 1x 23 April: 1x 24 April: 1x 25 April: 1x 26 April: 2x 27 April: 2x 28 April: 1x 30 April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 
11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x. Ran out; last day: 25 December 2017.

The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year.

My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further.

The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So \frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512! The experiment probably used up no more than an hour or two total. Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit.
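For readers who want to reproduce the value-of-information arithmetic just above, here is a minimal Python sketch (the variable names are ours; the inputs are exactly the numbers quoted in the text: a $200/year benefit, 5% discounting via ln 1.05, a 50% prior, and a 25% information-quality penalty):

```python
import math

annual_benefit = 200.0   # $/year if Adderall turned out to be worth using
discount_rate = 0.05     # 5% annual discounting, as in the text
p_useful = 0.50          # prior probability the experiment changes the decision
quality = 0.25           # penalty for a non-blind, informal experiment

# Perpetuity value of a $200/year gain discounted at 5%: 200 / ln(1.05)
npv = annual_benefit / math.log(1 + discount_rate)
value_of_information = npv * p_useful * quality
print(round(value_of_information))  # ~512, matching the figure in the text
```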
Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me.

Nondrug cognitive-enhancement methods include the high tech and the low. An example of the former is transcranial magnetic stimulation (TMS), whereby weak currents are induced in specific brain areas by magnetic fields generated outside the head. TMS is currently being explored as a therapeutic modality for neuropsychiatric conditions as diverse as depression and ADHD and is capable of enhancing the cognition of normal healthy people (e.g., Kirschen, Davis-Ratner, Jerde, Schraedley-Desmond, & Desmond, 2006). An older technique, transcranial direct current stimulation (tDCS), has become the subject of renewed research interest and has proven capable of enhancing the cognitive performance of normal healthy individuals in a variety of tasks. For example, Flöel, Rösser, Michka, Knecht, and Breitenstein (2008) reported enhancement of learning and Dockery, Hueckel-Weng, Birbaumer, and Plewnia (2009) reported enhancement of planning with tDCS.

The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously (a simulation sketch of the power arithmetic from earlier appears at the end of this section).

Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. Some natural nootropics include Ginkgo biloba and ginseng. One benefit to using natural nootropics is that they boost brain function and support brain health. They do this by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropics contain Rhodiola rosea, Panax ginseng, and more.
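As promised above, here is a small Monte Carlo check of the earlier power arithmetic (the "d=0.5 at 50% power requires only 12 pairs" figure). This sketch is not from the original text: it assumes a one-sided paired t-test on unit-SD pair differences, which is our reading of how the 50% figure was obtained; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)

def power_paired(d, n_pairs, alpha=0.05, sims=20000):
    """Monte Carlo power of a one-sided paired t-test at standardized
    effect size d (mean of pair differences / SD of pair differences)."""
    hits = 0
    for _ in range(sims):
        diffs = rng.normal(d, 1.0, n_pairs)      # simulated pair differences
        if ttest_1samp(diffs, 0.0, alternative='greater').pvalue < alpha:
            hits += 1
    return hits / sims

print(power_paired(d=0.5, n_pairs=12))  # roughly 0.5, cf. the 12-pairs claim
```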
The fact that we can choose an $n\in D$ is just the fact that $D$ is assumed nonempty. This doesn't require choice. As a heuristic, you don't need the axiom of choice to choose a sock from a pair of socks (or one sock from each pair of finitely many pairs of socks), but if I give you an infinite collection of pairs, you need some form of the axiom of choice to select one from each pair simultaneously.

You can grab one element from each set in a finite collection of sets from the axioms of set theory. If you want to grab one from infinitely many sets, you need a stronger axiom in general.

We use the axiom of choice when choosing an element from infinitely many sets 'at the same time'. Otherwise, of course, we can choose an element one by one. Here I think it is the case that we choose an element from infinitely many sets at the same time.
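To make the finite/infinite contrast in the answers above concrete, here is a sketch of the standard ZF argument (standard textbook material, not part of the original answers): if $D \neq \varnothing$, then $\exists x\,(x \in D)$ holds by the definition of nonemptiness, and existential instantiation supplies a witness; no choice principle is used. For finitely many nonempty sets $D_1, \dots, D_k$, induction on $k$ does the job: having chosen $x_1 \in D_1, \dots, x_{k-1} \in D_{k-1}$, apply existential instantiation once more to $D_k$. What ZF alone cannot do in general is produce a single function $f$ with $f(i) \in D_i$ for every $i$ in an infinite index set $I$; the assertion that $\prod_{i \in I} D_i \neq \varnothing$ for every such family of nonempty sets is precisely equivalent to the axiom of choice.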
The shape of incident shock wave in steady axisymmetric conical Mach reflection
Yu-xin Ren1 (ORCID: orcid.org/0000-0002-3047-4923), Lianhua Tan2 & Zi-niu Wu1

For internal flow with supersonic inflow boundary conditions, a complicated oblique shock reflection may occur. Different from the planar shock reflection problem, where the shape of the incident shock can be a straight line, the shape of the incident shock wave in the inward-facing axisymmetric shock reflection in steady flow is an unknown curve. In this paper, a simple theoretical approach is proposed to determine the shape of this incident shock wave. The present theory is based on the steady Euler equations. When the assumption that the streamlines are straight lines at locations just behind the incident shock is adopted, an ordinary differential equation can be derived, and the shape of the incident shock wave is given by the solution of this ordinary differential equation. The predicted curves of the incident shock wave at several inlet conditions agree very well with the results of the numerical simulations.

Understanding the characteristics of the shock waves is important in the design of supersonic vehicles. Li and Ben-Dor [1] used several examples to show the great influence of the shock waves on the operating conditions of the inlet/combustor of a hypersonic craft, on the heating loads of a blunt body, and on the initiation of the detonation in a ram accelerator. In the axisymmetric supersonic internal flows, an oblique inward-facing conical shock will steepen near the symmetry axis, which has been observed by Mölder et al. [2]. As a result, transition to Mach reflection has to occur, so that regular reflection is not possible (Hornung and Schwendeman [3]). In contrast to the planar shock reflection problem, where the shape of the incident shock can be a straight line, the shape of the incident shock wave in the inward-facing axisymmetric shock reflection is a curve of unknown shape. Therefore, to study the characteristics of the Mach reflection, one of the preconditions is to know the shape of the incident shock wave. A typical axisymmetric Mach reflection in steady flow is shown in Fig. 1.

Fig. 1 Axisymmetric Mach reflection

As pointed out by Whitham [4], the study of shock waves in problems of more than one dimension is "difficult due to the combination of two effects: the shock is adjusting to changes in the geometry (or in the medium) at the same time that it is coping with a complicated nonlinear interaction with the flow behind it". "In the more general case, if one of the effects can be dealt with fairly simply so that emphasis can be placed on the other, there is hope for an approximate theory." The theory of geometrical shock dynamics [4] is one example of this consideration, where only the geometrical effects are taken to be important. Unsteady oblique shock reflection from an axis of symmetry is studied using the theory of geometrical shock dynamics by Hornung and Schwendeman [3], and the results are compared with previous numerical simulations of the phenomenon by Hornung [5]. The shock shapes, and the location of the shock-shock, are in good agreement with the numerical results. They also fit the moving incident shock shape with a generalized hyperbola based on an analogy with the Guderley singularity in cylindrical shock implosion. However, the theory of geometrical shock dynamics is difficult to apply in steady flow.
In this paper, a simple approach to determine the shape of the incident shock wave in steady flow is proposed, based on the assumption that the streamlines are straight lines at locations just behind the incident shock. The theoretical predictions of the shape of the incident shock are compared with the numerical results [6] and good agreement is observed.

The shape of the axisymmetric incident shock wave

The nonorthogonal curvilinear coordinate

As shown in Fig. 2, at the leading edge of the conically contracting section with a half cone angle θw, there is an incident shock which connects with the Mach stem and the reflected shock at the triple point. For a point on the incident shock wave, the shock angle and the deflecting angle are denoted by β and θ respectively. Both β and θ are positive by the definition of the present analysis.

Fig. 2 Axisymmetric Mach reflection and the nonorthogonal curvilinear coordinate

The present study aims at finding the shape of the incident shock. To simplify the analysis, a nonorthogonal curvilinear ξ − η coordinate is introduced as follows:
$$ \xi =r-R(x), $$
$$ \eta =\frac{\psi \left(x,r\right)}{V_{\infty }{r}_0}, $$
where r = R(x) is the shape of the incident shock, which satisfies
$$ \frac{\mathrm{d}R}{\mathrm{d}x}=-\tan \beta, $$
and ψ(x, r) is the stream function for the axisymmetric flow, V∞ is the velocity of the uniform incoming flow, and r0 is the radius of the leading edge point, O′(x0, r0), of the conically contracting section.

The physical meaning of this curvilinear coordinate is as follows. The ξ coordinate with η = constant is a family of streamlines, which is used to facilitate the introduction of the basic assumption of the present paper. The η coordinate with ξ = constant is used to introduce the shape of the incident shock wave into the transformed governing equation so that a solvable equation for the shape of the incident shock wave can be derived. Indeed, ξ = 0 corresponds to the exact shape of the incident shock wave. We notice that the shapes of neither the incident shock wave nor the streamlines are known. However, since we are only interested in the shape of the incident shock wave, an additional assumption on the streamlines at ξ = 0 is sufficient for deriving the governing equation for the shape of the incident shock wave. Therefore, the introduction of this coordinate transform greatly simplifies the derivation of the present paper. In the next subsection, the steady Euler equations in the ξ − η coordinate will be derived. For this purpose, the metric terms of the transform (Eqs. (1) and (2)) will be presented first.

The differential relationship between the two coordinates is
$$ \left(\begin{array}{c}\mathrm{d}x\\ {}\mathrm{d}r\end{array}\right)=\left(\begin{array}{cc}{x}_{\xi }& {x}_{\eta}\\ {}{r}_{\xi }& {r}_{\eta}\end{array}\right)\left(\begin{array}{c}\mathrm{d}\xi \\ {}\mathrm{d}\eta \end{array}\right)={\left(\begin{array}{cc}{\xi}_x& {\xi}_r\\ {}{\eta}_x& {\eta}_r\end{array}\right)}^{-1}\left(\begin{array}{c}\mathrm{d}\xi \\ {}\mathrm{d}\eta \end{array}\right). $$
According to Eqs. (1) and (2), we have
$$ \left(\begin{array}{cc}{\xi}_x& {\xi}_r\\ {}{\eta}_x& {\eta}_r\end{array}\right)=\left(\begin{array}{cc}\tan \beta & 1\\ {}-\frac{rV_r}{r_0{V}_{\infty }}& \frac{rV_x}{r_0{V}_{\infty }}\end{array}\right), $$
where the second line of Eq. (5) follows from the definition of the stream function, and Vr and Vx are the two components of the velocity.
Using the fact that
$$ \tan \theta =-\frac{V_r}{V_x} $$
and introducing the notation
$$ f=\frac{rV_x}{r_0{V}_{\infty }}, $$
Eq. (5) can be written as
$$ \left(\begin{array}{cc}{\xi}_x& {\xi}_r\\ {}{\eta}_x& {\eta}_r\end{array}\right)=\left(\begin{array}{cc}\tan \beta & 1\\ {}f\tan \theta & f\end{array}\right). $$
This leads to
$$ \left(\begin{array}{cc}{x}_{\xi }& {x}_{\eta}\\ {}{r}_{\xi }& {r}_{\eta}\end{array}\right)=\frac{1}{f\left(\tan \beta -\tan \theta \right)}\left(\begin{array}{cc}f& -1\\ {}-f\tan \theta & \tan \beta \end{array}\right). $$
Thus first derivatives with respect to x and r can be transformed into the corresponding partial derivatives with respect to ξ and η by
$$ \frac{\partial }{\partial x}=\frac{\partial }{\partial \xi}\tan \beta +\frac{\partial }{\partial \eta }f\tan \theta, $$
$$ \frac{\partial }{\partial r}=\frac{\partial }{\partial \xi }+\frac{\partial }{\partial \eta }f. $$

The shape of the incident shock wave

The governing equations of the present paper are the axisymmetric steady Euler equations, which can be written as
$$ \nabla \cdot \left(\rho \mathbf{V}\right)=0, $$
$$ \left(\mathbf{V}\cdot \nabla \right)\mathbf{V}=-\frac{1}{\rho}\nabla p, $$
$$ \left(\mathbf{V}\cdot \nabla \right)S=0, $$
where V = Vxex + Vrer is the velocity vector, p is the pressure and S is the entropy. The two components of the velocity can be expressed as
$$ {V}_x=V\cos \theta, \qquad {V}_r=-V\sin \theta . $$
Substituting Eqs. (8) and (9) into Eqs. (10) – (13), we obtain the Euler equations in the ξ − η coordinate as
$$ {\displaystyle \begin{array}{l}\left(\begin{array}{cccc}1& \frac{\rho }{V}& \frac{\rho \left(\cot \theta +\tan \beta \right)}{\cot \theta \left(\tan \theta -\tan \beta \right)}& 0\\ {}0& V\sin \theta \cos \theta \left(\tan \theta -\tan \beta \right)& {V}^2{\cos}^2\theta \left(\tan \theta -\tan \beta \right)& \frac{1}{\rho}\\ {}0& -V{\cos}^2\theta \left(\tan \theta -\tan \beta \right)& {V}^2\sin \theta \cos \theta \left(\tan \theta -\tan \beta \right)& \frac{1}{\rho}\tan \beta \\ {}{a}^2& 0& 0& -1\end{array}\right)\frac{\partial }{\partial \xi}\left(\begin{array}{c}\rho \\ {}V\\ {}\theta \\ {}p\end{array}\right)\\ {}\kern10.5em =\kern0.5em \left(\begin{array}{c}-\rho f\frac{\partial \theta }{\partial \eta}\frac{\left(\cot \theta +\tan \theta \right)}{\cot \theta \left(\tan \theta -\tan \beta \right)}-\frac{\rho }{r\cot \theta \left(\tan \theta -\tan \beta \right)}\\ {}-\frac{1}{\rho}\frac{\partial p}{\partial \eta }f\\ {}-\frac{1}{\rho}\frac{\partial p}{\partial \eta }f\tan \theta \\ {}0\end{array}\right)\end{array}} $$
This is a system of partial differential equations, and it is difficult to get the shape of the incident shock by directly solving these equations. In order to overcome this difficulty, certain assumptions about the flow field behind the incident shock must be made. During the numerical simulations of the problem considered in the present paper, we find that when a steady Mach reflection can be realized in the configuration shown in Fig. 1, the streamlines just behind the incident shock have very small curvatures and can be accurately approximated by a family of straight lines. This fact is shown in Fig. 3. According to this observation, we assume in this paper that
$$ {\left.\frac{\partial \theta }{\partial \xi}\right|}_s=0, $$
where the subscript s denotes the location just behind the incident shock. Using this assumption, the problem is greatly simplified. Solving for \( \frac{\partial \theta }{\partial \xi } \) using Eq.
(14) yields
$$ \frac{\partial \theta }{\partial \xi }=\frac{\frac{1}{a^2}\frac{\partial p}{\partial \eta }f\left(\left({M}^2-1\right)\left(\tan \theta -\tan \beta \right)\right)-\rho \left({M}^2{\sin}^2\theta \left(\cot \theta +\tan \beta \right)\right)\left(f\frac{\partial \theta }{\partial \eta}\left(\cot \theta +\tan \theta \right)+\frac{1}{r}\right)}{\left(\cot \theta +\tan \beta \right)\rho \left({M}^2{\sin}^2\theta \left(\cot \theta +\tan \beta \right)\right)-\rho {M}^2{\left(\tan \theta -\tan \beta \right)}^2{\cos}^2\theta \left({M}^2-1\right)}. $$

Fig. 3 The numerical results of the axisymmetric Mach reflection

Applying Eqs. (16) and (15) at places just behind the incident shock wave, we have
$$ {\left\{\left[\frac{\partial \theta }{\partial \eta }-\frac{1}{a^2}\frac{\partial p}{\partial \eta}\frac{\left(\tan \beta -\tan \theta \right)}{\left(\tan \theta \tan \beta +1\right)}\frac{\left(1-{M}^2\right)}{\rho {M}^2}\right]\left(\cot \theta +\tan \theta \right)f+\frac{1}{r}\right\}}_s=0. $$
For simplicity, we omit the subscript s, since it is understood that the following discussion is focused on the shape of the incident shock. For a point on the incident shock wave, the deflecting angle θ and the pressure are functions of the shock angle β, i.e.
$$ \tan \theta =2\frac{\cos \beta }{\sin \beta}\frac{M_{\infty}^2{\sin}^2\beta -1}{M_{\infty}^2\left(\gamma +\cos 2\beta \right)+2}, $$
$$ \frac{p}{p_{\infty }}=1+\frac{2\gamma }{\gamma +1}\left({M}_{\infty}^2{\sin}^2\beta -1\right). $$
In Eqs. (18) and (19), γ is the ratio of specific heats. Therefore, Eq. (17) can be rewritten as
$$ \left[\frac{\partial \theta }{\partial \beta }-\frac{1}{a^2}\frac{\partial p}{\partial \beta}\frac{\left(\tan \beta -\tan \theta \right)}{\left(\tan \theta \tan \beta +1\right)}\frac{\left(1-{M}^2\right)}{\rho {M}^2}\right]\left(\cot \theta +\tan \theta \right)f\frac{\partial \beta }{\partial \eta }=-\frac{1}{r}. $$
On the incident shock curve, we have r = R(x), so that
$$ \eta \left(x,r\right)=\eta \left(x,R(x)\right)=\eta (x), $$
and subsequently
$$ {\left.\frac{\partial \beta }{\partial \eta}\right|}_{\xi =0}=\frac{\partial \beta }{\partial x}\frac{\mathrm{d}x}{\mathrm{d}\eta }=\frac{\partial \beta }{\partial x}\frac{-1}{f\left(\tan \beta -\tan \theta \right)}, $$
$$ \left[\frac{\partial \theta }{\partial \beta }-\frac{1}{a^2}\frac{\partial p}{\partial \beta}\frac{\left(\tan \beta -\tan \theta \right)}{\left(\tan \theta \tan \beta +1\right)}\frac{\left(1-{M}^2\right)}{\rho {M}^2}\right]\frac{\left(\cot \theta +\tan \theta \right)}{\left(\tan \beta -\tan \theta \right)}\frac{\mathrm{d}\beta }{\mathrm{d}x}=\frac{1}{R}. $$
According to Eq. (3), it is easy to derive
$$ \frac{{\mathrm{d}}^2R}{\mathrm{d}{x}^2}=-\left(1+{\tan}^2\beta \right)\frac{\mathrm{d}\beta }{\mathrm{d}x}. $$
Eqs. (22) and (23) can be combined to give
$$ {R}_{xx}=\frac{\left(1+{R_x}^2\right)}{R\left(\cot \theta +\tan \theta \right)}{\left[\frac{\partial \theta }{\partial \beta}\frac{1}{\left(\tan \theta +{R}_x\right)}+\frac{1}{\gamma p}\frac{\partial p}{\partial \beta}\frac{\left(1-{M}^2\right)}{M^2}\frac{1}{\left(1-{R}_x\tan \theta \right)}\right]}^{-1}. $$
The boundary conditions are
$$ {R}_{O^{\prime }}={r}_0,\kern0.5em {\left(\frac{\mathrm{d}R}{\mathrm{d}x}\right)}_{O^{\prime }}=-\tan {\beta}_w, $$
where βw is computed using Eq. (18) by setting θ = θw. Eq.
(24) is transformed into a system of first-order ordinary differential equations by the introduction of Q = Rx, which is
$$ \left\{\begin{array}{l}{R}_x=Q\\ {}{Q}_x=\frac{\left(1+{Q}^2\right)}{R\left(\cot \theta +\tan \theta \right)}{\left[\frac{\partial \theta }{\partial \beta}\frac{1}{\left(\tan \theta +{R}_x\right)}+\frac{1}{\gamma p}\frac{\partial p}{\partial \beta}\frac{\left(1-{M}^2\right)}{M^2}\frac{1}{\left(1-Q\tan \theta \right)}\right]}^{-1}.\end{array}\right. $$
Then Eq. (25) is numerically solved using the standard four-stage Runge-Kutta scheme to predict the curve of the incident shock wave. Specifically, after Q = Rx is obtained numerically, β is computed using Eq. (3); θ, ∂θ/∂β and ∂p/∂β are derived using Eqs. (18) and (19); and M is updated using the oblique shock relation
$$ {M}^2=\frac{M_{\infty}^2+\frac{2}{\gamma -1}}{\frac{2\gamma }{\gamma -1}{M}_{\infty}^2{\sin}^2\beta -1}+\frac{M_{\infty}^2{\cos}^2\beta }{\frac{\gamma -1}{2}{M}_{\infty}^2{\sin}^2\beta +1}. $$
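For concreteness, here is a minimal Python sketch of the marching procedure just described; it is an illustration under stated assumptions, not the authors' code. The flow conditions (γ = 1.4, M∞ = 2.0, θw = 10°, r0 = 0.5) are taken from the Fig. 8 case discussed below. The bracketing interval used with scipy's brentq to pick the weak-shock root of Eq. (18), the central-difference evaluation of ∂θ/∂β and ∂(ln p)/∂β (note that (1/(γp))∂p/∂β = (1/γ) d ln(p/p∞)/dβ, so only the pressure ratio of Eq. (19) is needed), and the stopping criterion are all our own choices. Near the axis the bracketed term in Eq. (25) can vanish; this is the singularity discussed in the results below.

```python
import numpy as np
from scipy.optimize import brentq

g    = 1.4                # ratio of specific heats, gamma
Minf = 2.0                # incoming-flow Mach number
thw  = np.radians(10.0)   # half cone angle theta_w
r0   = 0.5                # leading-edge radius

def theta(b):             # flow deflection angle behind the shock, Eq. (18)
    return np.arctan(2.0 / np.tan(b) * (Minf**2 * np.sin(b)**2 - 1.0)
                     / (Minf**2 * (g + np.cos(2.0 * b)) + 2.0))

def p_ratio(b):           # pressure ratio p/p_inf across the shock, Eq. (19)
    return 1.0 + 2.0 * g / (g + 1.0) * (Minf**2 * np.sin(b)**2 - 1.0)

def mach(b):              # Mach number just behind the shock, Eq. (26)
    s2 = Minf**2 * np.sin(b)**2
    return np.sqrt((Minf**2 + 2.0 / (g - 1.0)) / (2.0 * g / (g - 1.0) * s2 - 1.0)
                   + Minf**2 * np.cos(b)**2 / (0.5 * (g - 1.0) * s2 + 1.0))

# Weak-shock angle beta_w at the leading edge: solve Eq. (18) with theta = theta_w.
# The bracket [Mach angle, 1.1 rad] suffices for the small cone angles used here.
mu     = np.arcsin(1.0 / Minf)
beta_w = brentq(lambda b: theta(b) - thw, mu + 1e-6, 1.1)

def rhs(x, y):
    """Right-hand side of the first-order system, Eq. (25): y = (R, Q)."""
    R, Q = y
    b = np.arctan(-Q)                              # Eq. (3): dR/dx = -tan(beta)
    th, M = theta(b), mach(b)
    e = 1e-6                                       # finite-difference step in beta
    dth  = (theta(b + e) - theta(b - e)) / (2.0 * e)
    dlnp = (np.log(p_ratio(b + e)) - np.log(p_ratio(b - e))) / (2.0 * e)
    bracket = (dth / (np.tan(th) + Q)
               + dlnp / g * (1.0 - M**2) / M**2 / (1.0 - Q * np.tan(th)))
    Qx = (1.0 + Q**2) / (R * (1.0 / np.tan(th) + np.tan(th))) / bracket
    return np.array([Q, Qx])

# Classical four-stage Runge-Kutta march downstream from the leading edge.
x, h  = 0.0, 1e-4
y     = np.array([r0, -np.tan(beta_w)])
curve = [(x, y[0])]
while y[0] > 0.05 * r0 and np.isfinite(y).all():   # stop near the axis/singularity
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5 * h, y + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h, y + 0.5 * h * k2)
    k4 = rhs(x + h, y + h * k3)
    y  = y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    x += h
    curve.append((x, y[0]))
```

As a quick sanity check, at the leading edge this gives Qx < 0, i.e. the shock angle β grows as the shock approaches the axis, consistent with the steepening described in the introduction.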
Results and discussions
In order to validate the present analysis, the shapes of the incident shock waves predicted by solving Eq. (24) are compared with those obtained from the numerical simulations. The numerical method in the simulations is the finite volume scheme based on the rotated Riemann solver proposed by Ren (2003) [7]. The shapes of the shock waves are extracted from the numerical results using the method of Tan et al. [8]. The shapes of the incident shock waves are predicted using the present theory and the numerical simulation for several combinations of incoming-flow Mach number M∞ and cone half angle θw listed in Table 1. Referring to Fig. 1, r0 = 0.5 is the leading edge radius, and w is the length of the contracting section. The shapes of the incident shock waves are displayed in Figs. 4, 5, 6 and 7. It is seen that the theoretical curves agree very well with the simulated curves in each test case. This indicates that the hypothesis ∂θ/∂ξ = 0 is reasonable for the given flow conditions. It is observed in Figs. 4, 5, 6 and 7 that when r is large enough (close to r0), the discrepancy between the numerical and theoretical curves becomes more noticeable; this is possibly due to the numerical viscosity of the numerical scheme, which leads to inaccuracy in βw in the numerical results. It is also observed that when r is smaller (close to the axis of symmetry), there are singularities in the theoretical predictions, so that a smooth curve of the incident shock wave does not exist all the way to the axis of symmetry. We think this phenomenon helps to explain the fact that regular shock reflection does not exist at the axis of symmetry in the axisymmetric supersonic internal flows [3].

Table 1 The flow conditions for the four additional test cases

Fig. 4 The theoretical and simulated shapes of the incident shock wave when M∞ = 1.8 and θw = 10.0°

Equation (24) indicates that the shape of the incident shock is determined by the incoming-flow conditions and the half cone angle θw. To verify this conclusion, numerical simulations are conducted for two flows with the same incoming-flow conditions and half cone angle θw but with different aspect ratios. Here the aspect ratio is defined as w/r0, where w is the diagonal length of the conically contracting section. In Fig. 8, two simulated incident shock curves are compared with the theoretical curve. The incoming-flow Mach number is 2.0, the wedge angle is 10.0°, and the aspect ratios are 0.6 and 0.4 respectively. It is shown that the two simulated shapes of the incident shocks are not affected by the aspect ratio and are both in good agreement with the theoretical curve.

Fig. 8 The theoretical and simulated shapes of the incident shock wave when M∞ = 2.0, θw = 10.0°. The aspect ratio is w/r0 = 0.6 and w/r0 = 0.4 respectively

Conclusions
In this paper, a theoretical method to predict the shape of the incident shock in steady axisymmetric Mach reflection is proposed. A nonorthogonal curvilinear ξ − η coordinate is introduced to simplify the analysis. The basic assumption of the present paper is that the streamlines just behind the incident shock wave can be approximated by straight lines, which is strongly supported by the numerical simulations. Under this assumption, the basic flow equations are simplified to an ordinary differential equation whose solution gives the shape of the incident shock directly. The theoretical curves of the incident shock waves agree very well with the simulated ones. It is found that the shape of the incident shock wave is related only to the incoming-flow Mach number and the half cone angle.

Parts of the data and materials are available upon request.

References
1. Li H, Ben-Dor G (1997) A parametric study of Mach reflection in steady flows. J Fluid Mech 341:101–125
2. Mölder et al (1997) Focusing of conical shocks at the center-line of symmetry. In: Houwing AFP, Paull A (eds) Shock Waves. Panther Publishing and Printing, Canberra, pp 875–880
3. Hornung HG, Schwendeman DW (2001) Oblique shock reflection from an axis of symmetry: shock dynamics and relation to the Guderley singularity. J Fluid Mech 438:231–245
4. Whitham GB (1974) Linear and nonlinear waves. Wiley-Interscience, London
5. Hornung HG (2000) Oblique shock reflection from an axis of symmetry. J Fluid Mech 409:1–12
6. Ren YX et al (2005) On the characteristics of the Mach stem. Modern Phys Lett B 19(28 & 29):1511–1514
7. Ren YX (2003) A robust shock capturing scheme based on the rotated Riemann solver. Computers & Fluids 32:1379–1403
8. Tan LH, Ren YX, Wu ZN (2006) Analytical and numerical study of the near flow field and shape of the Mach stem in steady flows. J Fluid Mech 546:341–362

This work is supported by Grant 2016YFA0401200 of the National Key Research and Development Program of China and the National Numerical Wind Tunnel project.

1 School of Aerospace Engineering, Tsinghua University, Beijing, 100084, China (Yu-xin Ren & Zi-niu Wu)
2 Beijing Key Laboratory of Civil Aircraft Design and Simulation Technology, Beijing Aircraft Technology Research Institute, Commercial Aircraft Corporation of China, Ltd., Beijing, 102211, China (Lianhua Tan)

The methodology is proposed by the first and the third authors, the derivation of the solution is done by the first author, and the numerical simulation and comparison with theory are carried out by the second author. The author(s) read and approved the final manuscript. Correspondence to Yu-xin Ren.

This paper is submitted to AIA for the consideration of publication. The content of the manuscript has not been published, or submitted for publication elsewhere.

Ren, Y.-x., Tan, L. & Wu, Z.-n. The shape of incident shock wave in steady axisymmetric conical Mach reflection. Adv. Aerodyn. 2, 24 (2020). https://doi.org/10.1186/s42774-020-00047-6

Keywords: Axisymmetric shock reflection; The shape of incident shock wave; Euler equations
\begin{document} \date{} \author{Murat SARIKAYA and Erdal ULUALAN} \title{On Regular Representation of 2-Crossed Modules and Cat$^2$-Groups} \maketitle \begin{abstract} In this paper, we describe a regular representation, given by Cayley's theorem, for 2-crossed modules of groups and their associated Gray 3-(group) groupoids with a single 0-cell, or equivalently cat$^2$-groups. \textbf{Keywords:} 2-Groupoid, Gray 3-groupoids, cat$^2$-group, 2-crossed module, homotopy, chain complex. \textbf{AMS Classification:} 18B40, 18G45, 20C99, 55U15, 55U35, 20L05. \end{abstract} \section*{Introduction} The main aim of representation theory for groups is to find an appropriate group of linear transformations or permutations with the same structure as a given, abstract, group, see \cite{Ledermann}. A linear representation of a group $G$ is a homomorphism of groups $\phi$ from $G$ to $GL(V)$, where $GL(V)$ is the general linear group of linear isomorphisms $V\rightarrow V$ for a vector space $V$. Then, the representation $\phi$ assigns to any element $g$ of $G$ a linear isomorphism $\phi_g:V\rightarrow V$. A permutation representation of a group $G$ is related to actions of $G$ on a set $X$. An action of $G$ on $X$ gives a representation by mapping each $g\in G$ to the permutation $\phi(g)\in S_X$ that takes $x\in X$ to $g\cdot x$. Conversely, if $\phi:G \rightarrow S_X$ is a representation, then $G$ acts on $X$ by $g\cdot x:=\phi_g(x)$, where $\phi_g$ is a permutation of $X$. Linear matrix representations can be given as homomorphisms of $G$ into $GL_n(K)$ for a field $K$, assigning an invertible matrix to each element of $G$. Any group $G$ can be thought of as a category with a single object ${*}$ and invertible morphisms, so a functor from $G$ to the category of sets corresponds to a representation of $G$. The notions of cat$^1$-groups \cite{Loday}, \cite{WL} and crossed modules \cite{W} are structures equivalent to 2-groups (cf. \cite{BL}), which are 2-dimensional generalisations of groups. These structures are algebraic models for (connected) homotopy 2-types. This suggests that the linear and permutation representations of a group $G$ can be generalised to these 2-dimensional versions of groups. Thus, representations of crossed modules, or equivalently cat$^1$-groups, should be 2-functors into a suitable category. To find this suitable category with the same structure as a given, abstract, cat$^1$-group, Barker \cite{Barker} investigated the 2-category $\mathbf{Ch}^1_K$ of length-1 chain complexes of vector spaces over a fixed field $K$. This category $\mathbf{Ch}^1_K$ has a 2-groupoid structure. Therefore, a linear representation of a cat$^1$-group $\mathfrak{C}$ is a 2-functor $\mathfrak{C} \rightarrow \mathbf{Ch}^1_K$. The functorial image of $\mathfrak{C}$ lies within a sub-2-groupoid of $\mathbf{Ch}^1_K$ with a single object. This sub-2-groupoid $\mathbf{Aut(\delta)}$, called the automorphism cat$^1$-group, is obtained by considering only the invertible chain maps over a single chain complex of length-1, $\delta:C_1 \rightarrow C_0$, of vector spaces, and it has a cat$^1$-group structure. Thus, the 0-cell in $\mathbf{Aut(\delta)}$ is just the chain complex $\delta$ of length-1, the 1-cells in $\mathbf{Aut(\delta)}$ are chain maps $\delta \rightarrow \delta$, and the 2-cells are 1-homotopies between chain maps.
Just as a linear representation of a group $G$ realises that group as a subgroup of $GL(V)$, a linear representation of a cat$^1$-group $\mathfrak{C}$ realises it as a sub-cat$^1$-group $\mathbf{Aut(\delta)}$ of $\mathbf{Ch}^1_K$. A common approach to representations of groups is via modules over a group or an algebra \cite{Burrow}, \cite{Curtis}, \cite{Dornhoff}. Linear representations of a group $G$ are in one-to-one correspondence with modules over its group algebra $K(G)$, see \cite{Barker}, where $K$ is the group algebra functor from the category of groups to that of algebras. Since a cat$^1$-group is a generalisation of a group, there should be a notion of cat$^1$-group algebra. For any cat$^1$-group $\mathfrak{C}$, by applying the group algebra functor $K$ to $\mathfrak{C}$, Barker first obtained the structure $K(\mathfrak{C})$ as a pre-cat$^1$-algebra. In order to construct a cat$^1$-algebra from $K(\mathfrak{C})$, it is necessary to impose some relations so that the kernel condition is satisfied for the structural homomorphisms in $K(\mathfrak{C})$. Barker found suitable expressions in $K(\mathfrak{C})$, and by factoring $K(\mathfrak{C})$ by the ideal $J$ generated by these expressions, he showed that every cat$^1$-group $\mathfrak{C}$ has an associated cat$^1$-group algebra $\overline{K(\mathfrak{C})}$. This structure is obtained by first applying the group algebra functor to $\mathfrak{C}$ and then factoring the algebra of 1-cells by the ideal $J$, in order to introduce the relations necessary for the kernel condition to hold in $\overline{K(\mathfrak{C})}$. To construct a regular representation given by Cayley's theorem for cat$^1$-groups, Barker considered the chain complex $\overline{\delta}$ of length-1 obtained from $\overline{K(\mathfrak{C})}$ and defined the structure $\mathbf{Aut(\overline{\delta})}$ as a sub-cat$^1$-group algebra of $\overline{K(\mathfrak{C})}$. Therefore, a regular representation of a cat$^1$-group $\mathfrak{C}$ is a 2-functor $$\mathbf{\rho}:\mathfrak{C}\longrightarrow\mathbf{Aut(\overline{\delta})}.$$ Consequently, Barker's result can be summarized pictorially as $$ \begin{tikzcd} \mathfrak{C} \arrow[r,"K"] \arrow[ddrr,dashed,"\rho"," \mathrm{(regular \ representation)}"{sloped,below=0.0ex,xshift=-0.0em}] &K(\mathfrak{C}) \arrow[r,twoheadrightarrow,"q"] &\overline{K(\mathfrak{C})} \arrow[d] \\ & &\ \ \ \ \overline{\delta} \in \mathbf{Ch}^1_K \arrow[d] \\ & &\mathbf{Aut(\overline{ \delta})} \end{tikzcd} $$ where $\mathfrak{C}$ is any cat$^1$-group, $K$ is the group algebra functor, $q$ is a quotient functor from pre-cat$^1$-algebras to cat$^1$-algebras, and $\rho$ is the right regular representation of $\mathfrak{C}$. Note that the functor $\rho$ defined in \cite{Barker} is contravariant in the horizontal direction but covariant in the vertical direction. Thus, this functor can be regarded as a functor $\mathfrak{C}^{op} \rightarrow \mathbf{Aut}(\overline\delta)\leqslant \mathbf{Ch}^1_K$, using the convention given in \cite{Kelly} that any 2-category $\mathfrak{C}$ has a dual $\mathfrak{C}^{op}$ which reverses only the 1-cells and hence affects both 1-cell composition and horizontal 2-cell composition. There is another dual $\mathfrak{C}^{co}$, which reverses only the 2-cells and hence affects vertical 2-cell composition. Therefore, with these constructions, one obtains a cat$^1$-group version of Cayley's theorem in terms of linear regular representations.
In \cite{Elgueta}, Elgueta obtained an alternative representation of 2-groups in the 2-category of finite dimensional 2-vector spaces (over a field $K$) as defined by Kapranov and Voevodsky \cite{Kapranov}. The notion of 2-vector space introduced by these authors is different from that of Baez and Crans \cite{Baez1}. It was proven in \cite{Baez1} that the category of 2-vector spaces is equivalent to the category of 2-term chain complexes of vector spaces, which is of course none other than $\mathbf{Ch}^1_K$. Thus, Barker's result is related to the definition of Baez and Crans, and Elgueta's result is related to that of Kapranov and Voevodsky. Gray, in his lecture notes, developed the notion of the tensor product for 2-categories (cf. \cite{Gray}). Restricting this construction to 2-groupoids gives a basic example of a monoidal category of 2-groupoids with monoidal structure given by Gray's tensor product. Joyal and Tierney \cite{JT} proved that Gray groupoids model homotopy 3-types. As an alternative algebraic model for homotopy 3-types, Conduch{\'e} \cite{Con} defined 2-crossed modules and showed how to obtain a 2-crossed module from a simplicial group. For this construction in terms of the Carrasco-Cegarra pairing operators (cf. \cite{cc}), see also \cite{mutpor2, mutpor3}. 2-Crossed modules are also equivalent to the notions of crossed squares introduced by Loday and Guin-Walery in \cite{WL}, and braided regular crossed modules introduced by Brown and Gilbert in \cite{BG}. For these connections, see also \cite{AU1} and \cite{Con1}. A connection between 2-crossed modules of group(oid)s and Gray 3-groupoids was established by Kamps and Porter in \cite{KP}. Using a different method, Martins and Picken \cite{Martins} gave the relationship between 2-crossed modules of groups and Gray 3-groupoids with a single 0-cell. This construction can also be found in Al-asady's Ph.D. thesis (cf. \cite{Jinan}). See also \cite{Wang}. Kamps and Porter \cite{KP} proved that the category of chain complexes of length-2 of vector spaces over a field $K$, $\mathbf{Ch}^2_K$, has a Gray category structure. In this construction, 0-cells of $\mathbf{Ch}^2_K$ are chain complexes of length-2, 1-cells are chain maps, 2-cells are 1-homotopies and 3-cells are 2-homotopies between 1-homotopies. For any chain complex of length-2 within $\mathbf{Ch}^2_K$, $ \begin{tikzcd} \delta:=C_{2}\ar[r,"{\delta_2}"]&C_{1}\ar[r,"{\delta_{1}}"]& C_{0} \end{tikzcd} $ of linear transformations, the structure $\mathbf{Aut(\delta)}$, as a cat$^2$-group, was introduced in \cite{Jinan}. This structure is in fact a Gray 3-groupoid with the single object set $\{\delta\}$. Using the equivalence between crossed squares and cat$^2$-groups (cf. \cite{WL}), Al-asady, in \cite{Jinan}, constructed a linear representation of a cat$^2$-group $\mathfrak{C}^2$ as a lax 3-functor $\mathfrak{C}^2 \rightarrow \mathbf{Aut(\delta)}\leqslant \mathbf{Ch}^2_K$. In this construction, the 0-cell of $\mathbf{Aut(\delta)}$ is $\delta$, 1-cells are chain maps $F:\delta \rightarrow \delta$, 2-cells are 1-homotopies $(H,F):F \Rightarrow G$, and 3-cells are 2-homotopies $(\alpha,H,F):(H,F)\Rrightarrow(K,F)$. Therefore, by generalising the 2-category $\mathbf{Ch}^1_K$ of length-1 chain complexes to the Gray category $\mathbf{Ch}^2_K$ of length-2 chain complexes, Al-asady gave the linear representation of a cat$^2$-group $\mathfrak{C}^2$.
In this paper, we give the regular representation of a cat$^2$-group, or a Gray 3-groupoid with a single object $*$, $\mathfrak{C}^2$, obtained from a 2-crossed module $\mathfrak{X}$. By applying the group algebra functor to $\mathfrak{C}^2$, we obtain the structure $K(\mathfrak{C}^2)$. We see that $K(\mathfrak{C}^2)$ is certainly a pre-cat$^2$-algebra, but not a cat$^2$-algebra. To make it a cat$^2$-algebra, we need to impose some relations on $K(\mathfrak{C}^2)$ so that the kernel condition is satisfied. By defining the ideals $J_2$ and $J_1$ in $K(\mathfrak{C}^2)$ generated by these relations and factoring by them, we obtain a Gray 3-group algebra groupoid with a single object, or a cat$^2$-group algebra, $\overline{K(\mathfrak{C}^2)}$. We construct a regular representation of $\mathfrak{C}^2$ as a 3-functor from $\mathfrak{C}^2$ to $\mathbf{Aut(\overline{ \delta})}$, where $ \begin{tikzcd} \overline{\delta}:=\mathrm{K}_3\ar[r,"\overline{\tau}_3"]&\mathrm{K}_2\ar[r,"\overline{\tau}_2"]& \mathrm{K}_1 \end{tikzcd} $ is the chain complex of length-2 obtained from $\overline{K(\mathfrak{C}^2)}$. Thus, the regular representation of $\mathfrak{C}^2$ is a 3-functor $\lambda:\mathfrak{C}^2 \rightarrow \mathbf{Aut(\overline{\delta})}\leqslant \mathbf{Ch}^2_K$. In the construction of this functor, for the 0-cell $*$ in $\mathfrak{C}^2$, we obtain that $\lambda_*$ is $\overline{\delta}$; for a 1-cell $n$ in $\mathfrak{C}^2$, we obtain that $\lambda_n$ is a chain map $\overline{\delta} \rightarrow \overline{\delta}$; similarly, for a 2-cell $(m,n)$ in $\mathfrak{C}^2$, we have that $\lambda_{m,n}$ is a 1-homotopy from $\lambda_n$ to $\lambda_{\partial_1mn}$; and for a 3-cell $(l,m,n)$ in $\mathfrak{C}^2$, we have that $\lambda_{l,m,n}$ is a 2-homotopy from $\lambda_{m,n}$ to $\lambda_{\partial_2lm,n}$. Thus, our results can be summarised pictorially as $$ \begin{tikzcd} {\mathfrak{C}^2} \arrow[r,"K"] \arrow[ddrr,dashed,"\lambda","\mathrm{(regular \ representation)}"{sloped,below=0.0ex,xshift=-0.0em}] & {K(\mathfrak{C}^2)} \arrow[r,twoheadrightarrow,"q"] &\overline{K(\mathfrak{C}^2)} \arrow[d] \\ & &\ \ \ \ \overline{\delta} \in \mathbf{Ch}^2_K \arrow[d] \\ & &\mathbf{Aut(\overline{ \delta})} \end{tikzcd} $$ where $\mathfrak{C}^2$ is the cat$^2$-group obtained from the 2-crossed module $\mathfrak{X}$ and $\mathbf{Aut(\overline{ \delta})}$ is the cat$^2$-group algebra obtained from the chain complex $\overline{\delta}$ within $\overline{K(\mathfrak{C}^2)}$. \tableofcontents \section{Preliminaries} Recall that a \textit{small category} $\mathfrak{C}$ consists of a set of objects $C_0$, a set of morphisms $C_1$, source and target maps from $C_1$ to $C_0$, a map $i:C_{0}\rightarrow C_1$ which gives the identity morphism at each object, and a partially defined function $C_1\times C_1\rightarrow C_1$ which gives the composition of two morphisms. We will denote a small category by $(C_1 ,C_0 )$ and display it diagrammatically as $$ \xymatrix{C_1 \ar@<1ex>[r]^{s,t}\ar@<0ex>[r]&C_0. \ar@<1ex>[l]^{i}} $$ For $x,y\in C_0$, the set of morphisms from $x$ to $y$ is written $C_{1}(x,y)$ and termed a hom-set. Then, for $a\in C_{1}(x,y)$, we have $s(a)=x$ and $t(a)=y$. We will usually write $i_x$ for $i(x)$ and $b\circ a$ for the composite of the morphisms $a:x\rightarrow y$ and $b:y\rightarrow z$. The elements of $C_0$ are also called 0-cells and the elements of $C_1$ are called 1-cells between 0-cells.
A \textit{groupoid} is a small category in which every morphism (or every 1-cell) is an isomorphism (or invertible); that is, for any 1-cell $(a:x\rightarrow y)\in C_{1}(x,y)$, there is a 1-cell $(a^{-1}:y\rightarrow x)\in C_{1}(y,x)$ such that $a^{-1}\circ a=i_x$ and $a\circ a^{-1}=i_y$. A groupoid with a single 0-cell can be regarded as a group. For a survey of applications of groupoids and an introduction to their literature, see \cite{Brown, BG, bs}. Cat$^1$-groups, or categorical groups, are group objects in the category of small categories (cf. \cite{Loday}). They are sometimes referred to simply as cat-groups. A cat$^1$-group $\mathfrak{C}:=(G,N,i,s,t)$ consists of groups $G$ and $N$, an embedding $i:N\rightarrow G$, and epimorphisms $s,t:G\rightarrow N$ satisfying the conditions: (i) $si=ti=id_N$ and (ii) $[\mathrm{Ker} s,\mathrm{Ker} t]={1_G}.$ Condition (ii) is called the \textit{kernel condition}. A structure with the same data as a cat$^1$-group and satisfying the first condition but not the kernel condition is called a \textit{pre-cat$^1$-group}. Crossed modules were introduced by Whitehead in \cite{W}. A crossed module $\mathfrak{X}:=(M,N,\partial)$ consists of groups $M,N$ together with a homomorphism $\partial:M\rightarrow N$ and a left action $N\times M \rightarrow M$ of $N$ on $M$, given by $(n,m)\mapsto {^n{m}}$, satisfying the conditions: (i) $\partial(^n{m})=n\partial(m)n^{-1}$ and (ii) $^{\partial (m)}{m'}=mm'm^{-1}$ for all $n\in N,\ m,m'\in M $. Condition (ii) is called the \textit{Peiffer identity}. A structure with the same data as a crossed module and satisfying the first condition but not the Peiffer identity is called a \textit{pre-crossed module}. If $M$ and $N$ are groups with a left action of $N$ on $M$, the semi-direct product of $M$ by $N$ is the group $M\rtimes N=\{(m,n):m \in M, n\in N\}$ with the multiplication $(m,n)(m',n')=(m\,{}^{n}{m'},nn')$. The inverse of $(m,n)$ is $({}^{n^{-1}}m^{-1},n^{-1})$. For a cat$^1$-group $\mathfrak{C}:=(G,N,i,s,t)$, it is well known that $G\cong\mathrm{Ker} s\rtimes N$ (cf. \cite{Brown-loday, bs}). From a crossed module $\mathfrak{X}:=(M,N,\partial)$, a cat$^1$-group $\mathfrak{C(X)}$: $$ \xymatrix{M\rtimes N \ar@<1ex>[r]^-{s,t}\ar@<0ex>[r]&N \ar@<1ex>[l]^-{i}} $$ can be constructed. Here $s,t:M\rtimes N\rightarrow N$ and $i:N\rightarrow M \rtimes N$ are defined as $s(m,n)=n$, $t(m,n)=\partial(m)n$ and $i(n)=(1,n)$. Then $si=ti=id$ and $[\mathrm{Ker} s,\mathrm{Ker} t]=1_{M\rtimes N}$. Thus, crossed modules and cat$^1$-groups are equivalent algebraic structures. In a cat$^1$-group $\mathfrak{C}:=(G,N,i,s,t)$, since the big group $G$ can be decomposed as $\mathrm{Ker} s \rtimes N$, we can describe a typical element as $(m,n)$ with $m\in \mathrm{Ker} s$ and $n\in N$. So we can view this as a 2-cell $(m,n):n\Rightarrow\partial_1mn$. In this case, the 0-cell of $\mathfrak{C}$ is $*$, and 1-cells are $n:*\rightarrow *$ with $n\in N$. 2-cells are $(m,n):n\Rightarrow\partial_1mn$. These are given pictorially as $$ \begin{tikzcd}[row sep=tiny,column sep=small] & \ar[dd, Rightarrow, "{\scriptscriptstyle(m,n)}"] \\ {*}\ar[rr,bend left=50,"\scriptscriptstyle n"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1mn"'] & & \ {*} \\ & \ \end{tikzcd} $$ Since $G$ is itself a group, it can be regarded as a category with a single object $*$, and its group operation is considered categorically as composition.
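For instance (a routine check, which we include here for the reader's convenience; it is not spelled out in the sources cited above), the first crossed module axiom guarantees that the target map $t$ is a homomorphism of groups:
\begin{align*}
t\left((m,n)(m',n')\right)&=t(m\,{}^{n}m',nn')=\partial(m)\,\partial({}^{n}m')\,nn'\\
&=\partial(m)\left(n\partial(m')n^{-1}\right)nn'=\left(\partial(m)n\right)\left(\partial(m')n'\right)=t(m,n)\,t(m',n').
\end{align*}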
Therefore, $(m,n)\#_0(m',n')=(m,n)(m',n')=(m\,{}^{n}m',nn')$, and we can represent it pictorially as $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] & \ar[dd,Rightarrow,"\scriptscriptstyle{(m,n)}"{description}] & & \ar[dd,Rightarrow,"\scriptscriptstyle{(m',n')}"{description}]& \\ {*}\ar[rr,bend left=50,"\scriptscriptstyle n"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1mn"']& &{*}\ar[rr,bend left=50,"\scriptscriptstyle n'"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1m'n'"'] & & {*} \\ \ & \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] & \ar[dd, Rightarrow, "\scriptscriptstyle{(m,n)\#_0(m',n')}"{description}] & \\ {*}\ar[rr,bend left=50,"\scriptscriptstyle nn'"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1mn\partial_1m'n'"'] & &{*}\\ & \ \end{tikzcd} \end{array} $$ This operation is called the horizontal composition of 2-cells. On the other hand, there is another composition in $\mathfrak{C}$. For any 2-cell $(m',\partial(m)n):\partial(m)n\Rightarrow\partial (m')\partial (m)n$ where $m'\in \mathrm{Ker} s$, we can compose $(m,n)$ and $(m',\partial(m)n)$ by defining $(m',\partial(m)n)\#_2(m,n)=(m'm,n)$. The kernel condition establishes an interchange law between the compositions $\#_2$ and $\#_0$. Hence, a cat$^1$-group can be considered as a 2-category with a single object $*$. These compositions are invertible, so we can say that a cat$^1$-group has the structure of a 2-group, or equivalently the structure of a 2-groupoid with a single 0-cell $*$. \subsection{The Category $\mathbf{Ch}^1_K$ as a 2-Groupoid} Barker, in \cite{Barker}, constructed a 2-groupoid structure over chain complexes of length-1. In this section, we give a brief description of the category of chain complexes of vector spaces over a fixed field $K$ and its 1-truncated case $\mathbf{Ch}^1_K$. Now, suppose that $K$ is a field and $C_i$ is a $K$-vector space for each $i\in \mathbb{Z}$. Let $$ \begin{tikzcd} \mathfrak{C}:=\cdots\ar[r]&C_{n}\ar[r,"{d_n}"]&C_{n-1}\ar[r,"{d_{n-1}}"]&\cdots C_{1}\ar[r,"{d_1}"]&C_{0}\ar[r,"{d_{0}}"]&C_{-1}\ar[r,"{d_{-1}}"]&\cdots \end{tikzcd} $$ be a chain complex of $K$-vector spaces and linear transformations between them. A chain map $f:\mathfrak{C}\rightarrow \mathfrak{D}$ consists of components $f_i:C_i\rightarrow D_i$ such that $f_{i-1}d_i=d_if_i$ for all $i\in \mathbb{Z}$, i.e. the following diagram commutes: $$ \begin{array}{c} \xymatrix{\mathfrak{C}\ar[d]_{f}\\ \mathfrak{D}} \end{array} {:=} \begin{array}{c} \xymatrix{ \cdots \ar[r]&C_{n+1} \ar[r]^-{d_{n+1}}\ar[d]^-{f_{n+1}}& C_{n} \ar[r]^-{d_{n}}\ar[d]^-{f_n}&C_{n-1} \ar[r]^-{d_{n-1}}\ar[d]^-{f_{n-1}} & \cdots\\ \cdots\ar[r]&D_{n+1} \ar[r]_-{d_{n+1}} & D_{n} \ar[r]_-{d_{n}} \ar[r] &D_{n-1} \ar[r]_-{d_{n-1}} & \cdots} \end{array} $$ where each $f_i$ is a linear map. The composition $g\#_0 f:\mathfrak{C}\rightarrow \mathfrak{E}$ of the chain maps $f:\mathfrak{C}\rightarrow \mathfrak{D}$ and $g:\mathfrak{D}\rightarrow \mathfrak{E}$ is defined by $(g\#_0 f)_i=g_if_i$ for all $i$, where the composition on the right hand side is the usual one for linear maps. Note that a chain isomorphism $f:\mathfrak{C}\rightarrow \mathfrak{D}$ is an invertible chain map in which each component is a linear isomorphism. Let $f,g$ be chain maps from $\mathfrak{C}$ to $\mathfrak{D}$.
A chain homotopy $H:f\simeq g$ consists of maps $h'_n:C_n\rightarrow D_{n+1}$ satisfying $g_n-f_n=d_{n+1}h'_n+h'_{n-1}d_n$ for each $n\in \mathbb{Z}$, as pictured in the following diagram: \begin{equation*} \xymatrix{\cdots\ar[rr]&&C_{n+1} \ar[rr]^-{\scriptscriptstyle d_{n+1}}\ar@<0.5ex>[dd]^-{\scriptscriptstyle g_{n+1}}\ar@<-0.5ex>[dd]_-{\scriptscriptstyle f_{n+1}} && C_{n} \ar[ddll]_{\scriptscriptstyle h'_n} \ar[rr]^-{\scriptscriptstyle d_{n}}\ar@<0.5ex>[dd]^-{\scriptscriptstyle g_n}\ar@<-0.5ex>[dd]_-{\scriptscriptstyle f_n}&&C_{n-1} \ar[ddll]_{\scriptscriptstyle h'_{n-1}}\ar[rr]^-{\scriptscriptstyle d_{n-1}}\ar@<0.5ex>[dd]^-{\scriptscriptstyle g_{n-1}}\ar@<-0.5ex>[dd]_-{\scriptscriptstyle f_{n-1}} &&\cdots \\ \\ \cdots \ar[rr]&&D_{n+1}\ar[rr]_{\scriptscriptstyle d_{n+1}} && D_{n} \ar[rr]_{\scriptscriptstyle d_{n}} \ar[rr] &&D_{n-1} \ar[rr]_{\scriptscriptstyle d_{n-1}} &&\cdots} \end{equation*} The category of all chain complexes is called $\mathbf{Ch}$ and is discussed in detail by Kamps and Porter in \cite{KP}. They have proven that $\mathbf{Ch}$ is a 2-groupoid enriched Gray category. In our work, it is sufficient to consider non-negative chain complexes, in which the subscripts are non-negative integers. The chain complexes of length-0 give us the category of vector spaces over $K$; we denote it by $\mathbf{Ch}^0_K$. Now, we concentrate on the category $\mathbf{Ch}^1_K$. If $\delta:C_1\rightarrow C_0$ is a linear transformation, then $\mathfrak{C}:=(\xymatrix{C_1\ar[r]^{\delta}&C_0})$ is a length-1 chain complex of vector spaces, and it can be considered as a chain complex $$ \begin{tikzcd} \mathfrak{C}:=\cdots\ar[r]&0\ar[r]&C_{1}\ar[r,"\delta"]&C_{0}\ar[r]&0\ar[r]&\cdots \end{tikzcd} $$ Therefore, we obtain the category $\mathbf{Ch}^1_K$ whose objects are chain complexes of length-1 of $K$-vector spaces and whose morphisms are the chain maps between them. Clearly, $\mathbf{Ch}^1_K$ is a subcategory of $\mathbf{Ch}$. Let $f,g:\mathfrak{C}\rightarrow \mathfrak{D}$ be chain maps in $\mathbf{Ch}^1_K$. Then, a homotopy $H:=(H',f):f\simeq g$ is given by the following diagram $$ \xymatrix{C_1\ar@<0.5ex>[rr]^{f_1}\ar@<-0.5ex>[rr]_{g_1}\ar[dd]_{\delta^C}&&D_1 \ar[dd]^{\delta^D}\\ \\ C_0\ar@<0.5ex>[rr]^{f_0}\ar@<-0.5ex>[rr]_{g_0}\ar[uurr]^{H'}&&D_0} $$ where $H'$ is called the chain homotopy component, satisfying the conditions $\delta^D H'=g_0-f_0$ and $H'\delta^C=g_1-f_1$. We can summarize the structure of $\mathbf{Ch}^1_K$ as follows:\\ $\bullet$ the 0-cells are the chain complexes of length-1; $\mathfrak{C}:=(\xymatrix{C_1\ar[r]^{\delta^C}& C_0})$,\\ $\bullet$ the 1-cells between 0-cells are the chain maps; $f:=(f_1,f_0):\delta^C\rightarrow \delta^D$,\\ $\bullet$ the 2-cells between 1-cells are the 1-homotopies; $H:=(H',f):f\simeq g$. A 2-cell $H:=(H',f):f\simeq g$ may be written briefly as $H:f\Rightarrow g$. Let us now explain the vertical and horizontal compositions of 1-cells and 2-cells. The composition of 1-cells is the usual composition of chain maps, as explained before. For 0-cells $\mathfrak{C}$ and $\mathfrak{D}$, and 1-cells $f,g,k:\mathfrak{C}\rightarrow \mathfrak{D}$, the vertical composition of 2-cells $H:f\Rightarrow g$ and $\hat{H}:g\Rightarrow k$ is given by $\hat{H}\#_2 H:f\Rightarrow k$, where the chain homotopy component is $(\hat{H}\#_2 H)'=\hat{H}'+H'$ (a verification that this is indeed a homotopy is given at the end of this subsection). To define the horizontal composition of 2-cells, as a first step, we have to give the whiskerings of 1-cells on 2-cells, on the right side and on the left side.
For the notions of whiskerings and whiskered categories, see also \cite{Brown}. Let $H:f\Rightarrow f':\mathfrak{C}\rightarrow \mathfrak{D}$ be a 2-cell and $g:=(g_1,g_0):\mathfrak{D}\rightarrow \mathcal{E}$ a 1-cell. The left whiskering of $g$ on $H$ is given by $g\natural_1 H:g\#_0 f\Rightarrow g\#_0 f'$ where the chain homotopy component is $(g\natural_1 H)'=g_1H'$. This terminology can be explained by the picture $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] &\ar[dd,Rightarrow,"{H}"]&\\ {\mathfrak{C}}\ar[rr,bend left=50,"f"]\ar[rr,bend right=50,"{f'}"']& &{\mathfrak{D}}\ar[r,"g"]&\mathcal{E}\\ &\ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] &\ar[dd,Rightarrow,"{g\natural_1 H}"]&\\ \mathfrak{C} \ar[rr,bend left=50,"g\#_0 f"]\ar[rr,bend right=50,"g\#_0 f'"']& &\mathcal{E}.\\ &\ \end{tikzcd} \end{array} $$ The left whiskering of $g$ on $H$ appears on the left in the notation $g\natural_1 H$, but on the right in the picture. Similarly, the right whiskering of a 1-cell $f:=(f_1,f_0):\mathfrak{C}\rightarrow \mathfrak{D}$ on a 2-cell $K:g\Rightarrow g':\mathfrak{D}\rightarrow \mathcal{E}$ is given by $K\natural_1 f:g\#_0 f\Rightarrow g'\#_0 f$ where the chain homotopy component is $(K\natural_1 f)'=K'f_0$. In this case, the 1-cell $f$ will appear on the right in the notation $K\natural_1 f$, but on the left in the following picture: $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ & \ \ar[dd,Rightarrow,"{K}"] & \\ \mathfrak{C} \ar[r,"f"]&\mathfrak{D}\ar[rr,bend left=50,"g"] \ar[rr,bend right=50,"g'"'] & & \mathcal{E} \\ \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ar[dd, Rightarrow, "{K\natural_1 f}"] & \\ \mathfrak{C}\ar[rr,bend left=50,"g\#_0 f"] \ar[rr,bend right=50,"g'\#_0 f"'] & &\mathcal{E}.\\ & \ \end{tikzcd} \end{array} $$ Now we can give the definition of horizontal composition for 2-cells. Let $H:f:(f_1,f_0)\Rightarrow f':(f'_1,f'_0):\mathfrak{C}\rightarrow \mathfrak{D}$ and $K:g:(g_1,g_0)\Rightarrow g':(g'_1,g'_0):\mathfrak{D}\rightarrow \mathcal{E}$ be 2-cells. The horizontal composition of $K$ and $H$ is $$K\#_0H:=(K\natural_1 f')\#_2(g\natural_1 H):g\#_0 f\Rightarrow g'\#_0 f'$$ where the chain homotopy component is $(K\#_0 H)'=g_1H'+K'f'_0$ (see the verification at the end of this subsection). The following picture shows the definition of horizontal composition of 2-cells: $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ \ar[dd, Rightarrow, "H"] \ & \ & \ar[dd, Rightarrow, "K"] & \\ \mathfrak{C}\ar[rr,bend left=50,"f"] \ar[rr,bend right=50,"f'"'] & \ & \mathfrak{D}\ar[rr,bend left=50,"g"] \ar[rr,bend right=50,"g'"'] & \ & \mathcal{E}\\ \ & \ & \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ar[dd, Rightarrow, "{K\#_0 H}"] & \\ {\mathfrak{C}}\ar[rr,bend left=50,"g\#_0 f"] \ar[rr,bend right=50,"g'\#_0 f'"'] & & {\mathcal{E}}. \\ & \ \end{tikzcd} \end{array} $$ We also note from \cite{Barker} that $$(g'\natural_1 H)\#_2(K\natural_1 f)=(K\natural_1f')\#_2 (g\natural_1 H). $$ The interchange law for the vertical and horizontal compositions was given in \cite{Barker}. Therefore, $\mathbf{Ch}^1_K$ is a 2-category and is a groupoid-enriched category. By restricting the chain maps to those which are invertible, the category of invertible chain maps $\mathbf{invCh}^1_K$ is a 2-groupoid.
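As a quick sanity check (ours; these identities are implicit in \cite{Barker}), the homotopy components defined in this subsection do satisfy the required conditions. For the vertical composition,
$$ \delta^D(\hat{H}'+H')=(k_0-g_0)+(g_0-f_0)=k_0-f_0, \qquad (\hat{H}'+H')\delta^C=(k_1-g_1)+(g_1-f_1)=k_1-f_1, $$
so $\hat{H}\#_2 H$ is indeed a homotopy from $f$ to $k$. For the horizontal composition, using that $g$ is a chain map together with the homotopy conditions for $H$ and $K$, we compute
$$ \delta^E\left(g_1H'+K'f'_0\right)=g_0\,\delta^D H'+(g'_0-g_0)f'_0=g_0(f'_0-f_0)+g'_0f'_0-g_0f'_0=g'_0f'_0-g_0f_0, $$
which is the degree-0 condition for a homotopy $g\#_0 f\Rightarrow g'\#_0 f'$; the degree-1 condition is verified in the same way.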
\subsection{The Structure Aut($\delta$) in $\mathbf{Ch}^1_K$ as a Cat$^1$-Group} To define a linear representation of a crossed module (or equivalently a cat$^1$-group) $\mathfrak{C}$ as a 2-functor $\phi: \mathfrak{C}\rightarrow \mathbf{Ch}^1_K$, Barker in \cite{Barker} constructed an automorphism cat$^1$-group $\mathbf{Aut(\delta)}$ as a 2-subcategory of $\mathbf{Ch}^1_K$. In this case, the functorial image of $\mathfrak{C}$ under the 2-functor $\phi$ is $\mathbf{Aut(\delta)}$. This subcategory is the collection of all chain isomorphisms $\delta \rightarrow \delta$ and the homotopies between them. \begin{defn}\rm{ (\cite{Barker}) Let $\delta:C_1\rightarrow C_0$ be a linear transformation of vector spaces over the field $K$. The automorphism cat$^1$-group of $\delta$, $\mathbf{Aut(\delta)}$, consists of:\\ $(i)$ the group $\mathbf{Aut}(\delta)_1$ of all chain isomorphisms $\delta\rightarrow \delta$,\\ $(ii)$ the group $\mathbf{Aut}(\delta)_2$ of all homotopies on $\mathbf{Aut}(\delta)_1$,\\ $(iii)$ morphisms $s,t:\mathbf{Aut}(\delta)_2 \rightarrow \mathbf{Aut}(\delta)_1$, selecting the source and target of each homotopy,\\ $(iv)$ the morphism $i:\mathbf{Aut}(\delta)_1 \rightarrow \mathbf{Aut}(\delta)_2$, which provides the identity homotopy on each chain automorphism.} \end{defn} Then, the only 0-cell in $\mathbf{Aut(\delta)}$ is $\delta:C_1\rightarrow C_0$, so $\mathbf{Aut}(\delta)_0$ is a singleton. 1-cells are chain isomorphisms from $\delta$ to itself, $f:=(f_1,f_0):\delta\rightarrow \delta$. 2-cells are homotopies between 1-cells: $$ \begin{tikzcd}[row sep=small,column sep=normal] &\ar[dd,Rightarrow,"{ H}"]\\ \delta \ar[rr,bend left=50,"f"] \ar[rr,bend right=50,"f'"']& &\delta\\ &\ \end{tikzcd} $$ where $H:=(H',f):f\Rightarrow f'$ and $H'$ is the chain homotopy component; this can be represented more explicitly by the diagram $$ \xymatrix{C_1\ar@<0.5ex>[rr]^{f_1}\ar@<-0.5ex>[rr]_{f_1'}\ar[dd]_{\delta}&&C_1 \ar[dd]^{\delta}\\ \\ C_0\ar@<0.5ex>[rr]^{f_0}\ar@<-0.5ex>[rr]_{f_0'}\ar@{-->}[uurr]^{H'}&&C_0} $$ with the homotopy conditions $\delta H'=f'_0-f_0$ and $H'\delta =f'_1-f_1$. Since a 2-groupoid with a single object can be considered as a cat$^1$-group, $$ \xymatrix{\mathbf{Aut(\delta)}:=\mathbf{Aut}(\delta)_1 \ar@<1ex>[r]^-{s,t}\ar@<0ex>[r]&\mathbf{Aut}(\delta)_0 \ar@<1ex>[l]^-{i}} $$ is a cat$^1$-group. Therefore, the 2-functor $\phi:\mathfrak{C}\rightarrow \mathbf{Aut(\delta)}\leqslant \mathbf{Ch}^1_K$ can be explained as follows: $\bullet$ For the 0-cell $*$ in $\mathfrak{C}$, $\phi(*)=\delta$ is the chain complex of length-1 in $\mathbf{Ch}^1_K$ which is the 0-cell of $\mathbf{Aut(\delta)}$. $\bullet$ For any 1-cell $x:* \longrightarrow *$, $x\in C_0$, $\phi(x)=f$ is the chain isomorphism $f:=(f_1,f_0):\delta\rightarrow \delta$, which is a 1-cell in $\mathbf{Aut}(\delta)_1$. $\bullet$ For any 2-cell $a:x\rightarrow y$ in $C_1$ with $s(a)=x$, $t(a)=y$, $\phi(a)=H:=(H',f):f\Rightarrow f'$ in $\mathbf{Aut}(\delta)_2$ is the homotopy from $\phi(x)=f$ to $\phi(y)=f'$. \subsection{Regular Representation of Crossed Modules and Cat$^1$-Groups} It is well known that Cayley's theorem provides regular representations of groups. For any group $G$, the right regular representation of $G$ is defined as $\lambda:G^{op}\rightarrow S_{|G|}$ with $\lambda_g(h)=hg$ (for $g,h\in G$), where $\lambda_g:G\rightarrow G$ is a permutation of the underlying set of $G$ and $G^{op}$ is the opposite category to $G$, since right multiplication makes $\lambda$ contravariant, i.e.
$\lambda_{g_1g_2}(h)=h(g_1g_2)=(hg_1)g_2=\lambda_{g_2}(\lambda_{g_1}(h))$ for $g_1,g_2,h\in G$. Similarly, the left regular representation is defined by $\rho:G\rightarrow S_G$ with $\rho_g(h)=gh$. In this case, $\rho$ is covariant. Serre \cite{Serre}, by reformulating Cayley's theorem for linear representations, defined the left regular representation as a linear representation of $G$. In this case, to the regular representations of $G$ there correspond regular linear representations $\lambda:G^{op}\rightarrow GL_{|G|}(K),\ \lambda:G\rightarrow GL_{|G|}(K)$ with $\lambda_g(\mathbf{e}_h):=\mathbf{e}_{hg}$, where the $\mathbf{e}_g$ are the basis vectors in the group algebra $K(G)$. In this structure, $\lambda_g(\mathbf{e}_h):=\mathbf{e}_{hg}$ can be regarded as an action ${}^{g}\mathbf{e}_{h}=\mathbf{e}_{hg}$ of $G$ on the group algebra $K(G)$ by multiplication on the right. Barker \cite{Barker}, using the group algebra functor $K(.)$ from the category of groups to that of algebras, constructed regular representations of cat$^1$-groups. In this section, we explain this construction briefly, in order to use it in the next sections. \subsubsection{A Brief Description of the Group Algebra Functor $K(.)$} Let $G$ be a group and $K$ a fixed field. Suppose that $X_G$ is the underlying set of $G$. For any element $g\in G$, the notation $\mathbf{e}_g$ denotes the corresponding basis element. Then, $K(G)$ is a vector space over the field $K$ with basis $\{\mathbf{e}_g\}_{g\in G}$. Any element of $K(G)$ can be written in the form $\sum r_g\mathbf{e}_g$, with $r_g\in K$ and only finitely many $r_g\neq 0$ (cf. \cite{Passman}). The addition in $K(G)$ is given by $$ \sum r_g\mathbf{e}_g +\sum s_g\mathbf{e}_g:=\sum (r_g+s_g)\mathbf{e}_g $$ where $r_g+s_g$ is given by the addition in $K$; the scalar multiplication is given by $$ s\sum r_g\mathbf{e}_g:=\sum (sr_g)\mathbf{e}_g, $$ and for the elements $\sum r_g\mathbf{e}_g$ and $\sum s_h\mathbf{e}_h$ in $K(G)$ ($g,h\in G$), the multiplication in $K(G)$ is given by $$ (\sum_g r_g\mathbf{e}_g)(\sum_h s_h\mathbf{e}_h):=\sum_{g,h} (r_gs_h)\mathbf{e}_{gh} $$ where $sr_g$ and $r_gs_h$ are obtained by the multiplication in $K$, and the product $gh$ which appears in $\mathbf{e}_{gh}$ is obtained by the operation in $G$. Together with these operations, $K(G)$ is a $K$-algebra, called the group algebra of $G$. It is proven in \cite{Barker} that $K(.):\mathbf{Gr}\rightarrow \mathbf{Alg}_K$ is a functor from the category of groups to the category of $K$-algebras. For details see \cite{Barker} and \cite{Passman}. By ignoring the multiplication in the group algebra $K(G)$, we can consider the underlying vector space of $K(G)$; we will also write $K(G)$ for this underlying vector space. \subsubsection{The Regular Representation of Cat$^1$-Groups} Using the group algebra functor, for any cat$^1$-group $\mathfrak{C}$, Barker obtained a special cat$^1$-group algebra $\overline{K(\mathfrak{C})}$; from $\overline{K(\mathfrak{C})}$ he defined a chain complex $\overline{\delta}$ of length-1, and then, by constructing a 2-functor $\lambda:\mathfrak{C}\rightarrow \mathbf{Aut}(\overline{\delta})$, he gave the regular representation of cat$^1$-groups. In this section, we recall this construction briefly. We will use this method to give the regular representation of 2-crossed modules and Gray 3-groupoids with a single 0-cell $*$. Let $\partial:M\rightarrow N$ be a crossed module of groups.
Recall that, using the action of $N$ on $M$, we can create the semi-direct product group $M\rtimes N$ together with the operation $(m,n)(m',n')=(m\,{}^{n}{m'},nn')$. It is known that $$ \xymatrix{\mathfrak{C}:=M\rtimes N \ar@<1ex>[r]^-{s,t} \ar@<0ex>[r]&N \ar@<1ex>[l]^-{i}} $$ is a cat$^1$-group with the structural maps given above. Applying the group algebra functor to this cat$^1$-group $\mathfrak{C}$, we obtain that $$ \xymatrix{K(\mathfrak{C}):=K(M\rtimes N) \ar@<1ex>[r]^-{\sigma,\tau} \ar@<0ex>[r]&K(N) \ar@<1ex>[l]^-{i}} $$ is a pre-cat$^1$-algebra, where $K(M\rtimes N)$ and $K(N)$ are the group algebras of the groups $M\rtimes N$ and $N$ respectively; $K(N)$ has basis $\{\mathbf{e}_n:n\in N\}$ and $K(M\rtimes N)$ has basis $\{\mathbf{e}_{m,n}:m\in M, n\in N\}$. The source, target and identity maps within this structure are defined by $\sigma(\mathbf{e}_{m,n})=\mathbf{e}_n$, $\tau(\mathbf{e}_{m,n})=\mathbf{e}_{\partial(m)n}$ and $i(\mathbf{e}_n)=\mathbf{e}_{1,n}$ respectively. It is clear that $\sigma i=\tau i =id.$ Every element of the form $\mathbf{v}_{m,n}=\mathbf{e}_{m,n}-\mathbf{e}_{1,n}$ lies in $\mathrm{Ker}\sigma$, since $\sigma(\mathbf{v}_{m,n})=0.$ Similarly, every element of the form $\mathbf{w}_{m,n}=\mathbf{e}_{m,n}-\mathbf{e}_{1,\partial(m)n}$ lies in $\mathrm{Ker}\tau$. In fact, the set $\{\mathbf{v}_{m,n}:m\neq 1\}$ is a basis for $\mathrm{Ker}\sigma$ and the set $\{\mathbf{w}_{m,n}:m\neq 1\}$ is a basis for $\mathrm{Ker}\tau$. To satisfy the kernel condition $\mathrm{Ker}\sigma.\mathrm{Ker}\tau=0$, it would suffice that $\mathbf{v}_{m,n}.\mathbf{w}_{m',n'}=0$ and $\mathbf{w}_{m',n'}.\mathbf{v}_{m,n}=0$ for $\mathbf{w}_{m',n'}\in \mathrm{Ker}\tau$ and $\mathbf{v}_{m,n}\in \mathrm{Ker}\sigma$. However, $$ \mathbf{v}_{m,n}.\mathbf{w}_{m',n'}=\mathbf{e}_{m\,{}^{n}m',nn'}-\mathbf{e}_{{}^{n}m',nn'}-\mathbf{e}_{m,n\partial(m')n'}+\mathbf{e}_{1,n\partial(m')n'}\neq 0. $$ Since this expression is a linear combination of basis elements with non-zero coefficients, the kernel condition fails. Hence, in this form $K(\mathfrak{C})$ is not a cat$^1$-algebra, but it is a pre-cat$^1$-algebra. In order to satisfy the kernel condition, Barker found suitable expressions in $K(M\rtimes N)$, $$ \mathbf{e}_{m'm,n}-\mathbf{e}_{m,n}-\mathbf{e}_{m',\partial(m)n}+\mathbf{e}_{1,\partial(m)n} $$ (for $m,m'\in M$ and $n\in N$), called \textit{cocycles}. $K(\mathfrak{C})$ can be factored by the ideal $J$ which is generated by these cocycles. This ideal is referred to as the cocycle ideal in $K(M\rtimes N)$. In this case, we can write $$ J=\langle\mathbf{e}_{m'm,n}-\mathbf{e}_{m,n}-\mathbf{e}_{m',\partial(m)n}+\mathbf{e}_{1,\partial(m)n}:m,m'\in M\setminus\{1_M\}, n\in N\rangle. $$ Let $\overline{K(M\rtimes N)}=K(M\rtimes N)/J$. The images of the basis elements in this factor algebra are the cosets $\overline{\mathbf{e}}_{m,n}=\mathbf{e}_{m,n}+J$ for $\mathbf{e}_{m,n}\in K(M\rtimes N)$. Note that $\sigma$ and $\tau$ vanish on the generating cocycles; for instance, $\sigma(\mathbf{e}_{m'm,n}-\mathbf{e}_{m,n}-\mathbf{e}_{m',\partial(m)n}+\mathbf{e}_{1,\partial(m)n})=\mathbf{e}_{n}-\mathbf{e}_{n}-\mathbf{e}_{\partial(m)n}+\mathbf{e}_{\partial(m)n}=0$, and similarly for $\tau$, so both maps factor through the quotient. Thus, using the quotient map $q:K(M\rtimes N)\longrightarrow \overline{K(M\rtimes N)}$, the source, target and identity maps between $\overline{K(M\rtimes N)}$ and $K(N)$ are defined by \begin{align*} \overline{\sigma}(\overline{\mathbf{e}}_{m,n})&=\sigma(\mathbf{e}_{m,n})=\mathbf{e}_{n},\\ \overline{\tau}(\overline{\mathbf{e}}_{m,n})&=\tau(\mathbf{e}_{m,n})=\mathbf{e}_{\partial(m)n},\\ \overline{i}(\mathbf{e}_{n})&=\overline{\mathbf{e}}_{1,n}.
\end{align*} Since $$ \mathbf{v}_{m,n}.\mathbf{w}_{m',n'}=\mathbf{e}_{m\,{}^{n}m',nn'}-\mathbf{e}_{{}^{n}m',nn'}-\mathbf{e}_{m,n\partial(m')n'}+\mathbf{e}_{1,n\partial(m')n'}\in J, $$ we obtain $\overline{\mathbf{v}}_{m,n}.\overline{\mathbf{w}}_{m',n'}=0+J=\overline{0}$. Thus, $\mathrm{Ker}\overline{\sigma}.\mathrm{Ker}\overline{\tau}=\overline{0}$. Hence, together with these structures, $$ \xymatrix{\overline{K(\mathfrak{C})}:=K(M\rtimes N)/J \ar@<1ex>[r]^-{\overline{\sigma},\overline{\tau}} \ar@<0ex>[r]&K(N) \ar@<1ex>[l]^-{\overline{i}}} $$ is a cat$^1$-algebra. Thus, the construction of a regular representation can be summarised in the following definition (cf. \cite{Barker}). \begin{defn}\rm{ The right regular representation of a cat$^1$-group $\mathfrak{C}$ is the 2-functor $$\mathbf{\lambda}:\mathfrak{C}^{op}\rightarrow \mathbf{Ch}^{1}_{K}$$ which sends $\bullet$ the 0-cell $*$ in $\mathfrak{C}$ to the chain complex of vector spaces of length-1; $$\overline{\delta}=\overline{\tau}|_{\mathrm{Ker}\overline{\sigma}}:\mathrm{Ker}\overline{\sigma}\longrightarrow K(N),$$ $\bullet$ each 1-cell $n\in N$ of $\mathfrak{C}$ to the chain automorphism $\mathbf{\lambda}_n:=(\lambda^{0}_n,\lambda^{1}_n):\overline{\delta}\longrightarrow \overline{\delta}$ defined on each level by $$ \lambda^{0}_n(\mathbf{e}_{n'}):=\mathbf{e}_{n'n}, \hspace{1cm} \ \ \lambda^{1}_{n}(\mathbf{\overline{v}}_{m',n'}):=\mathbf{\overline{v}}_{m',n'n} $$ where $\lambda^{0}_n:K(N)\rightarrow K(N)$ and $\lambda^{1}_n:\mathrm{Ker}\overline{\sigma}\rightarrow \mathrm{Ker}\overline{\sigma}$ are linear automorphisms, $\bullet$ each 2-cell $(m,n)\in M\rtimes N$ of $\mathfrak{C}$ to the homotopy $$\mathbf{\lambda}_{m,n}:=(\lambda'_{(m,n)},\lambda_n):\lambda_n\Rightarrow \lambda_{\partial mn}$$ with the chain homotopy component $\lambda'_{(m,n)}:K(N)\longrightarrow \mathrm{Ker}\overline{\sigma}$ defined by $\lambda'_{(m,n)}(\mathbf{e}_{n'}):=\mathbf{\overline{v}}_{{}^{n'}m,n'n}$, where all chain automorphisms $\mathbf{\lambda}_n$ and homotopies $\mathbf{\lambda}_{m,n}$ reside in \textbf{Aut}$(\overline{\delta})$ for the linear transformation $\overline{\delta}:=\overline{\tau}|_{\mathrm{Ker}\overline{\sigma}}$ obtained from the cat$^1$-algebra $\overline{K(\mathfrak{C})}$ of $\mathfrak{C}$. } \end{defn} One checks readily, using the crossed module axiom $\partial({}^{n'}m)=n'\partial(m)n'^{-1}$, that $\overline{\delta}\lambda'_{(m,n)}(\mathbf{e}_{n'})=\mathbf{e}_{n'\partial(m)n}-\mathbf{e}_{n'n}=(\lambda^{0}_{\partial(m)n}-\lambda^{0}_{n})(\mathbf{e}_{n'})$, so $\mathbf{\lambda}_{m,n}$ is indeed a homotopy from $\lambda_n$ to $\lambda_{\partial(m)n}$. The construction can be applied to any cat$^1$-group, so it gives us a cat$^1$-group version of Cayley's theorem. \section{The Category Ch$^2_K$ as a Gray Category} Kamps and Porter \cite{KP} constructed a Gray category structure on the category $\mathbf{Ch}^2_K$ of chain complexes of length-2. They also gave the relation between 2-crossed modules and Gray 3-groupoids. Using a slightly different language, Martins and Picken \cite{Martins} proved this relationship between these structures. To define the linear representation of a 2-crossed module, Al-asady, in \cite{Jinan}, considered these relationships and defined a 3-functor $$ \phi:\mathfrak{C}^2\longrightarrow \mathbf{Ch}^2_K $$ where $\mathfrak{C}^2$ is a Gray 3-groupoid with a single object set $\{*\}$ obtained from a 2-crossed module. In $\mathfrak{C}^2$, each set of cells is a group, so we can say that $\mathfrak{C}^2$ is a Gray 3-(group)-groupoid with a single object. \subsection{Gray 3-Groupoids} Kamps and Porter include in the Appendix of their work \cite{KP} a sketch of Crans' definition of Gray categories as algebraic structures (cf. \cite{Cr}). In this description, for convenience, they have inverted the interchange 3-cell.
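To orient the reader (our remark), recall from the preliminaries that in a cat$^1$-group the kernel condition $[\mathrm{Ker}\,s,\mathrm{Ker}\,t]=1$ forces the two ways of combining $\#_0$ and $\#_2$ to agree, so the interchange law holds strictly. In a Gray 3-groupoid it is precisely this law that is relaxed: the two horizontal composites $\begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix}$ and $\begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}$ of 2-cells need not be equal, and their failure to agree is recorded by an invertible interchange 3-cell $\Gamma\#\Gamma'$ between them, as in the definition below.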
Martins and Picken \cite{Martins} gave a full definition of a Gray 3-groupoid. Their conventions are slightly different from those of \cite{Cr},\cite{KP}. The following definition is equivalent to that given by Martins and Picken in \cite{Martins}. A (small) Gray 3-groupoid $\mathfrak{C}^2$ is given by a set $C_0$ of objects (or 0-cells), a set $C_1$ of 1-cells, a set $C_2$ of 2-cells and a set $C_3$ of 3-cells, together with maps $s_i,t_i:C_k\rightarrow C_{i-1}$, for $i=1,\cdots,k$, such that: \begin{enumerate} \item $s_2\circ s_3=s_2$ and $t_2\circ t_3=t_2$, as maps $C_3\rightarrow C_1$. \item $s_1=s_1\circ s_2=s_1\circ s_3$ and $t_1=t_1\circ t_2=t_1\circ t_3$, as maps $C_3\rightarrow C_0$. \item $s_1=s_1\circ s_2$ and $t_1=t_1\circ t_2$, as maps $C_2\rightarrow C_0$. \item There exists a 2-vertical composition $J\#_3 J'$ of 3-cells if $s_3(J)=t_3(J')$, making $C_3$ a groupoid whose set of objects is $C_2$ (identities are implicit). \item There exists a vertical composition $$ \Gamma' \#_{2}\Gamma=\begin{bmatrix} \Gamma\\ \Gamma' \end{bmatrix}$$ of 2-cells if $s_2(\Gamma')=t_2(\Gamma)$, making it a groupoid whose set of objects is $C_1$ (identities are implicit). \item There exists a 1-vertical composition $$ J' \#_{1}J=\begin{bmatrix} J\\ J' \end{bmatrix}$$ of 3-cells if $s_2(J')=t_2(J)$, making the set of 3-cells $C_3$ a groupoid with set of objects $C_1$ and such that the boundaries $s_3,t_3:C_3\rightarrow C_2$ are functors (groupoid morphisms). \item The 1-vertical and 2-vertical compositions of 3-cells satisfy the interchange law: $$ (J'\#_3 J)\#_1(J'_1\#_3 J_1)=(J'\#_1 J'_1)\#_3 (J\#_1 J_1). $$ Combined with the previous axioms, this means that the 1-vertical and 2-vertical compositions of 3-cells and the vertical composition of 2-cells give $C_3$ the structure of a 2-groupoid (cf. \cite{HKK}), with set of objects $C_1$, set of 1-cells $C_2$ and set of 2-cells $C_3$. \item (\textbf{Whiskering by 1-cells}) For each $x,y\in C_0$, one can define a 2-groupoid $\mathfrak{C}(x,y)$ of all 1-, 2- and 3-cells $b$ such that $s_1(b)=x$ and $t_1(b)=y$. Given a 1-cell $\eta:y\rightarrow z$, there is a 2-groupoid map $\natural_1\eta:\mathfrak{C}(x,y)\rightarrow \mathfrak{C}(x,z)$. Similarly, if $\eta':w\rightarrow x$, there is a 2-groupoid map $\eta'\natural_1:\mathfrak{C}(x,y)\rightarrow \mathfrak{C}(w,y)$. \item There exists a horizontal composition $\eta \natural_1 \eta'$ of 1-cells if $s_1(\eta)=t_1(\eta')$, which is required to be associative and to define a groupoid with set of objects $C_0$ and set of morphisms $C_1$. \item Given $\eta,\eta'\in C_1$; \begin{align*} \natural_1\eta \circ \natural_1 \eta'=&\natural_1(\eta'\eta)\\ \eta \natural_1\circ \eta'\natural_1=& (\eta \eta')\natural_1\\ \eta \natural_1\circ \natural_1 \eta'=&\natural_1 \eta'\circ \eta \natural_1, \end{align*} whenever these compositions make sense.
\item There are two horizontal compositions of 2-cells \begin{align*} \begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix} =&\left(\Gamma\natural_1t_1\left(\Gamma^\prime\right)\right)\#_2\left(s_1\left(\Gamma\right)\natural_1\Gamma^\prime\right) \intertext{ and } \begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}=&\left(t_1\left(\Gamma\right)\natural_{1}\Gamma^\prime\right)\#_2\left(\Gamma\natural_1s_1\left(\Gamma^\prime\right)\right) \end{align*} and of 3-cells: \begin{align*} \begin{bmatrix} &J'\\ J& \end{bmatrix} =&\left(J\natural_1t_1\left(J'\right)\right)\#_1\left(s_1\left(J\right)\natural_1 J'\right) \intertext{ and } \begin{bmatrix} J&\\ &J' \end{bmatrix}=&\left(t_1\left(J\right)\natural_{1}J'\right)\#_1\left(J\natural_1s_1\left(J'\right)\right) \end{align*} It follows from the previous axioms that they are associative. \item (\textbf{Interchange 3-cells}) For any 2-cells $\Gamma$ and $\Gamma'$, there is a 3-cell (called an interchange 3-cell) $$ \xymatrix{{\begin{bmatrix} &\Gamma'\\ \Gamma & \end{bmatrix}}=s_3(\Gamma\#\Gamma')\ar[rr]^{\scriptstyle(\Gamma\#\Gamma')} & &t_3(\Gamma\#\Gamma')={\begin{bmatrix} \Gamma &\\ & \Gamma' \end{bmatrix}}} $$ \item (\textbf{2-functoriality}) For any 3-cells $\xymatrix{ \Gamma_1=s_3(J)\ar[r]^{\scriptstyle J} &t_3(J)=\Gamma_2}$ and $\xymatrix{ \Gamma'_1=s_3(J')\ar[r]^{\scriptstyle J'} &t_3(J')=\Gamma'_2}$, with $s_1(J')=t_1(J)$, the following upwards compositions of 3-cells coincide: $$ \xymatrix{{\begin{bmatrix} &\Gamma'_1\\ \Gamma_1 \end{bmatrix}}\ar[rr]^{\scriptstyle(\Gamma_1\#\Gamma'_1)} & &{\begin{bmatrix} \Gamma_1&\\ &\Gamma'_1 \end{bmatrix}}\ar[rr]^{\begin{bsmallmatrix} J&\\ &J' \end{bsmallmatrix}} & &{\begin{bmatrix} \Gamma_2&\\ &\Gamma'_2 \end{bmatrix}}} $$ and $$ \xymatrix{{\begin{bmatrix} &\Gamma'_1\\ \Gamma_1 \end{bmatrix}}\ar[rr]^{\begin{bsmallmatrix} &J'\\ J& \end{bsmallmatrix}} & &{\begin{bmatrix} &\Gamma'_2\\ \Gamma_2& \end{bmatrix}}\ar[rr]^{\scriptstyle(\Gamma_2\#\Gamma'_2)} & &{\begin{bmatrix} \Gamma_2&\\ &\Gamma'_2 \end{bmatrix}}} $$ This of course means that the collection $\Gamma\#\Gamma'$, for arbitrary 2-cells $\Gamma$ and $\Gamma'$ with $s_1(\Gamma')=t_1(\Gamma)$, defines a natural transformation between the 2-functors of item 11.
Note that, by using the interchange condition for the vertical and upwards compositions, we only need to verify this condition for the case when either $J$ or $J'$ is an identity. (This is the way this axiom appears in \cite{KP,Cr,Be}.) \item (\textbf{1-functoriality}) For any three 2-cells $\gamma \xrightarrow{\Gamma \ } \phi \xrightarrow{\Gamma'}\psi$ and $\gamma'' \xrightarrow{\Gamma''} \phi''$ with $s_2(\Gamma')=t_2(\Gamma)$ and $t_1(\Gamma)=t_1(\Gamma')=s_1(\Gamma'')$, the following 1-vertical compositions of 3-cells coincide: $$ \xymatrix{{\begin{bmatrix} \psi \natural_1&\Gamma''\\ \Gamma'& \natural_1\gamma''\\ \Gamma&\natural_1\gamma'' \end{bmatrix}}\ar[rr]^{\begin{bsmallmatrix} \Gamma'\#\Gamma''\\ \Gamma\natural_1\gamma'' \end{bsmallmatrix}} & &{\begin{bmatrix} \Gamma'&\natural_1\phi''\\ \phi\natural_1& \Gamma''\\ \Gamma&\natural_1\gamma'' \end{bmatrix}}\ar[rr]^{\begin{bsmallmatrix} \Gamma'\natural_1\phi''\\ \Gamma\#\Gamma'' \end{bsmallmatrix}} & &{\begin{bmatrix} \Gamma'&\natural_1\phi''\\\Gamma& \natural_1\phi''\\ \gamma\natural_1&\Gamma'' \end{bmatrix}}} $$ and $$ \xymatrix{{\begin{bmatrix} \psi \natural_1&\Gamma''\\ \Gamma'& \natural_1\gamma''\\ \Gamma&\natural_1\gamma'' \end{bmatrix}}\ar[rr]^{\begin{bsmallmatrix} \Gamma'\\ \Gamma \end{bsmallmatrix}\#\Gamma''} & &{\begin{bmatrix} \Gamma'&\natural_1\phi''\\\Gamma& \natural_1\phi''\\ \gamma\natural_1&\Gamma'' \end{bmatrix}}} $$ An analogous identity, obtained by exchanging the roles of the first and second columns, should also hold. \end{enumerate} A Gray 3-(group)-groupoid is a Gray 3-groupoid in which each of the cell sets $C_0,C_1,C_2$ and $C_3$ is a group and the structural maps $s_i,t_i$ are homomorphisms of groups. We can show a Gray 3-(group)-groupoid with a single 0-cell $*$ pictorially as $$ \begin{array}{c} \boldsymbol {\mathfrak{C}^2} \end{array} {:=} \begin{array}{c} \xymatrix@C+=1.5cm{ & C_3\ar[ldd]_{\scriptscriptstyle s_3,t_3}\ar[rdd]^{\scriptscriptstyle s_1,t_1}\ar[d] & \\ & {*} & \\ C_2 \ar[rr]_{\scriptscriptstyle s_2,t_2}\ar[ur] & & C_1. \ar[ul] } \end{array} $$ \begin{defn} \rm{(\cite{Martins}) A (strict) Gray functor $\mathcal{F}:\mathfrak{C}^2\rightarrow\mathfrak{C'}^2$ between Gray 3-groupoids $\mathfrak{C}^2$ and $\mathfrak{C'}^2$ is given by maps $C_i\rightarrow C'_i$ preserving all compositions, identities and boundaries, strictly. } \end{defn} We now recall the construction, from \cite{Jinan} and \cite{KP}, of a Gray category structure on chain complexes of length-2 of vector spaces over $K$. Let $C_2$, $C_1$ and $C_0$ be vector spaces over $K$ and let $\delta^C_2:C_2\rightarrow C_1$, $\delta^C_1:C_1\rightarrow C_0$ be linear transformations. Suppose that $$ \begin{tikzcd} \mathcal{C}:=C_{2}\ar[r,"{\delta^C_2}"]&C_{1}\ar[r,"{\delta^C_{1}}"]& C_{0} \end{tikzcd} $$ is any chain complex of length-2. The 0-cells of $ \mathbf{Ch}^2_K$ will be the chain complexes of length-2. A chain map $F=(f_2,f_1,f_0)$ from a chain complex $\mathcal{C}$ to a chain complex $\mathcal{D}$ is given by the following commutative diagram $$ \begin{array}{c} \xymatrix{\mathcal{C}\ar[d]_{F}\\ \mathcal{D}} \end{array} {:=} \begin{array}{c} \xymatrix{C_{2} \ar[r]^-{\delta^C_2}\ar[d]^-{f_{2}}& C_{1} \ar[r]^-{\delta^C_{1}}\ar[d]^-{f_1}&C_{0}\ar[d]^-{f_{0}}\\ D_{2} \ar[r]_-{\delta^D_2} & D_{1} \ar[r]_-{\delta^D_1} &D_{0} } \end{array} $$ where $f_2,f_1$ and $f_0$ are linear transformations. The 1-cells of $ \mathbf{Ch}^2_K$ will be the chain maps between chain complexes of length-2.
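As a minimal illustration (our example, not taken from \cite{KP} or \cite{Jinan}), let $C_2=C_1=C_0=K$ with $\delta^C_2=\mathrm{id}_K$ and $\delta^C_1=0$; this is a chain complex of length-2 since $\delta^C_1\delta^C_2=0$. A triple $(f_2,f_1,f_0)$ of linear maps $K\rightarrow K$ is a chain map from this complex to itself precisely when $f_1\delta^C_2=\delta^C_2f_2$, i.e. $f_1=f_2$; the remaining condition $f_0\delta^C_1=\delta^C_1f_1$ reads $0=0$, so the component $f_0$ is arbitrary.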
Let $F$ and $G$ be chain maps between $\mathcal{C}$ and $\mathcal{D}$. A 1-homotopy $(H,F):F\Rightarrow G$ with the chain homotopy components $H'_1$ and $H'_2$ can be represented by the diagram \begin{equation*} \xymatrix{C_{2} \ar[rr]^-{\delta^C_2}\ar@<0.5ex>[dd]^-{g_{2}}\ar@<-0.5ex>[dd]_-{f_{2}} && C_{1} \ar[ddll]|{H'_2} \ar[rr]^-{\delta^C_1}\ar@<0.5ex>[dd]^-{g_1}\ar@<-0.5ex>[dd]_-{f_1}&&C_{0} \ar[ddll]|{H'_{1}}\ar@<0.5ex>[dd]^-{g_{0}}\ar@<-0.5ex>[dd]_-{f_{0}} \\ \\ D_{2}\ar[rr]_{\delta^D_2} && D_{1} \ar[rr]_{\delta^D_1} \ar[rr] &&D_{0}} \end{equation*} where the chain homotopy components $H'_1:C_0\rightarrow D_1$ and $H'_2:C_1\rightarrow D_2$ are linear transformations satisfying the following conditions: \begin{enumerate} \item $\delta^D_1 H'_1=g_0-f_0$, \item $H'_1\delta^C_1+\delta^D_2H'_2=g_1-f_1,$ \item $H'_2\delta^C_2=g_2-f_2.$ \end{enumerate} The 2-cells of $ \mathbf{Ch}^2_K$ will be the 1-homotopies between chain maps. Let $(H,F),(K,F):F\Rightarrow G$ be 1-homotopies between $F$ and $G$. A 2-homotopy $\alpha:=(\alpha',H,F):(H,F)\Rrightarrow (K,F)$ with the chain homotopy component $\alpha':C_0\rightarrow D_2$ can be given by the diagram \begin{equation*} \xymatrix{C_{2} \ar[rr]^-{\delta^C_2}\ar@<0.5ex>[dd]\ar@<-0.5ex>[dd] && C_{1} \ar@<-0.6ex>[ddll]|<<<<<<<<<<<<<<<<{\scriptscriptstyle{H'_2}} \ar@<0.6ex>[ddll]|<<<<<<<<{\scriptscriptstyle{K'_2}} \ar[rr]^-{\delta^C_1}\ar@<0.5ex>[dd]\ar@<-0.5ex>[dd]&&C_{0}\ar@{-->}@/^{0.5pc}/[ddllll]_(.4){\large\boldsymbol\alpha'} \ar@<-0.6ex>[ddll]|<<<<<<<<<<<<<<<<{\scriptscriptstyle{H'_1}} \ar@<0.6ex>[ddll]|<<<<<<<<{\scriptscriptstyle{K'_1}}\ar@<0.5ex>[dd]\ar@<-0.5ex>[dd] \\ \\ D_{2}\ar[rr]_{\delta^D_2} && D_{1} \ar[rr]_{\delta^D_1} \ar[rr] &&D_{0}} \end{equation*} where the homotopy component $\alpha'$ satisfies the conditions $\delta^D_2\alpha'=K'_1-H'_1$ and $\alpha'\delta^C_1=K'_2-H'_2$. Therefore, we can illustrate the 0-, 1-, 2- and 3-cells in $ \mathbf{Ch}^2_K$ by the diagram $$ \begin{tikzcd}[row sep=0.3cm,column sep=scriptsize] & \ar[dd, Rightarrow, "{(H,F)}"{swap,name=f,description}]& &\ar[dd, Rightarrow, "{(K,F)}"'{swap,name=g,description}]& \\ {\mathcal{C}\ }\ar[rrrr,bend left=40,"F"] \ar[rrrr,bend right=40,"{G}"']& \tarrow["{(\alpha', H,F)}" ,from=f,to=g, shorten >= -1pt,shorten <= 1pt ]{rrr}& & & {\ \mathcal{D}}. \\ \ & \ & \ & \ & \ \end{tikzcd} $$ A 3-cell $\alpha:=(\alpha',H,F):(H,F)\Rrightarrow (K,F)$ may be written briefly as $\alpha:H\Rrightarrow K$. The source and target maps are given by $s_3(\alpha)=(H,F)=\left((H'_1,H'_2),(f_2,f_1,f_0)\right)$ and $t_3(\alpha)=(K,F)=\left((K'_1,K'_2),(f_2,f_1,f_0)\right)$, where $K'_1=H'_1+\delta^D_2\alpha'$ and $K'_2=H'_2+\alpha'\delta^C_1$. The 2-vertical composition of 3-cells $\alpha:=(\alpha',H,F)$ and $\beta:=(\beta',K,F)$ is defined by $$ \begin{bmatrix} \alpha\\ \beta \end{bmatrix}=\beta\#_3\alpha:=(\beta'+\alpha',H,F) $$ with $t_3(\alpha)=s_3(\beta).$ The other source and target maps for a 3-cell $\alpha$ are given by $s_2(\alpha)=F, \ \ t_2(\alpha)=G$, and $s_1(\alpha)=\mathcal{C},\ \ t_1(\alpha)=\mathcal{D}$. The composition of 1-cells is the usual composition of chain maps.
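Let us also record a short verification (ours, included for completeness) that the 2-vertical composition above is well defined on targets. If $t_3(\alpha)=(K,F)$ with $K'_1=H'_1+\delta^D_2\alpha'$ and $K'_2=H'_2+\alpha'\delta^C_1$, and $t_3(\beta)=(L,F)$ with $L'_1=K'_1+\delta^D_2\beta'$ and $L'_2=K'_2+\beta'\delta^C_1$, then the composite $(\beta'+\alpha',H,F)$ has target components
$$ H'_1+\delta^D_2(\alpha'+\beta')=L'_1, \qquad H'_2+(\alpha'+\beta')\delta^C_1=L'_2, $$
so that $t_3(\beta\#_3\alpha)=t_3(\beta)$, as required.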
For 1-cells $F,G,T:\mathcal{C}\rightarrow \mathcal{D}$, the vertical composition of 2-cells $(H,F):F\Rightarrow G$ and $(K,G):G\Rightarrow T$ is given by $K\#_2 H:F\Rightarrow T$, where the chain homotopy component is $(K\#_2 H)'=K'+H'$, with $K'=(K'_1,K'_2)$ and $H'=(H'_1,H'_2).$ For any 2-cells $$\Gamma=(K,G)=\left((K'_1,K'_2),(G_2,G_1,G_0)\right):G\Rightarrow G'$$ and $$\Gamma'=(H,F)=\left((H'_1,H'_2),(F_2,F_1,F_0)\right):F\Rightarrow F',$$ the whiskering of $F'$ on $(K,G)$ is given by $$(K,G)\natural_1 F'=(K'_1F'_0,K'_2F'_1,G\#_0F')$$ and this can be represented pictorially as $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] & & \ar[dd,Rightarrow,"\scriptscriptstyle{(K,G)}"{description}] & \\ \mathfrak{C} \ar[r,"\scriptscriptstyle F'"] &\mathfrak{D}\ar[rr,bend left=50,"\scriptscriptstyle G"] \ar[rr,bend right=50,"\scriptscriptstyle G'"'] & & \mathcal{E} \\ \ & \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] & \ar[dd, Rightarrow, "\scriptscriptstyle{(K,G)\natural_1 F'}"{description}] & \\ \mathfrak{C}\ar[rr,bend left=50,"\scriptscriptstyle G\#_0 F'"] \ar[rr,bend right=50,"\scriptscriptstyle G'\#_0 F'"'] & &\mathcal{E}.\\ & \ \end{tikzcd} \end{array} $$ The whiskering of $G$ on $(H,F)$ is given by $$G\natural_1(H,F)=(G_1H'_1,G_2H'_2,G\#_0F)$$ as illustrated in the following diagram: $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] &\ar[dd,Rightarrow,"\scriptscriptstyle{(H,F)}"description]&\\ \mathfrak{C}\ar[rr,bend left=50,"\scriptscriptstyle F"]\ar[rr,bend right=50,"\scriptscriptstyle {F'}"']& & \mathfrak{D}\ar[r,"\scriptscriptstyle G"]& \mathcal{E}\\ &\ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] &\ar[dd,Rightarrow,"{\scriptscriptstyle G\natural_1 (H,F)}"description]&\\ \mathfrak{C} \ar[rr,bend left=50,"\scriptscriptstyle G\#_0 F"]\ar[rr,bend right=50,"\scriptscriptstyle G\#_0 F'"']& & \mathcal{E}.\\ &\ \end{tikzcd} \end{array} $$ The horizontal compositions of $\Gamma$ and $\Gamma'$ can be given by the vertical composition of these two 2-cells: $$ \begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix}=(\Gamma \natural_1 t_1(\Gamma'))\#_2 (s_1(\Gamma) \natural_1 \Gamma')=(K'_1F'_0 + G_1H'_1,K'_2F'_1 + G_2H'_2, G\#_0 F) $$ and $$ \begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}=(t_1(\Gamma)\natural_1\Gamma')\#_2 (\Gamma\natural_1 s_1(\Gamma'))=(K'_1F_0 + G'_1H'_1,K'_2F_1 + G'_2H'_2, G\#_0 F). $$ For any 3-cells $$J=(\beta',K,G):\Gamma_1=(K,G)\Rrightarrow\Gamma_2=(K',G)$$ and $$J'=(\alpha',H,F):\Gamma'_1=(H,F)\Rrightarrow\Gamma'_2=(H',F),$$ the horizontal composition of $J$ and $J'$, $$ \begin{bmatrix} &J' \\ J& \end{bmatrix}:\begin{bmatrix} &\Gamma'_1 \\ \Gamma_1& \end{bmatrix}\longrightarrow\begin{bmatrix} &\Gamma'_2 \\ \Gamma_2& \end{bmatrix}, $$ with $t_1(J')=\mathcal{D}=s_1(J)$, can be defined by $$ \begin{bmatrix} &J' \\ J& \end{bmatrix}=\left(G_2\alpha'+\beta'F'_0,\left(K'_1F'_0+G_1H'_1,K'_2F'_1+G_2H'_2\right),G\#_0F\right).
$$ In these morphisms, we have $$ \begin{bmatrix} &\Gamma'_1 \\ \Gamma_1& \end{bmatrix}=\left(\left(K'_1F'_0+G_1H'_1,K'_2F'_1+G_2H'_2\right),G\#_0F \right)=s_3\left(\begin{bmatrix} &J' \\ J& \end{bmatrix}\right) $$ and $$ \begin{bmatrix} &\Gamma'_2 \\ \Gamma_2& \end{bmatrix}=\left(\left(K''_1F'_0+G_1H''_1,K''_2F'_1+G_2H''_2\right),G\#_0F \right)=t_3\left(\begin{bmatrix} &J' \\ J& \end{bmatrix}\right) $$ where, using $\delta_2^EG_2=G_1\delta_2^D$, \begin{align*} K'_1F'_0+G_1H'_1+\delta_2^E(G_2\alpha'+\beta'F'_0)=&K'_1F'_0+G_1H'_1+\delta_2^EG_2\alpha'+\delta_2^E\beta'F'_0\\ =&K''_1F'_0+G_1H''_1 \hspace{1.0cm} (\because K''_1=\delta_2^E\beta'+K'_1 \ ,\ H''_1=\delta_2^D\alpha'+H'_1) \end{align*} and \begin{align*} K'_2F'_1+G_2H'_2+(G_2\alpha'+\beta'F'_0)\delta_1^C=&K'_2F'_1+G_2H'_2+G_2\alpha'\delta_1^C+\beta'F'_0\delta_1^C\\ =&K''_2F'_1+G_2H''_2. \end{align*} Similarly, the horizontal composition $$ \begin{bmatrix} J& \\ &J' \end{bmatrix}:\begin{bmatrix} \Gamma_1& \\ &\Gamma'_1\end{bmatrix}\longrightarrow\begin{bmatrix} \Gamma_2& \\ &\Gamma'_2 \end{bmatrix} $$ can be defined. Therefore, $\mathbf{Ch}^2_K$ has a Gray category structure. For calculations with chain complexes of length-2, 1-homotopies between chain maps, and 2-homotopies between 1-homotopies, the following diagram can be regarded as our \textit{black box}, containing the necessary notation and tools. $$ \xymatrix{C_2\ar@<0.6ex>[rr]|{\scriptscriptstyle F_2}\ar@<-0.6ex>[rr]|<<<<<{\scriptscriptstyle F'_2}\ar[dd]_{\scriptscriptstyle \delta^C_2}&& D_2\ar@<0.6ex>[rr]|{\scriptscriptstyle G_2}\ar@<-0.6ex>[rr]|<<<<<{\scriptscriptstyle G'_2}\ar[dd]^{\scriptscriptstyle\delta^D_2}&& E_2\ar[dd]^{\scriptscriptstyle \delta^E_2}\\ \\ C_1\ar@<0.6ex>[rr]|{\scriptscriptstyle F_1}\ar@<-0.6ex>[rr]|<<<<<{\scriptscriptstyle F'_1}\ar[dd]_{\delta^C_1}\ar@<0.6ex>[uurr]|{\scriptscriptstyle H'_2}\ar@<-0.6ex>[uurr]|<<<<<<<{\scriptscriptstyle H''_2}&& D_1\ar[dd]^{\delta^D_1}\ar@<0.6ex>[rr]|{\scriptscriptstyle G_1}\ar@<-0.6ex>[rr]|<<<<<{\scriptscriptstyle G'_1}\ar@<0.6ex>[uurr]|{\scriptscriptstyle K'_2}\ar@<-0.6ex>[uurr]|<<<<<<<{\scriptscriptstyle K''_2} &&E_1\ar[dd]^{\delta^E_1}\\ \\ C_0\ar@<0.6ex>[rr]|{\scriptscriptstyle F_0}\ar@<-0.6ex>[rr]|<<<<<{\scriptscriptstyle F'_0}\ar@<0.6ex>[uurr]|{\scriptscriptstyle H'_1}\ar@<-0.6ex>[uurr]|<<<<<<<{\scriptscriptstyle H''_1}\ar@{-->}@/^{-0.5pc}/[uuuurr]_(.7){\large\boldsymbol\alpha'}&&D_0\ar@<0.6ex>[rr]|{\scriptscriptstyle G_0}\ar@<-0.6ex>[rr]|<<<<<{\scriptscriptstyle G'_0}\ar@<0.6ex>[uurr]|{\scriptscriptstyle K'_1}\ar@<-0.6ex>[uurr]|<<<<<<<{\scriptscriptstyle K''_1}\ar@{-->}@/^{-0.5pc}/[uuuurr]_(.7){\large\boldsymbol\beta'}&&E_0.} $$ \subsection{The Structure Aut($\delta$) in $\mathbf{Ch}^2_K$ as a Cat$^2$-Group} To define linear and matrix representations of a cat$^2$-group $\mathfrak{C}^2$ as a 3-functor $\phi: \mathfrak{C}^2\rightarrow \mathbf{Ch}^2_K$, Al-asady \cite{Jinan} constructed an automorphism cat$^2$-group $\mathbf{Aut(\delta)}$ as a Gray 3-groupoid with a single 0-cell, or a cat$^2$-group, in $\mathbf{Ch}^2_K$. In this case, the functorial image of $\mathfrak{C}^2$ under $\phi$ will be $\mathbf{Aut(\delta)}$. This structure is the collection of all chain automorphisms $\delta \rightarrow \delta$, the 1-homotopies between them, and the 2-homotopies between 1-homotopies. Since an isomorphism from an object to itself is known as an automorphism, the structure $\mathbf{Aut(\delta)}$ is called an automorphism cat$^2$-group. The following definition is due to \cite{Jinan}.
\begin{defn}\rm{ Let
$$
\begin{tikzcd} \delta:=C_{2}\ar[r,"{\delta_2}"]&C_{1}\ar[r,"{\delta_{1}}"]& C_{0} \end{tikzcd}
$$
be a chain complex of vector spaces over the field $K$ of length-2. The automorphism cat$^2$-group $\mathbf{Aut}(\delta)$ consists of:
$1.$ the group $\mathbf{Aut}(\delta)_0=\{\delta\}$, where $\delta:=(\delta_2,\delta_1)$ is the chain complex of length-2 given above,
$2.$ the group $\mathbf{Aut}(\delta)_1$ of chain automorphisms $F:=(F_2,F_1,F_0):\delta\rightarrow\delta$,
$3.$ the group $\mathbf{Aut}(\delta)_2$ of all 1-homotopies $(H,F):=((H'_1,H'_2),F):F\Rightarrow G$ between chain automorphisms,
$4.$ the group $\mathbf{Aut}(\delta)_3$ of all 2-homotopies $\alpha:=(\alpha',H,F):(H,F)\Rrightarrow (K,F)$ between 1-homotopies together with source, target and identity maps between these groups given in \cite{Jinan}. }
\end{defn}
\section{2-Crossed Modules and Gray 3-Groupoids with a Single 0-cell}
In this section, we first recall the construction of a Gray 3-groupoid with a single object from a 2-crossed module (cf. \cite{KP}, \cite{Martins}, \cite{Jinan}) to use it for giving the regular representation in the next section. Cat$^2$-groups, \cite{WL}, are higher dimensional cat$^1$-groups. They can be regarded as cat$^1$-objects in the category of cat$^1$-groups. A cat$^2$-group $\mathfrak{C}^2$ is a 5-tuple $(G,s_1,t_1,s_2,t_2)$ where $(G,s_i,t_i)$ \ $i=1,2$ are cat$^1$-groups and
\begin{enumerate}
\item $s_is_j=s_js_i,\ t_it_j=t_jt_i, \ s_it_j=t_js_i \ \text{for} \ i,j=1,2 \ \,i\neq j$
\item $[\mathrm{Ker} s_i,\mathrm{Ker} t_i]=1\ \text{for} \ i=1,2.$
\end{enumerate}
Recall from \cite{Con} that \emph{a 2-crossed module} of groups consists of a complex of groups
$$
\xymatrix{\mathfrak{X}:=(L\ar[r]^-{\partial_2}&M\ar[r]^{\partial_1}&N)}
$$
together with (a) actions of $N$ on $M$ and $L$ so that $\partial _{2},\partial _{1}$ are morphisms of $N$-groups, and (b) an $N$-equivariant function
\begin{equation*}
\{\quad ,\quad \}:M\times M\longrightarrow L
\end{equation*}
called a Peiffer lifting. This data must satisfy the following axioms:
\begin{equation*}
\begin{array}{lrrll}
\mathbf{2CM1)} & & \partial _{2}\{m,m^{\prime }\} & = & \left( ^{\partial _{1}m}m^{\prime }\right) mm^{\prime }{}^{-1}m^{-1} \\
\mathbf{2CM2)} & & \{\partial _{2}l,\partial _{2}l^{\prime }\} & = & [l^{\prime },l] \\
\mathbf{2CM3)} & & (i)\quad \{mm^{\prime },m^{\prime \prime }\} & = & ^{\partial _{1}m}\{m^{\prime },m^{\prime \prime }\}\{m,m^{\prime }m^{\prime \prime }m^{\prime }{}^{-1}\} \\
& & (ii)\quad \{m,m^{\prime }m^{\prime \prime }\} & = & \{m,m^{\prime }\}^{mm^{\prime }m^{-1}}\{m,m^{\prime \prime }\} \\
\mathbf{2CM4)} & & \{m,\partial _{2}l\}\{\partial _{2}l,m\} & = & ^{\partial _{1}m}ll^{-1} \\
\mathbf{2CM5)} & & ^{n}\{m,m^{\prime }\} & = & \{^{n}m,^{n}m^{\prime }\} \\
\end{array}
\end{equation*}
for all $l,l^{\prime }\in L$, $m,m^{\prime },m^{\prime \prime }\in M$ and $n\in N$. Now suppose that $ \xymatrix{\mathfrak{X}:=(L\ar[r]^-{\partial_2}&M\ar[r]^{\partial_1}&N)} $ is a 2-crossed module of groups with the Peiffer lifting $\{\quad ,\quad \}:M\times M\longrightarrow L$. We will remind ourselves of the connection between 2-crossed modules and Gray 3-groupoids with a single 0-cell and cat$^2$-groups. First, we define the groups of 0-cells, 1-cells, 2-cells and 3-cells. The group of 0-cells is $C_0=\{*\}$. The group of 1-cells is $C_1=N$. Then, a 1-cell in $\mathfrak{C}^2$ is an element $n\in N$. It will be considered as a morphism over $*$.
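Before giving the details, we summarise the shape of the construction; everything stated here is defined precisely below. The four groups of cells will be
$$C_0=\{*\},\qquad C_1=N,\qquad C_2=M\rtimes N,\qquad C_3=L\rtimes M\rtimes N,$$
with a 2-cell $(m,n)$ running from the 1-cell $n$ to the 1-cell $\partial_1 mn$, and a 3-cell $(l,m,n)$ running from the 2-cell $(m,n)$ to the 2-cell $(\partial_2 lm,n)$.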
The composition of 1-cells $n$ and $n'$ in $C_1$ is given by the operation of the group $N$. We can show a 1-cell $n\in C_1$ by $n:* \rightarrow *$. Using the group action of $N$ on $M$, we can create the semi-direct product group $C_2=M \rtimes N$ together with the operation $ (m,n)(m',n')=(m^{n}m^{\prime},nn') $ for $m,m'\in M$ and $n,n'\in N$. An element $(m,n)$ of $C_2$ can be considered as a 2-cell from $n$ to $\partial_1 mn$, so we can define source, target maps between $C_2$ and $C_1$ as follows: for $(m,n)\in (M \rtimes N)=C_2$, the 1-source of this 2-cell is $n$ and so $s_2(m,n)=n$, and the 1-target of this 2-cell is $t_2(m,n)=\partial_1 mn$. The 0-source and 0-target of $(m,n)$ are $*$. We can represent a 2-cell $(m,n)$ in $\mathfrak{C}^2$ pictorially as:
$$ \begin{array}{c} \xymatrix{ {*} \ar[r]\ar[d]_{\scriptscriptstyle n}^{\:\: \scriptscriptstyle(m,n)} &{*}\ar[d]^{ \scriptscriptstyle\partial_1 mn}\\ {*} \ar[r] & {*} } \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=tiny,column sep=small] & \ar[dd, Rightarrow, "{\scriptscriptstyle(m,n)}"] \\ {*}\ar[rr,bend left=50,"\scriptscriptstyle n"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1 mn"'] & & \ {*} \\ & \ \end{tikzcd} \end{array} $$\\
The vertical composition of $\Gamma=\left(m,n\right)$ and $\Gamma'=\left(m',\partial_1 mn\right)$ in $C_2$ is given by
$$ \Gamma' \#_{2}\Gamma=\begin{bmatrix} \Gamma\\ \Gamma' \end{bmatrix}=\left(m',\partial_1 mn\right)\#_2(m,n)=(m'm,n).$$
To define the horizontal composition of 2-cells, we need to give the whiskering of a 1-cell on a 2-cell on the right and left sides. The left whiskering of $n'\in C_1$ on $(m,n)\in C_2$ is $n'\natural_{1}(m,n)=(^{n'}m,n'n)$. Similarly the right whiskering of $n^\prime$ on $(m,n)$ is given by $\left(m,n\right)\natural_1n^\prime=\left(m,nn^\prime\right).$ Therefore, the horizontal compositions of $\Gamma$ and $\Gamma^\prime$ are:
$$ \begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix} =(\Gamma\natural_1t_1(\Gamma^\prime))\#_2(s_1(\Gamma)\natural_1\Gamma^\prime) =(m^nm^\prime,nn^\prime) $$
and
$$ \begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}=(t_1(\Gamma)\natural_{1}\Gamma^\prime)\#_2(\Gamma\natural_1s_1(\Gamma^\prime)) =(^{\partial_1m}(^nm^\prime)m,nn^\prime). $$
Note that $\begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix}\neq \begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}$ since $\partial_1$ is in general only a pre-crossed module, not a crossed module. We can show easily that $s_2$ and $t_2$ are homomorphisms of groups from $C_2$ to $C_1$. For $\Gamma=(m,n)$ and $\Gamma^\prime=(m^\prime,n^\prime)$, we have:
$$s_2\left(\Gamma\#_0\Gamma^\prime\right)=s_2\left(m^nm^\prime,nn^\prime\right)=nn^\prime=s_2(\Gamma)s_2(\Gamma')$$
and
$$t_2\left(\Gamma\#_0\Gamma^\prime\right)=t_2\left(m^nm^\prime,nn^\prime\right)=\partial_1\left(m^nm^\prime\right)nn^\prime=\partial_1 mn\partial_1 m' n'=t_2(\Gamma)t_2(\Gamma').$$
Now, we define the group of 3-cells in $\mathfrak{C}^2$. Using the group action of $M$ and of $N$ on $L$, we can create the semi-direct product group $C_3=L\rtimes M\rtimes N$ with the multiplication
$$(l,m,n)(l',m',n')=(l\{\partial_2(^{n}l'),m\}^{n}(l')^{-1},m^{n}m',nn')$$
where $\{-,-\}:M\times M\rightarrow L$ is the Peiffer lifting of the 2-crossed module $\mathfrak{X}$. Using the equality $\{\partial_2(l),m\}l^{-1}=^ml$, we can rewrite this as
$$(l,m,n)(l',m',n')=(l^{m}(^{n}l'),m^{n}m',nn').$$
Any 3-cell in $C_3$ can be represented by an element $(l,m,n)$ in $L\rtimes M\rtimes N$ for $l\in L,\ m\in M, \ n\in N$.
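For completeness, the passage between these two forms of the multiplication is the following one-line computation, obtained from the displayed identity by replacing $l$ with $^{n}l'$:
$$l\,\{\partial_2(^{n}l'),m\}\,{}^{n}(l')^{-1}=l\,\{\partial_2(^{n}l'),m\}\,(^{n}l')^{-1}=l\,{}^{m}(^{n}l').$$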
The 2-source of a 3-cell $(l,m,n)$ is given by $s_3(l,m,n)=(m,n)$ and 2-target is given by $t_3(l,m,n)=(\partial_2 lm,n).$ We can show a 3-cell in $C_3$ by a diagram; $$ \begin{array}{c} \xymatrix@C-=0.5cm{{*} \ar@{=>}[rr]^{\scriptscriptstyle(m,n)}\ar[dd]_{\scriptscriptstyle n} & &{*}\ar[dd]^{\scriptscriptstyle \partial_1 mn} \\&\scriptscriptstyle(l,m,n)& \\ {*} \ar@{=>}[rr]_{\scriptscriptstyle(\partial_2 lm,n)} & & {*} } \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=scriptsize] & \ar[dd, Rightarrow, "{\scriptscriptstyle (m,n)}" {swap,name=f,description}]& &\ar[dd, Rightarrow, "{\scriptscriptstyle (\partial_2 lm,n)}"'{swap,name=g,description}]& \\ {*} \ar[rrrr,bend left=40,"\scriptscriptstyle n"] \ar[rrrr,bend right=40,"{\scriptscriptstyle \partial_1 mn}"']& \tarrow["\scriptscriptstyle {(l,m,n)}" ,from=f,to=g, shorten >= -1pt,shorten <= 1pt ]{rrr}& & & {*} \\ \ & \ & \ & \ & \ \end{tikzcd} \end{array} $$ The 2-vertical composition of 3-cells $J'=(l,m,n)$ and $J=(l^\prime,\partial_2lm,n)$ is \\ $$ J\#_{3}J' =\begin{bmatrix} J'\\ J \end{bmatrix}=(l^\prime,\partial_2lm,n)\#_3(l,m,n)=(l^\prime l,m,n). $$ The right whiskering of a 2-cell $\Gamma=(m,n)$ on a 3-cell $J=\left(l,m^\prime,\partial_1 mn\right)$ is given by $$\left(l,m^\prime,\partial_1 mn\right)\natural_2(m,n)=(l,m^\prime m,n).$$ This can be represented pictorially as $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & & \tarrow["\scriptscriptstyle{(l,m',n')}" {description} ]{dd} & \\ \scriptstyle n \ar[r,Rightarrow,"\scriptscriptstyle {(m,n)}"] &\scriptstyle {\partial_1 mn \ }\ar[rr,Rightarrow,bend left=40,"\scriptscriptstyle{ \left(m',n' \right)}"] \ar[rr,Rightarrow,bend right=50,"\scriptscriptstyle {\left(\partial_2 lm',n' \right)}"'] & &\ \ \scriptstyle {\partial_1 m'n'}\\ \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \tarrow["\scriptscriptstyle{(l,m'm,n)}" {description} ]{dd}&\\ \scriptstyle {n} \ \ \ \ar[rr,Rightarrow,bend left=40,"\scriptscriptstyle { \left(m'm,n \right)}"] \ar[rr,Rightarrow,bend right=50,"\scriptscriptstyle {\left(\partial_2 lm'm,n \right)}"'] & & \scriptstyle {\partial_1 m'n'}\\ \ & \ &\ \end{tikzcd} \end{array} $$ where $\partial_1 mn=n'.$ Similarly, 1-vertical composition in $\mathfrak{C}^2$ of 3-cells $(l,m,n)$ and $(l',m',\partial_1mn)$ is given by $$(l,m,n)\#_1(l',m',\partial_1mn)=(l'^{m'}l,m'm,n).$$ The left whiskering of a 2-cell $\Gamma=(m',\partial_1 mn)$ on a 3-cell $J=(l,m,n)$ is given by $$\left(m',\partial_1 mn\right)\natural_2\left(l,m,n\right)=(^{m'}l,m' m,n)$$ where \ $^{m^\prime}l=\{\partial_2l,m^\prime\}l$. 
This can be represented pictorially as
$$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] &\tarrow["\scriptscriptstyle{(l,m,n)}"{description}]{dd}&\\ \scriptstyle {\ \ n} \ \ \ \ar[rr,Rightarrow,bend left=40,"\scriptscriptstyle {(m,n)}"]\ar[rr,Rightarrow,bend right=50,"\scriptscriptstyle {(\partial_{2}lm,n)}"']& & \scriptstyle {\partial_{1} mn}\ar[r,Rightarrow,"\scriptscriptstyle {(m',n')}"]& \scriptstyle {\partial_{1}m'n'}\\ &\ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] &\tarrow["{\scriptscriptstyle(^{m^\prime}l,m^\prime m,n)}"{description}]{dd}&\\ \scriptstyle{n}\ \ \ \ar[rr,Rightarrow,bend left=40,"\scriptscriptstyle {(m'm,n)}"]\ar[rr,Rightarrow,bend right=50,"\scriptscriptstyle {(m'\partial_{2}lm,n)}"']& & \scriptstyle {\partial_{1}m'n'}\\ &\ \end{tikzcd} \end{array} $$
where $\partial_1 mn=n'.$ The right whiskering of a 1-cell $n'$ on a 3-cell $(l,m,n)$ is $(l,m,n)\natural_1 n^\prime=(l,m,nn^\prime)$ and the left whiskering is $n^\prime\natural_1(l,m,n)=(^{n'}l,^{n'}m,n'n).$ The horizontal compositions of 3-cells $J=(l,m,n):\Gamma_1\Rrightarrow \Gamma_2$ and $J^\prime=(l^\prime,m^\prime,n^\prime):\Gamma'_1\Rrightarrow \Gamma'_2$ in $C_3$ are given by
$$ \begin{bmatrix} &J' \\ J& \end{bmatrix}=(J\natural_1t_1(J^\prime))\#_1(s_1(J)\natural_1J^\prime) =(l{^m(^nl^\prime)},m^nm^\prime,nn^\prime) $$
where
$${s_3}\left(\begin{bmatrix} &J'\\ J& \end{bmatrix}\right)=\begin{bmatrix} &s_3(J')\\ s_3(J)& \end{bmatrix}=\begin{bmatrix} &(m',n')\\ (m,n)&\end{bmatrix}=\begin{bmatrix} &\Gamma'_1\\ \Gamma_1& \end{bmatrix}=\left(m^nm^\prime,nn^\prime\right)$$
and
$${t_3}\left(\begin{bmatrix} &J'\\ J& \end{bmatrix}\right)=\begin{bmatrix} &\Gamma'_2\\ \Gamma_2& \end{bmatrix}.$$
Similarly,
$$ \begin{bmatrix} J & \\ & J' \end{bmatrix} =(t_1(J)\natural_1J^\prime)\#_1(J\natural_1s_1(J^\prime)) =(^{\partial_1m}(^nl^\prime)^{^{\partial_1m}(^nm^\prime)}l,^{\partial_1m}(^nm^\prime)m,nn^\prime) $$
where
$${s_3}\left(\begin{bmatrix} J& \\ &J' \end{bmatrix}\right)=\begin{bmatrix} s_3(J)&\\ &s_3(J') \end{bmatrix}=\begin{bmatrix} (m,n)&\\ &(m',n')\end{bmatrix}=\begin{bmatrix} \Gamma_1&\\ &\Gamma'_1 \end{bmatrix}=\left(^{\partial_1m}(^nm^\prime)m,nn^\prime\right)$$
and
$${t_3}\left(\begin{bmatrix} J& \\ &J' \end{bmatrix}\right)=\begin{bmatrix} \Gamma_2&\\ &\Gamma'_2 \end{bmatrix}.$$
Using the multiplication in $C_3$ given above, we can show that $s_3,t_3:C_3\longrightarrow C_2$ are homomorphisms of groups:
\begin{align*} s_3\left((l,m,n)(l^\prime,m^\prime,n^\prime)\right)=&s_3\left(l \ {^m(^nl^\prime)},m^nm^\prime,nn^\prime\right)\\ =&(m^nm^\prime,nn^\prime)\\ =&(m,n)(m^\prime,n^\prime)\\ =&s_3(l,m,n) s_3(l^\prime,m^\prime,n^\prime) \end{align*}
and
\begin{align*} t_3\left((l,m,n)(l^\prime,m^\prime,n^\prime)\right)=&t_3\left(l \ {^m(^nl^\prime)},m^nm^\prime,nn^\prime\right)\\ =&\left(\partial_2\left(l \ {^m(^nl^\prime)}\right) m^nm^\prime,nn^\prime\right)\\ =&\left(\partial_2lm \partial_2(^nl^\prime)^nm^\prime,nn^\prime\right)\ \ (\because \partial_2\ \text{is a crossed module})\\ =&\left(\partial_2lm ^n(\partial_2l^\prime m^\prime),nn^\prime\right)\\ =&(\partial_2lm,n)(\partial_2l^\prime m^\prime,n^\prime)\\ =&t_3(l,m,n) t_3(l^\prime,m^\prime,n^\prime) \end{align*}
for all $(l,m,n),(l^\prime,m^\prime,n^\prime)\in C_3$. The interchange law for $\#_3$ and the semidirect product of 3-cells in $L\rtimes M\rtimes N$ can be found in Section \ref{interchangelaw} of the Appendix.
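In the same way, one checks directly from the definitions that $s_3$ and $t_3$ are compatible with the vertical composition $\#_3$ given earlier: for $J'=(l,m,n)$ and $J=(l^\prime,\partial_2lm,n)$ we have
$$t_3\left(J\#_3J'\right)=t_3(l^\prime l,m,n)=(\partial_2(l^\prime l)m,n)=(\partial_2l^\prime(\partial_2lm),n)=t_3(J),$$
while $s_3\left(J\#_3J'\right)=(m,n)=s_3(J')$.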
For any 2-cells $\Gamma=(m,n)$ and $\Gamma^\prime=(m^\prime,n^\prime)$, the interchange 3-cell is $\Gamma\#\Gamma^\prime=\left(\{m,^nm^\prime \},m^nm^\prime,nn^\prime\right)$ as given in \cite{KP}. For this interchange 3-cell, we have
$$s_3\left(\Gamma\#\Gamma^\prime\right)=\left(m^nm^\prime,nn^\prime\right)=\begin{bmatrix} &\Gamma'\\ \Gamma & \end{bmatrix}$$
and
\begin{align*} t_3\left(\Gamma\#\Gamma^\prime\right)=&\left(\partial_2\{m,^nm^\prime\}m^nm^\prime,nn^\prime\right)\\ =&\left(^{\partial_1(m)n}(m^\prime)m(^nm^\prime)^{-1}m^{-1}m^nm^\prime,nn^\prime\right)\\ =&\left(^{\partial_1m}(^nm^\prime)m,nn^\prime\right)\\ =&\begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}. \end{align*}
Then we obtain
$$ \xymatrix{ {\begin{bmatrix} &\Gamma'\\ \Gamma & \end{bmatrix}} \ar@3{->}[rr]^{\scriptstyle(\Gamma\#\Gamma')} & &{\begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}}} $$
Of course, this is functorial and hence defines a functor from the category of 2-crossed modules of groups to the category of Gray 3-(group)-groupoids with a single object: $\mathbf{\Theta}:\mathbf{X_2 Mod}\longrightarrow \mathbf{Gray}.$
\subsection{From Gray 3-(Group)-Groupoids to 2-Crossed Modules of Groups}
In this section, using a slightly different method, we give a construction of a 2-crossed module of groups from a Gray 3-(group)-groupoid with a single object. A similar construction appears in \cite{Martins}. Suppose that $(C_3,C_2,C_1,s_1,t_1,s_2,t_2,*)$ is a Gray 3-(group)-groupoid with a single object. Using the properties of $s_i$,$t_i$, we have a complex of homomorphisms of groups:
\begin{equation*} \xymatrix{\mathrm{Ker} s_3\ar[r]^-{\overline{t}_3}&\mathrm{Ker} s_2 \ar[r]^-{\overline{t}_2}&C_1 } \end{equation*}
where $\overline{t}_2=t_2|_{\mathrm{Ker} s_2}$ and $\overline{t}_3=t_3|_{\mathrm{Ker} s_3 }$. The multiplication in $\mathrm{Ker} s_2$ is taken to be the horizontal composition of 2-morphisms;
$$ \Gamma \Gamma'=\begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix}.$$
Similarly, the multiplication in $\mathrm{Ker} s_3 $ is given by $ JJ'=\begin{bmatrix} &J'\\ J& \end{bmatrix}$ for $J,J'\in \mathrm{Ker} s_3 $. Using the whiskering $\natural_1$ of a 1-cell on a 2-cell, an action of $\zeta\in C_1$ on a 2-cell \ $\Gamma\in \mathrm{Ker} s_2$ is given by
$$^{\zeta}(\Gamma)=\zeta\natural_1\Gamma\natural_1\zeta^{-1}.$$
We have $\overline{t}_2(^{\zeta}\Gamma)=\zeta\,\overline{t}_2(\Gamma)\,\zeta^{-1}$. Thus, $\overline{t}_2$ is a pre-crossed module. For any 3-cell $J \in \mathrm{Ker}{s_3}$ and a 2-cell $\eta\in \mathrm{Ker} s_2$, the action of $\eta$ on $J$ is given by
$$^{\eta}J=\eta\natural_2J\natural_2\eta^{-1}$$
where $\natural_2$ is the whiskering of the 2-cell $\eta$ on the 3-cell $J$. Together with this action, $\overline{t}_3=t_3|_{\mathrm{Ker} s_3 }$ is a crossed module. The Peiffer lifting map
$$\{-,-\}:\mathrm{Ker} s_2\times \mathrm{Ker} s_2\longrightarrow \mathrm{Ker} s_3$$
is given by
$$\{\Gamma,\Gamma^\prime\}=\begin{bmatrix} &e_3\left(s_3(\Gamma\#\Gamma^\prime)\right)^{-1}\\ \Gamma\#\Gamma^\prime & \end{bmatrix}$$
where $\Gamma\#\Gamma^\prime$ is the interchange 3-cell for $\Gamma, \Gamma'\in \mathrm{Ker} s_2$.
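As a quick sanity check, this lifting does take values in $\mathrm{Ker} s_3$: since the 2-source of a horizontal composite is the horizontal composite of the 2-sources, $s_3e_3=\mathrm{id}$, and the multiplication in $\mathrm{Ker} s_2$ is horizontal composition, we obtain
$$s_3\{\Gamma,\Gamma^\prime\}=\begin{bmatrix} &s_3(\Gamma\#\Gamma^\prime)^{-1}\\ s_3(\Gamma\#\Gamma^\prime)& \end{bmatrix}=s_3(\Gamma\#\Gamma^\prime)\,s_3(\Gamma\#\Gamma^\prime)^{-1}=1.$$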
The construction above defines a functor from the category of Gray 3-(group)-groupoids with a single object to the category of 2-crossed modules of groups:
$$\mathbf{\Delta}:\mathbf{Gray}\longrightarrow\mathbf{X_2 Mod}.$$
Suppose that
\begin{equation*} \xymatrix{\mathfrak{X}:=L \ar[r]^-{\partial_{2}}&M\ar[r]^-{\partial_{1}} & N} \end{equation*}
is a 2-crossed module of groups and its associated Gray 3-(group)-groupoid is $\mathbf{\Theta}(\mathfrak{X})=\mathfrak{C}^2$. By applying the functor $\mathbf{\Delta}$ to $\mathfrak{C}^2$, we will obtain a 2-crossed module
\begin{equation*} \xymatrix{L' \ar[r]^-{\overline{\partial}_{2}}&M'\ar[r]^-{\overline{\partial}_{1}} & N'} \end{equation*}
which is isomorphic to $\mathfrak{X}$ on each level. We obtain $N^\prime=C_1=N$ and $M'=\mathrm{Ker} s_2=M\rtimes\{1\}\cong M$ and $\overline{\partial}_1=t_2|_{\mathrm{Ker} s_2}$, $\overline{\partial}_1(m,1)=\partial_1m$ for all $m\in M$. By the definition of $s_3$, for $C_3=L\rtimes M\rtimes N$, we obtain
\begin{align*} L'=\mathrm{Ker} s_3=&\{\alpha=(l,m,n)\in C_3:s_3(\alpha)=(m,n)=(1,1)\}\\ =&\{(l,1,1):l\in L\}\cong L. \end{align*}
The map $\overline{\partial}_2$ is given by $\overline{\partial}_2=t_3|_{\mathrm{Ker} s_3}$ and then $\overline{\partial}_2(l,1,1)=t_3(l,1,1)=(\partial_2l,1)$. Therefore, we obtain a chain complex of homomorphisms of groups:
\begin{equation*} \xymatrix{L\rtimes \{1\}\rtimes \{1\}\ar[r]^-{\overline{\partial}_{2}}&M\rtimes \{1\}\ar[r]^-{\overline{\partial}_{1}} & N} \end{equation*}
which is isomorphic to $\mathfrak{X}$ on each level.\\
For any 2-cells $\Gamma=(m,1)\in \mathrm{Ker} s_2$ and $\Gamma'=(m',1)\in \mathrm{Ker} s_2$, we can define the Peiffer lifting
$$\{-,-\}^\prime:M^\prime \times M^\prime\rightarrow L^\prime $$
by
\begin{align*} \{\Gamma,\Gamma^\prime\}=&{\begin{bmatrix} &e_3\left(s_3(\Gamma\#\Gamma^\prime)\right)^{-1}\\ \Gamma\#\Gamma^\prime & \end{bmatrix}}\\ =&{\begin{bmatrix} &e_3s_3\left(\{m^\prime,m\},m^\prime m,1\right)^{-1}\\ \left(\{m^\prime,m\},m^\prime m,1\right) & \end{bmatrix}}\\ =&{\begin{bmatrix} &\left(1, (m^\prime m)^{-1},1\right)\\ \left(\{m^\prime,m\},m^\prime m,1\right) & \end{bmatrix}}\\ =&\left(\{m^\prime,m\},1,1\right)\in L'. \end{align*}
\begin{remark}\rm{ Consider a 2-crossed module $\xymatrix{\mathfrak{X}:=(L\ar[r]^-{\partial_2}&M\ar[r]^{\partial_1}&N)}$ and its associated Gray-3-(group)-groupoid $\mathfrak{C}^2$. From \cite{Jinan}, we have the construction of a linear representation of any cat$^2$-group or any Gray 3-(group)-groupoid with a single object, and this may also be considered as a linear representation of the corresponding 2-crossed module. Thus, we might define a linear representation of the 2-crossed module $\mathfrak{X}$ to be a representation of its associated Gray 3-(group)-groupoid $\mathfrak{C}^2$.\textit{ Thus a possible way of arriving at a direct definition of a linear representation of the 2-crossed module $\mathfrak{X}$ is to pass to the associated Gray 3-(group)-groupoid $\mathfrak{C}^2$ with a single object and find a representation of this. } Therefore, a linear representation of $\mathfrak{C}^2$ which is obtained from the 2-crossed module $\mathfrak{X}$ is a lax 3-functor $\phi:\mathfrak{C}^2\rightarrow \mathbf{Aut(\delta)}\leqslant \mathbf{Ch}^2_K$. This functor can be summarised as follows:
$\bullet$ For the 0-cell $*$ in $\mathfrak{C}^2$, $\phi(*)=\delta:=(\delta_2,\delta_1)$ is the chain complex of length-2, the 0-cell of $\mathbf{Aut(\delta)}$.
$\bullet$ For any 1-cell $n:* \longrightarrow *$, $n\in N$, $\phi(n)$ in $\mathbf{Aut}(\delta)_1$ is a chain automorphism $\delta\rightarrow \delta$.
$\bullet$ For any 2-cell $(m,n):n\Rightarrow \partial_1 mn$ in $M\rtimes N$, \ $\phi(m,n)$ in $\mathbf{Aut}(\delta)_2$ is a 1-homotopy from $\phi(n)$ to $\phi(\partial_1 mn)$.
$\bullet$ For any 3-cell $(l,m,n):(m,n)\Rrightarrow (\partial_2 lm,n)$ in $L\rtimes M\rtimes N$,\ $\phi(l,m,n)$ in $\mathbf{Aut}(\delta)_3$ is a 2-homotopy between 1-homotopies $\phi(m,n)$ and $\phi(\partial_2 lm,n)$.}
\end{remark}
\section{Regular Representation of 2-Crossed Modules and Cat$^2$-groups}
Consider the Gray 3-(group)-groupoid with a single object $*$;
\begin{equation*} \xymatrix{\mathfrak{C}^2:=L\rtimes M \rtimes N \ar@<0.5ex>[r]\ar@<-0.5ex>[r]& M\rtimes N\ar@<0.5ex>[r]\ar@<-0.5ex>[r] & N\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}} \end{equation*}
which is obtained from the 2-crossed module $\mathfrak{X}$. We can consider $\mathfrak{C}^2$ as a cat$^2$-group. Then the big group is $L\rtimes M\rtimes N$. The structural homomorphisms are $s_1(l,m,n)=t_1(l,m,n)=*$ and $s_2(l,m,n)=(1,m,n)$, \ $t_2(l,m,n)=(1,\partial_2lm,n)$. Recall that the multiplication in $L\rtimes M\rtimes N$ is given by
$$(l,m,n)(l^\prime,m^\prime,n^\prime)=\left(l^m(^{n}l^\prime),m^nm^\prime, nn^\prime\right).$$
Using this multiplication, we can show that $[\mathrm{Ker} s_2,\mathrm{Ker} t_2]=1$ as follows: For $(l,m,n)\in L\rtimes M\rtimes N,$ the condition $s_2(l,m,n)=(1,m,n)=(1,1,1)$ forces $m=1$ and $n=1$, so the elements of $\mathrm{Ker} s_2$ are of the form $(l,1,1)$. Similarly, $t_2(l,m,n)=(1,\partial_2lm,n)=(1,1,1)$ forces $m=\partial_2 l^{-1}$ and $n=1$, so the elements of $\mathrm{Ker} t_2$ are of the form $(l,\partial_2 l^{-1},1)$. The inverse of $(l,m,n)$ is
$$(l,m,n)^{-1}=(^{n^{-1}}{(^{m^{-1}}l^{-1})},^{n^{-1}}m^{-1},n^{-1}).$$
For $(l,1,1)\in \mathrm{Ker} s_2$ and $(l^\prime,\partial_2 l^{\prime{-1}},1)\in \mathrm{Ker} t_2,$ we obtain
\begin{align*} [(l,1,1),(l^\prime,\partial_2 l^{\prime{-1}},1)]=&(l,1,1)(l^\prime,\partial_2 l^{\prime{-1}},1)(l^{-1},1,1)(^{\partial_2l^{\prime}}(l^\prime)^{-1},\partial_2 l^{\prime},1)\\ =&(ll^\prime,\partial_2 l^{\prime{-1}},1)(l^{-1} \ ^{\partial_2 l^{\prime}}(l^{\prime{-1}}),\partial_2 l^{\prime},1) \\ =&(ll^\prime,\partial_2 l^{\prime{-1}},1)(l^{-1}l^\prime l^{\prime{-1}}l^{\prime{-1}},\partial_2 l^{\prime},1)\\ =&(ll^\prime,\partial_2 l^{\prime{-1}},1)(l^{-1}l^{\prime{-1}},\partial_2 l^{\prime},1)\\ =&(ll^\prime \ ^{\partial_2 l^{\prime{-1}}}(l^{-1}l^{\prime{-1}}),\partial_2 l^{\prime{-1}}\partial_2 l^{\prime},1)\\ =&(1,1,1). \end{align*}
Thus, $(L\rtimes M\rtimes N , s_1,t_1 , s_2,t_2)$ is a cat$^2$-group.
\subsection{The Construction of Cat$^2$-Group Algebra}
The notion of cat$^1$-algebra is well known, at least as an analogue of a cat$^1$-group in the category of algebras. The equivalence between cat$^1$-algebras and crossed modules of algebras appears in \cite{Nizar}. More general expositions of cat$^1$-objects were introduced by Ellis \cite{Ellis} and Porter \cite{Porter}. Recall that a cat$^1$-algebra $\mathcal{A}$ consists of $K$-algebras $\mathcal{A}_0$,\ $\mathcal{A}_1$ and $K$-algebra morphisms $\sigma,\tau:\mathcal{A}_1\rightarrow\mathcal{A}_0,\ i:\mathcal{A}_0\rightarrow \mathcal{A}_1$ (called structural morphisms) satisfying $1.$ $\sigma i=\tau i=id_{\mathcal{A}_0}$,\ $2.$ $\mathrm{Ker} \sigma \cdot \mathrm{Ker} \tau=0$,\ $\mathrm{Ker} \tau \cdot \mathrm{Ker} \sigma=0$. Condition 2 is called the kernel condition.
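To fix ideas, here is a minimal sketch of the standard family of examples, phrased for a crossed module of $K$-algebras $\partial:M\rightarrow A$; the side conventions used here are one common choice and may differ in detail from those of \cite{Nizar}. Take $\mathcal{A}_0=A$ and $\mathcal{A}_1=A\oplus M$ with multiplication $(a,m)(a^\prime,m^\prime)=(aa^\prime,\,am^\prime+ma^\prime+mm^\prime)$, together with $\sigma(a,m)=a$, $\tau(a,m)=a+\partial m$ and $i(a)=(a,0)$. Then $\mathrm{Ker}\sigma=\{(0,m):m\in M\}$, $\mathrm{Ker}\tau=\{(-\partial m,m):m\in M\}$, and the Peiffer-type identities $\partial(m)m^\prime=mm^\prime=m\,\partial(m^\prime)$ give the kernel condition:
$$(0,m)(-\partial m^\prime,m^\prime)=(0,\,m(-\partial m^\prime)+mm^\prime)=(0,\,-mm^\prime+mm^\prime)=0,$$
and symmetrically for $\mathrm{Ker}\tau\cdot\mathrm{Ker}\sigma$.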
In this section, using the method given by Barker in \cite{Barker}, we shall give a construction of a Gray 3-(group)-algebra groupoid with a single object $*$, or a cat$^2$-group algebra, from $\mathfrak{C}^2$. Recall from \cite{Arvasi} that a cat$^2$-algebra $\mathcal{A}^2$ is a 5-tuple $(\mathcal{A},\sigma_1,\tau_1,\sigma_2,\tau_2)$ where $(\mathcal{A},\sigma_i,\tau_i)$ \ $i=1,2$ are cat$^1$-algebras and
\begin{enumerate}
\item $ \sigma_i\sigma_j=\sigma_j\sigma_i,\ \tau_i\tau_j=\tau_j\tau_i,\ \sigma_i\tau_j=\tau_j\sigma_i \ \text{for} \ i,j=1,2,\ i\neq j \ \text{and} $
\item $ \mathrm{Ker} \tau_i\cdot\mathrm{Ker} \sigma_i=0 \text{ and }\mathrm{Ker} \sigma_i\cdot\mathrm{Ker} \tau_i=0 \ \text{for} \ i=1,2.$
\end{enumerate}
The cat$^2$-group $\mathfrak{C}^2$ can be regarded as:
$$ \xymatrix{\mathfrak{C}^2:=L\rtimes M \rtimes N \ar@<1ex>[r]^-{s_3,t_3} \ar@<0ex>[r]&1\rtimes M \rtimes N \ar@<1ex>[r]^-{s_2,t_2} \ar@<0ex>[r]\ar@<1ex>[l]^-{e_3}& 1\rtimes 1 \rtimes N\ar@<1ex>[l]^-{e_2}\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}} $$
where\\ $s_3(l,m,n)=(1,m,n)$,\ \ $t_3(l,m,n)=(1,\partial_2 lm,n)$ and $s_2(1,m,n)=(1,1,n)$,\ \ $t_2(1,m,n)=(1,1,\partial_1 mn)$.\\
In this structure: 1.
$$\xymatrix{1\rtimes M \rtimes N \ar@<1ex>[r]^-{s_2,t_2} \ar@<0ex>[r]& 1\rtimes 1\rtimes N}$$
is a pre-cat$^1$-group with the group multiplication in the big group $1\rtimes M\rtimes N$ given by
$$(1,m,n)(1,m^\prime,n^\prime)=(1,m^nm^\prime,nn^\prime)$$
and 2.
$$\xymatrix{L\rtimes M \rtimes N \ar@<1ex>[r]^-{s_3,t_3} \ar@<0ex>[r]&1\rtimes M \rtimes N }$$
is a cat$^1$-group with the group multiplication given by
$$(l,m,n)(l^\prime,m^\prime,n^\prime)=(l^m(^{n}l^\prime),m^nm^\prime,nn^\prime).$$
If we apply the group algebra functor $K(\cdot)$ to this structure at each level, we obtain the following structure:
$$ \xymatrix{K(\mathfrak{C}^2):=K(L\rtimes M \rtimes N) \ar@<1ex>[r]^-{\sigma_3, \tau_3} \ar@<0ex>[r]&K(1\rtimes M \rtimes N) \ar@<1ex>[r]^-{\sigma_2, \tau_2} \ar@<0ex>[r]& K(1\rtimes 1 \rtimes N)\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}.} $$
In this structure, $K(1\rtimes 1\rtimes N)$ has basis $\{\mathbf{e}_{1,1,n}:n\in N\}$, $K(1\rtimes M\rtimes N)$ has basis $\{{\mathbf{e}_{1,m,n}:m\in M,n\in N}\}$ and $K(L\rtimes M\rtimes N)$ has basis $\{\mathbf{e}_{l,m,n}:l\in L,m\in M,n\in N\}$. The structural maps between them are given by
$$\sigma_3(\mathbf{e}_{l,m,n})=\mathbf{e}_{1,m,n}\ ,\ \ \tau_3(\mathbf{e}_{l,m,n})=\mathbf{e}_{1,\partial_2lm,n}$$
$$\sigma_2(\mathbf{e}_{1,m,n})=\mathbf{e}_{1,1,n}\ ,\ \ \tau_2(\mathbf{e}_{1,m,n})=\mathbf{e}_{1,1,\partial_1mn}$$
$$\sigma_1(\mathbf{e}_{l,m,n})=\mathbf{e}_{1,1,n}\ ,\ \ \tau_1(\mathbf{e}_{l,m,n})=\mathbf{e}_{1,1,\partial_1mn}.$$
We can picture a 3-cell in this structure by
$$ \begin{tikzcd}[row sep=0.3cm,column sep=scriptsize] & \ar[dd, Rightarrow, "{\scriptscriptstyle \mathbf{e}_{1,m,n}}"{swap,name=f,description}]& &\ar[dd, Rightarrow, "{\scriptscriptstyle \mathbf{e}_{1,\partial_2lm,n}}"'{swap,name=g,description}]& \\ {*} \ar[rrrr,bend left=40,"{\scriptscriptstyle \mathbf{e}_{1,1,n}}"] \ar[rrrr,bend right=40,"{\scriptscriptstyle \mathbf{e}_{1,1,\partial_1mn}}"']& \tarrow["\scriptscriptstyle {\mathbf{e}_{l,m,n}}" ,from=f,to=g, shorten >= -1pt,shorten <= 1pt ]{rrr}& & & {*} \\ \ & \ & \ & \ & \ \end{tikzcd} $$
Since $K(\cdot)$ is a functor, the first condition of a cat$^2$-group algebra is induced from the corresponding condition on $\mathfrak C^2$. It remains to examine the kernel condition $\mathrm{Ker}\sigma_3\cdot \mathrm{Ker}\tau_3=0$.
To examine this condition, we need to find bases for $\mathrm{Ker} \sigma_3$ and $\mathrm{Ker}\tau_3$. For any $\alpha=\mathbf{e}_{l,m,n}\in K(L\rtimes M\rtimes N)$, we obtain
\begin{align*} \alpha-i_3\sigma_3(\alpha)=&\mathbf{e}_{l,m,n}-i_3\sigma_3(\mathbf{e}_{l,m,n})\\ =&\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n}\\ =&\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}\in \mathrm{Ker}\sigma_3. \end{align*}
Similarly $\alpha-i_3\tau_3(\alpha)={\mathbf{e}_{l,m,n}}-{\mathbf{e}_{1,\partial_2 lm,n}}=\mathbf{w}^{\scriptscriptstyle22}_{l,m,n} \in \mathrm{Ker} \tau_3. $ We can give the following result.
\begin{lem} The set $\{\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}:l\neq1\}$ is a basis for $ \mathrm{Ker} \sigma_3$ in $K(L\rtimes M\rtimes N).$ \end{lem}
\begin{pf} Clearly $\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}\in \mathrm{Ker} \sigma_3,$ since
$$\sigma_3(\mathbf{v}^{\scriptscriptstyle22}_{l,m,n})=\sigma_3(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n})=\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m,n}=0.$$
We need to show that the elements of the set $\{\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}:l\neq1\}$ span $\mathrm{Ker} \sigma_3$ and are linearly independent. Let $\mathbf{v}\in \mathrm{Ker} \sigma_3.$ We can write it as
\begin{equation*} \mathbf{v}=\sum_{n \in N}\left(\sum_{m \in M}\left(\sum_{l \in L} r_{l,m,n}\mathbf{e}_{l,m,n}\right)\right) \end{equation*}
with $\sigma_3(\mathbf{v})=0$. But
\begin{equation*} \sigma_3 \mathbf{(\mathbf{v})}=\sum_{n \in N}\sum_{m \in M}\left(\sum_{l \in L} r_{l,m,n}\right)\mathbf{e}_{1,m,n} \end{equation*}
and since the $\mathbf{e}_{1,m,n}$ form a basis of $K(1\rtimes M\rtimes N)$, this is zero if and only if $\sum_{l \in L}r_{l,m,n}=0$ for each $m \in M$ and $n \in N$. Now,
\begin{align*} \mathbf{v}=&\sum_{n}\sum_{m}\sum_{l\neq 1} r_{l,m,n}\left(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n}+\mathbf{e}_{1,m,n}\right)+\sum_{n}\sum_{m}r_{1,m,n}\mathbf{e}_{1,m,n} \\ =&\sum_{n}\sum_{m}\sum_{l\neq 1} r_{l,m,n}\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}+\sum_{n}\sum_{m}\sum_{l}r_{l,m,n}\mathbf{e}_{1,m,n} \end{align*}
but
\begin{equation*} \sum_{n}\sum_{m}\sum_{l} r_{l,m,n}\mathbf{e}_{1,m,n}=\sum_{n}\sum_{m}\left(\sum_{l}r_{l,m,n}\right)\mathbf{e}_{1,m,n}=0 \end{equation*}
because $\sum_{l}r_{l,m,n}=0$ for every $(1,m,n) \in 1\rtimes M\rtimes N$. Hence,
\begin{equation*} \mathbf{v}=\sum_{n}\sum_{m}\sum_{l\neq 1} r_{l,m,n}\mathbf{v}^{\scriptscriptstyle22}_{l,m,n} \end{equation*}
for every $\mathbf{v}\in \mathrm{Ker} \sigma_3$, so the $\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}$ do indeed span $\mathrm{Ker} \sigma_3$. Now suppose that
\begin{equation*} \mathbf{v}=\sum_{n}\sum_{m}\sum_{l\neq 1} r_{l,m,n}\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}=0. \end{equation*}
Then,
\begin{align*} \sum_{n}\sum_{m}\sum_{l\neq 1} r_{l,m,n}\left(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n}\right)=0\Leftrightarrow&\sum_{n}\sum_{m}\sum_{l\neq 1}r_{l,m,n}\mathbf{e}_{l,m,n}-\sum_{n}\sum_{m}\sum_{l\neq 1} r_{l,m,n}\mathbf{e}_{1,m,n}=0\\ \Leftrightarrow&\sum_{n}\sum_{m}\sum_{l}r'_{l,m,n}\mathbf{e}_{l,m,n}=0 \end{align*}
where $r'_{l,m,n}=r_{l,m,n}$ when $l\neq 1$ and $r'_{1,m,n}=-\sum_{l\neq1}r_{l,m,n}$. This is a linear combination of basis vectors $\mathbf{e}_{l,m,n}$ in $K(L\rtimes M\rtimes N)$ so $r'_{l,m,n}=0$ for each $(l,m,n) \in L\rtimes M\rtimes N$. In particular, this is true for $l \neq 1$, so every $r_{l,m,n}=0$, and the $\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}$ are linearly independent.
\end{pf}
Consider the elements
$$\mathbf{e}^{*_2}_{l,m,n}={i_3\sigma_3(\mathbf{e}_{l,m,n})}-{\mathbf{e}_{l,m,n}+i_3\tau_3\mathbf{e}_{l,m,n}}=\mathbf{e}_{1,m,n}-\mathbf{e}_{l,m,n}+\mathbf{e}_{1,\partial_2 lm,n}\in K(L\rtimes M \rtimes N)$$
while
$$\mathbf{e}^{*_2}_{1,m,n}=i_3\sigma_3(\mathbf{e}_{1,m,n})-\mathbf{e}_{1,m,n}+i_3\tau_3(\mathbf{e}_{1,m,n})=\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m,n}+\mathbf{e}_{1,m,n} =\mathbf{e}_{1,m,n}\in K(L\rtimes M \rtimes N).$$
Therefore,
\begin{align*} (\mathbf{v}^{\scriptscriptstyle22}_{l,m,n})^{*_2}=&(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n})^{*_2}\\ =&\mathbf{e}_{1,m,n}-\mathbf{e}_{l,m,n}+\mathbf{e}_{1,\partial_2 lm,n}-\mathbf{e}_{1,m,n}\\ =&-(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,\partial_2 lm,n})\\ =&-\mathbf{w}^{\scriptscriptstyle22}_{l,m,n}\in \mathrm{Ker} \tau_3. \end{align*}
Hence $\{{-\mathbf{w}^{\scriptscriptstyle22}_{l,m,n}:l\neq 1}\}$ is a basis for $\mathrm{Ker} \tau_3$ and so $\{{\mathbf{w}^{\scriptscriptstyle22}_{l,m,n}:l\neq 1}\}$ is also a basis for $\mathrm{Ker} \tau_3.$ Therefore we have proven the following lemma.
\begin{lem} The set $\{{\mathbf{w}^{\scriptscriptstyle22}_{l,m,n}:l\neq 1}\}$ is a basis for $\mathrm{Ker} \tau_3$. \end{lem}
\begin{lem} $\xymatrix{K(1\rtimes M \rtimes N) \ar@<1ex>[r]^-{\sigma_2,\tau_2} \ar@<0ex>[r]& K(1\rtimes 1\rtimes N)\ar@<1ex>[l]^-{i_2}}$ is a pre-cat$^1$-group algebra. \end{lem}
\begin{pf} According to the multiplication in $K(1\rtimes M\rtimes N)$ given by
$$\mathbf{e}_{1,m,n}\mathbf{e}_{1,m^\prime, n^\prime}=\mathbf{e}_{1,m^nm^\prime,nn^\prime},$$
we can easily see that $\sigma_2$, $\tau_2$ and $i_2$ are homomorphisms of algebras and $\sigma_2i_2=\tau_2i_2=id.$ \end{pf}
For any elements $\mathbf{e}_{l,m,n}$ and $\mathbf{e}_{l^\prime,m^\prime,n^\prime}$ in $ K(L\rtimes M\rtimes N),$ we obtain
$$\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n}=\mathbf{v}^{\scriptscriptstyle22}_{l,m,n}\in \mathrm{Ker} \sigma_3$$
and
$$\mathbf{w}^{\scriptscriptstyle22}_{l^\prime,m^\prime,n^\prime}=\mathbf{e}_{l^\prime,m^\prime,n^\prime}-\mathbf{e}_{1,\partial_2 l^\prime m^\prime,n^\prime} \in \mathrm{Ker} \tau_3.$$
The multiplication of these elements in $ K(L\rtimes M\rtimes N)$ is
\begin{align*} \mathbf{v}^{\scriptscriptstyle22}_{l,m,n}\cdot\mathbf{w}^{\scriptscriptstyle22}_{l^\prime,m^\prime,n^\prime}=&(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n})\cdot(\mathbf{e}_{l^\prime,m^\prime,n^\prime}-\mathbf{e}_{1,\partial_2 l^\prime m^\prime,n^\prime})\\ =&\mathbf{e}_{l^m (^nl^\prime),m^nm^\prime,nn^\prime}-\mathbf{e}_{l,m^n(\partial_2l^\prime m^\prime),nn^\prime}-\mathbf{e}_{^m(^nl^\prime),m^nm^\prime,nn^\prime}+\mathbf{e}_{1,m^n(\partial_2l^\prime m^\prime),nn^\prime} \end{align*}
which is non-zero in general. Thus, in the structure,
$$ \xymatrix{K(L\rtimes M \rtimes N)\ar@<1ex>[r]^-{\sigma_3,\tau_3} \ar@<0ex>[r]&K(1\rtimes M \rtimes N) \ar@<1ex>[l]^-{i_3}} $$
we see that $\mathrm{Ker}\sigma_3\cdot\mathrm{Ker} \tau_3\neq 0.$ In order to construct a cat$^1$-algebra from $K(\mathfrak C^2)$, it is necessary to impose some relations in $K(L\rtimes M\rtimes N)$ so that the kernel condition is satisfied. Suitable expressions are formed below, and $K(L\rtimes M\rtimes N)$ is then factored by the ideal generated by these relations.
Consider the expressions of the form
\begin{align*} {\mathbf{u}_{\scriptscriptstyle1}}=&\mathbf{e}_{l^\prime l,m,n}-\mathbf{e}_{l,m,n}-\mathbf{e}_{l^\prime,\partial_2lm,n}+\mathbf{e}_{1,\partial_2lm,n}\\ {\mathbf{v}_{\scriptscriptstyle1}}=&\mathbf{e}_{l^{\prime m^\prime} l,m^\prime m,n}-\mathbf{e}_{l,m,n}-\mathbf{e}_{l^\prime,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}\\ {\mathbf{v}_2}=&\mathbf{e}_{1,m^\prime m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn} \end{align*}
in $K(L\rtimes M\rtimes N)$. These elements can be represented diagrammatically as
$$ \begin{array}{c} \xymatrix@C-=0.0cm{ & \mathbf{e}_{1,\partial_2lm,n}\ar@3{->}[ddr]^{\scriptscriptstyle \mathbf{e}_{l',\partial_2 lm,n}}\ar@(ur,ul)_{\mathbf{e}_{1,\partial_2lm,n}} & \\ & \ {\mathbf{u}_{\scriptscriptstyle1}} & \\ \mathbf{e}_{1,m,n} \ar@3{->}[rr]_{\scriptscriptstyle \mathbf{e}_{l'l,m,n}}\ar@3{->}[uur]^{\scriptscriptstyle \mathbf{e}_{l,m,n}} & & \mathbf{e}_{1,\partial_2(l'l)m,n} } \end{array} \text{ \ \ \ and \ \ \ } \begin{array}{c} \xymatrix@C-=0.0cm{ & \mathbf{e}_{1,1,\partial_1 mn}\ar@3{->}[ddr]^{\scriptscriptstyle \mathbf{e}_{l',m',\partial_1 mn}}\ar@(ur,ul)_{\mathbf{e}_{1,1,\partial_1 mn}} & \\ & \ \ {\mathbf{v}_{\scriptscriptstyle1}} & \\ \mathbf{e}_{1,1,n} \ar@3{->}[rr]_{\scriptscriptstyle \mathbf{e}_{l^{\prime m^\prime}l,m'm,n}}\ar@3{->}[uur]^{\scriptscriptstyle \mathbf{e}_{l,m,n}} & & \mathbf{e}_{1,1,\partial_1m'\partial_1 mn} } \end{array} $$
In these pictures, each arrow represents a 3-cell in $K(L\rtimes M\rtimes N)$. For the generating element
$$ {\mathbf{u}_{\scriptscriptstyle1}}= \mathbf{e}_{l^\prime l,m,n}-\mathbf{e}_{l,m,n}-\mathbf{e}_{l^\prime,\partial_2lm,n}+\mathbf{e}_{1,\partial_2lm,n},$$
if we take $(l,m,n)=(1,m,n)$, we obtain
$$\mathbf{e}_{l^\prime,m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{l^\prime,m,n}+\mathbf{e}_{1,m,n}=0,$$
and if we take $(l^\prime,m^\prime,n^\prime)=(1,m^\prime,n^\prime)$ we obtain
$$\mathbf{e}_{l,m,n}-\mathbf{e}_{l,m,n}-\mathbf{e}_{1,\partial_2lm,n}+\mathbf{e}_{1,\partial_2lm,n}=0.$$
Therefore, setting ${\mathbf{u}_{\scriptscriptstyle1}}$ equal to zero imposes the relation
\begin{equation}\label{1} \mathbf{e}_{l'l,m,n}=\mathbf{e}_{l,m,n}+\mathbf{e}_{l',\partial_2lm,n}-\mathbf{e}_{1,\partial_2lm,n}. \end{equation}
Similarly, for the expressions
\begin{align*} {\mathbf{v}_{\scriptscriptstyle1}}=&\mathbf{e}_{l^{\prime m^\prime}l,m^\prime m,n}-\mathbf{e}_{l,m,n}-\mathbf{e}_{l^\prime,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}\\ {\mathbf{v}_{\scriptscriptstyle2}}=&\mathbf{e}_{1,m^\prime m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn} \end{align*}
if we take $(l,m,n)=(1,1,n)$, we obtain
$$\mathbf{e}_{l^\prime,m^\prime,n}-\mathbf{e}_{1,1,n}-\mathbf{e}_{l^\prime,m^\prime,n}+\mathbf{e}_{1,1,n}=0$$
and if we take $(l^\prime,m^\prime,n^\prime)=(1,1,n^\prime)$, we obtain
$$\mathbf{e}_{l,m,n}-\mathbf{e}_{l,m,n}-\mathbf{e}_{1,1,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}=0.$$
Setting ${\mathbf{v}_{\scriptscriptstyle1}}$ and ${\mathbf{v}_{\scriptscriptstyle2}}$ equal to zero imposes the relations
\begin{equation}\label{2} \mathbf{e}_{l^{\prime m^\prime} l,m^\prime m,n}=\mathbf{e}_{l,m,n}+\mathbf{e}_{l^\prime,m^\prime,\partial_1mn}-\mathbf{e}_{1,1,\partial_1mn} \end{equation}
and
\begin{equation}\label{3} \mathbf{e}_{1,m^\prime m,n}=\mathbf{e}_{1,m,n}+\mathbf{e}_{1,m^\prime,\partial_1mn}-\mathbf{e}_{1,1,\partial_1mn}.
\end{equation}
Let $J_2$ be the ideal of $K(L\rtimes M\rtimes N)$ generated by the elements of the form ${\mathbf{u}_{\scriptscriptstyle1}},{\mathbf{v}_{\scriptscriptstyle1}},{\mathbf{v}_{\scriptscriptstyle2}}$ and define
$$\overline{K(L\rtimes M\rtimes N)}=K(L\rtimes M\rtimes N)/ J_2.$$
We can also consider the expression
$${\mathbf{v}_{\scriptscriptstyle2}}=\mathbf{e}_{1,m^\prime m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}$$
in $K(1\rtimes M\rtimes N)$. Let $J_1$ be the ideal of $K(1\rtimes M\rtimes N)$ generated by the elements of the form ${\mathbf{v}_{\scriptscriptstyle2}}$. Since
\begin{align*} \sigma_3({\mathbf{v}_{\scriptscriptstyle1}})=&\mathbf{e}_{1,m^\prime m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}\in J_1\\ \tau_3({\mathbf{v}_{\scriptscriptstyle1}})=&\mathbf{e}_{1,\partial_2l^\prime m^\prime \partial_2lm,n}-\mathbf{e}_{1,\partial_2lm,n}-\mathbf{e}_{1,\partial_2l^\prime m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}\in J_1\\ \sigma_3({\mathbf{u}_{\scriptscriptstyle1}})=&\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,\partial_2lm,n}+\mathbf{e}_{1,\partial_2lm,n}=0\\ \tau_3({\mathbf{u}_{\scriptscriptstyle1}})=&\mathbf{e}_{1,\partial_2l^\prime \partial_2lm,n}-\mathbf{e}_{1,\partial_2lm,n}-\mathbf{e}_{1,\partial_2l^\prime \partial_2lm,n}+\mathbf{e}_{1,\partial_2lm,n}=0 \end{align*}
we obtain $\sigma_3(J_2)\subseteq J_1$ and $\tau_3(J_2)\subseteq J_1.$ Therefore, we can give the following commutative diagram:
$$ \xymatrix@C-=0.4cm{K(L\rtimes M\rtimes N)/J_2 \ar@<0.5ex>[rr]^{\overline{\sigma}_{3}}\ar@<-0.5ex>[rr]_{\overline{\tau}_{3}}&&K(1\rtimes M\rtimes N)/J_{1}\\ \\ K(L\rtimes M\rtimes N)\ar@<0.5ex>[rr]^{\sigma_{3}}\ar@<-0.5ex>[rr]_{\tau_{3}}\ar@{->>}[uu]^{q_{2}}&&K(1\rtimes M \rtimes N)\ar@{->>}[uu]_{q_{1}}} $$
where $q_1$ and $q_2$ are quotient maps. Since $\sigma_3(J_2)\subseteq J_1$, the map $\overline {\sigma}_3$ given by
$$\overline {\sigma}_3(\overline {\mathbf{e}}_{l,m,n})=\overline {\sigma}_3(\mathbf{e}_{l,m,n}+J_2)=\sigma_3(\mathbf{e}_{l,m,n})+J_1=\mathbf{e}_{1,m,n}+J_1=\overline {\mathbf{e}}_{1,m,n}$$
is a well-defined homomorphism. Since $\tau_3(J_2)\subseteq J_1$, the map $\overline{\tau}_3$ given by
$$\overline{\tau}_3(\overline {\mathbf{e}}_{l,m,n})=\overline{\tau}_3(\mathbf{e}_{l,m,n}+J_2)=\tau_3(\mathbf{e}_{l,m,n})+J_1=\mathbf{e}_{1,\partial_2lm,n}+J_1=\overline{\mathbf{e}}_{1,\partial_2lm,n}$$
is a well-defined homomorphism of algebras. Similarly,
$$\overline{i}_3(\overline{\mathbf{e}}_{1,m,n})=\overline{{i}_3(\mathbf{e}_{1,m,n})}=i_3(\mathbf{e}_{1,m,n})+J_2=\overline{\mathbf{e}}_{1,m,n}.$$
Write $\overline{K(L\rtimes M \rtimes N)}=K(L\rtimes M \rtimes N)/J_2$ and $\overline{K(1\rtimes M \rtimes N)}=K(1\rtimes M \rtimes N)/J_1.$ We obtain the following structure:
$$ \xymatrix{\overline{K(\mathfrak{C}^2)}:=\overline{K(L\rtimes M \rtimes N)}\ar@<1ex>[r]^-{\overline{\sigma}_3,\overline{\tau}_3} \ar@<0ex>[r]&\overline{K(1\rtimes M \rtimes N)} \ar@<1ex>[r]^-{\overline{\sigma}_2,\overline{\tau}_2} \ar@<0ex>[r]\ar@<1ex>[l]^-{\overline{i}_3}& K(1\rtimes 1\rtimes N)\ar@<1ex>[l]^-{\overline{i}_2}\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}} $$
where
\begin{align*} \overline{\sigma}_2(\overline{\mathbf{e}}_{1,m,n})=&\mathbf{e}_{1,1,n}, \\ \overline{\tau}_2(\overline{\mathbf{e}}_{1,m,n})=&\mathbf{e}_{1,1,\partial_1mn},\\ \overline{\sigma}_3(\overline{\mathbf{e}}_{l,m,n})=&\overline{\mathbf{e}}_{1,m,n},\\ \overline{\tau}_3(\overline{\mathbf{e}}_{l,m,n})=&\overline{\mathbf{e}}_{1,\partial_2lm,n}.
\end{align*}
Since
$$\sigma_2({\mathbf{v}_{\scriptscriptstyle2}})=\mathbf{e}_{1,1,n}-\mathbf{e}_{1,1,n}-\mathbf{e}_{1,1,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}=0$$
and
$$\tau_2({\mathbf{v}_{\scriptscriptstyle2}})=\mathbf{e}_{1,1,\partial_1 m^\prime \partial_1mn}-\mathbf{e}_{1,1,\partial_1mn}-\mathbf{e}_{1,1,\partial_1 m^\prime \partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}=0,$$
we have $\sigma_2(J_1)=0$ and $\tau_2(J_1)=0.$ Therefore, the maps $\overline{\sigma}_2$ and $\overline{\tau}_2$ are well-defined homomorphisms. In $\overline{K(\mathfrak{C}^2)}$, it can easily be seen that
$$\xymatrix{\overline{K(1\rtimes M \rtimes N)} \ar@<1ex>[r]^-{\overline{\sigma}_2,\overline{\tau}_2} \ar@<0ex>[r]& K(1\rtimes 1\rtimes N)\ar@<1ex>[l]^-{\overline{i}_2}}$$
is a pre-cat$^1$-group algebra. Now we will show that
$$\xymatrix{\overline{K(L\rtimes M \rtimes N)}\ar@<1ex>[r]^-{\overline{\sigma}_3,\overline{\tau}_3} \ar@<0ex>[r]&\overline{K(1\rtimes M \rtimes N)}\ar@<1ex>[l]^-{\overline{i}_3}}$$
is a cat$^1$-group algebra. To show this, the kernel condition for $\overline{\sigma}_3,\overline{\tau}_3$ must be satisfied. Let
\begin{align*} \overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}=&{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}+J_2\in \mathrm{Ker} \overline{\sigma}_3\\ \overline{\mathbf{w}^{\scriptscriptstyle22}}_{l',m',n'}=&{\mathbf{w}^{\scriptscriptstyle22}}_{l',m',n'}+J_2\in \mathrm{Ker} \overline{\tau}_3. \end{align*}
Then, we obtain
$$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}\cdot\overline{\mathbf{w}^{\scriptscriptstyle22}}_{l',m',n'}=\mathbf{e}_{l^m(^nl'),m^n m',nn'}-\mathbf{e}_{^m(^nl'),m^n m^\prime,nn^\prime}-\mathbf{e}_{l,m^n(\partial_2l^\prime m^\prime),nn^\prime}+\mathbf{e}_{1,m^n(\partial_2l^\prime m^\prime),nn^\prime}+J_2$$
and by considering the relations (\ref{1}),\ (\ref{2}) and (\ref{3}), we obtain
$$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}\cdot\overline{\mathbf{w}^{\scriptscriptstyle22}}_{l^\prime,m^\prime,n^\prime}=\overline{0}=0+J_2.$$
Indeed, applying relation (\ref{1}) to the first term, with $\partial_2\left({^m(^nl^\prime)}\right)m\,{}^nm^\prime=m\,{}^n(\partial_2l^\prime m^\prime)$, the four terms cancel in pairs modulo $J_2$. Thus, we obtain that $\mathrm{Ker} \overline{\sigma}_3\cdot \mathrm{Ker} \overline{\tau}_3=\overline{0}.$ Relation (\ref{1}) can be rewritten as
\begin{multline*} (\mathbf{e}_{l^\prime l,m,n}-\mathbf{e}_{1,m,n})-(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n})-(\mathbf{e}_{l^\prime,\partial_2 lm,n}-\mathbf{e}_{1,\partial_2 lm,n})\\ \begin{aligned} &={\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime l,m,n}-{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}-{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime,\partial_2 lm,n}. \end{aligned} \end{multline*}
Since this is the element ${\mathbf{u}_{\scriptscriptstyle1}}$ of the ideal $J_2$, it will be killed off by factorisation.
Thus in $K(L\rtimes M\rtimes N)/ J_2$, we get
$$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime l,m,n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime,\partial_2 lm,n}.$$
Relation (\ref{2}) can be rewritten as
\begin{multline*} (\mathbf{e}_{l^\prime{^{m^\prime}}l,m^\prime m,n}-\mathbf{e}_{1,m^\prime m,n}+\mathbf{e}_{1,m^\prime m,n})-(\mathbf{e}_{l,m,n}-\mathbf{e}_{1,m,n}+\mathbf{e}_{1,m,n})\\ \begin{aligned} &\hspace{3.7cm} -(\mathbf{e}_{l^\prime,m^\prime,\partial_1mn}-\mathbf{e}_{1,m^\prime,\partial_1mn}+\mathbf{e}_{1,m^\prime,\partial_1mn})+\mathbf{e}_{1,1,\partial_1mn}\\ =&{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime{^{m^\prime}}l,m^\prime m,n}+\mathbf{e}_{1,m^\prime m,n}-{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}-\mathbf{e}_{1,m,n}-{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime,m^\prime,\partial_1mn}-\mathbf{e}_{1,m^\prime,\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}\\ =&{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime{^{m^\prime}}l,m^\prime m,n}-{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}-{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime,m^\prime,\partial_1 mn}+\underbrace{\mathbf{e}_{1,m^\prime m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m^\prime,\partial_1 mn}+\mathbf{e}_{1,1,\partial_1mn} }_{{\mathbf{v}_{\scriptscriptstyle2}}}. \end{aligned} \end{multline*}
Since ${\mathbf{v}_{\scriptscriptstyle2}}=\mathbf{e}_{1,m^\prime m,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m^\prime,\partial_1 mn}+\mathbf{e}_{1,1,\partial_1 mn} \in J_2$, we obtain ${\mathbf{v}_{\scriptscriptstyle2}}+J_2=0+J_2$ and thus
$$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l'{^{m^\prime}}l,m^\prime m,n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l^\prime,m^\prime,\partial_1 mn}.$$
There are redundancies among the $\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}$, so these do not form a basis for $\mathrm{Ker} \overline{\sigma}_3$. Similar relations hold for the $\overline{\mathbf{w}^{\scriptscriptstyle22}}_{l',m',n'}$.
\subsubsection{The Chain Complex $\overline{\delta}$ from $\overline{K(\mathfrak{C}^2)}$} \label{complex}
For any $\overline{\mathbf{e}}_{1,m,n}\in K(1\rtimes M\rtimes N)/ J_1$, we obtain
$$\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m,n}=\overline{\mathbf{e}}_{1,m,n}-\overline{\mathbf{e}}_{1,1,n}\in \mathrm{Ker} \overline{\sigma}_2=\mathrm{K}_2$$
and for any $\overline{\mathbf{e}}_{l,m,n}\in K(L\rtimes M\rtimes N) /J_2$, we obtain
$$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}=\overline{\mathbf{e}}_{l,m,n}-\overline{\mathbf{e}}_{1,m,n}\in \mathrm{Ker} \overline{\sigma}_3=\mathrm{K}_3.$$
For any $\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}\in \mathrm{K}_3$, we obtain
\begin{align*} \overline{\tau}_2 \overline{\tau}_3(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n})=&\overline{\tau}_2 \left(\overline{\tau}_3(\overline{\mathbf{e}}_{l,m,n}-\overline{\mathbf{e}}_{1,m,n})\right) \\ =&\overline{\tau}_2(\overline{\mathbf{e}}_{1,\partial_2 lm,n}-\overline{\mathbf{e}}_{1,m,n}) \\ =&\overline{\tau}_2\left((\overline{\mathbf{e}}_{1,\partial_2 lm,n}-\overline{\mathbf{e}}_{1,1,n})-(\overline{\mathbf{e}}_{1,m,n}-\overline{\mathbf{e}}_{1,1,n})\right)\\ =&\overline{\tau}_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,\partial_2 lm,n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m,n})\\ =&\mathbf{e}_{1,1,\partial_1\partial_2(l)\partial_1mn}-\mathbf{e}_{1,1,\partial_1mn}\\ =&\mathbf{e}_{1,1,\partial_1mn}-\mathbf{e}_{1,1,\partial_1mn} \hspace{4cm} (\because \partial_1\partial_2l=1)\\ =&0.
\end{align*}
Thus, we have the following chain complex of length-2:
$$ \begin{tikzcd} \overline{\delta}:=\mathrm{K}_3\ar[r,"\overline{\tau}_3"]&\mathrm{K}_2\ar[r,"\overline{\tau}_2"]& \mathrm{K}_1 \end{tikzcd} $$
where $\mathrm{K}_1=K(1\rtimes 1\rtimes N)$. Thus, $\overline{\delta}$ can be considered as an object of $\mathbf{Ch}^2_K$ by ignoring the algebra multiplications in each level, so the components of this chain complex are vector spaces and the boundaries are linear transformations. Now, we show that the construction of $\overline{K(\mathfrak{C}^2)}$ from $\mathfrak{C}^2$ is functorial. For a 2-crossed module
$$\xymatrix{\mathfrak{X}:=(L\ar[r]^-{\partial_2}&M\ar[r]^{\partial_1}&N),}$$
we obtained a Gray 3-(group)-groupoid with a single 0-cell:
$$ \xymatrix{\mathfrak{C}^2:=L\rtimes M \rtimes N \ar@<1ex>[r]^-{s_3,t_3} \ar@<0ex>[r]&1\rtimes M \rtimes N \ar@<1ex>[r]^-{s_2,t_2} \ar@<0ex>[r]\ar@<1ex>[l]^-{e_3}&1\rtimes 1\rtimes N\ar@<1ex>[l]^-{e_2}\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}} $$
and
$$ \xymatrix{ \overline{K(\mathfrak{C}^2)}:=K(L\rtimes M \rtimes N)/ J_2 \ar@<1ex>[r]^-{\overline{\sigma}_3,\overline{\tau}_3} \ar@<0ex>[r]& K(1\rtimes M \rtimes N)/J_1 \ar@<1ex>[r]^-{\overline{\sigma}_2,\overline{\tau}_2} \ar@<0ex>[r]\ar@<1ex>[l]^-{\overline{i}_3}& K(1\rtimes 1\rtimes N)\ar@<1ex>[l]^-{\overline{i}_2}\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}} $$
given above. Suppose now that $\mathfrak{C}^2_i:=((L_i\rtimes M_i\rtimes N_i),(1\rtimes M_i\rtimes N_i),(1\rtimes 1 \rtimes N_i),s_i,t_i,i_i)$ are Gray 3-(group)-groupoids with a single object $*$, for $i=1,2,3$, and $\phi:\mathfrak{C}^2_1\rightarrow \mathfrak{C}^2_2$ and $\psi:\mathfrak{C}^2_2\rightarrow \mathfrak{C}^2_3$ are (strict) Gray functors between them. Applying the functor $K(\cdot)$ at each level gives us the following diagram of group algebras:
$$ \xymatrix@C-=0.7cm{K(L_1\rtimes M_1\rtimes N_1)\ar@<1.0ex>[dd]|{\tau_3}\ar@<-1.0ex>[dd]|<<<<<{\scriptstyle \sigma_3}\ar[r]^-{\scriptscriptstyle K(\phi_3)}& K(L_2\rtimes M_2\rtimes N_2)\ar@<1.0ex>[dd]|{\tau_3}\ar@<-1.0ex>[dd]|<<<<<{\sigma_3}\ar[r]^-{\scriptscriptstyle K(\psi_3)}& K(L_3\rtimes M_3\rtimes N_3)\ar@<1.0ex>[dd]|{\scriptstyle \tau_3}\ar@<-1.0ex>[dd]|<<<<<{\scriptstyle \sigma_3}\\ \\ K(1\rtimes M_1\rtimes N_1)\ar@<1.0ex>[dd]|{\scriptstyle \tau_2}\ar@<-1.0ex>[dd]|<<<<<{\scriptstyle \sigma_2}\ar[r]^-{\scriptscriptstyle K(\phi_2)}&K(1\rtimes M_2\rtimes N_2)\ar[r]^-{\scriptscriptstyle K(\psi_2)}\ar@<1.0ex>[dd]|{\scriptstyle \tau_2}\ar@<-1.0ex>[dd]|<<<<<{\scriptstyle \sigma_2}&K(1\rtimes M_3\rtimes N_3)\ar@<1.0ex>[dd]|{\scriptstyle \tau_2}\ar@<-1.0ex>[dd]|<<<<<{\scriptstyle \sigma_2}\\ \\ K(1\rtimes 1\rtimes N_1)\ar[r]_-{\scriptscriptstyle K(\scriptstyle \phi_1)}&K(1\rtimes 1\rtimes N_2)\ar[r]_-{\scriptscriptstyle K(\psi_1)}&K(1\rtimes 1\rtimes N_3)} $$
where
$$K(\phi_1)(\mathbf{e}_{1,1,n_1})=\mathbf{e}_{1,1,{\phi_1}(n_1)}, \ \ K(\phi_2)(\mathbf{e}_{1,m_1,n_1})=\mathbf{e}_{{\phi_2}(1,m_1,n_1)}$$
and
$$K(\phi_3)(\mathbf{e}_{l_1,m_1,n_1})=\mathbf{e}_{{\phi_3}(l_1,m_1,n_1)}$$
with similar conditions for $K(\psi_i)$, $i=1,2,3.$ We form the ideals ${J}^i_2$ of $K(L_i\rtimes M_i\rtimes N_i)$ generated by elements of the form
\begin{align*} \mathbf{u}_i=&\mathbf{e}_{{l}^\prime_i l_i,m_i,n_i}-\mathbf{e}_{l_i,m_i,n_i}-\mathbf{e}_{{l}^\prime_i,\partial_2l_im_i,n_i}+\mathbf{e}_{1,\partial_2l_im_i,n_i}\\ \mathbf{v}_i=&\mathbf{e}_{{l'_i}^{{m'_i}}{l_i},{m}^\prime_im_i,n_i}-\mathbf{e}_{l_i,m_i,n_i}-\mathbf{e}_{{l}^\prime_i,{m}^\prime_i,\partial_1m_in_i}+\mathbf{e}_{1,1,\partial_1m_in_i}\\
\mathbf{v'}_i=&\mathbf{e}_{1,{m}^\prime_im_i,n_i}-\mathbf{e}_{1,m_i,n_i}-\mathbf{e}_{1,{m}^\prime_i,\partial_1m_in_i}+\mathbf{e}_{1,1,\partial_1m_in_i}\\ \intertext{and the ideals ${J}^i_1$ of $K(1\rtimes M_i\rtimes N_i)$ generated by the elements of the form} \mathbf{v'}_i=&\mathbf{e}_{1,{m}^\prime_im_i,n_i}-\mathbf{e}_{1,m_i,n_i}-\mathbf{e}_{1,{m}^\prime_i,\partial_1m_in_i}+\mathbf{e}_{1,1,\partial_1m_in_i} \end{align*}
for $l_i,{l}^\prime_i\in L_i$ , $m_i,{m}^\prime_i\in M_i$ , $n_i\in N_i$ , $i=1,2,3.$ Next, factor each $K(L_i\rtimes M_i\rtimes N_i)$ by the corresponding ideal ${J}^i_2$ and replace ${\sigma}^i_3,{\tau}^i_3$ by the induced $\overline{\sigma^i_3},\overline{\tau^i_3}$, and factor each $K(1\rtimes M_i\rtimes N_i)$ by the corresponding ideal ${J}^i_1$ and replace ${\sigma}^i_2, \ {\tau}^i_2$ by the induced $\overline{\sigma^i_2},\ \overline{\tau^i_2}$, for $i=1,2,3$. This gives us the following diagram:
$$ \xymatrix{K(L_1\rtimes M_1\rtimes N_1)/ {J^1_2}\ar@<1.0ex>[dd]^{\overline{\tau^1_3}}\ar@<-1.0ex>[dd]_{\scriptstyle \overline{\sigma^1_3}}\ar[r]^-{\scriptscriptstyle {\overline K(\phi_3)}}& K(L_2\rtimes M_2\rtimes N_2)/ {J^2_2}\ar@<1.0ex>[dd]^{\overline{\tau^2_3}}\ar@<-1.0ex>[dd]_{\overline{\sigma^2_3}}\ar[r]^-{\scriptscriptstyle {\overline K(\psi_3)}}& K(L_3\rtimes M_3\rtimes N_3)/ {J^3_2}\ar@<1.0ex>[dd]^{\overline{\tau^3_3}}\ar@<-1.0ex>[dd]_{\overline{\sigma^3_3}}\\ \\ K(1\rtimes M_1\rtimes N_1)/ {J^1_1}\ar@<1.0ex>[dd]^{\overline{\tau^1_2}}\ar@<-1.0ex>[dd]_{\overline{\sigma^1_2}}\ar[r]^-{\scriptscriptstyle {\overline K(\phi_2)}}&K(1\rtimes M_2\rtimes N_2)/ {J^2_1}\ar[r]^-{\scriptscriptstyle {\overline K(\psi_2)}}\ar@<1.0ex>[dd]^{\overline{\tau^2_2}}\ar@<-1.0ex>[dd]_{\overline{\sigma^2_2}}&K(1\rtimes M_3\rtimes N_3)/ {J^3_1}\ar@<1.0ex>[dd]^{\overline{\tau^3_2}}\ar@<-1.0ex>[dd]_{\overline{\sigma^3_2}}\\ \\ K(1\rtimes1\rtimes N_1)\ar[r]_-{\scriptscriptstyle \overline{K}(\scriptscriptstyle \phi_1)}&K(1\rtimes1\rtimes N_2)\ar[r]_-{\scriptscriptstyle {\overline K(\psi_1)}}&K(1\rtimes1\rtimes N_3)} $$
where $\overline{K}(\phi_i)$ and $\overline{K}(\psi_i)$ are induced by the quotient maps. Again, commutativity is satisfied. Define
$$\overline{K}(\mathfrak{C}^2):=\overline{K(\mathfrak{C}^2)}$$
and let $\overline{K}(\phi)$ be the Gray 3-(group) algebra groupoid map with $\overline{K}(\phi_i) \ (i=2,3)$ defined as above and
$$\overline{K}(\phi_1):=K(\phi_1) \ , \ \overline{K}(\psi_1):=K(\psi_1) .$$
Therefore, we have
$$\overline{K}(\psi\phi)=\overline{K}(\psi)\overline{K}(\phi)$$
and it is easy to see that $\overline{K}$ also preserves the identity morphisms of $\mathfrak{C}^2$, and so this gives a functor $\overline{K}$ from the category of Gray 3-(group)-groupoids to that of Gray 3-(group) algebra groupoids with a single object $*$.
\section{The Construction of Regular Representation as a 3-functor}
In this section, we give the right regular representation (Cayley's theorem) for the 2-crossed module $\xymatrix{\mathfrak{X}:=(L\ar[r]^-{\partial_2}&M\ar[r]^{\partial_1}&N)}$ and hence for its associated structure $\mathfrak{C}^2$.
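For orientation, the construction below is the level-wise extension of the classical right regular representation: for a group $N$, right multiplication induces linear automorphisms of the group algebra $K(N)$,
$$\lambda_n(\mathbf{e}_{p})=\mathbf{e}_{pn},\qquad \lambda_{nn^\prime}=\lambda_{n^\prime}\circ\lambda_{n},$$
and the reversal of the order of composition in the second equation is precisely why the 3-functor $\lambda$ defined below is contravariant.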
Consider the associated Gray 3-(group)-groupoid with a single object or equivalently cat$^2$-group:
$$ \xymatrix{\mathfrak{C}^2:=L\rtimes M \rtimes N \ar@<1ex>[r]^-{s_3,t_3} \ar@<0ex>[r]& 1\rtimes M \rtimes N \ar@<1ex>[r]^-{s_2,t_2} \ar@<0ex>[r]\ar@<1ex>[l]^-{e_3}& 1\rtimes1\rtimes N\ar@<1ex>[l]^-{e_2}\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}.} $$
In the previous section we obtained a chain complex of algebras
$$ \begin{tikzcd} \overline{\delta}:=\mathrm{K}_3\ar[r,"\overline{\tau}_3"]&\mathrm{K}_2\ar[r,"\overline{\tau}_2"]& \mathrm{K}_1 \end{tikzcd} $$
over the field $K$ from $\overline{K(\mathfrak{C}^2)}$. Consider the cat$^2$-group $\mathbf{Aut(\overline{\delta})}$ from \cite{Jinan}. This can be considered as a subcategory of $\mathbf{Ch}^2_K$. We will define a right regular representation as a lax 3-(contravariant) functor
$$\lambda:\mathfrak{C}^2 \rightarrow \mathbf{Aut(\overline{\delta})}.$$
This functor $\lambda$ will be organised as follows:
\begin{align*} \lambda:\mathfrak{C}^2& \xrightarrow{\makebox[2cm]{}} \mathbf{Aut(\overline{ \delta})}\\ *&\xmapsto{\hspace{2cm}}\overline{\delta}\ (\text{single object})\\ (1,1,n)&\xmapsto{\hspace{2cm}} \lambda_n=({\lambda}^0_n,{\lambda}^1_n,{\lambda}^2_n)\ (\text{chain map})\\ (1,m,n)&\xmapsto{\hspace{2cm}} \lambda_{m,n}=(({\lambda}^\prime_{m,n},{\lambda}^{\prime \prime}_{m,n}),\lambda_n)\ (\text{1-homotopy})\\ (l,m,n)&\xmapsto{\hspace{2cm}}\lambda_{l,m,n}=(\alpha_{l,m,n},({\lambda}^\prime_{m,n},{\lambda}^{\prime \prime}_{m,n}),\lambda_n) \ (\text{2-homotopy}). \end{align*}
\textbf{Remark:} We can summarise this as follows: any representation of $\mathfrak{C}^2$ maps elements of $1\rtimes 1 \rtimes N$ to chain automorphisms in $\mathbf{(Aut\overline{ \delta})}_{1}$, elements of $1\rtimes M \rtimes N$ to 1-homotopies in $\mathbf{(Aut\overline{ \delta})}_{2}$ and elements of $L\rtimes M \rtimes N$ to 2-homotopies in $\mathbf{(Aut\overline{ \delta})}_{3}$, for some representation chain complex $\overline{\delta}$ of length-2 of algebras (or of vector spaces, by ignoring the multiplication). We can picture an action of $\mathfrak{C}^2$ on $\overline{K(\mathfrak{C}^2)}$ by right multiplication. Of course, this is drawn as a left action: the elements of $\mathfrak{C}^2$ appear on the left side of the pictures, while they appear on the right in the algebraic notation. In the following pictures, the broken arrows show elements of $\mathfrak{C}^2$ and the unbroken arrows are as in $\overline{K(\mathfrak{C}^2)}$. A 1-cell in $\mathfrak{C}^2$ is an element $(1,1,n)\in 1\rtimes 1\rtimes N$. This can act on the 1-cells, 2-cells and 3-cells of $\overline{K(\mathfrak{C}^2)}$.
The action of $(1,1,n)$ on a 1-cell is; $$ \begin{tikzcd} \ar[rr,densely dotted,"{\scriptscriptstyle{(1,1,n)}}"]&&{}\ar[rr,"{\mathbf{e}_{1,1,n'}}"]&& := \ar[rr,"{\mathbf{e}_{1,1,n'n}}"]&&{} \end{tikzcd} $$ The action on a 2-cell is similar: $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & & \ar[dd,Rightarrow,"{\overline{\mathbf{e}}_{1,m',n'}}"{description}] & \\ \ar[r,"{\scriptscriptstyle{ (1,1,n)}}",densely dotted]& \ar[rr,bend left=40,"{\mathbf{e}_{1,1,n'}}"] \ar[rr,bend right=50,"{\mathbf{e}_{1,1,\partial_1m'n'}}"'] & & {} \\ \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ar[dd, Rightarrow, "{\overline{\mathbf{e}}_{1,m',n'n}}"{description}] & \\ \ar[rr,bend left=40,"{\mathbf{e}_{1,1,n'n}}"] \ar[rr,bend right=50,"{\mathbf{e}_{1,1,\partial_1m'n'n}}"'] & &{}\\ & \ \end{tikzcd} \end{array} $$ The action of $(1,m,n)$ on a 1-cell is given by $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ar[Rightarrow,"{\scriptscriptstyle (1,m,n)}",dotted]{dd} & & \\ \ar[rr,bend left=40,"{\scriptscriptstyle{(1,1,n)}}",dotted] \ar[rr,bend right=50,"{\scriptscriptstyle{(1,1,\partial_1 mn)}}"',dotted] & &\ar[r,"{\mathbf{e}_{1,1,n'}}"] & {} \\ \ & \ & \ & \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \ar[Rightarrow,"{\overline{\mathbf{e}}_{1,^{n'}m,n'n}}"{description}]{dd}&\\ \ar[rr,bend left=40,"{\mathbf{e}_{1,1,n'n}}"] \ar[rr,bend right=50,"{\mathbf{e}_{1,1,n'\partial_1mn}}"'] & & {} \\ \ & \ & \end{tikzcd} \end{array} $$ The action of $(1,1,n)$ on a 3-cell is given by: $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & & \tarrow["\scriptscriptstyle{\overline{\mathbf{e}}_{l',m',n'}}"]{dd} & \\ \ar[r,"\scriptscriptstyle {(1,1,n)}",densely dotted] &\ar[rr,Rightarrow,bend left=40,"{\mathbf{e}_{1,m',n'}}"] \ar[rr,Rightarrow,bend right=50,"{\mathbf{e}_{1,\partial_2l'm',n'}}"'] & &{} \\ \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \tarrow["{\scriptscriptstyle{\overline{\mathbf{e}}_{l',m',n'n}}}" {description} ]{dd}&\\ \ar[rr,Rightarrow,bend left=40,"{\mathbf{e}_{1,m',n'n}}"] \ar[rr,Rightarrow,bend right=50,"{\mathbf{e}_{1,\partial_2l'm',n'n}}"'] & & {} \\ \ & \ & \end{tikzcd} \end{array} $$ The action of $(1,m,n)$ on a 3-cell is given by: $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & & \tarrow["{\scriptscriptstyle{\overline{\mathbf{e}}_{l',m',n'}}}"]{dd} & \\ \ar[r,Rightarrow,"\scriptscriptstyle {(1,m,n)}",dotted] & \ar[rr,Rightarrow,bend left=40,"{\mathbf{e}_{1,m',n'}}"] \ar[rr,Rightarrow,bend right=50,"{\mathbf{e}_{1,\partial_2l'm',n'}}"'] & &{} \\ \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \tarrow["{\scriptscriptstyle{\overline{\mathbf{e}}_{l',m'^{n'}m,n'n}}}"{description}]{dd}&\\ \ar[rr,Rightarrow,bend left=40,"{\mathbf{e}_{1,m'^{n'}m,n'n}}"] \ar[rr,Rightarrow,bend right=50,"{\mathbf{e}_{1,\partial_2l'm'^{n'}m,n'n}}"'] & & {} \\ \ & \ & \end{tikzcd} \end{array} $$ The action of $(l,m,n)$ on a 1-cell is given by $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=normal] & \tarrow["{\scriptscriptstyle(l,m,n)}",dotted]{dd} & & \\ \ar[rr,Rightarrow,bend left=40,"{\scriptscriptstyle{(1,m,n)}}",dotted] \ar[rr,Rightarrow,bend right=50,"{\scriptscriptstyle{(1,\partial_2lm,n)}}"',dotted] & &\ar[r,"{\mathbf{e}_{1,1,n'}}"] & {} \\ \ & \ & \ & \end{tikzcd} \end{array} {:=} \begin{array}{c} 
\begin{tikzcd}[row sep=small,column sep=normal] & \tarrow["{\scriptscriptstyle{\overline{\mathbf{e}}_{^{n'}l,^{n'}m,n'n}}}"{description}]{dd}&\\ \ar[rr,Rightarrow,bend left=40,"{\mathbf{e}_{1,^{n'}m,n'n}}"] \ar[rr,Rightarrow,bend right=50,"{\mathbf{e}_{1,^{n'}(\partial_2lm),n'n}}"'] & & {} \\ \ & \ & \end{tikzcd} \end{array} $$ Using these actions, we can give the construction of the 3-functor $\lambda$ on each level as follows: \subsection{$\lambda$ over 0-cells} Since the only 0-cell of $\mathfrak{C}^2$ is $*$ and the only 0-cell of $\mathbf{Aut(\overline{ \delta})}$ is $\overline{\delta}$, we can define $\lambda(*)=\overline{\delta}$. Thus, we have $$\lambda_{*}=\overline{\delta} \in \mathbf{(Aut \overline{ \delta})}_{0}.$$ \subsection{$\lambda$ over 1-cells} For any 1-cell $(1,1,n):*\rightarrow *$ of $\mathfrak{C}^2$ with $n\in N$, using the action of $(1,1,n)$ on the cells in $\overline{K(\mathfrak{C}^2)}$ given in the above pictures, and writing $\lambda_n$ for the image of $(1,1,n)$ under $\lambda$, we can define $\lambda_n:\overline{\delta}\rightarrow \overline{\delta}$ as follows: $$ \begin{array}{c} \xymatrix{\overline{\delta}\ar[d]_-{\lambda_n}\\ \overline{\delta}} \end{array} {:=} \begin{array}{c} \xymatrix{\mathrm{K}_3 \ar[r]^-{\overline{\tau}_3}\ar[d]^{{\lambda}^2_n}& \mathrm{K}_2 \ar[r]^-{\overline{\tau}_2}\ar[d]^-{{\lambda}^1_n}&\mathrm{K}_1\ar[d]^-{{\lambda}^0_n}\\ \mathrm{K}_3\ar[r]_-{\overline{\tau}_3} & \mathrm{K}_2 \ar[r]_-{\overline{\tau}_2} &\mathrm{K}_1 } \end{array} $$ where $${\lambda}^0_n(\mathbf{e}_{1,1,n^\prime})=\mathbf{e}_{1,1,n^\prime n} \ ,\ \ {\lambda}^1_n (\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m^\prime,n^\prime})= \overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m^\prime,n^\prime n}\ ,\ \ {\lambda}^2_n(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'n}$$ for each $\mathbf{e}_{1,1,n'}\in \mathrm{K}_1$,\ \ $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'}\in\mathrm{K}_2$ and $\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'}\in \mathrm{K}_3$. The maps ${\lambda}^0_n,{\lambda}^1_n$ and ${\lambda}^2_n$ are linear automorphisms of $\mathrm{K}_1$, $\mathrm{K}_2$ and $\mathrm{K}_3$, respectively. We will show that $\lambda_n=({\lambda}^0_n,{\lambda}^1_n,{\lambda}^2_n)$ is a chain automorphism in $\mathbf{Aut(\overline{ \delta})}$; that is, we must show that the last diagram is commutative.
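Explicitly, the commutativity of the last diagram amounts to the two identities $$\overline\tau_2 \lambda^1_n=\lambda^0_n \overline\tau_2 \qquad \text{and} \qquad \overline\tau_3\lambda^2_n=\lambda^1_n \overline\tau_3,$$ which we now verify on generators.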
For any element $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'}\in\mathrm{K}_2$, we obtain \begin{align*} \overline\tau_2 {\lambda}^1_n(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=&\overline\tau_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n})\\ =&\overline\tau_2(\overline{\mathbf{e}}_{1,m',n'n}-\overline{\mathbf{e}}_{1,1,n'n})\\ =&\mathbf{e}_{1,1,\partial_1m'n'n}-\mathbf{e}_{1,1,n'n}\in \mathrm{K}_1 \end{align*} and \begin{align*} {\lambda}^0_n \overline\tau_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=&{\lambda}^0_n \overline\tau_2(\overline{\mathbf{e}}_{1,m',n'}-\overline{\mathbf{e}}_{1,1,n'})\\ =&{\lambda}^0_n(\mathbf{e}_{1,1,\partial_1m'n'}-\mathbf{e}_{1,1,n'})\\ =&\mathbf{e}_{1,1,\partial_1m'n'n}-\mathbf{e}_{1,1,n'n}\in \mathrm{K}_1 \end{align*} and so $$\overline\tau_2 \lambda^1_n=\lambda^0_n \overline\tau_2.$$ For any $\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'}\in \mathrm{K}_3$, we obtain \begin{align*} \overline\tau_3 \lambda^2_n(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'})=&\overline\tau_3(\lambda^2_n(\overline{\mathbf{e}}_{l',m',n'}-\overline{\mathbf{e}}_{1,m',n'}))\\ =&\overline\tau_3(\overline{\mathbf{e}}_{l',m',n'n}-\overline{\mathbf{e}}_{1,m',n'n})\\ =&\overline{\mathbf{e}}_{1,\partial_2l'm',n'n}-\overline{\mathbf{e}}_{1,m',n'n}\\ =&(\overline{\mathbf{e}}_{1,\partial_2l'm',n'n}-\overline{\mathbf{e}}_{1,1,n'n})-(\overline{\mathbf{e}}_{1,m',n'n}-\overline{\mathbf{e}}_{1,1,n'n})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,\partial_2l'm',n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}\in \mathrm{K}_2 \end{align*} and \begin{align*} \lambda^1_n \overline\tau_3(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'})=&\lambda^1_n(\overline\tau_3(\overline{\mathbf{e}}_{l',m',n'}-\overline{\mathbf{e}}_{1,m',n'}))\\ =&\lambda^1_n (\overline{\mathbf{e}}_{1,\partial_2l'm',n'}-\overline{\mathbf{e}}_{1,m',n'})\\ =&\lambda^1_n ((\overline{\mathbf{e}}_{1,\partial_2l'm',n'}-\overline{\mathbf{e}}_{1,1,n'})-(\overline{\mathbf{e}}_{1,m',n'}-\overline{\mathbf{e}}_{1,1,n'}))\\ =&\lambda^1_n (\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,\partial_2l'm',n'}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,\partial_2l'm',n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}\in \mathrm{K}_2 \end{align*} and so $$\overline\tau_3\lambda^2_n=\lambda^1_n \overline\tau_3.$$ Therefore, $\lambda_n=({\lambda}^0_n,{\lambda}^1_n,{\lambda}^2_n)$ is a chain automorphism over $\overline{\delta}$ in $\mathbf{(Aut \overline{ \delta})}_{1}.$ \subsubsection{$\lambda_n$ preserves the composition of 1-cells} For any 1-cells $n,n':{*} \longrightarrow {*} $ in $\mathfrak{C}^2$, and for any ${\mathbf{e}}_{1,1,p}\in \mathrm{K}_1$, we have $$\lambda^0_{nn'}({\mathbf{e}}_{1,1,p})={\mathbf{e}}_{1,1,p(nn')}={\mathbf{e}}_{1,1,(pn)n'}$$ and $$({\lambda}^0_{n'}\circ{\lambda}^0_n)({\mathbf{e}}_{1,1,p})={\lambda}^0_{n'}({\lambda}^0_n({\mathbf{e}}_{1,1,p}))={\lambda}^0_{n'}({\mathbf{e}}_{1,1,pn})={\mathbf{e}}_{1,1,(pn)n'}$$ and so $${\lambda}^0_{nn'}={\lambda}^0_{n'}\circ{\lambda}^0_n.$$ For any $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,p}\in\mathrm{K}_2$, we have \begin{align*} \lambda^1_{nn'}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,p})=&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,p(nn')}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,(pn)n'}\\ =&\lambda^1_{n'}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,pn})\\
=&\lambda^1_{n'}(\lambda^1_{n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,p}))\\ =&({\lambda}^1_{n'}\circ{\lambda}^1_n)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,k,p}) \end{align*} and so ${\lambda}^1_{nn'}={\lambda}^1_{n'}\circ{\lambda}^1_n $. Similarly, we have ${\lambda}^2_{nn'}={\lambda}^2_{n'}\circ{\lambda}^2_n$. \subsection{$\lambda$ over 2-cells} For any 2-cell $(1,m,n)\in 1\rtimes M\rtimes N$, there must be a corresponding 2-cell $\lambda_{m,n}\in \mathbf{(Aut\overline{ \delta})}_{2}$ as a 1-homotopy. Since $\lambda$ must be a functor, it must preserve the source and target of each 2-cell; hence $\lambda_{m,n}$ will be a homotopy in $\mathbf{Ch}^2_K$ from $\lambda_n$ to $\lambda_{\partial_1mn}$. We write it in terms of its chain homotopy components and its source as $\lambda_{m,n}:=((\lambda'_{m,n},\lambda''_{m,n}),\lambda_n)$, where $\lambda'_{m,n}$,\ $\lambda''_{m,n}$ are the chain homotopy components and $\lambda_n$ is the source of $\lambda_{m,n}$. Here, the chain homotopy components are $\lambda'_{m,n}:\mathrm{K}_1\longrightarrow \mathrm{K}_2$ and $\lambda''_{m,n}:\mathrm{K}_2\longrightarrow \mathrm{K}_3$. These maps can be shown in the following diagram: \begin{equation*} \xymatrix{\mathrm{K}_3 \ar[rr]^-{\overline{\tau}_3}\ar@<0.5ex>[dd]^-{\scriptscriptstyle\lambda^2_{\partial_1mn}}\ar@<-0.5ex>[dd]_-{{\scriptscriptstyle\lambda}^2_n} &&\mathrm{K}_2 \ar[ddll]|<<<<<<<{\Large\boldsymbol\lambda''_{m,n}} \ar[rr]^-{\overline{\tau}_2}\ar@<0.5ex>[dd]^-{\scriptscriptstyle\lambda^1_{\partial_1mn}}\ar@<-0.5ex>[dd]_-{\scriptscriptstyle{\lambda}^1_n}&&\mathrm{K}_1\ar[ddll]|<<<<<<<{\Large\boldsymbol\lambda'_{m,n}}\ar@<0.5ex>[dd]^-{\scriptscriptstyle\lambda^0_{\partial_1mn}}\ar@<-0.5ex>[dd]_-{\scriptscriptstyle{\lambda}^0_n} \\ \\ \mathrm{K}_3\ar[rr]_{\overline{\tau}_3} && \mathrm{K}_2 \ar[rr]_{\overline{\tau}_2} \ar[rr] &&\mathrm{K}_1.} \end{equation*} We must check that $\lambda_{m,n}$ satisfies the chain homotopy conditions given below: \begin{enumerate} \item $\overline\tau_2\lambda'_{m,n}=\lambda^0_{\partial_1mn}-\lambda^0_{n}$, \item $\lambda'_{m,n}\overline\tau_2+\overline\tau_3 \lambda''_{m,n}=\lambda^1_{\partial_1mn}-\lambda^1_{n},$ \item $\lambda''_{m,n}\overline\tau_3=\lambda^2_{\partial_1mn}-\lambda^2_{n}.$ \end{enumerate} We define, for any ${\mathbf{e}}_{1,1,n'}\in \mathrm{K}_1$, the map $\lambda'_{m,n}:\mathrm{K}_1\longrightarrow \mathrm{K}_2$ by $$\lambda'_{m,n}({\mathbf{e}}_{1,1,n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{(1,1,n')(1,m,n)}=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n}$$ and for any $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'}\in \mathrm{K}_2$, the map $\lambda''_{m,n}:\mathrm{K}_2\longrightarrow \mathrm{K}_3$ by $$\lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(1,m',n')\#(1,m,n)} =\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m',^{n'}m\},m'^{n'}m,n'n},$$ where $(1,m',n')\#(1,m,n)=({\{m',^{n'}m\},m'^{n'}m,n'n})$ is the interchange 3-cell of the 2-cells $(1,m,n)$ and $(1,m',n')$ in $\mathfrak{C}^2$ and $\{-,-\}:M\times M\longrightarrow L$ is the Peiffer lifting of the 2-crossed module $\mathfrak{X}$.
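As a quick consistency check of these formulas, take $m=1$, that is, consider the identity 2-cell on the 1-cell $(1,1,n)$. Assuming the usual normalisation $\{m',1\}=1$ of the Peiffer lifting, both homotopy components vanish: $$\lambda'_{1,n}(\mathbf{e}_{1,1,n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,1,n'n}=\overline{\mathbf{e}}_{1,1,n'n}-\overline{\mathbf{e}}_{1,1,n'n}=\overline{0} \qquad \text{and} \qquad \lambda''_{1,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m',1\},m',n'n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{1,m',n'n}=\overline{0},$$ so $\lambda_{1,n}$ is the trivial 1-homotopy from $\lambda_n$ to $\lambda_{\partial_1(1)n}=\lambda_n$, as expected.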
We shall check the first condition. $1.$ For any ${\mathbf{e}}_{1,1,n'}\in \mathrm{K}_1$, we obtain \begin{align*} \overline\tau_2 \lambda'_{m,n}({\mathbf{e}}_{1,1,n'})=&\overline\tau_2(\overline{{\mathbf{v}}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n})\\ =&\overline\tau_2(\overline {\mathbf{e}}_{1,^{n'}m,n'n}-\overline {\mathbf{e}}_{1,1,n'n})\\ =&{\mathbf{e}}_{1,1,\partial_1(^{n'}m)n'n}-{\mathbf{e}}_{1,1,n'n}\\ =&{\mathbf{e}}_{1,1,n'\partial_1mn}-{\mathbf{e}}_{1,1,n'n}\\ =&\lambda^0_{\partial_1mn}({\mathbf{e}}_{1,1,n'})-\lambda^0_n({\mathbf{e}}_{1,1,n'})\\ =&(\lambda^0_{\partial_1mn}-\lambda^0_n)({\mathbf{e}}_{1,1,n'}). \end{align*} Thus, we have $\overline\tau_2 \lambda'_{m,n}=\lambda^0_{\partial_1mn}-\lambda^0_n$. $2.$ We must show that $$\lambda'_{m,n} \overline\tau_2+\overline\tau_3 \lambda''_{m,n}=\lambda^1_{\partial_1mn}-\lambda^1_n.$$ For any element $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'}\in \mathrm{K}_2$, we obtain $$\overline\tau_3 \lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline\tau_3(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m',^{n'}m\},m'^{n'}m,n'n})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m'^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,\partial_2 \{m',^{n'}m\}m'^{n'}m,n'n}.$$ Since $\partial_2\{m',^{n'}m\}=^{\partial_1m'}(^{n'}m)m'(^{n'}m)^{-1}{m'}^{-1}$, we obtain $$\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,\partial_2 \{m',^{n'}m\}m'(^{n'}m),n'n}=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m)m',n'n}$$ and then $$\overline\tau_3\lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m'^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m)m',n'n}.$$ On the other hand, $$(\lambda^1_{\partial_1mn}-\lambda^1_n)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\lambda^1_{\partial_1mn}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})-\lambda^1_n(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'\partial_1mn}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}$$ and \begin{align*} \lambda'_{m,n}\overline\tau_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=&\lambda'_{m,n}\overline\tau_2(\overline {\mathbf{e}}_{1,m',n'}-\overline {\mathbf{e}}_{1,1,n'})\\ =&\lambda'_{m,n}(\mathbf{e}_{1,1,\partial_1m'n'}-\mathbf{e}_{1,1,n'})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m),\partial_1m'n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n}. \end{align*} Thus we obtain \begin{align*} (\lambda^1_{\partial_1mn}-\lambda^1_{n}-\lambda'_{m,n}\overline\tau_2)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'\partial_1mn}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m),\partial_1m'n'n}\\ =&(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n})-(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m),\partial_1m'n'n}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}).
\end{align*} We know that $${\mathbf{u}_{\scriptscriptstyle2}}=\mathbf{e}_{1,m'm,n}-\mathbf{e}_{1,m,n}-\mathbf{e}_{1,m',\partial_1mn}+\mathbf{e}_{1,1,\partial_1mn}\in J_1.$$ We can rewrite this expression as follows: $$(\mathbf{e}_{1,m'm,n}-\mathbf{e}_{1,1,n})-(\mathbf{e}_{1,m,n}-\mathbf{e}_{1,1,n})-(\mathbf{e}_{1,m',\partial_1mn}-\mathbf{e}_{1,1,\partial_1mn})= {\mathbf{v}^{\scriptscriptstyle11}}_{1,m'm,n}-{\mathbf{v}^{\scriptscriptstyle11}}_{1,m,n}-{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',\partial_1mn}\in J_1$$ and thus, we have $$\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m'm,n}=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m,n}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',\partial_1mn}.$$ Using this equality (for the first identity below, replace $m$ by $^{n'}m$ and $n$ by $n'n$; for the second, replace $m$ by $m'$, $m'$ by $^{\partial_1m'}(^{n'}m)$ and $n$ by $n'n$), we obtain $$\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n}=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m'^{n'}m,n'n}$$ and $$\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m),\partial_1m'n'n}=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m)m',n'n}.$$ Thus, we obtain $$(\lambda^1_{\partial_1mn}-\lambda^1_{n}-\lambda'_{m,n} \overline\tau_2)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m'^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{\partial_1m'}(^{n'}m)m',n'n}= \overline\tau_3(\lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})).$$ Therefore we have $\lambda^1_{\partial_1mn}-\lambda^1_{n}-\lambda'_{m,n} \overline\tau_2=\overline\tau_3 \lambda''_{m,n}$, and thus $\overline\tau_3\lambda''_{m,n}+\lambda'_{m,n}\overline\tau_2=\lambda^1_{\partial_1mn}-\lambda^1_{n}.$ Thus, the second chain homotopy condition is satisfied. In Section \ref{sect:App5} of the Appendix, we show that the third chain homotopy condition, $\lambda''_{m,n} \overline\tau_3=\lambda^2_{\partial_1mn}-\lambda^2_{n}$, is also satisfied. \subsubsection{$\lambda_{m,n}$ preserves the vertical composition of 2-cells} We must show that $$\lambda_{(m',\partial_1mn)\#_2(m,n)}=\lambda_{m'm,n}=\lambda_{(m',\partial_1mn)}\#_2\lambda_{(m,n)}$$ for the vertical composition $\#_2$ of 2-cells in $\mathfrak{C}^2$.
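By the vertical composition formula recalled below, this amounts to the componentwise identities $$\lambda'_{m'm,n}=\lambda'_{m',\partial_1mn}+\lambda'_{m,n} \qquad \text{and} \qquad \lambda''_{m'm,n}=\lambda''_{m',\partial_1mn}+\lambda''_{m,n},$$ the first of which is verified here and the second in Section \ref{sect:App1} of the Appendix.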
Consider the following diagram: $$ \begin{tikzcd} \mathrm{K}_3 \arrow[rrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^2_n}}"description,pos=.75]\arrow[rrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^2_{\partial_1mn}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_3}"'] &&&\mathrm{K}_3\arrow[rrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^2_{\partial_1mn}}"description,pos=.70]\arrow[rrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^2_{\partial_1m'n'}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_3}"] &&& \mathrm{K}_3 \arrow[ddd,"{\overline{\tau}_3}"] \\ \\ \\ \mathrm{K}_2 \arrow[rrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^1_n}}"description,pos=.75]\arrow[rrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^1_{\partial_1mn}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_2}"']\arrow[uuurrr,"{\scriptscriptstyle{H'_2}}"{sloped,above=-0.3ex,xshift=0.0em}]&&&\mathrm{K}_2 \arrow[rrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^1_{\partial_1mn}}"description,pos=.70]\arrow[rrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^1_{\partial_1m'n'}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_2}"]\arrow[uuurrr,"{\scriptscriptstyle{K'_2}}"{sloped,above=-0.3ex,xshift=0.0em}] &&&\mathrm{K}_2 \arrow[ddd,"{\overline{\tau}_2}"]\\ \\ \\ \mathrm{K}_1 \arrow[rrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^0_n}}"description,pos=.75]\arrow[rrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^0_{\partial_1mn}}"description,pos=.35] \arrow[uuurrr,"{\scriptscriptstyle {H'_1}}"{sloped,above=-0.3ex,xshift=0.0em}] &&&\mathrm{K}_1 \arrow[rrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^{0}_{\partial_{1}mn}}"description,pos=.70]\arrow[rrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^{0}_{\partial_{1}m'n'}}"description,pos=.35]\arrow[uuurrr,"{\scriptscriptstyle{K'_1}}"{sloped,above=-0.3ex,xshift=0.0em}] &&&\mathrm{K}_1 \end{tikzcd} $$ where $n'=\partial_1 mn$. We can take $$(H,F)=((H'_1,H'_2),F)=((\lambda'_{m,n},\lambda''_{m,n}),\lambda_n):F\Rightarrow G$$ and $$F=\lambda_n=({\lambda}^0_{n},{\lambda}^1_{n},{\lambda}^2_{n}); \ \ G=\lambda_{\partial_1mn}=({\lambda}^0_{\partial_1mn},{\lambda}^1_{\partial_1mn},{\lambda}^2_{\partial_1mn})$$ and $$(K,G)=((K'_1,K'_2),G)=((\lambda'_{m',\partial_1mn},\lambda''_{m',\partial_1mn}),G):G\Rightarrow T$$ where $$T=\lambda_{\partial_1m'\partial_1mn}=({\lambda}^0_{\partial_1m'\partial_1mn},{\lambda}^1_{\partial_1m'\partial_1mn},{\lambda}^2_{\partial_1m'\partial_1mn}).$$ In $\mathbf{(Aut\overline{ \delta})}_{2}$, the vertical composition of 1-homotopies is given by $$(K,G)\#_2(H,F)=(K+H,F)=((K'_1+H'_1,K'_2+H'_2),F).$$ For any $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we obtain $\lambda'_{m'm,n}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}{(m'm)},pn}\in\mathrm{K}_2$.
From the relation ${\mathbf{u}_{\scriptscriptstyle2}}\in J_1$, we can write $$\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}{(m'm)},pn} =\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m',p\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn}$$ and then \begin{align*} \lambda'_{(m',\partial_1mn)\#_2(m,n)}(\mathbf{e}_{1,1,p})=&\lambda'_{m'm,n}(\mathbf{e}_{1,1,p})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}{(m'm)},pn}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m',p\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn}\\ =&\lambda'_{m',\partial_1mn}(\mathbf{e}_{1,1,p})+\lambda'_{m,n}(\mathbf{e}_{1,1,p})\\ =&(K'_1+H'_1)(\mathbf{e}_{1,1,p}) \end{align*} and thus $\lambda'_{(m',\partial_1mn)\#_2(m,n)}=K'_1+H'_1$. We show the equality $\lambda''_{(m',\partial_1mn)\#_2(m,n)}=K'_2+H'_2$ in Section \ref{sect:App1} of the Appendix. Therefore, $\lambda_{m,n}$ preserves the vertical composition of 2-cells. \subsubsection{$\lambda_{m,n}$ preserves the horizontal composition of 2-cells } For any 2-cells $\Gamma=(m,n)$ and $\Gamma'=(m',n')$ in $M\rtimes N$, their horizontal compositions are $$\begin{bmatrix} &\Gamma'\\ \Gamma& \end{bmatrix}=(m^{n}m',nn') \ \ \mathrm{and} \ \ \begin{bmatrix} \Gamma&\\ &\Gamma' \end{bmatrix}=\left(^{\partial_1m}{(^{n}m')}m,nn'\right).$$ We need to show that $$ \lambda_{m^{n}m',nn'}=\begin{bmatrix} &\lambda_{m',n'}\\ \lambda_{m,n}& \end{bmatrix}\ \ \mathrm{and} \ \ \lambda_{^{\partial_1m}{(^{n}m')}m,nn'}=\begin{bmatrix} \lambda_{m,n}&\\ &\lambda_{m',n'}\end{bmatrix}. $$ We show these equalities in Section \ref{sect:App2} of the Appendix. In $\mathbf{(Aut\overline{ \delta})}_{2}$, suppose that $(H,F)$ and $(K,G)$ are 1-homotopies as given in $\mathbf{Ch}^2_K$. In $M\rtimes N$, the horizontal composition can be pictured as $$ \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] & \ar[dd,Rightarrow,"\scriptscriptstyle{(m,n)}"{description}] & & \ar[dd,Rightarrow,"\scriptscriptstyle{(m',n')}"{description}]& \\ {*}\ar[rr,bend left=50,"\scriptscriptstyle n"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1mn"']& &{*}\ar[rr,bend left=50,"\scriptscriptstyle n'"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1m'n'"'] & & {*} \\ \ & \ & \ & \ \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=0.8cm] & \ar[dd, Rightarrow, "\scriptscriptstyle{(m^{n}m',nn')}"{description}] & \\ {*}\ar[rr,bend left=50,"\scriptscriptstyle nn'"] \ar[rr,bend right=50,"\scriptscriptstyle \partial_1mn\partial_1m'n'"'] & &{*}\\ & \ \end{tikzcd} \end{array} $$ The image of this diagram under $\lambda$ in $\mathbf{Aut}\overline{\delta}$ can be represented by the diagram: $$ \begin{tikzcd} \mathrm{K}_3 \arrow[rrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^2_n}}"description,pos=.75]\arrow[rrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^2_{\partial_1mn}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_3}"'] &&&\mathrm{K}_3\arrow[rrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^2_{n'}}"description,pos=.70]\arrow[rrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^2_{\partial_1m'n'}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_3}"] &&& \mathrm{K}_3 \arrow[ddd,"{\overline{\tau}_3}"] \\ \\ \\ \mathrm{K}_2 \arrow[rrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^1_n}}"description,pos=.75]\arrow[rrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^1_{\partial_1mn}}"description,pos=.35]
\arrow[ddd,"{\overline{\tau}_2}"']\arrow[uuurrr,"{\scriptscriptstyle{H'_2}}"{sloped,above=-0.3ex,xshift=0.0em}]&&&\mathrm{K}_2 \arrow[rrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^1_{n'}}"description,pos=.70]\arrow[rrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^1_{\partial_1m'n'}}"description,pos=.35] \arrow[ddd,"{\overline{\tau}_2}"]\arrow[uuurrr,"{\scriptscriptstyle{K'_2}}"{sloped,above=-0.3ex,xshift=0.0em}] &&&\mathrm{K}_2 \arrow[ddd,"{\overline{\tau}_2}"]\\ \\ \\ \mathrm{K}_1 \arrow[rrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^0_n}}"description,pos=.75]\arrow[rrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^0_{\partial_1mn}}"description,pos=.35] \arrow[uuurrr,"{\scriptscriptstyle {H'_1}}"{sloped,above=-0.3ex,xshift=0.0em}] &&&\mathrm{K}_1 \arrow[rrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^{0}_{n'}}"description,pos=.70]\arrow[rrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^{0}_{\partial_{1}m'n'}}"description,pos=.35]\arrow[uuurrr,"{\scriptscriptstyle{K'_1}}"{sloped,above=-0.3ex,xshift=0.0em}] &&&\mathrm{K}_1 \end{tikzcd} $$ It must be that $$\lambda_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}=\lambda_{((^{\partial_1mn}m')m,nn')}=((\lambda^1_{n'}H'_1+K'_1\lambda^0_{\partial_1mn}), (\lambda^2_{n'}H'_2+K'_2\lambda^1_{\partial_1mn}),FG)$$ where $$F=(\lambda^0_n,\lambda^1_n,\lambda^2_n), \ \ G=(\lambda^0_{n'},\lambda^1_{n'},\lambda^2_{n'})$$ and $$ F'=(\lambda^0_{\partial_1mn},\lambda^1_{\partial_1mn},\lambda^2_{\partial_1mn}), \ \ G'=(\lambda^0_{\partial_1m'n'},\lambda^1_{\partial_1m'n'},\lambda^2_{\partial_1m'n'})$$ and $$H'_1=\lambda'_{m,n},\ \ H'_2=\lambda''_{m,n},\ \ K'_1=\lambda'_{m',n'},\ \ K'_2=\lambda''_{m',n'}.$$ In this diagram, for any $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we obtain \begin{align*} (\lambda^1_{n'}H'_1+K'_1 \lambda^0_{\partial_1mn})(\mathbf{e}_{1,1,p})=&\lambda^1_{n'}H'_1(\mathbf{e}_{1,1,p})+K'_1\lambda^0_{\partial_1mn}(\mathbf{e}_{1,1,p})\\ =&\lambda^1_{n'}(\lambda'_{m,n}(\mathbf{e}_{1,1,p}))+\lambda^1_{m',n'}(\lambda^0_{\partial_1mn}(\mathbf{e}_{1,1,p}))\\ =&\lambda^1_{n'}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn})+\lambda^1_{m',n'}(\mathbf{e}_{1,1,p\partial_1mn})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pnn'}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p\partial_1mn}(m'),p\partial_1mnn'}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p\partial_1mn}(m')^pm,pnn'} \ (\because \text{relation (\ref{3})})\\ =&\lambda'_{^{\partial_1m}(^{n}m')m,nn'}(\mathbf{e}_{1,1,p}). 
\end{align*} Thus, we have $$\lambda'_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}=(\lambda^1_{n'}H'_1+K'_1 \lambda^0_{\partial_1mn}).$$ In Section \ref{sect:App2} of the Appendix, we show the equality $\lambda^2_{n'} H'_2+K'_2\lambda^1_{\partial_1mn}=\lambda''_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}.$ Similarly, it must be that $$\lambda_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}= \lambda_{m^nm',nn'}=((K'_1 \lambda^0_n+\lambda^1_{\partial_1m'n'}H'_1,\ K'_2\lambda^1_n+\lambda^2_{\partial_1m'n'}H'_2),FG).$$ For any $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we obtain \begin{align*} (K'_1\lambda^0_n+\lambda^1_{\partial_1m'n'}H'_1)(\mathbf{e}_{1,1,p})=&K'_1(\lambda^0_n(\mathbf{e}_{1,1,p}))+\lambda^1_{\partial_1m'n'}(H'_1(\mathbf{e}_{1,1,p})) \\ =&\lambda'_{m',n'}(\mathbf{e}_{1,1,pn})+\lambda^1_{\partial_1m'n'}(\lambda'_{m,n}(\mathbf{e}_{1,1,p})) \\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{pn}m',pnn'}+\lambda^1_{\partial_1m'n'}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{pn}m',pnn'}+\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn\partial_1m'n'}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^p{m}^{pn}m',pnn'} \ (\text{by relation (\ref{3})}) \\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^p{(m^{n}m')},pnn'}\\ =&\lambda'_{m^nm',nn'}(\mathbf{e}_{1,1,p}). \end{align*} Thus, we have $$\lambda'_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}=(K'_1\lambda^0_n+\lambda^1_{\partial_1m'n'}H'_1).$$ We will show the equality $K'_2\lambda^1_n+\lambda^2_{\partial_1m'n'}H'_2=\lambda''_{m^nm',nn'}=\lambda''_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}$ in Section \ref{sect:App2} of the Appendix. Thus, we have $$\lambda_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}={\begin{bmatrix} &{\lambda_{\Gamma'}}\\ {\lambda_{\Gamma}}& \end{bmatrix}} \ \text{ and }\lambda_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}={ \begin{bmatrix}{\lambda_{\Gamma}}&\\ &{\lambda_{\Gamma'}}\end{bmatrix}}.$$ \subsection{$\lambda$ over 3-cells} In $\mathfrak{C}^2$, for any 3-cell $(l,m,n)$, we will show that $\lambda_{l,m,n}$ is a 2-homotopy between the 1-homotopies $\lambda_{m,n}$ and $\lambda_{\partial_2 lm,n}$. This 3-cell in $\mathbf{(Aut \overline{ \delta})}_{3}$ can be given by $$\lambda_{l,m,n}:=(\alpha'_{l,m,n},(\lambda'_{m,n},\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n})).$$ The 2-source of $\lambda_{l,m,n}$ is the 1-homotopy $\lambda_{m,n}:=((\lambda'_{m,n},\lambda''_{m,n}),\lambda_{n})$ from $\lambda_{n}$ to $\lambda_{\partial_1mn}$, and the 2-target of $\lambda_{l,m,n}$ is the 1-homotopy $$\lambda_{\partial_2lm,n}:=((\lambda'_{\partial_2lm,n},\lambda''_{\partial_2lm,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))$$ from $\lambda_{n}$ to $\lambda_{\partial_1mn}$. Thus, $\lambda_{l,m,n}$ can be pictured as $$ \begin{tikzcd}[row sep=0.3cm,column sep=scriptsize] & \ar[dd, Rightarrow, "{\scriptscriptstyle \lambda_{(m,n)}}"{swap,name=f,description}]& &\ar[dd, Rightarrow, "{\scriptscriptstyle \lambda_{(\partial_2lm,n)}}"'{swap,name=g,description}]& \\ {\overline{\delta}} \ar[rrrr,bend left=40,"{\scriptscriptstyle \lambda_{n}}"] \ar[rrrr,bend right=40,"{\scriptscriptstyle \lambda_{\partial_1mn}}"']& \tarrow["\lambda_{\scriptscriptstyle {(l,m,n)}}" ,from=f,to=g, shorten >= -1pt,shorten <= 1pt ]{rrr}& & & {\overline{\delta}}.
\\ \ & \ & \ & \ & \ \end{tikzcd} $$ Using the length-2 chain complex $ \begin{tikzcd} \overline{\delta}:=\mathrm{K}_3\ar[r,"\overline{\tau}_3"]&\mathrm{K}_2\ar[r,"\overline{\tau}_2"]& \mathrm{K}_1 \end{tikzcd} $ constructed in Section \ref{complex}, the last picture can be given more explicitly from our black box as follows: \begin{equation*} \xymatrix{\mathrm{K}_3 \ar[rr]^-{\overline{\tau}_3}\ar@<0.5ex>[dd]^<<<<<<{\lambda^2_{\partial_1mn}}\ar@<-0.5ex>[dd]_-{{\lambda}^2_n} &&\mathrm{K}_2 \ar[rr]^-{\overline{\tau}_2}\ar@<0.5ex>[dd]^<<<{\lambda^1_{\partial_1mn}}\ar@<-0.5ex>[dd]_-{{\lambda}^1_n}\ar@<-0.6ex>[ddll]|<<<<<<<<<<<<<<<<{\scriptscriptstyle{H'_2}} \ar@<0.6ex>[ddll]|<<<<<<<<{\scriptscriptstyle{K'_2}}&&\mathrm{K}_1\ar@{-->}@/^{0.6pc}/[ddllll]_(.4){\ \ {\large\alpha'}} \ar@<0.5ex>[dd]^<<<<<<{\lambda^0_{\partial_1mn}}\ar@<-0.5ex>[dd]_<<<<<<<<<<<{{\lambda}^0_n}\ar@<-0.6ex>[ddll]|<<<<<<<<<<<<<<<<{\scriptscriptstyle{H'_1}} \ar@<0.6ex>[ddll]|<<<<<<<<{\scriptscriptstyle{K'_1}}\ar@<0.5ex>[dd] \\ \\ \mathrm{K}_3\ar[rr]_{\overline{\tau}_3} && \mathrm{K}_2 \ar[rr]_{\overline{\tau}_2} \ar[rr] &&\mathrm{K}_1} \end{equation*} where $H'_1=\lambda'_{m,n},\ \ K'_1=\lambda'_{\partial_2lm,n}$,\ \ $H'_2=\lambda''_{m,n},\ \ K'_2=\lambda''_{\partial_2lm,n}$. Using the action of $(l,m,n)$ in $\mathfrak{C}^2$ on a 1-cell $\mathbf{e}_{1,1,p}$ in $\mathrm{K}_1$, the homotopy component map $\alpha'=\alpha'_{(l,m,n)}:\mathrm{K}_1\longrightarrow \mathrm{K}_3$ can be given by $$\alpha'_{(l,m,n)}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(^{p}l,^{p}m,pn)}$$ on generators. For any element $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we have $$\overline{\tau}_3\alpha'_{l,m,n}(\mathbf{e}_{1,1,p})=\overline\tau_3\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(^{p}l,^{p}m,pn)} =\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}({\partial_2} lm),pn}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn}$$ and \begin{align*} (K'_1-H'_1)(\mathbf{e}_{1,1,p})=&K'_1(\mathbf{e}_{1,1,p})-H'_1(\mathbf{e}_{1,1,p})\\ =&\lambda'_{\partial_2lm,n}(\mathbf{e}_{1,1,p})-\lambda'_{m,n}(\mathbf{e}_{1,1,p})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}({\partial_2 lm}),pn}-\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{p}m,pn}. \end{align*} Therefore, we obtain that $K'_1=H'_1+\overline\tau_3 \alpha'$, that is, $\lambda'_{(\partial_2lm,n)}=\lambda'_{(m,n)}+\overline\tau_3 \alpha'$. This is the first chain homotopy condition for the 2-homotopy $\alpha'$. In Section \ref{sect:App4} of the Appendix, the second homotopy condition for $\alpha'$, $$ K'_2=H'_2+\alpha'\overline\tau_2, $$ will be shown.
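As a quick sanity check, take $l=1$. Then $$\alpha'_{1,m,n}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{1,^{p}m,pn}=\overline{\mathbf{e}}_{1,^{p}m,pn}-\overline{\mathbf{e}}_{1,^{p}m,pn}=\overline{0},$$ so $\lambda_{1,m,n}$ is the trivial 2-homotopy from $\lambda_{m,n}$ to $\lambda_{\partial_2(1)m,n}=\lambda_{m,n}$, and both homotopy conditions reduce to $K'_1=H'_1$ and $K'_2=H'_2$, as expected.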
\subsubsection{$\lambda_{l,m,n}$ preserves the 2-vertical composition of 3-cells} The 2-vertical composition in $\mathfrak{C}^2$ of 3-cells $(l,m,n)$ and $(l',\partial_2lm,n)$ is given by $$(l',\partial_2lm,n)\#_3(l,m,n)=(l'l,m,n)\in C_3.$$ We will show that $$\lambda_{(l',\partial_2lm,n)\#_3(l,m,n)}=\lambda_{l'l,m,n}=\lambda_{l',\partial_2lm,n}\#_3\lambda_{l,m,n}\in (\mathbf{Aut}\overline{\delta})_3.$$ We have $$\lambda_{l'l,m,n}=(\alpha'_{l'l,m,n},(\lambda'_{m,n},\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))$$ and $$\lambda_{l',\partial_2lm,n}\#_3\lambda_{l,m,n}=(\alpha'_{l',\partial_2lm,n},(\lambda'_{\partial_2lm,n},\lambda''_{\partial_2lm,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n})) \#_3(\alpha'_{l,m,n},(\lambda'_{m,n},\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n})).$$ From the vertical composition of 3-cells in $\mathbf{(Aut \overline{ \delta})}_{3}$, we know that $$\lambda_{l',\partial_2lm,n}\#_3\lambda_{l,m,n}=(\alpha'_{l',\partial_2lm,n}+\alpha'_{l,m,n},(\lambda'_{m,n},\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n})).$$ In this equality, we must show that $$\alpha'_{l'l,m,n}=\alpha'_{l',\partial_2lm,n}+\alpha'_{l,m,n}.$$ For any element $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we have $$\alpha'_{l'l,m,n}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(l'l)},^{p}m,pn}$$ and $$\alpha'_{l',\partial_2lm,n}(\mathbf{e}_{1,1,p})+\alpha'_{l,m,n}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(^{p}l',^{p}{(\partial_2lm)},pn)} +\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(^{p}l,^{p}m,pn)}.$$ From relation (\ref{1}), we have $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l'l,m,n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',\partial_2lm,n}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}$$ and thus we obtain that \begin{align*} (\alpha'_{l',\partial_2lm,n}+\alpha'_{l,m,n})(\mathbf{e}_{1,1,p})=&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l',^{p}{(\partial_2lm)},pn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(^{p}l,^{p}m,pn)}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(l'l)},^{p}m,pn}\\ =&\alpha'_{l'l,m,n}(\mathbf{e}_{1,1,p}).
\end{align*} Therefore, we obtain the equality $$\lambda_{(l',\partial_2lm,n)\#_3(l,m,n)}=\lambda_{l'l,m,n}=\lambda_{l',\partial_2lm,n}\#_3\lambda_{l,m,n}.$$ \subsubsection{$\lambda_{l,m,n}$ preserves the 1-vertical composition of 3-cells} Recall that the 1-vertical composition in $\mathfrak{C}^2$ of 3-cells $(l,m,n)$ and $(l',m',\partial_1mn)$ is given by $$(l',m',\partial_1mn)\#_1(l,m,n)=(l'^{m'}l,m'm,n)$$ as pictured in the following diagram: $$ \begin{array}{c} \begin{tikzcd}[row sep=scriptsize,column sep=2.05cm] \vphantom{f} * \arrow[rr, "\scriptscriptstyle n", ""{name=F, below}, bend left=50] \arrow[rr, "\scriptscriptstyle \partial_{1}m'\partial_{1}mn"', ""{name=H, above}, bend right=50] \arrow[rr, "\scriptscriptstyle \partial_1mn" description, ""{name=GA,above}, ""{name=GB,below}] & &\vphantom{f}* \tarrow[from=F, to=GA, "{\scriptscriptstyle (l,m,n)}"]{d} \tarrow[from=GB, to=H, "{\scriptscriptstyle (l',m',\partial_1mn)}"]{d} \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=1.8cm] &\tarrow["\scriptscriptstyle{(l'^{m'}l,m'm,n)}"description,shorten >= -8pt,shorten <= -8pt]{dd}& \\ {*} \ar[rr,bend left=50,"\scriptscriptstyle n"] \ar[rr,bend right=50,"\scriptscriptstyle\partial_{1}m'\partial_{1} mn"']& & {*}\\ & \ \end{tikzcd} \end{array} $$ Under $\lambda$, this diagram can be pictured in $\mathbf{(Aut \overline{ \delta})}_{3}$ as $$ \begin{array}{c} \begin{tikzcd}[row sep=scriptsize,column sep=2.30cm] \vphantom{f} {\overline{\delta}}\arrow[rr, "\scriptscriptstyle \lambda_{n}", ""{name=F, below}, bend left=50] \arrow[rr, "\scriptscriptstyle \lambda_{(\partial_{1}m'\partial_{1}mn)}"', ""{name=H, above}, bend right=50] \arrow[rr, "\scriptscriptstyle \lambda_{(\partial_1mn)}" description, ""{name=GA,above}, ""{name=GB,below}] & &\vphantom{f} {\overline{\delta}} \tarrow[from=F, to=GA, "{\scriptscriptstyle \lambda_{(l,m,n)}}"]{d} \tarrow[from=GB, to=H, "{\scriptscriptstyle \lambda_{(l',m',\partial_1mn)}}"]{d} \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=small,column sep=1.8cm] &\tarrow["\scriptscriptstyle{\lambda_{(l'^{m'}l,m'm,n)}}"description,shorten >= -8pt,shorten <= -8pt]{dd}& \\ {\overline{\delta}} \ar[rr,bend left=50,"\scriptscriptstyle \lambda_{n}"] \ar[rr,bend right=50,"\scriptscriptstyle\lambda_{(\partial_{1}m'\partial_{1} mn)}"']& & {\overline{\delta}}\\ & \ \end{tikzcd} \end{array} $$ We can write $$\lambda_{l'^{m'}l,m'm,n}=(\alpha'_{l'^{m'}l,m'm,n},(\lambda'_{m'm,n},\lambda''_{m'm,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))$$ and $$\lambda_{l,m,n}=(\alpha'_{l,m,n},(\lambda'_{m,n},\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))$$ and $$\lambda_{l',m',\partial_1mn}=(\alpha'_{l',m',\partial_1mn},(\lambda'_{m',\partial_1mn},\lambda''_{m',\partial_1mn}),(\lambda^0_{\partial_1mn},\lambda^1_{\partial_1mn},\lambda^2_{\partial_1mn})).$$ Consider the diagram obtained from the black box: $$ \begin{tikzcd} \mathrm{K}_3 \arrow[rrrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^2_n}}"description,pos=.75]\arrow[rrrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^2_{\partial_1mn}}"description,pos=.25] \arrow[dddd,"{\overline{\tau}_3}"'] &&&&\mathrm{K}_3\arrow[rrrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^2_{n'}}"description,pos=.75]\arrow[rrrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^2_{\partial_1m'n'}}"description,pos=.30] \arrow[dddd,"{\overline{\tau}_3}"] &&&& \mathrm{K}_3 \arrow[dddd,"{\overline{\tau}_3}"] \\ \\ \\ \\ \mathrm{K}_2 \arrow[rrrr,shift left=0.90ex,"{\scriptscriptstyle
{{\lambda}^1_n}}"description,pos=.75]\arrow[rrrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^1_{\partial_1mn}}"description,pos=.25] \arrow[dddd,"{\overline{\tau}_2}"']\arrow[uuuurrrr,shift left=-0.7ex,"\scriptscriptstyle {\lambda}''_{\partial_2lm,n}"{sloped,below=-0.3ex,xshift=-2.0em}]\arrow[uuuurrrr,shift left=0.7ex,"\scriptscriptstyle {\lambda}''_{m,n}"{sloped,above=-0.3ex,xshift=2.0em}]&&&&\mathrm{K}_2 \arrow[rrrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^1_{n'}}"description,pos=.75]\arrow[rrrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^1_{\partial_1m'n'}}"description,pos=.30] \arrow[dddd,"{\overline{\tau}_2}"]\arrow[uuuurrrr,shift left=-0.7ex,"{\scriptscriptstyle{\lambda}''_{\partial_2l'm',\partial_1mn}}"{sloped,below=-0.3ex,xshift=-2.0em}]\arrow[uuuurrrr,shift left=0.7ex,"{\scriptscriptstyle{\lambda}''_{m',\partial_1mn}}"{sloped,above=-0.3ex,xshift=2.0em}] &&&&\mathrm{K}_2 \arrow[dddd,"{\overline{\tau}_2}"]\\ \\ \\ \\ \mathrm{K}_1 \arrow[rrrr,shift left=0.90ex,"{\scriptscriptstyle {{\lambda}^0_n}}"description,pos=.75]\arrow[rrrr,shift left=-0.90ex,"{\scriptscriptstyle \lambda^0_{\partial_1mn}}"description,pos=.25] \arrow[uuuurrrr,shift left=-0.7ex,"\scriptscriptstyle {\lambda}'_{\partial_2lm,n}"{sloped,below=-0.3ex,xshift=-2.0em}]\arrow[uuuurrrr,shift left=0.7ex,"\scriptscriptstyle {\lambda}'_{m,n}"{sloped,above=-0.3ex,xshift=2.0em}] \arrow[uuuuuuuurrrr,dashed,"\Large\boldsymbol\alpha'"description,pos=.75] &&&&\mathrm{K}_1 \arrow[rrrr,shift left=0.95ex,"{\scriptscriptstyle \lambda^{0}_{n'}}"description,pos=.75]\arrow[rrrr,shift left=-0.95ex,"{\scriptscriptstyle{\lambda}^{0}_{\partial_{1}m'n'}}"description,pos=.30] \arrow[uuuurrrr,shift left=-0.7ex,"{\scriptscriptstyle{\lambda}'_{\partial_2l'm',\partial_1mn}}"{sloped,below=-0.3ex,xshift=-2.0em}]\arrow[uuuurrrr,shift left=0.7ex,"{\scriptscriptstyle{\lambda}'_{m',\partial_1mn}}"{sloped,above=-0.3ex,xshift=2.0em}]\arrow[uuuuuuuurrrr,dashed,"\Large\boldsymbol{\beta'}" description,pos=.75] &&&&\mathrm{K}_1 \end{tikzcd} $$ We can write \begin{align*} \lambda^2_{\partial_1m'n'}(\alpha'_{l,m,n})(\mathbf{e}_{1,1,p})+\beta'_{l',m',\partial_1mn}\lambda^0_{n}(\mathbf{e}_{1,1,p})=&\lambda^2_{\partial_1m'n'}(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l,^{p}m,pn})+\beta'_{l',m',\partial_1mn}(\mathbf{e}_{1,1,pn})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l,^{p}m,pn\partial_1m'\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{pn}l',^{pn}m',pn\partial_1mn}. 
\end{align*} Since $$\overline\tau_2\lambda'_{m,n}+\lambda^0_n=\lambda^0_{\partial_1mn},$$ $$\lambda''_{m,n} \overline\tau_3+\lambda^2_n=\lambda^2_{\partial_1mn}$$ and $$\overline\tau_3\lambda''_{m,n}+\lambda'_{m,n} \overline\tau_2+\lambda^1_{n}=\lambda^1_{\partial_1mn},$$ we can say that \begin{multline*} ((\lambda'_{m',\partial_1mn},\lambda''_{m',\partial_1mn}),(\lambda^0_{\partial_1mn},\lambda^1_{\partial_1mn},\lambda^2_{\partial_1mn}))\#_1((\lambda'_{m,n} ,\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))\\ \begin{aligned} =&((\lambda'_{m',\partial_1mn}+\lambda'_{m,n},\ \lambda''_{m',\partial_1mn}+\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))\\ =&((\lambda'_{m'm,n},\lambda''_{m'm,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n})). \end{aligned} \end{multline*} Therefore, we obtain $$\lambda_{l'^{m'}l,m'm,n}=(\alpha'_{l'^{m'}l,m'm,n},(\lambda'_{m'm,n},\lambda''_{m'm,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))$$ and $$\lambda_{l',m',\partial_1mn}\#_1\lambda_{l,m,n}=(\alpha'_{l',m',\partial_1mn}+\alpha'_{l,m,n},(\lambda'_{m'm,n}, \lambda''_{m'm,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n})).$$ Thus, to show this equality, we must prove that $$\alpha'_{l'^{m'}l,m'm,n}=\alpha'_{l',m',\partial_1mn}+\alpha'_{l,m,n}.$$ For any element $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we obtain $$\alpha'_{l'^{m'}l,m'm,n}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(l')}^{p}{(^{m'}l)},^{p}{(m'm)},pn}$$ and \begin{align*} (\alpha'_{l',m',\partial_1mn}+\alpha'_{l,m,n})(\mathbf{e}_{1,1,p})=&\alpha'_{l',m',\partial_1mn}(\mathbf{e}_{1,1,p})+\alpha'_{l,m,n}(\mathbf{e}_{1,1,p})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(l')},^{p}{m'},p\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(^{p}l,^{p}m,pn)}. \end{align*} From relation (\ref{2}), $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l'^{m'}l,m'm,n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n},$$ we obtain $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l,^{p}m,pn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{l'},^{p}{m'},p\partial_1mn}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{l'}\,^{^{p}m'}{(^{p}l)},^{p}{m'}^{p}{m},pn}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(l'^{m'}l)},^{p}{(m'm)},pn}.$$ Thus, we can say that $\lambda_{l,m,n}$ preserves the 1-vertical composition of 3-cells from $\mathfrak{C}^2$ to $\mathbf{Aut}\overline{\delta}$.
Therefore, we have proven the following equality: $$\lambda_{(l',m',\partial_1mn)\#_1(l,m,n)}=\lambda_{(l',m',\partial_1mn)}\#_1\lambda_{(l,m,n)}.$$ \subsubsection{$\lambda_{l,m,n}$ preserves the group operations} For $J=(l,m,n)$ and $J^\prime=(l^\prime,m^\prime,n^\prime)$ in $L\rtimes M \rtimes N$, recall the semi-direct product of $J$ and $J'$ given by $$J\cdot J'=\nabla=(l\{\partial_2(^{n}{l'}),m\}^{n}{(l')^{-1}},m^nm',nn')=(l^{m}{(^{n}{l'})},m^nm',nn').$$ This can be represented pictorially as $$ \begin{array}{c} \begin{tikzcd}[row sep=large] & {*} \ar[dr, Rightarrow,"{\scriptscriptstyle (\partial_2lm,n)}"{sloped,above=0.0ex,xshift=0.0em}]& &{*} \ar[dr, Rightarrow,"{\scriptscriptstyle (\partial_2l'm',n')}"{sloped,above=0.0ex,xshift=0.0em}] \\ {*}\ar[ur,"\scriptscriptstyle n"{sloped,above=0.0ex,xshift=-0.0em}] \ar[dr, Rightarrow,"{\scriptscriptstyle (m,n)}"{sloped,below=0.0ex,xshift=-0.0em}] & {J} & {*}\ar[ur,"\scriptscriptstyle n'"{sloped,above=0.0ex,xshift=-0.0em}] \ar[dr, Rightarrow,"{\scriptscriptstyle (m',n')}"{sloped,below=0.0ex,xshift=-0.0em}]& {J'} & {*} \\ & {*}\ar[ur,"{\scriptscriptstyle \partial_1mn}"{sloped,below=0.0ex,xshift=-0.0em}]& & {*}\ar[ur,"{\scriptscriptstyle \partial_1m'n'}"{sloped,below=0.0ex,xshift=-0.0em}] \end{tikzcd} \end{array} {:=} \begin{array}{c} \begin{tikzcd}[row sep=large] & {*} \ar[dr, Rightarrow,"{\scriptscriptstyle (\partial_2lm{^{n}({\partial_2l'm'}),nn'})}"] \\ {*}\ar[ur,"\scriptscriptstyle nn'"{sloped,above=0.0ex,xshift=-0.0em}] \ar[dr, Rightarrow,"{\scriptscriptstyle (m^nm',nn')}"{sloped,below=0.0ex,xshift=-0.0em}] & {\nabla} & {*}\\ & {*}\ar[ur,"{\scriptscriptstyle \partial_1mn\partial_1m'n'}"{sloped,below=0.0ex,xshift=-0.0em}] \end{tikzcd} \end{array} $$ In $\mathbf{Ch}^2_K$, the product of 3-cells for $\mathbf{(Aut \overline{ \delta})}_{3}$ can be given by \begin{multline*} (\alpha',(H'_1,H'_2),(F_0,F_1,F_2))\cdot (\beta',(K'_1,K'_2),(G_0,G_1,G_2))\\ \begin{aligned} =&(G_2\alpha'+\beta'F_0,(K'_1 F_0+G_1 H'_1,K'_2 F_1+G_2 H'_2),(F_0G_0,F_1G_1,F_2G_2)). \end{aligned} \end{multline*} This is the group operation in $\mathbf{(Aut \overline{ \delta})}_{3}$. We will show that $$\lambda_{(l,m,n)\cdot (l',m',n')}=\lambda_{l{}^{m}(^{n}l'),m^nm',nn'}=\lambda_{l,m,n}\cdot \lambda_{l',m',n'}.$$ In $\mathfrak{C}^2$, we can consider $(l{^{m}(^{n}l')},m^nm',nn')$ as the 3-cell $$ \xymatrix{J\cdot J'=\nabla=(l{^{m}(^{n}l')},m^nm',nn'):(1,m^nm',nn')\ar@3{->}[r]&(1,\partial_2lm{^{n}({\partial_2l'm'}),nn'}).} $$ For $J,J'$ in $\mathfrak{C}^2$, we have $$\lambda_J=\lambda_{l,m,n}=(\alpha'_{l,m,n},(\lambda'_{m,n},\lambda''_{m,n}),(\lambda^0_{n},\lambda^1_{n},\lambda^2_{n}))$$ and $$\lambda_{J'}=\lambda_{l',m',n'}=(\alpha'_{l',m',n'},(\lambda'_{m',n'},\lambda''_{m',n'}),(\lambda^0_{n'},\lambda^1_{n'},\lambda^2_{n'}))$$ and we must show that \begin{align*} \lambda_{J\cdot J'}=&\lambda_{(l{^{m}(^{n}l')},m^nm',nn')}\\ =&((\lambda^2_{\partial_1m'n'}\alpha'_{l,m,n}+\alpha'_{l',m',n'}\lambda^0_{n}),\\ &(\lambda'_{m',n'}\lambda^0_{n}+\lambda^1_{\partial_1m'n'}\lambda'_{m,n},\lambda''_{m',n'}\lambda^1_{n}+\lambda^2_{\partial_1m'n'}\lambda''_{m,n}),\\ &(\lambda^0_{nn'},\lambda^1_{nn'},\lambda^2_{nn'}))\\ =&(\alpha'_{l{^{m}(^{n}l')},m^nm',nn'},(\lambda'_{m^nm',nn'},\lambda''_{m^nm',nn'}),(\lambda^0_{nn'},\lambda^1_{nn'},\lambda^2_{nn'})).
\end{align*} For any element $\mathbf{e}_{1,1,p}\in \mathrm{K}_1$, we have $$\alpha'_{l{}^{m}(^{n}l'),m^nm',nn'}(\mathbf{e}_{1,1,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}(l\,{}^{m}(^{n}l')),\,^{p}{(m^nm')},\,pnn'}.$$ On the other hand, \begin{align*} (\lambda^2_{\partial_1m'n'}\alpha'_{l,m,n}+\alpha'_{l',m',n'}\lambda^0_{n})(\mathbf{e}_{1,1,p})=&\lambda^2_{\partial_1m'n'}\alpha'_{(l,m,n)}(\mathbf{e}_{1,1,p})+\alpha'_{(l',m',n')}\lambda^0_{n}(\mathbf{e}_{1,1,p})\\ =&\lambda^2_{\partial_1m'n'}(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l,^{p}m,pn})+\alpha'_{l',m',n'}(\mathbf{e}_{1,1,pn})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l,^{p}m,pn\partial_1m'n'}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(^{n}l')},^{p}{(^{n}m')},pnn'}. \end{align*} Using the relation (\ref{2}) in $J_2$ given by $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l'{^{m'}l},m'm,n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m,n}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',\partial_1mn}$$ and by taking $m={^{p}{(^{n}m')}}$, $m'={^{p}m}$, $n=pnn'$, $\partial_1mn=pn\partial_1m'n'$, $l={^{p}{(^{n}l')}}$ and $l'={^{p}l}$ in this equality, we obtain $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l,^{p}m,pn\partial_1m'n'}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}{(^{n}l')},^{p}{(^{n}m')},pnn'}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{p}l\,{}^{p}({}^{m}(^{n}l')),\,^{p}m\,{}^{p}(^{n}m'),\,pnn'}.$$ Thus, we have shown that $\lambda$ preserves the group operation from $C_3$ to $(\mathbf{Aut}\overline{\delta})_3$. \section{Cayley's Theorem for 2-Crossed Modules and Cat$^2$-Groups} The construction given above can be summarized in the following definition. \begin{defn} Consider a 2-crossed module of groups: $$\xymatrix{\mathfrak{X}:=(L\ar[r]^-{\partial_2}&M\ar[r]^-{\partial_1}&N,\{-,-\})}$$ and its associated Gray 3-group-groupoid with a single object or cat$^2$-group: $$ \xymatrix{\mathfrak{C}^2:=L\rtimes M \rtimes N \ar@<1ex>[r]^-{s_3,t_3} \ar@<0ex>[r]&1\rtimes M \rtimes N \ar@<1ex>[r]^-{s_2,t_2} \ar@<0ex>[r]\ar@<1ex>[l]^-{e_3}& 1\rtimes 1 \rtimes N\ar@<1ex>[l]^-{e_2}\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& \{*\}}.
$$ The right regular representation of $\mathfrak{C}^2$ is a lax 3-functor (contravariant on 1-cells) $$\mathbf{\lambda}:\mathfrak{C}^2\longrightarrow\mathbf{Ch}^2_K$$ sending each $n\in N$ to the chain automorphism $\lambda_n=(\lambda^0_n,\lambda^1_n,\lambda^2_n)$, where $$\lambda^0_n(\mathbf{e}_{1,1,n'})=\mathbf{e}_{1,1,n'n},\ \ \lambda^1_n(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'n}, \ \ \lambda^2_n(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'n},$$ each $(1,m,n)\in 1\rtimes M\rtimes N$ to the 1-homotopy $\lambda_{m,n}:\lambda_n\Rightarrow \lambda_{\partial_1mn}$ with the chain homotopy components $$\lambda'_{m,n}(\mathbf{e}_{1,1,n'})=\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,^{n'}m,n'n},\ \ \lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m',^{n'}m\},m'{^{n'}m},n'n},$$ where all chain automorphisms and homotopies reside in $\mathbf{(Aut \overline{ \delta})}_{1}$ and $\mathbf{(Aut \overline{ \delta})}_{2}$ for the linear transformations $\delta_2:=\overline{\tau_2}|_{\mathrm{Ker} \overline {\sigma_2}}$ and $\delta_3:=\overline{\tau_3}|_{\mathrm{Ker} \overline {\sigma_3}}$ obtained from $\overline{K(\mathfrak{C}^2)}$ of $\mathfrak{C}^2$, and sending each $(l,m,n)\in L\rtimes M\rtimes N$ to the 2-homotopy $\lambda_{l,m,n}:\lambda_{m,n}\Rrightarrow \lambda_{\partial_2lm,n}$ with the chain homotopy component $$\alpha'_{l,m,n}(\mathbf{e}_{1,1,n'})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{n'}l,^{n'}m,n'n}.$$ Since the construction may be applied to any Gray 3-group-groupoid (or any 2-crossed module), it gives us a version of Cayley's theorem for Gray 3-group-groupoids with a single object, or cat$^2$-groups, in terms of linear regular representations. \end{defn} Therefore, we can give the following result. \begin{thm}$\boldsymbol{\mathrm{(Cayley)}}$ For any 2-crossed module $\mathfrak{X}$ and its associated cat$^2$-group $\mathfrak{C}^2$, the right regular representation given above exists. \end{thm} It was shown that 2-crossed modules, Gray 3-groupoids with a single 0-cell and cat$^2$-groups are equivalent. We have a definition of regular representations for a cat$^2$-group $\mathfrak{C}^2$, and this may also be considered as a regular representation of the corresponding 2-crossed module. Therefore, we may define a regular representation of the 2-crossed module $\mathfrak{X}$ to be a regular representation of $\mathfrak{C}^2(\mathfrak{X})$. \section{Appendix} \subsubsection{The proof of the interchange law}\label{interchangelaw} For any 3-cells $\alpha=(l_1,m_1,n_1)$, $\beta=(l'_1,\partial_2l_1m_1,n_1)$, $\gamma=(l_2,m_2,n_2)$ and $\delta=(l'_2,\partial_2l_2m_2,n_2)$ in $ L\rtimes M\rtimes N$, we must show that $$(\alpha \#_3 \beta)\cdot(\gamma \#_3 \delta)=(\alpha \cdot \gamma) \#_3(\beta \cdot \delta)$$ for the vertical composition $\#_3$ and the semi-direct product of 3-cells in $\mathfrak{C}^2$.
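Recall from the previous sections that these operations are given on 3-cells by $$(l',\partial_2lm,n)\#_3(l,m,n)=(l'l,m,n) \qquad \text{and} \qquad (l,m,n)\cdot(l',m',n')=(l\,{}^{m}(^{n}l'),\,m\,{}^{n}m',\,nn');$$ the computation below uses these formulas together with the 2-crossed module axioms.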
Indeed, we obtain \begin{align*} (\alpha \#_3 \beta)\cdot(\gamma \#_3 \delta)=&((l_1,m_1,n_1)\#_3(l'_1,\partial_2l_1m_1,n_1))\cdot((l_2,m_2,n_2)\#_3(l'_2,\partial_2l_2m_2,n_2))\\ =&(l'_1l_1,m_1,n_1)\cdot(l'_2l_2,m_2,n_2)\\ =&(l'_1l_1(^{m_1}{(^{n_1}{(l'_2l_2)})),m_1^{n_1}m_2,n_1n_2})\\ =&(l'_1l_1(^{m_1}(^{n_1}{l'_2})){l^{-1}_1}l_1(^{m_1}(^{n_1}{l_2})),m_1^{n_1}m_2,n_1n_2)\\ =&(l'_1(^{\partial_2l_1m_1}(^{n_1}{l'_2}))l_1(^{m_1}(^{n_1}{l_2})),m_1^{n_1}m_2,n_1n_2)\\ =&(l_1(^{m_1}{(^{n_1}{l_2})),m_1^{n_1}m_2,n_1n_2}) \#_3(l'_1(^{\partial_2l_1m_1}{(^{n_1}{l'_2})),\partial_2l_1m_1(^{n_1}(\partial_2l_2m_2)),n_1n_2})\\ =&((l_1,m_1,n_1)\cdot(l_2,m_2,n_2))\#_3((l'_1,\partial_2l_1m_1,n_1)\cdot(l'_2,\partial_2l_2m_2,n_2))\\ =&(\alpha \cdot \gamma) \#_3(\beta \cdot \delta). \end{align*} \subsubsection{The proof of the equality $H'_2\overline{\tau}_3=\lambda^2_{\partial_1mn}-\lambda^2_n$} \label{sect:App5} We need to add a new generator element for $J_2$: \begin{multline*} {\mathbf{u}_{\scriptscriptstyle5}}=\mathbf{e}_{^{\partial_1m}{(^{n}{l'})},^{\partial_1m}{(^{n}{m'})},nn'}-\mathbf{e}_{1,^{\partial_1mn}{m'},nn'} -\mathbf{e}_{^{m}{(^{n}{l'})},m^nm',nn'}+\mathbf{e}_{1,m^nm',nn'}\\ -\mathbf{e}_{l',m',n'\partial_1mn}+\mathbf{e}_{1,m',n'\partial_1mn}+\mathbf{e}_{l',m',n'n}-\mathbf{e}_{1,m',n'n}\in J_2 \end{multline*} and \begin{multline*} {\mathbf{v}_{\scriptscriptstyle5}}=\mathbf{e}_{1,^{\partial_1mn}{(\partial_2l'm')},nn'}-\mathbf{e}_{1,^{\partial_1mn}{m'},nn'}-\mathbf{e}_{1,m{^{n}{(\partial_2l'm')}},nn'}+\mathbf{e}_{1,m^nm',nn'}\\ -\mathbf{e}_{1,\partial_2l'm',n'\partial_1mn}+\mathbf{e}_{1,m',n'\partial_1mn}+\mathbf{e}_{1,\partial_2l'm',n'n}-\mathbf{e}_{1,m',n'n}\in J_1. \end{multline*} Using the generator element ${\mathbf{u}_{\scriptscriptstyle5}}$ of $J_2$, we have the following relation in $\mathrm{K}_3$:\\ $\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1m}{(^{n}{l'})},^{\partial_1m}{(^{n}{m'})},nn'}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{m}{(^{n}{l'})},m^nm',nn'} -\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'n}=\overline{0}.$\\ Using the last equality, we obtain \begin{align*} (H'_2\overline{\tau}_3-(\lambda^2_{\partial_1mn}-\lambda^2_n))(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'}) =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,^{n}{(\partial_2l'm')\}},m{^{n}{(\partial_2l'm')}},nn'}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,^{n}{m'}\},m^nm',nn'}\\ &-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'n}. \end{align*} In relation $(B)$ of Section \ref{sect:App4} below, if we take $l=1$, we have \begin{align*} (H'_2\overline{\tau}_3-(\lambda^2_{\partial_1mn}-\lambda^2_n))(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'}) =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1m}{(^{n}{l'})},^{\partial_1mn}{m'},nn'}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{m}{(^{n}{l'})},m^nm',nn'}\\ &-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l',m',n'n}=\overline{0}. \end{align*} Thus, we obtain $$H'_2\overline{\tau}_3=\lambda^2_{\partial_1mn}-\lambda^2_n.$$ \subsubsection{The proof of the equality $\lambda''_{(m',\partial_1mn)\#_2(m,n)}=K'_2+H'_2$} \label{sect:App1} For any $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}\in \mathrm{K}_2$,
$$\lambda''_{(m',\partial_1mn)\#_2(m,n)}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=\lambda''_{m'm,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}) =\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}(m'm)\},d^{p}(m'm),pn}$$ and $$(K'_2+H'_2)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=(\lambda''_{m',\partial_1mn}+\lambda''_{m,n})(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}) =\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m'\},d^{p}m',p\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m\},d^{p}m,pn}.$$ We need to add new relations in $J_2$ and $J_1$ as follows: \begin{multline*} \mathbf{e}_{\{m,m'm''\},mm'm'',n''}-\mathbf{e}_{1,mm'm'',n''}-\mathbf{e}_{\{m,m'\},mm',\partial_1m''n''}\\ \begin{aligned} &+\mathbf{e}_{1,mm',\partial_1m''n''}-\mathbf{e}_{\{m,m''\},mm'',n''}+\mathbf{e}_{1,mm'',n''}\in J_2 \end{aligned} \end{multline*} and $$\mathbf{e}_{1,mm'm'',n''}-\mathbf{e}_{1,mm',\partial_1m''n''}-\mathbf{e}_{1,mm'',n''}\in J_1.$$ Thus we obtain $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,m'm''\},mm'm'',n''}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,m'\},mm',\partial_1m''n''}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,m''\},mm'',n''}.$$ If we take $(1,m,n)=(1,d,p)$, $(1,m',n')=(1,^{p}m',p\partial_1mn)$ and $(1,m'',n'')=(1,^pm,pn)$, then $\partial_1m''n''=p\partial_1mn$. We obtain $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m'^{p}m\},d^{p}m'^{p}m,pn}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m'\},d^{p}m',p\partial_1mn}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m\},d^{p}m,pn}.$$ Therefore, $$\lambda''_{(m',\partial_1mn)\#_2(m,n)}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=(K'_2+H'_2)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}).$$ In general, \begin{align*} \lambda_{(m',\partial_1mn)\#_2(m,n)}=&\left((K'_1+H'_1,K'_2+H'_2),F\right)\\ =&\left((\lambda'_{m',\partial_1mn}+\lambda'_{m,n}, \ \lambda''_{m',\partial_1mn}+\lambda''_{m,n}),\lambda_n\right)\\ =&(K,G)\#_2(H,F).
\end{align*} \subsubsection{The proof of the equality $K'_2\lambda^1_n+\lambda^2_{\partial_1m'n'}H'_2=\lambda''_{m^nm',nn'}$}\label{sect:App2} We will show that $$\lambda''_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}={\begin{bmatrix} &{\lambda''_{\Gamma'}}\\ {\lambda''_{\Gamma}}& \end{bmatrix}} \text{ and } \lambda''_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}={ \begin{bmatrix}{\lambda''_{\Gamma}}&\\ &{\lambda''_{\Gamma'}}\end{bmatrix}}.$$ For $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}\in \mathrm{K}_2$, we obtain $$\lambda''_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=\lambda''_{m^nm',nn'}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{d,^{p}(m^{n}m')\},d^{p}(m^{n}m'),pnn'}}$$ and \begin{align*} (K'_2 \lambda^1_{n}+\lambda^2_{\partial_1m'n'} H'_2)(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=&K'_2 \lambda^1_{n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})+\lambda^2_{\partial_1m'n'} H'_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}) \\ =&\lambda''_{m',n'}(\lambda^1_n(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}))+\lambda^2_{\partial_1m'n'}(\lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}))\\ =&\lambda''_{(m',n')}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,pn})+\lambda^2_{\partial_1m'n'}(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m\},d^{p}m,pn})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{pn}m'\},d^{pn}m',pnn'}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{d,^{p}m\},d^{p}m,pn\partial_1m'n'}. \end{align*} We again use the relation $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{m,m'm''\},mm'm'',n''}}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{m,m'\},mm',\partial_1m''n''}}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{m,m''\},mm'',n''}}$$ in $\mathrm{K}_3$, obtained above from the generator element of $J_2$.
By taking $(1,m,n)=(1,d,p)$, $(1,m',n')=(1,^{p}m,pn)$ and $(1,m'',n'')=(1,^{pn}m',pnn')$ in this relation, we have $$\partial_1m''n''=\partial_1(^{pn}m')pnn'=pn\partial_1m'n' \text{ and } n''=pnn'.$$ Using this relation, we obtain $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{d,^{p}m\},d^{p}m,pn\partial_1m'n'}}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{d,^p(^{n}m')\},d^p(^{n}m'),pnn'}}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{{\{d,^{p}m^p(^{n}m')\},d^{p}m^p(^{n}m'),pnn'}}$$ and thus, $$\lambda''_{\begin{bsmallmatrix} &\Gamma'\\ \Gamma& \end{bsmallmatrix}}={\begin{bmatrix} &{\lambda''_{\Gamma'}}\\ {\lambda''_{\Gamma}}& \end{bmatrix}}=K'_2 \lambda^1_{n}+\lambda^2_{\partial_1m'n'} H'_2.$$ Similarly, we must show that $$\lambda''_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}={ \begin{bmatrix}{\lambda''_{\Gamma}}&\\ &{\lambda''_{\Gamma'}}\end{bmatrix}}=\lambda^2_{n'} H'_2+K'_2 \lambda^1_{\partial_1mn}.$$ For any $\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}\in \mathrm{K}_2$, we obtain $$\lambda''_{\begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=\lambda''_{(^{\partial_1m}(^{n}m')m,nn')}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}) =\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(\{d,^{p}(^{\partial_1m}(^{n}m')m)\} ,d^{p}(^{\partial_1m}(^{n}m')m),pnn')}$$ and \begin{align*} (\lambda^2_{n'}H'_2+K'_2\lambda^1_{\partial_1mn})(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})=&\lambda^2_{n'}(\lambda''_{m,n}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})) +\lambda''_{m',n'}(\lambda^1_{\partial_1mn}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p}))\\ =&\lambda^2_{n'}(\overline{\mathbf{v}^{\scriptscriptstyle22}}_{({\{d,^{p}m\},d^{p}m,pn})})+\lambda''_{(m',n')}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p\partial_1mn})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{({\{d,^{p}m\},d^{p}m,pnn'})}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(\{d,^{p\partial_1mn}m'\},d^{p\partial_1m}(^{n}m'),p\partial_1mnn')} \end{align*} and by taking \begin{align*} (1,m,n)=&(1,d,p)\\ (1,m',n')=&(1,^{p\partial_1m}(^{n}m'),p\partial_1mn)\\ (1,m'',n'')=& (1,^{p}m,pnn')\\ \partial_1(m'')n''=&\partial_1(^{p}m)pnn'=p\partial_1mnn' \end{align*} in the last relation, we have \begin{align*} {\begin{bmatrix}{\lambda''_{\Gamma}}&\\ &{\lambda''_{\Gamma'}}\end{bmatrix}}=&(\lambda^2_{n'} H'_2+K'_2 \lambda^1_{\partial_1mn})(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,d,p})\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{({\{d,^{p}m\},d^{p}m,pnn'})}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{(\{d,^{p\partial_1m}(^{n}m')\},d ^{p\partial_1m}(^{n}m'),p\partial_1mnn')}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{({\{d,^{p}(^{\partial_1m}{(^{n}m'))^{p}m\}},d ^{p}(^{\partial_1m}{(^{n}m'))^{p}m},pnn'})} \ (\text{by the last relation})\\ =&\lambda''_{^{\partial_1m}(^{n}m')m,nn'}(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{(1,d,p)})\\ =&\lambda''_{ \begin{bsmallmatrix} \Gamma&\\ &\Gamma' \end{bsmallmatrix}} \end{align*} $\Box$ \subsubsection{The proof of the equality $\alpha' \overline \tau_2=K'_2-H'_2$} \label{sect:App4} For any 3-cells $(l,m,n)$ and $(l',m',n')$ in $C_3$, a generator element in $J_2$ can be taken as \begin{multline*} {\mathbf{u}_{\scriptscriptstyle3}}={\mathbf{e}_{^{n'}l,^{n'}m,n'n}}-{\mathbf{e}_{1,^{n'}m,n'n}}-{\mathbf{e}_{^{\partial_1m'n'}l,{^{\partial_1m'n'}m,\partial_1m'n'n}}}
+{\mathbf{e}_{1,{^{\partial_1m'n'}m,\partial_1m'n'n}}}\\+{\mathbf{e}_{l,m^{n}m',nn'}}-{\mathbf{e}_{1,m^{n}m',nn'}}-{\mathbf{e}_{^{\partial_1mn}{(^{m'}l)},{^{\partial_1mn}m',nn'}}}+{\mathbf{e}_{1,{^{\partial_1mn}m',nn'}}}\in {J_2} \end{multline*} and \begin{multline*} {\mathbf{u}_{\scriptscriptstyle4}}={\mathbf{e}_{l{^{m}{(^n{l'})},m^{n}m'},nn'}}-{\mathbf{e}_{1,m^{n}m',nn'}}+{\mathbf{e}_{\{\partial_2lm,^{n}{\partial_2l'm'}\},\partial_2lm^n(\partial_2l'm'),nn'}} -{\mathbf{e}_{1,\partial_2lm^n(\partial_2l'm'),nn'}}\\-{\mathbf{e}_{\{m,^{n}m'\},m{^{n}m'},nn'}}+{\mathbf{e}_{1,m{^{n}m'},nn'}}-{\mathbf{e}_{^{\partial_1mn}{l'(^{m'}l)},{^{\partial_1mn}m'm,nn'}}}+{\mathbf{e}_{1,{^{\partial_1mn}m'm,nn'}}}\in {J_2} \end{multline*} We need to add the new relations given below to $J_1$. \begin{multline*} \mathbf{v}_{\scriptscriptstyle3}=\mathbf{e}_{1,^{n'}{(\partial_2lm)},n'n}-\mathbf{e}_{1,^{n'}m,n'n}-\mathbf{e}_{(1,{^{\partial_1m'n'}{(\partial_2lm)}},\partial_1m'n'n)} +\mathbf{e}_{1,^{\partial_1m'n'}m,\partial_1m'n'n}\\ +\mathbf{e}_{1,\partial_2lm{^{n}{m'}},nn'}-\mathbf{e}_{1,m^nm',nn'}-\mathbf{e}_{1,{^{\partial_1mn}{(\partial_2(^{m'}l)m')}},nn'}+\mathbf{e}_{1,^{\partial_1mn}m',nn'}\in J_1 \end{multline*} \begin{multline*} \mathbf{v}_{\scriptscriptstyle4}=\mathbf{e}_{1,\partial_2lm{^{n}{(\partial_2l'm')}},nn'}-\mathbf{e}_{1,m^nm',nn'}+\mathbf{e}_{1,\partial_2\{\partial_2lm,^{n}{\partial_2l'm'}\}\partial_2lm{^{n}{(\partial_2l'm')}},nn'} -\mathbf{e}_{1,\partial_2lm{^{n}{(\partial_2l'm')}},nn'}\\ -\mathbf{e}_{1,\partial_2\{m,^{n}{m'}\}m^nm',nn'}+ \mathbf{e}_{1,m^nm',nn'}-\mathbf{e}_{1,^{\partial_1mn}{\partial_2}(l')\partial_2(^{m'}l)^{\partial_1mn}{m'}m,nn'} +\mathbf{e}_{1,^{\partial_1mn}{m'}m,nn'}\in J_1. \end{multline*} Using the generator element $\mathbf{u}_{\scriptscriptstyle3}$ of $J_2$, we obtain the following relation in $\mathrm{K}_3$ $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{n'}l,^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1m'}{(^{n'}l)},^{\partial_1m'n'}m,\partial_1m'n'n}=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1mn}{(^{m'}l)},^{\partial_1mn}m',nn'} -\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m^nm',nn'} \cdots(A)$$ Similarly, using the generator element $\mathbf{u}_{\scriptscriptstyle4}$ of $J_2$, we obtain the following relation in $\mathrm{K}_3$ \begin{multline*} \overline{\mathbf{v}^{\scriptscriptstyle22}}_{l{^{m}{(^{n}{l'})},m^nm',nn'}}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{\partial_2lm,^{n}{(\partial_2l'm')}\},\partial_2lm{^{n}{(\partial_2l'm')}},nn'}\\ =\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,^{n}{m'}\},m^nm',nn'}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1mn}{(l'^{m'}l)},^{\partial_1mn}{m'}m,nn'} \cdots(B) \end{multline*} Using these relations, we have \begin{align*} \alpha'_{l,m,n}\overline{\tau}_2+H'_2-K'_2=&\alpha'_{l,m,n}\overline{\tau}_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})+H'_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})-K'_2(\overline{\mathbf{v}^{\scriptscriptstyle11}}_{1,m',n'})\\ =&\alpha'_{l,m,n}(\mathbf{e}_{1,1,n'}-\mathbf{e}_{1,1,\partial_1m'n'})+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,^{n}{m'}\},m^nm',nn'}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{\partial_2lm,^{n}{m'}\},\partial_2lm^{n}{m'},nn'}\\ =&\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{n'}l,^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1m'n'}l,^{\partial_1m'n'}m,\partial_1m'n'n}\\
&+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,^{n}{m'}\},m^nm',nn'}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{\partial_2lm,^{n}{m'}\},\partial_2lm^{n}{m'},nn'} \end{align*} In relation $(B)$, if we take $l'=1$, we have $$\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m^nm',nn'}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{\partial_2lm,^{n}{m'}\},\partial_2lm{^{n}{m'}},nn'} =\overline{\mathbf{v}^{\scriptscriptstyle22}}_{\{m,^{n}{m'}\},m^nm',nn'}+\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1mn}{(^{m'}l)},^{\partial_1mn}{m'}m,nn'}$$ and from this equality we have $$\alpha'_{l,m,n}\overline{\tau}_2+H'_2-K'_2=\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{n'}l,^{n'}m,n'n}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1m'n'}l,^{\partial_1m'n'}m,\partial_1m'n'n} +\overline{\mathbf{v}^{\scriptscriptstyle22}}_{l,m^nm',nn'}-\overline{\mathbf{v}^{\scriptscriptstyle22}}_{^{\partial_1m}{(^{n}{(^{m'}l)})},^{\partial_1mn}{m'},nn'}$$ and this is of course relation $(A)$ in $\mathrm{K}_3$. Thus, we obtain $$\alpha'_{l,m,n}\overline{\tau}_2+H'_2-K'_2=\overline{0}.$$ \begin{equation*} \begin{array}{llllll} \text{Murat SARIKAYA} \text{ and } \text{Erdal ULUALAN} \\ \text{Dumlup\i nar University} \\ \text{Science Faculty} \\ \text{Department of Mathematics} \\ \text{K\"{u}tahya-TURKEY} \\ \text{E-mails}: \text{[email protected]}\\ \text{[email protected]} \end{array} \end{equation*} \end{document}
Cross-layer link adaptation for goodput optimization in MIMO BIC-OFDM systems Riccardo Andreotti, Vincenzo Lottici & Filippo Giannetti This work proposes a novel cross-layer link performance prediction (LPP) model and link adaptation (LA) strategy for soft-decoded multiple-input multiple-output (MIMO) bit-interleaved coded orthogonal frequency division multiplexing (BIC-OFDM) systems employing hybrid automatic repeat request (HARQ) protocols. The derived LPP, exploiting the concept of effective signal-to-noise ratio mapping (ESM) to model system performance over frequency-selective channels, accounts not only for the actual channel state information at the transmitter and the adoption of practical modulation and coding schemes (MCSs), but also for the effect of the HARQ mechanism with bit-level combining at the receiver. This method, named aggregated ESM, or αESM for short, exhibits an accurate performance prediction combined with a closed-form solution, enabling a flexible LA strategy that selects at every protocol round the MCS maximizing the expected goodput (EGP), i.e., the number of correctly received bits per unit of time. The analytical expression of the EGP is derived capitalizing on the αESM and resorting to renewal theory. Simulation results carried out in realistic wireless scenarios corroborate our theoretical claims and show the performance gain obtained by the proposed αESM-based LA strategy when compared with the best LA algorithms proposed so far for the same kind of systems. To meet the demanding need for ever-increasing data rates and reliability, orthogonal frequency division multiplexing (OFDM), bit-interleaved coded modulation (BICM) [2], spatial multiplexing (SM) via multiple-input multiple-output (MIMO) [3], adaptive modulation and coding (AMC) [4], and hybrid automatic repeat request (HARQ) [5] are well-known techniques currently adopted, e.g., in LTE-Advanced (LTE-A) [6], and envisaged to be exploited in future wireless systems [7]. To be specific, the HARQ technique combines the automatic repeat request (ARQ) mechanism with both the channel coding error correction and the error detection capability of the cyclic redundancy check (CRC) [5]. If the CRC check is successful, the packet is correctly received and an acknowledgement (ACK) is fed back to the transmitter. Conversely, a CRC failure means that the received packet is affected by uncorrected errors and a non-ACK (NACK) is sent back. In the latter condition, a retransmission of the corrupted packet is performed according to one of the following HARQ strategies [8]: (i) type I or Chase Combining (CC), the packet is retransmitted using the same redundancy; (ii) type II with partial Incremental Redundancy (IR), only a subset of previously unsent redundancy is transmitted; and (iii) type II with full IR, the systematic bits plus a different set of coded bits than those previously transmitted are sent. The potential of the HARQ mechanism is fully exploited when the receiver suitably combines, e.g., using maximum ratio combining (MRC), the currently retransmitted packet with the previously unsuccessfully received ones, thus building a single packet whose reliability is more and more increased [5]. HARQ combining can be performed either on the received symbols or, should a different symbol mapping be employed in each transmission round, at bit level, i.e., by accumulating the bit log-likelihood ratio (LLR) metrics [9]. Background and related works.
In the literature, a considerable effort has been put into quantifying the performance limits for HARQ-based transmissions, mainly focusing on the ergodic capacity and outage probability [10–12]. In [10], an information-theoretical study about the throughput of HARQ signaling schemes is given for the Gaussian collision channel. Then, starting from [10], [11] presents a mutual information (MI) based analysis of the long-term average transmitted rate achieved by HARQ in a block-fading scenario, which allows the rate to be adjusted so that a target outage probability is not exceeded. In [12], the optimal tradeoff among throughput, diversity gain, and delay is derived for an ARQ block-fading MIMO channel with discrete signal constellations. In order to further enhance the system performance, the HARQ approach can be made adaptive by applying link adaptation (LA) strategies that account for information coming not only from the physical layer, but also from the higher-layer schemes based on packet combining, so obtaining a cross-layer optimization of the link resource utilization. Most of the works considering such an issue, however, focus on theoretical performance limits based on capacity and channel outage probability, as in [13–17]. Specifically, [13] investigates the problem of power allocation for rate maximization under quantized channel state information (CSI) feedback, [14] adapts the transmission rates using the outdated CSI, whereas [15] proposes two power allocation schemes: one minimizes the transmitted power under a given packet drop rate constraint and the other minimizes the packet drop rate under the available power constraint. Note that in [13–15], IR HARQ is considered to optimize performance under narrowband fading channels. In [16], user and power are jointly selected in a multi-user context under slow-fading channels and outdated CSI to maximize system goodput (GP), whereas [17] proposes a user, rate, and power allocation policy for GP optimization in multi-user OFDM systems with ACK-NACK feedback. Only a few recent works, however, consider practical modulation and coding schemes (MCSs). In [18], the outlined AMC algorithm maximizes the spectral efficiency under truncated HARQ for narrowband fading channels. A power minimization problem under individual user GP constraint is tackled in [19] for an orthogonal frequency division multiple access (OFDMA) network employing type II HARQ and only statistical knowledge of CSI. The work in [20] proposes the selection of the MCS to maximize GP performance in MIMO-OFDM systems under CC HARQ, where the packet error rate is evaluated through the exponential effective SNR mapping (EESM) method. A similar approach is proposed in [21], although the physical layer performance is modeled using the MI-based effective SNR (MIESM). Rationale and contributions. In this paper, we propose a novel cross-layer link performance prediction (LPP) methodology for packet-oriented MIMO bit-interleaved coded (BIC)-OFDM transmissions which accounts for (i) practical MCSs, (ii) the HARQ mechanism with bit-level combining at the receiver, and (iii) the CSI at the transmitter. The method allows the derivation of an LA strategy which is capable of selecting the MCS that maximizes the number of information bits correctly received per unit of time, or GP for short, at the user equipment (UE). The main features of the proposed method and the relevant improvements compared with the literature are outlined as follows.
The proposed LPP model, named aggregated ESM, or αESM, relies on the ESM concept [22], which enables the prediction of the performance of a multicarrier system affected by frequency-selective fading by compressing all the per-subchannel (identified by the pair subcarrier and spatial stream) SNRs into a scalar value representing the SNR of a coded equivalent binary system working over an additive white Gaussian noise (AWGN) channel. The LPP we put forward exhibits an accurate performance prediction combined with a closed-form solution, which makes it eligible for practical implementation of LA algorithms. Indeed, at the generic protocol round (PR) ℓ of a given packet, the αESM is obtained recursively, by combining the aggregated effective SNR (ESNR), which stores the performance up to the previous retransmission (step ℓ−1), with the actual ESNR at PR ℓ, which depends on the current CSI and choice of the MCS. The proposed αESM is derived from the ESM method originally proposed in [23] as κESM, by taking into account the per-subchannel SNRs along with the HARQ mechanism. The key idea of the αESM method is to properly combine together the bit LLR metrics relevant to the retransmissions of the same packet, with the result of increasing decoding reliability. Specifically, the combined bit LLR metrics are characterized following an accurate method based on the cumulant moment generating function (CMGF). The αESM is shown to overcome the limitations exhibited by [20, 21], where the MCS used in the subsequent retransmissions is identical to that originally chosen, in that the LPP works with CC only. Conversely, the proposed method has the inherent possibility of choosing the MCS optimizing the GP metric within the retransmissions of the same packet and, as a result, enables a much more flexible LA strategy. The formulation of the GP at the transmitter, named expected goodput (EGP), is derived resorting to the renewal theory framework [24] and the long-term static channel assumption [20, 21]. The goal is, indeed, to obtain a reliable performance metric that can lead to a manageable LA optimization problem. Towards this end, both theoretical and numerical analyses are employed throughout the paper to corroborate our claims and findings. Finally, simulation results carried out over realistic wireless channels testify to the advantages obtained employing the proposed LA strategy based on the αESM, when compared with the best algorithms known so far. Organization. The rest of the paper is organized as follows. Section 2 describes the HARQ retransmission mechanism and the MIMO BIC-OFDM system. In Section 3, after a brief rationale and review of the κESM LPP, the proposed αESM model is derived. Section 4 derives the EGP formulation and describes the proposed GP-oriented (GO) LA strategy. Finally, Section 5 illustrates the numerical results, whereas in Section 6, a few conclusions are drawn. Notations. Matrices are in uppercase bold, column vectors are in lowercase bold, [·]T is the transpose of a matrix or a vector, ai,j represents the entry (i,j) of the matrix A, × is the Cartesian product, calligraphic symbols, e.g., \({\mathcal {A}}\), represent sets, \(|{\mathcal {A}}|\) is the cardinality of \({\mathcal {A}}\), \({\mathcal {A}}(i)\) is the ith element of \({\mathcal {A}}\), ⌈·⌉ denotes the ceil function, and E x {·} is the statistical expectation with respect to (w.r.t.) the random variable (RV) x. In this section, we first describe the HARQ retransmission protocol.
Then, the MIMO BIC-OFDM signalling system is outlined. HARQ retransmission protocol In order to enable reliable and spectrally efficient packet transmissions, a HARQ retransmission protocol, with a maximum of L rounds, is jointly designed along with an AMC mechanism. The information to be transmitted is conveyed by packets (typically IP packets) received from the upper layers of the stack, i.e., layer 3 and above, and stored in an infinite-length buffer at the data link layer. At the radio link control (RLC) sublayer, each packet is mapped into an RLC protocol data unit (PDU) made of three sections: (i) header, with size Nh; (ii) payload, with size Np; and (iii) CRC for error detection, with size NCRC. As shown in Fig. 1, at the generic PR \(\ell \in \mathcal {L}_{\text {PR}} \triangleq \{1,\cdots,L\}\), the RLC-PDU is encoded with a code rate \(r^{(\ell)}\in \mathcal {D}_{r} \triangleq \{r_{0}, r_{1}, \cdots, r_{\text {max}}\}\), thus producing \(N^{(\ell)}_{\mathrm {c}}\triangleq N_{\mathrm {s}}/r^{(\ell)}\le {\bar N}_{c}\) coded binary symbols, or coded bits for short, where \(N_{\mathrm {s}} \triangleq N_{\mathrm {h}}+N_{\mathrm {p}}+N_{\text {CRC}}\) and \({\bar N}_{c} \triangleq N_{\mathrm {s}}/r_{0} \), which is the number of coded bits at the output of the mother code, i.e., prior to puncturing. The \(N^{(\ell)}_{\mathrm {c}}\) coded bits are transmitted using the MIMO BIC-OFDM system described in Section 2.2 over the available band W. Fig. 1 Equivalent block scheme of the MIMO BIC-OFDM system After the transmission of each packet, the receiver sends back a 1-bit feedback about the successful (ACK) or unsuccessful (NACK) packet reception. Whenever a NACK is received, the transmitter sends the packet again by encoding it with either the same puncturing pattern, a different subset of redundancy bits, or a tradeoff between the two, according to the type of HARQ. This goes on until the transmitter receives an ACK or the maximum number of retransmissions L is reached. In both cases, the packet is removed from the buffer and the transmitter moves on to sending the subsequent ones. At the receiver side, according to the HARQ scheme, for a given packet, the previously unsuccessfully received copies are stored and combined with the newly received ones, thus creating more reliable metrics [5]. Since at each PR a different symbol mapping per subchannel may be applied, it is not possible to pre-combine received symbols. Hence, the packet combining strategy consists of accumulating the bit LLR metrics [9], as explained in detail in the following sections. MIMO BIC-OFDM system At the PR \(\ell \in {\mathcal {L}}_{\text {PR}}\), the \(N_{\mathrm {c}}^{(\ell)}\) coded bits are randomly interleaved and mapped onto the physical resources available in the space-time-frequency grid of the MIMO BIC-OFDM system, whose equivalent block scheme is depicted in Fig. 1. Specifically, we consider a MIMO BIC-OFDM system with N available subcarriers, N T transmit and N R receive antennas, employing SM and uniform power allocation across the subchannels. We further assume a block-fading channel model and spatially uncorrelated antennas.
Moreover, denoting by \(\mathbf {H}^{(\ell)}_{n} \in {\mathbb {C}}^{N_{R}\times N_{T}}\) the channel matrix over the nth subcarrier, \(n \in \mathcal {N} \triangleq \{1,\cdots,N\}\), whose generic entry in position (ν1,ν2) is denoted by \(h^{(\ell)}_{n,\nu _{1},\nu _{2}}\), ν1=1,⋯,N R , ν2=1,⋯,N T , we recall that SM relies on the singular value decomposition [25] $$ \mathbf{H}^{(\ell)}_{n} = \mathbf{U}_{n}^{(\ell)}{\boldsymbol{\Theta}}_{n}^{(\ell)}{\mathbf{V}_{n}^{(\ell)}}^{*}, $$ where \(\mathbf {U}_{n}^{(\ell)} \in {\mathbb {C}}^{N_{R} \times N_{R}}\) and \(\mathbf {V}_{n}^{(\ell)} \in {\mathbb {C}}^{N_{T} \times N_{T}}\) are unitary rotation matrices and \({\boldsymbol {\Theta }}_{n}^{(\ell)} \in {\mathbb {R}}^{N_{R} \times N_{T}}\) is a rectangular matrix whose off-diagonal elements are zero and whose ordered diagonal elements are \(\vartheta _{n,1}^{(\ell)}\ge \vartheta _{n,2}^{(\ell)}\ge \cdots \ge \vartheta _{n,M}^{(\ell)}\ge 0\), with \(M \triangleq \min \{N_{T},N_{R}\}\). Thus, the system consists of at most C=N·M parallel subchannels. In the following, we assume the CSI \(\mathbf {H}^{(\ell)}_{n}\), \(\forall n \in {\mathcal {N}}\), to be known at the transmitter side. Specifically, with reference to Fig. 1, the interleaved sequence of punctured coded bits is subdivided into subsequences of \(m_{n,\nu }^{(\ell)}\) bits each, which are Gray-mapped onto the unit-energy symbols \({x}^{(\ell)}_{n,\nu }\) of a \(2^{m_{n,\nu }^{(\ell)}}\)-QAM constellation, i.e., one symbol per available subchannel \((n,\nu)\in {\mathcal {C}} \triangleq \{(n,\nu)|1\le n\le N, 1\le \nu \le M\}\), with \(m_{n,\nu }^{(\ell)} \in {{\mathcal {D}}}_{m} = \left \{2, 4,\cdots,m_{\max } \right \}\). Further, let us denote Φ(ℓ)(·,·,·) as a function mapping the punctured \(N_{c}^{(\ell)}\) coded bits, out of the \({\bar N}_{c}\) coded bits at the output of the mother code, into the label bits of the QAM symbols transmitted on the available subchannels, summarizing the puncturing, interleaving, and QAM mapping functions. Specifically, Φ(ℓ)(j,n,ν)=k means that the coded bit \(b^{(\ell)}_{k}\), \(k\in \left \{1,\cdots,{\bar {N}}_{c}\right \}\), occupies the jth position, \(j=1,\cdots,m_{n,\nu }^{(\ell)}\), within the label of the \(2^{m_{n,\nu }^{(\ell)}}\)-QAM symbol sent on the νth spatial stream, ν=1,⋯,M, of the nth subcarrier, n=1,⋯,N. According to the SM approach, each sequence of QAM symbols \(\mathbf {x}^{(\ell)}_{n}\triangleq \left [x^{(\ell)}_{n,1},\cdots,x^{(\ell)}_{n,{N_{T}}}\right ]^{\mathrm {T}}\) is pre-processed obtaining \({\tilde {\mathbf {x}}}^{(\ell)}_{n}\triangleq {\mathbf {V}_{n}^{(\ell)}}{\mathbf {x}}^{(\ell)}_{n}\), where \({\tilde {\mathbf {x}}}^{(\ell)}_{n}\triangleq \left [{\tilde x}^{(\ell)}_{n,1},\cdots,{\tilde x}^{(\ell)}_{n,{N_{T}}}\right ]^{\mathrm {T}}\), \(\forall n \in {\mathcal {N}}\). It is worth noting that \(x^{(\ell)}_{n,\nu }\triangleq 0\) for ν=M+1,⋯,N T , if N T >N R , \(\forall n \in \mathcal {N}\), since only C subchannels are available for transmission. After that, the sequences \({\tilde {\mathbf {x}}}^{(\ell)}_{n}\), \(\forall n \in {\mathcal {N}}\), are mapped onto the frequency symbols \(\mathbf {y}_{\nu }^{(\ell)}\triangleq \left [{\tilde x}^{(\ell)}_{1,\nu },\cdots,{\tilde x}^{(\ell)}_{N,\nu }\right ]^{\mathrm {T}}\), for ν=1,⋯,N T , to which conventional inverse discrete Fourier transform (DFT), parallel-to-serial conversion, and cyclic prefix (CP) insertion are applied.
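To fix ideas, the following minimal NumPy sketch (ours, not part of the original paper; all names are illustrative) shows how the SVD of (1) turns the per-subcarrier channel matrices into the parallel subchannels used throughout the paper, returning the per-subchannel SNRs \(\gamma_{n,\nu}^{(\ell)}=(\vartheta_{n,\nu}^{(\ell)}/\sigma_{n,\nu}^{(\ell)})^{2}\) anticipated here and formalized in (4) below; a common noise standard deviation is assumed for simplicity.

```python
import numpy as np

def subchannel_snrs(H_list, noise_std):
    """Per-subchannel SNRs of an SM MIMO-OFDM link via the SVD of eq. (1).

    H_list    : list of N channel matrices H_n (N_R x N_T), one per subcarrier
    noise_std : per-subchannel noise standard deviation sigma (assumed common)
    Returns an (N x M) array of SNRs gamma_{n,nu} = (theta_{n,nu}/sigma)^2.
    """
    N_R, N_T = H_list[0].shape
    M = min(N_T, N_R)                               # number of spatial streams
    gammas = np.empty((len(H_list), M))
    for n, H in enumerate(H_list):
        theta = np.linalg.svd(H, compute_uv=False)  # ordered singular values
        gammas[n, :] = (theta[:M] / noise_std) ** 2
    return gammas

# toy usage: N = 4 subcarriers, 2x2 MIMO, unit-variance Rayleigh entries
rng = np.random.default_rng(0)
H_list = [(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
          / np.sqrt(2) for _ in range(4)]
print(subchannel_snrs(H_list, noise_std=1.0))
```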
The resulting signal is then transmitted over a MIMO frequency-selective block-fading channel, using \(N^{(\ell)}_{\text {OFDM}}\triangleq \left \lceil {N_{c}^{(\ell)}}/{\sum _{(n,\nu)\in \mathcal {C}} m_{n,\nu }^{(\ell)}} \right \rceil \) OFDM symbols. At the receiver side, after CP removal and DFT processing at each antenna, we get $$ {{r}}_{\nu_1}^{(\ell)} = \sum\limits_{\nu_2=1}^{N_T}{ {F}}_{\nu_1,\nu_2}^{(\ell)} {{y}}_{\nu_2}^{(\ell)} + { {w}}_{\nu_1}^{(\ell)},\;\;\;\nu_1=1,\cdots,N_R, $$ where \({\mathbf {r}}_{\nu _{1}}^{(\ell)}\triangleq \left [r_{1,\nu _{1}},\cdots,r_{N,\nu _{1}}\right ]^{\mathrm {T}}\), \({ \mathbf {F}}_{\nu _{1},\nu _{2}}^{(\ell)} \triangleq \text {diag}\left \{h^{(\ell)}_{1,\nu _{1},\nu _{2}},\right.\left.\cdots, h^{(\ell)}_{N,\nu _{1},\nu _{2}}\right \}\), ν1=1,⋯,N R , ν2=1,⋯,N T , with \(h^{(\ell)}_{n,\nu _{1},\nu _{2}}\) introduced before (1), and \({\mathbf {w}}_{\nu _{1}}^{(\ell)}\triangleq \left [{ w}^{(\ell)}_{1,\nu _{1}},\cdots,\right.\left.{w}^{(\ell)}_{N,\nu _{1}}\right ]^{\mathrm {T}}\) is the thermal noise vector, whose generic entry is a zero-mean circularly symmetric complex Gaussian RV with standard deviation \(\sigma ^{(\ell)}_{n,\nu }\). After demultiplexing the received vectors \({\mathbf {r}}_{\nu _{1}}^{(\ell)}\), ν1=1,⋯,N R , the vectors \({{\tilde {\mathbf {z}}}}_{n}^{(\ell)}\triangleq \left [r_{n,1}^{(\ell)},\cdots,r_{n,N_{R}}^{(\ell)} \right ]^{\mathrm {T}}\), \(\forall n \in {\mathcal {N}}\), are built and, after SM post-processing via \(\mathbf {U}_{n}^{(\ell)}\), the output samples over each subcarrier are obtained as [25] $$ \mathbf{z}_{n}^{(\ell)} \triangleq {\mathbf{U}_{n}^{(\ell)}}^{*} {{{\tilde{\mathbf{z}}}}_{n}^{(\ell)}} = {\boldsymbol{\Theta}}_{n}^{(\ell)} \mathbf{x}_{n}^{(\ell)} + {\boldsymbol{\varsigma}}_{n}^{(\ell)} = \left[\vartheta_{n,1}^{(\ell)} x_{n,1}^{(\ell)} + \varsigma_{n,1}^{(\ell)},\cdots,\vartheta_{n,M}^{(\ell)} x_{n,M}^{(\ell)} + \varsigma_{n,M}^{(\ell)} \right]^{\mathrm{T}},\;\;\;n\in\mathcal{N}, $$ where the elements of \(\boldsymbol {\varsigma }_{n}^{(\ell)}\triangleq \left [{ \varsigma }^{(\ell)}_{n,1},\cdots,{\varsigma }^{(\ell)}_{n,M}\right ]^{\mathrm {T}}\) have the same distribution as those of \({ \mathbf {w}}_{\nu _{1}}^{(\ell)}\). The MIMO BIC-OFDM channel can thus be seen as a set of C parallel subchannels, represented by the diagonal matrix \(\boldsymbol {\Upsilon }^{(\ell)} \triangleq \text {diag}\left \{ \boldsymbol {\Upsilon }_{1}^{(\ell)}, \cdots, \boldsymbol {\Upsilon }_{n}^{(\ell)}, \cdots, \boldsymbol {\Upsilon }_{N}^{(\ell)} \right \}\), with \(\boldsymbol {\Upsilon }_{n}^{(\ell)} \triangleq \text {diag}\left \{\gamma _{n,1}^{(\ell)},\cdots,\gamma _{n,\nu }^{(\ell)},\cdots,\gamma _{n,M}^{(\ell)} \right \}\), \(\forall n \in \mathcal {N}\), the generic entry \(\gamma _{n,\nu }^{(\ell)}\) denoting the SNR value on subchannel (n,ν), given by $$ \gamma_{n,\nu}^{(\ell)} \triangleq \left(\frac{\vartheta_{n,\nu}^{(\ell)}}{{\sigma_{n,\nu}^{(\ell)}}}\right)^{2}, \; \; \; \forall {(n,\nu)} \in {\mathcal{C}}. $$ Finally, the receiver evaluates the soft metrics, followed by de-interleaving and decoding. Link performance prediction for HARQ-based MIMO BIC-OFDM systems This section is organized as follows. In Section 3.1, the rationale underlying the LA strategy and LPP method is recalled. In Section 3.2, the concept of the κESM ESNR technique for MIMO BIC-OFDM systems with a simple ARQ mechanism is briefly summarized.
Finally, in Section 3.3, the novel LPP method, named αESM, is derived for HARQ-based MIMO BIC-OFDM systems with bit-level combining. Rationale of the adaptive HARQ strategy The approach to follow is to properly choose the parameters of the system described in Section 2.2, e.g., modulation order and coding rate, in order to obtain the best link performance. Such an LA strategy can be formalized as a constrained optimization problem where the objective function, representing the system performance metric, is optimized over the constrained set of the available transmission parameters. Specifically, for a packet-oriented system, an information-theoretical performance measure based on capacity, which relies on the ideal assumptions of Gaussian inputs and infinite-length codebooks, is inadequate to give an actual picture of the link performance [26]. More suitable metrics have been recently identified as the packet error rate (PER) and the GP [20, 21, 26], where the latter in turn depends on the PER itself. Therefore, a simple yet effective link performance prediction method is required, accounting for both the CSI and the information coming from different techniques that further improve the transmission quality, i.e., the HARQ mechanisms with bit-level combining. In the sequel, we will focus on LPP techniques based on the well-known ESM concept, which has been shown to be the most effective framework to solve this issue, especially for multicarrier systems [22]. Background on the κESM LPP model In multicarrier systems, where the frequency-selective channel introduces large SNR variations across the subcarriers and practical modulation and coding schemes are adopted, an exact yet manageable expression of the PER proves demanding to derive. For these reasons, ESM techniques are successfully employed, according to which the PER depends on the SNRs on each subcarrier through a scalar value, called ESNR. The latter represents the SNR of a single-carrier equivalent coded system working over an AWGN channel, whose performance can be simply evaluated off-line according to analytical models [27]. Within the ESM framework, the κESM method, proposed for MIMO BIC-OFDM systems in [23], shows a remarkable tradeoff between accuracy and complexity when ARQ mechanisms are applied without any combining at the receiver.
Such a technique is based on the in-depth statistical characterization of the soft metrics at the input of the decoder, i.e., the bit LLR metrics \(\Lambda _{k}^{(\ell)}\), which, for the kth transmitted coded bit \(b_{k}^{(\ell)}\) at the ℓth PR, read as $$ \Lambda^{(\ell)}_{k} = \log \frac { \lambda_{j}\left({b_{\Phi^{(\ell)}(j,n,\nu)}^{(\ell){\prime}}}, z^{(\ell)}_{n,\nu}\right) } { \lambda_{j}\left(b_{\Phi^{(\ell)}(j,n,\nu)}^{(\ell)}, z^{(\ell)}_{n,\nu}\right) } $$ where Φ(ℓ)(j,n,ν)=k is the mapping function defined in Section 2.2 after (1), $$ {{\begin{aligned} \lambda_{j}\left(a,z^{(\ell)}_{n,\nu}\right) = \sum\limits_{\tilde{x} \in {\chi_{a}}^{\left(\ell,j,n,\nu\right)}} { \exp\left(-\left|z^{(\ell)}_{n,\nu} - \sqrt{\gamma^{(\ell)}_{n,\nu}} \tilde{x} \right|^2 \right) }, \;\; a\in\left\{b_{k}^{(\ell)},{b_{k}^{(\ell)}}^{\prime}\right\}, \end{aligned}}} $$ denotes the bit decoding metric, \(b_{k}^{\prime }\) is the complement of b k , \(\chi _{a}^{\left (\ell,j,n,\nu \right)}\) represents the subset of all the symbols belonging to the modulation adopted on the subchannel (n,ν) whose jth label bit is equal to a, whereas \(z^{(\ell)}_{n,\nu }\) is the generic entry of the vector z(ℓ) defined in (3) with \(\gamma ^{(\ell)}_{n,\nu }\) given by (4). Note that if the coded bit \(b_{k}^{(\ell)}\) is not transmitted, i.e., it is punctured at PR ℓ, then \(\Lambda ^{(\ell)}_{k}\triangleq 0\). After a few approximations, it is shown in [23] that the PER performance of the coded MIMO BIC-OFDM system over frequency-selective channel is accurately given by that of a coded BPSK system over AWGN channel having SNR equal to the κESM ESNR $$ \gamma^{(\ell)} \triangleq - \log \left(\frac{1}{{\sum\limits_{(n,\nu)\in{\mathcal{C}}} m_{n,\nu}^{(\ell)} }} \sum\limits_{(n,\nu)\in {\mathcal{C}}} \Omega_{n,\nu}^{(\ell)}\left(m_{n,\nu}^{(\ell)}\right) \right) $$ $$ \Omega_{n,\nu}^{(\ell)} \left(m_{n,\nu}^{(\ell)}\right) \triangleq {\sum\limits_{\mu = 1}^{\sqrt{2^{m_{n,\nu}^{(\ell)}-2}}} {\frac{{{\psi_{m_{n,\nu}^{(\ell)}}}(\mu)}}{{{2^{{m_{n,\nu}^{(\ell)}} - 1}}}} \cdot{{\mathrm{e}}^{- \frac{{{\gamma^{(\ell)}_{n,\nu}}{{\left({\mu \cdot d_{n,\nu}^{(\min)}} \right)}^{2}}}}{4}}}} } $$ with \(\psi _{m_{n,\nu }^{(\ell)}}\) and \(d^{(\min)}_{n,\nu }\) being constants depending on the modulation order adopted on subchannel (n,ν) at PR ℓ. Expression (7) comes from the CMGF \(\kappa _{\Lambda }^{(\ell)} (\hat {s})\triangleq \log \mathrm {E} \left \{ \text {e}^{\hat {s} \Lambda _{k}^{(\ell)}}\right \}\) of the bit LLR metric \(\Lambda _{k}^{(\ell)}\) given by (5) evaluated at the saddlepoint \(\hat {s} = 1/2\) and, specifically, \({\gamma ^{(\ell)} = - \kappa _{\Lambda }^{(\ell)} (\hat {s})}\) [23]. From (7)–(8), it has to be pointed out that γ(ℓ) depends on the modulation order adopted on each subchannel given Υ(ℓ). The αESM model In this section, we introduce the concept of aggregate ESNR mapping, or αESM for short, in order to predict the performance of the system of interest under the HARQ mechanism. Specifically, by extending to the HARQ context the method presented in [28] for the estimation of the pairwise error probability (PEP), the key idea of the αESM we will propose is built upon two concepts: (i) the decoding score, a RV whose positive tail probability yields the PEP [28], and (ii) the equivalent binary input output symmetric (BIOS) model of the BICM scheme [2] applied to the MIMO BIC-OFDM system described in Section 2.2.
According to the latter, at each PR \(\ell \in \mathcal {L}_{\text {PR}}\) and for each of the \(N^{(\ell)}_{\text {OFDM}}\) symbols during such round, the MIMO BIC-OFDM channel is modeled as a set of $$ B^{(\ell)}\triangleq\sum\limits_{(n,\nu)\in {\mathcal{C}}} m_{n,\nu}^{(\ell)} $$ parallel BIOS channels. We recall from Section 2.2 that \(B^{(\ell)}\cdot N^{(\ell)}_{\text {OFDM}}\ge N^{(\ell)}_{c} \). From now on, for the sake of simplicity but w.l.g., we assume that only one OFDM symbol is sufficient for the transmission of the \(N_{c}^{(\ell)}\)-bit-long codeword, so that the dependence on the OFDM symbol index is avoided. In particular, we have \(B^{(\ell)} = N_{c}^{(\ell)}\). Considering that the exact estimation of the PER for the system at hand is a demanding problem, we will first evaluate the PEP expression and then resort to the standard union bound. The one-to-one mapping between the codeword and the associated vector of modulation symbols allows us to express the PEP as follows. Let \(\mathbf {c}^{(\ell)}\triangleq \left \{c_{1}^{(\ell)},\cdots,c_{N_{c}^{(\ell)}}^{(\ell)}\right \}\) be the reference codeword (corresponding to the transmitted RLC-PDU at the ℓth PR) at the output of the puncturing device and \({\mathbf {c}^{(\ell)}}{\prime }\triangleq \left \{{c_{1}^{(\ell)}}{\prime },\cdots,{c_{N_{\mathrm {c}}^{(\ell)}}^{(\ell){\prime }}}\right \}\) the competing codeword, with \(c^{(\ell)}_{i}\) the ith coded bit after puncturing. Besides, let us define Π(ℓ)(i)=k, \(i=1,\cdots,N_{c}^{(\ell)}\), \(k\in \{1,\cdots,{\bar N}_{c}\}\), as the puncturing mapping such that \(c^{(\ell)}_{i}=b^{(\ell)}_{\Pi ^{(\ell)}(i)}\), where \(b^{(\ell)}_{\Pi ^{(\ell)}(i)}\) is the kth coded bit prior to puncturing. Then, upon denoting the reference and competing codewords as \(\mathbf {b}^{(\ell)}\triangleq \left \{b_{\Pi ^{(\ell)}(1)}^{(\ell)},\cdots,b_{\Pi ^{(\ell)}\left ({N_{\mathrm {c}}^{(\ell)}}\right)}^{(\ell)}\right \}\) and \({\mathbf {b}^{(\ell)}}{\prime }\triangleq \left \{{b_{\Pi ^{(\ell)}(1)}^{(\ell){\prime }}},\cdots,{b_{\Pi ^{(\ell)}\left ({N_{\mathrm {c}}^{(\ell)}}\right)}^{(\ell){\prime }}}\right \}\), respectively, the PEP results as $$ \text{PEP}~ \left(\mathbf{b}^{(\ell)},{\mathbf{b}^{(\ell)}}{\prime}\right) \triangleq \text{Pr} \left\{ \lambda\left({\mathbf{b}^{(\ell)}}{\prime},\mathbf{z}^{(\ell)}\right) > \lambda\left({\mathbf{b}^{(\ell)}},\mathbf{z}^{(\ell)}\right) \right\}, $$ where λ(·) is the soft decoding metric depending on the chosen decoding strategy. In the sequel, we will first recall the case where no bit combining is performed [28], and then, we will extend this approach to the bit-level combining receiver, which represents the novel contribution of the work. No bit combining at the receiver. With reference to the equivalent BIOS model of the MIMO BIC-OFDM system as depicted in Fig. 2, the following observations hold. The input to the ith BIOS channel, 1≤i≤B(ℓ), is the bit \(b_{\Pi ^{(\ell)}(i)}^{(\ell)}\), which is mapped onto the jth position of the label of the QAM symbol \({x}^{(\ell)}_{n,\nu }\) sent on subchannel (n,ν), with Φ(ℓ)(j,n,ν)=k. Fig. 2 Equivalent BIOS model of the MIMO BIC-OFDM system The output is the bit log-likelihood metric \(\Lambda _{k}^{(\ell)}\), also named bit score, evaluated as in (5).
The decoding metric for the reference codeword b(ℓ), i.e., the BICM maximum a posteriori metric, results as [28] $$ \lambda ({{b}^{(\ell)}},{{z}^{(\ell)}}) = \prod\limits_{(n,\nu) \in {\mathcal{C}}} {\prod\limits_{j = 1}^{m_{n,\nu}^{(\ell)}} {\lambda_j \left({b^{(\ell)}_{\Phi^{(\ell)}(j,n,\nu)}},{z}^{(\ell)}_{n,\nu}\right)} }, $$ where λ j (·,·) is the decoding metric associated with bit \(b^{(\ell)}_{\Phi ^{(\ell)}(j,n,\nu)}\), evaluated according to (6), whereas the one for the competing codeword b(ℓ)′ is obtained as in (11) by simply replacing b(ℓ) with b(ℓ)′. Hence, the pairwise decoding score (PDS) relevant to the transmitted codeword b(ℓ) with respect to b(ℓ)′ can be written as $$ \Lambda_{{\text{PW}}}^{(\ell)} \triangleq \sum\limits_{(n,\nu) \in {\mathcal{C}}} {\sum\limits_{j = 1}^{m_{n,\nu}^{(\ell)}} {{\Lambda}_{\Phi^{(\ell)} (j,n,\nu)}^{(\ell)}} }, $$ where the LLR bit metric \({{\Lambda }_{\Phi ^{(\ell)} (j,n,\nu)}^{(\ell)}}\) is defined by (5). Therefore, upon plugging (11) evaluated for both b(ℓ) and b(ℓ)′ in the PEP expression (10), after some algebra, we obtain $$ \text{PEP}\left(\mathbf{b}^{(\ell)},\mathbf{b}^{{(\ell)}'}\right)=\text{Pr}\left(\Lambda_{{\text{PW}}}^{(\ell)}>0 \right). $$ Bit-level combining at the receiver. The optimal receiver that accounts for the combination of all the received copies should perform a joint decoding of the pairwise decoding scores over all the possible L transmissions. However, it would result in an unfeasible complexity, exponentially increasing with L [29]. On the other hand, exploiting bit-level combining offers an effective trade-off between performance and complexity [30]. Accordingly, this is the approach we will pursue in the sequel. The decoding metric in (11) shall now account for the recombination mechanism up to the PR ℓ. Indeed, at every PR, the actual bit scores are evaluated as in (5) and, for each bit k, added to the bit scores evaluated during the previous PRs. Thus, the output of the equivalent BIOS channel is now the aggregate bit score $$ {\mathcal{L}}_{k}^{(\ell)} \triangleq {{q}_{k}^{(\ell)}}^{\mathrm{T}}\boldsymbol{\Lambda}_{k}^{(\ell)}, $$ where \(\boldsymbol {\Lambda }_{k}^{(\ell)} \triangleq \left [\Lambda _{k}^{(1)},\cdots,\Lambda _{k}^{(\ell)} \right ]^{\mathrm {T}}\) collects the per-round bit scores of the coded bit k up to PR ℓ and \(\mathbf {q}_{k}^{(\ell)}\triangleq \left [q_{k}^{(1)},\cdots,q_{k}^{(\ell)} \right ]^{\mathrm {T}} \in \{0,1\}^{\ell }\) is the puncturing vector, that is, \(q_{k}^{(i)}=1\) if bit k has been transmitted at round i, otherwise 0 if it has been punctured. In turn, the aggregate PDS at round ℓ is given by $$ {\mathcal{L}}_{{\text{PW}}}^{(\ell)} = \sum\limits_{(n,\nu) \in {\mathcal{C}}} \sum\limits_{i=1}^{\ell} {\sum\limits_{j = 1}^{m_{n,\nu}^{(i)}} {q^{(i)}_{\Phi (j,n,\nu)} \Lambda^{(i)}_{\Phi (j,n,\nu)}} }. $$ Then, after some algebra, the PEP using bit-level combining at the receiver results as $$ \text{PEP}(\mathbf{b}^{(\ell)},\mathbf{b}^{{(\ell)}{\prime}})=\text{Pr}\left({\mathcal{L}}_{{\text{PW}}}^{(\ell)}>0 \right). $$ Let us now define the CMGF of the bit score \({\mathcal {L}}_{k}^{(\ell)}\) as $$ {\kappa_{{\mathcal{L}}}^{(\ell)}}(s) \triangleq \log \left({{\mathrm{E}}\left\{ {{\mathrm{e}^{s{\mathcal{L}}_{k}^{(\ell)}}}} \right\}} \right) $$ where the expectation is taken w.r.t. all the random variables, and rely on the following assumption (A2): the pattern \(\mathbf {q}^{(\ell)}_{k}\) can be modeled as a sequence of ℓ independent and identically distributed (i.i.d.)
binary RVs taking values 0 or 1, independently of the bit index k. The above is motivated by the fact that at each PR, a random subset of the coded bits is selected among the ones at the input of the puncturing device. As a consequence of A2, the puncturing pattern can be designated as q(ℓ)=[q(1),⋯,q(ℓ)]T. Then, exploiting the law of total probability, from (14), the CMGF (17) turns out to be $$ {{\begin{aligned} {\kappa_{{\mathcal{L}}}^{(\ell)}}(s) = \log \left({\sum\limits_{{\bar{\mathbf{q}}^{(\ell)}} \in {{\mathcal{Q}}^{(\ell)}}} {\Pr \left({{{q}^{(\ell)}} = {{\bar{\mathbf{q}}}^{(\ell)}}} \right){{\prod\limits_{i = 1}^\ell {\left[ {{\mathrm{E}}\left\{ {{\mathrm{e}^{s\Lambda_{k}^{(i)}}}} \right\}} \right]^{{{\bar q}^{(i)}}}} }}}} \right), \end{aligned}}} $$ where \({\bar {\mathbf {q}}^{(\ell)}}\triangleq \left [\bar q^{(1)},\cdots,\bar q^{(\ell)}\right ]^{\mathrm {T}}\) and \({\mathcal {Q}}^{(\ell)}\) is the set of all the possible puncturing patterns \(\bar {\mathbf {q}}^{(\ell)}\) over the first ℓ PRs. Further, recalling that \(\kappa _{\Lambda }^{(\ell)} (\hat s)\triangleq \log \mathrm {E} \left \{ \text {e }^{\hat s\Lambda _{k}^{(\ell)}}\right \}\), (18) can be rewritten as $$ {\kappa_{{\mathcal{L}}}^{(\ell)}}(s)= \log \left({\sum\limits_{{\bar{\mathbf{q}}^{(\ell)}} \in {{\mathcal{Q}}^{(\ell)}}} {\Pr \left({{{q}^{(\ell)}} = {\bar {\mathbf{q}}^{(\ell)}}} \right){{\prod\limits_{i = 1}^\ell {\left[ {{\mathrm{e}^{{\kappa_{\Lambda}^{(i)}}(s)}}} \right]^{{{\bar q}^{(i)}}}} }}}} \right). $$ Following the line of reasoning about the no bit combining case previously recalled [28], in case of sufficiently long interleaving and a linear binary code, the per-round bit scores \(\Lambda _{k}^{(\ell)}\) are, to a practical extent, i.i.d. RVs and independent of q(ℓ). Hence, resorting to the so-called Gaussian approximation, the PEP can be approximated by [31] $$ {\text{PEP}}(d) \simeq Q\left({\sqrt { - 2d{\kappa_{{\mathcal{L}}}^{(\ell)}}(\hat s)}} \right), $$ where d is the Hamming distance between b(ℓ) and b(ℓ)′ and \(\hat s\) represents the saddle point, with \(\hat s = 1/2\) for BIOS channels [31]. The above expression (20) can be seen as the PEP of an equivalent coded BPSK system operating over AWGN channel with SNR equal to \(-\kappa _{\mathcal {L}}^{(\ell)}(\hat {s})\). Thus, using (19) and exploiting the first equality in (7), we can eventually define the aggregate effective SNR, or αESNR for short, as $$ \Gamma _\alpha^{(\ell)} \triangleq - \log \left({\sum\limits_{{\bar {\mathbf{q}}^{(\ell)}} \in {{\mathcal{Q}}^{(\ell)}}} {\Pr \left({{{q}^{(\ell)}} = {\bar{\mathbf{q}}^{(\ell)}}} \right)\prod\limits_{i = 1}^\ell {{{\left[ {{\mathrm{e}^{- {\gamma^{(i)}}}}} \right]}^{{{\bar q}^{(i)}}}}}} } \right), $$ where \(\gamma ^{(i)} \triangleq -\kappa _{\Lambda }^{(i)}(\hat s)\), 1≤i≤ℓ, is the ESNR relevant to the ith HARQ round, derived in [23] and reported in (7). In conclusion, (21) can be properly rearranged, leading to the result stated in the following.
The αESM \(\Gamma _{\alpha }^{(\ell)}\) can be lower-bounded as $$ \Gamma_\alpha^{(\ell)} \ge g\left(\Gamma_\alpha^{(\ell-1)},\xi^{(\ell)}\right) + f\left(\gamma^{(\ell)},\xi^{(\ell)}\right), \;\;\; 1 < \ell \le L, $$ where r(ℓ) is the coding rate employed at PR ℓ, 1≤ℓ≤L, \(\Gamma _{\alpha }^{(1)}=\gamma ^{(1)}\), R(1)=r(1), $$ \xi^{(\ell)} \triangleq \frac{r^{(\ell)}}{R^{(\ell-1)}},\quad R^{(\ell)} \triangleq \min\{R^{(\ell-1)},r^{(\ell)}\}, \;\;\; 1 < \ell \le L, $$ $$ {{\begin{array}{*{20}{c}} {g(x,a) = \left\{ {\begin{array}{ll} { - \log\left[1 + a({\mathrm{e}^{- x}} - 1)\right],} & {{r^{(\ell)}} \le {R^{(\ell - 1)}}} \\ {x,} & {{r^{(\ell)}} > {R^{(\ell - 1)}}} \\ \end{array},} \right.} \;\;\;1 < \ell \le L, \\ {f(x,a) = \left\{ {\begin{array}{ll} {x,} & {{r^{(\ell)}} \le {R^{(\ell - 1)}}} \\ { - \log \left[1 + \frac{1}{a}({\mathrm{e}^{- x}} - 1)\right],} & {{r^{(\ell)}} > {R^{(\ell - 1)}}} \\ \end{array},} \right.} \;\;\;1 < \ell \le L. \\ \end{array}}} $$ See Appendix A. □ In order to evaluate the tightness of the lower bound in Theorem 1, the relative error \(\delta _{\alpha } \triangleq \left (\Gamma _{\alpha }^{(\ell)}-\bar {\Gamma }_{\alpha }^{(\ell)}\right)/\Gamma _{\alpha }^{(\ell)}\) is depicted in Fig. 3, with \(\bar {\Gamma }_{\alpha }^{(\ell)}\) being the right-hand side of (22), i.e., the lower bound on the true αESM value, while the exact expression (21) is evaluated numerically, as a function of the PRs ℓ∈[1,8]. Specifically, for a given value of ℓ, \(\Gamma _{\alpha }^{(\ell)}\) is averaged over Navg=104 independent realizations. At each realization: the sequence of coding rates \(\{r^{(i)}\}_{i=1}^{\ell }\), through which the puncturing pattern probabilities in (21) are evaluated, is randomly drawn from the set of available coding rates \(\mathcal {D}_{r}\); the sequence of ESNRs \(\{\gamma ^{(i)}\}_{i=1}^{\ell }\) is drawn as \(\left.\gamma ^{(i)}\right |_{\text {dB}} \in {\mathcal {U}}\left [-3,3\right ]\). The lower bound (22) is evaluated for the above sets \(\{r^{(i)}\}_{i=1}^{\ell }\) and \(\{\gamma ^{(i)}\}_{i=1}^{\ell }\) and then averaged over the Navg realizations. As shown in Fig. 3, although the derived lower bound gets looser for higher L, it can be considered tight for more practical values of L, i.e., at least for L≤5. In particular, it can be noted that it is very accurate up to L=3, where we have δ α ≤0.07. Fig. 3 Relative error between the exact αESM value and the lower bound Upon defining \(\boldsymbol {\varphi }^{(\ell)} \triangleq \{{m^{(\ell)}},r^{(\ell)}\}\) as the MCS at PR ℓ, with \(m_{n,\nu }^{(\ell)} = m^{(\ell)}\), \(\forall (n,\nu) \in \mathcal {C}\), \(m^{(\ell)} \in {\mathcal {D}}_{m}\) and \(r^{(\ell)} \in {\mathcal {D}}_{r}\), so that \(\boldsymbol {\varphi }^{(\ell)} \in {\mathcal {D}}_{\boldsymbol {\varphi }} \triangleq {\mathcal {D}}_{m} \times {\mathcal {D}}_{r}\) is the set of the allowable MCSs, a few comments are now in order. Updating \(\Gamma _{\alpha }^{(\ell)}\) through (22) requires only (i) the aggregate quantities \(\Gamma _{\alpha }^{(\ell -1)}\) and R(ℓ−1) related to the previous (ℓ−1)th step and (ii) the κESNR γ(ℓ), which is evaluated at the current ℓth PR according to (7), based on the current SNRs Υ(ℓ) and MCS φ(ℓ). Accordingly, \(\boldsymbol {\sigma }^{(\ell)} \triangleq \left \{\Gamma _{\alpha }^{(\ell)},R^{(\ell)}\right \}\) can be defined as the "state" of the HARQ scheme we are processing.
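To illustrate how lightweight this state update actually is, the following Python sketch (ours, not from the paper; names are illustrative) implements the recursion of Theorem 1: it maps the state \(\boldsymbol{\sigma}^{(\ell-1)}=\{\Gamma_\alpha^{(\ell-1)},R^{(\ell-1)}\}\) and the current pair \((\gamma^{(\ell)},r^{(\ell)})\) into \(\boldsymbol{\sigma}^{(\ell)}\) via the functions \(g(\cdot,\cdot)\) and \(f(\cdot,\cdot)\) of (24). The toy usage also checks the fixed-rate case discussed below, where the per-round ESNRs simply accumulate.

```python
import math

def alpha_esm_update(Gamma_prev, R_prev, gamma_curr, r_curr):
    """One recursion step of the aggregate ESNR lower bound in (22)-(24).

    Gamma_prev : aggregate ESNR after round l-1 (Gamma_alpha^{(1)} = gamma^{(1)})
    R_prev     : R^{(l-1)}, the minimum of the coding rates used so far
    gamma_curr : kappaESM ESNR gamma^{(l)} of the current round, from (7)
    r_curr     : coding rate r^{(l)} of the current round
    Returns the new state (Gamma_alpha^{(l)}, R^{(l)}).
    """
    xi = r_curr / R_prev                      # rate ratio xi^{(l)}, eq. (23)
    if r_curr <= R_prev:                      # upper branch of (24)
        g = -math.log(1.0 + xi * (math.exp(-Gamma_prev) - 1.0))
        f = gamma_curr
    else:                                     # lower branch of (24)
        g = Gamma_prev
        f = -math.log(1.0 + (1.0 / xi) * (math.exp(-gamma_curr) - 1.0))
    return g + f, min(R_prev, r_curr)

# toy usage: three rounds at a fixed rate -> the ESNRs simply accumulate
Gamma, R = 1.0, 0.5                           # round 1: gamma^{(1)} = 1.0, r = 1/2
for gamma_l in (0.8, 1.2):                    # hypothetical per-round ESNRs
    Gamma, R = alpha_esm_update(Gamma, R, gamma_l, 0.5)
print(Gamma)                                  # ~3.0 = 1.0 + 0.8 + 1.2
```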
Hence, the αESNR \(\Gamma _{\alpha }^{(\ell)}\) at the ℓth PR depends only on the state σ(ℓ−1) (related to the past retransmissions up to the (ℓ−1)th one), the current SNRs Υ(ℓ), both known at PR ℓ at the transmitter, and the MCS φ(ℓ), which stands for the optimization parameter to find in order to improve the link performance. Thus, the αESM can be written as \(\Gamma _{\alpha }^{(\ell)}(\boldsymbol {\varphi }^{(\ell)} | (\boldsymbol {\sigma }^{(\ell -1)},\boldsymbol {\Upsilon }^{(\ell)}))\), whereas the κESM in (7) can be expressed as γ(ℓ)(φ(ℓ)|Υ(ℓ)). The update recursion is depicted in Fig. 4, where the selector output is \((x,y,a)=\left (\gamma ^{(\ell)},\Gamma _{\alpha }^{(\ell -1)},R^{(\ell -1)}/r^{(\ell)}\right)\) if r(ℓ)>R(ℓ−1) or \((x,y,a)=\left (\Gamma _{\alpha }^{(\ell -1)},\gamma ^{(\ell)}, r^{(\ell)}/R^{(\ell -1)}\right)\) if r(ℓ)≤R(ℓ−1). The PER performance of the MIMO BIC-OFDM system over frequency-selective fading channel with HARQ and packet combining mechanism can be approximated up to round ℓ as $$ \begin{aligned} &\text{PER}\left(\boldsymbol{\varphi}^{(1)},\cdots,\boldsymbol{\varphi}^{(\ell)},\boldsymbol{\Upsilon}^{(1)},\cdots, \boldsymbol{\Upsilon}^{(\ell)}\right) \simeq \Psi_{r^{(\ell)}}\\ &\left(\Gamma_\alpha^{(\ell)}(\boldsymbol{\varphi}^{(\ell)} | (\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}))\right), \end{aligned} $$ where \(\Psi _{r^{(\ell)}}\left (\Gamma _{\alpha }^{(\ell)}(\boldsymbol {\varphi }^{(\ell)} | (\boldsymbol {\sigma }^{(\ell -1)},\boldsymbol {\Upsilon }^{(\ell)}))\right)\) is the PER of the equivalent coded BPSK system over AWGN channel operating at SNR \(\Gamma _{\alpha }^{(\ell)}(\boldsymbol {\varphi }^{(\ell)} |(\boldsymbol {\sigma }^{(\ell -1)},\boldsymbol {\Upsilon }^{(\ell)}))\). It can be noted that such PER is a monotone decreasing and convex function in the SNR region of interest [27]. It can be shown that the lower bound (22) is exactly met when r(ℓ)≤R(ℓ−1), 1≤ℓ≤L, i.e., if the coding rate decreases along the retransmissions. Under the assumption that the coding rate is not adapted, i.e., r(j)=r(j−1), 2≤j≤ℓ, then \(\Gamma _{\alpha }^{(\ell)} = \sum _{j=1}^{\ell }{\gamma ^{(j)}}\), thus meaning that the aggregate ESNR of the HARQ mechanism is obtained, as expected, by accumulating the ESNRs evaluated at each PR. Link adaptation for EGP optimization In this section, we first derive the EGP expression under a HARQ mechanism according to the αESM concept. Then, in order to choose the modulation and coding parameters, we formulate a per-round αESM-based LA strategy, which optimizes the GP performance metric. Fig. 4 Block diagram of the αESM update Expected goodput formulation Capitalizing on the results gained in the previous section, let us now derive the expression of the EGP metric at the generic PR ℓ. Toward this end, we resort to renewal theory [24], which was first introduced in [32] to analyze the throughput performance of a HARQ system, under the assumptions (A1) of an error- and delay-free feedback channel and an infinite-length buffer. As an initial step, let us assume that at the ℓth PR, the system has previously experienced ℓ−1 unsuccessful packet transmission attempts and there are still L−ℓ+1 PRs available. Then, let us define a renewal event as the following occurrence: the system stops transmitting the current packet because either an ACK is received or the PR limit L is reached.
Let \(\left \{X_{i}^{(\ell)}\right \}\) be independent identically distributed non-negative RVs, denoting the time elapsed between the renewal event i and i+1, i.e., the inter-renewal time, and \(\left \{Z^{(\ell)}_{i}\right \}\) a sequence of independent positive random rewards earned at every renewal event. [Renewal Reward Theorem, [24]] The long-time average reward Y(ℓ)(t) per unit of time satisfies $$ \mathop {\lim }\limits_{t \to \infty} \frac{{1}}{t} Y^{(\ell)}(t) = \frac{\mathrm{E}\left\{Z^{(\ell)}_{i}\right\}}{\mathrm{E}\left\{X^{(\ell)}_{i}\right\}}. $$ From the renewal theory [24]. □ Theorem 2 states that the accumulated reward over time equals the ratio between the expected reward \(\mathrm {E}\left \{Z^{(\ell)}_{i}\right \}\) and the expected time \(\mathrm {E}\left \{X^{(\ell)}_{i}\right \}\) in which such reward is earned. In light of Theorem 2 and the αESM model derived in Section 3.3, the EGP metric can be formulated as follows. The EGP at the ℓth PR for the HARQ-based system is $$ {{\begin{aligned} &\eta^{(\ell)} \left(\boldsymbol{\varphi}^{(\ell)}|\left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)\\ &= \frac{N_{\mathrm{p}}}{W}\frac{1-P_{\text{UPD}}\left(\boldsymbol{\varphi}^{(\ell)}, \Gamma_{\alpha}^{(\ell)},\cdots,\Gamma_{\alpha}^{(L)} \right)} {T_{\mathrm{f}}\left(\left\{\boldsymbol{\varphi}^{(i)}\right\}_{i=1}^{\ell-1}\right) + {T_{D}}\left({\boldsymbol{\varphi} ^{(\ell)}},\Gamma_{\alpha}^{(\ell)},\cdots,\Gamma_{\alpha}^{(L)}\right)}, \end{aligned}}} $$ where $$ \begin{aligned} &P_{\text{UPD}} \left(\boldsymbol{\varphi}^{(\ell)},\Gamma_{\alpha}^{(\ell)},\cdots,\Gamma_{\alpha}^{(L)}\right) \triangleq \prod\limits_{j=0}^{L-\ell} \Psi_{r^{(\ell)}}\\&\left({\Gamma_\alpha}^{(\ell+j)}\left(\boldsymbol{\varphi}^{(\ell)}|\left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)\right) \end{aligned} $$ represents the probability of unsuccessful packet decoding (UPD) within the retry limit L, \(T_{\mathrm {f}}\left (\{\boldsymbol {\varphi }^{(i)}\}_{i=1}^{\ell -1}\right)\) is the time spent in the previous ℓ−1 failed attempts, and $$ \begin{aligned} {T_{D}}\left({\boldsymbol{\varphi}^{(\ell)}},\Gamma_{\alpha}^{(\ell)},\cdots,\Gamma_{\alpha}^{(L)}\right)\triangleq {T_{\mathrm{u}}}({\boldsymbol{\varphi}^{(\ell)}}) &\cdot \sum\limits_{j = 0}^{L-\ell} \left[{\vphantom{\sum\limits_{j = 0}^{L-\ell}}} (j+1) \left(1 - \Psi_{r^{(\ell)}} \left(\Gamma_{\alpha}^{(\ell+j)} \left({\boldsymbol{\varphi}^{(\ell)}}|\left({\boldsymbol{\sigma}^{(\ell - 1)}},\boldsymbol{\Upsilon}^{(\ell)}\right)\right) \right)\right)\right.\\&\qquad\qquad\qquad {\cdot \left. \prod\limits_{k = 0}^{j} \Psi_{r^{(\ell)}} \left(\Gamma_{\alpha}^{(\ell+k-1)} \left({\boldsymbol{\varphi}^{(\ell)}}|\left({\boldsymbol{\sigma}^{(\ell - 1)}},\boldsymbol{\Upsilon}^{(\ell)}\right)\right) \right)\right]} \end{aligned} $$ is the expected delivery time, with $$ T_{\mathrm{u}}(\boldsymbol{\varphi}^{(\ell)}) = \frac{N_{\mathrm{s}} T_{\mathrm{B}}} {r^{(\ell)} \sum\limits_{(n,\nu)\in {\mathcal{C}}} m_{n,\nu}^{(\ell)}} $$ denoting the time interval required to transmit a packet of \(N^{(\ell)}_{\mathrm {c}}=N_{\mathrm {s}}/r^{(\ell)}\) coded bits employing MCS φ(ℓ), and T B being the OFDM symbol duration. See Appendix B. □ The following remarks are now in order. Thanks to the long-term static channel assumption (A3): at PR ℓ, each packet experiences the current channel condition Υ(ℓ) over its possible future retransmissions, so that φ(ℓ+j)=φ(ℓ), j∈[0,L−ℓ].
Therefore, at the ℓth PR, the ESNRs \(\Gamma _{\alpha }^{(\ell)}, \Gamma _{\alpha }^{(\ell +1)},\cdots, \Gamma _{\alpha }^{(L)}\) are only functions of φ(ℓ) given the state (σ(ℓ−1),Υ(ℓ)), i.e., we can write \(\Gamma _{\alpha }^{(\ell +j)}\left (\boldsymbol {\varphi }^{(\ell)} | {\boldsymbol {\sigma }^{(\ell - 1)}},\boldsymbol {\Upsilon }^{(\ell)} \right)\), j∈[0,L−ℓ]. Assumption A3 may seem counterintuitive. Indeed, if the channel did not change, there would be no need to adapt the MCS at each retransmission. However, the channel does change from PR to PR and, every time, the corresponding metric is fed back to the transmitter (see assumption A1). The latter exploits this information to evaluate the EGP and adapt the MCS for the current retransmission. As a matter of fact, it is only for the sake of evaluating the EGP that the channel is assumed, during the following PRs, to be constant and equal to the current one, so as to obtain a manageable expression for the EGP. The UPD expression (28) is obtained assuming independent PER among the PRs, even though they are related by the recursive αESM expression. Such an assumption is confirmed in Section 5, where numerical results obtained over realistic wireless channels show that the proposed LA strategy, optimizing the EGP, outperforms the best LA known so far. Recalling remark 1) and approximating the αESM \(\Gamma _{\alpha }^{(\ell)}(\boldsymbol {\varphi }^{(\ell)} | (\boldsymbol {\sigma }^{(\ell -1)},\boldsymbol {\Upsilon }^{(\ell)}))\) with the lower bound given by Theorem 1, we have $$ \begin{aligned} &\Gamma_\alpha^{(\ell+j)}\left(\boldsymbol{\varphi}^{(\ell)} | \left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)\\ &= g\left(\Gamma_\alpha^{(\ell-1)},\xi^{(\ell)}\right) + (j+1)~f\left[\!\gamma^{(\ell)}\left(\boldsymbol{\varphi}^{(\ell)}|\boldsymbol{\Upsilon}^{(\ell)}\right),\xi^{(\ell)}\right],\\ &0 \le j \le L-\ell. \end{aligned} $$ Expression (31) can be simply shown by induction upon noting that, due to remark 1), we have φ(ℓ+j)=φ(ℓ) and hence r(ℓ+j)=r(ℓ) and γ(ℓ+j)=γ(ℓ), for j∈[0,L−ℓ]. Thus, remark 4) paves the way for the following proposition. Upon plugging (31) into (28) and (29), the EGP (27) turns into $$ {{\begin{aligned} &\zeta^{(\ell)} \left(\boldsymbol{\varphi}^{(\ell)}|\left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)\\ &= \frac{N_{\mathrm{p}}}{W}\frac{1-P_{\text{UPD}}\left(\boldsymbol{\varphi}^{(\ell)} |\left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)} {T_{\mathrm{f}}\left(\left\{\boldsymbol{\varphi}^{(i)}\right\}_{i=1}^{\ell-1}\right) + {T_{\mathrm{u}}}({\boldsymbol{\varphi}^{(\ell)}}) \phi\left({\boldsymbol{\varphi}^{(\ell)}}|\left({\boldsymbol{\sigma}^{(\ell - 1)}},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)}, \end{aligned}}} $$ where $$ {{\begin{aligned} &P_{\text{UPD}}\left(\boldsymbol{\varphi}^{(\ell)}, \Gamma_{\alpha}^{(\ell)}, \cdots, \Gamma_{\alpha}^{(L)}\right) \equiv P_{\text{UPD}}\left(\boldsymbol{\varphi}^{(\ell)} | \left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)=\\ &\qquad\qquad = \prod\limits_{j=0}^{L-\ell} \Psi_{r^{(\ell)}}\left(g\left(\Gamma_{\alpha}^{(\ell-1)},\xi^{(\ell)}\right) \right.\\ & \qquad\qquad\;\;\; \left.
+ (j+1)~f\left[\gamma^{(\ell)}\left(\boldsymbol{\varphi}^{(\ell)}|\boldsymbol{\Upsilon}^{(\ell)}\right),\xi^{(\ell)}\right]\right), \end{aligned}}} $$ whereas \(T_{D}\left (\boldsymbol {\varphi }^{(\ell)}, \Gamma _{\alpha }^{(\ell)}, \cdots, \Gamma _{\alpha }^{(L)}\right)\) given by (29) turns into T D (φ(ℓ)|(σ(ℓ−1),Υ(ℓ)))=Tu(φ(ℓ))ϕ(φ(ℓ)|(σ(ℓ−1),Υ(ℓ))), where $$ \begin{aligned} \phi\left({\boldsymbol{\varphi}^{(\ell)}}|\left({\boldsymbol{\sigma}^{(\ell - 1)}},\boldsymbol{\Upsilon}^{(\ell)}\right)\right) \triangleq \sum\limits_{j = 0}^{L-\ell} \left[{\vphantom{\sum\limits_{j = 0}^{L-\ell}}}(j+1) \left(1 - \Psi_{r^{(\ell)}} \left(g\left(\Gamma_{\alpha}^{(\ell-1)},\xi^{(\ell)}\right) + (j+1)f\left[\gamma^{(\ell)}\left(\boldsymbol{\varphi}^{(\ell)}|\boldsymbol{\Upsilon}^{(\ell)}\right),\xi^{(\ell)}\right] \right) \right)\right.\\ \cdot \left. \prod\limits_{k = 0}^{j} \Psi_{r^{(\ell)}} \left(g\left(\Gamma_{\alpha}^{(\ell-1)},{\xi^{(\ell)}} \right) + k f\left[{{\gamma }^{(\ell)}}\left({\boldsymbol{\varphi}^{(\ell)}}|\boldsymbol{\Upsilon}^{(\ell)}\right),{\xi^{(\ell)}}\right] \right)\right].\\ \blacksquare \end{aligned} $$ It is worth noting that: (i) in view of the normalization by the OFDM signal bandwidth W, the EGP in (32) can be read as a spectral efficiency metric measured in bit/s/Hz; (ii) due to (31), it is apparent that the EGP depends on the MCS only, which has to be optimized according to the AMC optimization problem (OP) outlined in the next section. Goodput-oriented-AMC (GO-AMC) OP The AMC OP whose objective function is given by the EGP (32) is summarized in the following proposition. [GO-AMC] The GO-AMC OP consists at each PR ℓ in searching for the best MCS \(\boldsymbol {\varphi }^{(\ell)}_{\mathrm {o}}\) that maximizes the EGP (32) according to $$ \begin{array}{ll} \boldsymbol{\varphi}^{(\ell)}_{\mathrm{o}} = & \mathop {\arg \max}\limits_{\boldsymbol{\varphi}} \left\{\zeta^{(\ell)}\left(\boldsymbol{\varphi} | \left(\boldsymbol{\sigma}^{(\ell-1)},\boldsymbol{\Upsilon}^{(\ell)}\right)\right)\right\} \\ \text{s.t.} & \boldsymbol{\varphi} \in {\mathcal{D}}_{\boldsymbol{\varphi}} \\ \end{array}. $$ The OP (35) can be easily solved through an exhaustive search over all the pairs of modulation order and coding rate \(\boldsymbol {\varphi } \in {\mathcal {D}}_{\boldsymbol {\varphi }}\). Since all the quantities to be evaluated have a closed-form expression, it can be pointed out that the complexity of the GO-AMC OP simply reduces to \(\mathcal {O}(|\mathcal {D}_{\boldsymbol {\varphi }}|) = \mathcal {O}(|\mathcal {D}_{m}|\cdot |\mathcal {D}_{r}|)\), i.e., linear in the number of allowable MCS pairs. Numerical simulation tests have been carried out over typical wireless links between a generic eNodeB-UE pair to verify the effectiveness of the proposed LA algorithm when the proposed HARQ scheme is applied. The lists of parameters/features of both the MIMO BIC-OFDM system and the wireless channel adopted for the simulations are reported in Tables 1 and 2, respectively, whereas Table 3 reports the list of acronyms. In the following, for simplicity and w.l.g., we assume the header size Nh=0, so that the number of bits to encode becomes Ns=Np+NCRC; see Table 1. Specifically, we consider an LTE-compliant eNodeB based on turbo parallel concatenated convolution code (PCCC) with mother code rate 1/3 and rate-matching mechanism [33], giving rise to the equivalent coding rates listed in the set \(\mathcal {D}_{r}\) of Table 1.
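Before turning to the results, the linear complexity of the GO-AMC search in (35) can be made concrete with the following Python sketch (illustrative only; the EGP callable is a stand-in for (32)–(34), not the paper's implementation): it enumerates the MCS pairs and returns the EGP-maximizing one in \(\mathcal{O}(|\mathcal{D}_{m}|\cdot|\mathcal{D}_{r}|)\) evaluations.

```python
from itertools import product

def go_amc(egp, D_m, D_r):
    """Exhaustive GO-AMC search of (35): pick the MCS maximizing the EGP.

    egp : callable (m, r) -> expected goodput zeta^{(l)} for the current
          state (sigma^{(l-1)}, Upsilon^{(l)}), e.g., built from (32)-(34)
    D_m : allowable modulation orders; D_r : allowable coding rates
    Complexity is O(|D_m| * |D_r|), linear in the number of MCS pairs.
    """
    return max(product(D_m, D_r), key=lambda mcs: egp(*mcs))

# toy usage with a hypothetical EGP surface (not the paper's expression):
# reward a high rate*order product until an SNR-dependent penalty kicks in
snr = 10.0
egp = lambda m, r: m * r * max(0.0, 1.0 - (m * r) ** 2 / snr)
print(go_amc(egp, D_m=(2, 4, 6), D_r=(1 / 3, 1 / 2, 2 / 3, 3 / 4)))
```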
Numerical results
Numerical simulation tests have been carried out over typical wireless links between a generic eNodeB-UE pair to verify the effectiveness of the proposed LA algorithm when the proposed HARQ scheme is applied. The parameters/features of both the MIMO BIC-OFDM system and the wireless channel adopted for the simulations are reported in Tables 1 and 2, respectively, whereas Table 3 reports the list of acronyms. In the following, for simplicity and w.l.g., we assume the header size Nh=0, so that the number of bits to encode becomes Ns=Np+NCRC; see Table 1. Specifically, we consider an LTE-compliant eNodeB based on a turbo parallel concatenated convolutional code (PCCC) with mother code rate 1/3 and rate matching mechanism [33], giving rise to the equivalent coding rates listed in the set \(\mathcal {D}_{r}\) of Table 1.
The performance metric is evaluated as a function of the average symbol energy-to-noise spectral density ratio E_s/N_0, and obtained by averaging over 10³ independent channel realizations. The performance of the proposed algorithm is compared against that of the best known LA algorithms published in the literature that also account for the HARQ mechanism, as outlined hereafter. The benchmark algorithm [20], tagged as HARQ EESM (H-EESM), selects the best MCS that maximizes the EGP by exploiting the EESM method to predict the link performance. The second one, tagged as HARQ MIESM (H-MIESM), was originally suggested in the introduction of [21], though neither analysis nor performance results are shown therein. The H-MIESM algorithm uses the same method as [20] but employs the MIESM as the LPP method. Further, as described in [34], in order to account for receiver implementation non-idealities, the H-MIESM obtains the actual ESNR value \(\bar \gamma _{\text {MIESM}}\) by correcting the MIESM-based ESNR γMIESM by a constant value \(\gamma ^{(m)}_{\mathrm {c}}\), \(\forall m \in \mathcal {D}_{m}\), depending on the modulation order, i.e., \(\left.\bar \gamma _{\text {MIESM}}\right |_{\text {dB}} = \left.\gamma _{\text {MIESM}}\right |_{\text {dB}}- \left.\gamma ^{(m)}_{\mathrm {c}}\right |_{\text {dB}}\).
Table 1 Parameters and features of the HARQ-based MIMO BIC-OFDM system
Table 2 Parameters and features of the wireless propagation channel model
Table 3 List of acronyms
Figure 5 depicts the actual normalized GP, i.e., the number of error-free received information bits per second per Hz, for the GO-AMC approach employing the proposed αESM method against the H-MIESM and H-EESM, for the single-input single-output (SISO) case. It is apparent that the αESM outperforms both the H-MIESM and H-EESM, offering gains of about 4 and 7.5 dB w.r.t. the former and the latter, respectively, at 4 bit/s/Hz. In particular, the αESM approach, when compared with the H-MIESM, shows a considerable gain in the medium-SNR region, thanks to a more flexible AMC strategy that allows changing the MCS among different retransmissions of the same failed packet. On the other hand, in the low- and high-SNR regions there is no room for improvement, as both strategies select the most and the least efficient MCS, respectively, thus achieving the same performance. However, the αESM, w.r.t. the H-MIESM, has the appealing property of having a closed-form solution, thereby striking a trade-off between efficiency and complexity. The gain of the proposed αESM method scales with the number of antennas as well, as shown in Figs. 6 and 7 for the SM-MIMO configurations 4×4 and 8×8, respectively. Specifically, a gain of around 12 dB is obtained for the 8×8 scheme at 30 bit/s/Hz. The number of resources (i.e., subchannels) available at each PR increases significantly with the number of antennas. Since, unlike the other methods, the αESM is applied at each PR, it is able to exploit this increase in resources, enabling higher GP levels which scale with the number of antennas.
Fig. 5 GO-AMC performance comparison for the SISO case
Fig. 6 GO-AMC performance comparison for the MIMO 4×4 case
In order to shed light on the improved performance achieved by the proposed method,
Fig. 8 quantifies the complementary cumulative distribution function (CCDF) of the discrete RV \(\xi ^{(\ell)} \triangleq r^{(\ell)} \cdot m^{(\ell)}\), which is the data rate per subcarrier, related to the selected MCS at each PR, for the αESM and H-MIESM, at E_s/N_0 = 8.8 dB in the SISO case with uniform bit loading. At the first PR, the probability of selecting a more spectrally efficient MCS, i.e., a higher data rate, is slightly greater for the αESM. In the following PRs, this probability increases even further for the αESM, which has the possibility to change the MCS on a per-PR basis given the current "memory" σ(ℓ−1) and CSI. Conversely, as previously observed, the H-MIESM also applies the recombination mechanism, but it keeps the initial MCS along all the retransmissions of the same packet, thus proving more conservative. The above can be considered the key reason why the actual GP obtained by the proposed, more flexible αESM is considerably greater, as corroborated by the previous Figs. 5, 6 and 7.
Fig. 8 Per-round CCDF of the per-subcarrier data rate
Finally, we outline the computational complexity of the proposed method. To this end, we take as references the proposed αESM method and the H-EESM, since they represent the best and worst case, respectively, as apparent from Figs. 5, 6 and 7. The H-EESM method has a closed form and is based on the logarithm of a sum of negative exponential functions, as can be seen in Eq. (15) of [20]. Also the αESM method, for a given PR, has a closed form and is based on the logarithm of a sum of negative exponential functions, as can be seen from Eq. (8) and the recursive Eq. (22). The complexity required by the recursive equation can be considered negligible when compared to the evaluation of the corresponding ESNR, in that the functions g(·,·) and f(·,·) defined in (24) can be properly calculated using a look-up table. Therefore, their computational complexities at each PR can be considered comparable. The only difference is that, while the H-EESM is evaluated only at the first PR, the αESM is re-evaluated PR by PR. Thus, its complexity increases linearly with the number of PRs. Since this number is limited (usually below 10), the complexity increase w.r.t. the H-EESM is less than (or at most equal to) one order of magnitude, but comes with a great gain in performance, as previously shown.
Conclusions
This paper presented an innovative cross-layer LPP methodology, named αESM, suited for packet-oriented MIMO BIC-OFDM transmissions, which accounts for CSI, practical MCSs, and HARQ. The proposed αESM suitably extends the κESM method so as to also account for the HARQ mechanism with bit-level combining at the receiver. The proposed LPP method gives an accurate closed-form solution and enables a flexible LA strategy, where at each PR the MCS that maximizes the GP performance at the UE is selected based on the information about the past transmissions and the actual CSI. In particular, the formulation of the GP at the transmitter, named EGP, is derived resorting to renewal theory. Simulation results carried out over realistic wireless channels demonstrate that the LA strategy based on the αESM method outperforms the best known algorithms proposed so far, providing gains of about 5 and 7.5 dB in SISO and up to 11 dB in MIMO configurations, respectively.
An interesting follow-up of this work consists in moving the focus from the LPP method itself, applied here to the reference conventional (MIMO)-OFDM system, to its extension to more advanced transmission schemes such as (MIMO)-OFDM with spatial modulation [35], index mapping [36], or UFMC [37].
A. Proof of Theorem 1
In order to prove Theorem 1, two different cases are taken into account. Let us start with the case in which the coding rate is monotonically increasing up to the ℓth PR, i.e., r(j)>r(j−1), 1≤j≤ℓ, and introduce the notation \(\mathbf{1}_x\) (\(\mathbf{0}_x\)), denoting an x-sized vector whose entries are all set to 1 (0). Besides, denoting as \(N_{\mathrm {c}}^{(j)} \triangleq N_{s}/r^{(j)}\), with \(N_{\mathrm {c}}^{(j)} \le N_{\mathrm {c}}^{(j-1)}\), the number of coded bits transmitted at the jth PR, 1≤j≤ℓ, the set \(\mathcal {Q}^{(\ell)}\) containing all the possible puncturing patterns up to the ℓth PR can be written as
$$ {\mathcal{Q}}^{(\ell)} \triangleq \left\{\big[\underbrace{{1}_{\ell}^{T},{0}_{0}^{T}}_{{q}_{0}}\big]^{T}, \big[\underbrace{{1}_{\ell-1}^{T},{0}_{1}^{T}}_{{q}_{1}}\big]^{T}, \cdots, \big[\underbrace{{1}_{1}^{T},{0}_{\ell-1}^{T}}_{{q}_{\ell-1}}\big]^{T}\right\}, $$
in that a given coded bit can be transmitted at PRs 1,2,⋯,ℓ (pattern \(q_0\)), or at PRs 1,2,⋯,ℓ−1 (pattern \(q_1\)), and so on, or only at PR 1 (pattern \(q_{\ell-1}\)). Hence, defining \(p^{\ell }_{j} \triangleq \Pr \left \{ {q}_{k}^{(\ell)} = {q}_{j}\right \}\), with \(p^{\ell }_{j} \in \mathcal {P}^{(\ell)}\), as the probability that the kth coded bit is punctured at the ℓth PR using the pattern \(q_j\), 0≤j≤ℓ−1, it can be verified that the set of the probabilities can be represented as
$$ {\mathcal{P}}^{(\ell)} \triangleq \left\{ \frac{N_{\mathrm{c}}^{(\ell)}}{N_{\mathrm{c}}^{(1)}}, \frac{N_{\mathrm{c}}^{(\ell-1)}-N_{\mathrm{c}}^{(\ell)}}{N_{\mathrm{c}}^{(1)}}, \cdots, \frac{N_{\mathrm{c}}^{(1)}-N_{\mathrm{c}}^{(2)}}{N_{\mathrm{c}}^{(1)}} \right\}. $$
Now, let us prove (22) of Theorem 1 by induction. It can be easily verified that the expression holds for ℓ=1,2. Therefore, assuming it holds at the ℓth PR, at the (ℓ+1)th PR we can write
$$ {{\begin{aligned} \Gamma_{\alpha}^{(\ell+1)} = &-\log \left\{ \frac{N_{\mathrm{c}}^{(\ell+1)}}{N_{\mathrm{c}}^{(1)}} \mathrm{e}^{-\sum_{j=1}^{\ell+1}{\gamma^{(j)}}}\right.\\ & \left.+ \sum\limits_{k=2}^{\ell+1}{\left(\frac{N_{\mathrm{c}}^{(k-1)}-N_{\mathrm{c}}^{(k)}}{N_{\mathrm{c}}^{(1)}}\right) \mathrm{e}^{-\sum_{j=1}^{k-1}{\gamma^{(j)}}}} \right\}, \end{aligned}}} $$
that after some algebra can be rearranged as
$$ \begin{aligned} \Gamma _{\alpha}^{(\ell + 1)} &= - \log \left\{ {\frac{{N_{\mathrm{c}}^{(\ell + 1)}}}{{N_{\mathrm{c}}^{(1)}}}{{\mathrm{e}}^{- \sum\limits_{j = 1}^\ell {{{\gamma }^{(j)}}} }}\left({{\mathrm{e}^{- {{ \gamma }^{(\ell + 1)}}}} - 1} \right) +} \right. \\ &{\left.
{\frac{{N_{\mathrm{c}}^{(\ell)}}}{{N_{\mathrm{c}}^{(1)}}}{{\mathrm{e}}^{- \sum\limits_{j = 1}^\ell {{{\gamma }^{(j)}}} }} + \sum\limits_{k = 2}^\ell {\left({\frac{{N_{\mathrm{c}}^{(k - 1)} - N_{\mathrm{c}}^{(k)}}}{{N_{\mathrm{c}}^{(1)}}}} \right){\mathrm{e}^{- \sum\limits_{j = 1}^{k - 1} {{{\gamma }^{(j)}}} }}}} \right\}.} \\ \end{aligned} $$
Then, considering that the last two terms within the curly brackets of (39) correspond to \(\mathrm {e}^{-\Gamma _{\alpha }^{(\ell)}}\), and \(\frac {\mathrm {e}^{-\sum _{j=1}^{\ell }{\gamma ^{(j)}}}} { \mathrm {e}^{-\Gamma _{\alpha }^{(\ell)}}} \le 1\) as the coding rate is increasing, we end up with
$$\begin{array}{@{}rcl@{}} \Gamma_{\alpha}^{(\ell+1)} \ge \Gamma_{\alpha}^{(\ell)} - \log \left[ 1+ \frac{R^{(\ell)}}{r^{(\ell+1)}} \left(\mathrm{e}^{-\gamma^{(\ell+1)}} -1 \right) \right], \end{array} $$
where we exploit the relationship R(ℓ)=r(1) due to (23) and the assumption of increasing coding rate.
In the case the coding rate is not increasing up to the ℓth PR, i.e., r(j)≤r(j−1), 1≤j≤ℓ, the set of puncturing patterns at the ℓth PR turns into
$$ {\mathcal{Q}}^{(\ell)} \triangleq \left\{\big[\underbrace{{0}_{0}^{T},{1}_{\ell}^{T}}_{{q}_0}\big]^{T}, \big[\underbrace{{0}_{1}^{T},{1}_{\ell-1}^{T}}_{{q}_1}\big]^{T}, \cdots, \big[\underbrace{{0}_{\ell-1}^{T},{1}_{1}^{T}}_{{q}_{\ell-1}}\big]^{T}\right\}, $$
with probabilities
$$ {\mathcal{P}}^{(\ell)} \triangleq \left\{ \frac{N_{\mathrm{c}}^{(1)}}{N_{\mathrm{c}}^{(\ell)}}, \frac{N_{\mathrm{c}}^{(2)}-N_{\mathrm{c}}^{(1)}}{N_{\mathrm{c}}^{(\ell)}}, \cdots, \frac{N_{\mathrm{c}}^{(\ell)}-N_{\mathrm{c}}^{(\ell-1)}}{N_{\mathrm{c}}^{(\ell)}} \right\}. $$
Therefore, following the same procedure as above, at the (ℓ+1)th PR we can write
$$ \begin{aligned} \Gamma_\alpha^{(\ell+1)} = &-\log \left\{ \frac{N_{\mathrm{c}}^{(1)}}{N_{\mathrm{c}}^{(\ell+1)}} \mathrm{e}^{-\sum_{j=1}^{\ell+1}{\gamma^{(j)}}}\right.\\ &\left.+ \sum_{k=2}^{\ell+1}{\left(\frac{N_{\mathrm{c}}^{(k)}-N_{\mathrm{c}}^{(k-1)}}{N_{\mathrm{c}}^{(\ell+1)}}\right) \mathrm{e}^{-\sum_{j=k}^{\ell+1}{\gamma^{(j)}}}} \right\}, \end{aligned} $$
that after some algebra can be rearranged as
$$ \begin{aligned} \Gamma_\alpha^{(\ell+1)} = \gamma^{(\ell+1)} - \log \left\{ \frac{N_{\mathrm{c}}^{(\ell)}}{N_{\mathrm{c}}^{(\ell+1)}} \left[ \frac{N_{\mathrm{c}}^{(\ell+1)} - N_{\mathrm{c}}^{(\ell)}}{N_{\mathrm{c}}^{(\ell)}} + \frac{N_{\mathrm{c}}^{(1)}}{N_{\mathrm{c}}^{(\ell)}} \mathrm{e}^{-\sum_{j=1}^{\ell}{\gamma^{(j)}}} + \sum_{k=2}^{\ell} \left(\frac{N_{\mathrm{c}}^{(k)}-N_{\mathrm{c}}^{(k-1)}}{N_{\mathrm{c}}^{(\ell)}}\right) \mathrm{e}^{-\sum_{j=k}^{\ell}{\gamma^{(j)}}} \right] \right\}. \end{aligned} $$
Then, considering that the last two terms within the square brackets of (44) correspond to \(\mathrm {e}^{-\Gamma _{\alpha }^{(\ell)}}\), we end up with
$$ \Gamma_\alpha^{(\ell+1)} = \gamma^{(\ell+1)} - \log \left[ 1+ \frac{r^{(\ell+1)}}{R^{(\ell)}} \left(\mathrm{e}^{-\Gamma_\alpha^{(\ell)}} -1 \right) \right], $$
where R(ℓ)=r(ℓ) due to the assumption of non-increasing coding rate.
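To make the two recursions just derived concrete, here is a minimal numerical sketch of the per-PR update: the lower bound (40) for an increasing coding rate and the exact recursion (45) for a non-increasing one. The function signature and the handling of R(ℓ) are illustrative, not taken from the paper's implementation.

```python
import math

def alpha_esm_next(Gamma_prev, gamma_next, r_next, R_prev):
    """One alpha-ESM update per packet round.
    If the coding rate increases (r_next > R_prev, with R_prev = r^(1) by (23)),
    Eq. (40) gives a lower bound on the new ESNR; otherwise (R_prev = r^(l))
    Eq. (45) holds with equality."""
    if r_next > R_prev:
        # Eq. (40): Gamma^(l+1) >= this lower bound
        return Gamma_prev - math.log(1.0 + (R_prev / r_next) * (math.exp(-gamma_next) - 1.0))
    # Eq. (45): exact recursion for a non-increasing coding rate
    return gamma_next - math.log(1.0 + (r_next / R_prev) * (math.exp(-Gamma_prev) - 1.0))
```

Both branches are a single log/exp evaluation, which is consistent with the complexity discussion in the main text: the recursive bookkeeping is negligible next to the ESNR evaluation itself.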
B. Proof of Theorem 3
In order to prove Theorem 3, let us first map the quantities the renewal-reward theorem relies on, that is, the interarrival times \(X_{i}^{(\ell)}\) and rewards \(Z_{i}^{(\ell)}\), to the system under analysis. The ith interarrival time can be written as
$$ X^{(\ell)}_i=T_{\mathrm{f}}(\ell-1)+\sum\limits_{j=\ell}^{\ell_i}T_{\mathrm{u}}(\boldsymbol{\varphi}^{(j)}), $$
where the first term on the right-hand side (RHS) is the time elapsed over the previous ℓ−1 failed transmissions, which is a known quantity at the ℓth PR; Tu(φ(j)) is defined in Eq. (30), whereas \(\ell \le \ell_i \le L\) is a RV depending on the number of packet transmissions after which the renewal event happens. Besides, since we are interested in correctly receiving the Np information bits out of the \(N_{\mathrm {c}}^{(\ell)}\) transmitted ones, the reward is \(Z^{(\ell)}_{i}=N_{\mathrm {p}}/W\) if the renewal is due to a successful decoding; otherwise \(Z^{(\ell)}_{i}=0\).
Before proceeding further, let us introduce \(\mathcal {A}_{k}\) as the event of receiving an ACK at round k, \(\bar {\mathcal {A}}_{k}\) as the event of receiving a NACK at round k, and \(\mathcal {R}_{k} \) as the event of having a renewal event after round k. Accordingly, the probability of \(\mathcal {R}_{k}\) is
$$ \Pr\{{\mathcal{R}}_{k}\} \triangleq \Pr \left\{ {\bar{\mathcal{A}}_1}, \cdots,{\bar{\mathcal{A}}_{k - 1}},{{\mathcal{A}}_k}\right\}, $$
and, since a renewal event always happens when the retry limit L is reached,
$$ \Pr({\mathcal{R}}_{L}) = 1 - \sum\limits_{k=1}^{{L}-1} \Pr({\mathcal{R}}_k). $$
On the other hand, defining \({\mathcal {N}}_{k}\) as the event of not receiving ACKs in k attempts, with 1≤k≤L, the more manageable probability \(\Pr (\mathcal {N}_{k})\) can be introduced,
$$ \Pr ({\mathcal{N}}_k) \triangleq \Pr \{ {\bar{\mathcal{A}}_1}, \cdots,{\bar{\mathcal{A}}_k}\} = 1- \sum\limits_{j=1}^k \Pr({\mathcal{R}}_j). $$
It easily follows that
$$ \Pr({\mathcal{R}}_k)=\Pr({\mathcal{N}}_{k-1})-\Pr({\mathcal{N}}_k) $$
with \(\Pr ({\mathcal {N}}_{0}) \triangleq 1\). Therefore, in order to evaluate (26), we get
$$ \begin{aligned} \mathrm{E}\left\{Z^{(\ell)}_{i}\right\}&= \frac{N_{\mathrm{p}}}{W}\cdot \left\{1-\Pr({\mathcal{N}}_{L})\right\} + 0 \cdot \Pr({\mathcal{N}}_{L})\\ &= \frac{N_{\mathrm{p}}}{W}\left\{1-P_{\text{UPD}}(L-\ell)\right\} \end{aligned} $$
where \(P_{\text {UPD}}(L-\ell)\triangleq \left.\Pr ({\mathcal {N}}_{\ell +j})\right |_{j=L-\ell }\) stands for the probability of not receiving an ACK within the remaining L−ℓ PRs, and
$$ \begin{aligned} &\mathrm{E}\left\{X^{(\ell)}_{i}\right\}=T_{\mathrm{f}}(\ell-1)+{\sum\limits_{j=0}^{L-\ell} T_{\mathrm{u}}\left(\boldsymbol{\varphi}^{(\ell+j)}\right) \Pr({\mathcal{R}}_{\ell+j})}\\ &=T_{\mathrm{f}}(\ell-1)+{\sum\limits_{j=0}^{L-\ell} T_{\mathrm{u}}\left(\boldsymbol{\varphi}^{(\ell+j)}\right) \left[\Pr({\mathcal{N}}_{\ell+j-1})-\Pr({\mathcal{N}}_{\ell+j})\right]}, \end{aligned} $$
$$ {\begin{aligned} \Pr({\mathcal{N}}_{\ell+j})=\prod\limits_{k=0}^j \mathrm{E}_{\boldsymbol{\Upsilon}^{(\ell+k)}} \left\{\Psi_{r^{(\ell+k)}}\left(\Gamma_{\alpha}^{(\ell+k)}\left(\boldsymbol{\varphi}^{(\ell+k)}| \left(\boldsymbol{\sigma}^{(\ell+k-1)},\boldsymbol{\Upsilon}^{(\ell+k)}\right)\right) \right)\right\}. \end{aligned}} $$
Evaluation of (53) would require the knowledge of the channel p.d.f. for all the possible cases of interest, which is unrealistic in practice.
Therefore, as usual in these cases [20, 21], let us adopt the long-term static channel assumption given as A3, i.e., the packet experiences the current channel conditions Υ(ℓ) throughout its possible future retransmissions. It follows that \(\mathrm {E}_{\boldsymbol {\Upsilon }^{(\ell +k)}}\left \{\Psi _{r^{(\ell +k)}}\left (\Gamma _{\alpha }^{(\ell +k)}\left (\boldsymbol {\varphi }^{(\ell +k)}|\left (\boldsymbol {\sigma }^{(\ell +k-1)},\boldsymbol {\Upsilon }^{(\ell +k)}\right)\right) \right)\right \}\) is replaced by \(\Psi _{r^{(\ell)}}\left (\Gamma _{\alpha }^{(\ell +k)}\left (\boldsymbol {\varphi }^{(\ell)}|\left (\boldsymbol {\sigma }^{(\ell -1)},\boldsymbol {\Upsilon }^{(\ell)}\right)\right) \right)\) in (53), and, accordingly, φ(ℓ+j)=φ(ℓ), implying Tu(φ(ℓ+j))=(j+1)Tu(φ(ℓ)), ∀j∈{0,⋯,L−ℓ}. Finally, upon plugging (51)–(53) in (26) after the substitutions listed above, the EGP formulation (27) follows.
Footnotes
From now on, without loss of generality (w.l.g.), the term "packet" means "RLC-PDU packet".
In (12), only the bits differing in the codewords b(ℓ) and b(ℓ)′ have a non-zero bit score.
References
R Andreotti, Adaptive techniques for packet-oriented transmissions in future multicarrier wireless systems, PhD thesis. http://etd.adm.unipi.it. Accessed 29 Dec 2017.
G Caire, G Taricco, E Biglieri, Bit-interleaved coded modulation. IEEE Trans. Inf. Theory 44(3), 927–946 (1998).
H Bolcskei, D Gesbert, AJ Paulraj, On the capacity of OFDM-based spatial multiplexing systems. IEEE Trans. Commun. 50(2), 225–234 (2002).
ST Chung, AJ Goldsmith, Degrees of freedom in adaptive modulation: a unified view. IEEE Trans. Commun. 49(9), 1561–1571 (2001).
DJ Costello, J Hagenauer, H Imai, SB Wicker, Applications of error-control coding. IEEE Trans. Inf. Theory 44(6), 2531–2560 (1998).
J Wannstrom, LTE-Advanced. http://www.3gpp.org/technologies/keywords-acronyms/97-lte-advanced. Accessed 29 Dec 2017.
M Agiwal, A Roy, N Saxena, Next generation 5G wireless networks: a comprehensive survey. IEEE Commun. Surv. Tutorials 18(3), 1617–1655 (2016).
YJ Guo, Advances in Mobile Radio Access Networks (Artech House Publishers, Boston-London, 2004).
JF Cheng, in Proceedings of the 21st Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications. Coding performance of HARQ with BICM – part I: unified performance analysis (Istanbul, 2010), pp. 976–981.
G Caire, D Tuninetti, The throughput of hybrid-ARQ protocols for the Gaussian collision channel. IEEE Trans. Inf. Theory 47(5), 1971–1988 (2001).
P Wu, N Jindal, Performance of hybrid-ARQ in block-fading channels: a fixed outage probability analysis. IEEE Trans. Commun. 58(4), 1129–1141 (2010).
A Chuang, A Guillén i Fàbregas, LK Rasmussen, IB Collings, Optimal throughput-diversity-delay tradeoff in MIMO ARQ block-fading channels. IEEE Trans. Inf. Theory 54(9), 3968–3986 (2008).
B Makki, T Eriksson, On hybrid ARQ and quantized CSI feedback schemes in quasi-static fading channels. IEEE Trans. Commun. 60(4), 986–997 (2012).
L Szczecinski, SR Khosravirad, P Duhamel, M Rahman, Rate allocation and adaptation for incremental redundancy truncated HARQ. IEEE Trans. Commun. 61(6), 2580–2590 (2013).
TVK Chaitanya, EG Larsson, Outage-optimal power allocation for hybrid ARQ with incremental redundancy. IEEE Trans. Wirel. Commun. 10(7), 2069–2074 (2011).
W Rui, VKN Lau, Combined cross-layer design and HARQ for multiuser systems with outdated channel state information at transmitter (CSIT) in slow fading channels. IEEE Trans. Wirel. Commun. 7(7), 2771–2777 (2008).
ZKM Ho, VKN Lau, RSK Cheng, Cross-layer design of FDD-OFDM systems based on ACK/NAK feedbacks. IEEE Trans. Inf. Theory 55(10), 4568–4584 (2009).
P Zhang, Y Miao, Y Zhao, in Proceedings of the 2013 IEEE Wireless Communications and Networking Conference (WCNC). Cross-layer design of AMC and truncated HARQ using dynamic switching thresholds (Shanghai, 2013), pp. 906–911.
N Ksairi, P Ciblat, CJL Martret, Near-optimal resource allocation for type-II HARQ based OFDMA networks under rate and power constraints. IEEE Trans. Wirel. Commun. 13(10), 5621–5634 (2014).
S Liu, X Zhang, W Wang, in Proceedings of the 2006 First International Conference on Communications and Electronics. Analysis of modulation and coding scheme selection in MIMO-OFDM systems (Hanoi, 2006), pp. 240–245.
J Meng, EH Yang, Constellation and rate selection in adaptive modulation and coding based on finite blocklength analysis and its application to LTE. IEEE Trans. Wirel. Commun. 13(10), 5496–5508 (2014).
K Brueninghaus, D Astely, T Salzer, S Visuri, A Alexiou, S Karger, G-A Seraji, in Proceedings of the 16th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2005). Link performance models for system level simulations of broadband radio access systems, vol. 4 (Berlin, 2005).
I Stupia, V Lottici, F Giannetti, L Vandendorpe, Link resource adaptation for multiantenna bit-interleaved coded multicarrier systems. IEEE Trans. Signal Process. 60(7), 3644–3656 (2012).
K Sigman, Lecture Notes on Stochastic Modeling I - Introduction to renewal theory (Columbia University, New York, 2009). http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-RRT.pdf.
D Tse, P Viswanath, Fundamentals of Wireless Communication (Cambridge University Press, Cambridge, 2005).
L Xiao, M Johansson, SP Boyd, Simultaneous routing and resource allocation via dual decomposition. IEEE Trans. Commun. 52(7), 1136–1144 (2004).
L Song, NB Mandayam, Hierarchical SIR and rate control on the forward link for CDMA data users under delay and error constraints. IEEE J. Sel. Areas Commun. 19(10), 1871–1882 (2001).
A Guillén i Fàbregas, A Martinez, G Caire, Bit-Interleaved Coded Modulation (Foundations and Trends in Communications and Information Theory) (Now Publishers Inc., Breda, 2008).
EW Jang, J Lee, H-L Lou, JM Cioffi, On the combining schemes for MIMO systems with hybrid ARQ. IEEE Trans. Wirel. Commun. 8(2), 836–842 (2009).
J Lee, H-L Lou, D Toumpakaris, E Jang, J Cioffi, Transceiver design for MIMO wireless systems incorporating hybrid ARQ. IEEE Commun. Mag. 47(1), 32–40 (2009).
A Martinez, A Guillén i Fàbregas, G Caire, Error probability analysis of bit-interleaved coded modulation. IEEE Trans. Inf. Theory 52(1), 262–271 (2006).
M Zorzi, RR Rao, On the use of renewal theory in the analysis of ARQ protocols. IEEE Trans. Commun. 44(9), 1077–1081 (1996).
3GPP technical specification 36.212 v12.0.0, Evolved universal terrestrial radio access (E-UTRA); multiplexing and channel coding (Release 12) (Sophia Antipolis, France, 2013).
J Meng, EH Yang, in Proceedings of the 2013 IEEE Wireless Communications and Networking Conference (WCNC). Constellation and rate selection in adaptive modulation and coding based on finite blocklength analysis (2013), pp. 4065–4070.
RY Mesleh, H Haas, S Sinanovic, CW Ahn, S Yun, Spatial modulation. IEEE Trans. Veh. Tech. 57(4), 2228–2241 (2008).
E Basar, On multiple-input multiple-output OFDM with index modulation for next generation wireless networks. IEEE Trans. Signal Proc. 64(15), 3868–3878 (2016).
G Wunder, P Jung, M Kasparick, T Wild, F Schaich, Y Chen, S ten Brink, I Gaspar, N Michailow, A Festag, L Mendes, N Cassiau, D Kténas, M Dryjanski, S Pietrzyk, P Eged, B Vago, F Wiedmann, 5GNOW: non-orthogonal, asynchronous waveforms for future mobile applications. IEEE Commun. Mag. 52(2), 97–105 (2014).
Acknowledgements
This work has been partially supported by the PRA 2016 research project 5GIOTTO funded by the University of Pisa and by the SVI.I.C.T.PRECIP. project, in the framework of Tuscany's "Programma Attuativo Regionale," co-funded by "Fondo per lo Sviluppo e la Coesione" (FSC) and Italy's Ministry for Education, University and Research (MIUR), Decreto Regionale n.3506, 28/07/2015. The authors would like to thank Prof. Luc Vandendorpe and Ivan Stupia, PhD, from Université catholique de Louvain, Louvain-la-Neuve, Belgium, for the fruitful discussions and their helpful suggestions. Part of this work was one of the subjects of the first author's PhD thesis (R Andreotti, cited in the reference list above).
Author information
Riccardo Andreotti: Wireless Systems Engineering and Research (WISER) S.r.l., Livorno, Italy.
Vincenzo Lottici and Filippo Giannetti: Department of Information Engineering, University of Pisa, Pisa, Italy.
Authors' contributions
RA worked on the derivation of both the link performance prediction method for HARQ-based MIMO BIC-OFDM systems and the link adaptation for EGP optimization. He also ran the numerical simulations which provided the numerical results. VL contributed to the introduction's background and to the bibliographical survey on related works. He also contributed to the analytical derivation of the link performance prediction method for HARQ-based MIMO BIC-OFDM systems and to the interpretation of the numerical results. FG provided the system model description and the definitions of the performance metrics for the proposed algorithms. He also contributed to the interpretation of and the comments on the numerical results. All authors read and approved the final manuscript.
Correspondence to Filippo Giannetti.
Citation
Andreotti, R., Lottici, V. & Giannetti, F. Cross-layer link adaptation for goodput optimization in MIMO BIC-OFDM systems. J Wireless Com Network 2018, 5 (2018). https://doi.org/10.1186/s13638-017-1008-y
Keywords: Orthogonal frequency division multiplexing (OFDM); Bit-interleaved coded modulation (BICM); Hybrid automatic-repeat-request (HARQ); Goodput; Link performance prediction; Link adaptation
Technical advance
On Jones et al.'s method for extending Bland-Altman plots to limits of agreement with the mean for multiple observers
Heidi S. Christensen (1,2,3), Jens Borgbjerg (4), Lars Børty (2) & Martin Bøgsted (1,2,3; ORCID: orcid.org/0000-0001-9192-1814)
BMC Medical Research Methodology volume 20, Article number: 304 (2020)
To assess the agreement of continuous measurements between a number of observers, Jones et al. introduced limits of agreement with the mean (LOAM) for multiple observers, representing how much an individual observer can deviate from the mean measurement of all observers. Besides the graphical visualisation of LOAM suggested by Jones et al., it is desirable to supply LOAM with confidence intervals and to extend the method to the case of multiple measurements per observer.
We reformulate LOAM under the assumption that the measurements follow an additive two-way random effects model. Assuming this model, we provide estimates and confidence intervals for the proposed LOAM. Further, this approach is easily extended to the case of multiple measurements per observer.
The proposed method is applied to two data sets to illustrate its use. Specifically, we consider agreement between measurements regarding tumour size and aortic diameter. For the latter study, three measurement methods are considered.
The proposed LOAM and the associated confidence intervals are useful for assessing agreement between continuous measurements.
Background
Clinical decisions regarding diagnosis or treatment are often based on one or more measured quantities such as blood pressure, tumour size, or the diameter of an aorta. To understand the limitations of using such measurements in clinical practice, it is important to quantify how much the measurements may vary. For almost three decades, Bland-Altman plots have been the standard method for graphical assessment of agreement between continuous measurements made by two observers or methods on a number of subjects [1]. In particular, Bland-Altman plots are often used to assess how well a new measurement method compares to a current standard method. However, if the goal is to assess the variability of measurements made by different observers, it is preferable to consider more than two observers. This prompted Jones et al. to suggest an extension of Bland-Altman's graphical method for assessing limits of agreement between two observers to the limits of agreement with the mean (LOAM) for multiple observers [2]. Jones et al.'s LOAM have the advantage that they quantify agreement between measurements on the same scale as the measurements themselves, in contrast to the intra-class correlation (ICC), which has no unit of measure and always takes values between 0 and 1.
In more detail, consider a study where a continuous quantity is observed on a subjects by b observers (or methods). We let \(y_{ij}\) denote an observation from a random variable \(Y_{ij}\), which models the measurement performed on the ith subject by the jth observer for i = 1, …, a and j = 1, …, b. Assuming no preferred observer, Jones et al. suggested assessing the agreement between measurements made by different observers by investigating how much the measurements vary around the subject-specific average [2]. More formally, they were interested in how much the differences \( {D}_{ij}={Y}_{ij}-{\overline{Y}}_{i\cdotp } \) are likely to vary, where \( {\overline{Y}}_{i\cdotp } \) denotes the average measurement for subject i across the b observers. For visualising the data, Jones et al.
propose to consider a plot of the observed differences \( {d}_{ij}={y}_{ij}-{\overline{y}}_{i\cdotp } \) against the observed subject-specific average \( {\overline{y}}_{i\cdotp } \). We will refer to this as an agreement plot. An agreement plot can, for example, help to detect whether the spread of the differences is associated with the size of the measurements, or, at least when a and b are not too large, whether some observers tend to always make large, small, or more varying measurements.
Fig. 1 Agreement plot for tumour size measurements in centimetres with the proposed 95% LOAM (dashed line) and associated 95% CI (shading)
Further, Jones et al. equipped the agreement plot with horizontal lines representing the estimated 95% LOAM, which are given by ±1.96s, where s is the estimate of the residual standard deviation in a two-way analysis of variance (ANOVA) including subject and observer as fixed effects. Thus, s is only a measure of the residual variation left after accounting for possible subject and observer effects. On one hand, if there is a non-negligible observer effect, this should be included in the variability of the differences \(d_{ij}\) when constructing the LOAM. On the other hand, in the (unrealistic) case of no variation due to observer, the 95% LOAM lines suggested by Jones et al. are biased and inefficiently estimated, as it would be customary to refit the ANOVA model without the adjustment for observer effect and adjust the degrees of freedom for s accordingly. In conclusion, although the method has gained increasing interest over the years, Jones et al. did not provide a way to: 1) assess the variation of the LOAM estimate, 2) integrate variation due to different observers, and 3) extend the method to multiple observations per observer.
In this paper, we suggest formalising Jones et al.'s approach under a simple two-way random effects model, which allows us to formulate a coherent statistical inference procedure for the LOAM. In addition, we provide not only an implementation in the statistical programming software R, but also simple formulae which can be implemented in, e.g., statistical programming languages, Excel, or automatic web-modules for data collection.
A revised version of the limits of agreement with the mean
We propose to derive LOAM assuming a random effects model for the measurements. Assuming a statistical model provides a theoretical framework in which the LOAM can be constructed in a transparent way and furthermore enables us to supply estimates and confidence intervals (CIs) for the LOAM.
Statistical model
In the following we assume the measurements follow a two-way random effects model given by
$$ {Y}_{ij}=\mu +{A}_i+{B}_j+{E}_{ij}, $$
where μ describes the overall mean, and \(A_i\), \(B_j\), and \(E_{ij}\) are independent random variables following zero-mean normal distributions with variances \( {\sigma}_A^2 \), \( {\sigma}_B^2 \), and \( {\sigma}_E^2 \), respectively. Under this model, measurements made by different observers are uncorrelated if they are on different subjects, while they are positively correlated with covariance \( {\sigma}_A^2 \) for the same subject. Further, the covariance between measurements made by the same observer for different subjects is \( {\sigma}_B^2 \). Note that the measurements are assumed to be homoscedastic, i.e., have common variance, where the common variance is given by \( {\sigma}_A^2+{\sigma}_B^2+{\sigma}_E^2. 
\) That is, the variance is split into three components: the inter-subject, inter-observer, and residual variance. Here we follow the convention of referring to the residual variance \( {\sigma}_E^2 \) as the intra-observer variance. Further, note that we assume a balanced data setup, where each observer has evaluated all the subjects.
Proposed limits of agreement with the mean
Under the two-way random effects model stated in Eq. (1), the difference between an individual measurement and the subject-specific mean, \(D_{ij}\), is normally distributed with mean zero and variance \( \left({\sigma}_B^2+{\sigma}_E^2\right)\left(b-1\right)/b \). Thus, under this model we expect 95% of these differences to be within the limits
$$ \pm 1.96\sqrt{\frac{b-1}{b}\left({\sigma}_B^2+{\sigma}_E^2\right)}. $$
We propose the above as the 95% LOAM. To estimate \( {\sigma}_B^2 \) and \( {\sigma}_E^2 \) under the suggested two-way random effects model, we use the unbiased and consistent ANOVA estimates (see, e.g., Chapter 4 of Searle et al. [3]), given by
$$ {\hat{\sigma}}_B^2=\frac{MSB- MSE}{a},\kern0.5em {\hat{\sigma}}_E^2= MSE, $$
where MSB = SSB/νB and MSE = SSE/νE, with \( SSB=a\times {\sum}_{j=1}^b{\left({\overline{y}}_{\cdot j}-{\overline{y}}_{\cdot \cdot}\right)}^2\ \mathrm{and}\ SSE={\sum}_{i=1}^a\ {\sum}_{j=1}^b{\left({y}_{ij}-{\overline{y}}_{i\cdotp }-{\overline{y}}_{\cdotp j}+{\overline{y}}_{\cdotp \cdotp}\right)}^2 \) denoting the sums of squares for the observer and residual term, and νB = b − 1 and νE = (a − 1)(b − 1). Further, \( {\overline{y}}_{i\cdotp } \), \( {\overline{y}}_{\bullet j} \), and \( {\overline{y}}_{\bullet \bullet } \) denote the subject-specific, observer-specific, and overall average, respectively.
Using the estimates of \( {\sigma}_B^2 \) and \( {\sigma}_E^2 \) from Eq. (3), we obtain the following estimate of the 95% LOAM:
$$ \pm 1.96\sqrt{\frac{SSB+ SSE}{N}}=\pm 1.96\sqrt{\frac{\sum_{i=1}^a{\sum}_{j=1}^b{\left({y}_{ij}-{\overline{y}}_{i.}\right)}^2}{N}}, $$
where N = ab is the total number of measurements. For comparison, Jones et al.'s estimate of the LOAM is given by
$$ \pm 1.96\sqrt{\frac{\sum_{i=1}^a{\sum}_{j=1}^b{\left({y}_{ij}-{\overline{y}}_{i.}-{\overline{y}}_{.j}+{\overline{y}}_{..}\right)}^2}{\nu_E}} = \pm 1.96\ {\hat{\sigma}}_E, $$
which does not include variation due to observers.
Instead of simply reporting the estimated LOAM given by Eq. (4), it is more informative to report CIs. However, as the distribution of the LOAM estimate is quite complicated, we only supply approximate CIs. Graybill and Wang propose a method for constructing (approximate) efficient CIs for linear combinations of variances [4]. To construct CIs for the LOAM in Eq. (2), we first use the method by Graybill and Wang to construct a CI for the term inside the square root of the LOAM. Next, that CI is transformed into a CI for the upper LOAM by taking the square root and then multiplying by 1.96 (see Additional file 1 for details). The resulting approximate (and asymmetric) 95% CI for the upper 95% LOAM is given by
$$ \left(1.96\sqrt{\left( SSB+ SSE-L\right)/N},1.96\sqrt{\left( SSB+ SSE+H\right)/N}\ \right), $$
where
$$ L=\sqrt{l_B^2{SSB}^2+{l}_E^2{SSE}^2},\kern1.75em H=\sqrt{h_B^2 SS{B}^2+{h}_E^2 SS{E}^2} $$
with \( {l}_x=1-1/{F}_{0.975;{\nu}_x,\infty } \) and \( {h}_x=1/{F}_{0.025;{\nu}_x,\infty }-1 \) for x = B and x = E (see Graybill and Wang for other choices of \(l_x\) and \(h_x\) [4]). Here \(F_{\alpha; m, n}\) is the α-quantile for the F-distribution with m numerator and n denominator degrees of freedom. A 95% CI for the lower 95% LOAM is simply obtained by negation of the end points of the CI for the upper LOAM, that is,
$$ \left(-1.96\sqrt{\left( SSB+ SSE+H\right)/N},-1.96\sqrt{\left( SSB+ SSE-L\right)/N}\right). $$
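For concreteness, the following is a minimal sketch of Eqs. (3)–(5) for a complete a×b matrix of single measurements. It uses the identity F_{q;ν,∞} = χ²_{q;ν}/ν to obtain the required F-quantiles; the function name and interface are ours for illustration and are not part of the accompanying R package.

```python
import numpy as np
from scipy.stats import chi2

def loam(y):
    """95% LOAM estimate (Eq. 4) and approximate CI for its upper limit (Eq. 5).
    y: a-by-b array, rows = subjects, columns = observers (one measurement each)."""
    a, b = y.shape
    N = a * b
    z = 1.96
    ybar_j = y.mean(axis=0)                        # observer-specific means
    SSB = a * np.sum((ybar_j - y.mean())**2)       # observer sum of squares
    SSE = np.sum((y - y.mean(axis=1, keepdims=True)
                    - ybar_j + y.mean())**2)       # residual sum of squares
    nu_B, nu_E = b - 1, (a - 1) * (b - 1)
    est = z * np.sqrt((SSB + SSE) / N)             # Eq. (4)
    # F_{q; nu, inf} equals chi2_{q; nu} / nu
    F = lambda q, nu: chi2.ppf(q, nu) / nu
    l_B, l_E = 1 - 1 / F(0.975, nu_B), 1 - 1 / F(0.975, nu_E)
    h_B, h_E = 1 / F(0.025, nu_B) - 1, 1 / F(0.025, nu_E) - 1
    L = np.sqrt(l_B**2 * SSB**2 + l_E**2 * SSE**2)
    H = np.sqrt(h_B**2 * SSB**2 + h_E**2 * SSE**2)
    ci = (z * np.sqrt((SSB + SSE - L) / N), z * np.sqrt((SSB + SSE + H) / N))
    return est, ci
```

The CI for the lower LOAM is obtained, as in the text, by negating and swapping the returned end points.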
Simulations under the two-way random effects model in Eq. (1) indicate that the coverage probability of the approximate CI is in reality quite close to the nominal 95% even with a low number of observers (see Figure 1 in Additional file 2).
Sample size calculations
When planning an agreement study, it is often desirable to investigate how many measurements are necessary to obtain a certain level of precision in terms of a specified width of the CI for the LOAM. From Eq. (5) it is clear that the values of L and H determine the width of the CI for the LOAM; specifically, the CI gets narrower as L and H approach zero. In turn, this happens when b is increased, since \(l_x\) and \(h_x\) approach zero when \(\nu_x\) increases, for both x = B and x = E. Thus, to obtain a higher precision we have to increase the number of observers, b, while it is not enough to increase the number of subjects.
Therefore, assume we have a fixed number of subjects a that we want to include in a future study to assess agreement between measurements. To determine the number of observers necessary to obtain a desired width W of the 95% CI, we require initial estimates of \( {\sigma}_B^2 \) and \( {\sigma}_E^2 \), say \( {\hat{\sigma}}_{B,0}^2 \) and \( {\hat{\sigma}}_{E,0}^2 \), which can be obtained from, e.g., a pilot study. Exploiting the relations \( SSE={\nu}_E{\hat{\sigma}}_E^2 \) and \( SSB={\nu}_B\times \left(a{\hat{\sigma}}_B^2+{\hat{\sigma}}_E^2\right) \), we can express the width of the CI in Eq. (5) in terms of the variance estimates rather than the sums of squares. Further, we let the estimates be given by the initial estimates \( {\hat{\sigma}}_{B,0}^2 \) and \( {\hat{\sigma}}_{E,0}^2 \), and set the width equal to W. That is, we want to solve the following equation with respect to b:
$$ W=\frac{1.96}{\sqrt{N}}\left(\sqrt{\ {\nu}_B\left(a{\hat{\sigma}}_{B,0}^2+{\hat{\sigma}}_{E,0}^2\right)+{\nu}_E{\hat{\sigma}}_{E,0}^2+{H}_0}-\sqrt{\ {\nu}_B\left(a{\hat{\sigma}}_{B,0}^2+{\hat{\sigma}}_{E,0}^2\right)+{\nu}_E{\hat{\sigma}}_{E,0}^2-{L}_0}\right), $$
where
$$ {L}_0=\sqrt{l_B^2{\nu}_B^2{\left(a{\hat{\sigma}}_{B,0}^2+{\hat{\sigma}}_{E,0}^2\right)}^2+{l}_E^2{\nu}_E^2{\left({\hat{\sigma}}_{E,0}^2\right)}^2},{H}_0=\sqrt{h_B^2{\nu}_B^2{\left(a{\hat{\sigma}}_{B,0}^2+{\hat{\sigma}}_{E,0}^2\right)}^2+{h}_E^2{\nu}_E^2{\left({\hat{\sigma}}_{E,0}^2\right)}^2}. $$
Note that νB, νE, lB, lE, hB, and hE all depend on b. The equation can then be solved numerically with respect to b to find the number of observers needed to obtain an expected width W of the 95% CI for the 95% LOAM.
Inference on the variance components
In order to assess the extent of the inter-subject, inter-observer, and intra-observer variations, we suggest considering a 95% CI for σA, σB, and σE, respectively. If the ANOVA estimate \( {\hat{\sigma}}_B^2>0 \), we simply estimate σB by \( {\hat{\sigma}}_B=\sqrt{{\hat{\sigma}}_B^2} \). Using the statistical delta method (see Additional file 3), we obtain the following approximate 95% CI for σB:
$$ {\hat{\sigma}}_B\pm \frac{1.96}{a{\hat{\sigma}}_B}\sqrt{\frac{{\left(a{\hat{\sigma}}_B^2+{\hat{\sigma}}_E^2\right)}^2}{2{\nu}_B}+\frac{{\left({\hat{\sigma}}_E^2\right)}^2\ }{2{\nu}_E}\kern0.5em }. 
$$
Results from a small simulation study, investigating how well the actual coverage of the approximate confidence interval matches the desired coverage probability and how this depends on b and the true values of σB and σE, can be found in the additional files (see Figure 2 in Additional file 2). In general, the approximation improves as b increases.
It might happen that the estimate \( {\hat{\sigma}}_B^2 \) is negative due to negative correlation between observations made by the same observer on different subjects, which would indicate a misspecification of the two-way random effects model formulated in Eq. (1). Negativity can also arise from sampling variation of the unbiased ANOVA estimates we have used in this paper. Although it is tempting to suggest setting \( {\hat{\sigma}}_B^2 \) to zero in such a case, this would introduce bias in the estimation. We therefore suggest reporting the negative estimates, and recommend that the researcher comment on the possibility of negatively correlated measurements and, if that does not seem realistic, assess whether the CIs are too wide to provide any clinically meaningful conclusion. It should be assessed whether more observers should be included to improve the precision of the estimate or whether the model is wrongly specified.
As the distribution of \( {\hat{\sigma}}_E^2 \) is known in closed form, an exact asymmetric 95% CI can easily be constructed for σE (see Additional file 3) and is given by
$$ \left({\hat{\sigma}}_E\sqrt{\frac{\nu_E}{\chi_{0.975;{\nu}_E}^2}},{\hat{\sigma}}_E\sqrt{\frac{\nu_E}{\chi_{0.025;{\nu}_E}^2}}\right), $$
where \( {\hat{\sigma}}_E=\sqrt{{\hat{\sigma}}_E^2} \) and \( {\chi}_{\alpha; {\nu}_E}^2 \) is the α-quantile of a χ2-distribution with νE degrees of freedom.
To provide some context for the scale of \( {\hat{\sigma}}_B \) and \( {\hat{\sigma}}_E, \) it may also be constructive to consider \( {\hat{\sigma}}_A=\sqrt{{\hat{\sigma}}_A^2} \), where \( {\hat{\sigma}}_A^2=\left( MSA- MSE\right)/b \) is the ANOVA estimate of \( {\sigma}_A^2 \), where MSA = SSA/νA with νA = a − 1 and \( SSA=b\ {\sum}_{i=1}^a{\left({\overline{y}}_{i\cdotp }-{\overline{y}}_{\cdotp \cdotp}\right)}^2 \). The estimate of σA may be accompanied by an (approximate) 95% CI, which can be constructed using the statistical delta method (see Additional file 3):
$$ {\hat{\sigma}}_A\pm \frac{1.96}{b{\hat{\sigma}}_A}\sqrt{\frac{{\left(b{\hat{\sigma}}_A^2+{\hat{\sigma}}_E^2\right)}^2}{2{\nu}_A}+\frac{{\left({\hat{\sigma}}_E^2\right)}^2\ }{2{\nu}_E}\ }. $$
Performing an agreement analysis
To investigate agreement between observers, we propose first to make the agreement plot with the estimate and CI for the 95% LOAM from Sections 2.1.2–2.1.3, and to calculate the empirical means and standard deviations for the measurements conditional on observer or subject. Inspection of the agreement plot and the empirical means across subjects, conditional on observer, can be used to reveal whether any observers tend to make unusually large or small measurements. Further, the agreement plot and the conditional empirical standard deviations can be used to check whether the homoscedasticity assumption of the random model is fulfilled. If the model in Eq. (1) is fitted using statistical software, it is often possible to extract residuals and predictions of the observer and subject effects, which can be used to check the model assumptions further.
Specifically, one may, e.g., consider plots of the residuals against the fitted values, observer number, and subject number, respectively, to further investigate the homoscedasticity assumption. Further, a normal quantile-quantile plot of the residuals, as well as of the predictions of the observer and subject effects, respectively, can be used to investigate the normality assumptions. However, if the number of observers or subjects is low, an inspection of how the predictions are distributed may be pointless. See, for example, Section 4.3 in Pinheiro and Bates for a more detailed explanation and illustration of model diagnostics [5]. If it is concluded that the model assumptions are unreasonable, one could consider an appropriate transformation of the data or formulate a variance model to handle heteroscedasticity of the outcome [5], or one could consider using a generalised linear mixed model to handle non-normally distributed outcomes [6].
If the model seems reasonable, we report the estimate and CI for the LOAM. The clinician can then compare the estimated LOAM and associated CI to a clinically acceptable difference between measurements evaluated on the same subject. Whether or not the agreement between measurements is satisfactory depends both on the scale and the clinical purpose of the measurements. Next, we may calculate CIs for σB and σE, and use these along with the point estimates (\( {\hat{\sigma}}_B^2 \) and \( {\hat{\sigma}}_E^2 \)) to compare the order of magnitude of the inter-observer variation with the intra-observer variation. In the rare case where the observer variation is negligible, the observer effect could in principle be removed from the random model, requiring that the CIs for the LOAM are adjusted accordingly (see Additional file 4).
The agreement analysis may be supplemented with an estimate and CI for the ICC, which is another measure of agreement based on the variance components. Various forms of ICCs are listed in McGraw and Wong for a range of models [7]. The two-way random effects model proposed in this paper corresponds to Case 2A in McGraw and Wong, with subject as row effect and observer as column effect, and ICC(A, 1) can then be used to assess absolute agreement of the measurements [7]. The plug-in estimate of ICC(A, 1) is easily calculated using the estimated variance components:
$$ \hat{ICC\left(A,1\right)}=\frac{{\hat{\sigma}}_A^2}{{\hat{\sigma}}_A^2+{\hat{\sigma}}_B^2+{\hat{\sigma}}_E^2}. $$
We refer to Table 7 in McGraw and Wong for an approximate CI for ICC(A,1) [7].
Multiple measurements on each subject per observer
The proposed LOAM and their estimates and CIs can easily be extended to the case where each observer performs multiple measurements on every subject. If each observer performs c measurements on each subject, we extend the two-way random effects model to:
$$ {Y}_{ijk}=\mu +{A}_i+{B}_j+{E}_{ijk}, $$
where \(Y_{ijk}\) is the kth measurement performed by the jth observer on the ith subject for i = 1, …, a, j = 1, …, b, and k = 1, …, c. Note that, conditional on observer and subject, the c repeated measurements are assumed to be independent and identically distributed. Mimicking the arguments for the single measurement case, but now considering the differences \( {D}_{ijk}={Y}_{ijk}-{\overline{Y}}_{i\cdotp \cdotp }, \) we propose the following 95% LOAM:
$$ \pm 1.96\sqrt{\frac{b-1}{b}{\sigma}_B^2+\frac{bc-1}{bc}{\sigma}_E^2}. 
$$
Again \( {\sigma}_A^2,{\sigma}_B^2, \) and \( {\sigma}_E^2 \) are estimated by the ANOVA estimates (see, e.g., Chapter 4 of Searle et al. [3]), which are given by
$$ {\hat{\sigma}}_A^2=\frac{MSA- MSE}{bc},\kern0.5em {\hat{\sigma}}_B^2=\frac{MSB- MSE}{ac},\kern0.5em {\hat{\sigma}}_E^2= MSE, $$
where now MSA = SSA/νA, MSB = SSB/νB, and MSE = SSE/νE with \( SSA= bc{\sum}_{i=1}^a{\left({\overline{y}}_{i\cdot \cdot }-{\overline{y}}_{\cdots}\right)}^2,\kern0.5em SSB= ac{\sum}_{j=1}^b{\left({\overline{y}}_{\cdot j\cdot }-{\overline{y}}_{\cdots}\right)}^2, \) \( SSE={\sum}_{i=1}^a{\sum}_{j=1}^b{\sum}_{k=1}^c{\left({y}_{ijk}-{\overline{y}}_{i\cdot \cdot }-{\overline{y}}_{\cdot j\cdot }+{\overline{y}}_{\cdots}\right)}^2, \) and νE = abc − a − b + 1, while νA = a − 1 and νB = b − 1 are unchanged. Note that the overall, subject-specific, and observer-specific averages (\( {\overline{y}}_{\cdots },{\overline{y}}_{i\cdotp \cdotp } \), and \( {\overline{y}}_{\cdotp j\cdotp } \)) now also average across the multiple measurement index. With these definitions of SSB, SSE, νB, and νE and with N = abc, the LOAM estimate and CIs still have the form given by Eq. (4)–(5). For the sample size calculation summarised in Eq. (6)–(7), we furthermore replace a with ac. Further, CIs for σA, σB, and σE are obtained by Eq. (8)–(10), except that a is replaced with ac, b is replaced by bc, and the definitions of \( {\hat{\sigma}}_A^2,{\hat{\sigma}}_B^2,{\hat{\sigma}}_E^2,{\nu}_A,{\nu}_B \), and νE change to the above. Note that all formulas for the multiple measurement case reduce to those for the single measurement case when c = 1.
As for the single measurement setup, the observations may be visualised using an agreement plot, where the observed differences \( {d}_{ijk}={y}_{ijk}-{\overline{y}}_{i\cdotp \cdotp } \) are plotted against the subject-specific averages \( {\overline{y}}_{i\cdotp \cdotp } \).
The statistical programming language R, version 3.6.1 [8], was used to analyse the data in the paper. An R-package, R-scripts, and the aortic data for the LOAM calculations in the present paper can be obtained from the GitHub repository: https://github.com/HaemAalborg/loamr.
Results
In a study, b = 5 thoracic radiologists measured the diameter (in centimetres) of a = 40 lung tumours from computed tomography scans [9]. This study was also used as an example in Jones et al. [2]. Table 1 shows the empirical mean and standard deviation of the measurements across subjects, conditional on radiologist, and Fig. 1 displays the agreement plot. Estimates and CIs of the 95% LOAM, ICC, σA, σB, and σE are listed in Table 2. Neither the agreement plot nor the conditional empirical means indicate any observer systematically making unusually small or large measurements. Further, there is no indication of heteroscedasticity in relation to change in observer or to the size of the tumour.
Table 1 Empirical mean and standard deviation (SD) of the tumour measurements, calculated across subjects, conditional on radiologist
Table 2 Estimates and 95% confidence intervals (CIs) of the upper 95% LOAM, intra-class correlation (ICC), σA, σB, and σE for the tumour measurements
The estimated 95% LOAM are ±1.1 cm (95% CI: 1.0 cm to 1.8 cm); the estimate is identical to the 95% LOAM calculated by Jones et al.'s method when rounding to one decimal place. The inter-observer standard deviation estimate is 0.3 cm (95% CI: 0.1 cm to 0.5 cm), while the intra-observer standard deviation estimate is 0.6 cm (95% CI: 0.5 cm to 0.6 cm). Although on a scale comparable to the intra-observer variation, the inter-observer variation is smaller, supporting the practice where lung nodule measurements are performed by different radiologists. We may also note that the inter-subject variation (unsurprisingly) is larger than both the inter- and intra-observer variation.
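Before turning to the aortic data, where each observer measured every image twice, the multiple-measurements estimate can be sketched along the same lines as the single-measurement code above. The array layout (subject × observer × replicate) is an illustrative choice; the CI machinery of Eq. (5) carries over with the redefined SSB, SSE, ν_E and N = abc.

```python
import numpy as np

def loam_repeated(y):
    """95% LOAM estimate for a-by-b-by-c data, Eqs. (13)-(14):
    axes = (subject, observer, replicate)."""
    a, b, c = y.shape
    N = a * b * c
    mu = y.mean()
    # observer sum of squares and residual sum of squares
    SSB = a * c * np.sum((y.mean(axis=(0, 2)) - mu)**2)
    SSE = np.sum((y - y.mean(axis=(1, 2), keepdims=True)
                    - y.mean(axis=(0, 2))[None, :, None] + mu)**2)
    # plugging the ANOVA estimates into Eq. (13) simplifies to the Eq. (4) form
    return 1.96 * np.sqrt((SSB + SSE) / N)
```

Note that with c = 1 this reduces exactly to the single-measurement estimate, matching the remark in the text.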
Borgbjerg et al. consider three methods (OTO, LTL, and ITI) for assessing the maximum antero-posterior abdominal aortic diameter [10]. A total of b = 12 radiologists measured the aortic diameter c = 2 times on a = 50 still abdominal aortic images to assess which of the three methods was most reliable. Using the methods described in Section 2.2 for multiple measurements, we calculate estimates and CIs for the 95% LOAM, σA, σB, and σE (see Table 3) and make an agreement plot (see Fig. 2). The inter-subject variation is large compared to both the inter- and intra-observer variation. The inter-observer variation is of the same order of magnitude as the intra-observer variation and should not be excluded. The LTL method has the largest estimated LOAM, meaning that measurements made by this method tend to vary more. Conversely, the ITI method has the smallest LOAM, suggesting that this method has the highest reproducibility when taking into account both the inter-observer and intra-observer variation. However, the wide CIs for the LOAM indicate that more observers may be needed to assess this properly. We found significantly less intra-observer variation for the LTL and ITI compared to the OTO method. This finding is in line with the conclusion by Borgbjerg et al., which suggests that it is advantageous to employ either the ITI or LTL method when repeated measurements are performed by the same observer [10].
Table 3 Estimates and 95% confidence intervals (CIs) for the upper 95% LOAM, σA, σB, and σE for the aortic diameter measurements
Fig. 2 Agreement plots for each of the three methods (OTO, LTL, and ITI) used to measure the aortic diameter along with the estimate (dashed line) and the 95% CI for the 95% LOAM (shading)
Discussion
In this study, we have defined the LOAM under the assumption of a two-way random effects model with additive observer and subject effects. This allowed us to formulate a simple statistical inference procedure which can be easily implemented. The theory could be altered to cover various situations where the assumptions of the paper are not fulfilled.
First, we include observers as a random effect, meaning that we consider the observers in a study to be a random sample from a larger population of observers that we want to make inference about. It is, however, not unlikely to have a study where the considered observers constitute the whole population of interest, in which case it may be more appropriate to include observers as a fixed effect. The LOAM presented in this paper is based on the variance of the difference between an individual measurement and the subject-specific mean. Under a model with observers as a fixed effect, such a LOAM will no longer measure variation due to change of observer. Depending on the purpose of the agreement study, the estimated observer effects could then be included in a reformulation of the LOAM or considered separately. However, we believe that many studies are performed to investigate agreement not only between the specific observers but rather within a larger population of observers, encouraging the choice of model in this paper.
Second, one could imagine a situation where it is relevant to include an interaction term between subjects and observers, that is, modelling that observers may react differently to the subjects. For single measurements this interaction effect is confounded with the residual error, but for multiple measurements this effect could in principle be modelled and the LOAM adjusted accordingly.
Third, the methods and formulae of this paper rely on the assumption of a balanced data setup, where all observers have evaluated all the subjects the same number of times. However, in practice it is not unlikely to encounter an unbalanced data set, as measurements may get lost or not all observers were able to perform all measurements. An unbalanced setup is definitely more complicated to handle, but some advances can be made. A new expression for the LOAM may be found under a two-way random model allowing unbalanced data, while existing methods for finding estimates of the variance components can be used to estimate the adjusted LOAM (see, e.g., [3, 11]). However, it is in general not possible to obtain closed-form expressions for the confidence intervals for the LOAM and variance components.
Fourth, as indicated in Section 2.1.5, it might happen that the estimate \( {\hat{\sigma}}_B^2 \) is negative due to negative correlation between observations made by the same observer on different subjects, which would indicate a misspecification of the two-way random effects model formulated in Eq. (1). It is possible to generalise the theory by considering marginal modelling [12]. It was further indicated in Section 2.1.5 that negativity can also arise from sampling variation of the unbiased ANOVA estimates we have used in this paper. Various approaches have been suggested to remedy this problem as well [13]. Pursuing these generalisations will, however, make modelling and implementation much more involved, and thereby violate our goal of formulating an easily implementable framework.
Conclusions
Our results show it is possible to formulate measures for the agreement with the mean between multiple observers, equip them with confidence intervals, and extend them to multiple observations per observer, thereby providing a natural extension of Bland-Altman's graphical method. We believe we have provided an easily accessible and useful statistical toolbox for researchers involved in assessing agreement between methods or individuals performing clinical measurements.
Availability of data and materials
The dataset on abdominal aortic diameter measurements supporting the conclusions of this article is available in the loamr repository: https://github.com/HaemAalborg/loamr. The dataset on tumour sizes is not publicly available but is available from the corresponding author of the original paper on request [9].
Abbreviations
LOAM: Limits of agreement with the mean; ICC: Intra-class correlation; ANOVA: Analysis of variance
References
Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;327(8476):307–10. https://doi.org/10.1016/S0140-6736(86)90837-8.
Jones M, Dobson A, O'Brian S. A graphical method for assessing agreement with the mean between multiple observers using continuous measures. Int J Epidemiol. 2011;40(5):1308–13. https://doi.org/10.1093/ije/dyr109.
Searle SR, Casella G, McCulloch CE. Variance components. Hoboken: Wiley; 1992.
Graybill FA, Wang C-M. Confidence intervals on nonnegative linear combinations of variances. J Am Stat Assoc. 1980;75(372):869–73. https://doi.org/10.1080/01621459.1980.10477565.
Pinheiro JC, Bates DM.
Mixed-effects Models in S and S-PLUS. New York: Springer; 2000.
6. McCulloch CE, Searle SR, Neuhaus JM. Generalized, Linear, and Mixed Models. 2nd ed. Hoboken: Wiley; 2008.
7. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30-46. https://doi.org/10.1037/1082-989X.1.1.30.
8. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria; 2019.
9. Erasmus JJ, et al. Interobserver and intraobserver variability in measurement of non-small-cell carcinoma lung lesions: implications for assessment of tumor response. J Clin Oncol. 2003;21(13):2574-82. https://doi.org/10.1200/JCO.2003.01.144.
10. Borgbjerg J, Bøgsted M, Lindholt JS, Behr-Rasmussen C, Hørlyck A, Frøkjær JB. Superior reproducibility of the leading to leading edge and inner to inner edge methods in the ultrasound assessment of maximum abdominal aortic diameter. Eur J Vasc Endovasc Surg. 2018;55(2):206-13. https://doi.org/10.1016/j.ejvs.2017.11.019.
11. Burdick RK, Borror CM, Montgomery DC. Design and Analysis of Gauge R&R Studies: Making Decisions with Confidence Intervals in Random and Mixed ANOVA Models. ASA-SIAM Series on Statistics and Applied Probability. Philadelphia: SIAM; Alexandria, VA: ASA; 2005.
12. Molenberghs G, Verbeke G. A note on a hierarchical interpretation for negative variance components. Stat Modelling. 2011;11(5):389-408. https://doi.org/10.1177/1471082X1001100501.
13. Khuri AI. Designs for variance components estimation: past and present. Int Stat Rev. 2000;68(3):311-22. https://doi.org/10.1111/j.1751-5823.2000.tb00333.x.

Author information: Department of Clinical Medicine, Aalborg University, Aalborg, Denmark (Heidi S. Christensen & Martin Bøgsted); Department of Haematology, Aalborg University Hospital, Aalborg, Denmark (Heidi S. Christensen, Lars Børty & Martin Bøgsted); Clinical Cancer Research Center, Aalborg University Hospital, Aalborg, Denmark; Department of Radiology, Aarhus University Hospital, Aarhus, Denmark (Jens Borgbjerg).

Author contributions: MB and JB designed the study. MB and HSC did the statistical modelling and analysed the data. HSC wrote the first version of the manuscript. LB produced figures and organised data and scripts into an R package. All authors read and approved the final manuscript. Correspondence to Martin Bøgsted.

Regarding the ethics approval and consent to participate, we refer to the statements in the original papers by Erasmus et al. [9] for the tumour size data and Borgbjerg et al. [10] for the abdominal aortic diameter measurement data. Permission to use the tumour size data was granted by Jeremy Erasmus in personal communication.

Supplementary material: Derivation of the confidence intervals for the LOAM. Coverage probabilities from a small simulation study. Derivation of confidence intervals for the variance parameters. Formulae after removing the observer effect.

Citation: Christensen HS, Borgbjerg J, Børty L, et al. On Jones et al.'s method for extending Bland-Altman plots to limits of agreement with the mean for multiple observers. BMC Med Res Methodol. 2020;20:304. https://doi.org/10.1186/s12874-020-01182-w.

Keywords: Continuous measurements; Data analysis, statistics and modelling.
RBF kernel mapping

I was reading that the Gaussian/RBF kernel maps its input onto the surface of a normalized hypersphere. The RBF kernel is given by: $k(x,z) = \exp\left(\frac{-||x-z||^2}{2\sigma^2}\right)$. Can anyone explain why the RBF kernel maps the input space onto the surface of a unit hypersphere?

machine-learning kernel-trick rbf-kernel (asked by sanjayr)

Answer (gunes): The RBF kernel's explicit feature map, which isn't unique (as given in Slide 11 of this link), is, for the 1-D case:
$$\phi(x)=e^{-\gamma x^2}\left[1,\sqrt{\frac{2\gamma}{1!}}x,\sqrt{\frac{(2\gamma)^2}{2!}}x^2,\ldots\right]$$
If we calculate the Euclidean norm of this vector, we get:
$$\Vert\phi(x)\Vert^2=e^{-2\gamma x^2}\sum_{i=0}^\infty \frac{(2\gamma)^i}{i!}x^{2i}=e^{-2\gamma x^2}\sum_{i=0}^\infty\frac{(2\gamma x^2)^{i}}{i!}=e^{-2\gamma x^2}e^{2\gamma x^2}=1$$
So the norm of the vectors is $1$ ($\Vert\phi(x)\Vert^2=1\rightarrow \Vert\phi(x)\Vert=1$), which means the mapping is onto the surface of the unit hypersphere in infinite dimensions. I'd gladly try to prove this for larger dimensions if I can find an explicit expression.

Answer: For x of any dimension: we know that $\phi(x_1) \cdot \phi(x_2) = k(x_1, x_2)$, so it's clear that $||\phi(x)||^2 = k(x, x)$. We can now try to find the norm of a generic $\phi(x)$, and if we find it to be constant, this means that the RBF kernel maps x onto a hypersphere:
$$||\phi(x)||^2 = k(x, x) = e^{-\gamma ||x-x||^2} = e^0 = 1.$$
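A quick numerical check (a Python/NumPy sketch with illustrative names, not part of either answer's derivation) confirms both arguments: k(x, x) evaluates to 1 for any input, and the partial sums of the squared norm of the truncated 1-D feature map converge to 1.

```python
import math
import numpy as np

# ||phi(x)||^2 = k(x, x) = exp(-gamma * 0) = 1 for any x, so every input
# is mapped onto the surface of the unit hypersphere.
def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                 # five random 3-D inputs
print([round(float(rbf(x, x)), 12) for x in X])   # -> [1.0, 1.0, 1.0, 1.0, 1.0]

# Partial sums of the explicit 1-D feature map norm from the first answer:
# exp(-2*gamma*x^2) * sum_i (2*gamma*x^2)^i / i!  ->  1 as more terms are kept.
gamma, x = 0.5, 1.7
norm_sq = sum(np.exp(-2 * gamma * x ** 2) * (2 * gamma * x ** 2) ** i / math.factorial(i)
              for i in range(30))
print(norm_sq)                              # ~1.0 up to floating point error
```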
A New Method to Calculate Water Film Stiffness and Damping for Water Lubricated Bearing with Multiple Axial Grooves

01.12.2020 | Original Article | Issue 1/2020 | Open Access
Chinese Journal of Mechanical Engineering, Issue 1/2020
Guojun Ren

Nomenclature:
- \(A_{L}(\eta),\;A_{E}(\eta)\) or \(A_{P}(\eta)\): Non-dimensional location of static load center
- \(A_{Ld}(\eta),\;A_{Ed}(\eta)\) or \(A_{Pd}(\eta)\): Non-dimensional location of dynamic load center
- \(B\): Width of the slide bearing and the pad between two grooves (m)
- \(c\): Radial bearing clearance (m)
- \(d\): Shaft diameter (m)
- \(e\): Eccentricity of the bearing (m)
- \(h_{L0}\): Lubrication film thickness at leading edge of the slide bearing under steady operation (m)
- \(h_{T0}\): Lubrication film thickness at trailing edge of the slide bearing under steady operation (m)
- \(\Delta h_{Ti}(t)\): Amount of dynamic squeeze of fluid film at trailing edge of pad "i" (m)
- \(\Delta \dot{h}_{Ti}(t)\): Dynamic squeeze velocity of fluid film at trailing edge of pad "i" (m/s)
- \(L\): Bearing length, this is the length of the bearing pad (m)
- \(x^{*}\): Non-dimensional coordinate of sliding bearings, \(x^{*} = x/B\)
- \(N\): Number of bearing grooves, always designed with an even number without loss of generality
- \(N_{s}\): Shaft rotating speed (r/s)
- \(V\): Surface velocity of shaft (m/s)
- \(W_{o}\): Loading force of the slide bearing under steady operation condition (N)
- \(W_{o,i}\): \(W_{o}\) referring to pad "i"
- \(W_{1}\): Dynamic part of bearing load on top of \(W_{o}\) (N)
- \(\alpha_{Li}\): Location angle of leading edge of pad "i"
- \(\alpha_{Ti}\): Location angle of trailing edge of pad "i"
- \(\varepsilon\): Eccentricity ratio of the entire bearing (ε = e/c)
- \(\varPhi\): Attitude angle of the bearing (rad)
- \(\eta = h_{L0}/h_{T0}\): Ratio of film thickness at leading edge to trailing edge
- \(\eta_{i}\): \(\eta\) referring to pad "i"
- \(\lambda\): Part of first pad surface taking load (0 to 1.0)
- \(\mu\): Viscosity of lubricant, for water it is a constant (Pa·s)
- \(\varOmega\): Angular velocity of shaft (1/s)
- \(K_{yy}, K_{yx}, K_{xy}, K_{xx}\): Non-dimensional coefficients of stiffness
- \(k_{yy}, k_{yx}, k_{xy}, k_{xx}\): Dimensional coefficients of stiffness
- \(C_{yy}, C_{yx}, C_{xy}, C_{xx}\): Non-dimensional coefficients of damping
- \(c_{yy}, c_{yx}, c_{xy}, c_{xx}\): Dimensional coefficients of damping

1 Introduction

As is well known, water-lubricated guide bearings for hydro turbines and pumps are conventionally designed with multiple axial grooves. These grooves are provided for the purpose of effectively cooling the bearing and flushing away abrasives. However, due to the variety of groove designs in terms of number and size, a prediction of bearing performance in terms of load capacity, stiffness and damping characteristics is very difficult. The author of this paper [1] introduced an analytical method to investigate the groove effect on the Sommerfeld Number and the coefficients of stiffness and damping based on inclined slide bearing solutions for bearings with rigid surface. However, the quality and accuracy of the solution depends on how closely the geometry of the inclined slide bearing represents the actual wedge shape at the individual pads of the grooved bearing, especially the wedge shape of those pads which are loaded most.
This paper examines three different geometric shapes of inclined slide bearings and provides a solution with satisfactory accuracy. A brief review of the available literature is useful to understand the development of this method. For rigid-surface plain bearings with no grooves, in terms of steady operation, besides the classic solution of the long bearing theory by Sommerfeld [2] and the short bearing theory by DuBois and Ocvirk [3], there are several excellent analytical solutions for finite length bearings. The finite length bearing theory by Childs et al. [4, 5] is one of the excellent solutions. Another good analytical solution was proposed by Capone et al. [6]. Numerical solutions using finite difference and finite element methods are abundant; their evaluation is not the focus of this paper. For bearings designed with multiple axial grooves, Pai et al. [7] published a number of works on steady performance and dynamic stability of a simple rotor. Ren [8] published a paper on the calculation of the water film thickness of water lubricated bearings with multiple axial grooves for steady state operation. On the stiffness and damping coefficients of non-grooved plain bearings, the classical short bearing solution is the most popular one. The solution by Childs et al. [4, 5] is strongly recommended for finite length bearings. It needs to be mentioned that the above works are related to rigid-surface bearings. For deformable-surface bearings, the effect of surface deformation is considered. Lahmar et al. [9] provide a procedure to simultaneously evaluate both static and dynamic performance with the small perturbation method. Recent development has focused on CFD and FSI (fluid structure interaction) [10-17]. The effect of turbulence on stiffness and damping was investigated in Refs. [18, 19]. In reviewing the available information, methods to determine stiffness and damping rely mainly on numerical simulation for bearings with multiple axial grooves. The objective of this paper is to provide a semi-analytical method to investigate the groove effect on load capacity, stiffness and damping based on infinite-length and rigid-surface assumptions, which are approximately valid for bearings made of hard polymers under relatively low bearing pressure [10]. For the calculation results to be useful, certain conditions have to be applied to the bearings. First, the ratio of bearing length to the width of the bearing pad must be 3.0 or higher. Second, the bearing pressure must be relatively low so that the surface deformation effect does not overwhelmingly change the result. Although this paper does not include the effect of elastic deformation of the bearing surface in terms of elastohydrodynamic lubrication, the results are approximately valid for polymer bearings with higher hardness and lower pressure, which is the case for most pump and turbine guide bearings. The result is not suitable for water lubricated bearings with rubber staves, which need special treatment either through experiment [20-29] or numerical analysis. Experimental study on bearings with multiple axial grooves demonstrates that a relatively rigid bearing pad surface forms hydrodynamic pressure more easily than a soft surface [22]. This has been demonstrated in the elastohydrodynamic study on sliding bearings [30]. A practical engineering application of stiffness and damping was shown in Ref. [31].
2 Stiffness and Damping of Inclined Slide Bearings with Different Geometries

The idea behind evaluating the load capacity (Sommerfeld Number) and the stiffness and damping coefficients of a circular bearing with multiple axial grooves is that the circular journal bearing can be considered as an assembly of many simple inclined slide bearings (Figure 1), so that the analytical results for an inclined slide bearing (Figure 2) can be used as building blocks to form a calculation method. Without loss of generality, Figure 1 shows the pressure created by the bearing pads only. In reality, the water pressure in the grooves varies from negligible to significant. Since this procedure works with non-dimensional functions, the water pressure in the grooves can easily be added back to the dimensional pressure. The implementation of this idea starts from evaluating the dynamic characteristics of the sliding bearing shown in Figure 2. To avoid confusion, an inclined slider is also called an inclined slide bearing. A sliding pad or bearing pad always refers to the bearing surface between two neighboring grooves.

Figure 1: Grooved bearing as an assembly of sliding pads
Figure 2: Infinite length inclined linear slide bearing

As indicated in the introduction, the load capacity of the entire bearing depends on the load capacity of the individual inclined sliding pads. The accuracy of the solution is a direct function of how well the geometry of the inclined sliding bearing represents the wedge shape of the individual pads of the circular bearing. In the following sections, three useful geometries of inclined slide bearings are examined and their proximity to the wedge shape of the main bearing is compared.

2.1 Linear Inclined Slide Bearing

The linear inclined slider was used in previous work [1]. The water film is represented by a linear function as follows:
$$h(x^{*}) = h_{T0} \cdot \left[1 - x^{*} \cdot (\eta - 1)\right].$$

The main functions of the solution are:

Load capacity (non-dimensional force):
$$\varPi_{L}(\eta) = \frac{W_{o} \cdot h_{T0}^{2}}{\mu \cdot V \cdot B^{2} \cdot L} = \frac{6 \cdot \left[(\eta + 1) \cdot \ln \eta - 2 \cdot (\eta - 1)\right]}{(\eta - 1)^{2} \cdot (\eta + 1)},$$

Location of static load center (non-dimensional distance):
$$A_{L}(\eta) = \frac{x_{C}}{B} = \frac{\eta \cdot \left(\frac{\eta + 2}{\eta - 1}\right) \cdot \ln \eta - \frac{5}{2} \cdot (\eta - 1) - 3}{(\eta + 1) \cdot \ln \eta - 2(\eta - 1)},$$

Stiffness function (non-dimensional):
$$K_{L}(\eta) = 6 \cdot \frac{2 \cdot \eta \cdot \ln \eta - \eta + 1}{\eta \cdot (\eta - 1)^{2}} - \frac{6}{\eta \cdot (\eta + 1)} + \frac{12}{1 - \eta^{2}},$$

Damping function (non-dimensional):
$$C_{L}(\eta) = -6 \cdot \frac{\eta \cdot \ln \eta - \eta + 1}{(\eta - 1)^{3}} + 6\frac{\eta \cdot \ln \eta}{(\eta^{2} - 1) \cdot (\eta - 1)}.$$

It is noticed that the right side of Eqs. (2)-(5) is a function only of the ratio of the film thickness at the leading edge to the film thickness at the trailing edge. For the purpose of evaluating stiffness and damping, the film thickness at the leading and trailing edges is considered as a function of time. This is shown in Figure 3.

Figure 3: Inclined linear slide bearing in dynamic motion

The non-dimensional dynamic load is expressed by the stiffness and damping functions:
$$\frac{W_{1} \cdot h_{T0}^{2}}{\mu \cdot V \cdot B^{2} \cdot L} = -\left[K_{L}(\eta) + i \cdot C_{L}(\eta)\right].$$

According to Ref.
[1], the final load on the linear inclined slide bearing is expressed as
$$W = W_{0} - \frac{\mu \cdot V \cdot B^{2} \cdot L}{h_{T0}^{3}} \cdot K_{L}(\eta) \cdot \Delta h_{T}(t) - \frac{\mu \cdot B^{3} \cdot L}{h_{T0}^{3}} \cdot C_{L}(\eta) \cdot \Delta \dot{h}_{T}(t).$$

Eq. (7) is the force on the individual slider under dynamic motion. It is the fundamental relationship between the slider force and the slider displacement and squeeze velocity. As long as the four functions expressed in Eqs. (2)-(5) are known, the load capacity of the inclined slide bearing is fully defined. Therefore, the whole subject turns to finding such a set of functions, as in Eqs. (2)-(5), for different geometric shapes of the slider.

2.2 Exponential Gap Slide Bearing

The shape of the exponential gap slide bearing is expressed by the following equation:
$$h(x^{*}) = h_{T0} \cdot \exp(-x^{*} \cdot \ln \eta).$$

The four resulting functions are as follows:

Non-dimensional load capacity:
$$\varPi_{E}(\eta) = \frac{W_{o} \cdot h_{T0}^{2}}{\mu \cdot V \cdot B^{2} \cdot L} = \frac{\eta^{2} - 1}{2\eta^{2} \cdot (\ln \eta)^{2}} - \frac{3}{(\eta^{2} + \eta + 1) \cdot \ln \eta},$$

Location of static load center:
$$A_{E}(\eta) = \frac{x_{C}}{B} = \frac{(\eta^{2} + \eta + 3) \cdot \eta^{2} - \frac{5(\eta + 1)(\eta^{3} - 1)}{6\ln \eta} - 3\eta^{2} \ln \eta}{(\eta + 1)(\eta^{3} - 1) - 6\eta^{2} \cdot \ln \eta},$$

Stiffness function:
$$K_{E}(\eta) = \frac{\eta^{2} - 1}{\eta^{2} \cdot (\ln \eta)^{2}} - \frac{6}{(\eta^{2} + \eta + 1) \cdot \ln \eta},$$

Damping function:
$$C_{E}(\eta) = \frac{\eta^{2} - 1}{\eta^{2}(\ln \eta)^{3}} - \frac{6}{(\eta^{2} + \eta + 1) \cdot (\ln \eta)^{2}}.$$

2.3 Parabolic Gap Slide Bearing

The author of this paper derived the four functions for a parabolic gap slider (see Appendix). The film thickness is expressed by:
Since the non-dimensional load capacity function is exactly the half of the stiffness function for all type of geometries, it was not repeated in Figure 4. Functions of different basic slide bearings It is to notice that the stiffness function of the parabolic gap and exponential gap is higher than the linear gap for all leading to trailing edge film ratios. This implied if the parabolic function is in good agreement with the actual bearing clearance (pads), a circular bearing simulated with the parabolic function will have a higher load capacity. The dynamic load center is slightly different from static load center Figure 4(d). Appendix provides the definition of load center ratio R( η). This paper used static load center for Sommerfeld number evaluation and dynamic load center for stiffness and damping evaluation for all three types of sliding bearings. 3 Assembly Procedure To evaluate which type of slider geometry best suitable for building the circular bearing, the assembly procedure must be presented first. The first step of the assembly procedure is to define the location angles of each pad relative to a rotating co-ordinate frame r- ϕ [ 8 ]. Figure 5 shows a circular journal bearing with multiple axial grooves under a steady operational condition. By given load and shaft speed, the shaft center is offset from bearing center with an eccentricity of " e". The connecting line between bearing center and shaft center is in-line with r-axis of the rotating coordinate system r- ϕ. Assuming the bearing is fixed in position and the load is vertical as shown on the figure, the r- ϕ coordinate system has an attitude angle " Φ" with respect to the loading direction, namely, the y-axis. The attitude angle changes depending on load, shaft speed and groove numbers. Definition of pad location angles A second co-ordinate frame x- y is defined in line with load direction. In this system, y-axis is in load direction and x-axis is perpendicular to the load direction. In Figure 5, the r-axis divides the entire bearing into two equal halves. All pads underneath r-axis (in the sense of Figure 5) have convergent angles in shaft rotating direction and therefore are able to create hydrodynamic lifting forces. All pads above r-axis have divergent angles in shaft rotating direction and are therefore not able to create hydrodynamic lifting forces. In theory, the divergent bearing half could create a vacuum and therefore a negative pressure. However, since in practice, almost in all the cases, outside source of lubricant will be supplied to the bearing grooves, the divergent half will not create a negative pressure, but keep the same level of pressure as supplied lubricant as in grooves. It is therefore acceptable to assume the pressure on that half of bearing as zero. This assumption is corresponding to half Sommerfeld or Gumbel boundary condition. Since the bearing is assumed to be fixed, all angles ( \(\alpha_{Li}\), \(\alpha_{Ti}\) = 1, 2, 3, …, N/2) defining the positions of grooves will change with attitude angle which is an unknown parameter. One set of groove location angles only defines a particular equilibrium of steady operation. For calculation purpose, a set of "floating numbers" are assigned to the pads underneath the r-axis. As a rule, no matter how the attitude angle to change, it is always the first convergent pad underneath r-axis at the minimum water film location is assigned number "1". Other pads are enumerated clockwise with number 2, 3, 4,… in sequence. 
After having defined the pad location angles, the film thickness ratio of leading to trailing edge under steady operating conditions is
$$\eta_{i} = \frac{1 + \varepsilon \cdot \cos \alpha_{Li}}{1 + \varepsilon \cdot \cos \alpha_{Ti}},\quad i = 1, 2, \ldots, N/2.$$

The film thickness at any leading and trailing edge is
$$h_{L0,i} = c \cdot (1 + \varepsilon \cdot \cos \alpha_{Li}),\quad i = 1, 2, \ldots, N/2,$$
$$h_{T0,i} = c \cdot (1 + \varepsilon \cdot \cos \alpha_{Ti}),\quad i = 1, 2, \ldots, N/2.$$

The second step of the assembly procedure is to calculate the force contribution of each pad to support the entire bearing load. Figure 6 illustrates the supporting force from one pad with location angles \(\alpha_{Li}\) and \(\alpha_{Ti}\). Considering Eqs. (19) and (20), the force of each individual pad can be calculated using Eq. (7) obtained in the previous section. All terms in Eq. (21) refer to pad number "i":
$$W_{i} = W_{0i} - \frac{c_{1} \cdot K(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \Delta h_{Ti}(t) - \frac{c_{2} \cdot C(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \Delta \dot{h}_{Ti}(t),$$
with \(c_{1} = \frac{\mu \cdot V \cdot B^{2} \cdot L}{c^{3}}\) and \(c_{2} = \frac{\mu \cdot B^{3} \cdot L}{c^{3}}\).

Figure 6: Contribution of pad load to bearing

In Figure 6, the pad load is considered to act in the direction pointing to the bearing center. The projection of the pad load onto the r-ϕ co-ordinate system is then
$$-W_{i,r} = W_{0i}\cos(\pi - \varTheta_{i}) - \left(\frac{c_{1} \cdot K(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \Delta h_{Ti}(t) + \frac{c_{2} \cdot C(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \Delta \dot{h}_{Ti}(t)\right) \cdot \cos(\pi - \vartheta_{i}),$$
$$W_{i,\varphi} = W_{0i}\sin(\pi - \varTheta_{i}) - \left(\frac{c_{1} \cdot K(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \Delta h_{Ti}(t) + \frac{c_{2} \cdot C(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \Delta \dot{h}_{Ti}(t)\right) \cdot \sin(\pi - \vartheta_{i}).$$

In Eqs. (22), (23), the function \(K(\eta)\) is one of the functions \(K_{L}(\eta)\), \(K_{E}(\eta)\) or \(K_{P}(\eta)\), depending on which one is chosen. The same applies to the function \(C(\eta)\). For the entire circular bearing, the dynamic part of the pad force is only caused by a very small change of the bearing eccentricity Δe and attitude angle ΔΦ; the film thickness at the trailing edge can be correlated to these small eccentricity and attitude angle changes as follows:
$$\Delta h_{Ti} = \cos \alpha_{Ti} \cdot \Delta e + \sin \alpha_{Ti} \cdot e \cdot \Delta \varPhi.$$

The same can be applied to the pad velocity. However, the velocity must refer to the entire pad, not just the trailing edge. This implies that the pad has no rotation about its load center at any instant of dynamic motion.
Therefore the velocity becomes:
$$\Delta \dot{h}_{Ti} = \cos(\pi - \vartheta_{i}) \cdot \Delta \dot{e} + \sin(\pi - \vartheta_{i}) \cdot e \cdot \Delta \dot{\varPhi},$$
where
$$\vartheta_{i} = A_{d}(\eta_{i}) \cdot \alpha_{Ti} + \left[1 - A_{d}(\eta_{i})\right] \cdot \alpha_{Li},\quad \varTheta_{i} = A(\eta_{i}) \cdot \alpha_{Ti} + \left[1 - A(\eta_{i})\right] \cdot \alpha_{Li},\quad i = 1, 2, 3, \ldots, N/2.$$

The function \(A(\eta)\) is one of \(A_{L}(\eta)\), \(A_{E}(\eta)\) or \(A_{P}(\eta)\), depending on which type of sliding bearing is chosen, and \(A_{d}(\eta)\) is one of \(A_{Ld}(\eta)\), \(A_{Ed}(\eta)\) or \(A_{Pd}(\eta)\), depending on which type of sliding bearing is chosen. Inserting Eqs. (24) and (25) into Eqs. (22) and (23), their matrix form can be expressed as
$$\begin{pmatrix} -W_{i,r} \\ W_{i,\phi} \end{pmatrix} = \begin{pmatrix} W_{0i} \cdot \cos(\pi - \varTheta_{i}) \\ W_{0i} \cdot \sin(\pi - \varTheta_{i}) \end{pmatrix} - c_{1}\begin{bmatrix} K_{rr,i} & K_{r\phi,i} \\ K_{\phi r,i} & K_{\phi\phi,i} \end{bmatrix}\begin{pmatrix} \Delta e \\ e \cdot \Delta \varPhi \end{pmatrix} - c_{2}\begin{bmatrix} C_{rr,i} & C_{r\phi,i} \\ C_{\phi r,i} & C_{\phi\phi,i} \end{bmatrix}\begin{pmatrix} \Delta \dot{e} \\ e\Delta \dot{\varPhi} \end{pmatrix}.$$

The coefficients of stiffness and damping for the pad with index "i" are:
$$K_{rr,i} = \frac{\cos \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K(\eta_{i}) \cdot \cos(\pi - \vartheta_{i}),$$
$$K_{r\phi,i} = \frac{\sin \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K(\eta_{i}) \cdot \cos(\pi - \vartheta_{i}),$$
$$K_{\phi r,i} = \frac{\cos \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K(\eta_{i}) \cdot \sin(\pi - \vartheta_{i}),$$
$$K_{\phi\phi,i} = \frac{\sin \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K(\eta_{i}) \cdot \sin(\pi - \vartheta_{i}),$$
$$C_{rr,i} = \frac{C(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \cos^{2}(\pi - \vartheta_{i}),$$
$$C_{r\phi,i} = C_{\phi r,i} = \frac{C(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \cos(\pi - \vartheta_{i}) \cdot \sin(\pi - \vartheta_{i}),$$
$$C_{\phi\phi,i} = \frac{C(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \sin^{2}(\pi - \vartheta_{i}).$$

4 Quality Comparison of Slide Bearing Geometries

The above section proposed the idea of using an array of inclined slide bearings to build a circular journal bearing with multiple axial grooves. However, the quality of this approach depends on how closely the individual slider represents the shape of the individual pad at any given location and eccentricity ratio.
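Before turning to the geometric comparison, note that the per-pad coefficient expressions just listed reduce to a handful of lines of code. A sketch with illustrative names (K_fun, C_fun and Ad_fun are the stiffness, damping and dynamic load-center functions of whichever slider type is chosen):

```python
import numpy as np

def pad_coefficients(alpha_L, alpha_T, eps, eta_i, K_fun, C_fun, Ad_fun):
    """Non-dimensional stiffness/damping coefficients of pad "i" in the
    r-phi frame (the eight expressions following the matrix equation above)."""
    # Dynamic load center angle of the pad
    theta = Ad_fun(eta_i) * alpha_T + (1 - Ad_fun(eta_i)) * alpha_L
    den = (1 + eps * np.cos(alpha_T)) ** 3
    cd, sd = np.cos(np.pi - theta), np.sin(np.pi - theta)
    K, C = K_fun(eta_i) / den, C_fun(eta_i) / den
    K_mat = np.array([[K * np.cos(alpha_T) * cd, K * np.sin(alpha_T) * cd],
                      [K * np.cos(alpha_T) * sd, K * np.sin(alpha_T) * sd]])
    C_mat = np.array([[C * cd * cd, C * cd * sd],
                      [C * cd * sd, C * sd * sd]])
    return K_mat, C_mat
```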
For the true grooved bearing, the non-dimensional water film thickness at any given location angle "α" (Figure 7) is
$$\overline{h} = \frac{h}{c} = (1 + \varepsilon \cdot \cos \alpha).$$

Figure 7: Definition of local coordinate

When evaluating the individual inclined slide bearing, the non-dimensional water film thickness is expressed as a function of the film thickness ratio of leading edge to trailing edge as well as the local coordinate "s" (Figure 7). Therefore, the non-dimensional water film thickness for pad number "i" in terms of the above-mentioned water film thickness ratio and local coordinate can be presented in the following form:
$$\bar{h}(\bar{s},i) = \left\{1 + \varepsilon \cdot \cos\left[\bar{s} \cdot \left(\alpha_{Ti} - \cos^{-1}\frac{(1 + \varepsilon \cdot \cos \alpha_{Ti}) \cdot \eta_{i} - 1}{\varepsilon}\right) + \alpha_{Ti}\right]\right\},$$
where \(\overline{s} = s/B\) is the non-dimensional local coordinate. A real bearing may have grooves with chamfers or round fillets. Since any chamfer or fillet is too large for water film formation, chamfers and fillets must be considered as part of the grooves, not as the surface of the pads. All other expressions of the water film thickness for bearing pad "i" can also be expressed with the local coordinate and the water film thickness ratio at the leading and trailing edge.

For the linear inclined slide bearing:
$$\bar{h}_{L}(\bar{s},i) = \left(1 + \varepsilon \cdot \cos \alpha_{Ti}\right) \cdot \left[1 - \bar{s} \cdot (\eta_{i} - 1)\right].$$

For the exponential inclined slide bearing:
$$\bar{h}_{E}(\bar{s},i) = \left(1 + \varepsilon \cdot \cos \alpha_{Ti}\right) \cdot \exp\left[-\bar{s} \cdot \ln \eta_{i}\right].$$

For the parabolic inclined slide bearing:
$$\bar{h}_{P}(\bar{s},i) = \left(1 + \varepsilon \cdot \cos \alpha_{Ti}\right) \cdot \left[1 + \bar{s}^{2} \cdot (\eta_{i} - 1)\right].$$

Eqs. (29)-(31) present the water film thicknesses intended to replace Eq. (28). The purpose of doing so is to simplify the problem by solving the Reynolds equation at the inclined slide bearing level rather than at the full bearing level. Note that all these equations return the film thickness at the trailing edge of pad "i", which is \((1 + \varepsilon \cdot \cos \alpha_{Ti})\), when \(\overline{s} = 0\). By the same token, they return the film thickness at the leading edge, which is \((1 + \varepsilon \cdot \cos \alpha_{Ti}) \cdot \eta_{i}\), when \(\overline{s} = -1\). To evaluate which of the functions in Eqs. (29)-(31) best approximates Eq. (28), a set of square-root errors was defined:
$$S_{L}(i,\varepsilon) = \frac{\sqrt{\int_{-1}^{0}[\bar{h}(s,i) - \bar{h}_{L}(s,i)]^{2}\,\mathrm{d}s}}{\int_{-1}^{0}\bar{h}(s,i)\,\mathrm{d}s},$$
$$S_{E}(i,\varepsilon) = \frac{\sqrt{\int_{-1}^{0}[\bar{h}(s,i) - \bar{h}_{E}(s,i)]^{2}\,\mathrm{d}s}}{\int_{-1}^{0}\bar{h}(s,i)\,\mathrm{d}s},$$
$$S_{P}(i,\varepsilon) = \frac{\sqrt{\int_{-1}^{0}[\bar{h}(s,i) - \bar{h}_{P}(s,i)]^{2}\,\mathrm{d}s}}{\int_{-1}^{0}\bar{h}(s,i)\,\mathrm{d}s}.$$

Eqs. (32)-(34) were derived from the non-dimensional water film thickness and are functions of the eccentricity ratio and the number and distribution of the grooves. They are valid for bearings of any size with any number of grooves. In the following, a bearing with 12 grooves is examined for the square-root errors. This bearing will have six bearing pads taking load.
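The three error measures can be evaluated numerically. Below is a sketch (illustrative names; it uses the simplification that the arccos term in Eq. (28) equals α_Li, since (1 + ε·cos α_Li) = η_i·(1 + ε·cos α_Ti) by Eq. (19)):

```python
import numpy as np

def sq_root_error(alpha_L, alpha_T, eps, shape="parabolic", n=2001):
    """Square-root error of Eqs. (32)-(34) for one pad: compares the slider
    film shapes of Eqs. (29)-(31) with the true circular film of Eq. (28)."""
    s = np.linspace(-1.0, 0.0, n)              # non-dimensional local coordinate
    hT = 1 + eps * np.cos(alpha_T)
    eta = (1 + eps * np.cos(alpha_L)) / hT     # Eq. (19) for this pad
    h_true = 1 + eps * np.cos(s * (alpha_T - alpha_L) + alpha_T)
    h_fit = {"linear":      hT * (1 - s * (eta - 1)),
             "exponential": hT * np.exp(-s * np.log(eta)),
             "parabolic":   hT * (1 + s ** 2 * (eta - 1))}[shape]
    # np.trapz is called np.trapezoid on NumPy >= 2.0
    return np.sqrt(np.trapz((h_true - h_fit) ** 2, s)) / np.trapz(h_true, s)
```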
Assuming the location of the minimum film thickness falls at the very center of a groove, the first pad (number 1) has its entire surface loaded. Figure 8 shows the result for the bearing with 12 grooves. It shows that the parabolic inclined slider has the least error for the first pad for eccentricity ratios from 0.9 to 0.999, which is the range of most interest for water lubricated guide bearings. The linear slider seems best suited for the second pad at high eccentricity ratio and for the rest of the pads. The exponential slider seems suitable for all pads except the first one at lower eccentricity ratios. However, these are only observations from a bearing with 12 grooves. Investigation of different numbers of grooves showed that the error of the exponential slider changes rapidly with increasing eccentricity ratio: for small eccentricity ratios the errors are small, and for large eccentricity ratios the errors are big. This is especially true for the first and second pad. The linear slide bearing is insensitive to the eccentricity ratio. Therefore, in the following evaluation, a scheme applying the parabolic slider to the first pad and the linear slider to the rest of the pads is used. This paper investigated only three types of sliding bearings; there may certainly be other types of sliding bearings that would fit the purpose. As demonstrated in Ref. [8], the first pad takes the most load of the entire bearing. Even though the parabolic slider is only used to simulate the first loaded pad, it still yields a significant improvement in load capacity prediction in comparison to the approach with all linear sliders.

Figure 8: Square root error of different sliders

5 Steady Operation and Sommerfeld Number

5.1 Effect of Groove Number

Ref. [8] provides a procedure to calculate the load capacity and water film under steady operational conditions. The non-dimensional load capacity of the bearing is determined as follows:
$$W_{0r,i} = \frac{\varPi(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{2}} \cdot \cos\left\{\pi - \varTheta_{i}\right\},$$
$$W_{0\varphi,i} = \frac{\varPi(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{2}} \cdot \sin\left\{\pi - \varTheta_{i}\right\},$$
$$\varPi(\eta_{i}) = \begin{cases} \varPi_{L}(\eta_{i}), & \text{for the linear slider,} \\ \varPi_{P}(\eta_{i}), & \text{for the parabolic slider}. \end{cases}$$

The total non-dimensional supporting force contributed by all pads underneath the r-axis is then the sum of all the components above:
$$W_{0\varphi} = \sum_{i=2}^{N/2} W_{0\varphi,i} + \lambda^{2} \cdot W_{0\varphi,1},$$
$$W_{0r} = \sum_{i=2}^{N/2} W_{0r,i} + \lambda^{2} \cdot W_{0r,1},$$
where λ is a number less than 1.0. In Figure 5, if the position of the minimum film thickness is located within pad number 1, only a part of this pad will take load. The number "λ" gives the fraction of the pad that takes load. Its value is unknown at the beginning of a calculation. It depends on the attitude angle "Φ", the width of the bearing pads "B", the width of the grooves, as well as the relationship between the bearing loading direction and the position of the pad to which the load points. For vertical bearings, such as hydro turbine guide bearings and vertical pump bearings, the loading direction is undefined. In this case, a practical calculation can be done by assuming λ = 0.5, which represents the condition with the least load capacity.
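The summation above translates into a short routine; the following sketch is restricted to the all-linear scheme for brevity (in the mixed scheme, pad 1 would instead use the parabolic functions, with Pi_P = K_P/2 and the load center A_P; Pi_L and A_L are from the earlier sketch):

```python
import numpy as np

def steady_load(pads, eps, lam=0.5):
    """Non-dimensional steady load components (W0r, W0phi) summed over the
    N/2 loaded pads. pads: list of (alpha_L, alpha_T) angles, pad 1 first."""
    W0r = W0phi = 0.0
    for i, (aL, aT) in enumerate(pads):
        eta = (1 + eps * np.cos(aL)) / (1 + eps * np.cos(aT))   # Eq. (19)
        Theta = A_L(eta) * aT + (1 - A_L(eta)) * aL             # static load center angle
        w = Pi_L(eta) / (1 + eps * np.cos(aT)) ** 2
        scale = lam ** 2 if i == 0 else 1.0                     # only a fraction of pad 1 loaded
        W0r += scale * w * np.cos(np.pi - Theta)
        W0phi += scale * w * np.sin(np.pi - Theta)
    return W0r, W0phi
```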
For horizontal bearings, the load direction can easily be defined. The parameter λ can then be determined by iteration: first assume an initial value, for example 0.5, then calculate the attitude angle according to Eq. (40) below and subsequently calculate the new λ-value. With the new value, run the calculation again and obtain another attitude angle. Repeat the same procedure until a satisfactory result is obtained. In the evaluations of Figures 9, 10, 11 and 12, λ = 1.0 was used. A full analysis of the effect of the λ-value on the Sommerfeld Number is worth a separate paper.

Figure 9: Sommerfeld Number comparison
Figure 10: Ratio of Sommerfeld Number
Figure 11: Sommerfeld Number of grooved bearings
Figure 12: Ratio of Sommerfeld Number of grooved to non-grooved bearing

The resultant force in dimensional form is then
$$W_{0} = \frac{\mu \cdot V \cdot B^{2} \cdot L}{c^{2}}\sqrt{W_{0\varphi}^{2} + W_{0r}^{2}}.$$

The attitude angle is:
$$\varPhi = -\tan^{-1}\frac{W_{0\phi}}{W_{0r}}.$$

According to the conventional definition of the Sommerfeld Number for circular bearings,
$$S = \frac{\mu \cdot N_{s} \cdot d \cdot L}{W_{0}} \cdot \left(\frac{d}{2c}\right)^{2}.$$

The Sommerfeld Number for grooved bearings can be derived using Eq. (39):
$$S = \frac{d^{2}}{4 \cdot \pi \cdot B^{2} \cdot \sqrt{W_{0\varphi}^{2} + W_{0r}^{2}}}.$$

The Sommerfeld Number defined by Eq. (42) is a function of the number of grooves and the eccentricity ratio. Its reciprocal defines the load capacity of a bearing, while the reciprocal of Eq. (41) is the actual load on the bearing. It is evident that for low eccentricity ratios (less than 0.9) there is no significant difference between the modeling with all linear sliders and the one with mixed sliders, namely with the first pad parabolic and the rest linear. However, for high eccentricity ratios (ε > 0.9), which reflect the case of high loading, the difference can be significant (Figure 9). The Sommerfeld Number simulated with all linear pads can be up to 5 times higher than the Sommerfeld Number with mixed pad geometry for the example investigated. By definition, a higher Sommerfeld Number means a lower load capacity. This is reflected in Figure 10. The Sommerfeld Number ratio shown in Figure 10 is the Sommerfeld Number with all linear sliders divided by the Sommerfeld Number with mixed pads, in which the first pad is parabolic and the rest linear. Water lubricated guide bearings can be subject to eccentricity ratios as high as 0.999. In this case, an all-linear modeling definitely underestimates the bearing load capacity. Based on the quality comparison of the sliding bearing geometries in the previous section, the parabolic gap is closer to the true shape of the bearing clearance of the first pad. Therefore, the mixed scheme must more closely represent the true bearing performance. It is an improvement over the all-linear slider modeling, especially for large eccentricity ratios. Ren et al. [1] in their previous paper quantitatively demonstrated that the load capacity of a grooved bearing is lower than that of non-grooved bearings. The Sommerfeld Number of the grooved bearing modeled with all linear pads was compared with the benchmark Sommerfeld Number, namely the formulation from Refs. [4, 5]. A similar comparison is made here for the grooved bearing with mixed types of inclined sliders relative to non-grooved bearings. The Sommerfeld Number of the solution by Childs is again used as the benchmark for the comparison.
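Returning to the λ iteration described at the top of this subsection, it can be sketched as a simple fixed-point loop. In the sketch below, update_lambda is a hypothetical placeholder for the geometry-specific rule relating the attitude angle to the loaded fraction of pad 1 (it depends on groove width and load direction and is not given in closed form in the paper):

```python
import numpy as np

def grooved_sommerfeld(pads, eps, d, B, update_lambda, n_iter=20):
    """Attitude angle, Eq. (40), and grooved Sommerfeld Number, Eq. (42),
    with the fixed-point iteration on lambda described in the text."""
    lam = 0.5                                      # initial guess
    for _ in range(n_iter):
        W0r, W0phi = steady_load(pads, eps, lam)   # sketch from Section 5.1
        Phi = -np.arctan2(W0phi, W0r)              # Eq. (40)
        lam = update_lambda(Phi)                   # placeholder update rule
    S = d ** 2 / (4 * np.pi * B ** 2 * np.hypot(W0phi, W0r))   # Eq. (42)
    return S, Phi, lam
```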
The solid curve in Figure 11 is the Sommerfeld Number of the non-grooved bearing; the other curves are for grooved bearings with different numbers of grooves. The grooves clearly reduce the load capacity of a bearing: the more grooves, the bigger the reduction in load capacity. The ratio of the Sommerfeld Number of the grooved to the non-grooved bearing is shown in Figure 12. This provides a better visualization of how much load capacity reduction can be expected. From Figure 12 it can be seen that reducing the number of grooves is an effective way to increase the load capacity of a grooved bearing.

5.2 Effect of Groove Size

From the defining equation of the Sommerfeld Number of the grooved bearing, Eq. (42), it is a function of the ratio d/B. For a fixed number of grooves, the size (width) of a groove takes away part of the bearing surface, which results in a narrower bearing pad. This increases the d/B ratio. Therefore, for the real load capacity of a practical design, the groove effect must be taken into consideration, especially for grooves with round or filleted corners.

6 Stiffness and Damping Coefficients

Following a procedure similar to Refs. [1, 8], the non-dimensional stiffness and damping coefficients for a circular bearing with multiple axial grooves are obtained by summing the coefficients of stiffness and damping over all supporting pads and are expressed as
$$K_{rr} = \sum_{i=2}^{N/2} \frac{\cos \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K_{L}(\eta_{i}) \cdot \cos(\pi - \vartheta_{i}) + \frac{\lambda^{2} \cdot \cos \alpha_{T1}}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot K_{P}(\eta_{1}) \cdot \cos(\pi - \vartheta_{1}),$$
$$K_{r\phi} = \sum_{i=2}^{N/2} \frac{\sin \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K_{L}(\eta_{i}) \cdot \cos(\pi - \vartheta_{i}) + \frac{\lambda^{2} \cdot \sin \alpha_{T1}}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot K_{P}(\eta_{1}) \cdot \cos(\pi - \vartheta_{1}),$$
$$K_{\phi r} = \sum_{i=2}^{N/2} \frac{\cos \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K_{L}(\eta_{i}) \cdot \sin(\pi - \vartheta_{i}) + \frac{\lambda^{2} \cdot \cos \alpha_{T1}}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot K_{P}(\eta_{1}) \cdot \sin(\pi - \vartheta_{1}),$$
$$K_{\phi\phi} = \sum_{i=2}^{N/2} \frac{\sin \alpha_{Ti}}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot K_{L}(\eta_{i}) \cdot \sin(\pi - \vartheta_{i}) + \frac{\lambda^{2} \cdot \sin \alpha_{T1}}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot K_{P}(\eta_{1}) \cdot \sin(\pi - \vartheta_{1}),$$
$$C_{rr} = \sum_{i=2}^{N/2} \frac{C_{L}(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \cos^{2}(\pi - \vartheta_{i}) + \frac{\lambda^{3} \cdot C_{P}(\eta_{1})}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot \cos^{2}(\pi - \vartheta_{1}),$$
$$C_{r\phi} = C_{\phi r} = \sum_{i=2}^{N/2} \frac{C_{L}(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \cos(\pi - \vartheta_{i}) \cdot \sin(\pi - \vartheta_{i}) + \frac{\lambda^{3} \cdot C_{P}(\eta_{1})}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot \cos(\pi - \vartheta_{1}) \cdot \sin(\pi - \vartheta_{1}),$$
$$C_{\phi\phi} = \sum_{i=2}^{N/2} \frac{C_{L}(\eta_{i})}{(1 + \varepsilon \cdot \cos \alpha_{Ti})^{3}} \cdot \sin^{2}(\pi - \vartheta_{i}) + \frac{\lambda^{3} \cdot C_{P}(\eta_{1})}{(1 + \varepsilon \cdot \cos \alpha_{T1})^{3}} \cdot \sin^{2}(\pi - \vartheta_{1}).$$

Translating them from the r-ϕ coordinate frame into the x-y coordinate frame gives
$$\begin{bmatrix} KYY & KYX \\ KXY & KXX \end{bmatrix} = \begin{bmatrix} \cos \varPhi & -\sin \varPhi \\ \sin \varPhi & \cos \varPhi \end{bmatrix} \cdot \begin{bmatrix} K_{rr} & K_{r\varphi} \\ K_{\varphi r} & K_{\varphi\varphi} \end{bmatrix} \cdot \begin{bmatrix} \cos \varPhi & \sin \varPhi \\ -\sin \varPhi & \cos \varPhi \end{bmatrix},$$
$$\begin{bmatrix} CYY & CYX \\ CXY & CXX \end{bmatrix} = \begin{bmatrix} \cos \varPhi & -\sin \varPhi \\ \sin \varPhi & \cos \varPhi \end{bmatrix} \cdot \begin{bmatrix} C_{rr} & C_{r\varphi} \\ C_{\varphi r} & C_{\varphi\varphi} \end{bmatrix} \cdot \begin{bmatrix} \cos \varPhi & \sin \varPhi \\ -\sin \varPhi & \cos \varPhi \end{bmatrix}.$$

The coefficients of stiffness KYY, KYX, KXY and KXX are non-dimensional. From the equation groups Eqs. (43) and (44) it can be seen that they only depend on the location angles and the number of grooves. This means they change with different groove configurations. The same applies to the damping coefficients. For the purpose of making comparisons with other available methods, a new group of non-dimensional coefficients of stiffness and damping is defined as follows:
$$\begin{bmatrix} K_{yy} & K_{yx} \\ K_{xy} & K_{xx} \end{bmatrix} = \left(\frac{2B}{d}\right)^{2} \cdot S \cdot \pi \cdot \begin{bmatrix} KYY & KYX \\ KXY & KXX \end{bmatrix},$$
$$\begin{bmatrix} C_{yy} & C_{yx} \\ C_{xy} & C_{xx} \end{bmatrix} = \left(\frac{2B}{d}\right)^{3} \cdot S \cdot \pi \cdot \begin{bmatrix} CYY & CYX \\ CXY & CXX \end{bmatrix}.$$

The Sommerfeld Number in Eqs. (47) and (48) is the one defined by Eq. (42).
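The frame transformation above is an ordinary similarity transform, and the scaling of Eqs. (47)-(48) is a one-liner as well. A sketch (illustrative names):

```python
import numpy as np

def to_load_frame(M_rphi, Phi):
    """Rotate a 2x2 stiffness or damping matrix from the r-phi frame into
    the x-y (load) frame: M_xy = R(Phi) @ M_rphi @ R(Phi).T."""
    R = np.array([[np.cos(Phi), -np.sin(Phi)],
                  [np.sin(Phi),  np.cos(Phi)]])
    return R @ M_rphi @ R.T

def scaled_coefficients(KXY_mat, CXY_mat, S, B, d):
    """Non-dimensional groups of Eqs. (47)-(48), for comparison with
    classical circular-bearing results."""
    K = (2 * B / d) ** 2 * S * np.pi * KXY_mat
    C = (2 * B / d) ** 3 * S * np.pi * CXY_mat
    return K, C
```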
The final dimensional coefficients of stiffness and damping are as follows:
$$\begin{bmatrix} k_{yy} & k_{yx} \\ k_{xy} & k_{xx} \end{bmatrix} = \frac{W_{o}}{c} \cdot \begin{bmatrix} K_{yy} & K_{yx} \\ K_{xy} & K_{xx} \end{bmatrix} = \frac{\mu \cdot V \cdot B^{2} \cdot L}{c^{3}}\begin{bmatrix} KYY & KYX \\ KXY & KXX \end{bmatrix},$$
$$\begin{bmatrix} c_{yy} & c_{yx} \\ c_{xy} & c_{xx} \end{bmatrix} = \frac{W_{o}}{c \cdot \varOmega} \cdot \begin{bmatrix} C_{yy} & C_{yx} \\ C_{xy} & C_{xx} \end{bmatrix} = \frac{\mu \cdot B^{3} \cdot L}{c^{3}}\begin{bmatrix} CYY & CYX \\ CXY & CXX \end{bmatrix}.$$

The non-dimensional stiffness group in Eq. (47) and the non-dimensional damping group in Eq. (48) are directly comparable with existing circular bearing results such as the long and short bearing theories and others. In this paper, the stiffness and damping coefficients from Childs and Moes [4, 5] are of particular interest. They are considered to be accurate for non-grooved plain bearings and are used for verifying the correctness of Eqs. (47) and (48). As for the non-dimensional coefficients of stiffness, the coefficients of damping have also been compared at the same groove number and L/D ratio. In Figures 13 and 14, the stiffness and damping coefficients by Childs and Moes were based on L/D = 2.0, and those for the grooved bearing were based on a groove number of 8.

Figure 13: Stiffness coefficients comparison
Figure 14: Damping coefficients comparison

It is understandable that the stiffness coefficient \(K_{yy}\) for the grooved bearing is slightly greater than that for the non-grooved bearing. This is because the pressure is more concentrated in the area around the loading center due to the grooves. The same reason may explain why the coefficient of stiffness \(K_{xx}\) is lower than that of the non-grooved bearing. The cross coefficient of stiffness \(K_{yx}\) shows a different behavior from the non-grooved bearing. Another noticeable characteristic is that the point where the cross-stiffness coefficient \(K_{xy}\) turns negative is shifted to a lower eccentricity ratio. The damping coefficient \(C_{yy}\) is almost identical for both the non-grooved bearing and the grooved bearing in this particular geometrical condition. The coefficient \(C_{xx}\) of the grooved bearing is lower than that of the non-grooved bearing. The cross-damping coefficient \(C_{xy} = C_{yx}\) shows a larger difference at low and high eccentricity ratios and a small difference at intermediate eccentricity ratios.

7 Influence of the Number of Grooves

As stated in the previous section, the coefficients of stiffness and damping are not only a function of the eccentricity ratio, but also of the number of grooves. Figure 15 is a comparison between two bearings with 8 grooves and 12 grooves, respectively. The effect on the coefficients of stiffness is different: an increased groove number has an insignificant effect on \(K_{yy}\) while it reduces \(K_{yx}\). For the cross-coefficient of stiffness \(K_{xy}\), the increase of the groove number shifts the turning point into the negative to a lower eccentricity ratio.

Figure 15: Influence of number of grooves on stiffness

Figure 16 presents a comparison between the coefficients of damping for two bearings with 8 grooves and 12 grooves, respectively. Again, the groove number has less effect on \(C_{yy}\) while affecting the other coefficients significantly.
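For completeness, the dimensional conversion at the top of this section can also be written in code form; a sketch with SI units assumed (μ in Pa·s, V in m/s, B, L and c in m, giving stiffness in N/m and damping in N·s/m):

```python
def dimensional_coefficients(KYY_mat, CYY_mat, mu, V, B, L, c):
    """Dimensional stiffness and damping from the non-dimensional
    KYY/CYY groups (final relations of Section 6)."""
    k = mu * V * B ** 2 * L / c ** 3 * KYY_mat   # stiffness matrix, N/m
    cd = mu * B ** 3 * L / c ** 3 * CYY_mat      # damping matrix, N*s/m
    return k, cd
```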
Figure 16: Influence of number of grooves on damping

8 Conclusions

This paper provides a new method to calculate the load capacity and the coefficients of stiffness and damping for water lubricated guide bearings with multiple axial grooves. The focus is on the effect of the grooves and the groove number. The paper does not include the effect of surface deformation. The result is an approximation and can be applied to water lubricated bearings made from hard polymers combined with lower pressure, or from other materials such as Lignum Vitae wood and ceramics. The paper uses a so-called mixed scheme, which means using the parabolic slider for the first pad only; the rest of the pads use the linear slider. The stiffness and damping of the grooved bearing were investigated considering the groove effect. The coefficients of stiffness and damping demonstrated characteristics different from those of bearings with no grooves. Since the coefficients of stiffness and damping are functions of the eccentricity ratio and the number of grooves, the effect of the number of grooves was studied in depth. It was shown that the number of grooves has less effect on the coefficients \(K_{yy}\) and \(C_{yy}\) while it has a larger effect on the other coefficients of stiffness and damping. Further research considering surface deformation with similar modeling could be an interesting subject.

The employer provided great support to this work and permits publishing this paper as a goodwill to the general public and the lubrication community. The paper was created during employment at Thordon Bearings Inc. The author declares that the objective of this work was fully devoted to the solution of the particular problem in engineering and a better understanding of its scientific nature in general. There is no commercial or associated interest that represents a conflict of interest in connection with this work to a third party.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix: Parabolic Gap Sliding Bearing

The procedure proposed by this paper uses a scheme mixing different types of sliding bearings to build the entire circular bearing. One of the most important components is the parabolic gap sliding bearing. This appendix provides a procedure for deriving its four main functions, namely the load capacity, the location of the load center, the stiffness and the damping. Without loss of generality, the same coordinate system as shown in Figure 3 is used for this procedure.
The shape of the parabolic gap is expressed as
$$h(x,t) = h_{T}(t) \cdot \left[1 + (\eta - 1) \cdot \left(\frac{x}{B}\right)^{2}\right],\quad -B \le x \le 0,\; \eta \ge 1.$$

Introducing non-dimensional variables and parameters defined as follows:
$$x^{*} = \frac{x}{B},\quad \tau = \frac{V \cdot t}{B},\quad h_{T}^{*} = \frac{h_{T}(t)}{h_{T0}},\quad h^{*} = h_{T}^{*} \cdot \left[1 + (\eta - 1) \cdot x^{*2}\right],\quad p^{*} = \frac{(p - p_{g}) \cdot h_{T0}^{2}}{\mu \cdot V \cdot B},$$
where p is the pressure over the pad of unit length, \(p_{g}\) is the pressure in the water grooves, and t is time. The Reynolds equation, taking into consideration the dynamic squeeze film action, is:
$$\frac{\partial}{\partial x}\left(h^{3} \cdot \frac{\partial p}{\partial x}\right) = 6 \cdot \mu \cdot V \cdot \frac{\partial h}{\partial x} + 12\mu \cdot \frac{\partial h}{\partial t}.$$

Inserting the non-dimensional variables of Eq. (A2) into Eq. (A3), the Reynolds equation in non-dimensional form is
$$\frac{\partial}{\partial x^{*}}\left(h^{*3} \cdot \frac{\partial p^{*}}{\partial x^{*}}\right) = 6 \cdot \frac{\partial h^{*}}{\partial x^{*}} + 12 \cdot \frac{\partial h^{*}}{\partial \tau}.$$

The small perturbation method means finding a solution of Eq. (A4) not far from the steady state solution with a linearization approach. This implies seeking a solution of the form
$$p^{*} = p_{o}^{*} + p_{1}^{*} \cdot \delta \cdot e^{i \cdot \tau},$$
$$h_{T}^{*} = 1 + \delta \cdot e^{i \cdot \tau},$$
where \(p_{0}^{*}\) is the non-dimensional pressure under steady operation and \(p_{1}^{*}\) is the perturbation amplitude of the dynamic pressure on top of the pressure under steady operation. In a true sense, \(p_{1}^{*}\) is a coefficient of the non-dimensional dynamic pressure. δ is a small perturbation, a number much less than 1.0; its physical meaning is the ratio of the amplitude change of the film thickness to the minimum film thickness under steady operation. Inserting Eqs. (A5) and (A6) into Eq. (A4) and equating the coefficients of zero order of "δ" on the left and right side of Eq. (A4) results in an equation for the pressure \(p_{o}^{*}\):
$$\frac{\partial}{\partial x^{*}}\left(\left[1 + (\eta - 1) \cdot x^{*2}\right]^{3} \cdot \frac{\partial p_{o}^{*}}{\partial x^{*}}\right) = 12 \cdot (\eta - 1) \cdot x^{*}.$$

By the same token, equating the coefficients of first order of "δ" on the left and right side of Eq. (A4), the coefficient of the dynamic pressure \(p_{1}^{*}\) must fulfill the following equation:
$$\frac{\partial}{\partial x^{*}}\left(\left[1 + (\eta - 1) \cdot x^{*2}\right]^{3} \cdot \frac{\partial p_{1}^{*}}{\partial x^{*}}\right) = -24 \cdot (\eta - 1) \cdot x^{*} + 12 \cdot \left[1 + (\eta - 1) \cdot x^{*2}\right] \cdot i,\quad i = \sqrt{-1}.$$

In this procedure, all terms of order \(\delta^{2}\) and higher are neglected. The boundary conditions for the non-dimensional pressure \(p^{*}\) are
$$p^{*} = 0 \text{ for } x^{*} = 0 \text{ and } x^{*} = -1.$$

To fulfill these conditions, the non-dimensional pressure at steady operation \(p_{0}^{*}\) as well as the real and imaginary parts of the non-dimensional dynamic pressure all need to be zero on the boundaries.
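Eq. (A7) with these boundary conditions can be checked numerically against the closed-form load capacity Π_P quoted in Section 2.3. The following is a consistency sketch (illustrative, not the paper's own code): integrate Eq. (A7) twice on a grid, enforce zero pressure at both edges, and compare the load integral with Π_P.

```python
import numpy as np

eta = 3.0
x = np.linspace(-1.0, 0.0, 20001)
g3 = (1 + (eta - 1) * x ** 2) ** 3        # film thickness cubed, h*^3

# First integration: g3 * dp/dx = 6*(eta-1)*x**2 + c; choose c so that p(0) = 0.
# (np.trapz is called np.trapezoid on NumPy >= 2.0)
I2 = np.trapz(x ** 2 / g3, x)
I0 = np.trapz(1 / g3, x)
c = -6 * (eta - 1) * I2 / I0
dpdx = (6 * (eta - 1) * x ** 2 + c) / g3

# Second integration with p(-1) = 0, so both boundary conditions hold.
p = np.concatenate(([0.0], np.cumsum((dpdx[1:] + dpdx[:-1]) / 2 * np.diff(x))))

Pi_numeric = np.trapz(p, x)
r, t = np.sqrt(eta - 1), np.arctan(np.sqrt(eta - 1))
Pi_closed = (r + (eta - 2) * t) / (eta ** 2 * t + (eta + 2 / 3) * r)
print(Pi_numeric, Pi_closed)              # should agree to integration accuracy
```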
This is expressed as
$$p_{0}^{*} = 0;\quad p_{1}^{*} = p_{1,r}^{*} = p_{1,i}^{*} = 0\;\text{for}\;x = 0\;\text{and}\;x = -1.$$

The first task is to find the solution of Eq. (A7). By integrating Eq. (A7) twice, the non-dimensional pressure at steady operation is expressed in the following form:
$$p_{o}^{*} = \frac{3}{4}\left\{\frac{\tan^{-1}\left(x^{*}\sqrt{\eta - 1}\right)}{\sqrt{\eta - 1}} + \frac{x^{*3}(\eta - 1) - x^{*}}{\left[1 + (\eta - 1) \cdot x^{*2}\right]^{2}}\right\} + \frac{C_{1}}{8}\left\{\frac{3\tan^{-1}\left(x^{*}\sqrt{\eta - 1}\right)}{\sqrt{\eta - 1}} + \frac{3x^{*3}(\eta - 1) + 5x^{*}}{\left[1 + (\eta - 1) \cdot x^{*2}\right]^{2}}\right\} + C_{2}.$$

The boundary condition for \(p_{0}^{*}\) requires \(C_{2} = 0\) and
$$C_{1} = -2\frac{\eta^{2}\tan^{-1}\sqrt{\eta - 1} + (\eta - 2)\sqrt{\eta - 1}}{\eta^{2}\tan^{-1}\sqrt{\eta - 1} + (\eta + \frac{2}{3})\sqrt{\eta - 1}}.$$

Inserting \(C_{1}\) into Eq. (A11), the final non-dimensional pressure at steady operation takes the form
$$p_{o}^{*} = \frac{2}{\eta^{2}\tan^{-1}\sqrt{\eta - 1} + (\eta + 2/3)\sqrt{\eta - 1}} \times \left\{\tan^{-1}\left(x^{*}\sqrt{\eta - 1}\right) + \frac{x^{*}(x^{*2} - 1)(\eta - 1)^{\frac{3}{2}} - x^{*}\eta^{2}\tan^{-1}\sqrt{\eta - 1}}{\left[1 + (\eta - 1)x^{*2}\right]^{2}}\right\}.$$

The load capacity function is the integral of the non-dimensional pressure (Eq. (A13)):
$$\varPi_{P}(\eta) = \frac{W_{o} \cdot h_{T0}^{2}}{\mu \cdot V \cdot B^{2} \cdot L} = \int_{-1}^{0} p_{o}^{*}(x^{*},\eta) \cdot \mathrm{d}x^{*}.$$

The final result after carrying out the integration is
$$\varPi_{P}(\eta) = \frac{(\eta - 2) \cdot \tan^{-1}\sqrt{\eta - 1} + \sqrt{\eta - 1}}{\eta^{2}\tan^{-1}\sqrt{\eta - 1} + (\eta + \frac{2}{3})\sqrt{\eta - 1}}.$$

It is interesting to notice the similarity between the right side of Eq. (A7) and the first term on the right side of Eq. (A8). Since the solution of Eq. (A7) produces the load capacity function Eq. (A15), the first, real term on the right side of Eq. (A8) must generate the stiffness function. It follows that the stiffness function is equal in magnitude to two times the load capacity function; therefore
$$K_{P}(\eta) = 2\frac{(\eta - 2) \cdot \tan^{-1}\sqrt{\eta - 1} + \sqrt{\eta - 1}}{\eta^{2}\tan^{-1}\sqrt{\eta - 1} + (\eta + \frac{2}{3})\sqrt{\eta - 1}}.$$

The corresponding real part of the non-dimensional dynamic pressure coefficient is
$$p_{1,r}^{*} = \frac{-4}{\eta^{2}\tan^{-1}\sqrt{\eta - 1} + (\eta + 2/3)\sqrt{\eta - 1}} \times \left\{\tan^{-1}\left(x^{*}\sqrt{\eta - 1}\right) + \frac{x^{*}(x^{*2} - 1)(\eta - 1)^{\frac{3}{2}} - x^{*}\eta^{2}\tan^{-1}\sqrt{\eta - 1}}{\left[1 + (\eta - 1)x^{*2}\right]^{2}}\right\}.$$

The next task is to find the imaginary part of the non-dimensional dynamic pressure coefficient \(p_{1}^{*}\), which needs to fulfill the following equation:
$$\frac{\partial}{\partial x^{*}}\left(\left[1 + (\eta - 1) \cdot x^{*2}\right]^{3} \cdot \frac{\partial p_{1,i}^{*}}{\partial x^{*}}\right) = 12 \cdot \left[1 + (\eta - 1) \cdot x^{*2}\right].$$

Following a procedure similar to that for Eq. (A7), after integrating Eq.
The next task is to find the imaginary part of the non-dimensional dynamic pressure coefficient \(p_{1}^{*}\), which must fulfill the following equation:

$$\frac{\partial}{\partial x^{*}}\left( \left[ 1 + (\eta - 1)\,x^{*2} \right]^{3} \frac{\partial p_{1,i}^{*}}{\partial x^{*}} \right) = 12 \left[ 1 + (\eta - 1)\,x^{*2} \right]. \tag{A18}$$

Following a procedure similar to that used to solve Eq. (A7), integrating Eq. (A18) twice gives the imaginary part of the non-dimensional dynamic pressure coefficient:

$$p_{1,i}^{*} = -2 \cdot \frac{x^{*2}(\eta - 1) + 2}{\left[ 1 + (\eta - 1)\,x^{*2} \right]^{2}(\eta - 1)} + \frac{C_{3}}{8}\left\{ \frac{3\tan^{-1}\left(x^{*}\sqrt{\eta - 1}\right)}{\sqrt{\eta - 1}} + \frac{3 x^{*3}(\eta - 1) + 5 x^{*}}{\left[ 1 + (\eta - 1)\,x^{*2} \right]^{2}} \right\} + C_{4}. \tag{A19}$$

Applying the boundary conditions to the above equation, the two constants are

$$C_{3} = \frac{8\left[ 4\eta^{2} - 2(\eta + 1) \right]}{3\eta^{2}\sqrt{\eta - 1}\,\tan^{-1}\sqrt{\eta - 1} + (\eta - 1)(3\eta + 2)}, \quad C_{4} = \frac{4}{\eta - 1}. \tag{A20}$$

Inserting these into Eq. (A19) and integrating over \(x^{*} = -1\) to \(x^{*} = 0\), the damping function is as follows:

$$C_{P}(\eta) = -\int_{-1}^{0} p_{1,i}^{*}(x^{*}, \eta)\, \mathrm{d}x^{*} = \frac{2(2\eta + 1)}{3\eta + 2 + \frac{3\eta^{2}}{\sqrt{\eta - 1}}\tan^{-1}\sqrt{\eta - 1}} \cdot \left( \frac{1}{\eta} + \frac{3\tan^{-1}\sqrt{\eta - 1}}{\sqrt{\eta - 1}} \right) - \frac{4\eta - 1}{\eta(\eta - 1)} + \frac{3\tan^{-1}\sqrt{\eta - 1}}{(\eta - 1)^{3/2}}. \tag{A21}$$

The total pad pressure appears as the complex function

$$p^{*} = p_{o}^{*} + \left( p_{1,r}^{*} + i \cdot p_{1,i}^{*} \right) \delta\, e^{i\tau}. \tag{A22}$$

The locations of the load center under steady operation and under dynamic vibration are slightly different. The location of the load center for steady operation is calculated with

$$A_{P}(\eta) = 1 + \frac{\int_{-1}^{0} x^{*}\, p_{o}^{*}\, \mathrm{d}x^{*}}{\int_{-1}^{0} p_{o}^{*}\, \mathrm{d}x^{*}}, \tag{A23}$$

and the location of the load center for the dynamic load alone is calculated with

$$A_{Pd}(\eta) = 1 + \frac{\int_{-1}^{0} x^{*} \sqrt{p_{1,r}^{*2} + p_{1,i}^{*2}}\, \mathrm{d}x^{*}}{\int_{-1}^{0} \sqrt{p_{1,r}^{*2} + p_{1,i}^{*2}}\, \mathrm{d}x^{*}}. \tag{A24}$$

Since \(p_{1,r}^{*}\) is twice the static pressure \(p_{o}^{*}\) in magnitude and dominates \(p_{1,i}^{*}\), the value of Eq. (A24) does not differ much from that of Eq. (A23). A ratio \(R_{P}(\eta) = A_{Pd}(\eta)/A_{P}(\eta)\) was defined to compare the difference between Eqs. (A23) and (A24). This ratio is likewise defined for the exponential and linear sliders (see Figure 4d). This paper used the static load center for the Sommerfeld number evaluation and the dynamic load center for the stiffness and damping evaluation for all three types of sliding bearings. The notation \(A_{Ed}\) and \(A_{Ld}\) denotes the dynamic load centers of the exponential and linear sliders, respectively.
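To make the damping and load-center expressions concrete, here is a minimal evaluation sketch, again assuming NumPy; `static_load_center` integrates the steady pressure of Eq. (A13) numerically, and all names and grid choices are illustrative.

```python
import numpy as np

def damping(eta):
    """Closed-form damping function C_P(eta) from Eq. (A21), eta > 1."""
    s = np.sqrt(eta - 1.0)
    at = np.arctan(s)
    lead = 2.0 * (2.0 * eta + 1.0) / (3.0 * eta + 2.0 + 3.0 * eta**2 * at / s)
    return (lead * (1.0 / eta + 3.0 * at / s)
            - (4.0 * eta - 1.0) / (eta * (eta - 1.0))
            + 3.0 * at / s**3)

def static_load_center(eta, n=2001):
    """Load-center location A_P(eta) of Eq. (A23), using the steady
    pressure p_o*(x*) of Eq. (A13) and trapezoidal quadrature."""
    x = np.linspace(-1.0, 0.0, n)
    s = np.sqrt(eta - 1.0)
    at = np.arctan(s)
    denom = eta**2 * at + (eta + 2.0 / 3.0) * s
    p = (2.0 / denom) * (np.arctan(x * s)
         + (x * (x**2 - 1.0) * s**3 - x * eta**2 * at)
           / (1.0 + (eta - 1.0) * x**2) ** 2)
    return 1.0 + np.trapz(x * p, x) / np.trapz(p, x)

for eta in (1.5, 2.0, 3.0):
    print(f"eta={eta}: C_P={damping(eta):.4f}, A_P={static_load_center(eta):.4f}")
```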
Experimental & Molecular Medicine

Development of a colorectal cancer diagnostic model and dietary risk assessment through gut microbiome analysis

Jinho Yang, Andrea McDowell, Eun Kyoung Kim, Hochan Seo, Won Hee Lee, Chang-Mo Moon, Sung-Min Kym, Dong Ho Lee, Young Soo Park, Young-Koo Jee & Yoon-Keun Kim

Experimental & Molecular Medicine, volume 51, Article number 117 (2019)

Abstract

Colorectal cancer (CRC) is the third most common form of cancer and poses a critical public health threat due to the global spread of westernized diets high in meat, cholesterol, and fat. Although the link between diet and colorectal cancer has been well established, the mediating role of the gut microbiota remains elusive. In this study, we sought to elucidate the connection between the gut microbiota, diet, and CRC through metagenomic analysis of bacteria isolated from the stool of CRC (n = 89) and healthy (n = 161) subjects. This analysis yielded a dozen genera that were significantly altered in CRC patients, including increased Bacteroides, Fusobacterium, Dorea, and Porphyromonas prevalence and diminished Pseudomonas, Prevotella, Acinetobacter, and Catenibacterium carriage. Based on these altered genera, we developed two novel CRC diagnostic models: one through stepwise selection and a simplified model using two increased and two decreased genera. As both models yielded strong AUC values above 0.8, the simplified model was applied to assess diet-based CRC risk in mice. Mice fed a westernized high-fat diet (HFD) showed greater CRC risk than mice fed a regular chow diet. Furthermore, we found that nonglutinous rice, glutinous rice, and sorghum consumption reduced CRC risk in HFD-fed mice. Collectively, these findings support the critical mediating role of the gut microbiota in diet-induced CRC risk as well as the potential of dietary grain intake to reduce microbiota-associated CRC risk. Further study is required to validate the diagnostic prediction models developed in this study as well as the preventive potential of grain consumption to reduce CRC risk.

Introduction

Colorectal cancer (CRC) is the third most common cancer with the fourth highest cancer mortality in the world. Based on temporal profiles and demographic projections, CRC incidence is predicted to increase by 60% by 2030 [1]. Despite global efforts to clearly define the pathogenesis of CRC, the precise etiology of CRC remains unknown. However, it has been established that CRC incidence is affected by genetic, epigenetic and environmental factors, such as diet [2]. The incidence rate of CRC has been increasing, especially in developing countries. This increase may reflect a rise in the prevalence of CRC risk factors associated with westernization, which is characterized by rising unhealthy dietary habits, obesity and smoking [3, 4]. The globalized spread of unhealthy, westernized diets high in red, processed meat and saturated fats is attracting concern, as rising CRC risk has been reported to be related to increased consumption of meats, animal fats, and cholesterol-rich foods [4, 5]. People consuming a high-cholesterol diet have demonstrated higher CRC incidence than those who consume a low-cholesterol diet [6].
Additionally, it has been reported that native Africans with a low CRC risk and diets high in grain and vegetables are characterized by higher Prevotella abundance than African American counterparts with an increased risk of CRC development and diets high in red meat and fat, suggesting that gut bacteria also play a role in dietary CRC risk [7]. Although a variety of possible mechanisms through which a high-fat diet (HFD) can lead to CRC development have been proposed, the gut microbiota has recently been revealed to be a likely mediator between diet and CRC. Over 100 trillion bacteria reside in the human gut, forming a complex community that mediates metabolism and immune functions to both directly and indirectly affect human health and disease [8]. As the impact of the gut microbiota on metabolism and disease has been uncovered, the relationship between diet, the gut microbiota and CRC has begun to emerge. An HFD is known to increase intestinal permeability, which in turn raises the level of gut microbiota-associated lipopolysaccharide (LPS)-induced local inflammation; both phenomena have been independently associated with CRC [9, 10]. In turn, LPS has been reported to increase the synthesis and serum levels of leptin, a known growth factor for colonic epithelial cells [11]. Increased serum leptin levels have been shown to be associated with both HFD-induced obesity and CRC [12]. Furthermore, leptin has been demonstrated to induce carcinogenesis by increasing the proliferation of colon cancer cells in vitro [13]. Altogether, these findings demonstrate one example of the complex network of interactions among diet, the gut microbiota, and CRC and particularly highlight the mediating role of the gut microbiota.

Next-generation sequencing (NGS) has enabled researchers to determine the holistic bacterial community structure unique to each individual, and several studies have found that gut microbiota dysbiosis is associated with a variety of diseases, including colon cancer [14]. However, mixed results have prevented a clear consensus on the precise community dynamics between the gut microbiota and CRC. One of the most consistent bacterial groups shown to be associated with CRC carcinogenesis is Bacteroides spp., particularly Bacteroides fragilis. It has been shown that a high abundance of Bacteroides is associated with an increased risk of colon polyps, induces inflammation and contributes to CRC [2, 15]. Overall, decreased trends in lactic acid bacteria, increased Fusobacterium, and altered Bacteroides/Prevotella levels have also been reported in the CRC gut microbiota. While numerous factors may contribute to variations in CRC gut microbiome study outcomes, such as sample size, disease progression, age, sex, and regional dietary differences, one key confounding factor has yet to be addressed: bacterial extracellular vesicles (EVs). Bacteria release nanosized, lipid bilayer-encapsulated EVs composed of proteins, lipids, DNA, RNA, lipopolysaccharides, and metabolites. Released microbiota-derived EVs interact with host cells both locally and distally and control various cellular processes by transferring their cellular components [16]. The amount and composition of secreted extracellular vesicles is not static, and we have shown through metagenomic analysis that alterations in gut microbiota EVs are associated with a variety of conditions, such as inflammatory bowel disease and tight junction permeability [17, 18].
However, the impact of the diverse and dynamic composition of bacterial nucleic acids contained within microbiota-derived EVs has yet to be accounted for as a confounding factor in gut microbiota metagenomic analysis. To elucidate the mediating role of the gut microbiota in the relationship between diet and CRC, we sought to identify significant gut microbiota alterations associated with CRC. We isolated bacteria and removed all bacterial EVs from the stool of 89 CRC patients and 161 healthy controls and performed 16S rDNA metagenomic analysis on the resulting bacterial pellet. Through this analysis, we developed two CRC diagnostic models, based on stepwise selection of significantly altered gut microbiota-derived biomarkers (D1-model) and on two significantly increased and two significantly decreased bacterial genera (D2-model). Furthermore, we hypothesized that key bacteria associated with CRC can be regulated by diet, providing useful biomarkers for diet-mediated CRC risk. To verify this hypothesis, we conducted an in vivo study assessing gut microbial alterations and associated CRC risk in mice fed an HFD or an HFD supplemented with a variety of grains. The results of this study represent a promising advancement in CRC theragnostics, gut microbiota-based therapeutics, and gut microbiota metagenomic analysis methodology.

In total, 161 healthy people (76 males and 85 females) were enrolled from Haewoondae Baek Hospital, and 89 CRC patients (52 males and 37 females) were enrolled from Ewha Womans University Hospital and Seoul National University Bundang Hospital. The healthy subjects recruited in this study visited the hospital for a regular health screening; after completion of the checkup, we selected as healthy controls those confirmed to have no known diseases and normal laboratory test results. The exclusion criteria for healthy controls included gut disease diagnosis, medication use, and previous CRC diagnosis. Furthermore, we excluded those younger than 20 years old, cancer patients and pregnant women. There was no significant difference in age or sex between healthy controls and CRC patients (p > 0.05) (Table 1). The present study was approved by the Institutional Review Board of Ewha Womans University Hospital (IRB No. EUMC 2014-10-048-001), Seoul National University Bundang Hospital (B-1708/412-301) and Haewoondae Baek Hospital (IRB No. 129792-2015-064). The methods conducted in this study were in accordance with the approved guidelines, and informed consent was obtained from all subjects.

Table 1 Clinical subject demographic information

Mouse model

Six-week-old female C57BL/6 mice were purchased from Orient Bio Inc. (Seongnam, Korea). All mice were housed and maintained under standard laboratory conditions of 22 ± 2 °C and 50 ± 5% humidity with 12-hour day and night cycles throughout the course of the in vivo study.

In vivo mouse study to evaluate the effect of grain foods

Mice were randomly divided into nine groups (n = 5), including a control group fed a regular chow diet (RCD). The other eight groups were fed an HFD or an HFD supplemented with either nonglutinous rice, glutinous rice, rice syrup, brown rice, sorghum, buckwheat or acorn. Mice within the RCD control group were fed regular chow containing 18% dietary fat obtained from Research Diets, Inc. (New Brunswick, NJ, USA) for 4 weeks. Mice in the HFD group were fed a 60% fat diet, while mice in the grain diet groups were fed a 60% fat diet (Research Diets, Inc.)
with 2% of the appropriate grain powder administered in their drinking water. Mouse body weight and food intake were measured weekly. At the conclusion of the 4-week study period, all mice were sacrificed, and cecal fluid was collected to analyze the microbiota composition.

Bacterial and EV isolation and DNA extraction

Human feces and mouse cecal fluid samples were diluted in 10 mL of PBS for 24 hours and then filtered through a cell strainer. EVs contained in the stool samples were isolated by centrifugation at 10,000 × g for 10 min at 4 °C. After centrifugation, the resulting bacterial cell pellet and EV-containing supernatant were separated. DNA contained within the bacterial pellet and supernatant was extracted using a DNA isolation kit (PowerSoil DNA Isolation Kit, MO BIO Laboratory, CA, USA) following the standard protocol in the kit guide. The DNA extracted from the isolated bacterial cells and EVs in each sample was quantified using a QIAxpert system (QIAGEN, Hilden, Germany).

Metagenomic analysis

Bacterial genomic DNA was amplified with the 16s_V3_F (5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG-3′) and 16s_V4_R (5′-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC-3′) primers, which are specific for the V3–V4 hypervariable regions of the 16S rDNA gene. The libraries were prepared using PCR products according to the MiSeq System guide (Illumina, CA, USA) and quantified using a QIAxpert (QIAGEN). Each amplicon was then quantified, set at an equimolar ratio, pooled, and sequenced with a MiSeq (Illumina) according to the manufacturer's recommendations.

Analysis of the microbiota composition

Raw pyrosequencing reads obtained from the sequencer were filtered according to the barcode and primer sequences using MiSeq (Illumina). Taxonomic assignment was performed by the profiling program MDx-Pro ver.1 (MD Healthcare, Seoul, Korea), which selects high-quality sequencing reads with read lengths greater than 300 bp and Phred scores higher than 20 (>99% base-call accuracy). Operational taxonomic units (OTUs) were clustered using the sequence clustering algorithm CD-HIT. Subsequently, taxonomic assignment was carried out using UCLUST and QIIME against the 16S rDNA sequence database in Greengenes 8.15.13. Based on sequence similarities, taxonomic assignment to the genus level was performed on all 16S rDNA sequences, and the microbial composition at each taxon level was plotted in a stacked bar chart. If clusters could not be assigned at the genus level due to a lack of sequences or redundant sequences in the database, the taxon was assigned at the next highest level, as indicated in parentheses.

Development of a CRC diagnostic model

The selection of biomarkers for inclusion in the diagnostic model was based on the relative abundances of OTUs at the genus level. We selected candidate biomarkers with p-values < 0.05, fold-changes greater than two-fold, and average relative abundances greater than 0.1%. For the first diagnostic model (D1-model), we included age and sex as covariates and selected biomarkers for inclusion in the model by a stepwise selection method. The Akaike information criterion (AIC) was used to assess the model fitness of the candidate predictive diagnostic models using differing variables, and all candidate models were calculated using logistic regression. The second diagnostic model (D2-model) was established based on two increased and two decreased biomarkers as variables and was calculated by logistic regression. Based on the analysis of all the possible variable combinations using two increased and two decreased biomarkers, we selected the diagnostic model with the highest resulting AIC value as the simplified D2-model to be used to assess CRC risk during in vivo experimentation. Mann–Whitney statistics as an estimator of the AUC and the DeLong test for the change in AUC were used [19, 20], and 10-fold cross-validation was applied.
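The paper does not spell out its stepwise procedure beyond AIC-guided selection over logistic models, so the following is only a minimal forward-selection sketch, assuming pandas and statsmodels; `forward_select_logit` and all column names are illustrative, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

def forward_select_logit(X, y, covariates=()):
    """Greedy forward selection of genus-level features for a logistic
    model, scored by AIC (lower is better). `covariates` (e.g., age and
    sex) are always kept in the model."""
    selected = list(covariates)
    candidates = [c for c in X.columns if c not in selected]
    best_aic = np.inf
    improved = True
    while improved and candidates:
        improved = False
        scores = []
        for c in candidates:
            model = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
            scores.append((model.aic, c))
        aic, best = min(scores)
        if aic < best_aic:           # keep the feature only if AIC improves
            best_aic = aic
            selected.append(best)
            candidates.remove(best)
            improved = True
    return selected, best_aic

# Usage sketch: X is a DataFrame of relative abundances plus 'age' and
# 'sex' columns; y is 1 for CRC patients and 0 for healthy controls.
# selected, aic = forward_select_logit(X, y, covariates=("age", "sex"))
```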
Statistical analysis

To avoid potential bias caused by differing sequencing depths, samples with more than 3500 reads were rarefied to a depth of 3500 reads for subsequent analysis. Significant differences between the healthy control group and the CRC patient group were determined using the t test for continuous variables. Additionally, the Mann–Whitney test was performed to analyze microbiome differences in vivo. Findings were considered significant if the p-value was less than 0.05 or the adjusted p-value (Ad. p) was less than 0.05. The alpha diversity of the microbial composition was measured using the Chao1 index and rarefied to compare species richness. Shannon's index was used to measure the species diversity of samples between the healthy control group and the CRC patient group. All statistical analyses were performed using R version 3.4.1.

Fecal microbiota diversity of CRC patients vs. healthy controls

Microbial diversity within the human fecal samples was measured using the Chao1 and Shannon diversity indexes. Through this analysis, the healthy control group showed high richness (p < 0.001) in both Chao1 and Shannon index diversity. While there was an observable trend of increased alpha diversity and species richness in the control group relative to the case group, neither the Chao1 nor the Shannon index measures yielded a significant difference (Figs. 1a, b). CRC patients were shown to have 1.18 times more OTU reads than the healthy control subjects, while the number of valid reads in the normal group was significantly higher than that in the colorectal group, with 58537.1 (SD 24831.5) and 50880.8 (SD 27830.7) valid reads, respectively (p = 0.026).

Fig. 1: Alpha diversity and phylum-level gut microbiota composition. a Estimated species richness (Chao1 measure) and b alpha diversity defined by Shannon's index. c Heatmap of the gut microbiota at the phylum level, with columns representing individual control and CRC stool samples and rows corresponding to the identified phyla. Color scale based on relative OTU abundance, and hierarchical clustering based on Euclidean distance. d Average relative abundance of individual phyla, with error bars representing the standard error (SE). Significance between groups assessed by a t test (* = Ad. p < 0.05, ** = Ad. p < 0.01)

Compositional difference of the fecal microbiota of CRC patients vs. healthy controls

Based on metagenomic analysis at the phylum level, Firmicutes and Fusobacteria were significantly increased in CRC patient samples, while Proteobacteria was significantly decreased (p < 0.05). In particular, Proteobacteria was vastly altered, with a 0.45-fold difference between CRC and healthy subjects (Figs. 1c, d). At the class level, carriage of Gammaproteobacteria and Betaproteobacteria, affiliated with Proteobacteria, was significantly lower in the CRC patient group than in the healthy control group, while Bacilli and Fusobacteriia were significantly higher (p < 0.05) (Fig. 2a).
At the order level, the case group showed significantly lower carriage of Pseudomonadales, Burkholderiales, and Pasteurellales than the healthy control group, while Fusobacteriales, Lactobacillales, and Enterobacteriales were significantly higher in the CRC group than in the healthy control group (p < 0.05). Although Proteobacteria was decreased overall at the phylum level, the order Enterobacteriales showed increased carriage in the CRC group (Fig. 2b). At the family level, carriage of Pseudomonadaceae, Moraxellaceae, Prevotellaceae, and Pasteurellaceae was significantly lower in the CRC group than in the healthy control group, while carriage of Enterococcaceae, Porphyromonadaceae, Bacteroidaceae, Enterobacteriaceae, Ruminococcaceae, and Lachnospiraceae was significantly increased in the CRC group (p < 0.05). Pseudomonadaceae and Moraxellaceae showed particularly dramatic fold-changes of 0.07 and 0.02, respectively (Fig. 2c). At the genus level, Bacteroides, Ruminococcaceae(f), Enterobacteriaceae(f), Enterococcus, Ruminococcus, Porphyromonas, and [Ruminococcus] showed a significant increase in CRC patients, while Pseudomonas, Prevotella, Acinetobacter, Haemophilus, and Pseudomonadaceae(f) were significantly decreased (p < 0.05). Notably, Porphyromonas, Enterococcus, [Ruminococcus], Acinetobacter, Pseudomonadaceae(f), Pseudomonas and Haemophilus showed drastic fold changes of 85-, 20-, 4.4-, 0.01-, 0.02-, 0.08- and 0.36-fold, respectively (Figs. 3a, b).

Fig. 2: Composition of the gut microbiota at the class, order and family levels. The left-side heatmap plots and hierarchical clustering dendrograms show the gut microbiota composition of individual control and CRC samples at the a class, b order, and c family levels. Relative abundances of individual taxa (rows) in each sample (columns) are indicated in the associated color scale. Right-side bar plots highlight the differing average relative abundance of individual key taxa between control and CRC subject stool microbiota at the a class, b order, and c family levels. Significant differences were calculated by a t test (* = Ad. p < 0.05, ** = Ad. p < 0.01)

Fig. 3: Genus-level gut microbiota composition and CRC diagnostic prediction model. a Heatmap and clustering of individual control and CRC samples with a color scale indicating relative abundance at the genus level and hierarchical clustering measured by Euclidean distance. b Bar graph displaying the relative abundance of select genera and error bars showing the standard error (SE). Significance between control and CRC groups determined through Student's t test (* = Ad. p < 0.05, ** = Ad. p < 0.01). c ROC curves of CRC diagnostic prediction models developed through stepwise selection of significantly altered genera (D1-model) and two increased and two decreased genera (D2-model). Models were validated by a 10-fold cross-validation method to assess the area under the curve (AUC), sensitivity, specificity, and accuracy of each model

Diagnostic model for colorectal cancer

Bacterial biomarker candidates were selected based on three criteria: a statistically significant difference (p < 0.05) between the relative abundance in CRC and healthy subjects, a greater than two-fold change in relative abundance, and an average relative abundance above 0.1% at the genus level.
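In code, this screening step is straightforward; the following is a minimal sketch, assuming pandas and SciPy, with all names illustrative (abundances expressed as fractions, so 0.1% = 0.001).

```python
import pandas as pd
from scipy import stats

def select_candidates(abund, is_crc, alpha=0.05, min_mean=0.001, min_fold=2.0):
    """Filter genus-level relative abundances (samples x genera) by the
    three criteria: p < 0.05 (t test), >= 2-fold change in either
    direction, and mean abundance > 0.1%."""
    rows = []
    for genus in abund.columns:
        crc = abund.loc[is_crc, genus]
        ctl = abund.loc[~is_crc, genus]
        p = stats.ttest_ind(crc, ctl, equal_var=False).pvalue
        fold = (crc.mean() + 1e-12) / (ctl.mean() + 1e-12)  # avoid 0-division
        rows.append((genus, p, fold, abund[genus].mean()))
    df = pd.DataFrame(rows, columns=["genus", "p", "fold", "mean_abund"])
    keep = (df.p < alpha) & (df.mean_abund > min_mean) & \
           ((df.fold >= min_fold) | (df.fold <= 1.0 / min_fold))
    return df[keep].sort_values("p")

# Usage sketch: abund is a DataFrame of genus relative abundances and
# is_crc a boolean Series marking CRC samples.
# candidates = select_candidates(abund, is_crc)
```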
Following those criteria, Pseudomonas, Acinetobacter, Enterococcus, Haemophilus, [Ruminococcus], Pseudomonadaceae(f), Porphyromonas, Catenibacterium, Dorea, Fusobacterium, Erysipelotrichaceae(f), Gemellaceae(f), Cupriavidus, Peptostreptococcus, Parvimonas, Desulfovibrio, and Prevotella were selected as candidate CRC biomarkers. Eight biomarker candidates, Pseudomonadaceae(f), Enterococcus, Peptostreptococcus, Cupriavidus, Fusobacterium, [Ruminococcus], Desulfovibrio, and Erysipelotrichaceae(f), were selected using stepwise selection with age and sex as covariates. Using these 10 variables, we created the D1-model using logistic regression with the following function:

$$S_{D1} = \frac{e^{y_{D1}}}{1 + e^{y_{D1}}}, \quad \text{with} \quad y_{D1} = a x_{1} + b x_{2} + c x_{3} + d x_{4} + e x_{5} + f x_{6} + g x_{7} + h x_{8} + i x_{9} + j x_{10} + k.$$

In this D1-model, the values a to k are the independent parameters, and the variables x1 to x10 represent age, sex, and the relative abundances of Pseudomonadaceae(f), Enterococcus, Peptostreptococcus, Cupriavidus, Fusobacterium, [Ruminococcus], Desulfovibrio, and Erysipelotrichaceae(f), respectively. The values of these parameters are as follows: a is 0.06 (CI: 0.01–0.12), b is 1.22 (CI: 0.31–2.19), c is −749.7 (CI: −2679.3 to −137.9), d is 94.33 (CI: 49.77–201.65), e is 72380 (CI: 31695.5–120109.8), f is −5327000 (CI: −12332540 to −2361652), g is 409 (CI: 15.41–1520.48), h is 53.73 (CI: 3.17–123.70), i is 288.2 (CI: −39.70 to 855.63), j is 60.6 (CI: −0.31 to 145.80), and k is −6.146 (CI: −10.15 to −2.61). The D1-model test set yielded an AUC of 0.91 (SD 0.06), sensitivity of 0.85 (SD 0.14), specificity of 0.87 (SD 0.10), and accuracy of 0.86 (SD 0.06) (cut-off value of 0.51) (p = 0.00001) (Fig. 3c).

In addition to the stepwise selection-based D1-model, we sought to develop a simplified diagnostic prediction model using only two increased and two decreased genera of the 17 filtered biomarkers and no clinical covariates. Sixty model variations were screened following those criteria, with 8 models yielding an AUC above 0.8. Based on this analysis, the most appropriate and relevant markers for the simplified diagnostic prediction model were determined to be Prevotella, Catenibacterium, Dorea, and Porphyromonas. The simplified D2-model constructed using these four biomarkers and a logistic regression model was created with the following function:

$$S_{D2} = \frac{e^{y_{D2}}}{1 + e^{y_{D2}}}, \quad \text{with} \quad y_{D2} = a x_{1} + b x_{2} + c x_{3} + d x_{4} + e.$$

In the D2-model function, a, b, c, d, and e are the independent parameters, and x1, x2, x3, and x4 represent the relative abundances of Prevotella, Catenibacterium, Dorea, and Porphyromonas, respectively. The independent parameters' values are −4.51 (CI: −11.44 to 1.27) for a, −15.80 (CI: −60.01 to 7.03) for b, 148.00 (CI: 49.68–260.51) for c, 166.65 (CI: 32.47–444.20) for d, and −1.26 (CI: −2.13 to −0.46) for e. The above D2-model yielded an AUC of 0.80 (SD 0.14), sensitivity of 0.79 (SD 0.17), specificity of 0.82 (SD 0.16) and accuracy of 0.80 (SD 0.12) (cut-off value 0.27), based on analysis using the test set (p = 0.0004) (Fig. 3c). The difference between the D1-model and the D2-model was not significant (p = 0.858).

Compositional difference of the cecal microbiota of mice fed an HFD vs. RCD

In contrast with the microbiota composition of CRC patient samples, Firmicutes was significantly decreased in HFD-fed mice (p < 0.05), while Proteobacteria showed no difference.
Bacteroidetes showed significant enrichment and Actinobacteria significant depletion in HFD-fed mice relative to RCD-fed mice (Figs. 4a, b). At the class level, HFD-fed mice had higher carriage of Clostridia and Bacteroidia than RCD-fed mice, while Bacilli, Coriobacteriia, and Erysipelotrichi abundance was significantly greater in RCD-fed mice than in HFD-fed mice. Only Clostridia, affiliated with Firmicutes, was significantly increased in the HFD group (p < 0.05). At the order level, Clostridiales and Bacteroidales abundances in HFD-fed mice were higher than in RCD-fed mice, while Lactobacillales, Coriobacteriales, Erysipelotrichales, and Turicibacterales were less prevalent in HFD-fed mice than in RCD-fed mice (p < 0.05). Bacteroidales, Lactobacillales, Erysipelotrichales, and Turicibacterales experienced drastic alterations, with 38.5-fold, 0.19-fold, 0.07-fold, and 0.001-fold changes, respectively. Meanwhile, at the family level, proportions of Bacteroidaceae, Ruminococcaceae, Lachnospiraceae, Peptococcaceae, and Porphyromonadaceae were significantly higher in HFD-fed mice than in RCD-fed mice, while Lactobacillaceae, Coriobacteriaceae, and Erysipelotrichaceae proportions were significantly lower (p < 0.05). In particular, Bacteroidaceae, Lachnospiraceae, Peptococcaceae, and Porphyromonadaceae were sharply increased in HFD-fed mice, with 38.1-fold, 29.1-fold, 242.4-fold, and 48.7-fold increases, respectively, while Erysipelotrichaceae showed a steep 0.07-fold reduction in the HFD model.

Fig. 4: Gut microbiota composition and CRC risk differed between RCD and HFD mice. Heatmap and hierarchical clustering of gut microbiota relative abundance of individual control regular chow diet-fed (RCD) and high-fat diet-fed (HFD) mouse stool samples at the a phylum and c genus levels. The average relative abundances of individual taxa identified in RCD and HFD mouse stool at the b phylum and d genus levels. Standard errors (SEs) represented by error bars and significant differences between groups measured by the Mann–Whitney test (* = Ad. p < 0.05, ** = Ad. p < 0.01). e Predicted values of CRC risk in RCD and HFD mice based on the D2-model

Finally, at the genus level, Bacteroides, Ruminococcaceae(f), and [Ruminococcus] each showed highly significant increases in HFD-fed mice. Ruminococcus, however, demonstrated no significant difference between mice fed an HFD or RCD and accounted for a relatively low portion of the total microbiota. Furthermore, Oscillospira, Lachnospiraceae(f), rc4-4, and Parabacteroides were significantly enriched, while Lactobacillus, Adlercreutzia, Turicibacter, and Allobaculum were significantly depleted in HFD-fed mice. Bacteroides, rc4-4, [Ruminococcus], and Parabacteroides showed particularly higher carriage in HFD-fed mice than in RCD-fed mice, with 268-fold, 178-fold, 48-fold, and 38-fold increases, respectively. Meanwhile, Turicibacter and Allobaculum were extremely depleted in HFD-fed mice, comprising 1.3% and 2.4% of the control RCD-fed group microbiota, respectively, while accounting for less than $10^{-5}$% of the total population in the HFD group (Figs. 4c, d). After applying the simplified CRC diagnostic prediction model (D2-model) to the RCD and HFD groups, the analysis yielded a fitted value of 0.24 (SD 0.01) in the control RCD group, while the HFD group showed a fitted value of 0.44 (SD 0.07) (Fig. 4e). Additionally, applying the prediction model to discriminate between the two groups yielded an AUC of 1.00.
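As a concrete illustration of how the D2-model converts four relative abundances into a risk score, a minimal sketch follows; the coefficients are those reported above, but the example abundances are hypothetical.

```python
import math

# D2-model coefficients reported above: Prevotella, Catenibacterium,
# Dorea, Porphyromonas, and the intercept
A, B, C, D, E = -4.51, -15.80, 148.00, 166.65, -1.26

def d2_risk(prevotella, catenibacterium, dorea, porphyromonas):
    """Return the D2-model CRC risk score S_D2 in [0, 1] from genus-level
    relative abundances (expressed as fractions of total reads)."""
    y = A * prevotella + B * catenibacterium + C * dorea + D * porphyromonas + E
    return 1.0 / (1.0 + math.exp(-y))  # logistic transform e^y / (1 + e^y)

# Hypothetical example: abundances of 5%, 0.5%, 0.3%, and 0.05%;
# compare the score to the reported cut-off value of 0.27.
print(f"S_D2 = {d2_risk(0.05, 0.005, 0.003, 0.0005):.3f}")
```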
Grain consumption reduces CRC risk in mice

Microbial analysis was also conducted on the cecal content of mice after they were fed a variety of grain diets in combination with an HFD. At the phylum level, none of the grains assessed in this study were shown to significantly decrease Firmicutes, a phylum that was significantly increased in the CRC group. However, nonglutinous rice and rice syrup consumption led to a significant increase in Proteobacteria, a phylum shown to be significantly decreased in CRC patients (Fig. 5a, Table 2). At the class level, Gammaproteobacteria, a diminished class in CRC patients, was increased after consumption of rice syrup. At the order level, nonglutinous rice consumption was associated with a significant increase in the relative abundance of Pseudomonadales, an order decreased in the CRC group. At the family level, Ruminococcaceae, Lachnospiraceae, Bacteroidaceae, and Porphyromonadaceae were decreased in mice after consumption of grains compared to those in the HFD-fed mice, consistent with the differences between healthy subjects and CRC patients. Ruminococcaceae and Lachnospiraceae were significantly decreased after consumption of nonglutinous rice, glutinous rice, brown rice, and sorghum. Meanwhile, Bacteroidaceae was significantly decreased after consumption of nonglutinous rice, and Porphyromonadaceae showed decreased carriage in mice after consumption of nonglutinous rice, brown rice, and sorghum. Finally, at the genus level, nonglutinous rice consumption was associated with a significant decrease in HFD-induced elevated Bacteroides, Ruminococcus, and [Ruminococcus] levels and further caused significant recovery of depleted Acinetobacter. The grain types that caused a significant decrease in Ruminococcus and [Ruminococcus] included glutinous rice, brown rice, and sorghum (Fig. 5b, Table 3). These findings were then analyzed using the D2-model to determine the CRC risk in each group. Through this analysis, the HFD group yielded a fitted value of 0.44 (SD 0.07), while the nonglutinous rice-, glutinous rice-, rice syrup-, brown rice-, sorghum-, buckwheat-, and acorn-fed groups yielded fitted values of 0.25 (SD 0.01), 0.24 (SD 0.01), 0.36 (SD 0.05), 0.32 (SD 0.08), 0.24 (SD 0.01), 0.43 (SD 0.08), and 0.38 (SD 0.16), respectively. Nonglutinous rice, glutinous rice, and sorghum were the main grain types for which consumption was shown to decrease the level of CRC risk associated with an HFD (Fig. 5c).

Fig. 5: HFD mouse gut microbiota composition and associated CRC risk modulated by grain consumption. Heatmap and hierarchical clustering depict the differential microbiome relative abundance of HFD mouse stool after consumption of seven different grains at the a phylum and b genus levels. Rows represent taxa identified in each sample, and columns represent individual samples, grouped by diet type. c Predicted values of CRC risk in HFD mice and HFD mice fed seven different grains using the D2-model

Table 2 HFD mouse microbiota composition at the phylum level before and after grain consumption

Table 3 HFD mouse microbiota composition at the genus level before and after grain consumption

Discussion

In the present study, we developed two novel CRC diagnostic models based on metagenomic analysis of stool-derived bacterial pellets separated from bacterial EVs containing bacterial DNA. As seen in Supplementary Fig. 1, the total DNA yield of bacterial EVs isolated from stool contributed more than a quarter of the total bacterial DNA yield.
This finding is critical because it reveals that more than a quarter of the bacterial sequences obtained from stool originate from bacterial EVs rather than from the bacterial cells themselves. As the microbiota releases EVs differentially based on its metabolic state, proliferation, apoptosis, and community structure, the variable composition of bacterial EVs contained in stool poses a crucial confounding factor in gut microbiota metagenomic analysis [21]. To account for and eliminate potential bias caused by differential bacterial EV composition, we removed bacterial EVs contained within fecal samples via centrifugation and analyzed the resulting isolated bacterial pellet. This methodology is a distinguishing aspect of this study because gut microbiome analysis typically does not account for the potentially confounding factor of EV-originating bacterial DNA in stool. Therefore, we suggest that future gut microbiome studies consider the impact of differential microbial EV composition contained within fecal samples on microbiome profiling and take appropriate measures to remove EVs prior to bacterial analysis.

Although we determined a multitude of taxa at different levels that were significantly altered in CRC patients (Fig. 2), we selected only those at the genus level for inclusion in the diagnostic models to enhance model specificity and accuracy. Of the 17 significantly differing genera, 8 were selected via stepwise selection in the D1-model, in addition to age and gender. We also developed a second model, the D2-model, that included only 4 genera to offer a simplified model that is more accessible for practical diagnostic purposes. Although the D2-model using minimal biomarkers showed slightly lower accuracy, sensitivity, and specificity than the more robust D1-model, the D2-model demonstrated desirable strength as a diagnostic risk model (AUC 0.88). Overall, although the two models were similar in their CRC risk diagnosis strength, the D1-model can obtain more accurate results by utilizing both metagenomic analysis and clinical information, while the D2-model offers a more simplified option through a minimized, targeted approach. Although additional experimentation is necessary to refine the simplified, targeted D2-model, we found that four gut microbiome-derived biomarkers were sufficient to diagnose CRC risk.

Metagenomic analysis of CRC patient and healthy subject stool bacteria yielded a variety of altered genera known to be associated with CRC. A number of genera included in the D1-model, such as Enterococcus, Fusobacterium, Peptostreptococcus and Desulfovibrio, have been shown in previous studies to be enriched in CRC patients via gut microbiome metagenomic analysis [22–24]. Fusobacterium, in particular, has been thoroughly established as a pathogenic driver of CRC. Specifically, the overabundance of invasive Fusobacterium nucleatum is associated with CRC and has even been suggested to negatively impact patient outcomes [25–27]. Although it is difficult to directly establish a causal link between a single pathogenic species and CRC, possible mechanisms of carcinogenic action of invasive Fusobacterium spp. include induction of cascading inflammatory responses and promotion of colon tumor cell growth via β-catenin activation [28]. In the development of the targeted D2-model, two increased and two decreased bacterial genera in CRC patients were shown to yield the most accurate results: Dorea and Porphyromonas (increased) and Catenibacterium and Prevotella (decreased), respectively.
Dorea has previously been found to be more abundant in fecal samples of CRC patients than in those of healthy controls [29]. Dorea spp. have the ability to adhere to cancer cells, which may confer Dorea a competitive advantage in the cancerous colorectal environment [30]. Meanwhile, Porphyromonas has been reported to be enriched in CRC patients in several studies using NGS-based gut microbiota profiling methods [22, 24, 31]. Furthermore, Porphyromonas species have been implicated as biomarkers of orodigestive cancer, as increased carriage of pathogenic, proinflammatory, carcinogenic Porphyromonas gingivalis (P. gingivalis), as well as increased P. gingivalis-associated IgG serum antibody levels, has been associated with oral, colorectal and pancreatic cancers [32]. In total, these previous findings support the association between CRC and increased abundance of Dorea and Porphyromonas in the gut and highlight the opportunistic capacity of Dorea spp. and the potential carcinogenic role of Porphyromonas spp. in CRC. In contrast, Catenibacterium has seldom been associated with CRC, aside from a finding that Catenibacterium was absent in a Chinese cohort of CRC patients, which is in line with the results of this study [31]. Furthermore, we found that Prevotella spp. were significantly reduced in CRC patients, whereas multiple studies have shown increased Prevotella abundance in the gut microbiota and cancerous tissues of Chinese, American, and European CRC patients [31, 33, 34]. These findings may be explained by the connection between Tjalsma's proposed bacterial driver–passenger model of CRC and the dietary-based Bacteroides–Prevotella gradient. Tjalsma's model postulates that pathogenic bacterial drivers can disrupt gut microbiota balance through carcinogenic activity, such as proinflammatory signaling, secretion of genotoxic substances and other mechanisms, leading to premalignant adenomas, mutations, and ultimately carcinoma development in the colorectal cavity [35]. This model posits that bacterial drivers induce gut dysbiosis and drive carcinogenic activity, enabling the enrichment of other bacterial passengers that under normal circumstances cannot effectively colonize a healthy gut. However, here, we further suggest that gut dysbiosis initiated by bacterial drivers also causes commensal bacterial passengers unsuited to the cancerous gut environment to depart the gut, depending on the initial bacterial community structure. Recently, it has been posited that the gut microbiota community structure is characterized by a Prevotella–Bacteroides gradient that enables broad classification of gut enterotypes dominated by either Prevotella or Bacteroides [36]. These gut enterotypes are significantly affected by dietary habits, as diets high in red meat and animal fat are typically associated with high Bacteroides and low Prevotella abundance, while conversely, those who consume high amounts of dietary fiber and low amounts of animal fat and protein are associated with low Bacteroides and high Prevotella abundance. This dietary-based Prevotella–Bacteroides gradient may explain our finding that Bacteroides was significantly increased and Prevotella significantly decreased in Korean CRC patients. Previous studies have consistently reported an increased Prevotella abundance in CRC patients; however, these studies mostly assessed cohorts from regions known to have relatively low Prevotella abundance in the general population and low dietary fiber and high animal fat and protein consumption [35, 37].
However, Prevotella is one of the most dominant genera in the Korean gut microbiota, which has been largely attributed to the relatively low consumption of animal fat and proteins and the high consumption of complex fibers and grains in the typical Korean diet [38]. Therefore, our finding of increased Bacteroides and decreased Prevotella abundance in this Korean cohort suggests a critical shift in the Prevotella–Bacteroides gradient in the cancerous gut environment. Based on the culmination of these findings, we postulate that Prevotella may be a bacterial passenger that departs the Korean colon as carcinogenic bacterial drivers, such as increased Porphyromonas and Fusobacterium, induce a gut environment favorable to Bacteroides. Furthermore, we emphasize that regional differences in diet and the Prevotella–Bacteroides gradient of the target population must be considered to fully grasp the dynamic relationship between CRC and the gut microbiome and to develop accurate diagnostic prediction models.

While altered carriage of certain genera found in this study, such as Pseudomonas, Acinetobacter, Haemophilus, and Parvimonas, has been previously associated with CRC, these genera ultimately were not included in either diagnostic prediction model due to diminished model fitness [31, 33]. Interestingly, in addition to the finding that Acinetobacter and Pseudomonas were severely depleted in CRC patients, we conversely observed a general trend in healthy subjects that high Acinetobacter and Pseudomonas prevalence was associated with a sharp decrease in Bacteroides–Prevotella abundance. While discrete gut enterotypes have been established based on the dominance of either Bacteroides or Prevotella in the gut, based on our present findings, we suggest that dominance of Acinetobacter and Pseudomonas may represent a distinct third gut enterotype. In addition, as the Bacteroides–Prevotella enterotypes are strongly influenced by diet, further study is required to determine any distinguishing dietary patterns associated with Acinetobacter–Pseudomonas dominance, such as high grain consumption. Altogether, although our findings were generally congruent with previous studies, conflicting results may be attributed to our unique analysis method excluding DNA contributed by bacterial EVs as well as to differing regional dietary patterns in the sampled cohorts.

As dietary habits are well known to influence the risk of CRC incidence, we sought to further elucidate the relationship between the gut microbiota, diet, and CRC risk. While the impact of a westernized HFD on CRC and the gut microbiota has been well characterized, the protective effects of grain diets known to be associated with low CRC risk remain uncertain at the microbiota level. As previously discussed, populations at low risk of CRC development generally consume diets high in grain and dietary fiber and are characterized by a Prevotella-dominant gut enterotype. Dietary grains contain polyphenols and other antioxidant components known to promote health, reduce local inflammation in the colon and protect against colorectal cancer [39, 40]. Here, we assessed the ability of seven different grains to reduce CRC risk in mice fed an HFD and found that consumption of nonglutinous rice, glutinous rice, and sorghum led to the highest reduction in CRC risk.
Although a 2012 Consumer Reports article claimed that concerning levels of arsenic in rice may pose a cancer risk to those who consume rice, recent epidemiologic studies have found no cancer risk associated with rice consumption in the United States [41]. Furthermore, previous studies have shown that Asian diets high in rice consumption were associated with reduced cancer risk [42]. Moreover, high-performance liquid chromatography (HPLC) analysis has shown that nonglutinous rice in particular has a higher phenolic content than its glutinous counterpart [43]. In the present study, while both nonglutinous and glutinous rice showed similarly low CRC risk, nonglutinous rice was especially effective in stabilizing key altered genera shown to be associated with CRC, including Bacteroides, Lactobacillus, Ruminococcus, [Ruminococcus], and Acinetobacter. As glutinous and nonglutinous rice differ in phenolic content as well as in the structure, type and distribution of starch in the vicinity of the crushed cell layer, these differences may explain the differing trends of altered genera observed in this study [44]. Sorghum, meanwhile, has previously shown strong anti-CRC effects by suppressing the growth and metastasis of cancerous colon epithelial cells as well as protecting against gut microbiota alterations linked to colitis, an inflammatory condition commonly associated with CRC risk [45,46]. Other grains tested in this study, such as buckwheat, rice syrup and acorn, demonstrated limited effects in offsetting HFD-induced CRC risk, highlighting the differing efficacy of different grains in reducing CRC risk. In total, these findings demonstrate the protective and preventative effect of a variety of grain-based diets on the development of CRC risk via differential stabilization of key microbiota-based biomarkers. Furthermore, as Eastern countries, such as Korea, continue to transition from traditional rice-based diets to an increasingly westernized HFD, we emphasize the importance of rice consumption in the daily diet of vulnerable populations to improve the balance of the gut microbiota and counteract the rising trend in CRC risk. Risk assessment, early diagnosis, and prevention of disease, including CRC, are critical for an effective reduction of mortality and increased quality of life; therefore, great effort has recently been put into advancing early cancer diagnosis, including the development of effective prediction models and in vitro diagnostics (IVD) [47,48]. Although several diagnostic models have been developed to predict CRC risk, models limited to primarily epidemiological data have shown relatively low discriminatory power, with AUCs ranging from 0.61 to 0.78 [49,50]. Diagnostic models based on risk factor profiles obtained via in vitro methodologies, such as serum metabolomics, have shown much higher discriminating ability, with an AUC of up to 0.91; however, the high price of such IVD methodologies may prevent the widespread general use of such prediction models [51]. Thus, we aimed to develop a cost-effective diagnostic model that maintained the high discriminatory power expected from IVD methodologies by utilizing microbiome analysis. The simplified D2 model developed in the present study required only four key bacterial taxa to maintain an AUC of 0.88, showing the high discriminatory power contained within the gut microbiota for assessing CRC risk.
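As a rough illustration of how a small taxa-based classifier of this kind is scored, the sketch below (a minimal sketch assuming Python with NumPy and scikit-learn; the four-taxon abundance matrix, effect sizes, and cohort labels are synthetic placeholders, not the study's D2 model or data) fits a logistic regression on log-transformed relative abundances and reports the area under the ROC curve:

```python
# Illustrative scoring of a four-taxon diagnostic model by AUC.
# All data below are synthetic placeholders, not the study's cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 250  # comparable to the 89 CRC patients + 161 controls analyzed here
y = rng.integers(0, 2, size=n)                       # 1 = CRC, 0 = healthy
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 4))  # four hypothetical taxa
X[y == 1] *= np.array([1.5, 1.4, 0.6, 0.7])          # crude enrichment/depletion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(np.log(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(np.log(X_te))[:, 1])
print(f"hold-out AUC: {auc:.2f}")
```

The reported AUC of 0.88 came from the study's own model selection and validation; the sketch only shows the scoring mechanics.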
While this study strongly supports the potency of gut microbiota-based IVD, further clinical studies are necessary to confirm the efficacy of our diagnostic models and the effect of grain consumption on CRC patients at varying stages of disease progression. Unfortunately, we could not include patient BMI and smoking history as covariates in this study because we were unable to obtain sufficient information on those variables from the subjects utilized for diagnostic model development. We are continuously collecting more stool samples from both healthy subjects and CRC patients with a focus on obtaining as thorough clinical information and background as possible to allow the inclusion of more covariates in future microbiome-based disease diagnostic model development. In conclusion, our results highlight the important mediating role of the gut microbiota in the relationship between diet and CRC. First, we identified 16 significantly altered genera with potential as biomarkers of CRC risk and developed two novel gut microbiota-based CRC risk assessment models. We used the simplified D2 model to assess the role of diet in CRC risk and found that an HFD increased CRC risk in mice. Next, we compared the effect of an HFD and a variety of grain-based diets on microbiota composition and subsequent CRC risk in mice and found that nonglutinous rice, glutinous rice, and sorghum consumption vastly reduced CRC risk. Taken together, these results suggest the utility and validity of gut microbiota-based CRC risk assessment as well as dietary-based prevention to reduce CRC risk in the development of an effective CRC theragnostic strategy.

References

1. Arnold, M. et al. Global patterns and trends in colorectal cancer incidence and mortality. Gut 66, 683–691 (2017).
2. Keku, T. O. et al. The gastrointestinal microbiota and colorectal cancer. Am. J. Physiol. Gastrointest. Liver Physiol. 308, G351–G363 (2015).
3. Favoriti, P. et al. Worldwide burden of colorectal cancer: a review. Updates Surg. 68, 7–11 (2016).
4. Gandomani, H. S. et al. Colorectal cancer in the world: incidence, mortality and risk factors. BMRAT 4, 1656–1675 (2017).
5. Chao, A. et al. Meat consumption and risk of colorectal cancer. J. Am. Med. Assoc. 293, 172–182 (2005).
6. Järvinen, R., Knekt, P., Hakulinen, T., Rissanen, H. & Heliövaara, M. Dietary fat, cholesterol and colorectal cancer in a prospective study. Br. J. Cancer 85, 357–361 (2001).
7. Ou, J. et al. Diet, microbiota, and microbial metabolites in colon cancer risk in rural Africans and African Americans. Am. J. Clin. Nutr. 98, 111–120 (2013).
8. Nicholson, J. K. et al. Host-gut microbiota metabolic interactions. Science 336, 1262–1267 (2012).
9. Soler, A. P. et al. Increased tight junctional permeability is associated with the development of colon cancer. Carcinogenesis 20, 1425–1432 (1999).
10. Zhu, G. et al. Lipopolysaccharide increases the release of VEGF-C that enhances cell motility and promotes lymphangiogenesis and lymphatic metastasis through the TLR4-NF-κB/JNK pathways in colorectal cancer. Oncotarget 7, 73711–73724 (2016).
11. Mastronardi, C. A. et al. Lipopolysaccharide-induced leptin synthesis and release are differentially controlled by alpha-melanocyte-stimulating hormone. Neuroimmunomodulation 12, 182–188 (2005).
12. Rodríguez, A. J., Mastronardi, C. & Paz-Filho, G. Leptin as a risk factor for the development of colorectal cancer. Transl. Gastrointest. Cancer 2, 211–222 (2013).
13. Liu, Z. et al. High fat diet enhances colonic cell proliferation and carcinogenesis in rats by elevating serum leptin. Int. J. Oncol. 19, 1009–1014 (2001).
14. Gagnière, J. et al. Gut microbiota imbalance and colorectal cancer. World J. Gastroenterol. 22, 501–518 (2016).
15. Sears, C. L., Geis, A. L. & Housseau, F. Bacteroides fragilis subverts mucosal biology: from symbiont to colon carcinogenesis. J. Clin. Invest. 124, 4166–4172 (2014).
16. Yang, J., Kim, E. K., McDowell, A. & Kim, Y. K. Microbe-derived extracellular vesicles as a smart drug delivery system. Transl. Clin. Pharmacol. 26, 103–110 (2018).
17. Kang, C. et al. Extracellular vesicles derived from gut microbiota, especially Akkermansia muciniphila, protect the progression of dextran sulfate sodium-induced colitis. PLoS ONE 8, e76520 (2013).
18. Chelakkot, C. et al. Akkermansia muciniphila-derived extracellular vesicles influence gut permeability through the regulation of tight junctions. Exp. Mol. Med. 50, e450 (2018).
19. Demler, O. V., Pencina, M. J. & D'Agostino Sr, R. B. Misuse of DeLong test to compare AUCs for nested models. Stat. Med. 31, 2577–2587 (2012).
20. DeLong, E. R., DeLong, D. M. & Clarke-Pearson, D. L. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 44, 837–845 (1988).
21. Liu, Y., Defourny, K., Smid, E. J. & Abee, T. Gram-positive bacterial extracellular vesicles and their impact on health and disease. Front. Microbiol. 9, 1502 (2018).
22. Wang, T. et al. Structural segregation of gut microbiota between colorectal cancer patients and healthy volunteers. ISME J. 6, 320–329 (2012).
23. Wu, N. et al. Dysbiosis signature of fecal microbiota in colorectal cancer patients. Microb. Ecol. 66, 462–470 (2013).
24. Ahn, J. et al. Human gut microbiome and risk for colorectal cancer. J. Natl. Cancer Inst. 105, 1907–1911 (2013).
25. Kostic, A. D. et al. Genomic analysis identifies association of Fusobacterium with colorectal carcinoma. Genome Res. 22, 292–298 (2012).
26. Castellarin, M. et al. Fusobacterium nucleatum infection is prevalent in human colorectal carcinoma. Genome Res. 22, 299–306 (2012).
27. Flanagan, L. et al. Fusobacterium nucleatum associates with stages of colorectal neoplasia development, colorectal cancer and disease outcome. Eur. J. Clin. Microbiol. Infect. Dis. 33, 1381–1390 (2014).
28. Rubinstein, M. R. et al. Fusobacterium nucleatum promotes colorectal carcinogenesis by modulating E-cadherin/ß-catenin signaling via its FadA adhesin. Cell Host Microbe 14, 195–206 (2013).
29. Hibberd, A. A. et al. Intestinal microbiota is altered in patients with colon cancer and modified by probiotic intervention. BMJ Open Gastroenterol. 4, e000145 (2017).
30. Ho, C. L. et al. Engineered commensal microbes for diet-mediated colorectal-cancer chemoprevention. Nat. Biomed. Eng. 2, 27–37 (2018).
31. Chen, W. et al. Human intestinal lumen and mucosa-associated microbiota in patients with colorectal cancer. PLoS ONE 7, e39743 (2012).
32. Ahn, J., Segers, S. & Hayes, R. B. Periodontal disease, Porphyromonas gingivalis serum antibody levels and orodigestive cancer mortality. Carcinogenesis 33, 1055–1058 (2012).
33. Gao, Z. et al. Microbiota disbiosis is associated with colorectal cancer. Front. Microbiol. 6, 20 (2015).
34. Dai, Z. et al. Multi-cohort analysis of colorectal cancer metagenome identified altered bacteria across populations and universal bacterial markers. Microbiome 6, 70 (2018).
35. Tjalsma, H. et al. A bacterial driver–passenger model for colorectal cancer: beyond the usual suspects. Nat. Rev. Microbiol. 10, 575–582 (2012).
36. Gorvitovskaia, A., Holmes, S. P. & Huse, S. M. Interpreting Prevotella and Bacteroides as biomarkers of diet and lifestyle. Microbiome 4, 15 (2016).
37. Jain, A., Li, X. H. & Chen, W. N. Similarities and differences in gut microbiome composition correlate with dietary patterns of Indian and Chinese adults. AMB Express 8, 104 (2018).
38. Nam, Y. et al. Comparative analysis of Korean human gut microbiota by barcoded pyrosequencing. PLoS ONE 6, e22109 (2011).
39. Conlon, M. & Bird, A. The impact of diet and lifestyle on gut microbiota and human health. Nutrients 7, 17–44 (2015).
40. Ozdal, T. et al. The reciprocal interactions between polyphenols and gut microbiota and effects on bioaccessibility. Nutrients 8, 78 (2016).
41. Zhang, R. et al. Rice consumption and cancer incidence in US men and women. Int. J. Cancer 138, 555–564 (2016).
42. Hudson, E. A. et al. Characterization of potentially chemopreventive phenols in extracts of brown rice that inhibit the growth of human breast and colon cancer cells. Cancer Epidemiol. Biomark. Prev. 9, 1163–1170 (2000).
43. Setyaningsih, W., Hidayah, N., Saputro, I. E., Lovillo, M. P. & Barroso, C. G. Study of glutinous and non-glutinous rice (Oryza sativa) varieties on their antioxidant compounds. In Proc. International Conference on Plant, Marine and Environmental Sciences 1–2 (Kuala Lumpur, 2015).
44. Zhao, Y. et al. Ungerminated rice grains observed by femtosecond pulse laser second-harmonic generation microscopy. J. Phys. Chem. B 122, 7855–7861 (2018).
45. Darvin, P. et al. Sorghum polyphenol suppresses the growth as well as metastasis of colon cancer xenografts through co-targeting jak2/STAT3 and PI3K/Akt/mTOR pathways. J. Funct. Foods 15, 193–206 (2015).
46. Ritchie, L. E. et al. Polyphenol-rich sorghum brans alter colon microbiota and impact species diversity and species richness after multiple bouts of dextran sodium sulfate-induced colitis. FEMS Microbiol. Ecol. 91, fiv008 (2015).
47. Hendriksen, J. M. T. et al. Diagnostic and prognostic prediction models. J. Thromb. Haemost. 11, 129–141 (2013).
48. Seo, J. H., Lee, J. W. & Cho, D. The market trend analysis and prospects of cancer molecular diagnostics kits. Biomater. Res. 22, 2 (2018).
49. Park, Y. et al. Validation of a colorectal cancer risk prediction model among white patients age 50 years and older. J. Clin. Oncol. 27, 694–698 (2009).
50. Shin, A. et al. Risk prediction model for colorectal cancer: National Health Insurance Corporation study, Korea. PLoS ONE 9, e88079 (2014).
51. Nishiumi, S. et al. A novel serum metabolomics-based diagnostic approach for colorectal cancer. PLoS ONE 7, e40459 (2012).

Acknowledgements

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2016M3A9B6901516 and NRF-2017M3A9F3047497) and by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI) funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI17C1996).
These authors contributed equally: Jinho Yang, Andrea McDowell

Affiliations

MD Healthcare Inc., Seoul, Republic of Korea: Jinho Yang, Andrea McDowell, Eun Kyoung Kim, Hochan Seo, Won Hee Lee & Yoon-Keun Kim
Department of Health and Safety Convergence Science, Korea University, Seoul, Republic of Korea: Jinho Yang
Department of Internal Medicine, School of Medicine, Ewha Womans University, Seoul, Republic of Korea: Chang-Mo Moon
Department of Internal Medicine, Inje University Haeundae Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea: Sung-Min Kym
Department of Internal Medicine, Seoul National University Bundang Hospital, Gyeonggi-do, Republic of Korea: Dong Ho Lee & Young Soo Park
Department of Internal Medicine, Dankook University College of Medicine, Cheonan, Republic of Korea: Young-Koo Jee

Correspondence to Young-Koo Jee or Yoon-Keun Kim.

Yang, J., McDowell, A., Kim, E. K. et al. Development of a colorectal cancer diagnostic model and dietary risk assessment through gut microbiome analysis. Exp. Mol. Med. 51, 117 (2019). doi:10.1038/s12276-019-0313-4. Revised: 09 June 2019.

Editorial Summary

Colorectal cancer: factors influencing cancer risk

Tracing biomarkers of gut bacteria sheds light on how diet can affect gut composition and the risk of developing colorectal cancer (CRC). CRC is linked to diet, and scientists are examining how diet-associated changes in the gut microbiome may influence cancer risk. Yoon-Keun Kim, MD Healthcare Inc., Seoul, and Young-Koo Jee, Dankook University College of Medicine, Cheonan, South Korea, and co-workers analyzed bacterial populations in fecal samples from 89 CRC patients and 161 healthy controls. They found significant differences between patients and controls in 16 bacterial genera, these differences being potential biomarkers in diagnostic models for assessing CRC risk. The researchers used the models to determine CRC risk in mice fed different diets and found that the predicted risk was considerably reduced in mice on grain diets, especially rice or sorghum, compared to high-fat diets.
Crossed polygon

A crossed polygon is a polygon in the plane with a turning number or density of zero, with the appearance of a figure 8, infinity symbol, or lemniscate curve. Crossed polygons are related to star polygons, which have turning numbers greater than 1. The number of vertices with clockwise turning angles equals the number of vertices with counterclockwise turning angles. A crossed polygon will always have at least 2 edges or vertices intersecting or coinciding. Any convex polygon with 4 or more sides can be remade into a crossed polygon by swapping the positions of two adjacent vertices; a short computational illustration of this is given after the references. Crossed polygons are common as vertex figures of uniform star polyhedra.[1]

Crossed quadrilateral

Crossed quadrilaterals are most common, including:
• crossed parallelogram or antiparallelogram, a crossed quadrilateral with alternate edges of equal length.
• crossed trapezoid, a crossed quadrilateral with two opposite parallel edges.
• crossed rectangle, an antiparallelogram whose edges are two opposite sides and the two diagonals of a rectangle.
• crossed square, a crossed rectangle whose edges are two opposite sides and the two diagonals of a square.

[Figures: crossed square, crossed trapezoid, crossed parallelogram, crossed rectangles, and other crossed quadrilaterals]

See also
• Skew polygon

References
1. Coxeter, H.S.M., M. S. Longuet-Higgins and J.C.P. Miller, Uniform Polyhedra, Phil. Trans. 246 A (1954) pp. 401–450.
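As a computational illustration of the turning-number definition above (a minimal sketch in Python; the vertex lists are arbitrary examples, not taken from the reference), the function below sums the signed exterior angles around a closed polygon. A convex square yields a turning number of 1, while swapping two adjacent vertices produces a crossed square with turning number 0:

```python
import math

def turning_number(pts):
    """Signed turning number of a closed polygon given as a list of vertices."""
    n = len(pts)
    total = 0.0
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        ux, uy = bx - ax, by - ay  # incoming edge at vertex b
        vx, vy = cx - bx, cy - by  # outgoing edge at vertex b
        # signed exterior angle at vertex b, in (-pi, pi]
        total += math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)
    return round(total / (2 * math.pi))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
crossed_square = [(0, 0), (1, 0), (0, 1), (1, 1)]  # two adjacent vertices swapped
print(turning_number(square))          # 1
print(turning_number(crossed_square))  # 0
```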
Horocycle flows for laminations by hyperbolic Riemann surfaces and Hedlund's theorem

Matilde Martínez (Instituto de Matemática y Estadística Rafael Laguardia, Facultad de Ingeniería, Universidad de la República, J. Herrera y Reissig 565, C.P. 11300 Montevideo, Uruguay), Shigenori Matsumoto (Department of Mathematics, College of Science and Technology, Nihon University, 1-8-14 Kanda, Surugadai, Chiyoda-ku, Tokyo, 101-8308) and Alberto Verjovsky (Universidad Nacional Autónoma de México, Apartado Postal 273, Admon. de correos #3, C.P. 62251 Cuernavaca, Morelos, Mexico)

Journal of Modern Dynamics, 2016, 10: 113–134. doi: 10.3934/jmd.2016.10.113. Received April 2015; published May 2016.

Abstract: We study the dynamics of the geodesic and horocycle flows of the unit tangent bundle $(\hat M, T^1\mathfrak{F})$ of a compact minimal lamination $(M,\mathfrak{F})$ by negatively curved surfaces. We give conditions under which the action of the affine group generated by the joint action of these flows is minimal and examples where this action is not minimal. In the first case, we prove that if $\mathfrak{F}$ has a leaf which is not simply connected, the horocycle flow is topologically transitive.

Keywords: Hyperbolic surfaces, horocycle and geodesic flows, hyperbolic laminations, minimality.

Mathematics Subject Classification: Primary: 37C85, 37D40, 57R3.

Citation: Matilde Martínez, Shigenori Matsumoto, Alberto Verjovsky. Horocycle flows for laminations by hyperbolic Riemann surfaces and Hedlund's theorem. Journal of Modern Dynamics, 2016, 10: 113–134. doi: 10.3934/jmd.2016.10.113
On the transmission dynamics of Buruli ulcer in Ghana: Insights through a mathematical model

Farai Nyabadza1 & Ebenezer Bonyah2

Mycobacterium ulcerans is known to cause the Buruli ulcer. The association between the ulcer and environmental exposure has been documented. However, the epidemiology of the ulcer is not well understood. A hypothesised transmission involves humans being bitten by the water bugs that prey on mollusks, snails and young fishes. In this paper, a model for the transmission of Mycobacterium ulcerans to humans in the presence of a preventive strategy is proposed and analysed. The model equilibria are determined and conditions for the existence of the equilibria established. The model analysis is carried out in terms of the reproduction number \(\mathcal{R}_0\). The disease free equilibrium is found to be locally asymptotically stable for \(\mathcal{R}_0<1.\) The model is fitted to data from Ghana. The model is found to exhibit a backward bifurcation and the endemic equilibrium point is globally stable when \(\mathcal{R}_0>1.\) Sensitivity analysis showed that the Buruli ulcer epidemic is highly influenced by the shedding and clearance rates of Mycobacterium ulcerans in the environment. The model is found to fit reasonably well to data from Ghana and projections on the future of the Buruli ulcer epidemic are also made. The model reasonably fitted data from Ghana. The fitting process showed data that appeared to have reached a steady state and projections showed that the epidemic levels will remain the same for the projected time. The implications of the results to policy and future management of the disease are discussed.

Background

Buruli ulcer is caused by a pathogenic bacterium, and infection often leads to extensive destruction of skin and soft tissue through the formation of large ulcers usually on the legs or arms [28]. It is a devastating disease caused by Mycobacterium ulcerans. The ulcer is fast becoming a debilitating affliction in many countries [3]. It is named after a region called Buruli, near the Nile River in Uganda, where in 1961 the first large number of cases was reported. In Africa, close to 30,000 cases were reported between 2005 and 2010 [29]. Cote d'Ivoire, with the highest incidence, reported 2533 cases in 2010 [27]. This disease has dramatically emerged in several West African countries, such as Ghana, Cote d'Ivoire, Benin, and Togo in recent years [26]. The transmission mode of the ulcer is not well understood; however, residence near an aquatic environment has been identified as a risk factor for the ulcer in Africa [6, 16, 25]. Transmission is thus likely to occur through contact with the environment [20]. Recent studies in West Africa have implicated aquatic bugs as transmission vectors for the ulcer [18, 24]. An attractive hypothesis for a possible mode of transmission to humans was proposed by Portaels et al. [22]: water-filtering hosts (fish, mollusks) concentrate the Mycobacterium ulcerans bacteria present in water or mud and discharge them again to this environment, where they are then ingested by aquatic predators such as beetles and water bugs. These insects, in turn, may transmit the disease to humans by biting [18]. Person-to-person transmission is less likely. Aquatic bugs are insects found throughout temperate and tropical environments with abundant freshwater.
They prey, according to their size, on mollusks, snails, young fishes, and the adults and larvae of other insects that they capture with their raptorial front legs and bite with their rostrum. These insects can inflict painful bites on humans as well. In Ghana, where Buruli ulcer is endemic, the water bugs are present in swamps and rivers, where human activities such as farming, fishing, and bathing take place [18]. Research on Buruli ulcer has focused mainly on the socio-cultural aspects of the disease. The research recommends the need for Information, Education and Communication (IEC) intervention strategies to encourage early case detection and treatment, with the assumption that once people gain knowledge they will take the appropriate action to access treatment early [2]. IEC is defined as an approach which attempts to change or reinforce a set of behaviours among a targeted group regarding a problem. The IEC strategy is preventive in that it has the potential to enhance control of the ulcer [5]. It is also important to note that Buruli ulcer is treatable with antibiotics. A combination of rifampin and streptomycin administered daily for 8 weeks has the potential to eliminate Mycobacterium ulcerans bacilli and promote healing without relapse. Mathematical models have been used to model the transmission of many diseases globally. Many advances in the management of diseases have been born from mathematical modeling [11, 12, 14, 15]. Mathematical models can evaluate actual or potential control measures in the absence of experiments, see for instance [19]. To the best of our knowledge, very few mathematical models have been formulated to analyse the transmission dynamics of Mycobacterium ulcerans. This could be largely due to the elusive epidemiology of the Buruli ulcer. Aidoo and Osei [3] proposed a mathematical model of the SIR-type in an endeavour to explain the transmission of Mycobacterium ulcerans and its dependence on arsenic. In this paper, we propose a model which takes into account the human population, water bugs as vectors and fish as potential reservoirs of Mycobacterium ulcerans, following the transmission dynamics described in [8]. In addition we include the preventive control measures in a bid to capture the IEC strategy. Our main aim is to study the dynamics of the Buruli ulcer in the presence of a preventive control strategy, while emphasizing the role of the vector (water bugs) and fish and their interaction with the environment. The model is then validated using data from Ghana. This is crucial in informing policy and suggesting strategies for the control of the disease. This paper is arranged as follows: in "Methods", we formulate and establish the basic properties of the model. We also determine the steady states and analyse their stability. The results of this paper are given in "Results". Parameter estimation, sensitivity analysis and the numerical results on the behavior of the model are also presented in this section. The paper is concluded in "Discussion".

Methods

Model formulation

We consider a constant human population \(N_H(t),\) the vector population of water bugs \(N_V(t)\) and the fish population \(N_F(t)\) at any time t. The total human population is divided into three epidemiological subclasses of those that are susceptible \(S_H(t),\) the infected \(I_H(t)\) and the recovered who are still immune \(R_H(t)\).
The total vector (water bug) population at any time t is divided into two subclasses: susceptible water bugs \(S_V(t)\) and those that are infectious and can transmit the Buruli ulcer to humans, \(I_V(t).\) The total population reservoir of small fish is also divided into two compartments of susceptible fish \(S_F(t)\) and infected fish \(I_F(t).\) We also consider the role of the environment by introducing a compartment U, representing the density of Mycobacterium ulcerans in the environment. We make the following basic assumptions:
• Mycobacterium ulcerans are transferred only from vector (water bug) to the humans.
• There is homogeneity of human, water bug and fish populations' interactions.
• Infected humans recover and are temporarily immune, but lose immunity.
• Fish are preyed on by the water bugs.
Unlike some bacterial infections such as leprosy (caused by Mycobacterium leprae) and tuberculosis (caused by Mycobacterium tuberculosis), which are characterized by person-to-person contact transmission, it is hypothesized that Mycobacterium ulcerans is acquired through environmental contact and direct person-to-person transmission is rare [20]. A susceptible host (human) can be infected through biting by an infectious vector (water bug). We represent the effective rate at which an infectious vector bites a susceptible host as \(\beta _H\), and the incidence of new infections transmitted by water bugs is expressed by the standard incidence rate \( \displaystyle \beta _H \frac{S_H I_V}{N_H}.\) One can interpret \(\beta _H\) as a function of the biting frequency of the infected water bugs on humans, the density of infectious water bugs per human, the probability that a bite will result in an infection and the efficacy of the IEC strategy. In particular we can set \(\beta _H=(1-\epsilon )\tau \alpha \beta _H^*,\) where \(\epsilon \in (0,1)\) is the efficacy of the IEC strategy, \(\tau \) the number of water bugs per human host, \(\alpha \) the biting frequency (the biting rate of humans by a single water bug) and \(\beta _H^*\) the probability that a bite by an infected vector to a susceptible human will produce an infection. Susceptible water bugs are infected at a rate \(\displaystyle \beta _V \frac{S_V I_F}{N_V}\) through predation of infected fish and at a rate \(\displaystyle \eta _V\beta _V \frac{S_V U}{K}\) through contact with Mycobacterium ulcerans in the environment. Here \(\eta _V\) differentiates the infectivity potential of the fish from that of the environment. Assuming fish prey on infected water bugs, susceptible fish are infected at a rate \(\displaystyle \beta _F\frac{S_F I_V}{N_F}\) through predation of infected water bugs and at a rate \(\displaystyle \eta _F\beta _F \frac{S_F U}{K}\) through the environment. Here \(\eta _F\) is a modification parameter that differentiates the infectivity potential of the water bugs from that of the environment. The vector population and the fish populations are assumed to be constant. The growth functions are respectively given by \(g(N_V)\) and \(g(N_F),\) where $$\begin{aligned} g(N_V)=\mu _VN_V~~\mathrm{and}~~g(N_F)=\mu _FN_F. \end{aligned}$$ It is important to note that other types of functions can be chosen as growth functions. In this work we however assume that the growth functions are linear. There is a proposed hypothesis that environmental mycobacteria in the bottoms of swamps may be mechanically concentrated by small water-filtering organisms such as microphagous fish, snails, mosquito larvae, small crustaceans, and protozoa [8].
We assume that fish increase the environmental concentrations of Mycobacterium ulcerans at a rate \(\sigma _F.\) Humans are assumed not to shed any bacteria into the environment. Aquatic bugs release bacteria into the environment at a rate \(\sigma _V.\) The model does not include a potential route of direct contact with the bacterium in water. The birth rate of the human population is directly proportional to the size of the human population. The recovery of infected individuals is assumed to occur both spontaneously and through treatment. Research has shown that localized lesions may spontaneously heal but, without treatment, most cases of Buruli ulcer result in physical deformities that often lead to physiological abnormalities and stigmas [4]. We now describe briefly the transmission dynamics of Buruli ulcer: New susceptibles enter the population at a rate of \(\mu _H N_H.\) Buruli ulcer sufferers do not recover with permanent immunity; they lose immunity at a rate \(\theta \) and become susceptible again. Susceptibles are infected through interaction with infected water bugs, with infection driven by water bugs biting susceptible humans. Once infected, individuals are allowed to recover either spontaneously or through antibiotic treatment at a rate \(\gamma .\) In this model, the human population is assumed to be constant over the modeling time with the birth and death rates being equal. The compartment \(S_V\) tracks the changes in the susceptible water bug population, recruited at a rate \(\mu _V N_V\). The infection of water bugs is driven by two processes: their interaction with infected fish and with the environment. The natural mortality of the water bugs occurs at a rate \(\mu _V.\) Similarly, the compartment \(S_F\) tracks the changes in the susceptible fish population, recruited at a rate \(\mu _FN_F\). The infection of fish is also driven by two processes: their interaction with infected water bugs and with the environment. Fish's natural mortality rate is \(\mu _F.\) The growth of Mycobacterium ulcerans in the environment is driven by the shedding of the bacteria by infected water bugs and fish into the environment. The bacteria are assumed to die naturally at a rate \(\mu _E.\) The possible interrelations between humans, the water bug and fish are represented by the schematic diagram below (Fig. 1).

Fig. 1: Proposed transmission dynamics of the Buruli ulcer among humans, fish, water bugs and the environment (U)

The descriptions of the parameters that describe the flow rates between compartments are given in Table 1.

Table 1: Description of parameters used in the model

The dynamics of the ulcer can be described by the following set of nonlinear differential equations: $$\begin{aligned} \left.
\begin{array}{lcl} \displaystyle \frac{dS_H}{dt}&{}= &{} \displaystyle \mu _HN_H +\theta R_H - \beta _H\frac{S_HI_V}{N_H}-{\mu _H}S_H,\\ \displaystyle \frac{dI_H}{dt}&{} = &{} \displaystyle \beta _H\frac{S_HI_V}{N_H} - ({\mu _H} +\gamma )I_H,\\ \displaystyle \frac{dR_H}{dt}&{} =&{} \displaystyle \gamma I_H-(\mu _H+\theta )R_H,\\ \displaystyle \frac{dS_V}{dt}&{} = &{}\displaystyle \mu _VN_V -\beta _V\frac{S_VI_F}{N_V}-\eta _V\beta _V\frac{S_VU}{K}- {\mu _V}S_V,\\ \displaystyle \frac{dI_V}{dt}&{} = &{} \displaystyle \beta _V\frac{S_VI_F}{N_V}+\eta _V\beta _V\frac{S_VU}{K}- {\mu _V}I_V,\\ \displaystyle \frac{dS_F}{dt}&{} = &{} \displaystyle \mu _FN_F-\beta _F\frac{S_FI_V}{N_F} -\eta _F\beta _F\frac{S_FU}{K}- {\mu _F}S_F,\\ \displaystyle \frac{dI_F}{dt}&{} = &{} \displaystyle \beta _F\frac{S_FI_V}{N_F}+\eta _F\beta _F\frac{S_FU}{K}- {\mu _F}I_F,\\ \displaystyle \frac{dU}{dt}&{} = &{} \displaystyle \sigma _FI_F+\sigma _VI_V- {\mu _E}U. \end{array} \right\} \qquad (1) \end{aligned}$$ We assume that all the model parameters are positive and the initial conditions of the model system (1) are given by $$\begin{aligned} S_H(0)= {} S_{H0} > 0, I_H(0) = I_{H0}\ge 0, R_H(0)= R_{H0}= 0,~S_V(0) = S_{V0} > 0,\\ I_V(0)= {} I_{V0}\ge 0,~S_F(0) = S_{F0} > 0, ~I_F(0) = I_{F0}\ge 0 \quad \text {and}\quad U(0)=U_0>0. \end{aligned}$$ We arbitrarily scale the time t by the quantity \({1 \over {\mu _V }}\) by letting \(\tau = \mu _Vt\) and introduce the following dimensionless parameters: $$\begin{aligned} \tau = {} \mu _Vt,~ \beta _h=\frac{\beta _H}{\mu _V},~\mu _h=\frac{\mu _H}{\mu _V},~ \theta _h=\frac{\theta }{\mu _V}, ~\gamma _h=\frac{\gamma }{\mu _V}, ~m_1=\frac{N_H}{N_V},~m_2=\frac{N_F}{N_V},\\ m_3= {} \frac{1}{m_2},~m_4=\frac{N_F}{K},~m_5=\frac{N_V}{K},~ \mu _f=\frac{\mu _F}{\mu _V},~\beta _f=\frac{\beta _F}{\mu _V},\\ \sigma _f= {} \frac{\sigma _F}{\mu _V},~\sigma _v=\frac{\sigma _V}{\mu _V},~\beta _v=\frac{\beta _V}{\mu _V} \;\mathrm{and}\;\mu _e=\frac{\mu _E}{\mu _V}. \end{aligned}$$ So, system (1) can be non-dimensionalised by setting $$\begin{aligned} s_h=\frac{S_H}{N_H},~i_h=\frac{I_H}{N_H},~r_h=\frac{R_H}{N_H},~i_v=\frac{I_V}{N_V},~s_f=\frac{S_F}{N_F},~i_f=\frac{I_F}{N_F}\;\mathrm{and}\;\displaystyle u=\frac{U}{K}. \end{aligned}$$ The forces of infection for humans, water bugs and fish are respectively $$\begin{aligned} \lambda _H=\beta _h m_1i_v,~~\lambda _V=\beta _v m_2i_f+\eta _V\beta _v u,~~\lambda _F=\beta _f m_3i_v+\eta _F\beta _f u. \end{aligned}$$ Given that the total number of bites made by the water bugs must equal the number of bites received by the humans, \(m_1\) is a constant, see [9]. Similarly \(m_2\) is constant and so is \(m_3.\) We also note that since \(N_F\) and \(N_V\) are constants, \(m_4\) and \(m_5\) are constants. Given that \(\displaystyle s_h+i_h+r_h=1,~s_v+i_v=1,~s_f+i_f=1\) and \(\displaystyle 0\le u\le 1,\) system (1) can be reduced to the following system of equations. For convenience we retain the capitalised symbols, so that we can still respectively write \(\displaystyle s_h,~i_h,~i_v,~i_f\) and \(\displaystyle u\) as \(\displaystyle S_H,~I_H,~I_V,~I_F\) and \(\displaystyle U.\) $$\begin{aligned} \left.
\begin{array}{lcl} \displaystyle \frac{dS_H}{d\tau }&{}= &{} \displaystyle (\mu _h +\theta _h)(1- S_H) -\theta _h I_H - \lambda _H S_H,\\ \displaystyle \frac{dI_H}{d\tau }&{} = &{} \displaystyle \lambda _HS_H - ({\mu _h} +\gamma _h)I_H,\\ \displaystyle \frac{dI_V}{d\tau }&{} = &{} \displaystyle \lambda _V(1-I_V)-\mu _vI_V,\\ \displaystyle \frac{dI_F}{d\tau }&{} = &{} \displaystyle \lambda _F(1-I_F)- {\mu _f}I_F,\\ \displaystyle \frac{dU}{d\tau }&{} = &{} \displaystyle m_4\sigma _f I_F+m_5\sigma _vI_V- {\mu _e}U, \end{array} \right\} \qquad (2) \end{aligned}$$ where, under the time scaling \(\tau =\mu _V t,\) the nondimensional water bug removal rate is \(\mu _v=\mu _V/\mu _V=1.\)

Feasible region

Note that \(\displaystyle \frac{dU}{d\tau }=m_4\sigma _f I_F+m_5\sigma _vI_V- {\mu _e}U\le m_4\sigma _f +m_5\sigma _v-\mu _eU.\) Through integration we obtain \(\displaystyle U\le \frac{m_4\sigma _f +m_5\sigma _v}{\mu _e}.\) The feasible region (the region where the model makes biological sense) for the system (2) is in \(\mathbb {R}^5_+\) and is represented by the set $$\begin{aligned} \Omega = \left\{ (S_H,I_H,I_V,I_F,U)\in \mathbb {R}^5_+ \,|\, 0\le S_H+I_H\le 1,\ 0\le I_V\le 1,\ 0\le I_F\le 1,\ 0\le U\le \frac{m_4\sigma _f +m_5\sigma _v}{\mu _e}\right\} , \end{aligned}$$ where the basic properties of local existence, uniqueness and continuity of solutions are valid for the Lipschitzian system (2). The populations described in this model are assumed to be constant over the modelling time. The solutions of system (2) starting in \(\Omega \) remain in \(\Omega \) for all \(t>0.\) Thus, \(\Omega \) is positively invariant and it is sufficient to consider solutions in \(\Omega .\)

Positivity of solutions

We wish to show that for any non-negative initial conditions of system (2), say \(\displaystyle (S_{H0},I_{H0},I_{V0},I_{F0},U_0),\) the solutions remain non-negative for all \(\displaystyle \tau \in [0,\infty );\) that is, all the state variables remain non-negative and the solutions of the system (2) with positive initial conditions will remain positive for all \(\tau > 0\). We thus state the following lemma.

Lemma 1 Given that the initial conditions of system (2) are positive, the solutions \(S_H(\tau ),~I_H(\tau ),~I_V(\tau ),~I_F(\tau )\) and \(U(\tau )\) are non-negative for all \(\tau >0\).

Proof Assume that $$\begin{aligned} \hat{\tau } = \sup \left\{ \tau >0: S_H>0, I_H>0, I_V>0, I_F>0, U >0\right\} \in ( 0, \tau ]. \end{aligned}$$ Thus \(\hat{\tau } > 0,\) and it follows directly from the first equation of the system (2) that $$\begin{aligned} \frac{dS_H}{d\tau } \ge - (\theta _h + \lambda _H)S_H. \end{aligned}$$ We thus have $$\begin{aligned} S_H(\tau )\ge S_{H0}\exp \left[ -\left( \theta _h \tau + \int _0^\tau \lambda _H(\varsigma )d\varsigma \right) \right] . \end{aligned}$$ Since the exponential function is always positive and \(S_{H0}=S_H(0)>0,\) the solution \(S_H(\tau )\) will thus always be positive. From the second equation of (2), $$\begin{aligned} \frac{dI_H}{d\tau }&\ge -(\mu _h+\gamma _h)I_H,\\ \Rightarrow I_H&\ge I_{H0}e^{-(\mu _h +\gamma _h)\tau }>0. \end{aligned}$$ Similarly, it can be shown that \(I_V(\tau ) > 0,~I_F(\tau ) > 0\) and \(U(\tau ) > 0\) for all \( \tau > 0,\) and this completes the proof. \(\square \)

Steady states analysis

The disease free equilibrium

In this section, we solve for the equilibrium points by setting the right hand side of system (2) to zero. This direct calculation shows that system (2) always has a disease free equilibrium point $$\begin{aligned} \mathbf{\mathcal {E}_0}=(1,0,0,0,0).
\end{aligned}$$ We have the following result on the local stability of the disease free equilibrium.

Theorem 1 The disease free equilibrium \(\mathbf{\mathcal {E}_0},\) whenever it exists, is locally asymptotically stable if \(\mathcal{R}_0 <1\) and unstable otherwise.

Proof The Jacobian matrix of system (2) at the equilibrium point \(\mathbf{\mathcal {E}_0}\) is given by $$\begin{aligned} J_{\mathbf{\mathcal {E}_0}}&= \left( \begin{array}{ccccc} -(\mu _h+\theta _h) &{}-\theta _h &{}-m_1\beta _h&{} 0&{}0 \\ 0&{} -(\mu _h+\gamma _h) &{}m_1\beta _h&{}0&{}0 \\ 0&{} 0 &{}-1&{}m_2\beta _v&{}\eta _v\beta _v\\ 0&{} 0 &{}m_3\beta _f&{}-\mu _f&{}\eta _f\beta _f\\ 0&{} 0 &{}m_5\sigma _v&{}m_4\sigma _f&{}-\mu _e \end{array} \right) . \end{aligned}$$ It can be seen that the eigenvalues of \(\displaystyle J_{\mathbf{\mathcal {E}_0}}\) are \( -(\mu _h+\theta _h),~ -(\mu _h+\gamma _h)\) and the solutions of the characteristic polynomial $$\begin{aligned} P(\vartheta )=\vartheta ^3+a_2\vartheta ^2 + a_1\vartheta +\mu _e\mu _f(1-\mathcal{R}_0)=0, \end{aligned}$$ where $$\begin{aligned} a_2&= {} 1+\mu _e+\mu _f,\\ a_1&= {} \mu _e+\mu _f+\mu _e\mu _f-(\beta _f\beta _v+m_4\eta _f\sigma _f\beta _f+m_5\eta _v\sigma _v\beta _v)~~\mathrm{and}\\ \mathcal{R}_0&= {} R_0^1+R_0^2+R_0^3, \end{aligned}$$ with $$\begin{aligned} R_0^1=\frac{m_4\eta _f\sigma _f\beta _f}{\mu _e\mu _f},~R_0^2=\frac{m_5\eta _v\sigma _v\beta _v}{\mu _e} \quad \mathrm{and} \quad R_0^3=\beta _f\beta _v\left( \frac{\mu _e+m_3m_4\eta _v\sigma _f+m_2m_5\eta _f\sigma _v}{\mu _e\mu _f}\right) . \end{aligned}$$ The solutions of \(P(\vartheta )=0\) have negative real parts only if \(\displaystyle \mathcal{R}_0<1,\) by the Routh–Hurwitz criterion. We can thus conclude that the disease free equilibrium is locally asymptotically stable whenever \(\displaystyle \mathcal{R}_0<1.\) \(\square \)

We note that \(\displaystyle \mathcal{R}_0\) is the reproduction number of the model system (2) and does not depend on the human population size. The model reproduction number is a sum of three terms. The terms \(R_0^1\) and \(R_0^2\) represent the contributions of fish and water bugs respectively to the infection dynamics. The term \(R_0^3,\) which is not very common in many epidemiological models, captures the combined contribution of the water bugs, the fish and their shedding of Mycobacterium ulcerans into the environment. So, the infection is driven by the water bugs, the fish and the density of the bacterium in the environment. The model reproduction number increases linearly with the shedding rates of Mycobacterium ulcerans into the environment by fish and water bugs and with the effective contact rates \(\beta _f\) and \(\beta _v\). It decreases with increasing removal rates of the fish and of Mycobacterium ulcerans. So the control of the ulcer depends largely on environmental management.
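Before tackling the endemic states, it may help to see \(\mathcal{R}_0\) evaluated numerically. The sketch below (assuming Python; the parameter values are illustrative placeholders, not values fitted to the Ghana data) implements the three components \(R_0^1\), \(R_0^2\) and \(R_0^3\) exactly as derived above:

```python
# Numerical evaluation of R0 = R0^1 + R0^2 + R0^3 for system (2).
# All parameter values below are illustrative placeholders.

def reproduction_number(beta_f, beta_v, eta_f, eta_v, sigma_f, sigma_v,
                        mu_e, mu_f, m2, m3, m4, m5):
    R01 = m4 * eta_f * sigma_f * beta_f / (mu_e * mu_f)  # fish contribution
    R02 = m5 * eta_v * sigma_v * beta_v / mu_e           # water bug contribution
    R03 = beta_f * beta_v * (mu_e + m3 * m4 * eta_v * sigma_f
                             + m2 * m5 * eta_f * sigma_v) / (mu_e * mu_f)
    return R01 + R02 + R03

print(reproduction_number(beta_f=0.3, beta_v=0.3, eta_f=0.4, eta_v=0.4,
                          sigma_f=0.2, sigma_v=0.2, mu_e=0.8, mu_f=0.5,
                          m2=1.0, m3=1.0, m4=0.5, m5=0.5))
```

Consistent with the observation above, \(\mathcal{R}_0\) grows linearly with the shedding rates \(\sigma _f\) and \(\sigma _v\) and decreases as the clearance rate \(\mu _e\) of Mycobacterium ulcerans in the environment increases.

The endemic equilibrium

The endemic equilibrium is much more tedious to obtain.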
Given that \(\displaystyle \lambda ^*_H=\beta _hm_1I_V^*,\) from the first and second equations of system (2) we have $$\begin{aligned} S_H^*=\frac{1}{1+\mathcal{A}I_V^*} \quad \mathrm{and}\quad I_H^*=\frac{m_1\beta _hI_V^*}{(\mu _h+\gamma _h)(1+\mathcal{A}I_V^*)}, \end{aligned}$$ where \(\displaystyle \mathcal{A}=\frac{m_1\beta _h(\mu _h+\theta _h+\gamma _h)}{(\mu _h+\gamma _h)(\mu _h+\theta _h)}.\) The last equation of system (2) can be written as $$\begin{aligned} U^*=\vartheta _1I_F^*+\vartheta _2I_V^*, \quad \mathrm{where}~\vartheta _1=\frac{m_4\sigma _f}{\mu _e}~ \mathrm{and}~\vartheta _2=\frac{m_5\sigma _v}{\mu _e}. \end{aligned}$$ Thus $$\begin{aligned} \lambda ^*_F=\vartheta _3I_V^*+\vartheta _4I_F^*~\mathrm{and}~\lambda ^*_V=\vartheta _5I_V^*+\vartheta _6I_F^*, \end{aligned}$$ where \(\displaystyle \vartheta _3=\beta _f(m_3+\vartheta _2\eta _f),~\vartheta _4=\vartheta _1\beta _f\eta _f,~\vartheta _5=\vartheta _2\eta _v\beta _v ~\mathrm{and}~\vartheta _6=\beta _v(m_2+\vartheta _1\eta _v).\) From the third and fourth equations of system (2) we have $$\begin{aligned} I_F^*= & {} \frac{I_V^*[1-\vartheta _5(1-I_V^*)]}{\vartheta _6(1-I_V^*)}, \qquad (3)\end{aligned}$$ $$\begin{aligned} I_V^*= & {} \frac{I_F^*[\mu _f-\vartheta _4(1-I_F^*)]}{\vartheta _3(1-I_F^*)}. \qquad (4) \end{aligned}$$ Substituting (3) into (4) we obtain \(\displaystyle I_V^*=0\) and the cubic equation $$\begin{aligned} f(I_V^*)=a_3{I_V^*}^3+a_2{I_V^*}^2+a_1I_V^*+a_0=0, \qquad (5) \end{aligned}$$ where $$\begin{aligned} a_0= & {} \frac{\beta _f\mu _f}{\mu _e}\left( \mu _em_2+m_4\eta _v\sigma _f\right) \left[ \mathcal{R}_0-1\right] ,\\ a_1= & {} \vartheta _4\vartheta _5(1+\vartheta _6)+\vartheta _5(\vartheta _4+\vartheta _3\vartheta _6)+\vartheta _3\vartheta _5\vartheta _6-[\vartheta _3\vartheta _6(1+\vartheta _6)+ \vartheta _5(\vartheta _4\vartheta _5+\mu _f\vartheta _6)+\vartheta _4\vartheta _5^2],\\ a_2= & {} (1+\vartheta _6)(\vartheta _4+\vartheta _3\vartheta _6)+\vartheta _5(\vartheta _4\vartheta _5+\mu _f\vartheta _6)+\vartheta _6(\vartheta _3\vartheta _6+\mu _f\vartheta _5)-[ 2\vartheta _4\vartheta _5(1+\vartheta _6)+\vartheta _6(\vartheta _3\vartheta _5+\mu _f)],\\ a_3= & {} -\frac{m_5\beta _f\eta _v\sigma _v\beta _v^2}{\mu _e^2}\left( (\mu _em_2+m_4\eta _v\sigma _f)m_3+m_2m_5\eta _f\sigma _v\right) <0. \end{aligned}$$ Note that $$\begin{aligned} a_0 \left\{ \begin{array}{ll}> 0\quad \mathrm{if}\quad \mathcal{R}_0>1\\ <0\quad \mathrm{if}\quad \mathcal{R}_0<1. \end{array}\right. \end{aligned}$$ Since $$\begin{aligned} f'(I_V^*) = 3a_3(I_V^*)^2 + 2a_2I_V^* + a_1 , \qquad (6) \end{aligned}$$ the turning points of equation (5) are given by $$\begin{aligned} (I_V^*)^{1,2} = \dfrac{-a_2 \pm \sqrt{a_2^2 - 3 a_1a_3}}{3a_3}. \qquad (7) \end{aligned}$$ The discriminant of solutions (7) is \(\triangle = a_2^2 - 3 a_1a_3\). We now focus on the sign of the discriminant. If \(\triangle <0\), then \(f(I_V^*)\) has no real turning points, which implies that \(f(I_V^*)\) is a strictly monotonic function. The sign of \(f'(I_V^*)\) is crucial in determining the monotonicity. Through completing the square, equation (6) can be written as $$\begin{aligned} f'(I_V^*) = 3a_3 \left[ \left( {I_V^*} + \dfrac{a_2}{3a_3} \right) ^2 + \dfrac{1}{9a_3^2}(3 a_1a_3-a_2^2) \right] . \end{aligned}$$ Clearly if \(\triangle <0\), then \(3 a_1a_3-a_2^2>0\). Since \(a_3<0\), then \(f'(I_V^*)<0\). Thus \(f(I_V^*)\) is a strictly monotone decreasing function. Note that \(\lim _{I_V^*\rightarrow \mp \infty } f(I_V^*)=\pm \infty \).
For \(f(0) = a_0<0,\) the polynomial \(f(I_V^*)\) has no positive real roots for \(\mathcal{R}_0<1.\) However, if \(f(0) = a_0>0,\) it has only one positive real root for \(\mathcal{R}_0>1,\) and consequently only one endemic equilibrium. If \(\triangle =0\), then \(f'(I_V^*)\) has only one real root with multiplicity two. This implies that \((I_V^*)^1=(I_V^*)^2 = -\frac{a_2}{3a_3}\) and that \(f'(I_V^*)<0\). Thus the polynomial \(f(I_V^*)\) is a decreasing function. Given that \(f''\left( -\frac{a_2}{3a_3}\right) = 0,\) the turning point is a point of inflexion for \(f(I_V^*).\) The polynomial \(f(I_V^*)\) has only one endemic equilibrium. For \(\triangle >0\), we consider two cases; \(a_1<0\) and \(a_1>0\). If \(a_1<0\), then \(a_1a_3>0\). This means that \(\sqrt{\triangle }<|a_2|\). Irrespective of the sign of \(a_2\), \(f'(I_V^*)\) has two real positive and distinct roots. This implies that (5) has two positive turning points. If \(f(0) = a_0>0\), i.e., \(\mathcal {R}_0>1,\) then \(f(I_V^*)\) has at least one positive real root, and hence at least one endemic equilibrium. On the other hand, if \(f(0) = a_0<0,\) then \(f(I_V^*)\) has at most two positive real roots when \(\mathcal {R}_0<1\), and hence at most two endemic equilibria. If \(a_1>0\), then \(a_1a_3<0\), which implies that \(\sqrt{\triangle }>|a_2|\). For \(a_2>0\), \(f'(I_V^*)\) has two real roots of opposite signs. Since \(f(0) = a_0>0\) for \(\mathcal{R}_0>1\), then \(f(I_V^*)\) has one positive root. For \(a_2<0\), \(f'(I_V^*)\) has two negative real roots. Since \(f(0) = a_0<0\) for \(\mathcal{R}_0<1\), then \(f(I_V^*)\) has no positive real roots, and consequently no endemic equilibria. Furthermore, we can use the Descartes' Rule of Signs [7] to explore the existence of endemic equilibrium (or equilibria) for \(\mathcal{R}_0<1\). We note the possible existence of backward bifurcation. The theorem below summarises the existence of endemic equilibria of the model system (2).

Theorem 2 The model system (2):
1. has a unique endemic equilibrium point if \(\mathcal{R}_0>1\);
2. has two endemic equilibria for \(\mathcal{R}_0^c<\mathcal{R}_0<1,\) where \(\mathcal{R}_0^c\) is the critical threshold below which no endemic equilibrium exists.

Remark The evaluation of \(\mathcal{R}_0^c\) depends on the signs of \(a_2\) and \(a_1\) and the sign of the discriminant. The computations are algebraically involved and long and are not included here. Since the model system (2) possesses two endemic equilibria when \(\mathcal{R}_0^c<\mathcal{R}_0<1\), the model exhibits backward bifurcation for \(\mathcal{R}_0<1\). The consequence of the above remark is that bringing \(\mathcal{R}_0\) below unity is not sufficient to eradicate the disease. For eradication, \(\mathcal{R}_0\) must be brought below the critical value \(\mathcal{R}_0^c\).

Global stability of the endemic equilibrium

Theorem 3 The endemic equilibrium point \(\mathbf{\mathcal {E}_1}\) of system (2) is globally asymptotically stable.

Proof The global stability of the endemic equilibrium can be determined by constructing a Lyapunov function \(\mathcal{V}(t)\) such that $$\begin{aligned} \mathcal{V}(t)&= S_H -S_{H}^{*}-S_{H}^{*}\ln \frac{S_H}{S_{H}^{*}} +A\left( I_H -I_{H}^{*}-I_{H}^{*}\ln \frac{I_H}{I_{H}^{*}}\right) + B\left( I_V -I_{V}^{*}-I_{V}^{*}\ln \frac{I_V}{I_{V}^{*}}\right) \nonumber \\&\quad +C\left( I_{F} -I_{F}^{*}-I_{F}^{*}\ln \frac{I_{F}}{I_{F}^{*}}\right) + D\left( U -U^{*}-U^{*}\ln \frac{U}{U^{*}}\right) .
\end{aligned}$$ The corresponding time derivative of \(\mathcal{V}(t)\) is given by $$\begin{aligned} \dot{\mathcal{V}}&= \left( 1 - \frac{S_{H}^{*}}{S_{H}}\right) \dot{S}_{H} + A\left( 1 - \frac{I_{H}^{*}}{I_{H}}\right) \dot{I}_{H} + B\left( 1 - \frac{I_{V}^{*}}{I_{V}}\right) \dot{I}_{V} \nonumber \\&\quad + C\left( 1 - \frac{I_{F}^{*}}{I_{F}}\right) \dot{I}_{F}+D\left( 1 - \frac{U^{*}}{U}\right) \dot{U}. \end{aligned}$$ At the endemic equilibrium, we have the following relations $$\begin{aligned} \begin{array}{rcl} \mu _h+\theta _h&{}=&{} (\mu _h+\theta _h)S^{*}_{H} +\theta _h{I^*}_H+ m_1\beta _hS^{*}_{H}I^{*}_{V},\\ \mu _h+\gamma _h &{}=&{}m_1\beta _h\frac{S^{*}_{H}I^{*}_{V}}{{I^*}_H},\\ 1&{} =&{} m_2\beta _v\left( 1-I^{*}_{V}\right) \frac{I^{*}_{F}}{{I^*}_V}+ \eta _v\beta _v\left( 1-I^{*}_{V}\right) \frac{U^*}{{I^*}_V},\\ \mu _f&{} =&{} m_3\beta _f\left( 1-I^{*}_{F}\right) \frac{{I^*}_V}{I^{*}_{F}}+\eta _f\beta _f\left( 1-I^{*}_{F}\right) \frac{U^*}{I^{*}_{F}},\\ \mu _e&{} =&{} m_4\sigma _f\frac{I^{*}_{F}}{{U^*}}+m_5\sigma _v\frac{I^{*}_{V}}{{U^*}}. \end{array} \end{aligned}$$ Evaluating the components of the time derivative of the Lyapunov function using the relations (11), we have $$\begin{aligned} \dot{\mathcal{V}}&= \left( 1 - \frac{S_{H}^{*}}{S_{H}}\right) \left[ (\mu _h+\theta _h)S_{H}^{*}\left( 1 - \frac{S_{H}}{S_{H}^{*}}\right) +\theta _h I_{H}^{*}\left( 1 - \frac{I_{H}}{I_{H}^{*}}\right) +m_1\beta _h S_{H}^{*}I_{V}^{*}\left( 1 - \frac{S_{H}I_{V}}{S_{H}^{*}I_{V}^{*}}\right) \right] \nonumber \\&\quad \quad + A\left( 1 - \frac{I_{H}^{*}}{I_{H}}\right) \left[ m_1\beta _h S_{H}^{*}I_{V}^{*}\left( \frac{S_{H}I_{V}}{S_{H}^{*}I_{V}^{*}}-\frac{I_{H}}{I_{H}^{*}}\right) \right] + B\left( 1 - \frac{I_{V}^{*}}{I_{V}}\right) \left[ m_2\beta _vI_{F}^{*}\left( \frac{I_{F}}{I_{F}^{*}}-\frac{I_{V}}{I_{V}^{*}}\right) \right. \nonumber \\&\quad \quad +\left. m_2\beta _vI_{F}^{*}I_V\left( 1-\frac{I_{F}}{I_{F}^{*}}\right) +\eta _v\beta _v U^{*}\left( \frac{U}{U^{*}}-\frac{I_{V}}{I_{V}^{*}}\right) +\eta _v\beta _vU^{*}I_V \left( 1-\frac{U}{U^{*}}\right) \right] \nonumber \\&\quad \quad + C\left( 1 - \frac{I_{F}^{*}}{I_{F}}\right) \left[ \eta _f\beta _fU^{*}\left( \frac{U}{U^{*}}-\frac{I_{F}}{I_{F}^{*}}\right) +\eta _f\beta _fU^{*}I_F\left( 1-\frac{U}{U^{*}}\right) \right. \nonumber \\&\quad \quad +\left. m_3\beta _fI_{V}^{*}\left( \frac{I_{V}}{I_{V}^{*}}-\frac{I_{F}}{I_{F}^{*}}\right) +m_3\beta _fI_{V}^{*}I_F\left( 1-\frac{I_{V}}{I_{V}^{*}}\right) \right] \nonumber \\&\quad \quad +D\left( 1 - \frac{U^{*}}{U}\right) \left[ m_4\sigma _f{I_F}^{*}\left( \frac{I_{F}}{I_{F}^{*}}-\frac{U}{U^{*}}\right) +m_5\sigma _v{I_V}^{*}\left( \frac{I_{V}}{I_{V}^{*}}-\frac{U}{U^{*}}\right) \right] . \end{aligned}$$ We now introduce the substitutions $$\begin{aligned} v=\frac{S_H}{S^{*}_{H}},&w=\frac{I_H}{I^{*}_{H}}, x=\frac{I_V}{I^{*}_{V}},y=\frac{I_F}{I^{*}_{F}}\quad \mathrm{and}\quad z=\frac{U}{U^{*}}. 
\end{aligned}$$ Substituting (13) into (12), we obtain $$\begin{aligned} \dot{\mathcal{V}}= & {} -(\mu _h+\theta _h)S_{H}^{*}\frac{( 1 - v)^2}{v}+\mathcal{H}(v,w,x,y,z), \end{aligned}$$ where $$\begin{aligned} \mathcal{H}&= \theta _h I_{H}^{*}\left( 1 -w-\frac{1}{v}+\frac{w}{v}\right) +m_1\beta _h S_{H}^{*}I_{V}^{*}\left( 1 - \frac{1}{v}+x-xv\right) \nonumber \\&\quad \quad + A m_1\beta _h S_{H}^{*}I_{V}^{*}\left( 1+xv-w-\frac{vx}{w}\right) + B m_2\beta _vI_{F}^{*}\left( 1+y-x-\frac{x}{y}\right) \nonumber \\&\quad \quad +B m_2\beta _vI_{F}^{*}{I^*}_V\left( x+y-xy-1\right) +B\eta _v\beta _v U^{*}\left( 1+z-x-\frac{z}{x}\right) \nonumber \\&\quad \quad +B\eta _v\beta _vU^{*}{I^*}_V \left( x+z-xz-1\right) + Cm_3\beta _f{I_V}^{*}\left( 1+x-y-\frac{x}{y}\right) \nonumber \\&\quad \quad + Cm_3\beta _f{I_V}^{*}{I_F}^{*}\left( y+x-xy-1\right) +C\eta _f\beta _fU^{*}\left( 1+z-y-\frac{z}{y}\right) \nonumber \\&\quad \quad +C\eta _f\beta _fU^{*}{I_F}^*\left( y+z-yz-1\right) +Dm_4\sigma _f{I_F}^{*}\left( 1+y-z-\frac{y}{z}\right) \nonumber \\&\quad \quad +Dm_5\sigma _v{I_V}^{*}\left( 1+x-z-\frac{x}{z}\right) . \end{aligned}$$ Next, we choose A, B, C and D so that none of the variable terms of \(\mathcal{H}\) are positive. It is important to group together the terms in \(\mathcal{H}\) that involve the same state variables, as well as to group all of the constant terms together. We can then show that \(\mathcal{H}\le 0\) by expanding (15), writing out the constant term and the coefficients of the variable terms such as \(v,w,x,y,z,\frac{1}{v},\frac{w}{v},\frac{x}{v}\) and so on. The only variable terms that appear with positive coefficients are x, y and z. We thus choose the Lyapunov coefficients so as to make the coefficients of x, y and z equal to zero. We have $$\begin{aligned} A&=1, B=\frac{m_1\beta _hS_{H}^{*}I_{V}^{*}}{m_2\beta _vI_{V}^{*}(1-I_{F}^{*})+\eta _v\beta _vU^{*}(1-I_{V}^{*})}. \end{aligned}$$ The coefficients C and D can similarly be evaluated from the coefficients of y and z. Note that expressions such as $$\begin{aligned} m_1\beta _hS_{H}^{*}I_{V}^{*}\left( 2-\frac{1}{v}-\frac{xv}{w}\right) \end{aligned}$$ emanating from the substitution of the coefficients into \(\mathcal{H},\) are less than or equal to zero by the arithmetic mean-geometric mean inequality. This implies that \(\mathcal{H}\le 0\) with equality only if \(\frac{S_H}{{S_H}^*}=\frac{I_H}{{I_H}^*} = \frac{I_V}{{I_V}^*}=\frac{I_F}{{I_F}^*}=\frac{U}{{U}^*}=1.\) Therefore \(\dot{\mathcal{V}} \le 0\), and by LaSalle's Extension [17] the omega limit set of each solution lies in an invariant set contained in \({\Omega }.\) The only invariant set contained in \(\Omega \) is the singleton \(\mathcal{E}_1\). This shows that each solution which intersects \(\mathbb {R}_+^5\) limits to the endemic equilibrium. This completes the proof. \(\square \)
Parameter estimation
The biggest challenge in epidemic modeling is the estimation of parameters in the model validation process. In this section we endeavour to estimate some of the parameter values of system (2). The demographic parameters can be easily estimated from census population data. We begin by estimating the mortality rate \(\mu _h.\) We note that the average life expectancy of the human population in Ghana is 60 years [21]. This translates into \(\mu _h=0.017\) per year or, equivalently, \(4.6\times 10^{-5}\) per day. Buruli ulcer is currently regarded as a vector borne disease. 
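Purely as a check of the unit conversions used in this and the next paragraph, here is a trivial sketch (Python; the figures come from the text itself):

# Demographic rate conversions used for the parameter estimates.
life_expectancy_years = 60                    # Ghana, from [21]
mu_h_per_year = 1 / life_expectancy_years     # 0.0167, reported as 0.017 per year
mu_h_per_day = mu_h_per_year / 365            # 4.6e-05 per day
print(f"{mu_h_per_year:.3f} per year, {mu_h_per_day:.1e} per day")

# Per-day rates from the literature convert to per-year rates by a factor of 365,
# e.g. the recovery-rate range cited in the next paragraph:
print(1.6e-5 * 365, 0.5 * 365)                # 0.00584 and 182.5 per year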
Recovery rates for vector borne diseases, modelled by \(\gamma _h,\) range from \(1.6\times 10^{-5}\) to 0.5 per day [23]. This translates to between 0.00584 and 183 per year. The rate of loss of immunity \(\theta _h\) for vector borne diseases ranges between 0 and \(1.1\times 10^{-2}\) per day [23]. The mortality rate of the water bugs is assumed to be 0.15 per day [3]. The rates per day can easily be converted to yearly rates. In this model we shall assume that we have more water bugs than humans so that \(m_1<1.\) Since the water bugs prey on the fish, a reasonable food chain structure leads to the assumption that we have more fish than water bugs, hence \(m_2>1\) and consequently \(0<m_3<1.\) If the water bug is assumed to interact more with the environment than the fish, then \(\eta _v >1\) and \(0<\eta _f<1.\) The natural mortality of small fish in rivers is not well documented, and data on the mortality of river fish in Ghana is not available. For the purpose of our simulations, we shall assume that \(3\times 10^{-3}<\mu _f<7\times 10^{-3}\) per day. Given that \(K\ge N_F,N_V\) we have \(0\le m_4,m_5\le 1.\) We shall also assume that \(0\le \sigma _f,\sigma _v\le 1.\) We summarise the parameters in Table 2. Table 2 Parameter values used for the simulations and sensitivity analysis
Sensitivity analysis
Many of the parameters used in this paper are not determined experimentally, and their accuracy is therefore always in doubt. This can be overcome by observing the responses of such parameters and their influence on the model variables through sensitivity and uncertainty analysis. In this subsection we present the sensitivity analysis of the model parameters to ascertain the degree to which the parameters affect the outputs of the model. We use partial rank correlation coefficient (PRCC) analysis to determine the sensitivity of our model to each of the parameters used in the model. Through correlations, the association of the parameters and state variables can be established. In our case, we determine the correlation of our parameters and the state variable U. Alongside the PRCCs are the statistical significance test p-values for each of the parameters. If the PRCC value of a parameter is greater than 0.5 or less than −0.5, and the p-value is less than 0.05, then the model is sensitive to the parameter. On the other hand, PRCC values close to \(+1\) or \(-1\) indicate that the parameter strongly influences the state variable output. The sign of a PRCC value indicates the qualitative relationship between the parameter and the output variable: a negative sign indicates that the parameter is inversely proportional to the outcome measure [10]. The parameters with negative PRCCs reduce the severity of Buruli ulcer disease while those with positive PRCCs aggravate it. Using a Latin hypercube sampling (LHS) scheme with 1000 simulations for each run and U as the outcome variable, our results show that the variable U is sensitive to changes in the parameters \(m_3,~ \eta _f,~\mu _e,~\mu _f\) and \(\beta _f\). The results are shown in Fig. 2. PRCC plots: The variable U largely depends on \(m_3,~ \eta _f,~\mu _e,~\mu _f\) and \(\beta _f\). The bars pointing to the left indicate that U has an inverse dependence on the respective parameters. We observe that the parameters \(m_3,~ \eta _f\) and \(\beta _f\) aggravate the disease when they are increased while \(\mu _f\) and \(\mu _e\) reduce its severity when increased. The results from the PRCC analysis are summarized in Table 3. 
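The LHS/PRCC procedure just described is straightforward to implement. The sketch below (Python/SciPy) is illustrative only: the parameter ranges are placeholders, and the model evaluation is a simple monotone stand-in for the simulated output U of system (2).

import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    """Partial rank correlation of each column of X with output y.
    X: (n_samples, n_params) LHS parameter matrix; y: (n_samples,) model output."""
    R = np.column_stack([rankdata(c) for c in X.T])   # rank-transform parameters
    ry = rankdata(y)                                  # rank-transform output
    out = []
    for j in range(R.shape[1]):
        others = np.delete(R, j, axis=1)
        A = np.column_stack([np.ones(len(ry)), others])
        # Residuals after regressing out the other (ranked) parameters.
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# 1000 LHS samples over illustrative ranges for (beta_f, mu_f, mu_e).
sampler = qmc.LatinHypercube(d=3, seed=1)
lo, hi = np.array([1e-5, 3e-3, 0.1]), np.array([1e-4, 7e-3, 0.8])
X = qmc.scale(sampler.random(n=1000), lo, hi)
y = X[:, 0] / (X[:, 1] * X[:, 2])   # placeholder for the simulated U
print(prcc(X, y))  # strong positive PRCC for beta_f, strong negative for mu_f, mu_e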
The significant parameters together with their PRCC values and p-values have been encircled. Table 3 Outputs from PRCC analysis In Fig. 3 the residuals for the ranked Latin hypercube sampling parameter values are plotted against the residuals for the ranked density of Mycobacterium ulcerans. The PRCC plots for the parameters \(\beta _f,~\mu _f,~\mu _e\) and \(\eta _f\) show a strong linear correlation. The growth of Mycobacterium ulcerans increases as the number of infected fish that eventually shed bacteria into the environment increases. An increase in the parameters \(\mu _f\) and \(\mu _e\) leads to a decrease in the amount of bacteria in the environment. PRCC plots. Shows the PRCC plots for the parameters \(\beta _f\), \(\mu _f\), \(\mu _e\), \(\eta _f\) and \(m_3\)
Data and the fitting process
One of the most important steps in the model building chronology is model validation. We now focus on the data provided by the Ashanti Regional Disease Control Office for Buruli ulcer cases in Ghana per 100,000 people. The data are given in Table 4 below for the years 2003–2012. Table 4 Data on Buruli ulcer cases in Ghana We fit the model system (2) to the data of Buruli ulcer cases expressed as fractions. We use the least squares curve fit routine (lsqcurvefit) in Matlab with optimisation to estimate the parameter values. Many parameters are known to lie within limits. A few parameters such as the demographic parameters are known [13], and it is thus important to estimate the others. The process of estimating the parameters aims at finding the best concordance between computed and observed data. One tedious way to do this is by trial and error; alternatively, software routines designed to find the parameters that give the best fit can be used. Here, the fitting process uses the least squares curve fitting method: Matlab code is used in which unknown parameter values are given a lower and an upper bound, from which the set of parameter values that produce the best fit is obtained. Figure 4 shows how system (2) fits the available data on the incidence of BU. The incidence solution curve shows a very reasonable fit to the data. Model fit to data. Model system (2) fitted to data of Buruli ulcer cases in Ghana. The circles indicate the actual data and the solid line indicates the model fit to the data. The parameter values used for the fitting: \( \mu _h=0.000045,~\theta _h=0.1,~m_1=5,~\beta _h=0.1,~\gamma _h=0.056,~m_2=10,~\beta _v=0.000065,~\eta _v=1.5,~\eta _f=0.6,~\mu _v=0.15,~\beta _f=0.00005,~\mu _f=0.05,~\sigma _f=0.05,~\sigma _v=0.006,~\mu _e=0.4\) In planning for a long term response to the Buruli ulcer epidemic, it is important to have some reasonable projections of the epidemic. The fitting process allows us to envisage the Buruli ulcer epidemic in the future. It is important to note that the projections are reasonably good only over a short period of time, since the current epidemic is evolving gradually based on the available data. We chose to project the epidemic beyond 5 years to 2017. Figure 5 shows the projected Buruli ulcer epidemic. Projected model fit. Projection of the fit in Fig. 4 Figures 6 and 7 show the changes in the prevalence of infected humans when \(\sigma _f,\) the shedding rate of Mycobacterium ulcerans into the environment, and \(\mu _e,\) the removal rate of MU from the environment, are varied respectively. Based on the sensitivity analysis, our model is very sensitive to the shedding rate of Mycobacterium ulcerans into the environment. 
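The bounded least-squares workflow just described (lsqcurvefit in Matlab) translates directly to other environments. Below is a minimal, self-contained sketch in Python/SciPy, using synthetic data and a deliberately simplified two-compartment stand-in for system (2), since reproducing the full model and the values of Table 4 here would be speculative:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, beta, gamma):
    # Simplified SIS-type stand-in for system (2); s, i are population fractions.
    s, i = y
    return [-beta * s * i + gamma * i, beta * s * i - gamma * i]

def infected_fraction(params, t_obs):
    beta, gamma, i0 = params
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [1.0 - i0, i0],
                    t_eval=t_obs, args=(beta, gamma), rtol=1e-8)
    return sol.y[1]

t_obs = np.arange(0.0, 10.0)                       # stands in for years 2003-2012
true = infected_fraction([0.9, 0.5, 0.01], t_obs)  # synthetic 'observed' fractions
rng = np.random.default_rng(0)
data = true * (1.0 + 0.05 * rng.standard_normal(len(true)))

# Bounded least squares, mirroring lsqcurvefit with lower and upper bounds.
fit = least_squares(lambda p: infected_fraction(p, t_obs) - data,
                    x0=[0.5, 0.3, 0.005],
                    bounds=([0.0, 0.0, 0.0], [5.0, 2.0, 0.1]))
print(fit.x)  # best-fit (beta, gamma, i0)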
Figure 6 shows that an increase in the shedding rate will lead to increased human infections. We can actually quantify the related increases. For instance, if \(\sigma _f\) is increased from 0.51 to 0.52 in year 15, the percentage increase in the prevalence of human infections is 6 %. Minimising Mycobacterium ulcerans in the environment is an important control measure, albeit an impractical one at the moment. We observe through our results that a decrease of the bacteria in the environment can lead to quantifiable changes in the prevalence of infected humans. Increasing \(\mu _e\) leads to a decrease in the prevalence of infected humans. Prevalence of Buruli ulcer infection in humans. Shows the prevalence in humans when \(\sigma _f\) is varied Prevalence of Buruli ulcer in infected humans for different values of \(\mu _e.\) Shows the prevalence of infected humans when \(\mu _e\) is varied
Discussion
In this paper, a deterministic model of the dynamics of the Buruli ulcer in the presence of a preventive intervention strategy is presented. The model's steady states are determined and their stabilities investigated in terms of the classic threshold \(\mathcal{R}_0.\) In disease transmission modelling, it is well known that a classical necessary condition for disease eradication is that the basic reproductive number \(\mathcal{R}_0\) must be less than unity. The model has multiple endemic equilibria (in fact it exhibits a backward bifurcation). When a backward bifurcation occurs, endemic equilibria coexist with the disease free equilibrium for \(\mathcal{R}_0<1.\) This means that getting the classic threshold \(\mathcal{R}_0\) less than 1 might not be sufficient to eliminate the disease. Thus the existence of backward bifurcation has important public health implications, and might explain why the disease has persisted in the human population over time. The endemic equilibrium is found to be globally stable if \(\mathcal{R}_0>1.\) The sensitivity analysis of model parameters showed some interesting results. These results suggest that efforts to remove Mycobacterium ulcerans and infected fish from the environment will greatly reduce the epidemic, although the latter will be impracticable because of the costs involved and the fact that many governments in affected areas operate on lean budgets. The model is then fitted to data on the Buruli ulcer in Ghana, and it fits the data reasonably well. The challenge in the fitting process was that the data appear to indicate that Buruli ulcer has reached a steady state; this produced some parameter values that appeared unreasonable. Despite these challenges, the fit produced reasonable projections on the future of the ulcer. The model shows that in the near future the number of cases will not change if everything remains the same. An important consideration that can be added to the model is the inclusion of probable policy shifts and the investigation of different scenarios on the progression of the epidemic as the policies change. Because not much of the disease is understood, parameter estimation was a daunting task, and we had to reasonably estimate some of the parameters using the hypothesis that Buruli ulcer is a vector borne disease. Because essential parameters were estimated, sensitivity analysis was necessary and very important to determine how these parameters influence the model. The implications of varying some of the important epidemiological parameters, such as the shedding rates, were investigated. Important results were drawn through Figs. 6 and 7. 
The main result of this paper is that the management of Buruli ulcer depends mostly on the management of the environment. This model can be improved by considering social interventions in the human population, modeled as functions, and by including the different forms of treatment available, as some individuals opt for traditional methods while others depend on the government health care system [1]. Social interventions include education, awareness, poverty reduction and the provision of social services. While the mathematical representation of these interventions is a formidable challenge, they are vital to the dynamics of the disease and to public health policy design. Finally, this model can be used to suggest the type of data that should be collected as research on the Buruli ulcer intensifies. The global burden of the disease and its epidemiology are not well understood [28]. Clearly, gaps do exist in the nature and type of data available. Reports on the disease are often based on passive presentations of patients at health care facilities. As a result of the difficulties of accessing health care in affected areas, data on the disease is scanty. Agbenorku P, Donwi IK, Kuadzi P, Saunderson P. Buruli Ulcer: treatment challenges at three centres in Ghana. J Trop Med. 2012; doi:10.1155/2012/371915. Ahorlu CK, Koka E, Yeboah-Manu D, Lamptey I, Ampadu E. Enhancing Buruli ulcer control in Ghana through social interventions: a case study from the Obom sub-district. BMC Public Health. 2013;13:59. Aidoo AY, Osei B. Prevalence of aquatic insects and arsenic concentration determine the geographical distribution of Mycobacterium ulcerans infection. Comput Math Method Med. 2007;8:235–44. Boleira M, Lupi O, Lehman L, Asiedu KB, Kiszewski AE. Buruli ulcer. Anais Brasileiros de Dermatologia. 2010;85(3):281–301. Clift E. IEC interventions for health: a 20 year retrospective on dichotomies and directions. Int J Health Commun. 1998;3(4):367–75. Debacker M, Portaels F, Aguiar J, Steunou C, Zinsou C, Meyers W, et al. Risk factors for Buruli ulcer, Benin. Emerg Infect Dis. 2006;12:1325–31. Descartes' Rule of Signs. Available at: http://www.purplemath.com/modules/drofsign.htm. Accessed 2 Sept 2013. Eddyani M, Ofori-Adjei D, Teugels G, De Weirdt D, Boakye D, Meyers WM, Portaels F. Potential role for fish in transmission of Mycobacterium ulcerans disease (Buruli Ulcer): an environmental study. Appl Environ Microbiol. 2004;70:5679–81. Garba SM, Gumel AB, Abu Bakar MR. Backward bifurcation in dengue transmission dynamics. Math Biosci. 2008;215:11–25. Gomero B. Latin Hypercube Sampling and Partial Rank Correlation Coefficient analysis applied to an optimal control problem, MSc Thesis, The University of Tennessee. 2012. Grassly NC, Fraser C. Mathematical models of infectious disease transmission. Nat Rev Microbiol. 2008;6:477–87. Grundmann H, Hellriegel B. Mathematical modelling: a tool for hospital infection control. Lancet Infect Dis. 2005;6(1):39–45. Ghana Statistical Service. Available at: http://www.statsghana.gov.gh. Accessed Sept 2013. Houben RM, Dowdy DW, Vassall A, et al. How can mathematical models advance tuberculosis control in high HIV prevalence settings? Int J Tuber Lung Dis. 2014;18(5):509–14. Huppert A, Katriel G. Mathematical modelling and prediction in infectious disease epidemiology. Clin Microbiol Infect. 2013;19:999–1005. Jacobsen KH, Padgett JJ. Risk factors for Mycobacterium ulcerans infection. Int J Infect Dis. 2010;14(8):e677–81. LaSalle JP. The stability of dynamical systems. 
In: CBMS-NSF Regional Conference Series in Applied Mathematics 25, SIAM: Philadelphia. 1976. Marsollier L, Robert R, Aubry J, Andre JS, Kouakou H, Legras P, Manceau A, Mahaza C, Carbonnelle B. Aquatic insects as a vector for Mycobacterium ulcerans. Appl Environ Microbiol. 2002;68:4623–8. Marty R, Roze S, Bresse X, Largeron N, Smith-Palmer J. Estimating the clinical benefits of vaccinating boys and girls against HPV-related diseases in Europe. BMC Cancer. 2013;13:10. doi:10.1186/1471-2407-13-10. Merritt RW, Walker ED, Small PLC, Wallace JR, Johnson PDR, et al. Ecology and transmission of buruli ulcer disease: a systematic review. PLoS Neglect Trop Dis. 2010;4(12):e911. Population and Housing Census National Analytical Report, 2012. http://www.statsghana.gov.gh. Portaels F, Chemlal K, Elsen P, Johnson PD, Hayman JA, Hibble J, Kirkwood R, Meyers WM. Mycobacterium ulcerans in wild animals. Revue Scientifique et Technique. 2001;20:252–64. Rascalou G, Pontier D, Menu F, Gourbière S. Emergence and prevalence of human vector-borne diseases in sink vector populations. PLoS One. 2012;7(5):e36858. doi:10.1371/journal.pone.0036858. Silva MT, Portaels F, Pedrosa J. Aquatic insects and Mycobacterium ulcerans: an association relevant to buruli ulcer control? PLoS Med. 2007;4(2):e63. Sopoh GE, Barogui YT, Johnson RC, Dossou AD, Makoutode M. Family relationship, water contact and occurrence of Buruli ulcer in Benin. PLoS Neglect Trop Dis. 2010;4(7):e746. Stienstra Y, van der Graaf WTA, Asamoa K, van der Werf TS. Beliefs and attitudes toward buruli ulcer in Ghana. Am J Trop Med Hyg. 2002;67:207–13. Williamson HR, Benbow ME, Campbell LP, Johnson CR, Sopoh G, Barogui Y, Merritt RW, Small PLC. Detection of Mycobacterium ulcerans in the environment predicts prevalence of buruli ulcer in Benin. PLoS Neglect Trop Dis. 2012;e1506. World Health Organization. Buruli ulcer (Mycobacterium ulcerans infection). http://www.who.int/buruli/en/. World Health Organization. Buruli ulcer: Number of new cases of Buruli ulcer reported (per year). http://apps.who.int/neglected_diseases/ntddata/buruli/buruli.html. FN designed the model and carried out the numerical simulations. EB did the mathematical analysis and writing of the manuscript. Both authors read and approved the final manuscript. The first author acknowledges with gratitude the support from the Stellenbosch University International Office for the research visit that culminated in this manuscript. The second author acknowledges, with thanks, the support of the Department of Mathematics and Statistics, Kumasi Polytechnic. Department of Mathematical Sciences, Stellenbosch University, Private Bag X1, Matieland, 7602, South Africa Farai Nyabadza Department of Mathematics and Statistics, Kumasi Polytechnic, P. O. Box 854, Kumasi, Ghana Ebenezer Bonyah Correspondence to Ebenezer Bonyah. Nyabadza, F., Bonyah, E. On the transmission dynamics of Buruli ulcer in Ghana: Insights through a mathematical model. BMC Res Notes 8, 656 (2015). https://doi.org/10.1186/s13104-015-1619-5 Keywords: Buruli ulcer; Transmission dynamics; Basic reproduction number
CommonCrawl
\begin{document} \title{Higher-order interference { between multiple quantum particles interacting nonlinearly}} \author{Lee A. Rozema} \affiliation{Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, Vienna A-1090, Austria} \author{Zhao Zhuo} \affiliation{School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore} \author{Tomasz Paterek} \affiliation{School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore} \affiliation{MajuLab, International Joint Research Unit UMI 3654, CNRS, Universite Cote d'Azur, Sorbonne Universite, National University of Singapore, Nanyang Technological University, Singapore} \affiliation{Institute of Theoretical Physics and Astrophysics, Faculty of Mathematics, Physics and Informatics, University of Gda\'nsk, 80-308 Gda\'nsk, Poland} \author{Borivoje Daki\'c} \affiliation{Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, Vienna A-1090, Austria} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria} \begin{abstract} The double-slit experiment is the most direct demonstration of interference between individual quantum objects. Since similar experiments with single particles and more slits produce interference fringes reducible to a combination of double-slit patterns it is usually argued that quantum interference occurs between pairs of trajectories, compactly denoted as second-order interference. Here we show that quantum mechanics in fact allows for interference of arbitrarily high order. This occurs naturally when one considers multiple quantum objects interacting in the presence of a nonlinearity, both of which are required to observe higher-order interference. We make this clear by treating a generalised multi-slit interferometer using second-quantisation. We then present explicit experimentally-relevant examples both with photons interacting in nonlinear media and an interfering Bose-Einstein condensate with particle-particle interactions. These examples are all perfectly described by quantum theory, and yet exhibit higher-order interference {based on multiple particles interacting nonlinearly}. \end{abstract} \maketitle Quantum states are represented by density matrices, whose elements can be estimated in a series of interference experiments involving a superposition of only two basis states at a time. Already at this abstract level one expects that any interference pattern should be fundamentally reducible to two-state interference. Indeed, it was shown theoretically, within the framework of generalised measure theories, that interference fringes observed in multi-slit experiments are simple combinations of patterns observed in double-slit and single-slit experiments. Quantum mechanics has hence been termed a ``second-order interference theory''~\cite{Sorkin}. It is possible, however, to devise a family of post-quantum theories exhibiting higher-order interference {\cite{zyczkowski2008quartic,dakic2014density,lee2017higher}}. 
Motivated by this, experiments based on photons~\cite{Weihs2010,Weihs2011,Weihs2017,hickmann2011}, nuclear magnetic resonance~\cite{Laflamme2012}, spins in diamond NV centers \cite{jin2017} and with matter waves~\cite{Arndt2017,Barnea2018} have placed bounds on higher-order interference, verifying, within experimental error, that in these setups higher-order interference is absent. Atomic analogs of multi-slit experiments have also been studied in detail~\cite{Lee2019}. {The presence of such higher-order interference is often discussed in the literature as violation of Born's rule \cite{Weihs2010}. Nevertheless, whether viewed as a violation of Born's rule or as higher-order interference, such a finding would require an explanation that goes beyond our current formulation of quantum mechanics.} While the original theoretical work showing that quantum mechanics is a second-order theory considered a single electron incident on a multi-slit, many experimental tests have used multi-particle states. For example, in photonic experiments both single photons and multi-partite coherent states have been used~\cite{Weihs2010,Weihs2011,Weihs2017}, Ref.~\cite{jin2017} used ensembles of spin-1/2 particles in an NMR experiment, while the experiments presented in Refs.~\cite{Arndt2017,Barnea2018} used thermal states of atoms and molecules. Therefore, although use of single-particle states was implicit in the original theory, the effect of multi-partite states was not appreciated at the experimental level. { Very recent work has focused on this problem, and derived new limits on higher-order interference (via the application of Born's rule) when the input state is a multiple-particle state with a fixed number of particles~\cite{Pleinert2020theor}. In these results, new quantities and measurements based on multi-particle input states were derived~\cite{Pleinert2020theor} and experimentally probed~\cite{Pleinert2020expt}, allowing for more sensitive tests of higher-order interference and Born's rule. } { Here we take a slightly different approach, more similar to that of the recent so-called looped-trajectories results \cite{Yabuki86,RMH2012,Sinha2014,Sinha2015}. In these works, a multi-slit apparatus and the ``standard'' measurements used to search for higher-order interference are considered. It was then shown that the multi-mode character of an actual slit experiment can lead to a small third-order interference term~\cite{Yabuki86,RMH2012,Sinha2014,Sinha2015}. The origin of this term lies in how the superposition principle is applied, and rests on different boundary conditions in multi-slit and single-slit setups. This effect has even been experimentally confirmed~\cite{Boyd2016,Sinha2018}. Similarly, here we show that given a multi-slit apparatus and the standard measurements, multi-partite states and a nonlinear interaction (such as an optical nonlinearity or particle-particle interaction) can lead to the emergence of apparent higher-order interference. However, the third-order interference term, both in our work and in the looped-trajectory work, is not due to ``post-quantum higher-order interference'', but rather comes from implicit assumptions in the theory that are not met experimentally. Hence, if one wishes to bound the contribution of genuine higher-order interference present in a given experiment, all such effects must be considered.} { We also briefly show below that the experimental arrangement of Refs. 
\cite{Pleinert2020theor,Pleinert2020expt} can lead to apparent higher-order interference and the underlying reason is nonlinearity. It is the nonlinearity in the detection, which looks for coincidences of various detection events and is hence by construction nonlinear in the incident photon number. } { Throughout our paper we will refer to the interference studied here as higher-order interference. We use this terminology because these effects could arise in any experiment used to search for genuine higher-order interference, given sufficiently strong nonlinearities and multi particle states. We stress that our higher-order interference does not emerge if just multi-partite (or multi-photon) states are used or just in the presence of nonlinear elements; rather, both features are required and the apparatus must be used to construct the so-called M-path interference introduced in Eq.~(\ref{EQ_I_M}). We show this can lead to very strong observable deviations from vanishing higher-order interference, even when the issues due to boundary conditions are negligible.} As is the case for the looped trajectories, we emphasise that it is also not genuine post-quantum higher-order interference, but rather an artifact of multi-particle interactions. Our results stress the need for high-quality single-particle sources in experiments searching for such deviations from quantum theory. Our paper is organised as follows. We will first {use our formalism to} show that all single-particle states give rise to only second-order interference, {as was already shown in} Ref.~\cite{Sorkin}. We will then introduce a means to quantify the order of interference in the framework of second quantization, and then show that all linear processes are limited to second-order interference. Finally, we will provide explicit examples of nonlinear processes with multi-particle input states that produce interference of arbitrary order. The required nonlinearity can be caused by different physical mechanisms, and we show examples of higher-order interference based on optical nonlinearity, nonlinear detector response, and particle-particle interaction in a Bose-Einstein condensate modeled by the Gross-Pitaevskii equation. \begin{figure} \caption{Measuring interference. The simplest interference experiment involves two paths that can be individually blocked. This figure shows a configuration where the upper path is open and the lower path is blocked. Here $\rho$ denotes the input state, and \textbf{U} is some unitary interaction between the two modes. Interference is present if the intensity measured with both paths open is different from the sum of the intensities measured with the individual paths open.} \label{FIG_2PATHS} \end{figure} \section{The order of interference} Consider first an experiment with only two paths, see Fig.~\ref{FIG_2PATHS}. Each path can either be open (0) or blocked (1), so that the configuration in Fig.~\ref{FIG_2PATHS} is represented by sequence 01. Interference is said to occur if the mean number of particles (the intensity) measured with both paths open, $I_{00}$, is different from the sum of the intensities for individual paths blocked, $I_{01} + I_{10}$. It is hence natural to quantify two-path interference by $\mathcal{I}_2 \equiv I_{00} - I_{01} - I_{10} + I_{11}$, where we have introduced $I_{11} = 0$ for symmetry reasons. 
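These sums are easy to check numerically. The following minimal sketch (Python; our addition for illustration, not part of the derivations) propagates a single-particle amplitude through a unitary with blocked paths zeroed out; it returns $\mathcal{I}_2 = N/2$ for the 50-50 beam splitter just discussed, and a vanishing $\mathcal{I}_3$ for the symmetric three-path unitary (tritter) treated later in the text.
\begin{verbatim}
import itertools
import numpy as np

def intensity(U, psi, blocked, N=1):
    # Zero the amplitudes of blocked paths, propagate, detect in the first path.
    amp = psi.copy()
    amp[list(blocked)] = 0.0
    out = U @ amp
    return N * abs(out[0])**2

def sorkin(U, psi, N=1):
    # Alternating sum over all 2^M open/blocked configurations, Eq. (1).
    M = len(psi)
    total = 0.0
    for x in itertools.product((0, 1), repeat=M):
        blocked = [m for m, b in enumerate(x) if b]
        total += (-1)**sum(x) * intensity(U, psi, blocked, N)
    return total

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50-50 beam splitter
plus = np.array([1, 1]) / np.sqrt(2)
print(sorkin(H, plus, N=100))                  # -> 50.0, i.e. I_2 = N/2

w = np.exp(2j * np.pi / 3)                     # symmetric tritter
T = np.array([[w**(i*j) for j in range(3)] for i in range(3)]) / np.sqrt(3)
print(sorkin(T, np.ones(3) / np.sqrt(3)))      # -> ~0: I_3 vanishes for linear U
\end{verbatim}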
A similar argument applied to a scenario with $M$ paths leads to the definition of $M$-path interference when the following quantity is non-zero~\cite{Sorkin}: \begin{equation} \mathcal{I}_M = \sum_{x_1, \dots, x_M = 0}^1 (-1)^{x_1 + \dots + x_M} I_{x_1 \dots x_M}. \label{EQ_I_M} \end{equation} Sorkin showed that when $\mathcal{I}_M = 0$ for some $M$, then all the quantities $\mathcal{I}_M$ with higher $M$ also vanish~\cite{Sorkin}. The highest index $M$ for which a theory predicts non-zero $\mathcal{I}_M$ is then called the order of interference of that theory. Classical particle experiments do not give rise to any form of interference; i.e. classically, one already has $\mathcal{I}_2 = 0$. In the quantum case, consider first $N$ particles sent one-by-one into the setup in Fig.~\ref{FIG_2PATHS}. Each particle is in state $\rho$ spanned by kets $| u \rangle$ and $| d \rangle$, describing propagation along upper and lower path respectively. The intensity $I_{00}$ is given by $N p_{00}$, where $p_{00}$ is the probability that the particle is in the upper path after the unitary, i.e. $I_{00} = N \langle u | U \rho U^\dagger | u \rangle$. Similarly $I_{01} = N \rho_{uu} \langle u | U | u \rangle \langle u | U^\dagger | u \rangle$ and $I_{10} = N \rho_{dd} \langle u | U | d \rangle \langle d | U^\dagger | u \rangle$, where e.g. $\rho_{uu} = \langle u | \rho | u \rangle$. One finds that the second-order interference, $\mathcal{I}_2$, vanishes for all (classical) states that exhibit no coherence, $\rho_{ud} = 0$, independent of the choice of the unitary $U$. On the other hand, particles in quantum states can undergo second-order interference. The maximum second-order interference is $\mathcal{I}_2 = N/2$, which is achieved for the input state $(| u \rangle + | d \rangle) / \sqrt{2}$ and the unitary describing a 50-50 beam-splitter, as expected. Applying the same calculation to a three-path experiment (with an arbitrary unitary acting on all three paths) shows that $\mathcal{I}_3 = 0$ for all input states and all unitaries. This leads to the usual statement that quantum theory is a second-order interference theory. However, we will now show that multiple quantum systems interacting nonlinearly can lead to non-zero $\mathcal{I}_M$ for arbitrary $M$. \section{Interference of indistinguishable particles} We first show that all linear processes give rise to only second-order interference, i.e. $\mathcal{I}_3 = 0$, independently of the input multipartite state. {Although this is already evident from Ref.~\cite{Sorkin}, we will now show it using our formalism.} A linear process is described by $U=e^{iH}$, where $H$ is linear in the ladder operators: $H=\sum_{n,m} h_{nm}a_n^{\dagger}a_m$ \cite{gerry2005introductory}. We consider a beam of indistinguishable particles in any state $\rho$ loaded into a setup with $M$ paths; later we will specify $M=3$. The particle number does not have to be well defined, e.g. the input could be a series of coherent states of photons in the various input modes. With each path we associate local Fock space $\mathcal{H}_m$ spanned by Fock states $| n \rangle_m$ describing $n$ excitations (photons) in the $m$th path (mode). The entire Hilbert space of this system is therefore a tensor product $\mathcal{H}_1 \otimes \dots \otimes \mathcal{H}_M$ and we need to specify how blocking paths is represented in this formalism. 
Appendix~\ref{APP_A} shows that the operation of blocking the $m$th path, $\Pi_m$, has the following intuitive effect: $\Pi_m(\rho) = | 0 \rangle_m \langle 0 | \otimes \mathrm{Tr}_m(\rho)$, i.e. it produces vacuum in the blocked path and decorrelates it from all the other paths (here $\mathrm{Tr}_m(\rho)$ stands for the partial trace over the Fock space $\mathcal{H}_m$, meaning that the states in the other paths are unaffected by the blocker). With this notation the state after the blockers is given by \begin{equation} \rho_{x_1 \dots x_M} = \Pi_1^{x_1} \otimes \dots \otimes \Pi_M^{x_M} (\rho), \end{equation} where we set $\Pi_m^0$ to the identity operator. Recall $x_i$ represents whether the blocker in mode $i$ is present or not. By placing the final detector along the first path, the intensities can be computed from \begin{equation} I_{x_1 \dots x_M} = \mathrm{Tr}(a_1^\dagger a_1 U \rho_{x_1 \dots x_M} U^\dagger), \end{equation} where $a_1^\dagger a_1$ is the number operator in $\mathcal{H}_1$. Since the same measurement is conducted for all the combinations of blocked paths, we introduce an ``interference operator'' via the relation $\mathcal{I}_M = \mathrm{Tr}(U^{\dagger} a_1^{\dagger} a_1 U \hat{\mathcal{I}}_M )$, see Appendix~\ref{APP_B} for its explicit form and properties. Any linear process satisfies $U^\dagger a_1^\dagger U = \sum_m u_{m} a_m^\dagger$ and accordingly \begin{equation} \mathcal{I}_3 = \sum_{m,m'} u_{m} u_{m'}^* \mathrm{Tr}(a_m^\dagger a_{m'} \hat{\mathcal{I}}_3 ) = 0, \label{EQ_I30} \end{equation} where in the last equation we used the fact that the interference operator vanishes under the partial trace over any path, see Appendix~\ref{APP_B}. Hence, we see that if the interaction is linear or the input state is a single-particle state we have $\mathcal{I}_3=0$. In general, the same line of reasoning applies to nonlinear processes and higher-order interference terms. A process of order $k$, i.e. where the creation operator $a_1^\dagger$ is mapped to a polynomial $\sum_{m_1,\dots, m_k} u_{m_1 \dots m_k} a_{m_1}^\dagger \dots a_{m_k}^\dagger$, gives rise to vanishing higher-order interference terms $\mathcal{I}_M$ with $M > 2k$. Note that non-linearity is necessary for higher-order interference, but not sufficient. For example, a non-linear process mapping $a_1^\dagger$ to a sum of squared operators $\sum_m u_{m} a_m^\dagger a_m^\dagger$ still admits $\mathcal{I}_3 = 0$, because in Eq. (\ref{EQ_I30}) each term in the sums couples only two paths and hence the partial trace argument gives vanishing $\mathcal{I}_3$. Thus, experimentally finding a non-zero $\mathcal{I}_M$ indicates the presence of nonlinear multi-mode coupling in the underlying process (which could be completely unknown, i.e. a black box) and provides the minimal number of the coupled paths. \begin{figure} \caption{Schematic for measurement of higher-order interference in quantum optics. The input coherent states propagate through blockers and then through a sequence of nonlinear phase shifters $U_{1M}, \dots, U_{13}$. The final element, marked BS, is a 50-50 beam splitter between the first and the second path, and the intensity is monitored in the first path. This setup gives rise to the $M$th-order interference, see Eq. (\ref{EQ_MTH}).} \label{FIG_M} \end{figure} In the same way one recovers the recent results of Ref.~\cite{Pleinert2020theor} on multi-partite higher-order interference. 
In the present notation, their $n$-partite $M$-th order interference term is $\mathcal{I}_M^n = \mathrm{Tr} (a_1^\dagger \dots a_n^\dagger a_n \dots a_1 U \mathcal{\hat I}_M U^\dagger)$, with the interference operator as introduced above. Therefore, again due to the partial trace argument, $\mathcal{I}_M^n = 0$ for $M > 2 n$ when $U$ is a linear process. Interestingly, it is also apparent that when $U$ is nonlinear, quantum theory predicts that the multi-partite higher-order interference, defined in Ref.~\cite{Pleinert2020theor}, may be non-zero. \section{Nonlinear phase shift} Now we give an example of a nonlinear process from quantum optics (a two-mode nonlinear phase shifter, i.e. cross-phase modulation) whose concatenation gives rise to arbitrarily high order of interference. The exact setup is presented in Fig.~\ref{FIG_M}. The unitary describing this nonlinear process between modes $j$ and $k$ has the following effect: \begin{eqnarray} U_{jk}^\dagger a_j^\dagger U_{jk} & = & a_j ^\dagger \exp(- i \theta a_k ^\dagger a_k), \\ U_{jk}^\dagger a_k^\dagger U_{jk} & = & a_k ^\dagger \exp(- i \theta a_j ^\dagger a_j), \end{eqnarray} where $\theta$ is the strength of non-linearity. { It can range from $\approx 10^{-18}$ rad/photon for a bulk Kerr medium \cite{boyd1999order,barrett2005symmetry}, to $\approx 10^{-7}$ rad/photon using photonic crystal fibres \cite{matsuda2009observation}, all the way up to $\approx 10^{-2}$ rad/photon using electromagnetically-induced transparency, e.g.~\cite{EIT1,EIT2}.} For simplicity, we will assume that all the input coherent states have the same mean number of photons, $\langle n \rangle = |\alpha|^2$, but potentially different phases. In this case, the setup in Fig.~\ref{FIG_M} produces the following value of higher-order interference (see Appendix~\ref{APP_C} for details): \begin{equation} \mathcal{I}_M = \left| \langle n \rangle \left( \exp [- \langle n \rangle (1 - e^{- i \theta})] - 1\right)^{M-2} \right| \cos(\varphi_2 - \varphi_1 - \delta), \label{EQ_MTH} \end{equation} where $\varphi_1$ and $\varphi_2$ are the phases of the input light along the first two paths and $\delta$ is a fringe offset. The phases of the remaining inputs do not enter the interference formula. { For example, to achieve $\mathcal{I}_3 \sim 1$ with natural Kerr nonlinearity of $10^{-18}$ one requires a mean photon number of about $10^{9}$ photons per mode. If one uses a pulsed laser system this corresponds to a pulse energy of $\approx 0.2$ nJ at a wavelength of $1000$ nm, which is easily available in commercial laser systems.} At this stage we would like to comment on two recent experiments which may seem related. Refs.~\cite{Jennewein2017,Walmsley2017} observed genuine three-photon interference as a generalisation of the famous Hong-Ou-Mandel dip~\cite{HOM}. {Although these works used (nonlinear) multi-photon detection and multi-partite input states}, the higher-order interference we describe here is distinct from the observed multi-photon interference. { Indeed, when the setup in Fig.~\ref{FIG_M}, restricted to $M=3$ paths, is input with the field in the state $\frac{1}{\sqrt{2}}(|01\rangle_{12} + |10\rangle_{12}) \otimes \frac{1}{\sqrt{2}}(|0 \rangle_3 + |1 \rangle_3)$, the third-order interference term reads $\mathcal{I}_3 = - \sin^2 (\theta/2)$. 
While the number of photons is not fixed in this state, it is either $1$ or $2$ in each branch of the input superposition and therefore the third-order interference could be observed with vanishing probability of detecting more than two photons.} Before presenting examples of other nonlinearities that produce higher-order interference, let us note that due to the specific combination of terms entering the higher-order interference expression, any noise that is independent of the input signal is irrelevant, e.g. detector dark counts. If the noise alone is characterised by probability $d(n)$ to observe $n$ photons and the ideal signal has probabilities $p(n)$, the independence is encoded by the convolution $r(n) = \sum_{k} p(k) d(n-k)$, where $r(n)$ is the probability of observing $n$ photons with the noisy detector. Such noise just shifts the intensity of arbitrary input state by the same amount $\Delta$, see Appendix~\ref{APP_D}. Therefore, the higher-order interference term in the presence of noise is given by \begin{equation} \tilde{\mathcal{I}}_M = \sum_{x_1, \dots, x_M = 0}^1 (-1)^{x_1 + \dots + x_M} (\Delta + I_{x_1 \dots x_M}) = \mathcal{I}_M. \end{equation} This shows the robustness of estimating higher-order interference in a real laboratory setting. \begin{figure} \caption{Higher-order interference with Bose-Einstein condensates. The condensate is initially prepared in an even superposition of gaussian wave functions each with {position spread (standard deviation) of} $1 \mu$m centered at $\pm 5 \mu$m and $0$. The solid line presents the values of $\mathcal{I}_3$ for the repulsive condensate of $^{87}$Rb with parameters $N = 10^3$ atoms, $a = 5.8$ nm and $m = 1.45 \times 10^{-25}$ kg after an evolution time of $\tau=1$~ms. The dashed line is for the attractive condensate of $^{7}$Li with parameters $N = 500$ atoms, $a = -1.2$ nm and $m = 1.16 \times 10^{-26}$ kg. Both are experimentally feasible with present day technology~\cite{Dalibard1999,Brachet1999}. } \label{FIG_GPE} \end{figure} \section{Interacting Bose gas} Nonlinearity in other physical systems can also lead to higher-order interference. For example, consider a Bose-Einstein condensate initialised in an even superposition of three Gaussian wave functions. We compute its one-dimensional dynamics according to the nonlinear Gross-Pitaevskii equation \begin{equation} i \hbar \frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2} + \frac{4 \pi \hbar^2 N a}{m} |\psi(x,t)|^2 \psi(x,t), \end{equation} where $N$ is the number of atoms, each of mass $m$, and $a$ is the scattering length. The initial wave function is normalised to unity. The system is evolved for a time $\tau$ after which we record the distribution of particles in one dimension. Blocking the paths is modeled by removing the corresponding part of the initial superposition (and keeping the state unnormalised). Fig.~\ref{FIG_GPE} shows the results for $\mathcal{I}_3$ confirming the experimental feasibility of observing third-order interference. The same conclusion is expected to hold in other physical systems with dynamics modeled by the Gross-Pitaevskii equation, e.g. polaritons~\cite{RMP.85.299}. \emph{Detection nonlinearity.} Our last example is a photodetector with a nonlinear response. The main features of a real detector are: an essentially linear response in the low intensity regime and saturation, which may set in for high input intensities. 
To illustrate our point, we are only interested in the saturation domain where the measured intensity $I_r$ is modeled as $I_r = I_i - \epsilon I_i^2$, where $I_i$ is the output of an ideal detector and $\epsilon$ is the strength of nonlinear saturation. For single-photon detectors $\epsilon$ is given by the detector dead time; this can be seen by expanding equation (A1) of \cite{Kauten2014}. With such detectors, even linear interactions can give rise to non-zero higher order interference; this has been discussed from an experimental perspective in Refs.~\cite{Kauten2014,Weihs2017}. We now provide a simple theoretical example in which a non-zero $\mathcal{I}_3$ appears in a setup with three paths combined on a symmetric tritter (a generalisation of a beam splitter to three paths). In particular the unitary describing the tritter has matrix elements given by $U_{ij}=\frac{1}{\sqrt 3}\omega^{ij}$, where $\omega=e^{i2\pi/3}$. After the tritter we monitor the first output port with the nonlinear detector. Accordingly, the tritter unitary gives $U^\dagger a_1^\dagger U = \frac{1}{\sqrt{3}}(a_1^\dagger + a_2^\dagger + a_3^\dagger)$ and the detector is represented by $a_1^\dagger a_1 - \epsilon (a_1^\dagger a_1)^2$. With this at hand one finds a third-order interference term of $\mathcal{I}_3 = - 4 \epsilon |\alpha|^4$, where we have also assumed that all three input modes are injected with the same coherent states $| \alpha \rangle$. Taking a dead time of $50$ ns (which is a typical value for commercially available single-photon detectors), this leads to $|\alpha|^2\approx 2000$ for $\mathcal{I}_3 = 1$, which can be understood as the number of photons per detector dead time. This is equivalent to $\approx 10^{10}$ photons per second or about $10$ nW. \section{Conclusions} We have theoretically demonstrated the emergence of higher-order interference within the standard formalism of second quantisation. Its origin is traced to nonlinearity in multipartite processes. However, if the interaction is linear or the input state is a single-particle state then $\mathcal{I}_3=0$. Moreover, the non-vanishing $\mathcal{I}_3$ should be observable with present day technology such as with nonlinear optics or Bose-Einstein condensates. Our work shows that if one wishes to place limits on quantum theory, nonlinearities elsewhere in the system must be considered, and single-particle states should be used in the experiments. Finally, it is worth stressing the difference between our higher-order interference and that arising from looped trajectories~\cite{Yabuki86,RMH2012,Sinha2014,Sinha2015,Boyd2016,Sinha2018}. Looped trajectories arise as a consequence of real-world multi-slit experiments being multi-mode devices. In fact, it has been pointed out that if one replaces the traditional triple-slit experiment with single-mode beams interfering on a tritter (or some other unitary structure) the higher-order interference due to looped trajectories becomes negligible~\cite{Weihs2017}. This is exactly the case in our proposal, which deals with $M$ ideal single modes which do not admit such exotic trajectories. Hence, the high-order interference that we predict cannot be understood as a systematic experimental error but is fundamental to multipartite nonlinear quantum systems. 
Nevertheless, both our nonlinear examples and the looped trajectories illustrate that different implicit assumptions are made in the claim that quantum mechanics is a second-order interference theory, and that these assumptions have direct consequences for experiments searching for higher-order interference. \section*{Acknowledgments} We thank M. Radonji\'c and \v C. Brukner for useful discussions. L.A.R. acknowledges support from the Templeton World Charity Foundation (fellowship no. TWCF0194) and the Austrian Science Fund (FWF) through BeyondC (F71). T.P. is supported by the Polish National Agency for Academic Exchange NAWA Project No. PPN/PPO/2018/1/00007/U/00001. B.D. acknowledges support from an ESQ Discovery Grant of the Austrian Academy of Sciences (OAW) and the Austrian Science Fund (FWF) through BeyondC (F71). \section*{Appendix} \appendix \subsection{Path blocker in second quantisation} \label{APP_A} We prove that $\Pi_1(\rho) = | 0 \rangle_1 \langle 0 | \otimes \rho_{2\dots M}$, where $\rho_{2\dots M}$ is the reduced state on all the other paths. First of all, clearly $\Pi_1(\rho_1) = | 0 \rangle_1 \langle 0 |$, i.e. any state injected to the blocked path results in the vacuum on that path. An arbitrary state $\rho$ can be decomposed as $\rho = \sum_{j,k} c_{jk} \rho_1^{(j)} \otimes \rho_{2 \dots M}^{(k)}$, where the coefficients are not necessarily non-negative, but all the summed matrices are proper quantum states. Since $\Pi_1$ is a linear operation, we have $\Pi_1(\rho) = | 0 \rangle_1 \langle 0 | \otimes \rho_{2 \dots M}$ as claimed. \subsection{Properties of the interference operator} \label{APP_B} The interference operator is defined via relation $\mathcal{I}_M = \mathrm{Tr}(U^{\dagger} a_1^{\dagger} a_1 U \hat{\mathcal{I}}_M )$. It therefore has the following explicit expansion \begin{equation} \hat{\mathcal{I}}_M = \sum_{x_1, \dots, x_M = 0}^1 (-1)^{x_1 + \dots + x_M} \rho_{x_1 \dots x_M}. \end{equation} Its crucial property used in our arguments is \begin{equation} \mathrm{Tr}_k(\hat{\mathcal{I}}_M) = 0, \label{EQ_PROP} \end{equation} where the partial trace is over an arbitrary subset of paths denoted collectively as $k$. \subsection{Derivation of Eq. (\ref{EQ_MTH})} \label{APP_C} First note that for the input state $| \alpha_1 \rangle \dots |\alpha_M \rangle$ the interference operator reads \begin{equation} \hat{\mathcal{I}}_M = ( | \alpha_1 \rangle \langle \alpha_1 | - |0\rangle_1 \langle 0 |) \otimes \dots \otimes ( | \alpha_M \rangle \langle \alpha_M | - |0\rangle_M \langle 0 |). \label{EQ_IM_PROD} \end{equation} The higher-order interference term for the setup of Fig.~\ref{FIG_M} is \begin{equation} \begin{split} \mathcal{I}_M = \mathrm{Tr} ( U_{1M}^\dagger \dots U_{13}^\dagger (a_1^\dagger a_1 + a_1^\dagger a_2 +\\ a_2^\dagger a_1 + a_2^\dagger a_2) U_{13} \dots U_{1M} \hat{\mathcal{I}}_M ), \end{split} \end{equation} where we have used the transformation of the beam splitter. Note that the first and the last term in the inner bracket do not contribute to $\mathcal{I}_M$, because they commute with the unitaries and due to the partial trace property in Eq.~(\ref{EQ_PROP}). From the definition of non-linear phase shift \begin{equation} \mathcal{I}_M = \mathrm{Tr} \left( a_1^\dagger a_2 \exp(-i \theta (a_3^\dagger a_3 + \dots + a_M^\dagger a_M)) \hat{\mathcal{I}}_M \right) + \textrm{c.c.}. \end{equation} Assuming all the input coherent states differ just by phases, i.e. 
$| \alpha_m \rangle = |\alpha e^{i \varphi_m} \rangle$, using $\exp(-i \theta a_m^\dagger a_m) | \alpha e^{i \varphi_m} \rangle = | \alpha e^{- i (\theta - \varphi_m)} \rangle$ and Eq.~(\ref{EQ_IM_PROD}) we find \begin{equation} \mathcal{I}_M = e^{i (\varphi_2 - \varphi_1)} \frac{1}{2} | \alpha|^2 \left( \langle \alpha | \alpha e^{- i \theta} \rangle - 1 \right)^{M-2} + \textrm{c.c}. \end{equation} Let us denote the complex coefficient multiplying the first exponential by $A = |A| e^{i \delta}$. With this notation \begin{eqnarray} \mathcal{I}_M & = & 2 |A| \cos(\delta) \cos(\varphi_2 - \varphi_1) + 2 |A| \sin(\delta) \sin(\varphi_2 - \varphi_1) \nonumber \\ & = & 2 |A| \cos(\varphi_2 - \varphi_1 - \delta). \end{eqnarray} In Eq.~(\ref{EQ_MTH}) we additionally used the formula for the overlap between coherent states. \subsection{Intensity under independent noise} \label{APP_D} Here we show that any detector noise independent of the input state shifts the measured intensity by a constant. We model independent noise by adding an ancillary mode in the state $| d \rangle = \sum_n \sqrt{d(n)} | n \rangle$, and introduce measurement operators describing the detection of $n$ photons by the noisy detector as follows: \begin{equation} \Pi_n = \sum_{k = 0}^\infty | k \rangle \langle k | \otimes | n - k \rangle \langle n - k |, \end{equation} where the first Hilbert space describes the measured system, the second space is for the ancilla and all the kets where $n-k$ is negative are replaced by zeros. Indeed one verifies that $r(n) = \mathrm{Tr}((\rho \otimes | d \rangle \langle d |) \Pi_n)$. The intensity of this noisy measurement reads: \begin{eqnarray} \tilde{I} = \sum_{n = 0}^\infty n \, r(n) = \sum_{k = 0}^\infty \langle k | \rho | k \rangle \sum_{n = k}^\infty n | \langle d| n-k \rangle |^2. \end{eqnarray} We write $n = (n-k) + k$ and accordingly split the second sum into: \begin{eqnarray} S_1 & = & \sum_{n = k}^\infty (n-k) | \langle d| n-k \rangle |^2 = \langle d^\dagger d \rangle, \\ S_2 & = & \sum_{n = k}^\infty k | \langle d| n-k \rangle |^2 = k, \end{eqnarray} where in the first line we introduced the number operator $d^\dagger d$ for the ancillary mode and in the second line we used the completeness relation. The expectation value $\Delta = \langle d^\dagger d \rangle$ is calculated in the state $| d \rangle$ describing the noise. Finally the noisy intensity is \begin{eqnarray} \tilde{I} = \sum_{k = 0}^\infty \langle k | \rho | k \rangle (\Delta + k) = \Delta + \langle a^\dagger a \rangle, \end{eqnarray} where $\langle a^\dagger a \rangle$ gives the intensity of the ideal measurement. \input{hoi.bbl} \end{document}
arXiv
\begin{definition}[Definition:Axis/Z-Axis] In a cartesian coordinate system, the '''$z$-axis''' is the axis passing through $x = 0, y = 0$ which is perpendicular to both the $x$-axis and the $y$-axis. It consists of all the points in the real vector space in question (usually $\R^3$) at which all the coordinates except $z$ are zero. As the visual field is effectively two-dimensional, it is not possible to depict a three-dimensional space directly on a visual presentation (paper, screen and so on). Therefore the representation of the third axis of such a cartesian coordinate system is necessarily a compromise. However, if we consider the plane of the visual field as being a representation of the $x$-$y$ plane, the '''$z$-axis''' can be ''imagined'' as coming "out of the page". \end{definition}
ProofWiki
Small contribution of gold mines to the ongoing tuberculosis epidemic in South Africa: a modeling-based study

Stewart T. Chang1, Violet N. Chihota2,3,4, Katherine L. Fielding3,5, Alison D. Grant3,6,7, Rein M. Houben8, Richard G. White8, Gavin J. Churchyard2,3,9, Philip A. Eckhoff1 & Bradley G. Wagner1

BMC Medicine, volume 16, Article number: 52 (2018)

The Correction to this article has been published in BMC Medicine 2018 16:242.

Abstract

Background: Gold mines represent a potential hotspot for Mycobacterium tuberculosis (Mtb) transmission and may be exacerbating the tuberculosis (TB) epidemic in South Africa. However, the presence of multiple factors complicates estimation of the mining contribution to the TB burden in South Africa.

Methods: We developed two models of TB in South Africa, a static risk model and an individual-based model that accounts for longer-term trends. Both models account for four populations — mine workers, peri-mining residents, labor-sending residents, and other residents of South Africa — including the size and prevalence of latent TB infection, active TB, and HIV of each population and mixing between populations. We calibrated to mine- and country-level data and used the static model to estimate force of infection (FOI) and new infections attributable to local residents in each community compared to other residents. Using the individual-based model, we simulated a counterfactual scenario to estimate the fraction of overall TB incidence in South Africa attributable to recent transmission in mines.

Results: We estimated that the majority of FOI in each community is attributable to local residents: 93.9% (95% confidence interval 92.4–95.1%), 91.5% (91.4–91.5%), and 94.7% (94.7–94.7%) in gold mining, peri-mining, and labor-sending communities, respectively. Assuming a higher rate of Mtb transmission in mines, 4.1% (2.6–5.8%), 5.0% (4.5–5.5%), and 9.0% (8.8–9.1%) of new infections in South Africa are attributable to gold mine workers, peri-mining residents, and labor-sending residents, respectively. Therefore, mine workers with TB disease, who constitute ~ 2.5% of the prevalent TB cases in South Africa, contribute 1.62 (1.04–2.30) times as many new infections as TB cases in South Africa on average. By modeling TB on a longer time scale, we estimate 63.0% (58.5–67.7%) of incident TB disease in gold mining communities to be attributable to recent transmission, of which 92.5% (92.1–92.9%) is attributable to local transmission.

Conclusions: Gold mine workers are estimated to contribute a disproportionately large number of Mtb infections in South Africa on a per-capita basis. However, mine workers contribute only a small fraction of overall Mtb infections in South Africa. Our results suggest that curtailing transmission in mines may have limited impact at the country level, despite potentially significant impact at the mining level.

Background

Gold mines in South Africa have historically been implicated in initiating the tuberculosis (TB) epidemic in South Africa. As Packard notes, "The immense size of the mine labor force, over 200,000 on the Rand alone by 1910, together with the appalling health conditions that existed on the mines, ensured that they would play a central role in the early development of TB in southern Africa" [1]. To what extent gold mines continue to contribute to TB in South Africa, however, is subject to debate. Several factors complicate this question.
Within the mines, crowding, insufficient ventilation, and warm, humid air may increase the rate of Mycobacterium tuberculosis (Mtb) transmission. Biological and social factors may then affect the extent to which Mtb spreads among mine workers and from mine workers to other groups. For example, mine workers and residents of other areas with whom they interact may already have latent tuberculosis infection (LTBI), which confers partial immunity to reinfection despite posing a longer-term risk for reactivation in the future [2]. Both mine workers and residents of other communities may also carry high burdens of HIV infection, increasing their rate of reactivation [3,4,5]. Finally, mixing patterns between mine workers and other residents may determine to what extent mine workers contribute to the larger epidemic [2, 6]. For example, estimating the risk of TB infection in peri-mining residents due to mine workers requires one to account for the probability that susceptible peri-mining residents come into contact with infectious mine workers, which depends on the size of each group, the prevalence of LTBI and active TB in each group, and the amount of time the groups spend together. On a longer time scale, labor-related migration and repatriation of mine workers are also likely to affect how widely mine workers may spread infections [2, 6]. Mathematical models have served as useful tools for understanding the TB epidemic in South Africa. For example, at the country level, models have been used to predict the impact of implementing different interventions [7,8,9,10]. Models have also proved useful for understanding more local disease dynamics, e.g., at the level of a city [11] or in specific environments such as a prison [12] or a household [13]. More recently, models have also been applied to the gold mines in South Africa to understand the results of the Thibela TB study, which tested a sustained campaign of preventive therapy among mine workers [14, 15]. However, models have not yet addressed how mine workers mix with other groups and whether these interactions contribute to overall TB burden. To estimate the contribution of gold mines to the ongoing TB epidemic in South Africa, we developed two computational models and applied them to gold mine workers and mining-related groups in South Africa. First, we developed a simplified static risk model that accounts for data on gold mine workers and peri-mining, labor-sending, and other residents of South Africa and estimates the force of infection (FOI) and fraction of transmission events (new infections) in each community that are attributable to local residents compared to residents from other areas. Second, we developed a dynamic, individual-based model of TB that accounts for longer-term trends in demographics and risk factors and also features a more detailed disease natural history to estimate the fraction of incidence attributable to transmission from gold mine workers. Together, these tools provide quantitative estimates that address to what extent gold mines are continuing to contribute to the TB epidemic in South Africa.

Methods

Epidemiological data sources

We consider four residency groups in South Africa: gold mine workers, peri-mining residents, labor-sending residents, and other residents of South Africa. Our primary data source for mine workers was data collected during the Thibela TB study [16].
Peri-mining communities were identified based on proximity to Thibela TB study sites, comprising the Lejweleputswa (Free State province), West Rand (Gauteng province), and Dr Kenneth Kaunda (North West province) districts. Labor-sending communities were identified as the OR Tambo and Alfred Nzo (Eastern Cape province) and Ugu and Sisonke (KwaZulu-Natal province) districts. Other residents of South Africa were assumed to comprise all remaining districts. Residents of areas outside of South Africa were not considered. In the models we accounted for three general categories of parameters: population size, disease natural history, and population mixing. Population sizes were taken from the South Africa Census 2011 [17]. For the mine worker population, we considered both gold mine workers and mine workers of other commodities, representing an upper limit on the at-risk population [18, 19]. The epidemiological characteristics of each population were derived from the literature. Measurements of TB incidence and prevalence at the country level were taken from World Health Organization estimates [20] and at the mine level from the Thibela TB study [21], while HIV prevalence and antiretroviral therapy (ART) coverage were taken from Joint United Nations Programme on HIV/AIDS (UNAIDS)-based measurements [22]. Disease natural history parameters were similar to those found in other TB models and included the rate at which infected individuals progress to active disease as a result of primary disease or reactivation, the effect of HIV on reactivation, and the frequency of different forms of active disease (smear-positive, smear-negative, and extra-pulmonary) and relative infectiousness of each form (Additional file 1: Table S1). Parameters specific to mining included a multiplier for increased Mtb transmission in the mines, the prevalence of silicosis among mine workers, and the effect of silicosis on reactivation (Additional file 1: Table S2). South Africa-specific estimates of healthcare access and treatment effectiveness via directly observed therapy short-course (DOTS) were also included (Additional file 1: Table S3). Population mixing parameters were taken from national tourism surveys [23] as well as data collected during the Thibela TB study (Additional file 1: Table S4).

Description of static risk model (spreadsheet model)

To account for the current state of the TB epidemic in mining and mining-related communities in South Africa and short-term, sub-annum processes that relate to TB transmission between these communities, we developed a static risk model. The model represents a Taylor series-type approximation of dynamic processes such as the generation of new infections given the number of susceptibles and prevalent cases in each population and the effect of risk factors such as increased Mtb transmission in the mines and HIV in the overall population. These quantities are not updated iteratively in the model; therefore, the model represents a short-term, 1-year projection of these quantities. The static risk model (spreadsheet model) was encoded in Excel and comprises formulas to calculate the FOI (per-susceptible rate of infection) and number of infections occurring in each group (Fig. 1a). A "who acquires infection from whom" (WAIFW) matrix [24] was derived where each element βij represents the rate of Mtb transmission from infectives in group j to susceptibles in group i, summed over every community k where contact was assumed possible (spreadsheet matrix 1).
Each element of the WAIFW matrix was based on a base rate of transmission β0, which we defined as the number of new infections generated by each infective case per year, averaged over smear-positive, smear-negative, and extra-pulmonary forms of disease. β0 was multiplied by a community-specific transmission multiplier ck, which accounts for multiple environmental factors and was calibrated in the individual-based model, and by the fractions of each year pik and pjk that individuals from groups i and j, respectively, spend in community k (Fig. 1b). pik and pjk were converted to a frequency of contact between individuals from groups i and j in community k by dividing by the total number of individuals Nk present in community k at any given time, i.e., assuming frequency-dependent transmission [25]. βij was then calculated by summing this product over the set A of all communities k where groups i and j spend time:

$$ \beta_{ij} = \beta_0 \sum_{k \in A} \frac{c_k \, p_{ik} \, p_{jk}}{N_k} $$

Fig. 1 Disease state transitions and groups represented in the static risk and individual-based models. a Disease transitions in the static risk model were limited to new infections and reactivation from existing latent infections, while the individual-based model also represented longer-term processes such as new infections contributing to prevalence (not shown). b Population mixing patterns as fraction of time per annum spent in a different community (for short-term mixing) or probability of residency change per annum (for longer-term migration). Both static risk and individual-based models represent short-term mixing, but only the individual-based model represents longer-term migration. S+/− silicosis presence/absence, M mining resident, LS labor-sending resident

Additional quantities were derived using the WAIFW matrix:

λij: The FOI among susceptibles in group i attributable to infectives from group j, calculated by multiplying βij by the number of infectives in group j (spreadsheet matrix 2). The sum over all j for a given i provides the overall probability of infection per year for susceptibles in group i.

Lij: The rate of infection in group i per capita attributable to group j, calculated by multiplying λij by the fraction of group i who are susceptible, where susceptible is defined as uninfected or latently infected but susceptible to reinfection (spreadsheet matrix 3).

PAF(λij): The population attributable fraction (PAF) of FOI in group i attributable to group j, calculated by dividing each λij by the sum of λij over all j for a given i (spreadsheet matrix 4).

PAF(Lj): The PAF of all new infections in South Africa attributable to group j, calculated by dividing the number of infections attributable to group j (i.e., the sum of Lij over all i for a given j) by the total number of infections (i.e., the sum of Lij over all i and all j) (spreadsheet matrix 5).

pcPAF(Lj): The PAF of all new infections in South Africa attributable to group j relative to the size of j (i.e., per-capita j), calculated by dividing PAF(Lj) by the fraction of the South African population that group j represents (spreadsheet matrix 6). This represents the contribution of a particular group relative to its population size, including both susceptible and infected individuals.

In the static risk model we also calculated a near-term estimate of TB incidence Ii in each group i (Fig. 1a). In this case near-term refers to 1 year in the future, as prevalence after the first year was not updated to include new cases or losses due to treatment or death.
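To make this bookkeeping concrete, the following is a minimal Python sketch of the spreadsheet calculation (our own illustration, not the authors' Excel implementation); all numerical inputs are hypothetical placeholders rather than the calibrated values.

```python
import numpy as np

# Hypothetical inputs for the 4 groups (mining, peri-mining, labor-sending,
# other South Africa); community k is taken to be group k's home community.
beta0 = 7.0                                   # assumed new infections per case per year
c = np.array([3.0, 1.0, 1.0, 1.0])            # community transmission multipliers c_k
p = np.array([[0.80, 0.10, 0.10, 0.00],       # p[i, k]: fraction of year group i
              [0.00, 1.00, 0.00, 0.00],       # spends in community k (placeholders)
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])
N_group = np.array([0.5e6, 2.1e6, 3.4e6, 47.0e6])  # 2011 census-scale group sizes
N_k = p.T @ N_group                           # people present in community k

# WAIFW matrix: beta_ij = beta0 * sum_k c_k * p_ik * p_jk / N_k
beta = beta0 * np.einsum('k,ik,jk->ij', c, p, p / N_k)

infectious = np.array([5.0e3, 1.0e4, 1.5e4, 1.5e5])  # assumed prevalent infectious cases
susc_frac = np.array([0.6, 0.5, 0.5, 0.7])           # assumed susceptible fractions

lam_ij = beta * infectious                    # lambda_ij: FOI in i attributable to j
foi = lam_ij.sum(axis=1)                      # lambda_i: per-susceptible infection rate
paf_foi = lam_ij / foi[:, None]               # PAF(lambda_ij); each row sums to 1

new_inf = susc_frac[:, None] * lam_ij * N_group[:, None]  # infections in i from j
paf_L = new_inf.sum(axis=0) / new_inf.sum()   # PAF(L_j): share of all new infections
pc_paf = paf_L / (N_group / N_group.sum())    # per-capita PAF(L_j)
print(np.round(paf_foi.diagonal(), 3), np.round(pc_paf, 2))
```

The diagonal of the PAF matrix is the fraction of each community's FOI attributable to its own residents, and the per-capita rates Li computed here feed into the near-term incidence calculation described next.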
Near-term incidence accounted for TB cases from primary disease resulting from new infections in a given year and TB cases from reactivation of a stable pool of latent infections. Specifically, near-term incidence was calculated as the sum of incidence from five sources: (1) primary disease from new infections (k1Li), (2) reactivation from non-silicotic, non-HIV-positive latent infections (k2(1 – Si)(1 – Hi)Pi), (3) reactivation from non-silicotic, HIV-positive latent infections (k2m0,H(1 – Si)HiPi), (4) reactivation from silicotic, non-HIV-positive latent infections (k2ms,0Si(1 – Hi)Pi), and (5) reactivation from silicotic, HIV-positive latent infections (k2ms,HSiHiPi). Equivalently,

$$ I_i = k_1 L_i + k_2 P_i \left[ (1 - S_i)(1 - H_i) + m_{0,H} (1 - S_i) H_i + m_{s,0} S_i (1 - H_i) + m_{s,H} S_i H_i \right] $$

Here k1 and k2 represent base rates of primary disease in newly infected and reactivation in latently infected individuals, respectively, and Li was derived from Lij above. Si, Hi, and Pi represent the stable prevalence of silicosis, HIV, and LTBI in group i, and ms,0, m0,H, and ms,H represent multipliers on k2 for silicotic, HIV-positive, and simultaneously silicotic and HIV-positive individuals, respectively. HIV-positive latent infections were further subdivided into those receiving or not receiving ART, each assigned a separate multiplier on k2. Because the static risk model did not account for longer-term migration, e.g., for mine workers repatriating to labor-sending areas, the prevalence of silicotics in non-mining areas was assumed to be zero. The resulting values of Ii were compared to published values for mine workers from the Thibela TB study and for all South Africa from WHO estimates (spreadsheet matrix 7). Monte Carlo simulations were performed where the mine-specific transmission rate and immunity from reinfection were sampled from normal distributions, while all other parameters were held at baseline values. For the mine-specific transmission rate, 95% of the density was assumed to lie within −20% and +20% of the baseline value. This range was selected based on the Gammaitoni-Nucci equation [26], which specifies that the probability of an individual acquiring an infection in a confined space increases exponentially as the inverse of the ventilation rate. Therefore, a −20% or +20% difference in transmission rate could result from a +25% or −17% change in ventilation rate, respectively; these were similar to values from the individual-based model calibration (cf. Additional file 1: Figure S2B, S2C). For immunity from reinfection, 95% of the density was assumed to lie within −40% and +40% of the baseline value. One thousand randomly drawn pairs of values for these two parameters were used, and 95% confidence intervals (CIs) were taken from the 0.025 and 0.975 quantiles of the resulting output values. Both the static risk model and the individual-based model as well as input parameter files are available on GitHub (https://github.com/SCTX/mining_contribution).
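For concreteness, here is a sketch of the near-term incidence formula above in the same style; k1, k2 and the silicosis/HIV multipliers below are illustrative placeholders, not the calibrated values, and the ART subdivision is omitted for brevity.

```python
def near_term_incidence(L_i, P_i, S_i, H_i,
                        k1=0.10, k2=0.001, m_s0=3.0, m_0H=10.0, m_sH=15.0):
    """Near-term TB incidence for group i: primary disease from this year's
    new infections (k1 * L_i) plus reactivation from the stable latent pool
    P_i, stratified by silicosis (S_i) and HIV (H_i) prevalence."""
    primary = k1 * L_i
    reactivation = k2 * P_i * ((1 - S_i) * (1 - H_i)
                               + m_0H * (1 - S_i) * H_i
                               + m_s0 * S_i * (1 - H_i)
                               + m_sH * S_i * H_i)
    return primary + reactivation

# Example with hypothetical mining-group values, reported per 100,000 per year:
print(round(near_term_incidence(L_i=0.20, P_i=0.60, S_i=0.18, H_i=0.25) * 1e5))
```

Sampling the two calibration parameters and re-running this calculation 1000 times, as described above, is what produces the reported confidence intervals.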
Description of individual-based model

While the static risk model accounts for the current state of the TB epidemic and processes occurring on a sub-annum time scale, particularly population mixing, it does not update the prevalence of different disease states iteratively and does not represent longer-term changes in demographics and risk factors such as HIV. To provide a longer-term representation of the TB epidemic, we developed a dynamic, individual-based model; this model was coded in C++ and based on the TB model available in the EMOD software package [27]. We briefly describe the model here; parameter values and additional details are provided in Additional file 1: Tables S1–S4. Individuals were assigned one of the following disease states: susceptible, latently infected, active pre-symptomatic, active symptomatic, and recovered. Individuals transitioned between these states randomly, according to exponentially distributed delays. Birth and non-disease death processes were represented, whereby individuals were added to and removed from the simulation at rates consistent with South Africa. A residency status in one of the four groups in the model (mining, peri-mining, labor-sending, and other South Africa) was assigned at birth. Residency status was retained for the lifetime of the individual, except for individuals born with labor-sending group status, who transitioned to mining status during adulthood and then back to labor-sending status upon retirement. Short-term mixing between groups, representing regular visits from mine workers to peri-mining or labor-sending communities, was specified by an interaction matrix as in the static risk model. The numbers of discrete agents in the model were scaled in the output to reflect the sizes of the real populations: individuals in mining, peri-mining, and labor-sending groups by approximately 50:1 and in other South Africa by approximately 400:1. These scale factors corresponded to 2011 population sizes of approximately 0.5 million (M), 2.1 M, and 3.4 M in mining, peri-mining, and labor-sending areas, respectively, and 47 M in the remainder of South Africa [17]. Two risk factors were represented in the individual-based model: HIV infection and silicosis. HIV infection was distributed to individuals according to age-specific rates of infection; these were generated from the EMOD HIV model calibrated to South Africa [28] and assumed to be the same across residency groups [29,30,31]. The EMOD HIV model assigned a CD4 count to each individual, which declined linearly with time in the absence of treatment [29,30,31]. ART was distributed to HIV-positive individuals according to eligibility guidelines in South Africa and matched population coverage estimates [22]. ART had the effect of increasing CD4 levels in the model [29,30,31]. Silicosis was acquired by mining group individuals at a rate consistent with radiographic observations in mine workers, at approximately 1% per year of employment [32,33,34] (Additional file 1: Table S2). Susceptible individuals in the model were infected at a rate that differed by residency group and depended on the total infectiousness of other groups and the frequency of group interactions. The infectiousness of each group depended on the prevalence of different forms of active TB, where pre-symptomatic, smear-negative, and extra-pulmonary forms of disease were assumed to contribute less infectiousness than smear-positive disease. Data on the frequency of group interactions were specified by a WAIFW matrix as in the static risk model [35]. Infected individuals transitioned to active disease with one of two rates, representing primary and reactivation disease. Active disease in the model included a pre-symptomatic period of set duration, followed by symptomatic disease of smear-positive, smear-negative, or extra-pulmonary forms. Individuals persisted in symptomatic disease until progressing to self-cure, treatment, or death. HIV and silicosis had the effect of increasing the rate of reactivation, represented as multipliers on the base rate of reactivation [36].
For HIV-positive individuals, the magnitude of the increase varied as the inverse of CD4 level. Individuals with active symptomatic disease were assumed to seek care at high or low rates corresponding to high- or low-quality access to care, respectively, broadly representing different levels of care in South Africa. Upon accessing care, symptomatic individuals were assumed to receive a sputum smear or GeneXpert test, depending on whether care was sought before or during DOTS availability. The probability of a positive test result corresponded to observed test sensitivities. If a positive test result was obtained, an individual was assumed to undergo treatment, with a rate of disease clearance that depended on whether treatment was given before or during DOTS availability. Following treatment, individuals transitioned to a recovered state that was identical to the susceptible state but assumed to have a reduced probability of reinfection due to immunity.

Individual-based model calibration and application

Several historical population-wide events were simulated in the individual-based model. During each simulation the model was seeded and run for a specified burn-in period. With a burn-in period of 100 simulated years, incidence and mortality were observed to be stable in the different groups in the model, consistent with endemic TB. HIV, DOTS, and ART were introduced at simulated years 1985, 2002, and 2007, respectively, representing country-wide trends. The model was calibrated to TB incidence in mining areas measured during the Thibela TB study [21] and TB incidence and mortality at the country level for multiple years [37]. Parameters for the transmission rate in mining areas and immunity to reinfection following previous TB exposure were varied during calibration. The transmission rate in mining areas was parameterized as a multiple of the base transmission rate and represented the aggregate environmental factors, e.g., reduced ventilation rates, that may increase Mtb transmission in mines. A likelihood score for each parameter combination was computed using a likelihood function based on a normal distribution, where the differences between published high and low estimates of incidence and mortality were taken to represent 95% CIs and the three epidemiological indicators were equally weighted. The joint posterior distribution of the two parameters conditional on the data was estimated via incremental mixture importance sampling (IMIS) [38]. The joint distribution was found to be unimodal and strongly peaked; therefore, parameters for subsequent simulations were set at the maximum a posteriori estimate (joint posterior distribution, Additional file 1: Figure S2A; marginal distributions, Figure S2B, S2C in Additional file 1). For consistency, the values for these parameters were also used in the spreadsheet model. The posterior estimate of the reduction in susceptibility to reinfection was similar to previous estimates of bacille Calmette-Guérin (BCG) protection against active disease, 0.58 (95% CI 0.35–1.01) [39]. Technical details regarding the calibration procedure are available in Additional file 1. To measure the incidence attributable to mine workers using the individual-based model, we simulated two counterfactual scenarios: first, having no Mtb transmission in the mines and, second, having no Mtb transmission in any area. These scenarios were identical to the baseline scenario until simulated year 2012, when Mtb transmission was stopped in the model.
Other processes such as disease progression continued unchanged. The numbers of new cases of active disease between simulated years 2014 and 2019 were counted for each counterfactual scenario and compared to the baseline scenario. This calculation was repeated for each residency group in the model.

Results

Most force of infection in communities is attributable to local residents

We used a static risk model to calculate the FOI and number of transmission events (new infections) in different mining-related communities in South Africa and predict the near-term (following-year) incidence in these communities (Fig. 1a). The model accounted for a number of factors including a higher rate of Mtb transmission due to environmental factors and the amount of time that residents reported spending in their own versus other communities, where mixing was assumed to be proportional to the time spent in each community (Fig. 1b). As a check on the static risk model, we compared FOI output from the model to data on the annual risk of TB infection (ARTI) in children. Under baseline parameters, we estimated FOI to be 21.2% (95% CI 16.4–26.1%) in mine workers, 4.3% (95% CI 4.3–4.3%, indicating a difference of < 0.05%) in peri-mining residents, 5.8% (95% CI 5.8–5.8%) in labor-sending residents, and 3.5% (95% CI 3.5–3.5%) in other South African residents (Table 1, Additional file 1: Table S5). CIs were derived by sampling parameter values for mine-specific transmission and immunity following previous infection. The FOI estimate for other South African residents in the model was found to be consistent with available ARTI measurements: 2.5–4.2% (across Western Cape, 2005, [40]), 3.8–4.5% (in Cape Town, 2005, [41]), 3.9–4.8% (in Cape Town, 2009, [42]), and 2.1–5.2% (in Johannesburg, 2013, [43]).

Table 1 Force of infection (per-susceptible rate of infection) attributable to each population

Using the static risk model, we then calculated the fraction of FOI in each community attributable to each residency group. We estimated that the majority of each community's FOI was attributable to local residents: 93.9% (95% CI 92.4–95.1%), 91.5% (95% CI 91.4–91.5%), 94.7% (95% CI 94.6–94.7%), and 98.8% (95% CI 98.8–98.8%) in mining, peri-mining, labor-sending, and other SA communities, respectively (Table 1, Additional file 1: Table S5). Despite the amount of time mine workers were assumed to spend in other areas (up to 20% per annum), the FOI in peri-mining, labor-sending, and other SA communities attributable to mine workers was estimated to be 5.8% (95% CI 5.8–5.8%), 3.6% (95% CI 3.5–3.6%), and 0.1% (95% CI 0.1–0.1%), respectively.

Gold mine workers contribute more TB infections per capita than other residents

Using the preceding FOI and data on susceptible individuals, i.e., either uninfected or latently infected but susceptible to reinfection, we estimated the number of new infections expected to occur in each community and compared these results to published TB incidence for different communities. For mine workers and the overall population, we estimated TB incidence to be 2963 (95% CI 2208–3858) and 989 (95% CI 980–1000) per 100,000 individuals, respectively. These were similar to published values for these communities, 2957 (in control cluster mines during the Thibela TB study, between 2006 and 2010 [21]) and 977 (717–1276) (in South Africa, 2008 [20, 44]) per 100,000 (Additional file 1: Table S6). Using the static risk model, we also estimated the fraction of infections occurring each year attributable to each residency group.
Out of the overall number of new infections occurring in South Africa per annum, we estimated that 4.0% (95% CI 2.6–5.8%), 5.0% (95% CI 4.5–5.5%), and 9.0% (95% CI 8.8–9.1%) were attributable to mining, peri-mining, and labor-sending residents, respectively (Table 2, Additional file 1: Table S7). When scaled to the fraction of the overall population in South Africa that each group represents, mine workers, peri-mining residents, and labor-sending residents contributed 4.32 (95% CI 2.77–6.15), 1.21 (95% CI 1.09–1.34), and 1.39 (95% CI 1.36–1.41) times as many infections as South Africans as a whole (Table 2, Additional file 1: Table S7). Similarly, when scaled to the fraction of the overall number of prevalent cases in South Africa found in each group, mine workers, peri-mining residents, and labor-sending residents contributed 1.62 (95% CI 1.04–2.30), 1.14 (95% CI 1.02–1.25), and 1.07 (95% CI 1.05–1.09) times as many infections as South Africans as a whole (Table 2, Additional file 1: Table S7). Therefore, while mine workers contribute a larger number of infections on a per-capita or per-prevalent case basis than other South Africans, the majority of these infections occur among mine workers themselves.

Table 2 New infections in all South Africa attributable to each population

Local recent transmission is the source of the majority of incident TB cases in gold mines

To measure the impact of Mtb transmission in the mines on incidence, we used a dynamic, individual-based model of TB in South Africa. In this model we accounted for longer-term demographic changes and additional pathways leading to active disease including reactivation from transmission occurring over a longer time window. We calibrated the model to several epidemiological indicators including incidence and mortality over multiple years [21, 37]. Model estimates of TB incidence and mortality overlapped published ranges, both at the country level (Fig. 2a, b; Additional file 1: Figure S3A, S3B) and at the mining level (Fig. 2c, d; Additional file 1: Figure S3C, S3D). In particular, model incidence reproduced measurements from the Thibela TB study, showing a threefold higher incidence in the mines compared to South Africa overall (Fig. 2a, c). Model incidence in the mines preceding the Thibela TB study exceeded 4000 per 100,000 (Fig. 2c), consistent with previous studies on mine workers [45]. Model mortality due to TB among mine workers was approximately 1% per annum (Fig. 2d), which was consistent with a range that includes the 0.9% all-cause mortality rate and 4.3% all-cause mortality-plus-medically boarded rate observed as a secondary outcome of the Thibela TB study [21]. As an additional test of the model, including South Africa-specific parameters derived from calibration, we used the model to simulate the Thibela TB study intervention of widely available preventive therapy. Following a cessation of the intervention, we observed a rebound in model incidence similar to the rebound observed during the Thibela TB study (Additional file 1: Figure S4A).

Fig. 2 Simulated time series of the TB epidemic in different communities in South Africa. Means and 95% CIs were derived from 200 stochastic realizations of the model where input parameters were set at the mode of the posterior distribution of two calibration parameters. a TB incidence in peri-mining, labor-sending, and other South Africa residents. b TB mortality in peri-mining, labor-sending, and other South Africa residents.
In a and b, the population-weighted mean of the four populations in the model is also shown. c TB incidence in mine workers. d TB mortality in mine workers. e Methodology for computing the fraction of incidence attributable to recent Mtb transmission in the mines. The upper curve is identical to the curve in c, while the lower curve represents the mean and 95% CI of stochastic realizations that were identical to c until simulated year 2012, after which Mtb transmission from mine workers was stopped but all other aspects of the model remained unchanged. Attribution was calculated from the difference in incidence between simulated years 2014 and 2019.

To estimate the fraction of incident cases attributable to gold mines, we simulated a counterfactual scenario of stoppage of Mtb transmission in mining areas and measured the subsequent change in incidence over several years (Fig. 2e). By doing so, we estimated that recent Mtb transmission in the mines contributed 58.2% (95% CI 57.8–58.9%), 4.8% (95% CI 4.3–5.2%), and 4.9% (4.4–5.2%) of the TB incidence in mining, peri-mining, and labor-sending residents, respectively (Table 3). Among other residents of South Africa, the counterfactual scenario had a smaller effect that resulted in a time course that overlapped the baseline scenario at all time points, i.e., within the stochastic noise of the simulation (Table 3). In South Africa as a whole, the fraction of TB incidence due to recent Mtb transmission in the mines was estimated to be 2.4% (95% CI 1.4–3.3%) (Table 3). To test the robustness of this measurement, we performed a series of one-way sensitivity analyses based on varying the infectiousness in each community separately (mining, peri-mining, labor-sending, and other South Africa). In all cases, the resulting fraction of TB incidence due to recent Mtb transmission in the mines was found to vary maximally between 1 and 4% (Additional file 1: Figure S5).

Table 3 Incidence attributable to recent transmission in mining areas

As a second counterfactual scenario, we simulated stoppage of transmission in all areas of South Africa and calculated the fraction of incident cases attributable to recent transmission from any source. Among mining residents, 63.0% (95% CI 58.5–67.7%) of the incident cases were predicted to result from recent transmission, 92.5% (95% CI 92.1–92.9%) of which were attributable to recent transmission in mining areas (Table 3). In contrast, among all South African residents, 37.4% (95% CI 35.2–40.0%) of the incident cases were predicted to result from recent transmission, 3.7% (95% CI 3.0–4.4%) of which were attributable to recent transmission in mining areas (Table 3). These figures were consistent with local Mtb transmission in mines being the source for the majority of incident TB cases in the mines but only a small fraction of the incident TB cases in the remainder of the country.

Discussion

Using two different modeling approaches, we found that gold mine workers are likely to be contributing to the TB burden in South Africa but primarily at the level of their own communities and not the larger population of South Africa, owing to the generalized nature of the TB epidemic in South Africa. Using a static risk model, we captured several parameters that determine the extent of Mtb transmission from mine workers: the size of different populations with whom mine workers interact, the prevalence of latent infection and active disease in each population, and the amount of time that residents from different populations spend with each other.
Our model suggests that gold mine workers who number less than 0.5 M (< 1% of the population in South Africa) contribute approximately 4% of new infections in South Africa per annum. By comparison, residents in peri-mining and labor-sending areas who number approximately 2.1 M and 3.4 M (4% and 7% of the population) contribute approximately 5% and 9% of new infections in South Africa per annum, respectively. Therefore, mine workers contribute a disproportionately large number of new infections, as one might expect given their higher rates of disease and the setting in which they work. However, given their mixing patterns and other factors which we included in the model, we found that the effect is mostly at the level of their own communities. These factors include the amount of time that residents spend in other communities, which we estimated to be less than 25% per year, and the limited number of susceptibles available in other communities. For example, in high-burden areas such as peri-mining and labor-sending areas, more than 50% of the population may already be latently infected, reflecting a high FOI in these areas [42, 46]. We obtained similar results with an individual-based model which we used to simulate a counterfactual scenario of curtailed Mtb transmission in the mines for a period of more than 2 years. Using this approach, we estimated that 4% of the incidence in all of South Africa could be traced to recent transmission in the mines, similar to the attributable fraction of new infections. However, among mine workers themselves, greater than 50% of the incident cases could be traced to recent transmission in the mines, suggesting that ongoing transmission among mine workers continues to have a significant effect. This was consistent with results from Godfrey-Faussett and colleagues, who genotyped Mtb strains from mine workers and found that at least 50% of TB cases were due to transmission within the mines [47], as well as other studies showing a high degree of strain clustering in different parts of South Africa [46, 48, 49]. However, a more recent study of the Thibela TB study site by Mathema and colleagues has suggested that the fraction of incident TB cases in the mines due to recent infection may be lower than previously measured [50]. Additional work is needed to explain the differences in these results and connect results such as ours, based on simulation and counterfactuals, to results based on genetic clustering and molecular epidemiology. Nonetheless, our results suggest that curtailing transmission in the mines may have a measurable impact on the number of new cases of TB disease in the mines on a relatively short time frame, within 5 years or less. This is similar to the time frame posited by Vynnycky and colleagues, who used modeling to simulate a set of mine-targeted interventions such as reduced treatment delay and scaled-up ART and found it was possible to obtain a significant impact [15]. Given our results and those of Vynnycky et al., health officials may wish to consider measuring the extent of recent transmission, such as through Mtb strain genotyping, on an ongoing basis. A decrease in the proportion of cases that cluster genotypically is expected to accompany effective programs and would provide additional evidence of the effectiveness of TB control programs. Our study complements past efforts to measure the association between TB burden and mining such as the study by Stuckler and colleagues [51]. 
In that study, each 10% increase in mining production was associated with a 0.9% increase in TB incidence [51]. Our approach did not include mining production as a covariate, precluding a direct comparison of the results. In addition, we focused on the ongoing contribution of mine conditions, which differed from the focus of Stuckler et al. on historical mining production. Despite these differences, both our study and that of Stuckler et al. point to the need to consider mining in a larger context, whether that be population mixing or other comorbidities. For example, Stuckler et al. found that most of the effect of mining on TB was mediated by HIV prevalence; controlling for HIV greatly reduced the association with mining [51]. In our models, HIV plays a similarly large role and increases the activation rate of latent disease in all groups including mine workers. The large effect of HIV relative to mining production can also be seen directly by comparing the time courses for mining production, HIV prevalence, and TB incidence in South Africa. While mining production has decreased over the last two decades [52], TB incidence has more closely mirrored HIV prevalence, only beginning to decline after 2010 [53]. As ART usage continues to increase and HIV prevalence stabilizes, it will be interesting to observe whether decreases in mining production have a more discernible effect on TB incidence. Additional questions include whether decreasing mining production or Mtb transmission in the mines would have a different effect depending on HIV prevalence 5, 10, or 15 years in the future and more generally whether the impact of hotspot-targeted approaches depends on prior HIV control and whether hotspot targeting should be coordinated with HIV control programs. We plan to explore these questions in future applications of the model. Our study also contributes to the growing literature on using quantitative approaches to investigate potential TB hotspots. Recently, Dowdy and colleagues used a model to study high TB burden areas in Rio de Janeiro, Brazil [54]. In that study, areas that comprised 6% of the city population were found to contribute 35% of the new infections in the city, resulting in a 5.8:1 attributable infection:population size ratio. This compares to the 4.3:1 attributable infection:population size ratio that we found for mine workers (Table 2). As the quality of TB monitoring and evaluation improves globally, it may be useful to define a set of functional criteria for TB hotspots, e.g., what attributable infection:population size ratios qualify an area to be a hotspot and how many susceptibles need to reside in the larger population for a hotspot to pose a risk. To encourage discussions in this area, we have made many of these outputs, along with modifiable assumptions, accessible in our spreadsheet model. While we accounted for several factors in our study, including HIV, silicosis, and population mixing, a number of assumptions would benefit from additional study. For example, our results assume that the prevalence of latent infection and active disease in the peri-mining and labor-sending areas was at least as high as those found in the general population of South Africa. While this is supported by historical data [1, 2] and available case notification data [55], our estimates could be improved with accurate measurements in these areas, such as may become available from future prevalence surveys in South Africa. 
The number of populations that are included in the models could also be expanded to include foreign workers. Although the proportion of workers from countries outside of South Africa has decreased in recent decades, a more comprehensive accounting should include other countries in southern Africa including Lesotho, Swaziland, and Mozambique [56]. Finally, how we represent mixing could also be refined to account for different scales. While we informed mixing in our models using tourism and labor migration data, these provide only a proxy of mixing and exclude more local influences, such as interactions within mining areas and hostels and on public transport [57, 58]. Investigating Mtb transmission at more granular levels may lead to more actionable findings for mitigating risk. Despite these caveats, we believe our models account for the main factors likely to govern the contribution of gold mines to the TB epidemic: the size of the mine worker population, the TB burden in mine workers and other groups, and the amount of time mine workers spend in different areas. Together these factors suggest Mtb transmission in gold mines continues to feed infections in mines and mining-related communities, but to a much smaller extent in the country as a whole. In evaluating the impact of interventions designed to curtail transmission in the mines, the effect on both scales should be considered.

Conclusions

Using two models that integrate diverse types of data, we estimate that gold mine workers contribute a disproportionately large number of Mtb infections in South Africa on a per-capita basis. However, due to their relatively small population and the generalized nature of the TB epidemic in South Africa, gold mine workers contribute only a small fraction of the total number of Mtb infections in South Africa. Our results suggest efforts at curtailing transmission in the mines may have limited impact at the country level despite a potentially significant impact on a relatively short time frame in the mines themselves.

Correction notice: The original article did not contain comprehensive information regarding two authors' affiliations that may be considered a potential competing interest.

Abbreviations

ART: Antiretroviral therapy; 95% CI: 95% confidence interval; DOTS: Directly observed therapy short-course; FOI: Force of infection; LTBI: Latent tuberculosis infection; Mtb: Mycobacterium tuberculosis; PAF: Population attributable fraction; TB: Tuberculosis; WAIFW: Who acquires infection from whom

References

Packard RM. White plague, black labor: tuberculosis and the political economy of health and disease in South Africa. Berkeley: University of California Press; 1989. Rees D, Murray J, Nelson G, Sonnenberg P. Oscillating migration and the epidemics of silicosis, tuberculosis, and HIV infection in South African gold miners. Am J Ind Med. 2010;53:398–404. Evian C, Fox M, MacLeod W, Slotow SJ, Rosen S. Prevalence of HIV in workforces in southern Africa, 2000-2001. S Afr Med J. 2004;94:125–30. Corno L, de Walque D. Mines, Migration and HIV/AIDS in Southern Africa. J Afr Econ. 2012;21:465–98. Stuckler D, Steele S, Lurie M, Basu S. Introduction: "dying for gold": the effects of mineral mining on HIV, tuberculosis, silicosis, and occupational diseases in southern Africa. Int J Health Serv. 2013;43:639–49. Basu S, Stuckler D, Gonsalves G, Lurie M. The production of consumption: addressing the impact of mineral mining on tuberculosis in southern Africa. Glob Health. 2009;5:11. Knight GM, Dodd PJ, Grant AD, Fielding KL, Churchyard GJ, White RG. Tuberculosis prevention in South Africa. PLoS One. 2015;10:e0122514.
Houben RMGJ, Dowdy DW, Vassall A, Cohen T, Nicol MP, Granich RM, et al. How can mathematical models advance tuberculosis control in high HIV prevalence settings? Int J Tuberc Lung Dis. 2014;18:509–14. Dowdy DW, Houben R, Cohen T, Pai M, Cobelens F, Vassall A, et al. Impact and cost-effectiveness of current and future tuberculosis diagnostics: the contribution of modelling. Int J Tuberc Lung Dis. 2014;18:1012–8. Houben RMGJ, Menzies NA, Sumner T, Huynh GH, Arinaminpathy N, Goldhaber-Fiebert JD, et al. Feasibility of achieving the 2025 WHO global tuberculosis targets in South Africa, China, and India: a combined analysis of 11 mathematical models. Lancet Glob Health. 2016;4:e806–15. Blaser N, Zahnd C, Hermans S, Salazar-Vizcaya L, Estill J, Morrow C, et al. Tuberculosis in Cape Town: an age-structured transmission model. Epidemics. 2016;14:54–61. Johnstone-Robertson S, Lawn SD, Welte A, Bekker L-G, Wood R. Tuberculosis in a South African prison: a transmission modelling analysis. S Afr Med J. 2011;101:809–13. Wood R, Johnstone-Robertson S, Uys P, Hargrove J, Middelkoop K, Lawn SD, et al. Tuberculosis transmission to young children in a South African community: modeling household and community infection risks. Clin Infect Dis. 2010;51:401–8. Sumner T, Houben RMGJ, Rangaka MX, Maartens G, Boulle A, Wilkinson RJ, et al. Post-treatment effect of isoniazid preventive therapy on tuberculosis incidence in HIV-infected individuals on antiretroviral therapy. AIDS. 2016;30:1279–86. Vynnycky E, Sumner T, Fielding KL, Lewis JJ, Cox AP, Hayes RJ, et al. Tuberculosis control in South African gold mines: mathematical modeling of a trial of community-wide isoniazid preventive therapy. Am J Epidemiol. 2015;181:619–32. Fielding KL, Grant AD, Hayes RJ, Chaisson RE, Corbett EL, Churchyard GJ. Thibela TB: design and methods of a cluster randomised trial of the effect of community-wide isoniazid preventive therapy on tuberculosis amongst gold miners in South Africa. Contemp Clin Trials. 2011;32:382–92. Stats SA. Census 2011. Statistics South Africa, Pretoria. 2011. Statistics SA. Quarterly Labour Force Survey, Quarter 4, 2013 [Internet]. 2014. http://www.statssa.gov.za/publications/P0211/P02114thQuarter2013.pdf. Chamber of Mines SA. Facts and Figures 2016 [Internet]. 2017. http://www.chamberofmines.org.za/industry-news/publications/facts-and-figures/send/17-facts-and-figures/442-facts-and-figures-2016. World Health Organization. Global tuberculosis report 2016. Geneva: World Health Organization; 2016. http://apps.who.int/iris/bitstream/10665/250441/1/9789241565394-eng.pdf Churchyard GJ, Fielding KL, Lewis JJ, Coetzee L, Corbett EL, Godfrey-Faussett P, et al. A trial of mass isoniazid preventive therapy for tuberculosis control. N Engl J Med. 2014;370:301–10. Johnson LF, Dorrington RE, Moolla H. Progress towards the 2020 targets for HIV diagnosis and antiretroviral treatment in South Africa. South Afr J HIV Med. 2017;18:8. Statistics SA. Domestic Tourism Survey 2012 [Internet]. 2013. http://www.statssa.gov.za/publications/P03521/P035212012.pdf. Anderson RM, May RM, Anderson B. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1992. Begon M, Bennett M, Bowers RG, French NP, Hazel SM, Turner J. A clarification of transmission terms in host-microparasite models: numbers, densities and areas. Epidemiol Infect. 2002;129:147–53. Gammaitoni L, Nucci MC. Using a mathematical model to evaluate the efficacy of TB control measures. Emerg Infect Dis. 1997;3:335–42. 
Huynh GH, Klein DJ, Chin DP, Wagner BG, Eckhoff PA, Liu R, et al. Tuberculosis control strategies to reach the 2035 global targets in China: the role of changing demographics and reactivation disease. BMC Med. 2015;13:88. Shisana O, Rehle T, Simbayi LC, Zuma K, Jooste S, Zungu N, et al. South African national HIV prevalence, incidence and behaviour survey, 2012. Cape Town: HSRC Press; 2014. http://repository.hsrc.ac.za/handle/20.500.11910/2490 Bershteyn A, Klein DJ, Wenger E, Eckhoff PA. Description of the EMOD-HIV Model v0.7 [Internet]. arXiv [q-bio.QM]. 2012. http://arxiv.org/abs/1206.3720. Klein DJ, Eckhoff PA, Bershteyn A. Targeting HIV services to male migrant workers in southern Africa would not reverse generalized HIV epidemics in their home communities: a mathematical modeling analysis. Int Health. 2015;7:107–13. Eaton JW, Bacaër N, Bershteyn A, Cambiano V, Cori A, Dorrington RE, et al. Assessment of epidemic projections using recent HIV survey data in South Africa: a validation analysis of ten mathematical models of HIV epidemiology in the antiretroviral therapy era. Lancet Glob Health. 2015;3:e598–608. Nelson G, Girdler-Brown B, Ndlovu N, Murray J. Three decades of silicosis: disease trends at autopsy in South African gold miners. Environ Health Perspect. 2010;118:421–6. Churchyard GJ, Ehrlich R, te Water Naude JM, Pemba L, Dekker K, Vermeijs M, et al. Silicosis prevalence and exposure-response relations in South African goldminers. Occup Environ Med. 2004;61:811–6. Murray J, Kielkowski D, Reid P. Occupational disease trends in black South African gold miners. An autopsy-based study. Am J Respir Crit Care Med. 1996;153:706–10. Hens N, Shkedy Z, Aerts M, Faes C, Van Damme P, Beutels P. Who acquires infection from whom? The traditional approach. Modeling infectious disease parameters based on serological and social contact data. New York: Springer; 2012. p. 219–32. Corbett EL, Watt CJ, Walker N, Maher D, Williams BG, Raviglione MC, et al. The growing burden of tuberculosis: global trends and interactions with the HIV epidemic. Arch Intern Med. 2003;163:1009–21. Zumla A, George A, Sharma V, Herbert RHN, Oxley A, Oliver M. The WHO 2014 global tuberculosis report—further to go. Lancet Glob Health. 2015;3:e10–2. Raftery AE, Bao L. Estimating and projecting trends in HIV/AIDS generalized epidemics using incremental mixture importance sampling. Biometrics. 2010;66:1162–73. Mangtani P, Abubakar I, Ariti C, Beynon R, Pimpin L, Fine PEM, et al. Protection by BCG vaccine against tuberculosis: a systematic review of randomized controlled trials. Clin Infect Dis. 2014;58:470–80. Shanaube K, Sismanidis C, Ayles H, Beyers N, Schaap A, Lawrence K-A, et al. Annual risk of tuberculous infection using different methods in communities with a high prevalence of TB and HIV in Zambia and South Africa. PLoS One. 2009;4:e7749. Kritzinger FE, den Boon S, Verver S, Enarson DA, Lombard CJ, Borgdorff MW, et al. No decrease in annual risk of tuberculosis infection in endemic area in Cape Town, South Africa. Tropical Med Int Health. 2009;14:136–42. Wood R, Liang H, Wu H, Middelkoop K, Oni T, Rangaka MX, et al. Changing prevalence of tuberculosis infection with increasing age in high-burden townships in South Africa. Int J Tuberc Lung Dis. 2010;14:406–12. Ncayiyana JR, Bassett J, West N, Westreich D, Musenge E, Emch M, et al. Prevalence of latent tuberculosis infection and predictive factors in an urban informal settlement in Johannesburg, South Africa: a cross-sectional study. BMC Infect Dis. 2016;16:661.
World Health Organization. South Africa statistics summary (2002 - present). Global Health Observatory country views [Internet]. http://apps.who.int/gho/data/node.country.country-ZAF?lang=en. Accessed 11 Sep 2017. Corbett EL, Charalambous S, Fielding K, Clayton T, Hayes RJ, De Cock KM, et al. Stable incidence rates of tuberculosis (TB) among human immunodeficiency virus (HIV)-negative South African gold miners during a decade of epidemic HIV-associated TB. J Infect Dis. 2003;188:1156–63. Middelkoop K, Bekker L-G, Myer L, Dawson R, Wood R. Rates of tuberculosis transmission to children and adolescents in a community with a high prevalence of HIV infection among adults. Clin Infect Dis. 2008;47:349–55. Godfrey-Faussett P, Sonnenberg P, Shearer SC, Bruce MC, Mee C, Morris L, et al. Tuberculosis control and molecular epidemiology in a South African gold-mining community. Lancet. 2000;356:1066–71. Richardson M, van Lill SWP, van der Spuy GD, Munch Z, Booysen CN, Beyers N, et al. Historic and recent events contribute to the disease dynamics of Beijing-like Mycobacterium tuberculosis isolates in a high incidence region. Int J Tuberc Lung Dis. 2002;6:1001–11. Verver S, Warren RM, Munch Z, Vynnycky E, van Helden PD, Richardson M, et al. Transmission of tuberculosis in a high incidence urban community in South Africa. Int J Epidemiol. 2004;33:351–7. Mathema B, Lewis JJ, Connors J, Chihota VN, Shashkina E, van der Meulen M, et al. Molecular epidemiology of Mycobacterium tuberculosis among South African gold miners. Ann Am Thorac Soc. 2015;12:12–20. Stuckler D, Basu S, McKee M, Lurie M. Mining and risk of tuberculosis in sub-Saharan Africa. Am J Public Health. 2011;101:524–30. Statistics SA. Mineral accounts for South Africa, 1980–2001. Pretoria: South Africa; 2002. World Health Organization. Global Tuberculosis Report 2013. Geneva: World Health Organization; 2013. Dowdy DW, Golub JE, Chaisson RE, Saraceni V. Heterogeneity in tuberculosis transmission and the role of geographic hotspots in propagating epidemics. Proc Natl Acad Sci U S A. 2012;109:9557–62. Day C, Barron P, Massyn N, Padarath A, English R. District Health Barometer 2010/11. Durban: Health Systems Trust; 2011. McGlashan ND, Harington JS, Chelkowska E. Changes in the geographical and temporal patterns of cancer incidence among black gold miners working in South Africa, 1964–1996. Br J Cancer. 2003;88:1361–9. Andrews JR, Morrow C, Wood R. Modeling the role of public transportation in sustaining tuberculosis transmission in South Africa. Am J Epidemiol. 2013;177:556–61. Middelkoop K, Mathema B, Myer L, Shashkina E, Whitelaw A, Kaplan G, et al. Transmission of tuberculosis in a South African community with a high prevalence of HIV infection. J Infect Dis. 2015;211:53–61.

Acknowledgements

The authors thank Randall Packard and Anna Bershteyn for productive discussions and Bill and Melinda Gates for their support of the Institute for Disease Modeling. RGW is funded by the UK Medical Research Council (MRC) and the UK Department for International Development (DFID) under the MRC/DFID Concordat agreement that is also part of the EDCTP2 programme supported by the European Union (MR/P002404/1), the Bill & Melinda Gates Foundation (TB Modelling and Analysis Consortium: OPP1084276/OPP1135288, SA Modelling for Policy: OPP1110334, CORTIS: OPP1137034, Vaccines: OPP1160830) and UNITAID (4214-LSHTM-Sept15; PO 8477-0-600). This project was funded by a South Africa TB Think Tank grant from the Bill & Melinda Gates Foundation (OPP1110334).
Availability of data and materials

The static risk model (Excel file) and the individual-based model (source code, executable file, configuration files) are available from the authors' GitHub repository (https://github.com/SCTX/mining_contribution) or upon request.

Author affiliations

1. Institute for Disease Modeling, Bellevue, Washington, USA
2. Aurum Institute, Johannesburg, South Africa
3. School of Public Health, Faculty of Health Sciences, University of Witwatersrand, Johannesburg, South Africa
4. Foundation for Innovative New Diagnostics, Geneva, Switzerland
5. Department of Infectious Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK
6. Department of Clinical Research, London School of Hygiene and Tropical Medicine, London, UK
7. Africa Health Research Institute, School of Nursing and Public Health, University of KwaZulu-Natal, Durban, South Africa
8. TB Modelling Group, CMMID, TB Centre, London School of Hygiene and Tropical Medicine, London, UK
9. Advancing Treatment and Care for TB/HIV, South African Medical Research Council, Johannesburg, South Africa

Authors' contributions

STC and BGW analyzed the data, created the models, and drafted the manuscript. VNC, KLF, and ADG provided primary data from the Thibela TB study. RGW, GJC, RMH, and PAE conceptualized the study and provided guidance throughout the study. All authors contributed to revisions of the manuscript. All authors read and approved the final manuscript.

Correspondence to Stewart T. Chang.

Additional file 1: Supplemental materials, figures, and tables. (DOCX 2293 kb)
CommonCrawl
\begin{definition}[Definition:Uniformly Convex Normed Vector Space] Let $\struct {X, \norm \cdot}$ be a normed vector space. We say that $X$ is '''uniformly convex''' {{iff}}: :for every $\epsilon > 0$ there exists $\delta > 0$ such that: ::whenever $x, y \in X$ have $\norm x = \norm y = 1$ and $\norm {x - y} > \epsilon$, we have: :::$\ds \norm {\frac {x + y} 2} < 1 - \delta$ \end{definition}
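A standard illustration (an editorial addition, not part of the ProofWiki entry): every Hilbert space is uniformly convex. If $\norm x = \norm y = 1$ and $\norm {x - y} > \epsilon$, then the parallelogram law gives:
:$\ds \norm {\frac {x + y} 2}^2 = \frac {2 \norm x^2 + 2 \norm y^2 - \norm {x - y}^2} 4 < 1 - \frac {\epsilon^2} 4$
so $\delta = 1 - \sqrt {1 - \epsilon^2/4} > 0$ satisfies the definition.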
Precision Resistor - 1K Ohm 0.1% (Bag of 4) ID: 3175_0

Use these precision resistors to interface an RTD with the PhidgetBridge with maximum accuracy.

The 3175 RTD Resistor Kit includes four 1.00 kilohm resistors. These precision resistors were used to interface platinum RTDs to the 1046 PhidgetBridge. This configuration has been obsoleted by the TMP1200 - RTD Phidget, which can interface with RTDs without the need for external resistors. If you still need precision resistors, you can find them at electronic component suppliers such as DigiKey.

Platinum RTDs (Resistive Thermal Devices) are used to make very precise temperature measurements. RTDs are very accurate, and will measure temperatures up to 500 degrees Celsius. The electrical resistance of the RTD changes predictably with temperature, and RTDs are the most accurate commonly available temperature sensors. Measuring the resistance of an RTD requires accurate components all through the system - otherwise there is no point in paying for an RTD. The resistors in the 3175 RTD Resistor Kit have a worst case error of 0.1% - translating to a typical temperature error of 0.05 degrees Celsius. The resistors also change their resistance very little with temperature - ambient temperature variation is a significant source of error for thermocouples. RTDs with a well designed data acquisition system will not be subject to these temperature variation errors.

Read the RTD Interface Kit user guide to get detailed instructions on how to construct and test the bridge.
Specifications:
- Resistance Value: 1 kΩ
- Resistance Error Max: 0.1 %
- Canadian HS Export Code: 8533.10.00
- American HTS Import Code: 8533.10.00.60
- Country of Origin: US (United States)

Wiring the resistors to your RTD allows the 1046 PhidgetBridge to convert the resistances into a voltage, which it then measures. The PhidgetBridge is by far the most precise Phidget device for measuring voltage, and it also cancels the errors resulting from USB voltage variation.

Measuring Resistive Thermal Devices (RTD)

Using a Wheatstone Bridge

This diagram shows how to connect the RTD to a Wheatstone bridge, and then to a PhidgetBridge 4-Input. A Wheatstone bridge is the classic method of measuring unknown resistances, and requires three resistors of known values. It uses the current in each leg of the bridge to create a voltage differential between the two voltage dividers. To determine the resistance of the RTD, the following formula can be used:

$$R_{RTD}=\frac{R_{3}\left[R_{2}+V_{B}\left(R_{1}+R_{2}\right)\right]}{R_{1}-\left(R_{1}+R_{2}\right)V_{B}}$$

where $V_{B}$ is the Bridge Value given by the PhidgetBridge (in mV/V), and $R_{1}$, $R_{2}$ and $R_{3}$ are the resistances of the known resistors.
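The following minimal Python sketch (an editorial example, not Phidgets sample code) evaluates the Wheatstone-bridge formula above. The function name is made up, and it assumes the mV/V reading must be divided by 1000 to obtain a dimensionless ratio before it is substituted for V_B.

```python
def rtd_resistance_wheatstone(vb_mv_per_v, r1=1000.0, r2=1000.0, r3=1000.0):
    """Resistance of the RTD from the Wheatstone-bridge formula above.

    vb_mv_per_v -- Bridge Value reported by the PhidgetBridge, in mV/V
    r1, r2, r3  -- known resistors in ohms (1 kOhm 0.1% parts from this kit)
    """
    vb = vb_mv_per_v / 1000.0  # assumption: convert the mV/V reading to a pure ratio
    return r3 * (r2 + vb * (r1 + r2)) / (r1 - (r1 + r2) * vb)

# A balanced bridge (vb = 0) gives R_RTD = R3 * R2 / R1 = 1000 ohms.
print(rtd_resistance_wheatstone(0.0))
```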
Using a Voltage Divider

The alternate method requires only two resistors. This reduces the amount of error that can be introduced into the system due to resistor tolerances. A voltage is applied to the two resistors and the RTD in series, and the voltage drop across the RTD is measured. Using the voltage drop and the values of the two resistors, the resistance of the RTD can be determined. This diagram illustrates how to connect the RTD to the PhidgetBridge with a voltage divider circuit.

$$R_{RTD}=\left(R_{1}+R_{2}\right)\times\frac{V_{B}}{1-V_{B}}$$

where $V_{B}$ is the Bridge Value given by the PhidgetBridge (in mV/V), and $R_{1}$ and $R_{2}$ are the resistances of the known resistors.

Getting Higher Accuracy

In order to get the highest accuracy from the RTD, consider the following:
- Use resistors with a tight tolerance. There will be less variability in the manufacturing of 0.1% resistors compared to 1% resistors.
- Measure the known resistors with an ohmmeter. The more accurate the measurements of the known resistances, the more accurate the computed RTD resistance.
- Use a moving average when obtaining the Bridge Value to reduce the amount of noise in the measured signal.
- Estimate or measure the resistance of the +5V and GND wires between the RTD and the 1046 PhidgetBridge, and add this resistance to the two known resistors.
- Turn off the power to the RTD (by disabling the channel on the PhidgetBridge) between measurements to reduce self-heating of the RTD. Using higher resistor values (> 1 kΩ) also reduces self-heating, but somewhat reduces the resolution of the measurement; we recommend 1 kΩ resistors as a reasonable trade-off.

Using the PhidgetBridge Code Sample on Windows

The PhidgetBridge Bridge-full application will allow you to verify that your PhidgetBridge is working and that your wiring is functional. Please check the 1046 User Guide for instructions on launching the application. The PhidgetBridge has the ability to amplify the measured signal - it was built to measure extremely small signals. Amplification is not necessary with RTDs, so we recommend leaving the gain set to 1 unless you want higher precision at low temperatures at the cost of saturating before reaching higher temperatures. If the amplifier is in danger of saturating (reaching its limit), an Overrange error will be thrown. When using the Bridge-full application, remember to check the Enabled box to power up the bridge and start measurements.

Applying the Formula

There are several standards for RTDs. Common RTDs are built from platinum, the most common models being Pt100 and Pt1000; the 100 or 1000 refers to the resistance of the RTD at 0 °C. We have calculated formulas for the Pt100 and Pt1000 standards to convert the Bridge Value (in mV/V) directly into a temperature:

$$T_{Pt}=\frac{4.7503\times 10^{7}}{R_{0}^{2}}\times\left(\frac{V_{B}}{1000-V_{B}}\right)^{2}+\frac{4.6156\times 10^{5}}{R_{0}}\times\left(\frac{V_{B}}{1000-V_{B}}\right)-242.615$$

where $V_{B}$ is the Bridge Value given by the PhidgetBridge (in mV/V) and $R_{0}$ is the resistance of the RTD at 0 °C (100 for Pt100 and 1000 for Pt1000).

Using the Resistor Kit with non-standard RTDs and thermistors

Some RTDs are not standardized, so we cannot provide a formula to convert the Bridge Value to temperature. The following formula calculates the resistance of the RTD or thermistor from the bridge value - a good start for computing the temperature. To calculate the temperature, check the manufacturer's data sheet for formulas or tables for converting resistance to temperature.

$$R_{RTD}=2000\times\frac{V_{B}}{1-V_{B}}$$
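Here is a companion Python sketch (again an editorial example, not Phidgets sample code) applying the Pt100/Pt1000 formula above; the example reading of 47.62 mV/V is a made-up value chosen so that a Pt100 with two 1 kΩ divider resistors sits near 0 °C.

```python
def pt_temperature(vb_mv_per_v, r0=100.0):
    """Temperature (Celsius) from a Bridge Value in mV/V, using the
    quadratic Pt100/Pt1000 fit quoted above (r0 = 100 or 1000)."""
    x = vb_mv_per_v / (1000.0 - vb_mv_per_v)
    return 4.7503e7 / r0 ** 2 * x ** 2 + 4.6156e5 / r0 * x - 242.615

# For a Pt100 at 0 degrees C: R = 100 ohms, so V_B/(1000 - V_B) = 100/2000,
# i.e. V_B is roughly 47.62 mV/V; the fit then returns approximately 0 degrees C.
print(pt_temperature(47.62))
```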
Self-heating of RTDs

By passing current through the RTD, it will heat up, distorting your temperature measurement. To determine the power dissipated as heat in the RTD, use the following formula, where $R_{RTD}$ is the resistance of your RTD:

$$P_{RTD}=\left(\frac{5}{2000+R_{RTD}}\right)^{2}\times R_{RTD}$$

The RTD manufacturer will often specify the temperature increase of the RTD as a function of power (watts); this power is what the equation above calculates. The temperature increase will depend on whether the RTD is attached to a larger object that sinks the heat away, and on whether there is air movement over the RTD. A simple way to reduce the effects of self-heating is to enable the bridge in software only during the measurement period, and disable it until the next measurement.

Using these resistors in a circuit with an RTD or another resistive sensor allows it to be accurately read by Wheatstone bridge interfaces. See the User Guide of the Bridge Interface for detail on how to build the circuit.

Compatible bridge interfaces:
- 1046_0B ($90.00) - controlled by USB (Mini-USB), 4 bridge inputs
- DAQ1500_0 ($30.00) - controlled by VINT, 2 bridge inputs

Here's a list of RTDs we have available:
- TMP4109_0 ($40.00)
A formulation of the Jacobi coefficients $c^l_j(\alpha, \beta)$ via Bell polynomials

Stuart Day; Ali Taheri (Department of Mathematics, University of Sussex)

Advances in Operator Theory, Volume 2, Issue 4, Autumn 2017, pp. 506-515. doi: 10.22034/aot.1705-1163. Received 13 May 2017; revised 27 July 2017; accepted 28 July 2017.

The Jacobi polynomials $(\mathscr{P}^{(\alpha, \beta)}_k: k\ge0, \alpha, \beta>-1)$ are deeply intertwined with the Laplacian on compact rank one symmetric spaces. They represent the spherical or zonal functions and as such constitute the main ingredients in describing the spectral measures and spectral projections associated with the Laplacian on these spaces. In this note we strengthen this connection by showing that a set of spectral and geometric quantities associated with the Jacobi operator fully describe the Maclaurin coefficients associated with the heat and other related Schwartzian kernels, and we present an explicit formulation of these quantities using the Bell polynomials.

Keywords: Jacobi polynomials; Laplace-Beltrami operators; heat kernel; Bell polynomials; rank one symmetric spaces
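For reference (an editorial note, not part of the abstract): the complete Bell polynomials that appear here are defined by the exponential generating identity

$$\exp\Big(\sum_{j\ge 1} x_j \frac{t^j}{j!}\Big) = \sum_{l\ge 0} B_l(x_1,\dots,x_l)\,\frac{t^l}{l!},$$

so that, for example, $B_1 = x_1$, $B_2 = x_1^2 + x_2$ and $B_3 = x_1^3 + 3x_1x_2 + x_3$.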
\begin{document} \title{Quantization of noncompact coverings} \setlength{\parindent}{0pt} \begin{center} \author{ {\textbf{Petr R. Ivankov*}\\ e-mail: * [email protected] } } \end{center} \noindent \paragraph{} The concept of quantization consists in replacing commutative quantities by noncommutative ones. In mathematical language, an algebra of continuous functions on a locally compact topological space is replaced with a noncommutative $C^*$-algebra. Some classical topological notions have noncommutative generalizations. This article is concerned with a generalization of coverings. \tableofcontents \section{Motivation. Preliminaries} \paragraph*{} The Gelfand-Na\u{\i}mark theorem \cite{arveson:c_alg_invt} states the correspondence between locally compact Hausdorff topological spaces and commutative $C^*$-algebras. \begin{theorem}\label{gelfand-naimark}\cite{arveson:c_alg_invt} (Gelfand-Na\u{\i}mark). Let $A$ be a commutative $C^*$-algebra and let $\mathcal{X}$ be the spectrum of $A$. There is the natural $*$-isomorphism $\gamma:A \to C_0(\mathcal{X})$. \end{theorem} So any (noncommutative) $C^*$-algebra may be regarded as a generalized (noncommutative) locally compact Hausdorff topological space. The following theorem yields a purely algebraic description of finite-fold coverings of compact spaces. \begin{theorem}\label{pavlov_troisky_thm}\cite{pavlov_troisky:cov} Suppose $\mathcal X$ and $\mathcal Y$ are compact Hausdorff connected spaces and $p :\mathcal Y \to \mathcal X$ is a continuous surjection. If $C(\mathcal Y )$ is a projective finitely generated Hilbert module over $C(\mathcal X)$ with respect to the action \begin{equation*} (f\xi)(y) = f(y)\xi(p(y)), ~ f \in C(\mathcal Y ), ~ \xi \in C(\mathcal X), \end{equation*} then $p$ is a finite-fold covering. \end{theorem} This article contains purely algebraic generalizations of the following topological objects: \begin{itemize} \item Coverings of noncompact spaces, \item Infinite coverings. \end{itemize} This article assumes elementary knowledge of the following subjects: \begin{enumerate} \item Set theory \cite{halmos:set}, \item Category theory \cite{spanier:at}, \item Algebraic topology \cite{spanier:at}, \item $C^*$-algebras, $C^*$-Hilbert modules \cite{blackadar:ko,pedersen:ca_aut}. \end{enumerate} The words "set", "family" and "collection" are synonyms. \break The following table contains special symbols. \newline \begin{tabular}{|c|c|} \hline Symbol & Meaning\\ \hline &\\ $\hat{A}$ & Spectrum of a $C^*$-algebra $A$ with the hull-kernel topology \\ & (or Jacobson topology)\\ $A_+$ & Cone of positive elements of a $C^*$-algebra, i.e. $A_+ = \left\{a\in A ~|~ a \ge 0\right\}$\\ $A^G$ & Algebra of $G$-invariants, i.e. $A^G = \left\{a\in A ~|~ ga=a, \forall g\in G\right\}$\\ $\mathrm{Aut}(A)$ & Group of *-automorphisms of a $C^*$-algebra $A$\\ $A''$ & Enveloping von Neumann algebra of $A$\\ $B(\mathcal{H})$ & Algebra of bounded operators on a Hilbert space $\mathcal{H}$\\ $\mathbb{C}$ (resp. $\mathbb{R}$) & Field of complex (resp. real) numbers \\ $C(\mathcal{X})$ & $C^*$-algebra of continuous complex valued \\ & functions on a compact space $\mathcal{X}$\\ $C_0(\mathcal{X})$ & $C^*$-algebra of continuous complex valued functions on a locally \\ & compact topological space $\mathcal{X}$ equal to $0$ at infinity\\ $C_c(\mathcal{X})$ & Algebra of continuous complex valued functions on a \\ & topological space $\mathcal{X}$ with compact support\\ $C_b(\mathcal{X})$ & $C^*$-algebra of bounded continuous complex valued \\ & functions on a locally compact topological space $\mathcal{X}$ \\ $G\left( \widetilde{\mathcal{X}}~ |~ \mathcal{X}\right) $ & Group of covering transformations of the covering $\widetilde{\mathcal{X}} \to \mathcal{X}$ \cite{spanier:at} \\ $\mathcal{H}$ & Hilbert space \\ $\mathcal{K}= \mathcal{K}\left(\mathcal{H} \right) $ & $C^*$-algebra of compact operators on the separable Hilbert space $\mathcal{H}$ \\ $K(A)$ & Pedersen ideal of a $C^*$-algebra $A$\\ $\varinjlim$ & Direct limit \\ $\varprojlim$ & Inverse limit \\ $M(A)$ & The multiplier algebra of a $C^*$-algebra $A$\\ $\mathbb{M}_n(A)$ & The $n \times n$ matrix algebra over a $C^*$-algebra $A$\\ $\mathbb{N}$ & The set of positive integer numbers\\ $\mathbb{N}^0$ & The set of nonnegative integer numbers\\ $U(A) \subset A $ & Group of unitary operators of the algebra $A$\\ $\mathbb{Z}$ & Ring of integers \\ $\mathbb{Z}_n$ & Ring of integers modulo $n$ \\ $\overline{k} \in \mathbb{Z}_n$ & An element in $\mathbb{Z}_n$ represented by $k \in \mathbb{Z}$ \\ $X \backslash A$ & Difference of sets $X \backslash A= \{x \in X ~|~ x\notin A\}$\\ $|X|$ & Cardinal number of a finite set $X$\\ $\left[x\right]$ & The range projection of an element $x$ of a von Neumann algebra\\ $f|_{A'}$& Restriction of a map $f: A\to B$ to $A'\subset A$, i.e. $f|_{A'}: A' \to B$\\ \hline \end{tabular} \break \subsection{Prototype. Inverse limits of coverings in topology}\label{inf_to} \subsubsection{Topological construction} \paragraph*{} This subsection is concerned with a topological construction of the inverse limit in the category of coverings. \begin{definition}\label{comm_cov_pr_defn}\cite{spanier:at} Let $\widetilde{\pi}: \widetilde{\mathcal{X}} \to \mathcal{X}$ be a continuous map. An open subset $\mathcal{U} \subset \mathcal{X}$ is said to be {\it evenly covered} by $\widetilde{\pi}$ if $\widetilde{\pi}^{-1}(\mathcal U)$ is the disjoint union of open subsets of $\widetilde{\mathcal{X}}$, each of which is mapped homeomorphically onto $\mathcal{U}$ by $\widetilde{\pi}$. A continuous map $\widetilde{\pi}: \widetilde{\mathcal{X}} \to \mathcal{X}$ is called a {\it covering projection} if each point $x \in \mathcal{X}$ has an open neighborhood evenly covered by $\widetilde{\pi}$. $\widetilde{\mathcal{X}}$ is called the {\it covering space} and $\mathcal{X}$ the {\it base space} of the covering. \end{definition} \begin{definition}\cite{spanier:at} A fibration $p: \mathcal{\widetilde{X}} \to \mathcal{X}$ with unique path lifting is said to be {\it regular} if, given any closed path $\omega$ in $\mathcal{X}$, either every lifting of $\omega$ is closed or none is closed. \end{definition} \begin{definition}\cite{spanier:at} A topological space $\mathcal X$ is said to be \textit{locally path-connected} if the path components of open sets are open. \end{definition} Denote by $\pi_1$ the fundamental group functor \cite{spanier:at}.
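The following standard example, added here as an editorial illustration (it is not part of the cited material), shows all of the above notions at once.
\begin{example} The exponential map $\pi: \mathbb{R} \to S^1$, $t \mapsto e^{2\pi i t}$, is a covering projection: every proper open arc $\mathcal U \subset S^1$ is evenly covered, since $\pi^{-1}\left(\mathcal U\right)$ is a disjoint union of open intervals, each mapped homeomorphically onto $\mathcal U$. The space $\mathbb{R}$ is locally path-connected, the covering is regular, and the corresponding group of covering transformations is $G\left(\mathbb{R}~|~S^1\right) \cong \mathbb{Z}$, acting by the translations $t \mapsto t + n$. \end{example}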
\begin{theorem}\label{locally_path_lem}\cite{spanier:at} Let $p: \widetilde{\mathcal X} \to \mathcal X$ be a fibration with unique path lifting and assume that the nonempty space $\widetilde{\mathcal X}$ is locally path-connected. Then $p$ is regular if and only if for some $\widetilde{x}_0 \in \widetilde{\mathcal X}$, $\pi_1\left(p\right)\pi_1\left(\widetilde{\mathcal X}, \widetilde{x}_0\right)$ is a normal subgroup of $\pi_1\left(\mathcal X, p\left(\widetilde{x}_0\right)\right)$. \end{theorem} \begin{definition}\label{cov_proj_cov_grp}\cite{spanier:at} Let $p: \mathcal{\widetilde{X}} \to \mathcal{X}$ be a covering projection. A \textit{self-equivalence} is a homeomorphism $f:\mathcal{\widetilde{X}}\to\mathcal{\widetilde{X}}$ such that $p \circ f = p$. The group of such homeomorphisms is said to be the {\it group of covering transformations} of $p$ or the {\it covering group}. Denote this group by $G\left( \mathcal{\widetilde{X}}~|~\mathcal{X}\right)$. \end{definition} \begin{proposition}\label{reg_cov_prop}\cite{spanier:at} If $p: \mathcal{\widetilde{X}} \to \mathcal{X}$ is a regular covering projection and $\mathcal{\widetilde{X}}$ is connected and locally path connected, then $\mathcal{X}$ is homeomorphic to the space of orbits of $G\left( \mathcal{\widetilde{X}}~|~\mathcal{X}\right)$, i.e. $\mathcal{X} \approx \mathcal{\widetilde{X}}/G\left( \mathcal{\widetilde{X}}~|~\mathcal{X}\right) $. So $p$ is a principal bundle. \end{proposition} \begin{corollary}\label{top_cov_from_pi1_cor}\cite{spanier:at} Let $p: \widetilde{\mathcal X} \to \mathcal X$ be a fibration with unique path lifting. If $ \widetilde{\mathcal X}$ is connected and locally path-connected and $\widetilde{x}_0 \in \widetilde{\mathcal X}$ then $p$ is regular if and only if $G\left(\widetilde{\mathcal X}~|~{\mathcal X} \right)$ acts transitively on each fiber of $p$, in which case $$ \psi: G\left(\widetilde{\mathcal X}~|~{\mathcal X} \right) \approx \pi_1\left(\mathcal X, p\left( \widetilde{x}_0\right) \right) / \pi_1\left( p\right)\pi_1\left(\widetilde{\mathcal X}, \widetilde{x}_0 \right). $$ \end{corollary} \begin{remark} The above results are quoted from \cite{spanier:at}. Below, the term \textit{covering projection} is replaced with \textit{covering}. \end{remark} \begin{definition}\label{top_comp_defn}\cite{munkres:topology} A \textit{compactification} of a space $\mathcal X$ is a compact Hausdorff space $\mathcal Y$ containing $\mathcal X$ as a subspace such that the closure $\overline{\mathcal X}$ of $\mathcal X$ is $\mathcal Y$, i.e. $\overline{\mathcal X} = \mathcal Y$. \end{definition} The algebraic construction requires the following definition. \begin{definition}\label{top_cov_comp_defn} A covering $\pi: \widetilde{ \mathcal X } \to \mathcal X$ is said to be a \textit{covering with compactification} if there are compactifications ${ \mathcal X } \hookrightarrow { \mathcal Y }$ and $\widetilde{ \mathcal X } \hookrightarrow \widetilde{ \mathcal Y }$ such that: \begin{itemize} \item There is a covering $\overline{\pi}:\widetilde{ \mathcal Y }\to { \mathcal Y }$, \item The covering $\pi$ is the restriction of $\overline{\pi}$, i.e. $\pi = \overline{\pi}|_{\widetilde{ \mathcal X }}$. \end{itemize} \end{definition} \begin{example} Let $g: S^1 \to S^1$ be an $n$-fold covering of a circle, and let $\mathcal X = \widetilde{\mathcal X} = S^1 \times \left[0,1\right)$. The map \begin{equation*} \begin{split} \pi: \widetilde{ \mathcal X } \to \mathcal X,\\ \pi = g \times \mathrm{Id}_{\left[0,1\right)} \end{split} \end{equation*} is an $n$-fold covering. If $\mathcal Y = \widetilde{\mathcal Y} = S^1 \times \left[0,1\right]$ then the compactification $\left[0,1\right) \hookrightarrow \left[0,1\right]$ induces compactifications $\mathcal X \hookrightarrow\mathcal Y$, $\widetilde{ \mathcal X } \hookrightarrow \widetilde{ \mathcal Y }$. The map \begin{equation*} \begin{split} \overline{\pi}: \widetilde{ \mathcal Y } \to \mathcal Y,\\ \overline{\pi} = g \times \mathrm{Id}_{\left[0,1\right]} \end{split} \end{equation*} is a covering such that $\overline{\pi}|_{\widetilde{ \mathcal X }}=\pi$. So if $n > 1$ then $\pi$ is a nontrivial covering with compactification. \end{example} \begin{example} Let $\mathcal X = \mathbb{C} \backslash \{0\}$ be the complex plane punctured at $0$, parametrized by the complex variable $z$. Let $\mathcal X \hookrightarrow\mathcal Y$ be any compactification. If both $\left\{z'_n \in \mathcal X\right\}_{n \in \mathbb{N}}$ and $\left\{z''_n \in \mathcal X\right\}_{n \in \mathbb{N}}$ are Cauchy sequences such that $\lim_{n \to \infty}\left|z'_n\right|=\lim_{n \to \infty}\left|z''_n\right| = 0$, then from $\lim_{n \to \infty}\left|z'_n-z''_n\right|= 0$ it follows that \begin{equation}\label{x_0_eqn} x_0 = \lim_{n \to \infty} z'_n = \lim_{n \to \infty} z''_n \in \mathcal Y. \end{equation} If $\widetilde{ \mathcal X } = \mathcal X$ then for any $n \in \mathbb{N}$ there is a finite-fold covering \begin{equation*} \begin{split} \pi: \widetilde{ \mathcal X } \to \mathcal X,\\ z \mapsto z^n. \end{split} \end{equation*} If both $\mathcal X \hookrightarrow\mathcal Y$, $\widetilde{ \mathcal X } \hookrightarrow \widetilde{ \mathcal Y }$ are compactifications, and $\overline{\pi}: \widetilde{ \mathcal Y } \to \mathcal Y$ is a covering such that $\overline{\pi}|_{\widetilde{ \mathcal X }} = \pi$, then from \eqref{x_0_eqn} it follows that $\overline{\pi}^{-1}\left(x_0 \right)= \left\{\widetilde{x}_0\right\}$ where $\widetilde{x}_0$ is the unique point such that the following conditions hold: \begin{equation*} \begin{split} \widetilde{x}_0 = \lim_{n \to \infty} \widetilde{z}_n \in \widetilde{ \mathcal Y },\\ \lim_{n \to \infty}\left|\widetilde{z}_n\right|= 0. \end{split} \end{equation*} Hence $\left|\overline{\pi}^{-1}\left(x_0 \right) \right|=1$. However $\overline{\pi}$ is an $n$-fold covering, so if $n >1$ then $\left|\overline{\pi}^{-1}\left(x_0 \right) \right|=n>1$. This contradicts $\left|\overline{\pi}^{-1}\left(x_0 \right) \right|=1$, and from the contradiction it follows that for any $n > 1$ the map $\pi$ is not a covering with compactification. \end{example} \begin{definition}\label{top_sec_defn} A sequence of regular finite-fold coverings \begin{equation*} \mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ... \end{equation*} is said to be a \textit{(topological) finite covering sequence} if the following conditions hold: \begin{itemize} \item The space $\mathcal{X}_n$ is a second-countable \cite{munkres:topology} locally compact connected Hausdorff space for any $n \in \mathbb{N}^0$, \item If $k < l < m$ are any nonnegative integer numbers then there is the natural exact sequence $$ \{e\}\to G\left(\mathcal X_m~|~\mathcal X_l\right) \to G\left(\mathcal X_m~|~\mathcal X_k\right)\to G\left(\mathcal X_l~|~\mathcal X_k\right)\to \{e\}.
$$ \end{itemize} For any finite covering sequence we will use the following notation \begin{equation*} \mathfrak{S} = \left\{\mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\}= \left\{ \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\},~~\mathfrak{S} \in \mathfrak{FinTop}. \end{equation*} \end{definition} \begin{example} Let $ \mathfrak{S} = \left\{ \mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ... \right\}$ be a sequence of locally compact connected Hausdorff spaces and finite-fold regular coverings such that $\mathcal X_n$ is locally path-connected for any $n \in \mathbb{N}$. It follows from Theorem \ref{locally_path_lem} that if $p > q$ and $f_{pq}:\mathcal X_p \to \mathcal X_q$ then $\pi_1\left(f_{pq}\right)\pi_1\left(\mathcal X_p, x_0\right)$ is a normal subgroup of $\pi_1\left(\mathcal X_q, f_{pq}\left(x_0\right) \right)$. Moreover, from Corollary \ref{top_cov_from_pi1_cor} it follows that $$ G\left(\mathcal X_p~|~\mathcal X_q\right) \approx \pi_1\left(\mathcal X_q, f_{pq}\left(x_0\right)\right) / \pi_1\left(f_{pq}\right)\pi_1\left(\mathcal X_p, x_0\right). $$ If $k < l < m$ then the following sequence \begin{equation*} \begin{split} \{e\}\to \pi_1\left(\mathcal X_l, f_{ml}\left(x_0\right) \right)/ \pi_1\left(f_{ml}\right)\pi_1\left(\mathcal X_m, x_0\right) \to \\ \to \pi_1\left(\mathcal X_k, f_{mk}\left(x_0\right)\right) / \pi_1\left(f_{mk}\right)\pi_1\left(\mathcal X_m, x_0\right)\to \\ \to \pi_1\left(\mathcal X_k, f_{mk}\left( x_0\right) \right)/ \pi_1\left(f_{lk}\right)\pi_1\left(\mathcal X_l, f_{ml}\left( x_0\right) \right) \to \{e\} \end{split} \end{equation*} is exact. The above sequence is equivalent to the sequence $$ \{e\}\to G\left(\mathcal X_m~|~\mathcal X_l\right) \to G\left(\mathcal X_m~|~\mathcal X_k\right)\to G\left(\mathcal X_l~|~\mathcal X_k\right)\to \{e\} $$ which is also exact. Thus $\mathfrak{S} \in \mathfrak{FinTop}$. \end{example} \begin{definition}\label{top_cov_trans_defn} Let $\left\{\mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\} \in \mathfrak{FinTop}$, and let $\widehat{\mathcal{X}} = \varprojlim \mathcal{X}_n$ be the inverse limit in the category of topological spaces and continuous maps (cf. \cite{spanier:at}). If $\widehat{\pi}_0: \widehat{\mathcal{X}} \to \mathcal{X}_0$ is the natural continuous map then a homeomorphism $g$ of the space $\widehat{\mathcal{X}}$ is said to be a \textit{covering transformation} if the following condition holds $$ \widehat{\pi}_0 = \widehat{\pi}_0 \circ g. $$ The group $\widehat{G}$ of such homeomorphisms is said to be the \textit{group of covering transformations} of $\mathfrak S$. Denote $G\left(\widehat{\mathcal{X}}~|~\mathcal X \right)\stackrel{\text{def}}{=}\widehat{G}$. \end{definition} \begin{lemma}\label{top_surj_group_lem} Let $\left\{\mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\} \in \mathfrak{FinTop}$, and let $\widehat{\mathcal{X}} = \varprojlim \mathcal{X}_n$ be the inverse limit in the category of topological spaces and continuous maps. There is the natural group isomorphism $G\left(\widehat{\mathcal{X}}~|~\mathcal X \right) \cong \varprojlim G\left({\mathcal{X}}_n~|~\mathcal X \right)$. For any $n \in \mathbb{N}$ there is the natural surjective homomorphism $h_n:G\left(\widehat{\mathcal{X}}~|~\mathcal X \right) \to G\left(\mathcal{X}_n~|~\mathcal X \right)$, and $\bigcap_{n \in \mathbb{N}} \ker h_n$ is a trivial group. \end{lemma} \begin{proof} For any $n \in \mathbb{N}$ there is the natural continuous map $\widehat{\pi}_n:\widehat{\mathcal{X}} \to \mathcal{X}_n$. Let $x_0 \in \mathcal{X}_0$ and let $\widehat{x}_0 \in \widehat{\mathcal{X}}$ be such that $\widehat{\pi}_0\left( \widehat{x}_0\right) = x_0$. Let $\widehat{x}' \in \widehat{\mathcal{X}}$ be such that $\widehat{\pi}_0\left( \widehat{x}'\right)=x_0$. If $x'_n = \widehat{\pi}_n\left(\widehat{x}' \right)$ and $x_{n} = \widehat{\pi}_n\left(\widehat{x}_0 \right)$ then $\pi_n\left(x_{n} \right)=\pi_n\left(x'_{n} \right)$, where $\pi_n : \mathcal X_n \to \mathcal X$ is the natural covering. Since $\pi_n$ is regular, for any $n \in \mathbb{N}$ there is the unique $g_n \in G\left( \mathcal{X}_n~|~\mathcal{X}\right)$ such that $x'_n = g_n x_{n}$. As a result there is a sequence $\left\{g_n \in G\left( \mathcal{X}_n~|~\mathcal{X}\right)\right\}_{n \in \mathbb{N}}$ which satisfies the following condition \begin{equation*} g_m \circ \pi^n_m = \pi^n_m \circ g_n \end{equation*} where $n > m$ and $\pi^n_m : \mathcal X_n \to \mathcal X_m$ is the natural covering. The sequence $\left\{g_n \right\}$ naturally defines an element $\widehat{g} \in \varprojlim G\left(\mathcal X_n~|~\mathcal X \right)$. Let us define a homeomorphism $\varphi_{\widehat{g}}: \widehat{\mathcal{X}} \to \widehat{\mathcal{X}}$ by the following construction. If $\widehat{x}''\in \widehat{\mathcal{X}}$ is any point then there is a sequence $\left\{x''_n \in \mathcal X_n\right\}_{n \in \mathbb{N}}$ such that $$ x''_n = \widehat{\pi}_n\left(\widehat{x}''\right) . $$ On the other hand there is the sequence $\left\{x''^{\widehat{g}}_n \in \mathcal X_n\right\}_{n \in \mathbb{N}}$, $$ x''^{\widehat{g}}_n=g_nx''_n, $$ which for any $n > m$ satisfies the following condition \begin{equation*}\label{top_xg_eqn} \pi^n_m\left(x''^{\widehat{g}}_n \right) = {x}''^{\widehat{g}}_m. \end{equation*} From the above equation and the properties of inverse limits it follows that there is the unique $\widehat{x}''^{\widehat{g}} \in \widehat{\mathcal{X}}$ such that $$ \widehat{\pi}_n \left( \widehat{x}''^{\widehat{g}}\right) = x''^{\widehat{g}}_n; ~~ \forall n \in \mathbb{N}. $$ The required homeomorphism $\varphi_{\widehat{g}}$ is given by $$ \varphi_{\widehat{g}}\left( \widehat{x}''\right) = \widehat{x}''^{\widehat{g}}. $$ From $\widehat{\pi}_0\circ \varphi_{\widehat{g}} = \widehat{\pi}_0$ it follows that $\varphi_{\widehat{g}}$ corresponds to an element of $G\left(\widehat{ \mathcal X}~|~\mathcal X \right)$ which is mapped onto $g_n$ for any $n \in \mathbb{N}$. On the other hand, $\varphi_{\widehat{g}}$ naturally corresponds to the element $\widehat{g} \in \varprojlim G\left(\mathcal X_n~|~\mathcal X \right)$, so one has the natural group isomorphism $G\left(\widehat{\mathcal{X}}~|~\mathcal X \right) \cong \varprojlim G\left({\mathcal{X}}_n~|~\mathcal X \right)$. From the above construction it follows that any homeomorphism $\widehat{g} \in G\left(\widehat{ \mathcal X}~|~\mathcal X \right)$ uniquely depends on $\widehat{x}'=\widehat{g}\widehat{x}_0\in \widehat{\pi}_0^{-1} \left( x_0\right)$. It follows that there is a 1-1 map $\varphi:\widehat{\pi}_0^{-1}\left(x_0 \right)\xrightarrow{\approx} G\left(\widehat{\mathcal{X}}~|~\mathcal X \right)$. Since the covering $\pi_n : \mathcal{X}_n\to\mathcal X$ is regular, there is a 1-1 map $\varphi_n:\pi_n^{-1}\left(x_0 \right)\xrightarrow{\approx} G\left(\mathcal{X}_n~|~\mathcal X \right)$. The natural surjective map $$ \widehat{\pi}_0^{-1}\left(x_0 \right) \to \pi_n^{-1}\left(x_0 \right) $$ induces the surjective homomorphism $G\left(\widehat{\mathcal{X}}~|~\mathcal X \right) \to G\left(\mathcal{X}_n~|~\mathcal X \right)$. If $\widehat{g} \in \bigcap_{n \in \mathbb{N}} \ker h_n$ is not trivial then $\widehat{g} \widehat{x}_0 \neq \widehat{x}_0$ and there is $n \in \mathbb{N}$ such that $\widehat{\pi}_n\left(\widehat{x}_0\right)\neq \widehat{\pi}_n\left(\widehat{g}\widehat{x}_0\right)= h_n\left(\widehat{g} \right) \widehat{\pi}_n\left(\widehat{x}_0\right)$, so $h_n\left(\widehat{g} \right) \in G\left(\mathcal{X}_n~|~\mathcal X \right)$ is not trivial and $\widehat{g} \notin \ker h_n$. From this contradiction it follows that $\bigcap_{n \in \mathbb{N}} \ker h_n$ is a trivial group. \end{proof} \begin{definition}\label{top_coh_defn} Let $\mathfrak{S} = \left\{ \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\}$ be a finite covering sequence. The pair $\left(\mathcal{Y},\left\{\pi^{\mathcal Y}_n\right\}_{n \in \mathbb{N}} \right) $ of a (discrete) set $\mathcal{Y}$ and surjective maps $\pi^{\mathcal Y}_n:\mathcal{Y} \to \mathcal X_n$ is said to be a \textit{coherent system} if for any $n \in \mathbb{N}^0$ the following diagram \newline \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { & \mathcal{Y} & \\ \mathcal{X}_n & & \mathcal{X}_{n-1} \\}; \path[-stealth] (m-1-2) edge node [left] {$\pi^{\mathcal Y}_n~$} (m-2-1) (m-1-2) edge node [right] {$~\pi^{\mathcal Y}_{n-1}$} (m-2-3) (m-2-1) edge node [above] {$\pi_n$} (m-2-3); \end{tikzpicture} \newline is commutative. \end{definition} \begin{definition}\label{comm_top_constr_defn} Let $\mathfrak{S} = \left\{ \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\}$ be a topological finite covering sequence. A coherent system $\left(\mathcal{Y},\left\{\pi^{\mathcal Y}_n\right\} \right)$ is said to be a \textit{connected covering} of $\mathfrak{S}$ if $\mathcal Y$ is a connected topological space and $\pi^{\mathcal Y}_n$ is a regular covering for any $n \in \mathbb{N}$. We will use the notation $\left(\mathcal{Y},\left\{\pi^{\mathcal Y}_n\right\} \right)\downarrow \mathfrak{S}$ or simply $\mathcal{Y} \downarrow \mathfrak{S}$. \end{definition} \begin{definition}\label{top_spec_defn} Let $\left(\mathcal{Y},\left\{\pi^{\mathcal Y}_n\right\} \right)$ be a coherent system of $\mathfrak{S}$ and $y \in \mathcal{Y}$. A subset $\mathcal V \subset \mathcal{Y}$ is said to be \textit{special} if $\pi^{\mathcal Y}_0\left(\mathcal{V} \right)$ is evenly covered by $\mathcal{X}_1 \to \mathcal{X}_0$ and for any $n \in \mathbb{N}^0$ the following conditions hold: \begin{itemize} \item $\pi^{\mathcal Y}_n\left(\mathcal{V} \right) \subset \mathcal X_n$ is an open connected set, \item The restriction $\pi^{\mathcal Y}_n|_{\mathcal V}:\mathcal{V}\to \pi^{\mathcal Y}_n\left( {\mathcal V}\right) $ is a bijective map. \end{itemize} \end{definition} \begin{remark} For any $n \in \mathbb{N}^0$ the space $\mathcal X_n$ is second-countable, so from Theorem \ref{comm_sep_thm} it follows that any point $x \in \mathcal X_n$ has an open connected neighborhood $\mathcal U \subset \mathcal X_n$.
\end{remark} \begin{remark} If $\left(\mathcal{Y},\left\{\pi^{\mathcal Y}_n\right\} \right)$ is a covering of $\mathfrak{S}$ then the set of special sets is a base of the topology of $\mathcal{Y}$. \end{remark} \begin{lemma}\label{top_equ_borel_set_lem} Let $\widehat{\mathcal{X}} = \varprojlim \mathcal{X}_n$ be the inverse limit of the sequence $\mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...$ in the category of topological spaces and continuous maps. Any special set of $\widehat{\mathcal{X}}$ is a Borel subset of $\widehat{\mathcal{X}}$. \end{lemma} \begin{proof} If $\mathcal U_n\subset \mathcal X_n$ is an open set then $\widehat{\pi}_n^{-1} \left(\mathcal U_n \right) \subset \widehat{\mathcal X}$ is open. If $\widehat{\mathcal U}$ is a special set then $\widehat{\mathcal U} = \bigcap_{n \in \mathbb{N}} \widehat{\pi}_n^{-1} \circ \widehat{\pi}_n\left(\widehat{\mathcal U}\right)$, i.e. $\widehat{\mathcal U}$ is a countable intersection of open sets. So $\widehat{\mathcal U}$ is a Borel subset. \end{proof} \begin{definition}\label{comm_top_constr_morph_defn} Let us consider the situation of Definition \ref{comm_top_constr_defn}. A \textit{morphism} from $\left(\mathcal{Y}',\left\{\pi^{\mathcal Y'}_n\right\}\right)\downarrow\mathfrak{S}$ to $\left(\mathcal{Y}'',\left\{\pi^{\mathcal Y''}_n\right\}\right)\downarrow\mathfrak{S}$ is a covering $f: \mathcal{Y}' \to \mathcal{Y}''$ such that $$ \pi_n^{\mathcal Y''} \circ f= \pi_n^{\mathcal Y'} $$ for any $n \in \mathbb{N}$. \end{definition} \begin{empt}\label{comm_top_constr} There is a category with objects and morphisms described by Definitions \ref{comm_top_constr_defn} and \ref{comm_top_constr_morph_defn}. Denote this category by $\downarrow \mathfrak S$. \end{empt} \begin{lemma}\label{top_universal_covering_lem} There is a final object of the category $\downarrow \mathfrak S$ described in \ref{comm_top_constr}. \end{lemma} \begin{proof} Let $\widehat{\mathcal{X}} = \varprojlim \mathcal{X}_n$ be the inverse limit of the sequence $\mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...$ in the category of topological spaces and continuous maps. Denote by $\overline{\mathcal X}$ the topological space such that \begin{itemize} \item $\overline{\mathcal X}$ coincides with $\widehat{\mathcal X}$ as a set, \item The set of special sets of $\widehat{\mathcal X}$ is a base of the topology of $\overline{\mathcal X}$. \end{itemize} If $x_n \in \mathcal X_n$ is a point then there is $\overline{x}\in \overline{\mathcal X}=\widehat{\mathcal X}$ such that $x_n = \widehat{\pi}_n\left( \overline{x}\right)$, and there is a special subset $\widehat{\mathcal U}$ such that $\overline{x} \in \widehat{\mathcal U}$. From the construction of special subsets it follows that: \begin{itemize} \item $\mathcal U_n = \widehat{\pi}_n\left( \widehat{\mathcal U}\right)$ is an open neighborhood of $x_n$; \item $$\widehat{\pi}_n^{-1} \left(\mathcal U_n \right) = \bigsqcup_{g \in \ker\left(G\left(\widehat{\mathcal{X}}~|~\mathcal X \right) \to G\left(\mathcal{X}_n~|~\mathcal X \right) \right) } g \widehat{\mathcal U};$$ \item For any $g \in \ker\left(G\left(\widehat{\mathcal{X}}~|~\mathcal X \right) \to G\left(\mathcal{X}_n~|~\mathcal X \right) \right)$ the set $g \widehat{\mathcal U}$ is mapped homeomorphically onto $\mathcal{U}_n$. \end{itemize} So the natural map $\pi^{\overline{\mathcal X}}_n:\overline{\mathcal X} \to \mathcal X_n$ is a covering. If $\widetilde{\mathcal{X}} \subset \overline{\mathcal{X}}$ is a nontrivial connected component then the map $\widetilde{\mathcal{X}} \to \mathcal X_n$ is a covering, hence $\widetilde{\mathcal{X}}$ is an object of $\downarrow \mathfrak S$. Let $G \subset \widehat{G}$ be the maximal subgroup such that $G\widetilde{\mathcal{X}}=\widetilde{\mathcal{X}}$. The subgroup $G \subset \widehat{G}$ is normal. If $g \in \widehat{G} \backslash G$ then $g \widetilde{\mathcal X} \bigcap \widetilde{\mathcal X} = \emptyset$; moreover $g$ is a homeomorphism, i.e. $g: \widetilde{\mathcal X} \xrightarrow{\approx} g\widetilde{\mathcal X}$. If $\overline{x} \in \overline{\mathcal{X}}$ then there is $\widetilde{x} \in \widetilde{\mathcal X}$ such that $\overline{\pi}_0\left(\overline{x} \right)= \overline{\pi}_0\left(\widetilde{x} \right)$, hence there is $g \in \widehat{G}$ such that $\overline{x}=g\widetilde{x}$ and $\overline{x} \in g \widetilde{\mathcal X}$. It follows that \begin{equation}\label{top_disconnected_repr_eqn} \overline{\mathcal X}= \bigsqcup_{g \in J } g \widetilde{\mathcal X} \end{equation} where $J \subset \widehat{G} $ is a set of representatives of ${\widehat{G}/G}$. If $\left(\mathcal{Y},\left\{\pi^{\mathcal Y}_n\right\} \right)$ is a connected covering of $\mathfrak{S}$ then there is the natural continuous map $\mathcal{Y}\to \widehat{\mathcal{X}}$, because $\widehat{\mathcal{X}}$ is the inverse limit. Since the continuous map $\overline{\mathcal{X}}\to \widehat{\mathcal{X}}$ is bijective, there is the natural map $\overline{\pi}:\mathcal{Y} \to \overline{\mathcal{X}}$. Let $\overline{x} \in \overline{\mathcal{X}}$ be such that $\overline{x} \in \overline{\pi}\left( \mathcal Y\right)$, i.e. there exists $y \in \mathcal Y$ which satisfies the condition $\overline{x} = \overline{\pi}\left( y\right)$. Let $G^{\mathcal Y} \subset G\left(\mathcal Y~|~\mathcal X \right)$ be such that $\overline{\pi}\left( G^{\mathcal Y}y\right) = \left\{\overline{x}\right\} $. If $\widehat{\mathcal U}$ is a special neighborhood of $\overline{x}$ then there is a connected neighborhood $\mathcal V$ of $y$ which is mapped homeomorphically onto $\widehat{\pi}_0\left(\widehat{\mathcal U}\right) \subset \mathcal X_0$. It follows that \begin{equation}\label{top_pi_u_eqn} \overline{\pi}^{-1}\left(\widehat{\mathcal U}\right)= \bigsqcup_{g \in G^{\mathcal Y}}g \mathcal V, \end{equation} i.e. $\widehat{\mathcal U}$ is evenly covered by $\overline{\pi}$. It follows that the map $\overline{\pi}:\mathcal{Y} \to \overline{\mathcal{X}}$ is continuous. From \eqref{top_disconnected_repr_eqn} it follows that there is $g \in \widehat{G}$ such that $\overline{x} \in g \widetilde{\mathcal X}$. The space $\mathcal Y$ is connected, so it is mapped into $g \widetilde{\mathcal X}$, hence there is a continuous map $ \widetilde{\pi} = g^{-1} \circ \overline{\pi}: \mathcal Y \to \widetilde{\mathcal X}$. The set $\widetilde{\pi}\left( \mathcal Y\right) \subset \widetilde{\mathcal X} $ contains a nontrivial open subset. Denote by $\mathring{ \mathcal X}_{\widetilde{\pi}}\subset \widetilde{\mathcal X}$ (resp. $\overline{ \mathcal X}_{\widetilde{\pi}}\subset \widetilde{\mathcal X}$) the maximal open subset of $\widetilde{\pi}\left( \mathcal Y\right) $ (resp. the minimal closed superset of $\widetilde{\pi}\left( \mathcal Y\right) $). The space $\widetilde{\mathcal X}$ is connected (i.e. its only subsets which are both open and closed are $\emptyset$ and $\widetilde{\mathcal X}$ itself), hence from the assumption $\widetilde{\pi}\left( \mathcal Y\right) \neq \widetilde{\mathcal X}$ it follows that $ \overline{ \mathcal X}_{\widetilde{\pi}} \backslash\mathring{ \mathcal X}_{\widetilde{\pi}} \neq \emptyset$. Let $\widetilde{x} \in \overline{ \mathcal X}_{\widetilde{\pi}} \backslash\mathring{ \mathcal X}_{\widetilde{\pi}}$, and let $\widetilde{\mathcal U}$ be a special neighborhood of $\widetilde{x}$. There is $y \in \mathcal Y$ such that $y \in \widetilde{\pi}^{-1}\left(\widetilde{\mathcal U} \right)$, and there is a special connected neighborhood $\widetilde{\mathcal V} \subset \mathcal Y$ of $y$ such that $\widetilde{\pi}\left(\widetilde{\mathcal V}\right)$ is mapped homeomorphically onto $\pi^{\overline{\mathcal X}}_0\left( \widetilde{\mathcal U}\right) \subset \mathcal X_0$. On the other hand, $\widetilde{\mathcal U}\subset \widetilde{\mathcal X} $ is mapped homeomorphically onto $\pi^{\overline{\mathcal X}}_0\left( \widetilde{\mathcal U}\right)$; it follows that $\widetilde{\mathcal V}$ is mapped onto $\widetilde{\mathcal U}$. The open set $\widetilde{\mathcal U}$ is such that: \begin{itemize} \item $\widetilde{\mathcal U}$ is a neighborhood of $\widetilde{x}$, \item $\widetilde{\mathcal U} \subset \widetilde{\pi}\left( \mathcal Y\right)$. \end{itemize} From the above conditions it follows that $\widetilde{x}$ lies in the open subset $\widetilde{\mathcal U} \subset \widetilde{\pi}\left( \mathcal Y\right) $, hence $\widetilde{x} \in \mathring{ \mathcal X}_{\widetilde{\pi}}$. This contradicts $\widetilde{x} \in \overline{ \mathcal X}_{\widetilde{\pi}} \backslash\mathring{ \mathcal X}_{\widetilde{\pi}}$, and from the contradiction it follows that $\widetilde{\pi}\left( \mathcal Y\right) = \widetilde{\mathcal X} $, i.e. $\widetilde{\pi}$ is surjective. From \eqref{top_pi_u_eqn} it follows that $\widetilde{\pi}: \mathcal Y \to \widetilde{\mathcal X}$ is a covering. Thus $\widetilde{\mathcal X}$ is the final object of the category $\downarrow \mathfrak S$. \end{proof} \begin{definition}\label{top_topological_inv_lim_defn} The final object $\left(\widetilde{\mathcal{X}},\left\{\pi^{\widetilde{\mathcal X}}_n\right\} \right)$ of the category $\downarrow\mathfrak{S}$ is said to be the \textit{(topological) inverse limit} of $\downarrow\mathfrak{S}$. The notation $\left(\widetilde{\mathcal{X}},\left\{\pi^{\widetilde{\mathcal X}}_n\right\} \right) = \varprojlim \downarrow \mathfrak{S}$ or simply $~\widetilde{\mathcal{X}} = \varprojlim \downarrow\mathfrak{S}$ will be used. The space $\overline{\mathcal X}$ from the proof of Lemma \ref{top_universal_covering_lem} is said to be the \textit{disconnected inverse limit} of $\mathfrak{S}$. \end{definition} \begin{lemma}\label{top_biject_lem} Suppose $\mathfrak{S} = \left\{\mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\} \in \mathfrak{FinTop}$, and $~\widehat{\mathcal X} = \varprojlim \mathcal X_n$. If $\overline{\mathcal X}$ is the topological space which coincides with $\widehat{\mathcal X}$ as a set and whose topology is generated by special sets, then there is the natural isomorphism $G\left(\overline{\mathcal X}~|~\mathcal{X} \right) \xrightarrow{\approx} G\left(\widehat{\mathcal{X}}~|~\mathcal{X} \right)$ induced by the map $\overline{\mathcal X} \to \widehat{\mathcal{X}}$.
\end{lemma} \begin{proof} Since $\overline{\mathcal X}$ coincides with $\widehat{\mathcal X}$ as a set, and the topology of $\overline{\mathcal X}$ is finer than the topology of $\widehat{\mathcal X}$, there is the natural injective map $G\left(\overline{\mathcal X}~|~\mathcal{X} \right)\hookrightarrow G\left(\widehat{\mathcal{X}}~|~\mathcal{X} \right)$. If $\widehat{g}\in G\left(\widehat{\mathcal{X}}~|~\mathcal{X} \right)$ and $\widehat{\mathcal U}$ is a special set, then for any $n \in \mathbb{N}$ the following condition holds \begin{equation}\label{top_pi_g_u_eqn} \widehat{\pi}_n\left(\widehat{g} \widehat{\mathcal U} \right)= h_n\left( \widehat{g}\right)\widehat{\pi}_n\left( \widehat{\mathcal U} \right) \end{equation} where $\widehat{\pi}_n: \widehat{\mathcal X} \to \mathcal X_n$ is the natural map, and $h_n : G\left(\widehat{\mathcal{X}}~|~\mathcal{X} \right)\to G\left(\mathcal{X}_n~|~\mathcal{X} \right)$ is given by Lemma \ref{top_surj_group_lem}. Clearly $h_n\left( \widehat{g}\right) $ is a homeomorphism of $\mathcal X_n$, so from \eqref{top_pi_g_u_eqn} it follows that $\widehat{\pi}_n\left(\widehat{g} \widehat{\mathcal U} \right)$ is an open subset of $\mathcal X_n$. Hence $\widehat{g} \widehat{\mathcal U}$ is special, so $\widehat{g}$ maps special sets onto special sets. Since the topology of $\overline{\mathcal X}$ is generated by special sets, the map $\widehat{g}$ is a homeomorphism of $\overline{\mathcal X}$, i.e. $\widehat{g} \in G\left(\overline{\mathcal X}~|~\mathcal{X} \right)$. \end{proof} \subsubsection{Algebraic construction in brief}\label{comm_alg_constr_susub} \paragraph*{} The inverse limit of coverings $\widetilde{\mathcal X}$ is obtained from the inverse limit of topological spaces $\widehat{\mathcal X}$ by a change of topology. The topology of $\widetilde{\mathcal X}$ is finer than the topology of $\widehat{\mathcal X}$; it means that $C_0\left(\widehat{\mathcal X}\right)$ is a subalgebra of $C_b\left(\widetilde{\mathcal X}\right)$. The topology of $\widetilde{\mathcal X}$ is obtained from the topology of $\widehat{\mathcal X}$ by the addition of special subsets. Addition of new sets to a topology is equivalent to addition of new elements to $C_0\left(\widehat{\mathcal X}\right)$. To obtain $C_b\left(\widetilde{\mathcal X}\right)$ we will add to $C_0\left(\widehat{\mathcal X} \right) $ special elements (cf. Definition \ref{special_el_defn}). If $\widetilde{\mathcal U}\subset \widetilde{\mathcal X}$ is a special set, $\widetilde{a} \in C_c\left( \widetilde{\mathcal X}\right)$ is a positive element such that $\widetilde{a}|_{\widetilde{\mathcal X} \backslash \widetilde{\mathcal U}}= \{0\}$, and $a\in C_c\left(\mathcal X_0\right)$ is given by $ a =\sum_{\widehat{g} \in \widehat{G}} \widehat{g} \widetilde{a} $, then the following condition holds $$ a\left( \widetilde{\pi}_n\left(\widetilde{x}\right) \right)= \left( \sum_{\widehat{g} \in \widehat{G}} \widehat{g} \widetilde{a}\right) \left( \widetilde{\pi}_n\left(\widetilde{x}\right) \right)= \left\{\begin{array}{c l} \widetilde{a}\left( \widetilde{x} \right) & \widetilde{x} \in \widetilde{\mathcal U} \\ 0 & \widetilde{\pi}_n\left(\widetilde{x}\right) \notin \widetilde{\pi}_n\left(\widetilde{\mathcal U}\right) \end{array}\right.. $$ From the above equation it follows that \begin{equation}\label{comm_alg_eqn} \left( \sum_{\widehat{g} \in \widehat{G}} \widehat{g} \widetilde{a}\right)^2 = \sum_{\widehat{g} \in \widehat{G}} \widehat{g} \widetilde{a}^2. \end{equation} The equation \eqref{comm_alg_eqn} is purely algebraic and related to special subsets. From Theorem \ref{comm_main_thm} it follows that the algebraic condition \eqref{comm_alg_eqn} is sufficient for the construction of $C_0\left( \widetilde{\mathcal X}\right)$. Thus noncommutative inverse limits of coverings can be constructed by purely algebraic methods. \subsection{Locally compact spaces} \paragraph*{} There are two equivalent definitions of $C_0\left(\mathcal{X}\right)$, and both of them are used in this article. \begin{defn}\label{c_c_def_1} The algebra $C_0\left(\mathcal{X}\right)$ is the $C^*$-norm closure of the algebra $C_c\left(\mathcal{X}\right)$ of compactly supported continuous functions. \end{defn} \begin{defn}\label{c_c_def_2} The $C^*$-algebra $C_0\left(\mathcal{X}\right)$ is given by the following equation \begin{equation*} C_0\left(\mathcal{X}\right) = \left\{\varphi \in C_b\left(\mathcal{X}\right) ~|~ \forall \varepsilon > 0 ~~ \exists K \subset \mathcal{X} ~ ( K \text{ is compact}) ~ \text{ such that } ~ \forall x \in \mathcal X \backslash K ~~ \left|\varphi\left(x\right)\right| < \varepsilon \right\}, \end{equation*} i.e. \begin{equation*} \left\|\varphi|_{\mathcal X \backslash K}\right\| < \varepsilon. \end{equation*} \end{defn} \begin{thm}\label{comm_sep_thm}\cite{chun-yen:separability} For a locally compact Hausdorff space $\mathcal X$, the following are equivalent: \begin{enumerate} \item[(a)] The Abelian $C^*$-algebra $C_0\left(\mathcal X \right)$ is separable; \item[(b)] $\mathcal X$ is $\sigma$-compact and metrizable; \item[(c)] $\mathcal X$ is second-countable. \end{enumerate} \end{thm} \begin{cor}\label{com_a_u_cor} If $\mathcal X$ is a locally compact second-countable Hausdorff space then for any $x \in \mathcal X$ and any open neighborhood $\mathcal U\subset\mathcal X$ there is a bounded positive continuous function $a: \mathcal X \to \mathbb{R}$ such that $a\left( x\right) \neq 0$ and $a\left(\mathcal X \backslash \mathcal U \right)= \{0\}$. \end{cor} \begin{defn}\cite{munkres:topology} If $\phi: \mathcal X \to \mathbb{C}$ is continuous then the \textit{support} of $\phi$ is defined to be the closure of the set $\phi^{-1}\left(\mathbb{C}\backslash \{0\}\right)$. Thus if $x$ lies outside the support, there is some neighborhood of $x$ on which $\phi$ vanishes. Denote by $\supp \phi$ the support of $\phi$. \end{defn} \subsection{Hilbert modules} \paragraph*{} We refer to \cite{blackadar:ko} for the definition of Hilbert $C^*$-modules, or simply Hilbert modules. Let $A$ be a $C^*$-algebra, and let $X_A$ be an $A$-Hilbert module. Let $\langle \cdot, \cdot \rangle_{X_A}$ be the $A$-valued product on $X_A$. For any $\xi, \zeta \in X_A$ let us define an $A$-endomorphism $\theta_{\xi, \zeta}$ given by $\theta_{\xi, \zeta}(\eta)=\xi \langle \zeta, \eta \rangle_{X_A}$ where $\eta \in X_A$. The operator $\theta_{\xi, \zeta}$ shall be denoted by $\xi \rangle\langle \zeta$. The norm completion of the algebra generated by the operators $\theta_{\xi, \zeta}$ is said to be the algebra of compact operators $\mathcal{K}(X_A)$. We suppose that there is a left action of $\mathcal{K}(X_A)$ on $X_A$ which is $A$-linear, i.e. the action of $\mathcal{K}(X_A)$ commutes with the action of $A$. \subsection{$C^*$-algebras and von Neumann algebras} \paragraph*{} In this section I follow \cite{pedersen:ca_aut}. \begin{definition}\label{strict_topology}\cite{pedersen:ca_aut} Let $A$ be a $C^*$-algebra.
The {\it strict topology} on the multiplier algebra $M(A)$ is the topology generated by the seminorms $\vertiii{x}_a = \|ax\| + \|xa\|$, ($a\in A$). \end{definition} \begin{definition} \label{strong_topology}\cite{pedersen:ca_aut} Let $\mathcal{H}$ be a Hilbert space. The {\it strong} topology on $B\left(\mathcal{H}\right)$ is the locally convex vector space topology associated with the family of seminorms of the form $x \mapsto \|x\xi\|$, $x \in B(\mathcal{H})$, $\xi \in \mathcal{H}$. \end{definition} \begin{definition}\label{weak_topology}\cite{pedersen:ca_aut} Let $\mathcal{H}$ be a Hilbert space. The {\it weak} topology on $B\left(\mathcal{H}\right)$ is the locally convex vector space topology associated with the family of seminorms of the form $x \mapsto \left|\left(x\xi, \eta\right)\right|$, $x \in B(\mathcal{H})$, $\xi, \eta \in \mathcal{H}$. \end{definition} \begin{theorem}\label{vN_thm}\cite{pedersen:ca_aut} Let $M$ be a $C^*$-subalgebra of $B(\mathcal{H})$ containing the identity operator. The following conditions are equivalent: \begin{itemize} \item $M=M''$ where $M''$ is the bicommutant of $M$; \item $M$ is weakly closed; \item $M$ is strongly closed. \end{itemize} \end{theorem} \begin{definition} A $C^*$-algebra $M$ is said to be a {\it von Neumann algebra} or a {\it $W^*$-algebra} if $M$ satisfies the conditions of Theorem \ref{vN_thm}. \end{definition} \begin{definition} \cite{pedersen:ca_aut} Let $A$ be a $C^*$-algebra, and let $S$ be the state space of $A$. For any $s \in S$ there is an associated representation $\pi_s: A \to B\left( \mathcal{H}_s\right)$. The representation $\bigoplus_{s \in S} \pi_s: A \to \bigoplus_{s \in S} B\left(\mathcal{H}_s \right)$ is said to be the \textit{universal representation}. The universal representation can be regarded as a map $A \to B\left( \bigoplus_{s \in S}\mathcal{H}_s\right)$. \end{definition} \begin{definition}\label{env_alg_defn}\cite{pedersen:ca_aut} Let $A$ be a $C^*$-algebra, and let $\pi: A \to B\left(\mathcal{H} \right)$ be the universal representation. The strong closure of $\pi\left( A\right)$ is said to be the {\it enveloping von Neumann algebra} or the {\it enveloping $W^*$-algebra} of $A$. The enveloping von Neumann algebra will be denoted by $A''$. \end{definition} \begin{proposition}\label{env_alg_sec_dual}\cite{pedersen:ca_aut} The enveloping von Neumann algebra $A''$ of a $C^*$-algebra $A$ is isomorphic, as a Banach space, to the second dual of $A$, i.e. $A'' \approx A^{**}$. \end{proposition} \begin{lemma}\label{increasing_convergent_w}\cite{pedersen:ca_aut} Let $\Lambda$ be a set directed by a partial ordering, and let $\left\{x_\lambda \right\}_{\lambda \in \Lambda}$ be an increasing net of self-adjoint operators in $B\left(\mathcal{H}\right)$, i.e. $\lambda \le \mu$ implies $x_\lambda \le x_\mu$. If $\left\|x_\lambda\right\| \le \gamma$ for some $\gamma \in \mathbb{R}$ and all $\lambda$ then $\left\{x_\lambda \right\}$ is strongly convergent to a self-adjoint element $x \in B\left(\mathcal{H}\right)$ with $\left\|x\right\| \le \gamma$. \end{lemma} \paragraph*{} For each $x\in B(\mathcal{H})$ we define the {\it range projection} of $x$ (denoted by $[x]$) as the projection onto the closure of $x\mathcal{H}$. If $M$ is a von Neumann algebra and $x \in M$ then $[x]\in M$.
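The following small example is an editorial illustration of the range projection (it is not contained in \cite{pedersen:ca_aut}). \begin{example} Let $x \in B\left(\ell^2\left(\mathbb{N}\right)\right)$ be the diagonal operator given by $x e_n = \frac{1}{n} e_n$ on the standard basis $\left\{e_n\right\}$. Then $x\mathcal{H}$ is dense in $\ell^2\left(\mathbb{N}\right)$, so $\left[x\right] = 1$ although $x$ is not invertible. Similarly, if $u$ is a partial isometry then $\left[u\right] = uu^*$. \end{example}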
\begin{prop}\label{polar_decomposition_prop}\cite{pedersen:ca_aut} For each element $x$ in a von Neumann algebra $M$ there is a unique partial isometry $u\in M$ and a unique positive $\left|x\right| \in M_+$ with $uu^*=[|x|]$ and $x=|x|u$. \end{prop} \begin{defn}\label{polar_decomposition_defn} The formula $x=|x|u$ in the Proposition \ref{polar_decomposition_prop} is said to be the \textit{polar decomposition}. \end{defn} \begin{empt}\label{comm_gns_constr} Any separable $C^*$-algebra $A$ has a state $\tau$ which induces a faithful GNS representation \cite{murphy}. There is a $\mathbb{C}$-valued product on $A$ given by \begin{equation*} \left(a, b\right)=\tau\left(a^*b\right). \end{equation*} This product induces a product on $A/\mathcal{I}_\tau$ where $\mathcal{I}_\tau =\left\{a \in A ~|~ \tau(a^*a)=0\right\}$. So $A/\mathcal{I}_\tau$ is a pre-Hilbert space. Denote by $L^2\left(A, \tau\right)$ the Hilbert completion of $A/\mathcal{I}_\tau$. The Hilbert space $L^2\left(A, \tau\right)$ is the space of a GNS representation of $A$. \end{empt} \section{Noncommutative finite-fold coverings} \subsection{Basic construction} \begin{definition} If $A$ is a $C^*$-algebra then an action of a group $G$ on $A$ is said to be {\it involutive} if $ga^* = \left(ga\right)^*$ for any $a \in A$ and $g\in G$. The action is said to be \textit{non-degenerate} if for any nontrivial $g \in G$ there is $a \in A$ such that $ga\neq a$. \end{definition} \begin{definition}\label{fin_def_uni} Let $A \hookrightarrow \widetilde{A}$ be an injective *-homomorphism of unital $C^*$-algebras. Suppose that there is a non-degenerate involutive action $G \times \widetilde{A} \to \widetilde{A}$ of a finite group $G$, such that $A = \widetilde{A}^G\stackrel{\text{def}}{=}\left\{a\in \widetilde{A}~|~ a = g a;~ \forall g \in G\right\}$. There is an $A$-valued product on $\widetilde{A}$ given by \begin{equation}\label{finite_hilb_mod_prod_eqn} \left\langle a, b \right\rangle_{\widetilde{A}}=\sum_{g \in G} g\left( a^* b\right) \end{equation} and $\widetilde{A}$ is an $A$-Hilbert module. We say that a triple $\left(A, \widetilde{A}, G \right)$ is a \textit{unital noncommutative finite-fold covering} if $\widetilde{A}$ is a finitely generated projective $A$-Hilbert module. \end{definition} \begin{remark} The above definition is motivated by the Theorem \ref{pavlov_troisky_thm}. \end{remark} \begin{definition}\label{fin_comp_def} Let $A$, $\widetilde{A}$ be $C^*$-algebras and let $A \hookrightarrow \widetilde{A}$ be an inclusion such that the following conditions hold: \begin{enumerate} \item[(a)] There are unital $C^*$-algebras $B$, $\widetilde{B}$ and inclusions $A \subset B$, $\widetilde{A}\subset \widetilde{B}$ such that $A$ (resp. $\widetilde{A}$) is an essential ideal of $B$ (resp. $\widetilde{B}$) and $A = B\bigcap \widetilde{A}$, \item[(b)] There is a unital noncommutative finite-fold covering $\left(B ,\widetilde{B}, G \right)$, \item[(c)] $G\widetilde{A} = \widetilde{A}$. \end{enumerate} The triple $\left(A, \widetilde{A},G \right)$ is said to be a \textit{noncommutative finite-fold covering with compactification}. The group $G$ is said to be the \textit{covering transformation group} (of $\left(A, \widetilde{A},G \right)$ ) and we use the following notation \begin{equation}\label{group_cov_eqn} G\left(\widetilde{A}~|~A \right) \stackrel{\mathrm{def}}{=} G. \end{equation} \end{definition} \begin{remark} The Definition \ref{fin_comp_def} is motivated by the Lemma \ref{comm_fin_lem}.
\end{remark} \begin{remark} Any unital noncommutative finite-fold covering is a noncommutative finite-fold covering with compactification. \end{remark} \begin{definition}\label{fin_def} Let $A$, $\widetilde{A}$ be $C^*$-algebras, $A\hookrightarrow\widetilde{A}$ an injective *-homomorphism and $G\times \widetilde{A}\to \widetilde{A}$ an involutive non-degenerate action of a finite group $G$ such that the following conditions hold: \begin{enumerate} \item[(a)] $A \cong \widetilde{A}^G \stackrel{\mathrm{def}}{=} \left\{a\in \widetilde{A} ~|~ Ga = a \right\}$, \item[(b)] There is a family $\left\{\widetilde{I}_\lambda \subset \widetilde{A} \right\}_{\lambda \in \Lambda}$ of closed ideals of $\widetilde{A}$ such that \begin{equation}\label{gi-i} G\widetilde{I}_\lambda = \widetilde{I}_\lambda. \end{equation} Moreover $\bigcup_{\lambda \in \Lambda} \widetilde{I}_\lambda$ (resp. $\bigcup_{\lambda \in \Lambda} \left( A \bigcap \widetilde{I}_\lambda\right) $ ) is a dense subset of $\widetilde{A}$ (resp. $A$), and for any $\lambda \in \Lambda$ there is a natural noncommutative finite-fold covering with compactification $\left(\widetilde{I}_\lambda \bigcap A, \widetilde{I}_\lambda , G \right)$. \end{enumerate} We say that the triple $\left(A, \widetilde{A},G \right)$ is a \textit{noncommutative finite-fold covering}. \end{definition} \begin{remark} The Definition \ref{fin_def} is motivated by the Theorem \ref{comm_fin_thm}. \end{remark} \begin{remark} Any noncommutative finite-fold covering with compactification is a noncommutative finite-fold covering. \end{remark} \begin{definition} The injective *-homomorphism $A \hookrightarrow \widetilde{A}$ from the Definition \ref{fin_def} is said to be a \textit{noncommutative finite-fold covering}. \end{definition} \begin{definition}\label{hilbert_product_defn} Let $\left(A, \widetilde{A}, G\right)$ be a noncommutative finite-fold covering. The algebra $\widetilde{A}$ is a Hilbert $A$-module with an $A$-valued product given by \begin{equation}\label{fin_form_a} \left\langle a, b \right\rangle_{\widetilde{A}} = \sum_{g \in G} g(a^*b); ~ a,b \in \widetilde{A}. \end{equation} We say that this structure of Hilbert $A$-module is {\it induced by the covering} $\left(A, \widetilde{A}, G\right)$. Henceforth we shall consider $\widetilde{A}$ as a right $A$-module, so we will write $\widetilde{A}_A$. \end{definition} \subsection{Induced representation}\label{induced_repr_fin_sec} \begin{empt}\label{induced_repr_constr} Let $\left(A, \widetilde{A}, G\right)$ be a noncommutative finite-fold covering, and let $\rho: A \to B\left(\mathcal{H}\right)$ be a representation. If $X=\widetilde{A}\otimes_A \mathcal{H}$ is the algebraic tensor product then there is a sesquilinear $\mathbb{C}$-valued product $\left(\cdot, \cdot\right)_{X}$ on $X$ given by \begin{equation}\label{induced_prod_equ} \left(a \otimes \xi, b \otimes \eta \right)_{X}= \left(\xi, \left\langle a, b \right\rangle_{\widetilde{A}} \eta\right)_{\mathcal{H}} \end{equation} where $ \left(\cdot, \cdot\right)_{\mathcal{H}}$ means the Hilbert space product on $\mathcal{H}$, and $\left\langle \cdot, \cdot \right\rangle_{\widetilde{A}}$ is given by \eqref{fin_form_a}. So $X$ is a pre-Hilbert space. There is a natural map $p: \widetilde{A} \times \left( \widetilde{A}\otimes_A \mathcal{H} \right)\to \widetilde{A}\otimes_A \mathcal{H}$ given by $$ (a, b \otimes \xi) \mapsto ab \otimes \xi.
$$ \end{empt} \begin{defn}\label{induced_repr_defn} We use the notation of the Definition \ref{hilbert_product_defn} and of \ref{induced_repr_constr}. If $\widetilde{\mathcal{H}}$ is the Hilbert completion of $X=\widetilde{A}\otimes_A \mathcal{H}$ then the map $p: \widetilde{A} \times \left( \widetilde{A}\otimes_A \mathcal{H} \right)\to \widetilde{A}\otimes_A \mathcal{H}$ induces the representation $\widetilde{\rho}: \widetilde{A} \to B\left( \widetilde{\mathcal{H}} \right)$. We say that $\widetilde{\rho}$ \textit{is induced by the pair} $\left(\rho,\left(A, \widetilde{A}, G\right) \right)$. \end{defn} \begin{rem} Below any $\widetilde a \otimes \xi\in\widetilde{A}\otimes_A \mathcal{H}$ will be regarded as an element of $\widetilde{\mathcal{H}}$. \end{rem} \begin{lem} If $\rho: A \to B\left(\mathcal{H} \right) $ is faithful then $\widetilde{\rho}: \widetilde{A} \to B\left( \widetilde{\mathcal{H}} \right)$ is faithful. \end{lem} \begin{proof} If $\widetilde{a} \in \widetilde{A}$ is a nonzero element then $$ a =\left\langle \widetilde{a}~\widetilde{a}^*, \widetilde{a}~\widetilde{a}^*\right\rangle_{\widetilde{A}} = \sum_{g \in G}g\left(\widetilde{a}~\widetilde{a}^*~\widetilde{a}~\widetilde{a}^* \right) \in A $$ is a nonzero positive element. There is $\xi \in \mathcal{H}$ such that $\left( \xi, a\xi\right)_{\mathcal{H}} > 0$. However $$ \left( \xi, a\xi\right)_{\mathcal{H}} = \left( \widetilde{a}\widetilde{\xi}, \widetilde{a}\widetilde{\xi}\right)_{\widetilde{\mathcal{H}}} $$ where $\widetilde{\xi} = \widetilde{a}^*\otimes \xi \in \widetilde{A}\otimes_A \mathcal{H} \subset \widetilde{\mathcal{H}}$. Hence $\widetilde{a}\widetilde{\xi} \neq 0$. \end{proof} \begin{empt} Let $\left(A, \widetilde{A}, G\right)$ be a noncommutative finite-fold covering, let $\rho: A \to B\left(\mathcal{H} \right)$ be a faithful representation, and let $\widetilde{\rho}: \widetilde{A} \to B\left( \widetilde{\mathcal{H}} \right)$ be induced by the pair $\left(\rho,\left(A, \widetilde{A}, G\right) \right)$. There is the natural action of $G$ on $\widetilde{\mathcal{H}}$ induced by the map $$ g \left( \widetilde{a} \otimes \xi\right) = \left( g\widetilde{a} \right) \otimes \xi; ~ \widetilde{a} \in \widetilde{A}, ~ g \in G, ~ \xi \in \mathcal{H}. $$ There is the natural orthogonal inclusion $\mathcal{H} \subset \widetilde{\mathcal{H}}$ induced by the inclusions $$ A \subset\widetilde{A}; ~~ A \otimes_A \mathcal{H} \subset\widetilde{A} \otimes_A \mathcal{H}. $$ The action of $g \in G$ on $\widetilde{A}$ can be recovered from the representation as $g \widetilde{a} = g \widetilde{a} g^{-1}$, i.e. $$ (g\widetilde{a}) \xi = g\left(\widetilde{a} \left( g^{-1}\xi \right) \right);~ \forall \xi \in \widetilde{\mathcal{H}}. $$ \end{empt} \begin{defn}\label{mult_G_act_defn} If $M\left(\widetilde{A} \right)$ is the multiplier algebra of $\widetilde{A}$ then there is the natural action of $G$ on $M\left(\widetilde{A} \right)$ such that for any $\widetilde{a}\in M\left(\widetilde{A} \right)$, $\widetilde{b}\in\widetilde{A}$ and $g \in G$ the following condition holds $$ \left(g \widetilde{a} \right)\widetilde{b} = g\left(\widetilde{a} \left( g^{-1}\widetilde{b} \right) \right). $$ We say that the action of $G$ on $M\left(\widetilde{A} \right)$ is \textit{induced} by the action of $G$ on $\widetilde{A}$. \end{defn} \begin{lem}\label{ind_mult_inv_lem} If an action of $G$ on $M\left(\widetilde{A} \right)$ is induced by the action of $G$ on $\widetilde{A}$ then \begin{equation}\label{mag_ma_eqn} M\left(\widetilde{A} \right)^G \subset M\left(\widetilde{A}^G \right) .
\end{equation} \end{lem} \begin{proof} If $a \in M\left(\widetilde{A} \right)^G$ and $b \in \widetilde{A}^G$ then $ab \in \widetilde{A}$ satisfies $g\left( ab\right) =\left( ga\right)\left(gb \right) = ab$, hence $ab \in \widetilde{A}^G$, i.e. $a$ is a multiplier of $\widetilde{A}^G$. \end{proof} \section{Noncommutative infinite coverings} \subsection{Basic construction}\label{bas_constr} This section contains a noncommutative generalization of infinite coverings. \begin{definition}\label{comp_defn} Let \begin{equation*} \mathfrak{S} =\left\{ A =A_0 \xrightarrow{\pi^1} A_1 \xrightarrow{\pi^2} ... \xrightarrow{\pi^n} A_n \xrightarrow{\pi^{n+1}} ...\right\} \end{equation*} be a sequence of $C^*$-algebras and noncommutative finite-fold coverings such that: \begin{enumerate} \item[(a)] Any composition $\pi^{n_1}\circ ...\circ\pi^{n_0+1}:A_{n_0}\to A_{n_1}$ corresponds to the noncommutative covering $\left(A_{n_0}, A_{n_1}, G\left(A_{n_1}~|~A_{n_0}\right)\right)$; \item[(b)] If $k < l < m$ then $G\left( A_m~|~A_k\right)A_l = A_l$ (the action of $G\left( A_m~|~A_k\right)$ on $A_l$ is meaningful because $G\left( A_m~|~A_k\right)$ acts on $A_m$ and $A_l$ is a subalgebra of $A_m$); \item[(c)] If $k < l < m$ are nonnegative integers then there is the natural exact sequence of covering transformation groups \begin{equation*} \{e\}\to G\left(A_{m}~|~A_{l}\right) \xrightarrow{\iota} G\left(A_{m}~|~A_{k}\right)\xrightarrow{\pi}G\left(A_{l}~|~A_{k}\right)\to\{e\} \end{equation*} where the existence of the homomorphism $G\left(A_{m}~|~A_{k}\right)\xrightarrow{\pi}G\left(A_{l}~|~A_{k}\right)$ follows from (b). \end{enumerate} The sequence $\mathfrak{S}$ is said to be an \textit{(algebraical) finite covering sequence}. For any finite covering sequence we will use the notation $\mathfrak{S} \in \mathfrak{FinAlg}$. \end{definition} \begin{definition}\label{equiv_act_defn} Let $\widehat{A} = \varinjlim A_n$ be the $C^*$-inductive limit \cite{murphy}, and let $\widehat{G}= \varprojlim G\left(A_n~|~A \right) $ be the projective limit of groups \cite{spanier:at}. There is the natural action of $\widehat{G}$ on $\widehat{A}$. A non-degenerate faithful representation $\widehat{A} \to B\left( \mathcal{H}\right) $ is said to be \textit{equivariant} if there is an action of $\widehat{G}$ on $\mathcal{H}$ such that for any $\xi \in \mathcal{H}$ and $g \in \widehat{G}$ the following condition holds \begin{equation}\label{equiv_act_eqn} \left(ga \right) \xi = g\left(a\left(g^{-1}\xi \right) \right) . \end{equation} \end{definition} \begin{example} Let $S$ be the state space of $\widehat{A}$, and let $\widehat{A} \to B\left(\bigoplus_{s \in S} \mathcal{H}_s \right)$ be the universal representation. There is the natural action of $\widehat{G}$ on $S$ given by $$ \left(gs \right)\left( a\right) = s\left( ga\right); ~ s \in S,~ a \in \widehat{A},~ g \in \widehat{G}. $$ The action of $\widehat{G}$ on $S$ induces the action of $\widehat{G}$ on $\bigoplus_{s \in S} \mathcal{H}_s$. It follows that the universal representation is equivariant. \end{example} \begin{example}\label{equiv_exm} Let $s$ be a faithful state which corresponds to the representation $\widehat{A} \to B\left(\mathcal{H}_s \right)$, and let $n \mapsto g_n$ be a bijection $\mathbb{N} \to \widehat{G}$. The state $$ \sum_{n \in \mathbb{N}}\frac{g_ns }{2^{n}} $$ corresponds to an equivariant representation $\widehat{A} \to B\left(\bigoplus_{g \in \widehat{G}}\mathcal{H}_{gs} \right)$.
\end{example} \begin{definition}\label{special_el_defn} Let $\pi:\widehat{A} \to B\left( \mathcal{H}\right) $ be an equivariant representation. A positive element $\overline{a} \in B\left(\mathcal{H} \right)_+ $ is said to be \textit{special} (with respect to $\pi$) if the following conditions hold: \begin{enumerate} \item[(a)] For any $n \in \mathbb{N}^0$ the series \begin{equation*} \begin{split} a_n = \sum_{g \in \ker\left( \widehat{G} \to G\left( A_n~|~A \right)\right)} g \overline{a} \end{split} \end{equation*} is strongly convergent and the sum lies in $A_n$, i.e. $a_n \in A_n $; \item[(b)] If $f_\varepsilon: \mathbb{R} \to \mathbb{R}$ is given by \begin{equation}\label{f_eps_eqn} f_\varepsilon\left( x\right) =\left\{ \begin{array}{c l} 0 & x \le \varepsilon \\ x - \varepsilon & x > \varepsilon \end{array}\right. \end{equation} then for any $n \in \mathbb{N}^0$ and for any $z \in A$ the following series \begin{equation*} \begin{split} b_n = \sum_{g \in \ker\left( \widehat{G} \to G\left( A_n~|~A \right)\right)} g \left(z \overline{a} z^*\right) ,\\ c_n = \sum_{g \in \ker\left( \widehat{G} \to G\left( A_n~|~A \right)\right)} g \left(z \overline{a} z^*\right)^2,\\ d_n = \sum_{g \in \ker\left( \widehat{G} \to G\left( A_n~|~A \right)\right)} g f_\varepsilon\left( z \overline{a} z^* \right) \end{split} \end{equation*} are strongly convergent and the sums lie in $A_n$, i.e. $b_n,~ c_n,~ d_n \in A_n $; \item[(c)] For any $\varepsilon > 0$ there is $N \in \mathbb{N}$ (which depends on $\overline{a}$ and $z$) such that for any $n \ge N$ the following condition holds \begin{equation}\label{square_condition_equ} \begin{split} \left\| b_n^2 - c_n\right\| < \varepsilon. \end{split} \end{equation} \end{enumerate} An element $\overline{ a}' \in B\left( \mathcal{H}\right) $ is said to be \textit{weakly special} if $$ \overline{ a}' = x\overline{a}y; \text{ where } x,y \in \widehat{A}, \text{ and } \overline{a} \in B\left(\mathcal{H} \right) \text{ is special}. $$ \end{definition} \begin{lemma}\label{stong_conv_inf_lem} If $\overline{a} \in B\left( \mathcal{H}\right)_+$ is a special element and $\overline{G}_n=\ker\left( \widehat{G} \to G\left( A_n~|~A \right)\right)$ then from \begin{equation*} \begin{split} a_n = \sum_{g \in \overline{G}_n} g \overline{a}, \end{split} \end{equation*} it follows that $\overline{a} = \lim_{n \to \infty} a_n$ in the sense of the strong convergence. Moreover one has $\overline{a} =\inf_{n \in \mathbb{N}}a_n$. \end{lemma} \begin{proof} From the Lemma \ref{increasing_convergent_w} it follows that the decreasing lower-bounded sequence $\left\{a_n\right\}$ is strongly convergent and $\lim_{n \to \infty} a_n=\inf_{n \in \mathbb{N}}a_n$. From $a_n \ge \overline{a}$ it follows that $\inf_{n \in \mathbb{N}}a_n \ge \overline{a}$. If $\inf_{n \in \mathbb{N}}a_n \neq \overline{a}$ then there is $\xi \in \mathcal{H}$ such that $$ \left(\xi,\left( \inf_{n \in \mathbb{N}} a_n\right) \xi \right) > \left(\xi,\overline{a}\xi \right), $$ however one has \begin{equation*} \begin{split} \left(\xi,\left( \inf_{n \in \mathbb{N}} a_n\right) \xi \right)= \inf_{n \in \mathbb{N}}\left(\xi, a_n\xi \right) = \inf_{n \in \mathbb{N}}\left(\xi,\left( \sum_{g \in \overline{G}_n} g \overline{a}\right) \xi \right)= \inf_{n \in \mathbb{N}} \sum_{g \in \overline{G}_n}\left(\xi, g\overline{a}\xi \right)= \left(\xi, \overline{a}\xi \right). \end{split} \end{equation*} It follows that $\overline{a}=\inf_{n \in \mathbb{N}}a_n$.
\end{proof} \begin{corollary}\label{special_cor} Any weakly special element lies in the enveloping von Neumann algebra $\widehat{A}''$ of $\widehat{A}=\varinjlim A_n$. If $\overline{A}_\pi \subset B\left( \mathcal{H}\right)$ is the $C^*$-norm completion of the algebra generated by weakly special elements then $\overline{A}_\pi \subset \widehat{A}''$. \end{corollary} \begin{lemma} If $\overline{a}\in B\left( \mathcal{H}\right)$ is special (resp. $\overline{a}'\in B\left( \mathcal{H}\right)$ is weakly special) then for any $g \in \widehat{G}$ the element $g\overline{a}$ is special (resp. $g\overline{a}'$ is weakly special). \end{lemma} \begin{proof} If $\overline{a} \in B\left( \mathcal{H}\right)$ is special then $g \overline{a}$ satisfies (a)-(c) of the Definition \ref{special_el_defn}, i.e. $g\overline{ a}$ is special. If $\overline{ a}'$ is weakly special then from $$ \overline{ a}' = x\overline{a}y; \text{ where } x,y \in \widehat{A}, \text{ and } \overline{a} \in B\left(\mathcal{H} \right) \text{ is special}, $$ it turns out that $$ g\overline{ a}' = \left( gx\right) \left( g\overline{a}\right) \left( gy\right), $$ i.e. $g\overline{ a}'$ is weakly special. \end{proof} \begin{corollary}\label{disconnect_group_action_cor} If $\overline{A}_\pi \subset B\left( \mathcal{H}\right)$ is the $C^*$-norm completion of the algebra generated by weakly special elements, then there is a natural action of $\widehat{G}$ on $\overline{A}_\pi$. \end{corollary} \begin{definition}\label{main_defn_full} Let $\mathfrak{S} =\left\{ A =A_0 \xrightarrow{\pi^1} A_1 \xrightarrow{\pi^2} ... \xrightarrow{\pi^n} A_n \xrightarrow{\pi^{n+1}} ...\right\}$ be an algebraical finite covering sequence. Let $\pi:\widehat{A} \to B\left( \mathcal{H}\right) $ be an equivariant representation. Let $\overline{A}_\pi \subset B\left( \mathcal{H}\right)$ be the $C^*$-norm completion of the algebra generated by weakly special elements. We say that $\overline{A}_\pi$ is the {\it disconnected inverse noncommutative limit} of $\downarrow\mathfrak{S}$ (\textit{with respect to $\pi$}). The triple $\left(A, \overline{A}_\pi, G\left(\overline{A}_\pi~|~ A\right)\stackrel{\mathrm{def}}{=} \widehat{G}\right)$ is said to be the {\it disconnected infinite noncommutative covering} of $\mathfrak{S}$ (\textit{with respect to $\pi$}). If $\pi$ is the universal representation then "with respect to $\pi$" is dropped and we will write $\left(A, \overline{A}, G\left(\overline{A}~|~ A\right)\right)$. \end{definition} \begin{definition}\label{main_sdefn} Any maximal irreducible subalgebra $\widetilde{A}_\pi \subset \overline{A}_\pi$ is said to be a {\it connected component} of $\mathfrak{S}$ ({\it with respect to $\pi$}). The maximal subgroup $G_\pi\subset G\left(\overline{A}_\pi~|~ A\right)$ among subgroups $G\subset G\left(\overline{A}_\pi~|~ A\right)$ such that $G\widetilde{A}_\pi=\widetilde{A}_\pi$ is said to be the $\widetilde{A}_\pi$-{\it invariant group} of $\mathfrak{S}$. If $\pi$ is the universal representation then "with respect to $\pi$" is dropped. \end{definition} \begin{remark} From the Definition \ref{main_sdefn} it follows that $G_\pi \subset G\left(\overline{A}_\pi~|~ A\right)$ is a normal subgroup. \end{remark} \begin{definition}\label{good_seq_defn} Let $$\mathfrak{S} = \left\{ A =A_0 \xrightarrow{\pi^1} A_1 \xrightarrow{\pi^2} ...
\xrightarrow{\pi^n} A_n \xrightarrow{\pi^{n+1}} ...\right\} \in \mathfrak{FinAlg},$$ and let $\left(A, \overline{A}_\pi, G\left(\overline{A}_\pi~|~ A\right)\right)$ be a disconnected infinite noncommutative covering of $\mathfrak{S}$ with respect to an equivariant representation $\pi: \varinjlim A_n\to B\left(\mathcal{H} \right) $. Let $\widetilde{A}_\pi\subset \overline{A}_\pi$ be a connected component of $\mathfrak{S}$ with respect to $\pi$, and let $G_\pi \subset G\left(\overline{A}_\pi~|~ A\right)$ be the $\widetilde{A}_\pi$-invariant group of $\mathfrak{S}$. Let $h_n : G\left(\overline{A}_\pi~|~ A\right) \to G\left( A_n~|~A \right)$ be the natural surjective homomorphism. The representation $\pi: \varinjlim A_n\to B\left(\mathcal{H} \right)$ is said to be \textit{good} if it satisfies the following conditions: \begin{enumerate} \item[(a)] The natural *-homomorphism $ \varinjlim A_n \to M\left(\widetilde{A}_\pi \right)$ is injective, \item[(b)] If $J\subset G\left(\overline{A}_\pi~|~ A\right)$ is a set of representatives of $G\left(\overline{A}_\pi~|~ A\right)/G_\pi$, then the algebraic direct sum \begin{equation*} \bigoplus_{g\in J} g\widetilde{A}_\pi \end{equation*} is a dense subalgebra of $\overline{A}_\pi$, \item [(c)] For any $n \in \mathbb{N}$ the restriction $h_n|_{G_\pi}$ is an epimorphism, i.e. $h_n\left(G_\pi \right) = G\left( A_n~|~A \right)$. \end{enumerate} If $\pi$ is the universal representation we say that $\mathfrak{S}$ is \textit{good}. \end{definition} \begin{definition}\label{main_defn} Let $\mathfrak{S}=\left\{A=A_0 \to A_1 \to ... \to A_n \to ...\right\} \in \mathfrak{FinAlg}$ be an algebraical finite covering sequence. Let $\pi: \widehat{A} \to B\left(\mathcal{H} \right)$ be a good representation. A connected component $\widetilde{A}_\pi \subset \overline{A}_\pi$ is said to be the {\it inverse noncommutative limit of $\downarrow\mathfrak{S}$ (with respect to $\pi$)}. The $\widetilde{A}_\pi$-invariant group $G_\pi$ is said to be the {\it covering transformation group of $\mathfrak{S}$} ({\it with respect to $\pi$}). The triple $\left(A, \widetilde{A}_\pi, G_\pi\right)$ is said to be the {\it infinite noncommutative covering} of $\mathfrak{S}$ ({\it with respect to $\pi$}). We will use the following notation \begin{equation*} \begin{split} \varprojlim_\pi \downarrow \mathfrak{S}\stackrel{\mathrm{def}}{=}\widetilde{A}_\pi,\\ G\left(\widetilde{A}_\pi~|~ A\right)\stackrel{\mathrm{def}}{=}G_\pi. \end{split} \end{equation*} If $\pi$ is the universal representation then "with respect to $\pi$" is dropped and we will write $\left(A, \widetilde{A}, G\right)$, $~\varprojlim \downarrow \mathfrak{S}\stackrel{\mathrm{def}}{=}\widetilde{A}$ and $ G\left(\widetilde{A}~|~ A\right)\stackrel{\mathrm{def}}{=}G$. \end{definition} \begin{definition}\label{inf_hilb_prod_defn} Let $\mathfrak{S}=\left\{A=A_0 \to A_1 \to ... \to A_n \to ...\right\} \in \mathfrak{FinAlg}$ be an algebraical finite covering sequence. Let $\pi: \widehat{A} \to B\left(\mathcal{H} \right)$ be a good representation. Let $\left(A, \widetilde{A}_\pi, G_\pi\right)$ be the infinite noncommutative covering of $\mathfrak{S}$ (with respect to $\pi$). Let $K\left( \widetilde{A}_\pi\right)$ be the Pedersen ideal of $\widetilde{A}_\pi$.
We say that $\mathfrak{S}$ \textit{allows inner product (with respect to $\pi$)} if the following conditions hold: \begin{enumerate} \item[(a)] Any $\widetilde{a} \in K\left( \widetilde{A}_\pi\right)$ is weakly special, \item[(b)] For any $n \in \mathbb{N}$, and $\widetilde{a}, \widetilde{b} \in K\left( \widetilde{A}_\pi\right)$ the series \begin{equation*} \begin{split} a_n = \sum_{g \in \ker\left( \widehat{G} \to G\left( A_n~|~A \right)\right)} g \left(\widetilde{a}^* \widetilde{b} \right) \end{split} \end{equation*} is strongly convergent and $a_n \in A_n$. \end{enumerate} \end{definition} \begin{remark}\label{inf_hilb_prod_rem} If $\mathfrak{S}$ allows inner product (with respect to $\pi$) then $K\left( \widetilde{A}_\pi\right)$ is a pre-Hilbert $A$-module such that the inner product is given by \begin{equation*} \begin{split} \left\langle \widetilde{a}, \widetilde{b} \right\rangle = \sum_{g \in \widehat{G}} g \left(\widetilde{a}^* \widetilde{b} \right) \in A \end{split} \end{equation*} where the above series is strongly convergent. The completion of $K\left( \widetilde{A}_\pi\right)$ with respect to the norm \begin{equation*} \begin{split} \left\| \widetilde{a}\right\| = \sqrt{\left\| \left\langle \widetilde{a}, \widetilde{a} \right\rangle\right\|} \end{split} \end{equation*} is an $A$-Hilbert module. Denote by $X_A$ this completion. The ideal $K\left( \widetilde{A}_\pi\right)$ is a left $\widetilde{A}_\pi$-module, so $X_A$ is also a left $\widetilde{A}_\pi$-module. Sometimes we will write $_{\widetilde{A}_\pi}X_A$ instead of $X_A$. \end{remark} \begin{definition}\label{inf_hilb_mod_defn} If $\mathfrak{S}=\left\{A=A_0 \to A_1 \to ... \to A_n \to ...\right\} \in \mathfrak{FinAlg}$ allows inner product (with respect to $\pi$) then we say that the $A$-Hilbert module $_{\widetilde{A}_\pi}X_A$ given by the Remark \ref{inf_hilb_prod_rem} \textit{corresponds to the pair} $\left(\mathfrak{S}, \pi \right) $. If $\pi$ is the universal representation then we say that $_{\widetilde{A}}X_A$ \textit{corresponds to} $\mathfrak{S}$. \end{definition} \subsection{Induced representation}\label{inf_ind_repr_subsection} \paragraph*{} Let $\pi:\widehat{A} \to B\left(\overline{\mathcal{H}}_\pi \right)$ be a good representation. Let $\left(A, \widetilde{A}_\pi, G_\pi\right)$ be an infinite noncommutative covering of $\mathfrak{S}$ with respect to $\pi$. Denote by $\overline{W}_\pi \subset B\left(\overline{\mathcal{H}}_\pi \right)$ the $\widehat{A}$-bimodule of weakly special elements, and denote by \begin{equation}\label{wealky_spec_eqn} \widetilde{W}_\pi = \overline{W}_\pi \bigcap \widetilde{A}_\pi. \end{equation} If $\pi$ is the universal representation then we write $\widetilde{W}$ instead of $\widetilde{W}_\pi$. \begin{lem}\label{w_conv_lem} If $\widetilde{a}, \widetilde{b} \in \widetilde{W}_\pi $ are weakly special elements then the series $$ \sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{b} \right) $$ is strongly convergent. \end{lem} \begin{proof} From the definition of a weakly special element one has $$\widetilde{a}^* = x \widetilde{c} y$$ where $\widetilde{c}$ is a (positive) special element and $x,y \in \widehat{A}$. The series $$ \sum_{g \in G_\pi} g \widetilde{c} $$ is strongly convergent.
For any $\xi\in\overline{\mathcal{H}}_\pi$ and $\varepsilon > 0$ there is a finite subset $G'\subset G_\pi$ such that for any finite subset $G''$ with $G' \subset G'' \subset G_\pi$ the following condition holds $$ \left\| \sum_{g \in G''\backslash G'} \left( g\widetilde{b}\right) \xi\right\| < \frac{\varepsilon}{\left\|x\right\|\left\|\sum_{g\in G_\pi}g\widetilde{c}\right\|\left\|y\right\| }. $$ Hence one has $$ \left\| \sum_{g \in G''\backslash G'} \left( g\left( \widetilde{a}^*\widetilde{b}\right)\right) \xi\right\| < \varepsilon, $$ i.e. the series $$ \sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{b} \right) $$ is strongly convergent and $\sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{b} \right) \in \widehat{A}''$. \end{proof} \begin{defn}\label{ss_defn} An element $\widetilde{a} \in \widetilde{A}_\pi$ is said to be \textit{square-summable} if the series \begin{equation}\label{ss_eqn} \sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{a} \right) \end{equation} is strongly convergent to a bounded operator. Denote by $L^2\left(\widetilde{A}_\pi \right)$ (or $L^2\left(\widetilde{A}\right)$ if $\pi$ is the universal representation) the $\mathbb{C}$-space of square-summable elements. \end{defn} \begin{rem} If $\widetilde{b} \in \widehat{A}$ and $\widetilde{a}\in L^2\left(\widetilde{A}_\pi\right)$ then from $$ \left\| \sum_{g \in G_\pi}g\left(\left(\widetilde{b}\widetilde{a} \right)^* \left(\widetilde{b}\widetilde{a}\right)\right) \right\| \le \left\|\widetilde{b} \right\|^2\left\| \sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{a} \right)\right\| ,~~ \left\| \sum_{g \in G_\pi}g\left(\left(\widetilde{a}\widetilde{b} \right)^* \left(\widetilde{a}\widetilde{b}\right)\right) \right\| \le \left\|\widetilde{b} \right\|^2\left\| \sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{a} \right)\right\| $$ it turns out that \begin{equation}\label{act_on_l2_eqn} \widehat{A}L^2\left(\widetilde{A}_\pi\right) \subset L^2\left(\widetilde{A}_\pi\right),~~ L^2\left(\widetilde{A}_\pi\right)\widehat{A} \subset L^2\left(\widetilde{A}_\pi\right), \end{equation} i.e. there are the left and right actions of $\widehat{A}$ on $L^2\left(\widetilde{A}_\pi\right)$. \end{rem} \begin{rem} If $\widetilde{a}, \widetilde{b} \in L^2\left(\widetilde{A}_\pi \right)$ then the sum $\sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{b} \right) \in \widehat{A}''$ is bounded and $G_\pi$-invariant, hence $\sum_{g \in G_\pi} g\left(\widetilde{a}^*\widetilde{b} \right) \in A'' $. \end{rem} \begin{rem} From the Lemma \ref{w_conv_lem} it turns out that $\widetilde{W}_\pi\subset L^2\left(\widetilde{A}_\pi\right)$. \end{rem} \begin{empt}\label{inf_repr_constr} Let $A \to B\left(\mathcal{H} \right)$ be a representation. Denote by $\widetilde{\mathcal{H}}$ the Hilbert completion of the pre-Hilbert space \begin{equation}\label{inf_ind_prod_eqn} \begin{split} L^2\left( \widetilde{A}_\pi \right) \otimes_A \mathcal{H},\\ \text{with the scalar product } \left(\widetilde{a} \otimes \xi, \widetilde{b} \otimes \eta \right)_{\widetilde{\mathcal{H}}} = \left( \xi, \left( \sum_{g \in G_\pi } g \left( \widetilde{a}^*\widetilde{b}\right) \right) \eta \right)_{\mathcal{H}}. \end{split} \end{equation} There is the left action of $\widehat{A}$ on $L^2\left(\widetilde{A}_\pi\right) \otimes_{A} \mathcal{H}$ given by $$ \widetilde{b}\left(\widetilde{a} \otimes \xi \right) = \widetilde{b}\widetilde{a} \otimes \xi $$ where $\widetilde{a} \in L^2\left( \widetilde{A}_\pi \right) $, $\widetilde{b} \in \widehat{A}$, $\xi \in \mathcal{H}$.
The left action of $\widehat{A}$ on $L^2\left( \widetilde{A}_\pi\right) \otimes_A \mathcal{H}$ induces the following representations \begin{equation*} \begin{split} \widehat{\rho}:\widehat{A} \to B\left( \widetilde{\mathcal{H}}\right),\\ \widetilde{\rho}:\widetilde{A}_\pi \to B\left( \widetilde{\mathcal{H}}\right). \end{split} \end{equation*} \end{empt} \begin{defn}\label{inf_ind_defn} The representation $\widetilde{\rho}:\widetilde{A}_\pi \to B\left( \widetilde{\mathcal{H}}\right)$ constructed in \ref{inf_repr_constr} is said to be \textit{induced} by $\left( \rho, \mathfrak{S}, \pi\right) $. We also say that $\widetilde{\rho}$ is \textit{induced} by $\left( \rho, \left( A, \widetilde{A}_\pi, G\left(\widetilde{A}_\pi~|~A \right) \right), \pi\right) $. If $\pi$ is the universal representation we say that $\widetilde{\rho}$ is \textit{induced} by $\left( \rho, \mathfrak{S}\right)$ and/or by $\left( \rho, \left( A, \widetilde{A}, G\left(\widetilde{A}~|~A \right) \right)\right) $. \end{defn} \begin{rem} If $\rho$ is faithful, then $\widetilde{\rho}$ is faithful. \end{rem} \begin{rem}\label{a_act_hilb_rem} There is an action of $G_\pi$ on $\widetilde{\mathcal{H}}$ induced by the natural action of $G_\pi$ on the $\widetilde{A}_\pi$-bimodule $L^2\left( \widetilde{A}_\pi\right) $. If the representation $\widetilde A_\pi \to B\left( \widetilde{\mathcal{H}} \right)$ is faithful then the action of $ G_\pi$ on $\widetilde A_\pi$ is given by $$ \left( g \widetilde a\right) \widetilde\xi = g \left( \widetilde a \left( g^{-1}\widetilde\xi\right) \right); ~ \forall g \in G_\pi, ~ \forall\widetilde a \in \widetilde{A}_\pi, ~\forall\widetilde \xi \in \widetilde{\mathcal{H}}. $$ \end{rem} \begin{empt} If $\mathfrak{S}$ allows inner product with respect to $\pi$ then for any representation $A \to B\left( \mathcal{H}\right)$ the algebraic tensor product $_{\widetilde{A}_\pi}X_A \otimes_A \mathcal{H}$ is a pre-Hilbert space with the product given by \begin{equation*} \left(a \otimes \xi, b \otimes \eta \right) = \left(\xi, \left\langle a, b \right\rangle\eta \right) \end{equation*} (cf. Definitions \ref{inf_hilb_mod_defn} and \ref{inf_hilb_prod_defn}). \end{empt} \begin{lem} Suppose that $\mathfrak{S}$ allows inner product with respect to $\pi$, so that any $\widetilde{a} \in K\left( \widetilde{A}_\pi\right)$ is weakly special. If $\widetilde{\mathcal{H}}$ (resp. $\widetilde{\mathcal{H}}'$) is the Hilbert norm completion of $\widetilde{W}_\pi \otimes_{A} \mathcal{H}$ (resp. $_{\widetilde{A}_\pi}X_A \otimes_A \mathcal{H}$) then there is the natural isomorphism $\widetilde{\mathcal{H}} \cong \widetilde{\mathcal{H}}'$. \end{lem} \begin{proof} From $K\left( \widetilde{A}_\pi\right) \subset \widetilde{W}_\pi$ and taking into account that $K\left( \widetilde{A}_\pi\right)$ is dense in $_{\widetilde{A}_\pi}X_A$ it turns out that $\widetilde{\mathcal{H}}' \subset \widetilde{\mathcal{H}}$. If $\widetilde{a} \in \widetilde{W}_\pi$ is a positive element and $f_\varepsilon$ is given by \eqref{f_eps_eqn} then \begin{enumerate} \item[(a)] $f_\varepsilon\left(\widetilde{a} \right) \in K\left( \widetilde{A}_\pi\right)$, \item[(b)] $\lim_{\varepsilon \to 0} f_\varepsilon\left(\widetilde{a} \right)=\widetilde{a}$. \end{enumerate} From (a) it follows that $f_\varepsilon\left(\widetilde{a} \right) \otimes \xi \in {_{\widetilde{A}_\pi}X_A} \otimes_A \mathcal{H} $ for any $\xi \in \mathcal{H}$. From (b) it turns out that $\widetilde{a} \otimes \xi \in \widetilde{\mathcal{H}}'$, whence the natural inclusion $\widetilde{\mathcal{H}} \subset \widetilde{\mathcal{H}}'$ follows.
The mutually inverse inclusions $\widetilde{\mathcal{H}} \subset \widetilde{\mathcal{H}}'$ and $\widetilde{\mathcal{H}}' \subset \widetilde{\mathcal{H}}$ yield the isomorphism $\widetilde{\mathcal{H}} \cong \widetilde{\mathcal{H}}'$. \end{proof} \begin{empt}\label{h_n_to_h_constr} Let $\mathcal{H}_n$ be the Hilbert completion of $A_n \otimes_A \mathcal{H}$ constructed in the Section \ref{induced_repr_fin_sec}. Clearly \begin{equation}\label{tensor_n_equ} L^2\left(\widetilde{A}_\pi\right)\otimes_{A_n} \mathcal{H}_n = L^2\left(\widetilde{A}_\pi\right) \otimes_{A_n} \left( A_n \otimes_A \mathcal{H}\right) = L^2\left(\widetilde{A}_\pi\right) \otimes_A \mathcal{H}. \end{equation} \end{empt} \section{Quantization of topological coverings}\label{top_chap} \subsection{Finite-fold coverings} \paragraph*{} The following lemma supplies the quantization of coverings with compactification. \begin{lemma}\label{comm_fin_lem} If $\mathcal X$, $\widetilde{\mathcal X}$ are locally compact spaces, and $\pi: \widetilde{\mathcal X}\to \mathcal X$ is a surjective continuous map, then the following conditions are equivalent: \begin{enumerate} \item [(i)] The map $\pi: \widetilde{\mathcal X}\to \mathcal X$ is a finite-fold covering with compactification, \item[(ii)] There is a natural noncommutative finite-fold covering with compactification $$\left(C_0\left(\mathcal X \right), C_0\left(\widetilde{\mathcal X} \right), G \right).$$ \end{enumerate} \end{lemma} \begin{proof} (i)=>(ii) Denote by ${\mathcal X} \subset {\mathcal Y}$, $\widetilde{\mathcal X} \subset \widetilde{\mathcal Y}$ compactifications such that $\overline{\pi} : \widetilde{\mathcal Y} \to {\mathcal Y}$ is a finite-fold (topological) covering. Let $G = G\left(\widetilde{\mathcal Y}~|~{\mathcal Y} \right) $ be the group of covering transformations. If $B = C\left( {\mathcal Y}\right)$ and $\widetilde{B}=C\left( \widetilde{\mathcal Y}\right)$ then $A = C_0\left( {\mathcal X}\right)$ (resp. $\widetilde{A}=C_0\left( \widetilde{\mathcal X}\right)$) is an essential ideal of $B$ (resp. $\widetilde{B}$). Taking into account $A=C_0\left( {\mathcal X}\right) = C_0\left( \widetilde{\mathcal X}\right)\bigcap C\left( {\mathcal Y}\right)= B \bigcap \widetilde{A}$ one concludes that these algebras satisfy the condition (a) of the Definition \ref{fin_comp_def}. From the Theorem \ref{pavlov_troisky_thm} it turns out that the triple $\left( C\left( {\mathcal Y}\right), C\left( \widetilde{\mathcal Y}\right) ,G\right)=\left(B ,\widetilde{B}, G \right)$ is a unital noncommutative finite-fold covering. So the condition (b) of the Definition \ref{fin_comp_def} holds. From $G \widetilde{\mathcal X} = \widetilde{\mathcal X}$ it turns out that $G \widetilde{A}= G C_0\left( \widetilde{\mathcal X}\right) = C_0\left( \widetilde{\mathcal X}\right)= \widetilde{A}$, i.e. the condition (c) of the Definition \ref{fin_comp_def} holds. (ii)=>(i) If $A = C_0\left( {\mathcal X}\right)$, $\widetilde{A} = C_0\left( \widetilde{\mathcal X}\right)$ and the inclusions $A \subset B$, $\widetilde{A}\subset \widetilde{B}$ are such that $A$ (resp. $\widetilde{A}$) is an essential ideal of $B$ (resp. $\widetilde{B}$) then there are compactifications ${ \mathcal X } \hookrightarrow { \mathcal Y }$ and $\widetilde{ \mathcal X } \hookrightarrow \widetilde{ \mathcal Y }$ such that $B = C\left(\mathcal Y \right)$, $\widetilde{B} = C\left(\widetilde{\mathcal Y} \right)$.
From the condition (b) of the Definition \ref{fin_comp_def} it turns out that the triple $\left(B ,\widetilde{B}, G \right)=\left( C \left( {\mathcal Y} \right), C \left( \widetilde{\mathcal Y} \right), G\right) $ is a unital noncommutative finite-fold covering. From the Theorem \ref{pavlov_troisky_thm} it follows that the *-homomorphism $C\left( {\mathcal Y} \right)\hookrightarrow C \left( \widetilde{\mathcal Y} \right)$ induces a finite-fold (topological) covering $\overline{\pi}: \widetilde{\mathcal Y} \to {\mathcal Y}$. From the condition (c) of the Definition \ref{fin_comp_def} it turns out that $G C_0\left( \widetilde{\mathcal X}\right)= C_0\left( \widetilde{\mathcal X}\right)$ or, equivalently, \begin{equation}\label{comm_gx=gx} G\widetilde{\mathcal X} = \widetilde{\mathcal X}. \end{equation} From $A= B \bigcap \widetilde{ A}$, or, equivalently, $C_0\left( {\mathcal X}\right) = C_0\left( \widetilde{\mathcal X}\right)\bigcap C\left( {\mathcal Y}\right)$, and \eqref{comm_gx=gx} it turns out that $\pi$ is the restriction of the finite-fold covering $\overline{\pi}$, i.e. $\pi = \overline{\pi}|_{\widetilde{\mathcal X}}$. So $\pi$ is a finite-fold covering. \end{proof} \begin{lemma}\label{comm_fin_top_lem} Let $\pi:\widetilde{\mathcal X} \to \mathcal X$ be a surjective map of topological spaces, and suppose that there is a family of open subsets $\left\{\mathcal U_\lambda \subset \mathcal X\right\}_{\lambda \in \Lambda}$ such that \begin{enumerate} \item[(a)] $\mathcal X = \bigcup_{\lambda \in \Lambda} \mathcal U_\lambda$, \item[(b)] For any $\lambda \in \Lambda$ the natural map $\pi^{-1}\left(\mathcal U_\lambda \right)\to \mathcal U_\lambda$ is a covering. \end{enumerate} Then the map $\pi:\widetilde{\mathcal X} \to \mathcal X$ is a covering. \end{lemma} \begin{proof} For any point $x_0 \in \mathcal X$ there is $\lambda \in \Lambda$ such that $x_0 \in \mathcal U_\lambda$. The map $\pi^{-1}\left(\mathcal U_\lambda \right)\to \mathcal U_\lambda$ is a covering, so there is an open neighborhood $\mathcal V$ of $x_0$ such that $\mathcal V \subset \mathcal U_\lambda$ and $\mathcal V$ is evenly covered by $\pi$. Thus every point of $\mathcal X$ has an evenly covered neighborhood, i.e. $\pi$ is a covering. \end{proof} \begin{theorem}\label{comm_fin_thm} If $\mathcal X$, $\widetilde{\mathcal X}$ are locally compact spaces, and $\pi: \widetilde{\mathcal X}\to \mathcal X$ is a surjective continuous map, then the following conditions are equivalent: \begin{enumerate} \item [(i)] The map $\pi: \widetilde{\mathcal X}\to \mathcal X$ is a finite-fold regular covering, \item[(ii)] There is the natural noncommutative finite-fold covering $\left(C_0\left(\mathcal X \right), C_0\left(\widetilde{\mathcal X} \right), G \right)$. \end{enumerate} \end{theorem} \begin{proof} (i)=>(ii) We need to check that $\left(C_0\left(\mathcal X \right), C_0\left(\widetilde{\mathcal X} \right), G \right)$ satisfies the conditions (a), (b) of the Definition \ref{fin_def}. (a) The covering $\pi$ is regular, so from the Proposition \ref{reg_cov_prop} it turns out that ${\mathcal X} = \widetilde{\mathcal X}/G$ where $G = G\left( \widetilde{\mathcal X}~|~{\mathcal X}\right)$ is the covering group. From ${\mathcal X} = \widetilde{\mathcal X}/G$ it follows that $C_0\left( {\mathcal X}\right) = C_0\left( \widetilde{\mathcal X}\right)^G$. (b) The space $\mathcal X$ is locally compact, so any $x \in \mathcal X$ has a compact neighborhood $\overline{ \mathcal U }$. The interior $ \mathcal U \subset \overline{ \mathcal U }$, i.e. the maximal open subset of $\overline{ \mathcal U }$, is an open neighborhood of $x$.
So there is a family of open subsets $\left\{\mathcal U_\lambda \subset \mathcal X\right\}_{\lambda \in \Lambda}$ such that \begin{itemize} \item $\mathcal X = \bigcup_{\lambda \in \Lambda} \mathcal U_\lambda$, \item For any $\lambda \in \Lambda$ the closure $\overline{\mathcal U}_\lambda$ of $\mathcal U_\lambda$ in $\mathcal X$ is compact. \end{itemize} Since $\pi$ is a finite-fold covering the set $\pi^{-1}\left(\overline{\mathcal U}_\lambda \right)$ is compact for any $\lambda \in \Lambda$. If $\widetilde{I}_\lambda \subset C_0\left(\widetilde{\mathcal X} \right)$ is the closed ideal given by $$ \widetilde{I}_\lambda \stackrel{\mathrm{def}}{=}C_0\left( \pi^{-1}\left( \mathcal U_\lambda \right)\right) \cong \left\{\widetilde{a} \in C_0\left( \widetilde{\mathcal X}\right) ~|~ \widetilde{a}\left(\widetilde{\mathcal X} \backslash \pi^{-1}\left( \mathcal U_\lambda\right) \right)= \left\{0\right\} \right\} $$ then $\widetilde{I}_\lambda \subset C\left(\pi^{-1}\left(\overline{\mathcal U}_\lambda \right)\right)$ is an essential ideal of the unital algebra $C\left(\pi^{-1}\left( \overline{\mathcal U}_\lambda \right) \right)$. From $G \pi^{-1}\left( \mathcal U_\lambda\right) = \pi^{-1}\left( \mathcal U_\lambda\right)$ it follows that $G\widetilde{I}_\lambda = \widetilde{I}_\lambda$. If $I_\lambda = C_0\left( \mathcal X \right) \bigcap \widetilde{I}_\lambda$ then one has \begin{equation}\nonumber \begin{split} {I}_\lambda = C_0\left(\mathcal U_\lambda \right) \cong \left\{a \in C_0\left( {\mathcal X}\right) ~|~ a\left({\mathcal X} \backslash \mathcal U_\lambda \right)= \left\{0\right\} \right\}, \end{split} \end{equation} hence ${I}_\lambda$ is an essential ideal of the unital algebra $C\left(\overline{\mathcal U}_\lambda \right)$. The restriction $$\pi|_{\pi^{-1}\left( \overline{ \mathcal U }_\lambda\right)}:\pi^{-1}\left( \overline{ \mathcal U }_\lambda\right) \to \overline{ \mathcal U }_\lambda$$ is a finite-fold covering of compact spaces, so from the Theorem \ref{pavlov_troisky_thm} it follows that $$\left(C\left(\overline{\mathcal U}_\lambda \right), C\left(\pi^{-1}\left(\overline{\mathcal U}_\lambda \right) \right), G \right)$$ is a unital noncommutative finite-fold covering. It turns out that $$\left( I_\lambda, \widetilde{I}_\lambda, G \right)= \left(C_0\left( \mathcal U_\lambda \right), C_0\left( \pi^{-1}\left(\mathcal U_\lambda \right)\right), G \right)$$ is a noncommutative finite-fold covering with compactification. From $\mathcal X = \bigcup_{\lambda \in \Lambda} \mathcal U_\lambda$ (resp. $\widetilde{\mathcal X} = \bigcup_{\lambda \in \Lambda} \pi^{-1}\left( \mathcal U_\lambda\right) $) it turns out that $\bigcup_{\lambda \in \Lambda} I_\lambda$ (resp. $\bigcup_{\lambda \in \Lambda} \widetilde{I}_\lambda$) is a dense subset of $C_0\left( \mathcal X\right) $ (resp. $C_0\left( \widetilde{\mathcal X}\right) $). (ii)=>(i) Let $\left\{\widetilde{I}_\lambda \subset C_0\left(\widetilde{\mathcal X} \right) \right\}_{\lambda \in \Lambda}$ be a family of closed ideals from the condition (b) of the Definition \ref{fin_def}, and let ${I}_\lambda = \widetilde{I}_\lambda \bigcap C_0\left({ \mathcal X } \right) $.
If $\widetilde{\mathcal U}_\lambda \subset \widetilde{\mathcal X}$ is given by $$ \widetilde{\mathcal U}_\lambda \stackrel{\mathrm{def}}{=}\left\{\widetilde{x} \in \widetilde{\mathcal X}~|~ \exists~ \widetilde{a}\in \widetilde{I}_\lambda; ~\widetilde{a}\left( \widetilde{x}\right) \neq 0\right\} $$ then from $G \widetilde{I}_\lambda = \widetilde{I}_\lambda$ it turns out that $G\widetilde{\mathcal U}_\lambda=\widetilde{\mathcal U}_\lambda$. If $\mathcal U_\lambda \subset \mathcal X$ is given by $$ \mathcal U_\lambda = \left\{x \in\mathcal X~|~\exists a \in I_\lambda;~~ a\left( x\right) \neq 0 \right\} $$ then $\mathcal U_\lambda = \pi \left(\widetilde{\mathcal U}_\lambda \right)$ and $\widetilde{\mathcal U}_\lambda= \pi^{-1}\left(\mathcal U_\lambda \right) $, hence there is the natural *-isomorphism $$ \widetilde{I}_\lambda \cong C_0\left(\pi^{-1}\left( \mathcal U_\lambda\right) \right). $$ Any covering is an open map, so if $\overline{ \mathcal U }_\lambda$ is the closure of $\mathcal U_\lambda$ in $\mathcal X$ then $\pi^{-1}\left(\overline{ \mathcal U }_\lambda \right)$ is the closure of $\widetilde{\mathcal U}_\lambda$ in $\widetilde{\mathcal X}$. The following conditions hold: \begin{itemize} \item $\overline{ \mathcal U }_\lambda$ (resp. $\pi^{-1}\left( \overline{ \mathcal U }_\lambda\right)$) is a compactification of $\mathcal U_\lambda$ (resp. $\pi^{-1}\left( { \mathcal U }_\lambda\right)$), \item $I_\lambda = C_0\left( \mathcal U_\lambda\right) $ (resp. $\widetilde{I}_\lambda =C_0\left( \pi^{-1}\left( { \mathcal U }_\lambda\right)\right) $) is an essential ideal of $C\left( \overline{ \mathcal U }_\lambda\right) $ (resp. $C\left( \pi^{-1}\left( \overline{ \mathcal U }_\lambda\right)\right) $), \item The triple $\left(C\left( \overline{ \mathcal U }_\lambda\right), C\left( \pi^{-1}\left( \overline{ \mathcal U }_\lambda\right)\right), G \right)$ is a unital noncommutative finite-fold covering. \end{itemize} It follows that the triple $ \left(I_\lambda, \widetilde{I}_\lambda, G \right)=\left(C_0\left(\mathcal U_\lambda \right), C_0\left(\pi^{-1}\left(\mathcal U_\lambda \right) \right), G \right) $ is a noncommutative finite-fold covering with compactification, hence from the Lemma \ref{comm_fin_lem} it follows that the natural map $\pi^{-1} \left( \mathcal U_\lambda\right) \to \mathcal U_\lambda$ is a covering. From (b) of the Definition \ref{fin_def} it follows that $\bigcup_{\lambda \in \Lambda} I_\lambda$ is a dense subset of $C_0\left( \mathcal X\right)$, hence $\mathcal X = \bigcup_{\lambda \in \Lambda} \mathcal U_\lambda$, and from the Lemma \ref{comm_fin_top_lem} it follows that $\pi: \widetilde{\mathcal X}\to \mathcal X$ is a finite-fold covering. \end{proof} \subsection{Infinite coverings} \paragraph*{} This section supplies a purely algebraic analog of the topological construction given by the Subsection \ref{inf_to}. Suppose that \begin{equation*} \mathfrak{S}_\mathcal{X} = \left\{\mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ... \right\} \end{equation*} is a topological finite covering sequence. From the Theorem \ref{comm_fin_thm} it turns out that $\mathfrak{S}_{C_0\left( \mathcal{X}\right)}=\left\{C_0\left( \mathcal{X}_0\right)\to ... \to C_0\left( \mathcal{X}_n\right) \to ...\right\} $ is an algebraical finite covering sequence. The Theorem \ref{direct_lim_state_thm} and the Corollary \ref{direct_lim_state_cor} below give the construction of $\widehat{C_0\left(\mathcal X \right)} = \varinjlim C_0\left( \mathcal{X}_n\right) $.
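\paragraph*{} The following elementary example of a topological finite covering sequence is included for orientation only; it is not used in the proofs. \begin{example} Let $\mathcal{X}_n = S^1 = \left\{z \in \mathbb{C} ~|~ \left|z\right| = 1\right\}$ for all $n \in \mathbb{N}^0$, and let every map $\mathcal{X}_n \to \mathcal{X}_{n-1}$ be given by $z \mapsto z^p$ where $p > 1$ is a fixed integer. Every composition $\mathcal{X}_n \to \mathcal{X}_0$ is a regular $p^n$-fold covering with $G\left( \mathcal{X}_n~|~\mathcal{X}_0\right) \cong \mathbb{Z}/p^n\mathbb{Z}$, so $\widehat{G} = \varprojlim G\left( \mathcal{X}_n~|~\mathcal{X}_0\right) \cong \mathbb{Z}_p$ is the (additive) group of $p$-adic integers. The corresponding algebraical finite covering sequence is $\mathfrak{S}_{C\left(S^1 \right)}=\left\{C\left(S^1\right) \to C\left(S^1\right) \to ...\right\}$ where every *-homomorphism is induced by $z \mapsto z^p$. \end{example}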
\begin{theorem}\label{direct_lim_state_thm}\cite{takeda:inductive} If a $C^*$-algebra $A$ is a $C^*$-inductive limit of $A_\gamma$ ($\gamma \in \Gamma$), the state space $\Omega$ of $A$ is homeomorphic to the projective limit of the state spaces $\Omega_\gamma$ of $A_\gamma$. \end{theorem} \begin{corollary}\label{direct_lim_state_cor}\cite{takeda:inductive} If a commutative $C^*$-algebra $A$ is a $C^*$-inductive limit of the commutative $C^*$-algebras $A_\gamma$ ($\gamma \in \Gamma$), the spectrum $\mathcal X$ of $A$ is the projective limit of the spectra $\mathcal X_\gamma$ of $A_\gamma$ ($\gamma \in \Gamma$). \end{corollary} \begin{empt} From the Corollary \ref{direct_lim_state_cor} it turns out that $\widehat{C_0\left(\mathcal X \right)} = C_0\left(\widehat{\mathcal X} \right)$ where $\widehat{\mathcal X}= \varprojlim \mathcal X_n$. If $\overline{ \mathcal X}$ is the disconnected inverse limit of $\mathfrak{S}_\mathcal{X}$ then there is the natural bicontinuous map $f:\overline{ \mathcal X} \to \widehat{ \mathcal X}$. The map induces the injective *-homomorphism $C_0\left(\widehat{\mathcal X} \right) \hookrightarrow C_b\left(\overline{\mathcal X} \right)$. It follows that there is the natural inclusion of enveloping von Neumann algebras $C_0\left(\widehat{\mathcal X} \right)'' \hookrightarrow C_0\left(\overline{\mathcal X} \right)''$. Denote by $G_n = G\left(\mathcal X_n~|~\mathcal X \right)$ the groups of covering transformations, and let $\widehat{G} = \varprojlim G_n$. Denote by $\overline{\pi}:\overline{ \mathcal X} \to \mathcal X$, $\overline{\pi}_n:\overline{ \mathcal X} \to \mathcal X_n$, $\pi^n: \mathcal X_n \to \mathcal X$, $\pi^m_n: \mathcal X_m \to \mathcal X_n$ ($m > n$) the natural covering projections. \end{empt} \begin{lemma}\label{comm_c_cp_lem} The following conditions hold: \begin{enumerate} \item [(i)] If $\overline{ \mathcal U} \subset \overline{ \mathcal X}$ is a compact set then there is $N \in \mathbb{N}$ such that for any $n \ge N$ the restriction $\overline{\pi}_n|_{\overline{ \mathcal U}}:\overline{ \mathcal U} \xrightarrow{\approx} \overline{\pi}_n\left( {\overline{ \mathcal U}}\right)$ is a homeomorphism, \item[(ii)] If $\overline{a} \in C_c\left(\overline{ \mathcal X }\right)_+ $ is a positive element then there is $N \in \mathbb{N}$ such that for any $n \ge N$ the following condition holds \begin{equation}\label{comm_a_eqn} a_n\left(\overline{ \pi }_n \left( \overline{x}\right)\right) =\left\{ \begin{array}{c l} \overline{ a}\left( \overline{x}\right) & \overline{x} \in \supp \overline{a} ~\wedge~ \overline{ \pi }_n \left( \overline{x}\right) \in \supp a_n \\ 0 & \overline{ \pi }_n \left( \overline{x}\right) \notin \supp a_n \end{array}\right. \end{equation} where $$ a_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)}g\overline{a}. $$ \end{enumerate} \end{lemma} \begin{proof} (i) The set $\overline{ \mathcal U}$ is compact, hence $\overline{ \mathcal U}$ is a finite disjoint union of connected compact sets, i.e. $$ \overline{ \mathcal U}= \bigsqcup_{j = 1}^M \overline{ \mathcal V }_j. $$ Any continuous image of a compact set is compact, so for any $n \in \mathbb{N}$ the set $\overline{\pi}_n\left( {\overline{ \mathcal U}}\right)$ is compact. For any $n \in \mathbb{N}$ denote by $c_n \in \mathbb{N}$ the number of connected components of $\overline{\pi}_n\left( {\overline{ \mathcal U}}\right)$.
If $n > m$ then any connected component of $\overline{\pi}_n\left( {\overline{ \mathcal U}}\right) $ is mapped into a connected component of $\overline{\pi}_m\left( {\overline{ \mathcal U}}\right)$, hence $c_n \ge c_m$. Clearly $c_n \le M$. The sequence $\left\{c_n\right\}_{n \in \mathbb{N}}$ is non-decreasing and $c_n \le M$, so there is $N \in \mathbb{N}$ such that $c_N = M$. For any $n > N$ the set $\overline{\pi}_n\left( {\overline{ \mathcal U}}\right)$ is mapped homeomorphically onto $\overline{\pi}_N\left( {\overline{ \mathcal U}}\right)$, hence one has a sequence of homeomorphisms $$ \dots\cong \overline{\pi}_n\left( {\overline{ \mathcal U}}\right) \cong \dots \cong \overline{ \mathcal U}, $$ and it follows that $\overline{\pi}_n|_{\overline{ \mathcal U}}:\overline{ \mathcal U} \xrightarrow{\approx} \overline{\pi}_n\left( {\overline{ \mathcal U}}\right)$ is a homeomorphism. (ii) The set $\supp \overline{a} = \overline{ \mathcal U}$ is compact, so from (i) and $\overline{a}\ge 0$ it follows that there is $N \in \mathbb{N}$ such that $\supp \overline{a}$ is mapped homeomorphically onto $\supp a_N$. It turns out that if $$ a_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)}g\overline{a} $$ and $n \ge N$ then $a_n$ is given by \eqref{comm_a_eqn}. \end{proof} \begin{lemma}\label{comm_c_c_lem} If $\mathcal X$ is a locally compact Hausdorff space then any positive element $\overline{a} \in C_c\left(\overline{ \mathcal X }\right)_+ $ is special. \end{lemma} \begin{proof} From the Lemma \ref{comm_c_cp_lem} it follows that there is $N \in \mathbb{N}$ such that the equation \eqref{comm_a_eqn} holds. It turns out that for any $z \in C_0\left(\mathcal X \right)$, $n \ge N$ and $f_\varepsilon$ given by \eqref{f_eps_eqn} the series \begin{equation*} \begin{split} b_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \left(z \overline{a} z^*\right) ,\\ c_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \left(z \overline{a} z^*\right)^2,\\ d_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g f_\varepsilon \left(z \overline{a} z^*\right) \end{split} \end{equation*} are given by \begin{equation}\label{comm_bcd_eqn} \begin{split} b_n\left(\overline{ \pi }_n \left( \overline{x}\right)\right) =\left\{ \begin{array}{c l} z\left(\overline{ \pi }_n \left( \overline{x}\right)\right)\overline{ a}\left( \overline{x}\right) z^*\left(\overline{ \pi }_n \left( \overline{x}\right)\right) & \overline{x} \in \supp \overline{a} ~\wedge~ \overline{ \pi }_n \left( \overline{x}\right) \in \supp a_n \\ 0 & \overline{ \pi }_n \left( \overline{x}\right) \notin \supp a_n \end{array}\right.,\\ c_n\left(\overline{ \pi }_n \left( \overline{x}\right)\right) =\left\{ \begin{array}{c l} \left( z\left(\overline{ \pi }_n \left( \overline{x}\right)\right)\overline{ a}\left( \overline{x}\right) z^*\left(\overline{ \pi }_n \left( \overline{x}\right)\right)\right)^2 & \overline{x} \in \supp \overline{a} ~\wedge~ \overline{ \pi }_n \left( \overline{x}\right) \in \supp a_n \\ 0 & \overline{ \pi }_n \left( \overline{x}\right) \notin \supp a_n \end{array}\right.,\\ d_n\left(\overline{ \pi }_n \left( \overline{x}\right)\right) =\left\{ \begin{array}{c l} f_\varepsilon\left( z\left(\overline{ \pi }_n \left( \overline{x}\right)\right)\overline{ a}\left( \overline{x}\right) z^*\left(\overline{ \pi }_n \left( \overline{x}\right)\right)\right) & \overline{x} \in \supp \overline{a} ~\wedge~ \overline{ \pi }_n \left( \overline{x}\right) \in \supp a_n \\ 0 & \overline{ \pi }_n \left(
\overline{x}\right) \notin \supp a_n \end{array}\right.. \end{split} \end{equation} From \eqref{comm_bcd_eqn} it turns out that $ b_n^2 = c_n$, i.e. $\overline{ a}$ satisfies the condition (c) of the Definition \ref{special_el_defn}. Moreover from \eqref{comm_a_eqn} and \eqref{comm_bcd_eqn} it follows that $a_n,~ b_n, ~c_n,~ d_n\in C_0\left(\mathcal X_n \right)$ for any $n \ge N$. If $n < N$ then \begin{equation*} \begin{split} a_n = \sum_{g\in G\left( \mathcal X_N~|~\mathcal X_n\right)} g a_N,\\ b_n = \sum_{g\in G\left( \mathcal X_N~|~\mathcal X_n\right)} g b_N,\\ c_n = \sum_{g \in G\left( \mathcal X_N~|~\mathcal X_n\right)} g c_N,\\ d_n = \sum_{g \in G\left( \mathcal X_N~|~\mathcal X_n\right)} g d_N. \end{split} \end{equation*} The above sums are finite, so $a_n,~ b_n, ~c_n,~ d_n\in C_0\left(\mathcal X_n \right)$ for any $n \in \mathbb{N}^0$, i.e. $\overline{ a}$ satisfies the conditions (a), (b) of the Definition \ref{special_el_defn}. \end{proof} \begin{corollary}\label{comm_c_c_cor} If $\overline{A}$ is a disconnected inverse noncommutative limit of $$\mathfrak{S}_{C_0\left( \mathcal{X}\right)}=\left\{C_0\left( \mathcal{X}_0\right)\to ... \to C_0\left( \mathcal{X}_n\right) \to ...\right\} $$ then $C_0\left(\overline{\mathcal X} \right)\subset\overline{A}$. \end{corollary} \begin{proof} From the Lemma \ref{comm_c_c_lem} it follows that $C_c\left(\overline{\mathcal X} \right) \subset \overline{A}$, and taking into account the Definition \ref{c_c_def_1} one has $ C_0\left(\overline{\mathcal X} \right)\subset\overline{A}$. \end{proof} \begin{lemma}\label{comm_main_lem} Suppose that $\mathcal X$ is a locally compact Hausdorff space. Let $\overline{a} \in C_0\left(\overline{ \mathcal X }\right)''_+$ be such that the following conditions hold: \begin{enumerate} \item[(a)] If $f_\varepsilon$ is given by \eqref{f_eps_eqn} then the following series \begin{equation*} \begin{split} a_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \overline a,\\ b_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \overline a^2,\\ c_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g f_\varepsilon\left( \overline a\right) \end{split} \end{equation*} are strongly convergent and $a_n, b_n, c_n \in C_0\left(\mathcal X_n \right)$, \item[(b)] For any $\varepsilon > 0$ there is $N \in \mathbb{N}$ such that \begin{equation*} \begin{split} \left\|a^2_n-b_n\right\| < \varepsilon; ~\forall n \ge N. \end{split} \end{equation*} \end{enumerate} Then $\overline{a} \in C_0\left(\overline{ \mathcal X }\right)_+$. \end{lemma} \begin{proof} The dual space $C_0\left(\overline{ \mathcal X }\right)^*$ of $C_0\left(\overline{ \mathcal X }\right)$ is the space of Radon measures on $\overline{ \mathcal X }$. If $\overline{f}: \overline{ \mathcal X } \to \mathbb{R}$ is given by \begin{equation*} \begin{split} \overline{f}\left( \overline{x}\right) = \lim_{n \to \infty} a_n\left(\overline{\pi}_n\left( \overline{x}\right)\right)= \inf_{n \in \mathbb{N}} a_n\left(\overline{\pi}_n\left( \overline{x}\right)\right) \end{split} \end{equation*} then from the Proposition \ref{env_alg_sec_dual} and the Lemma \ref{stong_conv_inf_lem} it follows that $\overline{f}$ represents $\overline{a}$, i.e.
the following conditions hold:
\begin{itemize}
\item The function $\overline{f}$ defines the following functional
\begin{equation*}\label{comm_func_eqn}
\begin{split}
C_0\left(\overline{ \mathcal X }\right)^* \to \mathbb{C},\\
\mu \mapsto \int_{ \overline{ \mathcal X }} \overline{f}~d\mu
\end{split}
\end{equation*}
where $\mu$ is a Radon measure on $\overline{ \mathcal X }$,
\item The functional corresponds to $\overline{ a}\in C_0\left(\overline{ \mathcal X }\right)^{**}=C_0\left(\overline{ \mathcal X }\right)''$.
\end{itemize}
If $m > n$ then
\begin{equation}\label{comm_an_bn_eqn}
\begin{split}
a_n = \sum_{g \in G\left(\mathcal X_m ~|~ \mathcal X_n\right) } g a_m,\\
b_n = \sum_{g \in G\left(\mathcal X_m ~|~ \mathcal X_n\right) } g b_m.
\end{split}
\end{equation}
Let $M \in \mathbb{N}$ be such that for any $n \ge M$ the following condition holds
\begin{equation}\label{comm_delta_eqn}
\left\|a^2_n - b_n\right\| < 2 \varepsilon^2.
\end{equation}
Let $n > M$, let $p_n = \pi^{n}_M: \mathcal X_n \to\mathcal X_M$, and let $\widetilde{x}_1, \widetilde{x}_2 \in \mathcal X_{n}$ be such that
\begin{equation}\label{comm_neq_eqn}
\begin{split}
\widetilde{x}_1\neq \widetilde{x}_2,\\
p_n\left( \widetilde{x}_1\right) = p_n\left( \widetilde{x}_2\right)=x,\\
a_n\left( \widetilde{x}_1\right) \ge \varepsilon; ~a_n\left( \widetilde{x}_2\right) \ge \varepsilon.
\end{split}
\end{equation}
From \eqref{comm_an_bn_eqn} it turns out that
\begin{equation*}
\begin{split}
a_M\left(x \right) = \sum_{\widetilde{x} \in p_n^{-1}\left( x\right) } a_{n}\left( \widetilde{x}\right),\\
b_M\left(x \right) = \sum_{\widetilde{x} \in p_n^{-1}\left( x\right) } b_{n}\left( \widetilde{x}\right).
\end{split}
\end{equation*}
From the above equation and $a^2_n \ge b_n$ it turns out that
\begin{equation*}
\begin{split}
a_M^2\left(x \right)= \sum_{\widetilde{x} \in p_n^{-1}\left( x\right)} a_n^2\left( \widetilde{x}\right) + \sum_{\substack{\left(\widetilde{x}', \widetilde{x}'' \right)\in p_n^{-1}\left(x \right)\times p_n^{-1}\left(x \right)\\ \widetilde{x}' \neq \widetilde{x}'' }}~ a_n\left(\widetilde{x}' \right) a_n\left(\widetilde{x}'' \right)\ge\\
\ge \sum_{\widetilde{x} \in p_n^{-1}\left( x\right)} b_n\left( \widetilde{x}\right) + a_n\left( \widetilde{x}_1\right) a_n\left( \widetilde{x}_2\right)+a_n\left( \widetilde{x}_2\right) a_n\left( \widetilde{x}_1\right)=\\
= b_M\left(x \right) + 2 a_n\left( \widetilde{x}_1\right) a_n\left( \widetilde{x}_2\right).
\end{split}
\end{equation*}
Taking into account $a_n\left( \widetilde{x}_1\right) \ge \varepsilon$, $a_n\left( \widetilde{x}_2\right) \ge \varepsilon$ one has
\begin{equation*}
\begin{split}
a^2_M\left(x \right) - b_M\left(x \right) \ge 2 \varepsilon^2,\\
\left\|a^2_M - b_M\right\| \ge 2 \varepsilon^2.
\end{split}
\end{equation*}
So \eqref{comm_neq_eqn} contradicts \eqref{comm_delta_eqn}, and it follows that
\begin{equation}\label{comm_x1_eq_x2_eqn}
\begin{split}
p_n\left( \widetilde{x}_1\right) = p_n\left( \widetilde{x}_2\right)=x ~\text{and}~ a_n\left( \widetilde{x}_1\right) \ge \varepsilon~\text{and}~a_n\left( \widetilde{x}_2\right) \ge \varepsilon \Rightarrow \widetilde{x}_1 = \widetilde{x}_2.
\end{split}
\end{equation}
If $f_\varepsilon$ is given by \eqref{f_eps_eqn} and
$$
c_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g f_\varepsilon\left( \overline a\right)
$$
then
\begin{equation*}
\begin{split}
\supp c_n = \left\{x \in \mathcal X_n~|~\inf_{m > n} ~~\max_{\widetilde x \in \left(\pi^m_n\right)^{-1}\left( x\right) } a_m\left(\widetilde x \right)\ge \varepsilon \right\} =\\
= \left\{x \in \mathcal X_n~|~\exists~ \overline{x}\in \overline{ \mathcal X }; ~\overline{\pi}_n\left(\overline{x} \right) = x ~\text{and}~\overline{f}\left(\overline{x} \right) \ge \varepsilon \right\}.
\end{split}
\end{equation*}
Indeed $f_\varepsilon\left( \overline{ a}\right) $ as a functional on $C_0\left(\overline{ \mathcal X } \right)^*$ is represented by the following function
\begin{equation*}
\begin{split}
\overline{f}_\varepsilon: \overline{ \mathcal X } \to \mathbb{R},\\
\overline{ x}\mapsto f_\varepsilon\left(\overline{f}\left(\overline{ x} \right) \right).
\end{split}
\end{equation*}
From $\widetilde x \in \supp c_n$ it turns out that $a_n\left(\widetilde x \right) \ge \varepsilon$, and taking into account \eqref{comm_x1_eq_x2_eqn} one concludes that the restriction $\pi^n_M|_{\supp c_n}$ is an injective map. Clearly $\pi^n_M\left(\supp c_n \right) = \supp c_M$, so there is a bijection $\supp c_n \xrightarrow{\approx}\supp c_M$. The map $\pi^n_M$ is a covering, and it is known \cite{spanier:at} that any covering is an open map. Any bijective open map is a homeomorphism, hence one has a sequence of homeomorphisms
\begin{equation}\label{comm_homeo_sec_eqn}
\supp c_M \xleftarrow{\approx}\dots \xleftarrow{\approx}\supp c_n \xleftarrow{\approx}\dots
\end{equation}
If $\overline{ \mathcal U } \subset \overline{ \mathcal X }$ is given by
$$
\overline{ \mathcal U } = \bigcap_{n = M}^\infty \overline{ \pi }^{-1}_n\left(\supp c_n \right)
$$
then from \eqref{comm_homeo_sec_eqn} it turns out that $ \overline \pi_M$ maps $\overline{ \mathcal U }$ homeomorphically onto $\supp c_M$. Moreover the following condition holds
\begin{equation*}
f_\varepsilon\left(\overline{a} \right) \left( \overline{x}\right) =\left\{
\begin{array}{c l}
c_M\left(\overline \pi_M\left(\overline{x} \right) \right) &\overline{x} \in \overline{ \mathcal U } \\
0 &\overline{x} \notin \overline{ \mathcal U }
\end{array}\right..
\end{equation*}
From the above equation it follows that $f_\varepsilon\left(\overline{a} \right)$ is a continuous function, i.e. $f_\varepsilon\left(\overline{a} \right)\in C_b\left(\overline{ \mathcal X } \right) $. From the Definition \ref{c_c_def_2} it turns out that $D = \left\{x \in \mathcal X_M~|~a_M\left(x\right)\ge \varepsilon \right\}$ is compact, therefore the closed subset $\supp c_M \subset D$ is compact, hence $\overline{ \mathcal U }= \supp f_\varepsilon\left(\overline{a} \right)\approx \supp c_M$ is also compact. It turns out that $f_\varepsilon\left(\overline{a} \right)\in C_c\left(\overline{ \mathcal X } \right) $. From $\left\| f_\varepsilon\left(\overline{a} \right)- \overline{a} \right\| \le \varepsilon$ it follows that $\overline{a}= \lim_{\varepsilon \to 0}f_\varepsilon\left(\overline{a} \right)$, and from the Definition \ref{c_c_def_1} it turns out that $\overline{a} \in C_0\left(\overline{ \mathcal X } \right)$.
\end{proof}
\begin{corollary}\label{comm_main_cor}
If $\mathfrak{S}_{C_0\left( \mathcal{X}\right)}=\left\{C_0\left( \mathcal{X}_0\right)\to ...
\to C_0\left( \mathcal{X}_n\right) \to ...\right\}$ and $\overline{A}$ is a disconnected inverse noncommutative limit of $\downarrow\mathfrak{S}_{C_0\left( \mathcal{X}\right)}$ then the following conditions hold:
\begin{enumerate}
\item [(i)] Any special element $\overline{a} \in C_0\left(\overline{ \mathcal X }\right)''_+$ of $\mathfrak{S}_{C_0\left( \mathcal{X}\right)}$ lies in $C_0\left(\overline{ \mathcal X }\right)$, i.e. $\overline{a} \in C_0\left(\overline{ \mathcal X }\right)$,
\item[(ii)] $C_0\left(\overline{\mathcal X} \right)\subset \overline{A} $.
\end{enumerate}
\end{corollary}
\begin{proof}
(i) Let $\left\{e_\lambda \in C_0\left( \mathcal{X}\right) \right\}_{\lambda \in \Lambda}$ be an approximate unit of $C_0\left( \mathcal{X}\right)$. From the Definition \ref{special_el_defn} it follows that $\overline{b}_\lambda = e_\lambda \overline{a} e_\lambda$ satisfies the conditions of the Lemma \ref{comm_main_lem}, hence from the Lemma \ref{comm_main_lem} it turns out that $\overline{b}_\lambda \in C_0\left(\overline{\mathcal X} \right)$. From the $C^*$-norm limit $ \lim_{\lambda \in \Lambda} \overline{b}_\lambda = \overline{a} $ it follows that $\overline{a}\in C_0\left(\overline{\mathcal X} \right)$. \\
(ii) Follows from (i) and the Definitions \ref{special_el_defn}, \ref{main_defn_full}.
\end{proof}
\begin{empt}\label{comm_transitive_constr}
Let $\widetilde{\mathcal X} \subset \overline{\mathcal X}$ be a connected component of $ \overline{\mathcal X}$ and suppose that $$G \subset G\left(\varprojlim C_0\left(\mathcal X_n \right) ~|~C_0\left( \mathcal X\right) \right)$$ is maximal among the subgroups $G'$ such that $G'\widetilde{\mathcal X} = \widetilde{\mathcal X}$. If $J\subset\widehat{G}$ is a set of representatives of $\widehat{G}/G$ then from \eqref{top_disconnected_repr_eqn} it follows that
\begin{equation*}
\overline{\mathcal X}= \bigsqcup_{g \in J} g \widetilde{\mathcal X}
\end{equation*}
and $C_0\left( \overline{\mathcal X}\right) $ is the $C^*$-norm completion of the direct sum
\begin{equation}\label{comm_transitive_eqn}
\bigoplus _{g \in J} C_0\left( g\widetilde{\mathcal X} \right) .
\end{equation}
\end{empt}
\begin{theorem}\label{comm_main_thm}
If $\mathfrak{S}_{\mathcal X} = \left\{\mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\} \in \mathfrak{FinTop}$ and
$$\mathfrak{S}_{C_0\left(\mathcal{X}\right)}= \left\{C_0(\mathcal{X})=C_0(\mathcal{X}_0)\to ... \to C_0(\mathcal{X}_n) \to ...\right\} \in \mathfrak{FinAlg}$$
is an algebraical finite covering sequence then the following conditions hold:
\begin{enumerate}
\item [(i)] $\mathfrak{S}_{C_0\left(\mathcal{X}\right)}$ is good,
\item[(ii)] There are isomorphisms:
\begin{itemize}
\item $\varprojlim \downarrow \mathfrak{S}_{C_0\left(\mathcal{X}\right)} \approx C_0\left(\varprojlim \downarrow \mathfrak{S}_{\mathcal X}\right)$;
\item $G\left(\varprojlim \downarrow \mathfrak{S}_{C_0\left(\mathcal{X}\right)}~|~ C_0\left(\mathcal X\right)\right) \approx G\left(\varprojlim \downarrow \mathfrak{S}_{\mathcal{X}}~|~ \mathcal X\right)$.
\end{itemize}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof of this theorem uses the following notation:
\begin{itemize}
\item The topological inverse limit $\widetilde{\mathcal X}= \varprojlim \downarrow \mathfrak{S}_{\mathcal{X}}$;
\item The limit in the category of topological spaces and continuous maps $\widehat{\mathcal X} = \varprojlim \mathcal X_n$;
\item The disconnected covering space $\overline{\mathcal X}$ of $\mathfrak{S}_{\mathcal X}$;
\item The disconnected covering algebra $\overline{A}$ of $\mathfrak{S}_{C_0\left(\mathcal{X}\right)}$;
\item A connected component $\widetilde{A}\subset \overline{A}$;
\item The disconnected $\overline{G}_{\mathcal X} = \varprojlim G\left(\mathcal X_n ~|~ \mathcal X \right)$ and the connected $G_{\mathcal X} = G\left(\widetilde{\mathcal X} ~| ~\mathcal X \right)=G\left(\varprojlim \downarrow \mathfrak{S}_{\mathcal{X}}~|~ \mathcal X\right)$ covering groups of $\mathfrak{S}_{\mathcal X}$;
\item The disconnected group $\overline{G}_{C_0\left(\mathcal X\right)} = \varprojlim G\left(C_0\left(\mathcal X_n\right) ~| ~C_0\left(\mathcal X\right) \right)$ and the $\widetilde{A}$-invariant group $G_A$.
\end{itemize}
From the Corollary \ref{comm_c_c_cor} it follows that $C_0\left(\overline{\mathcal X}\right)\subset \overline{A}$. From the Corollary \ref{comm_main_cor} it turns out that $\overline{A} \subset C_0\left(\overline{\mathcal X}\right)$, hence $\overline{A}= C_0\left(\overline{\mathcal X}\right)$. If $~J \subset \overline{G}_{\mathcal{X}}$ is a set of representatives of $\overline{G}_{\mathcal X}/G\left(\widetilde{\mathcal X}~|~ \mathcal X\right)$ then $\overline{\mathcal X} = \bigsqcup_{\overline{g}\in J}\overline{g}\widetilde{\mathcal X}$ is the disjoint union of connected homeomorphic spaces, i.e. $\overline{g}\widetilde{\mathcal X}\xrightarrow{\approx}\widetilde{\mathcal X}$.
(i) We need to check conditions (a) - (c) of the Definition \ref{good_seq_defn}. $\overline{A}= C_0\left(\overline{\mathcal X}\right)$ is the $C^*$-norm completion of the algebraic direct sum \eqref{comm_transitive_eqn}. Any maximal irreducible subalgebra of $\overline{A}$ is isomorphic to $C_0\left(\widetilde{\mathcal X}\right)$. The map $\widetilde{\mathcal X} \to {\mathcal X}_n$ is a covering for any $n \in \mathbb{N}$, so $C_0\left( \mathcal{X}_n\right) \hookrightarrow C_b\left(\widetilde{\mathcal X}\right)= M\left(C_0\left(\widetilde{\mathcal X}\right) \right)$ is an injective *-homomorphism. It follows that the natural *-homomorphism $C_0\left(\widehat{\mathcal X}\right)= \lim_{n \to \infty}C_0 \left( {\mathcal X}_n\right) \hookrightarrow C_b\left(\widetilde{\mathcal X}\right)=M\left( C_0\left(\widetilde{\mathcal X}\right)\right) $ is injective, i.e. condition (a) holds. The algebraic direct sum $\bigoplus_{\overline{g}\in J} \overline{g}C_0\left(\widetilde{\mathcal X}\right)$ is a dense subalgebra of $\overline{A}$, i.e. condition (b) holds. The homomorphism $ G\left(\widetilde{\mathcal X} ~| ~\mathcal X \right) \to G\left(\mathcal X_n ~| ~\mathcal X \right)$ is surjective for any $n \in \mathbb{N}$. From the isomorphisms
\begin{equation*}
\begin{split}
G_A \approx G\left(\widetilde{\mathcal X} ~| ~\mathcal X \right), \\
G\left(C_0\left( \mathcal X_n\right) ~| ~C_0\left( \mathcal X\right) \right) \approx G\left(\mathcal X_n ~| ~\mathcal X \right),
\end{split}
\end{equation*}
it turns out that $G_A \to G\left(C_0\left( \mathcal X_n\right) ~| ~C_0\left( \mathcal X\right) \right)$ is surjective, i.e. condition (c) holds.
\newline
(ii) From the proof of (i) it turns out that
\begin{equation*}
\begin{split}
\varprojlim \downarrow\mathfrak{S}_{\mathcal X} = \widetilde{\mathcal X};~~\widetilde{A} = C_0\left(\widetilde{\mathcal X}\right), \\
\varprojlim \downarrow \mathfrak{S}_{C_0\left(\mathcal{X}\right)} = \widetilde{A}= C_0\left(\widetilde{\mathcal X}\right)= C_0\left(\varprojlim \downarrow\mathfrak{S}_{\mathcal X} \right), \\
G\left(\varprojlim \downarrow \mathfrak{S}_{\mathcal{X}}~|~ \mathcal X\right) = G_{\mathcal X}=G_A = G\left(\varprojlim \downarrow \mathfrak{S}_{C_0\left(\mathcal{X}\right)}~|~ C_0\left(\mathcal X\right)\right).
\end{split}
\end{equation*}
\end{proof}
\section{Continuous trace $C^*$-algebras and their coverings}\label{cont_tr_exm}
\paragraph*{}
Let $A$ be a $C^*$-algebra. For each positive $x\in A_+$ and each irreducible representation $\pi: A \to B\left( \mathcal{H}\right)$ the (canonical) trace of $\pi(x)$ depends only on the equivalence class of $\pi$, so that we may define a function $\hat x : \hat A \to [0,\infty]$ by $\hat x(t)=\mathrm{Tr}(\pi(x))$, where $\hat A$ is the space of equivalence classes of irreducible representations. From Proposition 4.4.9 of \cite{pedersen:ca_aut} it follows that $\hat x$ is a lower semicontinuous function in the Jacobson topology.
\begin{defn}\label{continuous_trace_c_a_defn}\cite{pedersen:ca_aut}
We say that an element $x\in A$ has {\it continuous trace} if $\hat x \in C_b(\hat A)$. We say that a $C^*$-algebra has {\it continuous trace} if the set of elements with continuous trace is dense in $A$.
\end{defn}
\begin{defn}\label{abelian_element_defn}\cite{pedersen:ca_aut}
A positive element $x$ in a $C^*$-algebra $A$ is {\it Abelian} if the subalgebra $xAx \subset A$ is commutative.
\end{defn}
\begin{defn}\cite{pedersen:ca_aut}
We say that a $C^*$-algebra $A$ is of type $I$ if each non-zero quotient of $A$ contains a non-zero Abelian element. If $A$ is even generated (as a $C^*$-algebra) by its Abelian elements we say that it is of type $I_0$.
\end{defn}
\begin{prop}\label{abelian_element_proposition}\cite{pedersen:ca_aut}
A positive element $x$ in a $C^*$-algebra $A$ is Abelian if $\mathrm{dim}~\pi(x) \le 1$ for every irreducible representation $\pi:A \to B(\mathcal{H})$.
\end{prop}
\begin{thm}\label{peder_id_thm} \cite{pedersen:ca_aut}
For each $C^*$-algebra $A$ there is a dense hereditary ideal $K(A)$, which is minimal among dense ideals.
\end{thm}
\begin{defn}
The ideal $K(A)$ from the Theorem \ref{peder_id_thm} is said to be the {\it Pedersen ideal of $A$}. Henceforth the Pedersen ideal shall be denoted by $K(A)$.
\end{defn}
\begin{prop}\label{continuous_trace_c_a_proposition}\cite{pedersen:ca_aut}
Let $A$ be a $C^*$-algebra with continuous trace. Then
\begin{enumerate}
\item[(i)] $A$ is of type $I_0$;
\item[(ii)] $\hat A$ is a locally compact Hausdorff space;
\item[(iii)] For each $t \in \hat A$ there is an Abelian element $x \in A$ such that $\hat x \in K(\hat A)$ and $\hat x(t) = 1$.
\end{enumerate}
The last condition is sufficient for $A$ to have continuous trace.
\end{prop}
\begin{rem}\label{ctr_is_ccr}
From \cite{dixmier_tr}, Proposition 10, II.9, it follows that a continuous trace $C^*$-algebra $A$ is always a $CCR$-algebra, i.e. for every irreducible representation $\rho:A \to B(H)$ the following condition holds
\begin{equation}\label{ccr_compact}
\rho\left(A\right) \approx \mathcal{K}.
\end{equation}
\end{rem}
\begin{thm}\label{ctr_di_do_thm}\cite{ros:ctr}
(Dixmier--Douady). Any stable separable algebra $A$ of continuous trace over a second-countable locally compact Hausdorff space $\mathcal{X}$ is isomorphic to $\Gamma_0\left( \mathcal{X}\right)$, the sections vanishing at infinity of a locally trivial bundle of algebras over $\mathcal{X}$, with fibres $\mathcal K$ and structure group $\mathrm{Aut}(\mathcal{K}) = PU = U/\mathbb{T}$. Classes of such bundles are in natural bijection with the \v{C}ech cohomology group $H^3(\mathcal{X}, \mathbb{Z})$. The 3-cohomology class $\delta(A)$ attached to (the stabilisation of) a continuous-trace algebra $A$ is called its Dixmier--Douady class.
\end{thm}
\begin{rem}
Any commutative $C^*$-algebra has continuous trace. So the case described in the Section \ref{top_chap} is a special case of the construction described in the Section \ref{cont_tr_exm}.
\end{rem}
\begin{empt}
For any $x \in \hat A$ denote by $\rho_x: A \to B\left(\mathcal{H} \right)$ a representation which corresponds to $x$. For any $a \in A$ denote by $\supp a \subset \hat A$ the closure of the set
$$
\supp a \stackrel{\mathrm{def}}{=}\left\{x \in \hat A~|~\rho_x\left(a \right)\neq 0 \right\}.
$$
\end{empt}
\subsection{Basic construction}
\paragraph*{}
Let $A$ be a continuous trace $C^*$-algebra such that the spectrum $\hat A= \mathcal X$ of $A$ is a second-countable locally compact Hausdorff space. For any open subset $\mathcal U \subset \mathcal X$ denote by
$$
A\left( \mathcal U\right) = \left\{a \in A~|~ \rho_x\left(a \right)= 0; ~\forall x \in \mathcal X \backslash \mathcal U \right\}
$$
where $\rho_x$ is an irreducible representation which corresponds to $x \in \mathcal X$. If $\mathcal V \subset \mathcal U$ then there is a natural inclusion $A\left( \mathcal V\right) \hookrightarrow A\left( \mathcal U\right)$. Let $\pi: \widetilde{\mathcal X}\to \mathcal X$ be a topological covering. Let $\widetilde{\mathcal U} \subset \widetilde{\mathcal X}$ be a connected open subset homeomorphically mapped onto $\mathcal U = \pi\left( \widetilde{\mathcal U}\right)$, and suppose that the closure of $\widetilde{\mathcal U}$ is compact. Denote by $\widetilde{A}\left( {\widetilde{\mathcal U}}\right) $ the algebra such that $\widetilde{A}\left( {\widetilde{\mathcal U}}\right) \approx A\left( \mathcal U\right)$. If $\widetilde{\mathcal V} \subset \widetilde{\mathcal U}$ and $\mathcal V = \pi\left( \widetilde{\mathcal V}\right)$ then the inclusion $A\left( \mathcal V\right) \hookrightarrow A\left( \mathcal U\right)$ naturally induces an inclusion $i^{\widetilde{\mathcal V}}_{\widetilde{\mathcal U}} :\widetilde{A}\left( {\widetilde{\mathcal V}}\right) \hookrightarrow \widetilde{A}\left( {\widetilde{\mathcal U}}\right)$. Let us consider the sets $\widetilde{\mathcal U}$ as indices, and let
\begin{equation*}
A' = \bigoplus_{\widetilde{\mathcal U}} \widetilde{A}\left( {\widetilde{\mathcal U}}\right) / I
\end{equation*}
where $\bigoplus$ means the algebraic direct sum of $C^*$-algebras and $I$ is the two-sided ideal generated by the elements $i^{\widetilde{\mathcal U}_1\bigcap\widetilde{\mathcal U}_2}_{\widetilde{\mathcal U}_1}\left(a \right) - i^{\widetilde{\mathcal U}_1\bigcap\widetilde{\mathcal U}_2}_{\widetilde{\mathcal U}_2}\left(a \right)$ for any $a \in \widetilde{A}\left(\widetilde{\mathcal U}_1\bigcap\widetilde{\mathcal U}_2 \right)$.
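Heuristically, $A'$ is assembled from local liftings of $A$ along the covering: the ideal $I$ identifies two lifted sections whenever they agree over the intersection of their domains. As an illustrative sanity check (a heuristic expectation, not part of the formal construction), if the Dixmier--Douady class vanishes, so that $A \cong C_0\left(\mathcal X\right)\otimes\mathcal K$ corresponds to the trivial bundle, one expects the construction to yield $A\left(\widetilde{\mathcal X}\right)\cong C_0\left(\widetilde{\mathcal X}\right)\otimes\mathcal K$, in accordance with the commutative case of the Section \ref{top_chap}.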
There is the natural $C^*$-norm of the direct sum on $\bigoplus_{\widetilde{\mathcal U}} \widetilde{A}\left( {\widetilde{\mathcal U}}\right)$, and let us define the norm on $A' = \bigoplus_{\widetilde{\mathcal U}} \widetilde{A}\left( {\widetilde{\mathcal U}}\right) / I$ given by
\begin{equation}\label{ctr_norm_eqn}
\left\| a + I\right\|= \inf_{a' \in I} \left\| a + a'\right\|; ~\forall \left( a + I\right) \in \bigoplus_{\widetilde{\mathcal U}} \widetilde{A}\left( {\widetilde{\mathcal U}}\right) / I.
\end{equation}
\begin{defn}\label{ctr_cov_defn}
If $A\left(\widetilde{\mathcal X} \right) $ is the completion of $A'$ with respect to the norm given by \eqref{ctr_norm_eqn} then we say that $A\left(\widetilde{\mathcal X} \right)$ is an \textit{induced by $\pi: \widetilde{\mathcal X}\to \mathcal X$ covering} of $A$.
\end{defn}
The action of $G\left(\widetilde{\mathcal X}~|~\mathcal X\right)$ on $\widetilde{\mathcal X}$ induces an action of $G\left(\widetilde{\mathcal X}~|~\mathcal X\right)$ on $A'$, so there is a natural action of $G\left(\widetilde{\mathcal X}~|~\mathcal X\right)$ on $A\left(\widetilde{\mathcal X} \right)$.
\begin{definition}
We say that the action of $G\left(\widetilde{\mathcal X}~|~\mathcal X\right)$ on $\widetilde{\mathcal X}$ \textit{induces} the action of $G\left(\widetilde{\mathcal X}~|~\mathcal X\right)$ on $A\left(\widetilde{\mathcal X} \right)$.
\end{definition}
From the Proposition \ref{continuous_trace_c_a_proposition} it follows that $A\left(\widetilde{\mathcal X} \right) $ is a continuous trace $C^*$-algebra, and the spectrum of $A\left(\widetilde{\mathcal X} \right)$ coincides with $\widetilde{\mathcal X}$. If $G=G\left(\widetilde{\mathcal X}~|~\mathcal X\right)$ is a finite group then
\begin{equation}\label{ctr_inv_eqn}
A = A\left(\widetilde{\mathcal X} \right)^G
\end{equation}
and the above equation induces an injective *-homomorphism $A \hookrightarrow A\left(\widetilde{\mathcal X} \right)$.
\begin{lem}\label{ctr_adm_lem}
Let $A$ be a continuous trace $C^*$-algebra, and let $\mathcal X = \hat A$ be the spectrum of $A$. Suppose that $\mathcal X$ is a second-countable locally compact Hausdorff space and $B$ is a $C^*$-algebra such that
\begin{itemize}
\item $A \subset B \subset A''$,
\item For any $b \in B_+$ and $x_0 \in \mathcal X$ such that $\rho_{x_0}\left(b \right)\neq 0$ there is an open neighborhood $\mathcal W \subset \mathcal X$ of $x_0$ and an Abelian $z \in A$ such that
\begin{equation*}
\begin{split}
\supp z \subset \mathcal W,\\
\tr\left(z b z \right) \in C_0\left(\mathcal X \right),\\
\tr\left(z b z \right) \left(x_0 \right)\neq 0.
\end{split}
\end{equation*}
\end{itemize}
Then $B = A$.
\end{lem}
\begin{proof}
The spectrum $\hat B$ of $B$ coincides with the spectrum of $A$ as a set. Let $\mathcal V \subset \mathcal X$ be a closed subset with respect to the topology of $\hat B$. There is a closed ideal $I \subset B$ which corresponds to $\mathcal V$. Denote by $I_+$ the positive part of $I$. For any $x_0 \in \mathcal X \backslash \mathcal V$ there is $b \in I_+$ such that $\rho_{x_0}\left(b \right) \neq 0$. There is an Abelian element $\overline{z} \in A$ such that $\tr\left( \rho_{x_0}\left(\overline{z}b\overline{z} \right)\right) \neq 0$. If $\mathcal W \subset\mathcal X$ is an open neighborhood of $x_0$ then from the Corollary \ref{com_a_u_cor} it follows that there is a bounded positive continuous function $a: \mathcal X \to \mathbb{R}$ such that $a\left( x_0\right) \neq 0$ and $a\left(\mathcal X \backslash \mathcal W \right)= \{0\}$. If $z = a\overline{z}$ then $z$ is an Abelian element, $\tr\left(z b z \right)\left(x_0 \right)=\left( \tr\left(\overline{z} b \overline{z} \right)\left(x_0 \right)\right) a^2\left(x_0 \right) \neq 0$ and $\supp z \subset \mathcal W$. From $\tr\left(z b z \right) \in C_0\left(\mathcal X \right)$ it turns out that there is an open (with respect to the topology of $\hat A$) neighborhood $\mathcal U$ of $x_0$ such that $\tr\left( \rho_{x}\left(zbz \right) \right) \neq 0$ for any $x \in \mathcal U$, i.e. $\mathcal V \bigcap \mathcal U = \emptyset$. It follows that $\mathcal V $ is a closed subset with respect to the topology of $\hat A$. Hence there is a homeomorphism $\hat A \approx \hat B$. Below we apply the method of the proof of Theorem 6.1.11 of \cite{pedersen:ca_aut}. Let us consider the set $M$ of elements in $B_+$ with continuous trace; $M$ is hereditary and the closure of $M$ is the positive part of an ideal $J$ of $B$. However for any $x \in \mathcal{ X}= \hat B$ there is an Abelian $a \in K\left( A\right) $ such that $\tr\left( \rho_x\left(a \right) \right) \neq 0$. It turns out that $J$ is not contained in any primitive ideal of $B$, hence $J = B$, i.e. $B$ has continuous trace. From this fact it turns out that $\rho_x\left(A \right) = \rho_x\left(B \right)\approx \mathcal K$ for any $x \in \mathcal X$. Taking into account $\rho_x\left(A \right) = \rho_x\left(B \right)$, the homeomorphism $\hat A \approx \hat B$ and the Theorem \ref{ctr_di_do_thm} one has $B = A$.
\end{proof}
\subsection{Finite-fold coverings}
\paragraph*{}
If $\pi:\widetilde{\mathcal X} \to \mathcal X$ is a finite-fold covering such that ${\mathcal X}$ and $\widetilde{\mathcal X}$ are compact Hausdorff spaces, then there is a finite family $\left\{\mathcal U_\iota\subset \mathcal X\right\}_{\iota \in I_0}$ of connected open subsets of $ \mathcal X$ evenly covered by $\pi$ such that $ \mathcal X = \bigcup_{\iota \in I_0} \mathcal U_\iota$. There is a partition of unity subordinated to $\left\{\mathcal U_\iota\right\}$, i.e.
$$
1_{C\left(\mathcal X \right) }= \sum_{\iota \in I_0}a_\iota
$$
where $a_\iota \in C\left(\mathcal X \right)_+$ is such that $\supp a_\iota \subset \mathcal U_\iota$. Denote by $e_\iota = \sqrt{a_\iota}\in C\left(\mathcal X \right)_+$. For any $\iota \in I_0$ we select $\widetilde{\mathcal U}_\iota \subset \widetilde{\mathcal X}$ such that $\widetilde{\mathcal U}_\iota$ is homeomorphically mapped onto $\mathcal U_\iota$. If $\widetilde{e}_\iota \in C\left( \widetilde{\mathcal X}\right) $ is given by
\begin{equation*}
\widetilde{e}_\iota\left(\widetilde{x} \right) = \left\{
\begin{array}{c l}
e_\iota\left(\pi\left( \widetilde{x}\right) \right) & \widetilde{x} \in \widetilde{\mathcal U}_\iota \\
0 & \widetilde{x} \notin \widetilde{\mathcal U}_\iota
\end{array}\right.
\end{equation*}
and $G = G\left( \widetilde{\mathcal X}~|~{\mathcal X} \right)$ then
\begin{equation*}
\begin{split}
1_{C\left(\widetilde{\mathcal X} \right) }= \sum_{ \left(g, \iota\right)\in G \times I_0}g\widetilde{e}^2_\iota,\\
\widetilde{e}_\iota \left( g\widetilde{e}_\iota\right) =0; \text{ for any nontrivial } g \in G.
\end{split}
\end{equation*}
If $I= G\times I_0$ and $\widetilde{e}_{\left(g, \iota\right)}= g\widetilde{e}_\iota$ then from the above equation it turns out that
\begin{equation}\label{ctr_unity_eqn}
1_{C\left(\widetilde{\mathcal X} \right) }= \sum_{\iota \in I}\widetilde{e}_\iota \left\rangle \right\langle \widetilde{e}_\iota
\end{equation}
where $\widetilde{e}_\iota \left\rangle \right\langle \widetilde{e}_\iota$ means a compact operator induced by the $C^*$-Hilbert module structure given by \eqref{finite_hilb_mod_prod_eqn}.
\begin{proposition}\label{mult_str_pos_prop}\cite{apt_mult}
If $B$ is a $C^*$-subalgebra of $A$ containing an approximate unit for $A$, then $M\left(B \right) \subset M\left( A\right)$ (regarding $B''$ as a subalgebra of $A''$).
\end{proposition}
\begin{lem}\label{ctr_mult_lem}
Let $A$ be a continuous trace algebra, and let $\hat A = \mathcal X$ be the spectrum of $A$. Suppose that $\mathcal X$ is a locally compact second-countable Hausdorff space, and let $\pi: \widetilde{\mathcal X} \to \mathcal X$ be a finite-fold covering. There is the natural *-isomorphism $M\left(A \right) \cong M\left( A\left( \widetilde{\mathcal X}\right) \right)^G$ of multiplier algebras.
\end{lem}
\begin{proof}
For any $x \in \mathcal X$ there is an open neighborhood $\mathcal U$ such that $A\left( \mathcal U\right) \cong C_0\left( \mathcal U \right) \otimes \mathcal{K}$. Since $\mathcal X$ is second-countable there is an enumerable family $\left\{\mathcal{U}_k\right\}_{k \in \mathbb{N}}$ such that $A\left( \mathcal U_k\right) \cong C_0\left( \mathcal U_k \right) \otimes \mathcal{K}$ for any $k \in \mathbb{N}$. There is a family $\left\lbrace a_k \in C_0\left( \mathcal X\right)_+ \right\rbrace_{k \in \mathbb{N}}$ such that
\begin{itemize}
\item $\supp a_k \subset \mathcal U_k$,
\item
\begin{equation}\label{ctr_u_eqn}
1_{C_b\left( \mathcal{X}\right) } = \sum_{k=0}^\infty a_k
\end{equation}
where the sum of the series means the strict convergence (cf. Definition \ref{strict_topology}).
\end{itemize}
There is an enumerable family $\left\{e_k\in \mathcal{K} \right\}_{k \in \mathbb{N}}$ of rank-one positive mutually orthogonal operators such that
\begin{equation}\label{ctr_k_eqn}
1_{M\left(\mathcal K\right) }= \sum_{k=0}^\infty e_k
\end{equation}
where the above sum assumes the strict topology (cf. Definition \ref{strict_topology}). The family of products $\left\lbrace u_{jk}= a_j\otimes e_k\right\rbrace_{j,k \in \mathbb{N}} $ is enumerable, so let us introduce an enumeration of $\left\lbrace u_{jk} \right\rbrace_{j,k \in \mathbb{N}}$, i.e. $\left\lbrace u_{jk} \right\rbrace_{j,k \in \mathbb{N}}= \left\lbrace u_{p} \right\rbrace_{p \in \mathbb{N}}$. From \eqref{ctr_u_eqn} and \eqref{ctr_k_eqn} it follows that
\begin{equation}\label{ctr_q_eqn}
1_{M\left(A\right) }= \sum_{\substack{j=1 \\ k=1}}^\infty u_{jk}=\sum_{p=0}^\infty u_p.
\end{equation}
If $h \in A$ is given by
$$
h = \sum_{p=0}^\infty \frac{1}{2^p} u_p
$$
and $\tau: A \to \mathbb{C}$ is a state such that $\tau\left( h\right) = 0$ then from $u_p > 0$ for any $p \in \mathbb{N}$ it follows that $\tau\left(u_p \right) = 0$ for any $p \in \mathbb{N}$. However from \eqref{ctr_q_eqn} it turns out that
$$
1 = \tau\left( 1_{M\left(A\right) }\right)= \tau\left(\sum_{p=0}^\infty u_p \right) = \sum_{p=0}^\infty \tau\left( u_p \right),
$$
and the above equation contradicts $\tau\left(u_p \right) = 0$ for any $p \in \mathbb{N}$. It follows that $\tau\left( h\right) \neq 0$ for any state $\tau$, i.e. $h$ is a strictly positive element of $A$. Similarly one can prove that $h$ is a strictly positive element of $A\left( \widetilde{\mathcal X}\right)$ because
$$
1_{M\left(A\left( \widetilde{\mathcal X}\right)\right) }= \sum_{p=0}^\infty u_p.
$$
From the Proposition \ref{mult_str_pos_prop} it follows that there is the natural injective *-homomorphism $f: M\left( A\right) \hookrightarrow M\left(A\left( \widetilde{\mathcal X}\right) \right)$. Clearly $g f\left( a\right) = f\left(ga \right)= f\left( a\right) $ for any $a \in A$ and $g \in G$, so $f\left( M\left( A\right)\right) \subset M\left( \widetilde{A} \right)^G$, or equivalently $M\left( A\right) \subset M\left( \widetilde{A} \right)^G$. On the other hand, from the Lemma \ref{ind_mult_inv_lem} one has $ M\left( \widetilde{A} \right)^G\subset M\left( A\right)$. Taking into account the mutually inverse inclusions $ M\left( \widetilde{A} \right)^G\subset M\left( A\right)$ and $M\left( A\right) \subset M\left( \widetilde{A} \right)^G$ we conclude that
$$
M\left(A \right) \cong M\left( A\left( \widetilde{\mathcal X}\right) \right)^G.
$$
\end{proof}
\begin{lem}\label{ctr_fin_lem}
Let $A$ be a continuous trace algebra, and let $\hat A = \mathcal X$ be the spectrum of $A$. Suppose that $\mathcal X$ is a locally compact second-countable Hausdorff space, and let $\pi: \widetilde{\mathcal X} \to \mathcal X$ be a finite-fold covering with compactification. Then the triple $\left( A, A\left(\widetilde{\mathcal X} \right),G = G\left(\widetilde{\mathcal X}~|~\mathcal X \right)\right)$ is a finite-fold noncommutative covering with compactification.
\end{lem}
\begin{proof}
We need to check conditions (a) - (c) of the Definition \ref{fin_comp_def}.\\
(a) There is the action of $G$ on $A\left(\widetilde{\mathcal X} \right)$ induced by the action of $G$ on $\widetilde{\mathcal X}$. From \eqref{ctr_inv_eqn} it turns out that $A = A\left(\widetilde{\mathcal X} \right)^G$ and there is an injective *-homomorphism $A \hookrightarrow A\left(\widetilde{\mathcal X} \right)$. Denote by $M\left(A \right)$ and $M\left( A\left(\widetilde{\mathcal X}\right) \right)$ the multiplier algebras of $A$ and $A\left(\widetilde{\mathcal X} \right)$. Denote by $\mathcal X \hookrightarrow \mathcal Y$, $\widetilde{\mathcal X}\hookrightarrow\widetilde{\mathcal Y}$ compactifications such that $\widetilde{ \pi}:\widetilde{\mathcal Y}\to \mathcal Y$ is a (topological) finite covering and $\pi= \widetilde{ \pi}|_{\widetilde{\mathcal X}}$. From $C_b\left(\mathcal X \right)\subset M\left( A\right)$, $C_b\left(\widetilde{\mathcal X} \right) \subset M\left( A\left(\widetilde{\mathcal X} \right)\right)$ and $C\left(\mathcal Y \right) \subset C_b\left(\mathcal X \right)$, $C\left(\widetilde{\mathcal Y} \right)\subset C_b\left(\widetilde{\mathcal X} \right)$ it follows that $C\left(\mathcal Y \right) \subset M\left(A \right) $, $C\left(\widetilde{\mathcal Y} \right)\subset M\left( A\left(\widetilde{\mathcal X} \right)\right) $. If $B = C\left(\mathcal Y \right) M\left( A\right)$ and $\widetilde{B} = C\left(\widetilde {\mathcal Y} \right) M\left( A\left(\widetilde{\mathcal X} \right) \right)$ then $A$ (resp. $A\left(\widetilde{\mathcal X} \right)$) is an essential ideal of $B$ (resp. $\widetilde{B}$). Clearly $A = B \bigcap A\left( \widetilde{\mathcal X}\right)$.
\\
(b) Since $G\widetilde {\mathcal Y}= \widetilde {\mathcal Y}$ the action $G\times M\left( A\left(\widetilde{\mathcal X} \right)\right)\to M\left( A\left(\widetilde{\mathcal X} \right)\right)$ induces an action $G \times \widetilde B\to\widetilde B$. From the Lemma \ref{ctr_mult_lem} one has the natural *-isomorphism $M\left( A\left(\widetilde{\mathcal X} \right)\right)^G \cong M\left( A\right)$. It follows that $B = C\left(\mathcal Y \right) M\left( A\right)\cong C\left(\mathcal Y \right) M\left( A\left(\widetilde{\mathcal X} \right)\right)^G=\widetilde{B}^G$. From \eqref{ctr_unity_eqn} it turns out that there is a finite family $\left\{\widetilde{e}_\iota\in C\left( \widetilde{\mathcal Y}\right) \right\}_{\iota \in I}$ such that
$$
1_{C\left(\widetilde{\mathcal Y} \right) }=1_{\widetilde{B} }= \sum_{\iota \in I}\widetilde{e}_\iota \left\rangle \right\langle \widetilde{e}_\iota.
$$
It turns out that any $\widetilde{b} \in \widetilde{B}$ is given by
\begin{equation*}
\begin{split}
\widetilde{b} = \sum_{\iota \in I}\widetilde{e}_\iota b_\iota~, \\
b_\iota = \left\langle \widetilde{b}, \widetilde{e}_\iota \right\rangle_{\widetilde{B}} \in B,
\end{split}
\end{equation*}
i.e. $\widetilde{B}$ is a finitely generated (by $\left\{\widetilde{e}_\iota \right\}_{\iota \in I}$) right $B$-module. From the Kasparov Stabilization Theorem \cite{blackadar:ko} it turns out that $\widetilde{B}$ is a projective $B$-module. So $\left(B, \widetilde {B},G \right)$ is a unital finite-fold noncommutative covering. \\
(c) Follows from $G \widetilde{\mathcal X}= \widetilde{\mathcal X}$.
\end{proof}
\begin{thm}\label{ctr_fin_thm}
Let $A$ be a continuous trace algebra, and let $\hat A = \mathcal X$ be the spectrum of $A$. Suppose that $\mathcal X$ is a locally compact second-countable Hausdorff space, and let $\pi: \widetilde{\mathcal X} \to \mathcal X$ be a finite-fold covering. Then the triple $\left( A, A\left(\widetilde{\mathcal X} \right),G = G\left(\widetilde{\mathcal X}~|~\mathcal X \right)\right)$ is a finite-fold noncommutative covering.
\end{thm}
\begin{proof}
We need to check conditions (a), (b) of the Definition \ref{fin_def}.\\
(a) Follows from $\mathcal X = \widetilde{\mathcal X}/G$. \\
(b) Let us consider a family $\left\{ \mathcal U_\lambda \subset \mathcal X\right\}_{\lambda \in \Lambda}$ of open sets such that
\begin{itemize}
\item $\mathcal X = \bigcup_{\lambda \in \Lambda} \mathcal U_\lambda$,
\item The closure $\overline{ \mathcal U}_\lambda$ of $\mathcal U_\lambda$ in $\mathcal X$ is compact $\forall \lambda \in \Lambda$.
\end{itemize}
Clearly $\pi^{-1}\left(\overline{ \mathcal U}_\lambda \right)\to \overline{ \mathcal U}_\lambda$ is a covering, so $\pi^{-1}\left({ \mathcal U}_\lambda \right)\to { \mathcal U}_\lambda$ is a covering with compactification. If $\widetilde{I}_\lambda \stackrel{\mathrm{def}}{=}A\left( \pi^{-1}\left( {\mathcal U}_\lambda\right)\right) \subset A\left( \widetilde{\mathcal X}\right)$ and $I_\lambda = \widetilde{I}_\lambda \bigcap A$ then from $G\pi^{-1}\left( \mathcal U_\lambda\right) = \pi^{-1}\left( \mathcal U_\lambda\right)$ it follows that
\begin{eqnarray*}
G\widetilde{I}_\lambda = \widetilde{I}_\lambda,\\
I_\lambda = A\left({\mathcal U}_\lambda \right),
\end{eqnarray*}
i.e. $\widetilde{I}_\lambda$ satisfies \eqref{gi-i}. From the Lemma \ref{ctr_fin_lem} it follows that there is a finite-fold noncommutative covering with compactification $ \left(I_\lambda, \widetilde{I}_\lambda, G \right) =\left(A\left( \mathcal U_\lambda\right), A\left( \pi^{-1} \left(\mathcal U_\lambda \right)\right),G \right) $. From the Definition \ref{ctr_cov_defn} and $\mathcal X = \bigcup_{\lambda \in \Lambda} \mathcal U_\lambda$ it follows that $\bigcup_{\lambda \in \Lambda} I_\lambda = \bigcup_{\lambda \in \Lambda} A\left( \mathcal U_\lambda\right)$ (resp. $\bigcup_{\lambda \in \Lambda} \widetilde{I}_\lambda = \bigcup_{\lambda \in \Lambda} A\left( \pi^{-1}\left( \mathcal U_\lambda\right) \right)$) is dense in $A$ (resp. $A\left( \widetilde{\mathcal X}\right)$).
\end{proof}
\subsection{Infinite coverings}\label{ctr_case_sec}
\paragraph*{}
Let $A$ be a continuous trace $C^*$-algebra such that the spectrum $\hat A= \mathcal X$ of $A$ is a second-countable locally compact Hausdorff space. Suppose that
\begin{equation*}
\mathfrak{S}_\mathcal{X} = \left\{\mathcal{X}=\mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ... \right\}
\end{equation*}
is a topological finite covering sequence. From the Theorem \ref{ctr_fin_thm} it turns out that
$$\mathfrak{S}_{A\left( \mathcal{X}\right)}=\left\{A = A\left( \mathcal{X}_0\right)\to ... \to A\left( \mathcal{X}_n\right) \to ...\right\}
$$
is an algebraical finite covering sequence. If $\widehat{A} = \varinjlim A\left( \mathcal{X}_n\right)$ then from the Theorem \ref{direct_lim_state_thm} it follows that the spectrum of $\widehat{A}$ is homeomorphic to $\widehat{ \mathcal X}= \varprojlim \mathcal X_n$.
\begin{lem}
If $\mathfrak{S}_\mathcal{X} = \left\{\mathcal{X}=\mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ... \right\} \in \mathfrak{FinTop}$ and $\overline{\mathcal X}$ is the disconnected inverse limit of $\mathfrak{S}_\mathcal{X}$, then there is the natural inclusion $\widehat{A}'' \to A\left(\overline{\mathcal X} \right)'' $ of von Neumann enveloping algebras.
\end{lem}
\begin{proof}
The surjective maps $\overline{\mathcal X} \to \mathcal{X}_n $ give injective *-homomorphisms $A\left( \mathcal{X}_n\right) \hookrightarrow M\left(A\left( \overline{\mathcal X}\right) \right)$, which induce the injective *-homomorphism $\widehat{A} \hookrightarrow M\left(A\left( \overline{\mathcal X}\right) \right)$. This yields the injective *-homomorphism of von Neumann enveloping algebras $\widehat{A}'' \to A\left(\overline{\mathcal X} \right)'' $.
\end{proof}
\begin{empt}
Denote by $G_n = G\left(\mathcal X_n~|~\mathcal X \right)$ the groups of covering transformations and by $\widehat{G} = \varprojlim G_n$. Denote by $\overline{\pi}_n:\overline{ \mathcal X} \to \mathcal X_n$, $\pi^n: \mathcal X_n \to \mathcal X$, $\pi^m_n: \mathcal X_m \to \mathcal X_n$ ($m > n$) the natural covering projections.
\end{empt}
\begin{lem}\label{ctr_1_lem}
If $\overline{ \mathcal U }\subset \overline{ \mathcal X }$ is an open subset mapped homeomorphically onto $\mathcal U \subset \mathcal X$ then any positive element $\overline{a}\in A\left( \overline{ \mathcal U }\right)_+\subset A\left( \overline{ \mathcal X}\right)_+ $ is special.
\end{lem}
\begin{proof}
If $\mathcal{U}_n = \overline{\pi}_n\left(\overline{ \mathcal U } \right)$ then there is a *-isomorphism $\overline{\varphi }_n: A\left( \overline{ \mathcal U }\right)\xrightarrow{\approx} A\left( \mathcal{U}_n\right) $.
For any $n \in \mathbb{N}^0$, $z \in A$ and $f_\varepsilon$ given by \eqref{f_eps_eqn} one has
\begin{equation*}
\begin{split}
a_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \overline{a} = \overline{\varphi }_n\left( \overline{a}\right), \\
b_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \left(z \overline{a} z^*\right) = z\overline{\varphi }_n\left( \overline{a}\right)z^*,\\
c_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g \left(z \overline{a} z^*\right)^2= \left(z\overline{\varphi }_n\left( \overline{a}\right)z^*\right)^2 ,\\
d_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g f_\varepsilon\left(z \overline{a} z^*\right)=f_\varepsilon\left(z\overline{\varphi }_n\left( \overline{a}\right)z^*\right).
\end{split}
\end{equation*}
From the above equations it follows that $a_n,~ b_n,~ c_n,~ d_n \in A\left(\mathcal X_n \right)$ and $b_n^2 = c_n$, i.e. $\overline{ a}$ satisfies the conditions of the Definition \ref{special_el_defn}.
\end{proof}
\begin{cor}\label{ctr_c_cor}
If $\overline{A}$ is the disconnected inverse noncommutative limit of $\mathfrak{S}_{A\left( \mathcal{X}\right)}$, then $ A\left(\overline{ \mathcal X} \right) \subset \overline{A}$.
\end{cor}
\begin{proof}
From the Lemma \ref{ctr_1_lem} it turns out that $A\left(\overline{ \mathcal U} \right) \subset \overline{A}$. However $A\left(\overline{ \mathcal X} \right)$ is the $C^*$-norm completion of its subalgebras $A\left(\overline{ \mathcal U} \right) \subset A\left(\overline{ \mathcal X} \right)$.
\end{proof}
\begin{lem}\label{ctr_lem1}
If $\overline{a} \in A\left(\overline{ \mathcal X }\right)''$ is a special element and $z \in K\left(A \right)$ is an Abelian element then $\overline{b}= \tr\left(z\overline{a}z \right) \in C_0\left(\overline{\mathcal X} \right)$.
\end{lem}
\begin{proof}
Any Abelian element is positive, hence $z = z^*$. If $f_\varepsilon$ is given by \eqref{f_eps_eqn} and $\overline{b}'= z\overline{a}z$ then from (b) of the Definition \ref{special_el_defn} it turns out that
\begin{equation*}
\begin{split}
b'_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g\overline{b}' \in A\left(\mathcal X_n \right),\\
c'_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} g\overline{b}'^2 \in A\left(\mathcal X_n \right),\\
d'_n = \sum_{g \in \ker\left( \widehat{G} \to G_n\right)} gf_\varepsilon\left( \overline{b}'\right) \in A\left(\mathcal X_n \right).
\end{split}
\end{equation*}
From $z \in K\left(A \right)$ it turns out that $\supp \tr\left(z \right)$ is compact. The map $\pi^n :\mathcal X_n \to \mathcal X$ is a finite-fold covering, so $\left(\pi^n\right)^{-1}\left(\supp \tr\left(z \right) \right)$ is compact. Since $\supp b'_n ,~ \supp c'_n, ~\supp d'_n \subset \left(\pi^n\right)^{-1}\left(\supp \tr\left(z \right) \right)$, all the sets $\supp b'_n ,~ \supp c'_n,~ \supp d'_n$ are compact. Taking into account that all $b'_n$, $c'_n$, $d'_n$ are Abelian one has $b'_n,~c'_n, ~d'_n \in K \left( A\left(\mathcal X_n \right)\right) $ where $K \left( A\left(\mathcal X_n \right)\right)$ means the Pedersen ideal of $A\left(\mathcal X_n \right)$. It follows that
\begin{equation*}
\begin{split}
b_n = \tr\left(b'_n\right)=\sum_{g \in \ker\left( \widehat{G} \to G_n\right)} \tr\left(g \overline{b}'\right)\in C_c\left(\mathcal X_n\right),\\
b^2_n = \tr\left(b'^2_n\right)=\tr\left(b'_n\right)^2 \in C_c\left(\mathcal X_n\right),\\
c_n= \tr\left(c'_n\right) =\sum_{g \in \ker\left( \widehat{G} \to G_n\right)} \tr\left(g \overline{b}'^2\right)=\sum_{g \in \ker\left( \widehat{G} \to G_n\right)} \tr\left(g \overline{b}'\right)^2\in C_c\left(\mathcal X_n\right),\\
d_n= \tr\left(d'_n\right) =\sum_{g \in \ker\left( \widehat{G} \to G_n\right)} \tr\left(g f_\varepsilon\left( \overline{b}'\right) \right)=\sum_{g \in \ker\left( \widehat{G} \to G_n\right)} f_\varepsilon\left( \tr\left(g \overline{b}'\right)\right) \in C_c\left(\mathcal X_n\right).
\end{split}
\end{equation*}
From the above equations it follows that $b_n,~c_n,~ d_n$ satisfy condition (a) of the Lemma \ref{comm_main_lem}. From condition (c) of the Definition \ref{special_el_defn} it follows that for any $\varepsilon > 0$ there is $N \in \mathbb{N}$ such that for any $n \ge N$ the following condition holds
\begin{equation}\label{ctr_ineq_eqn}
\left\|b'^2_n -c'_n\right\| < \varepsilon.
\end{equation}
Both $b'_n$ and $c'_n$ are Abelian and the range projection of $b'_n$ equals the range projection of $c'_n$, i.e. $\left[b'_n \right] = \left[c'_n \right]$, so
\begin{equation*}
\left\|b^2_n-c_n\right\|=\left\|\tr\left(b'_n\right)^2 -\tr\left(c'_n\right) \right\|= \left\|b'^2_n-c'_n\right\|.
\end{equation*}
From \eqref{ctr_ineq_eqn} it follows that $\left\|b^2_n-c_n\right\| < \varepsilon$ for any $n \ge N$. It means that $b_n$ and $c_n$ satisfy condition (b) of the Lemma \ref{comm_main_lem}. From the Lemma \ref{comm_main_lem} it turns out that $\overline{b}= \tr\left(z\overline{a}z \right) \in C_0\left(\overline{\mathcal X} \right)$.
\end{proof}
\begin{lem}\label{ctr_top_lem}
If $\overline{A}$ is the disconnected inverse noncommutative limit of $\downarrow\mathfrak{S}_{A\left( \mathcal{X}\right)}$, then $\overline{A} = A\left(\overline{ \mathcal X} \right)$.
\end{lem}
\begin{proof}
From the Corollary \ref{ctr_c_cor} it follows that $A\left(\overline{ \mathcal X} \right) \subset \overline{A}$. From the Corollary \ref{special_cor} it follows that
\begin{equation}\label{ctr_inc_eqn}
A\left(\overline{ \mathcal X} \right) \subset \overline{A} \subset A\left(\overline{ \mathcal X} \right)''.
\end{equation}
Let $\overline{\pi}: \overline{ \mathcal X} \to { \mathcal X}$ and let $\overline{a} \in A\left(\overline{ \mathcal X} \right)_+''$. Let $\overline{x} \in \overline{\mathcal X}$ be such that $\rho_{\overline{x}}\left( \overline{a}\right) \neq 0$ and let $\overline{ \mathcal W}$ be an open neighborhood of $\overline{x}$ such that $\overline{\pi}$ homeomorphically maps $\overline{ \mathcal W}$ onto $\mathcal W = \overline{\pi}\left( \overline{ \mathcal W}\right)$. If $\overline{z} \in K\left( A\left(\overline{ \mathcal X} \right)\right)$ is an Abelian element such that $\supp \overline{z} \subset \overline{ \mathcal W}$ and $\rho_{\overline{x}}\left(\overline{z}~\overline{a}~\overline{z} \right)\neq 0$ then the element $z = \sum_{g \in \widehat{G}}g \overline{z}\in A$ is Abelian and $\supp z \subset \mathcal W$. If $\overline{a}$ is special, then from the Lemma \ref{ctr_lem1} it turns out that
$$
\tr\left( z\overline{a}z \right) \in C_0\left( \overline{\mathcal X}\right) .
$$
However from
$$
\rho_{\overline{x}}\left( \overline{z}~\overline{a}~\overline{z}\right) = \left\{
\begin{array}{c l}
\rho_{\overline{x}}\left( z\overline{a}z\right) & \overline{x} \in \overline{ \mathcal W} \\
0 & \overline{x} \notin \overline{ \mathcal W}
\end{array}\right.
$$
it turns out that
\begin{equation}\label{ctr_zaz_eqn}
\begin{split}
\tr\left( \overline{z}~\overline{a}~\overline{z} \right) \in C_0\left( \overline{\mathcal X}\right),\\
\tr\left( \overline{z}~\overline{a}~\overline{z} \right)\left(\overline{x} \right) \neq 0.
\end{split}
\end{equation}
The set of special elements is dense in $\overline{A}_+$, so any $\overline{a} \in \overline{A}_+$ satisfies \eqref{ctr_zaz_eqn}. Taking into account this fact and \eqref{ctr_inc_eqn} it turns out that
\begin{itemize}
\item $A\left(\overline{ \mathcal X} \right) \subset \overline{A} \subset A\left(\overline{ \mathcal X} \right)''$,
\item For any $\overline{a} \in \overline{A}_+$ and $\overline{x} \in \overline{ \mathcal X} $ such that $\rho_{\overline{x}}\left(\overline{a} \right)\neq 0$ there is an open neighborhood $\overline{ \mathcal W} \subset \overline{ \mathcal X}$ of $\overline{x}$ and an Abelian $\overline{z} \in A\left(\overline{ \mathcal X} \right)$ such that
\begin{equation*}
\begin{split}
\supp \overline{z} \subset \overline{\mathcal W},\\
\tr\left(\overline{z}~\overline{a}~\overline{z} \right) \in C_0\left(\overline{\mathcal X} \right),\\
\tr\left(\overline{z}~\overline{a}~\overline{z} \right) \left(\overline{x}\right) \neq 0.
\end{split}
\end{equation*}
\end{itemize}
From the Lemma \ref{ctr_adm_lem} it follows that $\overline{A}= A\left(\overline{ \mathcal X} \right)$.
\end{proof}
\begin{empt}\label{ctr_transitive_constr}
Let $\widetilde{\mathcal X} \subset \overline{\mathcal X}$ be a connected component and let $G \subset G\left(\varprojlim C_0\left(\mathcal X_n \right) ~|~C_0\left( \mathcal X\right) \right)$ be maximal among the subgroups $G' \subset G\left(\varprojlim C_0\left(\mathcal X_n \right) ~|~C_0\left( \mathcal X\right) \right)$ such that $G'\widetilde{\mathcal X} = \widetilde{\mathcal X}$. If $J\subset\widehat{G}$ is a set of representatives of $\widehat{G}/G$ then from \eqref{top_disconnected_repr_eqn} it follows that
\begin{equation*}
\overline{\mathcal X}= \bigsqcup_{g \in J} g \widetilde{\mathcal X}
\end{equation*}
and the algebraic direct sum
\begin{equation}\label{ctr_transitive_eqn}
\bigoplus _{g \in J} A\left( g\widetilde{\mathcal X} \right) \subset A\left( \overline{\mathcal X}\right)
\end{equation}
is a dense subalgebra of $A\left( \overline{\mathcal X}\right)$.
\end{empt}
\begin{thm}\label{ctr_main_thm}
Let $A$ be a $C^*$-algebra of continuous trace, and let $\mathcal{X}$ be the spectrum of $A$. Let
$$\mathfrak{S}_{\mathcal X} = \left\{\mathcal{X} = \mathcal{X}_0 \xleftarrow{}... \xleftarrow{} \mathcal{X}_n \xleftarrow{} ...\right\} \in \mathfrak{FinTop}$$
be a topological finite covering sequence, and let
$$\mathfrak{S}_{A\left( \mathcal{X}\right)}=\left\{A = A\left( \mathcal{X}_0\right)\to ... \to A\left( \mathcal{X}_n\right) \to ...\right\} \in \mathfrak{FinAlg}$$
be an algebraical finite covering sequence. Then the following conditions hold:
\begin{enumerate}
\item [(i)] $\mathfrak{S}_{A\left(\mathcal{X}\right)}$ is good,
\item[(ii)] There are isomorphisms:
\begin{itemize}
\item $\varprojlim \downarrow \mathfrak{S}_{A\left(\mathcal{X}\right)} \approx A\left(\varprojlim \downarrow \mathfrak{S}_{\mathcal X}\right)$,
\item $G\left(\varprojlim \downarrow \mathfrak{S}_{A\left(\mathcal{X}\right)}~|~ A\right) \approx G\left(\varprojlim \downarrow \mathfrak{S}_{\mathcal{X}}~|~ \mathcal X\right)$.
\end{itemize}
\end{enumerate}
\end{thm}
\begin{proof}
Similar to the proof of the Theorem \ref{comm_main_thm}.
\end{proof}
\section{Noncommutative tori and their coverings}
\subsection{Fourier transformation}
\paragraph*{}
There is a norm on $\mathbb{Z}^n$ given by
\begin{equation}\label{mp_znorm_eqn}
\left\|\left(k_1, ..., k_n\right)\right\|= \sqrt{k_1^2 + ... + k^2_n}.
\end{equation}
The space of complex-valued Schwartz functions on $\mathbb{Z}^n$ is given by
\begin{equation*}
\mathcal{S}\left(\mathbb{Z}^n\right)= \left\{a = \left\{a_k\right\}_{k \in \mathbb{Z}^n} \in \mathbb{C}^{\mathbb{Z}^n}~|~ \mathrm{sup}_{k \in \mathbb{Z}^n}\left(1 + \|k\|\right)^s \left|a_k\right| < \infty, ~ \forall s \in \mathbb{N} \right\}.
\end{equation*}
Let $\mathbb{T}^n$ be an ordinary $n$-torus. We will often use real coordinates for $\mathbb{T}^n$, that is, view $\mathbb{T}^n$ as $\mathbb{R}^n / \mathbb{Z}^n$. Let $C^\infty\left(\mathbb{T}^n\right)$ be the algebra of infinitely differentiable complex-valued functions on $\mathbb{T}^n$. There is a bijective Fourier transformation $\mathcal{F}_\mathbb{T}:C^\infty\left(\mathbb{T}^n\right)\xrightarrow{\approx}\mathcal{S}\left(\mathbb{Z}^n\right)$; $f \mapsto \widehat{f}$ given by
\begin{equation}\label{nt_fourier}
\widehat{f}\left(p\right)= \mathcal F_\mathbb{T} (f) (p)= \int_{\mathbb{T}^n}e^{- 2\pi i x \cdot p}f\left(x\right)dx
\end{equation}
where $dx$ is induced by the Lebesgue measure on $\mathbb{R}^n$ and $\cdot$ is the scalar product on the Euclidean space $\mathbb{R}^n$. The Fourier transformation carries multiplication to convolution, i.e.
\begin{equation*}
\widehat{fg}\left(p\right) = \sum_{r +s = p}\widehat{f}\left(r\right)\widehat{g}\left(s\right).
\end{equation*}
The inverse Fourier transformation $\mathcal{F}^{-1}_\mathbb{T}:\mathcal{S}\left(\mathbb{Z}^n\right)\xrightarrow{\approx} C^\infty\left(\mathbb{T}^n\right)$; $ \widehat{f}\mapsto f$ is given by
$$
f\left(x \right) =\mathcal{F}^{-1}_\mathbb{T} \widehat f\left( x\right) = \sum_{p \in \mathbb{Z}^n} \widehat f\left( p\right) e^{ 2\pi i x \cdot p}.
$$
There is the $\mathbb{C}$-valued scalar product on $C^\infty\left( \mathbb{T}^n\right)$ given by
$$
\left(f, g \right) = \int_{\mathbb{T}^n}fg ~dx =\sum_{p \in \mathbb{Z}^n}\widehat{f}\left( -p\right) \widehat{g}\left(p \right).
$$
Denote by $\mathcal{S}\left( \mathbb{R}^{n}\right) $ the space of complex Schwartz (smooth, rapidly decreasing) functions on $\mathbb{R}^{n}$:
\begin{equation}\label{mp_sr_eqn}
\begin{split}
\mathcal{S}\left(\mathbb {R} ^{n}\right)=\left\{f\in C^{\infty }(\mathbb {R} ^{n}):\|f\|_{\alpha ,\beta }<\infty \quad \forall \alpha =\left( \alpha_1,...,\alpha_n\right) ,~\beta =\left( \beta_1,...,\beta_n\right)\in \mathbb {Z} _{+}^{n}\right\},\\
\|f\|_{{\alpha ,\beta }}=\sup_{{x\in {\mathbb {R}}^{n}}}\left|x^{\alpha }D^{\beta }f(x)\right|
\end{split}
\end{equation}
where
\begin{eqnarray*}
x^\alpha = x_1^{\alpha_1}\cdot...\cdot x_n^{\alpha_n},\\
D^{\beta} = \frac{\partial^{\beta_1}}{\partial x_1^{\beta_1}}~...~\frac{\partial^{\beta_n}}{\partial x_n^{\beta_n}}.
\end{eqnarray*}
The topology on $\mathcal{S}\left(\mathbb {R} ^{n}\right)$ is given by the seminorms $\left\|\cdot\right\|_{{\alpha ,\beta }}$.
\begin{defn}\label{nt_*w_defn}
Denote by $\mathcal{S}'\left( \mathbb{R}^{n}\right) $ the vector space dual to $\mathcal{S}\left( \mathbb{R}^{n}\right) $, i.e. the space of continuous functionals on $\mathcal{S}\left( \mathbb{R}^{n}\right)$. Denote by $\left\langle\cdot, \cdot \right\rangle:\mathcal{S}'\left( \mathbb{R}^{n}\right)\times \mathcal{S}\left( \mathbb{R}^{n}\right)\to\mathbb{C}$ the natural pairing. We say that $\left\{a_n \in \mathcal{S}'\left(\mathbb {R} ^{n}\right)\right\}_{n \in \mathbb{N}}$ is \textit{weakly-* convergent} to $a \in \mathcal{S}'\left(\mathbb {R} ^{n}\right)$ if for any $b \in \mathcal{S}\left(\mathbb {R} ^{n}\right)$ the following condition holds
$$
\lim_{n \to \infty}\left\langle a_n, b \right\rangle = \left\langle a, b \right\rangle.
$$
We say that
$$
a = \lim_{n\to \infty}a_n
$$
in the \textit{sense of weak-* convergence}.
\end{defn}
Let $\mathcal F$ and $\mathcal F^{-1}$ be the ordinary and inverse Fourier transformations given by
\begin{equation}\label{intro_fourier}
\left(\mathcal{F}f\right)(u) = \int_{\mathbb{R}^{n}} f(t)e^{-2\pi it\cdot u}dt,~\left(\mathcal F^{-1}f\right)(u)=\int_{\mathbb{R}^{n}} f(t)e^{2\pi it\cdot u}dt
\end{equation}
which satisfy the following condition
$$
\mathcal{F}\circ\mathcal{F}^{-1}|_{\mathcal{S}\left( \mathbb{R}^{n}\right)} = \mathcal{F}^{-1}\circ\mathcal{F}|_{\mathcal{S}\left( \mathbb{R}^{n}\right)} = \mathrm{Id}_{\mathcal{S}\left( \mathbb{R}^{n}\right)}.
$$
There is the $\mathbb{C}$-valued scalar product on $\mathcal{S}\left( \mathbb{R}^n\right)$ given by
\begin{equation}\label{fourier_scalar_product_eqn}
\left(f, g \right)_{L^2\left( \mathbb{R}^n\right) } = \int_{\mathbb{R}^n}fg ~dx =\int_{\mathbb{R}^n}\mathcal{F}f~\mathcal{F}g ~dx,
\end{equation}
which is $\mathcal{F}$-invariant, i.e.
\begin{equation}\label{mp_inv_eqn}
\left(f, g \right)_{L^2\left( \mathbb{R}^n\right) } = \left(\mathcal{F}f, \mathcal{F}g \right)_{L^2\left( \mathbb{R}^n\right) }.
\end{equation}
\paragraph*{}
There is the action of $\mathbb{Z}^n$ on $\mathbb{R}^n$ such that
$$
g x = x + g; ~ x \in \mathbb{R}^n,~ g \in \mathbb{Z}^n
$$
and $\mathbb{T}^n \approx \mathbb{R}^n / \mathbb{Z}^n$. For any $x \in \mathbb{R}^n$ and $C \in \mathbb{R}$ the series
$$
\sum_{k \in \mathbb{Z}^n}\frac{C}{1 + \left|x + k \right|^{n + 1} }
$$
is convergent, and taking into account \eqref{mp_sr_eqn} one concludes that for $f \in \mathcal{S}\left( \mathbb{R}^n \right)$ and $x \in \mathbb{R}^n$ the series
$$
\sum_{g \in \mathbb{Z}^n} \left( D^{\beta}f\right) \left(x + g \right) = \sum_{g \in \mathbb{Z}^n}\left(gD^{\beta}f \right) \left(x \right)
$$
is absolutely convergent. It follows that the series
$$
\widetilde{h} = \sum_{g \in \mathbb{Z}^n} g f
$$
is point-wise convergent and $\widetilde{h}$ is a smooth $\mathbb{Z}^n$-invariant function. The periodic smooth function $\widetilde{h}$ corresponds to an element $h \in C^\infty\left(\mathbb{T}^n \right)$. This construction provides a map
\begin{equation}\label{mp_sooth_sum_eqn}
\begin{split}
\mathcal{S}\left(\mathbb{R}^n\right) \to C^\infty\left(\mathbb{T}^n\right), \\
f \mapsto h = \sum_{g \in \mathbb{Z}^n} g f.
\end{split}
\end{equation}
If $\mathcal U = \left(0,1 \right)^n \subset \mathbb{R}^n$ is a fundamental domain of the action of $\mathbb{Z}^n$ on $\mathbb{R}^n$ then $\widetilde{h}|_{\mathcal U }$ can be represented by the Fourier series
\begin{eqnarray*}
\widetilde{h}|_{\mathcal U } \left(x \right) = \sum_{p \in \mathbb{Z}^n}c_p e^{2\pi ip\cdot x},\\
c_p =\int_{\mathcal U} \widetilde{h}\left( x\right)e^{-2\pi i p\cdot x}~dx = \sum_{g \in \mathbb{Z}^n} \int_{\mathcal U}f\left( x+g\right)e^{-2\pi i p\cdot x}~dx = \int_{\mathbb{R}^n}f\left( x\right)e^{-2\pi i p\cdot x}~dx=\widehat f\left( p\right)
\end{eqnarray*}
where $\widehat f = \mathcal F f$ is the Fourier transformation of $f$. So if $\widehat h = \mathcal F_{\mathbb{T}} h$ is the Fourier transformation of $h$ then for any $p \in \mathbb{Z}^n$ the following condition holds
\begin{equation}\label{fourier_from_r_to_z_eqn}
\widehat h\left(p\right) = \widehat f\left( p\right).
\end{equation}
\subsection{Noncommutative torus $\mathbb{T}^n_{\Theta}$}\label{nt_descr_subsec}
\paragraph*{}
Denote by $\cdot: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ the scalar product on the Euclidean vector space $\mathbb{R}^n$. Let $\Theta$ be a real skew-symmetric $n \times n$ matrix; we will define a new noncommutative product $\star_{\Theta}$ on $\mathcal{S}\left(\mathbb{Z}^n\right)$ given by
\begin{equation}\label{nt_product_defn_eqn}
\left(\widehat{f}\star_{\Theta}\widehat{g}\right)\left(p\right)= \sum_{r + s = p} \widehat{f}\left(r\right)\widehat{g}\left(s\right) e^{-\pi ir ~\cdot~ \Theta s}
\end{equation}
and an involution
\begin{equation*}
\widehat{f}^*\left(p\right)=\overline{\widehat{f}\left(-p\right)}.
\end{equation*}
As a result there is an involutive algebra $C^\infty\left(\mathbb{T}^n_{\Theta}\right) =\left(\mathcal{S}\left(\mathbb{Z}^n\right), + , \star_{\Theta}~, ^* \right)$. There is a tracial state on $C^\infty\left(\mathbb{T}^n_{\Theta}\right)$ given by
\begin{equation}\label{nt_state_eqn}
\tau\left(f\right)= \widehat{f}\left(0\right).
\end{equation}
From $C^\infty\left(\mathbb{T}^n_{\Theta} \right) \approx \mathcal{S}\left( \mathbb{Z}^n\right)$ it follows that there is a $\mathbb{C}$-linear isomorphism
\begin{equation}\label{nt_varphi_inf_eqn}
\varphi_\infty: C^\infty\left(\mathbb{T}^n_{\Theta} \right) \xrightarrow{\approx} C^\infty\left(\mathbb{T}^n \right)
\end{equation}
such that the following condition holds
\begin{equation}\label{nt_state_integ_eqn}
\tau\left(f \right)= \frac{1}{\left( 2\pi\right)^n }\int_{\mathbb{T}^n} \varphi_\infty\left( f\right) ~dx.
\end{equation}
Similarly to \ref{comm_gns_constr} there is the Hilbert space $L^2\left(C^\infty\left(\mathbb{T}^n_{\Theta}\right), \tau\right)$ and the natural representation $C^\infty\left(\mathbb{T}^n_{\Theta}\right) \to B\left(L^2\left(C^\infty\left(\mathbb{T}^n_{\Theta}\right), \tau\right)\right)$ which induces the $C^*$-norm.
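For instance, a direct check from \eqref{nt_product_defn_eqn} and the skew-symmetry of $\Theta$ (so that $s \cdot \Theta s = 0$) shows that
$$
\tau\left(\widehat{f}^*\star_\Theta\widehat{f} \right) = \sum_{s \in \mathbb{Z}^n}\overline{\widehat{f}\left(s\right)}~\widehat{f}\left(s\right)e^{\pi i s~\cdot~\Theta s} = \sum_{p \in \mathbb{Z}^n}\left|\widehat{f}\left(p\right)\right|^2 \ge 0,
$$
so $\tau$ is a faithful positive functional and the above representation is well defined.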
The $C^*$-norm completion $C\left(\mathbb{T}^n_{\Theta}\right)$ of $C^\infty\left(\mathbb{T}^n_{\Theta}\right)$ is a $C^*$-algebra and there is a faithful representation
\begin{equation}\label{nt_repr_eqn}
C\left(\mathbb{T}^n_{\Theta}\right) \to B\left( L^2\left(C^\infty\left(\mathbb{T}^n_{\Theta}\right), \tau\right)\right) .
\end{equation}
We will write $L^2\left(C\left(\mathbb{T}^n_{\Theta}\right), \tau\right)$ instead of $L^2\left(C^\infty\left(\mathbb{T}^n_{\Theta}\right), \tau\right)$. There is the natural $\mathbb{C}$-linear map $C^\infty\left(\mathbb{T}^n_{\Theta}\right) \to L^2\left(C\left(\mathbb{T}^n_{\Theta}\right), \tau\right) $ and since $C^\infty\left(\mathbb{T}^n_{\Theta}\right) \approx \mathcal{S}\left( \mathbb{Z}^n \right)$ there is a linear map $\Psi_\Theta:\mathcal{S}\left( \mathbb{Z}^n \right) \to L^2\left(C\left(\mathbb{T}^n_{\Theta}\right), \tau\right) $. If $k \in \mathbb{Z}^n$ and $U_k \in \mathcal{S}\left( \mathbb{Z}^n \right)=C^\infty\left(\mathbb{T}^n_{\Theta}\right)$ is such that
\begin{equation}\label{unitaty_nt_eqn}
U_k\left( p\right)= \delta_{kp}; ~ \forall p \in \mathbb{Z}^n
\end{equation}
then
\begin{equation}\label{nt_unitary_product}
U_kU_p = e^{-\pi ik ~\cdot~ \Theta p} U_{k + p}; ~~~ U_kU_p = e^{-2\pi ik ~\cdot~ \Theta p}U_pU_k.
\end{equation}
If $\xi_k = \Psi_\Theta\left(U_k \right)$ then from \eqref{nt_product_defn_eqn}, \eqref{nt_state_eqn} it turns out
\begin{equation}\label{nt_h_product}
\tau\left(U^*_k \star_\Theta U_l \right) = \left(\xi_k, \xi_l \right) = \delta_{kl},
\end{equation}
i.e. the subset $\left\{\xi_k\right\}_{k \in \mathbb{Z}^n}\subset L^2\left(C\left(\mathbb{T}^n_{\Theta}\right), \tau\right)$ is an orthonormal basis of $L^2\left(C\left(\mathbb{T}^n_{\Theta}\right), \tau\right)$. Hence the Hilbert space $L^2\left(C\left(\mathbb{T}^n_{\Theta}\right), \tau\right)$ is naturally isomorphic to the Hilbert space $\ell^2\left(\mathbb{Z}^n\right)$ given by
\begin{equation*}
\ell^2\left(\mathbb{Z}^n\right) = \left\{\xi = \left\{\xi_k \in \mathbb{C}\right\}_{k\in \mathbb{Z}^n} \in \mathbb{C}^{\mathbb{Z}^n}~|~ \sum_{k\in \mathbb{Z}^n} \left|\xi_k\right|^2 < \infty\right\}
\end{equation*}
and the $\mathbb{C}$-valued scalar product on $\ell^2\left(\mathbb{Z}^n\right)$ is given by
\begin{equation*}
\left(\xi,\eta\right)_{ \ell^2\left(\mathbb{Z}^n\right)}= \sum_{k\in \mathbb{Z}^n} \overline{\xi}_k\eta_k.
\end{equation*}
An alternative description of $C\left(\mathbb{T}^n_{\Theta}\right)$ is the following: if
\begin{equation}\label{nt_th_eqn}
\Theta =
\begin{pmatrix}
0& \theta_{12} &\ldots & \theta_{1n}\\
\theta_{21}& 0 &\ldots & \theta_{2n}\\
\vdots& \vdots &\ddots & \vdots\\
\theta_{n1}& \theta_{n2} &\ldots & 0
\end{pmatrix}
\end{equation}
then $C\left(\mathbb{T}^n_{\Theta}\right)$ is the universal $C^*$-algebra generated by unitary elements $u_1,\dots, u_n \in U\left( C\left(\mathbb{T}^n_{\Theta}\right)\right) $ such that the following condition holds
\begin{equation}\label{nt_com_eqn}
u_ju_k = e^{-2\pi i \theta_{jk} }u_ku_j.
\end{equation}
The elements $u_j$ are given by
\begin{equation*}
\begin{split}
u_j = U_{\mathfrak{j}},\\
\mathfrak{j}=\left(0,\dots, \underbrace{ 1}_{j^{\text{th}}-\text{place}}, \dots, 0\right) .
\end{split}
\end{equation*}
\begin{defn}\label{nt_uni_defn}
Unitary elements $u_1,\dots, u_n \in U\left(C\left(\mathbb{T}^n_{\Theta}\right)\right)$ which satisfy the relation \eqref{nt_com_eqn} are said to be \textit{generators} of $C\left(\mathbb{T}^n_{\Theta}\right)$.
The set $\left\{U_l\right\}_{l \in \mathbb{Z}^n}$ is said to be the \textit{basis} of $C\left(\mathbb{T}^n_{\Theta}\right)$.
\end{defn}
If $a \in C\left(\mathbb{T}^n_{\Theta}\right)$ is presented by a series
$$
a = \sum_{l \in \mathbb{Z}^{n}}c_l U_l;~~ c_l \in \mathbb{C}
$$
and the series $\sum_{l \in \mathbb{Z}^{n}}\left| c_l\right| $ is convergent then from the triangle inequality it follows that
\begin{equation}\label{nt_norm_estimation}
\left\|a \right\| \le \sum_{l \in \mathbb{Z}^{n}}\left| c_l\right|.
\end{equation}
\begin{defn}\label{nt_symplectic_defn}
If $\Theta$ is non-degenerate then the bilinear form $\sigma(s,t) \stackrel{\mathrm{def}}{=} s\cdot\Theta t$ is said to be \textit{symplectic}. This implies that the dimension is even, $n = 2N$. Then one selects
\begin{equation}\label{nt_simpectic_theta_eqn}
\Theta = \theta J \stackrel{\mathrm{def}}{=} \theta
\begin{pmatrix}
0 & 1_N \\
-1_N & 0
\end{pmatrix}
\end{equation}
where $\theta > 0$ is defined by $\theta^{2N} \stackrel{\mathrm{def}}{=} \det\Theta$. Denote by $C^\infty\left(\mathbb{T}^{2N}_\theta\right)\stackrel{\mathrm{def}}{=}C^\infty\left(\mathbb{T}^{2N}_\Theta\right)$ and $C\left(\mathbb{T}^{2N}_\theta\right)\stackrel{\mathrm{def}}{=}C\left(\mathbb{T}^{2N}_\Theta\right)$.
\end{defn}
\subsection{Finite-fold coverings}\label{nt_fin_cov}
\paragraph{}
In this section we write $ab$ instead of $a\star_\Theta b$. Let $\Theta$ be given by \eqref{nt_th_eqn}, and let $C\left(\mathbb{T}^n_\Theta\right)$ be a noncommutative torus. If $\left(k_1, \dots, k_n\right) \in \mathbb{N}^n$ and
$$
\widetilde{\Theta} =
\begin{pmatrix}
0& \widetilde{\theta}_{12} &\ldots & \widetilde{\theta}_{1n}\\
\widetilde{\theta}_{21}& 0 &\ldots & \widetilde{\theta}_{2n}\\
\vdots& \vdots &\ddots & \vdots\\
\widetilde{\theta}_{n1}& \widetilde{\theta}_{n2} &\ldots & 0
\end{pmatrix}
$$
is a skew-symmetric matrix such that
\begin{equation*}
e^{-2\pi i \theta_{rs}}= e^{-2\pi i \widetilde{\theta}_{rs}k_rk_s}
\end{equation*}
then there is a *-homomorphism $C\left(\mathbb{T}^n_\Theta\right)\to C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right)$ given by
\begin{equation}\label{nt_cov_eqn}
u_j \mapsto v_j^{k_j}; ~ j = 1,\dots,n
\end{equation}
where $u_1,\dots, u_n \in C\left(\mathbb{T}^n_{\Theta}\right)$ (resp. $v_1,\dots, v_n \in C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right)$) are unitary generators of $C\left(\mathbb{T}^n_{\Theta}\right)$ (resp. $C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right)$). There is an action of $G=\mathbb{Z}_{k_1}\times\dots\times\mathbb{Z}_{k_n}$ on $C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right)$ by *-automorphisms given by
\begin{equation*}
\left(\overline{p}_1,\dots, \overline{p}_n\right)v_j = e^{\frac{2\pi i p_j}{k_j}}v_j,
\end{equation*}
and the following condition holds: $C\left(\mathbb{T}^n_{\Theta}\right)=C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right)^G$. On the other hand there is the following $C\left(\mathbb{T}^n_{\Theta}\right)$-module isomorphism
$$
C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right) = \bigoplus_{\left(\overline{p}_1, \dots, \overline{p}_n \right)\in\mathbb{Z}_{k_1}\times\dots\times\mathbb{Z}_{k_n} } v_1^{p_1} \cdot \ldots \cdot v_n^{p_n} C\left(\mathbb{T}^n_{\Theta}\right) \approx C\left(\mathbb{T}^n_{\Theta}\right)^{k_1\cdot \ldots \cdot k_n},
$$
i.e. $C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right)$ is a finitely generated projective Hilbert $C\left(\mathbb{T}^n_{\Theta}\right)$-module. This gives the following theorem.
\begin{thm}\label{nt_fin_cov_lem}
The triple $\left(C\left(\mathbb{T}^n_{\Theta}\right), C\left(\mathbb{T}^n_{\widetilde{\Theta}}\right),\mathbb{Z}_{k_1}\times\dots\times\mathbb{Z}_{k_n}\right)$ is a unital noncommutative finite-fold covering.
\end{thm}
\subsection{Moyal plane and a representation of the noncommutative torus}\label{nt_ind_repr_subsubsec}
\begin{defn}
Denote by $\star_\theta$ the \textit{Moyal plane} product on $\mathcal{S}\left(\mathbb{R}^{2N} \right)$ given by
$$
\left(f \star_\theta g \right)\left(u \right)= \int_{\mathbb{R}^{2N}}\int_{\mathbb{R}^{2N} } f\left(u - \frac{1}{2}\Theta y\right) g\left(u + v \right)e^{2\pi i y \cdot v } \,dy\,dv
$$
where $\Theta$ is given by \eqref{nt_simpectic_theta_eqn}.
\end{defn}
\begin{defn}\label{mp_mult_defn}\cite{varilly_bondia:phobos}
\label{df:Moyal-alg}
Denote by $\mathcal{S}'\left( \mathbb{R}^{n}\right) $ the vector space dual to $\mathcal{S}\left( \mathbb{R}^{n}\right) $, i.e. the space of continuous functionals on $\mathcal{S}\left( \mathbb{R}^{n}\right)$. The Moyal product can be defined, by duality, on larger sets than $\mathcal{S}\left(\mathbb{R}^{2N}\right)$. For $T \in \mathcal{S}'\left(\mathbb{R}^{2N}\right)$, write the evaluation on $g \in \mathcal{S}\left(\mathbb{R}^{2N}\right)$ as $\left\langle T, g\right\rangle \in \mathbb{C}$; then, for $f \in \mathcal{S}$ we may define $T \star_{\theta} f$ and $f \star_{\theta} T$ as elements of~$\mathcal{S}'\left(\mathbb{R}^{2N}\right)$ by
\begin{equation}\label{mp_star_ext_eqn}
\begin{split}
\left\langle T \star_{\theta} f, g\right\rangle \stackrel{\mathrm{def}}{=} \left\langle T, f \star_{\theta} g\right\rangle,\\
\left\langle f \star_{\theta} T, g\right\rangle \stackrel{\mathrm{def}}{=} \left\langle T, g \star_{\theta} f\right\rangle
\end{split}
\end{equation}
using the continuity of the star product on~$\mathcal{S}\left(\mathbb{R}^{2N}\right)$. Also, the involution is extended to $\mathcal{S}'\left(\mathbb{R}^{2N}\right)$ by $\left\langle T^*,g\right\rangle \stackrel{\mathrm{def}}{=} \overline{\left\langle T,g^*\right\rangle}$.
\end{defn}
\begin{rem}
It is proven in \cite{moyal_spectral} that the domain of the Moyal plane product can be extended up to $L^2\left(\mathbb{R}^{2N} \right)$.
\end{rem}
\begin{lem}\label{nt_l_2_est_lem}\cite{moyal_spectral}
If $f,g \in L^2 \left(\mathbb{R}^{2N} \right)$, then $f\star_\theta g \in L^2 \left(\mathbb{R}^{2N} \right)$ and $\left\|f\right\|_{\mathrm{op}} \le \left(2\pi\theta \right)^{-\frac{N}{2}} \left\|f\right\|_2$, where $\left\|\cdot\right\|_{2}$ is the $L^2$-norm given by
\begin{equation}\label{nt_l2_norm_eqn}
\left\|f\right\|_{2} \stackrel{\mathrm{def}}{=} \left(\int_{\mathbb{R}^{2N}} \left|f\right|^2 dx \right)^{\frac{1}{2}}
\end{equation}
and the operator norm is given by $\left\|T\right\|_{\mathrm{op}} \stackrel{\mathrm{def}}{=}\sup\left\{\left\|T \star_\theta g\right\|_2/\left\|g\right\|_2 : 0 \neq g \in L^2\left( \mathbb{R}^{2N}\right) \right\}$.
\end{lem}
\begin{defn}\label{mp_star_alg_defn}
Denote by $\mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)$ (resp. $L^2\left(\mathbb{R}^{2N}_\theta \right)$) the operator algebra which is $\mathbb{C}$-linearly isomorphic to $\mathcal{S}\left(\mathbb{R}^{2N} \right)$ (resp. $L^2\left(\mathbb{R}^{2N} \right)$) and whose product coincides with $\star_\theta$. Both $\mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)$ and $L^2\left(\mathbb{R}^{2N}_\theta \right)$ act on the Hilbert space $L^2\left(\mathbb{R}^{2N} \right)$. Denote by
\begin{equation}\label{mp_psi_th_eqn}
\Psi_\theta: \mathcal{S}\left(\mathbb{R}^{2N} \right)\xrightarrow{\approx}\mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)
\end{equation}
the natural $\mathbb{C}$-linear isomorphism.
\end{defn}
\begin{empt}
There is the tracial property \cite{moyal_spectral} of the Moyal product
\begin{equation}\label{nt_tracial_prop}
\int_{\mathbb{R}^{2N}} \left( f\star_\theta g\right) \left(x \right)dx = \int_{\mathbb{R}^{2N}} f\left(x \right) g\left(x \right)dx.
\end{equation}
The Fourier transformation of the star product satisfies the following condition:
\begin{equation}\label{mp_fourier_eqn}
\mathcal{F}\left(f \star_\theta g\right) \left(x \right) = \int_{\mathbb{R}^{2N}}\mathcal{F}f\left(x-y \right) \mathcal{F}g\left(y\right)e^{ \pi i y \cdot \Theta x }~dy.
\end{equation}
\end{empt}
\begin{defn}\label{r_2_N_repr}\cite{moyal_spectral}
Let $\mathcal{S}'\left(\mathbb{R}^{2N} \right)$ be the vector space dual to $\mathcal{S}\left(\mathbb{R}^{2N} \right)$. Denote by $C_b\left(\mathbb{R}^{2N}_\theta\right)\stackrel{\mathrm{def}}{=} \left\{T \in \mathcal{S}'\left(\mathbb{R}^{2N}\right) : T \star_\theta g \in L^2\left(\mathbb{R}^{2N}\right) \text{ for all } g \in L^2\left(\mathbb{R}^{2N}\right)\right\}$, provided with the operator norm
\begin{equation}\label{mp_op_norm_eqn}
\left\|T\right\|_{\mathrm{op}} \stackrel{\mathrm{def}}{=}\sup\left\{\left\|T \star_\theta g\right\|_2/\left\|g\right\|_2 : 0 \neq g \in L^2\left(\mathbb{R}^{2N}\right)\right\}.
\end{equation}
Denote by $C_0\left(\mathbb{R}^{2N}_\theta \right)$ the operator norm completion of $\mathcal{S}\left(\mathbb{R}^{2N}_\theta \right).$
\end{defn}
\begin{rem}
Obviously $\mathcal{S}\left(\mathbb{R}^{2N}_\theta\right) \hookrightarrow C_b\left(\mathbb{R}^{2N}_\theta\right)$. But $\mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$ is not dense in $C_b\left(\mathbb{R}^{2N}_\theta\right)$, i.e. $C_0\left(\mathbb{R}^{2N}_\theta\right) \subsetneq C_b\left(\mathbb{R}^{2N}_\theta\right)$ (cf. \cite{moyal_spectral}).
\end{rem}
\begin{rem}
$L^2\left(\mathbb{R}^{2N}_\theta\right)$ is the $\left\|\cdot\right\|_2$-norm completion of $\mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$, hence from the Lemma \ref{nt_l_2_est_lem} it follows that
\begin{equation}\label{mp_2_op_eqn}
L^2\left(\mathbb{R}^{2N}_\theta\right) \subset C_0\left(\mathbb{R}^{2N}_\theta\right).
\end{equation}
\end{rem}
\begin{rem}
The notation of the Definition \ref{r_2_N_repr} differs from \cite{moyal_spectral}. Here the symbols $A_\theta, \mathcal{A}_\theta, A^0_\theta$ are replaced with $C_b\left(\mathbb{R}^{2N}_\theta\right), \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right), C_0\left(\mathbb{R}^{2N}_\theta\right)$ respectively.
\end{rem}
\begin{rem}
The $\mathbb{C}$-linear space $C_0\left(\mathbb{R}^{2N}_\theta \right)$ is not isomorphic to $C_0\left(\mathbb{R}^{2N}\right)$ (cf. \cite{moyal_spectral}).
\end{rem}
\begin{empt}\cite{moyal_spectral}
By plane waves we understand all functions of the form
$$
x \mapsto \exp(ik\cdot x)
$$
for $k\in \mathbb{R}^{2N}$. One obtains for the Moyal product of plane waves:
\begin{equation}\label{mp_wave_prod_eqn}
\begin{split}
\exp\left(ik\cdot\right) \star_{\Theta}\exp\left(il\cdot\right)=\exp\left(ik\cdot\right) \star_{\theta}\exp\left(il\cdot\right)= \exp\left(i\left( k+l\right) \cdot\right) e^{-\pi i k \cdot \Theta l}.
\end{split}
\end{equation}
\end{empt}
\begin{rem}\cite{moyal_spectral}
The algebra $C_b\left(\mathbb{R}^{2N}_\theta \right)$ contains all plane waves.
\end{rem}
\begin{rem}\label{nt_c_k_eqn}
If $ \left\{c_k \in \mathbb{C}\right\}_{k \in \mathbb{N}^0} $ is such that $\sum_{k = 0}^\infty \left|c_k\right| < \infty$ then from $\left\| \exp\left(ik\cdot\right)\right\|_{\mathrm{op}}=1$ it turns out that $\left\|\sum_{k = 0}^\infty c_k\exp\left(ik\cdot\right)\right\|_{\mathrm{op}}\le \sum_{k = 0}^\infty \left|c_k\right| < \infty$, i.e. $\sum_{k = 0}^\infty c_k\exp\left(ik\cdot\right) \in C_b\left(\mathbb{R}^{2N}_\theta \right)$.
\end{rem}
\begin{empt}
The equation \eqref{mp_wave_prod_eqn} is similar to the equation \eqref{nt_unitary_product} which defines $C\left(\mathbb{T}^{2N}_{\theta}\right)$. From this fact and from the Remark \ref{nt_c_k_eqn} it follows that there is an injective *-homomorphism $C^\infty\left(\mathbb{T}^{2N}_{\theta}\right) \hookrightarrow C_b\left(\mathbb{R}_\theta^{2N} \right);~ U_k \mapsto \exp\left(2\pi ik\cdot\right)$. The algebra $C^\infty\left(\mathbb{T}^{2N}_{\theta}\right)$ is dense in $C\left(\mathbb{T}^{2N}_{\theta}\right)$, so there is an injective *-homomorphism $C\left(\mathbb{T}^{2N}_{\theta}\right) \hookrightarrow C_b\left(\mathbb{R}_\theta^{2N} \right)$. The faithful representation $C_b\left(\mathbb{R}^{2N}_{\theta}\right)\to B\left(L^2\left( \mathbb{R}^{2N}\right) \right)$ gives a representation $\pi: C\left(\mathbb{T}^{2N}_{\theta}\right) \to B\left(L^2\left( \mathbb{R}^{2N}\right) \right)$
\begin{equation}\label{nt_l2r_eqn}
\begin{split}
\pi: C\left(\mathbb{T}^{2N}_{\theta}\right) \to B\left(L^2\left( \mathbb{R}^{2N}\right) \right),\\
U_k \mapsto \exp\left(2\pi ik\cdot\right)
\end{split}
\end{equation}
where $U_k\in C\left(\mathbb{T}^{2N}_{\theta}\right)$ is given by the Definition \ref{nt_uni_defn}.
\end{empt}
\begin{empt}\label{mp_scaling_constr}
Let us consider the unitary dilation operators $E_a$ given by
$$
E_af(x) \stackrel{\mathrm{def}}{=} a^{N/2} f(a^{1/2}x).
$$
It is proven in \cite{moyal_spectral} that
\begin{equation}\label{eq:starscale}
f {\star_{\theta}} g = (\theta/2)^{-N/2} E_{2/\theta}(E_{\theta/2}f \star_2 E_{\theta/2}g).
\end{equation}
We can simplify our construction by setting $\theta = 2$: thanks to the scaling relation~\eqref{eq:starscale} any qualitative result is true if it is true in the case $\theta = 2$. We use the following notation
\begin{equation}\label{mp_times_eqn}
f {\times} g\stackrel{\mathrm{def}}{=}f {\star_{2}} g.
\end{equation}
\end{empt}
\begin{lem}\label{mp_ab_delta_lem}
Let $a, b \in \mathcal{S}\left(\mathbb{R}^{2N} \right)$. For any $\Delta \in \mathbb{R}^{2N}$ let $a_\Delta \in \mathcal{S}\left(\mathbb{R}^{2N} \right) $ be such that $a_\Delta\left(x \right)= a\left(x + \Delta \right)$. For any $m \in \mathbb{N}$ there is a constant $C^{a,b}_m$ such that
\begin{equation}\nonumber
\left\| a_\Delta \times b\right\|_2 < \frac{C^{a,b}_m}{\left( 1+\left\|\Delta \right\|\right)^m }
\end{equation}
where $\left\| \cdot\right\|_2$ is given by \eqref{nt_l2_norm_eqn}.
\end{lem}
\begin{proof}
From the definition of Schwartz functions it follows that for any $f \in \mathcal{S}\left( \mathbb{R}^{2N} \right)$ and any $m \in \mathbb{N}$ there is $C^f_m>0$ such that
\begin{equation}\label{nt_c_f_m_eqn}
\left|f \left(u\right)\right|<\frac{C^f_m}{\left( 1 + \left\|u\right\|\right)^m }.
\end{equation}
From \eqref{mp_fourier_eqn} it follows that
$$
\mathcal{F}\left(a_\Delta \times b \right)\left(x \right) = \int_{\mathbb{R}^{2N}}\mathcal{F}a_\Delta\left(x-y \right) \mathcal{F}b\left(y\right)e^{ \pi i y \cdot \Theta x }~dy = \int_{\mathbb{R}^{2N}}c\left(y-\Delta -x \right) d\left(y\right)e^{ \pi i y \cdot \Theta x }~dy
$$
where $c\left( x\right) = \mathcal{F}a\left(-x \right)$, $d\left( x\right) = \mathcal{F}b\left(x \right)$. If $\xi =\mathcal{F}\left(a_\Delta \times b \right)$ then $\xi \in L^2\left(\mathbb{R}^{2N} \right)$. Let $\xi=\xi_1+ \xi_2$ where $\xi_1, \xi_2 \in L^2\left(\mathbb{R}^{2N} \right)$ are given by
\begin{equation}
\begin{split}
\xi_1\left(x \right)=
\begin{cases}
\mathcal{F}\left(a_{\Delta}\times b\right)\left(x \right)& \left\| x\right\| \le \frac{\left\| \Delta\right\| }{2}\\
0& \left\| x\right\| > \frac{\left\| \Delta\right\| }{2}
\end{cases},\\
\xi_2\left(x \right)=
\begin{cases}
0 & \left\| x\right\|\le \frac{\left\| \Delta\right\| }{2}\\
\mathcal{F}\left(a_{\Delta}\times b\right)\left(x \right)& \left\| x\right\| > \frac{\left\| \Delta\right\| }{2}
\end{cases}.
\end{split}
\end{equation}
From \eqref{nt_c_f_m_eqn} it turns out
\begin{equation}\label{mp_xi_1_eqn}
\begin{split}
\left| \xi_1\left(x \right)\right| \le ~ \int \left| c\left( t - \Delta - x \right) d\left(t \right)e^{\pi i t \cdot \Theta x} \right|dt \le \int_{\mathbb{R}^{2N}}\frac{C^{c}_{M}}{\left(1 + \left\|t - \Delta - x\right\| \right)^{M}}~\frac{C^{d}_{2M}}{\left(1 + \left\|t\right\| \right)^{2M}} dt =\\
= \int_{\mathbb{R}^{2N}}\frac{C^{c}_{M}}{\left(1 + \left\|t - \Delta - x\right\| \right)^{M}\left(1 + \left\|t\right\| \right)^{M} }~\frac{C^{d}_{2M}}{\left(1 + \left\|t\right\| \right)^{M}} dt \le\\
\le \sup_{x \in \mathbb{R}^{2N}, ~\left\| x\right\|\le \frac{\left\| \Delta\right\|}{2 },~s\in \mathbb{R}^{2N}}~ \frac{C^{c}_{M}C^{d}_{2M}}{\left(1 + \left\|s - \Delta - x\right\| \right)^{M} \left(1 + \left\|s\right\| \right)^{M}} \times \int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|t\right\| \right)^{M}} dt.
\end{split}
\end{equation}
If $x,y \in \mathbb{R}^{2N}$ then from the triangle inequality it follows that $\left\|x + y\right\|\ge \left\|y\right\| - \left\|x\right\|$, hence
$$
\left(1 + \left\|x\right\| \right)^M \left(1 + \left\|x+ y\right\| \right)^M \ge \left(1 + \left\|x\right\| \right)^M \left(1 + \max\left(0, \left\|y\right\| - \left\|x\right\| \right)\right)^M.
$$
If $ \left\|x\right\| \le \frac{\left\|y\right\|}{2}$ then $\left\|y\right\| - \left\|x\right\| \ge \frac{\left\|y\right\|}{2}$ and
\begin{equation}\label{nt_triangle_eqn}
\left(1 + \left\|x\right\| \right)^M \left(1 + \left\|x+ y\right\| \right)^M > \left( \frac{\left\|y\right\|}{2}\right)^M.
\end{equation}
Clearly if $ \left\|x\right\| > \frac{\left\|y\right\|}{2}$ then the condition \eqref{nt_triangle_eqn} also holds, hence \eqref{nt_triangle_eqn} is always true.
It turns out from $\left\|-x-\Delta\right\|\ge \frac{\left\|\Delta\right\|}{2} $ and \eqref{nt_triangle_eqn} that
$$
\mathrm{inf}_{x \in \mathbb{R}^{2N}, ~\left\| x\right\|\le \frac{\left\| \Delta\right\|}{2 },~s\in \mathbb{R}^{2N}} \left(1 + \left\|s - \Delta - x\right\| \right)^{M} \left(1 + \left\|s\right\| \right)^{M} >\left\|\frac{\Delta}{4}\right\|^M,
$$
hence from \eqref{mp_xi_1_eqn} it turns out
\begin{equation}\nonumber
\left| \xi_1\left(x \right)\right| \le \frac{4^MC^{c}_{M}C^{d}_{2M}}{\left\|\Delta\right\|^M} \times \int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|t\right\| \right)^{M}} dt.
\end{equation}
There is the well known integral
$$
\int_{x \in \mathbb{R}^{2N}, ~\left\| x\right\|\le \frac{\left\| \Delta\right\|}{2 }} 1~ dx = \frac{\pi^N}{\Gamma\left(N+1 \right) }\left( \frac{\left\|\Delta\right\|}{2}\right)^{2N}
$$
where $\Gamma$ is the Euler gamma function. If $M > 2N$ then the integral $C' = \int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|t\right\| \right)^{M}} dt$ is convergent, and it turns out
\begin{equation}\nonumber
\left\| \xi_1\right\|^2_2 \le \left( \frac{4^MC'C^{c}_{M}C^{d}_{2M}}{\left\|\Delta\right\|^M}\right)^2 \int_{x \in \mathbb{R}^{2N}, ~\left\| x\right\|\le \frac{\left\| \Delta\right\|}{2 }} 1~ dx = \left( \frac{4^MC'C^{c}_{M}C^{d}_{2M}}{\left\|\Delta\right\|^M}\right)^2\frac{\pi^N}{\Gamma\left(N+1 \right) }\left( \frac{\left\|\Delta\right\|}{2}\right)^{2N}.
\end{equation}
If $M = 2N+ m$ then from the above equation it turns out that there is $C_1 > 0$ such that
\begin{equation}\label{mp_xi1_eqn}
\left\| \xi_1\right\|^2_2 \le \frac{C_1}{\left\|\Delta\right\|^m}.
\end{equation}
If $\left( \cdot, \cdot \right)_{L^2\left(\mathbb{R}^{2N} \right) }$ is the scalar product given by \eqref{fourier_scalar_product_eqn} then from \eqref{mp_inv_eqn} it turns out
\begin{equation*}
\begin{split}
\left| \xi_2\left(x \right) \right|\le\left|\int c\left( t - \Delta - x \right) d\left(t \right)e^{\pi i x \cdot \Theta t} dt\right|=\\
=\left|\left( c\left( \bullet- \Delta-x\right) ,~ d\left(\bullet\right)e^{\pi i x\cdot \Theta\bullet} \right)_{L^2\left(\mathbb{R}^{2N} \right) } \right|=\\
=\left|\left( \mathcal{F}\left( c\left( \bullet- \Delta-x\right)\right) ,~ \mathcal{F}\left( d\left(\bullet\right)e^{\pi i x\cdot \Theta\bullet} \right) \right)_{L^2\left(\mathbb{R}^{2N} \right) } \right|=\\
= \left|\int_{\mathbb{R}^{2N}} \mathcal{F}\left( c\left( \bullet - \Delta -x\right)\right)\left(u\right)\mathcal{F}\left(d\left(\bullet\right)e^{\pi i x\cdot \Theta\bullet} \right)\left(u\right) du \right|\le\\
\le \int_{\mathbb{R}^{2N}}\left| e^{-i\left(-\Delta - x\right) \cdot u}\mathcal{F}\left( c\right)\left( u\right) \mathcal{F}\left(d\right)\left(u+\pi\Theta x\right)\right| du \le\\
\le \int_{\mathbb{R}^{2N}}\frac{C^{\mathcal{F}\left( c\right)}_{3M}}{\left(1 + \left\|u\right\| \right)^{3M}}\frac{C^{\mathcal{F}\left(d\right)}_{2M}}{\left(1 + \left\|u-\pi\Theta x\right\| \right)^{2M}} du \le\\
\le \sup_{x \in \mathbb{R}^{2N}, ~\left\| x\right\|> \frac{\left\| \Delta\right\|}{2 },~ s \in \mathbb{R}^{2N}}~ \frac{C^{\mathcal{F}\left( c\right)}_{3M}}{\left(1 + \left\|s\right\| \right)^{M}}\frac{C^{\mathcal{F}\left(d\right)}_{2M}}{\left(1 + \left\|s-\pi\Theta x\right\| \right)^{M}} \sup_{u \in \mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|u-\pi\Theta x\right\| \right)^{M}\left(1 + \left\|u\right\| \right)^{M}}\times\\
\times\int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|u\right\| \right)^{M}} du.
\end{split}
\end{equation*}
Since we consider the asymptotic dependence $\left\|\Delta\right\|\to \infty$, only large values of $\left\|\Delta\right\|$ are interesting, so we can suppose that $\left\|\Delta\right\| > 2$. If $\left\|\Delta\right\| > 2$ then from $\left\|x\right\|> \frac{\left\| \Delta\right\|}{2 }$ it follows that $\left\|\pi\Theta x\right\| > 1$, and from \eqref{nt_triangle_eqn} it follows that
\begin{equation*}
\begin{split}
\left(1 + \left\|u\right\| \right)^{M}\left(1 + \left\|u-\pi\Theta x\right\| \right)^{M} > \left\|\pi\Theta x\right\|^M,\\
\inf_{x \in \mathbb{R}^{2N}, ~\left\| x\right\|> \frac{\left\| \Delta\right\|}{2 },~ s \in \mathbb{R}^{2N}}~ \left(1 + \left\|s\right\| \right)^{M}\left(1 + \left\|s-\pi\Theta x\right\| \right)^{M} > \left\|\frac{\pi\Theta\Delta}{4}\right\|^M,
\end{split}
\end{equation*}
hence
\begin{equation*}
\begin{split}
\left| \xi_2\left(x \right) \right| \le \frac{C^{\mathcal{F}\left( c\right)}_{3M}C^{\mathcal{F}\left(d\right)}_{2M}}{\left\|\frac{\pi\Delta}{4}\right\|^M \left\|\pi\Theta x\right\|^M} \int_{\mathbb{R}^{2N}}\frac{1}{\left(1+\left\|u\right\| \right)^M }du.
\end{split}
\end{equation*}
If $m \ge 1$ and $M = 2N + m$ then the integral $C'=\int_{\mathbb{R}^{2N}}\frac{1}{\left(1+\left\|u\right\| \right)^M }du$ is convergent and
$$
\left| \xi_2\left(x \right) \right| \le \frac{C^{\mathcal{F}\left( c\right)}_{3M}C^{\mathcal{F}\left(d\right)}_{2M}C'}{\left\|\frac{\pi\Delta}{4}\right\|^M \left\|\pi\Theta x\right\|^M}.
$$
Taking into account \eqref{nt_simpectic_theta_eqn} and $\theta = 2$ one has
\begin{equation}\label{mp_2x_eqn}
\left\|\Theta z\right\|= \left\|2 z\right\|; ~ \forall z \in \mathbb{R}^{2N}.
\end{equation}
It follows that
$$
\left\| \xi_2\right\|^2_2 \le \int_{x \in \mathbb{R}^{2N},~ \left\| x\right\| > \frac{\left\| \Delta\right\| }{2}}\left(\frac{C^{\mathcal{F}\left( c\right)}_{3M}C^{\mathcal{F}\left(d\right)}_{2M}C'}{\left\|\frac{2\pi\Delta}{4}\right\|^M \left\|2\pi x\right\|^M} \right)^2dx .
$$
Since the above integral is convergent, there is a constant $C_2$ such that
\begin{equation}\label{mp_xi2_eqn}
\left\| \xi_2\right\|^2_2 \le \frac{C_2}{\left\|\frac{\pi\Delta}{4}\right\|^{2M}}.
\end{equation}
Since $\xi_1 \perp \xi_2$ one has $\left\| \xi\right\|_2= \sqrt{\left\| \xi_1\right\|^2_2+\left\| \xi_2\right\|^2_2}$, and taking into account \eqref{mp_xi1_eqn}, \eqref{mp_xi2_eqn} it follows that for any $m \in \mathbb{N}$ there is $C^{a,b}_m > 0$ such that
$$
\left\| \xi\right\|_2 = \left\| \mathcal{F}\left(a_{\Delta}\times b\right)\right\|_2 \le \frac{C^{a,b}_m}{\left( 1+\left\|\Delta \right\|\right)^m }.
$$
From \eqref{mp_inv_eqn} it turns out
$$
\left\|a_{\Delta}\times b\right\|_2=\left\| \mathcal{F}\left(a_{\Delta}\times b\right)\right\|_2 \le \frac{C^{a,b}_m}{\left( 1+\left\|\Delta \right\|\right)^m }.
$$
\end{proof}
\begin{prop}\label{mp_factor_prop}\cite{moyal_spectral}
The algebra $\mathcal{S}\left(\mathbb{R}^{2N}, \star_\theta \right)$ has the (nonunique) factorization property: for all $h \in \mathcal{S}\left(\mathbb{R}^{2N} \right)$ there exist $f,g \in \mathcal{S}\left(\mathbb{R}^{2N} \right)$ such that $h = f \star_\theta g$.
\end{prop}
\begin{lem}\label{mp_weak_lem}
The following conditions hold:
\begin{enumerate}
\item[(i)] Let $\left\{a_n \in C_b\left(\mathbb{R}^{2N}_\theta\right)\right\}_{n \in \mathbb{N}}$ be a sequence such that
\begin{itemize}
\item $\left\{a_n \right\}$ is weakly-* convergent (cf. Definition \ref{nt_*w_defn}),
\item if $a = \lim_{n \to \infty} a_n$ in the sense of weak-* convergence then $a \in C_b\left(\mathbb{R}^{2N}_\theta\right)$.
\end{itemize}
Then the sequence $\left\{a_n \right\}$ is convergent in the sense of the weak topology (cf. Definition \ref{weak_topology}) and $a$ is the limit of $ \left\{a_n \right\}$ with respect to the weak topology. Moreover if $\left\{a_n \right\}$ is an increasing or decreasing sequence of self-adjoint elements then $\left\{a_n \right\}$ is convergent in the sense of the strong topology (cf. Definition \ref{strong_topology}) and $a$ is the limit of $ \left\{a_n \right\}$ with respect to the strong topology.
\item[(ii)] If $\left\{a_n \right\}$ is strongly and/or weakly convergent (cf. Definitions \ref{strong_topology}, \ref{weak_topology}) and $a = \lim_{n \to \infty} a_n$ is the strong and/or weak limit then $\left\{a_n \right\}$ is weakly-* convergent and $a$ is the limit of $\left\{a_n \right\}$ in the sense of weak-* convergence.
\end{enumerate}
\end{lem}
\begin{proof}
(i) If $\left\langle \cdot, \cdot \right\rangle: \mathcal{S}'\left(\mathbb{R}^{2N} \right)\times \mathcal{S}\left(\mathbb{R}^{2N} \right) \to \mathbb{C}$ is the natural pairing then one has
\begin{equation}\label{mp_an_lim_eqn}
\lim_{n \to \infty} \left\langle a_n, b \right\rangle = \left\langle a, b \right\rangle; ~~ \forall b \in \mathcal{S}\left(\mathbb{R}^{2N} \right).
\end{equation}
Let $\xi, \eta \in L^2\left(\mathbb{R}^{2N} \right)$ and let $\left\{x_j \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)\right\}_{j \in \mathbb{N}}$, $\left\{y_j \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)\right\}_{j \in \mathbb{N}}$ be sequences such that there are the following limits
\begin{equation}\label{mp_xn_lim_eqn}
\begin{split}
\lim_{j \to \infty} x_j = \xi,~~\lim_{j \to \infty} y_j = \eta
\end{split}
\end{equation}
in the topology of the Hilbert space $L^2\left(\mathbb{R}^{2N} \right)$. If $\left(\cdot, \cdot \right): L^2\left(\mathbb{R}^{2N} \right) \times L^2\left(\mathbb{R}^{2N} \right) \to \mathbb{C}$ is the Hilbert pairing then from \eqref{mp_star_ext_eqn}, \eqref{mp_xn_lim_eqn} it follows that
\begin{equation}\nonumber
\left(a_n \xi, \eta \right) = \lim_{j \to \infty} \left(a_n x_j ,y_j \right)=\lim_{j \to \infty} \left\langle a_n, x_j \star_\theta y_j \right\rangle,
\end{equation}
hence, taking into account \eqref{mp_an_lim_eqn}, one has
\begin{equation}
\lim_{n \to \infty }\left(a_n \xi, \eta \right)=\lim_{n \to \infty }\lim_{j \to \infty}\left(a_n x_j, y_j \right)=\lim_{n \to \infty }\lim_{j \to \infty} \left\langle a_n, x_j \star_\theta y_j \right\rangle = \lim_{j \to \infty} \left\langle a, x_j \star_\theta y_j \right\rangle=\left(a \xi, \eta \right),
\end{equation}
i.e. $\left\{a_n \right\}$ is weakly convergent to $a$. If $\left\{a_n \right\}$ is an increasing sequence then $a_n \le a$ for any $n \in \mathbb{N}$ and from the Lemma \ref{increasing_convergent_w} it turns out that $\left\{a_n \right\}$ is strongly convergent. Clearly the strong limit coincides with the weak one.
Similarly one can prove that if $\left\{a_n \right\}$ is decreasing then $\left\{a_n \right\}$ is strongly convergent.\\
(ii) If $b \in \mathcal{S}\left(\mathbb{R}^{2N} \right)$ then from the Proposition \ref{mp_factor_prop} it follows that $b = x \star_{\theta} y$ where $x,y \in \mathcal{S}\left(\mathbb{R}^{2N} \right)$. Since the sequence $\left\{a_n \right\}$ is strongly and/or weakly convergent it turns out that
$$
\left(x, a_n \star_{\theta} y \right) = \left\langle a_n, x \star_\theta y\right\rangle = \left\langle a_n, b\right\rangle
$$
is convergent. Hence $\left\{a_n \right\}$ is weakly-* convergent.
\end{proof}
\paragraph*{}There are elements $f_{mn} \in \mathcal{S}\left( \mathbb{R}^2\right)$ which have very useful properties. To present $f_{mn}$ explicitly, we use polar coordinates $q + ip= \rho e^{i\alpha}$, where $p, q \in \mathbb{R}$, so that $\rho^2= p^2 + q^2$. One has
\begin{equation*}
\begin{split}
f_{mn}= 2\left(-1\right)^n\sqrt{\frac{n!}{m!}}e^{i\alpha\left(m - n\right)}\rho^{m-n}L^{m-n}_n\left(\rho^2\right)e^{-\rho^2/2},\\
f_{nn}\left(\rho, \alpha\right)= 2\left(-1\right)^nL_n\left(\rho^2\right)e^{-\rho^2/2}
\end{split}
\end{equation*}
where $L^{m-n}_n$ are the generalized Laguerre polynomials and $L_n = L^0_n$. From these properties it follows that $C_0\left(\mathbb{R}^{2}_\theta\right)$ is the $C^*$-norm completion of the linear span of the $f_{mn}$ (cf. {\rm\cite{varilly_bondia:phobos}}).
\begin{lem}\label{mp_osc_lem}
{\rm\cite{varilly_bondia:phobos}}
\label{lm:osc-basis}
Let $m,n,k,l \in \mathbb{N}$. Then $f_{mn} \star_\theta f_{kl} = \delta_{nk}f_{ml}$ and $f_{mn}^* = f_{nm}$. Thus $f_{nn}$ is an orthogonal projection and $f_{mn}$ is nilpotent for $m \neq n$. Moreover, $\left\langle f_{mn}, f_{kl}\right\rangle = 2^N\delta_{mk}\delta_{nl}$. The family $\left\{f_{mn} : m,n\in \mathbb{N}^0\right\} \subset \mathcal{S}\left( \mathbb{R}^{2}\right) \subset L^2(\mathbb{R}^{2})$ is an orthogonal basis.
\end{lem}
\begin{prop}\label{mp_fmn}\cite{moyal_spectral,varilly_bondia:phobos}
Let $N = 1$. Then $\mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)=\mathcal{S}\left(\mathbb{R}^{2}_\theta\right) $ has a Fr\'echet algebra isomorphism with the matrix algebra of rapidly decreasing double sequences $c = (c_{mn})$ such that, for each $k \in \mathbb{N}$,
\begin{equation}\label{mp_matr_norm}
r_k(c) \stackrel{\mathrm{def}}{=} \biggl( \sum_{m,n=0}^\infty \theta^{2k} \left( m+\frac{1}{2}\right)^k \left( n+\frac{1}{2}\right)^k |c_{mn}|^2 \biggr)^{1/2}
\end{equation}
is finite, topologized by all the seminorms $(r_k)$; via the decomposition $f = \sum_{m,n=0}^\infty c_{mn} f_{mn}$ of~$\mathcal{S}(\mathbb{R}^2)$ in the $\{f_{mn}\}$ basis. The twisted product $f \star_\theta g$ is the matrix product $ab$, where
\begin{equation}\label{mp_mult_eqn}
\left( ab\right)_{mn} \stackrel{\mathrm{def}}{=} \sum_{k= 0}^{\infty} a_{mk}b_{kn}.
\end{equation}
For $N > 1$, $\mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$ is isomorphic to the (projective) tensor product of $N$ matrix algebras of this kind, i.e.
\begin{equation}\label{mp_tensor_prod}
\mathcal{S}\left(\mathbb{R}^{2N}_\theta\right) \cong \underbrace{\mathcal{S}\left(\mathbb{R}^{2}_\theta\right)\otimes\dots\otimes\mathcal{S}\left(\mathbb{R}^{2}_\theta\right)}_{N-\mathrm{times}}
\end{equation}
with the projective topology induced by the seminorms $r_k$ given by \eqref{mp_matr_norm}.
\end{prop}
\begin{rem}
If $A$ is the $C^*$-norm completion of the matrix algebra with the norm \eqref{mp_matr_norm} then $A \approx \mathcal K$, i.e.
\begin{equation}\label{mp_2_eqn}
C_0\left(\mathbb{R}^{2}_\theta\right) \approx \mathcal K.
\end{equation}
From \eqref{mp_tensor_prod} and \eqref{mp_2_eqn} it follows that
\begin{equation}\label{mp_2N_eqn}
C_0\left(\mathbb{R}^{2N}_\theta\right) \cong \underbrace{C_0\left(\mathbb{R}^{2}_\theta\right)\otimes\dots\otimes C_0\left(\mathbb{R}^{2}_\theta\right)}_{N-\mathrm{times}} \approx \underbrace{\mathcal K\otimes\dots\otimes\mathcal K}_{N-\mathrm{times}} \approx \mathcal K
\end{equation}
where $\otimes$ means the minimal or maximal tensor product ($\mathcal{K}$ is nuclear, hence both products coincide).
\end{rem}
\subsection{Infinite coverings}
\paragraph{}
Let us consider a sequence
\begin{equation}\label{nt_long_seq_eqn}
\mathfrak{S}_{C\left( \mathbb{T}^n_\Theta\right) } =\left\{ C\left( \mathbb{T}^n_\Theta\right) =C\left( \mathbb{T}^n_{\Theta_0}\right) \xrightarrow{\pi^1} \dots \xrightarrow{\pi^j} C\left( \mathbb{T}^n_{\Theta_j}\right) \xrightarrow{\pi^{j+1}} \dots\right\}
\end{equation}
of finite coverings of noncommutative tori. The sequence \eqref{nt_long_seq_eqn} satisfies the Definition \ref{comp_defn}, i.e. $\mathfrak{S}_{C\left( \mathbb{T}^n_\Theta\right) } \in \mathfrak{FinAlg}$.
\begin{empt}\label{nt_mp_prel_lim}
Let $\Theta = J \theta$ where $\theta \in \mathbb{R} \backslash \mathbb{Q}$ and
$$
J =
\begin{pmatrix}
0 & 1_N \\
-1_N & 0
\end{pmatrix}.
$$
Denote by $C\left( \mathbb{T}^{2N}_\theta\right) \stackrel{\text{def}}{=} C\left( \mathbb{T}^{2N}_\Theta\right)$. Let $\left\{p_k \in \mathbb{N}\right\}_{k \in \mathbb{N}}$ be an infinite sequence of natural numbers such that $p_k > 1$ for any $k$, and let $m_j = \prod_{k = 1}^{j} p_k$. From \ref{nt_fin_cov} it follows that there is a sequence of *-homomorphisms
\begin{equation}\label{nt_long_seq_spec_eqn}
\mathfrak{S}_\theta= \left\{ C\left(\mathbb{T}^{2N}_\theta\right) \to C\left(\mathbb{T}^{2N}_{\theta/m_1^{2}}\right) \to C\left(\mathbb{T}^{2N}_{\theta/m_2^{2}}\right) \to\dots \to C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)\to \dots\right\}
\end{equation}
such that
\begin{enumerate}
\item[(a)] For any $j \in \mathbb{N}$ there are generators $u_{j-1,1},\dots, u_{j-1,2N}\in U\left(C\left(\mathbb{T}^{2N}_{\theta/m_{j-1} ^{2}}\right)\right)$ and generators $u_{j,1},\dots, u_{j,2N}\in U\left(C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)\right)$ such that the *-homomorphism $ C\left(\mathbb{T}^{2N}_{\theta/m_{j-1}^{2}}\right)\to C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)$ is given by
$$
u_{j-1,k} \mapsto u^{p_j}_{j,k}; ~~ \forall k =1,\dots, 2N.
$$
There are generators $u_1,\dots,u_{2N} \in U\left( C\left(\mathbb{T}^{2N}_\theta\right)\right) $ such that the *-homomorphism $C\left(\mathbb{T}^{2N}_\theta\right) \to C\left(\mathbb{T}^{2N}_{\theta/m_1^{2}}\right)$ is given by
$$
u_{j} \mapsto u^{p_1}_{1,j}; ~~ \forall j =1,\dots, 2N,
$$
\item[(b)] For any $j \in \mathbb{N}$ the triple $\left(C\left(\mathbb{T}^{2N}_{\theta/m_{j - 1}^{2}}\right), C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right), \mathbb{Z}^{2N}_{p_j}\right)$ is a noncommutative finite-fold covering,
\item[(c)] There is the sequence of groups and epimorphisms
\begin{equation*}
\mathbb{Z}^{2N}_{m_1} \leftarrow\mathbb{Z}^{2N}_{m_2} \leftarrow \dots
\end{equation*}
which is equivalent to the sequence
\begin{equation*}
\begin{split}
G\left(C\left(\mathbb{T}^{2N}_{\theta/m_1^{2}}\right)~|~ C\left(\mathbb{T}^{2N}_{\theta}\right)\right)\leftarrow G\left(C\left(\mathbb{T}^{2N}_{\theta/m_2^{2}}\right)~|~ C\left(\mathbb{T}^{2N}_{\theta}\right)\right)\leftarrow\dots\\
\dots\leftarrow G\left(C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)~|~ C\left(\mathbb{T}^{2N}_{\theta}\right)\right)\leftarrow\dots.
\end{split}
\end{equation*}
\end{enumerate}
The sequence \eqref{nt_long_seq_spec_eqn} is a specialization of \eqref{nt_long_seq_eqn}, hence $\mathfrak{S}_\theta \in \mathfrak{FinAlg}$. Denote by $\widehat{C\left(\mathbb{T}^{2N}_{\theta}\right) } \stackrel{\text{def}}{=} \varinjlim C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)$ and $\widehat{G} \stackrel{\text{def}}{=} \varprojlim G\left(C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)~|~ C\left(\mathbb{T}^{2N}_{\theta}\right)\right)$. The group $\widehat{G}$ is Abelian because it is the inverse limit of Abelian groups. Denote by $0_{\widehat{G}}$ (resp. "+") the neutral element of $\widehat{G}$ (resp. the product operation of $\widehat{G}$).
\end{empt}
\begin{empt}\label{mp_weak_constr}
For any $\widetilde{a} \in \mathcal{S}\left( \mathbb{R}^{2N}_\theta\right)$ it turns out from \eqref{mp_sooth_sum_eqn} that the series
$$
a_j = \sum_{g \in \ker\left( \mathbb{Z}^{2N} \to \mathbb{Z}^{2N}_{m_j}\right) } g\widetilde{a}
$$
is point-wise convergent and $a_j$ satisfies the following conditions:
\begin{itemize}
\item $a_j \in \mathcal{S}'\left(\mathbb{R}^{2N} \right)$,
\item $a_j$ is a smooth $m_j$-periodic function.
\end{itemize}
It follows that the above series is weakly-* convergent (cf. Definition \ref{nt_*w_defn}) and from the Lemma \ref{mp_weak_lem} it turns out that the series is weakly convergent. From \eqref{fourier_from_r_to_z_eqn} it follows that
$$
a_j = \sum_{k \in \mathbb{Z}^{2N}} c_k \exp\left(2\pi i \frac{k}{m_j} ~\cdot \right)
$$
where $\left\{c_k \in \mathbb{C}\right\}_{k \in \mathbb{Z}^{2N}}$ are rapidly decreasing coefficients given by
\begin{equation}\label{mp_fourier_torus_eqn}
c_k = \frac{1}{m_j^{2N} } \int_{\mathbb{R}^{2N}} \widetilde{a}\left(x \right) \exp\left(-2\pi i \frac{k}{m_j}\cdot x\right) dx = \frac{1}{m_j^{2N} } \mathcal{F}\widetilde{a}\left( \frac{k}{m_j}\right) .
\end{equation}
On the other hand
\begin{equation}\label{mp_weak_lim_eqn}
\widetilde a = \lim_{j \to \infty} a_j
\end{equation}
in the sense of weak-* convergence, and from the Lemma \ref{mp_weak_lem} it follows that \eqref{mp_weak_lim_eqn} is a limit in the sense of the weak topology.
\end{empt}
\begin{lem}\label{mp_strong_lem}
Let $\overline{G}_j = \ker\left( \mathbb{Z}^{2N} \to \mathbb{Z}^{2N}_{m_j}\right)$. Let $\widetilde{a} \in \mathcal{S}\left( \mathbb{R}^{2N}_\theta\right)$ and let
\begin{equation}\label{mp_aj_eqn}
a_j = \sum_{g \in\overline{G}_j } g\widetilde{a}
\end{equation}
where the sum of the series is understood in the sense of weak-* convergence. The following conditions hold:
\begin{enumerate}
\item [(i)] $a_j \in C^\infty\left(\mathbb{R}^{2N} \right)$,
\item[(ii)] The series \eqref{mp_aj_eqn} is convergent with respect to the strong topology (cf. Definition \ref{strong_topology}),
\item[(iii)] There is the following strong limit
\begin{equation}\label{mp_ta_eqn}
\widetilde{a} = \lim_{j \to \infty} a_j.
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
(i) From \eqref{mp_fourier_torus_eqn} it turns out that
$$
a_j = \sum_{k \in \mathbb{Z}^{2N}} c_k U_k
$$
where $\left\{c_k\right\}$ is a rapidly decreasing sequence, hence $a_j \in C^\infty\left(\mathbb{R}^{2N} \right)$.\\
(ii) From the Lemma \ref{mp_weak_lem} it turns out that the series
$$
c = \sum_{g \in \mathbb{Z}^{2N}}g\left(\widetilde{a}^*\widetilde{a} \right)
$$
is strongly convergent, and the series \eqref{mp_aj_eqn} is weakly convergent. If $k = \max\left(1, \sqrt{\left\|c \right\|} \right)$ then for any $\eta \in L^2\left( \mathbb{R}^{2N}\right)$ and any subset $G \subset \mathbb{Z}^{2N}$ the following condition holds
$$
\left\|\left( \sum_{g \in G}g \widetilde{a}\right)\eta \right\|_2 \le k\left\|\eta \right\|_2
$$
where $\left\|\cdot \right\|_2$ is given by \eqref{nt_l2_norm_eqn}. If $\xi \in L^2\left(\mathbb{R}^{2N} \right)$ then for any $\varepsilon > 0$ there is $\widetilde{b} \in \mathcal{S}\left(\mathbb{R}^{2N} \right)$ such that
\begin{equation}\label{mp_e_k_eqn}
\left\|\xi - \widetilde{b}\right\|_2 < \frac{\varepsilon}{2k}.
\end{equation}
From the Lemma \ref{mp_ab_delta_lem} it follows that for any $m \in \mathbb{N}$ there is a constant $C_m > 0$ such that
\begin{equation}\label{mp_est_eqn}
\left\| \left(g \widetilde{a}\right)\widetilde{b}\right\|_2 < \frac{C_m}{\left(1+ \left\|g\right\|\right)^m };~ \forall g \in \mathbb{Z}^{2N}
\end{equation}
where $\left\|g\right\|$ is given by \eqref{mp_znorm_eqn}. If $m > 2N$ then there is $M \in \mathbb{N}$ such that for $G_0 = \left\{-M, \dots, M\right\}^{2N} \subset \mathbb{Z}^{2N}$ the following condition holds
\begin{equation}\label{mp_cm_eqn}
\sum_{g \in \mathbb{Z}^{2N} \backslash G_0} \frac{C_m}{\left(1+ \left\|g\right\|\right)^m } < \frac{\varepsilon}{2}.
\end{equation}
It follows that
\begin{equation}\nonumber
\begin{split}
\left\|\left( \sum_{g \in \overline{G}_j } g\widetilde{a}- \sum_{g \in \overline{G}_j\bigcap G_0}g\widetilde{a}\right) \widetilde{b} \right\|_2 = \left\|\left( \sum_{g \in \overline{G}_j\backslash \left(\overline{G}_j\bigcap G_0\right) } g\widetilde{a}\right) \widetilde{b} \right\|_2 < \sum_{g \in \overline{G}_j \backslash \left(\overline{G}_j\bigcap G_0\right) } \frac{C_m}{\left(1+ \left\|g\right\|\right)^m } < \frac{\varepsilon}{2}.
\end{split}
\end{equation}
On the other hand from \eqref{mp_e_k_eqn}-\eqref{mp_cm_eqn} one has
\begin{equation}\nonumber
\begin{split}
\left\|\left( \sum_{g \in \overline{G}_j } g\widetilde{a}- \sum_{g \in \overline{G}_j\bigcap G_0}g\widetilde{a}\right) \xi \right\|_2 < \left\|\left( \sum_{g \in \overline{G}_j \backslash \left(\overline{G}_j\bigcap G_0\right) } g\widetilde{a}\right) \widetilde{b} \right\|_2 + \left\|\left( \sum_{g \in \overline{G}_j \backslash G_0 } g\widetilde{a}\right) \left(\xi - \widetilde{b} \right) \right\|_2 <\\
<\frac{\varepsilon}{2}+ k \left\|\xi- \widetilde{b}\right\|_2 < \varepsilon.
\end{split}
\end{equation}
The above equation means that the series \eqref{mp_aj_eqn} is strongly convergent.\\
(iii) If $j \in \mathbb{N}$ is such that $m_j > M$ then
\begin{equation}\nonumber
\left\|\left( a_j - \widetilde{a}\right) \xi \right\|_2 = \left\|\left( \sum_{g \in \overline{G}_j}g \widetilde{a}- \widetilde{a}\right) \xi \right\|_2= \left\|\left( \sum_{g \in \overline{G}_j\backslash \{0\}}g \widetilde{a} \right) \xi \right\|_2
\end{equation}
where $0$ is the neutral element of $\mathbb{Z}^{2N}$.
However from $m_j > M$ it turns out $G_0 \bigcap \left( \overline{G}_j\backslash \left\{0\right\}\right) =\emptyset$, so from \eqref{mp_e_k_eqn}-\eqref{mp_cm_eqn} one has
\begin{equation}\nonumber
\begin{split}
\left\|\left( a_j - \widetilde{a}\right) \xi \right\|_2 < \left\|\left( a_j - \widetilde{a}\right) \widetilde{b} \right\|_2 +k \left\| \xi- \widetilde{b} \right\|_2 <\\
< \left\|\left( \sum_{g \in \overline{G}_j}g \widetilde{a}- \widetilde{a}\right) \widetilde{b} \right\|_2+ \frac{\varepsilon}{2}= \left\|\left( \sum_{g \in \overline{G}_j\backslash \left\{0\right\}}g \widetilde{a} \right) \widetilde{b} \right\|_2 + \frac{\varepsilon}{2}< \varepsilon.
\end{split}
\end{equation}
The above equation means that there is the strong limit \eqref{mp_ta_eqn}.
\end{proof}
\begin{cor}\label{mp_strong_cor}
Any $\widetilde{a} \in \mathcal{S}\left( \mathbb{R}^{2N}_\theta\right)$ lies in $\widehat{C\left( \mathbb{T}^{2N}_\theta\right)}''$.
\end{cor}
\begin{proof}
There is the strong limit \eqref{mp_ta_eqn}, i.e. $\widetilde{a} = \lim_{j \to \infty} a_j$. Since $a_j \in \widehat{C\left( \mathbb{T}^{2N}_\theta\right)}$ for any $j \in \mathbb{N}$ it turns out that $\widetilde{a} = \lim_{j \to \infty} a_j \in \widehat{C\left( \mathbb{T}^{2N}_\theta\right)}''$.
\end{proof}
\subsubsection{Equivariant representation}
\paragraph*{}
Denote by $\left\{U^{\theta/m_j^{2}}_{k}\in U\left( C\left(\mathbb{T}^{2N}_{\theta/m_{j}^{2}}\right)\right) \right\}_{k \in \mathbb{Z}^{2N}}$ the basis of $C\left(\mathbb{T}^{2N}_{\theta/m_{j}^{2}}\right)$. Similarly to \eqref{nt_l2r_eqn} there is the representation $\pi_j: C\left(\mathbb{T}^{2N}_{\theta/m_{j}^{2}}\right) \to B\left(L^2\left( \mathbb{R}^{2N}\right) \right)$ given by
$$
\pi_j\left(U^{\theta/m_j^{2}}_{k} \right) = \exp\left(2\pi i\frac{k}{m_j}~\cdot\right).
$$
There is the following commutative diagram.
\newline
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
C\left(\mathbb{T}^{2N}_{\theta/m_{j}^{2}}\right) & & C\left(\mathbb{T}^{2N}_{\theta/m_{j+1}^{2}}\right) \\
& B\left(L^2\left(\mathbb{R}^{2N}\right)\right) & \\};
\path[-stealth]
(m-1-1) edge node [left] {} (m-1-3)
(m-1-1) edge node [right] {$~\pi_j$} (m-2-2)
(m-1-3) edge node [above] {$\pi_{j+1}$} (m-2-2);
\end{tikzpicture}
\newline
This diagram defines a faithful representation $\widehat{\pi}:\widehat{C\left(\mathbb{T}^{2N}_{\theta}\right) } \to B\left(L^2\left(\mathbb{R}^{2N}\right)\right)$. There is the action $\mathbb{Z}^{2N} \times \mathbb{R}^{2N} \to \mathbb{R}^{2N}$ given by
$$
\left(k, x \right) \mapsto k + x.
$$
The action naturally induces the action of $\mathbb{Z}^{2N}$ on both $L^2\left(\mathbb{R}^{2N}\right)$ and $B\left(L^2\left(\mathbb{R}^{2N}\right)\right)$. In turn, the action of $\mathbb{Z}^{2N}$ on $B\left(L^2\left(\mathbb{R}^{2N}\right)\right)$ induces the action of $\mathbb{Z}^{2N}$ on $\widehat{C\left(\mathbb{T}^{2N}_{\theta}\right) }$.
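On the generators the induced action is simply multiplication by a root of unity: since $\left(gf\right)(x) = f(x + g)$ one has $\left(g\exp\left(2\pi i\frac{k}{m_j}~\cdot\right)\right)(x) = e^{2\pi i\frac{k\cdot g}{m_j}}\exp\left(2\pi i\frac{k}{m_j}\cdot x\right)$ for $g \in \mathbb{Z}^{2N}$, so $\mathbb{Z}^{2N}$ acts on $\pi_j\left(C\left(\mathbb{T}^{2N}_{\theta/m_j^{2}}\right)\right)$ through the finite quotient $\mathbb{Z}^{2N}_{m_j}$, and the plane waves with $k \in m_j\mathbb{Z}^{2N}$ are invariant. The following Python sketch is a minimal numerical illustration of this phase computation; it is restricted to a single coordinate purely for simplicity, and all names in it are ours.
\begin{verbatim}
import numpy as np

m_j = 4                                  # covering index m_j (illustrative value)
x = np.linspace(0.0, 1.0, 257)           # sample points in one coordinate

def u(k):
    """pi_j(U_k) acts by multiplication by exp(2 pi i (k/m_j) x)."""
    return lambda t: np.exp(2j * np.pi * (k / m_j) * t)

def act(g, f):
    """Deck transformation g in Z: (g f)(x) = f(x + g)."""
    return lambda t: f(t + g)

k, g = 3, 1
lhs = act(g, u(k))(x)
rhs = np.exp(2j * np.pi * k * g / m_j) * u(k)(x)   # predicted phase e^{2 pi i kg/m_j}
assert np.allclose(lhs, rhs)                       # Z acts through Z_{m_j}
assert np.allclose(act(g, u(m_j))(x), u(m_j)(x))   # k in m_j Z is invariant
\end{verbatim}
The commutative diagram below organizes exactly this observation.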
There is the following commutative diagram
\newline
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
\mathbb{Z}^{2N} & & G\left(\widehat{C\left(\mathbb{T}^{2N}_\theta\right)}~|~ C\left(\mathbb{T}^{2N}_{\theta}\right)\right) \\
& G_j = G\left(C\left(\mathbb{T}^{2N}_{\theta/m_{j}^{2}}\right)~|~ C\left(\mathbb{T}^{2N}_{\theta}\right)\right) \approx \mathbb{Z}^{2N}_{m_j} & \\};
\path[-stealth]
(m-1-1) edge node [left] {} (m-1-3)
(m-1-1) edge node [right] {} (m-2-2)
(m-1-3) edge node [above] {} (m-2-2);
\end{tikzpicture}
\newline
From the above diagram it follows that there is the natural homomorphism $\mathbb{Z}^{2N} \hookrightarrow \widehat{G}$, and $\mathbb{Z}^{2N}$ is a normal subgroup. Let $J \subset \widehat{G}$ be a set of representatives of $\widehat{G}/\mathbb{Z}^{2N}$, and suppose that $0_{\widehat{G}} \in J$. Any $g \in \widehat{G}$ can be uniquely represented as $g = g_J + g_\mathbb{Z}$ where $g_J \in J$, $g_\mathbb{Z} \in \mathbb{Z}^{2N}$. For any $g_1, g_2 \in \widehat{G}$ denote by $\Phi_J\left(g_1, g_2 \right) \in J$, $\Phi_\mathbb{Z}\left(g_1, g_2 \right) \in \mathbb{Z}^{2N}$ the elements such that
$$
g_1 + g_2 = \Phi_J\left(g_1, g_2 \right) + \Phi_\mathbb{Z}\left(g_1, g_2 \right).
$$
Let us define an action of $\widehat{G}$ on $\bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)$ given by
$$
g_1 \left(0,\dots, \underbrace{ \xi}_{g_2^{\text{th}}-\text{place}},\dots,0,\dots \right) = \left(0,\dots, \underbrace{\Phi_\mathbb{Z}\left(g_1, g_2 \right) \xi}_{\Phi_J\left(g_1, g_2 \right) ^{\text{th}}-\text{place}},\dots, 0, \dots \right).
$$
Let $X \subset \bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)$ be given by
$$
X = \left\{ \eta \in \bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)~|~ \eta=\left(0,\dots, \underbrace{ \xi}_{0_{\widehat{G}}^{\text{th}}-\text{place}},\dots,0,\dots \right) \right\}.
$$
Taking into account that $X \approx L^2\left(\mathbb{R}^{2N}\right)$, we will write $L^2\left(\mathbb{R}^{2N}\right) \subset \bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)$ instead of $X \subset\bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)$. This inclusion and the action of $\widehat{G}$ on $\bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)$ enable us to write $\bigoplus_{g \in J}gL^2\left(\mathbb{R}^{2N}\right)$ instead of $\bigoplus_{g \in J}L^2\left(\mathbb{R}^{2N}\right)$. If $\widehat{\pi}^\oplus: \widehat{C\left(\mathbb{T}^{2N}_{\theta}\right) } \to B\left( \bigoplus_{g \in J}gL^2\left(\mathbb{R}^{2N}\right)\right) $ is given by
$$
\widehat{\pi}^\oplus\left( a\right)\left(g\xi \right) = g\left( \widehat{\pi}\left(g^{-1}a \right) \xi\right); ~ \forall a \in \widehat{C\left(\mathbb{T}^{2N}_{\theta}\right) }, ~ \forall g \in J,~ \forall \xi \in L^2\left(\mathbb{R}^{2N} \right)
$$
then $\widehat{\pi}^\oplus$ is an equivariant representation.
\subsubsection{Inverse noncommutative limit}\label{nt_inv_lim_sec}
\paragraph{}
If $\widetilde{a} \in \mathcal{S}\left( \mathbb{R}^{2N}_\theta\right)$ then from the Corollary \ref{mp_strong_cor} it turns out $\widetilde a \in \widehat{C\left( \mathbb{T}^{2N}_\theta\right) }''$. Since $\widehat{\pi}^\oplus$ is a faithful representation of $\widehat{C\left( \mathbb{T}^{2N}_\theta\right) }$, one has an injective homomorphism $\mathcal{S}\left( \mathbb{R}^{2N}_\theta\right) \hookrightarrow \widehat{\pi}^\oplus\left( \widehat{C\left( \mathbb{T}^{2N}_\theta\right) }\right) ''$ of involutive algebras.
For any $\widetilde{a} \in \mathcal{S}\left( \mathbb{R}^{2N}_\theta\right)$ the following condition holds
$$
\sum_{g \in \ker\left(\widehat{G} \to G_j\right)}g\widehat{\pi}^\oplus\left(\widetilde{a} \right) = \sum_{g' \in J}g'\left( \sum_{ g''\in \ker\left(\mathbb{Z}^{2N} \to G_j\right)} g''\widehat{\pi}\left(\widetilde{a} \right)\right)= \sum_{g \in J}gP
$$
where
$$
P = \sum_{ g\in \ker\left(\mathbb{Z}^{2N} \to G_j\right)} g\widehat{\pi}\left(\widetilde{a} \right).
$$
If $J \subset \widehat{G}$ is a set of representatives of $\widehat{G}/\mathbb{Z}^{2N}$ and $g',g'' \in J$ are such that $g' \neq g''$ then the operators $g'P$, $g''P$ act on mutually orthogonal Hilbert subspaces $g'L^2\left( \mathbb{R}^{2N}\right)$, $g''L^2\left( \mathbb{R}^{2N}\right)$ of the direct sum $\bigoplus_{g \in J} gL^2\left( \mathbb{R}^{2N}\right)$, and taking into account $\left\|P\right\|=\left\|gP\right\|$ one has
\begin{equation}\label{nt_norm_equ_eqn}
\left\|\sum_{g \in \ker\left(\widehat{G} \to G_j\right)}g\widehat{\pi}^\oplus\left( \widetilde{a} \right)\right\|= \left\|\sum_{g \in J}gP\right\|=\left\|P\right\|=\left\|\sum_{g \in \ker\left(\mathbb{Z}^{2N} \to G_j\right)}g\widehat{\pi}\left( \widetilde{a} \right)\right\|.
\end{equation}
\begin{lem}\label{nt_long_delta_lem}
Let $a \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$, and let $a_\Delta\in \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$ be given by
\begin{equation}\label{nt_a_delta_eqn}
a_\Delta\left(x\right)= a(x + \Delta);~\forall x \in \mathbb{R}^{2N}
\end{equation}
where $\Delta \in \mathbb{R}^{2N}$. For any $m \in \mathbb{N}$ there is a real constant $C_m > 0$, depending on $a$, such that for any $j \in\mathbb{N}$ the following condition holds
$$
\left\|\sum_{g \in \ker\left(\widehat{G} \to G_j\right)}g\widehat{\pi}^\oplus\left(a_{\Delta} a\right) \right\| \le \frac{C_m}{\left\| \Delta\right\|^m}.
$$
\end{lem}
\begin{proof}
From \eqref{nt_norm_equ_eqn} it follows that
$$
\left\|\sum_{g \in \ker\left(\widehat{G} \to G_j\right)}g\widehat{\pi}^\oplus\left( a_{\Delta} a\right)\right\|=\left\|\sum_{g \in \ker\left(\mathbb{Z}^{2N} \to G_j\right)}g\widehat{\pi}\left( a_{\Delta} a\right)\right\|.
$$
From \eqref{nt_c_f_m_eqn} it follows that for any $f \in \mathcal{S}\left( \mathbb{R}^{2N} \right)$ and any $m \in \mathbb{N}$ there is $C^f_m$ such that
\begin{equation*}
\left|f \left(u\right)\right|<\frac{C^f_m}{\left( 1 + \left\|u\right\|\right)^m }.
\end{equation*}
Let $M = 2N + 1 + m$. From \eqref{nt_norm_estimation} and \eqref{mp_fourier_torus_eqn} it follows that
$$
\left\|\sum_{g \in \ker\left(\mathbb{Z}^{2N} \to \mathbb{Z}^{2N}_{m_j} \right)} g\widehat{\pi}\left(a_\Delta a\right) \right\| \le\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}} \left| \mathcal{F}\left(a_{\Delta} a\right)\left(\frac{l}{m_j} \right) \right|.
$$
On the other hand from \eqref{mp_fourier_eqn} it follows that
$$
\mathcal{F}\left(a_{\Delta} a\right)\left(x \right) = \int_{\mathbb{R}^{2N}}\mathcal{F}a_\Delta\left(x-y \right) \mathcal{F}a\left(y\right)e^{ \pi i y \cdot \Theta x }~dy.
$$
From the above equations it turns out
\begin{equation*}
\begin{split}
\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}} \left| \mathcal{F}\left(a_{\Delta} a\right)\left(\frac{l}{m_j} \right) \right| =\\
=\frac{1}{m_j^{2N}}\sum_{l \in \mathbb{Z}^{2N}} \left|\int \mathcal{F}a\left(\frac{l}{m_j} + \Delta - t \right) \mathcal{F}a\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} dt\right|\le\\
\le \frac{1}{m_j^{2N}}\sum_{l \in \mathbb{Z}^{2N}} \int \left| b\left( t - \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} \right|dt=\\
=\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }}~ \int \left| b\left( t - \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} \right|dt +\\
+ \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \int \left| b\left( t - \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} \right|dt
\end{split}
\end{equation*}
where $b\left( u\right) = \mathcal{F}a\left(-u \right)$, $c\left( u\right) = \mathcal{F}a\left(u \right)$. From \eqref{nt_c_f_m_eqn} it turns out
\begin{equation*}
\begin{split}
\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }}~ \int \left| b\left( t - \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} \right|dt \le\\
\le \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }}~ \int_{\mathbb{R}^{2N}}\frac{C^{b}_{M}}{\left(1 + \left\|t- \Delta - \frac{l}{m_j}\right\| \right)^{M}}~\frac{C^{c}_{2M}}{\left(1 + \left\|t\right\| \right)^{2M}} dt =\\
= \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }}~ \int_{\mathbb{R}^{2N}}\frac{C^{b}_{M}}{\left(1 + \left\|t - \Delta - \frac{l}{m_j}\right\| \right)^{M}\left(1 + \left\|t\right\| \right)^{M} }~\frac{C^{c}_{2M}}{\left(1 + \left\|t\right\| \right)^{M}} dt \le\\
\le \frac{N^\Delta_{m_j}}{m_j^{2N}}~~\sup_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 },~s\in \mathbb{R}^{2N}}~ \frac{C^{b}_{M}C^{c}_{2M}}{\left(1 + \left\|s - \Delta - \frac{l}{m_j}\right\| \right)^{M} \left(1 + \left\|s\right\| \right)^{M}}~\times\\
\times \int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|t\right\| \right)^{M}} dt
\end{split}
\end{equation*}
where $N^\Delta_{m_j} = \left|\left\{l \in \mathbb{Z}^{2N}~| ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }\right\}\right|$. The number $N^\Delta_{m_j}$ can be estimated as the number of points with integer coordinates inside a $2N$-dimensional cube:
$$
N^\Delta_{m_j} < \left\|m_j \Delta\right\|^{2N}.
$$
From $M > 2N$ it turns out that the integral $ \int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|t\right\| \right)^{M}} dt$ is convergent, hence
$$
\frac{1}{m_j^{2N}}~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }}~~\left|\int b\left( t - \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} dt\right| \le
$$
$$
\le C_1' \sup_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 },~s\in \mathbb{R}^{2N}}~ \frac{ \left\|\Delta\right\|^{2N} }{\left(1 + \left\|s - \Delta - \frac{l}{m_j}\right\| \right)^{M} \left(1 + \left\|s\right\| \right)^{M}}
$$
where
$$
C_1' = C^{b}_MC^c_{2M}\int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|t\right\| \right)^{M}} dt.
$$
It turns out from \eqref{nt_triangle_eqn} that
$$
\mathrm{inf}_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 },~s\in \mathbb{R}^{2N}} \left(1 + \left\|s - \Delta - \frac{l}{m_j}\right\| \right)^{M} \left(1 + \left\|s\right\| \right)^{M} >\left\|\frac{\Delta}{4}\right\|^M.
$$
From $M = 2N+1 + m$ it turns out
$$
\frac{1}{m_j^{2N}}~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|\le \frac{\left\| \Delta\right\|}{2 }}~~\left|\int b\left( t - \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} dt\right| \le \frac{C_1}{\left\| \Delta\right\|^{m}}
$$
where $C_1=4^{M}C'_1$. Clearly
\begin{equation*}
\begin{split}
\frac{1}{m_j^{2N}}~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~~\left|\int b\left( t - \Delta - \frac{ l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} dt\right|=\\
= \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{ l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \left| \left( b\left( \bullet- \Delta-\frac{ l}{m_j}\right) ,~ c\left(\bullet\right)e^{\frac{\pi i l}{m_j}\cdot \Theta\bullet} \right) \right|
\end{split}
\end{equation*}
where $\left( \cdot, \cdot \right)$ is the scalar product given by \eqref{fourier_scalar_product_eqn}.
From the $\mathcal{F}$-invariance of $\left( \cdot, \cdot \right)$ it follows that
\begin{equation*}
\begin{split}
\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \left| \left( b\left( \bullet- \Delta-\frac{ l}{m_j}\right) ,~ c\left(\bullet\right)e^{\frac{\pi i l}{m_j}\cdot \Theta\bullet} \right) \right|= \\
= \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \left|\left( \mathcal{F}\left( b\left( \bullet+ \Delta-\frac{ l}{m_j}\right)\right) ,~ \mathcal{F}\left( c\left(\bullet\right)e^{\frac{\pi i l}{m_j}\cdot \Theta\bullet} \right) \right) \right|= \\
= \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \left|\int_{\mathbb{R}^{2N}} \mathcal{F}\left( b\right)\left( \bullet- \Delta -\frac{ l}{m_j}\right)\left(u\right)\mathcal{F}\left(c\left(\bullet\right)e^{\frac{\pi i l}{m_j}\cdot \Theta\bullet} \right)\left(u\right) du \right|\le
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\le \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \int_{\mathbb{R}^{2N}}\left| e^{-i\left(\Delta - \frac{l}{m_j}\right) \cdot u}\mathcal{F}\left( b\right)\left( u\right) \mathcal{F}\left(c\right)\left(u+\Theta\frac{\pi l}{m_j}\right)\right| du \le \\
\le \frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \int_{\mathbb{R}^{2N}}\frac{C^{\mathcal{F}\left( b\right)}_{3M}}{\left(1 + \left\|u\right\| \right)^{3M}}\frac{C^{\mathcal{F}\left(c\right)}_{2M}}{\left(1 + \left\|u-\Theta\frac{\pi l}{m_j}\right\| \right)^{2M}} du \le \\
\le \frac{1}{m_j^{2N}}~~\sup_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 },~ s \in \mathbb{R}^{2N}}~ \frac{C^{\mathcal{F}\left( b\right)}_{3M}}{\left(1 + \left\|s\right\| \right)^{M}}\frac{C^{\mathcal{F}\left(c\right)}_{2M}}{\left(1 + \left\|s-\Theta\frac{\pi l}{m_j}\right\| \right)^{M}}\frac{1}{\left(1 + \left\|u-\Theta\frac{\pi l}{m_j}\right\| \right)^{M}\left(1 + \left\|u\right\| \right)^{M}} \times \\
\times \sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }} \int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|u\right\| \right)^{M}} du.
\end{split}
\end{equation*}
Since we consider the asymptotic dependence as $\left\|\Delta\right\|\to \infty$, only large values of $\left\|\Delta\right\|$ are interesting, so we can suppose that $\left\|\Delta\right\| > 2$.
If $\left\|\Delta\right\| > 2$ then from $\left\|\frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }$ it follows that $\left\|\Theta\frac{\pi l}{m_j}\right\| > 1$, and from \eqref{nt_triangle_eqn} it follows that
\begin{equation*}
\begin{split}
\left(1 + \left\|u\right\| \right)^{M}\left(1 + \left\|u-\Theta\frac{\pi l}{m_j}\right\| \right)^{M} > \left\|\Theta\frac{\pi l}{m_j}\right\|^M, \\
\inf_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 },~ s \in \mathbb{R}^{2N}}~ \left(1 + \left\|s\right\| \right)^{M}\left(1 + \left\|s-\Theta\frac{\pi l}{m_j}\right\| \right)^{M} > \left\|\Theta\frac{\Delta}{4}\right\|^M,
\end{split}
\end{equation*}
hence, taking into account \eqref{mp_2x_eqn}, one has
\begin{equation*}
\begin{split}
\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \left| \left( b\left( \bullet+ \Delta-\frac{l}{m_j}\right) ,~ c\left(\bullet\right)e^{\frac{\pi i l}{m_j}\cdot \Theta\bullet} \right) \right|\le \\
\le \frac{1}{m_j^{2N}} \frac{C_2'}{\left\| \Delta\right\|^M} \sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> 1}\int_{\mathbb{R}^{2N}}\frac{1}{\left\|\frac{2\pi l}{m_j}\right\|^M}\frac{1}{\left(1+\left\|u\right\| \right)^M }du= \\
= \frac{C_2'}{\left\| \Delta\right\|^M} \frac{1}{m_j^{2N}} \left( \sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> 1}~ \frac{1}{\left\|\frac{2\pi l}{m_j}\right\|^M} \right) \left( \int_{\mathbb{R}^{2N}}\frac{1}{\left(1+\left\|u\right\| \right)^M }du\right),
\end{split}
\end{equation*}
where $C'_2=C^{\mathcal{F}\left( b\right)}_{3M}C^{\mathcal{F}\left(c\right)}_{2M}$. Since $M \ge 2N + 1$, the integral $\int_{\mathbb{R}^{2N}}\frac{1}{\left(1 + \left\|u\right\| \right)^{M}} du$ is convergent. The infinite sum in the above equation can be represented as an integral of a step function; in particular, the following condition holds:
$$
\frac{1}{m_j^{2N}} \sum_{l \in \mathbb{Z}^{2N},\left\| \frac{l}{m_j}\right\|> 1}~ \frac{1}{\left\|\frac{2\pi l}{m_j}\right\|^M} = \int_{\mathbb{R}^{2N} \setminus \left\{x \in \mathbb{R}^{2N}~|~\left\|x\right\|\le 1 \right\}} f_{m_j}\left( x\right) dx
$$
where $f_{m_j}$ is a multidimensional step function such that
$$
f_{m_j}\left(\frac{2\pi l}{m_j} \right) = \frac{1}{\left\|\frac{2\pi l}{m_j}\right\|^M}.
$$
From
$$
f_{m_j}\left(x\right) < \frac{2}{\left\|2\pi x\right\|^M}
$$
it follows that
$$
\int_{\mathbb{R}^{2N} \setminus \left\{x \in \mathbb{R}^{2N}~|~\left\|x\right\|\le 1 \right\}} f_{m_j}\left( x \right)dx < \int_{\mathbb{R}^{2N} \setminus \left\{x \in \mathbb{R}^{2N}~|~\left\|x\right\|\le 1 \right\}} \frac{2}{\left\|2\pi x\right\|^M}dx.
$$
Since $M > 2N$, the integral
$$
\int_{\mathbb{R}^{2N} \setminus \left\{x \in \mathbb{R}^{2N}~|~\left\|x\right\|\le 1 \right\}} \frac{2}{\left\|2\pi x\right\|^M}dx
$$
is convergent, hence
$$
\frac{1}{m_j^{2N}} \sum_{l \in \mathbb{Z}^{2N},\left\| \frac{l}{m_j}\right\|> 1}~ \frac{1}{\left\|\frac{2\pi l}{m_j}\right\|^M} < C''_2= \int_{\mathbb{R}^{2N} \setminus \left\{x \in \mathbb{R}^{2N}~|~\left\|x\right\|\le 1 \right\}} \frac{2}{\left\|2\pi x\right\|^M}dx.
$$
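The step-function identity above is just the statement that the left-hand side is a Riemann sum of the right-hand integral over the lattice $\frac{1}{m_j}\mathbb{Z}^{2N}$ of spacing $1/m_j$. The following minimal numerical sketch (an illustration added for concreteness, not part of the proof; the dimension $2N = 2$, the exponent $M = 5$ and the truncation radius are arbitrary choices) shows the uniform boundedness in $m_j$:
\begin{verbatim}
import numpy as np

# Riemann-sum check of the step-function bound (2N = 2, M = 5).
# The lattice {l/m : l in Z^2} has spacing 1/m, so the weighted sum
# (1/m^2) sum_{|l/m| > 1} |2 pi l/m|^(-M) approximates the integral
# of |2 pi x|^(-M) over {|x| > 1}, uniformly in m.
M = 5
for m in (4, 8, 16, 32):
    l = np.arange(-20 * m, 20 * m + 1)   # truncate the lattice at |x| ~ 20
    lx, ly = np.meshgrid(l, l)
    r = np.hypot(lx, ly) / m             # Euclidean norm |l/m|
    s = np.sum(1.0 / (2.0 * np.pi * r[r > 1.0]) ** M) / m**2
    print(f"m = {m:3d}: lattice sum = {s:.6e}")

# exact integral: int_{|x|>1} (2 pi |x|)^(-5) dx = (2 pi)^(-4) / 3
print(f"integral = {(2.0 * np.pi) ** -4 / 3.0:.6e}")
\end{verbatim}
The printed sums converge to the integral as $m$ grows, which is the content of the constant $C''_2$.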
From the above equations it follows that
$$
\frac{1}{m_j^{2N}}~~\sum_{l \in \mathbb{Z}^{2N}, ~\left\| \frac{l}{m_j}\right\|> \frac{\left\| \Delta\right\|}{2 }}~ \left|\int_{\mathbb{R}^{2N}} b\left( w-\frac{l}{m_j}\right)c\left(w+\Delta\right)e^{i\frac{l}{m_j}\cdot \Theta w} dw \right| \le \frac{C_2}{\left\| \Delta\right\|^m}
$$
where $M = 2N+1 + m$ and $C_2 = C'_2C''_2\int_{\mathbb{R}^{2N}}\frac{1}{\left(1+\left\|u\right\| \right)^M }du$. As a result, for any $m > 0$ there is $C_m \in \mathbb{R}$ such that
$$
\left\|\sum_{g \in \ker\left(\widehat{G} \to G_j\right)}\widehat{\pi}^\oplus\left( a_{\Delta} a\right)\right\| <\frac{1}{m_j^{2N}}\sum_{l \in \mathbb{Z}^{2N}} \int \left| b\left( t + \Delta - \frac{l}{m_j} \right) c\left(t \right)e^{\frac{\pi i l}{m_j} \cdot \Theta t} \right|dt < \frac{C_m}{\left\| \Delta\right\|^m}.
$$
\end{proof}
\begin{lem}\label{nt_w_spec_lem_p}
If $\overline{a} \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)$ is positive then the following conditions hold:
\begin{enumerate}
\item[(i)] For any $j \in \mathbb{N}^0$ the series
\begin{equation*}
\begin{split}
a_j = \sum_{g \in \ker\left( \widehat{G} \to G_j\right)} g \overline{a},\\
b_j = \sum_{g \in \ker\left( \widehat{G} \to G_j\right)} g \overline{a}^2
\end{split}
\end{equation*}
are strongly convergent and the sums lie in $C^\infty\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$, i.e. $a_j, b_j \in C^\infty\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$;
\item[(ii)] For any $\varepsilon > 0$ there is $N \in \mathbb{N}$ such that for any $j \ge N$ the following condition holds:
\begin{equation*}
\left\| a_j^2 - b_j\right\| < \varepsilon.
\end{equation*}
\end{enumerate}
\end{lem}
\begin{proof}
(i) Follows from the Lemmas \ref{mp_weak_lem} and/or \ref{mp_strong_lem}.
\newline
(ii) Denote by $J_j = \ker\left( \mathbb{Z}^{2N} \to G_j\right)= m_j\mathbb{Z}^{2N}$. If
\begin{equation*}
\begin{split}
a_j = \sum_{g \in J_j }g \overline{a},\\
b_j = \sum_{g \in J_j } g \overline{a}^2
\end{split}
\end{equation*}
then
\begin{equation}\label{nt_an_bn_eqn}
a^2_j - b_j = \sum_{g \in J_j }g\overline{a} ~ \left( \sum_{g' \in J_j \backslash \{g\} }g' \overline{a}\right).
\end{equation}
From \eqref{nt_a_delta_eqn} it follows that $g \overline{a}= \overline{a}_{g }$ where $\overline{a}_{g}\left(x \right) = \overline{a}\left( x +g\right)$ for any $x \in \mathbb{R}^{2N}$ and $g \in \mathbb{Z}^{2N}$. Hence the equation \eqref{nt_an_bn_eqn} is equivalent to
\begin{equation*}
\begin{split}
a^2_j - b_j = \sum_{g \in J_j }\overline{a}_{g } \sum_{g' \in J_j \backslash \{g\} } \overline{a}_{g' } = \sum_{g \in \mathbb{Z}^{2N} }\overline{a}_{m_jg } \sum_{g' \in \mathbb{Z}^{2N} \backslash \{g\} } \overline{a}_{m_jg' }= \\
=\sum_{g' \in J_j} g'\left( \overline{a} \sum_{g \in \mathbb{Z}^{2N} \backslash \{0\} } \overline{a}_{m_jg } \right).
\end{split}
\end{equation*}
Let $m > 1$ and $M = 2N+1 + m$. From the Lemma \ref{nt_long_delta_lem} it follows that there is $C \in \mathbb{R}$ such that
$$
\left\|\sum_{g \in J_j }g\left( aa_\Delta\right) \right\| < \frac{C}{\left\| \Delta\right\|^M}.
$$
From the triangle inequality it follows that
\begin{equation*}
\begin{split}
\left\|a^2_j - b_j\right\|=\left\|\sum_{g \in J_j }g\left( \overline{a} \sum_{g' \in \mathbb{Z}^{2N} \backslash \{0\} } \overline{a}_{m_jg' } \right)\right\| \le \\
\le \sum_{g' \in \mathbb{Z}^{2N} \backslash \{0\} } \left\|\sum_{g \in J_j }g\left( \overline{a} ~ \overline{a}_{m_jg' } \right)\right\| \le \sum_{g' \in \mathbb{Z}^{2N} \backslash \{0\} } \frac{C}{\left\| m_jg' \right\|^M }.
\end{split}
\end{equation*}
From $M> 2N$ it turns out that the series
$$
C' = \sum_{g' \in \mathbb{Z}^{2N} \backslash \{0\} } \frac{C}{\left\| g' \right\|^M }
$$
is convergent and
$$
\sum_{g \in \mathbb{Z}^{2N} \backslash \{0\} } \frac{C}{\left\| m_jg \right\|^M } = \frac{C'}{m_j^M }.
$$
If $\varepsilon > 0$ is a small number and $N\in \mathbb{N}$ is such that $m_N > \sqrt[M]{\frac{C'}{\varepsilon}}$ then from the above equations it follows that for any $j \ge N$ the following condition holds:
$$
\left\|a^2_j - b_j\right\| < \varepsilon.
$$
\end{proof}
\begin{lem}\label{nt_w_spec_lem}
Let us consider the dense inclusion
$$
\underbrace{\mathcal{S}\left(\mathbb{R}^{2}_\theta\right)\otimes\dots\otimes\mathcal{S}\left(\mathbb{R}^{2}_\theta\right)}_{N-\mathrm{times}} \subset \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)
$$
of the algebraic tensor product which follows from \eqref{mp_tensor_prod}. If $\overline{a} \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$ is positive and such that
\begin{itemize}
\item
$$
\overline{a} = \sum_{j,k=0}^{M_1} c^1_{jk}f_{jk}\otimes \dots \otimes \sum_{j,k=0}^{M_N} c^N_{jk}f_{jk}
$$
where $c^l_{jk} \in \mathbb{C}$ and the $f_{jk}$ are given by the Lemma \ref{lm:osc-basis},
\item For any $l = 1,\dots, N$ the sum $\sum_{j,k=0}^{M_l} c^l_{jk}f_{jk}$ is a rank-one operator,
\end{itemize}
then $\overline{a}$ is special.
\end{lem}
\begin{proof}
Clearly $\overline{a}$ is a rank-one operator. If $\overline{a} \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)$ then from the Lemmas \ref{mp_weak_lem} and/or \ref{mp_strong_lem} it turns out that $\overline{a}$ satisfies condition (a) of the Definition \ref{special_el_defn}. If $z \in C\left(\mathbb{T}^{2N}_\theta \right)$ then from the injective *-homomorphism $C\left( \mathbb{T}^{2N}_{\theta}\right) \hookrightarrow C\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$ it follows that $z$ can be regarded as an element of $C\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$, i.e. $z\in C\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$. Denote by
\begin{equation*}
\begin{split}
b_j= \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( z\overline{a}z^*\right)= z\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\overline{a}\right) z^*,\\
c_j = \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( z\overline{a}z^*\right)^2= z\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}z^*z\overline{a}\right) \right) z^*,\\
d_j = \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} gf_\varepsilon\left( z\overline{a}z^*\right)
\end{split}
\end{equation*}
where $f_\varepsilon$ is given by \eqref{f_eps_eqn}. From $\overline{ a} \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)$ it turns out that $a_j=\sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\overline{a}\in C\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$, hence $b_j = za_jz^*\in C\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$.
If $\xi\in \mathcal{H}$ is an eigenvector of $\overline{ a}$ such that $\overline{ a}\xi=\left\|\overline{ a}\right\|\xi$ then $\eta=z\xi$ is an eigenvector of $z\overline{ a}z^*$ such that $z\overline{ a}z^*\eta=\left\|z\overline{ a}z^*\right\|\eta$. It follows that $\left( z\overline{a}z^*\right)^2 = kz\overline{a}z^*$ where $k \in \mathbb{R}_+$ is given by
$$
k = \frac{\left\|z\overline{ a}z^*\right\|^2}{\left\|z\overline{ a}z^*\right\|}.
$$
Hence $c_j = kb_j$ and $c_j \in C^\infty\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$. Similarly $f_\varepsilon\left( z\overline{a}z^*\right) = k'\left( z\overline{a}z^*\right)$ where
$$
k' = \frac{\max\left( 0, \left\|z\overline{ a}z^*\right\|-\varepsilon\right) }{\left\|z\overline{ a}z^*\right\|}.
$$
Hence $d_j = k'b_j$ and $d_j \in C^\infty\left( \mathbb{T}^{2N}_{\theta/m_j^2}\right)$; it follows that $\overline{a}$ satisfies condition (b) of the Definition \ref{special_el_defn}. Let $\varepsilon >0$, and let $\delta > 0$ be such that
\begin{equation*}
\begin{split}
\delta^4 \left\|\sum_{g \in \widehat{G}} g\overline{a}\right\|^2 + 2 \delta^2 \left\|\sum_{g \in \widehat{G}} g\overline{a}\right\|\left\|\sum_{g \in \widehat{G}} g\left( z\overline{a}z^*\right) \right\| <\frac{\varepsilon}{4},\\
\left\|\sum_{g \in \widehat{G}} g\left( \overline{a}z^*z\overline{a}\right) \right\| \delta^2 <\frac{\varepsilon}{4},\\
\left( \left\|z\right\|+\delta\right)^2\left( \delta^2 + 2 \delta\left\|z\right\| \right) \left\| \sum_{g \in \widehat{G}} g \overline{a}^2\right\|<\frac{\varepsilon}{4}.
\end{split}
\end{equation*}
The algebra $C^\infty\left( \mathbb{T}^{2N}_{\theta}\right)$ is a dense subalgebra of $C\left( \mathbb{T}^{2N}_{\theta}\right)$, so there is $y \in C^\infty\left( \mathbb{T}^{2N}_{\theta}\right)$ such that $\left\|z - y\right\|< \delta$. From
\begin{equation*}
\left\| b_j- y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g \overline{a}\right) y^*\right\| \le\left\| \left(z-y \right) \left( \sum_{g \in \widehat{G} } g \overline{a}\right) (z-y)^*\right\|<\delta^2\left\|\sum_{g \in \widehat{G}} g\overline{a}\right\|
\end{equation*}
and taking into account $\delta^4 \left\|\sum_{g \in \widehat{G}} g\overline{a}\right\|^2 + 2 \delta^2 \left\|\sum_{g \in \widehat{G}} g\overline{a}\right\|\left\|\sum_{g \in \widehat{G}} g\left( z\overline{a}z^*\right)\right\|<\frac{\varepsilon}{4}$ one has
\begin{equation}\label{nt_eps_4_1}
\left\| b^2_j- \left( y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g \overline{a}\right) y^*\right)^2 \right\|<\frac{\varepsilon}{4}.
\end{equation}
From
\begin{equation*}
\left\| c_j - y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}z^*z\overline{a}\right)\right) y^*\right\|\le\left\| \left(z-y \right) \left( \sum_{g\in \widehat{G}} g\left( \overline{a}z^*z\overline{a}\right)\right) \left(z-y \right)^*\right\|< \delta^2\left\|\sum_{g\in \widehat{G}} g\left( \overline{a}z^*z\overline{a}\right) \right\|
\end{equation*}
and taking into account $\left\|\sum_{g \in \widehat{G}} g\left( \overline{a}z^*z\overline{a}\right) \right\| \delta^2 <\frac{\varepsilon}{4}$ one has
\begin{equation}\label{nt_eps_4_2}
\left\| c_j - y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}z^*z\overline{a}\right)\right) y^*\right\|<\frac{\varepsilon}{4}.
\end{equation}
From $\left\|y\right\| < \left\|z\right\|+ \delta$ it turns out that
\begin{equation*}
\begin{split}
\left\| y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}z^*z\overline{a}\right)\right) y^*-y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}y^*y\overline{a}\right)\right) y^*\right\|\le \\
\le\left( \left\|z\right\|+\delta\right)^2\left( \delta^2 + 2 \delta\left\|z\right\| \right) \left\| \sum_{g \in \widehat{G}} g \overline{a}^2\right\|
\end{split}
\end{equation*}
and taking into account $\left( \left\|z\right\|+\delta\right)^2\left( \delta^2 + 2 \delta\left\|z\right\| \right) \left\| \sum_{g \in \widehat{G}} g \overline{a}^2\right\|<\frac{\varepsilon}{4}$ one has
\begin{equation}\label{nt_eps_4_3}
\left\| y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}z^*z\overline{a}\right)\right) y^*-y\left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( \overline{a}y^*y\overline{a}\right)\right) y^*\right\|<\frac{\varepsilon}{4}.
\end{equation}
From $y \in C^\infty\left(\mathbb{T}^{2N}_\theta \right)$ and $\overline{a} \in \mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)$ it follows that $y\overline{a}y^*\in \mathcal{S}\left(\mathbb{R}^{2N}_\theta \right)$, hence from the Lemma \ref{nt_w_spec_lem_p} it turns out that there exists $N \in \mathbb{N}$ such that for any $j \ge N$ the following condition holds:
\begin{equation}\label{nt_eps_4_4}
\left\| \left( \sum_{g \in \ker\left(\widehat{G} \to G_j \right)} g\left( y\overline{a}y^*\right)\right)^2- \sum_{g \in \ker\left(\widehat{G} \to G_j \right)}\left( g\left( y\overline{a}y^*\right)\right)^2\right\|<\frac{\varepsilon}{4}.
\end{equation}
From \eqref{nt_eps_4_1}-\eqref{nt_eps_4_4} it follows that for any $j \ge N$ the following condition holds:
\begin{equation*}
\left\| b_j^2 - c_j\right\| < \varepsilon,
\end{equation*}
i.e. $\overline{ a}$ satisfies condition (c) of the Definition \ref{special_el_defn}.
\end{proof}
\begin{corollary}\label{nt_norm_compl}
If $\overline{A}_{\widehat{\pi}^\oplus}$ is the disconnected inverse noncommutative limit of $\mathfrak{S}_\theta$ with respect to $\widehat{\pi}^\oplus$ then
$$
\bigoplus_{g \in J} g C_0\left( \mathbb{R}^{2N}_\theta\right) \subset \overline{A}_{\widehat{\pi}^\oplus}.
$$
\end{corollary}
\begin{proof}
From the Lemma \ref{nt_w_spec_lem} it turns out that $\overline{A}_{\widehat{\pi}^\oplus}$ contains all the elements
\begin{equation}\label{mp_fjkel_eqn}
f_{j_1k_1}\otimes \dots \otimes f_{j_Nk_N} \in \underbrace{\mathcal{S}\left(\mathbb{R}^{2}_\theta\right)\otimes\dots\otimes\mathcal{S}\left(\mathbb{R}^{2}_\theta\right)}_{N-\mathrm{times}} \subset \mathcal{S}\left(\mathbb{R}^{2N}_\theta\right)
\end{equation}
where the $f_{j_lk_l}$ ($l = 1,\dots, N$) are given by the Lemma \ref{lm:osc-basis}. However the linear span of the elements given by \eqref{mp_fjkel_eqn} is dense in $C_0\left( \mathbb{R}^{2N}_\theta\right)$, hence $C_0\left( \mathbb{R}^{2N}_\theta\right) \subset \overline{A}_{\widehat{\pi}^\oplus}$. From the Corollary \ref{disconnect_group_action_cor} it turns out that
$$
\bigoplus_{g \in J} g C_0\left( \mathbb{R}^{2N}_\theta\right)\subset \overline{A}_{\widehat{\pi}^\oplus}.
$$
\end{proof}
\begin{empt}
From the Lemma \ref{nt_l_2_est_lem} it turns out that $L^2 \left(\mathbb{R}^{2N}_\theta\right) \subset B\left( L^2\left(\mathbb{R}^{2N}\right)\right)$ is a Hilbert space with the norm $\left\|\cdot\right\|_2$ given by \eqref{nt_l2_norm_eqn}. One can construct the Hilbert direct sum
\begin{equation*}
\begin{split}
X= \bigoplus_{g \in J} g L^2 \left(\mathbb{R}^{2N}_\theta\right) \subset \prod_{g \in J} B\left(g L^2\left(\mathbb{R}^{2N}\right)\right), \\
X = \left\{\overline{x} \in \prod_{g \in J} B\left( g L^2\left(\mathbb{R}^{2N}\right)\right) ~|~ \left\|\left(\dots,x_{g_k},\dots\right) \right\|_2= \sqrt{\sum_{g \in J}\left\|x_g\right\|^2_2} < \infty\right\}.
\end{split}
\end{equation*}
If $\overline{a} \in X$ is a special element and $b = \sum_{g \in \widehat{G}}g \overline{a}^2 \in C\left(\mathbb{T}^{2N}_\theta \right)$ then
$$
\tau\left(b\right) = \int_{\mathbb{R}^{2N}} \overline{a}^2 dx = \left\|\overline{a}\right\|^2_2
$$
where $\tau$ is given by \eqref{nt_state_eqn}, or \eqref{nt_varphi_inf_eqn}. On the other hand $\left|\tau\left(b \right) \right| < \infty$ for any $b \in C\left(\mathbb{T}^{2N}_\theta \right)$; it follows that $\left\|\overline{a}\right\|^2_2 < \infty$ for a special element $\overline{a}$. As a result we have the following lemma.
\end{empt}
\begin{lem}\label{nt_l2_spec_lem}
Any special element $\overline{a}\in \varinjlim C\left(\mathbb{T}^{2N}_{\theta/m_j^2}\right)$ lies in $X=\bigoplus_{g \in J} g L^2\left( \mathbb{R}^{2N}_\theta\right)$. Moreover if $b = \sum_{g \in \widehat G}g\left( \overline{a}^2\right) \in C\left(\mathbb{T}^{2N}_\theta\right)$ then
$$
\left\| \overline{a}\right\|^2_2 = \tau\left(b\right) < \infty
$$
where $\tau$ is the tracial state on $C\left(\mathbb{T}^{2N}_{\theta}\right)$ given by \eqref{nt_state_eqn}, \eqref{nt_state_integ_eqn} and $\left\|\cdot\right\|_2$ is given by \eqref{nt_l2_norm_eqn}.
\end{lem}
\begin{rem}\label{nt_sup_norm}
From $L^2 \left(\mathbb{R}^{2N}_\theta\right) \subset C_0 \left(\mathbb{R}^{2N}_\theta\right)$ it follows that any special element in $B\left( L^2 \left(\mathbb{R}^{2N}_\theta\right)\right)$ lies in $C_0 \left(\mathbb{R}^{2N}_\theta\right)$.
\end{rem}
\begin{empt}\label{nt_c_0}
Let $\overline{A}_{\widehat{\pi}^\oplus}$ be the disconnected inverse noncommutative limit of $\mathfrak{S}_\theta$ with respect to $\widehat{\pi}^\oplus$. From the Corollary \ref{nt_norm_compl} it follows that
$$
C_0\left( \mathbb{R}^{2N}_\theta\right) \subset \overline{A}_{\widehat{\pi}^\oplus} \bigcap B\left(L^2\left( \mathbb{R}^{2N}\right) \right).
$$
From the Remark \ref{nt_sup_norm} it follows that
$$
\overline{A}_{\widehat{\pi}^\oplus} \bigcap B\left(L^2\left( \mathbb{R}^{2N}\right) \right) \subset C_0\left( \mathbb{R}^{2N}_\theta\right).
$$
As a result we have
\begin{equation}\label{nt_c_0_eqn}
\overline{A}_{\widehat{\pi}^\oplus} \bigcap B\left(L^2\left( \mathbb{R}^{2N}\right) \right) = C_0\left( \mathbb{R}^{2N}_\theta\right).
\end{equation}
Similarly for any $g \in J$ one has
$$
\overline{A}_{\widehat{\pi}^\oplus} \bigcap B\left(gL^2\left( \mathbb{R}^{2N}\right) \right) = gC_0\left( \mathbb{R}^{2N}_\theta\right).
$$
The algebra $C_0\left( \mathbb{R}^{2N}_\theta\right)$ is irreducible. Clearly $C_0\left( \mathbb{R}^{2N}_\theta\right) \subset \overline{A}_{\widehat{\pi}^\oplus}$ is a maximal irreducible subalgebra.
\end{empt}
\begin{thm}\label{nt_inf_cov_thm}
The following conditions hold:
\begin{enumerate}
\item[(i)] The representation $\widehat{\pi}^\oplus$ is good,
\item[(ii)]
\begin{equation*}
\begin{split}
\varprojlim_{\widehat{\pi}^\oplus} \downarrow \mathfrak{S}_\theta = C_0\left(\mathbb{R}^{2N}_\theta\right); \\
G\left(\varprojlim_{\widehat{\pi}^\oplus} \downarrow \mathfrak{S}_\theta~|~ C\left(\mathbb{T}^{2N}_\theta \right)\right) = \mathbb{Z}^{2N},
\end{split}
\end{equation*}
\item[(iii)] The triple $\left(C\left(\mathbb{T}^{2N}_\theta \right), C_0\left(\mathbb{R}^{2N}_\theta\right), \mathbb{Z}^{2N} \right)$ is an infinite noncommutative covering of $\mathfrak{S}_\theta$ with respect to $\widehat{\pi}^\oplus$.
\end{enumerate}
\end{thm}
\begin{proof}
(i) There is the natural inclusion $\overline{A}_{\widehat{\pi}^\oplus} \hookrightarrow \prod_{g \in J} B\left(g L^2\left(\mathbb{R}^{2N}\right)\right)$ where $\prod$ means the Cartesian product of algebras. This inclusion induces the decomposition
$$
\overline{A}_{\widehat{\pi}^\oplus} \hookrightarrow \prod_{g \in J}\left( \overline{A}_{\widehat{\pi}^\oplus}\bigcap B\left(g L^2\left(\mathbb{R}^{2N}\right)\right) \right).
$$
From \eqref{nt_c_0_eqn} it turns out that $\overline{A}_{\widehat{\pi}^\oplus}\bigcap B\left(g L^2\left(\mathbb{R}^{2N}\right)\right) = gC_0\left( \mathbb{R}^{2N}_\theta\right)$, hence there is the inclusion
$$
\overline{A}_{\widehat{\pi}^\oplus} \hookrightarrow \prod_{g \in J} gC_0\left( \mathbb{R}^{2N}_\theta\right).
$$
From the above equation it follows that $C_0\left( \mathbb{R}^{2N}_\theta\right)\subset \overline{A}_{\widehat{\pi}^\oplus }$ is a maximal irreducible subalgebra. From the Lemma \ref{nt_l2_spec_lem} it turns out that the algebraic direct sum $\bigoplus_{g \in J} gC_0\left( \mathbb{R}^{2N}_\theta\right)$ is a dense subalgebra of $\overline{A}_{\widehat{\pi}^\oplus}$, i.e. condition (b) of the Definition \ref{good_seq_defn} holds. Clearly the map $\widehat{C\left(\mathbb{T}^{2N}_\theta \right)} \to M\left(C_0\left( \mathbb{R}^{2N}_\theta\right) \right)$ is injective, i.e. condition (a) of the Definition \ref{good_seq_defn} holds. If $G \subset \widehat{G}$ is the maximal group such that $GC_0\left( \mathbb{R}^{2N}_\theta\right) = C_0\left( \mathbb{R}^{2N}_\theta\right)$ then $G = \mathbb{Z}^{2N}$. The homomorphism $\mathbb{Z}^{2N} \to \mathbb{Z}^{2N}_{m_j}$ is surjective; it turns out that condition (c) of the Definition \ref{good_seq_defn} holds.
\newline
(ii) and (iii) Follow from the proof of (i).
\end{proof}
\section{Isospectral deformations and their coverings}
\paragraph*{} A very general construction of isospectral deformations of noncommutative geometries is described in \cite{connes_landi:isospectral}. The construction implies in particular that any compact spin-manifold $M$ whose isometry group has rank $\geq 2$ admits a natural one-parameter isospectral deformation to noncommutative geometries $M_\theta$. We let $(C^\infty\left(M \right) , \mathcal{H} = L^2\left(M,S \right) , \slashed D)$ be the canonical spectral triple associated with a compact spin-manifold $M$. We recall that $\mathcal{A} = C^\infty(M)$ is the algebra of smooth functions on $M$, $S$ is the spinor bundle and $\slashed D$ is the Dirac operator. Let us assume that the group $\mathrm{Isom}(M)$ of isometries of $M$ has rank $r\geq2$.
Then, we have an inclusion
\begin{equation*}
\mathbb{T}^2 \subset \mathrm{Isom}(M)\, ,
\end{equation*}
with $\mathbb{T}^2 = \mathbb{R}^2 / 2 \pi \mathbb{Z}^2$ the usual torus, and we let $U(s)$, $s \in \mathbb{T}^2$, be the corresponding unitary operators in $\mathcal{H} = L^2(M,S)$ so that by construction
\begin{equation*}
U(s)\, \slashed D = \slashed D\, U(s).
\end{equation*}
Also,
\begin{equation}\label{isospectral_sym_eqn}
U(s)\, a\, U(s)^{-1} = \alpha_s(a)\, , \quad \forall\, a \in \mathcal{A}\, ,
\end{equation}
where $\alpha_s \in \mathrm{Aut}(\mathcal{A})$ is the action by isometries on the algebra of functions on $M$.
\noindent We let $p = (p_1, p_2)$ be the generator of the two-parameter group $U(s)$ so that
\begin{equation*}
U(s) = \exp(i(s_1 p_1 + s_2 p_2))\, .
\end{equation*}
The operators $p_1$ and $p_2$ commute with $D$. Both $p_1$ and $p_2$ have integral spectrum,
\begin{equation*}
\mathrm{Spec}(p_j) \subset \mathbb{Z}\, , \quad j = 1, 2\, .
\end{equation*}
\noindent One defines a bigrading of the algebra of bounded operators in $\mathcal{H}$ with the operator $T$ declared to be of bidegree $(n_1,n_2)$ when
\begin{equation*}
\alpha_s(T) = \exp(i(s_1 n_1 + s_2 n_2))\, T\, , \quad \forall\, s \in \mathbb{T}^2\, ,
\end{equation*}
where $\alpha_s(T) = U(s)\, T\, U(s)^{-1}$ as in \eqref{isospectral_sym_eqn}.
\paragraph{} Any operator $T$ of class $C^\infty$ relative to $\alpha_s$ (i.e.\ such that the map $s \rightarrow \alpha_s(T)$ is of class $C^\infty$ for the norm topology) can be uniquely written as a doubly infinite norm convergent sum of homogeneous elements,
\begin{equation*}
T = \sum_{n_1,n_2} \widehat{T}_{n_1,n_2}\, ,
\end{equation*}
with $\widehat{T}_{n_1,n_2}$ of bidegree $(n_1,n_2)$ and where the sequence of norms $|| \widehat{T}_{n_1,n_2} ||$ is of rapid decay in $(n_1,n_2)$. Let $\lambda = \exp(2 \pi i \theta)$. For any operator $T$ in $\mathcal{H}$ of class $C^\infty$ we define its left twist $l(T)$ by
\begin{equation}\label{l_defn}
l(T) = \sum_{n_1,n_2} \widehat{T}_{n_1,n_2}\, \lambda^{n_2 p_1}\, ,
\end{equation}
and its right twist $r(T)$ by
\begin{equation*}
r(T) = \sum_{n_1,n_2} \widehat{T}_{n_1,n_2}\, \lambda^{n_1 p_2}\, .
\end{equation*}
Since $|\lambda | = 1$ and $p_1$, $p_2$ are self-adjoint, both series converge in norm. Denote by $C^\infty\left(M \right)_{n_1, n_2} \subset C^\infty\left(M \right)$ the $\mathbb{C}$-linear subspace of elements of bidegree $\left( n_1, n_2\right)$. One has,
\begin{lem}\label{conn_landi_iso_lem}\cite{connes_landi:isospectral}
\begin{itemize}
\item[{\rm a)}] Let $x$ be a homogeneous operator of bidegree $(n_1,n_2)$ and $y$ be a homogeneous operator of bidegree $(n'_1,n'_2)$. Then,
\begin{equation}
l(x)\, r(y) - r(y)\, l(x) = (x y - y x)\, \lambda^{n'_1 n_2} \lambda^{n_2 p_1 + n'_1 p_2}
\end{equation}
In particular, $[l(x), r(y)] = 0$ if $[x, y] = 0$.
\item[{\rm b)}] Let $x$ and $y$ be homogeneous operators as before and define
\begin{equation}
x * y = \lambda^{n'_1 n_2}\, x y\, ; \label{star}
\end{equation}
then $l(x) l(y) = l(x * y)$.
\end{itemize}
\end{lem}
\noindent The product $*$ defined in (\ref{star}) extends by linearity to an associative product on the linear space of smooth operators and could be called a $*$-product.
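\paragraph{} As an illustration (added here for concreteness; it is not part of \cite{connes_landi:isospectral}), the Lemma can be checked numerically in a finite-dimensional clock-and-shift model of a rational noncommutative torus: for $\theta = k/n$ the operators $p_1, p_2$ below have integer spectrum, and $S^{n_1}\otimes S^{n_2}$ is homogeneous of bidegree $(n_1,n_2)$.
\begin{verbatim}
import numpy as np

# Finite-dimensional sanity check of Lemma a) and b) for rational
# theta = k/n (an illustrative model, not the construction of the paper).
n, k = 8, 3
lam = np.exp(2j * np.pi * k / n)      # lambda = exp(2 pi i theta)

S = np.roll(np.eye(n), 1, axis=0)     # shift: S e_j = e_{j+1 mod n}
P = np.diag(np.arange(n))             # integer spectrum, as for p_1, p_2
I = np.eye(n)
p1, p2 = np.kron(P, I), np.kron(I, P)

def hom(n1, n2):
    # homogeneous of bidegree (n1, n2): conjugation by
    # exp(i(s1 p1 + s2 p2)) multiplies it by exp(i(s1 n1 + s2 n2))
    return np.kron(np.linalg.matrix_power(S, n1),
                   np.linalg.matrix_power(S, n2))

def lam_pow(D):                       # lambda^D for a diagonal integer D
    return np.diag(lam ** np.diag(D))

def l(T, n2):                         # left twist  l(T) = T lambda^{n2 p1}
    return T @ lam_pow(n2 * p1)

def r(T, n1):                         # right twist r(T) = T lambda^{n1 p2}
    return T @ lam_pow(n1 * p2)

(x1, x2), (y1, y2) = (1, 2), (2, 1)
x, y = hom(x1, x2), hom(y1, y2)

assert np.allclose(x @ y, y @ x)      # here [x, y] = 0, so by a):
assert np.allclose(l(x, x2) @ r(y, y1), r(y, y1) @ l(x, x2))

# b): l(x) l(y) = l(x * y), x * y = lam^{y1 x2} x y of bidegree (x1+y1, x2+y2)
assert np.allclose(l(x, x2) @ l(y, y2),
                   l(lam ** (y1 * x2) * (x @ y), x2 + y2))
print("Lemma a) and b) hold in the clock-shift model")
\end{verbatim}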
One could also define a deformed `right product': if $x$ is homogeneous of bidegree $(n_1,n_2)$ and $y$ is homogeneous of bidegree $(n'_1,n'_2)$, the product is defined by
\begin{equation*}
x *_{r} y = \lambda^{n_1 n'_2}\, x y\, .
\end{equation*}
Then, along the lines of the previous lemma, one shows that $r(x) r(y) = r(x *_{r} y)$. We can now define a new spectral triple where both $\mathcal{H}$ and the operator $D$ are unchanged while the algebra $C^\infty\left(M \right)$ is modified to $l(C^\infty\left(M \right))$. By Lemma~{\ref{conn_landi_iso_lem}}~b) one checks that $l\left( C^\infty\left(M \right)\right)$ is still an algebra. Since $D$ is of bidegree $(0,0)$ one has
\begin{equation*}
[D, l(a) ] = l([D, a]) \label{bound}
\end{equation*}
which is enough to check that $[D, x]$ is bounded for any $x \in l(\mathcal{A})$. There is a spectral triple $\left(l\left( C^\infty\left(M \right)\right) , \mathcal{H}, D\right)$.
\paragraph{} Denote by $C\left(M_\theta \right)$ the operator norm completion (equivalently $C^*$-norm completion) of $l\left(C^\infty\left( M\right) \right)$, and denote by $\rho: C\left(M\right) \to L^2\left( M, S\right)$ (resp. $\pi_\theta: C\left(M_\theta\right) \to B\left( L^2\left( M, S\right)\right)$) the natural representations.
\subsection{Finite-fold coverings}\label{isosectral_fin_cov}
\paragraph{} Let $M$ be a spin-manifold with a smooth action of $\mathbb{T}^2$. Let $\pi:\widetilde{M} \to M$ be a finite-fold covering. Let $\widetilde{x}_0 \in \widetilde{M}$ and $x_0=\pi\left(\widetilde{x}_0 \right)$. Denote by $\varphi: \mathbb{R}^2 \to \mathbb{R}^2 / \mathbb{Z}^2 = \mathbb{T}^2$ the natural covering. There are two closed paths $\omega_1, \omega_2: \left[0,1 \right]\to M$ given by
\begin{equation*}
\omega_1\left(t \right) = \varphi\left(t, 0 \right) x_0,~ \omega_2\left(t \right) = \varphi\left(0, t \right) x_0.
\end{equation*}
There are lifts of these paths, i.e. maps $\widetilde{\omega}_1 , \widetilde{\omega}_2: \left[0,1 \right] \to\widetilde{M}$ such that
\begin{equation*}
\begin{split}
\widetilde{\omega}_1\left(0 \right)= \widetilde{\omega}_2\left(0 \right)=\widetilde{x}_0,\\
\pi\left( \widetilde{\omega}_1\left(t \right)\right) = \omega_1\left(t\right),\\
\pi\left( \widetilde{\omega}_2\left(t \right)\right) = \omega_2\left(t\right).
\end{split}
\end{equation*}
Since $\pi$ is a finite-fold covering there are $N_1, N_2 \in \mathbb{N}$ such that if
$$
\gamma_1\left(t \right) = \varphi\left(N_1t, 0 \right) x_0,~ \gamma_2\left(t \right) = \varphi\left(0, N_2t \right) x_0
$$
and $\widetilde{\gamma}_1$ (resp. $\widetilde{\gamma}_2$) is the lift of $\gamma_1$ (resp. $\gamma_2$) then both $\widetilde{\gamma}_1$, $\widetilde{\gamma}_2$ are closed. Let us select the minimal positive values of $N_1, N_2$. If $\text{pr}_n: S^1 \to S^1$ is an $n$-fold covering and $\text{pr}_{N_1, N_2}$ is the covering given by
$$
\widetilde{\mathbb{T}}^2 = S^1 \times S^1 \xrightarrow{\text{pr}_{N_1}\times\text{pr}_{N_2}} S^1 \times S^1 = \mathbb{T}^2
$$
then there is an action $\widetilde{\mathbb{T}}^2 \times \widetilde{M} \to \widetilde{M}$ such that the following diagram commutes.
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
\widetilde{\mathbb{T}}^2 \times \widetilde{M} & & \widetilde{M} \\
\mathbb{T}^2 \times M & & M \\
};
\path[-stealth]
(m-1-1) edge node [above] {$\cdot$} (m-1-3)
(m-1-1) edge node [right] {$\mathrm{pr}_{N_1N_2} \times \pi $} (m-2-1)
(m-1-3) edge node [right] {$\pi$} (m-2-3)
(m-2-1) edge node [above] {$\cdot$} (m-2-3);
\end{tikzpicture}
where $\widetilde{\mathbb{T}}^2 \approx \mathbb{T}^2$. Let $\widetilde{p} = \left( \widetilde{p}_1, \widetilde{p}_2\right)$ be the generator of the two-parameter group $\widetilde{U}\left(s \right)$ associated with $\widetilde{\mathbb{T}}^2$, so that
\begin{equation*}
\widetilde{U}\left(s \right) = \exp\left( i\left( s_1 \widetilde{p}_1 + s_2 \widetilde{p}_2\right)\right).
\end{equation*}
The covering $\widetilde{M} \to M$ induces an involutive injective homomorphism
\begin{equation*}
\varphi:C^\infty\left(M \right)\hookrightarrow C^\infty\left( \widetilde{M} \right).
\end{equation*}
Since $\widetilde{M} \to M$ is a covering, $C^\infty\left( \widetilde{M} \right)$ is a finitely generated projective $C^\infty\left( M \right)$-module, i.e. there is the following direct sum of $C^\infty\left( M \right)$-modules
\begin{equation}\label{isosectral_proj_eqn}
C^\infty\left( \widetilde{M} \right) \bigoplus P = C^\infty\left( M \right)^n
\end{equation}
such that
\begin{equation*}
\varphi\left(C^\infty\left(M \right) \right)_{n_1,n_2} \subset C^\infty\left( \widetilde{M} \right)_{n_1N_1,~ n_2N_2}.
\end{equation*}
Let $\theta, \widetilde{\theta} \in \mathbb{R}$ be such that
$$
\widetilde{\theta}= \frac{\theta + n}{N_1N_2}, \text{ where }n \in \mathbb{Z}.
$$
If $\lambda= e^{2\pi i \theta}$, $\widetilde{\lambda}= e^{2\pi i \widetilde{\theta}}$ then $\lambda = \widetilde{\lambda}^{N_1N_2}$. There are isospectral deformations $C^\infty\left(M_\theta \right), C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right)$ and $\mathbb{C}$-linear isomorphisms $l:C^\infty\left(M \right) \to C^\infty\left(M_\theta \right)$, $\widetilde{l}:C^\infty\left( \widetilde{M} \right) \to C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right)$. These isomorphisms and the inclusion $\varphi$ induce the inclusion
\begin{equation*}
\begin{split}
\varphi_\theta:C^\infty\left(M_\theta \right)\to C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right),\\
\varphi_{\theta}\left(C^\infty\left(M_\theta \right) \right)_{n_1,n_2} \subset C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right)_{n_1N_1,~ n_2N_2}.
\end{split}
\end{equation*}
From \eqref{isosectral_proj_eqn} it follows that
\begin{equation*}
\begin{split}
\widetilde{l}\left( C^\infty\left( \widetilde{M} \right)\right) \bigoplus l\left( P \right) = l\left( C^\infty\left( M \right)\right)^n,\\
\text{or equivalently }C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right) \bigoplus l\left(P \right) = C^\infty\left(M_\theta \right)^n,
\end{split}
\end{equation*}
i.e. $C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right)$ is a finitely generated projective $C^\infty\left(M_\theta \right)$-module. There is a projection $p \in \mathbb{M}_n\left(C^\infty\left(M_\theta \right)\right)$ such that
$$
C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right) = p C^\infty\left(M_\theta \right)^n.
$$
If $C\left( \widetilde{M}_{\widetilde{\theta}} \right)$ (resp. $C\left(M_\theta \right)$) is the operator norm completion of $C^\infty\left( \widetilde{M}_{\widetilde{\theta}} \right)$ (resp. $C^\infty\left(M_\theta \right)$) then
$$
C\left( \widetilde{M}_{\widetilde{\theta}} \right) = p C\left(M_\theta \right)^n,
$$
i.e. $C\left( \widetilde{M}_{\widetilde{\theta}} \right)$ is a finitely generated projective $C\left( M_\theta \right)$-module. Denote by $G = G\left(\widetilde{M}~|~M \right)$ the group of covering transformations. Since $\widetilde{l}$ is a $\mathbb{C}$-linear isomorphism, the action of $G$ on $C^\infty\left( \widetilde{M} \right)$ induces a $\mathbb{C}$-linear action $G \times C^\infty\left( \widetilde{M}_{ \widetilde{\theta}} \right) \to C^\infty\left( \widetilde{M}_{ \widetilde{\theta}} \right)$. According to the definition of the action of $\widetilde{\mathbb{T}}^2$ on $\widetilde{M}$, the action of $G$ commutes with the action of $\widetilde{\mathbb{T}}^2$. It turns out that
$$
g C^\infty\left( \widetilde{M} \right)_{n_1,n_2} = C^\infty\left( \widetilde{M} \right)_{n_1,n_2}
$$
for any $n_1, n_2 \in \mathbb{Z}$ and $g \in G$. If $\widetilde{a} \in C^\infty\left( \widetilde{M} \right)_{n_1,n_2}$, $\widetilde{b} \in C^\infty\left( \widetilde{M} \right)_{n'_1,n'_2}$ then $g\left( \widetilde{a}\widetilde{b}\right)= \left(g\widetilde{a} \right) \left(g\widetilde{b} \right)\in C^\infty\left( \widetilde{M} \right)_{n_1+n'_1,n_2+n'_2}$. One has
\begin{equation*}
\begin{split}
\widetilde{l}\left(\widetilde{a}\right)\widetilde{l}\left(\widetilde{b}\right)= \widetilde{\lambda}^{n'_1n_2}\widetilde{l}\left(\widetilde{a}\widetilde{b}\right),\\
\widetilde{\lambda}^{n_2\widetilde{p}_1}\widetilde{l}\left( \widetilde{b}\right) = \widetilde{\lambda}^{n'_1n_2}\widetilde{l}\left( \widetilde{b}\right) \widetilde{\lambda}^{n_2\widetilde{p}_1},\\
\widetilde{l}\left(g \widetilde{a}\right)\widetilde{l}\left(g \widetilde{b}\right)= g \widetilde{a}\,\widetilde{\lambda}^{n_2\widetilde{p}_1}g \widetilde{b}\,\widetilde{\lambda}^{n'_2\widetilde{p}_1}= \widetilde{\lambda}^{n'_1n_2} g\left(\widetilde{a}\widetilde{b} \right) \widetilde{\lambda}^{\left( n_2+n_2'\right) \widetilde{p}_1}.
\end{split}
\end{equation*}
On the other hand
\begin{equation*}
g\left( \widetilde{l}\left(\widetilde{a}\right)\widetilde{l}\left(\widetilde{b}\right)\right) = g\left( \widetilde{\lambda}^{n'_1n_2}\widetilde{l}\left(\widetilde{a}\widetilde{b}\right)\right)= \widetilde{\lambda}^{n'_1n_2} g\left(\widetilde{a}\widetilde{b} \right) \widetilde{\lambda}^{\left( n_2+n_2'\right) \widetilde{p}_1}.
\end{equation*}
From the above equations it turns out that
$$
\widetilde{l}\left(g \widetilde{a}\right)\widetilde{l}\left(g \widetilde{b}\right) = g\left( \widetilde{l}\left(\widetilde{a}\right)\widetilde{l}\left(\widetilde{b}\right)\right),
$$
i.e. $g$ corresponds to an automorphism of $C^\infty\left( \widetilde{M}_{ \widetilde{\theta}}\right)$. It turns out that $G$ acts on $C^\infty\left( \widetilde{M}_{ \widetilde{\theta}}\right)$ by automorphisms. Clearly from $\widetilde{a} \in C^\infty\left( \widetilde{M}_{ \widetilde{\theta}}\right)_{n_1,n_2}$ it follows that $\widetilde{a}^* \in C^\infty\left( \widetilde{M}_{ \widetilde{\theta}}\right)_{-n_1,-n_2}$. One has
$$
g\left(\left( \widetilde{l}\left(\widetilde{a}\right)\right)^* \right) = g\left( \widetilde{\lambda}^{-n_2\widetilde{p}_1}\widetilde{a}^*\right) = g \left(\widetilde{\lambda}^{n_1 n_2} \widetilde{a}^*\widetilde{\lambda}^{-n_2\widetilde{p}_1}\right) = \widetilde{\lambda}^{n_1 n_2} g\left(\widetilde{l}\left(\widetilde{a}^* \right) \right).
$$
On the other hand
$$
\left(g \widetilde{l}\left(\widetilde{a}\right) \right)^*= \left(\left( g \widetilde{a}\right)\widetilde{\lambda}^{n_2\widetilde{p}_1} \right)^*=\widetilde{\lambda}^{-n_2\widetilde{p}_1}\left(g\widetilde{a}^* \right) = \widetilde{\lambda}^{n_1 n_2}\left(g\widetilde{a}^*\, \widetilde{\lambda}^{-n_2\widetilde{p}_1}\right)= \widetilde{\lambda}^{n_1 n_2} g\left(\widetilde{l}\left(\widetilde{a}^* \right) \right),
$$
i.e. $g\left(\left( \widetilde{l}\left(\widetilde{a}\right)\right)^* \right) = \left(g \widetilde{l}\left(\widetilde{a}\right) \right)^*$. It follows that $g$ corresponds to an involutive automorphism of $C^\infty\left( \widetilde{M}_{ \widetilde{\theta}}\right)$. Since $C^\infty\left( \widetilde{M}_{ \widetilde{\theta}}\right)$ is dense in $C\left( \widetilde{M}_{ \widetilde{\theta}}\right)$, there is a unique involutive action $G \times C\left( \widetilde{M}_{ \widetilde{\theta}}\right) \to C\left( \widetilde{M}_{ \widetilde{\theta}}\right)$. The above construction yields the following theorem.
\begin{thm}\label{isospectral_fin_thm}
The triple $\left( C\left(M_\theta\right), C\left( \widetilde{M}_{ \widetilde{\theta}}\right), G\left(\widetilde{M}~|~ M \right)\right)$ is a unital noncommutative finite-fold covering.
\end{thm}
\subsection{Infinite coverings}
\paragraph{} Let $\mathfrak{S}_M =\left\{M = M^0 \leftarrow M^1 \leftarrow ... \leftarrow M^n \leftarrow ... \right\} \in \mathfrak{FinTop}$ be an infinite sequence of spin-manifolds and regular finite-fold coverings. Suppose that there is an action $\mathbb{T}^2 \times M \to M$ as in \eqref{isospectral_sym_eqn}. From the Theorem \ref{isospectral_fin_thm} it follows that there is the algebraical finite covering sequence
\begin{equation*}\label{isospectral_sequence_eqn}
\mathfrak{S}_{C\left(M_\theta \right) } = \left\{C\left(M_\theta \right)\to ... \to C\left(M^n_{\theta_n} \right)\to ...\right\}.
\end{equation*}
So one can calculate a finite noncommutative limit of the above sequence. This article does not describe the properties of this noncommutative limit in detail, because they are not yet known to the author.
\section*{Acknowledgment}
\paragraph*{} The author would like to thank the members of the Moscow State University seminar ``Algebras in analysis'', led by Professor A. Ya. Helemskii, for discussions of this work.
\end{document}
\begin{document} \title{Influence of a tight isotropic harmonic trap on photoassociation in ultracold homonuclear alkali gases} \author{Sergey Grishkevich and Alejandro Saenz} \affiliation{AG Moderne Optik, Institut f\"ur Physik, Humboldt-Universit\"at zu Berlin, Hausvogteiplatz 5-7, 10117 Berlin, Germany} \date{\today} \begin{abstract} The influence of a tight isotropic harmonic trap on photoassociation of two ultracold alkali atoms forming a homonuclear diatomic molecule is investigated using realistic atomic interaction potentials. Confinement of the initial atom pair due to the trap leads to a uniform strong enhancement of the photoassociation rate for most final states, but also to a strongly suppressed rate for some of them. Thus tighter traps do not necessarily enhance the photoassociation rate. A further massive enhancement of the rate is found for strong interatomic interaction potentials. The details of this interaction play a minor role, except for large repulsive interactions for which a sharp window occurs in the photoassociation spectrum, as is known from the trap-free case. A comparison with simplified models describing the atomic interaction, like the pseudopotential approximation, shows that they often provide reasonable estimates for the trap-induced enhancement of the photoassociation rate even if the predicted rates can be completely erroneous. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Over the past ten years there has been an increasing interest in ultracold atomic and molecular physics. This interest was stimulated by the successful experimental observation of Bose-Einstein condensation (BEC) in dilute atomic gases~\cite{cold:ande95}. These atomic condensates exhibit many qualitatively new features. Besides their relevance to fundamental quantum-statistical and possibly even solid-state questions, a further interesting aspect is that the atoms can bind together to form ultracold and even Bose-Einstein condensed molecules~\cite{cold:joch03,cold:rega03,cold:zwie03}. Although so far the only successful way of achieving a molecular BEC is based on magnetic Feshbach resonances, alternative schemes are still highly desirable, since magnetic Feshbach resonances do not appear to be a universal tool. One of the alternative schemes is photoassociation, where two ultracold or Bose-condensed atoms absorb a photon and form a bound excited molecule~\cite{cold:lett93,cold:fior98}. Although it was demonstrated that this process generates cold molecules, the yield is small compared to the one obtained by means of magnetic Feshbach resonances. The advantage of photoassociation (and related coherent-control schemes) compared to Feshbach resonances is, however, their assumed wider range of applicability, since there is no need for the occurrence of suitable resonances and thus no requirement for specific magnetic properties of the atoms involved. Besides simple one-step photoassociation that yields electronically excited molecules there are also resonant or non-resonant multi-step schemes leading to the electronic and possibly even rovibrational ground state. One of the schemes to produce molecules from atoms with the help of lasers is, e.\,g., two-color stimulated Raman adiabatic passage~\cite{cold:drum02}. The present work discusses only one-photon association explicitly, but it is important to note that the discussed transition matrix elements are direct ingredients for the modeling of more sophisticated schemes like the mentioned two-photon processes.
Photoassociation is also a powerful tool for the investigation of the properties of cold atoms and diatomic molecules. The absorption of the photon typically occurs at large internuclear distances, and thus the photoassociation spectrum provides important information about the long-range part of the molecular potential curves as well as the collisional properties of atoms~\cite{cold:abra95, cold:tiem96, cold:fior01, cold:gutt02}. The cooling of atomic samples is usually achieved in a trap, and thus photoassociation experiments in ultracold atomic gases are performed in the presence of a trap potential. In most cases these traps are rather shallow, so that the corresponding trap frequency $\displaystyle\omega$ is of the order of 100\,Hz~\cite{cold:schl03}. (Here and in the following the frequency of the trap corresponds to the one in the harmonic approximation.) For such a frequency the influence of the trap is expected to be negligible. This may, however, change for very tight traps. In fact, it was pointed out that the atom-molecule conversion process is more efficient if photoassociation is performed under tight trapping conditions as they are, e.\,g., accessible in optical lattices~\cite{cold:jaks02}. The advantage of using tight confinement has stimulated further theoretical investigations, and very recently some proposals were made that discuss the possibility of using the trapping potential itself for the formation of molecules~\cite{cold:bold05,cold:koch05}. The study of photoassociation in tight optical lattices is of interest by itself, since it is possible to achieve tailored Mott-insulator states containing a large number of almost identical lattice sites, each filled with exactly two atoms~\cite{cold:grei02}. The trap frequency of a lattice site in which molecules are produced via photoassociation can be of the order of 100\,kHz~\cite{cold:rom04}. The systematic investigation of the influence of a tight isotropic harmonic trap on the photoassociation process of two alkali atoms forming a homonuclear molecule is the topic of this work. Realistic atom-atom interaction potentials are adopted. This also makes it possible to check the range of applicability of the $\delta$-function (pseudopotential) approximation for the description of the photoassociation process. In the pseudopotential approximation the true atom-atom interaction is replaced by one that asymptotically reproduces the two-body zero-energy s-wave scattering. For this choice of the potential ($\delta$ function), and if the two atoms are placed in a harmonic trap, the Schr\"odinger equation possesses an analytical solution~\cite{cold:busc98,cold:idzi05}. The validity regime of the pseudopotential approximation has been discussed with respect to the energy levels of trapped atoms in \cite{cold:blum02}. It was shown that the use of an energy-dependent pseudopotential (instead of the mostly adopted energy-independent one) gives almost correct energy levels for two harmonically trapped atoms. Whether this simplified model for the atomic interaction is appropriate for the description of photoassociation in a harmonic trap is, however, not immediately evident. Therefore, the present work compares the results obtained using realistic atomic interaction potentials with the ones obtained with either the energy-dependent or -independent pseudopotential. Photoassociation in tight traps has been studied theoretically before \cite{cold:deb03}.
The energy-independent pseudopotential approximation was adopted and only photoassociation into long-range states was discussed. Since the present work uses realistic atomic interaction potentials, transitions to all final vibrational states can be considered. This makes it possible to identify two different regimes with respect to the influence of a tight trap on the photoassociation rate as well as (approximate) rules for where a transition from one regime to the other is to be expected. The outline of this work is the following. First, a brief description of the model systems is given in Sec.\,\ref{sec:system}. In Sec.\,\ref{sec:photoassociation} the influence of a tight trap on photoassociation is discussed. This includes, after a brief general discussion of the trap influence in Sec.\,\ref{sec:spectrum}, the derivation of a sum rule in Sec.\,\ref{sec:sumrule}, the introduction of an enhancement or suppression factor in Sec.\,\ref{sec:ratio}, and the discussion of two regimes in Secs.\,\ref{sec:constregim} and \ref{sec:cutoffregim}. Then the case of repulsive atom-atom interactions is considered in Sec.\,\ref{sec:positivesclen}. The combined influence of trap frequency and atom-atom interaction on photoassociation is investigated in Sec.\,\ref{sec:combined}, and the validity of the pseudopotential approximation in Sec.\,\ref{sec:delta}. Finally, a discussion and outlook are given in Sec.\,\ref{sec:discussion}. All equations and quantities in this paper are given in atomic units unless otherwise specified. \section{The system} \label{sec:system} Photoassociation of two identical atoms confined in an isotropic harmonic trap and interacting through a two-body Born-Oppenheimer potential $\ds V_{\rm int}(R)$ is considered. The spherical symmetry and harmonicity of the trap make it possible to separate the center-of-mass motion from the radial internal motion~\cite{cold:busc98}. The eigenfunctions of the center-of-mass motion are the harmonic-oscillator states. Thus the problem reduces to solving the Schr\"odinger equation for the radial internal motion \begin{eqnarray} \ds \left[ \frac{1}{2\mu}\frac{d^2}{dR^2}\, -\,\frac{J(J+1)}{2\mu R^2} \,\right.&-& V_{\rm int}(R)\, -\, \frac12\mu\omega^2R^2 \nonumber \\ &+& \left. E \, \right]\,\Psi (R)\,=\,0\; . \label{SE_rim} \end{eqnarray} In Eq.~(\ref{SE_rim}) $J$ denotes the rotational quantum number, $\displaystyle\omega$ is the harmonic trap frequency, and $\displaystyle\mu$ is the reduced mass that is equal to $m/2$ in the present case of particles with identical mass $m$. In order to compute the photoassociation spectrum the vibrational wave functions $\Psi(R)/R$ are determined for the initial and final molecular states from Eq.~(\ref{SE_rim}) with the corresponding Born-Oppenheimer interaction potentials $V_{\rm int}(R)$. The equation is solved numerically using an expansion in $B$ splines. For the investigation of the influence of the trap on the photoassociation rate, Eq.~(\ref{SE_rim}) is solved for $\omega\neq 0$. The photoassociation processes most relevant to experiments on ultracold alkali atoms correspond to transitions from two free ground-state atoms (interacting {\it via} the ground triplet or singlet potential) to the different vibrational levels of the first excited triplet or singlet state~\cite{cold:alma99,cold:alma01,cold:kemm04}. Due to hyperfine interaction, two alkali atoms can also interact via a coherent admixture of singlet and triplet states.
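For concreteness, the following minimal sketch diagonalizes Eq.~(\ref{SE_rim}) for $J=0$ on a uniform radial grid with a simple finite-difference kinetic term (the calculations of this work use $B$ splines; the short-range model potential and all parameter values below are illustrative stand-ins, not the Li$_2$ curves):
\begin{verbatim}
import numpy as np

# Toy discretization of Eq. (1) for J = 0 (atomic units).
# V below is an illustrative short-range well, NOT a Li2 potential curve.
mu, omega = 1.0, 1.0e-4             # reduced mass, trap frequency (toy values)
N, Rmax = 2000, 400.0
R = np.linspace(Rmax / N, Rmax, N)  # grid with Psi(0) = Psi(Rmax) = 0
h = R[1] - R[0]

V = -5.0e-3 * np.exp(-R / 5.0) + 0.5 * mu * omega**2 * R**2

d = 1.0 / (mu * h**2) + V           # -(1/2 mu) d^2/dR^2 by central differences
off = -0.5 / (mu * h**2) * np.ones(N - 1)
E, Psi = np.linalg.eigh(np.diag(d) + np.diag(off, 1) + np.diag(off, -1))

# trap-dominated levels of the relative motion are spaced by about 2*omega
print("lowest levels (a.u.):", E[:4])
print("spacings / (2*omega):", np.diff(E[:4]) / (2.0 * omega))
\end{verbatim}
Setting $\omega = 0$ in the same code yields the box-discretized continuum states used below for the extraction of the scattering length.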
This work starts by considering the photoassociative transition between the two triplet states $a ^3\Sigma^+_u$ and $1 ^3\Sigma^+_g$ for $^6 \mbox{Li}$. A corresponding experiment is, e.\,g., reported in~\cite{cold:schl03}. The generality of the conclusions drawn from this specific example are then tested by considering also other atoms ($^7$Li and $^{39}$K) or modifying artificially the interaction strength, as is discussed below. For the short-range part of the $a ^3\Sigma^+_u$ molecular potential of Li$_2$ the data in~\cite{cold:cola03} are used, including the van der Waals coefficients cited therein. In the case of the $1 ^3\Sigma^+_g$ state data for interatomic distances between $\ds R=4.66\,a_0$ and $\ds R=7.84\,a_0$ are taken from~\cite{cold:lint89} and are extended with {\it ab initio} values from~\cite{cold:schm85} for distances between $\ds R=3.25\,a_0$ and $\ds R=4.50\,a_0$ and between $\ds R=8.0\,a_0$ and $\ds R=30.0\,a_0$. The van der Waals coefficients from~\cite{cold:mari95} are used. For a $\Sigma$ to $\Sigma$ molecular dipole transition the selection rule is $J=J'\pm 1$. Assuming ultracold atomic gases the atoms interact initially in the $\ds J'=0$ state of the $a ^3\Sigma^+_u$ potential. The dipole selection rule leads then to transitions to the $\ds J=1$ states of $1 ^3\Sigma^+_g$. With the given potential-curve parameters a solution of Eq.~(\ref{SE_rim}) in the absence of a trap ($\omega=0$) yields for the fermionic $^6$Li atoms 10 and 100 vibrational bound states for the $a ^3\Sigma^+_u$ ($J'=0$) and the $1 ^3\Sigma^+_g$ ($J=1$) states, respectively. In the case of the bosonic $^7$Li atoms there are 11 and 108 vibrational bound states for the $a ^3\Sigma^+_u$ ($J'=0$) and the $1 ^3\Sigma^+_g$ ($J=1$) states, respectively. The electronic dipole moment $D(R)$ for the transition $a\, ^3\Sigma^+_u \rightarrow 1\, ^3\Sigma^+_g$ of Li was calculated~\cite{cold:vanne} with a configuration interaction (CI) method for the two valence electrons using the code described in~\cite{bsp:vann04}. The core electrons were described with the aid of the Klapish model potential with the parameters given in~\cite{aies:magn99} and polarization was considered as discussed in~\cite{bsp:dumi06}. The resulting $D(R)$ (and its value in the separated atom limit) is in good agreement with literature data~\cite{cold:schm85,cold:ratc87,cold:pipi91,cold:mari95}. In the limit of zero collision energy the interaction between two atoms can be characterised by their s-wave scattering length $\ds a_{\rm sc}$. Its sign determines the type of interaction (repulsive or attractive) and its absolute value the interaction strength. For a given potential curve the s-wave ($J'=0$) scattering length can be determined using the fact that at large distances the scattering wave function describing the relative motion (for $\omega=0$ and very small collision energies) reaches an asymptotic behavior of the form \begin{equation} \ds \Psi_{E}(R)=\sqrt{\frac{k}{\pi E}}~\sin\left[k(R-a_{\rm sc})\right]\; . \label{eq:asymptot_wf} \end{equation} In the present numerical approach discretised continuum states are obtained, since the wave-function calculation is performed within a finite $\ds R$ range, i.\,e.\ in the interval $[0,R_{\rm max}]$. Only wave functions that decay before or have a node at $R_{\rm max}$ are obtained. 
From the analysis of the lowest lying discretised continuum state the scattering length $a_{\rm sc}$ is obtained by the use of the relation $\ds a_{\rm sc} = R_{\rm max} - \frac{\pi}{k}$ with $\ds k=\sqrt{2\mu E}$~\cite{cold:juli96}. A variation of $R_{\rm max}$ changes the continuum discretization and therefore results in $R_{\rm max}$-dependent lowest lying continuum solutions $\Psi_{E_0}$. The scattering length extracted from $\Psi_{E_0}$ converges, however, to a constant value as $R_{\rm max}$ increases and $E_0$ approaches zero. Using this method and the adopted potential curves the scattering length values $\ds a_{\rm sc} = -2030\, a_0$ and $\ds a_{\rm sc} = -30\, a_0$ are obtained for $^6$Li and $^7$Li, respectively. These values agree well with the experimental ones: $a_{\rm sc} = (-2160\pm 250)\,a_0$ ($^6$Li) and $a_{\rm sc} = (-27.6\pm 0.5)\,a_0$ ($^7$Li)~\cite{cold:abra97}. The interaction of two ultracold $^6$Li atoms is strongly, the one of $^7$Li weakly attractive, as is reflected by the large and small but negative scattering lengths. In the case of two identical fermionic $^6$Li atoms the antisymmetry requirement of the total wave function excludes s-wave scattering. Thus the present results are more applicable for two $^6$Li atoms in different hyperfine states (where the admixture of a singlet potential would, however, usually modify the scattering length), but are actually meant as a realistic example for a very large negative scattering length, i.\,e.\ strong attraction. In order to further check the generality of the results, the formation of $^{39}$K$_2$ is also investigated as an example for a small repulsive interaction. In this case photoassociation starting from two potassium atoms interacting {\it via} the {\it singlet} $X ^1\Sigma^+_g$ ground state and transitions into the $A ^1\Sigma_u^+$ state are considered. This process is not only experimentally relevant \cite{cold:niko99}, but is at the same time an even further check of the generality of the conclusions obtained from the investigation of the transitions between {\it triplet} states in $\mbox{Li}_2$. The data for constructing the relevant potential curves for $^{39}$K$_2$ are taken from~\cite{cold:allouche,cold:amio95,cold:wang97}. The resulting potential curve for the $X ^1\Sigma^+_g$ state yields a scattering length $a_{\rm sc}\approx +90\,a_0$. This is in reasonable agreement with the experimental value given in \cite{cold:will99} where $a_{\rm sc}$ is found to lie between $+90\,a_0$ and $+230\,a_0$. Instead of selecting additional atomic pairs that could represent examples for other values of the scattering length, the sensitivity of the s-wave interaction to the position of the least bound state is used to generate {\it artificially} a variable interaction strength. The scattering length is thus modified by a variation of the particle mass. The strong mass dependence of the scattering length is already evident from its change from $-2030\,a_0$ to $-30\,a_0$ for the isotopes $^6$Li and $^7$Li, respectively. Experimentally, a strong variation of the interaction strength can be realized with the aid of magnetic Feshbach resonances~\cite{cold:loft02,cold:rega03b}.
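The extraction procedure can be illustrated with a potential for which the scattering length is known in closed form. The following hedged sketch (a spherical square well of depth $V_0$ and range $r_0$ with arbitrarily chosen toy parameters) compares $a_{\rm sc} = R_{\rm max} - \pi/k$, obtained from the lowest box-discretized continuum state, with the analytic value $a_{\rm sc} = r_0\left(1 - \tan\gamma/\gamma\right)$, where $\gamma = \sqrt{2\mu V_0}\, r_0$:
\begin{verbatim}
import numpy as np

# Check of a_sc = R_max - pi/k for a square well (arbitrary toy values).
mu, V0, r0 = 1.0, 0.5, 1.0
gamma = np.sqrt(2.0 * mu * V0) * r0
a_exact = r0 * (1.0 - np.tan(gamma) / gamma)   # analytic s-wave result

N, Rmax = 3000, 300.0                          # trap-free box [0, R_max]
R = np.linspace(Rmax / N, Rmax, N)
h = R[1] - R[0]
V = np.where(R < r0, -V0, 0.0)

d = 1.0 / (mu * h**2) + V
off = -0.5 / (mu * h**2) * np.ones(N - 1)
E = np.linalg.eigvalsh(np.diag(d) + np.diag(off, 1) + np.diag(off, -1))

E0 = E[E > 0][0]                 # lowest discretized continuum state
k0 = np.sqrt(2.0 * mu * E0)
print("a_sc from box:", Rmax - np.pi / k0)
print("a_sc exact:   ", a_exact)
\end{verbatim}
Increasing $R_{\rm max}$ (and refining the grid) lets $E_0$ approach zero and the extracted value converge toward the analytic one, mirroring the convergence behavior described above.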
\section{Photoassociation in an isotropic harmonic trap} \label{sec:photoassociation} \subsection{Photoassociation in a trap} \label{sec:spectrum} While Eq.~(\ref{SE_rim}) yields in the trap-free case ($\omega=0$) both bound (vibrational) and continuum (dissociative) states, the harmonic trap potential changes the energy spectrum to a purely discrete one, as is sketched in Fig.\,\ref{fig:PAscetch}. \begin{figure}\label{fig:PAscetch} \end{figure} Considering the concrete example of two $^6$Li atoms where the $a ^3\Sigma^+_u$ state supports the 10 vibrational bound states $v'=0$ to 9, $v'=10$ ($J'=0$) denotes the first state that results from the trap-induced continuum discretization. This (first trap-induced) state describes the initial state of two spin-polarized $^6$Li atoms interacting {\it via} the $a ^3\Sigma^+_u$ potential curve, if a sufficiently cold atomic gas in an (adiabatically turned-on) harmonic trap is considered. In the present work photoassociation (by means of a suitably tuned laser) from this initial state to one of the vibrational states $v$ of the $1 ^3\Sigma^+_g$ potential is investigated as a function of the trap frequency $\omega$. In view of the already discussed dipole-selection rule the final state possesses $J=1$, and in the following $J'=0$ and $J=1$ are tacitly assumed. The strength of the photoassociation transition to final state $v$ is given by the rate \cite{cold:cote98b} \begin{equation} \ds \Gamma_{v}(\omega) = 4\pi^2 \mathcal{I}\, I^v(\omega) \label{PA_total} \end{equation} where $\mathcal{I}$ is the laser intensity and \begin{equation} \ds I^{v} (\omega) = \left| \int\limits_0^\infty \: \Psi^{v}(R;\omega) \: D(R) \: \Psi^{\rm 10'}(R;\omega) \:{\rm d}R \right|^2\; . \label{PA_dipole} \end{equation} In Eq.\,(\ref{PA_dipole}) $\Psi^v(R)/R$ and $\Psi^{\rm 10'}(R)/R$ are the vibrational wave functions of the final and initial state, respectively. Since the radial density is proportional to $|\Psi|^2$, it is convenient to discuss $\Psi$ instead of the true vibrational wave function $\Psi(R)/R$. This will be done in the following, where $\Psi$ is for simplicity called the vibrational wave function. Finally, $D(R)$ is the ($R$-dependent) electronic transition dipole matrix element between the $a ^3\Sigma^+_u$ and the $1 ^3\Sigma^+_g$ state of $\mbox{Li}_2$ introduced in Sec.\,\ref{sec:system}. $D(R)$ is practically constant for $R > 25\,a_0$. Eq.\,(\ref{PA_dipole}) is only valid within the dipole approximation. The latter is supposed to be applicable if the photon wavelength is much larger than the extension of the atomic or molecular system. The shortest photoassociation laser wavelength corresponds to the transition to the highest-lying vibrational state and is thus approximately that of the atomic transition (2\,$^2$S $\rightarrow$ 2\,$^2$P), $\lambda = 12680\,a_0$. Although some of the final vibrational states (and of course the initial state in the case of shallow traps) have a similar or even larger spatial extent, effects beyond the dipole approximation are neglected in this work. The key quantity describing the photoassociation rate to different vibrational states $v$ or for variable trap frequency $\omega$ is $I^{v} (\omega)$, on whose calculation and discussion this work concentrates. It is important to note that the transition rate is proportional to $I^{v} (\omega)$ also in the case of more elaborate laser-assisted association schemes, like stimulated Raman processes, that involve (virtual) transitions to the $v$ states.
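Numerically, Eq.\,(\ref{PA_dipole}) is a one-dimensional overlap integral. A minimal sketch of its evaluation by quadrature is given below; the arrays (grid, wave functions, dipole moment) are placeholders that would come from the actual structure calculation.
\begin{verbatim}
# Sketch: I^v(omega) of Eq. (PA_dipole) and Gamma_v of Eq. (PA_total) by
# trapezoidal quadrature.  R, psi_v, psi_init and D are placeholder
# arrays on a common radial grid (atomic units).
import numpy as np

def I_v(R, psi_v, psi_init, D):
    """Squared dipole matrix element |int Psi_v D Psi_10' dR|^2."""
    return np.trapz(psi_v * D * psi_init, R) ** 2

def Gamma_v(laser_intensity, I_v_value):
    """Photoassociation rate Gamma_v = 4 pi^2 I I^v."""
    return 4.0 * np.pi**2 * laser_intensity * I_v_value
\end{verbatim}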
\begin{figure} \caption{{\footnotesize The classical outer turning points of the vibrational levels of the $1\,^3\Sigma_g^+$ state of $^6$Li$_2$ are shown on a linear (solid circles, left scale and insert) and on a logarithmic scale (empty circles, right scale). }} \label{fig:ClasTurnPoint} \end{figure} According to Eq.\,(\ref{PA_dipole}) the photoassociation rate depends for transitions between long-range states on the Franck-Condon factors between the initial and final nuclear wave functions, if $D(R)$ is practically constant for large $R$. In the case of alkali atoms the interaction potentials of the electronic states can be very long ranged and can contain numerous rovibrational bound states. Fig.~\ref{fig:ClasTurnPoint} shows, e.\,g., the classical outer turning points $R_{\rm out}$ of the 100 ($J=0$) vibrational bound states of $^6$Li$_2$ supported by the final-state electronic potential curve $1 ^3\Sigma^+_g$. The orthogonality of the states is achieved by the occurrence of $v$ nodes. As $v$ increases the wave functions consist of a highly oscillatory short-range part with small overall amplitude that covers the range of the $v-1$ wave function and a large outermost lobe. The $1 ^3\Sigma^+_g$ state is very long ranged, since its leading van der Waals term is $\displaystyle-C_3/R^3$. The initial electronic state $a ^3\Sigma^+_u$ with leading $\displaystyle-C_6/R^6$ van der Waals term is shorter ranged. Fig.~\ref{fig:iniWF4wregimes} shows the initial vibrational state for $^6$Li as a function of the trap frequency. This first trap-induced bound state possesses $v'$ nodes (here $v'=10$) that are located in the $R$ range of the last trap-free bound state ($v'=9$). The overall amplitude in this roughly $25\,a_0$ long interval is very small and most of the wave function is distributed over the harmonic trap. \begin{figure}\label{fig:iniWF4wregimes} \end{figure} The squared transition dipole moments $\ds I^{v}(\omega)$ are shown for $ ^6 \mbox{Li}$ in Fig.~\ref{fig:PAtransmomAttract}(a) for three different trap frequencies $\displaystyle\omega$. \begin{figure}\label{fig:PAtransmomAttract} \end{figure} As mentioned before, the final vibrational levels with $v > 99$ are trap-induced bound states and exist only due to the continuum discretization in the presence of a trap. If the trap were (adiabatically) turned off after photoassociation to such a level, the trap-induced dimer would immediately dissociate (without the need for any radiative or non-radiative coupling to some dissociative state). For a fixed trap frequency the photoassociation rate generally increases as a function of the final vibrational level $v$, but for small $v$ an oscillatory behavior is visible. These oscillations are a consequence of the nodal structure of the initial-state wave function describing the atom pair. The 10 nodes (for the shown example of $^6$Li) of the initial-state wave function lead to exactly 10 dips in the photoassociation spectrum. Their exact position depends on the interference with the nodal structure of the final-state wave functions. The oscillatory structure of $I^v(\omega)$ ends at about $v=55$ and beyond that point the rate increases by orders of magnitude, before a sharp decrease is observed close to the highest lying vibrational bound state ($v=99$).
The absence of oscillatory behavior is a clear signature that for those transitions (in the present example for transitions into states with $v>55$) the Franck-Condon factors are determined by the overlap of the outermost lobe of the initial state with the one of the final state. The comparison of $I^v(\omega)$ for the different trap frequencies shown in Fig.\,\ref{fig:PAtransmomAttract}(a) indicates a very systematic trend. The transition probabilities to most of the vibrational bound states increase with increasing trap frequency. This is in accordance with simple confinement arguments, since a tighter trap confines the atoms in the initial state to a smaller spatial region. Due to the special properties of harmonic traps, this confinement translates directly into a corresponding confinement of the pair density (see Eq.\,(\ref{SE_rim})). The probability for atom pairs to have the correct separation for the photoassociative transition is thus expected to increase for tighter confinements, since a larger Franck-Condon overlap of the now more compact initial state with the bound molecular final state is expected. However, for the vibrational final states close to and above the (trap-free) dissociation threshold a completely different behavior is found. In this case the photoassociation rate decreases with increasing trap frequency, as can be seen especially from the insert of Fig.\,\ref{fig:PAtransmomAttract}(a). In fact, a sharp cut-off of the transition rate is observed. The transitions to the states that possessed the largest photoassociation rate for small trap frequencies are almost completely suppressed for large trap frequencies. Clearly, the simple assumption ``a tighter trap leads to a higher photoassociation rate due to an increased spatial confinement'' is only partly true. The fact that this assumption cannot be valid for all final states can be substantiated by means of a general sum rule that is derived and discussed in the following subsection. \subsection{Sum rule} \label{sec:sumrule} Performing a summation (including for $\omega=0$ an integration over the dissociative continuum) over all final vibrational states (using closure) yields \begin{equation} \ds \widetilde{I}(\omega) \; =\; \sum\limits_{v=0}^\infty \: I^v(\omega) \;=\; \int\limits_0^\infty\: \Psi^{\rm 10'}(R;\omega) \, D^2 (R) \, \Psi^{\rm 10'}(R;\omega) \:{\rm d}R \; . \label{sumrule} \end{equation} While the electronic transition dipole moment $D(R)$ clearly depends on $R$ for small internuclear separations, it reaches its asymptotic value (the sum of the electronic dipole transition moments of two separated atoms, $D_{\rm at}$) at some $R$ value that is much smaller than the typical spatial extent of the final vibrational states with the largest transition amplitudes. (In the example of Li$_2$ this asymptotic limit is reached at about $25\, a_0$.) If the largest photoassociation amplitudes result from transitions to final states whose wave functions are mostly located outside this $R$ range, the integral in Eq.\,(\ref{sumrule}) is dominated by the $R$ regime in which $D(R)$ is constant. In this case $D^2$ can be taken out of the integral and the normalization of the initial wave function assures $\widetilde{I}(\omega) \approx \widetilde{I} = D_{\rm at}^2$.
Consequently, for all trap frequencies that are too small to confine the atoms into a spatial volume that is comparable to the atomic volumes (leading to $D(R)\neq \,D_{\rm at}$) and thus for all traps relevant to this work (and presently experimentally achievable) the total dipole transition moment $\widetilde{I}$ is to a good approximation independent of the trap frequency $\omega$. Therefore, changing the trap frequency can only redistribute transition probabilities between different final vibrational states. An increase of the transition rate to one final state must be compensated by a decrease of the transition probability to one or more other vibrational states. A conservative estimate of the minimum and maximum influence of a harmonic trap on the photoassociation rate is obtained from $\widetilde{I}_{\rm min}=D_{\rm min}^2$ and $\widetilde{I}_{\rm max}=D_{\rm max}^2$, respectively, where $D_{\rm min}$ ($D_{\rm max}$) is the minimum (maximum) value of the molecular electronic transition dipole moment. The sum-rule values obtained numerically for the trap frequencies shown in Fig.\,\ref{fig:PAtransmomAttract}(a) are $\widetilde{I}(\omega = 2\pi\times 1\,$kHz$) = 11.127222$, $\widetilde{I}(\omega = 2\pi\times 10\,$kHz$) = 11.12723$, and $\widetilde{I}(\omega = 2\pi\times 100\,$kHz$) = 11.1273$. This may be compared to the value $ \lim_{R\rightarrow\infty} \, |D(R)|^2 = D_{\rm at}^2 = |D_{\rm 2s-2s}+D_{\rm 2s-2p}|^2 = |D_{\rm 2s-2p}|^2 = 11.1272213$ obtained from the calculation described in Sec.\,\ref{sec:system}. Clearly, the sum rule~(\ref{sumrule}) can also be used as a test for the correctness of numerical calculations. The very small deviations from the predicted sum-rule value may, however, not only be a result of an inaccuracy of the present numerical approach, but also reflect the (small) $R$ dependence of $D(R)$ that allows some $\omega$ dependence of the total photoassociation rate. This interpretation is supported by the fact that the numerically found deviations increase monotonically with increasing frequency $\omega$. Larger values of $\omega$ lead to a spatially more confined $\Psi^{\rm 10'}(R;\omega)$ which in turn probes more of the $R$-dependent part of $D(R)$. Since $D(R)$ approaches $D(R=\infty)=D_{\rm at}$ from above, a small increase of $\widetilde{I}$ is expected for increasing trap frequencies. As is evident from Fig.\,\ref{fig:PAtransmomAttract}\,(a) (especially the insert), the sum-rule fulfillment is achieved by a drastic decrease of the photoassociation rate to the highest lying final states. This compensates the trap-induced increased rate to the lower lying states. Since the rate to the highest lying states is orders of magnitude larger than the one to the low-lying states, the reduced transition probability of a small number of states can easily compensate the substantial increase by orders of magnitude observed for the large number of low-lying states. From Eq.~(\ref{sumrule}) it is clear that in those cases where most of the contribution to the sum rule stems from the $R$ range where $D(R)$ is practically constant, there is also no influence of the initial-state wave function. Taking $D$ out of the integral always yields the self-overlap of the initial-state wave function and thus unity. This is important, since it indicates that the sum-rule value is also (approximately) independent of the atom-atom interaction potential.
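A numerical check of Eq.\,(\ref{sumrule}) is straightforward once a (sufficiently complete) set of final-state wave functions is available. The following sketch uses placeholder arrays and simply compares the summed squared dipole moments with the $D^2$-weighted self-overlap of the initial state.
\begin{verbatim}
# Sketch of the closure test of Eq. (sumrule).  psi_finals is a list of
# final-state arrays psi_v (in practice truncated), psi_init the initial
# state, D the transition dipole moment; all placeholders on a common
# radial grid R.
import numpy as np

def sum_rule_lhs(R, psi_finals, psi_init, D):
    return sum(np.trapz(p * D * psi_init, R) ** 2 for p in psi_finals)

def sum_rule_rhs(R, psi_init, D):
    return np.trapz(psi_init * D**2 * psi_init, R)

# Both sides should agree (up to truncation of the sum) and approach
# D_at^2 when D(R) is constant where the initial state has its density.
\end{verbatim}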
\subsection{Enhancement and suppression factor $f^v$} \label{sec:ratio} In order to quantify the effect of a tight harmonic trap on the photoassociation rate and to remove its variation as a function of the final-state vibrational level $v$ (which is due to the nodal structure and clearly visible in Fig.\,\ref{fig:PAtransmomAttract}(a), especially for smaller $v$) the ratio \begin{equation} \ds f^{v}(\omega) = \frac{I^{v}(\omega)}{I^{v}(\omega_{\rm ref})} \label{f_v} \end{equation} may be introduced. It describes the relative enhancement ($f^{v}(\omega)>1$) or suppression ($f^{v}(\omega)<1$) of the photoassociation rate to a specific final state $v$ at a given trap frequency $\omega$ with respect to the reference frequency~$\omega_{\rm ref}$. Although it may appear most natural to choose the trap-free case as reference ($\omega_{\rm ref}=0$), a finite value offers some advantages. First, a different normalization applies to $\omega=0$ and $\omega\neq 0$, since for $\omega=0$ the transitions are free-to-bound, while otherwise they are bound-to-bound. Second, from a numerical point of view it is more convenient to treat both cases the same way and to avoid the variation of the box boundary $\ds R_{\rm max}$ that would otherwise be necessary for the trap-free case. Finally, it may be argued that a non-zero trap reference state is in fact more relevant to typical photoassociation experiments with ultracold alkali atoms, since most of them are anyway performed in traps. In the present work $\omega_{\rm ref} = 2 \pi \times 1\,\mbox{kHz}$ was chosen. This value is sufficiently small to represent typical shallow traps in which the influence of the trap on photoassociation is supposedly (at least to a good approximation) negligible. On the other hand, it allows one to calculate the transition dipole moments with reasonable numerical effort and thus sufficient accuracy. The ratio $f^{v}(\omega)$ is shown for two different trap frequencies $\omega$ in Fig.\,\ref{fig:PAtransmomAttract}(b). For most of the vibrational final states a simple constant regime is observed, i.\,e.\ the ratio $f^{v}(\omega)$ is independent of $v$ for all except the highest lying states. This constant regime is followed by a relatively sharp cut-off beyond which the ratio $f^{v}(\omega)$ is very small. In the constant regime a 100\,kHz trap leads to an enhancement by almost 3 orders of magnitude. Comparing the results for different $\omega$ one notices that in the range of final states where a constant behavior (with respect to $v$) is observed, a tighter trap leads to an increased photoassociation rate, i.\,e.\ trap-induced enhanced photoassociation (EPA). Due to the cut-off this is, however, not true if the last vibrational states are considered. Since the range of constant behavior shrinks with increasing trap frequency, there is an increasing range of vibrational states for which a tighter trap leads to a smaller photoassociation rate compared to a shallower trap. In this case trap-induced suppressed photoassociation (SPA) occurs. This effect is especially visible from the insert of Fig.\,\ref{fig:PAtransmomAttract}(a). The physical origin of the two different regimes (constant vs.\ cut-off) and their $\omega$ dependence is discussed separately in the following two subsections.
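In practice these diagnostics reduce to elementary array operations; a minimal sketch (placeholder arrays, arbitrary illustrative threshold) is:
\begin{verbatim}
# Sketch: enhancement/suppression factor f^v of Eq. (f_v) and a crude
# detection of the end of the constant regime.  I_omega and I_ref are
# placeholder arrays of I^v at omega and omega_ref, indexed by v.
import numpy as np

def f_v(I_omega, I_ref):
    return I_omega / I_ref

def end_of_constant_regime(f, rel_tol=0.05):
    """First v at which f^v deviates from f_c = f^{v=0} by more than
    rel_tol (the constant regime starts at v = 0, see below)."""
    dev = np.abs(f / f[0] - 1.0) > rel_tol
    return int(np.argmax(dev)) if dev.any() else len(f)
\end{verbatim}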
\subsection{Constant regime} \label{sec:constregim} Since in the constant regime the ratio $f^v(\omega)$ is completely independent of the final-state level $v$, its value (for a given $\omega$) and constancy (as a function of $v$) must be a consequence of the influence of the trap on the initial state. The initial-state wave function for a $^6$Li atom pair was shown for three different trap frequencies in Fig.~\ref{fig:iniWF4wregimes}. A view of the complete wave function directly reveals the confinement of the wave function to a smaller spatial volume, if the trap frequency is increased. However, on this scale the variation of the wave function at a specific value of $R$ appears to be quite complicated. Thus it is not at all clear why the enhancement factor $f^v$ has a constant value for so many states. A closer look at smaller internuclear separations (insert of Fig.~\ref{fig:iniWF4wregimes}) reveals that besides the initial oscillatory part confined to the effective range of the atom-atom interaction potential there is a relatively large $R$ interval in which the wave functions for the different trap frequencies vary linearly with $R$. In this case the slope is very small and the wave function is thus almost constant. If the Franck-Condon integral is determined only by the value of the initial-state wave function in this $R$ window, it produces an almost undistorted image of the final-state wave function. However, for the ratio $f^v(\omega)$ this final-state dependence disappears. The reason is that in the $R$ range where the initial-state wave function is almost constant, its variation with the trap frequency is also $R$ independent, as can be seen from the insert of Fig.~\ref{fig:iniWF4wregimes}. In other words, one finds $\displaystyle\Psi^{\rm 10'}(R;\omega)=C(\omega) \cdot \Psi^{\rm 10'}(R;\omega_{\rm ref})$. If no final-state dependence occurs, the constant $C(\omega)$ is related to the ratio $f$ via $f_c(\omega)=C^2(\omega)$. The validity of this argument for the occurrence of a constant ratio $f$ can thus be checked (and visualized) in the following way. Together with the correct wave function $\displaystyle\Psi^{\rm 10'}(R;\omega)$ the approximate one, $\sqrt{f_c(\omega)}\cdot\Psi^{\rm 10'} (R;\omega_{\rm ref})$, is plotted, where $f_c(\omega)$ is the value of the factor $f$ in the constant regime. A convenient way to determine $f_c(\omega)$ follows from the observation that the constant regime always starts at $v=0$. Thus $f_c(\omega)=f^{v=0}(\omega)$ is the most straightforward way to determine $f_c(\omega)$. In Fig.~\ref{fig:constnonconst} the correct wave function $\displaystyle\Psi^{\rm 10'}(R;\omega)$ is plotted together with the approximate wave function $\sqrt{f_c(\omega)}\cdot\Psi^{\rm 10'} (R;\omega_{\rm ref})$ for the trap frequency $\displaystyle\omega =2\pi\times\mbox{100\,kHz}$. \begin{figure}\label{fig:constnonconst} \end{figure} The agreement between the two wave functions is good in the shown range of $R$ values, but it is better for small $R$: below $R=500\,a_0$ the two wave functions agree completely with each other, even in the very short $R$ range where they possess an oscillatory behaviour, while at about $R=500\,a_0$ they start to disagree. The key for understanding the occurrence of the constant regime is that a variation of the trap frequency modifies the spatial extent of the initial-state wave function, but leaves its norm and nodal structure preserved.
As a consequence, the wave function changes qualitatively only in the large $R$ range, while in the short range only the amplitude varies (by the factor $C(\omega)$). The reason for this behaviour is that at small $R$ the wave function is practically shielded from the trap potential by the dominant atom-atom interaction. Fig.~\ref{fig:constnonconst} also shows two final-state wave functions ($v=88$ and 94). According to Fig.~\ref{fig:PAtransmomAttract}\,(b) the transition to $v=88$ still belongs to the constant regime ($f^{88}(\omega)\approx f_c(\omega)$), though to its very end. The transition to $v=94$, on the other hand, does not belong to this regime, since for the considered trap frequency $f^{94}(\omega)<f_c(\omega)$. As is evident from Fig.~\ref{fig:constnonconst}, a constant ratio $f^v(\omega)$ is observed as long as the final-state wave function $v$ is completely confined within an $R$ range in which the approximation $\displaystyle\Psi^{\rm 10'}(R;\omega)\approx C(\omega) \cdot \Psi^{\rm 10'}(R;\omega_{\rm ref})$ is well fulfilled. This is (for $\omega=2\pi\times\mbox{100\,kHz}$) the case for $v=88$, for which the wave function is confined within $R<600\,a_0$, but not for $v=94$, whose outermost lobe has its maximum at about $1350\,a_0$. Since the $R$ range in which the initial-state wave function can be approximated in the fashion discussed here decreases with increasing trap frequency, the range of $v$ values for which $f^v(\omega)\approx f_c(\omega)$ is valid diminishes with increasing trap frequency. The following rule of thumb is found to determine those vibrational levels $v$ for which the relation $f^v(\omega)\approx f_c(\omega)$ starts to break down. For trap frequencies $\displaystyle\omega_1$ and $\displaystyle\omega_2$ (with $\displaystyle\omega_2 > \omega_1$) one may define a difference $\Delta$ that quantifies the deviation of $C\cdot\Psi^{\rm 10'}(R;\omega_{1})$ and $\Psi^{\rm 10'}(R;\omega_{2})$ as $\Delta (R) = C\cdot\Psi^{\rm 10'}(R;\omega_{1}) - \Psi^{\rm 10'}(R;\omega_{2})$ where $\ds C=\sqrt{f_c(\omega_2)}$. For example, in Fig.~\ref{fig:constnonconst} the difference $\Delta (R)$ is the distance between the solid curve and the dotted one. The relation $f^v(\omega)\approx f_c(\omega)$ breaks down for those final states $v$ whose classical turning point lies beyond $R_0$, where $R_0$ itself is determined by $\Delta (R>R_0)\gtrsim 10^{-3}$. In other words, if the last lobe of the final wave function overlaps substantially with a region where the deviation defined by $\Delta$ is larger than about $10^{-3}$, a clear deviation from the constant regime is to be expected. \subsection{Cut-off regime} \label{sec:cutoffregim} Once the constant regime of the ratio $f^v$ (for a given trap frequency) is left, $f^v$ decreases steadily with $v$, as is apparent from Fig.~\ref{fig:PAtransmomAttract}\,(b). The photoassociation rate then displays a rather sharp cut-off behavior (see insert of Fig.~\ref{fig:PAtransmomAttract}\,(a)). The most loosely bound vibrational states of the final electronic state have the largest rate in the trap-free case but possess a very small one in very tight traps. For those high-lying states the wave functions show a very highly oscillatory behavior at short $R$ values and a large lobe close to the classical turning point. This outermost lobe determines the Franck-Condon integral, if the initial-state wave function is sufficiently smooth in this $R$ range.
In Fig.~\ref{fig:cutoff} \begin{figure}\label{fig:cutoff} \end{figure} the initial-state wave function is shown together with the ones for $v=96$ and 98 (for $\displaystyle\omega = 2\pi\times\mbox{100\,kHz}$). It is evident from Fig.~\ref{fig:cutoff} that for $\ds v=96$ the overlap of the initial wave function with the last lobe of the final state is very large. In fact, for this trap frequency the overlap reaches its maximum for $v=96$ and $97$ (see Fig.~\ref{fig:PAtransmomAttract}\,(a)), despite the fact that the trap-induced relative enhancement factor $f^v(\omega)$ is small (Fig.~\ref{fig:PAtransmomAttract}\,(b)). In the case of $v=98$ the transition rate is not only clearly smaller than for $v=96$ or $97$, but it is also much smaller than the rate obtained for the same level at much lower trap frequencies (10 or 1\,kHz). Clearly, one has $f^{v=98} (\omega)<1$ and thus for $\displaystyle\omega = 2\pi\times\mbox{100\,kHz}$ the level $v=98$ represents an example for a trap-induced suppressed rate (SPA), in contrast to the usually expected enhanced photoassociation in a trap (EPA, $f^v(\omega)>1$). From Fig.~\ref{fig:cutoff} it is clear that the small transition rate to $v=98$ is due to the fact that the outermost lobe of the $v=98$ state lies mostly outside the $R$ range in which the initial-state wave function is non-zero. The least bound state (in the trap-free case), $v=99$, possesses an even smaller photoassociation rate, since in this case the outermost lobe lies practically completely outside the non-zero $R$ range of the initial-state wave function. Due to the imperfect cancellation of the oscillating contributions from the inner lobes, the photoassociation rate for $v=99$ is very small, but non-zero. Increasing the trap frequency even more will confine the initial-state wave function to a smaller $R$ range and thus SPA occurs for smaller $v$ values. The origin of the suppression is in fact a quite remarkable feature, since from Fig.~\ref{fig:cutoff} it is clear that the trap has practically no influence on the final states, even if one considers the highest-lying ones that have very tiny binding energies. This is still true if the spatial extent of the final state is much larger than that of the trap potential. This may be interpreted as a shielding of the trap potential by the molecular (atom-atom interaction) potential. The reason for the different shielding experienced by the initial and the final states is not only that the former lies above the dissociation threshold, since then the photoassociation rate should dramatically increase, if transitions into the purely trap-induced bound states of the final electronic state are considered. This is, however, not the case, as can be seen for the states $v>99$ in Fig.~\ref{fig:PAtransmomAttract}\,(a). The different shielding is due to the inherently different long-range behaviors of the two electronic potential curves describing the initial ($a ^3\Sigma^+_u$) and the final ($1 ^3\Sigma^+_g$) state. One may introduce the crossing point $\ds R_c$ at which the magnitude of the leading van der Waals term equals the harmonic trap potential; it is defined by equating $\ds C_n/R_c^n$ and $\displaystyle\frac12\mu\omega^2R_c^2$, where $C_n$ is the corresponding leading van der Waals coefficient. Beyond $\ds R_c$ the trap potential starts to dominate.
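Solving this condition for $R_c$ gives $R_c = [2C_n/(\mu\omega^2)]^{1/(n+2)}$. The following minimal sketch evaluates it in atomic units; the $C_6$ and $C_3$ values are approximate, literature-scale numbers chosen here for illustration, consistent with the crossing radii quoted below.
\begin{verbatim}
# Sketch: crossing radius R_c from C_n/R_c^n = (1/2) mu w^2 R_c^2, i.e.
#   R_c = (2 C_n / (mu w^2))^(1/(n+2)).
# Atomic units (hbar = 1); dispersion coefficients are assumed values.
import numpy as np

AU_TIME = 2.4188843265857e-17          # one atomic time unit in seconds

def R_c(C_n, n, mu, omega):
    return (2.0 * C_n / (mu * omega**2)) ** (1.0 / (n + 2))

mu = 5482.0                            # approx. reduced mass of 6Li2
w  = 2 * np.pi * 10e3 * AU_TIME        # 10 kHz trap in atomic units
print(R_c(1393.0, 6, mu, w))           # a 3Sigma_u+ (C6 ~ 1.4e3): ~8e2 a0
print(R_c(11.1, 3, mu, w))             # 1 3Sigma_g+ (C3 ~ 11):   ~1.8e4 a0
\end{verbatim}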
For example, in the case of the trap frequency $\omega = 2\pi\times\mbox{10\,kHz}$ one finds $\ds R_c \approx 825\, a_0$ and $\ds R_c \approx 17700\, a_0$ for $a ^3\Sigma^+_u$ and $1 ^3\Sigma^+_g$ of Li$_2$, respectively. \subsection{$\ds I^{v}(\omega)$ for a repulsive interaction} \label{sec:positivesclen} In order to check the main conclusions drawn from the results for $^6$Li$_2$, the formation of $^{39}$K$_2$ is also investigated. While for $^6$Li a photoassociation process between {\it triplet} states was considered, a transition between the $X ^1\Sigma^+_g$ and the $A ^1\Sigma_u^+$ states is chosen for $^{39}$K. In contrast to the large negative scattering length of two $ ^6$Li atoms interacting via the $a ^3\Sigma^+_u$ potential, two $^{39}$K ground-state atoms interact via a small positive s-wave scattering length. The obtained results for the squared transition dipole moments $I^v(\omega)$ are qualitatively very similar to the results obtained for $^6$Li$_2$. This includes the existence of a constant regime of $f^v(\omega)$ followed by a pronounced decrease for the highest-lying vibrational states, the cut-off. The rule of thumb for predicting the range of $v$ values for which a constant ratio $f^v$ is observed also works in this case. $^{39}$K$_2$ thus shows trap-induced suppressed photoassociation for the highest lying states with a sharp cut-off in the $I^v(\omega)$ spectrum, very much like $^6$Li$_2$. For space reasons these results are therefore not shown explicitly. \begin{figure}\label{fig:PAtransmomRepuls} \end{figure} For a more systematic investigation of the influence of the scattering length $a_{\rm sc}$ and thus the type of interaction (sign of $a_{\rm sc}$) and its strength (absolute value of $a_{\rm sc}$) the mass of the Li atoms is varied. The mass variation allows for an in principle continuous (though non-physical) modification of $a_{\rm sc}$ from very large positive to negative values. With increasing mass an increasing number of bound states is supported by the same potential curve. Since $a_{\rm sc}$ is sensitive to the position of the least bound state, even a very small mass variation has a very large effect, if a formerly unbound state becomes bound. For example, an increase of the mass of $ ^6$Li by 0.3\% changes $a_{\rm sc}$ from $-2030\,a_0$ to about $+850\, a_0$. The (for $^6$Li unbound) 11th vibrational state becomes weakly bound. A further increase of the mass increases its binding energy until it reaches the value for $^7$Li. It is also possible to modify $a_{\rm sc}$ from $-2030\,a_0$ to $+850\, a_0$ by lowering the mass of $^6$Li. A larger mass variation is required (about 18\,\%), but the number of bound states remains unchanged. In this case the large positive value of $a_{\rm sc}$ indicates that the 10th bound state is now only very weakly bound, and a further small decrease of the mass would shift it into the dissociative continuum. \begin{figure}\label{fig:dip} \end{figure} In Fig.~\ref{fig:PAtransmomRepuls}\,(a) $I^v(\omega)$ is shown for $a_{\rm sc}=+850\,a_0$ (achieved by a 0.3\% increase of the mass) and three different trap frequencies as an example for a large positive scattering length and thus strong repulsive interaction. The overall result is again very similar to the one obtained for a large negative scattering length. A tighter trap increases the transition rate for most of the states, but there is a sharp cut-off for large $v$. The position of this cut-off moves to smaller $v$ as the trap frequency is increased.
However, for a large positive value of $a_{\rm sc}$ an additional feature appears in the transition spectrum: a photoassociation window, visible as a pronounced dip in the $I^v$ spectrum for large $v$. For the given choice of $a_{\rm sc}$ this minimum occurs for $v=92$. The occurrence of the dip for $a_{\rm sc}\gg 0$ has been predicted and explained for the trap-free case in~\cite{cold:cote95,cold:cote98b} and was experimentally confirmed~\cite{cold:abra96}. Fig.~\ref{fig:dip} shows the last lobe of the final-state vibrational wave function $\Psi^{\rm 92}(R)$ together with the initial-state wave function, both for $\omega=2\pi\times 100\,$kHz. The key for understanding the occurrence of the dip for large positive scattering lengths and its absence for negative ones is the change of sign of the initial-state wave function as a consequence of the repulsive atom-atom interaction. In fact, in the trap-free case the position of this node agrees of course with the scattering length. As can be seen from Fig.~\ref{fig:dip}, the tight trap moves the nodal position to a smaller value, but this shift is comparatively small (about 5\,\%) even in the case of a 100\,kHz trap. For negative values of $a_{\rm sc}$ this node is absent, since in this case only the extrapolated wave function intersects the $R$ axis, and this occurs at the non-physical interatomic separation $R_{\rm x}=a_{\rm sc}<0$. As a result of the sign change occurring for $a_{\rm sc}>0$ the overlap of the initial-state wave function with a final state for which the mean position of the outermost lobe agrees with the nodal position ($R_{\rm x}$) vanishes. A perfect agreement of those two positions is of course rather unlikely, but as can be seen from Fig.~\ref{fig:PAtransmomRepuls}\,(a) and~\cite{cold:cote98b}, where also an approximation for $I^v(\omega=0)$ was derived, the cancellation can be very efficient. \begin{figure}\label{fig:PArateascvarDifAscN11} \end{figure} It should be emphasised that of course also for $a_{\rm sc}<0$ a number of dips occur, as was discussed in the context of Fig.\,\ref{fig:PAtransmomAttract}. The difference between those dips and the one discussed for $a_{\rm sc}\gg 0$ is the occurrence of the latter outside the molecular regime. While the other dips are a direct consequence of the short-range part of the atom-atom interaction potential and thus confined (for Li$_2$) to $v<55$, corresponding to $R<30\,a_0$, the dip occurring for $a_{\rm sc}\gg 0$ can be located outside the molecular regime. This is even more apparent from Fig.\,\ref{fig:PArateascvarDifAscN11} where the $I^v$ spectra for four different positive values of $a_{\rm sc}$ are shown together with the one for the (physical) value $a_{\rm sc}=-2030\,a_0$ (all for $\omega=2\,\pi\times 100\,$kHz). The values $a_{\rm sc}=+2020\,a_0$, $+350\,a_0$, $+115\,a_0$, and $+50\,a_0$ were obtained by a mass increase of $\sim 0.3$\%, $\sim 0.8$\%, $\sim 2$\%, and $\sim 6$\%, respectively. In agreement with the explanation given above, the position of the dip moves continuously to larger values of $v$ as the scattering length increases, since the position $R_{\rm x}$ of the last node of the initial state lies close to $a_{\rm sc}$. Also the positions of the other dips depend on $a_{\rm sc}$, but their dependence is much weaker and involves a much smaller $R$ interval. Clearly, the positions of the dips become more stable if they occur at smaller $v$.
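The connection between the outermost node and the dip position can be made quantitative in a simple way: locate the last sign change of the initial-state wave function outside the molecular regime and find the final level whose outermost lobe sits closest to it. A minimal sketch (placeholder arrays; the $R_{\rm min}$ cut is an illustrative choice):
\begin{verbatim}
# Sketch: predicting the photoassociation-window position for a_sc >> 0.
# psi_init is the initial-state wave function on the grid R and
# R_lobe[v] the (precomputed) outermost-lobe positions of the final
# states; all placeholders.
import numpy as np

def outermost_node(R, psi_init, R_min=30.0):
    """Last sign change of psi_init outside the molecular regime."""
    mask = R > R_min
    s = np.sign(psi_init[mask])
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return R[mask][idx[-1]] if idx.size else None

def predicted_dip(R_x, R_lobe):
    """Final level v whose outermost lobe lies closest to the node."""
    return int(np.argmin(np.abs(np.asarray(R_lobe) - R_x)))
\end{verbatim}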
Remarkably, the positions of the first 10 dips agree perfectly for $a_{\rm sc}=-2030$ and $+2020\,a_0$. In fact, both spectra are at first glance in almost perfect overall agreement, except for the occurrence of the additional dip for $v=92$. According to the discussion of the sum rule in Sec.\,\ref{sec:sumrule} the total sum $\tilde{I}$ should be (approximately) independent of the atomic interaction and thus of $a_{\rm sc}$. This is also confirmed numerically for the present examples. The insert of Fig.\,\ref{fig:PArateascvarDifAscN11} reveals how the sum rule is fulfilled. The transition probability missing due to the additional dip is compensated by an enhanced rate to the neighboring states with larger $v$. \begin{figure}\label{fig:PArateascvarDifAscN10} \end{figure} In all shown cases with $a_{\rm sc}>0$ there exist 11 bound states, in contrast to the 10 states of $^6$Li ($a_{\rm sc}=-2030\,a_0$). As mentioned in the beginning of this section, it is also possible to change the sign of $a_{\rm sc}$ while preserving the number of bound states. The corresponding $I^v$ spectra (again for $\omega=2\pi\times 100\,$kHz) are shown in Fig.\,\ref{fig:PArateascvarDifAscN10}. The same values of $a_{\rm sc}$ as in Fig.\,\ref{fig:PArateascvarDifAscN11} ($+2020\,a_0$, $+350\,a_0$, $+115\,a_0$, and $+50\,a_0$) are now obtained by a decrease of the mass by $\sim 18$\%, $\sim 17.5$\%, $\sim 16$\%, and $\sim 13$\%, respectively. A comparison of the two Figs.\,\ref{fig:PArateascvarDifAscN11} and \ref{fig:PArateascvarDifAscN10} demonstrates that the position of the outermost dip (for $a_{\rm sc}\gg 0$) depends for a given $\omega$ solely on $a_{\rm sc}$, while the other dips (in the molecular regime) differ when changing the total number of bound states from 10 to 11. A comparison of the results obtained for $a_{\rm sc}=-2030\,a_0$ and $+2020\,a_0$ with 10 bound states in both cases shows that most of the nodes in the molecular regime are shifted with respect to each other in such a way that the $v$ range hosting 10 dips for $a_{\rm sc}=-2030\,a_0$ contains 9 dips for $a_{\rm sc}=+2020\,a_0$. Turning back to Fig.\,\ref{fig:PAtransmomRepuls} and the question of the influence of a tight trap on the photoassociation rate for $a_{\rm sc}\gg 0$, one notices that the position of the additional dip appears to be practically independent of $\omega$. As was explained in the context of Fig.\,\ref{fig:dip}, the reason is that the position of the outermost node depends only weakly on $\omega$. For the shown example this shift is, even for a 100\,kHz trap, small compared to the separation of the outermost lobes of neighboring $v$ states. Therefore, the shift is not sufficient to move the dip position away from $v=92$. However, if $a_{\rm sc}$ is, e.\,g.\, increased to $+2020\,a_0$, the nodal position $R_{\rm x}$ shifts in a 100\,kHz trap to about $1500\,a_0$ and thus changes by $\approx 25\,\%$. In this case the dip position moves from $v=95$ to $94$. It is therefore important to take the effects of a tight trap into account, if tight traps are used for the determination of $a_{\rm sc}$ by photoassociation spectroscopy in the way discussed in~\cite{cold:cote95,cold:abra96}. In order to focus on the effect of the tight trap it is again of interest to consider the ratio $f^v(\omega)$ introduced in Sec.\,\ref{sec:ratio}. For small but positive values of $a_{\rm sc}$ the ratio $f^v$ is structurally very similar to the case $a_{\rm sc}=-2030\,a_0$ shown in Fig.\,\ref{fig:PAtransmomAttract}\,(b).
A uniform constant regime covering almost all $v$ states is followed by a sharp cut-off whose position shifts to smaller $v$ as $\omega$ increases. A similar behavior is encountered for $a_{\rm sc}=+850\,a_0$ and $\omega=2\pi\times 10\,$kHz, as shown in Fig.\,\ref{fig:PAtransmomRepuls}\,(b). However, for a tighter trap (100\,kHz) a new feature appears. In this case the relative enhancement at the dip position ($v=92$) is smaller than in the constant regime, but larger for the neighboring states. The enhancement factor for $v=92$ is only $\approx 25\,\%$ of $f_c$, while the one for $v=93$ is $\approx 60\,\%$ larger than $f_c$. This results in a dispersion-like structure in $f^v$. It should be emphasised that this is again remarkably different from the other dips in $I^v(\omega)$ ($v<55$) that show the same (constant) enhancement factor $f_c$ as their neighboring states. \subsection{Combined influence of trap and atomic interaction} \label{sec:combined} In view of the very important question of how the efficiency of photoassociation can be improved, Fig.\,\ref{fig:PArateascvarDifAscN11} reveals that besides the use of a tight trap a large scattering length is also favorable. The photoassociation rate (away from the dips) is enhanced by orders of magnitude, if $a_{\rm sc}$ varies from $a_{\rm sc}=+50\,a_0$ to $a_{\rm sc}=+2020\,a_0$! In view of the already discussed fact that the results for the overall spectrum $I^v$ differ for $a_{\rm sc}>0$ and $a_{\rm sc}<0$ only by the position of the dips, it is evident that photoassociation (or corresponding Raman transitions) is much more efficient, if $|a_{\rm sc}|$ is very large. In order to understand the dependence of the Franck-Condon (FC) factors of the vibrational final states on the scattering length it is instructive to look at the variation of the initial-state wave function with $a_{\rm sc}$ for large $R$ values. This is shown in Fig.~\ref{fig:AmplitudeDiffer} for $\displaystyle\omega = 2\pi\times\mbox{100\,kHz}$. \begin{figure}\label{fig:AmplitudeDiffer} \end{figure} While a large attractive interaction ($a_{\rm sc}\ll 0$) leads to a very confined wave function for the first trap-induced bound state, a large repulsive interaction ($a_{\rm sc}\gg 0$) does not only result in a node (responsible for the photoassociation window discussed above), but also pushes the outermost lobe to larger $R$ values. This push is of course counteracted by the confinement of the trap. However, only the highest lying final states probe the very large $R$ range. As is apparent from Fig.~\ref{fig:ClasTurnPoint}, the final states $v\le 92$ probe almost completely the range $R\le 1000\,a_0$. Within this $R$ interval the absolute value of the initial-state wave function increases with the absolute value of $a_{\rm sc}$. As a consequence, the corresponding FC factors and $I^v$ should increase with $|a_{\rm sc}|$. An exception to this is the already discussed occurrence of the photoassociation window (spectral dip) that occurs for a positive scattering length, if the position of the node is probed by the final-state wave function. Consequently, one expects for the low-lying final states (in fact for almost all except the very high-lying ones and the ones at the dip position) that an increase of $|a_{\rm sc}|$ leads to an increased photoassociation rate. An evident question is of course whether the enhancements due to the use of tighter traps and the tuning of $a_{\rm sc}$ can be combined in a constructive fashion.
In order to investigate this question, one can introduce another enhancement factor \begin{equation} g^v(\omega,a_{\rm sc}) \;=\; \frac{I^v(\omega,a_{\rm sc})} {I^v(\omega_{\rm ref},a_{\rm sc,ref})} \label{eq:g_v} \end{equation} with $a_{\rm sc,ref}=0\,a_0$ (and $\omega_{\rm ref} =2\pi\times 1\,$kHz as before). Clearly, a cut through $g^v(\omega,a_{\rm sc})$ for constant $a_{\rm sc}$ is equal to $f^v(\omega)$. A cut for constant $\omega$ describes on the other hand the relative enhancement of the photoassociation rate as a function of $a_{\rm sc}$. The function $g^v(\omega,a_{\rm sc})$ depends of course on the vibrational state $v$, but as was discussed before, most of the states show a constant enhancement factor $f_c$. Thus it is most interesting to investigate $g_c(\omega,a_{\rm sc})$, which is defined as the $g$ function for vibrational states for which the relation $f^v = f_c$ is valid. This excludes the states in the cut-off regime and those at or very close to the photoassociation window. \begin{figure}\label{fig:w1TOw100_ascM1970TOP2020} \end{figure} In Fig.~\ref{fig:w1TOw100_ascM1970TOP2020} $g_c(\omega,a_{\rm sc})$ is shown as a function of $a_{\rm sc}$ for different trap frequencies. The important finding is that $g_c(\omega,a_{\rm sc})$ increases as a function of $\omega$ and $|a_{\rm sc}|$. In fact, within the shown ranges of $\omega$ and $a_{\rm sc}$ the function $g_c(\omega,a_{\rm sc})$ rises by 6 orders of magnitude, if the maximum values of $\omega$ ($2\pi\times 100\,$kHz) and $a_{\rm sc}$ ($\pm 2000\,a_0$) are considered! A more detailed analysis shows that the enhancement is almost equally distributed among the two parameters, i.\,e.\ a factor $10^3$ stems from the variation of $\omega$ and about the same factor from varying $a_{\rm sc}$. Thus the enhancements of the photoassociation rate due to the two different physical parameters occur practically independently of each other, at least in the rather large parameter range considered. It should be emphasized that these ranges are realistically achievable in present-day experiments. It is interesting to note that this finding is not only very encouraging with respect to the possible enhancement of photoassociation rates and related molecule-production schemes, but it also shows that the influence of the scattering length ($a_{\rm sc}$) and the characteristic length scale of an isotropic harmonic trap ($\ds a_{\rm ho} = \sqrt{1/(\mu\omega)}$) on the photoassociation process is very different from the one observed for the energy. In energy-related discussions (like the one on the validity of the pseudopotential approximation in~\cite{cold:blum02}) it was found that the ratio $\displaystyle|a_{\rm sc}/a_{\rm ho}|$ determines the behavior. In the present case both parameters, and not only their ratio, are important. \section{Pseudopotential approximation} \label{sec:delta} The bound states of two atoms in a harmonic trap, when the atom-atom interaction $V_{\rm int}(R)$ is approximated by a regularized contact potential $\displaystyle \frac{4\pi}{2\mu} a_{\rm sc}\delta^3(\vec{R}) \frac{\partial}{\partial R}\,R$ with energy-independent scattering length $\ds a_{\rm sc}$, were first derived analytically by Busch {\sl et al.}~\cite{cold:busc98}.
The bound states with integer quantum number $\ds n_t$ are expressed as \begin{equation} \displaystyle \Psi^{n_t}_{a_{\rm sc}}(R) = \frac12\pi^{-3/2}A R e^{-\bar{R}^2/2}\Gamma (-\nu)U(-\nu,\frac32,\bar{R}^2)\; , \label{WF_pseudo} \end{equation} with $\bar{R}=R/a_{\rm ho}$ and the characteristic length scale $a_{\rm ho}$ of the harmonic trap introduced at the end of the previous section. $\ds A$ is a normalization constant having the dimension of the inverse of the square root of a volume (see below) and $\nu$ is an effective quantum number for the relative motional eigenstate, $\displaystyle\nu = \frac{E_{a_{\rm sc}}^{n_t}}{2\omega} - \frac34$. The energy eigenvalues are given by the roots of the equation \begin{equation} \displaystyle \frac{\Gamma(-x/2+3/4)}{\Gamma(-x/2+1/4)}=\frac{1}{\sqrt{2}\xi}\; , \label{eq:energy_pseudo} \end{equation} where $x=E_{a_{\rm sc}}^{n_t}/\omega$ and $\xi=a_{\rm sc}/a_{\rm ho}$. The initial-state wave function $\Psi^{\rm 10'}(R;\omega)$ of two $^6\mbox{Li}$ atoms interacting through the $a ^3\Sigma^+_u$ potential and the pseudopotential wave function $\displaystyle\Psi_{a_{\rm sc}}^0$ with the physical (trap-free) value of the scattering length $\ds a_{\rm sc}=-2030\, a_0$ are plotted together in Fig.~\ref{fig:PPWFandRWFw10} for the case of a trap frequency $\displaystyle\omega = 2\pi\times\mbox{10\,kHz}$. \begin{figure}\label{fig:PPWFandRWFw10} \end{figure} As expected, the wave function $\displaystyle\Psi_{a_{\rm sc}}^0$ fails completely for short internuclear separations, since it does not reproduce any nodal structure at all. In addition, $\displaystyle\Psi_{a_{\rm sc}}^0$ shows a wrong behavior at $R=0$, where it is non-zero. In the long-range part $\displaystyle\Psi_{a_{\rm sc}}^0$ agrees better with the correct wave function. There the main difference is an evident phase shift between the two functions. This phase shift is a consequence of the trap and vanishes in the absence of the trap ($\omega\rightarrow 0$). The physical reason for the phase shift is the non-zero ground-state energy in a trap (zero-point energy and motion) due to the Heisenberg uncertainty principle. As a consequence, the scattering of the two atoms in a trap differs from the trap-free case even at zero temperature. On the basis of an analysis of the energy spectrum of two atoms in a harmonic trap it was found that a pseudopotential approximation using an energy-dependent scattering length $\ds a_E$ leads to a highly improved description of two particles confined in an isotropic harmonic trap~\cite{cold:blum02,cold:bold03}. While the scattering length is defined in the limit $E\rightarrow 0$, an energy-dependent scattering length can be introduced by extending its original asymptotic definition in terms of the phase shift for s-wave scattering $\delta_0(E)$ to non-zero collision energies. This yields $\ds a_E = -\tan \delta_0(E)/k$ with $\ds k = \sqrt{2\mu E}$. Clearly, the evaluation of $\delta_0(E)$ requires solving the complete scattering problem, and thus also $\ds a_E$ can only be obtained from the knowledge of the solution for the correct atom-atom interaction potential. The values of $\ds a_E$ were obtained in the following way. After a determination of the ground-state energy of two $^6\mbox{Li}$ atoms from a full calculation (using the realistic interaction potential), this energy is used in Eq.\,(\ref{eq:energy_pseudo}) to find $\ds a_E$ (which is inserted in the equation in place of $\ds a_{\rm sc}$).
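Since Eq.\,(\ref{eq:energy_pseudo}) is explicit in $\xi$, this inversion requires no root search: given the exact energy, $\xi$ (and thus $a_E$) follows directly. A minimal sketch (atomic units, $\hbar=1$):
\begin{verbatim}
# Sketch: energy-dependent scattering length a_E from the exact
# ground-state energy E (full calculation) via Eq. (eq:energy_pseudo),
# solved for xi = a_E/a_ho.  Atomic units with hbar = 1.
import numpy as np
from scipy.special import gamma

def a_E(E, mu, omega):
    a_ho = np.sqrt(1.0 / (mu * omega))      # trap length scale
    x = E / omega                           # scaled energy
    xi = gamma(-x/2 + 0.25) / (np.sqrt(2.0) * gamma(-x/2 + 0.75))
    return xi * a_ho
\end{verbatim}
As a consistency check, for the non-interacting limit $E=\frac32\omega$ the expression correctly gives $a_E=0$, since the Gamma function in the denominator diverges there.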
More details about this so-called self-consistency approach are given in~\cite{cold:bold02}. In this way an energy-dependent scattering length $\ds a_E = -2872\,a_0$ is, e.\,g., found for two $^6$Li atoms in a trap with frequency $\displaystyle\omega = 2\pi\times\mbox{10\,kHz}$. The resulting wave function is also shown in Fig.~\ref{fig:PPWFandRWFw10}, together with the correct one and the one obtained for $\ds a_{\rm sc}=-2030\, a_0$. Clearly, the agreement with the correct wave function is very good for large $R$. For $R>150\,a_0$ the wave function obtained for $\ds a_E = -2872\,a_0$ is not distinguishable from the correct one. Only in the insert of Fig.~\ref{fig:PPWFandRWFw10}, which shows the wave functions at short internuclear separations, does one see a deviation. It is caused by the absence of any nodal structure and the wrong behavior for $R\rightarrow 0$ of the pseudopotential wave function. In fact, at short distances the introduction of an energy-dependent scattering length that corrects the phase shift leads to an even larger error compared to the use of $a_{\rm sc}$. The validity of the pseudopotential approximation using an energy-dependent scattering length has been discussed before. In~\cite{cold:blum02} it was found that the applicability of this approximation depends on the ratios $\displaystyle\beta_6 / a_{\rm ho}$ and $\displaystyle|a_{\rm sc} / a_{\rm ho}|$ where $\displaystyle\beta_6 = (2\mu C_6)^{1/4}$ is the characteristic length scale of the interaction potential in the case of a leading $C_6/R^6$ van der Waals potential. For two $^6\mbox{Li}$ atoms in a trap with $\displaystyle\omega = 2\pi\times\mbox{100\,kHz}$ that interact via the $\ds a^3\Sigma^+_u$ potential those ratios are 0.02 and 0.59, respectively. These validity criteria are, however, based solely on energy arguments. In other words, if those ratios are sufficiently smaller than 1, the energy obtained by means of Eq.\,(\ref{eq:energy_pseudo}) with $a_{\rm sc}$ should agree well with the correct one. In the present example of $^6\mbox{Li}$ the ratio between the correct first trap-induced energy $E^{10'}$ and $E^{n_t=0}_{a_{\rm sc}}$ obtained with the energy-independent pseudopotential is $E^{10'}/E^{0}_{a_{\rm sc}}=0.96$ for $\displaystyle\omega = 2\pi\times\mbox{10\,kHz}$ and $E^{10'}/E^{0}_{a_{\rm sc}}=0.92$ for $\displaystyle\omega = 2\pi\times\mbox{100\,kHz}$. By construction, the energy $E^{n_t=0}_{a_{E}}$ agrees of course completely with $E^{10'}$. In Fig.~\ref{fig:PAtransmomPPandRw10} the $I^v(\omega)$ spectrum obtained when using the pseudopotential approximation with energy-independent scattering length is compared to the spectrum obtained for the correct atom-atom interaction, both for a trap frequency $\displaystyle\omega = 2\pi\times\mbox{10\,kHz}$. \begin{figure}\label{fig:PAtransmomPPandRw10} \end{figure} The two results disagree completely for $v\leq 60$. For higher lying vibrational states ($v>60$) the agreement is reasonable. (Note, however, the logarithmic scale.) For the highest lying states ($v\ge 95$) very good agreement is found even on a linear scale (see insert of Fig.~\ref{fig:PAtransmomPPandRw10}). Adopting the energy-dependent scattering length yields quantitative agreement already for $v\ge 75$, but again a complete disagreement for $v\leq 60$.
The breakdown of the pseudopotential approximation (with energy-independent or -dependent scattering length) for describing photoassociation to the low-lying vibrational states is, of course, a direct consequence of the wrong short-range behavior of the pseudopotential wave functions (Fig.~\ref{fig:PPWFandRWFw10}). From the definition of $I^v(\omega)$ it follows that the pseudopotential approximation fails, if the final-state vibrational wave function has a substantial amplitude in the $R$ range in which the initial-state wave function is strongly influenced by the atom-atom interaction. An estimate for this $R$ range is (in the present case) the already discussed effective-range parameter $\displaystyle\beta_6 = (2\mu C_6)^{1/4}$. Since for large $v$ the final-state wave function is dominated by its outermost lobe, whose position is in turn close to the classical outer turning point $R_{\rm out}$, the pseudopotential approximation should be valid for $\ds R_{\rm out} > \beta_6$. In the case of $^6$Li one finds $\beta_6 = 62.5 \,a_0$. According to Fig.~\ref{fig:ClasTurnPoint} the pseudopotential approximation should thus be applicable for $v>70$. A recently performed photoassociation experiment for $ ^6 \mbox{Li}$ considered the transition to $\ds v=59$ \cite{cold:schl03}. For this specific example the pseudopotential approximation would predict a rate that is two times smaller than the one of the full calculation for $\displaystyle\omega = 2\pi\times\mbox{10\,kHz}$. The validity of the pseudopotential approximation for predicting the photoassociation rates to the high lying states can also be used to investigate whether the simulation of different scattering lengths by mass scaling is meaningful. This could be questionable, since a change of the mass modifies not only the scattering length (by moving the position of the least bound state), but also the kinetic-energy term. Thus it may be argued that the discussed influence of the scattering length on the photoassociation process could partly also be a consequence of the modification of the kinetic energy, at least in the case of a substantial mass variation as was required for preserving the number of bound states. Within the pseudopotential approximation the scattering length is, however, a parameter independent of the mass. Therefore, in contrast to the case of the full calculation, it is possible within the pseudopotential approximation to investigate the isolated influence of a variation of the scattering length (keeping the mass fixed). A corresponding analysis confirms that mass scaling can in fact be used to simulate a modified atom-atom interaction. The pseudopotential approximation was used already in~\cite{cold:deb03} for an analysis of the change of the photoassociation rate due to a scattering-length modification. The investigation concentrated, however, on very high lying vibrational states close to or even above the trap-free dissociation limit. Since for transitions to those states the $R$ dependence of the electronic transition dipole moment can safely be ignored, it is sufficient to concentrate on the FC factors. In Fig.~\ref{fig:FCascvar} the squares of these factors are shown as a function of the scattering length for $90\le v\le 98$ and trap frequency $\displaystyle\omega = 2\pi\times\mbox{100\,kHz}$.
\begin{figure}\label{fig:FCascvar} \end{figure} As in~\cite{cold:deb03} the pseudopotential approximation is used for the initial state, but here the final-state wave function is obtained by a full numerical calculation, while an approach based on quantum defect theory (QDT) was used in~\cite{cold:deb03}. Furthermore, Na$_2$ was considered in~\cite{cold:deb03}, while it is Li$_2$ in the present study. For the states $90\le v \le 93$ shown in Fig.~\ref{fig:FCascvar}(a) the dependence on $a_{\rm sc}$ in a 100\,kHz trap is very similar to the one found in~\cite{cold:deb03}. The rather regular variation with $a_{\rm sc}$ is due to the fact that the final-state wave function probes the flat part of the initial-state wave function, as can be seen in the insert of Fig.~\ref{fig:FCascvar}(a) where the wave function for $v=92$ is shown together with the initial-state wave function for three different values of $a_{\rm sc}$. The initial-state wave function varies almost linearly with $a_{\rm sc}$ in the Franck-Condon window of the $v=92$ final state. According to the discussion in Sec.\,\ref{sec:cutoffregim}, for a 100\,kHz trap the states $v\ge 90$ belong to the cut-off regime, but for $v\le 93$ the enhancement factor $f^v$ is still close to its value $f_c$ in the constant regime (see Figs.~\ref{fig:PAtransmomAttract} and~\ref{fig:PAtransmomRepuls}). The minima of the FC$^2$ factors for $a_{\rm sc}\gg 0$ are a consequence of the dip discussed in Sec.\,\ref{sec:positivesclen}. Since the nodal position $R_{\rm x}$ moves to larger $R$ if $a_{\rm sc}$ increases, the minimum in the FC$^2$ factors moves to a larger value of $a_{\rm sc}$ if $v$ increases. While the pseudopotential approximation is capable of predicting the existence of the dip occurring for $a_{\rm sc}\gg 0$, its position is not necessarily correctly reproduced in a trap. This is due to the fact that the pseudopotential overestimates the trap-induced shift of the position of the outermost node. For example, if the mass of Li is varied such that $a_{\rm sc} = + 850\,a_0$ is obtained, a 100\,kHz trap shifts $R_{\rm x}$ to $\approx +810\,a_0$ (Fig.\,\ref{fig:dip}) and the dip occurs at $v=92$ (Fig.\,\ref{fig:PAtransmomRepuls}). Using the pseudopotential approximation (with $a_{\rm sc}=+850\,a_0$) yields on the other hand $R_{\rm x}\approx +580\,a_0$ and the dip occurs for $v=90$. This error in the prediction of $R_{\rm x}$ increases with $a_{\rm sc}$. The final states $94\le v \le 98$, whose FC$^2$ factors are shown in Fig.~\ref{fig:FCascvar}(b), probe on the other hand the non-linear part of the initial-state wave function (close to the trap boundary). Consequently, the dependence on $a_{\rm sc}$ differs from the one found in~\cite{cold:deb03}. While for $90\le v \le 92$ the FC$^2$ factors are first decreasing and then increasing, if $a_{\rm sc}$ varies from $-6000\,a_0$ to $+6000\,a_0$, the ones of $93\le v \le 96$ are monotonically decreasing. For $v=97$ and 98 the FC$^2$ factors are on the other hand increasing with $a_{\rm sc}$. In view of the fact that the scattering length of a given atom pair may be known (for example from some measurement), while the corresponding atom-atom interaction potential is unknown, it is of course interesting to investigate whether the pseudopotential approximation allows one to predict the enhancement factor also in the constant regime, i.\,e.\ whether it correctly reproduces $f_c(\omega)$.
This would allow for a simple estimate of the effect of a tight trap on the photoassociation rate in the constant regime that covers most of the spectrum. In order to determine $f_c(\omega)$ it is sufficient to analyze the ratio of the initial-state wave function $\Psi_{a_{\rm sc}}^0$ for the trap frequencies $\omega$ and $\omega_{\rm ref}$. This comparison may be done at any arbitrary internuclear separation $R_{\rm lin}$, provided it belongs to the linear regime. The result is \begin{equation} \ds f_c^{\rm pseudo}({\omega}) = \left[\frac{\Psi^{0}_{a_{\rm sc}}(R_{\rm lin};\omega)} {\Psi^{0}_{a_{\rm sc}} (R_{\rm lin};\omega_{\rm ref})}\right]^2\; . \label{rule1a} \end{equation} A very special and simple choice that guarantees that $R_{\rm lin}$ belongs to the linear regime is $R_{\rm lin}=0$. With this value of $R_{\rm lin}$ one finds from the analysis of $\Psi^{0}_{a_{\rm sc}}$ \begin{equation} \ds f_c^{\rm pseudo}({\omega}) = \left[\frac{A({\omega})}{A({\omega_{\rm ref}})}\right]^2 \: \frac{\omega_{\rm ref}}{\omega}\; , \label{eq:rule1b} \end{equation} where $\ds A({\omega})$ is the normalization factor fulfilling $\displaystyle|A({\omega})|^2 \, =\,\sqrt{2\omega}\,\pi\,\xi^2 \,\frac{\partial E}{\partial\xi}$ ~\cite{cold:busc98}. Depending on the level of approximation one may use either $a_{\rm sc}$ or $a_E$ in the evaluation of $A$. An even simpler estimate of the influence of a tight harmonic trap on the photoassociation rate is obtained if the atom-atom interaction potential is completely ignored in the initial state. The harmonic-oscillator eigenfunctions at $R=R_{\rm lin}=0$ yield \begin{equation} f_c^{\rm ho}({\omega}) = \left(\, \frac{\omega}{\omega_{\rm ref}} \right)^{3/2} \; . \label{ruleharm} \end{equation} In Fig.~\ref{fig:PAratiowvar} the enhancement factors $f_c(\omega)$ calculated at the different levels of approximation are shown as a function of the trap frequency $\omega$. \begin{figure}\label{fig:PAratiowvar} \end{figure} The results obtained for $^6$Li and $^{39}$K are compared to each other. Recall that in the latter case the scattering length $a_{\rm sc}= +90\,a_0$ has a much smaller absolute value than for $^6$Li ($a_{\rm sc}=-2030\,a_0$). Consequently, one expects the atom-atom interaction to be less important. This is confirmed by (the inset of) Fig.~\ref{fig:PAratiowvar}. The results obtained for $f_c(\omega)$ with the aid of the different approximations discussed above are in very good agreement with the correct result in the case of $^{39}$K. Even the simple harmonic-oscillator model predicts the enhancement factor in the constant regime very accurately. It should be emphasized that the simplified approximations correctly predict the enhancement factor even though their prediction of the rates themselves is completely wrong (Fig.\,\ref{fig:PAtransmomPPandRw10}) in this constant regime (small $v$). In the case of a large absolute value of the scattering length (like for $^6$Li), i.\,e.\ for a strong atom-atom interaction, the frequency dependence of $f_c(\omega)$ predicted by the simplified models is, on the other hand, not very accurate. In fact, the simple harmonic-oscillator model clearly overestimates the enhancement factor for large $\omega$. The pseudopotential approximation yields much better results, especially if the energy-dependent scattering length $a_E$ is used. (As already mentioned, $a_E$ is, however, only available from the knowledge of the exact atom-atom interaction.) 
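As a simple numerical illustration of Eq.~(\ref{ruleharm}): for $\omega = 2\pi\times\mbox{100\,kHz}$ and a reference frequency $\omega_{\rm ref} = 2\pi\times\mbox{1\,kHz}$ one obtains $f_c^{\rm ho} = (100)^{3/2} = 10^{3}$, i.\,e.\ the harmonic-oscillator estimate alone already accounts for the enhancement by about 3 orders of magnitude quoted in Sec.~\ref{sec:discussion} for weakly interacting atoms in traps of this tightness.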
In view of the usefulness of Eq.~(\ref{eq:rule1b}) for obtaining an estimate of the enhancement factor $f_c(\omega)$, but the rather complicated procedure to calculate $\displaystyle\frac{\partial E}{\partial\xi}$ required for obtaining $\ds A({\omega})$, it is interesting to test whether $\ds A({\omega})$ can alternatively be evaluated from an expansion of the energy $\ds E$ at $\displaystyle\xi=0$. Using the relation $\displaystyle\frac{\partial x}{\partial \xi} = \left(\frac{\partial \xi}{\partial x}\right)^{-1}$ it is straightforward to determine with the aid of~(\ref{eq:energy_pseudo}) an expansion for the scaled energy \begin{equation} \ds x(\xi) = \frac32+\sum\limits_{n=0}^{\infty} \frac{1}{(n+1)!} \left. \frac{\partial^{(n)} F(x)}{\partial x^{(n)}} \right|_{x=3/2} \xi^{n+1} \, \label{eq:energy_expansion} \end{equation} with \begin{equation} \ds F(x) = -\frac{2\sqrt{2}\,\Gamma\Bigl[\frac34-\frac{x}2\Bigr]} {\Gamma\Bigl[\frac14-\frac{x}2\Bigr]\, \Bigl(\psi\Bigl[\frac14-\frac{x}2\Bigr]- \psi\Bigl[\frac34-\frac{x}2\Bigr]\Bigr)}\, \end{equation} and the digamma function $\psi$. The zero- and first-order terms of the expansion (\ref{eq:energy_expansion}) are given in~\cite{cold:busc98}. Using Eqs.~(\ref{eq:energy_expansion}) and~(\ref{eq:rule1b}), \begin{equation} \ds f_c^{\rm pseudo}({\omega}) =\frac{\sum\limits_{n=0}^{\infty} \frac{1}{n!} \left. \frac{\partial^{(n)} F(x)}{\partial x^{(n)}} \right|_{x=3/2} \xi^{n}}{\sum\limits_{n=0}^{\infty} \frac{1}{n!} \left. \frac{\partial^{(n)} F(x)}{\partial x^{(n)}} \right|_{x=3/2} \xi^{n}_{\rm ref}} \Bigl(\frac{\omega}{\omega_{\rm ref}}\Bigr)^{\frac32} \label{eq:rule1c} \end{equation} is obtained with $\displaystyle\xi_{\rm ref}=\frac{a_{\rm sc}}{a_{\rm ho, ref}}$. In Fig.~\ref{fig:simpleestim} the 4th-, 5th-, and 6th-order expansions are compared to the results obtained with the non-approximated term (all for $a_{\rm sc}=-2030 \, a_0$) and with the correct atom-atom interaction result. \begin{figure}\label{fig:simpleestim} \end{figure} Note that Eq.~(\ref{eq:rule1c}) can also be used for the evaluation of $g_c(\omega,a_{\rm sc})$, if in the denominator $\xi_{\rm ref}$ is replaced by $\displaystyle\tilde{\xi}_{\rm ref}=\frac{a_{\rm sc,ref}}{a_{\rm ho, ref}}$. \section{Discussion and Outlook} \label{sec:discussion} In this work the influence of a tight isotropic harmonic trap on the photoassociation process has been investigated for alkali atoms. It is found that for most of the states (the ones in the constant regime) there is an identical enhancement as the trap frequency increases. This enhancement can reach 3 orders of magnitude for trap frequencies of about 100\,kHz as reported in the literature. While the enhancement itself agrees at least qualitatively with the concept of confinement of the initial-state wave function, trap-induced suppression of photoassociation is also possible. In fact, as a simple sum rule confirms, any enhancement must be accompanied by suppression. The physical origin of this suppression is the trap-induced confinement of the initial-state wave function of relative motion within a radius that is smaller than the mean internuclear separation of the least bound vibrational states in the electronic target state. Since in the present calculation both initial and final states are exposed to the same harmonic trap, this result may appear surprising. While the explanation is based on the different long-range behaviors of the two involved electronic states, the effect itself may be very interesting in terms of, e.\,g., quantum information. 
Consider for example an optical lattice as trapping potential. The initial (unbound) atom pair is (for sufficient trap depths) located within a single lattice site (Mott insulator state). In the photoassociated state it could, however, reach into and thus communicate with the neighbor site, if the lattice parameters are appropriately chosen. Such a scenario could be used for a controlled logical operation (two-qubit gate) like the CNOT. Since the latter together with single-qubit gates forms a universal gate set, this could provide a starting point for a quantum computer. Alternatively, it may be interesting to use the fact that if a single spot with the dimension of the trap length $a_{\rm ho}$ or a specific site in an optical lattice can be addressed, then the atoms would only respond if they are in their (unbound) initial state. If they are in the photoassociated excited state, they would on the other hand be located outside the trap and thus would not respond. For this it is already sufficient if they are (predominantly) located in the classically forbidden regime. Also, by modifying the trap frequency it is possible to block the photoassociation process on demand. The trap frequency is then varied in such a fashion that a specific final state resonantly addressed with a laser with sufficiently small bandwidth belongs either to the constant or to the cut-off regime. A further important finding of this work is that the influence of a tight trap on the photoassociation spectra (as a function of the final vibrational state) for different alkali atoms is structurally very similar, independently of whether photoassociation starts from the singlet or triplet ground state. Also the type of interaction (strong or weak as well as repulsive or attractive) does not lead to a substantial modification of the trap influence. The only exception is a strong repulsive interaction that leads to a pronounced window in the photoassociation spectrum. The reason is the position of the last node in the initial-state wave function, which in this case is located at a relatively large value of $R$ and leads to a cancellation effect in the overlap with the final state. The nodal position depends strongly on $a_{\rm sc}$, but only for very tight traps also on $\omega$. As has been discussed previously~\cite{cold:cote98b,cold:abra96}, the position of the window may be used for a scattering-length determination. This will also work approximately for not too tight traps, but the trap influence has to be considered for very tight ones. Alternatively, the window provides a control facility, since the transition to a single state can be selectively suppressed. In very tight traps this effect is not only more pronounced, but in addition the transitions to the neighbor states are further enhanced. This could open up a new road to control in the context of the presently ongoing discussion of using femtosecond lasers for creating non-stationary wave packets in the electronic excited state~\cite{cold:salz06,cold:brow06,cold:koch06}. One of the problems encountered in this approach is the difficulty of shaping the wave packet, since the high-lying vibrational states that have a reasonable transition rate are energetically very closely spaced and thus the shape of the wave packet is determined by the Franck-Condon factors that cannot easily be manipulated but strongly increase as a function of $v$. 
In view of the question of how to enhance photoassociation or related association schemes (like Raman-based ones), the investigation of the enhancement factors $g^{v}(\omega,a_{\rm sc})$, especially their value in the constant regime ($g_c(\omega,a_{\rm sc})$), is most important. It shows that not only does increasing the tightness of the trap (enlarging $\omega$) lead to an enhancement of the photoassociation rate, but a similar effect can be achieved by increasing the interaction strength $|a_{\rm sc}|$. Most interestingly, these two enhancement factors act practically independently of each other, i.\,e.\ it is possible to use both effects in a constructive fashion and to obtain a multiplicative overall enhancement factor. For a 100\,kHz trap and a scattering length $|a_{\rm sc}|$ of the order of $2000\,a_0$ an enhancement factor (uniform for all states in the constant regime) of 5 to 6 orders of magnitude is found compared to the case of a shallow 1\,kHz trap and $|a_{\rm sc}|= 0$. A comparison of the results obtained for the correct atom-atom interaction potential with the ones obtained using the approximate pseudopotential approximation or ignoring the interaction altogether shows that these approximations yield a good estimate of the photoassociation rate only for the transitions to very high-lying vibrational states. Nevertheless, despite the complete failure of predicting the rates to low-lying states, these models allow one to determine the enhancement factor in the constant regime. For weakly interacting atoms (small $|a_{\rm sc}|$) already the pure harmonic-oscillator model (ignoring the atomic interaction) leads to a reasonable prediction of the trap-induced enhancement factor $f_c(\omega)$. It is important to stress that the results in this work were obtained for isotropic harmonic traps with the same trapping potential seen by both atoms. In this case center-of-mass and relative motion can be separated and in both coordinates an isotropic harmonic trap potential (with different trap lengths due to the different total and reduced masses) is encountered. As is discussed, e.\,g., in \cite{cold:bold03,cold:idzi05}, where a numerical and an analytical solution are respectively derived for the case of atoms interacting via a pseudopotential, a similar separation of center-of-mass and relative motion is possible for axially symmetric (cigar- or pancake-shaped) harmonic traps. In reality, the traps for alkali atoms are of course not strictly harmonic. Since the present work focuses, however, on the lowest trap-induced state of the initial atom pair, the harmonic approximation should in most cases be well justified. Independently of the exact way the trap is formed (for example by a far off-resonant focused Gaussian laser beam or by an optical lattice), the lowest trap-induced state usually agrees well with the one obtained in the harmonic approximation, if the zero-point energy is sufficiently small. This requirement sets, of course, an upper limit on the trap frequencies for which the harmonic approximation is applicable. If $\omega$ is too large, the atom pair sees the anharmonic part of the trap. (Clearly, the trap potential must also be sufficiently deep to support trap-induced bound states, i.\,e.\ to allow for Mott insulator states in the case of an optical lattice.) An additional problem arises from the anharmonicity of a real trap: the anharmonic terms lead to a non-separability of the relative and the center-of-mass motion. 
In fact, a recent work discusses the possibility of using this coupling of the two motions for the creation of molecules \cite{cold:bold05}. Again, a tighter trap is expected to lead to a stronger coupling and thus finally to a breakdown of the applicability of the harmonic model. For the final state of the considered photoassociation process there exists at first glance an even more severe complication. Usually, the two atoms will not feel the same trapping potential, since they populate different electronic states. In the case of traps whose action is related to the induced dipole moment (which is the case for optical potentials generated with the aid of lasers that are detuned from an atomic transition), the two atoms (in the case of Li the ones in the 2\,$^2$S and the 2\,$^2$P state) will in fact see potentials with opposite sign. If the laser traps the ground-state atoms, it repels the excited ones. In the alternative case of an extremely far-off-resonant trap the trapping potential is proportional to the dynamic polarizability of the atoms. In the long-wavelength limit (as is realized, e.\,g., in focused CO$_2$ lasers~\cite{cold:take95}) the dynamic polarizability approaches the static one, $\lim_{\lambda\rightarrow \infty} \alpha(\lambda) = \alpha_{\rm st}$. The static polarizabilities do not necessarily have opposite signs for the ground and the excited electronic state of an alkali atom, but in many cases different values. Then the trapping potentials for the initial and final states of the photoassociation process are different. The Li system appears to be a counterexample, since for $ ^6 \mbox{Li}_2$ the average polarizability of the $a ^3\Sigma^+_u$ (2s+2s) state is predicted to be equal to $\overline{\alpha}=\alpha_{zz}=\alpha_{xx}=2\alpha_0 (2s) = 2\times165= 330 \,a_0$. For the $1 ^3\Sigma^+_g$ (2s+2$p_z$) state one has $\alpha_{zz}=\alpha_0(2s)+\alpha_{zz}(2p_z)=285\, a_0$ and $\alpha_{xx}=\alpha_0(2s)+\alpha_{xx}(2p_z)=292 \, a_0$, yielding an average polarizability $\overline{\alpha} \approx 290 \,a_0$~\cite{cold:mera01}. Thus the trapping potentials are expected to be very similar. This is not the case for, e.\,g., $^{87}$Rb$_2$, where the average polarizability for the $a ^3\Sigma^+_u$ state is $670 \, a_0$ and for $1 ^3\Sigma^+_g$ it is $1698 \,a_0$~\cite{cold:magn02}. It was checked that the use of very different values of $\omega$ for determining the initial- and final-state wave functions does not influence the basic findings of the present work. The reason is simple. Besides the very least bound states (and of course the trap-induced ones) the final states are effectively protected by the long-range interatomic potential from seeing the trap. However, if the two atoms are exposed to different trap potentials, a separation of center-of-mass and relative motion is again not possible, even in the fully harmonic case (a fact that was, e.\,g., overlooked in~\cite{cold:koch05}). One would again expect that this coupling increases with the difference in the trap potentials of the two involved states. A detailed study of the consequences of the coupling of center-of-mass and relative motion due to various reasons like an anharmonicity of the trap or different trapping potentials seen by the involved atoms is presently underway. This also involves the case of the formation of heteronuclear alkali dimers, where besides the occurrence of this coupling for the initial state also a different long-range behavior of the interatomic potential has to be considered. 
Different interaction strengths occur naturally for different alkali atoms, as is well known and also evident from the explicit examples of $^6$Li, $^7$Li, and $^{39}$K that were discussed in this work. According to the findings of this work the choice of a proper atom pair (with large $|a_{\rm sc}|$) enhances the achievable photoassociation yield quite dramatically. Clearly, for practical reasons it is usually not easy to change the atomic species in an existing experiment, since the trap and cooling lasers are adapted to a specific one. In addition, the naturally existing alkali species provide only a fixed and limited number of interaction strengths. The tunability of the interaction strength on the basis of Feshbach resonances, especially magnetic ones, marked a very important cornerstone in the research area of ultracold atomic gases. The findings of the present work strongly suggest that this tunability could be used to improve the efficiency of photoassociation (and related) schemes. However, it has to be emphasized that it is not at all self-evident that the independence of the scattering-length variation from the trap-frequency variation, as it occurs in the model used in this work, is applicable to (magnetic) Feshbach resonances. Furthermore, the present work considered only the single-channel case, while the proper description of a magnetic Feshbach resonance requires a multi-channel treatment. Notably, a strong enhancement of the photoassociation rate by at least 2 orders of magnitude while scanning over a magnetic Feshbach resonance was predicted on the basis of a multichannel calculation for a specific $^{85}$Rb resonance already in~\cite{cold:abee98}. An experimental confirmation followed very shortly thereafter~\cite{cold:cour98}. The explanation for the enhancement given in~\cite{cold:abee98} is, however, based on an increased admixture of a bound-state contribution to the initial continuum state in the vicinity of the resonance. This is evidently different from the reason for the enhancement due to large values of $|a_{\rm sc}|$ discussed in the present work. An extension within the multichannel formalism that allows for a full treatment of magnetic Feshbach resonances is presently underway. \section*{Acknowledgments} The authors acknowledge financial support by the {\it Deutsche Forschungsgemeinschaft} within the SPP\,1116 (DFG-Sa\,936/1) and SFB\,450. AS is grateful to the {\it Stifterverband f\"ur die Deutsche Wissenschaft} (Programme {\it Forschungsdozenturen}) and the {\it Fonds der Chemischen Industrie} for financial support. This work is also supported by the European COST Programme D26/0002/02. \end{document}
Minimal Surfaces, But With Saddle Points
By Natasha Diederen, Alice Mehalek, Zhecheng Wang, and Olga Guțan

This week we worked on extending the results described here. We learned an array of new techniques and enhanced existing skills that we had developed the week(s) before. Here is some of the work we have accomplished since the last blog post.

One of the improvements we made was to create a tiling function which created an \( n^3 \) grid of our triply periodic surfaces, so that we were better able to visualise them as a structure. We started off with a surface inside a \( [-1, 1]^3 \) cube, and then imagined an \(n^3\) grid (pictured below). To make a duplicate of the surface in one of these grid cubes, we considered how much the surface would need to be translated in the \(x\), \(y\) and \(z\) directions. For example, to duplicate the original surface in the black cube into the green square, we would need to shift all the \(x\)-coordinates in the mesh by two (since the cube has length two in all directions) and leave the \(y\)- and \(z\)-coordinates unchanged. Similarly, to duplicate the original surface into the purple square, we would need to shift all the \(x\)-coordinates in the mesh by two, all the \(y\)-coordinates by two, and all the \(z\)-coordinates by \(2n\).

Figure 1. A visualization of the surface tiling.

To copy the surface \(n^3\) times into the right grid cubes, we need to find all the unique permutations of groups of three offsets chosen from \((0,2,4, \dots, 2n)\) and add them to the vertex matrix of the mesh. To update the face data, we add multiples of the number of vertices each time we duplicate into a new cube (i.e., the \(k\)-th copy of the surface uses faces \(F + k|V|\), where \(|V|\) is the number of vertices of the original mesh). With this function in place, we can now see our triply periodic surfaces morphing as a larger structure.

Figure 2. A 3x3x3 Tiling of the Surface.

A skill we continued developing, and something we have grown to enjoy, is what we affectionately call "blendering." To speak in technical terms, we use the open-source software Blender to generate triangle meshes that we then use as tests for our code. For context: Blender is a free and open-source 3D computer graphics software tool set used for a plethora of purposes, such as creating animated films, 3D printed models, motion graphics, virtual reality, and computer games. It includes many features and it, truly, has endless possibilities. We used one small component of it: mesh creation and mesh editing, but we look forward to perhaps experiencing more of its possibilities in the future.

We seek to create shapes that are non-manifold; mathematically, this means that there exist local regions in our surface that are not homeomorphic to an open subset of the Euclidean plane. In other words, non-manifold shapes contain junctions where more than two faces meet at an edge, or where more than two faces meet at a vertex without sharing an edge.

Figure 3. Manifold versus nonmanifold edges and vertices. Source.

This is intellectually intriguing to consider, because most standard geometry processing methods and techniques do not consider this class of shapes. As such, most algorithms and ideas need to be redesigned for non-manifold surfaces. Our Blender work consisted of a lot of trial-and-error. None of us had used Blender before, so the learning curve was steep. Yet, despite the occasional frustration, we persevered. 
With each hour worked, we increased our understanding and expertise, and in the end we were able to generate surfaces we were quite proud of. Most importantly, these triangle meshes have been valuable input for our algorithm and have helped us explore in more detail the differences between manifold and non-manifold surfaces.

Figure 4. Manifold and Nonmanifold Periodic Surfaces.

The new fellows joining this week came from a previous project on minimal surfaces led by Stephanie Wang, which used Houdini as a basis for generating minimal surfaces. Thus, we decided we could use Houdini to carry out some physical deformations, to see how non-manifold surfaces performed compared to similar manifold surfaces. We used a standard Houdini Vellum solver with some modifications to simulate a section of our surface falling under gravity. Below are some of the simulations we created.

Figure 5. A Nonmanifold and a Manifold Surface Experiencing Gravity.

Newton's Method

When we were running Pinkall and Polthier's algorithm on our surfaces, we noticed that the algorithm would not stop at a local saddle point such as the Schwarz P surface, but would instead run until there was only a thin strip of area left, which is indeed a minimum, but not a very useful one. Therefore, we switched to Newton's method to solve our optimization problem. We define the triangle surface area as an energy: let the three vertices of a triangle be \(\mathbf{q}_1\), \(\mathbf{q}_2\), \(\mathbf{q}_3\). Then \(E = \frac{1}{2} \|\mathbf{n}\| \), where \(\mathbf{n} = (\mathbf{q}_2-\mathbf{q}_1) \times (\mathbf{q}_3-\mathbf{q}_1)\) is the (unnormalized) surface normal, whose length equals twice the triangle's area. We then derive its Jacobian and Hessian, and construct the Jacobian and Hessian for all faces in the mesh. However, this optimization scheme still did not converge to the desired minimum, perhaps because our initialization was far from the solution. Additionally, one of our project mentors implemented the approach in C++ and, similarly, no results ensued. Later, we tried to add line search to Newton's method, but again had no luck. Although our algorithm still does not converge to some minimal surfaces which we know to exist, it has generated the following fun bugs.

In the previous blog post, we discussed studying the physical properties of nonmanifold TPMS (triply periodic minimal surfaces). Over the past week, we used the Vellum solver in Houdini and explored some of the differences in physical properties between manifold and nonmanifold surfaces. However, this is just the beginning — we can continue to expand our work in that direction. Additional goals may include writing a script to generate many possible initial conditions, then converting the initial conditions into minimal surfaces, either by using the Pinkall and Polthier algorithm, or by implementing another minimal-surface-generating algorithm. More work can be done on enumerating all of the possible nonmanifold structures that the Pinkall and Polthier algorithm generates. The researchers can then categorize the structures based on their geometric or physical properties. As mentioned last week, this is still an open problem.

We would like to thank our mentors Etienne Vouga, Nicholas Sharp, and Josh Vekhter for their patient guidance and the numerous hours they spent helping us debug our Matlab code, even when the answers were not immediately obvious to any of us. A special thanks goes to Stephanie Wang, who shared her Houdini expertise with us and, thus, made a lot of our visualizations possible. 
We would also like to thank our Teaching Assistant Erik Amezquita.

Robust computation of the Hausdorff distance between triangle meshes
Authors: Bryce Van Ross, Talant Talipov, Deniz Ozbay

The SGI project titled Robust computation of the Hausdorff distance between triangle meshes was originally planned for a 2-week duration and, due to common interest in continuing, was extended to 3 weeks. This research was led by mentor Dr. Leonardo Sacht of the Department of Mathematics of the Federal University of Santa Catarina (UFSC), Brazil. Accompanying support came from TA Erik Amezquita, and the project team consisted of SGI fellows and math (under)graduate students Deniz Ozbay, Talant Talipov, and Bryce Van Ross. Here is an introduction to the idea of our project. The following is a summary of our research and our experiences regarding computation of the Hausdorff distance.

Given two triangle meshes A, B in R³, the following are defined:

1-sided Hausdorff distance h: $$h(A, B) = \max\{d(x, B) : x\in A\} = \max\{\min\{\|x-y\|: y\in B\}: x\in A\}$$ where d is the Euclidean distance from a point to a set and \(\|x-y\|\) is the Euclidean norm. Note that h, in general, is not symmetric. In this sense, h differs from our intuitive, bidirectional sense of distance. So, h(B, A) can potentially be a smaller (or larger) distance in contrast to h(A, B). This motivates the need for an absolute Hausdorff distance:

$$H(A,B) = \max\{h(A, B), h(B, A)\}$$

By definition, H is symmetric. Again, by definition, H depends on h. Thus, to yield accurate distance values for H, we must be confident in our implementation and precision of computing h. For more mathematical explanation of the Hausdorff distance, please refer to this Wikipedia documentation and this YouTube video.

Objects are geometrically complex, and so too can their measurements be. There are different ways to compare meshes to each other via a range of geometry processing techniques and geometric properties. Distance is often a common metric of mesh comparison, but the conventional notion of distance is at times limited in scope. See Figures 1 and 2 below.

Figure 1: This distance is limited to the red vertices, ignoring other points of the triangles.

Figure 2: This distance ignores the spatial positions of the triangles. So, the distance is skewed to the points of the triangles, and not the contribution of the space between the triangles.

Figures from the Hausdorff distance between convex polygons.

Our research focuses on robustly (efficiently, for all possible cases) computing the Hausdorff distance h for pairs of triangular meshes of objects. The Hausdorff distance h is fundamentally a maximum distance among a set of desirable distances between 2 meshes. These desirable distances are the minimum distances from each point of the first mesh to the second mesh. Why is h significant? If h tends to 0, then this implies that our meshes, and the objects they represent, are very similar. Strong similarity indicates minimal change(s) from one mesh to the other, with possible dynamics being a slight deformation, rotation, translation, compression, stretch, etc. However, if h >> 0, then this implies that our meshes, and the objects they represent, are dissimilar. Weak similarity indicates significant change(s) from one mesh to the other, associated with the earlier dynamics. 
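To make the definitions concrete, here is a tiny brute-force, vertex-sampled approximation in MATLAB (our own illustration, not the project's algorithm; it only samples vertices, which is exactly the limitation shown in Figure 1, and it is far too coarse for robust use). Here VA and VB are assumed to be the vertex lists of the two meshes:

% Vertex-sampled approximation of h(A,B), h(B,A), and H(A,B).
% pdist2 (Statistics and Machine Learning Toolbox) returns the matrix of
% pairwise Euclidean distances between the rows of VA and the rows of VB.
D = pdist2(VA, VB);
h_AB = max(min(D, [], 2));  % for each vertex of A: nearest vertex of B; then max
h_BA = max(min(D, [], 1));  % for each vertex of B: nearest vertex of A; then max
H = max(h_AB, h_BA);

The branch-and-bound method described below replaces this naive sampling with certified lower and upper bounds, so that points in the interiors of triangles are accounted for.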
Intuitively, h depends on the strength of the correspondence between the two meshes, triangle to triangle. In summary, h serves as a means of calculating the similarity between triangle meshes by maximally separating the meshes according to the collection of all minimum pointwise-mesh distances.

The Hausdorff distance can be used for a variety of geometry processing purposes. Primary domains include computer vision, computer graphics, digital fabrication, 3D-printing, and modeling, among others. Consider computer vision, an area vital to image processing, having a multitude of technological applications in our daily lives. It is often desirable to identify a best-candidate target relative to some initial mesh template. In reference to the set of points within the template, the Hausdorff distance can be computed for each potential target. The target with the minimum Hausdorff distance would qualify as being the best fit, ideally being a close approximation to the template object. The Hausdorff distance plays a similar role relative to other areas of computer science, engineering, animation, etc. See Figures 3 and 4, below, for a general representation of h.

Figure 3: Hausdorff distance h corresponds to the dotted lined distance of the left image. In the right image, h is found in the black shaded region of the green triangle via the Branch and Bound Method. This image is found in Figure 1 of the initial reading provided by Dr. Leonardo Sacht.

Figure 4: Hausdorff distance h corresponds to the solid lined distance of the left image. This distance is from the furthest "leftmost" point of the first mesh (armadillo) to the closest "leftmost" point of the second mesh. This image is found in Figure 5 of the initial reading provided by Dr. Leonardo Sacht.

Branch and Bound Method

Our goal was to implement the branch-and-bound method for calculating H. The main idea is to calculate an individual upper bound of the Hausdorff distance for each triangle of mesh A and a common lower bound for the whole mesh A. If the upper bound of some triangle is smaller than the general lower bound, then this face is discarded (it cannot contain the point realizing the maximum); the remaining faces are subdivided. So, we have these 3 main steps:

1. Calculating the lower bound

We define the lower bound as the maximum over all the vertices of mesh A of the distance to mesh B. (Since h is a maximum over all points of A, the distance from any sampled point of A to B is a valid lower bound, and taking the maximum over the samples gives the tightest one.) Firstly, we choose a vertex P on mesh A. Secondly, we compute the distances from point P to all the faces of mesh B. The actual distance from point P to mesh B is the minimum of the distances that were calculated in the previous step. For more theoretical details you should check this blog post: http://summergeometry.org/sgi2021/finding-the-lower-bounds-of-the-hausdorff-distance/

The implementation of this part: calculating the distance from the point P to each triangle T of mesh B is a very complicated process, so I would rather describe it than show the code. The main feature that should be considered during this computation is the position of point P relative to the triangle T. For example, if the projection of point P onto the triangle's plane lies inside the triangle, then the distance from point P to the triangle is just the length of the corresponding normal vector. In the other cases it could be the distance to the closest edge or vertex of triangle T.

During testing this code our team faced a problem: calculating the point-triangle distances takes too much time. So, we came up with the bounding-sphere idea. 
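Purely as an illustration of the loop structure (not the team's actual code, which handles the full case analysis), the lower-bound computation might be sketched in MATLAB as follows; pointTriangleDistance is a hypothetical helper implementing the projection/edge/vertex cases described above:

% Lower bound for h(A,B): the largest vertex-of-A-to-mesh-B distance.
% pointTriangleDistance is a hypothetical exact point-triangle routine.
lowerBound = 0;
for v = 1:size(VA, 1)
    P = VA(v, :);
    dP = inf;                          % distance from P to mesh B
    for f = 1:size(FB, 1)
        T = VB(FB(f, :), :);           % 3x3 matrix: one triangle of B
        dP = min(dP, pointTriangleDistance(P, T));
    end
    lowerBound = max(lowerBound, dP);  % every d(P, B) is a valid lower bound
end

With |V_A| vertices and |F_B| faces this costs O(|V_A||F_B|) exact point-triangle tests, which is what motivated the speedup described next.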
Instead of computing the point-triangle distance we decided to compute the point-sphere distance, which is very simple. But what sphere should we pick? The first idea that sprang to our minds was the sphere generated by the circumscribed circle of the triangle T. But the computation of its center is also complicated. That is why we chose the center of mass M as the center of the sphere, with the maximal distance from M to the vertices of triangle T as its radius. So, if the distance from the point P to this sphere is greater than the current minimum, then the distance to the corresponding triangle is certainly not the minimum, and the exact computation can be skipped. This trick made our code work approximately 30% faster. A sketched reconstruction of this check is given at the end of this section.

2. Calculating the upper bounds

Overall, the upper bound is derived from the distances between the vertices and the triangle inequality. For more theoretical details you should check this blog post: http://summergeometry.org/sgi2021/upper-bound-for-the-hausdorff-distance/

While testing the code from this page on big meshes our team faced a memory problem: because we tried to store the lengths that we had already computed, it took too much memory. That is why we decided to just compute these lengths one more time, even though it takes a little more time (it is not crucial).

3. Discarding and subdividing

The faces are subdivided in the following way: we add the edge midpoints and the triangles that are generated by the previous vertices and these new points. In the end, we have 4 new faces instead of the old one. For more theoretical details you should check this blog post: http://summergeometry.org/sgi2021/branch-and-bound-method-for-calculating-hausdorff-distance/

Below are our results for two simple meshes, the first one being a sphere mesh and the second one being the simple mesh found in the blog post linked under the "Discarding and subdividing" section.

Figure 5: Results for a sphere mesh with different tolerance levels

Figure 6: Results for a smaller, simple mesh with different tolerance levels

The intuition behind how to determine the Hausdorff distance is relatively simple; however, the implementation of computing this distance isn't trivial. Among the 3 tasks of this Hausdorff distance algorithm (finding the lower bound, finding the upper bound, and finding a triangular subdivision routine), the latter two tasks were dependent on finding the lower bound. We at first thought that the subdivision routine would be the most complicated process, and the lower bound would be the easiest. We were very wrong: the lower bound was actually the most challenging aspect of our code. Finding vertex-vertex distances was the easiest aspect of this computation. Given that in MATLAB triangular meshes are represented as faces of vertex points, it is difficult to identify specific non-vertex points of some triangle. To account for this, we at first used computations dependent on finding a series of normals amongst points. This yielded a functional, albeit slow, algorithm. Upon comparison, the lower-bound computation was the cause of this latency and needed to be improved. At this point, it was a matter of finding a different implementation. It was fun brainstorming with each other about possible ways to do this. It was more fun to consider counterexamples to our initial ideas, and to proceed from there. 
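Here is the promised sketch of the bounding-sphere prefilter (again our own reconstruction, not the team's original listing): the inner loop of the lower-bound computation is modified so that the exact routine is only called when the sphere test cannot rule the triangle out.

% Distance from point P to mesh B, with a bounding-sphere prefilter.
% pointTriangleDistance is the same hypothetical helper as above.
dP = inf;
for f = 1:size(FB, 1)
    T = VB(FB(f, :), :);             % one triangle of B (3x3 matrix)
    M = mean(T, 1);                  % center of mass of the triangle
    r = max(vecnorm(T - M, 2, 2));   % sphere radius covering all 3 vertices
    if norm(P - M) - r >= dP         % triangle cannot beat the current minimum
        continue;                    % skip the expensive exact computation
    end
    dP = min(dP, pointTriangleDistance(P, T));
end

The test is conservative: norm(P - M) - r is itself a lower bound on the distance from P to any point of the triangle, so no true minimum is ever discarded by the shortcut.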
At a certain point, we incorporated geometric ideas (using the centroid of triangles) and topological ideas (using closed balls) to find a subset of relevant faces relative to some vertex of the first mesh, instead of having to consider all possible faces of the second mesh. Bryce's part was having to mathematically prove his argument for it to be correct, only to find out later that it would not be feasible to implement (oh well). Currently, we are trying to resolve bugs, but throughout the entire process we learned a lot, had fun working with each other, and gained a stronger understanding of techniques used within geometry processing. In conclusion, it was really fascinating to see the connection between the theoretical ideas and the implementation of an algorithm, especially how a theoretically simple algorithm can be very difficult to implement. We were able to learn more about the topological and geometric ideas behind the problem as well as the coding part of the project. We look forward to finding more robust ways to code our algorithm, and learning more about the mathematics behind this seemingly simple measure of the geometric difference between two meshes.

Wrapping Up SGI 2021
By Justin Solomon

After six weeks of intensive research and tutorials on applied geometry, we finally are ready to wrap up SGI 2021 and send our Fellows back to their home institutions. Directing this program has been one of the most rewarding experiences of my career, and it has been a pleasure seeing our students advance as scientists, mathematicians, and supportive community members. SGI's success is entirely thanks to a huge team of volunteers whose time and energy made the program possible. Below, I acknowledge the many colleagues who have participated in the planning, leadership, and day-to-day activities of SGI 2021. The sheer length of this list is an inspiring reflection of the supportive community we enjoy in geometry research.

To start, SGI 2021 was only possible with the support of our generous sponsors, whose donations allowed us to offer a stipend to each SGI Fellow commensurate with a summer internship:

Google Research ExploreCSR
The MathWorks, Inc.
US Army Research Lab, Mathematical Sciences Branch
MachineLearningApplications@CSAIL
Mosek ApS

SGI was organized over the previous year with guidance from colleagues at a variety of institutions worldwide. A Steering Committee provided opinions and advice throughout the planning process:

Prof. Mikhail Bessmeltsev, Université de Montréal
Prof. Edward Chien, Boston University
Prof. Keenan Crane, Carnegie Mellon University
Dr. Ilke Demir, Intel Corporation
Prof. Alec Jacobson, University of Toronto
Prof. Misha Kazhdan, Johns Hopkins University
Prof. Kathryn Leonard, Occidental College
Prof. Daniele Panozzo, New York University
Prof. Adriana Schulz, University of Washington
Prof. Alla Sheffer, University of British Columbia
Prof. Amir Vaxman, Utrecht University
Prof. Etienne Vouga, University of Texas at Austin

Within MIT, several faculty and staff members contributed substantial effort to make the program a success. My team in the Geometric Data Processing (GDP) Group provided advice and volunteer support, from feedback on how to structure the program to hand-packing 72 boxes to ship to our Fellows and volunteers; GDP admin Mieke Moran organized payments and many other key aspects that made the program run smoothly.
Our EECS Department Chair Prof. Asu Ozdaglar, AI+D Chair Prof. Antonio Torralba, and CSAIL Director Prof. Daniela Rus advocated for the program and provided support and encouragement as SGI obtained final approval within the MIT administration. CSAIL Assistant Director Carmen Finn provided critical last-minute help to make sure our Fellows were paid on time. Prof. Frédo Durand provided much-needed advice—and allowed me to vent—at several junctures.

SGI 2021 received far more applications than anticipated, and our final cadre of 34 Fellows and 29 additional tutorial week participants was painfully difficult to select. Our admissions committee carefully read all the applications:

Lingxiao Li, MIT
Silvia Sellán, University of Toronto
Dmitriy Smirnov, MIT
Dr. Oded Stein, MIT
Nicholas Vining, University of British Columbia
Paul Zhang, MIT

The first week of SGI centered around five days of tutorials to get our Fellows up to speed on geometry processing research. Each tutorial day was organized by a different volunteer, who took charge of the content for the entire day and generated valuable course materials we can reuse in the future:

Day 1: Dr. Oded Stein (MIT), basic techniques in geometry processing
Day 2: Hsueh-Ti (Derek) Liu (University of Toronto) and Jiayi Eris Zhang (University of Toronto and Stanford), shape deformation
Day 3: Silvia Sellán (University of Toronto), shape representations
Day 4: Michal Edelstein (Technion) and Abhishek Sharma (École Polytechnique), shape correspondence
Day 5: Prof. Amir Vaxman (Utrecht University), directional fields

The remaining five weeks of SGI included a host of 1-3 week research projects, each involving experienced mentors working closely with SGI Fellows. Our full list of projects and project mentors is as follows:

Dr. Itzik Ben-Shabat: Self-supervised normal estimation using shape implicit neural representation (August 16-August 27)
Prof. Mikhail Bessmeltsev and Prof. Ed Chien: Singularity-Free Frame Field Design for Line Drawing Vectorization (July 26-August 6)
Dr. Tolga Birdal and Prof. Nina Miolane: Uncertainty Aware 3D Multi-Way Matching via Soft Functional Maps (July 26-August 6)
Prof. David Bommes and Dr. Pierre-Alexandre Beaufort: Quadrilateral and hexahedral mesh optimization with locally injective maps (July 26-August 6)
Prof. Marcel Campen: Improved 2D Higher-Order Meshing (July 26-July 30)
Prof. Keenan Crane: Surface Parameterization via Intrinsic Triangulations (August 9-August 27)
Dr. Matheus Gadelha: Learning Classifiers of Parametric Implicit Functions (August 16-August 27)
Prof. Lin Gao and Jie Yang: Unsupervised partial symmetry detection for 3D models with geometric deep learning (August 16-August 27)
Christian Hafner and Prof. Bernd Bickel: Joints for Elastic Strips (August 9-August 13)
Yijiang Huang and Prof. Caitlin Mueller: Design optimization via shape morphing (August 16-August 27)
Dr. Xiangru Huang: Moving Object Detection from consecutive LiDAR Point Clouds (August 23-August 27)
Prof. Michael Kazhdan: Multigrid on meshes (July 26-July 30)
Prof. Paul Kry and Alexander Mercier-Aubin: Incompressible flow on meshes (July 26-August 6)
Prof. Kathryn Leonard: 2D shape complexity (July 26-July 30)
Prof. David Levin: Optimal Interlocking Parts via Implicit Shape Optimizations (July 26-August 6)
Angran Li, Kuanren Qian, and Prof. Yongjie Jessica Zhang: Geometric Modeling for Isogeometric Analysis with Engineering Applications (August 2-August 6)
David Palmer: Bayesian Rotation Synchronization (August 2-August 13); Planar-faced and other specialized hexahedral meshes (August 16-August 27)
Prof. Jorg Peters: (The beauty of) Semi-structured splines (August 9-August 13)
Alison Pouplin and Dimitris Kalatzis: Learning projection of hierarchical data with a variational auto-encoder onto the Klein disk (July 26-August 6)
Prof. Leonardo Sacht: Robust computation of the Hausdorff distance between triangle meshes (August 9-August 20)
Prof. Yusuf Sahillioglu: Cut optimization for parameterization (August 2-August 13)
Josua Sassen and Prof. Martin Rumpf: Mesh simplification driven by shell elasticity (August 9-August 20)
Dr. Nicholas Sharp, Prof. Etienne Vouga, Josh Vekhter: Nonmanifold Periodic Minimal Surfaces (August 9-August 27)
Dr. Tal Shnitzer-Dery: Self-similarity loss for shape descriptor learning in correspondence problems (August 9-August 13)
Dr. Ankita Shukla and Prof. Pavan Turaga: Role of orthogonality in deep representation learning for eco-conservation applications (August 9-August 13)
Prof. Noah Snavely: Reconstructing the Moon and Earth in 3D from the World's Photos (August 9-August 13)
Prof. Justin Solomon: Anisotropic Schrödinger Bridges (August 16-August 27)
Prof. Marco Tarini: Better Volume-encoded parametrizations (August 2-August 13)
Prof. Amir Vaxman: High-order directional field design (July 26-August 6)
Prof. Etienne Vouga: Differentiable Remeshing (July 26-August 6)
Dr. Stephanie Wang: Discrete Laplacian, area functional, and minimal surfaces (August 16-August 20)
Paul Zhang: Classifying hexahedral mesh singular vertices (July 26-August 6); Subdivision Surface Mesh Fitting (August 16-August 27)

An intrepid team of TAs helped our participants learn new topics, refined the tutorial activities, and supported the research projects:

Erik Amezquita, Michigan State University
Dr. Dena Bazazian, University of Bristol
Dr. Samir Chowdhury, Stanford University
Klara Mundilova, MIT
Nelson Nauata, Simon Fraser University
Peter Rock, University of Colorado Boulder
Ritesh Sharma, University of California Merced
Dr. Antonio Teran-Espinoza, Waymo/Google
Alberto Tono, Stanford University
Eric Zhang, Harvard University

Each week of SGI, we had multiple guest speakers drop by to share their research and experiences, and to give advice to the SGI Fellows:

Prof. Katia Bertoldi, Harvard: Multistable inflatable origami from deployable structures to robots
Prof. Michael Bronstein, Twitter/Imperial College London: Geometric Deep Learning: the Erlangen Programme of ML
Prof. Albert Chern, UCSD: Gauge Theory for Vector Field Design
Prof. Bruce Fischl, Harvard/MGH: Geometry and the Human Brain
Dr. Fernando de Goes, Pixar: Geometry Processing at Pixar
Prof. Rana Hanocka, University of Chicago: Deep Learning on Meshes
Prof. Alexandra Ion, Carnegie-Mellon University: Interactive Structures – Materials that can move, walk, compute
Prof. Chenfanfu Jiang, UCLA: Developments in smooth optimization contact
Prof. Theodore Kim, Yale University: Anti-Racist Graphics Research
Prof. Yaron Lipman, Weizmann Institute: Implicit Neural Representations
Prof. Mina Konaković Luković, MIT: Transforming Design and Fabrication with Computational Discovery
Prof. Lakshminarayanan Mahadevan, Harvard: Folding and cutting paper: art and science
Prof. Caitlin Mueller, MIT: Geometry of High-Performance Architecture
Prof. Matthias Niessner, TU Munich: Learning to Reconstruct 3D Scenes
Íñigo Quílez: Intro to SDFs and Examples
Dr. Elissa Ross, Metafold: Periodic geometry: from art to math and back again
Dr. Ryan Schmidt, Epic Games and Gradientspace: Geometry Processing in Practice
Prof. Tamar Shinar, UC Riverside: Partitioned solid-fluid coupling
Prof. Emily Whiting, Boston University: Mechanics-Based Design for Computational Fabrication

Last but not least, incoming MIT PhD student Leticia Mattos Da Silva organized a talk and panel discussion on the graduate school application process, including a Q&A with Silvia Sellán, Jiayi Eris Zhang, and Oded Stein.

The cast of thousands above is a testament to the dedication of the geometry research community to developing a diverse, energetic community of young researchers. SGI 2021 comes to a close as quietly as it began, as our Fellows and mentors close one final Zoom call and return to their lives scattered over the globe. In the months and years to come, we look forward to keeping in touch as our Fellows become colleagues, collaborators, and leaders of future generations of researchers.

Volume-encoded parameterization
By Alice Mehalek, Marcus Vidaurri, and Zhecheng Wang

UV mapping or UV parameterization is the process of mapping a 3D surface to a 2D plane. A UV map assigns every point on the surface to a point on the plane, so that a 2D texture can be applied to a 3D object. A simple example is the representation of the Earth as a 2D map. Because there is no perfect way to project the surface of a globe onto a plane, many different map projections have been made, each with different types and degrees of distortion. Some of these maps use "cuts" or discontinuities to fragment the Earth's surface.

In computer graphics, the typical method of UV mapping is to represent a 3D object as a triangle mesh and explicitly assign each vertex on the mesh to a point on the UV plane. This method is slow and must be repeated often, as the same texture map can't be used by different meshes of the same object. For any kind of parameterization or UV mapping, a good UV map must be injective and should be conformal (preserving angles) while having few cuts. Ideally it should also be isometric (preserving relative areas). In general, however, more cuts are needed to achieve less distortion.

Our research mentor, Marco Tarini, developed the method of volume-encoded UV mapping. In this method, a surface is represented by parametric functions and each point on the surface is mapped to a UV position as a function of its 3D position. This is done by enclosing the surface or portion of the surface in a unit cube, or voxel, and assigning UV coordinates to each of the cube's eight vertices. All other points on the surface can then be mapped by trilinear interpolation of the corner values. Volume-encoded parameterization has the advantage of only needing to store eight sets of UV coordinates per voxel, instead of the unique locations of many mesh vertices, and it can be applied to different types of surface representations, not just meshes.

We spent the first week of our research project learning about volume-encoded parameterization by exploring the 2D equivalent: mapping 2D curves to a one-dimensional line, u. Given a curve enclosed within a unit square, our task was to find the u-value for each corner of the square that optimized injectivity, conformality, and orthogonality.
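For intuition, the 2D forward map is just bilinear interpolation of the four corner values. A minimal MATLAB sketch (our own illustration; u00, u10, u01, and u11 denote the unknown corner values at (0,0), (1,0), (0,1), and (1,1)):

% Bilinear interpolation: map a point (x, y) in the unit square to u,
% given the u-values assigned to the four corners of the square.
u = @(x, y, u00, u10, u01, u11) ...
    (1 - x) .* (1 - y) * u00 + x .* (1 - y) * u10 + ...
    (1 - x) .* y * u01 + x .* y * u11;

Once the corner values are known, evaluating the map anywhere on the curve is a single expression like this; the hard part, described next, is choosing the corner values well.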
We found these corner values using a least-squares method, solving a linear system consisting of a set of constraints applied to points on the curve. All other points on the curve could then be mapped to u by bilinear interpolation.

A quarter circle (the red curve on the xy plane of the 3D plot) is mapped to a line, u, which is also represented as the height on the z axis in the 3D plot. The surface in the 3D plot is obtained by bilinear interpolation of the height of each corner of the square, and the red curve along the surface represents the path of the quarter circle mapped to 1D. Each point on the path has a unique height, indicating an injective mapping.

An injective mapping was not possible for portions of a circle greater than a semicircle. In that case, points on the path of u no longer have unique heights, indicating loss of injectivity. There is also distortion in the middle where the slope is variable.

In Week 2 of the project, we moved on to 3D and created UV maps for portions of a sphere, using a similar method to the 2D case. Some of the questions we wanted to answer were: For what types of surfaces is volume-encoded parameterization possible? At what level of shape complexity is it necessary to introduce cuts, or split up the surface into smaller voxels?

In the 2D case, we found that injectivity could be preserved when mapping curves less than half a circle, but there were overlaps for curves greater than a semicircle. One dimension up, in 3D space, we found that uniform sampling on the quarter sphere was challenging. Sampling uniformly in the 2D parametric space results in a distorted sampling that becomes denser closer to the north pole of the sphere. We tried three different approaches: rewriting the parametric equations, sampling in a unit sphere, and sampling in the 3D target space and then projecting the sample points back to the surface. Unfortunately, all three methods only worked with a certain parametric sphere equation.

When mapping a portion of a sphere to 2D, the grid allows us to see where the mapping is distorted. This mapping is most accurate at the north pole, while areas and angles both become distorted toward the equator.

In conclusion, over the two weeks, we designed and tested a volume-encoded least-squares conformal mapping in both 2D and 3D. In the future, we plan to rewrite the code in C++ and run more experiments.

When Algorithms Go Wrong
By Natasha Diederen, Sahra Yusuf, and Olga Guțan

An algorithm is a finite sequence of well-defined, computer-implementable instructions, typically designed to solve a class of specific problems or to perform a computation. Similarly to how we use our brains to (sometimes wrongly) categorize things in order to make sense of the world, a computer needs an algorithm to make sense of what the user is asking it to do. Since an algorithm is a way of communication between two vastly different entities — a human and a machine — some information gets lost along the way. The process is intellectually intriguing to witness; however, problems can arise when we use algorithms to make decisions that influence the lives of humans and other self-aware living beings. Algorithms are indispensable for efficiently processing data; therefore they will continue to be part of our programming ecosystem. However, we can (and must) keep some space in our minds for the additional nuances that reality contains.
While a programmer may not be able to fully convey these nuances to a computer, we must be aware that no algorithm's output is final, all-encompassing, and universally true. Furthermore, we must be mindful of the conclusions we draw from the output of algorithms.

II. Algorithms Gone Wrong

Broadly speaking, potential pitfalls for algorithms are manifold. The issues stem from the nature of algorithms — a communication tool between a human entity and a nonhuman entity (a computer). These problems morph into different real-life issues based on what types of algorithms we use and what we use them for. Even when playing with algorithms intended to solve toy problems that we already have the answers to, we can notice errors. However, in real life, data is (even) messier and the consequences are far larger.

In 2018, Elaine Herzberg was hit and killed by a self-driving taxi. She was jaywalking in the dark with a bicycle by her side, and the car alternated between classifying her as a person and as a bicycle, thus miscalculating her trajectory and not classifying Elaine as an obstruction. In this case, the safety driver was distracted by a television program, and thus not entirely blameless. However, this serves as an example of how our blind trust in the reliability of algorithms can often result in complacency, with far-reaching consequences.

A further example of error in algorithms is adversarial attacks on neural networks. This is when researchers (or hackers) feed a neural network a modified image, where changes made to the image are imperceptible to the human eye. Nevertheless, this can result in incorrect classification by neural networks, with high confidence to boot.

Figure 1. A demonstration of fast adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet. Source.

In Figure 1 we see how, by adding an imperceptibly small vector whose elements are equal to the sign of the elements of the gradient of the cost function with respect to the input, we can change GoogLeNet's classification of the image. Here the \(\epsilon\) of 0.007 corresponds to the magnitude of the smallest bit of an 8-bit image encoding after GoogLeNet's conversion to real numbers. Although researchers are working on making neural networks more robust to these sorts of attacks, susceptibility to adversarial attacks seems to be an inherent weakness of deep neural networks. This has serious security implications as computer vision increasingly relies on these models for facial recognition, autonomous vehicles, speech recognition and much more.

III. Geometry Processing with Imperfect Data

In geometry processing, there is often a need for refinement of geometric data, especially when the data is taken from "real" contexts. Examples of imperfections in 3D models include gaps, self-intersections, and non-manifold geometry, i.e., geometry which cannot exist in reality (e.g. edges included in more than two faces and disconnected vertices). One common source of imperfect, "real-life" data is 3D object scanning. The resulting models typically include gaps and large self-intersections as a result of incomplete scanning or other errors arising from the scanning method. Despite these significant problems, scanned data is still invaluable for rapidly generating assets for use in game development and other applications. During our time at the Summer Geometry Institute, Dr. Matthias Nießner spoke about 3D scene reconstruction with scanned data. 
His work demonstrated a method of improving the overall quality of reconstructed scenes using bundle adjustment and other techniques. He also worked on solving the problem of incomplete scanned objects using machine learning. Previously, we wrote about possible mistakes arising from the weaknesses of machine learning, but Dr. Nießner's work demonstrates that machine learning is also a valuable tool for refining data and eliminating mistakes.

Although error in geometry processing is not as frequently discussed, the implications are just as important as those of mistakes arising from the use of machine learning. This is primarily because machine learning and geometry processing are not isolated fields or tools, but are often used together, especially in the sorts of situations we described earlier. By researching and developing new methods of data refinement, we can improve the usability of natural data and increase, for example, the safety of systems which rely on visual data.

IV. Human Error and Bias

The errors we have discussed so far exclude human error and bias, which can aggravate existing inequalities in society. For example, a Fellow one of us worked with during SGI mentioned a project of his which used face tracking to animate digital characters. However, the state-of-the-art trackers only worked on him (a white male) and could not track eye or mouth movement for the Black members of his team. Additionally, as we heard from Theodore Kim in another brilliant SGI talk, animation is focused on designing white skin and non-type-4 hair, further highlighting the systemic racism present in society. Moreover, the fact that 94.8% of Automatic Gender Recognition (AGR) papers treat gender as binary has huge implications for the misgendering of trans people. This could lead to issues with AGR-based access control for things like public bathrooms, where transgender people may be excluded from both male and female facilities. The combination of machine and human error is especially dangerous, and it is important to recognize this so that we can mitigate the worst harm.

V. Conclusion

Algorithms have become a fundamental part of human existence; however, our blind faith that algorithms are always (1) correct and (2) unbiased is deeply flawed. The world is a complicated place, with far too much data for computers to handle, placing a strong reliance on simplification algorithms. As we have seen from the examples above, these algorithms are far from perfect and can sometimes erase or distort important data. This does not mean we should stop using algorithms entirely. It does mean, however, that we must employ a hearty dose of critical thinking and skepticism when analyzing results outputted by an algorithm. We must be especially careful if we use these results to make decisions that influence the lives of other humans.

Intrinsic Parameterization: Weeks 1-2

By Joana Portmann, Tal Rastopchin, and Sahra Yusuf. Mentored by Professor Keenan Crane.

Intrinsic parameterization

During these last two weeks, we explored intrinsic encoding of triangle meshes. As an introduction to this new topic, we wrote a very simple algorithm that lays out a triangle mesh flat. We then improved this algorithm via line search over the week.
In connection with this, we looked into terms like 'angle defects,' 'cotangent weights,' and the 'cotangent Laplacian' in preparation for more current research later in the week.

From intrinsic to extrinsic parameterization

To give a short introduction to intrinsic parameterization and its applications, I quote some sentences from Keenan's course. If you're interested in the subject, I can recommend the course notes "Geometry Processing with Intrinsic Triangulations."

"As triangle meshes are a basic representation for 3D geometry, playing the same central role as pixel arrays in image processing, even seemingly small shifts in the way we think about triangle meshes can have major consequences for a wide variety of applications. Instead of thinking about a triangle as a collection of vertex positions in \(\mathbb{R}^n\), from the intrinsic perspective it's a collection of edge lengths associated with edges."

Many properties of a surface, such as the shortest path, depend only on local measurements such as angles and distances along the surface, and do not depend on how the surface is embedded in space (e.g. vertex positions in \(\mathbb{R}^n\)); for these, an intrinsic representation of the mesh works fine. Intrinsic triangulations bring several deep ideas from topology and differential geometry into a discrete, computational setting. And the framework of intrinsic triangulations is particularly useful for improving the robustness of existing algorithms.

Laying out edge lengths in the plane

Our first task was to implement a simple algorithm that uses intrinsic edge lengths and a breadth-first search to flatten a triangle mesh onto the plane. The key idea driving this algorithm is that, given just a triangle's edge lengths, we can use the law of cosines to compute its internal angles. Given a triangle in the plane, we can use the internal angles to flatten out its neighbors into the plane. We will later use these angles to modify the edge lengths so that we "better" flatten the model. The algorithm works by choosing some root triangle and then performing a breadth-first traversal to flatten each of the adjacent triangles into the plane until we have visited every triangle in the mesh.

Breadth-first search pseudocode

Some initial geometry central pseudocode for this breadth-first search looks something like:

// Pick and flatten some starting triangle
Face rootTriangle = mesh->face(0);
// Calculate the root triangle's flattened vertex positions
calculateFlatVertices(rootTriangle);
// Initialize a map encoding visited faces
FaceData<bool> isVisited(*mesh, false);
// Initialize a queue for the BFS
std::queue<Face> visited;
// Mark the root triangle as visited and push it onto the queue
isVisited[rootTriangle] = true;
visited.push(rootTriangle);
// While our queue is not empty
while (!visited.empty()) {
  // Pop the current Face off the front of the queue
  Face currentFace = visited.front();
  visited.pop();
  // Visit the adjacent faces
  for (Face adjFace : currentFace.adjacentFaces()) {
    // If we have not already visited the face
    if (!isVisited[adjFace]) {
      // Calculate the triangle's flattened vertex positions
      calculateFlatVertices(adjFace);
      // Mark it as visited and push it onto the queue
      isVisited[adjFace] = true;
      visited.push(adjFace);
    }
  }
}

In order to make sure we lay down adjacent triangles with respect to the computed flattened plane coordinates of their parent triangle, we need to know exactly how a child triangle connects to its parent triangle.
Specifically, we need to know which edge is shared by the parent and child triangle, and which point belongs to the child triangle but not the parent triangle. One way we could retrieve this information is by computing the set difference between the vertices belonging to the parent triangle and the child triangle, all while carefully keeping track of vertex indices and edge orientation. This certainly works; however, it can be cumbersome to write a brute-force combinatorial helper method for each unique mesh element traversal routine.

The halfedge mesh data structure

Professor Keenan Crane explained that a popular mesh data structure that allows a scientist to conveniently implement mesh traversal routines is the halfedge mesh. At its core, a halfedge mesh encodes the connectivity information of a combinatorial surface by keeping track of a set of halfedges and the two connectivity functions known as twin and next. Here the set of halfedges are none other than the directed edges obtained from an oriented triangle mesh. The twin function takes a halfedge and brings it to its corresponding oppositely oriented twin halfedge. In this fashion, if we apply the twin function to some halfedge twice, we get the same initial halfedge back. The next function takes a halfedge and brings it to the next halfedge in the current triangle. In this fashion, if we apply the next function to a halfedge belonging to a triangle three times, we get the same initial halfedge back.

A diagram depicting the halfedge data structure connectivity relationships. Source.

Professor Keenan Crane has a well-written introduction to the halfedge data structure in section 2.5 of his course notes on Discrete Differential Geometry. It turns out that geometry central uses the halfedge mesh data structure, and so we can rewrite the traversal of the adjacent faces loop to more easily retrieve our desired connectivity information. In the geometry central implementation, every mesh element (vertex, edge, face, etc.) contains a reference to a halfedge, and vice versa.

// Get the face's halfedge
Halfedge currentHalfedge = currentFace.halfedge();
do {
  // Get the current adjacent face
  Face adjFace = currentHalfedge.twin().face();
  if (!isVisited[adjFace]) {
    // Retrieve our desired vertices
    Vertex a = currentHalfedge.vertex();
    Vertex b = currentHalfedge.next().vertex();
    Vertex c = currentHalfedge.twin().next().next().vertex();
    calculateFlatVertices(a, b, c);
    // Mark the face as visited and push it onto the queue
    isVisited[adjFace] = true;
    visited.push(adjFace);
  }
  // Iterate to the next halfedge
  currentHalfedge = currentHalfedge.next();
// Exit the loop when we reach our starting halfedge
} while (currentHalfedge != currentFace.halfedge());

Here's a diagram illustrating the relationship between the currentHalfedge and vertices a, b, and c.

A diagram illustrating the connectivity relationship between the currentHalfedge and vertices a, b, and c. Note that cH abbreviates currentHalfedge.

Segfaults, debugging, and ghost faces

This all looks great, right? So all that's left is to determine the specifics of calculating the flat vertices? Well, not exactly. When we ran a version of this code in which we attempted to visualize the resulting breadth-first traversal, we encountered several random segfaults. When Sahra ran a debugger (shout out to GDB and LLDB 🥰), we learned that the culprit was the isVisited[adjFace] function call. We were very confused as to why we would be getting a segfault while trying to retrieve the value corresponding to the key adjFace contained in the map FaceData<bool> isVisited.
Sahra inspected the contents of the adjFace object and observed that it had index 248, whereas the mesh we were testing the breadth-first search on only had 247 faces. Because C++ zero-indexes its elements, this means we somehow retrieved a face with an index out of range by 2! How could this have happened? How did we retrieve that face in the first place? Looking at the lines

Face adjFace = currentHalfedge.twin().face();
if (!isVisited[adjFace]) {

we realized that we had made an unsafe assumption about currentHalfedge. In particular, we assumed that it was not a boundary halfedge. What does the twin of a boundary halfedge that has no real twin look like? And if the issue we were running into was that currentHalfedge was a boundary halfedge, why didn't we get a segfault on simply currentHalfedge.twin()? Doing some research, we found that the geometry central internals documentation explains that

"We can implicitly encode the twin() relationship by storing twinned halfedges adjacent to one another– that is, the twin of an even-numbered halfedge numbered he is he+1, and the twin of an odd-numbered halfedge is he-1."

Geometry central internals documentation

Aha! This explains exactly why currentHalfedge.twin() still works on a boundary halfedge; behind the scenes, it is just adding or subtracting one to the halfedge's index. Where did the face index come from? We're still unsure, but we realized that the face currentHalfedge.twin().face() only makes sense (and hence can only be used as a key for the isVisited map) when currentHalfedge is not a boundary halfedge. Here is a diagram of the "ghost face" that we think the line Face adjFace = currentHalfedge.twin().face() was producing.

A diagram depicting how taking the face of the twin of a boundary halfedge produces a nonexistent face.

Changing the map access line in the if statement to if (!currentHalfedge.edge().isBoundary() && !isVisited[adjFace]) resolved the segfaults and produced a working breadth-first traversal.

Conformal parameterization

Here is a picture illustrating the application of the flattening algorithm to a model of a cat head.

A picture illustrating the application of the flattening algorithm to a model of a cat head.

You can see that there are many cracks, and this is because the model of the cat head is not flat—in particular, it has vertices with nonzero angle defect. The angle defect at a given vertex is equal to the difference between \(2\pi\) and the sum of the corner angles containing that vertex. This is a good measure of how flat a vertex is, because for a perfectly flat vertex all surrounding angles will sum to \(2\pi\).

After laying out the edges on the plane \(z=0\), we began the necessary steps to compute a conformal flattening (an angle-preserving parameterization). In order to complete this step, we needed to find a set of new edge lengths which would both be related to the original ones by a scale factor and minimize the angle defects: \(l_{ij} := \sqrt{\phi_i \phi_j}\, l_{ij}^0, \quad \forall ij \in E\), where \(l_{ij}\) is the new intrinsic edge length, \(\phi_i, \phi_j\) are the scale factors at vertices \(i, j\), and \(l_{ij}^0\) is the initial edge length.

Discrete Yamabe flow

At this stage, we have a clear goal: to optimize the scale factors in order to scale the edge lengths and minimize the angle defects across the mesh (i.e. fully flatten the mesh). From here, we use the discrete Yamabe flow to meet both of these requirements.
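Since everything in this flow is driven by the angle defects, and angle defects need only edge lengths, this is easy to prototype. Here is a minimal Python sketch (the function names and array layout are our own illustrative choices) that computes corner angles with the law of cosines and accumulates per-vertex defects:

import numpy as np

def corner_angle(a, b, c):
    # Interior angle opposite edge length a, by the law of cosines.
    return np.arccos(np.clip((b * b + c * c - a * a) / (2.0 * b * c), -1.0, 1.0))

def angle_defects(F, L, n_vertices):
    """Per-vertex angle defect: 2*pi minus the sum of incident corner angles.

    F: (m, 3) vertex indices per face.
    L: (m, 3) edge lengths, L[f][k] being the length of the edge opposite corner k.
    Note: boundary vertices should use pi instead of 2*pi; we only flatten
    interior vertices, so this sketch ignores that detail.
    """
    defects = np.full(n_vertices, 2.0 * np.pi)
    for f in range(len(F)):
        for k in range(3):
            defects[F[f][k]] -= corner_angle(L[f][k], L[f][(k + 1) % 3], L[f][(k + 2) % 3])
    return defects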
Before implementing this algorithm, we began by substituting the scale factors with their logarithms, \(l_{ij} = e^{(u_i + u_j)/2} l_{ij}^0\), where \(l_{ij}\) is the new intrinsic edge length, \(u_i, u_j\) are the log scale factors at vertices \(i, j\), and \(l_{ij}^0\) is the initial edge length. This ensures the new intrinsic edge lengths are always positive and that the optimization is convex.

To implement the algorithm, we followed this procedure:

1. Calculate the initial edge lengths.
2. While the largest angle defect is above a certain threshold epsilon:
   - Compute the scaled edge lengths.
   - Compute the current angle defects from the interior angles determined by the scaled edge lengths.
   - Update the scale factors using the step size and the angle defects: \(u_i \leftarrow u_i - h \Omega_i\), where \(u_i\) is the scale factor of the \(i\)th vertex, \(h\) is the gradient descent step size, and \(\Omega_i\) is the intrinsic angle defect at the \(i\)th vertex.

After running this algorithm and displaying the result, we found that we were able to obtain a perfect conformal flattening of the input mesh (there were no cracks!). There was one issue, however: we needed to manually choose a step size that would work well for the chosen epsilon value. Our next step was to extend our current algorithm by implementing a backtracking line search which would change the step size based on the energy.

Here are two videos demonstrating the Yamabe flow algorithm. The first video illustrates how each iteration of the flow improves the flattened mesh, and the second video illustrates how that translates into UV coordinates for texture mapping. We are really happy with how these turned out!

A video visualizing intermediate 2D parameterizations produced by the Yamabe flow.

A video visualizing the intermediate UV coordinates on the cat head mesh produced by the Yamabe flow.

Line search

To implement this, we added a sub-loop to our existing Yamabe flow procedure which repeatedly tests smaller step sizes until one is found which decreases the energy, i.e., a backtracking line search. A good resource on this topic is Stephen P. Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004. After resolving a bug in which our step size would stall at very small values with no progress, we were successful in implementing this line search. Now, a large step size could be given without missing the minima.

Next, we worked on using Newton's method to improve the descent direction, modifying the gradient so as to reach the minimum more directly. To complete this, we needed to calculate the Hessian — in this case, the Hessian of the discrete conformal energy is the cotan-Laplace matrix \(L\). This matrix is square (the number of rows and the number of columns equals the number of interior vertices) and has off-diagonal entries \(L_{ij} = -\frac{1}{2} (\cot \theta_k^{ij} + \cot \theta_k^{ji})\) for each edge \(ij\), as well as diagonal entries \(L_{ii} = -\sum_{ij} L_{ij}\), summing over the edges \(ij\) incident to the \(i\)th vertex.

Newton's descent algorithm is as follows:

1. Build the cotan-Laplace matrix \(L\) based on the above definitions.
2. Determine the descent direction \(\dot{u} \in \mathbb{R}^{|V_{\text{int}}|}\) by solving the system \(L \dot{u} = -\Omega\), with \(\Omega \in \mathbb{R}^{|V_{\text{int}}|}\) the vector containing all of the angle defects at interior vertices.
3. Run line search again, but with \(\dot{u}\) replacing \(-\Omega\) as the search direction.
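To make the overall structure concrete, here is a minimal Python sketch of the plain gradient-descent version of this loop, reusing angle_defects from the earlier sketch (the variable names and the fixed step size are our own simplifications; our real implementation adds the backtracking line search and the Newton solve described above):

import numpy as np

def yamabe_flow(L0, F, n_vertices, interior, h=0.5, eps=1e-8, max_iters=500):
    """Discrete Yamabe flow, gradient-descent flavor.

    L0: (m, 3) initial edge lengths per face; interior: interior vertex indices.
    Returns the log scale factors u and the final scaled edge lengths.
    """
    u = np.zeros(n_vertices)
    for _ in range(max_iters):
        # l_ij = exp((u_i + u_j) / 2) * l0_ij; edge k is opposite corner k,
        # so its endpoints are corners k+1 and k+2
        L = np.empty_like(L0)
        for f in range(len(F)):
            for k in range(3):
                i, j = F[f][(k + 1) % 3], F[f][(k + 2) % 3]
                L[f][k] = np.exp((u[i] + u[j]) / 2.0) * L0[f][k]
        omega = angle_defects(F, L, n_vertices)
        if np.max(np.abs(omega[interior])) < eps:
            break  # flat enough
        # Plain gradient step. The Newton variant instead solves
        # cotan_laplace(L) @ u_dot = -omega on the interior vertices
        # and line-searches along u_dot.
        u[interior] -= h * omega[interior]
    return u, L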
This method yielded a completely flat mesh with no cracks. Newton's method was also significantly faster: on one of our machines, Newton's method took 3 ms to compute a crack-free parameterization for the cat head model, while the original Yamabe flow implementation took 58 ms.

Branch-and-bound method for calculating Hausdorff distance

This week I worked on the "Robust computation of the Hausdorff distance between triangle meshes" project with our mentor Dr. Leonardo Sacht, TA Erik Amezquita, and my teammates Talant Talipov and Bryce Van Ross. We started our project by doing some initial reading about the topic.

The Hausdorff distance from triangle mesh A to triangle mesh B is defined as $$h(A,B) = \max_{p \in A} d(p,B),$$ where \(d(p,B)\) is the Euclidean distance from the point \(p\) to the closest point of \(B\). Finding the Hausdorff distance between two triangle meshes is one way of comparing these meshes. We note that the Hausdorff distance from A to B might be different from the Hausdorff distance from B to A, as you can see in the figures below, so it is important to distinguish which one is being computed.

Figure 1: Hausdorff distance from Mesh A to Mesh B
Figure 2: Hausdorff distance from Mesh B to Mesh A

Finally, we define $$H(A,B) = \max\{h(A,B), h(B,A)\}$$ and use this value, which is symmetric, when comparing triangle meshes A and B.

Our first task was to implement a branch-and-bound method for calculating this distance. Suppose we want to calculate the Hausdorff distance from mesh A to mesh B. There are three main steps in the algorithm: calculation of the upper and lower bounds for the Hausdorff distance, and discarding and subdividing triangles according to the values of the upper and lower bounds. The upper bound for each triangle in A is calculated by taking the maximum of the distances from the given triangle to every vertex in B. The lower bound is calculated over A by taking the minimum of the distances from each vertex p in A to the triangle in B that is closest to p. If a triangle in A has an upper bound that is less than the lower bound, the triangle is discarded. After the discarding process, the remaining triangles are subdivided into four triangles, and the process is repeated, with recalculation of the upper bounds and the lower bound. The algorithm runs until the values of the upper and lower bounds are within some ε of each other. Ultimately, we get a triangle region that contains the point that realizes the Hausdorff distance from A to B.

To implement this method, my teammates tackled the upper and lower bound code while I wrote an algorithm for the discarding and subdividing process (a simplified sketch of this step is included at the end of this post). We ran this algorithm with testing values u = [1;2;3;4;5] and l = 3 and got these results:

Figure 3: Initial mesh
Figure 4: After discarding and subdividing

As expected, the two triangles that had upper bounds of 1 and 2 were discarded, and the rest were subdivided. The lower bound algorithm turned out to be more challenging than we anticipated, and we worked together to come up with a robust method. Currently, we are working on finishing up this part of the code, so that we can run and test the whole algorithm. After this, we are looking forward to extending this project in different directions, whether that is a more theoretical analysis of the topic or improving our algorithm!
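As a small addendum, here is a rough Python sketch of the discard-and-subdivide step described above (our actual code is in MATLAB; the names here are illustrative, and the bounds are taken as given):

import numpy as np

def discard_and_subdivide(triangles, upper, lower):
    """One branch-and-bound pass.

    triangles: (m, 3, 3) array of triangle vertex coordinates in A;
    upper: (m,) per-triangle upper bounds; lower: scalar lower bound.
    Triangles whose upper bound is below the lower bound cannot contain the
    maximizer, so they are dropped; the rest are split 1-to-4 at edge midpoints.
    """
    kept = [t for t, ub in zip(triangles, upper) if ub >= lower]
    refined = []
    for a, b, c in kept:
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2  # edge midpoints
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(refined)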
Week 5 Talks Summary

Authors: Bryce Van Ross, Deniz Ozbay, Talant Talipov

The following is a summary of three Week 5 talks, presented on behalf of students in the MIT Summer Geometry Institute. These summaries were made by members of the Hausdorff distance team. Topics include: Partitioned Solid-Fluid Coupling, Graduate School Panel and Q&A, and Developments in Smooth Optimization Contact.

Partitioned Solid-Fluid Coupling

This talk was presented by Dr. Tamar Shinar of the University of California, Riverside. I liked this talk for several reasons. From an introductory physics background, we are aware that forces act on multiple objects. Depending on the type of object, certain forces behave differently with respect to the object, and we use certain equations to model this interaction. However, as the system of objects becomes mixed, our use of forces becomes more complex, and the modeling efforts must adapt accordingly. This lecture focused on how to model fluid-structure interaction (FSI), commonly referred to as solid-fluid coupling, via numerical algorithms, and discussed related applications. Applications typically include fluid dynamics or animation. As suggested by Dr. Shinar, more intuitive, concrete examples are modeling the flight of a bird in the air (Figure 1), or the movement of a hair flip relative to a medium of water (Figure 2).

Figure 1: A bird (solid) flying in the air (fluid) adjacent to water droplets (another fluid). Image from unsplash.com
Figure 2: A woman (solid) flipping her hair in a pool of water (fluid). Image from unsplash.com

Solid-fluid coupling models the dynamics of fluids and solids. Specifically, these dynamics are computed simultaneously, which can be problematic. The challenge in this field is to make it less so, while ensuring a faithful representation of the physical forces at work. In 1-way coupling, the solid (or fluid) affects the fluid (solid), but not vice versa. This is not the case for 2-way coupling, where both objects affect each other, imposing more constraints upon the system. To account for each object's phenomena, there exist solid and fluid handlers to model these phenomena, respectively. These solvers utilize a variety of equations and boundary conditions familiar to us from past math/physics courses, such as Dirichlet and Neumann conditions and the Navier-Stokes equations. Depending on the object and conditions, desirable properties are optimized. For solids, these would be mass and acceleration, whereas for fluids, these are position and velocity. The solution lies in the mixed solver, and in which numerical methods best model these behaviors. There are multiple options for optimizing coupled interaction. Partitioning the solver is a popular approach. In this approach, fixed-point iteration and Newton's method are strategies that solve the nonlinear system of equations, linearizing the 2-way coupling in terms of the solid and fluid. The partitioned methods tend to be more modular and easier to implement. In contrast, monolithic methods are more stable, have better performance, and handle closed regions better. In comparison to each other, their strengths are the other's weaknesses. Common risks are that iterations can be computationally expensive and/or that they won't converge at all. This is remedied by incorporating a reduced model interface (RMI), which serves as an intermediary lightweight solver between the solid and fluid.
In effect, the RMI helps estimate tricky Jacobians in a lightweight manner. Dr. Shinar's research specifically compares these numerical schemes with respect to an under-relaxation variable, which damps solution oscillations and aids stability. The smaller this value, the slower and less efficient the computation, in a tradeoff for increased stability. The takeaways here are that the partitioned approach in isolation fails to converge, while partitioned coupling with an RMI and low under-relaxation yields optimal convergence in the shortest time. There still remain certain bounded regions, like (incompressible) Dirichlet regions, which interfere with the dynamics of known solid-fluid coupling techniques, but this has yet to be explored. For more information regarding Dr. Shinar's research, please refer here.

An extended partitioned method for conservative solid-fluid coupling (video from YouTube)

Developments in Smooth Optimization Contact

Professor Chenfanfu Jiang's talk on "Developments in Smooth Optimization Contact" was really interesting for many reasons. Essentially, the problem they were trying to solve was "How to model contact as a smooth problem". This helps us accurately simulate solids, which allows us to create realistic animations, and benefits industry fields that use these simulations. For example, in the fashion industry, these simulations provide feedback for the design of a garment.

Firstly, Dr. Jiang presented different ways to approach the problem and the advantages and shortcomings of these approaches. Then he explained how they came up with a solution, which was really interesting as it combined various approaches to arrive at a better-working algorithm. The research problem was simple to understand, but the tools used to solve it were very advanced, which was also very intriguing. To solve this problem, three main tools were used: a nonlinear-optimization-based integrator that simulates large deformations with large steps, variational frictional contact, and its extension to higher dimensions.

To start with, the solution is an optimization-based time integrator for elastodynamics. It is based on incremental time steps, which create an entirely new problem at each time step. The ODE consists of an inertia term and an elasticity term – which makes the problem harder, as the elasticity term can be highly nonlinear and nonconvex. This simulates deformations based on squashing and stretching. For a more physically accurate solution, contact is added as a constraint. This constraint, in the form of an inequality, makes the differential equation even harder to solve. To overcome this challenge, different approaches were developed based on previous ways of solving the problem. The difficulty arises from the fact that the formulated constraints and ODEs are non-smooth and nonlinear; for example, when the distance between objects is zero, we have a non-smooth constraint. To overcome this, the problem is turned into a smooth and unconstrained one using the incremental potential contact method, which guarantees intersection-free and parameter-tuning-free simulations that are stable and accurate, and the barrier method, which turns the problem into an unconstrained one where Newton iteration can be applied. Dr. Jiang also discussed some methods that can improve their algorithm, such as clamping the barrier function, and some shortcomings, such as the difficulty of incorporating friction into the optimization problem.
Again, the challenges arise from the function becoming non-smooth, which he mentioned can be addressed by approximating the variational friction with a smooth function. The most intriguing part of the talk was how they were able to build on existing approaches to turn the problem into a smooth one. Although the techniques they used were very advanced, the idea behind the solution was simple to understand. The simulations (especially the twisting of the cloth) Dr. Jiang showed were also really interesting; it was fascinating to see the difference between two simulations of the same object, and how they become visibly more accurate as the constraints change.

Figure 3: Twisting of a cloth (image from https://ipc-sim.github.io/file/IPC-paper-350ppi.pdf)
Figure 4: Rod twist (image from https://ipc-sim.github.io/file/IPC-paper-350ppi.pdf)

Graduate School Panel

Last week I attended a talk about applications to PhD programs that was organized by Leticia Mattos Da Silva, Silvia Sellán, Jiayi Eris Zhang, and Oded Stein. One of the most exciting milestones for every student is applying to graduate programs. Many students are confused by this procedure: they do not know what to write in a motivation letter or how to contact a prospective referee. All of these questions were answered in this lecture.

The lecture was filled with useful information about every step of applying to universities, from motivation letters to the choice of a prospective scientific advisor. It was really interesting on the grounds that the lecturers had experienced this procedure firsthand. Moreover, the speakers showed their own essays as successful examples. As I learned from this talk, the motivation letter has a specific structure. The applicant has to mention their previous research experience, why they want to study in this particular program, and why the university should choose them over other applicants. Furthermore, there should not be any information that is not relevant to the program: if you are applying to a PhD program in mathematics, then you do not need to mention your background in knitting. The best moments were when the lecturer highlighted the most frequent mistakes of applicants, and I realized that I had made some of them in my own motivation letter.

The recommendation letters are one of the most significant parts of the application. That is why the applicant should connect with their referees in advance. The best choice would be a person who did some research with you. In addition, you should have backup referees in case the main ones decline. It was a very valuable experience to listen to this lecture!

Bayesian Rotation Synchronization

By Adrish Dey, Dorothy Najjuma Kamya, and Lily Kimble

Note: Although this is an ongoing work, this report documents our progress between the official 2 weeks of the project. (August 2, 2021 – August 13, 2021)

The past 2 weeks at SGI, we have been working with David Palmer on investigating a novel Bayesian approach to the angular synchronization problem. This blog post is written to document our work and share a sneak peek into our research.

Consider a set of unknown absolute orientations \(\{q_1, q_2, \ldots, q_n\}\) with respect to some fixed basis. The problem of angular synchronization deals with the accurate estimation of these orientations from noisy observations of their relative offsets \(O_{i,j}\), up to a constant additive phase.
We are particularly interested in estimating these "true" orientations in the presence of many outlier measurements. Our interest in this topic stems from the fact that the angular synchronization problem arises in various avenues of science, including reconstruction problems in computer vision, ranking problems, sensor network localization, structural biology, etc. In our work, we study this problem from a Bayesian perspective, by modelling the observed data as a mixture between noisy observations and outliers. We also investigate the problem of continuous label switching, a global ambiguity that arises from the lack of knowledge about the basis of the absolute orientations \(q_i\). Finally, we experiment with a novel Riemannian gradient descent method for alleviating this continuous label switching problem and provide our observations herein.

Brief Primer on Bayesian Inference

Before going deeper, we'll briefly discuss Bayesian inference. At the heart of Bayesian inference lies the celebrated Bayes' rule (read \(a|b\) as "a given b"):

\[\underbrace{P(q | O)}_{\textrm{posterior}} = \frac{\overbrace{P(O|q)}^{\textrm{likelihood}} \cdot \overbrace{P(q)}^{\textrm{prior}}}{\underbrace{\int_q P(O|q)\cdot P(q)\, dq}_{\textrm{evidence}}}\]

In our problem, \(q\) and \(O\) denote the true orientations that we are estimating and the noisy observations with outliers, respectively. We are interested in finding the posterior distribution (or at least samples from it) over the ground truth \(q\) given the noisy observations \(O\). The denominator (i.e., the evidence or partition function) is an integral over all \(q\)s. Exactly evaluating this integral is often intractable if \(q\) lies on some continuous manifold, as in our problem. This makes drawing samples from the posterior hard.

One way to avoid computing the partition function is a sampling method called Markov Chain Monte Carlo (MCMC). Intuitively, the posterior is approximated by a Markov chain whose transitions can be computed using a simpler distribution called the proposal distribution. Successive samples are then accepted or rejected based on an acceptance probability designed to ensure convergence to the posterior distribution in the limit of infinite samples. Simply put, after enough samples are drawn using MCMC, they will look like samples from the posterior \(P(q|O) \propto P(O|q) \cdot P(q)\), without requiring us to calculate the intractable normalization \(P(O) = \int_q P(O|q)\cdot P(q)\, dq\). In our work, we use Hamiltonian Monte Carlo (HMC), an efficient variant of MCMC which uses Hamiltonian dynamics to propose the next sample. From an implementation perspective, we use the built-in HMC sampler in Stan for drawing samples.

Mixture Model

As mentioned before, we model the noisy observations as a mixture of the true distribution and outliers. This is denoted by (Equation 1):

\[O_{i,j} = \begin{cases} q_i q_j^T + \eta_{i,j} & \textrm{with prob. } p \\ \textrm{Uniform}(\textrm{SO}(D)) & \textrm{with prob. } 1 - p \end{cases}\]

where \(\eta_{i,j}\) is the additive noise on our true observation, \(q_i q_j^T\) is the relative orientation between the \(i^\textrm{th}\) and \(j^\textrm{th}\) objects, and \(\textrm{Uniform}(\textrm{SO}(D))\) is the uniform distribution over the rotation group \(\textrm{SO}(D)\), representing our outlier distribution. \(\textrm{SO}(D)\) is the space in which every element represents a D-dimensional rotation. This mixture model serves as the likelihood \(P(O|q)\) for our Bayesian framework.
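To build intuition (and to have a ground truth for testing the sampler), we can generate synthetic observations directly from this mixture. Here is a minimal Python sketch specialized to \(\mathrm{SO}(2)\); the Gaussian-angle noise model and all names are our own illustrative choices:

import numpy as np

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def sample_observations(thetas, p=0.8, sigma=0.05, seed=0):
    """Synthetic O_ij from the mixture in Equation 1, for SO(2).

    thetas: angles of the true absolute orientations q_i.
    With probability p: q_i q_j^T perturbed by a small noise rotation;
    with probability 1 - p: a uniform outlier rotation.
    """
    rng = np.random.default_rng(seed)
    n, O = len(thetas), {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                O[i, j] = rot2(thetas[i]) @ rot2(thetas[j]).T @ rot2(rng.normal(0.0, sigma))
            else:
                O[i, j] = rot2(rng.uniform(0.0, 2.0 * np.pi))  # outlier
    return O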
Sampled Result

(Figure: ground-truth samples of \(q_i\) next to estimates \(\hat{q}_i \sim P(q|O)\) drawn from our posterior.)

The orientations \(\hat{q}_i\) sampled from the posterior look significantly rotated with respect to the original samples. Notice that this is a global rotation, since all the samples are rotated equally. This global ambiguity in the absolute orientations \(q_i\) arises from the fact that the relative orientations of two different sets of vectors can be the same even if the absolute orientations are different. The following section goes over this and provides a sneak peek into our solution for alleviating this problem.

Continuous Label Switching

A careful look at our problem formulation (Equation 1) reveals that the problem is invariant to transformations of the absolute orientations as long as the relative orientations are preserved. Consider the 2 pairs of observations in the figure below (blue and red; yellow and green). Let the absolute orientations be \(q_1\), \(q_2\), \(\tilde{q}_1\), and \(\tilde{q}_2\), and the relative orientations between the pairs be \(R_{12}\) and \(\tilde{R}_{12}\). As the absolute orientations \(q_1\) and \(q_2\) are equally rotated by a rotation matrix \(A\), the relative orientation \(R_{12}\) between them is preserved. More formally, let \(A \in \textrm{SO}(D)\) be a random orientation matrix in D dimensions. The following equation demonstrates how rotating two absolute orientations \(q_i\) and \(q_j\) by a rotation matrix \(A\) preserves the relative orientation, which in turn gives rise to a global ambiguity in our framework:

\[R_{ij} = q_i q_j^T = q_i A A^T q_j^T = (q_i A)(q_j A)^T = \tilde{q}_i \tilde{q}_j^T\]

Since the inputs to our model are relative orientations, this ambiguity (known as label switching) causes our Bayesian estimates to come back randomly rotated by some rotation \(A\).

Proposed Solution

Based on Monteiller et al., in this project we explored a novel solution for alleviating this problem. The intuition is that the unknown ground truth is close to the posterior samples up to a global rotation. Hence we try to approximate the ground truth by starting out with a random guess and optimizing for the alignment map between the estimate and the posterior samples. Using this alignment map and the posterior samples, we iteratively update the guess, using a custom Riemannian stochastic gradient descent over \(\textrm{SO}(D)\):

1. Start with a random guess \(\mu \sim \mathrm{Uniform}(\mathrm{SO}(D))\).
2. Sample \(\hat{q} \sim P(q | O)\), where \(P(q | O)\) is the posterior.
3. Find the global ambiguity \(R\) between \(\hat{q}\) and \(\mu\). This can be obtained by solving \(\mathrm{argmin}_R \, \| \mu - R \hat{q}\|_\mathrm{F}\).
4. Move \(\mu\) along the shortest geodesic toward the aligned sample \(R \hat{q}\).
5. Repeat steps 2-4 until convergence. Convergence is detected by a threshold on the geodesic distance.

We use this method to estimate the mean of the posterior over \(\mathrm{SO}(2)\) and plot the results (i.e. 2D orientations) in the complex plane, as shown below.

(Figures: the original samples, the sampled posterior, and the optimized mean posterior.)

The proposed optimization procedure successfully re-aligns the posterior samples, alleviating the continuous label switching problem.

In conclusion, we study the rotation synchronization problem from a Bayesian perspective. We explore a custom Riemannian gradient descent procedure and perform experiments in the \(\mathrm{SO}(2)\) case.
The current method is tested on a simple toy dataset. As future work, we are interested in improving our Bayesian model and benchmarking it against the current state of the art. There are certain performance bottlenecks in our current architecture which constrain us to test only on \(\mathrm{SO}(2)\). In the future, we are also interested in carrying out experiments more thoroughly in various dimensions. While the current MCMC procedure we are using does not account for the non-Euclidean geometry of the space of orientations, \(\mathrm{SO}(D)\), we are looking into replacing it with Riemannian versions of MCMC.

Minimal Surfaces, But Periodic

By Zhecheng Wang, Zeltzyn Guadalupe Montes Rosales, and Olga Guțan

Note: This post describes work that has occurred between August 9 and August 20. The project will continue for a third week; more details to come.

I. Introduction

For the past two weeks we had the pleasure of working with Nicholas Sharp, Etienne Vouga, Josh Vekhter, and Erik Amezquita. We learned about a special type of minimal surface: triply-periodic minimal surfaces. Their name stems from their repeating pattern. Broadly speaking, a minimal surface minimizes its surface area. This is equivalent to having zero mean curvature. A triply-periodic minimal surface (TPMS) is a surface in \(\mathbb{R}^{3}\) that is invariant under a rank-3 lattice of translations.

Figure 1. (Left) A minimal surface [source] and (right) a TPMS [source].

Let's talk about nonmanifold surfaces. "Manifold" is a geometry term meaning that every local region of the surface looks like the plane (more formally, it is homeomorphic to a subset of Euclidean space). "Nonmanifold" then allows for parts of the surface that do not look like the plane, such as T-junctions. Within the context of triangle meshes, a nonmanifold surface is a surface where more than two faces share an edge.

II. What We Did

First, we read and studied the 1993 paper by Pinkall and Polthier that describes the algorithm for generating minimal surfaces. Our next goal was to generate minimal surfaces. Initially, we used pinned (Dirichlet) boundary conditions and regular manifold shapes. After ensuring that our code worked on manifold surfaces, we tested it on nonmanifold input. Additionally, our team members learned how to use Blender. It has been a very enjoyable process, and the work was deeply satisfying, because of the embedded mathematical ideas intertwined with the artistic components.

III. Reading the Pinkall Paper

"Computing Discrete Minimal Surfaces and Their Conjugates," by Ulrich Pinkall and Konrad Polthier, is the classic paper on this subject; it introduces the iterative scheme we used to find minimal surfaces. Reading this paper was our first step in this project. The algorithm that finds a discrete locally area-minimizing surface is as follows:

1. Take the initial surface \(M_0\) with boundary \(\partial M_0\) as the first approximation of M. Set i to 0.
2. Compute the next surface \(M_{i+1}\) by solving the linear Dirichlet problem $$\min_{M} \frac{1}{2}\int_{M_{i}}|\nabla (f:M_{i} \to M)|^{2}$$
3. Set i to i+1 and continue with step 2.

The stopping condition is \(|\text{area}(M_i)-\text{area}(M_{i+1})|<\epsilon\). In our case, we used a maximum number of iterations, set by the user, as a stopping condition.
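In code, one step of this iteration boils down to a linear solve against the cotangent Laplacian of the current surface. Here is a minimal Python sketch of the pinned-boundary version using libigl and SciPy (the names are illustrative; the periodic version replaces the pinning with the constraints described in section IV):

import numpy as np
import igl
import scipy.sparse.linalg as spla

def pinkall_polthier_step(V, F, pinned, pinned_pos):
    """One iteration: minimize the Dirichlet energy of the map from the
    current surface M_i, with the pinned vertices held fixed.

    V, F: current vertex positions and faces; pinned: pinned vertex indices;
    pinned_pos: their prescribed positions.
    """
    n = V.shape[0]
    L = igl.cotmatrix(V, F)  # cotangent Laplacian of the *current* surface
    interior = np.setdiff1d(np.arange(n), pinned)
    # Minimizing the Dirichlet energy makes each coordinate harmonic:
    # solve L_II x_I = -L_IB x_B
    A = L[interior][:, interior].tocsc()
    rhs = -L[interior][:, pinned] @ pinned_pos
    V_next = V.copy()
    V_next[pinned] = pinned_pos
    V_next[interior] = spla.spsolve(A, rhs)
    return V_next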
There are additional subtleties that must be considered (such as "what to do with degenerate triangles?"), but since we did not implement them, their discussion is beyond the scope of this post.

IV. Adding Periodic Boundary Conditions

This is, at its core, an optimization problem. To ensure that the optimization works, the boundary conditions have to be periodic instead of fixed in space. This is because we are enforcing a set of boundary conditions on periodic shapes — that is, shapes tiling 3D space.

IV(a). Matching Vertices

First, we check every pair of vertices in the mesh, looking for pairs that have identical coordinates in two dimensions but are separated by exactly two units in the third dimension. When we find such a pair of vertices, we classify it into \(G_x\), \(G_y\), or \(G_z\). Note that we only store unique pairs.

IV(b). Laplacian Smoothness at the Boundary Vertices

Instead of using the discrete Laplacian directly, we introduce a sparse matrix K to adjust our smoothness term based on the new boundary: $$\min_{x} x^T(L^TK^TKL)x \text{ s.t. } x[b] = x_0[b].$$ Next, we construct the matrix K, a sparse square matrix of dimension #vertices by #vertices. For every pair of unique matched boundary vertices \((i, j)\), we set \(K(i,i) = 1\) and \(K(i,j) = 1\), and set the \(j\)th row entries to 0. For every interior vertex \(i\), we set \(K(i,i) = 1\).

IV(c). Aligning Boundary Vertices

Now we no longer want to pin boundary vertices to their original locations in space. Instead, we want to allow our vertices to move while the opposite sides of the boundary still match. To do that, we need to adjust the existing constraint term and include additional linear constraints \(Ax=b\). Therefore, we add two sets of linear constraints to our linear system:

- For any pair of boundary vertices separated by 2 units in one direction, the new coordinates should differ by 2 units.
- For any pair of boundary vertices matched in the two other directions, the new coordinates should differ by 0.

We construct a selection matrix \(A\), a sparse matrix with one row per pair of boundary vertices in \(G\) and one column per vertex, to get the difference between any pair of boundary vertices: for the \(r\)th row, \(A(r,i)=1\) and \(A(r,j)=-1\). Then we construct three \(b\) vectors, one per coordinate direction, each of length equal to the number of pairs of boundary vertices in \(G\). Depending on whether we are working with \(G_x\), \(G_y\), or \(G_z\), we enter \(2\) for the pairs matched in that direction and \(0\) for the rest.

V. Correct Outcomes

Below, we can see the algorithm being correctly implemented. Each video represents a different mesh.

VI. Aesthetically Pleasing Bugs

Nothing is perfect, and coding in Matlab is no exception. We went through many iterations of our code before we got a functional version. Below are some examples of cool-looking bugs we encountered along the way, while testing the code on (what has become) one of our favorite shapes. Each video represents a different bug applied onto the same mesh.

VII. Conclusion and Future Work

Further work may include studying the physical properties of nonmanifold TPMS. It may also include additional basic structural simulations for physical properties, and establishing a comparison between the results for nonmanifold surfaces and the existing results for manifold surfaces. Additional goals may be of a computational or algebraic nature.
For example, one can write scripts to generate many possible initial conditions, then use code to convert the surfaces with each of these conditions into minimal surfaces. An algebraic goal may be to enumerate all possible (possibly nonmanifold) structures and perhaps categorize them based on their properties; this is, in fact, an open problem. The possibilities are truly endless, and potential directions depend on the interests of the group of researchers undertaking this project further.
The Let's Encrypt duplicate signature key selection attack

posted March 2021

On August 11th, 2015, Andrew Ayer sent an email to the IETF mailing list starting with the following words:

I recently reviewed draft-barnes-acme-04 and found vulnerabilities in the DNS, DVSNI, and Simple HTTP challenges that would allow an attacker to fraudulently complete these challenges.

The draft-barnes-acme-04 mentioned by Andrew Ayer is a document specifying ACME, one of the protocols behind the Let's Encrypt certificate authority. A certificate authority is the thing that your browser trusts and that signs the public keys of websites you visit. It is called a "certificate" authority due to the fact that it does not sign public keys, but certificates. A certificate is just a blob of data bundling a website's public key, its domain name, and some other relevant metadata.

The attack was found merely 6 weeks before major browsers were supposed to start trusting Let's Encrypt's public key. The draft has since become RFC 8555: Automatic Certificate Management Environment (ACME), mitigating the issues. Since then no cryptographic attacks are known on the protocol.

This blog post will go over the accident, and explain why it happened, why it was a surprising bug, and what you should watch for when using signatures in cryptography.

How Let's Encrypt used signatures

Let's Encrypt is a pretty big deal. Created in 2014, it is a certificate authority run as a nonprofit, providing trust to hundreds of millions of websites. The keys to Let's Encrypt's success are twofold:

- It is free. Before Let's Encrypt, most certificate authorities charged fees from webmasters who wanted to obtain certificates.
- It is automated. If you follow their standardized protocol, you can request, renew, and even revoke certificates via a web interface. Contrast that with other certificate authorities, which did most processing manually and took time to issue certificates.

If a webmaster wants her website example.com to provide a secure connection to her users (via HTTPS), she can request a certificate from Let's Encrypt (essentially a signature over its domain name and public key), and after proving that she owns the domain example.com and getting her certificate issued, she will be able to use it to negotiate a secure connection with any browser trusting Let's Encrypt.

That's the theory. In practice the flow goes like this:

1. Alice registers on Let's Encrypt with an RSA public key.
2. Alice asks Let's Encrypt for a certificate for example.com.
3. Let's Encrypt asks Alice to prove that she owns example.com; for this she has to sign some data and upload it to example.com/.well-known/acme-challenge/some_file.
4. Once Alice has signed and uploaded the signature, she asks Let's Encrypt to go check it.
5. Let's Encrypt checks if it can access the file on example.com; if it successfully downloads the signature and the signature is valid, then Let's Encrypt issues a certificate to Alice.
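In code, the server side of step 5 boils down to something like the following sketch. To be clear, this is my simplification of the flow described above, not the actual ACME implementation; the function names, the raw-signature challenge file, and the broad exception handling are all illustrative:

import requests
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def check_challenge(domain, account_public_key, expected_data):
    # Download the signature that the account holder uploaded on their domain
    url = f"http://{domain}/.well-known/acme-challenge/some_file"
    sig = requests.get(url).content
    try:
        # Verify it against the public key *currently* registered on the account
        account_public_key.verify(sig, expected_data, padding.PKCS1v15(), hashes.SHA256())
        return True   # go ahead and issue the certificate
    except Exception:
        return False

Notice that the key used for verification is whatever key the account currently registers, while the file may be lingering from a previous validation; that mismatch is exactly what the attack below exploits.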
In 2015, Alice could request a signed certificate from Let's Encrypt by uploading a signature (from the key she registered with) on her domain. The certificate authority verifies that Alice owns the domain by downloading the signature from the domain and verifying it. If it is valid, the authority signs a certificate (which contains the domain's public key, the domain name example.com, and some other metadata) and sends it to Alice, who can then use it to secure her website in a protocol called TLS.

Let's see next how the attack worked.

How did the Let's Encrypt attack work?

In the attack that Andrew Ayer found in 2015, Andrew proposes a way to gain control of a Let's Encrypt account that has already validated a domain (let's pick example.com as an example). The attack goes something like this (keep in mind that I'm simplifying):

1. Alice registers and goes through the process of verifying her domain example.com by uploading some signature over some data on example.com/.well-known/acme-challenge/some_file. She then successfully manages to obtain a certificate from Let's Encrypt.
2. Later, Eve signs up to Let's Encrypt with a new account and an RSA public key, and requests to recover the example.com domain.
3. Let's Encrypt asks Eve to sign some new data and upload it to example.com/.well-known/acme-challenge/some_file (note that the file is still lingering there from Alice's previous domain validation).
4. Eve crafts a new malicious keypair and updates her public key on Let's Encrypt. She then asks Let's Encrypt to check the signature.
5. Let's Encrypt obtains the signature file from example.com; the signature matches; Eve is granted ownership of the domain example.com. She can then ask Let's Encrypt to issue valid certificates for this domain and any public key.

The 2015 Let's Encrypt attack allowed an attacker (here Eve) to successfully recover an already-approved account on the certificate authority. To do this, she simply forges a new keypair that can validate the already existing signature and data from the previous valid flow.

Take a few minutes to understand the attack. It should be quite surprising to you. Next, let's see how Eve could craft a new keypair that worked like the original one did.

Key substitution attacks on RSA

In the previously discussed attack, Eve managed to create a valid public key that validates a given signature and message. This is quite a surprising property of RSA, so let's see how this works.

"A digital signature does not uniquely identify a key or a message." -- Andrew Ayer, Duplicate Signature Key Selection Attack in Let's Encrypt (2015)

Here is the problem given to the attacker: for a fixed signature and (PKCS#1 v1.5 padded) message, a public key \((e, N)\) must satisfy the following equation to validate the signature:

$$\text{signature} = \text{message}^e \pmod{N}$$

One can easily craft a key pair that will (most of the time) satisfy the equation:

- a public exponent \(e = 1\)
- a private exponent \(d = 1\)
- a public modulus \(N = \text{signature} - \text{message}\)

You can easily verify that the validation works with this keypair:

$$\begin{align} &\text{signature} = \text{message}^e \mod{N} \\ \iff &\text{signature} = \text{message} \mod{\text{signature} - \text{message}} \\ \iff &\text{signature} - \text{message} = 0 \mod{\text{signature} - \text{message}} \end{align}$$

Is this issue surprising? It should be.
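Here's a quick Python sanity check of the trick, with toy integers standing in for the real padded message and signature bytes (the values are arbitrary, chosen only so that signature > message):

message = 0xdeadbeef          # stands in for the PKCS#1 v1.5 padded message
signature = 0xc0ffeec0ffee    # stands in for the fixed, already-published signature

e, d = 1, 1                   # forged public and private exponents
N = signature - message       # forged public modulus

# Validation: message^e must be congruent to the signature modulo N,
# which holds by construction since N divides (signature - message)
assert (pow(message, e, N) - signature) % N == 0
print("the forged key (e=1, N=signature-message) validates the signature")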
This property, called "key substitution," comes from the fact that there exists a gap between the theoretical cryptography world and the applied cryptography world, between the security proofs and the implemented protocols.

Signatures in cryptography are usually analyzed with the EUF-CMA model, which stands for Existential Unforgeability under Adaptive Chosen Message Attack. In this model YOU generate a key pair, and then I request YOU to sign a number of arbitrary messages. While I observe the signatures you produce, I win if I can at some point in time produce a valid signature over a message I hadn't requested. Unfortunately, even though our modern signature schemes seem to pass the EUF-CMA test fine, they tend to exhibit some surprising properties like the key substitution one.

To learn more about key substitution attacks and other signature shenanigans, take a look at my book Real-World Cryptography.

A flamegraph of Real-World Cryptography

posted March 2021

I've now spent 2 years writing my introduction to applied cryptography: Real-World Cryptography, which you can already read online here. (If you're wondering why I'm writing another book on cryptography, check this post.) I've written all the chapters, but there's still a lot of work to be done to make sure that it's good (collecting feedback), that it's consistent (unification of diagrams, of style, etc.), and that it's well structured. For the latter point, I thought I would leverage the fact that I'm an engineer and use a tool that's commonly used to measure performance: a flamegraph!

It looks like this, and you can click around to zoom on different chapters and sections:

The bottom layer shows all the chapters in order, and the width of the boxes shows how lengthy they are. The more you go up, the more you "nest" yourself into a section. For example, clicking on chapter 9: Secure transport, you can see that it is composed of several sections, with the longest being "How does TLS work", which itself is composed of several subsections, with the longest being "The TLS handshake".

What is it good for?

Using this flamegraph, I can now analyze how consistent the book is. The good news is that the chapters all seem pretty evenly distributed, with the exception of the shorter chapters 3 (MACs), 6 (asymmetric encryption), and 16 (final remarks). This is also expected, as these chapters are much more straightforward than the rest of the book.

Too lengthy

Looks like the bigger chapters are, in order: post-quantum crypto, authenticated encryption, hardware cryptography, user authentication, and secure transport. This is not great, as post-quantum crypto is supposed to be a chapter for the curious people who get to the end of the book, not a chapter to make the book bigger... The other chapters are also unnecessarily long. My goal is going to be to reduce these chapters' length in the coming weeks.

Too nested

This flamegraph is also useful to quickly see if there are sections that are way too nested. For example, chapter 9 on secure transport has a lot of mini sections on TLS. Also, look at some of the sections in chapter 5: Key exchanges > Key exchange standards > ECDH > ECDH standard. That's too much.

Not nested enough

Some chapters have almost no nested sections at all. For example, chapters 8 (randomness) and 16 (conclusion) are just successions of depth-1 sections. Is this a bad thing? Not necessarily, but if a section becomes too large it makes sense to either split it into several sections, or have subsections in it.
I've noticed, for example, that the first section of chapter 3 on MACs, titled "What is a MAC?", is quite long and doesn't have subsections. (Same for section 6.2, asymmetric encryption in practice, and section 8.2, what is a PRNG.) I also managed to spot some errors in nested sections by doing this! So that was pretty cool as well :)

EDIT: If you're interested in doing something like this with your own project, I published the script here.

I was on the Technoculture podcast

posted February 2021

Hey reader! I was on the Technoculture podcast (or videocast?) to talk about cryptography in general. The host Federica Bressan is releasing excerpts bit by bit. You can watch the first part (Theoretical vs. Real-World Cryptography) here:

And here's the rest, which I will update as the episodes get posted:

2/5: Real-World Cryptography & cryptocurrencies
3/5: Real-World Cryptography & applications
4/5: Real-World Cryptography & usable security
5/5: Real-World Cryptography & COVID-19

I'm on the develomentor podcast to talk about what applied cryptography is!

posted January 2021

I had a lot of fun talking about applied cryptography on the develomentor podcast a few months ago, and the episode just came out today! It also looks like you can get a free copy of my book by listening to it :)

https://develomentor.com/2021/01/07/david-wong-what-is-applied-cryptography-121/

I'm on the Cyber Security Interviews podcast!

posted December 2020

I recently went on to talk to Douglas Brush on his podcast Cyber Security Interviews. You can listen to the episode here:

https://cybersecurityinterviews.com/episodes/104-david-wong-many-layers-of-complexity/

How do people find bugs?

posted November 2020

You might wonder how people find bugs. Low-hanging-fruit bugs can be found via code review, static analysis, dynamic analysis (like fuzzing), and other techniques. But what about deep logic bugs? Those you can't find easily. Perhaps the protocol implemented is quite complicated, or correctness is hard to define, and edge cases hard to detect.

One thing I've noticed is that revisiting protocols is an excellent way to find logic bugs. Ian Miers once said something like this: "you need time, expertise, and meaningful engagement". I like that sentence; although one can point out that these traits are closely linked--you can't have meaningful engagement without time and expertise--it does show that finding bugs takes "effort".

OK. Meaningful engagement can lead to meaningful bugs, and meaningful bugs can be found at different levels. So you're here, sitting in your undies in the dark, with a beer on your side and some Uber Eats lying on the floor. Your computer is staring back at you, blinking at a frequency you can't notice, and waiting for you to find a bug in this protocol. What do you do?

Perhaps the protocol doesn't have a proof, and this leads you to wonder if you can write one for it... It worked for Ariel Gabizon, who in 2018 found a subtle error in a 2013 zk-SNARK paper used by the Zcash cryptocurrency he was working on. He found it by trying to write a proof for the paper he was reading, realizing that the authors had winged it. While protocols back in the days could afford to wing it, these days people are more difficult--they demand proofs. The bug Ariel found could have allowed anyone to forge an unlimited amount of money undetected. It was silently fixed months later in an upgrade to the network.

Ariel Gabizon, a cryptographer employed by the Zcash Company at the time of discovery, uncovered a soundness vulnerability.
The key generation procedure of [BCTV14], in step 3, produces various elements that are the result of evaluating polynomials related to the statement being proven. Some of these elements are unused by the prover and were included by mistake; but their presence allows a cheating prover to circumvent a consistency check, and thereby transform the proof of one statement into a valid-looking proof of a different statement. This breaks the soundness of the proving system.

What if the protocol already had a proof though? Well, that doesn't mean much: people enjoy writing unintelligible proofs, and people make errors in proofs all the time. So the second idea is that reading and trying to understand a proof might lead to a bug in the proof. Here's some meaningful engagement for you. In 2001, Shoup revisited some proofs and found some damning gaps in the proofs for RSA-OAEP, leading to a newer scheme, OAEP+, which was never adopted in practice, because back then, as I said, we really didn't care about proofs.

[BR94] contains a valid proof that OAEP satisfies a certain technical property which they call "plaintext awareness." Let us call this property PA1. However, it is claimed without proof that PA1 implies security against chosen ciphertext attack and non-malleability. Moreover, it is not even clear if the authors mean adaptive chosen ciphertext attack (as in [RS91]) or indifferent (a.k.a. lunchtime) chosen ciphertext attack (as in [NY90]).

Later, in 2018, a series of discoveries on the proofs for the OCB2 authenticated encryption scheme quickly led to practical attacks breaking it.

We have presented practical forgery and decryption attacks against OCB2, a high-profile ISO-standard authenticated encryption scheme. This was possible due to the discrepancy between the proof of OCB2 and the actual construction, in particular the interpretation of OCB2 as a mode of a TBC which combines XEX and XE. We comment that, due to errors in proofs, 'provably-secure schemes' sometimes still can be broken, or schemes remain secure but nevertheless the proofs need to be fixed. Even if we limit our focus to AE, we have many examples for this, such as NSA's Dual CTR [37,11], EAX-prime [28], GCM [22], and some of the CAESAR submissions [30,10,40]. We believe our work emphasizes the need for quality of security proofs, and their active verification.

Now, reading and verifying a proof is always a good idea, but it's slow, it's not flexible (if you change the protocol, good job changing the proof), and it's limited (you might want to prove different things re-using parts of the proofs, which is not straightforward). Today, we are starting to bridge the gap between pen-and-paper proofs and computers: it's called formal verification. And indeed, formal verification is booming, with a number of papers in recent years finding issues here and there just by describing protocols in a formal language and verifying that they withstand different types of attacks.

Prime, Order Please! Revisiting Small Subgroup and Invalid Curve Attacks on Protocols using Diffie-Hellman:

We implement our improved models in the Tamarin prover. We find a new attack on the Secure Scuttlebutt Gossip protocol, independently discover a recent attack on Tendermint's secure handshake, and evaluate the effectiveness of the proposed mitigations for recent Bluetooth attacks.
Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures:

We implement our models in the Tamarin Prover, yielding the first way to perform these analyses automatically, and validate them on several case studies. In the process, we find new attacks on DRKey and SOAP's WS-Security, both protocols which were previously proven secure in traditional symbolic models.

But even these kinds of techniques have limitations! (OMG David, when will you stop?) In 2017, Matthew Green wrote:

I don't want to spend much time talking about KRACK itself, because the vulnerability is pretty straightforward. Instead, I want to talk about why this vulnerability continues to exist so many years after WPA was standardized. And separately, to answer a question: how did this attack slip through, despite the fact that the 802.11i handshake was formally proven secure?

He later writes:

The critical problem is that while people looked closely at the two components — handshake and encryption protocol — in isolation, apparently nobody looked closely at the two components as they were connected together. I'm pretty sure there's an entire geek meme about this.

(He's pointing to the "2 unit tests. 0 integration tests." joke.) He then recognizes that it's a hard problem:

Of course, the reason nobody looked closely at this stuff is that doing so is just plain hard. Protocols have an exponential number of possible cases to analyze, and we're just about at the limit of the complexity of protocols that human beings can truly reason about, or that peer-reviewers can verify. The more pieces you add to the mix, the worse this problem gets. In the end we all know that the answer is for humans to stop doing this work. We need machine-assisted verification of protocols, preferably tied to the actual source code that implements them. This would ensure that the protocol actually does what it says, and that implementers don't further screw it up, thus invalidating the security proof.

Well, Matthew, we do have formally generated code! HACL* and fiat-crypto are two examples. Has anybody heard of those failing? I'd be interested…

In any case, what's left for us? A lot! Formally generated code is hard, and generally covers small parts of your protocol (e.g. field arithmetic for elliptic curves). So what else can we do? Implementing the protocol, if it hasn't been implemented before, is a no-brainer. In 2016, Taylor Hornby, an engineer at Zcash, wrote about a bug he found while implementing the Zerocash paper as the Zcash cryptocurrency:

In this blog post, we report on the security issues we've found in the Zcash protocol while preparing to deploy it as an open, permissionless financial system. Had we launched Zcash without finding and fixing the InternalH Collision vulnerability, it could have been exploited to counterfeit currency. Someone with enough computing power to find 128-bit hash collisions would have been able to double-spend money to themselves, creating Zcash out of thin air.

Perhaps re-implementing the protocol in a different language might work as well? One last thing: most of the code out there is not formally verified. So of course, reviewing code works, but you need time, expertise, money, etc. So instead, what about testing? This is what Wycheproof does by implementing a number of test vectors that are known to cause issues:

These observations have prompted us to develop Project Wycheproof, a collection of unit tests that detect known weaknesses or check for expected behaviors of some cryptographic algorithm.
Project Wycheproof provides tests for most cryptographic algorithms, including RSA, elliptic curve crypto and authenticated encryption. Our cryptographers have systematically surveyed the literature and implemented most known attacks. We have over 80 test cases which have uncovered more than 40 bugs. For example, we found that we could recover the private key of widely-used DSA and ECDHC implementations.

In all of that, I didn't even talk about the benefits of writing a specification... that's for another day.

The book is finished, well sort of... posted November 2020

If you didn't know, I've been writing a book (called Real-World Cryptography) on applied cryptography for almost 2 years now. Why would I do this? I answered this in a post here. I've finished writing all 15 chapters, which are split into a first part on primitives, the ingredients of cryptography, and a second part on protocols, the recipes of cryptography:

- Message authentication codes
- Authenticated encryption
- Key exchanges
- Asymmetric encryption and hybrid encryption
- Signatures and zero-knowledge proofs
- Randomness and secrets
- Secure transport
- Crypto as in cryptocurrency?
- Hardware cryptography
- Post-quantum cryptography
- Next-generation cryptography and final words

Is this it? Unfortunately it is not: I will now start the long revision process. I am collecting feedback on various chapters, so if you want to help me write the best book possible, please contact me with a chapter in mind :)
\begin{document}
\author[D. Bartoli]{Daniele Bartoli}
\address{Dipartimento di Matematica e Informatica, Universit\`a degli Studi di Perugia, Perugia, Italy}
\email{[email protected]}
\author[M. Montanucci]{Maria Montanucci}
\address{Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark}
\email{[email protected]}
\author[Giovanni Zini]{Giovanni Zini}
\address{Dipartimento di Matematica e Fisica, Universit\`a degli Studi della Campania Luigi Vanvitelli, Caserta, Italy}
\email{[email protected]}
\title[On certain self-orthogonal AG codes]{On certain self-orthogonal AG codes with applications to Quantum error-correcting codes}
\thanks{{\em 2010 Math. Subj. Class.}: 94B27, 11T71, 81P70, 14G50}
\thanks{{\em Keywords}: Finite fields, algebraic geometry codes, quantum error-correction, algebraic curves}
\begin{abstract}
In this paper a construction of quantum codes from self-orthogonal algebraic geometry codes is provided. Our method is based on the CSS construction as well as on some peculiar properties of the underlying algebraic curves, named Swiss curves. Several classes of well-known algebraic curves with many rational points turn out to be Swiss curves. Examples are given by Castle curves, GK curves, generalized GK curves and the Abd\'on-Bezerra-Quoos maximal curves. Applications of our method to these curves are provided. Our construction extends a previous one due to Hernando, McGuire, Monserrat, and Moyano-Fern\'andez.
\end{abstract}
\maketitle

\section{Introduction}
Since the discovery of quantum algorithms, such as a polynomial time algorithm for factorization by Shor \cite{Shor} and a quantum search algorithm by Grover \cite{Grover}, quantum computing has received a lot of attention. Even though a concrete and practical implementation of these algorithms is far away, it has nonetheless become clear that some form of error correction is required to protect quantum data from noise. This was the motivation for the development of quantum error correction and, more specifically, of quantum error-correcting codes. In the last decades much research has been done to find good quantum codes following several strategies and underlying mathematical structures. However, the most remarkable result is probably the one obtained by Calderbank and Shor \cite{CSH}, and Steane \cite{STE}; see also \cite{10}. Indeed, they showed that quantum codes can be derived from classical linear error-correcting codes provided that certain orthogonality properties are satisfied, including Euclidean and Hermitian self-orthogonality; see \cite{10,28,NC}. This method, known as the CSS construction, has made it possible to find many powerful quantum stabilizer codes.

Among all the classical codes used to produce quantum stabilizer codes, Algebraic-Geometry (AG) codes \cite{G1982} have received considerable attention \cite{MST, MTT, LGP, SSSSSOCODESSS, BMZ, MTZ, MPL, CHe, GAHE, c1,c2,c3, c4,d1,d2}. The interest towards AG codes is due to several reasons. First, every linear code can be realized as an algebraic geometry code \cite{PE}. Also, AG codes were used to improve the Gilbert-Varshamov bound \cite{GV}, an outstanding result at the time. Finally, conditions for Euclidean self-orthogonality of AG codes are well known \cite{BO} and allow us to translate the purely combinatorial nature of this problem into geometric terms concerning the structure of the curves involved and their corresponding function fields.
Castle curves and AG codes from them \cite{MST} give rise to good quantum error-correcting codes. Indeed, among all curves used to get AG codes, Castle and weak Castle curves combine the good properties of having a reasonably simple handling and giving codes with excellent parameters. This is confirmed by the fact that most of the best one-point AG codes studied in the literature belong to the family of Castle codes. In \cite{MTT}, Munuera, Ten\'orio and Torres used the good properties of algebraic-geometry codes coming from Castle and weak Castle curves to provide new sequences of self-orthogonal codes. Their construction was extended in \cite{SSSSSOCODESSS} by Hernando, McGuire, Monserrat, and Moyano-Fern\'andez, who provided a way to obtain self-orthogonal AG codes, and hence good quantum codes, from a more general class of curves, strictly including Castle curves.

In this paper we further generalize the family of curves considered in \cite{SSSSSOCODESSS} to what we call Swiss curves. The geometric properties of the underlying plane curves considered in \cite{SSSSSOCODESSS} are weakened, focusing on the algebraic structure of the curves, that is, on their function field. The family of Swiss curves, and more generally of $r$-Swiss curves, includes the most studied and known families of algebraic curves with many rational points over finite fields. Some examples are given by the Giulietti-Korchm\'aros curve \cite{GK2009}, the two generalized Giulietti-Korchm\'aros curves \cite{GGS2010} and \cite{BM}, as well as the Abd\'on-Bezerra-Quoos curve \cite{ABQ}. Explicit constructions of quantum codes from these curves are provided, as well as comparisons with the quantum Gilbert-Varshamov bound.

The paper is organized as follows. Section \ref{sec:preliminaries} recalls basic notions on AG codes and quantum codes; in particular, we present some constructions from the literature where quantum codes are obtained from AG codes with self-orthogonality properties. Section \ref{sec:swiss} defines a class of curves, namely Swiss curves, for which we prove in Theorem \ref{Main:Swiss} a result about self-orthogonality properties. This is applied in Section \ref{sec:appl} to several curves which are shown to be Swiss and which provide quantum codes. The results of Section \ref{sec:swiss} are generalized in Section \ref{sec:r-swiss} to a larger class of curves, called $r$-Swiss curves, and then applied in Section \ref{sec:r-appl} to generalized GK curves over finite fields of even order. Finally, we note in Section \ref{sec:comp} that certain stabilizer quantum codes constructed in the previous sections are pure and exceed the quantum Gilbert-Varshamov bound.

\section{AG codes and quantum codes}\label{sec:preliminaries}
\subsection{AG codes}
We introduce here some basic notions on AG codes; for a detailed introduction to this topic, we refer to \cite[Chapter 2]{Sti}. Let $\mathbb{F}_q$ be the finite field of order $q$ and $\mathcal{X}$ be a projective, absolutely irreducible, algebraic curve of genus $g$ defined over $\mathbb{F}_q$. Let $\mathbb{F}_q(\mathcal{X})$ be the field of rational functions on $\mathcal{X}$ and $\mathcal{X}(\mathbb{F}_q)$ be the set of rational places of $\mathcal{X}$.
For any divisor $D=\sum_{P\in\mathcal{X}(\overline{\mathbb{F}}_q)}n_P P$ on $\mathcal{X}$, we denote by $v_P(D)$ the weight $n_P\in\mathbb{Z}$ of $P$ in $D$ (also called the valuation of $D$ at $P$), and by ${\rm supp}(D)$ the support of $D$, that is the finite set of places with non-zero weight in $D$; the degree of $D$ is $\deg(D)=\sum_{P\in{\rm supp}(D)} n_P$. The Riemann-Roch space $\mathcal{L}(D)$ of an $\mathbb{F}_q$-rational divisor $D$ is the finite dimensional $\mathbb{F}_q$-vector space $$ \mathcal{L}(D)=\{f\in\mathbb{F}_q(\mathcal{X})\setminus\{0\}\colon (f)+D\geq0\}\cup\{0\},$$ where $(f)=(f)_0-(f)_{\infty}$ denotes the principal divisor of $f$; here, $(f)_0$ and $(f)_\infty$ are respectively the zero divisor and the pole divisor of $f$. The $\mathbb{F}_q$-dimension of $\mathcal{L}(D)$ is denoted by $\ell(D)$.

Let $\{P_1,\ldots,P_N\}\subseteq\mathcal{X}(\mathbb{F}_q)$ with $P_i\ne P_j$ for $i\ne j$, $D$ be the $\mathbb{F}_q$-rational divisor $P_1+\cdots+P_N$, and $G$ be an $\mathbb{F}_q$-rational divisor of $\mathcal{X}$ such that ${\rm supp}(D)\cap{\rm supp}(G)=\emptyset$. Consider the $\mathbb{F}_q$-linear evaluation map \begin{eqnarray*} e_D:&\mathcal{L}(G)&\to\mathbb{F}_q^N\\ &f&\mapsto(f(P_1),\ldots,f(P_N)). \end{eqnarray*} The (functional) AG code $C(D,G)$ is defined as the image $e_D(\mathcal{L}(G))$ of $e_D$. The code $C(D,G)$ has parameters $[N,k,d]_q$ which satisfy $k=\ell(G)-\ell(G-D)$ and $d\geq N-\deg(G)$. If $\deg(G)<N$, then $e_D$ is injective and $k=\ell(G)$. If $2g-2<\deg(G)<N$, then $k=\deg(G)+1-g$. The (Euclidean) dual code $C(D,G)^\bot$ has parameters $[N^\bot,k^\bot,d^\bot]_q$, where $N^\bot=N$, $k^\bot=N-k$, and $d^\bot\geq \deg(G)-2g+2$. Note that, if $2g-2<\deg(G)<N$, then $k^\bot=N-\deg(G)+g-1$.

\subsection{Quantum codes}
The main ingredient to construct quantum codes in this paper is the so-called {\it CSS construction} (named after Calderbank, Shor and Steane), which makes it possible to construct quantum codes from classical linear codes; see \cite[Lemma 2.5]{LGP}. A $q$-ary quantum code $Q$ of length $N$ and dimension $k$ is defined to be a $q^k$-dimensional Hilbert subspace of a $q^N$-dimensional Hilbert space $\mathbb H=(\mathbb C^q)^{\otimes N}=\mathbb C^q\otimes\cdots\otimes\mathbb C^q$. If $Q$ has minimum distance $D$, then $Q$ can correct up to $\lfloor\frac{D-1}{2}\rfloor$ quantum errors. The notation $[[N,k,D]]_q$ is used to denote such a quantum code $Q$. For an $[[N,k,D]]_q$-quantum code the quantum Singleton bound holds, that is, the minimum distance satisfies $D\leq 1+(N-k)/2$. The quantum Singleton defect is $\delta^Q:=N-k-2D+2\geq0$, and the relative quantum Singleton defect is $\Delta^Q:=\delta^Q/N$. If $\delta^Q=0$, then the code is said to be quantum MDS. For a detailed introduction on quantum codes see \cite{LGP} and the references therein. Another important bound for quantum codes is an analogue of the Gilbert-Varshamov bound.

\begin{theorem}{\rm \cite[Theorem 1.4]{FM2004}} Suppose that $N>k\geq2$, $d\geq 2$, and $N\equiv k \pmod 2$. Then there exists a pure stabilizer quantum code with parameters $[[N,k,d]]_q$ provided that \begin{equation}\label{Dis:GV} \frac{q^{N-k+2}-1}{q^2-1}>\sum_{i=1}^{d-1}(q^2-1)^{i-1}\binom{N}{i}. \end{equation} \end{theorem}

\begin{lemma}{\rm \cite{10,28,NC}} \label{ccs} {\rm (CSS construction)} Let $C_1$ and $C_2$ denote two linear codes with parameters $[N,k_i,d_i]_q$, $i=1,2$, and assume that $C_1 \subset C_2$.
Then there exists an $[[N,k_2-k_1,D]]_q$ code with $D=\min\{wt(c) \mid c \in (C_2 \setminus C_1) \cup (C_1^\perp \setminus C_2^\perp)\}$, where $wt(c)$ is the Hamming weight of $c$. \end{lemma}

A stabilizer quantum code $C$ is pure if the minimum distance of $C^\bot$ coincides with the minimum Hamming weight of $C^{\bot}\setminus C$.

\begin{theorem} {\rm \cite{10,28}} \label{th:stab} Let $C$ be an $[N,k,d]_q$-code such that $C\subseteq C^{\bot}$, i.e. $C$ is self-orthogonal. Then there exists an $[[N,N-2k,\geq d^{\bot}]]_q$ stabilizer quantum code, where $d^{\bot}$ denotes the minimum distance of $C^\bot$. If the minimum weight of $C^{\bot}\setminus C$ is equal to $d^{\bot}$, then the stabilizer code is pure and has minimum distance $d^{\bot}$. \end{theorem}

\begin{corollary}{\rm \cite{SSSSSOCODESSS}}\label{Corollary} Let $C$ be an $[N,k,d]_q$-code such that $C\subseteq C^\bot$. If $d>k+1$ then there exists an $[[N,N-2k, d^{\bot}]]_q$-code which is pure. \end{corollary}
\begin{proof} $C^\bot$ is an $[N,N-k,d^{\bot}]_q$ code, with $d^{\bot}\leq k+1$ by the Singleton Bound. If $d>k+1$, then by Theorem \ref{th:stab} there exists a pure $[[N,N-2k, d^{\bot}]]_q$ stabilizer quantum code. \end{proof}

\subsection{Constructions of AG quantum codes}
We list here some constructions of quantum codes starting from AG codes which have been provided in the literature and exploit self-orthogonality properties of the underlying AG codes.
\begin{itemize}
\item \textit{General t-point construction} due to La Guardia and Pereira; see \cite[Theorem 3.1]{LGP}. This is a direct application of the CSS construction to AG codes.
\begin{lemma} \label{lem1} {\rm (General t-point construction)} Let $\mathcal X$ be a nonsingular curve over $\mathbb F_q$ with genus $g$ and $N+t$ distinct $\mathbb F_q$-rational points, for some $N,t>0$. Assume that $a_i,b_i$, $i=1,\ldots,t$, are positive integers such that $a_i \leq b_i$ for all $i$ and $2g-2 < \sum_{i=1}^{t} a_i < \sum_{i=1}^t b_i < N$. Then there exists a quantum code with parameters $[[N,k,D]]_{q}$ with $k=\sum_{i=1}^{t} b_i - \sum_{i=1}^{t} a_i$ and $D \geq \min \big\{ N - \sum_{i=1}^{t} b_i, \sum_{i=1}^{t} a_i - (2g-2)\big\}$. \end{lemma}
\item \textit{Quantum codes from weak Castle curves}, due to Munuera, Ten\'orio, and Torres; see \cite[Sections 3.3 and 3.4]{MTT}. A weak Castle curve over $\mathbb{F}_q$ is a pair $(\mathcal{X},P)$, where $\mathcal{X}$ is an absolutely irreducible $\mathbb{F}_q$-rational curve and $P$ is a rational place of $\mathcal{X}$ such that the following conditions hold.
\begin{itemize}
\item The Weierstrass semigroup $H(P)$ at $P$ is symmetric.
\item There exist a positive integer $s$, a rational map $f:\mathcal{X}\to\mathbb{P}^1$, and a non-empty set $\{\alpha_1,\ldots,\alpha_h\}\subseteq\mathbb{F}_q$ such that $(f)_\infty= sP$ and for all $i=1,\ldots,h$ we have $f^{-1}(\alpha_i)\subseteq\mathcal{X}(\mathbb{F}_q)$ and $|f^{-1}(\alpha_i)|=s$.
\end{itemize}
With the same notation, let $\phi\in\mathbb{F}_q(\mathcal{X})$ be defined as $\phi=\prod_{i=1}^{h}(f-\alpha_i)$, and let $D$ be the sum of all $N=|\mathcal{X}(\mathbb{F}_q)|-1$ rational places of $\mathcal{X}$ different from $P$. Denote by $M=\{m_1=0,m_2,\ldots,m_N\}$ the dimension set of $(\mathcal{X},P)$, i.e. $m_i=\min\{m\colon \ell(mP)-\ell((m-N)P)\geq i\}$, and by $C_i$ the weak Castle code $C(D,m_i P)$. For any $r\geq1$ let $\gamma_r$ be the $r$-th gonality of $\mathcal{X}$, that is the minimum degree of a divisor $A$ on $\mathcal{X}$ such that $\ell(A)\geq r$.
\begin{lemma}{\rm \cite[Corollary 5]{MTT}} Using the same notation as above, let $(\mathcal{X},P)$ be a weak Castle curve of genus $g$ over $\mathbb{F}_{q^2}$ such that $(d\phi)=(2g-2)P$. If $(q+1)m_i\leq N+2g-2$ for some $i$, then there exists a quantum code with parameters $[[N,N-2i,\geq d(C_{N-i})]]_q$ with $d(C_{N-i})\geq N-m_{N-i}+\gamma_{a+1}$, where $a=\ell((m_{N-i}-N)P)$. \end{lemma}
\item \textit{Self-orthogonal AG codes from curves with only one place at infinity}, due to Hernando, McGuire, Monserrat, and Moyano-Fern\'andez; see \cite[Section 3]{SSSSSOCODESSS}. Let $\mathcal{X}$ be an absolutely irreducible $\mathbb{F}_q$-rational plane curve with $\mathbb{F}_q(\mathcal{X})=\mathbb{F}_q(x,y)$ such that $\mathcal{X}$ has only one point $\mathcal{P}_{\infty}$ at infinity, there is only one place $P_\infty$ centered at $\mathcal{P}_{\infty}$, and $P_{\infty}$ is rational. Let $\mathcal{A}$ be the set of the elements $a\in\mathbb{F}_q$ such that $\mathcal{X}$ and the line $L_a$ with affine equation $X=a$ are $\mathbb{F}_q$-transversal, that is, the points of $\mathcal{X}\cap L_a$ are $\mathbb{F}_q$-rational and the intersection multiplicity of $\mathcal{X}$ and $L_a$ is $1$ at every point of $\mathcal{X}\cap L_a$. Let $\mathcal{P}_{\mathcal{A}}$ be the set of places of $\mathcal{X}$ centered at affine points of $\mathcal{X}$ whose $X$-coordinate is in $\mathcal{A}$, and $D$ be the divisor $\sum_{P\in\mathcal{P}_{\mathcal A}}P$. Define the rational functions $f_{\mathcal A}(x)=\prod_{a\in\mathcal{A}}(x-a)$ and $f_{\mathcal A}^{\prime}(x)$, where $f_{\mathcal A}^\prime (X)=\partial_X f_{\mathcal A}(X)$. Let $M$ be the divisor of $\mathcal{X}$ such that ${\rm supp}(M)=\{P\in{\rm supp}((f_{\mathcal A}^\prime)_0)\colon P\ne P_{\infty}\}$ and $v_Q(M)=v_Q((f_{\mathcal A}^\prime)_0)$ for every $Q\in{\rm supp}(M)$.
\begin{lemma}{\rm \cite[Theorem 3.1]{SSSSSOCODESSS}} Using the same notation as above, let $G$ be an $\mathbb{F}_q$-rational divisor of $\mathcal{X}$ with ${\rm supp}(G)\cap{\rm supp}(D)=\emptyset$. Then $$ C(D,G)^\bot = C(D,(2g-2+\deg(D)-\deg(M))P_{\infty}+M-G) .$$ If in addition $2G\leq(2g-2+\deg(D)-\deg(M))P_{\infty}+M$, then $$ C(D,G)\subseteq C(D,G)^\bot.$$ \end{lemma}
\end{itemize}

\section{Swiss curves and codes}\label{sec:swiss}
\begin{definition} A Swiss curve is a pair $(\mathcal{C},P)$ such that $\mathcal{C}$ is an absolutely irreducible $\mathbb{F}_q$-rational curve, $P$ is a place of $\mathbb{F}_q(\mathcal{C})$ and the following holds.
\begin{enumerate}
\item $P$ is rational;
\item there exists a function $x\in \mathbb{F}_q(\mathcal{C})$ such that $(dx)=(2g-2)P$.
\end{enumerate}
\end{definition}
\begin{remark} Note that the existence of a function $x$ such that $(dx)=(2g-2)P$ implies that the Weierstrass semigroup at $P$ is symmetric, that is, $2g-1 \in G(P)$. Indeed, $(dx)=(2g-2)P$ implies that $(2g-2)P$ is a canonical divisor and hence the dimension of its Riemann-Roch space is equal to $g$, see \cite[Proposition 1.6.2]{Sti}. Since there are exactly $g$ elements in $G(P)$ (and they are at most $2g-1$) we get that $2g-1 \in G(P)$. \end{remark}
Even though Condition $(1)$ is not difficult to force, Condition $(2)$ may seem quite cryptic. The following remark describes a way to force Condition $(2)$ to hold as well.
\begin{remark}\label{remo} One way to force the existence of $x$ is the following.
Suppose that there exists a function $x \in \mathbb{F}_q(\mathcal{C})$ such that $dx \ne 0 $ and in $\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(x)$ there is a unique ramification place and it is totally ramified. Without loss of generality we can assume that the totally ramified place is the pole $P_\infty$ of $x$. In fact, if such a place is the zero of $x-\alpha$, it is enough to replace $x$ with $1/(x-\alpha)$ and consider $\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(1/(x-\alpha))$. From \cite[Theorem 3.4.6]{Sti}, $$({\rm{Cotr}}_{\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(x)}(dx))=(dx)_{\mathbb{F}_q(\mathcal{C})}={\rm{Con}}_{\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(x)}((dx))+{\rm{Diff}}(\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(x)).$$ Since the support of both ${\rm{Con}}_{\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(x)}((dx))$ and ${\rm{Diff}}(\mathbb{F}_q(\mathcal{C}) / \mathbb{F}_q(x))$ is just $P_\infty$, we get that $(dx)_{\mathbb{F}_q(\mathcal{C})}=(2g-2)P_\infty$. \end{remark} Swiss curves can be constructed as explained in the following remark. \begin{remark} \label{remo1} Let $\mathcal{C}$ be an $\mathbb{F}_q$-rational curve of $p$-rank zero. Assume that there exist a rational place $P$ of $\mathbb{F}_q(\mathcal{C})$ and a $p$-subgroup $S$ of automorphisms of $\mathbb{F}_q(\mathcal{C})$ fixing $P$ such that the quotient curve $\mathcal{C}/S$ is rational. Then $(\mathcal{C},P)$ is a Swiss curve. Indeed $(1)$ is trivially satisfied and from \cite[Lemma 11.129]{HKT} $P$ is the unique place ramifying in $\mathcal{C}/S$ and it is totally ramified. Hence also Condition $(2)$ is satisfied by Remark \ref{remo}. \end{remark} In the following, we will denote by $\mathbb{P}_q$ the set of all rational places of $ \mathbb{F}_q(\mathcal{C})$. Also, given a divisor $D$ and a place $Q$, we denote by $v_Q(D)$ the weight of $D$ at $Q$. Consider a Swiss curve $(\mathcal{C},P)$ and the set $$\mathcal{A}=\left\{\alpha \in \mathbb{F}_q \ :\ (x-\alpha)_0-v_P((x-\alpha)_0) P\leq \sum_{Q\in \mathbb{P}_q\setminus \{P\}} Q \right\}.$$ Basically, $\mathcal{A}$ consists of all the $\alpha\in \mathbb{F}_q$ such that all the zeros of the function $x-\alpha$ other than (possibly) $P$ are rational and simple. Also, let $$D=\sum_{\alpha \in \mathcal{A}} \bigg((x-\alpha)_0-v_P((x-\alpha)_0) P \bigg).$$ \begin{theorem}\label{Main:Swiss} Let $(\mathcal{C},P)$ be a Swiss curve. With the same notation as above, consider another $\mathbb{F}_q$-rational divisor $G$ such that ${\rm supp}(G)\cap {\rm supp}(D) =\emptyset$. Then \begin{enumerate} \item $C(D,G)^{\bot} =C(D,E+(\gamma+2g-2) P-G)$, for some positive divisor $E$ and some integer $\gamma$; \item if, in addition, $2G\leq E+(\gamma+2g-2) P $ then $C(D,G)\subset C(D,G)^{\bot}$. \end{enumerate} \end{theorem} \begin{proof} Define $$h=\sum_{a \in \mathcal{A}} \frac{1}{x-a},\qquad\omega=\left(h\right)dx.$$ Clearly, places in ${\rm supp}(D)$ are simple poles of $h$. By hypothesis $(dx)=(2g-2)P$. Also, $\left(h\right)=E-D+\gamma P$, where $$E=\left(h\right)_0-v_P((h)_0) P,\qquad\gamma =\deg D-\deg E.$$ Hence, $(\omega)=E-D+(\gamma+2g-2) P$. Therefore $\omega$ has poles at places of $D$ and it is readily seen that the residue of $\omega$ at such places is $1$. Now, the claim follows from \cite[Theorem 2.72]{27}. \end{proof} In Section \ref{sec:appl}, we describe several Swiss curves. Using Theorem \ref{Main:Swiss} we construct families of self-orthogonal codes, which provide stabilizer quantum codes by means of Theorem \ref{th:stab}. 
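For the applications in the next sections it is convenient to record explicitly the one-point case of Theorem \ref{Main:Swiss}.
\begin{remark}\label{rem:onepoint}
Let $G=sP$ with $s\geq0$; note that $P\notin{\rm supp}(D)$ by construction. Since the divisor $E$ constructed in the proof of Theorem \ref{Main:Swiss} satisfies $E\geq0$ and $v_P(E)=0$, Condition $(2)$ of the theorem reduces to the numerical inequality $$2s\leq\gamma+2g-2,$$ and whenever it holds we obtain $C(D,sP)\subseteq C(D,sP)^{\bot}$.
\end{remark}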
\section{Applications to some Swiss curves}\label{sec:appl}
\subsection{GK curve}
The Giulietti-Korchm\'aros curve over $\mathbb{F}_{q^6}$ is a non-singular curve in ${\rm PG}(3,\mathbb{K})$, $\mathbb{K}=\overline{\mathbb{F}}_{q}$, defined by the affine equations: \[GK_q: \begin{cases} Y^{q+1} = X^q+X,\\ Z^{q^2-q+1} =Y^{q^2}-Y. \end{cases} \] It has genus $g=\frac{(q^3+1)(q^2-2)}{2}+1$, and the number of its $\mathbb{F}_{q^6}$-rational places is $q^8-q^6+q^5+1$. The GK curve first appeared in~\cite{GK2009} as a maximal curve over $\mathbb{F}_{q^6}$, since the latter number coincides with the Hasse-Weil upper bound, $q^6+2gq^3+1$. The GK curve is the first example of an $\mathbb{F}_{q^6}$-maximal curve that is not $\mathbb{F}_{q^6}$-covered by the Hermitian curve, provided that $q>2$.

Since this curve is $\mathbb{F}_{q^6}$-maximal, its $p$-rank is zero. We will now show that Condition $(2)$ is satisfied by applying Remark \ref{remo1}. At each affine $\mathbb{F}_{q^6}$-rational point of $GK_q$ with $z$-coordinate $\bar z$, the function $z-\bar z$ has valuation $1$; hence $z$ is a separating element for $\mathbb{K}(GK_q)/\mathbb{K}$ by \cite[Prop. 3.10.2]{Sti}. Then $dz$ is non-zero by \cite[Prop. 4.1.8(c)]{Sti}. It is easily checked that $\mathbb{F}_{q^6}(GK_q)/\mathbb{F}_{q^6}(z)$ is a Galois extension of degree $q^3$; also, the unique place $P_\infty$ centered at the unique point at infinity of $GK_q$ is a ramification place for $\mathbb{F}_{q^6}(GK_q)/\mathbb{F}_{q^6}(z)$. From Remark \ref{remo1}, $(GK_q,P_\infty)$ is a Swiss curve and $(dz)=(2g-2)P_{\infty}=(q^3+1)(q^2-2)P_\infty$.

Let $m=q^2-q+1$. It can be seen that if $\xi \in \mathbb{F}_{q^6}$ is such that $Y^{q^2}-Y=\xi^m$ has $q^2$ solutions in $\mathbb{F}_{q^6}$, then for each $\eta\in \mathbb{F}_{q^6}$ satisfying $\eta^{q^2}-\eta=\xi^m$ there are precisely $q$ values $\theta \in \mathbb{F}_{q^6}$ such that $\theta^q+\theta = \eta^{q+1}$. Also, the values $\xi\in \mathbb{F}_{q^6}$ for which all the zeros of $z-\xi$ are rational are those satisfying \begin{equation}\label{eq:cond} \xi^{mq^4}+\xi^{mq^2}+\xi^m=0; \end{equation} moreover for each of them there are exactly $q^3$ triples $(\bar x,\bar y,\bar z) \in \mathbb{F}_{q^6}^3$ such that $\bar{z}^m = \bar{y}^{q^2}-\bar{y}$ and $\bar{y}^{q+1}=\bar{x}^q+\bar{x}$. This implies that there are exactly $q^5-q^3+q^2$ values $\xi\in \mathbb{F}_{q^6}$ satisfying Equation \eqref{eq:cond}.
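To make the counting explicit: each such $\xi$ lies below exactly $q^3$ affine $\mathbb{F}_{q^6}$-rational points of $GK_q$, and these fibers account for all affine rational points; hence the number of such values is $$\frac{|GK_q(\mathbb{F}_{q^6})|-1}{q^3}=\frac{q^8-q^6+q^5}{q^3}=q^5-q^3+q^2.$$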
Let $$\Xi =\{\xi \in \mathbb{F}_{q^6} \ : \ \xi^{mq^4}+\xi^{mq^2}+\xi^m=0\}.$$ Then $$\Xi \setminus \{0\}=\left\{\xi \in \mathbb{F}_{q^6} \ : \ \left(\xi^{(q-1)(q^3+1)}\right)^{q^2+1}+\left(\xi^{(q-1)(q^3+1)}\right)+1=0\right\}.$$ Note that if $\mu_{q^2+q+1}$ denotes the set of the $(q^2+q+1)$-th roots of unity then $$\{ \theta \in \mu_{q^2+q+1} \ : \ \theta^{q^2+1}+\theta+1 =0\}=\{ \theta \in \mu_{q^2+q+1} \ : \ \theta^{q+1}+\theta^q+1 =0\}.$$ Thus, the polynomial $$f(Z)=Z^{q^5-q^3+q^2}+Z^{q^5-q^4+q^2-q+1}+Z\in \mathbb{F}_{q^6}[Z]$$ factorizes completely over $\mathbb{F}_{q^6}$, and $$f(Z)=\prod _{\xi \in \Xi }(Z-\xi).$$ Also, $$f^{\prime}(Z) =Z^{q^5-q^4+q^2-q}+1=(Z^{(q^3+1)(q-1)}+1)^q,$$ and hence the zero divisor of the rational function $f^\prime(z)\in\mathbb{K}(GK_q)$ satisfies $$\deg(f^{\prime}(z))_0 =(q^5-q^4+q^2-q)q^3.$$ Now consider in $\mathbb{K}(GK_q)$ the function $$ \sum_{\xi \in \Xi}\frac{1}{z-\xi}=\frac{f^{\prime}(z)}{f(z)}.$$ Its principal divisor is $$M-D+(q^4-q^3+q)q^3P_{\infty},$$ where $M$ is the zero divisor of $f^{\prime}(z)$ and $D$ is the zero divisor $\sum_{P \in \mathbb{P}_{q^6}(GK_q)\setminus \{P_{\infty}\}}P$ of $f(z)$ of degree $q^5(q^3-q+1)$. The divisor of $$\omega=\sum_{\xi \in \Xi}\frac{1}{z-\xi}\,dz$$ is $$M-D+[2g-2+(q^4-q^3+q)q^3]P_{\infty}=M-D+[q^7-q^6+q^5+q^4-2q^3+q^2-2]P_{\infty}.$$ Consider the one-point divisor $G=sP_{\infty}$. By Theorem \ref{Main:Swiss} and its proof, \begin{eqnarray*} C(D,G)^{\bot}&=&C(D,M+[2g-2+(q^4-q^3+q)q^3-s]P_{\infty})\\ &=&C(D,M+[q^7-q^6+q^5+q^4-2q^3+q^2-2-s]P_{\infty}). \end{eqnarray*} Also, $C(D,G) \subset C(D,G)^{\bot}$ if $$s \leq \frac{q^7-q^6+q^5+q^4-2q^3+q^2-2}{2}.$$ Finally, by Theorem \ref{th:stab}, we obtain the following result.

\begin{theorem}\label{Th:GKQuantum} With the same notation as above, consider the $q^6$-ary code $C(D,s P_{\infty})$ from the GK curve. Assume that $$q^5-2q^3+q^2-2\leq s \leq \frac{q^7-q^6+q^5+q^4-2q^3+q^2-2}{2}.$$ Then there exists a quantum code with parameters $$[[\,q^8-q^6+q^5,\;q^8-q^6+2q^5-2q^3+q^2-2-2s,\;\geq s-q^5+2q^3-q^2+2\,]]_{q^6}.$$ \end{theorem}

\subsection{GGS curves}\label{SubSection:GGS}
Let $q$ be a prime power and $n\geq5$ be an odd integer. The GGS curve $GGS(q,n)$ is defined by the equations \begin{equation}\label{GGS_equation} GGS(q,n): \left\{ \begin{array}{l} X^q + X = Y^{q+1}\\ Y^{q^2}-Y= Z^m\\ \end{array} \right. , \end{equation} where $m= (q^n+1)/(q+1)$; see \cite{GGS2010}. The genus of $GGS(q,n)$ is $\frac{1}{2}(q-1)(q^{n+1}+q^n-q^2)$, and $GGS(q,n)$ is $\mathbb F_{q^{2n}}$-maximal. Let $P_0=(0,0,0)$, $P_{(a,b,c)}=(a,b,c)$, and let $P_{\infty}$ be the unique ideal point of $GGS(q,n)$. Note that $GGS(q,n)$ is singular, $P_\infty$ being its unique singular point. Yet, there is only one place of $GGS(q,n)$ centered at $P_\infty$. The divisors of the coordinate functions $x,y,z$ satisfying $x^q + x = y^{q+1}$ and $y^{q^2}-y= z^m$ are \begin{eqnarray*} (x)&=&m(q+1)P_0-m(q+1)P_{\infty},\\ (y)&=&m\sum_{\alpha^q+\alpha=0} P_{(\alpha,0,0)}-mqP_{\infty},\\ (z)&=&\sum_{\scriptsize\begin{array}{l} \alpha^q+\alpha=\beta^{q+1}\\ \beta \in \mathbb{F}_{q^2}\\ \end{array}} P_{(\alpha,\beta,0)}-q^3P_{\infty}. \end{eqnarray*} As for the GK curve, the curve $GGS(q,n)$ has $p$-rank zero because it is $\mathbb{F}_{q^{2n}}$-maximal, and $(dz)=(2g-2)P_\infty$, since $\mathbb F_{q^{2n}}(GGS(q,n))/\mathbb F_{q^{2n}}(z)$ is a Galois extension of degree $q^3$ in which $P_\infty$ is totally ramified. Hence $(GGS(q,n),P_\infty)$ is a Swiss curve.
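For later use, note that the genus formula gives $2g-2=(q-1)(q^{n+1}+q^n-q^2)-2$, so that $$(dz)=\big((q-1)(q^{n+1}+q^n-q^2)-2\big)P_\infty.$$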
From the proof of \cite[Theorem 2.6]{GGS2010}, every affine $\mathbb{F}_{q^{2n}}$-rational point of the curve $Y^{q^2}-Y=Z^m$ splits completely in $\mathbb{F}_{q^{2n}}(GGS(q,n))/\mathbb{F}_{q^{2n}}(y,z)$. This is equivalent to saying, as for the GK curve, that if $\xi \in \mathbb{F}_{q^{2n}}$ is such that $Y^{q^2}-Y=\xi^m$ has $q^2$ solutions in $\mathbb{F}_{q^{2n}}$, then for each $\eta\in \mathbb{F}_{q^{2n}}$ satisfying $\eta^{q^2}-\eta=\xi^m$ there are precisely $q$ values $\theta \in \mathbb{F}_{q^{2n}}$ such that $\theta^q+\theta = \eta^{q+1}$. Also, the values $\xi\in \mathbb{F}_{q^{2n}}$ for which all the zeros of $z-\xi$ belong to $\mathbb{F}_{q^{2n}}$ are those satisfying $$\sum_{i=0}^{n-1} (\xi^m)^{q^{2i}}=0;$$ moreover for each of them there are exactly $q^3$ triples $(x,y,z) \in \mathbb{F}_{q^{2n}}^3$ such that $z^m = y^{q^2}-y$ and $y^{q+1}=x^q+x$. This means that there are exactly $q^{2n-1}-q^n+q^{n-1}$ such values $\xi\in \mathbb{F}_{q^{2n}}$, since $|GGS(q,n)(\mathbb{F}_{q^{2n}})|=q^{2n+2}-q^{n+3}+q^{n+2}+1$.

Let $$\Xi =\{\xi \in \mathbb{F}_{q^{2n}} \ : \ \sum_{i=0}^{n-1} (\xi^m)^{q^{2i}}=0\}.$$ Then $$\Xi \setminus \{0\}=\{\xi \in \mathbb{F}_{q^{2n}} \ : \ (\xi^m)^{q^{2(n-1)}-1}+(\xi^m)^{q^{2(n-2)}-1}+\cdots+(\xi^m)^{q^2-1}+1=0\}$$ has cardinality $(q^n+1)(q^{n-1}-1)$. Let $\mu_{(q^n-1)/(q-1)}$ be the set of $\frac{q^n-1}{q-1}$-th roots of unity, and let $k=\frac{n-1}{2}\geq2$. Then $$\Theta=\{\theta \in \mu_{(q^n-1)/(q-1)} \mid \theta^{q^{2(n-2)}+q^{2(n-3)}+\cdots+q^2+1} + \theta^{q^{2(n-3)}+\cdots+q^2+1}+\cdots+\theta^{q^2+1}+\theta+1=0\}$$ $$=\{\theta \in \mu_{(q^n-1)/(q-1)} \mid p(\theta)=0\},$$ where \begin{equation}\label{eq:polp} p(Z)=1+\sum_{i=0}^{k-1} Z^{\sum_{j=0}^{i}q^{2j} + \sum_{j=0}^{k-1} q^{2j+1}} + \sum_{i=0}^{k-1} Z^{\sum_{j=0}^{i} q^{2j+1}}\;\in\mathbb{F}_{q^{2n}}[Z] \end{equation} which is a separable polynomial of degree $\sum_{j=0}^{2k-1} q^j$; see the proof of \cite[Lemma 2]{ABQ} and in particular \cite[Equation (4)]{ABQ}. Thus, the polynomial $$f(Z)=Z \cdot p(Z^{(q^n+1)(q-1)}) \in \mathbb{F}_{q^{2n}}[Z]$$ factorizes completely over $\mathbb{F}_{q^{2n}}$, and \begin{eqnarray*} f(Z)=\prod _{\xi \in \Xi }(Z-\xi)&=&Z+\sum_{i=0}^{k-1} Z^{1+\sum_{j=0}^{i} q^{2j}(q^n+1)(q-1) + \sum_{j=0}^{k-1} q^{2j+1} (q^n+1)(q-1)}\\ & & + \sum_{i=0}^{k-1} Z^{1+\sum_{j=0}^{i} q^{2j+1}(q^n+1)(q-1)}. \end{eqnarray*} Also, $$f^{\prime}(Z)=1+Z^{q(q^n+1)(q-1)} + \sum_{i=1}^{k-1} Z^{\sum_{j=0}^{i} q^{2j+1}(q^n+1)(q-1)}$$ and hence $$\deg(f^{\prime}(z))_0 =\bigg(q\frac{(q^n+1)}{q+1}(q^{n-1}-1)\bigg)q^3.$$ Now consider the function $$ \sum_{\xi \in \Xi}\frac{1}{z-\xi}=\frac{f^{\prime}(z)}{f(z)}.$$ Its principal divisor is $$M-D+\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^3P_{\infty},$$ where $M$ is the zero divisor of $f^{\prime}(z)$ and $$D=(f(z))_0= \sum_{P \in \mathbb{P}_{q^{2n}}(GGS(q,n))\setminus \{P_{\infty}\}}P$$ has degree $q^3((q^{n-1}-1)(q^n+1)+1)= q^{2n+2}-q^{n+3}+q^{n+2}$. The principal divisor of $$\omega=\sum_{\xi \in \Xi}\frac{1}{z-\xi}\,dz$$ is $$M-D+\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^3+2g-2\bigg]P_{\infty}.$$ Consider the one-point divisor $G=s P_{\infty}$.
By Theorem \ref{Main:Swiss} and its proof, $$ C(D,G)^{\bot}=C\bigg(D,M+\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^3+2g-2-s\bigg]P_{\infty}\bigg)$$ $$=C\bigg(D,M+\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^3+(q-1)(q^{n+1}+q^n-q^2)-2-s\bigg]P_{\infty}\bigg).$$ Also, $C(D,G) \subset C(D,G)^{\bot}$ if $$s \leq \frac{\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^3+(q-1)(q^{n+1}+q^n-q^2)-2\bigg]}{2}.$$ From Theorem \ref{th:stab} we have the following result.

\begin{theorem}\label{Th:GGSQuantum} With the same notation as above, consider the $q^{2n}$-ary code $C(D,sP_{\infty})$ from the GGS curve. Assume that $$(q-1)(q^{n+1}+q^n-q^2)-2\leq s \leq \frac{\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^3+(q-1)(q^{n+1}+q^n-q^2)-2\bigg]}{2}.$$ Then there exists a quantum code with parameters $$[[\,q^{2n+2}-q^{n+3}+q^{n+2},\;q^{2n+2}-q^{n+3}+q^{n+2}+(q-1)(q^{n+1}+q^n-q^2)-2-2s,$$ $$\geq s-(q-1)(q^{n+1}+q^n-q^2)+2\,]]_{q^{2n}}.$$ \end{theorem}

\subsection{Abd\'on-Bezerra-Quoos curve}\label{SubSection:ABQ}
Let $q$ be a prime power and $n\geq3$ be an odd integer. The Abd\'on-Bezerra-Quoos curve $ABQ(q,n)$ is defined by the equation \begin{equation}\label{ABQ_equation} ABQ(q,n): Y^{q^2}-Y= X^m, \end{equation} where $m= (q^n+1)/(q+1)$; see \cite{ABQ,GGS2010}. The curve $ABQ(q,n)$ is singular, has genus $\frac{1}{2}(q-1)(q^{n}-q)$, and is $\mathbb F_{q^{2n}}$-maximal. Let $P_0=(0,0)$, $P_{(a,b)}=(a,b)$, and let $P_{\infty}$ be the unique ideal point of $ABQ(q,n)$. The point $P_\infty$ is the unique singular point of $ABQ(q,n)$. Yet, there is only one place of $ABQ(q,n)$ centered at $P_\infty$. As for the GK and GGS cases, $ABQ(q,n)$ is $\mathbb{F}_{q^{2n}}$-maximal and hence has $p$-rank zero. The extension $\mathbb F_{q^{2n}}(ABQ(q,n))/\mathbb F_{q^{2n}}(x)$ is a Galois extension of degree $q^2$, and $P_\infty$ is totally ramified in it. Thus, $(dx)=(2g-2)P_\infty$ and $(ABQ(q,n),P_\infty)$ is a Swiss curve.

An element $\xi \in \mathbb{F}_{q^{2n}}$ is such that $Y^{q^2}-Y=\xi^m$ has $q^2$ solutions in $\mathbb{F}_{q^{2n}}$ if and only if $$\sum_{i=0}^{n-1} (\xi^m)^{q^{2i}}=0.$$ Also, there are exactly $q^{2n-1}-q^n+q^{n-1}$ such values $\xi\in \mathbb{F}_{q^{2n}}$, since $|ABQ(q,n)(\mathbb{F}_{q^{2n}})|=q^{2n+1}-q^{n+2}+q^{n+1}+1$. Arguing as in Section \ref{SubSection:GGS}, the polynomial $$f(Z)=Z \cdot p(Z^{(q^n+1)(q-1)})\, \in \mathbb{F}_{q^{2n}}[Z]$$ factorizes completely over $\mathbb{F}_{q^{2n}}$; here, the polynomial $p(Z)\in\mathbb{F}_{q^{2n}}[Z]$ is as in Equation \eqref{eq:polp}. Also, \begin{eqnarray*} f(Z)=\prod _{\xi \in \Xi }(Z-\xi)&=&Z+\sum_{i=0}^{k-1} Z^{1+\sum_{j=0}^{i} q^{2j}(q^n+1)(q-1) + \sum_{j=0}^{k-1} q^{2j+1} (q^n+1)(q-1)} \\ &&+ \sum_{i=0}^{k-1} Z^{1+\sum_{j=0}^{i} q^{2j+1}(q^n+1)(q-1)}. \end{eqnarray*} Also, $$f^{\prime}(Z)=1+Z^{q(q^n+1)(q-1)} + \sum_{i=1}^{k-1} Z^{\sum_{j=0}^{i} q^{2j+1}(q^n+1)(q-1)}$$ and hence $$\deg(f^{\prime}(x))_0 =\bigg(q\frac{(q^n+1)}{q+1}(q^{n-1}-1)\bigg)q^2.$$ Now, the function $$ \sum_{\xi \in \Xi}\frac{1}{x-\xi}=\frac{f^{\prime}(x)}{f(x)}$$ has principal divisor $$M-D+\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^2 P_{\infty},$$ where $M$ is the zero divisor of $f^{\prime}(x)$ and $$D=(f(x))_0= \sum_{P \in \mathbb{P}_{q^{2n}}(ABQ(q,n))\setminus \{P_{\infty}\}}P$$ has degree $q^2((q^{n-1}-1)(q^n+1)+1)= q^{2n+1}-q^{n+2}+q^{n+1}$. The principal divisor of $$\omega=\sum_{\xi \in \Xi}\frac{1}{x-\xi}\,dx$$ is $$M-D+\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^2+2g-2\bigg]P_{\infty}.$$ Consider the one-point divisor $G=s P_{\infty}$.
By Theorem \ref{Main:Swiss} and its proof, \begin{eqnarray*} C(D,G)^{\bot}&=&C\bigg(D,M+\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^2+2g-2-s\bigg]P_{\infty}\bigg)\\ &=&C\bigg(D,M+\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^2+(q-1)(q^n-q)-2-s\bigg]P_{\infty}\bigg). \end{eqnarray*} Also, $C(D,G) \subset C(D,G)^{\bot}$ if $$s \leq \frac{\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^2+(q-1)(q^{n}-q)-2\bigg]}{2}.$$ The theorem below follows from Theorem \ref{th:stab}.

\begin{theorem}\label{Th:ABQQuantum} With the same notation as above, consider the $q^{2n}$-ary code $C(D,sP_{\infty})$ from the ABQ curve. Assume that $$(q-1)(q^{n}-q)-2\leq s \leq \frac{\bigg[\bigg( (q^{n-1}-1)\frac{q^n+1}{q+1}+1\bigg)q^2+(q-1)(q^{n}-q)-2\bigg]}{2}.$$ Then there exists a quantum code with parameters $$[[\,q^{2n+1}-q^{n+2}+q^{n+1},\;q^{2n+1}-q^{n+2}+q^{n+1}+(q-1)(q^{n}-q)-2-2s,\,\geq s-(q-1)(q^{n}-q)+2\,]]_{q^{2n}}.$$ \end{theorem}

\subsection{Suzuki and Ree curves}
Let $q_0= 2^s$, where $s\geq1$, and $q= 2q_0^2$. The Suzuki curve $S_q$ is given by the affine model $$S_q: Y^q+Y=X^{q_0}(X^q+X).$$ The curve $S_q$ is $\mathbb{F}_{q^4}$-maximal of genus $q_0(q-1)$. It has a unique singular point, namely its unique point at infinity $P_\infty$, which is a $q_0$-fold point and the center of just one place of $S_q$. The extension $\mathbb{F}_{q^4}(S_q)/\mathbb{F}_{q^4}(x)$ is a Galois extension of degree $q$ in which $P_\infty$ is the only ramified place, and it is totally ramified. Hence, by Remark \ref{remo1}, $(dx)=(2g-2)P_\infty=(2q_0(q-1)-2)P_\infty$ and $(S_q,P_\infty)$ is a Swiss curve.

Let $q_0= 3^s$, where $s\geq1$, and $q= 3q_0^2$. The Ree curve $R_q$ is given by the affine space model $$R_q:\begin{cases} Y^q-Y=X^{q_0}(X^q-X), \\ Z^q-Z=X^{2q_0}(X^q-X)\end{cases}.$$ This curve has genus $\frac{3}{2}q_0(q-1)(q+q_0+1)$ and it is $\mathbb{F}_{q^6}$-maximal. It has a unique singular point coinciding with its unique infinite point; moreover there is a unique place $P_\infty$ centered at it. The extension $\mathbb{F}_{q^6}(R_q)/\mathbb{F}_{q^6}(x)$ is a Galois extension of degree $q^2$ in which $P_\infty$ is the only ramified place, and it is totally ramified. Hence, by Remark \ref{remo1}, $(dx)=(2g-2)P_\infty$ and $(R_q,P_\infty)$ is a Swiss curve.

\begin{remark} Since Suzuki and Ree curves are Swiss curves, it makes sense to ask for a suitable set $I$ of rational points as well as a covering of $I$ made of lines, to which Theorem \ref{Main:Swiss} applies. According to the equations defining the curves, the most natural choice would probably be $I={S}_q(\mathbb{F}_q)$ and $I={R}_q(\mathbb{F}_q)$ respectively. Indeed, in both cases a nice covering of lines is given simply by the vertical lines $x=a$, with $a \in \mathbb{F}_q$. However, in this case one would obtain $f(X)=\prod_{a\in\mathbb{F}_q}(X-a)=X^q-X$, which clearly has constant derivative. Hence the construction would be the same as in \cite{MTT}. The determination of a suitable set $I$ and a covering of lines remains an open problem. \end{remark}

\section{$r$-Swiss curves and codes} \label{sec:r-swiss}
In this section we generalize the construction of Section \ref{sec:swiss} to a larger class of curves.
\begin{definition} Let $r$ be a positive integer.
An $r$-Swiss curve is an $(r+1)$-tuple $(\mathcal{C},P_1,\ldots,P_r)$ such that $\mathcal{C}$ is an absolutely irreducible $\mathbb{F}_q$-rational curve, $P_1,\ldots,P_r$ are distinct places of $\mathbb{F}_q(\mathcal{C})$, and the following properties hold:
\begin{enumerate}
\item $P_i$ is rational for every $i=1,\ldots,r$;
\item there exists a function $x\in \mathbb{F}_q(\mathcal{C})$ such that $(dx)=\frac{2g-2}{r}(P_1+\ldots+P_r)$;
\item ${\rm supp}\left((x)_\infty\right)=\{P_1,\ldots,P_r\}$.
\end{enumerate}
\end{definition}
\begin{remark} Clearly, a $1$-Swiss curve is just a Swiss curve. \end{remark}
Consider an $r$-Swiss curve $(\mathcal{C},P_1,\ldots,P_r)$ and the set $$\mathcal{A}=\left\{\alpha \in \mathbb{F}_q \ :\ (x-\alpha)_0- \sum_{i=1}^r v_{P_i}((x-\alpha)_0) P_i\leq \sum_{Q\in \mathbb{P}_q\setminus \{P_1,\ldots,P_r\}} Q \right\}.$$ The set $\mathcal{A}$ consists of all elements $\alpha\in \mathbb{F}_q$ such that all the zeros of the function $x-\alpha$ other than (possibly) $P_1,\ldots,P_r$ are rational and simple. Also, let $$D=\sum_{\alpha \in \mathcal{A}}\left( (x-\alpha)_0- \sum_{i=1}^r v_{P_i}((x-\alpha)_0) P_i \right).$$

\begin{theorem}\label{Main:WSwiss} Let $(\mathcal{C},P_1,\ldots, P_r)$ be an $r$-Swiss curve. With the same notation as above, consider another $\mathbb{F}_q$-rational divisor $G$ such that ${\rm supp}(G)\cap {\rm supp}(D) =\emptyset$. Then
\begin{enumerate}
\item $C(D,G)^{\bot} =C(D,E+\sum_{i=1}^r\left(\gamma_i+\frac{2g-2}{r}\right) P_i-G)$, for some positive divisor $E$ and some integers $\gamma_1,\ldots,\gamma_r$;
\item if, in addition, $2G\leq E+\sum_{i=1}^r\left(\gamma_i+\frac{2g-2}{r}\right) P_i $ then $C(D,G)\subset C(D,G)^{\bot}$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $h=\sum_{a \in \mathcal{A}} \frac{1}{x-a}$. Clearly, places in ${\rm supp}(D) \setminus \{P_1,\ldots, P_r\}$ are simple poles of $h$. Consider $$\omega=\left(h \right) dx.$$ By hypothesis $(dx)=\sum_{i=1}^r\frac{2g-2}{r}P_i$, and $$\left(h\right)=E-D+ \sum_{i=1}^r\gamma_i P_i,$$ where $E=\left(h\right)_0-\sum_{i=1}^r v_{P_i}((h)_0)P_i$ and $\sum_{i=1}^r \gamma_i =\deg D-\deg E$. Summing up, $$(\omega)=E-D+\sum_{i=1}^r\bigg(\gamma_i+\frac{2g-2}{r}\bigg) P_i.$$ Therefore $\omega$ has poles at places of $D$ and it is readily seen that the residue of $\omega$ at such places is $1$. Now the claim follows from \cite[Theorem 2.72]{27}. \end{proof}

\section{Applications to some $r$-Swiss curves}\label{sec:r-appl}
\subsection{GGK curves}\label{SubSection:GGK}
Let $q$ be a prime power and $n\geq3$ be an odd integer. The curve $GGK2(q,n)$ is defined by the equations \begin{equation}\label{GGK_equation} GGK2(q,n): \left\{ \begin{array}{l} X^{q+1} -1 = Y^{q+1}\\ Y\bigg(\frac{X^{q^2}-X}{X^{q+1}-1}\bigg)= Z^m\\ \end{array} \right. , \end{equation} where $m= (q^n+1)/(q+1)$; see \cite{BM}. The genus of $GGK2(q,n)$ is $\frac{1}{2}(q-1)(q^{n+1}+q^n-q^2)$, $GGK2(q,n)$ is $\mathbb F_{q^{2n}}$-maximal, and $GGK2(q,3)\cong GK_q$. The coordinate function $x$ has exactly $q+1$ distinct poles $P_1,\ldots,P_{q+1}$; also, the coordinate function $z$ has pole divisor $(z)_\infty=(q^2-q)(P_1+\ldots+P_{q+1})$, and $(dz)=\frac{2g-2}{q+1}(P_1+\ldots+P_{q+1})$ (see \cite[Page 17]{BM}). Hence, $(GGK2(q,n),P_1,\ldots,P_{q+1})$ is a $(q+1)$-Swiss curve.

\subsubsection{\bf The case $q=2$.} In the rest of this section, $q=2$ and $GGK2(q,n)$ reads \begin{equation*} GGK2(2,n): \left\{ \begin{array}{l} X^{3} -1 = Y^{3}\\ YX= Z^{(2^n+1)/3}\\ \end{array} \right..
\end{equation*}
It is easily seen that $z=a$ has exactly $q^3-q=6$ rational zeros if and only if either $a=0$ or $Y^6+Y^3-a^{2^n+1} \in \mathbb{F}_{2^{2n}}[Y]$ has $6$ distinct roots in $\mathbb{F}_{2^{2n}}$. From the maximality of $GGK2(2,n)$ and $[\mathbb{F}_{2^{2n}}(x,y,z) : \mathbb{F}_{2^{2n}}(z)]=6$ it follows that the set $$\mathcal{A}=\{a \in \mathbb{F}_{2^{2n}}^* \mid Y^6+Y^3=a^{2^n+1} \ \textrm{has 6 distinct roots in} \ \mathbb{F}_{2^{2n}}\}$$ has size $$|\mathcal{A}|=4(2^n+1)(2^{n-1}-1)/3.$$ Let $f(X)=\prod_{a \in \mathcal{A}}(X-a)$. The following can be checked by direct computation with MAGMA.
\begin{itemize}
\item $n=3$. In this case, $$f(X)=X^{36} + X^{27} + X^{18} + 1, \qquad f^\prime (X)=X^{26}.$$ Since $P_1,P_2,P_3$ are not zeros of $f^\prime$, we have $(f^\prime(z))_\infty=26(q^2-q)(P_1+P_2+P_3)$ and hence $$\deg(f^{\prime}(z))_0 =26(q+1)(q^2-q)=156.$$ Now, the function $$ \sum_{\xi \in \mathcal{A}}\frac{1}{z-\xi}=\frac{f^{\prime}(z)}{f(z)}$$ has principal divisor $$M-D+20 \sum_{i=1}^{3}P_i,$$ where $M$ is the zero divisor of $f^{\prime}(z)$ and $$D=(f(z))_0= \sum_{P \in \mathbb{P}_{2^{6}}(GGK2(2,3))\setminus \mathbb{P}_{2^{2}}(GGK2(2,3)) }P$$ has degree $216$. The principal divisor of $$\omega=\sum_{\xi \in \mathcal{A}}\frac{1}{z-\xi}dz$$ is $$M-D+26 \sum_{i=1}^{3}P_i.$$ Consider the multi-point divisor $G=s \sum_{i=1}^{3} P_i$. By Theorem \ref{Main:WSwiss} and its proof, $$C(D,G)^{\bot}=C\left(D,M+(26-s) \sum_{i=1}^{3}P_i \right).$$ Also, $C(D,G) \subset C(D,G)^{\bot}$ if $s \leq 13$. Now we apply Theorem \ref{th:stab}.
\begin{theorem}\label{Th:GGK21Quantum_3} With the same notation as above, consider the $2^6$-ary code $C(D,s\sum_{i=1}^{3}P_i )$ from the curve $GGK2(2,3)$. Assume that $6\leq s \leq 13$. Then there exists a quantum code with parameters $$[[\,216,\;k_s,\;\geq 3s-18\,]]_{2^{6}},\qquad k_s=\begin{cases} 196 & \text{if } s=6, \\ 192-6(s-7) & \text{if } 7\leq s\leq 13. \end{cases}$$ \end{theorem}
\item $n=5$. Here, \begin{align*} f(X)={}&X^{660} + X^{627} + X^{594} + X^{528} + X^{495} + X^{396} + X^{363} + X^{330} + X^{132} + X^{66} + 1, \end{align*} and $$ f^\prime (X)=X^{626} + X^{494} + X^{362}. $$ Since $P_1,P_2,P_3$ are not zeros of $f^\prime$, we have $(f^\prime(z))_\infty=626(q^2-q)\sum_{i=1}^{3} P_i$ and hence $$\deg(f^{\prime}(z))_0 =626(q+1)(q^2-q)=3756.$$ Now, the function $$ \sum_{\xi \in \mathcal{A}}\frac{1}{z-\xi}=\frac{f^{\prime}(z)}{f(z)}$$ has principal divisor $$M-D+68 \sum_{i=1}^{3}P_i,$$ where $M$ is the zero divisor of $f^{\prime}(z)$ and $$D=(f(z))_0= \sum_{P \in \mathbb{P}_{2^{10}}(GGK2(2,5))\setminus \mathbb{P}_{2^{2}}(GGK2(2,5)) }P$$ has degree $3960$. The principal divisor of $$\omega=\sum_{\xi \in \mathcal{A}}\frac{1}{z-\xi}dz$$ is $$M-D+98 \sum_{i=1}^{3}P_i.$$ Consider the multi-point divisor $G=s \sum_{i=1}^{3} P_i$. By Theorem \ref{Main:WSwiss} and its proof, $$C(D,G)^{\bot}=C\left(D,M+(98-s) \sum_{i=1}^{3}P_i \right).$$ Also, $C(D,G) \subset C(D,G)^{\bot}$ if $s \leq 49$. Now we apply Theorem \ref{th:stab}.
\begin{theorem}\label{Th:GGK21Quantum_5} With the same notation as above, consider the $2^{10}$-ary code $C(D,s\sum_{i=1}^{3}P_i )$ from the curve $GGK2(2,5)$. Assume that $30\leq s \leq 49$. Then there exists a quantum code with parameters $$[[\,3960,\;k_s,\;\geq 3s-90\,]]_{2^{10}},\qquad k_s=\begin{cases} 3868 & \text{if } s=30, \\ 3864-6(s-31) & \text{if } 31\leq s\leq 49. \end{cases} $$ \end{theorem}
\end{itemize}

\section{Comparisons}\label{sec:comp}
\begin{corollary} The $[[N,k,d]]_{q^6}$-codes constructed in Theorem \ref{Th:GKQuantum} are pure.
If in addition $s\geq 7q^5-14q^3+7q^2+12$, then they do not satisfy Condition \eqref{Dis:GV}. \end{corollary}
\begin{proof} Firstly, note that all the $[[N,k,d]]_{q^6}$-codes of Theorem \ref{Th:GKQuantum} satisfy $N\equiv k \pmod 2$. Also, the codes are pure. In fact, $C(D,G)$ is an $[N_1,k_1,d_1]_{q^6}$ code, with $N_1=q^8-q^6+q^5$, $k_1=s-\frac{(q^3+1)(q^2-2)}{2}$, $d_1\geq q^8-q^6+q^5-s$. It is readily seen that $d_1>k_1+1$ and, by Corollary \ref{Corollary}, the quantum codes are pure.

As $N-k+2=2s -q^5+2q^3-q^2+4$, the left-hand side of Condition \eqref{Dis:GV} reads $$\frac{q^{6(N-k+2)}-1}{q^{12}-1}<\frac{q^{6(N-k+2)}}{q^{12}-1}=\frac{q^{6(2s -q^5+2q^3-q^2+4)}}{q^{12}-1},$$ whereas the right-hand side is larger than \begin{eqnarray*} \binom{N}{d-1}(q^{12}-1)^{d-2}&=&\binom{N}{d-1}\frac{(q^{12}-1)^{d-1}}{q^{12}-1} >\left(\frac{N}{d-1}\right)^{d-1}\frac{(q^{12}-1)^{d-1}}{q^{12}-1} \\ &>&\left(\frac{N}{d-1}\right)^{d-2}\frac{q^{12(d-1)}}{q^{12}-1}\geq \frac{q^{12(s-q^5+2q^3-q^2+1)}}{q^{12}-1}q^{d-2}, \end{eqnarray*} where we used that $N/(d-1)\geq q$, which is implied by $s\leq q^7+q^4-2q^3+q^2-1$ and hence by the hypothesis $s\leq \frac{q^7-q^6+q^5+q^4-2q^3+q^2-2}{2}$. From $s\geq 7q^5-14q^3+7q^2+12$ it follows that $d-2+12(s-q^5+2q^3-q^2+1)\geq 6(2s -q^5+2q^3-q^2+4)$; therefore, the left-hand side is smaller than the right-hand side and Condition \eqref{Dis:GV} is not satisfied. \end{proof}

\section*{Acknowledgments}
The research of D. Bartoli and G. Zini was partially supported by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INdAM).
\end{document}
When did humanity have the knowledge to prove this semi-flat world is an octagon?

The world below is flat. However, there are no endless walls of ice or force or magic - instead, each side of the octagon "wraps" to the other side. That is, the North boundary wraps to the South boundary at the exact same point. The North-East to South-West, East to West, and South-East to North-West. There is also a featureless "infinitely" (for all reasonable purposes) tall tower at the cornerpoints, which occupies them all simultaneously. These wraps function exactly as if you printed this on paper, cut it out, and wrapped one edge to its counterpart.

This leads to some... weird navigation around the corners. If you were to walk around the tower, you would walk in three full circles (measured by angle) before reaching your starting point. Otherwise, navigation is fairly straightforward. Going across the north/south wrap doesn't change your east/west position, etc.

Other information that may be relevant or helpful:

The planet has two "Suns" - they are always directly across from each other, and rotate around the center of the world. The two "Suns" have a "Sun" face and a "Moon" face, which rotate in sync to provide a day/night cycle. Thus, the entirety of the world is at the same time of day. Assume that the day/night cycle functions otherwise like our terrestrial one.

The central landmass is approximately 3600 miles wide by 4600 miles tall. Height of the world is thus approximately 10,000 miles. This puts the surface area of the world (and oceans) at around 82.8 million square miles, a little over 42% of the Earth's surface. (If you want to get into the nitty gritty, each pixel on the map is 5mi per side.)

Gravity is a stable 1g at all points.

The world is truly "flat", and there are no map distortions like you would find on a spherical world.

Religion has imparted the knowledge that the world is the shape it is, and tells of the tower in the corner positions. Even so, some wish to prove that the world is this shape, beyond "blindly" accepting religious dogma.

Using European dating and technology, when would humanity be able to prove this world shape?

Things that are beyond the scope of this question:

This world affecting technological development. Assume relevant technology and knowledge are exactly what they were historically.

Magic or other factors that created the world. Assume there is no magic in the world for the purposes of answering this question.

Changes that the world shape would have on weather, wildlife, geography, etc.

Time needed to prove the world shape. This question is focused solely on technology and knowledge needed to prove it, not how long it would take to accomplish.

A reference for movement across edges: Going North/South across Point 7 takes you to Point 7, North/East from Point 5 takes you to South/West Point 5, etc. Walking along the coastline, starting at Point 5 heading East, you would go:

5 to 4, then cross (West to East)
4 to 1, then cross (SE to NW)
1 to 2 (along the "Inner" sea on the top left), then cross back (NW to SE)
2 to 6, then cross (SW to NE)
6 to 3, then cross (E to W)

And then you would have a long hike from 3 back to 5 along that very long NW/N coastline.

map-making exploration AndonAndon

$\begingroup$ Is the civilization in question based on the outer continent or the inner continent? If they have access to the tower it's going to be a lot easier.
Without at least seeing the edges for themselves, it's going to be very difficult to prove you aren't on a conventional flat plane. $\endgroup$ – Cadence Oct 6 '18 at 20:51 $\begingroup$ What do you mean by "European dating and technology" (sounds suspiciously like online dating)? I presume you mean to imply some period of European history and its associated technologies, but what period? $\endgroup$ – StephenG Oct 6 '18 at 21:05 $\begingroup$ Just so you know, your model is incoherent. You claim both that there is no magic, that the surface is flat, and that when someone moves north of the northern border they come out of the southern border. Those three things cannot be true simultaneously. It's like if I said that people live on a world that both has a sun, and does not have a sun; how long would it take them to figure out their situation with respect to the sun? Well, no person could be in that situation, so the question is moot. $\endgroup$ – Greg Schmit Oct 6 '18 at 23:44 $\begingroup$ @GregSchmit First, there is magic in the world. But I excluded it from valid answers to the question. That's all. I want to know when, technologically/scientifically, people could figure out the shape. Second, you must not have read the question well, because there is a tower at the corner points - Its existence from a worldbuilding perspective is specifically to prevent such occurrences from happening. $\endgroup$ – Andon Oct 6 '18 at 23:57 $\begingroup$ @Andon Whether there is a tower at the "corner" is irrelevant, my example was to show that the model is incoherent. The model is incoherent even with the tower there. In fact, it is even more incoherent because the dimensions of the tower couldn't be defined since the space it occupies is undefined. $\endgroup$ – Greg Schmit Oct 7 '18 at 0:31 Your not-Europeans would know that the world was flat in antiquity. The Greeks knew the Earth was round by ca. 500 BC; this may have originated with or at least around the time of Pythagoras. It's thought to have been based on observations of the stars: the Greeks knew certain stars hung out near the North Pole, but from different places they appeared at different heights above the horizon. This is consistent with a spherical world, with the changes in elevation corresponding to changes in latitude. Your astronomers, however, would know that the stars are identical no matter where they are. (They might or might not still move, depending on whether your flat world rotates; it doesn't make a difference here.) They would know that the sea blurs out into an unresolvable haze rather than a distinct horizon, and that distant objects are fully visible when resolved, rather than the uppermost parts being visible first. These are basic observations that could be pieced together with ancient astronomical and other techniques. Without traveling to the edge or corners, it would be impossible to know about the boundary conditions there. In fact, it would be impossible to know you've passed over one of the edges from measurement alone. (It would be clear if you circumnavigated the world, of course, or otherwise ran into known landmarks.) Only by traversing the corners would the full shape of the world become apparent. In this case a number of geometric methods, such as measuring angles or distances involved in walking around the tower, could reveal that shapes there don't work the way they do in "regular" space.
Again, this is a measurement that many well-known ancient civilizations (the Greeks and Egyptians spring to mind) could have carried out. The final puzzle would be in figuring out how the edges are actually mapped together. This might rely on circumnavigation. Essentially there are two "maps" to consider: the order of some set of landmarks as you travel in a circle around the tower, and the directions those landmarks are from the center of the plane. Figuring out the first is trivial, but the second probably requires some people to actually go and find out; I suspect any analytic method will give multiple possible solutions. TL;DR: Mapping whichever part of the world you're in (the center or the rim) is doable with Bronze Age know-how. Reconciling the two maps will require actually going and finding points of reference, which depends on how easy they find sailing. Cadence Magellan's fleet finished the first circumnavigation in 1522, after a three-year voyage. In the following centuries, the European powers sent countless expeditions throughout the world to discover and annex it, thus expanding and refining the map. Maps from the 16th and 17th century already show a round world, even if bits and pieces are still missing since nobody had explored them yet. The oldest globe map to have survived to us dates from the 1490s, although it has many errors and the American continent is, of course, missing. So, taking into account that technology develops the same way in your world, I'd say they could prove it as soon as their 16th century. Sava $\begingroup$ Re-read the question. The question describes a weird surface, which cannot be physically realized in three-dimensional space. It's most definitely not a sphere. $\endgroup$ – AlexP Oct 6 '18 at 20:38 $\begingroup$ My point was that circumnavigation is not enough. The question asks how to prove that the world has the weird shape as described. $\endgroup$ – AlexP Oct 6 '18 at 20:51 $\begingroup$ The actual question in bold letters is: 'Using European dating and technology, when would humanity be able to prove this world shape?'. 'When', not 'How'. $\endgroup$ – Sava Oct 6 '18 at 20:54 $\begingroup$ Precisely, 'How' is a different question entirely (And one I intend to ask). $\endgroup$ – Andon Oct 6 '18 at 21:44 $\begingroup$ I agree with @AlexP, "as soon as their 16th century" doesn't describe a time period, but rather a point of advancement in tech and ingenuity. But these are obviously required for advanced cartography, so the terminology of "when" and "how" is tricky in this question. Perhaps the OP ought to have asked "how" and then decided on the "when". $\endgroup$ – Nahshon paz Oct 7 '18 at 17:55 You know the geometry of your world when you can walk around a corner. You can infer it when you can tell you have walked across the edges, but you can't be sure of the geometry until you've done the corners. If you were to walk around the tower [at the corners], you would walk in three full circles (Measured by angle) before reaching your starting point. Unless I'm misreading your text, this is wrong (or at least contradictory). See the diagram below. You walk from 1 to 2, then 2 to 3, then 3 to 4 and finally 4 to 1. That's four interior angles of the octagon ($135^\circ$ each). So you walk a path of $4\times135^\circ=540^\circ$, or one and a half times a normal circle (or the interior). Your idea seems to be that the corners all overlap (which would lead to your "three full circles" idea).
But you also want east-west and north-south wrap around. If you try to do this then the corners cannot all be together. Now because four corners are involved in each tower circumnavigation, you only need to circumnavigate at two corners (one of the East-West set and one of the North-South set). So when your people can manage the two trips and share their information, they can work out the geometry of your world. How soon they can do this depends on factors outside the question: How well do people (governments) share information? How detailed is their map making? How large is the world and what means of transport do they have? Are there obstacles in the way (rivers, seas, mountains) or weather problems? How reliable and fast is the spread of information? Note that navigational information would be considered potentially secret by governments. This is all very detailed and there's no point in discussing it here. But, e.g., the Vikings could have performed this navigation, weather and distances allowing. If the corners are landlocked it can be done by anyone local. But how soon anyone can piece all the information together depends on the politics of your world as well as the technology. StephenG $\begingroup$ I believe you misread what OP wrote about how the edges are linked: if North is linked to South, then East is linked to West, North-East is linked to South-West and North-West is linked to South-East. Thus your points #1 and 3 on the map are wrong. $\endgroup$ – Sava Oct 6 '18 at 21:57 $\begingroup$ @Sava is correct. I've updated the question with a diagram and such to make it easier. $\endgroup$ – Andon Oct 6 '18 at 22:14 $\begingroup$ As a side note, I feel very sorry for whatever poor soul would try to draw a map not centered on the central landmass... $\endgroup$ – Sava Oct 6 '18 at 22:20 $\begingroup$ @Sava that's because the model that the OP presented is incoherent. $\endgroup$ – Greg Schmit Oct 6 '18 at 23:45 Even so, some wish to prove that the world is this shape, beyond "blindly" accepting religious dogma. Look at the planet's shadow on its moon. The shape of the shadow tells you the shape of the planet. RonJohn $\begingroup$ The planet cannot cast a shadow on its 'moons' given the setup, as they orbit above the plane of the world. $\endgroup$ – Sava Oct 7 '18 at 17:56 $\begingroup$ "as they orbit above the plane of the world." I'd need to see a picture, but I'm pretty sure that's not how orbits work. $\endgroup$ – RonJohn Oct 7 '18 at 18:32 $\begingroup$ There's magic in this world, OP said it. $\endgroup$ – Sava Oct 7 '18 at 18:40 $\begingroup$ Please enlighten me as to why OP's work is 'poorly-thought' in your opinion? If you say that it is because the world OP describes does not conform to known laws of physics, I will begin to question why you are here on an SE dedicated to the construction of imaginary worlds and settings. $\endgroup$ – Sava Oct 7 '18 at 19:05 $\begingroup$ @Sava I commented on your instant use of magic to solve every poorly-thought scenario. "an SE dedicated to the construction of imaginary worlds and settings." Constructing a world means rational rules. Heck, even magic has rules: coppermind.net/wiki/Sanderson%27s_Laws_of_Magic Quoting, "If characters (especially viewpoint characters) solve a problem by use of magic, the reader should be made to understand how that magic works. Otherwise, the magic can constitute a deus ex machina." $\endgroup$ – RonJohn Oct 7 '18 at 19:14
\begin{document} \title{Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization} \begin{abstract} The out-of-distribution (OOD) problem generally arises when neural networks encounter data that significantly deviates from the training data distribution, \textit{i}.\textit{e}., in-distribution (InD). In this paper, we study the OOD problem from a neuron activation view. We first formulate neuron activation states by considering both the neuron output and its influence on model decisions. Then, we propose the concept of \textit{neuron activation coverage} (NAC), which characterizes the neuron behaviors under InD and OOD data. Leveraging our NAC, we show that 1) InD and OOD inputs can be naturally separated based on the neuron behavior, which significantly eases the OOD detection problem and achieves a record-breaking performance of 0.03\% FPR95 on ResNet-50, outperforming the previous best method by 20.67\%; 2) a positive correlation between NAC and model generalization ability consistently holds across architectures and datasets, which enables a NAC-based criterion for evaluating model robustness. By comparison with the traditional validation criterion, we show that the NAC-based criterion not only can select more robust models, but also has a stronger correlation with OOD test performance. \end{abstract} \addtocontents{toc}{\protect\setcounter{tocdepth}{0}} \section{Introduction} \label{Sec:Intro} Recent advances in machine learning systems hinge on an implicit assumption that the training and test data share the same distribution, known as in-distribution (InD)~\cite{tech:ViT,tech:conv,tech:ResNet,tech:deep_conv}. \setlength\intextsep{3pt} \begin{wrapfigure}[18]{r}{0cm} \centering \includegraphics[width=0.38\textwidth]{supplements/Intro_ood_plots.pdf} \caption{OOD~detection performance (AUROC) of {\texttt{NAC-UE}} on ResNet-50, averaged over 4 datasets. We achieve a record-breaking performance without impairing model classification ability.} \label{Fig:Intro-OOD-AUROC} \end{wrapfigure} However, this assumption rarely holds in real-world scenarios, due to the presence of out-of-distribution (OOD) data, \textit{e}.\textit{g}., samples pertaining to unseen classes~\cite{ood_example1,Setup:DG}. Such distribution shifts between OOD and InD often drastically challenge well-trained models, resulting in significant performance drops~\cite{OOD_Problem:ImgNet,OOD_Problem:ML-State,OOD_Problem:Perturbation}. Prior efforts tackling this OOD problem mainly arise from two avenues: 1) OOD detection and 2) OOD generalization. The former targets designing tools that differentiate between InD and OOD data inputs, thereby refraining from using unreliable model predictions during real-world deployment~\cite{OOD_Detect:ODIN,OOD_Detect:Energy,OOD_Detect:MSP,OOD_Detect:Mahalanobis,OOD_Detect:DICE,OOD_Detect:GradNorm,OOD_Detect:ATOM,OOD_Detect:MOS,OOD_Detect:SimpleAct}. In contrast, OOD generalization focuses on developing robust networks to generalize to unseen OOD data, though solely leveraging InD data for training~\cite{Setup:DG,Baseline:MMD,Baseline:CORAL,Baseline:Fish,Baseline:GroupDRO,Baseline:AND-Mask,Baseline:IRM,CL&DG:SelfReg,Setup:DomainBed}. Despite a plethora of studies emerging over this OOD problem, we find that an alternative space -- \textit{the behavior of individual neurons} -- still remains under-explored.
\setlength\intextsep{0pt} \begin{wrapfigure}[17]{R}{0cm} \centering \includegraphics[width=0.37\textwidth]{supplements/Intro_neuron_plots_plain.pdf} \caption{OOD~vs.~InD neuron activation states. We utilize PACS~\cite{Dataset:PACS} \texttt{Photo} domain as InD, and \texttt{Sketch} as OOD. All neurons stem from the penultimate layer in ResNet-50. } \label{Fig:Intro-Neuron-OOD} \end{wrapfigure} As demonstrated in Figure~\ref{Fig:Intro-Neuron-OOD}, neurons could exhibit distinct activation patterns when exposed to data inputs from InD and OOD. This reveals the potential of leveraging neuron behavior to characterize model states in terms of the OOD problem. Yet, though several studies recognize this significance, they either choose to modify the neural networks~\cite{OOD_Detect:ReAct}, or lack a suitable definition of neuron activation states~\cite{OOD_Detect:LINe,NACT_DG:NeuronCoverage}. For instance, \cite{OOD_Detect:ReAct} proposes a neuron truncation strategy that clips neuron output to separate the InD and OOD data, thereby improving OOD detection. However, such truncation would unexpectedly decrease the model classification ability~\cite{OOD_Detect:SimpleAct}. More recently, \cite{OOD_Detect:LINe} and \cite{NACT_DG:NeuronCoverage} employ a threshold to characterize neurons into binary states (\textit{i}.\textit{e}., activated or not) based on the neuron output. This characterization, however, discards valuable neuron distribution details. Unlike them, in this paper, we show that by leveraging natural neuron activation states, a simple statistical property of the neuron distribution could effectively facilitate OOD solutions. We first propose to formulate the neuron activation state by considering both the neuron output and its influence on model decisions. Specifically, inspired by~\cite{OOD_Detect:GradNorm}, we model neuron influence as the gradients derived from Kullback-Leibler (KL) divergence~\cite{tech:KL} between network output and a uniform vector. Then, to characterize the relationship between neuron behavior and the OOD problem, we draw insights from neuron coverage analysis in system testing~\cite{NACT_System:DeepXplore,NACT_System:DeepGauge,NACT_System:DeepHunter}, which reveals that rarely activated (covered) neurons under an input set can potentially trigger undetected defects, \textit{e}.\textit{g}., misclassifications. In this sense, we introduce the concept of \textit{neuron activation coverage} ({NAC}), which {quantifies the degree that a neuron state is covered under an InD set}. Specifically, if a neuron state is frequently activated by InD data, the NAC score would be high, denoting fewer potential defects that can be triggered under this state. In this work, we apply NAC to two OOD tasks\footnote{The code is available at \texttt{{https://github.com/bierone/ood\_coverage}}}: \bfstart{OOD detection} Since OOD data often trigger abnormal neuron activations, they should present smaller NAC scores compared to the InD input data. As such, we present \textbf{NAC} for \textbf{U}ncertainty \textbf{E}stimation (\texttt{NAC-UE}), which directly averages NAC scores over all neurons as data uncertainty. We evaluate our method across three architectures and benchmarks, establishing a record-breaking performance on the large-scale ImageNet OOD benchmark~\cite{Setup:ImageNet}.
Notably, our \texttt{NAC-UE} method achieves a 0.03\% FPR95 (99.99\% AUROC) on the ResNet-50 backbone without impairing model classification ability, which outperforms the previous best method by 20.67\% (4.96\%) (see Figure~\ref{Fig:Intro-OOD-AUROC}). \bfstart{OOD generalization} As a larger NAC score could indicate fewer potential defects, we hypothesize that the more fully a neuron's activation space is covered by InD training data, the more robust the neuron would be. To this end, we employ \textbf{NAC} for \textbf{M}odel \textbf{E}valuation (\texttt{NAC-ME}), which measures model generalization ability based on the integral of NAC distribution \textit{w.r.t.} all neurons. Through experiments on DomainBed~\cite{Setup:DomainBed}, we find that a positive correlation between NAC and model generalization ability consistently holds across architectures and datasets. Moreover, by comparison with the InD validation criterion, \texttt{NAC-ME} not only can select more robust models, but also strongly correlates with OOD test performance. On the TerraInc dataset, \texttt{NAC-ME} achieves a rank correlation of 34.84\% with test accuracy, beating the validation criterion by 33.92\% on Vit-b16. \begin{figure*} \caption{Illustration of our NAC-based methods. Our NAC function is derived from the probability density function (PDF), which quantifies the degree that a neuron state is covered under the InD dataset $X$. Building upon NAC, we devise two approaches for tackling different OOD problems: OOD Detection (\texttt{NAC-UE}) and OOD Generalization (\texttt{NAC-ME}). } \label{Fig:Method} \end{figure*} \section{NAC: Neuron Activation Coverage} \label{Sec:Method_Pre} This paper studies the OOD problem for supervised multi-class learning, where $\mathcal{X}=\mathbb{R}^d$ denotes the input space and $\mathcal{Y}=\{1,2,...,C\}$ is the output space. Let $D_{tr} = \{(\mathbf{x}_i,y_i)\}_{i=1}^{n_t}$ be the training set and $D_{val} = \{(\tilde{\mathbf{x}}_i,\tilde{y}_i)\}_{i=1}^{n_v}$ be the validation set, both of which are comprised of \textit{i.i.d.} samples from the joint distribution $\mathcal{P} = \mathcal{X} \times \mathcal{Y}$. A neural network parameterized by $\theta$, $F(\mathbf{x}; \theta): \mathcal{X} \rightarrow \mathbb{R}^{|\mathcal{Y}|}$, is trained on samples drawn from $\mathcal{P}$, producing a logit vector for classification. We denote $\mathcal{D}_{in}$ as the marginal distribution of $\mathcal{P}$ over $\mathcal{X}$, representing the distribution of InD data. During testing, we may encounter out-of-distribution data, which follows a distribution $\mathcal{D}_{out}$ over $\mathcal{X}$. Figure~\ref{Fig:Method} illustrates our {NAC}-based approaches. In the following, we first formulate the neuron activation state (Section~\ref{Sec:Method_NState}), and then introduce the details of our NAC function (Section~\ref{Sec:Method_NACFunc}). We finally show how to apply NAC to two OOD problems (Section~\ref{Sec:Method_App}). See Appendix for more details. \subsection{Formulation of Neuron Activation State} \label{Sec:Method_NState} Neuron outputs generally depend on the propagation from network input to the layer where the neuron resides. However, this does not consider the neuron influence in subsequent propagations. {As such, we introduce gradients backpropagated from the KL divergence between network output and a uniform vector~\cite{OOD_Detect:GradNorm}, to model the neuron influence.
} Formally, we denote by $f(\mathbf{x}) = \mathbf{z} \in \mathbb{R}^{N}$ the output vector of a specific layer (Section~\ref{Sec:Exp_OOD_Detection} discusses this layer choice), where $N$ is the number of neurons and $z_i$ is the raw output of the $i$-th neuron in this layer. By setting the uniform vector $\mathbf{u} = [1/C, 1/C, ..., 1/C]\in \mathbb{R}^C$, the desired KL divergence can be given as: \begin{equation} D_{\rm KL}({\mathbf{u}}||\mathbf{p}) = \sum_{i=1}^{C}u_i \log{\frac{u_i}{p_i}} = - \sum_{i=1}^{C}u_i \log{p_i}-H(\mathbf{u}), \end{equation} where $\mathbf{p} = {\rm softmax}(F(\mathbf{x}))$, and ${p}_i$ denotes the $i$-th element of $\mathbf{p}$. $H(\mathbf{u})= -\sum_{i=1}^{C}u_i \log{u_i}$ is a constant. By combining the KL gradient with the neuron output, we then formulate the \textit{neuron activation state} as, \begin{equation} \hat{\mathbf{z}} = \sigma(\mathbf{z} \odot \frac{\partial{D_{\rm KL}({\mathbf{u}}||\mathbf{p})}}{\partial \mathbf{z}}) , \end{equation} where $\sigma(x) = 1/(1+e^{-\alpha x})$ is the sigmoid function with a steepness controller $\alpha$. In the rest of this paper, we will also use the notation $\hat{f}(\mathbf{x}) := \hat{\mathbf{z}}$ to represent the neuron state function. \bfstart{Theoretical insights} We further analyze the gradients from KL divergence to show how this part contributes to the neuron activation state. Without loss of generality, let the predictor following $\mathbf{z}$ be two fully-connected layers: $g(\mathbf{z}) = \mathbf{W}_2[\mathbf{W}_1\mathbf{z}]^{+}$, such that $F = g \circ f$. Here, $\mathbf{W}_1 \in \mathbb{R}^{K\times N}$ and $\mathbf{W}_2 \in \mathbb{R}^{C\times K}$ are two weight matrices, and $[\cdot]^+$ denotes the ReLU function. For simplicity, the bias term is absorbed into the weight matrices. Since $F(\mathbf{x}) = g(f(\mathbf{x})) = g(\mathbf{z})$, we have $\partial F(\mathbf{x}) / \partial \mathbf{z} = \partial g(\mathbf{z}) / \partial \mathbf{z} = (\mathbf{W}_1^T\mathbf{M})\mathbf{W}_2^T$, where $\mathbf{M} = {\rm diag}(1_{\mathbf{W}_1\mathbf{z}>0})$ is the gradient mask of the ReLU function. The neuron activation state thus can be given as: \begin{equation} \hat{\mathbf{z}} = \sigma(\mathbf{z} \odot \frac{\partial{D_{\rm KL}}}{\partial \mathbf{z}}) = \sigma(\mathbf{z} \odot (\frac{\partial g(\mathbf{z})}{\partial \mathbf{z}} \cdot \frac{\partial{D_{\rm KL}}}{\partial g(\mathbf{z})})) = \sigma({\mathbf{z}} \odot (\aoverbrace[L1U1R]{(\mathbf{W}_1^T\mathbf{M})\mathbf{W}_2^T}^{\text{relevance to output space}} \cdot \aoverbrace[L1U1R]{(\mathbf{p}-\mathbf{u})}^{\text{sample confidence}})), \label{Eq:Insights} \end{equation} where ${\partial{D_{\rm KL}}}/{\partial g(\mathbf{z})} = (\mathbf{p}-\mathbf{u})$ measures how model predictions deviate from a uniform distribution, thus denoting sample confidence~\cite{OOD_Detect:GradNorm}. As shown above, we build the neuron activation state from three perspectives: 1) the magnitude of the raw neuron output $\mathbf{z}$; 2) the neuron relevance to the output space; and 3) the model confidence on the input. Intuitively, if a neuron is not relevant to the output (or the model is not confident about the input), the neuron would be considered less active during propagation. \subsection{Neuron Activation Coverage (NAC) Function} \label{Sec:Method_NACFunc} With the formulation of neuron activation state, we now introduce the \textit{neuron activation coverage} (NAC) function to characterize the neuron behaviors under InD and OOD data.
Inspired by system testing~\cite{NACT_System:DeepXplore,NACT_System:DeepGauge,NACT_System:DeepHunter}, NAC aims to quantify the degree that a neuron state is covered under an InD dataset. The intuition is that \textit{if a neuron state is rarely activated (covered) by any InD input, the chances of triggering bugs (\textit{e}.\textit{g}., misclassification) under this state would be high.} Since NAC directly measures the statistical property (\textit{i}.\textit{e}., coverage) of the distribution \textit{w.r.t.} neuron states, we derive the NAC function from the probability density function (PDF). Formally, given a state $\hat{z}_i$ of the $i$-th neuron, and its PDF $\kappa_X^i(\cdot)$ over an InD set $X$, the function for NAC can be given as: \begin{equation} \Phi^i_X(\hat{z}_i;r)= \frac{1}{r}{\rm min}(\kappa_X^i(\hat{z}_i), r), \end{equation} where $\kappa_X^i(\hat{z}_i)$ is the probability density of $\hat{z}_i$ over the set $X$, and $r$ denotes the lower bound for achieving full coverage \textit{w.r.t.} state $\hat{z}_i$. In cases where the neuron state $\hat{z}_i$ is frequently activated by InD data, the NAC score $\Phi^i_X(\hat{z}_i;r)$ would be 1, denoting fewer potential defects that can be triggered under this state. Notably, if $r$ is too low, noisy activations would dominate the coverage, reducing the significance of NAC scores. Conversely, an excessively large value of $r$ also makes the NAC function vulnerable to data biases. For example, given a homogeneous dataset comprising numerous similar samples, the NAC score of a specific neuron state can be easily mischaracterized as abnormally high, marginalizing the effects of other meaningful states. We analyze the effect of $r$ in Section~\ref{Sec:Exp_OOD_Detection}. \subsection{Applications} \label{Sec:Method_App} After modeling the NAC function over InD data, we can directly apply it to tackle existing OOD problems. In the following, we illustrate two application scenarios. \bfstart{Uncertainty estimation for OOD detection} Since OOD data often trigger abnormal neuron behaviors (corresponding to small NAC scores), we employ \textbf{NAC} for \textbf{U}ncertainty \textbf{E}stimation (\texttt{NAC-UE}), which directly averages NAC scores over all neurons as the uncertainty of input samples. Formally, given a test sample $\mathbf{x}^*$, the function for \texttt{NAC-UE} can be given as, \begin{equation} S(\mathbf{x}^*;\hat{f},X)={\frac{1}{N} \sum_{i=1}^{N}} \Phi_{X}^{i}(\hat{f}(\mathbf{x}^*)_i;r), \label{Eq:NAC-UE} \end{equation} where $N$ is the number of neurons; $\hat{f}(\mathbf{x}^*)_i:=\hat{z}_i$ denotes the activation state of the $i$-th neuron; $r$ is the controller of the NAC function. In particular, if the neuron states triggered by $\mathbf{x}^*$ are frequently activated by InD samples, the coverage score $S(\mathbf{x}^*;\hat{f},X)$ would be high, suggesting that $\mathbf{x}^*$ is likely to come from the InD distribution. As such, we propose using \texttt{NAC-UE} for OOD detection following~\cite{OOD_Detect:LINe,OOD_Detect:GradNorm,OOD_Detect:ReAct,OOD_Detect:Energy}: \begin{equation} D(\mathbf{x}^*) = \begin{cases} \mbox{InD} & \mbox{if } S(\mathbf{x}^*;\hat{f},X) \ge \lambda; \\ \mbox{OOD} & \mbox{if } S(\mathbf{x}^*;\hat{f},X) < \lambda, \end{cases} \end{equation} where $\lambda$ is a threshold. The sample with an uncertainty score $S(\mathbf{x}^*;\hat{f},X)$ less than $\lambda$ would be categorized as OOD (\textit{i}.\textit{e}., $\mathbf{x}^* \in \mathcal{D}_{out}$); otherwise, InD (\textit{i}.\textit{e}., $\mathbf{x}^* \in \mathcal{D}_{in}$).
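To make this pipeline concrete, we provide a minimal PyTorch-style sketch of \texttt{NAC-UE} below. It is an illustrative simplification rather than our released implementation: \texttt{model.penultimate} and \texttt{model.head} are hypothetical hooks for obtaining the layer output $\mathbf{z}$ and the logits, and the PDF $\kappa^i_X$ is estimated with a plain histogram over $[0,1]$ (the Approximation paragraph below and the Appendix discuss this choice).

\begin{verbatim}
# Illustrative sketch of NAC-UE; `model.penultimate` / `model.head` are
# hypothetical hooks. Gradients must be enabled (no torch.no_grad()).
import torch
import torch.nn.functional as F

def neuron_states(model, x, alpha=1000.0):
    """hat_z = sigmoid(alpha * z * dKL(u||p)/dz); shape (B, N), entries in (0, 1)."""
    z = model.penultimate(x)                   # (B, N) raw neuron outputs
    log_p = F.log_softmax(model.head(z), dim=-1)
    u = torch.full_like(log_p, 1.0 / log_p.size(-1))
    kl = F.kl_div(log_p, u, reduction="sum")   # sum over batch of D_KL(u || p)
    (dkl_dz,) = torch.autograd.grad(kl, z)     # row b depends only on sample b
    return torch.sigmoid(alpha * z * dkl_dz).detach()

def fit_nac(states_ind, m_bins=10000, r=1.0):
    """Histogram estimate of Phi^i_X on [0, 1] for every neuron: (N, m_bins)."""
    counts = torch.stack([torch.histc(col, bins=m_bins, min=0.0, max=1.0)
                          for col in states_ind.t()])
    density = counts * m_bins / states_ind.size(0)   # bin width is 1 / m_bins
    return torch.clamp(density / r, max=1.0)         # Phi = min(kappa, r) / r

def nac_ue_score(phi, states):
    """Average coverage over neurons; higher scores suggest InD inputs."""
    m_bins = phi.size(1)
    idx = (states * m_bins).long().clamp(max=m_bins - 1)  # per-state bin index
    return phi.gather(1, idx.t()).t().mean(dim=1)         # (B,) scores
\end{verbatim}

At test time, the decision rule $D(\mathbf{x}^*)$ then reduces to checking \texttt{nac\_ue\_score(phi, neuron\_states(model, x)) < lam} for a threshold \texttt{lam} calibrated on InD data.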
\bfstart{Model evaluation for OOD generalization} As a larger NAC score could indicate fewer potential defects, we hypothesize that the more fully the neuron activation space is covered by InD data, the more robust a neuron would be. In this sense, we propose \textbf{NAC} for \textbf{M}odel \textbf{E}valuation (\texttt{NAC-ME}), which characterizes model generalization ability based on the integral of NAC distribution \textit{w.r.t.} all neurons. Formally, given an InD dataset $X$, \texttt{NAC-ME} measures the generalization ability of a model (parameterized by $\theta$) as the average of the integral \textit{w.r.t.} the NAC distribution: \begin{equation} G(X, \theta)=\frac{1}{N} \sum_{i=1}^{N}\int_{\xi=0}^{1} \Phi_{X}^{i}(\xi;r)~d\xi, \end{equation} where $N$ is the number of neurons, and $r$ is the controller of the NAC function. Specifically, if a neuron is consistently active on the whole activation space (\textit{i}.\textit{e}., high NAC integral), we consider the probability that it encounters bugs to be lower, indicating favorable robustness. \bfstart{Approximation} Our approach supports mini-batch approximation, allowing for efficient processing of large datasets. We utilize the typical histogram-based approach to model the PDF, which divides the neuron activation space into $M$ equally-spaced intervals. In this way, mini-batch approximation iteratively takes a random batch of neuron states as input and assigns them to the corresponding intervals. Furthermore, we efficiently calculate $G(X,\theta)$ based on the Riemann approximation~\cite{Rieman}, \begin{equation} G(X, \theta) = \frac{1}{MN} \sum_{i=1}^{N}\sum_{k=1}^{M}\Phi_{X}^{i}(\frac{k}{M}; r). \label{Eq:OOD_Eval_Approximation} \end{equation} \begin{table*} \centering \adjustbox{max width=1.0\textwidth}{ \begin{tabular}{l cc cc cc cc cc} \toprule \multirow{3}{*}{Method} &\multicolumn{2}{c}{iNaturalist~\cite{OOD_Dataset:iNaturalist}} &\multicolumn{2}{c}{SUN~\cite{OOD_Dataset:SUN}} &\multicolumn{2}{c}{Places~\cite{OOD_Dataset:Places}} &\multicolumn{2}{c}{Textures~\cite{OOD_Dataset:Textures}} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5}\cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC \\ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ \\ \midrule \multicolumn{11}{c}{\emph{Backbone: ResNet-50}} \\ {MSP}~\cite{OOD_Detect:MSP} &54.99 &87.74 &70.83 &80.86 &73.99 &79.76 &68.00 &79.61 &66.95 &81.99 \\ {ODIN}~\cite{OOD_Detect:ODIN} &47.66 &89.66 &60.15 &84.59 &67.89 &81.78 &50.23 &85.62 &56.48 &85.41 \\ {Mahalanobis}~\cite{OOD_Detect:Mahalanobis} &97.00 &52.65 &98.50 &42.41 &98.40 &41.79 &55.80 &85.01 &87.43 &55.47 \\ {Energy}~\cite{OOD_Detect:Energy} &55.72 &89.95 &59.26 &85.89 &64.92 &82.86 &53.72 &85.99 &58.41 &86.17 \\ {SSD}~\cite{OOD_Detect:SSD} &57.16 &87.77 &78.23 &73.10 &81.19 &70.97 &36.37 &88.52 &63.24 &80.09 \\ {DICE}~\cite{OOD_Detect:DICE} &25.63 &94.49 &35.15 &90.83 &46.49 &87.48 &31.72 &90.30 &34.75 &90.77 \\ {DICE+ReAct}~\cite{OOD_Detect:DICE} &18.64 &96.24 &25.45 &93.94 &36.86 &90.67 &28.07 &92.74 &27.25 &93.40 \\ {ASH-B}~\cite{OOD_Detect:SimpleAct} &14.21 &97.32 &22.08 &95.10 &33.45 &92.31 &21.17 &95.50 &22.73 &95.06 \\ {LINe}~\cite{OOD_Detect:LINe} &12.26 &97.56 &19.48 &95.26 &28.52 &92.85 &22.54 &94.44 &20.70 &95.03 \\ \cellcolor{LightGray}{\textbf{NAC-UE (Ours)}} &\cellcolor{LightGray}\textbf{0.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{100.00}{\tiny
$\pm$0.00} &\cellcolor{LightGray}\textbf{0.01}{\tiny $\pm$0.01} &\cellcolor{LightGray}\textbf{99.99}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.09}{\tiny $\pm$0.01} &\cellcolor{LightGray}\textbf{99.97}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.01}{\tiny $\pm$0.01} &\cellcolor{LightGray}\textbf{100.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.03} &\cellcolor{LightGray}\textbf{99.99} \\ \midrule \multicolumn{11}{c}{\emph{Backbone: BiTS-R101x1}} \\ {MSP}~\cite{OOD_Detect:MSP} &63.69 &87.59 &79.98 &78.34 &81.44 &76.76 &82.73 &74.45 &76.96 &79.29 \\ {ODIN}~\cite{OOD_Detect:ODIN} &62.69 &89.36 &71.67 &83.92 &76.27 &80.67 &81.31 &76.30 &72.99 &82.56 \\ {Mahalanobis}~\cite{OOD_Detect:Mahalanobis} &96.34 &46.33 &88.43 &65.20 &89.75 &64.46 &52.23 &72.10 &81.69 &62.02 \\ {Energy}~\cite{OOD_Detect:Energy} &64.91 &88.48 &65.33 &85.32 &73.02 &81.37 &80.87 &75.79 &71.03 &82.74 \\ {KL Matching}~\cite{OOD_Detect:KLMatching} &27.36 &93.00 &67.52 &78.72 &72.61 &76.49 &49.70 &87.07 &54.30 &83.82 \\ {GradNorm}~\cite{OOD_Detect:GradNorm} &50.03 &90.33 &46.48 &89.03 &60.86 &84.82 &61.42 &81.07 &54.70 &86.31 \\ {MOS}~\cite{OOD_Detect:MOS} &9.28 &98.15 &40.63 &92.01 &49.54 &89.06 &60.43 &81.23 &39.97 &90.11 \\ \cellcolor{LightGray}{\textbf{NAC-UE (Ours)}} &\cellcolor{LightGray}\textbf{0.01}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{100.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.06}{\tiny $\pm$0.01} &\cellcolor{LightGray}\textbf{99.98}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.07}{\tiny $\pm$0.02} &\cellcolor{LightGray}\textbf{99.98}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{1.22}{\tiny $\pm$0.04} &\cellcolor{LightGray}\textbf{99.58}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.34 } &\cellcolor{LightGray}\textbf{99.88} \\ \midrule \multicolumn{11}{c}{\emph{Backbone: MobileNet-v2}} \\ {MSP}~\cite{OOD_Detect:MSP} &64.29 &85.32 &77.02 &77.10 &79.23 &76.27 &73.51 &77.30 &73.51 &79.00 \\ {ODIN}~\cite{OOD_Detect:ODIN} &55.39 &87.62 &54.07 &85.88 &57.36 &84.71 &49.96 &85.03 &54.20 &85.81 \\ {Mahalanobis}~\cite{OOD_Detect:Mahalanobis} &62.11 &81.00 &47.82 &86.33 &52.09 &83.63 &92.38 &33.06 &63.60 &71.01 \\ {Energy}~\cite{OOD_Detect:Energy} &59.50 &88.91 &62.65 &84.50 &69.37 &81.19 &58.05 &85.03 &62.39 &84.91 \\ {DICE}~\cite{OOD_Detect:DICE} &43.09 &90.83 &38.69 &90.46 &53.11 &85.81 &32.80 &91.30 &41.92 &89.60 \\ {DICE+ReAct}~\cite{OOD_Detect:DICE} &32.30 &93.57 &31.22 &92.86 &46.78 &88.02 &16.28 &96.25 &31.64 &92.68 \\ {ASH-B}~\cite{OOD_Detect:SimpleAct} &31.46 &94.28 &38.45 &91.61 &51.80 &87.56 &20.92 &95.07 &35.66 &92.13 \\ {LINe}~\cite{OOD_Detect:LINe} &24.95 &95.53 &33.19 &92.94 &47.95 &88.98 &12.30 &97.05 &29.60 &93.62 \\ \cellcolor{LightGray}{\textbf{NAC-UE (Ours)}} &\cellcolor{LightGray}\textbf{0.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{100.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{100.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.02}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{100.00}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.85}{\tiny $\pm$0.03} &\cellcolor{LightGray}\textbf{99.84}{\tiny $\pm$0.00} &\cellcolor{LightGray}\textbf{0.22 } &\cellcolor{LightGray}\textbf{99.96} \\ \bottomrule \end{tabular} } \caption{OOD detection results on ImageNet. We report the performance over three backbones, which are trained solely on the InD dataset, \textit{i}.\textit{e}., ImageNet-1k. $\uparrow$ denotes the higher value is better, while $\downarrow$ indicates lower values are better. 
The results of our methods are averaged over 20 random seeds. } \label{table:OOD_Detection_ImgNet} \end{table*} \section{Experiments} In this section, we evaluate our approaches on two tasks: OOD detection (\texttt{NAC-UE}) (Section~\ref{Sec:Exp_OOD_Detection}) and OOD generalization (\texttt{NAC-ME}) (Section~\ref{Sec:Exp_OOD_Generalization}). We provide more details in the Appendix. \subsection{Case Study 1: OOD Detection} \label{Sec:Exp_OOD_Detection} \bfstart{Setup} Our experimental settings align with previous SoTAs~\cite{OOD_Detect:ReAct,OOD_Detect:GradNorm,OOD_Detect:SimpleAct,OOD_Detect:LINe,OOD_Detect:MOS,OOD_Detect:Energy}. We mainly evaluate our \texttt{NAC-UE} on the large-scale ImageNet benchmark~\cite{OOD_Detect:MOS}, where ImageNet-1k serves as the InD dataset, along with 4 OOD test datasets: \texttt{iNaturalist}~\cite{OOD_Dataset:iNaturalist}, \texttt{SUN}~\cite{OOD_Dataset:SUN}, \texttt{Places365}~\cite{OOD_Dataset:Places}, and \texttt{Textures}~\cite{OOD_Dataset:Textures}, whose categories do not overlap with ImageNet-1k. In this series of experiments, we utilize the pretrained ResNet-50~\cite{tech:ResNet}, MobileNet-v2~\cite{tech:MobileNetv2}, and Google BiTS-R101x1~\cite{tech:BiT} as backbones following~\cite{OOD_Detect:SimpleAct,OOD_Detect:ReAct,OOD_Detect:GradNorm}. Additionally, we run experiments on CIFAR-10 and CIFAR-100, where the InD dataset corresponds to CIFAR, and 6 OOD datasets are included: \texttt{SVHN}~\cite{OOD_Dataset:SVHN}, \texttt{LSUN-Crop}~\cite{OOD_Dataset:LSUN}, \texttt{LSUN-Resize}~\cite{OOD_Dataset:LSUN}, \texttt{iSUN}~\cite{OOD_Dataset:iSUN}, \texttt{Places365}~\cite{OOD_Dataset:Places}, and \texttt{Textures}~\cite{OOD_Dataset:Textures}. We employ ResNet-18 on CIFAR following~\cite{OOD_Detect:ReAct}. Detailed results are provided in the Appendix. We utilize two threshold-free metrics in our evaluation: 1) FPR95: the false positive rate of OOD samples when the true positive rate of InD samples is at 95\%; 2) AUROC: the area under the receiver operating characteristic curve. Throughout our experiments, all pretrained models are left unmodified, preserving their classification ability during the OOD detection phase.
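For completeness, both metrics can be computed from detector scores with standard tooling; the following sketch (using \texttt{scikit-learn}, and treating InD as the positive class) is a generic recipe rather than part of our method.

\begin{verbatim}
# Generic FPR95 / AUROC computation from uncertainty scores (higher = more InD).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def detection_metrics(scores_ind, scores_ood):
    labels = np.concatenate([np.ones_like(scores_ind), np.zeros_like(scores_ood)])
    scores = np.concatenate([scores_ind, scores_ood])
    auroc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]  # FPR at the first TPR >= 95%
    return 100.0 * fpr95, 100.0 * auroc
\end{verbatim}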
In contrast, advanced methods such as ReAct~\cite{OOD_Detect:ReAct} and ASH-B~\cite{OOD_Detect:SimpleAct} exhibit promising OOD detection results at the expense of InD accuracy. We further compare \texttt{NAC-UE} with previous SoTAs on CIFAR benchmarks, and the results (averaged over 6 OOD datasets) are illustrated in Table~\ref{Tab:OOD_Detection_CIFAR}. As can be seen, \texttt{NAC-UE} consistently outperforms previous methods across two benchmarks, and showcases the large gains. This highlights the superiority of our method again. We provide detailed performance for each OOD dataset in Appendix. \begin{figure*}\label{Fig:Detection_Distr} \end{figure*} \begin{table*}[t] \begin{minipage}{.53\linewidth} \centering \resizebox{0.91\columnwidth}{!}{ \begin{tabular}{l cc cc} \toprule \multirow{3}{*}{Method} &\multicolumn{2}{c}{CIFAR-10} &\multicolumn{2}{c}{CIFAR-100} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} &FPR95 &AUROC &FPR95 &AUROC \\ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ \\ \midrule {MSP}~\cite{OOD_Detect:MSP} &56.71 &91.17 &80.72 &76.83 \\ {ODIN}~\cite{OOD_Detect:ODIN} &31.10 &93.79 &66.21 &82.88 \\ {Energy}~\cite{OOD_Detect:Energy} &35.60 &93.57 &71.93 &82.82 \\ {ReAct}~\cite{OOD_Detect:ReAct} &32.91 &94.27 &59.61 &87.48 \\ \cellcolor{LightGray}{NAC-UE (Ours)} &\cellcolor{LightGray}\textbf{0.16} &\cellcolor{LightGray}\textbf{99.95} &\cellcolor{LightGray}\textbf{0.20} &\cellcolor{LightGray}\textbf{99.94}\\ \bottomrule \end{tabular} } \caption{OOD detection results on CIFAR-10 and CIFAR-100. We utilize ResNet-18 as backbone. Results are averaged on 20 seeds over 6 OOD datasets. Detailed scores are provided in Appendix.} \label{Tab:OOD_Detection_CIFAR} \end{minipage} \begin{minipage}{.42\linewidth} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{c ccc} \toprule \multirow{2}{*}{\parbox{1.3cm}{\centering Layer}} & \multirow{2}{*}{\# Neurons} &FPR95 &AUROC \\ & &$\downarrow$ &$\uparrow$ \\ \midrule Conv1 &64 &48.26 &84.90\\ Layer1 &256 &13.59 &97.67\\ Layer2 &512 &16.76 &97.41\\ Layer3 &1024 &\textbf{0.02} &99.94\\ \rowcolor{LightGray} Layer4 &2048 &0.03 &\textbf{99.99}\\ \bottomrule \end{tabular} } \caption{Effect of \texttt{NAC-UE} using neurons from different layers. We report averaged scores on ImageNet benchmark by utilizing ResNet-50 backbone.} \label{Tab:OOD_Detection_Layer} \end{minipage} \end{table*} \bfstart{The superiority of neuron activation state $\hat{\mathbf{z}}$} In Section~\ref{Sec:Method_NState}, we formulate the neuron activation state $\hat{\mathbf{z}}$ by combining the raw neuron output ${\mathbf{z}}$ with its KL gradients $\partial D_{\rm KL}/\partial \mathbf{z}$. Here, we ablate this formulation to investigate the superiority of $\hat{\mathbf{z}}$. In particular, we analyze the \texttt{NAC-UE} \textit{w.r.t.} 1) raw neuron output: ${\mathbf{z}}$, 2) KL gradients of neuron output: $\partial D_{\rm KL}/\partial \mathbf{z}$, and 3) our defined neuron state: $\hat{\mathbf{z}}$. We also include the baseline GradNorm~\cite{OOD_Detect:GradNorm} for a further comparison, which employs the L1 norm of KL gradients in the last layer as uncertainty scores. Figure~\ref{Fig:Detection_Distr} illustrates the results, where we visualize the distribution of uncertainty scores \textit{w.r.t} these methods on ImageNet benchmark. We provide the main observations in the following: Firstly, we can observe that all NAC-based methods significantly outperform the baseline GradNorm. 
Specifically, the distribution between the InD and OOD samples could be largely separated when using NAC as uncertainty scores, thereby leading to the the lower FPR95. Secondly, among all the three variants of \texttt{NAC-UE}, $\hat{\mathbf{z}}$-based method performs the best, as it inherits the advantages from both ${\mathbf{z}}$ and $\partial D_{\rm KL}/\partial \mathbf{z}$. This spotlights the superiority of our defined neuron state. Thirdly, it can also be found that OOD samples generally present lower neuron activation coverage compared to InD samples. This demonstrates that OOD data tend to provoke abnormal neuron behaviors in comparison to InD data, which further confirms the rationale behind our NAC-based approaches. \noindent\textbf{Where to apply neuron action coverage (NAC)?} Prior studies have shown that the neurons from deeper layers often encode rich semantic information~\cite{NACT:PNAS_Dissect,NACT:MILAN}. Inspired by this, our experiments mostly implement \texttt{NAC-UE} by utilizing neurons from the penultimate layer. Here, we take ResNet-50 as backbone and analyze this layer choice. As shown in Table~\ref{Tab:OOD_Detection_Layer}, we compare the OOD detection results of \texttt{NAC-UE} \textit{w.r.t.} different layer choices. It can be drawn that as the layer goes deeper, more neuron numbers are considered, along with improved detection performance. Moreover, we can also observe that even with a single layer of neurons, \texttt{NAC-UE} is able to achieve favorable performance, which eases its integration into other architectures. \begin{table*}[!hb] \begin{minipage}{.33\linewidth} \centering \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{lcc} \toprule \multirow{2}{*}{\parbox{2cm}{Sigmoid \\Steepness ($\alpha$)}} &FPR95 &AUROC \\ &$\downarrow$ &$\uparrow$ \\ \midrule $\alpha=100$ &94.24 &46.75\\ $\alpha=500$ &35.34 &92.80\\ \rowcolor{LightGray} {$\alpha=1000$} &\textbf{0.34} &\textbf{99.88}\\ $\alpha=5000$ &8.41 &98.29\\ $\alpha=8000$ &14.44 &96.94\\ \bottomrule \end{tabular} } \caption{The effect of different $\alpha$ over BiTS-R101x1 backbone.} \label{Tab:OOD_Detection_Sig} \end{minipage} \begin{minipage}{.30\linewidth} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{lcc} \toprule \multirow{2}{*}{\parbox{1.5cm}{Lower \\Bound ($r$)}} &FPR95 &AUROC \\ &$\downarrow$ &$\uparrow$ \\ \midrule $r = 0.25$ &71.28 &80.91\\ \rowcolor{LightGray} $r = 1$ &\textbf{0.34} &\textbf{99.88}\\ $r = 2$ &2.26 &98.85\\ $r = 10$ &34.78 &87.68\\ $r = 100$ &75.60 &77.02\\ \bottomrule \end{tabular} } \caption{The effect of different $r$ over BiTS-R101x1.} \label{Tab:OOD_Detection_R} \end{minipage} \begin{minipage}{.33\linewidth} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{lcc} \toprule \multirow{2}{*}{\parbox{2cm}{ No. of \\Intervals ($M$)}} &FPR95 &AUROC \\ &$\downarrow$ &$\uparrow$ \\ \midrule $M=10$ &56.81 &81.49\\ $M=100$ &44.58 &89.61\\ $M=1000$ &{42.86} &{89.35}\\ $M=5000$ &{24.09} &{94.33}\\ \rowcolor{LightGray} $M=10000$ &\textbf{0.34} &\textbf{99.88}\\ \bottomrule \end{tabular} } \caption{The effect of different $M$ over BiTS-R101x1.} \label{Tab:OOD_Detection_M} \end{minipage} \end{table*} \bfstart{Paramter analysis} In Table~\ref{Tab:OOD_Detection_Sig}-\ref{Tab:OOD_Detection_M}, we systematically analyze the effect of sigmoid steepness ($\alpha$), lower bound ($r$) for full coverage, and the number of intervals ($M$) for PDF approximation. The following observations can be noted: 1) A relatively steep sigmoid function could make \texttt{NAC-UE} perform better. 
We conjecture this is due to that neuron activation states often distribute in a small range, thus requiring a steeper function to distinguish their finer variations; 2) \texttt{NAC-UE} is sensitive to the choice of $r$. As previously discussed, if $r$ is too small, noisy activations can dominate the coverage, thus diminishing the effect of NAC scores. Also, a large $r$ makes the NAC function susceptible to data biases, \textit{e}.\textit{g}., in datasets with numerous similar samples, a neuron state can be easily mischaracterized by an abnormally high NAC score, thereby marginalizing other meaningful neuron states. 3) The performance of \texttt{NAC-UE} positively correlates with $M$. This is intuitive as a larger $M$ allows for a closer approximation to the real PDF, resulting in improved performance. \subsection{Case Study 2: OOD Generalization} \label{Sec:Exp_OOD_Generalization} \bfstart{Setup} Our experimental settings carefully follow the Domainbed benchmark~\cite{Setup:DomainBed}. Without employing digital images, we adopt four datasets: \texttt{VLCS}~\cite{Dataset:VLCS} (4 domains, 10,729 images) , \texttt{PACS}~\cite{Dataset:PACS} (4 domains, 9,991 images), \texttt{OfficeHome}~\cite{Dataset:OfficeHome} (4 domains, 15,588 images), and \texttt{TerraInc}~\cite{Dataset:TerraIncognita} (4 domains, 24,788 images). For all datasets, we report the \textit{leave-one-out} test accuracy following~\cite{Setup:DomainBed}, whereby results are averaged over cases that use a single domain for test and the others for training. For all employed backbones, we utilize the hyperparameters suggested by~\cite{tech:swad} to fine-tune them. The training strategy is ERM~\cite{Baseline:ERM}, unless stated otherwise. We set the total training steps as 5000, and the evaluation frequency as 300 steps for all models. We provide implementation details in Appendix. \bfstart{Model evaluation criteria} Since OOD data is assumed unavailable during model training, existing methods commonly resort to InD validation accuracy to evaluate a model~\cite{Baseline:Fishr,CL&DG:SelfReg,CL&DG:PCL,Baseline:Fish,Setup:DomainBed}. Thus, we mainly compare our \texttt{NAC-ME} with the prevalent \textit{validation criterion}~\cite{Setup:DomainBed}. We also leverage the \textit{oracle criterion}~\cite{Setup:DomainBed} as the upper bound, which directly utilizes OOD test data to evaluate a model. \bfstart{Metrics} We utilize two metrics in this setting: 1) Spearman Rank Correlation (RC) between OOD test accuracy and the model evaluation scores (\textit{i}.\textit{e}., InD validation accuracy or InD NAC scores), which are sampled at regular evaluation intervals (\textit{i}.\textit{e}., every 300 steps) during the training process; 2) OOD Test Accuracy (ACC) of the best model selected by the criterion within a single run of training. 
\begin{table*} \centering \adjustbox{max width=1\textwidth}{ \newcolumntype{g}{>{\columncolor{LightGray}}c} \begin{tabular}{l g gg gg gg gg gg} \toprule \rowcolor{white} \multirow{2}{*}{Bakbone} &\multirow{2}{*}{Method} &\multicolumn{2}{c}{VLCS~\cite{Dataset:VLCS}} &\multicolumn{2}{c}{PACS~\cite{Dataset:PACS}} &\multicolumn{2}{c}{OfficeHome~\cite{Dataset:OfficeHome}} &\multicolumn{2}{c}{TerraInc~\cite{Dataset:TerraIncognita}} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \rowcolor{white} & &RC &ACC &RC &ACC &RC &ACC &RC &ACC &RC &ACC \\ \midrule \rowcolor{white} &Oracle &- &77.17 &- &80.13 &- &56.00 &- &44.32 &- &64.40 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &34.27 &75.12 &68.71 &79.01 &83.50 &55.60 &38.77 &39.03 &56.31 &62.19 \\ &{NAC-ME} &\textbf{52.00} &\textbf{76.10} &\textbf{72.55 } &\textbf{79.12} &\textbf{84.78} &\textbf{55.80} &\textbf{39.28 } &\textbf{40.41} &\textbf{62.15} &\textbf{62.86} \\ \multirow{-4}[1]{*}{ResNet-18} &$\Delta$ &\textcolor{BoldDelta}{(+17.73)} &\textcolor{BoldDelta}{(+0.97)} &\textcolor{BoldDelta}{(+3.84)} &\textcolor{BoldDelta}{(+0.11)} &\textcolor{BoldDelta}{(+1.29)} &\textcolor{BoldDelta}{(+0.20)} &\textcolor{BoldDelta}{(+0.51)} &\textcolor{BoldDelta}{(+1.38)} &\textcolor{BoldDelta}{(+5.84)} &\textcolor{BoldDelta}{(+0.67)} \\ \midrule \rowcolor{white} &Oracle &- &79.67 &- &85.41 &- &65.14 &- &50.23 &- &70.21 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{31.43} &\textbf{77.70} &58.54 &84.57 &67.93 &65.04 &37.07 &46.07 &48.74 &68.34 \\ &{NAC-ME} &28.78 &76.69 &\textbf{61.52} &\textbf{84.89} &\textbf{69.24} &\textbf{65.14} &\textbf{40.95} &\textbf{47.18} &\textbf{50.12 } &\textbf{68.48} \\ \multirow{-4}[1]{*}{ResNet-50} &$\Delta$ &\textcolor{gray}{(-2.66)} &\textcolor{gray}{(-1.01)} &\textcolor{BoldDelta}{(+2.98)} &\textcolor{BoldDelta}{(+0.32)} &\textcolor{BoldDelta}{(+1.31)} &\textcolor{BoldDelta}{(+0.11)} &\textcolor{BoldDelta}{(+3.88)} &\textcolor{BoldDelta}{(+1.11)} &\textcolor{BoldDelta}{(+1.38)} &\textcolor{BoldDelta}{(+0.14)} \\ \midrule \rowcolor{white} &Oracle &- &78.4 &- &71.83 &- &61.25 &- &41.08 &- &63.14 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &40.63 &77.55 &88.42 &69.77 &98.47 &\textbf{61.09} &22.71 &36.28 &62.56 &61.17 \\ &NAC-ME &\textbf{45.26} &\textbf{77.68} &\textbf{90.22 } &\textbf{71.14} &\textbf{98.82} &60.97 &\textbf{23.82 } &\textbf{37.47} &\textbf{64.53} &\textbf{61.82} \\ \multirow{-4}[1]{*}{Vit-t16} &$\Delta$ &\textcolor{BoldDelta}{(+4.63)} &\textcolor{BoldDelta}{(+0.13)} &\textcolor{BoldDelta}{(+1.80)} &\textcolor{BoldDelta}{(+1.37)} &\textcolor{BoldDelta}{(+0.35)} &\textcolor{gray}{(-0.12)} &\textcolor{BoldDelta}{(+1.11)} &\textcolor{BoldDelta}{(+1.19)} &\textcolor{BoldDelta}{(+1.97)} &\textcolor{BoldDelta}{(+0.65)} \\ \midrule \rowcolor{white} &Oracle &- &80.44 &- &89.10 &- &80.84 &- &51.81 &- &75.55 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &29.94 &\textbf{79.14} &31.54 &86.86 &65.24 &80.45 &0.92 &45.49 &31.91 &72.99 \\ &{NAC-ME} &\textbf{33.76} &{79.05} &\textbf{40.30 } &\textbf{88.19} &\textbf{68.38} &\textbf{80.65} &\textbf{34.84 } &\textbf{46.51} &\textbf{44.32} &\textbf{73.60} \\ \multirow{-4}[1]{*}{Vit-b16} &$\Delta$ &\textcolor{BoldDelta}{(+3.82)} &\textcolor{gray}{(-0.09)} &\textcolor{BoldDelta}{(+8.76)} &\textcolor{BoldDelta}{(+1.33)} &\textcolor{BoldDelta}{(+3.15)} &\textcolor{BoldDelta}{(+0.20)} &\textcolor{BoldDelta}{(+33.92)} &\textcolor{BoldDelta}{(+1.02)} 
&\textcolor{BoldDelta}{(+12.41)} &\textcolor{BoldDelta}{(+0.61)} \\ \bottomrule \end{tabular} } \caption{OOD generalization results on DomainBed. \textit{Oracle} indicates the upper bound, which utilizes OOD test data to evaluate a model. $\Delta$ denotes the improvement of coverage criterion over the validation criterion. The training strategy is ERM~\cite{Baseline:ERM}. All scores are averaged over 3 random trials. } \label{table:OOD_Selection_Main} \end{table*} \bfstart{Results} As illustrated in Table~\ref{table:OOD_Selection_Main}, we mainly compare our \texttt{NAC-ME} with typical validation criterion over four canonical backbones: ResNet-18~\cite{tech:ResNet}, ResNet-50~\cite{tech:ResNet}, Vit-t16~\cite{tech:ViT}, and Vit-b16~\cite{tech:ViT}. Throughout the results, we can draw the following observations: 1) The positive correlation (\textit{i}.\textit{e}., RC > 0) between the NAC scores and the OOD test performance consistently holds across architectures and datasets; 2) By comparison with the validation criterion, \texttt{NAC-ME} not only could select more generalized models (with higher OOD accuracy), but also exhibits stronger correlation with OOD test performance. For instance, on the TerraInc dataset, \texttt{NAC-ME} achieves a rank correlation of 34.84\% with OOD test accuracy, surpassing validation criterion by 33.92\% on the Vit-b16. Similarly, on the VLCS dataset, \texttt{NAC-ME} also shows a rank correlation of 52.00\%, outperforming the validation criterion by 17.73\% on the ResNet-18. Such results highlight the potential of leveraging our neuron activation coverage to reflect model generalization ability. \begin{figure}\label{Tab:OOD_Selection_SoTA} \label{Fig:rc_wilds} \end{figure} \noindent\textbf{Can NAC co-work with SoTA learning algorithms?} Recent literature has suggested numerous learning algorithms to enhance the model generalization ability~\cite{Baseline:CORAL,Baseline:DANN,Baseline:Fish,Baseline:Fishr}. In this sense, we further investigate the potential of \texttt{NAC-ME} by implementing it with two recent SoTA algorithms: SelfReg~\cite{CL&DG:SelfReg} and CORAL~\cite{Baseline:CORAL}. The results are shown in Table~\ref{Tab:OOD_Selection_SoTA}. It can be found that \texttt{NAC-ME} as an evaluation criterion still presents better performance compared with validation criterion. This finding further verifies the effectiveness of our NAC-based criterion. \noindent\textbf{Does the volume of OOD test data hinder the Rank Correlation (RC)?} As illustrated in Table~\ref{table:OOD_Selection_Main}, while in most cases \texttt{NAC-ME} outperforms the validation criterion on model selection, we can find that the Rank Correlation (RC) still falls short of its maximum value, \textit{e}.\textit{g}., on the VLCS dataset using ResNet-18, RC only reaches 52\% compared to the maximum of 100\%. Given that Domainbed only provides 6 OOD domains at most, we hypothesize that the volume/variance of OOD test data may be the reason: insufficient OOD test data may be unreliable to reflect model generalization ability, thereby hindering the validity of RC. To this end, we conduct additional experiments on the iWildCam dataset~\cite{Dataset:Wilds}, which includes 323 domains and 203,029 images in total. Figure~\ref{Fig:rc_wilds} illustrates the results, where we analyze the relationship between RC and the volume of OOD test data by randomly sampling different ratios of OOD data for RC calculation. 
As can be seen, an increase in the ratio of test data also leads to an improvement in the RC, which confirms our hypothesis regarding the effect of OOD data. Furthermore, we can observe that in most cases, \texttt{NAC-ME} could still outperform the validation criterion. These observations spotlight the capability of our NAC again. \section{Related Work} \bfstart{Neuron coverage} Traditional system testing commonly leverages coverage criteria to uncover defects in software programs~\cite{NACT_System:Traditional_Testing}. These criteria directly measure the degree to which certain codes or components have been exercised, thereby indicating potential defects. To simulate such program testing in deep neural networks, Pei \textit{et al}.~\cite{NACT_System:DeepXplore} firstly introduced neuron coverage, which measures the proportion of activated neurons within a given input set. The underlying idea is that if a network performs with larger neuron coverage during testing, there would be fewer undetected bugs (\textit{e}.\textit{g}., misclassification) that can be triggered. In line with this, Ma \textit{et al}.~\cite{NACT_System:DeepGauge} extended the neuron coverage with fine-grained criteria, which further considers the distribution of neuron outputs from training data. Subsequently, Yuan \textit{et al}.~\cite{NACT_System:NLC} advocated focusing on the interactions between neurons within the same layer, and introduced a layer-wise neuron coverage for network testing. The most recent work related to our paper is~\cite{NACT_DG:NeuronCoverage}, where they proposed to improve model generalization ability by maximizing neuron coverage during training. Likewise, in this work, we also demonstrate that the InD neuron activation coverage can be a potential criterion for evaluating model robustness, which even could surpass prevalent validation criterion across architectures and datasets. \bfstart{OOD detection} The goal of OOD detection is to distinguish between InD and OOD data inputs, thereby refraining from using unreliable model predictions during deployment. Existing detection methods can be broadly categorized into three groups: 1) confidence-based ~\cite{OOD_Detect:ODIN,ood_example1,OOD_Detect:MSP,OOD_Detect:MOS,OOD_Detect:confidence1}, 2) distance-based~\cite{OOD_Detect:distance1,OOD_Detect:distance2,OOD_Detect:distance3,OOD_Detect:CIDAR,OOD_Detect:KNN,OOD_Detect:Mahalanobis}, and 3) density-based~\cite{OOD_Detect:density4,OOD_Detect:density1,OOD_Detect:density2,OOD_Detect:density3} approaches. Confidence-based methods commonly resort to the confidence level of model outputs to detect OOD samples. For example, MSP~\cite{OOD_Detect:MSP} directly calculates the maximum softmax probability as the uncertainty score. In contrast, distance-based approaches identify OOD samples by measuring the distance (\textit{e}.\textit{g}., Mahalanobis distance~\cite{OOD_Detect:Mahalanobis}) between input sample and typical InD centroids or prototypes. Likewise, density-based methods employ probabilistic models to explicitly model InD distribution, and classify test data located in low-density regions as OOD. Specific to neuron behaviors, ReAct~\cite{OOD_Detect:ReAct} recently proposes to truncate high neuron activations to separate the InD and OOD data. However, such truncation would instead decrease model classification ability~\cite{OOD_Detect:SimpleAct}. Similarly, LINe~\cite{OOD_Detect:LINe} seeks to find important neurons based on the Shapley value~\cite{Shapley1988AVF} and then performs activation clipping. 
Yet, this approach relies on a threshold-based strategy that categorizes neurons into binary states (\textit{i}.\textit{e}., activated or not), thus disregarding valuable neuron distribution details. Unlike them, in this work, we show that by using natural neuron states, a distribution property (\textit{i}.\textit{e}., coverage) could greatly facilitate OOD detection.

\bfstart{OOD generalization} OOD generalization aims to train models that can overcome distribution shifts between InD and OOD data. While a myriad of studies have emerged to tackle this problem~\cite{Baseline:MMD,Baseline:CORAL,Baseline:GroupDRO,Baseline:AND-Mask,Baseline:IRM,Baseline:DANN,Baseline:MLDG}, Gulrajani \textit{et al}.~\cite{Setup:DomainBed} recently put forth the importance of the model evaluation criterion, and demonstrated that a vanilla ERM~\cite{Baseline:ERM} along with a proper criterion could outperform most state-of-the-art methods. In line with this, Arpit \textit{et al}.~\cite{Criterion:SMA} discovered that using validation accuracy as the evaluation criterion could be unstable for model selection, and thus proposed a moving average to stabilize model training. Contrary to that, this work sheds light on the potential of neuron activation coverage for model evaluation, showing that it outperforms the validation criterion in various cases.

\section{Conclusion} In this work, we have presented a neuron activation view to reflect the OOD problem. We have shown that through our formulated neuron states, the concept of neuron activation coverage (NAC) could effectively facilitate two OOD tasks: OOD detection and OOD generalization. Specifically, we have demonstrated that 1) InD and OOD inputs can be more separable based on the neuron activation coverage, yielding substantially improved OOD detection performance; 2) a positive correlation between NAC and model generalization ability consistently holds across architectures and datasets, which highlights the potential of the NAC-based criterion for model evaluation. Along these lines, we hope this paper has further motivated the community to consider neuron behavior in the OOD problem, which is also where the most considerable benefit of this work eventually lies.

\bfstart{Limitation} The approximation of the PDF can be a limitation of this work. While increasing the number of intervals improves the approximation, it also adds a computational burden. Besides, though using a parametric PDF alleviates this issue, it can be challenging to generalize across model architectures.

\appendix \renewcommand\cftaftertoctitle{\vskip2pt\par\hrulefill\vskip-2mm\par\vskip2pt} \renewcommand\cftsecafterpnum{\vskip3pt} \cftsetindents{section}{0em}{2em} \cftsetindents{subsection}{2em}{2.5em} \hypersetup{colorlinks,linkcolor={black},citecolor={CiteBlue},urlcolor={CiteBlue}} \begin{center} \vspace*{1cm} {\huge \textbf{Appendix}} \vskip5em \end{center} {\addtolength{\textheight}{-10cm} \tableofcontents \vskip-2mm \noindent\hrulefill } \addtocontents{toc}{\protect\setcounter{tocdepth}{2}} \hypersetup{colorlinks,linkcolor={red},citecolor={CiteBlue},urlcolor={CiteBlue}}

\section{Potential Social Impact} This study introduces neuron activation coverage (NAC) as an efficient tool for facilitating out-of-distribution (OOD) solutions. By improving OOD detection and generalization, NAC has the potential to significantly enhance the dependability and safety of modern machine learning models.
Thus, the social impact of this research can be far-reaching, spanning consumer and business applications in digital content understanding, transportation systems including driver assistance and autonomous vehicles, as well as healthcare applications such as identifying unseen diseases. Moreover, by openly sharing our code, we strive to offer machine learning practitioners a readily available resource for responsible AI development, ultimately benefiting society as a whole. Although we anticipate no negative repercussions, we are committed to expanding upon our framework in future endeavors.

\section{Additional Theoretical Details} Here, we present additional theoretical details for Eq.~(\ref{Eq:Insights}) in the main paper. Specifically, we elaborate on the calculation of gradients \textit{w.r.t.} the sample confidence. As a reminder, in the main paper, we introduce the Kullback-Leibler (KL) divergence~\cite{tech:KL} between the network output and a uniform vector $\mathbf{u} = [1/C, 1/C, ..., 1/C] \in \mathbb{R}^C$ as follows:
\begin{equation*} \begin{aligned} D_{\rm KL}({\mathbf{u}}||\mathbf{p}) &= \sum_{i=1}^{C}u_i \log{\frac{u_i}{p_i}} \\ &= - \sum_{i=1}^{C}u_i \log{p_i} + \sum_{i=1}^{C}u_i \log{u_i} \\ &= - \frac{1}{C} \sum_{i=1}^{C} \log{p_i} - H(\mathbf{u}), \end{aligned} \end{equation*}
where $\mathbf{p} = {\rm softmax}(F(\mathbf{x}))$, and ${p}_i$ denotes the $i$-th element of $\mathbf{p}$. $H(\mathbf{u})= -\sum_{i=1}^{C}u_i \log{u_i}$ is a constant. Let $F(\mathbf{x})_i$ denote the $i$-th element of $F(\mathbf{x})$; then we have $p_i = {e^{F(\mathbf{x})_i}}/{\sum_{j=1}^{C} e^{F(\mathbf{x})_j}}$. By substituting the expression of $p_i$, we can rewrite the KL divergence as:
\begin{equation*} \begin{aligned} D_{\rm KL}({\mathbf{u}}||{\rm softmax}(F(\mathbf{x}))) &= - \frac{1}{C} \sum_{i=1}^{C} \log{ \frac{e^{F(\mathbf{x})_i}}{\sum_{j=1}^{C} e^{F(\mathbf{x})_j}} } - H(\mathbf{u}) \\ &= - \frac{1}{C} \left( \sum_{i=1}^{C} {F(\mathbf{x})_i} - C \cdot \log{\sum_{j=1}^{C} e^{F(\mathbf{x})_j}} \right)- H(\mathbf{u}). \end{aligned} \end{equation*}
Subsequently, we can derive the gradients of the KL divergence \textit{w.r.t.} the output logit $F(\mathbf{x})_i$ as:
\begin{equation*} \begin{aligned} \frac{\partial{D_{\rm KL}}}{\partial F(\mathbf{x})_i} &= - \frac{1}{C} \left(1 - C \cdot \frac{\partial \log{\sum_{j=1}^{C} e^{F(\mathbf{x})_j}}}{\partial F(\mathbf{x})_i} \right) \\ &= - \frac{1}{C} \left(1 - C \cdot \frac{e^{F(\mathbf{x})_i}}{\sum_{j=1}^{C} e^{F(\mathbf{x})_j}} \right) \\ &= - \frac{1}{C} + \frac{e^{F(\mathbf{x})_i}}{\sum_{j=1}^{C} e^{F(\mathbf{x})_j}} \\ &= p_i - u_i. \end{aligned} \end{equation*}
Since $F(\mathbf{x}) = g(f(\mathbf{x})) = g(\mathbf{z})$, we finally have:
\begin{equation} \frac{\partial{D_{\rm KL}}}{\partial g(\mathbf{z})} = \frac{\partial{D_{\rm KL}}}{\partial F(\mathbf{x})} = {[p_1 - u_1, ..., p_C - u_C]}^T = \mathbf{p}-\mathbf{u}. \end{equation}
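This result can also be verified numerically. Below is a minimal PyTorch sketch (an illustration, not part of our released code) that checks the gradient of $D_{\rm KL}(\mathbf{u}||\mathbf{p})$ \textit{w.r.t.} the logits against $\mathbf{p}-\mathbf{u}$ via automatic differentiation:
\begin{verbatim}
import torch

C = 5
logits = torch.randn(C, requires_grad=True)
u = torch.full((C,), 1.0 / C)                      # uniform vector u

p = torch.softmax(logits, dim=0)
kl = torch.sum(u * (torch.log(u) - torch.log(p)))  # D_KL(u || p)
kl.backward()

# The gradient w.r.t. the logits matches p - u.
assert torch.allclose(logits.grad, p.detach() - u, atol=1e-6)
\end{verbatim}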
\section{Approximation Details} In this section, we demonstrate details for the approximation of the PDF, and further show the insights behind the choice of $r$ in our NAC function.

\subsection{Preliminaries}

\bfstart{Probability density function (PDF)} The Probability Density Function (PDF), denoted by $\kappa(x)$, measures the probability of a continuous random variable taking on a specific value within a given range. Accordingly, $\kappa(x)$ should possess the following key properties:
\begin{enumerate}[label=(\arabic*),topsep=1pt,parsep=1pt,itemindent=0.5em] \item {Non-Negativity}: $\kappa(x) \geq 0$, for all $x \in \mathbb{R}$; \item {Normalization}: $\int_{-\infty}^{\infty} \kappa(x) dx =1$; \item {Probability Interpretation}: $P(a\leq \mu \leq b) = \int_{a}^{b} \kappa(x) dx$, \end{enumerate}
where $P(a\leq \mu \leq b)$ denotes the probability that the random variable $\mu$ takes values within the range $[a,b]$.

\bfstart{Cumulative distribution function (CDF)} In line with the PDF, the Cumulative Distribution Function (CDF), denoted by $K(x)$, calculates the cumulative probability for a given $x$-value. Formally, $K(x)$ gives the area under the probability density function up to the specified $x$,
\begin{equation} {K}(x) = P(\mu \leq x) = \int_{-\infty}^{x} \kappa(t) dt. \end{equation}
By the Fundamental Theorem of Calculus, we can rewrite the function $\kappa(x)$ as,
\begin{equation} \kappa(x) = K'(x) = \lim_{h \to 0} \frac{{K}(x+h) - {K}(x)}{h}. \end{equation}
Note that in the main paper, we denote by $\kappa_X^i(\cdot)$ the PDF, and by $\Phi_X^i(\cdot)$ the NAC function of the $i$-th neuron over the dataset $X$. In this appendix, we omit the superscript $i$ and subscript $X$ for simplicity.

\subsection{Approximation} \label{Sec:Appendix:Approx} In accordance with the main paper, the approximation of the PDF follows the typical histogram-based approach. Specifically, since the length of the neuron activation space is 1 (bounded by the sigmoid function), we approximate the PDF by partitioning the neuron activation space into $M$ equally-spaced intervals/bins (each bin has a width $h = 1/M$), such that
\begin{equation} \kappa(\hat{z}) \approx \frac{{K}(\hat{z}+h) - {K}(\hat{z})}{h} = \frac{P(\hat{z} < \mu \leq \hat{z}+h)}{h} \approx \frac{{O}(\hat{z})}{|X|} \cdot \frac{1}{h}, \end{equation}
where $\hat{z}$ is the neuron activation state, and ${O}(\hat{z})$ is the number of samples in the bin containing $\hat{z}$.

\bfstart{The choice of $r$} With the approximation of the PDF, we can rewrite the NAC function as,
\begin{equation} \begin{aligned} \Phi(\hat{z};r) &= \frac{1}{r}{\rm min}(\kappa(\hat{z}), r) = {\rm min}(\frac{\kappa(\hat{z})}{r} , 1) \approx {\rm min}(\frac{{O}(\hat{z})}{|X|h} \cdot \frac{1}{r},1 ), \end{aligned} \end{equation}
where $r$ denotes the lower bound for achieving full coverage \textit{w.r.t.} state $\hat{z}$. However, for the above formulation, it could be challenging to search for a suitable $r$, since various factors (\textit{e}.\textit{g}., the InD dataset size $|X|$) could affect the significance of the NAC scores $\Phi(\hat{z};r)$. In this sense, to further simplify this formulation in practical deployment, we set $r = \frac{O^*}{|X|h}$, such that
\begin{equation} \Phi(\hat{z};r) \approx {\rm min}(\frac{{O}(\hat{z})}{|X|h} \cdot \frac{1}{r},1 ) = {\rm min}(\frac{{O}(\hat{z})}{O^*},1 ), \end{equation}
where $O^*$ represents the minimum number of samples required to fill a bin completely, and ${O}(\hat{z})$ is the number of samples in the bin containing $\hat{z}$. In this way, we can directly manipulate $O^*$ to control the NAC function in practical deployment.
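To make this approximation concrete, the following is a minimal NumPy sketch (a simplified illustration, not our released implementation) of the histogram-based NAC function for a single neuron, assuming the neuron states $\hat{z} \in [0,1]$ have already been extracted:
\begin{verbatim}
import numpy as np

M, O_STAR = 10000, 5  # number of bins; minimum count to fill a bin

def build_histogram(ind_states, M=M):
    """Per-bin sample counts O(.) over the InD set; bin width h = 1/M."""
    counts, _ = np.histogram(ind_states, bins=M, range=(0.0, 1.0))
    return counts

def nac(z_hat, counts, O_star=O_STAR, M=M):
    """Phi(z; r) ~= min(O(z) / O*, 1), with r = O* / (|X| h)."""
    idx = np.minimum((np.asarray(z_hat) * M).astype(int), M - 1)
    return np.minimum(counts[idx] / O_star, 1.0)

counts = build_histogram(np.random.beta(2.0, 5.0, size=50000))  # toy InD states
scores = nac(np.array([0.05, 0.50, 0.99]), counts)  # per-state coverage
\end{verbatim}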
\section{Experimental Details for OOD Detection}

\subsection{OOD Benchmarks} We conduct experiments following previous SoTA approaches~\cite{OOD_Detect:ReAct,OOD_Detect:GradNorm,OOD_Detect:SimpleAct,OOD_Detect:LINe,OOD_Detect:MOS,OOD_Detect:Energy}. Here we provide more details on the two benchmarks utilized: the ImageNet and CIFAR benchmarks.

\paragraph{Large-scale ImageNet benchmark} We employ ImageNet-1k~\cite{Setup:ImageNet} as the in-distribution dataset, and conduct evaluations on four out-of-distribution (OOD) test datasets, following the specified setup~\cite{OOD_Detect:MOS}:
\begin{enumerate}
\item \texttt{iNaturalist}~\cite{OOD_Dataset:iNaturalist}: This dataset consists of 859,000 images of plants and animals, covering over 5,000 different species. Each image is resized to a maximum dimension of 800 pixels. Following~\cite{OOD_Detect:MOS}, we evaluate our method on a randomly selected subset of 10,000 images, which are drawn from 110 classes that do not overlap with ImageNet-1k.
\item \texttt{SUN}~\cite{OOD_Dataset:SUN}: With over 130,000 images, \texttt{SUN} includes scenes from 397 categories. Since some categories overlap with ImageNet-1k, we randomly sample 10,000 images from 50 classes that are distinct from ImageNet labels for evaluation.
\item \texttt{Places}~\cite{OOD_Dataset:Places}: Similar to \texttt{SUN}, \texttt{Places} is another scene dataset encompassing comparable conceptual categories. We select a subset of 10,000 images across 50 classes that are not included in ImageNet-1k for evaluation.
\item \texttt{Textures}~\cite{OOD_Dataset:Textures}: This dataset contains 5,640 real-world texture images categorized into 47 classes. We utilize the entire dataset for evaluation purposes.
\end{enumerate}

\paragraph{CIFAR benchmark} CIFAR-10 and CIFAR-100 are widely employed as in-distribution (InD) datasets in current studies. CIFAR-10 consists of 10 classes, while CIFAR-100 contains 100 classes. We follow the standard split, utilizing 50,000 training images and 10,000 test images. Following~\cite{OOD_Detect:SimpleAct,OOD_Detect:ReAct,OOD_Detect:DICE}, to assess the performance of our approach, we conduct evaluations on six commonly used out-of-distribution (OOD) datasets, which are detailed below:
\begin{enumerate}
\item \texttt{SVHN}~\cite{OOD_Dataset:SVHN}: This dataset consists of color images depicting house numbers, encompassing ten classes representing digits 0 to 9. We utilize the entire test set, containing 26,032 images.
\item \texttt{LSUN-Crop}~\cite{OOD_Dataset:LSUN}: This dataset builds upon the LSUN dataset, containing 10,000 test images distributed across ten different scenes. For \texttt{LSUN-Crop}, we randomly crop LSUN image patches of size $32 \times 32$.
\item \texttt{LSUN-Resize}~\cite{OOD_Dataset:LSUN}: Likewise, \texttt{LSUN-Resize} is produced by downsampling each {LSUN} image to the size $32 \times 32$.
\item \texttt{iSUN}~\cite{OOD_Dataset:iSUN}: This dataset comprises the ground truth of gaze traces on images from the SUN dataset. We employ the entire dataset for evaluation, which contains 8,925 images.
\item \texttt{Places365}~\cite{OOD_Dataset:Places}: Places365 contains a vast collection of photographs depicting scenes, classified into 365 scene categories. The test set consists of 900 images per category. For evaluation, we utilize the entire test dataset containing 328,500 images, following~\cite{OOD_Detect:ReAct}.
\item \texttt{Textures}~\cite{OOD_Dataset:Textures}: The Textures dataset comprises 5,640 real-world texture images classified into 47 categories. We employ the entire dataset for evaluation purposes.
\end{enumerate}

\subsection{Model Architecture} In summary, our experiments on the ImageNet benchmark employ three model architectures:
\begin{itemize}
\item {\texttt{ResNet-50}}~\cite{tech:ResNet} is pretrained on ImageNet-1k. For this model, all images are resized to 224 $\times$ 224 at the test phase.
\item {\texttt{BiTS-R101x1}}~\cite{tech:BiT} is pretrained on ImageNet-1k with a ResNetv2-101~\cite{tech:ResNet} architecture\footnote{\texttt{https://github.com/google-research/big\_transfer}}. At test time, all images are resized to 480 $\times$ 480 for this model.
\item {\texttt{MobileNet-v2}}~\cite{tech:MobileNetv2} is also pretrained on ImageNet-1k. Similar to ResNet-50, all images are resized to 224 $\times$ 224 at the test phase.
\end{itemize}
For the CIFAR benchmarks, we employ the \texttt{ResNet-18}~\cite{tech:ResNet} architecture. In line with ReAct~\cite{OOD_Detect:ReAct}, we train a \texttt{ResNet-18} model using standard in-distribution data. We train the models for 100 epochs for both the CIFAR-10 and CIFAR-100 datasets. The initial learning rate is set to 0.1 and is reduced by a factor of 10 at epochs 50, 75, and 90.

\subsection{Hyperparameters} In Table~\ref{Appendix:Tab:OOD_Detection_ImgNet_Prams} and Table~\ref{Appendix:Tab:OOD_Detection_CIFAR_Prams}, we list the values of hyperparameters for different model architectures over the ImageNet and CIFAR benchmarks. In our experiments, we find that increasing the number of intervals and choosing deeper layers are beneficial for the final results.
\begin{table*}[!hb] \centering \resizebox{0.97\textwidth}{!}{ \begin{tabular}{l c l l} \toprule \parbox{2cm}{ \textbf{Architecture}} &\parbox{2.1cm}{ \textbf{Parameter}} &\parbox{7cm}{ \textbf{Denotation}} &\parbox{1.5cm}{\textbf{Value}} \\ \midrule \multirow{5}[3]{*}{\parbox{2.5cm}{ResNet-50}} &\parbox{2.1cm}{\centering -\qquad\qquad} &layer choice &Layer4\\ &\parbox{2.1cm}{\centering $M$\qquad\qquad} &number of intervals for PDF approximation &10000\\ &\parbox{2.1cm}{\centering $\alpha$\qquad\qquad} &sigmoid steepness &1000\\ &\parbox{2.1cm}{\centering $r$\qquad\qquad} &lower bound for full coverage &1.0\\ &\parbox{2.1cm}{\centering $O^*$\qquad\qquad} &minimum number of samples to fill a bin &5\\ \midrule \multirow{5}[3]{*}{\parbox{2.5cm}{BiTS-R101x1}} &\parbox{2.1cm}{\centering -\qquad\qquad} &layer choice &Block4\\ &\parbox{2.1cm}{\centering $M$\qquad\qquad} &number of intervals for PDF approximation &10000\\ &\parbox{2.1cm}{\centering $\alpha$\qquad\qquad} &sigmoid steepness &1000\\ &\parbox{2.1cm}{\centering $r$\qquad\qquad} &lower bound for full coverage &1.0\\ &\parbox{2.1cm}{\centering $O^*$\qquad\qquad} &minimum number of samples to fill a bin &5\\ \midrule \multirow{5}[3]{*}{\parbox{2.5cm}{MobileNet-v2}} &\parbox{2.1cm}{\centering -\qquad\qquad} &layer choice &Layer18\\ &\parbox{2.1cm}{\centering $M$\qquad\qquad} &number of intervals for PDF approximation &30000\\ &\parbox{2.1cm}{\centering $\alpha$\qquad\qquad} &sigmoid steepness &1000\\ &\parbox{2.1cm}{\centering $r$\qquad\qquad} &lower bound for full coverage &0.6\\ &\parbox{2.1cm}{\centering $O^*$\qquad\qquad} &minimum number of samples to fill a bin &1\\ \bottomrule \end{tabular} } \caption{Hyperparameters and their default values on the ImageNet benchmark.
Note that $r$ is computed based on $O^*$, as illustrated in Appendix~\ref{Sec:Appendix:Approx}.} \label{Appendix:Tab:OOD_Detection_ImgNet_Prams} \end{table*}
\begin{table*}[!hb] \centering \resizebox{0.97\textwidth}{!}{ \begin{tabular}{l c l l} \toprule \parbox{2cm}{ \textbf{Architecture}} &\parbox{2.1cm}{ \textbf{Parameter}} &\parbox{7cm}{ \textbf{Denotation}} &\parbox{1.5cm}{\textbf{Value}} \\ \midrule \multirow{5}[3]{*}{\parbox{2.5cm}{ResNet-18}} &\parbox{2.1cm}{\centering -\qquad\qquad} &layer choice &Layer4\\ &\parbox{2.1cm}{\centering $M$\qquad\qquad} &number of intervals for PDF approximation &10000\\ &\parbox{2.1cm}{\centering $\alpha$\qquad\qquad} &sigmoid steepness &1000\\ &\parbox{2.1cm}{\centering $r$\qquad\qquad} &lower bound for full coverage &1.0\\ &\parbox{2.1cm}{\centering $O^*$\qquad\qquad} &minimum number of samples to fill a bin &1\\ \bottomrule \end{tabular} } \caption{Hyperparameters and their default values on the CIFAR-10 and CIFAR-100 benchmarks. Note that $r$ is computed based on $O^*$, as illustrated in Appendix~\ref{Sec:Appendix:Approx}.} \label{Appendix:Tab:OOD_Detection_CIFAR_Prams} \end{table*}

\section{Experimental Details for OOD Generalization}

\subsection{Domainbed Benchmark}

\paragraph{Datasets} We conduct experiments on the DomainBed~\cite{Setup:DomainBed} benchmark, which is arguably a fairer benchmark for OOD generalization\footnote{\texttt{{https://github.com/facebookresearch/DomainBed.}}}. Excluding the synthetic digit datasets, we utilize four datasets:
\begin{enumerate}
\item \texttt{VLCS}~\cite{Dataset:VLCS} is composed of photographic domains, namely \texttt{Caltech101}, \texttt{LabelMe}, \texttt{SUN09}, and \texttt{VOC2007}. This dataset consists of 10,729 examples with dimensions (3, 224, 224) and 5 classes.
\item \texttt{PACS}~\cite{Dataset:PACS} consists of four domains: \texttt{art}, \texttt{cartoons}, \texttt{photos}, and \texttt{sketches}. It comprises a total of 9,991 examples with dimensions (3, 224, 224) and 7 classes.
\item \texttt{OfficeHome}~\cite{Dataset:OfficeHome} includes the domains \texttt{art}, \texttt{clipart}, \texttt{product}, and \texttt{real}. This dataset contains 15,588 examples of dimension (3, 224, 224) and 65 classes.
\item \texttt{TerraInc}~\cite{Dataset:TerraIncognita} is a collection of wildlife photographs captured by camera traps at various locations: \texttt{L100}, \texttt{L38}, \texttt{L43}, and \texttt{L46}. Our version of this dataset contains 24,788 examples of dimension (3, 224, 224) and 10 classes.
\end{enumerate}

\paragraph{Settings} To ensure the reliability of final results, the data from each domain is partitioned into two parts: 80\% for training or testing, and 20\% for validation. This process is repeated three times with different seeds, such that reported numbers represent the mean and standard errors across these three runs. In our experiments, we report \textit{leave-one-out} test accuracy scores, whereby results are averaged over cases that use a single domain for testing and the others for training. Besides, we set the total number of training steps to 5000, and the evaluation frequency to every 300 steps for all runs.

\paragraph{Model evaluation criteria} For model evaluation, we mainly compare our method with the \textit{validation criterion}, which measures model accuracy over the 20\% source-domain validation data. In addition, we also employ the \textit{oracle criterion} as the upper bound, which directly utilizes the accuracy over 20\% test-domain data for model evaluation. For more details, we refer readers to~\cite{Setup:DomainBed}.
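To make this protocol concrete, we provide a minimal Python sketch of the \textit{leave-one-out} evaluation loop below; \texttt{train\_and\_eval} is a hypothetical placeholder (stubbed so the sketch runs end-to-end), not our released implementation:
\begin{verbatim}
import random

def train_and_eval(train_domains, test_domain, seed):
    # Hypothetical placeholder: split each training domain 80/20 into
    # training/validation data, train for 5000 steps, evaluate every
    # 300 steps, select a checkpoint by the chosen criterion (validation
    # accuracy or NAC-ME), and return its test-domain accuracy.
    random.seed(hash((test_domain, seed)))
    return random.random()  # stub value

def leave_one_out(domains, seeds=(0, 1, 2)):
    """Hold out each domain in turn; average test accuracy over 3 trials."""
    per_domain = {}
    for test_domain in domains:
        train_domains = [d for d in domains if d != test_domain]
        accs = [train_and_eval(train_domains, test_domain, s) for s in seeds]
        per_domain[test_domain] = sum(accs) / len(accs)
    return sum(per_domain.values()) / len(per_domain), per_domain

avg_acc, per_domain = leave_one_out(["art", "cartoons", "photos", "sketches"])
\end{verbatim}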
\subsection{Metric: Rank Correlation} Rank correlation metrics are widely utilized to measure the extent to which an increase in the value of one random variable aligns with an increase in another random variable. Following~\cite{Criterion:SMA}, we utilize the Spearman Rank Correlation (RC) for assessing the relationship between OOD test accuracy and the model evaluation scores (\textit{i}.\textit{e}., InD validation accuracy or InD NAC scores). The rationale behind this choice is that during the training phase, the selection of the optimal model is frequently based on ranking model performance, such as validation accuracy. Therefore, utilizing the RC score enables us to directly measure the effectiveness of evaluation criteria in model selection (which naturally translates to early stopping). The value of RC ranges between -1 and 1, where a value of -1 signifies that the rankings of the two random variables are exactly opposite to each other, whereas a value of +1 indicates that the rankings are exactly the same. Furthermore, an RC score of 0 indicates no monotonic relationship between the two variables.

\subsection{Model Architecture} In our experiments, we employ four model architectures: ResNet-18~\cite{tech:ResNet}, ResNet-50~\cite{tech:ResNet}, Vit-t16~\cite{tech:ViT}, and Vit-b16~\cite{tech:ViT}. All of them are pretrained on the ImageNet dataset and serve as the initial weights. We utilize the Adam optimizer to optimize the initialized models, and set the learning rate to 5e-5. For other parameter choices, we refer readers to~\cite{tech:swad}.

\subsection{Hyperparameters} In the case of ResNet architectures, NAC computation is performed using the neurons in \textit{Layer-4}. For ResNet-50, Layer-4 consists of 2048 neurons, while ResNet-18 has 512 neurons. As for vision transformers, NAC computation utilizes the neurons in \textit{MLP-11}. In Vit-b16, MLP-11 comprises 3072 neurons, whereas Vit-t16 has 768 neurons. During this series of experiments, we employ the source-domain training data to formulate the NAC function. To mitigate noise in training samples, we only utilize training data that can be correctly classified to formulate the NAC function. In order to determine the hyperparameters for all models, we employ a random search based on the parameter distributions outlined in Table~\ref{Appendix:Tab:OOD_Gen_Params}. This random search approach can be seamlessly integrated with the DomainBed parameter-search table, thereby facilitating model adaptation.
\begin{table*}[!hb] \centering \resizebox{0.99\textwidth}{!}{ \begin{tabular}{l l l} \toprule \parbox{1.5cm}{ \textbf{Parameter}} &\parbox{5cm}{ \textbf{Denotation}} &\parbox{4cm}{\textbf{Random distribution}} \\ \midrule \parbox{1.5cm}{\centering $M$\qquad} &number of intervals for PDF approximation &10000\\ \parbox{1.5cm}{\centering $\alpha$\qquad} &sigmoid steepness &RandomChoice([1, 10, 100, 1000])\\ \parbox{1.5cm}{\centering $O^*$\qquad} &minimum number of samples to fill a bin &RandomChoice([1, 5, 10, 100, 1000, 5000])\\ \bottomrule \end{tabular} } \caption{Hyperparameters of our \texttt{NAC-ME} and their distributions for random search. Note that $r$ can be computed based on $O^*$, as illustrated in Appendix~\ref{Sec:Appendix:Approx}.} \label{Appendix:Tab:OOD_Gen_Params} \end{table*}

\section{Reproducibility} We will publicly release our code with detailed instructions.
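As a concrete illustration of the RC metric described above, the following minimal sketch (with toy checkpoint values, not real results) computes the rank correlation between a criterion's scores and OOD test accuracy across logged checkpoints, using \texttt{scipy.stats.spearmanr}:
\begin{verbatim}
from scipy.stats import spearmanr

# Toy scores for five checkpoints logged every 300 steps (not real results).
ood_test_acc = [55.1, 58.3, 60.2, 59.8, 61.0]
criterion    = [70.5, 72.1, 73.4, 72.9, 74.2]   # InD validation acc or NAC-ME

rc, _ = spearmanr(criterion, ood_test_acc)      # RC in [-1, 1]
best = max(range(len(criterion)), key=criterion.__getitem__)  # model selection
\end{verbatim}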
\subsection{Software and Hardware} All experiments are performed on a single NVIDIA GeForce RTX 3090 GPU, with Python version 3.8.11. The deep learning framework used is PyTorch 1.10.0, and Torchvision version 0.11.1 is utilized for image processing. We leverage CUDA 11.3 for GPU acceleration.

\subsection{Runtime Analysis} The total runtime of the experiments varies depending on the tasks and datasets. In the following, we provide details for the two OOD tasks with the ResNet-50 architecture, using a single NVIDIA GeForce RTX 3090 GPU. For OOD detection, the experiments (\textit{e}.\textit{g}., inference during the test phase) take approximately 10 minutes for all benchmarks. For OOD generalization, the experiments on average take approximately 4 hours for PACS and VLCS, 8 hours for OfficeHome, and 8.5 hours for TerraInc.

\section{Full Distribution Plots on ImageNet OOD Benchmark} \begin{figure*} \caption{Distribution of uncertainty scores \textit{w.r.t.} our \texttt{NAC-UE}. We visualize the results over three backbones, which are solely trained on ImageNet-1k.} \label{Appendix:Fig:OOD_Distr} \end{figure*}

\section{Full CIFAR Results} \begin{table*}[!hb] \centering \rotatebox{90}{ \begin{minipage}{0.91\textheight} \centering \adjustbox{width=0.91\textheight}{ \begin{tabular}{l cc cc cc cc cc cc cc} \toprule \multirow{3}{*}{Method} &\multicolumn{2}{c}{SVHN~\cite{OOD_Dataset:SVHN}} &\multicolumn{2}{c}{LSUN-Crop~\cite{OOD_Dataset:LSUN}} &\multicolumn{2}{c}{LSUN-Resize~\cite{OOD_Dataset:LSUN}} &\multicolumn{2}{c}{iSUN~\cite{OOD_Dataset:iSUN}} &\multicolumn{2}{c}{Textures~\cite{OOD_Dataset:Textures}} &\multicolumn{2}{c}{Places365~\cite{OOD_Dataset:Places}} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5}\cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-13} \cmidrule(lr){14-15} &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC &FPR95 &AUROC \\ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ &$\downarrow$ &$\uparrow$ \\ \midrule \multicolumn{15}{c}{\emph{OOD Benchmark: CIFAR-10}} \\ {MSP}~\cite{OOD_Detect:MSP} &59.66 &91.25 &45.21 &93.80 &51.93 &92.73 &54.57 &92.12 &66.45 &88.5 &62.46 &88.64 &56.71 &91.17 \\ {ODIN}~\cite{OOD_Detect:ODIN} &60.37 &88.27 &7.81 &98.58 &9.24 &98.25 &11.62 &97.91 &52.09 &89.17 &45.49 &90.58 &31.10 &93.79 \\ {Energy}~\cite{OOD_Detect:Energy} &54.41 &91.22 &10.19 &98.05 &23.45 &96.14 &27.52 &95.59 &55.23 &89.37 &42.77 &91.02 &35.60 &93.57 \\ {ReAct}~\cite{OOD_Detect:ReAct} &49.77 &92.18 &16.99 &97.11 &17.94 &96.98 &20.84 &96.46 &47.96 &91.55 &43.97 &91.33 &32.91 &94.27 \\ \rowcolor{LightGray} {\textbf{NAC-UE (Ours)}} &\textbf{0.08}{\tiny $\pm$ 0.00} &\textbf{99.97}{\tiny $\pm$0.00} &\textbf{0.02}{\tiny $\pm$0.03} &\textbf{100.00}{\tiny $\pm$0.01} &\textbf{0.02}{\tiny $\pm$0.11} &\textbf{99.99}{\tiny $\pm$0.03} &\textbf{0.17}{\tiny $\pm$0.34} &\textbf{99.96}{\tiny $\pm$0.07} &\textbf{0.66}{\tiny $\pm$0.00} &\textbf{99.81}{\tiny $\pm$0.00} &\textbf{0.00}{\tiny $\pm$0.00} &\textbf{100.00}{\tiny $\pm$0.00} &\textbf{0.16} &\textbf{99.95}\\ \midrule \multicolumn{15}{c}{\emph{OOD Benchmark: CIFAR-100}} \\ {MSP}~\cite{OOD_Detect:MSP} &81.32 &77.74 &70.11 &83.51 &82.46 &75.73 &82.26 &76.16 &85.11 &73.36 &83.06 &74.47 &80.72 &76.83 \\ {ODIN}~\cite{OOD_Detect:ODIN} &40.94 &93.29 &28.72 &94.51 &79.61 &82.13 &76.66 &83.51 &83.63 &72.37 &87.71 &71.46 &66.21 &82.88 \\ {Energy}~\cite{OOD_Detect:Energy} &81.74
&84.56 &34.78 &93.93 &73.57 &82.99 &73.36 &83.80 &85.87 &74.94 &82.23 &76.68 &71.93 &82.82 \\ {ReAct}~\cite{OOD_Detect:ReAct} &70.81 &88.24 &39.99 &92.51 &54.47 &89.56 &51.89 &90.12 &59.15 &87.96 &81.33 &76.49 &59.61 &87.48 \\ \rowcolor{LightGray} {\textbf{NAC-UE (Ours)}} &\textbf{0.01}{\tiny $\pm$0.00} &\textbf{99.97}{\tiny $\pm$0.01} &\textbf{0.00}{\tiny $\pm$0.01} &\textbf{99.98}{\tiny $\pm$0.01} &\textbf{0.00}{\tiny $\pm$0.00} &\textbf{99.99}{\tiny $\pm$0.01} &\textbf{0.00}{\tiny $\pm$0.00} &\textbf{99.99}{\tiny $\pm$0.01} &\textbf{1.19}{\tiny $\pm$0.20} &\textbf{99.74}{\tiny $\pm$0.02} &\textbf{0.00}{\tiny $\pm$0.00} &\textbf{99.99}{\tiny $\pm$0.01} &\textbf{0.20} &\textbf{99.94}\\ \bottomrule \end{tabular} } \captionsetup{width=.88\textheight} \caption{OOD detection results on CIFAR-10 and CIFAR-100. We report the performance over the ResNet-18 backbone, which is trained solely on the InD dataset, \textit{i}.\textit{e}., CIFAR. $\uparrow$ indicates that higher values are better, while $\downarrow$ indicates that lower values are better. The results of our method are averaged over 20 random seeds.} \end{minipage} } \label{Appendix:Tab:OOD_Detection_Full_CIFAR} \end{table*}

\section{Full DomainBed Results} \begin{table*}[h] \centering \adjustbox{max width=1\textwidth}{ \newcolumntype{g}{>{\columncolor{LightGray}}c} \begin{tabular}{l g gg gg gg gg gg} \toprule \rowcolor{white} \multirow{2}{*}{} &\multirow{2}{*}{Method} &\multicolumn{2}{c}{Caltech101} &\multicolumn{2}{c}{LabelMe} &\multicolumn{2}{c}{SUN09} &\multicolumn{2}{c}{VOC2007} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \rowcolor{white} & &RC &ACC &RC &ACC &RC &ACC &RC &ACC &RC &ACC \\ \midrule \rowcolor{white} &Oracle &- &96.00{\tiny $\pm$0.8} &- &65.10{\tiny $\pm$0.3} &- &71.34{\tiny $\pm$0.8} &- &76.24{\tiny $\pm$0.4} &- &77.17 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &36.03{\tiny $\pm$17.3} &95.38{\tiny $\pm$0.9} &\textbf{17.57}{\tiny $\pm$13.2} &\textbf{63.62}{\tiny $\pm$1.1} &50.33{\tiny $\pm$13.6} &67.73{\tiny $\pm$0.6} &33.17{\tiny $\pm$15.7} &73.75{\tiny $\pm$0.7} &34.27 &75.12\\ &{NAC-ME} &\textbf{71.41}{\tiny $\pm$2.6} &\textbf{96.17}{\tiny $\pm$0.7} &5.64{\tiny $\pm$1.0} &61.54{\tiny $\pm$1.3} &\textbf{65.28}{\tiny $\pm$8.3} &\textbf{71.44}{\tiny $\pm$0.8} &\textbf{65.69}{\tiny $\pm$10.4} &\textbf{75.23}{\tiny $\pm$0.5} &\textbf{52.00} &\textbf{76.10}\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN18}} &Oracle &- &98.47{\tiny $\pm$0.3} &- &68.69{\tiny $\pm$0.8} &- &73.46{\tiny $\pm$0.9} &- &78.07{\tiny $\pm$0.3} &- &79.67 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &20.75{\tiny $\pm$17.0} &98.00{\tiny $\pm$0.2} &\textbf{35.29}{\tiny $\pm$13.2} &\textbf{65.16}{\tiny $\pm$1.4} &33.01{\tiny $\pm$3.1} &70.37{\tiny $\pm$0.6} &\textbf{36.68}{\tiny $\pm$4.3} &\textbf{77.28}{\tiny $\pm$0.3} &\textbf{31.43} &\textbf{77.70}\\ &{NAC-ME} &\textbf{57.92}{\tiny $\pm$2.5} &\textbf{98.50}{\tiny $\pm$0.3} &-4.74{\tiny $\pm$3.9} &60.27{\tiny $\pm$0.6} &\textbf{35.54}{\tiny $\pm$13.2} &\textbf{70.88}{\tiny $\pm$2.1} &26.39{\tiny $\pm$7.2} &77.13{\tiny $\pm$0.8} &28.78 &76.69\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN50}} &Oracle &- &97.91{\tiny $\pm$0.3} &- &66.67{\tiny $\pm$0.3} &- &73.47{\tiny $\pm$0.9} &- &75.54{\tiny $\pm$0.1} &- &78.4 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &30.72{\tiny $\pm$2.0} &98.35{\tiny $\pm$0.2} &\textbf{45.92}{\tiny $\pm$3.3}
&\textbf{63.91}{\tiny $\pm$0.6} &50.65{\tiny $\pm$4.6} &72.62{\tiny $\pm$0.3} &35.21{\tiny $\pm$10.6} &\textbf{75.31}{\tiny $\pm$0.7} &40.63 &77.55\\ &{NAC-ME} &\textbf{55.47}{\tiny $\pm$1.5} &\textbf{98.47}{\tiny $\pm$0.1} &9.80{\tiny $\pm$1.6} &63.75{\tiny $\pm$0.4} &\textbf{72.96}{\tiny $\pm$6.6} &\textbf{74.02}{\tiny $\pm$0.2} &\textbf{42.81}{\tiny $\pm$4.1} &74.48{\tiny $\pm$0.2} &\textbf{45.26} &\textbf{77.68}\\ \midrule \rowcolor{white} \multirow{-3}[13]{*}{\rotatebox[origin=c]{90}{Vit-t16}} &Oracle &- &97.70{\tiny $\pm$0.6} &- &66.29{\tiny $\pm$0.5} &- &78.26{\tiny $\pm$0.4} &- &79.50{\tiny $\pm$0.9} &- &80.44 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &-11.27{\tiny $\pm$10.8} &96.85{\tiny $\pm$0.7} &\textbf{51.80}{\tiny $\pm$5.0} &\textbf{64.39}{\tiny $\pm$1.0} &\textbf{28.76}{\tiny $\pm$10.2} &\textbf{76.24}{\tiny $\pm$1.1} &\textbf{50.49}{\tiny $\pm$16.1} &\textbf{79.08}{\tiny $\pm$0.6} &29.94 &\textbf{79.14}\\ \multirow{-1}[9]{*}{\rotatebox[origin=c]{90}{Vit-b16}} &NAC-ME &\textbf{64.22}{\tiny $\pm$6.1} &\textbf{98.23}{\tiny $\pm$0.2} &16.99{\tiny $\pm$4.2} &63.69{\tiny $\pm$0.6} &19.36{\tiny $\pm$9.2} &76.15{\tiny $\pm$0.3} &34.48{\tiny $\pm$9.3} &78.13{\tiny $\pm$0.7} &\textbf{33.76} &79.05\\ \bottomrule \end{tabular} } \caption{OOD generalization results on VLCS dataset~\cite{Dataset:VLCS}. \textit{Oracle} indicates the upper bound. The training strategy is ERM~\cite{Baseline:ERM}. All scores are averaged over 3 random trials. } \label{Appendix:Tab:OOD_Gen_Full_VLCS} \end{table*} \begin{table*}[h] \centering \adjustbox{max width=1\textwidth}{ \newcolumntype{g}{>{\columncolor{LightGray}}c} \begin{tabular}{l g gg gg gg gg gg} \toprule \rowcolor{white} \multirow{2}{*}{} &\multirow{2}{*}{Method} &\multicolumn{2}{c}{Art} &\multicolumn{2}{c}{Cartoon} &\multicolumn{2}{c}{Photo} &\multicolumn{2}{c}{Sketch} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \rowcolor{white} & &RC &ACC &RC &ACC &RC &ACC &RC &ACC &RC &ACC \\ \midrule \rowcolor{white} &Oracle &- &78.48{\tiny $\pm$0.1} &- &75.05{\tiny $\pm$0.8} &- &94.26{\tiny $\pm$0.4} &- &72.74{\tiny $\pm$1.5} &- &80.13 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{72.22}{\tiny $\pm$5.1} &77.32{\tiny $\pm$0.7} &65.20{\tiny $\pm$6.6} &\textbf{71.91}{\tiny $\pm$0.7} &\textbf{60.87}{\tiny $\pm$7.1} &94.44{\tiny $\pm$0.2} &76.55{\tiny $\pm$1.2} &72.36{\tiny $\pm$1.1} &68.71 &79.01\\ &{NAC-ME} &71.32{\tiny $\pm$5.5} &\textbf{77.93}{\tiny $\pm$0.4} &\textbf{75.16}{\tiny $\pm$2.7} &71.54{\tiny $\pm$0.8} &60.62{\tiny $\pm$6.6} &\textbf{94.64}{\tiny $\pm$0.2} &\textbf{83.09}{\tiny $\pm$1.5} &\textbf{72.37}{\tiny $\pm$1.3} &\textbf{72.55} &\textbf{79.12}\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN18}} &Oracle &- &86.11{\tiny $\pm$0.5} &- &80.99{\tiny $\pm$0.4} &- &97.73{\tiny $\pm$0.2} &- &76.82{\tiny $\pm$1.1} &- &85.41 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &70.26{\tiny $\pm$9.1} &\textbf{86.72}{\tiny $\pm$0.5} &65.93{\tiny $\pm$10.3} &\textbf{78.86}{\tiny $\pm$1.3} &\textbf{38.73}{\tiny $\pm$12.3} &\textbf{97.83}{\tiny $\pm$0.1} &59.23{\tiny $\pm$11.4} &74.87{\tiny $\pm$1.1} &58.54 &84.57\\ &{NAC-ME} &\textbf{72.88}{\tiny $\pm$1.5} &\textbf{86.72}{\tiny $\pm$0.5} &\textbf{75.57}{\tiny $\pm$4.5} &78.48{\tiny $\pm$1.4} &29.25{\tiny $\pm$16.0} &97.68{\tiny $\pm$0.1} &\textbf{68.38}{\tiny $\pm$8.8} &\textbf{76.66}{\tiny $\pm$1.2} &\textbf{61.52} &\textbf{84.89}\\ \midrule \rowcolor{white} 
\multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN50}} &Oracle &- &75.68{\tiny $\pm$0.2} &- &66.01{\tiny $\pm$0.7} &- &96.26{\tiny $\pm$0.2} &- &49.36{\tiny $\pm$1.8} &- &71.83 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{88.40}{\tiny $\pm$4.3} &\textbf{75.59}{\tiny $\pm$0.2} &92.73{\tiny $\pm$1.7} &\textbf{65.41}{\tiny $\pm$0.4} &91.58{\tiny $\pm$2.5} &\textbf{96.13}{\tiny $\pm$0.2} &80.96{\tiny $\pm$4.7} &41.95{\tiny $\pm$2.2} &88.42 &69.77\\ &{NAC-ME} &87.42{\tiny $\pm$4.3} &\textbf{75.59}{\tiny $\pm$0.2} &\textbf{93.71}{\tiny $\pm$0.9} &64.09{\tiny $\pm$0.6} &\textbf{91.75}{\tiny $\pm$2.0} &\textbf{96.13}{\tiny $\pm$0.2} &\textbf{87.99}{\tiny $\pm$2.2} &\textbf{48.76}{\tiny $\pm$1.7} &\textbf{90.22} &\textbf{71.14}\\ \midrule \rowcolor{white} \multirow{-3}[13]{*}{\rotatebox[origin=c]{90}{Vit-t16}} &Oracle &- &94.79{\tiny $\pm$0.2} &- &83.99{\tiny $\pm$0.5} &- &99.55{\tiny $\pm$0.1} &- &78.06{\tiny $\pm$0.6} &- &89.10 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &9.56{\tiny $\pm$5.2} &\textbf{93.37}{\tiny $\pm$0.6} &41.50{\tiny $\pm$8.0} &82.36{\tiny $\pm$0.4} &\textbf{45.67}{\tiny $\pm$6.0} &\textbf{99.65}{\tiny $\pm$0.1} &29.41{\tiny $\pm$10.8} &72.07{\tiny $\pm$3.3} &31.54 &86.86\\ \multirow{-1}[9]{*}{\rotatebox[origin=c]{90}{Vit-b16}} &NAC-ME &\textbf{29.74}{\tiny $\pm$3.1} &\textbf{93.27}{\tiny $\pm$0.6} &\textbf{53.02}{\tiny $\pm$4.4} &\textbf{83.92}{\tiny $\pm$0.1} &38.97{\tiny $\pm$16.5} &99.40{\tiny $\pm$0.2} &\textbf{39.46}{\tiny $\pm$14.8} &\textbf{76.19}{\tiny $\pm$2.0} &\textbf{40.30} &\textbf{88.19}\\ \bottomrule \end{tabular} } \caption{OOD generalization results on PACS dataset~\cite{Dataset:PACS}. \textit{Oracle} indicates the upper bound. The training strategy is ERM~\cite{Baseline:ERM}. All scores are averaged over 3 random trials. 
} \label{Appendix:Tab:OOD_Gen_Full_PACS} \end{table*} \begin{table*} \centering \adjustbox{max width=1\textwidth}{ \newcolumntype{g}{>{\columncolor{LightGray}}c} \begin{tabular}{l g gg gg gg gg gg} \toprule \rowcolor{white} \multirow{2}{*}{} &\multirow{2}{*}{Method} &\multicolumn{2}{c}{Art} &\multicolumn{2}{c}{Clipart} &\multicolumn{2}{c}{Product} &\multicolumn{2}{c}{Real} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \rowcolor{white} & &RC &ACC &RC &ACC &RC &ACC &RC &ACC &RC &ACC \\ \midrule \rowcolor{white} &Oracle &- &47.99{\tiny $\pm$0.2} &- &41.99{\tiny $\pm$0.2} &- &66.22{\tiny $\pm$0.1} &- &67.79{\tiny $\pm$0.4} &- &56.00 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{86.36}{\tiny $\pm$1.9} &\textbf{47.68}{\tiny $\pm$0.3} &75.33{\tiny $\pm$3.2} &41.16{\tiny $\pm$0.6} &88.73{\tiny $\pm$3.3} &65.82{\tiny $\pm$0.1} &83.58{\tiny $\pm$3.1} &67.73{\tiny $\pm$0.4} &83.50 &55.60\\ &{NAC-ME} &86.19{\tiny $\pm$2.5} &\textbf{47.68}{\tiny $\pm$0.1} &\textbf{77.61}{\tiny $\pm$5.7} &\textbf{41.26}{\tiny $\pm$0.6} &\textbf{91.09}{\tiny $\pm$1.8} &\textbf{66.10}{\tiny $\pm$0.1} &\textbf{84.23}{\tiny $\pm$5.0} &\textbf{68.16}{\tiny $\pm$0.1} &\textbf{84.78} &\textbf{55.80}\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN18}} &Oracle &- &59.46{\tiny $\pm$0.6} &- &50.29{\tiny $\pm$0.5} &- &75.08{\tiny $\pm$0.2} &- &75.75{\tiny $\pm$0.2} &- &65.14 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &71.32{\tiny $\pm$4.2} &59.01{\tiny $\pm$0.5} &53.43{\tiny $\pm$6.5} &\textbf{50.29}{\tiny $\pm$0.4} &\textbf{81.21}{\tiny $\pm$5.7} &\textbf{74.96}{\tiny $\pm$0.5} &\textbf{65.77}{\tiny $\pm$7.0} &\textbf{75.88}{\tiny $\pm$0.2} &67.93 &65.04\\ &{NAC-ME} &\textbf{79.00}{\tiny $\pm$7.4} &\textbf{59.87}{\tiny $\pm$0.4} &\textbf{59.15}{\tiny $\pm$3.1} &50.19{\tiny $\pm$0.4} &78.68{\tiny $\pm$5.3} &74.66{\tiny $\pm$0.4} &60.13{\tiny $\pm$7.3} &75.86{\tiny $\pm$0.1} &\textbf{69.24} &\textbf{65.14}\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN50}} &Oracle &- &56.85{\tiny $\pm$0.2} &- &43.38{\tiny $\pm$0.4} &- &71.83{\tiny $\pm$0.1} &- &72.92{\tiny $\pm$0.1} &- &61.25 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{98.77}{\tiny $\pm$0.4} &\textbf{56.27}{\tiny $\pm$0.3} &97.79{\tiny $\pm$0.5} &43.35{\tiny $\pm$0.5} &98.45{\tiny $\pm$0.5} &\textbf{71.64}{\tiny $\pm$0.2} &98.86{\tiny $\pm$0.3} &\textbf{73.12}{\tiny $\pm$0.1} &98.47 &\textbf{61.09}\\ &{NAC-ME} &98.04{\tiny $\pm$0.3} &55.89{\tiny $\pm$0.2} &\textbf{98.94}{\tiny $\pm$0.4} &\textbf{43.43}{\tiny $\pm$0.5} &\textbf{98.86}{\tiny $\pm$0.4} &71.43{\tiny $\pm$0.1} &\textbf{99.43}{\tiny $\pm$0.2} &\textbf{73.12}{\tiny $\pm$0.1} &\textbf{98.82} &60.97\\ \midrule \rowcolor{white} \multirow{-3}[13]{*}{\rotatebox[origin=c]{90}{Vit-t16}} &Oracle &- &74.51{\tiny $\pm$6.6} &- &71.73{\tiny $\pm$10.3} &- &68.63{\tiny $\pm$8.9} &- &47.06{\tiny $\pm$6.5} &- &80.84 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{78.59}{\tiny $\pm$4.4} &78.32{\tiny $\pm$0.3} &52.94{\tiny $\pm$5.4} &\textbf{66.72}{\tiny $\pm$0.3} &59.15{\tiny $\pm$6.0} &87.46{\tiny $\pm$0.1} &\textbf{70.26}{\tiny $\pm$2.9} &89.30{\tiny $\pm$0.2} &65.24 &80.45\\ \multirow{-1}[9]{*}{\rotatebox[origin=c]{90}{Vit-b16}} &NAC-ME &75.41{\tiny $\pm$2.8} &\textbf{78.90}{\tiny $\pm$0.3} &\textbf{64.46}{\tiny $\pm$6.7} &66.62{\tiny $\pm$0.1} &\textbf{66.01}{\tiny $\pm$6.4} &\textbf{87.61}{\tiny $\pm$0.1} &67.65{\tiny $\pm$8.3} 
&\textbf{89.48}{\tiny $\pm$0.2} &\textbf{68.38} &\textbf{80.65}\\ \bottomrule \end{tabular} } \caption{OOD generalization results on OfficeHome dataset~\cite{Dataset:OfficeHome}. \textit{Oracle} indicates the upper bound. The training strategy is ERM~\cite{Baseline:ERM}. All scores are averaged over 3 random trials. } \label{Appendix:Tab:OOD_Gen_Full_Office} \end{table*} \begin{table*} \centering \adjustbox{max width=1\textwidth}{ \newcolumntype{g}{>{\columncolor{LightGray}}c} \begin{tabular}{l g gg gg gg gg gg} \toprule \rowcolor{white} \multirow{2}{*}{} &\multirow{2}{*}{Method} &\multicolumn{2}{c}{Loc100} &\multicolumn{2}{c}{Loc38} &\multicolumn{2}{c}{Loc43} &\multicolumn{2}{c}{Loc46} &\multicolumn{2}{c}{Average} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \rowcolor{white} & &RC &ACC &RC &ACC &RC &ACC &RC &ACC &RC &ACC \\ \midrule \rowcolor{white} &Oracle &- &54.82{\tiny $\pm$1.3} &- &36.20{\tiny $\pm$0.2} &- &51.75{\tiny $\pm$0.3} &- &34.51{\tiny $\pm$0.7} &- &44.32 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &13.48{\tiny $\pm$12.5} &45.20{\tiny $\pm$0.3} &46.32{\tiny $\pm$10.9} &\textbf{29.90}{\tiny $\pm$2.6} &55.15{\tiny $\pm$13.8} &47.69{\tiny $\pm$1.2} &40.11{\tiny $\pm$2.7} &\textbf{33.35}{\tiny $\pm$0.1} &38.77 &39.03\\ &{NAC-ME} &\textbf{13.81}{\tiny $\pm$12.8} &\textbf{48.25}{\tiny $\pm$2.3} &\textbf{46.81}{\tiny $\pm$11.1} &29.59{\tiny $\pm$2.7} &\textbf{55.72}{\tiny $\pm$13.2} &\textbf{51.02}{\tiny $\pm$0.3} &\textbf{40.77}{\tiny $\pm$0.8} &32.79{\tiny $\pm$0.4} &\textbf{39.28} &\textbf{40.41}\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN18}} &Oracle &- &55.62{\tiny $\pm$0.5} &- &45.12{\tiny $\pm$1.1} &- &56.92{\tiny $\pm$0.3} &- &43.26{\tiny $\pm$0.9} &- &50.23 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &43.95{\tiny $\pm$7.6} &49.08{\tiny $\pm$3.5} &\textbf{36.60}{\tiny $\pm$13.6} &37.44{\tiny $\pm$2.3} &\textbf{28.02}{\tiny $\pm$8.6} &56.12{\tiny $\pm$0.3} &39.71{\tiny $\pm$15.0} &\textbf{41.63}{\tiny $\pm$0.5} &37.07 &46.07\\ &{NAC-ME} &\textbf{50.98}{\tiny $\pm$8.5} &\textbf{49.65}{\tiny $\pm$3.5} &31.21{\tiny $\pm$19.0} &\textbf{41.44}{\tiny $\pm$2.4} &27.29{\tiny $\pm$7.8} &\textbf{57.03}{\tiny $\pm$0.4} &\textbf{54.33}{\tiny $\pm$12.7} &40.59{\tiny $\pm$0.9} &\textbf{40.95} &\textbf{47.18}\\ \midrule \rowcolor{white} \multirow{-3}[12]{*}{\rotatebox[origin=c]{90}{RN50}} &Oracle &- &52.03{\tiny $\pm$0.3} &- &27.35{\tiny $\pm$3.0} &- &49.41{\tiny $\pm$0.4} &- &35.50{\tiny $\pm$0.3} &- &41.08 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &\textbf{21.24}{\tiny $\pm$11.8} &43.51{\tiny $\pm$2.8} &13.15{\tiny $\pm$4.0} &20.85{\tiny $\pm$2.1} &20.02{\tiny $\pm$18.6} &46.55{\tiny $\pm$0.1} &\textbf{36.44}{\tiny $\pm$14.2} &34.20{\tiny $\pm$0.7} &22.71 &36.28\\ &{NAC-ME} &20.10{\tiny $\pm$12.4} &\textbf{44.15}{\tiny $\pm$3.4} &\textbf{17.97}{\tiny $\pm$1.8} &\textbf{23.43}{\tiny $\pm$1.0} &\textbf{21.08}{\tiny $\pm$18.5} &\textbf{47.49}{\tiny $\pm$0.5} &36.11{\tiny $\pm$16.1} &\textbf{34.80}{\tiny $\pm$0.6} &\textbf{23.82} &\textbf{37.47}\\ \midrule \rowcolor{white} \multirow{-3}[13]{*}{\rotatebox[origin=c]{90}{Vit-t16}} &Oracle &- &62.23{\tiny $\pm$0.4} &- &46.69{\tiny $\pm$1.9} &- &56.55{\tiny $\pm$0.4} &- &41.77{\tiny $\pm$0.3} &- &51.81 \\ \cmidrule(lr){3-12} \rowcolor{white} &Validation &-1.31{\tiny $\pm$3.1} &53.13{\tiny $\pm$2.0} &-16.91{\tiny $\pm$13.4} &\textbf{36.78}{\tiny $\pm$2.2} &-3.27{\tiny $\pm$9.5} &54.19{\tiny $\pm$0.2} &25.16{\tiny $\pm$7.0} 
&\textbf{37.84}{\tiny $\pm$0.4} &0.92 &45.49\\ \multirow{-1}[9]{*}{\rotatebox[origin=c]{90}{Vit-b16}} &NAC-ME &\textbf{32.68}{\tiny $\pm$13.5} &\textbf{57.95}{\tiny $\pm$2.3} &\textbf{22.39}{\tiny $\pm$9.4} &35.40{\tiny $\pm$1.6} &\textbf{51.31}{\tiny $\pm$3.5} &\textbf{55.00}{\tiny $\pm$0.6} &\textbf{33.01}{\tiny $\pm$1.4} &37.68{\tiny $\pm$1.6} &\textbf{34.84} &\textbf{46.51}\\ \bottomrule \end{tabular} } \caption{OOD generalization results on TerraInc dataset~\cite{Dataset:TerraIncognita}. \textit{Oracle} indicates the upper bound. The training strategy is ERM~\cite{Baseline:ERM}. All scores are averaged over 3 random trials. } \label{Appendix:Tab:OOD_Gen_Full_Terra} \end{table*} \end{document}