The Three Body Problem
Author : Cixin Liu
Publisher : Bloomsbury Publishing
The Three Body Problem Book Excerpt:
Read the award-winning, critically acclaimed, multi-million-copy-selling science-fiction phenomenon – soon to be a Netflix Original Series from the creators of Game of Thrones. 1967: Ye Wenjie witnesses Red Guards beat her father to death during China's Cultural Revolution. This singular event will shape not only the rest of her life but also the future of mankind. Four decades later, Beijing police ask nanotech engineer Wang Miao to infiltrate a secretive cabal of scientists after a spate of inexplicable suicides. Wang's investigation will lead him to a mysterious online game and immerse him in a virtual world ruled by the intractable and unpredictable interaction of its three suns. This is the Three-Body Problem and it is the key to everything: the key to the scientists' deaths, the key to a conspiracy that spans light-years and the key to the extinction-level threat humanity now faces. Praise for The Three-Body Problem: 'Your next favourite sci-fi novel' Wired 'Immense' Barack Obama 'Unique' George R.R. Martin 'SF in the grand style' Guardian 'Mind-altering and immersive' Daily Mail Winner of the Hugo and Galaxy Awards for Best Novel
Three Body Problem Set
Publisher : Remembrance of Earth's Past
Three Body Problem Set Book Excerpt:
The Three Body Problem Series
Publisher : Tor Books
Total Pages : 1472
The Three Body Problem Series Book Excerpt:
This discounted ebundle of the Three-Body Trilogy includes: The Three-Body Problem, The Dark Forest, Death's End "Wildly imaginative, really interesting." —President Barack Obama The Three-Body trilogy by New York Times bestseller Cixin Liu keeps you riveted with high-octane action, political intrigue, and unexpected twists in this saga of first contact with the extraterrestrial Trisolaris. The Three-Body Problem — An alien civilization on the brink of destruction captures the signal and plans to invade Earth. Meanwhile, on Earth, different camps start forming, planning to either welcome the superior beings and help them take over a world seen as corrupt, or to fight against the invasion. The Dark Forest — In The Dark Forest, the aliens' human collaborators may have been defeated, but the presence of the sophons, the subatomic particles that allow Trisolaris instant access to all human information remains. Humanity responds with the Wallfacer Project, a daring plan that grants four men enormous resources to design secret strategies, hidden through deceit and misdirection from Earth and Trisolaris alike. Three of the Wallfacers are influential statesmen and scientists, but the fourth is a total unknown. Luo Ji, an unambitious Chinese astronomer and sociologist, is baffled by his new status. All he knows is that he's the one Wallfacer that Trisolaris wants dead. Death's End — Half a century after the Doomsday Battle, Cheng Xin, an aerospace engineer from the early 21st century, awakens from hibernation in this new age. She brings with her knowledge of a long-forgotten program dating from the beginning of the Trisolar Crisis, and her very presence may upset the delicate balance between two worlds. Will humanity reach for the stars or die in its cradle? 
Other Books by Cixin Liu (Translated to English) The Remembrance of Earth's Past The Three-Body Problem The Dark Forest Death's End Other Books Ball Lightning At the Publisher's request, this title is being sold without Digital Rights Management Software (DRM) applied.
The Redemption of Time
Author : Baoshu
Publisher : A Three-Body Problem Novel
Category : Electronic Book
The Redemption of Time Book Excerpt:
Published with the blessing of Cixin Liu, The Redemption of Time extends the astonishing universe conjured by the Three-Body Trilogy. Death is no release for Yun Tianming – merely the first step on a journey that will place him on the frontline of a war that has raged since the beginning of time. At the end of the fourth year of the Crisis Era, Yun Tianming died. He was flash frozen, put aboard a spacecraft and launched on a trajectory to intercept the Trisolaran First Fleet. It was a desperate plan, a Trojan gambit almost certain to fail. But there was an infinitesimal chance that the aliens would find rebooting a human irresistible, and that someday, somehow, Tianming might relay valuable information back to Earth. And so he did. But not before he betrayed humanity. Now, after millennia in exile, Tianming has a final chance at redemption. A being calling itself The Spirit has recruited him to help wage war against a foe that threatens the existence of the entire universe – a challenge he will accept. But this time Tianming refuses to be a mere pawn... He has his own plans. You'll discover why the universe is a 'dark forest', and for the first time, you'll come face-to-face with a Trisolaran...
Author : Mauri Valtonen,Hannu Karttunen
Publisher : Cambridge University Press
"How do three celestial bodies move under their mutual gravitational attraction? It is a problem that has been studied by Isaac Newton and leading mathematicians over the last two centuries. Poincare's conclusion that the problem represents an example of chaos in nature opens new possibilities of dealing with it: a statistical approach. For the first time such methods are presented in a systematic way. The book surveys statistical methods as well as more traditional methods, suitable for students of celestial mechanics at advanced undergraduate level."--BOOK JACKET.
Poincare and the Three Body Problem
Author : June Barrow-Green
Publisher : American Mathematical Soc.
Release : 1997
Poincare and the Three Body Problem Book Excerpt:
Poincare's famous memoir on the three body problem arose from his entry in the competition celebrating the 60th birthday of King Oscar of Sweden and Norway. His essay won the prize and had already been set up in print as a paper in Acta Mathematica when it was found to contain a deep and critical error. In correcting this error Poincare discovered mathematical chaos, as is now clear from June Barrow-Green's pioneering study of a copy of the original memoir annotated by Poincare himself, recently discovered in the Institut Mittag-Leffler in Stockholm. Poincare and the Three Body Problem opens with a discussion of the development of the three body problem itself and Poincare's related earlier work. The book also contains intriguing insights into the contemporary European mathematical community revealed by the workings of the competition. After an account of the discovery of the error and a detailed comparative study of both the original memoir and its rewritten version, the book concludes with an account of the final memoir's reception, influence and impact, and an examination of Poincare's subsequent highly influential work in celestial mechanics.
Author : C. Marchal
Publisher : Elsevier
Recent research on the theory of perturbations, the analytical approach and the quantitative analysis of the three-body problem have reached a high degree of perfection. The use of electronics has aided developments in quantitative analysis and has helped to disclose the extreme complexity of the set of solutions. This accelerated progress has given new orientation and impetus to the qualitative analysis that is so complementary to the quantitative analysis. The book begins with the various formulations of the three-body problem, the main classical results and the important questions and conjectures involved in this subject. The main part of the book describes the remarkable progress achieved in qualitative analysis which has shed new light on the three-body problem. It deals with questions such as escapes, captures, periodic orbits, stability, chaotic motions, Arnold diffusion, etc. The most recent tests of escape have yielded very impressive results and come very close to the true limits of escape, showing the domain of bounded motions to be much smaller than was expected. An entirely new picture of the three-body problem is emerging, and the book reports on this recent progress. The structure of the solutions for the three-body problem leads to a general conjecture governing the picture of solutions for all Hamiltonian problems. The periodic, quasi-periodic and almost-periodic solutions form the basis for the set of solutions and separate the chaotic solutions from the open solutions.
The Dark Forest
The Dark Forest Book Excerpt:
Read the award-winning, critically acclaimed, multi-million-copy-selling science-fiction phenomenon – soon to be a Netflix Original Series from the creators of Game of Thrones. Imagine the universe as a forest, patrolled by numberless and nameless predators. In this forest, stealth is survival – any civilisation that reveals its location is prey. Earth has. Now the predators are coming. Crossing light years, the Trisolarians will reach Earth in four centuries' time. But the sophons, their extra-dimensional agents and saboteurs, are already here. Only the individual human mind remains immune to their influence. This is the motivation for the Wallfacer Project, a last-ditch defence that grants four individuals almost absolute power to design secret strategies, hidden through deceit and misdirection from human and alien alike. Three of the Wallfacers are influential statesmen and scientists, but the fourth is a total unknown. Luo Ji, an unambitious Chinese astronomer, is baffled by his new status. All he knows is that he's the one Wallfacer that Trisolaris wants dead. Praise for The Three-Body Problem: 'Your next favourite sci-fi novel' Wired 'Immense' Barack Obama 'Unique' George R.R. Martin 'SF in the grand style' Guardian 'Mind-altering and immersive' Daily Mail Winner of the Hugo and Galaxy Awards for Best Novel
On Lagrange's Theory of the Three-Body Problem
Author : Karl Stumpff
Publisher : Unknown
Total Pages : 12
Category : Lagrange problem
On Lagrange's Theory of the Three-Body Problem Book Excerpt:
Author : Catherine Shaw
Publisher : Allison & Busby
Cambridge, 1888. When schoolmistress Vanessa Duncan learns of a murder at St John's College, little does she know that she will become deeply entangled in the mystery. Dr Geoffrey Akers, Fellow in Pure Mathematics, has been found dead, struck down by a violent blow to the head. What could provoke such a brutal act? Vanessa, finding herself in amongst Cambridge's brightest scholarly minds, discovers that the motive may lie in mathematics itself. Drawn closer to the case by a blossoming friendship with mathematician Arthur Weatherburn, Vanessa begins to investigate. When she learns of Sir Isaac Newton's elusive 'n-body problem' and the prestigious prize offered to anyone with a solution, things begin to make sense. But with further deaths occurring and the threat of an innocent man being condemned, Vanessa must hurry with her calculations...
The Three-body Problem from Pythagoras to Hawking
Author : Mauri Valtonen,Joanna Anosova,Konstantin Kholshevnikov,Aleksandr Mylläri,Victor Orlov,Kiyotaka Tanikawa
Publisher : Springer
Category : Science
The Three-body Problem from Pythagoras to Hawking Book Excerpt:
This book, written for a general readership, reviews and explains the three-body problem in its historical context, reaching to the latest developments in computational physics and gravitation theory. The three-body problem is one of the oldest problems in science and remains relevant in today's physics and astronomy. The long history of the problem from Pythagoras to Hawking parallels the evolution of ideas about our physical universe, with a particular emphasis on understanding gravity and how it operates between astronomical bodies. The oldest astronomical three-body problem is the question of how and when the moon and the sun line up with the earth to produce eclipses. Once universal gravitation was discovered by Newton, it immediately became a problem to understand why these three bodies form a stable system, in spite of the pull each exerts on the others. In fact, it was a big question whether this system is stable at all in the long run. Leading mathematicians attacked this problem over more than two centuries without arriving at a definite answer. The introduction of computers in the last half-century has revolutionized the study; now many answers have been found while new questions about the three-body problem have sprung up. One of the most recent developments has been in the treatment of the problem in Einstein's General Relativity, the new theory of gravitation which is an improvement on Newton's theory. Now it is possible to solve the problem for three black holes and to test one of the most fundamental theorems of black hole physics, the no-hair theorem, due to Hawking and his co-workers.
The Integral Manifolds of the Three Body Problem
Author : Christopher Keil McCord,Kenneth Ray Meyer,Quidong Wang
The Integral Manifolds of the Three Body Problem Book Excerpt:
The phase space of the spatial three-body problem is an open subset in ${\mathbb R}^{18}$. Holding the ten classical integrals of energy, center of mass, linear and angular momentum fixed defines an eight dimensional submanifold. For fixed nonzero angular momentum, the topology of this manifold depends only on the energy. This volume computes the homology of this manifold for all energy values. This table of homology shows that for negative energy, the integral manifolds undergo seven bifurcations. Four of these are the well-known bifurcations due to central configurations, and three are due to 'critical points at infinity'. This disproves Birkhoff's conjecture that the bifurcations occur only at central configurations.
The Three Body Problem and the Equations of Dynamics
Author : Henri Poincaré
The Three Body Problem and the Equations of Dynamics Book Excerpt:
Here is an accurate and readable translation of a seminal article by Henri Poincaré that is a classic in the study of dynamical systems popularly called chaos theory. In an effort to understand the stability of orbits in the solar system, Poincaré applied a Hamiltonian formulation to the equations of planetary motion and studied these differential equations in the limited case of three bodies to arrive at properties of the equations' solutions, such as orbital resonances and horseshoe orbits. Poincaré wrote for professional mathematicians and astronomers interested in celestial mechanics and differential equations. Contemporary historians of math or science and researchers in dynamical systems and planetary motion with an interest in the origin or history of their field will find his work fascinating.
The Three Body Problem 3: Death's End
The Three Body Problem 3: Death's End Book Excerpt:
Half a century after the Doomsday Battle, the uneasy balance of Dark Forest Deterrence keeps the Trisolaran invaders at bay. Earth enjoys unprecedented prosperity due to the infusion of Trisolaran knowledge. With human science advancing and the Trisolarans adopting Earth culture, it seems that the two civilizations can co-exist peacefully as equals without the terrible threat of mutually assured annihilation. But peace has also made humanity complacent. Cheng Xin, an aerospace engineer from the 21st century, awakens from hibernation in this new age. She brings knowledge of a long-forgotten program dating from the start of the Trisolar Crisis, and her presence may upset the delicate balance between two worlds. Will humanity reach for the stars or die in its cradle?
Death's End
Publisher : Macmillan
Death's End Book Excerpt:
Soon to be a Netflix Original Series! "The War of the Worlds for the 21st century... packed with a sense of wonder." – Wall Street Journal The New York Times bestselling conclusion to a tour de force near-future adventure trilogy from China's bestselling and beloved science fiction writer. With The Three-Body Problem, English-speaking readers got their first chance to read China's most beloved science fiction author, Cixin Liu. The Three-Body Problem was released to great acclaim including coverage in The New York Times and The Wall Street Journal and reading list picks by Barack Obama and Mark Zuckerberg. It also won the Hugo and Nebula Awards, making it the first translated novel to win a major SF award. Now this epic trilogy concludes with Death's End. Half a century after the Doomsday Battle, the uneasy balance of Dark Forest Deterrence keeps the Trisolaran invaders at bay. Earth enjoys unprecedented prosperity due to the infusion of Trisolaran knowledge. With human science advancing daily and the Trisolarans adopting Earth culture, it seems that the two civilizations will soon be able to co-exist peacefully as equals without the terrible threat of mutually assured annihilation. But the peace has also made humanity complacent. Cheng Xin, an aerospace engineer from the early twenty-first century, awakens from hibernation in this new age. She brings with her knowledge of a long-forgotten program dating from the beginning of the Trisolar Crisis, and her very presence may upset the delicate balance between two worlds. Will humanity reach for the stars or die in its cradle? The Three-Body Problem Series The Three-Body Problem The Dark Forest Death's End Other Books Ball Lightning Supernova Era To Hold Up The Sky (forthcoming) At the Publisher's request, this title is being sold without Digital Rights Management Software (DRM) applied.
Lectures on the Singularities of the Three-Body Problem
Author : Carl Ludwig Siegel,K. Balagangadharah,M. K. Venkatesha Murthy
Category : Differential equations
Lectures on the Singularities of the Three-Body Problem Book Excerpt:
The Ascent of Science
Author : Brian L. Silver
Publisher : Oxford University Press
The Ascent of Science Book Excerpt:
From the revolutionary discoveries of Galileo and Newton to the mind-bending theories of Einstein and Heisenberg, from plate tectonics to particle physics, from the origin of life to universal entropy, and from biology to cosmology, here is a sweeping, readable, and dynamic account of the whole of Western science. In the readable manner and method of Stephen Jay Gould and Carl Sagan, the late Brian L. Silver translates our most important, and often most obscure, scientific developments into a vernacular that is not only accessible and illuminating but also enjoyable. Silver makes his comprehensive case with much clarity and insight; he locates science as the apex of human reason, and reason as our best path to the truth. For all readers curious about--and especially those perhaps intimidated by--what Silver calls "the scientific campaign up to now" in his Preface, The Ascent of Science will be fresh, vivid, and fascinating reading.
Hold Up The Sky
Hold Up The Sky Book Excerpt:
A Financial Times Book of the Year From the author of The Three-Body Problem, a collection of award-winning short stories – a breathtaking selection of diamond-hard science fiction. In Hold Up the Sky, Cixin Liu takes us across time and space, from a rural mountain community where elementary students must use physics to prevent an alien invasion; to coal mines in northern China where new technology will either save lives or unleash a fire that will burn for centuries; to a time very much like our own, when superstring computers predict our every move; to 10,000 years in the future, when humanity is finally able to begin anew; to the very collapse of the universe itself. Written between 1999 and 2017 and never before published in English, these stories came into being during decades of major change in China and will take you across time and space through the eyes of one of science fiction's most visionary writers. Experience the limitless and pure joy of Cixin Liu's writing and imagination in this stunning collection. Praise for Cixin Liu: 'Cixin's trilogy is SF in the grand style, a galaxy-spanning, ideas-rich narrative of invasion and war' GUARDIAN 'Wildly imaginative, really interesting... The scope of it was immense' BARACK OBAMA, 44th President of the United States 'A unique blend of scientific and philosophical speculation, politics and history, conspiracy theory and cosmology' GEORGE R.R. MARTIN 'China's answer to Arthur C. Clarke' NEW YORKER
Nuclear Science Abstracts
Category : Nuclear energy
Nuclear Science Abstracts Book Excerpt:
Geometrical Themes Inspired by the N-body Problem
Author : Luis Hernández-Lamoneda,Haydeé Herrera,Rafael Herrera
Geometrical Themes Inspired by the N-body Problem Book Excerpt:
Presenting a selection of recent developments in geometrical problems inspired by the N-body problem, these lecture notes offer a variety of approaches to study them, ranging from variational to dynamical, while developing new insights, making geometrical and topological detours, and providing historical references. A. Guillot's notes aim to describe differential equations in the complex domain, motivated by the evolution of N particles moving on the plane subject to the influence of a magnetic field. Guillot studies such differential equations using different geometric structures on complex curves (in the sense of W. Thurston) in order to find isochronicity conditions. R. Montgomery's notes deal with a version of the planar Newtonian three-body equation. Namely, he investigates the problem of whether every free homotopy class is realized by a periodic geodesic. The solution involves geometry, dynamical systems, and the McGehee blow-up. A novelty of the approach is the use of energy-balance in order to motivate the McGehee transformation. A. Pedroza's notes provide a brief introduction to Lagrangian Floer homology and its relation to the solution of the Arnol'd conjecture on the minimal number of non-degenerate fixed points of a Hamiltonian diffeomorphism.
\begin{document}
\title{Fast \dpp Sampling for \nys\\ with Application to Kernel Methods}
\begin{abstract}
The Nystr\"om\xspace method has long been popular for scaling up kernel methods. Its theoretical guarantees and empirical performance rely critically on the quality of the \emph{landmarks} selected. We study landmark selection for Nystr\"om\xspace using Determinantal Point Processes (\textsc{Dpp}\xspace{}s), discrete probability models that allow tractable generation of \emph{diverse} samples. We prove that landmarks selected via \textsc{Dpp}\xspace{}s guarantee bounds on approximation errors; subsequently, we analyze implications for kernel ridge regression. Contrary to prior reservations due to cubic complexity of \textsc{Dpp}\xspace sampling, we show that (under certain conditions) Markov chain \textsc{Dpp}\xspace sampling requires only \emph{linear} time in the size of the data. We present several empirical results that support our theoretical analysis, and demonstrate the superior performance of \textsc{Dpp}\xspace-based landmark selection compared with existing approaches. \end{abstract}
\section{Introduction} \label{sec:introduction} Low-rank matrix approximation is an important ingredient of modern machine learning methods. Numerous learning tasks rely on multiplication and inversion of matrices, operations that scale cubically in the number of data points $N$, and therefore quickly become a bottleneck for large data. In such cases, low-rank matrix approximations promise speedups with a tolerable loss in accuracy.
A notable instance is the \emph{Nystr\"om method} \citep{nystrom1930praktische,williams2001using}, which takes a positive semidefinite matrix $K\in\mathbb{R}^{N\times N}$ as input, selects from it a small subset $C$ of columns $K_{\cdot,C}$, and constructs the approximation $\tilde{K} = K_{\cdot,C}K_{C,C}^\dagger K_{C,\cdot}$. The matrix $\tilde{K}$ is then used in place of $K$, which can decrease runtimes from $\co(N^3)$ to $\co(N|C|^3)$, a huge savings (since typically $|C|\ll N$).
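For concreteness, the standard Nystr\"om\xspace construction above can be sketched in a few lines of NumPy (an illustrative helper, not the implementation used in our experiments):

```python
import numpy as np

def nystrom_approx(K, C):
    """Standard Nystrom approximation K_{.,C} K_{C,C}^+ K_{C,.}
    of a PSD matrix K, given a list of landmark indices C."""
    K_nC = K[:, C]              # selected columns K_{.,C}
    K_CC = K[np.ix_(C, C)]      # landmark submatrix K_{C,C}
    return K_nC @ np.linalg.pinv(K_CC) @ K_nC.T
```

If $C$ spans the column space of $K$ (for instance $C = [N]$), the approximation is exact; in general only the selected rows and columns are reproduced exactly (when $K_{C,C}$ is invertible).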
Since its introduction into machine learning, the Nystr\"om method has been applied to a wide spectrum of problems, including kernel ICA \cite{bach2003kernel,shen2009fast}, kernel and spectral methods in computer vision \cite{belabbas2009landmark,fowlkes2004spectral}, manifold learning~\cite{talwalkar2008large,talwalkar2013large}, regularization~\cite{rudi2015less}, and efficient approximate sampling \cite{affandi2013nystrom}. Recent work~\cite{cortes2010impact,bach2012sharp,alaoui2014fast} shows risk bounds for Nystr\"om applied to various kernel methods.
The most important step of the Nystr\"om method is the selection of the subset $C$, the so-called \emph{landmarks}. This choice governs the approximation error and subsequent performance of the approximated learning methods~\cite{cortes2010impact}. The most basic strategy is to sample landmarks uniformly at random~\cite{williams2001using}. More sophisticated non-uniform selection strategies include deterministic greedy schemes \cite{smola2000sparse}, incomplete Cholesky decomposition \cite{fine2002efficient,bach2005predictive}, sampling with probabilities proportional to diagonal values~\cite{drineas2005nystrom} or to column norms~\cite{drineas2006fast}, sampling based on leverage scores~\cite{gittens2013revisiting}, via K-means~\cite{zhang2008improved}, or using submatrix determinants~\cite{belabbas2009spectral}.
We study landmark selection using \emph{Determinantal Point Processes (\textsc{Dpp}\xspace)}, discrete probability models that allow tractable sampling of diverse non-independent subsets~\cite{macchi1975coincidence,kulesza2012determinantal}. Our work generalizes the determinant based scheme of~\citet{belabbas2009spectral}.\footnote{The authors do not make any connection to \textsc{Dpp}\xspace{}s.} We refer to our scheme as \textsc{Dpp}\xspace-Nystr\"om\xspace, and analyze it from several perspectives.
A key quantity in our analysis is the error of the Nystr\"om\xspace approximation. Suppose $k$ is the target rank; then for selecting $c\ge k$ landmarks, Nystr\"om\xspace's error is typically measured using the Frobenius or spectral norm relative to the best achievable error via rank-$k$ SVD $K_k$; i.e., we measure \begin{align*}\label{eq:frospebound}
{\|K - K_{\cdot,C}K_{C,C}^\dagger K_{C,\cdot}\|_F\over \|K - K_k\|_F} \quad \text{ or } \quad
{\|K - K_{\cdot,C}K_{C,C}^\dagger K_{C,\cdot}\|_2\over \|K - K_k\|_2}. \end{align*} Several authors also use additive instead of relative bounds. However, such bounds are very sensitive to scaling, and become loose even if a single entry of the matrix is large. Thus, we focus on the above relative error bounds.
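This error measure is straightforward to evaluate numerically; the following self-contained sketch (names are ours, for illustration) obtains the rank-$k$ reference $K_k$ from the SVD:

```python
import numpy as np

def relative_frobenius_error(K, C, k):
    """||K - K_{.,C} K_{C,C}^+ K_{C,.}||_F / ||K - K_k||_F, where
    K_k is the best rank-k approximation obtained from the SVD."""
    K_tilde = K[:, C] @ np.linalg.pinv(K[np.ix_(C, C)]) @ K[C, :]
    U, s, Vt = np.linalg.svd(K)
    K_k = (U[:, :k] * s[:k]) @ Vt[:k]   # truncated SVD reconstruction
    return (np.linalg.norm(K - K_tilde, 'fro')
            / np.linalg.norm(K - K_k, 'fro'))
```

Note that for $c = k$ the ratio is at least $1$, since $\tilde{K}$ then has rank at most $k$.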
First, we analyze this approximation error. Previous analyses~\cite{belabbas2009spectral} fix a cardinality $c=k$; we allow the general case of selecting $c \ge k$ columns. Our relative error bounds rely on the properties of characteristic polynomials. Empirically, \textsc{Dpp}\xspace-Nystr\"om\xspace obtains approximations competitive to state-of-the-art methods.
Second, we consider its impact on kernel methods. Specifically, we address the impact of Nystr\"om\xspace-based kernel approximations on kernel ridge regression. This task has been noted as the main application in~\cite{bach2012sharp,alaoui2014fast}. We show risk bounds of \textsc{Dpp}\xspace-Nystr\"om\xspace that hold in expectation. Empirically, it achieves the best performance among competing methods.
Third, we consider the efficiency of \textsc{Dpp}\xspace-Nystr\"om\xspace; specifically, its tradeoff between error and running time. Since its proposal, determinantal sampling has so far not been used widely in practice due to valid concerns about its scalability. We consider a Gibbs sampler for $k$-\textsc{Dpp}\xspace, and analyze its mixing time using a \emph{path coupling}~\cite{bubley1997path} argument. We prove that under certain conditions the chain is fast mixing, which implies a \emph{linear} running time for \textsc{Dpp}\xspace sampling of landmarks. Empirical results indicate that the chain yields favorable results within a small number of iterations, and the best efficiency-accuracy tradeoffs compared to state-of-the-art methods (Figure~\ref{fig:tradeoff}).
\vspace*{-5pt} \section{Background and Notation} Throughout, we are approximating a given positive semidefinite (PSD) matrix $K\in\mathbb{R}^{N\times N}$ with eigendecomposition $K = U\Lambda U^\top$ and eigenvalues $\lambda_1 \geq \ldots \geq \lambda_N$. We use $K_{i,\cdot}$ for the $i$-th row and $K_{\cdot,j}$ for the $j$-th column, and, likewise, $K_{C,\cdot}$ for the rows of $K$ and $K_{\cdot,C}$ for the columns of $K$ indexed by $C\subseteq [N]$. Finally, $K_{C,C}$ is the submatrix of $K$ with rows and columns indexed by $C$. In this notation, $K_k = U_{\cdot,[k]}\Lambda_{[k],[k]}U_{\cdot,[k]}^\top$ is the best rank-$k$ approximation to $K$ in both Frobenius and spectral norm. We write $r(\cdot)$ for the rank and $(\cdot)^\dagger$ for the pseudoinverse, and denote a decomposition of $K$ by $B^\top B$, where $B\in\mathbb{R}^{r(K)\times N}$.
\textbf{The Nystr\"om Method.}
The \emph{standard Nystr\"om} method selects a subset $C\subseteq [N]$ of $c=|C|$ \emph{landmarks}, and approximates $K$ with $K_{\cdot,C} K_{C,C}^\dagger K_{C,\cdot}$. The actual set of landmarks affects the approximation quality, and is hence the subject of a substantial body of research \cite{cortes2010impact,smola2000sparse,fine2002efficient,bach2005predictive,drineas2005nystrom,drineas2006fast,gittens2013revisiting,zhang2008improved,belabbas2009spectral}. Besides various landmark selection methods, there exist variations of the standard Nystr\"om method. The \emph{ensemble Nystr\"om method} \cite{kumar2009ensemble}, for instance, uses a weighted combination of approximations. The \emph{modified Nystr\"om method} constructs an approximation $K_{\cdot,C} K_{\cdot,C}^\dagger K K_{C,\cdot}^\dagger K_{C,\cdot}$ \cite{sun2015review}. In this paper, we focus on the standard Nystr\"om method.
\textbf{Determinantal Point Processes.} A \emph{determinantal point process} $\textsc{Dpp}\xspace(K)$ is a distribution over all subsets of a ground set $\cy$ of cardinality $N$ that is determined by a PSD kernel $K\in\mathbb{R}^{N\times N}$. The probability of observing a subset $C\subseteq [N]$ is proportional to $\det(K_{C,C})$, that is, \begin{align} \Pr(C) = \det(K_{C,C})/\det(K+I). \end{align}
When conditioning on a fixed cardinality, one obtains a $k$-\textsc{Dpp}\xspace~\cite{kulesza2011k}. To avoid confusion with the target rank $k$, and since we use cardinality $c=|C|$, we will refer to this distribution as $c$-\textsc{Dpp}\xspace\footnote{Note that we refer to \textsc{Dpp}\xspace-Nystr\"om\xspace as \texttt{kDPP}\xspace in experimental parts.}, and note that \begin{align*}
\Pr(C \mid |C| = c) &= \det(K_{C,C})e_c(K)^{-1}\llbracket\, |C|=c\rrbracket, \end{align*} where $e_c(K)$ is the $c$-th coefficient of the characteristic polynomial $\det(\lambda I - K) = \sum_{j=0}^N(-1)^je_j(K)\lambda^{N-j}$.
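The normalization $e_c(K)$ can be computed stably from the eigenvalues with the standard elementary-symmetric-polynomial recurrence; on a tiny matrix one can also brute-force check the identity $\sum_{|C|=c}\det(K_{C,C}) = e_c(K)$. A minimal sketch (the random matrix is illustrative):

```python
import numpy as np
from itertools import combinations

def elem_sym_poly(lams, c):
    """e_c of the eigenvalues via the recurrence e[j] += lam * e[j-1]."""
    e = np.zeros(c + 1)
    e[0] = 1.0
    for lam in lams:
        for j in range(c, 0, -1):
            e[j] += lam * e[j - 1]
    return e[c]

# brute-force check of  sum_{|C|=c} det(K_{C,C}) = e_c(K)  on a tiny PSD matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
K = A @ A.T
c = 3
lams = np.linalg.eigvalsh(K)
total = sum(np.linalg.det(K[np.ix_(C, C)]) for C in combinations(range(8), c))
# normalized c-DPP probability of one particular subset
prob = np.linalg.det(K[np.ix_((0, 2, 5), (0, 2, 5))]) / elem_sym_poly(lams, c)
```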
Sampling from a ($c$-)\textsc{Dpp}\xspace can be done in polynomial time, but requires a full eigendecomposition of $K$~\cite{hough2006determinantal}, which is prohibitive for large $N$. A number of approaches have been proposed for more efficient sampling \cite{affandi2013nystrom,wang2014using,li2015efficient}. We follow an alternative approach based on Gibbs sampling and show that it can offer fast polynomial-time \textsc{Dpp}\xspace sampling and Nystr\"om approximations.
\section{\textsc{Dpp}\xspace for the Nystr\"om\xspace Method} \label{sec:dppnys} Next, we consider sampling $c$ landmarks $C\subseteq [N]$ from $c$-\textsc{Dpp}\xspace{($K$)}, and use the approximation $\tilde{K} = K_{\cdot,C} K_{C,C}^\dagger K_{C,\cdot}$. We call this approach \textsc{Dpp}\xspace-Nystr\"om\xspace. It was essentially introduced in~\cite{belabbas2009spectral}, but without making the explicit connection to {\textsc{Dpp}\xspace}s. Our analysis builds on this connection and subsumes existing results that only apply when $c$ equals the target rank $k$.
We begin with error bounds for matrix approximations: \begin{theorem}[Relative Error]\label{thm:nys} If $C \sim c$-\textsc{Dpp}\xspace{($K$)}, then \textsc{Dpp}\xspace-Nystr\"om\xspace satisfies the relative error bounds \begin{small} \begin{align*}
\mathbb{E}_C\left[{\|K - K_{\cdot C} (K_{C,C})^\dagger K_{C\cdot}\|_F \over \|K - K_k\|_F}\right] &\le \left(\frac{c+1}{c+1-k}\right)\sqrt{N-k}, \\
\mathbb{E}_C\left[{\|K - K_{\cdot C} (K_{C,C})^\dagger K_{C\cdot}\|_2 \over \|K - K_k\|_2}\right] &\le \left({c+1\over c+1-k}\right)(N-k). \end{align*} \end{small} \end{theorem}
These bounds hold in expectation. An additional argument based on \cite{pemantle2014concentration} yields high probability bounds, too (Appendix~\ref{append:sec:proof}).
To show Theorem~\ref{thm:nys}, we exploit a property of characteristic polynomials observed in~\cite{guruswami2012optimal}. But first recall that the coefficients of characteristic polynomials satisfy
$e_c(K) = \sum\nolimits_{|S| = c}\det(B_{\cdot,S}^\top B_{\cdot,S}) = e_c(\Lambda)$. \begin{lemma}[\protect{\citet{guruswami2012optimal}}]\label{lem:char}
For any $c \geq k > 0$, it holds that
\begin{align*}
{e_{c+1}(K)\over e_c(K)} \le {1\over c + 1 - k}\sum_{i > k} \lambda_i.
\end{align*} \end{lemma} With Lemma~\ref{lem:char} in hand, we are ready to prove Theorem~\ref{thm:nys}.
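Before the proof, Lemma~\ref{lem:char} can be sanity-checked numerically. The sketch below (illustrative random PSD matrix, not from the paper) verifies $e_{c+1}(K)/e_c(K) \le \frac{1}{c+1-k}\sum_{i>k}\lambda_i$ over a grid of $(k,c)$ pairs:

```python
import numpy as np

def elem_sym_poly(lams, c):
    """Elementary symmetric polynomial e_c via the standard recurrence."""
    e = np.zeros(c + 1)
    e[0] = 1.0
    for lam in lams:
        for j in range(c, 0, -1):
            e[j] += lam * e[j - 1]
    return e[c]

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
K = A @ A.T
lams = np.sort(np.linalg.eigvalsh(K))[::-1]   # lambda_1 >= ... >= lambda_N

holds = True
for k in range(1, 6):
    for c in range(k, 15):
        lhs = elem_sym_poly(lams, c + 1) / elem_sym_poly(lams, c)
        rhs = lams[k:].sum() / (c + 1 - k)    # (1/(c+1-k)) * sum_{i>k} lambda_i
        holds = holds and (lhs <= rhs * (1 + 1e-9))
```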
\begin{proof}[Proof (Thm.~\ref{thm:nys}).]
We begin with the Frobenius norm error, and then show the spectral norm result. Using the decomposition $K = B^\top B$, it holds that
\begin{align*}
\mathbb{E}_C &\left[\|K - K_{\cdot C} K_{C,C}^\dagger K_{C\cdot}\|_F\right] = \mathbb{E}_C \left[\|B^\top B - B^\top B_{\cdot,C}(B_{\cdot,C}^\top B_{\cdot,C})^\dagger B_{\cdot,C}^\top B\|_F\right]\\
&= \mathbb{E}_C \left[\|B^\top (I - B_{\cdot,C}(B_{\cdot,C}^\top B_{\cdot,C})^\dagger B_{\cdot,C}^\top) B\|_F\right]= \mathbb{E}_C \left[\|B^\top (I - U^C (U^C)^\top) B\|_F\right],
\end{align*} where $U^C\Sigma^C (V^C)^\top$ is the SVD of $B_{\cdot,C}$. Next, we extend $U^C\in\mathbb{R}^{r(K)\times c}$ to an orthogonal basis $[U^C\;(U^C)^\perp]\in\mathbb{R}^{r(K)\times r(K)}$ of $\mathbb{R}^{r(K)}$. Using that $I - U^C (U^C)^\top = (U^C)^\perp ((U^C)^\perp)^\top$ and applying Cauchy-Schwarz yields \begin{small}
\begin{align*}
\mathbb{E}_C &\left[\|B^\top (I - U^C (U^C)^\top) B\|_F \right]= \mathbb{E}_C \left[\|B^\top (U^C)^\perp ((U^C)^\perp)^\top B\|_F\right]\\
&= \mathbb{E}_C \left[\sqrt{\sum\nolimits_{i,j} (b_i^\top (U^C)^\perp ((U^C)^\perp)^\top b_j)^2}\right]\le \mathbb{E}_C \left[\sqrt{(\sum\nolimits_{i,j} \|b_i^\top (U^C)^\perp\|_2^2 \|b_j^\top (U^C)^\perp\|_2^2)}\right]\\
&= \mathbb{E}_C \left[\sum\nolimits_i \|b_i^\top (U^C)^\perp\|_2^2\right]= {1\over e_c(K)}\sum\nolimits_{|C| = c}\sum\nolimits_{i} \det(B_{\cdot,C}^\top B_{\cdot,C}) \|b_i^\top (U^C)^\perp\|_2^2\\
&\overset{(a)}= {1\over e_c(K)}\sum\nolimits_{|C| = c}\sum\nolimits_{i\notin C} \det(B_{\cdot,C\cup\{i\}}^\top B_{\cdot,C\cup\{i\}})\\
&\overset{(b)}{=} (c+1) {e_{c+1}(K)\over e_c(K)}.
\end{align*} \end{small} In $(a)$, we use that $(U^C)^\perp ((U^C)^\perp)^\top$ projects onto the orthogonal complement of the column space of $B_{\cdot,C}$, so the terms with $i\in C$ vanish and, for $i\notin C$, $\det(B_{\cdot,C}^\top B_{\cdot,C}) \|b_i^\top (U^C)^\perp\|_2^2 = \det(B_{\cdot,C\cup\{i\}}^\top B_{\cdot,C\cup\{i\}})$ (``base times height''); $(b)$ uses the definition of $e_{c+1}$, each $(c+1)$-set being counted $c+1$ times. With Lemma~\ref{lem:char}, it follows that
\begin{align*}
(c+1) &\tfrac{e_{c+1}(K)}{e_c(K)} \le \tfrac{c+1}{c+1-k}\sum\nolimits_{i > k}\lambda_i\\
&\le \tfrac{c+1}{c+1-k}\sqrt{N-k}\sqrt{\sum\nolimits_{i > k} \lambda_i^2}= \tfrac{c+1}{c+1-k}\sqrt{N-k} \|K - K_k\|_F.
\end{align*}
The bound on the Frobenius norm immediately implies the bound on the spectral norm: \begin{small}
\begin{align*}
\mathbb{E}_C &\left[\|K - K_{\cdot C} (K_{C,C})^\dagger K_{C\cdot}\|_2\right] \;\;\le \mathbb{E}_C \left[\|K - K_{\cdot C} K_{C,C}^\dagger K_{C\cdot}\|_F \right]\\
&\;\;\le {c+1\over c+1-k}\sqrt{N-k} \|K - K_k\|_F \;\;\le {c+1\over c+1-k}(N-k) \|K - K_k\|_2 \qedhere
\end{align*} \end{small} \end{proof}
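On a matrix small enough to enumerate all landmark sets, the expectation bound of Theorem~\ref{thm:nys} can be checked exactly by weighting each subset with its (unnormalized) $c$-\textsc{Dpp}\xspace probability. A minimal sketch with an illustrative random PSD matrix:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N, c, k = 8, 3, 2
A = rng.standard_normal((N, N))
K = A @ A.T
lams = np.sort(np.linalg.eigvalsh(K))[::-1]
opt = np.sqrt((lams[k:] ** 2).sum())            # ||K - K_k||_F

num = 0.0
Z = 0.0
for C in combinations(range(N), c):
    C = list(C)
    w = np.linalg.det(K[np.ix_(C, C)])          # unnormalized c-DPP weight
    Kt = K[:, C] @ np.linalg.pinv(K[np.ix_(C, C)]) @ K[C, :]
    num += w * np.linalg.norm(K - Kt, "fro")
    Z += w

expected_ratio = (num / Z) / opt                # E_C[ ||K - K_tilde||_F ] / ||K - K_k||_F
bound = (c + 1) / (c + 1 - k) * np.sqrt(N - k)
```

Note the expected ratio can fall below 1, since $\tilde{K}$ has rank up to $c > k$.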
\paragraph{Remarks.} Compared to previous bounds (e.g.,~\cite{gittens2013revisiting} for uniform and leverage score sampling), our bounds seem somewhat weaker asymptotically (as $c\to N$, they do not converge to 1). This suggests an opportunity for further tightening, which may be worthwhile given that, in Section~\refsec{sec:exp:app}, our extensive experiments on various datasets show that \textsc{Dpp}\xspace-Nystr\"om\xspace attains superior accuracy compared with various state-of-the-art methods.
\section{Low-rank Kernel Ridge Regression} Our theoretical (Section \ref{sec:dppnys}) and empirical (Section~\ref{sec:exp:app}) results suggest that \textsc{Dpp}\xspace-Nystr\"om\xspace is well-suited for scaling kernel methods. In this section, we analyze its implications for kernel ridge regression. The experiments in Section~\ref{sec:exp} confirm our results empirically.
We have $N$ training samples $\{(x_i,y_i)\}_{i=1}^N$, where $y_i = z_i + \epsilon_i$ are the observed labels under zero-mean noise with finite covariance. We minimize a regularized empirical loss \begin{align*}
\min_{f\in\cf}{1\over N}\sum_{i=1}^N \ell(y_i,f(x_i)) + {\gamma\over 2}\|f\|^2 \end{align*} over an RKHS $\mathcal{F}$. Equivalently, we solve the problem \begin{align*} \min_{\alpha\in\mathbb{R}^N}{1\over N} \sum_{i=1}^N \ell(y_i,(K\alpha)_i) + {\gamma\over 2}\alpha^\top K \alpha, \end{align*} for the corresponding kernel matrix $K$. With the squared loss $\ell(y,f(x)) = {1\over 2}(y - f(x))^2$, the resulting estimator is \begin{align}\label{eq:estimator} \hat{f}(x) &= \sum_{i=1}^N \hat{\alpha}_i k(x,x_i),\quad \hat{\alpha} = (K + N\gamma I)^{-1}y, \end{align} and the prediction for $\{x_i\}_{i=1}^N$ is given by $\hat{z} = K(K + N\gamma I)^{-1}y\in\mathbb{R}^N$. Denoting the noise covariance by $F$, we obtain the risk \begin{align}
\nonumber
\mathcal{R}&(\hat{z}) = \tfrac{1}{N}\mathbb{E}_{\varepsilon}\|\hat{z} - z\|^2 \\
\nonumber &= N\gamma^2 z^\top (K + N\gamma I)^{-2}z + \tfrac1N \tr(FK^2(K+N\gamma I)^{-2})\\
\label{eq:biasvar} &= \mathrm{bias}(K) + \mathrm{var}(K). \end{align} Observe that the bias term is matrix-decreasing (in $K$) while the variance term is matrix-increasing. Since the estimator~\eqref{eq:estimator} requires expensive matrix inversions, it is common to replace $K$ in \eqref{eq:estimator} by an approximation $\tilde{K}$. If $\tilde{K}$ is constructed via Nystr\"om\xspace we have $\tilde{K}\preceq K$, and it directly follows that the variance shrinks with this substitution, while the bias increases. Denoting the predictions from $\tilde{K}$ by $\hat{z}_{\tilde{K}}$, Theorem~\ref{thm:krr} completes the picture of how using $\tilde{K}$ affects the risk. \begin{theorem} \label{thm:krr} If $\tilde{K}$ is constructed via \textsc{Dpp}\xspace-Nystr\"om, then \begin{align*} \mathbb{E}_C \left[\sqrt{\mathcal{R}(\hat{z}_{\tilde{K}})\over \mathcal{R}(\hat{z})}\right]\le 1 + {(c+1)\over N\gamma }{e_{c+1}(K)\over e_c(K)}. \end{align*} \end{theorem} Again, using~\cite{pemantle2014concentration}, we obtain bounds that hold with high probability (Appendix~\ref{append:sec:proof}).
\begin{proof} We build on~\cite{bach2012sharp,alaoui2014fast}. Since $\mathrm{var}(\tilde{K})\le \mathrm{var}(K)$ because $\tilde{K}\preceq K$, it remains to bound the bias. Using $K = B^\top B$ and $\tilde{K} = B^\top B_{\cdot,C}(B_{\cdot,C}^\top B_{\cdot,C})^\dagger B_{\cdot,C}^\top B$, we obtain \begin{small} \begin{align*} K &- \tilde{K} = B^\top (I - B_{\cdot,C}(B_{\cdot,C}^\top B_{\cdot,C})^\dagger B_{\cdot,C}^\top) B \\
&= B^\top (U^C)^\perp ((U^C)^\perp)^\top B \preceq \|B^\top (U^C)^\perp ((U^C)^\perp)^\top B\|_F I \\ &= \sqrt{\sum\nolimits_{i,j} (b_i^\top (U^C)^\perp ((U^C)^\perp)^\top b_j)^2} I \\
&\preceq \sqrt{(\sum\nolimits_{i,j} \|b_i^\top (U^C)^\perp\|_2^2 \|b_j^\top (U^C)^\perp\|_2^2)} I \\
&= \sum\nolimits_i \|b_i^\top (U^C)^\perp\|_2^2 I = \nu_C I, \end{align*} \end{small}
where $\nu_C = \sum_i \|b_i^\top (U^C)^\perp\|_2^2 \le \sum_i \|b_i^\top\|_2^2 = \text{tr}(K)$. Since $(K-\tilde{K})$ and $\nu_CI$ commute, we have \begin{small} \begin{align*}
\|(\tilde{K} + & N\gamma I)^{-1}(K - \tilde{K})\|_2^2 \\
& = \|(\tilde{K} + N\gamma I)^{-1}(K - \tilde{K})^2(\tilde{K} + N\gamma I)^{-1}\|_2\\
&\le \nu_C^2 \|(\tilde{K} + N\gamma I)^{-2}\|_2 \le \Big({\nu_C\over N\gamma}\Big)^2. \end{align*} \end{small} It follows that \begin{small} \begin{align*}
\|(\tilde{K} + &N\gamma I)^{-1}z - (K + N\gamma I)^{-1}z\|_2 \\
&= \|(\tilde{K} + N\gamma I)^{-1}(K - \tilde{K})(K + N\gamma I)^{-1}z\|_2\\
&\le \|(\tilde{K} + N\gamma I)^{-1}(K - \tilde{K})\|_2 \|(K + N\gamma I)^{-1}z\|_2\\
&\le {\nu_C\over N\gamma} \|(K + N\gamma I)^{-1}z\|_2. \end{align*} \end{small} Hence, \begin{small} \begin{align*}
&\sqrt{z^\top (\tilde{K} + N\gamma I)^{-2} z} = \|(\tilde{K} + N\gamma I)^{-1}z\|_2\\
&\le \|(K + N\gamma I)^{-1}z\|_2 + \|(\tilde{K} + N\gamma I)^{-1}z - (K + N\gamma I)^{-1}z\|_2\\
&\le (1 + {\nu_C\over N\gamma}) \|(K + N\gamma I)^{-1}z\|_2\\ &= (1 + {\nu_C\over N\gamma}) \sqrt{z^\top (K + N\gamma I)^{-2}z}. \end{align*} \end{small} Finally, this inequality implies that \begin{small} \begin{align*} \sqrt{\mathrm{bias}(\tilde{K})\over \mathrm{bias}(K)}\le (1 + {\nu_C\over N\gamma}). \end{align*} \end{small}
Taking the expectation over $C\sim c$-\textsc{Dpp}\xspace{($K$)} yields \begin{small} \begin{align*} \mathbb{E}_C\left[\sqrt{\mathrm{bias}(\tilde{K})\over \mathrm{bias}(K)}\right] \le 1 + \mathbb{E}_C\left[{\nu_C \over N\gamma}\right]= 1 + {(c+1)\over N\gamma }{e_{c+1}(K)\over e_c(K)}. \end{align*} \end{small} Together with the fact that $\mathrm{var}(\tilde{K})\le \mathrm{var}(K)$, we obtain \begin{small} \begin{align} \nonumber \mathbb{E}_C \left[\sqrt{\mathcal{R}(\hat{z}_{\tilde{K}})\over \mathcal{R}(\hat{z})}\right] &= \mathbb{E}_C \left[\sqrt{\mathrm{bias}(\tilde{K}) + \mathrm{var}(\tilde{K})\over \mathrm{bias}(K) + \mathrm{var}(K)}\right]\\ &\le 1 + {(c+1)\over N\gamma }{e_{c+1}(K)\over e_c(K)} \end{align} \end{small} for any $k \leq c$. \end{proof}
\paragraph{Remarks.} Theorem~\ref{thm:krr} quantifies how the learning results depend on the decay of the spectrum of $K$. In particular, the ratio $e_{c+1}(K)/e_c(K)$ closely relates to the effective rank of $K$: if $\lambda_c > a$ and $\lambda_{c+1}\ll a$, this ratio is almost zero, resulting in near-perfect approximations and no loss in learning.
There exist works that consider Nystr\"om\xspace methods in this scenario~\cite{bach2012sharp,alaoui2014fast}. Our theoretical bounds could also be tightened in this setting, possibly via a tighter bound on the ratio of elementary symmetric polynomials. This theoretical exercise may be worthwhile given our extensive experiments in~\refsec{sec:exp:krr}, which compare \textsc{Dpp}\xspace-Nystr\"om\xspace against other state-of-the-art methods and reveal its superior performance.
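To make the substitution of $\tilde{K}$ into the estimator~\eqref{eq:estimator} concrete, the following minimal NumPy sketch fits kernel ridge regression with the exact kernel and with a Nystr\"om\xspace surrogate. The data, bandwidth, regularization, and uniformly chosen landmarks (a stand-in for a $c$-\textsc{Dpp}\xspace sample) are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
X = rng.uniform(-3, 3, (N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)   # noisy labels y = z + eps

def rbf(A, B, sigma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma**2))

gamma = 1e-3
K = rbf(X, X)
C = rng.choice(N, size=20, replace=False)            # placeholder landmark set
K_tilde = K[:, C] @ np.linalg.pinv(K[np.ix_(C, C)]) @ K[C, :]

# hat{alpha} = (K + N*gamma*I)^{-1} y, with K or its Nystrom surrogate
alpha_exact = np.linalg.solve(K + N * gamma * np.eye(N), y)
alpha_nys = np.linalg.solve(K_tilde + N * gamma * np.eye(N), y)

X_test = rng.uniform(-3, 3, (60, 1))
K_test = rbf(X_test, X)
mse_exact = np.mean((K_test @ alpha_exact - np.sin(X_test[:, 0])) ** 2)
mse_nys = np.mean((K_test @ alpha_nys - np.sin(X_test[:, 0])) ** 2)
```

With a low effective kernel rank, as in this one-dimensional example, the surrogate loses little accuracy while the $N\times N$ solve could be replaced by low-rank arithmetic.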
\section{Fast Mixing Markov Chain \textsc{Dpp}\xspace} \label{sec:mix} Despite its excellent empirical performance and strong theoretical results, determinantal sampling for Nystr\"om\xspace has rarely been used in applications due to the computational cost of $\co(N^3)$ for directly sampling from a \textsc{Dpp}\xspace, which involves an eigendecomposition. Instead, we follow a different route: an MCMC sampler, which offers a promising alternative if the chain mixes fast enough. Recent empirical results provide initial evidence \cite{kang2013fast}, but without a theoretical analysis\footnote{The analysis in \cite{kang2013fast} is not correct.}; other recent works~\citep{rebeschini2015fast,gotovos2015sampling} do not apply to our cardinality-constrained setting. We offer a theoretical analysis that confirms fast mixing (i.e., polynomial or even \emph{linear}-time sampling) under certain conditions, and connect it to our empirical results. The empirical results in Section~\ref{sec:exp} illustrate the favorable performance of \textsc{Dpp}\xspace-Nystr\"om\xspace in trading off time and error. Concurrently with this paper, \citet{anari2016monte} derived a different, general analysis of fast mixing that also confirms our observations.
Algorithm~\ref{algo:mcdpp} shows a Gibbs sampler for $c$-{\textsc{Dpp}\xspace}s. Starting with a uniformly random set $Y_0$, at iteration $t$, we try to swap an element $y^{\text{in}} \in Y_t$ with an element $y^{\text{out}} \notin Y_t$, with an acceptance probability determined by $\Pr(Y_t)$ and $\Pr(Y_{t} \cup \{y^{\text{out}}\} \setminus \{y^{\text{in}}\})$. The stationary distribution of this chain is exactly the desired $c$-\textsc{Dpp}\xspace{($K$)}.
\begin{algorithm}
\caption{Gibbs sampler for $c$-\textsc{Dpp}\xspace}\label{algo:mcdpp}
\begin{algorithmic}
\STATE \textbf{Input:} $K$ the kernel matrix, $\cy = [N]$ the ground set
\STATE \textbf{Output:} $Y$ sampled from exact $c$-\textsc{Dpp}\xspace{($K$)}
\STATE Randomly Initialize $Y\subseteq \cy$, $|Y|=c$
\WHILE{not mixed}
\STATE Sample $b$ from uniform Bernoulli distribution
\IF{$b = 1$}
\STATE Pick $y^{\text{in}}\in Y$ and $y^{\text{out}}\in \cy\backslash Y$ uniformly randomly \\
\STATE $q(y^{\text{in}},y^{\text{out}},Y)\leftarrow{\det(K_{Y\cup \{y^{\text{out}}\}\backslash\{y^{\text{in}}\}})\over \det(K_{Y\cup \{y^{\text{out}}\}\backslash\{y^{\text{in}}\}}) + \det(K_Y)}$
\STATE $Y\leftarrow Y\cup \{y^{\text{out}}\}\backslash\{y^{\text{in}}\}$ with prob. $q(y^{\text{in}},y^{\text{out}},Y)$
\ENDIF
\ENDWHILE
\end{algorithmic} \end{algorithm}
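Algorithm~\ref{algo:mcdpp} can be sketched directly in NumPy. The version below (an illustrative implementation, not the paper's code) evaluates the acceptance ratio in log-space via \texttt{slogdet} for numerical stability:

```python
import numpy as np

def gibbs_cdpp(K, c, steps, rng):
    """Gibbs sampler of Algorithm 1 for a c-DPP over [N]."""
    N = K.shape[0]
    Y = list(rng.choice(N, size=c, replace=False))
    logdet = lambda S: np.linalg.slogdet(K[np.ix_(S, S)])[1]
    cur = logdet(Y)
    for _ in range(steps):
        if rng.random() < 0.5:                  # lazy step (b = 0): stay put
            continue
        i = int(rng.integers(c))                # y_in = Y[i], chosen uniformly
        y_out = int(rng.choice([j for j in range(N) if j not in Y]))
        Y_new = Y.copy()
        Y_new[i] = y_out
        new = logdet(Y_new)
        # q = det(K_new) / (det(K_new) + det(K_old)), computed in log-space
        q = 1.0 / (1.0 + np.exp(cur - new))
        if rng.random() < q:
            Y, cur = Y_new, new
    return sorted(Y)

# illustrative usage on a small full-rank PSD matrix
rng = np.random.default_rng(5)
A = rng.standard_normal((12, 12))
K = A @ A.T
Y = gibbs_cdpp(K, 4, 500, rng)
```

This naive version recomputes a $c\times c$ log-determinant per step; Section~\ref{sec:mix} notes that each step can instead be done in $\co(c^2)$.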
The \emph{mixing time} $\tau(\varepsilon)$ of the chain is the number of iterations until the distribution over the states (subsets) is close to the desired one, as measured by total variation: $\tau(\varepsilon) = \min\{t \mid \max_{Y_0} \mathrm{TV}(Y_t,\pi) \leq \varepsilon \}$. We bound $\tau(\varepsilon)$ via coupling techniques. Given a Markov chain $(Y_t)$ on a state space $\Omega$ with transition matrix $P$, a \emph{coupling} is a new chain $(Y_t,Z_t)$ on $\Omega\times \Omega$ such that both $(Y_t)$ and $(Z_t)$, considered marginally, are Markov chains with the same transition matrix $P$. The key is to construct the new chain so that $Y_t$ and $Z_t$ \emph{coalesce} quickly. If, in the new chain, $\Pr(Y_t\ne Z_t)\le \varepsilon$ for some fixed $t$ regardless of the starting state $(Y_0,Z_0)$, then $\tau(\varepsilon)\le t$~\cite{aldous1982some}.
Such coalescing chains can be difficult to construct. \emph{Path coupling} \cite{bubley1997path} relieves this burden by reducing the coupling to adjacent states in an appropriately constructed state graph. The coupling of arbitrary states follows by aggregation over a path between the states. Path coupling is formalized in the following lemma.
\begin{lemma} \cite{bubley1997path,dyer1998more} \label{lem:pathcoupling} Let $\delta$ be an integer-valued metric on $\Omega\times \Omega$ with $\delta(\cdot,\cdot)\le D$. Let $E$ be a subset of $\Omega\times\Omega$ such that for all $(Y_t,Z_t)\in\Omega\times\Omega$ there exists a path $Y_t = X_0,\ldots, X_r = Z_t$ between $Y_t$ and $Z_t$ with $(X_i,X_{i+1})\in E$ for $i\in[r-1]$ and $\sum_i \delta(X_i,X_{i+1}) = \delta(Y_t,Z_t)$. Suppose a coupling $(R,T)\to(R',T')$ of the Markov chain is defined on all pairs in $E$, and that there exists an $\alpha < 1$ with $\mathbb{E}[\delta(R',T')]\le \alpha\, \delta(R,T)$ for all $(R,T)\in E$. Then \begin{align*} \tau(\varepsilon)\le {\log(D\varepsilon^{-1})\over (1 - \alpha)}. \end{align*} \end{lemma} The lemma says that if the two chains contract in expectation ($\alpha < 1$), then the chain mixes fast. With the path coupling lemma, we obtain a bound on the mixing time that can be \emph{linear} in the dataset size $N$.
The actual mixing time depends on three quantities that relate to how sensitive the transition probabilities are to swapping a single element in a set of size $c$. Consider an arbitrary set $S$ of columns, $|S|=c-1$, and complete it to two $c$-sets $R = S \cup \{r\}$ and $T = S \cup \{t\}$ that differ in exactly one element. Our quantities are, for $ u \notin R \cup T$, and $v \in S$: \begin{align*}
p_1(S,r,t,u) &= \min\{q(r,u,R),q(t,u,T)\} \\
p_2(S,r,t,v) &= \min\{q(v,t,R),q(v,r,T)\} \\
p_3(S,r,t,v,u)&=|q(v,u,R) - q(v,u,T)|. \end{align*}
\begin{theorem} \label{thm:mix} Let the contraction coefficient $\alpha$ be given by \begin{small} \begin{align*}
\alpha = &\max_{|S| = c-1,r,t\in[N]\backslash S,r\neq t}\sum_{u_3\in S, u_4\notin S\cup\{r,t\}}p_3(S,r,t,u_3,u_4)- \sum_{u_1\notin S\cup\{r,t\}}p_1(S,r,t,u_1)-\sum_{u_2\in S}p_2(S,r,t,u_2). \end{align*} \end{small} When $\alpha < 1$, the mixing time for the Gibbs sampler in Algorithm~\ref{algo:mcdpp} is bounded as \begin{align*} \tau(\varepsilon) \le {2c(N-c)\log (c \varepsilon^{-1})\over (1 - \alpha)}. \end{align*} \end{theorem} \begin{proof}
We bound the mixing time via path coupling. Let $\delta(R,T) = |R\oplus T|/2$ be half the Hamming distance on the state space, and define $E$ to consist of all state pairs $(R,T)$ in $\Omega\times\Omega$ with $\delta(R,T) = 1$. We show that for all $(R,T)\in E$, the coupled next states satisfy $\mathbb{E}[\delta(R',T')]\le \alpha\, \delta(R,T)$ for an appropriate $\alpha$.
Since $\delta(R,T) = 1$, the sets $R$ and $T$ differ in exactly two elements. Let $S = R\cap T$, so $|S| = c-1$, $R = S\cup \{ r\}$ and $T = S\cup\{t\}$. For a state transition, we sample an element $r^{\text{in}}\in R$ and $r^{\text{out}}\in[N]\backslash R$ as switching candidates for $R$, and elements $t^{\text{in}}\in T$ and $t^{\text{out}}\in[N]\backslash T$ as switching candidates for $T$. Let $b_R$ and $b_T$ be the Bernoulli random variables indicating whether we attempt a transition. In our coupling we always set $b_R = b_T$. Hence, if $b_R = 0$, neither chain transitions and the distance is unchanged. For $b_R = b_T = 1$, we distinguish four cases: \paragraph{Case C1} If $r^{\text{in}} = r$ and $r^{\text{out}} = t$, we let $t^{\text{in}} = t$ and $t^{\text{out}} = r$. As a result, $\delta(R',T') = 0$. \paragraph{Case C2} If $r^{\text{in}} = r$ and $r^{\text{out}} = u_1 \notin S\cup\{r,t\}$, we let $t^{\text{in}} = t$ and $t^{\text{out}} = u_1$. If both chains transition, the resulting distance is zero; otherwise it remains one. With probability $p_1(S,r,t,u_1) = \min\{q(r,u_1,R),q(t,u_1,T)\}$ both chains transition. \paragraph{Case C3} If $r^{\text{in}} = u_2\in S$ and $r^{\text{out}} = t$, we let $t^{\text{in}} = u_2$ and $t^{\text{out}} = r$. Again, if both chains transition, the resulting distance is $\delta(R',T')= 0$; otherwise it remains one. With probability $p_2(S,r,t,u_2) = \min\{q(u_2,t,R),q(u_2,r,T)\}$ both chains transition.
\paragraph{Case C4} If $r^{\text{in}} = u_3\in S$ and $r^{\text{out}} = u_4\notin S\cup\{r,t\}$, we let $t^{\text{in}} = u_3$ and $t^{\text{out}} = u_4$. If both chains make the same transition (both move or do not move), the resulting distance is one, otherwise it increases to 2. The distance increases with probability $p_3(S,r,t,u_3,u_4)=|q(u_3,u_4,R) - q(u_3,u_4,T)|$.
With those four cases, we can now bound $\mathbb{E}[\delta(R',T')]$. For all $(R,T) \in E$, i.e., $\delta(R,T)=1$: \begin{small} \begin{align*}
&{\mathbb{E}[\delta(R',T')]\over \mathbb{E}[\delta(R,T)]} = {1\over 2} + \text{Pr}(C2) \mathbb{E}[\delta(R',T')\mid C2] + \text{Pr}(C3) \mathbb{E}[\delta(R',T')\mid C3] + \text{Pr}(C4) \mathbb{E}[\delta(R',T')\mid C4]\\ &= \frac12 + {1\over 2c(N-c)}\big(\sum_{u_1\notin S\cup\{r,t\}}(1 - p_1(u_1)) + \sum_{u_2\in S}(1 - p_2(u_2)) + \sum_{\substack{u_3\in S,\\ u_4\notin S\cup\{r,t\}}}(1 + p_3(u_3,u_4))\big) \\ &= {1\over 2c(N-c)}\big(2c(N-c)+\sum_{\substack{u_3\in S,\\ u_4\notin S\cup\{r,t\}}}p_3(u_3,u_4) - \sum_{u_1\notin S\cup\{r,t\}}p_1(u_1)-\sum_{u_2\in S}p_2(u_2) -1\big), \end{align*} \end{small} where we suppress the arguments $S,r,t$ of $p_{1}, p_2, p_3$. For \begin{small} \begin{align*}
\alpha = \max_{\substack{|S| = c-1,\\r,t\in[N]\backslash S,\\r\neq t}}&\sum_{\substack{u_3\in S,\\ u_4\notin S\cup\{r,t\}}}p_3(u_3,u_4)-\sum_{u_1\notin S\cup\{r,t\}}p_1(u_1)-\sum_{u_2\in S}p_2(u_2) \end{align*} \end{small} and $\alpha < 1$, the path coupling Lemma~\ref{lem:pathcoupling} implies that \begin{align*} \tau(\varepsilon) &\le {2c(N-c)\log (c \varepsilon^{-1})\over (1 - \alpha)}. \qedhere \end{align*} \end{proof}
\paragraph{Remarks.} If $\alpha<1$ is fixed, then the mixing time (running time) depends only linearly on $N$. The coefficient $\alpha$ itself depends on our three quantities. In particular, fast mixing requires $p_3$ (the difference between transition probabilities) to be very small compared to $p_1$, $p_2$, at least on average. The difference $p_3$ measures how exchangeable two points $r$ and $t$ are. This notion of symmetry is closely related to a symmetry that determines the complexity of submodular maximization~\citep{vondrak13} (indeed, $F(S)=\log\det K_S$ is a submodular function). This symmetry only needs to hold for most pairs $r$, $t$, and most swapping points $u$, $v$. It holds for kernels with sufficiently fast-decaying similarities, similar to the conditions in~\cite{rebeschini2015fast} for unconstrained sampling.
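The quantity $\alpha$ can be estimated empirically by subsampling triples $(S,r,t)$ and evaluating the three probabilities directly. The sketch below is illustrative (the small-bandwidth RBF kernel and the number of sampled triples are our assumptions); for a near-diagonal kernel the estimate is far below 1:

```python
import numpy as np

def q(K, v, u, Y):
    """Swap acceptance probability q(v, u, Y) from Algorithm 1."""
    Y_new = [j for j in Y if j != v] + [u]
    d_new = np.linalg.det(K[np.ix_(Y_new, Y_new)])
    d_old = np.linalg.det(K[np.ix_(Y, Y)])
    return d_new / (d_new + d_old)

def alpha_term(K, S, r, t):
    """sum p_3 - sum p_1 - sum p_2 for one triple (S, r, t)."""
    N = K.shape[0]
    R, T = S + [r], S + [t]
    rest = [u for u in range(N) if u not in S and u not in (r, t)]
    a = sum(abs(q(K, v, u, R) - q(K, v, u, T)) for v in S for u in rest)   # p_3
    a -= sum(min(q(K, r, u, R), q(K, t, u, T)) for u in rest)              # p_1
    a -= sum(min(q(K, v, t, R), q(K, v, r, T)) for v in S)                 # p_2
    return a

# estimate alpha by subsampling triples on a fast-decaying kernel
rng = np.random.default_rng(6)
X = rng.uniform(0, 10, (15, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * 0.3**2))                  # small bandwidth: K near diagonal
c = 4
alpha_est = -np.inf
for _ in range(50):
    idx = [int(x) for x in rng.permutation(15)]
    S, r, t = idx[: c - 1], idx[c - 1], idx[c]
    alpha_est = max(alpha_est, alpha_term(K, S, r, t))
```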
One iteration of the sampler can be implemented in $\co(c^2)$ time using block inversion \cite{golub2012matrix}. Additional speedups via quadrature are also possible \cite{li16icmlquad}. Together with the analysis of the mixing time, this leads to fast sampling methods for $c$-{\textsc{Dpp}\xspace}s.
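The $\co(c^2)$ step can be sketched via two standard block-inversion identities: deleting an index scales the determinant by $(K_{Y,Y}^{-1})_{ii}$, and appending an index scales it by the corresponding Schur complement. This is a minimal NumPy illustration of that determinant-ratio update, not the paper's implementation:

```python
import numpy as np

def swap_det_ratio(K, Y, i, u, Minv):
    """det(K_{Y'}) / det(K_Y) for Y' = Y with Y[i] swapped for u,
    in O(c^2) given Minv = inv(K_{Y,Y})."""
    c = len(Y)
    keep = [j for j in range(c) if j != i]
    # delete position i: det(K_{Y \ i}) / det(K_Y) = (Minv)_{ii}, and the
    # inverse of the reduced matrix is a rank-one downdate of Minv
    col = Minv[keep, i]
    inv_del = Minv[np.ix_(keep, keep)] - np.outer(col, col) / Minv[i, i]
    # append u: the determinant grows by the Schur complement of the new entry
    S = [Y[j] for j in keep]
    k_vec = K[S, u]
    return Minv[i, i] * (K[u, u] - k_vec @ inv_del @ k_vec)

# check against direct determinants on a random PSD matrix
rng = np.random.default_rng(7)
A = rng.standard_normal((15, 15))
K = A @ A.T
Y = [0, 3, 5, 8, 11]
Minv = np.linalg.inv(K[np.ix_(Y, Y)])
fast = swap_det_ratio(K, Y, 2, 7, Minv)         # swap landmark 5 for 7
Y_new = [0, 3, 7, 8, 11]
direct = (np.linalg.det(K[np.ix_(Y_new, Y_new)])
          / np.linalg.det(K[np.ix_(Y, Y)]))
```

Maintaining $K_{Y,Y}^{-1}$ across accepted swaps with the same rank-one updates keeps every iteration at $\co(c^2)$.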
\section{Experiments} \label{sec:exp} In our experiments, we evaluate the performance of \textsc{Dpp}\xspace-Nystr\"om\xspace on both kernel approximation and kernel learning tasks, in terms of running time and accuracy.
We use 8 datasets: Abalone, Ailerons, Elevators, CompAct, CompAct(s), Bank32NH, Bank8FM and California Housing\footnote{\url{http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html}}. We subsample 4,000 points from each dataset (3,000 training and 1,000 test). Throughout our experiments, we use an RBF kernel and choose the bandwidth $\sigma$ and regularization parameter $\lambda$ for each dataset by 10-fold cross-validation. We initialize the Gibbs sampler via Kmeans++ and run for 3,000 iterations. Results are averaged over 3 random subsets of data.
\subsection{Kernel Approximation} \label{sec:exp:app}
\begin{figure}
\caption{\small Relative Frobenius/spectral norm errors from different kernel approximations (Ailerons data).}
\label{fig:app_ailerons_mc}
\end{figure}
We first explore \textsc{Dpp}\xspace-Nystr\"om\xspace (\texttt{kDPP}\xspace in the figures) for approximating kernel matrices. We compare to uniform sampling~(\texttt{Unif}\xspace) and leverage score sampling (\texttt{Lev}\xspace) \cite{gittens2013revisiting} as baseline landmark selection methods. We also include AdapFull~(\texttt{AdapFull}\xspace) \cite{deshpande2006matrix}, which performs quite well in practice but scales poorly, as $\co(N^2)$, with the size of the dataset. Although sampling with regularized leverage scores (\texttt{RegLev}\xspace) \cite{alaoui2014fast} was not originally designed for kernel approximation, we include its results to see how regularization affects leverage score sampling.
Figure~\ref{fig:app_ailerons_mc} shows example results on the Ailerons data; further results may be found in the appendix. \textsc{Dpp}\xspace-Nystr\"om\xspace performs well, achieving the lowest error as measured in both spectral and Frobenius norm. The only method that is on par in terms of accuracy is \texttt{AdapFull}\xspace, which has a much higher running time.
\begin{figure}
\caption{\small Improvement in relative Frobenius/spectral norm errors~(\%) over \texttt{Unif}\xspace~(with corresponding landmark sizes) for kernel approximation, averaged over all datasets. }
\label{fig:rel_app_mc}
\end{figure}
For a different perspective, Figure~\ref{fig:rel_app_mc} shows the improvement in error over \texttt{Unif}\xspace. Relative improvements are averaged over all datasets. Again, the performance of \textsc{Dpp}\xspace-Nystr\"om\xspace almost always dominates that of the other methods, achieving up to an 80\% reduction in error.
\begin{figure}
\caption{\small Training and test errors for kernel ridge regression with different Nystr\"om\xspace approximations (Ailerons data).}
\label{fig:krr_ailerons}
\end{figure}
\begin{figure}
\caption{\small Improvements in training/test errors~(\%) over uniform sampling (with same number of landmarks) in kernel ridge regression, averaged over all datasets.}
\label{fig:rel_krr}
\end{figure}
\subsection{Kernel Ridge Regression} \label{sec:exp:krr} Next, we apply \textsc{Dpp}\xspace-Nystr\"om\xspace to kernel ridge regression, comparing against uniform sampling (\texttt{Unif}\xspace) \citep{bach2012sharp} and regularized leverage score sampling (\texttt{RegLev}\xspace) \citep{alaoui2014fast} which have theoretical guarantees for this task. Figure~\ref{fig:krr_ailerons} illustrates an example result: non-uniform sampling greatly improves accuracy, with \texttt{kDPP}\xspace improving over regularized leverage scores in particular for a small number of landmarks, where a single column has a larger effect.
Figure~\ref{fig:rel_krr} displays the average improvement over \texttt{Unif}\xspace, averaged over 8 data sets. Again, the performance of \texttt{kDPP}\xspace dominates \texttt{RegLev}\xspace and \texttt{Unif}\xspace, and leads to gains in accuracy. On average \texttt{kDPP}\xspace consistently achieves more than $20\%$ improvement over \texttt{Unif}\xspace.
\begin{figure}
\caption{\small Relative Frobenius norm error of \textsc{Dpp}\xspace-Nystr\"om\xspace with 50 landmarks as changing across iterations of the Markov Chain (Ailerons data).}
\label{fig:conv_ailerons_50_fnorm}
\end{figure}
\subsection{Mixing of the Gibbs Markov Chain} \label{sec:exp:conv}
In the next experiment, we empirically study the mixing of the Gibbs chain with respect to matrix approximation error, the measure that ultimately matters in our application of the sampler. We use $c=50$ and choose $N$ as 1,000 and 4,000. To exclude effects of the initialization, we pick the initial state $Y_0$ uniformly at random. We run the chain for 5,000 iterations, monitoring how the error changes with the number of iterations. Example results on the Ailerons data are shown in Figure~\ref{fig:conv_ailerons_50_fnorm}. Empirically, the error drops very quickly and afterwards fluctuates only slightly, indicating fast convergence of the approximation error. Other error measures and larger $c$, included in the appendix, confirm this trend.
Notably, our empirical results suggest that the mixing time does not increase much as $N$ increases greatly, suggesting that the Gibbs sampler remains fast even for large $N$.
In Theorem~\ref{thm:mix}, the mixing time depends on the quantity $\alpha$. By subsampling 1,000 random sets $S$ and column indices $r,t$, we approximately computed $\alpha$ on our datasets. We find that, as expected, $\alpha < 1$ in particular for kernels with a smaller bandwidth, and in general $\alpha$ increases with $c$. In accordance with the theory, we found that the mixing time (in terms of error) also increases with $c$. In practice, we observe a fast drop in error even in cases where $\alpha > 1$, indicating that Theorem~\ref{thm:mix} is conservative and that the iterative MCMC approach is even more widely applicable.
\subsection{Time-Error Tradeoffs}
\begin{figure}
\caption{Time-Error tradeoffs with 20 landmarks on Ailerons (size 4,000) and California Housing (size 12,000). Time and Errors are shown on a log scale. Bottom left is the best (low error, low running time), top right is the worst. We did not include \texttt{AdapFull}\xspace, \texttt{Lev}\xspace and \texttt{RegLev}\xspace on California Housing due to their long running times.}
\label{fig:tradeoff}
\end{figure}
Iterative methods like the Gibbs sampler offer tradeoffs between time and error. The longer the Markov chain runs, the closer the sampling distribution is to the desired \textsc{Dpp}\xspace, and the higher the accuracy obtained by Nystr\"om. We hence explicitly show the time-accuracy tradeoff of the sampler on Ailerons (of size 4,000) for up to 200 iterations and on California Housing (of size 12,000) for up to 100 iterations.
A similar tradeoff occurs with leverage scores. For the experiments in the other sections, we computed the (regularized) leverage scores for \texttt{Lev}\xspace and \texttt{RegLev}\xspace exactly. This requires a full, computationally expensive eigendecomposition. For a faster, rougher approximation, we here compare to an approximation mentioned in~\cite{alaoui2014fast}. Concretely, we sample $p$ elements with probability proportional to the diagonal entries $K_{ii}$ of the kernel matrix, use a Nystr\"om\xspace-like method to construct an approximate low-rank decomposition of $K$, and compute scores based on this approximation. We vary $p$ from 20 to 340 on Ailerons and from 20 to 140 on California Housing to show the tradeoff for approximate leverage score sampling~(\texttt{AppLev}\xspace) and approximate regularized leverage score sampling~(\texttt{AppRegLev}\xspace). We also include AdapPartial (\texttt{AdapPart}\xspace)~\cite{kumar2012sampling}, which approximates \texttt{AdapFull}\xspace and is much more efficient, and Kmeans Nystr\"om\xspace (\texttt{Kmeans}\xspace)~\cite{zhang2008improved}, which empirically performs very well for kernel approximation.
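One plausible reading of this approximate-score construction is sketched below; the exact factorization details are our assumption, not necessarily those of~\cite{alaoui2014fast}. It samples $p$ columns proportionally to $K_{ii}$, forms a Nystr\"om\xspace factor $F$ with $K \approx FF^\top$, and reads regularized leverage scores off the low-rank surrogate via the push-through identity $\tilde{K}(\tilde{K}+N\gamma I)^{-1} = F(F^\top F + N\gamma I)^{-1}F^\top$:

```python
import numpy as np

def approx_ridge_leverage(K, p, gamma, rng):
    """Sketch: approximate regularized leverage scores from a Nystrom factor."""
    N = K.shape[0]
    d = np.diag(K)
    C = rng.choice(N, size=p, replace=False, p=d / d.sum())
    W = K[np.ix_(C, C)]
    lam, V = np.linalg.eigh(W)                   # square-root pseudoinverse of W
    keep = lam > 1e-10 * lam.max()
    W_isqrt = (V[:, keep] / np.sqrt(lam[keep])) @ V[:, keep].T
    F = K[:, C] @ W_isqrt                        # K ~ F F^T (Nystrom)
    G = F.T @ F + N * gamma * np.eye(p)
    # scores_i = (F G^{-1} F^T)_{ii}  ~  (K (K + N*gamma*I)^{-1})_{ii}
    return np.einsum("ij,jk,ik->i", F, np.linalg.inv(G), F)

# illustrative usage on a small RBF kernel
rng = np.random.default_rng(8)
X = rng.standard_normal((100, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
scores = approx_ridge_leverage(K, 20, 1e-3, rng)
```

This costs $\co(Np^2)$ rather than the $\co(N^3)$ of an exact eigendecomposition.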
Figure~\ref{fig:tradeoff} summarizes and compares the tradeoffs offered by these different methods on the Ailerons and California Housing datasets. The $x$ axis indicates time, the $y$ axis error, so the lower left is the preferred corner. We see that \texttt{AdapFull}\xspace, \texttt{Lev}\xspace and \texttt{RegLev}\xspace are expensive and perform worse than \texttt{kDPP}\xspace. The approximate variants \texttt{AdapPart}\xspace, \texttt{AppLev}\xspace and \texttt{AppRegLev}\xspace have comparable efficiency but higher error. On the smaller data, \texttt{Kmeans}\xspace is accurate but needs more time than \texttt{kDPP}\xspace, while on the larger data it is dominated in both accuracy and time by \texttt{kDPP}\xspace. Overall, on the larger data, \textsc{Dpp}\xspace-Nystr\"om\xspace offers the best tradeoff of accuracy and efficiency.
\section{Conclusion}
In this paper, we revisited the use of $k$-Determinantal Point Processes for sampling good landmarks for the Nystr\"om\xspace method. We theoretically and empirically observe its competitive performance, for both matrix approximation and ridge regression, compared to state-of-the-art methods.
To make this accurate method scalable to large matrices, we consider an iterative approach, and analyze it theoretically as well as empirically. Our results indicate that the iterative approach, a Gibbs sampler, achieves good landmark samples quickly; under certain conditions, even in a number of iterations linear in $N$ for an $N\times N$ matrix. Finally, our empirical results demonstrate that among state-of-the-art methods, the iterative sampler yields the best tradeoff between efficiency and accuracy.
\setlength{\bibsep}{0pt}
\begin{appendix}
\section{Bounds that hold with High Probability} \label{append:sec:proof} To show high probability bounds we employ concentration results on homogeneous strongly Rayleigh measures. Specifically, we use the following theorem. \begin{theorem}[\protect{\citet{pemantle2014concentration}}]\label{thm:concentration} Let $\mathbb{P}$ be a $k$-homogeneous strongly Rayleigh probability measure on $\{0,1\}^N$ and $f$ an $\ell$-Lipschitz function on $\{0,1\}^N$, then \begin{equation*} \mathbb{P}(f - \mathbb{E}[f]\ge a\ell) \le \exp\{-a^2/8k\}. \end{equation*} \end{theorem}
It is known that a $k$-\textsc{Dpp}\xspace is a homogeneous strongly Rayleigh measure on $\{0,1\}^N$~\cite{borcea2009negative,anari2016monte}, thus Theorem~\ref{thm:concentration} applies to results obtained with $k$-\textsc{Dpp}\xspace. Concretely, for the bound in Theorem~\ref{thm:nys} that holds in expectation, we have the following bound that holds with high probability:
\begin{corollary}\label{cor:prob}
When sampling $C\sim k$-\textsc{Dpp}\xspace{($K$)}, for any $\delta\in(0,1)$, with probability at least $1-\delta$ we have \begin{small} \begin{align*}
&{\|K - K_{\cdot C} (K_{C,C})^\dagger K_{C\cdot}\|_F \over \|K - K_k\|_F} \le \left(\frac{c+1}{c+1-k}\right)\sqrt{N-k} + \sqrt{8c\log(1/\delta)}\sqrt{\sum_{i=1}^N \lambda_i^2 \over \sum_{i=k+1}^N \lambda_i^2}, \\
&{\|K - K_{\cdot C} (K_{C,C})^\dagger K_{C\cdot}\|_2 \over \|K - K_k\|_2} \le \left({c+1\over c+1-k}\right)(N-k) + \sqrt{8c\log(1/\delta)}\tfrac{\lambda_1}{\lambda_{k+1}}, \end{align*} \end{small} where $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_N$ are the eigenvalues of $K$.
\begin{proof} The Lipschitz constants of the relative errors are upper bounded by $\sqrt{\sum_{i=1}^N \lambda_i^2 \over \sum_{i=k+1}^N \lambda_i^2}$ and ${\lambda_1\over \lambda_{k+1}}$, respectively. Applying Theorem~\ref{thm:concentration} yields the results.
\end{proof} \end{corollary}
For the bound in Theorem~\ref{thm:krr} that holds in expectation, we have the following bound that holds with high probability:
\begin{corollary} If $\tilde{K}$ is constructed via \textsc{Dpp}\xspace-Nystr\"om\xspace, then with probability at least $1 - \delta$, $\sqrt{\mathrm{bias}(\tilde{K})\over \mathrm{bias}(K)}$ is upper-bounded by \begin{small} \begin{equation*} 1 + {1\over N\gamma} \left({(c+1) e_{c+1}(K)\over e_c(K)} + \sqrt{8c\log(1/\delta)} \text{tr}(K)\right). \end{equation*} \end{small}
\begin{proof}
Consider the function $f_C(K) = \nu_C = \sum_i \|b_i^\top (U^C)^\perp\|_2^2\le \sum_i \|b_i^\top\|_2^2 = \text{tr}(K)$. Since $0\le f_C(K)\le \text{tr}(K)$, it follows that the Lipschitz constant for $f_C$ is at most $\text{tr}(K)$. Thus when $C\sim k$-\textsc{Dpp}\xspace and $\delta\in(0,1)$, by applying Theorem~\ref{thm:concentration} we see that the inequality $\nu_C \le \mathbb{E}\left[\nu_C\right] + \sqrt{8c\log(1/\delta)} \text{tr}(K)$ holds with probability at least $1 - \delta$. Hence \begin{align*} &\sqrt{\mathrm{bias}(\tilde{K})\over \mathrm{bias}(K)}\le 1 + \mathbb{E}\left[{\nu_C\over N\gamma}\right] + \sqrt{8c\log(1/\delta)} {\text{tr}(K)\over N\gamma}\\ &\;\;\;\;= 1 + {1\over N\gamma} \left({(c+1) e_{c+1}(K)\over e_c(K)} + \sqrt{8c\log(1/\delta)} \text{tr}(K)\right) \end{align*} holds with probability at least $1 - \delta$. \end{proof} \end{corollary}
\section{Supplementary Experiments}
\subsection{Kernel Approximation}
\reffig{append:fig:app_others_mc} shows the matrix norm relative error of various methods in kernel approximation on the remaining 7 datasets mentioned in the main text.
\begin{figure}
\caption{Abalone}
\caption{Bank8FM}
\caption{Bank32NH}
\caption{California Housing}
\caption{CompAct}
\caption{CompAct(s)}
\caption{Elevators}
\caption{Relative Frobenius norm and spectral norm error achieved by different kernel approximation algorithms on the remaining 7 data sets.}
\label{append:fig:app_others_mc}
\end{figure}
\subsection{Approximated Kernel Ridge Regression}
\reffig{append:fig:krr_others_mc} shows the training and test error of various methods for kernel ridge regression on the remaining 7 datasets.
\begin{figure}
\caption{Abalone}
\caption{Bank8FM}
\caption{Bank32NH}
\caption{California Housing}
\caption{CompAct}
\caption{CompAct(s)}
\caption{Elevators}
\caption{Training and test error achieved by different Nystr\"om\xspace kernel ridge regression algorithms on the remaining 7 regression datasets.}
\label{append:fig:krr_others_mc}
\end{figure}
\subsection{Mixing of Markov Chain $k$-\textsc{Dpp}\xspace} \label{append:sec:conv}
We first show the mixing of the Gibbs \textsc{Dpp}\xspace-Nystr\"om\xspace sampler with 50 landmarks in terms of different performance measures: relative spectral norm error, and training and test error of kernel ridge regression, in~\reffig{append:fig:conv_ailerons_50}.
We also show the corresponding results for 100 and 200 landmarks in~\reffig{append:fig:conv_ailerons_100} and~\reffig{append:fig:conv_ailerons_200}, to illustrate that for varying numbers of landmarks the chain is indeed fast mixing and gives reasonably good results within a small number of iterations.
\begin{figure}
\caption{Training error}
\caption{Test error}
\caption{Relative Frobenius norm error}
\caption{Relative Spectral norm error}
\caption{Performance of Markov chain \textsc{Dpp}\xspace-Nystr\"om\xspace with 50 landmarks on Ailerons. Runs for 5,000 iterations.}
\label{append:fig:conv_ailerons_50}
\end{figure}
\begin{figure}
\caption{Training error}
\caption{Test error}
\caption{Relative Frobenius norm error}
\caption{Relative Spectral norm error}
\caption{Performance of Markov chain \textsc{Dpp}\xspace-Nystr\"om\xspace with 100 landmarks on Ailerons. Runs for 5,000 iterations.}
\label{append:fig:conv_ailerons_100}
\end{figure}
\begin{figure}
\caption{Training error}
\caption{Test error}
\caption{Relative Frobenius norm error}
\caption{Relative Spectral norm error}
\caption{Performance of Markov chain \textsc{Dpp}\xspace-Nystr\"om\xspace with 200 landmarks on Ailerons. Runs for 5,000 iterations.}
\label{append:fig:conv_ailerons_200}
\end{figure}
\subsection{Running Time Analysis} \label{append:sec:tradeoff}
We next show time-error trade-offs for various sampling methods on small and larger datasets with respect to Fnorm and 2norm errors. We sample 20 landmarks from the Ailerons dataset of size 4,000 and from California Housing of size 12,000. The results are shown in Figure~\ref{append:fig:ailerons_tradeoff_large} and Figure~\ref{append:fig:calhousing_tradeoff_large}, and trends similar to the example results in the main text can be observed: on the small dataset (size 4,000), \texttt{kDPP}\xspace achieves a very good time-error trade-off; it is more efficient than \texttt{Kmeans}\xspace, though its error is slightly larger. On the larger dataset (size 12,000), its efficiency advantage grows further while its error drops below that of \texttt{Kmeans}\xspace. It also has lower variance in both cases compared to \texttt{AppLev}\xspace and \texttt{AppRegLev}\xspace. Overall, on the larger dataset we obtain the best time-error trade-off with \texttt{kDPP}\xspace.
\begin{figure}
\caption{Fnorm Error vs. Time}
\caption{2norm Error vs. Time}
\caption{Time-Error tradeoff with 20 landmarks on Ailerons of size 4,000. Time and Errors shown in log-scale.}
\label{append:fig:ailerons_tradeoff_large}
\end{figure}
\begin{figure}
\caption{2norm Error vs. Time}
\caption{Training Error vs. Time}
\caption{Time-Error tradeoff with 20 landmarks on California Housing of size 12,000. Time and Errors shown in log-scale. We did not include \texttt{AdapFull}\xspace, \texttt{Lev}\xspace and \texttt{RegLev}\xspace due to their inefficiency on larger datasets.}
\label{append:fig:calhousing_tradeoff_large}
\end{figure}
\end{appendix}
\end{document}
Ringed topos
In mathematics, a ringed topos is a generalization of a ringed space; that is, the notion is obtained by replacing a "topological space" by a "topos". The notion of a ringed topos has applications to deformation theory in algebraic geometry (cf. cotangent complex) and the mathematical foundation of quantum mechanics. In the latter subject, a Bohr topos is a ringed topos that plays the role of a quantum phase space.[1][2]
The definition of a topos-version of a "locally ringed space" is not straightforward, as the meaning of "local" in this context is not obvious. One can introduce the notion of a locally ringed topos by imposing a geometric condition characterizing local rings (see SGA4, Exposé IV, Exercise 13.9), which is equivalent to saying that all the stalks of the structure ring object are local rings when there are enough points.
Morphisms
A morphism $(T,{\mathcal {O}}_{T})\to (T',{\mathcal {O}}_{T'})$ of ringed topoi is a pair consisting of a topos morphism $f:T\to T'$ and a ring homomorphism ${\mathcal {O}}_{T'}\to f_{*}{\mathcal {O}}_{T}$.
If one replaces a "topos" by an ∞-topos, then one gets the notion of a ringed ∞-topos.
Examples
Ringed topos of a topological space
One of the key motivating examples of a ringed topos comes from topology. Consider the site ${\text{Open}}(X)$ of a topological space $X$, and the sheaf of continuous functions
$C_{X}^{0}:{\text{Open}}(X)^{op}\to {\text{CRing}}$
sending an object $U\in {\text{Open}}(X)$, an open subset of $X$, to the ring of continuous functions $C_{X}^{0}(U)$ on $U$. Then, the pair $({\text{Sh}}({\text{Open}}(X)),C_{X}^{0})$ forms a ringed topos. Note this can be generalized to any ringed space $(X,{\mathcal {O}}_{X})$ where
${\mathcal {O}}_{X}:{\text{Open}}(X)^{op}\to {\text{Rings}}$
so the pair $({\text{Sh}}({\text{Open}}(X)),{\mathcal {O}}_{X})$ is a ringed topos.
Ringed topos of a scheme
Another key example is the ringed topos associated to a scheme $(X,{\mathcal {O}}_{X})$, which is again the ringed topos associated to the underlying locally ringed space.
Relation with functor of points
Recall that the functor of points view of scheme theory defines a scheme $X$ as a functor $X:{\text{CAlg}}\to {\text{Sets}}$ which satisfies a sheaf condition and gluing condition.[3] That is, for any open cover ${\text{Spec}}(R_{f_{i}})\to {\text{Spec}}(R)$ of affine schemes, there is the following exact sequence
$X(R)\to \prod X(R_{f_{i}})\rightrightarrows \prod X(R_{f_{i}f_{j}})$
Also, there must exist open affine subfunctors
$U_{i}={\text{Spec}}(A_{i})={\text{Hom}}_{\text{CAlg}}(A_{i},-)$
covering $X$, meaning for any $\xi \in X(R)$, there is a $\xi |_{U_{i}}\in U_{i}(R)$. Then, there is a topos associated to $X$ whose underlying site is the site of open subfunctors. This site is isomorphic to the site associated to the underlying topological space of the ringed space corresponding to the scheme. Then, topos theory gives a way to construct scheme theory without having to use locally ringed spaces using the associated locally ringed topos.
Ringed topos of sets
The category of sets is equivalent to the category of sheaves on the category with one object and only the identity morphism, so ${\text{Sh}}(*)\cong {\text{Sets}}$. Then, given any ring $A$, there is an associated sheaf ${\text{Hom}}_{Sets}(-,A):{\text{Sets}}^{op}\to {\text{Rings}}$. This can be used to find toy examples of morphisms of ringed topoi.
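For instance (an illustration consistent with the definitions above, not taken from a reference), a ring homomorphism $\varphi \colon A\to B$ yields a morphism of the associated ringed topoi of sets in the opposite direction, with the identity as the underlying topos morphism:

```latex
% Sketch: take the identity topos morphism on Sh(*) \cong Sets; the
% required map of ring objects O_{T'} -> f_* O_T is post-composition
% with \varphi.
(\mathrm{Sets},\ \mathrm{Hom}_{\mathrm{Sets}}(-,B))
   \longrightarrow
(\mathrm{Sets},\ \mathrm{Hom}_{\mathrm{Sets}}(-,A)),
\qquad
\mathrm{Hom}_{\mathrm{Sets}}(-,A) \longrightarrow \mathrm{Hom}_{\mathrm{Sets}}(-,B),
\quad g \longmapsto \varphi \circ g .
```

Note the reversal of direction, mirroring the contravariance of $\mathrm{Spec}$.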
Notes
1. Schreiber, Urs (2011-07-25). "Bohr toposes". The n-Category Café. Retrieved 2018-02-19.
2. Heunen, Chris; Landsman, Nicolaas P.; Spitters, Bas (2009-10-01). "A Topos for Algebraic Quantum Theory". Communications in Mathematical Physics. 291 (1): 63–110. arXiv:0709.4364. Bibcode:2009CMaPh.291...63H. doi:10.1007/s00220-009-0865-6. ISSN 0010-3616.
3. "Section 26.15 (01JF): A representability criterion—The Stacks project". stacks.math.columbia.edu. Retrieved 2020-04-28.
References
• The standard reference is the fourth volume of the Séminaire de Géométrie Algébrique du Bois Marie.
• Francis, J. Derived Algebraic Geometry Over ${\mathcal {E}}_{n}$-Rings
• Grothendieck Duality for Derived Stacks
• Ringed topos at the nLab
• Locally ringed topos at the nLab
Association analysis and functional annotation of imputed sequence data within genomic regions influencing resistance to gastro-intestinal parasites detected by an LDLA approach in a nucleus flock of Sarda dairy sheep
Sara Casu1,
Mario Graziano Usai ORCID: orcid.org/0000-0002-6002-22231,
Tiziana Sechi1,
Sotero L. Salaris1,
Sabrina Miari1,
Giuliana Mulas1,
Claudia Tamponi2,
Antonio Varcasia2,
Antonio Scala2 &
Antonello Carta1
Genetics Selection Evolution volume 54, Article number: 2 (2022)
Gastrointestinal nematodes (GIN) are one of the major health problems in grazing sheep. Although genetic variability of the resistance to GIN has been documented, traditional selection is hampered by the difficulty of recording phenotypes, usually fecal egg count (FEC). To identify causative mutations or markers in linkage disequilibrium (LD) to be used for selection, the detection of quantitative trait loci (QTL) for FEC based on linkage disequilibrium-linkage analysis (LDLA) was performed on 4097 ewes (from 181 sires) all genotyped with the OvineSNP50 Beadchip. Identified QTL regions (QTLR) were imputed from whole-genome sequences of 56 target animals of the population. An association analysis and a functional annotation of imputed polymorphisms in the identified QTLR were performed to pinpoint functional variants with potential impact on candidate genes identified from ontological classification or differentially expressed in previous studies.
After clustering close significant locations, ten QTLR were defined on nine Ovis aries chromosomes (OAR) by LDLA. The ratio between the ANOVA estimators of the QTL variance and the total phenotypic variance ranged from 0.0087 to 0.0176. QTL on OAR4, 12, 19, and 20 were the most significant. The combination of association analysis and functional annotation of sequence data did not highlight any putative causative mutations. None of the most significant SNPs showed a functional effect on gene transcripts. However, in the most significant QTLR, we identified genes that contained polymorphisms with a high or moderate impact, were differentially expressed in previous studies, and contributed to enriching the most represented GO processes (regulation of immune system process, defense response). Among these, the most likely candidate genes were: TNFRSF1B and SELE on OAR12, IL5RA on OAR19, IL17A, IL17F, TRIM26, TRIM38, TNFRSF21, LOC101118999, VEGFA, and TNF on OAR20.
This study performed on a large experimental population provides a list of candidate genes and polymorphisms which could be used in further validation studies. The expected advancements in the quality of the annotation of the ovine genome and the use of experimental designs based on sequence data and phenotypes from multiple breeds that show different LD extents and gametic phases may help to identify causative mutations.
Gastrointestinal nematodes (GIN) are one of the major health problems in grazing animals [1]. GIN infections result in important yield reductions and higher production costs due to veterinary treatments and higher culling rates [2]. Moreover, chemical treatments involve the risk of drug residues in the food and environment and the appearance of anthelmintic resistance, that has been reported in several countries [3,4,5,6]. In sheep, GIN control strategies may also include management practices such as soil tillage or rotational grazing that aim at reducing pasture contamination [7, 8]. Alternative approaches to limit GIN infection rely on nutritional schemes based on either grazing crops with anthelmintic properties, such as chicory (Cichorium intybus), sulla (Hedysarum coronarium), sainfoin (Onobrychus viciifolia) and sericea lespedeza (Lespedeza cuneata) [9], or supplementation with tannins and/or proteins, but even these approaches are difficult to apply, especially in extensive or semi-extensive systems.
Fecal egg count (FEC), i.e. the number of parasite eggs per g of faeces, has been largely used as a proxy trait to measure individual resistance to GIN. Selective breeding of animals with enhanced resistance to GIN has been suggested for the sustainable control of parasite infections in sheep since genetic variation between individuals and breeds has been documented. Indeed, estimates of the heritability of proxy traits for GIN resistance in sheep ranges from 0.01 to 0.65 [10], but it is generally moderate for FEC (0.25–0.33 [11]; 0.16 [12]; 0.21–0.55 [13]; and 0.18–0.35 [14]). Thus, breeding for resistance to GIN can be considered in sheep but implies structured selection schemes and accurate recording of both performances and pedigree information, which are essential for genetic evaluation. However, the inclusion of GIN resistance in current breeding schemes is hampered by the difficulty to record FEC on a large scale since its measure is too laborious and costly in field conditions. For this reason, several studies have been carried out to dissect the genetic determinism of GIN resistance with the final aim of setting up breeding schemes that are based on molecular information rather than large-scale recording for progeny testing. Such studies have followed the development of the molecular biology and omic sciences and the concomitant advancement of the statistical methodologies. The first studies were based on sparse maps of molecular markers, such as microsatellites, and used linkage analysis on family-structured populations [15]. In spite of the large number of genomic regions detected in sheep [16,17,18], low significance levels and the low accuracy of localisations made marker-assisted selection unfeasible. 
Later on, the development of single nucleotide polymorphism (SNP) arrays with medium and high densities and the application of enhanced statistical methods allowed to extend the analysis at the population level and to increase the power of detection and the accuracy of localisations [19,20,21,22,23]. More recently, the availability of high-throughput sequencing technologies and increasingly accurate genome annotations may allow the discovery of new polymorphisms in DNA or RNA sequences and the classification of their effects on genes that are more and more well-known in terms of functions.
The Sarda breed is the most important Italian dairy sheep breed with around three million heads in approximately 10,000 flocks (Regional Department for Agriculture, unpublished observations). Sheep breeding has traditionally been the most important livestock production in Sardinia. Farming systems vary from semi-extensive to semi-intensive with a wide-spread use of grazing on natural pastures and forage crops where infection from GIN is unavoidable. The most represented nematodes species are Teladorsagia circuncincta, Trichostrongylus spp., Haemonchus contortus, Teladorsagia trifurcata, Cooperia spp., while Oesophagostomum venulosum and Nematodirus spp. are found in negligible quantities [24]. The prevalence rate in terms of worm egg count generally increases in the summer-autumn period. In these conditions, most farmers have to administer anthelmintics, often without well planned protocols in terms of individual diagnosis, doses and frequency of treatments. Anthelmintic treatments concern 99.4% of the sheep farms on the island, with on average 1.54 treatments per year that are mainly carried out with benzimidazoles (47.8%), levamisole 21.1%, avermectin (12.7%) and probenzimidazoles (11.5%) [25]. Thus, the control of GIN implies high costs, organizational efforts and further economic losses related to the rules that limit drug residues in milk. In this situation, selective breeding is an attractive option also for Sarda sheep. The current breeding scheme is implemented on about 8% of the purebred population for which yield traits and pedigree data are recorded (Herd Book). The main selection objectives are milk yield per lactation, scrapie resistance, and udder morphology [26]. 
With the aim of assessing the feasibility of a marker-assisted selection (MAS) scheme for resistance to GIN based on causative mutations or markers in linkage disequilibrium (LD), which does not need large-scale FEC recording, the Regional Agency for Agricultural Research (AGRIS) has established since the late 1990s an experimental population for which the individuals are genotyped with SNP arrays and routinely measured for FEC, as well as other production and functional traits. More recently, a target sample of influential animals from this population was whole-genome re-sequenced and SNP genotypes were imputed to the whole population.
The aim of this study was to identity QTL segregating in the Sarda breed and to search for candidate genes and causative mutations by the functional annotation and association analysis of imputed Sarda sequence data in these target regions.
Experimental population
The nucleus flock of the Sarda breed, which is described in more detail in [26, 27], derives from a backcross population of Sarda \(\times\) Lacaune ewes created in 1999 by mating 10 F1 Sarda \(\times\) Lacaune rams with purebred Sarda ewes. Thereafter, the generations of ewes produced until now were obtained by mating adult ewes of the nucleus flock exclusively with rams coming from the Sarda Herd Book. This has led to a progressive reduction of the proportion of Lacaune blood in the experimental population, which is negligible in the latest generations (around 0.4%). The average size of the flock is about 900 milked ewes per year with a replacement rate of 25 to 30%. The flock is raised on an experimental farm located in the south of Sardinia that generally shows a semi-arid Mediterranean climate with important variations in rainfall and temperatures across seasons and years. The flock is managed following the traditional farming system adopted on the island, which is based on grazing natural or cultivated swards (mainly ryegrass and berseem clover) and supplements of hay, silage and concentrate. Lambings of most of the adult ewes occur in the autumn, and those of the remaining ewes and of the primiparous ewes occur in late winter or early spring. Ewes are usually bred in management groups depending on the lambing period. They are milked twice a day by machine from after lamb separation (one month after lambing) until the early summer period when they are progressively and almost simultaneously dried off.
Molecular data
All the ewes of the experimental population born from 1999 to 2017 (n = 4355) and their sires (n = 181, including the 10 F1) and 11 Sarda grandsires were genotyped with the OvineSNP50 Beadchip (50 k hereafter). SNP editing was performed using the call rate and minor allele frequency (MAF) thresholds of 95% and 1%, respectively. The ovine genome assembly v4.0 and the SNPchimMpv.3 software [28] were used to construct the genetic map by assuming 1 Mb = 1 cM. Unmapped SNPs and SNPs on sex chromosomes were not included in the study. Finally, 43,390 SNPs were retained for further analyses.
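The call-rate and MAF edits can be sketched as follows (an illustrative filter on a toy genotype matrix, not the actual editing pipeline; genotypes are coded 0/1/2 with -1 denoting missing):

```python
import numpy as np

def edit_snps(G, min_call_rate=0.95, min_maf=0.01):
    """Return indices of SNPs passing call-rate and MAF thresholds.

    G: (n_animals, n_snps) matrix of allele counts 0/1/2, missing = -1.
    """
    obs = G >= 0
    call_rate = obs.mean(axis=0)
    # allele frequency computed on observed genotypes only
    p = np.where(obs, G, 0).sum(axis=0) / (2.0 * np.maximum(obs.sum(axis=0), 1))
    maf = np.minimum(p, 1.0 - p)
    return np.where((call_rate >= min_call_rate) & (maf >= min_maf))[0]

G = np.array([
    [0, 2, -1, 1],
    [0, 2,  1, 1],
    [0, 2,  1, 2],
    [0, 2,  1, 0],
])  # SNP 0 and 1: monomorphic (MAF 0); SNP 2: 25% missing; SNP 3: passes
kept = edit_snps(G)
```

Only the last SNP survives both edits in this toy matrix.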
Among the 4547 genotyped animals, 56 had also been fully re-sequenced within the framework of previous projects. The choice of these 56 animals was based on the assumption that they carried opposite alleles for specific QTL segregating in the Sarda breed and identified in our previous investigations [29] or because they had many progeny in the experimental population. The first group (24 animals, including two Sarda rams and 22 daughters of Sarda rams) had been whole-genome re-sequenced with a target coverage of 12X. The other 32 animals were Sarda sires chosen among those with a higher impact on the population, more recently re-sequenced on an Illumina HiSeq 3000 sequencer with a 30X target coverage. Whole-genome sequence (WGS) data were processed with a pipeline implemented with Snakemake [30] and developed at CRS4 (Center for Advanced Studies, Research and Development in Sardinia, https://www.crs4.it/), available at https://github.com/solida-core. Briefly, adapter sequences were removed from the short reads, then low-quality ends were trimmed, and sequences shorter than 25 bp after trimming were removed with the TrimGalore (v0.4.5) software [31]. The quality of the reads, before and after trimming, was evaluated with the Fastqc (v0.11.5) tool [32]. Trimmed reads were aligned to the Ovis aries reference genome v4.0 (https://www.ncbi.nlm.nih.gov/assembly/GCF_000298735.2) using the Burrows-Wheeler Aligner (BWA v0.7.15) program [33]. Alignments were further sorted, converted to a CRAM file and indexed with Samtools (v1.6) [34]. PCR duplicates were detected with the Picard (v2.18.9) tool [35]. After alignment, joint calling of single nucleotide variants (SNVs), i.e. SNPs and insertion-deletions (INDELs), was performed using the GATK (v4.0.11.0) software [36], according to the GATK Best practices workflow [37]. In order to apply the GATK Variant Quality Score Recalibration, we first ran an initial round of SNP calling and only used the top 5% of SNPs with the highest quality scores.
FEC was the proxy trait used to assess GIN resistance under natural conditions of infection in the experimental flock. Periodically, a sample of ~ 50 ewes that represented the different management groups was monitored to evaluate the percentage of infected animals and decide whether to sample the whole flock and possibly administer anthelmintic treatment. The number of eggs of strongyles per g was determined using a copromicroscopic test according to the McMaster technique [38] on individual samples. When the number of infected animals and the level of infestation were considered sufficient to appreciate individual variability, individual FEC were measured on the whole flock. During the first three years of measurement, coprocultures of pooled samples were also performed at each round of scoring in order to identify GIN genera using the technique and the identification keys of [39, 40]. The results of pooled faecal cultures (mean of 4 cultures and 200 to 400 larval identifications) indicated that H. contortus, T. circumcincta and T. colubriformis were the dominant worm species.
From 2000 to 2012, individual FEC were recorded between one to three times per production year (considered from September to August), according to the level of infestation found in the periodic monitoring samplings that depended on annual variations in rainfall and temperature. Thus, since the level of infestation was low, no individual measures were carried out between July 2003 and September 2004 and between June 2006 and November 2007. The recording of FEC for the detection of QTL was closed in 2012. In 2015, FEC recording of the new generations of ewes born in the nucleus flock was started again in view of implementing marker-assisted or genomic selection in the Sarda breed. These data were added to the previous set to enhance the power of QTL detection of the analysis presented here.
Finally, 17,594 FEC measurements were recorded on 25 separate dates and on 4477 animals (Table 1). The average number of records per ewe was 3.93 ± 2.2, ranging from 1 (13.4% of animals) to 8 (14.13% of animals); almost half of the ewes (46.7%) had from 3 to 5 records.
Table 1 Dates of sampling, number of animals sampled, mean and standard deviation of Fec and LnFec [ln(Fec + 14)]
FEC measurements, that presented a skewed distribution, were log-transformed prior to further analysis using lnFec = ln(FEC + 14).
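The transformation can be sketched as follows (toy values; the offset of 14 is the one stated above):

```python
import math

def ln_fec(fec, offset=14):
    """Log-transform a raw fecal egg count as above: ln(FEC + 14)."""
    return math.log(fec + offset)

# A zero count maps to ln(14) rather than -infinity, and large
# counts are compressed, reducing the skew of the distribution.
values = [0, 50, 300, 2000]
transformed = [ln_fec(v) for v in values]
```

The transform is monotone, so animal rankings on raw FEC are preserved.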
Variance components and pseudo-phenotypes for QTL detection
In order to calculate the pseudo-phenotypes for the detection of QTL and to estimate variance components, raw data were analysed by a repeatability model including the permanent environment and additive genetic random effects of individual animals and using the ASReml-R 4.1 software [41]. Environmental fixed effects were the date of sampling, the age of the animal (from 1 to 4 years) and its physiological status at the date of sampling. The levels of the physiological status were built considering the days from parturition and the number of lambs carried or born by the measured ewe in the considered production year. Five classes were considered: ewes without pregnancy and lactation, and thus with no lambs, in the considered production year; ewes sampled within 30 days before or after lambing with one lamb; ewes sampled within 30 days before or after lambing with two or more lambs; lactating ewes with one lamb; and lactating ewes with two or more lambs.
Individual FEC recorded from September to the following dry-off (July) were assigned to the same year of age. Data from animals younger than ten months (570 records), which can be considered without acquired immunisation, were not included so that a measure of the parasite resistance expressed by immunized animals was used. However, 90% of those animals had measurements at older ages which were included in the analysis. Only records from genotyped animals, i.e. born before 2017, were included in the analysis. The final dataset included 16,530 records from 4097 animals recorded on 24 separate dates. Genetic relationships between 4547 animals, including the recorded ewes and their sires and genotyped ancestors, were taken into account by calculating the genomic relationship matrix (GRM) based on 50 k genotypes, following [42] and using the GCTA software [43]. The GRM was then inverted using the Ginv function provided by the Mass R package (version 7.3–51.6), [44] which provides a generalized inverse matrix. Pseudo-phenotypes for QTL detection were then calculated as the average performance deviation (APD) of each ewe as proposed by Usai et al. [27]: i.e. by averaging single animal residuals and summing-up the genetic and permanent environment random predictions.
QTL detection analysis
The model used for the QTL detection based on 50k SNP data was the same that was applied to the experimental population for milk traits by Usai et al. [27]. It is based on the combined use of LD and linkage analysis (LA) information (LDLA) to estimate the probability of identity-by-descent (IBD) between pairs of gametes of the genotyped individuals at the investigated position. First, the paternal and maternal inherited gametes of the genotyped individuals were reconstructed by the LD multilocus iterative peeling method [27, 45] by exploiting the genotypes and the familial structure of the population. Then, the base gametes of the population were identified as the gametes inherited from an ungenotyped parent and corresponded to: the gametes of the 10 F1 rams and of the 74 Sarda (grand) sires, the maternal or paternal gametes of the 43 ewes with an unknown sire or dam, respectively, and the maternal gametes of the 928 back-cross ewes and of the 108 Sarda (grand) sires for which only the sire was genotyped. The 1247 base haplotypes (BH) were further divided according to their breed of origin in BHL (the 10 Lacaune paternal gametes carried by the F1 rams) and BHS (the remaining 1237 Sarda gametes). Finally, the remaining parental gametes of the genotyped animals which carried, at each position, an allele inherited from one out of the 1247 original BH were considered as replicates of BH (RH).
The IBD between pairs of BH were estimated by LD analysis (\(IBD_{LD}\)) based on the extent of identity-by-state (IBS) around the investigated position [46]. The \(IBD_{LD}\) between BHS and BHL were assumed to be null. The IBD between BH and RH were estimated by LA analysis (\(IBD_{LA}\)) given the known gametic phases and the pedigree information [27, 46,47,48]. The IBD between pairs of RH were, thus, calculated as the combination of \(IBD_{LD}\) and \(IBD_{LA}\) (\(IBD_{LDLA}\)). This allowed the construction, at each 50k SNP position l, of a matrix (\({\mathbf{G}}_{l}^{IBD}\)) allocating IBD between RH carried by phenotyped ewes. Moreover, in order to account for the polygenic effects, a matrix of genome-wide IBD between gametes (\({\mathbf{G}}_{g}^{IBD}\)) was constructed by averaging elements of \({\mathbf{G}}_{l}^{IBD}\) across all the investigated SNP positions. At this stage, Usai et al. [27] proposed the use of principal component analysis (PCA) to summarize the information of \({\mathbf{G}}_{l}^{IBD}\) and \({\mathbf{G}}_{g}^{IBD}\). The aim of using PCA was to overcome issues related to the non-positive definite status of \({\mathbf{G}}_{l}^{IBD}\) and to limit the computational needs in handling both the IBD matrices. In fact, PCA led to a dramatic reduction in the number of effects to be estimated, so that the principal components from \({\mathbf{G}}_{l}^{IBD}\) and \({\mathbf{G}}_{g}^{IBD}\) can be included in the model as fixed effects. The final model does not include random effects other than the residuals and is solved by a weighted least square method.
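The PCA reduction of an IBD matrix can be sketched as follows (eigendecomposition of a symmetric similarity matrix, retaining the leading components that cumulatively explain 99% of the variation; an illustrative sketch on a toy low-rank matrix, not the authors' code):

```python
import numpy as np

def pc_scores(G_ibd, threshold=0.99):
    """PC scores of a symmetric IBD matrix, keeping eigenvectors that
    cumulatively explain `threshold` of the variation."""
    w, v = np.linalg.eigh(G_ibd)          # eigenvalues in ascending order
    w, v = w[::-1], v[:, ::-1]            # reorder to descending
    w = np.clip(w, 0.0, None)             # guard against tiny negatives
    explained = np.cumsum(w) / w.sum()
    k = int(np.searchsorted(explained, threshold)) + 1
    return v[:, :k] * np.sqrt(w[:k])      # scores used as model covariates

# toy IBD-like matrix: rank 3 plus a tiny ridge
rng = np.random.default_rng(2)
B = rng.normal(size=(30, 3))
G = B @ B.T + 1e-6 * np.eye(30)
V = pc_scores(G)
```

This is how a non-positive-definite similarity matrix can be summarized by a small number of fixed-effect covariates, as described above.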
At each 50k SNP position l the model is the following:
$$ {\mathbf{y}} = \bf{1}{\upmu } + {\mathbf{ZV}}_{{\mathbf{l}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} + {{\boldsymbol{\upvarepsilon}}}, $$
where \({\mathbf{y}}\) is a vector of APD of \({\text{n}}_{{\text{p}}}\) phenotyped ewes for LnFec; \(\mu\) is the overall mean; \({{\varvec{\upbeta}}}_{l}\) is a vector of the fixed effects of the \({\text{n}}_{{{\text{PC}}_{l} }}\) principal components that explain more than 99% of the within-breed variation (\({\text{PC}}_{l}\)) of the IBD probability matrix \({\mathbf{G}}_{l}^{IBD}\), i.e. \({{\varvec{\upbeta}}}_{l}\) summarizes the effects of the gamete at the QTL position \(l\); \({{\varvec{\upalpha}}}_{l}\) is a vector of the fixed effects of the \({\text{n}}_{{{\text{PC}}_{g} }}\) principal components that explain more than 99% of the variation (\({\text{PC}}_{g}\)) of the genome-wide IBD probability matrix \({\mathbf{G}}_{g}^{IBD}\), i.e. \({{\varvec{\upalpha}}}_{l}\) summarizes the polygenic effects of the gametes; \(\bf{1}\) is a vector of \({\text{n}}_{{\text{p}}}\) ones; \({\mathbf{Z}}\) is a \({\text{n}}_{{\text{p}}} \times {\text{n}}_{{{\text{RH}}}}\) incidence matrix relating phenotypes with RH; \({\mathbf{V}}_{l}\) is a \({\text{n}}_{{{\text{RH}}}} \times {\text{n}}_{{{\text{PC}}_{l} }}\) matrix including the \({\text{PC}}_{l}\) scores of RH that summarize the IBD probabilities between the gametes at the considered position; \({\mathbf{V}}_{g}\) is a \({\text{n}}_{{{\text{RH}}}} \times {\text{n}}_{{{\text{PC}}_{g} }}\) matrix including the \({\text{PC}}_{g}\) scores of RH; \({{\varvec{\upvarepsilon}}}\) is a vector of \({\text{n}}_{{\text{p}}}\) residuals assuming that \({{\varvec{\upvarepsilon}}}\sim {\text{N}}\left( {\mathbf{0},\sigma_{{\upvarepsilon }}^{2} {\mathbf{R}}^{ - 1} } \right)\) with \({\mathbf{R}}\) being a diagonal matrix with the APD's reliability (\(r\)) as diagonal element. 
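Under this specification, Eq. (1) reduces to a weighted least squares problem with reliabilities as weights. A schematic Python sketch follows; the toy dimensions, random PC scores, and the simplification of one RH per record are assumptions of the sketch, not of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_rh, n_pcl, n_pcg = 50, 20, 3, 4           # hypothetical dimensions

Z   = np.eye(n_rh)[rng.integers(0, n_rh, n_p)]   # phenotype -> RH incidence
V_l = rng.normal(size=(n_rh, n_pcl))             # locus PC scores (toy)
V_g = rng.normal(size=(n_rh, n_pcg))             # genome-wide PC scores (toy)
r   = rng.uniform(0.5, 1.0, n_p)                 # APD reliabilities
y   = rng.normal(size=n_p)                       # pseudo-phenotypes (toy)

# Design matrix of all fixed effects: mean, QTL PCs, polygenic PCs
X = np.column_stack([np.ones(n_p), Z @ V_l, Z @ V_g])
# Weighted least squares (residual variance proportional to 1/r):
# minimise sum_i r_i * (y_i - x_i'b)^2
b, *_ = np.linalg.lstsq(np.sqrt(r)[:, None] * X, np.sqrt(r) * y, rcond=None)
residuals = y - X @ b
print(b[0])  # estimated overall mean
```

The solution satisfies the weighted normal equations \(\mathbf{X}'\mathbf{R}(\mathbf{y}-\mathbf{X}\mathbf{b})=\mathbf{0}\), consistent with the residual specification above.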
Reliabilities were calculated as \(r_{{\text{i}}} = 1 - {\text{se}}\left( {{\hat{\text{a}}}_{{\text{i}}} } \right)^{2} /\sigma_{{\text{a}}}^{2}\), from a repeatability linear model \({\text{y}}_{{{\text{ij}}}} = {\text{a}}_{{\text{i}}} + {\text{e}}_{{{\text{ij}}}}\), where \({\text{y}}_{{{\text{ij}}}}\) is the performance deviation \({\text{j}}\) adjusted for the fixed effects estimated with the full animal model of ewe \({\text{i}}\), \({\text{a}}_{{\text{i}}}\) is the random ewe effect assuming that \({\mathbf{a}}\sim {\text{N}}\left( {\mathbf{0}, \sigma_{{\text{a}}}^{2} {\mathbf{I}}} \right)\), and \({\text{e}}_{{{\text{ij}}}}\) is the corresponding error, assuming that \({\mathbf{e}}\sim {\text{N}}\left( {\mathbf{0}, \sigma_{{\text{e}}}^{2} {\mathbf{I}}} \right)\). Details on how the PC scores of the \({\mathbf{V}}_{l}\) and \({\mathbf{V}}_{g}\) matrices were calculated are in [27].
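For a balanced setting, the reliability of \({\hat{\text{a}}}_{{\text{i}}}\) under this repeatability model has the familiar closed form \(r_{i} = n_{i}\sigma_{a}^{2}/(n_{i}\sigma_{a}^{2}+\sigma_{e}^{2})\). The sketch below uses this shortcut for illustration only; the analysis itself derived \(r\) from the standard errors of the mixed-model solutions:

```python
import numpy as np

def apd_reliability(n_records, var_a, var_e):
    """Reliability of an ewe's average performance deviation under the
    repeatability model y_ij = a_i + e_ij:
    r_i = 1 - PEV(a_i)/var_a, with PEV = var_a*var_e/(n*var_a + var_e)
    in the balanced case."""
    n = np.asarray(n_records, dtype=float)
    pev = var_a * var_e / (n * var_a + var_e)
    return 1.0 - pev / var_a

# Hypothetical variances; more records -> higher reliability
print(apd_reliability([1, 4, 10], var_a=0.2, var_e=0.8))
```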
Since the IBD between segments of different breed origin (i.e. replicates of \({\text{BH}}^{{\text{S}}}\) and \({\text{BH}}^{{\text{L}}}\)) was set to 0, the PCA generated two sets of breed-specific \({\text{PC}}_{l}\). Thus, the matrix \({\mathbf{V}}_{l}\) can be written as \(\left[ {{\mathbf{V}}_{l}^{{\text{S}}} {\mathbf{V}}_{l}^{{\text{L}}} } \right]\) and the vector \({{\varvec{\upbeta}}}_{l}^{^{\prime}}\) as \(\left[ {{{\varvec{\upbeta}}}_{l}^{{^{\prime}{\text{S}}}} {{\varvec{\upbeta}}}_{l}^{{^{\prime}{\text{L}}}} } \right]\), where \({\mathbf{V}}_{l}^{{\text{S}}}\) and \({\mathbf{V}}_{l}^{{\text{L}}}\) are the \({\text{PC}}_{l}\) summarising IBD probabilities between the gametes of Sarda and Lacaune origin and \({{\varvec{\upbeta}}}_{l}^{{\text{S}}}\) and \({{\varvec{\upbeta}}}_{l}^{{\text{L}}}\) the corresponding effects.
The final aim of this work was to identify QTL segregating in the Sarda breed and to search for candidate genes and causative mutations by functional annotation and association analysis of the imputed Sarda sequence data in the identified regions. Thus, at each SNP position, we tested the null hypothesis that the effects of the principal components that explain 99% of the variability due to the Sarda gametes are zero (\(H_{0}\): \({{\varvec{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{S}}}\) = 0) by an F-test that compares the sums of squared residuals of the full model in Eq. (1) and of the following reduced model including all the other effects:
$$ {\mathbf{y}} = \bf{1}{\upmu } + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} + {{\boldsymbol{\upvarepsilon}}}^{*} . $$
The Bonferroni correction for multiple testing was used to estimate the threshold corresponding to the genome-wise (GW) significance level. To be conservative, we ignored the LD between SNPs and calculated the nominal P-value for each tested position as \(P_{nominal} = \frac{{P_{GW} }}{n Test}\), where \(P_{GW}\) is the genome-wise significance level chosen for the analysis (0.05) and \(n Test\) is the number of tested positions (43,390). The negative logarithm of \(P_{nominal}\) resulted in a threshold of \(- {\text{log}}_{10} \left( {Pvalue} \right)\) equal to 5.938, which was rounded to 6.
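This threshold can be reproduced directly from the numbers given above:

```python
import math

p_gw, n_tests = 0.05, 43390
p_nominal = p_gw / n_tests        # Bonferroni-corrected nominal level
threshold = -math.log10(p_nominal)
print(round(threshold, 3))        # threshold rounded up to 6 in the text
```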
Significant positions identified on the same chromosome were clustered into QTL regions (QTLR) in order to account for linkage between SNPs. As proposed by Usai et al. [27], the correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }} = {\mathbf{ZV}}_{l} {\widehat{\varvec{\upbeta }}}_{l}\) (corresponding to the portion of phenotypes predicted in the model by the QTL effect) were calculated for all pairs of significant SNPs on the same chromosome. The most significant SNP on the chromosome was taken as the peak of the first QTLR. Peaks that identified a further QTLR on the same chromosome were iteratively detected as the significant SNPs showing correlations lower than 0.15 with the already defined QTLR peaks. The remaining significant positions were assigned to the QTLR with which they had the highest correlation. Moreover, to assess the relative potential impact of a marker-assisted selection approach, we calculated the ANOVA estimator of the QTL variance for the most significant position of each QTLR as:
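A possible implementation of this clustering rule is sketched below, with hypothetical significance values and correlations between the QTL-predicted phenotypes:

```python
import numpy as np

def cluster_qtlr(logp, yq_corr, corr_cut=0.15):
    """Cluster significant SNPs of one chromosome into QTL regions.
    `logp`: -log10(P) per significant SNP; `yq_corr`: matrix of
    correlations between the QTL-predicted phenotypes of the SNPs."""
    order = np.argsort(logp)[::-1]        # most significant first
    peaks = [order[0]]                    # first peak = top SNP
    for s in order[1:]:
        # A SNP defines a new peak if weakly correlated with every peak
        if all(abs(yq_corr[s, p]) < corr_cut for p in peaks):
            peaks.append(s)
    # Assign remaining SNPs to the peak with the highest correlation
    assign = {s: max(peaks, key=lambda p: abs(yq_corr[s, p]))
              for s in range(len(logp))}
    return peaks, assign

# Hypothetical example: 4 significant SNPs forming two separated clusters
logp = np.array([8.0, 7.5, 6.5, 6.2])
corr = np.array([[1.0, 0.9, 0.05, 0.1],
                 [0.9, 1.0, 0.1, 0.05],
                 [0.05, 0.1, 1.0, 0.8],
                 [0.1, 0.05, 0.8, 1.0]])
peaks, assign = cluster_qtlr(logp, corr)
print(peaks, assign)
```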
$$ \widehat{{\sigma_{qtlS}^{2} }} = \frac{{\frac{{SSE_{R} - SSE_{F} }}{{nPC_{S} }} - \frac{{SSE_{F} }}{{np - nPC_{g} - nPC_{L} - nPC_{S} - 1}}}}{{\frac{np}{{nPC_{S} }}}}, $$
where \( SSE_{F} = \left[{\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{S}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{S}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right) \right]^{\prime} \left[{\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{S}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{S}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right) \right]\) is the sum of squared residuals of the full model including the Sarda PC at the peak position; \(SSE_{R} = \left[{\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right)\right]^{\prime} \left[{\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right)\right]\) is the sum of squared residuals of the reduced model (without the Sarda PC); \(nPC_{S}\) and \(nPC_{L}\) are the numbers of PC summarising the IBD probabilities between the gametes of Sarda and Lacaune origin, respectively; and \(nPC_{g}\) is the number of PC extracted from the genome-wide IBD probability matrix.
The ratio between the ANOVA estimators of the QTL variances (\(\widehat{{\sigma_{qtlS}^{2} }}\)) and the total phenotypic variance of the pseudo-phenotypes was calculated for the peak of each QTLR.
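The estimator above can be computed directly from the two residual sums of squares; all numbers in the sketch below are hypothetical:

```python
import numpy as np

def qtl_variance(sse_r, sse_f, n_p, n_pc_s, n_pc_l, n_pc_g):
    """ANOVA estimator of the Sarda QTL variance at a peak position,
    from the residual sums of squares of the reduced (no Sarda PCs)
    and full models."""
    ms_qtl = (sse_r - sse_f) / n_pc_s                      # QTL mean square
    ms_err = sse_f / (n_p - n_pc_g - n_pc_l - n_pc_s - 1)  # residual mean square
    return (ms_qtl - ms_err) / (n_p / n_pc_s)

# Hypothetical inputs: the estimator is the excess of the QTL mean square
# over the residual mean square, scaled by n_PC_S / n_p
v = qtl_variance(sse_r=120.0, sse_f=100.0, n_p=1000,
                 n_pc_s=10, n_pc_l=2, n_pc_g=50)
print(v)
```

Dividing this estimate by the phenotypic variance of the pseudo-phenotypes gives the ratio reported for each QTLR peak.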
Analysis of sequence data
The QTLR as defined above, or the 2-Mb intervals surrounding the most significant locations when only one 50k SNP was significant, were further investigated using information from whole-genome sequence (WGS) data. Biallelic SNPs falling in these target QTLR were extracted from the assembled sequences of the re-sequenced animals as vcf files. First, a functional annotation of the SNPs identified by WGS was performed using the NCBI 4.0 sheep genome annotation release 102 and the SnpEff software v4.3t [49]. Then, the parental gametes of the phenotyped ewes were imputed from 50k data to WGS. The first step of the imputation procedure was to reconstruct the phase of each gamete \(i\) carried by the sequenced animals (\(h_{i}^{Q}\)), which consisted of estimating the probability of carrying the reference \(P\left( {h_{il}^{Q} = R} \right) \) and the alternative \(P\left( {h_{il}^{Q} = A} \right)\) allele at each WGS SNP position l based on the genotype information from sequencing and the IBD between gametes at the neighbouring 50k SNPs. Then, at each WGS SNP position l of the parental gamete \(j\) carried by each of the non-sequenced phenotyped ewes, we inferred the probabilities of carrying the reference \(P\left( {h_{jl}^{p} = R} \right)\) and the alternative allele \(P\left( {h_{jl}^{p} = A} \right)\) based on the gametic phases of the sequenced animals and the IBD between gametes of the sequenced animals and gametes of the phenotyped ewes [50]. The accuracy of the imputation was calculated as the correlation between the probability of an imputed WGS SNP allele at each 50k SNP position and the actual occurrence of the same allele given the 50k genotyping information and the gametic phase reconstructed in the previous analysis.
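The accuracy statistic reduces to a correlation between imputed allele probabilities and observed alleles at the 50k positions; a sketch with toy vectors (one position, six hypothetical gametes):

```python
import numpy as np

def imputation_accuracy(p_ref_imputed, allele_is_ref):
    """Accuracy at a 50k position: correlation between the imputed
    probability of the reference allele and its actual occurrence
    (1 if the phased 50k allele is the reference, else 0)."""
    return np.corrcoef(p_ref_imputed, allele_is_ref.astype(float))[0, 1]

# Hypothetical imputed probabilities and phased 50k alleles
p_hat  = np.array([0.95, 0.90, 0.10, 0.05, 0.85, 0.20])
actual = np.array([1, 1, 0, 0, 1, 0])
print(round(imputation_accuracy(p_hat, actual), 3))
```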
Moreover, to verify that the imputed data could be used for the association analysis, the information content of each WGS SNP for all imputed gametes was calculated as the squared difference of the allele probabilities \(\left[ {P\left( {h_{jl}^{p} = R} \right) - P\left( {h_{jl}^{p} = A} \right)} \right]^{2}\). These statistics were averaged across positions and gametes.
Finally, an association analysis was run in the target regions, by regressing the pseudo-phenotypes on the allele dosage calculated as the sum of the probabilities of carrying the reference allele in the paternal and maternal gametes predicted by imputation. The allele dosage was used instead of the genotype probabilities because it allows the additive substitution effect of the reference allele to be estimated directly with a single regressor in the model, whereas genotype probabilities imply a multiple regression model and are better suited to estimating non-additive effects. As in Eq. (1), the model included the PC extracted from the genome-wide IBD probability matrix to adjust for the polygenic background.
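A sketch of this dosage regression on simulated data follows; the effect size, PC scores and sample size are hypothetical, and the mean and polygenic PCs are included as covariates as in the model above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Imputed probabilities of carrying the reference allele on each gamete
p_pat = rng.uniform(0, 1, n)
p_mat = rng.uniform(0, 1, n)
dosage = p_pat + p_mat            # expected count of reference alleles, 0..2
# Hypothetical polygenic PC scores; simulated substitution effect of 0.3
V_g = rng.normal(size=(n, 3))
y = 0.3 * dosage + V_g @ np.array([0.1, -0.2, 0.05]) + rng.normal(0, 0.1, n)

# A single regressor (the dosage) captures the additive substitution effect
X = np.column_stack([np.ones(n), dosage, V_g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1])                    # estimate of the substitution effect
```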
An F test was performed to calculate the P-values of each tested WGS SNP. The analysis was performed in order to identify the most relevant WGS SNPs, which were selected by setting the threshold of \(- {\text{log}}_{10} \left( {Pvalue} \right)\) equal to the maximum per region minus 2.
Searching for candidate genes
Genes that harboured variants with a potential functional impact or variants that showed the highest P-values identified in the previous analyses were compared with functional candidate genes selected from QTL or gene expression studies related to GIN resistance. In particular, we took advantage of the recent summary provided by [51], which thoroughly reviewed the latest literature on the subject. They identified 11 SNP chip-based QTL detection analyses (based on GWAS, LA, LDLA, selection sweep mapping or regional heritability mapping methods) from which they extracted 230 significantly associated genomic regions. Moreover, they proposed a list of 1892 genes reported as highly or differentially expressed after GIN infection in sheep in 12 different experiments in the field. The QTL regions and GIN-activated genes proposed by [51] were remapped from the Ovis aries genome 3.1 assembly to the Oar4.0 version by using Biomart and NCBI remapping services for comparison with our results.
Finally, we performed an over-representation analysis (ORA) of gene ontology (GO) biological process terms of the genes harboring significant mutations or mutations with functional consequences on the transcripts. We performed the ORA with the web-based software WebGestalt [52]. Gene symbols of human gene orthologues were retrieved from the OrthoDB v10 database [53] starting from the NCBI IDs of sheep genes from the Ovis aries annotation release 102. The human genome protein-coding database was taken as reference and the following parameters were used for the analysis: default statistical method (hypergeometric); minimum number of genes included in a term = 5; multiple test adjustment = BH method (Benjamini–Hochberg FDR). The top ten categories were retained based on FDR rank.
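The hypergeometric test underlying the ORA can be written without external dependencies; all counts in the example are hypothetical:

```python
from math import comb

def hypergeom_enrichment_p(n_ref, n_ref_in_term, n_list, n_hits):
    """One-sided hypergeometric P-value for over-representation:
    probability of drawing >= n_hits genes of a GO term of size
    n_ref_in_term when sampling n_list genes from a reference of n_ref."""
    return sum(comb(n_ref_in_term, k) * comb(n_ref - n_ref_in_term, n_list - k)
               for k in range(n_hits, min(n_list, n_ref_in_term) + 1)) / comb(n_ref, n_list)

# Hypothetical term: 5 of 20 submitted genes hit a 100-gene GO term
# in a 2000-gene reference (expected hits under the null: 1)
p = hypergeom_enrichment_p(n_ref=2000, n_ref_in_term=100, n_list=20, n_hits=5)
print(p)
```

The per-term P-values are then adjusted with the Benjamini–Hochberg FDR procedure, as configured in WebGestalt.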
Variance components
Table 2 shows the variance component estimates obtained with the repeatability animal model. The heritability and repeatability estimates of lnFec were 0.21 ± 0.015 and 0.27 ± 0.012, respectively (Table 2).
Table 2 Estimates and standard errors of genetic, permanent environment (Pe) and residual variances and repeatability (Rp) and heritability (h2) estimates for LnFec
Figure 1 presents the Manhattan plot of the \(- {\text{log}}_{10} \left( {Pvalue} \right)\) corresponding to the null hypothesis that the effects of PC that explain 99% of the variability due to the Sarda base gametes at each locus (43,390 SNPs) are zero. Two hundred and two SNPs exceeded the 5% genome-wide significance threshold. With the exception of Ovis aries chromosome (OAR) 1, on which only one significant location was found, many significant SNPs mapped to the same chromosome. After clustering the significant locations on the same chromosome, ten QTLR were defined on nine chromosomes (Table 3). The ratio between the ANOVA estimator of the QTL variance and the total phenotypic variance ranged from 0.0087 to 0.0176.
Manhattan plot of the \(- {\text{log}}_{10} \left( {Pvalue} \right)\) corresponding to the null hypothesis that the effects of principal components that explain 99% of the variability due to the Sarda base gametes at each locus are zero. The grey line indicates the 0.05 genome-wide significance threshold determined by Bonferroni correction for 43,390 tests
Table 3 QTL regions from the LDLA analysis
The most significant location (\(- {\text{log}}_{10} \left( {Pvalue} \right)\) = 12.861) was in a large region on OAR20, that covered almost 20 Mb and included 154 significant SNPs. Correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) at the peak position and the other 153 significant locations were always higher than 0.25. The second most significant peak was on OAR12 in a QTLR spanning 5.18 Mb and including another 18 significant SNPs, with correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) greater than 0.46. The third QTLR in order of significance was at the beginning of OAR4, spanned 4.6 Mb and included six SNPs. Eleven SNPs on OAR19 exceeded the 5% genome-wide significance threshold. Although the two most distant SNPs defined an interval of about 12.5 Mb, all the SNPs clustered in the same QTLR, since the correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) were always higher than 0.48. Other QTLR (approximately 500 to 700 kb long and including from 1 to 3 significant SNPs) were identified on OAR15, 6, 7 and 2. An additional significant SNP, ~ 100 Mb apart from the previous one, was also identified on OAR2. The last QTLR was defined in the 2-Mb interval surrounding the single significant SNP on OAR1.
QTLR, rounded to the closest Mb, were further investigated with WGS data. Overall, 712,987 biallelic SNPs were extracted from the target regions. Among these, 649,054 were already known in the European Variation Archive (EVA, ftp://ftp.ebi.ac.uk/pub/databases/eva/rs_releases/release_1/by_species/Sheep_9940/GCA_000298735.2), while 63,933 (8.96%) were novel variants, without an associated rs identifier.
The average SNP density ranged from 7711 to 14,428 SNPs per Mb. The accuracy of imputation at the 50k SNP positions ranged from 0.979 on OAR7 to 0.990 on OAR6 (Table 4). The imputation process resulted in an average information content across gametes and QTLR of 0.976 ± 0.17, ranging from 0.967 ± 0.02 on OAR4 to 0.985 ± 0.14 on OAR12. Based on this informativeness, we performed an association analysis at each polymorphic site from WGS (Table 5). Graphical comparisons between the Manhattan plots of the LDLA and the WGS-based association analyses are reported in Additional file 1: Figs. S1–S10.
Table 4 Description of the QTL regions from whole-genome sequences and results of the imputation procedure
Table 5 Results of the association analysis based on imputed alleles at the polymorphic sites from WGS
QTL on OAR4, 12, 19 and 20 remained the most significant. As in the LDLA analysis, the test statistic profile in the WGS analysis was not unimodal and, in some cases, the most significant positions were at different locations compared to the previous analysis. Thus, on OAR4 the peak from the WGS association analysis mapped at 8,686,421 bp, closer to the second peak and almost 3.3 Mb from the most significant position identified with LDLA. Similarly, on OAR12, the WGS peak position was at 41,043,088 bp, 1.6 Mb from the LDLA peak and close to a SNP from the OvineSNP50 Beadchip that did not reach genome-wide significance with LDLA (\(- {\text{log}}_{10} \left( {Pvalue} \right)\) = 5.79). On OAR19, the most significant positions in the LDLA and WGS analyses were only 467 kb apart, although the explored region was 14 Mb long and showed several peaks in both analyses. As far as the QTLR on OAR20 was concerned, the most significant position in the WGS association analysis was almost 5 Mb away from the LDLA peak. However, the other significant WGS SNPs were close to the LDLA peak. Indeed, the second peak from WGS was only 68 kb away from the LDLA peak. Moreover, the SNPs from the OvineSNP50 Beadchip that were closest to the second (rs416381272) and third (rs411905117) significant WGS peaks also ranked third and second in the LDLA analysis. In the other analysed QTLR, which had lower significance levels and fewer significant SNPs, peak positions from WGS data were within 500 kb of the LDLA peaks. Finally, while nominal P-values remained similar in the two analyses for most of the investigated regions, a marked drop in significance was observed on OAR15, where the \(- {\text{log}}_{10} \left( {Pvalue} \right)\) dropped from 7.36 in the LDLA analysis to 4.97 in the WGS-based association analysis.
As far as the functional annotation was concerned, SNPeff provided 2,250,514 effects for the 712,987 analysed SNPs in the explored 60 Mb, since a variant can affect two genes and a gene can have multiple transcripts (Table 6).
Table 6 Summary of the genomic features in the investigated regions
The number of effects by impact (high, moderate, modifier and low), type and region according to the SnpEff classification is reported in Additional file 2: Tables S1–S10. Among the SNPs that affect transcripts, 0.8 to 1% per region concerned pseudogenes and were not considered. In addition, variants located in intergenic regions (from 4.2 to 27.4% of the predicted effects per QTLR) were not further investigated.
Finally, we focused on variants that were classified as having a high impact on the transcript of protein coding genes (classified by SnpEff as: splice_acceptor_variant; splice_donor_variant; start_lost; stop_gained; stop_lost) or a moderate impact (which were all predicted as having a missense effect in our case, i.e. variants that change one or more bases, resulting in a different amino acid sequence of preserved length). On the whole, 3538 polymorphisms were predicted to cause high-impact or missense effects (340 and 9105 effects, respectively) on the multiple transcripts of 530 protein coding genes. A detailed description of the classification of the retained variants is given in Additional file 3: Table S11.
The ten most significant SNPs from the WGS analysis were all classified as modifier, since they were either intergenic or intronic (see Additional file 4: Table S12), and thus had no effect on the transcript. None of the high-impact variants showed high significance levels. Indeed, only four missense variants exceeded the empirical threshold of \(- {\text{log}}_{10} \left( {Pvalue} \right)\) equal to the maximum per region minus 2: one affected three transcripts of the CIART (circadian associated repressor of transcription) gene on OAR1 (rs159646335) and three affected the transcript of the OTOG (otogelin) gene on OAR15 (rs420057627, rs401738285 and rs422155776).
The 530 genes that harbored high-impact or moderate-impact (missense) variants and another 13 genes with polymorphisms exceeding the empirical threshold of \(max\left( { - {\text{log}}_{10} \left( {Pvalue} \right)} \right) - 2\) were submitted to an enrichment analysis of GO biological process terms. Of the 543 genes considered, 50 did not have a human ortholog in the OrthoDB database [53] and 493 mapped to 442 human genes, since 53 shared the same human ortholog. Finally, 376 genes were annotated to the selected functional categories (GO biological process) and were used for the enrichment analysis.
None of the GO terms identified by the enrichment analysis from the biological process database was significantly enriched. The ten most abundant terms identified (interferon-gamma-mediated signaling pathway; sialic acid transport; T cell receptor signaling pathway; activation of immune response; positive regulation of immune system process; regulation of immune system process; immune response-activating cell surface receptor signaling pathway; immune response-regulating signaling pathway; innate immune response; and defense response) (see Additional file 5: Table S13) were further clustered into three higher-level categories according to the weighted set cover method for redundancy reduction available in WebGestalt [52]: sialic acid transport; regulation of immune system process; and defense response. The last two categories, which clearly relate to resistance to diseases, included 53 and 56 genes, respectively, 36 of which enriched both terms. Among the genes in one of these two higher GO categories, 12 were also in the list of GIN-activated genes provided by Chitneedi et al. [51]: CTSS on OAR1, TNFRSF1B and SELE on OAR12, IL5RA on OAR19, IL17A, IL17F, TRIM26, TRIM38, TNFRSF21, LOC101118999, VEGFA, and TNF on OAR20.
The heritability estimate of lnFec in this study was low to moderate and consistent with previous studies in adult ewes, which reported heritabilities of FEC, after appropriate logarithmic or square root transformation, ranging from 0.09 [54] to 0.21 [12] and 0.35 [14]. In contrast, the repeatability estimate was higher, with the permanent environmental variance equal to 6% of the total phenotypic variance. Aguerre et al. [14] did not find significant differences between heritability and repeatability estimates in naturally-infected ewes and suggested that individual variability was mainly due to differences in the genetic background rather than in the immune history of the animals. Although the characterisation of worm species in individual samples was not systematically performed in our experiment, it has been demonstrated that resistance to different species of nematodes tends to be interrelated, with genetic correlations between FEC values from different species or genera of parasites being generally close to 0.5, or higher in some cases [55, 56]. Moreover, it has been shown that sheep that are selected on the basis of their response to artificial challenges respond similarly when exposed to natural infection, and a high positive genetic correlation was estimated between FEC recorded under artificial or natural infection [14, 57]. Such evidence and the heritability estimate found in our study suggest that genetic selection for resistance to parasites could be considered in the Sarda breed.
The LDLA analysis identified 202 genomic positions that were significantly associated to FEC. We grouped these positions into regions based on the correlations between the predicted effects of the QTL. Five of the ten identified QTLR (OAR4, 7, 12, 19, 20) overlapped with regions that were shown to be associated to traits related to GIN resistance in previous SNP-based studies. In particular, the QTLR on OAR4, 12, 19 and 20 overlap with significant windows identified by [21] in a meta-analysis based on the regional heritability mapping method on data including the first two generations of our experimental population. The QTLR on OAR19 has also been found to be significantly associated to FEC measured in lambs [58], while several positions on OAR20 have been indicated as associated to susceptibility to parasites in other studies [17, 19, 20]. The QTLR on OAR7 falls in a region that was identified in a breed of sheep adapted to tropical climate [59] and is close to a signature of selection detected by comparing two breeds selectively bred for high and low FEC [22]. The regions associated to resistance to nematode infection on OAR2 [20, 58, 59], OAR6 [20, 23, 59, 61] and OAR15 [58, 61] were found in several studies, but only our first QTLR on OAR2 (Q_02_1) was close to previously reported significant positions [20, 58, 59].
QTL associated to nematode resistance have been identified on almost all the ovine chromosomes (see [10, 62], and [51] for a recent summary). However, the comparison of results between studies is complex due to the variability of the breeds and nematode species analyzed, and to the use of different statistical approaches. It is likely that resistance to GIN is a complex trait that is determined by a large number of genes [63], and, to date, no major gene has been identified.
In this study, we examined whether combining the significant results obtained from an association analysis of accurately imputed data with the functional annotation of SNPs within target regions was advantageous. The original idea was to verify whether the significance levels of SNPs could help pinpoint functional variants with a potential impact on candidate genes that are identified based on their ontological classification or that are differentially expressed in studies that analyze differences in the susceptibility of sheep to nematodes. All these results are summarized in Additional file 3: Table S11.
The WGS association analysis did not provide a clear-cut significance profile within QTLR. In all the QTLR, the number of peaks remained large, and the distance between them was often considerable. This is likely a consequence of the large size of the chromosomal segments with high correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\), which reveals high LD levels within QTLR. Moreover, none of the most significant SNPs showed a functional effect on the genes' transcripts. This result may be partly due to the fact that we focused on intragenic regions of protein coding genes, whereas it has been suggested that a large part of the genetic variability of quantitative traits lies in regulatory or non-protein-coding regions, which are, however, very poorly annotated in the ovine genome.
However, our results indicate that the QTLR located on OAR12, 19 and 20 are strongly involved in the complex mechanism of resistance of sheep to GIN. Not only do these regions harbor the most significant SNPs in both the LDLA and WGS analyses, but they have also been reported in the literature, both in other QTL detection analyses and in studies on GIN resistance based on differential gene expression. In particular, in these regions, we found genes that: (i) contain polymorphisms with a high-impact or missense effect, (ii) are included in the list of GIN-activated genes, and (iii) contribute to enriching the most represented GO processes in our enrichment analysis. Among these genes, two contributed to enriching the GO terms "regulation of immune system process" and "defense response" and mapped to the QTLR on OAR12: the TNFRSF1B (TNF receptor superfamily member 1B) gene, which harbors a missense mutation (c.103G > A) in exon 2 at position 39,567,687 bp and is very close to the peak of the LDLA analysis (39,430,517 bp), and the SELE (selectin E) gene, which contains four missense variants. According to the Entrez summary for the human ortholog, SELE encodes a protein that is found in cytokine-stimulated endothelial cells and is thought to be responsible for the accumulation of blood leukocytes at sites of inflammation by mediating the adhesion of cells to the vascular lining. In sheep, Gossner et al. [64] found that the SELE gene is down-regulated in the abomasal lymph nodes of resistant lambs infected with T. circumcincta, which suggests that a possible component of the response of resistant animals to GIN infection could be the repression of acute inflammation and tissue healing.
On OAR19, the most significant peak of the WGS association analysis falls in the first intron of the GRM7 (glutamate metabotropic receptor 7) gene, which is neither included in the list of GIN-activated genes nor contributes to the selected GO terms. However, in the explored QTLR on this chromosome, we found 13 missense variants in the IL5RA (interleukin 5 receptor subunit alpha) gene, which supports the enriched GO term "defense response" in our GO enrichment analysis and appears in the list of GIN-activated genes. Indeed, the IL5RA gene was found to have an increased expression in resistant animals in several studies (Scottish Blackface lambs resistant to T. circumcincta [64]; resistant Churra sheep infected by the same species [65]; resistant lambs of two different selection flocks of Merino sheep [66]).
The QTLR identified on OAR20 is indeed very large and encompasses the MHC region, although the genes of the MHC are located 4 to 6 Mb away from the most significant LDLA location. The MHC plays an important role in presenting processed antigens to host T lymphocytes, causing T cell activation and an immunological cascade of events that builds the host immunity. Due to the highly polymorphic nature of the MHC region, it is difficult to identify causative mutations useful for selection for GIN resistance [62]. The most significant SNP in the WGS analysis (rs404860665) mapped to the fourth intron of the LOC101111058 (butyrophilin-like protein 1) gene, which has no function defined in NCBI for sheep. Since no human orthologue of this gene was found in the OrthoDB database [53], it was not included in the enrichment analysis. However, it is highly expressed in the gastrointestinal tract of sheep (caecum, duodenum, colon, and rectum). Moreover, there is accumulating evidence that butyrophilin-like proteins may act as local regulators of intestinal inflammation in other species [67].
In the target region on OAR20, another 20 missense mutations were detected in eight genes (IL17A, IL17F, TRIM26, TRIM38, TNFRSF21, LOC101118999, VEGFA, and TNF), which are present in the list of GIN-activated genes and contributed to enriching the main GO terms "regulation of immune system process" and "defense response". Among these, the genes encoding interleukin 17 (IL17A and IL17F) have been proposed as positional candidates for GIN resistance [68], but, to date, they have not been described in studies on sheep resistance to GIN. However, Gadahi et al. [69] found that the IL-17 level was significantly increased in peripheral blood mononuclear cells (PBMC) of goats incubated with Haemonchus contortus excretory and secretory proteins (HcESP) and suggested that such an enhanced IL-17 level might favor the survival of the worm in the host. Moreover, it has been reported that the IL17F gene showed the most significant expression difference in the response of the abomasal mucosa of Creole goat kids infected with Haemonchus contortus, i.e. its expression was three times higher in resistant than in susceptible animals [70]. Missense mutations were also detected in the TNF (tumor necrosis factor) and TNFRSF21 (TNF receptor superfamily member 21) genes. Tumor necrosis factor (TNF) is a cytokine involved in systemic inflammation. The interactions between TNF family ligands and their receptors modulate a number of signaling pathways in the immune system, such as cell proliferation, differentiation, apoptosis and survival [71]. Artis et al. [72] suggested a role for TNF-α in regulating Th2 cytokine responses in the intestine, which has a significant effect on protective immunity to helminth infection. Moreover, the TNFα gene was relatively highly expressed in intestinal lymph cells of sheep selected for resistance to nematodes during infection with Trichostrongylus colubriformis [73].
In mice, TNFRSF21-knockout studies suggest that this gene plays a role in T-helper cell activation and may be involved in inflammation and immune regulation [71]. A missense mutation was found in the VEGFA (vascular endothelial growth factor A) gene, which was differentially expressed in abomasal lymph nodes of lambs with different susceptibilities to GIN [64] and in the abomasal mucosa of sheep infected with Haemonchus contortus [74]. Finally, nine already known missense mutations were detected in the TRIM26 and TRIM38 genes. The products of these genes belong to the tripartite motif (TRIM) protein family, composed of more than 70 members in humans. Accumulating evidence indicates that TRIM proteins play crucial roles in the regulation of the pathogenesis of autoimmune diseases and in the host defense against pathogens, especially viruses [75]. Both genes were among the GIN-activated genes and contributed to enrich the terms "defense response" (TRIM38) and "interferon-gamma-mediated signaling pathway", "innate immune response", and "defense response" (TRIM26). Lyu et al. [76], who investigated the risk of nasopharyngeal carcinoma in humans, detected a regulatory variant in this gene and suggested that the allele-dependent downregulation of TRIM26 contributed to the downregulation of several immune genes and was thus associated with a low immune response.
Our results show that selective breeding may be an option to limit the problems related to gastro-intestinal nematode infestation in sheep. On the one hand, the heritability estimate and the QTL detection results confirm that both traditional progeny testing and marker-assisted selection are realistic options; however, the laboriousness of fecal egg counting on a large scale makes marker-assisted selection potentially more profitable in terms of cost benefits. Indeed, the ten significant markers identified in our study, which are already available on commercial Illumina arrays, explain an important portion of the genetic variation in our large population. On the other hand, the combined use of whole-genome data and functional annotation did not provide any marker or causative mutation that could improve the efficiency of a marker-assisted selection program in the short term. However, our study, which was carried out on a large experimental population, provides a first list of candidate genes and SNPs that could be used to guide further validation studies on independent populations. In the mid-term, the expected advancements in the quality of the annotation of the ovine genome and the use of experimental designs based on sequence data and phenotypes from multiple breeds that show different LD extents and gametic phases may help to identify causative mutations. As far as the Sarda breed is concerned, the Breeders Association is assessing the feasibility of a selection program for nematode resistance based on fecal egg counts and on the genotypes described in this study for the nucleus flock, combined with the genotyping of selection candidate males that are bred in Herd Book farms and are genetically connected with the experimental flock.
The data that support the findings of this study are available from Centro Regionale di Programmazione (CRP), Regione Autonoma della Sardegna but restrictions apply to the availability of these data, which were used under license for the current study, and thus are not publicly available. However, data are available from the authors upon reasonable request and with permission of Centro Regionale di Programmazione (CRP), Regione Autonoma della Sardegna.
Kaplan RM, Vidyashankar AN. An inconvenient truth: Global worming and anthelmintic resistance. Vet Parasitol. 2012;186:70–8.
Mavrot F, Hertzberg H, Torgerson P. Effect of gastro-intestinal nematode infection on sheep performance: a systematic review and meta-analysis. Parasit Vectors. 2015;8:557.
Geurden T, Hoste H, Jacquiet P, Traversa D, Sotiraki S, Frangipane di Regalbono A, et al. Anthelmintic resistance and multidrug resistance in sheep gastro-intestinal nematodes in France, Greece and Italy. Vet Parasitol. 2014;201:59–66.
Aguiar de Oliveira P, Riet-Correa B, Estima-Silva P, Coelho ACB, dos Santos BL, Costa MAP, et al. Multiple anthelmintic resistance in Southern Brazil sheep flocks. Rev Bras Parasitol Vet. 2017;26:427–32.
Sargison ND, Jackson F, Bartley DJ, Wilson DJ, Stenhouse LJ, Penny CD. Observations on the emergence of multiple anthelmintic resistance in sheep flocks in the south-east of Scotland. Vet Parasitol. 2007;145:65–76.
McMahon C, Bartley DJ, Edgar HWJ, Ellison SE, Barley JP, Malone FE, et al. Anthelmintic resistance in Northern Ireland (I): Prevalence of resistance in ovine gastrointestinal nematodes, as determined through faecal egg count reduction testing. Vet Parasitol. 2013;195:122–30.
Jackson F, Miller J. Alternative approaches to control-Quo vadit? Vet Parasitol. 2006;139:371–84.
Brito DL, Dallago BSL, Louvandini H, dos Santos VRV, de Araújo Torres SEF, Gomes EF, et al. Effect of alternate and simultaneous grazing on endoparasite infection in sheep and cattle. Rev Bras Parasitol Vet. 2013;22:485–94.
Houdijk JGM, Kyriazakis I, Kidane A, Athanasiadou S. Manipulating small ruminant parasite epidemiology through the combination of nutritional strategies. Vet Parasitol. 2012;186:38–50.
Zvinorova PI, Halimani TE, Muchadeyi FC, Matika O, Riggio V, Dzama K. Breeding for resistance to gastrointestinal nematodes - the potential in low-input/output small ruminant production systems. Vet Parasitol. 2016;225:19–28.
Bouix J, Krupinski J, Rzepecki R, Nowosad B, Skrzyzala I, Roborzynski M, et al. Genetic resistance to gastrointestinal nematode parasites in Polish long-wool sheep. Int J Parasitol. 1998;28:1797–804.
Sechi S, Salaris S, Scala A, Rupp R, Moreno C, Bishop SC, et al. Estimation of (co)variance components of nematode parasites resistance and somatic cell count in dairy sheep. Ital J Anim Sci. 2009;8:156–8.
Assenza F, Elsen J-M, Legarra A, Carré C, Sallé G, Robert-Granié C, et al. Genetic parameters for growth and faecal worm egg count following Haemonchus contortus experimental infestations using pedigree and molecular information. Genet Sel Evol. 2014;46:13.
Aguerre S, Jacquiet P, Brodier H, Bournazel JP, Grisez C, Prévot F, et al. Resistance to gastrointestinal nematodes in dairy sheep: genetic variability and relevance of artificial infection of nucleus rams to select for resistant ewes on farms. Vet Parasitol. 2018;256:16–23.
Beh KJ, Hulme DJ, Callaghan MJ, Leish Z, Lenane I, Windon RG, et al. A genome scan for quantitative trait loci affecting resistance to Trichostrongylus colubriformis in sheep. Anim Genet. 2002;33:97–106.
Crawford AM, Paterson KA, Dodds KG, Diez Tascon C, Williamson PA, Roberts Thomson M, et al. Discovery of quantitative trait loci for resistance to parasitic nematode infection in sheep: I. Analysis of outcross pedigrees. BMC Genomics. 2006;7:178.
Davies G, Stear MJ, Benothman M, Abuagob O, Kerr A, Mitchell S, et al. Quantitative trait loci associated with parasitic infection in Scottish blackface sheep. Heredity. 2006;96:252–8.
Gutiérrez-Gil B, Pérez J, Álvarez L, Martínez-Valladares M, De La Fuente LF, Bayán Y, et al. Quantitative trait loci for resistance to trichostrongylid infection in Spanish Churra sheep. Genet Sel Evol. 2009;41:46.
Sallé G, Jacquiet P, Gruner L, Cortet J, Sauvé C, Prévot F, et al. A genome scan for QTL affecting resistance to Haemonchus contortus in sheep. J Anim Sci. 2012;90:4690–705.
Riggio V, Matika O, Pong-Wong R, Stear MJ, Bishop SC. Genome-wide association and regional heritability mapping to identify loci underlying variation in nematode resistance and body weight in Scottish Blackface lambs. Heredity. 2013;110:420–9.
Riggio V, Pong-Wong R, Sallé G, Usai MG, Casu S, Moreno CR, et al. A joint analysis to identify loci underlying variation in nematode resistance in three European sheep populations. J Anim Breed Genet. 2014;131:426–36.
McRae KM, McEwan JC, Dodds KG, Gemmell NJ. Signatures of selection in sheep bred for resistance or susceptibility to gastrointestinal nematodes. BMC Genomics. 2014;15:637.
Atlija M, Arranz J-J, Martinez-Valladares M, Gutiérrez-Gil B. Detection and replication of QTL underlying resistance to gastrointestinal nematodes in adult sheep using the ovine 50K SNP array. Genet Sel Evol. 2016;48:4.
Sechi S, Giobbe M, Sanna G, Casu S, Carta A, Scala A. Effects of anthelmintic treatment on milk production in Sarda dairy ewes naturally infected by gastrointestinal nematodes. Small Rumin Res. 2010;88:145–50.
Scala A, Bitti PL, Fadda M, Pilia A, Varcasia A. I trattamenti antiparassitari negli allevamenti ovini della Sardegna. In: Proceedings of the 7th Congress of Mediterranean Federation for Health and Production of Ruminants: 22–24 April 1999; Santarem. 1999. p. 267–72.
Salaris S, Usai MG, Casu S, Sechi T, Manunta A, Bitti M, et al. Perspectives of the selection scheme of the Sarda dairy sheep breed in the era of genomics. ICAR Tech Ser. 2018;23:79–88.
Usai MG, Casu S, Sechi T, Salaris SL, Miari S, Sechi S, et al. Mapping genomic regions affecting milk traits in Sarda sheep by using the OvineSNP50 Beadchip and principal components to perform combined linkage and linkage disequilibrium analysis. Genet Sel Evol. 2019;51:65.
Nicolazzi EL, Caprera A, Nazzicari N, Cozzi P, Strozzi F, Lawley C, et al. SNPchiMp vol 3: integrating and standardizing single nucleotide polymorphism data for livestock species. BMC Genomics. 2015;16:283.
Casu S, Sechi T, Usai MG, Miari S, Casula M, Mulas G, et al. Investigating a highly significant QTL for milk protein content segregating in Sarda sheep breed close to the caseins cluster region by whole genome re-sequencing of target animals. In: Proceedings of the 10th World Congress of Genetics Applied to Livestock Production; 2014. Accessed 24 Jan 2018.
Köster J, Rahmann S. Snakemake—a scalable bioinformatics workflow engine. Bioinformatics. 2018;34:3600.
Krueger F, James F, Ewels P, Afyounian E, Schuster-Boeckler B. FelixKrueger/TrimGalore: v0.6.7. https://doi.org/10.5281/zenodo.5127898 Accessed 3 Dec 2021.
Andrews S. FastQC: a quality control tool for high throughput sequence data. 2010. https://www.bioinformatics.babraham.ac.uk/projects/fastqc Accessed 3 Dec 2021.
Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–60.
Danecek P, Bonfield JK, Liddle J, Marshall J, Ohan V, Pollard MO, et al. Twelve years of SAMtools and BCFtools. Gigascience. 2021;10:1–4.
Broad Institute. Picard Toolkit. 2019. https://broadinstitute.github.io/picard/. Accessed 3 Dec 2021.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303.
Van der Auwera GA, Carneiro MO, Hartl C, Poplin R, del Angel G, Levy-Moonshine A, et al. From fastQ data to high-confidence variant calls: the genome analysis toolkit best practices pipeline. Curr Protoc Bioinform. 2013;43:11.
Raynaud J-P, William G, Brunault G. Etude de l'efficacité d'une technique de coproscopie quantitative pour le diagnostic de routine et le contrôle des infestations parasitaires des bovins, ovins, équins et porcins. Ann Parasitol Hum Comp. 1970;45:321–42.
Euzéby J. Diagnostic expérimental des helminthoses animales. Paris: Edition Vigot Frères; 1958.
van Wyk JA, Mayhew E. Morphological identification of parasitic nematode infective larvae of small ruminants and cattle: a practical lab guide. Onderstepoort J Vet Res. 2013;80:539.
Butler DG, Cullis BR, Gilmour AR, Gogel BJ, Thompson R. ASReml-R reference manual version 4. Hemel Hempstead: VSN International Ltd; 2018. p. 188.
Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, et al. Common SNPs explain a large proportion of the heritability for human height. Nat Genet. 2010;42:565–9.
Yang J, Lee SH, Goddard ME, Visscher PM. GCTA: a tool for genome-wide complex trait analysis. Am J Hum Genet. 2011;88:76–82.
Ripley B, Venables B, Bates DM, Firth D, Hornik K, Gebhardt A. Package "MASS". Support functions and datasets for Venables and Ripley's MASS. 2018. http://www.r-project.org. Accessed 3 Dec 2021.
Meuwissen T, Goddard M. The use of family relationships and linkage disequilibrium to impute phase and missing genotypes in up to whole-genome sequence density genotypic data. Genetics. 2010;185:1441–9.
Meuwissen TH, Goddard ME. Prediction of identity by descent probabilities from marker-haplotypes. Genet Sel Evol. 2001;33:605–34.
Elsen J-M, Mangin B, Goffinet B, Boichard D, Le Roy P. Alternative models for QTL detection in livestock I. General introduction. Genet Sel Evol. 1999;31:213–24.
Pong-Wong R, George AW, Woolliams JA, Haley CS. A simple and rapid method for calculating identity-by-descent matrices using multiple markers. Genet Sel Evol. 2001;33:453–71.
Cingolani P, Platts A, Wang LL, Coon M, Nguyen T, Wang L, et al. A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly. 2012;6:80–92.
Usai MG, Casu S, Ziccheddu B, Sechi T, Miari S, Carta P, et al. Using identity-by-descent probability to impute whole genome sequence variants in a nucleus flock. Ital J Anim Sci. 2019;18:S52.
Chitneedi PK, Arranz JJ, Suárez-Vega A, Martínez-Valladares M, Gutiérrez-Gil B. Identification of potential functional variants underlying ovine resistance to gastrointestinal nematode infection by using RNA-Seq. Anim Genet. 2020;51:266–77.
Liao Y, Wang J, Jaehnig EJ, Shi Z, Zhang B. WebGestalt 2019: gene set analysis toolkit with revamped UIs and APIs. Nucleic Acids Res. 2019;47:W199-205.
Kriventseva EV, Kuznetsov D, Tegenfeldt F, Manni M, Dias R, Simão FA, et al. OrthoDB v10: sampling the diversity of animal, plant, fungal, protist, bacterial and viral genomes for evolutionary and functional annotations of orthologs. Nucleic Acids Res. 2019;47:D807–11.
Gutiérrez-Gil B, Pérez J, De La Fuente LF, Meana A, Martínez-Valladares M, San Primitivo F, et al. Genetic parameters for resistance to trichostrongylid infection in dairy sheep. Animal. 2010;4:505–12.
Bishop SC, Jackson F, Coop RL, Stear MJ. Genetic parameters for resistance to nematode infections in Texel lambs and their utility in breeding programmes. Anim Sci. 2004;78:185–94.
Gruner L, Bouix J, Brunel JC. High genetic correlation between resistance to Haemonchus contortus and to Trichostrongylus colubriformis in INRA 401 sheep. Vet Parasitol. 2004;119:51–8.
Gruner L, Bouix J, Vu Tien Khang J, Mandonnet N, Eychenne F, Cortet J, et al. A short-term divergent selection for resistance to Teladorsagia circumcincta in Romanov sheep using natural or artificial challenge. Genet Sel Evol. 2004;36:217–42.
Pickering NK, Auvray B, Dodds KG, McEwan JC. Genomic prediction and genome-wide association study for dagginess and host internal parasite resistance in New Zealand sheep. BMC Genomics. 2015;16:958.
Berton MP, de Oliveira Silva RM, Peripolli E, Stafuzza NB, Martin JF, Álvarez MS, et al. Genomic regions and pathways associated with gastrointestinal parasites resistance in Santa Inês breed adapted to tropical climate. J Anim Sci Biotechnol. 2017;8:73.
Al Kalaldeh M, Gibson J, Lee SH, Gondro C, van der Werf JHJ. Detection of genomic regions underlying resistance to gastrointestinal parasites in Australian sheep. Genet Sel Evol. 2019;51:37.
Benavides MV, Sonstegard TS, Kemp S, Mugambi JM, Gibson JP, Baker RL, et al. Identification of novel loci associated with gastrointestinal parasite resistance in a red Maasai x Dorper backcross population. PLoS ONE. 2015;10:e0122797.
Sweeney T, Hanrahan JP, Ryan MT, Good B. Immunogenomics of gastrointestinal nematode infection in ruminants—breeding for resistance to produce food sustainably and safely. Parasite Immunol. 2016;38:569–86.
Kemper KE, Emery DL, Bishop SC, Oddy H, Hayes BJ, Dominik S, et al. The distribution of SNP marker effects for faecal worm egg count in sheep, and the feasibility of using these markers to predict genetic merit for resistance to worm infections. Genet Res. 2011;93:203–19.
Gossner A, Wilkie H, Joshi A, Hopkins J. Exploring the abomasal lymph node transcriptome for genes associated with resistance to the sheep nematode Teladorsagia circumcincta. Vet Res. 2013;44:68.
Chitneedi PK, Suárez-Vega A, Martínez-Valladares M, Arranz JJ, Gutiérrez-Gil B. Exploring the mechanisms of resistance to Teladorsagia circumcincta infection in sheep through transcriptome analysis of abomasal mucosa and abomasal lymph nodes. Vet Res. 2018;49:39.
Zhang R, Liu F, Hunt P, Li C, Zhang L, Ingham A, et al. Transcriptome analysis unraveled potential mechanisms of resistance to Haemonchus contortus infection in Merino sheep populations bred for parasite resistance. Vet Res. 2019;50:7.
Yamazaki T, Goya I, Graf D, Craig S, Martin-Orozco N, Dong C. A butyrophilin family member critically inhibits T cell activation. J Immunol. 2010;185:5907–14.
Benavides MV, Sonstegard TS, Van Tassell C. Genomic regions associated with sheep resistance to gastrointestinal nematodes. Trends Parasitol. 2016;32:470–80.
Gadahi JA, Yongqian B, Ehsan M, Zhang ZC, Wang S, Yan RF, et al. Haemonchus contortus excretory and secretory proteins (HcESPs) suppress functions of goat PBMCs in vitro. Oncotarget. 2016;7:35670–9.
Aboshady HM, Mandonnet N, Félicité Y, Hira J, Fourcot A, Barbier C, et al. Dynamic transcriptomic changes of goat abomasal mucosa in response to Haemonchus contortus infection. Vet Res. 2020;51:44.
Liu J, Na S, Glasebrook A, Fox N, Solenberg PJ, Zhang Q, et al. Enhanced CD4+ T cell proliferation and Th2 cytokine production in DR6-deficient mice. Immunity. 2001;15:23–34.
Artis D, Humphreys NE, Bancroft AJ, Rothwell NJ, Potten CS, Grencis RK. Tumor necrosis factor α is a critical component of interleukin 13- mediated protective T helper cell type 2 responses during helminth infection. J Exp Med. 1999;190:953–62.
Pernthaner A, Cole SA, Morrison L, Hein WR. Increased expression of interleukin-5 (IL-5), IL-13, and tumor necrosis factor alpha genes in intestinal lymph cells of sheep selected for enhanced resistance to nematodes during infection with Trichostrongylus colubriformis. Infect Immun. 2005;73:2175–83.
Guo Z, González JF, Hernandez JN, McNeilly TN, Corripio-Miyar Y, Frew D, et al. Possible mechanisms of host resistance to Haemonchus contortus infection in sheep breeds native to the Canary Islands. Sci Rep. 2016;6:26200.
Yang W, Gu Z, Zhang H, Hu H. To TRIM the immunity: From innate to adaptive immunity. Front Immunol. 2020;11:02157.
Lyu XM, Zhu XW, Zhao M, Zuo XB, Huang ZX, Liu X, et al. A regulatory mutant on TRIM26 conferring the risk of nasopharyngeal carcinoma by inducing low immune response. Cancer Med. 2018;7:3848–61.
The authors gratefully acknowledge Severino Tolu and the staff of the AGRIS experimental unit at Monastir for technical support in raising, monitoring and recording the animals; Giorgia Dessì for participating in the fecal egg counting; Stefania Sechi for her contribution in editing and archiving data collected in early stages of the experiment.
This study was part of the MIGLIOVIGENSAR project funded by Centro Regionale di Programmazione (CRP), Regione Autonoma della Sardegna (LR n.7/2007 R.A).
Genetics and Biotechnology – Agris Sardegna, Olmedo, Italy
Sara Casu, Mario Graziano Usai, Tiziana Sechi, Sotero L. Salaris, Sabrina Miari, Giuliana Mulas & Antonello Carta
Department of Veterinary Medicine, University of Sassari, Sassari, Italy
Claudia Tamponi, Antonio Varcasia & Antonio Scala
SC carried out the phenotypic and the functional annotation analyses, participated in data interpretation and drafted the manuscript. MGU developed the statistical methodology for QTL detection and imputation analyses, wrote the Fortran programs and performed the statistical analyses. TS, with the collaboration of GM and SM, performed the genotyping. SLS participated in the data analyses and interpretation of results. AV and CT performed the fecal egg count. AS planned the recording system and managed the fecal egg counting. AC conceived the overall design, undertook the project management, contributed to the interpretation of results and critically revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Mario Graziano Usai.
Ewes from the experimental farm were raised under breeding conditions that are similar to those of commercial sheep flocks. Blood sampling and anthelmintic treatments were performed by veterinarians or under veterinarian supervision following standard procedures and relevant national guidelines to ensure appropriate animal care.
Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_01_1 (chromosome 1). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals (WGS Mapping, blue dots) in the QTL region Q_01_1 (chromosome 1, imputation from 99 to 100 Mb of the Ovis aries genome assembly v4.0). Figure S2. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_02_1 (chromosome 2). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_02_1 (chromosome 2, imputation from 135 to 137 Mb of the Ovis aries genome assembly v4.0). Figure S3. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_02_2 (chromosome 2). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_02_2 (chromosome 2, imputation from 212 to 214 Mb of the Ovis aries genome assembly v4.0). Figure S4. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_04_1 (chromosome 4). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_04_1 (chromosome 4, imputation from 4 to 10 Mb of the Ovis aries genome assembly v4.0). Figure S5. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_06_1 (chromosome 6). 
The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_06_1 (chromosome 6, imputation from 12 to 14 Mb of the Ovis aries genome assembly v4.0). Figure S6. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_07_1 (chromosome 7). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_07_1 (chromosome 7, imputation from 87 to 89 Mb of the Ovis aries genome assembly v4.0). Figure S7. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_12_1 (chromosome 12). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_12_1 (chromosome 12, imputation from 35 to 42 Mb of the Ovis aries genome assembly v4.0). Figure S8. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_15_1 (chromosome 15). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_15_1 (chromosome 15, imputation from 33 to 35 Mb of the Ovis aries genome assembly v4.0). Figure S9. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_19_1 (chromosome 19).
The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_19_1 (chromosome 19, imputation from 18 to 32 Mb of the Ovis aries genome assembly v4.0). Figure S10. Graphical comparison of LDLA and WGS-based data association analyses within the QTL region Q_20_1 (chromosome 20). The figure shows the test statistics (− log10(nominal p-values) profile of the LDLA analysis (LDLA Mapping, red line) and Manhattan plot of the association analysis based on imputed genotypes from re-sequenced animals in the QTL region Q_20_1 (chromosome 20, imputation from 16 to 37 Mb of the Ovis aries genome assembly v4.0).
Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_01_1 on Ovis aries chromosome 1. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_01_1 on chromosome 1 (from 99000291 to 100998839 bp, Ovis aries genome assembly v4.0). Table S2. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_02_1 on Ovis aries chromosome 2. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_02_1 on chromosome 2 (from 135000202 to 136999313 bp, Ovis aries genome assembly v4.0). Table S3. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_02_2 on Ovis aries chromosome 2. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_02_2 on chromosome 2 (from 212000099 to 213999982 bp, Ovis aries genome assembly v4.0). Table S4. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_04_1 on Ovis aries chromosome 4. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_04_1 on chromosome 4 (from 4000037 to 10000000 bp, Ovis aries genome assembly v4.0). Table S5. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_06_1 on Ovis aries chromosome 6.
Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_06_1 on chromosome 6 (from 12000078 to 13999887 bp, Ovis aries genome assembly v4.0). Table S6. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_07_1 on Ovis aries chromosome 7. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_07_1 on chromosome 7 (from 87000021 to 88999946 bp, Ovis aries genome assembly v4.0). Table S7. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_12_1 on Ovis aries chromosome 12. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_12_1 on chromosome 12 (from 35000043 to 41999843 bp, Ovis aries genome assembly v4.0). Table S8. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_15_1 on Ovis aries chromosome 15. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_15_1 on chromosome 15 (from 33000037 to 34999984 bp, Ovis aries genome assembly v4.0). Table S9. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_19_1 on Ovis aries chromosome 19.
Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_19_1 on chromosome 19 (from 18000014 to 31999894 bp, Ovis aries genome assembly v4.0). Table S10. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_20_1 on Ovis aries chromosome 20. Summary table extracted from the additional SnpEff output file "snpEff_summary.html" reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_20_1 on chromosome 20 (from 16000304 to 36997864 bp, Ovis aries genome assembly v4.0).
Additional file 3: Table S11.
Full characterization of the retained SNPs: high or moderate impact variants, or most significant variants from the association analysis, mapping within the QTL regions. Description of the retained SNPs that mapped within the QTL regions identified in the present work: functional annotation from SnpEff; nominal significance level (−log10(nominal p-values)) from the WGS-based association analysis; GO biological process enriched term from the WebGestalt analysis; and study from which the candidate GIN-activated gene listed by Chitneedi et al. 2020 [51] was identified. The SNP positions are from the Ovis aries genome assembly v4.0.
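As an illustrative aside (not the authors' actual pipeline), the impact-based part of this retention criterion can be sketched from SnpEff's standard `ANN` INFO sub-field, whose comma-separated entries follow `Allele|Annotation|Impact|Gene_Name|...`; the example INFO string below is hypothetical:

```python
def snpeff_annotations(info_field):
    """Yield (gene, impact) pairs from the 'ANN=' sub-field of a VCF INFO
    string. Per the SnpEff ANN specification, each comma-separated entry is
    'Allele|Annotation|Impact|Gene_Name|...', so the impact is pipe-field 2
    and the gene name is pipe-field 3."""
    ann_value = info_field.split("ANN=")[1].split(";")[0]
    for entry in ann_value.split(","):
        fields = entry.split("|")
        yield fields[3], fields[2]


def retained(info_field, keep=("HIGH", "MODERATE")):
    """Keep only annotations whose predicted impact is HIGH or MODERATE,
    mirroring the high/moderate-impact filter described in the text."""
    return [(gene, impact) for gene, impact in snpeff_annotations(info_field)
            if impact in keep]
```

In practice one would apply such a filter per VCF record (e.g. while streaming the file), combining it with the association-significance criterion described above.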
Functional characterization of the 10 most significant SNPs per QTLR from the WGS analysis. Characterization of the 10 most significant SNPs of the QTLR considered in this work and their functional consequences according to the annotation performed with SnpEff. The SNP positions are from the Ovis aries genome assembly v4.0.
Top hierarchical terms identified by the Gene Ontology (GO) enrichment analysis (biological process database) performed with WebGestalt. Results of the over-representation analysis (ORA) of GO biological process terms for the genes harboring significant mutations or mutations with functional consequences on the transcripts, performed with WebGestalt. Gene symbols and IDs of human gene orthologues are reported; they were retrieved from the OrthoDB v10 database starting from the NCBI IDs of ovine genes from the Ovis aries annotation release 102.
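For context, an ORA of this kind rests on a one-sided hypergeometric test: given a background of N annotated genes, K of which carry a GO term, the p-value for a study set of n genes containing k term genes is P(X ≥ k). A minimal stdlib-only sketch is shown below (WebGestalt additionally applies multiple-testing correction across terms, which is omitted here):

```python
from math import comb

def ora_pvalue(background, term_genes, study, overlap):
    """One-sided hypergeometric p-value for over-representation:
    P(X >= overlap) when drawing `study` genes without replacement from
    `background` genes, `term_genes` of which are annotated to the term."""
    denom = comb(background, study)
    return sum(
        comb(term_genes, k) * comb(background - term_genes, study - k)
        for k in range(overlap, min(term_genes, study) + 1)
    ) / denom
```

For example, with a background of 10 genes, 5 of them term-annotated, a study set of 5 genes that are all term genes gives p = 1/252, i.e. the single fully-overlapping draw out of C(10,5) possible draws.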
Casu, S., Usai, M.G., Sechi, T. et al. Association analysis and functional annotation of imputed sequence data within genomic regions influencing resistance to gastro-intestinal parasites detected by an LDLA approach in a nucleus flock of Sarda dairy sheep. Genet Sel Evol 54, 2 (2022). https://doi.org/10.1186/s12711-021-00690-7
Journal of Big Data
A new effective method for labeling dynamic XML data
Eynollah Khanjari & Leila Gaeini
Journal of Big Data volume 5, Article number: 50 (2018)
Query processing based on labeling dynamic XML documents has gained increasing attention in the past several years. An efficient labeling scheme should produce small labels while keeping the algorithm simple, in order to avoid complex computations as well as to retain the readability of structural relationships between nodes. Moreover, for dynamic XML data, relabeling the nodes on XML updates should be avoided. However, the existing schemes lack the capability of supporting all of these requirements. In this paper, we propose a new labeling scheme which assigns variable-length labels to nodes in dynamic XML documents. Our method employs the FibLSS encoding scheme, which exploits the properties of the Fibonacci sequence to provide variable-length node labels of appropriate size. In the XML updating process, we add a new component only to the new node's label, without relabeling the existing nodes, while keeping the order of nodes and preserving the structural relationships. Our labeling method is scalable, as it is not subject to overflow, and as the number of nodes to be labeled increases exponentially, the size of labels grows linearly, which makes it suitable for big datasets. It also has the best performance in computational processing costs compared to existing approaches. The results of the experiments confirm the advantages of our proposed method in comparison to state-of-the-art techniques.
XML is a semi-structured, standard document format for exchanging data. Elements in XML documents are regular and there are structural relationships between them [1]. Query processing in XML should recognize these structural relationships and determine the order of elements in a document. Node labeling in XML data is one way to increase the efficiency of query processing. Labeling means allocating a unique identifier to each node in an XML document [2]. A labeling scheme encompasses traversing or browsing the document, analyzing the elements, and assessing the relationships between them. So, it should generate labels small enough to be processed efficiently, both in the initial label assignment and when queries are issued.
A challenging problem of the existing labeling schemes is the need to relabel nearly all existing nodes after inserting new nodes into an XML document. While updating XML data is a common operation in many real-world applications, e.g. stream data, relabeling degrades query performance, especially when large labels are assigned to the nodes. In this paper, we introduce a new method to label XML documents. This method attends to the evaluation criteria of labeling schemes; besides being efficient, it can optimize queries on dynamic XML data without relabeling the existing nodes.
The labeling method supports the structural relationships between nodes: AD (Ancestor–Descendant), PC (Parent–Child), DO (Document Order)Footnote 1 and sibling relations. The method uses a simple algorithm to produce small labels. Experimental results show that the proposed method is efficient in terms of label size, labeling time, querying time and update/insertion time. The results are compared against state-of-the-art labeling methods.
In the following, we provide a summary of the related works in "Related works" section. Our proposed method is introduced in "Proposed method" section. In "Results and discussion" section, the results of the experiments are reported and analyzed in terms of several performance criteria. The paper is concluded in "Summary and conclusions" section, followed by "Future work" section which gives some perspectives on future works.
Several methods have been proposed for labeling XML documents. They can be grouped into four categories:
Range-based labeling schemes: These schemes are characterized by incorporating <START, END> arguments into the labels [3]. The START and END components of a label determine the start and the end of the corresponding position of a node in the XML tree. The schemes under this category [1, 3,4,5,6,7,8,9,10] can determine AD relationships. Labels in these schemes are compact, and the depth of the tree does not influence their size. The main challenge of these schemes is that they do not support dynamic XML documents; that is, after inserting new nodes, relabeling is inevitable. Inability to detect all the structural relationships is another weakness of the range-based labeling schemes.
Prefix-based labeling schemes: In the schemes in this category [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25], each node label contains the label of its ancestors besides its own. The structural relationships are determined by looking at the labels. However, the size of the labels increases, especially for nodes occurring in deeper levels of the XML tree, yielding more storage overhead. Moreover, inserting a new node in the rightmost place at a level does not require relabeling other nodes, but inserting elsewhere will affect the labels of other nodes. In [26], deleted labels are reused for encoding newly inserted nodes, which can effectively lower the label size.
Multiplication-based labeling schemes: These schemes [5, 27,28,29,30,31,32,33,34,35] use prime numbers with multiplication and division operations to determine the relationships among the nodes. Prime numbers are exploited in order to guarantee the uniqueness of the labels. However, this leads to excessive space overhead as well as computation overhead for determining the order of nodes in the document.
Vector-based labeling schemes: These schemes use a vector order [36,37,38,39,40,41]. The methods in this category are defined based on vector ordering and are commonly orthogonal to the other schemes, i.e. they can be applied on top of other labeling schemes. The basic problem with these schemes is their computation time overhead, both for general cases of new node insertion and for querying nodes at different levels.
Our proposed scheme exploits the advantages of prefix-based labeling schemes and provides solutions to overcome their disadvantages.
Proposed method
In this section, we provide a detailed presentation of how the whole process of node labeling takes place. Firstly, we illustrate how to label the nodes using an example. Then, we describe how a newly inserted node is labeled, how the order of nodes is preserved, and how the structural relationships among nodes are determined. Also, we analyze the label size requirements of our labeling method.
The proposed labeling method uses binary bit values (0 and 1) for specifying the identity of a node. For a node in an XML tree, we keep the level at which the node is in the tree, the identifier of its parent, and the node's identifier, which we denote respectively by <Level, Parentid, Selfid>, where Selfid is the identifier of the node itself. By the "label", we mean the whole collection composed of the three parts.
Binary values are used for each of the three parts, with the appropriate minimum length. For example, with a length of one bit, the first node at a given level takes Selfid 0 and the second takes 1. To label the third node at that level, two bits are required: the third node takes 00 for its Selfid, the fourth takes 01, and so on. When new nodes are inserted, the labels of the existing nodes remain unchanged. To preserve the uniqueness of the labels, some bits are added to the end of Selfid. We call the augmented bits the 'update identifier', denoted by UpID. Note that our labeling method keeps the ordering of nodes and supports the structural relationships among them. Figure 1 depicts an example XML tree with nodes labeled by the proposed method.
An XML tree labeled by the proposed method
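To make the sibling numbering concrete, the sequence of SelfIDs assigned at one level can be enumerated as below. This is a minimal sketch; the function name is ours, not part of the paper:

```python
def self_ids(n):
    """Enumerate the first n SelfIDs assigned at one level:
    '0', '1', '00', '01', '10', '11', '000', ...
    Bit strings of increasing length; within one length, lexicographic order."""
    ids, width = [], 1
    while True:
        for value in range(2 ** width):
            ids.append(format(value, "0{}b".format(width)))  # zero-padded binary
            if len(ids) == n:
                return ids
        width += 1
```

With `n = 6`, this yields the first six sibling SelfIDs: 0, 1, 00, 01, 10, 11.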
As shown in Fig. 1, the length of the labels is variable, so the labeling scheme must take this into account. One option is to assign a fixed label length to all nodes; then there is no need to store the size of labels. However, fixed-length label storage schemes are subject to overflow during updates, which requires relabeling all existing nodes [30]. Alternatively, a variable-length labeling scheme can be used, which requires the size (length) of each label to be stored in addition to the label itself. Using a fixed number of bits to store the variable size of labels is also inadequate: the original fixed number of bits will eventually be too small, requiring all existing nodes to be relabeled, or significant storage space is wasted if a very large fixed number of bits is used [42]. In other words, not only should labels have variable sizes, but the length field of the labels should also be stored, and identified, using a variable-length scheme. Thus, we store each label's length using the encoding method called FibLSS [43] along with the label itself, just before it.
FibLSS encoding
The FibLSS method uses the Fibonacci number sequence, in which each term is the sum of the previous two terms:

$$F_{n} = F_{n - 1} + F_{n - 2} \quad \text{for } n \ge 2, \text{ where } F_{0} = 0 \text{ and } F_{1} = 1$$
Note that, every positive integer n has a unique representation as the sum of one or more distinct non-consecutive Fibonacci numbers called Zeckendorf representation. For each positive integer n, there is a positive integer N such that:
$$n = \mathop \sum \limits_{k = 0}^{N} \epsilon_{k} F_{k} \quad \text{where } \epsilon_{k} \in \{0, 1\} \text{ and } \epsilon_{k} \cdot \epsilon_{k + 1} = 0$$
The Zeckendorf representation of a positive integer is unique because no two consecutive Fibonacci terms occur in it. FibLSS exploits this property to build the Fibonacci label encoding scheme.
Given a binary encoded bit-string label Nnew = 110101, the length of Nnew is firstly determined, i.e. 6 bits. The Zeckendorf representation of 6 is 5 + 1 = 1 × 1 + 0 × 2 + 0 × 3 + 1 × 5. So, the binary string is "1001". Since the last bit in the Fibonacci coded binary string of a Zeckendorf representation is always "1", an extra "1" bit is appended to the end of the bit string to act as a delimiter, separating the length of a node label from the label itself. Thus, the binary string for the length is now "10011". The length of the label is encoded and stored before the label itself. Hence, the label Nnew (110101) is encoded and stored using the Fibonacci label storage scheme as 10011110101.
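The encoding just described can be sketched as follows, assuming the greedy Zeckendorf decomposition; the function names are ours:

```python
def fib_encode_length(n):
    """Fibonacci-encode a positive integer n (a label's bit length):
    Zeckendorf code word over the terms 1, 2, 3, 5, 8, ... plus a trailing
    '1' delimiter, as in FibLSS."""
    fibs = [1, 2]                                # coding terms F_2, F_3, ...
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = ["0"] * len(fibs)
    remainder = n
    for i in range(len(fibs) - 1, -1, -1):       # greedy decomposition
        if fibs[i] <= remainder:
            bits[i] = "1"
            remainder -= fibs[i]
    last_one = max(i for i, b in enumerate(bits) if b == "1")
    return "".join(bits[: last_one + 1]) + "1"   # appended '1' acts as delimiter

def fiblss_store(label):
    """Store a binary label prefixed by its Fibonacci-encoded length."""
    return fib_encode_length(len(label)) + label
```

For the example above, `fiblss_store("110101")` produces `"10011110101"`.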
Label size analysis
To label the nodes at each level of the XML tree, the number of bits used for SelfIDs starts from 1. We can identify two nodes with one bit. Afterward, the number of bits increases by one and we can identify four nodes with two bits. Likewise, the number of nodes we can encode with n bits is 2^n. So, the total number of bits used to encode nodes with 1 up to n bits, denoted by B, is computed using the following equation:
$$B = \mathop \sum \limits_{i = 1}^{n} i \times 2^{i} = \left( {n - 1} \right) \times 2^{n + 1} + 2$$
Equation 1 can be proved by induction. Let N be the maximum number of nodes to be encoded. It can be obtained from the following relation:
$$N = 2^{1} + 2^{2} + 2^{3} + \cdots + 2^{n} = 2^{n + 1} - 2 \to n = \log_{2} (N + 2) - 1$$
By substituting Eq. 2 in Eq. 1 we will have:
$$\begin{aligned}B & = \left( {n - 1} \right) \times 2^{n + 1} + 2 = \left[ {\left( {\log_{2} (N + 2) - 2} \right) \times \left( {N + 2} \right)} \right] + 2 \\ & = N\log_{2} (N + 2) + 2\log_{2} (N + 2) - 2N - 2 \end{aligned}$$
Consequently, to assign unique identifiers to N nodes, each with an appropriate length, i.e. at least 1 bit and at most n bits, the total storage requirement of the labeling scheme is \(N\log_{2}(N + 2) + 2\log_{2}(N + 2) - 2N - 2\) bits. Note that this is the total storage needed in the worst case. For the average case, as the number of nodes to be encoded using the FibLSS encoding increases exponentially, the growth in the number of bits required to encode them is linear [43].
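The counting argument can be checked numerically; this is a small sanity script for Eqs. 1–3, not part of the scheme itself:

```python
import math

def total_bits_direct(n):
    """B = sum over i = 1..n of i * 2^i, summed directly."""
    return sum(i * 2 ** i for i in range(1, n + 1))

def total_bits_closed(n):
    """Closed form of Eq. 1: (n - 1) * 2^(n + 1) + 2."""
    return (n - 1) * 2 ** (n + 1) + 2

def total_bits_in_terms_of_N(n):
    """Eq. 3, rewritten in terms of N = 2^(n + 1) - 2 encodable nodes."""
    N = 2 ** (n + 1) - 2
    return N * math.log2(N + 2) + 2 * math.log2(N + 2) - 2 * N - 2
```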
Algorithm 1 provides a summarized illustration of how labels are assigned to the nodes based on the FibLSS encoding scheme.
Updating the document
An important issue in labeling XML documents is the "updating process". Particularly, the insertion of new nodes to the XML tree should not influence the label of the other nodes. Our proposed method avoids relabeling due to the technique used for labeling nodes when new nodes are inserted.
Inserting new nodes occurs in three general cases: inserting a node before the leftmost node, inserting a node after the rightmost node, and inserting a node between any two nodes at any position. We review each of these cases and explain how the proposed method handles it.
Case 1: Insert a node before the leftmost node
For this case, the label of the inserted node is the label of the leftmost node concatenated with one more "0" bit as its UpID. For example, in Fig. 2 the leftmost node in the second level has the label "2,1,0". Thus, the label of the inserted node A becomes "2,1,0.0". Then, node B is inserted and its label will be "2,1,0.00".
Insert a node before the leftmost node
Case 2: Insert a node after the rightmost node
The label of the inserted node is the label of the rightmost node concatenated with one more "1" bit as its UpID. For example, in Fig. 3 the rightmost node has the label "3,10,1". Therefore, the label of the inserted node C will be "3,10,1.1". Then, node D is inserted and its label will be "3,10,1.11".
Insert a node after the rightmost node
If E and F (Fig. 4) are inserted in the XML tree as the children of node C, their labels will become "4,1.1,0" and "4,1.1,1", respectively.
Insert child of nodes with UpID
Case 3: Insert a node between any two nodes at any position
In such cases, the label sizes of the two neighboring sibling nodes are compared. If the size of the left sibling's label is less than or equal to that of the right sibling, the label of the inserted node is the label of the right sibling concatenated with one more "0" bit in its UpID; this indicates that the new node has been inserted between these left and right siblings. Otherwise, the label of the inserted node is the label of the left sibling concatenated with one more "1" bit in its UpID.
Due to the way labels are assigned to the nodes, the size of identifiers increases from left to right; that is, the length of each node's SelfID is no greater than that of its right neighbors. Hence, the label of a left node is greater in size than its right neighbor's only if the UpID of the left node's label is not null.
For example, in Fig. 5 node G is inserted between the nodes with labels "2,00,00" and "2,00,01". The left sibling node has the SelfID "00" and the right sibling node has the SelfID "01". The lengths of the two labels are equal and the right node has no UpID. Hence, the label of G is the SelfID of the right node concatenated with a "0" bit as its UpID, i.e. "2,00,01.0". Then, node H is inserted into the tree. The labels of the left and the right neighbors of node H are "2,00,00" and "2,00,01.0", respectively. The label of node H is formed by appending a "0" bit to the right node's UpID ("2,00,01.00"), because the size of the left node's label is less than that of the right node. Also, node I is inserted between the nodes with respective labels "2,00,01.0" and "2,00,01". The SelfIDs of these nodes are "01.0" and "01". Since the length of the left node's label is greater than that of the right node, we concatenate the label of the left node with one more "1" bit to obtain the label of the inserted node, I, which is "2,00,01.01".
Insert nodes at any position
Algorithm 2 summarizes the node insertion process discussed above.
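The three insertion cases can be sketched on the SelfID.UpID part of the labels; `None` stands for a missing neighbor at either end of the sibling list, and the helper is ours, not the paper's implementation:

```python
def insert_label(left, right):
    """SelfID[.UpID] string for a node inserted between siblings `left` and
    `right` ('SelfID' or 'SelfID.UpID' strings; None at the list ends)."""
    def split(label):
        selfid, _, upid = label.partition(".")
        return selfid, upid

    if left is None:                       # case 1: before the leftmost node
        selfid, upid = split(right)
        return selfid + "." + upid + "0"
    if right is None:                      # case 2: after the rightmost node
        selfid, upid = split(left)
        return selfid + "." + upid + "1"
    if len(left) <= len(right):            # case 3a: extend the right label with '0'
        selfid, upid = split(right)
        return selfid + "." + upid + "0"
    selfid, upid = split(left)             # case 3b: extend the left label with '1'
    return selfid + "." + upid + "1"
```

Replaying Fig. 5: `insert_label("00", "01")` gives `"01.0"` (node G), `insert_label("00", "01.0")` gives `"01.00"` (node H), and `insert_label("01.0", "01")` gives `"01.01"` (node I).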
Order of labels
Primarily, the ordering between nodes is determined according to the size and the binary value of the labels. Here we consider only the SelfID for simplicity and omit the Level and the ParentID, which are clear for a given context node. We label the nodes of a given level from left to right using appropriate label sizes, starting from 1 bit up to whatever is needed.
After inserting a node and updating the XML tree, we add the UpID part to the inserted node's label, just after its SelfID. Therefore, for insert operations, only the UpID part of the inserted nodes changes. The following rules determine the order of two nodes A and B in a given context:
Without considering UpIDs, if SelfID of A is less than B's SelfID then A is in the left of B, e.g. 1 < 00.001 and 1 < 1010 and 10 < 11.
Without considering UpIDs, if SelfID of A is equal to B's SelfID then we should compare their UpIDs. If UpID of A is less than B's UpID then A is in the left of B. Two UpIDs are compared according to the following rules:
If the UpID of a label is not null and starts with 0, then the label is smaller than the otherwise identical label whose UpID is null; if the UpID starts with 1, then the label is greater than the one with no UpID.
The lexicographical comparison of UpIDs is used if the UpIDs have the same size.
If the length of A's UpID is less than the length of B's UpID and A's UpID is a prefix of B's, then B's UpID is greater than A's if the first bit of B's UpID after the common prefix is 1; otherwise A's UpID is the greater one.

If the length of A's UpID is greater than B's and B's UpID is a prefix of A's UpID, then A's UpID is greater than B's if the first bit of A's UpID after the common prefix is 1; otherwise B's UpID is the greater one.
For example: 0.000 < 0.00 < 0.001 < 0.0 < 0.01 < 0 < 1.00 < 1.0 < 1.01 < 1 < 00.0 < 00.01 < 00 < 01.
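The rules above can be condensed into a single comparator. The sentinel trick of appending '1' to each UpID before a lexicographic comparison is our shorthand for the null- and prefix-UpID cases, not the paper's formulation; it reproduces the ordering of the example:

```python
def precedes(a, b):
    """True if label a comes before label b in document order.
    Labels are 'SelfID' or 'SelfID.UpID' strings at one level."""
    sa, _, ua = a.partition(".")
    sb, _, ub = b.partition(".")
    if sa != sb:                       # shorter SelfID first, then lexicographic
        return (len(sa), sa) < (len(sb), sb)
    # a '1' sentinel makes null and prefix UpIDs compare per the rules above
    return ua + "1" < ub + "1"
```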
Determining structural relationships
To have an efficient XML query processing, the labeling scheme should determine the structural relationships. The proposed labeling scheme can determine P–C, A–D and sibling relationships among arbitrary nodes.
P–C relationship
In an XML tree, to specify the P–C relationship between node A with label <LevelA, ParentIDA, SelfIDA> and node B with label <LevelB, ParentIDB, SelfIDB>, it suffices to compare the ParentID of one node with the SelfID of the other. That is, if ParentIDA = SelfIDB and LevelA = LevelB + 1, then A is the child of B. For example, node A with label "2,00,01.001" is the child of node B with label "1,0,00" because ParentIDA = SelfIDB = 00 and LevelA = 2 = LevelB + 1 = 1 + 1.
A–D relationship
We can determine the A–D relationship similarly to the P–C relationship, using a recursive function that is repeated |LevelA − LevelB| times.
Sibling relationship
In an XML tree, node A with label <LevelA, ParentIDA, SelfIDA> and node B with label <LevelB, ParentIDB, SelfIDB> must be at the same level and share the same parent to have a sibling relationship. That is, ParentIDA = ParentIDB and LevelA = LevelB. For example, node A with label "2,00,01.011" is the sibling of node B with label "2,00,01.001" since ParentIDA = ParentIDB = 00 and LevelA = LevelB = 2.
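The three checks can be sketched over `(Level, ParentID, SelfID)` tuples. The `label_of` lookup used for A–D is an assumed index from (level, SelfID) to a node's full label; it is not part of the labels themselves:

```python
def is_child(a, b):
    """True if node a is the child of node b."""
    return a[1] == b[2] and a[0] == b[0] + 1

def is_sibling(a, b):
    """True if distinct nodes a and b share level and parent."""
    return a != b and a[0] == b[0] and a[1] == b[1]

def is_descendant(a, b, label_of):
    """True if a is a descendant of b, walking up |LevelA - LevelB| levels
    via `label_of(level, self_id)` (an assumed lookup, see lead-in)."""
    node = a
    while node[0] > b[0]:
        node = label_of(node[0] - 1, node[1])   # fetch the parent's label
    return node == b
```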
To evaluate the proposed method for labeling XML documents, we compare it with two leading methods, namely IBSL and P–Containment [5]. The P–Containment method has difficulty with relabeling when inserting nodes into the XML tree. We solve this problem by applying our method to it: we implement a mapping function to convert the integers allocated to the Start, End, and Parent_Start parameters into binary bit strings, and we call the result NP–Containment.
There are four sets of tests in this performance evaluation: the first set compares the storage requirement of three schemes. The second set analyzes labeling time. The third set examines the query performance and the last set investigates update performance.
Experimental setting
We conduct the performance evaluations on a Pentium(R) Dual-Core CPU E5300 @2.60 GHz with 4.00 GB of RAM running Windows 7 Professional. We implement all schemes using Visual C#.NET 2008. To avoid discrepancy, we run each performance test 6 times; we ignore the first run and take the average of the remaining values.
Characteristics of the datasets
The datasets used in the performance evaluations are "Lineitem", "Treebank_e", and "Orders" that are accessible online on the internet [44]. The XMark datasets were generated using xmlgen of the XMark: Benchmark Standard for XML Database Management [45] with factors 0.02 and 0.3. Table 1 presents the characteristics of the datasets.
Table 1 Characteristics of the datasets
Storage requirement
In this section, we evaluate the storage space required for storing the labels generated by each of the labeling schemes. We save labels in a file in the formats shown in Table 2.
Table 2 Label format of schemes
Figure 6 shows the total amount of storage required for labels to be generated by each of the three methods when executed on the datasets.
In the experiments, no compression is used. The storage requirement of IBSL is more than that of the other methods. This is expected, as IBSL is a prefix-based scheme and its label size depends on the number of children of every node, i.e. the fan-out of the XML documents. Regarding Table 1, the maximum fan-out of the Orders dataset is 15,000. Therefore, IBSL uses 15,001 bits (one "0" bit and 15,000 "1" bits) to signify the last child of a given node. Moreover, this scheme is based on a fixed-length approach, i.e. the children of a given node have the same label size. In contrast, New needs at most 17 bits for this case, since the children of a node do not all need the same label size.
Figure 7 presents the maximum length of labels generated by the three competing methods.
Largest size of labels
The label's length is the number of bits assigned to SelfID for the IBSL and New schemes, and the sum of the Start and End lengths for the NP–Containment scheme. The New scheme stores the length field of SelfID as well as SelfID itself. Nevertheless, the size of the labels generated by New is still much smaller than that of IBSL, and it outperforms NP–Containment.
Overall, regarding all the datasets, the New method has the least storage requirement, and its average label size is 19% and 63.53% of IBSL's and NP–Containment's label sizes, respectively. The maximum label size of the New scheme is 0.04% of IBSL and 33.88% of NP–Containment, on average for all the datasets in our experiments.
Labeling time
We study the time required to label a given XML document. Figure 8 shows the time required for labeling. It is the average labeling time taken from five tests we performed on each dataset. The labels are generated by applying depth-first strategy by all the three labeling schemes.
Figure 8 shows that, for all five datasets, the New scheme is faster than IBSL, especially for bigger datasets. Also, the labeling time of the NP–Containment scheme (that is, P–Containment combined with the New labeling scheme) is as good as New's time. However, notice that the P–Containment method has to relabel all nodes when new nodes are inserted. IBSL manipulates longer node labels, as it stores the label of the parent (hence the labels of the ancestors) of a node in addition to its SelfID. Therefore, this method requires much more time for the labeling process. Considering all the datasets, the New method generates the required labels almost 12.74 times faster than IBSL.
Query response time
For this part of the performance evaluation, we investigate the time needed to process queries. Labels are stored in the tables of a relational database. To optimize queries and increase the efficiency of the update operations, we use two tables to store information about the datasets. The first table includes the name of each node, its text, and its label. The second table stores attribute names and their values, with a foreign key referencing the primary key of the first table.
We store the labels of New in three fields: NodeLevel, ParentID, and SelfID. The primary key of the first table is SelfID along with NodeLevel. For NP–Containment, the relational table contains the name of every node, its text, and the Start, End, and Parent_Start values; at update time, we use New to generate the labels of inserted nodes. For the IBSL scheme, the relational table includes the name of each node, its text, and the values of SelfID and ParentLabel.
We have implemented all schemes using the depth-first traversal method and applied this evaluation to the XMark (0.02) database. The table for the elements includes 33,140 records, and the attribute table includes 7384 records.
Table 3 shows the studied queries. The "number of retrieved nodes" is the count of records retrieved as the query response.
Table 3 Evaluated queries
Figure 9 presents the results. For the New method, the time needed for queries Q1 to Q5 is 80% of NP–Containment's time on average, but for query Q6, NP–Containment is 1.3 times faster.
Queries response times
To answer query Q6, the New method must check one more condition than NP–Containment, i.e. the comparison of levels to determine the P–C relation. This additional operation explains why the NP–Containment scheme operates faster than the New method.
In query Q6, which retrieves all descendants of the "site" node, IBSL performs better than the other two methods because it is a full prefix-based scheme and can quickly determine A–D relationships. Hence, IBSL is 40.9 times faster than New. New and NP–Containment use a recursive algorithm to determine the A–D relationship, so they need more time to process such queries. For queries Q1 to Q5, New is 15.9 and 1.3 times faster than IBSL and NP–Containment respectively, on average.
Updating time
In this part of the performance evaluation, we test the time needed to insert new nodes in various positions of the XML tree. Figure 10 illustrates the results.
For this test, we insert 5 nodes at the leftmost place of the first level. Figure 10 shows the average time needed for these insertions as "Left_ins". Then we insert 5 nodes at the rightmost place, with the average time reported as "Right_ins". After that, we add 5 nodes between the sixth and the seventh nodes at this level, with the average time denoted by "Middle_ins".
According to the results, our method needs less time than the others. Since we apply New for labeling in NP–Containment, it runs faster than IBSL for insertion. New is 2.8 and 1.4 times faster than IBSL and NP–Containment respectively, on average.
In this paper, we provided an overview of the existing state-of-the-art methods for labeling dynamic XML data that have been investigated over the past years. For dynamic XML, the existing methods do not fulfill all the performance requirements at the same time. An XML labeling method should support the structural relationships among nodes, avoid relabeling any existing node, and keep the order of nodes when new nodes are inserted into the XML tree. Moreover, scalability and efficiency are two essential requirements to be fulfilled by a labeling method. We proposed a novel labeling method that exploits the FibLSS encoding for labeling XML documents. We discussed the efficiency of the proposed scheme and tested it with respect to label size, labeling time, querying time and node insertion time. We compared it with IBSL and P–Containment, which are among the leading labeling schemes in the literature. Our method is scalable, as it avoids relabeling and its label size grows linearly. Moreover, it supports structural relationships while keeping the order of nodes. Our experimental evaluation demonstrated that the proposed method outperforms existing methods in terms of several essential requirements, especially the computational processing and storage costs, while preserving the order of and relationships among nodes without any relabeling.
The proposed scheme is suitable for both static and dynamic documents. We will apply this scheme to different labeling approaches, especially "range-based" methods, which have small storage size and high efficiency. We will also conduct more comparative studies on the efficiency of our method for a wider range of XML queries. Besides, further investigation is required to supply more compact representations of the labels.
Following sibling and preceding sibling.
Fu L, Meng X. Triple code: an efficient labeling scheme for query answering in XML data. In: Web information system and application conference (WISA). 2013. p. 42–7.
Haw S-C, Lee C-S. Data storage practices and query processing in XML databases: a survey. Knowl Based Syst. 2011;24(8):1317–40.
Amagasa T, Yoshikawa M, Uemura S. QRS: a robust numbering scheme for XML documents. In: Proceedings 19th international conference on data engineering. 2003. p. 705–7.
Dietz PF. Maintaining order in a linked list. In: Proceedings of the fourteenth annual ACM symposium on theory of computing. New York: ACM; 1982. p. 122–7.
Li C, Ling TW, Lu J, Yu T. On reducing redundancy and improving efficiency of XML labeling schemes. In: Proceedings of the 14th ACM international conference on information and knowledge management. New York: ACM. 2005, p. 225–6.
Li Q, Moon B. Indexing and querying XML data for regular path expressions. VLDB. 2001;1:361–70.
Min J-K, Lee J, Chung C-W. An efficient encoding and labeling for dynamic xml data. In: International conference on database systems for advanced applications. Berlin: Springer; 2007, p. 715–26.
Thonangi R. A concise labeling scheme for XML data. In: COMAD. 2006. p. 4–14.
Yun J-H, Chung C-W. Dynamic interval-based labeling scheme for efficient XML query and update processing. J Syst Softw. 2008;81(1):56–70.
Zhang C, Naughton J, DeWitt D, Luo Q, Lohman G. On supporting containment queries in relational database management systems. In: ACM SIGMOD Record, vol 30. New York: ACM; 2001. p. 425–36.
\begin{document}
\title{Post-Selection Inference for Changepoint Detection Algorithms
with Application to Copy Number Variation Data} \author{SANGWON HYUN$^\ast$, KEVIN LIN, MAX G'SELL, RYAN J. TIBSHIRANI\\[4pt]
\textit{ Department of Statistics, Carnegie Mellon University, 132 Baker Hall, Pittsburgh, PA 15213. } \\[2pt]
}
\markboth
{S. Hyun and others}
{Post-selection inference for changepoint detection}
\maketitle
\footnotetext{To whom correspondence should be addressed: \href{mailto:[email protected]}{[email protected]}.}
\begin{abstract} {Changepoint detection methods are used in many areas of science and engineering, e.g., in the analysis of copy number variation data, to detect abnormalities in copy numbers along the genome. Despite the broad array of available tools, methodology for quantifying our uncertainty in the strength (or presence) of given changepoints, {\it post-detection}, is lacking. Post-selection inference offers a framework to fill this gap, but the most straightforward application of these methods results in low-powered tests and leaves open several important questions about practical usability. In this work, we carefully tailor post-selection inference methods towards changepoint detection, focusing as our main scientific application on copy number variation data. As for changepoint algorithms, we study binary segmentation, two of its most popular variants (wild and circular), and the fused lasso. We implement some of the latest developments in post-selection inference theory: we use auxiliary randomization to improve power, which requires implementations of MCMC algorithms (importance sampling and hit-and-run sampling) to carry out our tests. We also provide recommendations for improving practical usability, detailed simulations, and an example analysis on array comparative genomic hybridization (CGH) data.} {CGH analysis; changepoint detection; copy number variation; hypothesis tests; post-selection inference; segmentation algorithms} \end{abstract}
\section{Introduction} \label{sec:introduction}
Changepoint detection is the problem of identifying changes in data distribution along a sequence of observations. We study the canonical changepoint problem, where changes occur only in the mean: let vector $Y=(Y_1,\ldots,Y_n) \in \mathbb{R}^n$ be a data vector with independent entries following \begin{equation} \label{eq:data-model} Y_i \sim \mathcal{N}(\theta_i, \sigma^2), \quad i=1,\ldots,n, \end{equation} where the unknown mean vector $\theta \in \mathbb{R}^n$ forms a piecewise constant sequence. That is, for locations $1 \leq b_1 < \cdots < b_t \leq n-1$, \[ \theta_{b_j+1} = \ldots = \theta_{b_{j+1}}, \quad j=0,\ldots,t. \] where for convenience we write $b_0=0$ and $b_{t+1}=n$. We call $b_1, \ldots, b_t$ {\it changepoint} locations of $\theta$. Changepoint detection algorithms typically focus on estimating the number of changepoints $t$ (which could possibly be 0), as well as the locations $b_1, \ldots, b_t$, from a single realization $Y$. Roughly speaking, changepoint methodology (and its associated literature) can be divided into two classes of algorithms: {\it
segmentation} algorithms and {\it penalization} algorithms. The former class includes {\it binary segmentation} (BS) \citep{vostrikova1981detecting} and popular variants like {\it wild binary segmentation} (WBS) \citep{fryzlewicz2014wild} and {\it circular binary segmentation} (CBS) \citep{olshen2004circular}; the latter class includes the {\it fused lasso} (FL) \citep{tibshirani2005sparsity} (also called {\it total variation denoising} \citep{rudin1992nonlinear} in signal processing), and the {\it Potts estimator} \citep{boysen2009consistencies}. These two classes have different strengths; see, e.g., \citet{lin2016approximate} for more discussion.
Having estimated changepoint locations, a natural follow-up goal would be to conduct statistical inference on the significance of the changes in mean at these locations. Despite the large number of segmentation algorithms and penalization algorithms available for changepoint detection, there has been very little focus on formally valid inferential tools to use {\it post-detection}. In this work, we describe a suite of inference tools to use after a changepoint algorithm has been applied---namely, BS, WBS, CBS, or FL. We work in the framework of {\it post-selection inference}, also called {\it
selective inference}. The specific machinery that we build off was first introduced in \citet{lee2016exact,tibshirani2016exact}, and further developed in various works, notably \citet{fithian2014optimal,fithian2015selective,tian2018selective}, whose extensions we rely on in particular. The basic inference procedure we develop can be outlined as follows.
\begin{enumerate}
\item Given data $Y$, apply a changepoint algorithm to detect some fixed number
of changepoints $k$. Denote the sorted estimated changepoint locations by
\begin{equation}
\label{eq:estimated-changepoints}
1 \leq \hat{c}_1 < \cdots < \hat{c}_k \leq n-1,
\end{equation}
and their respective changepoint directions (whether the estimated change in
mean was positive or negative) by \smash{$\hat{d}_1, \ldots,\hat{d}_k \in
\{-1,1\}$}. For notational convenience, we set \smash{$\hat{c}_0= 0$} and
\smash{$\hat{c}_{k+1} = n$}. The specifics of the changepoint algorithms that
we consider are given in \Fref{sec:algorithms}.
\item Form contrast vectors $v_1,\ldots, v_k \in \mathbb{R}^n$, defined so that for
arbitrary $y \in \mathbb{R}^n$,
\begin{equation}
\label{eq:segment-contrast}
v_j^T y = \hat{d}_j \bigg( \frac{1}{\hat{c}_{j+1}-\hat{c}_j}
\Big(\sum_{i=\hat{c}_j+1}^{\hat{c}_{j+1}} y_i \Big)-
\frac{1}{\hat{c}_j-\hat{c}_{j-1}}
\Big(\sum_{i=\hat{c}_{j-1}+1}^{\hat{c}_j} y_i \Big)\bigg),
\end{equation}
the difference between the sample means of segments to right and left of
\smash{$\hat{c}_j$}, for $j=1,\ldots,k$.
\item For each $j = 1,\ldots, k$, we test the hypothesis $H_0: v_j^T \theta=0$ by
rejecting for large values of a statistic $T(Y, v_j)$, which is computed
based on knowledge of the changepoint algorithm that produced
\eqref{eq:estimated-changepoints} in Step 1, and the desired contrast vector
\eqref{eq:segment-contrast} formed in Step 2. Each statistic
yields an exact p-value under the null (assuming Gaussian errors
\eqref{eq:data-model}). The details are given in Sections
\ref{sec:post-selection} and \ref{sec:inference-ours}.
\item Optionally, we can use Bonferroni correction and multiply our p-values by
$k$, to account for multiplicity. \end{enumerate}
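To make Step 2 concrete, the following is a minimal sketch (our own illustration, with hypothetical function names and 1-based index conventions matching the notation above) of the contrast vector in \eqref{eq:segment-contrast}:

```python
import numpy as np

def segment_contrast(n, changepoints, j, direction=1):
    # Contrast v_j of \eqref{eq:segment-contrast} for the j-th of the sorted
    # changepoints c_1 < ... < c_k (1-indexed), padded with c_0 = 0, c_{k+1} = n.
    c = [0] + sorted(changepoints) + [n]
    v = np.zeros(n)
    v[c[j]:c[j + 1]] = direction / (c[j + 1] - c[j])     # mean of right segment
    v[c[j - 1]:c[j]] = -direction / (c[j] - c[j - 1])    # minus mean of left segment
    return v

# v_1^T y is the difference in sample means across the changepoint at location 3:
y = np.array([1., 1, 1, 5, 5, 5])
v = segment_contrast(6, [3], j=1)
```

Here `v @ y` equals $5 - 1 = 4$, the difference between the two segment sample means, and the entries of $v$ sum to zero.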
It is worth mentioning that several variants of this basic procedure are possible. For example, the number of changepoints $k$ in Step 1 need not be seen as fixed and may be itself estimated from data; the set of estimated changepoints \eqref{eq:estimated-changepoints} may be pruned after Step 1 to eliminate changepoints that lie too close to others, and alternative contrast vectors to \eqref{eq:segment-contrast} in Step 2 may be used to measure more localized mean changes; these are all briefly described in \Fref{sec:practicalities}. Though not covered in our paper, the p-values from our tests can be inverted to form confidence intervals for population contrasts $v_j^T \theta$ for $j = 1,\ldots, k$
\citep{lee2016exact,tibshirani2016exact}.
At a more comprehensive level, our contributions in this work are to implement theoretically valid inference tools and practical guidance for each combination of the following choices that a typical user might face in a changepoint analysis: the algorithm (BS, WBS, CBS, or FL), number of estimated changepoints $k$ (fixed or data-driven), the null hypothesis model (saturated or selected model, to be explained in \Fref{sec:post-selection}), what type of conditioning (plain or marginalized, to be explained in \Fref{sec:randomization}), and the error variance $\sigma^2$ (known or unknown).
In \Fref{sec:practicalities}, we summarize the tradeoffs underlying each of these choices.
Finally, as the primary application of our inference tools, we study comparative genomic hybridization (CGH) data, making particular suggestions geared towards this problem throughout the paper. We begin with a motivating CGH data example in the next subsection, and return to it at the end of the paper.
\subsection{Motivating example: array CGH data analysis}
We examine array CGH data from the 14th chromosome of cell line GM01750, one of the 15 datasets from \citet{Snijders2001}; more background can be found in \citet{lai2005comparative} and references therein. Array CGH data are $\log_2$ ratios of dye intensities of diseased to healthy subjects' measurements, mixed across many samples. Normal regions of the genome are thought to have an underlying mean $\log_2$ ratio of zero, and aberrations are regions of upward or downward departures from zero, because the DNA in that region has been mutated -- duplicated or deleted. The presence and locations of aberrations are well documented in the biomedical literature to be associated with a wide range of genetically driven diseases, such as many types of cancer, Alzheimer's disease, and autism \citep{fanciulli2007fcgr3b, sebat2007strong, international2008rare, stefansson2008large, walters2010new, bochukova2010large}. Accurate changepoint analysis of array CGH data is thus useful in studying associations with such diseases, and for medical diagnosis.
The data are plotted in the left panel of \Fref{fig:intro}. Two locations \smash{$\hat{c}_1 < \hat{c}_2$}, marked A and B respectively, were detected by running 2-step WBS. Ground truth in this data set can be defined via an external process called karyotyping; this is done by \citet{Snijders2001}, who find only one true changepoint, at location A. (To be precise, they do not report exact locations of abnormalities, but find a single start-to-middle deviation from the zero level.)
Without access to any post-selection inference tools, we might treat locations A and B as fixed, and simply run t-tests for equality of means of neighboring data segments, to the left and right of each location. This is precisely testing the null hypothesis $H_0 : v_j^T\theta=0$, $j = 1,2$, where the contrast vectors are as defined in \eqref{eq:segment-contrast}. P-values from the t-tests are reported in the first row of the table in \Fref{fig:intro}: we see that location A has a p-value of $< 10^{-5}$, but location B also has a small p-value of $5 \times 10^{-4}$, which is troublesome. The problem is that location B was specifically selected by WBS because (loosely put) the sample means to left and right of B are well separated, thus a t-test at location B is bound to be optimistic.
Using the tools we describe shortly, we test $H_0 :v_j^T \theta=0$, $j = 1,2$ in two ways: using a {\it saturated model} and a {\it selected model} on the mean vector $\theta$. The saturated model assumes nothing about $\theta$, while the selected model assumes $\theta$ is constant on each of the segments formed by A and B. Both tests yield a p-value $< 10^{-5}$ at location A, but only a moderately small p-value at location B. If we were to use the Bonferroni correction at a nominal significance level $\alpha=0.05$, then in neither case would we reject the null at location B.
\begin{figure}
\caption{\it\small Left: array CGH data from the 14th chromosome of
fibroblast cell line GM01750, from \citet{Snijders2001}. The x-axis denotes
the relative index of the genome position, and the y-axis denotes the log
ratio in fluorescence intensities of the test and reference samples. The
dotted horizontal line denotes a log ratio of 0 for reference. The bold
vertical lines denote the locations A and B from running WBS for 2 steps. Right: the
p-values using classical (naive) t-tests, saturated model tests, and
selected model tests, at each location A and B. The ground truth is also
given, as determined by karyotyping. The saturated model test used an estimated
noise level $\sigma^2$ from the entire 23-chromosome data set. The selected
model test was performed in the unknown $\sigma^2$ setting.}
\label{fig:intro}
\end{figure}
\subsection{Related work}
In addition to the references on general post-selection inference methodology given previously, we highlight the recent work of \citet{hyun2018exact}, who study post-selection inference for the generalized lasso, a special case of which is the fused lasso. These authors already characterize the polyhedral form of fused lasso selection events, and study inference using contrasts as in \eqref{eq:segment-contrast}. While writing the current paper, we became aware of the independent contributions of \citet{umezu2017selective}, who study multi-dimensional changepoint sequences, but focus on problems in which the mean $\theta$ has only one changepoint. Aside from these papers, there is little focus on valid inference methods to apply post-detection in changepoint analysis. On the other hand, there is a huge literature on changepoint estimation, and inference for {\it fixed} hypotheses in changepoint problems; we refer to \citet{jandhyala2013,aueHorvath2013,horvath2014}, which collectively summarize a good deal of the literature.
\section{Preliminaries}
\subsection{Review: changepoint algorithms} \label{sec:algorithms}
Below we describe the changepoint algorithms that we will study in this paper. For the first three segmentation algorithms, we will focus on formulations that run the algorithm for a given number of steps $k$; these algorithms are typically described in the literature as being run until internally calculated statistics do not exceed a given threshold level $\tau$. The reason that we choose the former formulation is twofold: first, we feel it is easier for a user to specify a priori a reasonable number of steps $k$, versus a threshold level $\tau$; second, we can use the method in \citet{hyun2018exact} to adaptively choose the number of steps $k$ and still perform valid inferences. In what follows, we use the notation \smash{$y_{a:b}=(y_a, y_{a+1}, \ldots, y_b)$} and
\smash{$\bar{y}_{a:b} = (b-a+1)^{-1} \sum_{i=a}^b y_i$} for a vector $y$.
\paragraph{\textbf{Binary segmentation (BS).}} Given a data vector $y \in \mathbb{R}^n$, the $k$-step BS algorithm \citep{vostrikova1981detecting} sequentially splits the data based on the cumulative sum (CUSUM) statistics, defined below. At a step $\ell = 1,\ldots,k$, let \smash{$\hat{b}_{1:(\ell-1)}$} be the changepoints estimated so far, and let $I_j$, $j=1,\ldots,\ell$, be the partition of $\{1,\ldots,n\}$ induced by \smash{$\hat{b}_{1:(\ell-1)}$}. Intervals of length 1 are discarded. Let $s_j$ and $e_j$ be the start and end indices of $I_j$. The next changepoint \smash{$\hat{b}_\ell$} and maximizing interval \smash{$\hat{j}_\ell$} are chosen to maximize the absolute CUSUM statistic: \begin{gather}
\big\{\hat j_{\ell}, \hat b_{\ell}\big\} =
  \mathop{\mathrm{argmax}}_{\substack{j \in \{1, \ldots, \ell\} \\
b \in \{s_j, \ldots, e_j-1 \}}}
\big|g^T_{(s_j, b, e_j)} y\big|, \quad \text{where} \nonumber\\ \label{eq:bs-g-fun}
  g_{(s,b,e)}^Ty = \sqrt{\frac{1}{\frac{1}{|e-b|}+\frac{1}{|b+1-s| }}}\big(\bar y_{(b+1):e} - \bar y_{s:b}\big). \end{gather} Additionally, the direction \smash{$\hat{d}_\ell$} of the new changepoint is given by the sign of the maximizing CUSUM statistic, \smash{$\hat{d}_{\ell} = \mathrm{sign}(g_{(s_j, \hat b_{\ell}, e_j)}^Ty)$} for $j = \hat j_{\ell}$.
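As a concrete reference point, here is a minimal sketch of the $k$-step BS recursion (our own illustration, not the paper's implementation); indices are 1-based and inclusive, matching the notation above:

```python
import numpy as np

def cusum(y, s, b, e):
    # g_{(s,b,e)}^T y of \eqref{eq:bs-g-fun}: weighted difference between the
    # sample means of y_{(b+1):e} and y_{s:b} (1-based, inclusive indices).
    left, right = y[s - 1:b], y[b:e]
    return (right.mean() - left.mean()) / np.sqrt(1 / len(right) + 1 / len(left))

def binary_segmentation(y, k):
    # k-step BS: at each step, split the current interval with the largest |CUSUM|.
    n = len(y)
    changepoints, directions = [], []
    for _ in range(k):
        best = None
        bounds = sorted([0] + changepoints + [n])
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            s, e = lo + 1, hi            # current interval {s, ..., e}
            if e - s < 1:                # intervals of length 1 are discarded
                continue
            for b in range(s, e):
                g = cusum(y, s, b, e)
                if best is None or abs(g) > abs(best[0]):
                    best = (g, b)
        changepoints.append(best[1])
        directions.append(int(np.sign(best[0])))
    return changepoints, directions
```

For example, on `y = [0, 0, 0, 0, 10, 10, 10, 10]` one step returns the changepoint 4 with direction $+1$.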
\paragraph{\textbf{Wild binary segmentation (WBS).}} The $k$-step WBS algorithm \citep{fryzlewicz2014wild} is a modification of BS that calculates CUSUM statistics over randomly drawn segments of the data. Denote by $w = \{w_1, \ldots, w_B\} = \{(s_1, \ldots, e_1), \ldots, (s_B, \ldots, e_B)\}$
a set of $B$ uniformly randomly drawn intervals with $1 \leq s_i < e_i \leq n$, $i=1,\ldots,B$. At a step $\ell=1,\ldots,k$, let $J_\ell$ be the index set of the intervals in $w$ which do not contain any of the changepoints \smash{$\hat{b}_{1:(\ell-1)}$} estimated so far. The next changepoint \smash{$\hat{b}_{\ell}$} and the maximizing interval \smash{$\hat{j}_{\ell}$} are obtained by: \begin{equation*}
\big\{\hat j_{\ell}, \hat b_{\ell}\big\} =
\mathop{\mathrm{argmax}}_{\substack{j \in J_\ell \\
b \in \{s_j, \ldots, e_j-1\}}}
  \big| g^T_{(s_j, b, e_j)} y \big|, \end{equation*} where $g_{(s,b,e)}^T y$ is as defined in \eqref{eq:bs-g-fun}. As in BS, the direction \smash{$\hat{d}_\ell$} of the new changepoint is given by the sign of the maximizing CUSUM statistic.
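A single WBS step differs from BS only in the candidate set being maximized over. A hypothetical sketch of one step over a given interval set (interval drawing and the bookkeeping of $J_\ell$ across steps omitted):

```python
import numpy as np

def cusum(y, s, b, e):
    # Same statistic as \eqref{eq:bs-g-fun}; 1-based, inclusive indices.
    left, right = y[s - 1:b], y[b:e]
    return (right.mean() - left.mean()) / np.sqrt(1 / len(right) + 1 / len(left))

def wbs_step(y, intervals):
    # One WBS step: maximize |CUSUM| over every split b of every interval (s, e);
    # returns (changepoint, direction, index of the maximizing interval).
    best = None
    for j, (s, e) in enumerate(intervals):
        for b in range(s, e):
            g = cusum(y, s, b, e)
            if best is None or abs(g) > abs(best[0]):
                best = (g, b, j)
    return best[1], int(np.sign(best[0])), best[2]
```

Note that the maximizing interval index is returned as well, since (unlike BS) it is part of the WBS model.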
\paragraph{\textbf{Circular binary segmentation (CBS).}} The $k$-step CBS algorithm \citep{olshen2004circular} specializes in detecting {\it pairs} of changepoints that have alternating directions. At a step $\ell=1,\ldots,k$, let \smash{$\hat a_{1:(\ell-1)}$}, \smash{$\hat b_{1:(\ell-1)}$} be the changepoints estimated so far (with the pair $a_j$, $b_j$ estimated at step $j$), and let $I_j$, $j=1,\ldots,2(\ell-1)+1$ be the associated partition of $\{1,\ldots,n\}$. Intervals of length 2 are discarded. Let $s_j$ and $e_j$ denote the start and end index of $I_j$. The next changepoint pair \smash{$\hat a_{\ell}$} and \smash{$\hat b_{\ell}$}, and the maximizing interval \smash{$\hat j_{\ell}$}, are found by: \begin{gather}
\label{eq:cbs-opt-prob}
\big\{\hat j_{\ell}, \hat a_{\ell}, \hat b_{\ell}\big\} =
  \mathop{\mathrm{argmax}}_{\substack{ j \in \{1,\ldots,2(\ell-1)+1\} \\
a < b \in \{s_j, \ldots, e_j-1\} }}
\big| g^T_{(s_j, a, b, e_j)}y
\big| \quad \text{where} \\
\label{eq:cbs-g-fun}
g_{(s,a,b,e)}^Ty =
  \sqrt{\frac{1}{\frac{1}{|b-a|}+\frac{1}{|e-s+1-b+a|}}}
  \Big(\bar y_{(a+1):b} - \bar y_{\{s:a\}\cup\{(b+1):e\}}\Big). \end{gather} As before, the new changepoint direction \smash{$\hat d_{\ell}$} is given by the sign of the maximizing (modified) CUSUM statistic, \smash{$\hat d_{\ell} = \mathrm{sign}(g^T_{(s_j, \hat a_{\ell}, \hat b_{\ell}, e_j)}y)$} for \smash{$j = \hat j_{\ell}$}.
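A minimal sketch of one CBS step on a single interval (our own illustration; the weight in the modified CUSUM is written directly in terms of the sizes of the inner and outer segments):

```python
import numpy as np

def cbs_cusum(y, s, a, b, e):
    # Modified CUSUM of \eqref{eq:cbs-g-fun}: inner segment y_{(a+1):b} against
    # the rest of {s, ..., e} (1-based, inclusive indices).
    inner = y[a:b]
    outer = np.concatenate([y[s - 1:a], y[b:e]])
    return (inner.mean() - outer.mean()) / np.sqrt(1 / len(inner) + 1 / len(outer))

def cbs_step(y, s, e):
    # One CBS step on {s, ..., e}: maximize |CUSUM| over pairs a < b;
    # returns (a, b, direction).
    best = None
    for a in range(s, e):
        for b in range(a + 1, e):
            g = cbs_cusum(y, s, a, b, e)
            if best is None or abs(g) > abs(best[0]):
                best = (g, a, b)
    return best[1], best[2], int(np.sign(best[0]))
```

On `y = [0, 0, 5, 5, 0, 0]` this recovers the changepoint pair $(2, 4)$ with direction $+1$, illustrating the alternating-direction pairs that CBS targets.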
\paragraph{\textbf{Fused lasso.}} The fused lasso (FL) estimator \citep{rudin1992nonlinear,tibshirani2005sparsity} is defined by solving the convex optimization problem: \begin{equation}
\label{eq:fl}
\min_{\theta \in \mathbb{R}^n} \; \sum_{i=1}^n (y_i - \theta_i)^2 + \lambda
  \sum_{i=1}^{n-1} |\theta_i - \theta_{i+1}|, \end{equation} for a tuning parameter $\lambda \geq 0$. The fused lasso can be seen as a $k$-step algorithm by sweeping the tuning parameter from $\lambda=\infty$ down to $\lambda=0$: as $\lambda$ crosses certain critical values (called knots), the FL solution in \eqref{eq:fl} gains an additional changepoint \citep{hoefling2010path}.
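For reference, the FL criterion in \eqref{eq:fl} is easy to write down directly. The sketch below (our own; it only evaluates the objective and is not a solution-path solver such as that of \citet{hoefling2010path}) is useful for sanity-checking candidate solutions:

```python
import numpy as np

def fl_objective(theta, y, lam):
    # Squared loss plus lambda times the total variation of theta, as in \eqref{eq:fl}.
    return np.sum((y - theta) ** 2) + lam * np.sum(np.abs(np.diff(theta)))
```

For instance, at $\lambda=0$ the unpenalized fit $\theta=y$ attains objective value 0, while for large enough $\lambda$ the constant fit at the grand mean attains a smaller objective value than $\theta=y$.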
\subsection{Review: post-selection inference} \label{sec:post-selection}
We briefly review post-selection inference as developed in \citet{lee2016exact,tibshirani2016exact,fithian2014optimal}. For a more thorough and general treatment, we refer to these papers or to \citet{hyun2018exact}. Our description here will be cast towards changepoint problems. For clarity, we notationally distinguish between a random vector $Y$ distributed as in \eqref{eq:data-model}, and $y_\mathrm{obs}$, a single data vector we observe for changepoint analysis. When a changepoint algorithm---such as BS, WBS, CBS, or FL---is applied to the data $y_\mathrm{obs}$, it selects a particular changepoint model $M(y_\mathrm{obs})$. The specific forms of such models are described in \Fref{sec:polyhedra}; for now, loosely, we may think of $M(y_\mathrm{obs})$ as the estimated changepoint locations and directions made by the algorithm on the data at hand. Post-selection inference revolves around the selective distribution, i.e., the law of \begin{equation} \label{eq:selective-distribution}
  v^T Y \; | \; \Big(M(Y) = M(y_\mathrm{obs}),\; q(Y) = q(y_\mathrm{obs})\Big), \end{equation} under the null hypothesis $H_0: v^T \theta = 0$, for any $v$ that is a measurable function of $M(y_\mathrm{obs})$. Here $q(Y)$ is a vector of sufficient statistics for nuisance parameters that must be conditioned on in order to tractably compute inferences based on \eqref{eq:selective-distribution}. The explicit form of $q(Y)$ depends on the assumptions imposed on $\theta$ under the null model. Broadly, there are two classes of null models we may study: saturated and selected models \citep{fithian2014optimal}. Computationally, under either null model, it is important that the selection event $\{ y: M(y) = M(y_\mathrm{obs})\}$ be polyhedral. This is described in detail in Section \ref{sec:polyhedra}, where we show that it holds for BS, WBS, CBS, and FL.
\paragraph{\textbf{Saturated model.}} The {\it saturated model} assumes that $Y$ is distributed as in \eqref{eq:data-model} with known error variance $\sigma^2$, and assumes nothing about the mean vector $\theta$. We set $q(Y) = \Pi_v^\perp Y$, the projection of $Y$ onto the hyperplane orthogonal to $v$. The selective distribution becomes the law of \begin{equation} \label{eq:selective-distribution-saturated}
v^TY \; | \; \Big(M(Y) = M(y_\mathrm{obs}),\; \Pi_v^\perp Y = \Pi_v^\perp y_\mathrm{obs}\Big). \end{equation}
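When the selection event is a polyhedron $\{y : \Gamma y \geq 0\}$, the law \eqref{eq:selective-distribution-saturated} is that of a Gaussian truncated to an interval whose endpoints are computable from $\Gamma$, $v$, and $y_\mathrm{obs}$ \citep{lee2016exact}. A minimal sketch of the resulting one-sided saturated-model p-value (our own illustration, not the paper's full randomized procedure):

```python
import numpy as np
from math import erf, sqrt

def saturated_pvalue(y, Gamma, v, sigma):
    # One-sided p-value for H0: v^T theta = 0 under the saturated model,
    # conditioning on {Gamma y >= 0} and the projection orthogonal to v.
    vv = v @ v
    z = y - (v @ y) / vv * v                 # Pi_v^perp y, held fixed
    rho = (Gamma @ v) / vv                   # feasibility: (v^T y) * rho >= -(Gamma z)
    num = -(Gamma @ z)
    lo = max((num[i] / rho[i] for i in range(len(rho)) if rho[i] > 0), default=-np.inf)
    hi = min((num[i] / rho[i] for i in range(len(rho)) if rho[i] < 0), default=np.inf)
    sd = sigma * np.sqrt(vv)                 # sd of v^T Y under H0
    Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return (Phi(hi / sd) - Phi((v @ y) / sd)) / (Phi(hi / sd) - Phi(lo / sd))
```

For instance, with $n=2$, $\Gamma = (1, -1)$, $v = (1,-1)$, $\sigma = 1$, and $y_\mathrm{obs} = (2, 0)$, the truncation interval is $[0, \infty)$ and the p-value is about $0.157$, larger than the unadjusted p-value would be.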
\paragraph{\textbf{Selected model.}} The {\it selected model} again assumes that $Y$ follows \eqref{eq:data-model}, but additionally assumes that the mean vector $\theta$ is piecewise constant with changepoints at the sorted estimated locations \smash{$\hat c_{1:k}=\hat c_{1:k}(y_\mathrm{obs})$} (assuming we have run our changepoint algorithm for $k$ steps). That is, we assume \[ \theta_{\hat c_j+1} = \ldots = \theta_{\hat c_{j+1}}, \quad j\in\{0,\ldots,k\}. \] where for convenience we use \smash{$\hat c_0=0$} and \smash{$\hat c_{k+1}=n$}. Under this assumption, the law of $Y$ becomes a $(k+1)$-parameter Gaussian distribution. Additionally, with the contrast vector $v_j$ defined as in \eqref{eq:segment-contrast}, for any fixed $j=1,\ldots,k$, the quantity $v_j^T \theta$ of interest is simply the difference between two of the parameters in this distribution. Assuming $\sigma^2$ is known, the sufficient statistics $q(Y)$ for the nuisance parameters in the Gaussian family are simply sample averages of the appropriate data segments, and the selective distribution becomes the law of \begin{equation} \label{eq:selective-distribution-selected-known-sigma} \big( \bar{Y}_{(\hat c_j + 1) : \hat c_{j+1}} -
\bar{Y}_{(\hat c_{j-1}+1) : \hat c_j} \big) \; \big| \; \Big(M(Y) = M(y_\mathrm{obs}),\; \bar{Y}_{(\hat c_\ell + 1) : \hat c_{\ell+1}} = \big(\bar{y}_\mathrm{obs}\big)_{(\hat c_\ell + 1) : \hat c_{\ell+1}}, \; \ell \neq j\Big). \end{equation} Part of the strength of the selected model is that we can properly treat $\sigma^2$ as unknown; in this case, we must only additionally condition on the Euclidean norm of $y_\mathrm{obs}$ to cover this nuisance parameter, and the selective distribution becomes the law of \begin{multline} \label{eq:selective-distribution-selected-unknown-sigma} \big( \bar{Y}_{(\hat c_j + 1) : \hat c_{j+1}} -
\bar{Y}_{(\hat c_{j-1}+1) : \hat c_j} \big) \; \big| \; \Big(M(Y) = M(y_\mathrm{obs}),\; \bar{Y}_{(\hat c_\ell + 1) : \hat c_{\ell+1}} = \big(\bar{y}_\mathrm{obs}\big)_{(\hat c_\ell + 1) : \hat c_{\ell+1}}, \; \ell \neq j, \\
\|Y\|_2 = \|y_\mathrm{obs}\|_2\Big). \end{multline}
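The selective laws \eqref{eq:selective-distribution-selected-known-sigma} and \eqref{eq:selective-distribution-selected-unknown-sigma} are not amenable to closed-form computation, and our tests rely on MCMC. As a self-contained illustration of the basic move involved, here is a generic hit-and-run sampler for the uniform distribution on a bounded polytope $\{x : Ax \leq b\}$ (a sketch only; the actual samplers must additionally respect the conditioning constraints and the Gaussian weighting):

```python
import numpy as np

def hit_and_run(x0, A, b, n_steps, rng):
    # Hit-and-run on the bounded polytope {x : A x <= b}: draw a random
    # direction, compute the feasible chord, then move uniformly along it.
    x = np.array(x0, float)
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        Ad, slack = A @ d, b - A @ x     # need t * Ad <= slack along x + t * d
        lo = max((slack[i] / Ad[i] for i in range(len(b)) if Ad[i] < 0), default=-np.inf)
        hi = min((slack[i] / Ad[i] for i in range(len(b)) if Ad[i] > 0), default=np.inf)
        x = x + rng.uniform(lo, hi) * d
        samples.append(x.copy())
    return np.array(samples)
```

On the unit square, for example, the chain stays feasible at every step and its sample mean converges to the center.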
\section{Inference for changepoint algorithms} \label{sec:inference-ours}
We describe our contributions that enable post-selection inference for changepoint analyses, beginning with the form of model selection events for common changepoint algorithms. We then describe computational details for saturated and selected model tests, and auxiliary randomization.
\subsection{Polyhedral selection events} \label{sec:polyhedra}
We show that, for each of the BS, WBS, and CBS algorithms, there is a parametrization of their models such that the event $\{y : M(y) = M(y_\mathrm{obs})\}$ is a polyhedron---in fact a convex cone---of the form $\{ y : \Gamma y \geq 0 \}$, for a matrix $\Gamma \in \mathbb{R}^{m \times n}$ that depends on $M(y_\mathrm{obs})$ (and we interpret the inequality $\Gamma y \geq 0$ componentwise). Throughout the description of the polyhedra for each algorithm, we display the number of rows in $\Gamma$, since it loosely indicates how ``complex'' each model selection event is.
The same was already shown for FL in \citet{hyun2018exact}, so we omit the details, but briefly comment on it below. Overall, the number of rows of $\Gamma$ is linear in $n$ for FL and BS, quadratic in $n$ for CBS, and $O(Bkp)$ for WBS using $B$ intervals of length $p$. The latter can grow faster than linearly in $n$ when $B \geq n$, which is recommended in practice \citep{fryzlewicz2014wild}.
\paragraph{\textbf{Selection event for BS.}} We define the model for the $k$-step BS estimator as \[ M^{\mathrm{BS}}_{1:k}(y_\mathrm{obs}) = \big\{\hat b_{1:k}(y_\mathrm{obs}), \; \hat d_{1:k}(y_\mathrm{obs})\big\}, \] where \smash{$\hat b_{1:k}(y_\mathrm{obs})$} and \smash{$\hat d_{1:k}(y_\mathrm{obs})$} are the changepoint locations and directions when the algorithm is run on $y_\mathrm{obs}$, as described in Section \ref{sec:algorithms}.
\begin{proposition} \label{prop:bs-polyhedral-event} \textit{Given any fixed $k \geq 1$ and $b_{1:k},d_{1:k}$, we can explicitly construct $\Gamma$ where \[ \big\{y : M_{1:k}^{\mathrm{BS}}(y) = \{b_{1:k}, d_{1:k} \} \big\} = \{ y : \Gamma y \geq 0 \}, \] and where $\Gamma$ has \smash{$2 \sum_{\ell=1}^k (n - \ell - 1)$} rows.} \end{proposition}
\begin{proof} When $k=1$, $2(n-2)$ linear inequalities
characterize the single changepoint model $\{b_1, d_1\}$: \[ d_1 \cdot g^T_{(1, b_1, n)} y \geq g^T_{(1,b,n)} y, \quad \text{and} \quad d_1 \cdot g^T_{(1, b_1, n)} y \geq -g^T_{(1,b,n)} y, \quad b \in \{1,\ldots, n-1\} \backslash \{ b_1\}. \] Now by induction, assume we have constructed a polyhedral representation of the selection event up through step $k-1$. All that remains is to characterize the $k$th estimated changepoint and direction $\{b_k, d_k\}$ by inequalities that are linear in $y$. This can be done with $2 (n-k-1)$
inequalities. To see this, assume without loss of generality that the maximizing interval is $j_k=k$; then $\{b_k,d_k\}$ must satisfy the $2 (|I_k|-2)$ inequalities \[
d_k \cdot g^T_{(s_k, b_k, e_k)} y \geq g^T_{(s_k, b, e_k)} y
\quad \text{and} \quad
d_k \cdot g^T_{(s_k, b_k, e_k)} y \geq -g^T_{(s_k, b, e_k)} y,
\quad
b \in \{s_k, \ldots, e_k - 1\} \backslash \{b_k\}. \]
For each interval $I_\ell$, $\ell=1,\ldots,k-1$, we also have $2 (|I_\ell|-1)$ inequalities \[
d_k \cdot g^T_{(s_k, b_k, e_k)}
y \geq g^T_{(s_\ell, b, e_\ell)} y \quad\text{and}\quad
d_k \cdot g^T_{(s_k, b_k, e_k)}
y \geq -g^T_{(s_\ell, b, e_\ell)} y,
\quad
b \in \{s_\ell, \ldots, e_\ell- 1\}. \] The last two displays together completely determine $\{b_k,d_k\}$, and as
\smash{$\sum_{\ell=1}^k |I_\ell| = n$}, we get our desired total of $2 (n-k-1)$ inequalities. \end{proof}
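The construction in the proof is easy to instantiate. Below is a hypothetical sketch for the base case $k=1$: it builds the $2(n-2)$ rows of $\Gamma$ as explicit vectors and checks that $\Gamma y \geq 0$ holds for the data that produced the model:

```python
import numpy as np

def cusum_vec(n, s, b, e):
    # The linear functional g_{(s,b,e)} of \eqref{eq:bs-g-fun} as a vector in R^n.
    g = np.zeros(n)
    nl, nr = b - s + 1, e - b
    w = 1 / np.sqrt(1 / nr + 1 / nl)
    g[s - 1:b] = -w / nl
    g[b:e] = w / nr
    return g

def bs1_gamma(y):
    # Gamma for the 1-step BS selection event {b_1, d_1}: 2(n-2) rows.
    n = len(y)
    stats = {b: cusum_vec(n, 1, b, n) @ y for b in range(1, n)}
    b1 = max(stats, key=lambda b: abs(stats[b]))
    d1 = int(np.sign(stats[b1]))
    g1 = d1 * cusum_vec(n, 1, b1, n)
    rows = []
    for b in range(1, n):
        if b != b1:
            g = cusum_vec(n, 1, b, n)
            rows.extend([g1 - g, g1 + g])   # d_1 g_{b_1}^T y >= +/- g_b^T y
    return np.array(rows), b1, d1
```

By construction, $\Gamma y \geq 0$ on the observed data, while a data vector producing a different model (e.g., the reversed sequence) violates at least one row.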
\paragraph{\textbf{Selection event for WBS.}} We define the model of the $k$-step WBS estimator as \[ M^{\mathrm{WBS}}_{1:k}(y_\mathrm{obs}, w) = \big\{\hat b_{1:k}(y_\mathrm{obs}), \; \hat d_{1:k}(y_\mathrm{obs}),\; \hat j_{1:k}(y_\mathrm{obs}) \big\}, \] where $w$ is the set of $B$ intervals that the algorithm uses, \smash{$\hat b_{1:k}(y_\mathrm{obs})$} and \smash{$\hat d_{1:k}(y_\mathrm{obs})$} are the changepoint locations and directions, and \smash{$\hat j_{1:k}(y_\mathrm{obs})$} are the maximizing intervals.
\begin{proposition} \label{prop:wbs-polyhedral-event} \textit{Given any fixed $k \geq 1$, and $\{w,b_{1:k},d_{1:k},j_{1:k}\}$, we can explicitly construct $\Gamma$ where \[ \big\{y : M_{1:k}^{\mathrm{WBS}}(y, w) = \{b_{1:k},d_{1:k},j_{1:k}\} \big\} = \big\{ y : \Gamma y \geq 0 \big\}. \] The number of rows in $\Gamma$ will vary depending on the configuration of $w$ and $b_{1:k}$, but if each of the $B$ intervals in $w$ has length $p$, it will be at most $2\sum_{\ell=1}^{k}((B-\ell)\cdot(p-1) + (p-2))$.} \end{proposition}
The proof of \Fref{prop:wbs-polyhedral-event} is only slightly more complicated than that of \Fref{prop:bs-polyhedral-event}, and is deferred until Appendix \ref{sec:proofs}. Note that unlike BS, the maximizing intervals $\hat j_{1:k}$ are part of WBS's model.
\paragraph{\textbf{Selection event for CBS.}} Finally, we define the model for the $k$-step CBS estimator as \[ M^{\mathrm{CBS}}_{1:k}(y_\mathrm{obs}) = \big\{\hat a_{1:k}(y_\mathrm{obs}), \; \hat b_{1:k}(y_\mathrm{obs}),\; \hat d_{1:k}(y_\mathrm{obs}) \big\}, \] where now \smash{$\hat a_{1:k}(y_\mathrm{obs})$} and \smash{$\hat b_{1:k}(y_\mathrm{obs})$} are the pairs of estimated changepoint locations, and \smash{$\hat d_{1:k}(y_\mathrm{obs})$} are the changepoint directions, as described in Section \ref{sec:algorithms}.
\begin{proposition} \label{prop:cbs-polyhedral-event} \textit{Given any fixed $k \geq 1$ and $\{a_{1:k},b_{1:k},d_{1:k}\}$, we can explicitly construct $\Gamma$ where \[ \big\{y : M_{1:k}^{\mathrm{CBS}}(y, w) = \{a_{1:k},b_{1:k},d_{1:k}\} \big\} = \big\{ y : \Gamma y \geq 0 \big\}. \] Let $I_j^{(\ell)}$ denote the $j$th interval formed and $j_\ell$ be the selected interval defined in \eqref{eq:cbs-opt-prob} for an intermediate step $\ell \in \{1,\ldots,k\}$, and let $C(x,2) = {x \choose 2}$. Then $\Gamma$ has a number of rows equal to \[
2 \sum_{\ell = 1}^{k} \Big[C(|I^{(\ell)}_{j_\ell}| - 1, 2) -1 + \sum_{j' \neq j_\ell} C(|I^{(\ell)}_{j'}| -1 , 2)\Big]. \]} \end{proposition}
The proof of \Fref{prop:cbs-polyhedral-event} is only slightly more complicated than that of \Fref{prop:bs-polyhedral-event}, and is deferred until Appendix \ref{sec:proofs}.
\paragraph{\textbf{Selection events for FL, and a brief comparison.}} The model for the $k$-step FL estimator is: \[
M^{\mathrm{FL}}_{1:k}(y_\mathrm{obs}) = \big\{ \hat b_{1:k}(y_\mathrm{obs}), \; \hat d_{1:k}(y_\mathrm{obs}) , \; \hat
R_{1:k}(y_\mathrm{obs})\big\}, \] where \smash{$\hat b_{1:k}(y)$} and \smash{$\hat d_{1:k}(y)$} are changepoint locations and directions, and \smash{$\hat R_{\ell}(y)\in\mathbb{R}^{n-\ell}$}, $\ell=1,\ldots,k$, are sign vectors whose elements record the signs of certain statistics $h_i(y)$, calculated at each location $i$ in competition with $\hat b_\ell$ for the maximization at step $\ell$. These statistics $h_i(y)$ are weighted mean differences at location $i$ and are analogous to the CUSUM statistics in BS. \cite{hyun2018exact} make this representation more explicit, proving that for any fixed $k \geq 1$ and $b_{1:k},d_{1:k}, R_{1:k}$, we can explicitly construct $\Gamma$ such that \[
\big\{y : M_{1:k}^{\mathrm{FL}}(y) = \{b_{1:k}, d_{1:k}, R_{1:k} \} \big\} =
\{ y : \Gamma y \geq 0 \}, \] where $\Gamma$ has the same number of rows as a $k$-step BS event.
\subsection{Computation of p-values}\label{sec:computation}
Given a precise description of the polyhedral selection event $\{y : M(y) = M(y_\mathrm{obs})\}$, we can now describe methods to compute the p-value, i.e., the tail probability of the selective distributions described in \Fref{sec:post-selection}. Without loss of generality, all of our descriptions are specialized to testing the null hypothesis $H_0: v^T\theta = 0$ against the one-sided alternative $H_1: v^T \theta>0$. For saturated model tests, an exact calculation has been developed in previous work; we review it because it is relevant to our contributions on increasing power. For selected model tests, an approximation was described in previous work, but we develop a new hit-and-run sampler that has not been implemented before.
\paragraph{\textbf{Saturated model tests: exact formulae.}}
As shown in \citet{lee2016exact} and \citet{tibshirani2016exact}, the saturated selective distribution \eqref{eq:selective-distribution-saturated} has a particularly computationally convenient distribution when $Y$ is Gaussian and the model selection event $\{y : M(y)=M(y_\mathrm{obs})\}$ is a polyhedral set in $y$. In this case, the law of \eqref{eq:selective-distribution-saturated} is a {\it truncated Gaussian} (TG), whose truncation limits depend only on \smash{$\Pi_v^\perp y_\mathrm{obs}$}, and can be computed explicitly. Its tail probability can be computed in closed form (without Monte Carlo sampling). That is, the probability that $v^TY \geq v^Ty_\mathrm{obs}$ under the law of \eqref{eq:selective-distribution-saturated} is exactly equal to
\begin{equation}\label{eq:tg_statistic}
(\Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(v^Ty_\mathrm{obs}/\tau))/(\Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(\mathcal{V}_{\text{lo}}/\tau))
\end{equation} where $\Phi(\cdot)$ represents the standard Gaussian CDF,
$\tau = \sigma \|v\|_2$, $\rho = \Gamma v /\|v\|^2_2$, and \begin{equation} \label{eq:vlo_vup} \mathcal{V}_{\text{lo}} = v^Ty_\mathrm{obs} - \min_{j: \rho_j > 0} \big(\Gamma y_\mathrm{obs} \big)_j/\rho_j,\quad \text{and}\quad \mathcal{V}_{\text{up}} = v^Ty_\mathrm{obs} - \max_{j: \rho_j < 0} \big(\Gamma y_\mathrm{obs} \big)_j/\rho_j. \end{equation} The quantity in \eqref{eq:tg_statistic} is commonly referred to as the TG statistic. Since it is a pivot, it serves directly as the p-value for the saturated model test.
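The closed-form computation in \eqref{eq:tg_statistic} and \eqref{eq:vlo_vup} can be sketched as follows, assuming the supplied polyhedron satisfies $\Gamma y_\mathrm{obs} \geq 0$; the function names are ours.

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tg_pvalue(y_obs, v, Gamma, sigma):
    """One-sided saturated-model p-value for H0: v^T theta = 0 vs
    H1: v^T theta > 0, given the polyhedral event {y : Gamma y >= 0}.
    Computes the truncation limits V_lo, V_up and the TG statistic."""
    vty = float(v @ y_obs)
    tau = sigma * np.linalg.norm(v)          # sd of v^T Y
    rho = (Gamma @ v) / float(v @ v)
    slack = Gamma @ y_obs                    # nonnegative by assumption
    pos, neg = rho > 0, rho < 0
    vlo = vty - np.min(slack[pos] / rho[pos]) if pos.any() else -np.inf
    vup = vty - np.max(slack[neg] / rho[neg]) if neg.any() else np.inf
    num = norm_cdf(vup / tau) - norm_cdf(vty / tau)
    den = norm_cdf(vup / tau) - norm_cdf(vlo / tau)
    return num / den
```

For a single half-space constraint $\{y: v^Ty \geq 0\}$, this reduces to a Gaussian tail probability truncated at zero, which can serve as a quick correctness check.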
\paragraph{\textbf{Selected model tests: hit-and-run sampling.}}
To compute the p-value for selected model tests, \cite{fithian2015selective} proposed a hit-and-run strategy for sampling from the distribution for the known $\sigma^2$ setting, \eqref{eq:selective-distribution-selected-known-sigma}. This was implemented by the authors, and we briefly review the details in Appendix \ref{app:known_sigma}. For the unknown $\sigma^2$ setting, \cite{fithian2014optimal} suggested an importance sampling strategy for sampling the distribution \eqref{eq:selective-distribution-selected-unknown-sigma}. However, we find that an intuitive hit-and-run strategy can be adapted to the unknown $\sigma^2$ setting and implement this as a new algorithm.
Given a changepoint $j = 1,\ldots, k$, observe that we can design a segment test contrast $v$ where sampling from \eqref{eq:selective-distribution-selected-unknown-sigma} is equivalent to sampling uniformly from the set
\begin{equation} \label{eq:selected_set}
\Big\{ v^TY: M(Y) = M(y_\mathrm{obs}),\; \|Y\|_2 = \|y_{\text{obs}}\|_2, \; \bar{Y}_{(\hat{c}_{\ell}+1):\hat{c}_{\ell+1}} = \bar{y}_{\mathrm{obs}, (\hat{c}_{\ell}+1):\hat{c}_{\ell+1}} \text{ for all } \ell \neq j
\Big\}. \end{equation} Note that the above set no longer depends on $\theta$ or $\sigma^2$, because we have conditioned on all the relevant sufficient statistics under the selected model. Our hit-and-run sampler then sequentially draws samples $v^TY$ from the above set. For notational convenience, observe that the last $k$ constraints in \eqref{eq:selected_set} can be rewritten as $AY = Ay_\mathrm{obs}$ for some matrix $A \in \mathbb{R}^{k \times n}$. Our new hit-and-run algorithm is shown in \Fref{alg:hitandrun}.
\begin{algorithm}[t] Choose a number $M$ of iterations.\\% and let $J$ denote the number of rows of
Set $y^{(0)} = y_\mathrm{obs}$.\\
\For{$m \in \{1,\ldots,M\}$}{ Uniformly sample two unit vectors $s$ and $t$ in the nullspace of $A$.\\
Compute the set $\mathcal{I} \subseteq [-\pi/2, \pi/2]$ that intersects the set \begin{equation*} \Big\{ y\; : \; y = y^{(m-1)} + r(\omega)\sin(\omega) \cdot s + r(\omega) \cos(\omega)\cdot t \quad \text{for any }\omega \in [-\pi/2,\pi/2]\Big\}, \end{equation*} for the radius function $r(\omega) = -2(y^{(m-1)})^T(\sin(\omega)\cdot s + \cos(\omega)\cdot t)$, with the polyhedral set implied by the selected model $M(y_\mathrm{obs})$ based on \Fref{sec:polyhedra}.\\
Uniformly sample $\omega^{(m)}$ from $\mathcal{I}$ and form the next sample \[ y^{(m)} = y^{(m-1)} + r(\omega^{(m)})\sin(\omega^{(m)})\cdot s + r(\omega^{(m)}) \cos(\omega^{(m)})\cdot t. \] } Return the approximation to the tail probability of \eqref{eq:selective-distribution-selected-unknown-sigma},
$ \sum_{m=1}^{M} \mathds{1}[v^Ty^{(m)} \geq v^Ty_\mathrm{obs}]/M. $
\caption{MCMC hit-and-run algorithm for selected model test with unknown $\sigma^2$} \label{alg:hitandrun} \end{algorithm}
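A simplified sketch of \Fref{alg:hitandrun} is given below. Two liberties are taken for brevity, and both are our assumptions rather than the paper's implementation: the feasible angle set $\mathcal{I}$ is handled by rejection (resampling $\omega$ until the candidate lies in the polyhedron, which yields the same uniform conditional draw), and $s,t$ are orthonormalized so that the update is an exact reflection preserving both $\|y\|_2$ and $Ay$.

```python
import numpy as np

def hit_and_run_sample(y_obs, A, Gamma, v, M=2000, seed=0):
    """Sketch of the hit-and-run sampler over the set
    {y : Gamma y >= 0, Ay = Ay_obs, ||y||_2 = ||y_obs||_2}.
    Returns the chain of v^T y samples. Requires null(A) to have
    dimension at least 2."""
    rng = np.random.default_rng(seed)
    # orthonormal basis for the nullspace of A, via the SVD
    _, S, Vt = np.linalg.svd(A)
    null_basis = Vt[np.sum(S > 1e-10):]
    y = y_obs.copy()
    samples = []
    for _ in range(M):
        # two orthonormal directions s, t inside null(A)
        coef = rng.standard_normal((2, null_basis.shape[0]))
        q, _r = np.linalg.qr((coef @ null_basis).T)
        s, t = q[:, 0], q[:, 1]
        for _attempt in range(100):
            w = rng.uniform(-np.pi / 2, np.pi / 2)
            u = np.sin(w) * s + np.cos(w) * t      # unit vector in null(A)
            cand = y - 2 * (y @ u) * u             # reflection: keeps ||y||, Ay
            if (Gamma @ cand >= 0).all():          # stay inside the polyhedron
                y = cand
                break
        samples.append(v @ y)
    return np.array(samples)
```

The Monte Carlo p-value is then the fraction of samples with $v^Ty^{(m)} \geq v^Ty_\mathrm{obs}$, as in the last line of \Fref{alg:hitandrun}.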
\subsection{Randomization and marginalization} \label{sec:randomization}
We apply the randomization ideas of \cite{randomized-selinf}, which improve the power of selective inference, to changepoint algorithms, and devise explicit samplers. We investigate two specific forms of randomization: randomization over additive noise and randomization over random intervals. We specialize the following descriptions to saturated models. Similar randomization of selected model inferences is also possible, but is doubly computationally burdensome.
\paragraph{\textbf{Marginalization over additive noise.}} \cite{randomized-selinf} shows that performing inference based on the selected model $ M(y_\mathrm{obs} + w_\mathrm{obs})$ where $w_\mathrm{obs}$ is additive noise
and then marginalizing over $W$ leads to improved power. Here, $w_\mathrm{obs}$ is a realization of a random component $W$ sampled from $\mathcal{N}(0, \sigma_{\text{add}}^2I_n)$, where $\sigma_{\text{add}}^2 > 0$ is set by the user.
\cite{fithian2014optimal} provides a mathematical basis for pursuing such randomization, stating that less conditioning results in an increase in Fisher information.
For additive noise, the above model selection event is: $$\{ y : \Gamma(y + w_\mathrm{obs}) \geq 0\} = \{y : \Gamma y \geq -\Gamma w_\mathrm{obs}\}.$$
This means the new polyhedron formed by the model selection event based on perturbed data $y_\mathrm{obs} + w_\mathrm{obs}$ is slightly shifted.
Porting the ideas of \cite{randomized-selinf} to our setting, to test the one-sided null hypothesis $H_0: v^T\theta = 0$, we want to compute the following tail probability of the marginalized selective distribution, \begin{equation}\label{eq:cond-dist-addnoise-marg}
T(y_\mathrm{obs}, v) = \mathbb{P}\bigg(v^TY \geq v^Ty_\mathrm{obs} ~\big|~ \Big( M(Y + W) = M(y_\mathrm{obs} + W), \; \Pi_v^\perp Y = \Pi_v^\perp y_{\text{obs}}\Big)\bigg).
\end{equation} This tail probability is hard to compute directly. However, \eqref{eq:tg_statistic} and \eqref{eq:vlo_vup} give exact formulas for the non-marginalized tail probabilities,
\begin{equation*}
T(y_\mathrm{obs}, v, w_\mathrm{obs}) = \mathbb{P}\bigg(v^TY \geq v^Ty_\mathrm{obs}~\big|~ \Big(M(Y + W) = M(y_\mathrm{obs} + W), \;\Pi_v^\perp Y = \Pi_v^\perp y_{\text{obs}},\; W=w_{\text{obs}}\Big)\bigg).
\end{equation*} The following proposition shows that we can compute $T(y_\mathrm{obs}, v) $
by reweighting instances of $T(y_\mathrm{obs}, v, w_\mathrm{obs}) $ via importance sampling.
Here, let $E_1 = \mathds{1}[M(Y + W) = M(y_\mathrm{obs} + W)]$ and
$E_2 = \mathds{1}[\Pi_v^\perp Y = \Pi_v^\perp y_{\text{obs}}]$.
\begin{proposition}\label{prop:additive_noise} \textit{Let $\Omega$ denote the support of the random component $W$. If the distribution of $W$ is independent of the random event $E_2$,
\eqref{eq:cond-dist-addnoise-marg} can be exactly computed as \begin{equation}\label{eq:additive_noise} T(y_\mathrm{obs}, v) = \int_{\Omega} T(y_\mathrm{obs}, v, w_\mathrm{obs}) \cdot a(w_\mathrm{obs}) \; dP_W(w_\mathrm{obs}) = \frac{\int_{\Omega} \Phi\big(\mathcal{V}_{\text{up}}/\tau\big) - \Phi\big(v^Ty_\mathrm{obs}/\tau\big) \;dP_W(w_\mathrm{obs})}{\int_{\Omega} \Phi\big(\mathcal{V}_{\text{up}}/\tau\big) - \Phi\big(\mathcal{V}_{\text{lo}}/\tau\big) \; dP_W(w_\mathrm{obs})}. \end{equation} where the weighting factor is
$a(w_\mathrm{obs})= \mathbb{P}(W = w_\mathrm{obs} | E_1, E_2)/\mathbb{P}(W = w_\mathrm{obs})$.} \end{proposition} The first equality in \eqref{eq:additive_noise} demonstrates the reweighting of $T(y_\mathrm{obs}, v, w_\mathrm{obs}) $, but the second equality gives a sampling strategy where we approximate the integrals. \Fref{alg:additive-importance-sampler} describes this, where for one realization $w_\mathrm{obs}$, we let $k(w_\mathrm{obs})$ and $g(w_\mathrm{obs})$ denote the integrand of the last term's numerator and denominator in \eqref{eq:additive_noise} respectively.
\paragraph{\textbf{Marginalization over WBS intervals.}} In contrast to the above setting where $W$ represents Gaussian noise, in wild binary segmentation described in \Fref{sec:algorithms}, $W$ represents the set of $B$ randomly drawn intervals. Observe that \Fref{prop:additive_noise} still applies to this setting, where $M(y_\mathrm{obs} + w_\mathrm{obs})$ is now replaced with $M(y_\mathrm{obs}, w_\mathrm{obs})$, as described in \Fref{sec:polyhedra}.
However, one additional complication is that the maximizing intervals $\hat j_{1:k}$ in the model $M(y_\mathrm{obs}, w_\mathrm{obs})$ are embedded in the construction of the matrix $\Gamma$ representing the polyhedra.
This prevents a naive resampling of all $B$ intervals.
We describe how to overcome this complication. Let
\smash{$\{W_{\hat j_1}, \ldots, W_{\hat j_k}\}$} be the maximizing intervals. We resample all other intervals, $W_\ell$ for \smash{$\ell \in \{1, \ldots, B\} \backslash \{\hat j_1, \ldots, \hat j_k\}$}.
Specifically, for each such interval $W_\ell = (s_\ell, \ldots, e_\ell)$, the endpoints $s_\ell$ and $e_\ell$ are sampled uniformly between $1$ and $n$, subject to $s_\ell < e_\ell$. After all $B-k$ intervals are resampled, a check is performed to ensure that
$\{W_{\hat j_1}, \ldots, W_{\hat j_k}\}$ are still the maximizing intervals when WBS is
applied again to $y_\mathrm{obs}$.
The full algorithm is in \Fref{alg:wbs-importance-sampler}.
\hspace{-.5cm}\begin{minipage}[t]{6cm}
\begin{algorithm}[H]
\caption{Marginalizing over additive noise} \label{alg:additive-importance-sampler}
Choose a number $T$ of trials.\\
\For{$t \in \{1, \ldots, T\}$}{
Sample the additive noise $w_t$ from $\mathcal{N}(0, \sigma^2_{\text{add}}I_n)$.\\
Compute $k(w_t)$ and $g(w_t)$.\\
}
Return the approximation to the tail probability \eqref{eq:additive_noise},
\[ \frac{\sum_{t=1}^{T}k(w_t)}{\sum_{t=1}^{T}g(w_t)}.
\]
\end{algorithm} \end{minipage} \hspace{0.5cm} \begin{minipage}[t]{9.5cm}
\begin{algorithm}[H]
\caption{Marginalizing over random intervals}\label{alg:wbs-importance-sampler}
Choose a number $T$ of trials.\\
\For{$t \in \{1, \ldots, T\}$}{
Sample the non-maximizing intervals
$w_\ell = (s_\ell, \ldots, e_\ell)$ for $\ell \in \{1, \ldots, B\} \backslash \{\hat j_{1:k}\}$
where $s_\ell, e_\ell$ are uniformly drawn from $1$ to $n$ with $s_\ell < e_\ell$.\\
Check to see that $\{\hat j_{1:k}\}$ are still the indices of the maximizing intervals. If not,
return to the previous step.\\
Compute $k(w_t)$ and $g(w_t)$.\\
}
Return the approximation to the tail probability \eqref{eq:additive_noise},
\[ \frac{\sum_{t=1}^{T}k(w_t)}{\sum_{t=1}^{T}g(w_t)}.
\]
\end{algorithm} \end{minipage}
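A sketch of \Fref{alg:additive-importance-sampler} follows, reusing the truncation-limit computation from \eqref{eq:vlo_vup} with the shifted polyhedron $\{y : \Gamma y \geq -\Gamma w\}$. The hook \texttt{select\_gamma} is hypothetical: it stands in for rerunning the chosen changepoint algorithm on the perturbed data and returning the $\Gamma$ of the selected model.

```python
import math
import numpy as np

def phi(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def marginalized_pvalue(y_obs, v, sigma, sigma_add, select_gamma, T=500, seed=0):
    """Importance-sampling approximation of the additive-noise
    marginalized p-value (last expression in the marginalization
    identity). `select_gamma(z)` is a user-supplied hook returning
    the polyhedron matrix Gamma for the model selected on data z."""
    rng = np.random.default_rng(seed)
    tau = sigma * np.linalg.norm(v)
    vty = float(v @ y_obs)
    num = den = 0.0
    for _ in range(T):
        w = sigma_add * rng.standard_normal(len(y_obs))
        Gamma = select_gamma(y_obs + w)
        # selection event on perturbed data: Gamma y >= -Gamma w
        rho = (Gamma @ v) / float(v @ v)
        slack = Gamma @ (y_obs + w)          # nonnegative by construction
        pos, neg = rho > 0, rho < 0
        vlo = vty - np.min(slack[pos] / rho[pos]) if pos.any() else -np.inf
        vup = vty - np.max(slack[neg] / rho[neg]) if neg.any() else np.inf
        num += phi(vup / tau) - phi(vty / tau)    # k(w_t)
        den += phi(vup / tau) - phi(vlo / tau)    # g(w_t)
    return num / den
```

\Fref{alg:wbs-importance-sampler} has the same structure, with the noise draw replaced by a resampling of the non-maximizing intervals followed by the feasibility check.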
\section{Practicalities and extensions} \label{sec:practicalities}
The above sections formalize the mechanisms to perform selective inference with respect to the basic procedure highlighted in \Fref{sec:introduction}. We now briefly summarize all the combinations of choices that the user faces, based on the methods developed above, and their practical impact.
\subsection{Practical considerations}
There are some practical choices that the user needs to make when implementing the procedure. Here, we outline a few, each related with a key element of the broader inference procedure.
\begin{itemize} \item \textbf{Algorithm} (BS, WBS, CBS and FL): It is useful for the user to be
able to compare algorithms. CBS is specialized for pairs of changepoints, and
WBS specializes in localized changepoint detection compared to BS. FL and BS have similar mechanisms, which sequentially admit changepoints by maximizing a statistic. However, BS has a simpler mechanism and a less complex selection event, potentially giving higher post-selection conditional power.
\item \textbf{Conditioning} (Plain or marginalized): Marginalizing over a source
of randomness yields tests with higher power than plain inference, but at two
costs: an increased computational burden from the required MCMC sampling, and
worsened detection ability when using additive noise marginalization. Also,
the marginalized p-values are subject to the sampling randomness, and the
number of trials $T$ needed to reduce the p-values' intrinsic
variability
scales with $\sigma^2_{\text{add}}$.
\item \textbf{Number of estimated changepoints $k$} (Fixed or data-driven):
As currently described in \Fref{sec:algorithms}, the changepoint algorithms
discussed in our paper require the user to pre-specify the number of estimated
changepoints $k$.
However, we can adopt local stopping rules from \cite{hyun2018exact}
to adaptively choose $k$.
This variation
increases the complexity of the polyhedra compared to those in
\Fref{sec:polyhedra}, leading to
lower statistical power than its fixed-$k$
counterpart. This is shown in Appendix \ref{app:ic}.
\item \textbf{Assumed null model} (Saturated or selected): As mentioned in
\Fref{sec:post-selection}, selected model tests are valid under a stricter set
of assumptions but often yield higher power.
Computationally, saturated model tests are often simpler to perform
than selected model tests due to the closed form expression of the tail probability.
\item \textbf{Error variance $\sigma^2$} (Known or unknown): Saturated model
tests require $\sigma^2$ to be known. In practice, we need to estimate it
in-sample from a reasonable changepoint mean fitted to the same data, or
estimated out-of-sample on left-out data. Selected model tests have the
advantage of not requiring knowledge of $\sigma^2$.
\end{itemize}
\subsection{Extensions}
As mentioned in \cite{hyun2018exact}, there are many practically-motivated extensions to the baseline procedure mentioned in \Fref{sec:introduction} to either improve power or interpretability. We highlight these below. All of these extensions will still give proper Type-I error control under the appropriate null hypotheses.
\begin{itemize} \itemsep-.5em \item \textbf{Designing linear contrasts}: The user can make many types of contrast vectors $v$ to fit their analysis, in addition to the
segment test contrasts \eqref{eq:segment-contrast}, as long as the contrast is measurable with respect to $M(y_\mathrm{obs})$.
One example is the spike test of \citet{hyun2018exact}, which tests single-location mean changes. For CNV analysis, it
could be useful to test regions between an adjacent pair of changepoints away
from the immediately surrounding regions.
Also, a step-sign plot (a plot
that shows the locations and direction of the changepoints, but not their magnitude)
can help the user design contrasts \citep{hyun2018exact}.
\item \textbf{Post-processing the estimated changepoints}: Multiple detected
changepoints too close to one another can hurt the power of segment tests.
Post-processing the estimated changepoints based on decluttering
\citep{hyun2018exact} or filtering \citep{lin2017sharp} so the new set of
changepoints are well-separated can lead to contrasts that yield higher
power. We show empirical evidence of this improving power of the fused lasso,
in Appendix \ref{app:unique-detection}.
\item \textbf{Pre-cutting}: We can also modify all the algorithms in
\Fref{sec:algorithms} to start with an initial existing set of
changepoints. This is useful in CGH analyses, when it is not meaningful to
consider segments that start in one chromosome and end in another. By pooling
information in this manner from separate chromosomal regions, the pre-cut
analysis is an improvement over conducting separate analyses in individual
chromosomes.
\end{itemize}
\section{Simulations} \label{sec:simulation}
\subsection{Gaussian simulations}
In this section, we show simulation examples to demonstrate properties of the segmentation post-selection inference tools presented in the current paper. The mean $\theta$ consists of two alternating-direction changepoints of size $\delta$ in the middle as in \eqref{eq:middle-mutation}, chosen to be a realistic example of mutation phenomena as observed in array CGH datasets \citep{Snijders2001}. We vary the signal size $\delta \in (0,4)$, while generating Gaussian data from a fixed noise level $\sigma^2=1$.
This is the \textit{duplication} mutation scenario. The sample size $n=200$ is chosen to be on the scale of the chromosomal data. An example of this synthetic dataset can be seen in Figure \ref{fig:power-comparison-data}.
\begin{equation}\label{eq:middle-mutation}
\hspace{-20mm}\textbf{Middle mutation:}\hspace{5mm}
y_i \sim \mathcal{N}(\theta_i, 1), \;\;
\theta_i = \begin{cases}
\delta & \text{ if } 101\le i \le 140\\
0 & \text{ if otherwise } \\
\end{cases} \end{equation}
\begin{figure}
\caption{\it\small Example of simulated Gaussian data for middle mutation as
defined in \eqref{eq:middle-mutation} with $\delta=4$, with data length
$n=200$ and noise level $\sigma=1$. The possible mean vectors $\theta$ for
$\delta = 0, 1, 2$ are also shown.}
\label{fig:power-comparison-data}
\end{figure}
\paragraph{\textbf{Methodology.}} In the following simulations, we consider four estimators (BS, WBS, CBS and FL), each run for two steps. From each, we perform both saturated and selected model tests. For the latter, we only include the results of BS and FL for simplicity, for both settings of known and unknown noise parameter $\sigma^2$.
We use the basic procedure outlined in \Fref{sec:introduction} with
a significance level of $\alpha=0.05$. We verify the Type-I error control of our methods next. Throughout the entire simulation suite to come, the standard deviation in each of the power curves and detection probabilities is less than 0.02. For each method, for each signal-to-noise size $\delta$, we run more than 250 trials.
\paragraph{\textbf{Type-I error control verification.}} We examine all our statistical inferences under the global null where $\theta = 0$ to demonstrate their validity -- uniformity of null p-values, i.e., Type-I error control. Specifically, any simulations from the no-signal regime $\delta=0$ of the middle mutation \eqref{eq:middle-mutation} can be used. When there is no signal, the null scenario $v^T\theta=0$ is always true, so we expect all p-values to be uniformly distributed between 0 and 1. We verify this expected behavior in \Fref{fig:null-dist}. We notice that the methods that require MCMC (marginalized saturated and selected model tests) require more trials to converge to the uniform distribution than their counterparts that admit exact calculations.
\begin{figure}
\caption{\it\small
All plots showing the p-values of various statistical inferences under the global null,
with colors of lines given according to \Fref{fig:power-comparison} and \ref{fig:power-comparison-selected}.
(Left): Saturated model tests, specifically BS (black), WBS (blue), CBS (red) and
FL (green). (Middle): Marginalized variants of the left plot.
(Right): Selected model tests, specifically BS (black) and FL (green), either
with unknown $\sigma^2$ (solid) or known $\sigma^2$ (dashed).
}
\label{fig:null-dist}
\end{figure}
\paragraph{\textbf{Calculating power.}}
Since the tests are performed only when a changepoint is selected, it is necessary to separate the detection ability of the estimator from power of the test. To that end, we define the following quantities, \begin{align}
\text{Conditional power} &= \frac{\# \;\text{correctly detected \& rejected}}{ \#\; \text{correctly detected}}\label{eq:powdef2}\\
\text{Detection probability} &= \frac{\# \;\text{correctly detected}}{ \# \;\text{tests conducted}}\label{eq:powdef1}\\
\text{Unconditional power} &= \frac{\# \;\text{correctly detected \& rejected}}{\# \;\text{tests
conducted}} = \text{Detection} \times \text{Conditional power} \label{eq:powdef3} \end{align} The overall power of an inference tool can only be assessed by examining the conditional and unconditional power together.
We consider a detection to be correct if it is within $\pm 2$ of the true changepoint locations.
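The three metrics in \eqref{eq:powdef2}--\eqref{eq:powdef3} can be computed from per-test records as sketched below; the record format is hypothetical, chosen only for illustration.

```python
def power_summary(trials):
    """Compute detection probability, conditional power, and
    unconditional power from a list of per-test records, each a dict
    with boolean fields 'detected' (estimate within +/-2 of a true
    changepoint) and 'rejected' (p-value below alpha)."""
    n_tests = len(trials)
    detected = [t for t in trials if t["detected"]]
    hits = sum(t["rejected"] for t in detected)   # detected and rejected
    return {
        "detection": len(detected) / n_tests,
        "conditional": hits / len(detected) if detected else float("nan"),
        "unconditional": hits / n_tests,          # = detection x conditional
    }
```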
\paragraph{\textbf{Power comparison across signal sizes $\delta$.}} For saturated model tests, we perform additive-noise inferences using Gaussian $\mathcal{N}(0,\sigma_{\text{add}}^2)$ with $\sigma_{\text{add}}=0.2$ for BS, FL, and CBS. For WBS, we employ the randomization scheme as described in \Fref{sec:randomization} with $B=n$. With the metrics in \eqref{eq:powdef1}-\eqref{eq:powdef3}, we examine the performance of the four methods. The solid lines in \Fref{fig:power-comparison} show the ``plain'' method, where model selection is based on $M(y_\mathrm{obs})$. The dotted lines show the marginalized counterparts, where the model selection is $M(y_\mathrm{obs}, W)$, marginalized over $W$.
WBS and CBS have higher conditional and unconditional power than BS. This is expected, since the former two are better suited to localized changepoints of alternating directions. FL noticeably under-performs in power compared to the segmentation methods. This is partially caused by FL's detection behavior, can be explained by examining alternative measures of detection, and can be improved with post-processing; this investigation is deferred to Appendix \ref{app:unique-detection}. The marginalized versions of each algorithm have noticeably improved power, but only slightly worse detection than their non-randomized, plain versions (middle panel of \Fref{fig:power-comparison}). Combined, in terms of unconditional power, marginalized inferences clearly dominate their plain counterparts.
Selected model inference simulations are shown in \Fref{fig:power-comparison-selected}. Surprisingly, there is an almost imperceptible drop in power from known $\sigma^2$ to unknown $\sigma^2$. Compared to the saturated model tests in \Fref{fig:power-comparison}, there is a smaller power gap between FL and BS. Also, selected model tests appear to have higher power than saturated model tests. In general, however, it is hard to compare the power of saturated and selected model tests due to the clear difference in model assumptions.
\paragraph{\textbf{Comparison with sample-splitting.}}
Sample splitting is another valid inference technique. After splitting the dataset in half based on even and odd indices, we run a changepoint algorithm on one half and conduct a classical one-sided t-test on the other. This is the most comparable test, as it does not assume $\sigma^2$ is known and tests the one-sided null $H_0:v^T\theta=0$. Instead of the $\pm 2$ slack used for calculating detection for selective inference (dotted and dashed lines), $\pm 1$ was used for sample splitting (solid line). The loss in detection accuracy in the middle panel of \Fref{fig:samplesplit} shows the downside of halving the data size for detection. Unconditional power for marginalized saturated model tests and selected model tests is noticeably higher than for the other two.
\begin{figure}
\caption{\it\small Data was simulated from two settings over signal size
$\delta \in (0,4)$ with $n=200$ data points. Several two-step algorithms
(WBS, BS, CBS, FL) were applied, and post-selection segment test
inference was conducted on the resulting two detected changepoints from
each method. The dotted lines are the marginalized versions of each
test. }
\caption{\it\small Setup similar to \Fref{fig:power-comparison} but for
selected model tests. Only BS (black) and FL (green) are shown, with the selected model test applied under both known (dashed line) and unknown (solid line) noise parameter $\sigma^2$.
}
\caption{\it\small
Setup similar to \Fref{fig:power-comparison} but
comparing sample
splitting (black solid), plain saturated model test (red dashed),
additive noise marginalized saturated model test (green dashed),
and selected model test with unknown $\sigma^2$ (blue dashed),
all using a 2-step binary segmentation.
(Middle): Detection probability for the binary segmentation
applied on the sample split dataset (black solid)
or the full dataset (red dashed).
(Right): Unconditional power, computed by multiplying the conditional power curve and its
relevant detection probability curve.
}
\label{fig:power-comparison}
\label{fig:power-comparison-selected}
\label{fig:samplesplit}
\end{figure}
\subsection{Pseudo-real simulation with heavy tails} \label{sec:heavytail}
We present pseudo-real datasets based on a single chromosome -- chromosome 9 in GM01750 -- in order to investigate how heavy-tailed distributions affect our inferences. We only present saturated model tests for brevity.
From the original data, we estimate a 1-changepoint mean $\theta$, shown in the bold red line in Figure \ref{fig:pseudoreal}, and residuals $r$, both based on a fitted 1-step wild binary segmentation model. The QQ plot shows that these residuals have heavier tails than a Gaussian (top middle panel of \Fref{fig:pseudoreal}), and are close in distribution to a Laplacian.
This motivates us to generate synthetic data $y = \theta + \epsilon$ by adding noise $\epsilon$ in three ways: \begin{enumerate} \itemsep-.5em \item Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$ (black), \item Laplace noise $\epsilon \sim \operatorname{Laplace}(0, \sigma/\sqrt{2})$ (green), and \item Bootstrapped residuals, $\epsilon = b(r)$, where $b(\cdot)$ samples the residuals with
replacement (red). \label{eq:bootstrap-data} \end{enumerate}
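The three noise mechanisms can be sketched as follows (the function name is ours); note that the Laplace scale $\sigma/\sqrt{2}$ is chosen so that all three mechanisms share the same variance $\sigma^2$.

```python
import numpy as np

def synthetic_noise(theta, r, kind, sigma=1.0, seed=None):
    """Generate y = theta + eps under one of the three noise
    mechanisms; `r` is the vector of residuals from the fitted 1-step
    WBS model. Laplace(0, sigma/sqrt(2)) has variance sigma^2."""
    rng = np.random.default_rng(seed)
    n = len(theta)
    if kind == "gaussian":
        eps = sigma * rng.standard_normal(n)
    elif kind == "laplace":
        eps = rng.laplace(0.0, sigma / np.sqrt(2.0), n)
    else:  # bootstrapped residuals, sampled with replacement
        eps = rng.choice(r, size=n, replace=True)
    return theta + eps
```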
\begin{figure}\label{fig:pseudoreal}
\end{figure}
We then investigate the behavior of saturated model tests after a 3-step binary segmentation across all three types of noises when the null hypothesis $H_0: v^T\theta = 0$ is true. To set $\sigma^2$ for these saturated model tests, we compute the empirical variance after fitting a pre-cut 10-step wild binary segmentation across the entire cell line.
The results are shown in Figure \ref{fig:pseudoreal}. Exactly valid null p-values would follow the theoretical $U(0,1)$ distribution, optimistic (superuniform) p-values would lie below the diagonal, and conservative (subuniform) p-values would lie above the diagonal. We see that the inferences are exactly valid with Gaussian noise but are optimistic with both Laplacian noise and bootstrapped residuals (panel B of \Fref{fig:pseudoreal}).
To overcome this optimism, we modify the \textit{bootstrap substitution method} \citep{asymppostsel}. Let $\beta$ denote $\bar \theta$, the grand mean of $\theta$. The authors' original idea is to approximate the law of $v^TY$ used to construct the TG statistic \eqref{eq:tg_statistic} with the bootstrap distribution of $v^T(Y- \beta)$, obtained by bootstrapping the residuals $y-\bar y$. Here, the empirical grand mean $\bar y$ represents the simplest model with no changepoints. While this estimate will usually restore validity, it is expected to produce overly conservative p-values if there exist \textit{any} changepoints (panel C of \Fref{fig:pseudoreal}).
Hence, we instead consider the bootstrapped distribution of $v^T(Y - \theta)$, by bootstrapping the residuals, $y - \hat{\theta}$, where
$\hat \theta$ is a piecewise constant estimate of $\theta$. For our instance, we use a $k$-step binary segmentation model to estimate $\hat \theta$, where we choose $k$ using two-fold cross validation from a two-fold split of the data $y$ into odd and even indices. This procedure is not valid in general and should be used with caution.
In order to combat the main risk of over-fitting $\hat \theta$, we may further modify this procedure by excluding shorter segments in $\hat \theta$ prior to bootstrapping. For our dataset, these potential downsides do not seem to materialize in practice. At the sample size $n \simeq 100$ and signal-to-noise ratio of our current dataset, the resulting p-values in both heavy-tailed and Gaussian data are convincingly uniform (panel D of \Fref{fig:pseudoreal}).
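The modified bootstrap substitution can be sketched as follows, with the truncation limits $\mathcal{V}_{\text{lo}}, \mathcal{V}_{\text{up}}$ from the polyhedral event assumed given; replacing the Gaussian CDF in the TG ratio by the empirical CDF of the bootstrapped statistic is our reading of the substitution idea, and the function name is ours.

```python
import numpy as np

def bootstrap_tg_pvalue(y_obs, v, theta_hat, vlo, vup, B=2000, seed=0):
    """Approximate the null law of v^T Y by the bootstrap distribution
    of v^T eps*, where eps* resamples the residuals y_obs - theta_hat,
    then substitute its empirical CDF for the Gaussian CDF in the TG
    ratio. vlo, vup are the truncation limits of the selection event."""
    rng = np.random.default_rng(seed)
    resid = y_obs - theta_hat
    stats = np.array([v @ rng.choice(resid, size=len(resid), replace=True)
                      for _ in range(B)])
    F = lambda x: np.mean(stats <= x)     # empirical null CDF
    num = F(vup) - F(v @ y_obs)
    den = F(vup) - F(vlo)
    return num / den if den > 0 else 1.0
```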
\section{Copy Number Variation (CNV) data application} \label{sec:application}
Array CGH analyses
detect changes in expression levels (measured as a log ratio in fluorescence intensity between test and reference samples) across the genome. Aberrations found are linked
with the presence of a wide range of genetically driven diseases -- such as many types of cancer, Alzheimer's disease, and autism; see, e.g., \citet{
international2008rare,
bochukova2010large}.
The datasets we study in this paper are originally from \cite{Snijders2001}, and have been studied in numerous works in the statistics literature, e.g. \citet{sara, lai2008}. Each dataset consists of an individual cell line with $2,000$ measurements or more across 23 chromosomes. Our analysis focuses on middle-to-middle duplication, the setting that was studied in \Fref{sec:simulation}.
In our analysis, we use a 4-step wild binary segmentation and perform marginalized saturated model tests on two cell lines GM01524 and GM01750 in \Fref{fig:analysis}. Recall that the 14th chromosome of the latter cell line was shown in \Fref{fig:intro}. As described in \Fref{sec:practicalities}, we pre-cut both analyses at chromosome boundaries since the ordering of 1 through 23 is essentially arbitrary. In GM01524, we can see that our choice of methods -- segment test inferences on changepoints recovered from pre-cut wild binary segmentation, after decluttering -- deems two changepoint locations A and B of alternating directions in chromosome 6 to be significant, and two other locations to be spurious, at the significance level $\alpha = 0.05$ after Bonferroni correction. This result is consistent with karyotyping results of a single middle-to-middle duplication. Likewise, in GM01750, the wild binary segmentation inference correctly identified the two start-to-middle duplications in chromosomes 9 and 14 which were confirmed with karyotyping, and correctly invalidated the rest.
\section{Conclusions}
We have described an approach to conduct post-selection inference on changepoints detected by common segmentation algorithms, using the same data for detection and testing.
Through simulations, we demonstrated the detection probability and power across signal-to-noise ratios in a variety of settings, as well as our tools' robustness to heavy-tailed data. Finally, we demonstrated an application to array CGH data, where we showed that our methods effectively provide a statistical filter that retains the changepoints validated by karyotyping and discards the rest.
Future work in this area could improve the practical applicability of these methods. One useful extension would be to incorporate more complex and realistic noise models.
For example, the selected model testing framework can be extended to include other exponential family models. The methodology for inference after changepoint detection may also be extended to multiple streams of copy number variation data in order to make more powerful inferences about changepoint locations. These and other methodological extensions can be useful for newer types of copy number variation data from recent technology, such as next-generation sequencing.
\begin{figure}
\caption{\it \small ``Pre-cut'' changepoint inference using
saturated model tests for wild binary segmentation
marginalized over random intervals conducted on two cell lines,
from \citet{Snijders2001}. Data points are colored in two alternating
tones, to visually depict the chromosomal boundaries.
For each cell line, the letters A through D
denote the estimated changepoints, $\hat b_1$ through $\hat b_4$ respectively.
The bolded lines denote changepoints that were rejected under the null hypothesis
$H_0: v^T\theta = 0$ at a Type-I error control level $\alpha = 0.05$ after Bonferroni-correction.
(Top): The analysis for the cell line GM01524, with all 23 chromosomes shown.
(Bottom): The same setup as above, but for the cell line GM01750.
}
\label{fig:analysis}
\end{figure}
\begin{small} \section{Code and supplemental material} The code to perform estimation as well as saturated model tests is at \url{https://github.com/robohyun66/binseginf}, while the code to perform selected model tests is additionally at \url{https://github.com/linnylin92/selectiveModel}.
The following is a brief summary of the supplements. Appendix A contains the proofs omitted from the main text. Appendix B contains the algorithmic details for the selected model test sampler in the known $\sigma^2$ setting. Appendix C contains numerous additional simulations results and details. Appendix D contains a description of the procedure to choose $k$ adaptively and its corresponding simulation results. Appendix E contains additional results on our array CGH application.
\section{Acknowledgment} The authors used Pittsburgh Supercomputing Center resources (Proposal/Grant Number: DMS180016P). Sangwon Hyun was supported by NSF grants DMS-1554123 and DMS-1613202. Max G'Sell was supported by NSF grant DMS-1613202. Ryan Tibshirani was supported by NSF grant DMS-1554123.
\end{small}
\appendix
\section{Additional proofs} \label{sec:proofs}
\subsection{Proof of \Fref{prop:wbs-polyhedral-event}, (WBS)} \label{app:wbs_polyhedra} \begin{proof} The construction of $\Gamma$ is basically the same as that for BS in \Fref{prop:bs-polyhedral-event}; the only difference is that, at step $k$, the inequalities defining the new rows of $\Gamma$ are based on the intervals \smash{$w_{j_k}$} and $w_\ell$, $\ell \in J_k \backslash \{j_k\}$, instead of \smash{$I_{j_k}$} and $I_\ell$, $\ell \neq j_k$, respectively. To compute the upper bound on the number of rows $m$, observe that in step $\ell \in \{1,\ldots, k\}$, there are at most $B-\ell+1$ intervals remaining. Among these, the interval $j_\ell$ contributes $p-2$ inequalities, and the remaining $B-\ell$ intervals contribute $p-1$ inequalities. \end{proof}
\subsection{Proof of \Fref{prop:cbs-polyhedral-event}, (CBS)} \label{app:cbs_polyhedra}
\begin{proof} The proof follows similarly to the proof of \Fref{prop:bs-polyhedral-event}. Observe that for any $k' < k$, the model $M^{\mathrm{CBS}}_{1:k'}(y_\mathrm{obs})$ is strictly contained in the model $M^{\mathrm{CBS}}_{1:k}(y_\mathrm{obs})$. Hence, we can proceed using induction, and let $ b_i$ for $i \in \{1, \ldots, k\}$ denote $\hat b_i$ for simplicity, and do the same for $a_i$, $d_i$ and $j_i$. Let $C(x,2) = {x \choose 2}$ for simplicity as well.
For $k=1$, the following $2 \cdot (C(n-1, 2)- 1)$ inequalities characterize the selection of the changepoint model $\{a_1, b_1, d_1\}$,
\begin{align*} d_1 \cdot g^T_{(1, a_1, b_1, n)}y \geq g^T_{(1,r,t,n)}y, \quad \text{and}\quad d_1 \cdot g^T_{(1, a_1, b_1, n)}y \geq -g^T_{(1,r,t,n)}y, \end{align*} for all $r, t \in \{1, \ldots, n-1\}$ where $r < t$, $r \neq a_1$ and $t \neq b_1$.
By induction, assume we have constructed the polyhedra for the model, $M^{\mathrm{CBS}}_{1:(k-1)}(y_\mathrm{obs}) = \{a_{1:(k-1)}, b_{1:(k-1)}, d_{1:(k-1)}\}$. To construct $M^{\mathrm{CBS}}_{1:k}(y_\mathrm{obs})$, all that remains is to characterize the $k$th parameters $\{a_k, b_k, d_k\}$. To do this, assume that
$j_k$ corresponds with the interval $I_k$ having the form $\{s_k, \ldots, e_k\}$. Within this interval, we form the first $2 \cdot (C(|I_{j_k}|-1, 2)-1)$ inequalities of the form, \begin{equation*}
d_k \cdot g^T_{(s_k, a_k, b_k, e_k)}
y \geq g^T_{(s_k, r,t, e_k)} y \quad\text{and}\quad
d_k \cdot g^T_{(s_k, a_k,b_k, e_k)}
y \geq -g^T_{(s_k, r,t, e_k)} y
\end{equation*} for all $r,t \in \{s_k, \ldots, e_k - 1\}$ where $r < t$ and $r \neq a_k$ and $t \neq b_k$.
The remaining inequalities originate from the remaining intervals.
For each interval $I_\ell$, for $\ell \in \{1, \ldots, 2 k -1\}\backslash \{j_k\}$,
let $I_\ell$ have the form $\{s_\ell, \ldots, e_\ell \}$.
We form the next $2 \cdot C(|I_\ell| -1, 2)$ inequalities of the form
\begin{equation*}
d_k \cdot g^T_{(s_k, a_k, b_k, e_k)}
y \geq g^T_{(s_\ell, r, t, e_\ell)} y \quad\text{and}\quad
d_k \cdot g^T_{(s_k, a_k, b_k, e_k)}
y \geq -g^T_{(s_\ell, r, t, e_\ell)} y
\end{equation*} for all $r,t \in \{s_\ell, \ldots, e_\ell - 1\}$ where $r < t$. \end{proof}
\subsection{Proof of \Fref{prop:additive_noise}, (Marginalization)}
\begin{proof} For concreteness, we write the proof where $W$ represents additive noise, but the proof generalizes to the setting where $W$ represents random intervals easily. First write $T(y_\mathrm{obs}, v)$ as an integral over the joint density of $W$ and $Y$, \begin{align}
T(y_\mathrm{obs}, v) &= P(v^T Y \ge v^Ty_\mathrm{obs}|M(Y+W) = M(y_\mathrm{obs} + W), \Pi_v^\perp Y = \Pi_v^\perp y_\mathrm{obs}) \nonumber\\
&= \int \mathds{1} (v^Ty \ge v^Ty_\mathrm{obs}) f_{W,Y|E_1,E_2}(w,y) dwdy . \label{eq:orig-randtg2} \end{align}
Then the joint density $f_{W,Y|E_1,E_2}(w,y)$ factors into two components, the latter of which (a probability mass function) can be rewritten using Bayes' rule. For convenience, denote $g(w)=\mathbb{P}(E_1 |W = w, E_2)$. \begin{align*}
f_{W,Y|E_1,E_2}(w,y) dy dw &= f_{Y|W=w,E_1,E_2}(y) \cdot
f_{W|E_1,E_2}(w) \;dy\; dw\\
&= f_{Y|W=w,E_1,E_2}(y) \cdot
\frac{\mathbb{P}(E_1|W=w,E_2)f_{W|E_2}(w)}{\mathbb{P}(E_1|E_2)} \; dy \; dw\\
&= f_{Y|W=w,E_1,E_2}(y) \cdot
\frac{g(w) f_{W}(w)}{\int g(w') f_{W}(w') dw'} \; dy \; dw, \end{align*} where we used the independence between $W$ and $E_2$ in the last equality. With this, $T(y_\mathrm{obs}, v) $ from \eqref{eq:orig-randtg2} becomes: \begin{equation*}
T(y_\mathrm{obs}, v) = \int \mathds{1} (v^Ty \ge v^Ty_\mathrm{obs}) \cdot g(w) \cdot \frac{f_{W|E_2}(w)}{\int g(w') f_{W}(w') dw'}
\cdot f_{Y|W=w,E_1,E_2}(y)\;dy\; dw. \end{equation*} Now, rearranging, we get: \begin{align} T(y_\mathrm{obs}, v) &= \int \underbrace{ \left[\int \mathds{1} (v^Ty \ge v^Ty_\mathrm{obs})\cdot
f_{Y|W=w,E_1,E_2}(y) dy \right]}_{T(y_\mathrm{obs}, v, w) } \underbrace{\frac{g(w)}{\int
g(w') f_{W}(w') dw'}}_{a(w)} f_{W}(w)dw \nonumber \\
&= \int T(y_\mathrm{obs}, v, w) a(w) \; f_{W}(w)\; dw. \label{eq:simpler-final-form} \end{align} This proves the first equality in \Fref{prop:additive_noise}. To show what the weighting factor $a(w)$ equals, observe that by applying Bayes' rule to the numerator of $a(w)$ and rearranging: \begin{align*}
a(w) &= \frac{g(w)}{\int g(w') f_W(w')\;dw'} = \frac{\mathbb{P}(E_1|E_2, W=w)}{P(E_1|E_2)} = \frac{\mathbb{P}(W=w |E_1, E_2) }{\mathbb{P}(W=w |E_2)}\\
&= \frac{\mathbb{P}(W=w |E_1,E_2)}{\mathbb{P}(W=w)}. \end{align*}
Finally, to show the second equality in \Fref{prop:additive_noise}, observe that we can also represent $a(w)$ as \begin{equation} \label{eq:a} a(w) = \frac{g(w)}{\mathbb{E}[g(W)]} \end{equation} by definition, where the denominator is the expectation taken with respect to the random variable $W$. Leveraging the geometric theorems of \cite{lee2016exact,tibshirani2016exact}, it can be shown that \begin{equation} \label{eq:g}
g(w) = P\Big(M(Y+W) = M(y_\mathrm{obs} + W) ~|~ \Pi^\perp_vY = \Pi^\perp_v y_\mathrm{obs}\Big) = \Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(\mathcal{V}_{\text{lo}}/\tau). \end{equation} Also from the same references, as stated in \Fref{sec:randomization}, we know that \begin{equation} \label{eq:t} T(y_\mathrm{obs}, v, w) = \frac{\Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(v^Ty_\mathrm{obs}/\tau)}{\Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(\mathcal{V}_{\text{lo}}/\tau)}. \end{equation} Putting \eqref{eq:a}, \eqref{eq:g} and \eqref{eq:t} together into \eqref{eq:simpler-final-form}, we complete the proof by obtaining \begin{align*} T(y_\mathrm{obs}, v) = \frac{\int T(y_\mathrm{obs}, v, w) g(w) f_W(w) dw}{ \int g(w) f_W(w) dw} = \frac{\int \left[\Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(v^Ty_\mathrm{obs}/\tau)\right] f_W(w) dw}{ \int \left[\Phi(\mathcal{V}_{\text{up}}/\tau) - \Phi(\mathcal{V}_{\text{lo}}/\tau)\right] f_W(w) dw}. \end{align*} \end{proof}
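For illustration, this final ratio can be estimated by plain Monte Carlo over draws of $W$. The sketch below assumes a user-supplied routine \texttt{vlo\_vup} returning the truncation limits $(\mathcal{V}_{\text{lo}}, \mathcal{V}_{\text{up}})$ for a given $w$; the interface is ours, not part of the formal result:

```python
import numpy as np
from scipy.stats import norm

def marginalized_tg_pvalue(vty_obs, tau, draw_w, vlo_vup, n_mc=1000, rng=None):
    # Monte Carlo estimate of the marginalized TG p-value: the ratio of
    # E_W[Phi(Vup/tau) - Phi(v'y_obs/tau)] to E_W[Phi(Vup/tau) - Phi(Vlo/tau)].
    rng = np.random.default_rng(rng)
    num = 0.0
    den = 0.0
    for _ in range(n_mc):
        w = draw_w(rng)              # sample W ~ f_W
        vlo, vup = vlo_vup(w)        # truncation limits for this w (model-specific)
        num += norm.cdf(vup / tau) - norm.cdf(vty_obs / tau)
        den += norm.cdf(vup / tau) - norm.cdf(vlo / tau)
    return num / den
```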
\section{Selected model tests, hit-and-run sampling for known $\sigma^2$} \label{app:known_sigma}
The following is the hit-and-run sampler to estimate the tail probability of the law of \eqref{eq:selective-distribution-saturated}. This is for the known $\sigma^2$ setting, which differs from the setting described in the main text in \Fref{sec:computation}. This was briefly described in \cite{fithian2015selective}, but the authors later implemented it in ways not described in that work to make it more efficient. We do not claim novelty for the following algorithm, but simply state it for completeness. The original code can be found in the repository \url{https://github.com/selective-inference}, and we reimplemented it to suit our coding framework and simulation setup.
We specialize our description to test the null hypothesis $H_0: v^T\theta = 0$ against the one-sided alternative $H_1: v^T\theta > 0$. There is some notation to clarify prior to describing the algorithm. Let $v \in\mathbb{R}^n$ denote the vector such that \[ v^T y = \bar{y}_{(\hat c_j + 1):\hat c_{j+1}} - \bar{y}_{(\hat c_{j-1} + 1):\hat c_{j}}. \] As in \Fref{sec:computation}, let $A \in \mathbb{R}^{k\times n}$ denote the matrix such that the last $k$ equations in the above display are satisfied if and only if $AY = Ay_\mathrm{obs}$. Based on \Fref{sec:polyhedra}, observe that our goal reduces to sampling from the $n$-dimensional distribution \begin{equation}\label{eq:full_gaussian} Y \sim \mathcal{N}(0, \sigma^2 I_n), \quad \text{conditioned on} \quad \Gamma Y \geq 0,\; AY = Ay_\mathrm{obs}, \end{equation} where $I_n$ is the $n \times n$ identity matrix.
The first stage of the algorithm \emph{removes the nullspace} of $A$ in the following sense. Construct any matrix $B \in \mathbb{R}^{n \times n}$ such that it has full rank and the last $k$ rows are equal to $A$. Then, consider the following $n$-dimensional distribution. \begin{equation} \label{eq:no_nullspace_gaussian1} Y' \sim \mathcal{N}(0, \sigma^2 B^TB), \quad \text{conditioned on} \quad \Gamma B^{-1} Y' \geq 0,\; (Y')_{(n-k+1):n} = Ay_\mathrm{obs}. \end{equation} Note that $B^{-1}Y'$ has the same law as \eqref{eq:full_gaussian}. Observe that the above distribution is a conditional Gaussian, meaning we can remove the last conditioning event. Towards that end, let $\Gamma''$ denote the first $n-k$ columns of the matrix $\Gamma B^{-1}$, and let $u''$ denote the last $k$ columns of $\Gamma B^{-1}$ left-multiplying $Ay_\mathrm{obs}$. Also, consider the following partitioning of the matrix $B^TB$, \[ \sigma^2 B^TB = \begin{bmatrix} B_{11} & B_{12} \\ B_{12}^T & B_{22} \end{bmatrix}, \] where $B_{11}$ is a $(n-k) \times (n-k)$ submatrix, $B_{12}$ is a $(n-k) \times k$ submatrix, and $B_{22}$ is a $k\times k$ submatrix. Then, consider the following $n-k$-dimensional distribution. \begin{equation}\label{eq:no_nullspace_gaussian2} Y'' \sim \mathcal{N}\Big(B_{12}B_{22}^{-1}(Ay_\mathrm{obs}), \; B_{11} - B_{12}B_{22}^{-1}B_{12}^T\Big), \quad \text{conditioned on} \quad \Gamma'' Y'' \geq -u''. \end{equation} Note that $Y''$ has the same law as the first $n-k$ coordinates of \eqref{eq:no_nullspace_gaussian1}.
The next stage of the algorithm \emph{whitens} the above distribution so its covariance is the identity. Let $\mu''$ and $\Sigma''$ denote the mean and variance of the unconditional form of the above distribution \eqref{eq:no_nullspace_gaussian2}. Let $\Theta$ be the matrix such that $\Theta \Sigma'' \Theta^T = I_{n-k}$. This must exist since $\Sigma''$ is positive definite. Consider the following $(n-k)$-dimensional distribution, \begin{equation}\label{eq:conditional_gaussian} Z \sim \mathcal{N} (0, I_{n-k}) , \quad \text{conditioned on} \quad \Gamma'' \Theta^{-1} Z \geq -u'' -\Gamma''\mu''. \end{equation} Note that $\Theta^{-1}Z+ \mu''$ has the same law as \eqref{eq:no_nullspace_gaussian2}. Hence, we have constructed linear mappings $F$ and $G$ between \eqref{eq:full_gaussian} and \eqref{eq:conditional_gaussian} such that $F(Y) \overset{d}{=} Z$, and $G(Z) \overset{d}{=} Y$.
In order to set up a hit-and-run sampler, generate $p$ unit vectors $g_1, \ldots, g_p$. (The choice of $p$ is arbitrary, and the specific method of generating these $p$ vectors is also arbitrary.) Our hit-and-run sampler will move in the linear directions dictated by $g_1, \ldots, g_p$. We are now ready to describe the hit-and-run sampler in \Fref{alg:hitandrun_knownsigma}, which leverages many of the same calculations as in \eqref{eq:tg_statistic} and \eqref{eq:vlo_vup}. The similarity arises since $\Pi_{g_i}^{\perp} Z = \Pi_{g_i}^{\perp}(Z + g_i)$ by definition of projection.
\begin{algorithm}[t] Choose a number $M$ of iterations.\\ Set $z^{(0)} = F(y_\mathrm{obs})$, as described in the text.\\ Generate $p$ unit directions $g_1, \ldots, g_p$, each vector of length $n$.\\ Compute $U = \Gamma'' \Theta^{-1} z^{(0)} +u'' + \Gamma'' \mu''$, which represents the ``slack'' of each constraint.\\ Compute the $p$ vectors, $\rho_i = \Gamma'' \Theta^{-1} g_i$ for $i \in \{1,\ldots, p\}$.\\
\For{$m \in \{1,\ldots,M\}$}{
Select an index $i$ uniformly from $1$ to $p$.\\ Compute the truncation bounds \[ \mathcal{V}_{\text{lo}} = g_i^Tz^{(m-1)} - \min_{j:(\rho_i)_j > 0} U_j/(\rho_i)_j, \quad\text{and}\quad \mathcal{V}_{\text{up}} = g_i^Tz^{(m-1)} - \max_{j:(\rho_i)_j < 0} U_j/(\rho_i)_j. \]\\ Sample $\alpha^{(m)}$ from a Gaussian with mean $g_i^Tz^{(m-1)}$ and variance $1$, truncated to lie between $\mathcal{V}_{\text{lo}}$ and $\mathcal{V}_{\text{up}}$.\\ Form the next sample \[ z^{(m)} = z^{(m-1)} + \alpha^{(m)} g_i, \quad \text{and} \quad y^{(m)} = G(z^{(m)}). \]\\ Update the slack variable, \[ U \leftarrow U + \alpha^{(m)} \rho_i. \] } Return the approximate for the tail probability of \eqref{eq:selective-distribution-selected-known-sigma}, $ \sum_{m=1}^{M} \mathds{1}[v^Ty^{(m)} \geq v^Ty_\mathrm{obs}]/M. $
\caption{MCMC hit-and-run algorithm for selected model test with known $\sigma^2$} \label{alg:hitandrun_knownsigma} \end{algorithm}
The computational efficiency of the above algorithm comes from the fact that little multiplication needs to be done with the polyhedron matrix $\Gamma'' \Theta^{-1}$, which is potentially huge. The vectors $U$ and $\rho_1, \ldots, \rho_p$, each of the same length, carry all the information needed about the polyhedron throughout the entire procedure of generating $M$ samples.
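A simplified sketch of such a sampler follows (our own parameterization of the step size, not the exact bookkeeping of \Fref{alg:hitandrun_knownsigma}; here \texttt{A} and \texttt{b} stand for the constraint matrix $\Gamma''\Theta^{-1}$ and the offset $u'' + \Gamma''\mu''$, and directions are drawn fresh each iteration):

```python
import numpy as np
from scipy.stats import norm

def hit_and_run_gaussian(z0, A, b, n_iter=2000, rng=None):
    # Sample Z ~ N(0, I) conditioned on A @ z + b >= 0, starting from a
    # feasible point z0. The slack U = A z + b and the per-direction
    # increments rho = A g are updated cheaply, as in the algorithm above.
    rng = np.random.default_rng(rng)
    z = np.asarray(z0, dtype=float)
    U = A @ z + b
    assert np.all(U >= 0), "z0 must satisfy the constraints"
    samples = []
    for _ in range(n_iter):
        g = rng.standard_normal(len(z))
        g /= np.linalg.norm(g)                 # random unit direction
        rho = A @ g                            # change in slack per unit step
        with np.errstate(divide="ignore", invalid="ignore"):
            lim = -U / rho                     # step sizes where constraints bind
        tlo = np.max(lim[rho > 0], initial=-np.inf)
        thi = np.min(lim[rho < 0], initial=np.inf)
        mu = -g @ z                            # mean of the conditional step size
        a, c = norm.cdf(tlo - mu), norm.cdf(thi - mu)
        t = mu + norm.ppf(a + rng.uniform() * (c - a))   # truncated-normal step
        z = z + t * g
        U = U + t * rho
        samples.append(z.copy())
    return np.array(samples)
```

Tail probabilities are then approximated by the fraction of samples exceeding the observed contrast, as in the final line of \Fref{alg:hitandrun_knownsigma}.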
\section{Additional simulation results}\label{app:simulations}
\subsection{Power comparison using unique detection} \label{app:unique-detection}
Fused lasso appeared to have a large drop in power compared to segmentation algorithms. In addition to the three measures shown in \Fref{sec:simulation}, for multiple changepoint problems like middle mutations it is useful to measure performance using an alternative measure of detection, called unique detection. This is useful because some algorithms -- mainly fused lasso, but also binary segmentation to some extent, primarily in later steps -- admit ``clumps'' of nearby points. If this clumped detection pattern occurs in early steps, the algorithm requires more steps than others to fully admit the correct changepoints. In this case, detection alone is not an adequate metric, and unique detection can be used in its place. \begin{equation}
\text{Unique detection probability} = \frac{\# \text{true changepoints that were approximately detected}}{\# \text{true changepoints}} \label{eq:powdef4} \end{equation} In plain words, unique detection measures how many of the true changepoint locations have been approximately recovered.
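A minimal sketch of this metric (the tolerance and interface are illustrative):

```python
def unique_detection(estimated, truth, tol=2):
    # Fraction of true changepoints approximately recovered: a true
    # changepoint counts once if any estimate lies within tol of it,
    # so clumps of nearby estimates are not double-counted.
    hits = sum(any(abs(e - t) <= tol for e in estimated) for t in truth)
    return hits / len(truth)
```

Note that a clump of estimates near a single true changepoint raises plain detection counts but contributes only once here.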
We present a simple case study. In addition to a 2-step fused lasso, imagine using a 3-step fused lasso, but with post-processing. For post-processing, declutter by centroid clustering with a maximum distance of 2, and test the $k_0<3$ changepoints, pitting the resulting segment test p-values against $0.05/k_0$. A 2-step fused lasso's detection does not reach 1 even at high signals ($\delta=4$) because of the aforementioned clumped detection behavior. The resulting segment tests are also not powerful, since the segment test contrast vectors consist of left and right segments that do not closely resemble the true underlying piecewise constant segments in the data. However, when detection is replaced with unique detection, two things are noticeable. First, decluttered fused lasso's detection performance improves noticeably when going from 2 to 3 steps. Also, when unconditional power is calculated using unique detection, binary segmentation does not have as large an advantage over the several variants of fused lasso. This is shown in \Fref{fig:unique-power-comparison}. We see from the right panel (compared to the left) that a ``decluttered'' version of 2- or 3-step fused lasso has unconditional power much closer to binary segmentation.
\begin{figure}
\caption{\it\small
(Left): Various detections for FL, either using 2 or 3 steps, and either using decluttering
or not.
(Middle): The unconditional power of various segmentation algorithms.
(Right): The unconditional power, but defined as the conditional power multiplied by
the unique detection probability.
}
\label{fig:unique-power-comparison}
\end{figure}
\subsection{Power comparison with different mean shape} \label{app:edge-mutation}
The synthetic mean discussed here is a piecewise constant mean with a single upward changepoint, as shown in \eqref{eq:edge-mutation} and \Fref{fig:power-comparison-data-edge}. This is chosen to be another realistic example of the mutation phenomenon observed in array CGH datasets from \citet{Snijders2001}, in addition to the case shown in the main text. We focus on the \textit{duplication} mutation scenario, but the results apply similarly to deletions. As before, the sample size $n=200$ was chosen to be on the scale of the data length in a typical array CGH dataset in a single chromosome. An example of this synthetic dataset can be seen in \Fref{fig:power-comparison-data-edge}. For saturated model tests, WBS no longer outperforms binary segmentation in power. This is expected since there is only a single changepoint, not accompanied by opposing-direction changepoints.
\begin{equation}\label{eq:edge-mutation} \hspace{-20mm}\textbf{Edge mutation:}\hspace{5mm}
y_i \sim \mathcal{N}(\theta_i, 1), \;\;
\theta_i = \begin{cases}
\delta & \text{ if } 161\le i \le 200\\
0 & \text{ if otherwise }\\
\end{cases} \end{equation}
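The edge-mutation data in \eqref{eq:edge-mutation} can be generated as follows (a sketch; the function name is ours):

```python
import numpy as np

def edge_mutation_data(delta, n=200, rng=None):
    # theta_i = delta for 161 <= i <= 200 (1-indexed), 0 otherwise;
    # y_i ~ N(theta_i, 1).
    rng = np.random.default_rng(rng)
    theta = np.zeros(n)
    theta[160:200] = delta        # 0-indexed slice for positions 161..200
    return theta + rng.standard_normal(n), theta
```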
\begin{figure}
\caption{\it\small Analogous to \Fref{fig:power-comparison-data} but representing edge mutations.}
\label{fig:power-comparison-data-edge}
\end{figure}
\begin{figure}
\caption{\it\small
Same setup as \Fref{fig:power-comparison} but
for edge-mutation data. }
\label{fig:power-comparison-edge}
\end{figure}
\subsection{Sample splitting (continued)} The results in \Fref{fig:samplesplit} were based on approximate detection where, for methods used on the entire dataset of length $n$, we defined a detection event as estimating a location within $\pm 2$ of a true changepoint location. For sample splitting, this was defined as estimating a location within $\pm 1$ of the true changepoint location based on half the dataset. This choice of approximate detection is somewhat arbitrary, and it is informative to see whether the results would change if we considered only exact detection. We can see from \Fref{fig:samplesplit-exact} that randomized TG p-values have power comparable to sample splitting inferences, among tests regarding exactly the right changepoints.
\begin{figure}
\caption{\it\small The same setup as in \Fref{fig:samplesplit} but with exact detection.}
\label{fig:samplesplit-exact}
\end{figure}
\section{Model size selection using information criteria} \label{app:ic}
Throughout the paper we assume that the number of algorithm steps $k$ is fixed. \citet{hyun2018exact} introduces a stopping rule based on information criteria (IC) which can be characterized as a polyhedral selection event. The IC for the sequence of models $M_{1:\ell}, \ell=1,\ldots, n-1$ is \begin{equation}
J(M_{1:\ell}) = \|y - \hat y_{M_{1:\ell}(y)}\|^2_2 + p\big(M_{1:\ell}(y)\big). \end{equation} We omit the dependency on $y$ when obvious. We use the BIC complexity penalty $p(M_k) = \sigma^2 \cdot k \cdot \log(n)$ for this paper. Also define $S_\ell(y) = \mathrm{sign}\left(J(M_{1:\ell}) -J(M_{1:(\ell-1)})\right)$ to be the sign of the difference in IC between step $\ell-1$ and $\ell$. This is a $+1$ for a rise and $-1$ for a decline. A data-dependent stopping rule $\hat k$ is defined as \begin{equation}\label{eq:stoprule}
\hat k (y) = \min\{k : S_k(y) = S_{k+1}(y) = \ldots = S_{k+q}(y) = 1\} \end{equation} which is a local minimization of IC, defined as the first time $q$ consecutive rises occur. As discussed in \cite{hyun2018exact}, $q=2$ is a reasonable choice for changepoint detection. To carry out valid selective inference, we condition on the selection event $\mathds{1}[ S_{1:(k+q)}(y) = S_{1:(k+q)}(y_\mathrm{obs})]$, which is enough to determine $\hat k$. A $k$-step model for $k$ chosen by \eqref{eq:stoprule} can be understood to be $ M_{1:\hat k}(Y) = M_{1:k}(y_\mathrm{obs})$. The corresponding selection event $P_{M_{1:\hat{k}}}$ is characterized by the additional halfspaces outlined in \cite{hyun2018exact}. Simulations in Figure \ref{fig:ic-power} show that introducing IC stopping is valid, with controlled Type-I error, but comes at the cost of considerable power loss.
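A sketch of this stopping rule (here we implement ``the first $k$ at which $q$ consecutive rises occur''; indexing conventions are ours):

```python
import numpy as np

def ic_stopping_rule(ic_values, q=2):
    # First step k at which q consecutive rises in the IC sequence occur;
    # ic_values[l] is the IC of the model after l algorithm steps, and
    # S[l-1] = sign(J(M_{1:l}) - J(M_{1:(l-1)})).
    S = np.sign(np.diff(np.asarray(ic_values, dtype=float)))
    for k in range(1, len(S) - q + 2):
        if all(S[k - 1 + j] == 1 for j in range(q)):
            return k
    return None  # no local minimum found within the sequence
```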
\begin{figure}
\caption{\small\it
Similar setup as \Fref{fig:power-comparison}.
In the middle-mutation data example from
\eqref{eq:middle-mutation}. IC-stopped binary segmentation inference (bold
line) is compared to a fixed 2-step binary segmentation inferences (thin
line). We can see that the power and detection are considerably lower. The
average number of steps taken per each $\delta$ on x-axis ticks are
$1.34, 1.86, 3.02, 3.64, 3.77, 3.72$, respectively.}
\label{fig:ic-power}
\end{figure}
\end{document} | arXiv |
Medical & Biological Engineering & Computing
October 2013, Volume 51, Issue 10, pp 1069–1077
A real-time system for biomechanical analysis of human movement and muscle function
Antonie J. van den Bogert
Thomas Geijtenbeek
Oshri Even-Zohar
Frans Steenbrink
Elizabeth C. Hardin
Mechanical analysis of movement plays an important role in clinical management of neurological and orthopedic conditions. There has been increasing interest in performing movement analysis in real-time, to provide immediate feedback to both therapist and patient. However, such work to date has been limited to single-joint kinematics and kinetics. Here we present a software system, named human body model (HBM), to compute joint kinematics and kinetics for a full body model with 44 degrees of freedom, in real-time, and to estimate length changes and forces in 300 muscle elements. HBM was used to analyze lower extremity function during gait in 12 able-bodied subjects. Processing speed exceeded 120 samples per second on standard PC hardware. Joint angles and moments were consistent within the group, and consistent with other studies in the literature. Estimated muscle force patterns were consistent among subjects and agreed qualitatively with electromyography, to the extent that can be expected from a biomechanical model. The real-time analysis was integrated into the D-Flow system for development of custom real-time feedback applications and into the gait real-time analysis interactive lab system for gait analysis and gait retraining.
Keywords: Gait · Movement analysis · Biomechanics · Real-time · Virtual reality
The online version of this article (doi: 10.1007/s11517-013-1076-z) contains supplementary material, which is available to authorized users.
1 Introduction

Biomechanical analysis of human movement has become an important tool for basic research and for clinical management of orthopedic and neurological conditions. Clinical movement analysis is traditionally performed off-line by processing of previously recorded raw motion and force data, resulting in a laboratory or gait report to the clinician who makes treatment decisions. Clinically relevant information in the report typically includes the time histories of biomechanical variables such as joint angles (kinematics) and joint moments (kinetics) [15]. In recent years, musculoskeletal models have been used to provide additional information about muscle length changes [2] and muscle forces [8, 9, 12, 30].
A real-time biomechanical analysis, as opposed to a report that is generated during post-processing, would create unique opportunities for both the patient and the therapist to interact in real-time with biomechanical data during patient examination or treatment. Clinicians and physical therapists could benefit from a real-time visualization and quantification of specific motion variables, as well as from having additional information about internal forces and moments which would remain otherwise fundamentally invisible. Furthermore, such biomechanical data can also be presented to the patient in real-time, to help them perform therapeutic exercises more effectively than could be done with verbal or tactile feedback from a physical therapist [10].
Custom applications have been developed for feedback training using specific variables computed in real-time, such as a single joint angle [3] or a single joint moment [25]. To make real-time computation feasible, approximations are often used that neglect certain mechanical effects, such as inertial terms in the equations of motion [25]. Real-time commercial systems are currently limited to kinematic variables (joint angles) [3, 27] and possibly joint moments, but do not include muscle variables. Although angles and moments can be a useful surrogate for tissue loads and muscle recruitment that are relevant to orthopedic or neurological rehabilitation, an analysis at the muscle level is needed for a full understanding [8, 9]. This is, however, computationally demanding because muscle forces must be estimated simultaneously for all muscles in a limb, or ideally, in the whole body [8, 9]. Consequently, currently available software systems for analysis of muscle function (Anybody, www.anybodytech.com; and OpenSim [8]) do not perform real-time analysis.
In this paper we present a full human body model (HBM) that can produce a real-time analysis of 3D kinematics, kinetics, and muscle function. The goals of this paper are (1) to present the model and the methods of computation, and (2) to present results from a group of able-bodied subjects.
2 Methods
2.1 Numerical methods
Within the HBM, the processing pipeline consists of inverse kinematics, low-pass filtering, inverse dynamics, muscle kinematics (length change and moment arms), and muscle force estimation (Fig. 1). In order to keep up with an input stream of 120 frames per second (fps), which is typical for inverse dynamic analysis, the total computation time for all processing steps must be <8.33 ms per frame.
Fig. 1 Data flow within the human body model (HBM)
The kinematic model in HBM consists of 16 rigid body segments that are coupled by joints, with a total of 44 kinematic degrees of freedom. Subject-specific joint centers and axes are calculated from 3D coordinates of markers attached to anatomical landmarks, while the subject is in an initialization pose. Details can be found in "Supplemental Material". Inertial properties for all body segments are estimated during initialization from segment lengths and total body mass using published regression equations [6]. Forward kinematic equations were generated to express the global 3D position \(\vec{r}_{i} ({\mathbf{q}})\) of a marker i as a function of the 44 generalized coordinates q. Given a set of marker coordinates \(\vec{r}_{{i,{\text{meas}}}}\) measured by the motion capture system, the inverse kinematic problem is to find the model pose q that best fits the marker data. This was formulated as a nonlinear least-squares problem:
$${\mathbf{q}} = \arg\min_{{\mathbf{q}}} \sum\limits_{i = 1}^{N} {\left\| {\vec{r}_{i} ({\mathbf{q}}) - \vec{r}_{{i,{\text{meas}}}} } \right\|^{2} } $$
A full body marker set consisting of N = 47 markers was defined (see "Supplemental Material") to provide redundancy and robustness against occasional marker dropout which is inevitable in real-time motion capture. After solving (1), the estimated body pose is processed by a real-time low-pass filter (second order Butterworth) that outputs the smoothed pose q as well as the generalized velocities \({\dot{\mathbf{q}}}\) and generalized accelerations \(\ddot{{\mathbf{q}}}\). Details on the filter and its implementation are presented elsewhere [29]. The user would set the cutoff frequency of the filter based on the bandwidth of the movement that is being studied. Force platform data were processed with the same filter to prevent impact artifacts in the subsequent inverse dynamic calculations [16].
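The inverse-kinematic step can be sketched as follows (Python; the forward-kinematics interface below is illustrative, not HBM's actual API, and occluded markers are dropped via a NaN mask):

```python
import numpy as np
from scipy.optimize import least_squares

def solve_inverse_kinematics(forward_kin, q_init, markers_meas):
    # Find the pose q minimizing the summed squared marker residuals,
    # as in the nonlinear least-squares problem (1). forward_kin(q) must
    # return an (N, 3) array of model marker positions; occluded markers
    # (rows of NaN) are dropped from the fit.
    markers_meas = np.asarray(markers_meas, dtype=float)
    mask = ~np.isnan(markers_meas).any(axis=1)

    def residuals(q):
        return (forward_kin(q)[mask] - markers_meas[mask]).ravel()

    sol = least_squares(residuals, q_init, method="lm")  # Levenberg-Marquardt
    return sol.x
```

In a real-time loop, the previous frame's solution would serve as `q_init`, keeping the solver close to convergence at every frame.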
In the inverse dynamics processing step, a vector \({\varvec{\tau}}\) of unknown forces and moments, associated with the kinematic degrees of freedom, is solved from the multibody equations of motion:
$${\varvec{\tau}} = {\mathbf{M}}({\mathbf{q}})\ddot{\mathbf{{q}}} + {\mathbf{c}}({\mathbf{q}},{\dot{\mathbf{q}}}) + {\mathbf{B}}({\mathbf{q}}){\varvec{\tau}}_{\text{ext}} $$
where M is a square mass matrix and c contains the terms related to Coriolis and centrifugal effects and gravity. The final term represents the measured external forces (force plate data). Joint power was calculated as the product of joint moment and angular velocity. Separate equations were used to compute the full 6-DOF intersegmental loads at the knee, and these loads were expressed in the reference frame of the shank.
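A one-DOF instance of Eq. (2), with assumed values, may make the terms concrete: for a rigid pendulum swinging about a pin, M(q) reduces to a scalar inertia and c to a gravity term (there is no Coriolis term in one DOF, and no external load \({\varvec{\tau}}_{\text{ext}}\) is applied).

```python
# One-DOF inverse dynamics sketch: tau = M(q)*qdd + c(q, qd), Eq. (2).
# Mass and length are illustrative, not from the paper's regression model.
import numpy as np

m, l, g = 5.0, 0.4, 9.81   # assumed segment mass (kg), length (m), gravity (m/s^2)

def inverse_dynamics(q, qd, qdd):
    """Return the joint moment tau for pose q, velocity qd, acceleration qdd."""
    M = m * l**2               # point mass at distance l: scalar "mass matrix"
    c = m * g * l * np.sin(q)  # gravity term (q measured from the downward vertical)
    return M * qdd + c

tau = inverse_dynamics(q=0.0, qd=0.0, qdd=2.0)  # hanging straight down, accelerating
```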
A total of 300 muscles are presently included in the model, based on previously published musculoskeletal models: 43 muscle elements in each lower extremity [7], 102 in each arm [4], and 10 in the spine [17]. The coupling between muscles and skeleton was represented by polynomials that compute total muscle–tendon length L as a function of skeleton pose q:
$$L({\mathbf{q}}) = \sum\limits_{i = 1}^{{N_{\text{terms}} }} {c_{i} } \prod\limits_{j = 1}^{{N_{\text{DOF}} }} {q_{j}^{{E_{ij} }} } $$
The number of terms will depend on how much detail is required to represent the function \(L({\mathbf{q}})\). Based on the principle of virtual work [1], the muscle moment arm d k with respect to a joint angle k is computed analytically by partial differentiation:
$$d_{k} = - \frac{{\partial L({\mathbf{q}})}}{{\partial q_{k} }} = - \sum\limits_{i = 1}^{{N_{\text{terms}} }} {c_{i} E_{ik} \prod\limits_{j = 1}^{{N_{\text{DOF}} }} {q_{j}^{{E_{ij} - \delta_{kj} }} } } $$
where \(\delta_{kj}\) is the Kronecker delta. Coefficients c i and exponents E ij were obtained by stepwise regression to fit the polynomial model to moment arms obtained from OpenSim [8] for a sufficiently large set of skeleton poses q. The stepwise regression successively added terms (up to a maximum order) to the polynomial until the difference in moment arm between the polynomial and the OpenSim result was reduced to <2 mm. The muscle shortening velocity was computed as the dot product of the moment arms d and the generalized velocities \({\dot{\mathbf{q}}}\):
$$v = - \frac{{{\text{d}}L({\mathbf{q}})}}{{{\text{d}}t}} = - \sum\limits_{k} {\frac{{\partial L({\mathbf{q}})}}{{\partial q_{k} }}\frac{{{\text{d}}q_{k} }}{{{\text{d}}t}} = {\mathbf{d}}^{\text{T}} {\dot{\mathbf{q}}}} . $$
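The chain (3)-(5) can be sketched with a toy "muscle" spanning two joint angles and three polynomial terms. The coefficients c and exponent table E below are made up for illustration; in HBM they come from the stepwise regression against OpenSim moment arms.

```python
# Polynomial muscle-tendon length model, Eqs. (3)-(5), with invented numbers.
import numpy as np

c = np.array([0.30, -0.02, 0.005])        # term coefficients c_i (illustrative)
E = np.array([[0, 0], [1, 0], [1, 1]])    # exponent table E_ij (terms x DOFs)

def muscle_length(q):
    """Eq. (3): L(q) = sum_i c_i * prod_j q_j**E_ij."""
    return float(np.sum(c * np.prod(q ** E, axis=1)))

def moment_arms(q):
    """Eq. (4): d_k = -dL/dq_k, evaluated analytically from the polynomial."""
    d = np.zeros(len(q))
    for k in range(len(q)):
        Ek = E.copy()
        Ek[:, k] = np.maximum(Ek[:, k] - 1, 0)   # lower the k-th exponent by one
        d[k] = -np.sum(c * E[:, k] * np.prod(q ** Ek, axis=1))
    return d

def shortening_velocity(q, qdot):
    """Eq. (5): v = d^T qdot (positive when the muscle shortens)."""
    return float(moment_arms(q) @ qdot)

q = np.array([0.5, 0.2])                  # a sample pose (rad)
L = muscle_length(q)
d = moment_arms(q)
v = shortening_velocity(q, np.array([1.0, 2.0]))
```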
The final processing step performed static optimization to simultaneously estimate the forces F in all muscle elements. The optimization problem is formulated as a quadratic programming problem [9, 30]:
$$\begin{array}{*{20}c} {{\mathbf{F}} = \arg \mathop {\hbox{min} }\limits_{{\mathbf{F}}} \sum\limits_{i = 1}^{{N_{\text{muscles}} }} {V_{i} \left( {\frac{{F_{i} }}{{F_{{{ \hbox{max} },i}} }}} \right)}^{2} } \hfill \\ {\quad \quad {\text{subject to }}\left\{ {\begin{array}{l} {{\mathbf{D}}({\mathbf{q}}){\mathbf{F}} = {\varvec{\tau}}} \\ {F_{i} \ge 0} \\ \end{array} } \right.} \hfill \\ \end{array} \, $$
where \(F_{{{ \hbox{max} },i}}\) is the maximal force that muscle i can produce and V i is the muscle volume, which was assumed to be proportional to the product of maximal force and fiber length. These muscle properties were taken from the original models [4, 7, 17]. Weighting of the optimization objective by muscle volume is required to make the solutions independent of the level of discretization of the muscular anatomy [14]. The matrix \({\mathbf{D}}({\mathbf{q}})\) contains the moment arms \(d_{ij}\) of muscle j with respect to kinematic variable i, which are dependent on joint angles q and computed using (4). Power generation of each muscle is now easily calculated as the product of muscle force and shortening velocity (5).
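A small numeric instance of problem (6) may clarify the load sharing: one joint moment produced by three muscle elements (two agonists, one antagonist). The moment arms, strengths, and volumes are illustrative, not HBM values, and a general-purpose SQP routine stands in for the paper's recurrent-network solver.

```python
# Toy static optimization (6): minimise volume-weighted squared activation
# subject to D F = tau and F >= 0.
import numpy as np
from scipy.optimize import minimize

D = np.array([[0.05, 0.04, -0.03]])      # moment arms (m): antagonist is negative
tau = np.array([30.0])                   # required joint moment (Nm)
Fmax = np.array([1000.0, 1500.0, 800.0]) # maximal muscle forces (N)
V = np.array([0.4, 0.6, 0.3])            # muscle volumes (weighting in Eq. 6)

def cost(F):
    return np.sum(V * (F / Fmax) ** 2)

res = minimize(cost, x0=np.zeros(3), method="SLSQP",
               bounds=[(0, None)] * 3,
               constraints={"type": "eq", "fun": lambda F: D @ F - tau})
F = res.x   # the quadratic cost spreads load over the agonists; antagonist ~ 0
```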
The HBM was implemented as a software library with a C/C++ application programming interface (API), coded with specific emphasis on real-time computation. C code for the forward kinematic model in (1) was generated using Autolev (Online Dynamics, Sunnyvale, CA, USA). The nonlinear optimization problem in (1) was solved with the Levenberg–Marquardt algorithm [20], with a Jacobian matrix for the forward kinematic model that was generated by symbolic differentiation in Autolev. The solution of each frame was used as the initial guess for the next frame. Solver iterations were terminated after a specified computation time, to ensure real-time performance. Autolev also generated the C code to compute the joint moments using (2). The static optimization problem (6) was solved with a recurrent neural network [32], simulated numerically with the forward Euler method up to a specified computation time for each frame. The result of each frame was used as the initial condition for the next frame.
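The warm-start plus fixed-time-budget idea can be sketched as follows. This is only an illustration: the paper's actual solver is the recurrent neural network of Xia and Feng [32]; here a projected-gradient iteration on a penalty form of (6) plays the same role of a simple dynamical system that is integrated until the per-frame budget expires, warm-started from the previous frame's solution. The penalty weight and step size are assumptions.

```python
# Fixed-time-budget iterative solve with warm starting between frames.
import time
import numpy as np

D = np.array([[0.05, 0.04, -0.03]])       # illustrative moment arms (m)
Fmax = np.array([1000.0, 1500.0, 800.0])
V = np.array([0.4, 0.6, 0.3])
rho = 1e4                                 # assumed penalty weight on D F = tau

def solve_frame(tau, F_init, budget_s=0.005, step=1e-3):
    """Iterate until the time budget is spent; project onto F >= 0 each step."""
    F = F_init.copy()
    deadline = time.perf_counter() + budget_s
    while time.perf_counter() < deadline:
        grad = 2 * V * F / Fmax**2 + 2 * rho * D.T @ (D @ F - tau)
        F = np.maximum(F - step * grad, 0.0)
    return F

F = np.zeros(3)
for tau_t in ([30.0], [31.0], [32.0]):    # streaming joint moments, frame by frame
    F = solve_frame(np.array(tau_t), F)   # warm start from the previous frame
```

Because consecutive frames differ only slightly, the warm start leaves the iteration very close to the optimum, which is why premature termination costs so little accuracy (Fig. 3).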
HBM was integrated in two applications. D-Flow (Motek Medical, Amsterdam, the Netherlands) provides a software development platform for custom applications that generate real-time feedback and visualization in a virtual reality environment [10]. Within D-Flow, biomechanical variables obtained from HBM can be visualized on an avatar, using a coloring scheme to illustrate active muscles, or can be used to control events and objects in a virtual environment, providing many possibilities for rehabilitation, research, and sports (Fig. 2). The lower extremity portion of HBM was also integrated in GRAIL (Gait Real-time Analysis Interactive Lab, Motek Medical, Amsterdam, the Netherlands) for clinical gait analysis and gait retraining. The results presented in this paper were obtained with HBM embedded in D-Flow version 3.10.1.
Screen image from the D-Flow system. The distributed rendering system (DRS) window is normally displayed on a large projection screen for interaction with patient and therapist. Muscle activation is visualized as a change in muscle color. The window on the bottom right is the console for application development, showing the data flow editor and the connection editor. A simple application is shown, in which estimated quadriceps forces are used to control a virtual ball, such that upward motion responds to total force, and horizontal motion responds to asymmetry. This simple application would help a patient train to increase their quadriceps activation while maintaining left–right symmetry. The window on the left is the user interface for the HBM
2.3 Human subject data
Twelve healthy subjects (11 males and 1 female) volunteered to participate in this study which was approved by the Institutional Review Board of the Cleveland VA Medical Center. Average subject characteristics were: age 28.3 ± 3.9 years, body mass (with shoes) 75.9 ± 11.2 kg, and height 175 ± 8 cm. Subjects walked on a split-belt instrumented treadmill (ADAL3DM-F-COP-Mz, Tecmachine, France) for 30 s at their preferred walking speed and wearing their own shoes. Preferred walking speed was 0.97 ± 0.12 m/s with a gait cycle of 1.23 ± 0.09 s. During walking, kinematic marker data were collected at 100 Hz via a 16-camera passive marker motion capture system (Vicon, Oxford Metrics, UK) with the marker set described in "Supplementary Material". Ground reaction forces were collected at 1,000 Hz from load cells in the treadmill.
For data processing, 100 frames were averaged from a standing trial for initialization of the subject-specific model. The low-pass filter was set to 6 Hz. Computation time limits for the iterative solvers were set to 1 ms for inverse kinematics, and 5 ms for static optimization. HBM was executed under Windows 7 on a 2.4 GHz Intel i5 CPU. All output variables were ensemble averaged over the 30-s trial to obtain one average gait cycle for each subject, from right heel strike to right heel strike. It was verified that the subjects had symmetrical gait, and therefore only the results from the right lower extremity will be presented.
On one subject, the analysis was performed at various computation time settings. Error due to premature termination of the iterative solvers was quantified as the overall root mean square (RMS) difference in joint angles and muscle forces between the test result and a result where there was no time limit for computation.
With a computation time limit of 1 ms per frame, the kinematic solver (1) terminated, on average at 1.24 ms after doing four iterations. The low-pass filter required 0.07 ms, and the inverse dynamic calculation (2) required 0.41 ms. The iterative solver for the static optimization problem (6) performed, on average, 230 Euler integration steps in the allotted time of 5 ms. Errors due to time limits in the iterative solvers are shown in Fig. 3. At real-time speed settings, the errors due to premature termination of the iteration process were <0.01° for kinematics and <5 % for muscle forces. Figure 3 can be used to determine how these errors would change when the code is executed on faster or slower computer hardware, or when time limits are adjusted to a different frame rate for the streaming raw data.
Errors in joint angles and muscle forces as a function of the allowed computation time in, respectively, the kinematic solver (1) and the static optimization (6). Results are presented for one representative subject. Arrows indicate the settings that are normally used for real-time analysis
Figure 4 (top panels) shows the lower extremity joint angles, moments, and powers obtained from all subjects. When available, results from the literature [24] were superimposed for comparison. Intersegmental knee loads are presented in the bottom panels of Fig. 4.
The top two rows show lower extremity joint angles and moments obtained with the human body model (HBM) from the 12 able-bodied subjects walking at preferred speed. Each curve represents one subject's mean gait cycle. The shaded area represents mean and standard deviation from a study on children [24], for those variables that were available. Other joint-related variables are available in HBM, but not shown: joint angular velocity, and joint power generation. The bottom two rows show the inter-segmental loads at the knee, acting on the shank segment, and expressed using the axes of the shank reference frame: X (anterior), Y (lateral), and Z (superior)
Muscle forces, length changes, shortening velocities, and powers in the lower extremity and spine are presented in Fig. 5 for 16 selected muscles, with electromyography (EMG) data from the literature [31] for visual comparison.
Forces and length changes for 16 muscle groups. EMG patterns from the literature [31] are shown for comparison, with the area under the EMG-time curve shaded. Amplitudes of the EMG patterns were scaled to coincide with the amplitude of estimated muscle force. Other muscle-related variables are available in HBM, but not shown: velocity of length change, power generation, and muscle activation (F/F max)
All results, including those not shown in figures, are available as "Supplementary Material".
4 Discussion
We have developed a system that performs a full biomechanical analysis of human movement in real time. The analysis performed by the system is identical to existing approaches for inverse kinematic analysis [8], inverse dynamic analysis [30], and muscle force estimation [30]. The real-time performance is achieved not by simplifications of the model or the analysis, but by several innovations in the computational methods used to solve it. Because the software does not need the capability to solve other models, the kinematic model and inverse dynamic model could be coded symbolically using the Autolev system. The resulting C code had a length of several megabytes, but was free from overhead due to loops, tests and branches, and function calls, and required only several milliseconds to execute. Muscle moment arm calculations were accelerated by using polynomials (3) that acted as lookup tables to produce results that were, for practical purposes, identical to the more time-consuming geometrical calculations performed by OpenSim [8]. The static optimization problem to estimate muscle forces was solved by an iterative method [32] that eliminates the need to solve large systems of linear equations. It has been proved that this method produces the same solution as conventional methods for quadratic programming [32], when iterated long enough. In real-time applications, the initial guess is the result of the previous frame, and is already very close to the correct solution. This allows us to terminate the iterations when the available computation time has been used up. Figure 3 shows that within 5 ms the solution is, on average, already within 5 % of the exact solution that would be reached if the algorithm were given unlimited computation time.
As configured, the total time to perform all model-based analyses was 6.72 ms, well within the requirement for real-time processing of streaming raw data at 120 fps, and a lag time that is sufficiently short for feedback and training applications. The kinematic analysis was hardly affected by allowing only 1 ms of computation, and could even be done at higher camera frame rates (when available) to maximize the benefit of noise reduction by low-pass filtering for estimation of velocities and accelerations. After the low-pass filtering, however, bandwidth is reduced and inverse dynamic analysis and static optimization can be performed at lower frame rate without loss of accuracy. This would reduce the load on the processor, or improve accuracy, or allow more complex models to be solved.
A low-pass filter was used to prevent noise in the inverse dynamic results, but unlike offline filtering, a time lag is inevitable in a real-time filter. The second order real-time Butterworth filter has a phase delay of 0.22/f, where f is the corner frequency [29]. With the 6 Hz filter that was used for the gait data, this amounts to 37 ms or about 4 % of the gait cycle. The results presented in Figs. 4 and 5 were not corrected for this delay; the results are presented as they would appear in a real-time application. This 4 % delay should be kept in mind when interpreting these results or comparing them to results from other studies.
Joint angles and moments (Fig. 4) showed the typical features that are usually seen in mechanical analysis of gait [24]. Differences between studies are inevitable because of differences in study population and test protocol. Our results show lower knee and ankle moments (normalized to body mass) than [24], which is not surprising because of the shoes and the higher length–mass ratio in adults. Hip moments are affected by the choice of reference frame [23]. We reported the joint moments in a joint coordinate system, rather than in the thigh reference frame as in [24]. Other modeling assumptions have an effect as well, such as the definition of joint centers and joint axes. Details of the data processing can also affect results. Our system and OpenSim [8] both use redundant marker sets to suppress the effect of soft tissue motion, while existing commercial systems for clinical movement analysis, such as that used in [24], do not. The resulting differences can be substantial, but do not always interfere with clinical applications. The current practice is that each laboratory obtains its own normal reference data, using its study population, study protocol, and software system. The question may still be raised as to which system produces a more "correct" result, but this is outside the scope of this paper.
Intersegmental forces and moments are useful for orthopedic questions related to joint injury. We have not yet implemented this for all joints in the model, but we do have this information available for the knee joint (Fig. 4), where these variables have been shown to be relevant to the risk of ACL injury [13] and progression of osteoarthritis [3, 25]. The ability to calculate knee joint loads and provide feedback on these variables in real time can help athletes and patients modify these variables via gait retraining exercises [3, 25]. Future versions of the software will provide information about intersegmental loads at all joints.
Estimated muscle forces (Fig. 5) had peaks that coincided with peaks in normal EMG [31] for most muscles, notable exceptions being the sartorius and rectus femoris muscles. Similar relationships between muscle force and EMG are found in other modeling studies of walking [12, 28]. Perfect correlation cannot be expected, because EMG measures activation, not force. When there are major discrepancies in the timing of peaks, however, it is likely that the force estimate is incorrect. This can be caused by errors in the moment arms of the muscle in the model, or by the assumption that muscle force is distributed according to an optimization principle, as stated in Eq. (6). These results show that users must be cautious when using the muscle force estimates, especially for certain muscles.
Analysis of muscle contraction kinematics and muscle forces is not yet well established in clinical movement analysis, but there are large potential benefits. For instance, information about muscle length change during gait can assist surgical planning for patients with cerebral palsy [2]. In stroke patients, estimation of muscle forces during gait can help identify specific deficits and compensatory strategies [19]. Software tools are already available for such analyses (Anybody and OpenSim) but these tend to be research-oriented and not sufficiently fast or user-friendly for clinical applications. Our system is, at this time, the only system that can perform muscle force estimation in real time. It is important that these estimates are validated before the system is applied clinically, and the validation must be done with a well-designed study that is relevant to the clinical question.
We performed the muscle force estimation using static optimization (6). This does not take into account the force–length or force–velocity properties, or the internal dynamics of the muscles. Some of these properties are included in the OpenSim and Anybody systems; this increases the computational cost but may not significantly improve the results in clinical applications [18]. The quadratic cost function [30] was chosen over the classical cubic cost function [5], mainly because it allowed us to use an efficient real-time solution method [32]. While the choice of cost function is a subject of active research, the results of a static optimization seem to be rather robust with respect to the choice of cost function [11, 26]. A promising alternative is the minmax criterion [21], which would allow a real-time implementation but may lead to discontinuities in the muscle force trajectories [22]. A fundamental limitation of model-based muscle force estimation, as presented here, is that the same generic muscle models are used for all subjects. We assume standard anatomy (moment arms) and standard muscle strengths. Therefore, muscle force estimates may be biased toward normal in patients with neurological problems, muscle weakness, or pain. An approach to overcome such limitations was recently proposed [33], but it requires extensive patient calibration protocols, which would be impractical in routine clinical use.
In conclusion, we have shown that a full biomechanical analysis of joint and muscle function can be obtained in real time, and that results are consistent between subjects and resemble previously published results. Real-time processing offers the unique opportunity for interactive use of biomechanical movement analysis in which the patient and therapist not only interact with each other, but also with biomechanical information that is presented to them in real time using advanced visualization methods (Fig. 2).
We acknowledge the assistance of Stephanie Nogan (Cleveland VA Medical Center) with the data collection.
Supplementary material
Detailed description of the model (PDF)
Subject characteristics (XLS)
Ground reaction force variables for each foot: 3D force (N/kg), center of pressure (m), free vertical moment (Nm/kg) (XLS)
3D coordinates of the whole-body center of mass (m) (XLS)
Kinematic analysis results (meters and degrees) (XLS)
Inverse dynamic analysis results (N/kg and Nm/kg) (XLS)
Joint power for each kinematic degree of freedom (W/kg) (XLS)
6-DOF intersegmental loads (N/kg and Nm/kg) (XLS)
Muscle forces (N/kg) (XLS)
Muscle activations (F/Fmax) (XLS)
Muscle power (W/kg) (XLS)
Muscle length changes (m) (XLS)
Muscle shortening velocities (m/s) (XLS)
References
[1] An KN, Takahashi K, Harrigan TP, Chao EY (1984) Determination of muscle orientations and moment arms. J Biomech Eng 106:280–282
[2] Arnold AS, Liu MQ, Schwartz MH, Ounpuu S, Delp SL (2006) The role of estimating muscle-tendon lengths and velocities of the hamstrings in the evaluation and treatment of crouch gait. Gait Posture 23:273–281
[3] Barrios JA, Crossley KM, Davis IS (2011) Gait retraining to reduce the knee adduction moment through real-time visual feedback of dynamic knee alignment. J Biomech 43:2208–2213
[4] Chadwick EK, Blana D, van den Bogert AJ, Kirsch RF (2009) A real-time, 3-D musculoskeletal model for dynamic simulation of arm movements. IEEE Trans Biomed Eng 56:941–948
[5] Crowninshield RD, Brand RA (1981) A physiologically based criterion of muscle force prediction in locomotion. J Biomech 14:793–801
[6] de Leva P (1996) Adjustments to Zatsiorsky–Seluyanov's segment inertia parameters. J Biomech 29:1223–1230
[7] Delp SL, Loan JP, Hoy MG, Zajac FE, Topp EL, Rosen JM (1990) An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE Trans Biomed Eng 37:757–767
[8] Delp SL, Anderson FC, Arnold AS, Loan P, Habib A, John CT, Guendelman E, Thelen DG (2007) OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Trans Biomed Eng 54:1940–1950
[9] Erdemir A, McLean S, Herzog W, van den Bogert AJ (2007) Model-based estimation of muscle forces exerted during movements. Clin Biomech 22:131–154
[10] Geijtenbeek T, Steenbrink F, Otten B, Even-Zohar O (2011) D-flow: immersive virtual reality and real-time feedback for rehabilitation. In: Proceedings of the 10th international conference on virtual reality continuum and its applications in industry (VRCAI '11). ACM, New York, pp 201–208
[11] Glitsch U, Baumann W (1997) The three-dimensional determination of internal loads in the lower extremity. J Biomech 30:1123–1131
[12] Heintz S, Gutierrez-Farewik EM (2007) Static optimization of muscle forces during gait in comparison to EMG-to-force processing approach. Gait Posture 26:279–288
[13] Hewett TE, Myer GD, Ford KR, Heidt RS Jr, Colosimo AJ, McLean SG, van den Bogert AJ, Paterno MV, Succop P (2005) Biomechanical measures of neuromuscular control and valgus loading of the knee predict anterior cruciate ligament injury risk in female athletes: a prospective study. Am J Sports Med 33:492–501
[14] Holmberg LJ, Klarbring A (2012) Muscle decomposition and recruitment criteria influence muscle force estimates. Multibody Syst Dyn 28:283–289
[15] Kadaba MP, Ramakrishnan HK, Wootten ME, Gainey J, Gorton G, Cochran GV (1989) Repeatability of kinematic, kinetic, and electromyographic data in normal adult gait. J Orthop Res 7:849–860
[16] Kristianslund E, Krosshaug T, van den Bogert AJ (2012) Effect of low pass filtering on joint moments from inverse dynamics: implications for injury prevention. J Biomech 45:666–671
[17] Lambrecht JM, Audu ML, Triolo RJ, Kirsch RF (2009) Musculoskeletal model of trunk and hips for development of seated-posture-control neuroprosthesis. J Rehabil Res Dev 46:515–528
[18] Lin YC, Dorn TW, Schache AG, Pandy MG (2012) Comparison of different methods for estimating muscle forces in human movement. Proc Inst Mech Eng 226:103–112
[19] Peterson CL, Kautz SA, Neptune RR (2011) Muscle work is increased in pre-swing during hemiparetic walking. Clin Biomech 26:859–866
[20] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2007) Numerical recipes: the art of scientific computing, 3rd edn. Cambridge University Press, Cambridge, pp 799–806
[21] Rasmussen J, Damsgaard M, Voigt M (2001) Muscle recruitment by the min/max criterion: a comparative numerical study. J Biomech 34:409–415
[22] Rasmussen J, de Zee M, Dahl J, Damsgaard M (2009) Salient properties of a combined minimum-fatigue and quadratic muscle recruitment criterion. In: Proceedings of the 12th international symposium on computer simulation in biomechanics, Cape Town, South Africa, 2–4 July 2009
[23] Schache AG, Baker R (2007) On the expression of joint moments during gait. Gait Posture 25:440–452
[24] Schwartz MH, Rozumalski A, Trost JP (2008) The effect of walking speed on the gait of typically developing children. J Biomech 41:1639–1650
[25] Shull PB, Lurie KL, Cutkosky MR, Besier TF (2011) Training multi-parameter gaits to reduce the knee adduction moment with data-driven models and haptic feedback. J Biomech 44:1605–1609
[26] Steenbrink F, Meskers CG, van Vliet B, Slaman J, Veeger HE, De Groot JH (2009) Arm load magnitude affects selective shoulder muscle activation. Med Biol Eng Comput 47:565–572
[27] Teran-Yengle P, Birkhofer R, Weber MA, Patton K, Thatcher E, Yack HJ (2011) Efficacy of gait training with real-time biofeedback in correcting knee hyperextension patterns in young women. J Orthop Sports Phys Ther 41:948–952
[28] Thelen DG, Anderson FC (2006) Using computed muscle control to generate forward dynamic simulations of human walking from experimental data. J Biomech 39:1107–1115
[29] van den Bogert AJ, Geijtenbeek T. A state space filter for smoothing and differentiation of real-time data with variable sampling rate. Comput Methods Biomech Biomed Eng (in review)
[30] van der Helm FC (1994) A finite element musculoskeletal model of the shoulder mechanism. J Biomech 27:551–569
[31] Winter DA, Yack HJ (1987) EMG profiles during normal human walking: stride-to-stride and inter-subject variability. Electroencephalogr Clin Neurophysiol 67:402–411
[32] Xia Y, Feng G (2005) An improved neural network for convex quadratic optimization with application to real-time beamforming. Neurocomputing 64:359–374
[33] Zariffa J, Steeves JD, Pai DK (2011) Muscle tension estimation in the presence of neuromuscular impairment. J Biomech Eng 133:121009
Open Access. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1. Department of Mechanical Engineering, Cleveland State University, Cleveland, USA
2. Orchard Kinetics LLC, Cleveland, USA
3. Motek Medical B.V., Amsterdam, The Netherlands
4. Cleveland VA Medical Center, Cleveland, USA
van den Bogert, A.J., Geijtenbeek, T., Even-Zohar, O. et al. Med Biol Eng Comput (2013) 51: 1069. https://doi.org/10.1007/s11517-013-1076-z
Received 05 September 2012
Accepted 17 April 2013
International Federation for Medical and Biological Engineering | CommonCrawl |
Plane Mirror: Definition, Properties, and Ray Diagram
Tutorials and solved problems on plane mirrors, spherical mirrors, and lenses are presented here.
A mirror is a surface, polished on one side, that reflects light rays. Plane mirrors are those that have a flat reflecting surface; they always produce a virtual image.
In this section, we review the most important topics in plane (flat) mirrors in physics, including image formation by ray diagrams, the image properties of plane mirrors, a proof that the image and object distances are equal, and the definition of lateral magnification in plane mirrors.
Image formation in a plane mirror by ray diagram:
Let's start by defining some elements of the method of image formation by a plane mirror.
As shown in the figure, all light rays emanating from a point source $P$ are reflected from a plane mirror so that the backward extensions of the reflected rays appear to come from (or diverge from) point $P'$. (The rays do not actually pass through the mirror, since most mirrors on the market are opaque.)
We call point $P$ an object point and point $P'$ the corresponding image point.
Now we are going to answer the question: why does a plane mirror create a virtual image?
In optics, an image is formed where reflected light rays actually intersect, or where they appear to come from.
For flat mirrors, no light rays actually intersect at the image point $P'$; to an observer, however, the rays appear to originate from that point, and that is why we call this image a virtual image.
In other words, since the reflected rays do not actually intersect (and cannot be collected on a physical screen), the image formed by a plane mirror is virtual. There are situations where the reflected rays actually do meet, such as in a concave mirror.
Another way to see that the image is virtual is that it is formed on the opposite side of the mirror, where the object does not exist.
The setup above used a single point source. To find the precise position of the image formed by a plane mirror, we must use at least two rays diverging from a point, say $P$, of an extended object, like the arrow in the figure. One ray is incident normally on the mirror; the other strikes the mirror at an angle of incidence $\theta$ and is reflected at an equal angle with the normal (by the law of reflection, any ray striking a surface, polished or rough, is reflected at an angle from the normal equal to the angle of incidence).
Extending these two reflected rays backward, they intersect at point $P'$, a distance $s'$ behind the plane mirror. The distance $s'$ of the image from the mirror is called the image distance.
Now we use a ray diagram to answer, geometrically, another important question about plane mirrors: why are the image distance and the object distance equal?
Answer: Suppose two rays from the source point $P$ strike the mirror as shown in the figure: one follows the path $PO$, the other the indirect path $PB$.
The backward extension of the ray through $O$ lies along the horizontal axis. The ray $PB$ is reflected at an angle equal to the incident angle; extend it backward until it meets the horizontal axis. The intersection of the two backward extensions lies on the horizontal axis, behind the mirror, where the image point $P'$ is formed. From this construction, one can observe that the two triangles $\Delta POB$ and $\Delta P'OB$ are similar, since all of their corresponding angles are equal.
Recall from high school geometry that when two triangles are similar then there is a proportionality between their corresponding sides.
Consequently, from similarity of triangles $\Delta POB$ and $\Delta P'OB$ we have the following ratios between the lengths of the triangles
\begin{align*}\frac {OP}{OP'}=\frac {OB}{OB} =1\\ \frac {PB}{P'B}=\frac {OB}{OB} =1 \end{align*}
From the first equality we obtain that the object distance $OP$ equals the image distance $OP'$, i.e. $OP=OP'$, which is the required result.
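The equality of object and image distances can also be checked numerically. The following sketch is ours, not from the text: it places the mirror on the $y$-axis, reflects one oblique ray from an object point, and extends the reflected ray backward to locate the image point.

```python
# Numeric check that a plane mirror forms the image at the object distance.
# Mirror: the y-axis.  Object point P sits a distance s in front of it.
s = 3.0
P = (-s, 0.0)        # object point on the horizontal axis
B = (0.0, 2.0)       # where the oblique ray strikes the mirror

# Incident direction P -> B; reflecting off a vertical mirror flips the
# x-component of the direction vector (law of reflection).
dx, dy = B[0] - P[0], B[1] - P[1]
rx, ry = -dx, dy

# Extend the reflected ray backward from B to the horizontal axis y = 0.
t = -B[1] / ry               # solve B_y + t*ry = 0; t < 0 means backward
image_x = B[0] + t * rx
print(image_x)               # 3.0 -> the image lies a distance s behind the mirror
```

Changing the strike point $B$ leaves `image_x` at $+s$, which is exactly what the similar-triangles argument above asserts.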
Now that the basics of image formation using a ray diagram in plane mirrors are reviewed, practice the following example.
Example: Two mirrors sit at an angle of $120{}^\circ$ to one another. A ray of light is incident at $50{}^\circ$ on the first mirror. What is the angle of reflection $\theta_r$, measured from the normal to the second mirror, as shown?
Solution: By the law of reflection, the angle of incidence equals the angle of reflection, so the ray leaves the first mirror at $50{}^\circ$ from its normal.
The three interior angles of a triangle always add up to $180{}^\circ$, so the ray reflected from the first mirror, which serves as the incident ray for the second mirror, strikes the second mirror at an angle of $70{}^\circ$ with respect to its normal. Using the law of reflection again, the ray reflected from the second mirror makes $\theta_r=70{}^\circ$.
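The same triangle bookkeeping can be scripted for arbitrary mirror angles. This is a sketch under the geometry of the example above; the function name is ours.

```python
def second_mirror_reflection(mirror_angle_deg, theta_i_deg):
    """Angle of reflection at the second of two mirrors meeting at
    mirror_angle_deg, for a ray incident on the first mirror at
    theta_i_deg from its normal.  All angles are in degrees."""
    from_surface_1 = 90.0 - theta_i_deg            # ray vs. first mirror surface
    # Interior angles of the triangle (ray + two mirror surfaces) sum to 180.
    from_surface_2 = 180.0 - mirror_angle_deg - from_surface_1
    return 90.0 - from_surface_2                   # back to angle from the normal

print(second_mirror_reflection(120, 50))  # 70.0, as in the worked example
```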
Image properties in plane mirrors:
The following list collects all the properties of the image formed in a plane mirror (all illustrated in the figure below):
The image distance $q$ is always equal to the object distance $p$ (proved above).
The image size $h_i$ is always equal to the object size $h_O$ i.e. $h_i=h_O$.
The image is always virtual (i.e. it is formed by the backward extension of the reflected rays behind the mirror); equivalently, the image and the object lie on opposite sides of the flat mirror.
The image is always upright; that is, if the tip of an arrow object points upward, the tip of its image points upward as well.
The image always shows lateral inversion with respect to the object; that is, the right side of the object appears as the left side of the image (the object's right-hand point $b$ is projected as the image's left-hand point $b$ in the figure below).
Definition of lateral magnification:
In any image-forming instrument, we can define a useful quantity, the ratio of the image height $h_i$ to the object height $h_O$, called the lateral magnification $m$: \[m=\frac{\text{image height}}{\text{object height}}=\frac{h_i}{h_O}\]
The lateral magnification for a plane mirror is one ($m=1$) since the image height of an object in the plane mirror is the same size as the object i.e. $h_i=h_O$.
The above formula for lateral magnification is a general definition that applies to any type of mirror. We can extend it to relate the image and object distances for other mirrors as well:
\begin{align*} m \equiv \frac{\text{image height}}{\text{object height}}=\frac{h_i}{h_O}=-\frac{\text{image distance}}{\text{object distance}}=-\frac{q}{p} \end{align*}
To use this extended definition, we need some sign conventions so that the magnification comes out with the correct size and sign.
Sign rule for the object distance: if the object is on the same side of the refracting or reflecting surface as the incoming light, then the object distance is positive $p>0$, otherwise, it is negative.
Sign rule for the image distance: when the image is on the same side of the refracting or reflecting surface (polished side) as the outgoing light, then the image distance is positive $q>0$, otherwise, it is negative.
Therefore, for plane mirrors, the object and the diverging (incoming) light rays are on the same side of the mirror, so the object distance is positive ($p>0$); but the image lies on the opposite side of the mirror from the outgoing (reflected) rays, so the image distance is negative, i.e. $q<0$. Since $q=-p$, this gives $m=-q/p=1>0$.
From the lateral magnification, we can deduce the orientation of the object's image. When $m>0$, we say that the image is erect or upright; for plane mirrors this is always the case. But there are situations (such as with spherical mirrors) in which the lateral magnification is negative, $m<0$. In these cases, the image is inverted.
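A tiny sketch ties the sign rules together for the plane-mirror case; the helper name is ours, and only the conventions stated above are used.

```python
def plane_mirror_image(p):
    """Image distance q and lateral magnification m for a plane mirror,
    using the sign conventions above: p > 0 for a real object, q < 0
    because the image forms behind the mirror."""
    q = -p            # image as far behind the mirror as the object is in front
    m = -q / p        # general magnification formula m = -q/p
    return q, m

q, m = plane_mirror_image(2.0)
print(q, m)           # -2.0 1.0: virtual image, upright (m > 0), same size
```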
Practice more problems: Mirrors and Lenses - Problems and solution
We end this course on plane mirrors with the following question.
Which of the following statements are true for an image formed by a plane mirror?
(a) Image is sometimes erect.
(b) Image has sometimes apparent left-right reversal.
(c) Based on the position of the object, there is a situation where the image is real.
(d) Image has always lateral magnification of one.
Answer: (d). See the section above on the properties of plane-mirror images: the image always has lateral magnification one, while (a), (b) and (c) fail because the image is always erect, always laterally reversed, and never real.
Ali Nemati
Duopyramid
In geometry of 4 dimensions or higher, a double pyramid, duopyramid, or fusil is a polytope constructed from 2 orthogonal polytopes with edges connecting all pairs of vertices between the two. The term fusil is used by Norman Johnson for a rhombic shape.[1] The term duopyramid was used by George Olshevsky, as the dual of a duoprism.[2]
Polygonal forms
Set of dual uniform p-q duopyramids
Example: 4-4 duopyramid (16-cell), orthogonal projection
Type: Uniform dual polychoron
Schläfli symbol: {p} + {q}[3]
Cells: pq digonal disphenoids
Faces: 2pq triangles
Edges: pq + p + q
Vertices: p + q
Vertex figures: p-gonal bipyramid, q-gonal bipyramid
Symmetry: [p,2,q], order 4pq
Dual: p-q duoprism
Properties: convex, facet-transitive
Set of dual uniform p-p duopyramids
Schläfli symbol: {p} + {p} = 2{p}
Cells: p² tetragonal disphenoids
Faces: 2p² triangles
Edges: p² + 2p
Vertices: 2p
Vertex figure: p-gonal bipyramid
Symmetry: [[p,2,p]] = [2p,2+,2p], order 8p²
Dual: p-p duoprism
Properties: convex, facet-transitive
The lowest-dimensional forms are 4-dimensional and connect two polygons. A p-q duopyramid or p-q fusil is represented by a composite Schläfli symbol {p} + {q} and a Coxeter-Dynkin diagram. The regular 16-cell can be seen as a 4-4 duopyramid or 4-4 fusil, with symmetry [[4,2,4]], order 128.
A p-q duopyramid or p-q fusil has Coxeter group symmetry [p,2,q], order 4pq. When p and q are identical, the symmetry in Coxeter notation is doubled as [[p,2,p]] or [2p,2+,2p], order 8p².
Edges exist between all pairs of vertices of the p-gon and the q-gon. The 1-skeleton of a p-q duopyramid consists of the edges of the p-gon and the q-gon together with the complete bipartite graph (pq edges) between them.
Geometry
A p-q duopyramid can be seen as two regular planar polygons of p and q sides with the same center and orthogonal orientations in 4 dimensions. Along with the p and q edges of the two polygons, all permutations of vertices in one polygon to vertices in the other form edges. All faces are triangular, with one edge of one polygon connected to one vertex of the other polygon. The p and q sided polygons are hollow, passing through the polytope center and not defining faces. Cells are tetrahedra constructed as all permutations of edge pairs between each polygon.
It can be understood by analogy to the relation of the 3D prisms and their dual bipyramids with Schläfli symbol { } + {p}, and a rhombus in 2D as { } + { }. A bipyramid can be seen as a 3D degenerate duopyramid, obtained by adding an edge across the digon { } on the inner axis, and adding intersecting interior triangles and tetrahedra connecting that new edge to the p-gon vertices and edges.
Other nonuniform polychora can be called duopyramids by the same construction: two orthogonal, co-centered polygons connected by edges joining all combinations of vertex pairs between the polygons. The symmetry will be the product of the symmetries of the two polygons. So a rectangle-rectangle duopyramid would be topologically identical to the uniform 4-4 duopyramid, but with the lower symmetry [2,2,2], order 16, possibly doubled to 32 if the two rectangles are identical.
Coordinates
The coordinates of a p-q duopyramid (on a unit 3-sphere) can be given as:
$(\cos {\frac {2\pi i}{p}},\sin {\frac {2\pi i}{p}},0,0),\quad i=1\dots p$
$(0,0,\cos {\frac {2\pi j}{q}},\sin {\frac {2\pi j}{q}}),\quad j=1\dots q$
All pairs of vertices are connected by edges.
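As a sanity check on the counts in the infobox, one can generate the vertices from the coordinates above and enumerate the edges. This sketch (names ours) counts the p-gon edges, the q-gon edges, and the pq cross edges:

```python
import itertools
import math

def duopyramid_edges(p, q):
    """Edges of the p-q duopyramid built from the unit-3-sphere coordinates."""
    poly_p = [(math.cos(2 * math.pi * i / p), math.sin(2 * math.pi * i / p), 0.0, 0.0)
              for i in range(p)]
    poly_q = [(0.0, 0.0, math.cos(2 * math.pi * j / q), math.sin(2 * math.pi * j / q))
              for j in range(q)]
    edges  = [(poly_p[i], poly_p[(i + 1) % p]) for i in range(p)]   # p-gon edges
    edges += [(poly_q[j], poly_q[(j + 1) % q]) for j in range(q)]   # q-gon edges
    edges += list(itertools.product(poly_p, poly_q))                # cross edges
    return edges

# 4-4 duopyramid (the 16-cell): 4*4 + 4 + 4 = 24 edges.
print(len(duopyramid_edges(4, 4)))  # 24
```

The total matches the infobox formula pq + p + q for every choice of p and q.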
Perspective projections
Shown for the 3-3, 3-4, and 4-4 (16-cell) duopyramids.
Orthogonal projections
The 2n vertices of a n-n duopyramid can be orthogonally projected into two regular n-gons with edges between all vertices of each n-gon.
The regular 16-cell can be seen as a 4-4 duopyramid, being dual to the 4-4 duoprism, which is the tesseract. As a 4-4 duopyramid, the 16-cell's symmetry is [4,2,4], order 64, and doubled to [[4,2,4]], order 128 with the 2 central squares interchangeable. The regular 16-cell has a higher symmetry [3,3,4], order 384.
p-p duopyramids
3-3, 4-4 (16-cell), 5-5, 6-6, 7-7, 8-8, 9-9, 10-10, 11-11, 12-12, 13-13, 14-14, 15-15, 16-16, 17-17, 18-18, 19-19, 20-20
p-q duopyramids
3-4, 3-5, 3-6, 3-8, 4-5, 4-6
Example 6-4 duopyramid
This is a vertex-centered stereographic projection of the 6-4 duopyramid (blue) with its dual duoprism (transparent red).
In the last row, the duopyramid is projected along a direction perpendicular to the first one, so the two parameters (6,4) appear to be reversed. Indeed, the asymmetry is due to the projection: the two parameters play symmetric roles in 4D.
References
1. Norman W. Johnson, Geometries and Transformations (2018), p.167
2. Olshevsky, George. "Duopyramid". Glossary for Hyperspace. Archived from the original on 4 February 2007.
3. N.W. Johnson: Geometries and Transformations, (2018) ISBN 978-1-107-10340-5 Chapter 11: Finite symmetry groups, 11.5 Spherical Coxeter groups, p.251
\begin{definition}[Definition:Composition of Mappings/Definition 3]
Let $S_1$, $S_2$ and $S_3$ be sets.
Let $f_1: S_1 \to S_2$ and $f_2: S_2 \to S_3$ be mappings such that the domain of $f_2$ is the same set as the codomain of $f_1$.
The '''composite of $f_1$ and $f_2$''' is defined and denoted as:
:$f_2 \circ f_1 := \set {\tuple {x, z} \in S_1 \times S_3: \exists y \in S_2: \map {f_1} x = y \land \map {f_2} y = z}$
That is:
:$f_2 \circ f_1 := \set {\tuple {x, z} \in S_1 \times S_3: \exists y \in S_2: \tuple {x, y} \in f_1 \land \tuple {y, z} \in f_2}$
\end{definition}
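Definition 3 treats each mapping as its set of ordered pairs, so the composite can be computed literally from that construction. A hedged Python sketch (function and variable names are ours, not from ProofWiki):

```python
def compose(f2, f1):
    """f2 o f1 as {(x, z) : there is y with (x, y) in f1 and (y, z) in f2}."""
    return {(x, z) for (x, y1) in f1 for (y2, z) in f2 if y1 == y2}

f1 = {(1, 'a'), (2, 'b')}      # f1 : S1 -> S2 as a set of pairs
f2 = {('a', 10), ('b', 20)}    # f2 : S2 -> S3 as a set of pairs
print(sorted(compose(f2, f1))) # [(1, 10), (2, 20)]
```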
\begin{document}
\begin{center} {\LARGE Entire Functions Sharing Small Functions With Their Difference Operators}
\quad
\textbf{Zinel\^{a}abidine} \textbf{LATREUCH}$^{1}$, \textbf{Abdallah} \textbf{EL FARISSI}$^{2}$ \textbf{and Benharrat} \textbf{BELA\"{I}DI}$^{1}$
\quad
$^{1}$\textbf{Department of Mathematics }
\textbf{Laboratory of Pure and Applied Mathematics }
\textbf{University of Mostaganem (UMAB) }
\textbf{B. P. 227 Mostaganem-(Algeria)}
\textbf{[email protected]}
\textbf{[email protected]}
$^{2}$\textbf{Department of Mathematics and Informatics, }
\textbf{Faculty of Exact Sciences,}
\textbf{University of Bechar-(Algeria)}
\textbf{[email protected]}
\quad \end{center}
\noindent \textbf{Abstract. }We investigate uniqueness problems for an entire function that shares two small functions of finite order with its difference operators. In particular, we give a generalization of a result in $[2]$.
\quad
\noindent 2010 \textit{Mathematics Subject Classification}:30D35, 39A32.
\noindent \textit{Key words}: Uniqueness, Entire functions, Difference operators.
\section{Introduction and Main Results}
\noindent Throughout this paper, we assume that the reader is familiar with the fundamental results and the standard notations of Nevanlinna's value distribution theory $(\left[ 7\right] ,$ $\left[ 9\right] ,$ $\left[ 12 \right] )$. In addition, we will use $\rho \left( f\right) $ to denote the order of growth of $f$ and $\tau \left( f\right) $ to denote the type of growth of $f$. We say that a meromorphic function $a\left( z\right) $ is a small function of $f\left( z\right) $ if $T\left( r,a\right) =S\left( r,f\right) ,$ where $S\left( r,f\right) =o\left( T\left( r,f\right) \right) $ as $r\rightarrow \infty $ outside of a possible exceptional set of finite logarithmic measure, and we use $S\left( f\right) $ to denote the family of all small functions with respect to $f\left( z\right) $. For a meromorphic function $f\left( z\right) ,$ we define its shift by $f_{c}\left( z\right) =f\left( z+c\right) $ $\left( \text{resp. }f_{0}\left( z\right) =f\left( z\right) \right) $ and its difference operators by \begin{equation*} \Delta _{c}f\left( z\right) =f\left( z+c\right) -f\left( z\right) ,\text{ \ \ }\Delta _{c}^{n}f\left( z\right) =\Delta _{c}^{n-1}\left( \Delta _{c}f\left( z\right) \right) ,\text{ }n\in
\mathbb{N}
,\text{ }n\geq 2. \end{equation*} In particular, $\Delta _{c}^{n}f\left( z\right) =\Delta ^{n}f\left( z\right) $ for the case $c=1.$
\noindent \qquad Let $f\left( z\right) $ and $g\left( z\right) $ be two meromorphic functions, and let $a\left( z\right) $ be a small function with respect to $f\left( z\right) $ and $g\left( z\right) .$ We say that $f\left( z\right) $ and $g\left( z\right) $ share $a\left( z\right) $ CM (counting multiplicity), provided that $f\left( z\right) -a\left( z\right) $ and $ g\left( z\right) -a\left( z\right) $ have the same zeros with the same multiplicities.
\noindent \qquad The problem of meromorphic functions sharing small functions with their differences is an important topic of uniqueness theory of meromorphic functions $\left( \text{see, }\left[ 1,4-6\right] \right) $. In 1986, Jank, Mues and Volkmann $\left( \text{see, }\left[ 8\right] \right) $ proved:
\quad
\noindent \textbf{Theorem A} \textit{Let }$f$\textit{\ be a nonconstant meromorphic function, and let }$a\neq 0$\textit{\ be a finite constant. If }$ f,$\textit{\ }$f^{\prime }$\textit{\ and }$f^{\prime \prime }$\textit{\ share the value }$a$\textit{\ CM, then }$f\equiv f^{\prime }.$
\quad
\noindent In $\left[ 11\right] ,$ P. Li and C. C. Yang gave the following generalization of Theorem A.
\quad
\noindent \textbf{Theorem B} \textit{Let }$f$\textit{\ be a nonconstant entire function, let }$a$\textit{\ be a finite nonzero constant, and let }$n$ \textit{\ be a positive integer. If }$f$\textit{, }$f^{\left( n\right) }$ \textit{\ and }$f^{\left( n+1\right) }$\textit{\ share the value }$a$\textit{ \ CM, then }$f\equiv f^{\prime }.$
\textit{\quad }
\noindent \qquad In $\left[ 2\right] ,$ B. Chen et al. proved a difference analogue of Theorem A and obtained the following results:
\quad
\noindent \textbf{Theorem C }\textit{Let }$f\left( z\right) $ \textit{be a nonconstant entire function of finite order, and let }$a\left( z\right) \left( \not\equiv 0\right) \in S\left( f\right) $\textit{\ be a periodic entire function with period }$c$\textit{. If }$f\left( z\right) ,$\textit{\ } $\Delta _{c}f$\textit{\ and }$\Delta _{c}^{2}f$\textit{\ share }$a\left( z\right) $\textit{\ CM, then }$\Delta _{c}f\equiv \Delta _{c}^{2}f.$
\quad
\noindent \textbf{Theorem D }\textit{Let }$f\left( z\right) $ \textit{be a nonconstant entire function of finite order, and let }$a\left( z\right) ,$ $ b\left( z\right) \left( \not\equiv 0\right) \in S\left( f\right) $\textit{\ be periodic entire functions with period }$c$\textit{. If }$f\left( z\right) -a\left( z\right) ,$\textit{\ }$\Delta _{c}f\left( z\right) -b\left( z\right) $\textit{\ and }$\Delta _{c}^{2}f\left( z\right) -b\left( z\right) $ \textit{\ share }$0$\textit{\ CM, then }$\Delta _{c}f\equiv \Delta _{c}^{2}f. $
\quad
\noindent \qquad Recently in $\left[ 3\right] ,$ B. Chen and S. Li generalized Theorem C and proved the following results:
\quad
\noindent \textbf{Theorem E }\textit{Let }$f\left( z\right) $ \textit{be a nonconstant entire function of finite order, and let }$a\left( z\right) \left( \not\equiv 0\right) \in S\left( f\right) $\textit{\ be a periodic entire function with period }$c$\textit{. If }$f\left( z\right) ,$\textit{\ } $\Delta _{c}f$\textit{\ and }$\Delta _{c}^{n}f$\textit{\ }$\left( n\geq 2\right) $ \textit{share }$a\left( z\right) $\textit{\ CM, then }$\Delta _{c}f\equiv \Delta _{c}^{n}f.$
\quad
\noindent \textbf{Theorem F }\textit{Let }$f\left( z\right) $ \textit{be a nonconstant entire function of finite order. If }$f\left( z\right) ,$\textit{ \ }$\Delta _{c}f\left( z\right) $\textit{\ and }$\Delta _{c}^{n}f\left( z\right) $\textit{\ share }$0$\textit{\ CM, then }$\Delta _{c}^{n}f\left( z\right) =C\Delta _{c}f\left( z\right) ,$\textit{\ where }$C$\textit{\ is a nonzero constant.}
\quad
\noindent \qquad It is interesting now to see what happens when $f\left( z\right) $, $\Delta _{c}^{n}f\left( z\right) $\ and $\Delta _{c}^{n+1}f\left( z\right) $\ $\left( n\geq 1\right) $ share $a\left( z\right) $ CM. The main aim of this paper is to give a difference analogue of Theorem B. In fact, we prove that the conclusions of Theorems E and F remain valid when we replace $\Delta _{c}f\left( z\right) $ by $\Delta _{c}^{n+1}f\left( z\right) $, and we obtain the following results.
\quad
\noindent \textbf{Theorem 1.1} \textbf{\ }\textit{Let }$f\left( z\right) $ \textit{be a nonconstant entire function of finite order, and let }$a\left( z\right) \left( \not\equiv 0\right) \in S\left( f\right) $\textit{\ be a periodic entire function with period }$c$\textit{. If }$f\left( z\right) $ \textit{, }$\Delta _{c}^{n}f\left( z\right) $\textit{\ and }$\Delta _{c}^{n+1}f\left( z\right) $\textit{\ }$\left( n\geq 1\right) $ \textit{ share }$a\left( z\right) $ \textit{CM, then }$\Delta _{c}^{n+1}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) .$
\quad
\noindent \textbf{Example 1.1 }Let $f\left( z\right) =e^{z\ln 2}$ and $c=1.$ Then, for any $a\in
\mathbb{C}
,$ we notice that $f\left( z\right) ,$ $\Delta _{c}^{n}f\left( z\right) $ \textit{\ }and $\Delta _{c}^{n+1}f\left( z\right) $\ share $a$\ CM for all $ n\in
\mathbb{N}
$ and we can easily see that $\Delta _{c}^{n+1}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) .$ This example satisfies Theorem 1.1.
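To see why, note that since $e^{\ln 2}=2$, a direct computation gives

```latex
\Delta _{c}f\left( z\right) =e^{\left( z+1\right) \ln 2}-e^{z\ln 2}=\left( e^{\ln 2}-1\right) e^{z\ln 2}=f\left( z\right) ,
```

so by induction $\Delta _{c}^{n}f\left( z\right) =f\left( z\right) $ for every $n\geq 1$, and in particular $f$, $\Delta _{c}^{n}f$ and $\Delta _{c}^{n+1}f$ share every value.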
\quad
\noindent \textbf{Remark 1.1 }In Example 1.1, we have $\Delta _{c}^{m}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) $ for any integer $m>n+1.$ However, it remains open whether, when $f\left( z\right) $, $\Delta _{c}^{n}f\left( z\right) $ and $\Delta _{c}^{m}f\left( z\right) $ $\left( m>n+1\right) $ share $a\left( z\right) $ CM, the conclusion $\Delta _{c}^{n+1}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) $ of Theorem 1.1 can be replaced by $\Delta _{c}^{m}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) $ in general.
\quad
\noindent \textbf{Theorem 1.2 }\textit{Let }$f\left( z\right) $\textit{\ be a nonconstant entire function of finite order, and let }$a\left( z\right) ,$ $b\left( z\right) \left( \not\equiv 0\right) \in S\left( f\right) $\textit{\ be periodic entire functions with period }$c$\textit{. If }$f\left( z\right) -a\left( z\right) ,$\textit{\ }$\Delta _{c}^{n}f\left( z\right) -b\left( z\right) $\textit{\ and }$\Delta _{c}^{n+1}f\left( z\right) -b\left( z\right) $ \textit{\ share }$0$\textit{\ CM, then }$\Delta _{c}^{n+1}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) .$
\quad
\noindent \textbf{Theorem 1.3 }\textit{Let }$f\left( z\right) $\textit{be a nonconstant entire function of finite order. If }$f\left( z\right) ,$\textit{ \ }$\Delta _{c}^{n}f\left( z\right) $\textit{\ and }$\Delta _{c}^{n+1}f\left( z\right) $\textit{\ share }$0$\textit{\ CM, then }$\Delta _{c}^{n+1}f\left( z\right) \equiv C\Delta _{c}^{n}f\left( z\right) ,$\textit{ \ where }$C$\textit{\ is a nonzero constant.}
\quad
\noindent \textbf{Example 1.2 }Let $f\left( z\right) =e^{az}$ and $c=1$ where $a\neq 2k\pi i$ $\left( k\in
\mathbb{Z}
\right) ,$ it is clear that $\Delta _{c}^{n}f\left( z\right) =\left( e^{a}-1\right) ^{n}e^{az}$ for any integer $n\geq 1.$ So, $f\left( z\right) , $ $\Delta _{c}^{n}f\left( z\right) $\textit{\ }and $\Delta _{c}^{n+1}f\left( z\right) $\ share $0$\ CM for all $n\in
\mathbb{N}
$ and we can easily see that $\Delta _{c}^{n+1}f\left( z\right) \equiv C\Delta _{c}^{n}f\left( z\right) $ where $C=e^{a}-1.$ This example satisfies Theorem 1.3.
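The formula for $\Delta _{c}^{n}f$ used in Example 1.2 follows from one difference step and induction:

```latex
\Delta _{c}f\left( z\right) =e^{a\left( z+1\right) }-e^{az}=\left( e^{a}-1\right) e^{az},\qquad \Delta _{c}^{n+1}f\left( z\right) =\Delta _{c}\left( \left( e^{a}-1\right) ^{n}e^{az}\right) =\left( e^{a}-1\right) ^{n+1}e^{az},
```

and the hypothesis $a\neq 2k\pi i$ ensures $e^{a}-1\neq 0$, so none of the differences vanish identically.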
\section{Some lemmas}
\noindent \textbf{Lemma 2.1 }$\left[ 10\right] $\ \ \textit{Let }$f$\textit{ \ and }$g$\textit{\ be meromorphic functions\ such that }$0<$\textit{\ }$ \rho \left( f\right) ,\rho \left( g\right) <\infty $\textit{\ and }$0<\tau \left( f\right) ,\tau \left( g\right) <\infty .$\textit{\ Then we have}
\noindent $\left( \text{i}\right) $ \textit{If }$\rho \left( f\right) >\rho \left( g\right) ,$ \textit{then we obtain} \begin{equation*} \tau \left( f+g\right) =\tau \left( fg\right) =\tau \left( f\right) . \end{equation*} $\left( \text{ii}\right) $ \textit{If }$\rho \left( f\right) =\rho \left( g\right) $ \textit{and }$\tau \left( f\right) \neq \tau \left( g\right) ,$ \textit{then we get} \begin{equation*} \rho \left( f+g\right) =\rho \left( fg\right) =\rho \left( f\right) =\rho \left( g\right) . \end{equation*} \textbf{Lemma 2.2 }$\left[ 12\right] $ \textit{Suppose }$f_{j}\left( z\right) $\textit{\ }$(j=1,2,\cdots ,n+1)$\textit{\ and }$g_{j}\left( z\right) $\textit{\ }$(j=1,2,\cdots ,n)$\textit{\ }$(n\geq 1)$\textit{\ are entire functions satisfying the following conditions:}
\noindent $\left( \text{i}\right) $\textit{\ }$\overset{n}{\underset{j=1}{ \sum }}f_{j}\left( z\right) e^{g_{j}\left( z\right) }\equiv f_{n+1}\left( z\right) ;$
\noindent $\left( \text{ii}\right) $\textit{\ The order of }$f_{j}\left( z\right) $\textit{\ is less than the order of }$e^{g_{k}\left( z\right) }$ \textit{\ for }$1\leq j\leq n+1,$\textit{\ }$1\leq k\leq n.$\textit{\ And furthermore, the order of }$f_{j}\left( z\right) $\textit{\ is less than the order of }$e^{g_{h}\left( z\right) -g_{k}\left( z\right) }$\textit{\ for }$ n\geq 2$\textit{\ and }$1\leq j\leq n+1,$\textit{\ }$1\leq h<k\leq n.$
\noindent \textit{Then }$f_{j}\left( z\right) \equiv 0,$\textit{\ }$\left( j=1,2,\cdots n+1\right) .$
\quad
\noindent \textbf{Lemma 2.3 }$\left[ 5\right] $ \textit{Let }$c\in
\mathbb{C}
,$\textit{\ }$n\in
\mathbb{N}
,$\textit{\ and let }$f\left( z\right) $\textit{\ be a meromorphic function of finite order. Then for any small periodic function }$a\left( z\right) $ \textit{\ with period }$c,$\textit{\ with respect to }$f\left( z\right) ,$ \begin{equation*} m\left( r,\frac{\Delta _{c}^{n}f}{f-a}\right) =S\left( r,f\right) , \end{equation*} \textit{where the exceptional set associated with }$S\left( r,f\right) $ \textit{\ is of at most finite logarithmic measure.}
\section{Proof of the Theorems}
\noindent \textbf{Proof of Theorem 1.1.} Suppose, contrary to the assertion, that $\Delta _{c}^{n}f\left( z\right) \not\equiv \Delta _{c}^{n+1}f\left( z\right) .$ Note that $f\left( z\right) $ is a nonconstant entire function of finite order. By Lemma 2.3, for $n\geq 1$, we have \begin{equation*} T\left( r,\Delta _{c}^{n}f\right) =m\left( r,\Delta _{c}^{n}f\right) \leq m\left( r,\frac{\Delta _{c}^{n}f}{f}\right) +m\left( r,f\right) \leq T\left( r,f\right) +S\left( r,f\right) . \end{equation*} Since $f\left( z\right) $, $\Delta _{c}^{n}f\left( z\right) $\ and $\Delta _{c}^{n+1}f\left( z\right) $\ $\left( n\geq 1\right) $ share $a\left( z\right) $ CM, we have \begin{equation} \frac{\Delta _{c}^{n}f\left( z\right) -a\left( z\right) }{f\left( z\right) -a\left( z\right) }=e^{P\left( z\right) } \tag{3.1} \end{equation} and \begin{equation} \frac{\Delta _{c}^{n+1}f\left( z\right) -a\left( z\right) }{f\left( z\right) -a\left( z\right) }=e^{Q\left( z\right) }, \tag{3.2} \end{equation} where $P$ and $Q$ are polynomials. Set \begin{equation} \varphi \left( z\right) =\frac{\Delta _{c}^{n+1}f\left( z\right) -\Delta _{c}^{n}f\left( z\right) }{f\left( z\right) -a\left( z\right) }. \tag{3.3} \end{equation} From $\left( 3.1\right) $ and $\left( 3.2\right) ,$ we get $\varphi \left( z\right) =e^{Q\left( z\right) }-e^{P\left( z\right) }.$ Then, by supposition and $(3.3)$, we see that $\varphi \left( z\right) \not\equiv 0$. By Lemma 2.3, we deduce that \begin{equation} T\left( r,\varphi \right) =m\left( r,\varphi \right) \leq m\left( r,\frac{ \Delta _{c}^{n+1}f}{f-a}\right) +m\left( r,\frac{\Delta _{c}^{n}f}{f-a} \right) +O\left( 1\right) =S\left( r,f\right) . 
\tag{3.4} \end{equation} Note that $\frac{e^{Q\left( z\right) }}{\varphi \left( z\right) }-\frac{ e^{P\left( z\right) }}{\varphi \left( z\right) }=1.$ By using the second main theorem and $(3.4)$, we have \begin{equation*} T\left( r,\frac{e^{Q}}{\varphi }\right) \leq \overline{N}\left( r,\frac{e^{Q} }{\varphi }\right) +\overline{N}\left( r,\frac{\varphi }{e^{Q}}\right) + \overline{N}\left( r,\frac{1}{\frac{e^{Q}}{\varphi }-1}\right) +S\left( r, \frac{e^{Q}}{\varphi }\right) \end{equation*} \begin{equation*} =\overline{N}\left( r,\frac{e^{Q}}{\varphi }\right) +\overline{N}\left( r, \frac{\varphi }{e^{Q}}\right) +\overline{N}\left( r,\frac{\varphi }{e^{P}} \right) +S\left( r,\frac{e^{Q}}{\varphi }\right) \end{equation*} \begin{equation} =S\left( r,f\right) +S\left( r,\frac{e^{Q}}{\varphi }\right) . \tag{3.5} \end{equation} Thus, by $(3.4)$ and $(3.5)$, we have $T(r,e^{Q})$ $=S(r,f)$. Similarly, $ T(r,e^{P})=S(r,f)$. Setting now $g\left( z\right) =f\left( z\right) -a\left( z\right) ,$ we have from $\left( 3.1\right) $ and $\left( 3.2\right) $ \begin{equation} \Delta _{c}^{n}g\left( z\right) =g\left( z\right) e^{P\left( z\right) }+a\left( z\right) \tag{3.6} \end{equation} and \begin{equation} \Delta _{c}^{n+1}g\left( z\right) =g\left( z\right) e^{Q\left( z\right) }+a\left( z\right) . \tag{3.7} \end{equation} By $\left( 3.6\right) $ and $\left( 3.7\right) ,$ we have \begin{equation*} g\left( z\right) e^{Q\left( z\right) }+a\left( z\right) =\Delta _{c}\left( \Delta _{c}^{n}g\left( z\right) \right) =\Delta _{c}\left( g\left( z\right) e^{P\left( z\right) }+a\left( z\right) \right) . 
\end{equation*} Thus \begin{equation*} g\left( z\right) e^{Q\left( z\right) }+a\left( z\right) =g_{c}\left( z\right) e^{P_{c}\left( z\right) }-g\left( z\right) e^{P\left( z\right) }, \end{equation*} which implies \begin{equation} g_{c}\left( z\right) =M\left( z\right) g\left( z\right) +N\left( z\right) , \tag{3.8} \end{equation} where $M\left( z\right) =e^{-P_{c}\left( z\right) }\left( e^{P\left( z\right) }+e^{Q\left( z\right) }\right) $ and $N\left( z\right) =a\left( z\right) e^{-P_{c}\left( z\right) }.$ From $\left( 3.8\right) ,$ we have \begin{equation*} g_{2c}\left( z\right) =M_{c}\left( z\right) g_{c}\left( z\right) +N_{c}\left( z\right) =M_{c}\left( z\right) \left( M\left( z\right) g\left( z\right) +N\left( z\right) \right) +N_{c}\left( z\right) , \end{equation*} hence \begin{equation*} g_{2c}\left( z\right) =M_{c}\left( z\right) M_{0}\left( z\right) g\left( z\right) +N^{1}\left( z\right) , \end{equation*} where $N^{1}\left( z\right) =M_{c}\left( z\right) N_{0}\left( z\right) +N_{c}\left( z\right) .$ By the same method, we can deduce that \begin{equation} g_{ic}\left( z\right) =\left( \underset{k=0}{\overset{i-1}{\prod }} M_{kc}\left( z\right) \right) g\left( z\right) +N^{i-1}\left( z\right) \text{ }\left( i\geq 1\right) , \tag{3.9} \end{equation} where $N^{i-1}\left( z\right) $ $\left( i\geq 1\right) $ is an entire function depending on $a\left( z\right) ,e^{P\left( z\right) },e^{Q\left( z\right) }$ and their differences. Now, we can rewrite $\left( 3.6\right) $ as \begin{equation} \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}g_{ic}\left( z\right) =\left( e^{P\left( z\right) }-\left( -1\right) ^{n}\right) g\left( z\right) +a\left( z\right) . 
\tag{3.10} \end{equation} By $\left( 3.9\right) $ and $\left( 3.10\right) ,$ we have \begin{equation*} \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}\left( \left( \underset{k=0}{\overset{i-1}{\prod }}M_{kc}\left( z\right) \right) g\left( z\right) +N^{i-1}\left( z\right) \right) -\left( e^{P\left( z\right) }-\left( -1\right) ^{n}\right) g\left( z\right) =a\left( z\right) \end{equation*} which implies \begin{equation} A\left( z\right) g\left( z\right) +B\left( z\right) =0, \tag{3.11} \end{equation} where \begin{equation*} A\left( z\right) =\overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}\underset{k=0}{\overset{i-1}{\prod }}M_{kc}\left( z\right) -e^{P\left( z\right) }+\left( -1\right) ^{n} \end{equation*} and \begin{equation*} B\left( z\right) =\overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}N^{i-1}\left( z\right) -a\left( z\right) . \end{equation*} It is clear that $A\left( z\right) $ and $B\left( z\right) $ are small functions with respect to $f\left( z\right) .$ If\textbf{\ }$A\left( z\right) \not\equiv 0$, then $\left( 3.11\right) $ yields the contradiction \begin{equation*} T\left( r,f\right) =T\left( r,g\right) =T\left( r,\frac{B}{A}\right) =S\left( r,f\right) . \end{equation*} Suppose now that $A\left( z\right) \equiv 0,$ rewrite the equation $A\left( z\right) \equiv 0$ as \begin{equation*} \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}\underset{ k=0}{\overset{i-1}{\prod }}e^{-P_{\left( k+1\right) c}}\left( e^{P_{kc}}+e^{Q_{kc}}\right) =e^{P}-\left( -1\right) ^{n}. 
\end{equation*} We can rewrite the left side of above equality as \begin{equation*} \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{- \overset{i}{\underset{k=1}{\sum }}P_{kc}}\underset{k=0}{\overset{i-1}{\prod } }\left( e^{P_{kc}}+e^{Q_{kc}}\right) \end{equation*} \begin{equation*} =\overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{- \overset{i}{\underset{k=1}{\sum }}P_{kc}}e^{\overset{i-1}{\underset{k=0}{ \sum }}P_{kc}}\underset{k=0}{\overset{i-1}{\prod }}\left( 1+e^{Q_{kc}-P_{kc}}\right) \end{equation*} \begin{equation*} \overset{n}{=\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{P-P_{ic}}\underset{k=0}{\overset{i-1}{\prod }}\left( 1+e^{Q_{kc}-P_{kc}}\right) . \end{equation*} So \begin{equation} \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{P-P_{ic}}\underset{k=0}{\overset{i-1}{\prod }}\left( 1+e^{h_{kc}}\right) =e^{P}-\left( -1\right) ^{n}, \tag{3.12} \end{equation} where $h_{kc}=Q_{kc}-P_{kc}.$ On the other hand, let $\Omega _{i}=\left\{ 0,1,\cdots ,i-1\right\} $ be a finite set of $i$ elements, and \begin{equation*} P\left( \Omega _{i}\right) =\{\varnothing ,\left\{ 0\right\} ,\left\{ 1\right\} ,\cdots ,\left\{ i-1\right\} ,\left\{ 0,1\right\} ,\left\{ 0,2\right\} ,\cdots ,\Omega _{i}\}, \end{equation*} where $\varnothing $ is an empty set. It is easy to see that \begin{equation*} \underset{k=0}{\overset{i-1}{\prod }}\left( 1+e^{h_{kc}}\right) =1+\underset{ A\in P\left( \Omega _{i}\right) \backslash \left\{ \varnothing \right\} }{ \sum }\exp \left( \underset{j\in A}{\sum }h_{jc}\right) \end{equation*} \begin{equation} =1+\left[ e^{h}+e^{h_{c}}+\cdots +e^{h_{\left( i-1\right) c}}\right] +\left[ e^{h+h_{c}}+e^{h+h_{2c}}+\cdots \right] +\cdots +\left[ e^{h+h_{c}+\cdots +h_{\left( i-1\right) c}}\right] . \tag{3.13} \end{equation} Dividing the proof on two parts:
\noindent \textbf{Part (1). }$h\left( z\right) $ is non-constant polynomial. Suppose that $h\left( z\right) =a_{m}z^{m}+\cdots +a_{0}$ $\left( a_{m}\neq 0\right) ,$ since $P\left( \Omega _{i}\right) \subset P\left( \Omega _{i+1}\right) ,$ then by $\left( 3.12\right) $ and $\left( 3.13\right) $ we have \begin{equation*} \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{P-P_{ic}}+\alpha _{1}e^{a_{m}z^{m}}+\alpha _{2}e^{2a_{m}z^{m}}+\cdots +\alpha _{n}e^{na_{m}z^{m}}=e^{P}-\left( -1\right) ^{n} \end{equation*} which is equivalent to \begin{equation} \alpha _{0}+\alpha _{1}e^{a_{m}z^{m}}+\alpha _{2}e^{2a_{m}z^{m}}+\cdots +\alpha _{n}e^{na_{m}z^{m}}=e^{P}, \tag{3.14} \end{equation} where $\alpha _{i}$ $\left( i=0,\cdots ,n\right) $ are entire functions of order less than $m.$ Moreover, \begin{equation*} \alpha _{0}=\overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{P-P_{ic}}+\left( -1\right) ^{n} \end{equation*} \begin{equation*} =e^{P}\left( \overset{n}{\underset{i=1}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{-P_{ic}}+\left( -1\right) ^{n}e^{-P}\right) =e^{P}\Delta _{c}^{n}e^{-P}. \end{equation*} $\left( \text{i}\right) $ If $\deg P>m$, then we obtain from $\left( 3.14\right) $ that \begin{equation*} \deg P\leq m \end{equation*} which is a contradiction.
\noindent $\left( \text{ii}\right) $ If $\deg P<m,$ then by using Lemma 2.1 and $\left( 3.14\right) $ we obtain \begin{equation*} \deg P=\rho \left( e^{P}\right) =\rho \left( \alpha _{0}+\alpha _{1}e^{a_{m}z^{m}}+\alpha _{2}e^{2a_{m}z^{m}}+\cdots +\alpha _{n}e^{na_{m}z^{m}}\right) =m, \end{equation*} which is also a contradiction.
\noindent $\left( \text{iii}\right) $ If $\deg P=m,$ then we suppose that $ P\left( z\right) =dz^{m}+P^{\ast }\left( z\right) $ where $\deg P^{\ast }<m.$ We have to study two subcases:
\noindent $\left( \ast \right) $ If $d\neq ia_{m}$ $\left( i=1,\cdots ,n\right) ,$ then we have \begin{equation*} \alpha _{1}e^{a_{m}z^{m}}+\alpha _{2}e^{2a_{m}z^{m}}+\cdots +\alpha _{n}e^{na_{m}z^{m}}-e^{P^{\ast }}e^{dz^{m}}=-\alpha _{0}. \end{equation*} By using Lemma 2.2, we obtain $e^{P^{\ast }}\equiv 0,$ which is impossible.
\noindent $\left( \ast \ast \right) $ Suppose now that there exists at most one $ j\in \left\{ 1,2,\cdots ,n\right\} $ such that $d=ja_{m}.$ Without loss of generality, we assume that $j=n.$ Then we rewrite $\left( 3.14\right) $ as \begin{equation*} \alpha _{1}e^{a_{m}z^{m}}+\alpha _{2}e^{2a_{m}z^{m}}+\cdots +\left( \alpha _{n}-e^{P^{\ast }}\right) e^{na_{m}z^{m}}=-\alpha _{0}. \end{equation*} By using Lemma 2.2, we have $\alpha _{0}\equiv 0,$ so $\Delta _{c}^{n}e^{-P}=0.$ Thus \begin{equation} \overset{n}{\underset{i=0}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{-P_{ic}}\equiv 0. \tag{3.15} \end{equation} Suppose that $\deg P=\deg h=m>1$ and \begin{equation*} P\left( z\right) =b_{m}z^{m}+b_{m-1}z^{m-1}+...+b_{0},\text{ }\left( b_{m}\neq 0\right) . \end{equation*} Note that for $j=0,1,\cdots ,n,$ we have \begin{equation*} P\left( z+jc\right) =b_{m}z^{m}+\left( b_{m-1}+mb_{m}jc\right) z^{m-1}+\beta _{j}\left( z\right) , \end{equation*} where $\beta _{j}\left( z\right) $ are polynomials with degree less than $ m-1.$ Rewrite $\left( 3.15\right) $ as \begin{equation*} e^{-\beta _{n}\left( z\right) }e^{-b_{m}z^{m}-\left( b_{m-1}+mb_{m}nc\right) z^{m-1}}-ne^{-\beta _{n-1}\left( z\right) }e^{-b_{m}z^{m}-\left( b_{m-1}+mb_{m}\left( n-1\right) c\right) z^{m-1}} \end{equation*} \begin{equation} +\cdots +\left( -1\right) ^{n}e^{-\beta _{0}\left( z\right) }e^{-b_{m}z^{m}-b_{m-1}z^{m-1}}\equiv 0. \tag{3.16} \end{equation} For any $0\leq l<k\leq n,$ we have \begin{equation*} \rho \left( e^{-b_{m}z^{m}-\left( b_{m-1}+mb_{m}lc\right) z^{m-1}-\left( -b_{m}z^{m}-\left( b_{m-1}+mb_{m}kc\right) z^{m-1}\right) }\right) =\rho \left( e^{-mb_{m}\left( l-k\right) cz^{m-1}}\right) \end{equation*} \begin{equation*} =m-1, \end{equation*} and for $j=0,1,\cdots ,n,$ we see that \begin{equation*} \rho \left( e^{\beta _{j}}\right) \leq m-2. \end{equation*} By this, together with $\left( 3.16\right) $ and Lemma 2.2, we obtain $ e^{-\beta _{n}\left( z\right) }\equiv 0,$ which is impossible. 
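To make the expansion of $P\left( z+jc\right) $ used above concrete, here is the case $m=2$ written out (an illustration we add; it is not part of the original proof):

```latex
% For P(z)=b_2 z^2+b_1 z+b_0, a direct computation gives
\begin{equation*}
P\left( z+jc\right) =b_{2}\left( z+jc\right) ^{2}+b_{1}\left( z+jc\right)
+b_{0}=b_{2}z^{2}+\left( b_{1}+2b_{2}jc\right) z+\beta _{j},
\end{equation*}
% with \beta_j=b_2 j^2 c^2+b_1 jc+b_0 constant, i.e. of degree less than
% m-1=1, exactly as stated in the general formula.
```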
Suppose now that $P\left( z\right) =\mu z+\eta $ $\left( \mu \neq 0\right) $ and $ Q\left( z\right) =\alpha z+\beta ,$ because if $\deg Q>1,$ then we are back to case $\left( \text{ii}\right) .$ It is easy to see that \begin{equation*} \Delta _{c}^{n}e^{-P}=\overset{n}{\underset{i=0}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{-\mu \left( z+ic\right) -\eta }=e^{-P}\overset{n}{ \underset{i=0}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{-\mu ic} \end{equation*} \begin{equation*} =e^{-P}\left( e^{-\mu c}-1\right) ^{n}. \end{equation*} This together with $\Delta _{c}^{n}e^{-P}\equiv 0$ gives $\left( e^{-\mu c}-1\right) ^{n}\equiv 0,$ which yields $e^{\mu c}\equiv 1.$ Therefore, for any $j\in
\mathbb{Z}
$ \begin{equation*} e^{P\left( z+jc\right) }=e^{\mu z+\mu jc+\eta }=\left( e^{\mu c}\right) ^{j}e^{P\left( z\right) }=e^{P\left( z\right) }. \end{equation*} In order to prove that $e^{Q\left( z\right) }$ is also a periodic entire function with period $c,$ we suppose the contrary, which means that $ e^{\alpha c}\neq 1$. Since $e^{P\left( z\right) }$ is of period $c,$ then by $\left( 3.14\right) $, we get \begin{equation} \alpha _{1}e^{\left( \alpha -\mu \right) z}+\alpha _{2}e^{2\left( \alpha -\mu \right) z}+\cdots +\alpha _{n}e^{n\left( \alpha -\mu \right) z}=e^{\mu z+\eta }, \tag{3.17} \end{equation} where $\alpha _{i}$ $\left( i=1,\cdots ,n\right) $ are constants. In particular, \begin{equation*} \alpha _{n}=e^{n\left( \beta -\eta \right) +\alpha c\frac{n\left( n-1\right) }{2}} \end{equation*} and \begin{equation*} \alpha _{1}=\left[ \underset{i=1}{\overset{n}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}+\underset{i=2}{\overset{n}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{\alpha c}\right. \end{equation*} \begin{equation*} \left. 
+\underset{i=3}{\overset{n}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{2\alpha c}+\cdots +e^{\left( n-1\right) \alpha c}\right] e^{\left( \beta -\eta \right) } \end{equation*} \begin{equation*} =[C_{n}^{1}\left( -1\right) ^{n-1}+C_{n}^{2}\left( -1\right) ^{n-2}\left( 1+e^{\alpha c}\right) +C_{n}^{3}\left( -1\right) ^{n-3}\left( 1+e^{\alpha c}+e^{2\alpha c}\right) \end{equation*} \begin{equation*} +\cdots +C_{n}^{n}\left( -1\right) ^{n-n}\left( 1+e^{\alpha c}+\cdots +e^{\left( n-1\right) \alpha c}\right) ]e^{\left( \beta -\eta \right) } \end{equation*} \begin{equation*} =[C_{n}^{1}\left( -1\right) ^{n-1}\frac{e^{\alpha c}-1}{e^{\alpha c}-1} +C_{n}^{2}\left( -1\right) ^{n-2}\frac{e^{2\alpha c}-1}{e^{\alpha c}-1} +C_{n}^{3}\left( -1\right) ^{n-3}\frac{e^{3\alpha c}-1}{e^{\alpha c}-1} \end{equation*} \begin{equation*} +\cdots +C_{n}^{n}\left( -1\right) ^{n-n}\frac{e^{n\alpha c}-1}{e^{\alpha c}-1}]e^{\left( \beta -\eta \right) } \end{equation*} \begin{equation*} =[C_{n}^{1}\left( -1\right) ^{n-1}\left( e^{\alpha c}-1\right) +C_{n}^{2}\left( -1\right) ^{n-2}\left( e^{2\alpha c}-1\right) +C_{n}^{3}\left( -1\right) ^{n-3}\left( e^{3\alpha c}-1\right) \end{equation*} \begin{equation*} +\cdots +C_{n}^{n}\left( -1\right) ^{n-n}\left( e^{n\alpha c}-1\right) ] \frac{e^{\left( \beta -\eta \right) }}{e^{\alpha c}-1} \end{equation*} \begin{equation*} =\left[ \underset{i=0}{\overset{n}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}e^{i\alpha c}-\left( -1\right) ^{n}-\underset{i=1}{\overset{n}{\sum }} C_{n}^{i}\left( -1\right) ^{n-i}\right] \frac{e^{\left( \beta -\eta \right) } }{e^{\alpha c}-1} \end{equation*} \begin{equation*} =\left( e^{\alpha c}-1\right) ^{n-1}e^{\left( \beta -\eta \right) }. 
\end{equation*} Rewrite $\left( 3.17\right) $ as \begin{equation} \alpha _{1}e^{\left( \alpha -2\mu \right) z}+\alpha _{2}e^{\left( 2\alpha -3\mu \right) z}+\cdots +\alpha _{n}e^{\left( n\alpha -\left( n+1\right) \mu \right) z}=e^{\eta }. \tag{3.18} \end{equation} It is clear that for each $1\leq l<m\leq n,$ we have \begin{equation*} \rho \left( e^{\left( m\alpha -\left( m+1\right) \mu -l\alpha +\left( l+1\right) \mu \right) z}\right) =\rho \left( e^{\left( m-l\right) \left( \alpha -\mu \right) z}\right) =1. \end{equation*} We have the following two cases:
\noindent $\left( \text{i}_{1}\right) $ If $j\alpha -\left( j+1\right) \mu \neq 0$ for all $j\in \left\{ 1,2,\cdots ,n\right\} ,$ which means that \begin{equation*} \rho \left( e^{\left( j\alpha -\left( j+1\right) \mu \right) z}\right) =1, \text{ }1\leq j\leq n \end{equation*} then, by applying Lemma 2.2 we obtain $e^{\eta }\equiv 0,$ which is a contradiction.
\noindent $\left( \text{i}_{2}\right) $ If there exists an integer $j\in \left\{ 1,2,\cdots ,n\right\} $ $\left( \text{at most one}\right) $ such that $j\alpha -\left( j+1\right) \mu =0.$ Without loss of generality, assume that $e^{\left( n\alpha -\left( n+1\right) \mu \right) z}=1;$ then the equation $ \left( 3.18\right) $ becomes \begin{equation*} \alpha _{1}e^{\left( \alpha -2\mu \right) z}+\alpha _{2}e^{\left( 2\alpha -3\mu \right) z}+\cdots +\alpha _{n-1}e^{\left( \left( n-1\right) \alpha -n\mu \right) z}=e^{\eta }-e^{n\left( \beta -\eta \right) +\alpha c\frac{ n\left( n-1\right) }{2}} \end{equation*} and by applying Lemma 2.2, we obtain $\alpha _{1}=\left( e^{\alpha c}-1\right) ^{n-1}e^{\left( \beta -\eta \right) }\equiv 0,$ which is impossible. So, by $\left( \text{i}_{1}\right) $ and $\left( \text{i} _{2}\right) ,$ we deduce that $e^{\alpha c}\equiv 1$. Therefore, for any $ j\in
\mathbb{Z}
$ we have \begin{equation*} e^{Q\left( z+jc\right) }=e^{\alpha z+\beta }\left( e^{\alpha c}\right) ^{j}=e^{Q\left( z\right) }, \end{equation*} which implies that $e^{Q}$ is periodic of period $c.$ Since $e^{P\left( z\right) }$ is of period $c,$ then by $\left( 3.1\right) ,$ we obtain \begin{equation} \Delta _{c}^{n+1}f\left( z\right) =e^{P}\Delta _{c}f\left( z\right) , \tag{3.19} \end{equation} then $\Delta _{c}^{n+1}f\left( z\right) $ and $\Delta _{c}f\left( z\right) $ share $0$ CM. Substituting $\left( 3.19\right) $ into the second equation $ \left( 3.2\right) ,$ we get \begin{equation} e^{P\left( z\right) }\Delta _{c}f\left( z\right) =e^{Q\left( z\right) }\left( f\left( z\right) -a\left( z\right) \right) +a\left( z\right) . \tag{3.20} \end{equation} Since $e^{P\left( z\right) }$ and $e^{Q\left( z\right) }$ are of period $c,$ then by $\left( 3.20\right) ,$ we obtain \begin{equation} \Delta _{c}^{n+1}f\left( z\right) =e^{Q-P}\Delta _{c}^{n}f\left( z\right) . \tag{3.21} \end{equation} So, $\Delta _{c}^{n+1}f\left( z\right) $ and $\Delta _{c}^{n}f\left( z\right) $ share $0,a\left( z\right) $ CM. Combining $\left( 3.1\right) ,$ $\left( 3.2\right) $ and $\left( 3.21\right) ,$ we deduce that \begin{equation*} \frac{\Delta _{c}^{n+1}f\left( z\right) -a\left( z\right) }{\Delta _{c}^{n}f\left( z\right) -a\left( z\right) }=\frac{\Delta _{c}^{n+1}f\left( z\right) }{\Delta _{c}^{n}f\left( z\right) }, \end{equation*} and we get \begin{equation*} \Delta _{c}^{n+1}f\left( z\right) =\Delta _{c}^{n}f\left( z\right) \end{equation*} which is a contradiction. Suppose now that $P=c_{1}$ and $Q=c_{2}$ are constants $\left( e^{c_{1}}\neq e^{c_{2}}\right) .$ By $\left( 3.8\right) $ we have \begin{equation*} g_{c}\left( z\right) =\left( e^{c_{2}-c_{1}}+1\right) g\left( z\right) +a\left( z\right) e^{-c_{1}}; \end{equation*} similarly, \begin{equation*} g_{2c}\left( z\right) =\left( e^{c_{2}-c_{1}}+1\right) ^{2}g\left( z\right) +a\left( z\right) e^{-c_{1}}\left( \left( e^{c_{2}-c_{1}}+1\right) +1\right) . 
\end{equation*} By induction, we obtain \begin{equation*} g_{nc}\left( z\right) =\left( e^{c_{2}-c_{1}}+1\right) ^{n}g\left( z\right) +a\left( z\right) e^{-c_{1}}\underset{i=0}{\overset{n-1}{\sum }}\left( e^{c_{2}-c_{1}}+1\right) ^{i} \end{equation*} \begin{equation*} =\left( e^{c_{2}-c_{1}}+1\right) ^{n}g\left( z\right) +a\left( z\right) e^{-c_{2}}\left( \left( e^{c_{2}-c_{1}}+1\right) ^{n}-1\right) . \end{equation*} Rewrite the equation $\left( 3.6\right) $ as \begin{equation*} \Delta _{c}^{n}g\left( z\right) =\overset{n}{\underset{i=0}{\sum }} C_{n}^{i}\left( -1\right) ^{n-i}\left[ \left( e^{c_{2}-c_{1}}+1\right) ^{i}g\left( z\right) +a\left( z\right) e^{-c_{2}}\left( \left( e^{c_{2}-c_{1}}+1\right) ^{i}-1\right) \right] \end{equation*} \begin{equation*} =e^{c_{1}}g\left( z\right) +a\left( z\right) . \end{equation*} Since $A\left( z\right) \equiv 0,$ then we have \begin{equation*} \overset{n}{\underset{i=0}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}\left( e^{c_{2}-c_{1}}+1\right) ^{i}=e^{c_{1}} \end{equation*} and \begin{equation*} \overset{n}{\underset{i=0}{\sum }}C_{n}^{i}\left( -1\right) ^{n-i}\left( \left( e^{c_{2}-c_{1}}+1\right) ^{i}-1\right) =e^{c_{2}} \end{equation*} which are equivalent to \begin{equation*} e^{n\left( c_{2}-c_{1}\right) }=e^{c_{1}} \end{equation*} and \begin{equation*} e^{n\left( c_{2}-c_{1}\right) }=e^{c_{2}} \end{equation*} which is a contradiction.
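The passage from the factor $e^{-c_{1}}$ to $e^{-c_{2}}$ in the induction formula for $g_{nc}$ above is a geometric-sum identity; for the reader's convenience (our addition), with $x=e^{c_{2}-c_{1}}+1$ it reads:

```latex
% Geometric sum with ratio x, using x-1=e^{c_2-c_1}:
\begin{equation*}
e^{-c_{1}}\underset{i=0}{\overset{n-1}{\sum }}x^{i}
=e^{-c_{1}}\frac{x^{n}-1}{x-1}
=e^{-c_{1}}\frac{x^{n}-1}{e^{c_{2}-c_{1}}}
=e^{-c_{2}}\left( x^{n}-1\right) .
\end{equation*}
```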
\noindent \textbf{Part (2). }$h\left( z\right) $ is a constant. We show first that $P\left( z\right) $ is a constant. If $\deg P>0,$ from the equation $\left( 3.12\right) ,$ we see \begin{equation*} \deg P\leq \deg P-1, \end{equation*} which is a contradiction. Then $P\left( z\right) $ must be a constant, and since $h\left( z\right) =Q\left( z\right) -P\left( z\right) $ is a constant, we deduce that both $P\left( z\right) $ and $Q\left( z\right) $ are constants. This case is impossible too (the last case in Part (1)), and we deduce that $h\left( z\right) $ cannot be a constant. Thus, the proof of Theorem 1.1 is completed.
\quad
\noindent \textbf{Proof of Theorem 1.2. }Setting $g\left( z\right) =f\left( z\right) +b\left( z\right) -a\left( z\right) ,$ we remark that \begin{equation*} g\left( z\right) -b\left( z\right) =f\left( z\right) -a\left( z\right) , \end{equation*} \begin{equation*} \Delta _{c}^{n}g\left( z\right) -b\left( z\right) =\Delta _{c}^{n}f\left( z\right) -b\left( z\right) \end{equation*} and \begin{equation*} \Delta _{c}^{n+1}g\left( z\right) -b\left( z\right) =\Delta _{c}^{n+1}f\left( z\right) -b\left( z\right) ,\text{ }n\geq 2. \end{equation*} Since $f\left( z\right) -a\left( z\right) ,$\textit{\ }$\Delta _{c}^{n}f\left( z\right) -b\left( z\right) $\textit{\ }and $\Delta _{c}^{n+1}f\left( z\right) -b\left( z\right) $\ share $0$\ CM, it follows that $g\left( z\right) ,$\textit{\ }$\Delta _{c}^{n}g\left( z\right) $ \textit{\ }and $\Delta _{c}^{n+1}g\left( z\right) $ share $b\left( z\right) $ CM. By using Theorem 1.1, we deduce that $\Delta _{c}^{n+1}g\left( z\right) \equiv \Delta _{c}^{n}g\left( z\right) ,$ which leads to $\Delta _{c}^{n+1}f\left( z\right) \equiv \Delta _{c}^{n}f\left( z\right) ,$ and the proof of Theorem 1.2 is completed.
\quad
\noindent \textbf{Proof of Theorem 1.3. }Note that $f\left( z\right) $ is a nonconstant entire function of finite order. Since $f\left( z\right) ,$ \textit{\ }$\Delta _{c}^{n}f\left( z\right) $\textit{\ }and $\Delta _{c}^{n+1}f\left( z\right) $\ share $0$\textit{\ }CM, then we have \begin{equation} \frac{\Delta _{c}^{n}f\left( z\right) }{f\left( z\right) }=e^{P\left( z\right) } \tag{3.22} \end{equation} and \begin{equation} \frac{\Delta _{c}^{n+1}f\left( z\right) }{f\left( z\right) }=e^{Q\left( z\right) }, \tag{3.23} \end{equation} where $P$ and $Q$ are polynomials. If $Q-P$ is a constant, then we easily get from $\left( 3.22\right) $ and $\left( 3.23\right) $ \begin{equation*} \Delta _{c}^{n+1}f\left( z\right) =e^{Q\left( z\right) -P\left( z\right) }\Delta _{c}^{n}f\left( z\right) =:C\Delta _{c}^{n}f\left( z\right) . \end{equation*} This completes our proof. If $Q-P$ is not a constant, then by arguing similarly to the proof of Theorem 1.1, we can deduce that the case $\deg P=\deg \left( Q-P\right) >1$ is impossible. For the case $\deg P=\deg \left( Q-P\right) =1,$ we can obtain that $e^{P\left( z\right) }$ is a periodic entire function with period $c.$ This together with $\left( 3.22\right) $ yields \begin{equation} \Delta _{c}^{n+1}f\left( z\right) =e^{P\left( z\right) }\Delta _{c}f\left( z\right) \tag{3.24} \end{equation} which means that $f\left( z\right) ,$ $\Delta _{c}f\left( z\right) $ and $ \Delta _{c}^{n+1}f\left( z\right) $ share $0$ CM. Thus, by Theorem F, we obtain \begin{equation*} \Delta _{c}^{n+1}f\left( z\right) \equiv C\Delta _{c}f\left( z\right) , \end{equation*} which is a contradiction with $\left( 3.22\right) $ and $\deg P=1.$ Theorem 1.3 is thus proved.
\quad
\noindent \textbf{Acknowledgements.} The authors are grateful to the referee for his/her valuable comments which led to the improvement of this paper.
\end{document}
\begin{document}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \theoremstyle{plain} \newtheorem{theorem}{\bf Theorem}[section] \newtheorem{lemma}[theorem]{\bf Lemma} \newtheorem{corollary}[theorem]{\bf Corollary} \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{definition}[theorem]{\bf Definition} \newtheorem{remark}[theorem]{\it Remark}
\def\alpha} \def\cA{{\mathcal A}} \def\bA{{\bf A}} \def\mA{{\mathscr A}{\alpha} \def\cA{{\mathcal A}} \def\bA{{\bf A}} \def\mA{{\mathscr A}} \def\beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}{\beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}} \def\gamma} \def\cC{{\mathcal C}} \def\bC{{\bf C}} \def\mC{{\mathscr C}{\gamma} \def\cC{{\mathcal C}} \def\bC{{\bf C}} \def\mC{{\mathscr C}} \def\Gamma} \def\cD{{\mathcal D}} \def\bD{{\bf D}} \def\mD{{\mathscr D}{\Gamma} \def\cD{{\mathcal D}} \def\bD{{\bf D}} \def\mD{{\mathscr D}} \def\delta} \def\cE{{\mathcal E}} \def\bE{{\bf E}} \def\mE{{\mathscr E}{\delta} \def\cE{{\mathcal E}} \def\bE{{\bf E}} \def\mE{{\mathscr E}} \def\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}{\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}} \def\chi} \def\cG{{\mathcal G}} \def\bG{{\bf G}} \def\mG{{\mathscr G}{\chi} \def\cG{{\mathcal G}} \def\bG{{\bf G}} \def\mG{{\mathscr G}} \def\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}{\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}} \def\eta} \def\cI{{\mathcal I}} \def\bI{{\bf I}} \def\mI{{\mathscr I}{\eta} \def\cI{{\mathcal I}} \def\bI{{\bf I}} \def\mI{{\mathscr I}} \def\psi} \def\cJ{{\mathcal J}} \def\bJ{{\bf J}} \def\mJ{{\mathscr J}{\psi} \def\cJ{{\mathcal J}} \def\bJ{{\bf J}} \def\mJ{{\mathscr J}} \def\Theta} \def\cK{{\mathcal K}} \def\bK{{\bf K}} \def\mK{{\mathscr K}{\Theta} \def\cK{{\mathcal K}} \def\bK{{\bf K}} \def\mK{{\mathscr K}} \def\kappa} \def\cL{{\mathcal L}} \def\bL{{\bf L}} \def\mL{{\mathscr L}{\kappa} \def\cL{{\mathcal L}} \def\bL{{\bf L}} \def\mL{{\mathscr L}} \def\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}} \def\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}{\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}} \def\mu} \def\cO{{\mathcal O}} 
\def\bO{{\bf O}} \def\mO{{\mathscr O}{\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}} \def\nu} \def\cP{{\mathcal P}} \def\bP{{\bf P}} \def\mP{{\mathscr P}{\nu} \def\cP{{\mathcal P}} \def\bP{{\bf P}} \def\mP{{\mathscr P}} \def\rho} \def\cQ{{\mathcal Q}} \def\bQ{{\bf Q}} \def\mQ{{\mathscr Q}{\rho} \def\cQ{{\mathcal Q}} \def\bQ{{\bf Q}} \def\mQ{{\mathscr Q}} \def\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}{\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}}
\def{\mathcal S}{{\mathcal S}} \def{\bf S}} \def\mS{{\mathscr S}{{\bf S}} \def\mS{{\mathscr S}} \def\tau} \def\cT{{\mathcal T}} \def\bT{{\bf T}} \def\mT{{\mathscr T}{\tau} \def\cT{{\mathcal T}} \def\bT{{\bf T}} \def\mT{{\mathscr T}} \def\phi} \def\cU{{\mathcal U}} \def\bU{{\bf U}} \def\mU{{\mathscr U}{\phi} \def\cU{{\mathcal U}} \def\bU{{\bf U}} \def\mU{{\mathscr U}} \def\Phi} \def\cV{{\mathcal V}} \def\bV{{\bf V}} \def\mV{{\mathscr V}{\Phi} \def\cV{{\mathcal V}} \def\bV{{\bf V}} \def\mV{{\mathscr V}} \def\Psi} \def\cW{{\mathcal W}} \def\bW{{\bf W}} \def\mW{{\mathscr W}{\Psi} \def\cW{{\mathcal W}} \def\bW{{\bf W}} \def\mW{{\mathscr W}} \def\omega} \def\cX{{\mathcal X}} \def\bX{{\bf X}} \def\mX{{\mathscr X}{\omega} \def\cX{{\mathcal X}} \def\bX{{\bf X}} \def\mX{{\mathscr X}} \def\xi} \def\cY{{\mathcal Y}} \def\bY{{\bf Y}} \def\mY{{\mathscr Y}{\xi} \def\cY{{\mathcal Y}} \def\bY{{\bf Y}} \def\mY{{\mathscr Y}} \def\Xi} \def\cZ{{\mathcal Z}} \def\bZ{{\bf Z}} \def\mZ{{\mathscr Z}{\Xi} \def\cZ{{\mathcal Z}} \def\bZ{{\bf Z}} \def\mZ{{\mathscr Z}} \def\Omega{\Omega}
\newcommand{\mathfrak{A}}{\mathfrak{A}} \newcommand{\mathfrak{B}}{\mathfrak{B}} \newcommand{\mathfrak{C}}{\mathfrak{C}} \newcommand{\mathfrak{D}}{\mathfrak{D}} \newcommand{\mathfrak{E}}{\mathfrak{E}} \newcommand{\mathfrak{F}}{\mathfrak{F}} \newcommand{\mathfrak{G}}{\mathfrak{G}} \newcommand{\mathfrak{H}}{\mathfrak{H}} \newcommand{\mathfrak{I}}{\mathfrak{I}} \newcommand{\mathfrak{J}}{\mathfrak{J}} \newcommand{\mathfrak{K}}{\mathfrak{K}} \newcommand{\mathfrak{L}}{\mathfrak{L}} \newcommand{\mathfrak{M}}{\mathfrak{M}} \newcommand{\mathfrak{N}}{\mathfrak{N}} \newcommand{\mathfrak{O}}{\mathfrak{O}} \newcommand{\mathfrak{P}}{\mathfrak{P}} \newcommand{\mathfrak{R}}{\mathfrak{R}} \newcommand{\mathfrak{S}}{\mathfrak{S}} \newcommand{\mathfrak{T}}{\mathfrak{T}} \newcommand{\mathfrak{U}}{\mathfrak{U}} \newcommand{\mathfrak{V}}{\mathfrak{V}} \newcommand{\mathfrak{W}}{\mathfrak{W}} \newcommand{\mathfrak{X}}{\mathfrak{X}} \newcommand{\mathfrak{Y}}{\mathfrak{Y}} \newcommand{\mathfrak{Z}}{\mathfrak{Z}}
\def\varepsilon} \def\vt{\vartheta} \def\vp{\varphi} \def\vk{\varkappa{\varepsilon} \def\vt{\vartheta} \def\vp{\varphi} \def\vk{\varkappa}
\def{\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}{{\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}} \def{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}{{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}} \def{\mathbb B}} \def\dS{{\mathbb S}{{\mathbb B}} \def\dS{{\mathbb S}}
\def\leftarrow} \def\ra{\rightarrow} \def\Ra{\Rightarrow{\leftarrow} \def\ra{\rightarrow} \def\Ra{\Rightarrow} \def\uparrow} \def\da{\downarrow{\uparrow} \def\da{\downarrow} \def\leftrightarrow} \def\Lra{\Leftrightarrow{\leftrightarrow} \def\Lra{\Leftrightarrow}
\def\biggl} \def\rt{\biggr{\biggl} \def\rt{\biggr} \def\overline} \def\wt{\widetilde{\overline} \def\wt{\widetilde} \def\noindent{\noindent}
\let\ge\geqslant \let\le\leqslant \def\langle} \def\ran{\rangle{\langle} \def\ran{\rangle} \def\over} \def\iy{\infty{\over} \def\iy{\infty} \def\setminus} \def\es{\emptyset{\setminus} \def\es{\emptyset} \def\subset} \def\ts{\times{\subset} \def\ts{\times} \def\partial} \def\os{\oplus{\partial} \def\os{\oplus} \def\ominus} \def\ev{\equiv{\ominus} \def\ev{\equiv} \def\int\!\!\!\int} \def\iintt{\mathop{\int\!\!\int\!\!\dots\!\!\int}\limits{\int\!\!\!\int} \def\iintt{\mathop{\int\!\!\int\!\!\dots\!\!\int}\limits} \def\ell^{\,2}} \def\1{1\!\!1{\ell^{\,2}} \def\1{1\!\!1} \def\sharp{\sharp} \def\widehat{\widehat}
\def\mathop{\mathrm{where}}\nolimits{\mathop{\mathrm{where}}\nolimits}
\def\mathop{\mathrm{as}}\nolimits{\mathop{\mathrm{as}}\nolimits} \def\mathop{\mathrm{Area}}\nolimits{\mathop{\mathrm{Area}}\nolimits} \def\mathop{\mathrm{arg}}\nolimits{\mathop{\mathrm{arg}}\nolimits} \def\mathop{\mathrm{const}}\nolimits{\mathop{\mathrm{const}}\nolimits} \def\mathop{\mathrm{det}}\nolimits{\mathop{\mathrm{det}}\nolimits} \def\mathop{\mathrm{diag}}\nolimits{\mathop{\mathrm{diag}}\nolimits} \def\mathop{\mathrm{diam}}\nolimits{\mathop{\mathrm{diam}}\nolimits} \def\mathop{\mathrm{dim}}\nolimits{\mathop{\mathrm{dim}}\nolimits} \def\mathop{\mathrm{dist}}\nolimits{\mathop{\mathrm{dist}}\nolimits} \def\mathop{\mathrm{Im}}\nolimits{\mathop{\mathrm{Im}}\nolimits} \def\mathop{\mathrm{Iso}}\nolimits{\mathop{\mathrm{Iso}}\nolimits} \def\mathop{\mathrm{Ker}}\nolimits{\mathop{\mathrm{Ker}}\nolimits} \def\mathop{\mathrm{Lip}}\nolimits{\mathop{\mathrm{Lip}}\nolimits} \def\mathop{\mathrm{rank}}\limits{\mathop{\mathrm{rank}}\limits} \def\mathop{\mathrm{Ran}}\nolimits{\mathop{\mathrm{Ran}}\nolimits} \def\mathop{\mathrm{Re}}\nolimits{\mathop{\mathrm{Re}}\nolimits} \def\mathop{\mathrm{Res}}\nolimits{\mathop{\mathrm{Res}}\nolimits} \def\mathop{\mathrm{res}}\limits{\mathop{\mathrm{res}}\limits} \def\mathop{\mathrm{sign}}\nolimits{\mathop{\mathrm{sign}}\nolimits} \def\mathop{\mathrm{span}}\nolimits{\mathop{\mathrm{span}}\nolimits} \def\mathop{\mathrm{supp}}\nolimits{\mathop{\mathrm{supp}}\nolimits} \def\mathop{\mathrm{Tr}}\nolimits{\mathop{\mathrm{Tr}}\nolimits} \def\hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}{\hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}}
\newcommand\nh[2]{\widehat{#1}\vphantom{#1}^{(#2)}}
\def\diamond{\diamond}
\def\bigoplus\nolimits{\bigoplus\nolimits}
\def\qquad{\qquad} \def\quad{\quad} \let\ge\geqslant \let\le\leqslant \let\geq\geqslant \let\leq\leqslant
\newcommand{\begin{aligned}}{\begin{aligned}} \newcommand{\end{aligned}}{\end{aligned}}
\newcommand{\begin{cases}}{\begin{cases}} \newcommand{\end{cases}}{\end{cases}} \newcommand{\begin{pmatrix}}{\begin{pmatrix}} \newcommand{\end{pmatrix}}{\end{pmatrix}} \renewcommand{\[}{\begin{equation}} \renewcommand{\end{equation}}{\end{equation}} \def\bullet{\bullet}
\title[{Trace formulas for Schr\"odinger operators on lattices}] {Trace formulae for Schr\"odinger operators with complex-valued potentials on cubic lattices}
\date{\today}
\author[Evgeny Korotyaev]{Evgeny Korotyaev} \address{Saint-Petersburg State University, Universitetskaya nab. 7/9, St. Petersburg, 199034, Russia, \ [email protected], \ [email protected] }
\author[Ari Laptev]{Ari Laptev} \address{ Imperial College London, United Kingdom, \ [email protected] }
\subjclass{34A55, (34B24, 47E05)}\keywords{scattering, lattice}
\begin{abstract} We consider Schr\"odinger operators with complex decaying potentials on the lattice. Using classical results from complex analysis, we obtain trace formulae and use them to estimate globally all zeros of the Fredholm determinant in terms of the potential.
\end{abstract}
\maketitle
\section{Introduction}
\noindent Let us consider the Schr\"odinger operator $H$ acting in $\ell^2({\mathbb Z}^{d}), d\ge 3$, and given by $$ {H}=H_0+V, \qquad H_0=\Delta, $$ where ${\Delta}$ is the discrete Laplacian on ${\mathbb Z}^d$ given by $$ \big(\Delta f \big)(n)=\frac{1}{2}\sum_{j=1}^{d}\big(f(n+ e_{j}) + f(n- e_{j})\big),\qquad n=(n_j)_{j=1}^d\in {\mathbb Z}^d, $$ for $f =(f_n)_{n\in{\mathbb Z}^d} \in \ell^{2}({\mathbb Z}^d)$. Here $ e_{1} = (1,0,\cdots,0), \cdots, e_{d} = (0,\cdots,0,1) $
is the standard basis of ${\mathbb Z}^d$. The operator $V = (V_n)_{n\in{\mathbb Z}^d}$, $V_n\in{\mathbb C}$, is a complex potential given by
\begin{equation*} (Vf)(n)=V_nf_n, \qquad n\in {\mathbb Z}^d. \end{equation*}
We assume that the potential $V$ satisfies the following condition:
\begin{equation} \label{Vc} V\in \ell^{2/3}(\Bbb Z^d). \end{equation}
Note that the condition \eqref{Vc} implies that $V$ can be factorised as
\begin{equation} \label{Vfact} V = V_1V_2, \qquad {\rm where}\quad V_1\in \ell^1(\Bbb Z^d), \, V_2 \in \ell ^2(\Bbb Z^d), \end{equation}
with $V_1=|V |^{2/3-1} V$ and $V_2=|V|^{1/3}$.
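A direct computation (added here for convenience; it is not in the original text) confirms this factorisation and shows that both factors are finite exactly under condition \eqref{Vc}:

```latex
% Pointwise: V_1 V_2 = |V|^{2/3-1} V |V|^{1/3} = V, and
\begin{equation*}
\|V_{1}\|_{1}=\sum_{n\in {\mathbb Z}^{d}}|V_{n}|^{2/3}=\|V\|_{2/3}^{2/3},
\qquad
\|V_{2}\|_{2}^{2}=\sum_{n\in {\mathbb Z}^{d}}|V_{n}|^{2/3}=\|V\|_{2/3}^{2/3},
\end{equation*}
% so V_1\in\ell^1({\mathbb Z}^d) and V_2\in\ell^2({\mathbb Z}^d) precisely
% when V\in\ell^{2/3}({\mathbb Z}^d).
```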
\noindent Here $\ell^q({\mathbb Z}^{d}), q>0$, is the space of sequences
$f=(f_n)_{n\in {\mathbb Z}^d}$ such that $\|f\|_{q}<\infty$, where $$ \begin{aligned}
\|f\|_{q}=\|f\|_{\ell^q({\mathbb Z}^{d})} =
\begin{cases} \sup_{n\in {\mathbb Z}^d}|f_n|,\quad & \ q=\infty, \\
\big(\sum_{n\in {\mathbb Z}^d}|f_n|^q\big)^{1/q},\quad & \ q\in (0,\infty). \end{cases} \end{aligned} $$
Note that $\ell^q({\mathbb Z}^{d}), q\ge 1$, is a Banach space equipped with the norm $\|\cdot\|_{q}$.
It is well-known that the spectrum of the Laplacian $\Delta$ is absolutely continuous and equals $$ \sigma(\Delta)=\sigma_{\textup{ac}}(\Delta)=[-d,d]. $$ It is also well known that if $V$ satisfies \eqref{Vc}, the essential spectrum of the Schr\"odinger operator $H$ on $\ell^2({\mathbb Z}^d)$ is $$ \sigma_{\textup{ess}}(H)=[-d,d]. $$ However, this condition does not exclude the appearance of singular continuous spectrum on the interval $[-d,d]$. Our main goal is to find new trace formulae for the operator $H$ with complex potentials $V$ and to use these formulae for estimates of complex eigenvalues in terms of the potential.
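For the reader's convenience, the identity $\sigma (\Delta )=[-d,d]$ can be read off from the Fourier symbol of $\Delta $ (a standard computation which we add here; it is not part of the original text at this point):

```latex
% With (\mathcal{F}f)(\theta)=\sum_{n\in{\mathbb Z}^d} f_n e^{-i n\cdot\theta},
% the discrete Laplacian becomes a multiplication operator:
\begin{equation*}
(\mathcal{F}\Delta f)(\theta )=\Big( \sum_{j=1}^{d}\cos \theta _{j}\Big)
(\mathcal{F}f)(\theta ),\qquad \theta =(\theta _{j})_{j=1}^{d}\in \lbrack
-\pi ,\pi )^{d},
\end{equation*}
% and the range of the symbol \sum_j \cos\theta_j is exactly [-d,d].
```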
Note that some of the results obtained in this paper are new even in the case of real-valued potentials due to the presence of the measure $\sigma$ (see Theorem \ref{T3}) appearing in the canonical factorisation of the respective Fredholm determinants. Non-triviality of such a measure is due to the weak condition \eqref{Vc} on the potential $V$. We believe that it would be interesting to study the relation between the properties of $V$ and $\sigma$.
Recently, uniform bounds on eigenvalues of Schr\"odinger operators in ${\mathbb R}^d$ with complex-valued potentials decaying at infinity have attracted the attention of many specialists. We refer to \cite{Da} for a review of the state of the art of non-selfadjoint Schr\"odinger operators and for motivations and applications. Bounds on single eigenvalues were proved, for instance, in \cite{AAD,DN,FrLaSe,Fr}, and bounds on sums of powers of eigenvalues were found in \cite{FrLaLiSe,LaSa,DeHaKa0,DeHaKa,BGK,FrSa,Fr3}. The latter bounds generalise the Lieb--Thirring bounds \cite{LiTh} to the non-selfadjoint setting. Note that in \cite{FrSa} (Theorem 16) the authors obtained estimates on the sum of the distances between the complex eigenvalues and the continuous spectrum $[0,\infty)$ in terms of $L^p$-norms of the potentials. Note that almost no results are known on the number of eigenvalues of Schr\"odinger operators with complex potentials. We refer here to a recent paper \cite{FLS} where the authors discuss this problem in detail in odd dimensions.
For the discrete Schr\"odinger operators most of the results were obtained in the self-adjoint case, see, for example, \cite{T89} (for the ${\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^1$ case). Schr\"odinger operators with decreasing potentials on the lattice ${\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d$ have been considered by Boutet de Monvel-Sahbani \cite{BS99}, Isozaki-Korotyaev \cite{IK12}, Kopylova \cite{Kop10}, Rosenblum-Solomjak \cite{RoS09}, Shaban-Vainberg \cite{SV01} and see references therein. Ando \cite{A12} studied the inverse spectral theory for the discrete Schr\"odinger operators with finitely supported potentials on the hexagonal lattice. Scattering on periodic metric graphs $\Bbb Z^d$ was considered by Korotyaev-Saburova \cite{KS}.
\noindent Isozaki and Morioka (see Theorem 2.1 in \cite{IM14}) proved that if the potential $V$ is real and compactly supported, then the point spectrum of $H$ on the interval $(-d,d)$ is empty. Note that in [10] the author gave an example of an embedded eigenvalue at the endpoints $\pm d$.
\noindent In this paper we use classical results from complex analysis that lead us to a new class of trace formulae for the spectra of discrete multi-dimensional Schr\"odinger operators with complex-valued potentials. In particular, we consider the so-called canonical factorisation of analytic functions from Hardy spaces via their inner and outer factors, see Section 6. Such factorisations, applied to Fredholm determinants, allow us to obtain trace formulae that lead to inequalities on the complex spectrum in terms of the $L^{2/3}$ norm of the potential. Note also that in the case $d=3$ we use a delicate uniform inequality for Bessel functions obtained in Lemma \ref{case3}.
\section{Some notations and statements of main results}
We denote by $\dD_r(z_0)\subset\C$ the disc with radius $r>0$ and center $z_0\in \C$ $$
\dD_r(z_0)=\{z\in \C:|z-z_0|<r\},
$$ and abbreviate $\dD_r=\dD_r(0)$ and $\dD=\dD_1$. Let also $\Bbb T = \partial \Bbb D$. It is convenient to introduce a new spectral variable $z\in \dD$ by
\begin{equation}\label{lambda} \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}=\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)={d\/2}\rt(z+{1\/z}\rt)\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}=\C\setminus} \def\es{\emptyset [-d,d] ,\qquad z\in \dD. \end{equation}
The function $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)$ has the following properties:
{\it $\bullet$ $z\to \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)$ is a conformal mapping from $\dD$ onto the spectral domain $\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$.
$\bullet$ $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(\dD)=\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}=\C\setminus} \def\es{\emptyset [-d,d] $ and $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(\dD\cap \C_\mp)=\C_\pm $.
$\bullet$ $\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$ is the cut domain with the cut $[-d,d]$, having the upper side $[-d,d]+i0$ and the lower side $[-d,d]-i0$. $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)$ maps the boundary: the upper semi-circle onto the lower side $[-d,d]-i0$ and the lower semi-circle onto the upper side $[-d,d]+i0$.
$\bullet$ $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)$ maps $z=0$ to $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}=\iy$.
$\bullet$ The inverse mapping $z(\cdot ): \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}\to \dD$ is given by $$ \begin{aligned} z={1\/d}\rt(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2}\rt),\qquad \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N},\\
z={d\/2\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}}+{O(1)\over} \def\iy{\infty\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^3},\qquad as \quad |\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}|\to \iy. \end{aligned} $$ }
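\noindent For completeness, we sketch how the second formula for the inverse mapping follows from the first one: expanding the square root for large $|\lambda|$ via $(1-u)^{1\/2}=1-{u\/2}-{u^2\/8}+O(u^3)$ with $u={d^2\/\lambda^2}$, we get
$$
\sqrt{\lambda^2-d^2}=\lambda-{d^2\/2\lambda}-{d^4\/8\lambda^3}+O(\lambda^{-5}),
\qquad \textup{so}\qquad
z={1\/d}\Big(\lambda-\sqrt{\lambda^2-d^2}\Big)={d\/2\lambda}+{d^3\/8\lambda^3}+O(\lambda^{-5}).
$$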
Next we introduce the Hardy space $ \mH_p=\mH_p(\dD)$. Let $F$ be analytic in $\dD$. For $0<p\le \iy$ we say that $F$ belongs to the Hardy space $ \mH_p$
if $F$ satisfies $\|F\|_{\mH_p}<\iy$, where $\|F\|_{\mH_p}$ is given by $$
\|F\|_{\mH_p}= \begin{cases} \sup_{r\in (0,1)}\rt({1\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}
|F(re^{i\vt})|^pd\vt\rt)^{1\/p}, & if \qquad 0< p<\iy,\\
\sup_{z\in \dD}|F(z)|, & if \qquad p=\iy.
\end{cases} $$ Let $\cB$ denote the class of bounded operators and $\cB_1$ and
$\cB_2$ be the trace and the Hilbert-Schmidt class equipped with the norm $\|\cdot \|_{\cB_1}$ and $ \|\cdot \|_{\cB_2}$ respectively.
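\noindent {\bf Example.} To illustrate the scale of Hardy spaces introduced above, consider $F(z)={1\/1-z}$: for $p<1$ the integrals ${1\/2\pi}\int_{-\pi}^{\pi}|1-re^{i\vt}|^{-p}d\vt$ are bounded uniformly in $r\in (0,1)$, while for $p=1$ they grow like $\log {1\/1-r}$ as $r\to 1$. Hence $F\in \mH_p$ for all $p<1$, but $F\notin \mH_1$.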
Denote by $D(z)$, $z\in \dD$, the determinant $$ D(z)=\mathop{\mathrm{det}}\nolimits \left(I+VR_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z))\right), \qquad z\in \dD, $$ where $R_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=(H_0-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})^{-1}, \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$. The determinant $D(z), z\in \dD$, is well defined for $V\in \cB_1$, and if $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_0\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$ is an eigenvalue of $H$, then $z_0=z(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_0)\in \dD$ is a zero of $D$ with the same multiplicity.
\begin{theorem} \label{T1} Let a potential $V$ satisfy \er{Vc}. Then the determinant $D(z)=\mathop{\mathrm{det}}\nolimits (I+VR_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z))), \ z\in \dD$, is analytic in $\dD$. It has $N\le \iy$ zeros $\{z_j\}_{j=1}^N$, such that
\[ \label{zD} \begin{aligned}
& 0<r_0=|z_1|\le |z_2|\le \dots\le |z_j|\le |z_{j+1}|\le
|z_{j+2}|\le \dots ,\\
& where \qquad r_0=\inf |z_j|>0. \end{aligned} \end{equation} Moreover, it satisfies
\begin{equation} \label{D1} \begin{aligned}
\|D\|_{\mH_\iy}\le e^{C\|V\|_{2/3}}, \end{aligned} \end{equation}
where the constant $C$ depends only on $d$.
\noindent Furthermore, the function $\log D(z)$, whose branch is defined by $\log D(0)=0$, is analytic in the disc $\dD_{r_0}$ with the radius $r_0>0$ defined by
\er{zD}, and it has the Taylor series for $|z|<r_0$: \[ \label{D3} \log D(z)=-c_1z-c_2z^2-c_3z^3-c_4z^4 -\dots \end{equation}
where \[ \label{D4} c_1=d_1a,\qquad c_2=d_2a^2, \qquad c_3=d_3a^3-c_1, \qquad c_4=d_4a^4-c_2,.... \end{equation} \begin{equation} \label{cD4} \begin{aligned} d_1=\mathop{\mathrm{Tr}}\nolimits V,\qquad d_2= \mathop{\mathrm{Tr}}\nolimits\,V^2,\qquad d_3=\mathop{\mathrm{Tr}}\nolimits\,\big(V^3+(3d/2)V\big),...., \end{aligned} \end{equation} and where $a={2\/d}$. \end{theorem}
\noindent Define the Blaschke product $B(z), z\in \dD$ by \[ \label{B2} \begin{aligned}
& B(z)=\prod_{j=1}^N {|z_j|\/z_j}{(z_j-z)\over} \def\iy{\infty(1-\overline} \def\wt{\widetilde z_j z)},\qquad
& if \qquad N\ge 1,\\ &B=1, \qquad & if \qquad N=0. \end{aligned} \end{equation}
\begin{theorem}\label{T2} Let a potential $V$ satisfy \er{Vc} and let $N\ge 1$. Then the zeros $\{z_j\}$ of $D$ in the disk $\dD$ labeled by \er{zD} satisfy \[
\label{B1} \sum _{j=1}^N (1-|z_j|)<\iy. \end{equation} Moreover, the Blaschke product $B(z), z\in \dD$ given by \er{B2}
converges absolutely for $\{|z|<1\}$ and satisfies
i) $B\in \mH_\iy$ with $\|B\|_{\mH_\iy}\le 1$, \[
\label{B3} \lim_{r\to 1}|B(re^{i\vt})|=|B(e^{i\vt})|=1 \quad {\rm for \ almost \ all}\quad \vt\in {\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}, \end{equation} and \[
\label{B4} \lim _{r\to 1}\int_0^{2\pi}\log |B(re^{i\vt})|d\vt=0. \end{equation}
ii) The determinant $D$ has the factorization in the disc $\dD$:
\begin{equation*}
D=BD_B, \end{equation*}
where $D_B$ is analytic in the unit disc $\dD$ and has no zeros in $\dD$.
iii) The Blaschke product $B$ has the Taylor series at $z=0$: \[ \begin{aligned} \label{B6} \log B(z)=B_0-B_1z-B_2z^2-... \qquad as \qquad z\to 0, \end{aligned} \end{equation} where $B_n$ satisfy
\begin{equation*} \begin{aligned}
& B_0=\log B(0)<0,\qquad B_1=\sum_{j=1}^N\rt({1\/z_j}-\overline} \def\wt{\widetilde z_j \rt),..., \qquad B_n={1\/n}\sum_{j=1}^N\rt({1\/z_j^n}-\overline} \def\wt{\widetilde z_j^n \rt),....\\
& |B_n|\le {2\/r_0^n}\sum _{j=1}^N (1-|z_j|). \end{aligned} \end{equation*}
\end{theorem}
\noindent The next statement describes the canonical representation of the determinant $D(z)$.
\begin{theorem} \label{T3} Let a potential $V$ satisfy \er{Vc}. Then
i) There exists a singular measure $\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}\ge 0$ on $[-\pi,\pi]$, such that the determinant $D$ has a canonical factorization for all
$|z|<1$ given by \[ \label{cfD} \begin{aligned} & D(z)=B(z)e^{-K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R} (z)}e^{K_D(z)},\\ & K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(z)={1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(t),\\ & K_D(z)= {1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}\log
|D(e^{it})|dt,
\end{aligned} \end{equation}
where $\log |D(e^{it}) |\in L^1(-\pi,\pi)$.
ii) The measure $\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}$ satisfies \[ \mathop{\mathrm{supp}}\nolimits \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}\subset} \def\ts{\times \{t\in [-\pi,\pi]: D(e^{it})=0\}. \end{equation} \end{theorem}
\noindent {\bf Remarks.}
\noindent 1) For the canonical factorisation of analytic functions see, for example, \cite{Koo98}.
\noindent 2)
Note that for $D_{in}(z)$ defined by $D_{in}(z)= B(z) e^{-K_\sigma(z)}$, we have $| D_{in}(z)|\le 1$, since $d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}\ge 0$ and $\mathop{\mathrm{Re}}\nolimits {e^{it}+z\/e^{it}-z}\ge 0$ for all $(t,z)\in {\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}\ts\dD$.
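\noindent Indeed, the last inequality is the standard Poisson kernel computation: for all $(t,z)\in {\mathbb T}\times \dD$,
$$
\mathop{\mathrm{Re}}\nolimits {e^{it}+z\/e^{it}-z}={\mathop{\mathrm{Re}}\nolimits \big((e^{it}+z)(e^{-it}-\overline z)\big)\/|e^{it}-z|^2}={1-|z|^2\/|e^{it}-z|^2}\ge 0.
$$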
\noindent 3) The closure of the set $\{z_j\}\cup \mathop{\mathrm{supp}}\nolimits \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}$ is called the spectrum of the inner function $ D_{in}$.
\noindent 4) $D_B={D\/B}$ has no zeros in the disk $\dD$ and satisfies $$ \log D_B(z)={1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), $$ where the measure $\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}$ equals $$
d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t)=\log
|D(e^{it})|dt-d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(t). $$
\begin{theorem} \label{T4} {\bf (Trace formulae.)} Let $V$ satisfy \er{Vc}. Then the following identities hold \[
\label{tr0} {\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})\/2\pi}-B_0={1\/2\pi}\int_{-\pi}^{\pi}\log |D(e^{it})|dt\ge 0, \end{equation} \[ \label{tr1} -D_n+B_n={1\over} \def\iy{\infty\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} e^{-int}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t),\qquad n=1,2,.... \end{equation}
where $B_0=\log B(0)=\log \left(\prod_{j=1}^N |z_j|\right)<0$ and $B_n$ are given by \er{B6}. In particular, \[ \label{tr4} \sum_{j=1}^N\rt({1\/z_j}-\overline} \def\wt{\widetilde z_j \rt)={2\/d}\mathop{\mathrm{Tr}}\nolimits \, V+{1\over} \def\iy{\infty\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} e^{-it}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), \end{equation} \[ \label{tr55} \sum_{j=1}^N\rt({1\/z_j^2}-\overline} \def\wt{\widetilde z_j^2 \rt)={4\/d^2}\mathop{\mathrm{Tr}}\nolimits \, V^2+{1\over} \def\iy{\infty\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} e^{-i2t}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), \end{equation} and \[ \label{t52} \begin{aligned}
\sum_{j=1}^N \mathop{\mathrm{Im}}\nolimits\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j=\mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Im}}\nolimits V-{d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \sin t\, d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), \\ \sum_{j=1}^N \mathop{\mathrm{Re}}\nolimits\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j^2-d^2}=\mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Re}}\nolimits V+{d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \cos t\,d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t). \end{aligned} \end{equation}
\end{theorem}
\begin{theorem} \label{T5} Let $V$ satisfy \er{Vc}. Then we have the following estimates: \[
\label{t51} \sum (1-|z_j|)\le -B_0\le C(d)\|V\|_{2/3}-{\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})\/2\pi}. \end{equation} If $\mathop{\mathrm{Im}}\nolimits V\ge 0$, then \[ \label{t51x}
\sum_{j=1}^N \mathop{\mathrm{Im}}\nolimits\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j\le \mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Im}}\nolimits V+C(d)\|V\|_{2/3}, \end{equation} and if $ V\ge 0$, then \[ \label{t51xx}
\sum_{j=1}^N \sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j^2-d^2}\le \mathop{\mathrm{Tr}}\nolimits V+C(d)\|V\|_{2/3}. \end{equation}
\end{theorem}
\noindent {\bf Remark.}
\noindent Note that some of the results stated in Theorems \ref{T4} and \ref{T5} are new even for real-valued potentials, see Section 5.
\section {Determinants } \setcounter{equation}{0}
\subsection {Properties of the Laplacian } One may diagonalize the discrete Laplacian using the (unitary) Fourier transform
$\Phi} \def\cV{{\mathcal V}} \def\bV{{\bf V}} \def\mV{{\mathscr V}\colon \ell^2({\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d)\to L^2(\Bbb S^d)$, where $\Bbb S=\R/(2\pi {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C})$. It is defined by
$$
(\Phi} \def\cV{{\mathcal V}} \def\bV{{\bf V}} \def\mV{{\mathscr V} f)(k)=\widehat f(k)={1\over} \def\iy{\infty(2\pi)^{{d\/2}}}\sum_{n\in
{\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d} f_ne^{i(n,k)},\quad \textup{where} \quad
k=(k_j)_{j=1}^d\in \Bbb S^d.
$$
Here $(\cdot,\cdot)$ is the scalar product in $\R^d$. In the so-called momentum representation of the operator $H$, we have: $$
\Phi} \def\cV{{\mathcal V}} \def\bV{{\bf V}} \def\mV{{\mathscr V} H \Phi} \def\cV{{\mathcal V}} \def\bV{{\bf V}} \def\mV{{\mathscr V}^*=\widehat \Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F} +\widehat V. $$ The Laplacian is transformed into the multiplication operator $$ (\widehat \Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F} \widehat f)(k)=h(k)\widehat f(k),\qquad h(k)=\sum_1^d \cos k_j,\qquad k\in \Bbb S^d, $$ and the potential $V$ becomes a convolution operator $$ (\widehat V\widehat f)(k)={1\over} \def\iy{\infty(2\pi)^{d\/2}}\int_{\Bbb S^d} \widehat V(k-k')\widehat f(k')dk', $$ where $$ \widehat V(k)={1\over} \def\iy{\infty(2\pi)^{d\/2}}\sum_{n\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d}V_ne^{i(n,k)},\qquad V_n={1\over} \def\iy{\infty(2\pi)^{d\/2}}\int_{\Bbb S^d} \widehat V(k)e^{-i(n,k)}dk. $$
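\noindent For the reader's convenience we verify the first of these formulas; here we use the normalisation $(\Delta f)_n=\tau\sum_{j=1}^d\big(f_{n+e_j}+f_{n-e_j}\big)$ with $\tau={1\/2}$, the convention consistent with $h(k)=\sum_1^d \cos k_j$ and with the coefficients in Lemma \ref{TaD1} below. Shifting the summation index, we obtain
$$
(\Phi \Delta f)(k)={\tau\/(2\pi)^{d\/2}}\sum_{n\in {\mathbb Z}^d}\sum_{j=1}^d\big(f_{n+e_j}+f_{n-e_j}\big)e^{i(n,k)}
=\tau\sum_{j=1}^d\big(e^{-ik_j}+e^{ik_j}\big)\widehat f(k)=h(k)\widehat f(k).
$$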
\subsection{Trace class operators} Here, for the sake of completeness, we give some standard facts from operator theory in Hilbert spaces.
\noindent Let $\mathcal H$ be a Hilbert space endowed with inner product $(\,
, \, )$ and norm $\|\cdot\|$. Let $\cB_1$ be the set of all trace class operators on $\mathcal H$ equipped with the trace norm
$\|\cdot\|_{\cB_1}$. Let us recall some well-known facts.
$\bullet$ Let $A, B\in \cB$ and $AB, BA\in \cB_1$. Then \begin{equation*} \label{AB} {\rm Tr}\, AB={\rm Tr}\, BA, \end{equation*} \begin{equation*} \label{1+AB} \mathop{\mathrm{det}}\nolimits (I+ AB)=\mathop{\mathrm{det}}\nolimits (I+BA). \end{equation*} \begin{equation*}
\label{DA1} |\mathop{\mathrm{det}}\nolimits (I+ A)|\le e^{\|A\|_{\cB_1}}. \end{equation*} \begin{equation*}
\label{DA1x} |\mathop{\mathrm{det}}\nolimits (I+ A)-\mathop{\mathrm{det}}\nolimits (I+ B)|\le \|A-B\|_{\cB_1}
e^{1+\|A\|_{\cB_1}+\|B\|_{\cB_1}}. \end{equation*} Moreover, $I+ A$ is invertible if and only if $\mathop{\mathrm{det}}\nolimits (I+ A)\ne 0$.
$\bullet$ Suppose that, for a domain $\mD \subset {\C}$, the function $\Omega(\cdot)-I: \mD\to \cB_1 $ is analytic and $\Omega(z)$ is invertible for any $z\in \mD$. Then for $F(z)=\mathop{\mathrm{det}}\nolimits \Omega (z)$ we have
\begin{equation*}
F'(z)= F(z){\rm Tr}\,\left(\Omega(z)^{-1}\Omega'(z)\right). \end{equation*}
$\bullet$ Recall that for $K \in \cB_1$ and $z \in {\C}$, the following identity holds true: \begin{equation*} \mathop{\mathrm{det}}\nolimits\,(I - zK) = \exp\left(- \int_0^z{\rm Tr}\, \big(K(1 - sK)^{-1}\big)ds\right) \label{S6Detdefine} \end{equation*} (see e.g. \cite{GK}, p.167, or \cite{RS78}, p.331).
\subsection{Fredholm determinant}
We recall here results from \cite{IK12} about the asymptotics of the determinant $\cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=\mathop{\mathrm{det}}\nolimits (I+VR_0(\lambda))$ as $|\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}|\to \iy$.
\begin{lemma} \label{TaD1} Let $V\in \ell^1({\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d)$. Then the determinant $\cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=\mathop{\mathrm{det}}\nolimits (I+VR_0(\lambda))$ is analytic in $\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}=\C\setminus} \def\es{\emptyset [-d,d]$ and satisfies
\begin{equation*}
\cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=1+O(1/\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}) \quad as \quad |\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}|\to {\infty}, \end{equation*}
uniformly in $\mathop{\mathrm{arg}}\nolimits \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in [0,2\pi]$, and
\begin{equation*}
\log \cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}) = - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}{\rm Tr}\,\left(VR_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})\right)^n, \end{equation*}
\begin{equation} \label{aD3} \log \cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}) =-\sum _{n \geq 1}\frac{d_n}{n\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^n},\quad
d_n={\rm Tr}\,(H^n-H_0^n), \end{equation}
where the right-hand side is absolutely convergent for $|\lambda| > r_1$, $r_1 >0$ being a sufficiently large constant. In particular, \begin{equation} \label{aD4} \begin{aligned} &d_1={\mathop{\mathrm{Tr}}\nolimits} \, V,\\ &d_2={\mathop{\mathrm{Tr}}\nolimits}\,V^2,\\ & d_3={\mathop{\mathrm{Tr}}\nolimits}\,\big(V^3+6d\tau^2V\big),\\ & d_4={\rm Tr}\, \big(V^4+8d\tau^2V^2+2\tau^2(V_\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F})V\big), \dots, \end{aligned} \end{equation} where $V_\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}=\sum_{j=1}^d(S_jVS_j^*+S_j^*VS_j)$ and $(S_jf)(n)=f(n+e_j)$ and $\tau} \def\cT{{\mathcal T}} \def\bT{{\bf T}} \def\mT{{\mathscr T}={1\/2}$. \end{lemma}
\noindent Recall the conformal mapping $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(\cdot): \dD\to \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$ is given by $ \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)={d\/2}\big(z+{1\/z} \big),\ |z|<1$, and note that
$|\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}|\to \iy$ iff $z\to 0$. We consider the operator-valued function $Y(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)), \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$, defined by $$
Y(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)) = V_2 X (\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)), \qquad {\rm where}\qquad X (\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)) = |V_1|^{1/2} R_0 (\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)) |V_1|^{1/2} V_1|V_1|^{-1}, $$ and where $V_1$ and $V_2$ are defined in \er{Vfact}.
\begin{theorem} \label{T2x} Let $V$ satisfy \er{Vc}. Then the operator-valued function $Y(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)): \dD\to \cB_1$ is analytic in the unit disc $\dD$ and satisfies \begin{equation*} \begin{aligned}
\|Y(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z))\|_{\cB_1}\le C(d) \|V\|_{2/3},\qquad \forall \ z\in \dD. \end{aligned} \end{equation*}
Moreover, the function $D(z), z\in \dD$ belongs to $\mH_\iy$ and \[ \label{det1}
\|D(\cdot)\|_{\mH_\iy}\le e^{C(d) \|V\|_{2/3}}. \end{equation}
\end{theorem}
\noindent {\bf Proof.} The operator $V_2$ belongs to $\cB_2$ and due to Theorem \ref{TApp} (see Appendix 2) the operator-function $X(\cdot):\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}\to \cB_2$ satisfies the inequality
\begin{equation*} \begin{aligned}
\|X(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})\|_{\cB_2}\le C(d) \|V_1\|_2,\qquad \forall\ \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M} \in \Lambda = \C\setminus} \def\es{\emptyset [-d,d]. \end{aligned} \end{equation*}
Thus the operator-valued function $Y(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z))=V_2\, X(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z)): \dD\to \cB_1$ is of trace class. Moreover, the function $D(z), z\in \dD$, belongs to $\mH_\iy$ and due to \er{DA1} it satisfies \er{det1}. \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
The function $D(z)=\cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}(z))$ is analytic in $\dD$ with the zeros given by $$ z_j=z(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j), \qquad j=1,2,..., N, $$ where $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j$ are zeros (counting with multiplicity) of $\cD(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})$ in $\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}=\C\setminus} \def\es{\emptyset [-d,d]$, i.e., eigenvalues of $H$.
\begin{lemma} \label{TaD2} Let a potential $V\in \ell^1({\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d)$. Then $\log D(z)$, with the branch defined by $\log D(0)=0$, is analytic in $\dD_{r_0}$, where $r_0$ is given by \er{zD}, and has the following Taylor series \[ \label{aaD1} \log D(z)=-c_1z-c_2z^2-c_3z^3-c_4z^4-\dots, \qquad \mathop{\mathrm{for}}\nolimits
\quad |z|<r_0, \end{equation}
and \[ \label{aaD2} c_1=d_1a,\qquad c_2=d_2a^2, \qquad c_3=d_3a^3-c_1, \qquad c_4=d_4a^4-c_2,.... \end{equation} where $a={2\/d}$ and the coefficients $d_j$ are given by \er{aD4}. \end{lemma}
\noindent {\bf Proof.} We have $$ {1\over} \def\iy{\infty\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}}={az\/1+z^2}=a(z-z^3+O(z^5)),\qquad {1\over} \def\iy{\infty\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2}=a^2(z^2-z^4+O(z^6)), \qquad {1\over} \def\iy{\infty\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^3}=a^3z^3+O(z^5) $$ as $z\to 0$. Substituting these asymptotics into \er{aD3} we obtain \er{aaD1} and \er{aaD2}. \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\section {Proof of the main results} \setcounter{equation}{0}
\noindent We are now ready to prove the main results.
\noindent {\bf Proof of Theorem \ref{T1}.} Let $V$ satisfy \er{Vc}. Then by Theorem \ref{T2x}, the determinant $D(z), z\in \dD$, is analytic, belongs to $\mH_\iy$ and satisfies \er{D1}. Moreover, Lemma \ref{TaD2} gives \er{D3}-\er{cD4}. \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\noindent {\bf Proof of Theorem \ref{T2}.} Due to Theorem \ref{T1} the determinant $D(z)$ is analytic in $\dD$. Then Theorem \ref{TA1} (see Appendix 1) yields $$
\cZ_D:=\sum_{j=1}^N(1-|z_j|)<\iy $$
and the Blaschke product $B(z)$ given by $$
B(z)=\prod_{j=1}^N {|z_j|\/z_j}{z_j-z\/1-\overline} \def\wt{\widetilde z_j z}, \qquad
z\in \dD, $$
converges absolutely for $\{|z|<1\}$. We have $D(z)=B(z)D_B(z)$, where $D_B$ is analytic in the unit disc $\dD$ and has no zeros in $\dD$. Thus we have proved ii).
i) Lemma \ref{TF2} gives \er{B3} and \er{B4}.
iii) For sufficiently small $z$ and for $t=z_j\in \dD$ for some $j$ we have the following identity: $$
\log {|t|\/t}{t-z\/1-\overline} \def\wt{\widetilde t z}=\log |t|+ \log \rt(1-{z\/t}\rt)-\log (1-\overline} \def\wt{\widetilde t z)
=\log |t|-\sum_{n\ge1}\rt({1\/t^n}-\overline} \def\wt{\widetilde t^n\rt){z^n\/n}. $$ Besides, $$ \begin{aligned}
& |t^{-n}-\overline} \def\wt{\widetilde t^n|=|t|^{-n}\big|1-(t\overline} \def\wt{\widetilde t)^n\big|={1-|t|^{2n}\/|t|^n},\\
& 1-|t|^{2n}\le 2n\,(1-|t|),\qquad {1-|t|^{2n}\/|t|^n}\le {2n\/r_0^n}\,(1-|t|), \end{aligned} $$
where $r_0=\inf |z_j|>0$.
This yields \[ \begin{aligned}
& \log B(z)=\sum_{j=1}^N\log {|z_j|\/z_j}{z_j-z\/1-\overline} \def\wt{\widetilde z_j z}
=\sum_{j=1}^N\rt( \log |z_j|+ \log \big(1-(z/z_j)\big)-\log (1-\overline} \def\wt{\widetilde z_j z)\rt)\\
&=\sum_{j=1}^N\log |z_j|-\sum_{n=1}^\iy\sum_{j=1}^N\rt({1\/z_j^n}-\overline} \def\wt{\widetilde z_j^n \rt){z^n\/n}=\log B(0)-b(z),\\ & b(z)=\sum_{n=1}^\iy\sum_{j=1}^N\rt({1\/z_j^n}-\overline} \def\wt{\widetilde z_j^n \rt){z^n\/n}=\sum_{n=1}^\iy z^nB_n,\qquad B_n={1\/n}\sum_{j=1}^N\rt({1\/z_j^n}-\overline} \def\wt{\widetilde z_j^n \rt), \end{aligned} \end{equation}
where the function $b$ is analytic in the disk $\{|z|<{r_0\/2}\}$ and $B_n$ satisfy $$ \begin{aligned}
|B_n|\le {1\/n}\sum_{j=1}^N\rt|{1\/z_j^n}-\overline} \def\wt{\widetilde z_j^n \rt| \le
{2\/r_0^n}\sum_{j=1}^N |1-|z_j||={2\/r_0^n}\cZ_D,
\end{aligned} $$
where $\cZ_D=\sum_{j=1}^N(1-|z_j|)$.
Thus $$
|b(z)|\le \sum_{n=1}^\iy |B_n|{|z|^n}\le 2\cZ_D\sum_{n=1}^\iy{|z|^n\/r_0^n}={2\cZ_D\/1-{|z|\/r_0}}. $$ \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\noindent {\bf Proof of Theorem \ref{T3}.}
i) Theorem \ref{T1} implies $D\in \mH_\iy$. Therefore the canonical representation \er{cfD} follows from Lemma \ref{Tft}.
ii) The relation \er{meraze} gives the proof of ii). \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\noindent {\bf Proof of Theorem \ref{T4}.} By using Lemma \ref{TAt}, \er{D1}-\er{D4} and Theorem \ref{T2} we obtain the identities \er{tr0}-\er{tr55}.
We have the following identities for $z\in \dD$ and $\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}={\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\/d}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}_1$: \[ \label{Et1} \begin{aligned} 2\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}=z+{1\/z},\qquad z=\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}-\sqrt{\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}^2-1},\qquad z-{1\/z}=-2\sqrt{\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}^2-1}. \end{aligned} \end{equation} These identities yield \[ \label{Et2} \begin{aligned}
\overline} \def\wt{\widetilde z-{1\/z}=z+\overline} \def\wt{\widetilde z-2\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}=2\mathop{\mathrm{Re}}\nolimits z-2\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H},\\ \overline} \def\wt{\widetilde z-{1\/z}=\overline} \def\wt{\widetilde z-z-2\sqrt{\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}^2-1}=-2i\mathop{\mathrm{Im}}\nolimits z-2\sqrt{\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}^2-1}.
\end{aligned} \end{equation} Then we get $$ \begin{aligned} {2\/d}\mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Im}}\nolimits V+\mathop{\mathrm{Im}}\nolimits \sum_{j=1}^N\rt(\overline} \def\wt{\widetilde z_j -{1\/z_j}\rt)={1\over} \def\iy{\infty\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \sin t\, d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t),\\ \mathop{\mathrm{Im}}\nolimits \rt(\overline} \def\wt{\widetilde z_j -{1\/z_j}\rt)= -2\mathop{\mathrm{Im}}\nolimits\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}_j= -{2\/d}\mathop{\mathrm{Im}}\nolimits\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j,\\ \mathop{\mathrm{Re}}\nolimits \rt(\overline} \def\wt{\widetilde z_j -{1\/z_j}\rt)= 2\mathop{\mathrm{Re}}\nolimits(z_j-\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}_j)=-2\mathop{\mathrm{Re}}\nolimits\sqrt{\zeta} \def\cH{{\mathcal H}} \def\bH{{\bf H}} \def\mH{{\mathscr H}_j^2-1}
\end{aligned} $$ and thus $$ \sum_{j=1}^N \mathop{\mathrm{Im}}\nolimits\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j=\mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Im}}\nolimits V-{d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \sin t\,d \mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), $$ $$ \sum_{j=1}^N \mathop{\mathrm{Re}}\nolimits\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j^2-d^2}=\mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Re}}\nolimits V+{d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \cos t\,d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), $$ \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\noindent {\bf Proof of Theorem \ref{T5}.} The simple inequality $1-x\le -\log x$ for all $x\in (0,1]$ implies
$-B_0=-\log B(0)=-\sum \log |z_j|\ge \sum (1- |z_j|)$. Then substituting the last estimate and the estimate \er{D1} into the first trace formula \er{tr0} we obtain \er{t51}.
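\noindent For completeness, the elementary inequality used here follows from
$$
-\log x=\int_x^1{dt\/t}\ge \int_x^1 dt=1-x,\qquad x\in (0,1].
$$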
In order to prove the next two estimates we use the trace formulae \er{t52}. Let $\mathop{\mathrm{Im}}\nolimits V\ge 0$. Then $\mathop{\mathrm{Im}}\nolimits \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j\ge 0$, and the estimates \er{D1} and \er{t51} together with the first trace formula in \er{t52} imply $$ \begin{aligned}
\sum_{j=1}^N \mathop{\mathrm{Im}}\nolimits\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j-\mathop{\mathrm{Tr}}\nolimits \mathop{\mathrm{Im}}\nolimits V=-{d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \sin t\, d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t)\\
\le {d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} (C\|V\|_{2/3}dt+ d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(t)) \le C(d)\|V\|_{2/3}, \end{aligned} $$ which yields \er{t51x}. Similar arguments give \er{t51xx}. \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\section {Schr\"odinger operators with real potentials} \setcounter{equation}{0}
Consider Schr\"odinger operators $H=-\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}+V$, where the potential $V$ is real and satisfies condition \er{Vc} . The spectrum of $H$ has the form $$ \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(H)=\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_{ac}(H)\cup \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_{sc}(H)\cup \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_p(H)\cup \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_{dis}(H),\quad \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_{ac}(H)=[-d,d], $$ where $$
\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_p(H)\subset} \def\ts{\times [-d,d], \quad \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_{dis}(H)\subset} \def\ts{\times \R\setminus} \def\es{\emptyset[-d,d]. $$ Note that each eigenvalue of $H$ has finite multiplicity.
\subsection{Discrete spectrum} The discrete eigenvalues of the operator $H$ are real and belong to the set $\R\setminus} \def\es{\emptyset[-d,d]$. We label them by $$ \dots \le \lambda_{-2} \le \lambda_{-1}<-d < d< \lambda_1\le \lambda_2 \le \dots $$ The corresponding points $z_j\in\dD$ are real and satisfy $$ \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j={d\/2}\rt(z_j+{1\/z_j}\rt), \qquad j\in\Bbb Z\setminus\{0\}. $$ Moreover, we have the identity $$ \sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2}={d\/2}\rt(z-{1\/z}\rt) $$ for all $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$ and $z\in \dD$. If $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}$ is an eigenvalue of $H$, then we have the identity \[ \label{i2} \begin{aligned}
{d\/2}\rt({1\/z}-z\rt)=-|\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2|^{1\/2}\quad if \quad \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}<-d,\\
{d\/2}\rt({1\/z}-z\rt)=|\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2|^{1\/2}\quad if \quad \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}>d. \end{aligned} \end{equation}
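The correspondence between eigenvalues $\lambda$ and points $z$ of the unit disc can be checked numerically. The following Python sketch (the value of $d$ below is illustrative) verifies the sign identity \er{i2} for real $z$ on both sides of the origin:

```python
import math

d = 2.0  # half-bandwidth of the a.c. spectrum [-d, d] (illustrative value)

def lam(z, d=d):
    # Joukowski-type map z -> (d/2)(z + 1/z), sending real z in (-1,1)\{0}
    # to eigenvalues outside [-d, d]
    return (d / 2.0) * (z + 1.0 / z)

for z in (0.3, -0.45):
    L = lam(z)
    lhs = (d / 2.0) * (1.0 / z - z)
    rhs = math.copysign(math.sqrt(L * L - d * d), L)  # sign as in (i2)
    assert abs(lhs - rhs) < 1e-12
    # z in (0,1) corresponds to an eigenvalue above d, z in (-1,0) to one below -d
    assert (L > d) == (z > 0)
print("identity (i2) verified")
```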
\noindent The next result follows immediately from Theorem \ref{T4}. \begin{theorem} \label{TrV} {\bf (The trace formulas.)} Let a real potential $V$ satisfy \er{Vc}. Then there is an infinite family of trace formulas \[
\label{rV1} 0\le {\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})\/2\pi}-B_0={1\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} \log |D(e^{it})|dt\le C(d,p)\|V\|_q, \end{equation} $$
-\mathop{\mathrm{Tr}}\nolimits \, V+\sum_{j=1}^N |\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j^2-d^2|^{1\/2}\mathop{\mathrm{sign}}\nolimits \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j ={d\/2\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} e^{-it}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), $$ $$
-\mathop{\mathrm{Tr}}\nolimits \, V^2+\sum_{j=1}^N |\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j||\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j^2-d^2|^{1\/2} ={d^2\/4\pi}\int_{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D} e^{-i2t}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), \qquad \dots. $$ \end{theorem}
{\bf Proof.} If $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}$ is an eigenvalue of $H$, then we have the identity $$ z={1\/d}\rt(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M} \pm\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2} \rt). $$ \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
{\bf Remark.} 1) We consider the case \er{rV1}. If $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}>d$, then we have $z\in (0,1)$ and then $$ \begin{aligned} 1-z=1-{1\/d}\rt(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2} \rt)={1\/d} \rt(d-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}+\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}^2-d^2}\rt)\\ ={\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-d}\/d} \rt(\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}+d}-\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-d}\rt)= {2\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-d}\over} \def\iy{\infty (\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}+d}+\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-d})}\ge {\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}-d}\over} \def\iy{\infty \sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}+d}}. \end{aligned} $$ This yields $$ \sum_{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j>d} {\sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j-d}\over} \def\iy{\infty \sqrt{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j+d}}+ \sum_{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j<-d}
{\sqrt{-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j-d}\over} \def\iy{\infty \sqrt{-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j+d}}=\sum_{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j} {\sqrt{|\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j|-d}\over} \def\iy{\infty
\sqrt{|\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j|+d}}\le C_d \|V\|_{2/3}. $$
\begin{corollary} \label{Ter} Let a potential $V$ be real and satisfy \er{Vc}. Then the following estimates hold true: $$
\sum_{j=1}^N |\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j||\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}_j^2-d^2|^{1\/2}\le \mathop{\mathrm{Tr}}\nolimits V^2+
{d^2\/4\pi}C(d,p)\|V\|_{2/3}. $$ \end{corollary}
\section {Appendix, Hardy spaces} \setcounter{equation}{0}
\subsection{Analytic functions}
We recall the basic facts about the Blaschke product with zeros $\{z_n\}$ (see pages 53--55 in \cite{G81}). A subharmonic function $v(z)$ on $\Omega$ has a harmonic majorant if there is a harmonic function $U(z)$ such that $v(z) \le U(z)$ throughout $\Omega$.
We need the following well-known results, see e.g. Sect. 2 from \cite{G81}.
\begin{lemma} \label{TF2}
Let $\{z_j\}$ be a sequence of points in $\dD\setminus} \def\es{\emptyset \{0\}$ such that $\sum (1-|z_j|)<\iy$ and let $m\ge 0$ be an integer. Then the Blaschke product $$ B(z)=z^m \prod_{z_j\ne 0}{
|z_j|\/z_j}\rt(\frac{z_j-z}{1-\overline} \def\wt{\widetilde z_j z}\rt), $$ converges in $\dD$. Moreover, the function $B$ is in $\mH_\iy$ and zeros of $B$ are precisely the points $z_j$, according to the multiplicity. Moreover, \[ \label{BL3}
\begin{aligned} |B(z)|\le 1 \qquad \forall \ z\in \dD, \end{aligned}
\end{equation} $$
\lim _{r\to 1} |B(re^{i\vt})|=|B(e^{i\vt})|=1 \qquad \ almost\ everywhere,\quad \vt\in {\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}, $$ $$
\lim _{r\to 1}\int_0^{2\pi}\log |B(re^{i\vt})|d\vt=0. $$ \end{lemma}
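For a finite set of zeros, the basic properties of the Blaschke product in Lemma \ref{TF2} -- the bound $|B(z)|\le 1$ inside the disc and $|B|=1$ on the boundary -- can be verified directly. A minimal Python sketch (the zeros below are illustrative):

```python
import cmath
import math

zeros = [0.5, 0.3 + 0.4j, -0.2 - 0.6j]  # points in the open unit disc (illustrative)

def blaschke(z, zeros=zeros):
    # B(z) = prod_j (|z_j|/z_j) * (z_j - z)/(1 - conj(z_j) z); each factor has
    # modulus <= 1 in the disc and modulus exactly 1 on the unit circle
    B = 1.0 + 0.0j
    for zj in zeros:
        B *= (abs(zj) / zj) * (zj - z) / (1.0 - zj.conjugate() * z)
    return B

# |B(z)| <= 1 inside the disc
for z in (0.0, 0.7j, -0.4 + 0.1j):
    assert abs(blaschke(z)) <= 1.0 + 1e-12

# |B| = 1 on the boundary (exactly, since the product is finite)
for k in range(8):
    w = cmath.exp(2j * math.pi * k / 8)
    assert abs(abs(blaschke(w)) - 1.0) < 1e-12
print("Blaschke product checks passed")
```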
\noindent Let us recall a well-known result concerning analytic functions in the unit disc, e.g., see Koosis page 67 in \cite{Koo98}.
\begin{theorem} \label{TA1} Let $f$ be analytic in the unit disc $\dD$ and let $z_j\ne 0, j=1,2,..., N\le \iy$ be its zeros labeled by $$
0<|z_1|\le \dots\le |z_j|\le |z_{j+1}|\le |z_{j+2}|\le \dots. $$ Suppose that $f$ satisfies the condition $$ \sup_{r\in (0,1)}
\int_0^{2\pi}\log |f(re^{i\vt})|d\vt<\iy. $$ Then $$
\sum _{j=1}^N (1-|z_j|)<\iy. $$ The Blaschke product $B(z)$ given by $$
B(z)=z^m\prod_{j=1}^N {|z_j|\/z_j}{(z_j-z)\over} \def\iy{\infty(1-\overline} \def\wt{\widetilde z_j z)}, $$ where $m$ is the multiplicity of the zero of $f$ at the origin,
converges absolutely for $\{|z|<1\}$. Besides, $f_B(z)=f(z)/B(z)$ is analytic in the unit disc $\dD$ and has no zeros in $\dD$.
\noindent Moreover, if $f(0)\ne 0$ and if $u(z)$ is the least harmonic majorant of $\log |f(z)|$, then $$
\sum (1-|z_j|)<u(0) - \log | f (0)|. $$ \end{theorem}
We now consider the canonical representation \er{cr} for a function $f\in \mH_p, p>0$ (see, \cite{Koo98}, p. 76).
\begin{lemma} \label{Tft} Let a function $f\in \mH_p, p>0$. Let $B$ be its Blaschke product. Then there exists a singular measure $\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}=\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_f\ge 0$ on $[-\pi,\pi]$ with \[ \label{cr} \begin{aligned}
f(z)=B(z)e^{ic-K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R} (z)}e^{K_f(z)},\\ K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(z)={1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(t),\\ K_f(z)= {1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}\log
|f(e^{it})|dt,
\end{aligned} \end{equation}
for all $|z|<1$, where $c$ is a real constant and $\log |f(e^{it})|\in L^1(-\pi,\pi)$. \end{lemma}
\noindent We define the functions (after Beurling) in the disc by $$ \begin{aligned} & f_{in}(z)=B(z)e^{ic-K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R} (z)} \quad & the \ inner \ factor\ of\ f,\\ &f_{out}(z)=e^{K_f(z)} \quad & the \ outer \ factor\ of\ f,\\
\end{aligned} $$
for $|z|<1$. Note that we have $| f_{in}(z)|\le 1$, since $d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}\ge 0.$
\noindent Thus $f_B(z)={f(z)\/B(z)} $ has no zeros in the disc $\dD$ and satisfies $$ \log f_B(z)=ic+{1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), $$ where the measure $\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}$ equals $$
d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t)=\log
|f(e^{it})|dt-d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(t). $$
\noindent For a function $f$ continuous on the disc $\overline} \def\wt{\widetilde\dD$ we define the set of zeros of $f$ lying on the boundary $\partial} \def\os{\oplus \dD $ by $$ \mathfrak{S}_0(f)=\{z\in \dS: f(z)=0\}. $$ It is well known that the support of the singular measure $\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}=\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_f$ satisfies \[
\label{meraze} \mathop{\mathrm{supp}}\nolimits \sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}_f\subset} \def\ts{\times \mathfrak{S}_0(f)=\{z\in \dS: f(z)=0\} \end{equation}
see for example, Hoffman \cite{Ho62}, p. 70.
\noindent In the next statement we present trace formulae for a function $f\in \mH_p, p>0$.
\begin{lemma} \label{TAt} Let $f\in \mH_p, p>0$ and $f(0)=1$ and let $B$ be its Blaschke product. Let the functions $\log f$ and $F=\log f_B$ have the Taylor series in some small disc $\dD_r, r>0$ given by \[ \label{asf1} \begin{aligned} \log f(z)=-f_1z-f_2z^2-f_3z^3-.....,\\ F=\log f_B(z)=F_0+F_1z+F_2z^2+F_3z^3+.....,\\ \log B(z)=B_0-B_1z-B_2z^2-..., \qquad as \qquad z\to 0,\\
F_0=-\log B(0)>0,\qquad F_n=B_n-f_n,\qquad n\ge 1. \end{aligned} \end{equation}
Then the factorization \er{cr} holds true and we have \[ \label{ftr0} c=0,\qquad F_0=-\log B(0)={\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})\/2\pi}\ge 0,\qquad \end{equation} \[ \label{ftr1} F_n={1\over} \def\iy{\infty\pi}\int_{-\pi}^{\pi}e^{-int}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t),\qquad n=1,2,...., \end{equation}
where the measure $d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t)=\log |f(e^{it})|dt-d\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R}(t)$. \end{lemma}
\noindent {\bf Proof.} Recall that the identity \er{cr} gives $f(z)=B(z)e^{ic-K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R} (z)}e^{K_f(z)}$, then at $z=0$ we obtain $$ 1=f(0)=B(0)e^{ic-K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R} (0)}e^{K_f(0)}. $$ Since $B(0), K_\sigma} \def\cR{{\mathcal R}} \def\bR{{\bf R}} \def\mR{{\mathscr R} (0), K_f(0)$ and $c$ are real we obtain $c=0$. Moreover, the inequality \eqref{BL3} implies $F_0\ge0$.
\noindent In order to show \er{ftr1} we need the asymptotics of the Schwarz integral \[ \label{Si} f(z)=B(z)f_B(z),\qquad
F(z)=\log f_B(z)={1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t), \end{equation}
as $z\to 0$. The following identity holds true \[ \label{ts1} {e^{it}+z\/e^{it}-z}=1+{2ze^{-it}\/1-ze^{-it}}=1+2\sum_{n\ge 1} \big({ze^{-it}}\big)^n= 1+2\big({ze^{-it}}\big)+2\big({ze^{-it}}\big)^2+..... \end{equation} Thus \er{Si}, \er{ts1} yield the Taylor series at $z=0$: \[ \label{asm} {1\/2\pi}\int_{-\pi}^{\pi}{e^{it}+z\/e^{it}-z}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t)={\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})\/2\pi}+\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}_1z+ \mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}_2z^2+\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}_3z^3+\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}_4z^4+...\qquad as \qquad z\to 0, \end{equation} where $$ \mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})=\int_0^{2\pi}d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t),\qquad \mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}_n={1\over} \def\iy{\infty\pi}\int_0^{2\pi}e^{-in\vt} d\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}(t),\qquad n\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}. $$
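The expansion \er{ts1} of the Schwarz kernel is a plain geometric series and can be confirmed numerically. The following Python sketch (the sample points are illustrative) compares the closed form with a truncated series:

```python
import cmath

def schwarz(z, t):
    # closed form of the Schwarz kernel (e^{it} + z) / (e^{it} - z)
    e = cmath.exp(1j * t)
    return (e + z) / (e - z)

def schwarz_series(z, t, N=60):
    # truncation of 1 + 2 * sum_{n>=1} (z e^{-it})^n, valid for |z| < 1
    w = z * cmath.exp(-1j * t)
    return 1.0 + 2.0 * sum(w ** n for n in range(1, N + 1))

for z, t in ((0.3 + 0.2j, 0.7), (-0.5j, 2.1)):
    # with |z| <= 0.5 the truncation error is far below the tolerance
    assert abs(schwarz(z, t) - schwarz_series(z, t)) < 1e-10
print("Schwarz kernel expansion (ts1) verified")
```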
Thus comparing \er{asf1} and \er{asm} we obtain
$$ -\log B(0)={\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}({\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D})\/2\pi}\ge 0,\ F_n=\mu} \def\cO{{\mathcal O}} \def\bO{{\bf O}} \def\mO{{\mathscr O}_n \qquad \forall \ n\ge 1. $$
\hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
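The zeroth trace formula \er{ftr0} can be illustrated on the simplest example $f(z)=1-z/a$ with a single zero $a$ in the disc, where the singular measure vanishes and $B(0)=|a|$. A short Python sketch (the zero $a$ is an illustrative choice):

```python
import cmath
import math

a = 0.4 + 0.3j                 # a single zero inside the unit disc (illustrative)
f = lambda z: 1 - z / a        # f(0) = 1, zero at a
B0 = abs(a)                    # Blaschke product at 0 for the single zero a

# F_0 = -log B(0) should equal (1/2pi) \int log|f(e^{it})| dt (here sigma = 0),
# approximated by a uniform Riemann sum over the circle
N = 20000
mean_log = sum(math.log(abs(f(cmath.exp(2j * math.pi * j / N))))
               for j in range(N)) / N
assert abs(mean_log - (-math.log(B0))) < 1e-3
print("trace formula F_0 = mu(T)/(2 pi) verified for a one-zero example")
```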
\section {Appendix, estimates involving Bessel's functions} \setcounter{equation}{0}
\noindent In order to complete the proof of Theorem \ref{T2x} we need some uniform estimates for the Bessel functions $J_m, m\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}$ with respect to $m$, for which we use their integral representation
\begin{equation} \label{Be1} J_m(t)={1\/2\pi}\int_0^{2\pi} e^{-imk-i{t}\sin k}\,dk={i^m\/2\pi}\int_0^{2\pi} e^{-imk+i{t}\cos k}\,dk. \end{equation}
Note that for all $(t,m)\in \R\ts {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}$: \begin{equation} \label{Be2}
J_{-m}(t)=J_m(t) \quad \textup{and} \quad J_{m}(-t)=(-1)^mJ_m(t). \end{equation}
Our estimates are based on the following three asymptotic formulae, see \cite{Smirn}, Ch IV, $\S$ 2. Let $$ \xi = \frac{m}{t}. $$ Then for a fixed $\varepsilon$, $0<\varepsilon<1$ we have
{\bf 1.} if $\xi>1+\varepsilon$ then
\begin{equation}\label{bess1} J_m(t) = \frac12 \, \sqrt{\frac{2}{\pi t}} \, \frac{1}{(\xi^2-1)^{1/4}} \, e^{-t\left(\xi \ln\left(\xi + \sqrt{\xi^2 -1}\right) - \sqrt{\xi^2-1}\right)} \, \left(1 + O\left(\frac{1}{m}\right)\right). \end{equation}
Therefore, if $\xi>1+\varepsilon$, this formula implies exponential decay of the Bessel function in $t$, uniformly with respect to $m$.
{\bf 2.} if $\xi<1-\varepsilon$ then
\begin{multline}\label{bess2} J_m(t) = \frac12\, \sqrt{\frac{2}{\pi t}} \, \frac{1}{(1-\xi^2)^{1/4}} \left( e^{-i\pi/4 + it\left(-\xi \arccos \xi + \sqrt{1-\xi^2}\right)} + e^{i\pi/4 + it\left(\xi \arccos \xi - \sqrt{1-\xi^2}\right)}\right) \\ \left( 1 + O\left(\frac{1}{t}\right)\right). \end{multline}
In this case the Bessel function oscillates as $t\to\infty$, and the latter formula implies the estimate, uniform with respect to $m$, $$
|J_m(t)|\le C \, t^{-1/2}, \qquad C= C(\varepsilon). $$
{\bf 3.} We now consider the third case $ 1-\varepsilon\le \xi \le 1+ \varepsilon$ which is more difficult.
\begin{lemma}\label{case3} If $1-\varepsilon\le \xi \le 1+ \varepsilon$, $\varepsilon>0$, then there is a constant $C=C(\varepsilon)$ such that
\begin{equation}\label{3case}
|J_m(t)| \le C \, t^{-\frac14} \, \left(|t|^{\frac13} + |m-t|\right)^{-\frac14}, \qquad \forall \, m, \, t, \quad |m-t|<\varepsilon\, t. \end{equation}
\end{lemma}
\noindent {\bf Proof.} If $1-\varepsilon\le \xi \le 1+ \varepsilon$, then (see \cite{Smirn}, Ch IV, $\S$ 2)
\begin{equation}\label{bess3} J_m(t) = \frac{v\left( t^{2/3} \tau(\xi)\right) }{t^{1/3}} \left(c_0(\xi) + O\left(\frac{1}{t}\right)\right) \\+
\frac{v'\left( t^{2/3} \tau(\xi)\right)}{t^{4/3}} \left(d_0(\xi) + O\left(\frac{1}{t}\right)\right), \end{equation}
where $v$ is the Airy function and $$ \tau^{3/2}(\xi) = \xi \, \ln\left(\xi + \sqrt{\xi^2 -1} \right) - \sqrt{\xi^2 -1} $$ and therefore
\begin{equation}\label{tau} \tau(\xi) = 2^{1/3}(\xi-1) + O\left((\xi-1)^2\right), \quad {\rm as} \quad \xi \to 1. \end{equation}
Besides, the functions $c_0(\xi)$ and $d_0(\xi)$ are bounded with respect to $\xi$ and, for example, $$ c_0(\xi) = \sqrt{\frac{2}{\pi}} \, \left(\frac{\tau(\xi)}{\xi^2-1}\right)^{1/4} $$ (see \cite{Olver}, formulae (10.06), (10.07)).
In what follows all the constants depend on $\varepsilon$, $0<\varepsilon<1$, but not on $m$ and $t$.
\noindent Due to \eqref{tau} there are constants $c$ and $C$ such that
\begin{equation}\label{y1}
c(\varepsilon) (1+ |y|)^{\frac14} \le \left(t^{\frac13} + |m-t|\right)^{\frac14}\, t^{-\frac{1}{12}}\le C(\varepsilon) (1+ |y|)^{\frac14}, \end{equation}
where $y = t^{\frac23} \, \tau(\xi)$. Moreover, since $|\xi-1| = |\frac{m}{t} -1 |\le \varepsilon$ we also have
\begin{equation}\label{y2}
\left(t^{\frac13} + |m-t|\right)^{\frac14}\, t^{-\frac{1}{12}}\le C(\varepsilon) \, t^{\frac16}. \end{equation}
Applying estimates for the Airy functions in \eqref{bess3} $$
|v(y)| \le C \, (1 + |y|)^{-1/4}, \qquad |v'(y)| \le C \, (1 + |y|)^{1/4} $$
and using \eqref{y1}, \eqref{y2} we find that if $|t|\ge1$
\begin{multline*}
|J_m(t)| \le C\left( \frac{1}{(1+|y|)^{\frac14} \, |t|^{\frac13}} + \frac{(1+|y|)^{\frac14}}{|t|^{\frac43}} \right)\\
\le C\, \left( \frac{t^{\frac{1}{12}}}{t^{\frac13}\, \left(t^{\frac13} + |m-t|\right)^{\frac14}}
+ \frac{\left(t^{\frac13} + |m-t|\right)^{\frac14}}{t^{\frac43} \, t^{\frac{1}{12}}} \right)\\
\le C \, t^{-\frac14} \, \left(|t|^{\frac13} + |m-t|\right)^{-\frac14}. \end{multline*}
The proof is complete. \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
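The uniform bound \er{3case} in the transition region can be probed numerically. In the Python sketch below, the constant $C=2$ and the sampled values of $m$ and $t$ are illustrative; the lemma only asserts the existence of some $C(\varepsilon)$:

```python
import cmath
import math

def J(m, t, N=4000):
    # Bessel J_m(t) via its integral representation (Riemann sum over one period)
    s = sum(cmath.exp(-1j * m * k - 1j * t * math.sin(k))
            for k in (2 * math.pi * j / N for j in range(N)))
    return (s / N).real

C = 2.0  # illustrative constant; the lemma only asserts some C(eps) exists
for t in (50.0, 120.0, 300.0):
    # sample m with |m - t| < 0.1 t, i.e. eps = 0.1
    for m in range(int(0.9 * t), int(1.1 * t) + 1, 7):
        bound = C * t ** (-0.25) * (t ** (1.0 / 3.0) + abs(m - t)) ** (-0.25)
        assert abs(J(m, t)) <= bound
print("transition-region bound (3case) holds for the sampled values")
```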
\noindent
Let us now consider the operator $e^{it \Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}}, t\in \R$. It is unitary on $L^2(\Bbb S^d)$ and its kernel is given by \[ \label{kr1} (e^{it \Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(n-n')={1\over} \def\iy{\infty(2\pi)^{d}}\int_{\Bbb S^d} e^{-i(n-n',k)+ith(k)}dk,\qquad n,n'\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d. \end{equation} where $h(k)=\sum_{j=1}^d\cos k_j, k=(k_j)_{j=1}^d\in \Bbb S^d$.
\begin{lemma}\label{Texp1} Let $n=(n_j)_{j=1}^d\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d, d\ge 1$. Then \[
\label{ehtd} (e^{it\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(n)=i^{-|n|}\prod_{j=1}^d J_{n_j}(t), \quad
(n,t)\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d\times \R, \end{equation}
where $|n|=|n_1|+....+|n_d|$. Moreover, the following estimates are satisfied: \[
\label{R12} |(e^{it\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(n)|\le C_1 |t|^{-{d\/3}}, \qquad t\ge1, \end{equation} \[
\label{iJm} \int_1^\iy |J_{m}(t)|^d\, dt<C_2,\qquad if \quad m\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C},\ d\ge 3, \end{equation} for all $ (t,n)\in \R\ts {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d$ and some constants $C_1=C_1(d)$ and $C_2=C_2(d)$. \end{lemma}
\noindent {\bf Proof.} Let $d=1$ and $h(k)=\cos k, k\in {\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}$. Then using \er{kr1} and \er{Be1} we obtain $$ (e^{it\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(n)={1\over} \def\iy{\infty(2\pi)}\int_{{\mathbb T}} \def\N{{\mathbb N}} \def\dD{{\mathbb D}} e^{-ink+it\cos k}dk=i^{-n}J_n(t),\quad \forall \ (n,t)\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}\ts \R. $$ which yields \er{ehtd} for $d=1$. Due to the separation of variables we also obtain \er{ehtd} for any $d\ge 1$.
\noindent In view of \er{Be2} it is enough to consider the case $n_j\ge 0$. In order to obtain \eqref{R12} it is enough to apply the inequalities
\eqref{bess1}, \eqref{bess2} and also \eqref{3case}, where we drop the term $|m-t|$ on the right-hand side.
\noindent If $d>3$, then \eqref{R12} implies \eqref{iJm}. Let now $d=3$. From \eqref{3case} we obtain \begin{multline*}
\int_1^\iy |J_{m}(t)|^d\, dt \le C \left( \int_1^\iy |t|^{-3/2}\, dt + \int_1^\iy |t|^{-{3\/4}}\rt(|t|^{1\/3}+|m-t|\rt)^{-{3\/4}} \, dt \right)\\
\le C/2 + C \, \int_1^\iy |t|^{-{3\/4}}\, (1 + |m-t|)^{-{3\/4}}\, dt\\
\le C/2 + C \, \left( \int_1^\iy |t|^{-3/2}\, dt\right)^{1/2}
\left( \int_1^\iy (1 + |m-t|)^{-3/2}\, dt\right)^{1/2}<\infty. \end{multline*} \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\begin{theorem} \label{TApp} i) Let $d\ge 3$.
Then for each $n\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d$ the following estimate holds true: \[
\label{Int} \int_1^\iy \bigl|(e^{\pm it\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(n)\bigr|\, dt\le \beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}, \quad \end{equation} where \[
\label{Intb} \beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}=\sup_{m\in {\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}}\int_1^\iy |J_{m}(t)|^d\,dt<\iy. \end{equation} ii) Let a function $q\in \ell^2({\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d)$ and let $X(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}) =qR_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})q,\ \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}$. Then the operator-valued function $X:\Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}\to \cB_2$ is analytic and satisfies \begin{equation}
\label{X2} \sup_{\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \Lambda} \def\cN{{\mathcal N}} \def\bN{{\bf N}} \def\mN{{\mathscr N}} \|X(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})\|_{\cB_2}\le (1+\beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B})\|q\|_{2}^2 , \end{equation} \end{theorem}
\noindent {\bf Proof.}
\noindent i) Note that \er{iJm} gives \er{Intb}. Due to \er{Be2} it is sufficient to show \er{Int} for $n\in (\Bbb Z_+)^d$. Using \er{ehtd} and \er{Intb}, we obtain $$
\int_1^\iy |(e^{it\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(n)|dt=\int_1^\iy \prod_1^d |J_{n_j}(t)|dt\le
\prod_1^d \rt(\int_1^\iy |J_{n_j}(t)|^d\,dt\rt)^{1/d}\le \beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}, $$ which yields \er{Int}.
\noindent
ii) Consider the case $\C_-$, the proof for $\C_+$ is similar. We have the standard representation of the free resolvent $R_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})$ in the lower half-plane $\C_-$ given by $$ \begin{aligned} R_0(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=-i\int_0^\iy e^{it(\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})}dt=R_{01}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})+R_{02}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}), \\ R_{01}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=-i\int_0^1 e^{it(\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})}dt,\qquad R_{02}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=-i\int_1^\iy e^{it(\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})}dt, \end{aligned} $$ for all $\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \C_-$.
Here the operator-valued function $R_{01}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})$ has an analytic extension from $\C_-$ into the whole complex plane $\C$ and satisfies $$
\|R_{01}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})\|\le 1,\qquad \qquad \|qR_{01}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})q\|_{\cB_2}\le
\|q\|_{2}^2\quad \forall \ \lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M}\in \C_-. $$ Let $R_{02}(n'-n,\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})$ be the kernel of the operator $R_{02}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})$. We have the identity $$ R_{02}(m,\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})=-i\int_1^\iy (e^{it(\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}-\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})})(m)\,dt,\qquad m=n'-n. $$ Then the estimate \er{Int} gives $$
|R_{02}(m,\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})|\le \int_1^\iy|(e^{it\Delta} \def\cF{{\mathcal F}} \def\bF{{\bf F}} \def\mF{{\mathscr F}})(m)|dt \le \beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}, $$ which yields $$
\|qR_{02}(\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})q\|_{\cB_2}^2=\sum_{n,n'\in{\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d}|q(n)|^2
|R_{02}(n-n',\lambda} \def\cM{{\mathcal M}} \def\bM{{\bf M}} \def\mM{{\mathscr M})|^2|q(n')|^2\le \sum_{n,n'\in{\mathbb Z}} \def\R{{\mathbb R}} \def\C{{\mathbb C}^d}|q(n)|^2
\beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}^2|q(n')|^2= \beta} \def\cB{{\mathcal B}} \def\bB{{\bf B}} \def\mB{{\mathscr B}^2\|q\|_{2}^4, $$ and combining the results for $R_{01}$ and $R_{02}$ we obtain \er{X2}. \hspace{1mm}\vrule height6pt width5.5pt depth0pt \hspace{6pt}
\noindent \textbf{Acknowledgments.} Various parts of this paper were written during Evgeny Korotyaev's stay in KTH and Mittag-Leffler Institute, Stockholm. He is grateful to the institutes for the hospitality. He is also grateful to Alexei Alexandrov (St. Petersburg) and Konstantin Dyakonov (Barcelona), Nikolay Shirokov (St. Petersburg) for stimulating discussions and useful comments about Hardy spaces. Our study was supported by the RSF grant No 15-11-30007.
\end{document}
Karen Rhea
Karen Rhea is an American mathematics educator, a Collegiate Lecturer Emerita in the mathematics department of the University of Michigan.[1] Before joining the University of Michigan faculty, she was on the faculty at the University of Southern Mississippi.[2]
Contributions
With Andrew M. Gleason, Deborah Hughes Hallett and others, Rhea is a co-author of several calculus textbooks produced by the Harvard Calculus Consortium.[3] She is also a proponent of flipped classrooms for calculus instruction.[4]
Recognition
In 1998, the Louisiana–Mississippi section of the Mathematical Association of America gave Rhea its Award for Distinguished College or University Teaching of Mathematics. In 2011, Rhea won the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics, the highest teaching award of the Mathematical Association of America. The award citation credited her work at Michigan, directing the annual 4500-student calculus sequence and preparing instructors for the sequence, as well as her work in national-level education in the Harvard Calculus Consortium.[2]
In honor of Rhea's teaching, the University of Michigan's department of mathematics offers an annual award: the Karen Rhea Excellence in Teaching Award, for outstanding performance by its graduate student instructors.[5]
References
1. Karen Rhea, University of Michigan Mathematics, retrieved 2019-10-02
2. "MAA Prizes Presented in New Orleans" (PDF), Notices of the American Mathematical Society, 58 (5): 708–710, May 2011
3. "Rhea, Karen", WorldCat Identities, retrieved 2019-11-02
4. Berrett, Dan (February 19, 2012), "How 'Flipping' the Classroom Can Improve the Traditional Lecture", The Chronicle of Higher Education
5. Department Teaching Awards, University of Michigan Mathematics, retrieved 2019-10-02
\begin{document}
\begin{frontmatter}
\title{Asymptotic behavior and distributional limits of preferential attachment graphs} \runtitle{Preferential attachment limits}
\begin{aug} \author[A]{\fnms{Noam} \snm{Berger}\ead[label=e1]{[email protected]}}, \author[B]{\fnms{Christian} \snm{Borgs}\ead[label=e2]{[email protected]}}, \author[B]{\fnms{Jennifer T.} \snm{Chayes}\ead[label=e3]{[email protected]}}\\ \and \author[C]{\fnms{Amin} \snm{Saberi}\corref{}\ead[label=e4]{[email protected]}} \runauthor{Berger, Borgs, Chayes and Saberi} \affiliation{Hebrew University, Microsoft Research, Microsoft Research and Stanford~University} \address[A]{N. Berger\\ Mathematics Department\\ Hebrew University\\ Jerusalem 91904\\ Israel\\ \printead{e1}}
\address[B]{C. Borgs\\ J. T. Chayes\\ Microsoft Research New England\\ Cambridge, Massachusetts 02142\\ USA\\ \printead{e2}\\ \hphantom{E-mail: }\printead*{e3}}
\address[C]{A. Saberi\\ Management Science and Engineering\\ Stanford University\\ Palo Alto, California 94305\\ USA\\ \printead{e4}} \end{aug}
\received{\smonth{9} \syear{2010}} \revised{\smonth{3} \syear{2012}}
\begin{abstract} We give an explicit construction of the weak local limit
of a class of preferential attachment graphs. This limit contains all local information and allows several computations that are otherwise hard, for example, joint degree distributions and, more generally, the limiting distribution of subgraphs in balls of any given radius $k$ around a random vertex in the preferential attachment graph. We also establish the finite-volume corrections which give the approach to the limit. \end{abstract}
\begin{keyword}[class=AMS] \kwd{60C05} \kwd{60K99} \end{keyword}
\begin{keyword} \kwd{Preferential attachment graphs} \kwd{graph limits} \kwd{weak local limit} \end{keyword}
\end{frontmatter}
\section{Introduction}
About a decade ago, it was realized that the Internet has a power-law degree distribution \cite{Faloutsos,AJB99}. This observation led to the so-called preferential attachment model of Barab\'asi and Albert \cite{Barabasi1}, which was later used to explain the observed power-law degree sequence of a host of real-world networks, including social and biological networks, in addition to technological ones.
The first rigorous analysis of a preferential attachment model, in particular proving that it has small diameter, was given by Bollob\'as and Riordan \cite{brdiam}. Since these works there has been a tremendous amount of study, both nonrigorous and rigorous, of the random graph models that explain the power-law degree distribution; see \cite{BAreview} and \cite{BRreview} and references therein for some of the nonrigorous and rigorous work, respectively.
Also motivated by the growing graphs appearing in real-world networks, for the past five years or so, there has been much study in the mathematics community of notions of graph limits. In this context, most of the work has focused on dense graphs. In particular, there has been a series of papers on a notion of graph limits defined via graph homomorphisms \cite{BCLSV-rev,dense1,dense2,LSz}; these have been shown to be equivalent to limits defined in many other senses \cite{dense1,dense2}. Although most of the results in this work concern dense graphs, the paper \cite{BCLSV-rev} also introduces a notion of graph limits for sparse graphs with bounded degrees in terms of graph homomorphisms; using expansion methods from mathematical physics, Borgs et al. \cite{sparse} establishes some general results on this type of limit for sparse graphs. Another recent work \cite{BR07} concerns limits for graphs which are neither dense nor sparse in the above senses; they have average degrees which tend to infinity.
Earlier, a notion of a weak local limit of a sequence of graphs with bounded degrees was given by Benjamini and Schramm \cite{BS01} (this notion was in fact already implicit in \cite{Ald99}). Interestingly, it is not hard to show that the Benjamini--Schramm limit coincides with the limit defined via graph homomorphisms in the case of sparse graphs of bounded degree; see \cite{Elek} for yet another equivalent notion of convergent sequences of graphs with bounded degrees.
As observed by Lyons \cite{Lyo05}, the notion of graph convergence introduced by Benjamini and Schramm is meaningful even when the degrees are unbounded, provided the \textit{average degree} stays bounded. Since the average degree of the Barab\'asi--Albert graph is bounded by construction, it is therefore natural to ask whether this graph sequence converges in the sense of Benjamini and Schramm.
In this paper, we establish the existence of the Benjamini--Schramm limit for the Barab\'asi--Albert graph by giving an explicit construction of the limit process, and use it to derive various properties of the limit. Our results cover the case of both uniform and preferential attachments graphs.\footnote{Note, however, that we do not cover models exhibiting densification in the sense of Leskovec, Kleinberg and Faloutsos \cite{LKF07}; see \cite{LCKFG10} for a mathematical model exhibiting this phenomenon. Indeed, these models are outside the scope of convergence considered in this paper, since they have bounded diameter and growing average degree, and hence do not converge in the sense of Benjamini--Schramm.} Moreover, our methods establish the finite-volume corrections which give the approach to the limit.
Our proof uses a representation, which we first introduced in \cite{BBCS05}, to analyze processes that model the spread of viral infections on preferential attachment graphs. Our representation expresses the preferential attachment model process as a combination of several P\'olya urn processes. The classic P\'olya urn model was of course proposed and analyzed in the beautiful work of P\'olya and Eggenberger in the early twentieth century \cite{EP}; see \cite{durrett} for a basic reference. Despite the fact that our P\'olya urn representation is a priori only valid for a limited class of preferential attachment graphs, we give an approximating coupling which proves that the limit constructed here is the limit of a much wider class of preferential attachment graphs.
Our alternative representation contains much more independence than previous representations of preferential attachment and is therefore simpler to analyze. In order to demonstrate this, we also give a few applications of the limit. In particular, we use the limit to calculate the degree distribution and the joint degree distribution of a typical vertex with
the vertex it attached to in the preferential attachment process (more precisely, a vertex chosen uniformly from the ones it attached to).
\section{Definition of the model and statements of results}\label{secresults}
\subsection{Definition of the model} \label{secdef-mod}
The preferential attachment graph we define generalizes the model introduced by Barab\'asi and Albert \cite{Barabasi1} and rigorously constructed in \cite{brdiam}. Fix an integer $m\geq 2$ and a real number $0\leq\alpha<1$. We will construct a sequence of graphs $(G_n)$ (where $G_n$ has $n$ vertices labeled $1,\ldots,n$) as follows:
$G_1$ contains one vertex and no edges, and $G_2$ contains two vertices and $m$ edges connecting them. Given $G_{n-1}$ we create $G_n$ the following way: we add the vertex $n$ to the graph, and choose $m$ vertices $w_1,\ldots,w_m$, possibly with repetitions, from $G_{n-1}$. Then we draw edges between $n$ and each of $w_1,\ldots,w_m$. Repetitions in the sequence $w_1,\ldots,w_m$ result in multiple edges in the graph $G_n$.
We suggest three different ways of choosing the vertices $w_1,\ldots,w_m$. The first two, the independent and the conditional models, are the two most common interpretations of the preferential attachment model and are of natural interest. The third, the sequential model, is less natural, but is much easier to analyze because it is exchangeable, and therefore by de Finetti's theorem (see \cite{durrett}) has an alternative representation which contains much more independence. We call this representation the P\'olya urn representation because the exchangeable system we use is the P\'olya urn scheme.
\begin{enumerate}[(2)]
\item[(1)]\label{itemindep} The independent model: $w_1,\ldots,w_m$ are chosen independently of each other conditioned on the past, where for each $i=1,\ldots,m$, we choose $w_i$ as follows: with probability $\alpha$, we choose $w_i$ uniformly from the vertices of $G_{n-1}$, and with probability $1-\alpha$, we choose $w_i$ according to the preferential attachment rule, that is, for all $k=1,\ldots, n-1$,
\[ {\mathbf P}(w_i=k )=\frac{\deg_{n-1}(k)}Z, \]
where $Z$ is the normalizing constant $Z=\sum_{k=1}^{n-1}\deg_{n-1}(k)= 2m(n-2)$.
\item[(2)]\label{itemindepcond} The conditional model: here we start with some predetermined graph structure for the first $m$ vertices. Then at each step, $w_1,\ldots,w_m$ are chosen as in the independent case, \emph{conditioned} on them being different from one another.
\item[(3)]\label{itemsequential} The sequential model: $w_1,\ldots,w_m$ are chosen inductively as follows: with probability $\alpha$, $w_1$ is chosen uniformly, and with probability $1-\alpha$, $w_1$ is chosen according to the preferential attachment rule, that is, for every $k=1,\ldots,n-1$, we take $w_1=k$ with probability $(\deg_{n-1}(k))/Z$ where as before $Z=2m(n-2)$. Then we proceed inductively, applying the same rule, but with two modifications:
\begin{enumerate}[(a)]
\item[(a)] When determining $w_i$, instead of the degree $\deg_{n-1}(k)$, we use
\[ \deg^\prime_{n-1}(k)=\deg_{n-1}(k)+\#\{1\leq j\leq i-1 \mid w_j=k\} \]
and normalization constant
\[ Z^\prime=\sum_{k=1}^{n-1}\bigl( \deg^\prime_{n-1}(k)\bigr)= 2m(n-2)+i-1. \]
\item[(b)] The probability of uniform connection will be
\begin{equation} \label{tilde-alpha} \tilde\alpha=\alpha\frac{2m(n-1)}{2m(n-2)+2m\alpha +(1-\alpha)(i-1)} =\alpha+O \bigl(n^{-1}\bigr) \end{equation}
rather than $\alpha$. \end{enumerate}
\end{enumerate}
We will refer to all three models as versions of the preferential attachment graph, or PA-graph, for short. Even though we consider the graph $G_n$ as undirected, it will often be useful to think of the vertices $w_1,\ldots,w_m$ as vertices which ``received an edge'' from the vertex $n$, and of $n$ as a vertex which ``sent out $m$ edges'' to the vertices $w_1,\ldots,w_m$. Note, in particular, that the degree of a general vertex $v$ in $G_n$ can be written as $m+q$, where $m$ is the number of edges sent out by $v$ and $q$ is the (random) number of edges received by $v$.
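To make the sequential rule above concrete, the following Python sketch simulates it directly. This is our own illustrative code, not part of the model's definition; the function name and interface are ours, and the uniform-connection probability is the $\tilde\alpha$ of (\ref{tilde-alpha}).

```python
import random

def sequential_pa_graph(n, m, alpha, seed=0):
    """Sample the sequential preferential attachment graph G_n.

    Returns the list of edges (v, w) with v > w, possibly with
    repetitions, which then count as multiple edges.
    """
    rng = random.Random(seed)
    edges = [(2, 1)] * m              # G_2: m edges between vertices 1 and 2
    deg = {1: m, 2: m}
    for v in range(3, n + 1):
        deg_prime = dict(deg)         # deg'_{v-1}(k), updated as w_1, w_2, ... arrive
        for i in range(1, m + 1):
            Z = 2 * m * (v - 2) + (i - 1)          # normalization Z'
            tilde_alpha = alpha * 2 * m * (v - 1) / (
                2 * m * (v - 2) + 2 * m * alpha + (1 - alpha) * (i - 1))
            if rng.random() < tilde_alpha:
                w = rng.randrange(1, v)            # uniform attachment
            else:                                  # preferential: P(w = k) = deg'(k) / Z
                r, acc, w = rng.randrange(Z), 0, v - 1
                for k in range(1, v):
                    acc += deg_prime[k]
                    if r < acc:
                        w = k
                        break
            edges.append((v, w))
            deg_prime[w] += 1
        deg = deg_prime
        deg[v] = m
    return edges
```

Note that $\sum_{k<v}\deg^\prime_{v-1}(k)=2m(v-2)+(i-1)$ at the moment the $i$th edge of vertex $v$ is drawn, which is why `Z` above needs no explicit bookkeeping.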
\subsection{P\'olya urn representation of the sequential model}
Our first theorem gives the P\'olya urn representation of the sequential model. To state it, we use the standard notation $X\sim\beta(a,b)$ for a random variable $X\in[0,1]$ whose density is equal to $\frac1Z x^{a-1}(1-x)^{b-1}$, where $Z=\int_0^1 x^{a-1}(1-x)^{b-1}\,dx$. We set
\[ u=\frac\alpha{1-\alpha}. \]
Note that $u\in[0,\infty)$.
\begin{Theorem} \label{thm1} Fix $m$, $\alpha$ and $n$. Let $\psi_1=1$, let $\psi_2,\ldots,\psi_n$ be independent random variables with
\begin{equation} \label{psik-dis} \psi_j\sim\beta\bigl(m+2mu, (2j-3)m+2mu (j-1) \bigr) \end{equation}
and let
\begin{equation} \label{Sk} \varphi_j=\psi_j\prod _{i=j+1}^n(1-\psi_i),\qquad S_k= \sum_{j=1}^k\varphi_j\quad \mbox{and}\quad I_k=[S_{k-1},S_k). \end{equation}
Conditioned on $\psi_1,\ldots,\psi_n$, choose $\{U_{k,i}\}_{k=1,\ldots, n, i=1,\ldots,m}$ as a sequence of independent random variables, with $U_{k,i}$ chosen uniformly at random from $[0,S_{k-1}]$. Join two vertices $j$ and $k$ if $j<k$ and $U_{k,i}\in I_j$ for some $i\in\{1,\ldots,m\}$ (with multiple edges between $j$ and $k$ if there are several such $i$). Denote the resulting random multi-graph by $G_n$.
Then $G_n$ has the same distribution as the sequential PA-graph. \end{Theorem}
Figure \ref{fig1} illustrates this theorem.
\begin{figure}
\caption{The P\'olya-representation of the sequential model for $m=2$, $n=4$ and $k=4$. The variables $U_{4,1}$ and $U_{4,2}$ are chosen uniformly at random from $[0,S_3]$.}
\label{fig1}
\end{figure}
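The representation of Theorem \ref{thm1} can be turned directly into a sampling procedure. The Python sketch below (our own code; names are ours) samples the $\psi_j$ from (\ref{psik-dis}), computes the $\varphi_j$ and $S_k$ of (\ref{Sk}), and then draws the attachment points $U_{k,i}$:

```python
import bisect
import random

def polya_urn_graph(n, m, alpha, seed=0):
    """Sample G_n via the Polya urn representation of Theorem 1.

    Returns the edge list [(k, j), ...] with j < k, plus the partial
    sums S_0, ..., S_n (so S[n] should equal 1).
    """
    rng = random.Random(seed)
    u = alpha / (1 - alpha)
    psi = [1.0]                                  # psi_1 = 1
    for j in range(2, n + 1):
        psi.append(rng.betavariate(m + 2 * m * u,
                                   (2 * j - 3) * m + 2 * m * u * (j - 1)))
    # phi_j = psi_j * prod_{i=j+1}^n (1 - psi_i), computed right-to-left
    tail = 1.0
    phi = [0.0] * (n + 1)
    for j in range(n, 0, -1):
        phi[j] = psi[j - 1] * tail
        tail *= 1 - psi[j - 1]
    S = [0.0] * (n + 1)                          # S_k = sum_{j<=k} phi_j
    for j in range(1, n + 1):
        S[j] = S[j - 1] + phi[j]
    edges = []
    for k in range(2, n + 1):
        for _ in range(m):
            U = rng.uniform(0, S[k - 1])
            j = bisect.bisect_right(S, U)        # U in I_j = [S_{j-1}, S_j)
            edges.append((k, j))
    return edges, S
```

Since $\psi_1=1$, the sum $\sum_j\varphi_j$ telescopes to $1$, so `S[n]` is $1$ up to floating-point error; this is a convenient sanity check on the implementation.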
It should be noted that the $\alpha= 0$ case of the sequential model defined here differs slightly from the model of Bollob\'as and Riordan \cite{brdiam} in that they allow (self-)loops, while we do not. In fact, a minor alteration of our P\'olya urn representation models their graph, and we suspect that a minor alteration of their pairing representation can model our graph.
\subsection{Definition of the P\'olya-point graph model}
\subsubsection{Motivation} \label{secexplore}
The Benjamini--Schramm notion \cite{BS01} of weak convergence involves the view of the graph $G_n$ from the point of view of a ``root'' $k_0$ chosen uniformly at random from all vertices in $G_n$. More precisely, it involves the limit of the sequence of balls of radius $1,2,\ldots\,$, about this root; see Definition \ref{defBS-limit} in Section \ref{secmainresult} below for the details.
It turns out that for the sequential model, this limit is naturally described in terms of the random variables $S_{k-1}$ introduced in Theorem \ref{thm1}. To explain this, it is instructive to first consider the ball of radius $1$ around the random root $k_0$. This ball contains the $m$ neighbors of $k_0$ that were born before $k_0$ and received an edge from $k_0$ under the preferential attachment rule described above, as well as a random number $q_0$ of neighbors that were born after $k_0$ and send an edge to $k_0$ at the time they were born. We denote these neighbors by $k_{01},\ldots,k_{0m}$ and $k_{0,m+1},\ldots,k_{0,m+q_0}$, respectively.
Let
\begin{equation} \label{eqdefu} \chi=\frac{1+2u}{2+2u} \quad\mbox{and}\quad \psi=\frac{1-\chi}{\chi}= \frac1{1+2u} \end{equation}
and note that $\frac12\leq\chi<1$ and $0<\psi\leq1$. As we will see, the random variables $S_{k-1}$ behave asymptotically like $(k/n)^\chi$, implying in particular that the distribution of $S_{k_0-1}$ tends to that of a random variable $x_0=y_0^\chi$, where $y_0$ is chosen uniformly at random in $[0,1]$. The limiting distribution of $S_{k_{01}-1},\ldots,S_{k_{0m}-1}$ turns out to be quite simple as well: in the limit these random variables are i.i.d. random variables $x_{0,i}$ chosen uniformly from $[0,x_0]$, a distribution which is more or less directly inherited from the uniform random variables $U_{k,i}\in [0,S_{k_0-1}]$ from Theorem \ref{thm1}. The limiting distribution of the random variables $S_{k_{0,m+1}-1},\ldots,S_{k_{0,m+q_0}-1}$ is slightly more complicated to derive and is given by a Poisson process in $[x_0,1]$ with intensity
\[ { \gamma_0\frac{\psi x^{\psi-1}}{x_0^{\psi}}\,dx.} \]
Here $\gamma_0$ is a random ``strength'' which arises as a limit of the $\beta$-distributed random variable $\psi_{k_0}$, and is distributed according to $\Gamma({m+2mu},1)$. Here, as usual, $\Gamma(a,b)$ is used to denote a distribution on $[0,\infty)$ which has density $\frac1{Z} x^{a-1}e^{-bx}$, with $Z=\int_{0}^\infty x^{a-1}e^{-bx}\,dx$.
Next, consider the branching that results from exploring the neighborhood of a random vertex in $G_n$ in a ball of radius bigger than one. In each step of this exploration, we will find two kinds of children of the current vertex $k$: those which were born before $k$, and were attached to $k$ at the birth of $k$, and those which were born after $k$, and were connected to $k$ at their own birth. There are always either $m$ or $m-1$ children of the first kind (if $k$ was born after its parent, there will be $m-1$ such children, since one of the $m$ edges sent out by $k$ was sent out to $k$'s parent; otherwise there will be $m$ children of the first type). The number of children of the second kind is a random variable.
In the limit $n\to\infty$, this branching process leads to a random tree whose vertices, $\bar a$, carry three labels: a ``strength'' $\gamma_{\bar a}\in(0,\infty)$ inherited from the $\beta$-random variables $\psi_k$, a ``position'' $x_{\bar a}\in[0,1]$ inherited from the random variables $S_{k-1}$ and a type which can be either { $L$ (for ``left'') or $R$ (for ``right'')}, reflecting whether the vertex $k$ was born before or after its parent. While the strengths of vertices of type { $R$} turn out to be again $\Gamma({m+2mu},1)$-distributed, this is not the case for vertices of type { $L$}, since a vertex with higher values of $\psi_k$ has a larger probability of receiving an edge from its
child. In the limit, this will be reflected by the fact that the strength of vertices of type { $L$} is $\Gamma({m+2mu}+1,1)$-distributed.
\subsubsection{Formal definition} \label{secpolyapointdef}
The main goal of the previous subsection was to give an intuition of the structure of the neighborhood of a random vertex. We will show that asymptotically, the branching process obtained by exploring the neighborhood of a random vertex $k_0$ in $G_n$ is given by a random tree with a certain distribution. In order to state our main theorem, we give a formal definition of this tree.
Let $F$ be the Gamma distribution $\Gamma({ m+2mu},1)$, and let $F^\prime$ be the Gamma distribution $\Gamma({ m+2mu+1},1)$. We define a random, rooted tree $(T,0)$ with vertices labeled by finite sequences
\[ \bar{a}=(0,a_1,a_2,\ldots,a_l) \]
inductively as follows:
\begin{itemize}
\item The root $(0)$ has a position $x_0=y_0^\chi$, where $y_0$ is chosen uniformly at random in $[0,1]$. In the rest of the paper, for notational convenience, we will write $0$ instead of $(0)$ for the root.
\item In the induction step, we assume that $\bar{a}=(0,a_1,a_2,\ldots,a_l)$ and the corresponding variable $x_{\bar{a}}\in[0,1]$ have been chosen in a previous step. Define $(\bar a,j)$ as $(0,a_1,a_2,\ldots,a_l,j)$, $j=1,2,\ldots\,$, and set
\[ m_-({\bar{a}})= \cases{ m, &\quad if $\bar a$ is the root or of type $L$, \cr m-1, &\quad if $\bar a$ is of type $R$.} \]
We then take
\[ \gamma_{\bar{a}}\sim\cases{F, &\quad if $\bar a$ is the root or of type $R$, \cr F', &\quad if $\bar a$ is of type $L$,} \]
independently of everything previously sampled, choose $x_{(\bar{a},1)},\ldots,x_{(\bar{a},m_-(\bar{a}))}$ i.i.d. uniformly at random in $[0,x_{\bar{a}}]$, and $x_{(\bar{a},m_-(\bar{a})+1)},\ldots,x_{(\bar{a},m_-(\bar{a})+q _{\bar{a}})}$ as the points of a Poisson process with intensity
\begin{equation} \label{poisson-intensity} \rho_{\bar a}(x) \,dx = { \gamma_{\bar{a}}\frac{\psi x^{\psi-1}}{x_{\bar{a}}^{\psi}}\,dx} \end{equation}
on $[x_{\bar{a}},1]$ (recall that $0<\psi\leq1$). The children of $\bar{a}$ are the vertices $(\bar{a},1),\ldots,\break (\bar{a},m_-(\bar{a})+q_{\bar{a}})$, with $(\bar{a},1),\ldots,(\bar{a},m_-(\bar{a}))$ called of type $L$, and the remaining ones called of type $R$. \end{itemize}
We continue this process ad infinitum to obtain an infinite, rooted tree $(T,0)$. We call this tree the P\'olya-point graph, and the point process $\{x_{\bar a}\}$ the P\'olya-point process.
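The recursive definition above translates into a short sampler. The following Python sketch (our own code; a naive Poisson sampler is used, adequate for the moderate rates that typically arise) grows the P\'olya-point tree to a finite depth $r$:

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's Poisson sampler (fine for moderate rates; a sketch)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def polya_point_tree(m, u, r, seed=0):
    """Sample the Polya-point tree (T,0) to depth r.

    Returns {node: (type, x)}, nodes being tuples (0, a_1, ..., a_l)
    and type being 'root', 'L' or 'R'.
    """
    rng = random.Random(seed)
    chi = (1 + 2 * u) / (2 + 2 * u)
    psi = 1.0 / (1 + 2 * u)
    tree = {(0,): ('root', rng.random() ** chi)}   # x_0 = y_0^chi
    frontier = [(0,)]
    for _ in range(r):
        nxt = []
        for a in frontier:
            typ, x = tree[a]
            m_minus = m if typ in ('root', 'L') else m - 1
            shape = m + 2 * m * u + (1 if typ == 'L' else 0)
            gamma = rng.gammavariate(shape, 1.0)   # strength gamma_a
            # type-L children: i.i.d. uniform on [0, x_a]
            xs = [('L', rng.uniform(0, x)) for _ in range(m_minus)]
            # type-R children: Poisson process on [x_a, 1] with intensity
            # gamma * psi * t^{psi-1} / x_a^psi; total mass gamma*(1-x^psi)/x^psi
            mass = gamma * (1 - x ** psi) / x ** psi
            for _ in range(_poisson(rng, mass)):
                t = (x ** psi + rng.random() * (1 - x ** psi)) ** (1 / psi)
                xs.append(('R', t))
            for j, (ty, xv) in enumerate(xs, start=1):
                tree[a + (j,)] = (ty, xv)
                nxt.append(a + (j,))
        frontier = nxt
    return tree
```

The positions of the type-$R$ children are drawn by inverting the cumulative intensity: on $[x_{\bar a},1]$ the normalized distribution function is $(t^\psi-x_{\bar a}^\psi)/(1-x_{\bar a}^\psi)$.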
\subsection{Main result} \label{secmainresult}
We are now ready to formulate our main result, which states that in all three versions, the graph $G_n$ converges to the P\'olya-point graph in the sense of \cite{BS01}.
Let ${\cal{G}}$ be the set of rooted graphs, that is, the set of all pairs $(G,x)$ consisting of a connected graph $G$ and a designated vertex $x$ in $G$, called the root. Two rooted graphs $(G,x),(G',x')\in{\cal{G}}$ are called isomorphic if there is an isomorphism from $G$ to $G'$ which maps $x$ to $x'$. Given a finite integer $r$, we denote the rooted ball of radius $r$ around $x$ in $(G,x)\in{\cal{G}}$ by $B_r(G,x)$. We then equip ${\cal{G}}$ with the $\sigma$-algebra generated by the events that $B_r(G,x)$ is isomorphic to a finite, rooted graph $(H,y)$ (with $r$ running over all finite, positive integers, and $(H,y)$ running over all finite, rooted graphs),\vadjust{\goodbreak} and call $(G,x)$ a random, rooted graph if it is a sample from a probability distribution on ${\cal{G}}$. We write $(G,x)\sim(G',x')$ if $(G,x)$ and $(G',x')$ are isomorphic.
\begin{Definition} \label{defBS-limit} Given a sequence of random, finite graphs $G_n$, let $k_0^{(n)}$ be a uniformly random vertex from $G_n$. Following \cite{BS01}, we say that an infinite random, rooted graph $(G,x)$ is the weak local limit of $G_n$ if for all finite rooted graphs $(H,y)$ and all finite $r$, the probability that $B_{r}(G_n,k_0^{(n)})$ is isomorphic to $(H,y)$ converges to the probability that $B_{r}(G,x)$ is isomorphic to $(H,y)$. \end{Definition}
The main result of the paper is the following theorem.
\begin{Theorem}\label{thmmain} The weak local limit of all three variations of the preferential attachment model is the P\'olya-point graph. \end{Theorem}
Recently, and independently of our work, Rudas et al. \cite{rudas} studied the random tree resulting from the preferential attachment model when $m = 1$. They derived the asymptotic distribution of the subtree under a randomly selected vertex, which in particular yields the Benjamini--Schramm limit. Note that when $m=1$, there is no distinction between the independent, conditional and sequential models.
As alluded to before, the points $x_{\bar a}$ of the P\'olya-point process represent the random variables $S_{k-1}$ of the vertices in $G_n$, which in turn behave like $(k/n)^\chi$ as $n\to\infty$. The variable $y_{\bar a}=x_{\bar a}^{1/\chi}$ thus represents the birth-time of the corresponding vertex in $G_n$. This is made precise in the following corollary to the proof of Theorem \ref{thmmain}. Like the theorem, the corollary holds for all three versions of the preferential attachment model.
\begin{Corollary} \label{corlimit} Given $r<\infty$ and $\varepsilon>0$ there exists an $n_0<\infty$ such that for $n\geq n_0$, there exists a coupling $\mu$ between a sample $T$ of the P\'olya-point graph and an ensemble $\{G_n, v_0\}$ where $G_n$ has the distribution of the preferential attachment graph of size $n$, and $v_0$ is a uniformly chosen vertex of $G_n$, satisfying: with $\mu$-probability at least $1-\varepsilon$, there exists an isomorphism $\bar a\mapsto k_{\bar a}$ from the ball of radius $r$ about $0$ in $(T,0)$ into the ball of radius $r$ about $v_0$ in $G_n$, with the property that
\[ \biggl\vert y_{\bar a}-\frac{k_{\bar a}}n\biggr\vert\leq\varepsilon \]
for all $\bar a$ with distance at most $r$ from the root in $(T,0)$. Here $y_{\bar a}$ is defined as $y_{\bar a}=x_{\bar a}^{1/\chi}$. \end{Corollary}
The numerator ${x_{\bar{a}}^{\psi}}=y_{\bar a}^{1-\chi}$ in (\ref{poisson-intensity}) thus expresses the fact that in the preferential attachment process, earlier vertices are more likely to attract many neighbors than later vertices.\vadjust{\goodbreak}
\subsection{Subgraph frequencies} \label{secsubgraphfrequency} A natural question concerning a sequence of growing graphs $(G_n)$ is the question of how often a small graph $F$ is contained in $G_n$ as a subgraph. This question can be formalized in several ways, for example, by considering the number of homomorphisms from $F$ into $G_n$, or the number of injective homomorphism, or the number of times $F$ is contained in $G_n$ as an \textit{induced} subgraph.
For graph sequences with bounded degrees, this leads to an alternative notion of convergence, by defining a sequence of graphs to be convergent if the homomorphism density $t(F,G_n)$---defined as the number of homomorphisms from $F$ into $G_n$ divided by the number of vertices in $G_n$---converges for all finite connected graphs $F$ \cite{BCLSV-rev,sparse}. Indeed, for sequences of graphs $G_n$ whose degree is bounded uniformly in $n$, this notion can easily be shown to be equivalent to the notion introduced by Benjamini and Schramm; moreover, the corresponding notions involving the number of injective homomorphisms, or the number of induced subgraphs, are equivalent as well; see \cite{BCLSV-rev}, Section 2.2 for formulas expressing these various numbers in terms of each other.
But for graphs with growing maximal degree, this equivalence does not hold in general. Indeed, consider a sequence of graphs with uniformly bounded degrees, augmented by a vertex of degree $n^{1/2}$. Such a vertex does not change the notion of convergence introduced by Benjamini and Schramm; however, the number of homomorphisms from a star with $3$ legs into this graph sequence grows like $n^{3/2}$, implying that the homomorphism density diverges.
To overcome this difficulty, we will consider maps $\Phi$ from $V(F)$, the vertex set of $F$, into $V(G_n)$, the vertex set of $G_n$, which in addition to being homomorphisms also preserve degrees. More explicitly, given a graph $F$ and a map $ \mathbf{n}\dvtx V(F)\to\{0,1,2,\ldots\}$, we define $\operatorname{inj}(F,\mathbf{n};G_n)$ as the number of injective maps $\Phi\dvtx V(F)\to V(G_n)$ such that:
\begin{longlist}[(2)]
\item[(1)] If $ij\in E(F)$, then $\Phi(i)\Phi(j)\in E(G_n)$;
\item[(2)] $d_{\Phi(i)}(G_n)=d_i(F)+n(i)$ for all $i\in V(F)$, \end{longlist}
where $E(F)$ denotes the set of edges in $F$, and $d_i(F)$ denotes the degree of the vertex $i$ in $F$.
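Since the degree-constrained count $\operatorname{inj}(F,\mathbf{n};G_n)$ may be unfamiliar, a brute-force reference implementation for simple graphs may help fix the definition (our own code, exponential in $|V(F)|$, so only usable for tiny $F$ and $G$; vertices are given implicitly by the edge lists):

```python
from collections import Counter
from itertools import permutations

def inj_count(f_edges, n_map, g_edges):
    """Brute-force inj(F, n; G): injective maps Phi preserving edges with
    deg_G(Phi(i)) = deg_F(i) + n(i).  Simple graphs only; a sketch."""
    f_vs = sorted({v for e in f_edges for v in e})
    g_vs = sorted({v for e in g_edges for v in e})
    g_adj = {frozenset(e) for e in g_edges}
    d_f = Counter(v for e in f_edges for v in e)   # degrees in F
    d_g = Counter(v for e in g_edges for v in e)   # degrees in G
    count = 0
    for image in permutations(g_vs, len(f_vs)):
        phi = dict(zip(f_vs, image))
        if all(frozenset((phi[i], phi[j])) in g_adj for i, j in f_edges) and \
           all(d_g[phi[i]] == d_f[i] + n_map[i] for i in f_vs):
            count += 1
    return count
```

For example, with $F$ a single edge and $G$ a triangle, every vertex of $G$ has degree $2$, so the count is $6$ when $\mathbf n\equiv1$ and $0$ when $\mathbf n\equiv0$.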
The following lemma is due to L\'aszl\'o Lov\'asz.
\begin{Lemma} \label{lemsub-G-conv} Let $D<\infty$, and let $G_n$ be a sequence of graphs that
converges in the sense of Benjamini and Schramm. Then the limit
\[
\hat t(F,\mathbf{n})=\lim_{n\to\infty}\frac1{|V(G_n)|} \operatorname{inj}(F,\mathbf{n};G_n) \]
exists for all finite connected graphs $F$ and all maps $ \mathbf{n}\dvtx V(F)\to\{0,1,2,\ldots\}$. \end{Lemma}
As stated, the lemma refers to sequences of deterministic graphs. For sequences of random graphs, its proof\vspace*{2pt} gives convergence of the expected subgraph frequencies $\frac1{|V(G_n)|} \operatorname{inj}(F,\mathbf{n};G_n)$. To prove\vspace*{2pt} convergence in probability for these frequencies, a little more work is needed. For the case of preferential attachment graphs, we do this in Section \ref{secfinite-ball}, together with an explicit calculation of the actual values of these numbers.
\begin{Remark} When $G_n$ has multiple edges, the definition of $ \operatorname{inj}(F,\mathbf{n};G_n)$ has to be modified. There are a priori several possible definitions; motivated by the notions introduced in \cite{BCLSV-rev}, we choose the definition
\[ \operatorname{inj}(F,\mathbf{n};G_n)= \sum_{\Phi}\prod _{ij\in E(F)}m_{\Phi(i)\Phi(j)}(G_n)^{m_{ij}(F)}, \]
where the sum goes over injective maps $\Phi\dvtx V(F)\to V(G_n)$ obeying condition (2) above with $d_i(F)$ and $d_{\Phi(i)}(G_n)$ denoting degrees counted with multiplicities, and where $m_{ij}(F)$ is the multiplicity of the edge $ij$ in $F$ [and similarly for $m_{\Phi(i)\Phi(j)}(G_n)$]. With this definition, the above lemma holds for graphs with multiple edges as well. \end{Remark}
\section{Proof of weak distributional convergence for the sequential model}\label{secpf} In this section we prove that the sequential model converges to the P\'olya-point tree.
\subsection{P\'olya urn representation of the sequential model} \label{secpolya}
In the early twentieth century, P\'olya proposed and analyzed the following model known as the P\'olya urn model; see \cite{durrett}. The model is described as follows. We have a number of urns, each holding a number of balls, and at each step, a new ball is added to one of the urns. The probability that the ball is added to urn $i$ is proportional to $N_i + u$ where $N_i$ is the number of balls in the $i$th urn and $u$ is a predetermined parameter of the model.
P\'olya showed that this model is equivalent to another process as follows. For every $i$, choose at random a parameter (which we call ``strength'' or ``attractiveness'')~$p_i$, and at each step, \emph{independently} of our decision in previous steps, put the new ball in urn $i$ with probability $p_i$. P\'olya specified the distribution (as a function of $u$ and the initial number of balls in each urn) for which this mimics the urn model. A particularly nice example is the case of two urns, each starting with one ball and $u=0$. Then $p_1$ is a uniform $[0,1]$ variable, and $p_2=1-p_1$. P\'olya showed that for general values of $u$ and $\{N_i(0)\}$, the values of $\{p_i\}$ are determined by the $\beta$-distribution with appropriate parameters.
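The classical two-urn case is easy to simulate; the sketch below (our own code, for illustration only) runs the urn with parameter $u$ and returns the final fraction of balls in the first urn, which converges to the random strength $p_1$:

```python
import random

def polya_urn_fraction(steps, u=0.0, start=(1, 1), seed=0):
    """Two-urn Polya model: each new ball joins urn i w.p. prop. to N_i + u.

    Returns the fraction of balls in urn 1 after `steps` additions.
    """
    rng = random.Random(seed)
    n1, n2 = start
    for _ in range(steps):
        if rng.uniform(0, n1 + n2 + 2 * u) < n1 + u:
            n1 += 1
        else:
            n2 += 1
    return n1 / (n1 + n2)
```

With $u=0$ and one ball in each urn, the limiting fraction is uniform on $[0,1]$; a histogram of this function over many independent seeds makes the $\beta$-distribution of the strengths visible.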
It is not hard to see that there is a close connection between the preferential attachment model of Barab\'asi and Albert and the P\'olya urn model in the following sense: every new connection that a vertex gains can be represented by a new ball added in the urn corresponding to that vertex.\vadjust{\goodbreak}
To derive this representation, let us consider first a two-urn model, with the number of balls in one urn representing the degree of a particular vertex $k$, and the number of balls in the other representing the sum of the degrees of the vertices $1,\ldots, k-1$. We will start this process at the point when $n=k$ and $k$ has connected to precisely $m$ vertices in $\{1,\ldots, k-1\}$. Note that at this point, the urn representing the degree of $k$ has $m$ balls, while the other one has $(2k-3)m$ balls.
Consider a time in the evolution of the preferential attachment model when we have $n-1\geq k$ old vertices, and $i-1$ edges between the new vertex $n$ and $\{1,\ldots, k-1\}$. Assume that at this point the degree of $k$ is $d_k$, and the sum of the degrees of $1,\ldots,k-1$ is $d_{<k}$. At this point, the probability that the $i$th edge from $n$ to $\{1,\ldots, n-1\}$ is attached to $k$ is
\begin{eqnarray} \label{eqPntok} &&\tilde\alpha\frac{1}{n-1} + (1-\tilde\alpha) \frac{d_k}{2m(n-2) + (i-1)} \nonumber\\[-8pt]\\[-8pt] &&\qquad=\frac{2m\alpha+ (1-\alpha )d_k}{2m(n-2)+2m\alpha+(1-\alpha)(i-1)},\nonumber \end{eqnarray}
while the probability that it is attached to one of the nodes $1,\ldots, k-1$ is
\begin{eqnarray} \label{eqPnto<k} && \tilde\alpha\frac{k-1}{n-1} + (1-\tilde\alpha) \frac {d_{<k}}{2m(n-2) + (i-1)} \nonumber\\[-8pt]\\[-8pt] &&\qquad=\frac{2m\alpha+ (1-\alpha )d_{<k}}{2m(n-2)+2m\alpha+(1-\alpha)(i-1)}.\nonumber \end{eqnarray}
Thus, conditioned on connecting to $\{1,\ldots,k\}$, the probability that the $i$th edge from $n$ to $\{1,\ldots, n-1\}$ is attached to $k$ is
\[ \frac1Z (2mu+d_k ), \]
while the conditional probability that it is attached to one of the nodes $1,\ldots, k-1$ is
\[ \frac1Z \bigl(2mu(k-1)+d_{<k} \bigr), \]
where
$Z$ is an appropriate normalization constant. Note that the constant $\tilde\alpha$ in (\ref{tilde-alpha}) was chosen in such a way that the factor $u$ appearing in these expressions does not depend on $i$, which is crucial to guarantee exchangeability.
Taking into account that the two urns start with $m$ and $(2k-3)m$ balls, respectively, we see that the evolution of the two bins is a P\'olya urn with strengths $\psi_k$ and $1-\psi_k$, where $\psi_k\sim B_k=\beta(m+2mu,(2k-3)m+2mu(k-1))$.
\begin{pf*}{Proof of Theorem \ref{thm1}} Using the two urn process as an inductive input, we can now easily construct the P\'olya graph defined in Theorem \ref{thm1}. Indeed, let $X_t\in\{1,2,\ldots,\lceil \frac tm\rceil\}$ be the vertex receiving the $t$th edge in the sequential model (the other endpoint of this edge being the vertex $\lceil\frac tm\rceil+1$). For $t\leq m$, $X_t$ is deterministic (and equal to $1$), but starting at $t=m+1$, we have a two-urn model, starting with $m$ balls in each urn. As shown above, the two urns can be described as P\'olya-urns with strengths $1-\psi_2$ and $\psi_2$. Once $t>2m$, $X_t$ can take three values, but conditioned on $X_t\leq2$, the process continues to be a two-urn model with strengths $1-\psi_2$ and $\psi_2$. To determine the probability of the event that $X_t\leq2$, we now use the above two-urn model with $k=3$, which gives that the probability of the event $X_t\leq2$ is $1-\psi_3$, at least as long as $t\leq 3m$. Combining these two-urn models, we get a three-urn model with strengths $(1-\psi_2)(1-\psi_3)$, $\psi_2(1-\psi_3)$ and $\psi_3$. Again, this model remains valid for $t>3m$, as long as we condition on $X_t\leq3$.
Continuing inductively, we see that the sequence $X_t$ evolves in stages:
\begin{itemize}
\item For $t=1,\ldots,m$, the variable $X_t$ is deterministic: $X_t=1$.
\item For $t=m+1,\ldots, 2m$, the distribution of $X_t\in\{1,2\}$ is described by a two-urn model with strengths $1-\psi_2$ and $\psi_2$, where $\psi_2\sim B_2$.
\item In general, for $t=m(k-1)+1,\ldots, km$, the distribution of $X_t\in\{1,\ldots,k\}$ is described by a $k$-urn model with strengths
\begin{equation} \label{phi-jk} \varphi_j^{(k)}=\psi_j\prod _{i=j+1}^k (1-\psi_i),\qquad j=1,\ldots, k. \end{equation}
Here $\psi_k\sim B_k$ is chosen at the beginning of the $k$th stage, independently of the previously chosen strengths $\psi_1,\ldots,\psi_{k-1}$ (for convenience, we set $\psi_1=1$). \end{itemize}
Note that\vspace*{1pt} the random variables $\varphi_j^{(k)}$ can be expressed in terms of the random variables introduced in Theorem \ref{thm1} as follows: by induction on $k$, it is easy to show that
\begin{equation} \label{S-k-prod} S_k=\prod_{j=k+1}^n(1- \psi_j). \end{equation}
This implies that
\[ \varphi_j^{(k)}= \frac{\varphi_j}{S_k}, \]
which relates the strengths $\varphi_j^{(k)}$ to the random variables defined in Theorem \ref{thm1}, and shows that the process derived above is indeed the process given in the theorem. \end{pf*}
In order to apply Theorem \ref{thm1}, we will use two technical lemmas, whose proofs will be deferred to a later section. The first lemma states a law of large numbers for the random variables $S_k$.
\begin{Lemma}\label{lemSk} For every $\varepsilon>0$ there exists a $K<\infty$ such that for $n\geq K$, we have that with probability at least $1-\varepsilon$,
\[ \max_{k\in\{1,\ldots,n\}}\biggl\vert S_k- \biggl(\frac kn \biggr)^\chi\biggr\vert\leq\varepsilon \]
and
\[ \max_{k\in\{K,\ldots,n\}}\biggl\vert S_k- \biggl(\frac kn \biggr)^\chi\biggr\vert\leq\varepsilon\biggl(\frac kn \biggr)^\chi. \]
\end{Lemma}
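The law of large numbers in this lemma is easy to probe numerically. The following standalone Python sketch (our own code) samples the $\psi_j$ of Theorem \ref{thm1}, computes $S_k$ via the telescoping identity (\ref{S-k-prod}), and measures the worst absolute deviation of $S_k$ from $(k/n)^\chi$:

```python
import random

def check_Sk_lln(n, m, alpha, seed=0):
    """Return max_k |S_k - (k/n)^chi| for one sample of psi_2, ..., psi_n."""
    rng = random.Random(seed)
    u = alpha / (1 - alpha)
    chi = (1 + 2 * u) / (2 + 2 * u)
    # psi[j-1] corresponds to psi_j; psi_1 = 1
    psi = [1.0] + [rng.betavariate(m + 2 * m * u,
                                   (2 * j - 3) * m + 2 * m * u * (j - 1))
                   for j in range(2, n + 1)]
    # S_k = prod_{j=k+1}^n (1 - psi_j), computed by backward recursion
    S = [0.0] * (n + 1)
    S[n] = 1.0
    for k in range(n - 1, 0, -1):
        S[k] = S[k + 1] * (1 - psi[k])      # psi[k] is psi_{k+1}
    return max(abs(S[k] - (k / n) ** chi) for k in range(1, n + 1))
```

For moderately large $n$ the returned deviation is small, consistent with the uniform bound of the lemma.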
The second lemma concerns a coupling of the sequence $\{\psi_k\}_{k\geq1}$ and an i.i.d. sequence of $\Gamma$-random variables $\{\chi_k\}_{k\geq1}$, where $\chi_k\sim\Gamma(m+2mu,1)$. To describe the coupling, we define a sequence of functions $f_k\dvtx[0,\infty)\to[0,1)$ by
\begin{equation} \label{coup1} {\mathbf P}\bigl(\psi_k\leq f_k(x)\bigr)={\mathbf P}( \chi_k\leq x). \end{equation}
Then $f_k(\chi_k)$ has the same distribution as $\psi_k$, implying that $ (\{\chi_k\}_{k\geq1},\break \{f_k(\chi_k)\}_{k\geq 1}) $ defines a coupling between $\{\chi_k\}_{k\geq1}$ and $\{\psi_k\}_{k\geq1}$.
\begin{Lemma}\label{lemfk} Let $f_k$ be as in (\ref{coup1}), and let $\{\chi_k\}_{k\geq 1}$ be i.i.d. random variables with distribution $\Gamma(m+2mu,1)$. Given $\varepsilon>0$ there exists a $K<\infty$ so that the following holds:
\begin{longlist} \item With probability at least $1-\varepsilon$,
\begin{equation} \label{chi-k-bd} \chi_k\leq\log^2 k \qquad\mbox{for all } k\geq K; \end{equation}
\item For $k\geq K$ and $x\leq\log^2 k$,
\begin{equation} \label{coupling-bd} \frac{1-\varepsilon}{ 2mk(1+u)} x \leq f_k(x) \leq \frac{1+\varepsilon}{ 2mk(1+u)} x. \end{equation}
\end{longlist} \end{Lemma}
We defer the proof of Lemmas \ref{lemSk} and \ref{lemfk} to Section \ref{secpolest}.
\subsection{The exploration tree of $G_n$} \label{secexpltree}
Let $K_r=K_r(G_n,k_0)$ be the set of vertices in $G_n$ which have distance at most $r$ from the random root $k_0$, and let $\hat B_r(G_n,k_0)$ be the graph on $K_r$ that contains all edges in $G_n$ for which at least
one endpoint has distance $\leq r$ from $k_0$. When proving that the preferential model converges to the P\'olya-point graph, we will use the notion of convergence given in Definition~\ref{defBS-limit}, but instead of the standard ball of radius $r$, we will use the modified ball $\hat B_r(G_n,k_0)$. (It is obvious that this definition is equivalent.)
We will prove our results by induction on $r$, using the exploration procedure outlined in Section \ref{secexplore} in the inductive step. To this end, it will be convenient to endow the rooted graph $(G_n,k_0)$ with a structure which is similar to the one defined for the P\'olya-point graph.\vadjust{\goodbreak} More precisely, we will inductively define a rooted tree $(T^{(n)}_r,0)$ on sequences of integers $\bar{a}=(0,a_1,a_2,\ldots,a_l)$, and a homomorphism
\[ {\mathbf k}^{(r)}\dvtx T^{(n)}_r\to\hat B_r(G_n,k_0)\dvtx \bar a\mapsto k_{\bar a} \]
as follows.
We start our inductive definition by mapping $0$ into a vertex $k_0$ chosen uniformly at random from the vertex set $\{1,\ldots,n\}$ of $G_n$. Given a vertex $\bar a=(0,a_1,a_2,\ldots,a_l)\in T^{(n)}_r$ and its image $k_{\bar a}$ in $G_n$, let $d_{\bar a}$ be the degree of $k_{\bar a}$ in $G_n$, and let $k_{\bar a_-},k_1,\ldots, k_{d_{\bar a}-1}$ be the neighbors of $k_{\bar a}$ in $G_n$, where $\bar a_-=(0,a_1,a_2,\ldots,a_{l-1})$. Recalling that edges were created one by one during the sequential preferential attachment process, we order $k_1,\ldots, k_{d_{\bar a}-1}$ in such a way that for all $i=1,\ldots, d_{\bar a}-2$, the edge $(k_{\bar a},k_i)$ was born before the edge $(k_{\bar a},k_{i+1})$. We then define the children of $\bar a$ to be the points $(\bar a,1),\ldots,(\bar a,d_{\bar a}-1)$. This defines~$T_{r+1}^{(n)}$. The map ${\mathbf k}^{(r+1)}$ is the extension of $\mathbf k^{(r)}$ which maps $(\bar a,1),\ldots,(\bar a,d_{\bar a}-1)$ to the vertices $k_1,\ldots, k_{d_{\bar a}-1}$, respectively. We call a vertex $(\bar a,i)$ early or of type { $L$} if $k_{i}<k_{\bar a}$ and late or of type { $R$} otherwise. Note that the root and vertices of type { $L$} have $m$ children of type { $L$}, while vertices of type { $R$} have $m-1$ children of type { $L$}.
To make the dependence on $G_n$ explicit, we often use the notation $T_r(G_n)$ for the tree $T_r^{(n)}$, and the notation $\mathbf k^{(r)}(G_n)$ for the map $\mathbf k^{(r)}$. Note that $\mathbf k^{(r)}$ does not, in general, give a graph isomorphism between $T_r^{(n)}$ and $\hat B_r(G_n,k_0)$. But if the map is injective when restricted to $T_r^{(n)}$, it is a graph isomorphism. To prove Theorem \ref{thmmain}, it is therefore enough to show that for all $r$, the map $\mathbf k^{(r)}$ is injective and the tree $T_r^{(n)}$ converges in distribution to $T_r$, the ball of radius $r$ in the P\'olya-point graph $(T,0)$.
\subsection{Regularity properties of the P\'olya-point process}
In order to prove Theorem \ref{thmmain}, we will use some simple regularity properties of the P\'olya-point process.
Recall the definition of the P\'olya-point graph $(T,0)$ and the P\'olya-point process $\{x_{\bar a}\}$ from Section \ref {secpolyapointdef}, as well as the notation $\rho_{\bar a}(x)\,dx$ for the intensity defined in (\ref{poisson-intensity}). As usual, we define the height of a vertex $\bar a=(0,a_1,a_2,\ldots,a_l)$ in $T$ as its distance $l$ from the root. We denote the rooted subtree of height $r$ in $(T,0)$ by $(T_r,0)$.
\begin{Lemma} \label{lemreg1} Fix $0\leq r<\infty$ and $\varepsilon>0$. Then there are constants $\delta>0$, \mbox{$C<\infty$}, $K<\infty$ and $N<\infty$ such that with probability at least $1-\varepsilon$, we have that:
\begin{itemize}
\item$x_{\bar a}\geq\delta$ for all vertices $\bar a$ in $T_r$;
\item$\gamma_{\bar a}\leq C$;
\item$\rho_{\bar a}(\cdot)\leq K$;
\item$|T_r|\leq N$.\vadjust{\goodbreak} \end{itemize}
\end{Lemma}
\begin{pf} The proof of the lemma is easily obtained by induction on $r$. We leave it to the reader.
\end{pf}
\begin{Corollary} \label{correg2} For all $\varepsilon>0$ and all $r<\infty$ there is a constant $\delta>0$ such that with probability at least $1-\varepsilon$, we have
\[
\mathop{\min_{\bar a,\bar b\in T_r}}_{\bar a\neq\bar b} |x_{\bar b}-x_{\bar a}|\geq\delta. \]
\end{Corollary}
\begin{pf} This is an immediate consequence of the continuous nature of the random variables $x_{\bar a}$ and the statements of Lemma \ref{lemreg1}. \end{pf}
\subsection{The neighborhood of radius one} \label{sec1-Neighborhood}
Before proving our main theorem, Theorem \ref{thmmain}, for the sequential model, we establish the following lemma, which will serve as the base in an inductive proof of our main theorem.
\begin{Lemma} \label{thm2} Let $G_n$ be the sequential preferential attachment graph, let $k_0$ be chosen uniformly at random in $\{1,\ldots,n\}$ and let $k_{0,1},\ldots,k_{0,m+q_0}$ be the neighbors of $k_0$, ordered as in Section \ref{secexpltree} by the birth times of the edges $\{k_0,k_{0,i}\}$. Then $(G_n,k_0)$ and the P\'olya-point process $\{x_{\bar a}\}$ can be coupled in such a way that for all $\varepsilon>0$ there are constants $C,N<\infty$, $\delta>0$ and $n_0<\infty$ such that for $n\geq n_0$, with probability at least $1-\varepsilon$, we have that:
\begin{longlist}[(iii)]
\item[(i)] $T_1 \cong T_1(G_n)$ and $|T_1(G_n)|\leq N$;
\item[(ii)] $\vert x_{\bar a}- S_{k_{\bar a}-1}\vert\leq \varepsilon$ for all $\bar a\in T_1$;
\item[(iii)] $k_0,k_{0,1},\ldots,k_{0,m+q_0}$ are pairwise distinct and $k_{\bar a}\geq\delta n$ for all $\bar a\in T_1$;
\item[(iv)] $ \chi_{k_{\bar a}}=\gamma_{\bar a}\leq C$ for all $\bar a\in T_1. $ \end{longlist}
\end{Lemma}
\begin{pf} (i)--(ii): We start by proving the first two statements. Choose $y_0$ uniformly at random in $[0,1]$, let $x_0=y_0^{\chi}$ and let $x_{0,1},\ldots,x_{0,m+q_0'}$ be the\vspace*{2pt} positions of the children of $0$ in $(T,0)$. Define $k_0=\lceil ny_0\rceil$, so that $k_0$ is distributed uniformly in $\{1,\ldots,n\}$, and for $i=1,\ldots,m$, define $k_{0,i}$ by
\[ S_{k_{0,i}-1}\leq\frac{x_{0,i}}{x_0} S_{k_0-1} < S_{k_{0,i}}. \]
By\vspace*{2pt} Theorem \ref{thm1} and the observation that $U_{k_0,1}=\frac{x_{0,1}}{x_0},\ldots,U_{k_0,m}=\frac{x_{0,m}}{x_0}$ are i.i.d. random variables chosen uniformly at random from $[0,1]$, we have that, with high probability, the values $S_{k_{0,i}-1}$ are indeed close to the corresponding $x_{0,i}$'s.
Indeed, given $\varepsilon>0$ choose $\delta$, $C$, $K$ and $N$ in such a way that the statements of Lemma \ref{lemreg1} and Corollary \ref{correg2} hold for $r=1$, and let $\varepsilon'=\min\{\varepsilon,\delta/4\}$. By Lemma~\ref{lemSk} there exists a constant $n_0<\infty$ such that for $n\geq n_0$, we have that
\begin{equation}
\label{S-x-upbd1}\quad |S_{k_0-1}-x_0|\leq\varepsilon'
\quad\mbox{and}\quad |S_{k_{0,i}-1}-x_{0,i}| \leq\varepsilon' \qquad\mbox{for all } i=1,\ldots,m \end{equation}
with probability at least $1-2\varepsilon$.\vadjust{\goodbreak}
To understand the limiting distribution of the remaining neighbors, $k_{0,m+1},\allowbreak\ldots, k_{0,m+q_0}$, of $k_0$, we observe that conditioned on the random variables $\psi_1,\ldots,\psi_n$, each vertex $k>k_0$ has $m$ independent chances of being connected to $k_0$, corresponding to the $m$ independent events $\{X_{k,i}=k_0\}$, $i=1,\ldots,m$, where we used the shorthand $X_{k,i}$ for the interval containing the endpoint of the $i$th edge sent out from $k$ (it is related to the random variables $X_t$ introduced in the proof of Theorem~\ref{thm1} via $X_{k,i}=X_{(k-2)m+i}$). Let
\begin{equation} \label{Pktok} P_{k\to k_0}=\varphi_{k_0} \frac1{S_{k-1}} =\frac{S_{k_0}}{S_{k-1}}\psi_{k_0} \end{equation}
be the probability of the event $\{X_{k,i}=k_0\}$, and let $N_{y_0}(y)=\break \sum_{i=1}^m\sum_{k=k_0}^{\lceil ny\rceil}\mathbb I(X_{k,i}=k_0)$ where $\mathbb I(A)$ is the indicator function of the event $A$. We want to show that $N_{y_0}(\cdot)$ converges to a Poisson process on $[y_0,1]$.
By Lemma \ref{lemreg1}, we have that $k_0\geq nx_0\geq n\delta$ with probability at least $1-\varepsilon$, which allows us to apply Lemmas \ref{lemSk} and \ref{lemfk} to show that for $n$ large enough, with probability at least $1-2\varepsilon$, we have
\[ \hat P_{k\to k_0}(1-\varepsilon) \leq P_{k\to k_0} \leq(1+\varepsilon )\hat P_{k\to k_0} \]
where
\[ \hat P_{k\to k_0} =\frac1{nm} \frac{\chi_{k_0}}{2(1+u)}\frac n{k_0} \biggl(\frac{k_0}k \biggr)^\chi. \]
For $y>y_0$, let $\hat N_{y_0}(y)= \sum_{i=1}^m\sum_{k=k_0}^{\lceil ny\rceil}\hat Y_{k\to k_0}^{(i)}$ where $\{\hat Y_{k\to k_0}^{(i)}\}$ are independent random variables such that $\hat Y_{k\to k_0}^{(i)}=1$ with probability $\hat P_{k\to k_0}$ and $\hat Y_{k\to k_0}^{(i)}=0$ with probability $1-\hat P_{k\to k_0}$. It follows from standard results on convergence to Poisson processes (and the fact that $\gamma_0$ has the same distribution as $\chi_{k_0}$) that $\hat N_{y_0}(\cdot)$ converges weakly to a Poisson process with density $\frac{\gamma_0}{2(u+1)y_0} (\frac {y_0}y )^\chi$ on $[y_0,1]$. A change of variables from $y$ to $x=y^\chi$ now leads to the Poisson process with density
\[ \frac{\gamma_0}{ 2(1+u)\chi}\frac{x^{\psi-1}}{x_0^\psi} { =\gamma_0 \frac{\psi x^{\psi-1}}{x_0^\psi}} \]
on $[x_0,1]$. Combined with a last application of Lemma \ref{lemSk} to bound the difference between $S_{k_{0,i}-1}$ and $(k_{0,i}/n)^\chi$, this proves that $x_{0,m+1},\ldots,x_{0,m+q_0'}\in[x_0,1]$ and $k_{0,{m+1}},\ldots,k_{0,m+q_0}$ can be coupled in such a way that for $n$ large enough, with probability at least $1-3\varepsilon$, we have that $q_0=q_0'\leq Q=N-m-1$, $\chi_{k_0}=\gamma_0\leq C$ and
\begin{equation}
\label{S-x-upbd2} |x_{0,i}-S_{k_{0,i}-1}|\leq\varepsilon' \qquad\mbox{for } i=m+1,\ldots,m+q_0. \end{equation}
Since $\varepsilon>0$ was arbitrary, this completes the proof of the first two statements of the lemma.
(iii) To prove the third statement, we use bounds (\ref{S-x-upbd1}) and (\ref{S-x-upbd2}), and a final application of Lemma \ref{lemSk}, to establish the existence of two constants $\delta'>0$ and $n_0'<\infty$ such that for $n\geq n_0'$, with probability at least $1-4\varepsilon$,
\begin{equation} \label{k-lbd1} k_{\bar a} \geq\delta'n \qquad\mbox{for all } \bar a \in T_1(G_n) \end{equation}
and
\[
|k_{\bar a}-k_{\bar b}|\geq\delta'n \qquad\mbox{for all } \bar a,\bar b\in T_1(G_n) \mbox{ with } \bar a\neq\bar b, \]
implying in particular that $k_0,k_{0,1},\ldots,k_{0,m+q_0}$ are pairwise distinct.
(iv) To prove the last statement, let us assume that $\gamma_0\leq C$, and that $k_{0,1},\ldots,\allowbreak k_{0,m+q}$ are pairwise distinct, with $k_{0,i}<k_0$ for $i\leq m$, $k_{0,i}>k_0$ for $i> m$, $\min k_{0,i}\geq n\delta'$ and $q\leq Q$. Let $A$ be the event that we have chosen $k_0$ as the uniformly random vertex and that the neighbors of $k_0$ are the vertices $k_{0,1},\ldots,k_{0,m+q}$. Let $\chi^{A,\gamma_0}$ be the collection of random variables $\{\chi_k\}_{k\neq k_0}$ conditioned on $\chi_{k_0}=\gamma_0$ and $A$.
We will want to show that $\chi^{A,\gamma_0}$ can be coupled to a collection of independent random variables $\{\hat\chi_k\}_{k\neq k_0}$ such that $\chi^{A,\gamma_{0}}=\{\hat\chi_k\}_{k\neq k_0}$ with probability at least $1-\varepsilon$, and
\begin{equation} \label{psi-distr} \hat\chi_k \sim\cases{ F_k', &\quad if $k\in\{k_{0,1},\ldots,k_{0,m}\}$, \cr F_k, &\quad otherwise.} \end{equation}
Let $\rho(\cdot\mid A,\chi_{k_0})$ be the density of the { (multi-dimensional)} random variable $\chi^{A,\gamma_0}$, and let ${\mathbf P}(\cdot)$ be the joint distribution of $G_n$ and the random variables $\chi_1,\ldots,\chi_n$. By Bayes's theorem,
\begin{equation} \label{Bayes} \rho(\cdot\mid A,\chi_{k_0}=\gamma_0)= \frac{{\mathbf P}(A\mid\cdot,\chi_{k_0}{ = \gamma_0})} { {\mathbf P}(A\mid\chi_{k_0}{ = \gamma_0})}\rho_0(\cdot), \end{equation}
where $\rho_0$ is the original density of the random variables $\{\chi_k\}_{k\neq k_0}$ (we denote the corresponding probability distribution and expectations by $P_0$ and $E_0$, resp.).
We thus have to determine the probability of $A$ conditioned on $\chi_1,\ldots,\chi_n$. With the help of Theorem \ref{thm1}, this probability is easily calculated, and is equal to
\begin{eqnarray*} {\mathbf P}\bigl(A\mid\{\chi_k\}\bigr) &=&{ m!} \prod _{i=1}^m P_{k_0\to k_{0,i}} \prod _{j=1}^{q} mP_{k_{0,m+j}\to k_0} (1-P_{k_{0,m+j}\to k_0})^{m-1} \\ &&{} \times\prod_{k>k_0:k\notin\{k_{0,m+1},\ldots,k_{0,m+q}\}} (1-P_{k\to k_0} )^m \\
&=&{ m!}\prod_{i=1}^m P_{k_0\to k_{0,i}} \prod_{j=1}^{q} \frac{mP_{k_{0,m+j}\to k_0}}{1-P_{k_{0,m+j}\to k_0}} \prod_{k>k_0} (1-P_{k\to k_0} )^m, \end{eqnarray*}
where $P_{k\to k'}$ is the conditional probability defined in (\ref {Pktok}). By Lemma \ref{lemSk}, this implies that given any $\varepsilon'>0$, we can find $n_0<\infty$ such that for $n\geq n_0$, we have that with probability at least $1-\varepsilon'$ with respect to $P_0$,
\begin{eqnarray*} &&\bigl(1{-\varepsilon'}\bigr){\mathbf P}\bigl(A\mid\{\chi_k\} \bigr) \\
&&\quad\leq{ m!} \Biggl(\prod_{i=1}^m \psi_{k_{0,i}} \biggl(\frac{k_{0,i}}{k_0} \biggr)^\chi\prod _{j=m+1}^{m+q}m\psi_{k_0} \biggl( \frac{k_{0}}{k_{0,j}} \biggr)^\chi\Biggr) \exp\biggl( -m \psi_{k_0}\sum_{k>k_0} \biggl( \frac{{k_0}}{{k}} \biggr)^\chi\biggr) \\ &&\quad\leq\bigl(1+{\varepsilon'}\bigr){\mathbf P}\bigl(A\mid\{ \chi_k\}\bigr). \end{eqnarray*}
To estimate ${\mathbf P}(A\mid\chi_{k_0})=E_0[{\mathbf P}(A\mid\{\chi_k\})]$, we combine this bound with the deterministic upper bound
\begin{eqnarray*} \hspace*{-4pt}&&{\mathbf P}\bigl(A\mid\{\chi_k\}\bigr) \\
\hspace*{-4pt}&&\quad\leq{ m!} \prod_{i=1}^m P_{k_0\to k_{0,i}} \prod_{j=1}^{q}mP_{k_{0,m+j}\to k_0} \leq\frac1n (m\psi_{k_0})^q\prod _{i=1}^m \psi_{k_{0,i}} \\ \hspace*{-4pt}&&\quad\leq C'{m!} \Biggl(\prod _{i=1}^m \psi_{k_{0,i}} \biggl( \frac{k_{0,i}}{k_0} \biggr)^\chi\prod_{j=m+1}^{m+q}m \psi_{k_0} \biggl(\frac{k_{0}}{k_{0,j}} \biggr)^\chi\Biggr) \exp\biggl( -m\psi_{k_0}\sum_{k>k_0} \biggl( \frac{{k_0}}{{k}} \biggr)^\chi\biggr), \end{eqnarray*}
where $C'=(\delta')^{-(m+Q)}\sup_{n\geq1}e^{mn f_{\delta' n}(C)}$.\vspace*{2pt}
These bounds imply that given any $\varepsilon'>0$, we can find an $n_0<\infty$ such that for $n\geq n_0$, with probability at least $1-\varepsilon'/2$ with respect to $P_0$, we have
\[ \sqrt{1-\varepsilon'}\prod_{i=1}^m \frac{\psi_{k_{0,i}}}{E_0(\psi_{k_{0,i}})} \leq\frac{{\mathbf P}(A\mid\{\chi _k\})} { {\mathbf P}(A\mid\chi_{k_0})} \leq\sqrt{1+\varepsilon'} \prod_{i=1}^m \frac{\psi_{k_{0,i}}}{E_0(\psi _{k_{0,i}})}. \]
With the help of Lemma \ref{lemfk}, this shows that for $n$ large enough, with probability at least $1-\varepsilon'$, we have
\[ \bigl(1-\varepsilon'\bigr)\prod_{i=1}^m \frac{\chi_{k_{0,i}}}{E_0(\chi_{k_{0,i}})} \leq\frac{{\mathbf P}(A\mid\{\chi _k\})} { {\mathbf P}(A\mid\chi_{k_0})} \leq\bigl(1+\varepsilon' \bigr)\prod_{i=1}^m \frac{\chi_{k_{0,i}}}{E_0(\chi_{k_{0,i}})}. \]
Recalling (\ref{Bayes}) and the definition of the random variables $\{\hat\chi_k\}_{k\neq k_0}$, we therefore have shown that with probability at least $1-\varepsilon'$ with respect to~$P_0$,
\begin{equation} \label{rho-rho1} \bigl(1-\varepsilon'\bigr)\hat\rho(\cdot) \leq\rho( \cdot\mid A,\chi_{k_0}=\gamma_0) \leq\bigl(1+ \varepsilon'\bigr)\hat\rho(\cdot), \end{equation}
where $\hat\rho$ is the density of the random variables $\{\hat\chi_k\}_{k\neq k_0}$. (We denote the corresponding product measure by $\hat P$.)
To continue, we need to transform statements which happen with high probability with respect to $P_0$ into statements which happen with high probability with respect to $\hat P$. To this end, we consider the general case of two probability measures $\mu$ and $\nu$\vadjust{\goodbreak} such that $\nu$ is absolutely continuous with respect to $\mu$, $\nu=f\mu$ for some nonnegative function $f\in L_2(\mu)$. Let $\Omega_0$ be an event which happens with probability $1-\varepsilon'$ with respect to $\mu$. Then
\begin{equation} \label{abs-cont} \nu\bigl(\Omega_0^c\bigr)=\int f1_{\Omega_0^c}\leq\sqrt{E_\mu\bigl(f^2\bigr) \mu\bigl(\Omega_0^c\bigr)} =\sqrt{ \varepsilon' E_\mu\bigl(f^2\bigr)}, \end{equation}
implying that $\Omega_0$ happens with probability at least $1-\sqrt{\varepsilon' E_\mu(f^2)}$ with respect to~$\nu$.
Applying this bound to the probability measures $P_0$ and $\hat P$, we see that bound (\ref{rho-rho1}) holds with probability at least $1-\sqrt{2\varepsilon'}$ with respect to $\hat P$, provided $n$ (and hence $k_{0,1},\ldots,k_{0,m}$) is large enough. Using this fact, one then easily shows that
\[
\bigl\|\hat\rho-\rho(\cdot\mid A,\chi_{k_0}=\gamma_0)
\bigr\|_1 \leq2\varepsilon'+2\sqrt{2\varepsilon'}. \]
Choosing $\varepsilon'$ sufficiently small ($\varepsilon'=\varepsilon^2/32$ is small enough), we see that the right-hand side can be bounded by $\varepsilon$, which proves that $\chi^{A,\gamma_0}$ and $\{\hat\chi_k\}_{k\neq k_0}$ can be coupled in such a way that they are equal with probability at least $1-\varepsilon$, as required. \end{pf}
\subsection{Proof of convergence for the sequential model}
In this section we show that the sequential model converges to the P\'olya-point graph. Indeed, we prove slightly more, namely the following proposition:
\begin{Proposition}\label{propmain} Given $\varepsilon>0$ and $r<\infty$, there are constants $C,N<\infty$, $\delta>0$ and $n_0<\infty$ such that for $n\geq n_0$, the rooted sequential attachment graph $(G_n,k_0)$ and the P\'olya-point process $\{x_{\bar a}\}$ can be coupled in such a way that with probability at least $1-\varepsilon$, the following holds:
\begin{longlist}[(4)]
\item[(1)] \label{l1} $T_r(G_n)\cong T_r$ and $|T_r(G_n)|\leq N$;
\item[(2)]\label{l2} $|x_{\bar a}-S_{k_{\bar a}-1}|\leq \varepsilon$ for all $\bar a\in T_r$;
\item[(3)]\label{l3} $\mathbf{k}^{(r)}(G_n)$ is injective, and $k_{\bar a}\geq\delta n$ for all $\bar a\in T_r$;
\item[(4)]\label{l4} $\gamma_{\bar a}=\chi_{k_{\bar a}}\leq C$ for all $\bar a\in T_r$. \end{longlist}
\end{Proposition}
\begin{pf} For $r=1$, this follows from Lemmas \ref{thm2} and \ref{lemreg1}.
Assume by induction that the lemma holds for $r<\infty$, and fix $T_r$, $\mathbf k^{(r)}(G_n)$, $\{x_{\bar a}\}_{\bar a\in T_r}$, $\{\gamma_{\bar a}\}_{\bar a\in T_r}$ and $\{\chi_{k_{\bar a}}\}_{\bar a\in T_r}$ in such a way that (1)--(4) hold (an event which has probability at least $1-\varepsilon$ by our inductive assumption).
Consider a vertex $\bar a\in\partial T_{r}=T_r\setminus T_{r-1}$. We want to explore the neighborhood of $k_{\bar a}$ in $G_n$. To this end, we note that for all $\bar b\in T_{r-1}$, the neighborhood of $k_{\bar b}$ is already determined by our conditioning on $\mathbf k^{(r)}(G_n)$, implying in particular that none of the edges sent out from $k_{\bar a}$ can hit a vertex $k\in K_{r-1}$ (where $K_s$ denotes the image of $T_s(G_n)$ under $\mathbf k^{(s)}$), unless, of course, $\bar a$ is of type { $R$}, and $k$ happens to be the parent of $k_{\bar a}$---in which case the edge between $k$ and $k_{\bar a}$ is already present. To determine the children of type { $L$} of the vertex $k_{\bar a}$, we therefore have to condition on not hitting the set $K_{r-1}$.\vadjust{\goodbreak} But apart from this, the process of determining the children of $k_{\bar a}$
is exactly the same as that of determining the children of the root $k_0$. Since $|K_{r}|\leq N$, $k\geq\delta n$ for all $k\in K_{r}$, and $\chi_{k}\leq C$ for all $k\in K_{r}$, we have that $\sum_{k\in K_r} \varphi_k\leq C'/n$ for some
$C'<\infty$, implying that conditioning on $k\notin K_{r-1}\subset K_r$ has only a negligible influence on the distribution of the children of $k_{\bar a}$. We may therefore proceed as in the proof of Lemma \ref{thm2} to obtain a coupling between a sequence of i.i.d. random variables $x_{\bar a,i}$ distributed uniformly in $[0,x_{\bar a}]$ and the children $k_{\bar a,i}$ of $k_{\bar a}$ that are of type ${ L}$. As before, we obtain that for $n$ large enough, with probability at least $1-\varepsilon$, we have $|S_{k_{\bar a,i}-1}-x_{\bar a,i}|\leq\varepsilon$.
Repeating this process for all $k_{\bar a}\in\partial K_{r}=K_r\setminus K_{r-1}$, we obtain a set of vertices ${ L}_{r+1}$ consisting of all children of type ${ L}$ with parents in $\partial K_{r}$. It is easy to see that with probability tending to one as $n\to\infty$, the set ${ L}_{r+1}$ has no intersection with $K_r$, so we will assume this for the rest of this proof.
Next we continue with the vertices of type $R$. Assume that we have already determined all children of type $R$ for a certain subset $U_r\subset\partial K_{r}$, and denote the set of children obtained so far by $R_{r+1}$. We decompose this set as $R_{r+1}=\bigcup_{i=1}^m R_{r+1}^{(i)}$, where $R_{r+1}^{(i)}=\{k\in R_{r+1}\dvtx X_{k,i}\in U_r\}$.
Consider a vertex $\bar a\in\partial K_r\setminus U_r$. Conditioning on the graph explored so far is again not difficult, and now amounts to two conditions:
\begin{longlist}[(2)]
\item[(1)] $X_{k,i}\neq k_{\bar a}$ if $k\in K_r\cup R_{r+1}^{(i)}$, since all the edges sent out from this set have already been determined.
\item[(2)] For $k\notin K_r\cup R_{r+1}^{(i)}$, the probability that $k_{\bar a}$ receives the $i$th edge from $k$ is different from the probability given in (\ref{Pktok}), since the random variable $X_{k,i}$ has been probed before: we know that $X_{k,i}\notin K_{r-1}$, since otherwise $k$ would have sent out an edge to a vertex in $K_{r-1}$, which would mean that $k$ is a child of type ${ R}$ in $K_r$. We also know that $X_{k,i}\notin U_{r}$, since otherwise $k\in { R}_{r+1}^{(i)}$. Instead of (\ref{Pktok}), we therefore have to use the modified probability
\[ P_{k\to k_{\bar a}}= \varphi_{k_{\bar a}}\frac1{\tilde S_{k-1}}, \]
where
\[ \tilde S_{k-1} =\mathop{\sum_{k'>k_{\bar a}:}}_{k'\notin K_{r-1}\cup U_r} \varphi_{k'}. \]
\end{longlist}
Since $\tilde S_{k-1}\leq S_{k-1}\leq\tilde S_{k-1}+C'/n$ by our inductive assumption, we can again refer to Lemma \ref{lemSk} to approximate $P_{k\to k_{\bar a}}$ by
\[ \hat P_{k\to k_{\bar a}} =\frac1{nm}\frac{\chi_{k_{\bar a}}}{2(1+u)}\frac n{k_{\bar a}} \biggl(\frac{k_{\bar a}}k \biggr)^\chi. \]
From here on, the proof of our inductive claim is completely analogous to the proof of Lemma \ref{thm2}. We leave it to the reader to fill in the (straightforward but slightly tedious) details. \end{pf}
\subsection{Estimates for the P\'olya urn representation}\label{secpolest}
In this section we complete the work started in Section \ref {secpolya} by proving Lemmas \ref{lemSk} and \ref{lemfk}.
\begin{pf*}{Proof of Lemma \ref{lemSk}} Fix $\varepsilon$, and recall that
\[ \chi=\frac{1+2u}{2+2u}\in\biggl[\frac12,1 \biggr). \]
Writing $S_k$ as
\[ S_k=\prod_{j=k+1}^n(1- \psi_j) =\exp\Biggl( \sum_{j=k+1}^n \log(1-\psi_j) \Biggr), \]
we use the fact that if $0<x<1$, then $x\leq-\log(1-x)\leq x +x^2/(1-x)$ to bound
\[ \Biggl\vert E \Biggl[\sum_{j=k+1}^n \log(1-\psi_j) \Biggr]+\sum_{j=k+1}^n E[\psi_j]\Biggr\vert\leq\sum_{j=k+1}^n E \biggl[\frac{\psi_j^2}{1-\psi_j} \biggr]. \]
On the other hand, by Kolmogorov's inequality and the fact that
\[ \operatorname{Var}\bigl(\log(1-\psi_j)\bigr)\leq E\bigl[\bigl(\log(1- \psi_j)\bigr)^2\bigr]\leq E\bigl[\psi_j^2 (1-\psi_j)^{-2}\bigr], \]
we have
\begin{eqnarray*} && {\mathbf P}\Biggl(\max_{K\leq k\leq n} \Biggl\vert\sum _{j=k+1}^n\log(1-\psi_j)-E \Biggl[\sum _{j=k+1}^n\log(1-\psi_j) \Biggr]\Biggr\vert\geq\varepsilon\Biggr) \\ &&\qquad\leq\frac1{\varepsilon^2}\sum_{j=K+1}^n E \biggl[\frac{\psi _j^2}{(1-\psi_j)^2} \biggr]. \end{eqnarray*}
We will use that for any $\beta_{a,b}$ distributed random variable $\psi$, we have
\[ E[\psi]=\frac a{a+b},\qquad E \biggl[\frac{\psi^2}{1-\psi} \biggr] =\frac {a(a+1)}{(a+b)(b-1)} \] and
\[ E \biggl[\frac{\psi^2}{(1-\psi)^2} \biggr] =\frac {a(a+1)}{(b-2)(b-1)}. \]
Using these relations for $a=m+2mu$ and $b=(2j-3)m+2mu(j-1)$, we get
\begin{eqnarray} \label{eqexpepsi-0} E(\psi_j)&=& \frac{m+2mu}{(2j-2)m+2jmu}= \frac{\chi}{j}+O \biggl(\frac1{j^2} \biggr), \\
\label{beta-moments} E\bigl[\psi_j^2\bigr]&\leq& E \biggl[\frac{\psi_j^2}{1-\psi_j} \biggr]=O \biggl(\frac1{j^2} \biggr) \quad\mbox{and}\quad E \biggl[\frac{\psi_j^2}{(1-\psi_j)^2} \biggr]=O \biggl (\frac1{j^2} \biggr). \end{eqnarray}
Putting these bounds together, and observing that $\sum_{j=k+1}^n\frac1j=\log(n/k)+O(k^{-1})$, we get that there exists a constant $K(\varepsilon)$ not depending on $n$ such that with probability at least $1-\varepsilon$, we have that
\[ \biggl(\frac{k}{n} \biggr)^\chi e^{-\varepsilon}<S_k< \biggl(\frac{k}{n} \biggr)^\chi e^\varepsilon\qquad{\mbox{for all } K(\varepsilon)\leq k\leq n.} \]
For $k<K(\varepsilon)$, we bound $S_k\leq S_K$ to conclude that with probability at least $1-\varepsilon$,
\[ \biggl\vert S_k- \biggl(\frac kn \biggr)^\chi\biggr \vert=O \biggl( \biggl(\frac Kn \biggr)^\chi\biggr). \]
The lemma now follows. \end{pf*}
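The estimate $S_k\approx(k/n)^\chi$ is easy to check by simulation. The following Python sketch is a numerical illustration only (the parameters $m=2$, $u=1/2$ and the sample size are arbitrary choices of ours, not part of the proof): it samples the product $S_k=\prod_{j=k+1}^n(1-\psi_j)$ with the Beta parameters used above and compares the result with $(k/n)^\chi$.

```python
import random

def simulate_S(n, k, m, u, rng):
    """One sample of S_k = prod_{j=k+1}^n (1 - psi_j), where
    psi_j ~ Beta(m + 2mu, (2j-3)m + 2mu(j-1))."""
    a = m + 2 * m * u
    s = 1.0
    for j in range(k + 1, n + 1):
        b = (2 * j - 3) * m + 2 * m * u * (j - 1)
        s *= 1.0 - rng.betavariate(a, b)
    return s

rng = random.Random(0)
n, k, m, u = 2000, 500, 2, 0.5
chi = (1 + 2 * u) / (2 + 2 * u)        # chi = 2/3 for these parameters
samples = [simulate_S(n, k, m, u, rng) for _ in range(50)]
avg = sum(samples) / len(samples)
theory = (k / n) ** chi
print(avg, theory)
```

Because $\log S_k$ is a sum with variance $O(1/k)$, even a single sample already concentrates sharply around the predicted value.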
\begin{pf*}{Proof of Lemma \ref{lemfk}} (i) Let $a=m+2mu$, so that $\chi_k\sim\Gamma(a,1)$. Then
\[ {\mathbf P}\bigl(\chi_k\geq\log^2k\bigr)\leq E \bigl[e^{\chi_k/2}\bigr] e^{- (\log^2 k)/2} =2^a k^{-(\log k)/2}. \]
Since the right-hand side is summable, this implies the first statement of the lemma through the Borel--Cantelli lemma.
(ii) Let $b_k=(2k-3)m+2mu(k-1)-1$, and let $\chi_k'=\chi_k/b_k$. Then $f_k$ can be defined by
\[ {\mathbf P}\bigl(\psi_k\leq f_k(x)\bigr)={\mathbf P}\bigl( \chi_k'\leq x/b_k\bigr). \]
In order to prove the second statement of the lemma, it is clearly enough to prove that for all sufficiently large $k$, we have
\[ (1-\varepsilon)\frac x{b_k} \leq f_k(x) \leq\frac x{b_k} \qquad\mbox{for } x\leq\log^2k, \]
which in turn is equivalent to showing that
\begin{equation} \label{proof-equ} {\mathbf P}\bigl(\psi_k\leq(1-\varepsilon)x \bigr)\leq {\mathbf P}\bigl(\chi_k'\leq x \bigr)
\leq{\mathbf P}(\psi_k\leq x ) \qquad\mbox{for } x\leq \frac{\log^2k}{b_k} \end{equation}
provided $k$ is large enough.
We start by proving that
\[ \Delta(x):={\mathbf P}(\psi_k\leq x)-{\mathbf P}\bigl(\chi_k' \leq x\bigr)\geq0. \]
To this end, we rewrite
\[ {\mathbf P}(\psi_k\leq x) =\frac1{Z_\beta}\int _0^x y^{a-1}(1-y)^b\,dy \]
and
\[ {\mathbf P}\bigl(\chi_k'\leq\lambda\bigr) = \frac1{Z_\gamma}\int_0^\lambda y^{a-1}e^{-by}\,dy, \]
where $a=m+2mu$, $b=b_k$ and $Z_\gamma=\int_0^\infty y^{a-1}e^{-by}\,dy$ and $Z_\beta=\int_0^1y^{a-1}(1-y)^b\,dy$ are the appropriate normalization factors. For $x\leq1$, we express $\Delta(x)$ as
\[ \Delta(x)=\frac1{Z_\gamma}\int_0^x dyy^{a-1}e^{-b y} \Biggl(e^\delta\exp\Biggl(-b\sum _{k=2}^\infty\frac{y^k}k \Biggr)-1 \Biggr), \]
where $e^\delta=Z_\gamma/Z_\beta$. Note that $\delta>0$ by the fact that $(1-x)\leq e^{-x}$. It is also easy to see that $\delta\to0$ as $k\to\infty$; indeed, we have $\delta=O(b^{-1})=O(k^{-1})$.
Consider the derivative
\[ \frac{d\Delta(x)}{dx} =\frac{x^{a-1}e^{-b x}}{Z_\gamma} \Biggl(e^\delta \exp\Biggl(-b \sum_{k=2}^\infty\frac{x^k}k \Biggr)-1 \Biggr), \]
and let $x_0$ be the unique zero of the factor in parentheses, that is, let $x_0\in(0,1)$ be the solution of the equation
\[ \delta=b\sum_{k=2}^\infty \frac{x_0^k}k. \]
Then $\Delta(x)$ is monotone increasing for $0<x<x_0$ and monotone decreasing for all $x>x_0$. Together with the observation that $\Delta(x)>0$ for all sufficiently small $x$, and $\Delta(x)\to0$ as $x\to\infty$, we conclude that $\Delta(x)\geq0$ for $0\leq x<\infty$. This proves that ${\mathbf P}(\chi_k'\leq x)\leq{\mathbf P}(\psi_k\leq x)$ for all $x\geq 0$.
To prove the lower bound in (\ref{proof-equ}), we will prove that
\[ \tilde\Delta(x)={\mathbf P}\bigl(\chi_k'\leq x \bigr) - {\mathbf P}\bigl(\psi_k\leq(1-\varepsilon)x \bigr)\geq0 \qquad\mbox{if } x\leq\frac \varepsilon4\leq\frac1{8}. \]
We decompose the range of $x$ into two regions, depending on whether $x\geq\frac{4a}{b\varepsilon}$ or $x\leq \frac{4a}{\varepsilon b}$.
In the first region, we express $\tilde\Delta(x)$ as
\begin{eqnarray*} \tilde\Delta(x) &=& {\mathbf P}\bigl(\psi_k\geq(1-\varepsilon)x \bigr)-{\mathbf P} \bigl(\chi_k'\geq x \bigr) \\ &=&\frac{e^\delta}{Z_\gamma}\int_{x(1-\varepsilon)}^1 dyy^{a-1}(1-y)^b -\frac1{Z_\gamma}\int _x^\infty dyy^{a-1}e^{-b y}. \end{eqnarray*}
We then bound
\begin{eqnarray*} \int_{x}^\infty dy(2y)^{a-1}e^{-b y} &\leq&\int_x^{2x} dyy^{a-1}e^{-b y} +\int_{2x}^\infty dyy^{a-1}e^{-b y} \\ &\leq&\int_x^{2x} dyy^{a-1}e^{-b y} +2^{a-1}e^{-bx}\int_{x}^\infty dyy^{a-1}e^{-b y} \end{eqnarray*}
proving that
\begin{eqnarray} \label{Csbd1} \int_{x}^\infty dy(2y)^{a-1}e^{-b y} &\leq&\bigl(1-2^{a-1}e^{-bx}\bigr)^{-1}\int _x^{2x} dyy^{a-1}e^{-b y} \nonumber\\[-8pt]\\[-8pt] &\leq&2 \int_x^{2x} dyy^{a-1}e^{-b y},\nonumber \end{eqnarray}
where we have used $bx\geq a\log2$ in the last step.
On the other hand, using that $(1-y)^b\geq e^{-by(1+x)}$ if $y\leq2x\leq1/2$, we have that
\begin{eqnarray*} {e^\delta}\int_{x(1-\varepsilon)}^1 dyy^{a-1}(1-y)^b &\geq& \int_{x(1-\varepsilon)}^{2x(1-\varepsilon)} dyy^{a-1}e^{-by(1+x)} \\ &=& \int_{x}^{2x} dyy^{a-1}(1- \varepsilon)^ae^{-by(1+x)(1-\varepsilon)} \\ &\geq& (1-\varepsilon)^ae^{-2bx^2}e^{\varepsilon bx} \int _x^{2x}dyy^{a-1}e^{-b y} \\ &\geq& 2\int_x^{2x}dyy^{a-1}e^{-b y}. \end{eqnarray*}
Combined with (\ref{Csbd1}), this proves that $\tilde\Delta(x)\geq0$ if $\varepsilon b x\geq4a$.
For $\varepsilon b x\leq4a$, we bound
\begin{eqnarray*} \tilde\Delta(x) &=& \frac1{Z_\gamma} \biggl(\int_0^x dyy^{a-1}e^{-b y} -e^\delta\int_0^{x(1-\varepsilon)}dyy^{a-1}(1-y)^b \biggr) \\ &\geq& \frac1{Z_\gamma} \biggl(\int_0^x dyy^{a-1}e^{-b y} -e^\delta\int_0^{x(1-\varepsilon)}dyy^{a-1}e^{-by} \biggr) \\ &=& \frac1{Z_\gamma} \biggl( \int_{(1-\varepsilon)x}^x dyy^{a-1}e^{-b y} -\bigl(e^\delta-1\bigr)\int _0^{x(1-\varepsilon)}dyy^{a-1}e^{-by} \biggr) \\ &\geq& \frac1{Z_\gamma} \bigl(\varepsilon x \bigl[(1-\varepsilon)x \bigr]^{a-1}e^{-bx} -\bigl(e^\delta-1 \bigr)x^a \bigr) \\ &\geq& \frac{x^a}{Z_\gamma} \bigl(\varepsilon2^{1-a}e^{-4a/\varepsilon} - \bigl(e^\delta-1\bigr) \bigr). \end{eqnarray*}
Since $\delta\to0$ as $b\to\infty$, we see that the right-hand side becomes positive if $k\geq K$ for some $K<\infty$ that depends on $a$ and $\varepsilon$ (it grows exponentially in $a/\varepsilon$). \end{pf*}
\section{Approximating coupling for the independent and the conditional models}
In this section we prove that the sequential and the independent model have the same\vadjust{\goodbreak} weak limit. To this end, we construct a coupling between the two models such that with probability tending to $1$, the balls around a randomly chosen vertex in $\{1,\ldots,n\}$ are identical in both models. This will imply that both models have the same weak local limit.
We only give full details for the coupling between the independent and the sequential model. The approximating coupling between the conditional and the sequential model is very similar, and the proof that it works is identical.
We construct the coupling inductively as follows: let $V=1,2,\ldots$ be the vertices of the preferential attachment graph. For $1\neq n\in V$ and $i=1,\ldots,m$ let $e^i_n<n$ and $f^i_n<n$ be the $i$th vertex that $n$ is connected to in, respectively, the sequential and the independent models. We use the symbol $\mathbf e_n$ to denote the vector
$\{e^i_n\}_{1\leq i\leq m}$, and the symbol $\mathbf f_n$ to denote the vector $\{f^i_n\}_{i=1}^m$.
By construction, $e^i_2=f^i_2=1$ for all $i$. Once we know $\mathbf e_l$ and $\mathbf f_l$ for every $l<n$, we determine $\mathbf e_n$ and $\mathbf f_n$ as follows: let $D_1$ be the distribution of $\mathbf e_n$, based on the sequential rule and conditioned on $\{\mathbf e_l\}_{l<n}$, and let $D_2$ be the distribution of $\mathbf f_n$ based on the independent rule and conditioned on $\{\mathbf f_l\}_{l<n}$. Let $D$ be an (arbitrarily chosen) coupling of $D_1$ and $D_2$ that minimizes the total variation distance. Then we choose $\mathbf e_n$ and $\mathbf f_n$ according to $D$.
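For finite distributions, a coupling minimizing the total variation distance (a maximal coupling) can be written down explicitly: the joint law puts the common mass $\min(D_1,D_2)$ on the diagonal and pairs the residual masses independently. The Python sketch below illustrates this on a toy example of our own (the dictionaries $p,q$ stand in for $D_1,D_2$; this is an illustration, not the coupling used in the proof verbatim):

```python
def maximal_coupling(p, q):
    """Joint law of (X, Y) with marginals p and q such that
    P(X != Y) equals the total variation distance between p and q.
    p, q: dicts mapping outcomes to probabilities."""
    keys = set(p) | set(q)
    overlap = {k: min(p.get(k, 0.0), q.get(k, 0.0)) for k in keys}
    alpha = sum(overlap.values())               # P(X == Y)
    joint = {(k, k): overlap[k] for k in keys if overlap[k] > 0}
    if alpha < 1.0:
        # residual masses, paired independently
        rp = {k: p.get(k, 0.0) - overlap[k] for k in keys}
        rq = {k: q.get(k, 0.0) - overlap[k] for k in keys}
        z = 1.0 - alpha
        for i, pi in rp.items():
            for j, qj in rq.items():
                if pi > 0 and qj > 0:
                    joint[(i, j)] = joint.get((i, j), 0.0) + pi * qj / z
    return joint

p = {1: 0.5, 2: 0.3, 3: 0.2}
q = {1: 0.4, 2: 0.3, 4: 0.3}
joint = maximal_coupling(p, q)
tv = 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))
mismatch = sum(w for (i, j), w in joint.items() if i != j)
```

Under such a coupling, the probability that $\mathbf e_n\neq\mathbf f_n$ is exactly the total variation distance between the two conditional distributions, which is the quantity bounded in Lemma~\ref{claimcoupgood}.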
Our goal is to prove the following proposition:
\begin{Proposition} \label{propcoupling} Let $(G_n)$ and $(G_n')$ be the sequences of preferential attachment graphs in the sequential and the independent model, respectively, coupled as above. Let $\varepsilon>0$ and let $r$ be an arbitrary positive integer. Then there exists an integer $n_0$ such that for $n\geq n_0$, with probability at least $1-\varepsilon$, a uniformly chosen random vertex $k_0\in\{1,\ldots,n\}$ has the same $r$-neighborhood in $G_n$ and $G_n'$. \end{Proposition}
The proof of the proposition relies on the following two lemmas, to be proven in Sections \ref{seccoupgood} and \ref{seclem-dk-proof}, respectively.
\begin{Lemma}\label{claimcoupgood} Consider the coupling defined above, and fix $k\geq2$. For $n>k$, let $A_n=A_n^{(k)}$ be the event that there exists an $i\in\{1,\ldots,m\}$ such that $e^i_n=k\neq f^i_n$ or $e^i_n\neq k=f^i_n$. Then
\begin{equation}
\label{eq1} {\mathbf P}\Biggl(A_n\biggm|\bigcap_{h=k+1}^{n-1}A_h^c,d_{n-1}(k) \Biggr) =O \biggl(\frac{d_{n-1}(k)}{n^2} \biggr). \end{equation}
\end{Lemma}
Note that under the conditioning, $d_{n-1}(k)$ is the same in both models.
\begin{Lemma}\label{lemdk} For the sequential preferential attachment model, for every $n$ and $k$ such that $n>k$, let $d_n(k)$ be the degree of vertex $k$ when the graph contains $n$ vertices. Then
\begin{equation} \label{eqdexp} E\bigl[d_n(k)\bigr]= m \biggl[1+\frac{\chi}{1-\chi} \biggl( \biggl(\frac{n}{k} \biggr)^{1-\chi}-1 \biggr) \biggr] +O \biggl(\frac{n^{1-\chi}}{k^{2-\chi}} \biggr), \end{equation}
where the constant implicit in the $O$-symbol depends on $m$ and $u$.\vadjust{\goodbreak} \end{Lemma}
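Lemma~\ref{lemdk} predicts $E[d_n(k)]\approx m(n/k)^{1-\chi}$ when $n/k$ is large. The Python sketch below checks this by direct simulation of the sequential model in the special case $u=0$ (so that $\chi=1/2$); the parameters, the endpoint-list sampling trick and the rejection step are our own illustrative choices, not part of the proof:

```python
import random

def sequential_pa(n, m, rng):
    """Sequential preferential attachment with u = 0: vertex t sends m
    edges, one at a time, each to an older vertex chosen with
    probability proportional to its current degree."""
    deg = [0] * (n + 1)
    ends = []                      # one entry per edge endpoint
    # vertex 2 necessarily connects all m edges to vertex 1
    deg[1] += m; deg[2] += m
    ends += [1, 2] * m
    for t in range(3, n + 1):
        for _ in range(m):
            # a uniform entry of `ends` is a vertex chosen proportionally
            # to its degree; reject t itself to enforce e^i_t < t
            while True:
                j = rng.choice(ends)
                if j != t:
                    break
            deg[j] += 1; deg[t] += 1
            ends += [j, t]
    return deg

rng = random.Random(1)
n, m, k = 2000, 2, 20
runs = 60
avg = sum(sequential_pa(n, m, rng)[k] for _ in range(runs)) / runs
theory = m * (n / k) ** 0.5        # chi = 1/2 when u = 0
print(avg, theory)
```

The averaged degree of an early vertex should be of the right order; individual runs fluctuate by a constant factor, consistent with the martingale limit of $d_n(k)(k/n)^{1-\chi}$.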
\subsection{\texorpdfstring{Proof of Proposition \protect\ref{propcoupling}} {Proof of Proposition 4.1}}
Fix $\varepsilon$ and $r$, let $B_r(k)$ and $B_r(k)'$ be the balls of radius $r$ about $k$ in $G_n$ and $G_n'$, respectively, and let $B$ be the set of vertices $k\in\{1,\ldots,n\}$ for which $B_r(k)\neq B_r(k)'$. Then the probability that a uniformly chosen vertex in $\{1,\ldots,n\}$ lies in $B$ is just $1/n$ times the expected size of $B$. We thus have to show that
\[
E\bigl[|B|\bigr]\leq\varepsilon n. \]
In a preliminary step note that $B_r(k)=B_r(k)'$ unless there exists a vertex $k'\in B_r(k)$ such that $e^i_{n'}=k'\neq f^i_{n'}$ or $e^i_{n'}\neq k'=f^i_{n'}$ for some $i=1,\ldots, m$ and some $n'>k'$.
To prove this fact, let us consider the event $A^{(k)}=\bigcup_{n>k} A_n^{(k)}$. It is easy to see that this event is the event that at least one of the edges received by $k$ is different in $(G_n)$ and $(G_n')$. Using this fact, one easily shows that the ball of radius $1$ around a vertex $k$ must be identical in $G_n$ and $G_n'$ unless $A^{(k')}$ happens for at least one vertex $k'$ in the $1$-neighborhood of $k$ in $G_n$. By induction, this implies that $B_r(k)=B_r(k)'$ unless there exists a vertex $k'\in B_r(k)$ such that the event $A^{(k')}$ happens, which is what we claimed in the previous paragraph.
Next we note that by Proposition \ref{propmain}, there exist $\delta>0$ and $N<\infty$ such that with probability at least $1-\varepsilon/2$, a random vertex $k\in\{1,\ldots,n\}$ obeys the following two conditions:
\begin{longlist}[(2)]
\item[(1)]\label{A1} the ball of radius $2r$ around $k$ in the sequential graph $G_n$ contains no more than $N$ vertices;
\item[(2)]\label{A2} the oldest vertex (the vertex with the smallest index) in this ball is no older than $\delta n$. \end{longlist}
If we denote the set of vertices satisfying these two conditions by $W$, we thus have that
\[
E\bigl[|W|\bigr]\geq\biggl(1-\frac\varepsilon2\biggr)n. \]
As a consequence, it will be enough to show that
\[
E\bigl[|W\cap B|\bigr]\leq\frac\varepsilon2 n. \]
If $k\in W\cap B$, there must be a vertex $k'\in B_r(k)$ such that the event $A^{(k')}$ happens. But $k'\in B_r(k)$ if and only if $k\in B_r(k')$, and since $B_r(k')\subset B_{2r}(k)$, we must further have that $|B_r(k')|\leq N$ and $k'\geq\delta n$. As a consequence,
\begin{eqnarray*}
|W\cap B| &=& \sum_{k\in W}{\mathbf{I}}(k\in B) \leq\sum _{k\in W}\sum_{k'\in B_r(k)}{\mathbf{I}} \bigl(A^{(k')}\bigr) \\ &=&\sum_{k'}{\mathbf{I}}\bigl(A^{(k')}\bigr)\sum _{k\in B_r(k')}{\mathbf{I}}(k\in W) \\ &\leq& N\sum_{k'= \delta n}^{{n}}{\mathbf{I}} \bigl(A^{(k')}\bigr), \end{eqnarray*}
where we used the symbol ${\mathbf{I}}(A)$ to denote the indicator function of the event~$A$.
Finally by Lemmas \ref{claimcoupgood} and \ref{lemdk},
\[ P\bigl(A^{(k)}\bigr) \leq O(1)\sum_{n>k} \frac1{n^2} \biggl(\frac n k \biggr)^{1-\chi} = O \biggl(\frac1k \biggr). \]
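The displayed bound can be illustrated numerically. The sketch below (illustrative only) takes $\chi=1/2$ — the value corresponding to $\alpha=0$ — and checks that the truncated sum $\sum_{n>k} n^{-2}(n/k)^{1-\chi}$ is of order $1/k$, with constant close to $2$ for this choice of $\chi$.

```python
chi, k, N = 0.5, 100, 10**6   # chi = 1/2 corresponds to alpha = 0
# truncated version of sum_{n > k} n^{-2} (n/k)^{1 - chi}
total = sum(n**-2 * (n / k) ** (1 - chi) for n in range(k + 1, N + 1))
print(k * total)              # the O(1/k) bound says this stays bounded
```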
As a consequence we can find a constant $C$ such that
\[
E\bigl[|W\cap B|\bigr] \leq N\sum_{k'= n\delta}^{{n}}\frac C{k'} \leq CN/{\delta}. \]
For $n$ large enough, the right-hand side is smaller than $\frac\varepsilon2 n$, which is the bound we had to establish.
\subsection{\texorpdfstring{Proof of Lemma \protect\ref{claimcoupgood}} {Proof of Lemma 4.2}} \label{seccoupgood}
Let us use the shorthand $d$ for the degree $d_{n-1}(k)$.
In the independent model the probability of having $r$ connections to $k$ and $h=m-r$ connections to other vertices in $\{1,\ldots,n-1\}$ is
\[ \pmatrix{m \cr r} p^r (1-p)^h \qquad\mbox{with } p=\frac \alpha{n-1}+\frac{(1-\alpha)d}{2m(n-2)}, \]
while in the sequential model it is
\[ \pmatrix{m \cr r} \prod_{l=0}^{r-1}p_l \prod_{l=r}^{m-1}(1-p_l) \]
with
\[ p_l=p_l(r) =\cases{\displaystyle \frac{2m \alpha+ (1-\alpha)(d+l)}{2m(n-2)+2m\alpha+(1-\alpha)l}, &\quad if $l<r$, \vspace*{2pt}\cr \displaystyle \frac{2m \alpha+ (1-\alpha)(d+r)}{2m(n-2)+2m\alpha+(1-\alpha)l}, &\quad if $l\geq r$.} \]
[Here we used exchangeability and (\ref{eqPntok}).]
As a consequence, the probability in (\ref{eq1}) is bounded by a constant times
\begin{equation} \label{eqrandj} \max_{r=0,\ldots,m} \Biggl\vert\Biggl[ \prod _{l=0}^{r-1}p_l\prod _{l=r}^{m-1}(1-p_l) \Biggr] - p^r (1-p)^h \Biggr\vert. \end{equation}
Telescoping the difference, we bound (\ref{eqrandj}) by
\begin{eqnarray*}
&&\max_{r=0,\ldots,m} \Biggl( \sum_{l=0}^{r-1}p^l|p_l-p| \prod_{l'=l+1}^{r-1}p_{l'} \\
&&\hspace*{17pt}\qquad{}+ \sum _{l=r}^{m-1}p^r(1-p)^{l-r}\bigl|(1-p)-(1-p_l)\bigr| \prod_{l'=l+1}^{m-1}(1-p_{l'}) \Biggr) \\
&&\qquad\leq\max_{r=0,\ldots,m} \Biggl( {\tilde p}^{r-1}\sum _{l=0}^{r-1}|p_l-p| +
p^r\sum_{l=r}^{m-1}|p-p_l| \Biggr), \end{eqnarray*}
where ${\tilde p}=\max\{p,p_1,\ldots,p_m\}=O(d/n)$. We now distinguish three cases:
\begin{longlist}[(iii)] \item[(i)] if $r\geq2$, we use the fact that $p-p_l=O(1/n)$ to get a bound of order $ O({\tilde p}/n)=O(d/n^2)$ for both sums;
\item[(ii)] if $r=1$, we use the fact that the first sum is equal to
$|p_0-p|=O(1/n^2)$, while the second can be bounded by $ O({\tilde p}/n)=O(d/n^2)$ as before;
\item[(iii)] if $r=0$, we use the fact that
\begin{eqnarray*} p_l(0)&=&\frac{2m \alpha+ (1-\alpha)d}{2m(n-2)+2m\alpha+(1-\alpha)l} \\ &=&\frac{2m \alpha}{2m(n-1)} \bigl(1+O\bigl(n^{-1}\bigr) \bigr) + \frac{ (1-\alpha)d}{2m(n-2)} \bigl(1+O\bigl(n^{-1}\bigr) \bigr) \\ &=&p+O\bigl(d/n^2\bigr) \end{eqnarray*}
to show that for $r=0$, all terms in the sum
$\sum_{l=0}^{m-1}|p- p_l|$ are of order $O(d/n^2)$. \end{longlist}
This completes the proof of the lemma.
\subsection{\texorpdfstring{Proof of Lemma \protect\ref{lemdk}}{Proof of Lemma 4.3}} \label{seclem-dk-proof}
As before, we use $\varphi_k^{(n)}$ for
\[ \varphi_k^{(n)}=\psi_k\prod _{i=k+1}^{n}(1-\psi_i). \]
By construction,
\begin{equation} \label{d-alt-expression} d_n(k)=m+\sum_{t=(k-1)m+1}^{(n-1)m} {\mathcal U}_t, \end{equation}
where\vspace*{1pt} the variables $\{{\mathcal U}_t\}$ are defined as follows: let $\{\hat{U}_t\}_{t=1}^\infty$ be i.i.d. $U[0,1]$ variables, independent of the $\varphi_k$'s. Then $ {\mathcal U}_t={\mathbf1}_{\hat{U}_t<\varphi_k^{(\lceil t/m\rceil)}}. $ Note that conditioned on $\{\varphi_k^{(j )}\}_{j\geq k}$, $\{{\mathcal U}_t\}$'s are independent, each being Bernoulli $\varphi_k^{(\lceil t/m\rceil)}$.
Let ${\mathcal F}$ be the $\sigma$-algebra generated by $\{\psi_h\}_{h=1}^\infty$. Then
\begin{equation} \label{eqhatd} E\bigl(d_n(k)\mid{\mathcal F}\bigr) = m+m\sum _{\ell= k}^{n-1} \varphi_k^{(\ell)}. \end{equation}
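Identity (\ref{eqhatd}) is just linearity of expectation applied to the Bernoulli representation (\ref{d-alt-expression}), and it can be checked numerically. The sketch below uses arbitrary illustrative values for the $\psi_j$'s (not their actual distribution) — this is harmless since the identity holds conditionally on ${\mathcal F}$ for any fixed sequence.

```python
import random

m, k, n = 3, 2, 40
psi = {j: 0.5 / j for j in range(1, n + 1)}   # illustrative values only

def phi(ell):
    # phi_k^(ell) = psi_k * prod_{i=k+1}^{ell} (1 - psi_i)
    p = psi[k]
    for i in range(k + 1, ell + 1):
        p *= 1.0 - psi[i]
    return p

phis = {ell: phi(ell) for ell in range(k, n)}
rng = random.Random(1)

def sample_degree():
    # d_n(k) = m + sum of Bernoulli(phi_k^(ceil(t/m))) variables
    d = m
    for t in range((k - 1) * m + 1, (n - 1) * m + 1):
        if rng.random() < phis[-(-t // m)]:   # -(-t // m) is ceil(t / m)
            d += 1
    return d

trials = 20000
mc = sum(sample_degree() for _ in range(trials)) / trials
exact = m + m * sum(phis.values())            # identity (eqhatd)
print(mc, exact)
```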
By (\ref{eqexpepsi-0}),
\begin{equation} \label{eqexpepsi} \frac\chi k\leq E(\psi_k) \leq\frac\chi{k-1}, \end{equation}
which in turn implies that
\begin{eqnarray*} E\bigl[\varphi_k^{(\ell)}\bigr] &=& E[\psi_k]\prod _{i=k+1}^{\ell} \bigl(1-E[\psi_i ] \bigr) \leq\frac{\chi}{k-1}\prod_{i=k+1}^\ell \biggl(1-\frac\chi i \biggr) \\ &\leq& \frac{\chi}{k-1}\exp\Biggl(-\chi\sum_{i=k+1}^\ell \frac1i \Biggr) \leq\frac{\chi}{k-1}\exp\biggl(-\chi\log\biggl( \frac{\ell +1}{k+1} \biggr) \biggr) \\ &=& \frac{\chi}{k-1} \biggl(\frac{k+1}{\ell+1} \biggr)^\chi, \end{eqnarray*}
implying that
\begin{eqnarray} \label{eqexp-of-d-up} E\bigl[d_n(k)\bigr] &\leq& m+m\chi \frac{(k+1)^\chi}{k-1} \sum_{\ell=k}^{n-1} \biggl( \frac1{\ell+1} \biggr)^\chi\nonumber\\ &\leq& m+m\chi\frac{(k+1)^\chi}{k-1} \int _{k-1}^{n-1} dx \biggl(\frac1{x+1} \biggr)^\chi \nonumber \\ &=& m+m\frac{\chi}{1-\chi}\frac{(k+1)^\chi}{k-1} \bigl(n^{1-\chi }-k^{1-\chi} \bigr) \\ &\leq& m+ m\frac{\chi}{1-\chi}\frac{k+1}{k-1} \biggl( \biggl(\frac{n}{k} \biggr)^{1-\chi} -1 \biggr) \nonumber \\ &\leq& m+ m\frac{\chi}{1-\chi} \biggl( \biggl(\frac{n}{k} \biggr)^{1-\chi} -1 \biggr) \biggl(1+\frac4k \biggr). \nonumber \end{eqnarray}
On the other hand, again by (\ref{eqexpepsi}),
\begin{eqnarray*} E\bigl[\varphi_k^{(\ell)}\bigr] &\geq& \frac{\chi}{k}\prod _{i=k+1}^\ell\biggl(1-\frac\chi{ i-1} \biggr) \geq\frac{\chi}{k}\prod_{i=k+1}^\ell \biggl(1-\frac1{ i-1} \biggr)^\chi \\ &=& \frac{\chi}{k}\prod_{i=k+1}^\ell \biggl(\frac{i-2}{ i-1} \biggr)^\chi=\frac{\chi}{k} \biggl( \frac{k-1}{ \ell-1} \biggr)^\chi \end{eqnarray*}
implying that
\begin{eqnarray} \label{eqexp-of-d-low} E\bigl[d_n(k)\bigr] &\geq& m+m\chi \frac{(k-1)^\chi}{k} \sum_{\ell=k}^{n-1} \biggl( \frac1{\ell-1} \biggr)^\chi\nonumber\\ &\geq& m+m\chi\frac{(k-1)^\chi}{k} \int _{k}^{n} dx \biggl(\frac1{x-1} \biggr)^\chi \nonumber \\ &=& m+m\frac{\chi}{1-\chi}\frac{(k-1)^\chi}{k} \bigl((n-1)^{1-\chi }-(k-1)^{1-\chi} \bigr) \\ &=& m+m\frac{\chi}{1-\chi}\frac{k-1}{k} \biggl( \biggl(\frac{n-1}{k-1} \biggr)^{1-\chi}-1 \biggr) \nonumber \\ &\geq& m+ m\frac{\chi}{1-\chi} \biggl( \biggl(\frac{n}{k} \biggr)^{1-\chi} -1 \biggr) \biggl(1-\frac1k \biggr). \nonumber \end{eqnarray}
\section{Applications}
\subsection{Degree distribution of an early vertex} \label{seclim-dn}
In this section, we will show that for $n\gg k\gg1$, $d_n(k)$ grows like $ (\frac nk )^{1-\chi} = (\frac nk )^{\psi/(\psi+1)}$. To give the precise statement, we need some definitions. To this end, let us consider the random variables
\[ M_k^{({\ell})}=\prod_{j=k+1}^{\ell} \frac{1-\psi_j}{1-E[\psi_j]}. \]
The bounds (\ref{eqexpepsi-0}) and (\ref{beta-moments}) imply that the second moment of $M_k^{({\ell})}$ is bounded uniformly in ${\ell}$, so by the martingale convergence theorem, $M_k^{({\ell})}$ converges both a.s. and in $L^2$. Since $1-E[\psi_j]= (\frac{j-1}j )^\chi+O(j^{-2})$, this also implies that the limit
\begin{equation} \label{Fk-def} F_k=\lim_{{\ell}\to\infty} \prod _{j=k+1}^{\ell} (1-\psi_j) \biggl(\frac j{j-1} \biggr)^\chi=\lim_{{\ell}\to\infty} \biggl( \frac{\ell}{k} \biggr)^\chi\prod_{j=k+1}^{\ell} (1-\psi_j) \end{equation}
exists a.s. and in $L^2$. In the following lemma, $O_P(k^{-1/2})$ stands for a random variable $A$ such that $Ak^{1/2}$ is bounded in probability.
\begin{Lemma}\label{lem51} Consider the sequential model for some $\alpha$ and $m$, and let $F_k$ be as above. Then
\begin{equation} \label{dnk-limit} \frac{d_n(k)}{n^{1-\chi}} \to\frac{m}{1-\chi}k^\chi \psi_k F_k \qquad\mbox{as } n\to\infty, \end{equation}
both in expectation and in distribution. Furthermore,
\[ F_k>0 \qquad\mbox{a.s. for all $k\geq1$},\qquad \log F_k =O_P\bigl(k^{-1/2}\bigr) \]
and
\[ E[F_k]=1+ O \bigl( k^{-1}\bigr), \]
implying in particular that
\[ \lim_{n\to\infty}\frac{E[d_n(k)]}{n^{1-\chi}} = \frac{m\chi}{1-\chi} \frac{1}{k^{1-\chi}}\bigl(1+O\bigl(k^{-1}\bigr)\bigr). \]
\end{Lemma}
\begin{Remark*} Note that (\ref{dnk-limit}) holds also for the independent and the conditional models. The reason is that by the approximating coupling, the total variation distance between the degree distribution of vertex number $k$ in the sequential model and that of vertex number $k$ in the independent (or conditional) model goes to $0$ as $k$ goes to infinity, and the convergence is uniform in $n$ (the size of the graph). \end{Remark*}
\begin{pf*}{Proof of Lemma \ref{lem51}} We first consider the conditional expectation $E[d_n(k)\mid{\mathcal F}]$, where, as before, ${\mathcal F}$ is the $\sigma$-algebra generated by $\{\psi_h\}_{h=1}^\infty$. Fix $\varepsilon$, and let $K$ be such that for $\ell\geq K$,
\[
\Biggl\|F_k- \biggl(\frac{\ell}{k} \biggr)^\chi\prod _{j=k+1}^{\ell} (1-\psi_j)
\Biggr\|_2\leq\varepsilon. \]
Bounding
\[
\Biggl\|E\bigl[d_n(k)\mid{\mathcal F}\bigr]- \sum_{\ell={K}}^{n-1}m
\varphi_k^{(\ell)} \Biggr\|_2\leq mK \]
we then approximate
\begin{eqnarray*} \sum_{\ell={K}}^{n-1}m\varphi_k^{(\ell)} &=& m\psi_k \sum_{\ell={K}}^{n-1}\prod _{j=k+1}^{\ell}(1-\psi_j)= m \psi_k \sum_{\ell={K}}^{n-1} \biggl( \frac k\ell\biggr)^\chi\bigl(F_k +O(\varepsilon) \bigr) \\ &=& n^{1-\chi} \biggl(\frac{m}{1-\chi}k^\chi\psi_k F_k+O(\varepsilon) \biggr), \end{eqnarray*}
where the errors $O(\varepsilon)$ stand for errors in $L^2$. We have thus shown that as $n\to\infty$,
\[ \frac1{n^{1-\chi}}E\bigl[d_n(k)\mid{\mathcal F}\bigr]\to \frac{m}{1-\chi}k^\chi\psi_k F_k \]
in $L^2$. Taking expectations on both sides, we obtain that (\ref{dnk-limit}) holds in expectation.
To prove convergence in distribution, it is clearly enough to show that $E[d_n(k)\mid{\mathcal F}] - d_n(k)\to0$ in probability. But this follows by an easy second moment estimate and the observation that
\[ E\bigl[d_n(k)^2\mid{\mathcal F}\bigr]\leq E\bigl[d_n(k)\mid {\mathcal F}\bigr]^2 +E\bigl[d_n(k)\mid{\mathcal F}\bigr]. \]
Next we observe that the bounds established in Section \ref{secpolest} imply that there is a constant $C<\infty$ such that for $k\geq2$,
\[
\bigl|\log M_k^{(\ell)}\bigr|\leq\varepsilon+ \frac Ck \]
with probability at least $1-\frac{C}{\varepsilon^2k}$. Since these bounds are uniform in $\ell$, they carry over to the limit, and imply both that a.s. $F_k>0$ for all fixed $k\geq 2$, and that $\log F_k=O_P(k^{-1/2})$ as $k\to\infty$. To prove that a.s. $F_1>0$, we note that $F_1/F_2$ is proportional to $1-\psi_2$. The bound $E[F_k]=1+O(k^{-1})$ finally follows from the fact that $E[M_k^{(\ell)}]=1$ and the observation that $1-E[\psi_j]= (\frac{j-1}j )^\chi+O(j^{-2})$. \end{pf*}
\subsection{Degree distribution}
By Theorem \ref{thmmain} and Corollary \ref{corlimit}, the limiting degree distribution of the preferential attachment graph $G_n$ is exactly the degree distribution of the root of the P\'olya-point graph. As we will see, this allows us to explicitly calculate the limiting degree distribution of the preferential attachment graph. In a similar way, it also allows us to calculate the limiting degree distribution of a vertex chosen at random from the vertices that receive an edge from a uniformly random vertex $v_0$ in $G_n$. We summarize the results in the following lemma.
\begin{Lemma}\label{lemma5.2} Let $v_0$ be a uniformly chosen vertex in $G_n$, let $D$ be the degree of $v_0$ and let $D'$ be the degree of a vertex chosen uniformly at random from the $m$ vertices which received an edge from $v_0$. In the limit $n\to\infty$, the distribution of $D$ and $D'$ for all three versions of the preferential attachment graph converge to
\[ {\mathbf P}(D=m+k) = \frac{\psi+1}\psi\frac{\Gamma(a+1/\psi+1)}{\Gamma ({a})} \frac{\Gamma(k+{a})}{\Gamma(a+1/\psi+k+2)} \]
and
\[ {\mathbf P}\bigl(D'=m+1+k\bigr) =\frac{\psi+1}{\psi^2}\frac{\Gamma(a+ 1/\psi+1)}{\Gamma({a+1})} \frac{(k+1)\Gamma(k+{a+1})}{\Gamma(a+1/\psi+k+3)}, \]
where ${a}=m+2mu$. As $k\to\infty$, this gives
\[ {\mathbf P}(D=m+k)=Ck^{-2-1/\psi}\bigl(1+O\bigl(k^{-1}\bigr)\bigr) \]
and
\[ {\mathbf P}\bigl(D'=m+1+k\bigr)=\tilde Ck^{-1-1/\psi}\bigl(1+O \bigl(k^{-1}\bigr)\bigr) \]
for some constants $C$ and $\tilde C$ depending on $m$ and $\alpha$. \end{Lemma}
Note that for $\alpha=0$, the statements of the lemma reduce to
\[ {\mathbf P}(D=m+k)=\frac{2m(m+1)}{{(m+k)(m+k+1)(m+k+2)}} \]
and
\[ {\mathbf P}\bigl(D'=m+1+k\bigr)=\frac{2(m+1)(k+1)}{{(m+k+1)(m+k+2)(m+k+3)}}. \]
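As a quick check of these $\alpha=0$ formulas, note that both expressions must define probability distributions over $k=0,1,2,\ldots$ The sketch below (illustrative choice of $m$) sums them numerically; the partial sums should equal $1$ up to the truncated tails.

```python
m = 2   # illustrative value

def p_D(k):        # P(D = m + k), alpha = 0
    return 2 * m * (m + 1) / ((m + k) * (m + k + 1) * (m + k + 2))

def p_Dprime(k):   # P(D' = m + 1 + k), alpha = 0
    return 2 * (m + 1) * (k + 1) / ((m + k + 1) * (m + k + 2) * (m + k + 3))

K = 10**6
sD = sum(p_D(k) for k in range(K))
sDp = sum(p_Dprime(k) for k in range(K))
print(sD, sDp)     # both close to 1; the D' tail is O(1/K), the D tail O(1/K^2)
```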
\begin{pf*}{Proof of Lemma \ref{lemma5.2}} First we condition on the position $x_0$ of the root of the P\'olya-point graph. Let $D$ be the degree of the root. Conditioned on $x_0$, $D$ is $m$ plus a Poisson variable with parameter
\[ \frac{\gamma}{x_0^\psi}\int_{x_0}^1{ \psi} x^{\psi-1}\,dx =\gamma\frac{1-x_0^\psi}{x_0^\psi}, \]
where $\gamma$ is a Gamma variable with parameters $a= m+2mu$ and $1$.
Let
\[ \kappa=\kappa(x_0)=\frac{1-x_0^\psi}
{x_0^\psi}. \]
Then
\begin{eqnarray} \label{eqkappaalpha} {\mathbf P}(D=m+k\mid x_0) &=& \frac{\Gamma(k+{a})}{k!\Gamma ({a})} \frac{\kappa^k}{(\kappa+1)^{k+{a}}} \nonumber\\ &=& \frac{\Gamma(k+{a})}{k!\Gamma ({a})}\frac{(1-x_0^\psi )^k}{x_0^{k\psi}}\bigl(x_0^\psi \bigr)^{k+a} \\ &=& \frac{\Gamma(k+{a})}{k!\Gamma({a})}\bigl(1-x_0^\psi \bigr)^kx_0^{a\psi} \nonumber \end{eqnarray}
and
\begin{eqnarray*} {\mathbf P}(D=m+k) &=&{(\psi+1)}\int_0^1 {\mathbf P}(D=m+k\mid x_0=x)x^\psi \,dx \\ &=&(\psi+1)\frac{\Gamma(k+{a})}{k!\Gamma({a})} \int_0^1 { \bigl(1-x^\psi\bigr)^k x^{{(a+1)}\psi}} \,dx \\ &=&\frac{\psi+1}\psi\frac{\Gamma(k+{a})}{k!\Gamma({a})} \int_0^1 ( {1-y} )^k y^{a+1/\psi} \,dy \\ &=&\frac{\psi+1}\psi\frac{\Gamma(k+{a})}{\Gamma({a})} \prod_{i=1}^{k+1} \frac{1}{a+1/\psi+i} \\ &=&\frac{\psi+1}\psi\frac{\Gamma(k+{a})}{\Gamma({a})} \frac{\Gamma(a+ 1/\psi+1)}{\Gamma(a+1/\psi+k+2)}. \end{eqnarray*}
To calculate the distribution of $D'$, we choose $y_0$ uniformly at random from $[0,x_0]$. Conditioned on $y_0$, the limiting degree $D'$ is equal to $m+1$ plus a Poisson variable with parameter
\[ \frac{\gamma'}{y_0^\psi}\int_{y_0}^1{ \psi} x^{\psi-1}\,dx =\gamma'\frac{1-y_0^\psi}{y_0^\psi}, \]
where $\gamma'$ is a Gamma variable with parameters $a+1$ and $1$. Continuing as before, this gives
\begin{equation} \label{eqkappaalpha-} {\mathbf P}\bigl(D'=m+1+k\mid y_0=y \bigr) = \frac{\Gamma(k+{a+1})}{k!\Gamma({a+1})}\bigl(1-y^\psi\bigr )^ky^{(a+1)\psi} \end{equation}
and
\begin{eqnarray*} {\mathbf P}\bigl(D'=m+1+k\bigr) &=&{(\psi+1)}\int_0^1dx_0x_0^\psi \frac1{x_0}\int_0^{x_0} dy{\mathbf P} \bigl(D'=m+1+k\mid y_0=y\bigr) \\ &=&(\psi+1)\frac{\Gamma(k+{a+1})}{k!\Gamma({a+1})} \int_0^1dxx^{\psi-1} \int_0^{x} dy \bigl(1-y^\psi \bigr)^ky^{(a+1)\psi} \\ &=&\frac{\psi+1}{\psi^2}\frac{\Gamma(k+{a+1})}{k!\Gamma({a+1})} \int_0^1du \int_0^{u} dv (1-v)^kv^{a+1/\psi}. \end{eqnarray*}
Exchanging the integral over $u$ and $v$ we obtain
\begin{eqnarray*} {\mathbf P}\bigl(D'=m+1+k\bigr) &=&\frac{\psi+1}{\psi^2}\frac{\Gamma (k+{a+1})}{k!\Gamma({a+1})} \int_0^{1} dv (1-v)^kv^{a+1/\psi} \int_v^1du \\ &=&\frac{\psi+1}{\psi^2}\frac{\Gamma(k+{a+1})}{k!\Gamma({a+1})} \int_0^{1} dv (1-v)^{k+1}v^{a+1/\psi} \\ &=&\frac{\psi+1}{\psi^2}\frac{(k+1)\Gamma(k+{a+1})}{\Gamma({a+1})} \frac {\Gamma(a+1/\psi+1)}{\Gamma(a+1/\psi+k+3)}. \end{eqnarray*}
The asymptotic behavior as $k\to\infty$ follows from the well-known asymptotic behavior of the Gamma function.
\end{pf*}
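The first step of the proof — $D$ is $m$ plus a Poisson variable whose rate mixes a Gamma variable with the random position $x_0$ — can be verified by Monte Carlo. The sketch below takes the $\alpha=0$ case ($\psi=1$, $a=m$; these parameter choices are our illustrative assumption), where the lemma's formula gives ${\mathbf P}(D=m)=2/(m+2)$, and estimates ${\mathbf P}(D=m)=E[e^{-\gamma\kappa(x_0)}]$ directly.

```python
import math
import random

rng = random.Random(2)
m = 2                     # alpha = 0, so psi = 1 and a = m
trials = 200000
acc = 0.0
for _ in range(trials):
    x0 = math.sqrt(1.0 - rng.random())     # density (psi+1) x^psi = 2x on (0,1]
    gamma = rng.gammavariate(m, 1.0)       # Gamma(a, 1)
    kappa = (1.0 - x0) / x0                # (1 - x0^psi) / x0^psi
    acc += math.exp(-gamma * kappa)        # P(Poisson(gamma * kappa) = 0)
mc = acc / trials
closed_form = 2.0 / (m + 2)                # P(D = m) from the lemma at alpha = 0
print(mc, closed_form)
```

Averaging $e^{-\gamma\kappa}$ rather than sampling the Poisson variable itself is a Rao--Blackwellization and reduces the Monte Carlo variance.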
\subsection{Joint degree distributions}
We can use the same calculation in order to determine the joint distribution of the degree of the root of the preferential attachment graph with a vertex chosen uniformly among the $m$ vertices that receive an edge from the root.
\begin{Lemma}\label{lemma5.3} Let $v_0$ be a uniformly chosen vertex in $G_n$, let $D$ be the degree of $v_0$ and let $D'$ be the degree of a vertex chosen uniformly at random from the $m$ vertices which received an edge from $v_0$. In the limit $n\to\infty$, the joint distribution of $D$ and $D'$ for all three versions of the preferential attachment graph converges to
\begin{eqnarray*} && {\mathbf P}\bigl(D'=m+1+k,D=m+j\bigr) \\ &&\qquad= \frac{\psi+1}{\psi^2} \frac{\Gamma(k+{a+1})}{k!\Gamma({a+1})}\frac {\Gamma(j+{a})}{j!\Gamma({a})} \int_0^{1} {dv} (1-v)^kv^{a+1/\psi}\int_v^1du (1-u)^ju^{a}, \end{eqnarray*}
where ${a}=m+2mu$. As $k\to\infty$ while $j$ is fixed, this gives
\[ {\mathbf P}\bigl(D'=m+1+k\mid D=m+j\bigr)=C_jk^{-1-1/\psi} \biggl(1+O \biggl(\frac1k \biggr) \biggr), \]
where $C_j$ is a constant depending on $j$, $m$ and $\alpha$, while for $k$ fixed and $j\to\infty$, we have
\[ {\mathbf P}\bigl(D=m+j\mid D'=m+1+k\bigr)= \tilde C_k j^{-a-3-1/\psi} \biggl(1+O\biggl( \biggl(\frac1j \biggr) \biggr) \biggr), \]
where $\tilde C_k$ is a constant depending on $k$, $m$ and $\alpha$. \end{Lemma}
Note that the conditioning on $D$ does not change the power law for the degree distribution of $D'$, while the conditioning on $D'$ leads to a much faster falloff for the degree distribution of $D$. Intuitively, this can be explained by the fact that earlier vertices tend to have higher degree. Conditioning on the degree $D'$ to be a fixed number therefore makes it more likely that at least one of the $m$ vertices receiving an edge from $v_0$ was born late, which in turn makes it more likely that $v_0$ was born late. This in turn makes it much less likely that the root $v_0$ has very high degree, leading to a faster decay at infinity. This effect does not happen for the distribution of $D'$ conditioned on $D$, since the vertices receiving edges from the root are born \textit{before} the root. Note also that the exponent of the power law of the distribution of $D$ conditioned on $D'$ depends (through $a$) on $m$. Heuristically, this seemingly surprising result follows from the fact that the distribution of the degree of the vertex at time $k$ is (in the limit) a discretized Gamma distribution with parameter $a$ (i.e., the probability of the degree being equal to $k$ is proportional to $e^{-k/\lambda}\cdot k^a$, where $\lambda$ is essentially an appropriate power of $n/k$). With this distribution, when $\lambda$ is relatively large, the probability of the degree being small is approximately $\lambda^{-a}$. This means that when $D'$ is small, the probability that $k$ is small (i.e., that $n/k$ is large) is as small as $\lambda^{-a}$. But for $D$ to be big, $k$ needs to be small (up to an exponential tail). This is the intuitive explanation for how the parameter $a$ enters the exponent of the joint distribution.
\begin{pf*}{Proof of Lemma \ref{lemma5.3}} Let $x_0$ be the location of the root in the P\'olya-point graph, and let $y_0$ be the location of a vertex chosen uniformly at random from the $m$ vertices of type ${ L}$ connected to the root. Then
\begin{eqnarray*} &&{\mathbf P}\bigl(D'=k+m+1,D=j+m\bigr) \\ &&\qquad =(\psi+1)\int_0^1dx x^{\psi-1} \int _0^x {dy}{\mathbf P}\bigl(D'=k+m\mid y_0=y \bigr) \\ &&\qquad\quad\hspace*{49pt}{}\times{\mathbf P}(D=j+m\mid x_0=x). \end{eqnarray*}
Using (\ref{eqkappaalpha}) and (\ref{eqkappaalpha-}), we can write this explicitly as
\begin{eqnarray*} &&{\mathbf P}\bigl(D'=k+m+1,D=j+m\bigr) \\ &&\qquad=(\psi+1)\frac{\Gamma({k+a +1})}{k!\Gamma({ a+1})}\frac{\Gamma (j+{a})}{j!\Gamma({a})} \int_0^1dx \bigl(1-x^\psi\bigr)^jx^{(a+1)\psi-1} \\ &&\qquad\quad\hspace*{0pt}{}\times\int _0^x {dy} \bigl(1-y^\psi \bigr)^ky^{(a{ +1})\psi} \\ &&\qquad=\frac{\psi+1}{\psi^2}\frac{\Gamma(k+{a+1})}{k!\Gamma ({a+1})}\frac{\Gamma(j+{a})}{j!\Gamma({a})} \int _0^1du (1-u)^ju^{a} \int_0^{u} {dv} (1-v)^kv^{a+1/\psi} \\ &&\qquad=\frac{\psi+1}{\psi^2}\frac{\Gamma(k+{a+1})}{k!\Gamma ({a+1})}\frac{\Gamma(j+{a})}{j!\Gamma({a})} \int _0^{1} {dv} (1-v)^kv^{a+1/\psi} \int_v^1du (1-u)^ju^{a}. \end{eqnarray*}
We want to approximate the double integral by a product of integrals. Clearly
\begin{eqnarray*} && \int_0^{1} {dv} (1-v)^kv^{a+1/\psi} \int_v^1du (1-u)^ju^{a}\\ &&\qquad\leq \int_0^{1} {dv} (1-v)^kv^{a+1/\psi} \int_0^1du (1-u)^ju^{a} \\ &&\qquad= k! j!\frac{\Gamma(a+1/\psi+1)}{\Gamma(a+1/\psi+k+2)} \frac {\Gamma(a+1)}{\Gamma(a+j+2)}:=Z. \end{eqnarray*}
On the other hand,
\begin{eqnarray*} && \int_0^{1} {dv} (1-v)^kv^{a+1/\psi} \int_0^vdu (1-u)^ju^{a}\\ &&\qquad\leq \int_0^{1} {dv} (1-v)^kv^{a+1/\psi} \int_0^vdu u^{a} \\ &&\qquad= \frac1{a+1} k! \frac{\Gamma(2a+1/\psi+2)}{\Gamma(2a+1/\psi +k+3)} \\ &&\qquad=\frac{\Gamma(2a+1/\psi+2)}{\Gamma(a+1/\psi +1)\Gamma(a+2)} \frac{\Gamma(a+1/\psi+k+2)}{\Gamma(2a+1/\psi +k+3)} \frac{\Gamma(a+j+2)}{ j!}Z \\ &&\qquad= O \biggl( \biggl(\frac jk \biggr)^{a+1} \biggr) Z, \end{eqnarray*}
implying that
\begin{eqnarray} \label{DcondD1}\qquad &&{\mathbf P}\bigl(D'=k+m+1\mid D=j+m\bigr)\nonumber\\[-8pt]\\[-8pt] &&\qquad= \frac{1}{\psi} \frac{\Gamma(k+{a+1})}{\Gamma(a+1/\psi+k+2)} \frac {\Gamma(a+1/\psi+j+2)}{\Gamma(a+j+2)} \biggl(1+O \biggl( \biggl(\frac jk \biggr)^{a+1} \biggr) \biggr).\nonumber \end{eqnarray}
A similar calculation gives
\begin{eqnarray*} && \int_0^1du (1-u)^ju^{a} \int_0^{u} {dv} (1-v)^kv^{a+1/\psi}\\ &&\qquad = \frac{j!}{a+1/\psi+1} \frac{\Gamma(2a+1/\psi+2)}{\Gamma (2a+1/\psi+j+3)} \biggl(1+O \biggl(\frac kj \biggr) \biggr), \end{eqnarray*}
which in turn implies that for fixed $k$, as $j$ goes to infinity, we get
\begin{eqnarray} \label{DcondD2} &&{\mathbf P}\bigl(D=j+m\mid D'=k+m+1\bigr) \nonumber\\ &&\qquad=\frac{\Gamma(2a+1/\psi+2)}{\Gamma({a})\Gamma(a+1/\psi+2)} \frac{\Gamma(j+{a})}{\Gamma(2a+1/\psi+j+3)}\\ &&\qquad\quad{}\times \frac{\Gamma(a+ 1/\psi+k+3)}{\Gamma(k+2)} \biggl(1+O \biggl( \frac kj \biggr) \biggr). \nonumber \end{eqnarray}
The statements of the lemma describing the decay of (\ref{DcondD1}) and (\ref{DcondD2}) as (resp.) $k\to\infty$ and $j\to\infty$ follow from the well-known asymptotics of the $\Gamma$-function. \end{pf*}
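The $\Gamma$-function asymptotics used here and in the previous proof, $\Gamma(k+x)/\Gamma(k+y)\sim k^{x-y}$ as $k\to\infty$, can be checked numerically via the log-Gamma function; the parameter values below are illustrative only ($\psi=1$, $a=2$, i.e., the $\alpha=0$, $m=2$ case).

```python
import math

def gamma_ratio(x, y, k):
    # Gamma(k + x) / Gamma(k + y), computed stably via log-Gamma
    return math.exp(math.lgamma(k + x) - math.lgamma(k + y))

psi, a = 1.0, 2.0                 # illustrative (alpha = 0, m = 2)
k = 10**5
# Lemma 5.2: Gamma(k + a) / Gamma(k + a + 1/psi + 2) ~ k^{-(2 + 1/psi)}
ratio = gamma_ratio(a, a + 1 / psi + 2, k) / k ** (-(2 + 1 / psi))
print(ratio)                      # tends to 1 as k grows, error O(1/k)
```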
\subsection{Subgraph frequencies}
\subsubsection{\texorpdfstring{Proof of Lemma \protect\ref{lemsub-G-conv}} {Proof of Lemma 2.4}}
Let $F$ be a finite graph with vertex set $V(F) = \{v_1, v_2,\ldots, v_k\}$. As in Section \ref{secsubgraphfrequency}, let $\operatorname{inj}(F,\mathbf n;G_n)$ be the number of injective maps $\Phi$ from $V(F)$ into $V(G_n)$ that are homomorphisms and preserve the degrees. In a similar way, given two rooted graphs $(F,v)$ and $(G_n,x)$, let $\widehat{\mathrm{inj}}((F,v), \mathbf n; (G_n,x))$ be the number of injective maps $\Phi$ from $V(F)$ into $V(G_n)$ that are homomorphisms, preserve the degrees and map $v$ into $x$. Then $\operatorname{inj}(F, \mathbf n;G_n)$ can be reexpressed as
\[ \operatorname{inj}(F, \mathbf n;G_n) = \sum_{x_1\in V(G_n)} \widehat{\mathrm{inj}}\bigl((F,v_1), \mathbf n; (G_n,x_1) \bigr). \]
Since the diameter of $(F,v_1)$ is at most $k$, its image under a homomorphism $\Phi$ has diameter at most $k$ as well, which in turn implies that
\[ \frac1n\operatorname{inj}(F, \mathbf n;G_n) = \frac1n \sum _{x_1\in V(G_n)} \widehat{\mathrm{inj}}\bigl((F,v_1), \mathbf n; B_{k+1}(G_n,x_1)\bigr). \]
Given $N$ and $r$, let $\mathcal B_r^{(N)}$ be the set of rooted graphs on $\{1,2,\ldots,N\}$ that have radius $r$ and contain exactly one\vspace*{1pt} representative of each isomorphism class, and let $\mathcal B_r=\bigcup_{N=1}^\infty\mathcal B_r^{(N)}$. Then
\begin{eqnarray*} \frac1n\operatorname{inj}(F, \mathbf n;G_n) & = & \frac1n \sum _{x_1\in V(G_n)} \widehat{\mathrm{inj}}\bigl((F,v_1), \mathbf n; B_{k+1}(G_n,x_1)\bigr) \\ &=&\sum_{B\in\mathcal B_{{ k}+1}} \widehat{\mathrm{inj}}\bigl((F,v_1), \mathbf n; B\bigr) {\Pr}_{x_1} \bigl(B_{k+1}(G_n,x_1) \sim B \bigr), \end{eqnarray*}
where $\sim$ indicates rooted isomorphisms and the probability is the probability over rooted balls induced by the random choice of $x_1\in V(G_n)$.
Since $F$ is connected, $\widehat{\mathrm{inj}}((F,v_1), \mathbf n; B_{k+1}(G_n,x_1))$ is upper bounded by the constant $C=\max_{1 \leq i \leq k}(n(i) + d_F(v_i))^{k-1} $. Therefore convergence in the sense of Benjamini--Schramm implies convergence of the right-hand side, giving that
\begin{eqnarray} \label{hat-F-Calc} \hat t(F,\mathbf n)&:=&\lim_{n\to\infty}
\frac1{|V(G_n)|} \operatorname{inj}(F,\mathbf n;G_n) \nonumber \\ &=&\sum_{B\in\mathcal B_{{ k}+1}} \widehat{\mathrm{inj}}\bigl((F,v_1), \mathbf n; B\bigr) {\Pr} \bigl(B_{k+1}(G,x)\sim B \bigr) \\ &=& E \bigl[\widehat{\mathrm{inj}}\bigl((F,v_1), \mathbf n; (G,x)\bigr) \bigr], \nonumber \end{eqnarray}
where $E[\cdot]$ denotes expectation over the random choices of the limit graph $(G,x)$.
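For small graphs, the quantity $\widehat{\mathrm{inj}}$ can be computed by brute force, which makes the definition concrete. The sketch below (illustrative, with hypothetical names) counts injective homomorphisms of a marked graph $(F,\mathbf n)$ into a small graph $G$ that satisfy the degree-preservation condition $\deg_G(\Phi(v))=d_F(v)+n(v)$.

```python
from itertools import permutations

def inj(F_edges, n_marks, G_adj):
    """Count injective homomorphisms Phi: V(F) -> V(G) with
    deg_G(Phi(v)) = d_F(v) + n(v) for every v (degree preserving)."""
    VF = sorted(n_marks)
    dF = {v: sum(1 for e in F_edges if v in e) for v in VF}
    VG = sorted(G_adj)
    count = 0
    for img in permutations(VG, len(VF)):
        phi = dict(zip(VF, img))
        if all(phi[u] in G_adj[phi[v]] for u, v in F_edges) and \
           all(len(G_adj[phi[v]]) == dF[v] + n_marks[v] for v in VF):
            count += 1
    return count

# G: the path 0 - 1 - 2;  F: a single edge v1 - v2
G = {0: {1}, 1: {0, 2}, 2: {1}}
F = [(1, 2)]
print(inj(F, {1: 0, 2: 1}, G))   # v1 must keep degree 1, v2 gains one edge
```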
\subsubsection{Convergence in probability}
If $G_n$ is a sequence of random graphs, the subgraph frequencies ${\mathrm{inj}}(F, \mathbf n; G_n)$ are random numbers as well. Examining the last proof, one easily sees that the expectation of these numbers converges if $G_n$ converges in the sense of Definition \ref{defBS-limit}. For the preferential attachment graph, this gives
\[
\lim_{n\to\infty}\frac1{|V(G_n)|}E \bigl[ \operatorname{inj}(F,\mathbf n;G_n) \bigr] = \hat t(F,\mathbf n), \]
where
\begin{equation} \label{t-hat-def} \hat t(F,\mathbf n)= E \bigl[\widehat{\mathrm{inj}} \bigl((F,v_1), \mathbf n; (T,{ 0})\bigr) \bigr] \end{equation}
with $(T,0)$ denoting the P\'olya-point graph. It turns out that we can prove a little more, namely convergence in probability.
\begin{Lemma} Let $G_n$ be one of the three versions of the preferential attachment graph defined in Section \ref{secdef-mod}, let $F$ be a finite connected graph and let $\mathbf n:V(F)\to\{0,1,\ldots,\}$. Then
\[ \frac{1}{n} \operatorname{inj}\bigl((F,\mathbf n);G_n\bigr)\to\hat t(F,\mathbf n) \qquad\mbox{in probability}. \]
\end{Lemma}
\begin{pf} Assume that $x_0$ and $x_0'$ are chosen independently uniformly at random from $V(G_n)$. Repeating the proof of Theorem \ref{thmmain}, one easily obtains that the pair $((G_n,x_0),(G_n,x_0'))$ converges to two independent copies of the P\'olya-point graph [more precisely, that the distribution of all pairs of balls $(B_r(G_n,x_0),B_r(G_n,x_0'))$ converges to the product distribution of the corresponding balls in $(T,0)$]. As a consequence, the expectation of $ [\frac{1}{n} \operatorname{inj}((F,\mathbf n);G_n) ]^2$ converges to $ [\hat t(F,\mathbf n) ]^2$, which in turn implies the claim. \end{pf}
\subsubsection{Calculation of subgraph frequencies} \label{secfinite-ball}
In this subsection, we calculate the limiting subgraph frequencies $\hat t(F,\mathbf n)$ using the expression (\ref{t-hat-def}). Alternatively, one could use the intermediate expression in (\ref{hat-F-Calc}) and the fact that for each given rooted graph $B$ of radius $k$, we can calculate the probability that the ball of radius $k$ in the P\'olya-point graph $(T,0)$ is isomorphic to $B$. But this gives an expression involving the countably infinite sum over the balls in $\mathcal B_{{k}+1}$, while our calculation below only involves a finite number of terms.
In a preliminary step, we note that the P\'olya-point graph $(T,0)$ and the point process $\{x_{\bar a}\}$ can be easily recovered from the countable graph on $[0,1]$ which is obtained by joining two points $x,x'\in[0,1]$ by an edge whenever $x=x_{\bar a}$ and $x'=x_{\bar a'}$ for a pair of neighbors $\bar a,\bar a'$ in $T$. Identifying the point $x_0$ as the root, we obtain an infinite, random rooted tree on $[0,1]$ which we will again denote by $T$.
Recalling (\ref{t-hat-def}), we want to calculate the expected number of maps $\varphi$ from $V(F)$ to $[0,1]$ that are degree-preserving homomorphisms from $(F,\mathbf n)$ into $T$ and map $v_1$ into the root $x_0$. To this end, we explore the tree structure around the node { $x_0$} in $T$, in a similar fashion as in Section \ref{secexpltree}. Obviously, if $F$ is not a tree, then $\hat t(F,\mathbf n) = 0$. Otherwise, we designate the vertex $v_1 \in V(F)$ as the root, obtaining a rooted tree in which the set of children of every node is uniquely defined.
A mapping $\varphi$ from vertices $v_1, v_2,\ldots, v_k$ to points $x_1, x_2,\ldots, x_k$ on the interval $[0,1]$ defines a natural total order $\theta$ on $V(F)$. We say a mapping is consistent with total order $\theta$ if and only if for every $i$ and $j$, $\theta(v_i) < \theta(v_j)$ implies $x_i < x_j$.
Given the positions $x_1, x_2,\ldots, x_k$ (or equivalently the ordering $\theta$), we can divide the children of every node $v_i$ into two sets $L(v_i)$ and $R(v_i)$, depending on whether their corresponding points on the interval are to the left or right of $x_i$, respectively. With a slight abuse of notation, define\vspace*{1pt} $L = \bigcup_{1 \leq i \leq k} L(v_i)$ and ${ R} = \bigcup_{1 \leq i \leq k} { R}(v_i)$. Note that $\{v_2,\ldots,v_k\}$ is the disjoint union of $L$ and $R$. Since we require that the degrees are preserved, the degree of a node $x_i$ in $T$ is $d_F(v_i)+n_i$. For the root $x_1=x_0$ this gives $d_F(v_1)+n_1$ children,
$m$ to the left, and $n'_1+|R(v_1)|=d_F(v_1)+n_1-m$ to its right. If $v_i \in L$, its parent appears on its right. Therefore, of the $n(v_i)$ remaining neighbors of $x_i$ that are not mapped to any vertex in
$F$, $n'(v_i) = d_F(v_i) + n(v_i) - (m + |R(v_i)| + 1)$ should appear to its right-hand side. For $v_i \in R$, $n'(v_i) =
d_F(v_i) + n(v_i) - (m + |R(v_i)|)$.
Using the above notation, we can finally write the probability density function $p(F, \mathbf n, x)$ for a mapping from $V(F)$ to $x = (x_1, x_2,\ldots, x_k)$ to be homomorphic and degree preserving. Conditioned on $\gamma(x_i)=\gamma_i$, it can be written as
\begin{eqnarray}\qquad && p(F, \mathbf n, x, \gamma)\nonumber\\[-8pt]\\[-8pt] &&\qquad={ (\psi+1)x_1^{\psi}} \prod_{v_i\in V}{ \biggl( \frac{\exp(-H_i)H_i^{n'(i)}}{n'(i)!} \prod _{v_j \in L(v_i)} x_i^{-1} \prod _{v_j \in R(v_i)} \gamma_i\frac{\psi x_j^{\psi-1}}{x_i^\psi} \biggr) },\nonumber \end{eqnarray}
where
\[ H_i=\gamma_i\frac{1-x_i^{\psi}}{x_i^\psi}. \]
The two inner product terms in the above equation are derived using the description of the P\'olya-point graph in Section \ref{secpolyapointdef}. The first term captures the probability that the remaining degree of $x_i$ is the desired value $n'(i)$. Indeed, recalling that the children $x>x_i$ of a vertex $x_i$ are given by a Poisson process with density $ \gamma_i\frac{\psi x^{\psi-1}}{x_i^\psi}$ on $[x_i,1]$, we see that $n_i'$ is a Poisson random variable with rate
\[ \gamma_i\int_{x_i}^1 \frac{\psi x^{\psi-1}}{x_i^\psi}\,dx=H_i, \]
giving the first term in the product above.
Also, $\gamma_i$ is a Gamma variable with parameters $\alpha(i)$ and $1$, where $\alpha(i)$ depends on whether we discover $v_i$ from the right or from the left:
\[ \alpha(i)=\cases{m+2mu+1, &\quad if $v_i \in L$, \cr m+2mu, &\quad if $v_i \in R$.} \]
Similarly, $\alpha(1)=m+2mu$. Let $C(\theta)$ be the simplex containing all points $x = (x_1, x_2,\ldots, x_k)$ consistent with an ordering $\theta$. Setting
\[ \hat t(F, \mathbf n, \theta) = \int_{C(\theta) \times(0,\infty)^{k}} \prod _{i=1}^k \frac{e^{-\gamma_i}\gamma_i^{\alpha(i)-1}}{\Gamma (\alpha(i))} p(F,\mathbf n, x, \gamma)\,dx_1 \cdots dx_k \,d\gamma_1 \cdots d\gamma_k, \]
$\hat t(F, \mathbf n)$ can now be computed by summing
$\hat t(F, \mathbf n, \theta)$ over the
$k!$ choices of $\theta$.
\section*{Acknowledgment}
We thank an anonymous referee for helping us improve the presentation of the paper. The research was performed while N. Berger and A. Saberi were visiting Microsoft Research.
\printaddresses
\end{document}
\begin{document}
\title{Monadicity of Non-deterministic Logical Matrices is Undecidable}
\begin{abstract} The notion of non-deterministic logical matrix (where connectives are interpreted as multi-functions) preserves many good properties of traditional semantics based on logical matrices (where connectives are interpreted as functions) whilst finitely characterizing a much wider class of logics, and has proven to be decisive in a myriad of recent compositional results in logic. Crucially, when a finite non-deterministic matrix satisfies monadicity (distinct truth-values can be separated by unary formulas) one can automatically produce an axiomatization of the induced logic. Furthermore, the resulting calculi are analytic and enable algorithmic proof-search and symbolic counter-model generation.
For finite (deterministic) matrices it is well known that checking monadicity is decidable. We show that, in the presence of non-determinism, the property becomes undecidable. As a consequence, we conclude that there is no algorithm for computing the set of all multi-functions expressible in a given finite Nmatrix. The undecidability result is obtained by reduction from the halting problem for deterministic counter machines. \end{abstract}
\section{Introduction}\label{sec:intro} Logical matrices are arguably the most widespread semantic structures for propositional logics~\cite{WojBook,AlgLogBook}. After {\L}ukasiewicz, a logical matrix consists of an underlying algebra, functionally interpreting logical connectives over a set of truth-values, together with a designated set of truth-values. The logical models (valuations) are obtained by considering homomorphisms from the free algebra in the matrix similarity type into the algebra, and formulas that hold in the model are the ones that take designated values.
However, in recent years, it has become clear that there are advantages in departing from semantics based on logical matrices, by adopting a non-deterministic generalization of the standard notion. Non-deterministic logical matrices (Nmatrices) were introduced in the beginning of this century by Avron and his collaborators~\cite{Avron0,Avron1}, and interpret connectives by multi-functions instead of functions. The central idea is that a connective can non-deterministically pick from a set of possible values instead of its value being completely determined by the input values. Logical semantics based on Nmatrices are very malleable, allowing not only for finite characterizations of logics that do not admit finite semantics based on logical matrices, but also for general recipes for various practical problems in logic~\cite{wollic17}. Further, whenever the underlying logical language is sufficiently expressive, Nmatrices still allow one to extend the general matrix-based techniques for effectively producing analytic calculi for the induced logics, over which a number of reasoning activities can be automated in a purely symbolic fashion, including proof-search and counter-model generation~\cite{SS,Avron0,Avron1,AKZ,CCALJM,synthese,wollic19}. In its simplest form, the sufficient expressiveness requirement mentioned above corresponds to \emph{monadicity}~\cite{SS,synthese,wollic19}. An Nmatrix is \emph{monadic} when pairs of distinct truth-values can be separated by unary formulas of the logic. This crucial property is decidable, in a straightforward way, for finite logical matrices, as one can simply compute the set of all unary functions expressible in a given finite matrix. However, the computational character of monadicity with respect to Nmatrices has not been studied before.
In this paper we show that, in fact, monadicity is undecidable for Nmatrices. Our proof is obtained by means of a suitable reduction from the halting problem for counter machines, well known to be undecidable~\cite{minsky}. Several details of the construction are inspired by results about the inclusion of infectious values in Nmatrices~\cite{ismvl}, and also by undecidability results concerning term-dag-automata (a computational model that bears some interesting connections with Nmatrices)~\cite{ClosureProperties}. As a consequence, we conclude that the set of all unary multi-functions expressible in a given finite Nmatrix is not computable.
The paper is organized as follows. In Section~\ref{sec:prelim} we introduce and illustrate our objects of study, namely logical matrices and Nmatrices, the logics they induce, and the monadicity property. Section~\ref{sec:machines} recalls the counter machine model of computation, and shows how its computations can be encoded into suitable Nmatrices. Finally, Section~\ref{sec:undecided} establishes our main results, namely the undecidability of monadicity for Nmatrices, and as a corollary the uncomputability of expressible multi-functions. We conclude, in Section~\ref{sec:conclude}, with a discussion of the importance of the results obtained and several topics for further research.
\section{Preliminaries}\label{sec:prelim} In this section we recall the notion of logical matrix, non-deterministic matrix, and their associated logics. We also introduce, exemplify and discuss the notion of monadicity.
\paragraph*{Matrices, Nmatrices and their logics} A signature $\Sigma$ is a family of connectives indexed by their arity, $\Sigma=\{\Sigma^{(k)}:k\in \mathbb{N}\}$. The set of formulas over $\Sigma$ based on a set of propositional variables $P$ is denoted by $L_\Sigma(P)$. The set of subformulas (resp., variables) of a formula $\varphi\in L_\Sigma(P)$ is denoted by $\Sub(\varphi)$ (resp., $\Var(\varphi)$). There are two subsets of $L_\Sigma(P)$ that will be of particular interest to us: the set of closed formulas, denoted by $L_\Sigma(\emptyset)$, and the set of unary (or monadic) formulas, denoted by $L_\Sigma(\{p\})$.
A $\Sigma$-Nmatrix is a tuple $\mathbb{M}=\tuple{A,\cdot_\mathbb{M},D}$ where $A$ is the set of truth-values, $D\subseteq A$ is the set of designated truth-values, and for each $\copyright\in \Sigma^{(k)}$, the function $\copyright_\mathbb{M}:A^k\to \wp(A)\setminus\{\emptyset\}$ interprets the connective $\copyright$. A $\Sigma$-Nmatrix $\mathbb{M}$ is finite if it contains only a finite number of truth-values and $\Sigma$ is finite. Clearly, this definition generalizes the usual definition of logical matrix, which is recovered when, for every $\copyright\in\Sigma^{(k)}$ and $a_1,\ldots,a_k \in A$, $\copyright_{\mathbb{M}}(a_1,\ldots,a_k)$ is a singleton. In this case we will sometimes refer to $\copyright_{\mathbb{M}}$ simply as a function. A valuation over $\mathbb{M}$ is a function $v: L_{\Sigma}(P) \rightarrow A$ such that $v(\copyright(\varphi_1, \dots, \varphi_k)) \in \copyright_{\mathbb{M}}(v(\varphi_1), \dots, v(\varphi_k))$ for every $\copyright\in \Sigma^{(k)}$ and $\varphi_1,\ldots,\varphi_k\in L_{\Sigma}(P)$. We use $\Val(\mathbb{M})$ to denote the set of all valuations over $\mathbb{M}$. A valuation $v \in \Val(\mathbb{M})$ is said to satisfy a formula $\varphi$ if $v(\varphi) \in D$, and is said to falsify $\varphi$, otherwise. Note that every formula $\varphi\in L_{\Sigma}(P)$, with $\Var(\varphi)=\{p_1,\ldots,p_k\}$, defines a multi-function $\varphi_\mathbb{M}:A^{k}\to \wp(A)\setminus\{\emptyset\}$ as $\varphi_\mathbb{M}(x_1,\ldots,x_k)=\{v(\varphi):v\in \Val(\mathbb{M}),v(p_i)=x_i,1\leq i\leq k\}$. The multi-function $\varphi_\mathbb{M}$ is said to be \emph{represented}, or \emph{expressed}, by the formula $\varphi$ in $\mathbb{M}$. Furthermore, we say that a multi-function $f$ is \emph{expressible} in an Nmatrix $\mathbb{M}$ if there is a formula $\varphi$ such that $\varphi_{\mathbb{M}} = f$.
The logic induced by an Nmatrix $\mathbb{M}$ is the Tarskian consequence relation $\vdash_\mathbb{M}\,\subseteq \wp(L_\Sigma(P))\times L_\Sigma(P)$ defined as $\Gamma\vdash_\mathbb{M} \varphi$ whenever, for every $v \in \Val(\mathbb{M})$, if $v(\Gamma)\subseteq D$ then $v(\varphi)\in D$. This definition generalizes the usual logical matrix semantics~\cite{WojBook,AlgLogBook}. As usual, a formula $\varphi$ is said to be a theorem of $\mathbb{M}$ if $\emptyset\vdash_\mathbb{M} \varphi$.
\paragraph*{Monadicity} Given a $\Sigma$-Nmatrix $\mathbb{M}=\tuple{A,\cdot_\mathbb{M},D}$, we say that $a,b \in A$ are separated, written $a\# b$ if $a \in D$ and $b \notin D$, or vice-versa. A pair of sets of elements $X,Y \subseteq A$ are separated, written $X\# Y$, if for every $a \in X$ and $b \in Y$ we have that $a\#b$. Note that $X\#Y$ precisely if $X \subseteq D$ and $Y \subseteq A\setminus D$, or vice versa. A monadic formula $\varphi\in L_{\Sigma}(\{p\})$ such that $\varphi_{\mathbb{M}}(a)\#\varphi_{\mathbb{M}}(b)$ is said to separate $a$ and $b$. We say that a set of monadic formulas $\mathsf{S}$ is a set of monadic separators for $\mathbb{M}$ when, for every pair of distinct truth-values of $\mathbb{M}$, there is a formula of $\mathsf{S}$ separating them. An Nmatrix $\mathbb{M}$ satisfies monadicity (or simply, is monadic) if there is a set of monadic separators for $\mathbb{M}$. \begin{example}\label{ex:luk}
Consider the signature $\Sigma = \{\neg, \vee, \wedge, \rightarrow\}$ and the $\Sigma$-matrix $\mathbb{M}_{\text{\L}} = \tuple{A,\cdot_{\text{\L}},D}$,
with $A = \{0,\frac{1}{2},1\}$ and $D = \{1\}$, corresponding to {\L}ukasiewicz $3$-valued logic, with interpretations as described in the following tables.
\begin{center}
\begin{tabular}{c | c}
$x$ & $\neg_{\text{\L}}(x)$ \\
\hline
$0$ & $1$ \\
$\frac{1}{2}$ & $\frac{1}{2}$ \\
$1$ & $0$ \\
\end{tabular}
\quad
\begin{tabular}{c | c c c}
$\vee_{\text{\L}}$ & $0$ & $\frac{1}{2}$ & $1$ \\
\hline
$0$ & $0$ & $\frac{1}{2}$ & $1$ \\
$\frac{1}{2}$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $1$ \\
$1$ & $1$ & $1$ & $1$ \\
\end{tabular}
\quad
\begin{tabular}{c | c c c}
$\wedge_{\text{\L}}$ & $0$ & $\frac{1}{2}$ & $1$ \\
\hline
$0$ & $0$ & $0$ & $0$ \\
$\frac{1}{2}$ & $0$ & $\frac{1}{2}$ & $\frac{1}{2}$ \\
$1$ & $0$ & $\frac{1}{2}$ & $1$ \\
\end{tabular}
\quad
\begin{tabular}{c | c c c}
$\rightarrow_{\text{\L}}$ & $0$ & $\frac{1}{2}$ & $1$ \\
\hline
$0$ & $1$ & $1$ & $1$ \\
$\frac{1}{2}$ & $\frac{1}{2}$ & $1$ & $1$ \\
$1$ & $0$ & $\frac{1}{2}$ & $1$ \\
\end{tabular}
\end{center}
$\Mt_{\text{\L}}$ is monadic with $
\{p, \neg p\}$ as a set of separators. Indeed, $p$ separates $1$ from $0$, and also $1$ from $\frac{1}{2}$, whereas $\neg p$ separates $0$ and $\frac{1}{2}$. One may
wonder, though, whether one could separate $0$ and $\frac{1}{2}$ without using negation.
$\triangle$ \end{example}
\begin{remark}\label{remark:MonadicityIsDecidableForMatrices} Notice that we can decide if a given matrix $\mathbb{M}$ is monadic by algorithmically generating every unary function expressible in $\mathbb{M}$, as is usually done when calculating clones over finite algebras \cite{Lau}. This procedure is, however, quite expensive, since there are $n^n$ unary functions on a set with $n$ values. The next example illustrates this procedure. \end{remark}
\begin{example}\label{ex:luk2}
Let $\Mt_{\text{\L}}'$ be the $\{\vee, \wedge, \rightarrow\}$-reduct of $\Mt_{\text{\L}}$ introduced in Example~\ref{ex:luk}, corresponding to the negationless fragment of {\L}ukasiewicz
$3$-valued logic. Let us show that $\Mt_{\text{\L}}'$ is not monadic, by generating every unary expressible function.
For simplicity, we represent a unary function $h : A \rightarrow A$ as a tuple $(h(0),h(\frac{1}{2}),h(1))$.
The formulas $p$, $p \vee p$ and $p \wedge p$ define the same function $(0, \frac{1}{2}, 1)$ and the formula $p \rightarrow p$ defines the constant function $(1,1,1)$.
It is easy to see that we cannot obtain new functions by further applying connectives, so $(0,\frac{1}{2},1)$ and $(1,1,1)$ are the only expressible unary functions.
For example, $((p\rightarrow p)\rightarrow(p\rightarrow p))_{\Mt_{\text{\L}}'} = (1,1,1)$. We conclude that $0$ cannot be separated from $\frac{1}{2}$, and so $\Mt_{\text{\L}}'$ is not monadic.
$\triangle$ \end{example}
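The generate-and-check procedure of Remark~\ref{remark:MonadicityIsDecidableForMatrices} is easy to mechanize for deterministic matrices. The sketch below (Python, not part of the paper; the truth-values $0,\frac{1}{2},1$ are encoded as the integers $0,1,2$ with $D=\{2\}$, and all function names are ours) computes the unary functions expressible in $\Mt_{\text{\L}}$ and in its negationless reduct $\Mt_{\text{\L}}'$, and checks monadicity.

```python
from itertools import product

# Lukasiewicz 3-valued tables over values 0, 1, 2 (encoding 0, 1/2, 1); D = {2}.
NEG = {0: 2, 1: 1, 2: 0}
OR  = {(a, b): max(a, b) for a, b in product(range(3), repeat=2)}
AND = {(a, b): min(a, b) for a, b in product(range(3), repeat=2)}
IMP = {(a, b): min(2, 2 - a + b) for a, b in product(range(3), repeat=2)}

def unary_clone(unary_ops, binary_ops, values=(0, 1, 2)):
    """Close the projection p under the given operations, returning every
    expressible unary function as the tuple of its outputs."""
    funcs = {tuple(values)}
    while True:
        new = set()
        for f in funcs:
            for op in unary_ops:
                new.add(tuple(op[x] for x in f))
            for g in funcs:
                for op in binary_ops:
                    new.add(tuple(op[x, y] for x, y in zip(f, g)))
        if new <= funcs:          # fixpoint reached
            return funcs
        funcs |= new

def is_monadic(funcs, designated=frozenset({2}), values=range(3)):
    """Every pair of distinct values must be separated by some function."""
    return all(any((f[a] in designated) != (f[b] in designated) for f in funcs)
               for a in values for b in values if a != b)
```

With the full signature, `is_monadic(unary_clone([NEG], [OR, AND, IMP]))` holds, whereas `unary_clone([], [OR, AND, IMP])` yields only the two functions of Example~\ref{ex:luk2}, so monadicity fails for the reduct.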
\paragraph*{Monadicity in Nmatrices} In the presence of non-determinism, we cannot follow the strategy described in Remark~\ref{remark:MonadicityIsDecidableForMatrices} and Example~\ref{ex:luk2}. A fundamental difference from the deterministic case is that the multi-functions represented by formulas in an Nmatrix are sensitive to the syntax since, even if there are multiple choices for the value of a subformula, all its occurrences need to have the same value. Crucially, in an Nmatrix $\mathbb{M}$, the multi-function $\copyright(\varphi_1,\ldots,\varphi_k)_\mathbb{M}$ does not depend only on the multi-functions $\copyright_\mathbb{M}$ and $(\varphi_1)_\mathbb{M}$, \ldots, $(\varphi_k)_\mathbb{M}$, as we shall see in the next example.
Hence, contrary to what happens in the deterministic case, when generating the expressible multi-functions in an Nmatrix $\mathbb{M}$ (to find out whether $\mathbb{M}$ is monadic, or for any other purpose), we cannot keep just the information about the multi-functions themselves: we must also keep track of the formulas that produce them. Otherwise, we might generate a non-expressible function (as every occurrence of a subformula must have the same value) or miss some multi-functions that are still expressible.
With the intent of making the notation lighter, when representing the interpretation of the connectives using tables, we drop the curly brackets around the elements of an output set. For example, in the table of Example~\ref{example:SubformulaDependenceOnExpressedFunctions} we simply write $a,c$ instead of $\{a,c\}$.
\begin{example}\label{example:SubformulaDependenceOnExpressedFunctions}
Consider the signature $\Sigma$ with only one binary connective $g$, and two Nmatrices $\mathbb{M}=\tuple{\{a,b,c\},\cdot_{\mathbb{M}},D}$ and
$\mathbb{M}'=\tuple{\{a,b,c\},\cdot_{\mathbb{M}'},\{c\}}$, with interpretations described in the following
tables.
\begin{center}
\begin{tabular}{c | c c c }
$g_\mathbb{M}$ & $a$& $b$ & $c$ \\
\hline
$a$& $c$ &$ a $ & $ b,c $ \\
$b$& $b$ &$c $ & $ a,c $ \\
$c$ &$ b,c$ &$a,c$ & $ c$
\end{tabular}
\qquad\qquad
\begin{tabular}{c | c c c }
$g_{\mathbb{M}'}$ & $a$& $b$ & $c$ \\
\hline
$a$& $c$ &$ a $ & $ c $ \\
$b$& $b$ &$c $ & $ a $ \\
$c$ &$ b,c$ &$a,c$ & $ c$
\end{tabular}
\end{center}
Let $\varphi=g(g(p,p),p)$ and $\psi=g(p,g(p,p))$. In $\mathbb{M}$, $\varphi_{\mathbb{M}} = \psi_{\mathbb{M}} = (\{b,c\},\{a,c\},\{c\})$.
Although these formulas define the same multi-function, they are not interchangeable, as $g(\varphi,\varphi)_\mathbb{M}=g(\psi,\psi)_\mathbb{M}=g(p,p)_\mathbb{M} = (\{c\},\{c\},\{c\})$ but
$g(\varphi,\psi)_\mathbb{M}=g(\psi,\varphi)_\mathbb{M}=(\{a,c\},\{b,c\},\{c\})$, thus illustrating the already mentioned sensitivity to the syntax.
Still, consider $v_x : L_{\Sigma}(P) \to \{a,b,c\}$, with $x \in \{a,b,c\}$, defined as
\begin{equation*}
v_x(\gamma) =
\begin{cases}
x &\text{if } \gamma \in P \\
c &\text{otherwise}.
\end{cases}
\end{equation*}
These functions can easily be shown to be valuations over $\mathbb{M}$ for every choice of $x$ and so, for every unary formula $\varphi \neq p$, we have that $c \in \varphi_{\mathbb{M}}(a)
\cap \varphi_{\mathbb{M}}(b) \cap \varphi_{\mathbb{M}}(c)$. We conclude that, apart from $p$, no unary formula can separate elements of $\{a,b,c\}$, and so $\mathbb{M}$ is not monadic, for
any choice of $D$.
In $\mathbb{M}'$ the outcome is radically different. As $g(p,p)_{\mathbb{M}'}(x) = \{c\}$ for
every $x \in \{a,b,c\}$, we have $g(p,g(p,p))_{\mathbb{M}'}(a) = \{c\}$ and $g(p,g(p,p))_{\mathbb{M}'}(b) = \{a\}$ and so, in this case, $\mathbb{M}'$ is monadic with set of separators
$\{p,g(p,g(p,p))\}$.
$\triangle$ \end{example}
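The syntax-sensitivity exhibited in Example~\ref{example:SubformulaDependenceOnExpressedFunctions} can be checked mechanically. The sketch below (Python, not part of the paper; formulas are represented as nested tuples, an encoding of ours) enumerates the valuations of $\mathbb{M}$ restricted to the subformulas of a formula, forcing every occurrence of a shared subformula to take the same value.

```python
# Interpretation of the binary connective g in the Nmatrix M of the example.
G = {('a', 'a'): {'c'},      ('a', 'b'): {'a'},      ('a', 'c'): {'b', 'c'},
     ('b', 'a'): {'b'},      ('b', 'b'): {'c'},      ('b', 'c'): {'a', 'c'},
     ('c', 'a'): {'b', 'c'}, ('c', 'b'): {'a', 'c'}, ('c', 'c'): {'c'}}

def subterms(t, seen=None):
    """Distinct subformulas in bottom-up order (a shared subterm appears once)."""
    if seen is None:
        seen = []
    if t not in seen:
        if isinstance(t, tuple):
            for child in t[1:]:
                subterms(child, seen)
        seen.append(t)
    return seen

def outputs(formula, x):
    """Set of values formula can take over valuations with v(p) = x; every
    occurrence of a shared subformula receives the same value."""
    terms, results = subterms(formula), set()
    def assign(i, env):
        if i == len(terms):
            results.add(env[formula])
            return
        t = terms[i]
        choices = {x} if t == 'p' else G[env[t[1]], env[t[2]]]
        for v in choices:
            assign(i + 1, {**env, t: v})
    assign(0, {})
    return results

phi = ('g', ('g', 'p', 'p'), 'p')   # g(g(p,p),p)
psi = ('g', 'p', ('g', 'p', 'p'))   # g(p,g(p,p))
```

It confirms that $\varphi$ and $\psi$ express the same multi-function, while `outputs(('g', phi, phi), 'a')` is $\{c\}$ and `outputs(('g', phi, psi), 'a')` is $\{a,c\}$, as in the example.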
\section{Counter machines and Nmatrices}\label{sec:machines} In this section we recall the essentials of counter machines, and define a suitable finite Nmatrix representing the computations of any given counter machine. \subsection*{Counter machines} A \emph{(deterministic) counter machine} is a tuple $\mathcal{C} = \langle n, Q, q_{\mathsf{init}}, \delta \rangle$, where $n \in \mathbb{N}$ is the number of counters, $Q$ is a finite set of states, $q_{\mathsf{init}} \in Q$ is the initial state, and $\delta$ is a partial transition function $\delta:Q\not\to (\{\inc{i} : 1 \leq i \leq n\} \times Q) \cup (\{\test{i} : 1 \leq i \leq n\} \times Q^2)$.
The set of halting states of $\mathcal{C}$ is denoted by $H=\{q\in Q:\delta(q)\text{ is undefined}\}$.
A configuration of $\mathcal{C}$ is a tuple $C=(q,\vec{x}) \in Q \times \mathbb{N}_0^n$, where $q$ is a state, and $\vec{x}=x_1,\ldots,x_n$ are the values of the counters. Let $\mathsf{Conf}(\mathcal{C})$ be the set of all configurations.
$C\in \mathsf{Conf}(\mathcal{C})$ is said to be the initial configuration if $q=q_{\mathsf{init}}$ and $\vec{x}=\vec{0}$. $C$ is said to be a halting configuration if $q \in H$.
When $(q,\vec{y})$ is not a halting configuration, the transition function $\delta$ completely determines the next configuration $\textsf{nxt}(q,\vec{y})$ as follows: \begin{equation*}
\textsf{nxt}(q, \vec{y}) =
\begin{cases}
(q', \vec{y} + \vec{\mathsf{e}_i}) &\text{if } \delta(q) = \tuple{\inc{i}, q'}, \\
(q'', \vec{y}) &\text{if } \delta(q) = \tuple{\test{i}, q'', q'''} \text{ and } y_i = 0, \\
(q''', \vec{y} - \vec{\mathsf{e}_i}) &\text{if } \delta(q) = \tuple{\test{i}, q'', q'''} \text{ and } y_i \neq 0,
\end{cases} \end{equation*} where $\vec{\mathsf{e}_i}$ is such that $(\mathsf{e}_i)_i = 1$ and $(\mathsf{e}_i)_j = 0$, for all $j \neq i$.
The computation of $\mathcal{C}$ is a finite or infinite sequence of configurations $\tuple{C_i}_{i<\eta}$, where $\eta\in\mathbb{N}_0\cup\{\omega\}$ such that $C_0$ is the initial configuration, and for each $i<\eta$, either $C_i$ is a halting configuration and $i+1=\eta$ is the length of the computation, or else $C_{i+1}=\textsf{nxt}(C_i)$. \pagebreak
The intuition behind the transitions of a counter machine is clear from the underlying notion of computation, and in particular the definition of the next configuration. Clearly, $\delta(q)=\tuple{\inc{i},r}$ results in incrementing the $i$-th counter and moving to state $r$, whereas $\delta(q)=\tuple{\test{i},r,s}$ either moves to state $r$, leaving the counters unchanged, when the value of the $i$-th counter is zero, or moves to state $s$, and decrements the $i$-th counter, when its value is not zero. As usual in counter machine models (see~\cite{minsky}), and also for the sake of simplicity, we are assuming that in the initial configuration all counters have value 0. This is well known not to hinder the computational power of the model, as a machine can always start by setting the counters to other desired input values. We will base our undecidability result on the following well known result about counter machines.
\begin{theorem}[\cite{minsky}]\label{thm:Halt}
It is undecidable if a given counter machine halts when starting with all counters set to zero. \end{theorem}
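For concreteness, the transition behaviour just described can be simulated directly. The following sketch (Python, not part of the paper; the tuple encoding and state names are ours) runs a machine from the all-zero configuration and returns its computation, or `None` when no halting configuration is reached within a step bound.

```python
def run(machine, max_steps=10_000):
    """Simulate a deterministic counter machine <n, q_init, delta> from the
    all-zero configuration.  delta maps each non-halting state either to
    ('inc', i, q') or to ('test', i, q_zero, q_pos)."""
    n, q_init, delta = machine
    q, x = q_init, [0] * n
    trace = [(q, tuple(x))]
    for _ in range(max_steps):
        if q not in delta:                 # delta(q) undefined: halting state
            return trace
        t = delta[q]
        if t[0] == 'inc':                  # increment counter i, move on
            _, i, q = t
            x[i] += 1
        else:                              # test counter i
            _, i, q_zero, q_pos = t
            if x[i] == 0:
                q = q_zero
            else:
                x[i] -= 1
                q = q_pos
        trace.append((q, tuple(x)))
    return None                            # no halting configuration found
```

On the one-counter machine of the example further below (with $q_{\mathsf{init}}$ written `q0`), `run` returns the four-configuration computation $\tuple{(q_{\mathsf{init}},0),(q_1,1),(q_2,0),(q_3,0)}$.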
In what follows, given $\vec{x} \in A^n$ and $f : A \rightarrow B$, we define $f(\vec{x}) \in B^n$ as $f(\vec{x}) = \tuple{f(x_i) : 1 \leq i \leq n}$. For a given counter machine $\mathcal{C} = \langle n, Q, q_{\mathsf{init}}, \delta \rangle$, we define the signature $\Sigma_{\mathcal{C}}$ such that $\Sigma_{\mathcal{C}}^{(0)} = \{\mathsf{zero}, \epsilon\}$, $\Sigma_{\mathcal{C}}^{(1)} = \{\mathsf{suc}\}$, $\Sigma_{\mathcal{C}}^{(n+1)} = \{\mathsf{step}_q : q \in Q\}$ and $\Sigma_{\mathcal{C}}^{(j)} = \emptyset$, for all $j \notin \{0,1,n+1\}$. As usual, $\mathsf{zero}$ and $\mathsf{suc}$ allow us to encode every $k \in \mathbb{N}_0$ as the closed formula $\mathsf{enc}(k)=\mathsf{suc}^k(\mathsf{zero})$. Moreover, we can encode every finite sequence of configurations $\tuple{C_0,\ldots,C_k}$ as a sequence formula in the following way: \begin{itemize}
\item $\mathsf{seq}(\tuple{}) = \epsilon$, and $\mathsf{seq}(\tuple{C_0,\ldots,C_{k-1},(q, \vec{y})}) = \mathsf{step}_q(\mathsf{seq}(\tuple{C_0,\ldots,C_{k-1}}),\mathsf{enc}(\vec{y}))$.
\end{itemize}
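The encodings $\mathsf{enc}$ and $\mathsf{seq}$ translate directly into code. The sketch below (Python, not part of the paper; formulas are rendered as strings, with $\epsilon$ written `eps`, purely for illustration) builds the sequence formula for a list of configurations.

```python
def enc(k):
    """enc(k) = suc^k(zero)."""
    return 'zero' if k == 0 else f'suc({enc(k - 1)})'

def seq(configs):
    """seq(<C_0,...,C_k>), built left-to-right starting from eps."""
    formula = 'eps'
    for q, xs in configs:
        args = ','.join([formula] + [enc(x) for x in xs])
        formula = f'step_{q}({args})'
    return formula
```

For instance, `seq([('q0', (0,)), ('q1', (1,))])` yields `step_q1(step_q0(eps,zero),suc(zero))`.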
We will construct a finite Nmatrix $\mathbb{M}_\mathcal{C}$ over $\Sigma_\mathcal{C}$ whose only possible theorem is the formula encoding the finite computation of $\mathcal{C}$, if it exists. This means that $\mathbb{M}_\mathcal{C}$ must be able to falsify a formula $\varphi$ precisely when $\varphi$ is not a sequence formula, or $\varphi = \mathsf{seq}(\tuple{C_0,\ldots,C_k})$ but $C_0$ is not the initial configuration of $\mathcal{C}$, or $C_k$ is not a halting configuration of $\mathcal{C}$, or $\textsf{nxt}(C_i)\neq C_{i+1}$ for some $0 \leq i < k$.\\[-8mm]
\subsection*{From counter machines to Nmatrices} For a given counter machine $\mathcal{C} = \tuple{n, Q, q_{\mathsf{init}}, \delta}$ let \begin{equation*}
\textbf{Rm}=\{\mathtt{r}_{= 0},\mathtt{r}_{\geq 0},\mathtt{r}_{\geq 1},\mathtt{r}_{\geq 2}\} \qquad\text{ and }\qquad
\mathbf{Conf}=\{\mathtt{conf}_{q,\overrightarrow{\mathtt{r}}} :q \in Q,\overrightarrow{\mathtt{r}}\in \textbf{Rm}^n\} \end{equation*} and consider the Nmatrix $\mathbb{M}_\mathcal{C} = \tuple{A, \cdot_{\mathbb{M}_\mathcal{C}}, D}$ over $\Sigma_\mathcal{C}$, where \begin{align*}
A = \textbf{Rm} \cup \mathbf{Conf} \cup \{\mathtt{init},\mathtt{error}\} \qquad\text{ and }\qquad
D= \{\mathtt{conf}_{q,\overrightarrow{\mathtt{r}}} :q \in H,\overrightarrow{\mathtt{r}}\in \textbf{Rm}^n\}\qquad\text{ and } \end{align*} \begin{equation*}
\mathsf{zero}_{\mathbb{M}_\mathcal{C}} = \{\mathtt{r}_{= 0},\mathtt{r}_{\geq 0}\} \qquad \epsilon_{\mathbb{M}_\mathcal{C}} = \{\mathtt{init}\} \qquad
\mathsf{suc}_{\mathbb{M}_\mathcal{C}}(x) =
\begin{cases}
\{\mathtt{r}_{\geq 1}\} &\quad\text{if } x = \mathtt{r}_{= 0} \\
\{\mathtt{r}_{\geq 0},\mathtt{r}_{\geq 1}\} &\quad\text{if } x = \mathtt{r}_{\geq 0} \\
\{\mathtt{r}_{\geq 2}\} &\quad\text{if } x \in\{ \mathtt{r}_{\geq 1},\mathtt{r}_{\geq 2} \}\\
\{\mathtt{error}\} &\quad\text{otherwise}
\end{cases} \end{equation*} \begin{equation*}
(\mathsf{step}_q)_{\mathbb{M}_\mathcal{C}}(x,\vec{z}) =
\begin{cases}
\{\mathtt{conf}_{q,\vec{z}}\}
&\text{if } x=\mathtt{init},q=q_{\mathsf{init}}\text{ and }\vec{z}\in \{\mathtt{r}_{= 0}\}^n\cup\{\mathtt{r}_{\geq 0}\}^n, \text{ or} \\
&\text{if } x=\mathtt{conf}_{q',\overrightarrow{y}}, \overrightarrow{z} \in \textbf{Rm}^n, \text{ and }\\
&\,\,\delta(q')=\tuple{\test{i},q,s},y_i\in\{\mathtt{r}_{= 0},\mathtt{r}_{\geq 0}\}\text{ and }\overrightarrow{y}=\overrightarrow{z},\text{ or }\\
&\,\,\delta(q')=\tuple{\test{i},s,q},y_i\in \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(z_i) \text{ and } z_j=y_j \text{ for }j\neq i , \text{ or } \\
&\,\,\delta(q')=\tuple{\inc{i},q},z_i\in \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(y_i) \text{ and } z_j=y_j \text{ for }j\neq i \\
\{\mathtt{error}\} &\text{otherwise}
\end{cases} \end{equation*} where, $s \in Q$ represents an arbitrary state.\pagebreak
$\mathbb{M}_\mathcal{C}$ is conceived as a finite way of representing the behavior of $\mathcal{C}$. For that purpose, it is useful to understand the operations $\mathsf{zero}$ and $\mathsf{suc}$ as means of representing the natural number values of the counters. Their interpretation, however, is finitely defined over the abstract values $\textbf{Rm}$. In fact, in order to check if some formula $\varphi$ encodes a sequence of configurations respecting $\textsf{nxt}$, it is not essential to distinguish all natural values. Indeed, it is easy to conclude from the definition of counter machine that in each computation step its counters either retain their previous values, or else they are incremented or decremented. As we set the initial configuration with all counters set to zero and the effect of test transitions also depends on detecting zero values, it is sufficient to be able to characterize the value $0$ unambiguously and, additionally, to recognize pairs of values whose difference is larger than one. This is successfully accomplished with the proposed non-deterministic interpretation of $\mathsf{suc}$, as shall be made clear below. The $\epsilon$ and $\mathsf{step}$ operations are then meant to represent sequences of configurations, whereas their interpretation over the abstract values $\mathbf{Conf}\cup\{\mathtt{init}\}$ guarantees that consecutive configurations respect $\textsf{nxt}$. Of course, the designated values of $\mathbb{M}_\mathcal{C}$ are those corresponding to halting configurations. The $\mathtt{error}$ value is absorbing with respect to the interpretation of all operations, and gathers all meaningless situations. Overall, as we will show, $\mathbb{M}_{\mathcal{C}}$ induces a logic that has at most one theorem, corresponding to the computation of $\mathcal{C}$, if it is halting.
\subsection*{The inner workings of the construction} In the next examples we will illustrate the way the Nmatrix $\mathbb{M}_\mathcal{C}$ encodes the computations of $\mathcal{C}$. Proofs of the general statements are postponed to the next section.
In order for $\vdash_{\mathbb{M}_\mathcal{C}}$ to have, at most, the formula representing the computation of $\mathcal{C}$ as theorem, $\Val(\mathbb{M}_\mathcal{C})$ must contain enough valuations to refute every formula not representing the computation of $\mathcal{C}$. These valuations are presented in the following example. \begin{example}\label{ex:v} By definition of $\mathsf{seq}$, it is clear that no formula containing variables corresponds to a sequence of configurations. Furthermore, no formula containing variables can be a theorem of $\mathbb{M}_\mathcal{C}$ since these formulas are easily refuted by any valuation sending the variables to the truth value $\mathtt{error}$, as this value is absorbing (aka infectious), that is, $\mathsf{suc}_{\mathbb{M}_{\mathcal{C}}}(x) = \{\mathtt{error}\}$ whenever $x = \mathtt{error}$ and $(\mathsf{step}_q)_{\mathbb{M}_\mathcal{C}}(x,\vec{z}) = \{\mathtt{error}\}$ whenever $x = \mathtt{error}$ or $z_i = \mathtt{error}$ for some $i$. Because of this, from here onwards, we concern ourselves only with the truth-values assigned to closed formulas.
We do not have much freedom left, but it will be enough. The interpretations of the connectives are all deterministic, except in the case of $\mathsf{zero}_{\mathbb{M}_\mathcal{C}}$ and $\mathsf{suc}_{\mathbb{M}_\mathcal{C}}(\mathtt{r}_{\geq 0})$. This means that, if $v \in \Val(\mathbb{M}_\mathcal{C})$ and $v(\mathsf{zero}) = \mathtt{r}_{=0}$, then there is no choice left for the values assigned by $v$ to the remaining closed formulas. Consider, therefore, the following valuation \begin{equation*}
v^=_0(\psi) =
\begin{cases}
\mathtt{r}_{=0} &\text{if } \psi = \mathsf{zero}, \\
\mathtt{r}_{\geq 1} &\text{if } \psi = \mathsf{enc}(1), \\
\mathtt{r}_{\geq 2} &\text{if } \psi = \mathsf{enc}(j) \text{ with } j \geq 2, \\
\mathtt{init} &\text{if } \psi = \epsilon, \\
(\mathsf{step}_q)_{\mathbb{M}_\mathcal{C}}(v^=_0(\varphi),\vec{z}) &\text{if } \psi = \mathsf{step}_q(\varphi,\psi_1,\dots,\psi_n) \text{ and } z_i = v^=_0(\psi_i), \\
\mathtt{error}, &\text{otherwise}.
\end{cases} \end{equation*} If $v(\mathsf{zero}) = \mathtt{r}_{\geq 0}$, however, we can still choose for how many formulas of the form $\mathsf{enc}(j)$ the value $\mathtt{r}_{\geq 0}$ is repeated. This number of repetitions can be finite or infinite, though the infinite case is of no interest to us, since it does not allow us to falsify any of the formulas that we want to falsify.
For each $k \in\mathbb{N}_0$, consider the valuation \begin{equation*}
v_k(\psi) =
\begin{cases}
\mathtt{r}_{\geq 0} &\text{if } \psi = \mathsf{enc}(j) \text{ with } j \leq k, \\
\mathtt{r}_{\geq 1} &\text{if } \psi = \mathsf{enc}(j) \text{ with } j = k + 1, \\
\mathtt{r}_{\geq 2} &\text{if } \psi = \mathsf{enc}(j) \text{ with } j \geq k + 2, \\
\mathtt{init} &\text{if } \psi = \epsilon, \\
(\mathsf{step}_q)_{\mathbb{M}_\mathcal{C}}(v_k(\varphi),\vec{z}) &\text{if } \psi = \mathsf{step}_q(\varphi,\psi_1,\dots,\psi_n) \text{ and } z_i = v_k(\psi_i), \\
\mathtt{error}, &\text{otherwise}.
\end{cases} \end{equation*}\\[-15mm]
$\triangle$ \end{example}
As previously discussed, it is crucial for the valuations in $\Val(\mathbb{M}_\mathcal{C})$ to be able to identify whenever two given numbers $a,b \in \mathbb{N}_0$ are not consecutive, or are different. To this aim, for every pair $(a,b) \in \mathbb{N}_0^2$, we denote by $\mu^{+}_{a,b},\mu^{-}_{a,b},\mu^{\neq}_{a,b}$ the valuations determined by the following conditions: \begin{equation*}
\mu^{+}_{a,b} =
\begin{cases}
v_a &\text{if } b \geq a + 1, \\
v_{a-1} &\text{if } b \leq a \text{ and } a \neq 0, \\
v^=_0 &\text{if } b \leq a \text{ and } a = 0.
\end{cases}
\quad
\mu^{-}_{a,b} =
\begin{cases}
v_{a-2} &\text{if } b \leq a - 2, \\
v_{a-1} &\text{if } b \geq a - 1 \text{ and } a \neq 0, \\
v^=_0 &\text{if } b \geq a - 1 \text{ and } a = 0.
\end{cases}
\quad
\mu^{\neq}_{a,b} =
\begin{cases}
v^=_0 &\text{if } a = 0, \\
v_{a-1} &\text{if } a \neq 0.
\end{cases} \end{equation*}
\begin{remark}\label{remark:MuProperties}
The following properties can be easily checked by inspecting the corresponding definition:
\begin{itemize}
\item if $b \neq a + 1$ then $\mu^{+}_{a,b}(\mathsf{enc}(b)) \notin \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(\mu^{+}_{a,b}(\mathsf{enc}(a)))$,
\item if $b \neq a - 1$ then $\mu^{-}_{a,b}(\mathsf{enc}(a)) \notin \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(\mu^{-}_{a,b}(\mathsf{enc}(b)))$, and
\item if $b \neq a$ then $\mu^{\neq}_{a,b}(\mathsf{enc}(b)) \neq \mu^{\neq}_{a,b}(\mathsf{enc}(a))$.
\end{itemize} \end{remark}
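The first property of Remark~\ref{remark:MuProperties} can be verified by brute force. In the sketch below (Python, not part of the paper), a valuation $v_k$ is represented by the threshold $k$ at which the value $\mathtt{r}_{\geq 0}$ stops repeating, and $v^=_0$ by the threshold `None`; this parametrization is ours.

```python
# Abstract counter values and the non-deterministic successor of M_C.
R_EQ0, R_GE0, R_GE1, R_GE2 = 'r=0', 'r>=0', 'r>=1', 'r>=2'
SUC = {R_EQ0: {R_GE1}, R_GE0: {R_GE0, R_GE1},
       R_GE1: {R_GE2}, R_GE2: {R_GE2}}

def value(k, threshold):
    """Truth-value assigned to enc(k) by v_threshold, or by v_0^= when
    threshold is None."""
    if threshold is None:
        return R_EQ0 if k == 0 else (R_GE1 if k == 1 else R_GE2)
    if k <= threshold:
        return R_GE0
    return R_GE1 if k == threshold + 1 else R_GE2

def mu_plus(a, b):
    """Threshold encoding the valuation mu^+_{a,b} from the definition."""
    if b >= a + 1:
        return a
    return a - 1 if a != 0 else None
```

A brute-force check over small $a,b$ confirms that $\mu^{+}_{a,b}(\mathsf{enc}(b)) \in \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(\mu^{+}_{a,b}(\mathsf{enc}(a)))$ holds exactly when $b = a + 1$.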
In the following two examples we consider two different machines that should make clear the soundness of our construction. In the first one, we show how every valuation validates the formula encoding a finite computation. We also see how sequences of configurations can fail to respect $\textsf{nxt}$ in different ways and how we can use the valuations presented in Example~\ref{ex:v} to falsify formulas encoding them. In the second example, we show a counter machine that never halts.
\begin{example}
Consider the counter machine $\mathcal{C} = \tuple{1,Q,q_{\mathsf{init}},\delta}$ with $Q = \{q_{\mathsf{init}},q_1,q_2,q_3\}$ and $\delta$ as defined in the following table
\begin{center}
\begin{tabular}{ c | c c c c}
$q$ & $q_{\mathsf{init}}$ & $q_1$ & $q_2$ & $q_3$ \\
\hline
$\delta(q)$ & $\tuple{\inc{1},q_1}$ & $\tuple{\test{1},q_3,q_2}$ & $\tuple{\test{1},q_3,q_3}$ & undefined
\end{tabular}
\end{center}
The only halting state of $\mathcal{C}$ is $q_3$ and the machine $\mathcal{C}$ has the following finite computation
\begin{equation*}
\tuple{(q_{\mathsf{init}},0),(q_1,1),(q_2,0),(q_3,0)}.
\end{equation*}
For every $v \in \Val(\mathbb{M}_\mathcal{C})$, we have $v(\epsilon)=\mathtt{init}$. The values of $v(\mathsf{enc}(k))$ are dependent on $v$:
if $v = v^=_0$ then $v(\mathsf{enc}(0)) = \mathtt{r}_{=0}$ and $v(\mathsf{enc}(1)) = \mathtt{r}_{\geq 1}$.
If $v \neq v^=_0$ then $v(\mathsf{enc}(0)) = \mathtt{r}_{\geq 0}$ and $v(\mathsf{enc}(1)) \in (\mathsf{suc})_{\mathbb{M}_\mathcal{C}}(\mathtt{r}_{\geq 0}) = \{\mathtt{r}_{\geq 0},\mathtt{r}_{\geq 1}\}$.
Let $\varphi_j$ be the formula representing the prefix with only the first $j+1$ configurations. From the above equalities
and the definition of $\mathsf{step}_\mathbb{M}$, we obtain that
\begin{align*}
v(\varphi_0) &= v(\mathsf{step}_{q_{\mathsf{init}}}(\epsilon,\mathsf{enc}(0))) = (\mathsf{step}_{q_{\mathsf{init}}})_\mathbb{M}(\mathtt{init},v(\mathsf{enc}(0)))= \mathtt{conf}_{q_{\mathsf{init}},v(\mathsf{enc}(0))} \\
v(\varphi_1) &= v(\mathsf{step}_{q_1}(\varphi_0,\mathsf{enc}(1))) = (\mathsf{step}_{q_1})_\mathbb{M}(\mathtt{conf}_{q_{\mathsf{init}},v(\mathsf{enc}(0))},v(\mathsf{enc}(1)))= \mathtt{conf}_{q_1,v(\mathsf{enc}(1))} \\
v(\varphi_2) &= v(\mathsf{step}_{q_2}(\varphi_1,\mathsf{enc}(0))) = (\mathsf{step}_{q_2})_\mathbb{M}(\mathtt{conf}_{q_1,v(\mathsf{enc}(1))},v(\mathsf{enc}(0)))= \mathtt{conf}_{q_2,v(\mathsf{enc}(0))} \\
v(\varphi_3) &= v(\mathsf{step}_{q_3}(\varphi_2,\mathsf{enc}(0))) = (\mathsf{step}_{q_3})_\mathbb{M}(\mathtt{conf}_{q_2,v(\mathsf{enc}(0))},v(\mathsf{enc}(0)))= \mathtt{conf}_{q_3,v(\mathsf{enc}(0))}
\end{align*}
The formula $\varphi_3$ encodes the finite computation of $\mathcal{C}$ and, since $\mathtt{conf}_{q_3,v(\mathsf{enc}(0))} \in D$, $\emptyset \vdash_{\mathbb{M}_\mathcal{C}}
\varphi_3$. Furthermore, the formulas $\varphi_i$ with $0\leq i< 3$, which encode its strict prefixes, are falsified by all valuations, since
$q_3$ is the only halting state.
Formulas not representing sequences of configurations, like $\mathsf{suc}(\psi)$ with
$\psi\neq \mathsf{enc}(j)$, are falsified by every $v \in \Val(\mathbb{M}_\mathcal{C})$ since $v(\mathsf{suc}(\psi))=\mathsf{suc}_{\mathbb{M}_\mathcal{C}}(v(\psi))=\mathtt{error}$.
Formulas encoding sequences of configurations not starting in the initial configuration of $\mathbb{M}$ are also falsifiable:
$v_0(\mathsf{step}_q(\psi,\mathsf{enc}(j)))=\mathtt{error}$ whenever, either $q\neq q_{\mathsf{init}}$ and $\psi=\epsilon$, or,
$q=q_{\mathsf{init}}$, $\psi=\epsilon$ and $j\neq 0$. For example,
$v_0(\mathsf{step}_{q_{\mathsf{init}}}(\epsilon,\mathsf{enc}(1)))=(\mathsf{step}_{q_{\mathsf{init}}})_{\mathbb{M}_{\mathcal{C}}}(\mathtt{init},\mathtt{r}_{\geq 1}) =\mathtt{error}$.
The sequence $\tuple{(q_{\mathsf{init}},0),(q_1,2)}$, encoded by $\psi=\mathsf{step}_{q_1}(\varphi_0,\mathsf{enc}(2))$, illustrates a situation
where the value in the counter was incremented by two while
the transition $\delta(q_{\mathsf{init}})=\tuple{\inc{1},q_1}$ required it to increase by only one. In this case, we have
$\mu^{+}_{0,2}(\psi)=v^=_0(\psi)=(\mathsf{step}_{q_1})_\mathbb{M}(\mathtt{conf}_{q_{\mathsf{init}},\mathtt{r}_{= 0}},\mathtt{r}_{\geq 2})=\mathtt{error}$.
In the same way, we also have $\mu^{-}_{2,0}(\gamma)=\mathtt{error}$ with $\gamma=\mathsf{step}_{q_2}(\varphi_1,\mathsf{enc}(2))$.
This reflects the fact that $\gamma$ encodes the sequence resulting from appending $(q_2,2)$ to the sequence encoded by $\varphi_1$, hence
incrementing the value of the counter while $\delta(q_1)$ required it to be decremented by one.
Finally, consider $\xi=\mathsf{step}_{q_3}(\varphi_2,\mathsf{enc}(1))$ encoding the sequence resulting from appending $(q_3,1)$ to the sequence encoded by
$\varphi_2$. As the value in the first counter was incremented, while the transition $\delta(q_2)=\tuple{\test{1},q_3,q_3}$ required it to remain
unchanged we obtain $\mu^{\neq}_{0,1}(\xi) = v^=_0(\xi) = \mathtt{error}$.
$\triangle$ \end{example}
\begin{example}
Consider the counter machine $\mathcal{C} = \tuple{2,Q,q_{\mathsf{init}},\delta}$ with $Q = \{q_{\mathsf{init}},q_1,q_2,q_3,q_4\}$ and $\delta$ as defined in the following table
\begin{center}
\begin{tabular}{c | c c c c c}
$q$ & $q_{\mathsf{init}}$ & $q_1$ & $q_2$ & $q_3$ & $q_4$ \\
\hline
$\delta(q)$ & $\tuple{\inc{1},q_1}$ & $\tuple{\inc{2},q_2}$ & $\tuple{\test{1},q_4,q_3}$ & $\tuple{\inc{1},q_{\mathsf{init}}}$ & undefined
\end{tabular}
\end{center}
This machine does not have a finite computation, and its infinite computation loops indefinitely in the following cycle consisting of $4$ transitions.
It starts by incrementing both counters.
Then it tests if the first counter has the value $0$.
As the counter has just been incremented, the test is bound to fail and hence that counter is decremented.
It then increments the same counter, and returns to the initial state.
The halting state $q_4$ could only be reached if, at some point, the test succeeded, but this never happens since, at the points where the tests are
made, the value of the first counter is never $0$. Thus, the machine $\mathcal{C}$ has the following infinite computation
\begin{equation*}
\tuple{(q_{\mathsf{init}},0,0),(q_1,1,0),(q_2,1,1),(q_3,0,1),(q_{\mathsf{init}},1,1),(q_1,2,1),(q_2,2,2),(q_3,1,2),(q_{\mathsf{init}},2,2),\ldots}
\end{equation*}
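The cycle described above can again be observed by direct simulation; this sketch reuses the same ad hoc transition encoding as before and is only an illustration, not part of the formal development.

```python
# Simulation of the 2-counter machine from this example. The semantics of
# "test" follows the paper: on a non-zero counter the test fails, the
# counter is decremented, and control moves to the second target state.

delta = {
    "q_init": ("inc", 1, "q1"),
    "q1": ("inc", 2, "q2"),
    "q2": ("test", 1, "q4", "q3"),
    "q3": ("inc", 1, "q_init"),
    # q4 is halting: delta is undefined there (and never reached).
}

def run_prefix(delta, n_counters, n_steps):
    state, counters = "q_init", [0] * n_counters
    trace = [(state, *counters)]
    for _ in range(n_steps):
        instr = delta[state]
        if instr[0] == "inc":
            _, i, nxt = instr
            counters[i - 1] += 1
        else:
            _, i, qz, qnz = instr
            if counters[i - 1] == 0:
                nxt = qz
            else:
                counters[i - 1] -= 1
                nxt = qnz
        state = nxt
        trace.append((state, *counters))
    return trace

prefix = run_prefix(delta, 2, 8)
print(prefix)
```

The printed prefix matches the first nine configurations displayed above, and running the simulation for many more steps never visits the halting state $q_4$.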
As we show in the next section, since $\mathcal{C}$ has no finite computations, $\mathbb{M}_{\mathcal{C}}$ has no theorems.
Let $k\geq 1$ and consider the sequence resulting from adding $(q_3,k-1,k+1)$ to the prefix of the infinite computation of $\mathcal{C}$ with
$4(k-1)+3$ elements, and let $\varphi_k$ encode this sequence. In this case, the value of the second counter is increased, when it should have
remained the same, and we have
\begin{equation*}
\mu^{\neq}_{k,k+1}(\varphi_k) = v_{k-1}(\varphi_k) = (\mathsf{step}_{q_3})_{\mathbb{M}_\mathcal{C}}(\mathtt{conf}_{q_2,\mathtt{r}_{\geq 1},\mathtt{r}_{\geq 1}},\mathtt{r}_{\geq 0},\mathtt{r}_{\geq 2})
= \mathtt{error}.
\end{equation*}
$\triangle$
\end{example}
\section{Monadicity of Nmatrices is undecidable}\label{sec:undecided} In this section we show that $\mathbb{M}_\mathcal{C}$ really does what is intended. The main result of the paper then follows, after we additionally introduce a construction connecting the existence of a theorem with monadicity.
\subsection*{$\mathbb{M}_\mathcal{C}$ validates the finite computation of $\mathcal{C}$} In the following propositions we show that $\mathbb{M}_\mathcal{C}$ is interpreting computations of $\mathcal{C}$ as it should. Thus, formulas encoding computations of $\mathcal{C}$ that do not end in a halting state can be falsified, whilst the one encoding the finite computation of $\mathcal{C}$ is always designated.
\begin{proposition}\label{prop:StepValidatesNxt}
Let $\mathcal{C} = \tuple{n,Q,q_{\mathsf{init}},\delta}$ be a deterministic $n$-counter machine. If $\textsf{nxt}(q,\vec{y}) = (q',\vec{z})$ then
\begin{equation}\label{equation:ValidStep}
(\mathsf{step}_{q'})_{\mathbb{M}_\mathcal{C}}(\mathtt{conf}_{q,v(\mathsf{enc}(\vec{y}))}, v(\mathsf{enc}(\vec{z}))) = \{\mathtt{conf}_{q',v(\mathsf{enc}(\vec{z}))}\}, \text{ for every } v \in \Val(\mathbb{M}_\mathcal{C}).
\end{equation} \end{proposition} \begin{proof}
Suppose that $\textsf{nxt}(q,\vec{y}) = (q',\vec{z})$. We have to consider three cases, depending on
$\delta(q)$.
If $\delta(q) = (\inc{i},q')$ and $\vec{z} = \vec{y} + \vec{\mathsf{e}_i}$ then, for every $j \neq i$ and $v \in \Val(\mathbb{M}_\mathcal{C})$, we have
$z_j = y_j$ and $v(\mathsf{enc}(z_j)) = v(\mathsf{enc}(y_j))$. Furthermore, $z_i = y_i + 1$, so $\mathsf{enc}(z_i) = \mathsf{suc}(\mathsf{enc}(y_i))$ and, for every $v \in \Val(\mathbb{M}_\mathcal{C})$,
$v(\mathsf{enc}(z_i)) = v(\mathsf{suc}(\mathsf{enc}(y_i))) \in \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(v(\mathsf{enc}(y_i)))$.
Otherwise, $\delta(q) = \tuple{\test{i}, s_1, s_2}$ for some machine states $s_1$ and $s_2$.
If $s_1=q'$ then
$y_i = 0$ and $\vec{z} = \vec{y}$. Hence, for every
$v \in \Val(\mathbb{M}_\mathcal{C})$, we have $v(\mathsf{enc}(\vec{z})) = v(\mathsf{enc}(\vec{y}))$ and $v(\mathsf{enc}(z_i)) = v(\mathsf{enc}(y_i)) = v(\mathsf{zero}) \in \{\mathtt{r}_{=0}, \mathtt{r}_{\geq 0}\}$.
If $s_2=q'$ then
$y_i \neq 0$ and $\vec{y} = \vec{z} + \vec{\mathsf{e}_i}$.
Then, for every $j \neq i$ and $v \in \Val(\mathbb{M}_\mathcal{C})$, we have $z_j = y_j$ and so $v(\mathsf{enc}(z_j)) = v(\mathsf{enc}(y_j))$. Furthermore, $y_i = z_i + 1$,
so $\mathsf{enc}(y_i) = \mathsf{suc}(\mathsf{enc}(z_i))$ and, for every $v \in \Val(\mathbb{M}_\mathcal{C})$, $v(\mathsf{enc}(y_i)) = v(\mathsf{suc}(\mathsf{enc}(z_i))) \in \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(v(\mathsf{enc}(z_i)))$.
In all three cases we conclude
\eqref{equation:ValidStep} directly by the definition of $(\mathsf{step}_{q'})_{\mathbb{M}_\mathcal{C}}$. \end{proof}
\begin{theorem}
\label{theorem:IfFiniteComputationThenTheorem}
If $\mathcal{C} = \tuple{n,Q,q_{\mathsf{init}},\delta}$ is a deterministic $n$-counter machine with a finite computation $\tuple{C_0,\dots,C_k}$, then
$\emptyset \vdash_{\mathbb{M}_\mathcal{C}} \mathsf{seq}(\tuple{C_0,\dots,C_k})$. \end{theorem} \begin{proof}
Suppose that $\tuple{C_0,\dots,C_k}$ is the finite computation of $\mathcal{C}$. Then we have that
$C_0 = (q_{\mathsf{init}}, \vec{\mathsf{zero}})$, where $\vec{\mathsf{zero}} =
\tuple{\mathsf{zero},\dots,\mathsf{zero}}$,
$\textsf{nxt}(C_j) = C_{j+1}$ for every $0 \leq j < k$ and
$C_k = (q_k, \vec{z})$ is a halting configuration.
For every $v \in \Val(\mathbb{M}_\mathcal{C})$ we have $v(\mathsf{seq}(\tuple{C_0})) = \mathtt{conf}_{q_{\mathsf{init}},v(\vec{\mathsf{zero}})}$ and, by repeated application of proposition~\ref{prop:StepValidatesNxt},
$v(\mathsf{seq}(\tuple{C_0,\dots,C_k})) = \mathtt{conf}_{q_k,v(\mathsf{enc}(\vec{z}))}$. Since $C_k$ is a halting configuration, we conclude that
$\mathtt{conf}_{q_k,v(\mathsf{enc}(\vec{z}))} \in D$, and so $\emptyset \vdash_{\mathbb{M}_\mathcal{C}} \mathsf{seq}(\tuple{C_0,\dots,C_k})$. \end{proof}
\subsection*{$\mathbb{M}_\mathcal{C}$ can falsify everything else} The following propositions deal with all the possible ways in which a formula can fail to represent a halting computation of $\mathcal{C}$.
\begin{proposition}\label{prop:RefutesNonSequences}
Let $\mathcal{C} = \tuple{n,Q,q_{\mathsf{init}},\delta}$ be a deterministic $n$-counter machine.
If $\varphi \in \fm{\Sigma_\mathcal{C}}(\emptyset)$ does not represent a sequence of configurations of $\mathcal{C}$ then $v(\varphi)
\neq \mathtt{conf}_{q,\vec{y}}$ for all $v\in \Val(\mathbb{M}_\mathcal{C})$, $q \in Q$ and $\vec{y} \in \textbf{Rm}^n$. \end{proposition} \begin{proof}
The proof follows by induction on the structure of the formula $\varphi\in \fm{\Sigma_\mathcal{C}}(\emptyset)$.
In the base case we have $\varphi \in \{\mathsf{zero},\epsilon\}$. The statement then holds since, for all $v\in \Val(\mathbb{M}_\mathcal{C})$, $v(\varphi) \in \{\mathtt{r}_{=0},\mathtt{r}_{\geq 0},\mathtt{init}\}$.
For the step we have two cases.
In the first case, $\varphi=\mathsf{suc}(\psi)$ and $v(\mathsf{suc}(\psi))\in \textbf{Rm} \cup \{\mathtt{error}\}$. In the second case, $\varphi=\mathsf{step}_q(\psi,\psi_1,\ldots,
\psi_n)$ and, if $\varphi$ does not represent any sequence of configurations, then one of the following must hold
\begin{itemize}
\item $\psi$ does not represent a sequence of configurations, in which case, by induction hypothesis, $v(\psi)\neq\mathtt{conf}_{q,\vec{y}}$, or
\item $\psi_i \neq \mathsf{enc}(j)$ for every $j$, for some $1 \leq i \leq n$, in which case $v(\psi_i)\notin\textbf{Rm}$.
\end{itemize}
In both cases we have $v(\varphi)=(\mathsf{step}_q)_{\mathbb{M}_\mathcal{C}}(v(\psi),v(\psi_1),\ldots,v(\psi_n))=\mathtt{error}$. \end{proof}
\begin{proposition}
\label{prop:NotRespectingNxtEntailsError}
Let $\mathcal{C} = \tuple{n,Q,q_{\mathsf{init}},\delta}$ be a deterministic $n$-counter machine. If $\textsf{nxt}(q,\vec{y}) \neq (q',\vec{z})$ then
\begin{equation}\label{equation:NotValidStep}
(\mathsf{step}_{q'})_{\mathbb{M}_\mathcal{C}}(\mathtt{conf}_{q,v(\mathsf{enc}(\vec{y}))}, v(\mathsf{enc}(\vec{z}))) = \{\mathtt{error}\},
\end{equation}
for some $v \in \Val(\mathbb{M}_\mathcal{C})$. \end{proposition} \begin{proof}
Assume $\textsf{nxt}(q,\vec{y}) \neq (q',\vec{z})$ and notice that, if $\delta(q)$ concerns the $i$th counter and $y_j \neq z_j$, for some $j \neq i$,
then $\mu^{\neq}_{y_j,z_j}(\mathsf{enc}(y_j)) \neq \mu^{\neq}_{y_j,z_j}(\mathsf{enc}(z_j))$, by remark~\ref{remark:MuProperties}. Therefore,
equality~\eqref{equation:NotValidStep} holds for $v = \mu^{\neq}_{y_j,z_j}$. Because of this, throughout the rest of the proof, we assume
that, $y_j = z_j$, for every $j \neq i$, and concern ourselves only with the values taken by $y_i$ and $z_i$.
We have to consider three cases, depending on
$\delta(q)$.
If $\delta(q) = \tuple{\inc{i}, s}$ for some machine state $s$, then either
$q' \neq s$, in which case equality~\eqref{equation:NotValidStep} holds for every $v \in \Val(\mathbb{M}_\mathcal{C})$, or $z_i \neq y_i + 1$.
In this latter case, also by remark~\ref{remark:MuProperties}, we have $\mu^{+}_{y_i,z_i}(\mathsf{enc}(z_i)) \notin \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(\mu^{+}_{y_i,z_i}
(\mathsf{enc}(y_i)))$, so equality~\eqref{equation:NotValidStep} holds for $v = \mu^{+}_{y_i,z_i}$.
Otherwise, $\delta(q) = \tuple{\test{i}, s_1, s_2}$ for some machine states $s_1$ and $s_2$.
If $y_i = 0$, then either $q' \neq s_1$ or $z_i \neq y_i$. Consider the valuation $\mu^{=}_{y_i,z_i} = v^=_0$. Since $v^=_0(\mathsf{enc}(y_i)) = \mathtt{r}_{=0}
\notin \mathsf{suc}_{\mathbb{M}_\mathcal{C}}(v^=_0(\mathsf{enc}(z_i)))$, the second condition concerning $\test{i}$, in the definition of $(\mathsf{step}_{q'})_{\mathbb{M}_\mathcal{C}}$, is not
satisfied whenever $v = \mu^{=}_{y_i,z_i}$. The first condition is also not satisfied if $q' \neq s_1$, directly, or if $z_i \neq y_i$, since in this case
$\mu^{=}_{y_i,z_i}(\mathsf{enc}(z_i)) \neq \mu^{=}_{y_i,z_i}(\mathsf{enc}(y_i))$, by remark~\ref{remark:MuProperties}. We conclude that equality~\eqref{equation:NotValidStep} holds for $v = \mu^{=}_{y_i,z_i}$.
If $y_i \neq 0$, then either $q' \neq s_2$ or $z_i \neq y_i - 1$. Note that, in any case, $\mu^{-}_{y_i,z_i}(\mathsf{enc}(y_i)) \notin
\{\mathtt{r}_{=0},\mathtt{r}_{\geq 0}\}$, which can easily be checked using the definition of $\mu^{-}_{y_i,z_i}$. Therefore, the first condition
concerning $\test{i}$, in the definition of $(\mathsf{step}_{q'})_{\mathbb{M}_\mathcal{C}}$, is not satisfied whenever $v = \mu^{-}_{y_i,z_i}$.
The second condition is also not satisfied if
$q' \neq s_2$, directly, or if $z_i \neq y_i - 1$, since in this case $\mu^{-}_{y_i,z_i}(\mathsf{enc}(y_i)) \notin
\mathsf{suc}_{\mathbb{M}_\mathcal{C}}(\mu^{-}_{y_i,z_i}(\mathsf{enc}(z_i)))$, by remark~\ref{remark:MuProperties}. We conclude that equality~\eqref{equation:NotValidStep}
holds for $v = \mu^{-}_{y_i,z_i}$. \end{proof}
\begin{proposition}\label{prop:RefutesNonComputations}
Let $\mathcal{C} = \tuple{n,Q,q_{\mathsf{init}},\delta}$ be a deterministic counter machine and let $\varphi \in \fm{\Sigma_\mathcal{C}}(\emptyset)$ be such that
$\varphi = \mathsf{seq}(\tuple{C_0,\dots,C_k})$. If $\tuple{C_0,\dots,C_k}$ is not the finite computation of $\mathcal{C}$ then $\emptyset \not\vdash_{\mathbb{M}_\mathcal{C}} \varphi$. \end{proposition} \begin{proof}
If $\tuple{C_0,\dots,C_k}$ is not the finite computation of $\mathcal{C}$, then one of the following must hold: (i) $C_0$ is not the initial configuration of $\mathcal{C}$,
(ii) $C_k$ is not a halting configuration of $\mathcal{C}$, or (iii) there is some $0 \leq i < k$ such that $\textsf{nxt}(C_i) \neq C_{i+1}$. We deal with each
situation separately.
If (i) holds and $C_0 = (q,\vec{y})$ then either $q \neq q_{\mathsf{init}}$ or $y_j \neq 0$, for some $1 \leq j \leq n$. In the first case, for all $v \in
\Val(\mathbb{M}_\mathcal{C})$, we have $(\mathsf{step}_q)_{\mathbb{M}_\mathcal{C}}(\mathtt{init},v(\mathsf{enc}(\vec{y}))) = \mathtt{error}$. In the second case, $v_{y_j - 1}(\mathsf{enc}(y_j)) \notin \{\mathtt{r}_{=0},
\mathtt{r}_{\geq 0}\}$ and $(\mathsf{step}_{q_{\mathsf{init}}})_{\mathbb{M}_\mathcal{C}}(\mathtt{init},v_{y_j - 1}(\mathsf{enc}(\vec{y}))) = \mathtt{error}$.
If (ii) holds and $C_k = (q,\vec{y})$ then $q$ is not a halting state and
$v(\varphi) \in \{\mathtt{error}, \mathtt{conf}_{q,v(\mathsf{enc}(\vec{y}))}\} \subseteq A \setminus D$,
for all $v \in \Val(\mathbb{M}_\mathcal{C})$.
If (iii) holds then, by proposition~\ref{prop:NotRespectingNxtEntailsError}, there is $v \in \Val(\mathbb{M}_\mathcal{C})$ such that
$v(\mathsf{seq}(\tuple{C_0,\dots,C_{i+1}})) = \mathtt{error}$.
In any of the cases, there is some $v \in \Val(\mathbb{M}_\mathcal{C})$ such that $v(\varphi) \notin D$, so $\emptyset\not\vdash_{\mathbb{M}_\mathcal{C}} \varphi$. \end{proof}
Having seen how to refute any formula not representing a computation of $\mathcal{C}$, we conclude that $\mathbb{M}_\mathcal{C}$ does exactly what we intended.
\begin{theorem}\label{thm:teohalt}
Let $\mathcal{C} = \tuple{n,Q,q_{\mathsf{init}},\delta}$ be a deterministic counter machine. For any formula $\varphi\in \fm{\Sigma_\mathcal{C}}(P)$ we have
$\emptyset \vdash_{\mathbb{M}_\mathcal{C}} \varphi$ if and only if $\varphi = \mathsf{seq}(\tuple{C_0,\dots,C_k})$ and $\tuple{C_0,\dots,C_k}$ is a finite
computation of $\mathcal{C}$. \end{theorem} \begin{proof}
From right to left, if $\tuple{C_0,\dots,C_k}$ is a finite computation of $\mathcal{C}$ and $\varphi = \mathsf{seq}(\tuple{C_0,\dots,C_k})$ then, by
theorem~\ref{theorem:IfFiniteComputationThenTheorem}, we have that $\emptyset \vdash_{\mathbb{M}_\mathcal{C}} \varphi$. In the other direction, suppose
$\emptyset \vdash_{\mathbb{M}_\mathcal{C}} \varphi$ then, as discussed in example~\ref{ex:v}, $\varphi$ must be a closed formula. By
proposition~\ref{prop:RefutesNonSequences}, $\varphi = \mathsf{seq}(\tuple{C_0,\dots,C_k})$ for some sequence of configurations $\tuple{C_0,\dots,C_k}$,
and, by proposition~\ref{prop:RefutesNonComputations}, $\tuple{C_0,\dots,C_k}$ must be the finite computation of $\mathcal{C}$. \end{proof}
\subsection*{From theoremhood to monadicity} In order to obtain the announced undecidability result we need one last construction. We will show how to build an Nmatrix $\mathbb{M}_{\mathsf{m}}$ from an Nmatrix $\mathbb{M}$, under certain conditions, such that $\mathbb{M}_{\mathsf{m}}$ is monadic if and only if $\vdash_{\mathbb{M}}$ has theorems.
Given a finite $\Sigma$-Nmatrix $\mathbb{M}=\tuple{A,\cdot_\mathbb{M},D}$, let $\Sigma_{\mathsf{m}}$ be such that $\Sigma_{\mathsf{m}}^{(2)} = \Sigma^{(2)} \cup \{f_a : a \in A \}$ and $\Sigma_{\mathsf{m}}^{(k)} = \Sigma^{(k)}$, for every $k \neq 2$.
Let $A_{\mathsf{m}}=A\cup \{1\}$, assuming w.l.o.g. that $1\notin A$, and consider the $\Sigma_{\mathsf{m}}$-Nmatrix $\mathbb{M}_{\mathsf{m}}=\tuple{A_{\mathsf{m}},\cdot_{\mathsf{m}},\{1\}}$ where, for each $g \in \Sigma^{(k)}$, \begin{equation*}
g_{\mathsf{m}}(x_1,\ldots,x_k)=
\begin{cases}
g_\mathbb{M}(x_1,\ldots,x_k)& \text{ if } x_1,\ldots,x_k\in A\\
A_{\mathsf{m}}&\text{ otherwise }
\end{cases} \end{equation*} and, for every $a \in A$, \begin{equation*}
(f_{a})_{\mathsf{m}}(x,y)=
\begin{cases}
\{1\}& \text{ if } x=a\text{ and }y\in D\\
\; A& \text{ if } x \in A\setminus \{a\}\text{ and }y\in D\\
\; A_{\mathsf{m}}&\text{ otherwise }
\end{cases} \end{equation*}
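To make the two case definitions concrete, here is a small sketch instantiating them over a toy two-element carrier; the carrier $\{a,b\}$ and designated set $\{b\}$ are invented for illustration and do not come from the paper, and interpretations are modelled simply as set-valued Python functions.

```python
# A sketch of the M_m construction for a toy 2-valued Nmatrix. The two
# definitions below transcribe the displayed equations, with 1 playing the
# role of the fresh designated value of M_m.

A = frozenset({"a", "b"})        # toy carrier (not from the paper)
D = frozenset({"b"})             # toy designated set
A_m = A | {1}

def lift(g):
    # g_m: behaves like g on arguments from A, returns the whole of A_m
    # as soon as the new value 1 occurs among the arguments.
    def g_m(*xs):
        if all(x in A for x in xs):
            return g(*xs)
        return A_m
    return g_m

def f(a):
    # (f_a)_m from the definition: {1} exactly when the first argument is a
    # and the second is designated in the original Nmatrix.
    def f_a(x, y):
        if x == a and y in D:
            return frozenset({1})
        if x in A - {a} and y in D:
            return A
        return A_m
    return f_a

f_a = f("a")
print(f_a("a", "b"))   # frozenset({1}): the only designated output of M_m
print(f_a("b", "b"))   # the whole of A: only undesignated outputs
```

Since the designated set of $\mathbb{M}_{\mathsf{m}}$ is $\{1\}$, the two printed outputs show how $f_a$ can tell the value $a$ apart from the other elements of the carrier.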
The following proposition targets Nmatrices with infectious values. Recall that $*$ is infectious in $\mathbb{M}$ if for every connective $\copyright$ in the signature of $\mathbb{M}$ we have $\copyright_{\mathbb{M}}(x_1,\ldots,x_k)=*$ whenever $*\in \{x_1,\ldots,x_k\}$.
\begin{proposition}\label{prop:TheoremsIifMonadic}
Given an Nmatrix $\mathbb{M}$ with at least two truth-values, among them an infectious non-designated value, $\vdash_\mathbb{M}$ has theorems if and only if
$\mathbb{M}_{\mathsf{m}}$ is monadic. \end{proposition} \begin{proof}
Let us denote the infectious non-designated value of $\mathbb{M}$ by $*$.
Clearly, $*$ ceases to be infectious in $\mathbb{M}_{\mathsf{m}}$ as $(f_a)_{\mathbb{M}_{\mathsf{m}}}$ does not necessarily output $*$ when it receives it as
input.
The value $1$ is also not infectious in $\mathbb{M}_{\mathsf{m}}$; quite the opposite: when given as input to any connective, the output can take any value.
That is, for every connective $\copyright\in \Sigma_{\mathsf{m}}$ we have $\copyright_{\mathbb{M}_{\mathsf{m}}}(x_1,\ldots,x_k)=A_\mathsf{m}$ whenever $1\in \{x_1,
\ldots,x_k\}$. This immediately implies that if $\psi\in \Sub(\varphi)\setminus \{\varphi\}$ and $1\in \psi_{\mathbb{M}_\mathsf{m}}(x)$ then
$\varphi_{\mathbb{M}_\mathsf{m}}(x)=A_\mathsf{m}$ for any $x\in A_\mathsf{m}$.
If $\emptyset \vdash_\mathbb{M}\varphi$ then $\varphi$ must be a closed formula due to the presence of $*$.
Hence, for every $v\in\Val(\mathbb{M}_{\mathsf{m}})$ we have $v(\varphi)\in D$.
Thus, $\{p\}\cup \{f_a(p,\varphi):a\in A\}$ is a set of monadic separators for $\mathbb{M}_{\mathsf{m}}$, as $p$ separates $1$ from the elements in $A$, and
$f_a(p,\varphi)$ separates $a$ from every other $b\in A$.
If instead there are no theorems in $\vdash_\mathbb{M}$, let us consider an arbitrary monadic formula $\varphi\in L_{\Sigma_{\mathsf{m}}}(\{p\})$ and show it
cannot separate $*$ from the other elements of $A$.
We need to consider two cases.
\begin{itemize}
\item If $\varphi\in L_{\Sigma}(\{p\})$ then $\varphi_{\mathbb{M}_{\mathsf{m}}}(a)=\varphi_{\mathbb{M}}(a)\subseteq A\not\ni 1$ for every $a\in A$.
Hence $\varphi$ cannot separate any pair of distinct elements of $A$ and, in particular, cannot separate $*$ from any other
element of $A$.
\item If $\varphi\in L_{\Sigma_{\mathsf{m}}}(\{p\})\setminus L_{\Sigma}(\{p\})$
then there is $f_a(\psi_1,\psi_2)\in \Sub(\varphi)$ with $\psi_1,\psi_2\in L_{\Sigma}(\{p\})$.
If $p$ occurs in $\psi_2$ then $(\psi_2)_{\mathbb{M}_\mathsf{m}}(*)=(\psi_2)_{\mathbb{M}}(*)=\{*\}$ and
$(f_a(\psi_1,\psi_2))_{\mathbb{M}_{\mathsf{m}}}(*)= A_{\mathsf{m}}$, since $* \notin D$.
If $p$ does not occur in $\psi_2$ then, since $\emptyset\not\vdash_\mathbb{M}\psi_2$, $(\psi_2)_{\mathbb{M}}\cap (A\setminus D)\neq \emptyset$ and
we also obtain $(f_a(\psi_1,\psi_2))_{\mathbb{M}_{\mathsf{m}}}(*)= A_{\mathsf{m}}$.
Therefore, $\varphi_{\mathbb{M}_{\mathsf{m}}}(*)= A_{\mathsf{m}}$ since either $\varphi=f_a(\psi_1,\psi_2)$ or $f_a(\psi_1,\psi_2)\in \Sub(\varphi)
\setminus \{\varphi\}$ and $1\in(f_a(\psi_1,\psi_2))_{\mathbb{M}_{\mathsf{m}}}(*)$. As $\varphi_{\mathbb{M}_{\mathsf{m}}}(*)$ contains both designated and
non-designated elements it cannot separate $*$ from any other element of $A$.
\end{itemize}
As we are assuming that $A$ has at least two elements, we conclude that $\mathbb{M}_{\mathsf{m}}$ is not monadic. \end{proof}
Finally, we get to the main result of this paper.
\begin{theorem}
The problem of determining if a given finite $\Sigma$-Nmatrix is monadic is undecidable. \end{theorem} \begin{proof}
For every counter machine $\mathcal{C}$, the Nmatrix $\mathbb{M}_{\mathcal{C}}$ satisfies the conditions of Proposition~\ref{prop:TheoremsIifMonadic}, as it has more than two
truth-values and $\mathtt{error}$ is infectious.
Therefore, by successively applying Theorem~\ref{thm:teohalt} and Proposition~\ref{prop:TheoremsIifMonadic}, we reduce the halting problem for counter machines
to the problem of checking whether a finite Nmatrix is monadic.
Indeed, for a given counter machine $\mathcal{C}$, $\mathcal{C}$ halts if and only if $\vdash_{\mathbb{M}_\mathcal{C}}$ has theorems if and only if $(\mathbb{M}_\mathcal{C})_{\mathsf{m}}$ is monadic.
Furthermore, the presented constructions are all computable and $(\mathbb{M}_\mathcal{C})_{\mathsf{m}}$ is always finite since, if $\mathcal{C}$ has $m$ states and $n$
counters, then $\mathbb{M}_\mathcal{C}$ has $m \times 4^n + 6$ truth-values and $\Sigma_\mathcal{C}$ has $3+m$ connectives. Therefore, $(\mathbb{M}_\mathcal{C})_{\mathsf{m}}$ has
$m \times 4^n + 7$ truth-values and $(\Sigma_{\mathcal{C}})_{\mathsf{m}}$ has $m \times 4^n + m + 9$ connectives. We can therefore conclude the proof just by
invoking Theorem~\ref{thm:Halt}. \end{proof}
As a simple corollary we obtain the following result about Nmatrices, or better, about their underlying multi-algebras.
\begin{corollary} The problem of generating all expressible unary multi-functions in an arbitrary finite Nmatrix is not computable. \end{corollary} \begin{proof}
Just note that if we could compute the set of all expressible unary multi-functions, as the set is necessarily finite, we could test each of them
for the separation of values, as illustrated in the case of matrices in Example~\ref{ex:luk2}. \end{proof}
\section{Conclusion}\label{sec:conclude}
In this paper we have shown that, contrary to the most common case of logical matrices, the monadicity property is undecidable for non-deterministic matrices. As a consequence, we conclude that the set of all multi-functions expressible in a given finite Nmatrix is not computable, in general. These results, of course, do not spoil the usefulness of the techniques for obtaining axiomatizations, analytical calculi and automated proof-search for monadic non-deterministic matrices. This is especially the case since, for a given Nmatrix, one can always define a monadic Nmatrix over an enriched signature, such that its logic is a conservative extension of the logic of the previous Nmatrix, as described in~\cite{synthese}.
The results show, however, that tool support for logics based on non-deterministic matrices must necessarily have its limitations.
On a closer perspective, the reduction we have obtained from counter machines to Nmatrices (of which non-determinism is a fundamental ingredient) just adds to the initial perception that allowing for non-determinism brings a substantial amount of expressive power to logical matrices. Concretely, it opens the door for studying the computational hardness of other fundamental meta-theoretical questions regarding logics defined by Nmatrices. In particular, we will be interested in studying the problem of deciding whether two given finite Nmatrices define the same logic, a fundamental question raised by Zohar and Avron in~\cite{AvronZohar}, for which only necessary or sufficient conditions are known.
Additionally, we deem it important to further explore the connections between Nmatrices and term-dag-automata (an interesting computational model for term languages~\cite{AutomataOnDAGRepresentationsOfFiniteTrees,ClosureProperties}) and which informed our undecidability result. Another relevant direction for further investigation is the systematic study of infectious semantics, in the lines of~\cite{ismvl,infect}, whose variable inclusion properties also played an important role in our results.
\end{document}
What is the area, in square units, of a triangle with vertices at $A(1, 1), B(6, 1), C(3, 7)$?
Note that $AB$ has length 5 and is parallel to the $x$-axis. Therefore, the height of the triangle is the difference in the $y$-coordinates of $A$ and $C$, or $7-1 = 6$. Thus, the area of the triangle is $\frac{6 \times 5}{2} = \boxed{15}$.
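As an independent numerical cross-check of the answer, the shoelace formula (an alternative method, not the base-height argument used in the solution above) gives the same area:

```python
# Shoelace formula: area = |sum of cross terms| / 2 for a simple polygon
# given by its vertices in order.

def shoelace(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

print(shoelace([(1, 1), (6, 1), (3, 7)]))  # 15.0
```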
When Science Confronts Philosophy: Three Case Studies
Eric Dietrich
Axiomathes (2020)
This paper examines three cases of the clash between science and philosophy: Zeno's paradoxes, the Frame Problem, and a recent attempt to experimentally refute skepticism. In all three cases, the relevant science claims to have resolved the purported problem. The sciences, construing the term broadly, are mathematics, artificial intelligence, and psychology. The goal of this paper is to show that none of the three scientific solutions work. The three philosophical problems remain as vibrant as ever in the face of robust scientific attempts to dispel them. The paper concludes by examining some consequences of this persistence.
This paper provides arguments that three famous philosophy problems are not solved or even dented by the best efforts of science and mathematics. The three problems are Zeno's Paradoxes, the Frame Problem, and Skepticism. The first and third are ancient. The Frame Problem in much younger; it is just now only a little over fifty years old. The conclusions will be the same for all three. The sciences (broadly construed to include mathematics) that allegedly solve each problem are all susceptible to the same objection: the empirical and theoretical methods used assume the existence of precisely what the philosopher calls into question. So, in each case, the scientist begs the question against the philosopher.
Zeno's Paradoxes
Parmenides held that reality—the What Is—is a partless, continuous, unchangeable, immovable, unity that will exist forever, could never come into existence from "nothingness," and could never go out of existence. There is, finally, only pure crystalline being, now and forever. This is Parmenides's primary truth. A second truth (or path) concerns the What Is Not, which necessarily is not (i.e., it doesn't and cannot exist). Together, these two truths make up the Way of Truth. (In modern parlance we would probably call the Way of Truth, the Way of A priori, Necessary Truth.) Contrasting with the Way of Truth is the Way of Opinion. This way is merely the way of appearances, which merely seem to exist. This way is untrustworthy because it is often false or varied (one person has one opinion, another has the opposite opinion). Accordingly, it is the way of mortals, of contingent beings. This way is ugly and should be avoided, but, alas, it cannot, Parmenides held, due to its pervasiveness. At best, one can adopt a sort of detestable pragmatism toward the Way of Opinion. It may be that Parmenides held that the Way of Opinion is a pervasive illusion.Footnote 1
Parmenides's philosophy is a shocking metaphysics and epistemology, and unsurprisingly it was not widely embraced. But one consequence of Parmenides's philosophy is still with us to this day, and continues to bedevil our understandings of motion, the relation between the many and the one, and the relation between thought and the world.
There is No Motion
Parmenides's metaphysics implies that there is no motion—nothing moves—another strange idea. But Parmenides had a student who was unusually bright: Zeno of Elea.Footnote 2 Zeno defended his teacher's metaphysics with at least four now-infamous motion paradoxes. On the power of these paradoxes, Bertrand Russell says:
In this capricious world, nothing is more capricious than posthumous fame. One of the most notable victims of posterity's lack of judgment is the Eleatic Zeno. Having invented four arguments, all immeasurably subtle and profound, the grossness of subsequent philosophers pronounced him to be a mere ingenious juggler, and his arguments to be one and all sophisms (Russell 1903).
While it would be instructive to examine all four of Zeno's paradoxes, here we will only examine one, The Dichotomy. This paradox is easily accessible and widely known, and in attacks on Zeno's conclusion, the fallacy of begging the question against Zeno is easily seen.
Before we begin with the paradox, however, it will be worthwhile to clear up a potential source of confusion. Zeno didn't deny that we seem to experience motion, either by seeing something move or by seeming to experience our own movement. His argument was that while denying motion (and, more strongly, asserting Parmenides's unitary What is was the correct metaphysics) led to conflicts with perception (we do seem to see plenty of motion and we do seem to see a multitude of things), asserting that motion existed (and, more strongly, that Parmenides was wrong) led to conflicts with logic—which is far worse.Footnote 3 After all, perception is well-known to be fallible. Zeno's position, then, is that motion belongs to the ugly Way of Opinion. Finally, it is worth stressing that all of Zeno's paradoxes are clashes between what we seem to experience and our logical reasoning. Zeno naturally insisted that the latter takes precedence over the former, 'naturally', because he didn't want to violate the law of non-contradiction.
The Dichotomy Let's have Achilles run from a stationary point A to another stationary point B, a distance, let's say, of a 100 meters. Zeno asserts that Achilles cannot run this distance. There are two versions of The Dichotomy: the progressive version and the regressive version. Here, we are only interested in the progressive version. In the progressive version, Achilles cannot get to B (in the regressive version, he cannot even get started; he can't leave A). In order to make it to B, Achilles has to first go halfway to B: 50 meters. 50 meters remain. Now to get to B, Achilles must again go halfway to B: 25 meters. 25 meters remain. Now to get to B, Achilles must go yet again halfway to B: 12.5 meters. 12.5 meters remain. And so forth, ad infinitum. We see, then, that Achilles has to cover an infinite number of decreasing halfway distances in order to get to B. So, there is always some distance to cover before he can get to B. So, he cannot get to B. (Though this argument seems to break up Achilles's run to B into discrete distances, this is an artifact of the presentation of the argument. Achilles's attempted run to B is smooth and continuous… and never ending.)
Does Math Solve Zeno's Paradoxes?
It is widely held that the sum of a specific geometric series dispels the progressive form of the Dichotomy (e.g., Salmon 1975). Euclid knew well this sort of series and others like it (and so did others before Euclid). Euclid derived a general formula for the expression of the sum of any geometric series (discussed below). If the sum converges on a single number, the series is said to converge. If the series did not sum to a specific number, but grew beyond all bounds, it is said to diverge. A much clearer and cleaner picture of the convergence or divergence of a geometric series only emerged after mathematicians Cauchy, Dedekind, Weierstrass, and others finally placed the foundations of the calculus on a firm conceptual footing, repairing Newton's and Leibniz's initial foray into the infinitesimal. The relevant notions Cauchy and company gave us include limit, approaching a limit, converging to a limit, and approaching infinity, as well as others.
The series we are interested in is the geometric one below, which we will dub the Z Summation. The crucial, provable property of this series is that, though it is an infinite summation, it converges to a finite sum.
The Z Summation
$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots$$
Here, we add up all the numbers between 0 and 1 of the form \(\frac{1}{{2^{n} }}\), where n takes on all numbers 1, 2, 3, 4, 5,…. Intuitively, this infinite summation sums to 1: the sum will never grow beyond 1 because what's always added is one-half of the remaining distance to 1, and the sum clearly gets closer and closer to 1.
Of course, we don't actually do an infinite number of additions—we can't. Rather, the math allows us to derive the sum of the Z Summation, namely 1, by considering the sum in the limit, as n approaches infinity. The details are crucial for both the alleged refutation of Zeno's conclusion and for Zeno's defense of his conclusion, so here, briefly, is how this works.Footnote 4
To begin, here is the formula for the sum of the first n terms of a series:
$$(1)~~S_{n} = \frac{{a(1 - r^{n} )}}{1 - r}$$
Sn is our sum of the first n terms of any series, a our starting number, and r our constant ratio used to control the growth of the series. Deriving this formula took insight and cleverness. (Apparently, it is not known who first derived this formula—someone who lived well before Euclid; Euclid's proof that this formula will work is different from the one used today in most calculus books (Kaplan and Kaplan 2003, p. 90).) From (1), it is easy to derive (2), the sum of an infinite series, provided that \(\left| r \right| < 1\), for then as \(n \to \infty\), \(r^{n} \to 0\).Footnote 5 We thus get:
$$(2)~~\mathop {\lim }\limits_{n \to \infty } S_{n} = \frac{a}{1 - r}$$
This is read: "the limit of Sn as n goes to infinity is \(a/(1 - r)\)." Again, Sn is our sum, a our starting number (in the Zeno case, \(1/2\), since Achilles runs half of each of the remaining distances), and r, also here \(1/2\), controls the growth of the series. The series converges (produces a sum) because the absolute value of r is less than 1: \(\left|\tfrac{1}{2}\right| < 1\).
(2) then is how we get the finite sum of the infinite Z Summation: making the relevant substitutions gives us \(\frac{{\tfrac{1}{2}}}{{\tfrac{1}{2}}}\), which equals 1, our desired result.
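Formulas (1) and (2) are easy to check numerically. Below is a small illustrative sketch, not part of the original argument; the function name `partial_sum` is my own:

```python
# Partial sums of the Z Summation, 1/2 + 1/4 + 1/8 + ...,
# computed with formula (1): S_n = a(1 - r^n) / (1 - r), where a = r = 1/2.

def partial_sum(a: float, r: float, n: int) -> float:
    """Sum of the first n terms of a geometric series, per formula (1)."""
    return a * (1 - r**n) / (1 - r)

for n in (1, 2, 4, 10, 50):
    print(n, partial_sum(0.5, 0.5, n))

# Every finite partial sum falls strictly short of 1; only the limit
# in (2), a / (1 - r) = 1, closes the gap.
assert all(partial_sum(0.5, 0.5, n) < 1 for n in range(1, 50))
```

The final assertion makes the key feature visible: no finite number of additions reaches 1; the value 1 is obtained only by taking the limit.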
Now, the argument against Zeno is this. The Z Summation looks a lot like the progressive form of the Dichotomy (as many have noted; again see Salmon 1975). If this summation equals 1, then by covering an infinite number of decreasing halfways (as described above), Achilles can indeed race to the end of one race course. Paradox dispelled. The alleged power of this mathematical dispelling can be summarized in the following exchange between Zeno and the Fans-of-Using-Math-to-Dispel-Zeno's-Paradoxes:
Zeno:
Assume Achilles attempts to run from point A to point B.
Then, by logical reasoning, we see that he can never complete his run (because he has to cover an infinite number of half-distances).
But he does seem to complete the run.
This results in a contradiction between the Way of Opinion and the Way of Truth.
Clearly the Way of Truth is the Way to follow, for it is the more certain.
Therefore, Achilles cannot complete the run.
Fans of Math:
Assume Achilles is running from point A to point B.
This results in no contradiction between experience (Way of Opinion) and reasoning (Way of Truth) because the sum of the Z Summation is finite—1, to be exact. We know this because the Z Summation converges, as shown in (2).
There is now no roadblock to letting the compelling nature of our experience of motion guide us to acknowledging motion's real existence.
Therefore, Achilles really is moving, and motion is perfectly okay.
Zeno is unmoved. He gives the following argument in reply to the Fans of Math:
The derivation of the sum of the Z Summation confirms my conviction, not yours. As you Math Fans freely admit, this infinite summation cannot be carried out directly in the physical world simply because it requires an infinite number of 1/2's to be actually summed, which is impossible. We know that the Z Summation converges to 1 not because we do the actual summing, but because in our minds, we make the appropriate inferential leap from a finite amount of summing in (1) to the full infinite summation using (2). The brilliance of (1) and (2) is that we can mentally derive the sum of the Z Summation (or any other appropriate geometric series) without actually doing the impossible: summing up the infinite additions. Indeed, this is the point of (1) and (2). But Achilles, in running from A to B, can't "mentally derive" or "inferentially leap" to B—he is running, not thinking. I do not doubt that the Z Summation sums to 1. I merely deny that this sum can be reached by doing the actual summing of all the 1/2's. This sum can only be reached by doing mathematics. And you actually agree with me on this. Again, Achilles is not doing math, he's running (or appears to be). You are merely assuming that Achilles is somehow doing what your mathematics shows to be true. By making this assumption, you are further assuming that Achilles is moving. This latter assumption, and so the former one, beg the question against me. You are assuming precisely what I am denying. It is easy to win an argument if you assume what your opponent is denying. I conclude that the sum of the Z Summation (which is not in dispute) is irrelevant to my arguments. My paradoxes remain even though we agree on the sum of the Z Summation and agree that the Z Summation looks a lot like the progressive form of the Dichotomy.Footnote 6
This reply to the Fans of Math seems decisive. The Fans either have to agree with Zeno or beg the question against him, a vacuous victory. Zeno's paradoxes prevail.
In Sum
We have here an example of the pattern of interest. This confrontation between science (mathematics) and philosophy leaves the philosophical point unfazed—indeed, untouched. Mathematics confronts Zeno's paradoxes, changes the rules, assumes what Zeno explicitly denies, and then proclaims that it has refuted the paradoxes. Math has done nothing of the sort. The sum, 1, can be mentally, mathematically inferred, but one cannot run to it, count to it, nor traverse a space to it. Achilles is an actual physical being. No actual physical being can do an actually infinite task—which is why we need (1) and (2). Yes, somehow Achilles appears to run from point A to point B. But that's just the perversity of the Way of Opinion…. We were warned to stay clear of it.
The Frame Problem
Humans are probably the smartest animals on planet Earth. The Frame Problem begins here: What enables us to be so smart? Aristotle intuited, dimly, what is perhaps the key: "The animals other than [humans] live by appearances and memories, and have but little of connected experience; but the human race lives also by art and reasonings" (Aristotle, Book 1, Part 1, Metaphysics, emphasis added; Aristotle intends to include only rational reasonings—for him there was no other kind of reasoning). The Frame Problem is the problem of how we so robustly connect our experiences. This no doubt is a bit cryptic; the Frame Problem takes quite a bit of set-up to explain.
Defining the Frame Problem
The Frame Problem is ultimately about relevance—semantic, cognitive relevance. The problem of what is relevant to what runs deeply throughout large parts of philosophy of language, philosophy of mind, metaphysics, and epistemology. However, the Problem's two central technical issues involve change and updating. All living things live in changing environments. Among thinking things, some of the most important changes they must deal with are changes they themselves bring about via their actions. However, more than change is involved. To successfully handle any change, thinking things must update what they believe and expect after experiencing the change. The major difficulty wrought by the Frame Problem is the question of how, and how far, this updating should be carried out. This is best explained by an example.
If you put a pot on the stove to boil water, you need to be aware that the lid, including its handle, is likely to get hot, too; so you can't just grab the lid's handle with your fingers to check on the boiling water or to put in, say, some rice or pasta. Putting a pot on the stove to boil water is a change, and this change requires updating one's belief about the temperature of the lid and its handle (note, however, that the temperature of the pot itself quite naturally gets updated in your mind—you would never just grab the pot between your two hands). Failure to so update the temperature of the lid will result in burned fingers. This can be put in terms of relevance: the temperature of the pot is relevant to the temperature of the lid, so if the former changes, so does the latter.
But there are also the problems of deciding which beliefs you can afford not to update even if you do consider them, and which you can simply ignore altogether. Consider two further cases.
Case one: With a pot on the stove, your belief that your car has four tires should not be updated for the simple reason that pots of boiling water on your stove do not affect how many tires your car has—the two topics are not relevant to each other. (Note: one can make up a story where the two are related, but that story will have a very low probability of occurring).
Case two: Given the pot of boiling water on the stove, should you update your belief that Antarctica is still cold? It depends. On the one hand, arguably Yes, since your use of your stove and the use of stoves all around the world are helping raise the global temperature by burning fossil fuels, which, in turn, is warming Antarctica. But on the other hand, No, since Antarctica is still cold. Perhaps it is less cold by a tiny fraction of a degree because of your boiling water, but this fraction is probably not even measurable.
Now here is the key. Until they were mentioned, you almost certainly would not have even considered updating your beliefs about your car's tires, nor the ones about Antarctica's temperature. In fact, you would not have considered the vast majority of your beliefs for updating; nor should you have, because you have an enormous number of beliefs; considering them all would take a lot of time. But hidden in your enormous number of beliefs are some that are in fact relevant to your pot of boiling water, but which, up to now, you would have considered irrelevant.
Aristotle's claim above was that part of humans' robust intelligence is how we connect our experiences. Connection is captured by the notion of relevance. Importantly, which of your beliefs are relevant to any current change you are experiencing is not fixed. Relevance itself changes as the world changes. Beliefs once thought to be irrelevant become relevant, and vice versa. So, your tire belief should not even be considered when you update your beliefs given your pot of boiling water. But your Antarctica belief is a candidate for such updating (at least arguably).
And now we have the Frame Problem. To find the beliefs relevant to a given change, you do, apparently, have to canvass your entire belief store—a vast collection of beliefs.Footnote 7 Some will be more obvious than others, and some more relevant than others, but the entire store has to be canvassed. So, there you stand examining and examining while the water in your pot boils away.
Victimized by the Frame Problem: Heuristic Updating
Something has gone badly wrong. Given any change, we do not and cannot examine the whole of our belief store because it is too big. But this is precisely why we fall victim to the Frame Problem. Losing your keys, wallet, or cell phone is often an instance of the Frame Problem. You put your keys down, but not in the spot you normally put them; you don't update your beliefs correctly, and oops! lost keys.Footnote 8
Being a victim of the Frame Problem is just part of living. If your mistake is bad enough, the consequences will be, too. But usually we find our keys. The fact that we are often victims of the Frame Problem strongly makes the case that given an experienced change, what humans and other animals do is use the following heuristic for updating: update those beliefs that seem now to be obviously relevant, while ignoring the rest.
The Official Frame Problem
To complicate matters, the above description of the Frame Problem is not the official version, according to AI scientists. This takes some explaining. The Frame Problem is a complicated problem with a tortured history. And it is not possible to define it succinctly and easily. In fact, this was a big part of what the struggle over the Frame Problem was about.
The Frame Problem began life in an infamous 1969 paper by John McCarthy and Patrick Hayes ("Some Philosophical Problems From the Standpoint of Artificial Intelligence"). In this paper, McCarthy and Hayes, both computer scientists (this matters), introduced the term "Frame Problem" to denote a seemingly narrow logic problem that arose while the two worked to develop a logic for modeling reasoning about change.Footnote 9 But just nine years later, in 1978, the Frame Problem had become "an abstract epistemological problem" (see Dennett 1978, p. 125, emphasis in original). Then, in 1987, the philosopher Jerry Fodor equated the Frame Problem with "the problem of how the cognitive mind works" (Fodor 1987, p. 148). He then claimed that understanding how the mind works requires unraveling the nature of inductive relevance and rationality (ibid.). So, Fodor was saying that solving the Frame Problem would be figuring out how inductive relevance and rationality operate. From here, the Frame Problem continued to grow and expand until it covered vast areas of philosophical research. The Frame Problem was revealed to be a serious and deep philosophical problem, and therefore probably completely intractable; hence humankind's heuristic solution to it discussed at the end of Sect. 3.2.Footnote 10
In fact, Fodor thought the Frame Problem was so serious that it explained why AI had failed thus far and was going to continue to fail. Fodor said:
We can do science perfectly well without having a formal theory of [which ideas or events are relevant to each other]; which is to say that we can do science perfectly well without solving the Frame problem. That's because doing science doesn't require having mechanical scientists; we have us instead. But we can't do AI perfectly well without having mechanical intelligence; doing AI perfectly well just is having mechanical intelligence. So we can't do AI without solving the Frame Problem. But we don't know how to solve the Frame Problem. That in a nutshell, is why, though science works, AI doesn't. (Fodor 1987, p. 148; emphases in original).
But of course, AI researchers and their philosophical allies disagreed with Fodor. Hayes (1987) gives a definition of the Frame Problem. He asks us to consider a case where someone goes through a door from room 1 to room 2. Hayes says that we want to be able to prove (in some logic) that when an agent (a thinking thing) goes from room 1 to room 2, then the agent is in room 2. (Again, this might seem obvious to human readers, but to be obvious to a computer, the computer has to be programmed correctly.) To get this conclusion that going from room 1 to room 2 puts one in room 2, we need an axiom to this effect. No problem. We simply add it in. Hayes then says:
But here at last is the frame problem. With axioms [like the one we are adding about changing rooms], it is possible to infer what the immediate consequences of actions are. But what about the immediate non-consequences? When I go through a door, my position changes. But the color of my hair, and the positions of the cars in the streets, and the place my granny is sitting, don't change. In fact, most of the world carries on in just the same way that it did before… But since many of these things CAN change, they are described in our vocabulary as being relative to the time-instant [of when the room change occurred], so we cannot directly infer, as a matter of logic, that they [remain] true [after I change rooms] just because they were [true before I changed rooms]: This needs to be stated somehow in our axioms (p. 125).
Then Hayes, continuing, points out:
In this ontology, whenever something MIGHT change from one moment to another, we have to find some way of stating that it DOESN'T change whenever ANYTHING changes. And this seems silly, since almost all changes in fact change very little of the world. One feels that there should be some economical and principled way of succinctly saying what changes an action makes, without having to explicitly list all the things it doesn't change as well; yet there doesn't seem to be another way to do it. That is the frame problem (p. 125, emphasis in original).
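Hayes's point can be made concrete with a toy sketch (mine, not from the paper; all the fluent names are made up). Even a naive procedural encoding of the room-change action must explicitly restate every fact the action does not change:

```python
# Toy illustration of the frame problem: a world state is a dict of
# "fluents" (time-varying facts). Going through the door changes only
# the agent's location, yet the update must explicitly carry over every
# other fluent, the "silly" enumeration Hayes complains about.

def go_through_door(state: dict) -> dict:
    new_state = {"location": "room 2"}      # the one genuine consequence
    for fluent, value in state.items():     # explicit frame "axioms":
        if fluent != "location":            # restate each non-consequence
            new_state[fluent] = value
    return new_state

before = {"location": "room 1", "hair_color": "brown",
          "granny_seat": "armchair", "cars_parked": True}
after = go_through_door(before)
print(after)  # only "location" differs from before
```

In a logical formalism the carry-over cannot be hidden inside a loop like this; it must be written out as one frame axiom per fluent per action, and the count grows multiplicatively with the number of fluents and actions. That explosion is the technical problem Hayes identifies.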
What Hayes wanted, what all Frame Problem AI researchers wanted, was some epistemic and metaphysical stability principle or principle of epistemic inertia which worked in all cases and for all tasks no matter how complicated. But philosophy was exactly pointing out that such a principle was not possible because we cannot know ahead of time all of what is relevant to a given change or action. For starters, changes change relevance relations. So, we actually have to make the change and then wait and see what other things change as a result. If it were otherwise, we could do all of science from our armchair armed only with pencil and paper.
The "Two Narratives" Interpretation
Hayes disagreed with Fodor, Dennett, and other philosophers that the Frame Problem was profound and that it was deeply connected with how the mind works and knows whatever it knows. Hayes replied to Fodor:
The term "frame problem" is due to John McCarthy, and was introduced in McCarthy and Hayes (1969). It is generally used within the AI field in something close to its original meaning. Others, however, especially philosophers, sometimes interpret the term in different ways… In this short paper I will try to state clearly and informally what the frame problem is, distinguish it from other problems with which it is often confused, briefly survey the currently available partial solutions, and respond to some of the sillier misunderstandings (1987, p. 123; emphasis added; as Hayes's paper proceeds, the language gets sharper).
By sillier misunderstandings Hayes means, of course, philosophical misunderstandings.
The eventual compromise was to say there were two narratives about the Frame Problem. The philosophy narrative says that the Problem is as deep as metaphysics and the nature of rationality, and as complex. Therefore, understanding how humans heuristically avoid the Frame Problem some significant amount of the time is probably one of the keys to understanding the human mind (see Fields 2013). Opposing this narrative, we have the AI narrative, which says the Problem is only a challenging technical problem in the logic of reasoning about change. And, if that weren't difference enough, on this latter narrative the Frame Problem has been more or less solved! Shanahan (2016), who completely adopts the two-narrative narrative, says:
… [A] number of solutions to the technical frame problem [the logic version] now exist that are adequate for logic-based AI research. Although improvements and extensions continue to be found, it is fair to say that the dust has settled, and that the frame problem, in its technical guise, is more-or-less solved (emphasis added).
Philosophers impressed with the Problem will point out that "more or less solved" means not solved. All the logic solutions to the Frame Problem invoke some sort of strong restrictions or circumscriptions to the target logic so that accounting for unintended effects of changes due to actions is kept manageable. There is no general algorithmic or logical solution to the AI interpretation of the Frame Problem; nor is there any other kind of solution. There are only heuristic avoidances of the Problem that work locally, and only more or less well; see again the heuristic updating "solution" at the end of Sect. 3.2. So, the AI interpretation of the Frame Problem is not so much a solution to the problem as a finely crafted way to avoid the problem. That just leaves the philosophical interpretation.
The Profundity of the Frame Problem
A particularly well-developed argument for the above conclusion is found in Fields' 2013 paper. Fields concludes, upon considering the relevant neuroscience, that the Frame Problem and the problem of object re-identification (How do we re-identify objects as persisting things from one context to another?) are equivalent. Object re-identification is widely thought to be "solved" in part by pre-motor fictive (made up) causal histories, clearly a heuristic process that can only work some of the time. Almost all of AI and large areas of cognitive science (notably the study of cognition) assume that object re-identification is unproblematic. Clearly this assumption will have to be abandoned if the relevant disciplines are to have any hope of unraveling how humans successfully avoid the philosophical version of the Frame Problem—when they do. Finally, Fields points out that object re-identification is fundamentally analogical in nature. Hence this analogical character will have to be deployed both in AI approaches to the Frame Problem and in understanding how humans deal with the Problem.
But of course, analogy-making in the mind is a creative process that alters the very mental representations one is using to think about those changes one is causing in one's environment and the objects that need to be re-identified (see Dietrich 2000, 2010, and also see Connell and Lynott 2014). What guarantees that those internal representations still represent what their immediate precursors represented? Further use of analogical thinking leads to an infinite regress. Hence the Frame Problem, as well as object re-identification, emerges again, inside the minds of those trying to avoid it in their daily lives. It is clear that the mind itself begs the question when it comes to the Frame Problem: it simply assumes that certain representations re-identify objects. If it didn't, an infinite regress would result, and we'd all come to a screeching halt.
We now have here the same pattern we had with Zeno's Paradoxes. Those seeking to avoid the philosophical gravity of a certain problem simply deny that such gravity exists and declare that the problem commits a technical error correctable with enough fancy logical or mathematical machinery. The philosophers reply that deploying this machinery is entirely question-begging because the machinery derives from asserting the very claim they (the philosophers) deny. It is easy to more or less solve the Frame Problem if one denies its profundity and ubiquity. That the Problem returns down deep in the mind is hardly surprising to philosophers who take it and philosophy seriously.
The Frame Problem, therefore, is here to stay. Humans deal with it as best they can, just as they do when re-identifying objects in their perceptual environments. Understanding fully, not more or less, the persistence and gravity of the Frame Problem requires understanding deeply both the mind and the fundamental nature of reality, philosophical problems both.
Experimentally Refuting Skepticism
Thanks to Anna Tsvetkov for co-authoring parts of this section using her superlative 2018 Department of Philosophy Honors Thesis (Binghamton University) entitled "Saving Skepticism: A Response to Turri."
Skepticism, the view that we know nothing, is both outlandish and persistent. Instead of attacking skepticism directly, a new strategy would be to use human emotional and epistemic preferences to explain why such an implausible view has such staying power. This strategy would then help explain away skepticism, rather than attempting to refute it directly. The claim would then be that we humans have a natural but unjustified bias against claiming knowledge in the very cases skepticism depends on. This is Turri's strategy in his Skeptical Appeal: The Source Content Bias (2015).
The classical skeptical argument aims to prove that insofar as we do not know that a skeptical hypothesis does not hold, our ordinary beliefs about the world fail to constitute knowledge. The argument is typically cast in the following form. First, let O be a proposition one would ordinarily take oneself to know (e.g. that the external world exists or that one has hands) and let T be a tricky, skeptical hypothesis such as that one is dreaming or being systematically deceived. Not-T, accordingly, is the denial of the skeptical hypothesis. The form of the skeptical argument is then:
If I know that O, then I know that not-T.
But I do not know that not-T.
Therefore, I do not know that O (Turri 2015, p. 309).
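The logical skeleton of this argument is modus tollens. For the formally inclined, here is a minimal sketch in Lean; the names `KO` (standing for "I know that O") and `KnotT` (standing for "I know that not-T") are my own labels:

```lean
-- KO : "I know that O"; KnotT : "I know that not-T".
-- The skeptical argument is modus tollens on these two propositions.
theorem skeptical_form (KO KnotT : Prop)
    (h1 : KO → KnotT)  -- premise 1: if I know O, then I know not-T
    (h2 : ¬KnotT)      -- premise 2: I do not know not-T
    : ¬KO :=           -- conclusion: I do not know O
  fun ko => h2 (h1 ko)
```

The form is valid, which is why attacks on the argument target a premise (as the strategies surveyed below do) rather than the inference itself.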
For example, a skeptic might raise the brain-in-a-vat hypothesis (T): one might be a disembodied brain in a vat that receives electrical stimulation from a powerful computer. The stimulation induces a set of false experiences of an external world qualitatively indistinguishable from the experiences one enjoys now. Suppose Jones ordinarily knows that she has a physical body (O). The skeptic argues that if Jones knows this, then Jones knows that the brain-in-a-vat hypothesis is wrong (not-T). The skeptic then points out that Jones does not know that she is not a brain in a vat; she cannot rule out the brain-in-a-vat hypothesis because Jones's experiences remain the same regardless of whether or not the skeptical hypothesis is realized. From these premises Jones is led to accept the conclusion: "I do not know that I have a physical body." This argument works for any ordinary knowledge claim. The skeptical argument utilizes intuitively plausible premises to lead us to profound and startling conclusions: no knowledge claim is secure.
Many strategies have been deployed against the skeptical argument. Nozick (1981) and Dretske (1970, 1971), reject epistemic closure and consequently, the first premise of the skeptical argument. Other philosophers reject the second premise of the skeptical argument in favor of an appeal to common sense or to semantic externalism (see Moore 1962; Putnam 1975, respectively). Contextualists, such as Lewis (1996) and DeRose (2009), argue that the skeptic's argument garners putative force by simply raising the semantic standards for knowledge. This last strategy of attacking the force of the skeptical argument is a version of the strategy Turri uses.
After conducting a series of experiments (2015), Turri concludes that the potency of the skeptical argument can be explained away as a byproduct of human psychology. He maintains that humans possess a source-content bias: we are biased against classifying negative inferential beliefs as knowledge, where a negative inferential belief is a derived belief that something is not the case.Footnote 11 The second premise of the skeptical argument—a denial of knowledge of a negative inferential belief—achieves acceptance by preying upon our evaluative bias. Turri concludes that skepticism is an illusion that results from our psychology.
Turri's Experiments: Implicating a Source-Content Bias
Turri is particularly interested in locating the appeal of skeptical arguments outside the power of skepticism and the relevant logic itself. He maintains that no one reasonably accepts skeptical arguments within one's daily life. One does not go about contending, or even worrying, that one does not know that other people have minds, that one is not dreaming, or that one is interacting with an external world. Yet skepticism endures. Skeptical arguments appeal to us when they are considered, at least in certain settings, like in an epistemology class. Turri examines whether the cause of the appeal and force of skepticism can be attributed to preferences in human psychology. Namely, do humans possess a bias which renders forceful an otherwise impotent skeptical argument?
Turri hypothesizes and claims to experimentally show that humans possess a source-content bias. The bias is two-fold. First, humans evaluate inferential beliefs more harshly than perceptual beliefs. People more readily classify beliefs derived from direct perception as knowledge than beliefs arrived at by inference. The source of a belief, therefore, affects whether or not one classifies one's belief as knowledge. Second, humans evaluate negative inferential beliefs more harshly than positive inferential beliefs. A belief stated in a positive form—that something is the case—is more readily accepted as knowledge than a belief that states that something is not the case. Hence, the content of a belief affects rates of knowledge ascription and denial. Turri hypothesizes that the second premise of any instantiation of the skeptical argument form (at the beginning of Sect. 4) is appealing because of an interaction of these two effects: the inferential source and negative content of the relevant belief.
To test his theory, Turri conducted two experiments. Turri concludes: "… the skeptic has simply alighted on a class of beliefs that we are antecedently inclined to evaluate especially harshly" (2015, p. 316); and finally, "It is not… some deep fact about the nature of knowledge or the skeptic's ingenuity that invites skepticism. It is our psychology" (2015, p. 320). We humans only find the skeptical argument to be forceful, and accordingly, worry about skepticism, when we are presented with the skeptical argument and are misled by our bias. Otherwise, as Turri claims, we easily renounce skepticism.
For convenience in the following discussion, let us recast Turri's argument as follows.
People evaluate inferential beliefs more harshly than perceptual beliefs.
[Per Turri's experimental findings.]
People evaluate an inferential belief more harshly when its content is negative.
If (1) and (2), then the source-content bias exists.
Therefore, the source-content bias exists.
The second premise of the skeptical argument is a harsh evaluation of a negative inferential belief.
If the source-content bias exists and the second premise of the skeptical argument is a harsh evaluation of a negative inferential belief, then the appeal and force of the skeptical argument is a byproduct of our psychology.
(Skeptical arguments simply prey upon our evaluative bias against classifying negative inferential beliefs as knowledge. According to Turri, the bias can also "explain the classical skeptical argument's force" [2015, p. 310].)
Hence, the appeal and force of the skeptical argument is merely a byproduct of our psychology.
If the appeal and force of the skeptical argument is a byproduct of our psychology, then skepticism only results from our psychology.
Therefore, skepticism only results from our psychology.
Therefore, skepticism is not tenable.
There are now three objections to Turri's project.
Objection 1: Experiments Beg the Question Against the Skeptic
Has skepticism been explained away? A skeptic is going to be unimpressed with Turri's experiment-based argument because the argument begs the question against the skeptic.
Consider the skeptical argument again. It is important to note that the argument generalizes. That is, the argument can be employed not only to show that one does not know that an ordinary belief one holds is true, but that none of the beliefs one holds can be known to be true. But Turri must begin with the assumption that something is known—namely, scientific knowledge—to argue against skepticism.
Turri's paper follows in the tradition of naturalized epistemology. According to this view, philosophical questions concerning the nature of knowledge, evidence, justification and so forth, can be empirically tested and answered. Epistemology is cast within the experimental domain, alongside skepticism. The success of Turri's approach hinges on the possibility for skepticism to be addressed within the field of psychology. But skepticism casts all knowledge under doubt, including scientific knowledge. To address skepticism by accepting and utilizing the putative scientific knowledge in one's repertoire is to beg the question and egregiously so.Footnote 12 How can one know one's scientific knowledge is true? More generally, how can one know that the scientific enterprise produces knowledge? To address these concerns with further empirical evidence is to give a circular argument: science produces knowledge because scientific results (i.e. evidence) constitute knowledge. But each piece of purported evidence is laden with skeptical doubt.
Turri would reject this skeptical reply to his argument. First, he assumes that empirical investigations can, indeed, provide us with conclusive evidence to constitute knowledge. Although he remarks in his paper that he intends to avoid making "sweeping conclusions on the nature of knowledge", elsewhere he writes, "it is not, in the first place, some deep fact about the nature of knowledge or the skeptic's ingenuity that invites skepticism. It is our psychology" (2015, p. 320). But to state that it is not the nature of knowledge that invites skepticism, but rather a psychological bias, is to make a sweeping conclusion on the nature of knowledge. Recall that Turri suggests that, without the bias, one could know that the skeptic's alternative hypotheses are not true. In other words, skepticism only results from an evaluative bias. This is a knowledge claim derived from experimental results, which the skeptic will deny.
Second, Turri presupposes realism, the view that there is a mind-independent external world. In order for Turri to conduct empirical investigation he must make several epistemological and metaphysical commitments. He must accept the existence of an external world and the validity of epistemic norms (e.g., the scientific method) that allow for findings that reflect a true (or approximately true) account of a natural world. But the skeptical hypotheses threaten realism and the possibility for any knowledge of an external world. To defeat skepticism by starting with presuppositions of an external world is again to beg the question against skepticism.
Turri Counters
Perhaps the best way for the experimental-anti-skeptic to respond to the objection of begging the question is to shift the burden of proof. That is, Turri might argue that he, himself, is not making any question-begging assumptions by employing scientific methods. If the skeptic saddles scientific methodology with doubt, it is she who thereby takes upon herself the burden of proof. That is to say, the skeptic must provide a successful argument for the acceptance of her skeptical conclusions. The anti-skeptic might argue no such argument is given. As a result, the question-begging objection against Turri ought to be rejected.
As it stands, this rebuttal is powerful. Recall the skeptical argument:
1. If I know that O, then I know that not-T.
2. But I do not know that not-T.
Therefore, I do not know that O (Turri 2015, p. 309, emphasis added).
The first premise is uncontroversial. But the skeptic must persuade the experimentalist of the second premise to reach the skeptical conclusion. That is, the anti-skeptic must concede that she does not really know that she is not a brain in a vat or that she is not being systematically deceived by an evil demon, and so forth for the skeptical argument to prove successful. But the anti-skeptic, naturally, does not find any such premises forceful. For she finds greater intuitive appeal in the doctrines of realism and the proposition that there is an external reality than in any claims to the contrary. This, after all, is the thrust of such Moorean facts: they are propositions one knows better than the premises of any competing philosophical argument.Footnote 13 The skeptic has failed to provide any reasons for the anti-skeptic to accept her conclusion. Indeed we might make the stronger claim: no such reason can be given. The skeptic seems to have lost from the get-go.Footnote 14 She cannot, in principle, provide a successful argument to the anti-skeptic. No reasons will persuade the anti-skeptic of the pivotal second premise.
Attractive as this objection might be, there are two central motivations to resist it. The first motivation, derived from Pryor (2000), is as follows. Recall that Turri's project takes as its central aim explaining away skepticism as the byproduct of a psychological bias, a bias inherent in all of us, skeptics and anti-skeptics alike. Accordingly, Turri takes on what Pryor dubs an ambitious anti-skeptical project. That is to say, Turri's project aims to give a robust and universal answer to the problem of skepticism. Skepticism can be explained away as a psychological illusion, of sorts. Upon reading Turri's results, both the skeptic and the anti-skeptic ought to come to recognize that the putative force of the skeptical argument merely stems from an illusion. Simply put: Turri intends to explain away skepticism for the skeptic.Footnote 15 But he cannot do so by employing methodologies and making stipulations the skeptic would not allow, on pain of begging the question. Because Turri has taken up an anti-skeptical project of the ambitious kind, he comes to bear the burden of proof against the skeptic. Otherwise, his project is for naught.
The second motivation is simple. There is an important sense in which the skeptic cannot be said to beg the question against the dogmatist, or anti-skeptic. That is, the skeptic has the following response available to her: she can recommend suspension of all belief, either positive or negative. She can argue, like Sextus Empiricus, that she is neither asserting nor denying her position in deploying the skeptical argument; rather, she is in a state of epoche where she makes no epistemological commitments (see Sextus's Outlines of Pyrrhonism). Using this strategy, our skeptic claims that in epoche she seems to be speaking (making certain noises), and that if there are anti-skeptics in hearing range of her voice, the rational and logical force of her alleged claims is a problem for them, not her. Maintaining such epoche might be difficult for the skeptic, but that is due merely to habit, and not to any deep fact about knowledge or the world.
Now because the skeptic resides in epoche she has no positive epistemological commitments to refute. She cannot be said to beg the question. For she makes no assertions. The skeptic enjoys a tranquility of mind precisely because she suspends all belief. Indeed it is Turri who bears a burden of proof because he, unlike the skeptic, is asserting propositions and so bears the relevant epistemic and metaphysical commitments.
Though the begging-the-question charge against Turri is quite powerful, most will reject it (if only for pragmatic reasons) because, at a minimum, it stifles engagement with the deeper parts of Turri's anti-skepticism project. It is to these we now turn.
Objection 2: Two Kinds of Negative Inferential Beliefs
We need a distinction, one missed by Turri, because not all negative inferential beliefs concerned with knowing are equally susceptible to Turri's experiment-based argument. Consequently, not all negative inferential beliefs are equally powerful in giving us skepticism.
Distinguish between quotidian negative inferential beliefs (about knowing) and world-changing negative inferential beliefs (about knowing). Turri's experiments are exclusively about the first kind, negative inferences we draw in our ordinary lives. In one experimental set-up, Person-1 makes inferences about whether his car has been stolen, and in another, Person-2 makes inferences about what kind of large cat she is looking at while visiting a zoo. Further, we might say these quotidian beliefs correspond to cases of ordinary incredulity, not skepticism.Footnote 16 That is to say, Turri's vignettes do not evaluate a protagonist's beliefs in the realization of alternative skeptical hypotheses: that one is a brain in a vat, one is dreaming, and so forth. To the contrary, Turri's vignettes presuppose that such skeptical scenarios are not realized. There is indeed an external world such that Person-1 might worry whether his car is stolen, or Person-2 might visit the zoo and inquire into the nature of a feline. With further observation or sound inferential methods, the protagonists might alleviate their doubts. It is in this way Turri constructs a strawman against the skepticism he is after.
Skepticism is most powerfully argued for by using world-changing negative inferences: We are not in the world we think we are, but in a different world. We are some kind of thinking things, fooled by an evil demon, trapped in the Matrix (or some other computer simulation), always dreaming, or are unbodied brains floating in vats being given inputs from some machine. These world-changing inferences powerfully argue for skepticism because they all invoke an entire world strongly different from the one we think we inhabit, but a world that is experientially invariant from the world we think we inhabit. Turri discusses such world-changing inferences in the beginning of his paper, but he doesn't experiment on them at all. Here is an example of such a world-changing argument—we dub this argument Hands:
H1. If I know I have hands, then I know I am not a brain in a vat.
H2. But I don't know that I'm not a brain in a vat.
Therefore, I don't know I have hands.
H2 no doubt strikes the reader as true, or at least credibly true. But premise H2 gets its credibility not from being a part of a negative inferential belief (which it is), but from participating in a deep truth: the brain-in-a-vat scenario is experientially indistinguishable from our experiences in what we take to be the ordinary non-Matrix world. Perhaps Hands gets its force, appeal, and emotional power from our psychology, but it gets its logical plausibility as a correct description of the way things are from experiential invariance: if we were brains in vats, it would be impossible for us to discover this fact.Footnote 17
To avoid an objection of equivocation, Turri would perhaps argue here that the psychological appeal of Hands and its logical plausibility are closely tied together. Such a move is plausible because there are cases in which an argument lacks psychological appeal to someone precisely because the argument lacks logical plausibility: One can become averse to a position one deems lacking in logical strength. However, an argument which enjoys psychological appeal from a human bias cannot ipso facto lack logical plausibility. Yet this is the very conclusion Turri draws. Recall Turri concludes, from his findings, that it is not "some deep fact about the nature of knowledge", but our psychology that invites skepticism (2015, p. 320). But the psychological appeal of a position does not entail the position's lack of logical strength. Turri is seeking to explain skepticism's emotional aspect and instead we are focusing on its logical aspects. The two aspects are distinct. And again, for Turri to insist that this distinction isn't real, or isn't real in this case, is to beg the question against the arguments presented here.
By focusing on quotidian beliefs, Turri has found that certain negative inferential beliefs have a kind of epistemically destructive appeal. This is an interesting result, but it hardly undermines the logical plausibility of an argument like Hands.
We conclude that Turri's experiments, while interesting and even useful, do not so much as dent skepticism.
Objection 3: Turri's Logical Bind
Turri has to hold that not all negative inferential beliefs are problematic, that not all of them are to be rejected because his own experiment-based argument involves negative inferential beliefs. We call this next argument Turri's Argument (TA for short).
TA1. If skepticism is philosophically tenable, then the negative inferential beliefs upon which skepticism depends are bias-free.
TA2. But the negative inferential beliefs upon which skepticism depends are not bias-free; indeed they are bias-laden.
Therefore, skepticism is not philosophically tenable.
Let's dispel immediately any worries of equivocation between the psychological appeal and philosophical tenability of the skeptical argument. This objection was discussed in the previous section. Instead let's focus here on the following move. Unless Turri is willing to banish modus tollens and then abandon his project, he must grant the negative inferential belief contained in his argument and its conclusion. We assume that Turri embraces modus tollens.
So, not all negative inferential beliefs are bad, on Turri's own view. Well, which ones are good? Clearly the one in TA is. But we have argued in Sect. 4.4 (Objection 2) that other negative inferential beliefs, the world-changing ones, which depend on experiential invariance, are also good, and for logical reasons. But these latter beliefs result in skepticism. The only negative inferential beliefs to be rejected on the basis of Turri's experiments are the quotidian ones, and probably not even all of them, since negative inferential beliefs are ubiquitous and ineluctable.
Turri is now in the following logical bind:
To maintain his experiment-based anti-skepticism project, he has to reject the argument Hands (and all similar arguments). To do this, he has to reject all negative inferential beliefs—the quotidian ones (which he rejects in his experiments), and the world-changing ones (since they fund skepticism), as well as other such beliefs like those supplied by modus tollens. But if he does this, he must abandon his experiment-based anti-skepticism project, which depends on drawing negative inferences, leading to negative inferential beliefs about skepticism.
No doubt Turri would claim to be able to avoid this bind by insisting only the negative inferential beliefs responsible for skepticism (along with the ones in his experiments) are to be rejected. We don't see any available argument for any such special pleading.
Three objections against Turri's anti-skepticism project have been examined. It seems clear now that Turri's claim that skepticism can be explained away as a mere function of our psychology is not correct. We conclude that readers of Turri's paper (including Turri) cannot know that he has successfully refuted skepticism. At best, such readers should suspend their judgment, which is a kind of skepticism about Turri's project. By surviving Turri's new, experimental attack, skepticism, then, should look stronger than ever. Rather than explaining skepticism away, Turri has paved the way for a new and more virulent skepticism.
This paper has examined three powerful clashes between science and philosophy. In particular, we have investigated three putative solutions: First to Zeno's paradoxes, then to the Frame Problem, and finally to skepticism. Examination of the three cases has, in each case, revealed a limitation of scientific inquiry: to solve these three problems raised by philosophers, scientists must question-beggingly deploy premises and methods rejected by those very philosophers who constructed the problems. These deep and longstanding philosophical problems remain irresoluble and, so, implacable.Footnote 18
See Palmer (2012) and Robinson (1968, pp. 108–109). The "Way of Truth" and "Way of Opinion" are often treated as their standard names, but other scholars use different names to the same effect. For example, Palmer calls the Ways of Truth and Opinion the Way of Conviction and the Way of Mortals, respectively (Palmer 2012).
For the general ideas here in Sect. 2, I am indebted to Papa-Grimaldi's (1996) paper. I also draw on Benardete's (1964) conclusions.
See for example, Robinson, 1968, p. 128.
The proofs for what is to follow can be found in any calculus or real analysis book, and online in various places, such as Wikipedia: https://en.wikipedia.org/wiki/Series_(mathematics).
If \(\left| r \right|\) ≥ 1 then (1) diverges, i.e., goes to infinity.
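As a brief worked example (standard calculus, on the assumption that "(1)" refers to the geometric series \(\sum_{n=0}^{\infty} ar^{n}\) discussed in this footnote), the convergent case can be written out explicitly:

```latex
% Partial sums of the geometric series and their limit for |r| < 1
\[
\sum_{n=0}^{N} ar^{n} \;=\; a\,\frac{1-r^{N+1}}{1-r},
\qquad\text{so for } |r|<1:\quad
\sum_{n=0}^{\infty} ar^{n} \;=\; \frac{a}{1-r}.
\]
% Zeno's half-runs: a = 1/2, r = 1/2
\[
\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots
\;=\; \frac{1/2}{1-1/2} \;=\; 1.
\]
```

For \(|r| \geq 1\) the terms \(ar^{n}\) do not tend to \(0\), so the partial sums cannot converge, which is the divergence claim above.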
The foundations of the argument presented here can be found in: Papa-Grimaldi's "Why mathematical solutions of Zeno's paradoxes miss the point" (1996) and Benardete's Infinity, especially, p. 12 (1964), where Benardete argues that Zeno's paradoxes remain unsolved and unscathed to this day.
It does not matter for our purposes whether most of one's beliefs are made explicit and stored as such or whether the beliefs are left implicit, to be derived from a relatively small set of more active beliefs. Either way, an enormous number of your beliefs must be canvassed and analyzed for relevance to the current change, for each change. It is also worth stating that it is here assumed that one's belief store, no matter how large it is, is finite.
Global warming is a stunning example of the Frame Problem on a world-wide scale. Few updated their beliefs and understandings about what life would be like if billions of cars were driven around every day pumping, together with factories of all kinds, large quantities of CO2 into the air while rain forests were razed around the globe. True, in the 1960s some concerned scientists finally warned the world that global warming was a likely result of industrialization, but few in government or industry listened, and anyway, by then it was too late. The evolution of antibiotic-resistant pathogens is another example.
Why is it called the Frame Problem? McCarthy and Hayes say: "In the last section of part 3, in proving that one person could get into conversation [over the phone] with another, we were obliged to add the hypothesis that if a person has a telephone he still has it after looking up a number in the telephone book. If we had a number of actions to be performed in sequence we would have quite a number of conditions to write down that certain actions do not change the values of certain fluents. In fact with n actions and m fluents we might have to write down mn such conditions. [A fluent is a sentence or predicate in the logic being used that is a condition, property, or state of affairs that can change over time.]" Then McCarthy and Hayes say: "We see two ways out of this difficulty. The first is to introduce the notion of [a] frame…. A number of fluents are declared as attached to the frame and the effect of an action is described by telling which fluents are changed, all others being presumed unchanged."
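To make the mn count concrete, a frame axiom of the kind McCarthy and Hayes describe can be sketched in situation-calculus notation (our illustrative reconstruction of their telephone example, not a quotation):

```latex
% One frame axiom: looking up a number does not change possession of the phone.
\[
\forall p\,\forall s\;\bigl(\mathit{has}(p,\mathit{telephone},s)
\;\rightarrow\;
\mathit{has}(p,\mathit{telephone},\mathit{result}(\mathit{lookup},s))\bigr)
\]
% With m fluents (has, knows-number, ...) and n actions (lookup, dial, ...),
% roughly one such axiom is needed per fluent-action pair: on the order of mn axioms.
```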
The Frame Problem is intractable because it is a philosophy problem. I say this because I hold that philosophy makes no progress and cannot solve its major problems. See Dietrich (2011).
It is important to note that the negative inferential belief is not not-T. Turri doesn't elucidate this. The belief is implicit in the first premise of the skeptical argument, derived here, where K is a modal knowledge operator meaning "It is known that":
O → not-T
K(O → not-T)
∴ K(O) → K(not-T)
The negative inferential belief is explicit in the conclusion here (which is the first premise in the skeptical argument). Turri's experimental results seem to show that we are reluctant to classify the inference from O to not-T as knowledge (the second premise), and so we are reluctant to say the consequent of the conclusion is true. Hence we are led, according to Turri, to deny that we know that not-T. (The conclusion follows from the second premise via the Distribution Axiom of the modal logic K (Kripke): □(p → q) → (□p → □q).)
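Assembling the pieces (our reconstruction, using only the Distribution Axiom just cited together with modus tollens, which the main text assumes Turri accepts), the full route to the skeptical conclusion runs:

```latex
\[
\begin{array}{lll}
1. & K(O \rightarrow \lnot T) & \text{premise} \\
2. & K(O) \rightarrow K(\lnot T) & \text{from 1 by the Distribution Axiom} \\
3. & \lnot K(\lnot T) & \text{the skeptic's second premise} \\
4. & \lnot K(O) & \text{from 2 and 3 by modus tollens}
\end{array}
\]
```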
Consider the countervailing view. Quine, a proponent of naturalized epistemology, argues: "skeptical doubts are scientific doubts" (1975, p. 65, emphasis added). That is to say, skeptical doubts are akin to an awareness of illusions. Illusions can only arise against the backdrop of certain beliefs formed by science. Because of their empirical origins, one can employ scientific methods to neutralize skeptical concerns without begging the question. We reject this argument because of its reliance on Quinean empiricism and because of its circular nature. Skepticism calls our fundamental beliefs into question. One cannot justify one's basic beliefs by the very belief-forming methods called into question by raising the skepticism issue. See Fumerton (1994) for a similar objection.
Lewis (1996).
See Kelly (2005) for a similar objection.
For otherwise, Turri's paper bears an uninteresting thesis. Surely, there is little (perhaps no) motivation in shedding light on a tremendous bias towards accepting skeptical beliefs to those (the anti-skeptics) who do not accept such beliefs in the first place.
See Klein (2015) for pertinent discussion.
Putnam (1981) holds that if we were brains in vats we could not even think or wonder if we were brains in vats (due to his causal theory of meaning and reference). According to Putnam, if we think we might be brains in vats, we aren't. Not surprisingly, Putnam's argument is here rejected because the causal theory it requires is rejected.
I thank Chris Fields for comments on previous versions of this paper.
Benardete J (1964) Infinity. Clarendon (Oxford University) Press, Oxford
Connell L, Lynott D (2014) Principles of representation: why you can't represent the same concept twice. Top Cognit Sci 6:390–406
Dennett D (1978) Brainstorms: philosophical essays on mind and psychology. Bradford Books, Cambridge
DeRose K (2009) The case for contextualism: knowledge, skepticism, and context. Oxford University Press, New York
Dietrich E (2000) Analogy and conceptual change, or You can't step into the same mind twice. In: Dietrich E, Markman A (eds) Cognitive dynamics: conceptual change in humans and machines. Lawrence Erlbaum, Mahwah, pp 265–294
Dietrich E (2010) Analogical insight: toward unifying categorization and analogy. Cognit Proc 11(4):331
Dietrich E (2011) There is no progress in philosophy. In: Dietrich E, Weber Z (eds) Essays in philosophy, vol 12, issue date: July 2011, issue topic: philosophy's future: science or something else?
Dretske F (1970) Epistemic operators. J Philos 67:1007–1023
Dretske F (1971) Conclusive reasons. Aust J Philos 49:1–22
Fields C (2013) How humans solve the frame problem. J Exp Theor Artif Intell. https://doi.org/10.1080/0952813X.2012.741624
Fodor JA (1987) Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In: Pylyshyn (ed) The Robot's dilemma: the frame problem in artificial intelligence. Ablex, Norwood, pp 139–149
Fumerton R (1994) Skepticism and naturalistic epistemology. Midwest Stud Philos 19:321–340
Hayes P (1987) What the frame problem is and isn't. In: Pylyshyn Z (ed) The Robot's dilemma: the frame problem in artificial intelligence. Ablex Publishing, Norwood
Kaplan R, Kaplan E (2003) The art of the infinite. Oxford University Press, Oxford
Kelly T (2005) Moorean facts and belief revision, or can the skeptic win? Philos Perspect 19:179–209
Klein P (2015) Skepticism. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/skepticism/
Lewis D (1996) Elusive knowledge. Aust J Philos 74:549–567
McCarthy J, Hayes P (1969) Some philosophical problems from the standpoint of artificial intelligence. In: Meltzer B, Michie D (eds) Machine intelligence, vol 4. Edinburgh, Edinburgh University Press, pp 463–502
Moore GE (1962) Proof of an external world. Philosophical papers. Collier Brooks, New York, pp 144–148
Nozick R (1981) Philosophical explanations. Harvard University Press, Cambridge
Palmer J (2012) "Parmenides," The Stanford encyclopedia of philosophy, Summer 2012 Edition, Edward N. Zalta (ed). http://plato.stanford.edu/archives/sum2012/entries/parmenides/
Papa-Grimaldi A (1996) Why mathematical solutions of Zeno's paradoxes miss the point: Zeno's one and many relation and Parmenides' prohibition. Rev Metaphys 50:299–314
Pryor J (2000) The skeptic and the dogmatist. Noûs 34:517–549
Putnam H (1975) The meaning of 'Meaning'. In: Philosophical papers, vol 2: mind, language and reality. Cambridge University Press
Putnam H (1981) Brains in a vat. In: Reason, truth, and history, Chap. 1. Cambridge University Press, Cambridge, pp 1–21
Quine WVO (1975) The nature of natural knowledge. In: Guttenplan (ed) Mind and language. Clarendon Press
Robinson JM (1968) An introduction to early Greek philosophy. Houghton Mifflin Company, Boston
Russell B (1903) Principles of mathematics. Cambridge University Press, Cambridge
Salmon W (1975) Space, time, and motion: a philosophical introduction. Dickenson Press, Encino
Shanahan M (2016). The frame problem. In: The Stanford encyclopedia of philosophy (Spring 2016 Edition), Edward N. Zalta (ed). https://plato.stanford.edu/archives/spr2016/entries/frame-problem/
Turri J (2015) Skeptical appeal: the source-content bias. Cognit Sci 39:307–324
Philosophy Department, Binghamton University, Binghamton, NY, USA
Eric Dietrich
Correspondence to Eric Dietrich.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Dietrich, E. When Science Confronts Philosophy: Three Case Studies. Axiomathes (2020) doi:10.1007/s10516-019-09472-9
\begin{document}
\title{Prikry-type forcings after collapsing a huge cardinal} \begin{abstract}
Some models of combinatorial principles have been obtained by collapsing a huge cardinal in the case of the successors of regular cardinals. For example, saturated ideals, Chang's conjecture, polarized partition relations, and transfer principles for chromatic numbers of graphs.
In this paper, we study these in the case of the successors of singular cardinals. In particular, we show that Prikry forcing preserves the centeredness of ideals but kills the layeredness. We also study $\polar{\mu^{++}}{\mu^{+}}{\kappa}{\mu^{+}}{\mu}$ and $\mathrm{Tr}_{\mathrm{Chr}}(\mu^{+++},\mu^{+})$ in the extension by Prikry forcing at $\mu$. \end{abstract} \section{Introduction} The existence of a saturated ideals over the successor cardinal is one of generic large cardinal axioms. Kunen~\cite{MR495118} constructed a model of a saturated ideal over $\aleph_1$ by collapsing a huge cardinal. Let $\mu$ be a regular cardinal. Kunen's construction has been modified to obtain models in which \begin{itemize}
\item (Laver~\cite{MR673792}) $\mu^{+}$ carries a strongly saturated ideal.
\item (Foreman--Laver~\cite{MR925267}) $\mu^{+}$ carries a centered ideal.
\item (Foreman--Magidor--Shelah~\cite{MR942519}) $\mu^{+}$ carries a layered ideal. \end{itemize} Strong saturation, centeredness, and layeredness are strengthenings of saturation. See Section 2 for the definitions. The following hold in the respective models. \begin{itemize}
\item (Laver~\cite{MR673792}) $(\mu^{++},\mu^{+}) \twoheadrightarrow (\mu^{+},\mu)$.
\item (Laver~\cite{MR673792}) $\polar{\mu^{++}}{\mu^{+}}{\mu^{+}}{\mu^{+}}{\mu}$.
\item (Foreman--Laver~\cite{MR925267}) Every graph of size and chromatic number $\mu^{++}$ has a subgraph of size and chromatic number $\mu^{+}$. \end{itemize}
In this paper, we will consider these principles for singular $\mu$. Note that the above models are obtained by $\mu$-directed closed posets. Then, by Laver's theorem~\cite{Laver}, we can get models in which $\mu$ is supercompact as well. This enables us to use Prikry-type forcings. The problem is whether the above principles are preserved by Prikry-type forcings. Foreman~\cite{MR730584} showed that a class of posets including some Prikry-type forcings preserves the existence of saturated ideals. \begin{lem}[Foreman~\cite{MR730584}]\label{foremanfst}
Suppose that $\mu$ is a regular cardinal and $I$ is a saturated ideal over $\mu^+$. Then every $\mu$-centered poset forces that the ideal $\overline{I}$ generated by $I$ is a saturated ideal. \end{lem} Foreman~\cite{MR2583810} also constructed a model in which every successor cardinal carries a centered ideal. In his proof, he claimed an analogue of Lemma \ref{foremanfst} for centered ideals without proof. In this paper, we give a proof of this in Lemma \ref{termcentered}. We also study the layeredness of $\overline{I}$ in the extension by some Prikry-type forcings. We will show that $\overline{I}$ is always \emph{not} layered. We have \begin{thm}\label{maintheorem1}
Suppose that $\mu < \kappa$ is a measurable cardinal and $\mu^{+}$ carries a saturated ideal $I$, and $2^{\mu} = \mu^{+}$. Then Prikry forcing, Woodin's modification~\cite{MR1007865}, and Magidor forcing at $\mu$ force that \begin{enumerate}
\item (Foreman~\cite{MR2583810}) $\overline{I}$ is centered if $I$ is centered in $V$.
\item $\overline{I}$ is \emph{not} layered. \end{enumerate} \end{thm}
We also study the preservation of polarized partition relations by Prikry forcing. \begin{thm}\label{maintheorem2}
Prikry forcing preserves the following: \begin{enumerate}
\item $\polar{\mu^{++}}{\mu^{+}}{n}{\mu^{+}}{\mu}$ for each $n < \omega$.
\item $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ for each regular $\nu < \mu$.
\item $\npolar{\mu^{++}}{\mu^{+}}{n}{\mu^{+}}{\mu}$ for each $n < \omega$.
\item $\npolar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ for each regular $\nu < \mu$.
\item $\npolar{\mu^{++}}{\mu^{+}}{\mu^{+}}{\mu^{+}}{\mu}$.
\end{enumerate} \end{thm}
Our proof of Theorem \ref{maintheorem2} (1) shows that Cohen forcing $\mathrm{Add}(\aleph_0,\aleph_1)$ preserves $\polar{\aleph_2}{\aleph_1}{2}{\aleph_1}{\aleph_0}$ as well. Note that $\mathrm{Add}(\aleph_0,\aleph_1)$ forces $\npolar{\aleph_2}{\aleph_1}{\aleph_0}{\aleph_1}{2}$ as shown by Hajnal--Juhasz (see Theorem \ref{hajnaljuhasz}). This enables us to answer a question by Garti~\cite[Question 1.11]{MR4101445}. Combining these results, we show
\begin{thm}\label{maintheorem3}
Suppose that $\kappa$ is a huge cardinal and $\mu < \kappa$ is a supercompact cardinal. Then there is a poset which forces that $\kappa = \aleph_{\omega+2}$, $\mu = \aleph_{\omega}$, \begin{enumerate}
\item $\aleph_{\omega+1}$ carries an ideal $I$ that is centered but \emph{not} layered, and
\item $I$ is $(\aleph_{\omega+2},\aleph_{n},\aleph_{n})$-saturated for all $n < \omega$.
\item $\polar{\aleph_{\omega+2}}{\aleph_{\omega+1}}{\aleph_{n}}{\aleph_{\omega+1}}{\aleph_{\omega}}$ for all $n < \omega$,
\item $\npolar{\aleph_{\omega+2}}{\aleph_{\omega+1}}{\aleph_{\omega+1}}{\aleph_{\omega+1}}{\aleph_{\omega}}$, and
\item $(\aleph_{\omega+2},\aleph_{\omega+1}) \twoheadrightarrow (\aleph_{\omega+1},\aleph_{\omega})$. \end{enumerate} \end{thm}
We will show that the existence of a $\mu^{++}$-centered ideal over $[\mu^{+++}]^{\mu^{+}}$ implies that every graph of size and chromatic number $\mu^{+++}$ has a subgraph of size and chromatic number $\mu^{+}$. We also give a model in which $[\aleph_{\omega+3}]^{\aleph_{\omega+1}}$ carries an $\aleph_{\omega+2}$-centered ideal by generalizing Theorem \ref{maintheorem1} (1) (see Lemma \ref{generalizedpreservation}). We have \begin{thm}\label{maintheorem4}
Suppose that $\kappa$ is a huge cardinal and $\mu < \kappa$ is a supercompact cardinal. Then there is a poset which forces that $\kappa = \aleph_{\omega+2}$, $\mu = \aleph_{\omega}$, \begin{enumerate}
\item $[\aleph_{\omega+3}]^{\aleph_{\omega+1}}$ carries a normal, fine, $\aleph_{\omega+1}$-complete $\aleph_{\omega+2}$-centered ideal, and
\item Every graph of size and chromatic number $\aleph_{\omega+3}$ has a subgraph of size and chromatic number $\aleph_{\omega+1}$. \end{enumerate} \end{thm}
The structure of this paper is as follows: In Section 2, we recall basic facts about saturation properties, the duality theorem, and Prikry-type forcings. The duality theorem plays a central role when we study the saturation property of ideals in some extension. In Section 3, we show Theorem \ref{maintheorem1}. In Section 4, we will see that Chang's conjecture and the existence of saturated ideals imply some polarized partition relations, respectively. The proof of Theorem \ref{maintheorem2} is contained in Section 4. Combining the results in Section 3 and Section 4, we give a proof of Theorem \ref{maintheorem3} in Section 5. In Section 6, we introduce the transfer principle $\mathrm{Tr}_{\mathrm{Chr}}(\lambda,\kappa)$ for the chromatic number of graphs. We generalize Theorem \ref{maintheorem1} (1) to an ideal over $Z \subseteq \mathcal{P}(\lambda)$ and show that the existence of a $\mu^{++}$-centered ideal over $[\mu^{+++}]^{\mu^{+}}$ implies $\mathrm{Tr}_{\mathrm{Chr}}(\mu^{+++},\mu^{+})$. By using these facts, we give a proof of Theorem \ref{maintheorem4}.
\section{Preliminaries} In this section, we recall basic facts about the saturation properties of ideals, Prikry-type forcings, and some combinatorial principles. We use~\cite{MR1994835} as a reference for set theory in general. For more on saturated ideals and Prikry-type forcings, we refer to~\cite{MR2768692} and~\cite{Gitik}, respectively.
Our notation is standard. We use $\kappa,\lambda$ to denote regular cardinals unless otherwise stated. We also use $\mu,\nu$ to denote cardinals, possibly finite, unless otherwise stated. For $\kappa < \lambda$, $E^{\lambda}_\kappa$, $E^{\lambda}_{>\kappa}$ and $E^{\lambda}_{\leq\kappa}$ denote the sets of all ordinals below $\lambda$ of cofinality $\kappa$, $>\kappa$ and $\leq\kappa$, respectively. We also write $[\kappa,\lambda) = \{\xi \mid \kappa \leq \xi < \lambda\}$. By $\mathrm{Reg}$, we mean the class of regular cardinals.
For every poset $P$, we identify $P$ with its separative quotient. That is, $p\leq q \leftrightarrow \forall r \leq p(r \parallel q) \leftrightarrow p \Vdash q \in \dot{G}$ for all $p,q \in P$. Here, $\dot{G}$ is the canonical name for a generic filter. We say that $P$ is well-met if $\prod Z \in P$ for every $Z \subseteq P$ that has a lower bound in $P$. Note that all posets we deal with in this paper are well-met.
For a complete embedding $e:P \to Q$, the quotient forcing is defined by $P \Vdash Q/ e ``\dot{G} = \{q \in Q \mid \forall r \in \dot{G}(e(r) \parallel q)\}$ ordered by $\leq_{Q}$. $P \ast Q / e ``\dot{G}$ is forcing equivalent to $Q$. We also write $Q / \dot{G}$ for $Q / e ``\dot{G}$ if $e$ is clear from the context. If the inclusion mapping from $P$ to $Q$ is a complete embedding, we say that $P$ is a complete suborder of $Q$, denoted by $P \lessdot Q$. The completion of $P$ is a complete Boolean algebra $\mathcal{B}(P)$ such that $P \lessdot \mathcal{B}(P)$ and $P$ is a dense subset of $\mathcal{B}(P)$. $\mathcal{B}(P)$ is unique up to isomorphism.
For a given complete embedding $e:P \to Q$, the mapping $q \mapsto \prod\{p \in P \mid \forall r \leq p(e(r)\parallel q)\}$ defines a projection $\pi:Q \to \mathcal{B}(P)$. It is easy to see that $\pi(e(p)) = p$ and $e(\pi(q)) \geq q$.
We often use the following theorem. \begin{thm}[Laver~\cite{Laver}]\label{laverind}
If $\mu$ is supercompact then there is a poset $P$ such that \begin{enumerate}
\item $P \subseteq V_{\mu}$,
\item $P \Vdash \mu$ is supercompact.
\item For every $P$-name $\dot{Q}$ with $P \Vdash \dot{Q}$ is $\mu$-directed closed, $P \ast \dot{Q} \Vdash \mu$ is supercompact. \end{enumerate} \end{thm} We say that a supercompact cardinal $\mu$ is indestructible if, for every $\mu$-directed closed poset $Q$, $Q \Vdash \mu$ is supercompact. If $\mu$ is supercompact and $\kappa > \mu$ is huge then we can force $\mu$ to be indestructible without destroying the hugeness of $\kappa$.
\subsection{Saturation of Ideals}
For cardinals $\mu \leq \kappa \leq \lambda$, we say that $P$ has the $(\lambda,\kappa,<\mu)$-c.c. if, for every $X \in [P]^{\lambda}$, there is a $Y \in [X]^{\kappa}$ such that every $Z \in [Y]^{<\mu}$ has a lower bound. By the $(\lambda,\kappa,\mu)$-c.c., we mean the $(\lambda,\kappa,<\mu^{+})$-c.c. The $\lambda$-c.c. and the $\lambda$-Knaster property are the same as the $(\lambda,2,2)$-c.c. and the $(\lambda,\lambda,2)$-c.c., respectively.
$P$ is $(\lambda,<\nu)$-centered if and only if $P = \bigcup_{\alpha < \lambda} P_{\alpha}$ for some $<\nu$-centered subsets $P_{\alpha} \subseteq P$. A $<\nu$-centered subset is a $C \subseteq P$ such that every $Z \in [C]^{<\nu}$ has a lower bound in $P$. We call such a family of centered subsets a centering family of $P$. This is equivalent to the existence of a function $f:P \to \lambda$ such that $f^{-1}\{\alpha\}$ is a centered subset for each $\alpha$. We call such an $f$ a centering function of $P$. By $\lambda$-centered, we mean $(\lambda,<\omega)$-centered. Note that if $P$ is $\lambda$-centered then $|P| \leq 2^{\lambda}$. Indeed, for a centering family $\{P_{\alpha}\mid \alpha < \lambda\}$, $p \mapsto \{\alpha \mid p \in P_{\alpha}\}$ is an injective mapping from $P$ to $\mathcal{P}(\lambda)$.
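To make the correspondence just described explicit, the following display sketches how a centering family and a centering function determine each other (notation as above):

```latex
% From a centering family \{P_\alpha \mid \alpha < \lambda\} to a centering function:
f(p) = \min\{\alpha < \lambda \mid p \in P_{\alpha}\}.
% Each fiber f^{-1}\{\alpha\} \subseteq P_{\alpha} is <\nu-centered.
% Conversely, from a centering function f, the family
P_{\alpha} = f^{-1}\{\alpha\} \quad (\alpha < \lambda)
% is a centering family, since each fiber is <\nu-centered by assumption.
```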
If $P$ is well-met then the $(\lambda,<\nu)$-centeredness of $P$ is equivalent to the statement that $P$ can be covered by $\lambda$-many $<\nu$-complete filters.
For a stationary subset $S \subseteq \lambda$, let us introduce $S$-layeredness, which was originally introduced by Shelah~\cite{MR942519}. We say that $P$ is $S$-layered if, for any sufficiently large regular $\theta$, there is a club $C \subseteq [\mathcal{H}_{\theta}]^{<\lambda}$ such that, for all $M \in C$, if $\sup (M \cap \lambda) \in S$ then $M \cap P \mathrel{\lessdot} P$. We will consider the $S$-layeredness of complete Boolean algebras $P$. Note that $M \cap P$ is a Boolean subalgebra of $P$ but is not necessarily a complete Boolean subalgebra of $P$ even if $M \cap P \mathrel{\lessdot} P$.
\begin{lem}\label{charlayered}
For a stationary subset $S \subseteq \lambda$ and poset $P$ of size $\leq \lambda$, the following are equivalent: \begin{enumerate}
\item $P$ is $S$-layered.
\item There is an $\subseteq$-increasing sequence $\langle P_\alpha \mid \alpha < \lambda \rangle$ with the following properties:
\begin{enumerate}
\item $P = \bigcup_{\alpha < \lambda}P_{\alpha}$.
\item $P_{\alpha} \lessdot P$ and $|P_\alpha| < \lambda$ for all $\alpha < \lambda$.
\item There is a club $C \subseteq \lambda$ such that $\forall \alpha \in S \cap C (P_\alpha = \bigcup_{\beta < \alpha}P_\beta)$.
\end{enumerate}
\item There is an $\subseteq$-increasing continuous sequence $\langle P_\alpha \mid \alpha < \lambda \rangle$ with the following properties:
\begin{enumerate}
\item $P = \bigcup_{\alpha < \lambda}P_{\alpha}$.
\item $P_{\alpha} \subseteq P$ and $|P_\alpha| < \lambda$ for all $\alpha < \lambda$.
\item There is a club $C \subseteq \lambda$ such that $\forall \alpha \in S \cap C (P_\alpha \lessdot P)$.
\end{enumerate} \end{enumerate} \end{lem} \begin{proof}
For the equivalence between (2) and (3), we refer to \cite{preprint}. It is easy to see that (1) and (3) are equivalent. \end{proof}
The original definition of the $S$-layeredness of $P$ by Shelah is (3). If we defined $S$-layeredness by (3), then $\mathcal{B}(P)$ would not necessarily be $S$-layered even if $P$ is. By our definition, the $S$-layeredness of $P$ is equivalent to that of $\mathcal{B}(P)$.
\begin{lem}
Suppose that there is a complete embedding $\tau:P \to Q$. \begin{enumerate}
\item If $Q$ has the $(\lambda,\lambda,<\nu)$-c.c. then so does $P$.
\item If $Q$ is $S$-layered for some stationary $S \subseteq \lambda$, then so is $P$.
\item If $Q$ is $(\lambda,<\nu)$-centered, then so is $P$. \end{enumerate} \end{lem} \begin{proof}
We may assume that $P$ and $Q$ are Boolean algebras (not necessarily complete). We show only (2). It suffices to show that $Q \cap M \lessdot Q$ implies $P \cap M \lessdot P$ for club many $M \in [\mathcal{H}_{\theta}]^{<\lambda}$. We fix $M \prec \mathcal{H}_{\theta}$ with $P,Q,\tau \in M$. Suppose $Q \cap M \lessdot Q$. Let $p \in P$ be arbitrary. $\tau(p)$ has a reduct $q$ in $Q \cap M$. By the elementarity of $M$, we can choose a reduct $p_0 \in P \cap M$ of $q$ (in the sense of $\tau:P \to Q$). For every $r \in P \cap M$ with $r \leq p_0$, $\tau(r) \cdot q \not= 0$. Since $q$ is a reduct of $\tau(p)$, $\tau(r) \cdot q \cdot \tau(p) \not=0$, which in turn implies $r \cdot p\not= 0$ in $P$. Therefore $p_0 \in P \cap M$ is a reduct of $p \in P$. \end{proof}
In this paper, by an ideal over $\mu^{+}$, we mean a normal, fine, and $\mu^{+}$-complete ideal. For an ideal $I$ over $Z \subseteq \mathcal{P}(\lambda)$, we say $I$ is fine if $\{x \in Z\mid \alpha \not\in x\} \in I$ for all $\alpha < \lambda$. $\mathcal{P}(Z) / I$ denotes the poset with the underlying set $I^{+} = \mathcal{P}(Z) \setminus I$, where the order on $I^{+}$ is defined by $A \leq B\leftrightarrow A \setminus B \in I$.
For an ideal $I$ over $\mu^{+}$, $I$ is saturated if $\mathcal{P}(\mu^{+}) / I$ has the $\mu^{++}$-c.c. $I$ is $(\kappa,\lambda,<\nu)$-saturated if $\mathcal{P}(\mu^{+}) / I$ has the $(\kappa,\lambda,<\nu)$-c.c. We also say $I$ is strongly saturated if $I$ is $(\mu^{++},\mu^{++},\mu)$-saturated.
$I$ is centered and layered if $\mathcal{P}(\mu^{+}) / I$ is $\mu$-centered and $S$-layered for some $S \subseteq E^{\mu^{++}}_{\mu^{+}}$, respectively. For any other saturation property $\psi$ of posets, we likewise say that an ideal (over $Z$) is $\psi$.
In Sections 3 and 6, we will use Theorem \ref{duality}. For a $\mu^{+}$-c.c. poset $P$ and a normal, $\mu^{+}$-complete ideal $I$ over $Z$, we can consider the $P$-name $\overline{I}$ for the ideal generated by $I$. That is, $P \Vdash \overline{I} = \{A \subseteq Z \mid \exists B \in I(A \subseteq B)\}$. $\overline{I}$ is normal and $\mu^{+}$-complete in the extension. We are interested in the extent of saturation of $\overline{I}$. Theorem \ref{duality}, a special case of the duality theorem, is useful for studying $\dot{\mathcal{P}}(Z) /\overline{I}$. For details of the duality theorem, we refer to~\cite{MR3279214},~\cite{MR2768692}, or~\cite{MR3038554}. Here, we give a direct proof to aid understanding of the proofs in Section \ref{centeredlayered}. \begin{thm}[Foreman~\cite{MR3038554}]\label{duality}
For a normal, fine, $\mu^{+}$-complete, $\lambda^{+}$-saturated ideal $I$ over $Z \subseteq \mathcal{P}(\lambda)$ (for some $\lambda > \mu$) and a $\mu^{+}$-c.c. poset $P$, there is a dense embedding $\tau$ such that:
\[
\begin{array}{rccc}
\tau:& P \ast \dot{\mathcal{P}}(Z) / \overline{I} &\longrightarrow & \mathcal{B}(\mathcal{P}(Z) / I \ast \dot{j}(P)) \\
& \rotatebox{90}{$\in$} & &\rotatebox{90}{$\in$} \\
& \langle p,\dot{A} \rangle & \longmapsto & e(p)\cdot||[\mathrm{id}] \in \dot{j}(\dot{A})||
\end{array}
\]
Here, $e(p) = \langle 1,\dot{j}({p}) \rangle$ is a complete embedding from $P$ to $\mathcal{P}(Z) / I \ast \dot{j}(P)$ and $\dot{j}:V \to \dot{M}$ denotes the generic ultrapower mapping by $\mathcal{P}(Z) / I$. In particular, $P \Vdash\dot{\mathcal{P}}(Z) / \overline{I} \simeq \mathcal{B}(\mathcal{P}(Z) / I \ast \dot{j}(P) / e ``\dot{H}_0)$. Here, $\dot{H}_0$ is the canonical $P$-name for a generic filter. \end{thm}
\begin{proof}
We may assume that $P$ is a complete Boolean algebra. Note that it follows that $e$ is complete since $P$ has the $\mu^{+}$-c.c. and $\mathrm{crit}(\dot{j}) = \mu^{+}$. Indeed, for every maximal anti-chain $\mathcal{A} \subseteq P$, by $|\mathcal{A}| <\mu^{+}$,
\begin{align*}
\textstyle\sum e ``\mathcal{A} &= \textstyle\sum_{p \in \mathcal{A}}\langle 1,\dot{j}(p) \rangle = \textstyle\sum_{p \in \mathcal{A}}||\dot{j}(p) \in \dot{H}||\\ & = ||j `` \mathcal{A} \cap \dot{H} \not= \emptyset|| = ||j(\mathcal{A}) \cap \dot{H}\not= \emptyset|| \\ &= 1
\end{align*} Here, $\dot{G} \ast \dot{H}$ is the canonical $\mathcal{P}(Z) / I \ast \dot{j}(P)$-name for a generic filter.
Our proof consists of two parts. First, we will give a $P$-name $\dot{J}$ and a dense embedding $\tau_0:P \ast \dot{\mathcal{P}}(Z) / \dot{J} \to \mathcal{B}(\mathcal{P}(Z) / I \ast \dot{j}(P))$. After that, we will see that $P \Vdash \dot{J} = \overline{I}$ and $\tau_0 = \tau$.
Let $\dot{J}$ be a $P$-name defined by $P \Vdash \dot{J} \subseteq \dot{\mathcal{P}}(Z)$ and \begin{center}
$A \in \dot{J}$ if and only if $\mathcal{P}(Z) / I \ast \dot{j}(P) / e ``\dot{H}_0 \Vdash [\mathrm{id}] \not\in \dot{j}(\dot{A})$. \end{center} It is easy to see that $\dot{J}$ is forced to be an ideal.
Define $\tau_0:P \ast \dot{\mathcal{P}}(Z) / \dot{J} \to \mathcal{B}(\mathcal{P}(Z) / I \ast \dot{j}(P))$ by $\tau_0(p,\dot{A}) = e(p) \cdot ||[\mathrm{id}] \in \dot{j}(\dot{A})||$. By the definition of $\dot{J}$, if $P \not\Vdash \dot{A} \in \dot{J}$ then $||[\mathrm{id}] \in \dot{j}(\dot{A})|| \not= 0$.
Let us see that the range of $\tau_0$ is a dense subset. Let $\langle B,\dot{q} \rangle \in \mathcal{P}(Z) / I \ast \dot{j}(P)$ be an arbitrary element. Since $I$ is $\lambda^{+}$-saturated, we can choose $f:Z \to P$ such that $B \Vdash \dot{q} = [f]$. Since $e$ is complete, $\langle B,\dot{q}\rangle$ has a reduct $p \in P$. For every $r \leq p$, $e(r) \cdot \langle B,\dot{q} \rangle \not= 0$ and this forces $\dot{j}(f)([\mathrm{id}]) = [f] = \dot{q} \in \dot{H} = \dot{j}(\dot{H}_0)$. Therefore $p$ forces that $\mathcal{P}(Z) / I \ast \dot{j}(P) / e ``\dot{H}_0 \not\Vdash [\mathrm{id}] \not\in \dot{j}(\{x \in B \mid f(x) \in \dot{H}_0\})$. Thus, there is a $P$-name $\dot{A}$ such that $P \Vdash \dot{A} \in \dot{J}^{+}$ and $p \Vdash \dot{A} = \{x \in B \mid f(x) \in \dot{H}_0\}$. It is easy to see that $\tau_0(p,\dot{A}) = e(p) \cdot ||[\mathrm{id}] \in \dot{j}(\dot{A})|| \leq \langle B,\dot{q} \rangle$, as desired.
Lastly, we claim that $P \Vdash \dot{J} = \overline{I}$. $P \Vdash \overline{I} \subseteq \dot{J}$ is clear. To show $P \Vdash \dot{J} \subseteq \overline{I}$, let us consider $p \Vdash \dot{C} \in \overline{I}^{+}$. We let $D = \{x \in Z \mid ||x \in \dot{C}||_{P} \cdot p \not= 0\} \in I^{+}$. $D$ forces $\dot{j}(p) \cdot ||[\mathrm{id}] \in \dot{j}(\dot{C})||_{\dot{j}(P)}^{\dot{M}} \not= 0$. Let $\dot{q}$ be a $\mathcal{P}(Z)/ I$-name such that $\Vdash \dot{q} \in \dot{j}(P)$ and $D\Vdash \dot{q} = \dot{j}(p) \cdot ||[\mathrm{id}] \in \dot{j}(\dot{C})||^{\dot{M}}_{\dot{j}(P)}$. Let $r$ be a reduct of $\langle D,\dot{q}\rangle \in \mathcal{P}(Z) / I \ast \dot{j}(P)$. It is easy to see that $r\leq p$ and $r \Vdash$ ``$\langle D,\dot{q}\rangle \leq ||[\mathrm{id}] \in \dot{j}(\dot{C})||$ in the quotient forcing''. By the definition of $\dot{J}$, $r \Vdash \dot{C} \in \dot{J}^{+}$, as desired. Of course, $\tau = \tau_0$. The proof is completed. \end{proof} The following corollaries are not needed for the proofs of the main theorems, but we include them here. \begin{coro}[Baumgartner--Taylor~\cite{MR654852}]\label{baumgartnertaylor}
For a saturated ideal $I$ over $\mu^{+}$ and $\mu^{+}$-c.c. $P$, the following are equivalent: \begin{enumerate}
\item $P \Vdash \overline{I}$ is saturated.
\item $\mathcal{P}(\mu^{+})/ I \Vdash \dot{j}(P)$ has the $(\mu^{++})^{V}$-c.c. \end{enumerate} \end{coro} In particular, for a saturated ideal $I$ over $\mu^{+}$, if $P$ is $\mu$-centered then $P \Vdash \overline{I}$ is saturated. Corollary \ref{baumgartnertaylor} is one of the improvements of Lemma \ref{foremanfst}. Some Prikry-type forcings are $\mu$-centered, and therefore $\overline{I}$ is forced to be saturated by these posets. Using Theorem \ref{duality}, we can obtain necessary conditions for $\overline{I}$ to be centered or strongly saturated. We prove the following corollary because it is the motivation for Theorem \ref{maintheorem1}. \begin{coro}
For a saturated ideal $I$ over $\mu^{+}$ and $\mu$-centered poset $P$, if $(\mu^{+})^{\mu} = \mu^{+}$ then the following holds. \begin{enumerate}
\item If $P \Vdash \overline{I}$ is centered then $I$ is centered.
\item If $P \Vdash \overline{I}$ is strongly saturated then $I$ is strongly saturated. \end{enumerate} \end{coro} \begin{proof} We may assume that $P$ is a Boolean algebra (not necessarily complete).
Let $e:P \to (\mathcal{P}(\mu^{+})/ I \ast \dot{j}(P))$ be a complete embedding given in Theorem \ref{duality}. Then $P \Vdash \mathcal{P}(\mu^{+}) / \overline{I} \simeq \mathcal{P}(\mu^{+})/ I \ast \dot{j}(P) / \dot{G}$. For every $A \in I^{+}$, $P \Vdash \langle A,\dot{1}\rangle \in \mathcal{P}(\mu^{+})/ I \ast \dot{j}(P) / \dot{G}$. Indeed, for every $p \in P$, $e(p) \cdot \langle A,1\rangle = \langle A,\dot{j}(p) \rangle \in \mathcal{P}(\mu^{+}) / I \ast \dot{j}(P)$. It is easy to see that $\mathcal{P}(\mu^{+}) / I$ is completely embedded in $\mathcal{P}(\mu^{+}) / I \ast \dot{j}(P) / \dot{G}$ by a mapping $A \mapsto \langle A,\dot{1}\rangle$.
We check (1). Let $\langle P_\alpha \mid \alpha < \mu\rangle$ be a centering family of $P$ and let $\dot{f}$ be a $P$-name for a centering function of $(\mathcal{P}(\mu^{+}) / I)^{V}$. We may assume that each $P_{\alpha}$ is a filter. For each $A \in I^{+}$, define $f(A) = \langle \xi_\alpha \mid \alpha < \mu \rangle$, where $\xi_\alpha$ is the unique $\xi$ such that $\exists q \in P_{\alpha}(q \Vdash \dot{f}(A) = \xi)$, if such a $\xi$ exists. By $(\mu^{+})^{\mu} =\mu^{+}$, we may identify the range of $f$ with a subset of $\mu^{+}$. It is easy to see that $f$ works as a centering function in $V$. Therefore $I$ is centered.
Let us see (2). Similarly, $P$ forces that $(\mathcal{P}(\mu^{+}) / I)^{V}$ has the $(\mu^{++},\mu^{++},\mu)$-c.c. For every $X \in [I^{+}]^{\mu^{++}}$, there is a $P$-name $\dot{Y}$ such that $P \Vdash \dot{Y} \in [X]^{\mu^{++}}$ and $\forall Z \in [\dot{Y}]^{\mu}(\bigcap Z \in I^{+})$. Since $P$ is $\mu$-centered, $|P| \leq 2^{\mu} = \mu^{+}$. Therefore there are a $Y \in [X]^{\mu^{++}}$ and a $p \in P$ such that $p \Vdash Y \in [\dot{Y}]^{\mu^{++}}$. $Y$ works as a witness. \end{proof}
\subsection{Prikry-type forcings}\label{prikrytypeforcings} Modifications of Prikry forcing are called Prikry-type forcings. The original Prikry forcing was introduced by Prikry~\cite{prikry}. For a given filter $F$ over $\mu$, $\mathcal{P}_{F}$ is $[\mu]^{<\omega} \times F$ ordered by $\langle a, X \rangle \leq \langle b, Y\rangle$ if and only if $a \supseteq b$, $a \cap (\max{b} + 1) = b$ and $(a\setminus b) \cup X \subseteq Y$. Prikry forcing is $\mathcal{P}_{U}$ for some normal ultrafilter $U$. It is easy to see that $\mathcal{P}_{U}$ is $(\mu,<\mu)$-centered. Prikry forcing preserves all cardinals and forces $\mathrm{cf}(\mu) = \omega$. \begin{lem}\label{prikrylem}
Suppose that $U$ is a normal ultrafilter over $\mu$. For every $a \in [\mu]^{<\mu}$ and statement $\sigma$ in the forcing language of $\mathcal{P}_{U}$, there is $Z \in U$ such that $\langle a,Z \rangle$ decides $\sigma$. That is, $\langle a,Z \rangle \Vdash \sigma$ or $\langle a,Z\rangle \Vdash \lnot \sigma$. \end{lem} \begin{proof}
See~\cite{Gitik} or~\cite{MR4404936}. \end{proof}
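The $(\mu,<\mu)$-centeredness of $\mathcal{P}_{U}$ noted above can be seen by grouping conditions by their stems; the following display is a sketch using the $\mu$-completeness of $U$:

```latex
% For a fixed stem a \in [\mu]^{<\omega} and \delta < \mu many conditions
% \langle a, X_i \rangle (i < \delta) with X_i \in U,
\Big\langle a, \bigcap_{i<\delta} X_i \Big\rangle \leq \langle a, X_i \rangle
\quad \text{for all } i < \delta,
% where \bigcap_{i<\delta} X_i \in U by the \mu-completeness of U.
% Since |[\mu]^{<\omega}| = \mu, the sets \{\langle a, X \rangle \mid X \in U\}
% (for a \in [\mu]^{<\omega}) form a (\mu,<\mu)-centering family.
```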
We often use the following variation of Lemma \ref{prikrylem}. \begin{lem}\label{prikrycondi}
Suppose that $U$ is a normal ultrafilter over $\mu$ and $\mathcal{A} \subseteq \mathcal{P}_{{U}}$ is a maximal anti-chain below $\langle a,X \rangle$. Then there are $n < \omega$ and $Z\in U$ with $Z \subseteq X$ such that $\{\langle b,Y \rangle \in \mathcal{A}\mid |b| = n\}$ is a maximal anti-chain below $\langle a,Z \rangle$. \end{lem} \begin{proof}
Suppose that $\mathcal{A}$ is a maximal anti-chain below $\langle a,X \rangle$. For each $n < \omega$, by Lemma \ref{prikrylem}, there is a $Z_{n} \in U$ such that $\langle a,Z_n \rangle$ decides $\exists \langle b,Y\rangle \in \dot{G} \cap \mathcal{A}(|b| = n)$. $Z = X \cap\bigcap_{n}Z_n$ works. \end{proof}
The following lemma will be used in the proofs of Theorem \ref{maintheorem1}(2) and Proposition \ref{hugeprop}. \begin{lem}\label{prikryequiv}
For posets $P \lessdot Q$, let $\dot{U}$ and $\dot{W}$ be a $P$-name and a $Q$-name for filters over $\mu$, respectively. If $Q \Vdash$ ``$\dot{U} \subseteq \dot{W}$ and $\dot{W}$ is a normal ultrafilter over $\mu$'', then the following are equivalent: \begin{enumerate}
\item $P \ast \mathcal{P}_{\dot{U}} \lessdot Q \ast \mathcal{P}_{\dot{W}}$.
\item $P \Vdash \dot{U}$ is an ultrafilter. \end{enumerate} \end{lem} \begin{proof}
We may assume that $P$ and $Q$ are Boolean algebras.
Let us show the forward direction by proving the contrapositive. Suppose that there are $p \in P$ and $\dot{X}$ such that $p \Vdash \dot{X} \not\in \dot{U}$ and $\mu \setminus \dot{X} \not\in \dot{U}$. Then there is an extension $q \in Q$ of $p$ which decides $\dot{X} \in \dot{W}$. We may assume that $q$ forces $\dot{X} \in \dot{W}$. We claim that there is no reduct of $\langle q,\langle \emptyset,\dot{X} \rangle \rangle$ in $P \ast \mathcal{P}_{\dot{U}}$.
For any $\langle r,\langle a,\dot{Y} \rangle \rangle \in P \ast \mathcal{P}_{\dot{U}}$, if $r$ is not a reduct of $q$ (in the sense of $P \lessdot Q$), there is nothing to do. Suppose that $r$ is a reduct of $q$. Then we have $r \leq p$. By $r \Vdash \dot{X} \not\in \dot{U}$, $r \Vdash |\dot{Y} \setminus \dot{X}| = \mu$. Choose $r' \leq r$ and $\alpha$ with $r' \Vdash \alpha \in \dot{Y} \setminus (\dot{X} \cup (\max a + 1))$. Then $\langle r',\langle a \cup \{\alpha\},\dot{Y} \rangle \rangle \leq \langle r,\langle a,\dot{Y} \rangle \rangle$ is incompatible with $\langle q,\langle \emptyset,\dot{X} \rangle \rangle$, as desired.
The converse direction follows from Lemma \ref{prikrycondi}. For a maximal anti-chain $\mathcal{A} \subseteq P \ast \mathcal{P}_{\dot{U}}$, consider the $P$-name $\dot{\mathcal{B}}$ such that $P \Vdash \dot{\mathcal{B}} = \{\langle a,X \rangle \mid \exists p \in \dot{G}(\langle p,\langle a,X\rangle \rangle \in \mathcal{A})\}$. $\dot{\mathcal{B}}$ is forced to be a maximal anti-chain. It is enough to prove that $Q$ forces that $\dot{\mathcal{B}}$ is a maximal anti-chain in $\mathcal{P}_{\dot{W}}$. For every $p \Vdash \langle a,\dot{X} \rangle \in \mathcal{P}_{\dot{W}}$, since $P$ forces that $\dot{\mathcal{B}}$ is a maximal anti-chain, there are $p' \leq p$, $n$, and a $P$-name $\dot{Z}$ such that $p' \Vdash$ ``$\{\langle b,Y \rangle \in \dot{\mathcal{B}} \mid |b| = n\}$ is a maximal anti-chain below $\langle a,\dot{Z} \rangle \in \mathcal{P}_{\dot{U}}$''. If $n \leq |a|$, there is a $\dot{Y}$ such that $p' \Vdash \langle b,\dot{Y} \rangle \in \dot{\mathcal{B}} \land a \setminus b \subseteq \dot{Y}$. Here, $b$ is the set of the first $n$ elements of $a$. Thus, it is forced that $\langle a,\dot{X} \cap \dot{Y} \rangle \leq \langle b,\dot{Y}\rangle, \langle a, \dot{X} \rangle$.
If $n > |a|$, we can choose $p'' \leq p'$ and $\alpha_0,...,\alpha_{n-|a|-1}$ with $p''\Vdash \{\alpha_i \mid i < n-|a|\} \in [(\dot{X} \cap \dot{Z}) \setminus (\max{a} + 1)]^{n-|a|}$. Let $c = a \cup \{\alpha_{i}\mid i < n-|a|\}$. $p''$ forces that $\langle c,\dot{Z} \rangle \leq \langle a,\dot{Z}\rangle$ is compatible with an element of $\dot{\mathcal{B}}$. Since $|c| =n$, there is a $\dot{Y}$ with $p'' \Vdash \langle c,\dot{Y} \rangle \in \dot{\mathcal{B}}$. In particular, it is forced that $\langle c,\dot{Y} \cap \dot{X}\rangle$ is a common extension of $\langle c,\dot{Y} \rangle$ and $\langle a,\dot{X} \rangle$, as desired. \end{proof}
We introduce two forcing notions that are variations of Prikry forcing. The first one is Woodin's modification~\cite{MR1007865}, which changes a measurable cardinal into $\aleph_{\omega}$. For a normal ultrafilter $U$ over $\mu$, let $j_{U}$ denote the ultrapower mapping $j_{U}:V \to M_U \simeq \mathrm{Ult}(V,U)$. If we suppose $2^{\mu} = \mu^{+}$ then $|j_{U}(\mu)| = 2^{\mu} = \mu^{+}$. This shows the following. \begin{lem}\label{guidinggeneric}
If $2^{\mu} = \mu^{+}$ then there is a $(M_U,\mathrm{Coll}(\mu^{+},<j_U(\mu))^{M_U})$-generic filter $\mathcal{G}$. \end{lem} \begin{proof}
Since $\mathrm{Coll}(\mu^{+},<j_U(\mu))^{M_U}$ has the $j_U(\mu)$-c.c. in $M_{U}$ and $|j_U(\mu)^{<j_U(\mu)}| = |j_U(\mu)| = \mu^{+}$, we can enumerate the maximal anti-chains of $\mathrm{Coll}(\mu^{+},<j_U(\mu))^{M_U}$ that belong to $M_U$ as $\langle \mathcal{A}_\alpha \mid \alpha < \mu^{+} \rangle$. Because $\mathrm{Coll}(\mu^{+},<j_U(\mu))^{M_U}$ is $\mu^{+}$-closed, the standard argument yields a filter $\mathcal{G}$ that meets every $\mathcal{A}_{\alpha}$. \end{proof} We call this $\mathcal{G}$ a guiding generic of $U$. $\mathcal{P}_{U,\mathcal{G}}$ consists of $\langle a,f,X,F\rangle$ such that \begin{itemize}
\item $a = \{\alpha_1,...,\alpha_{n-1}\} \in [\Psi]^{<\omega}$.
\item $f = \langle f_0,...,f_{n-1}\rangle \in \prod_{i < n}\mathrm{Coll}(\alpha_i^{+},<\alpha_{i+1})$. Here, $\alpha_0$ and $\alpha_{n}$ denote $\omega$ and $\mu$, respectively.
\item $X\in U$ and $X \subseteq \Psi$.
\item $F \in \prod_{\alpha \in X} \mathrm{Coll}(\alpha^{+},<\mu)$ and $[F]\in \mathcal{G}$. \end{itemize} Here, $\Psi= \{\alpha < \mu\mid \alpha$ is inaccessible and $2^{\alpha} = \alpha^{+}\}$.
$\mathcal{P}_{U,\mathcal{G}}$ is ordered by $\langle a,f,X,F\rangle \leq \langle b,g,Y,H\rangle$ if and only if $\langle a,X\rangle \leq \langle b,Y\rangle$ in $\mathcal{P}_{U}$, $\forall i < |g|\,(f_i \supseteq g_i)$, $\forall i \in [|g|,|f|)\,(f_i \supseteq H(\alpha_i))$, and $\forall \alpha \in X(F(\alpha)\supseteq H(\alpha))$. It is easy to see that $\mathcal{P}_{U,\mathcal{G}}$ is $(\mu,<\mu)$-centered.
Lemmas \ref{modificationprikrylemma} and \ref{modificationprikrycondi} are analogues of Lemmas \ref{prikrylem} and \ref{prikrycondi} for $\mathcal{P}_{U,\mathcal{G}}$, respectively. \begin{lem}\label{modificationprikrylemma}
For any $\langle a,f,X,F\rangle$ and $\sigma$, there is an $\langle a,f,Z,I\rangle$ such that, if $\langle b,g,Y,G\rangle\leq \langle a,f,Z,I\rangle$ decides $\sigma$ then $\langle a,g\upharpoonright |a|,Z,I\rangle$ decides $\sigma$. \end{lem} This shows that $\mu$ remains a cardinal after forcing with $\mathcal{P}_{U,\mathcal{G}}$. A density argument shows that $\mathcal{P}_{U,\mathcal{G}} \Vdash \mu = \aleph_{\omega}$. \begin{lem}\label{modificationprikrycondi}
For any $\langle a,f,X,F\rangle$ and maximal anti-chain $\mathcal{A}$ below $\langle a,f,X,F\rangle$, there are $n,f',Z,I$ such that $\{\langle b,g,Y,H\rangle \in \mathcal{A} \mid |b| = n\}$ is a maximal anti-chain below $\langle a,f',Z,I\rangle$. \end{lem}
The other is Magidor forcing~\cite{Magidor1978changing}. Magidor forcing uses a sequence of normal ultrafilters over $\mu$ instead of a single normal ultrafilter. For the definition of Magidor forcing and its details, we refer to~\cite{Magidor1978changing} or~\cite{MR4404936}.
\begin{thm}[Magidor~\cite{Magidor1978changing}]\label{magidorforcing} Suppose $\mu$ is supercompact and $\nu < \mu$ is regular. Then there is a poset $P$ such that \begin{itemize}
\item $P$ is $(\mu,<\mu)$-centered.
\item $P$ adds no new subsets of $\nu$. Thus, the regularity of cardinals $\leq\nu$ is preserved.
\item $P$ preserves all cardinals.
\item $P \Vdash \mathrm{cf}(\mu) = \nu$. \end{itemize} \end{thm}
\subsection{Combinatorics}
The notion of polarized partition relations was introduced by Erd\H{o}s--Hajnal--Rado~\cite{MR202613}. $\polar{\kappa_0}{\kappa_1}{\lambda_0}{\lambda_1}{\theta}$ states that, for every $f:\kappa_0 \times \kappa_1 \to \theta$, there are $H_0 \in [\kappa_0]^{\lambda_0}$ and $H_1 \in [\kappa_1]^{\lambda_1}$ such that $|f ``H_0 \times H_1| \leq 1$. $\polar{\kappa_0}{\kappa_1}{\kappa_0}{\kappa_1}{\theta}$ is the strongest form. This form sometimes holds trivially. Indeed, if $\mathrm{cf}(\kappa_0) > \theta^{\kappa_1}$ then $\polar{\kappa_0}{\kappa_1}{\kappa_0}{\kappa_1}{\theta}$ holds. But under the GCH, the non-trivial case cannot hold: \begin{thm}[Erd\H{o}s--Hajnal--Rado~\cite{MR202613}]
If $2^{\mu} = \mu^{+}$ then $\npolar{\mu^{+}}{\mu}{\mu^{+}}{\mu}{2}$. \end{thm} We are interested in how strong $\polar{\mu^{+}}{\mu}{\lambda_0}{\lambda_1}{\theta}$ can be under the GCH. If $\mu$ is a limit cardinal, $\polar{\mu^{+}}{\mu}{\mu}{\mu}{<\mu}$ sometimes holds (for example, see \cite{MR1833480}, \cite{MR1606515}, and \cite{MR0371655}). On the other hand, for successor cardinals, a negative partition relation is known (Theorem \ref{kurepaimpliesnpp}). \begin{thm}[Folklore?]\label{kurepaimpliesnpp}
If there is a Kurepa tree on $\mu^{+}$ then $\npolar{\mu^{++}}{\mu^{+}}{2}{\mu^{+}}{\mu}$ holds. \end{thm} Therefore $\polar{\mu^{++}}{\mu^{+}}{2}{\mu^{+}}{\mu}$ is a large cardinal property. Erd\H{o}s--Hajnal~\cite{MR0280381} asked whether or not $\polar{\aleph_2}{\aleph_1}{\aleph_0}{\aleph_1}{2}$ is consistent. To solve this, the notion of a strongly saturated ideal was introduced by Laver, who proved the following. \begin{thm}[Laver~\cite{MR673792}]\label{stronglysatimplypp}
Suppose that $2^{\mu} = \mu^{+}$ and $\mu^{+}$ carries a strongly saturated ideal. Then $\polar{\mu^{++}}{\mu^{+}}{\mu^{+}}{\mu^{+}}{\mu}$ holds. \end{thm} The assumption of Theorem \ref{stronglysatimplypp} implies $2^{\mu^+} = \mu^{++}$, and thus, $\npolar{\mu^{++}}{\mu^{+}}{\mu^{++}}{\mu^{+}}{2}$. We will discuss $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ for $\nu \in [2,\mu^{+}]$ in Section \ref{ccandpp}.
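The following display sketches the folklore argument behind Theorem \ref{kurepaimpliesnpp}; it assumes an enumeration of each level of the Kurepa tree in order type at most $\mu$, and the coloring $f$ below is ours, for illustration:

```latex
% Let T be a Kurepa tree on \mu^{+} with distinct cofinal branches
% \langle b_\xi \mid \xi < \mu^{++} \rangle, and enumerate each level
% \mathrm{Lev}_\eta(T) in order type \leq \mu.  Define f:\mu^{++} \times \mu^{+} \to \mu by
f(\xi,\eta) = \text{the index of } b_\xi(\eta) \text{ in the enumeration of } \mathrm{Lev}_\eta(T).
% If |f``(H_0 \times H_1)| \leq 1 with |H_1| = \mu^{+}, then for any
% \xi,\xi' \in H_0 we get b_\xi(\eta) = b_{\xi'}(\eta) for all \eta \in H_1,
% hence b_\xi = b_{\xi'}.  So no such H_0 has two elements, witnessing
% the negative relation of Theorem \ref{kurepaimpliesnpp}.
```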
We recall basic properties of Chang's conjecture, which will be used in Section 4 and in the proof of Theorem \ref{maintheorem3}.(5).
For cardinals $\lambda \geq \lambda'$ and $\kappa \geq \kappa' \geq \mu$, we need a strengthening of Chang's conjecture, $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$, which was introduced by Shelah~\cite{MR1126352}. We say $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$ holds if every structure $\langle \lambda,\lambda',\in,...\rangle$ of a language of size $\mu$ has an elementary substructure $\langle X,X \cap \lambda',\in,...\rangle$ such that $|X| = \kappa$, $|X \cap \lambda'| = \kappa'$, and $\mu \subseteq X$. Note that $(\lambda,\lambda') \twoheadrightarrow_{\omega} (\kappa,\kappa')$ is the same as $(\lambda,\lambda') \twoheadrightarrow (\kappa,\kappa')$. It is easy to see the following. \begin{lem}\label{changchar}
The following are equivalent: \begin{enumerate}
\item $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$ holds.
\item Any structure of a countable language $\langle \lambda,\lambda',\in,...\rangle$ has a substructure $\langle X,X \cap \lambda',\in,...\rangle$ such that $|X| = \kappa$, $|X \cap \lambda'| = \kappa'$, and $\mu \subseteq X$.
\item For every $f:{^{<\omega}}\lambda \to \lambda$, there is an $X \in [\lambda]^{\kappa}$ such that $X$ is closed under $f$, $|X \cap \lambda'| = \kappa'$, and $\mu \subseteq X$. \end{enumerate} \end{lem} \begin{proof} $(1) \to(2) \to (3)$ is easy. We check $(3) \to (1)$.
For any structure $\mathcal{A} = \langle \lambda,\lambda',\in,...\rangle$ of a language of size $\mu$, there is a complete set of Skolem functions $\{f_{\xi} \mid \xi < \mu\}$. Define $g:{^{<\omega}\lambda} \to \lambda$ by \begin{center}
$g(a) = \begin{cases}f_{\alpha}(b) & a = \langle \alpha,b\rangle\text{ for some }\alpha < \mu\\ 0 & \text{otherwise}
\end{cases}.$\end{center}
By the assumption, there is an $X \in [\lambda]^{\kappa}$ such that $X$ is closed under $g$, $|X \cap \lambda'| = \kappa'$, and $\mu \subseteq X$. Then $\langle X,X \cap \lambda' ,\in ,... \rangle \prec \mathcal{A}$ witnesses $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$. \end{proof} Chang's conjecture follows from the existence of certain elementary embeddings. \begin{lem}\label{changsuff}
Suppose that $j$ is an elementary embedding from $V$ to $M$ which is defined in an outer model. For cardinals $\lambda \geq \lambda'$ and $\kappa \geq \kappa' \geq \mu$ in $V$, suppose that \begin{itemize}
\item $\mathrm{crit}(j) > \mu$,
\item $j``\lambda \in M$,
\item $j(\kappa) = |j``\lambda|$ in $M$, and
\item $j(\kappa') = |j``\lambda'|$ in $M$. \end{itemize}
Then $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$. \end{lem} \begin{proof}
For any $\mathcal{A} = \langle \lambda,\lambda',\in,...\rangle$, $\mathcal{B} = \langle j``\lambda,j``\lambda',\in,... \rangle$ is an elementary substructure of $j(\mathcal{A})$ which witnesses, in $M$, that $j(\mathcal{A})$ has an elementary substructure $X$ with $|X| = j(\kappa)$, $|X \cap j(\lambda')| = j(\kappa')$, and $j(\mu) = \mu \subseteq X$. By the elementarity of $j$, $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$ holds in $V$. \end{proof}
\begin{lem}[Folklore]\label{ccpreserved}
$(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$ is preserved by $\mu^{+}$-c.c. posets. \end{lem} \begin{proof}This proof is due to Eskew--Hayut~\cite{MR3748588}.
Let $P$ be a $\mu^{+}$-c.c. poset. Assume $(\lambda,\lambda') \twoheadrightarrow_{\mu} (\kappa,\kappa')$. For each $p \Vdash \dot{f} :{^{<\omega}}\lambda \to \lambda$ and $a \in {^{<\omega}}\lambda$, by the $\mu^{+}$-c.c., there is an $X_{a} \in [\lambda]^{\leq\mu}$ such that $p \Vdash \dot{f}(a) \in X_{a}$. Define $g(a)$ by \begin{center}
$g(a) = \begin{cases}\text{the }\alpha\text{-th element in }X_{b} & a = \langle \alpha,b\rangle\text{ for some }\alpha < \mu\\ 0 & \text{otherwise}
\end{cases}.$ \end{center}
By Lemma \ref{changchar}, we have an $X \in [\lambda]^{\kappa}$ closed under $g$ such that $|X \cap \lambda'| = \kappa'$ and $\mu \subseteq X$. Note that each $X_{a}$ is of size $\leq \mu$ and $\mu \subseteq X$. For every $a \in {^{<\omega}X}$, we have \begin{center}
$p \Vdash \dot{f}(a) \in X_{a} \subseteq \{g(\alpha,a) \mid \alpha < \mu\} \subseteq g``({^{<\omega}}X) \subseteq X$. \end{center} By Lemma \ref{changchar}, the proof is completed. \end{proof}
\section{Centeredness and Layeredness}\label{centeredlayered} In this section, we show Theorem \ref{maintheorem1}. Lemma \ref{termcentered} is essentially due to Foreman. We will show a more general result as Lemma \ref{generalizedpreservation}. Our proof of Lemma \ref{termcentered} is a prototype of that of Lemma \ref{generalizedpreservation}.
To study the centeredness, we use the notion of the term forcing. For a poset $P$ and a $P$-name $\dot{Q}$ for a poset, the term forcing $T(P,\dot{Q})$ is a complete set of representatives from $\{\dot{q} \mid \Vdash \dot{q} \in \dot{Q}\}$ with respect to the canonical equivalence relation. $T(P,\dot{Q})$ is ordered by $\dot{q} \leq \dot{q}' \leftrightarrow \Vdash \dot{q} \leq \dot{q}'$. The following is known as the basic lemma of the term forcing.
\begin{lem}[Laver]\label{laverbasiclemma}
$\mathrm{id}:P \times T(P,\dot{Q}) \to P \ast \dot{Q}$ is a projection. In particular, $P \Vdash$ there is a projection from $T(P,\dot{Q})$ to $\dot{Q}$. \end{lem}
\begin{lem}[Foreman]\label{termcentered}
Suppose that $P$ is $(\mu,<\nu)$-centered and $I$ is a $(\mu^{+},<\nu)$-centered ideal over $\mu^{+}$. If $2^{\mu} = \mu^{+}$ then $T(P,\dot{\mathcal{P}}(\mu^{+})/\overline{I})$ is $(\mu^{+},<\nu)$-centered.
In particular, if $P$ is $\nu$-Baire then $P$ forces that $\overline{I}$ is $(\mu^{+},<\nu)$-centered. \end{lem} \begin{proof}
Let $f:I^{+} \to \mu^{+}$ be a $(\mu^{+},<\nu)$-centering function. Let $\{P_{\alpha} \mid \alpha < \mu\}$ be a $(\mu,<\nu)$-centering family of $P$. We may assume that each $P_\alpha$ is a $<\nu$-complete filter.
We want to define a $(\mu^{+},<\nu)$-centering function $h:T(P,\dot{\mathcal{P}}(\mu^{+})/\overline{I}) \to \mu^{+}$. For each $\dot{A} \in T(P,\dot{\mathcal{P}}(\mu^{+})/\overline{I})$, $B = \{\xi < \mu^{+} \mid ||\xi \in \dot{A} || \not=0\} \in I^{+}$. For each $\alpha < \mu$, if we let $B_{\alpha} = \{\xi < \mu^{+} \mid ||\xi \in \dot{A}|| \in P_{\alpha}\}$ then $B = \bigcup_{\alpha < \mu}B_{\alpha}$. Since $I$ is $\mu^{+}$-complete, there is an $\alpha < \mu$ with $B_{\alpha} \in I^{+}$.
Define $h(\dot{A})$ by $\langle f(B_{\alpha}) \mid \alpha < \mu, B_{\alpha} \in I^{+}\rangle$. Note that the range of $h$ is contained in ${^{\leq\mu}}\mu^{+}$, which has size $\mu^{+}$ since $2^{\mu} = \mu^{+}$. Therefore $h$ can be seen as a mapping into $\mu^{+}$.
For $\{\dot{A}_{i} \mid i < \nu'\} \in [T(P,\dot{\mathcal{P}}(\mu^{+})/\overline{I})]^{<\nu}$, suppose that $h(\dot{A}_i) = d$ for all $i < \nu'$. For each $i$ and $\alpha < \mu$, let $B_{\alpha}^{i} = \{\xi < \mu^{+} \mid \exists q \in P_{\alpha} (q \Vdash \xi \in \dot{A}_{i})\}$. We want to show $P \Vdash \bigcap_{i} \dot{A}_i \in \overline{I}^{+}$; it suffices to find, below each $p \in P$, a condition forcing this.
\begin{clam}\label{kanamorilemma}
There is an $A \subseteq \mu^{+}$ such that $A \in I$ and $P \Vdash \bigcap_{i} \dot{A}_{i} \setminus A \not= \emptyset \to \bigcap_{i} \dot{A}_{i} \in \overline{I}^{+}$.
\end{clam}
\begin{proof}[Proof of Claim]
Our proof is based on the proof in \cite[Theorem 17.1]{MR1994835}. Let $\mathcal{A} \subseteq P$ be a maximal subset such that \begin{itemize}
\item $\mathcal{A}$ is an anti-chain.
\item $\forall p\in \mathcal{A} \exists A_{p}\in I(p \Vdash \bigcap_{i}\dot{A}_i \subseteq A_p)$. \end{itemize}
We note $\sum \mathcal{A} = ||\bigcap_{i}\dot{A}_i \not\in \overline{I}^{+}||$. By the $\mu^{+}$-c.c. of $P$, $|\mathcal{A}| \leq \mu$. Let $A = \bigcup_{p \in \mathcal{A}} A_p$. By the $\mu^{+}$-completeness of $I$, $A \in I$. For every $p \in P$, if $p \Vdash \bigcap_i \dot{A}_i \setminus A \not= \emptyset$ then $p$ and $\sum \mathcal{A}$ are incompatible, and thus $p$ forces $\bigcap_i \dot{A}_i \in \overline{I}^{+}$.
\end{proof}
For each $p \in P$ and $j < \nu'$, there is an $\alpha < \mu$ such that $B_{\alpha}^j \in {I}^{+}$ and $p \in P_{\alpha}$. By the assumption that $h(\dot{A}_i) = d$ for all $i$, we have $f(B_{\alpha}^{i}) = f(B_\alpha^{j})$ for all $i, j < \nu'$. Since $f$ is $(\mu^{+},<\nu)$-centering, $\bigcap_{i}B_{\alpha}^{i} \in I^{+}$. We can choose $\xi \in \bigcap_{i}B_{\alpha}^{i} \setminus A$. By the definition of $B_{\alpha}^{i}$, for each $i < \nu'$ there exists $q_{i} \in P_{\alpha}$ which forces $\xi \in \dot{A}_{i}$. Since $P_{\alpha}$ is a $<\nu$-complete filter, $q := p \cdot \prod_{i}q_{i} \in P_{\alpha}$. Then $q \leq p$ forces $\xi \in \bigcap_{i}\dot{A}_i \setminus A$. By the claim, $q \Vdash \bigcap_i \dot{A}_i \in \overline{I}^{+}$, as desired.
If $P$ is $\nu$-Baire, then $T(P,\dot{\mathcal{P}}(\mu^{+})/\overline{I})$ remains $(\mu^{+},<\nu)$-centered after forcing with $P$. By Lemma \ref{laverbasiclemma}, $P \Vdash \overline{I}$ is $(\mu^{+},<\nu)$-centered, as desired. \end{proof} Next, we deal with layeredness. Let us describe a sufficient condition for the quotient forcing \emph{not} to be $S$-layered. We say that $Q$ is nowhere $S$-layered if $Q\upharpoonright q$ is not $S$-layered for all $q \in Q$.
\begin{lem}\label{quotientnotlayered}
Suppose that $Q$ is nowhere $S$-layered for some $S \subseteq E^{\mu^{++}}_{\mu^{+}}$, and $Q$ is of size $\mu^{++}$. We also assume that there is a complete embedding $\tau$ from a $\mu^{+}$-c.c. poset $P$ to $Q$. Then $P \Vdash Q / \dot{G}$ is not $S$-layered. \end{lem} \begin{proof}
Suppose otherwise. That is, there is a $p \in P$ which forces that $Q/ \dot{G}$ is $S$-layered. By the assumption and Lemma \ref{charlayered}, we can fix $P$-names $\dot{R}_{\alpha}$ such that \begin{itemize}
\item $p \Vdash \dot{R}_{\alpha}\lessdot Q / \dot{G}$ for each $\alpha$.
\item $p \Vdash \alpha < \beta \to \dot{R}_{\alpha}\subseteq \dot{R}_{\beta}$.
\item $p \Vdash$ there is a club $C \subseteq \mu^{++}$ such that $\forall \alpha \in C \cap S(\dot{R}_{\alpha} = \bigcup_{\beta < \alpha} \dot{R}_{\beta})$. \end{itemize}
By the $\mu^{+}$-c.c. of $P$, we can choose such a club $C$ in $V$.
We claim that $P \upharpoonright p \ast (Q / \dot{G})$ is $S$-layered. Let $Q_{\alpha} = P \upharpoonright p \ast \dot{R}_{\alpha}$. It is easy to see that $Q_{\alpha}\lessdot P \upharpoonright p \ast (Q / \dot{G})$. For $\alpha \in C \cap S$, choose $\langle p_0,\dot{q}_0\rangle \in Q_{\alpha}$; then $p \Vdash \dot{q}_{0} \in \dot{R}_{\alpha}$. Since $\mathrm{cf}(\alpha) = \mu^{+}$ and $P$ has the $\mu^{+}$-c.c., there is a $\beta < \alpha$ such that $p \Vdash \dot{q}_{0} \in \dot{R}_{\beta}$. Therefore $\langle p_0,\dot{q}_0\rangle \in Q_{\beta}$, as desired.
Since $\mathcal{B}(Q)$ has a dense subset that is isomorphic to $P \ast (Q / \dot{G})$ and $P \ast (Q / \dot{G}) \upharpoonright \langle p,\dot{1}\rangle$ is $S$-layered, this contradicts that $Q$ is nowhere $S$-layered. \end{proof} To show Theorem \ref{maintheorem1}, the following is a key lemma.
\begin{lem}\label{mainlemmalayered} Suppose that $I$ is a saturated ideal over $\mu^{+}$ and $P$ is one of Prikry forcing, Woodin's modification, or Magidor forcing at $\mu$. Then ${\mathcal{P}}(\mu^{+})/{I} \ast \dot{j}(P)$ is nowhere $S$-layered for all stationary $S \subseteq \mu^{++}$. \end{lem} \begin{proof}
Because a similar proof works for each of these Prikry-type forcings, we only treat the case $P = \mathcal{P}_{U}$ for some normal ultrafilter $U$ over $\mu$.
If $\mathcal{P}(\mu^{+})/I$ is nowhere $S$-layered, there is nothing to prove. So we assume that there is an $A \in I^{+}$ such that $I \upharpoonright A$ is $S$-layered. For simplicity, we assume that $I$ itself is $S$-layered.
We fix sufficiently large regular $\theta$ and $M \prec \mathcal{H}_{\theta}$ such that \begin{itemize}
\item $|M| = \mu^{+}$ and $\mu \subseteq M$,
\item $\mathcal{P}(\mu^{+})/I \cap M \lessdot \mathcal{P}(\mu^{+})/I$ and $\mathcal{P}(\mu^{+})/I \cap M$ forces $|(\mu^{+})^{V}| = \mu$, and
\item $M$ contains all relevant elements. \end{itemize}
It is enough to show that $Q \cap M \not\mathrel{\lessdot} Q$. Let $\dot{F}$ be a $\mathcal{P}(\mu^{+})/I \cap M$-name for the filter generated by $\{X \in M[\dot{G}] \mid \exists q \in \dot{G}(q \Vdash X \in \dot{j}(U))\}$. It is easy to see that $Q \cap M$ is dense in $(\mathcal{P}(\mu^{+})/I \cap M) \ast \mathcal{P}_{\dot{F}}$. By $\mathcal{P}(\mu^{+})/I \cap M \Vdash |{M}[\dot{G}]| = |M| \leq |(\mu^{+})^{V}| = \mu$, $\mathcal{P}(\mu^{+})/I \cap M \Vdash \dot{F}$ is not an ultrafilter. By Lemma \ref{prikryequiv}, we have
$Q \cap M \simeq (\mathcal{P}(\mu^{+})/I \cap M) \ast \mathcal{P}_{\dot{F}} \not\mathrel{\lessdot} \mathcal{P}(\mu^{+})/I \ast \mathcal{P}_{\dot{j}(U)}$. \end{proof}
Let us show Theorem \ref{maintheorem1}. \begin{proof}[Proof of Theorem \ref{maintheorem1}]
Let $P$ be one of Prikry forcing, Woodin's modification, or Magidor forcing. Then $P$ is $\mu$-centered. For (1), by Lemma \ref{termcentered}, $P \Vdash \overline{I}$ is $\mu$-centered.
Let us show (2). Since $\mu^{+}$ carries a saturated ideal and $2^{\mu}= \mu^{+}$, we know $2^{\mu^{+}} = \mu^{++}$. Note that $\dot{j}(P)$ is forced to be $\mu^{+}$-centered by $\mathcal{P}(\mu^{+}) / I$. Therefore $\mathcal{P}(\mu^{+})/I \ast \dot{j}(P)$ is of size $2^{\mu^{+}} = \mu^{++}$ since $\mathcal{P}(\mu^{+})/I \ast \dot{j}(P)$ is $\mu^{+}$-centered.
The size of $P$ is $2^{\mu} =\mu^{+}$ by the $\mu$-centeredness of $P$. Fix a $P$-name $\dot{T}$ such that $P \Vdash \dot{T} \subseteq \dot{E}^{\mu^{++}}_{\mu^{+}}$ is stationary. Let $S_q = \{\alpha < \mu^{++} \mid q \Vdash \alpha \in \dot{T}\}$. Then $P \Vdash \dot{T} = \bigcup_{q \in \dot{G}} S_{q}$. This implies that there is a $q$ such that $S_q$ is stationary and $q \Vdash S_q \subseteq \dot{T}$. Since $P$ has the $\mu^{+}$-c.c., $P$ does not change the set $E^{\mu^{++}}_{\mu^{+}}$; therefore $S_q \subseteq E_{\mu^{+}}^{\mu^{++}}$. By Lemma \ref{quotientnotlayered}, $P \Vdash Q / \dot{G}$ is not $S_q$-layered. In particular, $q \Vdash Q / \dot{G}$ is not $\dot{T}$-layered. Since $\dot{T}$ is arbitrary and such a $q$ can be found below any condition, $\Vdash Q / \dot{G}$ is not $\dot{T}$-layered for all stationary $\dot{T} \subseteq \dot{E}^{\mu^{++}}_{\mu^{+}}$.
By Theorem \ref{duality}, $P \Vdash \mathcal{P}(\mu^{+}) / \overline{I} \simeq \mathcal{B}(Q / \dot{G})$ is not $\dot{T}$-layered for all stationary $\dot{T} \subseteq \dot{E}^{\mu^{++}}_{\mu^{+}}$. That is, $P \Vdash \overline{I}$ is \emph{not} layered. \end{proof}
We note that the proof of Lemma \ref{foremanfst}, which was given in~\cite{MR730584}, shows the following. \begin{lem}\label{saturationinprikry}
Suppose that $\mu$ is a regular cardinal and $I$ is a $(\mu^{++},\nu,\nu)$-saturated ideal over $\mu^+$. Then every $(\mu,<\nu^{+})$-centered poset forces that $\overline{I}$ is $(\mu^{++},\nu,\nu)$-saturated. \end{lem} \begin{proof}
Let $P$ be a $(\mu,<\nu^{+})$-centered poset. We may assume that $P$ is a complete Boolean algebra. Let $\{P_{\alpha} \mid \alpha < \mu\}$ be a family of $<\nu^{+}$-complete filters that covers $P$.
Let $p \in P$ and $\{\dot{A}_{i} \mid i < \mu^{++}\}$ be arbitrary such that $p \Vdash \dot{A}_{i}\in \overline{I}^{+}$ for each $i < \mu^{++}$. As in the proof of Theorem \ref{duality}, consider the set $B_{i} = \{\xi < \mu^{+} \mid p\cdot ||\xi \in \dot{A}_{i}||\not=0\} \in I^{+}$. Define $B^i_{\alpha} = \{\xi \in B_i \mid p \cdot ||\xi \in \dot{A}_{i}|| \in P_{\alpha}\}$. By $B_{i} = \bigcup_{\alpha < \mu}B_{\alpha}^i$ and the $\mu^{+}$-completeness of $I$, there is an $\alpha_i < \mu$ such that $B_{\alpha_i}^i \in I^{+}$. There are $K \in [\mu^{++}]^{\mu^{++}}$ and $\alpha$ such that $\forall i \in K(\alpha_i = \alpha)$. Since $I$ is $(\mu^{++},\nu,\nu)$-saturated, there is a $Z \in [K]^{\nu}$ such that $\bigcap_{i \in Z}B_\alpha^{i} \in I^{+}$.
By the proof of Claim \ref{kanamorilemma}, we have $A \in I$ such that $P \Vdash \bigcap_{i\in Z} \dot{A}_i \setminus A \not= \emptyset \to \bigcap_{i \in Z} \dot{A}_i \in \overline{I}^{+}$. Then, for each $\xi \in \bigcap_{i \in Z}B_\alpha^i \setminus A$, $q:=\prod_{i \in Z} p\cdot ||\xi \in \dot{A}_i|| = p \cdot ||\xi \in \bigcap_{i \in Z}\dot{A}_i||\not= 0$ forces $\bigcap_{i\in Z} \dot{A}_i \setminus A \not= \emptyset$. By the choice of $A$, $q \Vdash \bigcap_{i \in Z} \dot{A}_i \in \overline{I}^{+}$. \end{proof} We notice that, by the proof of Theorem \ref{duality}, this proof of Lemma \ref{saturationinprikry} essentially shows that $\mathcal{P}(\mu^{+}) / I \ast \dot{j}(P)$ has the $(\mu^{++},\nu,\nu)$-c.c. Note that if $Q$ has the $(\mu^{++},\nu,\nu)$-c.c. and there is a complete embedding $e:P \to Q$ then $P \Vdash Q / \dot{G}$ has the $(\mu^{++},\nu,\nu)$-c.c. Thus $P \Vdash (\mathcal{P}(\mu^{+}) / I \ast \dot{j}(P))/ \dot{G}$ has the $(\mu^{++},\nu,\nu)$-c.c.
We conclude this section with the following question. \begin{ques}
Can a layered ideal exist over the successor of a singular cardinal? \end{ques}
Eskew pointed out that \begin{thm}[Eskew--Sakai~\cite{MR4092254}]
There is no dense ideal over the successor of a singular cardinal. \end{thm} Here, a dense ideal over $\mu^{+}$ is an ideal $I$ such that $\mathcal{P}(\mu^{+}) / I$ has a dense subset of size $\mu^{+}$. Density is the strongest saturation property. The consistency of a dense ideal over $\omega_1$ was shown by Woodin. Later, Eskew~\cite{MR3569105} extended Woodin's result to an arbitrary successor of a regular cardinal.
\section{Polarized Partition Relations}\label{ccandpp} In the first half of this section, we collect some sufficient conditions for polarized partition relations in terms of Chang's conjecture and the existence of some saturated ideals. The rest of this section is devoted to proving Theorem \ref{maintheorem2} and its application. Lemma \ref{ppmainlemma2}, which is used in a proof of Theorem \ref{maintheorem2}, answers \cite[Question 1.11]{MR4101445}.
In Section 2, we saw Laver's result on strongly saturated ideals and polarized partition relations (Theorem \ref{stronglysatimplypp}). Besides this result, Todor\v{c}evi\'{c}~\cite{MR1127033} proved that $(\omega_2,\omega_1) \twoheadrightarrow (\omega_1,\omega)$ implies $\polar{\omega_2}{\omega_2}{\omega}{\omega}{\omega}$. Modifying the proof of Todor\v{c}evi\'{c}, Zhang~\cite{MR4094551} showed that the existence of a pre-saturated ideal over $\omega_1$ implies $\polar{\omega_2}{\omega_1}{\omega}{\omega}{\omega}$ and $\polar{\omega_2}{\omega_1}{n}{\omega_1}{\omega}$ for any $n < \omega$. Here, a pre-saturated ideal over $\mu^{+}$ is a precipitous ideal $I$ such that $\mathcal{P}(\mu^{+})/ I$ preserves the cardinality of $\mu^{++}$. Let us generalize these results. \begin{lem}\label{ccimpliespp}
Assume one of the following holds:
\begin{enumerate}
\item $(\mu^{++},\mu^{+})\twoheadrightarrow (\mu^{+},\mu)$.
\item $\mu^{+}$ carries a pre-saturated ideal.
\end{enumerate} Then $\polar{\mu^{++}}{\mu^{+}}{n}{\mu^{+}}{\mu}$ holds for all $n < \omega$. \end{lem} \begin{proof}
Let $f:\mu^{++} \times \mu^{+} \to \mu$ be an arbitrary coloring.
First, we assume $(\mu^{++},\mu^{+})\twoheadrightarrow (\mu^{+},\mu)$. By \cite[Lemma 14]{MR3748588}, we have $(\mu^{++},\mu^{+})\twoheadrightarrow_{\mu} (\mu^{+},\mu)$. Consider a structure $\mathcal{A} = \langle \mu^{++},\mu^{+},\in,f\rangle$. By Lemma \ref{changchar}, we can choose a $\mathcal{B} = \langle X,X \cap \mu^{+},\in,f \upharpoonright X\rangle \prec \mathcal{A}$ such that $|X| = \mu^{+}$, $|X \cap \mu^{+}| = \mu$, and $\mu \subseteq X$. Let $\delta = \sup (X \cap \mu^{+}) < \mu^{+}$. Since $|X| = \mu^{+} > \mu$, there are $\alpha_0,...,\alpha_{n-1} \in X$ such that $f(\alpha_0,\delta)= \cdots = f(\alpha_{n-1},\delta)= \eta$ for some $\eta \in \mu \subseteq X$.
Then, the elementarity shows \begin{center}
$\mathcal{B} \models \forall \xi \in \mu^{+}\exists \zeta \geq \xi(f(\alpha_0,\zeta)= \cdots = f(\alpha_{n-1},\zeta) = \eta)$. \end{center} Indeed, for every $\xi \in X$, $\zeta\geq \xi$ can be taken as $\delta$ in $\mathcal{A}$. In particular, $H_1 = \{\xi <\mu^{+}\mid \forall i < n(f(\alpha_i,\xi) = \eta)\}$ is unbounded in $\mu^{+}$, and $f ``\{\alpha_0,...,\alpha_{n-1}\} \times H_1 = \{\eta\}$.
Next, we assume the existence of pre-saturated ideal $I$. Let $G$ be a $(V,\mathcal{P}(\mu^{+}) / I)$-generic filter and $j:V \to M \subseteq V[G]$ be a generic ultrapower mapping. Note that $|j ``\mu^{++}| = (\mu^{+})^{V[G]}$ and $\mathrm{crit}(j) = (\mu^{+})^V$. Then, in $V[G]$, there are $\alpha_0,...,\alpha_{n-1} \in \mu^{++}$ and $\eta$ such that $M \models j(f)(j(\alpha_0),(\mu^{+})^{V}) = \cdots =j(f)(j(\alpha_{n-1}),(\mu^{+})^{V}) = \eta = j(\eta)$.
Again, the elementarity shows that $H_1 = \{\xi <\mu^{+}\mid \forall i < n(f(\alpha_i,\xi) = \eta)\}$ is unbounded in $\mu^{+}$ and $f ``\{\alpha_0,...,\alpha_{n-1}\} \times H_1 = \{\eta\}$. \end{proof} For each $\nu \leq \mu$, if $\mu^{+}$ carries a pre-saturated ideal $I$ satisfying $\mathcal{P}(\mu^{+}) / I \Vdash \forall f:\mu^{++} \to {\mathrm{ON}} \exists X \in [\mu^{++}]^{\nu}(f \upharpoonright X \in V)$, then the same proof as that of Lemma \ref{ccimpliespp} shows $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$. It is easy to see that this condition is equivalent to the $(\mu^{++},\nu,\nu)$-saturation for saturated ideals. Note that we can show the following without using generic ultrapowers:
\begin{lem}\label{saturatedimplypolarized} If $\mu^{+}$ carries $(\mu^{++},\nu,\nu)$-saturated ideal for some $\nu \leq \mu$ then $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ holds. \end{lem}
\begin{proof}
Let $I$ be a $(\mu^{++},\nu,\nu)$-saturated ideal.
Let $f:\mu^{++} \times \mu^{+} \to \mu$ be an arbitrary coloring. For each $\alpha < \mu^{++}$, by the $\mu^{+}$-completeness of $I$, there is an $\eta_{\alpha}$ such that $A_{\alpha} = \{\xi < \mu^{+} \mid f(\alpha,\xi) = \eta_{\alpha}\} \in I^{+}$. There are $K \in [\mu^{++}]^{\mu^{++}}$ and $\eta$ such that $\eta_{\alpha} = \eta$ for all $\alpha \in K$. By the $(\mu^{++},\nu,\nu)$-saturation of $I$, there is an $H_0 \in [K]^{\nu}$ such that $H_1 = \bigcap_{\alpha \in H_0}A_{\alpha} \in I^{+}$. Then $f ``H_0 \times H_1 = \{\eta\}$. \end{proof} The existence of $(\mu^{++},\nu,\nu)$-saturated ideals is preserved by $(\mu,<\nu^{+})$-centered posets, as we saw in Lemma \ref{saturationinprikry}. Therefore, under the existence of such an ideal, $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ is also preserved by $(\mu,<\nu^{+})$-centered posets. We can omit the ideal assumption from this fact (see Lemma \ref{ppmainlemma1}).
Lemmas \ref{ppmainlemma1}, \ref{ppmainlemma2}, \ref{negationpreserved}, and \ref{ppmainlemma3} together show Theorem \ref{maintheorem2}. Let us prove these.
\begin{lem}\label{ppmainlemma1}
If $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ holds for some cardinal $\nu < \mu$ then any $(\mu,<\nu^{+})$-centered poset forces the same partition relation. \end{lem} \begin{proof}
Let $P$ be a $(\mu,<\nu^{+})$-centered poset and $\langle P_{\alpha}\mid \alpha < \mu\rangle$ be a $(\mu,<\nu^{+})$-centering family of $P$. Consider $p$ and $\dot{f}$ with $p \Vdash \dot{f}:\mu^{++} \times \mu^{+} \to \mu$. For each $\alpha \in \mu^{++}$ and $\xi < \mu^{+}$, we can choose $p_{\alpha\xi}\leq p$ and $\eta_{\alpha\xi}$ such that $p_{\alpha\xi} \Vdash \dot{f}(\alpha,\xi) = \eta_{\alpha\xi}$. We fix $\beta_{\alpha\xi} < \mu$ with $p_{\alpha\xi} \in P_{\beta_{\alpha\xi}}$.
Define $d(\alpha,\xi) = \langle \beta_{\alpha\xi},\eta_{\alpha\xi} \rangle$. By $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$, there are $H_0 \in [\mu^{++}]^{\nu}$ and $H_1 \in [\mu^{+}]^{\mu^{+}}$ such that $d$ is monochromatic on $H_0 \times H_1$ with value $\langle \beta,\eta\rangle$.
For each $\xi \in H_1$, let $q_{\xi}$ be a lower bound of $\{p_{\alpha\xi} \mid \alpha \in H_0\}$. By the $\mu^{+}$-c.c. of $P$, there is a $q \leq p$ which forces that $|\{\xi \in H_1\mid q_{\xi} \in \dot{G}\}| = \mu^{+}$. Let $\dot{H}_1$ be a $P$-name for this set. We have $q \Vdash \dot{f} ``H_{0}\times \dot{H}_{1} = \{\eta\}$. \end{proof}
\begin{lem}\label{ppmainlemma2}
If $\npolar{\mu^{++}}{\mu^{+}}{n}{\mu^{+}}{\mu}$ for some $n < \omega$ then any $\mu^{+}$-Knaster poset forces the same partition relation. \end{lem} \begin{proof}
Let $f$ be a coloring that witnesses $\npolar{\mu^{++}}{\mu^{+}}{n}{\mu^{+}}{\mu}$. For each $p \Vdash \dot{H}_{1} \in [\mu^{+}]^{\mu^{+}}$ and $H_0 \in [\mu^{++}]^{n}$, we want to find a $q \leq p$ which forces $|f ``H_0 \times \dot{H}_1| \geq 2$. For each $i < \mu^+$, we can choose $q_i \leq p$ which decides the value of the $i$-th element of $\dot{H}_{1}$ as $\xi_{i}$. There is a $K \in [\mu^{+}]^{\mu^{+}}$ such that $\forall i,j \in K(q_i \cdot q_j \not= 0)$. By the property of $f$, we can choose $\alpha,\beta \in H_0$ and $i,j\in K$ such that $f(\alpha,\xi_i) \not= f(\beta,\xi_j)$. Thus, $q_i\cdot q_j \Vdash |f``H_0 \times \dot{H}_1| \geq 2$. \end{proof}
\begin{lem}\label{negationpreserved}
If $\npolar{\mu^{++}}{\mu^{+}}{\mu^{+}}{\mu^{+}}{\mu}$ holds then any $(\mu^{+},\mu^{+},<\omega)$-c.c. poset forces the same partition relation. \end{lem} \begin{proof}
Let $f$ be a coloring that witnesses $\npolar{\mu^{++}}{\mu^{+}}{\mu^{+}}{\mu^{+}}{\mu}$. For each $p \Vdash \dot{H}_{0} \in [\mu^{++}]^{\mu^{+}}$ and $\dot{H}_1 \in [\mu^{+}]^{\mu^{+}}$, we want to find a $q \leq p$ which forces $|f ``\dot{H}_0 \times \dot{H}_1| \geq 2$. For each $i < \mu^+$, we can choose $q_i \leq p$ which decides the values of the $i$-th elements of $\dot{H}_{0}$ and $\dot{H}_1$ as $\xi_{i}$ and $\zeta_i$. There is a $K \in [\mu^{+}]^{\mu^{+}}$ such that $\forall i,i',j,j' \in K(q_i \cdot q_{i'} \cdot q_j \cdot q_{j'}\not= 0)$. By the property of $f$, we can choose $i,i',j,j' \in K$ such that $f(\xi_i,\zeta_{j}) \not= f(\xi_{i'},\zeta_{j'})$. Thus, $q_i \cdot q_{i'} \cdot q_j \cdot q_{j'} \Vdash |f``\dot{H}_0 \times \dot{H}_1| \geq 2$. \end{proof}
\begin{lem}\label{ppmainlemma3}
Suppose that $U$ is a normal ultrafilter over $\mu$. If $\npolar{\mu^{++}}{\mu^{+}}{\gamma}{\mu^{+}}{\mu}$ for some regular $\gamma < \mu$ then $\mathcal{P}_{U}$ forces the same partition relation. \end{lem} \begin{proof} We divide into two cases: $\gamma = \omega$ and $\gamma > \omega$. First, we assume $\gamma > \omega$.
Let $f:\mu^{++} \times \mu^{+} \to \mu$ be a coloring that witnesses $\npolar{\mu^{++}}{\mu^{+}}{\gamma}{\mu^{+}}{\mu}$. For each $\langle a,X\rangle \Vdash \dot{H}_{0} \in [\mu^{++}]^{\gamma}$ and $\dot{H}_{1} \in [\mu^{+}]^{\mu^{+}}$, we want to find an extension of $\langle a,X\rangle$ that forces $|f ``\dot{H}_{0} \times \dot{H}_1|\geq 2$. The set $\dot{H}_0$ can be shrunk to a set in $V$ by the following claim. \begin{clam}
$\mathcal{P}_{U}$ forces that for every $A \subseteq \mathrm{ON}$ with ${\mathrm{ot}}(A) = \gamma$ for some regular $\gamma \in (\omega,\mu)$, there is a $B \in V$ such that $B \subseteq A$ and $\mathrm{ot}(B) = \gamma$. \end{clam} \begin{proof}[Proof of Claim]
Consider $\langle a,X \rangle \Vdash \dot{A} \subseteq \mathrm{ON}$ and $\mathrm{ot}(\dot{A}) = \gamma$, where $\gamma \in (\omega,\mu)\cap \mathrm{Reg}$. For every $i < \gamma$, let $\mathcal{A}_{i}$ be a maximal anti-chain below $\langle a,X\rangle$ such that $\forall \langle b,Y\rangle \in \mathcal{A}_{i} \exists \xi\in \mathrm{ON}(\langle b,Y\rangle \Vdash$ the $i$-th element of $\dot{A}$ is $\xi$). By Lemma \ref{prikrycondi}, there are $Z_i \subseteq X$ and $n_i < \omega$ such that $\mathcal{B}_{i} = \{\langle b,Y\rangle \in \mathcal{A}_{i} \mid |b| = n_i\}$ is a maximal anti-chain below $\langle a,Z_i \rangle$. By the regularity of $\gamma > \omega$, there are $K \in [\gamma]^{\gamma}$ and $n$ such that $n_i = n$ for all $i \in K$. If $n \leq |a|$, letting $B = \{\xi \mid \exists i \in K(\langle a,Z_i\rangle \Vdash $ the $i$-th element of $\dot{A}$ is $\xi)\}$, it is easy to see that $\langle a, \bigcap_{i \in K}Z_i \rangle \Vdash B \subseteq \dot{A}$ and $\mathrm{ot}(B) = \gamma$. If $n > |a|$, let $x\in [\bigcap_{i\in K}Z_i \setminus (\max{a} + 1)]^{n-|a|}$ and $b = a \cup x$. Then $B = \{\xi \mid \exists i \in K(\langle b,Z_i\rangle \Vdash $ the $i$-th element of $\dot{A}$ is $\xi)\}$ works as a witness below $\langle b,\bigcap_{i \in K}Z_i\rangle\leq \langle a,\bigcap_{i \in K} Z_i \rangle$, as desired. \end{proof}
By the claim, there are $q \leq \langle{a,X} \rangle$ and $H_0$ such that $q \Vdash H_0 \subseteq \dot{H}_{0}$ and ${\mathrm{ot}}(H_0) = \gamma$. For each $i < \mu^{+}$, we can choose $\langle c_{i},Z_{i}\rangle \leq q$ which forces that the $i$-th value of $\dot{H}_{1}$ is $\xi_{i}$. Then there are $K \in [\mu^{+}]^{\mu^{+}}$ and $c$ such that $c_{i} = c$ for all $i \in K$. By the property of $f$, we can choose $\alpha<\beta$ in $H_0$ and $i<j$ in $K$ such that $f(\alpha,\xi_i) \not= f(\beta,\xi_j)$. Now $\langle {c,Z_i \cap Z_j} \rangle$ forces $f(\alpha,\xi_i), f(\beta,\xi_j) \in f``\dot{H}_0 \times \dot{H}_1$.
Let us consider the case $\gamma = \omega$. Let $f$ be a coloring that witnesses $\npolar{\mu^{++}}{\mu^{+}}{\omega}{\mu^{+}}{\mu}$. Let $\langle a,X \rangle \Vdash \dot{H}_{0} \in [\mu^{++}]^{\omega}$ and $\dot{H}_{1} \in [\mu^{+}]^{\mu^{+}}$. For each $i < \mu^{+}$, we can choose $\langle c_{i},Z_i \rangle \leq \langle a,X \rangle$ which forces that the $i$-th value of $\dot{H}_1$ is $\xi_i$. Again, there are $K \in [\mu^{+}]^{\mu^{+}}$ and $c$ such that $c_i = c$ for all $i \in K$. There is a $\langle{c,Z} \rangle \leq \langle a,X \rangle$ which forces that $|\{i \in K \mid \langle c,Z_i \rangle \in \dot{G}\}| = \mu^{+}$. We claim that $\langle c,Z \rangle$ forces $|f``\dot{H}_0 \times \dot{H}_1| \geq 2$.
First, we assume that there are $\langle b,Y\rangle \leq \langle c,Z\rangle$, $J \in [\omega]^{\omega}$, and $H = \{\alpha_{n} \mid n \in J\}$ such that $\langle b,Y\rangle \Vdash H \subseteq \dot{H}_0$. Note that $\{i \in K \mid b \setminus c \subseteq Z_i\}$ is unbounded since $\langle b,Y\rangle \Vdash \{i \in K \mid \langle c,Z_i \rangle \in \dot{G}\}$ is unbounded. By the property of $f$, there are $i,j \in K$ and $n,m \in J$ such that $f(\alpha_n,\xi_i) \not= f(\alpha_m,\xi_j)$ and $b \setminus c \subseteq Z_i \cap Z_j$. Then $\langle b,Y\cap Z_i \cap Z_j \rangle \Vdash f(\alpha_n,\xi_i), f(\alpha_m,\xi_j) \in f``\dot{H}_0 \times \dot{H}_1$.
Next, we assume there is no such $\langle b,Y\rangle$. Towards showing a contradiction, suppose that there is an extension $\langle b,Y \rangle$ of $\langle c,Z \rangle$ which forces $f``\dot{H}_0 \times \dot{H}_1 = \{\eta\}$ for some $\eta$. Let $\dot{\alpha}_{n}$ be a $\mathcal{P}_{U}$-name for the $n$-th element of $\dot{H}_{0}$. By Lemma \ref{prikrycondi}, for each $n < \omega$, there are $Y_n$ and $l(n)$ such that $\{\langle b',Y' \rangle \leq \langle b,Y_n\rangle \mid |b'| = l(n)$ and $\langle b',Y' \rangle$ decides the value of $\dot{\alpha}_n\}$ contains a maximal anti-chain $\mathcal{A}_n$. Note that, for each $x \in [Y_{n} \setminus (\max{b} + 1)]^{l(n)-|b|}$, there are $\alpha_{n}^x < \mu^{++}$ and $Y_{n}^x$ such that $\langle b \cup x,Y_{n}^{x} \rangle \Vdash \dot{\alpha}_n = \alpha_{n}^{x}$ and $\mathcal{A}_{n} = \{\langle b \cup x,Y_{n}^{x} \rangle \mid x \in [Y_{n}\setminus (\max{b} + 1)]^{l(n)-|b|}\}$.
Then the nice name defined by $\bigcup_{x \in [Y_{n}\setminus (\max{b} + 1)]^{l(n)-|b|}}\{\langle \alpha_n^{x},\langle b\cup x,Y_{n}^{x}\rangle\rangle \}$ denotes $\dot{\alpha}_n$ below $\langle b,Y_n\rangle$.
Again, we note that $\{i \in K \mid b \setminus c \subseteq Z_i\}$ is unbounded. Let $\theta$ be a sufficiently large regular. Let $M \prec \mathcal{H}_{\theta}$ be an elementary substructure with the following conditions: \begin{itemize}
\item ${^{\omega}M} \cup \mu \subseteq M$ and $f,\{Z_{i},\xi_i\mid i \in K\}, \{\langle\alpha_{n}^{x} \mid x \in [Y_{n}\setminus (\max{b} + 1)]^{l(n)-|b|} \rangle \mid n < \omega\},U \in M$.
\item $M \cap \mu^{+} = \delta < \mu^{+}$ and $|M| < \mu^{+}$. \end{itemize} Note that there is a $\delta^{*} \geq \delta$ such that $b \setminus c \subseteq Z_{\delta^{*}}$ and $\delta^{*} \in K$. Because there is no extension of $\langle c,Z\rangle$ which forces $[\dot{H}_0]^{\omega} \cap V \not= \emptyset$, there is an $n$ such that $\{\alpha_n^{x} \mid x \in [Y_{n} \cap Z_{\delta^{*}}\setminus (\max{b} + 1)]^{l(n)-|b|} \}$ is of size $\mu$. Fix $\{x_k \mid k < \omega \}\subseteq [Y_n \cap Z_{\delta^{\ast}} \setminus (\max{b} + 1)]^{l(n)-|b|}$ with $\alpha_{n}^{x_{k}} \not= \alpha_{n}^{x_{l}}$ for $k \not= l$. Because $\langle b \cup x_k,Z_{\delta^{\ast}}\cap Y_{n} \cap Y_{n}^{x_k}\rangle$ is a common extension of $\langle c,Z_{\delta^{\ast}}\rangle$ and $\langle b\cup x_k,Y_{n} \cap Y_{n}^{x_k} \rangle$, it forces $\langle \alpha_{n}^{x_k},\xi_{\delta^{\ast}}\rangle \in \dot{H}_0 \times \dot{H}_1$. In particular, $f(\alpha_{n}^{x_k},\xi_{\delta^{\ast}}) = \eta$.
Since ${^{\omega}M} \subseteq M$, $H'_0 = \{\alpha_n^{x_k} \mid k < \omega \}\in M$. In $M$, the following holds: \begin{center}
$\forall i < \mu^{+}\exists j > i(f``H'_0 \times \{\xi_{j}\} = \{\eta\})$. \end{center} From this, $H'_1 = \{\xi_{i} \mid f``H'_0 \times \{\xi_i\} = \{\eta\}\}$ is unbounded in $\mu^{+}$. Thus, $f``H'_0 \times H'_1=\{\eta\}$. This contradicts the choice of $f$. \end{proof}
Let us show Theorem \ref{maintheorem2}. \begin{proof}[Proof of Theorem \ref{maintheorem2}]
(1) and (2) follow by Lemma \ref{ppmainlemma1}. (3), (4), and (5) follow by Lemmas \ref{ppmainlemma2}, \ref{negationpreserved}, and \ref{ppmainlemma3}, respectively. \end{proof}
For Section \ref{mainsection}, we recall a theorem of Hajnal--Juh\'{a}sz. Let $\mathrm{Add}(\mu,\lambda)$ be the set of all partial functions from $\lambda$ to $\mu$ of size $<\mu$. \begin{thm}[Hajnal--Juh\'{a}sz]\label{hajnaljuhasz}
For any regular $\mu$, if $\mathrm{Add}(\mu,\mu^{+})$ has the $\mu^{+}$-c.c. then $\mathrm{Add}(\mu,\mu^{+})\Vdash \npolar{\mu^{++}}{\mu^{+}}{\mu}{\mu^{+}}{2}$. \end{thm} \begin{proof}
The same proof as in \cite[Theorem 5.37]{MR2768692} works. \end{proof}
\begin{coro}\label{ppcoro}
Suppose that $\mu^{<\mu} = \mu$ and $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ holds for all $\nu < \mu$. Then $\mathrm{Add}(\mu,\mu^{+})$ forces that $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ holds for all $\nu < \mu$ but $\polar{\mu^{++}}{\mu^{+}}{\mu}{\mu^{+}}{2}$ fails. \end{coro} \begin{proof}
Note that if $\mu^{<\mu} = \mu$ then the product forcing $\prod_{\alpha < \mu^{+}}^{<\mu}P_{\alpha}$ of $(\mu,<\mu)$-centered posets $P_{\alpha}$ is $(\mu,<\mu)$-centered. For a proof, we refer to \cite[Lemma 4.2]{centeredkunen}. By $\mathrm{Add}(\mu,\mu^{+})\simeq \prod_{\alpha < \mu^{+}}^{<\mu}{2^{<\mu}}$, this poset is $(\mu,<\mu)$-centered. Lemma \ref{ppmainlemma1} and Theorem \ref{hajnaljuhasz} show that $\mathrm{Add}(\mu,\mu^{+})$ forces the desired partition relations. \end{proof} This answers \cite[Question 1.11]{MR4101445}. This question has already been solved in \cite{MR4094551} and \cite{gartishelah}, but our proof is the simplest among them. Indeed, \begin{coro}
Suppose that $\lambda$ is an $\omega_1$-Erd\H{o}s cardinal. Then $\mathrm{Coll}(\omega_1,<\lambda) \times \mathrm{Add}(\omega,\omega_1)$ forces $\polar{\aleph_2}{\aleph_1}{n}{\aleph_1}{\aleph_0}$ for all $n < \omega$ and $\npolar{\aleph_2}{\aleph_1}{\aleph_0}{\aleph_1}{\aleph_0}$. \end{coro} \begin{proof}
It is known that $\mathrm{Coll}(\omega_1,<\lambda)$ forces $(\omega_2,\omega_1) \twoheadrightarrow (\omega_1,\omega)$ if $\lambda$ is $\omega_1$-Erd\H{o}s. By Lemma \ref{ccimpliespp}, $\mathrm{Coll}(\omega_1,<\lambda)$ forces $\polar{\aleph_2}{\aleph_1}{n}{\aleph_1}{\aleph_0}$ for all $n < \omega$. By Corollary \ref{ppcoro}, $\mathrm{Coll}(\omega_1,<\lambda) \times \mathrm{Add}(\omega,\omega_1) \simeq \mathrm{Coll}(\omega_1,<\lambda) \ast \dot{\mathrm{Add}}(\omega,\omega_1)$ forces the desired conditions. \end{proof} Using an almost-huge cardinal, we can show that \begin{coro}
Suppose that $\mu$ is a regular cardinal below an almost-huge cardinal. Then there is a $\mu$-directed closed poset which forces that $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ for all $\nu < \mu$ and $\npolar{\mu^{++}}{\mu^{+}}{\mu}{\mu^{+}}{\mu}$. \end{coro} \begin{proof}
By \cite[Theorem 1.2]{preprint}, there is a $\mu$-directed closed poset $P$ which forces that $\mu^{+}$ carries a $(\mu^{++},\mu^{++},<\mu)$-saturated ideal and $2^{\mu} = \mu^{+}$. By Lemma \ref{saturatedimplypolarized}, it is forced by $P$ that $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ for all $\nu < \mu$. By Corollary \ref{ppcoro}, $P \ast \dot{\mathrm{Add}}(\mu,\mu^{+})$ is a required poset. \end{proof}
\begin{rema} We give more observations on the preservation of polarized partition relations. As we saw in Lemma \ref{ccimpliespp}, $\polar{\aleph_{2}}{\aleph_1}{2}{\aleph_1}{\aleph_0}$ follows from Chang's conjecture $(\aleph_2,\aleph_1) \twoheadrightarrow (\aleph_1,\aleph_0)$. It is known that $(\aleph_2,\aleph_1) \twoheadrightarrow (\aleph_1,\aleph_0)$ is c.c.c. indestructible. On the other hand, by a result of Jensen, $\square_{\omega_1}$ implies that there is a c.c.c. poset which adds a Kurepa tree on $\omega_1$. Therefore, by Theorem \ref{kurepaimpliesnpp}, $\polar{\aleph_{2}}{\aleph_1}{2}{\aleph_1}{\aleph_0}$ can be destroyed by a c.c.c. poset. Indeed, $\polar{\aleph_{2}}{\aleph_1}{2}{\aleph_1}{\aleph_0}$ is compatible with $\square_{\omega_1}$ (this follows, for example, from \cite[Theorem 8.54]{MR2768692} and Lemma \ref{ccimpliespp}). \end{rema}
\section{Proof of Theorem \ref{maintheorem3}}\label{mainsection} This section is devoted to the following. \begin{proof}[Proof of Theorem \ref{maintheorem3}]
We start with a model in which there is a supercompact cardinal $\mu$ below a huge cardinal $\kappa$. By Theorem \ref{laverind}, we may assume that the supercompactness of $\mu$ is indestructible.
By \cite{MR925267}, there is a $\mu$-directed closed poset $P$ which forces that $\mu^{+} = \kappa$ carries an ideal $\dot{J}$ such that \begin{enumerate}
\item $\dot{J}$ is centered.
\item $\dot{J}$ is $(\mu^{++},\mu^{++},<\mu)$-saturated.
\item $(\mu^{++},\mu^{+}) \twoheadrightarrow_{\mu} (\mu^{+},\mu)$.
\item $2^{\mu}= \mu^{+}$. \end{enumerate} For (1) and (4), we refer to~\cite{MR925267}. (2) follows by~\cite[Section 3]{preprint}. (3) follows by the generic elementary embedding in~\cite{MR925267} and Lemma \ref{changsuff}.
Let $G \ast H$ be a $(V,P \ast \dot{\mathrm{Add}}(\mu,\mu^{+}))$-generic. Note that $\mathrm{Add}(\mu,\mu^{+})$ is $\mu$-directed closed and $\mu$-centered. In $V[G][H]$, the following holds: \begin{enumerate}
\item There is an ideal $I = \overline{\dot{J}^{G}}$ over $\mu^{+}$, which is centered.
\item $I$ is $(\mu^{++},\nu,\nu)$-saturated, which in turn implies $\polar{\mu^{++}}{\mu^{+}}{\nu}{\mu^{+}}{\mu}$ for all $\nu < \mu$.
\item $\npolar{\mu^{++}}{\mu^{+}}{\mu}{\mu^{+}}{2}$.
\item $(\mu^{++},\mu^{+}) \twoheadrightarrow_{\mu} (\mu^{+},\mu)$.
\item $\mu$ is still supercompact and $2^{\mu} = \mu^{+}$. \end{enumerate} (1) follows by Lemma \ref{termcentered}. (2) and (4) follow by Lemmas \ref{saturationinprikry} and \ref{ccpreserved}, respectively. (3) follows by Theorem \ref{hajnaljuhasz}. Let $U$ be a normal ultrafilter over $\mu$. By $2^{\mu} = \mu^{+}$ and Lemma \ref{guidinggeneric}, there is a guiding generic $\mathcal{G}$. $\mathcal{P}_{U,\mathcal{G}}$ forces that \begin{enumerate}
\item $\mu = \aleph_{\omega}$.
\item $\overline{I}$ is an ideal over $\mu^{+} = \aleph_{\omega+1}$ that is centered but \emph{not} layered.
\item $\overline{I}$ is $(\aleph_{\omega+2},\aleph_{n},\aleph_{n})$-saturated, which in turn implies $\polar{\aleph_{\omega+2}}{\aleph_{\omega+1}}{\aleph_{n}}{\aleph_{\omega+1}}{\aleph_{\omega}}$ for all $n < \omega$.
\item $\npolar{\aleph_{\omega+2}}{\aleph_{\omega+1}}{\aleph_{\omega+1}}{\aleph_{\omega+1}}{\aleph_{\omega}}$. In particular, $\overline{I}$ is \emph{not} strongly saturated.
\item $(\aleph_{\omega+2},\aleph_{\omega+1}) \twoheadrightarrow (\aleph_{\omega+1},\aleph_{\omega})$.
\end{enumerate} (2) follows by Theorem \ref{maintheorem2}. (3), (4), and (5) follow by Lemmas \ref{saturationinprikry}, \ref{negationpreserved}, and \ref{ccpreserved}, respectively.
Let $\dot{U}$ and $\dot{\mathcal{G}}$ be $P \ast \dot{\mathrm{Add}}(\mu,\mu^{+})$-names for $U$ and $\mathcal{G}$. $P \ast \dot{\mathrm{Add}}(\mu,\mu^{+})\ast \mathcal{P}_{\dot{U},\dot{\mathcal{G}}}$ is a required poset. \end{proof} \begin{rema}
In $V[G][H]$, the ideal $I$ is $(\mu^{++},\mu^{++},<\mu)$-saturated. Let us work in $V[G]$. Let $\dot{I}$ be an $\mathrm{Add}(\mu,\mu^{+})$-name for $I$ and let $J = \dot{J}^{G}$. Note that $\mathrm{Add}(\mu,\mu^{+}) \Vdash \mathcal{P}(\mu^{+}) / \dot{I} \simeq \mathcal{P}(\mu^{+})/J \ast \dot{j}(\mathrm{Add}(\mu,\mu^{+})) \simeq \mathcal{P}(\mu^{+})/J \times \mathrm{Add}(\mu,\mu^{++})$. Let $e$ be the complete embedding given by Theorem \ref{duality}. Since $e(p) = \langle 1,p\rangle \in \mathcal{P}(\mu^{+})/J \times \mathrm{Add}(\mu,\mu^{++})$, \begin{center} $\mathrm{Add}(\mu,\mu^{+}) \Vdash \mathcal{P}(\mu^{+})/J \times \mathrm{Add}(\mu,\mu^{++}) /\dot{G} \simeq \mathcal{P}(\mu^{+})/J \times \mathrm{Add}(\mu,\mu^{++})$. \end{center}It is easy to see that $\mathcal{P}(\mu^{+})/J \times \mathrm{Add}(\mu,\mu^{++})$ has the $(\mu^{++},\mu^{++},<\mu)$-c.c. in the extension by $\mathrm{Add}(\mu,\mu^{+})$, as desired.
This shows that the assumption of strong saturation (that is the $(\mu^{++},\mu^{++},\mu)$-saturation) in Theorem \ref{stronglysatimplypp} cannot be improved to the $(\mu^{++},\mu^{++},<\mu)$-saturation. \end{rema}
\section{$\aleph_{\omega+2}$-Centered Ideal over $[\aleph_{\omega+3}]^{\aleph_{\omega+1}}$ and $\mathrm{Tr}_{\mathrm{Chr}}(\aleph_{\omega+3},\aleph_{\omega+1})$} In this section, we study the relation between centered ideals and $\mathrm{Tr}_{\mathrm{Chr}}(\lambda,\kappa)$. $\mathrm{Tr}_{\mathrm{Chr}}(\lambda,\kappa)$ is the statement that every graph of size and chromatic number $\lambda$ has a subgraph of size and chromatic number $\kappa$.
Shelah~\cite{MR1117029} proved that $V = L$ implies that, for every cardinal $\mu$, there is a graph $\mathcal{G}$ of size and chromatic number $\mu^{+}$ such that every subgraph of size $\leq \mu$ has countable chromatic number. That is, $\mathrm{Tr}_{\mathrm{Chr}}(\mu^{+},\lambda)$ fails for all $\lambda \in [\aleph_1,\mu^{+})$ in $L$. Foreman and Laver~\cite{MR925267} proved the consistency of $\mathrm{Tr}_{\mathrm{Chr}}(\lambda^{+},\mu^{+})$ for each regular $\mu < \lambda$. Therefore we are interested in $\mathrm{Tr}_{\mathrm{Chr}}(\lambda^{+},\mu^{+})$ for singular $\mu$.
Here, we show the consistency of $\mathrm{Tr}_{\mathrm{Chr}}(\aleph_{\omega+3},\aleph_{\omega+1})$. First, we check that a suitable ideal assumption implies $\mathrm{Tr}_{\mathrm{Chr}}(\lambda^{+},\mu^{+})$ in Lemma \ref{idealandtrchr}. Then we generalize Theorem \ref{maintheorem2} to an ideal over $Z \subseteq \mathcal{P}(X)$ (see Lemma \ref{generalizedpreservation}) using Theorem \ref{duality}. Lastly, we construct a model with a required ideal.
\begin{lem}\label{idealandtrchr}
Suppose that $[\lambda^{+}]^{\mu^{+}}$ carries a normal, fine, $\mu^{+}$-complete $\lambda$-centered ideal. Then $\mathrm{Tr}_{\mathrm{Chr}}(\lambda^{+},\mu^{+})$ holds. \end{lem} \begin{proof}
Let $P = \mathcal{P}([\lambda^{+}]^{\mu^{+}})/I$ and $G$ be a $(V,P)$-generic filter. In $V[G]$, there is an elementary embedding $j:V \to M$ such that \begin{itemize}
\item $\mathrm{crit}(j) = \mu^{+}$.
\item $j(\mu^{+}) = \lambda^{+}$ and $j(\mu^{++}) = \lambda^{++}$.
\item $j ``\lambda^{+} \in M$. \end{itemize} Let $\mathcal{G} = \langle \lambda^{+},\mathrel{E} \rangle \in V$ be a graph of chromatic number $\lambda^{+}$. Since $j ``\lambda^{+} \in M$, $j(\mathcal{G})$ has a subgraph that is isomorphic to $\mathcal{G}$ in $M$.
We claim that the chromatic number of $\mathcal{G}$ is $\lambda^{+} = j(\mu^{+})$ in $V[G]$. Fix a $P$-name $\dot{c}$ for a coloring $\mathcal{G} \to \mu$ and $p \in P$. Let $F:P \to \lambda$ be a centering function of $P$. Define $d:\mathcal{G} \to \mu\times \lambda$ by $d(x) = \langle\xi,\alpha\rangle$ if and only if $\exists q \leq p(F(q) = \alpha \land q \Vdash \dot{c}(x) = \xi)$. Since $|\mu \times \lambda| = \lambda$ and the chromatic number of $\mathcal{G}$ is $\lambda^{+}$, $d$ is not a proper coloring, so there are $x,y \in \mathcal{G}$ such that $x \mathrel{E} y$ and $d(x) = d(y)$. By $d(x) = d(y)$ and the definition of $d$, we have a $q \leq p$ which forces that $\dot{c}(x) = \dot{c}(y)$. Thus $P$ forces that the chromatic number of $\mathcal{G}$ is $\lambda^{+} = j(\mu^{+})$.
By the elementarity of $j$, there is a subgraph of $\mathcal{G}$ of size and chromatic number $\mu^{+}$. The proof is completed. \end{proof}
To obtain a model in which $\mathrm{Tr}_{\mathrm{Chr}}(\aleph_{\omega+3},\aleph_{\omega+1})$ holds, it is enough to construct a model with an $\aleph_{\omega+2}$-centered ideal over $[\aleph_{\omega+3}]^{\aleph_{\omega+1}}$. The following lemma is an analogue of Lemma \ref{termcentered} for general quotient forcings. \begin{lem}\label{prikryquotientcentered} Suppose that $P$ is $(\mu,<\nu)$-centered, $Q$ is $(\lambda,<\nu)$-centered, and $\dot{R}$ is a $Q$-name for a $(\mu,<\nu)$-centered poset. We also assume that the mapping $\tau:P \to Q \ast \dot{R}$, which has the form $\tau(p) = \langle 1,f(p)\rangle$, is complete and that there is a sequence $\langle P_{\alpha},\dot{R}_{\alpha} \mid \alpha < \mu\rangle$ such that each $P_{\alpha}$ is a filter, $P = \bigcup_{\alpha<\mu}P_\alpha$, each $\dot{R}_{\alpha}$ is a $Q$-name for a $<\nu$-complete filter, and $Q \Vdash f ``P_{\alpha} \subseteq\dot{R}_{\alpha}$ and $\bigcup_{\alpha}\dot{R}_{\alpha} = \dot{R}$. If $\lambda^{\mu} = \lambda$ then the term forcing $T(P,Q \ast \dot{R} / \dot{G})$ is $(\lambda,<\nu)$-centered. In particular, if $P$ is $\nu$-Baire then $P \Vdash Q \ast \dot{R} / \dot{G}$ is $(\lambda,<\nu)$-centered. \end{lem} \begin{proof}
We may assume that $Q$ is a complete Boolean algebra. Let $F:Q \to \lambda$ be a centering function.
We want to define a centering function $l:T(P,Q \ast \dot{R} / \dot{G})\to \lambda$. For each $\dot{p} \in T(P,Q \ast \dot{R} / \dot{G})$, we have a maximal anti-chain $\mathcal{A}_{\dot{p}} \subseteq P$ such that every $p \in \mathcal{A}_{\dot{p}}$ forces $\dot{p} = \langle q,\dot{r} \rangle$ for some $\langle q,\dot{r} \rangle \in Q \ast \dot{R}$. Note that $p$ is a reduct of $\langle q,\dot{r} \rangle$.
Define $l(\dot{p})$ by $\langle F(q\cdot ||\dot{r} \in \dot{R}_{\alpha}||) \mid p \in \mathcal{A}_{\dot{p}}, \alpha < \mu, p \Vdash \dot{p} = \langle q,\dot{r}\rangle\rangle$. Note that the size of a $\mu$-centered poset is at most $2^{\mu}$. By the assumption, the number of possible $\mathcal{A}_{\dot{p}}$ is at most $\lambda$. This observation enables us to identify the range of $l$ with $\lambda$. Suppose $l(\dot{p}_0) = \cdots = l(\dot{p}_{i}) = \cdots$ ($i < \zeta < \nu$). Put $\mathcal{A} = \mathcal{A}_{\dot{p}_{0}}$. It is enough to show that each $p \in \mathcal{A}$ forces $\prod_i \dot{p}_i \in Q \ast \dot{R} / \dot{G}$. Fix $p \in \mathcal{A}$. Then, for each $i < \zeta$, there is a $\langle q_i,\dot{r}_i\rangle \in Q \ast \dot{R}$ such that $p$ forces $\dot{p}_i = \langle q_i,\dot{r}_i\rangle$.
For every $r \leq p$, since $p$ is a reduct of $\langle q_i,\dot{r}_i\rangle$, there is an $\alpha < \mu$ such that $q_i \cdot ||\dot{r}_i\cdot f(r)\in \dot{R}_{\alpha}||\not= 0$ for some (any) $i<\zeta$. By $Q \Vdash f ``P_{\alpha}\subseteq \dot{R}_{\alpha}$, $r \in P_{\alpha}$. Note that $\prod_i q_i \cdot ||f(r) \cdot \prod_i\dot{r}_i \in \dot{R}_{\alpha}||=\prod_i q_i \cdot ||\dot{r}_i \in \dot{R}_{\alpha}|| \not= 0$. $p$ forces that \begin{align*} \tau(r)\cdot \textstyle \prod_i \dot{p}_i & = \tau(r)\cdot \textstyle \prod_{i}\langle q_i ,\dot{r}_i\rangle \\
&\geq \langle 1,f(r) \rangle \cdot \textstyle \prod_{i}\langle q_i \cdot ||\dot{r}_i \in \dot{R}_\alpha||,\dot{r}_i\rangle \\
&= \langle 1,f(r) \rangle \cdot \langle \textstyle \prod_{i}q_i \cdot ||\prod_i\dot{r}_i\in \dot{R}_\alpha||,\prod_i \dot{r}_i\rangle \\
&= \langle \textstyle \prod_{i}q_i \cdot ||\prod_i \dot{r}_i \cdot f(r) \in \dot{R}_{\alpha}||,\prod_i\dot{r}_i \cdot r\rangle\\ & \not=0. \end{align*}
The passage from the third line to the fourth follows by $||f(r) \in \dot{R}_{\alpha}|| = 1$ and $|| \prod_{i} \dot{r}_{i} \in \dot{R}_{\alpha}|| \cdot ||f(r) \in \dot{R}_{\alpha}|| = ||\prod_{i} \dot{r}_{i} \cdot f(r) \in \dot{R}_{\alpha}|| \leq ||\prod_{i} \dot{r}_{i} \cdot f(r) \not= 0||$. Therefore $p$ is a reduct of $\textstyle \prod_i \langle q_{i},\dot{r}_i\rangle$. $p$ forces $\textstyle \prod_i \langle q_{i},\dot{r}_i\rangle =\prod_i \dot{p}_i \in Q \ast \dot{R} / \dot{G}$. In particular, $\textstyle \prod_i \dot{p}_i$ is in the term forcing and it is a lower bound of the $\dot{p}_i$'s. By Lemma \ref{laverbasiclemma}, if $P$ is $\nu$-Baire then $P \Vdash Q \ast \dot{R} / \dot{G}$ is $(\lambda,<\nu)$-centered. \end{proof}
\begin{lem}\label{generalizedpreservation}
Suppose that $I$ is a normal, fine, $\mu^{+}$-complete $(\lambda,<\nu)$-centered ideal over $Z\subseteq \mathcal{P}(\lambda')$. Let $P$ be a $(\mu,<\nu)$-centered and $\nu$-Baire poset. If $\lambda^{\mu}= \lambda \leq \lambda^{'}$ then $P \Vdash \overline{I}$ is a normal, fine, $\mu^{+}$-complete $(\lambda,<\nu)$-centered ideal over $Z$. \end{lem} \begin{proof}
We may assume that $P$ is a Boolean algebra. Let $\langle P_\alpha \mid \alpha < \mu \rangle$ be a centering family of $P$. We may assume that each $P_{\alpha}$ is a filter. Let $e$ be the complete embedding from $P$ to $\mathcal{P}(Z) / I \ast \dot{j}(P)$ that sends $p$ to $\langle 1,\dot{j}(p)\rangle$, given by Theorem \ref{duality}. Theorem \ref{duality} also gives $P \Vdash \mathcal{P}(Z) / \overline{I} \simeq \mathcal{B}(\mathcal{P}(Z) / I \ast \dot{j}(P) / \dot{G})$. By the elementarity of $\dot{j}$ and $\mathrm{crit}(\dot{j}) \geq (\mu^{+})^{V}$, $\mathcal{P}(Z) / I \Vdash \dot{j}(P) = \bigcup_{\alpha < \mu}\dot{j}(P_{\alpha})$ and $\dot{j} ``P_{\alpha} \subseteq \dot{j}(P_{\alpha})$.
Let $\dot{R}_{\alpha}$ be a $\mathcal{P}(Z) / I$-name for $\dot{j}(P_{\alpha})$. Since $\mathcal{P}(Z) / I$ is $\lambda$-centered and $\lambda^{\mu} =\lambda$, we can apply Lemma \ref{prikryquotientcentered} to $e:P \to \mathcal{P}(Z) / I \ast \dot{j}(P)$. Therefore $P \Vdash \mathcal{P}(Z) / \overline{I} \simeq \mathcal{B}(\mathcal{P}(Z) / I \ast \dot{j}(P) / \dot{G})$ is $(\lambda,<\nu)$-centered. \end{proof}
\begin{proof}[Proof of Theorem \ref{maintheorem4}]
We may assume that $\mu$ is indestructible supercompact and $\mathrm{GCH}$ holds above $\mu$ by Theorem \ref{laverind}.
By~\cite{shioya2}, there is a $\mu$-directed closed poset which forces that $[\mu^{+++}]^{\mu^{+}}$ carries a normal, fine, $\mu^{+}$-complete $\mu^{+}$-centered ideal and $2^{\mu} = \mu^{+}$. Let us work in the extension by this poset. Let $I$ be such an ideal. By Lemma \ref{guidinggeneric}, we can define $\mathcal{P}_{U,\mathcal{G}}$. By Lemma \ref{generalizedpreservation}, $\mathcal{P}_{U,\mathcal{G}}$ forces that $\overline{I}$ is a normal, fine, $\aleph_{\omega+1}$-complete $\aleph_{\omega+2}$-centered ideal over $([\aleph_{\omega+3}]^{\aleph_{\omega+1}})^{V}$. Since $\mathcal{P}_{U,\mathcal{G}}$ forces $([\aleph_{\omega+3}]^{\aleph_{\omega+1}})^{V} \subseteq [\aleph_{\omega+3}]^{\aleph_{\omega+1}}$, we may view $\overline{I}$ as an ideal over $[\aleph_{\omega+3}]^{\aleph_{\omega+1}}$.
By Lemma \ref{idealandtrchr}, $\mathcal{P}_{U,\mathcal{G}} \Vdash \mathrm{Tr}_{\mathrm{Chr}}(\aleph_{\omega+3},\aleph_{\omega+1})$, as desired. \end{proof}
By using Magidor forcing, we obtain
\begin{thm}\label{maintheorem5}
Suppose that $\kappa$ is a huge cardinal with target $\theta$ and $\mu < \kappa$ is a supercompact cardinal. For regular cardinals $\nu < \mu < \kappa < \lambda< \theta$, there is a poset which forces that \begin{enumerate}
\item $[\kappa,\lambda] \cap \mathrm{Reg}$ and $[\omega,\nu] \cap \mathrm{Reg}$ are not changed,
\item $\kappa = \mu^{+}$, $\lambda^{+} = \theta$, $\mathrm{cf}(\mu) = \nu$,
\item $[\theta]^{\kappa}$ carries a normal, fine, $\kappa$-complete $\lambda$-centered ideal, and
\item $\mathrm{Tr}_{\mathrm{Chr}}(\theta,\kappa)$. \end{enumerate} \end{thm} \begin{proof}
This follows by \cite{shioya2} and Theorems \ref{laverind} and \ref{magidorforcing}. \end{proof} We proved the consistency of $\mathrm{Tr}_{\mathrm{Chr}}(\lambda^{+},\mu^{+})$ for singular $\mu$ and regular $\lambda > \mu^{+}$. We ask \begin{ques}
Is $\mathrm{Tr}_{\mathrm{Chr}}(\aleph_{\omega+2},\aleph_{\omega+1})$ consistent? \end{ques}
We conclude this paper with the following observation about layeredness.
The ideal $I$ over $[\mu^{+++}]^{\mu^{+}}$ in the proof of Theorem \ref{maintheorem4} is $S$-layered for some stationary subset $S \subseteq E^{\mu^{+++}}_{\mu^{++}}$. We show that $I$ is not $T$-layered for all $T\subseteq E^{\mu^{+++}}_{<\mu^{++}}$ and that $\overline{I}$ is forced not to be $S$-layered for all $S\subseteq E^{\mu^{+++}}_{\mu^{++}}$. The former follows by \cite[Claim 3]{shioya2} and \cite[Sections 4 and 5]{preprint}. The latter follows by the proof of Theorem \ref{maintheorem1}. Therefore $\overline{I}$ is not $S$-layered for any stationary $S \subseteq \mu^{+++}$ in the final model. On the other hand, $[\lambda]^{\mu^{+}}$ may carry an $S$-layered ideal for some stationary $S \subseteq \lambda$ and singular $\mu$ if $\lambda$ is a limit cardinal. Indeed, \begin{prop}\label{hugeprop}
Suppose that $\kappa$ is a huge cardinal and $\mu < \kappa$ is a supercompact cardinal. Then there is a poset that forces that $\lambda$ is a Mahlo cardinal, $[\lambda]^{\mu^{+}}$ carries an $S$-layered ideal for some stationary $S\subseteq \lambda \cap \mathrm{Reg}$, and $\mu$ is a singular cardinal. \end{prop} \begin{proof}
We may assume that $\mu$ is indestructibly supercompact by Theorem \ref{laverind}. Let $j:V \to M$ be a huge embedding with critical point $\kappa$. Then $\mathrm{Coll}(\mu,<\kappa) \Vdash [j(\kappa)]^{\kappa}$ carries a normal, fine, and $\kappa$-complete ideal $I$ such that $\mathcal{P}([j(\kappa)]^{\kappa}) / I \simeq \mathrm{Coll}(\mu,<j(\kappa))$ (see \cite[Example 7.25]{MR2768692}). Let $\dot{U}$ be a $\mathrm{Coll}(\mu,<\kappa)$-name for a normal ultrafilter over $\mu$. By Theorem \ref{duality}, $\mathrm{Coll}(\mu,<\kappa) \ast \mathcal{P}_{\dot{U}} \Vdash \mathcal{P}([j(\kappa)]^{\kappa})/\overline{I}\simeq \mathrm{Coll}(\mu,<j(\kappa)) \ast \mathcal{P}_{j(\dot{U})} / \dot{G} \ast \dot{H}$. We claim that $\mathrm{Coll}(\mu,<\kappa) \ast \mathcal{P}_{\dot{U}}$ forces that $\mathrm{Coll}(\mu,<j(\kappa)) \ast \mathcal{P}_{j(\dot{U})} / \dot{G} \ast \dot{H}$ is $(\mathrm{Reg} \cap j(\kappa))^{V}$-layered. For every $\mathrm{Coll}(\mu,<j(\kappa))$-name $\dot{X}$ for a subset of $\mu$, there is a maximal anti-chain $\mathcal{A}_{\dot{X}}$ such that every $q \in \mathcal{A}_{\dot{X}}$ decides whether $\dot{X}\in j(\dot{U})$. Let $\rho(\dot{X})$ be the least $\alpha < j(\kappa)$ such that $\mathcal{A}_{\dot{X}}\subseteq \mathrm{Coll}(\mu,<\alpha)$. For $\beta < j(\kappa)$, define $\rho(\beta) < j(\kappa)$ by $\sup (\{\rho(\dot{X}) \mid \dot{X}$ is a $\mathrm{Coll}(\mu,<\beta)$-name for a subset of $\mu\}\cup\{2^{\beta}\})$. Let $C$ be the club of closure points of $\rho$. For every $\alpha \in C \cap \mathrm{Reg}$, $\mathrm{Coll}(\mu,<\alpha) \Vdash \dot{U}_{\alpha}:=\dot{j}(U) \cap V[\dot{G}_{\alpha}]$ is an ultrafilter. Here, $\dot{G}_{\alpha}$ is the canonical name for a generic filter of $\mathrm{Coll}(\mu,<\alpha)$. By Lemma \ref{prikryequiv}, \begin{center}
$\mathrm{Coll}(\mu,<\kappa) \ast \mathcal{P}_{\dot{U}} \lessdot \mathrm{Coll}(\mu,<\alpha) \ast \mathcal{P}_{\dot{U}_{\alpha}}\lessdot \mathrm{Coll}(\mu,<j(\kappa)) \ast \mathcal{P}_{j(\dot{U})}$. \end{center}
Then $\mathrm{Coll}(\mu,<\alpha) \ast \mathcal{P}_{\dot{U}_{\alpha}} /\dot{G} \ast \dot{H}\lessdot \mathrm{Coll}(\mu,<j(\kappa)) \ast \mathcal{P}_{j(\dot{U})} /\dot{G} \ast \dot{H}$ holds in the extension by $\mathrm{Coll}(\mu,<\kappa) \ast \mathcal{P}_{\dot{U}}$. Let $\dot{P}_{\alpha}$ be a $\mathrm{Coll}(\mu,<\kappa) \ast \mathcal{P}_{\dot{U}}$-name for $\mathrm{Coll}(\mu,<f(\alpha)) \ast \mathcal{P}_{\dot{U}_{f(\alpha)}}$, where $f(\alpha) = \min ((C \cap \mathrm{Reg})^{V}\setminus \alpha)$. Then $\langle \dot{P}_{\alpha}\mid \alpha < j(\kappa) \rangle$ is forced to satisfy condition (2) of Lemma \ref{charlayered}. \end{proof} Lemma \ref{modificationprikrycondi} brings an analogue of Lemma \ref{prikryequiv} for $\mathcal{P}_{U,\mathcal{G}}$. We can replace ``$\mu$ is a singular cardinal'' with $\mu = \aleph_{\omega}$ in the statement of Proposition \ref{hugeprop}.
\end{document}
Connection (vector bundle)
In mathematics, and especially differential geometry and gauge theory, a connection on a fiber bundle is a device that defines a notion of parallel transport on the bundle; that is, a way to "connect" or identify fibers over nearby points. The most common case is that of a linear connection on a vector bundle, for which the notion of parallel transport must be linear. A linear connection is equivalently specified by a covariant derivative, an operator that differentiates sections of the bundle along tangent directions in the base manifold, in such a way that parallel sections have derivative zero. Linear connections generalize, to arbitrary vector bundles, the Levi-Civita connection on the tangent bundle of a pseudo-Riemannian manifold, which gives a standard way to differentiate vector fields. Nonlinear connections generalize this concept to bundles whose fibers are not necessarily linear.
This article is about connections on vector bundles. For other types of connections in mathematics, see connection (mathematics).
Linear connections are also called Koszul connections after Jean-Louis Koszul, who gave an algebraic framework for describing them (Koszul 1950).
This article defines the connection on a vector bundle using a common mathematical notation which de-emphasizes coordinates. However, other notations are also regularly used: in general relativity, vector bundle computations are usually written using indexed tensors; in gauge theory, the endomorphisms of the vector space fibers are emphasized. The different notations are equivalent, as discussed in the article on metric connections (the comments made there apply to all vector bundles).
Motivation
Let M be a differentiable manifold, such as Euclidean space. A vector-valued function $M\to \mathbb {R} ^{n}$ can be viewed as a section of the trivial vector bundle $M\times \mathbb {R} ^{n}\to M.$ One may consider a section of a general differentiable vector bundle, and it is therefore natural to ask if it is possible to differentiate a section, as a generalization of how one differentiates a function on M.
The model case is to differentiate a function $X:\mathbb {R} ^{n}\to \mathbb {R} ^{m}$ on Euclidean space $\mathbb {R} ^{n}$. In this setting the derivative $dX$ at a point $x\in \mathbb {R} ^{n}$ in the direction $v\in \mathbb {R} ^{n}$ may be defined by the standard formula
$dX(v)(x)=\lim _{t\to 0}{\frac {X(x+tv)-X(x)}{t}}.$
For every $x\in \mathbb {R} ^{n}$, this defines a new vector $dX(v)(x)\in \mathbb {R} ^{m}.$
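As a simple worked example (an illustration added here; the particular map is an arbitrary choice), take $n=m=2$, $X(x_{1},x_{2})=(x_{1}^{2},x_{1}x_{2})$, and $v=(1,0)$. Then

$dX(v)(x)=\lim _{t\to 0}{\frac {{\big (}(x_{1}+t)^{2}-x_{1}^{2},\,(x_{1}+t)x_{2}-x_{1}x_{2}{\big )}}{t}}=(2x_{1},x_{2}),$

which is the first column of the Jacobian matrix of $X$, as expected.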
When passing to a section $X$ of a vector bundle $E$ over a manifold $M$, one encounters two key issues with this definition. Firstly, since the manifold has no linear structure, the term $x+tv$ makes no sense on $M$. Instead one takes a path $\gamma :(-1,1)\to M$ such that $\gamma (0)=x,\gamma '(0)=v$ and computes
$dX(v)(x)=\lim _{t\to 0}{\frac {X(\gamma (t))-X(\gamma (0))}{t}}.$
However this still does not make sense, because $X(\gamma (t))$ and $X(\gamma (0))$ are elements of the distinct vector spaces $E_{\gamma (t)}$ and $E_{x}.$ This means that subtraction of these two terms is not naturally defined.
The problem is resolved by introducing the extra structure of a connection to the vector bundle. There are at least three perspectives from which connections can be understood. When formulated precisely, all three perspectives are equivalent.
1. (Parallel transport) A connection can be viewed as assigning to every differentiable path $\gamma $ a linear isomorphism $P_{t}^{\gamma }:E_{\gamma (t)}\to E_{x}$ for all $t.$ Using this isomorphism one can transport $X(\gamma (t))$ to the fibre $E_{x}$ and then take the difference; explicitly,
$\nabla _{v}X=\lim _{t\to 0}{\frac {P_{t}^{\gamma }X(\gamma (t))-X(\gamma (0))}{t}}.$
In order for this to depend only on $v,$ and not on the path $\gamma $ extending $v,$ it is necessary to place restrictions (in the definition) on the dependence of $P_{t}^{\gamma }$ on $\gamma .$ This is not straightforward to formulate, and so this notion of "parallel transport" is usually derived as a by-product of other ways of defining connections. In fact, the following notion of "Ehresmann connection" is nothing but an infinitesimal formulation of parallel transport.
2. (Ehresmann connection) The section $X$ may be viewed as a smooth map from the smooth manifold $M$ to the smooth manifold $E.$ As such, one may consider the pushforward $dX(v),$ which is an element of the tangent space $T_{X(x)}E.$ In Ehresmann's formulation of a connection, one chooses a way of assigning, to each $x$ and every $e\in E_{x},$ a direct sum decomposition of $T_{X(x)}E$ into two linear subspaces, one of which is the natural embedding of $E_{x}.$ With this additional data, one defines $\nabla _{v}X$ by projecting $dX(v)$ to be valued in $E_{x}.$ In order to respect the linear structure of a vector bundle, one imposes additional restrictions on how the direct sum decomposition of $T_{e}E$ moves as $e$ is varied over a fiber.
3. (Covariant derivative) The standard derivative $dX(v)$ in Euclidean contexts satisfies certain dependencies on $X$ and $v,$ the most fundamental being linearity. A covariant derivative is defined to be any operation $(v,X)\mapsto \nabla _{v}X$ which mimics these properties, together with a form of the product rule.
Unless the base is zero-dimensional, there are always infinitely many connections which exist on a given differentiable vector bundle, and so there is always a corresponding choice of how to differentiate sections. Depending on context, there may be distinguished choices, for instance those which are determined by solving certain partial differential equations. In the case of the tangent bundle, any pseudo-Riemannian metric (and in particular any Riemannian metric) determines a canonical connection, called the Levi-Civita connection.
Formal definition
Let $E\to M$ be a smooth real vector bundle over a smooth manifold $M$. Denote the space of smooth sections of $E\to M$ by $\Gamma (E)$. A covariant derivative on $E\to M$ is either of the following equivalent structures:
1. an $\mathbb {R} $-linear map $\nabla :\Gamma (E)\to \Gamma (T^{*}M\otimes E)$ such that the product rule
$\nabla (fs)=df\otimes s+f\nabla s$
holds for all smooth functions $f$ on $M$ and all smooth sections $s$ of $E.$
2. an assignment, to any smooth section $s$ and every $x\in M$, of an $\mathbb {R} $-linear map $(\nabla s)_{x}:T_{x}M\to E_{x}$ which depends smoothly on $x$ and such that
$\nabla (a_{1}s_{1}+a_{2}s_{2})=a_{1}\nabla s_{1}+a_{2}\nabla s_{2}$
for any two smooth sections $s_{1},s_{2}$ and any real numbers $a_{1},a_{2},$ and such that for every smooth function $f$, $\nabla (fs)$ is related to $\nabla s$ by
${\big (}\nabla (fs){\big )}_{x}(v)=df(v)s(x)+f(x)(\nabla s)_{x}(v)$
for any $x\in M$ and $v\in T_{x}M.$
Beyond using the canonical identification between the vector space $T_{x}^{\ast }M\otimes E_{x}$ and the vector space of linear maps $T_{x}M\to E_{x},$ these two definitions are identical and differ only in the language used.
It is typical to denote $(\nabla s)_{x}(v)$ by $\nabla _{v}s,$ with $x$ being implicit in $v.$ With this notation, the product rule in the second version of the definition given above is written
$\nabla _{v}(fs)=df(v)s+f\nabla _{v}s.$
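The product rule can be checked concretely. The following sketch (not part of the article; the bundle, the connection matrix $A$, and the sections below are arbitrary illustrative choices) works on the trivial rank-2 bundle over the real line, where a connection is determined in the trivialization by a matrix of functions $A(t)$ via $(\nabla s)(t)=s'(t)+A(t)s(t)$, and verifies the Leibniz rule by finite differences.

```python
import math

# Trivial rank-2 bundle over the real line: a section is a map t -> (s1(t), s2(t)).
# In the trivialization a connection is given by a 2x2 matrix of functions A(t),
# and the covariant derivative in the base direction d/dt is
#     (nabla s)(t) = s'(t) + A(t) s(t).

def A(t):
    # an arbitrary illustrative connection matrix
    return [[0.0, -t], [t, 0.0]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def deriv(g, t, h=1e-6):
    # central-difference derivative of a vector-valued function
    a, b = g(t + h), g(t - h)
    return [(a[i] - b[i]) / (2 * h) for i in range(2)]

def nabla(s):
    return lambda t: [d + w for d, w in zip(deriv(s, t), matvec(A(t), s(t)))]

f = math.exp                                   # a smooth function on the base
s = lambda t: [math.sin(t), math.cos(t)]       # a section
fs = lambda t: [f(t) * x for x in s(t)]        # the section f*s

t0 = 0.7
lhs = nabla(fs)(t0)                            # nabla(f s)
df = (f(t0 + 1e-6) - f(t0 - 1e-6)) / 2e-6      # df evaluated at d/dt
rhs = [df * x + f(t0) * y for x, y in zip(s(t0), nabla(s)(t0))]

# Leibniz rule: nabla(f s) = df * s + f * nabla(s)
print(all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs)))  # True
```

The identity holds exactly for any smooth $f$, $s$, and $A$; the small discrepancy seen numerically reflects only the finite-difference step size.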
Remark. In the case of a complex vector bundle, the above definition is still meaningful, but is usually taken to be modified by changing "real" and "ℝ" everywhere they appear to "complex" and "$\mathbb {C} .$" This places extra restrictions, as not every real-linear map between complex vector spaces is complex-linear. There is some ambiguity in this distinction, as a complex vector bundle can also be regarded as a real vector bundle.
Induced connections
Given a vector bundle $E\to M$, there are many associated bundles to $E$ which may be constructed, for example the dual vector bundle $E^{*}$, tensor powers $E^{\otimes k}$, symmetric and antisymmetric tensor powers $S^{k}E,\Lambda ^{k}E$, and the direct sums $E^{\oplus k}$. A connection on $E$ induces a connection on any one of these associated bundles. The ease of passing between connections on associated bundles is more elegantly captured by the theory of principal bundle connections, but here we present some of the basic induced connections.
Dual connection
Given $\nabla $ a connection on $E$, the induced dual connection $\nabla ^{*}$ on $E^{*}$ is defined implicitly by
$d(\langle \xi ,s\rangle )(X)=\langle \nabla _{X}^{*}\xi ,s\rangle +\langle \xi ,\nabla _{X}s\rangle .$
Here $X\in \Gamma (TM)$ is a smooth vector field, $s\in \Gamma (E)$ is a section of $E$, and $\xi \in \Gamma (E^{*})$ a section of the dual bundle, and $\langle \cdot ,\cdot \rangle $ the natural pairing between a vector space and its dual (occurring on each fibre between $E$ and $E^{*}$), i.e., $\langle \xi ,s\rangle :=\xi (s)$. Notice that this definition is essentially enforcing that $\nabla ^{*}$ be the connection on $E^{*}$ so that a natural product rule is satisfied for pairing $\langle \cdot ,\cdot \rangle $.
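This implicit definition can be made explicit in a local trivialization (a routine computation, sketched here as an illustration). If $\nabla $ is given on a local frame by a matrix of one-forms $A$, so that $(\nabla s)^{i}=ds^{i}+A_{j}^{i}s^{j}$, then writing $\xi $ in the dual frame and applying the defining identity above yields

$(\nabla ^{*}\xi )_{i}=d\xi _{i}-\xi _{j}A_{i}^{j},$

i.e., where $\nabla $ is locally $d+A$, the dual connection is locally $d-A^{\mathsf {T}}$.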
Tensor product connection
Given $\nabla ^{E},\nabla ^{F}$ connections on two vector bundles $E,F\to M$, define the tensor product connection by the formula
$(\nabla ^{E}\otimes \nabla ^{F})_{X}(s\otimes t)=\nabla _{X}^{E}(s)\otimes t+s\otimes \nabla _{X}^{F}(t).$
Here we have $s\in \Gamma (E),t\in \Gamma (F),X\in \Gamma (TM)$. Notice again this is the natural way of combining $\nabla ^{E},\nabla ^{F}$ to enforce the product rule for the tensor product connection. By repeated application of the above construction applied to the tensor product $E^{\otimes k}=(E^{\otimes (k-1)})\otimes E$, one also obtains the tensor power connection on $E^{\otimes k}$ for any $k\geq 1$ and vector bundle $E$.
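In local trivializations this has a simple description (an illustrative remark, assuming local frames in which $\nabla ^{E}=d+A^{E}$ and $\nabla ^{F}=d+A^{F}$): the tensor product connection is $d+A^{E\otimes F}$ with connection form

$A^{E\otimes F}=A^{E}\otimes \operatorname {id} _{F}+\operatorname {id} _{E}\otimes A^{F},$

which is exactly the product rule expressed at the level of connection forms.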
Direct sum connection
The direct sum connection is defined by
$(\nabla ^{E}\oplus \nabla ^{F})_{X}(s\oplus t)=\nabla _{X}^{E}(s)\oplus \nabla _{X}^{F}(t),$
where $s\oplus t\in \Gamma (E\oplus F)$.
Symmetric and exterior power connections
Since the symmetric power and exterior power of a vector bundle may be viewed naturally as subspaces of the tensor power, $S^{k}E,\Lambda ^{k}E\subset E^{\otimes k}$, the definition of the tensor product connection applies in a straightforward manner to this setting. Indeed, since the symmetric and exterior algebras sit inside the tensor algebra as direct summands, and the connection $\nabla $ respects this natural splitting, one can simply restrict $\nabla $ to these summands. Explicitly, define the symmetric product connection by
$\nabla _{X}^{\odot 2}(s\cdot t)=\nabla _{X}s\odot t+s\odot \nabla _{X}t$
and the exterior product connection by
$\nabla _{X}^{\wedge 2}(s\wedge t)=\nabla _{X}s\wedge t+s\wedge \nabla _{X}t$
for all $s,t\in \Gamma (E),X\in \Gamma (TM)$. Repeated applications of these products gives induced symmetric power and exterior power connections on $S^{k}E$ and $\Lambda ^{k}E$ respectively.
Endomorphism connection
Finally, one may define the induced connection $\nabla ^{\operatorname {End} {E}}$ on the vector bundle of endomorphisms $\operatorname {End} (E)=E^{*}\otimes E$, the endomorphism connection. This is simply the tensor product connection of the dual connection $\nabla ^{*}$ on $E^{*}$ and $\nabla $ on $E$. If $s\in \Gamma (E)$ and $u\in \Gamma (\operatorname {End} (E))$, so that the composition $u(s)\in \Gamma (E)$ also, then the following product rule holds for the endomorphism connection:
$\nabla _{X}(u(s))=\nabla _{X}^{\operatorname {End} (E)}(u)(s)+u(\nabla _{X}(s)).$
By reversing this equation, it is possible to define the endomorphism connection as the unique connection satisfying
$\nabla _{X}^{\operatorname {End} (E)}(u)(s)=\nabla _{X}(u(s))-u(\nabla _{X}(s))$
for any $u,s,X$, thus avoiding the need to first define the dual connection and tensor product connection.
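In a local trivialization in which $\nabla =d+A$ for a matrix of one-forms $A$, an endomorphism $u$ becomes a matrix-valued function, and the formula above reduces to a commutator expression (a short computation, sketched here for illustration):

$\nabla _{X}^{\operatorname {End} (E)}(u)=du(X)+[A(X),u],$

since $\nabla _{X}(u(s))-u(\nabla _{X}(s))=du(X)s+A(X)us-uA(X)s$.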
Any associated bundle
See also: Connection (principal bundle)
Given a vector bundle $E$ of rank $r$, and any representation $\rho :\mathrm {GL} (r,\mathbb {K} )\to G$ into a linear group $G\subset \mathrm {GL} (V)$, there is an induced connection on the associated vector bundle $F=E\times _{\rho }V$. This theory is most succinctly captured by passing to the principal bundle connection on the frame bundle of $E$ and using the theory of principal bundles. Each of the above examples can be seen as special cases of this construction: the dual bundle corresponds to the inverse transpose (or inverse adjoint) representation, the tensor product to the tensor product representation, the direct sum to the direct sum representation, and so on.
Exterior covariant derivative and vector-valued forms
See also: Exterior covariant derivative
Let $E\to M$ be a vector bundle. An $E$-valued differential form of degree $r$ is a section of the tensor product bundle:
$\bigwedge ^{r}T^{*}M\otimes E.$
The space of such forms is denoted by
$\Omega ^{r}(E)=\Omega ^{r}(M;E)=\Gamma \left(\bigwedge ^{r}T^{*}M\otimes E\right)=\Omega ^{r}(M)\otimes _{C^{\infty }(M)}\Gamma (E),$
where the last tensor product denotes the tensor product of modules over the ring of smooth functions on $M$.
An $E$-valued 0-form is just a section of the bundle $E$. That is,
$\Omega ^{0}(E)=\Gamma (E).$
In this notation a connection on $E\to M$ is a linear map
$\nabla :\Omega ^{0}(E)\to \Omega ^{1}(E).$
A connection may then be viewed as a generalization of the exterior derivative to vector bundle valued forms. In fact, given a connection $\nabla $ on $E$ there is a unique way to extend $\nabla $ to an exterior covariant derivative
$d_{\nabla }:\Omega ^{r}(E)\to \Omega ^{r+1}(E).$
This exterior covariant derivative is defined by the following Leibniz rule, which is specified on simple tensors of the form $\omega \otimes s$ and extended linearly:
$d_{\nabla }(\omega \otimes s)=d\omega \otimes s+(-1)^{\deg \omega }\omega \wedge \nabla s$
where $\omega \in \Omega ^{r}(M)$ so that $\deg \omega =r$, $s\in \Gamma (E)$ is a section, and $\omega \wedge \nabla s$ denotes the $(r+1)$-form with values in $E$ defined by wedging $\omega $ with the one-form part of $\nabla s$. Notice that for $E$-valued 0-forms, this recovers the normal Leibniz rule for the connection $\nabla $.
Unlike the ordinary exterior derivative, one generally has $d_{\nabla }^{2}\neq 0$. In fact, $d_{\nabla }^{2}$ is directly related to the curvature of the connection $\nabla $ (see below).
Affine properties of the set of connections
Every vector bundle over a manifold admits a connection, which can be proved using partitions of unity. However, connections are not unique. If $\nabla _{1}$ and $\nabla _{2}$ are two connections on $E\to M$ then their difference is a $C^{\infty }(M)$-linear operator. That is,
$(\nabla _{1}-\nabla _{2})(fs)=f(\nabla _{1}s-\nabla _{2}s)$
for all smooth functions $f$ on $M$ and all smooth sections $s$ of $E$. It follows that the difference $\nabla _{1}-\nabla _{2}$ can be uniquely identified with a one-form on $M$ with values in the endomorphism bundle $\operatorname {End} (E)=E^{*}\otimes E$:
$\nabla _{1}-\nabla _{2}\in \Omega ^{1}(M;\mathrm {End} \,E).$
Conversely, if $\nabla $ is a connection on $E$ and $A$ is a one-form on $M$ with values in $\operatorname {End} (E)$, then $\nabla +A$ is a connection on $E$.
In other words, the space of connections on $E$ is an affine space for $\Omega ^{1}(\operatorname {End} (E))$. This affine space is commonly denoted ${\mathcal {A}}$.
Relation to principal and Ehresmann connections
Let $E\to M$ be a vector bundle of rank $k$ and let ${\mathcal {F}}(E)$ be the frame bundle of $E$. Then a (principal) connection on ${\mathcal {F}}(E)$ induces a connection on $E$. First note that sections of $E$ are in one-to-one correspondence with right-equivariant maps ${\mathcal {F}}(E)\to \mathbb {R} ^{k}$. (This can be seen by considering the pullback of $E$ over ${\mathcal {F}}(E)\to M$, which is isomorphic to the trivial bundle ${\mathcal {F}}(E)\times \mathbb {R} ^{k}$.) Given a section $s$ of $E$ let the corresponding equivariant map be $\psi (s)$. The covariant derivative on $E$ is then given by
$\psi (\nabla _{X}s)=X^{H}(\psi (s))$
where $X^{H}$ is the horizontal lift of $X$ from $M$ to ${\mathcal {F}}(E)$. (Recall that the horizontal lift is determined by the connection on ${\mathcal {F}}(E)$.)
Conversely, a connection on $E$ determines a connection on ${\mathcal {F}}(E)$, and these two constructions are mutually inverse.
A connection on $E$ is also determined equivalently by a linear Ehresmann connection on $E$. This provides one method to construct the associated principal connection.
The induced connections discussed in #Induced connections can be constructed as connections on other associated bundles to the frame bundle of $E$, using representations other than the standard representation used above. For example if $\rho $ denotes the standard representation of $\operatorname {GL} (k,\mathbb {R} )$ on $\mathbb {R} ^{k}$, then the associated bundle to the representation $\rho \oplus \rho $ of $\operatorname {GL} (k,\mathbb {R} )$ on $\mathbb {R} ^{k}\oplus \mathbb {R} ^{k}$ is the direct sum bundle $E\oplus E$, and the induced connection is precisely that which was described above.
Local expression
Let $E\to M$ be a vector bundle of rank $k$, and let $U$ be an open subset of $M$ over which $E$ trivialises. Therefore over the set $U$, $E$ admits a local smooth frame of sections
$\mathbf {e} =(e_{1},\dots ,e_{k});\quad e_{i}:U\to \left.E\right|_{U}.$
Since the frame $\mathbf {e} $ defines a basis of the fibre $E_{x}$ for any $x\in U$, one can expand any local section $s:U\to \left.E\right|_{U}$ in the frame as
$s=\sum _{i=1}^{k}s^{i}e_{i}$
for a collection of smooth functions $s^{1},\dots ,s^{k}:U\to \mathbb {R} $.
Given a connection $\nabla $ on $E$, it is possible to express $\nabla $ over $U$ in terms of the local frame of sections, by using the characteristic product rule for the connection. For any basis section $e_{i}$, the quantity $\nabla (e_{i})\in \Omega ^{1}(U)\otimes \Gamma (U,E)$ may be expanded in the local frame $\mathbf {e} $ as
$\nabla (e_{i})=\sum _{j=1}^{k}A_{i}^{\ j}\otimes e_{j},$
where $A_{i}^{\ j}\in \Omega ^{1}(U);\,j=1,\dots ,k$ are a collection of local one-forms. These forms can be put into a matrix of one-forms defined by
$A={\begin{pmatrix}A_{1}^{\ 1}&\cdots &A_{k}^{\ 1}\\\vdots &\ddots &\vdots \\A_{1}^{\ k}&\cdots &A_{k}^{\ k}\end{pmatrix}}\in \Omega ^{1}(U,\operatorname {End} (\left.E\right|_{U}))$
called the local connection form of $\nabla $ over $U$. The action of $\nabla $ on any section $s:U\to \left.E\right|_{U}$ can be computed in terms of $A$ using the product rule as
$\nabla (s)=\sum _{j=1}^{k}\left(ds^{j}+\sum _{i=1}^{k}A_{i}^{\ j}s^{i}\right)\otimes e_{j}.$
If the local section $s$ is also written in matrix notation as a column vector using the local frame $\mathbf {e} $ as a basis,
$s={\begin{pmatrix}s^{1}\\\vdots \\s^{k}\end{pmatrix}},$
then using regular matrix multiplication one can write
$\nabla (s)=ds+As$
where $ds$ is shorthand for applying the exterior derivative $d$ to each component of $s$ as a column vector. In this notation, one often writes locally that $\left.\nabla \right|_{U}=d+A$. In this sense a connection is locally completely specified by its connection one-form in some trivialisation.
As explained in #Affine properties of the set of connections, any connection differs from another by an endomorphism-valued one-form. From this perspective, the connection one-form $A$ is precisely the endomorphism-valued one-form by which the connection $\left.\nabla \right|_{U}$ on $\left.E\right|_{U}$ differs from the trivial connection $d$ on $\left.E\right|_{U}$, which exists because $U$ is a trivialising set for $E$.
Relationship to Christoffel symbols
In pseudo-Riemannian geometry, the Levi-Civita connection is often written in terms of the Christoffel symbols $\Gamma _{ij}^{\ \ k}$ instead of the connection one-form $A$. It is possible to define Christoffel symbols for a connection on any vector bundle, and not just the tangent bundle of a pseudo-Riemannian manifold. To do this, suppose that in addition to $U$ being a trivialising open subset for the vector bundle $E\to M$, that $U$ is also a local chart for the manifold $M$, admitting local coordinates $\mathbf {x} =(x^{1},\dots ,x^{n});\quad x^{i}:U\to \mathbb {R} $.
In such a local chart, there is a distinguished local frame for the differential one-forms given by $(dx^{1},\dots ,dx^{n})$, and the local connection one-forms $A_{i}^{j}$ can be expanded in this basis as
$A_{i}^{\ j}=\sum _{\ell =1}^{n}\Gamma _{\ell i}^{\ \ j}dx^{\ell }$
for a collection of local smooth functions $\Gamma _{\ell i}^{\ \ j}:U\to \mathbb {R} $, called the Christoffel symbols of $\nabla $ over $U$. In the case where $E=TM$ and $\nabla $ is the Levi-Civita connection, these symbols agree precisely with the Christoffel symbols from pseudo-Riemannian geometry.
The expression for how $\nabla $ acts in local coordinates can be further expanded in terms of the local chart $U$ and the Christoffel symbols, to be given by
$\nabla (s)=\sum _{i,j=1}^{k}\sum _{\ell =1}^{n}\left({\frac {\partial s^{j}}{\partial x^{\ell }}}+\Gamma _{\ell i}^{\ \ j}s^{i}\right)dx^{\ell }\otimes e_{j}.$
Contracting this expression with the local coordinate tangent vector ${\frac {\partial }{\partial x^{\ell }}}$ leads to
$\nabla _{\frac {\partial }{\partial x^{\ell }}}(s)=\sum _{i,j=1}^{k}\left({\frac {\partial s^{j}}{\partial x^{\ell }}}+\Gamma _{\ell i}^{\ \ j}s^{i}\right)e_{j}.$
This defines a collection of $n$ locally defined operators
$\nabla _{\ell }:\Gamma (U,E)\to \Gamma (U,E);\quad \nabla _{\ell }(s):=\sum _{i,j=1}^{k}\left({\frac {\partial s^{j}}{\partial x^{\ell }}}+\Gamma _{\ell i}^{\ \ j}s^{i}\right)e_{j},$
with the property that
$\nabla (s)=\sum _{\ell =1}^{n}dx^{\ell }\otimes \nabla _{\ell }(s).$
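As an illustrative check of this expansion, the following sketch computes the Christoffel symbols $\Gamma _{\ell i}^{\ \ j}$ directly from a metric with SymPy. The round unit 2-sphere used here is an assumed standard example, not taken from the text above:

```python
import sympy as sp

# Christoffel symbols Gamma^j_{l i} of the Levi-Civita connection on the
# round unit 2-sphere, computed from the metric g = diag(1, sin(theta)^2).
th, ph = sp.symbols('theta phi', positive=True)
coords = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()

def christoffel(l, i, j):
    """Gamma^j_{l i} = (1/2) g^{jm} (d_l g_{mi} + d_i g_{ml} - d_m g_{li})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[j, m]
        * (sp.diff(g[m, i], coords[l]) + sp.diff(g[m, l], coords[i])
           - sp.diff(g[l, i], coords[m]))
        for m in range(2)))

# The only nonzero symbols are the classical ones:
#   Gamma^theta_{phi phi} = -sin(theta) cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cot(theta)
assert sp.simplify(christoffel(1, 1, 0) + sp.sin(th) * sp.cos(th)) == 0
assert sp.simplify(christoffel(0, 1, 1) - sp.cos(th) / sp.sin(th)) == 0
```

For $E=TM$ with this metric, these symbols agree with the usual Riemannian ones, as stated above.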
Change of local trivialisation
Suppose $\mathbf {e'} $ is another choice of local frame over the same trivialising set $U$, so that there is a matrix $g=(g_{i}^{\ j})$ of smooth functions relating $\mathbf {e} $ and $\mathbf {e'} $, defined by
$e_{i}=\sum _{j=1}^{k}g_{i}^{\ j}e'_{j}.$
Tracing through the construction of the local connection form $A$ for the frame $\mathbf {e} $, one finds that the connection one-form $A'$ for $\mathbf {e'} $ is given by
${A'}_{i}^{\ j}=\sum _{p,q=1}^{k}g_{p}^{\ j}A_{q}^{\ p}{(g^{-1})}_{i}^{\ q}-\sum _{p=1}^{k}(dg)_{p}^{\ j}{(g^{-1})}_{i}^{\ p}$
where $g^{-1}=\left({(g^{-1})}_{i}^{\ j}\right)$ denotes the inverse matrix to $g$. In matrix notation this may be written
$A'=gAg^{-1}-(dg)g^{-1}$
where $dg$ is the matrix of one-forms given by taking the exterior derivative of the matrix $g$ component-by-component.
In the case where $E=TM$ is the tangent bundle and $g$ is the Jacobian of a coordinate transformation of $M$, the lengthy formulae for the transformation of the Christoffel symbols of the Levi-Civita connection can be recovered from the more succinct transformation laws of the connection form above.
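The transformation law can be verified symbolically: if the components of a section transform as $s'=gs$, then applying $d+A'$ to $s'$ must give $g$ times the result of applying $d+A$ to $s$. Below is a minimal sketch over a one-dimensional base; the particular matrices $A_{0}$ and $g$ are illustrative assumptions:

```python
import sympy as sp

x = sp.symbols('x')
# Rank-2 bundle over R: connection form A = A0 dx in the unprimed frame,
# and a frame change g(x) with det(g) = 1 (arbitrary illustrative choices).
A0 = sp.Matrix([[x, 1], [0, -x]])
g = sp.Matrix([[1, x], [0, 1]])
Ap = g * A0 * g.inv() - sp.diff(g, x) * g.inv()  # A' = g A g^{-1} - (dg) g^{-1}

# For any section, the primed components are s' = g s, and the covariant
# derivative must be frame-independent: (d + A')(g s) = g (d + A) s.
s = sp.Matrix([sp.Function('s1')(x), sp.Function('s2')(x)])
lhs = sp.diff(g * s, x) + Ap * (g * s)
rhs = g * (sp.diff(s, x) + A0 * s)
assert sp.expand(lhs - rhs) == sp.zeros(2, 1)
```

The check passes even though $A'\neq A$, reflecting that the connection itself is global while its local form depends on the chosen frame.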
Parallel transport and holonomy
A connection $\nabla $ on a vector bundle $E\to M$ defines a notion of parallel transport on $E$ along a curve in $M$. Let $\gamma :[0,1]\to M$ be a smooth path in $M$. A section $s$ of $E$ along $\gamma $ is said to be parallel if
$\nabla _{{\dot {\gamma }}(t)}s=0$
for all $t\in [0,1]$. Equivalently, one can consider the pullback bundle $\gamma ^{*}E$ of $E$ by $\gamma $. This is a vector bundle over $[0,1]$ with fiber $E_{\gamma (t)}$ over $t\in [0,1]$. The connection $\nabla $ on $E$ pulls back to a connection on $\gamma ^{*}E$. A section $s$ of $\gamma ^{*}E$ is parallel if and only if $\gamma ^{*}\nabla (s)=0$.
Suppose $\gamma $ is a path from $x$ to $y$ in $M$. The above equation defining parallel sections is a first-order ordinary differential equation (cf. local expression above) and so has a unique solution for each possible initial condition. That is, for each vector $v$ in $E_{x}$ there exists a unique parallel section $s$ of $\gamma ^{*}E$ with $s(0)=v$. Define a parallel transport map
$\tau _{\gamma }:E_{x}\to E_{y}\,$
by $\tau _{\gamma }(v)=s(1)$. It can be shown that $\tau _{\gamma }$ is a linear isomorphism, with inverse given by following the same procedure with the reversed path $\gamma ^{-}$ from $y$ to $x$.
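The defining ODE can be integrated numerically. The sketch below assumes a trivial rank-2 bundle over $\mathbb {R} $ with a constant connection form $A\,dx$ (an illustrative choice, not from the text); in this case parallel transport along the unit interval is the matrix exponential $\exp(-A)$, which the integration reproduces:

```python
import numpy as np

# Parallel transport in a trivial rank-2 bundle over R with constant
# connection form A dx.  A section s(t) along gamma(t) = t is parallel
# iff s'(t) + A s(t) = 0, so tau_gamma(v) = expm(-A) v on [0, 1].
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation generator (skew-symmetric)

def transport(v, steps=1000):
    """Integrate s' = -A s from t = 0 to t = 1 with classical RK4."""
    s, h = np.array(v, dtype=float), 1.0 / steps
    f = lambda y: -A @ y
    for _ in range(steps):
        k1 = f(s); k2 = f(s + h/2*k1); k3 = f(s + h/2*k2); k4 = f(s + h*k3)
        s = s + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return s

def expm(M, terms=30):
    """Matrix exponential by truncated power series (small matrices only)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

v = np.array([1.0, 0.0])
print(transport(v))       # numerically equals expm(-A) @ v
print(expm(-A) @ v)       # = (cos 1, -sin 1) for this skew-symmetric A
```

Because $A$ is skew-symmetric here, $\tau _{\gamma }$ is orthogonal, illustrating that parallel transport is a linear isomorphism of fibres.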
Parallel transport can be used to define the holonomy group of the connection $\nabla $ based at a point $x$ in $M$. This is the subgroup of $\operatorname {GL} (E_{x})$ consisting of all parallel transport maps coming from loops based at $x$:
$\mathrm {Hol} _{x}=\{\tau _{\gamma }:\gamma {\text{ is a loop based at }}x\}.\,$
The holonomy group of a connection is intimately related to the curvature of the connection (Ambrose & Singer 1953).
The connection can be recovered from its parallel transport operators as follows. If $X\in \Gamma (TM)$ is a vector field and $s\in \Gamma (E)$ a section, at a point $x\in M$ pick an integral curve $\gamma :(-\varepsilon ,\varepsilon )\to M$ for $X$ at $x$. For each $t\in (-\varepsilon ,\varepsilon )$ we will write $\tau _{t}:E_{\gamma (t)}\to E_{x}$ for the parallel transport map traveling along $\gamma $ from $t$ to $0$. In particular for every $t\in (-\varepsilon ,\varepsilon )$, we have $\tau _{t}s(\gamma (t))\in E_{x}$. Then $t\mapsto \tau _{t}s(\gamma (t))$ defines a curve in the vector space $E_{x}$, which may be differentiated. The covariant derivative is recovered as
$\nabla _{X}s(x)={\frac {d}{dt}}\left(\tau _{t}s(\gamma (t))\right)_{t=0}.$
This demonstrates that an equivalent definition of a connection is given by specifying all the parallel transport isomorphisms $\tau _{\gamma }$ between fibres of $E$ and taking the above expression as the definition of $\nabla $.
Curvature
See also: Curvature form
The curvature of a connection $\nabla $ on $E\to M$ is a 2-form $F_{\nabla }$ on $M$ with values in the endomorphism bundle $\operatorname {End} (E)=E^{*}\otimes E$. That is,
$F_{\nabla }\in \Omega ^{2}(\mathrm {End} (E))=\Gamma (\Lambda ^{2}T^{*}M\otimes \mathrm {End} (E)).$
It is defined by the expression
$F_{\nabla }(X,Y)(s)=\nabla _{X}\nabla _{Y}s-\nabla _{Y}\nabla _{X}s-\nabla _{[X,Y]}s$
where $X$ and $Y$ are tangent vector fields on $M$ and $s$ is a section of $E$. One must check that $F_{\nabla }$ is $C^{\infty }(M)$-linear in both $X$ and $Y$ and that it does in fact define a bundle endomorphism of $E$.
As mentioned above, the covariant exterior derivative $d_{\nabla }$ need not square to zero when acting on $E$-valued forms. The operator $d_{\nabla }^{2}$ is, however, strictly tensorial (i.e. $C^{\infty }(M)$-linear). This implies that it is induced from a 2-form with values in $\operatorname {End} (E)$. This 2-form is precisely the curvature form given above. For an $E$-valued form $\sigma $ we have
$(d_{\nabla })^{2}\sigma =F_{\nabla }\wedge \sigma .$
A flat connection is one whose curvature form vanishes identically.
Local form and Cartan's structure equation
The curvature form has a local description called Cartan's structure equation. If $\nabla $ has local form $A$ on some trivialising open subset $U\subset M$ for $E$, then
$F_{\nabla }=dA+A\wedge A$
on $U$. To clarify this notation, notice that $A$ is an endomorphism-valued one-form, and so in local coordinates takes the form of a matrix of one-forms. The operation $d$ applies the exterior derivative component-wise to this matrix, and $A\wedge A$ denotes matrix multiplication, where the components are wedged rather than multiplied.
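Cartan's structure equation can be checked against the commutator definition of curvature. The sketch below uses coordinate vector fields $\partial _{x},\partial _{y}$ (whose bracket vanishes, so the $\nabla _{[X,Y]}$ term drops) and illustrative matrices for $A$, verifying symbolically that $\nabla _{X}\nabla _{Y}s-\nabla _{Y}\nabla _{X}s$ equals the $dx\wedge dy$-coefficient of $dA+A\wedge A$ applied to $s$:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Rank-2 bundle over R^2 with connection form A = A1 dx + A2 dy.
# The particular matrices below are illustrative assumptions.
A1 = sp.Matrix([[x,   y],
                [0, x*y]])
A2 = sp.Matrix([[y, x**2],
                [1,    0]])

# Coefficient of dx ^ dy in F = dA + A ^ A:
F = sp.diff(A2, x) - sp.diff(A1, y) + A1*A2 - A2*A1

# Same coefficient from F(X,Y)s = nabla_X nabla_Y s - nabla_Y nabla_X s
# with X = d/dx, Y = d/dy (so [X, Y] = 0):
s = sp.Matrix([sp.Function('s1')(x, y), sp.Function('s2')(x, y)])
Dx = lambda v: sp.diff(v, x) + A1*v
Dy = lambda v: sp.diff(v, y) + A2*v
assert sp.expand(Dx(Dy(s)) - Dy(Dx(s)) - F*s) == sp.zeros(2, 1)
```

All second derivatives of the unknown section cancel, leaving exactly the tensorial action of $F_{\nabla }$, consistent with the curvature being $C^{\infty }(M)$-linear.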
In local coordinates $\mathbf {x} =(x^{1},\dots ,x^{n})$ on $M$ over $U$, if the connection form is written $A=A_{\ell }dx^{\ell }=(\Gamma _{\ell i}^{\ \ j})dx^{\ell }$ for a collection of local endomorphisms $A_{\ell }=(\Gamma _{\ell i}^{\ \ j})$, then one has
$F_{\nabla }=\sum _{p,q=1}^{n}{\frac {1}{2}}\left({\frac {\partial A_{q}}{\partial x^{p}}}-{\frac {\partial A_{p}}{\partial x^{q}}}+[A_{p},A_{q}]\right)dx^{p}\wedge dx^{q}.$
Further expanding this in terms of the Christoffel symbols $\Gamma _{\ell i}^{\ \ j}$ produces the familiar expression from Riemannian geometry. Namely if $s=s^{i}e_{i}$ is a section of $E$ over $U$, then
$F_{\nabla }(s)=\sum _{i,j=1}^{k}\sum _{p,q=1}^{n}{\frac {1}{2}}\left({\frac {\partial \Gamma _{qi}^{\ \ j}}{\partial x^{p}}}-{\frac {\partial \Gamma _{pi}^{\ \ j}}{\partial x^{q}}}+\Gamma _{pr}^{\ \ j}\Gamma _{qi}^{\ \ r}-\Gamma _{qr}^{\ \ j}\Gamma _{pi}^{\ \ r}\right)s^{i}dx^{p}\wedge dx^{q}\otimes e_{j}=\sum _{i,j=1}^{k}\sum _{p,q=1}^{n}R_{pqi}^{\ \ \ j}s^{i}dx^{p}\wedge dx^{q}\otimes e_{j}.$
Here $R=(R_{pqi}^{\ \ \ j})$ is the full curvature tensor of $F_{\nabla }$, and in Riemannian geometry would be identified with the Riemannian curvature tensor.
It can be checked that if we define $[A,A]$ to be wedge product of forms but commutator of endomorphisms as opposed to composition, then $A\wedge A={\frac {1}{2}}[A,A]$, and with this alternate notation the Cartan structure equation takes the form
$F_{\nabla }=dA+{\frac {1}{2}}[A,A].$
This alternate notation is commonly used in the theory of principal bundle connections, where instead we use a connection form $\omega $, a Lie algebra-valued one-form, for which there is no notion of composition (unlike in the case of endomorphisms), but there is a notion of a Lie bracket.
In some references (see for example Madsen & Tornehave 1997) the Cartan structure equation may be written with a minus sign:
$F_{\nabla }=dA-A\wedge A.$
This different convention uses an order of matrix multiplication that is different from the standard Einstein notation in the wedge product of matrix-valued one-forms.
Bianchi identity
A version of the second (differential) Bianchi identity from Riemannian geometry holds for a connection on any vector bundle. Recall that a connection $\nabla $ on a vector bundle $E\to M$ induces an endomorphism connection on $\operatorname {End} (E)$. This endomorphism connection has itself an exterior covariant derivative, which, by a slight abuse of notation, we also denote by $d_{\nabla }$. Since the curvature is a globally defined $\operatorname {End} (E)$-valued two-form, we may apply the exterior covariant derivative to it. The Bianchi identity says that
$d_{\nabla }F_{\nabla }=0$.
This succinctly captures the complicated tensor formulae of the Bianchi identity in the case of Riemannian manifolds, and one may translate from this equation to the standard Bianchi identities by expanding the connection and curvature in local coordinates.
There is no analogue in general of the first (algebraic) Bianchi identity for a general connection, as this exploits the special symmetries of the Levi-Civita connection. Namely, one exploits that the vector bundle indices of $E=TM$ in the curvature tensor $R$ may be swapped with the cotangent bundle indices coming from $T^{*}M$ after using the metric to lower or raise indices. For example, this allows the torsion-freeness condition $\Gamma _{\ell i}^{\ \ j}=\Gamma _{i\ell }^{\ \ j}$ to be defined for the Levi-Civita connection, but for a general vector bundle the $\ell $-index refers to the local coordinate basis of $T^{*}M$, and the $i,j$-indices to the local coordinate frames of $E$ and $E^{*}$ coming from the splitting $\mathrm {End} (E)=E^{*}\otimes E$. However, in special circumstances, for example when the rank of $E$ equals the dimension of $M$ and a solder form has been chosen, one can use the soldering to interchange the indices and define a notion of torsion for affine connections which are not the Levi-Civita connection.
Gauge transformations
See also: Gauge group (mathematics)
Given two connections $\nabla _{1},\nabla _{2}$ on a vector bundle $E\to M$, it is natural to ask when they might be considered equivalent. There is a well-defined notion of an automorphism of a vector bundle $E\to M$. A section $u\in \Gamma (\operatorname {End} (E))$ is an automorphism if $u(x)\in \operatorname {End} (E_{x})$ is invertible at every point $x\in M$. Such an automorphism is called a gauge transformation of $E$, and the group of all automorphisms is called the gauge group, often denoted ${\mathcal {G}}$ or $\operatorname {Aut} (E)$. The group of gauge transformations may be neatly characterised as the space of sections of the capital A adjoint bundle $\operatorname {Ad} ({\mathcal {F}}(E))$ of the frame bundle of the vector bundle $E$. This is not to be confused with the lowercase a adjoint bundle $\operatorname {ad} ({\mathcal {F}}(E))$, which is naturally identified with $\operatorname {End} (E)$ itself. The bundle $\operatorname {Ad} {\mathcal {F}}(E)$ is the associated bundle to the principal frame bundle by the conjugation representation of $G=\operatorname {GL} (r)$ on itself, $g\mapsto ghg^{-1}$, and has fibre the same general linear group $\operatorname {GL} (r)$ where $\operatorname {rank} (E)=r$. Notice that despite having the same fibre as the frame bundle ${\mathcal {F}}(E)$ and being associated to it, $\operatorname {Ad} ({\mathcal {F}}(E))$ is not equal to the frame bundle, nor even a principal bundle itself. The gauge group may be equivalently characterised as ${\mathcal {G}}=\Gamma (\operatorname {Ad} {\mathcal {F}}(E)).$
A gauge transformation $u$ of $E$ acts on sections $s\in \Gamma (E)$, and therefore acts on connections by conjugation. Explicitly, if $\nabla $ is a connection on $E$, then one defines $u\cdot \nabla $ by
$(u\cdot \nabla )_{X}(s)=u(\nabla _{X}(u^{-1}(s)))$
for $s\in \Gamma (E),X\in \Gamma (TM)$. To check that $u\cdot \nabla $ is a connection, one verifies the product rule
${\begin{aligned}u\cdot \nabla (fs)&=u(\nabla (u^{-1}(fs)))\\&=u(\nabla (fu^{-1}(s)))\\&=u(df\otimes u^{-1}(s))+u(f\nabla (u^{-1}(s)))\\&=df\otimes s+fu\cdot \nabla (s).\end{aligned}}$
It may be checked that this defines a left group action of ${\mathcal {G}}$ on the affine space of all connections ${\mathcal {A}}$.
Since ${\mathcal {A}}$ is an affine space modelled on $\Omega ^{1}(M,\operatorname {End} (E))$, there should exist some endomorphism-valued one-form $A_{u}\in \Omega ^{1}(M,\operatorname {End} (E))$ such that $u\cdot \nabla =\nabla +A_{u}$. Using the definition of the endomorphism connection $\nabla ^{\operatorname {End} (E)}$ induced by $\nabla $, it can be seen that
$u\cdot \nabla =\nabla -d_{\nabla }(u)u^{-1}$
which is to say that $A_{u}=-d_{\nabla }(u)u^{-1}$.
Two connections are said to be gauge equivalent if they differ by the action of the gauge group, and the quotient space ${\mathcal {B}}={\mathcal {A}}/{\mathcal {G}}$ is the moduli space of all connections on $E$. In general this topological space is neither a smooth manifold nor even a Hausdorff space, but it contains inside it the moduli space of Yang–Mills connections on $E$, which is of significant interest in gauge theory and physics.
Examples
• A classical covariant derivative or affine connection defines a connection on the tangent bundle of M, or more generally on any tensor bundle formed by taking tensor products of the tangent bundle with itself and its dual.
• A connection on $\pi :\mathbb {R} ^{2}\times \mathbb {R} \to \mathbb {R} $ can be described explicitly as the operator
$\nabla =d+{\begin{bmatrix}f_{11}(x)&f_{12}(x)\\f_{21}(x)&f_{22}(x)\end{bmatrix}}dx$
where $d$ is the exterior derivative evaluated on vector-valued smooth functions and $f_{ij}(x)$ are smooth. A section $a\in \Gamma (\pi )$ may be identified with a map
${\begin{cases}\mathbb {R} \to \mathbb {R} ^{2}\\x\mapsto (a_{1}(x),a_{2}(x))\end{cases}}$
and then
$\nabla (a)=\nabla {\begin{bmatrix}a_{1}(x)\\a_{2}(x)\end{bmatrix}}={\begin{bmatrix}{\frac {da_{1}(x)}{dx}}+f_{11}(x)a_{1}(x)+f_{12}(x)a_{2}(x)\\{\frac {da_{2}(x)}{dx}}+f_{21}(x)a_{1}(x)+f_{22}(x)a_{2}(x)\end{bmatrix}}dx$
• If the bundle is endowed with a bundle metric, an inner product on its vector space fibers, a metric connection is defined as a connection that is compatible with the bundle metric.
• A Yang-Mills connection is a special metric connection which satisfies the Yang-Mills equations of motion.
• A Riemannian connection is a metric connection on the tangent bundle of a Riemannian manifold.
• A Levi-Civita connection is a special Riemannian connection: the metric-compatible connection on the tangent bundle that is also torsion-free. It is unique, in the sense that given any Riemannian connection, one can always find one and only one equivalent connection that is torsion-free. "Equivalent" means it is compatible with the same metric, although the curvature tensors may be different; see teleparallelism. The difference between a Riemannian connection and the corresponding Levi-Civita connection is given by the contorsion tensor.
• The exterior derivative is a flat connection on $E=M\times \mathbb {R} $ (the trivial line bundle over M).
• More generally, there is a canonical flat connection on any flat vector bundle (i.e. a vector bundle whose transition functions are all constant) which is given by the exterior derivative in any trivialization.
See also
• D-module
• Connection (mathematics)
References
• Chern, Shiing-Shen (1951), Topics in Differential Geometry, Institute for Advanced Study, mimeographed lecture notes
• Darling, R. W. R. (1994), Differential Forms and Connections, Cambridge, UK: Cambridge University Press, Bibcode:1994dfc..book.....D, ISBN 0-521-46800-0
• Kobayashi, Shoshichi; Nomizu, Katsumi (1996) [1963], Foundations of Differential Geometry, Vol. 1, Wiley Classics Library, New York: Wiley Interscience, ISBN 0-471-15733-3
• Koszul, J. L. (1950), "Homologie et cohomologie des algèbres de Lie", Bulletin de la Société Mathématique de France, 78: 65–127, doi:10.24033/bsmf.1410
• Wells, R.O. (1973), Differential analysis on complex manifolds, Springer-Verlag, ISBN 0-387-90419-0
• Ambrose, W.; Singer, I.M. (1953), "A theorem on holonomy", Transactions of the American Mathematical Society, 75 (3): 428–443, doi:10.2307/1990721, JSTOR 1990721
• Donaldson, S. K.; Kronheimer, P. B. (1997), The Geometry of Four-Manifolds, Oxford University Press
• Tu, L. W. (2017), Differential Geometry: Connections, Curvature, and Characteristic Classes, Graduate Texts in Mathematics, Vol. 275, Springer
• Taubes, C. H. (2011), Differential Geometry: Bundles, Connections, Metrics and Curvature, Oxford Graduate Texts in Mathematics, Vol. 23, Oxford University Press
• Lee, J. M. (2018), Introduction to Riemannian Manifolds, Springer International Publishing
• Madsen, I.H.; Tornehave, J. (1997), From calculus to cohomology: de Rham cohomology and characteristic classes, Cambridge University Press
Compute $i^{600} + i^{599} + \cdots + i + 1$, where $i^2=-1$.
Each group of 4 consecutive powers of $i$ adds to 0: $i + i^2 + i^3 + i^4 = i - 1 - i +1 = 0$, $i^5+i^6+i^7+i^8 = i^4(i+i^2+i^3+i^4) = 1(0) = 0$, and so on. Because 600 is divisible by 4, we know that if we start grouping the powers of $i$ as suggested by our first two groups above, we won't have any `extra' powers of $i$ beyond $i^{600}$. We will, however, have the extra 1 before the $i$, so: \[i^{600} + i^{599} + \cdots + i + 1 = (0) + (0) + \cdots + (0) + 1 = \boxed{1}.\] | Math Dataset |
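The grouping argument can be checked directly. Multiplying a Gaussian integer by $i$ is exact in floating point ($(a+bi)\cdot i=-b+ai$), so the loop below computes the sum exactly:

```python
# Sum 1 + i + i^2 + ... + i^600 by accumulating powers of i.
term, total = 1, 0
for _ in range(601):   # k = 0, 1, ..., 600
    total += term      # total += i^k
    term *= 1j
print(total)           # (1+0j)
```

The 600 powers $i^{1},\dots ,i^{600}$ fall into 150 groups of four summing to 0, leaving only the leading 1, in agreement with the computation.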
Fast intra algorithm based on texture characteristics for 360 videos
Mengmeng Zhang (ORCID: orcid.org/0000-0002-2016-741X), Xiaosha Dong, Zhi Liu, Fuqi Mao & Wen Yue
With the rapid progress of virtual reality technology, 360 videos have become increasingly popular. Given that the resolution of a 360 video is ultra-high (generally 4K to 8K), the encoding time for this type of video is considerably high. To reduce encoding complexity, this study proposes a fast intra algorithm based on image texture characteristics. On the one hand, the proposed algorithm determines whether to terminate the coding unit partition early on the basis of texture complexity. On the other hand, it reduces the number of candidate modes in the mode decision according to texture directivity. Experimental results show that the proposed algorithm obtains an average time reduction of 53% with a Bjontegaard delta rate increase of only 1.3%, an acceptable loss in rate distortion performance.
With growing commercial interest in virtual reality (VR) in recent years, ITU-T's Video Coding Experts Group (VCEG) and ISO/IEC's Moving Picture Experts Group (MPEG) jointly established the Joint Video Exploration Team (JVET) for future video coding research and proposed the VR 360° video (referred to as 360 video) scheme [1]. A 360 video is usually obtained from a multi-camera array, such as a GoPro Omni camera. Images from the multiple cameras are stitched together to form a spherical scene covering 360° in the horizontal direction and 180° in the vertical direction.
Because of their spherical nature, VR 360 videos are difficult to encode directly with traditional video coding methods. Therefore, JVET proposed 11 different projection formats for spherical videos to address coding issues. A 360 video is projected onto a two-dimensional plane and converted into a 2D projection format in a certain ratio [2], such as equirectangular projection (ERP), octahedron projection (OHP), truncated square pyramid projection (TSP), rotated sphere projection (RSP), cubemap projection (CMP), and segmented sphere projection (SSP) (Fig. 1). The video is then encoded as a traditional video.
Projection formats of 360 video. a ERP. b OHP. c TSP. d RSP. e CMP4X3. f CMP3X2. g SSP
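As an illustration of the ERP format, the standard equirectangular mapping sends longitude and latitude linearly to pixel coordinates. The sketch below is a minimal version; the origin and orientation conventions are assumptions here, since they vary between implementations:

```python
import math

def sphere_to_erp(lon, lat, width, height):
    """Map longitude in [-pi, pi] and latitude in [-pi/2, pi/2] to ERP
    pixel coordinates (u, v), with v = 0 at the north pole."""
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# The equator's centre maps to the frame centre; the north pole is
# stretched across the entire top row (any longitude gives v = 0):
print(sphere_to_erp(0.0, 0.0, 4096, 2048))                 # (2048.0, 1024.0)
print(sphere_to_erp(math.pi / 2, math.pi / 2, 4096, 2048)[1])  # 0.0
```

This stretching of an entire pole into a full pixel row is what produces the over-sampled, homogeneous textures near the top and bottom of ERP frames discussed later.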
The 360 video coding framework based on HEVC is shown in Fig. 2 [3]. On the basis of traditional video coding, 360 video coding adds a down-sampling process before encoding, an up-sampling process after decoding, conversions between formats in the codec process, and some new quality evaluation standards proposed for 360 videos, such as peak signal-to-noise ratio weighted by sample area (WS-PSNR) and spherical peak signal-to-noise ratio (PSNR) without interpolation (S_PSNR_NN) [4]. HEVC is the latest international coding standard and uses the traditional block-based hybrid coding framework. In HEVC, blocks are divided into different sizes: the coded image is divided frame by frame into a series of units, namely coding units (CUs), prediction units (PUs), and transform units (TUs). Compared with H.264, HEVC has 35 different prediction modes for luminance information, including 33 angular predictions, a planar prediction, and a DC prediction. The mode decision is divided into two processes, namely, rough mode decision (RMD) and most probable mode (MPM).
Encoding process for 360 video testing
The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 explains the specific steps of the proposed algorithm in detail. Section 4 verifies the effectiveness of the proposed algorithm through experimental results. Section 5 provides the conclusions.
Considering that HEVC was proposed for traditional videos, a 360 video coding framework based on HEVC cannot encode 360 videos efficiently. To improve the performance of 360 video coding, the authors in [5] proposed two adaptive encoding techniques for omnidirectional videos (OVs), along with two further algorithms, to reduce the bitrate of OVs after compression. In [6], a real-time 360 video stitching framework was proposed to render the entire scene at different levels of detail. A motion-estimation algorithm that improves the accuracy of motion prediction in 360 videos was proposed in [7]. In [8], considering distortion in the spherical domain, Li derived the optimal rate distortion relationship in the spherical domain and presented its optimal solution, which achieves bit savings of up to 11.5%. In [9, 10], video encoding is optimized using wavelet image features and characteristics of the human visual system.
A 360 video has a relatively high resolution and thus requires a large number of coding tree units (CTUs) in encoding. The process is time consuming and can affect real-time video encoding and transmission. We can improve coding efficiency by enhancing the HEVC intra prediction algorithm. In HEVC intra prediction, the rate distortion (RD) cost is used to determine the CU partition and mode decision, which leads to high computational complexity. Therefore, a large number of improved algorithms have been proposed. These algorithms include fast CU/PU size decisions such as those in [11, 12], fast intra prediction mode decisions such as those in [13, 14], and fast mode decisions such as those in [15, 16]. The method in [16] exploits the depth information of neighboring CUs to make an early CU split or CU pruning decision, saving 37.91% of computational complexity on average compared with the current HM (HEVC test model) with only a 0.66% increase. In [17], an algorithm that combines CU coding bits with the reduction of unnecessary intra prediction modes was proposed to decrease computational complexity; it provides an average time reduction of 53% with only a 1.7% Bjontegaard delta rate (BD rate) increase. Optimized algorithms based on image texture, such as the gradient-based algorithm in [18], the edge density algorithm in [19], and the texture-based algorithm in [20], are used for traditional videos. The work in [18] proposes a fast, hardware-friendly intra block size selection algorithm with simple gradient calculations and a bottom-up structure, which saves 57.3% of encoding time on average for the all-intra main case with a 2.2% BD rate increase. The algorithm in [19] pre-partitions CUs based on the edge density of textures to simplify partitioning; it saves about 33% of encoding time for all-intra configurations with a slight loss of PSNR. The method in [20] reduces the average encoding time by 47.21% at the cost of a 2.55% bit rate increase.
These intra prediction algorithms were designed for traditional videos. Traditional videos and 360 video projection formats exhibit different image textures. Therefore, the texture thresholds selected for traditional videos are not applicable to 360 videos. In the present work, we propose a fast intra algorithm based on video texture characteristics that can be applied to 360 videos. We experimentally re-selected the texture complexity thresholds in the CU partition process so that they suit 360 videos and added texture directivity to the mode decision process. Experiments show that the proposed algorithm efficiently reduces computational complexity and encoding time.
The proposed algorithm is divided into two parts. First, according to the thresholds of image texture complexity, the level of complexity is classified to determine whether the current CU is to be skipped or further divided. Second, the candidate prediction mode is further reduced on the basis of texture directionality.
Proposed algorithm
In VR 360 video coding, a spherical video is projected into a two-dimensional rectangular format. The ERP format is the most representative VR 360 video format; in many encoding processes, other formats are first converted into the ERP format for encoding. In the ERP projection format, the contents located near the two poles are stretched substantially, resulting in changes in the texture features of those areas. We performed a comprehensive statistical analysis of 360 video sequences (4K–8K) in ERP format; all sequences used in the experiments are the test sequences provided by JVET. Figure 3 shows some sequences of different resolutions and encoding bits. As shown in Fig. 3, the stretch of the upper and lower parts of these sequences is usually the largest, and the texture there is usually homogeneous. This means that these parts can use a smaller CU partition depth for intra prediction during the encoding process and can thus be encoded using large blocks. The block information in the middle part of these sequences is relatively complex and requires small blocks for encoding.
Features of the upper and lower parts of 360 ERP video sequences
The proposed algorithm reduces the encoding complexity of a 360 video in two aspects. First, as shown in Fig. 4, the red part is located at the two poles of the sphere; its stretching is the largest, and the stretching of the yellow sphere in the middle of the frame is relatively minimal. The proposed algorithm based on texture complexity can effectively skip the rate distortion optimization (RDO) calculation of these parts with minimal loss of viewing experience. The texture of the stretched part, such as the sky and water in Fig. 4, is relatively homogeneous. We can encode the CTUs with a large block and low depth to establish a balance between encoding bits and image quality. CTU blocks with complex textures, such as the houses and ships in Fig. 4, contain additional information and usually need to be divided into high depths to keep the video image information intact during the compression process. Second, the horizontal stretch of a 360 ERP video is much larger than the vertical stretch, thereby causing the directionality of the prediction mode to change relative to the case of traditional videos. We can optimize the mode decision by determining the texture directionality.
ERP format: expands from spherical to rectangular
The following sections elaborate on the proposed algorithm in three aspects. First, the ideas and preparations used in the algorithm are introduced. Second, the CU size decision algorithm based on texture complexity is presented. Third, the mode decision algorithm based on texture direction is discussed.
In the HEVC intra prediction for 360 videos, CU size and mode decision use RD cost to determine the best partition depth and mode. The RD costs of the current depth block and four sub-blocks are calculated and compared to determine whether to divide the blocks further.
The horizontal direction of a 360 ERP video represents the 360° view surrounding a person, and the vertical direction represents the 180° view from directly overhead to directly underfoot. In HEVC, the CTU size can be 16 × 16, 32 × 32, or 64 × 64 pixels. A CTU can be decomposed into several CUs in a quad-tree structure. The CUs at the same level must be four square blocks of uniform size that are adjacent and non-overlapping. Each CU has up to four layers of decomposition, namely, 64 × 64, 32 × 32, 16 × 16, and 8 × 8, with corresponding depths of 0, 1, 2, and 3, respectively. Figure 5 shows the quad-tree structure of a CU in HEVC.
Partition of a CU structure (64 × 64): quad-tree-based coding structure
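As an illustrative aside (not part of the HM reference software, and with function names of our own choosing), the depth/size relationship of the quad-tree described above can be enumerated with a short sketch:

```python
# Illustrative sketch: enumerate the HEVC quad-tree CU sizes.
# A 64 x 64 CTU splits recursively into four equal square CUs
# down to 8 x 8, giving depths 0-3.

def cu_sizes(ctu_size=64, min_size=8):
    """Return (depth, size) pairs for each quad-tree level of a CTU."""
    pairs = []
    depth, size = 0, ctu_size
    while size >= min_size:
        pairs.append((depth, size))
        depth += 1
        size //= 2
    return pairs

print(cu_sizes())  # [(0, 64), (1, 32), (2, 16), (3, 8)]
```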
The proposed algorithm uses texture characteristics to improve the intra prediction of a 360 ERP video. On the basis of the complexity of the texture characteristics, we can predict the depth of a CU block in advance in the CU partition process. The directionality of the texture characteristics can also be used to determine the direction of candidate modes in the mode decision process.
The main purpose of the CU size decision algorithm based on texture complexity is to classify the horizontal and vertical texture complexities before RDO. Texture complexity is calculated in advance to determine whether the current block is to be further divided. If the current block texture has low complexity, then we skip the calculation of RDO and directly decide that no further division is needed; if the current block has high texture complexity, then we should divide it into small ones to balance the compression and image quality. In this case, we can skip the calculation of RDO and directly proceed to the partition process. For the blocks with uncertain texture complexities, further division is determined by calculating the RD cost. In this way, many unnecessary calculations and comparisons of RD cost can be skipped, thereby reducing the computational complexity and saving time.
In the prediction mode decision algorithm, we add a number of new decisions in the RMD and MPM processes before RDO (Fig. 6) according to the vertical and horizontal texture directions calculated in the previous step. The angle candidate mode is reduced from 33 to 17, and then the 17 modes are divided into five groups to further determine which group is likely to be the optimal mode. Finally, we add the planner and DC modes, determine the candidate mode with small RD cost and two optimal candidate modes, and finally compare and add them to the MPM. Compared with the original 35 to 8/3 algorithms, the original HM algorithm must traverse 35 modes to determine the optimal algorithm. The proposed algorithm directly reduces the candidate mode by half by determining the texture direction and then further filtering the remaining half to determine the optimal mode.
CU size and mode decision in HEVC intra mode
The test conditions of the proposed algorithms agree with those in [21], and the research is based on the latest HM 16.16. The following sections discuss the two proposed algorithms.
Texture complexity-based CU size decision algorithm
Measure of image texture complexity
The proposed CU size algorithm for 360 videos achieves improved performance by calculating image texture complexity, an important concept in image processing. Several metrics have been proposed to describe image texture, including local binary patterns, Markov random fields, and the gray-level co-occurrence matrix. These methods are widely used because of their accuracy and precision. We tried to use them in HM 16.16 but found that encoding even one frame of video took several times longer than with the original HM 16.16. The long computation time of these methods makes real-time video encoding difficult. Through experimental comparison, we chose a more appropriate metric: the mean of the absolute difference (MAD), which balances calculation time and the accuracy of describing texture complexity. The MAD of an image is calculated as follows:
$$ \mathrm{MAD}=\frac{1}{n^2}\sum \limits_{y=0}^{n-1}\sum \limits_{x=0}^{n-1}\left|p\left(x,y\right)-\overline{m}\right| $$
For the CTU of a 360 video, the MAD of the horizontal texture of the image pixels is smoother than that of the vertical texture because the horizontal stretch of the 360 video in ERP format is greater than the vertical stretch. Directly calculating the MAD of the entire CTU cannot accurately represent texture complexity. A CTU may have the characteristics shown in Fig. 7 and thus a simple texture; however, when we calculate the MAD of the entire block, the result is relatively large, leading to an inaccurate texture complexity for the CTU. Therefore, to represent the texture complexity of the CTU, we adjust the MAD formula and calculate the vertical MAD (VMAD) for each column and the horizontal MAD (HMAD) for each row separately.
$$ {\mathrm{VMAD}}_i=\sum \limits_{y=0}^{n-1}\left|p\left(i,y\right)-{\overline{m}}_i\right|\kern0.5em \left(0\le i\le n-1\right) $$
$$ {\mathrm{HMAD}}_j=\sum \limits_{x=0}^{m-1}\left|p\left(x,j\right)-{\overline{m}}_j\right|\kern0.5em \left(0\le j\le m-1\right) $$
CTUs with smooth texture features. a Image with a simple horizontal texture. b Image with multiple rows of simple horizontally oriented texture
Then, we calculate the average VMAD and HMAD as follows:
$$ \mathrm{meanVMAD}=\frac{1}{n}\sum \limits_{i=0}^{n-1}{\mathrm{VMAD}}_i $$
$$ \mathrm{meanHMAD}=\frac{1}{m}\sum \limits_{j=0}^{m-1}{\mathrm{HMAD}}_j $$
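The per-row and per-column computations above can be sketched in a few lines. This is a minimal illustration assuming `block` is a 2-D list of luma samples; the function names are ours rather than anything from HM 16.16:

```python
# Per-row HMAD and per-column VMAD, following the formulas above.

def hmad_rows(block):
    """HMAD_j: sum of absolute deviations from the row mean, for each row."""
    out = []
    for row in block:
        m = sum(row) / len(row)          # row mean m_j
        out.append(sum(abs(p - m) for p in row))
    return out

def vmad_cols(block):
    """VMAD_i: sum of absolute deviations from the column mean, per column."""
    cols = list(zip(*block))             # transpose, then reuse the row logic
    return hmad_rows(cols)

def mean_hmad(block):
    vals = hmad_rows(block)
    return sum(vals) / len(vals)

def mean_vmad(block):
    vals = vmad_cols(block)
    return sum(vals) / len(vals)

# A block of constant rows has zero horizontal deviation but nonzero
# vertical deviation, illustrating how the two metrics capture direction.
block = [[1, 1, 1, 1], [5, 5, 5, 5]]
print(mean_hmad(block), mean_vmad(block))  # 0.0 4.0
```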
The decision of the thresholds is further described below.
Flow of the proposed algorithm
The accuracy of the CU partition of the proposed algorithm largely depends on the threshold setting. Therefore, selecting appropriate thresholds is key for the proposed algorithm. Two thresholds, namely, α and β, are defined here: α represents the maximum threshold below which the image texture is considered homogeneous, and β represents the minimum threshold above which the image texture is considered complex. We render the following improvements on the basis of the original HM 16.16. Figure 8 shows the process of this algorithm.
Schematic of the proposed CU size decision algorithm based on texture complexity
Through the preset thresholds, we can divide the CU into three parts.
(a) When meanHMAD < α, that is, the texture information of the CTU block is homogeneous, further calculation of the RD cost comparison between the CTU block and its four sub-blocks is skipped, and the current depth is directly determined as the optimal depth of the CTU.
(b) When meanHMAD > β, that is, the texture information of the CTU block is complex, further calculation of the RD cost comparison between the CTU block and its four sub-blocks is skipped, and the current depth is directly determined as the non-optimal depth and thus requires further division.
(c) When α < meanHMAD < β, the RD cost should be calculated to determine whether to proceed with further division.
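The three-way decision in (a)–(c) can be summarized as a small sketch; `alpha` and `beta` here are placeholder values for illustration, not the tuned thresholds reported in Fig. 10:

```python
# Three-way CU split decision based on meanHMAD and two thresholds.

SKIP_SPLIT, FORCE_SPLIT, CHECK_RD = "skip_split", "force_split", "check_rd_cost"

def cu_split_decision(mean_hmad, alpha, beta):
    if mean_hmad < alpha:   # (a) homogeneous texture: keep current depth
        return SKIP_SPLIT
    if mean_hmad > beta:    # (b) complex texture: split without RD comparison
        return FORCE_SPLIT
    return CHECK_RD         # (c) uncertain: fall back to the full RD cost check

print(cu_split_decision(2.0, alpha=5.0, beta=20.0))   # skip_split
print(cu_split_decision(30.0, alpha=5.0, beta=20.0))  # force_split
print(cu_split_decision(10.0, alpha=5.0, beta=20.0))  # check_rd_cost
```

Only case (c) pays for the RDO comparison, which is where the time savings come from.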
VMAD/HMAD threshold parameters
To ensure that the thresholds are appropriately set, we selected a large number of frames from several sequences for statistical analysis and conducted many statistical tests within the appropriate range. To avoid overfitting, we did not use all the test sequences; we selected one sequence each from the 4K, 6K, and 8K sequences for the statistics. For judging the texture complexity of the CU, the selected statistic set contains enough samples (14,600, 58,400, 233,600, and 934,400 samples for 64 × 64, 32 × 32, 16 × 16, and 8 × 8 CUs, respectively). We found that the ranges of texture complexity thresholds obtained from different videos are similar; the statistical results are shown in Fig. 9. We then used the obtained thresholds on the other test sequences and found that they measure texture complexity well not only for the sequences used in the statistics but also for unseen video sequences. As shown in Fig. 9, using the CU partitions with depths of 0, 1, and 2 and quantization parameters (QP) of 27 and 37 of the first frame of all 360 video sequences as examples, we separately calculated the HMAD of the CU blocks that need to be further divided and the HMAD of the CU blocks whose division can be skipped. Through the statistical analysis, we found that the HMAD value of a CU that does not need to be divided is generally smaller than the HMAD value of a CU that needs to be divided. As the depth increases, the number of CUs that need to be divided decreases, and the number of CUs that do not need to be divided increases. When the HMAD is greater than a certain value, the CU must be divided; similarly, when the HMAD is less than a certain value, the CU does not need to be divided. By accurately setting the thresholds, the proposed algorithm can decide in advance whether to proceed with the division, thereby improving coding efficiency.
HMAD maps of CUs that can be skipped and further divided. a CU division of depth = 0, QP = 37. b CU division of depth = 1, QP = 37. c CU division of depth = 2, QP = 37. d CU division of depth = 0, QP = 27. e CU division of depth = 1, QP = 27. f CU division of depth = 2, QP = 27
We find that for different QP and CU depths, different thresholds should be selected (Fig. 10). For different CU depths and QPs, α maintains a relatively small fluctuation range. For β, different CU depths have a remarkable impact on the threshold decision.
Threshold range of α and β of HMAD
We select one frame in these test sequences and separately calculate the CU partition of the original HM 16.16 and the CU partition of the proposed algorithm. As shown in Tables 1 and 2, the similarity rate of the proposed CU size decision algorithm is above 90%.
Table 1 Comparison of the division of threshold α and HM 16.16
Table 2 Comparison of the division of threshold β and HM 16.16
Figure 11 shows the CU segmentation results of the original HM 16.16 and of the proposed algorithm. As shown in the figure, the CU partition of the proposed algorithm is basically the same as that of HM 16.16 and has high accuracy. Furthermore, the proposed CU size decision algorithm performs better than the original HM 16.16 on some homogeneous blocks that do not need to be divided and on complex blocks that need further partitioning. For example, in Fig. 11d, the roof texture in the lower left corner is complex, and the proposed algorithm makes a more detailed division of these CU blocks. The sky texture in the upper right corner is simple; compared with the original HM 16.16, the proposed algorithm divides these CU blocks into larger partitions (i.e., the CU depth is smaller). Figure 11h likewise shows that the CU blocks of the cable that need to be subdivided are more finely divided, while the CU blocks with simple texture are partitioned into larger blocks.
Comparison of CU partition line images of the proposed algorithm and HM 16.16. a, e CU partition line image of HM 16.16. b, f CU partition line image of the proposed algorithm. c, g Partially enlarged CU partition line image of HM 16.16. d, h Partially enlarged CU partition line image of the proposed algorithm
Texture direction-based prediction mode decision algorithm
VMAD and HMAD determine the texture complexity and texture directionality of the image. The smaller the MAD in the horizontal or vertical direction, the lower the texture complexity in this direction. This condition indicates that the prediction mode in this direction involves a small RD cost. To resolve the problem of the large computation in the RDO process for a 360 video and to reduce the large number of candidate modes in RMD, we propose to use texture directionality in RMD and MPM. By reducing the candidate modes, the proposed algorithm can further reduce the computational complexity in RDO.
The original HM 16.16 for 360 videos traverses 35 prediction modes, calculates the RD cost to rank them, and then selects three (when the PU or TU size is 32 × 32 or 64 × 64) or eight (when the PU or TU size is 4 × 4, 8 × 8, or 16 × 16) modes. These modes are then compared with the MPMs to determine the best prediction mode. Many candidate modes in RMD can be excluded in advance. Therefore, we can use the VMAD and HMAD calculated in the proposed CU size decision algorithm to divide the 33 angular predictions into horizontal (2–18) and vertical (18–34) modes according to the texture directionality (Fig. 12). We label the yellow part of the figure as the horizontal mode and the red part as the vertical mode.
(a) If VMAD > HMAD, C1 = horizontal mode
(b) If VMAD ≤ HMAD, C1 = vertical mode
Vertical and horizontal mode decisions
When classifying the vertical and horizontal textures, we first reduce the 35 modes to 19 (including modes 0 and 1), further reduce the 19 modes to obtain the two best candidate modes, and finally add the candidates to the MPMs. The candidate modes are divided into three layers and calculated layer by layer. In Fig. 12, the modes marked with a yellow line form the horizontal mode set C2, and those marked with a red line form the vertical mode set C2 (Table 3).
Table 3 Horizontal and vertical candidate modes for each level
Take the horizontal direction as an example. Before the RMD, we first determine through the first level C1 whether the candidate modes are horizontal or vertical. Then, we treat each group of five adjacent modes as a whole, calculate the RD cost of the five representative prediction modes (2, 6, 10, 14, 18) in the second level C2, and obtain the candidate group C3 around the representative with the smallest cost (for example, if the RD cost of mode 6 in C2 is the smallest, the candidate modes of C3 are 4, 5, 7, and 8). We then traverse C3 to obtain the optimal mode. With this layering, the number of candidate modes to be calculated is reduced. Notably, in the actual calculation, modes 0 and 1 are added at each layer. The specific process is shown in Fig. 13.
Schematic of the proposed prediction mode decision algorithm based on texture direction
Through this proposed algorithm, we effectively reduce the number of candidate modes. Table 4 lists the number of candidate modes of the proposed algorithm and the original HM 16.16 in RMD and RDO.
Table 4 Number of candidate modes of the proposed algorithm and original HM 16.16
The proposed algorithm first determines the horizontal or vertical mode and reduces the candidate mode to seven (five types of C2 + modes 0 and 1). If modes 0 and 1 are the two best modes, then we add these two modes to RDO. If modes 0 and 1 are not the two best modes, then the mode in C3 is further calculated, and the two best candidate modes are finally added to RDO.
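The layered C1/C2/C3 search described above can be sketched as follows. This is a hedged illustration, not the HM implementation: `rd_cost` stands in for the encoder's rough-mode-decision cost, the C3 neighborhood is taken as the two modes on each side of the best C2 representative, and all names are ours.

```python
# Layered candidate-mode search: C1 picks a direction, C2 evaluates five
# representative angular modes, C3 refines around the best representative.

def layered_mode_decision(vmad, hmad, rd_cost):
    # C1: choose the half of the 33 angular modes matching the texture
    # direction (VMAD > HMAD -> horizontal modes 2-18, else vertical 18-34).
    horizontal = vmad > hmad
    c2 = [2, 6, 10, 14, 18] if horizontal else [18, 22, 26, 30, 34]
    # C2: pick the representative mode with the smallest cost.
    best_c2 = min(c2, key=rd_cost)
    # C3: refine around the best representative (e.g. best=6 -> 4, 5, 7, 8),
    # clipped to the valid angular range [2, 34].
    c3 = [m for m in range(best_c2 - 2, best_c2 + 3)
          if m != best_c2 and 2 <= m <= 34]
    # Planar (0) and DC (1) are always added before the final selection.
    candidates = [0, 1, best_c2] + c3
    return sorted(candidates, key=rd_cost)[:2]   # two best modes for RDO
```

With a synthetic cost such as `lambda m: abs(m - 7)` and VMAD > HMAD, the search lands on modes near 7 without ever evaluating all 35 modes.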
In the process of achieving real-time transmission, the encoding time must be controlled within a certain range. Through experiments, we find that the combination of the two proposed image texture-based algorithms can effectively reduce coding time with a minimal loss of the BD rate.
Experimental results and discussion
The effectiveness of the proposed algorithm is evaluated on the basis of the latest HM 16.16. The experimental hardware is an Intel Core i7-7700 CPU @ 3.60 GHz with 8.0 GB RAM. The test sequences include 4K, 6K, 8K, and 8K 10-bit sequences provided by JVET [22]. The test sequences used in this study are from GoPro [23], InterDigital [24], LetinVR [25], and Nokia [26], and they are recommended by the common test conditions (CTC) as test sequences for 360° video. All 360 videos with different resolutions are tested to validate the proposed algorithm. For each video sequence, four quantization parameter values are used: 22, 27, 32, and 37. We use the All Intra encoder configuration and the BD rate to measure the quality of the algorithm. The PSNR uses three parameters: WSPSNR, S_PSNR_NN, and PSNR. WSPSNR and S_PSNR_NN are two new quality evaluation standards proposed for 360 videos [4]; given the characteristics of a 360 video, its quality can be accurately measured using them. Time reduction is calculated by:
$$ \Delta T=\frac{T_{\mathrm{HM}16.16}-{T}_{\mathrm{proposed}}}{T_{\mathrm{HM}16.16}}\times 100\% $$
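As a worked example of this formula (with made-up timings, not measurements from the paper): if HM 16.16 takes 120 s on a sequence and the proposed encoder takes 56.4 s, the time reduction is 53%:

```python
def time_reduction(t_hm, t_proposed):
    """Percentage of encoding time saved relative to the HM 16.16 baseline."""
    return (t_hm - t_proposed) / t_hm * 100.0

print(round(time_reduction(120.0, 56.4), 1))  # 53.0
```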
The experimental results are shown in Tables 5, 6, and 7. The BD rate_Y1–Y3 in the table represents the values calculated using WSPSNR Y, S_PSNR_NN Y, and PSNR Y.
Table 5 Performance comparison between the original HM 16.16 and the proposed algorithms
Table 6 Performance comparison between the original HM 16.16 and the proposed mode algorithm
Table 7 Performance comparison between the original HM 16.16 and the proposed overall algorithm
Table 5 shows the experimental results using the proposed CU size decision algorithm based on texture complexity. Table 6 shows the experimental results using the proposed prediction mode decision algorithm based on texture direction. Table 7 shows the experimental results using the two proposed texture characteristic algorithms.
As shown in Table 5, the original HM 16.16 and the proposed CU size decision algorithm show different performances for different sequences, particularly with regard to improving encoding time. For the sequences with relatively simple textures, the proposed CU size decision algorithm performs excellently in shortening the encoding time, with the highest improvement reaching 44%. For several sequences with relatively complex textures, the improvement of encoding complexity is limited.
Table 6 shows the performance comparison between the proposed prediction mode decision algorithm and the original algorithm for different video sequences under different QPs. According to the simulation results, the proposed intra prediction mode decision algorithm can reduce computational complexity by 33% on average, with the BD rate slightly increasing. Therefore, the proposed intra prediction mode decision algorithm reasonably and effectively reduces the number of candidate modes for RMD and RDO processing.
Compared with HM 16.16 (Table 7), the proposed algorithm saves 53% of encoding time on average (up to 59%), with a BD rate loss of 1.3%. For the two proposed algorithms, we use the same texture complexity parameters mentioned in the CU size decision algorithm. As shown in Fig. 14, we select the 360 video sequences 4K-8 bit (DrivingInCountry), 6K-8 bit (Balboa), 8K-8 bit (KiteFlite), and 8K-10 bit (ChairliftRide) and compare the bitrates of the proposed algorithm and the original HM 16.16. The RD curves of the sequences are almost the same as those of the original encoder. Thus, the proposed algorithm offers advantages in terms of complexity and coding efficiency. The computational complexity is reduced while the BD rate increases insignificantly, at an average of 1.3% (WSPSNR Y, S_PSNR_NN Y, and PSNR Y).
Comparison of RD curves of different resolution sequences under different QPs (22, 27, 32, 37) between the original HM 16.16 and the proposed overall algorithm
In the latest research on 360 video coding, no similar intra algorithm has been proposed. To verify the performance of the proposed algorithm, we experimented with all 360 sequences. The proposed algorithm tremendously reduces the amount of calculation and the time complexity, with the reduction reaching up to 59%, while the BD rate increases by only 1.3%, which is negligible.
A fast intra prediction algorithm based on texture characteristics for 360 videos was proposed. Two metrics, namely, VMAD and HMAD, were used to measure the texture characteristics of a CU in the vertical and horizontal directions, respectively. A fast CU size decision algorithm based on texture complexity was proposed to reduce the computational complexity of RDO. According to the two metrics, a fast mode decision algorithm was also designed. This algorithm dramatically reduces the number of candidate modes from 35 to 7/11 in the RMD process and from 8 to 2 in the RDO process. The experimental results show that the proposed algorithm reduces encoding time by 53% on average (up to 59%) while incurring only a negligible loss in BD rate and video quality.
BD rate:
Bjontegaard delta rate
CMP:
Cubemap projection
CTU:
Coding tree unit
CU:
Coding unit
ERP:
Equirectangular projection
HEVC:
High Efficiency Video Coding
HM:
HEVC test model
HMAD:
Horizontal mean of the absolute difference
JVET:
Joint Video Exploration Team
MAD:
Mean of the absolute difference
MPEG:
Moving Picture Experts Group
MPMs:
Most probable modes
OHP:
Octahedron projection
PSNR:
Peak signal-to-noise ratio
PU:
Prediction unit
QP:
Quantization parameters
RD cost:
Rate distortion cost
RD:
Rate distortion
RDO:
Rate distortion optimization
RMD:
Rough mode decision
RQT:
Residual quad-tree
RSP:
Rotated sphere projection
S_PSNR_NN:
Spherical PSNR without interpolation
SSP:
Segmented sphere projection
TSP:
Truncated square pyramid projection
TU:
Transform unit
VCEG:
Video Coding Experts Group
VMAD:
Vertical mean of the absolute difference
WSPSNR:
PSNR weighted by sample area
Yuwen He, Xiaoyu Xiu, Yan Ye et al. "360Lib Software Manual", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET 360Lib Software Manual, 2017
Y. Lu, Jisheng Li, Ziyu Wen, Xianyu Meng, "AHG8: Padding Method for Segmented Sphere Projection", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 7th Meeting: Torino, IT, 13 July-21 (2017)
G.J. Sullivan, J.R. Ohm, W.J. Han, T. Wiegand, Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 22(12), 1649–1668 (2012)
Y. Ye, E. Alshina, J. Boyce, "Algorithm Descriptions of Projection Format Conversion and Video Quality Metrics in 360Lib", Joint Video Exploration Team of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-G1003, 7th Meeting (2017)
M. Tang, Z. Yu, J. Wen, S. Yang, "Optimized Video Coding for Omnidirectional Videos", IEEE International Conference on Multimedia and Expo (ICME) (2017), pp. 799–804
W.-T. Lee, H.-I. Chen, M.-S. Chen, et al., High-resolution 360 video Foveated stitching for real-time VR. Computer Graphics Forum 36(7), 115–123 (2017)
N. Kim, J.-W. Kan, Bi-directional deformable block-based motion estimation for frame rate-up conversion of 360-degree videos. Electron. Lett. 53(17), 1192–1194 (2017)
Y. Li, J. Xu, Z. Chen, "Spherical Domain Rate-Distortion Optimization for 360-Degree Video Coding", IEEE International Conference on Multimedia and Expo (ICME) (2017), pp. 709–714
H. Bai, C. Zhu, Y. Zhao, Optimized multiple description lattice vector quantization for wavelet image coding. IEEE Trans. Circuits Syst. Video Technol 17(7), 912–917 (2017)
H. Bai, W. Lin, M. Zhang, A. Wang, Y. Zhao, Multiple description video coding based on human visual system characteristics. IEEE Trans. Circuits Syst. Video Technol 24(8), 1390–1394 (2014)
D.G. Fernández, A.A. Del Barrio, G. Botella, C. García, "Fast CU Size Decision Based on Temporal Homogeneity Detection", Conference on Design of Circuits and Integrated Systems (DCIS) (2016), pp. 1–6
M. Zhang, S. Dou, Z. Liu, "Early CU Size Determination Based on Image Complexity in HEVC", 2017 Data Compression Conference (DCC) (2017), pp. 474–474
Y. Yao, Y.L. Xiaojuan Li, Fast intra mode decision algorithm for HEVC based on dominant edge assent distribution. Multimed. Tools Appl. 75, 1963–1981 (2016)
Y. Huan, Huabiao Qin, The Optimization of HEVC Intra Prediction Mode Selection, 2017 4th International Conference on Information Science and Control Engineering (ICISCE) (2017), pp. 1743–1748
L. Shen, Z. Zhang, P. An, Fast CU size decision and mode decision algorithm for HEVC intra coding. IEEE Trans. Consum. Electron. 59, 207–213 (2013)
X. Shang, G. Wang, T. Fan, Y. Li, "Fast CU Size Decision and PU Mode Decision Algorithm in HEVC Intra Coding", IEEE International Conference on Image Processing (ICIP) (2015), pp. 1593–1597
M. Zhang, X. Zhai, Z. Liu, Fast and adaptive mode decision and CU partition early termination algorithm for intra-prediction in HEVC. EURASIP Journal on Image and Video Processing 1, 86–97 (2017)
Y.C. Ting, T.S. Chang, "Gradient-Based PU Size Selection for HEVC Intra Prediction", IEEE International Symposium on Circuits and Systems (ISCAS) (2014), pp. 1929–1932
H. Huang, F. Wei, "Fast Algorithm Based on Edge Density and Gradient Angle for Intra Encoding in HEVC", IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC) (2016), pp. 347–351
J.M. Ha, J.H. Bae, M.H. Sunwoo, "Texture-Based Fast CU Size Decision Algorithm for HEVC Intra Coding", 2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) (2016), pp. 702–705
J. Boyce, E. Alshina, A. Abbas, "JVET Common Test Conditions and Evaluation Procedures for 360° Video", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-H1030, 8th Meeting (2017)
Test sequences are available on ftp://[email protected] and ftp://[email protected] in the /testsequences/testset360 directory. Accredited members of VCEG and MPEG may contact the JVET chairs for login information
A. Abbas, B. Adsumilli, "New GoPro Test Sequences for Virtual Reality Video Coding", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-D0026, 4th Meeting (2016)
E. Asbun, Y. He, Y. He, Y. Ye, "AHG8: InterDigital Test Sequences for Virtual Reality Video Coding", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-D0039, 4th Meeting (2016)
R. Guo, W. Sun, "Test Sequences for Virtual Reality Video Coding from LetinVR", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-G0053, 7th Meeting (2017)
S. Schwarz, A. Aminlou, I.D.D. Curcio, M.M. Hannuksela, "Tampere Pole Vaulting Sequence for Virtual Reality Video Coding", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-D0143, 4th Meeting (Oct. 2016)
This work is supported by the National Natural Science Foundation of China (No.61370111), Beijing Municipal Natural Science Foundation (No.4172020), Great Wall Scholar Project of Beijing Municipal Education Commission (CIT&TCD20180304), Beijing Youth Talent Project (CIT&TCD 201504001), and Beijing Municipal Education Commission General Program (KM201610009003).
The conclusion and comparison data of this article are included within the article.
North China University of Technology, Beijing, 100144, China
Mengmeng Zhang, Xiaosha Dong, Zhi Liu & Fuqi Mao
China University of Geosciences, Beijing, 100083, China
Fuqi Mao & Wen Yue
MZ proposed the framework of this work, and XD carried out the whole experiments and drafted the manuscript. ZL offered useful suggestions and helped to modify the manuscript. All authors read and approved the final manuscript.
Correspondence to Mengmeng Zhang or Zhi Liu.
MMZ: Doctor of Engineering, professor, master instructor, master of Communication and Information Systems. His major research interests include the video codec, embedded systems, image processing, and pattern recognition. He has authored or co-authored more than 40 refereed technical papers in international journals and conferences in the field of video coding, image processing, and pattern recognition. He holds 21 national patents and 2 monographs in the areas of image/video coding and communications.
XSD: Studying master of North China University of Technology. Her major research is HEVC.
ZL: Doctor of Engineering, master instructor. He received the B.S. degree in electronic information technology and the Ph.D. in signal and information processing from Beijing Jiaotong University, China in 2001 and 2011 respectively. Currently, he is a lecturer in North China University of Technology. His major research interests include the video codec, pattern recognition, and self-organizing network.
FQM: Master of Engineering. He received his B.S. degree in computer science and technology and M.S. degree in computer application technology from North China University of Technology. Currently, he is a research assistant at North China University of Technology.
WY: Doctor of Engineering, professor, doctoral supervisor. His major research interests include mechanical tribology and surface technology, exploration technology and geological drilling, diamond and other superhard materials.
Zhang, M., Dong, X., Liu, Z. et al. Fast intra algorithm based on texture characteristics for 360 videos. J Image Video Proc. 2019, 53 (2019) doi:10.1186/s13640-019-0446-3
Texture characteristics
Intra prediction
Mode decision | CommonCrawl |
MS2CNN: predicting MS/MS spectrum based on protein sequence using deep convolutional neural networks
Volume 20 Supplement 9
18th International Conference on Bioinformatics
Yang-Ming Lin1 na1,
Ching-Tai Chen2 na1 &
Jia-Ming Chang ORCID: orcid.org/0000-0002-6711-17391
Tandem mass spectrometry allows biologists to identify and quantify protein samples in the form of digested peptide sequences. When performing peptide identification, spectral library search is more sensitive than traditional database search but is limited to peptides that have been previously identified. An accurate tandem mass spectrum prediction tool is thus crucial in expanding the peptide space and increasing the coverage of spectral library search.
We propose MS2CNN, a non-linear regression model based on deep convolutional neural networks, a deep learning algorithm. The features for our model are amino acid composition, predicted secondary structure, and physical-chemical features such as isoelectric point, aromaticity, helicity, hydrophobicity, and basicity. MS2CNN was trained with five-fold cross validation on a three-way data split on the large-scale human HCD MS2 dataset of Orbitrap LC-MS/MS downloaded from the National Institute of Standards and Technology. It was then evaluated on a publicly available independent test dataset of human HeLa cell lysate from LC-MS experiments. On average, our model shows better cosine similarity and Pearson correlation coefficient (0.690 and 0.632) than MS2PIP (0.647 and 0.601) and is comparable with pDeep (0.692 and 0.642). Notably, for the more complex MS2 spectra of 3+ peptides, MS2CNN is significantly better than both MS2PIP and pDeep.
We showed that MS2CNN outperforms MS2PIP for 2+ and 3+ peptides and pDeep for 3+ peptides. This implies that MS2CNN, the proposed convolutional neural network model, generates highly accurate MS2 spectra for LC-MS/MS experiments using Orbitrap machines, which can be of great help in protein and peptide identification. The results suggest that incorporating more data into the deep learning model may improve performance.
Tandem mass spectrometry (MS2) has emerged as an indispensable technology in high-throughput proteomics experiments [1]. Tandem mass spectra generated from bottom-up proteomics consist of mass-to-charge ratios and relative abundances of a set of fragment ions generated from digested peptides. The patterns of these fragment ions are useful for the identification and quantification of proteomes in the sample.
There are two common approaches for protein identification: database search and spectral library search. The former searches each tandem mass spectrum (or MS2 spectrum) acquired from experiments against theoretical spectra generated from all possible digested peptides (with trypsin in most cases) in the human proteome using a scoring function. The latter searches an MS2 spectrum against a spectral library, a collection of high-quality spectra of all identified peptides from previous experiments [2]. Although database search is more comprehensive and covers all possible peptide space, the sensitivity is lower because of the absence of intensity for each fragment ion in theoretical spectra. In contrast, spectral library search provides considerably higher sensitivity since a spectral library consists of realistic fragment ion intensities [3]. However, spectral library search is limited to peptides that have been previously identified, which hinders its application in areas where the discovery of novel peptides is of importance, such as the identification of peptides with mutations or peptides from isoforms of proteins. To take this into account, it is necessary to develop methods for computational prediction or simulation of MS2 spectra from amino acid sequences to expand the size of a spectral library.
There are several different strategies for predicting the MS2 spectrum of a peptide. MassAnalyzer, a pioneering work in the computational prediction of MS2 spectra, uses a kinetic model on the basis of the mobile proton hypothesis to simulate peptide fragmentation [4, 5]. A semi-empirical approach is to predict the MS2 spectrum of a peptide from the spectra of similar peptides by peak perturbation [6]. The approach is based on the observation that peptides of similar sequences produce similar fragmentation patterns in most cases. The concept is then generalized to a weighted K-nearest neighbor (KNN) approach in which a machine learning model first selects peptides that are likely to have high spectral similarity to the target peptide, and then a consensus algorithm combines their spectra to predict the MS2 spectrum of the target peptide [7]. Though the two approaches can yield good prediction accuracy for target peptides with similar amino acid sequence neighbors, they are not designed to predict the MS2 spectrum for arbitrary peptides of interest. For better predictive capability, other methods simplify the model by focusing on the prediction of y-ion intensities only [8,9,10]. Although they achieve some success, the applicability of these methods is somewhat restricted.
PeptideART, a data-driven approach based on feed-forward neural networks, is trained with more than 40,000 peptide spectrum matches (PSMs) [11]. In benchmark tests on five different data sets for MS2 spectrum prediction, PeptideART compares favorably to MassAnalyzer. MS2PIP [12], a later random forest approach, incorporates different predictive models for different peptide lengths (8 to 28 amino acids) and different charge states (charge 2+ and 3+). These models are trained with more than 73,000 PSMs; the overall performance is reported to be better than PeptideART. A web server version of MS2PIP has been constructed with a new computational model and a much larger training data set of more than 170,000 PSMs [13]. More recently, a deep neural network-based method called pDeep has been developed [14]. The method is based on a bidirectional long short-term memory (BiLSTM) model and is trained with a data set of around 4,000,000 MS2 spectra. Notably, for the same peptide sequence, it predicts MS2 spectra of three different fragmentation approaches: HCD (higher-energy collisional dissociation), ETD (electron-transfer dissociation), and EThcD (electron-transfer/higher-energy collision dissociation). According to the reported benchmark experiment, pDeep yields considerable improvements over MassAnalyzer and MS-Simulator.
In this study, we propose MS2CNN, a deep convolutional neural network (DCNN) method for MS2 spectrum prediction, given experimental spectra large enough to effectively train a sophisticated deep learning model. We adopt the network structure of LeNet-5 [15], a DCNN consisting of three major components: a convolutional layer, a pooling layer, and a fully connected layer. A single DCNN is constructed to predict peptides of a specific length and charge. The entire training set was composed of high-quality human HCD MS2 spectra from an Orbitrap LC-MS/MS experiment downloaded from the National Institute of Standards and Technology (NIST), consisting of 320,824 unique peptide sequences and 1,127,971 spectra. Five-fold cross validation was performed and the method was then benchmarked against MS2PIP and pDeep on a publicly available independent test dataset of human HeLa cell lysate from LC-MS/MS experiments. MS2CNN achieved a cosine similarity (COS) in the range of 0.57–0.79 and 0.59–0.74 for peptides of charge 2+ and charge 3+, respectively. These results suggest that MS2CNN significantly outperforms MS2PIP, especially for shorter peptide sequences for which abundant training data is available. It is also shown that MS2CNN has an overall performance comparable to pDeep; however, the former predicts MS2 spectra for charge 3+ peptides, which are usually considered more complicated than the spectra for 2+ peptides, at a higher accuracy.
Five-fold cross validation for determining convolutional layer
Because there are significantly more charge 2+ than charge 3+ peptide sequences, the best layer number of MS2CNN was determined on charge 2+ data, after which the value was directly applied to charge 3+. Based on the results of one fold of the five-fold validation, we chose the 4-layer model as the default structure of MS2CNN because it yielded the best performance and is the most efficient of all the models (Additional file 1: Table S4). Although the 5-layer model is comparable to the 4-layer model for some peptide lengths, we did not consider it as its performance fluctuates considerably for peptides of different lengths and it also requires longer training times.
Figure 1 shows the five-fold cross validation performance evaluated with COS for different peptide lengths and charge states (other detailed metrics are given in Additional file 1: Table S5). The figure shows that the predictive capability decreases as the peptide length gets larger, possibly due to less training data for longer peptides. We further investigated whether there is a benefit to merging charge 2+ and 3+ training data to build up a single model as MS2CNN_mix instead of having the two MS2CNN 2+ and MS2CNN 3+ models for charge 2+ and charge 3+ peptides, respectively. We followed the previous training procedure with an additional input feature-engineered procedure based on the merged data set of charge 2+ or 3+ peptides. The performance in general falls between the performance of charge 2+ and charge 3+ (Fig. 1, gray bar). This shows that although a larger data set boosts performance (improves MS2CNN 3+ performance), different charge states also contain specific patterns in terms of spectrum prediction (impairing MS2CNN 2+ performance).
Bar chart of MS2CNN COS on charge 2+ (blue), 3+ (orange), and mix (gray) models. Blue and orange dashed lines indicate the peptide number of charge 2+ and 3+ data sets, respectively
Upper bound analysis
Peptide fragmentation is a random process; for example, even the same peptide in the same experiment can sometimes result in different peak intensities in spectra. When combining different ionization sources, ion detection, experimental steps, and even different species, the spectrum of the same peptide can be significantly different. Therefore, we compare the similarity between the training spectra and independent spectra for the same peptide sequence (Table 1). Ideally, the similarity in terms of COS or PCC should be 1 if the experimental conditions and the random processes for generating the two spectra are perfectly identical. In reality, the similarity can be seen as the Bayes rate, the theoretical upper bound on prediction accuracy due to unexplainable variance. To conclude, the average upper bound COS for different peptide lengths ranges from 0.600 to 0.800 and decreases as peptide length increases. The average upper bound PCC for different peptide lengths is even lower, ranging from 0.550 to 0.760. Peptide length seems to have a smaller effect on PCC than on COS, especially for peptides of charge 3+.
Table 1 Average cosine similarity (COS) and Pearson correlation coefficient (PCC) of spectra from the same peptide in training and independent test sets with charge 2+ and charge 3+
Independent test set evaluation
We compared the proposed MS2CNN and MS2CNN_mix models with MS2PIP and pDeep based on the independent test set in terms of COS and PCC (Figs. 2 and 3, detailed values in Additional file 1: Table S6). In general, MS2CNN and MS2CNN_mix outperform MS2PIP for charge 2+ (Fig. 2) and charge 3+ (Fig. 3) peptides in both metrics significantly with a p-value < 0.01 by a Wilcoxon signed-rank test (Additional file 2: R Script). For charge 2+ peptides, MS2CNN outperforms pDeep marginally for peptide lengths no greater than 11, whereas for peptide lengths from 12 to 19, pDeep considerably outperforms the other methods for both COS and PCC (Fig. 2). In contrast, for charge 3+ peptides, MS2CNN and MS2CNN_mix yield higher COS and PCC than pDeep for all peptide lengths significantly with a p-value < 0.01 by the Wilcoxon signed-rank test (Fig. 3). This suggests that pDeep might be more sensitive to the size of training data, as the number of spectra for charge 3+ peptides is significantly smaller than that of the charge 2+ peptides. Note that pDeep was trained with HCD mouse spectra. Although they show a high MS/MS spectra similarity (a median PCC of 0.94) across different species, a minority of peptides which share low similarity across species can nevertheless deteriorate prediction performance.
a COS (cosine similarity) and b PCC (Pearson's correlation coefficient) of MS2CNN 2+ (blue bar), MS2CNN_mix (blue bar with white dots), MS2PIP (white bar with blue dashes), and pDeep (black bar) on the charge 2+ peptides from the independent test set
a COS and b PCC of MS2CNN 3+ (blue bar), MS2CNN_mix (blue bar with white dots), MS2PIP (white bar with blue dashes), and pDeep (black bar) on the charge 3+ peptides from the independent test set
Note that the performance for charge 3+ peptides at lengths of 17, 18, and 19 is better than that for charge 2+ peptides in terms of both COS and PCC. This may be due to the richer training data set and higher theoretical prediction upper bound in those ranges. The advantage of MS2CNN_mix can be seen in the prediction results for charge 3+ (Fig. 3), for which the size of the training data set greatly increases. This benefit becomes insignificant for charge 2+ peptides, as the original training data set is much larger: the improvement is not affected by the theoretical prediction upper bound. Taking charge 3+ peptide lengths of 11 and 12 as an example (Fig. 3 b), there is more improvement at length 12 (MS2CNN_mix vs MS2PIP) but a higher upper bound at length 11 than at length 12 (0.721 vs 0.682, Table 1, charge 3+ PCC).
Table 2 Features used to encode a peptide sequence and its fragment ion sequences
Discussion and conclusion
Peptide identification is an important issue in mass spectrometry-based proteomics. There are two major approaches for peptide identification: database search and spectral library search. Spectral library search boasts a greater sensitivity than database search, but is limited to peptides that have been previously identified. Overcoming this limitation calls for an accurate MS2 spectrum prediction tool that is capable of reproducing the chemical fragmentation pattern of a peptide sequence. Over the years, a large number of high quality MS2 spectra have been generated and made publicly available by experimentalists, making for an excellent opportunity for researchers to effectively train modern machine learning models such as deep convolutional neural networks for MS2 spectra prediction.
We devise MS2CNN, a deep convolutional neural network model for the prediction of peak intensities of MS2 spectra. In addition to the DCNN itself, we incorporate different Python libraries for feature engineering to facilitate the training process. According to our independent test set of HCD spectra of human samples from Orbitrap LC-MS experiments, MS2CNN shows superior prediction performance compared to MS2PIP for charge 2+ and 3+ peptides in terms of COS. It also outperforms pDeep, another deep learning approach, for charge 3+ peptides. In the future, we plan to improve the predictive power of our model by either including more data for longer peptide sequences or employing another popular approach in deep learning such as transfer learning, in which a pretrained model is reused for another task: for example, a model trained on short peptides could be reused for a long-peptide task. In light of our results, we believe MS2CNN can be of great use in expanding the coverage of a spectral library and improving the identification accuracy of spectral library search in the analysis of proteomics samples.
To apply a deep learning method to our dataset, each peptide sequence must be converted into a feature vector with a label. Table 2 lists the features we use to characterize a peptide sequence. These features include peptide composition (similar to amino acid composition), mass-to-charge ratio (m/z), and peptide physical-chemical properties such as isoelectric point, instability index, aromaticity, secondary structure fraction, helicity, hydrophobicity, and basicity. The m/z and physical-chemical features of not only the peptide sequence but all the possible b and y fragment ions are also included in the feature vector. Take for example the peptide sequence AAAAAAAAGAFAGR (length = 14): its m/z is 577.80, the amino acid composition is {A: 10, C: 0, D: 0, E: 0, F: 1, G: 2, H: 0, I: 0, K: 0, L: 0, M: 0, N: 0, P: 0, Q: 0, R: 1, S: 0, T: 0, V: 0, W: 0, Y: 0}, and the physical-chemical properties {isoelectric point, instability index, aromaticity, helicity, hydrophobicity, basicity, secondary structure fraction} are {9.80, 3.22, 0.07, − 0.21, 1.21, 208.46, (0.071, 0.14, 0.71)}. In addition, the m/z and physical-chemical properties of all the 26 (=2*(14–1)) fragment ions are included in the feature vector. The total number of features for a peptide sequence is 290 (=1 + 20 + 9 + 26*1 + 26*9). We used Pyteomics v3.4.2 [16] to compute the mass-to-charge ratio and Biopython v1.7 [17] to calculate the amino acid composition, instability index, isoelectric point, and secondary structure fraction.
MS2CNN model
We propose MS2CNN, a DCNN model that uses the aforementioned features (Fig. 4). The MS2CNN model takes a peptide feature vector as input and computes an ensemble of nonlinear function nodes in which each layer consists of a number of nodes. The predicted peak intensity corresponds to an output node of the MS2CNN model.
MS2CNN model architecture
In the proposed model, a convolution layer is activated by the ReLU activation function. A max-pooling layer is added after each convolution layer: together they constitute one convolution-pooling layer. The convolution-pooling layer is repeated n times in MS2CNN, where n ranges from 2 to 7. The best number was determined by a cross validation experiment. We unify the node number of the convolutional layers as 10; the node number for the last convolutional layer depends on the layer depth. Additional file 1: Table S1 lists the detailed configurations for convolutional layers from layers 2 to 7. The repeated convolution-pooling layers are followed by another layer to flatten the output. Then we add a fully connected layer with twice as many nodes as the number of output nodes. We implemented the MS2CNN architecture and executed the whole training process using the Keras Python package version 2.0.4 [18]. Figure 4 illustrates the MS2CNN model structure.
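As a concrete illustration of one convolution-pooling stage, the NumPy sketch below implements a valid 1-D convolution with ReLU followed by non-overlapping max pooling. The filter width, the random input, and the function names are our own illustrative assumptions; the actual model was built with Keras v2.0.4 as described above.

```python
# Minimal forward pass of one convolution + max-pooling stage, as repeated
# n times in MS2CNN. Shapes only; weights here are random, not trained.
import numpy as np

def conv1d_relu(x, kernels):
    """Valid 1-D convolution of x (length L) with each kernel row, then ReLU."""
    k = kernels.shape[1]
    L = x.shape[0] - k + 1
    out = np.array([[np.dot(x[i:i + k], w) for i in range(L)] for w in kernels])
    return np.maximum(out, 0.0)                      # ReLU activation

def max_pool1d(x, size=2):
    """Non-overlapping max pooling along the last axis."""
    L = (x.shape[1] // size) * size
    return x[:, :L].reshape(x.shape[0], -1, size).max(axis=2)

rng = np.random.default_rng(0)
features = rng.normal(size=290)     # one 290-dimensional peptide feature vector
kernels = rng.normal(size=(10, 3))  # 10 filters of (assumed) width 3
h = max_pool1d(conv1d_relu(features, kernels))
print(h.shape)                      # (10, 144): 10 channels, pooled length 144
```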
Training data set
We downloaded the training set – a human HCD library based on an Orbitrap mass analyzer and LC-MS (Liquid chromatography–mass spectrometry) – from the NIST website. This set is based on CPTAC and ProteomeXchange, two public repositories containing 1,127,971 spectra from 320,824 unique peptide sequences in .msp format. The dataset consists of peptides with charge states ranging from 1+ to 9+, among which only charge states of 2+ and 3+ were selected as there was not enough data for the other charges to effectively train a machine learning model. This strategy is consistent with previous studies.
De-duplicated spectrum
It is common for different spectra to belong to the same peptide sequence, and for charge states to have different peak intensities for their fragment ions. We performed a two-step process to generate a de-duplicated spectrum from a set of spectra for a given peptide. First, each peak in a spectrum was normalized by the maximum peak intensity of the spectrum. Then, the intensity of each b- and y-ion was determined by the median intensity of the ion across different spectra. This yielded a consensus spectrum which filters out noise that could degrade DCNN training. Additional file 1: Table S2 summarizes the number of spectra after deduplication. For effective training of a complex DCNN model, the number of peptides should exceed 5000 after deduplication. Based on this criterion, we focused on peptides of lengths 9 to 19 and eliminated the rest. This resulted in 166,371 charge 2+ peptides (70.4% of the 2+ peptides from NIST) and 98,364 charge 3+ peptides (69.6% of the 3+ peptides from NIST).
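The two-step de-duplication procedure can be sketched in NumPy as follows (toy intensities; variable names are ours):

```python
# Step 1: normalize each spectrum by its maximum peak intensity.
# Step 2: take the per-ion median across the replicate spectra.
import numpy as np

def consensus_spectrum(spectra):
    """spectra: (n_spectra, n_ions) array of raw b-/y-ion intensities."""
    normalized = spectra / spectra.max(axis=1, keepdims=True)   # step 1
    return np.median(normalized, axis=0)                        # step 2

spectra = np.array([[100.0, 50.0, 10.0],
                    [200.0, 80.0, 40.0],
                    [ 50.0, 30.0,  5.0]])
print(consensus_spectrum(spectra))   # [1.  0.5 0.1]
```

The median makes the consensus robust to outlier replicates, which is the noise-filtering effect mentioned above.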
Independent test set
We used the data-dependent acquisition data of Orbitrap LC-MS experiments from [19] as an independent test set. This included 22,890 and 5998 spectra for charge 2+ and 3+ peptides, respectively. The proportion of common peptides in our training set and independent test set exceeded 90%. Although these peptides were viewed as easier prediction targets, the performance is still bounded by the theoretical upper bound; for example, the upper bound of COS for charge 2+ and charge 3+ peptides ranges from 0.636 to 0.800 and from 0.617 to 0.781, respectively (detailed numbers shown in Table 1). The numbers of commonly observed peptides for different lengths are summarized in Additional file 1: Table S3.
K-fold cross validation
To select the best parameters (i.e., layer numbers) for the MS2CNN model and to prevent overfitting, we applied five-fold cross validation with a three-way data split, namely, the entire data set was partitioned into training, validation (10% of training data), and test sets. Training epochs continued as long as the accuracy of the validation set improved over the previous epoch by 0.001; otherwise, training was terminated. The final model was selected based on validation performance, and was used to predict the test set for performance evaluation. Since our model was selected based on validation set performance, there was no data leakage problem, in which information in the test data is involved in model selection. This problem can result in over-estimation of the performance and unfair comparison with other methods.
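A minimal sketch of the three-way split and of the early-stopping rule (stop once the per-epoch validation improvement drops below 0.001) might look as follows. The data sizes, the accuracy history, and the function names are synthetic illustrations, not the original training code.

```python
# Three-way split for one fold of five-fold CV, plus the early-stopping rule.
import numpy as np

def three_way_split(n, fold, n_folds=5, val_frac=0.10, seed=0):
    """Return train/validation/test index arrays for one CV fold."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, n_folds)
    test = folds[fold]
    rest = np.concatenate([f for i, f in enumerate(folds) if i != fold])
    n_val = int(len(rest) * val_frac)            # 10% of training data
    return rest[n_val:], rest[:n_val], test

def epochs_until_stop(val_acc, min_delta=0.001):
    """Stop at the first epoch whose improvement falls below min_delta."""
    for epoch in range(1, len(val_acc)):
        if val_acc[epoch] - val_acc[epoch - 1] < min_delta:
            return epoch
    return len(val_acc)

train, val, test = three_way_split(100, fold=0)
print(len(train), len(val), len(test))                 # 72 8 20
print(epochs_until_stop([0.50, 0.60, 0.65, 0.6505]))   # 3
```

Because the test fold never touches model selection, this layout avoids the data-leakage problem described above.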
Two metrics are used: Cosine similarity (COS) and Pearson correlation coefficient (PCC). COS is one of the most widely used spectrum similarity measures for mass spectrometry. It measures the similarity between two non-zero vectors by calculating the angle between them (Eq. 1, calculated by the Python scikit-learn package [20]). COS ranges from − 1 to + 1 (angle from 180° to 0°).
$$ \cos \left(X,Y\right)=\frac{X{Y}^T}{\left\| X\right\| \left\| Y\right\| } \qquad (1) $$
The PCC measures the linear correlation between two variables X and Y (Eq. 2, calculated by the Python Scipy package [21]). It ranges from 1 to − 1, where 1 denotes a completely positive correlation, − 1 a completely negative correlation, and 0 a random correlation or two variables that have no association.
$$ \rho_{XY}=\frac{\operatorname{cov}\left(X,Y\right)}{\sigma_X\,\sigma_Y} \qquad (2) $$
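Both metrics are straightforward to compute. The NumPy sketch below mirrors Eqs. 1 and 2 (the paper used scikit-learn and SciPy, respectively); function names are ours.

```python
# COS and PCC between two intensity vectors, as in Eqs. 1 and 2.
import numpy as np

def cos_sim(x, y):
    """Cosine of the angle between x and y."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def pcc(x, y):
    """Pearson correlation: cosine similarity of the mean-centered vectors."""
    xc, yc = x - x.mean(), y - y.mean()
    return np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

x = np.array([1.0, 2.0, 3.0])
print(round(cos_sim(x, 2 * x), 6))                   # 1.0 (same direction)
print(round(pcc(x, np.array([2.0, 4.0, 6.0])), 6))   # 1.0 (perfect linear fit)
print(round(pcc(x, -x), 6))                          # -1.0
```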
Evaluation methods
MS2PIP
Recently, MS2PIP released a new prediction model using XGBoost [22]; the previous random-forest model [13] was not available. Thus, we used the latest MS2PIP model for benchmark comparison. The local standalone version (Python code downloaded from [23]) was used instead of the online server as the latter is subject to a maximum number of 5000 peptides per query.
We used the default settings of MS2PIP according to the Github config file, other than changing frag_method from HCD to HCDch2. In addition, the MGF function was enabled to generate intensities without log2 transformation. To ensure a fair comparison, we processed the test data using the same peak normalization procedure used to process our training data.
pDeep
First, we converted a peptide to a 2D array using the pDeep API. Then, we loaded the pDeep model (.h5 format), which we used to predict the intensities of the peptide [14]. Although the pDeep documentation states "If the precursor charge state is <= 2, 2+ ions should be ignored", to ensure a fair and complete charge 2+ peptide comparison, we set the intensity of the testing 2+ peak to zero as if it were missing in pDeep prediction. pDeep provided three trained models – BiLSTM, ProteomeTools-ETD, and ProteomeTools-EThcD – of which the BiLSTM model was used for comparison as it performed the best in both COS and PCC metrics (Additional file 1: Table S6).
Our source code for the whole experiments, including preprocessing, feature engineering, and MS2CNN, is publicly available at https://github.com/changlabtw/MS2CNN.
The materials generated and analyzed during the current study are available at
○ Training data https://figshare.com/s/52fcaade571a7d60241c.
○ Independent test data https://figshare.com/s/91e99d2c9f7ce03f1210.
COS: Cosine similarity
DCNN: Deep convolutional neural network
KNN: K-nearest neighbor
m/z: mass-to-charge
MS2: Tandem mass spectrometry
PCC: Pearson correlation coefficient
Aebersold R, Mann M. Mass spectrometry-based proteomics. Nature. 2003;422(6928):198–207.
Lam H, Deutsch E, Eddes J, Eng J, King N, Stein S, et al. Development and validation of a spectral library searching method for peptide identification from MS/MS. PROTEOMICS. 2007;7(5):655–67.
Zhang X, Li Y, Shao W, Lam H. Understanding the improved sensitivity of spectral library searching over sequence database searching in proteomics data analysis. Proteomics. 2011;11(6):1075–85.
Zhang Z. Prediction of low-energy collision-induced dissociation spectra of peptides. Anal Chem. 2004;76(14):3908–22.
Zhang Z. Prediction of low-energy collision-induced dissociation spectra of peptides with three or more charges. Anal Chem. 2005;77(19):6364–73.
Hu Y, Li Y, Lam H. A semi-empirical approach for predicting unobserved peptide MS/MS spectra from spectral libraries. Proteomics. 2011;11(24):4702–11.
Ji C, Arnold RJ, Sokoloski KJ, Hardy RW, Tang H, Radivojac P. Extending the coverage of spectral libraries: a neighbor-based approach to predicting intensities of peptide fragmentation spectra. Proteomics. 2013;13(5):756–65.
Zhou C, Bowler LD, Feng J. A machine learning approach to explore the spectra intensity pattern of peptides using tandem mass spectrometry data. BMC Bioinformatics. 2008;9:325.
Sun S, Yang F, Yang Q, Zhang H, Wang Y, Bu D, et al. MS-simulator: predicting y-ion intensities for peptides with two charges based on the intensity ratio of neighboring ions. J Proteome Res. 2012;11(9):4509–16.
Wang Y, Yang F, Wu P, Bu D, Sun S. OpenMS-simulator: an open-source software for theoretical tandem mass spectrum prediction. BMC Bioinformatics. 2015;16:110.
Li S, Arnold RJ, Tang H, Radivojac P. On the accuracy and limits of peptide fragmentation spectrum prediction. Anal Chem. 2011;83(3):790–6.
Degroeve S, Martens L. MS2PIP: a tool for MS/MS peak intensity prediction. Bioinformatics. 2013;29(24):3199–203.
Degroeve S, Maddelein D, Martens L. MS2PIP prediction server: compute and visualize MS2 peak intensity predictions for CID and HCD fragmentation. Nucleic Acids Res. 2015;43(W1):W326–30.
Zhou XX, Zeng WF, Chi H, Luo C, Liu C, Zhan J, et al. pDeep: Predicting MS/MS Spectra of Peptides with Deep Learning. Anal Chem. 2017;89(23):12690–7.
Lecun Y, et al. Gradient-based learning applied to document recognition. Proc IEEE. 1998 Nov;86:2278–324.
Goloborodko AA, Levitsky LI, Ivanov MV, Gorshkov MV. Pyteomics--a Python framework for exploratory data analysis and rapid software prototyping in proteomics. J Am Soc Mass Spectrom. 2013;24(2):301–4.
Cock PJ, Antao T, Chang JT, Chapman BA, Cox CJ, Dalke A, et al. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics. 2009;25(11):1422–3.
Chollet F. Keras. https://github.com/fchollet/keras Date Accessed at 2017/09/01.
Tsou CC, Tsai CF, Teo GC, Chen YJ, Nesvizhskii AI. Untargeted, spectral library-free analysis of data-independent acquisition proteomics data generated using Orbitrap mass spectrometers. Proteomics. 2016;16(15–16):2257–71.
Pedregosa F, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825–30.
van der Walt S, et al. The NumPy Array: a structure for efficient numerical computation. Comput Sci Eng. 2011;13:22–30.
Gabriels R, Martens L, Degroeve S. Updated MS2PIP web server delivers fast and accurate MS2 peak intensity prediction for multiple fragmentation methods, instruments and labeling techniques. Nucleic Acids Res. 2019;47(W1):W295–9.
https://github.com/sdgroeve/ms2pip_c Date accessed at 2018/03/13.
We would like to acknowledge Dr. Ting-Yi Sung and Dr. Aaron Heidel for their assistance in polishing the manuscript.
About this supplement
This article has been published as part of BMC Genomics, Volume 20 Supplement 9, 2019: 18th International Conference on Bioinformatics. The full contents of the supplement are available at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-20-supplement-9.
Publication of this supplement was funded by the Taiwan Ministry of Science and Technology [106–2221-E-004-011-MY2 to J.-M.C. and 108-2628-E-004-001-MY3 to Y.-M.L.] and "The Human Project from Mind, Brain and Learning" of NCCU from the Higher Education Sprout Project by the Ministry of Education in Taiwan. We are grateful to the National Center for High-performance Computing for computer time and facilities.
Yang-Ming Lin and Ching-Tai Chen contributed equally to this work.
Department of Computer Science, National Chengchi University, 11605, Taipei City, Taiwan
Yang-Ming Lin & Jia-Ming Chang
Institute of Information Science, Academia Sinica, 115, Taipei City, Taiwan
Ching-Tai Chen
YML, CTC, and JMC conceived the study. CTC collected the data sets. YML implemented the method and conducted the experiments. All authors wrote the paper and approved the final version of the manuscript.
Correspondence to Jia-Ming Chang.
Excel file: Supplementary tables S1–S6 for additional results.
R Script: Wilcoxon signed-rank test for independent test set evaluation.
Lin, YM., Chen, CT. & Chang, JM. MS2CNN: predicting MS/MS spectrum based on protein sequence using deep convolutional neural networks. BMC Genomics 20 (Suppl 9), 906 (2019). https://doi.org/10.1186/s12864-019-6297-6
Mass spectrum
Spectral library search
Deep convolutional neural networks | CommonCrawl |
\begin{document}
\author{Elisabetta Chiodaroli and Ond\v{r}ej Kreml} \title{An overview of some recent results on the Euler system of isentropic gas dynamics } \date{}
\maketitle
\centerline{EPFL Lausanne}
\centerline{Station 8, CH-1015 Lausanne, Switzerland}
\centerline{Institute of Mathematics of the Academy of Sciences of the Czech Republic}
\centerline{\v{Z}itn\'a 25, 115 67 Praha 1, Czech Republic}
\begin{abstract}
This overview is concerned with the well-posedness problem for the isentropic compressible Euler equations of gas dynamics. The results we present are in line with the program of investigating the efficiency of different selection criteria proposed in the literature in order to weed out non-physical solutions to multi-dimensional systems of conservation laws, and they build upon the method of convex integration developed by De Lellis and Sz\'ekelyhidi for the incompressible Euler equations. Mainly following \cite{chk}, we investigate the role of the maximal dissipation criterion proposed by Dafermos in \cite{Da1}: we prove how, for specific pressure laws, some non-standard (i.e. constructed via convex integration methods) solutions to the Riemann problem for the isentropic Euler system in two space dimensions have a greater energy dissipation rate than the classical self-similar solution emanating from the same Riemann data. We therefore show that the maximal dissipation criterion proposed by Dafermos does not, in general, favour the self-similar solutions. \end{abstract} {\let\thefootnote\relax\footnote{2010 \textit{Mathematics Subject Classification}. Primary: $35$L$65$; Secondary: $35$L$67$ $35$L$45$.\\ \textit{Key words and phrases.} Hyperbolic systems of conservation laws, Riemann problem, admissible solutions, entropy rate criterion, ill--posedness, convex integration.}
\section{Introduction}
We consider the isentropic compressible Euler system of gas dynamics in $2$ space dimensions (cf. \cite{da} or \cite{se} or \cite{br}). It is obtained as a simplification of the full compressible Euler equations, by assuming the entropy to be constant. The state of the gas is described through the state vector $$ V=(\rho, v)$$ whose components are the density $\rho$ and the velocity $v$. The system consists of $3$ equations which correspond to balance statements for mass and linear momentum. The corresponding Cauchy problem reads as \begin{equation}\label{eq:Euler system} \left\{\begin{array}{l} \partial_t \rho + {\rm div}_x (\rho v) \;=\; 0\\ \partial_t (\rho v) + {\rm div}_x \left(\rho v\otimes v \right) + \nabla_x [ p(\rho)]\;=\; 0\\ \rho (\cdot,0)\;=\; \rho^0\\ v (\cdot, 0)\;=\; v^0 \, , \end{array}\right. \end{equation} with $t\in \mathbb{R}^+$, $x\in \mathbb{R}^2$. The pressure $p$ is a function of the density $\rho$ determined from the constitutive thermodynamic relations of the gas under consideration and it is assumed to satisfy $p'>0$ (under this assumption the system is hyperbolic). We will work with pressure laws $p(\rho)= \rho^\gamma$ with constant $\gamma\geq 1$.
Our aim is to discuss the issue of uniqueness of weak solutions to the Cauchy problem \eqref{eq:Euler system}. The theory of the Cauchy problem for hyperbolic systems of conservation laws is typically confronted with two major challenges. First, it is well-known that classical solutions develop discontinuities, even starting out from smooth initial data. In the literature this behaviour is known as breakdown of classical solutions. Therefore, it becomes imperative to introduce the notion of weak solution. However, weak solutions fail to be unique. In order to restore uniqueness, restrictions need to be imposed in the hope of singling out a unique physical solution. When dealing with systems coming from Physics, as in our case, the second law of Thermodynamics naturally induces such restrictions, i.e. admissibility criteria, by stipulating that weak solutions are admissible/entropy solutions if they satisfy some \textit{entropy} inequalities (see \eqref{eq:energy inequality} for the specific case of the compressible Euler system). Finally, a third important challenge then arises: do \textit{entropy} inequalities really serve as selection criteria? Are admissible solutions unique? Or at least, do there exist efficient criteria restoring uniqueness? This is a central problem in setting down a complete theory for the Cauchy problem. It has received a lot of attention, but positive answers were found only for scalar conservation laws or for systems in one space dimension (under smallness assumptions on the initial data): for a complete account of the existing literature we refer the reader to \cite{da} and \cite{se}. When dealing with systems of conservation laws in more than one space dimension, it is still an intriguing mathematical problem to develop a theory of well-posedness for the Cauchy problem which includes the formation and evolution of shock waves.
In 2006, Elling \cite{el} studied numerically a particular case of initial data for the two dimensional non-isentropic Euler equations. His results show that the numerical method does not always converge to the physical solution. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) to the multi-dimensional Euler equations are not always unique.
In a groundbreaking paper \cite{dls2}, De Lellis-Sz\'{e}kelyhidi give an example in favour of the conjecture that entropy/admissible solutions to the multi-dimensional compressible Euler equations are in general not unique. The non-uniqueness result by De Lellis-Sz\'{e}kelyhidi is a byproduct of their analysis of the incompressible Euler equations based on its formulation as a differential inclusion (see \cite{dls1} and \cite{dls3}) combined with convex integration methods: they exploit the result for the incompressible Euler equations to exhibit bounded initial density and bounded compactly supported initial velocity for which admissible solutions of \eqref{eq:Euler system} are not unique (in more than one space dimension). However the initial data constructed in \cite{dls2} are very irregular. The result by De Lellis and Sz\'ekelyhidi is improved by the first author in \cite{ch}
where it is proven that non-uniqueness still holds in the case of regular initial density (see also \cite{ChFeKr} for further generalizations). Non--unique solutions constructed via convex integration are referred to as non--standard or oscillatory solutions. Moreover in \cite{ChDLKr}, using the Riemann problem as a building block, the authors show that, in the two dimensional case, the entropy inequality (see \eqref{eq:energy inequality}) does not single out unique weak solutions even under very strong assumptions on the initial data ($(\rho^0, v^0) \in W^{1, \infty}(\mathbb{R}^2)$): \begin{theorem}[Chiodaroli, De Lellis, Kreml]\label{t:lipschitz} There are Lipschitz initial data $(\rho^0, v^0)$ for which there are infinitely many bounded admissible solutions $(\rho, v)$ of \eqref{eq:Euler system} on $\mathbb{R}^2\times [0, \infty[$ with $\inf \rho >0$. These solutions are all locally Lipschitz on a finite interval on which they all coincide with the unique classical solution. \end{theorem} This is proven by constructing infinitely many entropy weak solutions in forward time to a Riemann problem for \eqref{eq:Euler system} whose Riemann data can be generated, backwards in time, by a classical compression wave: the Lipschitz initial data of Theorem \ref{t:lipschitz} will be provided by the values of the compression wave at some finite negative time. It is clear now that the infinitely many admissible solutions constructed in Theorem \ref{t:lipschitz} all coincide with the unique classical solution (compression wave) on a finite time interval
whereas non--uniqueness arises after the first blow--up time.
This series of negative results concerning the entropy inequality as selection criterion for system \eqref{eq:Euler system} motivated the authors to explore other admissibility criteria which could work in favour of uniqueness, in particular we investigated an alternative criterion which has been proposed by Dafermos in \cite{Da1} under the name \textit{entropy rate admissibility criterion}. The ideas developed in \cite{ChDLKr} enabled us to prove in \cite{chk} the following theorem: \begin{theorem}[Chiodaroli, Kreml]\label{t:mainthm}
Let $p(\rho) = \rho^\gamma$, $1 \leq \gamma < 3$.
There exist Riemann data for which the self-similar solution to \eqref{eq:Euler system}
emanating from these data is not entropy rate admissible. \end{theorem} This result does not exclude that the entropy rate admissibility criterion could still select a unique solution, but surely prevents the self--similar solution from being the selected one. Moreover, since Theorem \ref{t:mainthm} is proven using non--standard solutions as competitors, with respect to Dafermos' criterion, of the self--similar solution, we can affirm that the entropy rate criterion cannot, at least in our setting, rule out oscillatory solutions obtained via convex integration.
In the rest of the paper we will further explain this result.
\section{Entropy criteria}
\subsection{Entropy inequality} We recall here the usual definitions of weak and admissible solutions to \eqref{eq:Euler system} in the two--dimensional case.
\begin{definition}\label{d:weak} By a \emph{weak solution} of \eqref{eq:Euler system} on $\mathbb{R}^2\times[0,\infty[$ we mean a pair $(\rho, v)\in L^\infty(\mathbb{R}^2\times [0,\infty[)$ such that the following identities hold for every test functions $\psi\in C_c^{\infty}(\mathbb{R}^2\times [0, \infty[)$, $\phi\in C_c^{\infty}(\mathbb{R}^2\times [0, \infty[)$: \begin{equation} \label{eq:weak1} \int_0^\infty \int_{\mathbb{R}^2} \left[\rho\partial_t \psi+ \rho v \cdot \nabla_x \psi\right] dx dt+\int_{\mathbb{R}^2} \rho^0(x)\psi(x,0) dx \;=\; 0 \end{equation} \begin{align} \label{eq:weak2} &\int_0^\infty \int_{\mathbb{R}^2} \left[ \rho v \cdot \partial_t \phi+ \rho v \otimes v : D_x \phi +p(\rho) {\rm div}\,_x \phi \right] +\int_{\mathbb{R}^2} \rho^0(x) v^0(x)\cdot\phi(x,0) dx\;=\; 0. \end{align} \end{definition}
\begin{definition}\label{d:admissible} A bounded weak solution $(\rho, v)$ of \eqref{eq:Euler system} is \emph{admissible} if it satisfies the following inequality for every nonnegative test function $\varphi\in C_c^{\infty}(\mathbb{R}^2\times [0,\infty[)$: \begin{align} \label{eq:energy inequality}
&\int_0^\infty\int_{\mathbb{R}^2} \left[\left(\rho\varepsilon(\rho)+\rho \frac{\abs{v}^2}{2}\right)\partial_t \varphi+\left(\rho\varepsilon(\rho)+\rho \frac{\abs{v}^2}{2}+p(\rho)\right) v \cdot \nabla_x \varphi \right]\notag \\ &+\int_{\mathbb{R}^2} \left(\rho^0 (x) \varepsilon(\rho^0 (x))+\rho^0 (x)\frac{\abs{v^0 (x)}^2}{2}\right) \varphi(x,0)\, dx \;\geq\; 0\, . \end{align} \end{definition} Note that \eqref{eq:energy inequality} is rather a weak form of energy balance.
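For later use let us recall the standard convention relating the internal energy $\varepsilon(\rho)$ appearing in \eqref{eq:energy inequality} to the pressure (this is the usual thermodynamic convention for \eqref{eq:Euler system}, assumed here rather than quoted from the original papers): $p(\rho)=\rho^2\varepsilon'(\rho)$, so that for the pressure laws $p(\rho)=\rho^\gamma$ one obtains, up to additive constants,

```latex
\varepsilon(\rho) \;=\; \frac{\rho^{\gamma-1}}{\gamma-1} \quad \text{for } \gamma > 1,
\qquad
\varepsilon(\rho) \;=\; \ln \rho \quad \text{for } \gamma = 1.
```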
\subsection{Entropy rate admissibility criterion} \label{s:entropy}
An alternative criterion to the entropy inequality has been proposed by Dafermos in \cite{Da1} under the name of \textit{entropy rate admissibility criterion}. In order to formulate this criterion for the specific system \eqref{eq:Euler system} we define the \textit{total energy} of the solutions $(\rho,v)$ to \eqref{eq:Euler system} as \begin{equation} \label{eq:energy0} E[\rho,v](t) = \int_{\mathbb{R}^2} \left(\rho\varepsilon(\rho) + \rho\frac{\abs{v}^2}{2}\right)\d x . \end{equation} Let us remark that in Dafermos' terminology $E[\rho,v](t)$ is called ``total entropy'' (see \cite{Da1}). However, since in the context of system \eqref{eq:Euler system} the physical energy plays the role of the mathematical entropy, it is more convenient to call $E[\rho,v](t)$ \textit{total energy}. The right derivative of $E[\rho,v](t)$ defines the \textit{energy dissipation rate} of $(\rho,v)$ at time $t$: \begin{equation} \label{eq:dissipation rate0} D[\rho,v](t) = \frac{\d_+ E[\rho,v](t)}{\d t}. \end{equation} Since our solutions will have piecewise constant values of $\rho$ and $\abs{v}^2$, it is easy to see that the total energy of any solution we construct is infinite; we shall therefore restrict the infinite domain $\mathbb{R}^2$ to a finite box $(-L,L)^2$ and denote \begin{align} &E_L[\rho,v](t) = \int_{(-L,L)^2} \left(\rho\varepsilon(\rho) + \rho\frac{\abs{v}^2}{2}\right)\d x \label{eq:energy L}\\ &D_L[\rho,v](t) = \frac{\d_+ E_L[\rho,v](t)}{\d t}. \label{eq:dissipation rate L} \end{align} The problem of infinite energy of solutions may be solved also by restricting to a periodic domain and constructing (locally in time) periodic solutions. This procedure is carefully described in \cite{chk}.
Using the concept of energy dissipation rate, Dafermos in \cite{Da1} introduces a new selection criterion for weak solutions which goes under the name of \textit{entropy rate admissibility criterion}. We recall here the definition of \textit{entropy rate admissible solutions}. \begin{definition}[Entropy rate admissible solution]\label{d:entropy rate} A weak solution $(\rho,v)$ of \eqref{eq:Euler system} is \emph{entropy rate admissible} if there exists $L^* > 0$ such that there is no other weak solution $(\overline{\rho},\overline{v})$ with the property that for some $\tau\geq 0$, $(\overline{\rho},\overline{v})(x,t)= (\rho,v)(x,t)$ on $\mathbb{R}^2 \times [0, \tau]$ and $ D_L[\overline{\rho},\overline{v}](\tau) < D_L[\rho,v](\tau) $ for all $L \geq L^*$. \end{definition} In other words, we call entropy rate admissible the solution(s) dissipating the most total energy.
\section{Background literature and main results} The investigation of the entropy rate admissibility criterion began with the paper \cite{Da1} of Dafermos, where he puts it forward and moreover proves that for a single equation the entropy rate criterion is equivalent to the viscosity criterion in the class of piecewise smooth solutions. Later on, following the approach of Dafermos, Hsiao in \cite{Hs} proves, in the class of piecewise smooth solutions, the equivalence of the entropy rate criterion and the viscosity criterion for the one-dimensional system of equations of nonisentropic gas dynamics in Lagrangian formulation with pressure laws $p(\rho)= \rho^\gamma$ for $\gamma\geq 5/3 $ while the same equivalence is disproved for $\gamma < 5/3 $. For further analysis on the relation between entropy rate minimization and admissibility of solutions for a more general class of evolutionary equations we refer to \cite{Da2}. However, to our knowledge, until recently the entropy rate criterion had not been tested in the case of several space variables or on a broader class of solutions than the piecewise smooth ones.
Recently Feireisl in \cite{fe} extended the result of Chiodaroli \cite{ch} and obtained infinitely many global admissible weak solutions of \eqref{eq:Euler system}, none of which is entropy rate admissible: this result seems in favour of the effectiveness of the entropy rate criterion to rule out non--standard solutions (i.e. constructed by the method of De Lellis and Sz\'ekelyhidi). In \cite{chk} we have actually shown that for specific initial data, and in the two--dimensional case, the oscillatory (non--standard) solutions dissipate more energy than the self-similar solution which may be believed to be the physical one. The results obtained in \cite{chk} hinge upon some of the ideas devised in \cite{ChDLKr} combined with novel developments to deal with the entropy rate criterion.
We refer also to the work \cite{sz}, where Sz\'ekelyhidi constructed irregular solutions of the incompressible Euler equations with vortex-sheet initial data and computed their dissipation rate.
We focus on the Riemann problem for the system \eqref{eq:Euler system},\eqref{eq:energy inequality} in two-space dimensions. Hence, we denote the space variable as $x=(x_1, x_2)\in \mathbb{R}^2$ and consider initial data in the form \begin{equation}\label{eq:R_data} (\rho^0 (x), v^0 (x)) := \left\{ \begin{array}{ll} (\rho_-, v_-) \quad & \mbox{if $x_2<0$}\\ \\ (\rho_+, v_+) & \mbox{if $x_2>0$,} \end{array}\right. \end{equation} where $\rho_\pm, v_{\pm 1}, v_{\pm 2}$ are constants. Our concern has been to compare the energy dissipation rate of standard self-similar solutions associated to the Riemann problem \eqref{eq:Euler system}, \eqref{eq:energy inequality}, \eqref{eq:R_data} with the energy dissipation rate of non-standard solutions for the same problem obtained by the method developed in \cite{ChDLKr}.
We obtained the following results.
\begin{theorem}\label{t:main0}
Let $p(\rho) = \rho^{\gamma}$ with $\gamma \geq 1$. For every Riemann data \eqref{eq:R_data} such that the self-similar solution
to the Riemann problem \eqref{eq:Euler system}--\eqref{eq:energy inequality}, \eqref{eq:R_data} consists of an admissible $1-$shock and an admissible $3-$shock, i.e. $v_{-1} = v_{+1}$ and
\begin{equation}\label{eq:2shocks condition}
v_{+2} - v_{-2} < -\sqrt{\frac{(\rho_--\rho_+)(p(\rho_-)-p(\rho_+))}{\rho_-\rho_+}},
\end{equation}
there exist infinitely many admissible solutions to \eqref{eq:Euler system}--\eqref{eq:energy inequality}, \eqref{eq:R_data}. \end{theorem}
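Condition \eqref{eq:2shocks condition} is explicit and easy to evaluate. As a purely illustrative numerical sketch (the function name and the sample states below are hypothetical, not taken from \cite{chk}), one can test whether given Riemann data fall into the two-shock regime of Theorem \ref{t:main0}:

```python
import math

def is_two_shock_data(rho_minus, rho_plus, v2_minus, v2_plus, gamma=2.0):
    """Check the strict jump condition
    v_{+2} - v_{-2} < -sqrt((rho_- - rho_+)(p(rho_-) - p(rho_+)) / (rho_- rho_+))
    for the pressure law p(rho) = rho**gamma (hypothetical sample data)."""
    p = lambda rho: rho ** gamma
    threshold = math.sqrt((rho_minus - rho_plus) * (p(rho_minus) - p(rho_plus))
                          / (rho_minus * rho_plus))
    return (v2_plus - v2_minus) < -threshold

# With rho_- = 2, rho_+ = 1 and gamma = 2 the threshold is sqrt(3/2) ~ 1.2247:
print(is_two_shock_data(2.0, 1.0, 0.0, -2.0))  # downward jump of size 2: True
print(is_two_shock_data(2.0, 1.0, 0.0, -1.0))  # jump too small: False
```

Note that the radicand is positive whenever $\rho_-\neq\rho_+$, since $p$ is increasing.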
Theorem \ref{t:main0} can be viewed as an extension of the results obtained together with De Lellis in \cite{ChDLKr}. As a consequence of Theorem \ref{t:main0} and by a suitable choice of initial data, we can prove the following main theorem.
\begin{theorem}\label{t:main}
Let $p(\rho) = \rho^\gamma$, $1 \leq \gamma < 3$.
There exist Riemann data \eqref{eq:R_data} for which the self-similar solution to \eqref{eq:Euler system},\eqref{eq:energy inequality}
emanating from these data is not entropy rate admissible. \end{theorem}
Theorem \ref{t:main} ensures that for $1 \leq \gamma < 3$ there exist initial Riemann data \eqref{eq:R_data} for which some of the infinitely many nonstandard solutions constructed as in Theorem \ref{t:main0} dissipate more energy than the self-similar solution, suggesting in particular that the Dafermos entropy rate admissibility criterion would not pick the self-similar solution as the admissible one.
\section{Strategies of proof}\label{s:I}
In this section we explain how to prove Theorem \ref{t:main0} and \ref{t:main}. For the complete proofs we refer the reader to \cite{chk}.
Both theorems stem from the framework introduced in \cite{ChDLKr} where the Riemann problem constitutes the starting point for constructing non--unique admissible non--standard solutions. In particular, in \cite{ChDLKr}, the authors jointly with Camillo De Lellis developed a method which allows one to obtain infinitely many entropy (oscillatory) solutions to a Riemann problem provided a suitable \textit{admissible subsolution} exists.
\subsection{From subsolutions to solutions}
We shall introduce the notion of \textit{fan subsolution} and \textit{admissible fan subsolution} as in \cite[Section 3]{ChDLKr}.
\begin{definition}[Fan partition]\label{d:fan} A {\em fan partition} of $\mathbb{R}^2\times (0, \infty)$ consists of three open sets $P_-, P_1, P_+$ of the following form \begin{align}
P_- &= \{(x,t): t>0 \quad \mbox{and} \quad x_2 < \nu_- t\}\\
P_1 &= \{(x,t): t>0 \quad \mbox{and} \quad \nu_- t < x_2 < \nu_+ t\}\\
P_+ &= \{(x,t): t>0 \quad \mbox{and} \quad x_2 > \nu_+ t\}, \end{align} where $\nu_- < \nu_+$ is an arbitrary couple of real numbers. \end{definition}
We denote by $\mathcal{S}_0^{2\times2}$ the set of all symmetric $2\times2$ matrices with zero trace.
\begin{definition}[Fan subsolution] \label{d:subs} A {\em fan subsolution} to the compressible Euler equations \eqref{eq:Euler system} with initial data \eqref{eq:R_data} is a triple $(\overline{\rho}, \overline{v}, \overline{u}): \mathbb{R}^2\times (0,\infty) \rightarrow (\mathbb{R}^+, \mathbb{R}^2, \mathcal{S}_0^{2\times2})$ of piecewise constant functions satisfying the following requirements. \begin{itemize} \item[(i)] There is a fan partition $P_-, P_1, P_+$ of $\mathbb{R}^2\times (0, \infty)$ such that \[ (\overline{\rho}, \overline{v}, \overline{u})= (\rho_-, v_-, u_-) \bm{1}_{P_-} + (\rho_1, v_1, u_1) \bm{1}_{P_1} + (\rho_+, v_+, u_+) \bm{1}_{P_+} \] where $\rho_1, v_1, u_1$ are constants with $\rho_1 >0$ and $u_\pm =
v_\pm\otimes v_\pm - \textstyle{\frac{1}{2}} |v_\pm|^2 {\rm Id}$; \item[(ii)] There exists a positive constant $C$ such that \begin{equation} \label{eq:subsolution 2} v_1\otimes v_1 - u_1 < \frac{C}{2} {\rm Id}\, ; \end{equation} \item[(iii)] The triple $(\overline{\rho}, \overline{v}, \overline{u})$ solves the following system in the sense of distributions: \begin{align} &\partial_t \overline{\rho} + {\rm div}_x (\overline{\rho} \, \overline{v}) \;=\; 0\label{eq:continuity}\\ &\partial_t (\overline{\rho} \, \overline{v})+{\rm div}_x \left(\overline{\rho} \, \overline{u} \right) + \nabla_x \left( p(\overline{\rho})+\frac{1}{2}\left( C \rho_1
\bm{1}_{P_1} + \overline{\rho} |\overline{v}|^2 \bm{1}_{P_+\cup P_-}\right)\right)= 0.\label{eq:momentum} \end{align} \end{itemize} \end{definition}
\begin{definition}[Admissible fan subsolution]\label{d:admiss}
A fan subsolution $(\overline{\rho}, \overline{v}, \overline{u})$ is said to be {\em admissible} if it satisfies the following inequality in the sense of distributions \begin{align} &\de_t \left(\overline{\rho} \varepsilon(\overline{\rho})\right)+{\rm div}\,_x \left[\left(\overline{\rho}\varepsilon(\overline{\rho})+p(\overline{\rho})\right) \overline{v}\right]
+ \de_t \left( \overline{\rho} \frac{|\overline{v}|^2}{2} \bm{1}_{P_+\cup P_-} \right)
+ {\rm div}\,_x \left(\overline{\rho} \frac{|\overline{v}|^2}{2} \overline{v} \bm{1}_{P_+\cup P_-}\right)\nonumber\\ &\qquad\qquad+ \left[\de_t\left(\rho_1 \, \frac{C}{2} \, \bm{1}_{P_1}\right) + {\rm div}\,_x\left(\rho_1 \, \overline{v} \, \frac{C}{2} \, \bm{1}_{P_1}\right)\right] \;\leq\; 0\, .\label{eq:admissible subsolution} \end{align} \end{definition}
The strategy which lies behind Theorem \ref{t:main0}, as well as behind Theorem \ref{t:lipschitz} in \cite{ChDLKr}, consists in reducing the proof of the existence of infinitely many admissible solutions to the Riemann problem for \eqref{eq:Euler system} to the proof of the existence of an admissible fan subsolution as defined in Definition \ref{d:admiss}. This is the content of the following Proposition which represents the key ingredient of \cite{chk} and is proven in \cite{ChDLKr}.
\begin{proposition}\label{p:subs} Let $p$ be any $C^1$ function and $(\rho_\pm, v_\pm)$ be such that there exists at least one admissible fan subsolution $(\overline{\rho}, \overline{v}, \overline{u})$ of \eqref{eq:Euler system} with initial data \eqref{eq:R_data}. Then there are infinitely many bounded admissible solutions $(\rho, v)$ to \eqref{eq:Euler system}-\eqref{eq:energy inequality}, \eqref{eq:R_data} such that $\rho=\overline{\rho}$ and $\abs{v}^2\bm{1}_{P_1} = C$. \end{proposition}
Roughly speaking, the infinitely many bounded admissible solutions $(\rho,v)$ of Proposition \ref{p:subs} are constructed by adding to the subsolution solutions to the linearized pressureless incompressible Euler equations supported in $P_1$. Such solutions are given by the following Lemma, cf. \cite[Lemma 3.7]{ChDLKr}.
\begin{lemma}\label{l:ci} Let $(\tilde{v}, \tilde{u})\in \mathbb{R}^2\times \mathcal{S}_0^{2\times 2}$ and $C_0>0$ be such that $\tilde{v}\otimes \tilde{v} - \tilde{u} < \frac{C_0}{2} {\rm Id}$. For any open set $\Omega\subset \mathbb{R}^2\times \mathbb{R}$ there are infinitely many maps $(\underline{v}, \underline{u}) \in L^\infty (\mathbb{R}^2\times \mathbb{R} , \mathbb{R}^2\times \mathcal{S}_0^{2\times 2})$ with the following property \begin{itemize} \item[(i)] $\underline{v}$ and $\underline{u}$ vanish identically outside $\Omega$; \item[(ii)] ${\rm div}\,_x \underline{v} = 0$ and $\partial_t \underline{v} + {\rm div}\,_x \underline{u} = 0$; \item[(iii)] $ (\tilde{v} + \underline{v})\otimes (\tilde{v} + \underline{v}) - (\tilde{u} + \underline{u}) = \frac{C_0}{2} {\rm Id}$ a.e. on $\Omega$. \end{itemize} \end{lemma}
Proposition \ref{p:subs} is then proved by applying Lemma \ref{l:ci} with $\Omega = P_1$, $(\tilde{v}, \tilde{u}) = (v_1,u_1)$ and $C_0 = C$. It is then a matter of easy computation to check that each couple $(\overline{\rho}, \overline{v} + \underline{v})$ is indeed an admissible weak solution to \eqref{eq:Euler system}--\eqref{eq:energy inequality} with initial data \eqref{eq:R_data}, for details see \cite[Section 3.3]{ChDLKr}. The whole proof of Lemma \ref{l:ci} can be found in \cite[Section 4]{ChDLKr}.
\subsection{Concluding arguments}\label{s:concluding}
Thanks to Proposition \ref{p:subs}, Theorem \ref{t:main0} amounts to showing the existence of a fan admissible subsolution with appropriate initial data under the hypothesis that \eqref{eq:R_data} is such that the self-similar solution
to the Riemann problem \eqref{eq:Euler system}, \eqref{eq:energy inequality}, \eqref{eq:R_data} consists of an admissible $1-$shock and an admissible $3-$shock.
Indeed, an admissible fan subsolution with initial data \eqref{eq:R_data} is defined through the set of identities and inequalities which we recall here (see also \cite[Section 5]{ChDLKr}).
We introduce the real numbers $\alpha, \beta, \gamma_1, \gamma_2, v_{-1}, v_{-2}, v_{+1}, v_{+2}$ such that \begin{align} v_1 &= (\alpha, \beta),\label{eq:v1}\\ v_- &= (v_{-1}, v_{-2})\\ v_+ &= (v_{+1}, v_{+2})\\ u_1 &=\left( \begin{array}{cc}
\gamma_1 & \gamma_2 \\
\gamma_2 & -\gamma_1\\
\end{array} \right)\, .\label{eq:u1} \end{align}
Then, the conditions of Definitions \ref{d:subs} and \ref{d:admiss} translate into the following set of algebraic identities and inequalities.
\begin{proposition}\label{p:algebra} Let $P_-, P_1, P_+$ be a fan partition as in Definition \ref{d:fan}.\\
The constants $v_1, v_-, v_+, u_1, \rho_-, \rho_+, \rho_1$ as in \eqref{eq:v1}-\eqref{eq:u1} define an \emph{admissible fan subsolution} as in Definitions \ref{d:subs}-\ref{d:admiss} if and only if the following identities and inequalities hold: \begin{itemize} \item Rankine-Hugoniot conditions on the left interface: \begin{align} &\nu_- (\rho_- - \rho_1) \, =\, \rho_- v_{-2} -\rho_1 \beta \label{eq:cont_left} \\ &\nu_- (\rho_- v_{-1}- \rho_1 \alpha) \, = \, \rho_- v_{-1} v_{-2}- \rho_1 \gamma_2 \label{eq:mom_1_left}\\ &\nu_- (\rho_- v_{-2}- \rho_1 \beta) \, = \, \rho_- v_{-2}^2 + \rho_1 \gamma_1 +p (\rho_-)-p (\rho_1) - \rho_1 \frac{C}{2}\, ;\label{eq:mom_2_left} \end{align} \item Rankine-Hugoniot conditions on the right interface: \begin{align} &\nu_+ (\rho_1-\rho_+ ) \, =\, \rho_1 \beta - \rho_+ v_{+2} \label{eq:cont_right}\\ &\nu_+ (\rho_1 \alpha- \rho_+ v_{+1}) \, = \, \rho_1 \gamma_2 - \rho_+ v_{+1} v_{+2} \label{eq:mom_1_right}\\ &\nu_+ (\rho_1 \beta- \rho_+ v_{+2}) \, = \, - \rho_1 \gamma_1 - \rho_+ v_{+2}^2 +p (\rho_1) -p (\rho_+) + \rho_1 \frac{C}{2}\, ;\label{eq:mom_2_right} \end{align} \item Subsolution condition: \begin{align}
&\alpha^2 +\beta^2 < C \label{eq:sub_trace}\\ & \left( \frac{C}{2} -{\alpha}^2 +\gamma_1 \right) \left( \frac{C}{2} -{\beta}^2 -\gamma_1 \right) - \left( \gamma_2 - \alpha \beta \right)^2 >0\, ;\label{eq:sub_det} \end{align} \item Admissibility condition on the left interface: \begin{align} & \nu_-(\rho_- \varepsilon(\rho_-)- \rho_1 \varepsilon( \rho_1))+\nu_- \left(\rho_- \frac{\abs{v_-}^2}{2}- \rho_1 \frac{C}{2}\right)\nonumber\\ \leq & \left[(\rho_- \varepsilon(\rho_-)+ p(\rho_-)) v_{-2}- ( \rho_1 \varepsilon( \rho_1)+ p(\rho_1)) \beta \right] + \left( \rho_- v_{-2} \frac{\abs{v_-}^2}{2}- \rho_1 \beta \frac{C}{2}\right)\, ;\label{eq:E_left} \end{align} \item Admissibility condition on the right interface: \begin{align} &\nu_+(\rho_1 \varepsilon( \rho_1)- \rho_+ \varepsilon(\rho_+))+\nu_+ \left( \rho_1 \frac{C}{2}- \rho_+ \frac{\abs{v_+}^2}{2}\right)\nonumber\\ \leq &\left[ ( \rho_1 \varepsilon( \rho_1)+ p(\rho_1)) \beta- (\rho_+ \varepsilon(\rho_+)+ p(\rho_+)) v_{+2}\right] + \left( \rho_1 \beta \frac{C}{2}- \rho_+ v_{+2} \frac{\abs{v_+}^2}{2}\right)\, .\label{eq:E_right} \end{align} \end{itemize} \end{proposition}
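The six Rankine--Hugoniot identities \eqref{eq:cont_left}--\eqref{eq:mom_2_right} can be checked mechanically. The following sketch (a hypothetical helper; only the identities, not the inequalities, are encoded) evaluates their residuals. As a sanity check, the trivial configuration with equal states, $u_1 = v_1\otimes v_1 - \frac{1}{2}\abs{v_1}^2{\rm Id}$ and $C=\abs{v_1}^2$ annihilates all residuals, although it is of course not a subsolution, since the strict inequality \eqref{eq:sub_trace} then fails:

```python
def rh_residuals(nu_m, nu_p, rho_m, rho_1, rho_p, v_m, v_p, alpha, beta,
                 g1, g2, C, gamma=2.0):
    """Residuals of the six Rankine-Hugoniot identities on the left and right
    interfaces of the fan partition, for p(rho) = rho**gamma."""
    p = lambda rho: rho ** gamma
    vm1, vm2 = v_m
    vp1, vp2 = v_p
    return [
        nu_m * (rho_m - rho_1) - (rho_m * vm2 - rho_1 * beta),
        nu_m * (rho_m * vm1 - rho_1 * alpha) - (rho_m * vm1 * vm2 - rho_1 * g2),
        nu_m * (rho_m * vm2 - rho_1 * beta)
            - (rho_m * vm2**2 + rho_1 * g1 + p(rho_m) - p(rho_1) - rho_1 * C / 2),
        nu_p * (rho_1 - rho_p) - (rho_1 * beta - rho_p * vp2),
        nu_p * (rho_1 * alpha - rho_p * vp1) - (rho_1 * g2 - rho_p * vp1 * vp2),
        nu_p * (rho_1 * beta - rho_p * vp2)
            - (-rho_1 * g1 - rho_p * vp2**2 + p(rho_1) - p(rho_p) + rho_1 * C / 2),
    ]

# Trivial sanity check: equal states, gamma_1 = (alpha^2 - beta^2)/2,
# gamma_2 = alpha*beta (i.e. u_1 = v (x) v - |v|^2/2 Id), and C = |v|^2.
a, b = 1.0, 2.0
res = rh_residuals(-1.0, 1.0, 1.0, 1.0, 1.0, (a, b), (a, b),
                   a, b, (a * a - b * b) / 2, a * b, a * a + b * b)
print(max(abs(r) for r in res))  # 0.0
```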
Theorem \ref{t:main} is then a corollary of the following theorem, proven in \cite{chk} and showing the existence of an admissible fan subsolution, combined with Proposition \ref{p:subs}.
\begin{theorem} \label{t:intermediate}
Let $p(\rho) = \rho^{\gamma}$ with $\gamma \geq 1$.
For every Riemann data \eqref{eq:R_data} such that $v_{-1} = v_{+1}$ and
\begin{equation*}
v_{+2} - v_{-2} < -\sqrt{\frac{(\rho_--\rho_+)(p(\rho_-)-p(\rho_+))}{\rho_-\rho_+}},
\end{equation*}
there exist $\nu_{-}, \nu_+, v_1, u_1,\rho_1, C$ such that \eqref{eq:cont_left}--\eqref{eq:E_right} hold. \end{theorem}
It remains to prove Theorem \ref{t:main}. This is obtained by showing that among the infinitely many admissible solutions provided by Theorem \ref{t:main0} one has lower energy dissipation rate than the self--similar solution emanating from the same Riemann data, thus contradicting Definition \ref{d:entropy rate}. Let us remark that the Riemann data allowing for the result of Theorem \ref{t:main} are of the same type as the ones of Theorem \ref{t:main0}, i.e. they admit a forward in time self--similar solution consisting of two shocks. We also underline that the self--similar solution depends in fact only on one variable, specifically on $x_2$.
Assume from now on for simplicity that $ v_{-1} = v_{+1} = 0$ in \eqref{eq:R_data}. Let us denote by $(\rho_S, v_S)$ the self--similar solution emanating from the Riemann data as in Theorem \ref{t:main0}. The value of the dissipation rate $D_L[\rho_S,v_S](t)$ has a specific form for the solution $(\rho_S, v_S)$ consisting (by assumption) of two shocks of speeds $\nu_1$ and $\nu_2$. Denoting the middle state $(\rho_m,v_m = (0,\overline{v}))$ and introducing the notation \begin{align}
E_\pm &:= \rho_\pm\varepsilon(\rho_\pm) + \rho_\pm\frac{v_\pm^2}{2} \label{eq:E pm} \\
E_m &:= \rho_m\varepsilon(\rho_m) + \rho_m\frac{\overline{v}^2}{2} \end{align} we have \begin{equation}\label{eq:Dis rate for 2 shocks}
D_L[\rho_S,v_S] = -2L\left(\nu_1(E_- -E_m) + \nu_2(E_m-E_+)\right). \end{equation} Now let us consider a solution $(\rho, v)$ with the same initial data \eqref{eq:R_data} but constructed by the method of convex integration starting from an admissible fan subsolution using Proposition \ref{p:subs}. We also assume, that the fan admissible subsolution (which exists by Theorem \ref{t:intermediate}) has an underlying fan partition defined by the speeds $\nu_-$ and $\nu_+$. Although $v$ is not constant in $P_1$ we still have, by construction (see Proposition \ref{p:subs}), that $\abs{v}^2\bm{1}_{P_1} = C $, in particular $E_1: = \rho_1\varepsilon(\rho_1) + \rho_1\frac{C}{2}$ is constant in $P_1$. The dissipation rate for all solutions constructed from a given subsolution hence depends only on the underlying subsolution and is given by \begin{equation}\label{eq:Dis rate for non-standard}
D_L[\rho,v] = -2L\left(\nu_-(E_- -E_1) + \nu_+(E_1-E_+)\right). \end{equation} If, for a moment, we assume that the speeds of the self--similar solution and of the subsolution coincide, i.e. $\nu_-=\nu_1$ and $\nu_+=\nu_2$, it is clearly enough to achieve $E_1>E_m$ in order to prove Theorem \ref{t:main}. Of course, as one can see from \cite[Section 4]{chk}, this is not the case; nonetheless the proof works along the same lines: we can prove that there is still some freedom in the choice of the parameters defining the underlying subsolution for $(\rho,v)$ which allows us to tune them in such a way that Theorem \ref{t:main} holds. For a complete and rigorous proof we refer to \cite[Section 5]{chk}.
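In the simplified situation where the fan speeds coincide with the shock speeds, subtracting \eqref{eq:Dis rate for 2 shocks} from \eqref{eq:Dis rate for non-standard} gives $D_L[\rho,v]-D_L[\rho_S,v_S] = -2L\,(E_1-E_m)(\nu_+-\nu_-)$, which is negative whenever $E_1>E_m$. The following minimal sketch illustrates this mechanism; all numerical values are hypothetical and chosen only to make the comparison visible:

```python
def dissipation_rate(L, speeds, energies):
    """D_L = -2L * (nu_1 (E_left - E_mid) + nu_2 (E_mid - E_right)),
    the common form of the piecewise-constant dissipation-rate formulas above."""
    (nu1, nu2), (E_left, E_mid, E_right) = speeds, energies
    return -2 * L * (nu1 * (E_left - E_mid) + nu2 * (E_mid - E_right))

L = 1.0
E_minus, E_plus = 4.0, 3.0        # outer energies (hypothetical)
nu_minus, nu_plus = -1.0, 1.0     # common fan/shock speeds (simplified case)
E_m, E_1 = 5.0, 6.0               # intermediate energies with E_1 > E_m

D_self = dissipation_rate(L, (nu_minus, nu_plus), (E_minus, E_m, E_plus))
D_non = dissipation_rate(L, (nu_minus, nu_plus), (E_minus, E_1, E_plus))
print(D_non < D_self)  # True: the non-standard solution dissipates faster
```

Here $D_{\rm self}=-6$ and $D_{\rm non}=-10$, so the non-standard solution has the strictly lower (more dissipative) rate.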
\end{document}
\begin{document}
\title{Unified products for Malcev algebras}
\author{Tao Zhang, Ling Zhang, Ruyi Xie}
\date{} \maketitle
\noindent
\allowdisplaybreaks
\begin{abstract} The extending structures and unified products for Malcev algebras are developed. Some special cases of unified products such as crossed products and matched pair of Malcev algebras are studied. It is proved that the extending structures can be classified by some non-abelian cohomology theory. One dimensional flag extending structures of Malcev algebras are also investigated. \end{abstract}
\maketitle
\footnotetext{{\it{Keyword}: Malcev algebra, extending structures, unified products, matched pair, non-abelian cohomology}}
\footnotetext{{\it{Mathematics Subject Classification (2010)}}: 17A30, 17B99, 17B56.}
\section*{Introduction} As a generalization of Lie algebras, Malcev algebras (Mal'tsev algebras, Moufang-Lie algebras) were introduced by A. Maltsev in \cite{Ma} as the tangent algebras to locally smooth Moufang loops. As an algebraic structure, the concept of Malcev algebras was later studied by many authors, see for example \cite{Ku,Sa,Ya}. The representation and cohomology theory of Malcev algebras was established by K. Yamaguti in \cite{Ya}.
On the other hand, extending structures for Lie algebras, associative algebras and Leibniz algebras were studied by Agore and Militaru in \cite{AM1,AM2,AM3,AM4,AM5, AM6}. The extending structures for left-symmetric algebras, associative and Lie conformal algebras have also been studied by Y. Hong and Y. Su in \cite{Hong1,Hong2,Hong3}.
In this paper, we study extending structures and unified products for Malcev algebras. We follow closely the theory of unified products and extending structures developed by A. L. Agore and G. Militaru in \cite{AM3,AM4,AM6}. Let $M$ be a Malcev algebra and $E$ a vector space containing $M$ as a subspace. We will describe and classify all Malcev algebra structures on $E$ such that $M$ is a subalgebra of $E$. We show that, associated to any extending structure of $M$ by a complement space $V$, there is a unified product on the direct sum space $E\cong M\oplus V$. We will show how to classify extending structures for Malcev algebras by using some non-abelian cohomology and deformation map theory.
The organization of this paper is as follows. In the first section, we review some basic facts about Malcev algebras. In the second section, the definition of extending structures for Malcev algebras is introduced. We give the necessary and sufficient conditions for a unified product to form a Malcev algebra. In the last section, we study some special cases of unified products, which include crossed products and matched pairs of Malcev algebras. We also study the flag extending structures. Some concrete examples are given at the end of this section.
Throughout this paper, all Malcev algebras are assumed to be over an algebraically closed field $\mathbb{F}$ of characteristic different from 2 and 3. The space of linear maps from $V$ to $W$ is denoted by ${\rm Hom}(V,W)$.
\section{Preliminaries} In this section, we recall some basic facts about Malcev algebras. \begin{definition}
Let $M$ be a linear space over the field $\mathbb{F}$ equipped with a bilinear operation $[\,,\,]: M\times M\to M$ satisfying the identities: \begin{eqnarray} [ x,y ] &=& - [ y,x ],\\ J\left( x,y,[ x,z ] \right) &=& [J(x,y,z), x], \end{eqnarray} for any $x,y,z \in M$, where $J\left( x,y,z \right) = [ [ x,y ],z] + [ [ y,z ], x ] + [ [ z,x ],y]$ is the Jacobiator. Then $\left( M,[\,,\,] \right)$ is called a Malcev algebra. \end{definition}
A Malcev algebra is a Lie algebra precisely when the Jacobiator vanishes: $J\left( x,y,z \right)=0$ for all $x,y,z\in M$.
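\noindent\textbf{Example.} (A standard example, which we recall from the literature; see \cite{Ma,Sa}.) Every Lie algebra is a Malcev algebra, since its Jacobiator vanishes identically. For a non-Lie example, recall that any alternative algebra $A$ becomes a Malcev algebra $A^{-}$ under the commutator bracket $[a,b] := ab - ba$. Applying this to the octonion algebra $\mathbb{O}$ over $\mathbb{F}$, the subspace of traceless octonions is a $7$-dimensional simple Malcev algebra which is not a Lie algebra.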
\begin{theorem} Equation $(2)$ is equivalent to $(3)$. \begin{eqnarray} &&[ {[ {x,z} ],[ {y,w} ]} ] = [{[ {[ {x,y} ],z} ],w} ] + [ {[ {[{y,z} ],w} ], x} ] + [ {[ {[ {z,w}], x} ],y} ] + [ {[ {[ {w,x} ],y}],z} ] \end{eqnarray} \noindent for any $x,y,z,w \in M$. \end{theorem}
\begin{proof} Substituting $J\left( {x,y,z} \right) = [ {[ {x,y} ],z} ] + [ {[ {y,z} ], x} ] +[ {[ {z,x} ],y} ]$ into $(2)$, we get \[ [ {[ {x,y} ],[ {x,z} ]} ] = [ {[{[ {x,y} ],z} ], x} ] + [ {[ {[ {y,z}], x} ], x} ] + [ {[ {[ {z,x} ], x}],y} ], \] for any $x,y,z \in M$. Linearizing this identity in $x$, we have \[ \begin{array}{l}
[ {[ {w,y} ],[ {x,z} ]} ] + [ {[{x,y} ],[ {w,z} ]} ] \\
= [ {[ {[ {w,y} ],z} ], x} ] + [{[ {[ {y,z} ], x} ],w} ] + [ {[ {[{x,y} ],z} ],w} ]
+ [ {[ {[ {y,z} ],w} ], x} ] + [{[ {[ {z,w} ], x} ],y} ],
\end{array} \] \[ \begin{array}{l}
[ {[ {w,y} ],[ {x,z} ]} ] - [ {[{x,y} ],[ {w,z} ]} ] \\
= [ {[ {[ {y,w} ], x} ],z} ] + [{[ {[ {w,x} ],z} ],y} ] + [ {[ {[{z,w} ], x} ],y} ]
+ [ {[ {[ {w,x} ],y} ],z} ] + [{[ {[ {x,y} ],z} ],w} ].
\end{array} \] Adding the above two formulas, we obtain formula (3).
Conversely, setting $w = x$ in equation (3) recovers equation (2). \end{proof}
\begin{definition}
Let $\left( {M,\left[\,, \,\right]} \right)$ be a Malcev algebra. A left module over $M$ is a vector space $V$ together with a bilinear map $\triangleright :{M}\times {V} \to {V}$ such that the following condition holds: \begin{eqnarray} &&[x,z] \triangleright (y \triangleright q) = [[x,y],z]\triangleright q
- x \triangleright ([y,z] \triangleright q) + y\triangleright (x \triangleright (z \triangleright q)) - z\triangleright (y \triangleright (x \triangleright q)) \end{eqnarray} \noindent for all $ x,y,z\in M, q \in V$. \end{definition}
\begin{proposition}\label{prop:01} Let $M$ be a Malcev algebra and $(V,\triangleright)$ be a left module. Then the direct sum vector space $M \oplus V$ is a Malcev algebra with bracket defined by: \begin{equation} [(x, u), (y, v)]= ([x,y] , x \triangleright v - y \triangleright u) \end{equation} for all $ x, y \in M, u, v \in V$. This is called the semi-direct product of $M$ and $V$. \end{proposition}
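\noindent\textbf{Example.} (A standard construction, recorded here as an illustration.) Take $V = M$, the underlying vector space of $M$, with the adjoint action $x \triangleright v := [x,v]$. Then the bracket $(5)$ becomes $[(x, u), (y, v)] = ([x,y],\, [x,v] + [u,y])$, and the resulting semi-direct product is precisely the Malcev algebra of dual numbers $M \otimes \mathbb{F}[\varepsilon]/(\varepsilon^2)$; in particular, the adjoint action makes every Malcev algebra a left module over itself.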
\begin{theorem} \label{thm:02} Let $M$ be a Malcev algebra and $V$ a left $M$-module, and let $\omega: M\times M\to V$ be an anti-symmetric bilinear map. Define the bracket on $M \oplus V$ by: \begin{eqnarray} &&[(x, u), (y, v)] = ([x,y] , \,x \triangleright v - y \triangleright u + \omega (x,y)) \end{eqnarray} for all $x,y \in M, u,v \in V$. Then $\left( {M \oplus V,\left[\,, \,\right]}\right)$ is a Malcev algebra if and only if $\omega$ satisfies the following identity: \[ \begin{array}{l}
\omega ([x,z],[y,t]) + [t,y] \triangleright \omega (x,z) + [x,z] \triangleright \omega (y,t) \\
= \omega ([[x,y],z],t) + \omega ([[y,z],t], x) + \omega ([[z,t], x],y) + \omega ([[t,x],y],z) \\
\;\;\; + x \triangleright (t \triangleright \omega (y,z)) - x \triangleright \omega ([y,z],t) + z \triangleright (y \triangleright \omega (t,x)) - z \triangleright \omega ([t,x],y) \\
\;\;\; + t \triangleright (z \triangleright \omega (x,y)) - t \triangleright \omega ([x,y],z) + y \triangleright (x \triangleright \omega (z,t)) - y \triangleright \omega ([z,t], x),
\end{array} \] for all $ x,y,z,t \in M$. In this case, $\omega$ is called a 2-cocycle on $M$. \end{theorem}
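\noindent\textbf{Remark.} In the simplest case, when the action $\triangleright$ is trivial, all the terms involving $\triangleright$ vanish and the above identity reduces to \[ \omega ([x,z],[y,t]) = \omega ([[x,y],z],t) + \omega ([[y,z],t], x) + \omega ([[z,t], x],y) + \omega ([[t,x],y],z), \] that is, $\omega$ satisfies the analogue of identity $(3)$ with the outermost bracket replaced by $\omega$.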
The proofs of the above Proposition \ref{prop:01} and Theorem \ref{thm:02} are by direct computation, so we omit the details.
\begin{definition} \delabel{echivextedn} Let ${M} $ be a Malcev algebra, $E$ be a Malcev algebra such that ${M} $ is a subalgebra of $E$ and $V$ a complement of ${M} $ in $E$. For a linear map $\varphi: E \to E$ we consider the diagram: \begin{eqnarray} \eqlabel{diagrama1} \xymatrix {& {M} \ar[r]^{i} \ar[d]_{id} & {E} \ar[r]^{\pi} \ar[d]^{\varphi} & V \ar[d]^{id}\\ & {M} \ar[r]^{i} & {E}\ar[r]^{\pi } & V} \end{eqnarray} where $\pi : E \to V$ is the canonical projection of $E = {M} \oplus V$ on $V$ and $i: {M} \to E$ is the inclusion map. We say that $\varphi: E \to E$ \emph{stabilizes} ${M} $ if the left square of the diagram \equref{diagrama1} is commutative, and that $\varphi: E \to E$ \emph{co-stabilizes} $V$ if the right square of the diagram \equref{diagrama1} is commutative.
Let $(E,\cdot)$ and $(E,\cdot')$ be two Malcev algebra structures on $E$ both containing ${M} $ as a subalgebra. $(E,\cdot)$ and $(E,\cdot')$ are called \emph{equivalent}, and we denote this by $(E, \cdot) \equiv (E, \cdot')$, if there exists a Malcev algebra isomorphism $\varphi: (E, \cdot) \to (E, \cdot')$ which stabilizes ${M} $. Denote by $Extd(E,{M} )$ the set of equivalence classes of Malcev algebra structures on $E$ containing ${M}$ as a subalgebra.
$(E,\cdot)$ and $(E,\cdot')$ are called \emph{cohomologous}, and we denote this by $(E,\cdot) \approx (E, \cdot')$, if there exists a Malcev algebra isomorphism $\varphi: (E, \cdot) \to (E,\cdot')$ which stabilizes ${M} $ and co-stabilizes $V$. Denote by $Extd'(E,{M} )$ the set of cohomologous classes of Malcev algebra structures on $E$. \end{definition}
\section{Unified products for Malcev algebras}
\begin{definition}\delabel{exdatum} Let $M$ be a Malcev algebra and $V$ a vector space. An \textit{extending datum of $M$ through $V$} is a system $\Omega(M, V) = \bigl(\triangleleft, \, \triangleright,\, \, \omega, \, [\,,\,] \bigl)$ consisting of two bilinear maps: \begin{eqnarray*} \triangleright:M\times V \to V, \quad\triangleleft :M\times V \to M, \end{eqnarray*} and two skew-symmetric bilinear maps: \begin{eqnarray*} [\,,\,] :V\times V \to V,\quad \omega: V\times V \to M. \end{eqnarray*} Let $\Omega(M, V) = \bigl(\triangleleft, \, \triangleright, \, \, \omega, \, [\,,\,] \bigl)$ be an extending datum. We denote by $M\, \natural V$ the direct sum vector space $M\oplus V$ together with the bracket $[\cdot,\cdot]: (M\oplus V) \times (M\oplus V) \to M\oplus V$ defined by: \begin{eqnarray} [(x, u), (y, v)] = \Big([x,y] + x\triangleleft v - y \triangleleft u + \omega (u,v), x \triangleright v - y \triangleright u+ [u,v] \Big), \end{eqnarray} for all $x, y \in M$ and $u, v \in V$. The object $M \natural V$ is called the \textit{unified product} of $M$ and $V$ if it is a Malcev algebra with the above bracket.
\end{definition}
The following theorem provides the set of axioms that an extending datum $\Omega(M, V)$ must satisfy in order for $M \natural V$ to be a unified product.
\begin{theorem}\thlabel{1} Let $(M,[\,,\,])$ be a Malcev algebra, $V$ a vector space and $\Omega(M, V)$ an extending datum of $M$ by $V$. Then $M \natural V$ is a unified product if and only if the following compatibility conditions hold for all $x,y,z,t \in M, u,v,p, q \in V$: \begin{enumerate} \item[(U1)]$
[ {[ {x,z} ],y \triangleleft q} ] + [ {x,z}] \triangleleft \left( {y \triangleright q} \right) \\ { = }[ {[ {x,y} ],z} ] \triangleleft q + [{[ {y,z} ] \triangleleft q,x} ] - x \triangleleft \left({[ {y,z} ] \triangleright q} \right)
+ [ {[ {z \triangleleft q,x} ],y} ] - [ {x\triangleleft \left( {z \triangleright q} \right),y} ] \\ + y \triangleleft \left( {x \triangleright \left( {z \triangleright q} \right)}\right) - [ {[ {x \triangleleft q,y} ],z} ] + [ {y\triangleleft \left( {x \triangleright q} \right),z} ] - z\triangleleft \left( {y \triangleright \left( {x \triangleright q} \right)} \right) $,
\item[(U2)] $
[ {[ {x,z} ],\omega (v,q)} ] + [ {x,z} ]\triangleleft [ {v,q} ] \\ { = }[ {x \triangleleft v,z} ] \triangleleft q - \left( {z\triangleleft \left( {x \triangleright v} \right)} \right) \triangleleft q -\omega \left( {z \triangleright \left( {x \triangleright v} \right),q}\right) - [ {\left( {z \triangleleft v} \right) \triangleleft q,x} ]\\
-[ {\omega \left( {z \triangleright v,q} \right), x} ]+ x\triangleleft [ {z \triangleright v,q} ] + x \triangleleft \left( {\left( {z \triangleleft v} \right) \triangleright q} \right) + [ {z \triangleleft q,x} ] \triangleleft v \\ - \left({x \triangleleft \left( {z \triangleright q} \right)} \right) \triangleleft v- \omega \left( {x \triangleright \left( {z \triangleright q} \right),v}\right)- [ {\left( {x \triangleleft q} \right) \triangleleft v,z}] - [ {\omega \left( {x \triangleright q,v} \right),z} ]\\
+ z \triangleleft [ {x \triangleright q,v} ] + z \triangleleft\left( {\left( {x \triangleleft q} \right) \triangleright v} \right)
$,
\item[(U3)] $
[ {x \triangleleft p,y \triangleleft q} ] + \left( {x\triangleleft p} \right) \triangleleft \left( {y \triangleright q} \right) -\left( {y \triangleleft q} \right) \triangleleft \left( {x \triangleright p} \right) + \omega \left( {x \triangleright p,y \triangleright q} \right) \\ { = }\left( {[ {x,y} ] \triangleleft p} \right)\triangleleft q + \omega \left( {[ {x,y} ] \triangleright p, q}\right) + [ {\left( {y \triangleleft p} \right) \triangleleft q,x}] + [ {\omega \left( {y \triangleright p, q} \right), x} ] \\
- x \triangleleft [ {y \triangleright p, q} ] - x \triangleleft\left( {\left( {y \triangleleft p} \right) \triangleright q} \right) +[ {[ {\omega \left( {p, q} \right), x} ],y} ] - [{x \triangleleft [ {p, q} ],y} ] \\
+ y \triangleleft \left( {x \triangleright [ {p, q} ]} \right) -[ {x \triangleleft q,y} ] \triangleleft p + \left( {y\triangleleft \left( {x \triangleright q} \right)} \right) \triangleleft p +\omega \left( {y \triangleright \left( {x \triangleright q} \right),p} \right) $,
\item[(U4)] $
[ {x \triangleleft p,\omega (v,q)} ] + \left( {x \triangleleft p} \right) \triangleleft [ {v,q} ] - \omega (v,q) \triangleleft \left( {x \triangleright p} \right) + \omega \left( {x \triangleright p,[ {v,q} ]} \right) \\ { = }\left( {\left( {x \triangleleft v} \right) \triangleleft p} \right) \triangleleft q + \omega \left( {x \triangleright v,p} \right) \triangleleft q + \omega \left( {[ {x \triangleright v,p} ], q}\right) + \omega \left( {\left( {x \triangleleft v} \right) \triangleright p, q} \right) \\
+ [ {\omega \left( {v,p} \right) \triangleleft q,x} ] + [{\omega \left( {[ {v,p} ], q} \right), x} ] - x \triangleleft [ {[ {v,p} ], q} ] - x \triangleleft \left( {\omega \left( {v,p} \right) \triangleright q} \right) \\
+ [ {\omega \left( {p, q} \right), x} ] \triangleleft v - \left({x \triangleleft [ {p, q} ]} \right) \triangleleft v - \omega\left( {x \triangleright [ {p, q} ], v} \right) - \left( {\left( {x\triangleleft q} \right) \triangleleft v} \right) \triangleleft p \\
- \omega \left( {x \triangleright q,v} \right) \triangleleft p - \omega\left( {[ {x \triangleright q,v} ],p} \right) - \omega \left({\left( {x \triangleleft q} \right) \triangleright v,p} \right)
$,
\item[(U5)] $
[ {\omega \left( {u,p} \right),y \triangleleft q} ] + \omega\left( {u,p} \right) \triangleleft \left( {y \triangleright q} \right) - \left( {y \triangleleft q} \right) \triangleleft [ {u,p} ] +\omega \left( {[ {u,p} ],y \triangleright q} \right) \\ { = } - \left( {\left( {y \triangleleft u} \right) \triangleleft p}\right) \triangleleft q - \omega \left( {y \triangleright u,p} \right)\triangleleft q - \omega \left( {[ {y \triangleright u,p} ], q}\right) \\
- \omega \left( {\left( {y \triangleleft u} \right) \triangleright p, q}\right) + \left( {\left( {y \triangleleft p} \right) \triangleleft q}\right) \triangleleft u + \omega \left( {y \triangleright p, q} \right)\triangleleft u \\
+ \omega \left( {[ {y \triangleright p, q} ], u} \right) + \omega\left( {\left( {y \triangleleft p} \right) \triangleright q, u} \right) +[ {\omega \left( {p, q} \right) \triangleleft u,y} ] \\
+ [ {\omega \left( {[ {p, q} ], u} \right),y} ] - y\triangleleft [ {[ {p, q} ], u} ] - y \triangleleft\left( {\omega \left( {p, q} \right) \triangleright u} \right) \\
+ [ {\omega\left({q, u} \right), y} ] \triangleleft p - \left({y \triangleleft [ {q, u} ]} \right) \triangleleft p - \omega\left( {y \triangleright [ {q, u} ], p} \right)
$,
\item[(U6)] $
[ {\omega \left( {u,p} \right),\omega (v,q)} ] + \omega \left({u,p} \right) \triangleleft [ {v,q} ] - \omega (v,q) \triangleleft [ {u,p} ] + \omega \left( {[{u, p}], [ {v, q} ]} \right) \\ { = }\left( {\omega \left( {u,v} \right) \triangleleft p} \right)\triangleleft q + \omega \left( {[ {u,v} ],p} \right)\triangleleft q + \omega \left( {[ {[ {u,v} ],p} ], q}\right) \\
+ \omega \left( {\omega \left( {u,v} \right) \triangleright p, q} \right) +\left( {\omega \left( {v,p} \right) \triangleleft q} \right) \triangleleft u+ \omega \left( {[ {v,p} ], q} \right) \triangleleft u \\
+ \omega \left( {[ {[ {v,p} ], q} ], u} \right) +\omega \left( {\omega \left( {v,p} \right) \triangleright q, u} \right) +\left( {\omega \left( {p, q} \right) \triangleleft u} \right) \triangleleft v\\
+ \omega \left( {[ {p, q} ], u} \right) \triangleleft v + \omega\left( {[ {[ {p, q} ], u} ], v} \right) + \omega \left({\omega \left( {p, q} \right) \triangleright u,v} \right) \\
+ \left( {\omega \left( {q, u} \right) \triangleleft v} \right)\triangleleft p + \omega \left( {[ {q, u} ], v} \right)\triangleleft p + \omega \left( {[ {[ {q, u} ], v} ],p}\right) \\
+ \omega \left( {\omega \left( {q, u} \right) \triangleright v,p} \right)
$,
\item[(U7)] $
[ {x,z} ] \triangleright [ {v,q} ] \\ { = } - [ {z \triangleright \left( {x \triangleright v} \right),q} ] + [ {x \triangleleft v,z} ] \triangleright q - \left( {z \triangleleft \left( {x \triangleright v} \right)} \right) \triangleright q
+ x \triangleright [ {z \triangleright v,q} ] \\
+ x \triangleright\left( {\left( {z \triangleleft v} \right) \triangleright q} \right) - [ {x \triangleright \left( {z \triangleright q} \right),v} ]
+ [ {z \triangleleft q,x} ] \triangleright v - \left( {x \triangleleft \left( {z \triangleright q} \right)} \right) \triangleright v\\ + z \triangleright [ {x \triangleright q,v} ]
+ z \triangleright \left( {\left( {x \triangleleft q} \right) \triangleright v} \right) $,
\item[(U8)] $
[ {y,t} ] \triangleright \left( {x \triangleright p}\right)\\
= t \triangleright \left( {[ {x,y} ] \triangleright p} \right) - x \triangleright \left( {t \triangleright \left( {y \triangleright p} \right)} \right) + y \triangleright \left( {x \triangleright \left( {t \triangleright p} \right)} \right) - [ {[ {t,x} ],y} ] \triangleright p $,
\item[(U9)] $
[ {x \triangleright p,[ {v,q} ]} ] + \left( {x\triangleleft p} \right) \triangleright [ {v,q} ] - \omega (v,q)\triangleright \left( {x \triangleright p} \right) \\ { = }[ {[ {x \triangleright v,p} ], q} ] + [{\left( {x \triangleleft v} \right) \triangleright p, q} ] + \left({\left( {x \triangleleft v} \right) \triangleleft p} \right) \triangleright q + \omega \left( {x \triangleright v,p} \right) \triangleright q\\
- x\triangleright [ {[ {v,p} ], q} ] - x \triangleright\left( {\omega \left( {v,p} \right) \triangleright q} \right) - [ {x \triangleright [ {p, q} ], v} ] + [ {\omega\left( {p, q} \right), x} ] \triangleright v\\
- \left( {x \triangleleft [ {p, q} ]} \right) \triangleright v - [ {[ {x \triangleright q,v} ],p} ] - [ {\left({x \triangleleft q} \right) \triangleright v,p} ] - \left( {\left( {x\triangleleft q} \right) \triangleleft v} \right) \triangleright p \\
- \omega \left( {x \triangleright q,v} \right) \triangleright p $,
\item[(U10)] $
[ {x \triangleright p,y \triangleright q} ] + \left( {x\triangleleft p} \right) \triangleright \left( {y \triangleright q} \right)- \left( {y \triangleleft q} \right) \triangleright \left( {x \triangleright p} \right) \\ { = }[ {[ {x,y} ] \triangleright p, q} ] + \left( {[ {x,y} ] \triangleleft p} \right) \triangleright q - x \triangleright [ {y \triangleright p, q} ] - x \triangleright \left( {\left( {y \triangleleft p} \right)\triangleright q} \right) \\ + y \triangleright \left( {x \triangleright [{p, q} ]} \right) + [ {y \triangleright \left( {x \triangleright q}\right),p} ] - [ {x \triangleleft q,y} ] \triangleright p + \left( {y\triangleleft \left( {x \triangleright q} \right)} \right) \triangleright p $,
\item[(U11)] $
[ {[ {u,p} ],[ {v,q} ]} ] + \omega \left({u,p} \right) \triangleright [ {v,q} ] - \omega (v,q)\triangleright [ {u,p} ] \\ { = }[ {[ {[ {u,v} ],p} ], q} ] +[ {\omega \left( {u,v} \right) \triangleright p, q} ] + \left({\omega \left( {u,v} \right) \triangleleft p} \right) \triangleright q +\omega \left( {[ {u,v} ],p} \right) \triangleright q \\
+ [ {[ {[ {v,p} ], q} ], u} ] + [{\omega \left( {v,p} \right) \triangleright q, u} ] + \left( {\omega\left( {v,p} \right) \triangleleft q} \right) \triangleright u + \omega\left( {[ {v,p} ], q} \right) \triangleright u \\
+ [ {[ {[ {p, q} ], u} ], v} ] + [{\omega \left( {p, q} \right) \triangleright u,v} ] + \left( {\omega \left( {p, q} \right) \triangleleft u} \right) \triangleright v + \omega\left( {[ {p, q} ], u} \right) \triangleright v \\
+ [ {[ {[ {q, u} ], v} ],p} ] + [{\omega \left( {q, u} \right) \triangleright v,p} ] + \left( {\omega \left( {q, u} \right) \triangleleft v} \right) \triangleright p + \omega\left( {[ {q, u} ], v} \right) \triangleright p $. \end{enumerate} \end{theorem}
An extending datum $\Omega({M}, V)$ satisfying the compatibility conditions (U1)--(U11) of the above theorem will be called a \emph{Malcev extending structure} of ${M}$ through $V$. Given a Malcev extending structure $\Omega({M},V)$, the algebra ${M}$ can be seen as a Malcev subalgebra of ${M}\natural V$ via $x \mapsto (x,0)$. Conversely, we now prove that any Malcev algebra structure on a vector space $E$ containing ${M}$ as a subalgebra is isomorphic to a unified product.
\begin{theorem}\thlabel{classif} Let ${M}$ be a Malcev algebra, $E$ a vector space containing ${M}$ as a subspace and $(E, [\,, \, ])$ a Malcev algebra structure on $E$ such that ${M}$ is a Malcev subalgebra. Then there exists a Malcev extending structure $\Omega({M}, V) = \bigl(\triangleleft, \, \triangleright, \,\omega, [\,, \, ] \bigl)$ of ${M}$ through a subspace $V$ of $E$ and an isomorphism of Malcev algebras $(E, [\,, \, ]) \cong {M} \,\natural \, V$ that stabilizes ${M}$ and co-stabilizes $V$. \end{theorem}
\begin{proof} Since $M$ is a subspace of $E$, there exists a projection map $p: E \to{M}$ such that $p(x) = x$, for all $x \in {M}$. Then $V := \rm{ker}(p)$ is also a subspace of $E$ and a complement of ${M}$ in $E$. We define the extending datum of ${M}$ through $V$ by the following formulas: \begin{eqnarray*} \triangleright = \triangleright_p : M \times {V} \to {V}, \qquad x \triangleright u &:=& [x, \, u] - p \bigl([x, \, u]\bigl)\\ \triangleleft = \triangleleft_p: M \times {V} \to M, \qquad x \triangleleft u &:=& p \bigl([x, \,u]\bigl)\\ \omega= \omega_p: V\times V \to M, \qquad \omega(u, v) &:=& p \bigl([u, \, v]\bigl)\\ {[\,, \,]} = {[\,, \,]}_p: V \times V \to V, \qquad [u, v] &:=& [u, \, v] - p \bigl([u, \, v]\bigl) \end{eqnarray*} for any $x \in {M}$ and $u$, $v\in V$. First of all, we observe that the above maps are all well defined bilinear maps: $x \triangleright u \in V$ and $[u, \, v] \in V$, for all $u$, $v \in V$ and $x \in {M}$. We shall prove that $\Omega({M}, V) = \bigl(\triangleleft, \, \triangleright, \, \omega, [\,, \, ] \bigl)$ is an extending structure of ${M}$ through $V$ and \begin{eqnarray*} \varphi: {M} \,\natural \, V \to E, \qquad \varphi(x, u) := x+u \end{eqnarray*} is an isomorphism of Malcev algebras that stabilizes ${M}$ and co-stabilizes $V$. Now $\varphi: {M} \times V \to E$, $\varphi(x, \, u) := x+u$ is a linear isomorphism between the Malcev algebra $E$ and the direct product of vector spaces ${M}\oplus V$ with the inverse given by $\varphi^{-1}(v) := \bigl(p(v), \, v - p(v)\bigl)$, for all $v \in E$. Thus, there exists a unique Malcev algebra structure on ${M}\oplus V$ such that $\varphi$ is an isomorphism of Malcev algebras and this unique bracket on ${M}\oplus V$ is given by $$ [(x, u), \, (y, v)] := \varphi^{-1} \bigl([\varphi(x, u), \, \varphi(y, v)]\bigl) $$ for all $x$, $y \in {M}$ and $u$, $v\in V$.
The proof is completely finished if we prove that this bracket coincides with the one defined above, associated to the system $\bigl(\triangleleft_p, \, \triangleright_p, \,\omega_p, {[\,, \, ]}_p \bigl)$. Indeed, for any $x$, $y \in {M}$ and $u$, $v\in V$ we have: \begin{eqnarray*} [(x, u), \, (y, v)] &=& \varphi^{-1} \bigl([\varphi(x, u), \, \varphi(y, v)]\bigl) = \varphi^{-1} \bigl([x, \, y] + [x, \, v] + [u, \, y] + [u, \, v]\bigl)\\ &=& \bigl(p([x, \, y]), [x, \, y] - p([x, \, y])\bigl) + \bigl(p([x, \, v]), [x, \, v] - p([x, \, v])\bigl)\\ && + \bigl(p([u, \, y]), [u, \, y] - p([u, \, y])\bigl) + \bigl(p([u, \, v]), [u, \, v] - p([u, \,v ])\bigl)\\ &=& \Bigl(p([x, \, y]) + p([x, \, v]) + p([u, \, y]) + p([u, \, v]), \ [x, \, y] + [x, \, v]\\ &&+ [u, \, y] + [u, \, v] - p([x, \, y]) - p([x, \, v]) - p([u, \, y]) - p([u, \, v])\Bigl)\\ &=& \Bigl([x, \, y] + x \triangleleft v - y \triangleleft u + \omega (u,v), \ x \triangleright v - y \triangleright u+ [u,v]\Bigl) \end{eqnarray*} as needed. Moreover, the following diagram is commutative \begin{eqnarray*} \xymatrix {& {M} \ar[r]^{i_{{M}}} \ar[d]_{Id} & {{M} \,\natural \, V} \ar[r]^{q} \ar[d]^{\varphi} & V \ar[d]^{Id}\\ & {M} \ar[r]^{i} & {E}\ar[r]^{\pi } & V} \end{eqnarray*} where $\pi : E \to V$ is the projection of $E = {M}\oplus V$ on the vector space $V$ and $q: {{M} \,\natural \, V} \to V$, $q (x, u) := u$ is the canonical projection. The proof is now finished. \end{proof}
\begin{lemma} Let $\Omega({M}, V) = \bigl(\triangleleft, \,\triangleright, \, \omega, [\cdot,\cdot] \bigl)$ and $\Omega'({M}, V) = \bigl(\triangleleft ', \, \triangleright ', \, \omega', [\cdot,\cdot]' \bigl)$ be two Malcev extending structures of ${M}$ through $V$, and ${M} \,\natural \, V$ and $ {M} \,\natural \, ' V$ the associated unified products. Then there exists a bijection between the set of all morphisms of Malcev algebras $\psi: {M} \,\natural \, V \to {M} \,\natural \, ' V$ which stabilize ${M}$ and the set of pairs $(r, s)$, where $r: V \to {M}$, $s: V \to V$ are two linear maps satisfying the following compatibility conditions for any $x \in {M}$, $u$, $v \in V$: \begin{enumerate} \item[(M1)] $s (x \triangleright u) = x \triangleright ' s(u)$, \item[(M2)] $r(x \triangleright u) = [x,\, r(u)] + x \triangleleft ' s(u) - x \triangleleft u$, \item[(M3)] $s([u,\,v])= [s(u), s(v)]' + r(u) \triangleright ' s(v) - r(v) \triangleright ' s(u)$, \item[(M4)] $r([u, v]) = [r(u), \, r(v)] + r(u) \triangleleft ' s(v) - r(v) \triangleleft ' s(u) + \omega' \bigl(s(u), s(v)\bigl) - \omega(u, v)$. \end{enumerate} Under the above bijection the homomorphism of Malcev algebras $\psi = \psi_{(r, s)}: {M} \,\natural \, V \to {M} \,\natural \, ' V$ corresponding to $(r, s)$ is given for any $x \in {M}$ and $u \in V$ by: $$ \psi(x, u) = (x + r(u), s(u)). $$ Moreover, $\psi = \psi_{(r, s)}$ is an isomorphism if and only if $s: V \to V$ is an isomorphism, and $\psi = \psi_{(r, s)}$ co-stabilizes $V$ if and only if $s = id_V$. \end{lemma}
We denote by $\mathfrak{T}({M},V)$ the set of all Malcev extending structures $\Omega(M, V)$. It is easy to see that $\equiv$ and $\approx$ induce equivalence relations on the set $\mathfrak{T}(M,V)$, which we still denote by $\equiv$ and $\approx$. By the above constructions and lemmas we obtain the following result.
\begin{theorem} Let ${M}$ be a Malcev algebra, $E$ a vector space which contains ${M}$ as a subspace and $V$ a complement of ${M}$ in $E$. Then we get:\\ (1) Denote $\mathcal{E}\mathcal{H}^2(V,{M}):=\mathfrak{T}({M},V)/\equiv$. Then, the map \begin{eqnarray} \mathcal{E}\mathcal{H}^2(V,{M})\rightarrow Extd(E,{M}),~~~~\overline{\Omega({M},V)}\mapsto {M}\natural V \end{eqnarray} is bijective, where $\overline{\Omega({M},V)}$ is the equivalence class of $\Omega({M},V)$ under $\equiv$.\\ (2) Denote $\mathcal{U}\mathcal{H}^2(V,{M}):=\mathfrak{T}({M},V)/\approx$. Then, the map \begin{eqnarray} \mathcal{U}\mathcal{H}^2(V,{M})\rightarrow Extd'(E,{M}),~~~~\overline{\overline{\Omega({M},V)}}\mapsto {M}\natural V \end{eqnarray} is bijective, where $\overline{\overline{\Omega({M},V)}}$ is the equivalence class of $\Omega({M},V)$ under $\approx$.\\ \end{theorem}
\section{Special cases of unified products}\selabel{cazurispeciale}
In this section, we consider the following problem: what are the conditions under which $M$ and $V$ are subalgebras of $E$? In this way we obtain two special cases of the unified product: crossed products and matched pairs of Malcev algebras.
\subsection{Crossed products of Malcev algebras} \begin{definition} Let $M$ and $V$ be two Malcev algebras with two bilinear maps $\triangleleft:{M}\times {V} \to {M}$ and $\omega: V\times V \to M$, where $\omega$ is skew-symmetric. We define on the direct sum vector space $M\oplus V$ the bracket $[\cdot,\cdot]: (M\oplus V) \times (M\oplus V) \to M\oplus V$ by: \begin{eqnarray} [(x, u), (y, v)] = \Big([x,y] + x\triangleleft v - y \triangleleft u + \omega (u,v), [u,v] \Big), \end{eqnarray} for all $x,y,z,t \in M, u,v,p, q \in V$. Then $M\oplus V$ is a Malcev algebra under the above bracket if and only if the following compatibility conditions hold: \begin{enumerate} \item[(CP1)]$
[ {[ {x,z} ],y \triangleleft q} ] { = }[ {[ {x,y} ],z} ] \triangleleft q + [{[ {y,z} ] \triangleleft q,x} ] + [ {[ {z \triangleleft q,x} ],y} ] - [ {[ {x \triangleleft q,y} ],z} ]$,
\item[(CP2)] $
[ {[ {x,z} ],\omega (v,q)} ] + [ {x,z} ]\triangleleft [ {v,q} ] \\ { = }[ {x \triangleleft v,z} ] \triangleleft q - [ {\left( {z \triangleleft v} \right) \triangleleft q,x} ] + [ {z \triangleleft q,x} ] \triangleleft v \\ - [ {\left( {x \triangleleft q} \right) \triangleleft v,z}]
$,
\item[(CP3)] $ [ {x \triangleleft p,y \triangleleft q} ] \\ { = }\left( {[ {x,y} ] \triangleleft p} \right)\triangleleft q + [ {\left( {y \triangleleft p} \right) \triangleleft q,x}] +[ {[ {\omega \left( {p, q} \right), x} ],y} ] - [{x \triangleleft [ {p, q} ],y} ] \\
-[ {x \triangleleft q,y} ] \triangleleft p$,
\item[(CP4)] $
[ {x \triangleleft p,\omega (v,q)} ] + \left( {x \triangleleft p} \right) \triangleleft [ {v,q} ]\\ { = }\left( {\left( {x \triangleleft v} \right) \triangleleft p} \right) \triangleleft q - x \triangleleft [ {[ {v,p} ], q} ] \\
+ [ {\omega \left( {v,p} \right) \triangleleft q,x} ] + [{\omega \left( {[ {v,p} ], q} \right), x} ] + [ {\omega \left( {p, q} \right), x} ] \triangleleft v \\
- \left({x \triangleleft [ {p, q} ]} \right) \triangleleft v - \left( {\left( {x\triangleleft q} \right) \triangleleft v} \right) \triangleleft p $,
\item[(CP5)] $
[ {\omega \left( {u,p} \right),y \triangleleft q} ] - \left( {y \triangleleft q} \right) \triangleleft [ {u,p} ]\\ { = } - \left( {\left( {y \triangleleft u} \right) \triangleleft p}\right) \triangleleft q + \left( {\left( {y \triangleleft p} \right) \triangleleft q}\right) \triangleleft u \\
+[ {\omega \left( {p, q} \right) \triangleleft u,y} ] + [ {\omega \left( {[ {p, q} ], u} \right),y} ] - y\triangleleft [ {[ {p, q} ], u} ] \\
+ [ {\omega\left({q, u} \right), y} ] \triangleleft p - \left({y \triangleleft [ {q, u} ]} \right) \triangleleft p$,
\item[(CP6)] $
[ {\omega \left( {u,p} \right),\omega (v,q)} ] + \omega \left({u,p} \right) \triangleleft [ {v,q} ] - \omega (v,q) \triangleleft [ {u,p} ] + \omega \left( {[{u, p}], [ {v, q} ]} \right) \\ { = }\left( {\omega \left( {u,v} \right) \triangleleft p} \right)\triangleleft q + \omega \left( {[ {u,v} ],p} \right)\triangleleft q + \omega \left( {[ {[ {u,v} ],p} ], q}\right) \\
+\left( {\omega \left( {v,p} \right) \triangleleft q} \right) \triangleleft u+ \omega \left( {[ {v,p} ], q} \right) \triangleleft u \\
+ \omega \left( {[ {[ {v,p} ], q} ], u} \right) +\left( {\omega \left( {p, q} \right) \triangleleft u} \right) \triangleleft v\\
+ \omega \left( {[ {p, q} ], u} \right) \triangleleft v + \omega\left( {[ {[ {p, q} ], u} ], v} \right) \\
+ \left( {\omega \left( {q, u} \right) \triangleleft v} \right)\triangleleft p + \omega \left( {[ {q, u} ], v} \right)\triangleleft p + \omega \left( {[ {[ {q, u} ], v} ],p}\right)$. \end{enumerate} This will be called the \textit{crossed product} of $M$ and $V$ and we denote it by $M\#_\omega^\triangleleft V$. \end{definition}
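\noindent\textbf{Remark.} Comparing the two defining brackets, the crossed product $M\#_\omega^\triangleleft V$ is precisely the unified product $M \natural V$ associated to an extending datum with trivial action $\triangleright = 0$; accordingly, the conditions $(CP1)$--$(CP6)$ correspond to the compatibility conditions $(U1)$--$(U6)$ specialized to this case. Note also that $[(x,0), (y,v)] = ([x,y] + x \triangleleft v, 0)$, so $M \cong M \oplus \{0\}$ is an ideal of $M\#_\omega^\triangleleft V$ with quotient isomorphic to $V$; thus crossed products describe the extensions of the Malcev algebra $V$ by $M$.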
\subsection{Skew-crossed products of Malcev algebras} \begin{definition} Let $M$ and $V$ be two Malcev algebras with two bilinear maps $\triangleright:{M}\times {V} \to {V}$ and $\omega: V\times V \to M$, where $\omega$ is skew-symmetric. We define on the direct sum vector space $M\oplus V$ the bracket $[\cdot,\cdot]: (M\oplus V) \times (M\oplus V) \to M\oplus V$ by: \begin{eqnarray} [(x, u), (y, v)] = \Big([x,y] + \omega (u,v), x \triangleright v - y \triangleright u+ [u,v] \Big), \end{eqnarray} for all $x,y,z,t \in M, u,v,p, q \in V$. Then $M\oplus V$ is a Malcev algebra under the above bracket if and only if the following compatibility conditions hold: \begin{enumerate} \item[(SP1)] $
[ {[ {x,z} ],\omega (v,q)} ] { = } - \omega \left( {z \triangleright \left( {x \triangleright v} \right), q} \right) -[ {\omega \left( {z \triangleright v,q} \right), x} ]- \omega \left( {x \triangleright \left( {z \triangleright q} \right),v}\right) - [ {\omega \left( {x \triangleright q,v} \right),z} ]
$,
\item[(SP2)] $ \omega \left( {x \triangleright p,y \triangleright q} \right) { = } \omega \left( {[ {x,y} ] \triangleright p, q}\right) + [ {\omega \left( {y \triangleright p, q} \right), x} ] +[ {[ {\omega \left( {p, q} \right), x} ],y} ] + \omega \left( {y \triangleright \left( {x \triangleright q} \right), p} \right) $,
\item[(SP3)] $
\omega \left( {x \triangleright p,[ {v,q} ]} \right) \\ { = } \omega \left( {[ {x \triangleright v,p} ], q}\right) + [{\omega \left( {[ {v,p} ], q} \right), x} ] - \omega\left( {x \triangleright [ {p, q} ], v} \right) - \omega\left( {[ {x \triangleright q,v} ],p} \right)
$, \item[(SP4)] $
[ {x,z} ] \triangleright [ {v,q} ] = - [ {z \triangleright \left( {x \triangleright v} \right),q} ] + x \triangleright [ {z \triangleright v,q} ] - [{x \triangleright \left( {z \triangleright q} \right),v} ] + z \triangleright [ {x \triangleright q,v} ]$,
\item[(SP5)] $ [ {y,t} ] \triangleright \left( {x \triangleright p}\right) = t \triangleright \left( {[ {x,y} ]\triangleright p} \right) - x \triangleright \left( {t \triangleright \left( {y \triangleright p} \right)} \right) + y \triangleright \left( {x \triangleright \left( {t \triangleright p} \right)} \right) - [ {[ {t,x} ],y} ] \triangleright p$,
\item[(SP6)] $
[ {x \triangleright p,[ {v,q} ]} ] - \omega (v,q)\triangleright \left( {x \triangleright p} \right) \\ { = }[ {[ {x \triangleright v,p} ], q} ] - x\triangleright [ {[ {v,p} ], q} ] + \omega \left( {x \triangleright v,p} \right) \triangleright q - x \triangleright\left( {\omega \left( {v,p} \right) \triangleright q} \right) \\ - [ {x \triangleright [ {p, q} ], v} ] - [ {[ {x \triangleright q,v} ],p} ]+ [ {\omega\left( {p, q} \right), x} ] \triangleright v - \omega \left( {x \triangleright q,v} \right) \triangleright p $,
\item[(SP7)] $
[ {x \triangleright p,y \triangleright q} ] = [ {[ {x,y} ] \triangleright p, q} ] - x \triangleright [ {y \triangleright p, q} ] + y \triangleright \left( {x \triangleright [{p, q} ]} \right) + [ {y \triangleright \left( {x \triangleright q}\right),p} ]$,
\item[(SP8)] $
[ {\omega \left( {u,p} \right),\omega (v,q)} ] + \omega \left( {[{u, p}], [ {v, q} ]} \right) \\ { = } \omega \left( {[ {[ {u,v} ],p} ], q}\right) + \omega \left( {\omega \left( {u,v} \right) \triangleright p, q} \right) \\
+ \omega \left( {[ {[ {v,p} ], q} ], u} \right) +\omega \left( {\omega \left( {v,p} \right) \triangleright q, u} \right) \\ + \omega\left( {[ {[ {p, q} ], u} ], v} \right) + \omega \left({\omega \left( {p, q} \right) \triangleright u,v} \right) \\
+ \omega \left( {[ {[ {q, u} ], v} ],p}\right) + \omega \left( {\omega \left( {q, u} \right) \triangleright v,p} \right)
$,
\item[(SP9)] $ \omega \left({u,p} \right) \triangleright [ {v,q} ] - \omega (v,q)\triangleright [ {u,p} ] \\ { = }[ {\omega \left( {u,v} \right) \triangleright p, q} ] +\omega \left( {[ {u,v} ],p} \right) \triangleright q \\
+ [{\omega \left( {v,p} \right) \triangleright q, u} ] + \omega\left( {[ {v,p} ], q} \right) \triangleright u \\
+ [{\omega \left( {p, q} \right) \triangleright u,v} ] + \omega\left( {[ {p, q} ], u} \right) \triangleright v \\ + [{\omega \left( {q, u} \right) \triangleright v,p} ] + \omega\left( {[ {q, u} ], v} \right) \triangleright p $. \end{enumerate} This will be called the \textit{skew crossed product} of $M$ and $V$ and we denote it by $M\#_\omega^\triangleright V$. \end{definition}
\subsection{Matched pair for Malcev algebras}
\begin{theorem}
Let $\left( {M, [\,,\,]} \right)$ and $\left({V, [\,,\,]}\right)$ be two Malcev algebras equipped with bilinear maps $\triangleright:M \times V\to V$ and $\triangleleft: M\times V \to M$. Define a
bracket on $M \oplus V$ by: \begin{eqnarray} [(x, u), (y, v)] = \Big([x,y] + x\triangleleft v - y \triangleleft u, x \triangleright v - y \triangleright u+ [u,v] \Big). \end{eqnarray} Then ${M \oplus V}$ is a Malcev algebra under the above bracket if and only if the following compatibility conditions hold: \begin{enumerate} \item[(MP1)] \begin{eqnarray*} \notag&& [ [x\triangleleft u, y ], z ]- [[y, z ], x ]\triangleleft u- [ y\triangleleft (x\triangleright u ), z ]\\ \notag&&- [ [z\triangleleft u, x ], y ]+ z\triangleleft (y\triangleright (x\triangleright u ) )+ [[x, z ], y\triangleleft u ]\\ \notag&&- [ [x, y ], z ]\triangleleft u+ [x, z ] \triangleleft (y \triangleright u )+ [x\triangleleft (z\triangleright u ), y]\\ &&+ x\triangleleft ( [y, z ] \triangleright u ) - y\triangleleft (x\triangleright (z\triangleright u ))=0, \end{eqnarray*} \item[(MP2)] \begin{eqnarray*} \notag&& [ [x\triangleright u, v ], w ]- [x\triangleright [v, w ], u ]- [(x\triangleleft u )\triangleright v, w]\\ \notag&&- [ [x\triangleright w, u ], v ]+ (x\triangleleft u )\triangleleft v )\triangleright w+ [ [u, w ], x\triangleright v]\\ \notag&&-x\triangleright [ [u, v ], w ]+ [ (x\triangleleft w )\triangleright u, v ]+ (x\triangleleft v )\triangleright [u, w]\\ &&+(x \triangleleft [v, w ] )\triangleright u-((x\triangleleft w )\triangleleft u )\triangleright v=0, \end{eqnarray*} \item[(MP3)] \begin{eqnarray*} \notag&&y\triangleleft (x\triangleright [u, v ] ) -(y\triangleleft v )\triangleleft (x\triangleright u ) +[v\triangleleft x , y ]\triangleleft u\\ \notag&&- [(y\triangleleft u )\triangleleft v , x ]- [x\triangleleft [u, v ], y ]+(x\triangleleft u )\triangleleft (y\triangleright v)\\ \notag&&-x\triangleleft ( [y\triangleright u, v ] ) -([x, y ]\triangleleft u )\triangleleft v+ [x\triangleleft u, y\triangleleft v ])\\ &&-(y\triangleleft (x\triangleright v ))\triangleleft u +x\triangleleft ((y\triangleleft u )\triangleright v )=0, \end{eqnarray*} \item[(MP4)] \begin{eqnarray*} \notag&&([x, y 
] \triangleleft u) \triangleright v-(x\triangleleft u )\triangleright (y\triangleright v ) +x\triangleright [y\triangleright u, v]\\ \notag&&- [y\triangleright (x\triangleright v ), u ]- [ [x, y ] \triangleright u, v ]+ (y\triangleleft v )\triangleright (x\triangleright u)\\ \notag&&- ( [x\triangleleft v, y ] )\triangleright u-y\triangleright (x\triangleright [u, v ] )+ [x\triangleright u, y\triangleright v] \\ &&-x\triangleright ((y\triangleleft u )\triangleright v )+ (y\triangleleft (x\triangleright v )) \triangleright u=0, \end{eqnarray*} \item[(MP5)] \begin{eqnarray*} \notag&&[y\triangleleft v, x ]\triangleleft u -y\triangleleft ( [x\triangleright v, u ] ) - [(x\triangleleft v )\triangleleft u,\ y]\\ \notag&&+[x\triangleleft u, y ]\triangleleft v - [(y\triangleleft u )\triangleleft v, x ]- x\triangleleft [y\triangleright u, v]\\ \notag&&- (y\triangleleft (x\triangleright u )\triangleleft v- (x\triangleleft (y\triangleright v ))\triangleleft u+[x, y ]\triangleleft [u, v]\\ &&+x\triangleleft((y\triangleleft u )\triangleright v) + y\triangleleft((x\triangleleft v)\triangleright u)=0, \end{eqnarray*} \item[(MP6)] \begin{eqnarray*} \notag&&x\triangleright [y\triangleright v, u ]- [y\triangleleft u, x ] \triangleright v- [x\triangleright (y\triangleright u ), v ]\\ \notag&&+y\triangleright [x\triangleright u, v ]- [y\triangleright (x\triangleright v ), u ]- [x\triangleleft v, y ] \triangleright u\\ \notag&&-y\triangleright ((x\triangleleft u )\triangleright v )-x\triangleright ((y\triangleleft v )\triangleright u )+ [x, y ]\triangleright [u, v]\\ &&+ ( y\triangleleft (x\triangleright v ) )\triangleright u+ (x\triangleleft (y\triangleright u ) )\triangleright v=0. \end{eqnarray*} \end{enumerate} This is called a matched pair of two Malcev algebras $M$ and $V$ if the above conditions are satisfied.
We denote it by $M \bowtie V$. \end{theorem}
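As a sanity check of the construction (an illustration, not part of the proof), the bracket in the theorem can be tested numerically. The Python sketch below takes two copies of the 2-dimensional non-abelian Lie algebra (which is in particular a Malcev algebra), equips them with the trivial actions $\triangleright = \triangleleft = 0$ (so conditions (MP1)--(MP6) hold trivially), and verifies the Malcev identity for the resulting bracket on $M \oplus V$.

```python
import itertools

# Two copies of the 2-dimensional non-abelian Lie algebra [f1, f2] = f2
# (every Lie algebra is in particular a Malcev algebra).
def br2(u, v):
    return (0, u[0] * v[1] - u[1] * v[0])

# Matched-pair bracket on M + V with trivial actions, so the bracket of the
# theorem reduces to [(x,u),(y,v)] = ([x,y], [u,v]).
# Elements are 4-tuples (x1, x2, u1, u2).
def br(a, b):
    return br2(a[:2], b[:2]) + br2(a[2:], b[2:])

def add(*vs):
    return tuple(map(sum, zip(*vs)))

# Malcev identity: [[x,y],[x,z]] = [[[x,y],z],x] + [[[y,z],x],x] + [[[z,x],x],y]
def malcev_defect(x, y, z):
    lhs = br(br(x, y), br(x, z))
    rhs = add(br(br(br(x, y), z), x),
              br(br(br(y, z), x), x),
              br(br(br(z, x), x), y))
    return tuple(l - r for l, r in zip(lhs, rhs))

basis = [tuple(int(i == j) for i in range(4)) for j in range(4)]
vecs = basis + [(1, 2, -1, 3), (0, 1, 1, 0), (2, -1, 0, 1)]
ok = all(malcev_defect(x, y, z) == (0, 0, 0, 0)
         for x, y, z in itertools.product(vecs, repeat=3))
print(ok)
```

Since the direct sum of two Lie algebras is again a Lie algebra, the Malcev identity holds automatically here; the value of the sketch is that the same `malcev_defect` routine can be reused with nontrivial actions to test candidate matched pairs.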
\section{Flag extending structures} In this section, we study the case when $V$ is a one-dimensional vector space. The corresponding extending structures are called flag extending structures.
\begin{definition} Let $M$ be a Malcev algebra. A pair of linear maps $\lambda :M \to k$ and $D:M \to M$ is called a \textit{twisted derivation} of $M$ if the following conditions hold: \begin{enumerate} \item[(T1)] $
[ {[ {x,z} ],D( y )} ] + \lambda ( y)D( {[ {x,z} ]} ) + \lambda ( {[ {y,z}]} )D( x )
- D( {[ {[ {y,z} ], x} ]} )\\
+ [{\lambda ( z )D( x ),y} ]- [ {[{D( z ), x} ],y} ] - \lambda ( x )\lambda ( z )D( y ) +[ {[ {D( x ),y} ],z}]\\
- [ {\lambda( x )D( y ),z} ] - D( {[ {[ {x,y} ],z} ]} )+ \lambda( x )\lambda ( y )D( z ) = 0 $,
\item[(T2)] $
\lambda ( {D( y )} )D( x ) - \lambda( x )D( {D( y )} ) + D( {[{D( x ),y} ]} )
- [ {D( {D( y )} ), x} ] + \lambda ( y)D( {D( x )} ) \\
- D( {D( {[ {x,y}]} )} ) + [ {D( x ),D( y )} ] - D( {\lambda( x )D( y )} ) = 0 $,
\item[(T3)] $ \lambda ( {D( {[ {x,y} ]} )} ) - \lambda ( {D( x )} )\lambda ( y ) - \lambda( {[ {D( x ),y} ]} ) + \lambda ( x )\lambda ( {D( y )} ) = 0 $,
\item[(T4)] $
D( {[ {D( x ),y} ]} ) - D( {\lambda( x )D( y )} ) - [ {D( {D( y)} ), x} ]
+ \lambda ( {D( y )} )D( x ) + D({[ {D( y ), x} ]} ) \\
- D( {\lambda ( y)D( x )} ) - [ {D( {D( x )} ),y} ] + \lambda ({D( x )} )D( y ) = 0 $,
\item[(T5)] $\lambda ( {[ {D( x ),y} ]} ) + \lambda( {[ {D( y ), x} ]} ) = 0$,
\item[(T6)] $\lambda ( {[x,z]} )\lambda ( y ) = \lambda ({[[x,y],z]} ) - \lambda ( x )\lambda ( {[y,z]})$. \end{enumerate} The set of twisted derivations is denoted by ${\mathcal F} \, (M)$. \end{definition}
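To make the definition concrete, here is a small Python sketch (an illustration, not part of the paper's argument) that checks a subset of the conditions, namely (T1), (T5) and (T6), for a pair $(\lambda, D)$ on the 2-dimensional non-abelian Lie algebra $[e_1,e_2]=e_2$, viewed as a Malcev algebra. The trivial pair $(\lambda, D) = (0, 0)$ satisfies the conditions, while $(\lambda, D) = (0, \mathrm{id})$ already violates (T1); the remaining conditions (T2)--(T4) can be encoded the same way.

```python
import itertools

# 2-dimensional non-abelian Lie algebra: [e1, e2] = e2.
def br(u, v):
    return (0, u[0] * v[1] - u[1] * v[0])

def add(*vs):   return tuple(map(sum, zip(*vs)))
def scal(c, v): return tuple(c * p for p in v)

def make_lam(row):
    """Linear functional lambda: M -> k given by a covector."""
    return lambda v: sum(r * p for r, p in zip(row, v))

def make_D(mat):
    """Linear map D: M -> M given by a 2x2 matrix."""
    return lambda v: tuple(sum(mat[i][j] * v[j] for j in range(2)) for i in range(2))

def T1(lam, D, x, y, z):  # left-hand side of condition (T1); must vanish
    return add(br(br(x, z), D(y)), scal(lam(y), D(br(x, z))),
               scal(lam(br(y, z)), D(x)), scal(-1, D(br(br(y, z), x))),
               br(scal(lam(z), D(x)), y), scal(-1, br(br(D(z), x), y)),
               scal(-lam(x) * lam(z), D(y)), br(br(D(x), y), z),
               scal(-1, br(scal(lam(x), D(y)), z)), scal(-1, D(br(br(x, y), z))),
               scal(lam(x) * lam(y), D(z)))

def T5(lam, D, x, y):
    return lam(br(D(x), y)) + lam(br(D(y), x))

def T6(lam, x, y, z):
    return lam(br(x, z)) * lam(y) - lam(br(br(x, y), z)) + lam(x) * lam(br(y, z))

def is_twisted(lam, D):
    # Checked on basis triples only; since some terms are quadratic in lambda,
    # this is a necessary (not sufficient) numerical check.
    basis = [(1, 0), (0, 1)]
    trips = list(itertools.product(basis, repeat=3))
    return (all(T1(lam, D, *t) == (0, 0) for t in trips)
            and all(T5(lam, D, x, y) == 0 for x, y, _ in trips)
            and all(T6(lam, *t) == 0 for t in trips))

zero = (make_lam((0, 0)), make_D(((0, 0), (0, 0))))
ident = (make_lam((0, 0)), make_D(((1, 0), (0, 1))))
print(is_twisted(*zero), is_twisted(*ident))  # True False
```

The failure of the identity map illustrates that a twisted derivation is a genuinely restrictive notion, not satisfied by arbitrary linear maps.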
\begin{proposition} Let $M$ be a Malcev algebra and $V$ a vector space of dimension $1$ with a basis $\{u\}$. Then there exists a bijection between the set of extending structures of $M$ through $V$ and ${\mathcal F} \, (M)$.
Under the above bijective correspondence the extending datum $\Omega(M, V)$ corresponding to $(\lambda, \, D) \in {\mathcal F} \, (M)$ is given by: \begin{eqnarray} &&x \triangleleft u = D(x), \quad x\triangleright u = \lambda (x) u, \\ &&\omega(u, u) = 0, \quad [u , u] = 0. \end{eqnarray} In this case, the unified product $M\natural V$ associated to the extending structure is given by \begin{equation} [(x, u), (y, u)] =\Big ([x,y] + D(x) - D(y) , (\lambda (x) - \lambda (y)) u\Big). \end{equation} \end{proposition}
\begin{theorem} Let $M$ be a Malcev algebra of codimension $1$ in the vector space $V$. Then: $\operatorname{Extd}(V, M) \cong{\mathcal{A}{{\mathcal{H}}}}^{2}(k, M) \cong \mathcal{F}(M) / \equiv$, where $\equiv$ is the equivalence relation on the set $\mathcal{F}(M)$ defined as follows: $\left(\lambda, D\right) \equiv$ $\left(\lambda^{\prime}, D^{\prime}\right)$ if and only if $\lambda(x)=\lambda^{\prime}(x)$ and there exists a pair $(r, s) $, where $r: V \to M$, $s: V \to V$ are two linear maps, such that: \begin{eqnarray} && D^{\prime}(x) = [r(u) , x] + D(x) + \lambda(x) r(u). \end{eqnarray} \end{theorem}
\begin{example} Let $M$ be the $4$-dimensional Malcev algebra with basis $\left\{ {e_1 ,e_2 ,e_3 ,e_4 } \right\}$; up to isomorphism it is the only $4$-dimensional Malcev algebra that is not a Lie algebra, and its nonzero brackets are given as follows:
\[ \left[ {e_1 ,e_2 } \right] = e_2 ,\, \left[ {e_1 ,e_3 } \right] = e_3 ,\, \left[ {e_1 ,e_4 } \right] = - e_4 ,\, \left[ {e_2 ,e_3 } \right] = e_4. \] \end{example} Now we compute the set of twisted derivations as follows.
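As a preliminary check (an illustration only, not part of the derivation), one can confirm numerically that these structure constants satisfy the Malcev identity in the form $[[x,y],[x,z]] = [[[x,y],z],x] + [[[y,z],x],x] + [[[z,x],x],y]$ while violating the Jacobi identity:

```python
import itertools

# Structure constants: [e1,e2]=e2, [e1,e3]=e3, [e1,e4]=-e4, [e2,e3]=e4.
E2, E3, E4 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
TABLE = {(0, 1): E2, (0, 2): E3, (0, 3): (0, 0, 0, -1), (1, 2): E4}

def scal(c, v): return tuple(c * p for p in v)
def add(*vs):   return tuple(map(sum, zip(*vs)))

def br(u, v):
    """Bilinear, antisymmetric extension of the bracket table."""
    out = (0, 0, 0, 0)
    for (i, j), w in TABLE.items():
        out = add(out, scal(u[i] * v[j] - u[j] * v[i], w))
    return out

def malcev_defect(x, y, z):
    lhs = br(br(x, y), br(x, z))
    rhs = add(br(br(br(x, y), z), x), br(br(br(y, z), x), x), br(br(br(z, x), x), y))
    return tuple(l - r for l, r in zip(lhs, rhs))

def jacobian(x, y, z):
    return add(br(br(x, y), z), br(br(y, z), x), br(br(z, x), y))

basis = [tuple(int(i == j) for i in range(4)) for j in range(4)]
vecs = basis + [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 1), (2, -1, 3, 1)]
is_malcev = all(malcev_defect(x, y, z) == (0, 0, 0, 0)
                for x, y, z in itertools.product(vecs, repeat=3))
print(is_malcev, jacobian(basis[0], basis[1], basis[2]))
```

Here $J(e_1,e_2,e_3) = 3e_4 \neq 0$, so the Jacobi identity fails and $M$ is not a Lie algebra, while the Malcev defect vanishes on all tested triples.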
Write \[ D\left( {{\begin{array}{*{20}c}
{e_1 } \\
{e_2 } \\
{e_3 } \\
{e_4 } \\ \end{array} }} \right) = \left( {{\begin{array}{*{20}c}
{a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\
{a_{21} } & {a_{22} } & {a_{23} } & {a_{24} } \\
{a_{31} } & {a_{32} } & {a_{33} } & {a_{34} } \\
{a_{41} } & {a_{42} } & {a_{43} } & {a_{44} } \\ \end{array} }} \right)\left( {{\begin{array}{*{20}c}
{e_1 } \\
{e_2 } \\
{e_3 } \\
{e_4 } \\ \end{array} }} \right). \] Then we have \[ D\left( {e_1 } \right) = a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4 ,\, D\left( {e_2 } \right) = a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4,\, \] \[ D\left( {e_3 } \right) = a_{31} e_1 + a_{32} e_2 +a_{33} e_3 +a_{34} e_4 ,\, D\left( {e_4 } \right) = a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4. \]
From $(T6)$ we get $$\lambda ( {[e_1, e_3]} )\lambda( e_2 ) = \lambda ({[[e_1, e_2],e_3]} ) - \lambda ( e_1 )\lambda ( {[e_2, e_3]}),$$ thus $$\lambda ( e_3 )\lambda ( e_2 ) = \lambda ( e_4 ) - \lambda ( e_1 )\lambda ( e_4 ).$$ We discuss the different choices of $\lambda$ in the following cases.
Case $I$:
Let $$\lambda ( e_1 ) = \lambda ( e_3 ) = \lambda ( e_4 ) = 0, \,\lambda ( e_2 ) = \lambda_2,$$ and substituting this into the twisted derivation conditions we can obtain the following result.
From $(T5)$, $$\lambda ( {[ {D( e_1 ),e_2} ]} ) + \lambda( {[ {D( e_2 ),e_1} ]} ) = 0,$$ we get $$a_{11}\lambda_2 = 0,$$ thus, since $\lambda_2 \neq 0$, $$a_{11} = 0.$$
From $(T3)$, $$\lambda ( {D( {[ {e_1,e_2} ]} )} ) - \lambda ( {D( e_1 )} )\lambda ( e_2 )
- \lambda( {[ {D( e_1 ),e_2} ]} ) + \lambda ( e_1 )\lambda ( {D( e_2 )} ) = 0,$$
we get $$a_{22}\lambda_2 - a_{12}\lambda_2^{2} - a_{11}\lambda_2= 0,$$ thus $$a_{22} = \lambda_2 a_{12} + a_{11} = \lambda_2 a_{12}.$$
From $(T1)$, \begin{eqnarray*} &&[ {[ {e_1,e_3} ],D( e_2 )} ] + \lambda ( e_2)D( {[ {e_1,e_3} ]} ) + \lambda ( {[ {e_2,e_3}]} )D( e_1 )
- D( {[ {[ {e_2,e_3} ],e_1} ]} )\\ && + [{\lambda ( e_3 )D( e_1 ),e_2} ]- [ {[{D( e_3 ),e_1} ],e_2} ] - \lambda ( e_1 )\lambda ( e_3 )D( e_2 ) +[ {[ {D( e_1 ),e_2} ],e_3} ]\\ && - [ {\lambda( e_1 )D( e_2 ),e_3} ] - D( {[ {[ {e_1,e_2} ],e_3} ]} )+ \lambda( e_1 )\lambda ( e_2 )D( e_3 ) = 0, \end{eqnarray*} which in the present case reduces to $$ \lambda ( e_2 )D( e_3 ) + [ [ D( e_1 ),e_2 ],e_3] - D( e_4 ) = 0.$$ Hence $$\lambda_2(a_{31} e_1 + a_{32} e_2 +a_{33} e_3 +a_{34} e_4) + a_{11}e_4= a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4,$$ thus we obtain $$a_{41} = \lambda_2 a_{31},\, a_{42} = \lambda_2 a_{32},\, a_{43} = \lambda_2 a_{33},\, a_{44} = a_{11} + \lambda_2 a_{34} = \lambda_2 a_{34}.$$
From $(T4)$ \begin{eqnarray*} &&D( {[ {D( e_1 ),e_2} ]} ) + \lambda ( {D( e_2 )} )D( e_1 ) - D( {\lambda ( e_2)D( e_1 )} ) - [ {D( {D( e_1 )} ),e_2} ] + \lambda ({D( e_1 )} )D( e_2 ) = 0, \end{eqnarray*} we have \begin{eqnarray*} &&a_{11}(a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) + \lambda_2 a_{22} (a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4)\\ &&- (a_{11}a_{11}+a_{12}a_{21}+a_{13}a_{31}+a_{14}a_{41}) e_2 - \lambda_2(a_{11}(a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4)\\ &&+ a_{12} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) +a_{13} (a_{31} e_1 + a_{32} e_2 +a_{33} e_3 +a_{34} e_4)\\ &&+a_{14}(a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4)) + \lambda_2 a_{12} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&= 0, \end{eqnarray*} thus we obtain \begin{eqnarray*} \lambda_2a_{13} a_{31} + \lambda_2a_{14}a_{41} &= 0,\\ - a_{12}a_{21}-a_{13}a_{31}-a_{14}a_{41} - \lambda_2a_{13} a_{32} - \lambda_2a_{14} a_{42} + \lambda_2a_{22}a_{12} &= 0,\\ \lambda_2a_{22}a_{13} - \lambda_2a_{13}a_{33} - \lambda_2a_{14}a_{43} &= 0,\\ \lambda_2a_{22}a_{14} - \lambda_2a_{13}a_{34} - \lambda_2a_{14}a_{44} &= 0. \end{eqnarray*} Substituting $a_{22} = \lambda_2 a_{12},$ $a_{41} = \lambda_2 a_{31},\, a_{42} = \lambda_2 a_{32},\, a_{43} = \lambda_2 a_{33},\, a_{44} = \lambda_2 a_{34}$ into the above formula, we get \begin{eqnarray*} \lambda_2a_{13} a_{31} + \lambda_2^{2}a_{14} a_{31} = \lambda_2 a_{31}(a_{13} + \lambda_2a_{14}) &=0,\\ - a_{12}a_{21}-a_{13}a_{31}-\lambda_2 a_{14} a_{31} - \lambda_2a_{13} a_{32} - \lambda_2^{2}a_{14} a_{32} + \lambda_2^{2}a_{12}^{2} &= 0,\\ \lambda_2^{2}a_{12}a_{13} - \lambda_2a_{13}a_{33} - \lambda_2^{2}a_{14} a_{33} = \lambda_2^{2}a_{12}a_{13} - \lambda_2 a_{33}(a_{13} + \lambda_2a_{14}) &= 0,\\ a_{12}\lambda_2^{2}a_{14} - \lambda_2a_{13}a_{34} - \lambda_2^{2}a_{14} a_{34} &= 0. 
\end{eqnarray*} Assume that $$a_{12} \neq 0, \quad a_{31} \neq 0, \quad a_{32} \neq 0, \quad a_{33} \neq 0, \quad a_{34} \neq 0.$$ Then the first equation gives $$a_{13} + \lambda_2a_{14} = 0,$$ and substituting $a_{13} = -\lambda_2 a_{14}$ into the fourth equation yields $\lambda_2^{2}a_{12}a_{14} = 0$. Therefore, using the second equation, $$a_{13} = 0, \quad a_{14} = 0, \quad a_{21} = \lambda_2^{2}a_{12}.$$
From $(T2)$, $$ \lambda ( {D( e_2 )} )D( e_1 ) + D( {[{D( e_1 ),e_2} ]} ) + \lambda ( e_2)D( {D( e_1 )} ) - D( {D( {[ {e_1,e_2}]} )} ) + [ {D( e_1 ),D( e_2 )} ] = 0,$$ we have \begin{eqnarray*} &&\lambda_2 a_{22} (a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) + a_{11}(a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&+ \lambda_2(a_{11}(a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) + a_{12} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4)\\ &&+a_{13} (a_{31} e_1 + a_{32} e_2 +a_{33} e_3 +a_{34} e_4) +a_{14}(a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4))\\ &&- a_{21}(a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) - a_{22} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&-a_{23} (a_{31} e_1 + a_{32} e_2 +a_{33} e_3 +a_{34} e_4) -a_{24}(a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4)\\ &&+ a_{11}a_{22} e_2+a_{11}a_{23} e_3-a_{11}a_{24} e_4+ a_{12}a_{23} e_4 = 0, \end{eqnarray*} thus we obtain $$ \begin{aligned} a_{23} a_{31} + \lambda_2 a_{24} a_{31} &= 0,\\ a_{23}a_{32} + \lambda_2 a_{24} a_{32} &= 0,\\ -a_{23}a_{33} -\lambda_2 a_{24} a_{33} &= 0,\\ \lambda_2a_{12}a_{24} - \lambda_2a_{12}a_{24} -a_{23}a_{34} - \lambda_2 a_{24} a_{34} + a_{12}a_{23} &= 0, \end{aligned} $$ whence $$ a_{24}= -\frac{a_{23}}{\lambda_2}, \quad a_{12}a_{23} = 0.$$ Therefore $$ a_{23}= 0, \quad a_{24}= 0.$$
To sum up, the matrix of $D$ with respect to the basis $e_1 ,e_2 ,e_3 ,e_4$ is given by \[ \mathop D\nolimits_{1} = \left( {{\begin{array}{*{20}c}
0 & {a_{12} } & 0 & 0 \\
\lambda_2^{2}a_{12} & \lambda_2 a_{12} & 0 & 0 \\
{a_{31} } & {a_{32} } & {a_{33} }
& {a_{34} } \\
\lambda_2 a_{31} & \lambda_2 a_{32} & \lambda_2 a_{33} & \lambda_2 a_{34} \\ \end{array} }} \right). \]
Case $II$: Let $$\lambda ( e_1 ) = \lambda ( e_2 ) = \lambda ( e_4 ) = 0, \,\lambda ( e_3 ) = \lambda_3.$$
Then substituting this into the twisted derivation conditions we can obtain the following result.
From $(T5)$, this condition is trivial since both sides are zero.
From $(T3)$, $$\lambda ( {D( e_2 )} ) = 0,$$ we get $$a_{23}\lambda_3 = 0,$$ thus $$a_{23} = 0.$$
From $(T1)$, $$[ e_3 ,D( e_2 ) ] + \lambda ( e_4 )D( e_1 ) + [{\lambda ( e_3 )D( e_1 ),e_2} ]+[ {[ {D( e_1 ),e_2} ],e_3} ] - D(e_4) = 0,$$ we have $$\lambda_3a_{11} e_2 + a_{11} e_4 = a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4,$$ thus we obtain $$a_{41} = 0,\, a_{42} = \lambda_3a_{11},\, a_{43} = 0,\, a_{44} = a_{11}.$$
From $(T4)$, \begin{eqnarray*} &&D( {[ {D( e_1 ),e_2} ]} ) + \lambda ( {D( e_2 )} )D( e_1 ) - [ {D( {D( e_1 )} ),e_2} ] + \lambda ({D( e_1 )} )D( e_2 ) = 0, \end{eqnarray*} we get \begin{eqnarray*} &&a_{11}(a_{21} e_1 + a_{22} e_2 + a_{23} e_3 + a_{24} e_4) + \lambda_3 a_{23} (a_{11} e_1 + a_{12} e_2 + a_{13} e_3 + a_{14} e_4)\\ &&- (a_{11}a_{11}+a_{12}a_{21}+a_{13}a_{31}+a_{14}a_{41}) e_2 + \lambda_3 a_{13} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&= 0, \end{eqnarray*} thus we obtain \begin{eqnarray*} (a_{11} + \lambda_3 a_{13})a_{21} &= 0,\\ (a_{11}+ \lambda_3 a_{13})a_{22} - a_{11}^{2} - a_{12}a_{21} - a_{13}a_{31} &= 0,\\ (a_{11} + \lambda_3 a_{13})a_{24} &= 0. \end{eqnarray*}
From $(T2)$, $$ \lambda ( {D( e_2 )} )D( e_1 ) + D( {[{D( e_1 ),e_2} ]} )
- D( {D( {[ {e_1,e_2}]} )} ) + [ {D( e_1 ),D( e_2 )} ] = 0,$$ we have \begin{eqnarray*} &&\lambda_3 a_{23} (a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) + a_{11}(a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&- a_{21}(a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) - a_{22} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&-a_{24}(a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4) + a_{11}a_{22} e_2+a_{11}a_{23} e_3-a_{11}a_{24} e_4 \\ &&+ a_{12}a_{23} e_4 = 0 , \end{eqnarray*} Thus we obtain $$ \begin{aligned} a_{22} a_{21} &= 0,\\ 2a_{11}a_{22} - a_{21} a_{12} - a_{22} a_{22} - a_{24} a_{42} &= 0,\\ a_{21}a_{13} &= 0,\\ a_{21}a_{14} + a_{22} a_{24} +a_{24} a_{44} &= 0. \end{aligned} $$
Assume $a_{11} \neq 0$; then we have $$a_{13} = -\frac{a_{11}}{\lambda_3}, a_{21} = a_{24} = 0, a_{22} = 2a_{11}, a_{31} = \lambda_3 a_{11},$$ or $$a_{13} = -\frac{a_{11}}{\lambda_3}, a_{21} = 0, a_{22} = -a_{11}, a_{31} = -\frac{3a_{11}}{\lambda_3},$$ or $$a_{21} = a_{24} = 0, a_{22} = 0, a_{31} = -\frac{a_{11}^{2}}{a_{13}},$$ or $$a_{21} = a_{24} = 0, a_{22} = 2a_{11}, a_{31} = \frac{(a_{11} + 2\lambda_3 a_{13})a_{11}}{a_{13}}.$$
To sum up, the matrix of $D$ with respect to the basis $e_1 ,e_2 ,e_3 ,e_4$ is given by \[ \mathop D\nolimits_{21} = \left( {{\begin{array}{*{20}c}
a_{11} & {a_{12} } & -\frac{a_{11}}{\lambda_3} & a_{14} \\
0 & 2a_{11} & 0 & 0 \\
{a_{31} } & {a_{32} } & {a_{33} }
& {a_{34} } \\
0 & \lambda_3 a_{11} & 0 & a_{11} \\ \end{array} }} \right), \] \[ \mathop D\nolimits_{22} = \left( {{\begin{array}{*{20}c} a_{11} & {a_{12} } & -\frac{a_{11}}{\lambda_3} & a_{14} \\
0 & -a_{11} & 0 & a_{24} \\
-\frac{3a_{11}}{\lambda_3} & {a_{32} } & {a_{33} }
& {a_{34} } \\
0 & \lambda_3 a_{11} & 0 & a_{11} \\ \end{array} }} \right), \] \[ \mathop D\nolimits_{23} = \left( {{\begin{array}{*{20}c}
a_{11} & {a_{12} } & a_{13} & a_{14} \\
0 & 0 & 0 & 0 \\
-\frac{a_{11}^{2}}{a_{13}} & {a_{32} } & {a_{33} }
& {a_{34} } \\
0 & \lambda_3 a_{11} & 0 & a_{11} \\ \end{array} }} \right), \] \[ \mathop D\nolimits_{24} = \left( {{\begin{array}{*{20}c}
a_{11} & {a_{12} } & a_{13} & a_{14} \\
0 & 2a_{11} & 0 & 0 \\
\frac{(a_{11} + 2\lambda_3 a_{13})a_{11}}{a_{13}} & {a_{32} } & {a_{33} }
& {a_{34} } \\
0 & \lambda_3 a_{11} & 0 & a_{11} \\ \end{array} }} \right). \]
Case $III$: Let $$\lambda ( e_1 ) = \lambda ( e_2 ) = \lambda ( e_3 ) = 0, \,\lambda ( e_4 ) = \lambda_4 \neq 0;$$ then substituting this into the twisted derivation conditions we obtain the following.
From $(T3)$, we obtain $$a_{24} = 0.$$
From $(T1)$, $$ \lambda ( e_4 )D( e_1 ) + [ [ D( e_1 ),e_2 ],e_3] - D( e_4 ) = 0,$$ we have $$\lambda_4(a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) + a_{11}e_4= a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4,$$ thus we obtain $$a_{41} = \lambda_4 a_{11},\, a_{42} = \lambda_4 a_{12},\, a_{43} = \lambda_4 a_{13},\, a_{44} = a_{11} + \lambda_4 a_{14}.$$
From $(T4)$, \begin{eqnarray*} &&D( {[ {D( e_1 ),e_2} ]} ) + \lambda ( {D( e_2 )} )D( e_1 ) - [ {D( {D( e_1 )} ),e_2} ] + \lambda ({D( e_1 )} )D( e_2 ) = 0, \end{eqnarray*} we have \begin{eqnarray*} &&a_{11}(a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) + \lambda_4 a_{24} (a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4)\\ &&- (a_{11}a_{11}+a_{12}a_{21}+a_{13}a_{31}+a_{14}a_{41}) e_2 + \lambda_4 a_{14} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&= 0, \end{eqnarray*} thus we obtain \begin{eqnarray*} (a_{11}+ \lambda_4 a_{14})a_{21} &= 0,\\ (a_{11}+ \lambda_4 a_{14})a_{22} - a_{11}^{2} - a_{12}a_{21} - a_{13}a_{31} - a_{14}\lambda_4 a_{11} &= 0,\\ (a_{11}+ \lambda_4 a_{14})a_{23} &= 0. \end{eqnarray*}
Assume $$a_{11} \neq 0, \quad a_{12} \neq 0, \quad a_{13} \neq 0, \quad a_{14} \neq 0.$$ Then we obtain $$a_{21} = a_{23} = 0, \quad (a_{11}+ \lambda_4 a_{14})a_{22} - a_{11}^{2} - a_{13}a_{31} - a_{14}\lambda_4 a_{11} = 0.$$
From $(T2)$, $$ \lambda ( {D( e_2 )} )D( e_1 ) + D( {[{D( e_1 ),e_2} ]} ) - D( {D( {[ {e_1,e_2}]} )} ) + [ {D( e_1 ),D( e_2 )} ] = 0,$$ we have \begin{eqnarray*} &&\lambda_4 a_{24} (a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) + a_{11}(a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&- a_{21}(a_{11} e_1 + a_{12} e_2 +a_{13} e_3 +a_{14} e_4) - a_{22} (a_{21} e_1 + a_{22} e_2 +a_{23} e_3 +a_{24} e_4) \\ &&-a_{23} (a_{31} e_1 + a_{32} e_2 +a_{33} e_3 +a_{34} e_4) -a_{24}(a_{41} e_1 + a_{42} e_2 +a_{43} e_3 +a_{44} e_4)\\ &&+ a_{11}a_{22} e_2+a_{11}a_{23} e_3-a_{11}a_{24} e_4+ a_{12}a_{23} e_4 = 0, \end{eqnarray*} thus we obtain $$ \begin{aligned} a_{22} a_{21} + a_{23} a_{31} &= 0,\\ 2a_{11}a_{22} - a_{21}a_{12} - a_{22}^{2} -a_{23} a_{32} &= 0,\\ 2a_{11}a_{23} - a_{21}a_{13} - a_{22}a_{23} -a_{23}a_{33} &= 0,\\ - a_{21}a_{14} - a_{23}a_{34} + a_{12}a_{23} &= 0. \end{aligned} $$ Therefore we obtain the following: $$a_{22} = 0, a_{31} = -\frac{a_{11}^{2} + \lambda_4 a_{14} a_{11}}{a_{13}},$$ or $$a_{22} = 2a_{11}, a_{31} = \frac{a_{11}^{2} + \lambda_4 a_{14} a_{11}}{a_{13}}.$$
To sum up, the matrix of $D$ with respect to the basis $e_1 ,e_2 ,e_3 ,e_4$ is given by \[ \mathop D\nolimits_{31} = \left( {{\begin{array}{*{20}c}
{a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\
0 & 0 & 0 & 0 \\
-\frac{a_{11}^{2} + \lambda_4 a_{14} a_{11}}{a_{13}} & {a_{32} } & {a_{33} }
& {a_{34} } \\
\lambda_4 a_{11} & \lambda_4 a_{12} & \lambda_4 a_{13} & \lambda_4 a_{14} + a_{11} \\ \end{array} }} \right) \] or \[ \mathop D\nolimits_{32} = \left( {{\begin{array}{*{20}c}
{a_{11}} & {a_{12} } & {a_{13} } & {a_{14} } \\
0 & 2a_{11} & 0 & 0 \\
\frac{a_{11}^{2} + \lambda_4 a_{14} a_{11}}{a_{13}} & {a_{32} } & {a_{33} }
& {a_{34} } \\
\lambda_4 a_{11} & \lambda_4 a_{12} & \lambda_4 a_{13} & \lambda_4 a_{14} + a_{11} \\ \end{array} }} \right). \]
\section*{Acknowledgments} This is a preliminary version; it may be revised in the future.
\vskip7pt
\footnotesize{
\noindent
College of Mathematics, Henan Normal University, Xinxiang 453007, P. R. China;\\ E-mail address:\texttt{{
[email protected]}}\vskip5pt
\noindent
College of Mathematics, Henan Normal University, Xinxiang 453007, P. R. China;\\
E-mail address:\texttt{{ [email protected]}}.\vskip5pt
\noindent
College of Mathematics, Henan Normal University, Xinxiang 453007, P. R. China;\\
E-mail address:\texttt{{
[email protected]}}.\vskip5pt }
\end{document}
Journal of Materials Science
May 2015, Volume 50, Issue 10, pp. 3586–3596
Effect of heat treatment on phase structure and thermal conductivity of a copper-infiltrated steel
S. Klein
S. Weber
W. Theisen
Infiltration of tool steels with copper is a suitable and cheap method to create dense parts using powder metallurgy. In this work, it is shown that the copper network that forms inside the steel skeleton during infiltration enhances the thermal conductivity of the resulting composite. The level of enhancement is dependent on the thermal conductivity of the copper phase and the volume fraction of copper. Multiple heat treatments of this composite revealed a strong dependency between the thermal conductivity of the composite and the solution state of Fe in the copper network. The latter is highly dependent on the heat-treated condition of the multi-phase material. Using infiltration, the thermal and electrical conductivity was increased from \(21.3\hbox { to }50.1\,\hbox {Wm}^{-1}\, \hbox {K}^{-1}\) and from \(2.5\,\hbox { to }7.7\,{\upmu \Omega }^{-1}\, \hbox {m}^{-1},\) respectively, for the aged steel-copper composite in comparison with the original X245VCrMo9-4-4 steel. In addition, a model alloy that represents the copper-phase network in the composite was manufactured. By measuring the thermal conductivity of both this model alloy and the bulk steel, and comparing the results with the data for the composite, different models for calculating the overall conductivity of the composite are discussed.
Keywords: Thermal conductivity · Tool steel · Equilibrium calculation · Solution annealing · Liquid copper
Tool steels are usually designed with respect to their mechanical properties, particularly wear resistance, hardness, strength, toughness and, to a lesser extent, corrosion resistance [1, 2]. In the last decades, much effort has been spent on increasing these properties by optimizing alloying composition, heat treatment, and the production route. During the last few years, research and development have focused on additional properties, especially the thermal conductivity [3, 4, 5], which has resulted in newly developed steels optimized for high thermal conductivity [6, 7, 8]. Previous work showed that thermal conductivity is not only influenced by chemical composition, but also strongly by heat treatment [9, 10]. This influence is even greater on steels with a high volume fraction of precipitates, such as carbide-rich cold-work tool steels [11].
In many advanced industrial applications, the commonly used cast and forged tool steels are being increasingly substituted by steels produced by powder metallurgy (PM). These steels can be manufactured from a mixture of binary or ternary powders or, for the most homogeneous microstructure, from a single fully prealloyed, atomized steel powder. Compaction of these powders can be carried out either by solid-state sintering, liquid-phase sintering (LPS), super-solidus liquid-phase sintering (SLPS), or hot isostatic pressing (HIP). Except for solid-state sintering, all these techniques are able to create a fully densified material; however, they are often associated with high costs and different kinds of microstructure [12].
Another method to achieve a fully densified PM tool steel is to fill the pores of a solid-state sintered part with a lower-melting metal, commonly copper. This technique is called infiltration and is widely used, e.g., in the automotive industry to produce valve seat rings and leads. Infiltration closes the open pores of sintered parts and creates a copper network in the material, which improves the mechanical properties by eliminating notches [13, 14].
Parts produced by infiltration contain a certain amount of copper, commonly 10 to 15 vol%, depending on the open porosity after sintering [13]. Copper is used because of its good behavior during infiltration: it does not form any intermetallic phases with Fe, has limited solubility for Fe, and exhibits good wettability on Fe. Copper itself is generally used as a technical material because of its superior physical properties. In its pure state, it has an exceptionally high thermal and electrical conductivity, up to \(394\,\hbox {Wm}^{-1}\,\hbox {K}^{-1}\) and \(58\,\upmu {\Omega }^{-1}\,\hbox {m}^{-1},\) respectively [15], compared to about \(10 \hbox { to } 20\hbox { Wm}^{-1}\hbox { K}^{-1}\) and \(1.1 \hbox { to } 2.0\,\upmu {\Omega }^{-1}\hbox {m}^{-1}\) for most tool steels [16]. Therefore, it is one of the most popular materials for many technical applications, such as heat exchangers, electrical and thermal conductors, etc.
Because of its high conductivity, it seems possible that the copper network residing in infiltrated parts increases their thermal (and electrical) conductivity. If this is the case, infiltration provides an opportunity to produce parts whose mechanical properties are dominated by the sintered steel, whereas the copper network increases their thermal and electrical conductivity. Provided that the interaction between the tool-steel skeleton and the copper network, as well as the influence of the copper network and its geometry, is known, parts could be manufactured whose mechanical and physical properties could be tailored by selecting an appropriate steel and volume fraction of copper.
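The plausibility of such an enhancement can be bracketed with elementary two-phase bounds. The sketch below is our illustration, not a calculation from the paper: it applies the parallel (rule-of-mixtures) and series bounds to the conductivities quoted above (pure copper, uninfiltrated steel) and the 35 vol% copper fraction determined for the composite. The measured value of the aged composite falls between the bounds, and well below the parallel bound, consistent with Fe dissolved in the copper network degrading its conductivity.

```python
# Illustrative two-phase conductivity bounds (values taken from the text).
f_cu = 0.35            # volume fraction of the copper network
lam_cu = 394.0         # W/(m K), pure copper
lam_steel = 21.3       # W/(m K), uninfiltrated X245 steel
lam_measured = 50.1    # W/(m K), aged steel-copper composite

# Parallel (upper, Voigt) and series (lower, Reuss) bounds
lam_upper = f_cu * lam_cu + (1 - f_cu) * lam_steel
lam_lower = 1.0 / (f_cu / lam_cu + (1 - f_cu) / lam_steel)

print(f"upper {lam_upper:.1f}, lower {lam_lower:.1f}, measured {lam_measured}")
# upper ~ 151.7, lower ~ 31.8, measured 50.1
```

The gap between the measured value and the parallel bound (which assumes a perfectly conducting pure-copper network) is exactly what the heat-treatment study in this paper probes.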
The present work investigates the ability to create a copper-infiltrated cold-work tool steel with high thermal conductivity as well as the influence of heat treatment on its thermal conductivity. Additionally, a model alloy with the same chemical composition as the copper network residing in the infiltrated steel is investigated. Together with hot isostatically pressed bulk material of the steel X245, this allows the measurement and discussion of the properties of the single phases compared to the composite. Future studies will focus on the mechanical properties and the temperature dependency of the physical properties.
Fig. 1: Microstructure of a X245VCrMo9-4-4 powder particle, SE contrast. The steel consists of \(\hbox {V}_8\hbox {C}_7\)-carbides (big, gray, globular precipitates) and some \(\hbox {Mo}_2\hbox {C}\)-carbides (small, black dots), residing in a martensitic matrix with little retained austenite (after hardening)
Fig. 2: Temperature versus time plot of the sintering and infiltration processes
Fig. 3: Steel skeleton infiltrated with electrolytic copper. SEM image with SE contrast, 20 kV
Fig. 4: More detailed SEM image of the infiltrated steel skeleton with SE contrast
Fig. 5: Calculated phase contents of X245VCrMo9-4-4. Calculations were performed with Thermo-Calc® and TCFe7 database
Three materials were used in this investigation: firstly, the composite material named X245-Cu, which consists of solid-state sintered PM cold-work tool steel DIN EN ISO 4957 X245VCrMo9-4-4 (chemical composition is given in Table 1), that has been infiltrated with electrolytic copper; secondly, the same steel created by HIP (referred to as X245-HIP); thirdly, the model alloy CuFe3, which recreates the copper network in the infiltrated steel by alloying electrolytic copper with 3 mass% Fe.
Materials, processing, and heat treatment
The composite material X245-Cu is manufactured in a two-step process that comprises solid-state sintering of the steel skeleton and subsequent infiltration with liquid copper. To retain a high open porosity, only the particle fraction from \(63\hbox { to } 80\, \upmu \hbox {m}\) is used for sintering. The steel X245 is provided as gas atomized, prealloyed steel powder by Böhler Edelstahl GmbH. In its initial state, it consists of a major fraction of \(\hbox {V}_8\hbox {C}_7\) carbides and a minor fraction of \(\hbox {Mo}_2\hbox {C}\) carbides, residing in a matrix with a large amount of retained austenite and a small amount of martensite. The microstructural changes and sintering behavior of this steel have already been analyzed in depth by Blüm [17] and Krasokha [18]. The microstructure of the powder is shown in Fig. 1. In accordance with these studies, a sintering temperature of 1200 °C was chosen. This temperature is slightly lower than \({T_{\rm{sol}}}\) (1237 °C), which prevents the formation of a liquid phase. Sintering was performed in a vacuum radiation furnace (\(p=5\times 10^{-3}\,\hbox {mbar}\)) by heating the powder at 1200 °C for 1 h in an alumina crucible. The sintered skeleton had a diameter of about 45 mm and a height of about 60 mm.
Subsequently, the material was allowed to cool passively. Infiltration was performed by placing granulated electrolytic copper on the sintered steel skeleton, and then heating both above the solidus temperature of the copper, in this case to 1120 °C, with a preheating step of 60 min at 1000 °C. After maintaining this temperature for 1 h, the composite was cooled passively below the solidification temperature and then gas-cooled to ambient temperature. Temperature profiles for sintering and infiltration are shown in Fig. 2. The microstructure of the resulting composite material is shown in Figs. 3 and 4.
For the sintered and infiltrated composite, the remaining closed porosity was determined by quantitative image analysis, revealing a porosity of less than 0.2 vol%. All detected pores were identified as residual inner pores of the steel powder, originating from atomization. Using the measured densities of the composite, the steel X245-HIP (corrected by the value of inner porosity) and the model alloy CuFe3, the volume fraction of the copper network was determined as 35 vol%, which was also confirmed by quantitative image analysis.
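Determining the copper volume fraction from measured densities amounts to inverting a linear rule of mixtures. The sketch below illustrates this; the density values are illustrative placeholders, not the measured data from the study:

```python
def copper_volume_fraction(rho_comp, rho_steel, rho_cu):
    """Invert the linear rule of mixtures: rho_comp = v*rho_cu + (1 - v)*rho_steel."""
    return (rho_comp - rho_steel) / (rho_cu - rho_steel)

# Illustrative densities in g/cm^3 (placeholders, not the measured values)
rho_steel, rho_cu = 7.50, 8.94
rho_comp = 0.35 * rho_cu + 0.65 * rho_steel  # composite with a 35 vol% Cu network
v_cu = copper_volume_fraction(rho_comp, rho_steel, rho_cu)
print(round(v_cu, 2))  # -> 0.35
```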
The hot isostatically pressed steel X245-HIP is an industrial-grade material provided by Böhler Edelstahl GmbH that therefore needed no additional production steps.
The CuFe3 model alloy was produced by mixing granular electrolytic copper with pure iron powder and melting the mixture in a vacuum induction furnace in a graphite crucible. The bulk cast slug was then hot-rolled, cold-rolled, and recrystallized at 400 °C for 15 min.
All materials were solution annealed in radiation furnaces at 1050 °C for 30 min in inert gas, followed by water quenching and, optionally, by aging in three sequential steps at 550 °C for 8 h per step. For aging, convection furnaces with ambient atmosphere were used.
For the measurements, specimens with the following dimensions were cut: 10 × 10 × 1.5 mm for laser-flash, 4 × 4 × 1.5 mm for mDSC, and 4 × 4 × 20 mm for electric resistivity.
Equilibrium calculations
The thermodynamic equilibria were calculated using the Calphad software Thermo-Calc® [19]. Based on the chemical composition given in Table 1, the phase diagram of X245VCrMo9-4-4, shown in Fig. 5, was calculated with the TCFe7 database [20]. From this data, the temperatures for austenitizing, solution annealing, sintering, and aging were chosen. Thermo-Calc® and the TCBin database [21] were used to calculate the solubility of Fe in Cu and Cu in Fe at the infiltration temperature (1120 °C), solution annealing temperature (1050 °C), and aging temperature (550 °C).
By adding 39.8 mass% (35.0 vol%) Cu to the composition, proportionally aligning the mass fraction of the other elements, equilibrium calculations for the composite material X245-Cu were performed. This technique was used to calculate the temperature-dependent equilibrium composition of the copper phase for all alloying elements of X245VCrMo9-4-4. The results are given in Fig. 6.
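This composition adjustment can be sketched as a simple rescaling; the steel composition used below is a hypothetical placeholder, since the exact Table 1 values are not restated here:

```python
def add_copper(steel_masspct, cu_masspct=39.8):
    """Rescale the steel composition so that, with the added Cu, it sums to 100 mass%."""
    scale = (100.0 - cu_masspct) / 100.0
    comp = {el: w * scale for el, w in steel_masspct.items()}
    comp["Cu"] = cu_masspct
    return comp

# Hypothetical X245-like composition in mass% (placeholder; Fe as balance)
steel = {"C": 2.45, "V": 9.0, "Cr": 4.0, "Mo": 4.0, "Fe": 80.55}
composite = add_copper(steel)
print(composite["Cu"], round(sum(composite.values()), 6))  # -> 39.8 100.0
```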
Measurement of the physical properties
In this work, the thermal conductivity was determined using the dynamic method described by Tritt [22]. Accordingly, the thermal diffusivity \({a}\), specific heat capacity \(c_{\mathrm{p}}\), and density \({\rho }\) were measured separately to allow calculation of the thermal conductivity \({\lambda }\) using the relation:
$$\lambda = a \cdot \rho \cdot c_\mathrm{{p}}$$
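The relation above translates directly into code; the input values below are illustrative placeholders in SI units, not measured data:

```python
def thermal_conductivity(a, rho, cp):
    """lambda = a * rho * c_p: a in m^2/s, rho in kg/m^3, c_p in J/(kg K)."""
    return a * rho * cp

# Illustrative placeholder values, not measured data
lam = thermal_conductivity(a=8.5e-6, rho=8000.0, cp=450.0)
print(round(lam, 2), "W m^-1 K^-1")  # -> 30.6
```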
All measurements were repeated at least three times with three different specimens. The resulting mean values and standard deviations were plotted. A laser-flash device type LFA-1000 from Linseis Messgeräte GmbH was used for measuring the thermal diffusivity. Its accuracy is about \(\pm 3\,{\%}\). Values were measured at 20 °C and calculated according to Eq. 2 where \(L_0\) is the thickness of the specimen and \(t_{0.5}\) is the time to reach half of the temperature increase induced by the laser pulse.
$$\begin{aligned} a = 1.38 \cdot \frac{L_0^2}{\pi ^2 \cdot t_{0.5}} \end{aligned}$$
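Eq. 2 can be checked numerically; the half-rise time used here is a hypothetical reading for a 1.5 mm thick specimen:

```python
import math

def diffusivity_laser_flash(L0, t_half):
    """Eq. 2: a = 1.38 * L0**2 / (pi**2 * t_half)."""
    return 1.38 * L0 ** 2 / (math.pi ** 2 * t_half)

# 1.5 mm specimen thickness; t_half = 50 ms is a hypothetical reading
a = diffusivity_laser_flash(L0=1.5e-3, t_half=0.05)
print(f"{a:.2e} m^2/s")  # -> 6.29e-06
```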
The density was measured on the basis of the Archimedes buoyancy principle by measuring the weight in air and in water on a balance having a precision of 0.01 mg (the weight of the specimen was about 1.5 g).
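The Archimedes evaluation follows directly from the two weighings; the masses and water density below are hypothetical placeholders:

```python
def archimedes_density(m_air, m_water, rho_water=0.9982):
    """rho = m_air * rho_water / (m_air - m_water); masses in g, densities in g/cm^3."""
    return m_air * rho_water / (m_air - m_water)

# Hypothetical weighings for a roughly 1.5 g specimen (placeholders)
rho = archimedes_density(m_air=1.50000, m_water=1.31286)
print(round(rho, 2), "g/cm^3")
```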
The specific isobaric heat capacity was measured using modulated differential scanning calorimetry (mDSC). Measurements were carried out in Pt/Rh pans with lids in a dynamic He atmosphere using a DSC 2920 CE from TA Instruments GmbH, exhibiting an accuracy of \(\pm 1\,{\%}\) for the specimens used. Before every sequence, a baseline calibration and a signal calibration with a sapphire standard were performed.
The electric conductivity was derived from the specific resistivity, which was measured using the four-wire method. Measurements were carried out with an LSR 1100 from Linseis Messgeräte GmbH with an accuracy of \(\pm 0.2\,{\%}\).
Chemical composition of the investigated X245 alloy in mass%
Chemical composition of the phases in the X245-Cu composite in mass%: (a) copper infiltrated at 1120 °C, calculated with Thermo-Calc® and TCFe7 at 1120 °C; (b) copper infiltrated at 1120 °C, measured with EDS at 15 kV at 20 °C; (c) \(\upgamma{\text{-}}\hbox{Fe}\) phase without contact with copper at 1120 °C, calculated with Thermo-Calc® and TCFe7 (trace elements are not shown)
(a) Cu liquid phase, Calphad
(b) Cu network, EDS
(c) \(\upgamma{\text{-}}\hbox{Fe}\), Calphad
Calculated equilibrium composition of the copper phase in X245-Cu. Elements with a maximum fraction below 0.2 mass% are not shown
Calculated phase diagram of the Cu-Fe system on the Cu-rich side. Data from [26] is shown for comparison
Stable phases in the copper network of X245-Cu after infiltration. The calculation is based on the calculated composition of the liquid copper at 1120 °C (see Table 2)
Thermal conductivity of the investigated X245 steel, X245-Cu composite, and CuFe3 alloy in the quenched state and in the quenched and aged state
Thermal diffusivity and electrical conductivity of the investigated X245 steel, X245-Cu composite, and CuFe3 alloy in the quenched state and in the quenched and aged state
Characterization of the resulting composite
Figures 3 and 4 show cross sections of the composite X245-Cu. This material, created by infiltration with liquid electrolytic copper, consists of a carbide-rich, sintered steel skeleton with an interconnected copper network residing in its formerly open porosity. The particles forming the steel skeleton were only slightly sintered, resulting in weak connections between the particles. EDS measurements of the copper network reveal an Fe content of 3 mass%.
Since the diffusivity in liquid copper is about six to seven orders of magnitude higher than in the solid state, the transport of alloying elements from the steel into the infiltrating copper is much faster in the liquid state than in the solid state [23, 24, 25].
As a result of fast cooling after infiltration, most of the transport of atoms into the copper takes place during the infiltration step. Assuming that the liquid copper phase reaches its equilibrium state after a short time, equilibrium calculations with the Calphad method can be used to approximate the chemical composition of the copper phase at the infiltration temperature. Due to the fast cooling, the previously dissolved alloying elements remain in the copper network, either dissolved in the copper matrix or as precipitates.
The investigated system is rather complex and contains many phases with different base elements: Fe in the steel matrix and Cu in the copper network, as well as different carbides residing in the steel. No available thermodynamic database covers both cases: a complex steel with multiple phases and copper-based alloys. Nevertheless, the TCFe7 database contains extensive information beyond purely Fe-based systems. To at least check the correctness of the data for high Fe contents in the Cu-Fe system, the corresponding phase diagram was calculated and compared to data calculated using the binary TCBin database and to experimental data from Boltax [26] (see Fig. 7). Although all show good agreement in the solid state, the experimental data for the liquid phase differs, whereas the data calculated with the two databases are identical. Measuring the chemical composition of the copper phase after infiltration using EDS shows good agreement between the Fe content and the calculated value (Table 2). Only Fe and Si could be identified as components of the Cu phase by EDS, and only the Fe signal was high enough to be quantified.
Figure 6 shows the calculated chemical composition of the copper phase in dependence of temperature. At lower temperatures, the content of the relevant elements Fe, Cr, W, and Mo decreases. Since their mobility is also lowered, precipitation of these elements inside the copper matrix is to be expected. On the basis of these prerequisites, equilibrium calculations can be performed with the previously calculated chemical composition of the liquid copper phase at the infiltration temperature of 1120 °C. The results of this calculation are shown in Fig. 8, which reveals that \(\upalpha{\text{-}}\) and \(\upgamma{\text{-}}\hbox{Fe}\) are the only stable phases besides fcc copper, with \(\upalpha{\text{-}}\hbox{Fe}\) being stable at the aging temperature (550 °C).
Model calculations of thermal conductivity for the steel-copper composite X245-Cu in comparison with experimental data. All thermal conductivities in \(\hbox {Wm}^{-1}\hbox { K}^{-1}\); the Hashin-Shtrikman bounds and the aged measured value are those quoted in the text, and cells marked "-" were not reported.

                                               S-A and quenched     S-A, quenched and aged
Volume fraction copper alloy \(v_1\)           0.35                 0.35
Volume fraction steel \(v_2\)                  0.65                 0.65
Th. conductivity copper alloy \(\lambda _1\)   141.9 ± 3.1          304.4 ± 3.7
Th. conductivity steel \(\lambda _2\)          14.7 ± 2.9           21.3 ± 2.6
Wiener bounds (Eqs. 4, 3)                      21.4 to 59.2         31.6 to 120.4
Hashin–Shtrikman bounds (Eqs. 5, 6)            30.2 to 49.6         46.8 to 102.4
Maxwell (Eq. 7)                                -                    -
Hasselman–Johnson (Eq. 8)*                     -                    -
Lichtenecker (Eq. 9)                           -                    -
Bruggeman–Landauer (Eq. 11)                    -                    -
Measurement (Fig. 9)                           29.6 ± 3.6           50.1 ± 3.1

*with \(h_{\mathrm{c}}=5\times 10^{8}\hbox { Wm}^{-2}\hbox { K}^{-1}\) and sphere size \(a=36\,{\upmu \hbox {m}}\)
Effect of heat treatment on the thermal conductivity
A comparison of the differently heat-treated X245-Cu composite specimens (Fig. 9) reveals a strong impact of aging on the thermal and electrical conductivities. This difference is mostly caused by changes in the thermal diffusivity (Fig. 10), because the specific heat capacity and density of X245-Cu are largely unaffected by the heat treatment (Fig. 11). Furthermore, most of the change is related to the copper phase, because the thermal conductivity of bulk X245VCrMo9-4-4 differs only between \(14.7\pm 2.8\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\) in the quenched state and \(21.3 \pm 2.6\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\) in the aged state.
This behavior is well explained by the findings of the thermodynamic calculations ("Equilibrium calculations" section). Owing to the decreasing solubility of Fe in Cu during solidification (see Fig. 7), precipitation of Fe occurs after infiltration, thus raising the purity of the Cu matrix and with it the conductivity of the Cu network. During solution annealing at 1050 °C for 30 min, all Fe precipitates dissolve in the Cu matrix, re-dissolving the Fe and thus lowering the conductivity. Conversely, when aged at 550 °C, Fe precipitates again and the conductivity increases.
The precipitation behavior was confirmed by SEM imaging (Figs. 12, 13, 14). The precipitates occurring after infiltration have a dendritic shape (Fig. 12), which indicates that they formed within the liquid copper. After quenching in water, no precipitates were found in the Cu network (Fig. 13). However, when aged at 550 °C, new globular Fe precipitates formed in the copper (Fig. 14). These precipitates were identified as Fe-based by EDS; their shape is no longer dendritic but globular, owing to solid-state precipitation. Figure 15 shows a schematic of the described processes.
The fact that the precipitation of Fe enhances the conductivity of the copper agrees well with the results of Boltax [26], who found that precipitates had a much lower impact on the electric conductivity of copper than the substitutional solution of Fe in the matrix. Most of the change in conductivity of the copper phase is induced by the solution state of Fe, because dissolved Fe has one of the highest impacts on the conductivity of Cu, as shown by Linde [27] and Kierspe [28]. By removing Fe from the Cu matrix through precipitation, the purity of the matrix rises. Of course, the Fe precipitates reduce the overall thermal conductivity owing to the additional interfaces between the Cu matrix and the precipitates, and to the lower conductivity of the Fe precipitates themselves. Below a critical size of about \(1\,\upmu \hbox {m}\), however, they behave similarly to lattice defects, because the interfacial thermal barrier resistance is high compared with the contribution of the precipitate to conduction [29, 30]. Thus, the influence of precipitated Fe is much lower than that of dissolved Fe, as already hypothesized in [26].
Contribution of the components to the thermal conductivity of X245-Cu
Since the copper network controls the conductivity of the X245-Cu composite, it is interesting to measure the influence of heat treatment on its thermal conductivity directly. This was achieved by manufacturing the CuFe3 model alloy with a chemical composition equal to that of the copper network after infiltration, which was measured by EDS (Table 2).
The physical properties of this model alloy were measured in the solution-annealed and aged state. Solution annealing was performed at 1050 °C for 30 min, followed by quenching in water and aging at 550 °C for 24 h. Measurements show a high impact of the heat treatment on the thermal and electrical conductivities of CuFe3, similar to those of the composite. At room temperature, the solution-annealed CuFe3 shows a thermal conductivity of \(141.9 \pm 3.1\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\), which increases to \(304.4 \pm 3.7\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\) after aging. This increase reflects the change in thermal conductivity of the X245-Cu composite, which ranges between \(29.6 \pm 3.6\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\) after solution annealing and \(50.1 \pm 3.1\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\) after aging.
When comparing these values, one could expect an even higher thermal conductivity of the composite, because it consists of 35 vol% copper. Assuming that the infiltrated copper has the same conductivity as the CuFe3 model alloy, the simple rule of mixtures by volume fraction, as described by Eq. 3 (where \(\lambda _x\) is the thermal conductivity and \(v_x\) is the volume fraction of component \(x\)), yields a thermal conductivity of \(59.2\hbox { Wm}^{-1}\hbox { K}^{-1}\) in the solution-annealed state and \(120.4\hbox { Wm}^{-1}\hbox { K}^{-1}\) in the aged state.
$$\lambda _{\mathrm {eff}}^{\mathrm {max}} = \lambda _1 \cdot v_1 + \lambda _2 \cdot v_2$$
Of course, this rule is overly simplified and ignores the effect of interfaces and internal geometry. It is only valid for a composite comprising parallel stripes of two phases, aligned with the direction of heat transfer and with no interaction between them. It equals the upper Wiener bound [31] and gives the absolute maximum thermal conductivity the composite can have. The other extreme is the serial alignment of the phases, represented by the lower Wiener bound (Eq. 4), which gives the absolute minimum thermal conductivity the composite can have. See Table 3 for the results of all tested mixing models.
$$\lambda _{\mathrm {eff}}^{\mathrm {min}} = \frac{\lambda _1\lambda _2}{\lambda _1 \cdot v_2 + \lambda _2 \cdot v_1}$$
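Using the quenched- and aged-state values from Table 3, the Wiener bounds of Eqs. 3 and 4 can be reproduced in a few lines of Python (an illustrative sketch, not part of the original analysis):

```python
def wiener_bounds(lam1, lam2, v1):
    """Wiener bounds (Eqs. 3 and 4): lam1 = Cu network, lam2 = steel, in W/(m K)."""
    v2 = 1.0 - v1
    upper = lam1 * v1 + lam2 * v2                  # Eq. 3, parallel alignment
    lower = lam1 * lam2 / (lam1 * v2 + lam2 * v1)  # Eq. 4, serial alignment
    return lower, upper

lo_q, up_q = wiener_bounds(141.9, 14.7, 0.35)   # quenched state
lo_a, up_a = wiener_bounds(304.4, 21.3, 0.35)   # aged state
print(round(lo_q, 1), round(up_q, 1))  # -> 21.4 59.2
print(round(lo_a, 1), round(up_a, 1))  # -> 31.6 120.4
```

The rounded results match the Wiener row of Table 3 for both heat-treatment states.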
While these boundaries are valid for every possible composite material, the range can be narrowed when assuming an isotropic material, like the investigated X245-Cu. This case is addressed by the bounds described by Hashin and Shtrikman [32], which are called the Hashin–Shtrikman boundaries (or HS-boundaries). They are calculated according to the Eqs. 5 and 6 for the lower and upper bound, respectively.
$$\lambda _{\mathrm {eff}}^{\mathrm {min}}= v_1\lambda _1 + v_2\lambda _2 - \frac{v_1v_2(\lambda _1-\lambda _2)^2}{3\lambda _2-v_2(\lambda _2-\lambda _1)}$$
$$\lambda _{\mathrm {eff}}^{\mathrm {max}}= v_1\lambda _1 + v_2\lambda _2 - \frac{v_1v_2(\lambda _1-\lambda _2)^2}{3\lambda _1-v_1(\lambda _1-\lambda _2)}$$
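The HS bounds of Eqs. 5 and 6 can be verified the same way with the quenched-state values from Table 3 (an illustrative Python sketch, not part of the original analysis):

```python
def hashin_shtrikman(lam1, lam2, v1):
    """HS bounds (Eqs. 5 and 6): lam1 = Cu network, lam2 = steel, in W/(m K)."""
    v2 = 1.0 - v1
    mix = v1 * lam1 + v2 * lam2
    num = v1 * v2 * (lam1 - lam2) ** 2
    lower = mix - num / (3 * lam2 - v2 * (lam2 - lam1))  # Eq. 5
    upper = mix - num / (3 * lam1 - v1 * (lam1 - lam2))  # Eq. 6
    return lower, upper

lo, up = hashin_shtrikman(141.9, 14.7, 0.35)  # quenched state, Table 3
print(round(lo, 1), round(up, 1))  # -> 30.2 49.6
```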
The measured values for both states lie at the lower bound of this range, which predicts a thermal conductivity of 30.2 to 49.6 \(\hbox{Wm}^{-1}\hbox{ K}^{-1}\) in the quenched and 46.8 to 102.4 \(\hbox{Wm}^{-1}\hbox{ K}^{-1}\) in the aged state (see Table 3).
Other models predict not a range but a single value for the overall thermal conductivity of the composite, which may be preferable if the model is close enough to the present case. The models of Maxwell [33], Lichtenecker [34], Bruggeman [35], and Hasselman and Johnson [36] were tested (see Table 3). However, their results are less accurate than the lower HS bound. The reasons are discussed for each model below.
$$\begin{aligned} \lambda _{\mathrm {eff}} = \lambda _1 \frac{\lambda _2 + 2 \lambda _1 + 2 v_2(\lambda _2-\lambda _1)}{\lambda _2 + 2 \lambda _1 - v_2(\lambda _2-\lambda _1)} \end{aligned}$$
The model of Maxwell (Eq. 7) is only valid for a low fraction of dispersed particles that are randomly distributed and whose spacing is greater than their size. It is therefore not applicable to the investigated material and predicts a thermal conductivity that is much too high.
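Evaluating Eq. 7 with the copper network as matrix and the steel as dispersed phase confirms this numerically: the prediction lands far above the measured \(29.6\,\hbox{Wm}^{-1}\hbox{K}^{-1}\) (an illustrative sketch using the quenched-state Table 3 values):

```python
def maxwell(lam_matrix, lam_disp, v_disp):
    """Maxwell model (Eq. 7): lam1 = matrix (Cu network), lam2 = dispersed steel."""
    lam1, lam2, v2 = lam_matrix, lam_disp, v_disp
    return lam1 * (lam2 + 2 * lam1 + 2 * v2 * (lam2 - lam1)) / \
                  (lam2 + 2 * lam1 - v2 * (lam2 - lam1))

lam = maxwell(141.9, 14.7, 0.65)  # quenched state, Table 3
print(round(lam, 1))  # -> 49.6, far above the measured 29.6
```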
Hasselman and Johnson (Eq. 8) [36] assumed a geometry very similar to Maxwell's. In addition, they considered the effects of interfaces and the mean free path by including the interfacial thermal conductance \(h_{\mathrm {c}}\) between both phases and the sphere size \(a\). In this case, \(h_{\mathrm {c}}\) was estimated as \(5\times 10^8\hbox { Wm}^{-2}\hbox{ K}^{-1}\) according to [37].
$$\begin{aligned} \lambda _{\mathrm {eff}} = \lambda _{\mathrm {1}} \frac{\left[ 2 \cdot v_{\mathrm {2}} \left( \frac{\lambda _{\mathrm {2}}}{\lambda _{\mathrm {1}}} - \frac{\lambda _{\mathrm {2}}}{a h_{\mathrm {c}}} -1 \right) + \frac{\lambda _{\mathrm {2}}}{\lambda _{\mathrm {1}}} + \frac{2 \lambda _{\mathrm {2}}}{a h_{\mathrm {c}}} +2 \right] }{\left[ v_{\mathrm {2}} \left( 1-\frac{\lambda _{\mathrm {2}}}{\lambda _{\mathrm {1}}}+ \frac{\lambda _{\mathrm {2}}}{a h_{\mathrm {c}}}\right) +\frac{\lambda _{\mathrm {2}}}{\lambda _{\mathrm {1}}}+\frac{2 \lambda _{\mathrm {2}}}{a h_{\mathrm {c}}}+2\right] } \end{aligned}$$
Its result is identical to that of the Maxwell model, because the impact of both considered effects is low when a high contrast between the thermal conductivities of the two phases is assumed. Furthermore, it is only defined for a low content of the dispersed phase.
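Evaluating Eq. 8 with the stated parameters shows numerically that the interface correction is negligible here (an illustrative sketch using the quenched-state Table 3 values):

```python
def hasselman_johnson(lam1, lam2, v2, a, h_c):
    """Eq. 8: Maxwell geometry plus interfacial conductance h_c and sphere size a."""
    s = lam2 / lam1          # conductivity ratio steel/copper
    r = lam2 / (a * h_c)     # interface term
    num = 2 * v2 * (s - r - 1) + s + 2 * r + 2
    den = v2 * (1 - s + r) + s + 2 * r + 2
    return lam1 * num / den

# h_c = 5e8 W/(m^2 K) and a = 36 um as stated in the text
lam = hasselman_johnson(141.9, 14.7, 0.65, a=36e-6, h_c=5e8)
print(round(lam, 2))  # essentially the Maxwell value of about 49.6
```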
A much simpler approach is the formula of Lichtenecker (Eq. 9). It was originally an empirical formula without a well-founded physical model, but recent research has shown good agreement when both phases are randomly distributed [38].
$$\begin{aligned} \lambda _{\mathrm {eff}} = \lambda _1^{v_1} \cdot \lambda _2^{v_2} \end{aligned}$$
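Eq. 9 evaluated with the quenched-state values from Table 3 (an illustrative sketch, not part of the original analysis):

```python
def lichtenecker(lam1, lam2, v1):
    """Eq. 9: logarithmic (geometric) mixing rule."""
    return lam1 ** v1 * lam2 ** (1.0 - v1)

lam = lichtenecker(141.9, 14.7, 0.35)  # quenched state, Table 3
print(round(lam, 1))  # roughly 32.5, close to the measured 29.6
```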
This formula gives the most accurate results besides the lower HS bound, but it offers no explanation for its good agreement. The model of Bruggeman–Landauer (Eq. 11) is sometimes considered an enhancement of Lichtenecker's model for anisotropic materials. Nevertheless, it is less accurate in the present case.
$$k_{\rm{b}}= \lambda _1(3 v_1 - 1) + \lambda _2(3 v_2 - 1) $$
$$\lambda _{\mathrm {eff}}= \frac{k_{\rm{b}} + \sqrt{k_{\rm{b}}^2 + 8 \lambda _1 \lambda _2}}{4}$$
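The Bruggeman–Landauer relation (Eq. 11, with the auxiliary term \(k_{\rm{b}}\)) evaluated with the quenched-state values from Table 3 (an illustrative sketch, not part of the original analysis):

```python
import math

def bruggeman_landauer(lam1, lam2, v1):
    """Symmetric effective-medium approximation (Eq. 11 with auxiliary k_b)."""
    v2 = 1.0 - v1
    k_b = lam1 * (3 * v1 - 1) + lam2 * (3 * v2 - 1)
    return (k_b + math.sqrt(k_b ** 2 + 8 * lam1 * lam2)) / 4.0

lam = bruggeman_landauer(141.9, 14.7, 0.35)  # quenched state, Table 3
print(round(lam, 1))  # roughly 38.0, less accurate than Lichtenecker's roughly 32.5
```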
In summary, the formula of Lichtenecker and the model of the lower Hashin–Shtrikman bound are most accurate when calculating the overall thermal conductivity of the composite X245-Cu.
This result is surprising, because the conceptual model of the upper bound is close to the present case: the upper bound is valid for a volume built from composite spheres with a poorly conducting core and a highly conducting shell, resulting in isolated cores with low thermal conductivity embedded in a highly conducting network of shells [32]. The lower bound represents the opposite case, which is far from the investigated composite material X245-Cu.
The main difference between the upper HS model and the material X245-Cu is the connection between the steel particles. Because of these connections, the particles are not isolated; instead, they form a skeleton. This deviation might be the reason for the significant difference between the upper HS bound and the measured thermal conductivity. Although the connections increase the thermal conductivity of the bare steel skeleton, they form an additional resistance in the otherwise highly conducting copper network, thus reducing the overall thermal conductivity. It is unclear whether this resistance alone is responsible for the discrepancy or whether other effects contribute as well.
One method to further investigate the applicability of the presented models is to simulate the heat flow using a representative FEM model. As pointed out in [39], this technique can be used to simulate the thermal conductivity of the composite while also taking account of its inner geometry. In that study, the simulation gave more realistic results than the theoretical approaches, especially when the components had greatly differing thermal conductivities. A similar approach will be used to simulate the investigated X245-Cu composite. The results of this ongoing work will be the topic of a future publication.
Density and specific isobaric heat capacity of the investigated X245 steel, X245-Cu composite, and CuFe3 alloy in the quenched state and in the quenched and aged state
SEM image with SE contrast of the X245-Cu composite. The iron precipitates in the copper matrix (lighter field) show a dendritic shape and a diameter of up to \(2\,{\upmu }\hbox {m}\) after infiltration and furnace cooling
SEM image with SE contrast of the X245-Cu composite. No precipitates are visible in the matrix of the infiltrated copper (lighter field) after solution annealing at 1050 °C for 30 min
SEM image with SE contrast of the X245-Cu composite. Globular Fe precipitates reside in the matrix of the infiltrated copper (lighter field) after solution annealing at 1050 °C for 30 min, quenching and aging at 550 °C for 3 × 8 h
Diagram of the processes during heat treatment of the X245-Cu composite. Gray regions—steel particles, white regions—copper matrix. a Diffusion of Fe into the copper network during infiltration. b Primary precipitation of Fe in the Cu melt during solidification. c Dissolution of Fe precipitates in the copper network when solution annealed and quenched. d Solid-state precipitation of Fe in the copper network due to aging
Conclusions and summary
The infiltration of tool steels with copper has the potential to produce composites with an interesting combination of physical and mechanical properties. The present study investigated the composite X245-Cu, which consists of the sintered PM cold-work tool steel X245VCrMo9-4-4 infiltrated with 35 vol% electrolytic copper. This composite showed a thermal conductivity of \(29.6 \pm 3.6\,\hbox{ Wm}^{-1}\,\hbox { K}^{-1}\) in the solution-annealed state, which is high compared with most tool steels. Nevertheless, an even higher conductivity could be expected given the high copper content of the composite. The dissolution of Fe from the steel into the liquid copper was found to provide the major contribution to the thermal resistance. It was then shown that the conductivity of the copper, and with it that of the composite, is strongly influenced by heat treatment. This is possible because of the strong temperature dependence of the solubility of Fe in Cu, which results in rapid precipitation of Fe when aged at lower temperatures. The conductivity of the composite increased to \(50.1 \pm 3.1\,\hbox{ Wm}^{-1}\hbox { K}^{-1}\) after aging at 550 °C for 24 h.
It is possible that even higher conductivities could be achieved by decreasing the amount of Fe that is solvated during infiltration. On the other hand, there is a need for a more reliable method of predicting the thermal conductivity of the composite using the parameters of the individual components. Indeed, a corresponding FEM model will be developed that will allow investigations of the effect of the composite's inner geometry on the thermal conductivity so that the geometry can be optimized with respect to a high conductivity. Further studies will focus on the mechanical properties of the composite.
The authors gratefully acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG) under support code TH531/13-1. Further thanks go to Fabian Nowara and Robin Thiel, who contributed to obtaining part of the experimental results.
Berns H, Theisen W (2008) Eisenwerkstoffe: Stahl und Gusseisen, 4th edn. Springer, Berlin
Dahl W (ed) (1993) Eigenschaften und Anwendungen von Stählen, vol 2, 1st edn. Verlag der Augustinus-Buchhandlung, Aachen
Valls I, Casas B, Rodriguez N (2009) Importance of tool material thermal conductivity in the die longevity and product quality in HPDC. In: Beiss P, Broeckmann C, Franke S, Keysselitz B (eds) Tool steels—deciding factor in worldwide production, vol 1. Mainz, Aachen, pp 127–140
Valls I, Casas B, Rodriguez N, Paar U (2010) Benefits from using high thermal conductivity tool steels in the hot forming of steels. La Metallurgia Italiana, n. 11–12
Meurisse E, Ernst C, Bleck W (2012) Improvement of thermal conductivity of hot-work tool steels by alloy design and heat treatment. In: Leitner H, Kranz R, Tremmel A (eds) TOOL 2012: developing the world of tooling. Verlag Gutenberghaus, Knittelfeld, pp 215–224
Rovalma SA (2008) Warmarbeitsstahl. European Patent EP 1,887,096 A1
Rovalma SA (2008) Verfahren zur Einstellung der Wärmeleitfähigkeit eines Stahls, Werkzeugstahls, insbesondere Warmarbeitsstahl, und Stahlgegenstand. International Patent WO 2008/017341 A1
Gelder S, Jesner G (2012) New high performance hot work tool steel with improved physical properties. In: Leitner H, Kranz R, Tremmel A (eds) TOOL 2012: developing the world of tooling. Verlag Gutenberghaus, Knittelfeld, pp 199–206
Wilzer J (2012) On the relationship of heat treatment, microstructure, mechanical properties, and thermal conductivity of tool steels. In: Leitner H, Kranz R, Tremmel A (eds) TOOL 2012: developing the world of tooling. Verlag Gutenberghaus, Knittelfeld, pp 143–152
Wilzer J, Lüdtke F, Weber S, Theisen W (2013) The influence of heat treatment and resulting microstructures on the thermophysical properties of martensitic steels. J Mater Sci 48(24):8483–8492. doi:10.1007/s10853-013-7665-2
Wilzer J, Weber S, Escher C, Theisen W (2014) Werkstofftechnische Anforderungen an Presshärtewerkzeuge am Beispiel der Werkzeugstähle X38CrMoV5-3, 30MoW33-7 und 60MoCrW28-8-4. HTM J Heat Treat Mater 69(6):325–332
Schatt W, Wieters K-P, Kieback B (2007) Pulvermetallurgie: Technologien und Werkstoffe, 2nd edn. VDI-Buch, Springer, Berlin
Samal PK, Klar E (1998) Copper-infiltrated steels. In: ASM handbook, vol 7, pp 769–773
Bernier F, Beaulieu P, Baïlon J-P, L'Espérance G (2011) Effect of Cu infiltration on static and dynamic properties of PM steels. Powder Metall 54(3):314–319
Gottstein G (2007) Physikalische Grundlagen der Materialkunde. Springer, Berlin
Richter F (1983) Die wichtigsten physikalischen Eigenschaften von 52 Eisenwerkstoffen. Stahleisen-Sonderberichte, Heft 10, Verlag Stahleisen
Blüm M (2014) Neuartige Schichtverbundwerkstoffe zur Standzeiterhöhung verschleißbeanspruchter Werkzeuge für die Mineralverarbeitung. Dissertation, Ruhr-Universität Bochum, Bochum
Krasokha N (2012) Einfluss der Sinteratmosphäre auf das Verdichtungsverhalten hochlegierter PM-Stähle. Dissertation, Ruhr-Universität Bochum, Bochum
Andersson J-O, Helander T, Höglund L, Shi P, Sundman B (2002) Thermo-Calc & DICTRA, computational tools for materials science. Calphad 26(2):273–312
Thermo-Calc Software AB (2013) TCFE7 steels/Fe-alloys database, version 7. Thermo-Calc Software AB, Stockholm
Thermo-Calc Software AB (2013) TCBin binary solutions database. Thermo-Calc Software AB, Stockholm
Tritt TM, Weston D (2010) Measurement techniques and considerations for determining thermal conductivity of bulk materials. In: Thermal conductivity. Physics of solids and liquids. Springer, New York, pp 187–203
Ohno R (1986) Rates of dissolution of solid iron, cobalt, nickel, and silicon in liquid copper and diffusion rate of iron from liquid Cu–Fe alloy into liquid copper. Metall Trans B 17(2):291–305
Butrymowicz DB, Manning JR, Read ME (1976) Diffusion in copper and copper alloys. Part IV. Diffusion in systems involving elements of group VIII. JPCRD 5(1):103–200
Butrymowicz DB, Manning JR, Read ME (1973) Diffusion in copper and copper alloys. Part I. Volume and surface self-diffusion in copper. JPCRD 2(3):643–655
Boltax A (1960) Precipitation processes in copper-rich copper-iron alloys. Trans Am Inst Min Metall Eng 218(5):812–821
Linde JO (1931) Elektrische Eigenschaften verdünnter Mischkristallegierungen I. Goldlegierungen. Ann Phys 402(1):52–70
Kierspe W (1967) Über den Einfluß der Übergangselemente der ersten großen Periode auf die Leitfähigkeitseigenschaften von Kupfer. Universität zu Köln
Bhatt H, Donaldson KY, Hasselman DPH (1990) Role of the interfacial thermal barrier in the effective thermal diffusivity/conductivity of SiC-fiber-reinforced reaction-bonded silicon nitride. J Am Ceram Soc 73(2):312–316
Geiger AL, Hasselman DPH, Donaldson KY (1993) Effect of reinforcement particle size on the thermal conductivity of a particulate silicon carbide-reinforced aluminium-matrix composite. J Mater Sci Lett 12(6):420–423
Wiener O (1912) Die Theorie des Mischkörpers für das Feld der stationären Strömung. Abhandlungen der mathematisch-physischen Klasse der K. Sächs. Gesellschaft der Wissenschaften, Bd. 32, No. 6
Hashin Z, Shtrikman S (1961) Note on a variational approach to the theory of composite elastic materials. J Frankl Inst 271(4):336–341
Maxwell JC (1873) A treatise on electricity and magnetism. Clarendon Press Series. Clarendon Press, Oxford
Lichtenecker K (1926) Die Ableitung der logarithmischen Mischungsregel aus dem Maxwell-Rayleighschen Schrankenwertverfahren. Kolloidchemische Beihefte 23(1–9):285–291
Bruggeman DAG (1935) Berechnung verschiedener physikalischer Konstanten von heterogenen Substanzen. I. Dielektrizitätskonstanten und Leitfähigkeiten der Mischkörper aus isotropen Substanzen. Ann Phys 416(7):636–664
Hasselman DPH, Johnson LF (1987) Effective thermal conductivity of composites with interfacial thermal barrier resistance. J Compos Mater 21(6):508–515
Wang H, Xu Y, Shimono M, Tanaka Y, Yamazaki M (2007) Computation of interfacial thermal resistance by phonon diffuse mismatch model. Mater Trans 48(9):2349–2352
Simpkin R (2010) Derivation of Lichtenecker's logarithmic mixture formula from Maxwell's equations. IEEE Trans Microw Theory Tech 58(3):545–550
Zhang H, Zeng Y, Zhang H, Guo F (2010) Computational investigation of the effective thermal conductivity of interpenetrating network composites. J Compos Mater 44(10):1247–1260
Open AccessThis article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1.Ruhr-University BochumBochumGermany
2.Bergische Universität WuppertalSolingenGermany
Klein, S., Weber, S. & Theisen, W. J Mater Sci (2015) 50: 3586. https://doi.org/10.1007/s10853-015-8919-y | CommonCrawl |
\begin{document}
\begin{frontmatter}
\title{The Computational Complexity of Angry Birds}
\author{Matthew Stephenson} \address{Department of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands} \ead{[email protected]} \author{Jochen Renz, Xiaoyu Ge} \address{Research School of Computer Science, Australian National University, Canberra, Australia}
\begin{abstract} The physics-based simulation game Angry Birds has been heavily researched by the AI community over the past five years, and has been the subject of a popular AI competition that is currently held annually as part of a leading AI conference. Developing intelligent agents that can play this game effectively has been an incredibly complex and challenging problem for traditional AI techniques to solve, even though the game is simple enough that any human player could learn and master it within a short time. In this paper we analyse how hard the problem really is, presenting several proofs for the computational complexity of Angry Birds. By using a combination of several gadgets within this game's environment, we are able to demonstrate that the decision problem of solving general levels for different versions of Angry Birds is either NP-hard, PSPACE-hard, PSPACE-complete or EXPTIME-hard. Proof of NP-hardness is by reduction from 3-SAT, whilst proof of PSPACE-hardness is by reduction from True Quantified Boolean Formula (TQBF). Proof of EXPTIME-hardness is by reduction from G2, a known EXPTIME-complete problem similar to that used for many previous games such as Chess, Go and Checkers. To the best of our knowledge, this is the first time that a single-player game has been proven EXPTIME-hard. This is achieved by using stochastic game engine dynamics to effectively model the real world, or in our case the physics simulator, as the opponent against which we are playing. These proofs can also be extended to other physics-based games with similar mechanics. \end{abstract}
\begin{keyword} Computational complexity, AI and games, Physics simulation games, Game playing, Angry Birds \end{keyword}
\end{frontmatter}
\section{Introduction} The computational complexity of different video games has been the subject of much investigation over the past decade. However, this has mostly been carried out on traditional style platformers \cite{ori1,ori4} or primitive puzzle games \cite{ori8,ori5}. In this paper, we analyse the complexity of playing different variants of the video game Angry Birds, which is a sophisticated physics-based puzzle game with a semi-realistic and controlled environment \cite{web}. The objective of each level in this game is to hit a number of pre-defined targets (pigs) with a certain number of shots (birds) taken from a fixed location (slingshot), often utilising or avoiding blocks and other game elements to achieve this. An example of an Angry Birds level is shown in Figure 1. Angry Birds is a game of great interest to the wider AI research community due to the complex planning and physical reasoning required to solve its levels, similar to that of many real-world problems. It has also been used in the AIBIRDS competition \cite{ABcomp} which tasks entrants with developing agents to solve unknown Angry Birds levels and aims to promote the integration of different AI areas \cite{extra5}. Many of the previous agents that have participated in this competition employ a variety of AI techniques, including qualitative reasoning \cite{cite3}, internal simulation analysis \cite{cite5,cite4}, logic programming \cite{cite10}, heuristics \cite{cite11}, Bayesian inferences \cite{cite7,cite6}, and structural analysis \cite{cite8}. Despite many different attempts over the past five years the problem is still largely unsolved, with AI approaches far from human-level performance.
\begin{figure}
\caption{Screenshot of a level for the Angry Birds game.}
\end{figure}
Video games have been the subject of much prior research on computational complexity, with many papers proving specific games to be either NP-hard or PSPACE-complete. Examples of past proofs for NP-hardness include games such as Pac-Man \cite{ori5}, Lemmings \cite{ori3}, Portal \cite{ori7}, Candy Crush \cite{ori17}, Bejeweled \cite{ori18}, Minesweeper \cite{ori19}, Tetris \cite{ori22}, and multiple classic Nintendo games \cite{ori1}. Proofs of PSPACE-completeness have also been described for games such as Mario Bros. \cite{ori9}, Doom \cite{ori5}, Pok\'{e}mon \cite{ori1}, Rush Hour \cite{extra3}, Mario Kart \cite{mariokart} and Prince of Persia \cite{ori4}. Interestingly, the video game Braid has been proven to be PSPACE-hard \cite{extra4} but not PSPACE-complete. However, none of these video games have yet been proven EXPTIME-hard. Proofs of EXPTIME-hardness have previously been demonstrated for several traditional two-player board games, including Chess \cite{exp3}, Checkers \cite{exp2} and the Japanese version of Go \cite{exp4}. As far as we are aware, no single-player video game without a traditional opponent has ever been proven EXPTIME-hard before now.
Complexity proofs have also been presented for many different block pushing puzzle games, including Sokoban \cite{ori14}, Bloxorz \cite{ori12} and multiple varieties of PushPush \cite{ori2,ori13,ori15}. These proofs have been used to advance our understanding of motion planning models due to their real-world similarities \cite{ori11}. It is therefore important that the computational complexity of physics-based games is investigated further, as playing video games such as Angry Birds has much in common with other real-world AI and robotics problems \cite{new1}. A physics-based environment is very different from that of traditional games as the attributes and parameters of various objects are often imprecise or unknown, meaning that it is very difficult to accurately predict the outcome of any action taken. Angry Birds also differs from many previously investigated games in terms of its control scheme, as the player always makes their shots from the same location within each level (slingshot position) and can only vary the speed and angle at which each bird travels from it. This heavily reduces the amount of control that the player has over the bird's movement, with the game's physics engine being used to determine the outcome of shots after they are made.
The remainder of this paper is organised as follows: Section 2 formally defines the Angry Birds game, as well as the different variants of it that will be used within our proofs; Section 3 describes the designs and workings of several gates that will be used in later proofs; Sections 4--7 present proofs that particular variants of Angry Birds are PSPACE-complete, PSPACE-hard, NP-hard or EXPTIME-hard respectively; Section 8 provides some suggestions and examples of how the presented proofs could be extended to other games with similar mechanics; Section 9 concludes this work and proposes future possibilities.
\section{Angry Birds Game Definition} Angry Birds is a popular physics-based puzzle game in which the objective is to kill all the pigs within a 2D level space using a set number of birds. Each level has a predefined size and any game element that moves outside of its boundaries is destroyed. The area below the level space is comprised of solid ground that cannot be moved or changed in any way, although other elements can be placed on or bounced off of it. Players make their shots sequentially and in a predefined order, with all birds being fired from the location of the slingshot. The player can alter the speed (up to a set maximum) and angle with which these birds are fired from the slingshot but cannot alter the bird's flight trajectory after doing so, except in the case of some special bird types with secondary effects that can be activated by the player. Once a bird has been fired, it is removed from the level after not moving for a certain period of time. The level space can also contain many other game elements, such as blocks, static terrain, explosives, etc. All game elements have a positive fixed mass, friction, dimensions and shape (based on their type), and no element may overlap any other. Birds that have yet to be fired are the only exception to this rule and may overlap other elements within the level space (i.e. birds do not interact physically with other game elements until fired from the slingshot; they are simply visible within the level for visual effect). The level itself also has a fixed gravitational force that always acts downwards. If two objects collide they will typically bounce off each other or one of the objects will break. Calculations done with regard to object movement and resolving collisions are simulated using a simplified physics engine based on Newtonian mechanics. The exact mathematics and physical rules of how the engine works are not provided as this would be incredibly long and tedious. 
Instead all proofs presented in this paper are done at a high level, allowing the concepts and ideas to be easily extended to other similar games or problems. All level designs presented in subsequent sections have taken the specific physics of the engine into consideration and can be demonstrated to work within the original Angry Birds game environment.
The description of an Angry Birds level can be formalised as $Level = (L_{x}, L_{y}, slingshot, birds, pigs, other)$. \begin{itemize} \item $L_{x}$ is the width of the level in pixels. \item $L_{y}$ is the height of the level in pixels. \item $slingshot$ is the pixel coordinates $(x,y)$ from which the player makes their shots. \item $birds$ is a list containing the number $(N_{b})$, type and order of the birds available. \item $pigs$ is a list containing the type, angle and pixel coordinates $(x,y)$ of all the pigs. \item $other$ is a list containing the type, angle and pixel coordinates $(x,y)$ of all other game elements; including blocks, static terrain and other miscellaneous objects not considered for our presented proofs. \end{itemize}
The top left corner of a level is given the coordinates $(0,0)$ and all other coordinates use this as a reference point. The width and height of a level must be specified as non-negative integer values, and all pixel coordinates must be defined as integers within the level space. All numerical values are assumed to be stored in binary, meaning that the size of a given level description is logarithmic with respect to the values inside of it. The precision with which the angle of a pig or other game element within the level description can be defined is set to some arbitrary value (e.g. 0.01 degrees) as the rotation of objects is not important for the proofs presented in this paper. The type of a bird, pig or other game element is defined using a fixed length word (e.g. ``red'' or ``small''). How the number of birds $(N_{b})$ is defined greatly impacts the complexity of the game, with further details on this point described in Section 2.1. There is also a finite sized list which contains all the possible types of birds, pigs and other game elements, as well as their properties (e.g. mass, friction, size, etc.). This list is fixed in size and so is not relevant to the complexity of the game.
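To make the tuple above concrete, the level description can be sketched as a small data structure. This is purely illustrative; the field names are ours and do not correspond to any actual Angry Birds level format:

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    # Level dimensions in pixels (non-negative integers, stored in binary).
    width: int
    height: int
    # Slingshot position (x, y); all shots originate here.
    slingshot: tuple
    # Explicit list of bird types in firing order, e.g. ["red", "red"].
    birds: list
    # Each pig: (type, angle, (x, y)).
    pigs: list = field(default_factory=list)
    # Blocks, static terrain and other elements: (type, angle, (x, y)).
    other: list = field(default_factory=list)

# A minimal one-bird, one-pig level.
level = Level(width=800, height=600, slingshot=(100, 500),
              birds=["red"], pigs=[("small", 0.0, (700, 550))])
```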
One important point that must be addressed is how the properties of certain game elements (position, angle, speed, etc.) are represented within the game engine. Whilst the initial location of each game element is defined using integer values (pixel coordinates), when the game is being played it is highly likely that the location of an object could be much more precise (i.e. sub-pixel values). For our proofs we assume that the current state of a level, including the current properties of all game elements within it, can always be stored in a polynomial number of bits.
A strategy $(S)$ for solving a given level description consists of a sequence of ordered shots ($A_{1},A_{2},...,A_{N_{b}}$). Each shot $(A_{i})$ consists of a pixel coordinate $(x,y)$ within the level space (the release point), which determines the speed $(v_{b})$ and angle $(a_{b})$ with which each of the available birds is fired, together with a tap time for activating each bird's secondary effect (ability) if it has one. For our presented proofs we do not use any bird abilities, meaning that a particular shot $A_{i}$ can be defined using just a release point $(x,y)$. A level is won/solved once all pigs have been killed, and is lost/unsolved if there are any pigs left once all birds have been used.
While the speed with which a bird can be fired is bounded, and therefore can only be determined to a set level of precision, the angle of a shot is a rational value that is determined by the release point given. The tap time for activating a bird's ability must occur before the bird collides with another game element or moves out of bounds. Therefore, the precision with which shots can be specified, as well as the number of bits required to define a shot and the number of distinct shots possible, is polynomial in the size of the level (i.e. the size of the level dictates the number of possible release points/shot angles and tap times, which in turn determines the number of distinct shots possible), and is exponential relative to the size of the level description (as all numerical values are specified in binary). This means that the number of possible distinct shots that a player can make increases as the size of the level increases (i.e. no fixed arbitrary precision on possible shot angles), but this number is always bounded by the size of the level ($L_{x} \times L_{y}$).
The decision problem we are considering in this paper can be formalised as: \begin{problem}
\problemtitle{\textbf{Angry Birds Formal Decision Problem}}
\probleminput{Angry Birds level description $(Level)$.}
\problemquestion{Is there a strategy $S$ that always results in all $pigs$ being killed?} \end{problem}
This is the same problem that is faced by both level designers and play testers for this game.
For the proofs described in this paper the following game elements are required: \begin{itemize} \item Red Birds: These are the most basic bird type within the game and possess no special abilities. Once the player has determined the speed and angle with which to fire this bird it follows a trajectory determined by both this and the gravity of the level, which the player cannot subsequently affect. This bird has no secondary effect so a tap time is not needed. \item Small Pigs: These are the most basic pig type within the game and are killed once they are hit by either a bird or block. \item Unbreakable Blocks: These are blocks that do not break if they are hit but instead react in a semi-realistic physical way, moving and rotating if forces are applied to them. They are represented in this paper by blocks made of stone. \item Static Terrain: This is simply a set area of the level that cannot move or be destroyed. Static terrain is also not affected by gravity, meaning that it can be suspended in the air without anything else holding it up. It is represented in this paper by plain, untextured, brown areas. The ground at the bottom of the level space also behaves in the same way as static terrain. \end{itemize}
For our proofs, we assume that the size of a level is not bounded by the game engine and that the player's next shot only occurs once all game elements are stationary. We also assume that the physics calculations performed by the game engine are not impacted or affected as the size of the level increases (i.e. no glitches or other simulation errors) and that there is no arbitrary fixed precision with regard to the angles that shots can have (i.e. the number of distinct shots possible always increases and decreases based on the size of the level). As the exact physics engine parameters used for Angry Birds are not currently available for analysis, all assumptions made about the game and its underlying properties are determined through careful observation.
\subsection{Game Variants} While an Angry Birds level that is created using the above description can be shown to be at least NP-hard, by making additional specifications on the type of physics engine used or how a level is described, we can increase its complexity further. Deciding whether a particular version of Angry Birds is NP-hard, PSPACE-hard, PSPACE-complete or EXPTIME-hard is based on a combination of two factors.
\textbf{Number of Birds:} The first factor is whether the number of birds that the player has is polynomial or exponential relative to the size of the level description. In practical terms this means: does the type and order of each bird have to be specified individually (i.e. an explicit list of all bird types, e.g. [red, blue, black, red, yellow]) or can the number of birds simply be stated if all birds are the same type (i.e. [red, 5] rather than [red, red, red, red, red])? If this abbreviated version of $birds$ is valid within the level description then the player can potentially have an exponential number of birds, otherwise only a polynomial number of birds is possible.
\textbf{Probabilistic Model:} The second factor is whether the physics engine used by the game is deterministic or stochastic. A game engine that is deterministic will always base its output only on the player's input, and so the outcome of any action can be calculated in advance. However, if the game engine is stochastic in nature then physical interactions between game elements may be influenced slightly by randomly generated values. This randomness within the engine is used to simulate the effects of unknown variables in the real world. Specific real-world properties such as air movement (wind), temperature fluctuations, differences in the gravitational field, object vibrations, etc., might affect the outcome of a physical action. These effects are usually not modelled and add some stochasticity to the outcome of physical actions. For Angry Birds, the source of this stochasticity comes from a random amount of noise that is included when collisions occur within the game's physics-engine, causing the object(s) involved in the collision to move slightly differently each time. This means that even if the same collision occurs multiple times for the exact same level state, the outcome may not always be the same. These changes are typically not very large, often only affecting the outcome very slightly within a pre-defined range of options. While the player might know the different outcomes that an action could have, they may not know exactly which one will occur until after said action is performed. Please note that for the sake of our proofs we consider a game containing elements with pseudorandom behaviour/physics to still be deterministic, as long as the random seed used to define them can be encoded in a polynomial number of bits (i.e. not truly stochastic) \cite{ori1}.
\begin{table} \begin{center}
\begin{tabular}{|p{3.8cm}|p{4.0cm}|p{2.6cm}|p{3.8cm}|}
\hline
\multicolumn{3}{|c|}{\textbf{Game Version}} & \\ \hline
\textbf{Number of Birds} & \textbf{Probabilistic Model} & \textbf{Acronym} & \textbf{Complexity} \\ \hline Polynomial & Deterministic & ABPD & NP-hard \\ \hline Exponential & Deterministic & ABED & PSPACE-complete \\ \hline Polynomial & Stochastic & ABPS & PSPACE-hard \\ \hline Exponential & Stochastic & ABES & EXPTIME-hard \\ \hline
\end{tabular} \end{center} \captionof{table}{Complexity results summary.} \end{table}
Table 1 shows how altering these two factors within the Angry Birds game affects its complexity. For each of our subsequent complexity proofs, we will assume that we are using the appropriate version of Angry Birds as defined by this table. These different game versions will be abbreviated as ABPD for our NP-hard variant, ABED for our PSPACE-complete variant, ABPS for our PSPACE-hard variant, and ABES for our EXPTIME-hard variant.
\section{Gates} Before presenting our complexity proofs we will first define three different ``gates'' as well as a Crossover, that help dictate the outcomes of shots taken by the player. The design and behaviour of these gates is described here so that they can be easily referred to in later sections. Depending on the specific physics parameters of the environment and objects used, the exact values used to define each gate's design may vary. However, a gate that works for certain velocities and gravitational forces can always be created. The design and parameters of these gates have been fine-tuned for the Angry Birds game engine to prevent elements within them from moving in unintended ways, but could easily be generalised to different game environments.
\subsection{Selector Gate} The Selector gate implementation for Angry Birds is shown in Figure 2. The Selector gate can exist in one of two states, ``select-left'' or ``select-right'', and essentially mimics the behaviour of a 2-output demultiplexer. A summary of the Selector gate behaviour is shown in Table 2.
\begin{figure}
\caption{Models of the Selector gate (a) in the ``select-left'' position and (b) in the ``select-right'' position.}
\end{figure}
\begin{table}[t] \small \begin{center}
\begin{tabular}{|p{2cm}|p{2cm}|p{3.2cm}|p{2cm}|p{3.2cm}|}
\multicolumn{5}{c}{\textbf{Selector gate}} \\ \hline
& \multicolumn{4}{c|}{\textbf{Current gate position}} \\ \hline
& \multicolumn{2}{c|}{select-left} & \multicolumn{2}{c|}{select-right} \\ \hline
\textbf{Entrance} & \textbf{Exit} & \textbf{Next gate position} & \textbf{Exit} & \textbf{Next gate position} \\ \hline
\textbf{$T_{I}$} & $T_{L}$ & select-left & $T_{R}$ & select-right \\ \hline
\textbf{$L_{I}$} & $L_{O}$ & select-left & $L_{O}$ & select-left \\ \hline
\textbf{$R_{I}$} & $R_{O}$ & select-right & $R_{O}$ & select-right \\ \hline
\end{tabular} \end{center} \captionof{table}{Selector gate summary, shows exits and next gate positions for given entrances and current gate positions.} \end{table}
\begin{prop} A bird which enters a Selector gate at $T_{I}$ will exit the Selector gate at $T_{L}$, if and only if the Selector gate is in the select-left position. Otherwise the bird will exit out of $T_{R}$. \end{prop}
\begin{prop} A bird which enters a Selector gate at $L_{I}$ will exit the Selector gate at $L_{O}$ and set the Selector gate to the select-left position. \end{prop}
\begin{prop} A bird which enters a Selector gate at $R_{I}$ will exit the Selector gate at $R_{O}$ and set the Selector gate to the select-right position. \end{prop}
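Abstracting away the physics, the behaviour summarised in Table 2 is a simple two-state machine. A minimal sketch (entrance and exit names follow the table; the physical gate design is of course far more involved):

```python
class SelectorGate:
    """Abstract state machine for the Selector gate (Table 2)."""

    def __init__(self, position="select-right"):
        self.position = position  # "select-left" or "select-right"

    def enter(self, entrance):
        # Return the exit a bird takes, updating the gate state as needed.
        if entrance == "T_I":
            return "T_L" if self.position == "select-left" else "T_R"
        if entrance == "L_I":
            self.position = "select-left"
            return "L_O"
        if entrance == "R_I":
            self.position = "select-right"
            return "R_O"
        raise ValueError(f"unknown entrance: {entrance}")

g = SelectorGate()                 # initially in the select-right position
assert g.enter("T_I") == "T_R"     # deflected to the right
g.enter("L_I")                     # set the gate to select-left
assert g.enter("T_I") == "T_L"     # now exits left
```

The AUT gate of the next subsection differs only in that a traversal through $T_{I}$/$T_{L}$ additionally resets the position to select-right.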
\subsection{Automatically Unsetting Transfer Gate} The Automatically Unsetting Transfer Gate (AUT gate) implementation for Angry Birds is shown in Figure 3. The AUT gate can exist in one of two states, ``select-left'' or ``select-right''. A summary of the AUT gate behaviour is shown in Table 3.
\begin{prop} A bird which enters an AUT gate at $T_{I}$ will exit the AUT gate at $T_{L}$ and set the AUT gate to the select-right position, if and only if the AUT gate is in the select-left position. Otherwise the bird will exit out of $T_{R}$ and not change the AUT gate's position. \end{prop}
\begin{prop} A bird which enters an AUT gate at $L_{I}$ will exit the AUT gate at $L_{O}$ and set the AUT gate to the select-left position. \end{prop}
\begin{figure}
\caption{Models of the AUT gate (a) in the ``select-left'' position and (b) in the ``select-right'' position.}
\end{figure}
\begin{table}[t] \small \begin{center}
\begin{tabular}{|p{2cm}|p{2cm}|p{3.2cm}|p{2cm}|p{3.2cm}|}
\multicolumn{5}{c}{\textbf{AUT gate}} \\ \hline
& \multicolumn{4}{c|}{\textbf{Current gate position}} \\ \hline
& \multicolumn{2}{c|}{select-left} & \multicolumn{2}{c|}{select-right} \\ \hline
\textbf{Entrance} & \textbf{Exit} & \textbf{Next gate position} & \textbf{Exit} & \textbf{Next gate position} \\ \hline
\textbf{$T_{I}$} & $T_{L}$ & select-right & $T_{R}$ & select-right \\ \hline
\textbf{$L_{I}$} & $L_{O}$ & select-left & $L_{O}$ & select-left \\ \hline
\end{tabular} \end{center} \captionof{table}{AUT gate summary, shows exits and next gate positions for given entrances and current gate positions.} \end{table}
\subsection{Random Gate} The Random gate implementation for Angry Birds is shown in Figure 4. The Random gate can only be used in variants of Angry Birds with a stochastic game engine (ABPS and ABES), and essentially mimics the behaviour of a random binary splitter.
\begin{prop} A bird which enters a Random gate at point $T$ has a non-zero probability of exiting at point $L$ ($P(L)>0$) and a non-zero probability of exiting at point $R$ ($P(R)>0$). \end{prop}
\begin{hproof} When a bird enters a Random gate at $T$, it will hit the tip of the point. When this happens the physics engine will use randomly generated values to slightly alter the physics of the impact, with three possible outcomes: the bird falls down the left tunnel (exit at $L$), the bird falls down the right tunnel (exit at $R$), the bird remains on the point and falls neither left nor right (does not exit the gate). Property 3.8 is true if the probability for each of the first two outcomes occurring is greater than zero, which is the case for the stochastic Angry Birds game environment. \end{hproof}
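Ignoring the physics of the collision noise, the Random gate acts as a stochastic splitter with three outcomes. A minimal sketch of this abstract behaviour (the three outcomes are those listed in the proof above; the uniform choice between them is our simplification, since only $P(L)>0$ and $P(R)>0$ matter):

```python
import random

def random_gate(rng=random):
    # Abstract model: the bird hits the tip of the point and the engine's
    # collision noise sends it left, right, or leaves it stuck on the point.
    return rng.choice(["L", "R", "stuck"])

# Over many trials both L and R occur with non-zero frequency,
# which is all that Property 3.8 requires.
outcomes = [random_gate() for _ in range(10_000)]
assert outcomes.count("L") > 0 and outcomes.count("R") > 0
```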
\begin{figure}
\caption{Model of the Random gate.}
\label{fig:test2}
\label{fig:test1}
\end{figure}
\subsection{Crossover} The Crossover implementation for Angry Birds is shown in Figure 5.
\begin{prop} A bird which enters a Crossover at $D_{I}$ will exit the Crossover at $D_{O}$. \end{prop}
\begin{prop} A bird which enters a Crossover at $V_{I}$ will exit the Crossover at $V_{O}$. \end{prop}
\begin{figure}
\caption{Selector gate (left), AUT gate (middle) and Random gate (right) compact representations, used in our subsequent proof diagrams.}
\end{figure}
\subsection{Gate Representation}
For the diagrams presented in the following proofs we will use a more compact way of representing gates, see Figure 6. Squares represent Selector gates, circles represent AUT gates, and triangles represent Random gates, with the location of the arrows representing the entries to and exits from each gate. Arrows leading from the exit of one gate to the entrance of another represent tunnels that can be used to connect multiple gates together. A bird will travel along this tunnel, provided that the start of the tunnel is not below the end (the bird is essentially falling down the tunnel). If a particular arrow is not given for a specific gate, then that entry or exit is not used (blocked off with static terrain). Any bird that attempts to leave through an exit that is blocked off will be trapped inside the gate, with the bird subsequently disappearing after a short period of time. Crossovers do not have a compact representation, and are instead used to deal with any intersecting tunnels between gates. Note that even though the exact entry and exit locations on the compact gate representations do not match those on the actual gate models/designs, additional tunnels and Crossovers can be easily used to adjust the entry and exit locations for each gate. Therefore, an Angry Birds reduction can be represented by an equivalent ``circuit diagram''.
\subsection{Terminology}
Selector gates with $T_{R}$ blocked off can be thought of as being very similar to that of a ``door'' mechanism used in several previous video game complexity proofs \cite{ori1,ori5,ori6}. For the sake of both intuitive names and consistent terminology with prior work, we define new terms for our Selector and AUT gates. If a gate is in the select-left position then we say that the gate is ``open'', and if the gate is in the select-right position then we say that the gate is ``closed''. If a gate is open then we say that it can be ``traversed'' by firing a bird into $T_{I}$, which will then exit out of $T_{L}$. A gate can be ``opened'' by firing a bird into $L_{I}$ or ``closed'' by firing a bird into $R_{I}$. Entrances $T_{I}$, $L_{I}$ and $R_{I}$ are referred to as the ``traverse'', ``open'' and ``close'' paths respectively. In subsequent proof diagrams that use the compact gate representation shown in Figure 6, Selector or AUT gates that are initially closed (i.e. select-right) will have a single line border while those that are initially open (i.e. select-left) will have a double line border. This terminology only applies to the Selector and AUT gates, not the Random gate.
\section{PSPACE-Completeness of ABED (exponential and deterministic)} For our proof of PSPACE-hardness, we will reduce from the PSPACE-complete problem TQBF, which consists of determining if a given quantified 3-CNF Boolean formula is ``true''. In order to demonstrate that Angry Birds is PSPACE-hard, it must be possible to construct a level that represents any given quantified Boolean formula, which can only be solved if the quantified Boolean formula is true (i.e. the player will be able to kill the pig(s) within the level by making shots with their bird(s), if and only if the quantified Boolean formula that the level was based on is true). We can also extend this proof to PSPACE-completeness if the problem of solving ABED levels is also in PSPACE. Due to the length and complexity of our presented proofs, this section will be split into the following sub-sections: Section 4.1 describes a high-level overview of the framework that we will use to prove that solving ABED levels is PSPACE-hard; Section 4.2 describes how we can create the gadgets for this framework within the ABED environment; Section 4.3 describes a method for constructing this framework within the ABED environment using our designed gadgets; Section 4.4 describes a possible winning strategy for an ABED level based on an example quantified Boolean formula; and Section 4.5 proves that solving ABED levels is also in PSPACE.
\subsection{Framework} For our proof of PSPACE-hardness by TQBF reduction, we will use a heavily modified version of the general framework described in \cite{ori1,ori5,ori6}. This framework uses a systematic procedure to verify if a quantified Boolean formula is true. This process can be defined in general terms, allowing it to be applied to any game environment (including Angry Birds).
\begin{description} \item[\textbf{TQBF verification process:}] \end{description} \begin{enumerate} \item The player initially chooses the value of all existentially quantified variables, and the value of all universally quantified variables is set to positive. \item Check that all clauses within the quantified Boolean formula are satisfied (if not then cannot proceed). \item If all universally quantified variables have a negative value, then the quantified Boolean formula is true (verification process complete). \item The universal quantifier ($UQ_{R}$) with the smallest scope (rightmost universal quantifier in Boolean formula) that has a positive value for its variable, has the value of its variable set to negative. \item The player can change the value of any existentially quantified variables within the scope of $UQ_{R}$, and all universally quantified variables within the scope of $UQ_{R}$ are set to positive. \item Go to step 2. \end{enumerate} As an example, given a quantified Boolean formula with three universally quantified variables (x,y,z) of decreasing scope size, the order in which the universal variables are verified is as follows: (1,1,1) (1,1,0) (1,0,1) (1,0,0) (0,1,1) (0,1,0) (0,0,1) (0,0,0).
This process can be successfully completed if and only if the given quantified Boolean formula is true.
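The order in which the universal assignments are visited by this process (all-positive first, flipping the rightmost, smallest-scope quantifier first) is simply a binary countdown over the universal variables. A small sketch reproducing the example order for three variables:

```python
from itertools import product

def universal_order(n):
    # All assignments to n universal variables of decreasing scope,
    # starting all-positive and flipping the rightmost (smallest-scope)
    # quantifier first -- i.e. counting down in binary from 1...1 to 0...0.
    return list(product([1, 0], repeat=n))

# Matches the example order given in the text for (x, y, z):
assert universal_order(3) == [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0),
                              (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
```

Since there are $2^{n}$ such assignments, the verification process may take exponentially many rounds, which is why the ABED variant (with an exponential supply of birds) is the one used for this reduction.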
While we will still be using this same TQBF verification process for our proposed Angry Birds proof, the overall design of the framework for applying this procedure will be significantly different from those of previous game examples. This is mostly due to the fact that Angry Birds does not have a single controllable ``Avatar'', and thus has no easy way of achieving a sense of ``player traversal''. The general design of our TQBF verification framework for Angry Birds is shown in Figure 7. This framework can be used to prove that a game is PSPACE-hard by constructing the necessary ``gadgets'' (each box within the general framework diagram). Each of these gadgets serves a distinct purpose and simplifies the complex physics of Angry Birds into more easily manageable sections (for our proofs, each gadget is made up of multiple interconnected gates). For each existential quantifier in the Boolean formula there is an associated Existential Quantifier (EQ) gadget, for each Clause in the Boolean formula there is an associated Clause gadget, and for each universal quantifier in the Boolean formula there is both an associated Universal Quantifier True (UQ-T) gadget and Universal Quantifier False (UQ-F) gadget. There is also a Finish gadget, which the player must be able to ``pass through'' in order to solve the level. Figure 7 demonstrates an example arrangement of these gadgets using the quantified Boolean formula $\exists x \forall y \exists z \forall w ((x \vee y \vee w) \wedge (y \vee \neg z \vee \neg w) \wedge (\neg x \vee \neg y \vee z))$ as an example (each variable in a Boolean formula can have either a ``positive'' or ``negative'' truth value). Using this framework, if the necessary gadgets can be created and arranged in our ABED environment within polynomial time, then ABED is PSPACE-hard. 
While it may initially seem unclear as to how exactly this framework can be used to prove PSPACE-hardness, the following sections will describe the function of each gadget, as well as how these gadgets combine together within the framework to apply our described TQBF verification process.
\begin{figure}
\caption{General framework diagram for PSPACE-hardness (ABED).}
\end{figure}
\subsubsection{Formal framework reference terms} In this section we define some formal terms that can be used to reference specific gadgets within our framework:
\begin{definition} \emph{(enabled, disabled, current, next, next adjacent, next UQ-F, previous, first, last):} Each gadget can either be \emph{``enabled''} or \emph{``disabled''} (exactly what this means for each type of gadget is discussed in the next section). The \emph{``current''} gadget ($Q_{i}$) is the (vertically) lowest enabled gadget in the general framework diagram (Figure 7). The \emph{``next''} gadget ($Q_{i+1}$) for the current gadget is indicated by the arrows in our general framework diagram, which represent the scope of each quantifier. For each UQ-F gadget there are two possible next gadgets, the next gadget for the UQ-T gadget associated with its variable (horizontal output arrow in Figure 7) referred to as the \emph{``next adjacent''} gadget, and the UQ-F gadget directly below it (vertical output arrow in Figure 7) referred to simply as the \emph{``next UQ-F''} gadget (note that the last UQ-F gadget has no next UQ-F gadget). The \emph{``previous''} gadget ($Q_{i-1}$) refers to the most recent current gadget (i.e. essentially the opposite of the next gadget). We also define the terms \emph{``first''} gadget and \emph{``last''} gadget with respect to the vertical position of specific gadget types in our general framework diagram. The highest of a particular gadget type is the first gadget of that type, whilst the lowest is the last gadget (e.g. for Figure 7, the UQ-F Gadget for the variable $w$ is the first UQ-F gadget, whilst the EQ Gadget for $z$ is the last EQ Gadget). \end{definition}
\subsubsection{Gadget design requirements} In this section we describe the purpose and requirements of each gadget, which our specific ABED gadget implementations and level construction will need to satisfy:
\textbf{EQ gadget:} If an EQ gadget is enabled then the player can use it to set the value of its associated variable to either positive or negative. Doing this disables the EQ gadget and allows the player to enable the next gadget.
\textbf{UQ-T gadget:} If a UQ-T gadget is enabled then it automatically sets the value of its associated variable to positive. The player can then enable the next gadget which also disables the UQ-T gadget.
\textbf{UQ-F gadget:} If a UQ-F gadget is enabled then it alternates between allowing the player to do either of the following two actions: ($A$) the player can set the value of its associated variable to negative, which disables the UQ-F gadget and allows the player to enable the next adjacent gadget; or ($B$) the player can disable the UQ-F gadget and enable the next UQ-F gadget. Note that, as previously mentioned, the last UQ-F gadget does not have a next UQ-F gadget. Attempting to enable the next UQ-F gadget from the last UQ-F gadget will instead attempt to pass through the Finish gadget and solve the level.
\textbf{Clause gadget:} A Clause gadget is ``activated'' if and only if its associated clause is satisfied (i.e. at least one of the literals in the associated clause is true). The level can be solved if and only if all Clause gadgets can be activated for each possible value combination of all universally quantified variables (abbreviated to UQVC). This means that the level can be solved if and only if the given quantified Boolean formula is true. If the current gadget is a Clause gadget that is both enabled and activated, then the next gadget can be enabled.
\textbf{Finish gadget:} The Finish gadget can be enabled if and only if all Clause gadgets are both enabled and activated.
\subsubsection{Framework design requirements} The gadget associated with the quantifier with the largest scope (leftmost quantifier in Boolean Formula) is initially enabled (gadget pointed to by Start label in our general framework diagram), with the UQ-T version of the gadget being enabled if it is a universal quantifier, whilst all other gadgets are disabled. The player can enable the first UQ-F gadget at any time, but doing so when the Finish gadget is disabled will put the level into an unsolvable state (prevents the player from ever being able to pass through the Finish gadget). Enabling the first UQ-F gadget also disables all Clause and Finish gadgets.
Essentially, the Finish gadget is used to maintain the ordering of the framework, by automatically making the level unsolvable if the player attempts to open the first UQ-F gadget at any time except after checking that all Clause gadgets are activated (i.e. once we reach the bottom of the framework we start again from the top). This action of enabling the first UQ-F gadget begins a new ``framework cycle'', with each framework cycle testing a specific UQVC. Once all possible UQVCs have been tested, and assuming that the Finish gadget has not made the level unsolvable, then the player can pass through the Finish gadget and solve the level.
\subsubsection{Framework process summary} In summary, the player will initially enable and then disable all EQ and UQ-T gadgets, either choosing the value of the associated variable or having it automatically set to positive whilst doing so. The first Clause gadget is then enabled and if it is activated, then the next Clause gadget can also be enabled. If all Clause gadgets are activated then eventually they will all be sequentially enabled, after which the Finish gadget can be enabled as well. The player can then enable the first UQ-F gadget (begin new framework cycle) without putting the level into an unsolvable state, which also disables all Clause and Finish gadgets. Each time a UQ-F gadget is enabled the outcome will alternate between setting the value of the associated variable to negative and then enabling the next adjacent gadget, or enabling the next UQ-F gadget (both outcomes also disable the current UQ-F gadget). This is equivalent to the next adjacent gadget being enabled if the associated variable was positive and the next UQ-F gadget being enabled if the associated variable was negative. If the next adjacent gadget was enabled, then the player can change the values of any variables associated with EQ gadgets after this point in the framework as well as any subsequent UQ-T gadgets setting the value of their associated variable to positive, after which if all Clause gadgets are still activated then the Finish gadget will be enabled again. This process repeats $2^{U}$ times, where $U$ is the number of universal quantifiers in the Boolean formula. Once the player can enable the next UQ-F gadget for all UQ-F gadgets within a single framework cycle (i.e. once all universally quantified variables are negative) a bird will attempt to pass through the Finish gadget. 
If the player has ensured that they only enabled the first UQ-F gadget when the Finish gadget was enabled, then the bird will successfully pass through the Finish gadget and kill a single pig to solve the level. While this process may initially seem somewhat confusing, following through our framework using this system will confirm that all UQVCs within the quantified Boolean formula are indeed tested.
This means that solving the level is equivalent to finding a solution to the given quantified Boolean formula. Thus, we can show that ABED is PSPACE-hard if the required gadgets can be successfully implemented within the game's environment and the reduction from quantified Boolean Formula to level description can be achieved in polynomial time.
\subsection{Gadget Design} This section deals with the implementation and arrangement of the necessary framework gadgets for the ABED game environment.
All Selector and AUT gates within our gadgets are initially closed except for those in the gadget associated with the leftmost quantifier from the Boolean Formula (pointed to by Start label), which will initially have certain gates open corresponding to the gadget's own definition of being enabled, and the Finish gadget which will be discussed later.
\subsubsection{Existential Quantifier (EQ) Gadget} The structure of the EQ gadget implementation for ABED is shown in Figure 8. This gadget is comprised of two Selector gates $(S_{1},S_{2})$ and four AUT gates $(A_{1},A_{2},A_{3},A_{4})$, where all AUT gates have traverse paths that can be shot into by the player. An EQ gadget is enabled if $A_{1}$, $A_{2}$, $S_{1}$ and $S_{2}$ are open, otherwise it is disabled. A truth table for this gadget is shown in the Appendix (Figure C.32).
\begin{figure}
\caption{Structure of the Existential Quantifier (EQ) gadget.}
\end{figure}
\begin{lemma} An EQ gadget can be used to select one of two binary choices, positive or negative, for an associated variable, if and only if it is enabled. \end{lemma}
\begin{hproof} AUT gates $A_{1}$ and $A_{2}$ are used to indicate the choice of which value to set the associated variable to. The player fires a bird into the traverse path of $A_{1}$ to indicate a positive value, and $A_{2}$ to indicate a negative value. Traversing $A_{1}$ results in $A_{1}$ and $S_{2}$ being closed and $A_{3}$ being opened, while traversing $A_{2}$ results in $A_{2}$ and $S_{1}$ being closed and $A_{4}$ being opened. Opening either $A_{3}$ or $A_{4}$ sets the value of the associated variable to either positive or negative respectively.
As the traverse path of $A_{2}$ directly leads into the close path of $S_{1}$, and the traverse path of $A_{1}$ leads into the close path of $S_{2}$ (albeit through $S_{1}$ first), it is impossible to have $A_{2}$ open and $S_{1}$ closed, $S_{1}$ open and $A_{2}$ closed, or $A_{1}$ open and $S_{2}$ closed. The value of the associated variable can only be set to positive by opening $A_{3}$. This can only be done by traversing $A_{1}$ if both it and $S_{1}$ are open. Likewise, the value can only be set to negative by opening $A_{4}$, which is only possible if both $A_{2}$ and $S_{2}$ are open.
Thus, by combining all this information we can see that neither $A_{3}$ nor $A_{4}$ can be opened if the gadget is disabled. Therefore, the player can only choose the value of the associated variable if the EQ gadget is enabled. \end{hproof}
\begin{lemma} An EQ gadget will become disabled after selecting a value for the associated variable. \end{lemma}
\begin{hproof} As $A_{1}$ and $A_{2}$ are AUT gates, we know that traversing either of them will close the gate, and thus disable the EQ gadget. Traversing either of these two gates is the only way of selecting a value for the associated variable, so the EQ gadget will clearly be disabled after doing so. \end{hproof}
\begin{lemma} The next gadget after an EQ gadget can be enabled if and only if a value has been selected for the associated variable. \end{lemma}
\begin{hproof} The next gadget is enabled by firing a bird into the traverse path of either $A_{3}$ or $A_{4}$. Opening either $A_{3}$ or $A_{4}$ sets the value of the associated variable to either positive or negative respectively. Therefore, the value for the associated variable must be selected before the next gadget can be enabled. \end{hproof}
Essentially, traversing gate $A_{1}$ or $A_{2}$ is used to set the value for the associated variable to either positive or negative respectively (i.e. setter gates). Traversing gate $A_{3}$ or $A_{4}$ is used to enable the next gadget once the player has chosen the value of the associated variable (i.e. checker gates). Which of these two gates ($A_{3}$ or $A_{4}$) is used to achieve this is based on which value was selected for the associated variable, and traversing either gate achieves the same end result. Gates $S_{1}$ and $S_{2}$ ensure that the player can only indicate a single value for the associated variable each time the EQ gadget is enabled.
To summarise, for each existential quantifier in the given quantified Boolean formula there will be an associated EQ gadget. If an EQ gadget is enabled then the player can use it to set the value of its associated variable to either positive or negative, after which the EQ gadget is disabled and the next gadget is enabled. Once the value of a variable associated with an EQ gadget has been set, it cannot be changed during this framework cycle. The only time the value of an existentially quantified variable can be changed (i.e. its associated EQ gadget is re-enabled), is if it is within the scope of a universal quantifier that has its value changed (perhaps not immediately but will occur before the clauses are next checked for activation).
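To make the gate logic concrete, the following is a minimal Python sketch of the EQ gadget's behaviour as described by the lemmas above (the class and method names are ours; gate names follow Figure 8). It models only the gadget's abstract state, not the game's physics, and for simplicity starts the gadget enabled.

```python
class EQGadget:
    """State-machine model of the EQ gadget (gate names as in Figure 8)."""

    def __init__(self, enabled=True):
        # Enabled: A1, A2, S1 and S2 open; A3 and A4 start closed.
        self.A1 = self.A2 = self.S1 = self.S2 = enabled
        self.A3 = self.A4 = False
        self.value = None  # associated variable: True, False or unset

    @property
    def enabled(self):
        return self.A1 and self.A2 and self.S1 and self.S2

    def choose_positive(self):
        """Traverse A1: needs A1 and S1 open; closes A1 and S2,
        opens A3 and sets the associated variable to positive."""
        if not (self.A1 and self.S1):
            return False
        self.A1 = self.S2 = False
        self.A3 = True
        self.value = True
        return True

    def choose_negative(self):
        """Traverse A2: needs A2 and S2 open; closes A2 and S1,
        opens A4 and sets the associated variable to negative."""
        if not (self.A2 and self.S2):
            return False
        self.A2 = self.S1 = False
        self.A4 = True
        self.value = False
        return True

    def can_enable_next(self):
        """The next gadget is enabled by traversing A3 or A4, so a
        value must have been selected first."""
        return self.A3 or self.A4


eq = EQGadget()
assert not eq.can_enable_next()   # no value selected yet
assert eq.choose_negative()       # set the variable to negative
assert eq.value is False and not eq.enabled
assert not eq.choose_positive()   # a second choice is blocked (S1 closed)
assert eq.can_enable_next()       # A4 is open: next gadget can be enabled
```

The demonstration at the end mirrors the lemmas: selecting a value disables the gadget, a second selection within the same framework cycle is impossible, and only after a selection can the next gadget be enabled.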
\subsubsection{Universal Quantifier True (UQ-T) Gadget} The structure of the UQ-T gadget implementation for ABED is shown in Figure 9. This gadget is comprised of a single AUT gate $(A_{1})$, that has a traverse path which can be shot into by the player. A UQ-T gadget is enabled if $A_{1}$ is open, otherwise it is disabled. A truth table for this gadget is shown in the Appendix (Figure C.33).
\begin{figure}
\caption{Structure of the Universal Quantifier True (UQ-T) gadget.}
\end{figure}
\begin{lemma} A UQ-T gadget will set the value of the associated variable to positive, if and only if it is enabled. \end{lemma}
\begin{hproof} Opening $A_{1}$ is the only way to enable the gadget, and doing so automatically sets the value of the associated variable to positive. \end{hproof}
\begin{lemma} A UQ-T gadget will become disabled after the associated variable has been set to positive. \end{lemma}
\begin{hproof} Although the value for the associated variable is automatically set to positive when the gadget is enabled, the player cannot enable any more gadgets until they traverse $A_{1}$. Doing this closes $A_{1}$ and thus disables the gadget. \end{hproof}
\begin{lemma} The next gadget after a UQ-T gadget can be enabled if and only if the associated variable has been set to positive. \end{lemma}
\begin{hproof} The next gadget is enabled by firing a bird into the traverse path of $A_{1}$. As opening $A_{1}$ sets the value of the associated variable to positive, this must clearly have already been done in order for the player to traverse $A_{1}$. \end{hproof}
\subsubsection{Universal Quantifier False (UQ-F) Gadget} The structure of the UQ-F gadget implementation for ABED is shown in Figure 10. This gadget is comprised of two Selector gates $(S_{1},S_{2})$ and three AUT gates $(A_{1},A_{2},A_{3})$, where $A_{1}$, $S_{1}$ and $A_{3}$ have traverse paths that can be shot into by the player. A UQ-F gadget is enabled if $A_{1}$, $S_{1}$ and $S_{2}$ are open, otherwise it is disabled. A UQ-F gadget is ``unlocked'' if $A_{2}$ is open, otherwise it is ``locked''. Enabling the first UQ-F gadget also disables all Clause and Finish gadgets. A truth table for this gadget is shown in the Appendix (Figure C.34).
\begin{figure}
\caption{Structure of the Universal Quantifier False (UQ-F) gadget.}
\end{figure}
\begin{lemma} A UQ-F gadget can be used to set the value of an associated variable to negative, if and only if it is enabled. \end{lemma}
\begin{hproof} Initially, the only thing that a player can do with a UQ-F gadget after it has been enabled is to traverse either $A_{1}$ or $S_{1}$. Traversing $S_{1}$ would be pointless at this stage as $A_{2}$ is not yet open, so all that would happen is that $S_{2}$ would be closed. Traversing $A_{1}$ instead would close both $A_{1}$ and $S_{1}$ but would also open $A_{2}$ and $A_{3}$, as well as setting the value of the associated variable to negative.
As the traverse path of $A_{1}$ directly leads into the close path of $S_{1}$ it is impossible to have one open/closed and not the other (both gates must always be in the same position). If both are closed then the player cannot open $A_{2}$ and $A_{3}$. If $S_{2}$ is closed then it cannot be traversed which also means the player cannot open $A_{2}$ or $A_{3}$. Thus, the value of the associated variable can only be set to negative if the gadget is enabled. \end{hproof}
\begin{lemma} A UQ-F gadget will become disabled and unlocked after the associated variable has been set to negative. \end{lemma}
\begin{hproof} The only way to set the value of the associated variable to negative is to open $A_{3}$. The only way to achieve this is to traverse $A_{1}$, which closes both $A_{1}$ and $S_{1}$ as well as opening $A_{2}$, causing the UQ-F gadget to be both disabled and unlocked. \end{hproof}
\begin{lemma} The next adjacent gadget after a UQ-F gadget can be enabled if and only if the associated variable has been set to negative. \end{lemma}
\begin{hproof} Traversing $A_{3}$ is the only way to enable the next adjacent gadget. As opening $A_{3}$ sets the value of the associated variable to negative, this must clearly have already been done first in order for the player to traverse $A_{3}$. \end{hproof}
\begin{lemma} The next UQ-F gadget after a UQ-F gadget can be enabled if and only if the (current) UQ-F gadget is both enabled and unlocked. \end{lemma}
\begin{hproof} The only way to enable the next UQ-F gadget is to traverse $A_{2}$ via $S_{1}$. After the player has just unlocked a UQ-F gadget they cannot traverse $A_{2}$ as $S_{1}$ has been closed. Instead they must go back through the framework again, starting from the next adjacent gadget, which can be enabled by traversing $A_{3}$. Once the UQ-F gadget is enabled again the player can then traverse $S_{1}$ (as $A_{2}$ is now open) which enables the next UQ-F gadget (or attempts to pass through the Finish gadget). Traversing $A_{1}$ instead would just result in the same outcome as the first time the gadget was enabled and so would be a redundant action. \end{hproof}
\begin{lemma} A UQ-F gadget will become disabled and locked after the next UQ-F gadget is enabled. \end{lemma}
\begin{hproof} The only way to enable the next UQ-F gadget is to traverse $S_{1}$. Doing so clearly results in $S_{2}$ and $A_{2}$ being closed in the process (disables and locks the gadget). The player cannot re-open $A_{2}$ as $S_{2}$ is now closed, so the gadget will remain locked until it is re-enabled. \end{hproof}
Essentially, traversing gate $A_{1}$ is used to set the value of the associated variable to negative, while traversing gate $S_{1}$ is used to enable the next UQ-F gadget. The specific wiring arrangement of these gates, along with the gate $S_{2}$, ensures that the player can only select one of these two options each time the UQ-F gadget is enabled. Gate $A_{2}$ ensures that the player can only enable the next UQ-F gadget every other time the current UQ-F gadget is enabled. Traversing gate $A_{3}$ is used to enable the next adjacent gadget, if the player has set the value of the associated variable to negative (i.e. traversed $A_{1}$ instead of $S_{1}$).
To summarise, for each universal quantifier in the given quantified Boolean formula, there will be both an associated UQ-T gadget and UQ-F gadget. Each time the UQ-T gadget is enabled there is only one possible outcome: the value of its associated variable is set to positive, the UQ-T gadget is disabled and the next gadget is enabled. Each time the UQ-F gadget is enabled there are two possible outcomes: ($A$) the value of its associated variable is set to negative, the UQ-F gadget is disabled and the next adjacent gadget is enabled, or ($B$) the UQ-F gadget is disabled and the next UQ-F gadget is enabled (or attempt to pass through the Finish gadget if this is the last UQ-F gadget). The player can always choose outcome $A$, but can only choose outcome $B$ if outcome $A$ was chosen the last time the UQ-F gadget was enabled. However, choosing outcome $A$ when outcome $B$ is possible will never yield a better result, and will only lead to repeat checks of already tested UQVCs. Assuming that the player always selects outcome $B$ whenever they can, each UQ-F gadget will alternate between outcomes $A$ and $B$ each time it is enabled.
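The alternation between outcomes $A$ and $B$ can be captured in an outcome-level Python sketch (gate detail abstracted away; names are ours), assuming the player always takes outcome $B$ whenever it is available:

```python
class UQFGadget:
    """Outcome-level model of a UQ-F gadget: outcome 'A' sets the
    variable negative and unlocks the gadget (opens A2); outcome 'B'
    is only possible when unlocked, and locks it again (closes A2)."""

    def __init__(self):
        self.unlocked = False

    def enable(self):
        # A rational player takes outcome B whenever it is available.
        if self.unlocked:
            self.unlocked = False
            return "B"  # enable next UQ-F gadget (or attempt to finish)
        self.unlocked = True
        return "A"  # set variable negative, enable next adjacent gadget


uqf = UQFGadget()
assert [uqf.enable() for _ in range(4)] == ["A", "B", "A", "B"]
```

Each pair of enablings thus corresponds to one negative visit of the associated universally quantified variable, which is what drives the $2^{U}$ framework cycles.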
\begin{figure}
\caption{Structure of the Clause gadget.}
\end{figure}
\subsubsection{Clause Gadget} The structure of the Clause gadget implementation for ABED is shown in Figure 11. This gadget is comprised of six Selector gates $(S_{1},S_{2},S_{3},S_{4},S_{5},S_{6})$, where $S_{1}$, $S_{2}$ and $S_{3}$ have traverse paths that can be shot into by the player. Selector gates $S_{1}$, $S_{2}$ and $S_{3}$ must always be in the same position (closed or open). A Clause gadget is enabled if $S_{1}$, $S_{2}$ and $S_{3}$ are all open, and is disabled if $S_{1}$, $S_{2}$ and $S_{3}$ are all closed. Each Clause gadget is associated with a particular clause from the quantified Boolean formula, and each of the Selector gates $S_{4}$, $S_{5}$ and $S_{6}$ is associated with a specific literal from that clause. The first Clause gadget is enabled by the last Quantifier gadget and the Finish gadget is enabled by the last Clause gadget. A truth table for this gadget is shown in the Appendix (Figure C.35).
When the value of a variable is modified using a Quantifier gadget (exit paths labelled as ``modify Clause gadgets''), the bird on this path will fall down tunnels which lead to the first Clause gadget that contains the variable associated with it. If the value of the variable was set to positive then the bird opens any of $S_{4}$, $S_{5}$ or $S_{6}$ that are associated with the variable's positive literal, whilst closing any of those that are associated with the variable's negative literal (vice versa if the value of the variable was set to negative). This bird then travels into the next Clause gadget that contains this variable, and the process repeats until all applicable Clause gadgets have been visited. In this way, each Clause gadget represents a chosen clause from our quantified Boolean formula, and Selector gates $S_{4}$, $S_{5}$ and $S_{6}$ are either open or closed depending on whether their associated literal is true or not. Therefore, we can say that a Clause gadget is activated if and only if any of $S_{4}$, $S_{5}$ or $S_{6}$ are open.
\begin{lemma} The next gadget after a Clause gadget can be enabled if and only if the Clause gadget is enabled and activated. \end{lemma}
\begin{hproof} The next gadget after a Clause gadget is enabled by firing a bird into the traverse path of $S_{1}$, $S_{2}$ or $S_{3}$. This shot will only enable the next gadget if $S_{4}$, $S_{5}$ or $S_{6}$ is open respectively. This means that at least one of $S_{4}$, $S_{5}$ or $S_{6}$ must be open (i.e. the Clause gadget must be activated) in order for the player to enable the next gadget. This obviously cannot be performed if the Clause gadget is disabled. \end{hproof}
To summarise, a player can only enable the next Clause gadget (or enable the Finish gadget if this is the last Clause gadget) if at least one of the literals within the current Clause gadget is true, and thus the clause is activated. Enabling the Finish gadget can therefore only be achieved if all Clause gadgets are activated by the current combination of variable values (i.e. all clauses in the quantified Boolean formula are satisfied).
\subsubsection{Finish Gadget} The structure of the Finish gadget implementation for ABED is shown in Figure 12. This gadget is comprised of a Selector gate $(S_{1})$ and an AUT gate $(A_{1})$, but the player cannot directly fire into either of them. Traversing $S_{1}$ can also be referred to as ``passing through'' the Finish gadget, and results in the level being solved. The Finish gadget can exist in one of three states: enabled, disabled and unsolvable. The Finish gadget is enabled if $A_{1}$ is open and $S_{1}$ is open, disabled if $A_{1}$ is closed and $S_{1}$ is open, and unsolvable if $S_{1}$ is closed. The Finish gadget is initially disabled ($A_{1}$ is closed and $S_{1}$ is open). A truth table for this gadget is shown in the Appendix (Figure C.36).
\begin{figure}
\caption{Structure of the Finish gadget.}
\end{figure}
\begin{lemma} The player can enable the first UQ-F gadget without making the level unsolvable, if and only if the Finish gadget is enabled. \end{lemma}
\begin{hproof} The three states that a Finish gadget can be in are all mutually exclusive. Also as there is no way of opening $S_{1}$, if the Finish gadget is ever in the unsolvable state then it can never be taken out of this state. Therefore, as traversing $S_{1}$ is the only way to solve the level, if the Finish gadget is ever unsolvable then the level is unsolvable. While closing $S_{1}$ does not immediately satisfy the loss condition for the level, and allows the player to continue to make further shots, the player can no longer reach the win condition so their loss is guaranteed (eventually the player will run out of birds). We can also observe that the Finish gadget becomes disabled if and only if it is enabled and the first UQ-F gadget is enabled, and that the Finish gadget becomes unsolvable if and only if it is disabled and the first UQ-F gadget is enabled. Therefore, the only way for us to enable the first UQ-F gadget without making the level unsolvable is if the Finish gadget is enabled. \end{hproof}
Essentially, as the Finish gadget can only be enabled if the last (and by extension all) Clause gadget(s) are enabled and activated, coupled with the fact that opening the first UQ-F gadget disables all Clause and Finish gadgets, we can ensure that the first UQ-F gadget can only be enabled directly after the Clause gadgets have been checked for activation. Also, as the only way to solve the level is to traverse $S_{1}$, which can only happen from the last UQ-F gadget, we can guarantee that all UQVCs are tested before the level can be solved.
\subsection{Level Construction} This section deals with the reduction process from any given quantified 3-CNF Boolean formula to an equivalent ABED level description, using our previously described framework and gadgets. As Angry Birds is a game that relies heavily on physics simulations to resolve player actions, the relative positions of the gadgets are extremely important. Elements within the game are bound by the physics of their environment and the only immediate control the player has is with regard to the shots they make. For this reason, it is necessary to confirm that the gadgets described can be successfully arranged throughout the level space.
\begin{lemmaz} Any given TQBF problem can be reduced to an ABED level description in polynomial time. \end{lemmaz}
\begin{zproof} As each of the necessary gadgets can be created using a constant amount of space and elements, they can also be described in polynomial time. Consequently, the only remaining requirement is that all gadgets can be successfully arranged throughout the level in polynomial time, relative to the size of the quantified 3-CNF Boolean formula. As the number of gadgets required is clearly polynomial, it suffices to describe a polynomial time method for determining the location of each gadget, as well as the level's width, height, slingshot position and number of birds.
Whilst the exact calculations for determining gadget positions for a given quantified Boolean formula can be determined, they are exceptionally long and somewhat irrelevant to this proof. Instead, we will simply show that the tunnels out from each gadget can connect to their appropriate destinations in a polynomial amount of space, and can therefore also be defined in polynomial time. The number of tunnels out of each gadget type is constant, and the number of each gadget type is polynomial. Because of this, there are only a polynomial number of tunnels to consider and each of these can always be connected to their appropriate destination gadget using a polynomial amount of space. This means that the entire framework must also be polynomial in size, and can therefore be described in polynomial time. We also know that there are a polynomial number of entrance tunnels to these gadgets that the player can fire into, determined based on the number of quantifier and clause gadgets. Each of these entrance tunnels can simply start above the framework (facing downwards) and then lead into the required gadget entrances. This allows us to define the total width $(W_{T})$ of all entrance tunnels that the player can fire into, which is also polynomial in size.
Although the speed at which a bird can be fired from the slingshot is bounded (less than or equal to a maximum velocity $v_{M}$), we can still ensure that all gadgets are reachable from the slingshot by placing them lower in the level. As there is no air resistance, the trajectory of a fired bird follows a simple parabolic curve for projectile motion, $y=x\tan(\phi)-\frac{g}{2v_{0}^{2}\cos^{2}(\phi)}x^{2}$, where $v_{0}$ is the initial velocity of the fired bird, $\phi$ is the initial angle with which the bird was fired, and $g$ is the gravitational acceleration of the level. While it is highly likely that Angry Birds has a maximum speed that an element could possess, this is not addressed by the formula given (i.e. we assume a theoretical worst case scenario of no terminal velocity). This means that in order for us to ensure that all gadgets are reachable, they must be placed at a distance below the slingshot equal to or greater than $-W_{T}+\frac{g}{v_{M}^{2}}W_{T}^{2}$. We can also use the same formula to calculate the maximum height that a bird fired from the slingshot can reach, $\frac{v_{M}^{2}}{2g}$. Using this we can set the position of the slingshot to $(0,\frac{-v_{M}^{2}}{2g})$ and place all entrance tunnels that the player can fire into the required distance below this in a horizontal alignment against the left side of the level. In addition, we need to guarantee that there are enough release points available to allow a bird to be shot into any entrance tunnel for any gadget. To ensure this, we simply move everything constructed so far $W_{T}$ pixels to the right. Lastly, the number of birds that the player has is equal to $(C+2E+3U)2^U$ (although often this many are not needed), where $C$ is the number of clauses, $E$ is the number of existential quantifiers, and $U$ is the number of universal quantifiers, within the given quantified Boolean formula. \end{zproof}
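The quantities used in this construction are straightforward to evaluate. The following Python sketch (the function name and the sample values for $W_{T}$, $v_{M}$ and $g$ are ours, purely illustrative) computes them for the example formula of Figure 7, which has $C=3$ clauses, $E=2$ existential quantifiers and $U=2$ universal quantifiers:

```python
def level_parameters(C, E, U, W_T, v_M, g):
    """Bird count, minimum gadget depth below the slingshot, and the
    maximum height reachable by a fired bird, per the reduction."""
    birds = (C + 2 * E + 3 * U) * 2 ** U
    min_depth = -W_T + (g / v_M ** 2) * W_T ** 2
    max_height = v_M ** 2 / (2 * g)
    return birds, min_depth, max_height


# Figure 7 formula: 3 clauses, 2 existential, 2 universal quantifiers.
# W_T, v_M and g below are illustrative placeholder values.
birds, depth, height = level_parameters(C=3, E=2, U=2,
                                        W_T=100.0, v_M=50.0, g=9.8)
assert birds == (3 + 2 * 2 + 3 * 2) * 2 ** 2  # 52 birds
```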
An example diagram of a fully constructed structure, using the same quantified Boolean formula as in Figure 7, is shown in the Appendix (Figure A.27).
As we have constructed the necessary gadgets and can position them within the game's environment in polynomial time, the problem of solving levels for ABED is PSPACE-hard.
\begin{theorem} The problem of solving levels for ABED is PSPACE-hard. \end{theorem}
\subsection{Winning Strategy (Example)} We now describe an example of a winning strategy for solving an ABED level description that has been reduced from the same quantified Boolean formula as in Figure 7. For this level description, one strategy that would solve the level would be to set the value of $x$ to positive at the start (after which the EQ gadget associated with $x$ is never enabled again), and set the value of $z$ to be the same as $y$ whenever the EQ gadget associated with $z$ is enabled. The framework will then be cycled four times, for each combination of values for $y$ and $w$, giving the following variable value combinations when the Clause gadgets are enabled: \begin{itemize} \item Framework cycle \#1: $x=1, y=1, z=1, w=1$ \item Framework cycle \#2: $x=1, y=1, z=1, w=0$ \item Framework cycle \#3: $x=1, y=0, z=0, w=1$ \item Framework cycle \#4: $x=1, y=0, z=0, w=0$ \end{itemize}
By comparing these variable values against our quantified Boolean formula, we can see that all clauses are satisfied for each framework cycle, allowing us to enable the Finish gadget and begin the next framework cycle. Essentially, this particular strategy ensures that all Clause gadgets for the given quantified Boolean formula are activated for all UQVCs. As both universally quantified variables ($y$ and $w$) are set to negative on the fourth framework cycle, the fifth framework cycle will allow us to pass through the Finish gadget and solve the level. A table detailing the 36 shots needed to solve this level is shown in the Appendix (Figure B.31).
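This strategy can also be checked mechanically. A small Python sketch (variable and function names are ours) evaluates the clauses of the Figure 7 formula under the strategy $x=1$, $z=y$ for every UQVC of $(y,w)$:

```python
# Clauses of the Figure 7 formula; a literal is (variable, polarity),
# where polarity False denotes a negated variable.
clauses = [
    [("x", True), ("y", True), ("w", True)],
    [("y", True), ("z", False), ("w", False)],
    [("x", False), ("y", False), ("z", True)],
]

def all_clauses_satisfied(assignment):
    """A clause is satisfied if at least one of its literals is true."""
    return all(
        any(assignment[var] == polarity for var, polarity in clause)
        for clause in clauses
    )

# Strategy: x is always positive and z mirrors y; (y, w) runs through
# all four universally quantified value combinations (UQVCs).
for y in (True, False):
    for w in (True, False):
        assert all_clauses_satisfied({"x": True, "y": y, "z": y, "w": w})
```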
\subsection{In PSPACE} As we have already shown that ABED is PSPACE-hard, the only remaining requirement for completeness is that it also be in PSPACE. The problem of solving levels for ABED can be defined as within PSPACE if it is possible to solve any given level in polynomial space relative to the size of the level's description, and there are a finite number of states and strategies for solving any given level.
\begin{lemmaz} Any given ABED level can be solved in polynomial space. \end{lemmaz}
\begin{zproof} All game elements can be described using a polynomial amount of memory (e.g. position, velocity, size, etc.), the size of a level does not increase (pre-defined out of bounds limits), no additional elements are added to a level whilst playing (only removed), and every game element behaves deterministically based on a function of the player's actions. Because of this, the current state of a level can always be stored in polynomial space. Thus, the state space of a level can be searched non-deterministically for any possible solutions. This means that the problem is in NPSPACE. We can then use Savitch's theorem \cite{ori10} that NPSPACE = PSPACE to conclude that the problem of solving levels for ABED is indeed in PSPACE. \end{zproof}
\begin{lemmaz} There are a finite number of states and strategies for any given ABED level. \end{lemmaz}
\begin{zproof} The state of a level is defined based on the current attribute values of all the elements within it. These attribute values are all defined as rational numbers that each take up a finite amount of memory. Therefore, it must also be possible to define the current state of any given level in a finite amount of memory. Thus, the total number of states for any given level is finite. As the number of shots and release points for any given level is polynomial, relative to the size of the level's description, the number of possible strategies for a level is also finite. \end{zproof}
Thus, as ABED is both PSPACE-hard and in PSPACE, the problem of solving levels for ABED is PSPACE-complete.
\begin{theorem} The problem of solving levels for ABED is PSPACE-complete. \end{theorem}
\section{PSPACE-Hardness of ABPS (polynomial and stochastic)} \subsection{Framework} Whilst the problem of solving levels for ABED has been proven PSPACE-complete, it is also possible to show that solving levels for ABPS is PSPACE-hard. This version of Angry Birds no longer allows for an exponential number of birds, but does feature a stochastic game engine. Our proof of PSPACE-hardness for ABPS is based on the same TQBF problem as for ABED, and uses a very similar framework, see Figure 13 (which also uses the same example quantified Boolean formula as Figure 7).
\begin{figure}
\caption{General framework diagram for PSPACE-hardness (ABPS).}
\end{figure}
The EQ and Clause gadgets from the ABED proof remain the same, except that all Clause gadgets are initially set up as if all universally quantified variables are negative. We no longer require UQ-F or Finish gadgets, and UQ-T gadgets are replaced by a new Universal Quantifier Random (UQ-R) gadget. Each UQ-R gadget has a non-zero and non-certain probability of setting the value of its associated variable to positive when it is enabled. If all Clause gadgets are activated after the player has selected a value for each existentially quantified variable, and the value for each universally quantified variable has been (randomly) either set to positive or remains negative, then the player will be able to kill a single pig within the level which replaces the Finish gadget. We also only need as many birds as there are variables and clauses within the given quantified Boolean formula (i.e. the number of birds needed is polynomial).
Essentially, we are no longer testing out every possible UQVC, but are testing a single possible UQVC that is selected at random. As our formal decision problem posed at the beginning of this paper was to determine if there exists a strategy that ALWAYS solves a given level, these two testing approaches are equivalent (as long as the probability of selecting each possible UQVC is greater than zero).
\subsection{Universal Quantifier Random (UQ-R) Gadget} The structure of the UQ-R gadget implementation for ABPS is shown in Figure 14. This gadget is comprised of an AUT gate $(A_{1})$ and a Random gate $(R_{1})$, where $A_{1}$ has a traverse path which can be shot into by the player. A UQ-R gadget is enabled if $A_{1}$ is open, otherwise it is disabled. This gadget behaves in a similar manner to the UQ-T gadget from our ABED proof, except that instead of always setting the value of the associated Boolean variable to positive it has a non-zero and non-certain probability of doing so. A truth table for this gadget is shown in the Appendix (Figure C.37).
\begin{lemma} A UQ-R gadget has a non-zero and non-certain probability of setting the value of an associated variable to positive, if and only if it is enabled. \end{lemma}
\begin{hproof} Opening $A_{1}$ is the only way to enable the gadget, and doing this causes a bird to also enter $R_{1}$. This bird then has a non-zero probability of leaving $R_{1}$ through the left exit, but also has a non-zero probability of not leaving $R_{1}$ (either by being trapped in the right exit or by remaining on the point inside the gate). If the bird leaves $R_{1}$ through the left exit then the value of the associated variable is set to positive. \end{hproof}
\begin{figure}
\caption{Structure of the Universal Quantifier Random (UQ-R) gadget.}
\end{figure}
Properties and justifications for how the UQ-R gadget is disabled and how the next gadget is enabled can be easily generalised from Section 4.2.2.
Essentially, as all Clause gadgets are initially configured as if all universally quantified variables are negative, when the Clause gadgets are checked for activation there is a non-zero probability that each universally quantified variable will remain negative, but also a non-zero probability that its value will have been changed to positive (i.e. each UQVC has a chance greater than zero of being selected as the outcome).
As the framework for this proof is very similar to that for ABED, the gadgets can be arranged using roughly the same process as described in Section 4.3, except that UQ-T gadgets are replaced by UQ-R gadgets, and no UQ-F or Finish gadgets are necessary. An example diagram of a fully constructed structure, using the same quantified Boolean formula as in Figure 13, is shown in the Appendix (Figure A.28).
As we have constructed the necessary gadgets and can position them within the game's environment in polynomial time, the problem of solving levels for ABPS is PSPACE-hard.
\begin{theorem} The problem of solving levels for ABPS is PSPACE-hard. \end{theorem}
\subsection{Winning Strategy (Example)} The same winning strategy that was used in Section 4.4 ($x=1, z=y$) can also be used here for the same quantified Boolean formula, see Figure 13. In this case, however, the framework does not need to be cycled multiple times to test each UQVC, but instead one of the four possible UQVCs will be randomly selected. As all clauses remain satisfied for our strategy regardless of which UQVC is selected, we can guarantee that the player will always be able to kill the pig and thus solve the level.
\section{NP-Hardness of ABPD (polynomial and deterministic)} \subsection{Framework} By using a very similar framework to those used in the last two PSPACE-hard proofs, we can also show that solving levels for ABPD is NP-hard. While this is the ``weakest'' complexity class proven in this paper, it applies to the most restricted version of Angry Birds, which allows only a polynomial number of birds and features a deterministic physics engine. Our proof of NP-hardness reduces from the NP-complete problem 3-SAT, which involves deciding whether a given 3-CNF Boolean formula is satisfiable. The framework we use for this proof is essentially a reduced version of that used for the TQBF problem, see Figure 15, and is similar to that used for many past platformer games \cite{ori1,ori7,ori2}. Figure 15 uses the Boolean formula $(x \vee y \vee z) \wedge (\neg x \vee y \vee \neg z) \wedge (\neg x \vee \neg y \vee \neg z)$ as an example.
\begin{figure}
\caption{General framework diagram for NP-hardness (ABPD).}
\end{figure}
Essentially any 3-CNF Boolean formula can be represented using our TQBF framework by simply making all variables existentially quantified. This removes the need for any UQ-F, UQ-T or Finish gadgets, relying only on the EQ and Clause gadgets (i.e. for each variable in the Boolean formula there is an associated EQ gadget and for each clause in the Boolean formula there is an associated Clause gadget). If all Clause gadgets are activated after the player has selected a value for each variable, then the player will be able to kill a single pig within the level that replaces the Finish gadget. We also only need as many birds as there are variables and clauses within the given Boolean formula.
As the framework for this proof is very similar to that of ABED, the gadgets can be arranged using roughly the same process as described in Section 4.3, except that no UQ-F, UQ-T or Finish gadgets are necessary. An example diagram of a fully constructed structure, using the same Boolean formula as in Figure 15, is shown in the Appendix (Figure A.29).
As we have constructed the necessary gadgets (although no new gadgets were added for this proof) and can position them within the game's environment in polynomial time, the problem of solving levels for ABPD is NP-hard.
\begin{theorem} The problem of solving levels for ABPD is NP-hard. \end{theorem}
We should point out that an NP-hard proof for a version of Angry Birds which had a similar environment to ABPD was previously presented by us in \cite{AIIDE1715829}. However, this proof also used ``breakable blocks'' in addition to the other game elements mentioned in our requirements. This proof was arguably simpler than the one which we present here, but due to the fact that it required additional game elements, we treat this new proof as an improved alternative to that presented in \cite{AIIDE1715829}.
\subsection{Winning Strategy (Example)} We now describe an example of a winning strategy for solving an ABPD level description that has been reduced from the same Boolean formula as in Figure 15. For this level description, one strategy that would solve the level would be to set the value of $x$ to positive, the value of $y$ to positive, and the value of $z$ to negative. This will ensure that all Clause gadgets are activated, allowing us to kill the pig and solve the level.
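This assignment can be checked directly; the function below is simply a transcription of the formula from Figure 15.

```python
# Direct transcription of the formula used in Figure 15:
# (x or y or z) and (not x or y or not z) and (not x or not y or not z)
def check(x, y, z):
    return ((x or y or z)
            and ((not x) or y or (not z))
            and ((not x) or (not y) or (not z)))

assert check(True, True, False)      # the strategy described above
assert not check(True, True, True)   # the third clause would fail
```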
\section{EXPTIME-hardness of ABES (exponential and stochastic)}
\subsection{EXPTIME-Complete Original Game} To show that solving levels for ABES is EXPTIME-hard we will reduce from a known EXPTIME-complete decision problem. For our proof we will use the problem of determining whether a player can force a victory for the game G2, as shown in \cite{exp1}. G2 is a game that is played between two people, with each player attempting to win the game before the other player does. A full and formal definition of G2 can be found in \cite{exp1}, but we provide here a simplified explanation of how it is played.
The game is set up as follows. Each player is given a separate 12-DNF Boolean formula which they are attempting to make true. Each of the variables that are used in these Boolean formulas is assigned to either player 1 or player 2. The initial values of the variables are also set to either positive or negative.
The game is played as follows. Each player takes turns making a move (starting with player 1), where they can change the value of at most one variable assigned to them (changing the value of no variables is referred to as ``passing''). The first player to have their Boolean formula ``true'' after making a move wins the game. This victory condition is equivalent to saying that whichever player's Boolean formula is satisfied first wins, but if both players' Boolean formulas are satisfied simultaneously then the player that made the most recent move wins.
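As an illustration of these rules, the sketch below simulates a G2 game against a random opponent; the move encoding, the pass handling, the random policy, and the turn horizon are our own assumptions. DNF formulas are lists of terms, each a list of (variable, wanted value) literals.

```python
import random

# A DNF formula holds iff some term has all of its literals satisfied.
def dnf_satisfied(terms, values):
    return any(all(values[v] == want for v, want in term) for term in terms)

def play(p1_formula, p2_formula, assignment, values, max_turns=20, rng=None):
    """Simulate G2 against a random opponent. `assignment` maps each
    variable to the player (1 or 2) that owns it."""
    rng = rng or random.Random(0)
    for turn in range(max_turns):
        mover = 1 if turn % 2 == 0 else 2       # player 1 moves first
        own = [v for v, owner in assignment.items() if owner == mover]
        choice = rng.choice(own + [None])       # None models a pass
        if choice is not None:
            values[choice] = not values[choice] # flip at most one variable
        formula = p1_formula if mover == 1 else p2_formula
        if dnf_satisfied(formula, values):      # mover's formula is true
            return mover                        # => mover wins
    return None  # no winner within the horizon
```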
If, after the game has been set up, a player can guarantee that they will win regardless of the other player's actions, then that player can force a victory; otherwise they cannot. Determining whether player 1 can force a victory is the known EXPTIME-complete decision problem that we will be using for our proof.
\begin{problem}
\problemtitle{\textbf{G2 Formal Decision Problem}}
\probleminput{12-DNF Boolean formula for each player, variable assignment, initial variable values.}
\problemquestion{Can player 1 force a victory?} \end{problem} From this point on we will refer to player 1 as the ``player'' and player 2 as the ``opponent''.
While many classical two-player games such as Chess, Go and Checkers contain the mechanics necessary to mimic games such as G2, Angry Birds does not on first glance appear to be a suitable choice. Angry Birds is a single-player game and so does not inherently feature an opponent, in the traditional sense, against which to play. However, we can instead use the stochasticity of the physics engine as the opponent we will be facing. This stochasticity allows us to create situations where the player is uncertain about the exact outcome of shots that they make. By utilising this element of uncertainty in shot outcomes, we can create a ``random'' opponent that will make random moves after each of the player's moves. Even though an opponent that just makes random moves may seem very easy to beat, the complexity of determining whether the player can force a victory for a given G2 instance is the same when facing both an opponent that plays optimally and one that plays randomly, as it is always possible that the random opponent will, by pure chance, actually play optimally (i.e. the player must assume Murphy's Law). Even if the player can beat a random opponent many times for a particular G2 instance, if there exists some small probability that the player will not win then they cannot force a victory (i.e. guaranteeing victory against an opponent that makes random moves is the same as against an opponent that plays perfectly). Exactly how this simulation of a random opponent by our stochastic physics engine is achieved will be discussed in greater detail later. All that needs to be understood now is that the decision problem we are considering involves determining whether the player can force a victory (i.e. guarantee that they can always solve the level) without knowing exactly how the game's physics will respond to their actions.
\subsection{Framework} For our proof of EXPTIME-hardness we describe a method of combining several new types of gadget to create an ABES representation for any given setup of the game G2. A framework diagram showing how these gadgets connect within the level space is shown in Figure 16, which uses the example Boolean formulas $(x \wedge \neg y \wedge z) \vee (\neg x \wedge y \wedge w)$ for the player and $(x \wedge y \wedge \neg z) \vee (\neg x \wedge y \wedge \neg w)$ for the opponent. For each Clause in either the player's or opponent's Boolean formula there is an associated Clause gadget. The framework also contains an Ordering, Random, Choice and Result gadget, the purpose of which will be discussed later.
\begin{figure}
\caption{General framework diagram for EXPTIME-hardness.}
\end{figure}
As there is no traditional opponent to make moves for themselves, we must design the level such that the player is forced to make a move for the opponent after they have made their own move. The player first makes their move by either changing the value of a variable assigned to them or by passing. The player can then check whether their own Boolean formula is satisfied, although this is optional and not enforced by the level's design. The player is then forced to randomly change the value of a variable assigned to the opponent (passing is also a possible outcome) and check whether the opponent's Boolean formula is satisfied, before they are allowed to make another move for themselves.
\subsubsection{Gadget design requirements} The \textit{Ordering} gadget ensures that the correct order of actions is followed by the player. Essentially, all actions must be repeatedly performed in the following order: \begin{enumerate} \item The player makes their move (can effectively skip this step by passing). \item The player checks whether their Boolean formula is satisfied (can skip this step). \item The player makes a random move for the opponent (cannot skip but passing may occur as a random possibility). \item The player checks whether the opponent's Boolean formula is satisfied (cannot skip this step). \end{enumerate}
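The four-step cycle above can be expressed as a small state machine; the action labels (`move`, `check_self`, `opp_move`, `check_opp`) and the checker itself are our own encoding, not part of the construction.

```python
# Our own sketch of the action order enforced on the player:
# 'move'/'check_self' for steps 1-2, 'opp_move'/'check_opp' for steps 3-4.
def valid_order(actions):
    state = "move"                      # which action is expected next
    for a in actions:
        if state == "move":             # step 1 (a pass is still a move)
            if a != "move":
                return False
            state = "check_self?"
        elif state == "check_self?":    # step 2 is optional
            if a == "check_self":
                state = "opp_move"
            elif a == "opp_move":
                state = "check_opp"
            else:
                return False
        elif state == "opp_move":       # step 3 cannot be skipped
            if a != "opp_move":
                return False
            state = "check_opp"
        else:                           # step 4 cannot be skipped
            if a != "check_opp":
                return False
            state = "move"
    return True

assert valid_order(["move", "check_self", "opp_move", "check_opp",
                    "move", "opp_move", "check_opp"])
assert not valid_order(["move", "move"])            # two player moves in a row
assert not valid_order(["move", "opp_move", "check_self"])
```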
The \textit{Choice gadget} allows the player to make a single choice about which of their assigned variables will change in value during their move. The player should also have the option to pass if they do not wish to change the value of a variable. When a bird enters the Choice gadget via the Ordering gadget, the location at which it will exit is based on this choice made by the player. Depending on where the bird exits, the value of a single variable assigned to the player will either be changed or kept the same (pass).
The \textit{Random gadget} makes a random choice between multiple options, based on the stochasticity of the game engine. When a bird enters the Random gadget there are several possible locations where it can exit, each of which has a probability of occurring that is greater than zero. Depending on where the bird exits, the value of a single variable assigned to the opponent will either be changed or kept the same (pass).
Each \textit{Clause gadget} represents a specific clause from either the player's or opponent's Boolean formula, and is ``activated'' if its associated clause is satisfied (i.e. all literals within the associated clause are true). This means that checking if either the player's or opponent's Boolean formula is satisfied is equivalent to checking if any of their associated Clause gadgets are activated. If any of their associated Clause gadgets are activated during this checking step, then a bird will travel into the Result gadget. Notions of ``first'', ``last'', ``next'' and ``previous'' Clause gadget are the same as for Section 4.1.
The \textit{Result gadget} is used to decide whether the level has been won or lost, depending on if the player's or opponent's Boolean formula is satisfied first after they have made a move. If the player's Boolean formula is satisfied first, then the player can travel to the Result gadget from one of their activated Clause gadgets, allowing them to ``pass through'' the Result gadget and win the level. If the opponent's Boolean formula is satisfied first, then the player will be forced to travel to the Result gadget from one of the opponent's activated Clause gadgets, which will then close the Result gadget and prevent the player from ever being able to pass through it in the future (i.e. makes the level unsolvable). Essentially, the location and outcome of the first bird to enter the Result gadget depends on whether it came from one of the player's or opponent's Clause gadgets.
\subsubsection{Framework design requirements} The player fires a bird into the Ordering gadget to make the majority of actions, as well as into the Choice gadget to dictate which of their assigned variables will change in value for their next move. For our general framework diagram (Figure 16), an arrow into the left side of a Clause gadget indicates that the value of a variable is being changed, while an arrow into the right side indicates that the Clause gadget is being checked for activation (i.e. check if associated clause is satisfied). The arrow into the left side of the Result gadget signifies that the level is lost (unsolvable), while the arrow into the right side signifies that the level is won (solved). Lastly, the arrow into the left side of the Choice gadget carries out the player's chosen move, while the arrow into the right side allows the player to specify the move they wish to make next.
This means that solving the level is equivalent to winning a game of G2 (against a random opponent). Thus, we can show that ABES is EXPTIME-hard if the required gadgets can be successfully implemented within the game's environment and the reduction from G2 setup to level description can be achieved in polynomial time.
\subsection{EXPTIME-Hardness} This section deals with the implementation and arrangement of the necessary framework gadgets for the ABES game environment, as well as the reduction process from any given setup of G2 to an equivalent ABES level description.
\subsubsection{Ordering Gadget} The purpose of the Ordering gadget is to ensure that all actions are carried out in the correct order. The structure of the Ordering gadget implementation for ABES is shown in Figure 17. This gadget is comprised of two Selector gates $(S_{1},S_{2})$ and an AUT gate $(A_{1})$. $A_{1}$ and $S_{1}$ are initially open while $S_{2}$ is initially closed. There are four entry points to the Ordering gadget $(SO_{I},SP_{I},CO_{I},CP_{I})$ and four corresponding exit points $(SO_{O},SP_{O},CO_{O},CP_{O})$. A bird which enters the Ordering gadget at a given entry point will either leave at the corresponding exit point or fail to leave the Ordering gadget, based on whether certain gates within the Ordering gadget are open or closed. Each exit point leads to the following gadgets/actions: $SP_{O}$ to the Choice gadget (\textbf{P}layer makes their move to \textbf{S}et the truth value for one of their assigned variables), $SO_{O}$ to the Random gadget (make a random move for the \textbf{O}pponent to \textbf{S}et the truth value for one of their assigned variables), $CP_{O}$ to the \textbf{P}layer's \textbf{C}lause gadgets (check whether the player's Boolean formula is satisfied), and $CO_{O}$ to the \textbf{O}pponent's \textbf{C}lause gadgets (check whether the opponent's Boolean formula is satisfied). A deterministic finite state machine (DFSM) showing the relations between gate states, entry points and exit points is shown in Figure 18 (note that the first value given for each arrow is the entry point, while the second value is the exit point; exit points marked as ``-'' indicate that the bird did not leave the Ordering gadget). A truth table for this gadget is shown in the Appendix (Figure C.38).
\begin{figure}
\caption{Structure of the Ordering gadget, together with the DFSM relating its gate states, entry points and exit points.}
\label{fig:test1}
\label{fig:test2}
\end{figure}
Because both the player and opponent can pass as a possible move, and the player does not have to check whether their Boolean formula is satisfied after making their move, we can ensure that the correct order of actions is followed if the following two properties hold.
\begin{lemma} If the player makes a move, they must make a random move for the opponent and then check whether the opponent's Boolean formula is satisfied, before they can make another move. \end{lemma}
\begin{hproof} Using the DFSM in Figure 18, we can see that after a bird exits via $SP_{O}$, a bird must exit via $SO_{O}$ followed by a bird exiting via $CO_{O}$, before a bird can exit via $SP_{O}$ again. Note that it is also possible for a bird to exit via $SO_{O}$ and/or $CO_{O}$ multiple times before a bird exits via $SP_{O}$ again, but as both the player and opponent have passing as a possible move, there is no issue with this (any duplicate opponent moves can simply be treated as the player passing, and as the opponent can potentially pass their move as a random outcome we only need to check if the opponent's Boolean formula is satisfied if the player didn't pass on their previous move). \end{hproof}
\begin{lemma} If the player makes a random move for the opponent, they must check whether the opponent's Boolean formula is satisfied before they can check if the player's Boolean formula is satisfied. \end{lemma}
\begin{hproof} Again using the DFSM in Figure 18, we can see that after a bird exits via $SO_{O}$ a bird must also exit via $CO_{O}$, before a bird can exit via $CP_{O}$. This essentially ensures that the player is only able to check if their Boolean formula is satisfied between making their own move and making a random move for the opponent. \end{hproof}
\subsubsection{Choice Gadget} The purpose of the Choice gadget is to allow the player to make a decision about which of their assigned variables will change in value. An example of a Choice gadget implementation for ABES with four possible exit points is shown in Figure 19. This gadget is comprised of a sequence of AUT gates ($A_{1}$, $A_{2}$, $A_{3}$,..., $A_{(2V_{p})}$, where $V_{p}$ is the number of variables assigned to the player). Each AUT gate is associated with a particular value for one of the player's variables (i.e. a literal). The player can directly open any AUT gate within the Choice gadget at any time, and a bird attempts to traverse this sequence of AUT gates whenever it leaves the Ordering gadget from exit $SP_{O}$.
\begin{figure}
\caption{Example model of a Choice gadget with four possible outcomes.}
\end{figure}
\begin{lemma} The Choice gadget can be used to indicate which of the player's variables will change in value (i.e. which literal to make true). \end{lemma}
\begin{hproof} The first AUT gate in the sequence that is closed represents the literal that the player wishes to make true. For the example shown, the player wished to choose the literal represented by the third AUT gate, so has opened all the other AUT gates before it. Essentially, when a bird attempts to traverse this sequence of AUT gates, the first AUT gate that it is unable to traverse represents the selection of its associated literal to make true. \end{hproof}
\begin{lemma} A bird which enters the Choice gadget from exit $SP_{O}$ of the Ordering gadget will exit the Choice gadget at a location unique to the literal selected by the player. \end{lemma}
\begin{hproof} Whilst the player can open any number of AUT gates within the Choice gadget, they can only be traversed by a bird arriving from exit $SP_{O}$ of the Ordering gadget. If an AUT gate is open then a bird can traverse it (closing the AUT gate in the process) and then attempt to traverse the next AUT gate in the sequence. The first AUT gate in this sequence that is closed will prevent the bird from being able to traverse it, meaning it will instead leave the AUT gate at exit $T_{R}$. The bird will then travel into the Clause gadgets and make the desired change, based on the literal associated with this closed AUT gate. The $T_{R}$ exit for each AUT gate in this gadget essentially represents a unique literal that the player can make true during their move, and so the location where a bird exits the gadget is unique to the chosen literal. \end{hproof}
In summary, the player can determine the exit point for any bird that enters the Choice gadget from exit $SP_{O}$ of the Ordering gadget, by opening all AUT gates before the desired exit point. Each exit point from the Choice gadget then sets the literal associated with its AUT gate to true for both the player's and opponent's Clause gadgets.
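This traversal behaviour can be sketched as follows, with gate states encoded as booleans (our own encoding).

```python
# Our own encoding of the Choice gadget's AUT-gate sequence: gates[i] is
# True when AUT gate i is open. A bird closes each open gate it traverses
# and exits at the first closed gate, selecting that gate's literal.
def choice_exit(gates):
    for i, is_open in enumerate(gates):
        if not is_open:
            return i          # bird leaves via this gate's T_R exit
        gates[i] = False      # traversing an AUT gate closes it
    return None               # every gate was open: the player passed

gates = [True, True, False, False]   # player opened the first two gates
assert choice_exit(gates) == 2       # third gate's literal is selected
assert gates[:2] == [False, False]   # traversed gates are now closed
```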
\begin{lemma} The player can pass if they do not wish to change the value of any of their assigned variables. \end{lemma}
\begin{hproof} A pass can be made either by selecting a literal that is already true, or by opening all AUT gates in the Choice gadget. \end{hproof}
\begin{lemma} The width and height of the Choice gadget, as well as the number of game elements it contains, is polynomial in the number of variables assigned to the player. \end{lemma}
\begin{hproof} Let $A_{W}$, $A_{H}$ and $A_{E}$ be constants representing the width, height and number of elements (respectively) for an AUT gate. The width, height and number of elements for a Choice gadget is therefore bounded by the polynomial expressions $(2V_{p})A_{W}$, $(2V_{p})A_{H}$ and $(2V_{p})A_{E}$ respectively. \end{hproof}
\subsubsection{Random Gadget} The purpose of the Random gadget is to randomly select one of several options, each of which is associated with a particular value for one of the opponent's variables (i.e. the Random gadget uses the inherent uncertainty in the outcome of collisions to make a random move for the opponent). Each of these options should have a probability greater than zero of occurring, and the player cannot be allowed to influence or know the outcome of the Random gadget in advance. An example of a Random gadget implementation for ABES with four possible exit points is shown in Figure 20. This gadget is comprised of multiple Random gates ($R_{1}$, $R_{2}$, $R_{3}$,..., $R_{(2V_{o}-1)}$), where $V_{o}$ is the number of variables assigned to the opponent, that are arranged in a binary tree fashion. The first row has one Random gate, the next row has two, the next four, and so on. A bird enters at the top of this tree of Random gates whenever it leaves the Ordering gadget from exit $SO_{O}$.
\begin{figure}
\caption{Example model of a Random gadget with four possible outcomes.}
\end{figure}
\begin{lemma} The Random gadget can be used to randomly select which of the opponent's variables will change in value (i.e. which literal to make true) or pass, using the stochasticity of the game engine. \end{lemma}
\begin{hproof} As any bird which enters a Random gate has a probability greater than zero of leaving the Random gate at either exit point, then regardless of how many Random gates the bird interacts with inside our Random gadget, the probability of the bird leaving the Random gadget at any specific exit point is also greater than zero (i.e. by combining together multiple Random gates, it is possible to create a Random gadget that can select between any number of different options). Note that if the bird remains at any point within the Random gadget, then this can simply be treated as a pass. Each exit point from the Random gadget is associated with a particular literal for one of the opponent's variables. The bird will then travel into the Clause gadgets and make the desired change, based on the literal associated with the exit point. If the literal associated with the bird's exit point is already true then nothing will change (treated as a pass). \end{hproof}
In summary, any bird that enters the Random gadget from exit $SO_{O}$ of the Ordering gadget has a probability greater than zero of leaving the gadget at any specific exit point. Each exit point from the Random gadget then sets the literal associated with it to true for both the player's and opponent's Clause gadgets.
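The probability claim can be illustrated for the binary-tree arrangement; the per-gate exit probabilities below are illustrative placeholders, with the leftover mass modelling a bird that never leaves a gate (a pass).

```python
from itertools import product

# Illustrative per-gate probabilities: a bird goes left or right with
# probability 0.45 each, and stays trapped (a pass) with the remainder.
def leaf_probabilities(depth, p_left=0.45, p_right=0.45):
    """Probability of exiting at each of the 2**depth exit points of a
    binary tree of Random gates."""
    probs = []
    for path in product("LR", repeat=depth):
        prob = 1.0
        for step in path:
            prob *= p_left if step == "L" else p_right
        probs.append(prob)
    return probs

probs = leaf_probabilities(2)        # four exit points, as in Figure 20
assert all(p > 0 for p in probs)     # every outcome remains possible
assert sum(probs) < 1                # leftover mass: trapped bird = pass
```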
\begin{lemma} The width and height of the Random gadget, as well as the number of game elements it contains, is polynomial in the number of variables assigned to the opponent. \end{lemma}
\begin{hproof} Let $R_{W}$, $R_{H}$ and $R_{E}$ be constants representing the width, height and number of elements (respectively) for a Random gate. The width, height and number of elements for a Random gadget is therefore bounded by the polynomial expressions $(2V_{o}-1)R_{W}$, $(2V_{o}-1)R_{H}$ and $(2V_{o}-1)R_{E}$ respectively. \end{hproof}
\subsubsection{Clause Gadget} The purpose of the Clause gadget is to represent a single associated clause from either the player's or opponent's Boolean formula, and is activated if the clause is satisfied. An example of a Clause gadget implementation for ABES is shown in Figure 21. This gadget is comprised of a sequence of Selector gates ($S_{1}$, $S_{2}$, $S_{3}$,..., $S_{L}$), where $L$ is the number of literals within its associated clause (maximum of 12). Each of these Selector gates represents a literal from the associated Clause, and is either open or closed depending on whether their associated literal is true or not. Therefore, we can say that a Clause gadget is activated if and only if all Selector gates within it are open. An example truth table for this gadget is shown in the Appendix (Figure C.39).
Figure 22 also provides an example of how multiple Clause gadgets can be combined to represent a complete Boolean formula, in this case for the Boolean formula $(X \wedge Y) \vee (\neg X \wedge \neg Y)$ (i.e. two Clause gadgets which each contain two Selector gates). For this example, the value of $X$ is negative whilst the value of $Y$ is positive. There are five points of entry to the first Clause gadget and the purpose of these different entry points is as follows (starting from the leftmost entry point): check whether any Clause gadgets are activated (if so then bird travels to the Result gadget), set the value of $X$ to positive, set the value of $X$ to negative, set the value of $Y$ to positive, set the value of $Y$ to negative. This arrangement ensures that we can check if any number of Clause gadgets are activated using a single bird. \begin{figure}\label{fig:test2}
\label{fig:test1}
\end{figure}
Whenever the Random or Choice gadget is used to set the value of a variable (exit paths labelled as ``modify Clause gadgets''), a bird will travel through all the Clause gadgets that contain that variable, for both the player and opponent, opening the Selector gates that represent the chosen literal and closing those that represent the negation of it (similar reasoning and setup to the Clause gadget description in Section 4.2.4 for our PSPACE-hard proofs).
\begin{lemma} The Result gadget can be reached from a specific Clause gadget if and only if the Clause gadget is activated. \end{lemma}
\begin{hproof} The Result gadget can only be reached from a Clause gadget if a bird is able to traverse every Selector gate within it. As this is clearly only possible if all Selector gates are open, the Clause gadget must be activated for a bird to reach the Result gadget from it. \end{hproof}
To summarise, each time that we are checking if either the player's or opponent's Boolean formula is satisfied, we are actually sequentially checking if any of the Clause gadgets associated with clauses from their respective Boolean formulas are activated. If any of these Clause gadgets are activated, then a bird will be able to travel to the Result gadget. The location that the bird enters the Result gadget depends on whether the activated Clause gadget that it successfully travelled through was associated with a clause from either the player's or opponent's Boolean formula.
\begin{lemma} The maximum width and height of a Clause gadget, as well as the number of game elements it contains, is constant. \end{lemma}
\begin{hproof} As a 12-DNF Boolean formula can contain a maximum of 12 literals, the maximum number of Selector gates that a Clause gadget can contain is 12. As the width, height and number of elements for each Selector gate is constant, the maximum width, height and number of elements for a Clause gadget is also constant. \end{hproof}
\subsubsection{Result Gadget} The purpose of the Result gadget is to either solve the level or make the level unsolvable, depending on whether the player's or opponent's Boolean formula was satisfied first after making their move. The structure of the Result gadget implementation for ABES is shown in Figure 23. This gadget is comprised of a single Selector gate $(S_{1})$ that is initially in the open position. Traversing $S_{1}$ can also be referred to as passing through the Finish gadget, and results in the level being solved.
\begin{lemma} The entry point of the first bird to enter the Result gadget will either solve the level or make it unsolvable. \end{lemma}
\begin{hproof} If the first bird to enter the Result gadget traverses $S_{1}$, then the bird will kill the pig and solve the level. If the first bird to enter the Result gadget closes $S_{1}$, then the pig can never be killed and the level becomes unsolvable. \end{hproof}
Because of this, we can simply connect the tunnels so that any bird which enters the Result gadget from one of the player's activated Clause gadgets attempts to traverse $S_{1}$ (i.e. attempts to pass through the Result gadget), and any bird which enters the Result gadget from one of the opponent's activated Clause gadgets closes $S_{1}$ (i.e. makes the level unsolvable).
\begin{figure}
\caption{Model of the Result gadget used.}
\end{figure}
\subsubsection{Level Construction} Now that all the necessary gadgets have been described, the only remaining requirement is that they can be successfully arranged throughout the level space.
\begin{lemmaz} Any given game of G2 can be reduced to an ABES level definition in polynomial time. \end{lemmaz}
\begin{zproof} As we have already shown that each of the necessary gadgets can be created using a polynomial amount of space and elements and can therefore also be described in polynomial time, the only remaining requirement is that all the gadgets can be successfully arranged throughout the level in polynomial time, relative to the size of the G2 setup description (two 12-DNF Boolean formulas, variable assignment and initial variable values). As the number of gadgets required is clearly polynomial, it suffices to describe a polynomial time method for determining the location of each gadget, as well as the level's width, height, slingshot position and number of birds.
By using the same reasoning as in our PSPACE-hard level construction (Lemma 4.14), we know that the time required to compute the relative placement (spatial arrangement) of these gadgets, as well as the space between them, is polynomial in the total number of gadgets. There are also always a polynomial number of tunnels between gadgets and each tunnel can always be connected to its appropriate destination in polynomial time. Because of this, we can be certain that an equivalent ABES level description for any given game of G2 can always be created in a polynomial amount of space relative to the length of the original Boolean formulas, and thus it can also be defined in polynomial time. All calculations for slingshot position, release points needed, level's width/height, etc., can be calculated the same as in Section 4.3.
Lastly, the number of birds the player has is equal to $(2V_{p}+4)(2^{V_{N}})$, where $V_{p}$ is equal to the total number of variables assigned to the player, and $V_{N}$ is equal to the total number of variables assigned to both the player and the opponent. This is equivalent to the maximum number of birds required to make a move for both the player and opponent (four birds needed for the Ordering gadget, as well as $2V_{p}$ possible literal options in the Choice gadget), multiplied by the maximum number of possible value combinations for all variables $(2^{V_{N}})$. If the player cannot win the level in this many birds, then at least one of the variable value combinations has been repeated. \end{zproof}
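As an illustrative instance of this bound (using the same assignment as the example construction below, with the player holding two variables out of four in total, i.e. $V_{p}=2$ and $V_{N}=4$):
\[
(2V_{p}+4)\,2^{V_{N}} = (2\cdot 2+4)\cdot 2^{4} = 8\cdot 16 = 128.
\]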
An example diagram of a fully constructed structure, using the same Boolean formula as in Figure 16, is shown in the Appendix (Figure A.30). For this example, the player is assigned the variables $z$ and $w$, the opponent is assigned the variables $x$ and $y$, and all variables are initially given a negative truth value.
As we have constructed the necessary gadgets and can position them within the game's environment in polynomial time, the problem of solving levels for ABES is EXPTIME-hard.
\begin{theorem} The problem of solving levels for ABES is EXPTIME-hard. \end{theorem}
\subsection{Winning Strategy (Example)} We now describe an example of a winning strategy for solving an ABES level description that has been reduced from the Boolean formulas for the player and opponent given in Figure 16. For this example, the player is assigned the variables $z$ and $w$, the opponent is assigned the variables $x$ and $y$, and all variables are initially given a negative truth value (same setup as for the example structure diagram in Figure A.30). For this level description, we can see that the player will immediately need to set the value of $w$ to positive. If the player doesn't do this then there is a chance that variable $y$ would be changed to positive when the opponent makes their move, which would mean that the opponent's second clause would be satisfied (leading to a loss). To set the variable $w$ to positive we need to open all AUT gates in the Choice gadget except for the last one. We can then traverse the AUT gates in the Choice gadget via the Ordering gadget, which will subsequently adjust the Clause gadgets to represent $w$ now being positive. We then need to make a random move for the opponent, and check if any of their associated Clause gadgets are activated (none of them are regardless of the outcome of the opponent's random move). After this, we should see that we only need to set the value of the variable $z$ to positive to satisfy one of our clauses. This is the case regardless of what move was previously made for the opponent, although the specific clause that is satisfied might change. After setting $z$ to positive we can then check our clauses for satisfiability, and as one of our Clause gadgets is activated a bird will pass through the Result gadget and solve the level.
\section{Proof Generalisation} The complexity proofs described in this paper can be replicated in many other games similar to Angry Birds, as long as the necessary gadgets can be constructed. In general, this means that the computational complexity of any physics-based game can be established using our frameworks, as long as the following requirements hold:
\begin{itemize}
\item A level within the game contains a set number of targets, which the player needs to hit or reach in order to solve the level.
\item The game contains both static and non-static elements.
\item The game contains elements that can be moved as a result of the player's actions.
\item The physics engine utilised by the game allows for rudimentary systems of gravity, momentum, energy transfer and rotational motion (almost all simple physics engines should contain this).
\item The player cannot directly influence any element within a gadget framework, instead only being able to interact with it through the use of a secondary non-static game element (in our case a bird), which enters the gadget framework through designated entry points. No new element can enter this framework until the outcome of any previously entered element is finalised.
\item For our EXPTIME-hardness proof, we also require the exact outcome of certain player actions to be unknown beforehand.
\end{itemize}
Whilst by no means applicable to all games that contain these features, this generalisation suggests that many other physics-based games are NP-hard and/or PSPACE-complete. This includes both games that are similar in play style to Angry Birds, such as Crush the Castle, Siege Hero or Fragger, as well as games that play considerably differently, such as Where's My Water, World of Goo, Bad Piggies, Cut the Rope 2, Crayon Physics Deluxe, The Incredible Machine, Eets and Peggle, to name just a few. Even though formal proofs on the complexity of these games would likely each be as long as this paper again, we provide below some rough outlines for how single-use EQ and Clause gadgets could be implemented for several popular examples of other physics-based games. Single-use EQ gadgets can only be used to set the value of their associated variable once, while single-use Clause gadgets remain activated once they are activated the first time (i.e. can't be un-activated). While these single-use gadgets are much less sophisticated than those we presented previously, they can still be used for NP-hardness proofs based on our 3-SAT reduction framework as only a single framework cycle is needed.
\subsection{Where's My Water} The aim of this game is to get a certain number of water droplets into a specific destination pipe. These water droplets behave in the same manner as red birds in Angry Birds, after they have been fired from the slingshot. The game contains dirt areas that water droplets cannot pass through, but which the player can remove by tapping them. The game also contains doors that stop water droplets when closed. Each door has a button associated with it. When the button associated with a door is pressed by a water droplet, the door opens. The game also contains pipes that allow water droplets to pass each other without any risk of leakage or collision. An example level from Where's My Water \cite{water} is shown in Figure 24.
\textbf{EQ gadget:} Each EQ gadget contains a single water droplet and two possible tunnels on either side of it that are blocked by dirt. The player can remove this dirt by tapping on it, allowing them to direct the water droplet into either tunnel. Whichever tunnel the player directs the water droplet into indicates the value to set the associated variable to (i.e. if the water droplet falls into the left/right tunnel then set the value of the variable to negative/positive). As there is only one water droplet in each EQ gadget, the player can only set the value of the associated variable once (i.e. this EQ gadget is single-use only).
\textbf{Clause gadget:} Each Clause gadget contains a button that, when touched by a water droplet, opens a door that releases a set number of water droplets into the destination pipe. When the player indicates the truth value for a variable using its associated EQ gadget, the water droplet will travel through all the Clause gadgets that contain the chosen literal, pressing the button within any Clause gadget it travels through (i.e. pressing the button within a Clause gadget will essentially activate it). As the effect of pressing the button within a Clause gadget cannot be undone, these Clause gadgets are single-use only.
\textbf{Crossover:} Pipes can simply be used to allow water droplets to travel over one another.
\textbf{Victory condition:} The level is solved once all Clause gadgets have released their water droplets into the destination pipe (i.e. when all Clause gadgets are activated).
\subsection{Cut the Rope 2} The aim of this game is to transport a piece of candy to a stationary creature. This piece of candy behaves in the same manner as red birds in Angry Birds, after they have been fired from the slingshot. The game contains balloons which can hold objects in a specific place (i.e. the object becomes unaffected by gravity). If an object is connected to one balloon then it is suspended a fixed distance directly below this balloon. If an object is connected to several balloons then it is suspended a fixed distance below the mid-point between these balloons. The player can remove a balloon by tapping on it (i.e. ``pop'' the balloon). The game also contains wooden balls that behave the same as the piece of candy. The game also contains rotating doors (gear attached to a wooden block) that objects cannot pass through when closed. Each door has a button associated with it. When the button associated with a door is pressed by an object, the door opens. An example level from Cut the Rope 2 \cite{rope} is shown in Figure 25.
\textbf{EQ gadget:} Each EQ gadget contains a wooden ball that is suspended in place by two balloons, and two possible tunnels on either side of the wooden ball. The player can pop each of these balloons by tapping them. The order in which the two balloons suspending the wooden ball are popped can be used to direct the wooden ball into either tunnel. Whichever tunnel the player directs the wooden ball into indicates the value to set the associated variable to. As there is only one wooden ball in each EQ gadget, the player can only set the value of the associated variable once.
\textbf{Clause gadget:} Each Clause gadget contains a button that, when touched by a wooden ball, opens a rotating door outside of the framework. When the player indicates the truth value for a variable using its associated EQ gadget, the wooden ball will travel through all the Clause gadgets that contain the chosen literal, pressing the button within any Clause gadget it travels through (i.e. activates the Clause gadget).
\textbf{Crossover:} Crossover gates can be constructed using the exact same design as for Angry Birds (Section 3.4).
\textbf{Victory condition:} The piece of candy is suspended by a balloon above a stack of rotating doors placed outside the rest of the framework. Each rotating door in this stack is opened when one of the Clause gadgets is activated (i.e. each button in a Clause gadget opens one of these rotating doors). The creature is placed below this stack of rotating doors. The player can pop the balloon suspending the piece of candy at any point, but the candy can only reach the creature (i.e. solve the level) if all rotating doors are opened (i.e. if all Clause gadgets are activated).
\begin{figure}\label{fig:test2}
\label{fig:test1}
\end{figure}
\subsection{The Incredible Machine} The aim of this game is to accomplish some predefined task for a given environment by placing objects within the level. For our setup, the only objects that the player can place in the level are candles. The game contains baseballs that behave in the same manner as red birds in Angry Birds, after they have been fired from the slingshot. The game also contains brick walls that objects cannot pass through, and TNT that can be ignited with a candle. When a TNT is ignited it will explode and destroy (remove) both itself and any objects (such as walls) next to it. The game also contains torches that can be turned on by an object hitting them. The game also contains pipes that allow objects to pass each other without any risk of leakage or collision. An example level from The Incredible Machine \cite{machine} is shown in Figure 26.
\textbf{EQ gadget:} Each EQ gadget contains a baseball and two possible tunnels on either side of it that are blocked by brick walls. TNT is placed next to each of these brick walls and can be ignited by placing a candle next to it. When a TNT is ignited it will explode and destroy both itself and the brick wall next to it. Igniting one of these TNTs can therefore be used to direct the baseball into either tunnel. Whichever tunnel the player directs the baseball into indicates the value to set the associated variable to. As there is only one baseball in each EQ gadget, the player can only set the value of the associated variable once.
\textbf{Clause gadget:} Each Clause gadget contains a torch. When the player indicates the truth value for a variable using its associated EQ gadget, the baseball will travel through all the Clause gadgets that contain the chosen literal, hitting and turning on the torch within any Clause gadget it travels through (i.e. activates the Clause gadget).
\textbf{Crossover:} Pipes can simply be used to allow baseballs to travel over one another.
\textbf{Victory condition:} The requirement for solving the level is set to turning on all of the torches within the Clause gadgets (i.e. when all Clause gadgets are activated).
\vskip 0.1in
While proofs for NP-hardness and PSPACE-hardness can often be generalised between different video games, our proposed proof of EXPTIME-hardness is trickier to replicate. We postulate, though, that it might be possible to prove that extended versions of other popular games such as Super Mario Bros. are EXPTIME-hard by introducing elements such as ``mystery'' boxes which could spawn a random item, thus providing the necessary uncertainty in player actions. However, a more thorough investigation would be needed to determine if this is possible.
\begin{figure}
\caption{Screenshot of a level for The Incredible Machine.}
\end{figure}
\section{Conclusions} In this paper, we have proven that the task of deciding whether a given Angry Birds level can be solved is either NP-hard, PSPACE-hard, PSPACE-complete or EXPTIME-hard, depending on the version of the game being used.
To the best of our knowledge, this is the first example of a single-player game without a traditional opponent being proved EXPTIME-hard. Our use of unknown and changing environmental variables as the opponent the player faces is a unique view of the problem and opens up the possibility of proving many other games EXPTIME-hard using this methodology. The most likely candidates for this analysis would be games that feature some inherent stochasticity in their engine (similar to the method employed for our proof), or games which use randomness within one of their gameplay elements (such as mystery/question blocks in Mario games). In games like this, the player may know what elements such a block could contain, but will not know exactly what it does contain until after they open it. This would be a good basis for constructing an opponent for a reduction from G2 or another similar EXPTIME-complete game. It is also possible to use the inaccuracy of the player's input or another similar area of uncertainty to generate the required randomness. EXPTIME-hardness proofs might also be applicable to real-world environments.
This work provides an important contribution to the collection of games that have been investigated within the field of computational complexity. However, there is still a huge assortment of physics-based and other non-traditional puzzle games that are available for future analysis, which do not follow the typical structure of those previously studied. The importance of games for AI research lies in the fact that games can form a simplified and controlled environment, which allows for the development and testing of AI methods that will eventually be used in the real world. It is also highly likely that the proofs presented in this paper can be generalised to other physical reasoning and AI problems. Even though Angry Birds may initially seem like a simple game, the challenges posed by dealing with its physics simulation engine make it highly relevant to real-world problems. We are therefore hopeful that this work will inspire future research into a more diverse range of game types and problems.
\section*{Acknowledgments} We would like to thank the three reviewers for their incredibly detailed reviews and the many excellent suggestions they made for improving this paper.
\par\vspace*{\fill} \appendix \section{Full structure construction examples (not to scale)}
\begin{figure}
\caption{ABED (PSPACE-complete)}
\end{figure}
\begin{figure}
\caption{ABPS (PSPACE-hard)}
\end{figure}
\begin{figure}
\caption{ABPD (NP-hard)}
\end{figure}
\begin{figure}
\caption{ABES (EXPTIME-hard)}
\end{figure}
\section{Step-by-step shot ordering}
\begin{figure}
\caption{Shots required to solve the level created using the example ABED framework shown in Figures 7 and A.24. Each row of the table specifies the target for each shot (gadget, gate, and entrance tunnel when ambiguous), the state of each gadget after the shot has resolved (whether it is enabled (E), disabled (D), locked (L), or unlocked (U)), and the truth value of each Boolean variable.}
\end{figure}
\par\vspace*{\fill} \section{Gadget truth tables}
This appendix provides detailed truth tables for each gadget described in this paper (except for the trivial cases). Empty cells for the current state indicate that the gate in question can be either open or closed. The bird input point specifies the gate by which the bird entered the gadget, as well as the specific gate entrance point when ambiguous. Bird input points with the ``(Enable)'' marker represent that the gadget is enabled by a bird entering here. Empty cells for the next state indicate that the position of the gate in question is unchanged. Empty cells for the Output indicate that the bird did not exit the gadget.
\begin{figure}
\caption{EQ gadget truth table.}
\end{figure}
\begin{figure}
\caption{UQ-T gadget truth table.}
\end{figure}
\begin{figure}
\caption{UQ-F gadget truth table.}
\end{figure}
\begin{figure}
\caption{Clause gadget truth table (ABED).}
\end{figure}
\begin{figure}
\caption{Finish gadget truth table.}
\end{figure}
\begin{figure}
\caption{UQ-R gadget truth table.}
\end{figure}
\begin{figure}
\caption{Ordering gadget truth table.}
\end{figure}
\begin{figure}
\caption{Clause gadget truth table (ABES) example for a clause with three literals.}
\end{figure}
\end{document}
# Input-output in signal processing
In signal processing, the input-output relationship is crucial to understand. The input is the original signal, and the output is the processed signal after applying various operations. The goal of signal processing is to transform the input signal into an output signal that is easier to analyze or manipulate.
Consider a simple example: a sine wave signal. The input signal is the sine wave, and the output might be an estimate of its amplitude, frequency, or phase.
# Convolution as a mathematical operation
Convolution is a mathematical operation that combines two signals into one. It is defined as the integral of the product of the two signals, where one of the signals is flipped and shifted: as the shifted signal slides across the other, the output at each shift is the integral (or, for discrete signals, the sum) of the pointwise product.
## Exercise
Calculate the convolution of two signals f(t) and g(t) using the following formula:
$$
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d\tau
$$
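This integral can be approximated numerically. Here is a minimal sketch (my own example, assuming NumPy is available) that evaluates the convolution of two unit box functions on [0, 1] with a Riemann sum; the result is the familiar triangle that peaks at t = 1:

```python
import numpy as np

# Riemann-sum approximation of (f * g)(t) = ∫ f(τ) g(t - τ) dτ
# for two unit box functions supported on [0, 1].
def box(x):
    return np.where((x >= 0) & (x <= 1), 1.0, 0.0)

dtau = 0.001
tau = np.arange(-2.0, 3.0, dtau)

def conv_at(t):
    return float(np.sum(box(tau) * box(t - tau)) * dtau)

# The convolution of two boxes is a triangle rising to 1 at t = 1
print(conv_at(0.5), conv_at(1.0), conv_at(1.5))
```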
# Properties of convolution
Convolution has several important properties:
- Linearity: Convolution is linear in each argument. For signals f(t), g(t), h(t) and scalars a and b, we have (af + bg) * h = a(f * h) + b(g * h).
- Commutativity: The order of the input signals does not affect the output. This means that the convolution of f(t) and g(t) is the same as the convolution of g(t) and f(t).
- Associativity: The convolution of three signals f(t), g(t), and h(t) is the same as the convolution of the first two signals, followed by the convolution of the result with the third signal.
Consider the single-sample discrete signals f[n] = {3}, g[n] = {5}, and h[n] = {7}. Calculate the convolution of f[n] and g[n], then the convolution of the result with h[n].
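These three properties can also be checked numerically on short discrete sequences (a quick sanity check of my own, using NumPy's `np.convolve`):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
h = np.array([2.0, -1.0])
a, b = 3.0, -2.0

# Linearity (in the first argument): (a·f + b·g) * h == a·(f*h) + b·(g*h)
lin_lhs = np.convolve(a * f + b * g, h)
lin_rhs = a * np.convolve(f, h) + b * np.convolve(g, h)

# Commutativity: f * g == g * f
comm_ok = np.allclose(np.convolve(f, g), np.convolve(g, f))

# Associativity: (f * g) * h == f * (g * h)
assoc_ok = np.allclose(np.convolve(np.convolve(f, g), h),
                       np.convolve(f, np.convolve(g, h)))

print(np.allclose(lin_lhs, lin_rhs), comm_ok, assoc_ok)  # → True True True
```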
# Applications of convolution in signal processing
Convolution is widely used in signal processing for various applications:
- Filtering: Convolution can be used to filter out unwanted frequencies from a signal. For example, a low-pass filter can be implemented using a Gaussian function as the kernel.
- Smoothing: Convolution can be used to smooth out a noisy signal by applying a Gaussian kernel with a large standard deviation.
- Time-domain analysis: Convolution can be used to calculate the cross-correlation of two signals, which is a measure of their similarity.
- Image processing: Convolution can be used to apply various filters to images, such as blurring, sharpening, or edge detection.
# Discrete convolution and its implementation
Discrete convolution is the convolution of two discrete signals, which are defined on a discrete set of points. The discrete convolution can be calculated using a sliding window or by using the Fast Fourier Transform (FFT).
Consider two discrete signals f[n] = {1, 2, 3} and g[n] = {4, 5, 6}. Calculate the discrete convolution of f[n] and g[n] using the sliding window method.
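The sliding-window method from the exercise can be written out directly. The sketch below (mine, not from the text) computes the full discrete convolution and compares it against NumPy's built-in `np.convolve`:

```python
import numpy as np

def convolve_direct(f, g):
    # Full discrete convolution: c[n] = Σ_k f[k] · g[n - k]
    n_out = len(f) + len(g) - 1
    c = [0] * n_out
    for n in range(n_out):
        for k in range(len(f)):
            if 0 <= n - k < len(g):
                c[n] += f[k] * g[n - k]
    return c

print(convolve_direct([1, 2, 3], [4, 5, 6]))       # → [4, 13, 28, 27, 18]
print(np.convolve([1, 2, 3], [4, 5, 6]).tolist())  # → [4, 13, 28, 27, 18]
```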
# Continuous convolution
Continuous convolution is the convolution of two continuous signals, which are defined on a continuous set of points. The continuous convolution can be calculated using the integral formula or by using the Fourier transform.
Consider two causal signals f(t) = t and g(t) = t^2, both defined for t ≥ 0 and zero for t < 0. Calculate the continuous convolution of f(t) and g(t) using the integral formula.
# Kernel and its role in convolution
The kernel is a small function that slides across the input signal during convolution. The kernel can be used to implement various filters, such as low-pass, high-pass, or band-pass filters.
Consider a low-pass filter with a cutoff frequency of 0.5. The kernel of the filter is a Gaussian function with a standard deviation of 0.1. Calculate the output signal after applying the filter to an input signal f(t) = 3.
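A sketch of this setup (my own illustrative parameters: a noisy constant signal rather than an exact f(t) = 3, since a unit-sum Gaussian kernel leaves a constant unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
signal = 3.0 + 0.3 * rng.standard_normal(t.size)  # level 3 plus noise

# Gaussian kernel with standard deviation 0.1, truncated at ±3σ and
# normalized to sum to 1 so constant signals pass through unchanged
sigma = 0.1
k_t = np.arange(-3 * sigma, 3 * sigma + dt / 2, dt)
kernel = np.exp(-k_t**2 / (2 * sigma**2))
kernel /= kernel.sum()

smoothed = np.convolve(signal, kernel, mode="same")
print(round(float(smoothed[100]), 2))  # the filtered midpoint sits near 3
```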
# Filtering using convolution
Filtering is a common application of convolution. It involves applying a filter to a signal to remove unwanted frequencies or components.
Consider a signal contaminated with high-frequency noise. Apply a low-pass filter with a Gaussian kernel to suppress the noise while preserving the slowly varying trend of the signal.
# Applications of filtering in signal processing
Filtering is widely used in signal processing for various applications:
- Noise reduction: Filters can be used to reduce unwanted noise in a signal, such as background noise in an audio recording.
- Frequency analysis: Filters can be used to isolate specific frequency components in a signal, such as the fundamental frequency in a musical note.
- Image processing: Filters can be used to enhance or deemphasize certain features in an image, such as edges or textures.
Consider an image with a noisy background. Apply a median filter to remove the noise and improve the image quality.
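A one-dimensional sketch of the same idea (my own minimal implementation; for real images a 2-D window would be used, e.g. `scipy.ndimage.median_filter`). A median filter removes isolated impulsive noise exactly:

```python
import numpy as np

def median_filter_1d(x, width=3):
    # Sliding-window median; edges handled by clipping the window
    x = np.asarray(x, dtype=float)
    half = width // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out[i] = np.median(x[lo:hi])
    return out

# Isolated "salt" spikes on a flat signal are removed entirely
noisy = [5, 5, 99, 5, 5, 5, -40, 5, 5]
print(median_filter_1d(noisy).tolist())
# → [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0]
```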
# Fourier transform and its relation to convolution
The Fourier transform is a mathematical operation that converts a signal from the time domain to the frequency domain. It is closely related to convolution: by the convolution theorem, the Fourier transform of the convolution of two signals equals the pointwise product of their individual Fourier transforms.
Consider a signal f(t) = 3. Calculate the Fourier transform of the signal and then the inverse Fourier transform to obtain the original signal.
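The convolution theorem can be checked numerically on discrete sequences (a sketch with arbitrary sequences of my choosing; the FFTs are zero-padded to the full output length so the circular convolution matches the linear one):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])

direct = np.convolve(f, g)       # time-domain convolution
n = len(f) + len(g) - 1          # zero-pad to the full output length
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

print(np.allclose(direct, via_fft))  # → True
```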
# Convolution in machine learning and deep learning
Convolution is also used in machine learning and deep learning for various applications:
- Feature extraction: Convolution can be used to extract features from an input signal or image, such as edges or textures.
- Neural networks: Convolutional neural networks (CNNs) are a type of deep learning architecture that use convolution to process input data.
- Image recognition: Convolution can be used to classify images based on their features, such as edges or textures.
Consider a CNN that is trained to classify images of cats and dogs. Apply the CNN to an input image of a cat and calculate the output probabilities for the cat and dog classes.
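A minimal sketch of the feature-extraction idea (my own toy example; real CNNs learn their kernels, whereas here a fixed Sobel-style kernel detects vertical edges):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # "Valid" 2-D sliding-window correlation, as used in CNN layers
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# The kernel responds strongly at the dark-to-bright column boundary
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
print(conv2d_valid(image, sobel_x).tolist())  # → [[4.0, 4.0]]
```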
Moving from discrete probability distributions to continuous ones
I'm teaching an introductory statistics class at a community college, and we've just finished a unit on discrete probability. At the moment, the students' conception of the probability of an event A is
$$P(A)=\frac{\text{# of outcomes in A}}{\text{# of total outcomes}}$$
(Supposing all outcomes of the random experiment are equally likely to occur). We can use this basic understanding of probability to derive probability mass functions for discrete random variables, and graph these functions.
How do I help the students jump from the discrete to the continuous? Beginning with the basic understanding of probability as a ratio (required to understand the above formula), how do I move to the notion that the probability for a range of continuous values can be represented as the area under a curve called the probability density function?
concept-motivation statistics probability
Jared
A person could TRY to motivate some geometric probability like "if we drop a coin on a grid of square tiles, what's the chance it will land crossing one of the cracks." (ask where the center of the coin lands and which locations would make it so the coin is crossing a crack. Then find areas.) Of course, this naive approach is inherently problematic [see Bertrand's paradox], and it's not clear if thinking in terms of area ratios will help understanding continuous distributions. But it's something I do for 20 minutes to motivate the need for calculus ideas when I teach calc-based prob.
– Pat Devlin
Not long enough for an answer, perhaps I'll write one later: Use the relationship between the binomial distribution and the normal distribution. As the number $n$ of trials increases, a binomial distribution approaches a normal distribution with mean $np$ and standard deviation $\sqrt{np(1-p)}$.
This is an uncomfortable moment, mathematically, in a non-calculus-based statistics course; frankly, we simply need to steal the calculus concept and hope that students trust us about it, without formal grounding. It's somewhat degenerate mathematics but it's the position we're required to deal with.
That said: I find that students do a pretty good job of picking up the idea by just flat-out being explicitly told ("the area under a curve is the probability of getting an outcome in that range") and then practicing some with a normal-curve table. Start with a very simple case: normal curve with mean 9, standard deviation 2 (sketch); what is the probability of getting a value above 9? Every class I've taught, at least one student has intuited that the answer is 50%; and I can say, yes, because half of the area is above 9. Run more exercises and in every case sketch the curve and check for reasonability.
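A quick computational companion to that exercise (my own illustration; in class the numbers come from a normal-curve table):

```python
import math

# Normal curve with mean 9 and standard deviation 2, as in the exercise.
mu, sigma = 9.0, 2.0

def normal_cdf(x):
    # Phi((x - mu) / sigma), written with the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

p_above_mean = 1.0 - normal_cdf(9.0)                      # area above the mean
p_between_9_and_11 = normal_cdf(11.0) - normal_cdf(9.0)   # mean to +1 sigma

print(round(p_above_mean, 4))        # 0.5
print(round(p_between_9_and_11, 4))  # 0.3413
```

Students can sketch the curve, guess first, and then check the guess against the computed area.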
I teach out of Weiss Introductory Statistics and it works perfectly fine. Up through the 8th Edition there wasn't any text on continuous probability distributions in the general sense at all; it just started using a normal curve table like this in Ch. 6. As of the 9th edition they added a 2-page introduction on density curves in general, but I don't find that any more illuminating; I just skip it. In fact -- I even skip the whole chapter on discrete probability distributions (Weiss marks it as optional). Due to the conceptual gap here between discrete/continuous distributions, I don't think the discrete case increases understanding of the continuous case, and it delays getting to the crown jewels of the course: inferences with confidence intervals and P-values for means and proportions for large samples.
Daniel R. Collins
This is treason, but anyway:
If your students can jump from "ratio of outcomes in $A$ over all possible outcomes" to "ratio of length of interval over total feasible length", then the answer to why probability can be represented as the area under a curve could half-jokingly (but only half-) be "for convenience, since we set the height of the curve at exactly the value that makes it replicate our ratio approach".
The "ratio approach" would require that
$$P([a,b]) = \frac {length[a,b]}{length[a,c]} = (b-a)\cdot \frac {1}{c-a}$$
Instead of doing a division in one dimension, we do a multiplication in two dimensions, by setting the probability density curve at height $1/(c-a)$.
With any other distribution, one has to move from simple multiplication of the sizes of a rectangle, to integration to find the area under a curve.
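In code the point is almost embarrassingly direct (a sketch with made-up endpoints $a=2$, $b=5$, $c=10$):

```python
# Uniform density on [a, c]: the height is chosen as 1/(c - a) precisely so
# that "area under the curve" reproduces the "ratio of lengths" answer.
a, b, c = 2.0, 5.0, 10.0

ratio_answer = (b - a) / (c - a)   # ratio of interval lengths
height = 1.0 / (c - a)             # height of the density curve
area_answer = (b - a) * height     # width times height of the rectangle

print(ratio_answer, area_answer)   # 0.375 0.375
```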
Alecos Papadopoulos
For discrete distributions, it's helpful to look at coinflips. You can find the probability of getting m heads/tails out of n flips. (Though a normal distribution may be simpler to start with than a binomial distribution.)
As for continuous distributions, it may be helpful to look at the Maxwell-Boltzmann Distribution, which can tell you the probability of finding a particle traveling at velocity v (where v is the random continuous variable). Since the total probability is one (100%), it means that the area under the curve described by the function is also one, which can be exploited to normalize the function.
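Students can verify the "total area is one" claim numerically. Here is a sketch using the Maxwell-Boltzmann speed density in reduced units (setting $\sqrt{kT/m}=1$, an assumption made purely for convenience):

```python
import math

# Maxwell-Boltzmann speed density with sqrt(kT/m) = 1:
# f(v) = sqrt(2/pi) * v^2 * exp(-v^2 / 2)
def f(v):
    return math.sqrt(2.0 / math.pi) * v * v * math.exp(-v * v / 2.0)

# Trapezoidal rule on [0, 12]; the tail beyond v = 12 is negligible.
n = 100_000
h = 12.0 / n
area = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(12.0))
print(area)  # close to 1
```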
Draw a square on the board and shade in half of it. Ask your students, "If you throw a dart and hit the board, what's the probability that you hit the shaded region?" Cut the region down to a quarter of the square and ask the question again. Lead this into a discussion of how the probability is the ratio of the area of the region to the total area and tie this back to the discrete probability formula.
Now ask the same question about a point. A point has "0 area" so the probability of hitting it is 0. When there are infinitely many possibilities, i.e. there are infinitely many points inside the square versus, for example, a finite number of die rolls, we can't talk about the probability of individual events - just ranges of events/results.
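A Monte Carlo follow-up (my own addition, not part of the original answer) makes the same point empirically: simulated darts hit the shaded region in proportion to its area.

```python
import random

# Throw random darts at the unit square and count hits in the shaded
# quarter [0, 0.5) x [0, 0.5), whose area is 0.25.
random.seed(0)
throws = 100_000
hits = 0
for _ in range(throws):
    x, y = random.random(), random.random()
    if x < 0.5 and y < 0.5:
        hits += 1

estimate = hits / throws
print(estimate)  # close to 0.25
```

Shrinking the shaded region shrinks the estimate toward 0, which previews why a single point gets probability 0.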
G. Allen
I present just a couple of ideas that might make a bridge between the discrete and continuous.
Firstly, one thing I notice with many students is they don't have an idea that the probabilities of all the outcomes add up to 1. This stops them being able to do a lot of calculations for continuous distributions later. Moreover, it means they don't get the fundamental idea of distribution which is a description of all the possibilities and how likely they all are.
I think that we should probably never ask for the probability of any event in isolation but always also ask for probabilities for a set of events that complete it to the whole sample space. For example, don't just ask for the probability of getting a 7 on two dice, but also less than 7 and more than 7. At the very least, always ask for the probability of A and not A. My instinct is that to start with, students will calculate both based on the ratios, and maybe later will realise there's an easier way. But at least we'll drive home the idea that events never exist without other events to fill them out.
Secondly, perhaps a road to asking about continuous probabilities, is to ask about a discrete set of objects with continuous measurements on them. If the measurements are written with decimals it will be even more obvious that they are supposed to be continuous.
For example, take 14 trees with their heights measured in feet:
60.92 63.39 64.10 59.07 63.05 59.64 60.07 60.69 60.28 61.62 58.49 56.81 56.43 59.49 (from the Loblolly dataset in R).
Then you could ask questions like the probabilities of choosing a tree with height in the ranges 55 to 57.5, 57.5 to 60, 60 to 62.5 and 62.5 to 65 including drawing a graph showing these probabilities. Asking various different collections of zones might help to necessitate the axis being marked out on a scale rather than in discrete labels. Choosing zones that aren't equal might make it easier to necessitate a way to show the amount of probability in a way other than height so that the graph isn't so misleading. A dataset with 100 numbers in it might make this even more obvious and lead towards the probability distribution which has infinitely many numbers.
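The bin probabilities for this example can be tabulated directly (a small sketch using the 14 values quoted above):

```python
# The 14 Loblolly heights quoted in the answer, binned into the suggested zones.
heights = [60.92, 63.39, 64.10, 59.07, 63.05, 59.64, 60.07,
           60.69, 60.28, 61.62, 58.49, 56.81, 56.43, 59.49]
edges = [55.0, 57.5, 60.0, 62.5, 65.0]

counts = [sum(1 for h in heights if lo <= h < hi)
          for lo, hi in zip(edges, edges[1:])]

for (lo, hi), count in zip(zip(edges, edges[1:]), counts):
    print(f"[{lo}, {hi}): {count}/14 = {count / 14:.3f}")
```

Graphing these with unequal-width zones, so that height must be probability divided by width, is exactly the density idea in embryo.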
DavidButlerUofA
\begin{document}
\oddsidemargin= 10mm \topmargin= 35mm \textwidth=170mm \textheight=270mm \pagestyle{plain} \newcounter{r} \newcommand{\Ker}{\mathrm{Ker}} \newcommand{\Gal}{\mathrm{Gal}} \newcommand{\Mat}{\mathrm{Mat}} \newcommand{\sign}{\mathrm{sign}} \newcommand{\Log}{\mathrm{Log}} \newcommand{\Res}{\mathrm{Res}} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\Nm}{\mathrm{Nm}} \begin{center} {\large\bf On the Diophantine Approximations of logarithms in cyclotomic fields.}\end{center} \begin{center} \vskip.4pt {\large\bf L.A. Gutnik} \end{center} \vskip10pt \small Abstract. I present here the proofs of results obtained in my papers ``On the linear forms, whose coefficients are linear combinations with algebraic coefficients of logarithms of algebraic numbers,''
VINITI, 1996, 1617-B96, pp. 1 -- 23 (in Russian), and ``On the systems of linear forms, whose coefficients are linear combinations with algebraic coefficients of logarithms of algebraic numbers,''
VINITI, 1996, 2663-B96, pp. 1 -- 18 (in Russian). \vskip 10pt \normalsize \vskip.10pt
\rm Let $T$ be a real number, let $\Delta,$ $m$ and $n$ be positive integers with
$\Delta\ge2,$ let $K_m={{\mathbb Q}[\exp(2\pi i/m)]}$ be a cyclotomic field, let ${\mathbb Z}_{K_m}$ be the ring of all the integers of the field $K_m,$ let $\Lambda(n)$ be the von Mangoldt function, and let
$\epsilon^2=\epsilon.$ Let $\Lambda_0(m)=0,$ if $m$ is odd and $\Lambda_0(m)=\Lambda(m/2),$ if $m$ is even. Let further $\omega_1(m)=(m-1)/2,$ if $m$ is odd, $\omega_1(m)=m/2-2,$ if $m\equiv2\pmod4$ and $\omega_1(m)=m/2-1,$ if $m\equiv0\pmod4.$ Let \begin{equation} w_\Delta(T)=\sqrt{\frac{\sqrt{(\Delta^2(3-T^2)+1)^2+16\Delta^4T^2}+ \Delta^2(3-T^2)+1}2}, \label{eq:0}\end{equation} and the values $V_{\Delta}^\ast,\,V_{\Delta}(m),\,l_\Delta(\epsilon,T),\, g_{\Delta,\epsilon}(m)$ and $h_{\Delta}(m)$ are defined by the equalities \begin{equation} V_\Delta^\ast=(\Delta+1)+ \log((\Delta-1)^{(\Delta-1)/2}(\Delta+1)^{(\Delta+1)/2}\Delta^{-\Delta})\,+ \label{eq:1}\end{equation} $$\frac\pi2\sum\limits_{\mu=0}^1(1-2\mu) \sum\limits_{\kappa=1}^{[(\Delta-1)/2]+\mu} \cot\left(\frac{\pi\kappa}{\Delta-1+2\mu}\right), $$ \begin{equation} V_\Delta(m)=V_\Delta^\ast+(\Delta+1)\Lambda_0(m)/\phi(m), \label{eq:3}\end{equation} \begin{equation} l_\Delta(\epsilon,T)= -\log\left(4(\Delta+1)^{\Delta+1}(1-1/\Delta)^{\Delta-1}\right)\,+ \label{eq:2}\end{equation} $$\frac12\log\left(\left(2\Delta+(-1)^\epsilon w_\Delta(T)+(\Delta+1)\right)^2+ T^2\Delta^2\left(1+\frac{(-1)^\epsilon2\Delta}{w_\Delta(T)}\right)^2\right)+$$ $$\frac12 \log\left(\left(2\Delta+(-1)^\epsilon w_\Delta(T)-(\Delta+1)\right)^2+ T^2\Delta^2\left(1+\frac{(-1)^\epsilon2\Delta}{w_\Delta(T)}\right)^2\right)+$$
\pagestyle{headings} \topmargin= -15mm \textheight=250mm \markright{\footnotesize\bf L.A.Gutnik,
On the Diophantine Approximations of logarithms in cyclotomic fields.} $$\frac{(\Delta-1)}2 \log\left(\left(2\Delta+(-1)^\epsilon w_\Delta(T)\right)^2+ T^2\Delta^2\left(1+\frac{(-1)^\epsilon2\Delta}{w_\Delta(T)}\right)^2\right),$$ \begin{equation} g_{\Delta,\epsilon}(m)= (-1)^\epsilon(l_\Delta(\epsilon,\tan(\pi\omega_1(m)/m))+V_\Delta(m)), \label{eq:2a0}\end{equation} \begin{equation} h_{\Delta}(m)=-V_\Delta(m)-l_\Delta(1,\tan(\pi/m)), \label{eq:2a}\end{equation} where $m\ne 2$ and $\epsilon^2=\epsilon.$ Let, finally, the values $\beta(\Delta,m)$ and $\alpha(\Delta,m)$ be defined by the equalities $$\beta(\Delta,m)=g_{\Delta,0}(m)/h_{\Delta}(m),\, \alpha(\Delta,m)=\beta(\Delta,m)-1+g_{\Delta,1}(m)/h_{\Delta}(m).$$ \vskip1pt \bf Theorem. \it Let $m$ be a positive integer different from one, two
and six, and let $$\Delta\in\{5,7\}.$$
Then \begin{equation} h_\Delta(m)>0 \label{eq:4}\end{equation}
and for each $\varepsilon>0$
there exists $C_{\Delta,m}(\varepsilon)>0$ such that \begin{equation} \max_{\sigma\in Gal(K_m/{\mathbb Q})} (\vert q^\sigma\log((2+\exp(2\pi i/m))^\sigma)-p^\sigma\vert)\ge \label{eq:5}\end{equation} $$C_{\Delta,m}(\varepsilon)\left(\max_{\sigma\in Gal(K_m/{\mathbb Q})} \vert q^\sigma\vert\right)^{-\alpha(\Delta,m)-\varepsilon},$$ where $p\in{\mathbb Z}_{K_m}$ and $q\in{\mathbb Z}_{K_m}\diagdown \{0_{K_m}\};$
moreover, for any $q\in{\mathbb Z}_{K_m}\diagdown \{0_{K_m}\}$ and any $\varepsilon>0$ there exists $C^\ast_{\Delta,m}(q,\varepsilon)>0$
such that \begin{equation} b^{{\beta(\Delta,m)+\varepsilon}}\max_{\sigma\in Gal(K_m/{\mathbb Q})} (\vert q^\sigma b\log((2+\exp(2\pi i/m))^\sigma)-p^\sigma\vert)\ge \label{eq:6}\end{equation} $$C^\ast_{\Delta,m}(q,\varepsilon),$$ where $p\in{\mathbb Z}_{K_m},\,b\in{\mathbb N}.$
For the proof I use the same method, as in [\ref{r:ag}] -- [\ref{r:ci2}]. I must work on the Riemann surface $\mathfrak F$ of the function $\Log (z)$ and identify it with the direct product of the multiplicative group ${\mathbb R}_+^\ast= \{r\in{\mathbb R}\colon r>0\}$ of all the positive
real numbers with the operation $\times$ (which, as usual, is not written down explicitly),
and the additive group ${\mathbb R}$ of all the real numbers, so that $$z_1z_2=(r_1r_2,\phi_1+\phi_2)$$ for any two points $z_1=(r_1,\phi_1)$ and $z_2 = (r_2,\phi_2)$ on $\mathfrak F.$ I will illustrate the situations that arise on the half-plane $(\phi,r),$ where $r>0.$
For each $z=(r,\phi)\in{\mathfrak F},$ let $$\theta_0 (z)=r\exp {i\phi},\,\Log(z)=\ln(r)+i\phi,\,\eta^\ast_\alpha(z) = (r,\phi-\alpha),$$
where $\alpha\in{\mathbb R}.$ Clearly, $\Log(z_1z_2)=\Log(z_1)+\Log(z_2)$ for any $z_1\in {\mathfrak F}$ and $z_2\in{\mathfrak F}.$ Let $\rho(z_1,z_2)=\vert\Log(z_1)-\Log(z_2)\vert,$ where $z_1\in{\mathfrak F}$ and $z_2\in {\mathfrak F};$ clearly,
$({\mathfrak F},\rho)$ is a metric space.
Clearly, $\rho(zz_1,zz_2)=\rho(z_1,z_2)$ for any $z_1,z_2$ and $z$ in ${\mathfrak F}.$ Clearly, $\theta_0(z)=\exp(\Log(z))$ for any $z\in{\mathfrak F}.$ Clearly, for any $\alpha\in{\mathbb R}$ the map $z\to\eta^\ast_\alpha(z)$ is the bijection of ${\mathfrak F}$ onto ${\mathfrak F}$ and $$\theta_0((\eta^\ast_\alpha)^m(z))=\exp(-im\alpha)\theta_0(z)$$ for each $z=(r,\phi)\in{\mathfrak F},\,\alpha\in{\mathbb R}$
and $m\in{\mathbb Z}.$ Clearly, the group ${\mathfrak F}$ may be considered
as a ${\mathbb C}$-linear space, if for
any $z\in{\mathfrak F}$ and any $s\in{\mathbb C}$ we let $$ z^s=(\vert\exp(s\Log(z))\vert,\,\Im(s\Log(z))). $$
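This scalar action is compatible with $\Log\colon$ for any $z\in{\mathfrak F}$ and any $s\in{\mathbb C}$ we have $$\Log(z^s)=\ln\vert\exp(s\Log(z))\vert+i\Im(s\Log(z))= \Re(s\Log(z))+i\Im(s\Log(z))=s\Log(z).$$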
Let us fix a domain $D$ in ${\mathfrak F}.$ Let $f(z)=f^\wedge (r,\phi)$ for a complex-valued
function $f(z)$ on $D.$ It is well known that $f(z)$ is holomorphic in $D$
if the complex-valued function $f^\wedge (r,\phi)$ of two real variables $r$
and $\phi$ has continuous partial derivatives in $D,$ and
the Cauchy-Riemann conditions \begin{equation} r(((\partial/\partial r)f^\wedge)(r,\phi))= -i((\partial/\partial\phi) f^\wedge)(r,\phi)):=\label{eq:a}\end{equation} $$(\delta f)(z):=
\theta_0(z)((\partial/\partial z)f)(z)) $$ are satisfied for every point $z=(r,\phi)\in D$. The equalities (\ref{eq:a})
determine the differentiations $\frac{\partial}{\partial z}$
and $\delta=\theta_0(z)\frac{\partial}{\partial z}$ on the ring of all
functions holomorphic in the domain $D.$ In particular,
the function $\Log(z)$ is holomorphic on $\mathfrak F$ and we have
the equalities $$\left((\partial/\partial z)\Log\right)(z)=\theta_0(z^{-1}),\, (\delta\Log)(z)=1.$$ For the proof I use the functions of C.~S.~Meijer. Let $\Delta\in{\mathbb N}+1,\;\delta_0=1/\Delta,$ $$ \gamma_1=(1-\delta_0)/(1+\delta_0),\quad d_{l}=\Delta+(-1)^l, \quad l = 1, 2. $$ To introduce the first of my auxiliary functions, $f_1(z,\nu),$ I use
the auxiliary set $$ \Omega_0=\{z\in{\mathfrak F}\,\colon\,\vert z\vert\le1\}. $$ I prove that, for each $\nu \in {\mathbb N},$ the function $f_1(z,\nu)$ belongs to the ring ${\mathbb Q}[\theta_0(z)];$ therefore using the principle of analytic continuation we may regard it as being defined in ${\mathfrak F}.$ For $\nu\in{\mathbb N},$ let \begin{equation} f_1(z,\nu) =
-(-1)^{\nu(\Delta+1)}G_{2, 2}^{(1, 1)} \left(z\bigg\vert\matrix -\nu d_1,
&\!\! 1+\nu d\sb2\\ 0, &\!\! \nu\\\endmatrix\right) \label{eq:b}\end{equation} $$= -(-1)^{\nu(\Delta+1)}\frac1{2\pi i}\int\limits_{L_1}g_{2, 2}^{(1,1)}(s)ds, $$ where $$ g_{2,2}^{(1,1)}(s)= \theta_0(z^s)\Gamma(-s) \Gamma(1+d_1\nu+s)/(\Gamma (1-\nu +s)\Gamma (1+d_2\nu-s)) $$ and the curve $L_1$ passes from $+\infty$ to $+\infty$ encircling the set ${\mathbb N}-1 $ in the negative direction, but not including any point of the set $-{\mathbb N}.$ So, for the parameters of the Meijer functions we have $$p=q=2,\,m=n= 1,\,a_1=-\nu d_1,\,a_2=1+\nu d_2,\,b_1=0,\,b_2=\nu,$$ $$\Delta^\ast=\left(\sum\limits_{k=1}^qb_k\right)- \sum\limits_{j=1}^pa_j=-\nu-1<-1,$$ and, since we take $\vert z \vert \le 1,$ the convergence conditions of the integral in (\ref{eq:b}) hold. To compute the function $f_1(z,\nu),$
we use the following formula \begin{equation} G = (-1)^k\sum\limits_{s\in S_k}Res(g;s), \label{eq:c}\end{equation} where $k=1,\,G$ denotes the integral (\ref{eq:b}) with $L = L\sb k,$ $g$ denotes the integrand of the integral (\ref{eq:b}),
$S_k$ denotes the set of all the unremovable singularities of $g$ encircled by $L_k,$ and $Res(g;s)$ denotes the residue of the function $g$ at the point $s.$ Then we obtain the equality $$ f_1(z,\,\nu) = $$ $$ (\nu d\sb 1)!/(\nu \Delta)! (\theta_0(z))^\nu(-1)^{\nu\Delta}\sum\limits_{k=0}^{\nu\Delta}(-\theta_0(z))^k
\binom {\nu \Delta }k\binom {\nu \Delta + k }{\nu d \sb 1}.$$
Therefore, as it has been already remarked, using the principle of analytic continuation we may regard it as being defined in $\mathfrak F.$ Let $$ \Omega_1=\{z\in{\mathfrak F}\,\colon\,\vert z\vert\ge 1\}. $$ Now, let me introduce my second auxiliary function defined for $z\in\Omega_1.$ For $\nu\in{\mathbb N},$ let \begin{equation} f_2(z,\nu) =
-(-1)^{\nu(\Delta+1)}G_{2, 2}^{(2, 1)} \left(z\bigg\vert\matrix -\nu d_1,
&\!\! 1+\nu d\sb2\\ 0, &\!\! \nu\\ \endmatrix\right)= \label{eq:e}\end{equation} $$ -(-1)^{\nu(\Delta+1)}\frac1{2\pi i}\int\limits_{L_2}g_{2, 2}^{(2,1)}(s)ds, $$ where $$ g_{2,2}^{(2,1)}(s)= \theta_0((\eta_\pi(z))^s)\Gamma(-s)\Gamma (\nu-s) \Gamma(1+d_1\nu+s)/\Gamma (1+d_2\nu-s). $$ and the curve $L_2$ passes from $-\infty$ to $-\infty$ encircling the set $-{\mathbb N}$ in the positive direction,
but not including any point of the set ${\mathbb N}-1.$
So, for the parameters of the Meijer functions we have $$p=q=m=2,\,n= 1,\, a_1=-\nu d_1,\,a_2=1+\nu d_2, \, b_1=0,\,b_2=\nu,$$ $$\Delta^\ast=\left(\sum\limits_{k=1}^qb_k\right)- \sum\limits_{j=1}^pa_j=-\nu-1<-1,$$ and, since we take $\vert z \vert \ge 1,$ the convergence conditions of the integral in (\ref{eq:e}) hold. To compute the function $f_2(z,\nu),$
we use the formula (\ref{eq:c}) where $k=2,\,G$ denotes the integral in (\ref{eq:e}) with $L = L\sb k,$ $g$ denotes the integrand of the integral in (\ref{eq:e}),
$S_k$ denotes the set of all the unremovable singularities of $g$ encircled by $L_k,$ and $Res(g;s)$ denotes the residue of the function $g$ at the point $s.$ Then we obtain the equality \begin{equation} f_2 (z, \nu )(\nu \Delta )!/(\nu d\sb1)! = (-1)^\nu \!\!\sum \limits \sb {t=\nu + 1} \sp \infty R_0(t;\nu) \theta_0(z^{-t+\nu}), \label{eq:f}\end{equation} where $$
R_0(t;\nu)=(\nu \Delta )!/(\nu d\sb1)! \left(\prod\limits_{\kappa=\nu+1}^{\nu\Delta}(t-\kappa)\right) \prod\limits_{\kappa=0}^{\nu\Delta}(t+\kappa)^{-1}. $$ Let further \begin{equation}\label{eq:h} f_k^\ast(z,\,\nu )=f_k (z,\,\nu )(\nu \Delta )!/(\nu d\sb1)!, \end{equation} where $k=1,\,2.$ Expanding the function $R_0(t;\nu)$ into partial fractions, we obtain the equality $$ R_0(t;\nu)=\sum\limits_{k=0}^{\nu\Delta} \alpha^\ast_{\nu, k}/(t + k) $$ with \begin{equation} \alpha\sp\ast\sb{\nu, k} = (-1)\sp {\nu + \nu \Delta + k} \binom{\nu \Delta} {k}\binom{\nu \Delta+k} {\nu \Delta - \nu}, \label{eq:i}\end{equation} where $k=0,\,\ldots,\,\nu\Delta.$ It follows from (\ref{eq:e}), (\ref{eq:f}), (\ref{eq:h}) and (\ref{eq:i}) that \begin{equation} f^\ast_2(z,\nu)=(-\theta_0(z))^\nu \sum\limits_{t=1+\nu}^{+\infty}(\theta_0(z))^{-t}R\sb 0(t;\nu)= \label{eq:aa}\end{equation} $$ =(-\theta_0(z))^\nu\sum \limits_{t=1+\nu}^{+\infty}(\theta_0(z))^{-t-k+k} \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}/(t+ k) $$ $$ =(-\theta_0(z))^\nu\sum \limits_{t=1+\nu}^{+\infty} ((\theta_0(z))^{-t-k}/(t+k)) \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{k}= $$ $$ (-\theta_0(z))^\nu \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{k} \sum \limits_{\tau=1+\nu+k}^{+\infty} ((\theta_0(z))^{-\tau}/\tau))= $$ $$ =\alpha^\ast (z;\nu)(-\log(1-1/\theta_0(z))) - \phi^\ast(z;\nu), $$ where $\log(\zeta)$ is a branch of $\Log(\zeta)$
with $\vert\arg(\zeta)\vert<\pi,$ \begin{equation} \alpha^\ast(z;\nu)=(-(\theta_0(z))^\nu\sum\limits_{k=0}^{\nu\Delta} \alpha^\ast_{\nu,\,k}(\theta_0(z))^k=f^\ast_1 (z;\nu), \label{eq:ab}\end{equation} \begin{equation} \phi^\ast(z;\nu)=(-\theta_0(z))^\nu \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{k} \sum \limits_{\tau=1}^{\nu+k} ((\theta_0(z))^{-\tau}/\tau))= \label{eq:ac}\end{equation} $$ (-\theta_0(z))^\nu\sum \limits_{\tau=1}^\nu((\theta_0(z))^{-\tau} \alpha^\ast(z;\nu)/\tau+ $$ $$ (-\theta_0(z))^\nu \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{k} \sum \limits_{\tau=1+\nu}^{\nu+k} ((\theta_0(z))^{-\tau}/\tau)). $$ The change of order of summation by passage to (\ref{eq:aa})
is possible, because the series in the second sum in (\ref{eq:aa}) is convergent, if $\vert z\vert\ge1$ and $\theta_0(z)\ne1.$ Since $$\deg_t\left(\prod\limits_{\kappa=\nu+1}^{\nu\Delta}(t-\kappa)\right)- \deg_t\left(\prod\limits_{\kappa=0}^{\nu\Delta}(t+\kappa)\right)=-\nu-1, $$ it follows that $$ \alpha^\ast(1;\nu)=Res(R_0(t;\nu);t=\infty)=0. $$ So in the domain $D_0 = \{z\in{\mathfrak F}\colon \vert z\vert>1\}$ the function $f^\ast_2(z,\nu)$ coincides with the function \begin{equation} f^\ast_0(z,\nu)=\alpha^\ast (z;\nu)(-\log(1-1/\theta_0(z)))
- \phi^\ast(z;\nu), \label{eq:aa1}\end{equation} The form (\ref{eq:aa1}) may be used for various applications. It is especially pleasant when, for some $z,$ both $1/\theta_0(z)$ and $\alpha^\ast(z;\nu)$ are algebraic integers. The following Lemma corresponds to this remark.
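For example, if $\Delta=2$ and $\nu=1,$ then $\nu d_1=1,\,\nu\Delta=2,$ and $$R_0(t;1)=\frac{2(t-2)}{t(t+1)(t+2)}= -\frac{2}{t}+\frac{6}{t+1}-\frac{4}{t+2},$$ so that $\alpha^\ast_{1,0}=-2,\,\alpha^\ast_{1,1}=6,\,\alpha^\ast_{1,2}=-4,$ in agreement with (\ref{eq:i}); their sum is $0,$ which illustrates the equality $\alpha^\ast(1;\nu)=0$ above.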
\bf Lemma 1. \it Let $m\in{\mathbb N},\,m>2,$ $m\ne2p^\alpha,$ where $p$ runs over all the prime numbers and $\alpha$ runs over $\mathbb N.$ Then $1+\exp(2\pi i/m)$ belongs to the group of
the units of the field $K_m.$ If $m=2p^\alpha,$ where $p$ is a prime number and $\alpha\in\mathbb N,$ then the ideal ${\mathfrak l}=(1+\exp(2\pi i/m))$ is a prime ideal
in the field $K_m,$ and ${\mathfrak l}^{\phi(m)}=(p).$
\bf Proof. \rm Let $\Phi_m(z)$ be the polynomial which is irreducible over ${\mathbb Q},$ has the leading coefficient equal to one, and satisfies $\Phi_m(\exp(2\pi i/m))=0.$ Let $\Lambda(n),$ as usual, denote the von Mangoldt function. Since (see, for example, [\ref{r:cb1}], end of the chapter 3) $$\Phi_m(z)=\prod\limits_{d\vert m}(z^{m/d}-1)^{\mu(d)},$$ it follows that $$\Phi_m(-1)=(-2)^{\left(\sum\limits_{d\vert m}\mu(d)\right)}=1,$$ if $m\in 1+2{\mathbb N},$ $$\Phi_m(z)=\prod\limits_{d\vert (m/2)} (((z)^{m/(2d)}-1)^{\mu(2)}((-z)^{m/d}-1)/((-z)-1))^{\mu(d)},$$ $$\Phi_m(-1)=\lim\limits_{z\to-1}\prod\limits_{d\vert (m/2)} (((-z)^{m/d}-1)/((-z)-1))^{\mu(d)}\times$$ $$(-2)^{\mu(2)\left(\sum\limits_{d\vert (m/2)}\mu(d)\right)}=$$ $$\exp\left(\sum\limits_{d\vert (m/2)}\ln(m/(2d)){\mu(2d)}\right)= \exp(\Lambda(m/2)),$$ if $m\in 2(1+2{\mathbb N}),$ $$\Phi_m(z)=\prod\limits_{d\vert (m/2)} (((-z)^{m/d}-1)/((-z)-1))^{\mu(d)},$$ and $$\Phi_m(-1)=\lim\limits_{z\to-1}\prod\limits_{d\vert(m/2)} (((-z)^{m/d}-1)/((-z)-1))^{\mu(d)}=$$ $$\exp\left(\sum\limits_{d\vert m/2}\ln(m/(2d)){\mu(d)}\right) =\exp(\Lambda(m/2)),$$ if $m\in 4{\mathbb N}.$ If $m=2p^\alpha$ with $\alpha\in{\mathbb N},$
then $\Phi_m(-1)=\exp(\Lambda(m/2))=p,$ and the ideals ${\mathfrak l}_k=(1+\exp(2\pi ik/m))$, where $(k,m)=1,$
divide each other and in the standard equality $efg=n$ (see, [\ref{r:cb1}], chapter 3, section 10)
we have $$e=n=\phi(m),\ f=g=1.\,\blacksquare$$ In connection with the above remark and with Lemma 1, the following case is interesting for us: \begin{equation} \theta_0(z)=(-\rho)/(1+\exp(-i\beta))= -(\rho\exp(i\beta/2))/(2\cos(\beta/2))= \label{eq:aa3}\end{equation} $$-(\rho\exp(i\psi))/(2\cos(\psi))= -\rho(1+i\tan(\psi))/2$$ with $\rho>2/3,\, \vert\beta\vert<\pi$ and $-\pi/2<\psi=\beta/2<\pi/2;$
then $$ \Re(1-1/\theta_0(z))=\Re(2+\exp(i\beta)/\rho)>1/2, $$
and we have no problems with $\log(1-1/\theta_0(z)).$ Of course, according to Lemma 1, the case $\rho=1$ is especially
interesting. So, we will take further \begin{equation} z=(\rho/(2\cos(\psi)),\psi-\pi)= (\rho/(-2\cos(\theta)),\,\theta), \label{eq:aa4}\end{equation} where $\rho>2/3,\,\vert\psi\vert<\pi/2$ and $-3\pi/2<\theta=\psi-\pi<-\pi/2;$
clearly, the function (\ref{eq:aa1}) is analytic in the domain $$ D_1=
\{z=(\rho(2\cos(\psi))^{-1},\psi-\pi)\colon\rho>2/3,\,-\pi/2<\psi<\pi/2\}= $$ $$ \{z=(\rho(-2\cos(\theta))^{-1},\theta)\colon\rho>2/3,\, -3\pi/2<\theta<-\pi/2\}. $$ Let \begin{equation} D_2(\delta_0) = \{z\in{\mathfrak F}\colon \vert z\vert>1+\delta_0/2\},\, D_3=D_2(\delta_0)\cup D_1. \label{eq:aa6}\end{equation} So, the function $f^\ast_2(z,\nu)$ coincides with the function (\ref{eq:aa1}) in $D_2(\delta_0)\subset D_0.$ Since $D_2(\delta_0)\cap D_1\ne\emptyset,$ it follows that the union
$D_3=D_2(\delta_0)\cup D_1$ of the domains $D_2(\delta_0)$ and $D_1$ is a domain in $\mathfrak F$ and the function (\ref{eq:aa1}) is analytic in this domain.
The conditions which imply the equality \begin{equation} (-1)^{m+p-n}\exp(-i\alpha)\theta_0(z)\times \label{eq:ae}\end{equation} $$\left(\left(\prod\limits_{j=1}^p(\delta+1-a_j)\right) (G\circ\eta^\ast_\alpha)\right)(z)= \left(\left(\prod\limits_{k=1}^q(\delta-b_k)\right) (G\circ\eta^\ast_\alpha)\right)(z)$$ hold in our case for the Meijer function $$ G=G \sb {p, q} \sp{(m, n)}\left(z\bigg\vert\matrix a_1,&\ldots,&a_p\\ b_1,& \ldots,&b_q\\ \endmatrix\right).$$
We have $p=q=2,\,m=n=1,\,\alpha=0$ for the function $f_1(z,\nu)$
and the equation (\ref{eq:ae}) takes the form $$ \theta_0(z) ((\delta+1+d_1\nu)(\delta-d_2\nu)f_1)(z,\nu)= (\delta(\delta-\nu)f_1)(z,\nu) $$ We have $p=q=m=2,\,n=1,\,\alpha=\pi$ for the function $f_2(z,\nu)$
and the equation (\ref{eq:ae}) takes the form $$ \theta_0(z) ((\delta+1+d_1\nu)(\delta-d_2\nu)f_2)(z,\nu)= (\delta(\delta-\nu)f_2)(z,\nu). $$ We see that both the functions $f(z,\nu)=f^\ast_k(z,\nu),$ where $k=1,2,$ satisfy the same differential equation \begin{equation} \theta_0(z) (\delta+1+d_1\nu)(\delta-d_2\nu)f(z,\nu)= (\delta(\delta-\nu)f)(z,\nu) \label{eq:ah}\end{equation} in the domain $D_0.$ According to the general properties of the Meijer functions
we have the equality \begin{equation} \left(\prod\limits_{\kappa=1}^{\Delta-1}(\nu(\Delta-1)+\kappa)\right) \prod\limits_{\kappa=1}^{d_2}(\delta-d_2\nu-\kappa) f^\ast_k(z,\,\nu+1) = \label{eq:ai}\end{equation} $$ \left(\prod\limits_{\kappa=1}^{\Delta}(\nu\Delta+\kappa)\right)(\delta-\nu) \prod\limits_{\kappa=1}^{d_1}(\delta+d_1\nu+\kappa)f^\ast_k(z,\,\nu), $$ where $k=1,2$ and $z\in D_0.$ Since $f^\ast_0(z,\nu)$ and polynomial $f^\ast_1(z,\nu)$ are analytic in the domain $D_0\cup D_1,$ and $f^\ast_0(z,\nu)$ coincides with $f^\ast_2(z,\nu),$ it follows that the equations (\ref{eq:ah}) and (\ref{eq:ai}) hold in $D_0\cup D_1$ for $k=0,1.$
Let \begin{equation} D^\vee (w,\eta)=(\eta+1)(\eta+\gamma_1)- 2(1+\gamma_1)w\eta, \label{eq:bj}\end{equation} \begin{equation} D^\wedge (z,\eta)=D^\vee (\theta_0(z),\eta), \label{eq:bj1}\end{equation} where, in view of (\ref{eq:aa3}), \begin{equation} w=\theta_0(z)=-r\exp(i\psi),\,r=1/(2cos(\psi)),\,\vert\psi\vert<\pi/2. \label{eq:bj2}\end{equation} In view of (\ref{eq:bj2}), the polynomial (\ref{eq:bj}) coincides with the polynomial (1) in [\ref{r:cc}]. Let \begin{equation} h^\sim(\eta)= (\eta-1)(1-\delta_0)^{-d_1}(\eta+1)2^{-2} \ \eta^{d_1}. \label{eq:ba}\end{equation}
As in [\ref{r:bg}], we consider $\nu^{-1}$ as an independent variable taking its values in the field ${\mathbb C}$ including $0.$ Let $F$ be a bounded closed subset of ${\mathfrak F} $ (in particular, this compact $F$ may be a one-point set). Let ${\mathfrak H}_0 (F)$ be the subring
of all those functions in ${\mathbb Q}(w),$ which are well defined for every $w\in\theta_0(F).$ For $\varepsilon\in(0,1)$, let ${\mathfrak H} (F,\varepsilon)$ be the subring of all those functions in ${\mathbb Q}(w,\nu^{-1})$, which are well defined for every $(w, \nu^{-1})$ with $w\in \theta_0(F),\,\vert\nu^{-1}\vert\le\varepsilon.$
\bf Lemma 2. \it Let $F$ be a closed bounded subset of $D_0\cup D_1$ (in particular, $F$ may be a one-point set).
Let further, for any $z\in F,$ the polynomial (\ref{eq:bj1})
have only simple roots, and on the set of all the roots $\eta$ of
the polynomial $D^\wedge (z,\eta)$ let the map \begin{equation}
\eta\to h^\sim(\eta) \label{eq:bb}\end{equation} be injective. Then there is $\varepsilon\in (0,1)$ such that, for any $z\in F,\,\nu\in {\mathbb N}+[1\diagup\varepsilon],$ the functions $f^\ast_0(z,\nu),\,f^\ast_1(z,\nu)=\alpha^\ast (z;\nu)$ and $\phi^\ast(z;\nu)$ are solutions of the difference equation \begin{equation} x(z,\nu+2)+\sum\limits_{j=0}^1 q^\ast_j(z,\nu^{-1})x(z,\nu+j) = 0, \label{eq:bc}\end{equation} moreover, \begin{equation} q^\ast_j (z,\,\nu^{-1})\in{\mathfrak H}(F,\,\varepsilon) \label{eq:bd}\end{equation} for $j = 0,\,1,$ and the trinomial \begin{equation}
w^2+\sum\limits_{j=0}^1q^\ast_j (z,0)w^j \label{eq:be}\end{equation} coincides with \begin{equation} \prod\limits_{k=0}^1(w-h^\sim(\eta_k)), \label{eq:bd1}\end{equation} if $$\prod\limits_{k=0}^1(\eta-\eta_k)$$ coincides with $D^\vee (w,\eta)$ from (\ref{eq:bj}).
\bf Proof. \rm Proof may be found in [\ref{r:bg}]. $\blacksquare$
This Lemma shows the importance of the properties of the roots
of the polynomial (\ref{eq:bj}). In correspondence with (\ref{eq:aa4}) and with the notation in [\ref{r:cc}], let \begin{equation} \rho>2/3,\,r=\rho/(2\cos(\psi)),\,t=\cos(\psi),\, \vert\psi\vert<\pi/2. \label{eq:be1}\end{equation} Let $ u=r^2,\,\delta_0\le1/5<2/3<\rho. $
Then \begin{equation} 2\delta_0\le2/5<2/3<\rho<2\sqrt{u}=2r. \label{eq:be3}\end{equation}
Clearly, $$(\partial/\partial\psi)r=(\rho\sin(\psi))/(2\cos^2(\psi))= -(\rho/4)/(\sin(\psi)-1)-(\rho/4)/(\sin(\psi)+1),$$ $$(\partial/\partial\psi)^2r= ((\rho/4)\cos(\psi))/(\sin(\psi)-1)^2 +((\rho/4)\cos(\psi))/(\sin(\psi)+1)^2>0,$$ if $\vert\psi\vert<\pi/2.$
In view of (3.1.10) in [\ref{r:bh}], \begin{equation}\label{eq:bf} \vert D_0(r,\psi,\delta_0 )\vert^2 = r^4+r^2+(\delta_0/2)^4+\end{equation} $$2r^2(\delta_0/2)^2(2t^2-1)+2r(r^2+(\delta_0/2)^2)t=$$ $$u^2+u+(\delta_0/2)^4+(\delta_0/2)^2(\rho^2-2u)+\rho(u+(\delta_0/2)^2)=$$ $$u^2+u(\rho+1-(\delta_0)^2/2)+ (\delta_0/2)^2(\rho^2+\rho+(\delta_0/2)^2),$$ \begin{equation} \vert R_0(r,\psi,\delta_0)\vert^2=\vert D_0(r,\psi,\delta_0)\vert= \label{eq:bg}\end{equation} $$\sqrt{u^2+u(\rho+1-(\delta_0)^2/2)+ (\delta_0/2)^2(\rho^2+\rho+(\delta_0/2)^2)}.$$
In view of (3.1.41) - (3.1.43) in [\ref{r:bh}] and (\ref{eq:bg}), \begin{equation} p_1 =8(\vert R^\ast_0(r,\psi,\delta_0)\vert^2+ \vert R_0(r,\psi,\delta_0)\vert^2)/(1+\delta_0)^2= \label{eq:bh}\end{equation} $$ 8(r^2+rt+1/4+\vert D_0(r,\psi,\delta_0)\vert)/ (1+\delta_0)^2=8(1+\delta_0)^{-2}\times $$ $$ \left(u+\rho/2+1/4+ \sqrt{u^2+u(\rho+1-(\delta_0)^2/2)+ (\delta_0/2)^2 (\rho^2+\rho+(\delta_0/2)^2)}\,\right), $$ \begin{equation} p_2=(8(\vert R^\ast_1 (r,\psi,\delta_0)\vert^2+ \vert R_0(r,\psi,\delta_0)\vert^2))/(1+\delta_0)^2= \label{eq:bi}\end{equation} $$ 8(r^2-r\delta_0t+(\delta_0)^2/4+ \vert D_0(r,\psi,\delta_0)\vert)/(1+\delta_0)^2= $$ $$ 8(u-\delta_0\rho/2+(\delta_0)^2/4)/(1+\delta_0)^2+ $$ $$8(1+\delta_0)^{-2}\sqrt{u^2+u(\rho+1-(\delta_0)^2/2)+ (\delta_0/2)^2(\rho^2+\rho+(\delta_0/2)^2)}= $$ $$ 8(1+\delta_0)^{-2}u(2+(\rho+1-\delta_0\rho)/(2u)+O(1/u^2)), $$ \begin{equation}\label{eq:cj} q_1(r,\psi,\delta_0)=((1-\delta_0)/(1+\delta_0))^2,\, q_2(r,\psi,\delta\sb 0)= \end{equation} $$ (4r/(1+\delta_0))^2=(16u)/(1+\delta_0)^2. $$ In view of (91) in [\ref{r:cc}], (\ref{eq:be1}) and (\ref{eq:be3}), \begin{equation}\label{eq:ca} s=s_0(r,\psi)=\vert r\exp(i\psi)+1\vert\,/2=\sqrt{(r^2+1+2rcos(\psi))/4}= \end{equation} $$ \sqrt{(u+1+\rho)/4}\in(\max(\vert r-1\vert/2,\,\delta_0/4),\,(r+1)/2] $$ and $$ t=cos(\psi)=(4s^2-r^2-1)/(2r). $$ In view of (3.1.68) in [\ref{r:bh}], (3.1.70) -- (3.1.71) in [\ref{r:bh}]
and (\ref{eq:bg}), $$ \vert R^\ast_{-1}(r,\psi,\delta_0)\vert^2= r^2+(2+\delta_0)^2/4+r(2+\delta_0)\cos(\psi)= $$ $$ u+(2+\delta_0)^2/4+\rho(2+\delta_0)/2, $$ \begin{equation} p_0=8(\vert R^\ast_{-1}(r,\psi,\delta_0)\vert^2+ \vert R_0(r,\psi,\delta_0)\vert^2)/(1+\delta_0)^2= \label{eq:cd}\end{equation} $$ 8(u+(2+\delta_0)^2/4+\rho(2+\delta_0)/2)/(1+\delta_0)^2+$$ $$ 8(1+\delta_0)^{-2}\sqrt{u^2+u(\rho+1-(\delta_0)^2/2)+ (\delta_0/2)^2(\rho^2+\rho+(\delta_0/2)^2)}, $$ \begin{equation} q_0(r,\psi,\delta_0)(1+\delta_0)^2/16=(r^2+1+2rcos(\psi))=(u+1+\rho). \label{eq:ce}\end{equation} According to Lemma 4.4 in [\ref{r:cc}], (\ref{eq:aa6}) and (\ref{eq:be3}), \begin{equation} \vert \eta_1^ \wedge (r,\psi,\delta_0)+\epsilon\vert< \vert\eta_0^\wedge (r,\psi,\delta_0)+\epsilon\vert, \label{eq:ch}\end{equation} if $\epsilon^2=\epsilon$ and $z\in D_3.$ Therefore, according to (\ref{eq:bh}), (\ref{eq:cj}) and (\ref{eq:ch}), \begin{equation} (-1)^k(\partial/ \partial u)\vert\eta_k^\wedge(r,\psi,\delta_0) \vert>0, \label{eq:cf}\end{equation} where $\frac13<\rho/2<\sqrt{u}=r,\,k^2=k.$ According to a) and c) of the Lemma 4.6 in [\ref{r:cc}], and in view of (\ref{eq:aa6}) and (\ref{eq:ca}), \begin{equation} \vert \eta_1^\wedge(r,\psi,\delta_0)-1\vert< \vert\eta_0^\wedge (r,\psi,\delta_0)-1\vert, \label{eq:ci}\end{equation} if $z\in D_3.$ In view of (\ref{eq:bf}), \begin{equation} \vert D_0(r,\psi,\delta_0 )\vert^2 = \label{eq:db}\end{equation} $$u^2+u(\rho+1-(\delta_0)^2/2)+ (\delta_0/2)^2(\rho^2+\rho+(\delta_0/2)^2)=$$ $$(u+(\rho+1)/2-(\delta_0)^2/4)^2+ (\delta_0/2)^2(\rho^2+\rho+(\delta_0/2)^2)-$$ $$(((\rho+1)/2)^2-(\rho+1)(\delta_0)^2/4+ (\delta_0/2)^4)=$$ $$(u+(\rho+1)/2-(\delta_0)^2/4)^2+ (\delta_0/2)^2(\rho^2+2\rho+1)-(\rho+1)^2/4=$$ $$(u+(\rho+1)/2-(\delta_0)^2/4)^2- (\rho+1)^2(1-(\delta_0)^2)/4.$$ Consequently, \begin{equation} \vert D_0(r,\psi,\delta_0 )\vert= u+\frac{\rho+1}2-\frac{(\delta_0)^2}4+O(1/u), \label{eq:db0}\end{equation} where $u\ge1/4.$ Since 
$u\ge1/4>(\delta_0)^2/4,$ it follows that $$u+(\rho+1)/2-(\delta_0)^2/4>\sqrt{1-(\delta_0)^2}(\rho+1)/2.$$ If $\rho=1,u=1/4,$ then in view of (\ref{eq:db}), $$ \vert D_0(r,\psi,\delta_0 )\vert^2 = (5/4-(\delta_0)^2/4)^2- (1-(\delta_0)^2)= \left(\tau-5/4\right)^2+4\tau-1, $$ where $0<\tau=\frac{(\delta_0)^2}4\le\frac1{100};$ moreover, in this case $$(\partial/\partial\tau)\vert D_0(r,\psi,\delta_0 )\vert^2= 2\tau-5/2+4>0;$$ therefore if $\delta_0\le1/5,$ then $$ \vert D_0(r,\psi,\delta_0 )\vert^2\bigg\vert_{u=1/4,\rho=1}\le (1.24)^2-0.96=0.5776 $$ and $$ \vert D_0(r,\psi,\delta_0 )\vert\bigg\vert_{u=1/4,\rho=1}\le0.76. $$ In view of (\ref{eq:db}), $$ 1<(\partial/\partial u)\vert D_0(r,\psi,\delta_0 )\vert= $$ $$ \sqrt{ \frac{(u+(\rho+1)/2-(\delta_0)^2/4)^2} {(u+(\rho+1)/2-\frac{(\delta_0)^2}4)^2- (\rho+1)^2(1-(\delta_0)^2)/4}}=1+O(1/u^2),$$ in view of (\ref{eq:bh}), (\ref{eq:bi}) and (\ref{eq:cd}), \begin{equation}\label{eq:db2} (\partial/\partial u)p_\epsilon= 8(2+O(1/u^2))/(1+\delta_0)^2, \end{equation} where $\epsilon^3=\epsilon,$ and $(\partial/\partial u)\vert D_0(r,\psi,\delta_0 )\vert$ decreases with increasing $u;$ consequently, $$(\partial/{\partial u})^2 \vert D_0(r,\psi,\delta_0 )\vert<0,$$ if $u\ge1/4.$ In view of (\ref{eq:bh}), (\ref{eq:bi}) and (\ref{eq:cd}), \begin{equation} (\partial/\partial u)^2p_\epsilon= 8(1+\delta_0)^{-2}(\partial/\partial u)^2 \vert D_0(r,\psi,\delta_0)\vert<0, \label{eq:dc}\end{equation} where $u\ge1/4,\,0<\delta_0<2/3<\rho,\,\epsilon^3=\epsilon.$ In view of (\ref{eq:bi}), (\ref{eq:cj}), (\ref{eq:db}) and (\ref{eq:db0}),
if $\rho=1,\,u>1/4,\,0<\delta_0\le1/5,$ then \begin{equation}\label{eq:da} q_2((\partial/\partial u)p_2)/(\partial/\partial u)q_2- p_2/2=\end{equation} $$8u(1+(u+1-(\delta_0)^2/4)/ \vert D_0(r,\psi,\delta_0 )\vert)/(1+\delta_0)^2-$$ $$ 4(u-\delta_0/2+(\delta_0)^2/4+ \vert D_0(r,\psi,\delta_0 )\vert)/(1+\delta_0)^2= $$ $$ 4(u+\delta_0/2-(\delta_0)^2/4)/(1+\delta_0)^2+ $$ $$ 4((1+\delta_0)^2\vert D_0(r,\psi,\delta_0 )\vert)^{-1} (2u^2+u(2-(\delta_0)^2/2))-$$ $$ 4((1+\delta_0)^2\vert D_0(r,\psi,\delta_0 )\vert)^{-1} (u^2+u(2-(\delta_0)^2/2)+ (\delta_0/2)^2(2+(\delta_0/2)^2))= $$ $$ 4(u+\delta_0/2-(\delta_0)^2/4)/(1+\delta_0)^2+ $$ $$ 4((1+\delta_0)^2\vert D_0(r,\psi,\delta_0 )\vert)^{-1} (u^2-(\delta_0/2)^2(2+(\delta_0/2)^2))>0, $$ $$ q_2((\partial/\partial u)p_2)/(\partial/\partial u)q_2- p_2= 8u(1+(u+1-(\delta_0)^2/4)/ \vert D_0(r,\psi,\delta_0)\vert)/(1+\delta_0)^2- $$ $$ 8(u-\delta_0/2+(\delta_0)^2/4+ \vert D_0(r,\psi,\delta_0 )\vert)/(1+\delta_0)^2= $$ $$ 8u(2+O(1/u^2))/(1+\delta_0)^2- $$ $$ 8(u-\frac{\delta_0}2+\frac{(\delta_0)^2}4+ u+1-\frac{(\delta_0)^2}4+O(1/u))/(1+\delta_0)^2= $$ $$ -8(1-\frac{\delta_0}2+O(1/u))/(1+\delta_0)^2. $$ In view of (\ref{eq:cd}), (\ref{eq:ce}), (\ref{eq:da}), (\ref{eq:db}), (\ref{eq:db2}), (\ref{eq:db0}), if
$\rho=1,\,u>1/4,\,0<\delta_0\le1/5,$ then $$ (u+1)(\partial/\partial u)p_0- p_0/2> 8(2u+2)/(1+\delta_0)^2- $$ $$ 4(u+(2+\delta_0)^2/4+(2+\delta_0)/2+u+1 -(\delta_0)^2/4)/(1+\delta_0)^2=$$ $$8(u+1/2-(3\delta_0)/4)/(1+\delta_0)^2>0,$$ \begin{equation}\label{eq:da3} q_0((\partial/\partial u)p_0)/(\partial/\partial u)q_0- p_0/2= (u+2)(\partial/\partial u)p_0- p_0/2>\end{equation} $$(u+1)(\partial/\partial u)p_0-p_0/2>0,$$ \begin{equation}\label{eq:da4} q_0((\partial/\partial u)p_0)/(\partial/\partial u)q_0-p_0= 8(u+2)(2+O(1/u^2))/(1+\delta_0)^2- \end{equation} $$ 8(u+(2+\delta_0)^2/4+(2+\delta_0)/2+ u+1-(\delta_0)^2/4+O(1/u))/(1+\delta_0)^2= $$ $$ 8(3-(2+\delta_0)^2/4-(2+\delta_0)/2+(\delta_0)^2/4+ O(1/u))/(1+\delta_0)^2= $$ $$ 8(1-(3/2)\delta_0+O(1/u))/(1+\delta_0)^2, $$ where $u>1/4.$ In view of (\ref{eq:ce}), (\ref{eq:da3}) and (\ref{eq:dc}), $$ (\partial/\partial u)\left(\left( q_0((\partial/\partial u)p_0)/ (\partial/\partial u)q_0-p_0\right)(\partial/\partial u)p_0+ (\partial/\partial u)q_0\right)= $$ $$ (\partial/\partial u)\left(((u+2)(\partial/\partial u)p_0- p_0)(\partial/\partial u)p_0\right)= $$ $$ ((\partial/\partial u)p_0)^2+ ((u+2)(\partial/\partial u)^2p_0- (\partial/\partial u)p_0)(\partial/\partial u)p_0+ $$ $$ ( (u+2)(\partial/\partial u)p_0-p_0)(\partial/\partial u)^2p_0= $$ $$ ((u+2)(\partial/\partial u)^2p_0)(\partial/\partial u)p_0+ ((u+2)(\partial/\partial u)p_0-p_0) (\partial/\partial u)^2p_0= $$ $$ (2(u+2)(\partial/\partial u)p_0-p_0)(\partial/\partial u)^2p_0<0. $$ Therefore, according to (\ref{eq:da4}), (\ref{eq:db2}) and
(\ref{eq:ce}), \begin{equation} \inf\{((u+2)(\partial/\partial u)p_0-p_0)(\partial/\partial u)p_0+ (\partial/\partial u)q_0\colon u\ge1/4\}= \label{eq:gd}\end{equation} $$ \lim\limits_{u\to+\infty} (((u+2)(\partial/\partial u)p_0-p_0)(\partial/\partial u)p_0+ (\partial/\partial u)q_0)= $$ $$ 128(1-(3/2)\delta_0)/(1+\delta_0)^4+16/(1+\delta_0)^2>0. $$ According to the Lemma 4.17 in [\ref{r:cc}] and in view
of (\ref{eq:da}), (\ref{eq:da3}), (\ref{eq:gd}), \begin{equation} (\partial/\partial u)\vert\eta_0(r,\psi,\delta_0)+\epsilon\vert^2>0, \label{eq:ge}\end{equation} where $\epsilon^2=1,\,u>1/4,$ \begin{equation} (\partial/\partial u)\vert\eta_1(r,\psi,\delta_0)-1\vert^2<0, \label{eq:gf}\end{equation} where $u>1/4.$ The following Lemma describes the behavior of
the value $h^\sim(\eta_k(r,\psi,\delta_0))$ with $k^2=k$ and $h^\sim$ as in (\ref{eq:ba}).
\bf Lemma 3. \it If $\Delta\ge5,$ then \begin{equation} (\partial/\partial u)(\vert h^\sim(\eta_0(r,\psi,\delta_0))\vert)>0, \label{eq:gf1}\end{equation} $$ (\partial/\partial u)(\vert h^\sim(\eta_1(r,\psi,\delta_0))\vert)<0, $$ where $u\in(1/4,+\infty).$
\bf Proof. \rm The inequality (\ref{eq:gf1}) directly follows from (\ref{eq:ch}), (\ref{eq:ge}) and (\ref{eq:ba}). So, we must prove the second inequality of the Lemma. Clearly, if $\beta<1,\,u>1/4,$ then $$ (\partial/\partial u)(u^{3/4}+(3/4)\beta u^{-1/4})>0. $$ We take $$\beta= (4/3)(\delta_0/2)^2 (2+(\delta_0)^2/4)/(2-(\delta_0)^2/2). $$ Then, clearly, $\beta<(\delta_0)^2=1/(\Delta)^2<1.$ Therefore,
in view of (\ref{eq:bh}) and (\ref{eq:db}), if $\rho=1,$ then $$ p_1u^{-1/4}=8(1+\delta_0)^{-2}\times $$ $$ (u^{3/4}+(3/4)u^{-1/4}+ \sqrt{u^{3/2}+u^{-1/4}(\rho+1-(\delta_0)^2/2) (u^{3/4}+(3/4)\beta u^{-1/4})}) $$ increases with increasing $u\in(1/4,+\infty),$ and, in view of (\ref{eq:cj}), \begin{equation}\label{eq:gi} \vert\eta_0(r,\psi,\delta_0)\vert^2u^{-1/4}= p_1u^{-1/4}/2+ \sqrt{(p_1u^{-1/4}/2)^2-q_1u^{-1/2}} \end{equation} increases with increasing $u\in(1/4,+\infty).$
In view of (\ref{eq:cf}), (\ref{eq:cj}), (\ref{eq:gi}), (\ref{eq:ge}) and (\ref{eq:gf}), if $\Delta\ge5,$ then $$ \vert\eta_1(r,\psi,\delta_0)\vert^{2(\Delta-1)} \vert(\eta_1(r,\psi,\delta_0))^2-1\vert^2= $$ $$ \vert\eta_1(r,\psi,\delta_0)\vert^{2(\Delta-5)} \frac{(q_1)^4}{(\vert\eta_0(r,\psi,\delta_0)\vert^2u^{-1/4})^4}\times$$ $$\frac{16}{(1+\delta_0)^2}\vert\eta_0(r,\psi,\delta_0)+1\vert^{-2} \vert\eta_1(r,\psi,\delta_0)-1\vert^2$$ decreases together with increasing $u\in(1/4,+\infty).\,\blacksquare$
Let $D$ be a bounded domain in ${\mathbb C}$ or ${\mathfrak F},$ and let $D^\ast$ be the closure of $D.$ Let \begin{equation} a^\sim_0(z)\,,\ldots\,,a^\sim_n(z) \label{eq:dc1}\end{equation} be functions continuous on $D^\ast$ and analytic in $D.$ Let $a^\sim_n(z)=1$ for any $z\in D^\ast.$ Let \begin{equation} T(z,\lambda)=\sum\limits_{i=0}^na^\sim_i(z)\lambda^i. \label{eq:dc2}\end{equation} Let $s\in{\mathbb N},\,n_i\in{\mathbb N}-1,$ where $i=1,\,\ldots,\,s$ and $\sum\limits_{i=1}^sn_i=n.$ We say that the polynomial $T(z,\lambda)$ has a $(n_1,\,\ldots,\,n_s)$-disjoint system of roots on $D^\ast,$
if for any $z\in D^\ast$ the set of all the roots $\lambda$ of the polynomial $T(z,\lambda)$ splits into $s$ classes ${\mathfrak K}_1(z),\,\ldots,\,{\mathfrak K}_s(z)$ with the following properties:
a) the sum of the multiplicities of the roots of the class ${\mathfrak K}_i(z)$ is equal to $n_i$ for $i=1,\,\ldots,\,s;$
b) if $i\in[1,s]\cap{\mathbb N},\,j\in(i,s]\cap{\mathbb N}$ and $n_in_j\ne0,$ then the absolute value of each root of the class ${\mathfrak K}_i(z)$ is greater than the absolute value of each root of the class ${\mathfrak K}_j(z).$
If the polynomial (\ref{eq:dc2}) has a $(n_1,\,\ldots,\,n_s)$-disjoint system of roots on $D^\ast,$ then for each $i=1,\,\ldots,\,s$ we denote by $\rho^\ast_{i,0}(z)$ and $\rho^\ast_{i,1}(z)$ respectively the maximal and the minimal absolute value of the roots of the class ${\mathfrak K}_i(z).$
Let $D$ be a bounded domain in ${\mathfrak F}$ such that
$D^\ast\subset D_3.$ Let \begin{equation} F^\wedge(z,\eta)= \prod\limits_{i=1}^2(\eta-h(\eta_{i-1}(r,\psi,\delta_0))), \label{eq:bj3}\end{equation} $$n=s=2,\,n_1=n_2=1,\,{\mathfrak K}_i(z)=\{h(\eta_{i-1}(r,\psi,\delta_0))\},$$ $$\rho_{i,0}=\rho_{i,1}=\vert h(\eta_{i-1}(r,\psi,\delta_0))\vert,$$ where $i=1,2.$
\bf Lemma 4. \it The polynomial $F^\wedge(z,\eta)$ in (\ref{eq:bj3})
has a $(1,\,1)$-disjoint system of roots on $D^\ast.$
\bf Proof. \rm The assertion of the Lemma follows from (\ref{eq:ch})
and (\ref{eq:ci}). $\blacksquare$
\bf Corollary. \it The map $(\ref{eq:bb})$ is injective
for every $z\in D^\ast;$ all the conditions of the Lemma 2 are fulfilled
for the functions $f_0^\ast(z,\nu)$ from (\ref{eq:aa1}), $\alpha^\ast(z,\nu)$ from (\ref{eq:ab}) and $\phi^\ast(z,\nu)$ from (\ref{eq:ac})
at every $z\in D^\ast;$ therefore for every $z\in D^\ast$ these functions
are solutions of the difference equation of Poincar\'e type (\ref{eq:bc}),
and the polynomial (\ref{eq:bd1}) coincides with
the characteristic polynomial of this equation. $\blacksquare$
Let for each $\nu\in{\mathbb N}-1$ there be given functions \begin{equation}\label{eq:dd0} a_0(z;\nu),\,\ldots,\,a_n(z;\nu), \end{equation} continuous on $D^\ast$ and analytic in $D.$
Let $a_n(z;\nu)=1$
for any $z\in D^\ast$ and any $\nu\in{\mathbb N}-1.$ We suppose that for any $i=0,\,\ldots,\,n-1$ the sequence of functions $a_i(z;\nu)$ converges to $a^\sim_i(z)$
uniformly on $D^\ast,$ when $\nu\to\infty.$ Let us consider now the difference equation \begin{equation}\label{eq:dd} a_0(z;\nu)y(\nu+0)\,+\ldots\,+a_n(z;\nu)y(\nu+n)=0, \end{equation} i.e. we consider a difference equation of the Poincar\'e type,
coefficients (\ref{eq:dd0}) of this equation are continuous on $D^\ast$
and analytic in $D,$ and they uniformly converge to limit functions (\ref{eq:dc1}),
when $\nu\to\infty.$
\bf Lemma 5. \it Let the polynomial (\ref{eq:dc2}) have a $(n_1,\,\ldots,\,n_s)$-disjoint system of roots on $D^\ast.$
Let $y(z,\,\nu)$ be a solution of the equation (\ref{eq:dd}), continuous on $D^\ast$ and analytic in $D.$ Let further $i\in[1,s]\cap{\mathbb Z}.$ Let us consider the set of all the $z\in D$ for which the following inequality holds \begin{equation} \limsup\limits_{\nu\in{\mathbb N},\,\nu\to\infty} \vert y(z,\,\nu)\vert^{1/\nu} <\rho_{i,1}(z); \label{eq:de}\end{equation} if this set has a limit point in $D,$ then the inequality (\ref{eq:de}) holds in $D^\ast.$
\bf Proof. \rm The proof may be found in [\ref{r:aa}]
(Theorem 1 and its Corollary). $\blacksquare$
\bf Lemma 6. \it Let $D$ be a bounded domain in ${\mathfrak F}$ such that
$D^\ast\subset D_3.$ Then \begin{equation} \limsup\limits_{\nu\in{\mathbb N},\,\nu\to\infty} \vert f^\ast_0(z,\,\nu)\vert^{1/\nu} <\rho_{1,1}(z)=\vert h^\sim(\eta_{0}(r,\psi,\delta_0))\vert \label{eq:ha}\end{equation} for any $z\in D^\ast.$
\bf Proof. \rm In view of (\ref{eq:aa6}), expanding the domain $D$,
if necessary, we can suppose that $\{(r,\,\phi)\colon\,r\in[2,\,3],\,\phi=0\}\subset D.$
Making use of the same arguments as in [\ref{r:bi}], Lemma 4.2.1,
we see that the inequality (\ref{eq:ha})
holds for any point $z=(r,\phi)\in\{(r,\,\phi)\colon r\in[2,\,3],\,\phi=0\}.$
According to the Lemma 5, the inequality (\ref{eq:ha})
holds for any $z\in D^\ast.\,\blacksquare$
For each prime $p\in{\mathbb N}$ let $v_p$ denote the $p$-adic valuation on ${\mathbb Q}.$
\bf Lemma 7. \it Let $p\in{\mathbb N}+2$ be a prime number, $$ d\in{\mathbb N}-1,\,r\in{\mathbb N}-1,\,r<p. $$ Then $$ v_p((dp + r)!/((-p)^{d}d!\,r!)-1)\ge1. $$
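The Lemma 7 is easy to test by computer; the following sketch (the helper name is mine, and only small parameters are checked) verifies that $(dp+r)!/(p^d\,d!\,r!)\equiv(-1)^d\pmod p:$

```python
from math import factorial

def lemma7_ratio(p, d, r):
    """Integer value of (dp+r)! / (p^d * d! * r!); the division is exact."""
    num = factorial(d * p + r)
    den = p**d * factorial(d) * factorial(r)
    assert num % den == 0
    return num // den

# v_p((dp+r)!/((-p)^d d! r!) - 1) >= 1 means ratio == (-1)^d (mod p):
for p in (3, 5, 7, 11):
    for d in range(4):
        for r in range(p):
            q = lemma7_ratio(p, d, r)
            assert (q - (-1)**d) % p == 0
```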
\bf Lemma 8. \it Let $p\in{\mathbb N}+2$ be a prime number, $d\in{\mathbb N}-1,\,d_1\in{\mathbb N}-1$, \begin {equation}\label{eq:zc} r\in[0,p-1]\cap{\mathbb N},\,r_1\in[0,p-1]\cap{\mathbb N},\,d_1p+r_1\le dp+r. \end{equation} Then \begin {equation}\label{eq:zd} v_p\left( \binom{dp+r}{d_1p+r_1}\right)=v_p\left(\binom d{d_1}\right), \end{equation} if $r_1\le r,$ \begin {equation}\label{eq:ze} v_p\left( \binom{dp+r}{d_1p+r_1}\left(\binom d{d_1}\binom r{r_1}\right)^{-1}-1\right) \ge1, \end{equation} if $r_1\le r,$ \begin {equation}\label{eq:zf} v_p\left( \binom{dp+r}{d_1p+r_1}\right)=1+v_p\left((d-d_1)\binom d{d_1}\right), \end{equation} if $r<r_1,$ \begin {equation}\label{eq:zg} v_p\left((-1)^{r_1-r-1}\binom{dp+r}{d_1p+r_1} \binom{r_1}{r}(r_1-r)\left(p\binom{d}{d_1}(d-d_1)\right)^{-1}-1\right)\ge1, \end{equation} if $r<r_1.$ \bf Proof. \rm Clearly, $d_1\le d.$ If $r_1\le r,$ then let $r_2=r-r_1,\,d_2=d-d_1.$ On the other hand, if $r_1> r,$ then, in view of (\ref{eq:zc}), $d\ge d_1+1;$ therefore in this case we let \begin{equation}\label{eq:zh} r_2=p+r-r_1,\,d_2=d-d_1-1. \end{equation} Then $(d_1p+r_1)+(d_2p+r_2)=dp+r,$ $$ \binom{dp+r}{d_1p+r_1}=(dp+r)!((d_1p+r_1)!(d_2p+r_2)!)^{-1}. $$ According to the Lemma 7, \begin {equation}\label{eq:zi} v_p\left(\binom{dp+r}{d_1p+r_1}(-p)^{-d+d_1+d_2} d_1!\,r_1!\,d_2!\,r_2!/(d!\,r!)-1\right)\ge1, \end{equation} \begin {equation}\label{eq:zaj} v_p\left(\binom{dp+r}{d_1p+r_1}\right)=d-d_1-d_2+ \end{equation} $$v_p(d!\,r!/(d_1!\,r_1!\,d_2!\,r_2!)). $$ The equality (\ref{eq:zd}) and the inequality (\ref{eq:ze}) directly follow from (\ref{eq:zi}) and (\ref{eq:zaj}). If
the inequality $r<r_1$ holds, then in view of (\ref{eq:zh}) -- (\ref{eq:zaj}), $$r_2!\prod\limits_{j=1}^{r_1-r-1}(p+r-r_1+j)=(p-1)!,\, v_p(r_2!(r_1-r-1)!(-1)^{r_1-r}-1)\ge1,$$ and (\ref{eq:zg}) holds. $\blacksquare$
\bf Corollary 1. \it Let $p\in{\mathbb N}+2$ be a prime number, $$ d\in{\mathbb N}-1,\,r\in{\mathbb N}-1,\,d_1\in{\mathbb N}-1,\,d_2\in{\mathbb N}-1,\, r_1\in{\mathbb N}-1,\,r_2\in{\mathbb N}-1, $$ $$ \max(r,\,r_1,\,r_2)<p. $$ Then $$ p^{-d}(dp + r)!\in (-1)^d d!\,r!+ p{\mathbb Z}, $$ $$ \binom{(d_1+d_2)p+r_1+r_2}{d_1p+r_1}\in \binom{d_1+d_2}{d_1}\binom{r_1+r_2}{r_1}+p{\mathbb Z}. $$
\bf Proof. \rm This is a direct corollary of the Lemma 7 and the Lemma 8.
See also Lemma 9 in [\ref{r:eh}]. $\blacksquare$
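Both congruences of the Corollary 1 (a Lucas-type statement) can be checked numerically; a sketch, with parameter ranges chosen by me:

```python
from math import comb, factorial

# First congruence: p^{-d} (dp+r)! == (-1)^d d! r!  (mod p);
# second: C((d1+d2)p + r1+r2, d1 p + r1) == C(d1+d2, d1) C(r1+r2, r1) (mod p).
for p in (3, 5, 7):
    for d in range(4):
        for r in range(p):
            lhs = factorial(d * p + r) // p**d
            assert (lhs - (-1)**d * factorial(d) * factorial(r)) % p == 0
    for d1 in range(3):
        for d2 in range(3):
            for r1 in range(p):
                for r2 in range(p):
                    lhs = comb((d1 + d2) * p + r1 + r2, d1 * p + r1)
                    rhs = comb(d1 + d2, d1) * comb(r1 + r2, r1)
                    assert (lhs - rhs) % p == 0
```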
\bf Corollary 2. \it Let $p\in{\mathbb N}+2$ be a prime number, $$ d\in{\mathbb N},\,r_1\in{\mathbb N},\,r_1<p,\,d_1\in{\mathbb N}-1,\,d_1<d. $$ Then \begin {equation}\label{eq:zf1} v_p\left(\binom{dp}{d_1p + r_1}\left(
d\binom{d-1}{d_1}{\binom p{r_1}}\right)^{-1}-1\right)\ge1. \end{equation}
\bf Proof. \rm Since $$d\binom{d-1}{d_1}=(d-d_1)\binom d{d_1}, \,v_p\left(\binom p{r_1}r_1/p-(-1)^{r_1-1}\right)\ge1,$$ the inequality (\ref{eq:zf1}) directly follows from (\ref{eq:zg}). $\blacksquare$
\bf Corollary 3. \it Let $p\in{\mathbb N}+2$ be a prime number, $$ d\in{\mathbb N},\,r_1\in{\mathbb N}, \,r_1<p,\,d_1\in{\mathbb N}-1,\,d_1<d. $$ Then $$ \binom{dp}{d_1p + r_1}\in
d\binom{d-1}{d_1}\binom p{r_1}+p^2{\mathbb Z}. $$ \bf Proof. \rm This is a corollary of the Corollary 2. See also Lemma 10 in [\ref{r:eh}]. $\blacksquare$
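The Corollary 3, i.e. the congruence $\binom{dp}{d_1p+r_1}\equiv d\binom{d-1}{d_1}\binom p{r_1}\pmod{p^2},$ can also be checked numerically; a sketch (the parameter ranges are mine):

```python
from math import comb

# C(d*p, d1*p + r1) == d * C(d-1, d1) * C(p, r1)  (mod p^2)
# for primes p >= 3, 1 <= r1 <= p-1 and 0 <= d1 < d.
for p in (3, 5, 7):
    for d in range(1, 5):
        for d1 in range(d):
            for r1 in range(1, p):
                lhs = comb(d * p, d1 * p + r1)
                rhs = d * comb(d - 1, d1) * comb(p, r1)
                assert (lhs - rhs) % p**2 == 0
```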
Let $p$ be a prime in $(2,\,+\infty),$ let $K$ be a finite extension of ${\mathbb Q},$ let ${\mathfrak p}$ be a prime ideal in ${\mathbb Z}_K$
with $p\in{\mathfrak p},$ let $f$ be the degree of ${\mathfrak p},$ let $(p)={\mathfrak p}^e{\mathfrak b},$ with an integral ideal ${\mathfrak b}$
not contained in ${\mathfrak p},$ and let $v_{\mathfrak p}$ be the additive ${\mathfrak p}$-valuation which prolongs $v_p;$ so,
if $\pi$ is a ${\mathfrak p}$-prime element, then $v_{\mathfrak p}(\pi)=1/e.$ If $f$ is the degree of the ideal ${\mathfrak p},$ then \begin{equation}\label{eq:z7a} v_{\mathfrak p}\left(w^{p^\beta}-w\right)\ge1, \end{equation} where $\beta\in{\mathbb N}f,\,w\in K$ and $$ v_{\mathfrak p}(w)\ge0. $$ In view of (\ref{eq:z7a}), (\ref{eq:ab}) and (\ref{eq:i}), $$ v_{\mathfrak p}(\alpha^\ast(z;\,p^\beta l)-\alpha^\ast(z;\,l))>1/e, $$ if $\beta\in{\mathbb N}f,\,\theta_0(z)\in K$ and
$v_{\mathfrak p}(\theta_0(z))\ge0.$ In view of (\ref{eq:ac}), \begin{equation}\label{eq:r7d} \phi^\ast(z;\nu)=(-\theta_0(z))^\nu \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{k} \sum \limits_{\tau=1}^{\nu+k} (\theta_0(z))^{-\tau}/\tau= \end{equation} $$ (-\theta_0(z))^\nu\sum \limits_{\tau=1}^\nu(\theta_0(z))^{-\tau} \alpha^\ast(z;\nu)/\tau+ $$ $$ (-\theta_0(z))^\nu \sum\limits_{k=0}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{k} \sum \limits_{\tau=1+\nu}^{\nu+k} (\theta_0(z))^{-\tau}/\tau= $$ $$(-1)^\nu\sum\limits_{\tau=1}^{\nu(\Delta+1)}\frac1\tau\sum \limits_{k=\max(0,\,\tau-\nu)}^{\nu\Delta} \alpha^\ast_{\nu,k}(\theta_0(z))^{\nu-\tau+k};$$ therefore, if $\nu=p^\beta l,\,f=1,\,\beta\in{\mathbb N}f,\,p>l(\Delta+1) ,\,\theta_0(z)\in K$ and $v_{\mathfrak p}(\theta_0(z))\ge0,$
then, according to the Lemma 2, \begin{equation}\label{eq:z7e}1-\beta\le\end{equation} $$v_{\mathfrak p}\left(\phi^\ast(z;\nu)- \sum\limits\Sb\eta\in[1,\,\Delta+1]\cap {\mathbb Z}\\ k\in[p^\beta(\eta-l),\,p^\beta l\Delta]\cap {\mathbb Z}\\ k\ge0,\, v_{\mathfrak p}(k)>0\endSb \frac{(-1)^{pl}}{p^\beta\eta} (\theta_0(z))^{p^\beta(l-\eta)+k}\alpha^\ast_{\nu,k}\right),$$ \begin{equation}\label{eq:z7f}1/e-\beta\le\end{equation} $$v_{\mathfrak p}\left(\phi^\ast(z;\nu)- \sum\limits\Sb\eta\in[1,\,\Delta+1]\cap {\mathbb Z}\\ k\in[p^{\beta-1}(\eta-l),\,p^{\beta-1}l\Delta]\cap {\mathbb Z}\\ k\ge0\endSb \frac{(-1)^{pl}p}{p^\beta\eta} (\theta_0(z))^{p^{\beta-1}(l-\eta)+k}\alpha^\ast_{\nu/p,k}\right). $$ We perform the passage (\ref{eq:z7e}) $\to$ (\ref{eq:z7f}) $\beta$ times and obtain the inequality \begin{equation}\label{eq:z7g}1/e-\beta\le\end{equation} $$v_{\mathfrak p}\left(\phi^\ast(z;\,p^\beta l)-p^{-\beta}\phi^\ast(z;\, l) \right), $$ where $$\{l,\,\beta\}\subset{\mathbb N},\,p>l(\Delta+1),\,p\in\mathfrak p$$ and $\mathfrak p$ is an ideal of the first degree.
\bf Lemma 9. \it If $m\in{\mathbb N}+1,\,K={\mathbb Q}[\exp(2\pi i/m)],$ $$ \alpha^\ast(z;\,l_1)\phi^\ast(z;\,l_2)\ne0 $$ for some $z\in K\diagdown\{0\},\,l_1\in{\mathbb N},\,l_2\in{\mathbb N},$ then for any $l\in{\mathbb N}$ the sequences \begin{equation}\label{eq:z7i} \alpha^\ast(z;\,\nu),\,\phi^\ast(z;\,\nu), \end{equation} where $\nu\in l+{\mathbb N},$ form a linearly independent system over $K.$
\bf Proof. \rm There exists $d^\ast\in{\mathbb N}$ such that $$d^\ast z\in{\mathbb Z}_K,\,d^\ast z\alpha^\ast(z;\,l_1)\in{\mathbb Z}_K,\, d^\ast z\phi^\ast(z;\,l_2)\in{\mathbb Z}_K.$$ Let a prime $p\in{\mathbb N}m+1$ satisfy the inequality $$ p>\vert Nm_{K/{\mathbb Q}}(d^\ast z\alpha^\ast(z;\,l_1))\vert+ \vert Nm_{K/{\mathbb Q}}(d^\ast z\phi^\ast(z;\,l_2))\vert+ $$ $$\vert Nm_{K/{\mathbb Q}}(d^\ast z)\vert+ \vert Nm_{K/{\mathbb Q}}(d^\ast)\vert+(\Delta+1)(l_1+l_2).$$ Let $\mathfrak p$ be a prime ideal containing $p.$ Then $$ v_{\mathfrak p}\left(\alpha^\ast(z;\,l_1)\right)= v_{\mathfrak p}\left(\phi^\ast(z;\,l_2)\right)=0, $$ and, in view of (\ref{eq:z7g}), $$ v_{\mathfrak p}\left(\phi^\ast(z;\,p^\beta l_2)\right)=-\beta, $$ but $$ v_{\mathfrak p}\left(\alpha^\ast(z;\,p^\beta l_1)\right)=0, $$ where $\beta\in{\mathbb N}.\,\blacksquare$
Let $m\in{\mathbb N},\,k\in{\mathbb Z},\,2\le2\vert k\vert<m,$ and let $m$ and $k$ have no common divisor except $\pm1.$ Let further $K_m={\mathbb Q}[\exp(2\pi i/m)]$ be the cyclotomic field, and let ${\mathbb Z}_{K_m}$ be the ring of all the integers of the field $K_m.$
\bf Lemma 10. \it Let $\Delta\in\{5,\,7\}.$
In correspondence with (\ref{eq:aa3}), (\ref{eq:aa4}) and (\ref{eq:aa6}), let $z=\left(1/(2\cos(k\pi/m)),\,k\pi/m-\pi\right),$ where $\vert k\vert<m/2,\, (\vert k\vert,m)=1.$
Then for each $l\in{\mathbb N}$ the two sequences (\ref{eq:z7i}) form a linearly independent system over $\mathbb C.$
\bf Proof. \rm We check the fulfilment of the conditions of the Lemma 9.
Let ${\mathfrak M}={\mathbb N}\diagdown\{1,\,2,\,6\}$ and ${\mathfrak M}_0=\{m\in{\mathfrak M}\colon \Lambda_0(m)=0\}.$ According to the condition of the Lemma, $\theta_0(z)=-1/(1+\exp(2i\pi/m))$ with $m\in{\mathfrak M}.$ If $m\in{\mathfrak M}$ and $\phi(m)>\Delta$, then, in view of (\ref{eq:ab}) and (\ref{eq:i}), $\alpha^\ast(z;1)\ne0,$ because the numbers $(1+\exp(2i\pi/m))^k,$ where $k=0,\,\ldots,\,\phi(m)-1,$ form a basis of the field $K_m.$ Let $\Delta=p\in2{\mathbb N}+1,$
where $p$ is a prime,
$\mathfrak p$ is a prime ideal containing $p,$ and, as before,
let $(p)={\mathfrak b}{\mathfrak p}^e,\,1_{K_m}\in{\mathfrak b}\,+\, {\mathfrak p}.$ Then \begin{equation}\label{eq:z13} \binom{2p-1}p\binom p{p-1}\equiv p \mod p^2,\, v_{\mathfrak p}\left(\binom{p+k}{1+k}\binom pk\right)=2, \end{equation} where $k=1,\,\ldots,\,p-2,$ \begin{equation}\label{eq:z14} \binom p1\binom p0=p,\,\binom{2p}{p+1}\binom pp\equiv 2p \mod p^2. \end{equation} If $m\in{\mathfrak M}$ and $(m,p)=1,$ or, if $m\in{\mathfrak M}_0,$ then, according to the Lemma 1, \begin{equation}\label{eq:z16} (1+\exp(2i\pi/m),p)=(1) \end{equation} and, according to the Lemmata 7 and 8, \begin{equation}\label{eq:z17} \alpha^\ast(z;1)/(p\theta_0(z))\equiv 1+(\theta_0(z))^{p-1}- 2(\theta_0(z))^p\equiv \end{equation} $$ 1+(\exp(2i\pi/m)+3)/(1+\exp(2ip\pi/m))\equiv$$ $$(\exp(2ip\pi/m)+\exp(2i\pi/m)+4)/(1+\exp(2ip\pi/m)) \mod p. $$ If $m=q^\alpha$ with $\alpha\in{\mathbb N}$ and prime $q$ and there exists $l$ in $\{0,\,\ldots,\,\phi(m)-1\}$ such that
$p\equiv l \mod (m),$ then \begin{equation}\label{eq:y1} \exp(2ip\pi/m)+\exp(2i\pi/m)+4\not\equiv0 \mod p. \end{equation} If $m=2q^\alpha$ with odd prime $q$ and $\alpha\in{\mathbb N},$ and
there exists $l$ in $\{0,\,\ldots,\,\phi(m/2)-1\}$ such that
$p\equiv 2l\mod (m/2),$ then (\ref{eq:y1}) holds.
If $p=5,$ then $\{3,\,4,\,5,\,8,\,10,\,12\}= \{m\in\mathfrak M\colon\phi(m)\le p\}.$
If $m=3,\,4,\,5,\,8,\,10$ then, clearly, (\ref{eq:y1}) holds.
If $m=12,$ then $1,\,\exp(i\pi/2),\,\exp(2i\pi/3),\,\exp(i\pi/6)$ form an integral basis of $K_{12},$ $\exp(5i\pi/6)=\exp(i\pi/2)-\exp(i\pi/6),$ and (\ref{eq:y1}) holds.
If $p=7,$ then $\{3,\,4,\,5,\,7,\,8,\,9,\,10,\,12,\,14,\,18\}= \{m\in\mathfrak M\colon\phi(m)\le p\}.$
If $m=3,\,4,\,5,\,7,\,9,\,14,$ then, clearly, (\ref{eq:y1}) holds.
If $m=8,$ then $\exp(7i\pi/4)=-\exp(3i\pi/4)$ and (\ref{eq:y1}) holds.
If $m=12,$ then $1,\,\exp(i\pi/2),\,\exp(2i\pi/3),\,\exp(i\pi/6)$ form an integral basis of $K_{12},$ $\exp(7i\pi/6)=-\exp(i\pi/6),$ and (\ref{eq:y1}) holds.
If $m=18,$ then $$\exp(7i\pi/9)=-\exp(-2i\pi/9)=\exp(4i\pi/9)+\exp(10i\pi/9),\,$$
and (\ref{eq:y1}) holds.
The coefficient at $(\theta_0(z))^0$ in the expression (\ref{eq:ac}) of $\phi^\ast(z;\nu)$ is equal to $$\sum\limits_{k=0}^{\nu\Delta}(-1)^\nu\alpha^\ast_{\nu,k}/(\nu+k)$$ and, if $\Delta=p,\,\nu=1,$ then, in view of (\ref{eq:z13}) -- (\ref{eq:z14}),
the value of $v_p$ on this coefficient is equal to $0.$
Therefore,
if $m\in{\mathfrak M}$ and $\phi(m)>p=\Delta$, then $\phi^\ast(z;1)\ne0.$
If $m\in{\mathfrak M}\diagdown{\mathfrak M}_0,$ and $m\equiv0 \mod p$
then $m=2p^\alpha,$ where $\alpha\in{\mathbb N}.$
According to the Lemma 1, ${\mathfrak p}=(1+\exp(2i\pi/m))$ is a prime ideal in $K_m,$ and, furthermore, ${\mathfrak p}^{\phi(m)}=(p).$
Let $v_{\mathfrak p}$ be the ${\mathfrak p}$-adic valuation,
which prolongs the valuation $v_p.$ Clearly, $v_{\mathfrak p}(1+\exp(2i\pi/m))=1/\phi(m),\, v_{\mathfrak p}(\theta_0(z))=-1/\phi(m).$ In view of (\ref{eq:ac}) with $\nu=1$, for the summands of the sum $$ \sum\limits_{k=1}^{\nu\Delta}\alpha^\ast_{\nu,k}(\theta_0(z))^{1+k} \sum \limits_{\tau=2}^{1+k} (\theta_0(z))^{-\tau}/\tau $$ we have the inequality $$v_{\mathfrak p}\left((\theta_0(z))^{1+k-\tau}\alpha^\ast_{\nu,k}/\tau\right)\ge -(k-1)/\phi(m)+2-v_{\mathfrak p}(\tau)\ge-(p-3)/\phi(m)+2,$$ if $k=1,\,\ldots,\,p-2,$ because in this case $\tau\in[2,\,p-1],$ $$v_{\mathfrak p}\left((\theta_0(z))^{1+k-\tau}\alpha^\ast_{\nu,k}/\tau\right)\ge-k/\phi(m)+ 1-v_{\mathfrak p}(\tau)\ge-(p-1)/\phi(m),$$ where $k\in\{p-1,\,p\},$ and the equality is attained only for $k=\tau=p;$ on the other hand,
$v_{\mathfrak p}(\alpha^\ast(z;1))\ge1-(p+1)/\phi(m)\ge -2/(p-1)\ge-2/\phi(m).$ So, if $p\ge5,$ then $v_{\mathfrak p}(\phi^\ast(z;1))=-(p-1)/\phi(m).$ If $m\in{\mathfrak M}\diagdown{\mathfrak M}_0,$ then $m=2q^\alpha$ with prime $q;$ according to the Lemma 1, ${\mathfrak l}=(1+\exp(2i\pi/m))$ is a prime ideal in $K_m,$ and ${\mathfrak l}^{\phi(m)}=(q).$ Therefore in this case $v_{\mathfrak p}(\theta_0(z))=0.$ If $m\in{\mathfrak M}_0,$ then, according to the Lemma 1,
$v_{\mathfrak p}(\theta_0(z))=0.$ According to (\ref{eq:ac}), in both last cases, $$v_{\mathfrak p}\left(\phi^\ast(z;1)+ \alpha^\ast_{\nu,p-1}/p+\theta_0(z)\alpha^\ast_{\nu,p}/p\right)\ge1.$$ In view of (\ref{eq:z13}),\,(\ref{eq:z14}), $$v_{\mathfrak p}(\alpha^\ast_{\nu,p-1}/p+\theta_0(z)\alpha^\ast_{\nu,p}/p)=$$ $$v_{\mathfrak p}\left((\exp(2i\pi/m)-1)/(\exp(2i\pi/m)+1)\right).$$ If $p=5$ and $m\in\{3,\,4,\,5,\,7,\,8,\,9,\,10\},$ then, clearly, \begin{equation}\label{eq:y5} v_{\mathfrak p}(\exp(2i\pi/m)-1)\le1/4. \end{equation} If $p=5$ and $m=12,$ then $Nm_{K_{12}}(\exp(i\pi/6)-1)=3$ and (\ref{eq:y5}) holds.
If $p=7$ and $m\in\{3,\,4,\,5,\,7,\,8,\,9,\,10,\,12,\,14,\,18\},$
then $$v_{\mathfrak p}(\exp(2i\pi/m)-1)\le1/6.$$
$\blacksquare$
\bf Lemma 11. \it Let all the conditions of the Lemma 10 be fulfilled. Then \begin{equation} \limsup\limits_{\nu\in{\mathbb N},\,\nu\to\infty} \vert f^\ast_0(z,\,\nu)\vert^{1/\nu} =\rho_{2,1}(z)\bigg\vert_{\theta_0(z)=-1/(1+\exp(2ik\pi/m))}= \label{eq:hd}\end{equation} $$\vert h^\sim(\eta_{1} (1/(2\cos(k\pi/m)),\,k\pi/m-\pi,\delta_0))\vert,$$ where $h^\sim(\eta)$ is defined in (\ref{eq:ba}).
\bf Proof. \rm According to the Lemma 2, (\ref{eq:aa1}) and the Lemma 10, $f^\ast_0(z,\,\nu)$ is a nonzero solution of the Poincar\'e type difference equation (\ref{eq:bc}). According to Perron's theorem and the Lemma 5, the equality (\ref{eq:hd}) holds. $\blacksquare$
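A minimal numeric illustration of Perron's theorem (with hypothetical constant coefficients, not the coefficients of the equation (\ref{eq:bc})): for $y(\nu+2)-y(\nu+1)-y(\nu)=0$ the characteristic roots are $(1\pm\sqrt5)/2,$ and the ratios $y(\nu+1)/y(\nu)$ of the solution with $y(0)=y(1)=1$ tend to the dominant root.

```python
import math

# y(nu + 2) = y(nu + 1) + y(nu): characteristic polynomial t^2 - t - 1
# with dominant root (1 + sqrt(5))/2; by Perron's theorem the ratios of
# consecutive values of a generic solution converge to that root.
y_prev, y_cur = 1, 1
for _ in range(60):
    y_prev, y_cur = y_cur, y_cur + y_prev
ratio = y_cur / y_prev
assert abs(ratio - (1 + math.sqrt(5)) / 2) < 1e-12
```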
Let $K/{\mathbb Q}$ be a finite extension of the field ${\mathbb Q},$ $$[K:{\mathbb Q}] = d.$$ Let the field $K$ have $r_1$ real places
and $r_2$ complex places. Each such place is a monomorphism
of the field $K$ in the field ${\mathbb R},$ if the place is real, or in the field ${\mathbb C},$ if the place is not real; we will denote these monomorphisms respectively by $\sigma_1\,,\ldots\,,\sigma_{r_1+r_2}.$ Then $d=r_1+2r_2.$ Let ${\mathfrak B}$ be a fixed integral basis $$ \omega_1\,,\ldots\,,\omega_d $$ of the field $K$ over ${\mathbb Q}.$ Clearly, $K$ is an algebra over
${\mathbb Q}.$ With the extension of the ground field from ${\mathbb Q}$ to ${\mathbb R}$ there appears an isomorphism of the algebra ${\mathfrak K} = K\otimes{\mathbb R}$ onto the direct sum $$ \underbrace{\mathbb R\oplus\ldots\oplus\mathbb R}_{\text {$r_1$ times}} \oplus\underbrace{\mathbb C\oplus\ldots\oplus\mathbb C}_ {\text {$r_2$ times}} $$ of $r_1$ copies of the field ${\mathbb R}$ and $r_2$ copies of the field ${\mathbb C}.$ We identify by means of this isomorphism the algebra ${\mathfrak K}$ with the specified direct sum. We denote below by
$\pi_j,$ where $j = 1\,,\ldots\,,r_1+r_2,$ the projection
of ${\mathfrak K}$ onto its $j-$th direct summand
and also the extension of this projection onto all kinds of matrices which have all the elements in ${\mathfrak K}.$
So, $\pi_j({\mathfrak K})={\mathbb R}$
for $j = 1\,,\ldots\,, r_1$ and $\pi_j({\mathfrak K})={\mathbb C}$ for
$j = r_1+1\,,\ldots\,,r_1+r_2.$
Further by ${\mathfrak i}_{\mathfrak K}$ we denote the embedding
of ${\mathbb R}$ in ${\mathfrak K}$ in diagonal way and also
the extension of this embedding
onto all kinds of the real matrices. So, ${\mathbb R}$ is embedded by means
of ${\mathfrak i}_{\mathfrak K}$ in ${\mathfrak K}$ in diagonal way.
Each element $Z\in{{\mathfrak K}}$ has a unique representation in the form: $$ Z=\left(\matrix z_1\\\vdots\\ z_{r_1+r_2}\\\ \\\overline{z_{r_1+1}} \\\vdots\\\overline{z_{r_1+r_2}}\endmatrix\right), $$ with $z_j=\pi_j(Z)\in{\mathbb R}$ for any $j=1\,,\ldots\,,r_1$ and with
$z_j=\pi_j(Z)\in\mathbb C$ for
\noindent any $j=r_1+1\,,\ldots\,,r_1+r_2.$ Further by $Tr_{\mathfrak K} (Z)$ we denote the sum $$\sum\limits_{j=1}^{r_1}z_j+\sum\limits_{j=r_1+1}^{r_1+r_2}2\Re(z_j)=$$ $$\sum\limits_{j=1}^{r_1}\pi_j(Z)+\sum\limits_{j=r_1+1}^{r_1+r_2}
2\Re(\pi_j (Z)),$$ and by $q_\infty^{({\mathfrak K})} (Z)$ we denote the value $$\max (\vert z_1\vert\,,\ldots\,,\vert z_{r_1+r_2}\vert)=$$ $$\max(\vert\pi_1(Z)\vert\,,\ldots\,,\vert\pi_{r_1+r_2}(Z)\vert).$$ Clearly, $$q_\infty^{({\mathfrak K})}(Z_1Z_2)\le q_\infty^{({\mathfrak K})}(Z_1) q_\infty^{({\mathfrak K})}(Z_2),$$ $$q_\infty^{({\mathfrak K})}(Z_1+Z_2)\le q_\infty^{({\mathfrak K})}(Z\sb1) +q_\infty^{({\mathfrak K})}(Z_2),$$ $$q_\infty^{(\mathfrak K)}({\mathfrak i}_{\mathfrak K}(\lambda)Z)= \vert\lambda\vert q_\infty^{({\mathfrak K})}(Z)$$ for any $Z_1\in{\mathfrak K},\,Z_2\in {\mathfrak K},\,Z\in{\mathfrak K}$ and $\lambda\in{\mathbb R}.$ The natural extension of the norm $q_\infty^{({\mathfrak K})}$ on the set of all the matrices, which have all the elements in $\mathfrak K$ (i.e. the maximum of the norm $q_\infty^{({\mathfrak K})}$ of all the elements of the matrix) also will be denoted by $q_\infty^{({\mathfrak K})}.$ If $$Z=\left(\matrix z_1\\\vdots\\z_{d}\endmatrix\right)\in K,$$ then $$z\sb j=\sigma_j (Z),$$ where $j = 1\,,\ldots\,,r_1+r_2,$ $$z_{r_1+r_2+j}=\overline {\sigma_{r_1+j}(Z)},$$ where $j = 1\,,\ldots\,,r\sb 2.$ In particular, $$ \omega_k=\left(\matrix\sigma_1(\omega_k)\\\vdots\\\ \sigma_{r_1+r_2 }(\omega_k)\\\ \\\overline{\sigma_{r\sb1+1}(\omega_k)} \\\vdots\\\overline{\sigma_{r_1+r_2}(\omega_k)} \endmatrix\right), $$ As usually, the ring of all the integer elements of the field $K$
will be denoted by ${\mathbb Z}_K.$
The ring ${\mathbb Z}_K$ is embedded in the ring ${\mathfrak K}$ as a discrete lattice. Moreover, if $Z\in{\mathbb Z}_K\backslash \{0\},$ then $$\left (\prod\limits_{i=1}^{r_1}\vert\sigma_i(Z)\vert\right) \prod\limits_{i=1}^{r_2}\vert\sigma_{r_1+i}(Z)\vert^2 = \vert Nm_{K/{\mathbb Q}}(Z)\vert \in{\mathbb N}$$ and therefore $ q_\infty^{({\mathfrak K})}(Z)\ge1 $ for any $Z\in {\mathbb Z}_K\backslash \{0\}.$ The elements of ${\mathbb Z}_K$ we call below $K$-integers. For each $Z\in{\mathfrak K}$ let $$\Vert Z\Vert_K =\inf\limits_{W\in{\mathbb Z}_K}\{q^{({\mathfrak K})}_\infty(Z-W)\}. $$ Let $\{m,\,n\}\subset {\mathbb N},$ $$ a_{i,k}\in{\mathfrak K} $$
for $i = 1\,,\ldots\,,m,\ k = 1\,,\ldots\,,n,$ $$ \alpha_j^\wedge(\nu)\in{\mathbb Z}_K, $$ where $j=1\,,\ldots \,,m+n$ and $\nu\in{\mathbb N}.$ Let there be $\gamma_0,r^\wedge_1\ge 1,\,\ldots,\,r^\wedge_m\ge 1$ such that $$ q^{(\mathfrak K)}_\infty(\alpha_i^\wedge(\nu))< \gamma_0(r^\wedge_i)^\nu,$$ where $i=1\,,\ldots \,,m$ and $\nu\in{\mathbb N}.$ Let $$ y^\wedge_k(\nu)=-\alpha^\wedge_{m+k}(\nu)+ \sum\limits_{i=1}^ma_{i,k}\alpha_i^\wedge(\nu),$$ where $k=1\,,\ldots \,,n$ and $\nu\in{\mathbb N}.$ If $X=\left(\matrix Z_1\\\vdots\\Z_n\endmatrix\right)\in{\mathfrak K}^n,$
then let $$ y^\wedge (X)=y^\wedge(X,\nu)=\sum\limits_{k=1}^ny^\wedge_k(\nu)Z_k $$ for $\nu\in{\mathbb N},$ let $$
\phi_i(X)=\sum\limits_{k=1}^n a_{i,k} Z_k $$ for $i=1\,,\ldots \,,m,$ and let $$ \alpha^\wedge_0(X,\nu)=\sum\limits_{k=1}^n\alpha^\wedge_{m+k}(\nu)Z_k $$ for $\nu\in\mathbb N.$ Clearly, $$ y^\wedge (X,\nu)=-\alpha_0^\wedge (X,\nu)+ \sum\limits_{i=1}^m\alpha_i^\wedge(\nu)\phi_i(X) $$ for $X\in{\mathfrak K}^n$ and $\nu\in\Bbb N,$ $$
\alpha^\wedge_0(X,\nu)\in{\mathbb Z_K} $$ for $X \in ({\mathbb Z}_K)^n$ and $\nu\in\Bbb N.$
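Before stating the Lemma 12, the basic norm inequality $q_\infty^{({\mathfrak K})}(Z)\ge1$ for nonzero $K$-integers can be illustrated in the simplest nontrivial case $K={\mathbb Q}(\sqrt2)$ (a sketch; the choice of field and all names in it are mine): here $r_1=2,\,r_2=0,$ the two real places are $a+b\sqrt2\mapsto a\pm b\sqrt2,$ and the product of the two absolute values equals $\vert Nm_{K/{\mathbb Q}}(Z)\vert\ge1.$

```python
import itertools
import math

SQRT2 = math.sqrt(2)

# K = Q(sqrt(2)): sigma_1(a + b*sqrt(2)) = a + b*sqrt(2),
# sigma_2(a + b*sqrt(2)) = a - b*sqrt(2), and
# q_inf(Z) = max(|sigma_1(Z)|, |sigma_2(Z)|).
def q_inf(a, b):
    return max(abs(a + b * SQRT2), abs(a - b * SQRT2))

# |sigma_1(Z)| * |sigma_2(Z)| = |a^2 - 2 b^2| = |Nm(Z)| >= 1 for Z != 0,
# hence q_inf(Z) >= sqrt(|Nm(Z)|) >= 1.
for a, b in itertools.product(range(-20, 21), repeat=2):
    if (a, b) != (0, 0):
        assert abs(a * a - 2 * b * b) >= 1
        assert q_inf(a, b) >= 1
```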
\bf Lemma 12. \it Let $\{l,\,n\}\subset{\mathbb N},\,\gamma_1>0,\, \gamma_2>\frac 12,\,R_1\ge R_2>1,$ $$ \alpha_i=(\log(r^\wedge_iR_1/R_2))/\log(R_2), $$ where $i=1\,,\ldots \,,m,$ let $ X\in ({\mathbb Z}_K)^n\backslash\{(0)\},$ $$ \gamma_3=\gamma_1(R_1)^{(-\log(2\gamma_2R_2))/\log(R_2)},\, \gamma_4=\gamma_3 \left( \sum\limits_{i=1}^m\gamma_0(r_i^\wedge)^{(\log(2\gamma_2))/\log(R_2)+l} \right)^{-1},$$ and let for each $\nu\in{\mathbb N} - 1$ the inequalities $$ \gamma_1(R_1)^{-\nu}q^{({\mathfrak K})}_\infty(X)\le \sup\{q^{({\mathfrak K})}_\infty(y^\wedge(X,\kappa))\colon\kappa= \nu,\,\ldots,\,\nu+l-1\},$$ $$ q^{({\mathfrak K})}_\infty(y^\wedge(X,\nu))\le\gamma_2(R_2)^{-\nu} q^{(\mathfrak K)}_\infty(X) $$ hold. Then $$ \sup\{\Vert\phi_i(X)\Vert_K(q^{({\mathfrak K})}_\infty (X))^{\alpha_i} \colon i=1,\,\ldots, \,m\} \ge\gamma_4. $$
\bf Proof. \rm The proof may be found in [\ref{r:cj}], Theorem 2.3.1. $\blacksquare$
\bf Corollary. \it Let $ a\in {\mathfrak K}, $ \begin{equation} \alpha_1^\wedge(\nu)\in{\mathbb Z}_K,\,\alpha_2^\wedge(\nu)\in{\mathbb Z}_K, y(\nu)=-\alpha^\wedge_2(\nu)+a\alpha_1^\wedge (\nu) \label{eq:ec}\end{equation} where $\nu\in\mathbb N.$ Suppose there exist $\gamma_0>0$ and $r^\wedge_1\ge1$ such that $$ q^{({\mathfrak K})}_\infty(\alpha_1^\wedge(\nu))<\gamma_0(r^\wedge_1)^\nu, $$ where $\nu\in{\mathbb N}.$
Let $l\in{\mathbb N},\,\gamma_1>0,\, \gamma_2>\frac 12,R_1\ge R_2>1,$ $$ \alpha_1=(\log(r_1^\wedge R_1/R_2))/\log(R_2),\, \gamma_3=\gamma_1(R_1)^{(-\log(2\gamma_2R_2))/\log(R_2)}, $$ $$ \gamma_4=\gamma_3\left (
\gamma_0(r^\wedge_1)^{(\log(2\gamma_2))/\log(R_2)+l}\right)^{-1}, $$ $X\in{\mathbb Z}_K$ and suppose that for each $\nu\in{\mathbb N}-1$
the inequalities $$ \gamma_1(R_1)^{-\nu}q^{({\mathfrak K})}_\infty(X)\le \sup\{q^{({\mathfrak K})}_\infty(y(\kappa)X)\colon \kappa=\nu\,,\ldots\,,\nu+l-1\}, $$ $$ q^{({\mathfrak K})}_\infty(y(\nu)X)\le\gamma_2 (R_2)^{-\nu}q^{({\mathfrak K})}_\infty(X) $$ hold. Then \begin{equation}\label{eq:fc} \Vert aX\Vert_K(q^{({\mathfrak K})}_\infty (X))^{\alpha_1}
\ge\gamma_4. \end{equation}
\bf Proof. \rm This Corollary is Lemma 12 with $m=n=1.$ $\blacksquare$
Let $B\in{\mathbb N},\, D^\ast(B)=\inf\{d\in{\mathbb N}\colon d/\kappa \in{\mathbb N}\ \hbox{for each}\ \kappa \in{\mathbb N},\,\kappa\le B\},$ i.e. $D^\ast(B)$ is the least common multiple of $1,\,\ldots,\,B.$ It is known that $$D^\ast(B)=\exp(B+O(B/\log(B))).$$ Let $d^\ast_0(\Delta,\nu)=D^\ast(\nu(\Delta+1)).$ Then \begin{equation} d^\ast_0(\Delta,\nu)=\exp(\nu(\Delta+1)+O(\nu/\log(\nu))), \label{eq:fd}\end{equation} when $\nu\to\infty.$
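Since $D^\ast(B)$ is nothing but $\mathrm{lcm}(1,\ldots,B),$ the stated asymptotic can be cross-checked numerically. The following Python sketch is an illustration only and is not part of the argument; the name \verb|D_star| is ours.

```python
from math import gcd, log

def D_star(B):
    # Smallest d in N with d/k in N for every natural k <= B,
    # i.e. the least common multiple of 1, ..., B.
    d = 1
    for k in range(2, B + 1):
        d = d * k // gcd(d, k)
    return d

# By the prime number theorem, log(D*(B)) = B + O(B/log B),
# so log(D*(B))/B slowly tends to 1.
print(D_star(10))              # 2520
print(log(D_star(100)) / 100)  # ≈ 0.94
```

The slow convergence of $\log(D^\ast(B))/B$ to $1$ reflects the $O(B/\log B)$ error term.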
G.V. Chudnovsky was probably the first to discover that the numbers (\ref{eq:i}) have a large common divisor; Hata ([\ref{r:eb}]) studied this effect in detail. Therefore I call the mentioned common divisor
the Chudnovsky--Hata multiplier and denote it by $d^\ast_1(\Delta,\nu).$ According to Hata's results, \begin{equation} \log(d^\ast_1(\Delta,\nu))=(1+o(1))\nu\times \label{eq:fe}\end{equation} $$\sum\limits_{\mu=0}^1\left(\frac{\Delta+(-1)^\mu}2 \log\left(\frac\Delta{\Delta+(-1)^\mu}\right)+ (-1)^\mu\frac\pi2\sum\limits_{\kappa=1}^{\left[\frac{\Delta+(-1)^\mu}2\right]} \cot\left(\frac{\pi\kappa}{\Delta+(-1)^\mu}\right)\right).$$ In view of (\ref{eq:fd}), \begin{equation} d^\ast_0(5,\nu)=\exp(6\nu+O(\nu/\log(\nu))),\, d^\ast_0(7,\nu)= \label{eq:ff}\end{equation} $$\exp(8\nu+O(\nu/\log(\nu))).$$ In view of (\ref{eq:fe}), \begin{equation} \log(d^\ast_1(5,\nu))=(1+o(1))\nu\times \label{eq:fg}\end{equation} $$(-3\log(1.2)-2\log(0.8)+ (\pi/2)(\cot(\pi/6)+\cot(\pi/3)- \cot(\pi/4)))=$$ $$(1+o(1))\nu\times1.956124...,$$ \begin{equation} \log(d^\ast_1(7,\nu))=(1+o(1))\nu\times \label{eq:fh}\end{equation} $$(4\log(7/8)+3\log(7/6))+ (1+o(1))(\pi/2)\nu\times$$ $$(-\cot(\pi/6)-\cot(\pi/3)+\cot(\pi/8)+\cot(3\pi/8)+\cot(\pi/4))=$$ $$(1+o(1))\nu(4\log(7/8)+3\log(7/6)+\pi(-2/\sqrt{3}+2/\sqrt{2}+1/2))=$$ $$(1+o(1))\nu\times2.314407\ldots\,,$$ when $\nu\to\infty.$
In view of (\ref{eq:ab}) and (\ref{eq:ac}), $$\alpha^\ast(z;\nu)d^\ast_0(\nu)/d^\ast_1(\nu)\in{\mathbb Z}[z],$$ $$\phi^\ast(z;\nu)d^\ast_0(\nu)/d^\ast_1(\nu)\in{\mathbb Z}[z].$$ Let \begin{equation} U_\Delta(m,\nu)=d^\ast_0(\nu)/d^\ast_1(\nu),\,\Lambda_0(m)=0, \label{eq:fi}\end{equation} if $m\ne2p^\alpha$ for every prime number $p$ and every $\alpha\in{\mathbb N},$ and let \begin{equation}\label{eq:gj} U_\Delta(m,\nu)=\frac{d^\ast_0(\nu)}{d^\ast_1(\nu)}p^{[(\Delta+1)\nu/\phi(m)]+1} ,\,\Lambda_0(m)=\Lambda(m/2), \end{equation} if $m=2p^\alpha,$ where $p$ is a prime number and $\alpha\in\mathbb N.$ In view of (\ref{eq:ab}), (\ref{eq:ac}) and Lemma 1, \begin{equation}\label{eq:gj1} \alpha^\ast(z;\nu)\bigg\vert_ {z=\left(\frac1{2\cos(\frac{k\pi i}m)},\,\frac{k\pi i}m-\pi\right)} U_\Delta(m,\nu) \in{\mathbb Z}_{{\mathbb Q}[\exp(2i\pi/m)]}, \end{equation} \begin{equation}\label{eq:gj2} \phi^\ast(z;\nu)\bigg\vert_ {z=\left(\frac1{2\cos(\frac{k\pi i}m)},\,\frac{k\pi i}m-\pi\right)} U_\Delta(m,\nu) \in{\mathbb Z}_{{\mathbb Q}[\exp(2i\pi/m)]}, \end{equation}
where $(k,m)=1.$ In view of (\ref{eq:gj}), (\ref{eq:fi}), (\ref{eq:fe}), (\ref{eq:fd}), (\ref{eq:1}) and (\ref{eq:3}) \begin{equation} \log\left(\frac{d^\ast_0(\nu)}{d^\ast_1(\nu)}\right)= \nu(1+o(1))V^\ast_\Delta, \label{eq:gb}\end{equation} $$\log(U_\Delta(m,\nu))=\nu(1+o(1))V_\Delta(m),$$ when $\nu\to\infty.$
The polynomial (\ref{eq:bj1}) takes the form $$ D^\wedge (z,\eta)=(\eta+1)\left(\eta+\frac{\Delta-1}{\Delta+1}\right)+ \frac{2\Delta\exp(i\psi)\eta}{(\Delta+1)\cos(\psi)}= $$ $$((\Delta+1)\eta^2+2\Delta(2+iT)\eta+(\Delta-1))/(\Delta+1),$$ where $\psi\in(-\pi/2,\,\pi/2)$ and $T=\tan(\psi);$ its roots are equal to \begin{equation} -(2\Delta+\Delta iT+R)/(\Delta+1), \label{eq:hf}\end{equation} where $R^2=\Delta^2(3-T^2)+1+4\Delta^2iT.$ In view of (\ref{eq:0}), $$R\in\left\{\pm\left( w_\Delta(T)+2\Delta^2iT/w_\Delta(T)\right)\right\}.$$ In view of (\ref{eq:hf}) and (\ref{eq:ch}), $$
\eta_j^ \wedge (r,\psi,\delta_0)= $$ $$ -\frac{ 2\Delta+\Delta iT+(-1)^j\left( w_\Delta(T)+2\Delta^2iT/w_\Delta(T)\right)} {\Delta+1}= $$ $$ -\frac{ 2\Delta+(-1)^jw_\Delta(T)+iT\Delta\left(1+(-1)^j2\Delta/w_\Delta(T)\right)} {\Delta+1}, $$ where $j=0,1,$ $$ \vert\eta_j^ \wedge (r,\psi,\delta_0)+k\vert^2= $$ $$ \frac{\left(2\Delta+(-1)^jw_\Delta(T)-k(\Delta+1)\right)^2+ T^2\Delta^2\left(1+(-1)^j2\Delta/w_\Delta(T)\right)^2} {(\Delta+1)^2}, $$ where $j=0,1;\,k=0,\,1,\,-1.$ Therefore, in view of (\ref{eq:ba}) and (\ref{eq:2}) \begin{equation} \log\vert h^\sim(\eta_j^ \wedge (r,\psi,\delta_0))\vert= \label{eq:hg1}\end{equation} $$\log\left\vert(\eta_j(r,\psi,\delta_0)-1)(1-\delta_0)^{-d_1} (\eta_j(r,\psi,\delta_0)+1)2^{-2}\eta_j(r,\psi,\delta_0)^{d_1}\right\vert=$$ $$-\log\left(4(\Delta+1)^{\Delta+1}(1-1/\Delta)^{\Delta-1}\right)+$$ $$\frac12\log\left(\left(2\Delta+(-1)^jw_\Delta(T)+(\Delta+1)\right)^2+ T^2\Delta^2\left(1+\frac{(-1)^j2\Delta}{w_\Delta(T)}\right)^2\right)+$$ $$\frac12 \log\left(\left(2\Delta+(-1)^jw_\Delta(T)-(\Delta+1)\right)^2+ T^2\Delta^2\left(1+\frac{(-1)^j2\Delta}{w_\Delta(T)}\right)^2\right)+$$ $$\frac{(\Delta-1)}2 \log\left(\left(2\Delta+(-1)^jw_\Delta(T)\right)^2+ T^2\Delta^2\left(1+\frac{(-1)^j2\Delta}{w_\Delta(T)}\right)^2\right)=$$ $$l_\Delta(j,T),$$ where $j=0,1.$ Clearly, $$w_\Delta(0)=\sqrt{3\Delta^2+1},$$ $$
\eta_j^ \wedge (1/2,0,\delta_0)=-\frac{2\Delta+(-1)^j\sqrt{3\Delta^2+1}} {\Delta+1}, $$ where $j=0,1,$ $$ \left\vert\eta_j^ \wedge (1/2,0,\delta_0)+k\right\vert= \left\vert\frac{2\Delta+(-1)^j\sqrt{3\Delta^2+1}-k(\Delta+1)} {\Delta+1}\right\vert, $$ where $j=0,1;\,k=0,\,1,\,-1.$ Therefore \begin{equation} l_\Delta(\epsilon,0)= \log\vert h^\sim(\eta_\epsilon^ \wedge(1/2,0,\delta_0))\vert =\label{eq:h11}\end{equation} $$ \log\left(\vert(\eta_\epsilon(1/2,0,\delta_0)-1)(1-\delta_0)^{-d_1} (\eta_\epsilon(1/2,0,\delta_0)+1) 2^{-2}\eta_\epsilon(1/2,0,\delta_0)^{d_1}\vert\right)= $$ $$-\log\left(4(\Delta+1)^{\Delta+1}(1-1/\Delta)^{\Delta-1}\right)\,+$$ $$\log\left(\vert2\Delta+(-1)^\epsilon\sqrt{3\Delta^2+1}-(\Delta+1)\vert \right)+$$ $$\log\left(\vert{2\Delta+(-1)^\epsilon\sqrt{3\Delta^2+1}+(\Delta+1)}\vert \right)+$$ $$(\Delta-1)\log\left(\vert{2\Delta+(-1)^\epsilon\sqrt{3\Delta^2+1}}\vert \right).$$ Consequently
$$l_5(1,0)=-\log(4)-6\log6-4\log(0.8)+$$ $$ \log(\sqrt{76}-4)+\log(16-\sqrt{76})+4\log(10-\sqrt{76})$$ I carried out the computations below by hand, using a CASIO calculator. $$\log4=1.386294361...\,;\,6\log(6)=10.75055682...\,;$$ $$4\log(0.8)=-0.892574205...\,;$$ $$\sqrt{76}=8.717797887...\,;\,\sqrt{76}-4=4.717797887...\,;$$ $$16-\sqrt{76}=7.282202113...\,;\,10-\sqrt{76}=1.282202113...\,;$$ $$\log\left(\sqrt{76}-4\right)=1.551342141...\,;\, \log\left(16-\sqrt{76}\right)=1.985433305...\,;$$ $$\log\left(10-\sqrt{76}\right)=0.248579...\,;\, 4\log\left(10-\sqrt{76}\right)=0.994316001...\,;\, $$ \begin{equation} l_5(1,0)=-6.713185...; \label{eq:h12}\end{equation} $$ l_7(1,0)=-\log(4)-8\log(8)-6\log(6)+6\log(7)+$$ $$\log\left(\sqrt{148}-6\right)+ \log\left(22-\sqrt{148}\right)+ 6\log\left(14-\sqrt{148}\right);$$ $$8\log8=16.63553233...\,;\,6\log6=10.75055682...;\,6\log7=11.67546089...;$$ $$\sqrt{148}=12.16552506...\,;\,\sqrt{148}-6=6.16552506...\,$$ $$22-\sqrt{148}=9.83447494...\,;\,14-\sqrt{148}=1.83447494...\,;$$ $$\log(\sqrt{148}-6)=1.818973301;\,\log(22-\sqrt{148})=2.285894063...;\,$$ $$\log(14-\sqrt{148})=0.606758304...\,;\,6\log(14-\sqrt{148})=3.640549824...\,; \,$$ \begin{equation} l_7(1,0)=-9.35150543...\,. \label{eq:h13}\end{equation} In view of (\ref{eq:1}),\,(\ref{eq:fd}),\,(\ref{eq:fe}),\,(\ref{eq:fg}),\, (\ref{eq:fh}) and (\ref{eq:gb}), \begin{equation} V_5^\ast=6-1.956124...=4.043876...;\,V_7^\ast=8-2.314407...=5.685593... \label{eq:h14}\end{equation}
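The hand computations of $l_5(1,0),$ $l_7(1,0)$ and the key differences can be re-checked with a few lines of Python. This is an illustration only; the name \verb|l_const| is ours, and small last-digit discrepancies against the hand calculation are rounding artifacts.

```python
from math import log, sqrt

def l_const(Delta):
    # l_Delta(1, 0): the epsilon = 1 case of (eq. h11), with w_Delta(0) = sqrt(3 Delta^2 + 1).
    w = sqrt(3 * Delta ** 2 + 1)
    c = -log(4 * (Delta + 1) ** (Delta + 1) * (1 - 1 / Delta) ** (Delta - 1))
    return (c + log(abs(2 * Delta - w - (Delta + 1)))
              + log(abs(2 * Delta - w + (Delta + 1)))
              + (Delta - 1) * log(abs(2 * Delta - w)))

V5 = 6 - 1.956124   # V_5^*
V7 = 8 - 2.314407   # V_7^*
print(l_const(5), l_const(7))              # ≈ -6.713185, -9.351505
print(-V5 - l_const(5), -V7 - l_const(7))  # both positive: the key inequalities
```

Both differences come out comfortably positive ($\approx 2.67$ and $\approx 3.67$), confirming the sign check.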
In view of (\ref{eq:h12}) -- (\ref{eq:h14}), \begin{equation} -V_5^\ast-l_5(1,0)>0,\,-V_7^\ast-l_7(1,0)>0. \label{eq:h15}\end{equation} So, the key inequalities (\ref{eq:h15}) are checked by hand. In view of (\ref{eq:hg1}), (\ref{eq:h15}) and Lemma 3, $$ -V_5^\ast-l_5(1,\tan(\pi/m))>0,\,-V_7^\ast-l_7(1,\tan(\pi/m))>0, $$ where $m>2.$ Since $(\log(p))/(p^{\alpha-1}(p-1))$ decreases as $p\in(3,\,+\infty)$ increases with fixed $\alpha\ge1,$ or as $\alpha\in(1,\,+\infty)$ increases with fixed $p\ge2$ (or, of course, as both $\alpha\in(1,\,+\infty)$ and $p\in(3,\,+\infty)$ increase), and $$\lim\limits_{p\to\infty}((\log(p))/(p^{\alpha-1}(p-1)))=0,$$ where $\alpha\ge1,$ $$\lim\limits_{\alpha\to\infty}((\log(p))/(p^{\alpha-1}(p-1)))=0,$$ where $p\ge2,$ it follows that the inequality (\ref{eq:4}) holds for all sufficiently big integers $m.$ Computations on a Pentium-class computer show that the inequality (\ref{eq:4}) holds for $m=3,\,m=4,\,m=5$ and $m=2\times5;$ therefore inequality (\ref{eq:4})
holds for all $m>2\times3.$ Let $ \varepsilon_0=h_{\Delta}(m)/2, $ with $h_{\Delta}(m)$ defined in (\ref{eq:2a}).
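The monotonicity of $(\log(p))/(p^{\alpha-1}(p-1))$ used in the argument above is easy to confirm numerically; the sketch below is illustrative only (the function name \verb|f| is ours).

```python
from math import log

def f(p, alpha):
    # The quantity log(p) / (p^(alpha-1) * (p - 1)) from the monotonicity argument.
    return log(p) / (p ** (alpha - 1) * (p - 1))

primes = [3, 5, 7, 11, 13]
# Decreasing in p for fixed alpha >= 1:
assert all(f(p, 1) > f(q, 1) for p, q in zip(primes, primes[1:]))
# Decreasing in alpha for fixed p >= 2:
assert all(f(5, a) > f(5, a + 1) for a in range(1, 6))
print(f(3, 1), f(13, 1))  # tends to 0 as p grows
```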
In view of (\ref{eq:4}), $ \varepsilon_0>0. $ We now take $K=K_m={\mathbb Q}[\exp(2\pi i/m)].$ Let further $\{\sigma_1,\,\ldots,\,\sigma_{\phi(m)}\}=Gal(K/{\mathbb Q}).$ For each $j=1,\,\ldots,\,\phi(m)$ there exists $k_j\in(-m/2,m/2)\cap{\mathbb Z}$ such that $$(\vert k_j\vert,\,m)=1,\, \sigma_j\left(\exp\left(\frac{2\pi i}m\right)\right)= \exp\left(\frac{2\pi ik_j}m\right).$$ Let $a$ be the element of ${\mathfrak K}$ such that $$ \pi_j(a)=\log(2+\sigma_j(\exp(2\pi i/m)))=\log(2+\exp(2\pi ik_j/m)), $$ where $j=1,\,\ldots,\,\phi(m);$ we suppose that $k_1=1.$ In view of (\ref{eq:gj1}) and (\ref{eq:gj2}), let $\alpha_1^\vee (\nu),\,\alpha_1^\wedge (\nu),\, \alpha_2^\vee (\nu),\,\alpha_2^\wedge (\nu)$
be elements of ${\mathfrak K}$ such that $$ \pi_j(\alpha_1^\vee(\nu))= \alpha^\ast(z;\nu)\bigg\vert_ {z=\left(\frac1{2\cos(\frac{k_j\pi i}m)},\,\frac{k_j\pi i}m-\pi\right)}, $$ $$ \pi_j(\alpha_2^\vee(\nu))= \phi^\ast(z;\nu)\bigg\vert_ {z=\left(\frac1{2\cos(\frac{k_j\pi i}m)},\,\frac{k_j\pi i}m-\pi\right)}, $$ \begin{equation}\label{eq:ij} \pi_j(\alpha_1^\wedge(\nu))= \alpha^\ast(z;\nu)\bigg\vert_ {z=\left(\frac1{2\cos(\frac{k_j\pi i}m)},\,\frac{k_j\pi i}m-\pi\right)} U_\Delta(m,\nu), \end{equation} \begin{equation}\label{eq:ia} \pi_j(\alpha_2^\wedge(\nu))= \phi^\ast(z;\nu)\bigg\vert_ {z=\left(\frac1{2\cos(\frac{k_j\pi i}m)},\,\frac{k_j\pi i}m-\pi\right)} U_\Delta(m,\nu), \end{equation} where $j=1,\,\ldots,\,\phi(m).$ Then $\alpha_k^\wedge (\nu)\in{\mathbb Z}_K$ for $k=1,\,2.$ Let \begin{equation}\label{eq:ec00} y^\vee(\nu)=-\alpha^\vee_2(\nu)+a\alpha_1^\vee(\nu), \end{equation} and let $y(\nu)$ be defined by the equality (\ref{eq:ec}). According to the Corollary of Lemma 4, to Theorem 4 in [\ref{r:cb}]
(or Theorem 7 in [\ref{r:cf1}]),
to Lemma 8 and to (\ref{eq:hg1}), there exists $m^\ast_1\in{\mathbb N}$ having the following property:
for any $\varepsilon\in(0,\,\varepsilon_0)$ there exist
$\gamma_0(\varepsilon)>0,\,\gamma_1(\varepsilon)>0,$ and $\gamma_2(\varepsilon)>0$ such that \begin{equation}\label{eq:ib0} \vert\pi_j(\alpha_k^\vee(\nu))\vert\le \end{equation} $$ \gamma_0(\varepsilon)\exp((l_\Delta(\tan((k_j\pi i)/m),0)+ \varepsilon/3)\nu), $$ where $k=1,\,2,\,j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_1,$
\begin{equation}\label{eq:ib} \gamma_1(\varepsilon)\exp((l_\Delta(\tan((k_j\pi i)/m),1)- \varepsilon/3)\nu)\le \end{equation} $$\max(\vert\pi_j(y^\vee(\nu))\vert,\,\vert\pi_j(y^\vee(\nu+1))\vert)\le$$ $$\gamma_2(\varepsilon)\exp((l_\Delta(\tan((k_j\pi i)/m),1)+ \varepsilon/3)\nu),$$ where $j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_1.$
Let $\omega_1(m)=(m-1)/2,$ if $m$ is odd, $\omega_1(m)=m/2-2,$ if $m\equiv2\pmod4$ and $\omega_1(m)=m/2-1,$ if $m\equiv0\pmod4.$ Then
$$\omega_1(m)=\sup\{k\in{\mathbb N}\colon k<m/2,\,(k,m)=1\}.$$
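The case formula for $\omega_1(m)$ agrees with this characterization as the largest $k<m/2$ coprime to $m;$ a quick illustrative check in Python (the function names are ours):

```python
from math import gcd

def omega1_formula(m):
    # The case formula from the text.
    if m % 2 == 1:
        return (m - 1) // 2
    if m % 4 == 2:
        return m // 2 - 2
    return m // 2 - 1

def omega1_direct(m):
    # Largest k in N with k < m/2 and (k, m) = 1.
    return max(k for k in range(1, m) if 2 * k < m and gcd(k, m) == 1)

# The two definitions coincide for all m in a sizable range.
assert all(omega1_formula(m) == omega1_direct(m) for m in range(3, 200))
print(omega1_formula(10), omega1_formula(12), omega1_formula(15))  # 3 5 7
```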
According to Lemma 3 and (\ref{eq:hg1}), \begin{equation}\label{eq:ic0} l_\Delta(\tan((k_j\pi i)/m),0)\le l_\Delta(\tan((\omega_1(m)\pi i)/m),0), \end{equation} \begin{equation}\label{eq:ic} l_\Delta(\tan((\omega_1(m)\pi i)/m),1)\le\end{equation} $$l_\Delta(\tan((k_j\pi i)/m),1)\le l_\Delta(\tan((\pi i)/m),1)$$ where $j=1,\,\ldots,\,\phi(m).$ In view of (\ref{eq:ib0}) -- (\ref{eq:ic}), \begin{equation}\label{eq:id0} \vert\pi_j(\alpha_k^\vee(\nu))\vert\le \gamma_0(\varepsilon)\exp((l_\Delta(\tan((\omega_1(m)\pi i)/m),0)+ \varepsilon/3)\nu),\end{equation} where $k=1,\,2,\,j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_1,$
\begin{equation}\label{eq:id} \gamma_1(\varepsilon)\exp((l_\Delta(\tan((\omega_1(m)\pi i)/m),1)- \varepsilon/3)\nu)\le \end{equation} $$\max(\vert\pi_j(y^\vee(\nu))\vert,\,\vert\pi_j(y^\vee(\nu+1))\vert)\le$$ $$\gamma_2(\varepsilon)\exp((l_\Delta(\tan((\pi i)/m),1)+ \varepsilon/3)\nu),$$ where $j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_1.$ In view of (\ref{eq:gb}), there exists $m^\ast_2\in{\mathbb N}-1+m^\ast_1,$ such that \begin{equation}\label{eq:ie} \exp((V_\Delta(m)-\varepsilon/3)\nu)\le U_\Delta(m,\nu)\le \exp((V_\Delta(m)+\varepsilon/3)\nu) \end{equation} where $\nu\in{\mathbb N}-1+m^\ast_2.$
In view of (\ref{eq:ic}) -- (\ref{eq:ie}), (\ref{eq:ij}) -- (\ref{eq:ec00}),
(\ref{eq:2a}), (\ref{eq:2a0}), \begin{equation}\label{eq:id1} \vert\pi_j(\alpha_k(\nu))\vert\le \gamma_0(\varepsilon)\exp((g_{\Delta,0}(m)+2\varepsilon/3)\nu), \end{equation} where $k=1,\,2,\,j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_2,$
\begin{equation}\label{eq:id2} \gamma_1(\varepsilon)\exp((-g_{\Delta,1}(m)-2\varepsilon/3)\nu)\le \end{equation} $$\max(\vert\pi_j(y^\vee(\nu))\vert,\,\vert\pi_j(y^\vee(\nu+1))\vert)\le$$ $$\gamma_2(\varepsilon)\exp((-h_{\Delta}(m)+2\varepsilon/3)\nu),$$ where $j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_2.$
Let $X\in{\mathbb Z}_{K_m}\diagdown\{0\}.$ Then, in view of (\ref{eq:id1}) and (\ref{eq:id2}), \begin{equation}\label{eq:id3} \vert\pi_j(X\alpha_k(\nu))\vert\le \gamma_0(\varepsilon)\exp((g_{\Delta,0}(m)+2\varepsilon/3)\nu) \vert\pi_j(X)\vert\le \end{equation} $$ \gamma_0(\varepsilon)\exp((g_{\Delta,0}(m)+2\varepsilon/3)\nu) q_\infty^{({\mathfrak K})}(X), $$ where $k=1,\,2,\,j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_2,$
\begin{equation}\label{eq:id4} \gamma_1(\varepsilon)\exp((-g_{\Delta,1}(m)-2\varepsilon/3)\nu) \vert\pi_j(X)\vert\le \end{equation} $$\max(\vert\pi_j(Xy^\vee(\nu))\vert,\,\vert\pi_j(Xy^\vee(\nu+1))\vert)\le$$ $$\max(q_\infty^{({\mathfrak K})}(Xy^\vee(\nu)),\, q_\infty^{({\mathfrak K})}(Xy^\vee(\nu+1))),$$ where $j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_2,$ \begin{equation}\label{eq:id5} \max(\vert\pi_j(Xy^\vee(\nu))\vert,\,\vert\pi_j(Xy^\vee(\nu+1))\vert)\le \end{equation} $$\gamma_2(\varepsilon)\exp((-h_{\Delta}(m)+2\varepsilon/3)\nu)\vert\pi_j(X) \vert\le$$ $$\gamma_2(\varepsilon)\exp((-h_{\Delta}(m)+2\varepsilon/3)\nu) q_\infty^{({\mathfrak K})}(X), $$ where $j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_2.$
In view of (\ref{eq:id3}) \begin{equation}\label{eq:id6} q_\infty^{({\mathfrak K})}(X\alpha_k(\nu))\le \end{equation} $$ \gamma_0(\varepsilon)\exp((g_{\Delta,0}(m)+2\varepsilon/3)\nu) q_\infty^{({\mathfrak K})}(X), $$ where $k=1,\,2,\,$ and $\nu\in{\mathbb N}-1+m^\ast_2.$ In view of (\ref{eq:id5}),
\begin{equation}\label{eq:id7} \max(q_\infty^{({\mathfrak K})}(Xy^\vee(\nu)),\, q_\infty^{({\mathfrak K})}(Xy^\vee(\nu+1)))= \end{equation} $$ \sup(\{\vert\pi_j(Xy^\vee(\nu+\epsilon))\vert \colon \epsilon\in\{0,\,1\},\,j=1,\,\ldots,\,\phi(m)\})\le$$ $$\gamma_2(\varepsilon)\exp((-h_{\Delta}(m)+2\varepsilon/3)\nu) q_\infty^{({\mathfrak K})}(X), $$ where $\nu\in{\mathbb N}-1+m^\ast_2.$
Taking into account (\ref{eq:id6}), (\ref{eq:id7}) and (\ref{eq:id4}), we see that all the conditions of the Corollary of Lemma 12 are fulfilled for $$\varepsilon\in(0,\,\varepsilon_0), \gamma_0(\varepsilon),\,\gamma_1(\varepsilon),\, \gamma_2(\varepsilon), y=y(\nu),\,\alpha_1(\nu),\alpha_2(\nu),$$ $$r_1=r_1(\varepsilon)=\exp(g_{\Delta,0}(m)+2\varepsilon/3),$$ $$R_1=R_1(\varepsilon)=\exp(g_{\Delta,1}(m)+2\varepsilon/3),$$ $$R_2=R_2(\varepsilon)=\exp(h_{\Delta}(m)-2\varepsilon/3),$$ and this proves the part of our Theorem connected with the inequality (\ref{eq:5}).
Let again $X\in{\mathbb Z}_{K_m}\diagdown\{0\}$ and let $$ q_{min}^{({\mathfrak K})}(X)=\inf(\{\vert\pi_j(X)\vert \colon j=1,\,\ldots,\,\phi(m)\}). $$ Clearly, $q_{min}^{({\mathfrak K})}(X)>0.$ According to Theorem 4
in [\ref{r:cb}], or to Theorem 7 in [\ref{r:cf1}], there exists $m^\ast_1\in{\mathbb N}$ having the following property: for any $\varepsilon\in(0,\,\varepsilon_0)$ there exist
$\gamma_0^\ast(X,\varepsilon)>0,\,\gamma_1^\ast(X,\varepsilon)>0,$ and $\gamma_2^\ast(X,\varepsilon)>0$ such that $$ \vert\pi_j(\alpha_k^\vee(\nu))\vert\le\gamma_0^\ast (X,\varepsilon)\exp((l_\Delta(\tan((\omega_1(m)\pi i)/m),0)+\varepsilon/3)\nu), $$ where $k=1,\,2,\,j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_1,$ $$ \gamma_1^\ast(X,\varepsilon)\exp((l_\Delta(\tan((\pi i)/m),1)- \varepsilon/3)\nu)\le $$ $$\max(\vert\pi_j(y^\vee(\nu))\vert,\,\vert\pi_j(y^\vee(\nu+1))\vert)\le$$ $$\gamma_2^\ast(X,\varepsilon)\exp((l_\Delta(\tan((\pi i)/m),1)+ \varepsilon/3)\nu),$$ where $j=1,\,\ldots,\,\phi(m)$ and $\nu\in{\mathbb N}-1+m^\ast_1.$ Repeating the previous considerations, we see that all the conditions of the Corollary of Lemma 12 are fulfilled for $\varepsilon\in(0,\,\varepsilon_0),$ $$\gamma_0=\gamma_0^\ast(X,\varepsilon),\, \gamma_1=\gamma_1^\ast(X,\varepsilon),\, \gamma_2=\gamma_2^\ast(X,\varepsilon),$$ $$y=y(\nu),\,\alpha_1(\nu),\alpha_2(\nu),\, r_1=r_1(\varepsilon)=\exp(g_{\Delta,0}(m)+2\varepsilon/3),$$ and $$R_1=R_2=R_2(\varepsilon)=\exp(h_{\Delta}(m)-2\varepsilon/3),$$ and this proves the part of our Theorem connected with the inequality (\ref{eq:6}). $\blacksquare$
Below are values of $\beta$ and $\alpha$ computed for $\Delta\in\{5,\,7\}$ and some $m\in{\mathbb N}.$
$$(m;\,\Delta;\,\beta;\,\alpha)=(3;\,5;\,3.111228...\,;\,3.111228...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(3;\,7;\,3.073525...\,;\,3.073525...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(4;\,5;\,11.458947...\,;\,11.458947...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(4;\,7;\,10.551730...\,;\,10.551730...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(5;\,5;\,4.826751...\,;\,5.607961...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(5;\,7;\,4.837858...\,;\,5.684622...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(7;\,5;\,5.701485...\,;\,6.977258...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(7;\,7;\,5.724804...\,;\,7.114963...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(8;\,5;\,8.337857...\,;\,9.436901...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(8;\,7;\,8.253047...\,;\,9.433260...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(9;\,5;\,6.312056...\,;\,7.960502...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(9;\,7;\,6.335274...\,;\,8.134962...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(10;\,5;\,43.546644...\,;\,46.230614...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(10;\,7;\,35.648681...\,;\,38.043440...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(11;\,5;\,6.786990...\,;\,8.735234...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(11;\,7;\,6.806087...\,;\,8.934922...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(12;\,5;\,5.638541...\,;\,6.813222...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(12;\,7;\,5.696732...\,;\,6.983870...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(13;\,5;\,7.177155...\,;\,9.376030...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(13;\,7;\,7.190814...\,;\,9.594580...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(14;\,5;\,19.659885...\,;\,21.835056...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(14;\,7;\,18.447228...\,;\,20.668254...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(15;\,5;\,7.508714...\,;\,9.922761...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(15;\,7;\,7.516606...\,;\,10.156245...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(16;\,5;\,7.951153...\,;\,9.876454...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(16;\,7;\,7.945763...\,;\,10.039605...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(17;\,5;\,7.797153...\,;\,10.399610...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(17;\,7;\,7.799343...\,;\,10.645404...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(18;\,5;\,9.486110...\,;\,10.955534...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(18;\,7;\,9.406368...\,;\,10.989150...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(19;\,5;\,8.052478...\,;\,10.822446...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(19;\,7;\,8.049182...\,;\,11.078690...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(20;\,5;\,6.696241...\,;\,8.559091...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(20;\,7;\,6.733979...\,;\,8.774063...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(21;\,5;\,8.281548...\,;\,11.202268...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(21;\,7;\,8.273039...\,;\,11.467583...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(22;\,5;\,13.134623...\,;\,15.504916...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(22;\,7;\,12.815391...\,;\,15.331975...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(23;\,5;\,8.489281...\,;\,11.547024...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(23;\,7;\,8.475843...\,;\,11.820351...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(24;\,5;\,7.088338...\,;\,9.210037...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(24;\,7;\,7.116679...\,;\,8.439782...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(25;\,5;\,8.679328...\,;\,11.862643...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(25;\,7;\,8.661235...\,;\,12.143143...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(26;\,5;\,12.172520...\,;\,14.674949...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(26;\,7;\,11.944943...\,;\,14.618461...),$$
$$\ldots$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(32;\,5;\,8.654733...\,;\,11.466214...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(32;\,7;\,8.637697...\,;\,11.705492...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(33;\,5;\,9.310125...\,;\,12.911341...),$$
$$(m;\,\Delta;\,\beta;\,\alpha)=(33;\,7;\,9.275806...\,;\,13.214792...),$$
{\begin{center}\large\bf References.\end{center}} \footnotesize \vskip4pt \refstepcounter{r}\noindent[\ther]
R.Ap\'ery, Interpolation des fractions continues\\ \hspace*{3cm} et irrationalit\'e de certaines constantes,\\ \hspace*{3cm} Bulletin de la section des sciences du C.T.H., 1981, No 3,
37 -- 53; \label{r:cd}\\ \refstepcounter{r} \noindent[\ther] F.Beukers, A note on the irrationality of $\zeta (2)$ and $\zeta (3),$\\ \hspace*{3cm} Bull. London Math. Soc., 1979, 11, 268 -- 272; \label{r:ce}\\ \refstepcounter{r} \noindent[\ther]
A. van der Poorten,
A proof that Euler missed...Ap\'ery's proof of the irrationality of
$\zeta (3),$\\ \hspace*{3cm} Math. Intelligencer, 1979, 1, 195 -- 203;\label{r:cf}\\ \refstepcounter{r} \noindent[\ther]
W. Maier, Potenzreihen irrationalen Grenzwertes,\\ \hspace*{3cm} J.reine angew. Math.,
156, 1927, 93 -- 148;\label{r:cg}\\ \refstepcounter{r}\noindent[\ther]
E.M. Niki\v sin, On irrationality of the values of the functions $F(x,s)$
(in Russian),\\ \hspace*{3cm} Mat.Sb. 109 (1979), 410 -- 417;\\ \hspace*{3cm}
English transl. in Math. USSR Sb. 37 (1980), 381 -- 388;\label{r:ch}\\ \refstepcounter{r}\noindent[\ther]
G.V. Chudnovsky, Pad\'e approximations to the generalized
hypergeometric functions\\ \hspace*{3.4cm} I, J. Math. Pures Appl., 58, 1979,
445 -- 476;\label{r:dj}\\ \refstepcounter{r}\noindent[\ther] \rule{2.7cm}{.3pt},
Transcendental numbers, Number Theory, Carbondale,\\ \hspace*{3.4cm} Lecture Notes in Math, Springer-Verlag, 1979, 751, 45 -- 69; \label{r:da}\\ \refstepcounter{r}\noindent[\ther] \rule{2.7cm}{.3pt}, Approximations rationnelles des logarithmes
de nombres rationnels\\ \hspace*{3.4cm} C.R.Acad.Sc. Paris, S\'erie A, 1979, 288, 607 -- 609; \label{r:db}\\ \refstepcounter{r}\noindent[\ther] \rule{2.7cm}{.3pt}, Formules d'Hermite pour les approximants de Pad\'e de logarithmes\\ \hspace*{3.4cm} et de fonctions bin\^omes, et mesures d'irrationalit\'e,\\ \hspace*{3.4cm} C.R.Acad.Sc. Paris, S\'erie A, 1979, t.288, 965 -- 967; \label{r:dc}\\ \refstepcounter{r}\noindent[\ther] \rule{2.5cm}{.3pt}, Un syst\`eme explicite d'approximants de Pad\'e\\ \hspace*{3.4cm} pour les fonctions hyperg\'eom\'etriques g\'en\'eralis\'ees,\\ \hspace*{3.4cm} avec applications a l'arithm\'etique,\\ \hspace*{3.4cm}
C.R.Acad.Sc. Paris, S\'erie A, 1979, t.288, 1001 -- 1004;\label{r:dd}\\ \refstepcounter{r}\noindent[\ther] \rule{2.5cm}{.3pt},
Recurrences defining Rational Approximations\\ \hspace*{3.4cm} to the irrational numbers, Proceedings\\ \hspace*{3.4cm} of the Japan Academy,
Ser. A, 1982, 58, 129 -- 133; \label{r:de}\\ \refstepcounter{r}\noindent[\ther] \rule{2.5cm}{.3pt}, On the method of Thue-Siegel,\\ \hspace*{3.4cm} Annals of Mathematics, 117 (1983), 325 -- 382; \label{r:df}\\ \refstepcounter{r}\noindent[\ther] K.Alladi and M. Robinson, Legendre polynomials and irrationality,\\ \hspace*{6cm}J. Reine Angew. Math., 1980, 318, 137 -- 155; \label{r:dg}\\ \refstepcounter{r}\noindent[\ther] A. Dubitskas, An approximation of logarithms of some numbers,\\ \hspace*{3cm} Diophantine approximations II,Moscow, 1986, 20 -- 34; \label{r:dh}\\ \refstepcounter{r}\noindent [\ther] \rule{2cm}{.3pt},
On approximation of $\pi/ \sqrt {3}$ by rational fractions,\\ \hspace*{3cm} Vestnik MGU, series 1, 1987, 6, 73 -- 76; \label{r:ej}\\ \refstepcounter{r}\noindent[\ther]
S.Eckmann, \"Uber die lineare Unaqbhangigkeit der Werte gewisser Reihen,\\ \hspace*{3cm} Results in Mathematics, 11, 1987, 7 -- 43; \label{r:ea}\\ \refstepcounter{r}\noindent[\ther]
M.Hata, Legendre type polynomials and irrationality measures,\\ \hspace*{3cm} J. Reine Angew. Math., 1990, 407, 99 -- 125; \label{r:eb}\\ \refstepcounter{r}\noindent[\ther]
A.O. Gelfond, Transcendental and algebraic numbers (in Russian),\\ \hspace*{3cm} GIT-TL, Moscow, 1952; \label{r:ec}\\ \refstepcounter{r}\noindent[\ther] H.Bateman and A.Erd\'elyi, Higher transcendental functions,1953,\\ \hspace*{3cm} New-York -- Toronto -- London,
McGraw-Hill Book Company, Inc.; \label{r:ed}\\ \refstepcounter{r}\noindent[\ther]
E.C.Titchmarsh, The Theory of Functions, 1939, Oxford University Press; \label{r:ee}\\ \refstepcounter{r}\noindent[\ther] E.T.Whittaker and G.N. Watson, A course of modern analysis,\\ \hspace*{3cm} 1927, Cambridge University Press; \label{r:ef}\\ \refstepcounter{r}\noindent[\ther]
O.Perron, \"Uber die Poincaresche Differenzengleichumg,\\ \hspace*{3cm} Journal f\"ur die reine und angewandte mathematik,\\ \hspace*{3cm} 1910, 137, 6 -- 64;\label{r:a}\\ \refstepcounter{r} \noindent [\ther] A.O.Gelfond, Differenzenrechnung (in Russian),
1967, Nauka, Moscow. \label{r:b}\\ \refstepcounter{r}\noindent [\ther]
A.O.Gelfond and I.M.Kubenskaya, On the theorem of Perron\\ \hspace*{6cm} in the theory of difference equations (in Russian),\\ \hspace*{3cm} IAN USSR, math. ser., 1953, 17, 2, 83 -- 86. \label{r:c}\\ \refstepcounter{r} \noindent [\ther] M.A.Evgrafov, New proof of the theorem of Perron\\ \hspace*{3cm} (in Russian),IAN USSR, math. ser., 1953, 17, 2, 77 -- 82; \label{r:d}\\ \refstepcounter{r} \noindent [\ther] G.A.Frejman, On theorems of Poincar\'e and Perron\\ \hspace*{3cm} (in Russian), UMN, 1957, 12, 3 (75), 243 -- 245; \label{r:e}\\ \refstepcounter{r} \noindent [\ther] N.E.N\"orlund, Differenzenrechnung, Berlin,
Springer Verlag, 1924;\label{r:f}\\ \refstepcounter{r} \noindent [\ther] I.M.Vinogradov, Foundations of the Number Theory (in Russian), 1952, GIT-TL;\label{r:g}\\ \refstepcounter{r} \noindent [\ther] J. Dieudonn\'e, Foundations of modern analysis,\\ \hspace*{3cm}
Institut des Hautes \'Etudes Scientifiques, Paris,\\ \hspace*{3cm} Academic Press, New York and London, 1960;\label{r:ei}\\ \refstepcounter{r} \noindent [\ther] CH.-J. de la Vall\'ee Poussin, Cours d'analyse infinit\'esimale,\\ \hspace*{3cm} Russian translation by G.M.Fikhtengolts,\\ \hspace*{3cm} GT-TI, 1933; \label{r:cb0}\\ \refstepcounter{r} \noindent [\ther] H. Weyl, Algebraic theory of numbers, 1940,\\ \hspace*{3cm} Russian translation by L.I.Kopejkina; \label{r:cb1}\\ \refstepcounter{r} \noindent [\ther] L.A.Gutnik, On the decomposition
of the difference operators of Poincar\'e type\\ \hspace*{3cm} (in Russian), VINITI, Moscow, 1992, 2468 -- 92, 1 -- 55; \label{r:h}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the decomposition
of the difference operators\\ \hspace*{3cm} of Poincar\'e type in Banach algebras\\ \hspace*{3cm} (in Russian), VINITI, Moscow, 1992, 3443 -- 92, 1 -- 36; \label{r:i}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the difference equations of Poincar\'e
type\\ \hspace*{3cm} (in Russian), VINITI, Moscow 1993, 443 -- B93, 1 -- 41; \label{r:aj}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the difference equations
of Poincar\'e type in normed algebras\\ \hspace*{3cm} (in Russian), VINITI, Moscow, 1994, 668 -- B94, 1 -- 44; \label{r:aa}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the decomposition of
the difference equations of Poincar\'e type\\ \hspace*{3cm} (in Russian),
VINITI, Moscow, 1997, 2062 -- B97, 1 -- 41; \label{r:ab}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, The difference equations
of Poincar\'e type\\ \hspace*{3cm} with characteristic polynomial having roots equal to zero\\ \hspace*{3cm}(in Russian), VINITI, Moscow, 1997, 2418 -- 97, 1 -- 20; \label{r:ac}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the behavior of solutions\\ \hspace*{3cm} of difference equations of Poincar\'e type\\ \hspace*{3cm}
(in Russian), VINITI, Moscow, 1997, 3384 -- B97, 1 - 41; \label{r:ad}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the variability of solutions of difference equations of Poincar\'e type\\ \hspace*{3cm}
(in Russian), VINITI, Moscow, 1999, 361 -- B99, 1 -- 9; \label{r:ae}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, To the question of the variability
of solutions\\ \hspace*{3cm} of difference equations of Poincar\'e type (in Russian),\\ \hspace*{3cm} VINITI, Moscow, 2000, 2416 -- B00, 1 -- 22;\label{r:af}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N),$\\ \hspace*{3cm} Max-Planck-Institut f\"ur Mathematik,\\ \hspace*{3cm} Bonn, Preprint Series, 2000, 3, 1 -- 13; \label{r:ag}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the Irrationality of Some Quantities Containing $\zeta (3)$ (in Russian),\\ \hspace*{3cm} Uspekhi Mat. Nauk, 1979, 34, 3(207), 190; \label{r:di}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the Irrationality of Some Quantities
Containing $\zeta (3),$\\ \hspace*{3cm} Eleven papers translated from the Russian,\\ \hspace*{3cm} American Mathematical Society, 1988, 140, 45 - 56; \label{r:ah}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, Linear independence over $\mathbb Q$
of dilogarithms at rational points\\ \hspace*{3cm} (in Russian), UMN, 37 (1982), 179-180;\\ \hspace*{3cm}english transl. in Russ. Math. surveys 37 (1982), 176-177; \label{r:ai}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On a measure of the irrationality
of dilogarithms at rational points\\ \hspace*{3cm} (in Russian), VINITI, 1984, 4345-84, 1 -- 74; \label{r:bj}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, To the question of the smallness of some linear forms\\ \hspace*{3cm} (in Russian), VINITI, 1993, 2413-B93, 1 -- 94; \label{r:ba}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, About linear forms,\\ \hspace*{3cm} whose coefficients are logarithms\\ \hspace*{3cm} of algebraic numbers (in Russian),\\ \hspace*{3cm} VINITI, 1995, 135-B95, 1 -- 149; \label{r:bb}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, About systems of vectors, whose
coordinates\\ \hspace*{3cm} are linear combinations of logarithms of algebraic numbers\\ \hspace*{3cm} with algebraic coefficients (in Russian),\\ \hspace*{3cm} VINITI, 1994, 3122-B94, 1 -- 158; \label{r:bc}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the linear forms, whose\\ \hspace*{3cm} coefficients are $\mathbb A$ - linear combinations\\ \hspace*{3cm} of logarithms of $\mathbb A$ - numbers,\\ \hspace*{3cm} VINITI, 1996, 1617-B96, pp. 1 -- 23. \label{r:bc1}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On systems of linear forms, whose\\ \hspace*{3cm} coefficients are $\mathbb A$ - linear combinations\\ \hspace*{3cm} of logarithms of $\mathbb A$ - numbers,\\ \hspace*{3cm} VINITI, 1996, 2663-B96, pp. 1 -- 18. \label{r:bc2}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, About linear forms, whose coefficients\\ \hspace*{3cm} are $\mathbb Q$-proportional to the number $\log 2, $
and the values\\ \hspace*{3cm} of $\zeta (s)$ for integer $s$ (in Russian),\\ \hspace*{3cm} VINITI, 1996, 3258-B96, 1 -- 70; \label{r:bd}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, The lower estimate for some linear forms,\\ \hspace*{3cm} coefficients of which are proportional to the values\\ \hspace*{3cm} of $\zeta (s)$ for integer $s$ (in Russian),\\ \hspace*{3cm} VINITI, 1997, 3072-B97, 1 -- 77; \label{r:be}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt},
On linear forms with coefficients in ${\mathbb N} \zeta(1 + {\mathbb N}) $\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2000, 3, 1 -- 13;\label{r:cc0}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt},
On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N)$\\ \hspace*{3cm} (the detailed version,part 1),
Max-Plank-Institut f\"ur Mathematik,\\ \hspace*{3cm} Bonn, Preprint Series, 2001, 15, 1 -- 20; \label{r:bf}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N)$\\ \hspace*{3cm} (the detailed version,part 2),
Max-Plank-Institut f\"ur Mathematik,\\ \hspace*{3cm} Bonn, Preprint Series,2001, 104, 1 -- 36; \label{r:bg}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients in ${\mathbb N}\zeta(1+{\mathbb N})$\\ \hspace*{3cm} (the detailed version,part 3),
Max-Plank-Institut f\"ur Mathematik,\\ \hspace*{3cm} Bonn, Preprint Series, 2002, 57, 1 -- 33; \label{r:bh}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the rank over ${\mathbb Q}$ of some real matrices (in Russian),\\ \hspace*{3cm} VINITI, 1984, 5736-84; 1 -- 29;\label{r:eg}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the rank over ${{\mathbb Q}}$ of some real matrices,\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik,\\ \hspace*{3cm} Bonn, Preprint Series, 2002, 27, 1 -- 32;\label{r:eh}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+{\mathbb N})$\\ \hspace*{3cm} (the detailed version, part 4),
Max-Plank-Institut f\"ur Mathematik,\\ \hspace*{3cm} Bonn, Preprint Series, 2002, 142, 1 -- 27; \label{r:bi}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the dimension of some linear spaces\\ \hspace*{3cm} over finite extension of ${\mathbb Q}$ (part 2),\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn, Preprint Series,\\ \hspace*{3cm} 2002, 107, 1 -- 37; \label{r:cj}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the dimension of some linear spaces
over $\mathbb Q$ (part 3),\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2003, 16, 1 -- 45. \label{r:ca}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt},On the difference equation
of Poincar\'e type (Part 1).\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2003, 52, 1 -- 44. \label{r:cb}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the dimension of some linear
spaces over $\mathbb Q,$ (part 4)\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2003, 73, 1 -- 38. \label{r:cc}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N)$\\ \hspace*{3cm} (the detailed version, part 5),\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2003, 83, 1 -- 13. \label{r:cd0}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N)$\\ \hspace*{3cm} (the detailed version, part 6),\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2003, 99, 1 -- 33. \label{r:ce0}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt},On the difference equation of Poincar\'e type (Part 2).\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2003, 107, 1 -- 25. \label{r:cf0}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the asymptotic behavior of solutions\\ \hspace*{3cm} of difference equation (in English).\\ \hspace*{3cm} Chebyshevskij sbornik,
2003, v.4, issue 2, 142 -- 153. \label{r:cg0}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N),$\\ \hspace*{3cm} Bonner Mathematishe Schriften Nr. 360,\\ \hspace*{3cm} Bonn, 2003, 360. \label{r:ch0}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On the dimension of some linear
spaces over $\mathbb Q,$ (part 5)\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2004, 46, 1 -- 42. \label{r:ci1}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt},On the difference equation of Poincar\'e type (Part 3).\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2004, 9, 1 -- 33. \label{r:cf1}\\ \refstepcounter{r} \noindent [\ther] \rule{2cm}{.3pt}, On linear forms with coefficients
in ${\mathbb N}\zeta(1+\mathbb N)$\\ \hspace*{3cm} (the detailed version, part 7),\\ \hspace*{3cm} Max-Plank-Institut f\"ur Mathematik, Bonn,\\ \hspace*{3cm} Preprint Series, 2004, 88, 1 -- 27. \label{r:ci2}
\vskip 10pt
{\it E-mail:}{\sl\ gutnik$@@$gutnik.mccme.ru}
\end{document}
\begin{document}
\def\hbox{\rlap{$\sqcap$}$\sqcup$}{\hbox{\rlap{$\sqcap$}$\sqcup$}} \def\qed{\ifmmode\hbox{\rlap{$\sqcap$}$\sqcup$}\else{\unskip\nobreak\hfil \penalty50\hskip1em\null\nobreak\hfil\hbox{\rlap{$\sqcap$}$\sqcup$} \parfillskip=0pt\finalhyphendemerits=0\endgraf}\fi}
\def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}}
\def{\mathfrak H}{{\mathfrak H}} \def{\mathfrak R}{{\mathfrak R}} \def{\mathfrak M}{{\mathfrak M}}
\def \C {{\mathbb C}} \def \F {{\mathbb F}} \def \L {{\mathbb L}} \def \K {{\mathbb K}} \def \Q {{\mathbb Q}} \def \Z {{\mathbb Z}}
\def\\{\cr} \def\({\left(} \def\){\right)} \def\[{\left[} \def\right]{\right]} \def\fl#1{\left\lfloor#1\right\rfloor} \def\cl#1{\left\lceil#1\right\rceil}
\def \lcm{{\mathrm {lcm}}} \def \rad{{\mathrm {rad}}} \def \ord{{\mathrm {ord}}} \def \llog{{\mathrm{llog~}}} \def \Li{{\mathrm {Li}}} \def\mathbf{e}{\mathbf{e}} \def\e_m{\mathbf{e}_m} \def\e_\ell{\mathbf{e}_\ell} \def\mathrm{Res}{\mathrm{Res}}
\def\vec{a} \cdot \vec{x}{\vec{a} \cdot \vec{x}} \def\vec{a} \cdot \vec{y}{\vec{a} \cdot \vec{y}}
\def\qquad \text{and} \qquad{\qquad \text{and} \qquad} \renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\comm}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule
#1\par
\hrule}}
\title{Counting Irreducible Binomials over Finite Fields}
\author[R. Heyman] {Randell Heyman}
\address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \email{[email protected]}
\author[I. E. Shparlinski] {Igor E. Shparlinski}
\address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \email{[email protected]}
\begin{abstract} We consider various counting questions for irreducible binomials over finite fields. We use various results from analytic number theory to investigate these questions. \end{abstract}
\subjclass[2010]{11T06}
\keywords{Irreducible binomials; finite fields; primes in arithmetic progressions}
\maketitle
\section{Introduction}
\subsection{Background}
It is reasonably easy to obtain an asymptotic formula for the number of irreducible polynomials of a given degree over the finite field $\F_q$ of $q$ elements, see~\cite[Theorem~3.25]{LiNi}.
Studying irreducible polynomials with some prescribed coefficients is much more difficult, yet remarkable progress has also been achieved in this direction, see~\cite{Cohen, Hucz, Poll} and references therein.
Here we consider a special case of this problem and investigate some counting questions concerning irreducible binomials over the finite field $\F_q$ of $q$ elements. More precisely, for an integer $t$ and a prime power $q$,
let $N_t(q)$ be the number of irreducible binomials
over $\F_q$ of the form $X^t-a \in \F_q[X]$.
We use a well-known characterisation of irreducible binomials $X^t-a$ over $\F_q$ to count the total number of such binomials on average over $q$ or $t$. In fact, we consider several natural regimes, for example, when $t$ is fixed and $q$ varies, or when both vary in certain ranges $t \le T$ and $q\le Q$. There has long been active interest in binomials, see~\cite[Notes to Chapter~3]{LiNi} for a survey of classical results. Furthermore, irreducible binomials have been used in~\cite{Shoup} as building blocks for constructing other irreducible polynomials over finite fields, and in~\cite{BMGVBO} for characterising the irreducible factors of $x^n-1$ (see also~\cite{ABK, MaZi} and references therein for more recent applications). However, the natural question of investigating the behaviour of $N_t(q)$ has never been addressed in the literature.
Our methods rely on several classical and modern results of analytic number theory; in particular the distribution of primes in arithmetic progressions.
\subsection{Notation} \label{sec:not}
As usual, let $\omega(s)$, $\pi(s)$, $\varphi(s)$, $\Lambda(s)$ and $\zeta(s)$ denote the number of distinct prime factors of $s$, the number of prime numbers less than or equal to $s$, the Euler totient function, the von Mangoldt function and the Riemann zeta function evaluated at $s$, respectively.
For positive integers $Q$ and $s$ we denote the number of primes in arithmetic progression by
$$\pi(Q;s,a)=\sum_{\substack{p \le Q\\p \equiv a \pmod s}}1.$$ We also denote $$\psi(Q;s,a)=\sum_{\substack{p \le Q\\p \equiv a \pmod s}}\Lambda(p).$$ The letter $p$ always denotes a prime number whilst the letter $q$ always denotes a prime power.
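These counting functions are easy to tabulate for small parameters. The following Python snippet (our illustration, not part of the paper; the function names are ours) computes $\pi(Q;s,a)$ and $\psi(Q;s,a)$ by direct sieving; note that, as defined above, $\psi$ here sums $\Lambda(p)=\log p$ over primes only.

```python
import math

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p in range(2, n + 1) if sieve[p]]

def pi_progression(Q, s, a):
    """pi(Q; s, a): number of primes p <= Q with p = a (mod s)."""
    return sum(1 for p in primes_upto(Q) if p % s == a % s)

def psi_progression(Q, s, a):
    """psi(Q; s, a), with Lambda(p) = log p taken over primes only."""
    return sum(math.log(p) for p in primes_upto(Q) if p % s == a % s)

print(pi_progression(100, 4, 1))  # 11  (5, 13, 17, 29, 37, 41, 53, 61, 73, 89, 97)
print(pi_progression(100, 4, 3))  # 13
```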
We recall that the notation $f(x) = O(g(x))$ or $f(x) \ll g(x)$ is equivalent to the assertion that there exists a constant $c>0$ (which may depend on the real parameter $\varepsilon> 0$) such that $|f(x)|\le c|g(x)|$ for all $x$. The notation $f(x)=o(g(x))$ is equivalent to the assertion that $$\lim_{x \rightarrow \infty}\frac{f(x)}{g(x)}=0.$$ The notation $f(x) \sim g(x)$ is equivalent to the assertion that $$\lim_{x \rightarrow \infty} \frac{f(x)}{g(x)}=1.$$
We define $\log x$ as $\log x=\max\{\ln x, 2\}$ where $\ln x$ is the natural logarithm. Furthermore, for an integer $k\ge 2$, we define recursively $\log_k x=\log(\log_{k-1}x)$.
Finally, we use $\Sigma^\sharp$ to indicate that the summation is only over squarefree arguments in the range of summation.
\subsection{Main results}
We denote the radical of an integer $t\ne 0$, the largest square-free number that divides $t$, by $\rad(t)$. It is also convenient to define $$\rad_4(t)=\begin{cases}\rad(t)&\mbox{if } 4 \nmid t,\\ 2\rad(t)&\mbox{otherwise}. \end{cases}$$
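Both radicals are cheap to compute by trial-division factorisation; a minimal sketch in Python (the helper names are ours):

```python
def rad(t):
    """Largest squarefree divisor of t >= 1."""
    r, d = 1, 2
    while d * d <= t:
        if t % d == 0:
            r *= d
            while t % d == 0:
                t //= d
        d += 1
    return r * t if t > 1 else r   # any leftover t > 1 is a prime factor

def rad4(t):
    """rad_4(t): doubles the radical exactly when 4 divides t."""
    return rad(t) if t % 4 else 2 * rad(t)

print(rad(12), rad4(12), rad4(6), rad4(8))  # 6 12 6 4
```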
We start with an upper bound on the average value of $N_t(q)$ for a fixed $t$ averaged over $q\le Q$.
\begin{thm} \label{thm:UppBound q} For any fixed $\varepsilon>0$ uniformly over real $Q$ and positive integers $t$ with $\rad_4(t) \le Q^{1-\varepsilon}$, we have $$\sum_{q \le Q}N_t(q)\le (1 + o(1)) \frac{Q^2}{\rad_4(t)\log (Q/\rad_4(t))}$$ as $Q\to \infty$. \end{thm}
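The shape of this bound can be checked numerically. The sketch below (ours; it uses the closed formula for $N_t(q)$ established in Lemma~\ref{lem:ntq} below) sums $N_t(q)$ over all prime powers $q \le Q$ for $Q=10^5$ and $t=6$ and compares the result with the bound; for these parameters the sum comes out comfortably below the bound, consistent with the factor-of-two slack introduced by the Brun--Titchmarsh inequality.

```python
import math

def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p in range(2, n + 1) if sieve[p]]

def rad(t):
    r, d = 1, 2
    while d * d <= t:
        if t % d == 0:
            r *= d
            while t % d == 0:
                t //= d
        d += 1
    return r * t if t > 1 else r

def rad4(t):
    return rad(t) if t % 4 else 2 * rad(t)

def phi(t):
    return sum(1 for k in range(1, t + 1) if math.gcd(k, t) == 1)

def N(t, q):
    # closed formula of Lemma ntq (q a prime power)
    return phi(t) * (q - 1) // t if (q - 1) % rad4(t) == 0 else 0

Q, t = 10 ** 5, 6
prime_powers = []
for p in primes_upto(Q):
    q = p
    while q <= Q:
        prime_powers.append(q)
        q *= p

total = sum(N(t, q) for q in prime_powers)
bound = Q ** 2 / (rad4(t) * math.log(Q / rad4(t)))
print(total, round(bound), round(total / bound, 3))
```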
We also present the following lower bound (which has $\varphi(\rad(t))^2$ instead of the expected $\varphi(\rad(t))$).
\begin{thm} \label{thm:LowBound q} There exists an absolute constant $L > 0$ such that uniformly over
real $Q$ and positive integers $t$ with $Q \ge t^L$ we have $$\sum_{q \le Q}N_t(q)\gg \frac{Q^2}{\varphi(\rad(t))^2(\log Q)^2}.$$ \end{thm}
We also investigate $N_t(q)$ for a fixed $q$ averaged over $t\le T$.
\begin{thm} \label{thm:UppBound t} For any fixed positive $A$ and $\varepsilon$, any sufficiently large prime power $q$, and any real $T$ with $$ T \ge \(\log (q-1)\)^{(1+ \varepsilon) A \log_3 q/\log_4 q} $$ we have $$ \sum_{t \le T}N_t(q)\le (q-1)T/(\log T)^A. $$ \end{thm}
Finally, we obtain an asymptotic formula for the double average of $N_t(q)$ over $q\le Q$ and squarefree $t\le T$ in a rather wide range of parameters $Q$ and $T$. With more work similar results can also be obtained for the average value of $N_t(q)$ over all integers $t \le T$. However to exhibit the ideas and simplify the exposition, we limit ourselves to this special case, in particular we recall our notation $\Sigma^\sharp$ from Section~\ref{sec:not}.
\begin{thm} \label{thm:Asymp q t} For any fixed $\varepsilon> 0$ and any $$ T \le Q^{1/2}/(\log Q)^{5/2+\varepsilon} $$ we have $$ \sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\sum_{q \le Q} N_t(q) = (1+o(1))\frac{Q^2\log T}{2\zeta(2)\log Q}, $$ as $T\to \infty$. \end{thm}
It seems difficult to obtain the asymptotic formula of Theorem~\ref{thm:Asymp q t} for larger values of $T$ (even under the Generalised Riemann Hypothesis). However, here we show that a result of Mikawa~\cite{Mik} implies a lower bound of right order of magnitude for values of $T$ of order that may exceed $Q^{1/2}$.
\begin{thm} \label{thm:Lower q t} For any fixed $\beta < 17/32$ and $T \le Q^\beta$, we have $$ \sum_{T \le t \le 2T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\sum_{q \le Q} N_t(q) \gg \frac{Q^2}{\log Q}. $$ \end{thm}
We note that Theorem~\ref{thm:Lower q t} means that for a positive proportion of fields $\F_q$ with $q \le Q$ there is a positive proportion of irreducible binomials whose degrees do not exceed $Q^{\beta}$.
\section{Preparations}
\subsection{Characterisation of irreducible binomials} Let $\ord_q a$ denote the multiplicative order of $a \in \F_q^*$.
Our main tool is the following characterisation of irreducible binomials (see~\cite[Theorem~3.75]{LiNi}).
\begin{lem} \label{lem:Irr Bin} Let $t \ge 2$ be an integer and $a \in \F_q^*$. Then the binomial $x^t-a$ is irreducible in $\F_q[x]$ if and only if the following three conditions are satisfied: \begin{enumerate} \item $\rad(t) \mid \ord_q a$, \item $ \gcd\(t,(q-1)/\ord_q a\)=1$, \item if $4 \mid t$ then $q \equiv 1 \pmod 4$. \end{enumerate} \end{lem}
\begin{lem}\label{lem:ntq} Suppose that $q$ is a prime power. Then $$N_t(q)=\begin{cases} \displaystyle{\frac{\varphi(t)}{t}(q-1)}, &
\text{if } \rad_4(t)\mid (q-1) , \\
0,& \text{otherwise}. \end{cases}$$ \end{lem}
\begin{proof} We can assume that $\rad_4(t)\mid (q-1)$ (or equivalently $\rad(t)\mid (q-1)$ and if $4\mid t$ then $q \equiv 1 \pmod 4$), as in the opposite case the result follows immediately from Lemma~\ref{lem:Irr Bin}.
Furthermore, from Lemma~\ref{lem:Irr Bin} we see that $$N_t(q)=\sum_{\substack{a \in \F_q^*\\ \rad(t)\mid \ord_qa \\ \gcd(t,(q-1)/\ord_qa)=1}}1.$$ Since $\F_q^*$ is a cyclic group, there are $\varphi(\ord_qa)$ elements of $\F_q^*$ that have order equal to $\ord_qa$. Hence, we obtain $$ N_t(q)=\sum_{\substack{ j \mid (q-1) \\ \rad(t)\mid j \\ \gcd(t,(q-1)/j)=1}} \varphi(j). $$ We now write $q-1=RS$, where $R$ is the largest divisor of $q-1$ with $\gcd(R, \rad(t)) =1$ (thus all prime divisors of $S$ also divide $t$). Now, for every integer $j\mid (q-1)$ the conditions $\rad(t)\mid j$ and $\gcd(t,(q-1)/j)=1$ mean that $j=Sd$ for some $d\mid R$. Since $\gcd(S,R)=1$, we have $$ N_t(q)=\sum_{d\mid R} \varphi(Sd)=\varphi(S) \sum_{d\mid R} \varphi(d)=\varphi(S)R=\frac{\varphi(t)}{t}SR=\frac{\varphi(t)}{t}(q-1), $$ which concludes the proof. \end{proof}
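As a brute-force sanity check of Lemma~\ref{lem:ntq} (our code, not part of the paper), one can count irreducible binomials over prime fields $\F_p$ directly, testing irreducibility of $X^t-a$ by trial division by every monic polynomial of degree at most $t/2$, and compare with the closed formula.

```python
import math
from itertools import product

def rad(t):
    r, d = 1, 2
    while d * d <= t:
        if t % d == 0:
            r *= d
            while t % d == 0:
                t //= d
        d += 1
    return r * t if t > 1 else r

def rad4(t):
    return rad(t) if t % 4 else 2 * rad(t)

def phi(t):
    return sum(1 for k in range(1, t + 1) if math.gcd(k, t) == 1)

def divides(g, f, p):
    """True iff monic g divides f in F_p[X] (coefficients low degree first)."""
    f = f[:]
    while len(f) >= len(g):
        c, shift = f[-1], len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % p
        while f and f[-1] == 0:
            f.pop()
    return not f

def brute_N(t, p):
    """Count a in F_p^* with X^t - a irreducible, by exhaustive trial division."""
    count = 0
    for a in range(1, p):
        f = [(-a) % p] + [0] * (t - 1) + [1]          # X^t - a
        count += not any(
            divides(list(tail) + [1], f, p)
            for d in range(1, t // 2 + 1)
            for tail in product(range(p), repeat=d))
    return count

def formula_N(t, p):
    """N_t(p) according to Lemma ntq."""
    return phi(t) * (p - 1) // t if (p - 1) % rad4(t) == 0 else 0

for p in (3, 5, 7, 13):
    for t in range(2, 7):
        assert brute_N(t, p) == formula_N(t, p), (t, p)
print("agreement for all p in {3,5,7,13} and 2 <= t <= 6")
```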
\subsection{Analytic number theory background}
We recall a quantitative version of the Linnik theorem, see~\cite[Corollary~18.8]{IwKow}, which is slightly stronger than the form which is usually used.
\begin{lem}\label{lem:Linnik} There is an absolute constant $L$ such that if a positive integer $k$ is sufficiently large and $Q\ge k^L$, then uniformly over all integers $a$ with $\gcd(k,a)=1$ we have $$\psi(Q;k,a) \gg \frac{Q}{\varphi(k)\sqrt{k}}.$$ \end{lem}
On average over $k$ we have a much more precise result given by the {\it Bombieri--Vinogradov theorem\/} which we present in the form that follows from the work of Dress, Iwaniec, and Tenenbaum~\cite{DIT} combined with the method of Vaughan~\cite{Vau}:
\begin{lem}\label{lem:Bomb-Vin} For any $A>0$, $\alpha > 3/2$ and $T \le Q$ we have $$ \sum_{t \le T} \max_{\gcd(a,t)=1} \max_{R \le Q}
\left|\pi(R;t,a) - \frac{\pi(R)}{\varphi(t)}\right| \le Q (\log Q)^{-A} + Q^{1/2} T (\log Q)^{\alpha}. $$ \end{lem}
The following result follows immediately from much more general estimates of Mikawa~\cite[Bounds~(4) and~(5)]{Mik}.
\begin{lem} \label{lem:Mikawa} For any fixed $\beta < 17/32$, $u \le z^\beta$ and for all but $o(u)$ integers $k \in [u, 2u]$ we have $$ \pi(2z;k,1) - \pi(z;k,1) \gg \frac{z}{\varphi(k)\log z} . $$ \end{lem}
We also have a bound on the number $\rho_T(n)$ of integers $t\le T$ with $\rad(t) \mid n$, which is due to Grigoriev and Tenenbaum~\cite[Theorem~2.1]{GrigTen}. We note that~\cite[Theorem~2.1]{GrigTen} is formulated as a bound on the number of divisors $t \mid n$ with $t \le T$. However, a direct examination of the argument reveals that it actually provides an estimate for the above function $\rho_T(n)$. In fact, we present it in the simpler form given by~\cite[Corollary~2.3]{GrigTen}.
\begin{lem}\label{lem:SmallDiv} For any fixed positive $A$ and $\varepsilon$ and a sufficiently large positive integer $n$ and a real $T$ with $$ T \ge \(\log n\)^{(1+ \varepsilon) A \log_3 n/\log_4 n} $$ we have $\rho_T(n)\le T/(\log T)^A$. \end{lem}
\section{Proofs of Main Results}
\subsection{Proof of Theorem~\ref{thm:UppBound q}}
For the case where $4\nmid t$ we denote $s = \rad(t)$. Using Lemma~\ref{lem:ntq} we have \begin{equation} \label{eq q-1 q} \sum_{q \le Q}N_t(q)=\frac{\varphi(t)}{t}\sum_{\substack{q \le Q \\ s \mid (q-1)}}(q-1) = \frac{\varphi(t)}{t}\sum_{\substack{q \le Q\\ s\mid (q-1)}}q + O(Q/s). \end{equation}
So, with $$ \ell = \fl{\frac{\log Q}{\log 2}} \qquad \text{and} \qquad \lambda = 2\varepsilon^{-1}, $$ we have \begin{equation} \label{eq:prime pow} \sum_{\substack{q \le Q\\ s\mid (q-1)}}q = \sum_{\substack{p \le Q\\ s\mid (p-1)}}p + \sum_{2 \le r \le \ell} \sum_{\substack{p^r\le Q\\ s\mid (p^r-1)}}p^r. \end{equation} Using the Brun-Titchmarsh bound, see~\cite[Theorem~6.6]{IwKow} and partial summation we obtain \begin{equation} \label{eq:r=1} \sum_{\substack{p \le Q\\ s\mid (p-1)}}p \le (1+o(1)) \frac{Q^2}{\varphi(s) \log (Q/s)}, \end{equation} provided that $s/Q \to 0$.
We now estimate the contribution from other terms with $r \ge 2$.
The condition $s\mid p^r-1$ puts $p$ in at most $r^{\omega(s)}$ arithmetic progressions modulo $s$. Extending the summation to all integers $n \le Q^{1/r}$ in these progressions, we have $$ \sum_{\substack{p^r\le Q\\ s\mid (p^r-1)}} p^r \ll r^{\omega(s)} Q(Q^{1/r} s^{-1}+1). $$ We use this bound for $r \le \lambda$. Since $$ \omega(s) \ll \frac{\log s}{\log \log (s+2)}, $$ for $r \le \lambda$ we have $$ r^{\omega(s)} = \exp\( O\( \frac{\log s}{\log \log (s+2)}\)\). $$ The total contribution from all terms with $2 \le r \le \lambda$ is at most \begin{equation} \label{eq:small r} \begin{split} \sum_{2 \le r \le \lambda} \sum_{\substack{p^r\le Q\\ s\mid (p^r-1)}}p^r & \le Q(Q^{1/2} s^{-1} +1) \exp\( O\(\frac{\log s}{\log \log (s+2)}\)\) \\ & = Q^{1+o(1)}(Q^{1/2} s^{-1} +1). \end{split} \end{equation} For $\lambda \le r \le \ell$ we use the trivial bound \begin{equation} \label{eq:big r} \sum_{\lambda \le r \le \ell} \sum_{\substack{p^r\le Q\\ s\mid (p^r-1)}}p^r \le \ell Q^{1+1/\lambda}. \end{equation}
Combining~\eqref{eq:small r} and~\eqref{eq:big r} we see that \begin{equation} \label{eq:sum pr} \begin{split}
\sum_{2 \le r \le \ell} \sum_{\substack{p^r\le Q\\ s\mid (p^r-1)}}p^r &\ll Q^{3/2 + o(1)} s^{-1} + Q^{1+o(1)} + Q^{1+ \varepsilon/2} \log Q\\ &\ll Q^{3/2 + o(1)} s^{-1}, \end{split} \end{equation} provided that $s \le Q^{1-\varepsilon}$ and $Q \to \infty$. Recalling~\eqref{eq q-1 q}, \eqref{eq:prime pow} and~\eqref{eq:r=1} and that $$ \frac{\varphi(t)}{t \varphi(s)} = \frac{1}{s}, $$ we conclude the proof for the case where $4 \nmid t$.
In the event that $4 \mid t$ then, returning to~\eqref{eq q-1 q}, we have $$ \sum_{q \le Q}N_t(q)=\frac{\varphi(t)}{t}\sum_{\substack{q \le Q \\ s \mid (q-1)\\4 \mid (q-1)}}(q-1) =\frac{\varphi(t)}{t}\sum_{\substack{q \le Q \\ \lcm(4,\rad(t)) \mid (q-1)}}(q-1). $$ Since $\lcm(4,\rad(t))=2\rad(t)$, the proof now continues as before, replacing $s$ with $2s$.
\subsection{Proof of Theorem~\ref{thm:LowBound q}}
Combining~\eqref{eq q-1 q} and~\eqref{eq:prime pow}, we have \begin{equation} \begin{split} \label{eq sum p upper bound 2} \sum_{q \le Q} N_t(q) \ge \sum_{p\le Q} N_t(p) &=\frac{\varphi(t)}{t} \sum_{\substack{p \le Q\\\rad_4(t)\mid (p-1)}}(p-1) \\ &\ge \frac{\varphi(t)}{t} \sum_{\substack{p \le Q\\ 2s\mid (p-1)}}(p-1) , \end{split} \end{equation} where, as before, $s = \rad(t)$.
It immediately follows from Lemma~\ref{lem:Linnik} that $$ \pi(Q;2s,1)\gg \frac{Q}{\varphi(2s)\sqrt{2s} \log Q} \gg \frac{Q}{\varphi(s)\sqrt{s} \log Q}. $$ Thus $$ \sum_{\substack{p \le Q\\2s\mid (p-1)}}p \ge \sum_{k=1}^{\pi(Q;2s,1)}\(2ks+1\)\ge 2s\frac{\pi(Q;2s,1)^2}{2} \gg\frac{Q^2}{\varphi^2(s)(\log Q)^2}. $$
Combining this lower bound with~\eqref{eq sum p upper bound 2} completes the proof.
\subsection{Proof of Theorem~\ref{thm:UppBound t}}
Fix any positive $T$ and $q$. For $q-1\equiv 0 \pmod 4$ we have, using Lemma~\ref{lem:ntq}, \begin{equation}\label{eq q-1=0}
\sum_{t \le T}N_t(q)=(q-1)\sum_{\substack{t \le T \\\rad(t)|(q-1)}}\frac{\varphi(t)}{t}\le (q-1)\sum_{\substack{t \le T \\\rad(t)|(q-1)}}1. \end{equation} For $q-1 \not\equiv 0 \pmod 4$ we have, using Lemma~\ref{lem:ntq}, \begin{equation} \begin{split} \label{eq q-1not=0}
\sum_{t \le T}N_t(q)&=(q-1)\sum_{\substack{t \le T \\\rad(t)|(q-1)\\4 \nmid t}}\frac{\varphi(t)}{t}\le (q-1)\sum_{\substack{t \le T \\\rad(t)|(q-1)}}\frac{\varphi(t)}{t}\\& \le (q-1)\sum_{\substack{t \le T \\\rad(t)|(q-1)}}1. \end{split} \end{equation}
Combining~\eqref{eq q-1=0}, \eqref{eq q-1not=0} and Lemma~\ref{lem:SmallDiv} completes the proof.
\subsection{Proof of Theorem~\ref{thm:Asymp q t}}
Using~\eqref{eq q-1 q}, \eqref{eq:prime pow}
and~\eqref{eq:sum pr} we have
\begin{equation} \begin{split}
\label{double sum nmid} \sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\, \sum_{q\le Q} N_t(q) &= \sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\frac{\varphi(t)}{t} \sum_{\substack{p\le Q\\ t\mid(p-1)}}p+
O\(Q^{3/2+o(1)} \sum_{ t\le T} t^{-1} \) \\
&=\sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\frac{\varphi(t)}{t}
\sum_{\substack{p\le Q\\ t \mid(p-1)}}p + O\(Q^{3/2+o(1)} \), \end{split} \end{equation} as $T \le Q^{1/2}$.
Using partial summation we have \begin{equation} \label{eq Sum P1} \sum_{\substack{p\le Q\\ t\mid(p-1)}}p =(Kt+1)\pi(Kt+1;t,1)-t\sum_{1 \le k \le K}\pi(kt;t,1), \end{equation} where $K=\fl{(Q-1)/t}$.
We now write $$
{\mathcal E}(Q,t) = \max_{R\le Q} \left|\pi(R;t,1) - \frac{\pi(R)}{\varphi(t)}\right|. $$
With this notation we derive from~\eqref{eq Sum P1} that \begin{equation} \label{eq Sum P2}
\sum_{\substack{p\le Q\\t\mid(p-1)}}p =
\frac{Q \pi(Q)}{\varphi(t)} - \frac{t}{\varphi(t)} \sum_{1 \le k \le K} \pi(kt)+O\(tK{\mathcal E}(Q,t)\).
\end{equation} By the prime number theorem and~\cite[Corollary~5.29]{IwKow}, and noting that for $1 \le k \le K$ we have $kt\le Q$, we also conclude that \begin{equation*} \begin{split}
\sum_{1 \le k \le K} \pi(kt) & = t \sum_{1 \le k \le K} \frac{k}{\log (kt)} +O(Q^2(\log Q)^{-2})\\
& = t \sum_{K/(\log Q)^2 \le k \le K} \frac{k}{\log (kt)} +O(Q^2(\log Q)^{-2}). \end{split} \end{equation*}
Now, for $K/(\log Q)^2 \le k \le K$ we have $$
\frac{1}{\log (kt)} = \frac{1}{\log Q + O(\log \log Q)}
= \frac{1}{\log Q} + O\(\frac{\log \log Q}{(\log Q)^2}\). $$ Therefore \begin{equation*} \begin{split}
\sum_{1 \le k \le K} \pi(kt) & = \(\frac{1}{2} + o(1)\) \frac{t}{ \log Q} K^2
= \(\frac{1}{2} + o(1)\) \frac{Q^2}{t\log Q}. \end{split} \end{equation*}
Substituting this in~\eqref{eq Sum P2} and using $\pi(Q) \sim Q/\log Q$, we obtain $$ \sum_{\substack{p\le Q\\t\mid(p-1)}}p = \(\frac{1}{2} + o(1)\) \frac{Q^2}{\varphi(t) \log Q} + O\(Q{\mathcal E}(Q,t) \) . $$ Using this bound in~\eqref{double sum nmid} yields \begin{equation*} \begin{split}
\sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\sum_{q\le Q}N_t(q) &
= \(\frac{1}{2} + o(1)\) \frac{Q^2}{\log Q}
\sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\frac{1}{t} \\ & \qquad \qquad +
O\(Q^{3/2+o(1)}+Q \sum_{t\le T }{\mathcal E}(Q,t)\). \end{split} \end{equation*} By Lemma~\ref{lem:Bomb-Vin}, applied with $A=1+\varepsilon$ and $\alpha = 3/2 + \varepsilon/2$, we have $$ \sum_{t \le T} {\mathcal E}(Q,t) \ll Q (\log Q)^{-A} + Q^{1/2} T (\log Q)^{\alpha} \ll Q (\log Q)^{-1-\varepsilon/2}. $$ Hence \begin{equation} \label{eq: sumTQ} \sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\sum_{q\le Q}N_t(q)=
\(\frac{1}{2} + o(1)\)\frac{Q^2}{ \log Q}
\sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\frac{1}{t} +
O\(Q^2 (\log Q)^{-1-\varepsilon/2}\). \end{equation} A simple inclusion-exclusion argument leads to the asymptotic formula \begin{equation} \label{eq: harmonic sf} \sum_{t \le T}\hskip-18 pt{\phantom{\sum}}^\sharp\, \frac{1}{t}= \(\frac{1}{\zeta(2)} + o(1)\)\log T, \end{equation} see~\cite{Sur} for a much more precise result. Substituting~\eqref{eq: harmonic sf} into~\eqref{eq: sumTQ} completes the proof.
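The asymptotics in \eqref{eq: harmonic sf} can be observed numerically, although convergence is slow: the squarefree harmonic sum has a secondary term of constant order, so even at $T=10^6$ the ratio to the main term $\log T/\zeta(2)$ is still noticeably above $1$. A quick check (our code, purely illustrative):

```python
import math

T = 10 ** 6

# Sieve out multiples of d^2 to mark non-squarefree integers.
squarefree = [True] * (T + 1)
d = 2
while d * d <= T:
    for m in range(d * d, T + 1, d * d):
        squarefree[m] = False
    d += 1

total = sum(1.0 / t for t in range(1, T + 1) if squarefree[t])
main = math.log(T) / (math.pi ** 2 / 6)      # log T / zeta(2)
print(total, main, total / main)
```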
\subsection{Proof of Theorem~\ref{thm:Lower q t}}
We proceed as in the proof of Theorem~\ref{thm:Asymp q t} but instead of~\eqref{double sum nmid} we write \begin{equation*} \begin{split} \sum_{T \le t \le 2T}\hskip-18 pt{\phantom{\sum}}^\sharp\, \sum_{q\le Q} N_t(q) & \ge \sum_{T \le t \le 2T}\hskip-18 pt{\phantom{\sum}}^\sharp\, \sum_{Q/2 \le p\le Q} N_t(p) = \sum_{T \le t \le 2T}\hskip-18 pt{\phantom{\sum}}^\sharp\, \frac{\varphi(t)}{t} \sum_{\substack{Q/2 \le p\le Q \\ t\mid(p-1)}}p\\
&\gg Q \sum_{T \le t \le 2T}\hskip-18 pt{\phantom{\sum}}^\sharp\,\frac{\varphi(t)}{t}
\(\pi(Q;t,1) - \pi(Q/2;t,1) \).
\end{split} \end{equation*} Using Lemma~\ref{lem:Mikawa} we easily conclude the proof.
\section*{Acknowledgment}
This work was supported in part by ARC grant~DP140100118.
\end{document}
\begin{document}
\title{MC-finiteness of restricted set partition functions}
\author{Y. Filmus} \address{Faculty of Computer Science Technion-Israel Institute of Technology, Haifa Israel} \email{[email protected]}
\author{E. Fischer} \address{Faculty of Computer Science Technion-Israel Institute of Technology, Haifa Israel} \email{[email protected]}
\author{J.A. Makowsky} \address{Faculty of Computer Science Technion-Israel Institute of Technology, Haifa Israel} \email{[email protected]}
\author{V. Rakita} \address{Faculty of Mathematics Technion-Israel Institute of Technology, Haifa Israel} \email{[email protected]}
\keywords{Set partitions, r-Bell numbers, congruences, Specker-Blatter Theorem}
\newcommand{\angl}[1]{\left\langle #1 \right\rangle} \newcommand{\card}[3]{card_{\mathcal{#1},\overline{#2}}(#3(\overline{#2}))} \newif\ifmargin
\marginfalse \newif\ifshort \shorttrue \newif\ifskip \skiptrue \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Z}}{\mathbb{Z}}
\newtheorem{theorem}{Theorem} \newtheorem{proposition}[theorem]{\bf Proposition} \newtheorem{examples}[theorem]{\bf Examples} \newtheorem{example}[theorem]{\bf Example} \newtheorem{problem}{\bf Problem} \newtheorem{remark}[theorem]{\bf Remark} \newtheorem{remarks}[theorem]{\bf Remarks} \newtheorem{definition}{Definition} \newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lesson}{Lesson}
\newtheorem{defi}{Definition}[section]
\newtheorem{conjecture}{Conjecture}
\newtheorem{ex}{Example}[section]
\newtheorem{lemma}[theorem]{Lemma} \newtheorem{coro}[theorem]{Corollary} \newtheorem{conj}[theorem]{Conjecture} \newtheorem{cons}[theorem]{Consequence} \newtheorem{obs}[theorem]{Observation} \newtheorem{claim}[theorem]{Claim} \newtheorem{fact}[theorem]{Fact} \newtheorem{oproblem}[theorem]{Problem}
\newenvironment{renumerate}{\begin{enumerate}}{\end{enumerate}} \renewcommand{\roman{enumi}}{\roman{enumi}} \renewcommand{(\roman{enumi})}{(\roman{enumi})} \renewcommand{(\roman{enumi}.\alph{enumii})}{(\roman{enumi}.\alph{enumii})}
\renewcommand{\widetilde}{\widetilde} \renewcommand{\overline}{\overline}
\newcommand{\mathrm{D}}{\mathrm{D}} \newcommand{\mathrm{WFF}}{\mathrm{WFF}} \newcommand{\mathrm{SOL}}{\mathrm{SOL}} \newcommand{\mathrm{FOL}}{\mathrm{FOL}} \newcommand{\mathrm{MSOL}}{\mathrm{MSOL}} \newcommand{\mathrm{CMSOL}}{\mathrm{CMSOL}} \newcommand{\mathrm{CFOL}}{\mathrm{CFOL}} \newcommand{\mathrm{IFPL}}{\mathrm{IFPL}} \newcommand{\mathrm{FPL}}{\mathrm{FPL}} \newcommand{\mbox{\bf SEN}}{\mbox{\bf SEN}} \newcommand{\mbox{\bf WFTF}}{\mbox{\bf WFTF}} \newcommand{\mbox{\bf TFOF}}{\mbox{\bf TFOF}} \newcommand{\mbox{\bf TFOL}}{\mbox{\bf TFOL}} \newcommand{\mbox{\bf FOF}}{\mbox{\bf FOF}} \newcommand{\mbox{\bf NNF}}{\mbox{\bf NNF}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{\mbox{\bf HF}}{\mbox{\bf HF}} \newcommand{\mbox{\bf CNF}}{\mbox{\bf CNF}} \newcommand{\mbox{\bf PNF}}{\mbox{\bf PNF}} \newcommand{\mbox{\bf QF}}{\mbox{\bf QF}} \newcommand{\mbox{\bf DNF}}{\mbox{\bf DNF}} \newcommand{\mbox{\bf DISJ}}{\mbox{\bf DISJ}} \newcommand{\mbox{\bf CONJ}}{\mbox{\bf CONJ}} \newcommand{\mbox{Ass}}{\mbox{Ass}} \newcommand{\mbox{Var}}{\mbox{Var}} \newcommand{\mbox{Support}}{\mbox{Support}} \newcommand{\mbox{\bf Var}}{\mbox{\bf Var}} \newcommand{{\mathfrak A}}{{\mathfrak A}} \newcommand{{\mathfrak B}}{{\mathfrak B}} \newcommand{{\mathfrak N}}{{\mathfrak N}} \newcommand{{\mathfrak Z}}{{\mathfrak Z}} \newcommand{{\mathfrak Q}}{{\mathfrak Q}} \newcommand{{\mathfrak A}}{{\mathfrak A}} \newcommand{{\mathfrak B}}{{\mathfrak B}} \newcommand{{\mathfrak C}}{{\mathfrak C}} \newcommand{{\mathfrak G}}{{\mathfrak G}} \newcommand{{\mathfrak W}}{{\mathfrak W}} \newcommand{{\mathfrak R}}{{\mathfrak R}} \newcommand{{\mathfrak N}}{{\mathfrak N}} \newcommand{{\mathfrak Z}}{{\mathfrak Z}} \newcommand{{\mathfrak Q}}{{\mathfrak Q}} \newcommand{{\mathbf F}}{{\mathbf F}} \newcommand{{\mathbf T}}{{\mathbf T}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathbb C}}{{\mathbb C}} 
\newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{{\mathbf P}}{{\mathbf P}} \newcommand{{\mathbf{PH}}}{{\mathbf{PH}}} \newcommand{{\mathbf{NP}}}{{\mathbf{NP}}} \newcommand{\mbox{MT}}{\mbox{MT}} \newcommand{\mbox{TT}}{\mbox{TT}} \newcommand{\mathcal{L}}{\mathcal{L}}
\begin{abstract}
A sequence $s(n)$ of integers is MC-finite if for every $m \in {\mathbb N}$ the sequence $s^m(n) = s(n) \bmod{m}$ is ultimately periodic. We discuss various ways of proving and disproving MC-finiteness. Our examples are mostly taken from set partition functions, but our methods can be applied to many more integer sequences.
\end{abstract} \maketitle
\sloppy
\footnotesize \today \tableofcontents \normalsize
\section{Introduction} \label{se:intro} \ifmargin \marginpar{s-intro} \else\fi
\subsection{Goal of this paper} Given a sequence of integers $s(n)$ with some combinatorial interpretation, one wonders what can be said about the sequence $s(n)$. Ideally, we would like to have an explicit formula for $s(n)$, or some recurrence relation with coefficients that are constant or polynomial in $n$. Second best is an asymptotic description of $s(n)$.
We could instead look at the sequence $s^m(n) \equiv s(n) \bmod{m}$ and try to describe $s^m(n)$. If for every modulus $m$ the sequence $s^m(n)$ is ultimately periodic, we say that $s(n)$ is {\em MC-finite}. We consider MC-finiteness a legitimate topic in the study of integer sequences. MC-finiteness appears under this name only since the publication of \cite{makowsky2010application} in 2010. The concept itself appeared in the literature earlier, but rarely, e.g., under the name of {\em supercongruence} \cite{banderier2017right,banderier2019period}. The four substantial monographs on integer sequences published after 2000 do not mention the concept at all, see \cite{everest2003recurrence,mansour2012combinatorics,mezo2019combinatorics}.
All the sequences we discuss in this paper appear in {\em The On-Line Encyclopedia of Integer Sequences, OEIS, https://oeis.org/}, \cite{oeis}, with a number starting with $A$. We give these numbers with the first mention of the sequence, and list them also at the end of the paper. Needless to say, our methods also apply to many other entries in OEIS.
This paper grew out of our attempts to show that the sequence $B_r(n)$
of restricted Bell numbers (only listed in OEIS for $r=2, A005493$ and $r=3, A005494$) and $S_r(n,k)$ of restricted Stirling numbers of the second kind $A143494-A143496$ introduced in \cite{broder1984r} are MC-finite.
The purpose of this paper is two-fold. Its first part is mostly expository, written with the intent to popularize the study of MC-finiteness among researchers in integer sequences. However, to the best of our knowledge, the MC-finiteness of the examples chosen has not been stated before in the literature. We have chosen our examples in order to familiarize the reader with the two general methods for establishing MC-finiteness. The first uses {\em logical methods}, pioneered by C. Blatter and E. Specker, \cite{blatter1981nombre,specker1990application,pr:BlatterSpecker84}, and further developed by two of the authors of this paper (EF and JAM), \cite{pr:FischerMakowsky03,ar:FischerMakowsky2022}.
The second is a {\em combinatorial method} to prove MC-finiteness, also first suggested by E. Specker in \cite{specker1990application}, and later independently by G. S\'enizergues \cite{senizergues2007sequences}, but only made precise in \cite{cadilhac2021polynomial}. This method is based on the existence of finitely many mutual polynomial recurrence relations over ${\mathbb Z}$ used to define the integer sequence. In a separate paper, these methods are applied to infinitely many integer sequences arising from finite topologies \cite{topologies}.
In this paper we investigate MC-finiteness, and counterexamples thereof, for integer sequences derived from counting various unrestricted and restricted set partitions and transitive relations. Among the unrestricted cases we look at the Bell numbers $B(n)$, $A000110$, and the Stirling numbers of the second kind $S(n, k_0)$, $A000453$. We also discuss the number of linear quasi-orders (linear pre-orders) $LQ(n)$, $A000670$, the number of quasi-orders (pre-orders) $Q(n)$, $A000798$, the number of partial orders $P(n)$, $A001035$, and the number of transitive relations $T(n)$, $A006905$, on the set $[n]$. The numbers $LQ(n)$ are called {\em ordered Bell numbers} or {\em Fubini numbers}, often denoted in the literature by $a(n)$ and also by $F(n)$. For the unrestricted cases the results are seemingly new, or at least have not been stated before, but are simple consequences of growth arguments and the logical method due to C. Blatter and E. Specker \cite{pr:BlatterSpecker84,specker1990application}, the {\em Specker-Blatter Theorem}.
Typical restricted cases, first introduced by A. Broder \cite{broder1984r} and further studied in \cite{benyi2019restricted}, are the Stirling numbers of the second kind $S_{A,r}(n,k)$, which count the partitions of $[n+r]$ into $k+r$ blocks such that the elements $i \leq r$ are all in different blocks and the size of each block is in $A \subseteq \mathbb{N}$. For $r=2$ see $A143494$. The Bell numbers $B_{A,r}(n)$ are defined as $\sum_k S_{A,r}(n,k)$, see $A005493$ for $r=2$ and $A005494$ for $r=3$. The same restrictions can also be imposed on Stirling numbers of the second kind $S_{A,r}(n,k)$, and on all the unrestricted cases above. For the restricted cases, the results are new and require non-trivial extensions of the Specker-Blatter Theorem. The Catalan numbers $A000108$ also have an interpretation as set partitions. They count the number of non-overlapping partitions, see \cite[Theorem 9.4]{roman2015introduction} and \cite[Chapter 10]{koshy2008catalan}. Although this can be viewed as a restricted version of the Bell numbers, our results do not apply to this case, as we shall explain later.
\subsection{Outline of the paper} \ifmargin \marginpar{s-outline} \else\fi
In Section \ref{se:mcfinite} we introduce C-finiteness and its modular variant MC-finiteness. In Section \ref{se:howtoprove} we discuss the methods for proving and disproving C-finiteness and MC-finiteness, and in Section \ref{se:immediate} we present immediate consequences of the logical method for set partitions without positional restrictions and without restrictions on the size of the blocks. These three sections are tutorial in character, although the MC-finiteness of the examples has not been stated before in the literature. In Sections \ref{se:restricted} and \ref{se:rproofs} we discuss set partitions with positional restrictions and restrictions on the size of the blocks, and how new logical tools are used to obtain C-finiteness and MC-finiteness in these cases. We conclude the main part of the paper with Section \ref{se:conclu}, where we present our conclusions and suggestions for further research, and in Section \ref{se:oeis} we list the numbers of the discussed OEIS-sequences. There are four appendices. In Appendix \ref{ap:mc-finite} we discuss larger classes of polynomial recursive sequences and weaker versions of MC-finiteness. In Appendix \ref{se:constants} we prove a special case of the main theorem from \cite{ar:FischerMakowsky2022} which suffices for our results in Section \ref{se:rproofs}. In Appendix \ref{se:c-finite} we give the details for proving C-finiteness of restricted Stirling numbers of the second kind. Finally, in Appendix \ref{se:explicit}, we give an explicit computation of $S_A(n,k)$.
\ifskip\else
In Section \ref{se:periodic} we first deal with the applications of the extension of the Specker-Blatter Theorem to $\mathrm{CMSOL}$, Monadic Second Order Logic with modular counting. This allows us to show MC-finiteness for $B_{A,0}(n)$ and its variations for $A \subseteq {\mathbb N}$ ultimately periodic and without hard-wired constants.
In Section \ref{se:growth} we list some properties of restricted Stirling and Lah numbers and give lower bounds for their growth. This shows that $B_{A,r}(n)$ and the corresponding Lah-numbers are not C-finite provided that $A$ is infinite and ultimately periodic. Here a fixed finite number of hard-wired constants is allowed.
In Section \ref{se:fm} we give the background needed to prove C-finiteness using model theoretic methods as described in \cite{fischer2008linear}. This establishes C-finiteness for the restricted Stirling numbers of the second kind.
In Section \ref{se:hard} we discuss hard-wired constants and state a new extension of the Specker-Blatter Theorem from \cite{ar:FischerMakowsky2022}, which shows how to handle the case where a fixed finite set of hard-wired constants is allowed. The special case for Bell, Stirling and Lah numbers is proved in Section \ref{se:constants}.
Finally, in Section \ref{se:conclu}, we draw conclusions and suggest further lines of research concerning the complexity of computing $S_{\phi}(n)$ and $S^m_{\phi}(n)$, Problems \ref{problem-1}, \ref{problem-2} and \ref{problem-3}.
There are two appendices. In Appendix \ref{se:c-finite} we give a detailed proof of Theorem \ref{th:FM} and its applications. In Appendix \ref{se:explicit} we give an explicit method for computing $S_A(n,k_0)$ for fixed $k_0$ and computable $A$. \fi
\section{C-finite and MC-finite sequences of integers} \label{se:mcfinite} A sequence of integers $s(n)$ is {\em C-finite}\footnote{ These are also called constant-recursive sequences or linear-recursive sequences in the literature. } if there are constants $p, q \in {\mathbb N}$ and $c_i \in {\mathbb Z}, 0 \leq i \leq p-1$ such that for all $n \geq q$ the linear recurrence relation $$ s(n+p) = \sum_{i=0}^{p-1} c_i s(n+i), \quad n \geq q, $$ holds for $s(n)$. C-finite sequences have limited growth, see e.g. \cite{everest2003recurrence,kauers2011concrete}: \begin{proposition} \label{prop:c-finite} Let $s(n)$ be a C-finite sequence of integers. Then there is $c \in {\mathbb N}^+$ such that for all $n \in {\mathbb N}$, $|s(n)| \leq 2^{cn}$. \end{proposition} Actually, a lot more can be said, see \cite{flajolet2009analytic}, but we do not need it for our purposes.
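As a computational illustration of the definition and the growth bound (a minimal sketch in Python; the helper name \texttt{fib} is ours), one can verify both for the Fibonacci numbers, which are C-finite with $p=2$, $q=0$, $c_0=c_1=1$, and which satisfy the bound with $c=1$:

```python
# Illustrative sketch: the Fibonacci numbers are C-finite with p = 2,
# q = 0 and c_0 = c_1 = 1; they also satisfy the growth bound 2^(c*n)
# with c = 1.

def fib(n_max):
    """Return the list F(0), ..., F(n_max) with F(0) = 0, F(1) = 1."""
    s = [0, 1]
    while len(s) <= n_max:
        s.append(s[-1] + s[-2])
    return s

s = fib(200)
# the linear recurrence s(n+2) = s(n+1) + s(n) holds for all n
assert all(s[n + 2] == s[n + 1] + s[n] for n in range(199))
# the growth bound s(n) <= 2^n holds for all n
assert all(s[n] <= 2 ** n for n in range(201))
```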
To prove that a sequence $s(n)$ of integers is not C-finite, we can use Proposition \ref{prop:c-finite}. To prove that a sequence $s(n)$ of integers is C-finite, there are several methods: One can try to find an explicit recurrence relation, one can exhibit a rational generating function, or one can use a method based on model theory as described in \cite{fischer2008linear,fischer2011application}. The last method will be briefly discussed in Section \ref{se:fm} and further explained in Appendix \ref{se:c-finite}. It is referred to as method FM.
A sequence of integers $s(n)$ is {\em modular C-finite}, abbreviated as {\em MC-finite}, if for every $m \in {\mathbb N}$ there are constants $p_m, q_m \in {\mathbb N}^+$ and $c_{i,m} \in {\mathbb Z}$ such that for every $n \geq q_m$ the linear recurrence relation $$ s(n+p_m) \equiv \sum_{i=0}^{p_m-1} c_{i,m} s(n+i) \bmod{m} $$ holds.
Note that the coefficients $c_{i,m}$ and both $p_m$ and $q_m$ generally do depend on $m$.
We denote by $s^m(n)$ the sequence $s(n) \bmod{m}$. \begin{proposition} The sequence $s(n)$ is MC-finite iff $s^m(n)$ is ultimately periodic for every $m$. \end{proposition} \begin{proof} MC-finiteness implies ultimate periodicity: modulo $m$ there are only finitely many $p_m$-tuples of residues, so the recurrence forces $s^m(n)$ to eventually cycle. The converse is from \cite{reeds1985shift}. \end{proof}
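The equivalence can be explored computationally. The following sketch (Python; both helper names are ours) finds the minimal eventual period of a finite prefix of $s^m(n)$ by brute force, here for the Fibonacci numbers modulo $10$, whose well-known (Pisano) period is $60$:

```python
def ultimate_period(seq_mod):
    """Smallest p (with some threshold q) such that
    seq_mod[n + p] == seq_mod[n] for all n >= q; brute force search."""
    N = len(seq_mod)
    for p in range(1, N // 2):
        for q in range(N // 2):
            if all(seq_mod[n + p] == seq_mod[n] for n in range(q, N - p)):
                return q, p
    return None

def fib_mod(n_max, m):
    """Fibonacci numbers F(0..n_max) reduced mod m."""
    s = [0 % m, 1 % m]
    while len(s) <= n_max:
        s.append((s[-1] + s[-2]) % m)
    return s

# Fibonacci mod 10 is purely periodic with (Pisano) period 60:
assert ultimate_period(fib_mod(300, 10)) == (0, 60)
```

The search only certifies periodicity of the computed prefix, of course; it is a numerical sanity check, not a proof.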
Clearly, if a sequence $s(n)$ is C-finite it is also MC-finite with $p_m=p$, $q_m=q$ and $c_{i,m}=c_i$ for all $m$. The converse is not true: there are uncountably many MC-finite sequences, but only countably many C-finite sequences with integer coefficients, see Proposition \ref{pr:many} below.
\begin{examples}\ \label{ex:mc} \begin{enumerate}[(i)] \item The Fibonacci sequence is C-finite. \item If $s(n)$ is C-finite it has at most simple exponential growth, by Proposition \ref{prop:c-finite}. \item The Bell numbers $B(n)$ are {\em not C-finite}, but are {\em MC-finite}. \item Let $f(n)$ be any integer sequence. The sequence $s_1(n)=2\cdot f(n)$ is ultimately periodic modulo $2$, but not necessarily MC-finite. \item Let $g(n)$ be any integer sequence.
The sequence $s_2(n) = n!\cdot g(n)$ is MC-finite. \label{many-mc}
\item The sequence $s_3(n)= \frac{1}{2} {2n \choose n}$ is not MC-finite: $s_3(n)$ is odd iff $n$ is a power of $2$, and otherwise it is even (Lucas, 1878). A proof may be found in \cite[Exercise 5.61]{graham1989concrete} or in \cite{specker1990application}. \item The Catalan numbers $C(n) = \frac{1}{n+1}{2n \choose n}$ are not MC-finite, since $C(n)$ is odd iff $n$ is a Mersenne number, i.e., $n = 2^m-1$ for some $m$, see \cite[Chapter 13]{koshy2008catalan}.
\item \label{many-nonmc} Let $p$ be a prime and $f(n)$ monotone increasing. The sequence $$ s(n) = \begin{cases} p^{f(n)} & n \neq p^{f(n)} \\ p^{f(n)}+1 & n = p^{f(n)} \end{cases} $$ is monotone increasing but not ultimately periodic modulo $p$, hence not MC-finite.
\end{enumerate} \end{examples}
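The parity statements in Examples (vi) and (vii) are easy to confirm numerically for small $n$; the following sketch (Python; the helper name is ours) checks both:

```python
from math import comb

def is_power_of_two(n):
    """True iff n is a positive power of 2 (including 2^0 = 1)."""
    return n > 0 and n & (n - 1) == 0

# s3(n) = C(2n,n)/2 is odd iff n is a power of 2 (Example (vi));
# the Catalan number C(n) is odd iff n+1 is a power of 2 (Example (vii)).
for n in range(1, 300):
    s3 = comb(2 * n, n) // 2
    assert (s3 % 2 == 1) == is_power_of_two(n)
    catalan = comb(2 * n, n) // (n + 1)
    assert (catalan % 2 == 1) == is_power_of_two(n + 1)
```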
\begin{proposition} \label{pr:many} \begin{enumerate}[(i)] \item There are uncountably many monotone increasing sequences which are MC-finite, and uncountably many which are not MC-finite. \item Almost all integer sequences are not MC-finite. \end{enumerate} \end{proposition} \begin{proof} (i) follows from Examples \ref{ex:mc} (\ref{many-mc}) and (\ref{many-nonmc}). (ii) is shown in Proposition \ref{pr:normal} in Appendix \ref{se:ProofMC}. \end{proof}
Although we are mostly interested in MC-finite sequences $s(n)$, it is natural to check in each example whether the sequence $s(n)$ is also C-finite. In most examples the answer is negative. However, Theorem \ref{th:c-finite} shows that the restricted Stirling numbers of the second kind are all C-finite. We show this via a general method, Theorem \ref{th:FM}, without exhibiting a generating function as in the classical case for $S(n,k)$.
\section{How to prove and disprove MC-finiteness} \label{se:howtoprove}
\subsection{Polynomial recurrence relations}
In his 1988 paper \cite[Page 144]{specker1990application}, E. Specker notes the following:
\begin{quote} In many known cases, [MC-finiteness] is a consequence of polynomial recurrence relations $$f(n) = \sum_{i=1}^d P_i(n) f(n-i)$$ where $P_i$ are polynomials in ${\mathbb Z}[x]$. \end{quote} For $f(n) = n!$ this is obvious.
\begin{definition} \begin{enumerate}[(i)] \item An integer sequence $s(n)$ is {\em holonomic over ${\mathbb Z}$} if there exist polynomials $P_i \in {\mathbb Z}[x]$ with $P_1, P_k \neq 0$ such that $$ s(n) = \sum_{i=1}^k P_i(n)s(n-i) $$ \item An integer sequence $s(n)$ is {\em polynomially recursive (PRS) over ${\mathbb Z}$} if there exist $k \in {\mathbb N}$ integer sequences $s_i(n), 1 \leq i \leq k$ with $s(n) = s_1(n)$ and polynomials $P_i \in {\mathbb Z}[x_1, \ldots , x_k]$ such that the following mutual recursion holds: $$ s_i(n+1) = P_i(s_1(n), \ldots , s_k(n)), i = 1, \ldots k $$ \item An integer sequence $s(n)$ is {\em PRS over ${\mathbb Z}$ and $n$} if the polynomials also involve $n$ as an additional variable. In other words $P_i \in {\mathbb Z}[x_1, \ldots , x_k, y]$ and $$ s_i(n+1) = P_i(s_1(n), \ldots , s_k(n), n), i = 1, \ldots k $$ \end{enumerate} Actually, (ii) and (iii) are equivalent. \end{definition}
We note that, if $s(n)$ is an integer sequence which is holonomic over ${\mathbb Z}$, then $s(n)$ is polynomially recursive over ${\mathbb Z}$ and $n$.
In fact, the following is true: \begin{theorem} \label{th:MC} If $s(n)$ is an integer sequence which is polynomially recursive over ${\mathbb Z}$ and $n$ then $s(n)$ is MC-finite. In particular, this is true also for integer sequences $s(n)$ holonomic over ${\mathbb Z}$. \end{theorem} The proof is given in Appendix \ref{se:ProofMC}. There we also briefly discuss weaker properties than MC-finiteness, where the modular recurrence holds only for almost all $m \in {\mathbb N}^+$.
\begin{remarks} \begin{enumerate}[(i)] \item In general, holonomic sequences are defined over fields $\mathbb{F}$ rather than the ring ${\mathbb Z}$. A good reference is \cite[Chapter 7]{kauers2011concrete}. A theorem related to Theorem \ref{th:MC} for holonomic sequences can be found in \cite[Theorem 7]{banderier2017right}, see also \cite{banderier2019period}. \item In \cite{cadilhac2021polynomial}, polynomially recursive sequences are defined for rational numbers rather than integers, and the polynomials are in ${\mathbb Q}[x_1, \ldots , x_k]$. \end{enumerate} \end{remarks}
The following examples, besides (v), are from \cite{cadilhac2021polynomial}. \begin{examples} \begin{enumerate}[(i)] \item The sequence $a(n) = n!$ with $a(n) = n\cdot a(n-1)$ and $a(0)=1$ is holonomic over ${\mathbb Z}$. It is obviously MC-finite. \item The sequence $a(n)= 2^{2^n}$ is polynomially recursive with $a(0)=2$ and $a(n) = a(n-1)^2$. It is not holonomic, since every holonomic sequence $a(n)$ is bounded by $2^{p(n)}$ for some polynomial $p(n)$, see \cite{gerhold2004some}. It is easy to see directly that it is MC-finite. \item The Catalan numbers $C_n$ are holonomic over ${\mathbb Q}$: $ (n+2)C_{n+1} = (4n+2)C_n $. They are not holonomic over ${\mathbb Z}$, since they are not MC-finite.
Furthermore, they are not polynomially recursive even if we allow rational numbers. \item The sequence $n^n$ is not polynomially recursive, but it is MC-finite by the Specker-Blatter Theorem below. \item We show in Appendix \ref{ap:mc-finite} that the sequence $A086714$ given by $a(0)=4, a(n+1) = {a(n) \choose 2}$ is not MC-finite but periodic modulo every odd number. \end{enumerate} \end{examples}
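The proof idea behind Theorem \ref{th:MC} can be seen in miniature for example (ii): reducing the defining polynomial recurrence modulo $m$ leaves a state that can take only finitely many values, so the reduced sequence must eventually cycle. A sketch (Python; the function name is ours):

```python
def double_exp_mod(n_max, m):
    """a(n) = 2^(2^n) mod m, computed through the PRS a(n+1) = a(n)^2."""
    vals, a = [], 2 % m
    for _ in range(n_max + 1):
        vals.append(a)
        a = a * a % m  # the defining recurrence, reduced mod m
    return vals

# mod 7 the values alternate 2, 4, 2, 4, ... -- ultimately periodic,
# as expected for a PRS over Z:
assert double_exp_mod(9, 7) == [2, 4, 2, 4, 2, 4, 2, 4, 2, 4]
```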
MC-finite sequences are closed under various arithmetic operations.
\begin{proposition} \label{mc-closure} Let $a(n), b(n)$ be MC-finite sequences and $c \in {\mathbb Z}$. \begin{enumerate}[(i)] \item Then $c\cdot a(n), a(n) + b(n), a(n) \cdot b(n)$ are MC-finite. \item If additionally, $b(n) \in {\mathbb N}^+$ and tends to infinity, $a(n)^{b(n)}$ is also MC-finite. \item Let $A \subseteq {\mathbb N}^+$ be non-periodic and $a(n) =2$ be a constant, hence MC-finite, sequence. The sequence $$ b(n) = \begin{cases} 1 & n \in A \\ n!+1 & n \not \in A \end{cases} $$ is MC-finite and oscillates. However $a(n)^{b(n)}$ is not MC-finite.
\end{enumerate} \end{proposition}
\subsection{A definability criterion}
In order to prove that a sequence $s(n)$ is MC-finite one can also use a method due to E. Specker and C. Blatter from 1981 \cite{blatter1981nombre,pr:BlatterSpecker84,specker1990application}. It uses logical definability as a sufficient condition. We denote by $\mathrm{FOL}$ first order logic, by $\mathrm{MSOL}$ monadic second order logic, and by $\mathrm{CMSOL}$ the logic $\mathrm{MSOL}$ augmented with modular counting quantifiers. Details on the definition of $\mathrm{CMSOL}$ are given in Section \ref{se:periodic}. In its simplest form, the Specker-Blatter Theorem can be stated as follows:
\begin{theorem}[Specker-Blatter Theorem] \label{th:BS-CMSOL} Let $S_{\phi}(n)$ be the number of binary relations $R$ on a set $[n]$ which
satisfy a given formula $\phi \in \mathrm{CMSOL}$. $S_{\phi}(n)$ is MC-finite, or equivalently, $S^m_{\phi}(n)$ is ultimately periodic for every $m$. \end{theorem} The original Specker-Blatter Theorem was stated for classes of structures with a finite set of binary relations definable in Monadic Second Order Logic $\mathrm{MSOL}$. It also works with unary relations added. The extension to $\mathrm{CMSOL}$ is due to \cite{pr:FischerMakowsky03}.
This method
is abbreviated in the sequel by SB.
\subsection{Comparing the methods} If one proves MC-finiteness for an integer sequence directly, the proof may sometimes be straightforward, but also sometimes tricky, and not applicable to other sequences. In contrast to this, Theorems \ref{th:MC} and \ref{th:BS-CMSOL} are meta-theorems. They only require checking some structural data about the sequence $s(n)$: recurrence relations or logical definability. However, these meta-theorems are only existence theorems; they do not explicitly give the required coefficients $c_{i,m}$ which show MC-finiteness.
\begin{examples} We note that the two meta-theorems cannot always be applied to the same integer sequences. \begin{enumerate}[(i)] \item The sequence $s(n)=n^n$ counts the number of unary functions (as binary relations) from $[n]$ to $[n]$, which is $\mathrm{FOL}$-definable, but it is not polynomially recursive, as shown in \cite{cadilhac2021polynomial}. However, MC-finiteness can also be established directly without much effort. \item There are polynomially recursive sequences over ${\mathbb Z}$ (hence MC-finite) which grow as fast as $2^{2^n}$, e.g., the sequence $a(0) = 2, a(n+1) = a(n)^2$ satisfies $a(n) = 2^{2^n}$. However, the number of $k$-tuples of binary relations on $[n]$ is bounded by $2^{kn^2}$. Hence, Theorem \ref{th:BS-CMSOL} cannot be applied. Again, MC-finiteness can also be established directly without much effort. \item The class of regular simple graphs is not $\mathrm{CMSOL}$-definable. For a general method for proving non-definability in $\mathrm{CMSOL}$, see \cite{makowsky2014connection}. Hence Theorem \ref{th:BS-CMSOL} cannot be applied to the sequence $A295193$, which counts the number of regular simple graphs on $n$ labelled nodes. In contrast to this, $r$-regular graphs are $\mathrm{FOL}$-definable, hence Theorem \ref{th:BS-CMSOL} can be applied easily to the sequence $RG(n,r)$ which counts the number of labelled $r$-regular graphs. The existence of recurrences for fixed $r$ is discussed in \cite{mckay1983applications} and the references cited therein. For $r=2,3$ this is $A110040$. Recurrences for $r = 0, 1, 2$ are found easily. For $r=3,4$ explicit recurrences were published in \cite{read1970some,read1980number}, and for $r=5$ in \cite{goulden1983hammond}. The recurrence for $r=5$ is linear but very long. In \cite{gessel1990symmetric}, it is shown that $RG(n,r)$ is holonomic (P-recursive) for every $r \in {\mathbb N}^+$. We have not checked whether $RG(n,r)$ is holonomic over ${\mathbb Z}$. 
In \cite{read1980number} it is shown that $RG(n,4)$ is polynomially recursive, but the equations given there do not show that $RG(n,4)$ is polynomially recursive over ${\mathbb Z}$.
It seems that Theorem \ref{th:BS-CMSOL} is the most suitable method to show that for each $r$ the sequence $RG(n,r)$ is MC-finite. \end{enumerate} \end{examples}
We will use the extension of the Specker-Blatter Theorem to $\mathrm{CMSOL}$, i.e., $\mathrm{MSOL}$ extended by modular counting quantifiers, from \cite{fischer2011application}, and a new extension which allows the use of hard-wired constants and is described in Section \ref{se:constants}.
Clearly, $S_{\phi}(n)$ is computable by brute force, given $\phi$ and $n$. In \cite{specker1990application}, it is mentioned that $S^m_{\phi}(n) = S_{\phi}(n) \bmod{m}$ can be computed more efficiently, but no details are given. Only the special case for $Q^m(n)$ is given, where $Q(n)$ is the number of quasi-orders on $[n]$.
\section{Immediate consequences of the Specker-Blatter Theorem} \label{se:immediate} \subsection{The Bell numbers $B(n)$}
The Bell numbers $B(n)$ count the number of partitions of the set $[n]$. This is the same as counting the number of equivalence relations on $[n]$, which is expressible by an $\mathrm{FOL}$-formula. Therefore, we get immediately from Theorem \ref{th:BS-CMSOL} that: \begin{theorem} \label{th:bell} The Bell numbers $B(n)$ are MC-finite. \end{theorem} The Bell numbers do satisfy some known congruences. For $m=p$ a prime, they satisfy the Touchard congruence $$ B(p+n) \equiv B(n) + B(n+1) \pmod{p}. $$ However, this is not enough to establish MC-finiteness.
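Both Theorem \ref{th:bell} and the Touchard congruence can be checked numerically. The sketch below (Python; \texttt{bell\_mod} is our own helper) computes $B(n) \bmod m$ via the Bell triangle and verifies the congruence for $p=7$:

```python
def bell_mod(n_max, m):
    """B(0..n_max) mod m, computed via the Bell triangle:
    each row starts with the last entry of the previous row, and
    each further entry is the sum of its left and upper-left neighbours."""
    bells = [1 % m]
    row = [1 % m]
    for _ in range(n_max):
        new_row = [row[-1]]
        for x in row:
            new_row.append((new_row[-1] + x) % m)
        row = new_row
        bells.append(row[0])
    return bells

p = 7
B = bell_mod(120, p)
# Touchard congruence: B(p+n) = B(n) + B(n+1)  (mod p)
assert all(B[p + n] == (B[n] + B[n + 1]) % p for n in range(100))
```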
The Bell numbers are not C-finite, because they grow too fast. The following estimate is due to \cite{de1981asymptotic,berend2010improved}. \begin{proposition} \label{prop:b-growth} For every $n \in {\mathbb N}^+$ $$ \left(\frac{n}{e \ln n}\right)^n \leq B(n).$$ Furthermore, for every $\epsilon > 0$ there is $n_0(\epsilon)$ such that for all $n \geq n_0(\epsilon)$ $$B(n) \leq \left(\frac{n}{e^{1-\epsilon} \ln n}\right)^n.$$ \end{proposition} Better estimates are known, see \cite[Proposition VIII.3]{flajolet2009analytic}, but are not needed here. Another way to see that Bell numbers are not C-finite is by noticing that they are not holonomic, \cite{klazar2003bell}. There, and in \cite{banderier2002generating}, some variations of Bell numbers are also studied: \begin{definition} \begin{enumerate}[(i)] \item $B(n)_{k,m}$ counts the number of partitions of $[n]$ which have $k$ blocks modulo $m$. \item $B(n)^{\pm} = B(n)_{0,2} - B(n)_{1,2}$ which are the Uppuluri-Carpenter numbers $A000587$. \item $B(n)^{bc}$ counts the number of {\em bicolored partitions} of $[n]$, i.e., the partitions of $[n]$ where the blocks are colored with two non-interchangeable colors $C_1, C_2$, $A001861$. \end{enumerate} \end{definition} \begin{theorem} \label{th:klazar} The sequences $B(n), B(n)_{k,m}, B(n)^{\pm}, B(n)^{bc} $ are not holonomic, hence not C-finite, but they are MC-finite. \end{theorem} \begin{proof} That they are not holonomic is shown in \cite{klazar2003bell}, and in \cite{banderier2002generating}. To see that they are MC-finite, we apply Theorem \ref{th:BS-CMSOL}. \begin{enumerate}[(i)] \item
$B(n)_{k,m}$ is definable in $\mathrm{CMSOL}$. We say that there is a set $X \subseteq [n]$ which intersects every block in exactly one element, and $|X| \equiv k \bmod{m}$. \item $B(n)^{\pm}$ is the difference of two MC-finite sequences, hence MC-finite. \item $B(n)^{bc}$ counts the number of binary and unary relations $E, C_1, C_2$ on $[n]$ such that $E$ is an equivalence relation, $C_1, C_2 \subseteq [n]$ partition $[n]$, and each of them is closed under $E$. \end{enumerate} \end{proof}
\subsection{Counting transitive relations} The Bell numbers $B(n)$ count the number of equivalence relations $E(n)$ on a set $[n]$. Similarly we can look at the number of linear quasi-orders (linear pre-orders) $LQ(n)$, the number of quasi-orders (pre-orders) $Q(n)$, the number of partial orders $P(n)$, and the number of transitive relations $T(n)$ on the set $[n]$.
These integer sequences were analyzed in \cite{pfeiffer2004counting}. They are all definable in $\mathrm{FOL}$, and we have \begin{proposition} \label{transitive} $ B(n)=E(n) \leq LQ(n) \leq P(n) \leq Q(n) \leq T(n). $ \end{proposition} \begin{proof} $E(n) \leq LQ(n)$: We can turn an equivalence relation into a linear quasi-order by linearly ordering the equivalence classes.
$LQ(n) \leq P(n)$: Each linear quasi-order can be made into a partial order by replacing every set of mutually equi-comparable elements in a linear quasi-order with an anti-chain.
$P(n) \leq Q(n)$: Each partial order is also a quasi-order.
$Q(n) \leq T(n)$: Each quasi-order is transitive. \end{proof} Hence we get using the Specker-Blatter Theorem and Proposition \ref{transitive}: \begin{theorem} \label{th:1} The sequences $ B(n)=E(n), LQ(n), P(n), Q(n)$ and $T(n) $ are MC-finite but not C-finite. \end{theorem}
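For small $n$ the chain of inequalities in Proposition \ref{transitive} can be confirmed by brute force over all $2^{n^2}$ binary relations (a sketch in Python; the function name is ours):

```python
from itertools import product

def count_transitive_classes(n):
    """Count E(n), LQ(n), P(n), Q(n), T(n) by enumerating all binary
    relations on [n] = {0, ..., n-1}."""
    pairs = [(i, j) for i in range(n) for j in range(n)]
    counts = {"E": 0, "LQ": 0, "P": 0, "Q": 0, "T": 0}
    for bits in product([False, True], repeat=n * n):
        R = {p for p, b in zip(pairs, bits) if b}
        if not all((i, k) in R for (i, j) in R for (j2, k) in R if j == j2):
            continue                      # not transitive
        counts["T"] += 1
        if not all((i, i) in R for i in range(n)):
            continue                      # transitive but not reflexive
        counts["Q"] += 1                  # quasi-order (pre-order)
        if all((j, i) not in R for (i, j) in R if i != j):
            counts["P"] += 1              # antisymmetric: partial order
        if all((i, j) in R or (j, i) in R for i in range(n) for j in range(n)):
            counts["LQ"] += 1             # total: linear quasi-order
        if all((j, i) in R for (i, j) in R):
            counts["E"] += 1              # symmetric: equivalence relation
    return counts

c = count_transitive_classes(3)
assert c == {"E": 5, "LQ": 13, "P": 19, "Q": 29, "T": 171}
assert c["E"] <= c["LQ"] <= c["P"] <= c["Q"] <= c["T"]
```

The values for $n=3$ agree with the OEIS entries quoted above.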
\subsection{Stirling numbers of the second kind} Let $S(n,k)$ be the number of partitions of $[n]$ into $k$ non-empty blocks. $S(n,k)$ is also known as the Stirling number of the second kind. Clearly, $$ B(n) = \sum_k S(n,k). $$ \begin{theorem} \label{th:stirling} For fixed $k=k_0$ the sequence $S(n, k_0)$ is C-finite, and hence MC-finite. \end{theorem} This can be seen by observing that $S(n, k_0)$ has a rational generating function, see \cite[7.47]{graham1989concrete}.
$$ \sum_{n=0}^\infty S(n,k_0) x^n = \frac{x^{k_0}}{(1-x)(1-2x)\cdots (1-k_0x)}. $$
\subsection{Lah numbers $Lah(n)$, $A001286$}
If we modify the Stirling numbers of the second kind $S(n,k)$ such that the elements within each block of the partition are linearly ordered, we arrive at the somewhat less known Lah numbers $Lah(n,k)$, $A001286$, introduced by I. Lah in \cite{lah1954new,lah1955neue} in the context of actuarial science. Good references for Lah numbers are \cite{graham1989concrete,charalambides2018enumerative}. The Lah numbers are also coefficients expressing rising factorials $x^{(n)}$ in terms of falling factorials $x_{(n)}$. \begin{proposition} \label{pr:Lah-identity} $$ x^{(n)} = \sum_{k=1}^n Lah(n,k)\, x_{(k)} \text{ and } x_{(n)} = \sum_{k=1}^n (-1)^{n-k} Lah(n,k)\, x^{(k)} $$ \end{proposition} In \cite{guo2015six} six proofs of Proposition \ref{pr:Lah-identity} are given. Furthermore, $Lah(n) = \sum_k Lah(n,k)$.
$Lah(n)$ counts the number of linear quasi-orders on $[n]$, hence $Lah(n) = LQ(n)$, and $Lah(n,k)$ counts the number of linear quasi-orders on $[n]$ with $k$ sets of equi-comparable elements. Two elements $u,v$ in a quasi-order are {\em equi-comparable} if both $u \leq v$ and $v \leq u$. This is again definable in first order logic $\mathrm{FOL}$.
There are explicit formulas: \begin{proposition} \label{prop:Lah} \begin{gather} Lah(n,k) = \frac{n!}{k!} \cdot {{n-1} \choose {k-1}} = \sum_{j=0}^n |s(n,j)| S(j,k) \label{eq:lah-1} \\ Lah(n) = \sum_k Lah(n,k) = n! \sum_k \frac{1}{k!} \cdot {{n-1} \choose {k-1}} \label{eq:lah-2} \end{gather} where the $s(n,j)$ are the Stirling numbers of the first kind, here taken unsigned, see \cite{comtet2012advanced}. \end{proposition} There is also a recurrence relation: \begin{gather} Lah(n+1,k) = Lah(n, k-1) + (n+k) Lah(n,k) \label{eq:lah-3} \end{gather} But again this is not enough to establish C-finiteness or MC-finiteness, since it is a recurrence involving both $n$ and $k$.
\begin{theorem} \label{th:lah} Both $Lah(n)$ and $Lah(n,k_0)$ are MC-finite but not C-finite. \end{theorem} \begin{proof} It follows directly from Equation (\ref{eq:lah-1}) that for fixed $k=k_0$ the sequence $Lah(n,k_0)$ grows faster than $2^{cn}$ for every $c$, so by Proposition \ref{prop:c-finite} it is not C-finite. The same holds for $Lah(n) \geq Lah(n,1) = n!$.

MC-finiteness again follows using Theorem \ref{th:BS-CMSOL}.
\end{proof} Note however that the recurrence relation given in Equation (\ref{eq:lah-3}) does not have constant coefficients.
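The closed form and the recurrence of Equation (\ref{eq:lah-3}) can be cross-checked numerically (a sketch in Python; \texttt{lah} is our own helper):

```python
from math import comb, factorial

def lah(n, k):
    """Unsigned Lah number Lah(n,k) = (n!/k!) * C(n-1, k-1)."""
    if k == 0:
        return 1 if n == 0 else 0
    return factorial(n) // factorial(k) * comb(n - 1, k - 1)

# cross-check the closed form against Lah(n+1,k) = Lah(n,k-1) + (n+k) Lah(n,k)
assert all(lah(n + 1, k) == lah(n, k - 1) + (n + k) * lah(n, k)
           for n in range(1, 20) for k in range(1, n + 2))
```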
\subsection{Summary so far} Table \ref{table-1} summarizes the results which are direct consequences of the growth arguments or non-holonomicity (NH) and the Specker-Blatter Theorem \ref{th:BS-CMSOL} (SB).
\begin{center} \begin{table}[h]
\begin{tabular}{||l|l|l|l||l|l|l|} \hline \hline Series & C-finite & Proof & Theorem & MC-finite & Proof & Theorem \\ \hline \hline $S(n) = B(n)$ & no & Growth & \ref{th:1} & yes & SB & \ref{th:bell} \\ \hline $S(n,k_0)$ & yes & gen.fun & \ref{th:stirling} & yes & gen.fun & \ref{th:stirling} \\ \hline $B(n)^{\pm}$ & no & NH & \ref{th:klazar} & yes & SB &\ref{th:klazar} \\ \hline $B(n)^{bc}$ & no & NH & \ref{th:klazar} & yes & SB &\ref{th:klazar} \\ \hline \hline $LQ(n)$ & no & Growth & \ref{th:1} & yes & SB & \ref{th:1} \\ \hline $Q(n)$ & no & Growth & \ref{th:1} & yes & SB & \ref{th:1} \\ \hline $P(n)$ & no & Growth & \ref{th:1} & yes & SB & \ref{th:1} \\ \hline $T(n)$ & no & Growth & \ref{th:1} & yes & SB & \ref{th:1} \\ \hline \hline $Lah(n) = LQ(n)$ & no & Growth & \ref{th:lah} & yes & SB & \ref{th:lah} \\ \hline $Lah(n, k_0)$ & no & Growth & \ref{th:lah} & yes & SB & \ref{th:lah} \\ \hline \hline \end{tabular} \caption{Direct consequences of the Specker-Blatter Theorem} \label{table-1} \end{table} \end{center}
\section{Restricted set partitions} \label{se:restricted}
The new results of this paper concern C-finiteness and MC-finiteness for restricted versions of set partitions.
We have two kinds of restrictions in mind. The first are {\em positional restrictions} which impose conditions on the positions of the elements of $[n]$ where $[n]$ is equipped with its natural order. The second are {\em size restrictions} which impose conditions on the size of the blocks or the number of the blocks.
\subsection{Global positional restrictions}
\begin{definition} Let $A$ and $B$ be two blocks of a partition of $[n]$. \begin{enumerate}[(i)] \item $A$ and $B$ are {\em crossing} if there are elements $a_1, a_2 \in A$ and $b_1, b_2 \in B$ such that $a_1 < b_1 < a_2 < b_2$ or $b_1 < a_1 < b_2 < a_2$. \item Let $\min{A},\max{A},\min{B},\max{B}$ be the smallest and largest elements of $A$ and $B$, respectively. $A$ and $B$ are {\em overlapping} if $\min{A} < \min{B} < \max{A} < \max{B}$ or $\min{B} < \min{A} < \max{B} < \max{A}$. \item If $A$ and $B$ are overlapping they are also crossing, but not conversely. \item The number $B(n)^{nc}$ of non-crossing set partitions on $[n]$ is one of the interpretations of the Catalan numbers, \cite{roman2015introduction}. \item The Bessel number $B(n)^B$ ($A006789$) is the number of non-overlapping set partitions on $[n]$, \cite{flajolet1990non}. \end{enumerate} \end{definition} The Catalan numbers $C(n)$ are not holonomic over ${\mathbb Z}$ and not MC-finite. In \cite{banderier2002generating} it is shown that the Bessel numbers $B(n)^B$ are not holonomic.
It is open whether the Bessel numbers $B(n)^B$ are MC-finite, see Problem \ref{problem-Bessel}.
The positional restrictions here are {\em global} in the sense that they involve all of the elements of $[n]$ with their natural order. For non-holonomic integer sequences $s(n)$ that count the number of set partitions subject to global positional restrictions, we have currently no tools to decide whether they are MC-finite or not.
Next, we look at {\em local} positional restrictions one can impose on Stirling and Lah numbers, \cite{broder1984r,wagner1996generalized,nyul2015r,benyi2019some,benyi2019restricted}. They are local because they only put restrictions on the positions of a fixed number of elements of $[n]$ with their natural order.
\subsection{Local positional and size restrictions} Recall that we denote by $[n]$ the set $\{1, 2, \ldots, n\}$. We denote by $S_r(n,k)$ the number of partitions of $[n+r]$ into $k+r$ non-empty blocks with the additional condition that the first $r$ elements are in distinct blocks. The elements $1, \ldots, r$ are called {\em special elements} and the partitions where the first $r$ elements are in distinct blocks are called $r$-partitions. When dealing with definability we view the special elements as {\em hard-wired} constants, i.e., constant symbols $a_i, 1 \leq i \leq r$ with a fixed interpretation by elements of $[n+r]$.
We define $S_r(n)= B_r(n)$ by $$ S_r(n) = \sum_k S_r(n,k). $$ $Lah_r(n,k)$, $A143497$, and $Lah_r(n)$ are defined analogously, with the condition that the special elements $a_1 < a_2 < \ldots < a_r$ lie in different blocks, \cite{nyul2015r,shattuck2016generalized}.
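For small values these definitions can be checked against the standard $r$-Stirling recurrence $S_r(n+1,k) = (k+r)\,S_r(n,k) + S_r(n,k-1)$ with $S_r(0,0)=1$ (cf.\ \cite{broder1984r}); a sketch, where the function names are ours:

```python
def r_stirling_row(n, r):
    """Row S_r(n, k) for k = 0..n, via the recurrence
    S_r(n+1, k) = (k+r) S_r(n, k) + S_r(n, k-1), S_r(0, 0) = 1."""
    row = [1]
    for _ in range(n):
        row = [((k + r) * row[k] if k < len(row) else 0) +
               (row[k - 1] if k >= 1 else 0)
               for k in range(len(row) + 1)]
    return row

def r_bell(n, r):
    """B_r(n) = sum_k S_r(n, k)."""
    return sum(r_stirling_row(n, r))

# r = 0 gives the ordinary Bell numbers ...
assert [r_bell(n, 0) for n in range(6)] == [1, 1, 2, 5, 15, 52]
# ... and r = 2 gives the 2-Bell numbers A005493
assert [r_bell(n, 2) for n in range(5)] == [1, 3, 10, 37, 151]
```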
Let $A \subseteq {\mathbb N}$. We denote by $S_{A,r}(n) = B_{A,r}(n)$, $S_{A,r}(n,k)$, $Lah_{A,r}(n)$ and $Lah_{A,r}(n,k)$ the number of corresponding partitions where every block has its size in $A$.
For $r=0$, in the absence of special elements, we just write $S_A(n) = B_A(n)$, $S_A(n,k)$, $Lah_A(n)$ and $Lah_A(n,k)$.
A set $A \subseteq {\mathbb N}$ is {\em (ultimately) periodic} if there exist $p,n_0 \in {\mathbb N}^+$ such that for all $n \in {\mathbb N}$ ($n \geq n_0$) we have $n \in A$ iff $n+p \in A$. In other words, the characteristic function $\chi_A(n)$ of $A$ is ultimately periodic in the usual sense, $\chi_A(n) = \chi_A(n+p)$ ($n \geq n_0$).

Analogous definitions can be made for $LQ(n)$; the resulting sequences are denoted by $LQ_{A,r}$ and are also called $r$-Fubini sequences, with OEIS-number $A232472$.
\subsection{Main results for restricted set partitions} Our results for restricted set partitions are summarized in Tables \ref{table-2}, \ref{table-3}, \ref{table-4} and \ref{table-5} below.
FM refers to the proof method of \cite{fischer2008linear,fischer2011application}.
SB* refers to the extension of the Specker-Blatter Theorem to allow a fixed finite set of special elements as hard-wired constants.
\begin{center} \begin{table}[h]
\begin{tabular}{|l||l|l|l||l|l|l|} \hline \hline Series & C-finite & Proof & Theorem & MC-finite & Proof & Theorem \\ \hline \hline $S_A(n) = B_A(n)$ & no & Growth & \ref{th:g-bell} & yes & SB* & \ref{th:A-periodic} \\ \hline $S_A(n,k_0)$ & yes & FM & \ref{th:c-fin} & yes & FM & \ref{th:c-fin} \\ \hline $Lah_A(n) = LQ_A(n)$ & no & Growth & \ref{th:g-lah} & yes & SB* & \ref{th:A-periodic} \\ \hline $Lah_A(n, k_0)$ & no & Growth & \ref{th:g-lah} & yes & SB* & \ref{th:A-periodic} \\ \hline \hline \end{tabular} \caption{With ultimately periodic $A$ only} \label{table-2} \end{table} \end{center}
\begin{center} \begin{table}[h]
\begin{tabular}{|l||l|l|l||l|l|l|} \hline \hline Series & C-finite & Proof & Theorem & MC-finite & Proof & Theorem \\ \hline \hline $S_r(n)=B_r(n)$ & no & Growth & \ref{th:g-bell} & yes & SB* & \ref{co:SB*} \\ \hline $S_r(n, k_0)$ & yes & FM &\ref{th:c-fin} & yes & FM & \ref{th:c-fin} \\ \hline \hline $Lah_r(n, k_0)$ & no & Growth &\ref{th:g-lah} & yes & SB* & \ref{co:SB*} \\ \hline \hline \end{tabular} \caption{With hard-wired constants only} \label{table-3} \end{table} \end{center}
\begin{center} \begin{table}[h]
\begin{tabular}{|l||l|l|l||l|l|l|} \hline \hline Series & C-finite & Proof & Theorem & MC-finite & Proof & Theorem \\ \hline \hline $S_{A,r}(n)= B_{A,r}(n)$ & no & Growth & \ref{th:bell-A} & yes & SB* & \ref{co:SB*} \\ \hline $S_{A,r}(n, k_0)$ & yes & FM & \ref{th:c-fin} & yes & FM & \ref{th:c-fin} \\ \hline $Lah_{A,r}(n, k_0)$ & no & Growth & \ref{th:bell-A} & yes & SB* & \ref{co:SB*} \\ \hline \hline \end{tabular} \caption{With ultimately periodic $A$ and hard-wired constants } \label{table-4} \end{table} \end{center} These results also hold for $LQ_{A,r}$, the $r$-Fubini numbers, and other similarly defined sequences.
\begin{center} \begin{table}[h]
\begin{tabular}{|l||l|l|l||l|l|l|} \hline \hline Series & C-finite & Proof & Theorem & MC-finite & Proof & Theorem \\ \hline \hline $B(n)^B$ & no & NH & \cite{banderier2002generating} & ??? & ??? & --- \\ \hline $B(n)^{nc} = C(n)$ & no & NH & \cite{roman2015introduction} & no & \cite[Theorem 9.4]{roman2015introduction} & \cite{banderier2002generating} \\ \hline \hline \end{tabular} \caption{With global positional restrictions} \label{table-5} \end{table} \end{center}
\section{Proofs for the restricted cases} \label{se:rproofs} For the analysis of MC-finiteness in the restricted cases we need some additional tools.
\subsection{Ultimate periodicity of $A$} \label{se:periodic} \ifmargin \marginpar{s-periodic} \else\fi
Recall that a formula with a {\em modular counting quantifier $C_{b,m}x \phi(x)$} is true in a structure ${\mathfrak B}$ if the cardinality of the set of elements in ${\mathfrak B}$ which satisfy $\phi(x)$, satisfies
$$|\{a \in B : \phi(a) \}| \equiv b \mod{m}.$$ $\mathrm{CMSOL}$ is the logic obtained from $\mathrm{MSOL}$ by extending it with all the modular counting quantifiers $C_{b,m}$. In \cite{pr:FischerMakowsky03} the Specker-Blatter Theorem was extended to hold for $\mathrm{CMSOL}$, as already stated in Theorem \ref{th:BS-CMSOL}. $\mathrm{CMSOL}$ is also needed to prove the following lemma:
\begin{lemma} \label{le:periodic} Let $A$ be ultimately periodic and $\psi(x)$ be a formula of $\mathrm{CMSOL}$. Then there is a sentence $\psi_A \in \mathrm{CMSOL}$ such that in every finite structure ${\mathfrak B}$ we have
$${\mathfrak B} \models \psi_A \text{ iff }|\{b \in B : \psi(b) \}| \in A$$ \end{lemma}
\begin{proof} If $A = A_{a,m}= \{ n \in {\mathbb N}: n \equiv a \mod m \}$ the formula $\psi_A$ is the sentence $C_{a,m} x \psi(x)$.
Next we observe that if $A$ is ultimately periodic there are a threshold $q$, a period $m$ and finitely many residues $a_1, \ldots, a_k$ such that $A = \bigcup_{i=0}^k A_i$ with $A_0 \subseteq [q]$ and $A_i = \{ n > q: n \equiv a_i \mod m \}$ for $1 \leq i \leq k$. We proceed in steps:
\begin{enumerate}[(i)] \item $\exists^{\geq k}x\psi(x):=\exists x_1,\ldots,x_k \bigwedge_{i=1}^k\psi(x_i)\wedge\bigwedge_{1\leq i<j\leq k}(x_i\neq x_j)$ says that there are at least $k$ elements that satisfy $\psi(x)$. \item $\exists^{=k}x\psi(x):=\exists^{\geq k}x\psi(x)\wedge\neg\exists^{\geq k+1}x\psi(x)$ says that there are exactly $k$ such elements. \item $\psi_{A_0}:= \neg\exists^{\geq q+1} x \psi(x) \wedge \bigvee_{j\in A_0}\exists^{=j}x\psi(x)$ says that the number of elements satisfying $\psi(x)$ is at most $q$ and is one of the cardinalities in $A_0$. \item For $1 \leq i \leq k$, $\psi_{A_i}:=\exists^{\geq q+1}x\psi(x)\wedge C_{a_i,m}x\psi(x)$ says that the number of elements satisfying $\psi(x)$ is at least $q+1$ and congruent to $a_i$ modulo $m$. \item $\psi_A := \bigvee_{i=0}^k \psi_{A_i}$ is the required sentence. \end{enumerate}
\end{proof}
Theorem \ref{th:BS-CMSOL} together with Lemma \ref{le:periodic} gives immediately: \begin{theorem} \label{th:A-periodic} Assume that $A$ is ultimately periodic. Then the sequences $B_A(n)= S_A(n), Lah_A(n)$ and $Lah_A(n, k_0)$ are MC-finite. \end{theorem}
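For a concrete instance, take $A$ to be the even numbers. Conditioning on the block containing the element $n$ gives $B_A(n) = \sum_{j \in A,\, j \le n} \binom{n-1}{j-1} B_A(n-j)$, which makes the residues $B_A(n) \bmod m$ easy to tabulate; a sketch (names ours):

```python
from math import comb

def b_restricted(n, in_A):
    """B_A(n): set partitions of [n] with every block size in A,
    via the recurrence on the block containing the element n."""
    B = [1] + [0] * n
    for i in range(1, n + 1):
        B[i] = sum(comb(i - 1, j - 1) * B[i - j]
                   for j in range(1, i + 1) if in_A(j))
    return B[n]

even = lambda j: j % 2 == 0
assert [b_restricted(n, even) for n in range(9)] == [1, 0, 1, 0, 4, 0, 31, 0, 379]
# the residues mod m can then be scanned for the ultimate period
# whose existence the theorem guarantees
residues = [b_restricted(n, even) % 3 for n in range(40)]
```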
\subsection{Growth arguments}
We first discuss growth arguments for $B_A(n)= S_A(n), Lah_A(n)$ and $Lah_A(n, k_0)$. \begin{theorem} \label{th:bell-A} Let $A \subseteq {\mathbb N}$ be infinite and ultimately periodic. Then $B_A(n)= S_A(n), Lah_A(n)$ and $Lah_A(n, k_0)$ are not C-finite. \end{theorem} \begin{proof} First we prove it for $B_A(n)$ and $A = A_m = \{ n \in {\mathbb N}: n \equiv 0 \mod m \}$. Let $P_1, \ldots , P_k$ be a partition of $[n]$. We replace in each $P_i$ every element by $m$ elements. This gives us a partition of $[mn]$ with each block of size in $A_m$, and distinct partitions of $[n]$ yield distinct partitions of $[mn]$. Hence $$ B_A(mn) \geq B(n) \geq \left(\frac{n}{e \ln n}\right)^n, $$ so the subsequence $B_A(mn)$ grows superexponentially, which is impossible for a C-finite sequence.
Next we assume that $A = A_{k,a,m}= \{ n \in {\mathbb N}: n \equiv a \mod m, n \geq k \}$. We proceed as before, but additionally add $mr+a$ elements to each block, for $r$ large enough.
Finally, we note that for every infinite (ultimately) periodic set $A$ there is a set $A_{k,a,m}$ for some $k, a, m \in {\mathbb N}^+$ such that $A_{k,a,m} \subseteq A$; since every partition counted by $B_{A_{k,a,m}}(n)$ is also counted by $B_A(n)$, the growth bound carries over to $B_A(n)$.
For $Lah_A(n)$ and $Lah_A(n, k_0)$ we proceed similarly using Proposition \ref{prop:Lah}. \end{proof}
Next we discuss growth for $Lah(n,k_0)$, $Lah(n) = \sum_k Lah(n,k)$ and $Lah_r(n, k_0)$.
We have seen in Proposition \ref{prop:b-growth} that $$ \left(\frac{n}{e \ln n}\right)^n \leq B(n) \leq \left(\frac{n}{e^{1-o(1)} \ln n}\right)^n $$
We now show \begin{lemma} \label{le:bell-3} $B_r(n) \geq B(n)$ \end{lemma} \begin{proof} Every partition of $[n]$ gives rise to at least one partition of $[n+r]$ where the first $r$ elements are in distinct blocks containing only one element. \end{proof}
From Propositions \ref{prop:c-finite} and \ref{transitive} and Lemma \ref{le:bell-3} we get:
\begin{theorem} \label{th:g-bell} The sequences $B(n)$ and $B_r(n)$
are not C-finite. \end{theorem}
\begin{lemma} \label{le:lah-1} For $k_0, r$ fixed, the Lah numbers $Lah(n,k_0)$ satisfy the following: \begin{enumerate}[(i)] \item $Lah(n,k_0) = {{n-1} \choose {k_0-1}} \frac{n!}{k_0!}$, \item $Lah(n) \geq Lah(n, k_0)$, and \item $Lah_r(n, k_0) \geq Lah(n, k_0)$. \end{enumerate} \end{lemma} \begin{proof} (i) is from \cite{lah1954new,lah1955neue}. (ii) follows from (i), and (iii) is proved like Lemma \ref{le:bell-3}. \end{proof}
This gives immediately
\begin{theorem} \label{th:g-lah} Let $k_0$ be fixed. The sequences $Lah(n,k_0)$, $Lah(n) = \sum_k Lah(n,k)$ and $Lah_r(n, k_0)$ are not C-finite. \end{theorem}
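The closed form for $Lah(n,k)$ can be cross-checked by brute force: every partition of $[n]$ into $k$ non-empty linearly ordered blocks arises by cutting some permutation of $[n]$ into $k$ consecutive pieces. A sketch, with names of our choosing:

```python
from itertools import combinations, permutations
from math import comb, factorial

def lah_formula(n, k):
    # Lah(n, k) = C(n-1, k-1) * n! / k!
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

def lah_brute(n, k):
    """Count partitions of [n] into k non-empty linearly ordered blocks
    by cutting permutations and deduplicating the resulting block sets."""
    found = set()
    for perm in permutations(range(1, n + 1)):
        for cuts in combinations(range(1, n), k - 1):
            bounds = (0,) + cuts + (n,)
            found.add(frozenset(perm[bounds[i]:bounds[i + 1]]
                                for i in range(k)))
    return len(found)

assert all(lah_brute(n, k) == lah_formula(n, k)
           for n in range(1, 6) for k in range(1, n + 1))
```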
\subsection{Hard-wired constants}
Recall that a constant is {\em hard-wired} on $[n]$ if its interpretation is fixed.
The Specker-Blatter Theorem is originally proved for classes of structures with a finite number of binary relations. It is false for one quaternary relation \cite{ar:Fischer02}. It was recently announced that it is also false for one ternary relation, \cite{ar:FischerMakowsky2022}.
The Specker-Blatter Theorem remains true when adding a finite number of unary relations. This is so because a unary relation $U$ can be represented by a binary relation $R$ with $R(x,x)$ iff $U(x)$, and $R(x,y)$ false whenever $x \neq y$.
Adding constants comes in two flavors, with variable interpretations, or hard-wired. Assume we want to count the number of unary predicates $P$ on $[n]$ which contain the interpretation of a constant symbol $c$. There are $n$ possible interpretations for $c$ and $2^{n-1}$ interpretations for sets not containing $c$, hence $n2^{n-1}$ many such sets. However, if $c$ is hard-wired to be interpreted as $1 \in [n]$, there are only $2^{n-1}$ many such sets.
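Both counts in this example are easy to confirm by enumeration for small $n$; a sketch with a helper `subsets` of our choosing:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as tuples."""
    s = list(s)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

n = 5
universe = range(1, n + 1)
# variable interpretation of c: count pairs (c, P) with c in P
variable = sum(1 for c in universe for P in subsets(universe) if c in P)
# hard-wired interpretation: c is fixed to the element 1
hard_wired = sum(1 for P in subsets(universe) if 1 in P)
assert variable == n * 2 ** (n - 1)
assert hard_wired == 2 ** (n - 1)
```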
Constants can be represented as unary predicates the interpretation of which is a singleton. If we do this, the Specker-Blatter Theorem holds, but we cannot model the $r$-Bell numbers like this. To prove that the $r$-Bell numbers are MC-finite one has to deal with $r$ many hard-wired constants. Adding a finite number of hard-wired constants needs some work. In Appendix \ref{se:constants} we show how to eliminate a finite number of hard-wired constants for the case of $S_r(n)$. The proof generalizes. In \cite{ar:FischerMakowsky2022} the more general version is proved:
\begin{theorem} \label{th:SB*} Let $\tau_r$ be a vocabulary with finitely many binary and unary relation symbols, and $r$ hard-wired constants. Let $\phi$ be a formula of $\mathrm{CMSOL}(\tau_r)$. Then $S_{\phi}(n)$ is MC-finite. \end{theorem} \begin{corollary} \label{co:SB*} The sequences $S_r(n)=B_r(n)$, $Lah_r(n, k_0)$, $S_{A,r}(n)= B_{A,r}(n)$, $Lah_{A,r}(n, k_0)$ are MC-finite. \end{corollary}
\subsection{Proving C-finiteness} \label{se:fm} \ifmargin \marginpar{rj-fm} \else\fi
In this subsection we explain a special case of the method used in \cite{fischer2008linear} to prove C-finiteness. It is based on counting partitions of graphs satisfying additional properties and computing these partitions for iteratively constructed graphs.
\subsubsection{Counting partitions with a fixed number of blocks} Let $G = (V(G), E(G))$ be a graph, and $k_0 \in {\mathbb N}$. We look at partitions $P_1(G), \ldots, P_{k_0}(G)$ of $V(G)$ which can be described in first order logic $\mathrm{FOL}$. The following are three typical examples: \begin{examples} \begin{enumerate}[(i)] \item The sets $P_1(G), \ldots, P_{k_0}(G)$ form a partition of $V(G)$ without further restrictions. \item For each $i \leq k_0$ the induced graph $G[P_i(G)]$ is edgeless (proper coloring). \item Let $\mathcal{P}$ be a graph property. For each $i \leq k_0$ the induced graph $G[P_i(G)]$ is in $\mathcal{P}$ ($\mathcal{P}$-coloring). \end{enumerate} \end{examples}
We look at the counting function $$ f_{\phi}(G) =
| \{ P_1(G), \ldots, P_{k_0}(G) : \phi(P_1(G), \ldots, P_{k_0}(G)) \}| $$ defined using an $\mathrm{FOL}$-formula $\phi$.
Let $A \subseteq {\mathbb N}$ be an ultimately periodic set. We also look at the restricted counting function $$ f_{\phi, A}(G) =
| \{ P_1(G), \ldots, P_{k_0}(G) : \phi(P_1(G), \ldots, P_{k_0}(G)) \text{ and } |P_i(G)| \in A \}|. $$
We also allow graphs with a fixed number of distinguished vertices, which may appear in the formula $\phi$.
\subsubsection{Iteratively constructed graphs} \begin{definition} A $k$-colored graph is a graph $G$ together with $k$ sets $V_1,V_2,...,V_k\subseteq V(G)$ such that $V_i \cap V_j =\emptyset $ for $i\neq j$. A basic operation on $k$-colored graphs is one of the following: \begin{itemize} \item $Add_i$: add a new vertex of color $i$ to $G$. \item $Recolor_{i,j}$: recolor all vertices with color $i$ to color $j$ in $G$. \item $Uncolor_{i}$: remove the color of all vertices with color $i$. Uncolored vertices cannot be recolored again. \item $AddEdges_{i,j}$: add an edge between every vertex with color $i$ and every vertex with color $j$ in $G$. \item $DeleteEdges_{i,j}$: delete all edges between vertices with color $i$ and vertices with color $j$ from $G$.
\end{itemize} A unary operation $F$ on graphs is elementary if $F$ is a finite composition of basic operations on $k$-colored graphs (with $k$ fixed). We say that a sequence of graphs $\{G_n\}$ is iteratively constructed if it can be defined by fixing a graph $G_0$ and defining $G_{n+1}=F(G_n)$ for an elementary operation $F$. \end{definition}
\begin{example} \label{exIterativelyConstructed} The following sequences are iteratively constructed: \begin{itemize} \item The complete graphs $K_n$ can be constructed using two colors: Fix $G_0$ to be the empty graph, and the operation $F$, given a graph $G_n$, adds a vertex with color 2, adds edges between all vertices with color 2 and color 1, and recolors all vertices with color 2 to color 1. \item The paths $P_n$ can be constructed using 3 colors: Fix $G_0$ to be the empty graph, and the operation $F$, given a graph $G_n$, adds a vertex with color 3, adds edges between all vertices with colors 2 and 3, recolors all vertices with color 2 to color 1, and recolors all vertices with color 3 to color 2. \item The cycles $C_n, n \geq 3$ can be constructed by first constructing a path $P_n$ where the first and the last vertex have colors $1$ and $2$, different from the remaining vertices, and then connecting the first and last vertex of $P_n$ by an edge. This needs $5$ colors, but it is not an iterative construction. To make it iterative we proceed as follows. Given a cycle $C_n$ with two neighboring vertices of color $1$ and $2$, uncolor all the other vertices and remove the edge between the vertices colored $1$ and $2$. Then add a new vertex with color $3$, add edges between it and the vertices colored $1$ and $2$, uncolor the old vertex colored $1$, and then recolor $3$ to color $1$. \end{itemize} \end{example}
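The basic operations are easy to simulate; the following sketch (class and method names ours) implements $Add_i$, $AddEdges_{i,j}$ and $Recolor_{i,j}$ and runs the two-color construction of $K_n$ from the example:

```python
class ColoredGraph:
    def __init__(self):
        self.vertices = {}      # vertex id -> color (None = uncolored)
        self.edges = set()      # frozensets {u, v}
        self._next = 0

    def add(self, color):                       # Add_i
        self.vertices[self._next] = color
        self._next += 1

    def add_edges(self, i, j):                  # AddEdges_{i,j}
        for u, cu in self.vertices.items():
            for v, cv in self.vertices.items():
                if u != v and cu == i and cv == j:
                    self.edges.add(frozenset((u, v)))

    def recolor(self, i, j):                    # Recolor_{i,j}
        for v, c in self.vertices.items():
            if c == i:
                self.vertices[v] = j

def complete_graph(n):
    """Build K_n with the 2-color iteration from the example."""
    G = ColoredGraph()
    for _ in range(n):
        G.add(2)             # new vertex with color 2
        G.add_edges(2, 1)    # join it to all color-1 vertices
        G.recolor(2, 1)      # absorb it into color 1
    return G

G = complete_graph(6)
assert len(G.vertices) == 6 and len(G.edges) == 6 * 5 // 2
```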
\begin{remark} In \cite{fischer2008linear} there was an additional operation allowed \begin{itemize} \item $Duplicate$: Add a disjoint copy of $G$ to $G$, \end{itemize} assuming erroneously that $Duplicate$ behaves like a unary operation on graphs. Although it looks like a unary operation on graphs, the sequence of graphs $$ G_0 = E_1, G_{n+1} = Duplicate(G_n) $$ grows too fast and does not fit the framework that the authors have envisaged in \cite{fischer2008linear}. \end{remark}
\subsubsection{The FM method}
In this framework \cite{fischer2008linear} proved the following:
\begin{theorem}[The Fischer-Makowsky Theorem] \label{th:FM} Let $G_n$ be an iteratively constructed sequence of graphs, $A \subseteq {\mathbb N}$ be ultimately periodic, and $$ f_{\phi}(G_n) =
| \{ P_1(G_n), \ldots, P_{k_0}(G_n) : \phi(P_1(G_n), \ldots, P_{k_0}(G_n)) \}| $$ and $$ f_{\phi, A}(G_n) =
| \{ P_1(G_n), \ldots, P_{k_0}(G_n) : \phi(P_1(G_n), \ldots, P_{k_0}(G_n)) \text{ and } |P_i(G_n)| \in A \}|, $$ where $\phi \in \mathrm{CMSOL}$. Then the sequences $f_{\phi}(G_n)$ and $f_{\phi, A}(G_n)$ are C-finite. \end{theorem}
We now use Theorem \ref{th:FM} to prove:
\begin{theorem} \label{th:c-finite} \label{th:c-fin} Let $A$ be ultimately periodic, $r, k_0 \in {\mathbb N}$. Then $S(n,k_0)$, $S_A(n,k_0)$, $S_r(n,k_0)$ and $S_{A,r}(n,k_0)$ are C-finite. \end{theorem} \begin{proof} It suffices to prove it for $S_{A,r}(n,k_0)$. The other cases can be obtained by setting $r=0$ and/or $A= {\mathbb N}$.
We have to show that $S_{A,r}(n,k_0)$ is of the form $f_{\phi, A}(G_n)$.
We define an iteratively constructed sequence of graphs $G =(V(G), E(G), v_1, \ldots, v_r)$ with $r$ distinguished vertices as follows. $G_0 = (K_r, v_1, \ldots, v_r)$. $G_{n+1} = G_n \sqcup K_1$.
Now take $\phi(P_1, \ldots , P_{k_0}, v_1, \ldots , v_r)$ which says that the $P_i$'s form a partition and for each $i \leq r$ the distinguished vertex $v_i$ belongs to $P_i(G)$. \end{proof}
Further details are given in Appendix \ref{se:c-finite}.
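Theorem \ref{th:c-fin} can be illustrated on $S(n,3)$: since $S(n,3) = (3^n - 3\cdot 2^n + 3)/6$ for $n \geq 1$, the sequence satisfies, for $n \geq 4$, the linear recurrence with characteristic polynomial $(x-1)(x-2)(x-3)$. A quick check against the Stirling triangle (helper names ours):

```python
def stirling2(n, k):
    """Stirling numbers of the second kind via
    S(n, k) = k S(n-1, k) + S(n-1, k-1)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

s = [stirling2(n, 3) for n in range(12)]
# characteristic polynomial (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
assert all(s[n] == 6 * s[n - 1] - 11 * s[n - 2] + 6 * s[n - 3]
           for n in range(4, 12))
```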
\section{Conclusions and further research} \label{se:conclu}
In the first part of the paper we introduced MC-finiteness as a worthwhile topic in the study of integer sequences. We surveyed two methods of establishing MC-finiteness of such sequences.
In Theorem \ref{th:MC}, MC-finiteness follows from the existence of polynomial recurrence relations with coefficients in ${\mathbb Z}$. In Theorem \ref{th:BS-CMSOL}, MC-finiteness follows from a logical definability assumption in Monadic Second Order Logic augmented with modular counting quantifiers $\mathrm{CMSOL}$. We have compared the advantages and disadvantages of the methods, and we have used the logic method of Theorem \ref{th:BS-CMSOL} to give quick and transparent proofs of MC-finiteness.
In the second part of the paper we obtained similar results for locally restricted set partition functions like $B_{A,r}$. For this purpose the Specker-Blatter Theorem has to be extended to count labeled structures where a fixed number of special elements is in a certain configuration. In the case of $B_{A,r}$, $A$ is a set of natural numbers and $r$ is a natural number. $B_{A,r}$ counts the number of set partitions of $[n+r]$ where the first $r$ elements are in different blocks and $A$ indicates the possible cardinalities of the blocks of the partition. Such an extension is given in Theorem \ref{th:SB*}. A proof of a special case of this theorem is given in the appendix. The general case can be found in \cite{ar:FischerMakowsky2022}. Our new results are summarized in Tables \ref{table-2}--\ref{table-5}.
We did not investigate in depth whether MC-finiteness of the examples in Tables \ref{table-2}--\ref{table-5} can be established directly or by exhibiting suitable polynomial recurrence schemes, in order to apply Theorem \ref{th:MC}.
\begin{problem} \label{problem-Bessel}: Are the Bessel numbers $B(n)^B$ MC-finite? \end{problem}
\begin{problem} \label{problem-PRS}: Find systems of mutual polynomial recurrences for all the examples in Tables \ref{table-2}--\ref{table-4}. \end{problem}
Instead of set partition functions we can also count the number of, say, partial orders where \begin{enumerate}[(i)] \item $r$ special elements are in a particular $\mathrm{CMSOL}$ definable configuration, such as prescribed comparability and incomparability, and \item $A$ indicates the possible cardinalities of certain definable sets, such as antichains or maximal linearly ordered sets. \end{enumerate}
Our techniques allow us to show that counting such partial orders on $[n]$ results in MC-finite sequences.
In \cite{specker1990application} it is suggested that computing the number of quasi-orders on $[n]$ modulo $m$, i.e., $Q^m(n)$, is easier than finding the exact value of $Q(n)$.
Clearly, $S_{\phi}(n)$ is computable by brute force, given $\phi$ and $n$. In fact, for $\phi \in \mathrm{FOL}$ the problem is in $\sharp{\mathbf P}$. For $\phi \in \mathrm{CMSOL}$ it is in $\sharp{\mathbf{PH}}$, the analogue of $\sharp{\mathbf P}$ for problems definable in Second Order Logic, or equivalently, in the polynomial hierarchy. As noted in \cite[Proposition 11]{makowsky1996arity}, there are arbitrarily complex problems in ${\mathbf{PH}}$ already definable in $\mathrm{MSOL}$. However, $S^m_{\phi}(n)$ is in $MOD_m{\mathbf P}$, respectively in $MOD_m{\mathbf{PH}}$, the corresponding modular counting classes introduced in \cite{beigel1992counting}.
It is still open how exactly $MOD_m{\mathbf P}$ is related to $\sharp{\mathbf P}$.
In \cite{specker1990application}, it is mentioned that $S^m_{\phi}(n) = S_{\phi}(n) \mod{m}$ can be computed more efficiently, but no details are given. Only the special case of $Q^m(n)$ is given, where $Q(n)$ is the number of quasi-orders on $[n]$.
\begin{problem} \label{problem-1}
Given $\phi \in \mathrm{FOL}$ and $m$, find algorithms for computing $S_{\phi}(n)$ and $S^m_{\phi}(n)$ and determine upper and lower bounds for them. One may assume that $n$ is encoded in unary. \end{problem} \begin{problem} \label{problem-2} Same as Problem \ref{problem-1} for $\phi \in \mathrm{CMSOL}$. \end{problem} \begin{problem} Inspired by the remarks above, the following might be a worthwhile project: \label{problem-3} Investigate the complexity classes $\sharp{\mathbf{PH}}$ and $MOD_m{\mathbf{PH}}$ and their mutual relationships. \end{problem}
\section{List of OEIS-sequences} \label{se:oeis} \begin{description} \item[A000108] Catalan numbers $C(n)$. \item[A000110] Bell numbers $B(n)$. \item[A000453] Stirling numbers of the second kind $S(n,k)$. \item[A000587] Uppuluri-Carpenter numbers. \item[A000670] Number of linear quasi-orders (pre-orders) $LQ(n)$. \item[A000798] Number of quasi-orders (pre-orders) $Q(n)$. \item[A001035] Number of partial orders $P(n)$. \item[A001286] Lah numbers $Lah(n)$. \item[A001861] Bicolored partitions. \item[A005493] $r$-Bell numbers $B_{A,2}(n)$ for $r=2$. \item[A005494] $r$-Bell numbers $B_{A,3}(n)$ for $r=3$. \item[A006905] Number of transitive relations $T(n)$. \item[A086714] $a(0)=4, a(n+1) = {a(n) \choose 2}$. \item[A110040] Regular labeled graphs of degree $2$ and $3$. \item[A143494] $r$-Stirling numbers $S_{A,r}(n,k)$. \item[A143497] $r$-Lah numbers $Lah_{A,r}(n)$. \item[A232472] $r$-Fubini numbers $LQ_{A,r}$ for $r=2$. \item[A295193] Regular labeled graphs. \end{description}
\appendix
\section{More on MC-finiteness} \label{ap:mc-finite} \label{se:ProofMC}
\subsection{Polynomial recursive sequences}
A \emph{polynomial recursive sequence}~\cite{cadilhac2021polynomial} is a mutual recurrence in which the recurrence relation is a polynomial. That is, we define $d$ sequences in parallel by initial values $a_1(0),\dots,a_d(0)$ and the recurrence \[
a_i(n+1) = P_i(a_1(n),\dots,a_d(n)), \] where $P_i$ is a polynomial with rational coefficients. We will only consider recurrences for which $a_i(n) \in \mathbb{N}$ for all $i \in [d]$ and $n \ge 0$.
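For instance, the factorial sequence $n!$ is polynomial recursive via the two-dimensional system $a(n+1)=a(n)\,b(n)$, $b(n+1)=b(n)+1$ with $a(0)=b(0)=1$; a quick check (function name ours):

```python
from math import factorial

def prs_factorial(n):
    # a(n) = n!, b(n) = n + 1, both updated by polynomials
    a, b = 1, 1
    for _ in range(n):
        a, b = a * b, b + 1
    return a

assert all(prs_factorial(n) == factorial(n) for n in range(10))
```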
\begin{theorem}[\cite{cadilhac2021polynomial}] \label{thm:CMPPS20} Let $m$ be a natural number which is relatively prime to all denominators of coefficients of the defining polynomials $P_1,\dots,P_d$. Then the sequences $a_i(n) \bmod m$ are eventually periodic. \end{theorem} \begin{proof} Notice that \[
a_i(n+1) \bmod m = (P_i \bmod m)(a_1(n) \bmod m, \dots, a_d(n) \bmod m). \] Thus the function $P\colon \mathbb{Z}_m^d \to \mathbb{Z}_m^d$ given by \[
P(x_1,\dots,x_d) = ((P_1 \bmod m)(x_1,\dots,x_d),\dots, (P_d \bmod m)(x_1,\dots,x_d)) \] satisfies \[
(a_1(n+1) \bmod m,\dots,a_d(n+1) \bmod m) = P(a_1(n) \bmod m,\dots,a_d(n) \bmod m). \] Since $\mathbb{Z}_m^d$ is finite, if we start at $(a_1(0) \bmod m, \dots, a_d(0) \bmod m)$ and repeatedly apply $P$, we will eventually enter a cycle. \end{proof}
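The argument is effective: since $\mathbb{Z}_m^d$ has only $m^d$ states, recording the visited state vectors finds the preperiod and the period within $m^d$ steps. A sketch, with an illustrative system of our choosing:

```python
def prs_mod_cycle(polys, init, m):
    """Return (preperiod, period) of a polynomial recursive sequence
    reduced mod m, by iterating the state map on Z_m^d until a repeat."""
    state = tuple(a % m for a in init)
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = tuple(p(*state) % m for p in polys)
    return seen[state], len(seen) - seen[state]

# d = 2 example: a(n+1) = a(n) b(n) + 1, b(n+1) = b(n)^2
pre, per = prs_mod_cycle((lambda a, b: a * b + 1,
                          lambda a, b: b * b), init=(0, 2), m=10)
# the cycle must close within m^d = 100 steps
assert per >= 1 and pre + per <= 100
```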
This result raises the following question: what happens for other $m$? It turns out that the theorem fails in general for such $m$.
\begin{theorem} \label{thm:main} Consider the following sequence $A086714$:
\[
a(n+1) = \binom{a(n)}{2}, \quad a(0) = 4. \] The sequence $a(n) \bmod 2$ is not eventually periodic. \end{theorem}
The same result holds (with the same proof) for any $a(0) \ge 4$, as well as for any recurrence of the form $a(n+1) = (a(n)+b)(a(n)+c)/2$, as long as $b,c$ have different parities and $a(0)$ is chosen so that $a(n) \to \infty$.
\subsection{Proof of Theorem \ref{thm:main}}
Let $\beta(n) = a(n) \bmod 2$. It is not hard to check that the string $\beta(n) \ldots \beta(n + k - 1)$ depends only on $a(n) \bmod 2^k$. It turns out that the converse holds as well: we can determine $a(n) \bmod 2^k$ from $\beta(n) \ldots \beta(n + k - 1)$.
\begin{lemma} \label{lem:bijection} Let $a_r,\beta_r$ be defined as above, except with the initial condition $a_r(0) = r$.
For all $k \ge 1$, the function \[
\Phi_k(r) = \beta_r(0) \ldots \beta_r(k-1) \] is a bijection between $\{0,\dots,2^k-1\}$ and $\{0,1\}^k$. \end{lemma}
For example, if $k = 3$, we get the following bijection: \begin{align*} \Phi_3(0) &= 000 & \Phi_3(1) &= 100 & \Phi_3(2) &= 010 & \Phi_3(3) &= 111 \\ \Phi_3(4) &= 001 & \Phi_3(5) &= 101 & \Phi_3(6) &= 011 & \Phi_3(7) &= 110 \end{align*}
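This table, and the bijectivity of $\Phi_k$ for small $k$, can be checked by direct computation (function name ours):

```python
from math import comb

def phi(k, r):
    """Phi_k(r): first k parity bits of the orbit a -> C(a, 2) from a = r."""
    bits, a = [], r
    for _ in range(k):
        bits.append(a % 2)
        a = comb(a, 2)
    return ''.join(map(str, bits))

assert {r: phi(3, r) for r in range(8)} == {
    0: '000', 1: '100', 2: '010', 3: '111',
    4: '001', 5: '101', 6: '011', 7: '110'}
# Phi_k hits all 2^k bit strings, i.e. it is a bijection, for small k:
for k in range(1, 8):
    assert len({phi(k, r) for r in range(2 ** k)}) == 2 ** k
```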
\begin{proof} The proof is by induction on $k$. The result is clear when $k = 1$, so suppose $k > 1$.
The first bit of $\Phi_k(r)$ is the parity of $r$, and the remaining bits are $\Phi_{k-1}(s)$, where $s = \binom{r}{2} \bmod 2^{k-1}$. To complete the proof, we show that the mapping $r \mapsto s$ is $2$-to-$1$, with the two pre-images of every $s$ having different parity.
Indeed, suppose that $\binom{a}{2} \equiv \binom{b}{2} \pmod{2^{k-1}}$ for $a,b \in \{0,\dots,2^k-1\}$. Then $a(a-1) \equiv b(b-1) \pmod{2^k}$, and so $2^k \mid a(a-1) - b(b-1) = (a-b)(a+b-1)$.
If $a,b$ have the same parities then $a+b-1$ is odd and so $2^k \mid a-b$. Since $a,b \in \{0,\dots,2^k-1\}$, in this case $a = b$.
If $a,b$ have different parities then $a-b$ is odd and so $2^k \mid a+b-1$, and so $b = 1-a \bmod 2^k$ is uniquely defined, and has a parity different from $a$. \end{proof}
We can now prove Theorem \ref{thm:main}. First, notice that ${a \choose 2} > a$ for $a \geq 4$, and so $a(n) \to \infty$. Now suppose that the sequence $\beta$ is ultimately periodic, say with period $\beta(N), \dots, \beta(N+\ell-1)$. Lemma \ref{lem:bijection} implies that for every $k \ge 1$, the sequence $a(n) \bmod 2^k$ has period $a(N) \bmod 2^k, \dots, a(N+\ell-1) \bmod 2^k$, and in particular, $a(N) \equiv a(N+\ell) \pmod{2^k}$. Choosing $k$ such that $2^k > a(N+\ell)$, we reach a contradiction.
\subsection{Normal sequences} Let $s(n)$ be an integer sequence, and $b \in {\mathbb N}^+$. The sequence $s^b(n) = s(n) \bmod{b}$ is {\em normal} if, when we chunk it into substrings of length $\ell \ge 1$, each of the $b^\ell$ possible strings of $[b]^\ell$ appears in $s^b(n)$ with equal limiting frequency. The sequence $s(n)$ is {\em absolutely normal} if $s^b(n)$ is normal for every $b$. The sequence $s^b(n) = s(n) \bmod{b}$ can be viewed as a real number $r_b$ written in base $b$. A classical theorem from 1922 by E. Borel says that almost all reals are absolutely normal, \cite{everest2003recurrence}. The proposition below shows that MC-finite integer sequences are very rare.
Let $PR_b$ be the set of integer sequences $s^b(n)$ with $s^b(n) = s(n) \bmod{b}$ for some integer sequence $s(n)$. $PR_b$ is the projection of all integer sequences to sequences over ${\mathbb Z}_b$. We think of $PR_b$ as a set of reals with the usual topology and its Lebesgue measure. Let $UP_b \subseteq PR_b$ be the set of sequences $s^b(n) \in PR_b$ which are ultimately periodic.
\begin{proposition} \label{pr:normal} \begin{enumerate}[(i)] \item Almost all reals are absolutely normal. \item $s(n)$ is MC-finite iff for every $b \in {\mathbb N}^+$ the sequence $s^b(n)$ is ultimately periodic. \item If $s^b(n)$ is normal for some $b$, then $s(n)$ is not MC-finite. \item $UP_b \subseteq PR_b$ has measure $0$. \end{enumerate} \end{proposition} Proving that a specific sequence is normal is usually difficult.
Here is a challenge: \begin{conjecture} The binary sequence $\beta(n) = a(n) \bmod 2$ from Theorem \ref{thm:main} is normal with $b=2$. \end{conjecture}
\section{Eliminating hard-wired constants} \label{se:constants}
Let $\mathfrak{S}_r(n) =( [r+n], a_1, \ldots , a_r, E)$ be the structures on $[r+n]$ where $E$ is an equivalence relation and the $r$ elements $a_1, \ldots , a_r$ are in different equivalence classes. $S_r(n)$ counts the number of such structures on $[r+n]$.
\ifmargin \marginpar{s-elim} \else\fi
Let $\mathfrak{E}_r(n)$ be a structure on $[n]$ which consists of the following: \begin{enumerate}[(i)] \item $E(x,y)$ is an equivalence relation on $[n]$; \item There are $r$ unary relations $U_1, \ldots, U_r$ on $[n]$; \item The sets $U_i$ are pairwise disjoint; \item Each $U_i$ is either empty or consists of exactly one equivalence class of $E$. \end{enumerate} Let $E_r(n)$ be the number of such structures on $[n]$.
\begin{lemma} For every $r,n \in {\mathbb N}^+$ there is a bijection $f$ between the structures $\mathfrak{E}_r(n)$ on $[n]$ and the structures $\mathfrak{S}_r(n)$ on $[r+n]$, hence we have $E_r(n) = S_r(n)$. \end{lemma} \begin{proof} Given a structure $\mathfrak{S}_r(n)$ we define $f(\mathfrak{S}_r(n))$ as follows: \begin{enumerate}[(i)] \item The universe of $f(\mathfrak{S}_r(n))$ is $\{r+1, \ldots, r+n\}$. \item If for $i \leq r$ the set $\{i\}$ is a singleton equivalence class, we put $U_i =\emptyset$.
If there is an equivalence class $E_i$ which strictly contains
$i$ we put $U_i = E'_i = E_i - \{i\}$. \item $E'$ is the equivalence relation induced by $E$ on $\{r+1, \ldots, r+n\}$. \end{enumerate} Conversely, given a structure $\mathfrak{E}_r(n) = ([n], E, U_1, \ldots, U_r)$ we define $g(\mathfrak{E}_r(n))$ as follows: \begin{enumerate}[(i)] \item The universe of $g(\mathfrak{E}_r(n))$ is $[n+r]$ and the equivalence relation $E'$ is defined by specifying its equivalence classes. \item If $U_i$ is empty for some $i \leq r$, the singleton $\{n+i\}$ is an equivalence class of $E'$.
If $U_i$ is not empty, then the equivalence class of $E'$ which contains $n+i$ is $U_i \cup \{n+i\}$. \item If $C$ is an equivalence class of $E$ such that $U_i \neq C$ for all $i \leq r$, then $C$ is an equivalence class of $E'$. \end{enumerate} It is now easy to check that $f$ and $g$ are bijections and that, up to the obvious relabeling of the universes, $g$ is the inverse of $f$. \end{proof}
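For small parameters the lemma can also be verified by brute force. The sketch below (the function names and the enumeration strategy are ours) enumerates set partitions directly and compares the two counts.

```python
# Brute-force check of E_r(n) = S_r(n) for small n, r.
from itertools import product

def set_partitions(elems):
    # all partitions of a list into nonempty blocks
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def S_r(n, r):
    # partitions of [r+n] in which the elements 1..r are pairwise inequivalent
    return sum(1 for part in set_partitions(list(range(1, r + n + 1)))
               if all(sum(1 for x in block if x <= r) <= 1 for block in part))

def E_r(n, r):
    # structures on [n]: a partition plus an injective partial assignment of
    # the indices 1..r to blocks (each U_i is empty or a whole block)
    cnt = 0
    for part in set_partitions(list(range(1, n + 1))):
        k = len(part)
        for choice in product(range(-1, k), repeat=r):  # -1 means U_i empty
            picked = [c for c in choice if c >= 0]
            if len(picked) == len(set(picked)):
                cnt += 1
    return cnt

for n in range(5):
    for r in range(4):
        assert S_r(n, r) == E_r(n, r)
```

For $r=0$ both sides reduce to the Bell number $B(n)$, as expected.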
\begin{remarks} \begin{enumerate}[(i)] \item Clearly the class of structures $\mathfrak{E}_r(n)$ as defined here is $\mathrm{FOL}$-definable. Hence we can apply the Specker-Blatter Theorem and conclude that $S_r(n)$ is MC-finite.
\item If $A$ is ultimately periodic then $S_{A,r}(n)$ is also MC-finite. To see this we note that for $S_{A,r}(n)$ all the equivalence classes $C$ satisfy $|C| \in A$. This means that in a structure $\mathfrak{E}_{A,r}(n)$
the equivalence classes $C$ satisfy $|C| \in A$, if they do not contain a $U_i$, and $|C| \in A'$ where $A' = \{ a-1: a \in A \}$, otherwise. If $A$ is ultimately periodic, so is $A'$ and both are definable in $\mathrm{CMSOL}$. \item For the Lah numbers $L_{r}(n)$ and $L_{A,r}(n)$ we proceed likewise by replacing the equivalence relation by a linear quasi-order.
For every $i$ we add two further unary relations and the appropriate conditions in order to take care of the ordering of the special elements.
\ifskip\else
For every $i$ one can add two unary relations, one for the $x$ with $x \leq a_i$ and one for the $x$ with $a_i \leq x$, and add the corresponding restrictions: $U_{i,\leq}(x)$ implies $U_{i-1,\leq}(x)$; $U_{i-1,\geq}(x)$ implies $U_{i,\geq}(x)$; every $x$ satisfies $U_{i,\leq}(x)$ or $U_{i,\geq}(x)$; and $U_{i,\geq}(x)$ together with $U_{i,\leq}(y)$ implies the relation between $x$ and $y$. \fi
Hence both $L_{r}(n)$ and $L_{A,r}(n)$ are MC-finite. \end{enumerate} \end{remarks}
\section{Proof of Theorem \ref{th:FM} and its applications} \label{se:c-finite} \ifmargin \marginpar{s-c-finite} \else\fi
In order to prove Theorem \ref{th:FM} we use Theorem \ref{th:FMapp} below. For this we first have to introduce the definition of $\mathrm{CMSOL}$-definable graph polynomials.
\ifskip\else \subsection{Iteratively constructed sequences of graphs}
In this subsection we follow \cite{fischer2008linear}. \begin{definition} A $k$-colored graph is a graph $G$ together with $k$ sets $V_1,V_2,...,V_k\subseteq V(G)$ such that $V_i \cap V_j =\emptyset $ for $i\neq j$. A basic operation on $k$-colored graphs is one of the following: \begin{itemize} \item $Add_i$: add a new vertex of color $i$ to $G$. \item $Recolor_{i,j}$: recolor all vertices with color $i$ to color $j$ in $G$. \item $Uncolor_{i}$: remove the color of all vertices with color $i$. Uncolored vertices cannot be recolored again. \item $AddEdges_{i,j}$: add an edge between every vertex with color $i$ and every vertex with color $j$ in $G$. \item $DeleteEdges_{i,j}$: delete all edges between vertices with color $i$ and vertices with color $j$ from $G$.
\end{itemize} A unary operation $F$ on graphs is $\mathrm{MSOL}$-elementary if $F$ is a finite composition of basic operations on $k$ colored graphs (with $k$ fixed). We say that a sequence of graphs $\{G_n\}$ is iteratively constructed if it can be defined by fixing a graph $G_0$ and defining $G_{n+1}=F(G_n)$ for an $\mathrm{MSOL}$-elementary operation $F$. \end{definition}
\begin{remark} In \cite{fischer2008linear} there was an additional operation allowed \begin{itemize} \item $Duplicate$: Add a disjoint copy of $G$ to $G$, \end{itemize} assuming erroneously that $Duplicate$ behaves like a unary operation on graphs. Although it looks like a unary operation on graphs, the sequence of graphs $$ G_0 = E_1, G_{n+1} = Duplicate(G_n) $$ grows too fast and does not fit the framework that the authors have envisaged in \cite{fischer2008linear}. \end{remark} \fi
\subsection{$\mathrm{CMSOL}$-definable graph polynomials} \begin{definition} Let ${\mathbb Z}$ be the ring of integers. We consider polynomials in ${\mathbb Z}[\overline{x}]$. For a $\mathrm{CMSOL}$-formula for graphs $\phi(\overline{v})$ with $\overline{v}= (v_1, \ldots , v_s)$, define $card_G(\phi)$ to be the cardinality of the subset of $V(G)^s$ defined by $\phi$. The extended $\mathrm{CMSOL}$ graph polynomials are defined recursively. We first define the {\em extended $\mathrm{CMSOL}$-monomials}. Let $\phi(\overline{v}) \in \mathrm{CMSOL}$. An extended $\mathrm{CMSOL}$-monomial is a term of one of the following forms: \begin{itemize} \item $x^{card_G(\phi)}$, where $x$ is one of the variables of $\overline{x}$. \item $x_{(card_G(\phi))}$, i.e., the falling factorial of $x$. \item ${x \choose card_G(\phi)}$. \item $\prod_{\overline{v} \in V(G)^s:\phi(\overline{v})}t(\overline{x})$, where $t(\overline{x})$ is a term in $\mathbb{Z}[\overline{x}]$. \end{itemize} The {\em extended $\mathrm{CMSOL}$ graph polynomials} are obtained from the monomials by closing under finite addition and multiplication. Furthermore, they are closed under summation over subsets of $V(G)$ of the form $$\sum_{U:\phi(U)} t,$$ where $\phi$ is a $\mathrm{CMSOL}$-formula with free set variable $U$, and under multiplication over elements of $V(G)^s$ of the form $$\prod_{\overline{v} \in V(G)^s:\phi(\overline{v})}t(\overline{x}).$$ \end{definition}
\begin{theorem}[Theorem 1 of \cite{fischer2008linear}] \label{th:FMapp} Let $F$ be an elementary operation on graphs, $\{G_n:n\in \mathbb{N}\}$ an $F$-iterated sequence of graphs, and $P$ an extended $\mathrm{CMSOL}$-definable graph polynomial. Then the sequence $\{P(G_n)\}$ is C-finite over $\mathbb{Z}[\overline{x}]$, i.e., there exist polynomials $p_1,p_2,...,p_k\in \mathbb{Z}[\overline{x}]$ such that for sufficiently large $n$, $$ P(G_{n+k+1})=\sum_{i=1}^k p_iP(G_{n+i}). $$ \end{theorem}
This proves Theorem \ref{th:FM}.
\subsection{Proofs of C-Finiteness}
Now we give the detailed proofs of Theorem \ref{th:c-finite}.
\begin{proposition} \label{R1} Fix $k_0\in \mathbb{N}$. Then $S(n,k_0)$ is a C-finite sequence. \end{proposition} \begin{proof} Let $P$ be the graph property of cliques with at least one vertex, i.e., $P=\{K_n:n\geq 1\}$, and define $G_n=K_n$. Note that a $P$-coloring of $G_n$ with $k_0$ colors is a partition of $V(G_n)$ into exactly $k_0$ non-empty color classes, so $H_P(G_n,k_0)=S(n,k_0)$. We want to apply the Fischer-Makowsky theorem. First, note that the sequence $G_n$ is iteratively constructible, see Example \ref{exIterativelyConstructed} or \cite[Proposition 2]{fischer2008linear}. Second, $H_P$ is an extended $\mathrm{CMSOL}$ graph polynomial, so we can use Theorem \ref{th:FMapp}. \end{proof}
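The C-finiteness asserted in Proposition \ref{R1} can be checked numerically for small $k_0$. Using the well-known generating function $\sum_n S(n,k)x^n = x^k/\prod_{i=1}^{k}(1-ix)$, for $k_0=3$ the recurrence has characteristic polynomial $(x-1)(x-2)(x-3)$; the sketch below (our own code) verifies this.

```python
# Verify a C-finite recurrence for S(n,3): a(n) = 6a(n-1) - 11a(n-2) + 6a(n-3).
def stirling2_column(k, n_max):
    """S(n,k) for n = 0..n_max via S(n,j) = j*S(n-1,j) + S(n-1,j-1)."""
    col = [[1] + [0] * n_max]          # S(n,0)
    for j in range(1, k + 1):
        prev = col[-1]
        cur = [0] * (n_max + 1)
        for n in range(1, n_max + 1):
            cur[n] = j * cur[n - 1] + prev[n - 1]
        col.append(cur)
    return col[k]

a = stirling2_column(3, 20)
for n in range(4, 21):
    assert a[n] == 6 * a[n - 1] - 11 * a[n - 2] + 6 * a[n - 3]
```

The same check works for any fixed $k_0$, with characteristic roots $1,2,\dots,k_0$.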
\begin{proposition} \label{R2} Fix $k_0\in \mathbb{N}$. Then $S_r(n,k_0)$ is a C-finite sequence. \end{proposition} \begin{proof} Let $P$ be the graph property of edgeless graphs with at least one vertex, i.e., $P=\{\overline{K_n}:n\geq 1\}$, and define $G_n=K_r\cup \overline{K_n}$. Note that a $P$-coloring of $G_n$ with $k_0+r$ colors is a partition of $V(G_n)$ into exactly $k_0+r$ non-empty color classes, such that every vertex in $V(K_r)\subseteq V(G_n)$ is in a different color class, so $H_P(G_n,k_0+r)=S_r(n,k_0)$. We want to apply the Fischer-Makowsky theorem. First, note that the sequence $G_n$ is iteratively constructible: put $G_0=K_r$ and, given $G_n$, construct $G_{n+1}$ by adding an isolated vertex. Second, $H_P$ is again an extended $\mathrm{CMSOL}$ graph polynomial, so we can use Theorem \ref{th:FMapp}. \end{proof}
\begin{proposition} \label{R3} Let $A\subseteq \mathbb{N}$, and $k_0\in \mathbb{N}$. Then $S_A(n,k_0)$ is a C-finite sequence if and only if $A$ is ultimately periodic. \end{proposition} \begin{proof} First, note that $S_A(n,1)=1$ iff $n\in A$. Therefore, if $A$ is not ultimately periodic, $S_A(n,1)$ is
not C-finite. On the other hand, assume $A$ is ultimately periodic. Let $P$ be the graph property of cliques with vertex size in $A$, i.e., $P=\{K_n:n\in A\}$, and define $G_n=K_n$. Note that a $P$-coloring of $G_n$ with $k_0$ colors is a partition of $V(G_n)$ into exactly $k_0$ non-empty color classes, each color class of size in $A$, so $H_P(G_n,k_0)=S_A(n,k_0)$. We want to apply the Fischer-Makowsky theorem. As before, the sequence $G_n$ is iteratively constructible, and $H_P$ is again an extended $\mathrm{CMSOL}$ graph polynomial, so we can use Theorem \ref{th:FMapp}.
\end{proof}
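For a concrete ultimately periodic instance (our own example), take $A$ to be the set of positive even numbers and $k_0=2$. One checks that $S_A(n,2)=0$ for odd $n$ and $S_A(n,2)=2^{n-2}-1$ for even $n\ge 4$, so the sequence satisfies a linear recurrence with characteristic roots $\pm 1, \pm 2$:

```python
# S_A(n,2) with A = {2,4,6,...}: partitions of [n] into two even-size blocks.
from math import comb

def S_A_2(n):
    # the block containing element 1 has size s; choose its other s-1 members
    return sum(comb(n - 1, s - 1)
               for s in range(1, n)
               if s % 2 == 0 and (n - s) % 2 == 0)

a = [S_A_2(n) for n in range(16)]
# characteristic polynomial (x^2 - 1)(x^2 - 4) = x^4 - 5x^2 + 4:
for n in range(5, 16):
    assert a[n] == 5 * a[n - 2] - 4 * a[n - 4]
```

The recurrence only holds from $n=5$ on, in accordance with "ultimately" C-finite behavior.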
\begin{proposition} \label{R4} Let $A\subseteq \mathbb{N}$, and $k_0\in \mathbb{N}$. Then $S_{A,r}(n,k_0)$ is a C-finite sequence if and only if $A$ is ultimately periodic. \end{proposition} \begin{proof} As in Proposition \ref{R3}, $S_A(n,1)=1$ iff $n\in A$, so if $A$ is not ultimately periodic, $S_A(n,1)$ is not C-finite.
Assume now that $A$ is ultimately periodic. Let $P$ be the graph property of edgeless graphs with vertex size in $A$, i.e., $P=\{\overline{K_n}:n\in A\}$, and define $G_n=K_r\cup \overline{K_n}$. Note that a $P$-coloring of $G_n$ with $k_0+r$ colors is a partition of $V(G_n)$ into exactly $k_0+r$ non-empty color classes with sizes in $A$, such that every vertex in $V(K_r)\subseteq V(G_n)$ is in a different color class, so $H_P(G_n,k_0+r)=S_{A,r}(n,k_0)$. We want to apply the Fischer-Makowsky theorem. As before, the sequence $G_n$ is iteratively constructible, and $H_P$ is again an extended $\mathrm{CMSOL}$ graph polynomial, so we can use Theorem \ref{th:FMapp}. \end{proof}
\section{An explicit computation of $S_A(n,k)$} \label{se:explicit} \ifmargin \marginpar{s-explicit} \else\fi
Let $A\subseteq \mathbb{N}$. $S_A(n,k)$ counts the number of partitions of $[n]$ into $k$ sets with cardinalities in $A$.
We shall compute $S_A(n,k)$ explicitly. For $A = {\mathbb N}^+$ this also gives an alternative way of computing $S(n,k)$, the Stirling numbers of the second kind. The method is reminiscent of \cite[Theorem 8.6]{charalambides2018enumerative} or, in very different notation, of \cite[Chapter 1, Exercise 45]{bk:Stanley86}.
We introduce some suitable notation.
Let $V(A,k)$ be the set of non-decreasing $k$-tuples $(l_1, \ldots, l_k)$ of elements of $A$ with $\sum_{i=1}^k l_i =n$, i.e. $$ V(A,k)=\{(l_1,l_2,...,l_k)\in A^k:0<l_1\leq l_2\leq...\leq l_k,\sum_{i=1}^k l_i=n\}. $$ For $(l_1,l_2,...,l_k)\in V(A,k)$ define $g(m;l_1,l_2,...,l_k)$ to be the number of times $m$ appears in the $k$-tuple $(l_1,l_2,...,l_k)$, and $$f(l_1,l_2,...,l_k)=\prod_{m\in\{l_1,l_2,...,l_k\}}g(m;l_1,l_2,...,l_k)! ,$$ where the product runs over the distinct values occurring in the tuple. Next we define inductively: $c_1 =n$, $c_{i+1} = c_i - l_i$, hence $c_i = n - \sum_{j=1}^{i-1} l_j$.
\begin{theorem} \label{th:explicit} Let $A\subseteq \mathbb{N}$. Then $$ S_A(n,k)= \sum_{(l_1,l_2,...,l_k)\in V(A,k)}\frac{1}{f(l_1,l_2,...,l_k)} \prod_{i=1}^k {c_i \choose l_i}
$$ \end{theorem} \begin{proof} To partition $[n]$ into $k$ sets with cardinalities in $A$, we proceed as follows: First, we select the cardinalities of the $k$ sets. This corresponds to picking an element $(l_1,l_2,...,l_k) \in V(A,k)$. To construct a partition of $[n]$, we choose $l_1$ elements from the $c_1=n$ available elements, then $l_2$ elements from the remaining $c_2=n-l_1$ elements, etc. Finally, we divide by $f(l_1,l_2,...,l_k)$ because choosing blocks of equal cardinality in a different order yields the same partition. \end{proof}
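Theorem \ref{th:explicit} is straightforward to implement. As a sanity check (a sketch, our own code), with $A=\{1,\dots,n\}$ the formula recovers the Stirling numbers of the second kind.

```python
# Explicit formula of the theorem vs. the standard Stirling recurrence.
from math import comb, factorial
from itertools import combinations_with_replacement
from collections import Counter

def S_A(n, k, A):
    """Sum over nondecreasing k-tuples (l_1 <= ... <= l_k), l_i in A, summing to n."""
    total = 0
    for ls in combinations_with_replacement(sorted(A), k):
        if sum(ls) != n:
            continue
        f = 1
        for mult in Counter(ls).values():
            f *= factorial(mult)          # f(l_1,...,l_k)
        prod, c = 1, n                    # c_1 = n, c_{i+1} = c_i - l_i
        for l in ls:
            prod *= comb(c, l)
            c -= l
        total += prod // f                # division is exact
    return total

def stirling2(n, k):
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][k]

for n in range(1, 9):
    for k in range(1, n + 1):
        assert S_A(n, k, range(1, n + 1)) == stirling2(n, k)
```

Restricting $A$ (e.g., to even numbers) gives $S_A(n,k)$ directly from the same function.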
\end{document}
\begin{document}
\title{\textbf{Scheme for constructing graphs associated with stabilizer quantum codes}} \author{Carlo Cafaro$^{1,2,3}$, Damian Markham$^{4}$, and Peter van Loock$^{2}$} \affiliation{$^{1}$Max-Planck Institute for the Science of Light, Gunther-Scharowsky-Str.1/Bau 26, 91058 Erlangen, Germany } \affiliation{$^{2}$Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz, Germany} \affiliation{$^{3}$Department of Mathematics, Clarkson University, 8 Clarkson Ave, Potsdam, NY 13699-5815, USA} \affiliation{$^{4}$CNRS LTCI, D\'{e}partement Informatique et R\'{e}seaux, Telecom ParisTech, 23 avenue d'Italie, CS 51327, 75214 Paris CEDEX 13, France}
\begin{abstract} We propose a systematic scheme for the construction of graphs associated with binary stabilizer codes. The scheme is characterized by three main steps: first, the stabilizer code is realized as a codeword-stabilized (CWS) quantum code; second, the canonical form of the CWS\ code is uncovered; third, the input vertices are attached to the graphs. To check the effectiveness of the scheme, we discuss several graphical constructions of various useful stabilizer codes characterized by single and multi-qubit encoding operators. In particular, the error-correcting capabilities of such quantum codes are verified in graph-theoretic terms as originally advocated by Schlingemann and Werner. Finally, possible generalizations of our scheme for the graphical construction of both (stabilizer and nonadditive) nonbinary and continuous-variable quantum codes are briefly addressed.
\end{abstract}
\pacs{03.67.-a (quantum information)} \maketitle
\section{Introduction}
Classical graphs \cite{die, west, wilson} are closely related to quantum error correcting codes (QECCs) \cite{gotty}. The first construction of QECCs based upon the use of graphs and finite Abelian groups appears in \cite{werner} and is provided by Schlingemann and Werner (SW-work). However, while in \cite{werner} it is proved that all codes constructed from graphs are stabilizer codes, it remains unclear how to embed the usual stabilizer code constructions into the proposed graphical scheme. Therefore, although necessary and sufficient conditions are uncovered for the graph such that the resulting code corrects a certain number of errors, the power of the graphical approach to quantum coding for stabilizer codes cannot be fully exploited unless this embedding issue is resolved. In \cite{dirk}, Schlingemann (S-work) clarifies this issue by establishing that each quantum stabilizer code (both binary and nonbinary) could be realized as a graph code and vice-versa. Almost at the same time, inspired by the work presented in \cite{werner}, the equivalence of graphical quantum codes and stabilizer codes is also established by Grassl et \textit{al}. in \cite{markus}. Despite being very important, the works in \cite{dirk} and \cite{markus} still suffer from the fact that no systematic scheme for constructing a graph of a stabilizer code or the stabilizer of a graphical quantum code is available. The solution of this point is especially important in view of the fact that although \ any stabilizer code over a finite field has an equivalent representation as a graphical quantum code, unfortunately, this representation is not unique. Furthermore, the chosen representation does not reflect all the properties of the quantum code. 
A crucial step forward for the description and understanding of the interplay between properties of graphs and stabilizer codes is achieved thanks to the introduction of the notion of graph states (and cluster states, \cite{hans}) into the graphical construction of QECCs as presented by Hein et \textit{al}. in \cite{hein}. In this last work, it is shown how graph states are in correspondence to graphs and special focus is devoted to the question of how the entanglement in a graph state is related to the topology of its underlying graph. In \cite{hein}, it is also pointed out that codewords of various QECCs could be regarded as special instances of graph states and criteria for the equivalence of graph states under local unitary transformations entirely on the level of the underlying graphs are presented. Similar findings are uncovered by Van den Nest et \textit{al}. in \cite{bart} (VdN-work) where a constructive scheme showing that each stabilizer state is equivalent to a graph state under local Clifford operations is discussed. Thus, the main finding of Schlingemann in \cite{dirk} is re-obtained in \cite{bart} for the special case of binary quantum states. Most importantly, in \cite{bart}, an algorithmic procedure for transforming any binary quantum stabilizer code into a graph code appears. However, to the best of our knowledge, nobody has fully and jointly exploited the results provided by either Schlingemann in \cite{dirk} or Van den Nest et \textit{al}. in \cite{bart} to provide a more systematic procedure for constructing graphs associated with arbitrary binary stabilizer codes with special emphasis on the verification of their error-correcting capabilities. We emphasize that this last point constitutes one of the original motivations for introducing the concept of a graph into quantum error correction (QEC) \cite{werner}.
The CWS quantum code formalism presents a unifying approach for constructing both additive and nonadditive QECCs, for both binary \cite{cross} (CWS-work) and nonbinary states \cite{chen}. Furthermore, every CWS code in its canonical form can be fully characterized by a graph and a classical code. In particular, any CWS code is locally Clifford equivalent to a CWS code with a graph state stabilizer and word operators consisting only of $Z$s \cite{cross}. Since the notions of stabilizer codes, graph codes and graph states can be recast into the CWS formalism, it seems natural to investigate the graphical depiction of stabilizer codes as originally thought by Schlingemann and Werner within this generalized framework where stabilizer codes are realized as CWS\ codes. Proceeding along this line of investigation, we shall observe that the notion of graph state in QEC as presented in \cite{hein} emerges naturally. Furthermore, the algorithmic procedure for transforming any (binary) quantum stabilizer code into a graph code advocated in \cite{bart} can be exploited and jointly used with the results in \cite{dirk} where the notions of both coincidence and adjacency matrices of a classical graph are introduced. For the sake of completeness, we point out that the CWS formalism has already been employed in the literature for the graphical construction of both binary \cite{yu1} and nonbinary \cite{yu2} (both additive/stabilizer and nonadditive) QECCs. For instance in \cite{yu1}, regarding stabilizer codes as CWS codes and employing a graphical approach to quantum coding, a classification of all the extremal stabilizer codes up to eight qubits is presented, together with the construction of the optimal $\left( \left( 10\text{, }24\text{, }3\right) \right) $ code and a family of $1$-error detecting nonadditive codes with the highest encoding rate known so far. 
With a leap of imagination, a graphical quantum computation based directly on graphical objects is also envisioned in \cite{yu1}. Indeed, this vision became recently more realistic in the work of Beigi et \textit{al}. \cite{beigi}. Here, essentially within the CWS framework, a systematic method for constructing both binary and nonbinary concatenated quantum codes based on graph concatenation is developed. Graphs representing the inner and the outer codes are concatenated via a simple graph operation (the so-called generalized local complementation, \cite{beigi}). Despite their very illuminating findings, it is emphasized in \cite{beigi} that the elusive role played by graphs in QEC is still not well-understood. In neither \cite{yu1} nor \cite{beigi} are the authors concerned with the joint exploitation of the results provided by either Van den Nest et \textit{al.} in \cite{bart} (an algorithmic procedure for transforming any binary quantum stabilizer code into a graph code) or Schlingemann in \cite{dirk} (the use of both the coincidence and adjacency matrices of a classical graph in QEC) in order to provide a more systematic procedure for constructing graphs associated with arbitrary binary stabilizer codes, with special emphasis on the verification of their error-correcting capabilities which, as pointed out earlier, constituted a major driving motivation for the introduction of graphs in QEC \cite{werner}. Instead, we aim here at investigating such unexplored topics and hope to further advance our understanding of the role played by classical graphs in quantum coding.
In this article, we propose a systematic scheme for the construction of graphs with both input and output vertices associated with arbitrary binary stabilizer codes. The scheme is characterized by three main steps: first, the stabilizer code is realized as a CWS quantum code; second, the canonical form of the CWS\ code is uncovered; third, the input vertices are attached to the graphs with only output vertices. To check the effectiveness of the scheme, we discuss several graphical constructions of various useful stabilizer codes characterized by single and multi-qubit encoding operators. In particular, the error-correcting capabilities of such quantum codes are verified in graph-theoretic terms as originally advocated by Schlingemann and Werner. Finally, possible generalizations of our scheme for the graphical construction of both (stabilizer and nonadditive) nonbinary and continuous-variable quantum codes are briefly addressed.
The layout of the article is as follows. In Section II, we introduce some preliminary material. First, the notions of graphs, graph states and graph codes are presented. Second, local Clifford transformations on graph states and local complementations on graphs are briefly described. Third, the CWS quantum codes formalism is briefly explained. In Section III, we re-examine some basic ingredients of the Schlingemann-Werner work (SW-work, \cite{werner}), the Schlingemann work (S-work, \cite{dirk}) and, finally, the Van den Nest et al. work (VdN-work, \cite{bart}). We focus on those aspects of these works that are especially important for our systematic scheme. In Section IV, we formally describe our scheme and, for the sake of clarity, apply it to the graphical construction of the Leung et al. four-qubit quantum code for the error correction of single amplitude damping errors \cite{debbie}. Finally, concluding remarks and a brief discussion on possible extensions of our schematic graphical construction to both (stabilizer and nonadditive) nonbinary and continuous-variable quantum codes appear in Section V.
Several explicit constructions of graphs for various stabilizer codes characterized by either single or multi-qubit encoding operators are worked out in the Appendices. Specifically, we discuss the graphical construction of the following quantum codes: the three-qubit repetition code, the perfect $1$-erasure correcting four-qubit code, the perfect $1$-error correcting five-qubit code, $1$-error correcting six-qubit quantum degenerate codes, the CSS seven-qubit stabilizer code, the Shor nine-qubit stabilizer code, the Gottesman $2$-error correcting eleven-qubit code, $\left[ \left[ 4\text{, }2\text{, }2\right] \right]$ stabilizer codes, and, finally, the Gottesman $\left[ \left[ 8\text{, }3\text{, }3\right] \right]$ stabilizer code.
\section{From graph theory to the CWS formalism}
In this section, we present some preliminary material. First, the notions of graphs, graph states and graph codes are introduced. Second, local Clifford transformations on graph states and local complementations on graphs are briefly presented. Third, the CWS quantum codes formalism is briefly discussed.
\subsection{Graphs, graph states, and graph codes}
A graph $G=G\left( V\text{, }E\right) $ is characterized by a set $V$ of $n$ vertices and a set of edges $E$ specified by the adjacency matrix $\Gamma$ \cite{die, west, wilson}. This matrix is an $n\times n$ symmetric matrix with vanishing diagonal elements and $\Gamma_{ij}=1$ if vertices $i$, $j$ are connected and $\Gamma_{ij}=0$ otherwise. The neighborhood of a vertex $i$ is the set of all vertices $v\in V$ that are connected to $i$ and is defined by $N_{i}\overset{\text{def}}{=}\left\{ v\in V:\Gamma_{iv}=1\right\} $. When the vertices $a$, $b\in V$ are the end points of an edge, they are referred to as being adjacent. An $\left\{ a\text{, }c\right\} $ path is an ordered list of vertices $a=a_{1}$, $a_{2}$,..., $a_{n-1}$, $a_{n}=c$, such that for all $i$, $a_{i}$ and $a_{i+1}$ are adjacent. A connected graph is a graph that has an $\left\{ a\text{, }c\right\} $ path for any two $a$, $c\in V$. Otherwise, it is referred to as disconnected. A vertex represents a physical system, e.g., a qubit (two-dimensional Hilbert space), a qudit ($d$-dimensional Hilbert space), or continuous variables (CV)\ (continuous Hilbert space). An edge between two vertices represents the physical interaction between the corresponding systems. In what follows, we shall take into consideration simple graphs only. These are graphs that contain neither loops (edges connecting a vertex with itself) nor multiple edges. Furthermore, for the time being, we do not make a distinction between different types of vertices. However, later on we will assign some vertices as inputs, and some as outputs.
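These notions translate directly into elementary computations. The toy example below (our own illustration, with $0$-indexed vertices) encodes a path on three vertices together with one isolated vertex in an adjacency matrix and recovers neighborhoods and (dis)connectivity.

```python
# Adjacency matrix of a path 0-1-2 plus an isolated vertex 3.
Gamma = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

def neighborhood(Gamma, i):
    # N_i = { v : Gamma[i][v] = 1 }
    return {v for v, e in enumerate(Gamma[i]) if e == 1}

def is_connected(Gamma):
    # depth-first search from vertex 0
    n = len(Gamma)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in neighborhood(Gamma, u) - seen:
            seen.add(v)
            stack.append(v)
    return len(seen) == n

assert neighborhood(Gamma, 1) == {0, 2}
assert not is_connected(Gamma)   # vertex 3 is isolated
```

Note that $\Gamma$ is symmetric with zero diagonal, matching the definition of a simple graph above.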
Graph states \cite{hans} are multipartite entangled states that play a key role in graphical constructions of QECCs and, in addition, are very important in quantum secret sharing \cite{damian2008} which is, to a certain extent, equivalent to error correction \cite{anne2013}. For a very recent experimental demonstration of a graph state quantum error correcting code, we refer to \cite{damian2014}.
Consider a system of $n$ qubits that are labeled by those $n$ vertices in $V$ and denote by $I^{i}$, $X^{i}$, $Y^{i}$, $Z^{i}$ (or, equivalently, $X^{i}\equiv\sigma_{x}^{i}$, $Y^{i}\equiv\sigma_{y}^{i}$, $Z^{i}\equiv \sigma_{z}^{i}$) the identity matrix and the three Pauli operators acting on the qubit $i\in V$. The $n$-qubit graph state $\left\vert G\right\rangle $ associated with the graph $G$ is defined by \cite{hein}, \begin{equation} \left\vert G\right\rangle \overset{\text{def}}{=}
{\displaystyle\prod\limits_{\Gamma_{ij}=1}}
\mathcal{U}_{ij}\left\vert +\right\rangle _{x}^{V}=\frac{1}{\sqrt{2^{n}}}
{\displaystyle\sum\limits_{\vec{\mu}=\mathbf{0}}^{\mathbf{1}}}
\left( -1\right) ^{\frac{1}{2}\vec{\mu}\cdot\Gamma\cdot\vec{\mu}}\left\vert \vec{\mu}\right\rangle _{z}\text{,} \end{equation} where $\left\vert +\right\rangle _{x}^{V}$ is the joint $+1$ eigenstate of $X^{i}$ with $i\in V$, $\mathcal{U}_{ij}$ is the controlled phase gate between qubits $i$ and $j$ given by, \begin{equation} \mathcal{U}_{ij}\overset{\text{def}}{=}\frac{1}{2}\left[ I+Z_{i}+Z_{j} -Z_{i}Z_{j}\right] \text{,} \end{equation} and $\left\vert \vec{\mu}\right\rangle _{z}$ is the joint eigenstate of $Z^{i}$ with $i\in V$ and $\left( -1\right) ^{\mu_{i}}$ as eigenvalues. The graph-state basis of the $n$-qubit Hilbert space $\mathcal{H}_{2}^{n}$ is given by $\left\{ \left\vert G^{C}\right\rangle \overset{\text{def}}{=} Z^{C}\left\vert G\right\rangle \right\} $ where $C$ is an element of the set of all the subsets of $V$ denoted by $2^{V}$. A collection of subsets $\left\{ C_{1}\text{,..., }C_{K}\right\} $ specifies a $K$-dimensional subspace of $\mathcal{H}_{2}^{n}$ that is spanned by the graph-state basis $\left\{ \left\vert G^{C_{i}}\right\rangle \right\} $ with $i=1$,..., $K$. The graph state $\left\vert G\right\rangle $ is the unique joint $+1$ eigenstate of the $n$-vertex stabilizers $\mathcal{G}_{i}$ with $i\in V$ defined as \cite{hein}, \begin{equation} \mathcal{G}_{i}\overset{\text{def}}{=}X^{i}Z^{N_{i}}\overset{\text{def}} {=}X^{i}
{\displaystyle\prod\limits_{j\in N_{i}}}
Z^{j}\text{.} \end{equation} A graph code, first introduced into the realm of QEC in \cite{werner} and later reformulated into the graph state formalism in \cite{hein}, is defined to be one in which a graph $G$ is given and the codespace (or, coding space) is spanned by a subset of the graph state basis. These states are regarded as codewords, although we recall that what is significant from the point of view of the QEC properties is the subspace they span, not the codewords themselves \cite{robert}.
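The defining formula for $\left\vert G\right\rangle$ and its stabilizers can be checked numerically. The sketch below (our own, using the triangle graph as an example) builds the state vector from the adjacency matrix $\Gamma$ and verifies $\mathcal{G}_{i}\left\vert G\right\rangle =\left\vert G\right\rangle$ for each vertex $i$.

```python
# Build |G> = 2^{-n/2} sum_mu (-1)^{(1/2) mu.Gamma.mu} |mu> and check the
# stabilizer conditions X^i Z^{N_i} |G> = |G>.
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def graph_state(Gamma):
    n = len(Gamma)
    G = np.array(Gamma)
    amps = []
    for mu in product([0, 1], repeat=n):
        m = np.array(mu)
        amps.append((-1) ** ((m @ G @ m) // 2))   # (1/2) mu.Gamma.mu = #edges in support
    return np.array(amps, dtype=float) / np.sqrt(2 ** n)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

Gamma = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # triangle graph
psi = graph_state(Gamma)
for i in range(3):
    ops = [X if j == i else (Z if Gamma[i][j] else I) for j in range(3)]
    assert np.allclose(kron_all(ops) @ psi, psi)   # G_i |G> = |G>
```

The same routine works for any simple graph, and multiplying $\left\vert G\right\rangle$ by $Z^{C}$ produces the graph-state basis elements $\left\vert G^{C}\right\rangle$ used above.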
\subsection{Local Clifford transformations and local complementations}
\subsubsection{Transformations on quantum states}
The Clifford group $\mathcal{C}_{n}$ is the normalizer of the Pauli group $\mathcal{P}_{\mathcal{H}_{2}^{n}}$ in $\mathcal{U}\left( 2^{n}\right) $, i.e., it is the group of unitary operators $U$ satisfying $U\mathcal{P} _{\mathcal{H}_{2}^{n}}U^{\dagger}=\mathcal{P}_{\mathcal{H}_{2}^{n}}$. The local Clifford group $\mathcal{C}_{n}^{l}$ is the subgroup of $\mathcal{C} _{n}$ consisting of all $n$-fold tensor products of elements in $\mathcal{C}_{1}$. The Clifford group is generated by a simple set of quantum gates: the Hadamard gate $H$, the phase gate $P$ and the CNOT gate $U_{\text{CNOT}}$ \cite{gaitan}. Using the well-known representations of the Pauli matrices in the computational basis, it is straightforward to show that the action of $H$ on such matrices reads \begin{equation} \sigma_{x}\rightarrow H\sigma_{x}H^{\dagger}=\sigma_{z}\text{, }\sigma _{y}\rightarrow H\sigma_{y}H^{\dagger}=-\sigma_{y}\text{, }\sigma _{z}\rightarrow H\sigma_{z}H^{\dagger}=\sigma_{x}\text{.} \end{equation} The action of the phase gate $P$ on $\sigma_{x}$, $\sigma_{y}$ and $\sigma _{z}$ is given by, \begin{equation} \sigma_{x}\rightarrow P\sigma_{x}P^{\dagger}=\sigma_{y}\text{, }\sigma _{y}\rightarrow P\sigma_{y}P^{\dagger}=-\sigma_{x}\text{, }\sigma _{z}\rightarrow P\sigma_{z}P^{\dagger}=\sigma_{z}\text{.} \end{equation} Finally, the CNOT\ gate leads to the following transformation rules, \begin{align} \sigma_{x}\otimes I & \rightarrow U_{\text{CNOT}}\left( \sigma_{x}\otimes I\right) U_{\text{CNOT}}^{\dagger}=\sigma_{x}\otimes\sigma_{x}\text{, }I\otimes\sigma_{x}\rightarrow U_{\text{CNOT}}\left( I\otimes\sigma _{x}\right) U_{\text{CNOT}}^{\dagger}=I\otimes\sigma_{x}\text{,}\nonumber\\ & \nonumber\\ \sigma_{z}\otimes I & \rightarrow U_{\text{CNOT}}\left( \sigma_{z}\otimes I\right) U_{\text{CNOT}}^{\dagger}=\sigma_{z}\otimes I\text{, }I\otimes \sigma_{z}\rightarrow U_{\text{CNOT}}\left( I\otimes\sigma_{z}\right) U_{\text{CNOT}}^{\dagger}=\sigma_{z}\otimes\sigma_{z}\text{.} \end{align} 
Observe that the CNOT gate propagates bit flip errors from the control to the target, and phase errors from the target to the control. As a side remark, we stress that another useful two-qubit gate is the controlled-phase gate $U_{\text{CP}}\overset{\text{def}}{=}\left( I\otimes H\right) U_{\text{CNOT} }\left( I\otimes H\right) $. The controlled-phase gate has the following action on the generators of $\mathcal{P}_{\mathcal{H}_{2}^{2}}$, \begin{align} \sigma_{x}\otimes I & \rightarrow U_{\text{CP}}\left( \sigma_{x}\otimes I\right) U_{\text{CP}}^{\dagger}=\sigma_{x}\otimes\sigma_{z}\text{, } I\otimes\sigma_{x}\rightarrow U_{\text{CP}}\left( I\otimes\sigma_{x}\right) U_{\text{CP}}^{\dagger}=\sigma_{z}\otimes\sigma_{x}\text{,}\nonumber\\ & \nonumber\\ \sigma_{z}\otimes I & \rightarrow U_{\text{CP}}\left( \sigma_{z}\otimes I\right) U_{\text{CP}}^{\dagger}=\sigma_{z}\otimes I\text{, }I\otimes \sigma_{z}\rightarrow U_{\text{CP}}\left( I\otimes\sigma_{z}\right) U_{\text{CP}}^{\dagger}=I\otimes\sigma_{z}\text{.} \end{align} We observe that a controlled-phase gate does not propagate phase errors, though a bit-flip error on one qubit spreads to a phase error on the other qubit.
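All of the conjugation rules above are short matrix checks. The sketch below (our own, with $P=\mathrm{diag}(1,i)$ and the standard basis ordering, control qubit first) verifies them numerically.

```python
# Numerical verification of the Clifford conjugation rules for H, P, CNOT, CP.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P = np.diag([1, 1j])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
CP = np.diag([1, 1, 1, -1]).astype(complex)   # controlled-phase gate

def conj(U, A):
    return U @ A @ U.conj().T

assert np.allclose(conj(H, X), Z) and np.allclose(conj(H, Z), X)
assert np.allclose(conj(P, X), Y) and np.allclose(conj(P, Y), -X)
assert np.allclose(conj(CNOT, np.kron(X, I)), np.kron(X, X))   # bit flip spreads
assert np.allclose(conj(CNOT, np.kron(I, Z)), np.kron(Z, Z))   # phase flip spreads
assert np.allclose(conj(CP, np.kron(X, I)), np.kron(X, Z))
assert np.allclose(conj(CP, np.kron(I, X)), np.kron(Z, X))
assert np.allclose(conj(CP, np.kron(Z, I)), np.kron(Z, I))     # phases do not spread
```

The last three assertions make the error-propagation remarks above explicit: under $U_{\text{CP}}$ a bit-flip error spreads as a phase error to the other qubit, while phase errors stay local.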
We also point out that a unitary operator $U$ that fixes the stabilizer group $S_{\text{stabilizer}}$ \ (we refer to \cite{daniel-phd} for a detailed characterization of the quantum stabilizer formalism in QEC) of a quantum stabilizer code $\mathcal{C}_{\text{stabilizer}}$ under conjugation is an encoded operation. In other words, $U$ is an encoded operation that maps codewords to codewords whenever $US_{\text{stabilizer}}U^{\dagger }=S_{\text{stabilizer}}$. In particular, if $S^{\prime}\overset{\text{def}} {=}USU^{\dagger}$ (every element of $S^{\prime}$ can be written as $UsU^{\dagger}$ for some $s\in S$) and $\left\vert c\right\rangle $ is a codeword stabilized by every element in $S$, then $\left\vert c^{\prime }\right\rangle =U\left\vert c\right\rangle $ is stabilized by every stabilizer element in $S^{\prime}$.
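As a minimal toy illustration of our own (not drawn from the references), consider the two-qubit subspace spanned by $\left\vert 00\right\rangle $ and $\left\vert 11\right\rangle $, stabilized by $\sigma_{z}\otimes\sigma_{z}$; the operator $\sigma_{x}\otimes\sigma_{x}$ fixes this stabilizer under conjugation and therefore maps codewords to codewords:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ZZ = np.kron(Z, Z)   # stabilizer generator of span{|00>, |11>}
U = np.kron(X, X)    # candidate encoded operation

# U fixes the stabilizer group under conjugation: U (Z⊗Z) U† = Z⊗Z
assert np.allclose(U @ ZZ @ U.conj().T, ZZ)

# Hence U maps codewords to codewords: |00> -> |11>, both stabilized by Z⊗Z
c00 = np.array([1, 0, 0, 0], dtype=complex)
c11 = U @ c00
assert np.allclose(ZZ @ c11, c11)
```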
\subsubsection{Transformations on graphs}
If there exists a local unitary (LU) transformation $U$ such that $U\left\vert G\right\rangle =\left\vert G^{\prime}\right\rangle $, the states $\left\vert G\right\rangle $ and $\left\vert G^{\prime}\right\rangle $ will have the same entanglement properties. If $\left\vert G\right\rangle $ and $\left\vert G^{\prime}\right\rangle $ are graph states, we say that their corresponding graphs $G$ and $G^{\prime}$ will then represent equivalent quantum codes, with the same distance, weight distribution, and other properties. Determining whether two graphs are LU-equivalent is a difficult task, but a sufficient condition for equivalence was given in \cite{hein}. Let the graphs $G=\left( V\text{, }E\right) $ and $G^{\prime}=\left( V\text{, }E^{\prime}\right) $ on $n$ vertices correspond to the $n$-qubit graph states $\left\vert G\right\rangle $ and $\left\vert G^{\prime}\right\rangle $. We define the two $2\times2$ unitary matrices, \begin{equation} \tau_{x}\overset{\text{def}}{=}\sqrt{-i\sigma_{x}}=\frac{1}{\sqrt{2}}\left( \begin{array} [c]{cc} -1 & i\\ i & -1 \end{array} \right) \text{ and, }\tau_{z}\overset{\text{def}}{=}\sqrt{i\sigma_{z} }=\left( \begin{array} [c]{cc} \omega & 0\\ 0 & \omega^{3} \end{array} \right) \text{,} \end{equation} where $\omega^{4}=i^{2}=-1$, and $\sigma_{x}$ and $\sigma_{z}$ are Pauli matrices. Given a graph $G=\left( V=\left\{ 0\text{,..., }n-1\right\} \text{, }E\right) $, corresponding to the graph state $\left\vert G\right\rangle $, we define a local unitary transformation $U_{a}$, \begin{equation} U_{a}\overset{\text{def}}{=}
{\displaystyle\bigotimes\limits_{i\in N_{a}}}
\tau_{x}^{\left( i\right) }
{\displaystyle\bigotimes\limits_{i\notin N_{a}}}
\tau_{z}^{\left( i\right) }\text{,} \end{equation} where $a\in V$ is any vertex, $N_{a}\subset V$ is the neighborhood of $a$, and $\tau_{x}^{\left( i\right) }$ means that the transform $\tau_{x}$ should be applied to the qubit corresponding to vertex $i$. Given a graph $G$, if there exists a finite sequence of vertices $\left( u_{0}\text{,..., } u_{k-1}\right) $ such that $U_{u_{k-1}}$...$U_{u_{0}}\left\vert G\right\rangle =\left\vert G^{\prime}\right\rangle $, then $G$ and $G^{\prime }$ are LU-equivalent \cite{hein}. It was discovered by Hein et \textit{al}. and by Van den Nest et \textit{al}. that the sequence of transformations taking $\left\vert G\right\rangle $ to $\left\vert G^{\prime}\right\rangle $ can equivalently be expressed as a sequence of simple graph operations taking $G$ to $G^{\prime}$. In particular, it was shown in \cite{bart} that a graph $G$ determines uniquely a graph state $\left\vert G\right\rangle $ and two graph states ($\left\vert G_{1}\right\rangle $ and $\left\vert G_{2} \right\rangle $) determined by two graphs ($G_{1}$ and $G_{2}$) are equivalent up to some local Clifford transformations iff these two graphs are related to each other by local complementations (LCs). The concept of LC was originally introduced by Bouchet in \cite{france}. A LC of a graph on a vertex $v$ refers to the operation that in the neighborhood of $v$ we connect all the disconnected vertices and disconnect all the connected vertices. All the graphs on up to $12$ vertices have been classified under LCs and graph isomorphisms \cite{parker}. In summary, the relation between graphs and quantum codes can be rather complicated since one graph may provide inequivalent codes and different graphs may provide equivalent codes. However, it has been established that the family of codes given by a graph is equivalent to the family of codes given by a local complementation of that graph.
As pointed out earlier, unitary operations $U$ in the local Clifford group $\mathcal{C}_{n}^{l}$ act on graph states $\left\vert G\right\rangle $. However, there also exist graph-theoretical rules, i.e., transformations acting on graphs, which correspond to local Clifford operations. These operations generate the orbit of any graph state under local Clifford operations. The LC orbit of a graph $G$ is the set of all non-isomorphic graphs, including $G$ itself, that can be transformed into $G$ by any sequence of local complementations and vertex permutations. The transformation laws for a graph state $\left\vert G\right\rangle $ and a graph stabilizer under local unitary transformations $U$ read, \begin{equation} \left\vert G\right\rangle \rightarrow\left\vert G^{\prime}\right\rangle =U\left\vert G\right\rangle \text{ and, }S_{\Gamma}\rightarrow S_{\Gamma^{\prime}}=US_{\Gamma}U^{\dagger}\text{,} \label{oggi} \end{equation} respectively. Neglecting overall phases, it turns out that local Clifford operations $U\in\mathcal{C}_{n}^{l}$ are just the symplectic transformations $Q$ of $
\mathbb{Z}
_{2}^{2n}$ which preserve the symplectic inner product \cite{moor}. Therefore, the $\left( 2n\times2n\right) $-matrices $Q$ satisfy the relation $Q^{\text{T}}PQ=P$ where T denotes the transpose operation and $P$ is the $\left( 2n\times2n\right) $-matrix that defines a symplectic inner product in $
\mathbb{Z}
_{2}^{2n}$, \begin{equation} P\overset{\text{def}}{=}\left( \begin{array} [c]{cc} 0 & I\\ I & 0 \end{array} \right) \text{.} \end{equation} Furthermore, since local Clifford operations act on each qubit separately, they have the additional block structure \begin{equation} Q\overset{\text{def}}{=}\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) \text{,} \end{equation} where the $\left( n\times n\right) $-blocks $A$, $B$, $C$, $D$ are diagonal. It was shown in \cite{bart} that each binary stabilizer code is equivalent to a graph code. In particular, each graph code characterized by the adjacency matrix $\Gamma$ corresponds to a stabilizer matrix $\mathcal{S}_{b}\overset{\text{def}}{=}\left( \Gamma\left\vert I\right. \right) $ and transpose stabilizer (generator matrix) $\mathcal{T}\overset{\text{def}}{=}\mathcal{S}_{b}^{T}=\binom{\Gamma}{I}$. The generator matrix $\binom{\Gamma^{\prime}}{I}$ for a graph state with adjacency matrix $\Gamma^{\prime}$ reads, \begin{equation} \binom{\Gamma}{I}\rightarrow\binom{\Gamma^{\prime}}{I}=\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) \binom{\Gamma}{I}\left( C\Gamma+D\right) ^{-1}\text{,} \label{nons} \end{equation} where, \begin{equation} \Gamma^{\prime}\overset{\text{def}}{=}Q\left( \Gamma\right) =\left( A\Gamma+B\right) \left( C\Gamma+D\right) ^{-1}\text{.} \label{tl1} \end{equation} Observe that in order to have properly defined generator matrices in Eq. (\ref{nons}), $C\Gamma+D$ must be nonsingular and $\Gamma^{\prime}$ must have vanishing diagonal elements. The graphical analog of the transformation law in Eq. (\ref{tl1}) was provided in \cite{bart}. Before stating this result, some additional terminology needs to be introduced.
Two vertices $i$ and $j$ of a graph $G=\left( V\text{, }E\right) $ are called adjacent vertices, or neighbors, if $\left\{ i\text{, }j\right\} \in E $. The neighborhood $N\left( i\right) \subseteq V$ of a vertex $i$ is the set of all neighbors of $i$. A graph $G^{\prime}=\left( V^{\prime}\text{, }E^{\prime}\right) $ which satisfies $V^{\prime}\subseteq V$ and $E^{\prime}\subseteq E$ is a subgraph of $G$ and one writes $G^{\prime}\subseteq G$. For a subset $A\subseteq V$ of vertices, the induced subgraph $G\left[ A\right] \subseteq G$ is the graph with vertex set $A$ and edge set $\left\{ \left\{ i\text{, }j\right\} \in E:i\text{, }j\in A\right\} $. If $G$ has an adjacency matrix $\Gamma$, its complement $G^{\text{c}}$ is the graph with adjacency matrix $\Gamma+\mathbf{I}$, where $\mathbf{I}$ is the $\left( n\times n\right) $-matrix which has all ones, except for the diagonal entries which are zero. For each vertex $i=1$,..., $n$, a local complementation $g_{i}$ sends the $n$-vertex graph $G$ to the graph $g_{i}\left( G\right) $ which is obtained by replacing the induced subgraph $G\left[ N\left( i\right) \right] $ by its complement. In other words, \begin{equation} \Gamma\rightarrow\Gamma^{\prime}\equiv g_{i}(\Gamma)\overset{\text{def}} {=}\Gamma+\Gamma\Lambda_{i}\Gamma+\Lambda^{\left( i\right) }\text{,} \label{n1} \end{equation} where $\Lambda_{i}$ has a $1$ on the $i$th diagonal entry and zeros elsewhere and $\Lambda^{\left( i\right) }$ is a diagonal matrix chosen so as to yield zeros on the diagonal of $g_{i}(\Gamma)$. Finally, the graphical analog of Eq.
(\ref{tl1}) becomes, \begin{equation} Q_{i}\left( \Gamma\right) =g_{i}(\Gamma)\text{,} \label{tl2} \end{equation} with, \begin{equation} Q_{i}\overset{\text{def}}{=}\left( \begin{array} [c]{cc} I & \text{diag}\left( \Gamma_{i}\right) \\ \Lambda_{i} & I \end{array} \right) \text{,} \label{n2} \end{equation} and diag$\left( \Gamma_{i}\right) \overset{\text{def}}{=}$diag$\left( \Gamma_{i1}\text{,..., }\Gamma_{in}\right) $. Observe that substituting (\ref{n2}) in (\ref{tl1}) and using (\ref{n1}), Eq. (\ref{tl2}) gives \begin{equation} Q_{i}\left( \Gamma\right) =g_{i}(\Gamma)\Leftrightarrow\Gamma+\Gamma \Lambda_{i}\Gamma+\Lambda^{\left( i\right) }=\Gamma+\Gamma\Lambda_{i} \Gamma+\left[ \text{diag}\left( \Gamma_{i}\right) +\text{diag}\left( \Gamma_{i}\right) \Lambda_{i}\Gamma\right] \text{,} \end{equation} that is, \begin{equation} \Lambda^{\left( i\right) }=\text{diag}\left( \Gamma_{i}\right) +\text{diag}\left( \Gamma_{i}\right) \Lambda_{i}\Gamma\text{.} \end{equation} The translation of the action of local Clifford operations on graph states into the action of local complementations on graphs as presented in Eq. (\ref{tl2}) is a major achievement of \cite{bart}.
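The algebraic rule in Eq. (\ref{n1}) and the symplectic condition $Q^{\text{T}}PQ=P$ can both be checked on a small example over GF(2). The sketch below (our illustration) uses the path graph on three vertices:

```python
import numpy as np

def lc_matrix(Gamma, i):
    """Algebraic local complementation over GF(2): Gamma + Gamma Lambda_i Gamma,
    with Lambda^{(i)} implemented by zeroing out the diagonal."""
    n = Gamma.shape[0]
    Lam = np.zeros((n, n), dtype=int)
    Lam[i, i] = 1
    G = (Gamma + Gamma @ Lam @ Gamma) % 2
    np.fill_diagonal(G, 0)
    return G

# Path graph 0-1-2: local complementation at vertex 1 yields the triangle
Gamma = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
K3 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
assert np.array_equal(lc_matrix(Gamma, 1), K3)

# The corresponding matrix Q_i is symplectic: Q^T P Q = P over GF(2)
n = 3
I_n = np.eye(n, dtype=int)
Lam1 = np.zeros((n, n), dtype=int)
Lam1[1, 1] = 1
Q = np.block([[I_n, np.diag(Gamma[1])], [Lam1, I_n]])
P = np.block([[np.zeros((n, n), dtype=int), I_n],
              [I_n, np.zeros((n, n), dtype=int)]])
assert np.array_equal((Q.T @ P @ Q) % 2, P)
```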
\subsection{The CWS-work}
CWS codes include all stabilizer codes as well as several nonadditive codes. However, for the sake of completeness, we point out that there are indeed quantum codes that cannot be recast within the CWS framework, as noted in \cite{cross} and shown in \cite{ruskai}. CWS codes in standard form can be specified by a graph $G$ and a (nonadditive, in general) classical binary code $\mathcal{C}_{\text{classical}}$. The $n$ vertices of the graph $G$ correspond to the $n$ qubits of the code and its adjacency matrix is $\Gamma$. Given the graph state $\left\vert G\right\rangle $ and the binary code $\mathcal{C}_{\text{classical}}$, a unique base state $\left\vert S\right\rangle $ and a set of word operators $\left\{ w_{k}\right\} $ are specified. The base state $\left\vert S\right\rangle $ is a single stabilizer state stabilized by the word stabilizer $\mathcal{S}_{\text{CWS}}$, a maximal Abelian subgroup of the Pauli group $\mathcal{P}_{\mathcal{H}_{2}^{n}}$.
Let $\left( \left( n\text{, }K\text{, }d\right) \right) $ denote a quantum code on $n$ qubits that encodes $K$ dimensions with distance $d$. Following \cite{cross}, it can be shown that a $\left( \left( n\text{, }K\text{, }d\right) \right) $ codeword stabilized code with word operators $\mathcal{W}=\left\{ w_{l}\right\} $ with $l\in\left\{ 1\text{,..., }K\right\} $ and codeword stabilizer $\mathcal{S}_{\text{CWS}}$ is locally Clifford equivalent to a codeword stabilized code with word operators $\mathcal{W}^{\prime}$, \begin{equation} \mathcal{W}^{\prime}\overset{\text{def}}{=}\left\{ w_{l}^{\prime }=Z^{\mathbf{c}_{l}}\right\} \text{,} \label{can1} \end{equation} and codeword stabilizer $\mathcal{S}_{\text{CWS}}^{\prime}$, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle S_{l}^{\prime}\right\rangle =\left\langle X_{l}Z^{\mathbf{r}_{l}}\right\rangle \text{,} \label{can2} \end{equation} where $\mathbf{c}_{l}$s are codewords defining the classical binary code $\mathcal{C}_{\text{classical}}$ and $\mathbf{r}_{l}$ is the $l$th row vector of the adjacency matrix $\Gamma$ of the graph $G$. For the sake of clarity, we stress that $Z^{\mathbf{v}}$ in Eq. (\ref{can2}) is the notational shorthand for \begin{equation} Z^{\mathbf{v}}\overset{\text{def}}{=}Z^{v_{1}}\otimes\text{...}\otimes Z^{v_{n}}\text{,} \end{equation} where $\mathbf{v=}\left( v_{1}\text{,..., }v_{n}\right) \in F_{2}^{n}$ is a binary $n$-vector. Thus, any CWS code is locally Clifford equivalent to a CWS code with a graph-state stabilizer and word operators consisting only of $Z$s. Moreover, the word operators can always be chosen to include the identity. Eqs. (\ref{can1})\ and (\ref{can2}) characterize the so-called standard form of a CWS quantum code. For a CWS code in standard form, the base state $\left\vert S\right\rangle $ is a graph state. 
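As an illustration of the standard form (our example: the $5$-cycle graph, whose graph state is local Clifford equivalent to the five-qubit code), the generators $S_{l}=X_{l}Z^{\mathbf{r}_{l}}$ can be encoded as binary symplectic vectors and checked to commute pairwise:

```python
import numpy as np

# 5-cycle graph, used here purely as an illustration
n = 5
Gamma = np.zeros((n, n), dtype=int)
for i in range(n):
    Gamma[i, (i + 1) % n] = Gamma[(i + 1) % n, i] = 1

# Standard-form generators S_l = X_l Z^{r_l}, encoded as binary
# symplectic vectors (x-part | z-part) of length 2n
gens = []
for l in range(n):
    x = np.zeros(n, dtype=int)
    x[l] = 1
    gens.append(np.concatenate([x, Gamma[l]]))

def symp(u, w, n):
    """Binary symplectic product of two (x|z) vectors."""
    return (u[:n] @ w[n:] + u[n:] @ w[:n]) % 2

# The codeword stabilizer is Abelian: all generators commute
assert all(symp(a, b, n) == 0 for a in gens for b in gens)
```

The pairwise commutation follows from the symmetry of $\Gamma$: $\text{symp}(S_{l},S_{m})=\Gamma_{ml}+\Gamma_{lm}=0$ mod $2$.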
Furthermore, the codespace of a CWS\ code is spanned by a set of basis vectors which result from applying the word operators $w_{k}$ on the base state $\left\vert S\right\rangle $, \begin{equation} \mathcal{C}_{\text{CWS}}\overset{\text{def}}{=}\text{Span}\left\{ \left\vert w_{l}\right\rangle \right\} \text{ with, }\left\vert w_{l}\right\rangle \overset{\text{def}}{=}w_{l}\left\vert S\right\rangle \text{.} \end{equation} Therefore, the dimension of the codespace equals the number of word operators. These operators are Pauli operators in $\mathcal{P}_{\mathcal{H}_{2}^{n}}$ that anticommute with one or more of the stabilizer generators for the base state. Thus, word operators map the base state onto an orthogonal state. The only exception is that in general the set of word operators also includes the identity operator so that the base state is a codeword of the quantum code as well. These basis states are also eigenstates of the stabilizer generators, but with some of the eigenvalues differing from $+1$. In addition, it turns out that a single qubit Pauli error $X$, $Z$ or $ZX$ acting on a codeword $\omega\left\vert S\right\rangle $ of a CWS code in standard form is equivalent up to a sign to another multi-qubit error consisting of $Z$s. Therefore, since all errors become $Z$s, the original quantum error model is transformed into a classical (induced by the CWS\ formalism) error model characterized, in general, by multi-qubit errors. The map $\mathcal{C} l_{\mathcal{S}_{\text{CWS}}}$ that defines this transformation reads, \begin{equation} \mathcal{C}l_{\mathcal{S}_{\text{CWS}}}:\mathcal{E}\ni E\equiv\pm Z^{\mathbf{v}}X^{\mathbf{u}}\mapsto\mathcal{C}l_{\mathcal{S}_{\text{CWS}} }\left( \pm Z^{\mathbf{v}}X^{\mathbf{u}}\right) \overset{\text{def}} {=}\mathbf{v\oplus}
{\displaystyle\bigoplus\limits_{l=1}^{n}}
u_{l}\mathbf{r}_{l}\in\left\{ 0\text{, }1\right\} ^{n}\text{,} \end{equation} where $\mathcal{E}$ denotes the set of Pauli errors $E$, $\mathbf{r}_{l}$ is the $l$th row of the adjacency matrix $\Gamma$ for the graph $G$ and $u_{l}$ is the $l$th bit of the vector $\mathbf{u}$. Finally, it was shown in \cite{cross} that any stabilizer code is a CWS code. Specifically, a quantum stabilizer code $\left[ \left[ n,k,d\right] \right] $ (where the parameters $n$, $k$, $d$ denote the length, the dimension and the distance of the quantum code, respectively) with stabilizer $\mathcal{S}\overset{\text{def}}{=}\left\langle S_{1}\text{,..., }S_{n-k}\right\rangle $ where $S_{j}$ with $j\in\left\{ 1\text{,..., }n-k\right\} $ denote the stabilizer generators and logical operations $\bar{X}_{1}$,..., $\bar{X}_{k}$ and $\bar{Z}_{1}$,..., $\bar{Z}_{k}$ is equivalent to a CWS code defined by, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle S_{1}\text{, ..., }S_{n-k}\text{, }\bar{Z}_{1}\text{,..., }\bar{Z}_{k}\right\rangle \text{,} \end{equation} and word operators $\omega_{\mathbf{v}}$, \begin{equation} \omega_{\mathbf{v}}=\bar{X}_{1}^{\left( \mathbf{v}\right) _{1}}\otimes\text{...}\otimes\bar{X}_{k}^{\left( \mathbf{v}\right) _{k}}\text{.} \end{equation} The vector $\mathbf{v}$ denotes a $k$-bit string and $\left( \mathbf{v}\right) _{l}\equiv v_{l}$ with $l\in\left\{ 1\text{,..., }k\right\} $ is the $l$th bit of the vector $\mathbf{v}$. For further details on binary CWS quantum codes, we refer to \cite{cross}. Finally, for a very recent investigation on the symmetries of CWS codes, we refer to \cite{tqc}.
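The map $\mathcal{C}l_{\mathcal{S}_{\text{CWS}}}$ is a simple GF(2) computation. The sketch below (our illustration, again on the $5$-cycle) shows how a single-qubit $X$ error is turned into a multi-bit classical error supported on the neighborhood of the affected vertex:

```python
import numpy as np

def cws_classical_error(v, u, Gamma):
    """Map the Pauli error ±Z^v X^u to the induced classical binary error
    v ⊕ (⊕_l u_l r_l), with r_l the l-th row of the adjacency matrix."""
    return (v + u @ Gamma) % 2

# 5-cycle adjacency matrix
n = 5
Gamma = np.zeros((n, n), dtype=int)
for i in range(n):
    Gamma[i, (i + 1) % n] = Gamma[(i + 1) % n, i] = 1

# A single-qubit X error on qubit 0 is mapped to a classical two-bit
# error supported on the neighbors of vertex 0 (vertices 1 and 4)
u = np.array([1, 0, 0, 0, 0])
v = np.zeros(n, dtype=int)
assert list(cws_classical_error(v, u, Gamma)) == [0, 1, 0, 0, 1]
```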
\section{From graphs to stabilizer codes and vice-versa}
In this section, we revisit some basic ingredients of the Schlingemann-Werner work (SW-work, \cite{werner}), the Schlingemann work (S-work, \cite{dirk}) and, finally, the Van den Nest et \textit{al}. work (VdN-work, \cite{bart}). We focus on those aspects of these works that will be especially relevant for our proposed scheme.
\subsection{The Schlingemann-Werner work}
The basic graphical construction of quantum codes within the SW-work \cite{werner} can be described as follows. Quantum codes are completely characterized by an undirected graph $G=G\left( V\text{, }E\right) $, with a set $V$ of $n$ vertices and a set of edges $E$ specified by the coincidence matrix $\Xi$ with both input and output vertices, together with a finite Abelian group $\mathcal{G}$ endowed with a nondegenerate symmetric bicharacter $\chi$. We remark that there are various types of matrices that can be used to specify a given graph (for instance, incidence and adjacency matrices \cite{die}). The coincidence matrix introduced in \cite{werner} is simply the adjacency matrix of a graph with both input and output vertices (and it should not be confused with the so-called incidence matrix of a graph). The sets of input and output vertices will be denoted by $X$ and $Y$, respectively. Let $\mathcal{G}$ be any finite additive Abelian group of cardinality $\left\vert \mathcal{G}\right\vert =n$ with the addition operation denoted by $+$ and null element $0$. A nondegenerate symmetric bicharacter is a map $\chi:\mathcal{G}\times\mathcal{G}\ni\left( g\text{, }h\right) \mapsto\chi\left( g\text{, }h\right) \equiv\left\langle g\text{, }h\right\rangle \in
\mathbb{C}
$ satisfying the following properties \cite{partha}: (i) $\left\langle g\text{, }h\right\rangle =\left\langle h\text{, }g\right\rangle $, $\forall g$, $h\in\mathcal{G}$; (ii) $\left\langle g\text{, }h_{1}+h_{2}\right\rangle =\left\langle g\text{, }h_{1}\right\rangle \left\langle g\text{, } h_{2}\right\rangle $, $\forall g$, $h_{1}$, $h_{2}\in\mathcal{G}$; (iii) $\left\langle g\text{, }h\right\rangle =1$ $\forall h\in\mathcal{G} \Leftrightarrow g=0$. If $\mathcal{G}=
\mathbb{Z}
_{n}\overset{\text{def}}{=}\left\{ 0\text{,..., }n-1\right\} $ (the cyclic group of order $n$) with addition modulo $n$ as the group operation, the bicharacter $\chi$ can be chosen as \begin{equation} \chi\left( g\text{, }h\right) \equiv\left\langle g\text{, }h\right\rangle \overset{\text{def}}{=}e^{i\frac{2\pi}{n}gh}\text{, } \end{equation} with $g$, $h\in
\mathbb{Z}
_{n}$. The encoding operator $\mathbf{v}_{G}$ of an error correcting code is an isometry (a distance-preserving map between two metric spaces), \begin{equation} \mathbf{v}_{G}:L^{2}\left( \mathcal{G}^{X}\right) \rightarrow L^{2}\left( \mathcal{G}^{Y}\right) \text{,} \end{equation} where $L^{2}\left( \mathcal{G}^{X}\right) $ is the $\left\vert X\right\vert $-fold tensor product $\mathcal{H}^{\otimes X}$ with $\mathcal{H}=L^{2}\left( \mathcal{G}\right) $ (the Hilbert space $\mathcal{H}$ is realized as the space of square-integrable functions over $\mathcal{G}$) and $\mathcal{G}\overset{\text{def}}{=}
\mathbb{Z}
_{2}$ in the qubit case. Similarly, $L^{2}\left( \mathcal{G}^{Y}\right) $ is the $\left\vert Y\right\vert $-fold tensor product $\mathcal{H}^{\otimes Y}$. The Hilbert space $L^{2}\left( \mathcal{G}\right) $ is defined as, \begin{equation} L^{2}\left( \mathcal{G}\right) \overset{\text{def}}{=}\left\{ \psi\left\vert \psi:\mathcal{G}\rightarrow
\mathbb{C}
\right. \right\} \text{,} \end{equation} with scalar product between two elements $\psi_{1}$ and $\psi_{2}$ in $L^{2}\left( \mathcal{G}\right) $ given by, \begin{equation} \left\langle \psi_{1}\text{, }\psi_{2}\right\rangle \overset{\text{def}} {=}\frac{1}{\left\vert \mathcal{G}\right\vert }\sum_{g}\bar{\psi}_{1}\left( g\right) \psi_{2}\left( g\right) \text{.} \end{equation} The action of $\mathbf{v}_{G}$ on $L^{2}\left( \mathcal{G}^{X}\right) $ is defined as \cite{werner}, \begin{equation} \left( \mathbf{v}_{G}\psi\right) \left( g^{Y}\right) \overset{\text{def} }{=}\int dg^{X}\mathbf{v}_{G}\left[ g^{X\cup Y}\right] \psi\left( g^{X}\right) \text{,} \label{g12} \end{equation} where $\mathbf{v}_{_{G}}\left[ g^{X\cup Y}\right] $, the integral kernel of the isometry $\mathbf{v}_{_{G}}$, is given by \cite{werner}, \begin{align} \mathbf{v}_{G}\left[ g^{X\cup Y}\right] & =\left\vert \mathcal{G} \right\vert ^{\frac{\left\vert X\right\vert }{2}}\prod\limits_{\left\{ z\text{, }z^{\prime}\right\} }\chi\left( g_{z}\text{, }g_{z^{\prime} }\right) ^{\Xi\left( z\text{, }z^{\prime}\right) }=\left\vert \mathcal{G}\right\vert ^{\frac{\left\vert X\right\vert }{2}}\prod \limits_{\left\{ z\text{, }z^{\prime}\right\} }\left[ \exp\left( \frac{2\pi i}{p}g_{z}g_{z^{\prime}}\right) \right] ^{\Xi\left( z\text{, }z^{\prime}\right) }\nonumber\\ & \nonumber\\ & =\left\vert \mathcal{G}\right\vert ^{\frac{\left\vert X\right\vert }{2} }\prod\limits_{\left\{ z\text{, }z^{\prime}\right\} }\left[ \exp\left( \frac{2\pi i}{p}g_{z}\Xi\left( z\text{, }z^{\prime}\right) g_{z^{\prime} }\right) \right] =\left\vert \mathcal{G}\right\vert ^{\frac{\left\vert X\right\vert }{2}}\exp\left( \frac{\pi i}{p}g^{X\cup Y}\cdot\Xi\cdot g^{X\cup Y}\right) \text{.} \label{g11} \end{align} The product in Eq. (\ref{g11}) must be taken over each two elementary subsets $\left\{ z\text{, }z^{\prime}\right\} $ in $X\cup Y$. Substituting Eq. (\ref{g11}) into Eq. 
(\ref{g12}), the action of $\mathbf{v}_{G}$ on $L^{2}\left( \mathcal{G}^{X}\right) $ finally becomes, \begin{equation} \left( \mathbf{v}_{G}\psi\right) \left( g^{Y}\right) =\int dg^{X} \left\vert \mathcal{G}\right\vert ^{\frac{\left\vert X\right\vert }{2}} \exp\left( \frac{\pi i}{p}g^{X\cup Y}\cdot\Xi\cdot g^{X\cup Y}\right) \psi\left( g^{X}\right) \text{.} \label{g13} \end{equation} We recall that the sequential steps of a QEC cycle can be described as follows, \begin{equation} \rho\overset{\text{coding}}{\longrightarrow}\mathbf{v}\rho\mathbf{v}^{\ast }\equiv\rho^{\prime}\text{, }\rho^{\prime}\overset{\text{noise}} {\longrightarrow}\mathbf{T}\left( \rho^{\prime}\right) =\sum\limits_{\alpha }F_{\alpha}\rho^{\prime}F_{\alpha}^{\ast}\equiv\rho^{\prime\prime}\text{, }\rho^{\prime\prime}\overset{\text{recovery}}{\longrightarrow}\mathbf{R} \left( \rho^{\prime\prime}\right) =\rho\text{,} \end{equation} that is, \begin{equation} \mathbf{R}\left( \mathbf{T}\left( \mathbf{v}\rho\mathbf{v}^{\ast}\right) \right) =\rho\text{.} \end{equation} Furthermore, the traditional Knill-Laflamme error-correction conditions read, \begin{equation} \left\langle \mathbf{v}\psi_{1}\text{, }F_{\alpha}^{\ast}F_{\beta} \mathbf{v}\psi_{2}\right\rangle =\omega\left( F_{\alpha}^{\ast}F_{\beta }\right) \left\langle \psi_{1}\text{, }\psi_{2}\right\rangle \text{,} \label{kl} \end{equation} where the multiplicative factor $\omega\left( F_{\alpha}^{\ast}F_{\beta }\right) $ does not depend on the states $\psi_{1}$ and $\psi_{2}$. The graphical analog of Eq. (\ref{kl}) is given by, \begin{equation} \left\langle \mathbf{v}\psi_{1}\text{, }F\mathbf{v}\psi_{2}\right\rangle =\omega\left( F\right) \left\langle \psi_{1}\text{, }\psi_{2}\right\rangle \text{,} \label{g14} \end{equation} for all operators in $\mathcal{U}(E)$, the set of all operators in $L^{2}(\mathcal{G}^{Y})$ which are localized in $E\subset Y$. 
Thus, operators in $\mathcal{U}(E)$ are given by the tensor product of an arbitrary operator on $\mathcal{H}^{\otimes E}$ with the identity on $\mathcal{H}^{\otimes Y\backslash E}$. A graph code corrects $e$ errors if and only if it detects all error configurations $E\subset Y$ with $\left\vert E\right\vert \leq2e$. Given this graphical construction of the encoding operator $\mathbf{v}_{G}$ in Eq. (\ref{g13}) and the graphical quantum error-correction conditions in Eq. (\ref{g14}), the main finding provided by Schlingemann and Werner can be restated as follows: given a finite Abelian group $\mathcal{G}$ and a weighted graph $G$, an error configuration $E\subset Y$ is detected by the quantum code $\mathbf{v}_{G}$ if and only if given that \begin{equation} d^{X}=0\text{ and, }\Xi_{E}^{X}d^{E}=0\text{,} \label{wc1} \end{equation} then, \begin{equation} \Xi_{X\cup E}^{I}d^{X\cup E}=0\Rightarrow d^{X\cup E}=0\text{,} \label{wc2} \end{equation} with $I=Y\backslash E$. In general, the condition $\Xi_{B}^{A}d^{B}=0$ is a set of equations, one for each integration vertex $a\in A$: for each vertex $a\in A$, we have to sum the $d_{b}$ for all vertices $b\in B$ connected to $a$, and equate it to zero. Furthermore, we underline that the fact that $\mathbf{v}_{G}$ is an isometry is equivalent to the detection of zero errors. In graph-theoretic terms, the detection of zero errors requires that $\Xi_{X}^{Y}d^{X}=0$ implies $d^{X}=0$. A code that satisfies Eq. (\ref{wc2}) given Eq. (\ref{wc1}) can be either nondegenerate or degenerate. We shall assume that Eq. (\ref{wc2}) with the additional constraints in Eq. (\ref{wc1}) denotes the weak version (necessary and sufficient conditions) of the graph-theoretic error detection conditions.
However, sufficient graph-theoretic error detection conditions can be introduced as well. Specifically, an error configuration $E$ is detectable by a quantum code if, \begin{equation} \Xi_{X\cup E}^{I}d^{X\cup E}=0\Rightarrow d^{X\cup E}=0\text{.} \label{sc} \end{equation} We shall denote conditions in Eq. (\ref{sc}) without any additional set of graph-theoretic constraints (like the ones provided in Eq. (\ref{wc1})) the strong version (sufficient conditions) of the graph-theoretic error detection conditions. We finally emphasize, as originally pointed out in \cite{werner}, that a code that satisfies Eq. (\ref{sc}) is nondegenerate.
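The strong condition in Eq. (\ref{sc}) amounts to a rank condition over GF(2): the submatrix of $\Xi$ with rows indexed by $I=Y\backslash E$ and columns indexed by $X\cup E$ must have trivial kernel. A sketch (our illustration; the coincidence matrix below, one input vertex attached to a ring of five outputs, is chosen for concreteness and is not necessarily an example from \cite{werner}):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def detects_strong(Xi, X, Y, E):
    """Strong (sufficient) detection condition: the submatrix of Xi with
    rows in I = Y \\ E and columns in X ∪ E has trivial kernel over GF(2)."""
    I = [y for y in Y if y not in E]
    cols = list(X) + list(E)
    return gf2_rank(Xi[np.ix_(I, cols)]) == len(cols)

# Illustrative graph: input vertex 0 attached to the ring 1-2-3-4-5
Xi = np.zeros((6, 6), dtype=int)
for i in range(1, 6):
    Xi[0, i] = Xi[i, 0] = 1          # input-output edges
    j = i % 5 + 1                    # next vertex on the ring
    Xi[i, j] = Xi[j, i] = 1

Xv, Yv = [0], [1, 2, 3, 4, 5]
assert detects_strong(Xi, Xv, Yv, [1, 2])            # two-vertex configuration detected
assert not detects_strong(Xi, Xv, Yv, [1, 2, 3, 4])  # too large: condition fails
```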
\subsection{The Schlingemann work}
Schlingemann was able to show that stabilizer codes, either binary or nonbinary, are equivalent to graph codes (and vice-versa). However, as far as our proposed scheme is concerned, the main finding uncovered in the S-work \cite{dirk} may be stated as follows. Consider a graph code with only one input vertex and $\left( n-1\right) $ output vertices. Its corresponding coincidence matrix $\Xi_{n\times n}$ can be written as, \begin{equation} \Xi_{n\times n}\overset{\text{def}}{=}\left( \begin{array} [c]{cc} 0_{1\times1} & B_{1\times\left( n-1\right) }^{\dagger}\\ B_{\left( n-1\right) \times1} & A_{\left( n-1\right) \times\left( n-1\right) } \end{array} \right) \text{,} \label{gammac} \end{equation} where $A_{\left( n-1\right) \times\left( n-1\right) }$ denotes the $\left( n-1\right) \times\left( n-1\right) $-symmetric adjacency matrix $\Gamma_{\left( n-1\right) \times\left( n-1\right) }$. Then, the graph code with symmetric coincidence matrix $\Xi_{n\times n}$ in Eq. (\ref{gammac}) is equivalent to the stabilizer code associated with the isotropic subspace $\mathcal{S}_{\text{isotropic}}$ defined as, \begin{equation} \mathcal{S}_{\text{isotropic}}\overset{\text{def}}{=}\left\{ \left( Ak\left\vert k\right. \right) :k\in\ker B^{\dagger}\right\} \text{,} \end{equation} that is, omitting unimportant phase factors, with the binary stabilizer group $\mathcal{S}_{\text{binary}}$, \begin{equation} \mathcal{S}_{\text{binary}}\overset{\text{def}}{=}\left\{ g_{k}=X^{k}Z^{Ak}:k\in\ker B^{\dagger}\right\} \text{.} \end{equation} Observe that a stabilizer operator $g_{k}\in\mathcal{S}_{\text{binary}}$ for an $n$-vertex graph has a $2n$-dimensional binary vector space representation such that $g_{k}\leftrightarrow v_{g_{k}}\overset{\text{def}}{=}\left( Ak\left\vert k\right. \right) $.
More generally, consider a $\left[ \left[ n,k,d\right] \right] $ binary quantum stabilizer code associated with a graph $G=\left( V\text{, }E\right) $ characterized by the $\left( n+k\right) \times\left( n+k\right) $ symmetric coincidence matrix $\Xi_{\left( n+k\right) \times\left( n+k\right) }$, \begin{equation} \Xi_{\left( n+k\right) \times\left( n+k\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cc} 0_{k\times k} & B_{k\times n}^{\dagger}\\ B_{n\times k} & \Gamma_{n\times n} \end{array} \right) \text{.} \label{losai-2} \end{equation} To attach the input vertices, $\Xi$ has to be constructed in such a manner that the following conditions are satisfied: i) first, $\det\Gamma_{n\times n}=0$ ($\operatorname{mod}2$); ii) second, the matrix $B_{k\times n}^{\dagger}$ must define a $k$-dimensional subspace in $\mathbf{F}_{2}^{n}$ spanned by $k$ linearly independent binary vectors of length $n$ not included in the span of the row vectors defining the symmetric adjacency matrix $\Gamma_{n\times n}$, \begin{equation} \text{Span}\left\{ \vec{v}_{1}\text{,..., }\vec{v}_{k}\right\} \cap\text{Span}\left\{ \vec{v}_{\Gamma}^{\left( 1\right) }\text{,..., }\vec{v}_{\Gamma}^{\left( n\right) }\text{ }\right\} =\left\{ \emptyset\right\} \text{,} \end{equation} where $\vec{v}_{j}\in\mathbf{F}_{2}^{n}$ for $j\in\left\{ 1\text{,..., }k\right\} $ and $\vec{v}_{\Gamma}^{\left( i\right) }\in\mathbf{F}_{2}^{n}$ for $i\in\left\{ 1\text{,..., }n\right\} $; iii) third, Span$\left\{ \vec{v}_{1}\text{,..., }\vec{v}_{k}\right\} $ contains a vector $\vec{v}_{B}\in\mathbf{F}_{2}^{n}$ such that $\vec{v}_{B}\cdot\vec{v}_{\Gamma}^{\left( i\right) }=0$ for any $i\in\left\{ 1\text{,..., }n\right\} $. Condition i) is needed to avoid disconnected graphs. Condition ii) is required to have a properly defined isometry capable of detecting zero errors.
Finally, condition iii) is needed to generate an isotropic subspace (or, in other words, an Abelian subgroup of the Pauli group, the so-called stabilizer group) with, \begin{equation} \left( \Gamma\vec{v}_{\Gamma}^{\left( l\right) }\text{, }\vec{v}_{\Gamma }^{\left( l\right) }\right) \odot\left( \Gamma\vec{v}_{\Gamma}^{\left( m\right) }\text{, }\vec{v}_{\Gamma}^{\left( m\right) }\right) =0\text{, } \end{equation} for any pair $\left( \vec{v}_{\Gamma}^{\left( l\right) }\text{, }\vec {v}_{\Gamma}^{\left( m\right) }\right) $ in $\left\{ \vec{v}_{\Gamma }^{\left( 1\right) }\text{,..., }\vec{v}_{\Gamma}^{\left( n\right) }\text{ }\right\} $ where the symbol $\odot$ denotes the symplectic product \cite{gaitan}.
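The isotropy of $\mathcal{S}_{\text{isotropic}}$ can be verified by brute force on a small instance. The sketch below (our illustration: the ring on five output vertices with the input attached to every output, so that $B$ is the all-ones column) enumerates $\ker B^{\dagger}$ over GF(2) and checks that all symplectic products vanish:

```python
import numpy as np
from itertools import product

# Ring adjacency A on 5 output vertices; input attached to every output
n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
B = np.ones((n, 1), dtype=int)

# Brute-force ker B^T over GF(2): here, the even-weight vectors
kernel = [np.array(k) for k in product([0, 1], repeat=n)
          if (((B.T @ np.array(k)) % 2) == 0).all()]
assert len(kernel) == 2 ** (n - 1)

# Stabilizer vectors (Ak | k); pairwise symplectic products must vanish,
# which holds for any symmetric A
vecs = [np.concatenate([(A @ k) % 2, k]) for k in kernel]
for u in vecs:
    for w in vecs:
        assert (u[:n] @ w[n:] + u[n:] @ w[:n]) % 2 == 0
```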
As a final remark, we point out that in a more general framework like the one presented in \cite{dirk}, we could consider three types of vertices: input, auxiliary and output vertices. The input vertices label the input systems and are used for encoding. The auxiliary vertices are inputs used as auxiliary degrees of freedom for implementing additional constraints for the protected code subspace. Finally, output vertices simply label the output quantum systems.
\subsection{The Van den Nest-work}
The main achievement of the VdN-work in \cite{bart} is the construction of a very useful algorithmic procedure for transforming any binary quantum stabilizer code into a graph code. Before describing this procedure, we remark that it is straightforward to check that a graph code given by the adjacency matrix $\Gamma$ corresponds to a stabilizer matrix $\mathcal{S}_{b}\overset{\text{def}}{=}\left( \Gamma\left\vert I\right. \right) $ and transpose stabilizer $\mathcal{T}\overset{\text{def}}{=}\mathcal{S}_{b}^{\text{T}}=\binom{\Gamma}{I}$. That said, consider a quantum stabilizer code with stabilizer matrix, \begin{equation} \mathcal{S}_{b}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) \text{,} \label{ssss} \end{equation} and transpose stabilizer $\mathcal{T}$ given by, \begin{equation} \mathcal{T}\overset{\text{def}}{=}\mathcal{S}_{b}^{\text{T}}=\binom{Z^{\text{T}}}{X^{\text{T}}}\equiv\binom{A}{B}\text{.} \label{t} \end{equation} Let us clarify the construction of $\mathcal{S}_{b}$ in Eq. (\ref{ssss}). Given a set of generators of the stabilizer, the stabilizer matrix $\mathcal{S}_{b}$ is constructed by assembling the binary representations of the generators as the rows of a full rank $\left( n\times2n\right) $-matrix. The transpose of the binary stabilizer matrix (i.e., the transpose stabilizer) $\mathcal{T}$ is simply the full rank $\left( 2n\times n\right) $-matrix obtained from $\mathcal{S}_{b}$ after exchanging rows with columns. The goal of the algorithmic procedure is to convert the transpose stabilizer $\mathcal{T}$ in Eq. (\ref{t}) of a given stabilizer code into the transpose stabilizer $\mathcal{T}^{\prime}=$ $\binom{A^{\prime}}{B^{\prime}}$ of an equivalent graph code. Then, the matrix $A^{\prime}$ will represent the adjacency matrix of the corresponding graph. Two scenarios may occur: i) $B$ is an $n\times n$ invertible matrix; ii) $B$ is not an invertible matrix.
In the first scenario, where $B$ is invertible, right-multiplication of the transpose stabilizer $\mathcal{T}=\binom{A}{B}$ by $B^{-1}$ performs a change of basis, an operation that provides us with an equivalent stabilizer code, \begin{equation} \mathcal{T}B^{-1}=\binom{A}{B}B^{-1}=\binom{AB^{-1}}{I}\text{.} \end{equation} The matrix $AB^{-1}$ is then the adjacency matrix of the corresponding graph. Furthermore, if the matrix $AB^{-1}$ has nonzero diagonal elements, we can simply set these elements to zero in order to satisfy the standard requirements for the adjacency matrix of a simple graph. In the second scenario, where $B$ is not invertible, we can always find a suitable local Clifford unitary transformation $U$ such that \cite{bart}, \begin{equation} \mathcal{S}_{b}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) \overset{U}{\rightarrow}\mathcal{S}_{b}^{\prime}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. \right) \text{,} \end{equation} and, \begin{equation} \mathcal{T}\overset{\text{def}}{=}\mathcal{S}_{b}^{\text{T}}=\binom{Z^{\text{T}}}{X^{\text{T}}}\equiv\binom{A}{B}\overset{U}{\rightarrow}\mathcal{T}^{\prime}\overset{\text{def}}{=}\mathcal{S}_{b}^{\prime\text{T}}=\binom{Z^{\prime\text{T}}}{X^{\prime\text{T}}}\equiv\binom{A^{\prime}}{B^{\prime}}\text{,} \end{equation} with $\det B^{\prime}\neq0$. Therefore, right-multiplying $\mathcal{T}^{\prime}$ by $B^{\prime-1}$, we get \begin{equation} \mathcal{T}^{\prime}B^{\prime-1}=\binom{A^{\prime}}{B^{\prime}}B^{\prime-1}=\binom{A^{\prime}B^{\prime-1}}{I}\text{.} \end{equation} Thus, the adjacency matrix of the corresponding graph becomes $A^{\prime}B^{\prime-1}$.
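The two scenarios above can be turned into a short computational sketch. The following Python fragment is our own illustrative implementation, not code from \cite{bart}, and the function names are ours; it inverts $B$ over $\mathbf{F}_{2}$ by Gauss-Jordan elimination and returns the candidate adjacency matrix $AB^{-1}$ with its diagonal set to zero, signaling with an exception the second scenario, in which a local Clifford change of basis must be applied first.

```python
import numpy as np

def gf2_inv(B):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination.

    Raises ValueError if B is singular over GF(2) (scenario ii), in which
    case a local Clifford change of basis is needed before inverting."""
    n = B.shape[0]
    M = np.concatenate([B % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        if pivot is None:
            raise ValueError("B is singular over GF(2)")
        M[[col, pivot]] = M[[pivot, col]]      # move the pivot row up
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]                 # eliminate the column mod 2
    return M[:, n:]

def adjacency_from_transpose_stabilizer(A, B):
    """Return the adjacency matrix A B^{-1} (mod 2) with zeroed diagonal."""
    G = A @ gf2_inv(B) % 2
    np.fill_diagonal(G, 0)
    return G
```

For instance, for a graph code one has $\mathcal{T}=\binom{\Gamma}{I}$, and the procedure simply returns $\Gamma$ itself.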
The above-described algorithmic procedure for transforming any binary quantum stabilizer code into a graph code is very important for our proposed scheme, as will become clear in the next section.
\section{The scheme}
In this section, we formally describe our scheme and apply it to the graphical construction of the Leung et \textit{al}. four-qubit quantum code for the error correction of single amplitude damping errors.
\subsection{Description of the scheme}
We emphasize that our ultimate goal is the construction of classical graphs $G\left( V\text{, }E\right) $ with both input and output vertices defined by the coincidence matrix $\Xi$, in order to verify the error-correcting capabilities of the corresponding quantum stabilizer codes via the graph-theoretic error correction conditions advocated in the SW-work. To achieve this goal, we propose a systematic scheme based on a very simple idea: the CWS-, VdN- and S-works must be combined in such a manner that, with respect to our ultimate goal, the weak points of one method are compensated by the strong points of another.
\subsubsection{Step one}
The CWS formalism offers a very general framework where both binary/nonbinary and/or additive/nonadditive quantum codes can be described. For this reason, the starting point of our scheme is the realization of binary stabilizer codes as CWS quantum codes. Although this is a relatively straightforward step, the CWS code that one obtains is not, in general, in the standard canonical form. From the CWS-work in \cite{cross}, it is known that there exists a local Clifford (unitary) operation that allows one, in principle, to write down the CWS\ code that realizes the binary stabilizer code in standard form. However, the CWS-work does not suggest any algorithmic procedure to achieve this standard form. In the absence of a systematic procedure, uncovering a local Clifford unitary $U$ such that $\mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}U\mathcal{S}_{\text{CWS}}U^{\dagger}$ (every element $s^{\prime}\in\mathcal{S}_{\text{CWS}}^{\prime}$ can be written as $UsU^{\dagger}$ for some $s\in\mathcal{S}_{\text{CWS}}$) may constitute a very tedious task. Fortunately, we can avoid this. Before explaining how, let us introduce the codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) $ corresponding to the codeword stabilizer $\mathcal{S}_{\text{CWS}}$.
\subsubsection{Step two}
Two main achievements of the VdN-work in \cite{bart} are the following: first, each stabilizer state is equivalent to a graph state under local Clifford operations; second, an algorithmic procedure for transforming any binary quantum stabilizer code into a graph code is provided. Observe that a stabilizer state can be regarded as a quantum code with parameters $\left[ \left[ n\text{, }0\text{, }d\right] \right] $. Our idea is to exploit the algorithmic procedure provided by the VdN-work by translating its starting point into the CWS language. To achieve this, we replace the generator matrix of the stabilizer state with the codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ corresponding to the codeword stabilizer $\mathcal{S}_{\text{CWS}}$ of the CWS code that realizes the binary stabilizer code whose graphical depiction is being sought. This way, we can simply apply the VdN algorithmic procedure to uncover the standard form of the CWS code and, if necessary, the explicit expression for the local Clifford (unitary) operation that links the non-standard to the standard form of the CWS code. After applying this VdN algorithmic procedure adapted to the CWS formalism, we can construct a graph characterized by a symmetric adjacency matrix $\Gamma$ with only output vertices. How do we attach possible input vertices to this graph associated with an $\left[ \left[ n\text{, }k\text{, }d\right] \right] $ binary stabilizer code with $k\neq0$?
\subsubsection{Step three}
Unlike the VdN-work, whose findings are limited to binary quantum states, the S-work extends its applicability to both binary and nonbinary quantum codes. In particular, in \cite{dirk} it was shown that any stabilizer code is a graph code and vice-versa. However, in the S-work an analog of the algorithmic procedure for transforming any binary quantum stabilizer code into a graph code is missing. Despite this fact, the S-work does provide a very useful result for our proposed scheme. Namely, it is shown that a graph code with associated graph $G\left( V,E\right) $ with both input and output vertices and corresponding symmetric coincidence matrix $\Xi$ is equivalent to a stabilizer code associated with a suitable isotropic subspace $\mathcal{S}_{\text{isotropic}}$. Recall that at the end of the above-mentioned step two, we are basically given both the isotropic subspace and the graph without input vertices, that is, the symmetric adjacency matrix $\Gamma$ embedded in the more general coincidence matrix $\Xi$. Therefore, by exploiting this specific finding of the S-work in the reverse direction (we are allowed to do so since a graph code is equivalent to a stabilizer code and vice-versa), we can construct the full coincidence matrix $\Xi$ and finally attach the input vertices to the graph. What can we do with a graphical depiction of a binary stabilizer code?
\subsubsection{Step three+one}
In the SW-work, outstanding graphical QEC conditions were introduced \cite{werner}. However, these conditions were only partially employed, namely for quantum codes associated with graphs, and those codes need not necessarily be stabilizer codes. By logically combining the CWS-, VdN- and S-works, the power of the graphical QEC conditions in \cite{werner} can be fully exploited in a systematic manner in both directions: from graph codes to stabilizer codes and vice-versa.
In summary, given a binary quantum stabilizer code $\mathcal{C} _{\text{stabilizer}}$, the systematic procedure that we propose can be described in $3+1=4$ points as follows:
\begin{itemize} \item Realize the stabilizer code $\mathcal{C}_{\text{stabilizer}}$ as a CWS quantum code $\mathcal{C}_{\text{CWS}}$;
\item Apply the VdN-work adapted to the CWS formalism to identify the standard form of the CWS code that realizes the stabilizer code whose graphical depiction is being sought. In other words, find the graph $G$ with only output vertices characterized by the symmetric adjacency matrix $\Gamma$ associated with $\mathcal{C}_{\text{CWS}}$ in the standard form;
\item Exploit the S-work as explained to identify the extended graph with both input and output vertices characterized by the symmetric coincidence matrix $\Xi$ associated with the isometric encoding map that defines $\mathcal{C} _{\text{CWS}}$;
\item Use the SW-work to apply the graph-theoretic error-correction conditions to the extended graph in order to explicitly verify the error-correcting capabilities of the corresponding $\mathcal{C}_{\text{stabilizer}}$ realized as a $\mathcal{C}_{\text{CWS}}$ quantum code. \end{itemize}
\subsection{Application of the scheme}
We think there is no better way to describe and understand the effectiveness of our proposed scheme than by simply working out in detail a simple illustrative example. In what follows, we wish to uncover the graph associated with the Leung et \textit{al}. $\left[ \left[ 4\text{, }1\right] \right] $ four-qubit stabilizer (nondegenerate) quantum code \cite{debbie}. Several explicit constructions of graphs for various stabilizer codes characterized by either single or multi-qubit encoding operators appear in the Appendices: the three-qubit repetition code, the perfect $1$-erasure correcting four-qubit code, the perfect $1$-error correcting five-qubit code, $1$-error correcting six-qubit quantum degenerate codes, the CSS seven-qubit stabilizer code, the Shor nine-qubit stabilizer code, the Gottesman $2$-error correcting eleven-qubit code, $\left[ \left[ 4\text{, }2\text{, }2\right] \right] $ stabilizer codes, and, finally, the Gottesman $\left[ \left[ 8\text{, }3\text{, }3\right] \right] $ stabilizer code.
\subsubsection{Step one}
Recall that the stabilizer $\mathcal{S}_{\text{b}}^{\text{Leung}}$ of the Leung et \textit{al}. $\left[ \left[ 4\text{, }1\right] \right] $ code is given by \cite{fletcher}, \begin{equation} \mathcal{S}_{\text{b}}^{\text{Leung}}\overset{\text{def}}{=}\left\langle X^{1}X^{2}X^{3}X^{4}\text{, }Z^{1}Z^{2}\text{, }Z^{3}Z^{4}\right\rangle \text{,} \end{equation} with a suitable logical $\bar{Z}$ operation given by $\bar{Z}=Z^{1}Z^{3}$. Therefore, when regarded within the CWS framework \cite{cross}, the Leung et \textit{al}. code is equivalent to a CWS\ code defined with codeword stabilizer, \begin{equation} \mathcal{S}_{\text{CWS}}^{\text{Leung}}\overset{\text{def}}{=}\left\langle X^{1}X^{2}X^{3}X^{4}\text{, }Z^{1}Z^{2}\text{, }Z^{3}Z^{4}\text{, }Z^{1} Z^{3}\right\rangle \text{.} \label{scws} \end{equation}
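In the binary $\left( Z\left\vert X\right. \right) $ representation, each generator in Eq. (\ref{scws}) contributes one row. The bookkeeping can be sketched as follows (a minimal illustration; the helper function and the one-letter-per-qubit string encoding are our own conventions, not part of the CWS-work):

```python
import numpy as np

def pauli_to_binary(pauli):
    """Map an n-qubit Pauli string, e.g. 'XXXX' or 'ZZII', to its binary
    (z|x) row: z_i = 1 iff letter i is Z or Y, x_i = 1 iff it is X or Y."""
    z = [1 if p in 'ZY' else 0 for p in pauli]
    x = [1 if p in 'XY' else 0 for p in pauli]
    return np.array(z + x)

# Codeword stabilizer generators of the Leung et al. [[4,1]] code, Eq. (scws)
generators = ['XXXX', 'ZZII', 'IIZZ', 'ZIZI']
H_cws = np.array([pauli_to_binary(g) for g in generators])
```

The Hadamard rotation $U=I^{1}\otimes H^{2}\otimes H^{3}\otimes H^{4}$ used in the next step then amounts to swapping the $z$ and $x$ entries on qubits $2$, $3$ and $4$ of every row.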
\subsubsection{Step two}
Taking into consideration Eq. (\ref{scws}), we observe that $\mathcal{S}_{\text{CWS}}^{\text{Leung}}$ is local Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}$ given by, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}\overset{\text{def}}{=}U\mathcal{S}_{\text{CWS}}^{\text{Leung}}U^{\dagger}\text{,} \end{equation} with $U\overset{\text{def}}{=}I^{1}\otimes H^{2}\otimes H^{3}\otimes H^{4}$ where $H$ denotes the Hadamard transformation. We notice that the codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}}$ associated with the codeword stabilizer $\mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}$ reads, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. \right) =\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 \end{array} \right. \right) \text{,} \end{equation} with $\det X^{\prime}\neq0$. Therefore, we can find a suitable graph with output vertices only that is associated with the Leung et \textit{al}. code by applying the VdN algorithmic procedure. The transpose of $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}}$ becomes, \begin{equation} \mathcal{T}^{\prime}\overset{\text{def}}{=}\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime\text{Leung}}}^{\text{T}}\equiv\binom{A^{\prime}}{B^{\prime}}=\left( \begin{array} [c]{c} \underline{ \begin{array} [c]{cccc} 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} }\\ \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 \end{array} \end{array} \right) \text{.} \label{bprimo} \end{equation} From Eq. 
(\ref{bprimo}) it turns out that $B^{\prime}$ is a $4\times4$ invertible matrix with inverse given by, \begin{equation} B^{\prime-1}=\left( \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 \end{array} \right) \text{.} \end{equation} Finally, the adjacency matrix $\Gamma$ of a graph that realizes the Leung et \textit{al}. code is given by $\Gamma=A^{\prime}B^{\prime-1}$, that is \begin{equation} \Gamma=A^{\prime}B^{\prime-1}=\left( \begin{array} [c]{cccc} 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \left( \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 \end{array} \right) =\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \overset{\text{def}}{=}\Gamma_{\text{Leung}}\text{.} \label{gl} \end{equation} As a side remark, we recall that a graph uniquely determines a graph state, and the graph states determined by two graphs are equivalent up to some local Clifford transformations if and only if these two graphs are related to each other via local complementations (LC) \cite{bart}. Avoiding unnecessary formalities, we recall that a local complementation of a graph on a vertex $v$ can be regarded as the operation where, in the neighborhood of $v$, we connect all the disconnected vertices and disconnect all the connected vertices. For instance, applying a local complementation on vertex $v=1$ of the graph with adjacency matrix $\Gamma$ in Eq. 
(\ref{gl}), we obtain \begin{equation} \Gamma_{\text{Leung}}\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \overset{\text{LC}_{v=1}}{\longrightarrow}\Gamma_{\text{Leung}}^{\prime}\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 \end{array} \right) \text{.} \label{tt} \end{equation} It turns out that $\Gamma_{\text{Leung}}$ and $\Gamma_{\text{Leung}}^{\prime}$ are the only two adjacency matrices corresponding to the only two connected graphs, up to graph isomorphisms, that realize the Leung et \textit{al}. $\left[ \left[ 4\text{, }1\right] \right] $ code. As a matter of fact, recall that the LC orbit $\mathbf{L}=\left[ G\right] $ of a graph $G$ is the set of all non-isomorphic graphs, including $G$ itself, that can be transformed into $G$ by any sequence of local complementations and vertex permutations. Let $\mathcal{G}_{n}$ denote the set of all non-isomorphic simple undirected connected graphs on $n$ vertices. Let $\mathcal{L}_{n}\overset{\text{def}}{=}\left\{ \mathbf{L}_{1}\text{,..., }\mathbf{L}_{k}\right\} $ be the set of all distinct orbits of graphs in $\mathcal{G}_{n}$. All $\mathbf{L}\in\mathcal{L}_{n}$ are disjoint and $\mathcal{L}_{n}$ constitutes a partitioning of $\mathcal{G}_{n}$, that is to say \begin{equation} \mathcal{G}_{n}\overset{\text{def}}{=}
{\displaystyle\bigcup\limits_{i=1}^{k}}
\mathbf{L}_{i}\text{.} \end{equation} Two graphs, $G_{1}$ and $G_{2}$, are equivalent with respect to local complementations and vertex permutations if one of the graphs is in the LC orbit of the other, for instance $G_{2}\in\left[ G_{1}\right] $. In \cite{danielsen1}, the set $\mathcal{L}_{4}$ of all LC orbits on $4$ vertices was generated. It was shown that despite the fact that there are $2^{\binom{4}{2}}=64$ undirected simple graphs on $4$ vertices, the number of non-isomorphic connected graphs is only $\left\vert \mathcal{G}_{4}\right\vert =6$. Furthermore, it was shown that there are only $\left\vert \mathcal{L}_{4}\right\vert =2$ distinct LC orbits on $4$ vertices, $\mathcal{L}_{4}=\left\{ \mathbf{L}_{1}\text{, }\mathbf{L}_{2}\right\} $ with, \begin{equation} \Gamma_{\mathbf{L}_{1}}^{\left( 1\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 1\\ 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 0 \end{array} \right) \text{, }\Gamma_{\mathbf{L}_{1}}^{\left( 2\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 0\\ 1 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \text{, }\Gamma_{\mathbf{L}_{1}}^{\left( 3\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \text{, }\Gamma_{\mathbf{L}_{1}}^{\left( 4\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0 \end{array} \right) \text{, } \end{equation} and, \begin{equation} \Gamma_{\mathbf{L}_{2}}^{\left( 5\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 \end{array} \right) \text{, }\Gamma_{\mathbf{L}_{2}}^{\left( 6\right) }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \text{.} \end{equation} We stress that $\Gamma_{\mathbf{L}_{2}}^{\left( 5\right) }$ and $\Gamma_{\mathbf{L}_{2}}^{\left( 6\right) }$ correspond to $\Gamma_{\text{Leung}}^{\prime}$ and $\Gamma_{\text{Leung}}$, respectively. Therefore, we have uncovered that the Leung et \textit{al}. $\left[ \left[ 4\text{, }1\right] \right] $ code can be realized by graphs that belong to the orbit $\mathbf{L}_{2}$ of $\mathcal{L}_{4}$ in $\mathcal{G}_{4}=\mathbf{L}_{1}\cup\mathbf{L}_{2}$, the set of all non-isomorphic undirected connected graphs on $4$ vertices.
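Both the matrix computation in Eq. (\ref{gl}) and the local complementation in Eq. (\ref{tt}) are easily double-checked numerically. The sketch below (our own illustration, with all arithmetic mod $2$ and vertices zero-indexed, so $v=1$ in the text corresponds to index $0$) verifies the stated inverse of $B^{\prime}$, the adjacency matrix $\Gamma_{\text{Leung}}$, and the LC relation between the two graphs:

```python
import numpy as np

def local_complement(Gamma, v):
    """Toggle every edge between pairs of neighbors of vertex v (mod 2)."""
    G = Gamma.copy()
    nbrs = np.flatnonzero(G[v])
    for i in nbrs:
        for j in nbrs:
            if i != j:
                G[i, j] ^= 1
    return G

# Blocks of the transposed codeword stabilizer matrix, Eq. (bprimo)
A = np.array([[0, 1, 0, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
B = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]])
Binv = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 1]])

Gamma_leung = A @ Binv % 2                        # adjacency matrix, Eq. (gl)
Gamma_prime = local_complement(Gamma_leung, 0)    # LC at vertex 1, Eq. (tt)
```

Applying the same local complementation twice returns the original star graph, as expected for an involution.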
For the sake of completeness, we also point out that all graphs on up to $12$ vertices have been classified under LCs and graph isomorphisms \cite{parker}. Furthermore, the number of graphs on $n$ unlabeled vertices or the number of connected graphs with $n$ vertices can be found in \cite{sloane}. Finally, a very recent database of interesting graphs appears in \cite{dam}.
\subsubsection{Step three}
Let us consider the symmetric adjacency matrix $\Gamma_{\text{Leung}}$ as given in Eq. (\ref{tt}). How do we find the enlarged graph with corresponding symmetric coincidence matrix $\Xi_{\text{Leung}}$ given $\Gamma_{\text{Leung}}$? Recall that the graph related to $\Gamma_{\text{Leung}}$ realizes a stabilizer code which is locally Clifford equivalent to the Leung et \textit{al}. code with standard binary stabilizer matrix $\mathcal{S}_{\text{b}}^{\prime}$ given by \begin{equation} \mathcal{S}_{\text{b}}^{\prime}\overset{\text{def}}{=}\left\langle X^{1}Z^{2}Z^{3}Z^{4}\text{, }Z^{1}X^{2}\text{, }X^{3}X^{4}\right\rangle \text{.} \end{equation} Putting $g_{1}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}Z^{4}$, $g_{2}\overset{\text{def}}{=}Z^{1}X^{2}$ and $g_{3}\overset{\text{def}}{=}X^{3}X^{4}$, we have \begin{equation} \mathcal{S}_{b}^{\prime}=\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\right\rangle =\left\{ I\text{, }g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{1}g_{2}\text{, }g_{1}g_{3}\text{, }g_{2}g_{3}\text{, }g_{1}g_{2}g_{3}\right\} \text{.} \label{sbp} \end{equation} The $8$-dimensional binary vector representation of these stabilizer operators is given by, \begin{align} I & \leftrightarrow v_{I}=\left( 0000\left\vert 0000\right. \right) \text{, }g_{1}\leftrightarrow v_{g_{1}}=\left( 0111\left\vert 1000\right. \right) \text{, }g_{2}\leftrightarrow v_{g_{2}}=\left( 1000\left\vert 0100\right. \right) \text{,}\nonumber\\ & \nonumber\\ g_{3} & \leftrightarrow v_{g_{3}}=\left( 0000\left\vert 0011\right. \right) \text{, }g_{1}g_{2}\leftrightarrow v_{g_{1}g_{2}}=\left( 1111\left\vert 1100\right. \right) \text{, }g_{1}g_{3}\leftrightarrow v_{g_{1}g_{3}}=\left( 0111\left\vert 1011\right. \right) \text{,}\nonumber\\ & \nonumber\\ g_{2}g_{3} & \leftrightarrow v_{g_{2}g_{3}}=\left( 1000\left\vert 0111\right. \right) \text{, }g_{1}g_{2}g_{3}\leftrightarrow v_{g_{1}g_{2}g_{3}}=\left( 1111\left\vert 1111\right. 
\right) \text{.} \end{align} Recall that for a graph code with both $1$-input and $n$-output vertices, its corresponding coincidence matrix $\Xi_{\left( n+1\right) \times\left( n+1\right) }$ has the form expressed in Eq. (\ref{losai-2}). The graph code with symmetric coincidence matrix $\Xi_{\left( n+1\right) \times\left( n+1\right) }$ is equivalent to a stabilizer code associated with the isotropic subspace $\mathcal{S}_{\text{isotropic}}$ defined as, \begin{equation} \mathcal{S}_{\text{isotropic}}\overset{\text{def}}{=}\left\{ \left( Ak\left\vert k\right. \right) :k\in\ker B^{\dagger}\right\} \text{,} \end{equation} that is, omitting unimportant phase factors, with the binary stabilizer group $\mathcal{S}_{b}$, \begin{equation} \mathcal{S}_{b}\overset{\text{def}}{=}\left\{ g_{k}=X^{k}Z^{Ak}:k\in\ker B^{\dagger}\right\} \text{.} \end{equation} In our case, in agreement with the four conditions for attaching input vertices as outlined in the discussion of the S-work, it turns out that \begin{equation} B_{4\times1}\overset{\text{def}}{=}\left( \begin{array} [c]{c} 0\\ 0\\ 1\\ 1 \end{array} \right) \text{ and }A\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \equiv\Gamma_{\text{Leung}}\text{.} \label{b} \end{equation} Finally, the enlarged graph is defined by the following symmetric coincidence matrix $\Xi_{\text{Leung}}$, \begin{equation} \Xi_{\text{Leung}}\overset{\text{def}}{=}\left( \begin{array} [c]{ccccc} 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 1\\ 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 \end{array} \right) \text{.} \label{glb} \end{equation} An additional self-consistency check that substantiates the correctness of $\Xi_{\text{Leung}}$ in Eq. (\ref{glb}) is represented by the fact that any $g_{k}$ in $\mathcal{S}_{b}^{\prime}$ in Eq. 
(\ref{sbp}) has an $8$-dimensional binary vector representation of the form $v_{g_{k}}\overset{\text{def}}{=}\left( \Gamma_{\text{Leung}}k_{g_{k}}\left\vert k_{g_{k}}\right. \right) $, where $k_{g_{k}}\in\ker B^{\dagger}$ and $B$ is given in Eq. (\ref{b}).
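The self-consistency check just mentioned can also be automated. The following sketch (our own illustration) enumerates $\ker B^{\dagger}$ over $\mathbf{F}_{2}$, generates the eight binary vectors $\left( \Gamma_{\text{Leung}}k\left\vert k\right. \right) $ listed above, and assembles the coincidence matrix $\Xi_{\text{Leung}}$ of Eq. (\ref{glb}) from its blocks:

```python
import numpy as np
from itertools import product

Gamma = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
b = np.array([0, 0, 1, 1])               # the column vector B of Eq. (b)

# ker B^dagger over GF(2): all k in F_2^4 with b . k = 0 (mod 2)
kernel = [np.array(k) for k in product([0, 1], repeat=4)
          if (b @ np.array(k)) % 2 == 0]

# Binary representation (Gamma k | k) of every element of the stabilizer
stabilizer_vectors = {tuple(np.concatenate([Gamma @ k % 2, k]))
                      for k in kernel}

# Coincidence matrix: the input vertex occupies the first row and column
Xi = np.block([[np.zeros((1, 1), dtype=int), b[None, :]],
               [b[:, None], Gamma]])
```

The set `stabilizer_vectors` reproduces exactly the eight vectors $v_{g}$ displayed above, confirming the consistency of $\Gamma_{\text{Leung}}$, $B$ and $\Xi_{\text{Leung}}$.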
\section{Final remarks}
In this article, we proposed a systematic scheme for the construction of graphs with both input and output vertices associated with arbitrary binary stabilizer codes. The scheme is characterized by three main steps: first, the stabilizer code is realized as a codeword-stabilized (CWS) quantum code; second, the standard form of the CWS code is uncovered; third, the input vertices are attached to the graphs. To check the effectiveness of the scheme, we discussed several graphical constructions of various useful stabilizer codes characterized by single and multi-qubit encoding operators (for details, see the appendices). In particular, the error-correction capabilities of such quantum codes are verified in graph-theoretic terms as originally advocated by Schlingemann and Werner (for details, see the appendices).
Finally, in what follows, possible generalizations of our scheme for the graphical construction of both (stabilizer and nonadditive) nonbinary and continuous-variable quantum codes will be briefly addressed.
The scheme proposed is limited to binary stabilizer codes. What about nonbinary and continuous-variable (CV) codes? What about nonadditive codes? We point out the following three aspects:
\begin{itemize} \item \emph{From additive to nonadditive case}. The codeword-stabilized quantum code formalism presents a unifying approach to both additive and nonadditive quantum error-correcting codes, for both binary and nonbinary states \cite{chen}.
\item \emph{From binary to nonbinary case}. The stabilizer formalism, graph states and quantum error correcting codes for $d$-dimensional quantum systems have been extensively considered in \cite{dirk2} and \cite{dirk3}. However, as pointed out in \cite{hein2}, no straightforward extension of the stabilizer formalism in terms of generators within the Pauli group is possible for $d$-level systems. As a consequence, it is possible that results obtained within the binary framework are no longer valid when taking into consideration weighted graph states. The generalizations of the Pauli group, the Clifford group, and the stabilizer states for qudits in a Hilbert space of arbitrary dimension $d$ appear in \cite{bart2}. When moving to the nonbinary case, new features emerge. For instance, the symmetric adjacency matrix no longer contains binary entries, as it does for the simple graphs associated with qubit systems. The generalizations of the Pauli operators, the so-called Weyl operators, are no longer Hermitian. The finite field $\mathbf{F}_{2}$ is replaced by the finite field of prime order $d$ and all arithmetic operations are defined modulo $d$. The dimension $d$ can be naturally generalized to a prime power dimension $d=p^{r}$ with $p$ prime and $r$ an integer. If, however, the underlying integer ring is no longer a field, one loses the vector space structure of $\mathbf{F}_{d}$, which demands some caution with respect to the concept of a basis. If $d$ contains multiple prime factors, the stabilizer, consisting of $d^{N}$ different elements, is in general no longer generated by a set of only $N$ generators. For a minimal generating set, more elements, $N\leq m\leq2N$, of the stabilizer might be needed, as pointed out in \cite{bart2}. Furthermore, it is possible to show that the action of the local (generalized) Clifford group on nonbinary stabilizer states can be translated into operations on graphs. 
However, unlike the binary case, the single local complementation is replaced by a pair of different graph-theoretic operations. Furthermore, an efficient polynomial-time algorithm to verify whether two graph states are locally Clifford equivalent in the nonbinary case is available \cite{beigi2}. Despite these challenges, important new advances have recently been achieved. For instance, an explicit method of going from qudit CSS codes to qudit graph codes, including all the encoding and decoding procedures, has been presented in \cite{anne}.
\item \emph{From discrete to continuous case}. A remarkable difference between discrete-variable and continuous-variable (DV and CV, respectively) quantum information is that while the quantum states and unitary transformations involved are described by integer-valued parameters in the DV case, they are characterized by \emph{real}-valued parameters in the CV case. The continuous-variable analogs of the Pauli and Clifford algebras and groups, together with sets of gates that can efficiently simulate any arbitrary unitary transformation in these groups, were defined in \cite{sam2}. The standard Pauli group for CV quantum computation on $n$ coupled oscillators is the Heisenberg-Weyl group $\mathcal{HW}\left( n\right) $, which consists of phase-space displacement operators for the $n$ oscillators. Unlike the discrete Pauli group for qubits, the group $\mathcal{HW}\left( n\right) $ is a continuous Lie group, and can therefore only be generated by a set of continuously parametrized operators. Furthermore, the Clifford group for CV is the semidirect product of the symplectic group and the Heisenberg-Weyl group, $Sp\left( 2n\text{, }\mathbb{R}\right) \ltimes\mathcal{HW}\left( n\right) $, consisting of all phase-space translations along with all one-mode and two-mode squeezing transformations \cite{sam2}. This group is generated by inhomogeneous quadratic polynomials in the canonical operators. For DV, it is possible to generate the Clifford group using only the CNOT, Hadamard and phase gates. However, in the CV case, the analogs of these gates (namely, the $SUM$, the Fourier $F$ and the phase $P\left( \eta\right) $ gates with $\eta\in\mathbb{R}$) are all elements of $Sp\left( 2n\text{, }\mathbb{R}\right) $. They are generated by homogeneous quadratic Hamiltonians only. Thus, they belong to a subgroup of the Clifford group. In order to generate the entire Clifford group, one requires a continuous $\mathcal{HW}\left( 1\right) $ transformation (i.e., a linear Hamiltonian that generates a one-parameter subgroup of $\mathcal{HW}\left( 1\right) $) such as the Pauli operator $X\left( q\right) $ with $q\in\mathbb{R}$. Finally, the Clifford group in the CV case is generated by the set $\left\{ SUM\text{, }F\text{, }P\left( \eta\right) \text{, }X\left( q\right) :\eta\text{, }q\in\mathbb{R}
\right\} $. Continuous-variable graph states were proposed in \cite{zang2, peter1}. It is of great relevance to understand the graph-theoretic transformation rules that describe both local unitary and local Clifford unitary equivalences of arbitrary CV graph states. For a particular class of CV graph states, the so-called CV four-mode unweighted graph states, such transformation rules have been uncovered in \cite{zang3}. It turns out that even for such a restricted class of states, the corresponding local Clifford unitary cannot exactly mirror that for the qubit case, and a greater level of complexity arises in the CV framework. In addition, the complete implementation of local complementations for CV weighted graphs (a weighted graph state is described by a graph $G=\left( V\text{, }E\right) $ in which every edge is specified by a factor $\Omega_{ab}$ corresponding to the interaction strength between modes $a$ and $b$; for unweighted graph states, all the interactions have the same strength) remains an open problem. In \cite{zang4}, the graphical description of local Clifford transformations for CV weighted graph states was considered. In particular, it was shown that, unlike qubit weighted graph states, CV weighted graph states can be expressed in the stabilizer formalism in terms of generators in the Pauli group. The main reason for this difference is that the CZ gate for qubits is periodic as a function of the interaction strength, while the CV CZ gate is not. We remark that in this context the CV case is even more subtle, besides the fact that weighted CV graph states are still stabilizer states, unlike weighted qubit graph states. In particular, the most general form of weighted CV graph states has a complex adjacency matrix. In fact, all real-valued (with real adjacency matrix) CV graph states (weighted or unweighted) are unphysical states (only defined in the limit of infinite squeezing). 
In order to represent physical CV graph states, corresponding to pure multi-mode Gaussian states, the weights necessarily become complex. All this is introduced and discussed in \cite{peter2}, where it is also described how such general, physical CV graph states transform under local and general Gaussian transformations. In particular, we emphasize that the general results presented in \cite{peter2} include Zhang's results in \cite{zang3, zang4} as the limiting cases of infinite squeezing and real-weighted states. We recall that in the qubit case a systematic classification of local Clifford equivalence of qubit graph states has been carried out, and an efficient algorithm, with polynomial time complexity in the number of qubits, to decide whether two given stabilizer states are local Clifford equivalent is known. In the CV framework, since it can be proved that any CV stabilizer state is equivalent to a weighted graph state under local Clifford operations, the equivalence between two stabilizer states under local Clifford operations can be investigated by studying the equivalence between weighted graph states under local Clifford operations \cite{zang5}. However, the existence of a universal method to determine whether two CV stabilizer states with finite modes are equivalent or not under local Clifford operations has been only partially addressed in \cite{peter2}. In the CV case, the local-Clifford equivalence of stabilizer states translates into local-Gaussian unitary equivalence of (pure) Gaussian states. Furthermore, while a single unifying definition of complex-weighted CV graph states (Gaussian pure states), together with graph transformation rules for all local Gaussian unitary operations, was presented in \cite{peter2}, no systematic algorithm for deciding on the local equivalence of two given CV (Gaussian) stabilizer states was discussed. This issue, however, was recently addressed in \cite{giedke}. 
Specifically, necessary and sufficient conditions for Gaussian local unitary equivalence of arbitrary (mixed or pure) Gaussian states were derived. Despite such advances, several questions remain to be better understood. For instance, the relation between local equivalence of CV Gaussian states and Gaussian local equivalence deserves further investigation \cite{giedke, fiurasek}. A thorough analysis of this type of question is not only important from a theoretical point of view, but can also be of practical use in determining which states are the most suitable for optical realizations of stabilizer quantum error correction codes in any dimension \cite{peter-damian}. \end{itemize}
In view of these considerations, we conclude that the extension of our proposed scheme to arbitrary nonbinary/CV codes and/or additive/nonadditive codes might turn out to be nontrivial. However, in light of the recent advances, we are confident that its generalization could be achieved with a reasonable effort.
\begin{acknowledgments} We thank the ERA-Net CHIST-ERA project HIPERCOM for financial support. \end{acknowledgments}
\begin{thebibliography}{99}
\bibitem {die}R. Diestel, \emph{Graph Theory}, Springer, Heidelberg (2000).
\bibitem {west}D. B. West, \emph{Introduction to Graph Theory}, Prentice Hall, Upper Saddle River, New Jersey (2001).
\bibitem {wilson}R. J. Wilson and J. J. Watkins, \emph{Graphs: An Introductory Approach}, John Wiley \& Sons, Inc. (1990).
\bibitem {gotty}D. Gottesman, \emph{An introduction to quantum error correction and fault-tolerant quantum computation}, in Quantum Information Science and Its Contributions to Mathematics, Proceedings of Symposia in Applied Mathematics \textbf{68}, pp. 13-58, Amer. Math. Soc., Providence, Rhode Island, USA (2010).
\bibitem {werner}D. Schlingemann and R. F. Werner, \emph{Quantum error-correcting codes associated with graphs}, Phys. Rev. \textbf{A65}, 012308 (2001).
\bibitem {dirk}D. Schlingemann, \emph{Stabilizer codes can be realized as graph codes}, Quant. Inf. Comput. \textbf{2}, 307 (2002).
\bibitem {markus}M. Grassl, A. Klappenecker and M. Rotteler, \emph{Graphs, quadratic forms, and quantum codes}, in \emph{Proceedings of the International Symposium on Information Theory}, Lausanne, Switzerland, 30 June - 5 July, p. 45 (2002).
\bibitem {hans}H. J. Briegel and R. Raussendorf, \emph{Persistent entanglement in arrays of interacting particles}, Phys. Rev. Lett. \textbf{86}, 910 (2001).
\bibitem {hein}M. Hein, J. Eisert, and H. J. Briegel, \emph{Multiparty entanglement in graph states}, Phys. Rev. \textbf{A69}, 062311 (2004).
\bibitem {bart}M. Van den Nest, J. Dehaene and B. De Moor, \emph{Graphical description of the action of local Clifford transformations on graph states}, Phys. Rev. \textbf{A69}, 022316 (2004).
\bibitem {cross}A. Cross, G. Smith, J. A. Smolin and B. Zeng, \emph{Codeword stabilized quantum codes}, IEEE Trans. Info. Theory \textbf{55}, 433 (2009).
\bibitem {chen}X. Chen, B. Zeng and I. L. Chuang, \emph{Nonbinary codeword-stabilized quantum codes}, Phys. Rev. \textbf{A78}, 062315 (2008).
\bibitem {yu1}S. Yu, Q. Chen and C. H. Oh, \emph{Graphical quantum error-correcting codes}, arXiv:0709.1780 [quant-ph] (2007).
\bibitem {yu2}D. Hu, W. Tang, M. Zhao, and Q. Chen, \emph{Graphical nonbinary quantum error-correcting codes}, Phys. Rev. \textbf{A78}, 012306 (2008).
\bibitem {beigi}S. Beigi, I. Chuang, M. Grassl, P. Shor and B. Zeng, \emph{Graph concatenation for quantum codes}, J. Math. Phys. \textbf{52}, 022201 (2011).
\bibitem {debbie}D. W. Leung, M. A. Nielsen, I. L. Chuang, and Y. Yamamoto, \emph{Approximate quantum error correction can lead to better codes}, Phys. Rev. \textbf{A56}, 2567 (1997).
\bibitem {damian2008}D. Markham and B. C.\ Sanders, \emph{Graph states for quantum secret sharing}, Phys. Rev. \textbf{A78}, 042309 (2008).
\bibitem {anne2013}A. Marin and D. Markham, \emph{On the equivalence between sharing quantum and classical secrets, and error correction}, Phys. Rev. \textbf{A88}, 042332 (2013).
\bibitem {damian2014}B. A. Bell, D. A. Herrera-Marti, M. S. Tame, D. Markham, W. J. Wadsworth, and J. G. Rarity, \emph{Experimental demonstration of a graph state quantum error-correcting code}, Nature Comm. \textbf{5}, 3658 (2014).
\bibitem {robert}A. R. Calderbank, E. M. Rains, P. W. Shor and N. J. A. Sloane, \emph{Quantum error correction via codes over }$GF(4)$, IEEE Trans. Info. Theory \textbf{44}, 1369 (1998).
\bibitem {gaitan}F. Gaitan, \emph{Quantum Error Correction and Fault Tolerant Quantum Computing}, CRC Press (2008).
\bibitem {daniel-phd}D. Gottesman, \emph{Stabilizer codes and quantum error correction}, Ph. D. thesis, California Institute of Technology, Pasadena, CA, 1998.
\bibitem {france}A. Bouchet, \emph{Recognizing locally equivalent graphs}, Discrete Mathematics \textbf{114}, 75 (1993).
\bibitem {parker}L. E. Danielsen and M. G. Parker, \emph{On the classification of all self-dual additive codes over GF(4) of length up to }$12$, J. Combin. Theory \textbf{A113}, 1351 (2006).
\bibitem {moor}J. Dehaene and B. De Moor, \emph{Clifford group, stabilizer states, and linear and quadratic operations over }$GF\left( 2\right) $, Phys. Rev. \textbf{A68}, 042318 (2003).
\bibitem {ruskai}H. Pollatsek and M. B. Ruskai, \emph{Permutationally invariant codes for quantum error correction}, Lin. Alg. Appl. \textbf{392}, 255 (2004).
\bibitem {tqc}S. Beigi, J. Chen, M. Grassl, Z. Ji, Q. Wang, and B. Zeng, \emph{Symmetries of codeword stabilized quantum codes}, in TQC 2013, 8th Conference on Theory of Quantum Computation, Communication and Cryptography, 21-23 May, Guelph, Canada (2013).
\bibitem {partha}K. R. Parthasarathy, \emph{Extremality and entanglement of states in coupled quantum systems}, AIP Conf. Proc. \textbf{864}, 54 (2006).
\bibitem {fletcher}A. S. Fletcher, P. W. Shor, and M. Z. Win, \emph{Channel-adapted quantum error correction for the amplitude damping channel}, IEEE Trans. Info. Theory \textbf{54}, 5705 (2008).
\bibitem {danielsen1}L. E. Danielsen, \emph{On self-dual quantum codes, graphs, and boolean functions}, arXiv:quant-ph/0503236 (2005).
\bibitem {sloane}N. J. A. Sloane, \emph{The On-Line Encyclopedia of Integer Sequences}, https://oeis.org.
\bibitem {dam}G. Brinkmann, K. Coolsaet, J. Goedgebeur, H. Melot, \emph{House of graphs: a database of interesting graphs}, Discrete Appl. Math. \textbf{161}, 311 (2013).
\bibitem {dirk2}D. Schlingemann, \emph{Cluster states, algorithms and graphs}, Quant. Inf. Comput. \textbf{4}, 287 (2004).
\bibitem {dirk3}D. Schlingemann, \emph{Error syndrome calculation for graph codes on a one way quantum computer: Towards a quantum memory}, J. Math. Phys. \textbf{45}, 4322 (2004).
\bibitem {hein2}M. Hein, W. Dur, J. Eisert, R. Raussendorf, M. Van den Nest, H. J. Briegel, \emph{Entanglement in graph states and its applications}, arXiv:quant-ph/0602096 (2006).
\bibitem {bart2}E. Hostens, J. Dehaene and B. De Moor, \emph{Stabilizer states and Clifford operations for systems of arbitrary dimensions and modular arithmetic}, Phys. Rev. \textbf{A71}, 042315 (2005).
\bibitem {beigi2}M. Bahramgiri and S. Beigi, \emph{Graph states under the action of local Clifford group in non-binary case}, arXiv:quant-ph/0610267 (2007).
\bibitem {anne}A. Marin,\emph{\ Entanglement in quantum information networks. Graph states for quantum secret sharing}, Ph. D. thesis, Telecom ParisTech, France (2013).
\bibitem {sam2}S. D. Bartlett, B. C. Sanders, S. L. Braunstein and K. Nemoto, \emph{Efficient classical simulation of continuous variable quantum information processes}, Phys. Rev. Lett. \textbf{88}, 097904 (2002).
\bibitem {zang2}J. Zhang and S. L. Braunstein, \emph{Continuous-variable Gaussian analog of cluster states}, Phys. Rev. \textbf{A73}, 032318 (2006).
\bibitem {peter1}P. van Loock, C. Weedbrook, and M. Gu, \emph{Building Gaussian cluster states by linear optics}, Phys. Rev. \textbf{A76}, 032321 (2007).
\bibitem {zang3}J. Zhang, \emph{Local complementation rule for continuous-variable four-mode unweighted graph states}, Phys. Rev. \textbf{A78}, 034301 (2008).
\bibitem {zang4}J. Zhang, \emph{Graphical description of local Gaussian operations for continuous-variable weighted graph states}, Phys. Rev. \textbf{A78}, 052307 (2008).
\bibitem {peter2}N. C. Menicucci, S. T. Flammia, and P. van Loock, \emph{Graphical calculus for Gaussian pure states}, Phys. Rev. \textbf{A83}, 042335 (2011).
\bibitem {zang5}J. Zhang, G. He and G. Zeng, \emph{Equivalence of continuous-variable stabilizer states under local Clifford operations}, Phys. Rev. \textbf{A80}, 052333 (2009).
\bibitem {giedke}G. Giedke and B. Kraus, \emph{Gaussian local unitary equivalence of }$n$\emph{-mode Gaussian states and Gaussian transformations by local operations with classical communications}, Phys. Rev. \textbf{A89}, 012335 (2014).
\bibitem {fiurasek}O. Cernotik and J. Fiurasek, \emph{Transformations of symmetric multipartite Gaussian states by Gaussian local operations and classical communication}, Phys. Rev. \textbf{A89}, 042331 (2014).
\bibitem {peter-damian}P. van Loock and D. Markham, \emph{Implementing stabilizer codes by linear optics}, AIP Conf. Proc. \textbf{1363}, 256 (2011).
\bibitem {markus2}M. Grassl, Th. Beth and T. Pellizzari, \emph{Codes for the quantum erasure channel}, Phys. Rev. \textbf{A56}, 33 (1997).
\bibitem {ray}R. Laflamme, C. Miquel, J. P. Paz, and W. H. Zurek, \emph{Perfect quantum error correcting code}, Phys. Rev. Lett. \textbf{77}, 198 (1996).
\bibitem {charlie}C. H. Bennett, D. P. Di Vincenzo, J. A. Smolin, and W. K. Wootters, \emph{Mixed-state entanglement and quantum error correction}, Phys. Rev. \textbf{A54}, 3824 (1996).
\bibitem {bilal}B. Shaw, M. M. Wilde, O. Oreshkov, I. Kremsky, and D. A. Lidar, \emph{Encoding one logical qubit into six physical qubits}, Phys. Rev. \textbf{A78}, 012337 (2008).
\bibitem {steane}A. M. Steane, \emph{Multiple-particle interference and quantum error correction}, Proc. R. Soc. Lond. \textbf{A452}, 2551 (1996).
\bibitem {robert2}A. R. Calderbank and P. W. Shor, \emph{Good quantum error correcting codes exist}, Phys. Rev. \textbf{A54}, 1098 (1996).
\bibitem {shor}P. W.\ Shor, \emph{Scheme for reducing decoherence in quantum computer memory}, Phys. Rev. \textbf{A52}, 2493 (1995).
\bibitem {danielpra}D. Gottesman,\emph{\ Class of quantum error correcting codes saturating the quantum Hamming bound}, Phys. Rev. \textbf{A54}, 1862 (1996). \end{thebibliography}
\pagebreak
\appendix
\section{Single qubit encoding}
Before presenting our illustrative examples, we would like to make a few remarks on graphs in quantum error correction.
The coincidence matrix of a graph characterizes its structural properties: the number of vertices, the number of edges and, above all, the manner in which vertices are connected. The structure of graphs associated with stabilizer quantum codes hides essential information about the graphical error detection conditions in Eqs. (\ref{wc1}) and (\ref{wc2}). Such graphical conditions may not be visible in a direct manner, as originally pointed out in \cite{dirk}. This becomes especially evident as the number of vertices and edges in the graph increases in the presence of multi-qubit encodings and/or large code lengths. However, graphs do maintain part of their appeal in that they provide a \emph{geometric} aid in identifying the explicit \emph{algebraic} linear equations that characterize the graphical error detection conditions without taking into consideration the explicit form of the corresponding coincidence matrices. In our opinion, this is a non-negligible advantage of our graphical approach, since identifying the algebraic equations directly from the coincidence matrices can become quite tedious without the visual aid provided by graphs. Clearly, the peculiar advantage of our scheme is that it allows us to uncover the expression of the coincidence matrix of a graph associated with a binary stabilizer code. We shall further discuss some of these aspects in the illustrative examples that appear below.
As an additional side remark, we point out that there can be scenarios where one can exploit the high symmetry of a graph in an efficient manner in order to check the graphical conditions for error detection \cite{werner}. While symmetry arguments are elegant and powerful, they require some caution in the case of graphs in quantum error correction: symmetries of graphs are not necessarily the same as symmetries of the associated stabilizer codes \cite{dirk, markus, tqc}. For instance, graphs with different symmetries can lead to a class of codes that are equivalent to the CSS seven-qubit code, as shown in Ref. \cite{markus}. As recently pointed out in \cite{tqc}, a clear understanding of the requirements under which a graph can exhibit the same symmetry as the quantum (CWS, in general) code is still missing. In this article, we do not address this issue. However, in agreement with the statement that appeared in Ref. \cite{tqc}, we do think that this point is definitely worth further attention.
\subsection{The $\left[ \left[ 3,1,1\right] \right] $ stabilizer code}
Before applying our scheme to the construction of the graph associated with a $\left[ \left[ 3,1,1\right] \right] $ stabilizer code \cite{gaitan}, we emphasize how intricate it can be to find the explicit expression of the unitary transformations that relate sets of vertex stabilizers of graphs. For the sake of argument, consider the following sets $\mathcal{S}_{\left\vert \Gamma_{1}\right\rangle }$, $\mathcal{S}_{\left\vert \Gamma_{2}\right\rangle }$ and $\mathcal{S}_{\left\vert \Gamma_{3}\right\rangle }$ defined as \begin{equation} \mathcal{S}_{\left\vert \Gamma_{1}\right\rangle }\overset{\text{def}}{=}\left\langle X^{1}\text{, }X^{2}\text{, }X^{3}\right\rangle \text{, }\mathcal{S}_{\left\vert \Gamma_{2}\right\rangle }\overset{\text{def}}{=}\left\langle X^{1}Z^{2}Z^{3}\text{, }Z^{1}X^{2}\text{, }Z^{1}X^{3}\right\rangle \text{ and, }\mathcal{S}_{\left\vert \Gamma_{3}\right\rangle }\overset{\text{def}}{=}\left\langle X^{1}Z^{2}Z^{3}\text{, }Z^{1}X^{2}Z^{3}\text{, }Z^{1}Z^{2}X^{3}\right\rangle \text{,} \label{A1} \end{equation} respectively. 
In the canonical basis $\mathcal{B}_{\mathcal{H}_{2}^{3}}$ of the eight-dimensional \emph{complex} Hilbert space $\mathcal{H}_{2}^{3}$, \begin{equation} \mathcal{B}_{\mathcal{H}_{2}^{3}}\overset{\text{def}}{=}\left\{ \left\vert 000\right\rangle \text{, }\left\vert 001\right\rangle \text{, }\left\vert 010\right\rangle \text{, }\left\vert 011\right\rangle \text{, }\left\vert 100\right\rangle \text{, }\left\vert 101\right\rangle \text{, }\left\vert 110\right\rangle \text{, }\left\vert 111\right\rangle \right\} \text{,} \end{equation} the graph states $\left\vert \Gamma_{1}\right\rangle $, $\left\vert \Gamma _{2}\right\rangle $ and $\left\vert \Gamma_{3}\right\rangle $ read, \begin{align} & \left\vert \Gamma_{1}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 000\right\rangle +\left\vert 001\right\rangle +\left\vert 010\right\rangle +\left\vert 011\right\rangle +\left\vert 100\right\rangle +\left\vert 101\right\rangle +\left\vert 110\right\rangle +\left\vert 111\right\rangle }{\sqrt{8}}\text{, }\nonumber\\ & \text{ }\left\vert \Gamma_{2}\right\rangle \overset{\text{def}}{=} \frac{\left\vert 000\right\rangle +\left\vert 001\right\rangle +\left\vert 010\right\rangle +\left\vert 011\right\rangle +\left\vert 100\right\rangle -\left\vert 101\right\rangle -\left\vert 110\right\rangle +\left\vert 111\right\rangle }{\sqrt{8}}\text{, }\nonumber\\ & \left\vert \Gamma_{3}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 000\right\rangle +\left\vert 001\right\rangle +\left\vert 010\right\rangle -\left\vert 011\right\rangle +\left\vert 100\right\rangle -\left\vert 101\right\rangle -\left\vert 110\right\rangle -\left\vert 111\right\rangle }{\sqrt{8}}\text{.} \label{gammaeq} \end{align} We observe that, \begin{equation} \mathcal{S}_{\left\vert \Gamma_{3}\right\rangle }=\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }\mathcal{S}_{\left\vert \Gamma_{1}\right\rangle }\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle 
\rightarrow\left\vert \Gamma_{3}\right\rangle }^{\dagger}\text{,} \label{01} \end{equation} with, \begin{equation} \mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }\overset{\text{def}}{=}\left( I^{1}\otimes I^{2}\otimes H^{3}\right) \cdot\left( U_{CP}^{12}\otimes I^{3}\right) \left( I^{1}\otimes H^{2}\otimes I^{3}\right) \cdot\left( I^{1}\otimes U_{CP}^{23}\right) \cdot\left( U_{CP}^{12}\otimes I^{3}\right) \text{.} \label{u1} \end{equation} Similarly, it can be shown that \begin{equation} \mathcal{S}_{\left\vert \Gamma_{2}\right\rangle }=\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }\mathcal{S}_{\left\vert \Gamma_{1}\right\rangle }\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }^{\dagger}\text{,} \label{02} \end{equation} with, \begin{equation} \mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }\overset{\text{def}}{=}\left( H^{1}\otimes I^{2}\otimes I^{3}\right) \cdot\left( I^{1}\otimes H^{2}\otimes I^{3}\right) \cdot\left( I^{1}\otimes U_{CP}^{23}\right) \cdot\left( U_{CP}^{12}\otimes I^{3}\right) \text{.} \label{u2} \end{equation}
Finally, combining (\ref{01}) and (\ref{02}), we get \begin{equation} \mathcal{S}_{\left\vert \Gamma_{3}\right\rangle }=\mathcal{U}_{\left\vert \Gamma_{2}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }\mathcal{S}_{\left\vert \Gamma_{2}\right\rangle }\mathcal{U}_{\left\vert \Gamma_{2}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }^{\dagger}=\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow \left\vert \Gamma_{3}\right\rangle }\mathcal{S}_{\left\vert \Gamma _{1}\right\rangle }\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }^{\dagger}=\mathcal{U} _{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma _{3}\right\rangle }\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }^{\dagger}\mathcal{S} _{\left\vert \Gamma_{2}\right\rangle }\mathcal{U}_{\left\vert \Gamma _{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle } \mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }^{\dagger}\text{,} \end{equation} that is, \begin{equation} \mathcal{U}_{\left\vert \Gamma_{2}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }=\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }^{\dagger}\text{.} \label{u3} \end{equation} After some algebra, we obtain that the explicit expressions for the Clifford unitary matrices $\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }$, $\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }$, and $\mathcal{U}_{\left\vert \Gamma_{2}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }$ become,
\begin{align} & \mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }\overset{\text{def}}{=}\frac{1}{2}\left( \begin{array} [c]{cccccccc} 1 & 1 & 1 & -1 & 0 & 0 & 0 & 0\\ 1 & -1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 1 & -1 & 1 & 0 & 0 & 0 & 0\\ 1 & -1 & -1 & -1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & -1 & 1\\ 0 & 0 & 0 & 0 & 1 & -1 & -1 & -1\\ 0 & 0 & 0 & 0 & -1 & -1 & -1 & 1\\ 0 & 0 & 0 & 0 & -1 & 1 & -1 & -1 \end{array} \right) \text{, }\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{2}\right\rangle }\overset{\text{def}}{=}\frac {1}{2}\left( \begin{array} [c]{cccccccc} 1 & 0 & 1 & 0 & 1 & 0 & -1 & 0\\ 0 & 1 & 0 & -1 & 0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & -1\\ 1 & 0 & 1 & 0 & -1 & 0 & 1 & 0\\ 0 & 1 & 0 & -1 & 0 & -1 & 0 & -1\\ 1 & 0 & -1 & 0 & -1 & 0 & -1 & 0\\ 0 & 1 & 0 & 1 & 0 & -1 & 0 & 1 \end{array} \right) \text{,}\nonumber\\ & \text{ }\nonumber\\ & \mathcal{U}_{\left\vert \Gamma_{2}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }\overset{\text{def}}{=}\frac{1}{2}\left( \begin{array} [c]{cccccccc} 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 1 & -1 & 0 & 0 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1\\ 1 & 1 & 0 & 0 & -1 & -1 & 0 & 0\\ 1 & -1 & 0 & 0 & -1 & 1 & 0 & 0\\ 0 & 0 & -1 & -1 & 0 & 0 & 1 & 1\\ 0 & 0 & -1 & 1 & 0 & 0 & 1 & -1 \end{array} \right) \text{. } \end{align} A systematic strategy for finding the explicit expressions for the unitary transformations in Eqs. (\ref{u1}), (\ref{u2})\ and (\ref{u3}) would be very useful. The VdN-work is especially important in this regard, as we shall see.
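The relations above lend themselves to a direct numerical check. The following minimal sketch (our own illustrative code; variable names are not taken from the text) builds the graph states of Eq. (\ref{gammaeq}) as vectors in the canonical basis, verifies that each is fixed by the generators of the corresponding set in Eq. (\ref{A1}), and confirms that the explicit matrix $\mathcal{U}_{\left\vert \Gamma_{1}\right\rangle \rightarrow\left\vert \Gamma_{3}\right\rangle }$ indeed maps $\left\vert \Gamma_{1}\right\rangle $ to $\left\vert \Gamma_{3}\right\rangle $.

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_all(ops):
    """Tensor product of a list of single-qubit operators (qubit 1 first)."""
    return reduce(np.kron, ops)

# Amplitudes of |Gamma_1>, |Gamma_2>, |Gamma_3> in the basis |000>, ..., |111>.
gamma1 = np.ones(8) / np.sqrt(8)
gamma2 = np.array([1, 1, 1, 1, 1, -1, -1, 1]) / np.sqrt(8)
gamma3 = np.array([1, 1, 1, -1, 1, -1, -1, -1]) / np.sqrt(8)

stabilizers = {
    "Gamma1": (gamma1, [[X, I2, I2], [I2, X, I2], [I2, I2, X]]),
    "Gamma2": (gamma2, [[X, Z, Z], [Z, X, I2], [Z, I2, X]]),
    "Gamma3": (gamma3, [[X, Z, Z], [Z, X, Z], [Z, Z, X]]),
}
for name, (state, gens) in stabilizers.items():
    for gen in gens:
        assert np.allclose(kron_all(gen) @ state, state)

# The block-diagonal Clifford matrix U_{Gamma1 -> Gamma3} given in the text.
A = np.array([[1, 1, 1, -1], [1, -1, 1, 1], [1, 1, -1, 1], [1, -1, -1, -1]])
B = np.array([[1, 1, -1, 1], [1, -1, -1, -1], [-1, -1, -1, 1], [-1, 1, -1, -1]])
U13 = 0.5 * np.block([[A, np.zeros((4, 4))], [np.zeros((4, 4)), B]])
assert np.allclose(U13 @ gamma1, gamma3)
print("stabilizer relations and U_{1->3} action verified")
```

Such a check is a useful sanity test before undertaking the systematic strategy discussed next.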
Let us consider the three-qubit bit-flip repetition code with codespace spanned by the codewords $\left\vert 0_{L}\right\rangle \overset{\text{def} }{=}$ $\left\vert 000\right\rangle $ and $\left\vert 1_{L}\right\rangle \overset{\text{def}}{=}$ $\left\vert 111\right\rangle $. The two stabilizer generators of this code are $g_{1}\overset{\text{def}}{=}Z^{1}Z^{2}$ and $g_{2}\overset{\text{def}}{=}Z^{1}Z^{3}$ while the logical operations can read $\bar{Z}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}$ and $\bar{X}\overset {\text{def}}{=}X^{1}X^{2}X^{3}$ with, \begin{equation} \bar{Z}\left\vert 0_{L}\right\rangle =\left\vert 0_{L}\right\rangle \text{, }\bar{Z}\left\vert 1_{L}\right\rangle =-\left\vert 1_{L}\right\rangle \text{, }\bar{X}\left\vert 0_{L}\right\rangle =\left\vert 1_{L}\right\rangle \text{ and, }\bar{X}\left\vert 1_{L}\right\rangle =\left\vert 0_{L}\right\rangle \text{.} \end{equation} This stabilizer code can be regarded as a CWS code with codeword stabilizer given by, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }\bar{Z}\right\rangle =\left\langle Z^{1}Z^{2}\text{, } Z^{1}Z^{3}\text{, }Z^{1}Z^{2}Z^{3}\right\rangle =\left\langle Z^{1}\text{, }Z^{2}\text{, }Z^{3}\right\rangle \text{.} \end{equation} Observe that $\mathcal{S}_{\text{CWS}}=U\mathcal{S}_{\left\vert \Gamma _{1}\right\rangle }U^{\dagger}$ with $\mathcal{S}_{\left\vert \Gamma _{1}\right\rangle }$ in Eq. (\ref{A1}) and $U\overset{\text{def}}{=}H^{1} H^{2}H^{3}$. Therefore, the graph state associated with $\mathcal{S} _{\text{CWS}}$ is locally Clifford equivalent to the graph state $\left\vert \Gamma_{1}\right\rangle $ in Eq. (\ref{gammaeq}). Let us consider now an alternative graphical description of the three-qubit bit-flip repetition code that better fits into our scheme.
Let us consider the codespace of the code spanned by the following new codewords, \begin{equation} \mathcal{C}\overset{\text{def}}{=}\text{Span }\left\{ \left\vert 0_{L}\right\rangle \overset{\text{def}}{=}\left\vert 000\right\rangle \text{, }\left\vert 1_{L}\right\rangle \overset{\text{def}}{=}\left\vert 111\right\rangle \right\} \rightarrow\mathcal{C}^{\prime}\overset{\text{def} }{=}\text{Span }\left\{ \left\vert 0_{L}^{\prime}\right\rangle \overset {\text{def}}{=}\frac{\left\vert 0_{L}\right\rangle +\left\vert 1_{L} \right\rangle }{\sqrt{2}}\text{, }\left\vert 1_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 0_{L}\right\rangle -\left\vert 1_{L}\right\rangle }{\sqrt{2}}\right\} \text{.} \end{equation} Notice that the codespace of the code does not change since $\mathcal{C} ^{\prime}=\mathcal{C}$ and we have simply chosen a different orthonormal basis to describe the code. However, with this alternative choice, the new codeword stabilizer reads \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }\bar{Z}^{\prime}\right\rangle =\left\langle Z^{1}Z^{2}\text{, }Z^{1}Z^{3}\text{, }X^{1}X^{2}X^{3}\right\rangle \text{,} \end{equation} and the remaining logical operation is given by $\bar{X}^{\prime} \overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}$. Observe that $\mathcal{S} _{\text{CWS}}^{\prime}$ is locally Clifford equivalent to $\mathcal{S} _{\text{CWS}}^{^{\prime\prime}}$ with $\mathcal{S}_{\text{CWS}}^{^{\prime \prime}}\overset{\text{def}}{=}U\mathcal{S}_{\text{CWS}}^{\prime}U$ and $U\overset{\text{def}}{=}H^{2}H^{3}$. The codeword stabilizer $\mathcal{S} _{\text{CWS}}^{^{\prime\prime}}$ reads, \begin{equation} \mathcal{S}_{\text{CWS}}^{^{\prime\prime}}=\left\langle Z^{1}X^{2}\text{, }Z^{1}X^{3}\text{, }X^{1}Z^{2}Z^{3}\right\rangle \text{.} \end{equation} Observe that $\mathcal{S}_{\text{CWS}}^{^{\prime\prime}}$ equals $\mathcal{S}_{\left\vert \Gamma_{2}\right\rangle }$ in Eq. (\ref{A1}). 
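The local Clifford equivalence via $U\overset{\text{def}}{=}H^{2}H^{3}$ can be traced at the level of Pauli strings, since conjugation by a Hadamard simply exchanges $X$ and $Z$ on the affected qubit. A minimal sketch (the string representation and function name are our own, illustrative conventions):

```python
# Conjugation by H swaps X and Z and leaves I unchanged.
H_SWAP = {'I': 'I', 'X': 'Z', 'Z': 'X'}

def conjugate_by_hadamards(pauli, qubits):
    """Conjugate a Pauli string by Hadamard gates on the given 0-based qubits."""
    return ''.join(H_SWAP[p] if i in qubits else p
                   for i, p in enumerate(pauli))

# Generators of S'_CWS = <Z1 Z2, Z1 Z3, X1 X2 X3>, conjugated by H on qubits 2 and 3.
gens = ['ZZI', 'ZIZ', 'XXX']
print([conjugate_by_hadamards(g, {1, 2}) for g in gens])
# -> ['ZXI', 'ZIX', 'XZZ'], i.e. the generators of S''_CWS
```

The output reproduces exactly the generators $Z^{1}X^{2}$, $Z^{1}X^{3}$, $X^{1}Z^{2}Z^{3}$ of $\mathcal{S}_{\text{CWS}}^{^{\prime\prime}}$, confirming the stated equality with $\mathcal{S}_{\left\vert \Gamma_{2}\right\rangle }$.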
Therefore, the graph state associated with $\mathcal{S}_{\text{CWS}} ^{\prime\prime}$ is $\left\vert \Gamma_{2}\right\rangle $ in Eq. (\ref{gammaeq}). We notice that this is such a simple example that we really do not need to apply our scheme. The adjacency matrix $\Gamma$ of the graph associated with the CWS code with codeword stabilizer $\mathcal{S} _{\text{CWS}}^{^{\prime\prime}}$ reads, \begin{equation} \Gamma=\left( \begin{array} [c]{ccc} 0 & 1 & 1\\ 1 & 0 & 0\\ 1 & 0 & 0 \end{array} \right) \text{.} \label{gamma3} \end{equation} However, acting with a local complementation on the vertex $1$ of the graph with adjacency matrix $\Gamma$ in (\ref{gamma3}), we get \begin{equation} \Gamma\rightarrow\Gamma^{\prime}=\left( \begin{array} [c]{ccc} 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{array} \right) \text{.} \label{gammaprime} \end{equation} Furthermore, the new codeword stabilizer becomes $\left[ \mathcal{S} _{\text{CWS}}^{^{\prime\prime}}\right] _{\text{new}}$, \begin{equation} \mathcal{S}_{\text{CWS}}^{^{\prime\prime}}\rightarrow\left[ \mathcal{S} _{\text{CWS}}^{^{\prime\prime}}\right] _{\text{new}}\overset{\text{def}} {=}\left\langle X^{1}Z^{2}Z^{3}\text{, }Z^{1}X^{2}Z^{3}\text{, }Z^{1} Z^{2}X^{3}\right\rangle \text{.} \end{equation} Note that the graph state associated with $\left[ \mathcal{S}_{\text{CWS} }^{^{\prime\prime}}\right] _{\text{new}}$ is $\left\vert \Gamma _{3}\right\rangle $ in Eq. (\ref{gammaeq}). 
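The local complementation used above admits a direct graph-level implementation: one toggles, modulo $2$, every edge between neighbors of the chosen vertex. The following minimal sketch (ours) reproduces the step from Eq. (\ref{gamma3}) to Eq. (\ref{gammaprime}):

```python
import numpy as np

def local_complement(gamma, v):
    """Local complementation at vertex v: toggle every edge joining two neighbors of v."""
    gamma = gamma.copy()
    nbrs = np.flatnonzero(gamma[v])  # neighbors of v
    for i in nbrs:
        for j in nbrs:
            if i != j:
                gamma[i, j] ^= 1  # complement the edge (mod 2)
    return gamma

# Adjacency matrix of Eq. (gamma3); vertex 1 of the text is index 0 here.
Gamma = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]])
print(local_complement(Gamma, 0))
# -> the fully connected triangle of Eq. (gammaprime)
```

Vertex $1$ has neighbors $\left\{ 2,3\right\} $, so the single toggled edge is the one between vertices $2$ and $3$, yielding the complete graph on three vertices.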
We also point out that following the VdN-work, it turns out that the $6\times6$ local\textbf{\ }unitary Clifford transformation $Q$ that links the six-dimensional binary vector representation of the operators in $\mathcal{S}_{\text{CWS}}^{^{\prime\prime} }$ and $\left[ \mathcal{S}_{\text{CWS}}^{^{\prime\prime}}\right] _{\text{new}}$ is given by, \begin{equation} Q\overset{\text{def}}{=}\left( \begin{array} [c]{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1\\ 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right) \text{.} \label{q1} \end{equation} For the sake of clarity, consider \begin{equation} \left[ \mathcal{S}_{\text{CWS}}^{^{\prime\prime}}\right] _{\text{new} }\overset{\text{def}}{=}\left\{ I\text{, }g_{1}^{\prime}\text{, } g_{2}^{\prime}\text{, }g_{3}^{\prime}\text{, }g_{1}^{\prime}g_{2}^{\prime }\text{, }g_{1}^{\prime}g_{3}^{\prime}\text{, }g_{2}^{\prime}g_{3}^{\prime }\text{, }g_{1}^{\prime}g_{2}^{\prime}g_{3}^{\prime}\right\} \text{,} \end{equation} with $g_{1}^{\prime}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}$, $g_{2}^{\prime }\overset{\text{def}}{=}Z^{1}X^{2}Z^{3}$, $g_{3}^{\prime}\overset{\text{def} }{=}Z^{1}Z^{2}X^{3}$ and, \begin{align} v_{I} & =\left( 000\left\vert 000\right. \right) \text{, }v_{g_{1} ^{\prime}}=\left( 011\left\vert 100\right. \right) \text{, }v_{g_{2} ^{\prime}}=\left( 101\left\vert 010\right. \right) \text{, }v_{g_{3} ^{\prime}}=\left( 110\left\vert 001\right. \right) \text{, }v_{g_{1} ^{\prime}g_{2}^{\prime}}=\left( 110\left\vert 110\right. \right) \text{, }\nonumber\\ & \nonumber\\ v_{g_{1}^{\prime}g_{3}^{\prime}} & =\left( 101\left\vert 101\right. \right) \text{, }v_{g_{2}^{\prime}g_{3}^{\prime}}=\left( 011\left\vert 011\right. \right) \text{, }v_{g_{1}^{\prime}g_{2}^{\prime}g_{3}^{\prime} }=\left( 000\left\vert 111\right. \right) \text{,} \label{seta} \end{align} where $v_{g^{\prime}}$ denotes the binary vectorial representation of the Pauli operators. 
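The binary vectors listed in Eq. (\ref{seta}) follow the standard $\left( z\left\vert x\right. \right) $ convention: the first block flags the $Z$-components of the Pauli string and the second block the $X$-components. A minimal sketch of this encoding (the function name is our own):

```python
def pauli_to_zx(pauli):
    """Map a Pauli string, e.g. 'XZZ', to its binary (z|x) vector."""
    z = [1 if p in ('Z', 'Y') else 0 for p in pauli]
    x = [1 if p in ('X', 'Y') else 0 for p in pauli]
    return z + x

print(pauli_to_zx('XZZ'))  # g1' = X1 Z2 Z3  ->  (011|100)
print(pauli_to_zx('ZXZ'))  # g2' = Z1 X2 Z3  ->  (101|010)
print(pauli_to_zx('ZZX'))  # g3' = Z1 Z2 X3  ->  (110|001)
```

The three outputs reproduce $v_{g_{1}^{\prime}}$, $v_{g_{2}^{\prime}}$ and $v_{g_{3}^{\prime}}$ above.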
Furthermore, \begin{equation} \mathcal{S}_{\text{CWS}}^{^{\prime\prime}}\overset{\text{def}}{=}\left\{ I\text{, }g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{1}g_{2}\text{, } g_{1}g_{3}\text{, }g_{2}g_{3}\text{, }g_{1}g_{2}g_{3}\right\} \text{,} \end{equation}
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the [[3,1,1]]-code.}
\label{fig1}
\end{figure}
with $g_{1}\overset{\text{def}}{=}Z^{1}X^{2}$, $g_{2}\overset{\text{def}} {=}Z^{1}X^{3}$, $g_{3}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}$. Using $Q$ in Eq. (\ref{q1}), we have \begin{align} I & \rightarrow v_{QI}=\left( 000\left\vert 000\right. \right) \text{, }g_{1}\rightarrow v_{Qg_{1}}=\left( 110\left\vert 110\right. \right) \text{, }g_{2}\rightarrow v_{Qg_{2}}=\left( 101\left\vert 101\right. \right) \text{,}\nonumber\\ & \nonumber\\ \text{ }g_{3} & \rightarrow v_{Qg_{3}}=\left( 011\left\vert 100\right. \right) \text{, }g_{1}g_{2}\rightarrow v_{Qg_{1}g_{2}}=\left( 011\left\vert 011\right. \right) \text{, }g_{1}g_{3}\rightarrow v_{Qg_{1}g_{3}}=\left( 101\left\vert 010\right. \right) \text{, }\nonumber\\ & \nonumber\\ g_{2}g_{3} & \rightarrow v_{Qg_{2}g_{3}}=\left( 110\left\vert 001\right. \right) \text{, }g_{1}g_{2}g_{3}\rightarrow v_{Qg_{1}g_{2}g_{3}}=\left( 000\left\vert 111\right. \right) \text{.} \label{setb} \end{align} From Eqs. (\ref{seta}) and (\ref{setb}), we arrive at \begin{align} v_{I} & =v_{QI}\text{, }v_{g_{1}^{\prime}}=v_{Qg_{3}}\text{, } v_{g_{2}^{\prime}}=v_{Qg_{1}g_{3}}\text{, }v_{g_{3}^{\prime}}=v_{Qg_{2}g_{3} }\text{, }\nonumber\\ & \nonumber\\ v_{g_{1}^{\prime}g_{2}^{\prime}} & =v_{Qg_{1}}\text{, }v_{g_{1}^{\prime }g_{3}^{\prime}}=v_{Qg_{2}}\text{, }v_{g_{2}^{\prime}g_{3}^{\prime}} =v_{Qg_{1}g_{2}}\text{, }v_{g_{1}^{\prime}g_{2}^{\prime}g_{3}^{\prime} }=v_{Qg_{1}g_{2}g_{3}}\text{.} \end{align} Finally, given $\Gamma^{\prime}$ in Eq. 
(\ref{gammaprime}) and applying the S-work, the coincidence matrix for a graph associated with a $\left[ \left[ 3,1,1\right] \right] $ stabilizer code becomes, \begin{equation} \Xi_{\left[ \left[ 3,1,1\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 \end{array} \right) \text{.} \label{coincidencethree} \end{equation} It is not that difficult to use the graphical quantum error correction conditions presented in the SW-work and verify that the $\left[ \left[ 3,1,1\right] \right] $ code with associated coincidence matrix in\ Eq. (\ref{coincidencethree}) is not a $1$-error correcting quantum code.
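We note, in closing this example, that the action of $Q$ in Eq. (\ref{q1}) on the binary $\left( z\left\vert x\right. \right) $ vectors is ordinary matrix-vector multiplication modulo $2$. A minimal sketch (ours) reproducing two of the entries in Eq. (\ref{setb}):

```python
import numpy as np

# The 6x6 binary Clifford matrix Q of Eq. (q1).
Q = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1],
              [1, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 1]])

v_g1 = np.array([1, 0, 0, 0, 1, 0])  # g1 = Z1 X2    as (z|x) = (100|010)
v_g3 = np.array([0, 1, 1, 1, 0, 0])  # g3 = X1 Z2 Z3 as (z|x) = (011|100)

print((Q @ v_g1) % 2)  # -> [1 1 0 1 1 0], i.e. (110|110) = v_{Q g1}
print((Q @ v_g3) % 2)  # -> [0 1 1 1 0 0], i.e. (011|100) = v_{g1'}
```

The remaining correspondences between Eqs. (\ref{seta}) and (\ref{setb}) can be checked in the same way.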
\subsection{The $\left[ \left[ 4,1\right] \right] $ stabilizer code}
Let us consider the Grassl \textit{et al.} perfect $1$-erasure correcting four-qubit code with codespace spanned by the following codewords \cite{markus2}, \begin{equation} \left\vert 0_{L}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 0000\right\rangle +\left\vert 1111\right\rangle }{\sqrt{2}}\text{ and, }\left\vert 1_{L}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 1001\right\rangle +\left\vert 0110\right\rangle }{\sqrt{2}}\text{.} \end{equation} The three stabilizer generators of such a code are given by $g_{1}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}$, $g_{2}\overset{\text{def}}{=}Z^{1}Z^{4}$ and $g_{3}\overset{\text{def}}{=}Z^{2}Z^{3}$. Furthermore, the logical operations are $\bar{X}\overset{\text{def}}{=}X^{1}X^{4}$ and $\bar{Z}\overset{\text{def}}{=}Z^{1}Z^{3}$. We notice that such a code, just like the four-qubit code provided by Leung \textit{et al.} \cite{debbie}, is also a $1$-error detecting code and can be used for the error correction of single amplitude damping errors. When viewed as a CWS code, the codeword stabilizer reads \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }\bar{Z}\right\rangle =\left\langle X^{1}X^{2}X^{3}X^{4}\text{, }Z^{1}Z^{4}\text{, }Z^{2}Z^{3}\text{, }Z^{1}Z^{3}\right\rangle \text{.} \end{equation} Observe that $\mathcal{S}_{\text{CWS}}$ is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ with $\mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}U\mathcal{S}_{\text{CWS}}U^{\dagger}$ and $U\overset{\text{def}}{=}H^{2}H^{3}H^{4}$. 
Therefore, $\mathcal{S} _{\text{CWS}}^{\prime}$ is given by \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}=\left\langle X^{1}Z^{2}Z^{3}Z^{4}\text{, }Z^{1}X^{4}\text{, }X^{2}X^{3}\text{, }Z^{1}X^{3}\right\rangle \text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime }}$ associated with $\mathcal{S}_{\text{CWS}}^{\prime}$ is given by, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. \right) =\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 0 \end{array} \right. \right) \text{.} \end{equation} We observe that $\det X^{\prime\text{T}}\neq0$ and, applying the VdN-work, the symmetric adjacency matrix $\Gamma$ reads\textbf{,} \begin{equation} \Gamma\overset{\text{def}}{=}Z^{\prime\text{T}}\cdot\left( X^{\prime\text{T} }\right) ^{-1}=\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \text{.} \label{grassl1} \end{equation} Therefore, applying now the S-work, the $5\times5$ symmetric coincidence matrix $\Xi_{\left[ \left[ 4,1\right] \right] }$ that characterizes the graph with both input and output vertices becomes\textbf{,} \begin{equation} \Xi_{\left[ \left[ 4,1\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{ccccc} 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 1\\ 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 \end{array} \right) \text{.} \label{coincidencefour} \end{equation} For the sake of completeness, we also remark that acting with a local complementation with respect to the vertex $1$ of the graph with adjacency matrix $\Gamma$ in Eq. 
(\ref{grassl1}), we obtain the fully connected graph with adjacency matrix $\Gamma^{\prime}$, \begin{equation} \Gamma\rightarrow\Gamma^{\prime}\equiv g_{1}\left( \Gamma\right) \overset{\text{def}}{=}\Gamma+\Gamma\Lambda_{1}\Gamma+\Lambda^{\left( 1\right) }=\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 \end{array} \right) \text{,} \label{may7} \end{equation} where, \begin{equation} \Lambda_{1}\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array} \right) \text{ and, }\Lambda^{\left( 1\right) }\overset{\text{def}} {=}\left( \begin{array} [c]{cccc} 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right) \text{.} \end{equation} We also point out that following the VdN-work, it turns out that the $8\times8$ local unitary Clifford transformation $Q$ that links the eight-dimensional binary vector representation of the operators in $\mathcal{S}_{\text{CWS}}^{^{\prime}}$ and $\left[ \mathcal{S}_{\text{CWS} }^{^{\prime}}\right] _{\text{new}}$ \textbf{(}associated with the graph with adjacency matrix\textbf{\ }$\Gamma^{\prime}$\textbf{\ }in Eq. 
(\ref{may7})) with, \begin{equation} \mathcal{S}_{\text{CWS}}^{^{\prime}}\rightarrow\left[ \mathcal{S} _{\text{CWS}}^{^{\prime}}\right] _{\text{new}}\overset{\text{def}} {=}\left\langle X^{1}Z^{2}Z^{3}Z^{4}\text{, }Z^{1}X^{2}Z^{3}Z^{4}\text{, }Z^{1}Z^{2}X^{3}Z^{4}\text{, }Z^{1}Z^{2}Z^{3}X^{4}\right\rangle \text{,} \end{equation} reads, \begin{equation} Q\overset{\text{def}}{=}\left( \begin{array} [c]{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right) \text{.} \end{equation} To verify that $Q$ is indeed a local Clifford operation, we observe that\textbf{\ }it exhibits the required block-diagonal structure and satisfies the relation $Q^{\text{T}}PQ=P$ with \begin{equation} P\overset{\text{def}}{=}\left( \begin{array} [c]{cc} 0 & I_{4\times4}\\ I_{4\times4} & 0 \end{array} \right) \text{.} \end{equation} Finally,\textbf{\ }it is fairly simple\textbf{\ }to use the graphical quantum error correction conditions presented in the SW-work and verify that the $\left[ \left[ 4,1\right] \right] $ code with associated coincidence matrix in\ Eq. (\ref{coincidencefour}) is a $1$-error detecting quantum code.
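The two algebraic claims above, the local complementation rule $\Gamma^{\prime}=\Gamma+\Gamma\Lambda_{1}\Gamma+\Lambda^{\left( 1\right) }$ and the symplectic condition $Q^{\text{T}}PQ=P$, are easy to verify mechanically. The following Python sketch (ours, for illustration only; all helper and variable names are hypothetical) carries out both checks over GF(2):

```python
# Illustrative GF(2) check (not part of the original derivation): local
# complementation at vertex 1 of the star graph, and the symplectic
# condition Q^T P Q = P for the local Clifford transformation Q.

def mat_mul(A, B):
    """Binary matrix product over GF(2)."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def mat_add(A, B):
    """Binary matrix sum over GF(2)."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Star graph of the [[4,1]] code (adjacency matrix Gamma).
GAMMA = [[0, 1, 1, 1],
         [1, 0, 0, 0],
         [1, 0, 0, 0],
         [1, 0, 0, 0]]

# Lambda_1 projects onto vertex 1; Lambda^(1) is the complementary diagonal.
LAMBDA_1 = [[1 if i == 0 and j == 0 else 0 for j in range(4)] for i in range(4)]
LAMBDA_SUP_1 = [[1 if i == j and i != 0 else 0 for j in range(4)] for i in range(4)]

# Gamma' = Gamma + Gamma Lambda_1 Gamma + Lambda^(1)  (mod 2).
GAMMA_PRIME = mat_add(
    mat_add(GAMMA, mat_mul(mat_mul(GAMMA, LAMBDA_1), GAMMA)), LAMBDA_SUP_1)

K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]  # complete graph

# The 8x8 local Clifford transformation Q and the symplectic form P.
Q = [[1, 0, 0, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 0, 1, 0],
     [0, 0, 0, 1, 0, 0, 0, 1],
     [1, 0, 0, 0, 1, 0, 0, 0],
     [0, 0, 0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 0, 0, 1]]
P = [[1 if abs(i - j) == 4 else 0 for j in range(8)] for i in range(8)]
Q_T = [list(row) for row in zip(*Q)]
```

Running the checks confirms that local complementation at vertex $1$ maps the star graph to the fully connected graph, and that $Q$ preserves the symplectic form.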
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the Leung et al. [[4,1]]-code.}
\label{fig2}
\end{figure}
\subsection{The $\left[ \left[ 5,1,3\right] \right] $ stabilizer code}
The codespace of the perfect five-qubit stabilizer code is spanned by the following codewords \cite{ray, charlie}, \begin{equation} \left\vert 0_{L}\right\rangle \overset{\text{def}}{=}\frac{1}{4}\left[ \begin{array} [c]{c} \left\vert 00000\right\rangle +\left\vert 11000\right\rangle +\left\vert 01100\right\rangle +\left\vert 00110\right\rangle +\left\vert 00011\right\rangle +\left\vert 10001\right\rangle -\left\vert 01010\right\rangle -\left\vert 00101\right\rangle +\\ \\ -\left\vert 10010\right\rangle -\left\vert 01001\right\rangle -\left\vert 10100\right\rangle -\left\vert 11110\right\rangle -\left\vert 01111\right\rangle -\left\vert 10111\right\rangle -\left\vert 11011\right\rangle -\left\vert 11101\right\rangle \end{array} \right] \text{,} \label{cd1} \end{equation} and, \begin{equation} \left\vert 1_{L}\right\rangle \overset{\text{def}}{=}\frac{1}{4}\left[ \begin{array} [c]{c} \left\vert 11111\right\rangle +\left\vert 00111\right\rangle +\left\vert 10011\right\rangle +\left\vert 11001\right\rangle +\left\vert 11100\right\rangle +\left\vert 01110\right\rangle -\left\vert 10101\right\rangle -\left\vert 11010\right\rangle +\\ \\ -\left\vert 01101\right\rangle -\left\vert 10110\right\rangle -\left\vert 01011\right\rangle -\left\vert 00001\right\rangle -\left\vert 10000\right\rangle -\left\vert 01000\right\rangle -\left\vert 00100\right\rangle -\left\vert 00010\right\rangle \end{array} \right] \text{.} \label{cd2} \end{equation} Furthermore, the four stabilizer generators of the code are given by, \begin{equation} g_{1}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}X^{4}\text{, }g_{2}\overset {\text{def}}{=}X^{2}Z^{3}Z^{4}X^{5}\text{, }g_{3}\overset{\text{def}}{=} X^{1}X^{3}Z^{4}Z^{5}\text{ and, }g_{4}\overset{\text{def}}{=}Z^{1}X^{2} X^{4}Z^{5}\text{.} \end{equation} A suitable choice of logical operations reads, \begin{equation} \bar{X}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}\text{ and, }\bar {Z}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{4}Z^{5}\text{.} 
\end{equation} We observe that the codespace of the $\left[ \left[ 5,1,3\right] \right] $ code can be equally well-described by the following set of orthonormal codewords, \begin{equation} \left\vert 0_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 0_{L}\right\rangle +\left\vert 1_{L}\right\rangle }{\sqrt{2}}\text{ and, }\left\vert 1_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac {\left\vert 0_{L}\right\rangle -\left\vert 1_{L}\right\rangle }{\sqrt{2} }\text{,} \end{equation} with unchanged stabilizer and new logical operations given by, \begin{equation} \bar{Z}^{\prime}=\bar{X}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}\text{ and, }\bar{X}^{\prime}=\bar{Z}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{4} Z^{5}\text{.} \end{equation} The codeword stabilizer $\mathcal{S}_{\text{CWS}}$ of the CWS code that realizes the five-qubit code spanned by the codewords $\left\vert 0_{L}^{\prime}\right\rangle $ and $\left\vert 1_{L}^{\prime}\right\rangle $ reads\textbf{,} \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }\bar{Z}^{\prime}\text{ }\right\rangle =\left\langle X^{1}Z^{2}Z^{3}X^{4}\text{, }X^{2}Z^{3}Z^{4} X^{5}\text{, }X^{1}X^{3}Z^{4}Z^{5}\text{, }Z^{1}X^{2}X^{4}Z^{5}\text{, } X^{1}X^{2}X^{3}X^{4}X^{5}\right\rangle \text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ associated with $\mathcal{S}_{\text{CWS}}$ is given by, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) =\left( \begin{array} [c]{ccccc} 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 1 & 1\\ 1 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{ccccc} 1 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 1\\ 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 1 & 0\\ 1 & 1 & 1 & 1 & 1 \end{array} \right. \right) \text{.} \end{equation} We observe $\det X\neq0$. 
Thus, using the VdN-work, the $5\times5$ adjacency matrix $\Gamma$ becomes,
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the perfect [[5,1,3]]-code.}
\label{fig3}
\end{figure}
\begin{equation} \Gamma\overset{\text{def}}{=}Z^{\text{T}}\cdot\left( X^{\text{T}}\right) ^{-1}=\left( \begin{array} [c]{ccccc} 0 & 1 & 0 & 0 & 1\\ 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 1 & 0 \end{array} \right) \text{.} \end{equation} Therefore, applying now the S-work, the $6\times6$ symmetric coincidence matrix $\Xi_{\left[ \left[ 5,1,3\right] \right] }$ characterizing the graph with both input and output vertices is given by, \begin{equation} \Xi_{\left[ \left[ 5,1,3\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{cccccc} 0 & 1 & 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 0 & 0 & 1\\ 1 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & 1 & 0 & 1\\ 1 & 1 & 0 & 0 & 1 & 0 \end{array} \right) \text{.} \end{equation} In order to show that the pentagon graph with five output vertices and one input vertex with coincidence matrix $\Xi_{\left[ \left[ 5,1,3\right] \right] }$ realizes a $1$-error correcting code, it is required to apply the graph-theoretic error detection (correction) conditions of the SW-work to $\binom{5}{2}=10$\textbf{\ }two-error configurations $E_{k}$ with $k\in\left\{ 1\text{,..., }10\right\} $. 
These two-error configurations read, \begin{align} & E_{1}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }2\right\} \text{, }E_{2}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }3\right\} \text{, }E_{3}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }4\right\} \text{, }E_{4}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }5\right\} \text{, }E_{5}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }3\right\} \text{, }\nonumber\\ & \nonumber\\ & E_{6}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }4\right\} \text{, }E_{7}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }5\right\} \text{, }E_{8}\overset{\text{def}}{=}\left\{ 0\text{, }3\text{, }4\right\} \text{, }E_{9}\overset{\text{def}}{=}\left\{ 0\text{, }3\text{, }5\right\} \text{, }E_{10}\overset{\text{def}}{=}\left\{ 0\text{, }4\text{, }5\right\} \text{.} \label{ec} \end{align} For instance, the application of the SW-theorem to the error configuration $E_{1}=\left\{ 0\text{, }1\text{, }2\right\} $ leads to the following set of relations, \begin{equation} d_{0}+d_{2}=0\text{, }d_{0}=0\text{ and, }d_{0}+d_{1}=0\text{.} \end{equation}
Solving this set of equations, we arrive at $d_{0}=d_{1}=d_{2}=0$. According to the SW-theorem, this implies that the error configuration $E_{1}=\left\{ 0\text{, }1\text{, }2\right\} $ is a detectable error configuration. In other words, the detectability of $E_{1}$ is linked to the non-singularity of the following $3\times3$ submatrix of the $6\times6$ coincidence matrix, \begin{equation} E_{1}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }2\right\} \leftrightarrow\left\{ \begin{array} [c]{c} d_{0}+d_{2}=0\\ \text{ }d_{0}=0\\ \text{ }d_{0}+d_{1}=0 \end{array} \right. \leftrightarrow\det\left( \begin{array} [c]{ccc} 1 & 0 & 1\\ 1 & 0 & 0\\ 1 & 1 & 0 \end{array} \right) \neq0\text{.} \end{equation} Following this line of reasoning, it turns out that the remaining nine error configurations in Eq. (\ref{ec}) are detectable as well. The detectability of arbitrary error configurations with two nontrivial error operators leads to the conclusion that the graph realizes a $1$-error correcting code.
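Both the adjacency matrix $\Gamma=Z^{\text{T}}\cdot\left( X^{\text{T}}\right) ^{-1}$ and the determinant condition for $E_{1}$ can be reproduced with a few lines of GF(2) linear algebra. The sketch below is ours (helper names are hypothetical): it inverts $X^{\text{T}}$ over GF(2), recovers the pentagon graph, and confirms that the $3\times3$ submatrix associated with $E_{1}$ is nonsingular:

```python
# GF(2) reconstruction of Gamma = Z^T (X^T)^{-1} for the [[5,1,3]] code,
# plus the nonsingularity check tied to the error configuration E1.
# Illustrative sketch; helper names are ours.

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination.
    Raises StopIteration if M is singular (no pivot found)."""
    n = len(M)
    A = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])  # pivot search
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def gf2_mul(A, B):
    """Binary matrix product over GF(2)."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(row) for row in zip(*M)]

# (Z|X) blocks of the codeword stabilizer matrix of the [[5,1,3]] code.
Z = [[0, 1, 1, 0, 0],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1],
     [1, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]
X = [[1, 0, 0, 1, 0],
     [0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 1, 1, 1, 1]]

GAMMA = gf2_mul(transpose(Z), gf2_inv(transpose(X)))

PENTAGON = [[0, 1, 0, 0, 1],
            [1, 0, 1, 0, 0],
            [0, 1, 0, 1, 0],
            [0, 0, 1, 0, 1],
            [1, 0, 0, 1, 0]]

def gf2_nonsingular(M):
    """A binary matrix is invertible over GF(2) iff elimination finds pivots."""
    try:
        gf2_inv(M)
        return True
    except StopIteration:
        return False

SUB_E1 = [[1, 0, 1],
          [1, 0, 0],
          [1, 1, 0]]  # submatrix tied to E1 = {0, 1, 2}
```

The same `gf2_nonsingular` check can be repeated for the submatrices of the remaining nine error configurations.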
\subsection{The $\left[ \left[ 6,1,3\right] \right] $ stabilizer codes}
Calderbank et \textit{al}. discovered two distinct degenerate six-qubit quantum codes which encode one logical qubit into six physical qubits \cite{robert}. The first of these codes was obtained by trivially extending the perfect five-qubit code, the second through an exhaustive search of the encoding space.
\subsubsection{Trivial case}
The first (trivial) degenerate six-qubit code that we consider can be obtained from the $\left[ \left[ 5,1,3\right] \right] $ code by appending an ancilla qubit to the five-qubit code. Thus, we add a new qubit and a new stabilizer generator which is $X$ for the new qubit \cite{daniel-phd}. The other four stabilizer generators from the five-qubit code are tensored with the identity on the new qubit to form the generators of the new code. To be explicit, the codespace of this six-qubit code is spanned by the following two orthonormal codewords, \begin{equation} \left\vert 0_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\left\vert 0_{L}\right\rangle \otimes\left\vert +\right\rangle _{6}\text{ and, }\left\vert 1_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\left\vert 1_{L}\right\rangle \otimes\left\vert +\right\rangle _{6}\text{,} \end{equation} where $\left\vert 0_{L}\right\rangle $ and $\left\vert 1_{L}\right\rangle $ are defined in Eqs. (\ref{cd1}) and (\ref{cd2}), respectively. Following the point of view adopted for the five-qubit code, let us choose a codespace for the six-qubit code spanned by the new orthonormal codewords given by, \begin{equation} \left\vert 0_{L}^{^{\prime\prime}}\right\rangle \overset{\text{def}}{=} \frac{\left\vert 0_{L}^{\prime}\right\rangle +\left\vert 1_{L}^{\prime }\right\rangle }{\sqrt{2}}\text{ and, }\left\vert 1_{L}^{^{\prime\prime} }\right\rangle \overset{\text{def}}{=}\frac{\left\vert 0_{L}^{\prime }\right\rangle -\left\vert 1_{L}^{\prime}\right\rangle }{\sqrt{2}}\text{.} \end{equation} The five stabilizer generators of the code with codespace spanned by $\left\vert 0_{L}^{^{\prime\prime}}\right\rangle $ and $\left\vert 1_{L}^{^{\prime\prime}}\right\rangle $ read, \begin{equation} g_{1}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}X^{4}\text{, }g_{2}\overset {\text{def}}{=}X^{2}Z^{3}Z^{4}X^{5}\text{, }g_{3}\overset{\text{def}}{=} X^{1}X^{3}Z^{4}Z^{5}\text{, }g_{4}\overset{\text{def}}{=}Z^{1}X^{2}X^{4} Z^{5}\text{ and, 
}g_{5}\overset{\text{def}}{=}X^{6}\text{.} \label{may-1} \end{equation} A suitable choice of logical operations on $\left\vert 0_{L}^{^{\prime\prime} }\right\rangle $ and $\left\vert 1_{L}^{^{\prime\prime}}\right\rangle $ is provided by, \begin{equation} \bar{X}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{4}Z^{5}\text{ and, }\bar {Z}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}\text{.} \end{equation} We remark that $\bar{X}$ and $\bar{Z}$ anticommute, and that each commutes with all the five stabilizer generators in Eq. (\ref{may-1}). The codeword stabilizer $\mathcal{S}_{\text{CWS}}$ of the CWS code that realizes the six-qubit code spanned by the codewords $\left\vert 0_{L}^{^{\prime\prime} }\right\rangle $ and $\left\vert 1_{L}^{^{\prime\prime}}\right\rangle $\textbf{\ }reads\textbf{,} \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }\bar{Z}\text{ }\right\rangle =\left\langle X^{1}Z^{2}Z^{3}X^{4}\text{, }X^{2}Z^{3}Z^{4} X^{5}\text{, }X^{1}X^{3}Z^{4}Z^{5}\text{, }Z^{1}X^{2}X^{4}Z^{5}\text{, } X^{6}\text{, }X^{1}X^{2}X^{3}X^{4}X^{5}\right\rangle \text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ associated with $\mathcal{S}_{\text{CWS}}$ is given by, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) =\left( \begin{array} [c]{cccccc} 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{cccccc} 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0\\ 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 1 & 1 & 1 & 1 & 0 \end{array} \right. \right) \text{.} \end{equation} We observe $\det X\neq0$. 
Thus, using the VdN-work, the $6\times6$ adjacency matrix $\Gamma$ becomes, \begin{equation} \Gamma\overset{\text{def}}{=}Z^{\text{T}}\cdot\left( X^{\text{T}}\right) ^{-1}=\left( \begin{array} [c]{cccccc} 0 & 1 & 0 & 0 & 1 & 0\\ 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) \text{.} \end{equation} Therefore, applying now the S-work, the $7\times7$ symmetric coincidence matrix $\Xi_{\left[ \left[ 6,1,3\right] \right] }^{\text{trivial}}$ characterizing the graph with both input and output vertices reads, \begin{equation} \Xi_{\left[ \left[ 6,1,3\right] \right] }^{\text{trivial}}\overset{\text{def}}{=}\left( \begin{array} [c]{ccccccc} 0 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 0 & 0 & 1 & 0\\ 1 & 1 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 1 & 0\\ 1 & 1 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right) \text{.} \end{equation} In order to show that the graph with six output vertices and one input vertex with coincidence matrix $\Xi_{\left[ \left[ 6,1,3\right] \right] }^{\text{trivial}}$ realizes a $1$-error correcting degenerate code, we have to apply the graph-theoretic error detection (correction) conditions of the SW-work to $\binom{6}{2}=15$ two-error configurations $E_{k}$ with $k\in\left\{ 1\text{,..., }15\right\} $. It can be verified that each of the ten error configurations $E_{k}\overset{\text{def}}{=}\left\{ 0\text{, }e\text{, }e^{\prime}\right\} $ with $e$, $e^{\prime}\neq6$ satisfies the strong version of the graph-theoretic error detection conditions. In addition, the five two-error configurations $E_{k}$ containing the error $e=6$ only satisfy the weak form of the graph-theoretic error detection conditions. This fact is consistent with the findings concerning degenerate codes presented in the SW-work.
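Graphically, the appended ancilla shows up as an isolated vertex: recomputing $\Gamma=Z^{\text{T}}\cdot\left( X^{\text{T}}\right) ^{-1}$ over GF(2) from the $\left( Z\left\vert X\right. \right) $ blocks above yields the pentagon of the five-qubit code plus a disconnected sixth vertex. A minimal Python sketch of this check (ours; helper names are hypothetical):

```python
# Illustrative GF(2) check: the graph of the trivial [[6,1,3]] code is the
# pentagon of the [[5,1,3]] code plus an isolated ancilla vertex.

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def gf2_mul(A, B):
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(row) for row in zip(*M)]

# (Z|X) blocks of the codeword stabilizer matrix of the trivial [[6,1,3]] code.
Z = [[0, 1, 1, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 1, 1, 0],
     [1, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0]]
X = [[1, 0, 0, 1, 0, 0],
     [0, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0],
     [0, 1, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 1],
     [1, 1, 1, 1, 1, 0]]

GAMMA = gf2_mul(transpose(Z), gf2_inv(transpose(X)))

PENTAGON_PLUS_ISOLATED = [[0, 1, 0, 0, 1, 0],
                          [1, 0, 1, 0, 0, 0],
                          [0, 1, 0, 1, 0, 0],
                          [0, 0, 1, 0, 1, 0],
                          [1, 0, 0, 1, 0, 0],
                          [0, 0, 0, 0, 0, 0]]
```

The isolated sixth vertex mirrors the fact that the ancilla qubit carries no entanglement with the encoded block.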
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the trivial [[6,1,3]]-code.}
\label{fig4}
\end{figure}
\subsubsection{Nontrivial case}
The second example of a six-qubit degenerate code provided by Calderbank et \textit{al}. is a nontrivial six-qubit code \cite{robert}, which, according to Calderbank et \textit{al}., is unique up to equivalence. The example that we consider was introduced by Bilal et \textit{al}. in \cite{bilal}. They argue that, since every one of its qubits is entangled with the others, their code is not reducible to the trivial six-qubit code and is therefore equivalent to the (second) nontrivial six-qubit code by the arguments of Calderbank et \textit{al}. The codespace of this nontrivial six-qubit code is spanned by the codewords $\left\vert 0_{L}\right\rangle $ and $\left\vert 1_{L}\right\rangle $ defined as \cite{bilal}, \begin{equation} \left\vert 0_{L}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{8}}\left[ \left\vert 000000\right\rangle -\left\vert 100111\right\rangle +\left\vert 001111\right\rangle -\left\vert 101000\right\rangle -\left\vert 010010\right\rangle +\left\vert 110101\right\rangle +\left\vert 011101\right\rangle -\left\vert 111010\right\rangle \right] \text{,} \end{equation} and, \begin{equation} \left\vert 1_{L}\right\rangle \overset{\text{def}}{=}\frac{1}{\sqrt{8}}\left[ \left\vert 001010\right\rangle +\left\vert 101101\right\rangle +\left\vert 000101\right\rangle +\left\vert 100010\right\rangle -\left\vert 011000\right\rangle -\left\vert 111111\right\rangle +\left\vert 010111\right\rangle +\left\vert 110000\right\rangle \right] \text{,} \end{equation} respectively. 
The five stabilizer generators for this code are given by, \begin{equation} g_{1}\overset{\text{def}}{=}Y^{1}Z^{3}X^{4}X^{5}Y^{6}\text{, }g_{2} \overset{\text{def}}{=}Z^{1}X^{2}X^{5}Z^{6}\text{, }g_{3}\overset{\text{def} }{=}Z^{2}X^{3}X^{4}X^{5}X^{6}\text{, }g_{4}\overset{\text{def}}{=}Z^{4} Z^{6}\text{, }g_{5}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{5}\text{.} \end{equation} A suitable choice for the logical operations reads, \begin{equation} \bar{X}\overset{\text{def}}{=}Z^{1}X^{3}X^{5}\text{ and, }\bar{Z} \overset{\text{def}}{=}Z^{2}Z^{5}Z^{6}\text{.} \end{equation} In what follows, we shall consider the codespace spanned by the orthonormal codewords \begin{equation} \left\vert 0_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 0_{L}\right\rangle +\left\vert 1_{L}\right\rangle }{\sqrt{2}}\text{ and, }\left\vert 1_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac {\left\vert 0_{L}\right\rangle -\left\vert 1_{L}\right\rangle }{\sqrt{2} }\text{.} \end{equation} This way, the codeword stabilizer $\mathcal{S}_{\text{CWS}}$ of the CWS\ code that realizes the six-qubit code with a codespace spanned by $\left\vert 0_{L}^{\prime}\right\rangle $ and $\left\vert 1_{L}^{\prime}\right\rangle $ is given by, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }\bar{Z}^{\prime }\right\rangle \text{,} \end{equation} with $\bar{Z}^{\prime}\equiv\bar{X}\overset{\text{def}}{=}Z^{1}X^{3}X^{5}$. 
Therefore, $\mathcal{S}_{\text{CWS}}$ becomes\textbf{,} \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle Y^{1}Z^{3} X^{4}X^{5}Y^{6}\text{, }Z^{1}X^{2}X^{5}Z^{6}\text{, }Z^{2}X^{3}X^{4}X^{5} X^{6}\text{, }Z^{4}Z^{6}\text{, }Z^{1}Z^{2}Z^{3}Z^{5}\text{, }Z^{1}X^{3} X^{5}\right\rangle \text{.} \end{equation} Observe that $\mathcal{S}_{\text{CWS}}$ is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ with $\mathcal{S}_{\text{CWS}}^{\prime }\overset{\text{def}}{=}U\mathcal{S}_{\text{CWS}}U^{\dagger}$ and $U\overset{\text{def}}{=}H^{1}H^{4}$. Therefore, we obtain\textbf{,} \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}=\left\langle Y^{1}Z^{3}Z^{4}X^{5} Y^{6}\text{, }X^{1}X^{2}X^{5}Z^{6}\text{, }Z^{2}X^{3}Z^{4}X^{5}X^{6}\text{, }X^{4}Z^{6}\text{, }X^{1}Z^{2}Z^{3}Z^{5}\text{, }X^{1}X^{3}X^{5}\right\rangle \text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime }}$ associated with $\mathcal{S}_{\text{CWS}}^{\prime}$ reads, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. \right) =\left( \begin{array} [c]{cccccc} 1 & 0 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{cccccc} 1 & 0 & 0 & 0 & 1 & 1\\ 1 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 \end{array} \right. 
\right) \text{.} \end{equation} We observe that $\det X^{\prime\text{T}}\neq0$ and, applying the VdN-work, the symmetric adjacency matrix $\Gamma$ becomes, \begin{equation} \Gamma\overset{\text{def}}{=}Z^{\prime\text{T}}\cdot\left( X^{\prime\text{T} }\right) ^{-1}=\left( \begin{array} [c]{cccccc} 0 & 1 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & 0 & 1 & 0\\ 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 & 1 & 0 \end{array} \right) \text{.} \end{equation} Therefore, applying now the S-work, the $7\times7$ symmetric coincidence matrix $\Xi_{\left[ \left[ 6,1,3\right] \right] }^{\text{nontrivial}}$ characterizing the graph with both input and output vertices is given by, \begin{equation} \Xi_{\left[ \left[ 6,1,3\right] \right] }^{\text{nontrivial}} \overset{\text{def}}{=}\left( \begin{array} [c]{ccccccc} 0 & 1 & 0 & 1 & 0 & 1 & 0\\ 1 & 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0\\ 1 & 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 & 1 & 0 \end{array} \right) \text{.} \end{equation} In order to show that the graph with six output vertices and one input vertex with coincidence matrix $\Xi_{\left[ \left[ 6,1,3\right] \right] }^{\text{nontrivial}}$ realizes a $1$-error correcting (degenerate) code, we have to apply the graph-theoretic error detection (correction) conditions of the SW-work to $\binom{6}{2}=15$ two-error configurations $E_{k}$ with $k\in\left\{ 1\text{,..., }15\right\} $. It can be checked that the only problematic error configuration is $E_{k}\overset{\text{def}}{=}\left\{ 0\text{, }e\text{, }e^{\prime}\right\} $ with $e=4$, $e^{\prime}=6$. The only undetectable nontrivial error is represented by $X^{4}Z^{6}$. However, this error operator belongs to the stabilizer of the code and therefore it will have no impact on the encoded quantum state. Thus, the code considered has indeed distance $d=3$. 
Furthermore, since a quantum stabilizer code with distance $d$ is a degenerate code if and only if its stabilizer contains an element of weight less than $d$ (excluding the identity element), our code, having $d=3$ and a weight-$2$ stabilizer element, is indeed a degenerate code.
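As before, the adjacency matrix can be double-checked mechanically. The following sketch (ours; helper names are hypothetical) recovers $\Gamma$ over GF(2) from the $\left( Z^{\prime}\left\vert X^{\prime}\right. \right) $ blocks of $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}$ and verifies its symmetry:

```python
# Illustrative GF(2) reconstruction of Gamma = Z'^T (X'^T)^{-1} for the
# nontrivial [[6,1,3]] code.

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def gf2_mul(A, B):
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(row) for row in zip(*M)]

# (Z'|X') blocks of the codeword stabilizer matrix of the nontrivial code.
Z = [[1, 0, 1, 1, 0, 1],
     [0, 0, 0, 0, 0, 1],
     [0, 1, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 1],
     [0, 1, 1, 0, 1, 0],
     [0, 0, 0, 0, 0, 0]]
X = [[1, 0, 0, 0, 1, 1],
     [1, 1, 0, 0, 1, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 0],
     [1, 0, 0, 0, 0, 0],
     [1, 0, 1, 0, 1, 0]]

GAMMA = gf2_mul(transpose(Z), gf2_inv(transpose(X)))

GAMMA_EXPECTED = [[0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1],
                  [0, 0, 0, 0, 0, 1],
                  [1, 1, 1, 0, 0, 1],
                  [0, 0, 1, 1, 1, 0]]
```

Unlike the trivial six-qubit graph, this adjacency matrix has no isolated vertex, in line with the observation that every qubit is entangled with the others.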
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the nontrivial [[6,1,3]]-code.}
\label{fig5}
\end{figure}
\subsection{The CSS $\left[ \left[ 7,1,3\right] \right] $ stabilizer code}
The codespace of the CSS seven-qubit stabilizer code is spanned by the following codewords \cite{steane, robert2}, \begin{equation} \left\vert 0_{L}\right\rangle \overset{\text{def}}{=}\frac{1}{\left( \sqrt {2}\right) ^{3}}\left[ \begin{array} [c]{c} \left\vert 0000000\right\rangle +\left\vert 0110011\right\rangle +\left\vert 1010101\right\rangle +\left\vert 1100110\right\rangle +\\ \\ +\left\vert 0001111\right\rangle +\left\vert 0111100\right\rangle +\left\vert 1011010\right\rangle +\left\vert 1101001\right\rangle \end{array} \right] \text{,} \label{cd11} \end{equation} and, \begin{equation} \left\vert 1_{L}\right\rangle \overset{\text{def}}{=}\frac{1}{\left( \sqrt {2}\right) ^{3}}\left[ \begin{array} [c]{c} \left\vert 1111111\right\rangle +\left\vert 1001100\right\rangle +\left\vert 0101010\right\rangle +\left\vert 0011001\right\rangle +\\ \\ +\left\vert 1110000\right\rangle +\left\vert 1000011\right\rangle +\left\vert 0100101\right\rangle +\left\vert 0010110\right\rangle \end{array} \right] \text{.} \label{cd22} \end{equation} Furthermore, the six stabilizer generators of the code are given by \begin{equation} g_{1}\overset{\text{def}}{=}X^{4}X^{5}X^{6}X^{7}\text{, }g_{2}\overset {\text{def}}{=}X^{2}X^{3}X^{6}X^{7}\text{, }g_{3}\overset{\text{def}}{=} X^{1}X^{3}X^{5}X^{7}\text{, }g_{4}\overset{\text{def}}{=}Z^{4}Z^{5}Z^{6} Z^{7}\text{, }g_{5}\overset{\text{def}}{=}Z^{2}Z^{3}Z^{6}Z^{7}\text{, } g_{6}\overset{\text{def}}{=}Z^{1}Z^{3}Z^{5}Z^{7}\text{.} \end{equation} A suitable choice of logical operations reads, \begin{equation} \bar{X}\overset{\text{def}}{=}X^{1}X^{2}X^{3}\text{ and, }\bar{Z} \overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}\text{.} \end{equation} We observe that the codespace of the CSS code can be equally well-described by the following set of orthonormal codewords, \begin{equation} \left\vert 0_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac{\left\vert 0_{L}\right\rangle +\left\vert 1_{L}\right\rangle }{\sqrt{2}}\text{ and, }\left\vert 
1_{L}^{\prime}\right\rangle \overset{\text{def}}{=}\frac {\left\vert 0_{L}\right\rangle -\left\vert 1_{L}\right\rangle }{\sqrt{2} }\text{,} \end{equation} with unchanged stabilizer and new logical operations given by, \begin{equation} \bar{Z}^{\prime}=\bar{X}\overset{\text{def}}{=}X^{1}X^{2}X^{3}\text{ and, }\bar{X}^{\prime}=\bar{Z}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}\text{.} \end{equation} The codeword stabilizer $\mathcal{S}_{\text{CWS}}$ of the CWS code that realizes the seven-qubit code spanned by the codewords $\left\vert 0_{L}^{\prime}\right\rangle $ and $\left\vert 1_{L}^{\prime}\right\rangle $ reads, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }g_{6}\text{, }\bar {Z}^{\prime}\text{ }\right\rangle \text{,} \end{equation} that is, \begin{equation} \mathcal{S}_{\text{CWS}}=\left\langle X^{4}X^{5}X^{6}X^{7}\text{, }X^{2} X^{3}X^{6}X^{7}\text{, }X^{1}X^{3}X^{5}X^{7}\text{, }Z^{4}Z^{5}Z^{6} Z^{7}\text{, }Z^{2}Z^{3}Z^{6}Z^{7}\text{, }Z^{1}Z^{3}Z^{5}Z^{7}\text{, } X^{1}X^{2}X^{3}\right\rangle \text{.} \end{equation} Observe that $\mathcal{S}_{\text{CWS}}$\ is local Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ with $\mathcal{S}_{\text{CWS}}^{\prime }\overset{\text{def}}{=}U\mathcal{S}_{\text{CWS}}U^{\dagger}$ and $U\overset{\text{def}}{=}H^{1}H^{2}H^{4}$. 
Therefore, $\mathcal{S} _{\text{CWS}}^{\prime}$ is given by, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}=\left\langle Z^{4}X^{5}X^{6}X^{7}\text{, }Z^{2}X^{3}X^{6}X^{7}\text{, }Z^{1}X^{3}X^{5}X^{7}\text{, }X^{4}Z^{5} Z^{6}Z^{7}\text{, }X^{2}Z^{3}Z^{6}Z^{7}\text{, }X^{1}Z^{3}Z^{5}Z^{7}\text{, }Z^{1}Z^{2}X^{3}\right\rangle \text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime }}$ associated with $\mathcal{S}_{\text{CWS}}^{\prime}$ reads, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. \right) =\left( \begin{array} [c]{ccccccc} 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{ccccccc} 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{array} \right. \right) \text{.} \end{equation} We observe $\det X^{\prime}\neq0$. Thus, using the VdN-work, the $7\times7 $ adjacency matrix $\Gamma$ reads, \begin{equation} \Gamma\overset{\text{def}}{=}Z^{\prime\text{T}}\cdot\left( X^{\prime\text{T} }\right) ^{-1}=\left( \begin{array} [c]{ccccccc} 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \text{.} \end{equation}
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the CSS [[7,1,3]]-code.}
\label{fig6}
\end{figure}
Therefore, applying now the S-work, the $8\times8$ symmetric coincidence matrix $\Xi_{\text{CSS-}\left[ \left[ 7,1,3\right] \right] }$ characterizing the graph with both input and output vertices becomes, \begin{equation} \Xi_{\text{CSS-}\left[ \left[ 7,1,3\right] \right] }\overset{\text{def}} {=}\left( \begin{array} [c]{cccccccc} 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \text{.} \end{equation} It is straightforward to show that the cube graph with seven output vertices and one input vertex with coincidence matrix $\Xi_{\text{CSS-}\left[ \left[ 7,1,3\right] \right] }$ realizes a $1$-error correcting code. Namely, all the $\binom{7}{2}=21$ two-error configurations $E_{k}$ with $k\in\left\{ 1\text{,..., }21\right\} $, \begin{align} & E_{1}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }2\right\} \text{, }E_{2}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }3\right\} \text{, }E_{3}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }4\right\} \text{, }E_{4}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }5\right\} \text{, }E_{5}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }6\right\} \text{, }E_{6}\overset{\text{def}}{=}\left\{ 0\text{, }1\text{, }7\right\} \text{, }\nonumber\\ & \nonumber\\ & E_{7}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }3\right\} \text{, }E_{8}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }4\right\} \text{, }E_{9}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }5\right\} \text{, }E_{10}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }6\right\} \text{, }E_{11}\overset{\text{def}}{=}\left\{ 0\text{, }2\text{, }7\right\} \text{, }E_{12}\overset{\text{def}}{=}\left\{ 0\text{, }3\text{, }4\right\} \text{,}\nonumber\\ & \nonumber\\ & \text{ }E_{13}\overset{\text{def}}{=}\left\{ 0\text{, 
}3\text{, }5\right\} \text{, }E_{14}\overset{\text{def}}{=}\left\{ 0\text{, }3\text{, }6\right\} \text{, }E_{15}\overset{\text{def}}{=}\left\{ 0\text{, }3\text{, }7\right\} \text{, }E_{16}\overset{\text{def}}{=}\left\{ 0\text{, }4\text{, }5\right\} \text{, }E_{17}\overset{\text{def}}{=}\left\{ 0\text{, }4\text{, }6\right\} \text{,}\nonumber\\ & \nonumber\\ & \text{ }E_{18}\overset{\text{def}}{=}\left\{ 0\text{, }4\text{, }7\right\} \text{, }E_{19}\overset{\text{def}}{=}\left\{ 0\text{, }5\text{, }6\right\} \text{, }E_{20}\overset{\text{def}}{=}\left\{ 0\text{, }5\text{, }7\right\} \text{, }E_{21}\overset{\text{def}}{=}\left\{ 0\text{, }6\text{, }7\right\} \text{,} \end{align} satisfy the strong version of the graph-theoretic error detection (correction) conditions of the SW-work in agreement with the fact that the code is nondegenerate.
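As a quick consistency check, the $\binom{7}{2}=21$ two-error configurations can be enumerated programmatically. The sketch below (plain Python; the coincidence matrix is transcribed from the equation above, with vertex $0$ the input and $1$ through $7$ the outputs) also verifies that the coincidence matrix is symmetric, as required for an undirected graph.

```python
from itertools import combinations

# Coincidence matrix of the cube graph with one input vertex (0) and
# seven output vertices (1..7), transcribed from the equation above.
Xi = [
    [0, 0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 0, 0, 1, 1],
    [1, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 1, 1],
    [1, 1, 0, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0, 0],
]

# An undirected graph must have a symmetric coincidence matrix.
assert all(Xi[i][j] == Xi[j][i] for i in range(8) for j in range(8))

# Two-error configurations E_k = {0, i, j}: the input vertex together
# with an unordered pair of the seven output vertices.
configs = [{0, i, j} for i, j in combinations(range(1, 8), 2)]
print(len(configs))  # 21 = C(7, 2)
```

Checking the strong error detection condition itself requires the graph-theoretic machinery of the SW-work, which the enumeration above merely feeds.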
\subsection{The Shor $\left[ \left[ 9,1,3\right] \right] $ stabilizer code}
\subsubsection{First case}
$\allowbreak$When realized as a CWS quantum code, the Shor nine-qubit code \cite{shor} is characterized by the codeword stabilizer\textbf{\ } $\mathcal{S}_{\text{CWS}}$\textbf{\ }given by, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }g_{6}\text{, } g_{7}\text{, }g_{8}\text{, }\bar{Z}\right\rangle \text{,} \label{sshor} \end{equation} with codeword stabilizer generators given by, \begin{align} & g_{1}\overset{\text{def}}{=}Z^{1}Z^{2}\text{, }g_{2}\overset{\text{def}} {=}Z^{1}Z^{3}\text{, }g_{3}\overset{\text{def}}{=}Z^{4}Z^{5}\text{, } g_{4}\overset{\text{def}}{=}Z^{4}Z^{6}\text{, }g_{5}\overset{\text{def}} {=}Z^{7}Z^{8}\text{, }g_{6}\overset{\text{def}}{=}Z^{7}Z^{9}\text{, }\nonumber\\ & \nonumber\\ & g_{7}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}X^{6}\text{, } g_{8}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{7}X^{8}X^{9}\text{, }\bar {Z}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}X^{6}X^{7}X^{8} X^{9}\text{.} \end{align} What is the graph that realizes the Shor code? The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ corresponding to $\mathcal{S} _{\text{CWS}}$ in Eq. (\ref{sshor}) can be formally written as, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. 
\right) =\left( \begin{array} [c]{ccccccccc} 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array} \right. \right) \text{.} \end{equation} Since $X^{\text{T}}$ is not invertible, the algorithmic procedure introduced in the VdN-work cannot be applied. However, we notice that the codeword stabilizer $\mathcal{S}_{\text{CWS}}$ in Eq. (\ref{sshor}) is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ defined by, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}U\mathcal{S} _{\text{CWS}}U^{\dagger}\text{ with, }U\overset{\text{def}}{=}I^{1}\otimes H^{2}\otimes H^{3}\otimes I^{4}\otimes H^{5}\otimes H^{6}\otimes I^{7}\otimes H^{8}\otimes H^{9}\text{.} \label{sshor1} \end{equation} Using Eqs. 
(\ref{sshor}) and (\ref{sshor1}), it follows that \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle g_{1}^{\prime}\text{, }g_{2}^{\prime}\text{, }g_{3}^{\prime}\text{, } g_{4}^{\prime}\text{, }g_{5}^{\prime}\text{, }g_{6}^{\prime}\text{, } g_{7}^{\prime}\text{, }g_{8}^{\prime}\text{, }\bar{Z}^{\prime}\right\rangle \text{,} \end{equation} with, \begin{align} & g_{1}^{\prime}\overset{\text{def}}{=}Z^{1}X^{2}\text{, }g_{2}^{\prime }\overset{\text{def}}{=}Z^{1}X^{3}\text{, }g_{3}^{\prime}\overset{\text{def} }{=}Z^{4}X^{5}\text{, }g_{4}^{\prime}\overset{\text{def}}{=}Z^{4}X^{6}\text{, }g_{5}^{\prime}\overset{\text{def}}{=}Z^{7}X^{8}\text{, }g_{6}^{\prime }\overset{\text{def}}{=}Z^{7}X^{9}\text{, }\nonumber\\ & \nonumber\\ & g_{7}^{\prime}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}X^{4}Z^{5}Z^{6}\text{, }g_{8}^{\prime}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}X^{7}Z^{8}Z^{9}\text{, }\bar{Z}^{\prime}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}X^{4}Z^{5}Z^{6} X^{7}Z^{8}Z^{9}\text{.} \end{align} We observe that the codeword stabilizer matrix $\mathcal{H}_{\mathcal{S} _{\text{CWS}}^{\prime}}$ corresponding to $\mathcal{S}_{\text{CWS}}^{\prime}$ becomes, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. 
\right) =\left( \begin{array} [c]{ccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \end{array} \left\vert \begin{array} [c]{ccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \end{array} \right. \right) \text{.} \end{equation}
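Since the local Clifford operation $U$ above only involves Hadamard gates, whose conjugation action swaps $X\leftrightarrow Z$ on the affected qubit, the primed generators can be checked mechanically. The following sketch (plain Python; Pauli operators stored as strings, one character per qubit, global phases ignored since no $Y$ appears on a Hadamard-conjugated qubit here) applies $H$ on qubits $2,3,5,6,8,9$ to the original generators of Eq. (\ref{sshor}) and recovers the primed ones.

```python
# Generators of S_CWS for the Shor code, one character per qubit.
gens = [
    "ZZIIIIIII",  # g1 = Z1 Z2
    "ZIZIIIIII",  # g2 = Z1 Z3
    "IIIZZIIII",  # g3 = Z4 Z5
    "IIIZIZIII",  # g4 = Z4 Z6
    "IIIIIIZZI",  # g5 = Z7 Z8
    "IIIIIIZIZ",  # g6 = Z7 Z9
    "XXXXXXIII",  # g7 = X1 X2 X3 X4 X5 X6
    "XXXIIIXXX",  # g8 = X1 X2 X3 X7 X8 X9
    "XXXXXXXXX",  # Zbar
]

# Conjugation by a Hadamard swaps X and Z on that qubit.
H_SWAP = {"I": "I", "X": "Z", "Z": "X", "Y": "Y"}
hadamard_qubits = {2, 3, 5, 6, 8, 9}  # 1-indexed, as in U above

def conjugate(pauli):
    return "".join(H_SWAP[c] if q + 1 in hadamard_qubits else c
                   for q, c in enumerate(pauli))

primed = [conjugate(g) for g in gens]
assert primed[0] == "ZXIIIIIII"  # g1' = Z1 X2
assert primed[6] == "XZZXZZIII"  # g7' = X1 Z2 Z3 X4 Z5 Z6
assert primed[8] == "XZZXZZXZZ"  # Zbar' = X1 Z2 Z3 X4 Z5 Z6 X7 Z8 Z9
```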
\begin{figure}
\caption{First example of a graph for a quantum code that is locally Clifford equivalent to the Shor [[9,1,3]]-code.}
\label{fig7}
\end{figure}
Omitting further details and applying the VdN-work, the symmetric $9\times9$ adjacency matrix $\Gamma$ for the Shor code becomes, \begin{equation} \Gamma\overset{\text{def}}{=}Z^{\prime\text{T}}\cdot\left( X^{\prime\text{T} }\right) ^{-1}=\left( \begin{array} [c]{ccccccccc} 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right) \text{.} \label{gammashor1} \end{equation} Finally, applying the S-work, the $10\times10$ symmetric coincidence matrix $\Xi_{\left[ \left[ 9,1,3\right] \right] }$ characterizing the graph with both input and output vertices reads, \begin{equation} \Xi_{\left[ \left[ 9,1,3\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{cccccccccc} 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right) \text{.} \end{equation} In what follows, we shall consider an alternative path leading to a graph for the nine-qubit stabilizer code. Finally, we shall discuss the error-correcting capability of the code in graph-theoretic terms as originally advocated in the SW-work.
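The omitted computation $\Gamma=Z^{\prime\text{T}}\cdot\left(X^{\prime\text{T}}\right)^{-1}$ is ordinary linear algebra over GF(2). The sketch below (plain Python, no external libraries; the $Z^{\prime}$ and $X^{\prime}$ blocks are transcribed from the stabilizer matrix above) inverts $X^{\prime\text{T}}$ by Gauss-Jordan elimination modulo $2$ and recovers the adjacency matrix of Eq. (\ref{gammashor1}).

```python
def gf2_inv(M):
    """Invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col])
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def gf2_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

# Z' and X' blocks of the stabilizer matrix H_{S'_CWS} above.
Zp = [[1,0,0,0,0,0,0,0,0],[1,0,0,0,0,0,0,0,0],[0,0,0,1,0,0,0,0,0],
      [0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,1,0,0],[0,0,0,0,0,0,1,0,0],
      [0,1,1,0,1,1,0,0,0],[0,1,1,0,0,0,0,1,1],[0,1,1,0,1,1,0,1,1]]
Xp = [[0,1,0,0,0,0,0,0,0],[0,0,1,0,0,0,0,0,0],[0,0,0,0,1,0,0,0,0],
      [0,0,0,0,0,1,0,0,0],[0,0,0,0,0,0,0,1,0],[0,0,0,0,0,0,0,0,1],
      [1,0,0,1,0,0,0,0,0],[1,0,0,0,0,0,1,0,0],[1,0,0,1,0,0,1,0,0]]

# Gamma = Z'^T (X'^T)^{-1} over GF(2): three disjoint 3-vertex stars.
Gamma = gf2_mul(transpose(Zp), gf2_inv(transpose(Xp)))
expected = [[0,1,1,0,0,0,0,0,0],[1,0,0,0,0,0,0,0,0],[1,0,0,0,0,0,0,0,0],
            [0,0,0,0,1,1,0,0,0],[0,0,0,1,0,0,0,0,0],[0,0,0,1,0,0,0,0,0],
            [0,0,0,0,0,0,0,1,1],[0,0,0,0,0,0,1,0,0],[0,0,0,0,0,0,1,0,0]]
assert Gamma == expected
```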
\subsubsection{Second case}
Working within the CWS framework, consider a graph with nine vertices characterized by the following canonical codeword stabilizer, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }g_{6}\text{, } g_{7}\text{, }g_{8}\text{, }g_{9}\right\rangle \text{,} \label{s2} \end{equation} with, \begin{align} & g_{1}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}Z^{4}Z^{7}\text{, }g_{2} \overset{\text{def}}{=}Z^{1}X^{2}\text{, }g_{3}\overset{\text{def}}{=} Z^{1}X^{3}\text{, }g_{4}\overset{\text{def}}{=}Z^{1}X^{4}Z^{5}Z^{6} Z^{7}\text{, }g_{5}\overset{\text{def}}{=}Z^{4}X^{5}\text{, }g_{6} \overset{\text{def}}{=}Z^{4}X^{6}\text{, }\nonumber\\ & \nonumber\\ & g_{7}\overset{\text{def}}{=}Z^{1}Z^{4}X^{7}Z^{8}Z^{9}\text{, }g_{8} \overset{\text{def}}{=}Z^{7}X^{8}\text{, }g_{9}\equiv\bar{Z}\overset {\text{def}}{=}Z^{7}X^{9}\text{.} \end{align} The $9\times9$ adjacency matrix $\Gamma$ for this graph is given by, \begin{equation} \Gamma\overset{\text{def}}{=}\left( \begin{array} [c]{ccccccccc} 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right) \text{.} \label{chist} \end{equation} Applying the S-work, the $10\times10$ symmetric coincidence matrix $\Xi_{\left[ \left[ 9,1,3\right] \right] }^{\prime}$ characterizing the graph with both input and output vertices becomes, \begin{equation} \Xi_{\left[ \left[ 9,1,3\right] \right] }^{\prime}\overset{\text{def}} {=}\left( \begin{array} [c]{cccccccccc} 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 
& 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right) \text{.} \end{equation} Does the graph \ associated with the adjacency matrix in Eq. (\ref{chist}) realize the Shor nine-qubit code? If we show that $\mathcal{S}_{\text{CWS}}$ in Eq. (\ref{s2}) is locally Clifford equivalent to a new stabilizer $\mathcal{S}_{\text{CWS}}^{\prime}$ from which we can construct a graph that realizes the Shor code, then we can reply with an affirmative answer. Observe that $\mathcal{S}_{\text{CWS}}$ in Eq. (\ref{s2}) is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ defined as, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}U\mathcal{S} _{\text{CWS}}U^{\dagger}\text{ with, }U\overset{\text{def}}{=}P^{1}\otimes H^{2}\otimes H^{3}\otimes P^{4}\otimes H^{5}\otimes H^{6}\otimes P^{7}\otimes H^{8}\otimes H^{9}\text{.} \label{s3} \end{equation} Using Eqs. 
(\ref{s2}) and (\ref{s3}), it follows that \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle g_{1}^{\prime}\text{, }g_{2}^{\prime}\text{, }g_{3}^{\prime}\text{, } g_{4}^{\prime}\text{, }g_{5}^{\prime}\text{, }g_{6}^{\prime}\text{, } g_{7}^{\prime}\text{, }g_{8}^{\prime}\text{, }\bar{Z}^{\prime}\right\rangle \text{,} \end{equation} with, \begin{align} & g_{1}^{\prime}\overset{\text{def}}{=}Z^{1}X^{2}\text{, }g_{2}^{\prime }\overset{\text{def}}{=}Z^{1}X^{3}\text{, }g_{3}^{\prime}\overset{\text{def} }{=}Z^{4}X^{5}\text{, }g_{4}^{\prime}\overset{\text{def}}{=}Z^{4}X^{6}\text{, }g_{5}^{\prime}\overset{\text{def}}{=}Z^{7}X^{8}\text{, }g_{6}^{\prime }\overset{\text{def}}{=}Z^{7}X^{9}\text{, }\nonumber\\ & \nonumber\\ & g_{7}^{\prime}\overset{\text{def}}{=}Y^{1}Z^{2}Z^{3}Y^{4}Z^{5}Z^{6}\text{, }g_{8}^{\prime}\overset{\text{def}}{=}Y^{1}Z^{2}Z^{3}Y^{7}Z^{8}Z^{9}\text{, }\bar{Z}^{\prime}\overset{\text{def}}{=}Y^{1}Z^{2}Z^{3}Y^{4}Z^{5}Z^{6} Y^{7}Z^{8}Z^{9}\text{.} \end{align} We observe that the codeword stabilizer matrix $\mathcal{H}_{\mathcal{S} _{\text{CWS}}^{\prime}}$ corresponding to $\mathcal{S}_{\text{CWS}}^{\prime}$ becomes, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( \begin{array} [c]{ccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array} \left\vert \begin{array} [c]{ccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 
\end{array} \right. \right) \text{.} \end{equation} Omitting further details and applying the VdN-work, the $9\times9$ symmetric adjacency matrix $\Gamma$ associated with the new graph reads, \begin{equation} \Gamma\overset{\text{def}}{=}\left( \begin{array} [c]{ccccccccc} 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array} \right) \text{.} \label{gammashor2} \end{equation}
\begin{figure}
\caption{Second example of a graph for a quantum code that is locally Clifford equivalent to the Shor [[9,1,3]]-code.}
\label{fig8}
\end{figure}
Since the adjacency matrices in Eqs. (\ref{gammashor1}) and (\ref{gammashor2}) are identical, we conclude that the graph with canonical stabilizer (\ref{s2}) realizes the Shor code as well. In addition, we point out that among all the possible $36$ two-element error configurations, $27$ configurations satisfy the strong error correction condition, $3$ satisfy the weak error correction condition, and $6$ satisfy neither of them; however, as we shall show, these are harmless. The strongly correctable $27$ configurations are given by, \begin{align} & \left\{ 0\text{, }1\text{, }4\right\} \text{, }\left\{ 0\text{, }1\text{, }5\right\} \text{, }\left\{ 0\text{, }1\text{, }6\right\} \text{, }\left\{ 0\text{, }1\text{, }7\right\} \text{, }\left\{ 0\text{, }1\text{, }8\right\} \text{, }\left\{ 0\text{, }1\text{, }9\right\} \text{, }\left\{ 0\text{, }2\text{, }4\right\} \text{, }\left\{ 0\text{, }2\text{, }5\right\} \text{, }\left\{ 0\text{, }2\text{, }6\right\} \text{, }\left\{ 0\text{, }2\text{, }7\right\} \text{,}\nonumber\\ & \nonumber\\ & \text{ }\left\{ 0\text{, }2\text{, }8\right\} \text{, }\left\{ 0\text{, }2\text{, }9\right\} \text{, }\left\{ 0\text{, }3\text{, }4\right\} \text{, }\left\{ 0\text{, }3\text{, }5\right\} \text{, }\left\{ 0\text{, }3\text{, }6\right\} \text{, }\left\{ 0\text{, }3\text{, }7\right\} \text{, }\left\{ 0\text{, }3\text{, }8\right\} \text{, }\left\{ 0\text{, }3\text{, }9\right\} \text{, }\left\{ 0\text{, }4\text{, }7\right\} \text{, }\left\{ 0\text{, }4\text{, }8\right\} \text{,}\nonumber\\ & \nonumber\\ & \text{ }\left\{ 0\text{, }4\text{, }9\right\} \text{, }\left\{ 0\text{, }5\text{, }7\right\} \text{, }\left\{ 0\text{, }5\text{, }8\right\} \text{, }\left\{ 0\text{, }5\text{, }9\right\} \text{, }\left\{ 0\text{, }6\text{, }7\right\} \text{, }\left\{ 0\text{, }6\text{, }8\right\} \text{, }\left\{ 0\text{, }6\text{, }9\right\} \text{,} \end{align} while the weakly correctable $3$ configurations read, 
\begin{equation} \left\{ 0\text{, }2\text{, }3\right\} \text{, }\left\{ 0\text{, }5\text{, }6\right\} \text{, }\left\{ 0\text{, }8\text{, }9\right\} \text{.} \end{equation} Finally, the potentially dangerous $6$ error configurations are, \begin{equation} \left\{ 0\text{, }1\text{, }2\right\} \text{, }\left\{ 0\text{, }1\text{, }3\right\} \text{, }\left\{ 0\text{, }4\text{, }5\right\} \text{, }\left\{ 0\text{, }4\text{, }6\right\} \text{, }\left\{ 0\text{, }7\text{, }8\right\} \text{, }\left\{ 0\text{, }7\text{, }9\right\} \text{.} \label{badcon} \end{equation} Each of the $6$ two-error configurations in Eq. (\ref{badcon}) generates $9$ weight-$2$ error operators for a total of $54$ errors. It turns out that in each set of errors of cardinality $9$, there is $1$ weight-$2$ nondetectable nontrivial error. However, this single error operator belongs to the stabilizer $\mathcal{S}_{\text{stabilizer}}$ of the code and therefore it will have no impact on the encoded quantum state. Thus, the code considered has indeed distance $d=3$. To be explicit, consider the set $\left\{ 0\text{, }1\text{, }2\right\} $. This set generates the following $9$ weight-$2$ error operators, \begin{equation} X^{1}X^{2}\text{, }X^{1}Y^{2}\text{, }X^{1}Z^{2}\text{, }Y^{1}X^{2}\text{, }Y^{1}Y^{2}\text{, }Y^{1}Z^{2}\text{, }Z^{1}X^{2}\text{, }Z^{1}Y^{2}\text{, }Z^{1}Z^{2}\text{.} \end{equation} The only nontrivial error with vanishing error syndrome is $Z^{1}Z^{2}$ which, however, belongs to the stabilizer.
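The syndrome count above is easy to reproduce. In the sketch below (plain Python), a Pauli operator is stored as a pair of binary vectors $(x|z)$, a syndrome bit is the symplectic product with a stabilizer generator, and the nine weight-$2$ errors supported on qubits $1$ and $2$ are tested against the eight generators of the Shor stabilizer; only $Z^{1}Z^{2}$ is syndrome-free, and it lies in the stabilizer.

```python
from itertools import product

N = 9  # physical qubits

def pauli(spec):
    """Build (x, z) binary vectors from e.g. {1: 'X', 2: 'Z'} (1-indexed)."""
    x, z = [0] * N, [0] * N
    for q, p in spec.items():
        if p in ("X", "Y"): x[q - 1] = 1
        if p in ("Z", "Y"): z[q - 1] = 1
    return x, z

# The eight stabilizer generators of the Shor nine-qubit code.
gens = [pauli({1: "Z", 2: "Z"}), pauli({1: "Z", 3: "Z"}),
        pauli({4: "Z", 5: "Z"}), pauli({4: "Z", 6: "Z"}),
        pauli({7: "Z", 8: "Z"}), pauli({7: "Z", 9: "Z"}),
        pauli({q: "X" for q in (1, 2, 3, 4, 5, 6)}),
        pauli({q: "X" for q in (1, 2, 3, 7, 8, 9)})]

def symplectic(p1, p2):
    """0 if the two Pauli operators commute, 1 if they anticommute."""
    (x1, z1), (x2, z2) = p1, p2
    return (sum(a * b for a, b in zip(x1, z2))
            + sum(a * b for a, b in zip(x2, z1))) % 2

def syndrome(err):
    return tuple(symplectic(err, g) for g in gens)

# All nine weight-2 errors supported on qubits 1 and 2.
undetected = [p1 + p2
              for p1, p2 in product("XYZ", repeat=2)
              if syndrome(pauli({1: p1, 2: p2})) == (0,) * 8]
print(undetected)  # ['ZZ'] -- Z^1 Z^2, which is the generator g1 itself
```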
\subsection{The $\left[ \left[ 11,1,5\right] \right] $ stabilizer code}
$\allowbreak$The smallest possible code protecting against two arbitrary errors maps one logical qubit into eleven physical qubits. The existence of such a code was proven in \cite{robert} while its stabilizer structure was constructed in \cite{daniel-phd}. When\ realized as a CWS code, the eleven-qubit quantum stabilizer code is characterized by the codeword stabilizer, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }g_{6}\text{, } g_{7}\text{, }g_{8}\text{, }g_{9}\text{, }g_{10}\text{, }\bar{Z}\right\rangle \text{,} \label{eleven} \end{equation} with \cite{daniel-phd}, \begin{align} & g_{1}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{4}Z^{5}Z^{6}\text{, } g_{2}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}X^{6}\text{, } g_{3}\overset{\text{def}}{=}Z^{4}X^{5}Y^{6}Y^{7}Y^{8}Y^{9}X^{10}Z^{11}\text{, }g_{4}\overset{\text{def}}{=}X^{4}Y^{5}Z^{6}Z^{7}Z^{8}Z^{9}Y^{10} X^{11}\text{,}\nonumber\\ & \nonumber\\ & g_{5}\overset{\text{def}}{=}Z^{1}Y^{2}X^{3}Z^{7}Y^{8}X^{9}\text{, } g_{6}\overset{\text{def}}{=}X^{1}Z^{2}Y^{3}X^{7}Z^{8}Y^{9}\text{, } g_{7}\overset{\text{def}}{=}Z^{4}Y^{5}X^{6}X^{7}Y^{8}Z^{9}\text{, } g_{8}\overset{\text{def}}{=}X^{4}Z^{5}Y^{6}Z^{7}X^{8}Y^{9}\text{,}\nonumber\\ & \nonumber\\ & g_{9}\overset{\text{def}}{=}Z^{1}X^{2}Y^{3}Z^{7}Z^{8}Z^{9}X^{10} Y^{11}\text{, }g_{10}\overset{\text{def}}{=}Y^{1}Z^{2}X^{3}Y^{7}Y^{8} Y^{9}Z^{10}X^{11}\text{, }\bar{Z}\overset{\text{def}}{=}Z^{7}Z^{8}Z^{9} Z^{10}Z^{11}\text{.} \end{align} What is the graph that realizes such eleven-qubit code? The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ corresponding to $\mathcal{S}_{\text{CWS}}$ in Eq. (\ref{eleven}) can be formally written as, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. 
\right) =\left( \begin{array} [c]{ccccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\ 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \end{array} \left\vert \begin{array} [c]{ccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right. \right) \text{.} \end{equation} Since $X^{\text{T}}$ is not invertible, the algorithmic procedure introduced in the VdN-work cannot be applied. However, we notice that the stabilizer $\mathcal{S}_{\text{CWS}}$ in Eq. 
(\ref{eleven}) is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ defined by, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}U\mathcal{S} _{\text{CWS}}U^{\dagger}\text{,} \end{equation} where the unitary operator $U$ is defined as, \begin{equation} U\overset{\text{def}}{=}\left( H^{1}P^{1}H^{1}\right) \otimes H^{2}\otimes H^{3}\otimes H^{4}\otimes H^{5}\otimes H^{6}\otimes H^{7}\otimes H^{8}\otimes H^{9}\otimes H^{10}\otimes\left( H^{11}P^{11}H^{11}\right) \text{.} \label{eleven1} \end{equation} The operator $U$ can be regarded as the composition of three unitary operators $U\overset{\text{def}}{=}U_{3}\circ U_{2}\circ U_{1}$ with, \begin{align} & U_{1}\overset{\text{def}}{=}H^{1}\otimes I^{2}\otimes I^{3}\otimes I^{4}\otimes I^{5}\otimes I^{6}\otimes I^{7}\otimes I^{8}\otimes I^{9}\otimes I^{10}\otimes H^{11}\text{,}\nonumber\\ & \nonumber\\ & U_{2}\overset{\text{def}}{=}P^{1}\otimes I^{2}\otimes I^{3}\otimes I^{4}\otimes I^{5}\otimes I^{6}\otimes I^{7}\otimes I^{8}\otimes I^{9}\otimes I^{10}\otimes P^{11}\text{,}\nonumber\\ & \nonumber\\ & U_{3}\overset{\text{def}}{=}H^{1}\otimes H^{2}\otimes H^{3}\otimes H^{4}\otimes H^{5}\otimes H^{6}\otimes H^{7}\otimes H^{8}\otimes H^{9}\otimes H^{10}\otimes H^{11}\text{.} \end{align} Using Eqs. 
(\ref{eleven}) and (\ref{eleven1}), it follows that \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle g_{1}^{\prime}\text{, }g_{2}^{\prime}\text{, }g_{3}^{\prime}\text{, } g_{4}^{\prime}\text{, }g_{5}^{\prime}\text{, }g_{6}^{\prime}\text{, } g_{7}^{\prime}\text{, }g_{8}^{\prime}\text{, }g_{9}^{\prime}\text{, } g_{10}^{\prime}\text{, }\bar{Z}^{\prime}\right\rangle \text{,} \end{equation} with, \begin{align} & g_{1}^{\prime}\overset{\text{def}}{=}Y^{1}X^{2}X^{3}X^{4}X^{5}X^{6}\text{, }g_{2}^{\prime}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}Z^{4}Z^{5}Z^{6}\text{, }g_{3}^{\prime}\overset{\text{def}}{=}X^{4}Z^{5}Y^{6}Y^{7}Y^{8}Y^{9} Z^{10}Y^{11}\text{, }g_{4}^{\prime}\overset{\text{def}}{=}Z^{4}Y^{5}X^{6} X^{7}X^{8}X^{9}Y^{10}X^{11}\text{,}\nonumber\\ & \nonumber\\ & g_{5}^{\prime}\overset{\text{def}}{=}Y^{1}Y^{2}Z^{3}X^{7}Y^{8}Z^{9}\text{, }g_{6}^{\prime}\overset{\text{def}}{=}X^{1}X^{2}Y^{3}Z^{7}X^{8}Y^{9}\text{, }g_{7}^{\prime}\overset{\text{def}}{=}X^{4}Y^{5}Z^{6}Z^{7}Y^{8}X^{9}\text{, }g_{8}^{\prime}\overset{\text{def}}{=}Z^{4}X^{5}Y^{6}X^{7}Z^{8}Y^{9} \text{,}\nonumber\\ & \nonumber\\ & g_{9}^{\prime}\overset{\text{def}}{=}Y^{1}Z^{2}Y^{3}X^{7}X^{8}X^{9} Z^{10}Z^{11}\text{, }g_{10}^{\prime}\overset{\text{def}}{=}Z^{1}X^{2} Z^{3}Y^{7}Y^{8}Y^{9}X^{10}X^{11}\text{, }\bar{Z}^{\prime}\overset{\text{def} }{=}X^{7}X^{8}X^{9}X^{10}Y^{11}\text{.} \end{align} We observe that the codeword stabilizer matrix $\mathcal{H}_{\mathcal{S} _{\text{CWS}}^{\prime}}$ corresponding to $\mathcal{S}_{\text{CWS}}^{\prime}$ becomes, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. 
\right) =\left( \begin{array} [c]{ccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} \left\vert \begin{array} [c]{ccccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \end{array} \right. \right) \text{.} \end{equation}
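Up to global phases, $U$ acts on qubits $1$ and $11$ as $HPH$ (which swaps $Z\leftrightarrow Y$ and fixes $X$) and on qubits $2$ through $10$ as $H$ (which swaps $X\leftrightarrow Z$). The sketch below (plain Python; Pauli operators as strings, signs ignored) applies these single-qubit rules to the generators of Eq. (\ref{eleven}) and recovers the primed generators listed above.

```python
# Single-qubit conjugation rules, ignoring overall phases:
#   H   : X <-> Z, Y -> Y         (qubits 2..10)
#   HPH : Z <-> Y, X -> X         (qubits 1 and 11)
H_MAP = {"I": "I", "X": "Z", "Z": "X", "Y": "Y"}
HPH_MAP = {"I": "I", "X": "X", "Z": "Y", "Y": "Z"}

# Generators of S_CWS for the [[11,1,5]] code, one character per qubit.
gens = [
    "ZZZZZZIIIII",  # g1
    "XXXXXXIIIII",  # g2
    "IIIZXYYYYXZ",  # g3
    "IIIXYZZZZYX",  # g4
    "ZYXIIIZYXII",  # g5
    "XZYIIIXZYII",  # g6
    "IIIZYXXYZII",  # g7
    "IIIXZYZXYII",  # g8
    "ZXYIIIZZZXY",  # g9
    "YZXIIIYYYZX",  # g10
    "IIIIIIZZZZZ",  # Zbar
]

def conjugate(pauli):
    # HPH acts on qubits 1 and 11 (indices 0 and 10), H on qubits 2..10.
    return "".join((HPH_MAP if q in (0, 10) else H_MAP)[c]
                   for q, c in enumerate(pauli))

primed = [conjugate(g) for g in gens]
assert primed[0] == "YXXXXXIIIII"   # g1' = Y1 X2 X3 X4 X5 X6
assert primed[-1] == "IIIIIIXXXXY"  # Zbar' = X7 X8 X9 X10 Y11
```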
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the Gottesman [[11,1,5]]-code.}
\label{fig9}
\end{figure}
Omitting further details and applying the VdN-work, the $11\times11$ symmetric adjacency matrix $\Gamma$ for the eleven-qubit code becomes, \begin{equation} \Gamma\overset{\text{def}}{=}\left( \begin{array} [c]{ccccccccccc} 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\ 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\ 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \end{array} \right) \text{.} \end{equation} Employing the S-work, the $12\times12$ symmetric coincidence matrix $\Xi_{\left[ \left[ 11,1,5\right] \right] }$ can be written as, \begin{equation} \Xi_{\left[ \left[ 11,1,5\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{cccccccccccc} 0 & a_{1} & a_{2} & a_{3} & a_{4} & a_{5} & a_{6} & a_{7} & a_{8} & a_{9} & a_{10} & a_{11}\\ a_{1} & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ a_{2} & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\ a_{3} & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1\\ a_{4} & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\ a_{5} & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1\\ a_{6} & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ a_{7} & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0\\ a_{8} & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1\\ a_{9} & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\ a_{10} & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1\\ a_{11} & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \end{array} \right) \text{,} \end{equation} where the eleven matrix coefficients $a_{k}$ with $k\in\left\{ 1\text{,..., }11\right\} $ satisfy the following eleven constraints, \begin{align} a_{2}+a_{3}+a_{4}+a_{5}+a_{6} & =0\text{, }\nonumber\\ a_{1}+a_{3}+a_{7}+a_{8}+a_{9}+a_{11} & =0\text{,}\nonumber\\ \text{ 
}a_{1}+a_{2}+a_{5}+a_{6}+a_{7}+a_{9}+a_{10}+a_{11} & =0\text{,} \nonumber\\ a_{1}+a_{5}+a_{8}+a_{11} & =0\text{, }\nonumber\\ a_{1}+a_{3}+a_{4}+a_{6}+a_{8}+a_{9}+a_{10}+a_{11} & =0\text{, }\nonumber\\ a_{1}+a_{3}+a_{5}+a_{8}+a_{9} & =0\text{, }\nonumber\\ a_{2}+a_{3}+a_{8}+a_{10} & =0\text{,}\nonumber\\ a_{2}+a_{4}+a_{5}+a_{6}+a_{7}+a_{10}+a_{11} & =0\text{,}\nonumber\\ \text{ }a_{2}+a_{3}+a_{5}+a_{6}+a_{11} & =0\text{, }\nonumber\\ a_{3}+a_{5}+a_{7}+a_{8}+a_{11} & =0\text{, }\nonumber\\ a_{2}+a_{3}+a_{4}+a_{5}+a_{8}+a_{9}+a_{10} & =0\text{.} \label{sys} \end{align} It turns out that a suitable solution of the system of equations in (\ref{sys}) reads, \begin{equation} \mathbf{a}=\left( a_{1}\text{, }a_{2}\text{, }a_{3}\text{, }a_{4}\text{, }a_{5}\text{, }a_{6}\text{, }a_{7}\text{, }a_{8}\text{, }a_{9}\text{, } a_{10}\text{, }a_{11}\right) =\left( 1\text{, }0\text{, }1\text{, }1\text{, }1\text{, }1\text{, }0\text{, }1\text{, }0\text{, }0\text{, }1\right) \text{.} \end{equation} Finally, the coincidence matrix $\Xi_{\left[ \left[ 11,1,5\right] \right] }$ for a graph associated with the eleven-qubit code reads, \begin{equation} \Xi_{\left[ \left[ 11,1,5\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{cccccccccccc} 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1\\ 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1\\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1\\ 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1\\ 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0\\ 1 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1\\ 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \end{array} \right) \text{.} \end{equation} It is straightforward, though tedious, to check that all the $\binom{11}{4}=330$ four-error configurations satisfy the 
graph-theoretic error detection conditions in their strong version in agreement with the SW-work for nondegenerate codes. However, we also remark that checking $330$ graphical error detection conditions is always better than checking $529$ Knill-Laflamme error correction conditions, \begin{equation} 3^{0}\binom{11}{0}+3^{1}\binom{11}{1}+3^{2}\binom{11}{2}=529>330\text{.} \end{equation} In the next section, we shall consider a few graphical constructions of stabilizer codes characterized by multi-qubit encoding operators.
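Both the linear system (\ref{sys}) and the counting argument can be checked in a few lines. The sketch below (plain Python; the constraint index sets are transcribed from Eq. (\ref{sys})) verifies that the quoted vector $\mathbf{a}$ solves the system modulo $2$, and that there are indeed $\binom{11}{4}=330$ four-error configurations against the $529$ Knill-Laflamme conditions.

```python
from math import comb

# Index sets of the eleven GF(2) constraints in Eq. (sys), 1-indexed.
constraints = [
    (2, 3, 4, 5, 6),
    (1, 3, 7, 8, 9, 11),
    (1, 2, 5, 6, 7, 9, 10, 11),
    (1, 5, 8, 11),
    (1, 3, 4, 6, 8, 9, 10, 11),
    (1, 3, 5, 8, 9),
    (2, 3, 8, 10),
    (2, 4, 5, 6, 7, 10, 11),
    (2, 3, 5, 6, 11),
    (3, 5, 7, 8, 11),
    (2, 3, 4, 5, 8, 9, 10),
]

# Quoted solution vector (a_1, ..., a_11).
a = (1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1)
assert all(sum(a[i - 1] for i in c) % 2 == 0 for c in constraints)

# 330 four-error configurations versus 529 Knill-Laflamme conditions.
assert comb(11, 4) == 330
assert sum(3**w * comb(11, w) for w in range(3)) == 529
```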
\section{Multi-qubit encoding}
\subsection{The $\left[ \left[ 4,2,2\right] \right] $ stabilizer code}
In what follows, we shall consider the graphical construction of two non-equivalent quantum stabilizer codes encoding two logical qubits into four physical qubits.
\subsubsection{First case}
The $\left[ \left[ 4,2,2\right] \right] $ code is the simplest example of a class of $\left[ \left[ n-1,k+1,d-1\right] \right] $ codes that are derivable from pure (or nondegenerate) codes $\left[ \left[ n,k,d\right] \right] $ with $n\geq2$ (for more details, we refer to \cite{robert}) and is an explicit example of multi-qubit encoding. It is derived from the perfect five-qubit code and can detect a single-qubit error. The stabilizer generators of the code are defined by \cite{gaitan}, \begin{equation} g_{1}\overset{\text{def}}{=}X^{1}Z^{2}Z^{3}X^{4}\text{ and, }g_{2} \overset{\text{def}}{=}Y^{1}X^{2}X^{3}Y^{4}\text{.} \label{please} \end{equation} Each encoded qubit $i$ with $i\in\left\{ 1\text{, }2\right\} $ has its own set of logical operations $\bar{X}_{i}$ and $\bar{Z}_{i}$. A convenient choice is, \begin{equation} \bar{X}_{1}\overset{\text{def}}{=}X^{1}Y^{3}Y^{4}\text{, }\bar{X}_{2} \overset{\text{def}}{=}X^{1}X^{3}Z^{4}\text{, }\bar{Z}_{1}\overset{\text{def} }{=}Y^{1}Z^{2}Y^{3}\text{ and, }\bar{Z}_{2}\overset{\text{def}}{=}X^{2} Z^{3}Z^{4}\text{.} \end{equation} The codeword stabilizer $\mathcal{S}_{\text{CWS}}$ associated with the CWS code that realizes this stabilizer code reads, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }\bar{Z}_{1}\text{, }\bar{Z}_{2}\text{ }\right\rangle =\left\langle X^{1}Z^{2}Z^{3}X^{4}\text{, }Y^{1}X^{2}X^{3}Y^{4}\text{, } Y^{1}Z^{2}Y^{3}\text{, }X^{2}Z^{3}Z^{4}\text{ }\right\rangle \text{.} \label{SBcase1} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ associated with $\mathcal{S}_{\text{CWS}}$ reads, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) =\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 0\\ 0 & 0 & 1 & 1 \end{array} \left\vert \begin{array} [c]{cccc} 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 \end{array} \right. 
\right) \text{.} \end{equation} We observe $\det X\neq0$. Thus, using the VdN-work, the $4\times4$ adjacency matrix $\Gamma$ becomes, \begin{equation} \Gamma\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1\\ 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 \end{array} \right) \text{.} \label{422g} \end{equation} We remark that the graph with symmetric adjacency matrix $\Gamma$ in Eq. (\ref{422g}) is in the local unitary equivalence class of the square graph (see Figure $7$ in \cite{hein}). Therefore, an alternative graph (with only output vertices) for our stabilizer code can be characterized by the alternative $4\times4$ symmetric adjacency matrix $\Gamma^{\prime}$, \begin{equation} \Gamma^{\prime}\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0 \end{array} \right) \text{.} \end{equation} Finally, applying the S-work and considering $\Gamma^{\prime}$, the $6\times6$ symmetric coincidence matrix $\Xi_{\left[ \left[ 4,2,2\right] \right] }^{\text{Beigi}}$ characterizing the graph with both input and output vertices is given by, \begin{equation} \Xi_{\left[ \left[ 4,2,2\right] \right] }^{\text{Beigi}}\overset {\text{def}}{=}\left( \begin{array} [c]{cccccc} 0 & 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 1 & 0 & 1\\ 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0 & 1 & 0 \end{array} \right) \text{.} \label{bei} \end{equation} Using the SW-work, it is simple to verify that any graphical single-error configuration $\left\{ 0\text{, }0^{\prime}\text{, }e\right\} $ with $e\in\left\{ 1\text{, }2\text{, }3\text{, }4\right\} $ is detectable. Thus, the code detects any single-qubit error. As a final remark, we emphasize that the graph associated with $\Xi_{\left[ \left[ 4,2,2\right] \right] }^{\text{Beigi}}$ is identical to the one that appeared in \cite{beigi}.
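A minimal sanity check on Eq. (\ref{SBcase1}): the four generators of an abelian codeword stabilizer must commute pairwise. The sketch below (plain Python; Pauli operators as strings, signs ignored) counts anticommuting tensor factors for every pair, which must be even.

```python
from itertools import combinations

# Generators of S_CWS in Eq. (SBcase1): g1, g2, Zbar1, Zbar2.
gens = {"g1": "XZZX", "g2": "YXXY", "Zbar1": "YZYI", "Zbar2": "IXZZ"}

def anticommute(p, q):
    """Two single-qubit Pauli operators anticommute iff they are
    distinct and neither is the identity."""
    return p != q and p != "I" and q != "I"

def commute(P, Q):
    """Two Pauli strings commute iff the number of anticommuting
    tensor factors is even."""
    return sum(anticommute(p, q) for p, q in zip(P, Q)) % 2 == 0

for (n1, P), (n2, Q) in combinations(gens.items(), 2):
    assert commute(P, Q), (n1, n2)
```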
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the Beigi et al. [[4,2,2]]-code.}
\label{fig10}
\end{figure}
\subsubsection{Second case}
Let us consider a different $\left[ \left[ 4,2,2\right] \right] $ stabilizer code with stabilizer generators defined by \cite{gaitan}, \begin{subequations} \begin{equation} g_{1}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}\text{ and, }g_{2} \overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{4}\text{.} \tag{B8} \end{equation} Each encoded qubit $i$ with $i\in\left\{ 1\text{, }2\right\} $ has its own pair of logical operations $\bar{X}_{i}$ and $\bar{Z}_{i}$. A convenient choice is, \end{subequations} \begin{equation} \bar{X}_{1}\overset{\text{def}}{=}X^{1}X^{2}\text{, }\bar{X}_{2} \overset{\text{def}}{=}X^{1}X^{3}\text{, }\bar{Z}_{1}\overset{\text{def}} {=}Z^{2}Z^{4}\text{ and, }\bar{Z}_{2}\overset{\text{def}}{=}Z^{3}Z^{4}\text{.} \end{equation} The codeword stabilizer $\mathcal{S}_{\text{CWS}}$ associated with the CWS code that realizes this stabilizer code reads, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }\bar{Z}_{1}\text{, }\bar{Z}_{2}\text{ }\right\rangle =\left\langle X^{1}X^{2}X^{3}X^{4}\text{, }Z^{1}Z^{2}Z^{3}Z^{4}\text{, } Z^{2}Z^{4}\text{, }Z^{3}Z^{4}\text{ }\right\rangle \text{.} \label{SBcase2} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}}$ associated with $\mathcal{S}_{\text{CWS}}$ reads, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}}\overset{\text{def}}{=}\left( Z\left\vert X\right. \right) =\left( \begin{array} [c]{cccc} 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1 \end{array} \left\vert \begin{array} [c]{cccc} 1 & 1 & 1 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array} \right. \right) \text{.} \end{equation} We observe $\det X=0$ and the VdN-work cannot be applied. However, we also notice that $\mathcal{S}_{\text{CWS}}$ is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}$ with $\mathcal{S}_{\text{CWS}}^{\prime }=U\mathcal{S}_{\text{CWS}}U^{\dagger}$ and $U\overset{\text{def}}{=} H^{2}H^{3}H^{4}$. 
Thus, $\mathcal{S}_{\text{CWS}}^{\prime}$ becomes, \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle g_{1}^{\prime}\text{, }g_{2}^{\prime}\text{, }\bar{Z}_{1}^{\prime}\text{, }\bar{Z}_{2}^{\prime}\text{ }\right\rangle =\left\langle X^{1}Z^{2}Z^{3} Z^{4}\text{, }Z^{1}X^{2}X^{3}X^{4}\text{, }X^{2}X^{4}\text{, }X^{3}X^{4}\text{ }\right\rangle \text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime }}$ associated with $\mathcal{S}_{\text{CWS}}^{\prime}$ is given by, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. \right) =\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array} \left\vert \begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 1\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1 \end{array} \right. \right) \text{.} \end{equation} We now have $\det X^{\prime}\neq0$. Therefore, using the VdN-work, the $4\times4$ adjacency matrix $\Gamma$ becomes, \begin{equation} \Gamma\overset{\text{def}}{=}\left( \begin{array} [c]{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 \end{array} \right) \text{.} \label{422gg} \end{equation} Finally, applying the S-work and considering $\Gamma$ in Eq. 
(\ref{422gg}), the $6\times6$ symmetric coincidence matrix $\Xi_{\left[ \left[ 4,2,2\right] \right] }^{\text{Schlingemann}}$ characterizing the graph with both input and output vertices reads, \begin{equation} \Xi_{\left[ \left[ 4,2,2\right] \right] }^{\text{Schlingemann}} \overset{\text{def}}{=}\left( \begin{array} [c]{cccccc} 0 & 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 \end{array} \right) \text{.} \end{equation} Applying the SW-work, it is simple to verify that any graphical single-error configuration $\left\{ 0\text{, }0^{\prime}\text{, }e\right\} $ with $e\in\left\{ 1\text{, }2\text{, }3\text{, }4\right\} $ is detectable. Therefore, the code detects any single-qubit error. As a final remark, we emphasize that the graph associated with $\Xi_{\left[ \left[ 4,2,2\right] \right] }^{\text{Schlingemann}}$ is identical to the one that appears in \cite{dirk}.
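The local Clifford equivalence invoked for this second code can be spot-checked symbolically: conjugation by $U=H^{2}H^{3}H^{4}$ swaps $X\leftrightarrow Z$ on qubits $2$, $3$ and $4$. The sketch below encodes the generators as Pauli strings with 0-indexed positions and ignores overall phases (e.g. $HYH=-Y$), which suffices here since no $Y$ appears:

```python
# Sketch: conjugation by U = H^2 H^3 H^4 acts on Pauli strings by swapping
# X <-> Z on qubits 2, 3, 4 (string positions 1, 2, 3); phases are ignored.
def hadamard_conj(pauli, qubits):
    swap = {'X': 'Z', 'Z': 'X', 'Y': 'Y', 'I': 'I'}
    return ''.join(swap[p] if i in qubits else p for i, p in enumerate(pauli))

generators = ['XXXX', 'ZZZZ', 'IZIZ', 'IIZZ']       # g1, g2, Zbar1, Zbar2
conjugated = [hadamard_conj(g, {1, 2, 3}) for g in generators]
```

The result matches the generators of $\mathcal{S}_{\text{CWS}}^{\prime}$ listed above.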
We stress that the stabilizer group generated by the generators in Eq. (\ref{please}) for the first code can be obtained from the stabilizer group generated by the generators in Eq. (B8) for the second code by applying a local unitary transformation $U\overset{\text{def}}{=}Q^{1}H^{2}H^{3}Q^{4}$ where $Q\overset{\text{def}}{=}PHP$. However, the codeword stabilizer in Eq. (\ref{SBcase1}) cannot be obtained from the codeword stabilizer in Eq. (\ref{SBcase2}) via a local unitary transformation. This feature is consistent with the fact that the graphs associated with the adjacency matrices in Eqs. (\ref{422g}) and (\ref{422gg}) are inequivalent. In other words, these two matrices characterize graphs that belong to different orbits \cite{hein}.
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the Schlingemann [[4,2,2]]-code.}
\label{fig11}
\end{figure}
\subsection{The $\left[ \left[ 8,3,3\right] \right] $ stabilizer code}
The $\left[ \left[ 8,3,3\right] \right] $ code is a special case of a class of $\left[ \left[ 2^{j},2^{j}-j-2,3\right] \right] $ codes \cite{danielpra}. It encodes three logical qubits into eight physical qubits and corrects all single-qubit errors. The five stabilizer generators are given by \cite{gaitan}, \begin{align} & g_{1}\overset{\text{def}}{=}X^{1}X^{2}X^{3}X^{4}X^{5}X^{6}X^{7}X^{8}\text{, }g_{2}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}Z^{4}Z^{5}Z^{6}Z^{7}Z^{8}\text{, }g_{3}\overset{\text{def}}{=}X^{2}X^{4}Y^{5}Z^{6}Y^{7}Z^{8}\text{, }\nonumber\\ & \nonumber\\ & g_{4}\overset{\text{def}}{=}X^{2}Z^{3}Y^{4}X^{6}Z^{7}Y^{8}\text{, } g_{5}\overset{\text{def}}{=}Y^{2}X^{3}Z^{4}X^{5}Z^{6}Y^{8}\text{,} \end{align} and a suitable choice for the logical operations $\bar{X}_{i}$ and $\bar {Z}_{i}$ with $i\in\left\{ 1\text{, }2\text{, }3\right\} $ reads, \begin{equation} \bar{X}_{1}\overset{\text{def}}{=}X^{1}X^{2}Z^{6}Z^{8}\text{, }\bar{X} _{2}\overset{\text{def}}{=}X^{1}X^{3}Z^{4}Z^{7}\text{, }\bar{X}_{3} \overset{\text{def}}{=}X^{1}Z^{4}X^{5}Z^{6}\text{, }\bar{Z}_{1}\overset {\text{def}}{=}Z^{2}Z^{4}Z^{6}Z^{8}\text{, }\bar{Z}_{2}\overset{\text{def}} {=}Z^{3}Z^{4}Z^{7}Z^{8}\text{, }\bar{Z}_{3}\overset{\text{def}}{=}Z^{5} Z^{6}Z^{7}Z^{8}\text{.} \end{equation} The codeword stabilizer $\mathcal{S}_{\text{CWS}}$ of the CWS code that realizes this stabilizer code is given by, \begin{equation} \mathcal{S}_{\text{CWS}}\overset{\text{def}}{=}\left\langle g_{1}\text{, }g_{2}\text{, }g_{3}\text{, }g_{4}\text{, }g_{5}\text{, }\bar{Z}_{1}\text{, }\bar{Z}_{2}\text{, }\bar{Z}_{3}\right\rangle \text{.} \end{equation} We observe that $\mathcal{S}_{\text{CWS}}$ is locally Clifford equivalent to $\mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}U\mathcal{S} _{\text{CWS}}U^{\dagger}$ with $U\overset{\text{def}}{=}H^{1}H^{2}H^{3}H^{5}$. 
Therefore, $\mathcal{S}_{\text{CWS}}^{\prime}$ reads\textbf{,} \begin{equation} \mathcal{S}_{\text{CWS}}^{\prime}\overset{\text{def}}{=}\left\langle g_{1}^{\prime}\text{, }g_{2}^{\prime}\text{, }g_{3}^{\prime}\text{, } g_{4}^{\prime}\text{, }g_{5}^{\prime}\text{, }\bar{Z}_{1}^{\prime}\text{, }\bar{Z}_{2}^{\prime}\text{, }\bar{Z}_{3}^{\prime}\right\rangle \text{,} \end{equation} with, \begin{align} & g_{1}^{\prime}\overset{\text{def}}{=}Z^{1}Z^{2}Z^{3}X^{4}Z^{5}X^{6} X^{7}X^{8}\text{, }g_{2}^{\prime}\overset{\text{def}}{=}X^{1}X^{2}X^{3} Z^{4}X^{5}Z^{6}Z^{7}Z^{8}\text{, }g_{3}^{\prime}\overset{\text{def}}{=} Z^{2}X^{4}Y^{5}Z^{6}Y^{7}Z^{8}\text{, }\nonumber\\ & \nonumber\\ & g_{4}^{\prime}\overset{\text{def}}{=}Z^{2}X^{3}Y^{4}X^{6}Z^{7}Y^{8}\text{, }g_{5}^{\prime}\overset{\text{def}}{=}Y^{2}Z^{3}Z^{4}Z^{5}Z^{6}Y^{8}\text{,} \end{align} and, \begin{equation} \text{ }\bar{Z}_{1}^{\prime}\overset{\text{def}}{=}X^{2}Z^{4}Z^{6}Z^{8}\text{, }\bar{Z}_{2}^{\prime}\overset{\text{def}}{=}X^{3}Z^{4}Z^{7}Z^{8}\text{, } \bar{Z}_{3}^{\prime}\overset{\text{def}}{=}X^{5}Z^{6}Z^{7}Z^{8}\text{.} \end{equation} The codeword stabilizer matrix $\mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime }}$ associated with $\mathcal{S}_{\text{CWS}}^{\prime}$ is given by, \begin{equation} \mathcal{H}_{\mathcal{S}_{\text{CWS}}^{\prime}}\overset{\text{def}}{=}\left( Z^{\prime}\left\vert X^{\prime}\right. 
\right) =\left( \begin{array} [c]{cccccccc} 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1\\ 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1\\ 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{array} \left\vert \begin{array} [c]{cccccccc} 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{array} \right. \right) \text{.} \end{equation}
\begin{figure}
\caption{Graph for a quantum code that is locally Clifford equivalent to the Gottesman [[8,3,3]]-code.}
\label{fig12}
\end{figure}
Since $\det X^{\prime}\neq0$, we can use the VdN-work and the $8\times8$ adjacency matrix $\Gamma$ becomes, \begin{equation} \Gamma\overset{\text{def}}{=}Z^{\prime\text{T}}\cdot\left( X^{\prime\text{T} }\right) ^{-1}=\left( \begin{array} [c]{cccccccc} 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \end{array} \right) \text{.} \label{ccube} \end{equation} We observe that the graph associated with the adjacency matrix $\Gamma$ (with $\det\Gamma\neq0$) in Eq. (\ref{ccube})\ is the cube. Acting with a local complementation with respect to the vertex $1$, $\Gamma$ becomes $\Gamma^{\prime}$ (with $\det\Gamma^{\prime}=0$). \begin{equation} \Gamma^{\prime}\overset{\text{def}}{=}\left( \begin{array} [c]{cccccccc} 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\ 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \end{array} \right) \label{833a} \end{equation} Finally, applying the S-work and considering $\Gamma^{\prime}$ in Eq. 
(\ref{833a}), the $11\times11$ symmetric coincidence matrix $\Xi_{\left[ \left[ 8,3,3\right] \right] }$ associated with the graph with both input and output vertices becomes, \begin{equation} \Xi_{\left[ \left[ 8,3,3\right] \right] }\overset{\text{def}}{=}\left( \begin{array} [c]{ccccccccccc} 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\ 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1\\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \end{array} \right) \text{.} \end{equation} Using the SW-work, it can finally be verified that each of the $\binom{8}{2}$ graphical two-error configurations $\left\{ 0\text{, }0^{\prime}\text{, }0^{\prime\prime}\text{, }e_{1}\text{, }e_{2}\right\} $ with $e_{1,2} \in\left\{ 1\text{,..., }8\right\} $ is detectable. Thus, the code corrects any single-qubit error.
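Because $\Gamma$ in Eq. (\ref{ccube}) is symmetric, the relation $\Gamma=Z^{\prime\text{T}}\cdot\left( X^{\prime\text{T}}\right)^{-1}$ is equivalent to $X^{\prime}\Gamma=Z^{\prime}$ over GF(2), which can be checked without computing an inverse. The sketch below (illustrative only; the matrices are transcribed from the displays above) performs this check:

```python
# Cross-check of Eq. (ccube): verify X' * Gamma = Z' (mod 2), which, with
# Gamma symmetric, is equivalent to Gamma = Z'^T (X'^T)^{-1} over GF(2).
Xp = [[0,0,0,1,0,1,1,1], [1,1,1,0,1,0,0,0], [0,0,0,1,1,0,1,0],
      [0,0,1,1,0,1,0,1], [0,1,0,0,0,0,0,1], [0,1,0,0,0,0,0,0],
      [0,0,1,0,0,0,0,0], [0,0,0,0,1,0,0,0]]
Zp = [[1,1,1,0,1,0,0,0], [0,0,0,1,0,1,1,1], [0,1,0,0,1,1,1,1],
      [0,1,0,1,0,0,1,1], [0,1,1,1,1,1,0,1], [0,0,0,1,0,1,0,1],
      [0,0,0,1,0,0,1,1], [0,0,0,0,0,1,1,1]]
Gamma = [[0,0,0,1,0,1,1,0], [0,0,0,1,0,1,0,1], [0,0,0,1,0,0,1,1],
         [1,1,1,0,0,0,0,0], [0,0,0,0,0,1,1,1], [1,1,0,0,1,0,0,0],
         [1,0,1,0,1,0,0,0], [0,1,1,0,1,0,0,0]]

def gf2_mul(a, b):
    bt = list(zip(*b))
    return [[sum(x & y for x, y in zip(row, col)) % 2 for col in bt] for row in a]

product = gf2_mul(Xp, Gamma)
degrees = [sum(row) for row in Gamma]
```

The degree check confirms that the graph of $\Gamma$ is $3$-regular, consistent with the cube.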
\end{document}
Michael Barr (mathematician)
Michael Barr (born January 22, 1937) is an American mathematician who is the Peter Redpath Emeritus Professor of Pure Mathematics at McGill University.[1]
Michael Barr
Born (1937-01-22) January 22, 1937
Philadelphia, Pennsylvania, U.S.
Academic background
EducationUniversity of Pennsylvania (BS, PhD)
Academic work
DisciplineMathematics
Sub-disciplineHomological algebra
Category theory
Theoretical computer science
InstitutionsColumbia University
University of Illinois Urbana-Champaign
McGill University
Early life and education
He was born in Philadelphia, Pennsylvania, and graduated from the 202nd class of Central High School in June 1954. He graduated from the University of Pennsylvania in February 1959 and received a PhD from the same school in June 1962.
Career
Barr taught at Columbia University and the University of Illinois before coming to McGill in 1968.
His earlier work was in homological algebra, but his principal research area for a number of years has been category theory. He is well known to theoretical computer scientists for his book Category Theory for Computing Science with Charles Wells, as well as for the development of *-autonomous categories and Chu spaces which have found various applications in computer science. His monograph *-autonomous categories, and his books Toposes, Triples, and Theories,[2][3] also coauthored with Wells, and Acyclic Models, are aimed at more specialized audiences.
He is on the editorial boards of Mathematical Structures in Computer Science and the electronic journal Homology, Homotopy and Applications, and is editor of the electronic journal Theory and Applications of Categories.
References
1. "Mathematics and Statistics". McGill University. Retrieved 11 August 2011.
2. Pitts, Andrew (March 1991), "Review of Toposes, Triples and Theories by Barr, M., & Wells, C.", Journal of Symbolic Logic, 56 (1): 340–341, doi:10.2307/2274934
3. Rota, Gian-Carlo (August 1986), "Toposes, triples and theories: M. Barr and C. Wells, Springer, 1985, 345 pp.", Advances in Mathematics, 61 (2): 184, doi:10.1016/0001-8708(86)90076-9
External links
• Toposes, Triples and Theories, updated edition of text published in 1983.
• Category Theory for Computing Science updated edition of text published in 1999.
• http://www.tac.mta.ca/tac (Theory and Applications of Categories)
• https://web.archive.org/web/20080704125156/http://www.math.rutgers.edu/hha/geninfo.html (Homology, Homotopy and Applications)
• Michael Barr at the Mathematics Genealogy Project
Complete variety
In mathematics, in particular in algebraic geometry, a complete algebraic variety is an algebraic variety X, such that for any variety Y the projection morphism
$X\times Y\to Y$
is a closed map (i.e. maps closed sets onto closed sets).[note 1] This can be seen as an analogue of compactness in algebraic geometry: a topological space X is compact if and only if the above projection map is closed with respect to topological products.
The image of a complete variety is closed and is a complete variety. A closed subvariety of a complete variety is complete.
A complex variety is complete if and only if it is compact as a complex-analytic variety.
The most common example of a complete variety is a projective variety, but there do exist complete non-projective varieties in dimensions 2 and higher. While any complete nonsingular surface is projective,[1] there exist nonsingular complete varieties in dimension 3 and higher which are not projective.[2] The first examples of non-projective complete varieties were given by Masayoshi Nagata[2] and Heisuke Hironaka.[3] An affine space of positive dimension is not complete.
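A standard illustration of the last point (added here for concreteness): the affine line fails the defining criterion because a Zariski-closed hyperbola in the product projects onto the punctured line, which is not closed.

```latex
% Take X = Y = A^1.  The set Z below is closed in X x Y, yet its image
% under the projection pr_2 : X x Y -> Y is A^1 \ {0}, which is not closed.
Z=\{(x,y)\in \mathbb{A}^1\times \mathbb{A}^1 : xy=1\},
\qquad \mathrm{pr}_2(Z)=\mathbb{A}^1\setminus\{0\}.
```

The missing limit point 0 corresponds to the point at infinity that the hyperbola escapes to, which a projective (hence complete) closure would capture.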
The morphism taking a complete variety to a point is a proper morphism, in the sense of scheme theory. An intuitive justification of "complete", in the sense of "no missing points", can be given on the basis of the valuative criterion of properness, which goes back to Claude Chevalley.
See also
• Chow's lemma
• Theorem of the cube
• Fano variety
Notes
1. Here the product variety X × Y does not carry the product topology, in general; the Zariski topology on it will have more closed sets (except in very simple cases).
References
1. Zariski, Oscar (1958). "Introduction to the Problem of Minimal Models in the Theory of Algebraic Surfaces". American Journal of Mathematics. 80: 146–184. doi:10.2307/2372827. JSTOR 2372827.
2. Nagata, Masayoshi (1958). "Existence theorems for nonprojective complete algebraic varieties". Illinois J. Math. 2: 490–498. doi:10.1215/ijm/1255454111.
3. Hironaka, Heisuke (1960). On the theory of birational blowing-up (thesis). Harvard University.
Sources
• Section II.4 of Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
• Chapter 7 of Milne, James S. (2009), Algebraic geometry, v. 5.20, retrieved 2010-08-04
• Section I.9 of Mumford, David (1999), The red book of varieties and schemes, Lecture Notes in Mathematics, vol. 1358 (Second, expanded ed.), Springer-Verlag, doi:10.1007/b62130, ISBN 978-3-540-63293-1
Quiz-5 / TUT0202 Quiz5
Verify that the given functions $y_1$ and $y_2$ satisfy the corresponding homogeneous equation; then find a particular solution of the given nonhomogeneous equation.
t^{2} y^{\prime \prime}-t(t+2) y^{\prime}+(t+2) y=2 t^{3}, t>0 ; y_{1}(t)=t, y_{2}(t)=t e^{t}
y_{1}(t)=t \quad y_{1}^{\prime}(t)=1 \quad y_{1}^{\prime \prime}(t)=0
-t(t+2)+(t+2) t=0
y_{2}(t)=t e^{t} \quad y_{2}^{\prime}(t)=e^{t}+t e^{t} \quad y_{2}^{\prime \prime}(t)=2 e^{t}+t e^{t}
t^{2}\left(2 e^{t}+t e^{t}\right)-t(t+2)\left(e^{t}+t e^{t}\right)+(t+2) t e^{t}=0
Hence, $y_{1}$ and $y_{2}$ satisfy the homogeneous equation
y^{\prime \prime}-\frac{t+2}{t} y^{\prime}+\frac{t+2}{t^{2}} y=2 t
\begin{aligned}
w&=\left|\begin{array}{cc}{t} & {t e^{t}} \\ {1} & {e^{t}+\operatorname{te}^{t}}\end{array}\right|=t^{2} e^{t}\\
w_{1}&=\left|\begin{array}{cc}{0} & {t e^{t}} \\ {1} & {e^{t}+t e^{t}}\end{array}\right|=-t e^{t}\\
w_{2}&=\left|\begin{array}{cc}{t} & {0} \\ {1} & {1}\end{array}\right|=t
\end{aligned}
\begin{aligned} y_{p}(t) &=t \int \frac{\left(-t e^{t}\right)(2 t)}{t^{2} e^{t}} d t+t e^{t} \int \frac{(t)(2 t)}{t^{2} e^{t}} d t \\ &=t \int-2 d t+2 t e^{t} \int \frac{1}{e^{t}} d t \\ &=t(-2 t)+2 t e^{t}\left(-e^{-t}\right) \\ &=-2 t^{2}-2 t \end{aligned}
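As a sanity check (added for illustration; the derivatives of $y_p$ are computed by hand), substituting $y_{p}(t)=-2t^{2}-2t$ into the left-hand side of the original equation should return the forcing term $2t^{3}$:

```python
# Substitute y_p = -2t^2 - 2t (so y_p' = -4t - 2 and y_p'' = -4, by hand)
# into t^2 y'' - t(t+2) y' + (t+2) y and compare with 2t^3.
def residual(t):
    yp, dyp, ddyp = -2 * t * t - 2 * t, -4 * t - 2, -4
    return t * t * ddyp - t * (t + 2) * dyp + (t + 2) * yp - 2 * t ** 3

checks = [residual(t) for t in (1, 2, 3, 5, 10)]
```

Note that since $-2t$ is itself a homogeneous solution (a multiple of $y_{1}=t$), the simpler particular solution $y_{p}=-2t^{2}$ works as well.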
Find a differential equation whose general solution is $y=c_{1} e^{-t / 2}+c_{2} e^{-2 t}$.
The exponents $r_{1}=-\frac{1}{2}$ and $r_{2}=-2$ must be the roots of the characteristic equation of the required differential equation,
so the characteristic equation should look like
\begin{aligned}\left(r+\frac{1}{2}\right)(r+2) &=0 \\(2 r+1)(r+2) &=0 \\ 2 r^{2}+5 r+2 &=0 \end{aligned}
which corresponds to the DE
2 y^{\prime \prime}+5 y^{\prime}+2 y=0
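A quick check (illustrative) that the characteristic polynomial really vanishes at $r_{1}=-\frac{1}{2}$ and $r_{2}=-2$, using exact rational arithmetic:

```python
# The characteristic polynomial 2r^2 + 5r + 2 = (2r + 1)(r + 2) must vanish
# at r = -1/2 and r = -2; Fraction avoids floating-point rounding.
from fractions import Fraction

def char_poly(r):
    return 2 * r * r + 5 * r + 2

values = [char_poly(Fraction(-1, 2)), char_poly(Fraction(-2))]
```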
\begin{document}
\begin{frontmatter}\vspace*{0in}
\title{Asymptotic Stability of the Landau--Lifshitz Equation}
\author{Amenda Chow}\address{Department of Mathematics and Statistics, York University, Canada\\ [email protected]}
\begin{keyword} Asymptotic stability, Equilibrium Points, Hysteresis, Lyapunov function, Nonlinear control systems, Partial differential equations \end{keyword}
\begin{abstract} The Landau--Lifshitz equation describes the behaviour of magnetic domains in ferromagnetic structures. Recently such structures have been found to be favourable for storing digital data. Stability of magnetic domains is important for this. Consequently, asymptotic stability of the equilibrium points in the Landau--Lifshitz equation are established. A suitable Lyapunov function is presented. \end{abstract}
\end{frontmatter}
\section{Introduction} The Landau--Lifshitz equation is a coupled set of nonlinear partial differential equations. One of its first appearances is in a 1935 paper \cite{Landau1935}, in which this equation describes the behaviour of magnetic domains within a ferromagnetic structure. In recent applications, structures such as ferromagnetic nanowires have appeared in electronic devices used for storing digital information \cite{Parkin2008}. In particular, data is encoded as a specific pattern of magnetic domains within a ferromagnetic nanowire. Consequently the Landau--Lifshitz equation continues to be widely explored, and its stability is of growing interest \cite{CarbouGilles2010,CarbouLabbe2006,Carbou2011,Chow2016,Gou2011,GuoBook2008,Guo2004,Jizzini2011,Labbe2012,Mayergoyz2010,Zhai1998}. Stability of equilibrium points is also related to hysteresis \cite{Chow2014_ACC,Morris2011}, and investigating stability lends insight into the hysteretic behaviour that appears in the Landau--Lifshitz equation \cite{CarbouEfendiev2009,Chow2014_ACC,Wiele2007,Visintin1997}.
Stability results are often based on linearization \cite{CarbouLabbe2006,Gou2011,Labbe2012}; however, in the discussion that follows, asymptotic stability of the Landau--Lifshitz equation is established using Lyapunov theory. This approach is preferred because linearization leads only to an approximation.
The difficulty with Lyapunov theory is often in the construction of an appropriate Lyapunov function. Working in infinite-dimensions makes this more difficult; however, Lyapunov functions have been found for the Landau--Lifshitz equation \cite{Chow2016,GuoBook2008}. In both these works, a Lyapunov function establishes that the equilibrium points of the Landau-Lifshitz equation are stable. The work in \cite{Chow2016} is extended here and asymptotic stability is shown. In particular, a nonlinear control is shown to steer the system to an asymptotically stable equilibrium point. Control of the Landau--Lifshitz equation is crucial as this means the behaviour of the domain walls, which contains the encoded data, can be fully determined \cite{Parkin2008}.
The control objective is to steer the system dynamics to any arbitrary equilibrium point, which requires asymptotic stability. This is presented in Theorem~\ref{thmasymstab}, which is the main result and can be found in Section~\ref{secsymstab}. A summary and future avenues are in the last section. To begin, a brief mathematical review of the Landau--Lifshitz equation is discussed next.
\section{Landau-Lifshitz Equation}
Consider a one dimensional ferromagnetic nanowire of length $L>0$. Let $\mathbf m(x,t)=(m_1(x,t),m_2(x,t),m_3(x,t))$ be the magnetization of the ferromagnetic nanowire for some position $x \in [0,L]$ and time $t \geq 0 $ with initial magnetization $\mathbf m(x,0)=\mathbf m_0(x)$. These dynamics are determined by \begin{align} &\frac{\partial \mathbf m}{\partial t} = \mathbf m \times \left( \mathbf m_{xx}+\mathbf u\right)-\nu\mathbf m\times\left(\mathbf m\times \left(\mathbf m_{xx}+\mathbf u\right)\right) \label{eqcontrolledLLphysical}\\ & \mathbf m_x(0,t)= \mathbf m_x(L,t)=\mathbf 0, \label{eqboundarycondition}\\
& || \mathbf m(x,t)||_{2} =1.\label{eqconstraint} \end{align}
where $\times$ denotes the cross product and $||\cdot||_{2}$ is the Euclidean norm. Equation~(\ref{eqcontrolledLLphysical}) is the one--dimensional controlled Landau--Lifshitz equation \cite{Bertotti2009,Chow2014_ACC,Chow2016,Gilbert2004,GuoBook2008}. It satisfies the constraint in (\ref{eqconstraint}), which means the magnitude of the magnetization is uniform at every point of the ferromagnet. The exchange energy is $\mathbf m_{xx}$. Mathematically, $\mathbf m_{xx}$ denotes magnetization differentiated with respect to $x$ twice. The parameter $\nu \geq 0$ is the damping parameter, which depends on the type of ferromagnet. The applied magnetic field, denoted $\mathbf u(t)$, acts as the control, and hence, when $\mathbf u(t)=\mathbf 0$, equation~(\ref{eqcontrolledLLphysical}) can be thought of as the uncontrolled Landau--Lifshitz equation. Neumann boundary conditions (\ref{eqboundarycondition}) are used here.
Existence and uniqueness results can be found in \cite{Alouges1992,Carbou2001,Chow2013_thesis,Chow2016} and references therein. Solutions to (\ref{eqcontrolledLLphysical}) are defined on $\mathcal L_2^3 = \mathcal L_2 ([0,L]; \mathbb R^3)$ with the usual inner product and norm, with domain
D=\{ & \mathbf m\in \mathcal L_2^3 : \mathbf m_x \in \mathcal L_2^3, \, \\ & \mathbf m_{xx} \in \mathcal L_2^3, \mathbf m_x(0)=\mathbf m_x(L) = \mathbf 0 \}. \end{align*}
The notation $\|\cdot\|_{\mathcal L_2^3}$ is used for the norm.
\section{Asymptotic Stability}\label{secsymstab} For $\mathbf u(t)=\mathbf 0$, the set of equilibrium points is \begin{align}\label{equilibriumset} E=&\{\mathbf a=(a_1,a_2,a_3) : a_1,a_2,a_3\nonumber\\
& \mbox{ constants and }||\mathbf a||_2=1 \}, \end{align} which satisfies the boundary conditions in (\ref{eqboundarycondition}) \cite[Theorem~6.1.1]{GuoBook2008}. It is clear $E$ contains an infinite number of equilibria. A particular $\mathbf a \in E$ is stable but not asymptotically stable \cite[Proposition~6.2.1]{GuoBook2008}; however, the set $E$ is asymptotically stable in the $\mathcal L_2^3$--norm \cite{Chow2013_thesis,Chow2016}.
Let $\mathbf r$ be an arbitrary equilibrium point of $E$ with $r_1\neq 0$, so that $||\mathbf r||_2=1$, that is, (\ref{eqconstraint}), is satisfied. Define the control in (\ref{eqcontrolledLLphysical}) to be
\begin{equation}\label{equ} \mathbf u=k(\mathbf r -\mathbf m ) \end{equation} where $k$ is a scalar constant. This is the same control used in \cite{Chow2016}. For this control, $\mathbf r$ is an equilibrium point of the controlled Landau-Lifshitz equation (\ref{eqcontrolledLLphysical}). It is shown in Theorem~\ref{thmasymstab} that $\mathbf r$ is locally asymptotically stable, which is the main result. Simulations demonstrating asymptotic stability of the Landau-Lifshitz equation given the control in (\ref{equ}) are shown in \cite{Chow2016}.
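A minimal finite-difference sketch of such a simulation is given below. It is an assumption-laden illustration, not the simulations of \cite{Chow2016}: forward-Euler stepping, mirrored ghost points for the Neumann conditions (\ref{eqboundarycondition}), renormalization at each step to enforce (\ref{eqconstraint}), and arbitrarily chosen parameter values.

```python
# Illustrative sketch: forward-Euler stepping of the controlled
# Landau--Lifshitz equation with u = k(r - m).  All parameters arbitrary.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(a):
    n = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    return (a[0]/n, a[1]/n, a[2]/n)

N, L, nu, k, dt, steps = 16, 1.0, 0.5, 1.0, 1e-4, 3000
dx = L / (N - 1)
r = (0.0, 0.0, 1.0)                        # target equilibrium in E

# Small spatially varying tilt away from r as the initial state.
m = [normalize((0.2*math.sin(math.pi*i*dx), 0.2*math.cos(math.pi*i*dx), 1.0))
     for i in range(N)]

def dist_to_r(m):                          # discrete L2 distance to r
    return math.sqrt(dx * sum((a - b)**2 for v in m for a, b in zip(v, r)))

d0 = dist_to_r(m)
for _ in range(steps):
    new = []
    for i in range(N):
        left = m[i-1] if i > 0 else m[1]            # m_x(0) = 0 (mirror)
        right = m[i+1] if i < N-1 else m[N-2]       # m_x(L) = 0 (mirror)
        mxx = tuple((left[j] - 2*m[i][j] + right[j]) / dx**2 for j in range(3))
        h = tuple(mxx[j] + k*(r[j] - m[i][j]) for j in range(3))  # m_xx + u
        prec = cross(m[i], h)                       # m x (m_xx + u)
        damp = cross(m[i], prec)                    # m x (m x (m_xx + u))
        step = tuple(m[i][j] + dt*(prec[j] - nu*damp[j]) for j in range(3))
        new.append(normalize(step))
    m = new
d1 = dist_to_r(m)
```

Over the run, the discrete $\mathcal L_2$ distance to $\mathbf r$ decreases while the unit-magnitude constraint is maintained, consistent with Theorem~\ref{thmasymstab}.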
The following lemmas are needed in Theorem~\ref{thmasymstab}. Lemmas~\ref{thmderivativemcrossmprime} and~\ref{lemmazerointegral} appear in \cite{Chow2016}; they follow from the product rule and from integration by parts, respectively. \begin{lem}\label{thmderivativemcrossmprime} For $\mathbf m\in \mathcal L_2^3$, the derivative of $\mathbf g=\mathbf m \times \mathbf m_x$ is $\mathbf g_x=\mathbf m \times \mathbf m_{xx}$. \end{lem}
\begin{lem}\label{lemmazerointegral} For $\mathbf m \in \mathcal L_2^3$ satisfying (\ref{eqboundarycondition}), \[ \int_0^L (\mathbf m-\mathbf r)^{\mathrm T}(\mathbf m \times \mathbf m_{xx})dx=0. \] \end{lem}
\begin{lem}\label{lemupperbound1}
If $\mathbf r \in E$ and $\mathbf m$ satisfies (\ref{eqconstraint}), then $||\mathbf m \times \mathbf r||_2\leq1 $. \end{lem}
\begin{proof}
Recall $||\mathbf m \times \mathbf r||_2 = ||\mathbf m||_2||\mathbf r ||_2\sin(\theta)$ where $\theta$ is the angle between $\mathbf m$ and $\mathbf r$. Since $||\mathbf m||_2=||\mathbf r||_2=1$, it follows that
$||\mathbf m \times \mathbf r||_2 = \sin(\theta)\leq 1$. \end{proof}
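A numerical spot-check of the lemma (illustrative only; the sample directions are arbitrary):

```python
# For unit vectors m and r, ||m x r||_2 = sin(theta) <= 1.
import math

def unit(v):
    n = math.sqrt(sum(x*x for x in v))
    return [x/n for x in v]

def cross_norm(a, b):
    c = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    return math.sqrt(sum(x*x for x in c))

r = unit([1.0, 2.0, 2.0])
samples = [unit([math.cos(t), math.sin(t), 0.5*t]) for t in
           [0.1*j for j in range(1, 60)]]
norms = [cross_norm(m, r) for m in samples]
```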
\begin{thm}\label{thmasymstab} For any $\mathbf r \in E$, there exists a range of positive values of $k$ such that $\mathbf r$ is a locally asymptotically stable equilibrium point of (\ref{eqcontrolledLLphysical}) in the $\mathcal L_2^3$--norm. \end{thm}
\begin{proof}
Let $B(\mathbf r,p)=\{\mathbf m \in \mathcal L_2^3: ||\mathbf m -\mathbf r||_{ \mathcal L_2^3}<p \} \subset D$ for some constant $0<p<2$. Note that since $p<2$, $-\mathbf r \notin B(\mathbf r,p)$. For any $\mathbf m \in B(\mathbf r,p)$, the Lyapunov function is
V(\mathbf m)=\frac{f(k)}{2}\left| \left| \mathbf m-\mathbf r\right|\right|_{\mathcal L_2^3}^2+\frac{1}{2}\left| \left| \mathbf m_x\right|\right|_{\mathcal L_2^3}^2 \]
where $f(k)>0$ is a scalar function of $k$ and $|f(k)+k|\leq1$ for all $k>0$. Such functions exist. For example, $f(k)=k$ for $k\in(0,1/2]$.
Taking the derivative of $V$, \begin{align} \frac{dV}{dt}&=\int_0^Lf(k)(\mathbf m -\mathbf r)^{\mathrm T}\dot{{\mathbf m}} dx+\int_0^L\mathbf m_x^{\mathrm T} \dot{{\mathbf m}}_xdx \nonumber\\ &=\int_0^Lf(k)(\mathbf m -\mathbf r)^{\mathrm T}\dot{\mathbf m} dx-\int_0^L\mathbf m_{xx}^{\mathrm T} \dot{\mathbf m}dx\nonumber\\ &=\int_0^L\left(f(k)(\mathbf m -\mathbf r)^{\mathrm T}\dot{\mathbf m} -\mathbf m_{xx}^{\mathrm T} \dot{\mathbf m}\right)dx\label{eqLyapunovFunction} \end{align} where the dot notation means differentiation with respect to $t$.
Letting $\mathbf h = \mathbf m - \mathbf r$, the integrand in (\ref{eqLyapunovFunction}) becomes \begin{equation}\label{eqintegrand} f(k)\mathbf h^{\mathrm T}\dot{\mathbf m} - \mathbf m_{xx}^{\mathrm T} \dot{\mathbf m} \end{equation} and equation~(\ref{eqcontrolledLLphysical}) becomes \[ \dot{ \mathbf m} = \mathbf m \times \left( \mathbf m_{xx}-k\mathbf h\right)-\nu\mathbf m\times\left(\mathbf m\times \left(\mathbf m_{xx}-k\mathbf h\right)\right). \] It follows that \begin{align} \mathbf h^{\mathrm T}\dot{\mathbf m} = &\mathbf h^{\mathrm T} \left[\mathbf m \times \left( \mathbf m_{xx}-k\mathbf h\right)-\nu\mathbf m\times\left(\mathbf m\times \left(\mathbf m_{xx}-k\mathbf h\right)\right)\right] \nonumber\\ =& \mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right) -k \mathbf h^{\mathrm T}\left(\mathbf m \times \mathbf h\right) \nonumber\\ &-\nu \mathbf h^{\mathrm T}\left[\mathbf m\times\left(\mathbf m\times \mathbf m_{xx}\right)\right] +\nu k \mathbf h^{\mathrm T}\left[\mathbf m\times\left(\mathbf m\times \mathbf h\right)\right] \nonumber \\ =& \mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right) -\nu \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf h\times \mathbf m\right)\nonumber\\ &+\nu k \left(\mathbf m\times \mathbf h\right)^{\mathrm T}\left(\mathbf h\times\mathbf m\right) \nonumber\\ =& \mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right) -\nu \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf h\times \mathbf m\right) \nonumber\\
&-\nu k ||\mathbf m\times \mathbf h||_2^2 \label{eqintegrandfirstterm} \end{align}
and \begin{align}
\mathbf m_{xx}^{\mathrm T} \dot{\mathbf m} =& \mathbf m_{xx}^{\mathrm T} \left[ \mathbf m \times \left( \mathbf m_{xx}-k\mathbf h\right)-\nu\mathbf m\times\left(\mathbf m\times \left(\mathbf m_{xx}-k\mathbf h\right)\right) \right] \nonumber\\
=& \mathbf m_{xx}^{\mathrm T} \left( \mathbf m \times \mathbf m_{xx}\right) -k\mathbf m_{xx}^{\mathrm T} \left( \mathbf m \times \mathbf h\right)\nonumber\\
&-\nu\mathbf m_{xx}^{\mathrm T} \left[ \mathbf m\times\left(\mathbf m\times \mathbf m_{xx} \right)\right]+\nu k\mathbf m_{xx}^{\mathrm T} \left[ \mathbf m\times\left(\mathbf m\times \mathbf h\right)\right] \nonumber\\
=&-k\mathbf m_{xx}^{\mathrm T} \left( \mathbf m \times \mathbf h\right)-\nu\mathbf m_{xx}^{\mathrm T} \left[ \mathbf m\times\left(\mathbf m\times \mathbf m_{xx} \right)\right] \nonumber\\
&+\nu k\mathbf m_{xx}^{\mathrm T} \left[ \mathbf m\times\left(\mathbf m\times \mathbf h\right)\right] \nonumber\\
=&-k\mathbf m_{xx}^{\mathrm T} \left(\mathbf m \times \mathbf h\right)-\nu\left(\mathbf m\times \mathbf m_{xx} \right)^{\mathrm T} \left( \mathbf m_{xx}\times \mathbf m\right)\nonumber\\
& +\nu k\left(\mathbf m\times \mathbf h\right)^{\mathrm T} \left(\mathbf m_{xx} \times\mathbf m\right) \nonumber\\
=&-k\mathbf m_{xx}^{\mathrm T} \left( \mathbf m \times \mathbf h\right)+\nu||\mathbf m\times \mathbf m_{xx} ||_2^2 \nonumber\\
&+\nu k\left(\mathbf m\times \mathbf h\right)^{\mathrm T} \left(\mathbf m_{xx} \times\mathbf m\right). \label{eqintegrandsecondterm} \end{align}
Substituting (\ref{eqintegrandfirstterm}) and (\ref{eqintegrandsecondterm}) into equation~(\ref{eqintegrand}) leads to \begin{align*} &f(k)\mathbf h^{\mathrm T}\dot{\mathbf m} - \mathbf m_{xx}^{\mathrm T} \dot{\mathbf m}\nonumber\\ =& f(k)\mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right) -\nu f(k) \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf h\times \mathbf m\right) \nonumber\\
& -\nu kf(k) ||\mathbf m\times \mathbf h||_2^2 +k\mathbf m_{xx}^{\mathrm T} \left( \mathbf m \times \mathbf h\right)-\nu||\mathbf m\times \mathbf m_{xx} ||_2^2\\ &-\nu k\left(\mathbf m\times \mathbf h\right)^{\mathrm T} \left(\mathbf m_{xx} \times\mathbf m\right)\\ =& (f(k)-k)\mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right) -\nu f(k) \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf h\times \mathbf m\right) \\
& -\nu kf(k) ||\mathbf m\times \mathbf h||_2^2 -\nu||\mathbf m\times \mathbf m_{xx} ||_2^2\\ &-\nu k\left(\mathbf m\times \mathbf h\right)^{\mathrm T} \left(\mathbf m_{xx} \times\mathbf m\right)\\ =& (f(k)-k)\mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right) \\ &-\nu (f(k)+k) \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf h\times \mathbf m\right)\\
&-\nu kf(k) ||\mathbf m\times \mathbf h||_2^2 -\nu||\mathbf m\times \mathbf m_{xx} ||_2^2 \end{align*} Substituting this expression into equation~(\ref{eqLyapunovFunction}) leads to \begin{align*} \frac{dV}{dt} =&(f(k)-k) \int_0^L\mathbf h^{\mathrm T} \left(\mathbf m \times \mathbf m_{xx}\right)dx\\ &-\nu (f(k)+k) \int_0^L \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf m\times \mathbf h\right) dx\\
& - \nu kf(k)\int_0^L ||\mathbf m\times \mathbf h||_2^2 dx-\nu \int_0^L ||\mathbf m\times \mathbf m_{xx} ||_2^2 dx. \end{align*} From Lemma~\ref{lemmazerointegral}, the first integral equals zero since $\mathbf h = \mathbf m -\mathbf r$. It follows that \begin{align*} \frac{dV}{dt} =&-\nu (f(k)+k) \int_0^L \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf m\times \mathbf h\right) dx\\
& - \nu kf(k)\int_0^L ||\mathbf m\times \mathbf h||_2^2 dx-\nu \int_0^L ||\mathbf m\times \mathbf m_{xx} ||_2^2 dx\\ =& -\nu (f(k)+k) \int_0^L \left(\mathbf m\times \mathbf m_{xx}\right)^{\mathrm T}\left(\mathbf m\times \mathbf h\right) dx \\
&- \nu kf(k) ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2-\nu ||\mathbf m\times \mathbf m_{xx} ||_{\mathcal L_2^3}^2. \end{align*} Applying the Cauchy-Schwarz Inequality to the integrand leads to \begin{align*}
\frac{dV}{dt} \leq &\nu |f(k)+k| \int_0^L ||\mathbf m\times \mathbf m_{xx}||_2||\mathbf m\times \mathbf h||_2 dx \\
&- \nu kf(k) ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2-\nu ||\mathbf m\times \mathbf m_{xx} ||_{\mathcal L_2^3}^2. \end{align*}
Since $||\mathbf m\times \mathbf h||_2\leq 1$ from Lemma~\ref{lemupperbound1}, then \begin{align*}
\frac{dV}{dt} \leq &\nu |f(k)+k| \int_0^L ||\mathbf m\times \mathbf m_{xx}||_2dx - \nu kf(k) ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2\\
&-\nu ||\mathbf m\times \mathbf m_{xx} ||_{\mathcal L_2^3}^2\\
=&\nu |f(k)+k| \, ||\mathbf m\times \mathbf m_{xx}||_{\mathcal L_2^3}^2 - \nu kf(k) ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2\\
&-\nu ||\mathbf m\times \mathbf m_{xx} ||_{\mathcal L_2^3}^2\\
=&\nu \left(|f(k)+k|-1\right) \, ||\mathbf m\times \mathbf m_{xx}||_{\mathcal L_2^3}^2 - \nu kf(k) ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2. \end{align*}
Since $|f(k)+k|\leq1$, then \begin{align}\label{eqdervativeofLyapunovfinal}
\frac{dV}{dt} &\leq - \nu kf(k) ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2. \end{align}
Since $k>0$ and $f(k)>0$, then $\frac{dV}{dt} \leq 0$ with $\frac{dV}{dt} =0$ if and only if $\mathbf m\times \mathbf h=\mathbf0$. For example, suppose $f(k)=k$ on $k\in(0,1/2]$, which satisfies $|f(k)+k|\leq1$. Equation~(\ref{eqdervativeofLyapunovfinal}) becomes
\[
\frac{dV}{dt} \leq - \nu k^2 ||\mathbf m\times \mathbf h||_{\mathcal L_2^3}^2
\]
which is clearly less than or equal to zero, and the value of $k$ can be any number in the interval $(0,1/2]$. For instance, picking $k=1/4$, the Lyapunov function is
\[
V(\mathbf m)=\frac{1}{8}\left| \left| \mathbf m-\mathbf r\right|\right|_{\mathcal L_2^3}^2+\frac{1}{2}\left| \left| \mathbf m_x\right|\right|_{\mathcal L_2^3}^2.
\]
Revisiting equation (\ref{eqdervativeofLyapunovfinal}), if $\mathbf m =\mathbf r$, then $\mathbf h=\mathbf 0$ and hence $dV/dt=0$. On the other hand, $dV/dt=0$ implies $\mathbf m\times \mathbf h=\mathbf 0,$ and hence $\mathbf r\times \mathbf m=\mathbf 0.$ This is a system of algebraic equations, \begin{align*} r_2m_3-r_3m_2&=0\\ r_3m_1-r_1m_3&=0\\ r_1m_2-r_2m_1&=0. \end{align*}
The solution is $m_2=\displaystyle\frac{r_2}{r_1}m_1$ and $m_3=\displaystyle\frac{r_3}{r_1}m_1$ for any $m_1$ and $r_1\neq0$. Given $||\mathbf m||_2=1$ and $||\mathbf r||_2=1$, this leads to $m_1^2=r_1^2$ and hence $m_1=\pm r_1$, which leads to $m_2=\pm r_2$ and $m_3=\pm r_3$; that is, $\mathbf m = \pm \mathbf r$. For $V(\mathbf m)$ on $ B(\mathbf r,p)$, this implies $\mathbf m=\mathbf r$ if $dV/dt=0$. Local asymptotic stability follows from Lyapunov's Theorem \cite[Theorem~6.2.1]{Michel1995}. \end{proof}
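To illustrate the convergence mechanism concretely, consider the spatially uniform case $\mathbf m_{xx}=\mathbf 0$, where the closed-loop dynamics reduce to the ODE $\dot{\mathbf m}=-k\,\mathbf m\times\mathbf h+\nu k\,\mathbf m\times\left(\mathbf m\times\mathbf h\right)$ with $\mathbf h=\mathbf m-\mathbf r$. The sketch below (plain Python; the target $\mathbf r$, initial state, and step size are illustrative choices, with $\nu=1$ and $k=1/4$ as in the example above) integrates this ODE by forward Euler with renormalization and confirms $\mathbf m\to\mathbf r$:

```python
import math

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(a):   return math.sqrt(sum(x*x for x in a))

nu, k, dt = 1.0, 0.25, 0.005      # illustrative gain / step-size choices
r = [1/3, 2/3, 2/3]               # target equilibrium, ||r||_2 = 1
m = [0.0, 0.0, 1.0]               # initial magnetization, ||m||_2 = 1

for _ in range(40000):            # integrate to t = 200
    h  = sub(m, r)
    mh = cross(m, h)
    dm = [-k*x + nu*k*y for x, y in zip(mh, cross(m, mh))]
    m  = [x + dt*y for x, y in zip(m, dm)]
    n  = norm(m)
    m  = [x / n for x in m]       # project back onto the unit sphere

assert norm(sub(m, r)) < 1e-6     # m has converged to the equilibrium r
```

The precession term $-k\,\mathbf m\times\mathbf h$ rotates $\mathbf m$ without changing its angle to $\mathbf r$; the damping term $\nu k\,\mathbf m\times(\mathbf m\times\mathbf h)$ is what drives that angle to zero, consistent with $dV/dt\leq 0$.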
\section{Discussions}
Asymptotic stability of an arbitrary equilibrium point of the Landau--Lifshitz equation with Neumann boundary conditions is shown in Theorem~\ref{thmasymstab}. This is established using Lyapunov theory. The result in Theorem~\ref{thmasymstab} is an extension of the work presented in \cite{Chow2016}.
The control given in (\ref{equ}) can be used to control the hysteresis that often appears in magnetization dynamics, including those described by the Landau--Lifshitz equation \cite{Chow2013_thesis,Chow2014_ACC}. Figure~\ref{fighysteresisLLphysicalcontrol} depicts the input--output map of the Landau--Lifshitz equation in (\ref{eqcontrolledLLphysical}). The output is the magnetization, $\mathbf m(x,t)=(m_1(x,t),m_2(x,t),m_3(x,t))$, and the input is a periodic function $\mathbf{\hat{u}}(t)$ with frequency $\omega$; each component of $\mathbf{\hat{u}}(t)$ is the periodic function $0.01\cos(\omega t)$. For this periodic input, equation (\ref{eqcontrolledLLphysical}) becomes \[ \frac{\partial \mathbf m}{\partial t} = \mathbf m \times \left( \mathbf m_{xx}+\mathbf u\right)-\nu\mathbf m\times\left(\mathbf m\times \left(\mathbf m_{xx}+\mathbf u\right)\right) +\mathbf{\hat{u}}(t). \]
As the frequency of the (periodic) input approaches zero, loops appear in the input--output map for $m_1(x,t)$, which indicates the presence of hysteresis \cite{Bernstein2005}.
\begin{figure}
\caption{Input--output map of the controlled Landau--Lifshitz equation (\ref{eqcontrolledLLphysical}).}
\label{fighysteresisLLphysicalcontrol}
\end{figure}
Because hysteresis is characterized by multiple stable equilibrium points \cite{Chow2014_ACC,Morris2011}, the ability to control the stability of equilibrium points implies the ability to control hysteresis. Such a control for the Landau--Lifshitz equation is given in (\ref{equ}). This is a possible avenue for future exploration.
\end{document}
\begin{definition}[Definition:Euler's Number/Decimal Expansion]
The decimal expansion of Euler's number $e$ starts:
:$2 \cdotp 71828 \, 18284 \, 59045 \, 23536 \, 02874 \, 71352 \, 66249 \, 77572 \, 47093 \, 69995 \ldots$
{{OEIS|A001113}}
\end{definition}
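These digits can be reproduced from the rapidly converging series $e = \sum_{k \ge 0} 1/k!$; a short Python check (the working precision of $60$ digits and the cutoff at $1/48!$ are arbitrary but generous choices for the $50$ digits listed above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60            # work with 60 significant digits

e, term = Decimal(0), Decimal(1)  # term holds 1/k!
for k in range(1, 50):            # sum 1/0! + 1/1! + ... + 1/48!
    e += term
    term /= k

# the first 50 decimal digits match the expansion above
assert str(e).startswith("2.71828182845904523536028747135266249775724709369995")
```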
\begin{definition}[Definition:Finitely Generated Module]
Let $R$ be a ring.
Let $M$ be a module over $R$.
Then $M$ is '''finitely generated''' {{iff}} there is a generator for $M$ which is finite.
\end{definition}
Two-sample hypothesis testing
In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant.
There are a large number of statistical tests that can be used in a two-sample test. Which one(s) are appropriate depends on a variety of factors, such as:
• Which assumptions (if any) may be made a priori about the distributions from which the data have been sampled? For example, in many situations it may be assumed that the underlying distributions are normal distributions. In other cases the data are categorical, coming from a discrete distribution over a nominal scale, such as which entry was selected from a menu.
• Does the hypothesis being tested apply to the distributions as a whole, or just some population parameter, for example the mean or the variance?
• Is the hypothesis being tested merely that there is a difference in the relevant population characteristics (in which case a two-sided test may be indicated), or does it involve a specific bias ("A is better than B"), so that a one-sided test can be used?
Relevant tests
Statistical tests that may apply for two-sample testing include:
• Hotelling's two-sample T-squared statistic
• Kernel two-sample test (kernel embedding of distributions)
• Kolmogorov–Smirnov test
• Kuiper's test
• Median test
• Pearson's chi-squared test
• Student's t-test
• Tukey–Duckworth test
• Welch's t-test
See also
• A/B testing
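To make one of the listed tests concrete, here is a minimal pure-Python sketch of Welch's t-test (unequal variances); the sample data are invented for illustration, and in practice a statistics library would be used rather than hand-rolling this:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch–Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample (n-1) variances
    se2 = va / na + vb / nb                    # squared standard error
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

a = [2.1, 2.5, 2.3, 2.7, 2.4]
b = [1.9, 2.0, 2.1, 1.8, 2.2, 2.0]
t, df = welch_t(a, b)
# t ≈ 3.464, df ≈ 6.53; comparing t against the t distribution with df
# degrees of freedom yields the p-value.
```

This is the one-sided/two-sided decision point mentioned above: the same statistic is compared against one or both tails of the reference distribution.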
\begin{document}
\setcounter{page}{1} \thispagestyle{empty}
\begin{abstract} With the help of the notion of weighted sharing of sets, this paper deals with the question posed by \emph{Yi} \cite{Yi-SC-1994} regarding the uniqueness of meromorphic functions sharing three sets. A result has been proved which significantly improves the recent results of \emph{Banerjee - Ahamed} \cite{Ban & Aha-BPAS-2014}, \emph{Banerjee - Mukherjee} \cite{Ban & Muk-HJ-2008} and \emph{Banerjee - Majumder} \cite{Ban & Maj-A-2014} by relaxing the nature of sharing. Several examples have been exhibited to show the sharpness of the cardinalities of the sets $\mathcal{S}_1$ and $\mathcal{S}_2$ considered in \emph{Theorem \ref{t1.2}}\;. Moreover, we give some constructive examples to endorse the validity of our established theorem. \end{abstract} \maketitle
\section{\sc Introduction, Definitions and Results} In this paper, by a meromorphic function we will always mean a meromorphic function in the open complex plane. Let $f$ and $g$ be two non-constant meromorphic functions and let $a\in\mathbb{C}\cup\{\infty\}$. For standard definitions and notations of value distribution theory we refer the reader to \cite{Hay-1964}. Throughout the paper we denote $\mathbb{C^{*}}=\mathbb{C}\smallsetminus\{0\}$.\par If $f$ and $g$ have the same set of $a$-points with the same multiplicities then we say that $f$ and $g$ share the value $a$ $CM$ (Counting Multiplicities). If we do not take the multiplicities into account, $f$ and $g$ are said to share the value $a$ $IM$ (Ignoring Multiplicities).\par
\begin{defi}
For a non-constant meromorphic function $f$ and any set $\mathcal{S}\subset\mathbb{\overline C}$, we define \begin{eqnarray*} E_{f}(\mathcal{S})=\displaystyle\bigcup_{a\in\mathcal{S}}\bigg\{(z,p)\in\mathbb{C}\times\mathbb{N}:f(z)=a,\;\text{with multiplicity}\; p\bigg\}, \end{eqnarray*} \begin{eqnarray*}\overline E_{f}(\mathcal{S})=\displaystyle\bigcup_{a\in\mathcal{S}}\bigg\{(z,1)\in\mathbb{C}\times\{1\}:f(z)=a\bigg\}.\end{eqnarray*} \end{defi} \par If $E_{f}(\mathcal{S})=E_{g}(\mathcal{S})$ ($\overline E_{f}(\mathcal{S})=\overline E_{g}(\mathcal{S})$) then we simply say $f$ and $g$ share $\mathcal{S}$ Counting Multiplicities(CM) (Ignoring Multiplicities(IM)).\par Evidently, if $\mathcal{S}$ contains one element only, then it coincides with the usual definition of $CM (IM)$ sharing of values. \par
Next we explain some definitions and notations which will be used in the paper. \begin{defi}\cite{Lah-Sar-2004} Let $p$ be a positive integer and $a\in\mathbb{C}\cup\{\infty\}$.\begin{enumerate}
\item[(i)] $N\left(r,\displaystyle\frac{1}{f-a}\mid \geq p\right)$ $\left(\overline N\left(r,\displaystyle\frac{1}{f-a}\mid \geq p\right)\right)$ denotes the counting function (reduced counting function) of those $a$-points of $f$ whose multiplicities are not less than $p$.
\item[(ii)] $N\left(r,\displaystyle\frac{1}{f-a}\mid \leq p\right)$ $\left(\overline N\left(r,\displaystyle\frac{1}{f-a}\mid \leq p\right)\right)$ denotes the counting function (reduced counting function) of those $a$-points of $f$ whose multiplicities are not greater than $p$.
\end{enumerate} \end{defi} \begin{defi}
Let $f$ and $g$ be two non-constant meromorphic functions such that $f$ and $g$ share the value $a$ with weight $k$ where $a\in\mathbb{C}\cup\{\infty\}$. Let $ f $ and $ g $ have same $ a $-points with respective multiplicities $ p $ and $ q $. We denote by $\overline N_E^{(k+1}\left(r,\displaystyle\frac{1}{f-a}\right)$ the counting function of those $a$-points of $f$ and $g$ where $p=q\geq k+1$, each point in this counting function counted only once. \end{defi} \begin{defi}\cite {Yi-1991} For $a\in\mathbb{C}\cup\{\infty\}$ and a positive integer $p$ we denote by \begin{eqnarray*} N_{p}\left(r,\frac{1}{f-a}\right)=\overline N\left(r,\frac{1}{f-a}\right)+\overline N\left(r,\frac{1}{f-a}\mid\geq 2\right)+\ldots+\overline N\left(r,\frac{1}{f-a}\mid\geq p\right).\end{eqnarray*}\par It is clear that $N_{1}\left(r,\displaystyle\frac{1}{f-a}\right)=\overline N\left(r,\displaystyle\frac{1}{f-a}\right)$. \end{defi} \begin{defi}
Let $N_{1)}\left(r,\displaystyle\frac{1}{f-a}\right)$ denote the counting function of the simple zeros of $f-a$ and $\overline N_{(2}\left(r,\displaystyle\frac{1}{f-a}\right)$ denote the reduced counting function of the $a$-points of $f$ of multiplicities $\geq 2$. It follows that \begin{eqnarray*} N_2\left(r,\frac{1}{f-a}\right)=N_{1)}\left(r,\displaystyle\frac{1}{f-a}\right)+2\overline N_{(2}\left(r,\displaystyle\frac{1}{f-a}\right).\end{eqnarray*} \end{defi} \begin{defi}\cite{Zha-2005} For a positive integer $p$ and $a\in\mathbb{C}\cup\{\infty\}$, we put \begin{eqnarray*}\delta_{p}(a;f)= 1- \limsup\limits _{r\longrightarrow \infty}\frac{N_{p}\left(r,\displaystyle\frac{1}{f-a}\right)}{T(r,f)}.\end{eqnarray*} \begin{eqnarray*}\Theta(a;f)= 1- \limsup\limits _{r\longrightarrow \infty}\frac{\overline N\left(r,\displaystyle\frac{1}{f-a}\right)}{T(r,f)}\end{eqnarray*}
Clearly $0\leq \delta (a;f)\leq \delta _{p}(a;f)\leq \delta_{p-1}(a;f)\leq\ldots \leq\delta_{2}(a;f)\leq\delta_{1}(a;f)=\Theta (a;f)$. \end{defi}\par In $1926$, \emph{Nevanlinna} first showed that a non-constant meromorphic function on the complex plane $\mathbb{C}$ is uniquely determined by the pre-images, ignoring multiplicities, of $5$ distinct values (including infinity). A few years later, he showed that when multiplicities are taken into consideration, $4$ points are enough, and in that case either the two functions coincide or one is a bilinear transformation of the other.\par The uniqueness problem for entire or meromorphic functions sharing sets was initiated by a famous question of \emph{Gross} in \cite{Gross}. In $1976$, \emph{Gross} \cite{Gross} asked the following question. \begin{ques}\label{qn1.1}
Can one find two finite sets $\mathcal{S}_{j}, (j=1,2)$ such that any two non-constant entire functions $f$ and $g$ satisfying $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$, $(j=1,2)$ must be identical ? \end{ques}\par In \cite{Gross}, \emph{Gross} said that if the answer of \emph{\sc Question \ref{qn1.1}} is affirmative, it would be interesting to know how large both sets would have to be.\par In $1994$, \emph{Yi} \cite{Yi-SC-1994} posed the following question. \begin{ques}\label{qn1.2}
Can one find three finite sets $\mathcal{S}_{j}, (j=1,2,3)$ such that any two non-constant meromorphic functions $f$ and $g$ satisfying $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$, $(j=1,2,3)$ must be identical ? \end{ques} \par In the same paper \cite{Yi-SC-1994}, \emph{Yi} answered the \emph{\sc Question \ref{qn1.2}} affirmatively and obtained a result by showing that there exist three finite sets $\mathcal{S}_1$ (with $7$ elements), $\mathcal{S}_2$ (with $2$ elements) and $\mathcal{S}_3$ (with $1$ element) such that any two non-constant meromorphic functions $f$ and $g$ satisfying $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$, $(j=1,2,3)$ must be identical.\par In the direction of \emph{\sc Question \ref{qn1.2}}, \emph{Fang - Xu} \cite{Fan & Xu-CJCM-1997} obtained the following result. \begin{theoA}\cite{Fan & Xu-CJCM-1997}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^3-z^2-1=0\}$ and $\mathcal{S}_3=\{\infty\}$. Let $f$ and $g$ be two non-constant meromorphic functions such that $\Theta(\infty;f)>\displaystyle\frac{1}{2}$ and $\Theta(\infty;g)>\displaystyle\frac{1}{2}$. If $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$, for $j=1,2,3$ then $f\equiv g$. \end{theoA}\par Dealing with the \emph{\sc Question \ref{qn1.2}}, \emph{Qiu - Fang} \cite{Qiu & Fan-BSMS-2002} obtained a result with the extra supposition that the meromorphic functions $f$ and $g$ both have poles of multiplicity $\geq 2$. In the same paper they also exhibited some examples to show that the condition on the poles of $f$ and $g$ cannot be removed. \par In $2004$, \emph{Yi - Lin} \cite{Yi & Lin-PJAS-2004} proved the following results. \begin{theoB}\cite{Yi & Lin-PJAS-2004}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+bz^{n-1}+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $b$, $c$ are non-zero constants such that $z^n+bz^{n-1}+c=0$ has no repeated root and $n\geq 3$ is an integer. If for two non-constant meromorphic functions $f$ and $g$, $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$, for $j=1,2,3$ and $\delta_{1}(\infty;f)>\displaystyle\frac{5}{6}$, then $f\equiv g$. \end{theoB} \begin{theoC}\cite{Yi & Lin-PJAS-2004}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+bz^{n-1}+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $b$, $c$ are non-zero constants such that $z^n+bz^{n-1}+c=0$ has no repeated root and $n\geq 4$ is an integer. If for two non-constant meromorphic functions $f$ and $g$, $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$, for $j=1,2,3$ and $\Theta(\infty;f)>0$, then $f\equiv g$. \end{theoC} \par Progressively, the research on \emph{\sc Question \ref{qn1.1}} for meromorphic functions as well as \emph{\sc Question \ref{qn1.2}} has gained considerable attention in the literature and has become an impressive branch of the modern uniqueness theory of meromorphic functions. During the last few years a considerable amount of work has been done by many mathematicians to explore possible answers to \emph{\sc Question \ref{qn1.2}}. \par In $2001$, the introduction of the notion of weighted sharing of values and sets by \emph{Lahiri} \cite{Lah-NMJ-2001, Lah-CVTA-2001}, a scaling between $CM$ and $IM$ sharing, further sped up the research in the direction of \emph{Question \ref{qn1.2}}. \begin{defi}
Let $k\in\mathbb{N}\cup\{0\}\cup\{\infty\}$. For $a\in\mathbb{C}\cup\{\infty\}$, we denote by $E_{f}(a,k)$ the set of all $a$-points of $f$, where an $a$-point of multiplicity $m$ is counted $m$ times if $m\leq k$ and $k+1$ times if $m\geq k+1$. If $E_{f}(a,k)=E_{g}(a,k)$, we say that $f$ and $g$ share the value $a$ with weight $k$. \end{defi} \begin{defi}
Let $\mathcal{S}\subset\mathbb{C}\cup\{\infty\}$ be non-empty and $k\in\mathbb{N}\cup\{0\}\cup\{\infty\}$. We denote by $E_{f}(\mathcal{S},k)$ the set $E_{f}(\mathcal{S},k)=\displaystyle\bigcup_{a\in\mathcal{S}}E_{f}(a,k)$.\par Clearly $E_{f}(\mathcal{S})=E_{f}(\mathcal{S},\infty)$ and $\overline E_{f}(\mathcal{S})=E_{f}(\mathcal{S},0)$. \end{defi}\par With the help of weighted sharing of sets, \emph{Banerjee - Mukherjee} \cite{Ban & Muk-HJ-2008} obtained the following results. \begin{theoD}\cite{Ban & Muk-HJ-2008}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+bz^{n-1}+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $b$, $c$ are non-zero constants such that $z^n+bz^{n-1}+c=0$ has no repeated root and $n\geq 3$ is an integer. If two non-constant meromorphic functions $f$ and $g$ having no simple pole satisfy $E_{f}(\mathcal{S}_{1},1)=E_{g}(\mathcal{S}_{1},1)$, $E_{f}(\mathcal{S}_{2},5)=E_{g}(\mathcal{S}_{2},5)$ and $E_{f}(\mathcal{S}_{3},\infty)=E_{g}(\mathcal{S}_{3},\infty)$, then $f\equiv g$. \end{theoD} \begin{theoE}\cite{Ban & Muk-HJ-2008}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+bz^{n-1}+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $b$, $c$ are non-zero constants such that $z^n+bz^{n-1}+c=0$ has no repeated root and $n\geq 3$ is an integer. If two non-constant meromorphic functions $f$ and $g$ satisfy $E_{f}(\mathcal{S}_{1},0)=E_{g}(\mathcal{S}_{1},0)$, $E_{f}(\mathcal{S}_{2},6)=E_{g}(\mathcal{S}_{2},6)$, $E_{f}(\mathcal{S}_{3},\infty)=E_{g}(\mathcal{S}_{3},\infty)$ and $\delta_{1)}(\infty;f)+\delta_{1)}(\infty;g)>\displaystyle\frac{5}{n}$, then $f\equiv g$. \end{theoE} \begin{theoF}\cite{Ban & Muk-HJ-2008}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+bz^{n-1}+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $b$, $c$ are non-zero constants such that $z^n+bz^{n-1}+c=0$ has no repeated root and $n\geq 4$ is an integer. If two non-constant meromorphic functions $f$ and $g$ satisfy $E_{f}(\mathcal{S}_{1},0)=E_{g}(\mathcal{S}_{1},0)$, $E_{f}(\mathcal{S}_{2},6)=E_{g}(\mathcal{S}_{2},6)$, $E_{f}(\mathcal{S}_{3},4)=E_{g}(\mathcal{S}_{3},4)$ and $\delta_{1)}(\infty;f)+\delta_{1)}(\infty;g)>0$, then $f\equiv g$. \end{theoF}
\par Recently \emph{Banerjee - Majumder} \cite{Ban & Maj-A-2014} obtained two results by improving some earlier results of \emph{Banerjee} \cite{Ban-APM-2007, Ban-KMJ-2009} as follows. \begin{theoG}\cite{Ban & Maj-A-2014}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+az^{n-1}+b=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $a, b$ are non-zero constants such that $z^n+az^{n-1}+b=0$ has no repeated root and $n(\geq 4)$ be an integer. If for two non-constant meromorphic functions $f$ and $g$, $E_{f}(\mathcal{S}_1,k_1)=E_{g}(\mathcal{S}_1,k_1)$, $E_{f}(\mathcal{S}_2,k_2)=E_{g}(\mathcal{S}_2,k_2)$ and $E_{f}(\mathcal{S}_3,k_3)=E_{g}(\mathcal{S}_3,k_3)$, where $k_1\geq 0$, $k_2\geq 3$, $k_3\geq 1$ are integers satisfying \begin{eqnarray*} 3k_1k_2k_3>k_2+3k_1+k_3-2k_2k_3+4\;\;\text{and}\;\; \Theta_{f}+\Theta_{g}>0, \end{eqnarray*} where $\Theta_{h}=\Theta(\infty;h)+\Theta\left(\displaystyle\frac{a(1-n)}{n};h\right)$, then $f\equiv g$. \end{theoG} \begin{theoH}\cite{Ban & Maj-A-2014}
Let $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:z^n+az^{n-1}+b=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $a, b$ are non-zero constants such that $z^n+az^{n-1}+b=0$ has no repeated root and $n(\geq 3)$ be an integer. If for two non-constant meromorphic functions $f$ and $g$, $E_{f}(\mathcal{S}_1,k_1)=E_{g}(\mathcal{S}_1,k_1)$, $E_{f}(\mathcal{S}_2,k_2)=E_{g}(\mathcal{S}_2,k_2)$ and $E_{f}(\mathcal{S}_3,k_3)=E_{g}(\mathcal{S}_3,k_3)$, where $k_1\geq 0$, $k_2\geq 4$, $k_3\geq 1$ are integers satisfying \begin{eqnarray*} 2k_1k_2k_3>k_2+2k_1+k_3-k_2k_3+3\;\;\text{and}\;\; \Theta_{f}+\Theta_{g}>1, \end{eqnarray*} where $\Theta_{h}=\Theta(\infty;h)+\Theta\left(\displaystyle\frac{a(1-n)}{n};h\right)$ then $f\equiv g$. \end{theoH}\par Earlier the problem of finding the possible answer of the \emph{\sc Question \ref{qn1.2}} was solved by \emph{Lin - Yi} \cite{Lin & Yi-TA-2003} who answered the \emph{\sc Question \ref{qn1.2}} by considering the sets $\mathcal{S}_1=\{0\}$, $\mathcal{S}_2=\{z:az^n-n(n-1)z^2+2n(n-2)bz=(n-1)(n-2)b^2\}$ and $\mathcal{S}_3=\{\infty\}$ for $n\geq 5$, where $a, b$ are constants such that $ab^{n-2}\neq 0, 2$.\par In \cite{Ban & Aha-BPAS-2014}, \emph{Banerjee - Ahamed} modified the sets $\mathcal{S}_1$, $\mathcal{S}_2$ so that $\mathcal{S}_1=\{0,1\}$, and the number of elements in the new set $\mathcal{S}_2$ has decreased by $1$ in the optimal case. Moreover the conditions on the sharing sets $\mathcal{S}_{j}$, $(j=1,2,3)$ has also been relaxed to the conditions of sharing $(\mathcal{S}_{j},k_{j})$, $(j=1,2,3)$, where $(k_1,k_2,k_3)=(0,3,2), (0,4,1)$.\par From the above discussions, we have the following notes: \begin{note}
The lower bound of the cardinality of the main range set $\mathcal{S}_2$ obtained so far in \emph{\sc Theorems A, B, D, E, H}, as well as in the result of \emph{Qiu - Fang} \cite{Qiu & Fan-BSMS-2002}, is $3$, and only with the help of some extra suppositions. \end{note} \begin{note}
Also one may check that the optimal choice for the weights $(k_1,k_2,k_3)=(0,3,1)$ in \emph{\sc Theorem G} can not be considered as it violates the condition $3k_1k_2k_3>k_2+3k_1+k_3-2k_2k_3+4$. \end{note} \begin{note}
We also see that in \emph{\sc Theorem H}, it is not possible to consider the weights as $(k_1,k_2,k_3)=(0,4,1)$ and hence as $(k_1,k_2,k_3)=(0,3,1)$. \end{note}
Based on the above observation, for the purpose of improving all the above mentioned results further, one can ask the following question. \begin{ques}\label{qn1.3}
Can we obtain a uniqueness result corresponding to \emph{\sc Theorems A, B, D, E, H} and \emph{Qiu -Fang} \cite{Qiu & Fan-BSMS-2002} without the help of any extra suppositions in which the lower bound of the cardinality of the main range set will be $3$ ? \end{ques}\par If the answer of the \emph{\sc Question \ref{qn1.3}} is found to be affirmative, then one natural question is as follows. \begin{ques}\label{qn1.4}
Is it possible to reduce further the choice of the weights $(k_1,k_2,k_3)$ to $(0,3,1)$ in all the above mentioned results ? \end{ques}\par Answering \emph{\sc Questions \ref{qn1.2}}, \emph{\sc \ref{qn1.3}} and \emph{\sc\ref{qn1.4}} affirmatively is the main motivation of writing this paper. In this paper, we have replaced the set $\mathcal{S}_1=\{0\}$ by $\mathcal{S}_1=\{0,\delta_{a,b}^n\}$ and $\mathcal{S}_2$ by a new one, and obtained two results, of which the second directly improves all the above mentioned results.\par To this end, we next suppose that $\displaystyle\delta_{a,b}^n=\frac{b(1-n)}{na},$ where $a, b\in\mathbb{C^{*}}$ and $n\geq 3$ is an integer. Here $\mathcal{S}_1=\{0,\delta_{a,b}^n\}$ is the set of zeros of the derivative of the polynomial $ az^n+bz^{n-1}+c. $
\begin{theo}\label{t1.2} For $n\geq 3$, let $\mathcal{S}_1=\{0,\delta_{a,b}^n\}$, $\mathcal{S}_2=\{z:az^n+bz^{n-1}+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $a, b, c\in\mathbb{C^{*}}=\mathbb{C}\smallsetminus\{0\}$ be so chosen that $az^n+bz^{n-1}+c=0$ has no repeated root, $c\neq\displaystyle -\frac{b}{2n}\left(\delta_{a,b}^n\right)^{n-1}$. If for two non-constant meromorphic functions $f$ and $g$, $E_{f}(\mathcal{S}_1,0)=E_{g}(\mathcal{S}_1,0)$, $E_{f}(\mathcal{S}_2,n)=E_{g}(\mathcal{S}_2,n)$ and $E_{f}(\mathcal{S}_3,n-2)=E_{g}(\mathcal{S}_3,n-2)$, then $f\equiv g$. \end{theo} \begin{cor}\label{c1.2}
Let $\mathcal{S}_1=\bigg\{0,-\displaystyle\frac{2b}{3a}\bigg\}$, $\mathcal{S}_2=\{z:az^3+bz^2+c=0\}$ and $\mathcal{S}_3=\{\infty\}$, where $a, b, c\in\mathbb{C^{*}}$ be so chosen that $az^3+bz^2+c=0$ has no repeated root, $c\neq -\displaystyle\frac{2b^3}{27a^2}$. If for two non-constant meromorphic functions $f$ and $g$, $E_{f}(\mathcal{S}_1,0)=E_{g}(\mathcal{S}_1,0)$, $E_{f}(\mathcal{S}_2,3)=E_{g}(\mathcal{S}_2,3)$ and $E_{f}(\mathcal{S}_3,1)=E_{g}(\mathcal{S}_3,1)$, then $f\equiv g$. \end{cor}
Clearly \emph{\sc Corollary \ref{c1.2}}\; directly improves the above mentioned results as we see that the lower bound of $ n $ is $ 3 $, with the corresponding weights $( k_1, k_2, k_3 )=(0, 3, 1)$. \end{rem}\par The following example shows that the conclusions of the \emph{\sc Theorems \ref{t1.2}}\; cease to be hold if we consider $c=\displaystyle -\frac{b}{2n}\left(\delta_{a,b}^n\right)^{n-1}$. \begin{exm}
Let $a=1$, $b=3$ and $n=3$; then $\displaystyle -\frac{b}{2n}\left(\delta_{a,b}^n\right)^{n-1}=-2$. Let $c=\displaystyle -\frac{b}{2n}\left(\delta_{a,b}^n\right)^{n-1}=-2$ and $\mathcal{S}_2=\{z:z^3+3z^2-2=0\}=\{-1, -1+\sqrt{3},-1-\sqrt{3}\}$ and $\mathcal{S}_3=\{\infty\}$. Hence we must have $\mathcal{S}_1=\{0,\delta_{a,b}^n\}=\{0,-2\}$. Let $f(z)=\phi(z)-2$ and $g(z)=-\phi(z)$, where $\phi(z)$ is a non-constant meromorphic function. It is clear that $E_{f}(\mathcal{S}_{j})=E_{g}(\mathcal{S}_{j})$ for $j=1,2,3$ and hence $E_{f}(\mathcal{S}_{1},0)=E_{g}(\mathcal{S}_{1},0)$, $E_{f}(\mathcal{S}_{2},3)=E_{g}(\mathcal{S}_{2},3)$ and $E_{f}(\mathcal{S}_{3},1)=E_{g}(\mathcal{S}_{3},1)$, but note that $f\not\equiv g$. \end{exm} \par The next example shows the sharpness of the cardinalities of the set $\mathcal{S}_1$ and the main range set $\mathcal{S}_2$ in \emph{\sc Theorem \ref{t1.2}}. \begin{exm}
Let $\mathcal{S}_2=\{z:az^2+bz+c=0\}=\{\gamma,\delta\}$, where $\gamma+\delta=-\displaystyle\frac{b}{a}$, $\gamma\delta=\displaystyle\frac{c}{a}$, $a,b,c\in\mathbb{C^{*}}$, $c\neq\displaystyle\frac{b^2}{8a}$. Hence $\mathcal{S}_1=\bigg\{-\displaystyle\frac{b}{2a}\bigg\}=\bigg\{\displaystyle\frac{\gamma+\delta}{2}\bigg\}$. Let $\mathcal{S}_3=\{\infty\}$ and $f(z)=h(z)+\gamma+\delta$ and $g(z)=-h(z)$, where $h(z)$ is any non-constant meromorphic function. We see that all the conditions of \emph{\sc Theorem \ref{t1.2}} are satisfied but $f\not\equiv g$. \end{exm} \par The following example shows that, the condition $b\neq 0$, in \emph{\sc Theorem \ref{t1.2}},\; can not be removed. \begin{exm}
Let $b=0$, then $\delta_{a,b}^n=0$. Thus, we get $\mathcal{S}_1=\{0\}$. \begin{eqnarray*} \text{Let}\;\; \mathcal{S}_2=\{z:az^3+c=0\}=\bigg\{\sqrt[3]{-\displaystyle\frac{c}{a}}, \sqrt[3]{-\displaystyle\frac{c}{a}}\omega, \sqrt[3]{-\displaystyle\frac{c}{a}}\omega^2\bigg\},\end{eqnarray*} $a,c\in\mathbb{C^{*}}$, where $ \omega $ is a cube root of unity, and $\mathcal{S}_3=\{\infty\}$. Let $f(z)$ be a non-constant meromorphic function and $g(z)=\omega\; f(z)$, where $\omega$ is a non-real cube root of unity. It is clear that $E_{f}(\mathcal{S}_{1},0)=E_{g}(\mathcal{S}_{1},0)$, $E_{f}(\mathcal{S}_{2},3)=E_{g}(\mathcal{S}_{2},3)$ and $E_{f}(\mathcal{S}_{3},1)=E_{g}(\mathcal{S}_{3},1)$ but $f\not\equiv g$. \end{exm} \par The next two examples show that the set $\mathcal{S}_2$ considered in \emph{\sc Theorem \ref{t1.2}} cannot be replaced by any arbitrary set. \begin{exm}
Let $\mathcal{S}_1=\bigg\{\displaystyle\frac{6-\sqrt{3}}{3}, \frac{6+\sqrt{3}}{3}\bigg\}$, \begin{eqnarray*} \mathcal{S}_2=\bigg\{z:z^3-6z^2+11z-6=0\bigg\}=\{1,2,3\}\end{eqnarray*} and $\mathcal{S}_3=\{\infty\}$. Let $f(z)=h(z)+4$ and $g(z)=-h(z)$, where $ h(z) $ is a non-constant meromorphic function. We see that $E_{f}(\mathcal{S}_{1},0)=E_{g}(\mathcal{S}_{1},0)$, $E_{f}(\mathcal{S}_{2},3)=E_{g}(\mathcal{S}_{2},3)$ and $E_{f}(\mathcal{S}_{3},1)=E_{g}(\mathcal{S}_{3},1)$, but $f\not\equiv g$. \end{exm} \begin{exm}
Let $\mathcal{S}_1=\bigg\{\displaystyle\frac{15-\sqrt{3}}{3}, \frac{15+\sqrt{3}}{3}\bigg\}$, \begin{eqnarray*} \mathcal{S}_2=\bigg\{z:z^3-15z^2+74z-120=0\bigg\}=\{4,5,6\}\end{eqnarray*} and $\mathcal{S}_3=\{\infty\}$. Let $f(z)=\phi(z)+10$ and $g(z)=-\phi(z)$, where $ \phi(z) $ is a non-constant meromorphic function. We see that $E_{f}(\mathcal{S}_{1},0)=E_{g}(\mathcal{S}_{1},0)$, $E_{f}(\mathcal{S}_{2},3)=E_{g}(\mathcal{S}_{2},3)$ and $E_{f}(\mathcal{S}_{3},1)=E_{g}(\mathcal{S}_{3},1)$, but $f\not\equiv g$. \end{exm} \begin{note}
One can find many such examples by considering $ \mathcal{S}_1 $ as the set of roots of the derivative of the polynomial of degree $ 3 $ whose roots form the set $ \mathcal{S}_2=\{m, m+1, m+2\} $, where $ m\in\mathbb{N} $, and by choosing the functions $ f(z)=h(z)+2(m+1) $ and $ g(z)=-h(z) $, where $ h(z) $ is a non-constant meromorphic function. \end{note}
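The sharing claims in counterexamples of this type can be checked mechanically. With $f=\phi+t$ and $g=-\phi$, we have $f(z)\in\mathcal{S}$ iff $\phi(z)\in\{s-t:s\in\mathcal{S}\}$ and $g(z)\in\mathcal{S}$ iff $\phi(z)\in\{-s:s\in\mathcal{S}\}$, so the two preimage sets coincide exactly when $s\mapsto -s+t$ maps $\mathcal{S}$ onto itself. A quick numerical check of this criterion for the first example above ($a=1$, $b=3$, $n=3$, $t=-2$; plain Python, with the values recomputed here rather than taken on faith):

```python
import math

# Example data: a = 1, b = 3, n = 3, so delta = b(1-n)/(na) = -2 and the
# excluded value of c is -(b/(2n)) * delta**(n-1) = -2
a, b, n = 1, 3, 3
delta = b * (1 - n) / (n * a)
c = -(b / (2 * n)) * delta ** (n - 1)
assert delta == -2.0 and c == -2.0

s1 = [0.0, delta]                                      # zeros of (z^3 + 3z^2 - 2)'
s2 = [-1.0, -1.0 + math.sqrt(3), -1.0 - math.sqrt(3)]  # roots of z^3 + 3z^2 - 2

for z in s2:                                           # confirm they are roots
    assert abs(z**3 + 3*z**2 - 2) < 1e-9

# f = phi - 2 and g = -phi share a set S exactly when s -> -s - 2 permutes S
for s in (s1, s2):
    image = sorted(-x - 2 for x in s)
    assert all(abs(u - v) < 1e-9 for u, v in zip(image, sorted(s)))
```

The same criterion with $t=2(m+1)$ confirms the family of examples in the preceding note, since $s\mapsto 2(m+1)-s$ permutes $\{m,m+1,m+2\}$.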
\section{\sc Some lemmas} In this section, we are going to discuss some lemmas which will be needed later to prove our main results. We define, for two non-constant meromorphic functions $f$ and $g$, \begin{eqnarray}\label{e2.1} \mathcal{F}=\frac{f^{n-1}(af+b)}{-c},\;\;\;\mathcal{G}=\frac{g^{n-1}(ag+b)}{-c}. \end{eqnarray}\par Associated to $\mathcal{F}$ and $\mathcal{G}$, we next define $\mathcal{H}$ as follows: \begin{eqnarray}\label{e2.2} \mathcal{H}=\left(\frac{\mathcal{F}^{\prime\prime}}{\mathcal{F}^{\prime}}-\frac{2\mathcal{F}^{\prime}}{\mathcal{F}-1}\right)-\left(\frac{\mathcal{G}^{\prime\prime}}{\mathcal{G}^{\prime}}-\frac{2\mathcal{G}^{\prime}}{\mathcal{G}-1}\right) \end{eqnarray} and \begin{eqnarray}\label{e2.4} \Psi=\frac{\mathcal{F}^{\prime}}{\mathcal{F}-1}-\frac{\mathcal{G}^{\prime}}{\mathcal{G}-1}.\end{eqnarray} \begin{lem}\label{lem2.1}\cite{Mok-1971} Let $ h $ be a non-constant meromorphic function and let \begin{eqnarray*} \mathcal{R}(h)=\displaystyle\frac{\displaystyle\sum_{i=0}^{n}a_ih^i}{\displaystyle\sum_{j=0}^{m}b_jh^j}, \end{eqnarray*} be an irreducible rational function in $ h $ with constant coefficients $\{a_i\} $, $ \{b_j\}$, where $ a_n\neq 0 $ and $ b_m\neq 0 $. Then \begin{eqnarray*} T(r,\mathcal{R}(h))=\max\{n,m\}\; T(r,h)+S(r,h). \end{eqnarray*} \end{lem} \begin{lem}\label{lem2.2}
Let $\mathcal{F}$ and $\mathcal{G}$ be given by (\ref{e2.1}) satisfying $E_{_{\mathcal{F}}}(1,q)=E_{_{\mathcal{G}}}(1,q)$, $0\leq q<\infty$, with $\mathcal{H}\not\equiv 0$. Then \begin{eqnarray*} N_E^{1)}\left(r,\frac{1}{\mathcal{F}-1}\right)=N_E^{1)}\left(r,\frac{1}{\mathcal{G}-1}\right)&\leq& N(r,\mathcal{H})+S(r,\mathcal{F})+S(r,\mathcal{G}).\end{eqnarray*} \end{lem} \begin{proof} Since $E_{_{\mathcal{F}}}(1,q)=E_{_{\mathcal{G}}}(1,q)$, it is clear that any simple $1$-point of $\mathcal{F}$ and $\mathcal{G}$ is a zero of $\mathcal{H}$. From the construction of $\mathcal{H}$, we know that $m(r,\mathcal{H})=S(r,\mathcal{F})+S(r,\mathcal{G}).$ Therefore by the \emph{First Fundamental Theorem}, we get \begin{eqnarray*} && N_E^{1)}\left(r,\frac{1}{\mathcal{F}-1}\right)=N_E^{1)}\left(r,\frac{1}{\mathcal{G}-1}\right)\\ &\leq& N\left(r,\frac{1}{\mathcal{H}}\right) \\&\leq& N(r,\mathcal{H})+S(r,\mathcal{F})+S(r,\mathcal{G}). \end{eqnarray*} \end{proof} \begin{lem}\label{lem2.3}
Let the set $\mathcal{S}_2$ be given as in \emph{Theorem \ref{t1.2}}\; and let $\Psi$ be given by \emph{(\ref{e2.4})}. If $E_{f}(\mathcal{S}_2,n)=E_{g}(\mathcal{S}_2,n)$, $E_{f}(\mathcal{S}_3,n-2)=E_{g}(\mathcal{S}_3,n-2)$ and $\Psi\not\equiv 0$, then \begin{eqnarray*} && \overline N\left(r,\frac{1}{f}\right)+ \overline N\left(r,\frac{1}{f-\delta_{a,b}^n}\right)\\&\leq& \overline N\left(r,\frac{1}{\mathcal{F}-1}\mid \geq n+1\right)+\overline N(r,f\mid\geq n-1)+S(r,f).\end{eqnarray*} \end{lem} \begin{proof} Since $\Psi\not\equiv 0$, in view of the lemma of logarithmic derivatives we have $m(r,\Psi)=S(r,f)$. Again, since $E_{f}(\mathcal{S}_2,n)=E_{g}(\mathcal{S}_2,n)$ and $E_{f}(\mathcal{S}_3,n-2)=E_{g}(\mathcal{S}_3,n-2)$, one can note that \begin{eqnarray}\label{e2.9} N(r,\Psi)\leq \overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)+\overline N(r,f\mid\geq n-1)+S(r,f). \end{eqnarray}\par Let $z_0$ be a point such that $f(z_0)=0$ or $f(z_0)=\delta_{a,b}^n$. Then, since $E_{f}(\mathcal{S}_1,0)=E_{g}(\mathcal{S}_1,0)$, we must have $\Psi(z_0)=0$. Thus we see that \begin{eqnarray}\label{e2.10} \overline N\left(r,\frac{1}{f}\right)+\overline N\left(r,\frac{1}{f-\delta_{a,b}^n}\right)\leq N\left(r,\frac{1}{\Psi}\right).\end{eqnarray}\par Applying the \emph{First Fundamental Theorem}, we get from (\ref{e2.9}) and (\ref{e2.10}), \begin{eqnarray*} && \overline N\left(r,\frac{1}{f}\right)+\overline N\left(r,\frac{1}{f-\delta_{a,b}^n}\right)\\ &\leq& N\left(r,\frac{1}{\Psi}\right)\\ &\leq& T(r,\Psi)+S(r,f)\\&=& N(r,\Psi)+m(r,\Psi)+S(r,f)\\&=&N(r,\Psi)+S(r,f)\\ &\leq& \overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)+\overline N(r,f\mid\geq n-1)+S(r,f). \end{eqnarray*} \end{proof} \begin{lem}\label{lem2.4}\cite{Fang-1999}
Let $a_1$, $a_2$, $a_3$, $a_4$ be four distinct complex numbers. If $E_{f}(a_j,\infty)=E_{g}(a_j,\infty)$, (j=1, 2, 3, 4), then $f(z)=\displaystyle\frac{\alpha\;g(z)+\beta}{\gamma\; g(z)+\delta}$, where $\alpha\delta-\beta\gamma\neq 0$. \end{lem} \begin{lem}\label{lem2.5}\cite{Fang-1999}
If $E_{f^{*}}(1,\infty)=E_{g^{*}}(1,\infty)$ with $\delta_{2}(0;f^{*})+\delta_{2}(0;g^{*})+\delta_{2}(\infty,f^{*})+\delta_{2}(\infty,g^{*})>3$, then either $f^{*}g^{*}\equiv 1$ or $f^{*}\equiv g^{*}.$ \end{lem}
\section{\sc Proof of the theorem}
\begin{proof}[Proof of Theorem \ref{t1.2}] Let $\mathcal{F}$ and $\mathcal{G}$ be given by (\ref{e2.1}) and $\mathcal{H}$, by (\ref{e2.2}). We now discuss the following cases.\\
\noindent{\sc Case 1.} Suppose, if possible, that $\mathcal{H}\not\equiv 0$. Then it is clear that $\mathcal{F}\not\equiv\mathcal{G}$ and hence $\Psi\not\equiv 0$. By the lemma of logarithmic derivatives, one can easily get that $m(r,\mathcal{H})=S(r,f)+S(r,g)=m(r,\Psi)$. Since $E_{f}(\mathcal{S}_1,0)=E_{g}(\mathcal{S}_1,0)$, $E_{f}(\mathcal{S}_2,n)=E_{g}(\mathcal{S}_2,n)$ and $E_{f}(\mathcal{S}_3,n-2)=E_{g}(\mathcal{S}_3,n-2)$, from the construction of $\mathcal{H}$ one can easily get that \begin{eqnarray}\label{e3.1} && N(r,\mathcal{H})\\&\leq& \nonumber\overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)+\overline N(r,f\mid\geq n-1)+\overline N\left(r,\frac{1}{f}\right)+\overline N\left(r,\frac{1}{f-\delta_{a,b}^n}\right)\\ &&\nonumber+ N_{0}\left(r,\frac{1}{f^{\prime}}\right)+ N_{0}\left(r,\frac{1}{g^{\prime}}\right)+S(r,f)+S(r,g)\nonumber,\end{eqnarray}\par where $N_{0}\left(r,\displaystyle\frac{1}{f^{\prime}}\right)$ denotes the counting function of those zeros of $f^{\prime}$ which are not the zeros of $f(f-\delta_{a,b}^n)(\mathcal{F}-1)$. Similarly, $N_{0}\left(r,\displaystyle\frac{1}{g^{\prime}}\right)$ can be defined.\par
By applying \emph{Second Fundamental Theorem}, we get \begin{eqnarray}\label{e3.2} && (n+1)\bigg\{T(r,f)+T(r,g)\bigg\}\\ &\leq&\nonumber \overline N\left(r,\frac{1}{\mathcal{F}-1}\right)+\overline N(r,f)+\overline N\left(r,\frac{1}{f}\right)+\overline N\left(r,\frac{1}{f-\delta_{a,b}^n}\right)+\overline N\left(r,\frac{1}{\mathcal{G}-1}\right)\\ && \nonumber+\overline N(r,g)+\overline N\left(r,\frac{1}{g}\right)+\overline N\left(r,\frac{1}{g-\delta_{a,b}^n}\right)-N_{0}\left(r,\frac{1}{f^{\prime}}\right)-N_{0}\left(r,\frac{1}{g^{\prime}}\right)\\ &&+S(r,f)+S(r,g).\nonumber \end{eqnarray} \par
Now by using \emph{\sc Lemmas \ref{lem2.2}, \ref{lem2.3}}\; and (\ref{e3.1}), we get from (\ref{e3.2})
\begin{eqnarray}\label{e3.3} && (n+1)\bigg\{T(r,f)+T(r,g)\bigg\}\\ &\leq&\nonumber \overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)+N_{n-1}(r,f)+\overline N\left(r,\frac{1}{\mathcal{G}-1}\right)+\overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq 2\right)\\ &&\nonumber \overline N(r,g)+3\bigg\{\overline N(r,f\mid\geq n-1)+\overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)\bigg\}+S(r,f)+S(r,g)\\ &\leq&\nonumber N_{n-1}(r,f)+N_{n-1}(r,g)+\frac{1}{n-1} N(r,f)+\frac{1}{n-1} N(r,g)\\ &&\nonumber+\bigg\{2\overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)+\overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq 2\right)\bigg\}\\ &&\nonumber+\bigg\{\overline N\left(r,\frac{1}{\mathcal{G}-1}\right)+2\overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)\bigg\}+S(r,f)+S(r,g). \end{eqnarray}\par Next, we see that \begin{eqnarray}\label{e3.4} && \frac{1}{2}\overline N\left(r,\frac{1}{\mathcal{F}}\mid\leq 1\right)+\overline N\left(r,\frac{1}{\mathcal{F}}\mid\geq 2\right)+2\overline N\left(r,\frac{1}{\mathcal{F}}\mid\geq n+1\right)\\ &\leq&\nonumber \frac{1}{2} N\left(r,\frac{1}{\mathcal{F}-1}\right)+\overline N\left(r,\frac{1}{\mathcal{F}-1}\geq n+1\right)\\ &\leq&\nonumber \left(\frac{1}{2}+\frac{1}{n+1}\right) N\left(r,\frac{1}{\mathcal{F}-1}\right)=\frac{n+3}{2(n+1)}N\left(r,\frac{1}{\mathcal{F}-1}\right). \end{eqnarray}\par Similarly, we get \begin{eqnarray}\label{e3.5} && \frac{1}{2}\overline N\left(r,\frac{1}{\mathcal{G}}\mid\leq 1\right)+\overline N\left(r,\frac{1}{\mathcal{G}}\mid\geq 2\right)+2 \overline N\left(r,\frac{1}{\mathcal{G}}\mid\geq n+1\right)\\ &\leq&\nonumber \frac{n+3}{2(n+1)}N\left(r,\frac{1}{\mathcal{G}-1}\right).\end{eqnarray}\par Therefore, using (\ref{e3.4}) and (\ref{e3.5}), we obtain from (\ref{e3.3}) \begin{eqnarray*} && (n+1)\bigg\{T(r,f)+T(r,g)\bigg\}\\ &\leq& \left(1+\frac{1}{n-1}+\frac{n(n+3)}{2(n+1)}\right)\bigg\{T(r,f)+T(r,g)\bigg\},\end{eqnarray*} which contradicts $n\geq 3$.\\
\noindent{\sc Case 2.} Suppose now that $\mathcal{H}\equiv 0$.\par Then, on integrating (\ref{e2.2}), we get \begin{eqnarray}\label{e3.6} \frac{1}{\mathcal{F}-1}\equiv\frac{\mathcal{A}}{\mathcal{G}-1}+\mathcal{B},\;\;\text{where}\;\; \mathcal{A}(\neq 0),\mathcal{B}\in\mathbb{C}.\end{eqnarray}\par From (\ref{e3.6}), we obtain in view of Lemma \ref{lem2.1} that \begin{eqnarray*} T(r,f)=T(r,g)+S(r,f)+S(r,g).\end{eqnarray*} \par Let $ \infty $ be an e.v.P. of $ f $. Then we must have $ \overline N(r,f)=S(r,f) $.\par From the proof of Lemma \ref{lem2.3}, we already have \begin{eqnarray}\label{e33.77} && \overline N\left(r,\frac{1}{f}\right)+\overline N\left(r,\frac{1}{f-\delta_{a,b}^n}\right)\\ &\leq& \nonumber \overline N\left(r,\frac{1}{\mathcal{F}-1}\mid\geq n+1\right)+\overline N(r,f\mid\geq n-1)+S(r,f)\\&\leq&\nonumber \frac{1}{n+1} N\left(r,\frac{1}{\mathcal{F}-1}\right)+\frac{1}{n-1}\overline N(r,f)+S(r,f)\\&\leq&\nonumber\frac{n}{n+1} T(r,f)+S(r,f). \end{eqnarray}\par By the \emph{Second Fundamental Theorem} and (\ref{e33.77}), we obtain \begin{eqnarray*} T(r,f)&\leq& \overline N\left(r,\frac{1}{f}\right)+\overline N\left(r,\frac{1}{f-\delta^n_{a,b}}\right)+\overline N(r,f)+S(r,f)\\&\leq& \frac{n}{n+1} T(r,f)+S(r,f),\end{eqnarray*} which is a contradiction.\par
Let $\infty$ not be an \textit{e.v.P.} of $f$. Then there must exist $z_0\in\mathbb{C}$ such that $f(z_0)=\infty$. Since $E_{f}(\mathcal{S}_3,n-2)=E_{g}(\mathcal{S}_3,n-2)$, we get from (\ref{e3.6}) that $\mathcal{B}=0$.\par Therefore, we have \begin{eqnarray*} \mathcal{A}(\mathcal{F}-1)\equiv (\mathcal{G}-1), \end{eqnarray*}\par i.e., \begin{eqnarray}\label{ee3.2} \mathcal{A}(af^n+bf^{n-1}+c)\equiv (ag^n+bg^{n-1}+c). \end{eqnarray}\par Since $E_{f}(\mathcal{S}_1,0)=E_{g}(\mathcal{S}_1,0)$, we have the following two possibilities. \begin{enumerate}
\item[(i)] $ E_{f}(0,0)=E_{g}(0,0)\;\;\text{and}\;\; E_{f}(\delta_{a,b}^n,0)=E_{g}(\delta_{a,b}^n,0),$ or
\item[(ii)] $ E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)\;\;\text{and}\;\; E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0).$
\end{enumerate}
\noindent{\sc Subcase 2.1.} Suppose $ E_{f}(0,0)=E_{g}(0,0)\;\;\text{and}\;\; E_{f}(\delta_{a,b}^n,0)=E_{g}(\delta_{a,b}^n,0).$ Then there exist $z_0, z_1\in\mathbb{C}$ such that $f(z_0)=0=g(z_0)$ and $f(z_1)=\delta_{a,b}^n=g(z_1)$. In both cases, we get from (\ref{ee3.2}) that $\mathcal{A}=1$. Then (\ref{ee3.2}) reduces to \begin{eqnarray}\label{ee3.3} af^n+bf^{n-1}\equiv ag^n+bg^{n-1},\;\;\text{i.e.,}\;\; f^{n-1}(af+b)\equiv g^{n-1}(ag+b).\end{eqnarray}\par Since $E_{f}(0,0)=E_{g}(0,0)$, from (\ref{ee3.3}) we get $E_{f}\left(-\displaystyle\frac{b}{a},0\right)=E_{g}\left(-\displaystyle\frac{b}{a},0\right)$. Again, since $E_{f}(\mathcal{S}_3,n-2)=E_{g}(\mathcal{S}_3,n-2)$, we see that \begin{eqnarray*} E_{f}(0,0)=E_{g}(0,0),\;\;\; E_{f}\left(\displaystyle\delta_{a,b}^n,0\right)=E_{g}\left(\displaystyle\delta_{a,b}^n,0\right),\\ E_{f}\left(-\frac{b}{a},0\right)=E_{g}\left(-\frac{b}{a},0\right),\;\;\;\; E_{f}(\infty,n-2)=E_{g}(\infty,n-2).\end{eqnarray*}\par Then by \emph{\sc Lemma \ref{lem2.4}}, one must have \begin{eqnarray}\label{ee3.4} f(z)=\displaystyle\frac{\alpha\;g(z)+\beta}{\gamma\; g(z)+\delta},\end{eqnarray}\par where $\alpha\delta-\beta\gamma\neq 0$.\par Therefore, equations (\ref{ee3.3}) and (\ref{ee3.4}) together give $f\equiv g$.\\
\noindent{\sc Subcase 2.2.} Suppose $ E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)\;\;\text{and}\;\; E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0).$\par We now discuss the following subcases.\\
\noindent{\sc Subcase 2.2.1.} Let both $E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)=\phi$ and $E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0)=\phi$. Since $E_{f}(\infty,n-2)=E_{g}(\infty,n-2)$, so we must have $E_{f^{*}}(1,n-2)=E_{g^{*}}(1,n-2)$, where $f^{*}(z)=\displaystyle\frac{f(z)}{f(z)-\delta_{a,b}^n}\neq 0, \infty$ and $g^{*}(z)=\displaystyle\frac{g(z)-\delta_{a,b}^n}{g(z)}\neq 0, \infty$. Again we note that \begin{eqnarray*}\delta_{2}(0;f^{*})+\delta_{2}(0;g^{*})+\delta_{2}(\infty,f^{*})+\delta_{2}(\infty,g^{*})=4>3.\end{eqnarray*}\par Therefore, by using \emph{\sc Lemma \ref{lem2.5}}, we have $f^{*}\equiv g^{*}$ or $f^{*}g^{*}\equiv 1$.\\
\noindent{\sc Subcase 2.2.1.1.} Suppose $f^{*}g^{*}\equiv 1$. Then we have $f\equiv g$.\\
\noindent{\sc Subcase 2.2.1.2.} Suppose $f^{*}\equiv g^{*}$. Then we have \begin{eqnarray}\label{ee3.5} f+g=\delta_{a,b}^n.\end{eqnarray}\par Thus from (\ref{ee3.2}) and (\ref{ee3.5}), we see that $f$ is a constant, which is absurd.\\
\noindent{\sc Subcase 2.2.2.} Let $E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)=\phi$ or $E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0)=\phi$.\\
\noindent{\sc Subcase 2.2.2.1.} Suppose $E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)=\phi$ and $E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0)\neq\phi$. This implies that there exists $z_0\in\mathbb{C}$, such that $f(z_0)=\delta_{a,b}^n$ and $g(z_0)=0$. So from (\ref{ee3.2}), we get \begin{eqnarray}\label{ee3.6} \mathcal{A}=\frac{a\left(\delta_{a,b}^n\right)^n+b\left(\delta_{a,b}^n\right)^{n-1}+c}{c}.\end{eqnarray}\par
It follows from (\ref{ee3.6}) that \begin{eqnarray}\label{e3.13} -a\left(\delta_{a,b}^n\right)^n-b\left(\delta_{a,b}^n\right)^{n-1}=c\left(1-\mathcal{A}\right). \end{eqnarray} Clearly, one root of the equation (\ref{e3.13}) is $ \delta^n_{a,b} $ of multiplicity $ 2 $. Equation (\ref{ee3.2}) can be written as \begin{eqnarray}\label{e3.14} af^n+bf^{n-1}+c-\frac{c}{\mathcal{A}}=\frac{1}{\mathcal{A}}\left(ag^n+bg^{n-1}\right).\end{eqnarray} We must have $ c-\frac{c}{\mathcal{A}}\neq c\left(1-\mathcal{A}\right), $ otherwise we would have $ \mathcal{A}=\pm 1 $, which is a contradiction as $c\neq\displaystyle -\frac{b}{2n}\left(\delta_{a,b}^n\right)^{n-1}$, $ \delta^n_{a,b}\neq -\frac{b}{a},\; 0 $.\par Now, equation (\ref{e3.14}) can be written as \begin{eqnarray}\label{e3.15} a\prod_{j=1}^{n}\left(f-\zeta_j\right)=\frac{1}{\mathcal{A}}g^{n-1}(ag+b), \end{eqnarray} where $ \zeta_j\; (j=1, 2, \ldots, n) $ are distinct roots of the polynomial $\displaystyle af^n+bf^{n-1}+c-\frac{c}{\mathcal{A}}. $ From (\ref{e3.15}), it is clear that $ 0 $ is an e.v.P. of $ g $, which contradicts our assumption $E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0)\neq\phi$.
\noindent{\sc Subcase 2.2.2.2.} Suppose $E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)\neq\phi$ and $E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0)=\phi$. This implies that there exists $z_1\in\mathbb{C}$ such that $f(z_1)=0$ and $g(z_1)=\delta_{a,b}^n$. Then from (\ref{ee3.2}), we get \begin{eqnarray}\label{ee3.9} \mathcal{A}=\frac{c}{a\left(\delta_{a,b}^n\right)^n+b\left(\delta_{a,b}^n\right)^{n-1}+c}.\end{eqnarray}\par Proceeding in exactly the same way as in \emph{\sc Subcase 2.2.2.1}, we get a contradiction.\\
\noindent{\sc Subcase 2.2.3.} Suppose both $E_{f}(0,0)=E_{g}(\delta_{a,b}^n,0)\neq\phi$ and $E_{f}(\delta_{a,b}^n,0)=E_{g}(0,0)\neq\phi$. Then we get \begin{eqnarray*} \mathcal{A}=\frac{a\left(\delta_{a,b}^n\right)^n+b\left(\delta_{a,b}^n\right)^{n-1}+c}{c}\;\;\text{and}\;\; \mathcal{A}=\frac{c}{a\left(\delta_{a,b}^n\right)^n+b\left(\delta_{a,b}^n\right)^{n-1}+c}.\end{eqnarray*}\par Thus we see that $\mathcal{A}=\pm 1$, which contradicts $c\neq\displaystyle -\frac{b}{2n}\left(\delta_{a,b}^n\right)^{n-1}.$\par This completes the proof. \end{proof}
\section{\sc Concluding remarks and a question} \par In this paper, we proved a result with the best possible cardinalities obtained so far for the three-set sharing problem, answering the question posed by \emph{Yi} \cite{Yi-SC-1994} without the help of any extra suppositions. We have also been able to relax the nature of sharing of the sets compared to other results mentioned in the introduction. However, we do not know whether the choice of the weights $(k_1,k_2,k_3)=(0,3,1)$ associated with the corresponding sets $ \mathcal{S}_j $, $ j=1, 2, 3 $, in our main result is the best possible. So we pose the following question for future investigation in this direction. \begin{ques}
Keeping all other conditions intact in \emph{Theorem \ref{t1.2}}, is it possible to relax the nature of sharing of the sets further? \end{ques} \noindent{\bf Acknowledgment} The author would like to thank the referee for his/her helpful suggestions and comments towards the improvement of this manuscript.
\end{document}
Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): bioinformatics
Automatic localization and identification of mitochondria in cellular electron cryo-tomography using faster-RCNN
Ran Li1 na1,
Xiangrui Zeng2 na1,
Stephanie E. Sigmund3,
Ruogu Lin2,
Bo Zhou4,
Chang Liu5,
Kaiwen Wang5,
Rui Jiang1,
Zachary Freyberg6,
Hairong Lv1 &
Min Xu2
BMC Bioinformatics volume 20, Article number: 132 (2019)
Cryo-electron tomography (cryo-ET) enables the 3D visualization of cellular organization in a near-native state, which plays important roles in the field of structural cell biology. However, due to the low signal-to-noise ratio (SNR), large volume and high content complexity within cells, it remains difficult and time-consuming to localize and identify different components in cellular cryo-ET. To automatically localize and recognize in situ cellular structures of interest captured by cryo-ET, we proposed a simple yet effective automatic image analysis approach based on Faster-RCNN.
Our experimental results were validated using in situ cryo-ET-imaged mitochondria data. Our experimental results show that our algorithm can accurately localize and identify important cellular structures on both the 2D tilt images and the reconstructed 2D slices of cryo-ET. When run on the mitochondria cryo-ET dataset, our algorithm achieved Average Precision >0.95. Moreover, our study demonstrated that our customized pre-processing steps can further improve the robustness of our model performance.
In this paper, we proposed an automatic cryo-ET image analysis algorithm for the localization and identification of different structures of interest in cells, which is the first Faster-RCNN based method for localizing a cellular organelle in cryo-ET images, and demonstrated its high accuracy and robustness on the detection and classification tasks of intracellular mitochondria. Furthermore, our approach can be easily applied to detection tasks of other cellular structures as well.
In cells, most biological processes are dominated by intricate molecular assemblies and networks. Analyzing the structural features and spatial organization of those assemblies is essential for understanding cellular functions. Recently, cellular cryo-Electron Tomography (cryo-ET) has been developed as an approach to obtain 3D visualization of cellular structures at submolecular resolution and in a close-to-native state [1]. Cryo-ET has been proven to be a powerful technique for structural biology in situ and has been successfully applied to the study of many important structures, including vaults [2], Integrin Linked Kinase (ILK) [3], and the nuclear pore complex (NPC) [4]. However, the systematic structural analysis of cellular components in cryo-ET images remains challenging due to several factors including low signal-to-noise ratio (SNR), limited projection range (leading to the missing wedge effect) and a crowded intracellular environment composed of complex intracellular structures.
Given the critical roles played by mitochondria within mammalian cells, and the distinctive morphology of these organelles, we chose to examine mitochondria imaged by in situ cryo-ET [5]. The 3D visualization of mitochondria can provide insights into mitochondrial structure and functionalities. Therefore, methodological improvements in the detection and localization of mitochondria within complex in situ cryo-ET datasets may significantly improve accuracy of detection of these organelles and directly impact further structural analyses.
Localization of the subcellular structures of interest can facilitate subsequent study of specific macromolecular components within the selected structures [6]. Such localization can be performed through image segmentation, which is usually performed manually or by specifically designed heuristics. Although some visualization tools have been developed to facilitate these approaches, manual segmentation in cryo-ET images still requires large amounts of repetitive labor from researchers, and the results are subjective. On the other hand, automatic methods are fast and can produce consistent results. Contour-based methods like Watershed yield great results when the image complexity is low, but appear to be sensitive to noise [7]. Threshold-based methods, which usually generate a mask according to the density threshold, can be applied to foreground-background segmentation but still have difficulty in identifying different cellular components [8]. Recently, segmentation methods focusing on specific types of structures including membranes, microtubules and filaments [9–11] have drawn a lot of attention. These methods perform well on specific cellular structures, but lack generality. To date, machine learning approaches to identify intracellular structures appear to be promising. Accordingly, we have developed unsupervised segmentation methods based on manually designed heuristic rules [12] and on clustering representative features [13]. Luengo et al. [14] proposed a supervised approach to classify each voxel with a trained classification model. However, both of these methods require manually designed features or rules, which might be time- and effort-consuming while having various limitations. Chen et al. developed another supervised segmentation method, taking advantage of the excellent capability of feature extraction of convolutional neural network (CNN) [15].
However, in this approach a separate CNN has to be trained for each type of structural feature, and the precise contours need to be manually annotated in the training data, which may not be trivial.
Our goal is to design a simple and generic method of automatic identification and localization of subcellular structures of interest within in situ cryo-ET images with weak annotations, which is different from existing segmentation-type methods and can greatly reduce the time and effort cost of detailed manual annotation. We aim to detect all objects of interest in an image and output the corresponding bounding boxes with class predictions simultaneously. Region-based convolutional neural network (RCNN) [16], which generates region proposals using Selective Search, extracts features from all the proposals after normalization with CNNs, and finally feeds the features to a classifier and a regression layer simultaneously to get both classification scores and bounding box coordinates as output, lays the foundation for our goal. Its latest incarnation, Faster RCNN [17], has achieved almost real-time detection with a high degree of accuracy. Faster RCNN based localization methods have been applied to biomedical imaging data such as breast mammography [18] and cellular fluorescence imaging [19].
In this work, we proposed an automatic identification and localization method based on Faster-RCNN, which is the first Faster-RCNN based method for localizing a cellular organelle in cryo-ET images. Our algorithm is trained and validated on 2D projection images of a cryo-ET tomogram for localization and classification tasks of mitochondria. Our experimental results show that our algorithm is able to robustly predict the object's bounding box with classification scores. Moreover, we extended our study to 3D tomogram slices and achieved accurate and robust performance.
Our mitochondria identification and localization method is comprised of two main parts: (1) pre-processing to improve the quality of samples, and (2) object detection using Faster-RCNN. The input of our system is 2D projection images of a tomogram, and the output includes coordinates of the bounding boxes of objects of interest, the class of each object and the probability of the classification. A flowchart of our method is shown in Fig. 1. In this section, we will describe each part of our system in detail.
Flowchart of our Faster-RCNN model. The denoised input image is fed into Conv layers to generate the feature map. Then, region proposal network proposes potential regions that contain object of interest. The proposal regions are passed to 1) classifier for classification, 2) regressor for refine the bounding box location
Since biological samples are sensitive to radiation damage, only low-dose electrons can be used for electron microscopy imaging [6]. Compared to normal images, electron tomography images are usually noisier and have lower contrast. To make the images suitable for subsequent processing, we first perform noise reduction and contrast enhancement. To reduce noise, considering the edge features are often important for subcellular structures, we chose Bilateral Filtering [20], a nonlinear filtering method that preserves the original edges as much as possible. Bilateral Filtering considers the effects of both spatial distance and gray scale distance, and can be implemented by combining two Gaussian Filters. To improve local contrast and the definition of details, we use Histogram Equalization, which can also balance the brightness of different images.
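As an illustration of the two preprocessing steps above, here is a minimal NumPy sketch (the brute-force bilateral loop and the helper names are ours for illustration; an optimized library implementation would be used in practice):

```python
import numpy as np

def bilateral_filter(img, sigma_d=100.0, sigma_r=1.2, radius=3):
    """Brute-force bilateral filter on a small grayscale image."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # weight from spatial distance (fixed for every window position)
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_d ** 2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # weight from gray-scale distance (depends on the center pixel)
            grayscale = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = spatial * grayscale
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def hist_equalize(img):
    """Histogram equalization for a non-constant uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

Filtering first and equalizing second follows the order discussed above; reversing it would amplify the noise before it can be suppressed.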
Object detection in 2D images
The main idea of our method is based on Faster RCNN [17], in which the four modules (feature extraction, proposal generation, RoI pooling, and classification and regression) are combined to form an end-to-end object detection system.
Feature extraction is the first step of our method. The input of the deep convolutional neural network is the image I, and the output is the extracted feature map. These features will be shared by subsequent modules. The basic feature extraction network in our model, Resnet-50, is based on [21]. He et al. proposed this deep residual learning method in 2015 to enable deeper networks to train properly. The architecture of our network is shown in Fig. 2. The original Resnet-50 network is split into two parts in our model: part one, including layers conv1 to conv4_x, is used for extraction of shared features, and part two, including layer conv5_x and upper layers, further extracts features of proposals for the final classification and regression. The implementation of the model refers to the work of Yann Henon in 2017 [22].
Detailed Architecture of the Faster-RCNN model. The basic feature extraction network Resnet-50 is split into two parts in our model: 1) layers conv1 to conv4_x is used for extraction of shared features (in the shared layers), 2) layer conv5_x and upper layers further extracts features of proposals for the final classification and regression (in the classifier). And the RPN implemented with three convolutional layers generates proposals from the shared feature map
The feature extraction network is followed by a region proposal network (RPN). A window of size n×n slides over the feature map, and at each location the features in the window are mapped to a low-dimensional vector, which will be used for object-background classification and proposal regression. At the same time, k region proposals centered on the sliding window in the original image are extracted according to k anchors, which are rectangular boxes of different shapes and sizes. Moreover, for each proposal, two probabilities for the classification and four parameters for the regression are obtained, composing the final 6k outputs of the classification layer and the regression layer. The sliding window, classification layer and regression layer are all implemented using convolutional neural networks. In practice, we chose k=9 with 3 scales of 128², 256², and 512² pixels and 3 aspect ratios of 1:1, 1:2, and 2:1 as the default in [17]. Non-maximum suppression (NMS) was adopted with the IoU threshold at 0.7, while the maximum number of proposals produced by the RPN was 300.
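The 9 default anchors can be generated directly from the scales and aspect ratios above. This is a small sketch (the helper name is ours; following the standard Faster R-CNN convention, each anchor keeps the area scale² while its width/height follow the chosen ratio):

```python
import numpy as np

def make_anchors(scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Return the k = 9 anchor (width, height) pairs used by the RPN.

    Each anchor has area scale**2 and aspect ratio w/h = ratio.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            anchors.append((s * np.sqrt(r), s / np.sqrt(r)))
    return np.array(anchors)
```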
Features of different scales are then integrated into feature maps of the same size (7×7 in our experiment) via the RoI pooling layer, so that the features can be used in the final fully connected classification and regression layers. A region proposal of any size, say h×w, is divided into a fixed number H×W of windows of size h/H×w/W. Then max pooling is performed and a fixed-size (H×W) feature map is obtained with the maximum of each window.
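The RoI pooling step described above can be sketched for a single-channel feature map as follows (a simplified illustration with hypothetical helper names; real implementations also handle the channel dimension and coordinate rounding):

```python
import numpy as np

def roi_max_pool(feat, roi, out_size=(7, 7)):
    """Max-pool one RoI of a single-channel feature map to a fixed size.

    feat: (H, W) array; roi: (x1, y1, x2, y2) in feature-map coordinates.
    """
    x1, y1, x2, y2 = roi
    region = feat[y1:y2, x1:x2]
    H, W = out_size
    # integer bin edges dividing the region into H x W windows
    ys = np.linspace(0, region.shape[0], H + 1).astype(int)
    xs = np.linspace(0, region.shape[1], W + 1).astype(int)
    out = np.empty(out_size, dtype=feat.dtype)
    for i in range(H):
        for j in range(W):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out
```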
To train the whole model end-to-end, a multi-task loss function is proposed as follows [17].
$$ L\left(p,u,t^{u},v\right)=L_{cls}(p,u)+\lambda[u\geq 1 ]L_{loc}\left(t^{u},v\right) $$
where u is the ground truth label of the proposal, and \(v=(v_{x},v_{y},v_{w},v_{h})\) represents the regression offset between the proposal and the ground truth. The output of the classification layer, \(p=(p_{0},p_{1},...,p_{K})\), represents the probabilities of the proposal belonging to each one of the K+1 classes and \(t^{u}=\left (t_{x}^{u},t_{y}^{u},t_{w}^{u},t_{h}^{u}\right)\) represents the predicted regression offset for a proposal with label u.
$$ L_{cls}(p,u)=-\log p_{u}. $$
And the loss function of the regression is a robust L1 loss as follows:
$$ L_{loc}\left(t^{u},v\right)=\sum_{i\in {x,y,w,h}}smooth_{L1}\left(t_{i}^{u}-v_{i}\right). $$
$$ smooth_{L_{1}}\left(x \right)=\left\{ \begin{array}{lr} 0.5x^{2}, & if \: |x|<1 \\ |x|-0.5, & otherwise \end{array} \right. $$
The hyperparameter λ is used to control the balance between the two losses and is set to λ=1 in our experiment. Similarly, the loss function of the RPN during training is also defined in this form. In the training process, the RPN with the shared layers is trained first and then the classifier is trained using proposals generated by the RPN, with the initial weights for both networks given by a pretrained model on ImageNet [17, 23].
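For concreteness, the multi-task loss for a single proposal can be written out directly in NumPy (a sketch of the formulas above with λ=1; the helper names are ours, `p` is the softmax output vector, and label 0 denotes background):

```python
import numpy as np

def smooth_l1(x):
    """Robust L1: 0.5 x^2 if |x| < 1, else |x| - 0.5."""
    x = np.abs(x)
    return np.where(x < 1, 0.5 * x ** 2, x - 0.5)

def detection_loss(p, u, t_u, v, lam=1.0):
    """Multi-task loss for one proposal.

    p: predicted class probabilities over K+1 classes (index 0 = background)
    u: ground-truth class label; t_u, v: predicted / ground-truth box offsets.
    """
    l_cls = -np.log(p[u])
    diff = np.asarray(t_u, dtype=float) - np.asarray(v, dtype=float)
    l_loc = smooth_l1(diff).sum()
    # the regression term only contributes for foreground proposals (u >= 1)
    return l_cls + lam * float(u >= 1) * l_loc
```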
Dataset and evaluation metrics
Data Acquisition: Tissue Culture: Rat INS-1E cells (gift of P. Maechler, Université de Genève) were cultured in RPMI 1640 medium supplemented with 2 mM L-glutamine (Life Technologies, Grand Island, NY), 5% heat-inactivated fetal bovine serum, 10 mM HEPES, 100 units/mL penicillin, 100 μg/mL streptomycin, 1 mM sodium pyruvate, and 50 μM β-Mercaptoethanol as described earlier (PMID: 14592952).
EM Grid Preparation: For cryo-ET imaging, INS-1E cells were plated onto either fibronectin-coated 200 mesh gold R2/1 Quantifoil grids or 200 mesh gold R2/2 London finder Quantifoil grids (Quantifoil Micro Tools GmbH, Jena, Germany) at a density of 2×105 cells/mL. Following 48 h incubation under conventional culture conditions in complete RPMI 1640 medium, grids were removed directly from culture medium and immediately plunge frozen in liquid ethane using a Vitrobot Mark IV (Thermo Fisher FEI, Hillsboro, OR).
Cryo-Electron Tomography: Tomographic tilt series for INS-1E cells were recorded on a FEI Polara F30 electron microscope (Thermo Fisher FEI) at 300kV with a tilt range of ±60° in 1.5° increments using the Gatan K2 Summit direct detector (Gatan, Inc.) in super-resolution mode at 2X binned to 2.6 Å/pixel; tilt series were acquired via SerialEM.
Datasets: We collected 9 cryo-ET tomograms (786 2D slices) containing mitochondria. 482 out of the 786 slices were selected and annotated manually via LabelImg [24]. Then, the 2D slices were randomly divided into training and testing sets with a ratio of 5:1. Details of our dataset are shown in Table 1.
Table 1 Cryo-ET dataset properties
Metrics: To evaluate the performance of our model, we mainly use two metrics from common object detection and segmentation evaluation: AP (average precision) and F1 score. The definitions are as follows:
$$ AP=\int_{0}^{1} P(R)\,d(R) $$
$$ F_{1} \ score=\frac{2P \times R}{P+R} $$
where P represents precision, which indicates the ratio of the true positives to all predicted positives; R represents recall, which indicates the ratio of the true positives to all true elements. Neither precision nor recall alone is sufficient to fully evaluate the prediction performance. Therefore, the F1 score defined by the weighted harmonic mean of precision and recall is commonly used in the case where both of them need to be high enough. AP, equivalent to the area under the precision-recall curve, may provide an overall evaluation of the model's performance at different precision/recall rates. As an object detection problem, the correctness of each sample prediction is not only related to classification, but also related to localization. The accuracy of localization is evaluated by IoU (Intersection over Union), which is defined as:
$$ IoU=\frac{S_{P} \cap S_{G}}{S_{P} \cup S_{G}} $$
where SP is the predicted bounding box and SG represents the ground truth, and IoU measures the degree of coincidence. In our experiments, different IoU thresholds (0.5, 0.6, 0.7, 0.8, and 0.9) are set, and those samples with mitochondria prediction labels and IoUs higher than the specific threshold are considered. The higher the IoU threshold, the higher the accuracy requirements for localization. Thus we can see the difference in the detection accuracy under different localization accuracy requirements, and judge the localization performance of our model. The precision, recall, F1 score and AP in our experiment are calculated.
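The evaluation quantities above reduce to a few lines of code. A minimal sketch (helper names are ours), with boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)
```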
Data preprocessing and model training
The 2D projection images we acquired from the original tomograms have low SNR and contrast, which interferes with subsequent identification and segmentation of intracellular features. Thus, the images are first denoised via a bilateral filter with σr=1.2 and σd=100, suppressing noise while retaining the original edge features as much as possible. This is followed by contrast enhancement via histogram equalization, which improves the resolution of previously indistinguishable details. Figure 3 shows an example of two images before and after preprocessing. The preprocessing methods and parameters were finally determined based on the single-image SNR estimated according to [25], gray-scale distribution histograms, and the visual effect of the images. Figure 4 shows the SNR of the same image with different σd and σr and the performance of different preprocessing schemes. We found that performing histogram equalization first increases the noise in the original image, and the contrast is reduced again after filtering, failing to achieve the desired effect. Furthermore, we found that Gaussian filtering used for noise reduction cannot preserve edges as well as bilateral filtering.
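As an illustrative sketch of the contrast-adjustment step (the denoising step would precede it, e.g. with OpenCV's `cv2.bilateralFilter`; this pure-numpy implementation is our own, not the study's code):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image (numpy uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution,
    # stretching the occupied gray range to the full 0..255 interval.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

A nearly uniform image with levels 0, 128 and 255 is remapped so that the middle level moves to 170, spreading the cumulative distribution evenly.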
a Original 2D projection images, b Images after noise reduction (Bilateral Filtering with σr=1.2 and σd=100), c Images after noise reduction and contrast adjustment
a Bilateral Filter + Histogram Equalization, b Gaussian Filter + Histogram Equalization, c Histogram Equalization + Bilateral Filter d SNR with different σd and σr
All the models in our experiments were trained and tested using Keras [26] with TensorFlow [27] as the back-end, using the Adam (Adaptive Moment Estimation) optimizer [28] with β1=0.9, β2=0.999 and a learning rate of 1×10−5 for both the RPN and the classifier. The 482 annotated slices were randomly split into a training set of 402 slices and a test set of 80 slices, a ratio of 5:1. The model was saved after an epoch only if its loss was lower than the best loss achieved so far.
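The 5:1 random split can be sketched as follows (illustrative only; the seed and helper name are our own assumptions):

```python
import random

def split_dataset(items, test_fraction=1 / 6, seed=42):
    """Randomly split annotated slices into training and test sets (~5:1)."""
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for reproducibility
    n_test = round(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]
```

Applied to 482 slices, this yields 402 training and 80 test slices, matching the split described above.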
Prediction performance
We trained the model on the training set and tested it on the test set. Figures 5 and 6 show the test results visually and quantitatively. In addition to the bounding box, our model also gives the most likely category of the object and the probability of it belonging to that category. In Fig. 5, the red bounding box is the manually annotated ground truth, and the blue box is predicted by the model. We notice that the predicted results and the ground truth are highly coincident, and even the regions that cannot be completely overlapped basically contain the entire mitochondria, which means that our system can achieve the goal of automatic identification and localization of mitochondria quite successfully. The area where the mitochondria are located can be separated from the surroundings by the bounding box, eliminating the influence of the surrounding environment as much as possible and making it possible to analyze the internal structures in more detail.
Examples of detection results: the red boxes are ground truth, and the blue ones are the predicted bounding boxes. Data source: a Tomogram: Unstim_20k_mito1 (projection image 63), b Tomogram: Unstim_20k_mito2 (projection image 49), c Tomogram: HighGluc_Mito2 (projection image 47), d Tomogram: CTL_Fibro_mito1 (projection image 44), e Tomogram: HighGluc_Mito1 (projection image 48), f Tomogram: CHX + Glucose Stimulation A2 (projection image 13)
Prediction performance: a AP with different IoU threshold, b Precision-Recall curve with IoU threshold=0.7
In Fig. 6, we plotted the precision-recall curve and calculated the APs at different IoU thresholds to measure the detection performance. We noticed that when the IoU threshold is set to 0.7 and below, the AP is close to 1, which means that almost all samples were correctly predicted, indicating that our system can successfully identify the mitochondria in the picture. However, when the IoU threshold is increased to 0.9, the AP drops sharply to around 0.4, which indicates that our system still has some deficiencies in the accuracy of localization. The overlap between the predicted area and the ground truth area can be further improved, which can be an important aspect of our future work. The precision-recall curve for an IoU threshold of 0.7 is also given in Fig. 6. When the IoU threshold is 0.7, all positive samples can be correctly predicted as long as the precision requirement is not higher than 0.9; that is, all mitochondria can be found under that condition. Even with a precision of 1, which means all samples predicted to be positive must be correct, 70% of the mitochondria can still be detected.
In addition, we compared the effect of preprocessing on the prediction results. It is noted that no matter how the IoU threshold is set, the AP value of the model without preprocessing is significantly lower than that of the model containing the preprocessing, which again shows that preprocessing is a necessary step for the overall system. Especially when the IoU threshold is 0.8, the system with or without preprocessing shows a great difference in the average precision of prediction, which indicates that the main contribution of preprocessing to the system is to further improve the accuracy of localization. For the model that does not include preprocessing, the predicted bounding box that has an IoU no less than 0.8 with ground truth is quite rare, and the average precision calculated in this situation is only 0.3. After the preprocessing step, it becomes common that IoU of the predicted bounding box and the ground truth reaches 0.8, resulting in an increase of the average precision to 0.95 and higher.
Source of error
In order to further analyze the performance of our method, we separately analyzed the prediction results of the system on 9 different in situ cryo-ET tomograms (Table 2), and studied the impact of different factors including the quality of the original image and the intactness of the mitochondria. The F1 score and AP are still calculated at an IoU threshold of 0.7. In most tomograms, our system shows high accuracy, consistent with the overall results. However, we also found that in INS_21_g3_t10, our system could not accurately detect mitochondria. Therefore, we analyzed the projection images from INS_21_g3_t10 (Fig. 7). We noticed that in all the 2D projection images from that tomogram, the mitochondria included are too small and their structure appears incomplete, especially the internal structure, which is basically submerged in noise and hard to identify. Even after noise reduction and contrast adjustment, the details of the mitochondria in the image are still too blurred, causing strong interference in the extraction of features. We also calculated the SNR of the two-dimensional projection images in INS_21_g3_t10, which is approximately 0.06 on average. For reference, the SNR of the original projection image from Unstim_20k_mito1 analyzed in Fig. 4 is 0.12, significantly higher than the images in INS_21_g3_t10. It is also worth noting that in Unstim_20k_mito1, the subject of the projection images is the mitochondria we need to detect, while in INS_21_g3_t10, the mitochondria occupy only a very small part of the image. As a result, other components of the image are counted as signal although they may not be useful for our detection task, making the ratio of effective information to noise even lower than 0.06. This may explain why the detection performance on this tomogram is particularly unsatisfactory.
An example of projection images from tomogram INS_21_g3_t10 (in which the mitochondria is hard to detect): a Original image, b Image after noise reduction and contrast adjustment, c Projection image from M2236_Fibro_mito1
Table 2 Prediction results on different tomograms
In order to better study the influence of different tomograms on the accuracy of localization, the mean Intersection over Union (mIoU) was calculated for each tomogram. It can be noted that, on average, mIoU is higher in the tomograms that contain complete mitochondria, that is, the localization accuracy is higher, although the highest mIoU comes from a tomogram containing incomplete mitochondria. We analyzed the characteristics of this tomogram and found that it is the only one where the mitochondria do not appear circular or nearly circular, but instead possess a slanted strip shape (also shown in Fig. 7). Therefore, when such a mitochondrion is marked with a rectangular box, the box occupies a larger area and contains more non-mitochondrial regions, which may make the prediction results coincide with the ground truth more easily. In general, therefore, we can still conclude that complete mitochondria are more easily localized accurately. This is also consistent with our intuition that complete mitochondria have an intact bilayer-membrane outline approximating a circular shape, which provides a powerful reference for determining their specific boundaries. In fact, the tomogram with the best F1 score and AP also contains intact mitochondria. Therefore, the integrity of the mitochondria has a certain impact on the detection results of the system.
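Per-tomogram mIoU can be computed by simple grouping (an illustrative sketch with our own function name and data layout):

```python
def mean_iou_per_tomogram(records):
    """records: iterable of (tomogram_name, iou) pairs; returns name -> mIoU."""
    sums, counts = {}, {}
    for name, value in records:
        sums[name] = sums.get(name, 0.0) + value
        counts[name] = counts.get(name, 0) + 1
    # Average the per-detection IoUs within each tomogram.
    return {name: sums[name] / counts[name] for name in sums}
```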
Prediction on tomogram slices
The ultimate goal is to detect mitochondria in 3D tomograms. The model trained on 2D projection images can be directly applied to tomogram slices to generate the output. Like the projection images, the slices were first preprocessed through bilateral filtering and histogram equalization with the same parameters, and then tested by the Faster-RCNN model. The model is applied to the tomogram slice by slice, and the output includes all the bounding boxes of mitochondria in the slice with a classification score for each box. It takes only a few seconds per slice when tested on CPUs.
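The slice-by-slice application can be sketched as below; `detect_fn` stands in for the trained Faster-RCNN predictor and is a hypothetical interface of our own, not the study's actual code:

```python
def detect_in_tomogram(slices, detect_fn, score_threshold=0.5):
    """Apply a 2D detector slice by slice to a reconstructed tomogram.

    detect_fn(image) is assumed to return (boxes, scores) for one slice;
    boxes above the score threshold are kept with their slice index z.
    """
    detections = []
    for z, image in enumerate(slices):
        boxes, scores = detect_fn(image)
        for box, score in zip(boxes, scores):
            if score >= score_threshold:
                detections.append({"z": z, "box": box, "score": score})
    return detections
```

Collecting the slice index alongside each box makes it straightforward to assemble the per-slice detections into a 3D localization afterwards.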
As shown in Fig. 8, the mitochondria in tomogram slices can be successfully identified and localized, although the accuracy of localization may be slightly reduced due to higher noise compared to 2D projection images. Therefore, it is only necessary to perform annotation and training on the 2D projection images, which greatly reduces the computational cost, and we can detect mitochondria in 3D tomograms with a tolerable error. The possibility of extending the approach to different organelles is also retained in the 3D case.
Detection results on slices of reconstructed tomograms. Data source: a Tomogram: Unstim_20k_mito_1 (slice 26), b Tomogram: M2236_truemito3 (slice 97), c Tomogram: HighGluc_Mito1 (slice 58)
In this paper, we proposed an automatic cryo-ET image analysis algorithm for localization and identification of different structures of interest in cells. To the best of our knowledge, this is the first work to apply the Faster-RCNN model to cryo-ET data, and it demonstrated high accuracy (AP>0.95 at IoU>0.7) and robustness in the detection and classification of intracellular mitochondria. Furthermore, our algorithm can be generalized to detect multiple cellular components using the same Faster-RCNN model, provided annotations of multiple classes of cellular components are supplied. For future work, we will further improve the localization accuracy by collecting more data, and we will explore the effects of different network structures to enhance the model.
Adaptive moment estimation
Average precision
cryo-ET:
Cryo-electron tomography
ILK:
Integrin linked kinase
IoU:
Intersection over union
mIoU:
Mean intersection over union
NMS:
Non-maximum suppression
NPC:
Nuclear pore complex
SNR:
Signal-to-noise ratio
RCNN:
Region-based convolutional neural network
RPN:
Region proposal network
Irobalieva RN, Martins B, Medalia O. Cellular structural biology as revealed by cryo-electron tomography. J Cell Sci. 2016; 129(3):469–76.
Woodward CL, Mendonċa LM, Jensen GJ. Direct visualization of vaults within intact cells by electron cryo-tomography. Cell Mol Life Sci. 2015; 72(17):3401–9.
Elad N, Volberg T, Patla I, Hirschfeld-Warneken V, Grashoff C, Spatz JP, et al.The role of integrin-linked kinase in the molecular architecture of focal adhesions. J Cell Sci. 2013; 126(18):4099–107.
Grossman E, Medalia O, Zwerger M. Functional Architecture of the Nuclear Pore Complex. Annu Rev Biophys. 2012; 41(1):557–584. PMID:22577827.
Berdanier CD. Mitochondria in health and disease.Boca Raton: CRC Press; 2005.
Asano S, Engel BD, Baumeister W. In Situ Cryo-Electron Tomography: A Post-Reductionist Approach to Structural Biology. J Mol Biol. 2016; 428(2, Part A):332–343. Study of biomolecules and biological systems: Proteins.
Volkmann N. A novel three-dimensional variant of the watershed transform for segmentation of electron density maps. J Struct Biol. 2002; 138(1):123–9.
Cyrklaff M, Risco C, Fernández JJ, Jiménez MV, Estéban M, Baumeister W, et al.Cryo-electron tomography of vaccinia virus. Proc Natl Acad Sci. 2005; 102(8):2772–7.
Martinez-Sanchez A, Garcia I, Fernandez JJ. A differential structure approach to membrane segmentation in electron tomography. J Struct Biol. 2011; 175(3):372–83.
Sandberg K, Brega M. Segmentation of thin structures in electron micrographs using orientation fields. J Struct Biol. 2007; 157(2):403–15.
Loss LA, Bebis G, Chang H, Auer M, Sarkar P, Parvin B. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography. In: Proceedings of the ACM Conference on Bioinformatics, Computational Biology and Biomedicine. BCB '12. New York: ACM: 2012. p. 170–177.
Xu M, Alber F. Automated target segmentation and real space fast alignment methods for high-throughput classification and averaging of crowded cryo-electron subtomograms. Bioinformatics. 2013; 29(13):i274–82.
Zeng X, Leung MR, Zeev-Ben-Mordehai T, Xu M. A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation. J Struct Biol. 2018; 202(2):150–60.
Luengo I, Darrow MC, Spink MC, Sun Y, Dai W, He CY, et al.SuRVoS: Super-Region Volume Segmentation workbench. J Struct Biol. 2017; 198(1):43–53.
Chen M, Dai W, Sun SY, et al.Convolutional neural Networks for automated annotation of cellular cryo-electron tomograms. Nat Methods. 2017; 14(10):983–985.
Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE: 2013. p. 580–587.
Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks In: Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R, editors. Advances in Neural Information Processing Systems 28. Red Hook: Curran Associates, Inc.: 2015. p. 91–99.
Xu M, Papageorgiou DP, Abidi SZ, Dao M, Zhao H, Karniadakis GE. A deep convolutional neural network for classification of red blood cells in sickle cell anemia. PLoS Comput Biol. 2017; 13(10):e1005746.
Wang W, Taft DA, Chen YJ, Zhang J, Wallace CT, Xu M, et al.Learn to segment single cells with deep distance estimator and deep cell detector. arXiv preprint arXiv:180310829. 2018.
Tomasi C, Manduchi R. Bilateral filtering for gray and color images. In: Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271).Bombay: IEEE: 1998. p. 839–846.
He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE: 2016. p. 770–778.
Keras-frcnn HY. GitHub. 2017. https://github.com/yhenon/keras-frcnn. Accessed 25 July 2018.
Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR09.Miami: IEEE: 2009.
Tzutalin. LabelImg. GitHub. 2015. https://github.com/tzutalin/labelImg. Accessed 05 Apr 2018.
Thong JT, Sim KS, Phang JC. Single-image signal-to-noise ratio estimation. Scanning; 23(5):328–336.
Chollet F, et al.Keras. GitHub. 2015. https://github.com/fchollet/keras. Accessed 25 July 2018.
Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al.TensorFlow: A system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). Berkeley: USENIX Association: 2016. p. 265–283.
Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:14126980. 2014.
This work was supported in part by U.S. National Institutes of Health (NIH) grant P41 GM103712. MX acknowledges support of the Samuel and Emma Winters Foundation. ZF acknowledges support from the U.S. Department of Defense (PR141292) and the John F. and Nancy A. Emmerling Fund of The Pittsburgh Foundation. This work was partially supported by the National Key Research and Development Program of China (No. 2018YFC0910404), the National Natural Science Foundation of China (Nos. 61873141, 61721003, 61573207, U1736210, 71871019 and 71471016), and the Tsinghua-Fuzhou Institute for Data Technology. RJ is a RONG professor at the Institute for Data Science, Tsinghua University.
Publication charge for this work has been funded by the National Key Research and Development Program of China (No. 2018YFC0910404), the National Natural Science Foundation of China (Nos. 61873141, 61721003, 61573207, U1736210, 71871019 and 71471016), and the Tsinghua-Fuzhou Institute for Data Technology. RJ is a RONG professor at the Institute for Data Science, Tsinghua University. This work was supported in part by U.S. National Institutes of Health (NIH) grant P41 GM103712. MX acknowledges support of the Samuel and Emma Winters Foundation. ZF acknowledges support from the U.S. Department of Defense (PR141292) and the John F. and Nancy A. Emmerling Fund of The Pittsburgh Foundation.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 20 Supplement 3, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-3.
Ran Li and Xiangrui Zeng contributed equally to this work.
Department of Automation, Tsinghua University, Beijing, China
Ran Li, Rui Jiang & Hairong Lv
Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, USA
Xiangrui Zeng, Ruogu Lin & Min Xu
Department of Cellular, Molecular and Biophysical Studies, Columbia University Medical Center, New York, NY, USA
Stephanie E. Sigmund
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
Bo Zhou
Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
Chang Liu & Kaiwen Wang
Departments of Psychiatry and Cell Biology, University of Pittsburgh, Pittsburgh, PA, USA
Zachary Freyberg
Ran Li
Xiangrui Zeng
Ruogu Lin
Kaiwen Wang
Rui Jiang
Hairong Lv
Min Xu
MX, HL and RJ provided guidance and planning for this project. ZF provided the data used in the current study and offered guidance on the data. Ran Li and XZ proposed and implemented the methods, analysed the results and wrote the manuscript. SS, Ruogu Lin, BZ, CL and KW helped with writing and revising the manuscript. All authors read and approved the final manuscript.
Correspondence to Zachary Freyberg, Hairong Lv or Min Xu.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Li, R., Zeng, X., Sigmund, S. et al. Automatic localization and identification of mitochondria in cellular electron cryo-tomography using faster-RCNN. BMC Bioinformatics 20, 132 (2019). https://doi.org/10.1186/s12859-019-2650-7
Cryo-ET
Faster-RCNN
Cellular structure detection
Biomedical image analysis | CommonCrawl |
Improving the stability of glutamate fermentation by Corynebacterium glutamicum via supplementing sorbitol or glycerol
Yan Cao1,
Zhen-ni He2,
Zhong-ping Shi2 &
Mpofu Enock3
Bioresources and Bioprocessing volume 2, Article number: 9 (2015) Cite this article
Corynebacterium glutamicum is widely used in glutamate fermentation. The fermentation characteristics of the strain sometimes vary. These variations may reduce the strain's ability to resist environmental changes and to synthesize glutamate, resulting in abnormal glutamate fermentations.
In the abnormal glutamate fermentations, glutamate accumulation stopped after glucose feeding and the final glutamate concentration was at a lower level (50 to 60 g/L). The r_NAD+/r_NADH ratio was lower than that in the normal batch, which was reflected by a lower oxidation-reduction potential (ORP) value. The abnormal fermentation performance was improved when glucose was co-fed with sorbitol/glycerol at a weight ratio of 5:1 or when 10 to 15 g/L of sorbitol/glycerol was added to the initial medium. Under these conditions, glutamate synthesis continued after substrate(s) feeding and the final glutamate concentration was restored to normal levels (≥72 g/L). The r_NAD+/r_NADH ratio, ORP, and pyruvate dehydrogenase (PDH), isocitrate dehydrogenase (ICDH), and cytochrome c oxidase (CcO) activities were maintained at higher levels.
Sorbitol and glycerol were not used as carbon sources for the fermentation. They were considered as effective protective agents to increase cells' resistance ability against environmental changes and maintain key enzymes activities.
L-Glutamate is mainly used as a flavor enhancer in the food industry and as a nutrient in the pharmaceutical industry. The annual production has exceeded 2.2 million tons by fermentation with Corynebacterium glutamicum [1]. The biosynthetic pathway of glutamate includes complex enzymatic reactions, as shown in Figure 1a. The enzymes effectively convert substrate (such as glucose) into glutamate only when they are well coordinated with the coenzymes NAD+ or NADH. Intracellular levels of NAD+ and NADH significantly affect the catalytic efficiency of the enzymes [2]. The ratio of NAD+/NADH in vivo is a key factor affecting energy transfer and the redox state of the cells, and the optimal value at different stages during glutamate fermentation usually varies [3]. The metabolic flux distribution can be altered by variation of the NAD+/NADH ratio or the r_NAD+/r_NADH ratio in glutamate fermentation [4].
Simplified metabolic pathway of glutamate synthesis and generation/consumption of NAD + and NADH. (a) Metabolic pathway of glutamate synthesis; (b) simplified pathway of generation/consumption of NAD+ and NADH. GLC, glucose; PYR, pyruvate; Ac-CoA, acetyl coenzyme A; ICIT, isocitrate; α-KG, α-ketoglutarate; SUC, succinate; MAL, malate; OAA, oxaloacetate; PDH, pyruvate dehydrogenase; PC, pyruvate carboxylase; ICDH, isocitrate dehydrogrnase; ODHC, α-ketoglutarate dehydrogenase; GDH, glutamate dehydrogenase; CcO, cytochrome c oxidase.
NAD+/NADH ratio is indirectly reflected by oxidation-reduction potential (ORP) [5,6], which represented the redox state of the cells. The optimal ORP range corresponding to different fermentation processes is different. For example, maximum lysine yield was obtained when ORP was controlled between the range of −230 and −210 mV, while the preferable ORP range was −275 to −225 mV in homoserine and valine fermentations [6,7]. The redox state of cells is changed if certain auxiliary substances (such as sorbitol and glycerol) are supplemented and intracellular NAD+/NADH ratio can be varied correspondingly [8]. These auxiliary substrates are usually non-repressive carbon sources. They can protect cells against stress in their living environment, enhance cell viability, and reduce the metabolic burden [9–12]. It was reported that the production of alkaline polygalacturonate lyase and lipase increased by 1.85-fold and 8.7-fold, respectively, when the strategy of methanol/sorbitol co-feeding was adopted [10,13]. Arruda and Felipe found that xylitol productivity could be increased by 35% when glycerol was added in the medium [14]. Therefore, fermentation with mixed carbon sources was considered as an effective way to enhance the targeted metabolite productions.
C. glutamicum used in industry is usually stored at 4°C for a short time and replaced regularly. In this way, production fluctuation resulting from strain change can be avoided. However, sometimes the fermentation characteristics of the strain vary, resulting in decreased glutamate synthesis ability and resistance to environmental changes. In such a case, fermentation performance becomes abnormal and glutamate production also ends at a very low level. Glutamate production fluctuated in a large range, and fermentation stability decreased greatly. Frequent rejuvenation is a common solution to this problem. But it is a costly, time-consuming, and troublesome procedure. Furthermore, glutamate is a low value-added product. It is more economical to adopt a simple way with low operation cost to maintain the fermentative stability. The strategy of feeding mixed substrates has been applied in other fermentations. This was an effective method to decrease cell mortality, maintain the enzyme activity, and promote targeted metabolite production [9–11]. Similar studies with regard to glutamate fermentation are also important, but few have been reported. In this study, sorbitol or glycerol was either co-fed with glucose or added in the initial medium, aiming at protecting the cells against environmental change/stress and stabilizing glutamate productions. Meanwhile, the theoretical mechanism was interpreted. The results gained in this study will provide some useful information and reference to the glutamate fermentation industry in terms of stabilizing the glutamate production.
Strain and culture condition
C. glutamicum ATCC13032 was used in this study. The seed microorganism was grown in a shaker at 32°C and 200 r/min for 8 to 10 h in liquid medium containing (in g/L) glucose 25, K2HPO4 1.5, MgSO4 0.6, MnSO4 0.005, FeSO4 0.005, corn slurry 25, and urea 2.5 (separately sterilized). Initial pH was adjusted to 7.0 to 7.2. The medium for jar fermentation contained (in g/L) glucose 140, K2HPO4 1.0, MgSO4 0.6, MnSO4 0.002, FeSO4 0.002, thiamine 5.0 × 10−5, corn slurry 15, and urea 3.0 (separately sterilized).
The fed-batch fermentation was implemented in a 5-L fermentor (BIOTECH-5BG, Baoxing Co., Shanghai, China) equipped with on-line DO/pH/ORP electrodes. Initial medium volume was 3 L, and air aeration rate was 1.33 vvm. The temperature was maintained at 32°C during the entire fermentation period (about 34 to 36 h). pH was maintained at 7.0 to 7.2 by automatically pumping in 25% (w/v) ammonia water. Dissolved oxygen (DO) was controlled at 20% of air saturation by manually adjusting agitation rate. Concentrated glucose (50%, w/v) was fed when glucose concentration was lower than 20 g/L. Sorbitol or glycerol was supplemented by the following two methods:
Method #1: 50% (w/v) sorbitol or glycerol solution was co-fed with the addition of concentrated glucose solution. The feeding ratio of 1:5 (w/w) sorbitol/glycerol versus glucose was applied.
Method #2: 5 to 15 g/L sorbitol or glycerol was added into the initial medium before inoculation. Only glucose was fed when glucose concentration was lower than 20 g/L.
Analytical and measurement methods
Cell concentration was assayed by spectrophotometer at 620 nm (OD620). Glucose and glutamate concentrations were measured by a biosensor (SBA-40C, Shandong Science Academy, Jinan, China). The concentrations of sorbitol and glycerol were analyzed by HPLC (Hitachi Chromaster Organizer, Hitachi, Ltd, Chiyoda-ku, Japan) equipped with an ion exclusion column (Aminex HPX-87H, 300 mm × 7.8 mm, Bio-Rad, Hercules, CA, USA) and a differential refractive index detector at 30°C. The mobile phase was 0.005 mol/L H2SO4 at a flow rate of 0.6 mL/min [10]. Two electronic balances (JA1102, Haikang Co., Shanghai, China) were connected to the computer and used to monitor the feeding amount of glucose and sorbitol or glycerol solution. O2 and CO2 partial pressure in exhaust gas were on-line measured by a gas analyzer (LKM2000, Lokas Co., Daejeon, Korea). CO2 evolution rate (CER), O2 uptake rate (OUR), and respiratory quotient (RQ) were then calculated by standard formula.
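One common form of these standard formulas (a hedged sketch; the exact correction used in the study is not specified, and the variable names are our own) computes OUR and CER from inlet/outlet gas mole fractions with an inert-gas (N2) balance, and RQ as their ratio:

```python
def gas_rates(flow_mol, volume_l, y_o2_in, y_o2_out, y_co2_in, y_co2_out):
    """OUR and CER (mmol/L/h) and RQ from gas-phase measurements.

    flow_mol: inlet molar gas flow (mol/h); volume_l: broth volume (L).
    The outlet molar flow is corrected via the inert balance, assuming
    inert fraction = 1 - y_O2 - y_CO2 on each side.
    """
    inert_in = 1.0 - y_o2_in - y_co2_in
    inert_out = 1.0 - y_o2_out - y_co2_out
    flow_out = flow_mol * inert_in / inert_out          # inert gas is conserved
    our = (flow_mol * y_o2_in - flow_out * y_o2_out) * 1000.0 / volume_l
    cer = (flow_out * y_co2_out - flow_mol * y_co2_in) * 1000.0 / volume_l
    return our, cer, cer / our
```

With 1 mol/h of air into 1 L of broth, an O2 drop from 0.21 to 0.18 and a CO2 rise from 0 to 0.03 give OUR = CER = 30 mmol/L/h and RQ = 1.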
The activities of pyruvate dehydrogenase (PDH) and isocitrate dehydrogenase (ICDH) were analyzed by the methods reported [15,16]. Cytochrome c oxidase (CcO) was assayed using the kit for bacteria (Genmed Scientifics Inc., Wilmington, DE, USA). The enzyme activity was expressed as U/mg-DCW, where 1 U was the quantity of dry cell converting 1 μmol NAD+ per minute. The relative enzymatic activity (REA) was used for comparison and interpretation. REA before feeding glucose/mixed carbon sources (18 h) was set as the unit (1), and REA after feeding glucose/mixed carbon sources (26 h) was described by Eq. (1).
$$ {\mathrm{REA}}_{\mathrm{aft}}(k)=\frac{E_{\mathrm{aft}}(k)}{E_{\mathrm{bef}}(k)} $$
Where E_bef(k) and E_aft(k) refer to the activities of the k-th enzyme (PDH, ICDH, and CcO) before and after glucose/mixed carbon source feeding.
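A minimal sketch of the REA computation of Eq. (1), using hypothetical activity values:

```python
def relative_enzyme_activity(e_before, e_after):
    """REA per Eq. (1): each enzyme's activity after feeding, normalized
    to its activity before feeding (set as the unit)."""
    return {k: e_after[k] / e_before[k] for k in e_before}
```

For example, a PDH activity that halves after feeding gives REA = 0.5, while an ICDH activity that rises by 25% gives REA = 1.25.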
Modeling and calculation of the r_NAD+/r_NADH ratio
The glutamate synthesis pathway was depicted according to the map reported [16], shown in Figure 1a. NADH and NAD+ were generated or consumed to run the entire fermentation, and they were closely associated with glucose consumption, glutamate synthesis, CO2 release, and O2 consumption rates. Therefore, the generation or consumption rates of NADH and NAD+ could be determined by a couple of measurable reaction rates, such as glucose consumption rate (r GLC), glutamate formation rate (r GLU), CER, and OUR. The metabolic pathways of NADH and NAD+ were simplified as Figure 1b based on the following assumptions:
The main products were glutamate and CO2, because the concentrations of other byproducts (lactate, acetate, and other amino acids) were very low.
Pentose phosphate (PP) pathway was ignored because it was not related to generation/consumption of NADH or NAD+.
The metabolic flux into the two reaction branches at pyruvate node followed the ideal condition, namely r 2 = r 3 in Figure 1b [16]. Therefore, the glyoxylate shuttle was ignored.
The intermediate carbon metabolites were in pseudo-steady state and their net accumulation was 0, but this did not apply to NADH and NAD+. The r_NAD+/r_NADH ratio was closely and positively associated with the NAD+/NADH ratio, while r_NAD+ actually represented the NADH consumption rate (r(C)_NADH) and r_NADH the NADH formation rate (r(F)_NADH). The NADH formation rate could differ from its consumption rate, which led to variation in the r_NAD+/r_NADH ratio.
Glutamate fermentation was a non-growth-associated process; the cell concentration in the production phase basically stayed at a constant level or declined slightly. So, we used the volume reaction rate in place of the specific reaction rate for convenience.
The simplified metabolic pathway (Figure 1b) contains nine reactions shown in the Appendix, which covers the basic reactions occurring in EMP pathway, tricarboxylic acid (TCA) cycle, CO2 fixing reaction, respiratory chain, and glutamate synthesis. According to the assumptions and simplifications above, the rates of all the reactions were coupled as follows.
$$ {r}_1={r}_{\mathrm{GLC}} $$
$$ {r}_2={r}_3={r}_4={r}_5=0.5{r}_1 $$
$$ {r}_6={r}_7={r}_5-{r}_8 $$
$$ {r}_8={r}_{\mathrm{GLU}} $$
$$ {r}_9 = 2{r}_{O2}=2\ \mathrm{OUR} $$
The generation/consumption rates of NADH and NAD+ as well as the r_NAD+/r_NADH ratio at a specified time t could be calculated by Equations (7) to (16).
$$ {r}_{\mathrm{NADH}}^{F1}(t)=2{r}_1(t)=2{r}_{\mathrm{GLC}}(t) $$
$$ {r}_{\mathrm{NADH}}^{F2}(t)={r}_2(t)={r}_{{\mathrm{CO}}_2}^{F1}(t) $$
$$ {r}_{\mathrm{NADH}}^{F3}(t)={r}_5(t)={r}_{{\mathrm{CO}}_2}^{F2}(t) $$
$$ {r}_{\mathrm{NADH}}^{F4}(t)={r}_6(t)={r}_{{\mathrm{CO}}_2}^{F3}(t) $$
$$ {r}_{\mathrm{NADH}}^{F5}(t)={r}_7(t)={r}_6(t) $$
$$ {r}_{{\mathrm{NAD}}^{+}}^{F1}(t)={r}_{\mathrm{GLU}}(t) $$
$$ {r}_{{\mathrm{NAD}}^{+}}^{F2}(t)={r}_9(t)=2{r}_{O_2}^U(t)=2\mathrm{OUR}(t) $$
$$ {r}_{{\mathrm{CO}}_2}^U(t)={r}_3(t) $$
$$ \mathrm{C}\mathrm{E}\mathrm{R}(t)={r}_{{\mathrm{CO}}_2}^{F1}(t)+{r}_{{\mathrm{CO}}_2}^{F2}(t)+{r}_{{\mathrm{CO}}_2}^{F3}(t)-{r}_{{\mathrm{CO}}_2}^U(t) $$
$$ \begin{aligned}\frac{r_{{\mathrm{NAD}}^{+}}}{r_{\mathrm{NADH}}}(t)&=\frac{r_{{\mathrm{NAD}}^{+}}^{F}(t)}{r_{\mathrm{NADH}}^{F}(t)}=\frac{r_{{\mathrm{NAD}}^{+}}^{F1}(t)+{r}_{{\mathrm{NAD}}^{+}}^{F2}(t)}{r_{\mathrm{NADH}}^{F1}(t)+{r}_{\mathrm{NADH}}^{F2}(t)+{r}_{\mathrm{NADH}}^{F3}(t)+{r}_{\mathrm{NADH}}^{F4}(t)+{r}_{\mathrm{NADH}}^{F5}(t)}\\ &=\frac{r_{\mathrm{GLU}}(t)+2\,\mathrm{OUR}(t)}{2{r}_{\mathrm{GLC}}(t)+{r}_{{\mathrm{CO}}_2}^{F1}(t)+{r}_{{\mathrm{CO}}_2}^{F2}(t)+{r}_{{\mathrm{CO}}_2}^{F3}(t)+{r}_7}\\ &=\frac{r_{\mathrm{GLU}}(t)+2\,\mathrm{OUR}(t)}{2{r}_{\mathrm{GLC}}(t)+\mathrm{CER}(t)+{r}_3+{r}_7}=\frac{r_{\mathrm{GLU}}(t)+2\,\mathrm{OUR}(t)}{2{r}_{\mathrm{GLC}}(t)+\mathrm{CER}(t)+2{r}_3-{r}_8}\\ &=\frac{r_{\mathrm{GLU}}(t)+2\,\mathrm{OUR}(t)}{3{r}_{\mathrm{GLC}}(t)+\mathrm{CER}(t)-{r}_{\mathrm{GLU}}(t)}\end{aligned} $$
where r F NAD +(t) is the NAD+ formation rate (mmol/L/h), r F NADH(t) the NADH formation rate (mmol/L/h), r U O2(t) the O2 uptake rate (mmol/L/h), r F CO2(t) the CO2 evolution rate (mmol/L/h), r U CO2(t) the CO2 uptake rate (mmol/L/h), r GLC(t) the glucose consumption rate (mmol/L/h), and r GLU(t) the glutamate production rate (mmol/L/h).
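As a minimal numerical sketch (rates in mmol/L/h; function name and sample values are illustrative), the r NAD +/r NADH ratio can be computed from the measurable rates by combining the NADH/NAD+ balance with the coupling relations r3 = 0.5·r GLC and r7 = r3 − r GLU:

```python
def nad_ratio(r_glc, r_glu, our, cer):
    """r_NAD+/r_NADH at time t from the measurable rates r_GLC, r_GLU,
    OUR, and CER, using r3 = r5 = 0.5*r_GLC and r7 = r5 - r_GLU."""
    r3 = 0.5 * r_glc
    r7 = r3 - r_glu
    num = r_glu + 2.0 * our
    den = 2.0 * r_glc + cer + r3 + r7   # simplifies to 3*r_glc + cer - r_glu
    return num / den
```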
Fermentation performance of normal and abnormal batches
Final glutamate production and cell concentration were two factors reflecting the fermentation performance. In 'normal' fermentations, the final glutamate concentration and the maximum cell concentration (OD620) were more than 70 g/L and 50, respectively. Otherwise, fermentations were categorized as 'abnormal.' Glutamate fermentation was non-growth associated, and glutamate accumulated when cell growth had almost ceased (after 10 h). At this point, about one third of the glucose in the initial medium had been consumed. Additional glucose was supplemented during the main glutamate production phase at 18 to 20 h. In the normal batch, glutamate concentration still increased after glucose was fed, although the glutamate accumulation rate slowed. However, glutamate production stopped after glucose feeding in the abnormal batch, and the final glutamate concentration was around 50 g/L. In this case, cell growth and glutamate production before glucose feeding were almost the same as those in the normal fermentation (Figure 2a,b). There was no significant difference in the changing trend of RQ between the normal and abnormal batches. However, the changing pattern of ORP in the abnormal batch differed significantly from that of the normal batch (Figure 2c,d). ORP was maintained in the normal range (−75 to −85 mV) before 20 h (glucose was fed at 18 h) and decreased slowly to a lower level (−120 mV) after 20 h. Generally, the r NAD +/r NADH ratio was closely and positively associated with the NAD+/NADH ratio. The in vivo r NAD +/r NADH ratio was thus at a lower level and NADH was excessive in the abnormal fermentation after 20 h, which was verified by the lower r NAD +/r NADH ratio calculated in Table 1. Therefore, the improper r NAD +/r NADH ratio after glucose feeding might be the reason for the non-accumulation of glutamate in the abnormal batch [17].
Comparison of fermentation parameters for normal and abnormal batches. Black square and solid line, normal batch; white square and dotted line, abnormal batch without sorbitol or glycerol; arrow, glucose fed. a: Time courses of cell concentration (OD620) in different operation conditions; b: Time courses of glutamate concentration in different operation conditions; c: Time courses of RQ in different operation conditions; d: Time courses of ORP in different operation conditions.
Table 1 r NAD + / r NADH ratio at different instants under different operation conditions
The abnormal performance appeared after glucose was fed, and the broth was not contaminated. It was speculated that the abnormal fermentation was due to a change in the characteristics of the strain, which led to the following results: (1) glutamate production ability decreased, and (2) the fermentation environment changed after glucose was fed and the strain failed to adapt to the change, resulting in abnormal intracellular metabolism and stoppage of glutamate synthesis. It has been reported that some osmoregulators (such as trehalose or betaine) are either produced by the microorganisms or taken up from the medium in lysine production by C. glutamicum in response to a hyperosmotic shock [18,19]. They can protect cells against environmental shock/stress. Sorbitol has the same effect. When glutamate accumulation ceased and the apparent fermentation parameters (OUR, CER, etc.) declined for a period of 2 h after glucose feeding, an 'abnormal' fermentation status was concluded. Sorbitol was then added at 12 and 2 g/L at 26 and 32 h, respectively; glutamate concentration increased gradually and reached 72 g/L at 36 h with a 2-h extension of the fermentation period, as shown in Figure 3.
Changing patterns of cell and glutamate concentrations under the condition of supplementing sorbitol when glutamate accumulation ceased. Black square, normal batch; white square, abnormal batch with addition of sorbitol; arrow 1, glucose fed; arrow 2 and arrow 3, adding 12 and 2 g/L sorbitol, respectively. a: Time courses of cell concentration (OD620) in different operation conditions; b: Time courses of glutamate concentration in different operation conditions.
Therefore, the abnormal fermentation was due to the decreased ability to resist environmental alterations. In addition, the permeability of the cell membrane increased to secrete glutamate extensively in the production phase, so the glucose addition easily brought about shock or stress in the living environment. The carbon flux distribution in vivo was adjusted, and consequently, the metabolism of NAD+ and NADH was changed. On the other hand, PDH and ICDH required NAD+ as the coenzyme, and a lower NAD+ level restricted the catalytic actions of these enzymes. As a result, the metabolic flux was redistributed and a series of abnormal effects arose.
Cells might be more tolerant to environmental change if some osmoregulators, such as trehalose or betaine, were added before or at the same time as the environmental shock/stress occurred. However, trehalose and betaine are expensive. Sorbitol and glycerol are cheaper and are also efficient protective reagents against environmental shock/stress. Hence, the fermentation performance when co-feeding sorbitol/glycerol with glucose or adding sorbitol/glycerol in the initial medium was studied.
Fermentation performance in presence of sorbitol
The fermentation performance in the presence of sorbitol is shown in Figure 4. The cell growth patterns did not change much regardless of the sorbitol supplementation method adopted. Glutamate production did not cease, and the final glutamate concentration reached 73 and 77 g/L at 34 h when co-feeding sorbitol with glucose or adding sorbitol in the initial medium, respectively. No difference in RQ before 20 h was observed between the batches with and without sorbitol. RQ decreased continuously after 20 h in the presence of sorbitol (RQ was about 0.4 at 34 h). A lower RQ was favorable for glutamate accumulation and indicated that less glucose proceeded beyond the α-KG node in the TCA cycle [20]. In these cases, less NADH accumulated in vivo, which was desirable for maintaining cellular activities [21]. Less NADH accumulation implied that r (F) NADH was less than r (C) NADH, leading to a higher r NAD +/r NADH ratio as well as higher ORP levels. Under the condition of co-feeding sorbitol with glucose, ORP was maintained at a normal level (−75 to −85 mV) after feeding the mixed carbon sources. When 15 g/L sorbitol was added in the initial medium, ORP decreased throughout the production phase, but it was much higher than that in the abnormal batch without sorbitol. The r NAD +/r NADH ratio was more than 0.8 after 20 h when co-feeding sorbitol with glucose or adding sorbitol in the initial medium, as shown in Table 1. The deterioration of the fermentation performance was reversed in the presence of sorbitol. This might be because the resistance to environmental change was enhanced and NADH consumption returned to normal. The r NAD +/r NADH ratio returned to the normal level in the presence of sorbitol, and the higher ORP values supported this fact indirectly.
Comparison of fermentation parameters with/without sorbitol. White square and dotted line, abnormal batch without sorbitol; black square and dashed line, sorbitol co-fed with glucose; black circle and solid line, 15 g/L sorbitol added in the initial medium; arrow, glucose fed. a: Time courses of cell concentration (OD620) in different operation conditions; b: Time courses of glutamate concentration in different operation conditions; c: Time courses of RQ in different operation conditions; d: Time courses of ORP in different operation conditions.
Fermentation performance in presence of glycerol
The fermentation performance when co-feeding glycerol with glucose or adding glycerol in the initial medium is shown in Figure 5. Similar fermentation curves and performance as those in the presence of sorbitol were obtained. The final glutamate concentration also reached normal levels at 34 h (72 and 76 g/L, respectively) when co-feeding glycerol with glucose or adding glycerol in the initial medium. The improvements in glutamate production were also closely associated with higher r NAD +/r NADH ratio (more than 0.8) as shown in Table 1, which was also reflected by higher ORP values (Figure 5) and lower RQ (0.6 to 0.7) in the late production phase.
Comparison of fermentation parameters with/without glycerol. White square and dotted line, abnormal batch without glycerol; black square and dashed line, glycerol co-fed with glucose; black square and solid line, 10 g/L glycerol added in the initial medium; arrow, glucose fed. a: Time courses of cell concentration (OD620) in different operation conditions; b: Time courses of glutamate concentration in different operation conditions; c: Time courses of RQ in different operation conditions; d: Time courses of ORP in different operation conditions.
From the results above, it could be concluded that abnormal glutamate fermentations could be restored to normal by supplementing the media with sorbitol or glycerol, especially when sorbitol or glycerol was added in the initial medium. It has been reported that sorbitol and glycerol could be assimilated by yeast and Escherichia coli to increase the targeted product yield by effectively providing the required energy [22,23]. Furthermore, both sorbitol and glycerol were used as effective protective agents of cell viability and enzymes due to their hygroscopicity, freezing tolerance, and oxidation resistance. The major role that sorbitol and glycerol played in the restoration of abnormal glutamate fermentation was analyzed subsequently.
Investigation of the role of sorbitol and glycerol during glutamate fermentation
The concentrations of sorbitol and glycerol were assayed, and the results are shown in Figure 6. The results indicated that sorbitol and glycerol were hardly utilized by C. glutamicum when co-feeding them with glucose or adding them in the initial medium. When sorbitol or glycerol was co-fed with glucose, their concentrations did not decrease after feeding; instead, they gradually accumulated in the broth. Only a small portion of the sorbitol or glycerol was consumed when they were added in the initial medium. In summary, sorbitol and glycerol were not assimilated by C. glutamicum but functioned as protectants to improve the tolerance of the strain in response to the disturbance of the living environment.
Time courses of sorbitol and glycerol concentrations under different supplementing conditions. (a) Sorbitol or glycerol co-fed with glucose; (b) sorbitol or glycerol added in the initial medium. Black square, sorbitol; black circle, glycerol.
The glutamate biosynthesis pathway is composed of many enzymatic reactions, in which citrate synthase (CS), PDH, and ICDH are the key enzymes directing carbon flux towards the TCA cycle (Figure 1a). Their activities are repressed by excessive NADH. A lower r NAD +/r NADH ratio indicates that NADH exceeds NAD+ in the cytoplasm, and PDH and ICDH activities could be limited by the insufficiency of NAD+. CcO is the key enzyme catalyzing the transfer of protons (H+) from NADH to O2 through the respiratory chain [24]. During this process, NADH is consumed and NAD+ is regenerated. The activity of CS was difficult to measure, so the activities of PDH, ICDH, and CcO before and after the addition of glucose (or mixed substrates) under different operation conditions were analyzed. The relative activity of each enzyme before feeding glucose (or mixed substrates, 18 h) was defined as 1. The relative enzymatic activities (REA) after feeding (26 h) are shown in Table 2. In the abnormal fermentation without sorbitol or glycerol, the activities of PDH and ICDH decreased by 42 and 28%, respectively, and CcO was inactive after glucose feeding. In the presence of sorbitol or glycerol, the inactivation of these enzymes was relieved. The activity of PDH decreased by 20 to 30%, and ICDH activity was not significantly affected. The activity of CcO was almost maintained at a higher level. It was concluded that the repression of the key enzymes directing glucose into glutamate synthesis was relieved by the addition of sorbitol or glycerol. Accompanied by the recovery of NAD+ regeneration, the metabolic flux was shifted into the normal pathway of glutamate synthesis. Consequently, glutamate accumulated continuously after glucose feeding and a failed fermentation could be avoided.
Table 2 Relative enzymatic activities of PDH, ICDH, and CcO after substrate(s) feeding (26 h) in different batches
Sorbitol and glycerol served as shield materials during glutamate fermentation. The activity of key enzymes could be properly maintained when supplementing sorbitol or glycerol. The r NAD +/r NADH ratio was increased, and ORP was maintained around the normal range. The stability of glutamate fermentation was improved efficiently by adding sorbitol or glycerol, and the improvement was more obvious when sorbitol/glycerol was added in the initial medium.
Feasibility analysis in industry
A likely failed fermentation could be restored to normal by co-feeding sorbitol or glycerol with glucose or by adding them in the initial medium. However, glutamate is a low value-added product, and the amount of supplemented sorbitol or glycerol should be minimized to save raw-material cost in industry. Therefore, cell growth and glutamate production were analyzed with less sorbitol or glycerol (4 to 5 g/L) added in the initial medium, as shown in Figure 7. In these cases, cell growth was not affected, glutamate synthesis did not stop after glucose feeding, and the final glutamate concentration reached 74 g/L at 36 h. It was verified again that sorbitol and glycerol functioned mainly as protectants, and their protective effect strengthened with increasing concentration of the shield materials [8].
Glutamate fermentation performance when less amount of sorbitol or glycerol was added in the initial medium. White square and dotted line, abnormal batch without sorbitol or glycerol; black square and dashed line, 5 g/L sorbitol added in the initial medium; black circle and solid line, 4 g/L glycerol added in the initial medium; arrow, glucose fed. a: Time courses of cell concentration (OD620) in different operation conditions; b: Time courses of glutamate concentration in different operation conditions; c: Time courses of RQ in different operation conditions; d: Time courses of ORP in different operation conditions.
The fermentation features of C. glutamicum changed during preservation process and glutamate accumulation stopped after glucose feeding, leading to an abnormal fermentation. This abnormal fermentation performance could be restored to normal by co-feeding sorbitol or glycerol with glucose or adding them in the initial medium. Restoration was more effective when sorbitol or glycerol was added in the initial medium. Glutamate fermentation stability was also improved efficiently. In these cases, sorbitol and glycerol were used as protective agents. When sorbitol or glycerol was added, the adaptive capability of cells to environmental change was promoted and the activities of PDH/ICDH/CcO could be maintained. The usage efficiency of NADH was improved, and r NAD +/r NADH ratio increased to normal level which was reflected by higher ORP value. These results provided theoretical basis and feasibility for stabilizing glutamate fermentation in its industrial production.
Sano C (2009) History of glutamate production. Am J Clin Nutr 90:728S–732S
Ying WH (2006) NAD(+) and NADH in cellular functions and cell death. Front Biosci 11:3129–3148
de Graef MR, Alexeeva S, Snoep JL, Teixeira de Mattos MJ (1999) The steady-state internal redox state (NADH/NAD) reflects the external redox state and is correlated with catabolic adaptation in Escherichia coli. J Bacteriol 181:2351–2357
Berrios-Rivera SJ, Bennett GN, San KY (2002) The effect of increasing NADH availability on the redistribution of metabolic fluxes in Escherichia coli chemostat cultures. Metab Eng 4:230–237
Kastner JR, Eiteman MA, Lee SA (2003) Effect of redox potential on stationary-phase xylitol fermentations using Candida tropicalis. Appl Microbiol Biot 63:96–100
Li J, Jiang M, Chen KQ, Ye Q, Shang LA, Wei P, Ying HJ, Chang HN (2010) Effect of redox potential regulation on succinic acid production by Actinobacillus succinogenes. Bioproc Biosyst Eng 33:911–920
Radjai MK, Hatch RT, Cadman TW (1984) Optimization of amino acid production by automatic self tuning digital control of redox potential. Biotechnol Bioeng Symp 14:657–679
Bhatnagar A, Srivastava SK (1992) Aldose reductase: congenial and injurious profiles of an enigmatic enzyme. Biochem Med Metab Biol 48:91–121
Azizi A, Ranjbar B, Khajeh K, Ghodselahi T, Hoornam S, Mobasheri H, Ganjalikhany MR (2011) Effects of trehalose and sorbitol on the activity and structure of Pseudomonas cepacia lipase: spectroscopic insight. Int J Biol Macromol 49:652–656
Wang Z, Wang Y, Zhang D, Li J, Hua Z, Du G, Chen J (2010) Enhancement of cell viability and alkaline polygalacturonate lyase production by sorbitol co-feeding with methanol in Pichia pastoris fermentation. Bioresource Technol 101:1318–1323
Liu Y, Zhang YG, Zhang RB, Zhang F, Zhu J (2011) Glycerol/glucose co-fermentation: one more proficient process to produce propionic pcid by Propionibacterium acidipropionici. Curr Microbiol 62:152–158
John GSM, Gayathiri M, Rose C, Mandal AB (2012) Osmotic shock augments ethanol stress in Saccharomyces cerevisiae MTCC 2918. Curr Microbiol 64:100–105
Ramon R, Ferrer P, Valero F (2007) Sorbitol co-feeding reduces metabolic burden caused by the overexpression of a rhizopus oryzae lipase in Pichia pastoris. J Biotechnol 130:39–46
Arruda PV, Felipe MGA (2009) Role of glycerol addition on xylose-to-xylitol bioconversion by Candida guilliermondii. Curr Microbiol 58:274–278
Popova O, Ismailov S, Popova T, Dietz KJ, Golldack D (2002) Salt-induced expression of NADP-dependent isocitrate dehydrogenase and ferredoxin-dependent glutamate synthase in Mesembryanthemum crystallinum. Planta 215:906–913
Hasegawa T, Hashimoto KI, Kawasaki H, Nakamatsu T (2008) Changes in enzyme activities at the pyruvate node in glutamate-overproducing Corynebacterium glutamicum. J Biosci Bioeng 105:12–19
Gourdon P, Lindley ND (1999) Metabolic analysis of glutamate production by Corynebacterium glutamicum. Metab Eng 1:224–231
Skjerdal OT, Sletta H, Flenstad SG, Josefsen KD, Levine DW, Ellingsen TE (1996) Changes in intracellular composition in response to hyperosmotic stress of NaCl, sucrose or glutamic acid in Brevibacterium lactofermentum and Corynebacterium glutamicum. Appl Microbiol Biotechnol 44:635–642
Park SM, Sinskey AJ, Stephanopoulos G (1997) Metabolic and physiological studies of Corynebacterium glutamicum mutants. Biotechnol Bioeng 55:864–879
Xiao J, Shi ZP, Gao P, Feng HJ, Duan ZY, Mao ZG (2006) On-line optimization of glutamate production based on balanced metabolic control by RQ. Bioproc Biosyst Eng 29:109–117
Savinell JM, Palsson BO (1992) Network analysis of intermediary metabolism using linear optimization. I. Development of mathematical formalism. J Theor Biol 154:421–454
Lin H, Bennett GN, San KY (2005) Effect of carbon sources differing in oxidation state and transport route on succinate production in metabolically engineered Escherichia coli. J Ind Microbiol Biotechnol 32:87–93
Murarka A, Dharmadi Y, Yazdani SS, Gonzalez R (2007) Fermentative utilization of glycerol by Escherichia coli and its implications for the production of fuels and chemicals. Appl Environ Microb 74:1124–1135
Farver O, Grell E, Ludwig B, Michel H, Pecht I (2006) Rates and equilibrium of CuA to heme a electron transfer in paracoccus denitrificans cytochrome c oxidase. Biophys J 90:2131–2137
The authors thank the National High-Tech Program (#2006AA020301) and the Major State Basic Research Development Program (#2007CB714303) of China for financial support.
National University of Singapore (Suzhou) Research Institute, 377 Linquan Street, Suzhou, Jiangsu Province, China
Yan Cao
School of Biotechnology, Jiangnan University, 1800 Lihu Road, Wuxi, Jiangsu Province, China
Zhen-ni He & Zhong-ping Shi
Department of Food Processing Technology, Harare Institute of Technology, 1505 Ganges Road, P.O. Box BE 277, Belvedere, Harare, Zimbabwe
Mpofu Enock
Correspondence to Yan Cao.
YC carried out the experiments, performed the statistical analysis, and drafted the manuscript. Z-NH was involved in performing the experiments. ME helped carry out the experiments and revised the manuscript. Z-PS conceived the idea, participated in its design and coordination, and helped in drafting of the manuscript. All authors read and approved the final manuscript.
Simplified metabolic reactions in glutamate fermentation by C. glutamicum:
r 1: GLC + NAD+ → 2PYR + 2NADH
r 2: PYR + NAD+ → Ac-CoA + NADH + CO2
r 3: PYR + CO2 + ATP → OAA + ADP
r 4: OAA + Ac-CoA → ICIT
r 5: ICIT + NAD+ → α-KG + NADH + CO2
r 6: α-KG + NAD+ → SUC + CO2 + NADH
r 7: SUC + NAD+ → OAA + NADH
r 8: α-KG + NH4 + + NADH → GLU + NAD+
r 9: NADH + 0.5O2 + Pi → NAD+ + ATP
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cao, Y., He, Zn., Shi, Zp. et al. Improving the stability of glutamate fermentation by Corynebacterium glutamicum via supplementing sorbitol or glycerol. Bioresour. Bioprocess. 2, 9 (2015). https://doi.org/10.1186/s40643-014-0032-6
Fermentative stability
Glutamate fermentation
Oxidation-reduction potential
Protective substance
r NAD +/r NADH | CommonCrawl |
A two-phase procedure for non-normal quantitative trait genetic association study
Wei Zhang1,
Huiyun Li2,
Zhaohai Li3 &
Qizhai Li1
BMC Bioinformatics volume 17, Article number: 52 (2016)
The nonparametric trend test (NPT) is well suited for identifying genetic variants associated with quantitative traits when the trait values do not satisfy the normal distribution assumption. If the genetic model, defined according to the mode of inheritance, is known, the NPT derived under the given genetic model is optimal. However, in practice, the genetic model is often unknown beforehand, and an NPT derived from a misspecified model might result in loss of power. When the underlying genetic model is unknown, a robust test is preferred to maintain satisfactory power.
We propose a two-phase procedure to handle the uncertainty of the genetic model for non-normal quantitative trait genetic association study. First, a model selection procedure is employed to help choose the genetic model. Then the optimal test derived under the selected model is constructed to test for possible association. To control the type I error rate, we derive the joint distribution of the test statistics developed in the two phases and obtain the proper size.
The proposed method is more robust than existing methods through the simulation results and application to gene DNAH9 from the Genetic Analysis Workshop 16 for associated with Anti-cyclic citrullinated peptide antibody further demonstrate its performance.
The past decades have witnessed many biological and epidemiological discoveries through the experimental design of genetic association studies based on the development of biological technology. Many variants have been identified to be associated with the quantitative traits. For example, in studying genetic loci in association with various phenotypes, 180 were reported to be associated with human height [1], 106 were associated with age at menarche [2], 97 were identified to be associated with body mass index [3], and the single-nucleotide polymorphism (SNP) rs4702 was associated with both diastolic and systolic blood pressure levels [4]. A standard approach to conduct an association test in a quantitative trait genetic study is to fit a linear model based on the assumption that the original or transformed trait values follow a normal distribution. However, the normal assumption is often violated for many traits even though some transformations such as the Log-transformation are carried out. For example, the number of tumors per subject in mouse follows a negative binomial distribution [5] and the survival time of a person follows a truncated distribution [6]. A good alternative to address this issue is to use the nonparametric tests.
Although there are various nonparametric tests in the literature, the most commonly used ones in genetic studies are the Kruskal-Wallis test (denoted by KW) [7] and the Jonckheere-Terpstra test (denoted by JT) [8, 9]. Originally, the KW was designed to detect differences in the medians of the response variable among three groups; it is a nonparametric version of one-way analysis of variance based on ranks. The JT is also a rank-based test for an ordered alternative hypothesis, which makes it particularly sensitive to the genetic mode of inheritance. Recently, Zhang and Li [10] defined the nonparametric risk and nonparametric odds and proposed a nonparametric trend test (NPT) that has been shown to be more powerful than KW and JT under a given genetic model. These methods, however, suffer from loss of power when the underlying genetic model is misspecified.
In the present paper, we propose a two-phase robust procedure to test the genetic-phenotypic association. We first construct a test to classify the genetic model in a nonparametric way. We find that the test statistic tends to be positive when the genetic model is dominant, and negative when the model is recessive. Then based on the chosen model, the association test is conducted. We derive the correlation coefficient of the test used for choosing the genetic model and that for doing association study and obtain the proper size for a given nominal significance level. Extensive simulation studies are conducted to show the new approach to have empirical size less than the nominal level, and to compare this new approach with KW and MAX3, the maximum value of three NPTs. The results show that the proposed two-phase procedure is more robust than MAX3 and KW in the sense that its minimum power in a set of plausible models is the highest among the tests under consideration. Finally, a real data analysis is used for further illustration.
Notations and genetic models
Consider a biallelic marker whose genotype is coded as 0, 1, and 2, corresponding to the count of a certain candidate risk allele or a minor allele. Suppose that there are n subjects independently sampled from a source population in a quantitative trait genetic association study. Let (y i ,g i ),i=1,2,⋯,n be the observed sample, where y i is the trait value and g i denotes the genotype value of the ith subject, i=1,2,⋯,n. For brevity, let the first n 0 subjects have genotype 0, the second n 1 subjects have genotype 1, and the last n 2 subjects possess genotype 2. Denote f ij =Pr(Y i <Y j ),i,j=0,1,2, where Y 0,Y 1 and Y 2 are the random variables that take values in the three sets \(\{y_{1},y_{2},\cdots,y_{n_{0}}\}\), \(\{y_{n_{0}+1},y_{n_{0}+2},\cdots,y_{n_{0}+n_{1}}\}\) and \(\{y_{n_{0}+n_{1}+1}, y_{n_{0}+n_{1}+2}, \cdots, y_{n}\},\) respectively. The null hypothesis of no association is given by H 0:f 01=f 02=1/2. The alternative hypothesis is H 1:f 02≥f 01≥1/2 and f 02>1/2.
A genetic model specifies the mode of inheritance. The three genetic models are: recessive model (REC) if f 01=1/2 and f 12=f 02>1/2, additive model (ADD) if f 01=f 12>1/2 and f 02>1/2, and dominant model (DOM) if \(f_{01}=f_{02} > \frac {1}{2}\) and f 12=1/2.
Denote Δ 1=f 01−1/2, Δ 2=f 12−1/2. We find that Δ 1−Δ 2 tends to be negative under the recessive model and positive under the dominant model. The signs of (Δ 1,Δ 2) under the three genetic models are plotted in Fig. 1, where the line corresponding to the additive model is the straight line with a slope of 1 at the point C, C=(1/2,1/2)τ and τ denotes the transpose of a vector or a matrix, and the other two lines are for the recessive and dominant models, respectively. The recessive and dominant models form the boundaries of the space under the alternative hypothesis. The vertex C corresponds to the null hypothesis. Denote
$$\begin{aligned} \hat{f}_{01}&=\frac{1}{n_{0}n_{1}}\sum_{i=1}^{n_{0}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}I(y_{i}<y_{j}),\\ \hat{f}_{12}&=\frac{1}{n_{1}n_{2}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}\sum_{k=n_{0}+n_{1}+1}^{n}I(y_{j}<y_{k}),\\ \widehat\sigma_{01}^{2}&=\frac{n_{1}-1}{n_{0}^{2}n_{1}}\sum_{i=1}^{n_{0}}\left[\frac{1}{n_{1}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}I(y_{i}<y_{j})-1/2\right]^{2}\\ &\quad+\frac{n_{0}-1}{n_{0}n_{1}^{2}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}\left[\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}I(y_{i}<y_{j})-1/2\right]^{2}+\frac{1}{4n_{0}n_{1}},\\ \widehat\sigma_{01,12}^{2}&=\frac{1}{n_{1}^{2}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}\left[\frac{1}{n_{0}}\sum_{i=1}^{n_{0}}I(y_{i}<y_{j})-1/2\right]\left[\frac{1}{n_{2}}\sum_{k=n_{0}+n_{1}+1}^{n}I(y_{j}<y_{k})-1/2\right], \end{aligned} $$
$$\begin{aligned} \widehat\sigma_{12}^{2}&=\frac{n_{2}-1}{n_{1}^{2}n_{2}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}\left[\frac{1}{n_{2}}\sum_{k=n_{0}+n_{1}+1}^{n}I(y_{j}<y_{k})-1/2\right]^{2}\\ &\quad+\frac{n_{1}-1}{n_{1}n_{2}^{2}}\sum_{k=n_{0}+n_{1}+1}^{n}\left[\frac{1}{n_{1}}\sum_{j=n_{0}+1}^{n_{0}+n_{1}}I(y_{j}<y_{k})-1/2\right]^{2}+\frac{1}{4n_{1}n_{2}}. \end{aligned} $$
The common three genetic models in the genetic model space. The point C=(1/2,1/2) corresponds to the null hypothesis
Then \(\hat f_{01}\) and \(\hat f_{12}\) are the consistent estimators of f 01 and f 12, respectively, \(\widehat \sigma _{01}^{2}\) and \(\widehat \sigma _{12}^{2}\) are, respectively, the consistent estimators of the variances of \(\hat f_{01}\) and \(\hat f_{12}\), and \(\widehat \sigma _{01,12}^{2}\) is the consistent estimator of the covariance between \(\hat f_{01}\) and \(\hat f_{12}\). Define a test statistic for genetic model selection as
$$\begin{array}{*{20}l} Z_{1}=\frac{\hat f_{01}-\hat f_{12}}{\sqrt{\hat\sigma_{01}^{2}-2\hat\sigma_{01,12}^{2}+\hat\sigma_{12}^{2}}}. \end{array} $$
Under the null hypothesis, Z 1 asymptotically follows the standard normal distribution. So the genetic models can be determined as follows: i) if Z 1>ξ (>0), then the genetic model is dominant; ii) if Z 1<−ξ, then the genetic model is recessive; otherwise, the additive model is claimed. Here, ξ is set to be the 90 % quantile of the standard normal distribution.
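The selection rule can be implemented directly from the placement-value formulas above. The following Python/NumPy sketch (function and variable names are ours, not from the paper) computes \(\hat f_{01}\), \(\hat f_{12}\), the variance and covariance estimators, and Z 1, then applies the threshold rule with ξ=Φ −1(0.90)=1.282:

```python
import numpy as np

def select_genetic_model(y0, y1, y2, xi=1.282):
    """Phase-1 model selection: y0, y1, y2 are trait values for
    genotype groups 0, 1, 2.  Returns (Z1, selected model)."""
    n0, n1, n2 = len(y0), len(y1), len(y2)
    ind01 = y0[:, None] < y1[None, :]          # I(y_i < y_j), shape (n0, n1)
    ind12 = y1[:, None] < y2[None, :]          # I(y_j < y_k), shape (n1, n2)
    f01, f12 = ind01.mean(), ind12.mean()      # \hat f_01 and \hat f_12
    # placement values: per-observation win proportions against the other group
    a = ind01.mean(axis=1)                     # each group-0 obs vs group 1
    b = ind01.mean(axis=0)                     # each group-1 obs vs group 0
    c = ind12.mean(axis=1)                     # each group-1 obs vs group 2
    d = ind12.mean(axis=0)                     # each group-2 obs vs group 1
    s01 = ((n1 - 1) / (n0**2 * n1)) * np.sum((a - 0.5)**2) \
        + ((n0 - 1) / (n0 * n1**2)) * np.sum((b - 0.5)**2) + 1 / (4 * n0 * n1)
    s12 = ((n2 - 1) / (n1**2 * n2)) * np.sum((c - 0.5)**2) \
        + ((n1 - 1) / (n1 * n2**2)) * np.sum((d - 0.5)**2) + 1 / (4 * n1 * n2)
    s0112 = np.sum((b - 0.5) * (c - 0.5)) / n1**2   # covariance via shared group 1
    z1 = (f01 - f12) / np.sqrt(s01 - 2 * s0112 + s12)
    model = "dominant" if z1 > xi else ("recessive" if z1 < -xi else "additive")
    return z1, model
```

A dominant-type pattern (groups 1 and 2 shifted relative to group 0) gives \(\hat f_{01}>1/2\) and \(\hat f_{12}\approx 1/2\), so Z 1 is large and positive, as the selection rule expects.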
The nonparametric test under a given genetic model

Denote
$$\begin{aligned} {}\hat{f}_{02}&=\frac{1}{n_{0}n_{2}}\sum\limits_{i=1}^{n_{0}}\sum\limits_{k=n_{0}+n_{1}+1}^{n}I(y_{i}<y_{k}),\\ {}\hat{f}_{R}&=\frac{1}{(n_{0}+n_{1})n_{2}}\sum\limits_{i=1}^{n_{0}+n_{1}}\sum_{k=n_{0}+n_{1}+1}^{n}I\{y_{i}<y_{k}\}\\ &=\frac{n_{0}}{n_{0}+n_{1}}\hat f_{02} + \frac{n_{1}}{n_{0}+n_{1}}\hat f_{12},\\ {}\hat\sigma_{02}^{2}&=\frac{n_{2}-1}{{n_{0}^{2}}n_{2}}\sum\limits_{i=1}^{n_{0}}\left[\frac{1}{n_{2}}\sum\limits_{k=n_{0}+n_{1}+1}^{n}I(y_{i}<y_{k})-1/2\right]^{2}\\ &\quad\!\,+\,\frac{n_{0}\,-\,1}{n_{0}{n_{2}^{2}}}\sum\limits_{k=n_{0}+n_{1}+1}^{n}\!\left[\!\frac{1}{n_{0}}\sum\limits_{i=1}^{n_{0}}I\!(y_{i}\!\!<\!\!y_{k})\,-\,\!1/2\!\right]^{2}\!\,+\,\!\frac{1}{4n_{0}n_{2}},\\ {}\widehat\sigma_{02,12}^{2}&=\frac{1}{{n_{2}^{2}}}\sum\limits_{k=n_{0}+n_{1}+1}^{n}\left[\frac{1}{n_{0}}\sum\limits_{i=1}^{n_{0}}I(y_{i}<y_{k})-1/2\right]\\ &\quad\left[\frac{1}{n_{1}}\sum\limits_{j=n_{0}+1}^{n_{0}+n_{1}}I(y_{j}<y_{k})-1/2\right], \end{aligned} $$
$${}{\widehat\sigma_{R}^{2}} =\frac{{n_{0}^{2}}}{(n_{0}+n_{1})^{2}}\widehat{\sigma}_{02}^{2} + \frac{2n_{0}n_{1}}{(n_{0}+n_{1})^{2}}\widehat{\sigma}_{02,12}^{2} + \frac{{n_{1}^{2}}}{(n_{0}+n_{1})^{2}}\widehat{\sigma}_{12}^{2}. $$
Then the NPT under the recessive model can be given by \(Z_{R} = (\hat f_{R}-1/2)/\hat \sigma _{R}\). For the additive model, denote
$$\begin{aligned} {}\hat{f}_{01}&\,=\,\frac{1}{n_{0}n_{1}}\sum\limits_{i=1}^{n_{0}}\sum\limits_{j=n_{0}+1}^{n_{0}+n_{1}}I(y_{i}<y_{j}),\\ {}{w}_{1}^{*}&\,=\,\sqrt{\!(n_{0}\,+\,n_{1})\!\big/\!\left[\!(n\,+\,n_{1}\!)\widehat \sigma_{01}^{2}\!\right]}, w_{2}^{*}\,=\,\sqrt{\!(n_{1}\,+\,n_{2})\!\big/\!\left[\!(n\,+\,n_{1}\!)\widehat \sigma_{12}^{2}\right]},\\ {}w_{1}&\,=\,\frac{w_{1}^{*}}{w_{1}^{*}+w_{2}^{*}}, w_{2}=\frac{w_{2}^{*}}{w_{1}^{*}+w_{2}^{*}} \end{aligned} $$
$$\begin{array}{*{20}l} {}{\hat{f}}_{A}=w_{1}\hat f_{01}+w_{2}\hat f_{12}, {\widehat\sigma^{2}_{A}}= {w_{1}^{2}}\widehat\sigma_{01}^{2} + 2w_{1}w_{2}\widehat\sigma_{01,12}^{2} + {w_{2}^{2}}\widehat\sigma_{12}^{2}. \end{array} $$
Then, the NPT under the additive genetic model is \(Z_{A} =(\hat f_{A}-1/2)/\hat \sigma _{A}\).
Similarly, denote
$${\fontsize{8.1pt}{9.6pt}{\begin{aligned} {}\widehat\sigma_{01}^{2}&=\frac{n_{1}-1}{{n_{0}^{2}}n_{1}}\sum\limits_{i=1}^{n_{0}}\left[\frac{1}{n_{1}}\sum\limits_{j=n_{0}+1}^{n_{0}+n_{1}} I(y_{i}<y_{j})-1/2\right]^{2}\\ &\quad+\!\frac{n_{0}\,-\,1}{n_{0}{n_{1}^{2}}}\sum\limits_{j=n_{0}+1}^{n_{0}+n_{1}}\left[\frac{1}{n_{0}}\sum\limits_{i=1}^{n_{0}}I(y_{i}\!<\!y_{j})\,-\,1/2\!\right]^{2} \,+\,\frac{1}{4n_{0}n_{1}},\\ {}\widehat{\sigma}_{01,02}^{2}&=\frac{1}{{n_{0}^{2}}}\sum\limits_{i=1}^{n_{0}}\left[\frac{1}{n_{1}}\sum\limits_{j=n_{0}+1}^{n_{0}+n_{1}} I(y_{i}<y_{j})-1/2 \right]\\ &\quad\left[\frac{1}{n_{2}}\sum\limits_{k=n_{0}+n_{1}+1}^{n}I(y_{i}<y_{k})-1/2\right].\\ {}\hat{f}_{D}&=\frac{1}{n_{0}(n_{1}+n_{2})}\sum\limits_{i=1}^{n_{0}}\sum_{j=n_{0}+1}^{n}I\{y_{i}\!<y_{j}\}=\frac{n_{1}}{n_{1}+n_{2}}\hat f_{01} + \frac{n_{2}}{n_{1}+n_{2}}\hat f_{02}, \end{aligned}}} $$
$$\begin{array}{*{20}l} {}{\widehat\sigma_{D}^{2}}=\frac{{n_{1}^{2}}}{(n_{1}+n_{2})^{2}}\widehat{\sigma}_{01}^{2} + \frac{2n_{1}n_{2}}{(n_{1}+n_{2})^{2}}\widehat{\sigma}_{01,02}^{2} + \frac{{n_{2}^{2}}}{(n_{1}+n_{2})^{2}}\widehat{\sigma}_{02}^{2}. \end{array} $$
Then the NPT under the dominant model is \(Z_{D} = (\hat f_{D}-1/2)/\hat \sigma _{D}\). Under the null hypothesis, Z R , Z A and Z D each asymptotically follow the standard normal distribution.
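Each of \(\hat f_{R}\) and \(\hat f_{D}\) is itself a two-sample Mann–Whitney-type statistic between a pooled group and a single genotype group, so the phase-2 tests for the recessive and dominant models can be sketched with one generic routine. In the Python/NumPy sketch below (names ours), the pooled two-sample placement-value variance is used as an asymptotically equivalent shortcut to the weighted decompositions \(\widehat\sigma_{R}^{2}\) and \(\widehat\sigma_{D}^{2}\) given in the text:

```python
import numpy as np

def npt_z(a, b):
    """Two-sample nonparametric test statistic Z = (f - 1/2) / sigma,
    where f estimates P(Y_a < Y_b); sigma uses the placement-value
    variance estimator from the text (centered at 1/2, valid under H0)."""
    na, nb = len(a), len(b)
    ind = a[:, None] < b[None, :]
    f = ind.mean()
    pa, pb = ind.mean(axis=1), ind.mean(axis=0)
    var = ((nb - 1) / (na**2 * nb)) * np.sum((pa - 0.5)**2) \
        + ((na - 1) / (na * nb**2)) * np.sum((pb - 0.5)**2) + 1 / (4 * na * nb)
    return (f - 0.5) / np.sqrt(var)

def z_recessive(y0, y1, y2):
    # recessive model: pool genotypes {0,1} against genotype 2
    return npt_z(np.concatenate([y0, y1]), y2)

def z_dominant(y0, y1, y2):
    # dominant model: genotype 0 against pooled genotypes {1,2}
    return npt_z(y0, np.concatenate([y1, y2]))
```

The additive statistic Z A is not a single pooled comparison; it instead combines the components \(\hat f_{01}\) and \(\hat f_{12}\) with the inverse-variance weights w 1 and w 2 defined above.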
Two-phase procedure
We propose a two-phase procedure (TPP) for the quantitative trait association study: the underlying genetic model is determined in the first phase, and the association is then tested in the second phase with the NPT corresponding to the selected model. In detail, the procedure consists of the following two steps:
Step 1. Determine the genetic model using Z 1: if Z 1<−ξ, the recessive model is used; if Z 1>ξ, the dominant model is used; otherwise, the additive model is used.
Step 2. Choose the association test statistic corresponding to the model selected in Step 1 and carry out the association test.
Size adjustment
To adjust the size of the two-phase procedure for a given overall nominal significance level, we need to derive the joint distribution of Z 1 and Z x , x∈{R,A,D}. From the Additional file 1, under the null hypothesis, (Z 1,Z x )τ asymptotically follows a bivariate normal distribution with mean vector (0,0)τ and covariance matrix Λ x , where
$$\Lambda_{x}=\left(\begin{array}{cc} 1&\rho_{x}\\ \rho_{x}&1 \end{array} \right),~~~x\in\{R, A, D\}. $$
Denote the cumulative distribution functions of Y 0, Y 1 and Y 2 by F 0, F 1 and F 2, respectively. Then ρ R , ρ A and ρ D are functions of F 0, F 1, F 2 and p (the minor allele frequency, or MAF for short), which can be estimated empirically from the observed data. Consistent estimates can be obtained when the means of the trait values are equal across the genotype groups. The technical details of the derivations for ρ R , ρ A and ρ D under the null hypothesis are presented in the Additional file 1. Suppose that the null hypothesis is rejected at the level α ∗ in the second phase. Then, to control the overall level at a given α, we require \(\alpha =\text {P}_{H_{0}}\left (Z_{1} < -\xi, |Z_{R}|>z(1-\alpha ^{*}/2)\right)+ \text {P}_{H_{0}}\left (|Z_{1}| < \xi, |Z_{A}|>z(1-\alpha ^{*}/2)\right) + \text {P}_{H_{0}}\left (Z_{1} > \xi, |Z_{D}|>\right. \left.z(1-\alpha ^{*}/2)\right)\), where z(α) is the α quantile of the standard normal distribution. This relation can be written as
$${\fontsize{8.1pt}{9.6pt}{\begin{aligned} {}\alpha&\,=\,\displaystyle\int_{\Omega_{R}}\left\{\Phi\left(\frac{\!-z(1\,-\,\alpha^{*}/2)\,-\,\rho_{R}u}{(1\,-\,{\rho_{R}^{2}})^{1/2}}\right)\,+\, \Phi\!\left(\frac{-z(1\,-\,\alpha^{*}/2)\,+\,\rho_{R}u}{(1\,-\,{\rho_{R}^{2}})^{1/2}}\right)\!\right\}\!\mathrm{d}\Phi(u) \\ {}&\quad+\displaystyle\int_{\Omega_{A}}\!\!\left\{\!\Phi\!\left(\!\frac{-z(1\,-\,\alpha^{*}/2)\,-\,\rho_{A}u}{(1\,-\,{\rho_{A}^{2}})^{1/2}}\!\right)\,+\, \Phi\!\left(\!\frac{-z(1\,-\,\alpha^{*}\!/2)\,+\,\rho_{A}u}{(1\,-\,{\rho_{A}^{2}})^{1/2}}\!\right)\!\right\}\!\mathrm{d}\Phi(u)\\ {}&\quad+\!\! \displaystyle \int_{\Omega_{D}}\!\left\{\!\Phi\!\left(\frac{\!-z(1\,-\,\alpha^{*}\!/2)\,-\,\rho_{D}u}{(1\,-\,{\rho_{D}^{2}})^{1/2}}\!\right)\!+ \!\Phi\!\left(\frac{\!-z(1\,-\,\alpha^{*}\!/2)\,+\,\rho_{D}u}{(1\,-\,{\rho_{D}^{2}})^{1/2}}\!\right)\!\right\}\!\mathrm{d}\Phi(u), \end{aligned}}} $$
where Ω R ={u:u<−ξ}, Ω A ={u:−ξ≤u≤ξ}, Ω D ={u:u>ξ}, and Φ(·) is the cumulative distribution function of the standard normal distribution. Under the null hypothesis, we can numerically calculate the adjusted significance level for the association test statistic in the second phase. Table 2 shows the mean and standard error of α ∗ for the nominal levels of 0.05 and 0.001 based on 1,000 and 50,000 replicates, respectively. It indicates that α ∗ tends to be smaller than α and is not sensitive to the MAF. For example, when MAF=0.25, the adjusted levels for the nominal α=0.05 and α=0.001 are 0.0360 and 0.00065, and the corresponding standard errors are 0.0003 and 0.000013, respectively.
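Given estimates of ρ R , ρ A and ρ D , the adjusted level α ∗ can be found numerically by solving the displayed size equation for α ∗ . A sketch with SciPy (function names are ours; in practice the ρ values come from the consistent estimators in Additional file 1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

XI = norm.ppf(0.90)   # xi = 1.282, the model-selection threshold

def overall_size(alpha_star, rho_R, rho_A, rho_D, xi=XI):
    """Evaluate the right-hand side of the size equation at alpha_star."""
    z = norm.ppf(1 - alpha_star / 2)
    def piece(rho, lo, hi):
        s = np.sqrt(1 - rho**2)
        integrand = lambda u: (norm.cdf((-z - rho * u) / s)
                               + norm.cdf((-z + rho * u) / s)) * norm.pdf(u)
        return quad(integrand, lo, hi)[0]
    return (piece(rho_R, -np.inf, -xi)     # recessive region, Z1 < -xi
            + piece(rho_A, -xi, xi)        # additive region, |Z1| <= xi
            + piece(rho_D, xi, np.inf))    # dominant region, Z1 > xi

def adjusted_level(alpha, rho_R, rho_A, rho_D):
    """Solve overall_size(alpha_star) = alpha for alpha_star."""
    return brentq(lambda a: overall_size(a, rho_R, rho_A, rho_D) - alpha,
                  1e-10, 0.5)
```

For ρ R =ρ A =ρ D =0 the equation collapses to α ∗ =α, which serves as a sanity check of the implementation.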
The performance of model selection procedure
We conduct simulation studies to explore the performance of the model selection procedure. We generate data under the three genetic models. Consider the linear model Y=β 0+G β 1+ε, where Y denotes the phenotype value, G denotes the genotype value at a SNP locus, and ε follows a truncated generalized extreme value distribution (a heavy-tailed distribution, denoted as tGEV(0, 0, d, 0)) with shape parameter 0, location parameter 0, scale parameter d, and truncation point 0. Here we specify β 0=0.50, β 1=0.50, d=5, and the MAF p∈{0.05,0.10,⋯,0.50}. The total sample size is 1,500, and 10,000 replicates are conducted to compute the true selection rate (TSR) under different scenarios. Table 1 shows the results for ξ=Φ −1(0.90)=1.282. The results for ξ=Φ −1(0.80)=0.841, ξ=Φ −1(0.85)=1.036 and ξ=Φ −1(0.95)=1.645 are given in the Additional file 1. From Table 1, we can see that the TSR increases as the MAF increases. For example, if the recessive model is true, the TSR is 19.48 % for a MAF of 0.05, while it is 86.21 % for a MAF of 0.50. This makes sense since the expected number of subjects with genotype 2 increases as the MAF increases. We also find that the TSR for the additive model is satisfactory, at around 80 %; for example, the TSRs are 79.23 % and 80.09 % for MAFs of 0.05 and 0.50, respectively. Besides, we also conduct simulations with a covariate, considering Y=β 0+X γ+G β 1+ε, where X is a covariate. The detailed results are available in the Additional file 1.
Table 1 The true selection rate (%) of the genetic model using Z 1 with ξ=Φ −1(0.9) when the error follows tGEV(0,0,5,0)
The adjusted significance level
Table 2 shows the adjusted α ∗ of the TPP under the null hypothesis. The parameter setting is the same as above. When the nominal level is 0.05, we calculate the mean and standard deviation (SD) based on 2,000 replicates, and 50,000 replicates are conducted for the nominal level of 0.001. The results indicate that the adjusted level is always less than the nominal significance level α. For example, when MAF=0.25, the adjusted levels for the nominal levels α=0.05 and α=0.001 are 0.0310 and 0.00059, respectively. The value of α ∗ is also quite stable, since its standard deviation is negligible compared with the mean. For example, when MAF=0.1, the adjusted levels for the nominal levels α=0.05 and α=0.001 are 0.0335 and 0.00063, and the corresponding standard deviations are 0.00169 and 0.000039, respectively.
Table 2 The adjusted level α ∗ for the nominal significance levels α of 0.05 and 0.001
Type I error rate
We evaluate the empirical type I error rates of five tests: KW, Z R , Z A , MAX3, and TPP. The simulation settings are similar to those above. The sample size is 1,500. Here we use ξ=Φ −1(0.90), β 0=0.50, and p∈{0.05,0.10,⋯,0.50}; 2,000 replicates are conducted for the nominal significance level of 0.05 and 50,000 replicates for the nominal significance level of 0.001. Table 3 shows the empirical type I error rates of the five tests at the significance levels of 0.05 and 0.001. The results show that all five tests control the type I error rate well, with empirical values close to the nominal significance level. For example, when MAF=0.20, the empirical type I error rates of the KW, Z R , Z A , MAX3, and TPP tests are 0.046, 0.048, 0.051, 0.045, and 0.041, respectively, at the significance level of 0.05. When MAF=0.35 and the nominal significance level is 0.001, the empirical type I error rates of the KW, Z R , Z A , MAX3, and TPP tests are 0.00090, 0.00086, 0.00098, 0.00090, and 0.00080, respectively.
Table 3 The empirical type I errors of KW, Z R , Z A , MAX3, and TPP when the error term follows tGEV(0,0,5,0)
We compare the power among KW, Z R , Z A , MAX3 and TPP under settings similar to those described above. Figures 2 and 3 report the power results for the nominal levels of 0.05 and 0.001, respectively, under the recessive, additive, and dominant models. To make the powers comparable, when the nominal level is 0.001 we specify d=3 for β 1=0.25 and d=5 for β 1=0.50, and when the nominal level is 0.05 we set d=5 and β 1∈{0.25,0.50}. The results indicate that, except for the NPT under the true genetic model, the proposed TPP is always more powerful than KW and MAX3. This makes sense because the NPT under a given model (Z R , Z A ) is the most powerful test under that model, and the model selection procedure gives a large probability of true selection. TPP is more powerful than KW, Z A , and MAX3 under the recessive model and in most scenarios under the dominant model; in some cases the power increase reaches 6 %. For example, when the MAF is 0.20, β 1=0.50, α=0.05 and the genetic model is recessive, the empirical powers of KW, Z A , MAX3, and TPP are 0.335, 0.202, 0.418, and 0.473, respectively. The performance of TPP is also superior to that of KW, Z R and MAX3 when the true model is additive or dominant. For example, when the MAF is 0.30, the genetic model is additive, β 1=0.50 and α=0.001, the empirical powers of KW, Z R , MAX3, and TPP are 0.321, 0.128, 0.407, and 0.431, respectively. Furthermore, using Z R under the additive or dominant model can result in a substantial loss of power. The TPP is more robust against genetic model misspecification than the other four tests. For example, when α=0.05 and β 1=0.50, the minimum power of TPP over MAFs from 0.10 to 0.50 under the recessive, additive and dominant models is 0.137, which is larger than those of KW (0.099), Z R (0.103), Z A (0.070), and MAX3 (0.112).
The powers of KW, Z R , Z A , MAX3, and TPP with tGEV(0,0,5,0) error under three genetic models. The nominal level is 0.05. The first column is for β 1=0.25 and the second column is for β 1=0.50. The total number of the subjects is n=1,500
The powers of KW, Z R , Z A , MAX3, and TPP with tGEV(0,0,d,0) error under three genetic models. The nominal level is 0.001. The first column is for β 1=0.25,d=3 and the second column is for β 1=0.50,d=5. The total number of the subjects is n=1,500
Application to gene DNAH9 associated with anti-CCP measure
We apply KW, Z A , MAX3 and TPP to identify the deleterious SNPs in the gene DNAH9 [11] associated with the anti-CCP measure using the data from Genetic Analysis Workshop 16 [12, 13]. The anti-CCP antibody is present in the blood of the majority of patients with rheumatoid arthritis (RA). The data include 867 cases (with anti-CCP measured) and 1,195 controls (without anti-CCP). Following Zheng et al. (2012) [14], we impute the missing anti-CCP values of the controls with the minimum anti-CCP value among the cases, which is 20.053. We remove the effect of population stratification using four principal coordinates [15] following Zhang and Li [10] and take the residuals as the new outcome. There are 92 SNPs in gene DNAH9 on chromosome 17. We calculate the p-values of these SNPs using the KW, Z A , MAX3 and TPP approaches. Six SNPs in gene DNAH9 have more than 15 % of their genotype values missing, so we only report the p-values of the remaining 86 SNPs. In the main text, we show the results for the SNPs whose p-values are relatively small (mostly less than 0.05) in Table 4; the p-values of the other SNPs are summarized in Table S10 in the Additional file 1. We find that the SNP rs11655963 has the minimum p-value of 2.72×10−5 using the TPP. The corresponding p-values using KW, Z A , and MAX3 are 1.18×10−4, 8.40×10−5 and 7.44×10−5, respectively. Burton et al. (2007) [16] proposed the p-value threshold of 5×10−5 for moderate association at the genome-wide level. Because the p-values of KW, Z A and MAX3 are all larger than 5×10−5, these tests find no moderate genome-wide association. For the TPP, however, the adjusted p-value threshold corresponding to 5×10−5 is 3.64×10−5, which the p-value of rs11655963 falls below. This indicates that the TPP can detect this moderate genome-wide association.
Table 4 The p-values of 17 SNPs in gene DNAH9 for the association with Anti-CCP Measure
With the development of biological technology, more and more data on quantitative traits and genotypes are generated and deposited in public databases such as the National Center for Biotechnology Information database. It is urgent to develop new methods to extract useful information to help understand the etiology of human complex diseases. A nonparametric two-phase procedure is proposed here to test the association between a di-allelic SNP and a non-normally distributed quantitative trait when the genetic model is unknown. Simulation results show that the proposed TPP is more robust than the existing methods.
If there are covariates to be adjusted for, we can first regress the trait on the covariates, use the residuals as the new outcome, and then employ the TPP to conduct the association study. The detailed simulation results are presented in Additional file 1. Besides the truncated generalized extreme value (a heavy-tailed distribution) error term with truncation point 0, we also consider error terms following the centralized t distribution and the general generalized extreme value distribution, respectively. The results are given in Additional file 1, where similar results are observed.
Lango AH, Estrada K, Lettre G, Berndt SI, Weedon MN, Rivadeneira F, et al. Hundreds of variants clustered in genomic loci and biological pathways affect human height. Nature. 2010; 467:832–8.
Perry JR, Day F, Elks CE, Sulem P, Thompson DJ, Ferreira T, et al. Parent-of-origin-specific allelic associations among 106 genomic loci for age at menarche. Nature. 2014; 514(7520):92–7.
Locke AE, Kahali B, Berndt SI, Justice AE, Pers TH, Day FR, et al. Genetic studies of body mass index yield new insights for obesity biology. Nature. 2015; 518(7538):197–206.
Turpeinen H, Seppälä I, Lyytikäinen LP, Raitoharju E, Hutri-Kähönen N, Levula M, et al. A genome-wide expression quantitative trait loci analysis of proprotein convertase subtilisin/kexin enzymes identifies a novel regulatory gene variant for FURIN expression and blood pressure. Hum Genet. 2015; 134:627–636.
Drinkwater NR, Klotz JH. Statistical methods for the analysis of tumor multiplicity data. Cancer Res. 1981; 41:113–9.
Chen H, Lumley T, Brody J, Heard-Costa NL, Fox CS, Cupples LA, Dupuis J. Sequence kernel association test for survival traits. Genet Epidemiol. 2014; 38:191–7.
Kruskal WH, Wallis WA. Use of ranks in one-criterion variance analysis. J Am Stat Assoc. 1952; 47:583–621.
Jonckheere A. A distribution-free k-sample test against ordered alternatives. Biometrika. 1954; 41:133–45.
Terpstra TJ. The asymptotic normality and consistency of Kendall's test against trend, when ties are present in one ranking. Indagationes Mathematicae. 1952; 14:327–33.
Zhang W, Li Q. Nonparametric risk and nonparametric odds in quantitative genetic association studies. Sci Rep-UK. 2015; 5:12105.
Lin Y, Zhang M, Wang L, Pungpapong V, Fleet JC, Zhang D. Simultaneous genome-wide association studies of anti-cyclic citrullinated peptide in rheumatoid arthritis using penalized orthogonal-components regression. BMC Proc. 2009; 3(Suppl 7):S20.
Black MH, Watanabe RM. A principal-components-based clustering method to identify multiple variants associated with rheumatoid arthritis and arthritis-related autoantibodies. BMC Proc. 2009; 3(Suppl 7):S129.
Amos CI, Chen WV, Seldin MF, Remmers EF, Taylor KE, Criswell LA, et al. Data for Genetic Analysis Workshop 16 Problem 1, association analysis of rheumatoid arthritis data. BMC Proc. 2009; 3(Suppl 7):S2.
Zheng G, Wu CO, Kwak M, Jiang W, Joo J, Lima JAC. Joint analysis of binary and quantitative traits with data sharing and outcome-dependent sampling. Genet Epidemiol. 2012; 36:263–73.
Li Q, Yu K. Improved correction for population stratification in genome-wide association studies by identifying hidden population structures. Genet Epidemiol. 2008; 32(3):215–26.
Burton PR, Clayton DG, Cardon LR, Craddock N, Deloukas P, Duncanson A, et al. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature. 2007; 447:661–78.
Q. Li was supported in part by the National Science Foundation of China, Grant Nos. 11371353 and 61134013, and the Breakthrough Project of Strategic Priority Program of the Chinese Academy of Sciences, Grant No. XDB13040600. The authors thank Dr. Aiyi Liu of the National Institute of Child Health and Human Development (NICHD), National Institutes of Health (NIH), for his helpful comments. We also thank the Editor, Associate Editor and three anonymous reviewers for their careful reading and insightful comments, which greatly improved our manuscript.
Key Laboratory of Systems Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China
Wei Zhang & Qizhai Li
School of Management and Economics, Beijing Institute of Technology, Beijing, 100081, China
Huiyun Li
Department of Statistics, George Washington University, Washington, 20052, DC, USA
Zhaohai Li
Qizhai Li
Correspondence to Huiyun Li.
WZ contributed to the design of the study and performed the analysis; HL and QL conceived the idea and drafted the manuscript; all authors participated in data interpretation, read and approved the final manuscript.
The derivations of ρ R , ρ A and ρ D under the null hypothesis. Consistent estimators of ρ R , ρ A and ρ D under the null hypothesis. Additional simulation results for the model selection procedure. Simulation results for the error term following the generalized extreme value distribution. Simulation results for the error term following the centralized t distribution. Simulation results for the model with covariates. Additional p-values of the SNPs in gene DNAH9 for the association with the anti-CCP measure. (PDF 179 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Zhang, W., Li, H., Li, Z. et al. A two-phase procedure for non-normal quantitative trait genetic association study. BMC Bioinformatics 17, 52 (2016). https://doi.org/10.1186/s12859-016-0888-x
Quantitative trait genetic association studies
Journal of Mathematical Biology
August 2017 , Volume 75, Issue 2, pp 491–520 | Cite as
A bifurcation theorem for evolutionary matrix models with multiple traits
J. M. Cushing
F. Martins
A. A. Pinto
Amy Veprauskas
First Online: 06 January 2017
One fundamental question in biology is population extinction and persistence, i.e., stability/instability of the extinction equilibrium and of non-extinction equilibria. In the case of nonlinear matrix models for structured populations, a bifurcation theorem answers this question when the projection matrix is primitive by showing the existence of a continuum of positive equilibria that bifurcates from the extinction equilibrium as the inherent population growth rate passes through 1. This theorem also characterizes the stability properties of the bifurcating equilibria by relating them to the direction of bifurcation, which is forward (backward) if, near the bifurcation point, the positive equilibria exist for inherent growth rates greater (less) than 1. In this paper we consider an evolutionary game theoretic version of a general nonlinear matrix model that includes the dynamics of a vector of mean phenotypic traits subject to natural selection. We extend the fundamental bifurcation theorem to this evolutionary model. We apply the results to an evolutionary version of a Ricker model with an added Allee component. This application illustrates the theoretical results and, in addition, several other interesting dynamic phenomena, such as backward bifurcation induced strong Allee effects.
Nonlinear matrix models · Structured population dynamics · Evolutionary game theory · Bifurcation · Equilibria · Stability
Mathematics Subject Classification
92D25 92D15 39A28 37G99
J. M. Cushing and A. Veprauskas were supported by the U.S. National Science Foundation grant DMS 0917435. A. A. Pinto thanks the support of LIAAD—INESC TEC through program PEst, the Faculty of Sciences of the University of Porto, and the Portuguese Foundation for Science and Technology (FCT—Fundação para a Ciência e a Tecnologia) through the project "Dynamics, Optimization and Modelling", with reference PTDC/MAT-NAN/6890/2014. This work is financed by the ERDF—European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation—COMPETE 2020 Programme, and by National Funds through the FCT within project POCI-01-0145-FEDER-006961. It is also supported by the project "NanoSTIMA: Macro-to-Nano Human Sensing: Towards Integrated Multimodal Health Monitoring and Analytics"/NORTE-01-0145-FEDER-000016, financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF). F. Martins thanks the financial support of the Portuguese Foundation for Science and Technology (FCT) through a PhD scholarship of the programme MAP-PDMA (reference: PD/BD/105726/2014). The authors are grateful for the comments of two anonymous reviewers and the handling editor, which were exceptionally helpful in improving the paper.
Lemma 1
Assume H2 and H5 hold. Then \(\hat{w}_{L}^{T}[\nabla _{{{\hat{v}}}}^{0}q_{ij}^{T}{{\hat{u}}}_{1}]{{\hat{w}}}_{R}=0.\)
Consider the equality
$$\begin{aligned} P({{\hat{0}}},{{\hat{u}}},{{\hat{u}}}){{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})=r({{\hat{0}}},{{\hat{u}}} ,{{\hat{u}}}){{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}}). \end{aligned}$$
which holds by the definition of \(r({{\hat{0}}},{{\hat{u}}},{{\hat{u}}})\) as an eigenvalue with a positive right eigenvector \({{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})\). Let \(\hat{p}_{i}={{\hat{p}}}_{i}({{\hat{0}}},{{\hat{u}}},{{\hat{u}}})\) denote the i-th column of \(P=P({{\hat{0}}},{{\hat{u}}},{{\hat{u}}})\). We want to take the Jacobian of both sides of equation (30) with respect to \({{\hat{u}}}\). To do this we let \(J_{{{\hat{y}}}}[{{\hat{\omega }}}({{\hat{y}}})]\) denote the Jacobian of a vector valued function \({{\hat{\omega }}}({{\hat{y}}})\) of a vector \({{\hat{y}}}.\)
The right side of (30) is a vector valued function of the form \(\tau ({{\hat{y}}}){{\hat{\omega }}}({{\hat{y}}})\) for a scalar valued function \(\tau ({{\hat{y}}}).\) Applying the general formula
$$\begin{aligned} J_{{{\hat{y}}}}[\tau ({{\hat{y}}}){{\hat{\omega }}}({{\hat{y}}})]={{\hat{\omega }}}({{\hat{y}}} )\nabla _{{{\hat{y}}}}\tau ({{\hat{y}}})^{T}+\tau ({{\hat{y}}})J_{{{\hat{y}}}}[{{\hat{\omega }}} ({{\hat{y}}})] \end{aligned}$$
and recalling (8) in Remark 2, we find that the Jacobian of the right side of (30) with respect to \({{\hat{u}}}\) is
$$\begin{aligned} {{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})\left( \nabla _{{{\hat{u}}}}r^{T}+\nabla _{{{\hat{v}}} }r^{T}\right) +rJ_{{{\hat{u}}}}[{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})]={{\hat{w}}}_{R}(\hat{0},{{\hat{u}}})\nabla _{{{\hat{v}}}}r^{T}+rJ_{{{\hat{u}}}}[{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})]. \end{aligned}$$
To calculate the Jacobian of the left-hand side of (30), we write
$$\begin{aligned} P{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})=\sum _{i=1}^{m}w_{i}^{R}({{\hat{0}}},{{\hat{u}}})\hat{p}_{i} \end{aligned}$$
where \(w_{i}^{R}({{\hat{0}}},{{\hat{u}}})\) are the components of the vector \({\hat{w}}_{R}({{\hat{0}}},{{\hat{u}}})\) and apply the product rule (31) to each term. Noting (7) in Remark 2, we get
$$\begin{aligned} PJ_{{{\hat{u}}}}[{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})]+\sum _{i=1}^{m}w_{i}^{R}(\hat{0},{{\hat{u}}})J_{{{\hat{v}}}}[{{\hat{p}}}_{i}]. \end{aligned}$$
Equating the Jacobians of the left and right sides of (30) we have
$$\begin{aligned} PJ_{{{\hat{u}}}}[{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})]+\sum \limits _{i=1}^{m}w_{i}^{R} ({{\hat{0}}},{{\hat{u}}})J_{{{\hat{v}}}}[{{\hat{p}}}_{i}]={{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}} )\nabla _{{{\hat{v}}}}r^{T}+rJ_{{{\hat{u}}}}[{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})]. \end{aligned}$$
or, equivalently,

$$\begin{aligned} (P-rI_{m})J_{{{\hat{u}}}}[{{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})]={{\hat{w}}}_{R}({{\hat{0}}} ,{{\hat{u}}})\nabla _{{{\hat{v}}}}r^{T}-\sum \limits _{i=1}^{m}w_{i}^{R}({{\hat{0}}},\hat{u})J_{{{\hat{v}}}}[{{\hat{p}}}_{i}] \end{aligned}$$
which in turn can be rewritten as the n equations
$$\begin{aligned} (P-rI_{m})\partial _{u_{i}}({{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}}))=(\partial _{v_{i} }rI_{m}-\partial _{v_{i}}P){{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})\;\;\text {for}\;\;1\le i\le n. \end{aligned}$$
The matrix \(P-rI_{m}\) is singular and, by the Fredholm alternative, the solvability of these equations implies that the n orthogonality conditions
$$\begin{aligned} {{\hat{w}}}_{L}^{T}({{\hat{0}}},{{\hat{u}}})(\partial _{v_{i}}rI_{m}-\partial _{v_{i}} P){{\hat{w}}}_{R}({{\hat{0}}},{{\hat{u}}})=0 \end{aligned}$$
are satisfied. Solving for \(\partial _{v_{i}}r\) and recalling that the eigenvectors are normalized so that \({{\hat{w}}}^{L}({{\hat{0}}},{{\hat{u}}})^{T}\hat{w}_{R}({{\hat{0}}},{{\hat{u}}})=1\), we find
$$\begin{aligned} \partial _{v_{i}}r={{\hat{w}}}_{L}^{T}({{\hat{0}}},{{\hat{u}}})\partial _{v_{i}}P{{\hat{w}}} _{R}({{\hat{0}}},{{\hat{u}}})\;\;\text {for}\;\;1\le i\le n. \end{aligned}$$
Since \(\partial _{v_{i}}^{0}r=0\) by definition of a critical trait vector \({{\hat{u}}}^{*}\), when setting \({{\hat{u}}}={{\hat{u}}}^{*}\) and \(r_{0}^{*}=1 \) in these expressions we get
$$\begin{aligned} {{\hat{w}}}_{L}^{T}\partial _{v_{k}}^{0}Q{{\hat{w}}}_{R}=0\;\;\text {for}\;\;1\le k\le n. \end{aligned}$$
Let \(u_{1,k}\) denote the scalar components of the vector \({{\hat{u}}}_{1}.\) Then
$$\begin{aligned} \nabla _{{{\hat{v}}}}^{0}q_{ij}^{T}{{\hat{u}}}_{1}=\sum _{k=1}^{n}u_{1,k}\partial _{v_{k}}^{0}q_{ij} \end{aligned}$$
$$\begin{aligned} \left[ \nabla _{{{\hat{v}}}}^{0}q_{ij}^{T}{{\hat{u}}}_{1}\right] =\sum _{k=1} ^{n}u_{1,k}\left[ \partial _{v_{k}}^{0}q_{ij}\right] =\sum _{k=1}^{n} u_{1,k}\partial _{v_{k}}^{0}Q. \end{aligned}$$
$$\begin{aligned} {{\hat{w}}}_{L}^{T}[\nabla _{{{\hat{v}}}}^{0}q_{ij}{{\hat{u}}}_{1}]{{\hat{w}}}_{R}=\sum _{k}u_{1,k}\left( {{\hat{w}}}^{L}\partial _{v_{k}}^{0}Q{{\hat{w}}}_{R}\right) \end{aligned}$$
Together with (33), this yields \({{\hat{w}}}_{L}^{T}[\nabla _{{{\hat{v}}}}^{0}q_{ij}^{T}{{\hat{u}}}_{1}]{{\hat{w}}}_{R}=0\).
\(\square \)
1. Department of Mathematics, University of Arizona, Tucson, USA
2. Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, USA
3. Department of Mathematics, Faculty of Sciences, University of Porto and LIAAD-INESC, Porto, Portugal
Cushing, J.M., Martins, F., Pinto, A.A. et al. J. Math. Biol. (2017) 75: 491. https://doi.org/10.1007/s00285-016-1091-4
Received 20 January 2016
Revised 03 October 2016
First Online 06 January 2017
Publisher Name Springer Berlin Heidelberg
Numerical digit
A numerical digit (often shortened to just digit) is a single symbol used alone (such as "1") or in combinations (such as "15"), to represent numbers in a positional numeral system. The name "digit" comes from the fact that the ten digits (Latin digiti meaning fingers)[1] of the hands correspond to the ten symbols of the common base 10 numeral system, i.e. the decimal (ancient Latin adjective decem meaning ten)[2] digits.
For a given numeral system with an integer base, the number of different digits required is given by the absolute value of the base. For example, the decimal system (base 10) requires ten digits (0 through 9), whereas the binary system (base 2) requires two digits (0 and 1).
Overview
In a basic digital system, a numeral is a sequence of digits, which may be of arbitrary length. Each position in the sequence has a place value, and each digit has a value. The value of the numeral is computed by multiplying each digit in the sequence by its place value, and summing the results.
Digital values
Each digit in a number system represents an integer. For example, in decimal the digit "1" represents the integer one, and in the hexadecimal system, the letter "A" represents the number ten. A positional number system has one unique digit for each integer from zero up to, but not including, the radix of the number system.
Thus in the positional decimal system, the numbers 0 to 9 can be expressed using their respective numerals "0" to "9" in the rightmost "units" position. The number 12 can be expressed with the numeral "2" in the units position, and with the numeral "1" in the "tens" position, to the left of the "2" while the number 312 can be expressed by three numerals: "3" in the "hundreds" position, "1" in the "tens" position, and "2" in the "units" position.
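The positional decomposition described above can be illustrated with a short Python sketch (the helper name `digits_of` is ours, not a standard function):

```python
def digits_of(n, base=10):
    """Return the digits of a non-negative integer, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, d = divmod(n, base)   # strip one digit per step
        digits.append(d)
    return digits

# 312 has "2" in the units place, "1" in the tens place, "3" in the hundreds place
assert digits_of(312) == [2, 1, 3]
```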
Computation of place values
The decimal numeral system uses a decimal separator, commonly a period in English, or a comma in other European languages,[3] to denote the "ones place" or "units place",[4][5][6] which has a place value one. Each successive place to the left of this has a place value equal to the place value of the previous digit times the base. Similarly, each successive place to the right of the separator has a place value equal to the place value of the previous digit divided by the base. For example, in the numeral 10.34 (written in base 10),
the 0 is immediately to the left of the separator, so it is in the ones or units place, and is called the units digit or ones digit;[7][8][9]
the 1 to the left of the ones place is in the tens place, and is called the tens digit;[10]
the 3 is to the right of the ones place, so it is in the tenths place, and is called the tenths digit;[11]
the 4 to the right of the tenths place is in the hundredths place, and is called the hundredths digit.[11]
The total value of the number is 1 ten, 0 ones, 3 tenths, and 4 hundredths. The zero, which contributes no value to the number, indicates that the 1 is in the tens place rather than the ones place.
The place value of any given digit in a numeral can be found by a simple calculation, which itself reflects the logic behind positional numeral systems. The digit is multiplied by the base raised to the exponent n − 1, where n is the position of the digit counted from the separator; n is taken positive for digits to the left of the separator. For a digit to the right of the separator, the digit is instead multiplied by the base raised to the exponent −n. For example, in the number 10.34 (written in base 10),
the 1 is second to the left of the separator, so based on calculation, its value is,
$n-1=2-1=1$
$1\times 10^{1}=10$
the 4 is second to the right of the separator, so based on calculation its value is,
$n=-2$
$4\times 10^{-2}={\frac {4}{100}}$
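The rule can be checked numerically. In this illustrative sketch (function and parameter names are ours), `position` counts places outward from the separator starting at 1, and `left` selects which side of the separator the digit is on:

```python
def place_value(digit, position, base=10, left=True):
    """Value contributed by one digit: digit * base**(n-1) on the left
    of the separator, digit * base**(-n) on the right."""
    exponent = position - 1 if left else -position
    return digit * base ** exponent

# the number 10.34 in base 10
total = (place_value(1, 2) + place_value(0, 1)           # "10"
         + place_value(3, 1, left=False)                 # tenths
         + place_value(4, 2, left=False))                # hundredths
assert abs(total - 10.34) < 1e-9
```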
History
Main article: History of the Hindu–Arabic numeral system
Glyphs used to represent digits of the Hindu–Arabic numeral system.
European (descended from the Western Arabic) 0123456789
Arabic-Indic ٠١٢٣٤٥٦٧٨٩
Eastern Arabic-Indic (Persian and Urdu) ۰۱۲۳۴۵۶۷۸۹
Devanagari (Hindi) ०१२३४५६७८९
Tamil ௧௨௩௪௫௬௭௮௯
The first true written positional numeral system is considered to be the Hindu–Arabic numeral system. This system was established by the 7th century in India,[12] but was not yet in its modern form because the use of the digit zero had not yet been widely accepted. Instead of a zero, the digits were sometimes marked with dots to indicate their significance, or a space was used as a placeholder. The first widely acknowledged use of zero was in 876.[13] The original numerals were very similar to the modern ones, even down to the glyphs used to represent digits.[12]
By the 13th century, Western Arabic numerals were accepted in European mathematical circles (Fibonacci used them in his Liber Abaci). They began to enter common use in the 15th century.[14] By the end of the 20th century virtually all non-computerized calculations in the world were done with Arabic numerals, which have replaced native numeral systems in most cultures.
Other historical numeral systems using digits
The exact age of the Maya numerals is unclear, but it is possible that it is older than the Hindu–Arabic system. The system was vigesimal (base 20), so it has twenty digits. The Mayas used a shell symbol to represent zero. Numerals were written vertically, with the ones place at the bottom. The Mayas had no equivalent of the modern decimal separator, so their system could not represent fractions.
The Thai numeral system is identical to the Hindu–Arabic numeral system except for the symbols used to represent digits. The use of these digits is less common in Thailand than it once was, but they are still used alongside Arabic numerals.
The rod numerals, the written forms of counting rods once used by Chinese and Japanese mathematicians, are a decimal positional system able to represent not only zero but also negative numbers. Counting rods themselves predate the Hindu–Arabic numeral system. The Suzhou numerals are variants of rod numerals.
Rod numerals (vertical)
0 1 2 3 4 5 6 7 8 9
–0 –1 –2 –3 –4 –5 –6 –7 –8 –9
Modern digital systems
In computer science
The binary (base 2), octal (base 8), and hexadecimal (base 16) systems, extensively used in computer science, all follow the conventions of the Hindu–Arabic numeral system.[15] The binary system uses only the digits "0" and "1", while the octal system uses the digits from "0" through "7". The hexadecimal system uses all the digits from the decimal system, plus the letters "A" through "F", which represent the numbers 10 to 15 respectively.[16] When the binary system is used, the term "bit(s)" is typically used as an alternative for "digit(s)", being a portmanteau of the term "binary digit". Similar terms exist for other number systems, such as "trit(s)" for a ternary system and "dit(s)" for the decimal system, although these are less frequently used.
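Python's built-in conversions give a quick way to see the same number written in each of these bases (a minimal sketch):

```python
n = 2020
assert bin(n) == "0b11111100100"   # binary, base 2
assert oct(n) == "0o3744"          # octal, base 8
assert hex(n) == "0x7e4"           # hexadecimal, base 16: 7*256 + 14*16 + 4

# int() parses a string of digits in a given base back to an integer
assert int("7e4", 16) == n
```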
Unusual systems
The ternary and balanced ternary systems have sometimes been used. They are both base 3 systems.[17]
Balanced ternary is unusual in having the digit values 1, 0 and –1. Balanced ternary turns out to have some useful properties and the system has been used in the experimental Russian Setun computers.[18]
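A conversion to balanced ternary can be sketched as follows (the function name is illustrative); a remainder of 2 is rewritten as 3 − 1, producing the digit −1 with a carry:

```python
def to_balanced_ternary(n):
    """Digits of n in balanced ternary (-1, 0, 1), least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:       # rewrite 2 as 3 - 1: emit -1 and carry 1
            r = -1
            n += 1
        digits.append(r)
    return digits

# 5 = 9 - 3 - 1, i.e. digits 1, -1, -1 from the most significant end
assert to_balanced_ternary(5) == [-1, -1, 1]
```

The same loop also handles negative numbers, one of the conveniences of balanced ternary noted above.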
Several authors in the last 300 years have noted a facility of positional notation that amounts to a modified decimal representation. Some advantages are cited for use of numerical digits that represent negative values. In 1840 Augustin-Louis Cauchy advocated use of signed-digit representation of numbers, and in 1928 Florian Cajori presented his collection of references for negative numerals. The concept of signed-digit representation has also been taken up in computer design.
Digits in mathematics
Despite the essential role of digits in describing numbers, they are relatively unimportant to modern mathematics.[19] Nevertheless, there are a few important mathematical concepts that make use of the representation of a number as a sequence of digits.
Digital roots
Main article: Digital root
The digital root is the single-digit number obtained by summing the digits of a given number, then summing the digits of the result, and so on until a single-digit number is obtained.[20]
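The definition translates directly into code (a minimal sketch for base 10):

```python
def digital_root(n):
    """Sum the decimal digits of n repeatedly until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

assert digital_root(65536) == 7   # 6+5+5+3+6 = 25, then 2+5 = 7
```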
Casting out nines
Main article: Casting out nines
Casting out nines is a procedure for checking arithmetic done by hand. To describe it, let $f(x)$ represent the digital root of $x$, as described above. Casting out nines makes use of the fact that if $A+B=C$, then $f(f(A)+f(B))=f(C)$. In the process of casting out nines, both sides of the latter equation are computed, and if they are not equal, the original addition must have been faulty.[21]
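The check can be sketched as follows (function names are ours). Note that it is a necessary condition only: an error that changes the sum by a multiple of 9 goes undetected.

```python
def digital_root(n):
    """Sum the decimal digits of n repeatedly until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def casting_out_nines_ok(a, b, c):
    """Necessary (not sufficient) check that a + b = c: compare digital roots."""
    return digital_root(digital_root(a) + digital_root(b)) == digital_root(c)

assert casting_out_nines_ok(1234, 5678, 6912)       # correct sum passes
assert not casting_out_nines_ok(1234, 5678, 6913)   # off-by-one is caught
```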
Repunits and repdigits
Main article: Repunit
Repunits are integers that are represented with only the digit 1. For example, 1111 (one thousand, one hundred and eleven) is a repunit. Repdigits are a generalization of repunits; they are integers represented by repeated instances of the same digit. For example, 333 is a repdigit. The primality of repunits is of interest to mathematicians.[22]
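Both notions are easy to express in code (function names are ours); the closed form for a repunit follows from the geometric series:

```python
def repunit(k, base=10):
    """The repunit with k ones in the given base, e.g. repunit(4) == 1111."""
    return (base ** k - 1) // (base - 1)

def is_repdigit(n):
    """True if the decimal representation of n repeats a single digit."""
    s = str(n)
    return len(set(s)) == 1

assert repunit(4) == 1111
assert is_repdigit(333) and not is_repdigit(334)
```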
Palindromic numbers and Lychrel numbers
Main article: Palindromic number
Palindromic numbers are numbers that read the same when their digits are reversed.[23] A Lychrel number is a positive integer that never yields a palindromic number when subjected to the iterative process of being added to itself with digits reversed.[24] The question of whether there are any Lychrel numbers in base 10 is an open problem in recreational mathematics; the smallest candidate is 196.[25]
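The reverse-and-add iteration behind the Lychrel question can be sketched directly; the iteration limit below is arbitrary, since no bound is known that settles the question for 196:

```python
def reverse_and_add_steps(n, limit=1000):
    """Iterate n -> n + reverse(n); return the number of steps until a
    palindrome appears, or None if none is found within `limit` steps."""
    for step in range(limit):
        s = str(n)
        if s == s[::-1]:
            return step
        n += int(s[::-1])
    return None

assert reverse_and_add_steps(56) == 1      # 56 + 65 = 121, a palindrome
assert reverse_and_add_steps(196) is None  # the smallest Lychrel candidate
```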
History of ancient numbers
Main article: History of writing ancient numbers
Counting aids, especially the use of body parts (counting on fingers), were certainly used in prehistoric times, as they are today. There are many variations. Besides counting ten fingers, some cultures have counted knuckles, the spaces between fingers, and toes as well as fingers. The Oksapmin culture of New Guinea uses a system of 27 upper body locations to represent numbers.[26]
To preserve numerical information, tallies carved in wood, bone, and stone have been used since prehistoric times.[27] Stone age cultures, including ancient indigenous American groups, used tallies for gambling, personal services, and trade-goods.
A method of preserving numeric information in clay was invented by the Sumerians between 8000 and 3500 BC.[28] This was done with small clay tokens of various shapes that were strung like beads on a string. Beginning about 3500 BC, clay tokens were gradually replaced by number signs impressed with a round stylus at different angles in clay tablets (originally containers for tokens) which were then baked. About 3100 BC, written numbers were dissociated from the things being counted and became abstract numerals.
Between 2700 and 2000 BC, in Sumer, the round stylus was gradually replaced by a reed stylus that was used to press wedge-shaped cuneiform signs in clay. These cuneiform number signs resembled the round number signs they replaced and retained the additive sign-value notation of the round number signs. These systems gradually converged on a common sexagesimal number system; this was a place-value system consisting of only two impressed marks, the vertical wedge and the chevron, which could also represent fractions.[29] This sexagesimal number system was fully developed at the beginning of the Old Babylonia period (about 1950 BC) and became standard in Babylonia.[30]
Sexagesimal numerals were a mixed radix system that retained the alternating base 10 and base 6 in a sequence of cuneiform vertical wedges and chevrons. By 1950 BC, this was a positional notation system. Sexagesimal numerals came to be widely used in commerce, but were also used in astronomical and other calculations. This system was exported from Babylonia and used throughout Mesopotamia, and by every Mediterranean nation that used standard Babylonian units of measure and counting, including the Greeks, Romans and Egyptians. Babylonian-style sexagesimal numeration is still used in modern societies to measure time (minutes per hour) and angles (degrees).[31]
History of modern numbers
In China, armies and provisions were counted using modular tallies of prime numbers. Unique numbers of troops and measures of rice appear as unique combinations of these tallies. A great convenience of modular arithmetic is that it is easy to multiply.[32] This makes use of modular arithmetic for provisions especially attractive. Conventional tallies are quite difficult to multiply and divide. In modern times modular arithmetic is sometimes used in digital signal processing.[33]
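The convenience mentioned above — multiplication done residue-by-residue — can be sketched with a small set of pairwise coprime moduli (the moduli and numbers here are illustrative):

```python
MODULI = (7, 11, 13)   # pairwise coprime; 7*11*13 = 1001 distinct values

def to_residues(n):
    """Represent n by its remainders modulo each modulus."""
    return tuple(n % m for m in MODULI)

def mul_residues(a_res, b_res):
    """Multiply two numbers given only their residues: componentwise products."""
    return tuple((a * b) % m for a, b, m in zip(a_res, b_res, MODULI))

a, b = 23, 35
assert mul_residues(to_residues(a), to_residues(b)) == to_residues(a * b)
```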
The oldest Greek system was that of the Attic numerals,[34] but in the 4th century BC they began to use a quasidecimal alphabetic system (see Greek numerals).[35] Jews began using a similar system (Hebrew numerals), with the oldest examples known being coins from around 100 BC.[36]
The Roman empire used tallies written on wax, papyrus and stone, and roughly followed the Greek custom of assigning letters to various numbers. The Roman numerals system remained in common use in Europe until positional notation came into common use in the 16th century.[37]
The Maya of Central America used a mixed base 18 and base 20 system, possibly inherited from the Olmec, including advanced features such as positional notation and a zero.[38] They used this system to make advanced astronomical calculations, including highly accurate calculations of the length of the solar year and the orbit of Venus.[39]
The Incan Empire ran a large command economy using quipu, tallies made by knotting colored fibers.[40] Knowledge of the encodings of the knots and colors was suppressed by the Spanish conquistadors in the 16th century, and has not survived although simple quipu-like recording devices are still used in the Andean region.
Some authorities believe that positional arithmetic began with the wide use of counting rods in China.[41] The earliest written positional records seem to be rod calculus results in China around 400. Zero was first used in India in the 7th century CE by Brahmagupta.[42]
The modern positional Arabic numeral system was developed by mathematicians in India, and passed on to Muslim mathematicians, along with astronomical tables brought to Baghdad by an Indian ambassador around 773.[43]
From India, the thriving trade between Islamic sultans and Africa carried the concept to Cairo. Arabic mathematicians extended the system to include decimal fractions, and Muḥammad ibn Mūsā al-Ḵwārizmī wrote an important work about it in the 9th century.[44] The modern Arabic numerals were introduced to Europe with the translation of this work in the 12th century in Spain and Leonardo of Pisa's Liber Abaci of 1201.[45] In Europe, the complete Indian system with the zero was derived from the Arabs in the 12th century.[46]
The binary system (base 2), was propagated in the 17th century by Gottfried Leibniz.[47] Leibniz had developed the concept early in his career, and had revisited it when he reviewed a copy of the I Ching from China.[48] Binary numbers came into common use in the 20th century because of computer applications.[47]
Numerals in most popular systems
West Arabic 0 1 2 3 4 5 6 7 8 9
Asomiya (Assamese); Bengali ০ ১ ২ ৩ ৪ ৫ ৬ ৭ ৮ ৯
Devanagari ० १ २ ३ ४ ५ ६ ७ ८ ९
East Arabic ٠ ١ ٢ ٣ ٤ ٥ ٦ ٧ ٨ ٩
Persian ٠ ١ ٢ ٣ ۴ ۵ ۶ ٧ ٨ ٩
Gurmukhi ੦ ੧ ੨ ੩ ੪ ੫ ੬ ੭ ੮ ੯
Urdu
Chinese (everyday) 〇 一 二 三 四 五 六 七 八 九
Chinese (Traditional) 零 壹 貳 叄 肆 伍 陸 柒 捌 玖
Chinese (Simplified) 零 壹 贰 叁 肆 伍 陆 柒 捌 玖
Chinese (Suzhou) 〇 〡 〢 〣 〤 〥 〦 〧 〨 〩
Ge'ez (Ethiopic) ፩ ፪ ፫ ፬ ፭ ፮ ፯ ፰ ፱
Gujarati ૦ ૧ ૨ ૩ ૪ ૫ ૬ ૭ ૮ ૯
Hieroglyphic Egyptian 𓏺 𓏻 𓏼 𓏽 𓏾 𓏿 𓐀 𓐁 𓐂
Japanese 零/〇 一 二 三 四 五 六 七 八 九
Kannada ೦ ೧ ೨ ೩ ೪ ೫ ೬ ೭ ೮ ೯
Khmer (Cambodia) ០ ១ ២ ៣ ៤ ៥ ៦ ៧ ៨ ៩
Lao ໐ ໑ ໒ ໓ ໔ ໕ ໖ ໗ ໘ ໙
Limbu ᥆ ᥇ ᥈ ᥉ ᥊ ᥋ ᥌ ᥍ ᥎ ᥏
Malayalam ൦ ൧ ൨ ൩ ൪ ൫ ൬ ൭ ൮ ൯
Mongolian ᠐ ᠑ ᠒ ᠓ ᠔ ᠕ ᠖ ᠗ ᠘ ᠙
Burmese ၀ ၁ ၂ ၃ ၄ ၅ ၆ ၇ ၈ ၉
Oriya ୦ ୧ ୨ ୩ ୪ ୫ ୬ ୭ ୮ ୯
Roman I II III IV V VI VII VIII IX
Shan ႐ ႑ ႒ ႓ ႔ ႕ ႖ ႗ ႘ ႙
Sinhala 𑇡 𑇢 𑇣 𑇤 𑇥 𑇦 𑇧 𑇨 𑇩
Tamil ௦ ௧ ௨ ௩ ௪ ௫ ௬ ௭ ௮ ௯
Telugu ౦ ౧ ౨ ౩ ౪ ౫ ౬ ౭ ౮ ౯
Thai ๐ ๑ ๒ ๓ ๔ ๕ ๖ ๗ ๘ ๙
Tibetan ༠ ༡ ༢ ༣ ༤ ༥ ༦ ༧ ༨ ༩
New Tai Lue ᧐ ᧑ ᧒ ᧓ ᧔ ᧕ ᧖ ᧗ ᧘ ᧙
Javanese ꧐ ꧑ ꧒ ꧓ ꧔ ꧕ ꧖ ꧗ ꧘ ꧙
Additional numerals
1 5 10 20 30 40 50 60 70 80 90 100 500 1000 10000 10^8
Chinese (simple) 一 五 十 二十 三十 四十 五十 六十 七十 八十 九十 百 五百 千 万 亿
Chinese (complex) 壹 伍 拾 贰拾 叁拾 肆拾 伍拾 陆拾 柒拾 捌拾 玖拾 佰 伍佰 仟 萬 億
Ge'ez (Ethiopic) ፩ ፭ ፲ ፳ ፴ ፵ ፶ ፷ ፸ ፹ ፺ ፻ ፭፻ ፲፻ ፼ ፼፼
Roman I V X XX XXX XL L LX LXX LXXX XC C D M X
See also
• Hexadecimal
• Binary digit (bit), Quantum binary digit (qubit)
• Ternary digit (trit), Quantum ternary digit (qutrit)
• Decimal digit (dit)
• Hexadecimal digit (Hexit)
• Natural digit (nat, nit)
• Naperian digit (nepit)
• Significant digit
• Large numbers
• Text figures
• Abacus
• History of large numbers
• List of numeral system topics
Numeral notation in various scripts
• Arabic numerals
• Armenian numerals
• Babylonian numerals
• Balinese numerals
• Bengali numerals
• Burmese numerals
• Chinese numerals
• Cistercian numerals
• Dzongkha numerals
• Eastern Arabic numerals
• Georgian numerals
• Greek numerals
• Gujarati numerals
• Gurmukhi numerals
• Hebrew numerals
• Hokkien numerals
• Indian numerals
• Japanese numerals
• Javanese numerals
• Khmer numerals
• Korean numerals
• Lao numerals
• Mayan numerals
• Mongolian numerals
• Quipu
• Rod numerals
• Roman numerals
• Sinhala numerals
• Suzhou numerals
• Tamil numerals
• Thai numerals
• Vietnamese numerals
References
1. ""Digit" Origin". dictionary.com. Retrieved 23 May 2015.
2. ""Decimal" Origin". dictionary.com. Retrieved 23 May 2015.
3. Weisstein, Eric W. "Decimal Point". mathworld.wolfram.com. Retrieved 2020-07-22.
4. Snyder, Barbara Bode (1991). Practical math for the technician : the basics. Englewood Cliffs, N.J.: Prentice Hall. p. 225. ISBN 0-13-251513-X. OCLC 22345295. units or ones place
5. Andrew Jackson Rickoff (1888). Numbers Applied. D. Appleton & Company. pp. 5–. units' or ones' place
6. John William McClymonds; D. R. Jones (1905). Elementary Arithmetic. R.L. Telfer. pp. 17–18. units' or ones' place
7. Richard E. Johnson; Lona Lee Lendsey; William E. Slesnick (1967). Introductory Algebra for College Students. Addison-Wesley Publishing Company. p. 30. units' or ones', digit
8. R. C. Pierce; W. J. Tebeaux (1983). Operational Mathematics for Business. Wadsworth Publishing Company. p. 29. ISBN 978-0-534-01235-9. ones or units digit
9. Max A. Sobel (1985). Harper & Row algebra one. Harper & Row. p. 282. ISBN 978-0-06-544000-3. ones, or units, digit
10. Max A. Sobel (1985). Harper & Row algebra one. Harper & Row. p. 277. ISBN 978-0-06-544000-3. every two-digit number can be expressed as 10t+u when t is the tens digit
11. Taggart, Robert (2000). Mathematics. Decimals and percents. Portland, Me.: J. Weston Walch. pp. 51–54. ISBN 0-8251-4178-8. OCLC 47352965.
12. O'Connor, J. J. and Robertson, E. F. Arabic Numerals. January 2001. Retrieved on 2007-02-20.
13. Bill Casselman (February 2007). "All for Nought". Feature Column. AMS.
14. Bradley, Jeremy. "How Arabic Numbers Were Invented". www.theclassroom.com. Retrieved 2020-07-22.
15. Ravichandran, D. (2001-07-01). Introduction To Computers And Communication. Tata McGraw-Hill Education. pp. 24–47. ISBN 978-0-07-043565-0.
16. "Hexadecimals". www.mathsisfun.com. Retrieved 2020-07-22.
17. Hayes, Brian (November 2001). Article on ternary number systems (PDF). American Scientist. Archived from the original on 2019-10-30: https://web.archive.org/web/20191030114823/http://bit-player.org/wp-content/extras/bph-publications/AmSci-2001-11-Hayes-ternary.pdf. Retrieved 2020-07-22.
18. "Development of ternary computers at Moscow State University. Russian Virtual Computer Museum". www.computer-museum.ru. Retrieved 2020-07-22.
19. Kirillov, A.A. "What are numbers?" (PDF). math.upenn. p. 2. True, if you open a modern mathematical journal and try to read any article, it is very probable that you will see no numbers at all.
20. Weisstein, Eric W. "Digital Root". mathworld.wolfram.com. Retrieved 2020-07-22.
21. Weisstein, Eric W. "Casting Out Nines". mathworld.wolfram.com. Retrieved 2020-07-22.
22. Weisstein, Eric W. "Repunit". MathWorld.
23. Weisstein, Eric W. "Palindromic Number". mathworld.wolfram.com. Retrieved 2020-07-22.
24. Weisstein, Eric W. "Lychrel Number". mathworld.wolfram.com. Retrieved 2020-07-22.
25. Garcia, Stephan Ramon; Miller, Steven J. (2019-06-13). 100 Years of Math Milestones: The Pi Mu Epsilon Centennial Collection. American Mathematical Soc. pp. 104–105. ISBN 978-1-4704-3652-0.
26. Saxe, Geoffrey B. (2012). Cultural development of mathematical ideas : Papua New Guinea studies. Esmonde, Indigo. Cambridge: Cambridge University Press. pp. 44–45. ISBN 978-1-139-55157-1. OCLC 811060760. The Okspamin body system includes 27 body parts...
27. Tuniz, C. (Claudio) (24 May 2016). Humans : an unauthorized biography. Tiberi Vipraio, Patrizia, Haydock, Juliet. Switzerland. p. 101. ISBN 978-3-319-31021-3. OCLC 951076018. ...even notches cut into sticks made out of wood, bone or other materials dating back 30,000 years (often referred to as "notched tallies").
28. Ifrah, Georges (1985). From one to zero : a universal history of numbers. New York: Viking. p. 154. ISBN 0-670-37395-8. OCLC 11237558. And so , by the beginning of the third millennium B . C . , the Sumerians and Elamites had adopted the practice of recording numerical information on small , usually rectangular clay tablets
29. London Encyclopædia, Or, Universal Dictionary of Science, Art, Literature, and Practical Mechanics: Comprising a Popular View of the Present State of Knowledge; Illustrated by Numerous Engravings and Appropriate Diagrams. T. Tegg. 1845. p. 226.
30. Neugebauer, O. (2013-11-11). Astronomy and History Selected Essays. Springer Science & Business Media. ISBN 978-1-4612-5559-8.
31. "Sexagesimal System". Springer Reference. 2011. doi:10.1007/springerreference_78190.
32. Knuth, Donald Ervin (1998). The art of computer programming. Reading, Mass.: Addison-Wesley Pub. Co. ISBN 0-201-03809-9. OCLC 823849. The advantages of a modular representation are that addition, subtraction, and multiplication are very simple
33. Echtle, Klaus; Hammer, Dieter; Powell, David (1994-09-21). Dependable Computing - EDCC-1: First European Dependable Computing Conference, Berlin, Germany, October 4-6, 1994. Proceedings. Springer Science & Business Media. p. 439. ISBN 978-3-540-58426-1.
34. Woodhead, A. G. (Arthur Geoffrey) (1981). The study of Greek inscriptions (2nd ed.). Cambridge: Cambridge University Press. pp. 109–110. ISBN 0-521-23188-4. OCLC 7736343.
35. Ushakov, Igor (22 June 2012). In the Beginning Was the Number (2). Lulu.com. ISBN 978-1-105-88317-0.
36. Chrisomalis, Stephen (2010). Numerical notation : a comparative history. Cambridge: Cambridge University Press. p. 157. ISBN 978-0-511-67683-3. OCLC 630115876. The first safely dated instance in which the use of Hebrew alphabetic numerals is certain is on coins from the reign of Hasmonean king Alexander Janneus(103 to 76 BC)...
37. Silvercloud, Terry David (2007). The Shape of God: Secrets, Tales, and Legends of the Dawn Warriors. Terry David Silvercloud. p. 152. ISBN 978-1-4251-0836-6.
38. Wheeler, Ruric E.; Wheeler, Ed R. (2001), Modern Mathematics, Kendall Hunt, p. 130, ISBN 9780787290627.
39. Swami, Devamrita (2002). Searching for Vedic India. The Bhaktivedanta Book Trust. ISBN 978-0-89213-350-5. Maya astronomy finely calculated both the duration of the solar year and the synodical revolution of Venus
40. "Quipu | Incan counting tool". Encyclopedia Britannica. Retrieved 2020-07-23.
41. Chen, Sheng-Hong (2018-06-21). Computational Geomechanics and Hydraulic Structures. Springer. p. 8. ISBN 978-981-10-8135-4. … definitely before 400 BC they possessed a similar positional notation based on the ancient counting rods.
42. "Foundations of mathematics - The reexamination of infinity". Encyclopedia Britannica. Retrieved 2020-07-23.
43. The Encyclopedia Britannica. 1899. p. 626.
44. Struik, Dirk J. (Dirk Jan) (1967). A concise history of mathematics (3d rev. ed.). New York: Dover Publications. ISBN 0-486-60255-9. OCLC 635553.
45. Sigler, Laurence (2003-11-11). Fibonacci's Liber Abaci: A Translation into Modern English of Leonardo Pisano's Book of Calculation. Springer Science & Business Media. ISBN 978-0-387-40737-1.
46. Deming, David (2010). Science and technology in world history. Volume 1, The ancient world and classical civilization. Jefferson, N.C.: McFarland & Co. p. 86. ISBN 978-0-7864-5657-4. OCLC 650873991.
47. Yanushkevich, Svetlana N. (2008). Introduction to logic design. Shmerko, Vlad P. Boca Raton: CRC Press. p. 56. ISBN 978-1-4200-6094-2. OCLC 144226528.
48. Sloane, Sarah (2005). The I Ching for writers : finding the page inside you. Novato, Calif.: New World Library. p. 9. ISBN 1-57731-496-4. OCLC 56672043.
Quadratic forms, reduction of
The isolation of "reduced" forms in each class of quadratic forms over a given ring $ R $, i.e. of (one or several) "standard" forms in the class. The main aim of the reduction of quadratic forms is the solution of the problem of equivalence of quadratic forms: To establish whether or not two given quadratic forms $ q $ and $ r $ are equivalent over $ R $, and in the case of their equivalence to find (or describe) all the invertible matrices $ U $ over $ R $ taking $ q $ to $ r $( see Quadratic form). For the solution of the latter problem it suffices to know just one such matrix $ U _ {0} $ and all the automorphisms $ V $ of the form $ q $, since then $ U = V U _ {0} $. One usually has in mind equivalence of quadratic forms over $ \mathbf Z $, where one is often considering the entire collection of quadratic forms over $ \mathbf R $ and their classes over $ \mathbf Z $. There are fundamental differences in the reduction theory of positive-definite and indefinite quadratic forms.
The reduction of positive-definite quadratic forms.
There are different methods for the reduction over $ \mathbf Z $ of real positive-definite quadratic forms. Of these the most extensive and widely studied is the Minkowski (or Hermite–Minkowski) reduction method. The most general method is Venkov's method. Other prevalent reductions are those of E. Selling $ ( n = 3 ) $ and H.F. Charve $ ( n = 4 ) $.
To determine a reduced quadratic form
$$ q ( x) = B [ x ] = \ \sum _ {i , j = 1 } ^ { n } b _ {ij} x _ {i} x _ {j} ,\ \ b _ {ij} \in \mathbf R ,\ \ \| b _ {ij} \| = B , $$
means to define in the positivity cone $ \mathfrak P $ of the coefficient space $ \mathbf R ^ {N} $, $ N = n ( n + 1 ) / 2 $, a domain of reduction $ \mathfrak G $ such that $ q ( x) $ is reduced if and only if $ q = ( b _ {11} \dots b _ {n-} 1,n ) \in \mathfrak G $. It is desirable that $ \mathfrak G $ possesses good geometric properties (such as simple connectedness, convexity, etc.) and is a fundamental domain of the group $ \Gamma $ of integer transformations of determinant $ \pm 1 $. A domain $ F \subset \mathfrak P $ is called a fundamental domain of reduction of positive-definite quadratic forms if $ F $ is an open domain in $ \mathbf R ^ {N} $ and if: 1) for each $ q \in \mathfrak P $ there is an equivalent quadratic form $ h \simeq q $( $ \mathbf Z $) for which $ h \in \overline{F}\; $; and 2) if $ h _ {1} , h _ {2} \in F $ and $ h _ {1} \simeq h _ {2} $( $ \mathbf Z $), then $ h _ {1} = h _ {2} $.
a) Minkowski reduction of a quadratic form. A positive-definite quadratic form $ q ( x) $ is Minkowski reduced if for any $ k = 1 \dots n $ and any integers $ l _ {1} \dots l _ {n} $ with greatest common divisor $ ( l _ {1} \dots l _ {n} ) = 1 $,
$$ \tag{1 } a ( l _ {1} \dots l _ {n} ) \geq b _ {kk} . $$
From the infinite number of inequalities (1) for the coefficients $ b _ {ij} $ one can extract a finite number such that the remaining inequalities follow from them. In the coefficient space $ \mathbf R ^ {N} $ the set of Minkowski-reduced forms is an infinite complex pyramid (a gonohedron) with a finite number of faces, called the domain of Minkowski reduction (or Hermite–Minkowski gonohedron) $ \mathfrak E = \mathfrak E _ {n} $; $ \mathfrak E $ is a closed set, $ \mathfrak E \subset \overline{ {\mathfrak P }}\; $. For $ n \leq 7 $ the faces of $ \mathfrak E _ {n} $ have been calculated (see ).
There exists a constant $ \lambda _ {n} $ such that if the quadratic form $ q ( x) $ is Minkowski reduced, then
$$ \prod _ {i=1} ^ { n } b _ {ii} \leq \lambda _ {n} d ( q) , $$
where $ d ( q) = \mathop{\rm det} \| b _ {ij} \| $ is the determinant of $ q ( x) $.
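For $ n = 2 $ one may take $ \lambda _ {2} = 4 / 3 $ (sharp, attained by forms with $ 2 b = a = c $ such as $ ( 2 , 1 , 2 ) $): if $ 0 \leq 2 b \leq a \leq c $, then $ b ^ {2} \leq a ^ {2} / 4 \leq a c / 4 $, so $ d ( q) \geq \frac{3}{4} a c $. A quick numerical check of this bound (an illustrative Python sketch of ours; the function name is not from the article):

```python
import random

def minkowski_reduced_2(a, b, c):
    """Minkowski reduction conditions for n = 2: 0 <= 2b <= a <= c."""
    return 0 <= 2 * b <= a <= c

# lambda_2 = 4/3: for every reduced form, a*c <= (4/3)*d(q), i.e. 3*a*c <= 4*d.
random.seed(0)
for _ in range(10_000):
    a = random.randint(1, 50)
    c = random.randint(a, 50)       # guarantees a <= c
    b = random.randint(0, a // 2)   # guarantees 0 <= 2b <= a
    d = a * c - b * b
    assert minkowski_reduced_2(a, b, c) and 3 * a * c <= 4 * d
```

Equality $ 3 a c = 4 d ( q) $ holds exactly when $ a = c = 2 b $, so the constant $ 4/3 $ cannot be improved.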
Each real positive-definite quadratic form is equivalent over $ \mathbf Z $ to a Minkowski-reduced quadratic form. There is an algorithm for the reduction (for finding a reduced form that is equivalent to a given one) (see [8], [15]).
For $ n = 2 $, $ q = q ( x , y ) = ( a , b , c ) = a x ^ {2} + 2 b x y + c y ^ {2} $, $ a , b , c \in \mathbf R $, $ a > 0 $, $ d ( q) > 0 $, the conditions of being reduced have the form
$$ 0 \leq 2 b \leq a \leq c . $$
If one restricts oneself to proper equivalence (when only integer-valued transformations with determinant $ + 1 $ are admitted), then the domain of reduction has the form $ 0 \leq 2 | b | \leq a \leq c $ (the Lagrange–Gauss reduction conditions). The set of all inequivalent (properly-) reduced quadratic forms can be written as the union $ F \cup F _ {1} \cup F _ {2} $, where
$$ F : 2 | b | < a < c , $$
$$ F _ {1} : 0 \leq 2 b < a = c ,\ F _ {2} : 0 < 2 b = a \leq c . $$
For $ n = 2 $ there is an algorithm for Gauss reduction, according to which one has to go over from a form not satisfying the Lagrange–Gauss conditions to its "neighbour",
$$ ( a ^ \prime , b ^ \prime , c ^ \prime ) = ( a , b , c ) \ \left \| \begin{array}{cr} 0 &- 1 \\ 1 & k \\ \end{array} \right \| ,\ \ a ^ \prime = c , $$
where the integer $ k $ is chosen such that $ | b ^ \prime | \leq c / 2 $. For any real quadratic form $ ( a , b , c ) $ the algorithm terminates after a finite number of steps.
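For integral forms the algorithm admits a direct transcription (an illustrative Python sketch of ours, not part of the original article). Expanding the action of the matrix with rows $ ( 0 , - 1 ) $ and $ ( 1 , k ) $ gives $ a ^ \prime = c $, $ b ^ \prime = c k - b $, $ c ^ \prime = a - 2 b k + c k ^ {2} $:

```python
from fractions import Fraction

def is_reduced(a, b, c):
    """Lagrange-Gauss conditions (proper equivalence): 0 <= 2|b| <= a <= c."""
    return 2 * abs(b) <= a <= c

def neighbour(a, b, c):
    """One neighbour step: a' = c, b' = c*k - b, c' = a - 2*b*k + c*k*k,
    with the integer k chosen so that |b'| <= c/2."""
    k = round(Fraction(b, c))  # nearest integer to b/c (exact arithmetic)
    return c, c * k - b, a - 2 * b * k + c * k * k

def gauss_reduce(a, b, c):
    """Reduce a positive-definite integral binary form a x^2 + 2b x y + c y^2."""
    assert a > 0 and a * c - b * b > 0, "form must be positive definite"
    while not is_reduced(a, b, c):
        a, b, c = neighbour(a, b, c)
    return a, b, c
```

Each neighbour step is a proper (determinant $+1$) transformation, so the determinant $ d = a c - b ^ {2} $ of the form is preserved throughout.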
If $ q = ( a , b , c ) $, $ a , b , c \in \mathbf Z $, with greatest common divisor $ ( a , b , c ) = 1 $, then for $ d ( q) = a c - b ^ {2} > 3 $ there are only two automorphisms (of determinant 1); for $ d ( q) = 3 $, six automorphisms; and for $ d ( q) = 1 $, four automorphisms.
b) Venkov reduction of a quadratic form. This is a reduction method $ ( \mathfrak V _ \phi ) $, depending on a parameter $ \phi $, for an arbitrary real positive-definite $ n $- ary quadratic form $ q $( see [3]). A quadratic form $ q $ is said to be $ \phi $- reducible if
$$ ( q , \overline \phi ) \leq ( q , \overline \phi S ) $$

for all integer-valued $ ( n \times n ) $-matrices $ S $ of determinant 1; here $ \overline \phi = d ( \phi ) \phi ^ {-1} $ is the form reciprocal to $ \phi $, $ \overline \phi S $ is the quadratic form obtained from $ \overline \phi $ by the transformation $ S $, and $ ( q _ {1} , q _ {2} ) $ is the Voronoi semi-invariant, defined as follows: if $ q _ {1} = B _ {1} [ x ] $, $ B _ {1} = \| b _ {ij} ^ {(1)} \| $, $ q _ {2} = B _ {2} [ x ] $, $ B _ {2} = \| b _ {ij} ^ {(2)} \| $, then

$$ ( q _ {1} , q _ {2} ) = \ \sum _ {i , j = 1 } ^ { n } b _ {ij} ^ {(1)} b _ {ij} ^ {(2)} . $$
The set of $ \phi $- reducible quadratic forms in the coefficient space $ \mathbf R ^ {N} $ is a convex gonohedron $ \mathfrak V _ \phi $ with a finite number of faces lying in $ \mathfrak P $. If $ \phi = x _ {1} ^ {2} + \dots + x _ {n} ^ {2} $ and $ n \leq 6 $, then $ \mathfrak V _ \phi $ is the same as the domain of Minkowski reduction.
c) Selling and Charve reduction of a quadratic form. If in the Venkov reduction one puts $ \phi = \phi _ {n} ^ {(0)} = \sum _ {i \leq j } x _ {i} x _ {j} $, where $ \phi _ {n} ^ {(0)} $ is the Voronoi first perfect form, then for $ n = 3 $ one obtains the Selling reduction, and for $ n = 4 $ the Charve reduction (see , [6]).
The reduction of indefinite quadratic forms.
This is in principle more complicated than that of positive quadratic forms. There are no fundamental domains for them. Only for $ n = 2 $ is there a definitive reduction theory of quadratic forms over $ \mathbf Z $.
a) Reduction of indefinite binary quadratic forms. Let
$$ q = q ( x , y ) = \ ( a , b , c ) = a x ^ {2} + 2 b x y + c y ^ {2} ,\ \ a , b , c \in \mathbf Z , $$
be a quadratic form with determinant $ d = a c - b ^ {2} = - | d | $, where $ | d | $ is not a perfect square. Associated with $ q $ is the quadratic equation $ a z ^ {2} + 2 b z + c = 0 $ and its distinct irrational roots

$$ \Omega = \Omega ( q) = \ \frac{- b - \sqrt {| d | } }{a} ,\ \ \omega = \omega ( q) = \ \frac{- b + \sqrt {| d | } }{a} . $$
The form $ q $ is said to be reduced if $ | \Omega | > 1 $, $ | \omega | < 1 $, $ \Omega \omega < 0 $. These conditions are equivalent to the conditions
$$ 0 < \sqrt {| d | } - b < | a | < \sqrt {| d | } + b $$
(and also to the conditions $ 0 < \sqrt {| d | } - b < | c | < \sqrt {| d | } + b $). The number of reduced integer-valued quadratic forms of given determinant is finite. Every quadratic form is equivalent to a reduced one. There is an algorithm for reduction, using continued fractions (see [1]).
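These finiteness and reduction conditions are easy to verify computationally. The following sketch (our own Python, not from the article) enumerates all reduced integral forms with $ b ^ {2} - a c = | d | $; since $ \sqrt{| d |} $ is irrational, every comparison can be carried out exactly in integers:

```python
def reduced_indefinite_forms(D):
    """All integral forms (a, b, c) with b*b - a*c = D (determinant
    d = a*c - b*b = -D), D > 0 not a perfect square, satisfying
    0 < sqrt(D) - b < |a| < sqrt(D) + b.  sqrt(D) is irrational, so the
    inequalities can be tested by comparing squares of integers."""
    forms = []
    b = 1
    while b * b < D:                      # 0 < sqrt(D) - b
        ac = b * b - D                    # a*c = b^2 - D < 0
        for a in range(-D, D + 1):        # |a| < sqrt(D) + b <= D + 1 always
            if a == 0 or ac % a != 0:
                continue
            c = ac // a
            # sqrt(D) - b < |a|  <=>  D < (|a| + b)^2
            # |a| < sqrt(D) + b  <=>  |a| <= b  or  (|a| - b)^2 < D
            if D < (abs(a) + b) ** 2 and (abs(a) <= b or (abs(a) - b) ** 2 < D):
                forms.append((a, b, c))
        b += 1
    return forms
```

For $ | d | = 5 $ this produces exactly the four forms $ ( 2 , 1 , - 2 ) $, $ ( - 2 , 1 , 2 ) $, $ ( 1 , 2 , - 1 ) $, $ ( - 1 , 2 , 1 ) $.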
For a reduced quadratic form there exists precisely one "right neighbouring" and precisely one "left neighbouring" reduced quadratic form (see [1]). By going over from a reduced quadratic form to its "neighbouring" form, one obtains a doubly-infinite chain of reduced forms. This chain is periodic. A finite segment of inequivalent forms of this chain is called a period. Two reduced forms are properly equivalent if and only if one of them is in the period of the other.
The foregoing theory is valid also for forms with real coefficients $ a , b , c $ if $ \Omega ( q) $ and $ \omega ( q) $ are distinct irrational roots; however, in this case a chain of reduced forms need not be periodic.
All proper automorphisms (of determinant 1) of a quadratic form with greatest common divisor $ ( a , b , c ) = 1 $, greatest common divisor $ ( a , 2 b , c ) = \sigma $, $ d = a c - b ^ {2} < 0 $, have the form
$$ \left \| \begin{array}{cc} \frac{t - b u }{\sigma} &- \frac{c u }{\sigma} \\ \frac{a u }{\sigma} & \frac{t + b u }{\sigma} \\ \end{array} \right \| = \pm \left \| \begin{array}{cc} \frac{T - b U }{\sigma} &- \frac{c U }{\sigma} \\ \frac{a U }{\sigma} & \frac{T + b U }{\sigma} \\ \end{array} \right \| ^ {n} , $$

$$ n = 0 , \pm 1 , \pm 2 ,\dots $$
where $ ( t , u ) $ runs through all the solutions of the Pell equation $ t ^ {2} + d u ^ {2} = \sigma ^ {2} $ and $ ( T , U ) $ is the fundamental solution of this equation, that is, the smallest positive solution. Improper automorphisms (of determinant $ - 1 $) exist only for two-sided (or ambiguous) forms, that is, forms whose class coincides with that of its inverse (see [1]). The subgroup of proper automorphisms of a two-sided form has index 2 in the group of all automorphisms.
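As an illustration (our own Python sketch, not part of the article; the brute-force search is adequate only for small $ | d | $), one can find the automorphism built from the fundamental solution and verify that it fixes the form. For $ q = x ^ {2} - 2 y ^ {2} $ one has $ \sigma = 1 $, the Pell equation $ t ^ {2} - 2 u ^ {2} = 1 $ with fundamental solution $ ( T , U ) = ( 3 , 2 ) $, and the matrix with rows $ ( 3 , 4 ) $ and $ ( 2 , 3 ) $:

```python
import math

def fundamental_automorphism(a, b, c):
    """Proper automorphism of a x^2 + 2b x y + c y^2 built from the
    fundamental (smallest u > 0) solution of t^2 + d u^2 = sigma^2,
    where d = a*c - b*b < 0 is not minus a perfect square and
    sigma = gcd(a, 2b, c).  Solutions not giving an integral matrix
    are skipped."""
    d = a * c - b * b
    sigma = math.gcd(a, math.gcd(2 * b, c))
    u = 1
    while True:
        t2 = sigma * sigma - d * u * u        # t^2 = sigma^2 + |d| u^2
        t = math.isqrt(t2)
        if (t * t == t2 and (t - b * u) % sigma == 0 and (t + b * u) % sigma == 0
                and (c * u) % sigma == 0 and (a * u) % sigma == 0):
            return [[(t - b * u) // sigma, -(c * u) // sigma],
                    [(a * u) // sigma, (t + b * u) // sigma]]
        u += 1

def transforms_to(a, b, c, S):
    """Coefficients of the form after the substitution x -> S x (S^T B S)."""
    p, q = S[0]
    r, s = S[1]
    return (a * p * p + 2 * b * p * r + c * r * r,
            a * p * q + b * (p * s + q * r) + c * r * s,
            a * q * q + 2 * b * q * s + c * s * s)
```

An automorphism is precisely a matrix $ S $ with `transforms_to(a, b, c, S) == (a, b, c)`.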
Indefinite integer-valued quadratic forms of determinant $ d = - s ^ {2} $, $ s > 0 $, $ s \in \mathbf Z $, reduce to the form $ ( 0 , - s , r ) $, where $ r \in \mathbf Z $, $ 0 \leq r < 2 s $. Two quadratic forms $ ( 0 , - s , r _ {1} ) $ and $ ( 0 , - s , r _ {2} ) $, $ 0 \leq r _ {1} , r _ {2} < 2 s $, are properly equivalent if and only if $ r _ {1} = r _ {2} $. All the automorphisms of such forms are
$$ \pm \left \| \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right \| $$
(see [1]).
b) Reduction of indefinite $ n $-ary quadratic forms. Let $ q ( x) = B [ x ] = x ^ {T} B x $ be such a form with real coefficients and $ d ( q) \neq 0 $. Then there exists a change of variables (over $ \mathbf R $), $ y = S x $, such that

$$ q ( x) = y _ {1} ^ {2} + \dots + y _ {t} ^ {2} - y _ {t+1} ^ {2} - \dots - y _ {n} ^ {2} , $$
where $ ( t , n - t ) $ is the signature of $ q $. Let
$$ D = \left \| \begin{array}{cccccc} 1 &{} &{} &{} &{} & 0 \\ {} &\cdot &{} &{} &{} &{} \\ {} &{} & 1 &{} &{} &{} \\ {} &{} &{} &- 1 &{} &{} \\ {} &{} &{} &{} &\cdot &{} \\ 0 &{} &{} &{} &{} &- 1 \\ \end{array} \right \| $$

($ t $ diagonal entries $ 1 $; $ n - t $ diagonal entries $ - 1 $), so that $ B = S ^ {T} D S $. The quadratic form $ q ( x) $ is associated with the positive-definite quadratic form
$$ h _ {S} ( x) = y _ {1} ^ {2} + \dots + y _ {t} ^ {2} + y _ {t+1} ^ {2} + \dots + y _ {n} ^ {2} = ( S ^ {T} S ) [ x ] . $$
The form $ q $ is called (Hermite) reducible if there is a transformation $ S $ of the form $ q $ into a sum of squares such that $ h _ {S} ( x) $ is (for example, Minkowski) reduced.
Equivalent to this definition of a reduced quadratic form is the following [13], [14]. Let $ \Phi ( q) $ be the set of matrices $ H $ over $ \mathbf R $ of positive $ n $-ary quadratic forms satisfying the equation $ H B ^ {-1} H = B $. This is a connected $ t ( n - t ) $-dimensional manifold of the positivity cone $ \mathfrak P \subset \mathbf R ^ {N} $ (which can be written out in explicit form). Let $ F \subset \mathfrak P $ be the domain of reduction of positive-definite quadratic forms. The form $ q $ is called reducible if $ \Phi ( q) \cap F $ is non-empty.
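For $ n = 2 $ this can be made concrete by completing the square (a numerical sketch of ours, under the simplifying assumptions $ n = 2 $ and $ a > 0 $): with $ y _ {1} = \sqrt{a} \, ( x _ {1} + ( b / a ) x _ {2} ) $ and $ y _ {2} = \sqrt{| d | / a } \, x _ {2} $ one has $ B = S ^ {T} D S $, and the resulting majorant $ H = S ^ {T} S $ satisfies the defining equation $ H B ^ {-1} H = B $ of $ \Phi ( q) $:

```python
def majorant(a, b, c):
    """Majorant h_S = S^T S of an indefinite binary form a x^2 + 2b x y + c y^2
    with a > 0 and d = a*c - b*b < 0, from completing the square:
    y1 = sqrt(a)(x + (b/a) y), y2 = sqrt(-d/a) y.
    In closed form: S^T S = [[a, b], [b, (b*b - d)/a]]."""
    d = a * c - b * b
    assert a > 0 and d < 0
    return [[a, b], [b, (b * b - d) / a]]

def on_phi_manifold(a, b, c, H, tol=1e-9):
    """Check the defining equation H B^{-1} H = B of the manifold Phi(q)."""
    d = a * c - b * b
    B = [[a, b], [b, c]]
    Binv = [[c / d, -b / d], [-b / d, a / d]]
    HBinv = [[sum(H[i][k] * Binv[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    M = [[sum(HBinv[i][k] * H[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return all(abs(M[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))
```

The identity $ H B ^ {-1} H = B $ follows from $ H = S ^ {T} S $, $ B = S ^ {T} D S $ and $ D ^ {-1} = D $.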
The number of classes of integral indefinite quadratic forms in $ n $ variables with a given determinant $ d $ is finite (this is true also for positive-definite quadratic forms). The number of reduced forms in a given class is also finite. If two integral quadratic forms $ q _ {1} $ and $ q _ {2} $ are equivalent, then there exists an integral transformation $ S $, the absolute values of the elements of which are bounded by a constant depending only on $ n $ and $ d $, that takes $ q _ {1} $ to $ q _ {2} $. Thus the problem of determining whether or not two indefinite integral quadratic forms are equivalent is solved in a finite number of steps.
c) Automorphisms of indefinite quadratic forms. The problem of the description of all automorphisms of an indefinite integral quadratic form has two aspects: 1) to construct a fundamental domain of the group of automorphisms; 2) to describe the general form of the automorphisms (similar to the description of automorphisms by means of the Pell equation).
The general form of the automorphisms of a quadratic form was described by Ch. Hermite for $ n = 3 $ and by A. Cayley for arbitrary $ n $ (see [10]).
A fundamental domain of the group of automorphisms of an indefinite integral quadratic form $ q ( x) $ has been constructed in the manifold $ \Phi ( q) $, bounded by a finite number of algebraic surfaces, and its volume has been calculated [13]. For the case $ t = 1 $ a fundamental domain of the group of automorphisms of $ q ( x) $ has been constructed in $ n $-dimensional space in the form of an infinite pyramid with a finite number of plane faces (see [2], [4]).
There is a reduction theory of quadratic forms in algebraic number fields (see [11]).
[1] B.A. Venkov, "Elementary number theory" , Wolters-Noordhoff (1970) (Translated from Russian) MR0265267 Zbl 0204.37101
[2] B.A. Venkov, Izv. Akad. Nauk SSSR. Ser. Mat. , 1 (1937) pp. 139–170
[3] B.A. Venkov, "The reduction of positive-definite quadratic forms" Izv. Akad. Nauk SSSR. Ser. Mat. , 4 (1940) pp. 37–52 (In Russian)
[4] B.A. Venkov, "On indeterminate quadratic forms with integral coefficients" Trudy Mat. Inst. Steklov. , 38 (1951) pp. 30–41 (In Russian) MR0048498
[5a] B.N. Delone, "The geometry of positive definite quadratic forms" Uspekhi Mat. Nauk : 3 (1937) pp. 16–62 (In Russian)
[5b] B.N. Delone, "The geometry of positive definite quadratic forms" Uspekhi Mat. Nauk : 4 (1938) pp. 104–164 (In Russian)
[6] B.N. Delone, R.V. Galiulin, M.I. Shtorgin, "The types of Bravais lattices" , Current problems in mathematics , 2 , Moscow (1973) pp. 119–254 (In Russian) MR0412947 Zbl 0334.50005
[7] P.G. Lejeune-Dirichlet, "Vorlesungen über Zahlentheorie" , Vieweg (1894) Zbl 25.0252.01
[8] S.S. Ryshkov, "The theory of Hermite–Minkowski reduction of positive definite quadratic forms" J. Soviet Math. , 6 : 6 (1976) pp. 651–671 Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. , 33 (1973) pp. 37–64 Zbl 0374.10019
[9a] P.P. Tammela, "Reduction theory of positive quadratic forms" J. Soviet Math. , 11 : 2 (1979) pp. 197–277 Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. , 50 (1975) pp. 6–96 MR0563103 MR0321875 Zbl 0403.10012
[9b] P.P. Tammela, "Minkowski reduction region for positive quadratic forms in seven variables" J. Soviet Math. , 16 : 1 (1981) pp. 836–857 Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. , 67 (1977) pp. 108–143; 226 Zbl 0453.10033
[10] P. Bachmann, "Zahlentheorie. Die Arithmetik der quadratischen Formen" , 1–2 , Teubner (1923–1925) MR0238661 MR1522322
[11] P. Humbert, "Réduction de formes quadratiques dans un corps algébrique fini" Comm. Math. Helv. , 23 (1949) pp. 50–63 MR0031521 Zbl 0034.31102
[12] H. Minkowski, "Diskontinuitätsbereich für arithmetische Äquivalenz" J. Reine Angew. Math. , 129 (1905) pp. 220–274 Zbl 37.0251.02
[13] C.L. Siegel, "Einheiten quadratischer Formen" Abh. Math. Sem. Univ. Hamburg , 13 (1939) pp. 209–239 MR0003003 Zbl 0023.00701 Zbl 66.0125.03
[14] C.L. Siegel, "Zur Theorie der quadratischen Formen" Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl. (1972) pp. 21–46 MR0311578 Zbl 0252.10019
[15] B.L. van der Waerden, "Die Reduktionstheorie der positiven quadratischen Formen" Acta Math. , 96 (1956) pp. 265–309 Zbl 0072.03601
Quadratic forms, reduction of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Quadratic_forms,_reduction_of&oldid=49540
This article was adapted from an original article by A.V. Malyshev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour, for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, and that suggests >38 hours of work; 38 × $7.25 = $275.50. 12,000 pills is roughly $12.80 per thousand, or $154; 120 potassium iodide pills is ~$9, so (365.25/120) × $9 × 5 ≈ $137.
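The arithmetic behind this estimate can be reproduced in a few lines (illustrative only; I'm assuming the 7.25 multiplier is an hourly rate in dollars, and the prices are the ones quoted above):

```python
hours = 5 + 32 + 1                   # pill prep + switches/testing + analysis & writeup
time_cost = hours * 7.25             # hours valued at $7.25/hr (assumed hourly rate)
pill_cost = 12 * 12.80               # ~12,000 pills at $12.80 per thousand
iodide_cost = 365.25 / 120 * 9 * 5   # $9 per 120 pills, taken daily for 5 years

assert hours == 38
assert time_cost == 275.5
assert round(pill_cost) == 154
assert round(iodide_cost) == 137
```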
In terms of legal status, Adrafinil is legal in the United States but is unregulated. You need to purchase this supplement online, as it is not a prescription drug at this time. Modafinil on the other hand, is heavily regulated throughout the United States. It is being used as a narcolepsy drug, but isn't available over the counter. You will need to obtain a prescription from your doctor, which is why many turn to Adrafinil use instead.
Regarding other methods of cognitive enhancement, little systematic research has been done on their prevalence among healthy people for the purpose of cognitive enhancement. One exploratory survey found evidence of modafinil use by people seeking cognitive enhancement (Maher, 2008), and anecdotal reports of this can be found online (e.g., Arrington, 2008; Madrigal, 2008). Whereas TMS requires expensive equipment, tDCS can be implemented with inexpensive and widely available materials, and online chatter indicates that some are experimenting with this method.
Or in other words, since the standard deviation of my previous self-ratings is 0.75 (see the Weather and my productivity data), a mean rating increase of >0.39 on the self-rating. This is, unfortunately, implying an extreme shift in my self-assessments (for example, 3s are ~50% of the self-ratings and 4s ~25%; to cause an increase of 0.25 while leaving 2s alone in a sample of 23 days, one would have to push 3s down to ~25% and 4s up to ~47%). So in advance, we can see that the weak plausible effects for Noopept are not going to be detected here at our usual statistical levels with just the sample I have (a more plausible experiment might use 178 pairs over a year, detecting down to d>=0.18). But if the sign is right, it might make Noopept worthwhile to investigate further. And the hardest part of this was just making the pills, so it's not a waste of effort.
By which I mean that simple potassium is probably the most positively mind altering supplement I've ever tried…About 15 minutes after consumption, it manifests as a kind of pressure in the head or temples or eyes, a clearing up of brain fog, increased focus, and the kind of energy that is not jittery but the kind that makes you feel like exercising would be the reasonable and prudent thing to do. I have done no tests, but feel smarter from this in a way that seems much stronger than piracetam or any of the conventional weak nootropics. It is not just me – I have been introducing this around my inner social circle and I'm at 7/10 people felt immediately noticeable effects. The 3 that didn't notice much were vegetarians and less likely to have been deficient. Now that I'm not deficient, it is of course not noticeable as mind altering, but still serves to be energizing, particularly for sustained mental energy as the night goes on…Potassium chloride initially, but since bought some potassium gluconate pills… research indicates you don't want to consume large amounts of chloride (just moderate amounts).
Pharmaceutical, substance used in the diagnosis, treatment, or prevention of disease and for restoring, correcting, or modifying organic functions. (See also pharmaceutical industry.) Records of medicinal plants and minerals date to ancient Chinese, Hindu, and Mediterranean civilizations. Ancient Greek physicians such as Galen used a variety of drugs in their profession.…
Several chemical influences can completely disconnect those circuits so they're no longer able to excite each other. "That's what happens when we're tired, when we're stressed." Drugs like caffeine and nicotine enhance the neurotransmitter acetylcholine, which helps restore function to the circuits. Hence people drink tea and coffee, or smoke cigarettes, "to try and put [the] prefrontal cortex into a more optimal state".
An additional complexity, related to individual differences, concerns dosage. This factor, which varies across studies and may be fixed or determined by participant body weight within a study, undoubtedly influences the cognitive effects of stimulant drugs. Furthermore, single-unit recordings with animals and, more recently, imaging of humans indicate that the effects of stimulant dose are nonmonotonic; increases enhance prefrontal function only up to a point, with further increases impairing function (e.g., Arnsten, 1998; Mattay et al., 2003; Robbins & Arnsten, 2009). Yet additional complexity comes from the fact that the optimal dosage depends on the same kinds of individual characteristics just discussed and on the task (Mattay et al., 2003).
Piracetam boosts acetylcholine function, a neurotransmitter responsible for memory consolidation. Consequently, it improves memory in people who suffer from age-related dementia, which is why it is commonly prescribed to Alzheimer's patients and people struggling with pre-dementia symptoms. When it comes to healthy adults, it is believed to improve focus and memory, enhancing the learning process altogether.
It arrived as described, a little bottle around the volume of a soda can. I had handy a plastic syringe with milliliter units which I used to measure out the nicotine-water into my tea. I began with half a ml the first day, 1ml the second day, and 2ml the third day. (My Zeo sleep scores were 85/103/86 (▁▇▁), and the latter had a feline explanation; these values are within normal variation for me, so if nicotine affects my sleep, it does so to a lesser extent than Adderall.) Subjectively, it's hard to describe. At half a ml, I didn't really notice anything; at 1 and 2ml, I thought I began to notice it - sort of a cleaner caffeine. It's nice so far. It's not as strong as I expected. I looked into whether the boiling water might be breaking it down, but the answer seems to be no - boiling tobacco is a standard way to extract nicotine, actually, and nicotine's own boiling point is much higher than water; nor do I notice a drastic difference when I take it in ordinary water. And according to various e-cigarette sources, the liquid should be good for at least a year.
Although piracetam has a history of "relatively few side effects," it has fallen far short of its initial promise for treating any of the illnesses associated with cognitive decline, according to Lon Schneider, a professor of psychiatry and behavioral sciences at the Keck School of Medicine at the University of Southern California. "We don't use it at all and never have."
I tried taking whole pills at 1 and 3 AM. I felt kind of bushed at 9 AM after all the reading, and the 50 minute nap didn't help much - I was asleep only around 10 minutes and spent most of it thinking or in meditation. Just as well the 3D driver is still broken; I doubt the scores would be reasonable. Began to perk up again past 10 AM, then felt more bushed at 1 PM, and so on throughout the day; kind of gave up and began watching & finishing anime (Amagami and Voices of a Distant Star) for the rest of the day with occasional reading breaks (eg. to start James C. Scott's Seeing Like A State, which is as described so far). As expected from the low quality of the day, the recovery sleep was bigger than before: a full 10 hours rather than 9:40; the next day, I slept a normal 8:50, and the following day ~8:20 (woken up early); 10:20 (slept in); 8:44; 8:18 (▁▇▁▁). It will be interesting to see whether my excess sleep remains in the hour range for 'good' modafinil nights and two hours for 'bad' modafinil nights.
Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)
It is a known fact that cognitive decline is often linked to aging. It may not be as visible as skin aging, but the brain does in fact age. Often, cognitive decline is not noticeable because it could be as mild as forgetting names of people. However, research has shown that even in healthy adults, cognitive decline can start as early as in the late twenties or early thirties.
Before taking any supplement or chemical, people want to know if there will be long term effects or consequences, When Dr. Corneliu Giurgea first authored the term "nootropics" in 1972, he also outlined the characteristics that define nootropics. Besides the ability to benefit memory and support the cognitive processes, Dr. Giurgea believed that nootropics should be safe and non-toxic.
The demands of university studies, career, and family responsibilities leaves people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.
Nootropics are a broad classification of cognition-enhancing compounds that produce minimal side effects and are suitable for long-term use. These compounds include those occurring in nature or already produced by the human body (such as neurotransmitters), and their synthetic analogs. We already regularly consume some of these chemicals: B vitamins, caffeine, and L-theanine, in our daily diets.
The intradimensional–extradimensional shift task from the CANTAB battery was used in two studies of MPH and measures the ability to shift the response criterion from one dimension to another, as in the WCST, as well as to measure other abilities, including reversal learning, measured by performance in the trials following an intradimensional shift. With an intradimensional shift, the learned association between values of a given stimulus dimension and reward versus no reward is reversed, and participants must learn to reverse their responses accordingly. Elliott et al. (1997) reported finding no effects of the drug on ability to shift among dimensions in the extradimensional shift condition and did not describe performance on the intradimensional shift. Rogers et al. (1999) found that accuracy improved but responses slowed with MPH on trials requiring a shift from one dimension to another, which leaves open the question of whether the drug produced net enhancement, interference, or neither on these trials once the tradeoff between speed and accuracy is taken into account. For intradimensional shifts, which require reversal learning, these authors found drug-induced impairment: significantly slower responding accompanied by a borderline-significant impairment of accuracy.
Table 3 lists the results of 24 tasks from 22 articles on the effects of d-AMP or MPH on learning, assessed by a variety of declarative and nondeclarative memory tasks. Results for the 24 tasks are evenly split between enhanced learning and null results, but they yield a clearer pattern when the nature of the learning task and the retention interval are taken into account. In general, with single exposures of verbal material, no benefits are seen immediately following learning, but later recall and recognition are enhanced. Of the six articles reporting on memory performance (Camp-Bruno & Herting, 1994; Fleming, Bigelow, Weinberger, & Goldberg, 1995; Rapoport, Busbaum, & Weingartner, 1980; Soetens, D'Hooge, & Hueting, 1993; Unrug, Coenen, & van Luijtelaar, 1997; Zeeuws & Soetens 2007), encompassing eight separate experiments, only one of the experiments yielded significant memory enhancement at short delays (Rapoport et al., 1980). In contrast, retention was reliably enhanced by d-AMP when subjects were tested after longer delays, with recall improved after 1 hr through 1 week (Soetens, Casaer, D'Hooge, & Hueting, 1995; Soetens et al., 1993; Zeeuws & Soetens, 2007). Recognition improved after 1 week in one study (Soetens et al., 1995), while another found recognition improved after 2 hr (Mintzer & Griffiths, 2007). The one long-term memory study to examine the effects of MPH found a borderline-significant reduction in errors when subjects answered questions about a story (accompanied by slides) presented 1 week before (Brignell, Rosenthal, & Curran, 2007).
Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. Likewise, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion.
I split the 2 pills into 4 doses for each hour from midnight to 4 AM. 3D driver issues in Debian unstable prevented me from using Brain Workshop, so I don't have any DNB scores to compare with the armodafinil DNB scores. I had the subjective impression that I was worse off with the Modalert, although I still managed to get a fair bit done so the deficits couldn't've been too bad. The apathy during the morning felt worse than armodafinil, but that could have been caused by or exacerbated by an unexpected and very stressful 2 hour drive through rush hour and multiple accidents; the quick hour-long nap at 10 AM was half-waking half-light-sleep according to the Zeo, but seemed to help a bit. As before, I began to feel better in the afternoon and by evening felt normal, doing my usual reading. That night, the Zeo recorded my sleep as lasting ~9:40, when it was usually more like 8:40-9:00 (although I am not sure that this was due to the modafinil inasmuch as once a week or so I tend to sleep in that long, as I did a few days later without any influence from the modafinil); assuming the worse, the nap and extra sleep cost me 2 hours for a net profit of ~7 hours. While it's not clear how modafinil affects recovery sleep (see the footnote in the essay), it's still interesting to ponder the benefits of merely being able to delay sleep18.
In the nearer future, Lynch points to nicotinic receptor agents – molecules that act on the neurotransmitter receptors affected by nicotine – as ones to watch when looking out for potential new cognitive enhancers. Sarter agrees: a class of agents known as α4β2* nicotinic receptor agonists, he says, seem to act on mechanisms that control attention. Among the currently known candidates, he believes they come closest "to fulfilling the criteria for true cognition enhancers."
But, if we find in 10 or 20 years that the drugs don't do damage, what are the benefits? These are stimulants that help with concentration. College students take such drugs to pass tests; graduates take them to gain professional licenses. They are akin to using a calculator to solve an equation. Do you really want a doctor who passed his boards as a result of taking speed — and continues to depend on that for his practice?
But, thanks to the efforts of a number of remarkable scientists, researchers and plain-old neurohackers, we are beginning to put together a "whole systems" model of how all the different parts of the human brain work together and how they mesh with the complex regulatory structures of the body. It's going to take a lot more data and collaboration to dial this model in, but already we are empowered to design stacks that can meaningfully deliver on the promise of nootropics "to enhance the quality of subjective experience and promote cognitive health, while having extremely low toxicity and possessing very few side effects." It's a type of brain hacking that is intended to produce noticeable cognitive benefits.
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress.
I took the first pill at 12:48 pm. 1:18, still nothing really - head is a little foggy if anything. later noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.) | CommonCrawl |
\begin{document}
\title{HYPERGEOMETRIC FUNCTIONS OVER $\mathbb{F}_q$ AND TRACES OF FROBENIUS FOR ELLIPTIC CURVES}
\author{Rupam Barman} \address{Department of Mathematical Sciences, Tezpur University, Napaam-784028, Sonitpur, Assam, India}
\email{[email protected]}
\thanks{The first author thanks the Mathematical Institute, University of Heidelberg and the Mathematics Center Heidelberg (MATCH), where the majority of this research was conducted. He is grateful to John H. Coates, R. Sujatha, Otmar Venjakob, and Anupam Saikia for their encouragement. The second author is partially supported by an INSPIRE Fellowship of the Department of Science and Technology, Government of India. Finally, the authors thank Ken Ono and the referee for helpful comments.}
\author{Gautam Kalita} \address{Department of Mathematical Sciences, Tezpur University, Napaam-784028, Sonitpur, Assam, India} \email{[email protected]}
\subjclass[2000]{Primary 11T24, 11G20}
\date{August, 2011.}
\keywords{Gaussian hypergeometric series, elliptic curves, Frobenius endomorphisms}
\begin{abstract} We present here explicit relations between the traces of Frobenius endomorphisms of certain families of elliptic curves and special values of ${_{2}}F_1$-hypergeometric functions over $\mathbb{F}_q$ for $q \equiv 1 ( \text{mod}~6)$ and $q \equiv 1 ( \text{mod}~4)$. \end{abstract}
\maketitle
\section{Introduction and statement of results} In this paper, we consider the problem of expressing traces of Frobenius endomorphisms of certain families of elliptic curves in terms of hypergeometric functions over finite fields. In \cite{Greene}, Greene introduced the notion of hypergeometric functions over finite fields or \emph{Gaussian hypergeometric series} which are analogous to the classical hypergeometric series. Since then, many interesting relations between special values of these functions and the number of $\mathbb{F}_p$-points on certain varieties have been obtained. For example, Koike \cite{koike} and Ono \cite{ono} gave formulas for the number of $\mathbb{F}_p$-points on elliptic curves in terms of special values of Gaussian hypergeometric series. Also, in \cite{BK, BK2} the authors studied this problem for certain families of algebraic curves. \par Recently in \cite{Fuselier}, Fuselier gave formulas for the trace of Frobenius of certain families of elliptic curves which involved Gaussian hypergeometric series with characters of order 12 as parameters, under the assumption that $p\equiv 1 (\text{mod}~12)$. In \cite{Lennon}, Lennon provided a general formula expressing the number of $\mathbb{F}_q$-points of an elliptic curve $E$ with $j(E)\neq 0, 1728$ in terms of values of Gaussian hypergeometric series for $q=p^e \equiv 1 (\text{mod}~ 12)$. In \cite{Lennon2}, for $q \equiv 1 (\text{mod}~ 3)$, Lennon also gave formulas for certain elliptic curves involving Gaussian hypergeometric series with characters of order 3 as parameters. \par We begin with some preliminary definitions needed to state our results. Let $q=p^e$ be a power of an odd prime and $\mathbb{F}_q$ the finite field of $q$ elements. Extend each character $\chi \in \widehat{\mathbb{F}_q^{\times}}$ to all of $\mathbb{F}_q$ by setting $\chi(0):=0$. 
If $A$ and $B$ are two characters of $\mathbb{F}_q^{\times}$, then ${A \choose B}$ is defined by \begin{align}\label{eq0} {A \choose B}:=\frac{B(-1)}{q}J(A,\overline{B})=\frac{B(-1)}{q}\sum_{x \in \mathbb{F}_q}A(x)\overline{B}(1-x), \end{align} where $J(A, B)$ denotes the usual Jacobi sum and $\overline{B}$ is the inverse of $B$. \par Recall the definition of the Gaussian hypergeometric series over $\mathbb{F}_q$ first defined by Greene in \cite{Greene}. For any positive integer $n$ and characters $A_0, A_1,\ldots, A_n$ and $B_1, B_2,\ldots, B_n \in \widehat{\mathbb{F}_q^{\times}}$, the Gaussian hypergeometric series ${_{n+1}}F_n$ is defined to be \begin{align}\label{eq00} {_{n+1}}F_n\left(\begin{array}{cccc}
A_0, & A_1, & \cdots, & A_n\\
& B_1, & \cdots, & B_n
\end{array}\mid x \right):=\frac{q}{q-1}\sum_{\chi}{A_0\chi \choose \chi}{A_1\chi \choose B_1\chi} \cdots {A_n\chi \choose B_n\chi}\chi(x), \end{align} where the sum is over all characters $\chi$ of $\mathbb{F}_q^{\times}$. \par Throughout the paper, we consider an elliptic curve $E_{a,b}$ over $\mathbb{F}_q$ in Weierstrass form as \begin{align}\label{eq100} E_{a,b}: y^2=x^3+ax+b. \end{align} If we denote by $a_q(E_{a,b})$ the trace of the Frobenius endomorphism on $E_{a,b}$, then \begin{align}\label{eq101} a_q(E_{a,b})=q+1-\#E_{a,b}(\mathbb{F}_q), \end{align} where $\#E_{a,b}(\mathbb{F}_q)$ denotes the number of $\mathbb{F}_q$-points on $E_{a,b}$ including the point at infinity. In the following theorems, we express $a_q(E_{a,b})$ in terms of Gaussian hypergeometric series. \begin{theorem}\label{mt1} Let $q=p^e$, $p>0$ a prime and $q\equiv1~(mod~6)$. In addition, let $a$ be non-zero and $(-a/3)$ a quadratic residue modulo $q$. If $T \in \widehat{\mathbb{F}_q^{\times}}$ is a generator of the character group, then the trace of the Frobenius on $E_{a,b}$ can be expressed as \begin{align} a_q(E_{a,b})=-qT^{\frac{q-1}{2}}(-k)~{_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{6}}, & T^{\frac{5(q-1)}{6}}\\
& \epsilon
\end{array}\mid -\frac{k^3+ak+b}{4k^3} \right),\nonumber \end{align} where $\epsilon$ is the trivial character of $\mathbb{F}_q$ and $k\in \mathbb{F}_q$ satisfies $3k^2+a=0$. \end{theorem} \begin{theorem}\label{mt2} Let $q=p^e$, $p>0$ a prime, $q\neq 9$ and $q\equiv1~(mod~4)$. Also assume that $x^3+ax+b=0$ has a non-zero solution in $\mathbb{F}_q$ and $T\in \widehat{\mathbb{F}_q^{\times}}$ is a generator of the character group. The trace of the Frobenius on $E_{a,b}$ can be expressed as \begin{align} a_q(E_{a,b})=-qT^{\frac{q-1}{2}}(6h)T^{\frac{q-1}{4}}(-1)~{_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{4}}, & T^{\frac{3(q-1)}{4}}\\
& \epsilon
\end{array}\mid \frac{12h^2+4a}{9h^2} \right),\nonumber \end{align} where $\epsilon$ is the trivial character of $\mathbb{F}_q$ and $h\in\mathbb{F}^{\times}_q$ satisfies $h^3+ah+b=0$. \end{theorem}
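The quantity $a_q(E_{a,b})$ in \eqref{eq101} is easy to compute for small primes by brute force. The following Python sketch (our illustration, separate from the paper's argument; the curve $y^2=x^3+x$ and the prime $p=13$ are arbitrary choices) counts $\mathbb{F}_p$-points directly and checks the Hasse bound:

```python
def count_points(a, b, p):
    """Number of F_p-points on E_{a,b}: y^2 = x^3 + a*x + b, including infinity."""
    # number of square roots of each residue class modulo p
    sqrt_count = {}
    for y in range(p):
        v = y * y % p
        sqrt_count[v] = sqrt_count.get(v, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        total += sqrt_count.get((x * x * x + a * x + b) % p, 0)
    return total

def trace_of_frobenius(a, b, p):
    # a_q(E_{a,b}) = q + 1 - #E_{a,b}(F_q), as in eq. (1.4), with q = p here
    return p + 1 - count_points(a, b, p)

if __name__ == "__main__":
    p = 13  # 13 = 1 (mod 6) and 13 = 1 (mod 4), as required in the theorems above
    t = trace_of_frobenius(1, 0, p)  # E: y^2 = x^3 + x
    assert t * t <= 4 * p  # Hasse bound |a_p| <= 2*sqrt(p)
    print(t)  # -6 for this curve and prime
```

This naive count is quadratic in $p$ and only meant as a sanity check on the definitions, not as a way to evaluate the hypergeometric identities.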
\section{Preliminaries} Define the additive character $\theta: \mathbb{F}_q \rightarrow \mathbb{C}^{\times}$ by \begin{align} \theta(\alpha)=\zeta^{\text{tr}(\alpha)} \end{align} where $\zeta=e^{2\pi i/p}$ and $\text{tr}: \mathbb{F}_q \rightarrow \mathbb{F}_p$ is the trace map given by $$\text{tr}(\alpha)=\alpha + \alpha^p + \alpha^{p^2}+ \cdots + \alpha^{p^{e-1}}.$$ For $A\in \widehat{\mathbb{F}_q^\times}$, the \emph{Gauss sum} is defined by \begin{align} G(A):=\sum_{x\in \mathbb{F}_q}A(x)\zeta^{\text{tr}(x)}=\sum_{x\in \mathbb{F}_q}A(x)\theta(x). \end{align} We let $T$ denote a fixed generator of $\widehat{\mathbb{F}_q^\times}$. We also denote by $G_m$ the Gauss sum $G(T^m)$. \par The \emph{orthogonality relations} for multiplicative characters are listed in the following lemma. \begin{lemma}\emph{(\cite{ireland} Chapter 8).}\label{lemma2} Let $\epsilon$ be the trivial character. Then \begin{enumerate} \item $\sum_{x\in\mathbb{F}_q}T^n(x)=\left\{
\begin{array}{ll}
q-1 & \hbox{if~ $T^n=\epsilon$;} \\
0 & \hbox{if ~~$T^n\neq\epsilon$.}
\end{array}
\right.$ \item $\sum_{n=0}^{q-2}T^n(x)~~=\left\{
\begin{array}{ll}
q-1 & \hbox{if~~ $x=1$;} \\
0 & \hbox{if ~~$x\neq1$.}
\end{array}
\right.$ \end{enumerate} \end{lemma} Using orthogonality, we have the following lemma. \begin{lemma}\emph{(\cite{Fuselier} Lemma 2.2).}\label{lemma1} For all $\alpha \in \mathbb{F}_q^{\times}$, $$\theta(\alpha)=\frac{1}{q-1}\sum_{m=0}^{q-2}G_{-m}T^m(\alpha).$$ \end{lemma} The following two lemmas on Gauss sum will be useful in the proof of our results. \begin{lemma}\emph{(\cite{Greene} Eqn. 1.12).}\label{new} If $i\in \mathbb{Z}$ and $T^i\neq \epsilon$, then $$G_iG_{-i}=qT^i(-1).$$ \end{lemma} \begin{lemma}\emph{(Davenport-Hasse Relation \cite{Lang}).}\label{lemma3} Let $m$ be a positive integer and let $q=p^e$ be a prime power such that $q\equiv 1 (\text{mod}~m)$. For multiplicative characters $\chi, \psi \in \widehat{\mathbb{F}_q^\times}$, we have \begin{align} \prod_{\chi^m=1}G(\chi \psi)=-G(\psi^m)\psi(m^{-m})\prod_{\chi^m=1}G(\chi). \end{align} \end{lemma}
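Lemma \ref{new} is also easy to verify numerically. The following Python sketch (our illustration, outside the paper's argument) builds a generator $T$ of the character group of $\mathbb{F}_{13}^{\times}$ via a discrete-log table and checks $G_iG_{-i}=qT^i(-1)$ for every nontrivial $T^i$:

```python
import cmath

p = 13
g = 2  # 2 is a primitive root modulo 13
dlog = {}  # discrete logarithm table: x = g^j (mod p)  ->  j
x = 1
for j in range(p - 1):
    dlog[x] = j
    x = x * g % p
assert len(dlog) == p - 1  # g really generates F_p^x

def T(m, x):
    """The character T^m at x, where T is a fixed generator of the character group."""
    if x % p == 0:
        return 0  # convention chi(0) := 0
    return cmath.exp(2j * cmath.pi * m * dlog[x % p] / (p - 1))

def gauss_sum(m):
    """G_m = G(T^m) = sum_x T^m(x) * zeta^tr(x); here tr(x) = x since q = p."""
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(T(m, x) * zeta ** x for x in range(1, p))

# Lemma: G_i * G_{-i} = q * T^i(-1) whenever T^i is nontrivial
for i in range(1, p - 1):
    assert abs(gauss_sum(i) * gauss_sum(-i) - p * T(i, p - 1)) < 1e-6
```

The same table-based characters could be reused to evaluate the Jacobi sums and binomial coefficients in \eqref{eq0} for small $q$.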
\section{Proof of the results} Theorem \ref{mt1} will follow as a consequence of the next theorem. We consider an elliptic curve $E_1$ over $\mathbb{F}_q$ in the form \begin{align}\label{eq102} E_1: y^2=x^3+cx^2+d, \end{align} where $c \neq 0$. The trace of the Frobenius endomorphism on $E_1$ is given by \begin{align}\label{eq103} a_q(E_1)=q+1-\#E_1(\mathbb{F}_q). \end{align} We express the trace of Frobenius on the curve $E_1$ as a special value of a hypergeometric function in the following way. \begin{theorem}\label{theorem1} Let $q=p^e$, $p>0$ a prime and $q\equiv1~(mod~6)$. If $T \in \widehat{\mathbb{F}_q^{\times}}$ is a generator of the character group, then the trace of the Frobenius on $E_1$ is given by \begin{align} a_q(E_1)=-qT^{\frac{q-1}{2}}(-3c)~{_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{6}}, & T^{\frac{5(q-1)}{6}}\\
& \epsilon
\end{array}\mid -\frac{27d}{4c^3} \right),\nonumber \end{align} where $\epsilon$ is the trivial character of $\mathbb{F}_q$. \end{theorem} \begin{proof} The method of this proof follows similarly to that given in \cite{Fuselier}. Let $$P(x,y)=x^3+cx^2+d-y^2$$ and denote by $\#E_1(\mathbb{F}_q)$ the number of points on the curve $E_1$ over $\mathbb{F}_q$ including the point at infinity. Then $$\#E_1(\mathbb{F}_q)-1=\#\{(x,y)\in \mathbb{F}_q\times \mathbb{F}_q : P(x,y)=0\}.$$ Using the elementary identity from \cite{ireland} \begin{align}\label{eq4} \sum_{z\in \mathbb{F}_q}\theta(zP(x,y))=\left\{
\begin{array}{ll}
q & \hbox{if $P(x,y)=0$;} \\
0 & \hbox{if $P(x,y)\neq 0,$}
\end{array}
\right. \end{align} we obtain \begin{align}\label{eq1} q\cdot(\#E_1(\mathbb{F}_q)-1)&=\sum_{x,y,z\in \mathbb{F}_q}\theta(zP(x,y))\nonumber\\ &=q^2+\sum_{z\in\mathbb{F}_q^\times}\theta(zd)+\sum_{y,z\in\mathbb{F}_q^\times}\theta(zd)\theta(-zy^2)+ \sum_{x,z\in\mathbb{F}_q^\times}\theta(zd)\theta(zx^3)\theta(zcx^2)\nonumber\\ &\hspace{.5cm}+\sum_{x,y,z\in \mathbb{F}_q^\times}\theta(zd)\theta(zx^3)\theta(zcx^2)\theta(-zy^2)\nonumber\\ &:=q^2+A+B+C+D. \end{align} Now using Lemma \ref{lemma1} and then applying Lemma \ref{lemma2} repeatedly for each term of \eqref{eq1}, we deduce that $$A=\frac{1}{q-1}\sum_{z\in \mathbb{F}_q^\times}\sum_{l=0}^{q-2}G_{-l}T^l(zd)=\frac{1}{q-1}\sum_{l=0}^{q-2}G_{-l}T^l(d) \sum_{z\in \mathbb{F}_q^\times}T^l(z)=G_0=-1.$$ Similarly, \begin{align} B&=\frac{1}{(q-1)^2}\sum_{l,m=0}^{q-2}G_{-l}G_{-m}T^l(d)T^m(-1)\sum_{y\in \mathbb{F}_q^\times}T^{2m}(y) \sum_{z\in \mathbb{F}_q^\times}T^{l+m}(z)\nonumber\\ &=1+G_{\frac{q-1}{2}}G_{-\frac{q-1}{2}}T^{\frac{q-1}{2}}(d)T^{\frac{q-1}{2}}(-1).\notag \end{align} Using Lemma \ref{new} for $i=\frac{q-1}{2}$, we deduce that \begin{align} B&=1+qT^{\frac{q-1}{2}}(-1)T^{\frac{q-1}{2}}(d)T^{\frac{q-1}{2}}(-1)\notag\\ &=1+qT^{\frac{q-1}{2}}(d).\notag \end{align} Expanding the next term, we have \begin{align} C&=\frac{1}{(q-1)^3}\sum_{l,m,n=0}^{q-2}G_{-l}G_{-m}G_{-n}T^l(d)T^n(c)\sum_{z\in \mathbb{F}_q^\times}T^{l+m+n}(z) \sum_{x\in \mathbb{F}_q^\times}T^{3m+2n}(x).\nonumber \end{align} Finally, \begin{align} D&=\frac{1}{(q-1)^4}\sum_{l,m,n,k=0}^{q-2}G_{-l}G_{-m}G_{-n}G_{-k}T^l(d)T^n(c)T^k(-1)\times \nonumber\\ &\hspace{.5cm}\sum_{z\in \mathbb{F}_q^\times}T^{l+m+n+k}(z)\sum_{x\in \mathbb{F}_q^\times}T^{3m+2n}(x) \sum_{z\in\mathbb{F}_q^\times}T^{2k}(z)\nonumber. \end{align} The innermost sum of $D$ is nonzero only when $k=0$ or $k=\frac{q-1}{2}$. 
Using the fact that $G_0=-1$, we obtain \begin{align} D=-C+D_{\frac{q-1}{2}},\nonumber \end{align} where \begin{align} D_{\frac{q-1}{2}}&=\frac{1}{(q-1)^3}\sum_{l,m,n=0}^{q-2}G_{-l}G_{-m}G_{-n} G_{\frac{q-1}{2}}T^l(d)T^n(c)T^{\frac{q-1}{2}}(-1)\times \nonumber\\ &\hspace{.5cm}\sum_{z\in \mathbb{F}_q^\times}T^{l+m+n+\frac{q-1}{2}}(z)\sum_{x\in \mathbb{F}_q^\times}T^{3m+2n}(x),\nonumber \end{align} which is zero unless $m=-\frac{2}{3}n$ and $n=-3l-\frac{3(q-1)}{2}$. Since $G_{3l+\frac{3(q-1)}{2}}=G_{3l+\frac{q-1}{2}}$ and
$G_{-2l-(q-1)}=G_{-2l}$, we have \begin{align} D_{\frac{q-1}{2}}=\frac{1}{q-1}\sum_{l=0}^{q-2}G_{-l}G_{-2l}G_{3l+\frac{q-1}{2}} G_{\frac{(q-1)}{2}}T^l(d)T^{-3l+\frac{q-1}{2}}(c)T^{\frac{q-1}{2}}(-1).\nonumber \end{align} Using Davenport-Hasse relation \eqref{lemma3} for $m=2, \psi=T^{-l}$ and $m=3, \psi=T^{l+\frac{q-1}{6}}$ respectively, we deduce that $$G_{-2l}=\frac{G_{-l}G_{-l-\frac{q-1}{2}}}{G_{\frac{q-1}{2}}T^l(4)} ~~~~~~~ \text{and} ~~~~~~~ G_{3l+\frac{q-1}{2}} =\frac{G_{l+\frac{q-1}{6}}G_{l+\frac{q-1}{2}} G_{l+\frac{5(q-1)}{6}}}{qT^{-l-\frac{q-1}{6}}(27)}.$$ Therefore, \begin{align} D_{\frac{q-1}{2}}&=\frac{T^{\frac{q-1}{2}}(-3c)}{q(q-1)}\sum_{l=0}^{q-2}G_{-l}G_{-l}G_{-l-\frac{q-1}{2}} G_{l+\frac{q-1}{6}}G_{l+\frac{q-1}{2}}G_{l+\frac{5(q-1)}{6}}T^l\left(\frac{27d}{4c^3}\right).\nonumber \end{align} Now, if $T^{m-n}\neq\epsilon$, then we have \begin{align}\label{eq3} G_mG_{-n}=q{T^m \choose T^n}G_{m-n}T^n(-1). \end{align} Replacing $l$ by $l-\frac{q-1}{2}$ and using \eqref{eq3}, we obtain \begin{align} D_{\frac{q-1}{2}}&=\frac{qT^{\frac{q-1}{2}}(-3c)}{q-1} \sum_{l=0}^{q-2}G_{l}G_{-l}{T^{l-\frac{q-1}{3}} \choose T^{l-\frac{q-1}{2}}} G_{\frac{q-1}{6}}{T^{l+\frac{q-1}{3}} \choose T^{l-\frac{q-1}{2}}} G_{\frac{5(q-1)}{6}}T^{l-\frac{q-1}{2}}\left(\frac{27d}{4c^3}\right).\nonumber \end{align} Plugging the facts that if $l\neq0$ then $G_lG_{-l}=qT^l(-1)$ and if $l=0$ then $G_lG_{-l}=qT^l(-1)-(q-1)$ in appropriate identities for each $l$, we deduce that \begin{align} D_{\frac{q-1}{2}}&=\frac{q^3T^{\frac{q-1}{6}}(-1)T^{\frac{q-1}{2}}(-3c)}{q-1} \sum_{l=0}^{q-2}{T^{l-\frac{q-1}{3}} \choose T^{l-\frac{q-1}{2}}}{T^{l+\frac{q-1}{3}} \choose T^{l-\frac{q-1}{2}}}T^{l-\frac{q-1}{2}} \left(\frac{27d}{4c^3}\right)T^l(-1)\nonumber\\ &\hspace{.5cm}-q^2T^{\frac{q-1}{6}}(-1)T^{\frac{q-1}{2}}(-3c){T^{\frac{2(q-1)}{3}} \choose T^{\frac{q-1}{2}}} {T^{\frac{q-1}{3}} \choose T^{\frac{q-1}{2}}}T^{\frac{q-1}{2}}\left(\frac{27d}{4c^3}\right).\nonumber \end{align} Replacing $l$ by 
$l+\frac{q-1}{2}$ in the first term and simplifying the second term, we obtain \begin{align} D_{\frac{q-1}{2}}&=\frac{q^3T^{\frac{q-1}{2}}(-3c)}{q-1}\sum_{l=0}^{q-2} {T^{l+\frac{q-1}{6}} \choose T^l}{T^{l+\frac{5(q-1)}{6}} \choose T^l}T^l\left(-\frac{27d}{4c^3}\right)\nonumber\\ &\hspace{.5cm}-q^2T^{\frac{q-1}{2}}(d)\frac{{G_\frac{2(q-1)}{3}}G_{\frac{q-1}{2}}G_\frac{q-1}{3} G_{\frac{q-1}{2}}}{q^2G_{\frac{q-1}{6}}G_{\frac{5(q-1)}{6}}}\nonumber\\ &=q^2T^{\frac{q-1}{2}}(-3c){_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{6}}, & T^{\frac{5(q-1)}{6}}\\
& \epsilon
\end{array}\mid -\frac{27d}{4c^3} \right) -qT^{\frac{q-1}{2}}(d).\nonumber \end{align} Putting the values of $A, B, C, D$ all together in \eqref{eq1} gives \begin{align} q\cdot (\#E_1(\mathbb{F}_q)-1)&=q^2+q^2T^{\frac{q-1}{2}}(-3c){_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{6}}, & T^{\frac{5(q-1)}{6}}\\
& \epsilon
\end{array}\mid -\frac{27d}{4c^3}\right).\nonumber \end{align} Since $a_q(E_1)=q+1-\#E_1(\mathbb{F}_q)$, we have completed the proof of the Theorem. \end{proof} \noindent \textbf{Proof of Theorem \ref{mt1}.} Since $a\neq 0$ and $(-a/3)$ is a quadratic residue modulo $q$, we can find $k \in \mathbb{F}_q^{\times}$ such that $3k^2+a=0$.
A change of variables $(x, y) \mapsto (x+k, y)$ takes the elliptic curve $E_{a, b}: y^2=x^3+ax+b$ to \begin{align}\label{curve1} E'_{a, b}: y^2=x^3+3kx^2+(k^3+ak+b). \end{align} Clearly $a_{q}(E_{a,b})=a_q(E'_{a,b}).$ Since $3k\neq 0$, using Theorem \ref{theorem1} for the elliptic curve $E'_{a,b}$, we complete the proof.
$\Box$
\par We now prove a result for $q \equiv 1 (\text{mod}~4)$ similar to Theorem \ref{theorem1} and Theorem \ref{mt2} will follow from this result. \begin{theorem}\label{theorem2} Let $q=p^e$, $p>0$ a prime and $q\equiv1~(mod~4)$. Let $E_2$ be an elliptic curve over $\mathbb{F}_q$ defined as $$E_2 : y^2=x^3+fx^2+gx$$ such that $f\neq 0$. If $T \in \widehat{\mathbb{F}_q^{\times}}$ is a generator of the character group, then the trace of the Frobenius on $E_2$ is given by \begin{align} a_q(E_2)=-qT^{\frac{q-1}{2}}(2f)T^{\frac{q-1}{4}}(-1){_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{4}}, & T^{\frac{3(q-1)}{4}}\\
& \epsilon
\end{array}\mid \frac{4g}{f^2} \right),\nonumber \end{align} where $\epsilon$ is the trivial character of $\mathbb{F}_q$. \end{theorem} \begin{proof} We have $$\#E_2(\mathbb{F}_q)-1=\#\{(x,y)\in \mathbb{F}_q\times \mathbb{F}_q : P(x,y)=0\},$$ where $$P(x,y)=x^3+fx^2+gx-y^2.$$ Using \eqref{eq4}, we express the number of points as \begin{align}\label{eq2} q\cdot(\#E_2(\mathbb{F}_q)-1)&=\sum_{x,y,z\in \mathbb{F}_q}\theta(zP(x,y))\nonumber\\ &=q^2 +\sum_{z\in \mathbb{F}_q^\times}\theta(0) +\sum_{y,z\in \mathbb{F}_q^\times}\theta(-zy^2) + \sum_{x,z\in \mathbb{F}_q^\times}\theta(zx^3)\theta(zfx^2)\theta(zgx)\nonumber\\ &\hspace{.5cm}+ \sum_{x,y,z\in \mathbb{F}_q^\times}\theta(zx^3)\theta(zfx^2)\theta(zgx)\theta(-zy^2)\nonumber\\ &:=q^2+(q-1)+A+B+C. \end{align} Now, following the same procedure as followed in the proof of Theorem \eqref{theorem1}, we deduce that \begin{align} A&=-(q-1)\nonumber\\ B&=\frac{1}{(q-1)^3}\sum_{l,m,n=0}^{q-2}G_{-l}G_{-m}G_{-n}T^m(f)T^n(g)\sum_{z\in \mathbb{F}_q^\times}T^{l+m+n}(z)\sum_{x\in \mathbb{F}_q^\times}T^{3l+2m+n}(x)\nonumber\\ C&=-\frac{1}{(q-1)^3}\sum_{l,m,n=0}^{q-2}G_{-l}G_{-m}G_{-n}T^m(f)T^n(g)\sum_{z\in \mathbb{F}_q^\times}T^{l+m+n}(z)\sum_{x\in \mathbb{F}_q^\times}T^{3l+2m+n}(x)\nonumber\\ &\hspace{.5cm}+\frac{1}{(q-1)^3}\sum_{l,m,n=0}^{q-2}G_{-l}G_{-m}G_{-n}G_{\frac{q-1}{2}}T^m(f)T^n(g) \sum_{z\in \mathbb{F}_q^\times}T^{l+m+n+\frac{q-1}{2}}(z)\sum_{x\in \mathbb{F}_q^\times}T^{3l+2m+n}(x).\nonumber \end{align} Substituting the values of $A$, $B$, $C$ all together in \eqref{eq2} and simplifying after using Lemma \ref{lemma2}, we obtain \begin{align}\label{eq100} q\cdot(\#E_2(\mathbb{F}_q)-1)&=q^2+\frac{G_{\frac{q-1}{2}}T^{\frac{q-1}{2}}(f)}{q-1}\sum_{l=0}^{q-2} G_{-l}G_{2l+\frac{q-1}{2}}G_{-l}T^l\left(\frac{g}{f^2}\right). 
\end{align} The Davenport-Hasse relation \eqref{lemma3} with $m=2, \psi=T^{l+\frac{q-1}{4}}$ yields \begin{align}\label{new1} G_{2l+\frac{q-1}{2}}=\frac{G_{l+\frac{q-1}{4}}G_{l+\frac{3(q-1)}{4}}}{G_{\frac{q-1}{2}}}T^{l-\frac{q-1}{4}}(4). \end{align} Using \eqref{new1} and then \eqref{eq3} in \eqref{eq100}, we have \begin{align} q\cdot(\#E_2(\mathbb{F}_q)-1)&=q^2+\frac{q^3T^{\frac{q-1}{2}}(2f) T^{\frac{q-1}{4}}(-1)}{q-1}\sum_{l=0}^{q-2}{T^{l+\frac{q-1}{4}} \choose T^l}{T^{l+\frac{3(q-1)}{4}} \choose T^l}T^l\left(\frac{4g}{f^2}\right)\nonumber\\ &=q^2+q^2T^{\frac{q-1}{2}}(2f)T^{\frac{q-1}{4}}(-1){_{2}}F_1\left(\begin{array}{cccc}
T^{\frac{q-1}{4}}, & T^{\frac{3(q-1)}{4}}\\
& \epsilon
\end{array}\mid \frac{4g}{f^2} \right)\nonumber \end{align} and then using the relation $a_q(E_2)=q+1-\#E_2(\mathbb{F}_q)$, we complete the proof. \end{proof} \noindent \textbf{Proof of Theorem \ref{mt2}.} Since $x^3+ax+b=0$ has a non-zero solution in $\mathbb{F}_q$, let $h\in\mathbb{F}_q^{\times}$ be such that $h^3+ah+b=0$. A change of variables $(x, y) \mapsto (x+h, y)$ takes the elliptic curve $E_{a, b}: y^2=x^3+ax+b$ to \begin{align}\label{curve2} E''_{a, b}: y^2=x^3+3hx^2+(3h^2+a)x. \end{align} Since $a_{q}(E_{a,b})=a_q(E''_{a,b})$ and $3h\neq 0$, using Theorem \ref{theorem2} for the elliptic curve $E''_{a,b}$, we complete the proof.
$\Box$
\end{document}
\begin{definition}[Definition:Proper Relational Structure]
Let $A$ be a set or class.
Let $\RR$ be a relation on $A$.
Then $\struct {A, \RR}$ is a '''proper relational structure''' {{iff}}:
:For each $a \in A$, the preimage $\map {\RR^{-1} } a$ of $a$ under $\RR$ is a set (or small class).
\end{definition}
Sample size determination for estimating antibody seroconversion rate under stable malaria transmission intensity
Nuno Sepúlveda1,2 and
Chris Drakeley1
© Sepúlveda and Drakeley; licensee BioMed Central. 2015
Published: 3 April 2015
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control statistical inference on parasite rates rather than on these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task because precision and statistical power are affected by the age distribution of a given population.
Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision.
The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed a risk of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, larger sample sizes are required than when SRR is known. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity.
Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas also show promise for planning alternative sampling schemes that may target or oversample specific age groups.
Seroprevalence
Seroconversion rate
Parasite prevalence (PR) and entomological inoculation rate (EIR) are the two most common disease risk indicators used in malaria epidemiology. PR is defined as the percentage of people who are currently infected with malaria parasites, and reflects the direct interplay between transmission intensity, age, and disease burden. EIR is in turn the frequency at which people are bitten by infectious mosquitoes over a period of time (typically a year), and provides information on the vector biology and its interaction with the human host. These measures, although useful in high and moderate transmission settings, show limitations in areas of lower transmission or in populations on the cusp of disease elimination. This is primarily due to the low number of infected individuals (humans or mosquitoes) in the population at the time of sampling. Accurate metrics are particularly important in assessing the effects of malaria interventions at these low transmission levels. Therefore, in recent years, alternative risk indicators based on anti-malarial antibody seroprevalence (SP) and seroconversion rate (SCR) have been evaluated [1-4].
The rationale of using antibody data stems from the observation that specific antibodies against parasite antigens persist in time and at reasonably stable concentrations, even when disease transmission is seasonal. Experimentally, the quantification of antibodies in sera is relatively easy to perform using simple laboratory techniques, such as ELISA assays. The resulting antibody measurements are usually optical densities or the respective titre values upon which one classifies each individual as seronegative or seropositive using appropriate cut-off points. These seropositivity thresholds are typically determined by two distinct approaches. The first one uses antibody data of known seronegative individuals in which the parameters of the underlying distribution are estimated, as illustrated by Arnold et al. [5]. In contrast, the second approach is based on fitting a Gaussian mixture model to current antibody data directly under the assumption that there are two latent subpopulations referring to seronegative and seropositive individuals, respectively [6]. In both approaches, the cut-off point for seropositivity is determined by the average plus 3 times the standard deviation of the seronegative population. Seroprevalence (SP) is then the percentage of seropositive individuals in the sample and embodies information over currently infected and recently exposed individuals. As expected, SP estimates are typically higher than those for PR measured in the same sample [1,7]. Although overcoming some of the shortcomings of PR and EIR, SP does not reflect the dynamics of malaria transmission directly.
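As a minimal illustration of the first (known-negatives) approach, the "mean plus 3 standard deviations" rule can be coded in a few lines. The optical-density data below are simulated and the distribution parameters are our assumptions, not values from any of the cited studies:

```python
import random
import statistics

random.seed(1)
# simulated ELISA optical densities for known seronegative controls
negatives = [random.gauss(0.2, 0.05) for _ in range(200)]

# seropositivity cut-off: mean + 3 standard deviations of the seronegative population
cutoff = statistics.mean(negatives) + 3 * statistics.pstdev(negatives)

def seropositive(od):
    return od > cutoff

# seroprevalence of a (hypothetical) field sample is then just a proportion
sample = [0.18, 0.22, 0.55, 0.90, 0.25]
sp = sum(seropositive(od) for od in sample) / len(sample)
print(round(cutoff, 3), sp)
```

For the mixture-model approach, the same cut-off formula would instead use the mean and standard deviation of the fitted seronegative component.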
Seroconversion rate (SCR) extends SP analysis to the scenario where one is a step closer to capturing the underlying disease dynamics of a given population. This serological parameter arises from the analysis of seroprevalence taken as a function of the age of the individuals using the so-called reverse catalytic models. The age of individuals is assumed to be a good surrogate of time in a stochastic process where individuals transit between seropositive and seronegative states upon malaria exposure and absence of re-infection. Theoretically, SCR is defined as the frequency with which seronegative individuals become seropositive upon malaria exposure. Conversely, the frequency with which seropositive individuals return to a seronegative state is known as seroreversion rate (SRR). This last parameter is related to antibody decay in the absence of disease exposure and reflects the effects of host factors on antibody dynamics.
Several studies have shown the utility of SCR as a malaria epidemiological tool with some demonstrating good agreement between this measure and EIR [1] and others detecting historical changes in transmission that otherwise would not have been possible with other measures of transmission [4,7-9]. Whilst the evidence for using serology as an adjunct epidemiological marker for malaria transmission is growing, there has been no formal examination of sample size considerations for SP and SCR as primary endpoints. In fact, most malaria epidemiological studies are planned with PR as the primary endpoint [7] and, therefore, it is unclear whether SP and SCR might have enough statistical precision to lead to clear conclusions.
SP is in theory a proportion (or a percentage) and, as such, several methods exist for sample size determination in this situation [10]. In contrast, the precision of SCR estimates depends not only on the sample size, but also on the age distribution associated with a given population. Therefore, sample size determination is not as straightforward. A pragmatic approach is to use an empirical relationship between SCR and SP in order to determine the total sample size required for collecting a given number of seropositive individuals [8]. This approach is here improved by using the theoretical relationship between SP and SCR under a given age distribution and a fixed SRR. Sample size determination is then based on back-transforming the confidence interval for SP into the corresponding one for SCR. In the situation where SCR and SRR are both unknown, a second sample size calculator is developed by bringing simulation together with regression. The use of these two sample size calculators is instrumental to power future serological studies, notably, in the challenging research settings of populations on the cusp of elimination [11].
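The core step of the first calculator — converting an SP value into the corresponding SCR for a known SRR — can be sketched as a numerical inversion. The age distribution and target SP below are illustrative assumptions; the seroprevalence formula is the reverse catalytic model introduced in the next section:

```python
import math

def expected_sp(lam, rho, ages):
    """Population seroprevalence implied by the reverse catalytic model."""
    return sum(lam / (lam + rho) * (1 - math.exp(-(lam + rho) * t))
               for t in ages) / len(ages)

def sp_to_scr(sp, rho, ages, lo=1e-6, hi=5.0):
    """Invert SP -> SCR by bisection (expected_sp is increasing in lam)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_sp(mid, rho, ages) < sp:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ages = list(range(1, 61))               # toy uniform age distribution, 1-60 years
scr_hat = sp_to_scr(0.40, 0.017, ages)  # SRR fixed at 0.017, as in the Tanzanian data
print(round(scr_hat, 4))
```

Applying the same inversion to the lower and upper limits of a confidence interval for SP yields the back-transformed interval for SCR.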
Reverse catalytic models for seropositivity data
In malaria epidemiology, the reverse catalytic models were first described to estimate incidence and recovery rates from longitudinal data [12]. More recently, they were recast for the analysis of malaria seroprevalence data [13]. Mathematically, these models can be described as a Markov chain where individuals transit between two serological states: 0 - seronegative and 1 - seropositive. The time between transitions is assumed to be exponentially distributed. This assumption implies that every time an individual moves from one state to another, the stochastic process restarts probabilistically due to the lack of memory of Markov chains. This is in close agreement with the general notion that malaria parasites can only confer partial immunity to the host.
This paper deals with the simplest reverse catalytic model where SCR and SRR are assumed to be fixed constants throughout time and for every individual. The use of this model has in practice three key implications. Firstly, a constant SCR implies that disease transmission remained unchanged throughout time in the population under study. Secondly, a constant SRR implies that the host factors affecting antibody decay were not altered by any genetic selection event, migration or admixture. Thirdly, all individuals have experienced the same disease transmission intensity and, thus, age can be used as a surrogate of the time of disease dynamics. Mathematically, the probability of individuals with age t being at each serological state is given by the transition probability matrix P(t) = [p_{i|j}(t)] = e^{Rt}, i, j = 0, 1, where p_{i|j}(t) is the conditional probability of an individual with age t being in state i given that the process started in state j, and R is the so-called rate matrix defined as
$$ R=\left[\begin{array}{cc} -\lambda & \lambda \\ \rho & -\rho \end{array}\right], $$
where λ and ρ are the SCR and SRR, respectively. Assuming that all individuals are born seronegative (that is, seronegative at time t = 0; this is achieved in practice by only including individuals aged 1 year or older to negate putative maternal effects on malaria antibodies), the probability of an individual aged t being seropositive is described by
$$ p_{1|0}(t)=\frac{\lambda}{\lambda+\rho}\left(1-e^{-(\lambda+\rho)t}\right). $$
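Equation (2) is easy to evaluate directly. A short sketch (the SCR value is illustrative; the SRR of 0.017 is taken from the Tanzanian data used later) checks that seroprevalence rises with age towards the equilibrium λ/(λ + ρ):

```python
import math

def p_seropositive(t, lam, rho):
    """Eq. (2): probability of being seropositive at age t."""
    return lam / (lam + rho) * (1 - math.exp(-(lam + rho) * t))

lam, rho = 0.05, 0.017  # illustrative SCR; SRR as in the Tanzanian data
curve = [p_seropositive(t, lam, rho) for t in range(1, 81)]
assert all(a < b for a, b in zip(curve, curve[1:]))  # monotone in age
assert curve[-1] < lam / (lam + rho)                 # still below the plateau
print(round(lam / (lam + rho), 3))  # equilibrium seroprevalence
```

The plateau λ/(λ + ρ) is the seroprevalence that very old age groups approach, which is why the age distribution of a survey matters for precision.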
A special case of the above model may arise from populations where only a few seronegative individuals would result from seroreversion events. As a consequence, the data might not contain enough information to estimate SRR (i.e., ρ ≈ 0). In this case, equation (2) can be rewritten as follows
$$ \log\left[-\log\left(1-p_{1|0}(t)\right)\right]= \log \lambda + \log t. $$
This model has been applied to malaria data from low transmission populations [14], to serology data on human leishmaniasis [15], and to limiting dilution data [16]. Theoretically, equation (3) can be seen as the popular complementary log-log model from statistics that, in turn, can be formulated as a generalized linear model (GLM) under a binomial sampling scheme [17]. As such, the respective parameter estimation can be performed in most statistical software packages as long as one specifies 'log age' as the explanatory variable with the corresponding slope fixed at 1. Alternative sample size calculators for this model could be derived along the lines of GLM power analysis, as described elsewhere for logistic regression [18,19].
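Because equation (3) has a single free parameter, its maximum-likelihood estimation can also be sketched without any GLM machinery. The data below are simulated from the model itself; the true SCR and the age range are our assumptions:

```python
import math
import random

random.seed(7)
true_lam = 0.04
ages = [random.randint(1, 60) for _ in range(1000)]
# under eq. (3) with rho ~ 0: P(seropositive | age t) = 1 - exp(-lam * t)
ys = [random.random() < 1 - math.exp(-true_lam * t) for t in ages]

def loglik(lam):
    ll = 0.0
    for t, y in zip(ages, ys):
        p = 1 - math.exp(-lam * t)
        ll += math.log(p) if y else math.log(1 - p)
    return ll

# crude grid-search maximum-likelihood estimate of the SCR
grid = [i / 10000 for i in range(1, 1000)]
lam_hat = max(grid, key=loglik)
print(lam_hat)
```

In practice one would use the complementary log-log GLM with an offset of log age, but the grid search makes the likelihood structure explicit.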
There are also other reverse catalytic models describing changes in disease transmission (see, for example, the review of Corran et al. [1]). Although interesting, sample size determination for these alternative models will be studied elsewhere (Sepúlveda and Drakeley, in preparation). In the malaria literature, one can also find an extension of the reverse catalytic modelling framework to the situation where seropositivity can be boosted by recurrent malaria exposure [20]. This model would appear to be better suited to very high transmission settings and is, thus, beyond the scope of this paper.
Model parameterization
To illustrate sample size determination for realistic values of SCR and SRR, Plasmodium falciparum data sets from two independent studies in northeast Tanzania were used [3,21]. This region extends from the high malaria transmission areas in the coastal plains of Tanga to the low transmission settings in the high-altitude mountains of Kilimanjaro, Usambara and Pare. Because of this natural variation in malaria endemicity, northeast Tanzania is an ideal region to understand how different malaria risk indicators relate to each other. Available data on altitude (in meters) against EIR [21] were re-analysed, leading to the following linear regression model (Additional file 1: Figure A)
$$ \log_{10}\mathrm{EIR}=2.5204-0.0025\times \mathrm{altitude}. $$
In another epidemiological study, serological data from 21 villages of the same region were also available [3,13]. The SCR associated with MSP1 antibodies was found to be highly correlated with altitude [1]. This data set suggested the following relationship between SCR and altitude (Additional file 1: Figure B)
$$ \log_{10}\mathrm{SCR}=-0.2908-0.0012\times \mathrm{altitude}, $$
where the SRR estimate would appear to be constant across villages and was fixed at 0.017. In turn, data from the same study suggested the following relationship between the PR of children aged 0–4 years old (PR04) and altitude (Additional file 1: Figure C):
$$ \log \frac{\mathrm{PR}_{04}}{1-\mathrm{PR}_{04}}=8.9992-1.5934\times \log_{10}\mathrm{altitude}. $$
Solving each of the above equations as a function of altitude, the expected relationship between EIR, SCR, and PR04 can be obtained, as shown in Figure 1A.
Model parameterization under the assumption of constant malaria transmission intensity: A. Expected relationship between SCR, EIR and PR in children aged 0 to 4 years old. B. Age-adjusted SP curves given the expected SCRs associated with the EIRs shown in A. C. Age structure of African and non-African populations. D. Seroprevalence as a function of SCR based on the age distributions shown in C.
Sample size determination was conducted for the following transmission intensities, measured in EIR units (PR04 in brackets): 0.01 (0.050), 0.1 (0.073), 1 (0.119), 10 (0.231) and 100 (0.625). The corresponding SCRs are 0.0034, 0.0104, 0.0324, 0.0969 and 0.2900, respectively (Table 1). With respect to the above-mentioned large epidemiological study [1], an SCR between 0.0034 and 0.0104 describes the low transmission intensities of high-altitude villages, such as Kilomeni (1556 m - SCR = 0.0047) or Mokala (1702 m - SCR = 0.0104). SCRs between 0.01 and 0.10 are, in turn, associated with villages at intermediate altitude, like Tewe (1049 m - SCR = 0.0308) or Ngulu (831 m - SCR = 0.0906). Finally, SCRs greater than 0.10 are related to lowland villages, such as Mgila (375 m - SCR = 0.128) or Mgome (196 m - SCR = 0.302), where malaria transmission is considered to be high. The expected age-adjusted SP curves are shown in Figure 1B.
Expected relationship between EIR, PR 04 , SCR and SP in African (AFR), Southeast Asian and South American (SEA + SA) populations where seroreversion rate was fixed at 0.017
Model estimation
In terms of statistical analysis, age-adjusted seropositivity data can be summarized as a frequency vector {n ts }, where n ts is the frequency of individuals with age t = 1,…,T and serological state s = 0 or 1, and T is the total number of distinct age values in the sample. If individuals were sampled independently of each other and the statistical inference is focused on age-adjusted seroprevalence only, the sampling distribution of the frequency vector {n ts } can be described by a binomial-product distribution, one binomial distribution per age value, that is,
$$ f\left(\left\{{n}_{ts}\right\}\mid\lambda, \rho \right)=\prod_{t=1}^T\frac{\left({n}_{t0}+{n}_{t1}\right)!}{n_{t0}!\,{n}_{t1}!}{\left[{p}_{1|0}(t)\right]}^{n_{t1}}{\left[1-{p}_{1|0}(t)\right]}^{n_{t0}}, $$
where p 1|0(t) is given by equation (2). Parameter estimation can be performed via standard maximum likelihood methods, as described elsewhere [15]. Stata and R scripts for parameter estimation are available from the authors upon request.
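The maximum likelihood fit of equation (7) can be sketched as follows. This is a crude, iteratively refined grid search written in Python for illustration; it stands in for, and does not reproduce, the Stata and R scripts mentioned above, and the simulated survey settings are made up:

```python
import math, random

def log_lik(lam, rho, ages, status):
    """Log-likelihood of equation (7), dropping the combinatorial
    constant, with p(t) given by the reverse catalytic model."""
    ll = 0.0
    for t, s in zip(ages, status):
        p = lam / (lam + rho) * (1.0 - math.exp(-(lam + rho) * t))
        ll += math.log(p) if s else math.log(1.0 - p)
    return ll

def fit_scr_srr(ages, status):
    """Joint estimate of (SCR, SRR) by zooming an 11 x 11 grid onto the
    likelihood maximum eight times."""
    la, lb, ra, rb = 1e-4, 1.0, 1e-4, 0.5
    best = (la, ra)
    for _ in range(8):
        lams = [la + i * (lb - la) / 10 for i in range(11)]
        rhos = [ra + i * (rb - ra) / 10 for i in range(11)]
        best = max(((l, r) for l in lams for r in rhos),
                   key=lambda pair: log_lik(pair[0], pair[1], ages, status))
        wl, wr = (lb - la) / 5, (rb - ra) / 5
        la, lb = max(1e-5, best[0] - wl), best[0] + wl
        ra, rb = max(1e-5, best[1] - wr), best[1] + wr
    return best

# Simulated survey: 2,000 individuals, true SCR = 0.0969, true SRR = 0.017.
random.seed(7)
ages = [random.randint(1, 80) for _ in range(2000)]
status = [random.random() < 0.0969 / 0.1139 * (1 - math.exp(-0.1139 * t))
          for t in ages]
est = fit_scr_srr(ages, status)
```

In production one would use a proper optimizer with standard errors, but the grid sketch makes the likelihood surface and the two-parameter estimation explicit.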
Sample size calculations
The first sample size calculator assumes that the SRR is a known constant (say ρ 0 = 0.017) and, thus, does not need to be estimated after sample collection. In that case, the expected relationship between the SP of the population (hereafter denoted by π) and the SCR can be computed as follows
$$ \pi =\sum_{t=1}^{A_{\max }}{\alpha}_t\frac{\lambda }{\lambda +{\rho}_0}\left(1-{e}^{-\left(\lambda +{\rho}_0\right)t}\right), $$
where α t is the proportion of individuals aged t in the population and A max is the maximum age considered relevant for the population, say A max = 80. As expected, the above relationship depends on the age distribution of the population (or of the study design used). Official statistics on age distributions were explored in order to understand how these vary across the world [22]. These data sets suggest that African countries have approximately the same age distribution (a decreasing frequency from newborns to the elderly; Additional file 2). Thus, a typical age structure distribution for these populations was generated by pooling data from different countries together (Figure 1C). Although slight differences can be observed across countries, the age distributions from Southeast Asia and South America show roughly the same pattern, but one distinct from that of African populations (Additional file 2). Therefore, a non-African age distribution prototype was constructed (Figure 1C). This age structure is much flatter than its African counterpart due to a higher frequency of adults.
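Equation (8) is a weighted average and can be evaluated directly. The sketch below uses a hypothetical linearly decreasing age pyramid purely for illustration; it is not the official age distribution data of [22]:

```python
import math

def expected_sp(scr, srr, age_props):
    """Equation (8): expected population seroprevalence, i.e. the reverse
    catalytic curve averaged over the age distribution. age_props maps
    each age t = 1..A_max to the population proportion alpha_t."""
    rate = scr + srr
    return sum(alpha * (scr / rate) * (1.0 - math.exp(-rate * t))
               for t, alpha in age_props.items())

# Hypothetical 'African-like' decreasing age pyramid over ages 1-80.
weights = {t: 81 - t for t in range(1, 81)}
total = sum(weights.values())
african_like = {t: w / total for t, w in weights.items()}
```

Flatter (non-African-like) age distributions shift weight towards older, more often seropositive individuals, which is why the SP-SCR curves in Figure 1D differ between the two prototypes.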
These two general age distributions were then used to derive the expected SP as a function of SCR according to equation (8) (see Figure 1D). Interestingly, the relationship between SP and SCR in African populations when ρ = 0 is similar to the one for non-African populations when ρ = 0.017. Under those assumptions, sample size determination would lead to similar results for these two distinct populations.
In the statistical literature, there are several methods for constructing a confidence interval for a proportion that can be used for sample size determination, as reviewed elsewhere [23]. The most popular method is the so-called Wald Score which, despite its computational simplicity, may lead to poor coverage and problems of overshoot and degeneracy [10]. An alternative method is to introduce a continuity correction in the Wald Score which, when applied to SP estimation, leads to the following 95% confidence interval
$$ {\widehat{\pi}}_l=\widehat{\pi}-1.96\sqrt{\frac{\widehat{\pi}\left(1-\widehat{\pi}\right)}{n}}-\frac{1}{2n}, $$
$$ {\widehat{\pi}}_u=\widehat{\pi}+1.96\sqrt{\frac{\widehat{\pi}\left(1-\widehat{\pi}\right)}{n}}+\frac{1}{2n}, $$
where \( \widehat{\pi} \) is an estimate of the true SP, n is the sample size and 1.96 is the 97.5% quantile of the standard Gaussian distribution. For a given SCR, one can compute the expected π using equation (8) and replace it in the above equations in order to obtain the corresponding confidence bounds \( {\widehat{\pi}}_l \) and \( {\widehat{\pi}}_u \) for a given sample size n. These confidence bounds can then be back-transformed into the corresponding ones for SCR using equation (8) again. To perform the back-transformation, one needs to solve the following equations as functions of λ l and λ u (the corresponding lower and upper bounds of SCR)
$$ {\widehat{\pi}}_l=\sum_{t=1}^{A_{\max }}{\alpha}_t\frac{\lambda_l }{\lambda_l +{\rho}_0}\left(1-{e}^{-\left(\lambda_l +{\rho}_0\right)t}\right), $$
$$ {\widehat{\pi}}_u=\sum_{t=1}^{A_{\max }}{\alpha}_t\frac{\lambda_u }{\lambda_u +{\rho}_0}\left(1-{e}^{-\left(\lambda_u +{\rho}_0\right)t}\right). $$
Unfortunately, these equations cannot be solved analytically, but a binary search algorithm, although slow, is able to obtain an approximate solution over an appropriate search interval.
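Putting the pieces together, the back-transformation works as follows: compute the expected SP for the candidate SCR, build the Wald interval with continuity correction around it, and bisect equation (8) to map each bound back to the SCR scale. The Python sketch below is illustrative (a uniform age distribution is assumed); the authors' own implementation is in R:

```python
import math

def expected_sp(scr, srr, age_props):
    """Equation (8): expected seroprevalence for a given SCR and SRR."""
    rate = scr + srr
    return sum(a * (scr / rate) * (1.0 - math.exp(-rate * t))
               for t, a in age_props.items())

def invert_sp(pi_target, srr, age_props, lo=1e-9, hi=5.0, iters=100):
    """Bisection for the SCR whose expected SP equals pi_target;
    equation (8) is increasing in the SCR, so a binary search applies."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_sp(mid, srr, age_props) < pi_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def scr_confidence_interval(scr, srr, n, age_props):
    """Back-transformed 95% CI for the SCR at sample size n: Wald bounds
    with continuity correction (equations 9-10) mapped back via (8)."""
    pi = expected_sp(scr, srr, age_props)
    half = 1.96 * math.sqrt(pi * (1.0 - pi) / n) + 0.5 / n
    lower = max(1e-12, pi - half)
    upper = min(1.0 - 1e-12, pi + half)
    return invert_sp(lower, srr, age_props), invert_sp(upper, srr, age_props)

# Uniform age distribution over ages 1-80, purely for illustration.
uniform_ages = {t: 1.0 / 80 for t in range(1, 81)}
```

Fixing the relative length (λu − λl)/λ at a target and binary-searching over n then yields the required sample size.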
In theory, one defines the coverage of a confidence interval as the proportion of times that the confidence interval contains the true value of the parameter upon repeated sampling. Under this definition, a 95% confidence interval should lead to a coverage of 95%. However, the expected coverage is not always achieved due to the use of (Gaussian) approximations for the random variables underpinning the construction of a given confidence interval. This putative incorrect coverage affects sample size determination by either undersampling in situations of undercoverage or oversampling in situations of overcoverage, as reported for proportion estimation when data stem from populations with proportions less than 0.1 or higher than 0.9 [23,24]. Therefore, the back-transformation method was tested against these putative coverage problems.
The expected coverage of the confidence interval for SCR was assessed via simulation. For every pairwise combination of SCR and n, the following two-step algorithm was employed for the generation of a given data set: i) generate the age of each individual in the sample, and (ii) generate the corresponding serological state as a Bernoulli trial with seropositivity probability given by equation (2). The back-transformation of the confidence interval for SP was applied to each data set. Coverage was finally calculated by counting how many times the confidence intervals included the SCR that generated the data.
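The coverage simulation can be sketched as follows. For brevity, coverage is counted on the SP interval itself: since the SP-to-SCR back-transformation is monotone, an interval for SP contains the true π exactly when the back-transformed interval contains the true λ, so the two coverages coincide. Uniform ages over 1-80 are an assumption of this illustration:

```python
import math, random

def coverage_sp_interval(lam, rho, n, nsim=400, seed=11):
    """Monte-Carlo coverage of the 95% Wald interval with continuity
    correction, using the two-step data generation described above:
    (i) draw an age per individual, (ii) draw serostatus from equation (2)."""
    rng = random.Random(seed)
    p = lambda t: lam / (lam + rho) * (1.0 - math.exp(-(lam + rho) * t))
    ages_pool = range(1, 81)
    true_pi = sum(p(t) for t in ages_pool) / 80.0
    hits = 0
    for _ in range(nsim):
        k = sum(rng.random() < p(rng.choice(ages_pool)) for _ in range(n))
        pi_hat = k / n
        half = 1.96 * math.sqrt(pi_hat * (1.0 - pi_hat) / n) + 0.5 / n
        hits += pi_hat - half <= true_pi <= pi_hat + half
    return hits / nsim
```

Running this across a grid of (SCR, n) pairs reproduces the kind of coverage table discussed in the Results.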
The performance of this method was also assessed in terms of the midpoint of the corresponding confidence interval for SCR. In this scenario, a confidence interval was defined as central if the true SCR was located in the middle of the corresponding interval. A practical implication of using central confidence intervals is that they have the shortest length among all intervals one can construct with a given confidence level, if a Gaussian distribution is a good approximation for the sampling distribution of SCR estimates. In that case, the use of central confidence intervals for sample size determination implies working with the best precision possible and, thus, the subsequent sample sizes are the minimum ones for a given confidence level. Conversely, if the constructed confidence intervals are not central, they might not be the ones providing the highest precision (i.e., with the shortest length). To assess whether a given confidence interval is central or not, one is required to know the sampling distribution of SCR estimates upon repeated sampling. Unfortunately, that distribution is not known in general.
Sample size determination was then conducted by controlling the length of the 95% confidence interval for SCR. With this goal in mind, the relative length of that confidence interval was fixed at a given constant (e.g., 1, 0.75, 0.5, and 0.25). The above back-transformation method was used together with an additional binary search aiming to find the required sample size. The search algorithm was implemented in the R software and the corresponding code is available from the authors upon request.
When there is little information on the SRR to help plan a study, there is no clear analytical method to calculate the required sample size. Instead, data simulation would appear to be the best approach to the problem. Specifically, data simulation was used to study the expected length of the confidence intervals for SCR given a set of sample sizes (e.g., n = 250, 500, 1,000, 2,500, 5,000 and 10,000). The generation of each data set followed the same algorithm as described for the performance of the first sample size calculator. For each generated data set, the estimates of SCR and SRR were obtained via maximum likelihood methods. To obtain the precision of the SCR estimate associated with a given sample size, the 2.5% and 97.5% quantiles were calculated for the set of SCR estimates generated from data of a given transmission intensity. The absolute precision was defined as the absolute difference between these two quantiles, whereas the relative precision is the absolute precision divided by the SCR that generated the data.
It is worth noting that the absolute precision (pr) of SP estimates associated with the first sample size calculator can be rewritten as a function of 1/n given a pair of SCR and SRR, that is,
$$ {\mathrm{pr}}_{n\mid{\rho}_0}\left(\widehat{\pi}\right)=3.92\sqrt{\frac{\widehat{\pi}\left(1-\widehat{\pi}\right)}{n}}+\frac{1}{n}, $$
where the above equation results from the absolute difference between equations (9) and (10). Since this sample size calculator is based on a back-transformation relating SP to SCR, the precision of SCR estimates can also be expressed as a function of 1/n (say function g). This function is highly nonlinear and cannot be derived analytically but, in theory, it can be approximated by the following Maclaurin expansion:
$$ {\mathrm{pr}}_{n\mid{\rho}_0}=g(0)+\frac{g'(0)}{1!}\times \frac{1}{n}+\frac{g''(0)}{2!}\times \frac{1}{n^2}+\frac{g'''(0)}{3!}\times \frac{1}{n^3}+\cdots $$
where g′(0), g′′(0) and g′′′(0) are the unknown but fixed first, second and third derivatives of the function g evaluated at zero, respectively. Therefore, the precision of SCR estimates (\( \widehat{\lambda} \)) can be modelled by linear regression as a function of 1/n, that is,
$$ \mathrm{pr}_{n\mid{\rho}_0}\left(\widehat{\lambda}\right)={\beta}_0+\frac{\beta_1}{n}+\frac{\beta_2}{n^2}+\frac{\beta_3}{n^3}, $$
where β 0, β 1, β 2 and β 3 are coefficients to be estimated from the set of SCR estimates obtained from the simulated data. This rationale was assumed to apply directly to the second sample size calculator, where the SRR is unknown. The above model was then fitted to the simulated precision data via maximum likelihood. The resulting adjusted correlation coefficient between simulated and predicted data was found to be >0.99, thus suggesting that the above model is indeed a good approximation of the relationship between the sample size and the expected precision of SCR estimates. The last step was to find the sample size associated with a given precision. This was done numerically using a binary search algorithm.
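These last two steps can be sketched as follows: an ordinary least-squares fit of equation (14) (rescaled with x = 1000/n to keep the normal equations well-conditioned), followed by a binary search for the target precision. The precision values below are synthetic, generated from a made-up law pr(n) = 0.001 + 5/n; they are not the paper's simulation output:

```python
def fit_precision_model(ns, precisions):
    """Least-squares fit of pr = b0 + b1*x + b2*x^2 + b3*x^3 with
    x = 1000/n, solved via the normal equations and Gaussian elimination
    with partial pivoting (standard library only)."""
    rows = [[1.0, x, x * x, x ** 3] for x in (1000.0 / n for n in ns)]
    A = [[sum(r[j] * r[k] for r in rows) for k in range(4)] for j in range(4)]
    c = [sum(r[j] * pr for r, pr in zip(rows, precisions)) for j in range(4)]
    for col in range(4):                       # forward elimination
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv], c[col], c[piv] = A[piv], A[col], c[piv], c[col]
        for r in range(col + 1, 4):
            f = A[r][col] / A[col][col]
            for k in range(col, 4):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * 4
    for r in range(3, -1, -1):                 # back substitution
        b[r] = (c[r] - sum(A[r][k] * b[k] for k in range(r + 1, 4))) / A[r][r]
    return b

def predicted_precision(b, n):
    x = 1000.0 / n
    return b[0] + b[1] * x + b[2] * x * x + b[3] * x ** 3

def sample_size_for(b, target, lo=50, hi=200000):
    """Smallest n whose predicted precision reaches the target; the fitted
    model is decreasing in n, so a binary search suffices."""
    while lo < hi:
        mid = (lo + hi) // 2
        if predicted_precision(b, mid) <= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Synthetic precision data from pr(n) = 0.001 + 5/n (illustrative only).
ns = [250, 500, 1000, 2500, 5000, 10000]
prs = [0.001 + 5.0 / n for n in ns]
b = fit_precision_model(ns, prs)
```

In the paper's workflow, the `precisions` vector would instead come from the simulated SCR quantiles described above.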
Performance of the back-transformation method
The performance of the back-transformation method was first assessed in terms of the expected coverage of the 95% confidence intervals for SCR (Table 2). In most cases, the confidence intervals showed slight overcoverage (≤1%), with a few exceptions. In very low transmission settings (SCR = 0.0036), the confidence intervals show undercoverage for sample sizes ≤250 in Africa and ≤500 elsewhere, respectively. The most severe case of incorrect coverage is for samples of 50 individuals from African populations, where a strong overcoverage (0.998) is observed. Interestingly, in a non-African context, the confidence intervals show instead undercoverage (0.909) for the same sample size and transmission intensity. These opposing results might reflect marked differences in the underlying age structures, notably in terms of the proportion of children in one population and the other (see Figure 1C). In high transmission intensities (SCR = 0.29), the confidence intervals also show undercoverage for samples of 100 individuals or less in African settings. In practice, the problem of under- or overcoverage most likely results in confidence intervals that are wider or narrower than they should be in relation to a situation where the correct coverage is obtained. This has an impact on sample size determination in the sense that controlling the length of confidence intervals showing these problems might lead to smaller or greater sample sizes than required in reality.
Coverage of confidence intervals based on back-transformation algorithm assuming SRR = 0.017
Confidence intervals for SCR estimates were then evaluated in terms of their midpoints. The results suggest that these midpoints and the true SCR tend to become closer to each other as the sample size increases (Additional file 3: Figure A). Mathematically speaking, this results from approximating the back-transformation by means of a linear relationship between SP and SCR. The precise sample size where that begins to happen increases with the underlying transmission intensity. More specifically, sample sizes of about 400 and 2,250 individuals tend to provide central confidence intervals when SCR = 0.0036 and 0.29, respectively. For moderate sample sizes, say n < 500, the back-transformation method implies non-central confidence intervals for intermediate values of SCR. Since the exact distribution of SCR estimates is not known in general, it is unclear whether these non-central confidence intervals are the ones providing the highest precision.
Sample size calculations for known SRR
Sample size determination was then conducted under the assumption of a known SRR (SRR = 0.017; Table 3). For the same relative precision, the sample sizes vary with transmission intensity. In particular, sample sizes increase from very low to intermediate transmission intensities and then decline after reaching a sufficiently high transmission intensity (i.e., when the SP curve becomes flat). With increasing precision, the difference between sample sizes from different transmission intensities increases dramatically. On one extreme, for a relative length of 1, sample sizes vary from 73 (SCR = 0.0324) to 315 (SCR = 0.0036) and from 67 to 248 in African and non-African settings, respectively. On the other extreme, sample sizes range from 976 to 4,968 (Africa) and from 890 to 3,558 (elsewhere) for a relative length of 0.25.
Exact sample sizes and corresponding ranges for absolute SCR, EIR and SP by controlling the relative length of 95% confidence interval for SCR under the assumption of SRR = 0.017
Similar sample sizes were found for African and non-African populations experiencing SCR = 0.0324 and 0.0969 (intermediate transmission), irrespective of the relative precision used. When SCR = 0.0969, the sample sizes for African populations are 79, 127, 262 and 976 individuals to ensure a relative precision of 1, 0.75, 0.5, and 0.25, respectively, whereas the corresponding ones for non-African settings are 90, 142, 288 and 1,059. However, African studies require larger sample sizes than their non-African counterparts for SCR = 0.0036 and 0.0108, and the other way around for SCR = 0.29. For the same transmission intensity, the requirement of a smaller or larger sample size in African studies in relation to others conducted elsewhere reflects the steepness of the SCR-SP curve. In other words, the use of the back-transformation implies that, when specifying a given confidence interval for SP, the confidence interval for SCR is going to be narrower or wider depending on the steepness of the SP curve. Mathematically, the steepness of that curve is given by the respective derivative. That derivative was found to be smaller in African than in non-African populations for SCR < 0.058, and the other way around for SCR > 0.058 (Additional file 3: Figure B). Available PR data for P. falciparum suggest that non-African populations are most likely to be at lower endemicity [25]. Note that, for SCRs in the vicinity of 0.058, where the two derivative functions cross each other, similar sample sizes are expected for both populations, a result compatible with the sample sizes provided for intermediate transmission intensities. Finally, the relationship between SCR and SP was here found to be similar between African and non-African populations when SRR = 0 and 0.017, respectively (Figure 1D).
Therefore, the comparison between sample sizes for African and non-African studies can also be used to ascertain the bias in sample size estimates when assuming SRR = 0 in an African setting.
The calculated sample sizes can also be used to help design studies including different populations (or sites). Firstly, there is no theoretical impediment to using distinct sample sizes for populations known to differ in malaria endemicity. For example, a sample size of approximately 125 individuals will provide a relative precision of 1 for African sites experiencing an SCR of 0.0108. The same sample size leads to a relative precision of 0.75 for African populations with SCR = 0.0324 or 0.0969. Secondly, the expected confidence intervals for SCR can also provide clear insights into the underlying statistical power to compare sites with different transmission intensities. In particular, the sample sizes associated with a relative precision of 1 are enough to distinguish sites differing by at least one order of magnitude in EIR with 95% confidence (or with a 5% significance level, in hypothesis testing terminology). However, this distinction cannot be made with these sample sizes if a 99% confidence level is specified instead for comparing two sites differing by exactly one order of magnitude (Additional file 4). Thirdly, the expected confidence intervals for SCR are also instrumental in determining which transmission intensity range cannot be discriminated by the data. For example, a sample size of 79 individuals, associated with a relative length of 1 and SCR = 0.0969, cannot distinguish African populations with EIR ranging from 4.18 to 29.17.
Sample size calculations for unknown SRR
Sample size calculations were then performed for the most common situation of an unknown SRR. For low transmission settings (SCR ≤ 0.0108) and reasonably low sample sizes, there is a non-negligible probability of generating data sets leading to null SRR estimates (Table 4). More precisely, for SCR = 0.0036, one would need to sample at least 1,000 individuals to ensure that this chance is smaller than 10%, whereas for SCR = 0.0108, the same is achieved for sample sizes of no less than 500 individuals. In practice, these problematic data sets imply that the corresponding SCR estimates underestimate the true SCR that generated the data (Table 4). This underestimation can be explained by the fact that a few seronegative individuals may result from seroreversion events but are wrongly assumed to have never been exposed to malaria parasites under a null SRR estimate. For higher transmission settings, the occurrence of these problematic data sets is minimal because the generated data have a good balance between the total numbers of seropositive and seronegative individuals.
Percentage of simulated data sets where SRR was estimated as 0 (% ρ=0 ) and the bias of the corresponding SCR estimates taken as the percentage in relation to the true SCR
Bias was defined as the difference between the mean of the corresponding estimates and the true value of SCR. The true SRR that generated the data sets was fixed at 0.017.
Approximate sample sizes were calculated using data simulation coupled with a regression model relating precision to sample size (Table 5); see Additional file 5 for the respective simulation results. Three key observations can be highlighted. Firstly, as found for known SRR, the same qualitative behavior between sample size and transmission intensity was observed irrespective of the population under study. More precisely, the sample sizes increase from very low to moderate transmission and decrease from then on. Secondly, the necessity of estimating an additional parameter from the data brought more uncertainty to SCR estimation, thus increasing the previous sample sizes for known SRR. In this case, the difference in sample sizes assuming a known SRR or not decreases with transmission intensity. On one extreme, for SCR = 0.0036, the sample sizes for relative precisions of 1, 0.75, 0.50 and 0.25 are now 2,193, 5,127 and >10,000, respectively, in comparison to 315, 549, 1,163 and 4,968 assuming a known SRR. On the other extreme, for SCR = 0.29, the sample sizes do not differ substantially whether the SRR is assumed known or not: 213, 267, 542, and 1,927 (unknown SRR) versus 151, 233, 461, and 1,670 (known SRR). Thirdly, for the same relative precision, African studies are most likely to require fewer individuals than their counterparts conducted elsewhere. This is in clear contrast to the above results for known SRR, where African studies would only have decreased sample sizes in high transmission intensities. The explanation for this result is unclear, but it might again be related to the underlying age distribution. When the SRR is unknown, the bulk of the information on SCR seems to come from young individuals and, if so, African populations have a higher proportion of individuals in that age group.
Finally, it is worth noting that, since the sample sizes were calculated using the same relative precision, the above-mentioned results for known SRR on comparing African to non-African studies are still valid for unknown SRR.
Approximate sample sizes for controlling precision of SCR estimates under of the assumption of unknown SRR where the true SRR was fixed at 0.017
In this paper, two sample size calculators for estimating the antibody SCR were proposed. The first calculator is based on the assumption of a known SRR and, because of that, it implies smaller sample sizes in relation to a situation where the SRR is assumed to be unknown. Obtaining a smaller sample size is important for studies where ethical issues, limited human and economic resources, or time constraints might be in place. However, this calculator requires fixing the SRR at a given constant. In this regard, the current knowledge of the SRR is still limited. Firstly, this parameter has only been measured indirectly by means of fitting reverse catalytic models to data. Secondly, there might be age differences in seroreversion, but seropositivity data appear not to have enough information for their detection [1]. Therefore, considering the SRR a fixed constant is a pragmatic choice not only for data analysis but also for sample size calculation. Notwithstanding this pragmatism, current estimates of the SRR [1,7,13] are of the same order of magnitude as the one used here and, therefore, the calculated sample sizes would appear to be reliable in general. However, for the matter of precision, sample size determination is recommended to be performed using a predefined SRR estimate from a reliable source. An obvious source of information can be data from another population with similar malaria transmission intensity and host factors. Another possible source of information is existing data from past surveys taken from the same population, as reported in a recent study from Kenya [26]. Statistically speaking, a more coherent and elegant way to incorporate prior information in sample size determination is via Bayesian methodology, as done elsewhere for estimating proportions (or prevalences) [27,28].
Although appealing, this approach does not appear to have attracted much attention from malaria epidemiologists, as suggested by the scarce number of studies applying it to data analysis.
The basic idea underlying the first sample size calculator is to apply a back-transformation to the confidence interval for SP. The reliability of this method is then critically dependent not only on the statistical performance of the chosen SP confidence interval (in this case, the Wald Score corrected for continuity), but also on the degree of similarity between the age distribution used in the sample size determination and the one obtained upon sample collection. The Wald confidence interval with continuity correction is one among more than twenty methods proposed to construct confidence intervals for a proportion [23]. A recent study compared seven of these methods in terms of sample size determination for estimating a proportion [10]. General guidelines are not easy to put forward because they depend not only on the different criteria for dealing with eventual problems of under- or overcoverage of the corresponding confidence intervals, but also on the underlying proportion in the population under study. Notwithstanding this problem, these authors showed that, for a given absolute precision and a proportion between 0.01 and 0.90, the sample sizes from different methods do not deviate by more than 40 sampling units. This result is expected to hold true for SCR estimation, but might require large-enough sample sizes where a linear approximation can be invoked between SCR and SP. With respect to the age distributions used here, official statistics showed a clear distinction between African and non-African populations. However, these age distributions refer to the respective overall populations and, thus, slight differences are expected between these whole-population-based distributions and the corresponding ones for the rural areas where malaria is more prevalent.
Although a case-by-case approach is recommended, these differences are most likely to be related to a higher number of older individuals living in urban populations that, in general, have better access to health care. Other factors related to sampling feasibility might also introduce some bias into the sampled age distribution, such as using school surveys or collecting household-consented data, which led to a slight overrepresentation of school-aged children (5–18 years old) in recent studies [9,29,30]. Notwithstanding these putative differences between official and sampled age distributions, there is good agreement between the age distributions used here and the ones found across a series of recent cross-sectional studies [31-33]. Thus, the calculated sample sizes would appear to be reliable for planning future surveys not using age stratification. A natural follow-up of this work is then to perform sample size determination for alternative sampling strategies that may necessitate targeting or oversampling specific age groups. In theory, stratified sampling, if done intelligently, is known to improve the precision of the ensuing estimates of the population prevalence [34]. Since the first sample size calculator is based on the confidence interval for SP, the sample sizes of age-adjusted sampling strategies should decrease in relation to the ones calculated here. The optimal age stratification in terms of minimum sample size is one among other questions to be explored in the near future.
The second sample size calculator relates to the most general situation of an unknown SRR. Although general, this method only provides approximate sample sizes because it uses simulation coupled with a regression model predicting the expected precision as a function of the sample size. As expected, the additional requirement of estimating the SRR results in larger sample sizes in comparison to the ones derived from a known SRR. The simulation results highlighted the possibility of generating data sets from low transmission settings where one does not have enough information to estimate the SRR, thus introducing significant negative biases in the SCR estimates. To minimize the occurrence of such situations, sample sizes of no less than 1,000 and 500 are recommended for EIR = 0.01 and 0.1, respectively. It is worth noting that there are many combinations of transmission intensities and relative precisions leading to sample sizes of more than 1,000 individuals. This relatively intensive sampling is particularly important for studying populations close to malaria elimination (SCR ≤ 0.0108). As a statistical advantage, a large sample size diminishes the chance of underestimating the SCR due to null SRR estimates. However, large community-based surveys are usually seen as financially and logistically demanding enterprises, and school or health centre surveys may be more pragmatic. As with a conventional metric like the parasite rate, the relative advantages and disadvantages of a relatively small community-based survey and a large study using a more convenient sampling approach need to be properly balanced. Additionally, the simulation algorithm for calculating precision assumes a population of infinite size. This assumption is reasonable in highly dense populations living in small areas where malaria transmission is expected to be more homogeneous.
However, such populations are uncommon; heterogeneity in population density and malaria transmission is more likely to be the norm, especially at low transmission. The corresponding sample size will need to be inflated if one is to unravel subpopulations with subtle differences in malaria exposure, as observed in different studies [1,7,13]. Finally, a large sample size might not be feasible in intrinsically small populations, such as those living on islands [4,9]. In that case, the precision is in fact increased relative to the one calculated from an infinite population and, thus, the proposed sample size calculator would lead to oversampling. However, if there are no dramatic cost restrictions, oversampling might compensate for any loss of precision due to missing data.
It is also important to highlight that the SCR and SRR used here are for the merozoite surface protein-1 (MSP1) antigen. Another well-characterized antigen is the P. falciparum apical membrane antigen-1 (AMA1). Current SCR and SRR estimates differ between these two antigens owing to their inherent immunogenicity and half-life of exposure to the immune system [8], with a higher SCR for AMA1 compared to its MSP1 counterpart. As a direct consequence of this observation, smaller sample sizes will be required for AMA1-based studies. There is relatively little data for other antigens, though variation in seroconversion rates has been reported [35,36]. Practically, to overcome issues around antigenic variation and differential population reactivity (e.g., due to genetics), a combination of antigens is used, and sample sizes would be derived from the most immunogenic component.
In conclusion, this paper described relatively straightforward approaches to calculating the sample size for estimating SCR. The methods assume data derived from areas with stable transmission, standard population age distributions and community-based surveys with no age stratification. Several caveats relating to survey design, antibody reversion rates and antigen choice were presented to allow an appreciation of the complexity of the issue. Pragmatically, however, the results suggest that SCR estimation can be readily incorporated into the design of most malariometric studies, and this will be of particular use in populations with low malaria endemicity. Further work is needed to assess the sample size requirements for estimating any change in transmission with serology.
EIR:
entomological inoculation rate
PR:
parasite rate
SCR:
seroconversion rate (λ)
SRR:
seroreversion rate (ρ)
Nuno Sepúlveda is funded by the Wellcome Trust grant number 091924 and Fundação para a Ciência e Tecnologia through the project Pest-OE/MAT/UI0006/2011. Chris Drakeley is funded by the Wellcome Trust grant number 091924.
Additional file 1: Relationship between altitude and different malariometrics in northeast Tanzania: altitude versus EIR (A), altitude versus SCR (B), altitude versus PR0–4 (C).
Additional file 2: Age distributions of different countries from West Africa, East Africa, South America and Southeast Asia.
Additional file 3: Midpoints of confidence intervals for SCR as function of the sample size (A) and the derivative function of SP in relation to SCR (B).
Additional file 4: Absolute SCR, EIR and SP ranges using the sample sizes shown in Table 3 and 99% confidence level for the respective intervals.
Additional file 5: Results of the simulation study when SRR is unknown. The true SRR of the population was setup at 0.017.
NS developed the proposed methodology and wrote the manuscript. CD designed the project and provided real-world implications of this work. Both authors read, revised and approved the manuscript.
London School of Hygiene and Tropical Medicine, Keppel Street, WC1E 7HT London, UK
Center of Statistics and Applications of University of Lisbon, Faculdade de Ciências da Universidade de Lisboa, Bloco C6 - Piso 4, 1749-1016 Lisboa, Portugal
Corran P, Coleman P, Riley E, Drakeley C. Serology: a robust indicator of malaria transmission intensity? Trends Parasitol. 2007;23:575–82.
Bousema T, Youssef RM, Cook J, Cox J, Alegana VA, Amran J, et al. Serologic markers for detecting malaria in areas of low endemicity, Somalia, 2008. Emerg Infect Dis. 2010;16:392–9.
Drakeley CJ, Carneiro I, Reyburn H, Malima R, Lusingu JPA, Cox J, et al. Altitude-dependent and -independent variations in Plasmodium falciparum prevalence in northeastern Tanzania. J Infect Dis. 2005;191:1589–98.
Cook J, Kleinschmidt I, Schwabe C, Nseng G, Bousema T, Corran PH, et al. Serological markers suggest heterogeneity of effectiveness of malaria control interventions on Bioko Island, Equatorial Guinea. PLoS One. 2011;6:e25137.
Arnold BF, Priest JW, Hamlin KL, Moss DM, Colford JM, Lammie PJ. Serological measures of malaria transmission in Haiti: comparison of longitudinal and cross-sectional methods. PLoS One. 2014;9:e93684.
Bretscher MT, Supargiyono S, Wijayanti MA, Nugraheni D, Widyastuti AN, Lobo NF, et al. Measurement of Plasmodium falciparum transmission intensity using serological cohort data from Indonesian school-children. Malar J. 2013;12:21.
Cunha MG, Silva ES, Sepúlveda N, Costa SPT, Saboia TC, Guerreiro JF, et al. Serologically defined variations in malaria endemicity in Pará state, Brazil. PLoS One. 2014;9:e113357.
Stewart L, Gosling R, Griffin J, Gesase S, Campo J, Hashim R, et al. Rapid assessment of malaria transmission using age-specific sero-conversion rates. PLoS One. 2009;4:e6083.
Cook J, Reid H, Iavro J, Kuwahata M, Taleo G, Clements A, et al. Using serological measures to monitor changes in malaria transmission in Vanuatu. Malar J. 2010;9:169.
Gonçalves L, de Oliveira MR, Pascoal C, Pires A. Sample size for estimating a binomial proportion: comparison of different methods. J Appl Stat. 2012;39:2453–73.
Stresman G, Kobayashi T, Kamanga A, Thuma PE, Mharakurwa S, Moss WJ, et al. Malaria research challenges in low prevalence settings. Malar J. 2012;11:353.
Bekessy A, Molineaux L, Storey J. Estimation of incidence and recovery rates of Plasmodium falciparum parasitaemia from longitudinal data. Bull World Health Organ. 1976;54:685–93.
Drakeley CJ, Corran PH, Coleman PG, Tongren JE, McDonald SLR, Carneiro I, et al. Estimating medium- and long-term trends in malaria transmission by using serological markers of malaria exposure. Proc Natl Acad Sci U S A. 2005;102:5108–13.
von Fricken ME, Weppelmann TA, Lam B, Eaton WT, Schick L, Masse R, et al. Age-specific malaria seroprevalence rates: a cross-sectional analysis of malaria transmission in the Ouest and Sud-Est departments of Haiti. Malar J. 2014;13:361.
Williams BG, Dye C. Maximum likelihood for parasitologists. Parasitol Today. 1994;10:489–93.
Bonnefoix T, Bonnefoix P, Verdiel P, Sotto JJ. Fitting limiting dilution experiments with generalized linear models results in a test of the single-hit Poisson assumption. J Immunol Methods. 1996;194:113–9.
McCullagh P, Nelder JA. Generalized Linear Models. 2nd ed. London: Chapman & Hall; 1989.
Hsieh FY, Bloch DA, Larsen MD. A simple method of sample size calculation for linear and logistic regression. Stat Med. 1998;17:1623–34.
Novikov I, Fund N, Freedman LS. A modified approach to estimating sample size for simple logistic regression with one continuous covariate. Stat Med. 2010;29:97–107.
Bosomprah S. A mathematical model of seropositivity to malaria antigen, allowing seropositivity to be prolonged by exposure. Malar J. 2014;13:12.
Bødker R, Akida J, Shayo D, Kisinza W, Msangeni HA, Pedersen EM, et al. Relationship between altitude and intensity of malaria transmission in the Usambara Mountains, Tanzania. J Med Entomol. 2003;40:706–17.
UN: a world of information. United Nations, New York. 2014. http://data.un.org/. Accessed 5 May 2014.
Pires A, Amado C. Interval estimators for a binomial proportion: comparison of twenty methods. Revstat. 2008;6:165–97.
Newcombe RG. Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med. 1998;17:857–72.
Gething PW, Patil AP, Smith DL, Guerra CA, Elyazar IRF, Johnston GL, et al. A new world malaria map: Plasmodium falciparum endemicity in 2010. Malar J. 2011;10:378.
Wong J, Hamel MJ, Drakeley CJ, Kariuki S, Shi YP, Lal AA, et al. Serological markers for monitoring historical changes in malaria transmission intensity in a highly endemic region of Western Kenya, 1994–2009. Malar J. 2014;13:451.
Dendukuri N, Rahme E, Bélisle P, Joseph L. Bayesian sample size determination for prevalence and diagnostic test studies in the absence of a gold standard test. Biometrics. 2004;60:388–97.
De Santis F. Using historical data for Bayesian sample size determination. J R Statist Soc A. 2007;170:95–113.
Zeukeng F, Tchinda VHM, Bigoga JD, Seumen CHT, Ndzi ES, Abonweh G, et al. Co-infections of malaria and geohelminthiasis in two rural communities of Nkassomo and Vian in the Mfou health district, Cameroon. PLoS Negl Trop Dis. 2014;8:e3236.
Bosman P, Stassijns J, Nackers F, Canier L, Kim N, Khim S, et al. Plasmodium prevalence and artemisinin-resistant falciparum malaria in Preah Vihear Province, Cambodia: a cross-sectional population-based study. Malar J. 2014;13:394.
Drakeley CJ, Akim NI, Sauerwein RW, Greenwood BM, Targett GA. Estimates of the infectious reservoir of Plasmodium falciparum malaria in the Gambia and in Tanzania. Trans R Soc Trop Med Hyg. 2000;94:472–6.
Maiga B, Dolo A, Touré O, Dara V, Tapily A, Campino S, et al. Human candidate polymorphisms in sympatric ethnic groups differing in malaria susceptibility in Mali. PLoS One. 2013;8:e75675.
Stevenson JC, Stresman GH, Gitonga CW, Gillig J, Owaga C, Marube E, et al. Reliability of school surveys in estimating geographic variation in malaria transmission in the Western Kenyan highlands. PLoS One. 2013;8:e77641.
Cochran WG. Sampling Techniques. 3rd ed. New York: John Wiley & Sons; 1977.
Baum E, Badu K, Molina DM, Liang X, Felgner PL, Yan G. Protein microarray analysis of antibody responses to Plasmodium falciparum in western Kenyan highland sites with differing transmission levels. PLoS One. 2013;8:e82246.
Ondigo BN, Hodges JS, Ireland KF, Magak NG, Lanar DE, Dutta S, et al. Estimation of recent and long-term malaria transmission in a population by antibody testing to multiple Plasmodium falciparum antigens. J Infect Dis. 2014;210:1123–32.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. | CommonCrawl |
\begin{document}
\twocolumn[ \icmltitle{Relative Upper Confidence Bound for the\\ $K$-Armed Dueling Bandit Problem}
\icmlauthor{Masrour Zoghi}{[email protected]} \icmladdress{ISLA, University of Amsterdam, The Netherlands} \icmlauthor{Shimon Whiteson}{[email protected]} \icmladdress{ISLA, University of Amsterdam, The Netherlands} \icmlauthor{Remi Munos}{[email protected]} \icmladdress{INRIA Lille - Nord Europe, Villeneuve d'Ascq, France} \icmlauthor{Maarten de Rijke}{[email protected]} \icmladdress{ISLA, University of Amsterdam, The Netherlands}
\icmlkeywords{dueling bandits; Thompson sampling}
\vskip 0.3in ]
\begin{abstract} This paper proposes a new method for the \emph{$K$-armed dueling bandit problem}, a variation on the regular $K$-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using optimistic estimates of the pairwise probabilities to select a potential champion and then applying Upper Confidence Bound relative to that champion. We prove a finite-time regret bound of order $\mathcal O(\log t)$. In addition, our empirical results using real data from an information retrieval application show that it greatly outperforms the state of the art. \end{abstract}
\section{Introduction} \label{sec:introduction}
In this paper, we propose and analyze a new algorithm, called Relative Upper Confidence Bound (RUCB), for the \emph{$K$-armed dueling bandit problem} \citep{yue12:k-armed}, a variation on the $K$-armed bandit problem, where the feedback comes in the form of pairwise preferences. We assess the performance of this algorithm using one of the main current applications of the $K$-armed dueling bandit problem, \emph{ranker evaluation} \citep{joachims2002:optimizing,YueJoachims:2011,hofmann:irj13}, which is used in information retrieval, ad placement and recommender systems, among others.
The $K$-armed dueling bandit problem is part of the general framework of \emph{preference learning} \citep{furnkranz2010,furnkranz2012towards}, where the goal is to learn, not from real-valued feedback, but from \emph{relative feedback}, which specifies only which of two alternatives is preferred. Developing effective preference learning methods is important for dealing with domains in which feedback is naturally qualitative (e.g., because it is provided by a human) and specifying real-valued feedback instead would be arbitrary or inefficient \citep{furnkranz2012towards}.
Other algorithms proposed for this problem are Interleaved Filter (IF) \citep{yue12:k-armed}, Beat the Mean (BTM) \citep{YueJoachims:2011}, and SAVAGE \cite{Urvoy:2013}. All of these methods were designed for the \emph{finite-horizon} setting, in which the algorithm requires as input the \emph{exploration horizon}, $T$, the time by which the algorithm needs to produce the best arm. The algorithm is then judged based upon either the \emph{accuracy} of the returned best arm or the \emph{regret} accumulated in the exploration phase.\footnote{These terms are formalized in Section \ref{sec:problemsetting}.} All three of these algorithms use the exploration horizon to set their internal parameters, so for each $T$, there is a separate algorithm $\textup{IF}_T$, $\textup{BTM}_T$ and $\textup{SAVAGE}_T$. By contrast, RUCB does not require this input, making it more useful in practice, since a good exploration horizon is often difficult to guess. Nonetheless, RUCB outperforms these algorithms in terms of the accuracy and regret metrics used in the finite-horizon setting.
The main idea of RUCB is to maintain optimistic estimates of the probabilities of all possible pairwise outcomes, and (1)~use these estimates to select a potential champion, which is an arm that has a chance of being the best arm, and (2)~select an arm to compare to this potential champion by performing regular Upper Confidence Bound \citep{auer2002ucb} relative to it.
We prove a finite-time high-probability bound of $\mathcal O(\log t)$ on the cumulative regret of RUCB, from which we deduce a bound on the expected cumulative regret. These bounds rely on substantially less restrictive assumptions on the $K$-armed dueling bandit problem than IF and BTM and have better multiplicative constants than those of SAVAGE. Furthermore, our bounds are the first explicitly non-asymptotic results for the $K$-armed dueling bandit problem.
More importantly, our result holds for \emph{all} time steps. By contrast, given an exploration horizon $T$, the results for IF, BTM and SAVAGE bound only the regret accumulated by $\textup{IF}_T$, $\textup{BTM}_T$ and $\textup{SAVAGE}_T$ in the first $T$ time steps.
Finally, we evaluate our method empirically using real data from an information retrieval application. The results show that RUCB can learn quickly and effectively and greatly outperforms BTM and SAVAGE.
The main contributions of this paper are as follows:
\begin{itemize}[leftmargin=*] \item A novel algorithm for the $K$-armed dueling bandit problem that is more broadly applicable than existing algorithms,
\item More comprehensive theoretical results that make less restrictive assumptions than those of IF and BTM, have better multiplicative constants than the results of SAVAGE, and apply to all time steps, and
\item Experimental results, based on a real-world application, demonstrating the superior performance of our algorithm compared to existing methods. \end{itemize}
\section{Problem Setting} \label{sec:problemsetting}
The \emph{$K$-armed dueling bandit} problem \cite{yue12:k-armed} is a modification of the \emph{$K$-armed bandit} problem \cite{auer2002ucb}: the latter considers $K$ arms $\{a_1,\ldots,a_K\}$ and at each \emph{time-step}, an arm $a_i$ can be \emph{pulled}, generating a \emph{reward} drawn from an unknown stationary distribution with expected value $\mu_i$. The $K$-armed \emph{dueling} bandit problem is a variation, where instead of pulling a single arm, we choose a pair $(a_i,a_j)$ and receive one of the two as the better choice, with the probability of $a_i$ being picked equal to a constant $p_{ij}$ and that of $a_j$ equal to $p_{ji} = 1 - p_{ij}$. We define the \emph{preference matrix} $\myvec{P}=\left[p_{ij}\right]$, whose $ij$ entry is $p_{ij}$.
In this paper, we assume that there exists a \emph{Condorcet winner} \cite{Urvoy:2013}: an arm, which without loss of generality we label $a_1$, such that $p_{1i} > \frac{1}{2}$ for all $i>1$. Given a Condorcet winner, we define \emph{regret} for each time-step as follows \cite{yue12:k-armed}: if arms $a_i$ and $a_j$ were chosen for comparison at time $t$, then regret at that time is set to be $r_t := \frac{\Delta_{1i}+\Delta_{1j}}{2}$, with $\Delta_{k} := p_{1k}-\frac{1}{2}$ for all $k \in \{1,\ldots,K\}$. Thus, regret measures the average advantage that the Condorcet winner has over the two arms being compared against each other. Given our assumption on the probabilities $p_{1k}$, this implies that $r_t=0$ if and only if the best arm is compared against itself. We define \emph{cumulative regret up to time} $T$ to be $R_T = \sum_{t=1}^T r_t$.
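As a concrete illustration (not taken from the paper), the per-step regret can be computed directly from a preference matrix; the matrix below is hypothetical, with the arm at index 0 playing the role of the Condorcet winner $a_1$:

```python
import numpy as np

# Hypothetical preference matrix for K = 3 arms; index 0 is the
# Condorcet winner since P[0, j] > 1/2 for all j != 0.
P = np.array([[0.5, 0.6, 0.7],
              [0.4, 0.5, 0.6],
              [0.3, 0.4, 0.5]])
delta = P[0] - 0.5                        # Delta_k = p_{1k} - 1/2

def dueling_regret(i, j):
    """Regret of comparing arms i and j: (Delta_i + Delta_j) / 2."""
    return (delta[i] + delta[j]) / 2.0
```

Comparing the Condorcet winner against itself gives zero regret, while comparing the two suboptimal arms gives $(0.1 + 0.2)/2 = 0.15$ for this matrix.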
The Condorcet winner is different in a subtle but important way from the \emph{Borda winner} \cite{Urvoy:2013}, which is an arm $a_b$ that satisfies $\sum_j p_{bj} \geq \sum_j p_{ij}$, for all $i=1,\ldots,K$. In other words, when averaged across all other arms, the Borda winner is the arm with the highest probability of winning a given comparison. In the $K$-armed dueling bandit problem, the Condorcet winner is sought rather than the Borda winner, for two reasons. First, in many applications, including the ranker evaluation problem addressed in our experiments, the eventual goal is to adapt to the preferences of the users of the system. Given a choice between the Borda and Condorcet winners, those users prefer the latter in a direct comparison, so it is immaterial how these two arms fare against the others. Second, in settings where the Borda winner is more appropriate, no special methods are required: one can simply solve the $K$-armed bandit algorithm with arms $\{a_1,\ldots,a_K\}$, where pulling $a_i$ means choosing an index $j \in \{1,\ldots,K\}$ randomly and comparing $a_i$ against $a_j$. Thus, research on the $K$-armed dueling bandit problem focuses on finding the Condorcet winner, for which special methods are required to avoid mistakenly choosing the Borda winner.
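The distinction shows up already in a small numerical example (hypothetical, not from the paper): below, arm 0 beats every other arm, but only barely, so it is the Condorcet winner, while arm 1's lopsided wins against arm 2 make it the Borda winner.

```python
import numpy as np

# Hypothetical 3-arm preference matrix: arm 0 narrowly beats both
# rivals, while arm 1 crushes arm 2.
P = np.array([[0.50, 0.51, 0.51],
              [0.49, 0.50, 0.90],
              [0.49, 0.10, 0.50]])

condorcet = [i for i in range(3)
             if all(P[i, j] > 0.5 for j in range(3) if j != i)]
borda = int(np.argmax(P.sum(axis=1)))     # highest average win probability
```

Here `condorcet == [0]` while `borda == 1`, so an algorithm chasing average win rates would settle on an arm that loses a head-to-head comparison against arm 0.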
The goal of a bandit algorithm can be formalized in several ways. In this paper, we consider two standard settings:
\begin{enumerate}[leftmargin=*] \item \emph{The finite-horizon setting}: In this setting, the algorithm is told in advance the exploration \emph{horizon}, $T$, i.e., the number of time-steps that the evaluation process is given to explore before it has to produce a single arm as the best, which will be exploited thenceforth. In this setting, the algorithm can be assessed on its \emph{accuracy}, the probability that a given run of the algorithm reports the Condorcet winner as the best arm \cite{Urvoy:2013}, which is related to expected \emph{simple regret}: the regret associated with the algorithm's choice of the best arm, i.e., $r_{T+1}$ \cite{Bubeck:2009}. Another measure of success in this setting is the amount of regret accumulated during the exploration phase, as formulated by the \emph{explore-then-exploit} problem formulation \cite{yue12:k-armed}.
\item \emph{The horizonless setting}: In this setting, no horizon is specified and the evaluation process continues indefinitely. Thus, it is no longer sufficient for the algorithm to maximize accuracy or minimize regret after a single horizon is reached. Instead, it must minimize regret across \emph{all} horizons by rapidly decreasing the frequency of comparisons involving suboptimal arms, particularly those that fare worse in comparison to the best arm. This goal can be formulated as minimizing the cumulative regret over time, rather than with respect to a fixed horizon \cite{lai85:bandit-lb}. \end{enumerate}
As we describe in Section \ref{sec:relatedwork}, all existing $K$-armed dueling bandit methods target the finite-horizon setting. However, we argue that the horizonless setting is more relevant in practice for the following reason: finite-horizon methods require a horizon as input and often behave differently for different horizons. This poses a practical problem because it is typically difficult to know in advance how many comparisons are required to determine the best arm with confidence and thus how to set the horizon. If the horizon is set too long, the algorithm is too exploratory, increasing the number of evaluations needed to find the best arm. If it is set too short, the best arm remains unknown when the horizon is reached and the algorithm must be restarted with a longer horizon.
Moreover, any algorithm that can deal with the horizonless setting can easily be modified to address the finite-horizon setting by simply stopping the algorithm when it reaches the horizon and returning the best arm. By contrast, for the reverse direction, one would have to resort to the ``doubling trick'' \citep[Section 2.3]{Cesa-Bianchi:2006}, which leads to substantially worse regret results: this is because all of the upper bounds proven for methods addressing the finite-horizon setting so far are in $\mathcal O(\log T)$ and applying the doubling trick to such results would lead to regret bounds of order $(\log T)^2$, with the extra log factor coming from the number of partitions.
To the best of our knowledge, RUCB is the first $K$-armed dueling bandit algorithm that can function in the horizonless setting without resorting to the doubling trick. We show in Section \ref{sec:algorithm} how it can be adapted to the finite-horizon setting.
\section{Related Work} \label{sec:relatedwork}
In this section, we briefly survey existing methods for the $K$-armed dueling bandit problem.
The first method for the $K$-armed dueling bandit problem is \emph{interleaved filter} (IF) \cite{yue12:k-armed}, which was designed for a finite-horizon scenario and which proceeds by picking a \emph{reference} arm to compare against the rest and using it to eliminate other arms, until the reference arm is eliminated by a better arm, in which case the latter becomes the reference arm and the algorithm continues as before. The algorithm terminates either when all other arms are eliminated or if the exploration horizon $T$ is reached.
More recently, the \emph{beat the mean} (BTM) algorithm has been shown to outperform IF \cite{YueJoachims:2011}, while imposing less restrictive assumptions on the $K$-armed dueling bandit problem. BTM focuses exploration on the arms that have been involved in the fewest comparisons. When it determines that an arm fares on average too poorly in comparison to the remaining arms, it removes it from consideration. More precisely, BTM considers the performance of each arm against the \emph{mean arm} by averaging the arm's scores against all other arms and uses these estimates to decide which arm should be eliminated.
Both IF and BTM require the comparison probabilities $p_{ij}$ to satisfy certain conditions that are difficult to verify without specific knowledge about the dueling bandit problem at hand and, moreover, are often violated in practice (see the supplementary material for a more thorough discussion and analysis of these assumptions). Under these conditions, theoretical results have been proven for IF and BTM in \cite{yue12:k-armed} and \cite{YueJoachims:2011}. More precisely, both algorithms take the exploration horizon $T$ as an input and so for each $T$, there are algorithms $\textup{IF}_T$ and $\textup{BTM}_T$; the results then state the following: for large $T$, in the case of $\textup{IF}_T$, we have the expected regret bound
\[ \mathbb{E}\left[R^{\textup{IF}_T}_T\right] \leq C \frac{K \log T}{\min_{j=2}^K \Delta_j}, \] and, in the case of $\textup{BTM}_T$, the high probability regret bound
\[ R^{\textup{BTM}_T}_T \leq C^{'} \frac{\gamma^7 K \log T}{\min_{j=2}^K \Delta_j} \textup{ with high probability,} \]
where arm $a_1$ is assumed to be the best arm, and we define $\Delta_{j} := p_{1j}-\frac{1}{2}$, and $C$ and $C^{'}$ are constants independent of the specific dueling bandit problem.
The first bound matches a lower bound proven in \citep[Theorem 4]{yue12:k-armed}. However, as pointed out in \cite{YueJoachims:2011}, this result holds for a very restrictive class of $K$-armed dueling bandit problems. In an attempt to remedy this issue, the second bound was proven for BTM, which includes a relaxation parameter $\gamma$ that allows for a broader class of problems, as discussed in the supplementary material. The difficulty with this result is that the parameter $\gamma$, which depends on the probabilities $p_{ij}$ and must be passed to the algorithm, can be very large. Since it is raised to the power of $7$, this makes the bound very loose. For instance, in the three-ranker evaluation experiments discussed in Section \ref{sec:experiments}, the values for $\gamma$ are $4.85$, $11.6$ and $47.3$ for the $16$-, $32$- and $64$-armed examples.
In contrast to the above limitations and loose bounds, in Section \ref{sec:theory} we provide \emph{explicit} bounds on the regret accumulated by RUCB that do not depend on $\gamma$ and require only the existence of a Condorcet winner for their validity, which makes them much more broadly applicable.
Sensitivity Analysis of VAriables for Generic Exploration (SAVAGE) \cite{Urvoy:2013} is a recently proposed algorithm that outperforms both IF and BTM by a wide margin when the number of arms is of moderate size. Moreover, one version of SAVAGE, called \emph{Condorcet SAVAGE}, makes the Condorcet assumption and performed the best experimentally \cite{Urvoy:2013}. Condorcet SAVAGE compares pairs of arms uniformly randomly until there exists a pair for which one of the arms beats another by a wide margin, in which case the loser is removed from the pool of arms under consideration. We show in this paper that our proposed algorithm for ranker evaluation substantially outperforms Condorcet SAVAGE.
The theoretical result proven for Condorcet SAVAGE has the following form \citep[Theorem 3]{Urvoy:2013}. First, let us assume that $a_1$ is the Condorcet winner and let $\widehat{T}_{\textup{CSAVAGE}_T}$ denote the number of iterations the Condorcet SAVAGE algorithm with exploration horizon $T$ requires before terminating and returning the best arm; then, given $\delta > 0$, with probability $1-\delta$, we have for large $T$
\begin{equation*} \widehat{T}_{\textup{CSAVAGE}_T} \leq C^{''} \sum_{j=1}^{K-1} \frac{j \cdot \log\left(\frac{KT}{\delta}\right)}{\Delta_{j+1}^2}, \end{equation*}
with the indices $j$ arranged such that $\Delta_2 \leq \cdots \leq \Delta_K$ and $\Delta_j = p_{1j}-\frac{1}{2}$ as before, and $C^{''}$ a problem independent constant. This bound is very similar in spirit to our high probability result, with the important distinction that, unlike the above bound, the multiplicative factors in our result (i.e., the $D_{ij}$ in Theorem \ref{thm:HighProbBound} below) do not depend on $\delta$. Moreover, in \citep[Appendix B.1]{Urvoy:2013}, the authors show that for large $T$ we have the following expected regret bound:
\[ \mathbb{E}\left[R^{\textup{CSAVAGE}_T}_T\right] \leq C^{''} \sum_{j=2}^K \frac{j \cdot \log\left(KT^2\right)}{\Delta_{j}^2} + 1. \]
This is similar to our expected regret bound in Theorem \ref{thm:ExpBound}, although for difficult problems where the $\Delta_j$ are small, Theorem \ref{thm:ExpBound} yields a tighter bound due to the presence of the $\Delta_j$ in the numerator of the second summand.
An important advantage that our result has over the results reviewed here is an explicit expression for the additive constant, which was left out of the analyses of IF, BTM and SAVAGE.
Finally, note that all of the above results bound only $R_T$, where $T$ is the predetermined exploration horizon, since IF, BTM and SAVAGE were designed for the finite-horizon setting. By contrast, in Section \ref{sec:theory}, we bound the cumulative regret of each version of our algorithm for \emph{all} time steps.
\section{Method} \label{sec:algorithm}
We now introduce Relative Upper Confidence Bound (RUCB), which is applicable to any $K$-armed dueling bandit problem with a Condorcet winner.
\begin{algorithm}[h] \caption{Relative Upper Confidence Bound} \label{alg:RUCB} \begin{algorithmic}[1] { \REQUIRE $\alpha > \frac{1}{2}$, $T \in \{1,2,\ldots\} \cup \{\infty\}$ \STATE $\myvec{W} = \left[w_{ij}\right] \gets \mathbf{0}_{K \times K} \; $ // 2D array of wins: $w_{ij}$ is the number of times $a_i$ beat $a_j$ \FOR{$t=1,\dots,T$}
\STATE $\myvec{U} := \left[u_{ij}\right] = \frac{\myvec{W}}{\myvec{W}+\myvec{W}^T} + \sqrt{\frac{\alpha\ln t}{\myvec{W}+\myvec{W}^T}}$ \; // All operations are element-wise; $\frac{x}{0}:=1$ for any $x$.
\STATE $u_{ii} \gets \frac{1}{2}$ for each $i=1,\ldots,K$.
\STATE Pick any $c$ satisfying $u_{cj} \geq \frac{1}{2}$ for all $j$. If no such $c$, pick $c$ randomly from $\{1,\ldots,K\}$.
\STATE $d \gets \displaystyle\argmax_j u_{jc}$
\STATE Compare arms $a_c$ and $a_d$ and increment $w_{cd}$ or $w_{dc}$ depending on which arm wins. \ENDFOR
\ENSURE An arm $a_c$ that beats the most arms, i.e., $c$ with the largest count~$\# \left\{ j | \frac{w_{cj}}{w_{cj}+w_{jc}} > \frac{1}{2} \right\}$. } \end{algorithmic} \end{algorithm}
In each time-step, RUCB, shown in Algorithm \ref{alg:RUCB}, goes through the following three stages:
(1) RUCB puts all arms in a pool of potential champions. Then, it compares each arm $a_i$ against all other arms optimistically: for all $i \neq j$, we compute the upper bound $u_{ij}(t) = \mu_{ij}(t) + c_{ij}(t)$, where $\mu_{ij}(t)$ is the frequentist estimate of $p_{ij}$ at time $t$ and $c_{ij}(t)$ is an optimism bonus that increases with $t$ and decreases with the number of comparisons between $i$ and $j$ (Line 3). If we have $u_{ij} < \frac{1}{2}$ for any $j$, then $a_i$ is removed from the pool. Next, a champion arm $a_c$ is chosen randomly from the remaining potential champions (Line 5).
(2) Regular UCB is performed using $a_c$ as a benchmark (Line 6), i.e., UCB is performed using the values $u_{1c},\ldots,u_{Kc}$. Specifically, we select the arm $d = \argmax_j u_{jc}$. When $c \neq j$, $u_{jc}$ is defined as above. When $c = j$, since $p_{cc}=\frac{1}{2}$, we set $u_{cc}=\frac{1}{2}$ (Line 4).
(3) The pair $(a_c,a_d)$ are compared and the score sheet is updated as appropriate (Line 7).
Note that in stage (1) the comparisons are based on $u_{cj}$, i.e., $a_c$ is compared optimistically to the other arms, making it easier for it to become the champion. By contrast, in stage (2) the comparisons are based on $u_{jc}$, i.e., $a_c$ is compared to the other arms pessimistically, making it more difficult for $a_c$ to be compared against itself. This is important because comparing an arm against itself yields no information. Thus, RUCB strives to avoid auto-comparisons until there is great certainty that $a_c$ is indeed the Condorcet winner.
Eventually, as more comparisons are conducted, the estimates $\mu_{1j}$ tend to concentrate above $\frac{1}{2}$ and the optimism bonuses $c_{1j}(t)$ will become small. Thus, both stages of the algorithm will increasingly select $a_1$, i.e., $a_c = a_d = a_1$. Since comparing $a_1$ to itself is optimal, $r_t$ declines over time.
Note that Algorithm \ref{alg:RUCB} is a finite-horizon algorithm if $T < \infty$ and a horizonless one if $T = \infty$, in which case the for loop never terminates.
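The three stages above can be condensed into a short simulation sketch (a minimal, illustrative Python sketch, not the reference implementation: the $3\times 3$ preference matrix below is made up, and untried pairs are simply given a maximally optimistic value in place of the $\frac{x}{0}:=1$ convention of Line 3):

```python
import numpy as np

def rucb_step(W, t, alpha, rng):
    """One RUCB iteration, given the wins matrix W (W[i, j] = times a_i beat a_j)."""
    K = W.shape[0]
    N = W + W.T  # comparisons per pair
    with np.errstate(divide="ignore", invalid="ignore"):
        U = W / N + np.sqrt(alpha * np.log(t) / N)
    U[N == 0] = np.inf         # untried pairs: maximally optimistic
    np.fill_diagonal(U, 0.5)   # u_ii = 1/2, since p_ii = 1/2 (Line 4)
    # Stage 1: arms with u_ij >= 1/2 for all j stay in the pool of potential champions.
    champs = np.flatnonzero((U >= 0.5).all(axis=1))
    c = int(rng.choice(champs)) if champs.size else int(rng.integers(K))
    # Stage 2: regular UCB against the benchmark a_c.
    d = int(np.argmax(U[:, c]))
    return c, d

# Made-up 3-armed problem; arm 0 is the Condorcet winner.
P = np.array([[0.5, 0.8, 0.9],
              [0.2, 0.5, 0.7],
              [0.1, 0.3, 0.5]])
rng = np.random.default_rng(0)
W = np.zeros((3, 3))
for t in range(1, 3001):
    c, d = rucb_step(W, t, alpha=0.51, rng=rng)
    winner, loser = (c, d) if rng.random() < P[c, d] else (d, c)
    W[winner, loser] += 1
```

Over time both the champion and its challenger concentrate on the Condorcet winner, so most later comparisons pit $a_1$ against itself and incur no regret.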
\section{Theoretical Results} \label{sec:theory}
In this section, we prove finite-time high-probability and expected regret bounds for RUCB. We first state Lemma \ref{lem:HighProbBound} and use it to prove a high-probability bound in Theorem \ref{thm:HighProbBound}, from which we deduce an expected regret bound in Theorem \ref{thm:ExpBound}.
To simplify notation, we assume without loss of generality that $a_1$ is the optimal arm in the following. Moreover, given any $K$-armed dueling bandit algorithm, we define $w_{ij}(t)$ to be the number of times arm $a_i$ has beaten $a_j$ in the first $t$ iterations of the algorithm. We also define $u_{ij}(t) := \frac{w_{ij}(t)}{w_{ij}(t)+w_{ji}(t)} + \sqrt{\frac{\alpha\ln t}{w_{ij}(t)+w_{ji}(t)}}$, for any given $\alpha > 0$, and set $l_{ij}(t) := 1-u_{ji}(t)$. Moreover, for any $\delta > 0$, define $C(\delta) := \left(\frac{(4\alpha-1)K^2}{(2\alpha-1)\delta}\right)^{\frac{1}{2\alpha-1}}$.
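As an illustration, these quantities can be computed directly from win counts (a minimal sketch with made-up counts; note that $l_{ij}(t) = 1-u_{ji}(t)$ works out to the empirical mean minus the same optimism bonus):

```python
import math

def C(delta, K, alpha):
    """The constant C(delta) defined above."""
    return ((4 * alpha - 1) * K**2 / ((2 * alpha - 1) * delta)) ** (1 / (2 * alpha - 1))

def confidence_interval(w_ij, w_ji, t, alpha):
    """[l_ij(t), u_ij(t)] for a pair compared w_ij + w_ji > 0 times."""
    n = w_ij + w_ji
    mu = w_ij / n                            # frequentist estimate of p_ij
    c = math.sqrt(alpha * math.log(t) / n)   # optimism bonus
    # l_ij(t) = 1 - u_ji(t) = 1 - (w_ji / n + c) = mu - c
    return mu - c, mu + c
```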
\begin{lemma}\label{lem:HighProbBound} Let $\myvec{P} := \left[ p_{ij} \right]$ be the preference matrix of a $K$-armed dueling bandit problem with arms $\{a_1,\ldots,a_K\}$, satisfying $p_{1j} > \frac{1}{2}$ for all $j > 1$ (i.e. $a_1$ is the Condorcet winner). Then, for any dueling bandit algorithm and any $\alpha > \frac{1}{2}$ and $\delta > 0$, we have
\begin{equation*} P\Big( \forall\,t>C(\delta),i,j,\; p_{ij} \in [l_{ij}(t),u_{ij}(t)] \Big) > 1-\delta. \end{equation*} \end{lemma}
\begin{proof} See the supplementary material. \end{proof}
The idea behind this lemma is depicted in Figure \ref{fig:lemma1}, which illustrates the two phenomena that make it possible: first, as long as arms $a_i$ and $a_j$ are not compared against each other, the interval $[l_{ij}(t),u_{ij}(t)]$ grows in length as $\sqrt{\log t}$, so it eventually contains $p_{ij}$; second, as the number of comparisons between $a_i$ and $a_j$ increases, the estimated mean $\mu_{ij}$ approaches $p_{ij}$, increasing the probability that the interval $[l_{ij}(t),u_{ij}(t)]$ contains $p_{ij}$.
\begin{figure}\label{fig:lemma1}
\end{figure}
Let us now turn to our high probability bound:
\begin{theorem}\label{thm:HighProbBound} Given a preference matrix $\myvec{P} = [p_{ij}]$ and $\delta > 0$ and $\alpha > \frac{1}{2}$, define $C(\delta) := \left(\frac{(4\alpha-1)K^2}{(2\alpha-1)\delta}\right)^{\frac{1}{2\alpha-1}}$ and $D_{ij} := \frac{4\alpha}{\min\{\Delta_i^2,\Delta_j^2\}}$ for each $i,j = 1,\ldots,K$ with $i \neq j$, where $\Delta_i := \frac{1}{2}-p_{i1}$, and set $D_{ii} = 0$ for all $i$. Then, if we apply Algorithm \ref{alg:RUCB} to the $K$-armed dueling bandit problem defined by $\myvec{P}$, given any pair $(i,j) \neq (1,1)$, the number of comparisons between arms $a_i$ and $a_j$ performed up to time $t$, denoted by $N_{ij}(t)$, satisfies
\begin{equation} \label{bnd:HighProbCount} P\bigg(\forall\,t,\; N_{ij}(t) \leq \max\Big\{C(\delta),D_{ij}\ln t\Big\} \bigg) > 1-\delta. \end{equation}
Moreover, we have the following high probability bound for the regret accrued by the algorithm:
\begin{equation}\label{bnd:HighProbReg} P\bigg(\forall\,t,\; R_t \leq C(\delta)\Delta^*+\sum_{i>j} D_{ij} \Delta_{ij} \ln t \bigg) > 1-\delta, \end{equation}
where $\Delta^* := \max_i \Delta_i$ and $\Delta_{ij} := \frac{\Delta_i+\Delta_j}{2}$, while $R_t$ is the cumulative regret as defined in Section \ref{sec:problemsetting}. \end{theorem}
\begin{proof}
Given Lemma \ref{lem:HighProbBound}, we know with probability $1-\delta$ that $p_{ij} \in [l_{ij}(t),u_{ij}(t)]$ for all $t > C(\delta)$. Let us first deal with the easy case when $i = j \neq 1$: when $t > C(\delta)$ holds, $a_i$ cannot be played against itself, since if we get $c=i$ in Algorithm \ref{alg:RUCB}, then by Lemma \ref{lem:HighProbBound} and the fact that $a_1$ is the Condorcet winner we have \[ u_{ii}(t) = \frac{1}{2} < p_{1i} \leq u_{1i}(t), \] and so $d \neq i$.
Now, let us assume that distinct arms $a_i$ and $a_j$ have been compared against each other more than $D_{ij}\ln t$ times and that $t > C(\delta)$. If $s$ is the last time $a_i$ and $a_j$ were compared against each other, we must have
\begin{align}\label{ineq:LUCB1} & u_{ij}(s)-l_{ij}(s) = 2\sqrt{\frac{\alpha\ln s}{N_{ij}(t)}} \\ & \qquad \leq 2\sqrt{\frac{\alpha\ln t}{N_{ij}(t)}} < 2\sqrt{\frac{\alpha\ln t}{\frac{4\alpha\ln t}{\min\{\Delta_i^2,\Delta_j^2\}}}} = \min\{\Delta_i,\Delta_j\}. \nonumber \end{align}
On the other hand, for $a_i$ to have been compared against $a_j$ at time $s$, one of the following two scenarios must have happened:
\begin{figure}
\caption{An illustration of the proof of Theorem \ref{thm:HighProbBound}. The figure shows an example of the internal state of RUCB at time $s$. The height of the dot in the block in row $a_m$ and column $a_n$ represents the comparison probability $p_{mn}$, while the interval, where present, represents the confidence interval $[l_{mn},u_{mn}]$: we have only included them in the $(a_i,a_j)$ and the $(a_j,a_i)$ blocks of the figure because those are the ones that are discussed in the proof. Moreover, in those blocks, we have included the outcomes of two different runs: one drawn to the left of the dots representing $p_{ij}$ and $p_{ji}$, and the other to the right (the horizontal axis in these plots has no other significance). These two outcomes are included to address the dichotomy present in the proof. Note that for a given run, we must have $[l_{ji}(s),u_{ji}(s)] = [1-u_{ij}(s),1-l_{ij}(s)]$ for any time $s$, hence the symmetry present in this figure.}
\label{fig:theorem2}
\end{figure}
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=0pt,partopsep=0pt]
\item[I.] In Algorithm \ref{alg:RUCB}, we had $c=i$ and $d=j$, in which case both of the following inequalities must hold:
\begin{itemize}[leftmargin=*]
\item[a.] $u_{ij}(s) \geq \frac{1}{2}$, since otherwise $c$ could not have been set to $i$ by Line 5 of Algorithm \ref{alg:RUCB}, and
\item[b.] $l_{ij}(s) = 1-u_{ji}(s) \leq 1-p_{1i} = p_{i1}$, since we know that $p_{1i} \leq u_{1i}(s)$, by Lemma \ref{lem:HighProbBound} and the fact that $t>C(\delta)$, and for $d=j$ to be satisfied, we must have $u_{1i}(s) \leq u_{ji}(s)$ by Line 6 of Algorithm \ref{alg:RUCB}.
\end{itemize}
From these two inequalities, we can conclude
\begin{equation}\label{ineq:LUCB2a} u_{ij}(s) - l_{ij}(s) \geq \frac{1}{2}-p_{i1} = \Delta_i. \end{equation}
This inequality is illustrated using the lower right confidence interval in the $(a_i,a_j)$ block of Figure \ref{fig:theorem2}, where the interval shows $[l_{ij}(s),u_{ij}(s)]$ and the distance between the dotted lines is $\frac{1}{2}-p_{i1}$.
\item[II.] In Algorithm \ref{alg:RUCB}, we had $c=j$ and $d=i$, in which case swapping $i$ and $j$ in the above argument gives
\begin{equation}\label{ineq:LUCB2b} u_{ji}(s) - l_{ji}(s) \geq \frac{1}{2}-p_{j1} = \Delta_j. \end{equation}
Similarly, this is illustrated using the lower left confidence interval in the $(a_j,a_i)$ block of Figure \ref{fig:theorem2}, where the interval shows $[l_{ji}(s),u_{ji}(s)]$ and the distance between the dotted lines is $\frac{1}{2}-p_{j1}$. \end{itemize} Putting \eqref{ineq:LUCB2a} and \eqref{ineq:LUCB2b} together with \eqref{ineq:LUCB1} yields a contradiction, so with probability $1-\delta$ we cannot have $N_{ij}$ be larger than both $C(\delta)$ and $D_{ij}\ln t$.
This gives us \eqref{bnd:HighProbCount}, from which \eqref{bnd:HighProbReg} follows by allowing for the largest regret, $\Delta^*$, to occur in each of the first $C(\delta)$ steps of the algorithm and adding the regret accrued by $D_{ij}\ln t$ comparisons between $a_i$ and $a_j$. \qedhere
\end{proof}
\begin{figure}
\caption{A schematic graph illustrating the proof of Theorem \ref{thm:ExpBound}. Note that the expression for $H_t(q)$ is extracted from \eqref{bnd:HighProbReg}, which also implies that $H_t^{-1}$ is necessarily below $F_{R_t}$: formulated in terms of CDFs, \eqref{bnd:HighProbReg} states that $F_{R_t}\left(H_t(q_0)\right) > q_0 = H_t^{-1}\left(H_t(q_0)\right)$, where $q_0=1-\delta_0$ is a quantile. From this, we can conclude that $F_{R_t}(r) > H_t^{-1}(r)$ for all $r$.}
\label{fig:theorem3}
\end{figure}
Next, we prove our expected regret bound:
\begin{theorem}\label{thm:ExpBound} Given $\alpha > 1$, the expected regret accumulated by RUCB after $t$ iterations is bounded by
\begin{align}\label{bnd:ExpReg} \mathbb{E}[R_t] & \leq \Delta^*\left(\frac{(4\alpha-1)K^2}{2\alpha-1}\right)^{\frac{1}{2\alpha-1}} \frac{2\alpha-1}{2\alpha-2} \nonumber \\
& \qquad +\sum_{i>j}2\alpha\frac{\Delta_i+\Delta_j}{\min\{\Delta_i^2,\Delta_j^2\}}\ln t. \end{align}
\end{theorem}
\begin{proof} We can obtain the bound in \eqref{bnd:ExpReg} from \eqref{bnd:HighProbReg} by integrating with respect to $\delta$ from $0$ to $1$. This is because given any one-dimensional random variable $X$ with CDF $F_X$, we can use the identity $\mathbb{E}[X] = \int_0^1 F_X^{-1}(q)dq$. In our case, $X=R_t$ for a fixed time $t$ and, as illustrated in Figure \ref{fig:theorem3}, we can deduce from \eqref{bnd:HighProbReg} that $F_{R_t}(r) > H_t^{-1}(r)$, which gives the bound \[ F_{R_t}^{-1}(q) < H_t(q) = C(1-q)\Delta^*+\sum_{i>j} D_{ij} \Delta_{ij} \ln t. \]
Now, assume that $\alpha > 1$. To derive \eqref{bnd:ExpReg} from the above inequality, we need to integrate the righthand side, and since only the first term of the bound depends on $q$, that is all we need to integrate. To do so, recall that $C(\delta) := \left(\frac{(4\alpha-1)K^2}{(2\alpha-1)\delta}\right)^{\frac{1}{2\alpha-1}}$, so to simplify notation, we define $L := \left(\frac{(4\alpha-1)K^2}{2\alpha-1}\right)^{\frac{1}{2\alpha-1}}$. Now, we can carry out the integration as follows, beginning by using the substitution $1-q=\delta$, $dq = -d\delta$: \begin{align*} & \int_{q=0}^1 C(1-q) dq = \int_{\delta=1}^0 -C(\delta) d\delta \\ & = \int_0^1 \left(\frac{(4\alpha-1)K^2}{(2\alpha-1)\delta}\right)^{\frac{1}{2\alpha-1}} d\delta = L \int_0^1 \delta^{-\frac{1}{2\alpha-1}}d\delta \\ & = L \left[ \frac{\delta^{1-\frac{1}{2\alpha-1}}}{1-\frac{1}{2\alpha-1}} \right]_0^1 = \left(\frac{(4\alpha-1)K^2}{2\alpha-1}\right)^{\frac{1}{2\alpha-1}}\frac{2\alpha-1}{2\alpha-2}. \qedhere \end{align*} \end{proof}
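The resulting closed form can be sanity-checked against a midpoint-rule approximation of the integral (an independent numerical check, not part of the proof; the values of $\alpha$ and $K$ below are arbitrary):

```python
alpha, K = 1.5, 4
exponent = 1 / (2 * alpha - 1)  # = 1/2 for alpha = 1.5
L = ((4 * alpha - 1) * K**2 / (2 * alpha - 1)) ** exponent

# C(delta) = L * delta**(-exponent); the midpoint rule handles the
# integrable singularity at delta = 0.
n = 1_000_000
numeric = sum(L * ((k + 0.5) / n) ** (-exponent) for k in range(n)) / n
closed_form = L * (2 * alpha - 1) / (2 * alpha - 2)
```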
\begin{remark} \emph{Note that RUCB uses the upper-confidence bounds (Line 3 of Algorithm \ref{alg:RUCB}) introduced in the original version of UCB \cite{auer2002ucb} (up to the $\alpha$ factor). Recently refined upper-confidence bounds (such as UCB-V \cite{audibert2009ucbv} or KL-UCB \cite{cappe2012klucb}) have improved performance for the regular $K$-armed bandit problem. However, in our setting the arm distributions are Bernoulli and the comparison value is 1/2. Thus, since we have $2\Delta_i^2 \leq kl(p_{1,i}, 1/2) \leq 4\Delta_i^2$ (where $kl(a,b)=a\log\frac{a}{b}+(1-a)\log\frac{1-a}{1-b}$ is the KL divergence between Bernoulli distributions with parameters $a$ and $b$), we deduce that using KL-UCB instead of UCB does not improve the leading constant in the logarithmic term of the regret by a numerical factor of more than 2.} \end{remark}
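The bracketing $2\Delta^2 \leq kl(p,\frac{1}{2}) \leq 4\Delta^2$ for $\Delta = p - \frac{1}{2}$ invoked in the remark is easy to verify numerically on a grid (a quick standalone check):

```python
import math

def kl(a, b):
    """KL divergence between Bernoulli(a) and Bernoulli(b)."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

for k in range(1, 100):
    p = 0.5 + 0.005 * k      # p_{1i} in (1/2, 1); Delta_i = p - 1/2
    delta = p - 0.5
    assert 2 * delta**2 <= kl(p, 0.5) <= 4 * delta**2
```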
\section{Experiments} \label{sec:experiments}
\begin{figure*}
\caption{Average cumulative regret and accuracy for 100 runs of BTM, Condorcet SAVAGE and RUCB with $\alpha=0.51$ applied to three $K$-armed dueling bandit problems with $K=16,32,64$. In the top row of plots, both axes use log scales, and the dotted curves signify best and worst regret performances; in the bottom plots, only the x-axis uses a log scale.
}
\label{fig:accuracy-regret}
\end{figure*}
To evaluate RUCB, we apply it to the problem of \emph{ranker evaluation} from the field of \emph{information retrieval} (IR)~\citep{mann:intr08}. A ranker is a function that takes as input a user's search query and ranks the documents in a collection according to their relevance to that query. Ranker evaluation aims to determine which among a set of rankers performs best. One effective way to achieve this is to use \emph{interleaved comparisons}~\citep{radlinski2008:how}, which interleave the documents proposed by two different rankers and present the resulting list to the user, whose resulting click feedback is used to infer a noisy preference for one of the rankers. Given a set of $K$ rankers, the problem of finding the best ranker can then be modeled as a $K$-armed dueling bandit problem, with each arm corresponding to a ranker.
Our experimental setup is built on real IR data, namely the LETOR NP2004 dataset~\citep{letor}.
Using this data set, we create a set of 64 rankers, each corresponding to a ranking feature provided in the data set, e.g., PageRank. The ranker evaluation task thus corresponds to determining which single feature constitutes the best ranker~\citep{hofmann:irj13}.
To compare a pair of rankers, we use \emph{probabilistic interleave} (PI)~\citep{hofmann11:probabilistic}, a recently developed method for interleaved comparisons. To model the user's click behavior on the resulting interleaved lists, we employ a probabilistic user model~\citep{hofmann11:probabilistic,craswell08:experimental} that uses as input the manual labels (classifying documents as relevant or not for given queries) provided with the LETOR NP2004 dataset. Queries are sampled randomly and clicks are generated probabilistically by conditioning on these assessments in a way that resembles the behavior of an actual user \citep{guo09:tailoring,guo09:efficient}.
Following \citep{YueJoachims:2011}, we first used the above approach to estimate the comparison probabilities $p_{ij}$ for each pair of rankers and then used these probabilities to simulate comparisons between rankers. More specifically, we estimated the full preference matrix by performing $4000$ interleaved comparisons on each pair of the $64$ feature rankers included in the LETOR dataset.
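Concretely, once the preference matrix has been estimated, each simulated comparison is just a Bernoulli draw from it (a minimal sketch; the made-up $3\times 3$ matrix below stands in for the estimated $64\times 64$ one):

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up 3-ranker stand-in for the estimated 64 x 64 preference matrix.
P_hat = np.array([[0.50, 0.55, 0.65],
                  [0.45, 0.50, 0.60],
                  [0.35, 0.40, 0.50]])

def interleaved_comparison(i, j):
    """Simulate one comparison: ranker i wins with probability P_hat[i, j]."""
    return i if rng.random() < P_hat[i, j] else j

# Empirical win frequency of ranker 0 over ranker 2 across 10,000 duels.
wins = sum(interleaved_comparison(0, 2) == 0 for _ in range(10_000))
```

A valid preference matrix must satisfy $p_{ij} + p_{ji} = 1$, which the stand-in above does.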
We evaluated RUCB, Condorcet SAVAGE and BTM using randomly chosen subsets from the pool of $64$ rankers, yielding $K$-armed dueling bandit problems with $K \in \{16,32,64\}$. For each set of rankers, we performed 100 independent runs of each algorithm for a maximum of 4.5 million iterations. For RUCB we set $\alpha=0.51$, which approaches the limit of our high-probability theoretical results, i.e., $\alpha>0.5$ as in Theorem \ref{thm:HighProbBound}. We did not include an evaluation of IF, since both BTM and Condorcet SAVAGE were shown to outperform it \cite{YueJoachims:2011,Urvoy:2013}.
Since BTM and SAVAGE require the exploration horizon as input, we ran $\textup{BTM}_T$ and $\textup{CSAVAGE}_T$ for various horizons $T$ ranging from 1000 to 4.5 million. In the top row of plots in Figure \ref{fig:accuracy-regret}, the markers on the green and the blue curves show the regret accumulated by $\textup{BTM}_T$ and $\textup{CSAVAGE}_T$ in the first $T$ iterations of the algorithm for each of these horizons. Thus, each marker corresponds, not to the continuation of the runs that produced the previous marker, but to new runs conducted with a larger $T$.
Since RUCB is horizonless, we ran it for 4.5 million iterations and plotted the cumulative regret, as shown using the red curves in the same plots. In the case of all three algorithms, the solid line shows the expected cumulative regret averaged across all 100 runs and the dotted lines show the minimum and the maximum cumulative regret that was observed across runs. Note that these plots are in log-log scale.
The bottom plots in Figure \ref{fig:accuracy-regret} show the accuracy of all three algorithms across 100 runs, computed at the same times as the exploration horizons used for BTM and SAVAGE in the regret plots. Note that these plots are in lin-log scale.
These results clearly demonstrate that RUCB identifies the best arm more quickly: it asymptotically accumulates 5 to 10 times less regret than Condorcet SAVAGE and reaches higher levels of accuracy in roughly $20\%$ of the time required by Condorcet SAVAGE, all without knowing the horizon $T$. The contrast is even starker when comparing to BTM.
\section{Conclusions} \label{sec:conclusions}
This paper proposed a new method called Relative Upper Confidence Bound (RUCB) for the \emph{$K$-armed dueling bandit problem} that extends the Upper Confidence Bound (UCB) algorithm to the relative setting by using optimistic estimates of the pairwise probabilities to choose a potential champion and conducting regular UCB with the champion as the benchmark.
We proved finite-time high-probability and expected regret bounds of order $\mathcal O(\log t)$ for our algorithm and evaluated it empirically in an information retrieval application. Unlike existing results, our regret bounds hold for all time steps, rather than just a specific horizon $T$ input to the algorithm. Furthermore, they rely on less restrictive assumptions or have better multiplicative constants than existing methods. Finally, the empirical results showed that RUCB greatly outperforms state-of-the-art methods.
In future work, we will consider two extensions to this research. First, building off extensions of UCB to the continuous bandit setting \cite{Srinivas:2010,Bubeck:2011,Munos:2011,deFreitas:2012,Valko:2013}, we aim to extend RUCB to the continuous dueling bandit setting, without a convexity assumption as in \cite{Yue:2009}. Second, building off Thompson Sampling \cite{thompson1933likelihood,Agrawal:2012,Kauffman:2012}, an elegant and effective sampling-based alternative to UCB, we will investigate whether a sampling-based extension to RUCB would be amenable to theoretical analysis. Both these extensions involve overcoming not only the technical difficulties present in the regular bandit setting, but also those that arise from the two-stage nature of RUCB.
\section*{Acknowledgments}
This research was partially supported by
the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement nr 288024 (LiMoSINe project),
the Netherlands Organisation for Scientific Research (NWO) under project nrs
640.\-004.\-802, 727.\-011.\-005, 612.001.116, HOR-11-10,
the Center for Creation, Content and Technology (CCCT),
the QuaMerdes project funded by the CLARIN-nl program,
the TROVe project funded by the CLARIAH program,
the Dutch national program COMMIT,
the ESF Research Network Program ELIAS,
the Elite Network Shifts project funded by the Royal Dutch Academy of Sciences (KNAW),
the Netherlands eScience Center under project number 027.012.105
and
the Yahoo! Faculty Research and Engagement Program.
\setlength{\bibsep}{7pt}
\section{Appendix}
\begin{figure}
\caption{The probability that the Condorcet and the total ordering assumptions hold for subsets of the feature rankers. The probability is shown as a function of the size of the subset.}
\label{fig:prob-condorcet}
\end{figure}
\begin{figure*}\label{fig:lemma1Appendix}
\end{figure*}
Here we provide some details that were alluded to in the main body of the paper.
\subsection{The Condorcet Assumption}
As mentioned in Section \ref{sec:relatedwork}, IF and BTM require the comparison probabilities $p_{ij}$ to satisfy certain difficult to verify conditions. Specifically, IF and BTM require a \emph{total ordering} $\{a_1,\ldots,a_K\}$ of the arms to exist such that $p_{ij} > \frac{1}{2}$ for all $i < j$. Here we provide evidence that this assumption is often violated in practice. By contrast, the algorithm we propose in Section \ref{sec:algorithm} makes only the Condorcet assumption, which is implied by the total ordering assumption of IF and BTM.
In order to test how stringent an assumption the existence of a Condorcet winner is compared to the total ordering assumption, we estimated the probability of each assumption holding in our ranker evaluation application.
Using the same preference matrix as in our experiments in Section \ref{sec:experiments}, we computed for each $K=1,\ldots,64$ the probability $P_K$ that a given $K$-armed dueling bandit problem, obtained by considering $K$ of our $64$ feature rankers, has a Condorcet winner, as follows: first, we counted the $K$-armed dueling bandit problems that have a Condorcet winner by calculating, for each feature ranker $r$, how many $K$-armed dueling bandit problems it can be the Condorcet winner of: for each $r$, this is equal to $\binom{N_r}{K-1}$, where $N_r$ is the number of rankers that $r$ beats, since the remaining $K-1$ arms must be drawn from those rankers; next, we divided this total number of $K$-armed dueling bandit problems with a Condorcet winner by $\binom{64}{K}$, the number of all $K$-armed dueling bandit problems that one could construct from these $64$ rankers.
The probabilities $P_K$, plotted as a function of $K$ in Figure \ref{fig:prob-condorcet} (the red curve), were all larger than $0.97$. The same plot also shows an estimate of the probability that the total ordering assumption holds for a given $K$ (the blue curve), which was obtained by randomly selecting $100,000$ $K$-armed dueling bandit problems and searching for ones that satisfy the total ordering assumption. As can be seen from Figure \ref{fig:prob-condorcet}, as $K$ grows, the probability that the total ordering assumption holds decreases rapidly. This is because there exist cyclical relationships between these feature rankers, and as soon as the chosen subset of feature rankers contains one of these cycles, it fails to satisfy the total ordering condition. By contrast, the Condorcet assumption is still satisfied as long as the cycle does not include the Condorcet winner. Moreover, because of the presence of these cycles, the probability that the Condorcet assumption holds decreases initially as $K$ increases, but then increases again because the number of all possible $K$-armed dueling bandit problems decreases as $K$ approaches $64$.
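Under this counting argument (choosing the remaining $K-1$ arms from the $N_r$ rankers that $r$ beats), $P_K$ can be computed exactly; the following sketch runs it on a made-up five-ranker pool containing one cycle:

```python
from math import comb

def condorcet_probability(beats, K):
    """P_K for a pool where beats[r] = number of rankers that ranker r beats."""
    favourable = sum(comb(n_r, K - 1) for n_r in beats)
    return favourable / comb(len(beats), K)

# Made-up pool of 5 rankers: ranker 0 beats everyone, ranker 1 beats {2,3,4},
# and 2 -> 3 -> 4 -> 2 form a cycle (each beats exactly one other ranker).
beats = [4, 3, 1, 1, 1]
```

On this pool, the only subset without a Condorcet winner is the cycle $\{2,3,4\}$ itself, so $P_3 = 9/10$ while $P_1 = P_5 = 1$.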
Furthermore, in addition to the total ordering assumption, IF and BTM each require a form of \emph{stochastic transitivity}. In particular, IF requires \emph{strong stochastic transitivity}: for any triple $(i,j,k)$, with $i < j < k$, the following condition needs to be satisfied: \[ p_{ik} \geq \max\{p_{ij},p_{jk}\}. \] BTM requires the less restrictive \emph{relaxed stochastic transitivity}, i.e., that there exists a number $\gamma \geq 1$ such that for all pairs $(j,k)$ with $1 < j < k$, we have \[ \gamma p_{1k} \geq \max\{p_{1j},p_{jk}\}. \] As pointed out in \cite{YueJoachims:2011}, strong stochastic transitivity is often violated in practice, a phenomenon also observed in our experiments: for instance, all of the $K$-armed dueling bandit problems on which we experimented require $\gamma > 1$.
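Both transitivity conditions are straightforward to check mechanically for a given preference matrix (a small sketch; the matrix below is made up, with arms indexed in their total order so that arm 0 is best):

```python
import numpy as np

# Made-up preference matrix: the total order holds, but strong stochastic
# transitivity fails because p_02 = 0.55 < p_12 = 0.70.
P = np.array([[0.50, 0.52, 0.55],
              [0.48, 0.50, 0.70],
              [0.45, 0.30, 0.50]])

def strongly_transitive(P):
    """Strong stochastic transitivity: p_ik >= max(p_ij, p_jk) for all i < j < k."""
    K = P.shape[0]
    return all(P[i, k] >= max(P[i, j], P[j, k])
               for i in range(K) for j in range(i + 1, K) for k in range(j + 1, K))

def min_gamma(P):
    """Smallest gamma with gamma * p_1k >= max(p_1j, p_jk) for all 1 < j < k."""
    K = P.shape[0]
    return max([max(P[0, j], P[j, k]) / P[0, k]
                for j in range(1, K) for k in range(j + 1, K)] + [1.0])
```

For this matrix, strong stochastic transitivity fails while relaxed stochastic transitivity holds with $\gamma = 0.70/0.55 \approx 1.27$.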
Even though BTM permits a broader class of $K$-armed dueling bandit problems, it requires $\gamma$ to be explicitly passed to it as a parameter, which poses substantial difficulties in practice. If $\gamma$ is underestimated, the algorithm can in certain circumstances be misled with high probability into choosing the Borda winner instead of the Condorcet winner, e.g., when the Borda winner has a larger average advantage over the remaining arms than the Condorcet winner. On the other hand, though overestimating $\gamma$ does not cause the algorithm to choose the wrong arm, it nonetheless results in a severe penalty, since it makes the algorithm much more exploratory, yielding the $\gamma^7$ term in the upper bound on the cumulative regret, as discussed in Section \ref{sec:relatedwork}.
\subsection{Proof of Lemma \ref{lem:HighProbBound}}
In this section, we prove Lemma \ref{lem:HighProbBound}, whose statement is repeated here for convenience. Recall from Section \ref{sec:theory} that we assume without loss of generality that $a_1$ is the optimal arm. Moreover, given any $K$-armed dueling bandit algorithm, we define $w_{ij}(t)$ to be the number of times arm $a_i$ has beaten $a_j$ in the first $t$ iterations of the algorithm. We also define $u_{ij}(t) := \frac{w_{ij}(t)}{w_{ij}(t)+w_{ji}(t)} + \sqrt{\frac{\alpha\ln t}{w_{ij}(t)+w_{ji}(t)}}$, where $\alpha$ is any positive constant, and $l_{ij}(t) := 1-u_{ji}(t)$. Moreover, for any $\delta > 0$, define $C(\delta) := \left(\frac{(4\alpha-1)K^2}{(2\alpha-1)\delta}\right)^{\frac{1}{2\alpha-1}}$.
{\bf Lemma \ref{lem:HighProbBound}.}~\emph{Let $\myvec{P} := \left[ p_{ij} \right]$ be the preference matrix of a $K$-armed dueling bandit problem with arms $\{a_1,\ldots,a_K\}$, satisfying $p_{1j} > \frac{1}{2}$ for all $j > 1$ (i.e., $a_1$ is the Condorcet winner). Then, for any dueling bandit algorithm and any $\alpha > \frac{1}{2}$ and $\delta > 0$, we have} \begin{equation}\label{eqn:TailBound} P\Big( \forall\,t>C(\delta),i,j,\; p_{ij} \in [l_{ij}(t),u_{ij}(t)] \Big) > 1-\delta. \end{equation}
\begin{proof} To decompose the lefthand side of \eqref{eqn:TailBound}, we introduce the notation $\mathcal{G}_{ij}(t)$ for the ``good'' event that at time $t$ we have $p_{ij} \in [l_{ij}(t),u_{ij}(t)]$, which satisfies the following: \begin{itemize}[leftmargin=*,topsep=0pt,parsep=0pt,partopsep=0pt] \item[] \hspace{-5mm} (i) $\mathcal{G}_{ij}(t) = \mathcal{G}_{ji}(t)$ because of the triple of equalities $\Big(p_{ji},l_{ji}(t),u_{ji}(t)\Big) = \Big(1-p_{ij},1-u_{ij}(t),1-l_{ij}(t)\Big)$. \item[] \hspace{-6mm} (ii) $\mathcal{G}_{ii}(t)$ always holds, since $\left(p_{ii},l_{ii}(t),u_{ii}(t)\right) = \left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$. Together with (i), this means that we only need to consider $\mathcal{G}_{ij}(t)$ for $i < j$. \item[] \hspace{-4.5mm} (iii) Define $\tau^{ij}_n$ to be the iteration at which arms $i$ and $j$ were compared against each other for the $n^{th}$ time. If $\mathcal{G}_{ij}\left(\tau^{ij}_n+1\right)$ holds, then the events $\mathcal{G}_{ij}(t)$ hold for all $t \in \left(\tau^{ij}_n,\tau^{ij}_{n+1}\right]$ because when $t \in \left(\tau^{ij}_n,\tau^{ij}_{n+1}\right]$, $w_{ij}$ and $w_{ji}$ remain constant and so in the expressions for $u_{ij}(t)$ and $u_{ji}(t)$ only the $\ln t$ changes, which is a monotonically increasing function of $t$. So, we have \[ l_{ij}(t) \leq l_{ij}(\tau^{ij}_n+1) \leq p_{ij} \leq u_{ij}(\tau^{ij}_n+1) \leq u_{ij}(t). \]
Moreover, the same statement holds with $\tau^{ij}_n$ replaced by any $T \in \left(\tau^{ij}_n,\tau^{ij}_{n+1}\right]$, i.e., if we know that $\mathcal{G}_{ij}(T)$ holds, then $\mathcal{G}_{ij}(t)$ also holds for all $t \in \left(T,\tau^{ij}_{n+1}\right]$. This is illustrated in Figure \ref{fig:lemma1Appendix}. \end{itemize}
Now, given the above three facts, we have for any $T$
\begin{align}\label{eqn:goodPruning} & P\Big( \forall\,t\geq T,i,j,\; \mathcal{G}_{ij}(t) \Big) \\ & \; = P\Big( \forall\,i>j,\; \mathcal{G}_{ij}(T) \textup{ and } \forall\,n\;s.t.\; \tau^{ij}_n > T, \; \mathcal{G}_{ij}(\tau^{ij}_n) \Big). \nonumber \end{align}
Let us now flip things around and look at the complement of these events, i.e. the ``bad'' event $\mathcal{B}_{ij}(t)$ that $p_{ij} \notin [l_{ij}(t),u_{ij}(t)]$ occurs. Then, subtracting both sides of Equation \eqref{eqn:goodPruning} from $1$ and using the union bound gives \begin{align*} & P\Big( \exists\,t>T,i,j\;s.t.\; \mathcal{B}_{ij}(t) \Big) \\ & \leq \sum_{i<j} \bigg[ P\Big( \mathcal{B}_{ij}(T) \Big) + P\Big( \exists\,n: \tau^{ij}_n > T \textup{ and } \mathcal{B}_{ij}(\tau^{ij}_n) \Big) \bigg]. \end{align*} Further decomposing the righthand side using union bounds and making the condition explicit, we get
\begin{align*} & P\Big( \exists\,t>T,i,j\;s.t.\; \mathcal{B}_{ij}(t) \Big) \\
& \leq \sum_{i<j} \Bigg[ P\left( \left|p_{ij}-\mu^{ij}_{N_{ij}(T)}\right| > \sqrt{\frac{\alpha\ln T}{N_{ij}(T)}} \right) + \\
& P\left( \exists\,n \leq T\;s.t.\; \tau^{ij}_n > T \textup{ and } \left|p_{ij}-\mu^{ij}_n\right| > \sqrt{\frac{\alpha\ln \tau^{ij}_n}{n}} \right) \\
& \qquad\quad + P\left( \exists\,n > T\;s.t.\; \left|p_{ij}-\mu^{ij}_n\right| > \sqrt{\frac{\alpha\ln \tau^{ij}_n}{n}} \right) \Bigg], \\ \end{align*}
since $T < n < \tau^{ij}_n$. Here, $\mu^{ij}_n := \frac{w_{ij}(\tau^{ij}_n)}{w_{ij}(\tau^{ij}_n)+w_{ji}(\tau^{ij}_n)}$ is the frequentist estimate of $p_{ij}$ after $n$ comparisons between arms $a_i$ and $a_j$.
Now, in the above sum, we can upper-bound the first term by looking at the higher probability event that $\mathcal{B}_{ij}(T)$ happens for any possible number of comparisons between $a_i$ and $a_j$, and since we know that $N_{ij}(T) \leq T$, we can replace $N_{ij}(T)$ with a variable $n$ that can take values between $0$ and $T$. For the second term, we know that $\tau^{ij}_n > T$, so we can replace $\tau^{ij}_n$ with $T$ and remove the condition $\tau^{ij}_n > T$ and look at all $n \leq T$. For the third term, since we always have that $n < \tau^{ij}_n$, we can replace $\tau^{ij}_n$ with $n$ and get a higher probability event. Putting all of this together we get the looser bound
\begin{align}
& P\Big( \exists\,t>T,i,j\;s.t.\; \mathcal{B}_{ij}(t) \Big) \nonumber \\
& \leq \sum_{i<j} \Bigg[ P\left( \exists\, n\in\{0,\ldots,T\}: \left| p_{ij} - \mu^{ij}_n \right| > \sqrt{\frac{\alpha\ln T}{n}} \right) \nonumber \\
& \qquad\; + P\left( \exists\, n\in\{0,\ldots,T\}: \left| p_{ij} - \mu^{ij}_n \right| > \sqrt{\frac{\alpha\ln T}{n}} \right) \nonumber \\
& \qquad\; + P\left( \exists\, n > T\; s.t. \; \left| p_{ij} - \mu^{ij}_n \right| > \sqrt{\frac{\alpha\ln n}{n}} \right) \Bigg] \nonumber \\
& \leq \sum_{i<j} \Bigg[ 2\sum_{n=0}^T P\left( \left| p_{ij} - \mu^{ij}_n \right| > \sqrt{\frac{\alpha\ln T}{n}} \right) \nonumber \\
& \qquad\quad + \sum_{n=T+1}^\infty P\left( \left| p_{ij} - \mu^{ij}_n \right| > \sqrt{\frac{\alpha\ln n}{n}} \right) \Bigg]. \label{eqn:CHstep}
\end{align}
To bound the expression on line \eqref{eqn:CHstep}, we apply the Chernoff-Hoeffding bound, which in its simplest form states that given i.i.d.\ random variables $X_1, \ldots, X_n$, whose support is contained in $[0,1]$ and whose expectation satisfies $\mathbb{E}[X_k] = p$, and defining $\mu_n := \frac{X_1+\cdots+X_n}{n}$, we have $P(|\mu_n-p| > a) \leq 2e^{-2na^2}$. This gives us
\begin{align}
& P\Big( \exists\,t>T,i,j\;s.t.\; \mathcal{B}_{ij}(t) \Big) \nonumber \\
& \leq \sum_{i<j} \left[ 2\sum_{n=1}^T 2e^{-2\cancel{n}\dfrac{\alpha\ln T}{\cancel{n}}} + \sum_{n=T+1}^\infty 2e^{-2\cancel{n}\dfrac{\alpha\ln n}{\cancel{n}}} \right] \nonumber \\
& = \frac{K(K-1)}{2} \left[ \sum_{n=1}^T \frac{4}{T^{2\alpha}} + \sum_{n=T+1}^\infty \frac{2}{n^{2\alpha}} \right] \nonumber \\
& \leq \frac{2K^2}{T^{2\alpha-1}} + K^2\int_{T}^\infty \frac{dx}{x^{2\alpha}}, \; \textup{since $\frac{1}{x^{2\alpha}}$ is decreasing} \nonumber \\
& = \frac{2K^2}{T^{2\alpha-1}} + \frac{K^2}{(1-2\alpha) x^{2\alpha-1}} \bigg|_{T}^\infty \nonumber \\
& = \frac{(4\alpha-1)K^2}{(2\alpha-1) T^{2\alpha-1}}. \label{eqn:badUpperBound} \end{align}
Now, since $C(\delta) = \left(\frac{(4\alpha-1)K^2}{(2\alpha-1)\delta}\right)^{\frac{1}{2\alpha-1}}$ for each $\delta~>~0$, the bound in \eqref{eqn:badUpperBound} gives us \eqref{eqn:TailBound}.
\end{proof}
\end{document}
Neutron star materials - If a neutron star stops spinning, what will be the characteristics of the materials in it?
As the title says: if a neutron star stops spinning, what will happen to the materials that the neutron star is made of?
Will they still be super dense?
Are they brittle? If they are, how strong are they?
How heavy would those materials be?
Here's the actual scenario: a certain blacksmith discovers a rock (technically a mineral) that is so heavy that just a pebble of it took him and 3 of his sons to carry it.
In that scenario, what could I actually do with that kind of density, and can it be worked with?
Or could I possibly create a fabric so strong that it beats other metals in terms of armor properties?
science-fiction astronomy stars
a CVn♦
mico villena
$\begingroup$ Your estimate of neutron star density is way too low. Neutron stars weigh 100 thousand tons per cubic mm (roughly the size of a large grain of sand). $\endgroup$ – March Ho Jul 21 '16 at 11:39
$\begingroup$ Without the gravitational pressure holding it together, the degenerate matter of a neutron star will probably disinitgrate...by the particles flying away from each other at nearly the speed of light. $\endgroup$ – Thucydides Jul 21 '16 at 12:27
$\begingroup$ See this video for what a marble-sized piece of neutronium would do. $\endgroup$ – JDługosz Aug 19 '16 at 22:24
$\begingroup$ Why would it matter if it was spinning or not? What is a "fabring"? $\endgroup$ – JDługosz Aug 19 '16 at 22:43
$\begingroup$ @JDługosz Seems to me OP meant fabric. $\endgroup$ – a CVn♦ Aug 21 '16 at 11:43
Neutron stars are extreme objects that measure between 10 and 20 km across. They have densities of around 10^17 kg/m3 (the Earth, for comparison, has a density of around 5×10^3 kg/m3).
A pebble of 1 cm (0.01 m) radius would have a volume of 4.188 x 10^-6 m3 and a mass of 4.188 x 10^11 kg, that is, about 418,800,000 tonnes. I don't think even 3 supermen could drag it.
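You can sanity-check those numbers with a quick script; the 1 cm radius and the 10^17 kg/m3 density figure are the ones assumed above:

```python
# Mass of a pebble-sized sphere of neutron-star-density material,
# assuming radius 1 cm and density 1e17 kg/m^3 (figures used above).
import math

density = 1e17                              # kg/m^3, neutron-star density
radius = 0.01                               # m (1 cm)
volume = (4 / 3) * math.pi * radius**3      # ~4.19e-6 m^3
mass_kg = density * volume                  # ~4.19e11 kg
mass_tonnes = mass_kg / 1000                # ~4.19e8 tonnes

print(f"volume = {volume:.3e} m^3")
print(f"mass   = {mass_kg:.3e} kg ({mass_tonnes:.3e} tonnes)")
```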
Most of the space in an atom is empty; the electrons orbit really far away from the nucleus. Neutron stars are made when atoms are disintegrated into their fundamental components: nuclei and electrons, which rather than orbiting zip around packed closely together. It wouldn't be brittle (at least I think so). If you made a weapon with it, like a sword, and somehow figured out a way to use it, you wouldn't have to cut people; they would be attracted to it. All you would have to do is point it at them, and they would be crushed by the gravity.
Making armour also won't work, because things around it would be attracted to it. You could make bullets for interstellar weapons; shot at high speed at a planet, they would wreck it.
Chinu
$\begingroup$ MCU Thor's hammer is made of that material. The only reason it doesn't end life on Earth as we know it is Odin's magic. $\endgroup$ – Renan Jul 21 '16 at 14:30
$\begingroup$ BTW I don't think that weapon would work by pointing. It would work by getting close to the target. I feel sorry for anyone appointed as the official wielder of that weapon XD $\endgroup$ – Renan Jul 21 '16 at 14:31
$\begingroup$ @Renan Yes; the gravitational attraction alone of a macroscopic object made from neutron star-like material would probably be enough to clear the battlefield. $\endgroup$ – a CVn♦ Aug 21 '16 at 11:38
$\begingroup$ @Renan That's not actually true, his hammer was forged in a dying star but it's made of some fictional uru. $\endgroup$ – Vakus Drake Aug 21 '16 at 12:05
Neutronium probably isn't the material that you want to use if you want to keep it even slightly plausible. It can't exist outside of a neutron star, which has up to 2 solar masses squeezed into a diameter of about 10 miles. With anything less than that pressure, the strong nuclear force would cause the outer layers to pop off, losing more mass, until it disintegrates into a cloud of neutron radiation.
But say you handwaved that part away.
Could you make armor out of it? No. It would be too heavy to move (like several Earths heavy), and with so much gravity that anything in the vicinity would be pulled toward it, crushing down to a crusty patina on its surface.
Could you make a weapon out of it? Yes. Drop a chunk toward a planet and watch it shatter its way through, and then the broken chunks would slowly collect around the piece of neutronium.
If you want an unbreakable armor, I'd personally suggest some super advanced alloy. Say they find a mysterious piece of metal, melt it down and combine it with other metals like iron to form something new.
Steel is an alloy of iron, carbon, and a few other elements depending on what properties you want it to have. By adding this mystery metal you could give it whatever properties you want.
AndyD273
$\begingroup$ See this for supermaterials. $\endgroup$ – JDługosz Aug 19 '16 at 22:20
$\begingroup$ If you dropped a chunk of neutronium towards a planet, assuming it started at neutron star densities, the first thing it would do is rapidly expand/explode. Objects at those densities won't continue to exist once they're out of the neutron star or some other sort of crushingly high pressure containment vessel. $\endgroup$ – ckersch Aug 19 '16 at 22:21
$\begingroup$ @ckersch worse than that! Neutrons have a half-life of around 11 minutes… $\endgroup$ – JDługosz Aug 19 '16 at 22:25
$\begingroup$ @ckersch right, my answer does cover that, saying anything less than 2 solar masses would cause it to explode/disintegrate. But if you could keep it in its neutronium form through magic or whatever, it would destroy any planet it got close to, so wouldn't make good armor. $\endgroup$ – AndyD273 Aug 20 '16 at 1:31
Yes, it will still be super dense. I don't think the neutron star's spinning has much to do with the density, since the primary force is gravity. The density might factor into the strength, though it depends on what you want to measure: hardness, tensile strength, shear strength, compressive strength, etc. are hard to be sure of, because it's so dense. The average mass is around 500,000 Earths, but the average size is in the ballpark of 25 kilometers. Put differently, a cubic centimeter of that stuff would weigh hundreds of millions of tonnes. If you were dropped a meter above a neutron star, you'd likely accelerate to over seven million kilometers per hour. You get the picture.
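For the curious, here is a rough Newtonian sanity check of that last figure. The 1.4 solar masses and 11 km radius are assumptions (real values vary), and relativistic effects are ignored entirely; it lands around six million km/h, the same ballpark as the number quoted above:

```python
# Surface gravity and speed after a 1 m Newtonian free fall,
# assuming a 1.4-solar-mass neutron star with an 11 km radius.
G = 6.674e-11            # m^3 kg^-1 s^-2, gravitational constant
M = 1.4 * 1.989e30       # kg, assumed stellar mass (1.4 solar masses)
r = 11e3                 # m, assumed stellar radius

g = G * M / r**2                 # surface gravity, ~1.5e12 m/s^2
v = (2 * g * 1.0) ** 0.5         # speed after falling 1 m

print(f"surface gravity ~ {g:.2e} m/s^2")
print(f"speed after 1 m ~ {v:.2e} m/s ({v * 3.6:.2e} km/h)")
```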
But it also depends on what you're specifically asking for when you say the materials of a neutron star. The crust is likely iron atom cores and past that is simply a super-dense soup of neutrons. Beyond that you might get to a quark-gluon plasma, or a superfluid of neutron degenerate matter. Regardless, the theme here is extreme density. And thus, it would be extremely unlikely that you'd be able to move it, let alone work with it. You'd probably need some form of advanced gravitic manipulation, and even then it'd best be used not as armor but for something else.
We as a species don't understand them very well, but there's a reason dense materials do not necessarily make better armor. First there's the issue of weight and workability. There are plenty of lightweight and very strong materials that are better suited to creating armor, especially for people. Even a starship's hull would likely be better off with a lighter armor because of sheer mass and inertia. And as Chinu said, it would be extraordinarily efficient in kinetic bombardment, again due to its density, but said density is the primary limiter on its usefulness.
armorhide406
$\begingroup$ The spin of a neutron star is not terribly important in its density. They do slowly spin down, since their magnetic fields tend to act as a brake, but as it slows, it will get very slightly denser, simply because "centripetal force" is getting weaker and thus opposing its gravity less effectively. $\endgroup$ – John Dallman Jul 21 '16 at 12:43
Nothing will happen to the materials the neutron star is made of, because their composition is in no way related to the spinning of the star, only to its mass and radius. There would be a bit less centrifugal force counteracting the gravity near the equator, so there may be some minor changes in the equatorial crust, with some of the lighter nuclei in the crust clumping together to form heavier ones. The neutron star will remain a dense ball of neutronium with a thin crust of heavy, exotic atomic nuclei. It will be impossible to mine, because the gravity would kill you if you set foot on the surface.
ckersch
$\begingroup$ I suspect the gravity would be problematic long before you reach the surface. See also Short of collision, can gravity itself kill you? on Physics. $\endgroup$ – a CVn♦ Aug 21 '16 at 11:41
Your understanding of "neutron star material" is faulty. The reason neutron stars are so dense is because of their high gravity compressing that matter together, not some intrinsic property of the material itself. Basically a neutron star is what happens when a star's collapsed core isn't quite massive enough to become a black hole, but is fairly close.
So if you had a piece of neutron star material and took it out of the star's gravitational field, it would simply evaporate into free neutrons, which would soon decay into plain old hydrogen gas. If you removed it quickly, it would explode.
Now if you had an exotic material that had density comparable to that of a neutron star and was somehow stable... well, you'd certainly have some use for it, but it wouldn't make good armor because of its weight. If you had the energy to accelerate it or simply drop it from orbit, it would be an absurdly powerful weapon. As for how brittle it would be, that depends on its exact material properties, which are already nothing like that of a neutron star so you can make it whatever you want.
Also, it would make one heck of a paperweight.
IndigoFenix
Let's go with the scenario you are describing rather than the question you start out by asking (which appears to be only peripherally related).
Let's assume that a "pebble" is approximately 1 cm3 in size.
Let's also assume that one of these adult men can carry somewhere on the order of 75 kg.
Let's also ignore how four men are able to simultaneously hold on to an object that is 1 cm3 in size, which is going to be a serious challenge, but not unsolvable, in itself.
With these assumptions, we can estimate the pebble's weight to be on the order of $4 \times 75~\text{kg} = 300~\text{kg}$.
As a consequence, the density of the material is something like $300 \times 10^6 = 3 \times 10^8$ kg/m3 (because there are $10^6$ cm3 in a m3).
Apparently, osmium is the densest naturally occurring element on Earth, at 22.59 g/cm3 = $2.259 \times 10^4$ kg/m3. A pebble-sized ball of osmium would weigh a few tens of grams.
For back-of-the-envelope calculations, it's common to just look at the exponent. Your material is $10^{8-4} = 10^4$ times as dense as osmium. The actual result is that the material these men found is somewhere around 13,000 times as dense as osmium, but this figure could easily be anywhere from 10,000 to 15,000 times depending on how strong these men are.
For comparison, as pointed out by Chinu, a neutron star has a density on the order of $10^{17}$ kg/m3 (actually, several times that; Wikipedia states $3.7 \times 10^{17}$ to $5.9 \times 10^{17}$ kg/m3), which is another nine orders of magnitude (a billion times) more dense than the material you envision. A pebble-sized portion of a neutron star, assuming it stayed together (which it wouldn't, as IndigoFenix already pointed out), would weigh not 300 kg, but more like 300,000,000,000 ($3 \times 10^{11}$) kg.
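The whole back-of-the-envelope chain above can be reproduced in a few lines; the 1 cm3 pebble, the 75 kg per man, and the quoted density figures are the assumptions stated in this answer:

```python
# Back-of-the-envelope density comparison, assuming a 1 cm^3 pebble
# carried by four men at ~75 kg each, and the densities quoted above.
pebble_volume = 1e-6                      # m^3 (1 cm^3)
pebble_mass = 4 * 75.0                    # kg, four men x 75 kg
density = pebble_mass / pebble_volume     # 3e8 kg/m^3

osmium = 2.259e4                          # kg/m^3, densest natural element
neutron_star = 3.7e17                     # kg/m^3, lower Wikipedia figure

print(f"pebble density       : {density:.1e} kg/m^3")
print(f"vs osmium            : {density / osmium:.0f}x denser")
print(f"neutron star vs that : {neutron_star / density:.1e}x denser still")
```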
In conclusion, basically and unfortunately, this question is another example of a lack of sense of scale in space.
I agree with AndyD273: It would probably be better, and less likely to risk loss of suspension of disbelief, to just have these men come up with a super-strong alloy instead. You don't even have to name or describe the parts to such an alloy unless you want to (but if you do, beware of falling into the same trap again by misestimating things).
\begin{document}
\title{Determinacy and Decidability of Reachability Games with Partial Observation on Both Sides} \begin{abstract} We consider two-player stochastic reachability games with partial observation on both sides and finitely many states, signals and actions. We prove that in such games, either player $1$ has a strategy for winning with probability $1$, or player $2$ has such a strategy, or both players have strategies that guarantee winning with non-zero probability (positively winning strategies). We give a fix-point algorithm for deciding which of the three cases holds; the decision can be made in doubly-exponential time. \end{abstract}
\section*{Introduction} We prove two determinacy and decidability results about two-player stochastic reachability games with partial observation on both sides and finitely many states, signals and actions. Player $1$ wants the play to reach the set of target states, while player $2$ wants to keep the play away from target states. Players take their decisions based upon \emph{signals} that they receive all along the play, but they cannot observe the actual state of the game, nor the actions played by their opponent, nor the signals received by their opponent. Each player only observes the signals he receives and the actions he plays. Players have common knowledge of the initial state of the game.
Our determinacy result is of a special kind, as it concerns two notions of solutions for stochastic games. The first one is the well-known notion of \emph{almost-surely} winning strategy, which guarantees winning with probability $1$ against any strategy of the opponent. The second one is the notion of \emph{positively winning} strategy: a strategy is positively winning if it guarantees a non-zero winning probability against any strategy of the opponent. This notion is less known; to our knowledge it appeared recently in~\cite{theseflorian}. The notion of positively winning strategy is different from the notion of positive value, because the non-zero winning probability can be made arbitrarily small by the opponent, hence existence of a positively winning strategy does not give any clue for deciding whether the value is zero or not. Existence of a positively winning strategy guarantees that the opponent does not have an almost-surely winning strategy, however there is no straightforward reason that one of these cases should always hold. Actually, if we consider more complex classes of games than reachability games, there are various examples where neither player $1$ has a positively winning strategy nor player $2$ has an almost-surely winning strategy.
Our first result (Theorem~\ref{theo:twoplayers1}) states that, in reachability games with partial observation on both sides, either player $1$ has a positively winning strategy or player $2$ has an almost-surely winning strategy. Moreover which case holds is decidable in \emph{exponential} time. Notice that an almost-surely winning strategy for player $2$ in a reachability game is \emph{surely winning} as well.
Our second result (Theorem~\ref{theo:twoplayers2}) states that either player $1$ has an almost-surely winning strategy or player $2$ has a positively winning strategy, and this is decidable in \emph{doubly-exponential} time.
Both these results strengthen and generalize in several ways results given in~\cite{chdr07}. Actually, that paper addresses only the particular case where player $2$ has perfect information and target states are observable by player $1$. Moreover, in~\cite{chdr07} no determinacy result is established; the paper "only" describes an algorithm for deciding whether player $1$ has an almost-surely winning strategy.
\section{Reachability games with partial observation on both sides}
We consider zero-sum stochastic games with partial observation on both sides, where the goal of Player $1$ is to reach a certain set of target states. Players only partially observe the state of the game, via signals. Signals and state transitions are governed by probability transitions: when the state is $k$ and two actions $i$ and $j$ are chosen, player $1$ and $2$ receive respectively signals $c$ and $d$ and the new state is $l$ with probability $\tp{c,d,l}{k,i,j}$.\\
\subsection{Notations} We use the following standard notations~\cite{renault}.\\ The game is played in steps. At each step the game is in some state $k\inK$. The goal of player $1$ is to reach the set of target states $T\subseteqK$. Before the game starts, the initial state is chosen according to the initial distribution $\delta \in\distrib{K}$, which is common knowledge of both players. Players $1$ and $2$ choose actions $i\inI$ and $j\inJ$, then player $1$ receives a signal $c\inC$, player $2$ receives a signal $d\inD$, and the game moves to a new state $l$. This happens with probability $\tp{c,d,l}{k,i,j}$ given by fixed transition probabilities $p : K\times I \times J \to \distrib{C\times D \times K}$, known by both players. We denote $\tp{l}{k,i,j}=\sum_{c,d}\tp{c,d,l}{k,i,j}$. Players observe and remember their own actions and the signals they receive; it is convenient to suppose that the action a player just played is encoded in the signal he receives, formally that there exists $\act : C \cup D \to I\cupJ$ such that $\tp{c,d,k'}{k,i,j}>0 \iff (i=\act(c) \text{ and } j=\act(d))$. We denote $\tp{c,d,l}{k}=\tp{c,d,l}{k,\act(c),\act(d)}$. This way, plays can be described by sequences of states and signals for both players, without mentioning which actions were played. A sequence $p=(k_0,c_1,d_1,\ldots,c_{n},d_{n},k_{n})\in(KCD)^*K$ is a finite play if for every $0\leq m< n$, $\tp{c_{m+1},d_{m+1},k_{m+1}}{k_m,\act(c_{m+1}),\act(d_{m+1})}>0$. An infinite play is a sequence $p\in(KCD)^\omega$ whose prefixes are finite plays.
A strategy of player $1$ is a mapping $\sigma : \distrib{K}\timesC^*\to \distrib{I}$ and a strategy of player $2$ is $\tau : \distrib{K}\timesD^*\to \distrib{J}$.
In the usual way, an initial distribution $\delta$ and two strategies $\sigma$ and $\tau$ define a probability measure $\prob{\sigma,\tau}{\delta}{\cdot}$ on the set of infinite plays, equipped with the $\sigma$-algebra generated by cylinders.
We use random variables $K_n,I_n,J_n,C_n,D_n$ to denote respectively the $n$-th state, action of player $1$, action of player $2$, signal of player $1$, signal of player $2$. The probability to reach a target state someday is: \[ \gamma_1(\delta,\sigma,\tau) = \prob{\sigma,\tau}{\delta}{\exists m \in \mathbb{N}, K_m \inT}\enspace, \] and the probability to never reach the target is $\gamma_2(\delta,\sigma,\tau)=1-\gamma_1(\delta,\sigma,\tau)$. Player $1$ seeks to maximize $\gamma_1$ while player $2$ seeks to maximize $\gamma_2$.
\subsection{Winning almost-surely or positively}
\begin{definition}[Almost-surely and positively winning] A distribution $\delta$ is \emph{almost-surely winning} for player $1$ if there exists a strategy $\sigma$ such that \begin{equation}\label{eq:as} \forall \tau, \gamma_1(\delta,\sigma,\tau)=1
\enspace. \end{equation} A distribution $\delta$ is \emph{positively winning} for player $1$ if there exists a strategy $\sigma$ such that \begin{equation}\label{eq:ps} \forall \tau, \gamma_1(\delta,\sigma,\tau)>0
\enspace. \end{equation} If the uniform distribution on a set of states $L\subseteq K$ is almost-surely or positively winning then $L$ itself is said to be almost-surely or positively winning. If there exists $\sigma$ such that~\eqref{eq:as}
holds for every almost-surely winning
distribution then $\sigma$ is said to be almost-surely winning
.
Positively winning strategies for player $1$ and almost-sure winning and positively winning strategies for player $2$ are defined similarly.
\end{definition}
\section{Winning almost-surely and positively with finite memory}
Of special algorithmic interest are strategies with finite memory. \begin{definition}[Strategies with finite memory] A strategy $\sigma$ with finite memory is described by:
\begin{itemize}
\item a finite set $M$ called the memory,
\item a strategic function $\sigma_M:M\to\distrib{I}$,
\item an update function $\update_M : M \times C \to M$,
\item an initialization function $\init_M : \parties{K} \to M$.
\end{itemize}
For playing with $\sigma$, player $1$ proceeds as follows. Let $\delta$ be the initial distribution with support $L$, then initially player $1$ puts the memory in state $\init_M(L)$. When the memory is in state $m$, player $1$ chooses his action according to the distribution $\sigma_M(m)$. When player $1$ receives a signal $c$ and its memory state is $m$, he changes the memory state to $\update_M(m,c)$. \end{definition}
A crucial tool for establishing our decidability and determinacy results is the class of finite-memory strategies whose memory is based on the notions of beliefs and pessimistic beliefs.
\subsection{Beliefs and pessimistic beliefs}
The belief of a player at some moment of the play is the set of states he thinks the game could possibly be in, according to the signals he has received up to now. The pessimistic belief is similar, except that the player assumes that no target state has been reached yet. One of the motivations for introducing beliefs and pessimistic beliefs is Proposition~\ref{prop:belief}.
Beliefs of player $1$ are defined by means of the operator $\mathcal{B}_1$ that associates with $L\subseteq K$ and $c\inC$, \begin{equation} \label{eq:defbelief} \mathcal{B}_1(L,c) = \{ k\in K \mid \exists l\in L, \exists d\in D, \tp{c,d,k}{l}>0 \}\enspace. \end{equation} We define inductively the belief after signals $c_1,\ldots,c_n,c$ by $\mathcal{B}_1(L,c_1,\ldots,c_n,c) = \mathcal{B}_1(\mathcal{B}_1(L,c_1,\ldots,c_n),c)$.
Pessimistic beliefs of player $1$ are defined by \[ \mathcal{B}_1^p(L,c) =\mathcal{B}_1(L\backslash T,c)\enspace. \]
Beliefs $\mathcal{B}_2$ and pessimistic beliefs $\mathcal{B}_2^p$ for player $2$ are defined similarly. We will use the following properties of beliefs and pessimistic beliefs.
\begin{proposition}\label{prop:belief} Let $\sigma,\tau$ be strategies for player $1$ and $2$ and $\delta$ an initial distribution with support $L$. Then for every $n\in\mathbb{N}$, \begin{align*} &\prob{\sigma,\tau}{\delta}{K_{n+1}\in\mathcal{B}_1(L,C_1,\ldots,C_n)}=1\enspace,\\ &\prob{\sigma,\tau}{\delta}{K_{n+1}\in\mathcal{B}_2(L,D_1,\ldots,D_n)}=1\enspace,\\ &\prob{\sigma,\tau}{\delta}{K_{n+1}\in\mathcal{B}_1^p(L,C_1,\ldots,C_n)\text{ or }K_m\inT \text{ for some }1\leq m\leq n}=1\enspace,\\ &\prob{\sigma,\tau}{\delta}{K_{n+1}\in\mathcal{B}_2^p(L,D_1,\ldots,D_n)\text{ or }K_m\inT \text{ for some }1\leq m\leq n}=1\enspace. \end{align*}
Suppose $\tau$ and $\delta$ almost-surely winning for player $2$, then for every $n\in\mathbb{N}$, \[ \prob{\sigma,\tau}{\delta}{\mathcal{B}_2(L,D_1,\ldots,D_n)\text{ is a.s.w. for player }2}=1\enspace. \]
Suppose $\sigma$ and $\delta$ almost surely winning for player $1$, then for every $n\in\mathbb{N}$, \[ \prob{\sigma,\tau}{\delta}{\mathcal{B}_1^p(L,C_1,\ldots,C_n)\text{ is a.s.w. for player $1$ or } \exists 1\leq m\leq n, K_m\inT}=1\enspace. \] \end{proposition}
\begin{proof} Almost straightforward from the definitions. \end{proof}
\subsection{Belief and pessimistic belief strategies}
A strategy $\sigma$ is said to be a \emph{belief strategy} for player $1$ if it has finite memory $M = \parties{K}$ and \begin{enumerate} \item the initial state of the memory is the support of the initial distribution, \item the update function is $(L,c)\to \mathcal{B}_1(L,c)$, \item the strategic function $\parties{K}\to\distrib{I}$ associates with each memory state $L\subseteq K$ the uniform distribution on a non-empty set of actions $I_L\subseteq I$. \end{enumerate} The definition of a pessimistic belief strategy for player $1$ is the same, except the update function is $\mathcal{B}_1^p$.
\section{Determinacy and decidability results}
In this section, we establish our main result, a determinacy result of a new kind. Usual determinacy results in game theory concern the existence of a value. Here the determinacy refers to positive and almost-sure winning:
\begin{theorem}[Determinacy]\label{theo:determinacy} Every initial distribution is either almost-surely winning for player $1$, surely winning for player $2$ or positively winning for both players. \end{theorem}
Theorem~\ref{theo:determinacy} is a corollary of Theorems~\ref{theo:twoplayers1} and~\ref{theo:twoplayers2}, in which details are given about the complexity of deciding whether an initial distribution is positively winning for player $1$ and whether it is positively winning for player $2$.
Deciding whether a distribution is positively winning for player $1$ is quite easy,
because player $1$ has a very simple strategy for winning positively: playing randomly any action.
\begin{theorem}[Deciding positive winning for player $1$]\label{theo:twoplayers1} Every initial distribution is either positively winning for player $1$ or surely winning for player $2$.
The strategy for player $1$ which plays randomly any action is positively winning. Player $2$ has a belief strategy which is surely winning.
The partition of supports between those positively winning for player $1$
and those surely winning for player $2$ is computable in time exponential in $|K|$, together with a surely winning belief strategy for player $2$. \end{theorem}
\begin{proof}[Proof of Theorem~\ref{theo:twoplayers1}]
Let $\ensuremath{\mathcal{L}}_\infty\subseteq \parties{K\backslashT}$ be the greatest fix-point of the monotonic operator $\Phi:\parties{\parties{K\backslashT}}\to \parties{\parties{K\backslashT}}$ defined by: \[ \Phi(\ensuremath{\mathcal{L}})= \{L\in \ensuremath{\mathcal{L}} \mid \exists j\in J, \forall d\inD, \text{ if }j=\act(d)\text{ then }\mathcal{B}_2(L,d)\in \ensuremath{\mathcal{L}}\}, \] and let $\sigma_R$ be the strategy for player $1$ that plays randomly any action. To establish Theorem~\ref{theo:twoplayers1} we are going to prove that: \begin{enumerate} \item[(A)] every support in $\ensuremath{\mathcal{L}}_\infty$ is surely winning for player $2$, and \item[(B)] $\sigma_R$ is positively winning from any support $L\subseteqK$ which is not in $\ensuremath{\mathcal{L}}_\infty$. \end{enumerate}
We start with proving (A). For winning surely from any support $L\in\ensuremath{\mathcal{L}}_\infty$, player $2$ uses the following belief strategy: if the current belief of player $2$ is $L\in\ensuremath{\mathcal{L}}_\infty$ then player $2$ chooses an action $j_L$ such that whatever signal $d$ player $2$ receives (with $\act(d)=j_L$), his next belief $\mathcal{B}_2(L,d)$ will be in $\ensuremath{\mathcal{L}}_\infty$ as well. By definition of $\Phi$ there always exists such an action, and this defines a belief strategy $\tau:L\to j_L$ for player $2$. When playing with this strategy, beliefs of player $2$ never intersect $T$, hence according to Proposition~\ref{prop:belief},
against any strategy $\sigma$ of player $1$, the play stays almost-surely in $K\backslashT$, hence it stays surely in $K\backslashT$.
Conversely, we prove (B). We fix the strategy for player $1$ which consists in playing randomly any action with equal probability, and
the game is a one-player game where only player $2$ has choices to make: it is enough to prove (B) in the special case where the set of actions of player $1$ is a singleton $I=\{i\}$. Let $\ensuremath{\mathcal{L}}_0=\parties{K\backslashT}\supseteq \ensuremath{\mathcal{L}}_1=\Phi(\ensuremath{\mathcal{L}}_0)\supseteq \ensuremath{\mathcal{L}}_2=\Phi(\ensuremath{\mathcal{L}}_1)\supseteq\ldots$ and let $\ensuremath{\mathcal{L}}_\infty$ be the limit of this sequence, the greatest fixpoint of $\Phi$. We prove that for any support $L\in\parties{K}$, if $L\not\in\ensuremath{\mathcal{L}}_\infty$ then: \begin{equation}\label{eq:postoprove} \text{$L$ is positively winning for player $1$}\enspace. \end{equation} If $L\capT \not=\emptyset$,~\eqref{eq:postoprove} is obvious. For dealing with the case where $L\in\parties{K\backslashT}$, we define for every $n\in\mathbb{N}$, $\ensuremath{\mathcal{K}}_n = \parties{K\backslashT} \backslash \ensuremath{\mathcal{L}}_n$, and we prove by induction on $n\in\mathbb{N}$ that for every $L\in\ensuremath{\mathcal{K}}_n$, for every initial distribution $\delta_L$ with support $L$ and for every strategy $\tau$, \begin{equation}\label{eq:topo} \prob{\tau}{\delta_L}{\exists m\in\mathbb{N}, K_m\inT, 2\leq m\leq n+1 }>0 \enspace. \end{equation} For $n=0$,~\eqref{eq:topo} is obvious because $\ensuremath{\mathcal{K}}_0=\emptyset$. Suppose that for some $n\in\mathbb{N}$, \eqref{eq:topo} holds for every $L\in\ensuremath{\mathcal{K}}_n$, and let $L\in\ensuremath{\mathcal{K}}_{n+1}$. If $L\in\ensuremath{\mathcal{K}}_{n}$ then by inductive hypothesis,~\eqref{eq:topo} holds. Otherwise by definition of $\ensuremath{\mathcal{K}}_{n+1}$, $L\in\ensuremath{\mathcal{L}}_{n}\backslash\Phi(\ensuremath{\mathcal{L}}_n)$, hence by definition of $\Phi$, whatever action $j$ is played by player $2$ at the first round, there exists a signal $d_j$ such that $\act(d_j)=j$ and $\mathcal{B}_2(L,d_j)\not \in \ensuremath{\mathcal{L}}_n$.
Let $\tau$ be a strategy for player $2$ and $j$ an action such that $\tau(\delta_L)(j)>0$. If $\mathcal{B}_2(L,d_j)\capT \not= \emptyset$ then according to Proposition~\ref{prop:belief}, $\prob{\tau}{\delta_L}{K_2\inT}>0$. Otherwise $\mathcal{B}_2(L,d_j)\in\parties{K\backslashT}\backslash\ensuremath{\mathcal{L}}_n=\ensuremath{\mathcal{K}}_n$ hence according to the inductive hypothesis $\prob{\tau[d_j]}{\mathcal{B}_2(L,d_j)}{\exists m\in\mathbb{N}, 2\leq m\leq n+1, K_m\inT}>0$. Since player $1$ has only one action, by definition of beliefs, for every state $l\in\mathcal{B}_2(L,d_j)$, $\prob{\tau}{\delta_L}{K_2 =l}>0$. Together with the previous equation, we obtain\\ $\prob{\tau}{\delta_L}{\exists m\in\mathbb{N}, 3\leq m\leq n+2, K_m\inT}>0$. This completes the inductive step.
The computation of the partition of supports into those positively winning for player $1$ and those surely winning for player $2$, together with a surely winning strategy for player $2$, amounts to computing the greatest fixpoint of $\Phi$. Since $\Phi$ is monotonic and each application of the operator can be computed in exponential time, the overall computation can be achieved in exponential time and space. \end{proof}
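The fixpoint computation just described follows a generic pattern which can be sketched as follows (an illustrative sketch only: the operator `phi` is a stand-in for $\Phi$ and is not part of the construction above).

```python
def greatest_fixpoint(phi, top):
    """Iterate a monotone, shrinking operator from the top element until it
    stabilises; the returned value is the greatest fixpoint of `phi`.

    Since each iterate is contained in the previous one and the universe is
    finite, the loop terminates after at most len(top) + 1 applications.
    """
    current = frozenset(top)
    while True:
        nxt = frozenset(phi(current))
        if nxt == current:
            return current
        current = nxt
```

Each application of $\Phi$ costs at most exponential time in $|K|$ and the decreasing chain has length at most $2^{|K|}$, which matches the exponential bound claimed above.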
Deciding whether an initial distribution is positively winning for player $1$ is easy because player $1$ has a very simple strategy for that: playing randomly.
Player $2$ does not have such a simple strategy for winning positively: he has to make hypotheses about the beliefs of player $1$,
as shown by the example depicted in Fig.~\ref{fig:blaise}.
\begin{figure}
\caption{ A game where player $2$ needs a lot of memory.}
\label{fig:blaise}
\end{figure}
\begin{theorem}[Deciding positive winning for player $2$]\label{theo:twoplayers2} Every initial distribution is either almost-surely winning for player $1$ or positively winning for player $2$.
Player $1$ has an almost-surely winning strategy which is a pessimistic-belief strategy. Player $2$ has a finite-memory strategy such that each memory state is a pair of a state and a pessimistic belief of player $1$.
The partition of supports between those almost-surely winning for player $1$
and those positively winning for player $2$ is computable in time doubly-exponential in $|K|$, together with the winning strategies for both players. \end{theorem}
The proof of Theorem~\ref{theo:twoplayers2} is based on the following intuition. The easiest way of winning for player $2$ is to reach with positive probability a state from where he wins surely. Hence player $1$ will try to prevent the play from reaching such surely winning states; in other words, player $1$ should prevent his pessimistic belief from containing such surely winning states. However, doing so, player $1$ may prevent the play from reaching a target state: it may hold that player $2$ has a strategy for winning positively under the hypothesis that the pessimistic beliefs of player $1$ never contain surely winning states. This adds new beliefs of player $1$ to the collection of pessimistic beliefs that player $1$ should avoid. And so on.
For formalizing these intuitions, we make use of \emph{$\ensuremath{\mathcal{L}}$-games}.
\begin{definition}[$\ensuremath{\mathcal{L}}$-games] Let $\ensuremath{\mathcal{L}}\subseteq\parties{K}$ be a collection of supports. The $\ensuremath{\mathcal{L}}$-game associated with $\ensuremath{\mathcal{L}}$ is the game with the same actions, transitions and signals as the original partial observation game; only the winning condition changes:
player $1$ loses if either the play never reaches a target state or if at some moment the pessimistic belief of player $1$ is in $\ensuremath{\mathcal{L}}$ and the play has never visited a target state previously. Formally given an initial distribution $\delta$ with support $L$ and two strategies $\sigma$ and $\tau$ the winning probability of player $1$ is: \[ \prob{\sigma,\tau}{\delta}{\exists n\geq 1, K_n\inT \text{ and }\forall m<n, \mathcal{B}_1(L,C_1,\ldots,C_m)\not\in\ensuremath{\mathcal{L}}}\enspace. \] \end{definition}
Actually $\ensuremath{\mathcal{L}}$-games are special cases of reachability games, as shown in the next proposition and its proof. \begin{proposition}\label{prop:LLgames}
In an $\ensuremath{\mathcal{L}}$-game, every support is either positively winning for player $2$
or almost-surely winning for player $1$. This partition can be computed in time doubly-exponential in $|K|$. Player $2$ has a positively winning strategy whose states are pairs of states and pessimistic beliefs of player $1$. Player $1$ has an almost-surely winning pessimistic-belief strategy. \end{proposition}
\begin{proof} We define a reachability game $G_\ensuremath{\mathcal{L}}$ associated with $\ensuremath{\mathcal{L}}$ in the following way. Make the synchronized product of the original game with pessimistic beliefs of player $1$: each state is a pair $(k,L)$ with $k\inK$ and $L\subseteqK\backslashT$. Transitions are inherited from the original game except that every state whose second component is in $\ensuremath{\mathcal{L}}$ is absorbing. The set of target states is the set of pairs whose first component is in $T$. According to Theorem~\ref{theo:twoplayers1}, in the reachability game $G_\ensuremath{\mathcal{L}}$ every state is either positively winning for player $2$ or almost-surely winning for player $1$. Moreover according to Theorem~\ref{theo:twoplayers1}, player $2$ has a positively winning belief strategy $\tau_\ensuremath{\mathcal{L}}$ in $G_\ensuremath{\mathcal{L}}$ from which it is easy to construct a positively winning strategy in the $\ensuremath{\mathcal{L}}$-game, with finite memory, whose memory states are sets of states of $G_\ensuremath{\mathcal{L}}$. Also according to Theorem~\ref{theo:twoplayers1}, player $1$ has an almost-surely winning pessimistic-belief strategy $\sigma_\ensuremath{\mathcal{L}}$ in $G_\ensuremath{\mathcal{L}}$. Notice that pessimistic beliefs of player $1$ in $G_\ensuremath{\mathcal{L}}$ cannot take all the possible values in $\parties{(K\backslashT)\times\parties{K\backslashT}}$ because intuitively player $1$ has perfect knowledge about his own pessimistic beliefs; formally such a pessimistic belief is always of the type $\cup_{l\in L} \{(l,L)\}$ for some $L\subseteqK\backslashT$. As a consequence, it is easy to extract from $\sigma_\ensuremath{\mathcal{L}}$ a pessimistic-belief strategy which is almost-surely winning in the $\ensuremath{\mathcal{L}}$-game. \end{proof}
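As a rough illustration of the size of the synchronized product used in this proof, the sketch below enumerates the pairs $(k,L)$; the function names are hypothetical, and only the state space is built, not the transitions nor the absorbing states.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, each returned as a frozenset."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def product_states(states, targets):
    """State space of the product game: pairs (k, L) where k is a state of
    the original game and L is a set of non-target states, read as a
    candidate pessimistic belief of player 1."""
    non_targets = set(states) - set(targets)
    return [(k, L) for k in states for L in powerset(non_targets)]
```

With $|K|$ states the product has $|K|\cdot 2^{|K \backslash T|}$ states, which is the source of the extra exponential in the doubly-exponential bound of the proposition.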
The heart of the proof of Theorem~\ref{theo:twoplayers2} is based on the two next propositions. \begin{proposition}\label{prop:upward} Let $\ensuremath{\mathcal{L}}$ be an upward-closed collection of supports. Suppose that every support in $\ensuremath{\mathcal{L}}$ is positively winning for player $2$ in the original game.
Then any support positively winning for player $2$ in the $\ensuremath{\mathcal{L}}$-game is positively winning in the original game as well.
If, apart from supports in $\ensuremath{\mathcal{L}}$, there are no supports positively winning for player $2$ in the $\ensuremath{\mathcal{L}}$-game, then every support $L\not\in\ensuremath{\mathcal{L}}$ is almost-surely winning for player $1$ in the original game. \end{proposition}
\begin{proof}
Let $\ensuremath{\mathcal{L}}_p$ be the set of supports positively winning for player $2$ in the $\ensuremath{\mathcal{L}}$-game that are not in $\ensuremath{\mathcal{L}}$.
We start with the case where $\ensuremath{\mathcal{L}}_p$ is not empty.
Let $\tau$ be a strategy for player $2$ which is positively winning in the original game from every support in $\ensuremath{\mathcal{L}}$.
Let $\tau'$ be a strategy for player $2$ which is positively winning in the $\ensuremath{\mathcal{L}}$-game from every support in $\ensuremath{\mathcal{L}}_p$.
Let $\tau''$ be the following strategy for player $2$. Player $2$ starts by playing any action uniformly at random.
At each step of the play, player $2$ throws a three-sided die to decide whether he should: \begin{itemize} \item keep playing randomly, \item pick randomly a support $L\in\ensuremath{\mathcal{L}}$, forget the past observations and switch definitively to strategy $\tau$ with initial support $L$, \item
pick randomly a support $L\in\ensuremath{\mathcal{L}}_p$, forget the past observations and switch definitively to strategy $\tau'$ with initial support $L$.
\end{itemize} Let us prove that $\tau''$ is positively winning in the original game, i.e. for every strategy $\sigma$ and initial distribution $\delta$ with support $L\in\ensuremath{\mathcal{L}}_p$, \begin{equation}\label{eq:taupp} \prob{\sigma,\tau''}{\delta}{\exists n\geq 1, K_n\inT}<1\enspace. \end{equation}
By definition of $\tau''$, there is a non-zero probability that the play is consistent with $\tau'$, i.e.
\begin{equation}\label{eq:taup} \prob{\sigma,\tau''}{\delta}{\forall n\geq 1, J_n = \tau'(L,D_1,\ldots,D_{n-1})}>0\enspace. \end{equation} Since $\tau'$ is positively winning in the $\ensuremath{\mathcal{L}}$-game, \begin{equation}\label{eq:tauppp} \prob{\sigma,\tau'}{\delta}{\exists n\geq 1, K_n\inT \text{ and }\forall m<n, \mathcal{B}_1(L,C_1,\ldots,C_m)\not\in\ensuremath{\mathcal{L}}}<1\enspace. \end{equation} If $\prob{\sigma,\tau'}{\delta}{\exists n\geq 1, K_n\inT}<1$ then together with~\eqref{eq:taup} this gives~\eqref{eq:taupp}.\\ If $\prob{\sigma,\tau'}{\delta}{\exists n\geq 1, K_n\inT}=1$ then according to~\eqref{eq:tauppp}, there exists $N\geq 1$ and a pessimistic belief $B\in\ensuremath{\mathcal{L}}$ such that: \[ \prob{\sigma,\tau'}{\delta}{\mathcal{B}_1(L,C_1,\ldots,C_N)=B\text{ and } \forall 1\leq m\leq N, K_m\not\inT}>0\enspace. \] Since every sequence of actions is played with positive probability by $\tau''$, then: \begin{equation}\label{eq:BBB} \forall l\in B, \prob{\sigma,\tau''}{\delta}{K_N=l\text{ and } \forall 1\leq m\leq N, K_m\not\inT}>0\enspace. \end{equation} By definition of $\tau''$, there is positive probability that $\tau''$ picks randomly the support $B\in\ensuremath{\mathcal{L}}$ and switches to $\tau$ with initial support $B$. By definition, $\tau$ is positively winning from $B$ hence there exists $l\in B$ such that: \[ \forall \sigma', \prob{\sigma',\tau}{l}{\forall n\geq 1, K_n \not\inT} >0\enspace, \] together with~\eqref{eq:BBB} it proves
$\prob{\sigma,\tau''}{\delta}{K_N=l\text{ and } \forall m\geq 1, K_m\not\inT}>0$
hence~\eqref{eq:taupp}.
Now we consider the case where $\ensuremath{\mathcal{L}}_p$ is empty. According to Proposition~\ref{prop:LLgames},
player $1$ has a pessimistic belief strategy $\sigma$ which is almost-surely winning in the $\ensuremath{\mathcal{L}}$-game from every support $L\not\in\ensuremath{\mathcal{L}}$. This ensures, for every $\delta$ whose support is $L\not\in\ensuremath{\mathcal{L}}$, for every strategy $\tau$, \begin{equation}\label{eq:pbelinfty} \prob{\sigma,\tau}{\delta}{\forall n\geq 1, \mathcal{B}_1^p(L,C_1,\ldots, C_n)\not\in\ensuremath{\mathcal{L}}\text{ or } \exists m\leq n, K_m\inT}=1\enspace. \end{equation}
We start by proving that for each $L\not\in\ensuremath{\mathcal{L}}$ there exists $N_L\in\mathbb{N}$ such that for every strategy $\tau$, for every distribution $\delta$ with support $L$, \begin{equation}\label{eq:unif3} \prob{\sigma,\tau}{\delta}{ \exists n\leq N_L, K_n\inT }\geq \frac{1}{2}\enspace. \end{equation} We suppose such an $N_L$ does not exist and seek a contradiction: suppose that for every $N$ there exist $\tau_{N}$ and $\delta_N$ such that~\eqref{eq:unif3} does not hold.
We can suppose that $\tau_N$ is deterministic, i.e. $\tau_N:D^* \to J$, and that $(\delta_N)_{N}$ converges to some distribution $\delta$, whose support is included in $L$. Using K\"onig's lemma, it is easy to build a strategy $\tau:D^*\to J$ such that for infinitely many $N$, \[ \prob{\sigma,\tau}{\delta_N}{
\exists n\leq N, K_n\inT }\leq \frac{1}{2}\enspace. \]
Taking the limit when $N\to\infty$, we get: \[ \prob{\sigma,\tau}{\delta}{ \exists n\geq 1, K_n\inT }\leq \frac{1}{2}\enspace, \] which contradicts the fact that $\sigma$ is almost-surely winning from $L$, since the support of $\delta$ is included in $L$. This proves the existence of $N_L$ such that~\eqref{eq:unif3} holds.
Now let $N=\max\{N_L\mid L\not\in\ensuremath{\mathcal{L}}\}$ and let $\sigma'$ be the pessimistic belief strategy for player $1$ similar to $\sigma$, except that every $N$ steps the memory is reset; formally: $\sigma'(L)(c_1,\ldots,c_n) = \sigma(\mathcal{B}_1^p(L,c_1,\ldots,c_{(n /N)*N}))(c_{(n / N)*N}\cdots c_n)$. Then whatever strategy player $2$ plays,
according to~\eqref{eq:pbelinfty}, as long as a target state is not reached, the memory of $\sigma'$ stays outside $\ensuremath{\mathcal{L}}$. Then according to~\eqref{eq:unif3}, when playing $\sigma'$, every $N$ steps there is probability at least $\frac{1}{2}$ of reaching a target state, conditioned on it not having been reached before, hence there is probability $0$ of never reaching a target state. Consequently, $\sigma'$ is almost-surely winning from any support $L\not\in\ensuremath{\mathcal{L}}$. \end{proof}
Now we can prove Theorem~\ref{theo:twoplayers2}. \begin{proof}[Proof of Theorem~\ref{theo:twoplayers2}] Let $\ensuremath{\mathcal{L}}_0,\ensuremath{\mathcal{L}}_1,\ldots$ be the sequence defined by $\ensuremath{\mathcal{L}}_0=\emptyset$ and for every $n\in\mathbb{N}, \ensuremath{\mathcal{L}}_{n+1}\subseteq\parties{K}$ is the set of supports positively winning for player $2$ in the $\ensuremath{\mathcal{L}}_n$-game. Then $\ensuremath{\mathcal{L}}_0\subseteq \ensuremath{\mathcal{L}}_1\subseteq \ldots$ and $\parties{\parties{K}}$ is finite hence there is a limit $\ensuremath{\mathcal{L}}_\infty$ to this sequence.
Every $\ensuremath{\mathcal{L}}_n$ is upward-closed hence according to Proposition~\ref{prop:upward}, every support in $\ensuremath{\mathcal{L}}_\infty$ is positively winning for player $2$. Moreover, according to Proposition~\ref{prop:LLgames}, player $2$ has a positively winning strategy with finite memory whose memory states are sets of pairs of a state and a pessimistic belief of player $1$.
By definition of $\ensuremath{\mathcal{L}}_\infty$, the only supports positively winning for player $2$ in the $\ensuremath{\mathcal{L}}_\infty$-game are those in $\ensuremath{\mathcal{L}}_\infty$. Hence according to Proposition~\ref{prop:upward} again, every support not in $\ensuremath{\mathcal{L}}_\infty$ is almost-surely winning for player $1$. Moreover, according to Proposition~\ref{prop:LLgames}, player $1$ has a pessimistic-belief almost-surely winning strategy.
The computation of $\ensuremath{\mathcal{L}}_\infty$ can be achieved in doubly-exponential time because, according to Proposition~\ref{prop:LLgames}, each step can be carried out in time doubly exponential in $|K|$, and since the sequence $(\ensuremath{\mathcal{L}}_n)_{n\in\mathbb{N}}$
is monotonic its length is at most exponential in $|K|$. \end{proof}
\section*{Conclusion} We considered stochastic reachability games with partial observation on both sides. We established a determinacy result: such a game is either almost-surely winning for player $1$, surely winning for player $2$, or positively winning for both players. Despite its simplicity, this result is not so easy to prove. We also gave algorithms for deciding in doubly-exponential time which of the three cases holds.
A natural question is whether these results extend to B{\"u}chi games as well. The answer is ``partially''.
On the one hand, it is possible to prove that a game is either almost-surely winning for player $1$ or positively winning for player $2$, and to decide in doubly-exponential time which of the two cases holds. This can be done by techniques almost identical to the ones in this paper.
On the other hand, it was shown recently that the question ``does player $1$ have a \emph{deterministic} strategy for winning positively a B{\"u}chi game?'' is undecidable~\cite{bbg}, even when player $1$ receives no signals and player $2$ has only one action. It is quite easy to see that ``deterministic'' can be removed from this question without changing its answer. Hence the only hope for solving positive winning for B{\"u}chi games is to consider subclasses of partial observation games where the undecidability result fails, an interesting question.
\end{document}
\begin{document}
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{remark}[theorem]{Remark} \newcommand{{{\mathrm h}}}{{{\mathrm h}}}
\numberwithin{equation}{section} \numberwithin{theorem}{section} \numberwithin{table}{section} \numberwithin{figure}{section}
\def\mathop{\sum\!\sum\!\sum}{\mathop{\sum\!\sum\!\sum}} \def\mathop{\sum\ldots \sum}{\mathop{\sum\ldots \sum}} \def\mathop{\int\ldots \int}{\mathop{\int\ldots \int}}
\def\gtrsim{\gtrsim} \def\lesssim{\lesssim}
\def\mathscr A{\mathscr A} \def\mathscr B{\mathscr B} \def\mathscr C{\mathscr C}
\def\Delta{\Delta} \def\mathscr E{\mathscr E} \def\mathscr F{\mathscr F} \def\mathscr G{\mathscr G} \def\mathscr H{\mathscr H} \def\mathscr I{\mathscr I} \def\mathscr J{\mathscr J} \def\mathscr K{\mathscr K} \def\mathscr L{\mathscr L} \def\mathscr M{\mathscr M} \def\mathscr N{\mathscr N} \def\mathscr O{\mathscr O} \def\mathscr P{\mathscr P} \def\mathscr Q{\mathscr Q} \def\mathscr R{\mathscr R} \def\mathscr S{\mathscr S} \def\mathscr U{\mathscr U} \def\mathscr T{\mathscr T} \def\mathscr V{\mathscr V} \def\mathscr W{\mathscr W} \def\mathscr X{\mathscr X} \def\mathscr Y{\mathscr Y} \def\mathscr Z{\mathscr Z}
\def\hbox{\rlap{$\sqcap$}$\sqcup$}{\hbox{\rlap{$\sqcap$}$\sqcup$}} \def\qed{\ifmmode\hbox{\rlap{$\sqcap$}$\sqcup$}\else{\unskip\nobreak\hfil \penalty50\hskip1em\null\nobreak\hfil\hbox{\rlap{$\sqcap$}$\sqcup$} \parfillskip=0pt\finalhyphendemerits=0\endgraf}\fi}
\newfont{\teneufm}{eufm10} \newfont{\seveneufm}{eufm7} \newfont{\fiveeufm}{eufm5}
\def\frak#1{{\fam\eufmfam\relax#1}}
\newcommand{{\boldsymbol{\lambda}}}{{\boldsymbol{\lambda}}} \newcommand{{\boldsymbol{\mu}}}{{\boldsymbol{\mu}}} \newcommand{{\boldsymbol{\xi}}}{{\boldsymbol{\xi}}} \newcommand{{\boldsymbol{\rho}}}{{\boldsymbol{\rho}}}
\newcommand{{\boldsymbol{\alpha}}}{{\boldsymbol{\alpha}}} \newcommand{{\boldsymbol{\beta}}}{{\boldsymbol{\beta}}} \newcommand{{\boldsymbol{\varphi}}}{{\boldsymbol{\varphi}}} \newcommand{{\boldsymbol{\psi}}}{{\boldsymbol{\psi}}} \newcommand{{\boldsymbol{\vartheta}}}{{\boldsymbol{\vartheta}}}
\defFrak K{Frak K} \defFrak{T}{Frak{T}}
\def{Frak A}{{Frak A}} \def{Frak B}{{Frak B}} \def\mathfrak{C}{\mathfrak{C}}
\def \balpha{\bm{\alpha}} \def \bbeta{\bm{\beta}} \def \bgamma{\bm{\gamma}} \def \blambda{\bm{\lambda}} \def \bchi{\bm{\chi}} \def \bphi{\bm{\varphi}} \def \bpsi{\bm{\psi}}
\def\eqref#1{(\ref{#1})}
\def\vec#1{\mathbf{#1}}
\def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}} \newcommand{\rmod}[1]{\: \mbox{mod} \: #1}
\def{\mathcal g}{{\mathcal g}}
\def\mathbf r{\mathbf r}
\def{\mathbf{\,e}}{{\mathbf{\,e}}} \def{\mathbf{\,e}}_p{{\mathbf{\,e}}_p} \def{\mathbf{\,e}}_m{{\mathbf{\,e}}_m}
\def{\mathrm{Tr}}{{\mathrm{Tr}}} \def{\mathrm{Nm}}{{\mathrm{Nm}}}
\def{\mathbf{S}}{{\mathbf{S}}}
\def{\mathrm{lcm}}{{\mathrm{lcm}}} \def{\mathrm{ord}}{{\mathrm{ord}}}
\def\({\left(} \def\){\right)} \def\fl#1{\left\lfloor#1\right\rfloor} \def\rf#1{\left\lceil#1\right\rceil}
\def\qquad \mbox{and} \qquad{\qquad \mbox{and} \qquad}
\hyphenation{re-pub-lished}
\mathsurround=1pt
\defb{b} \overfullrule=5pt
\def \F{{\mathbb F}} \def \K{{\mathbb K}} \def \N{{\mathbb N}} \def \Z{{\mathbb Z}} \def \Q{{\mathbb Q}} \def \R{{\mathbb R}} \def \C{{\mathbb C}} \def\F_p{\F_p} \def \fp{\F_p^*}
\def\cK_p(m,n){{\mathcal K}_p(m,n)} \def\cK_q(m,n){{\mathcal K}_q(m,n)} \def\cK_p(m,n){{\mathcal K}_p(m,n)} \def\cK_q(\bfxi; m,n){{\mathcal K}_q({\boldsymbol{\xi}}; m,n)} \def\cK_p(\bfxi; m,n){{\mathcal K}_p({\boldsymbol{\xi}}; m,n)} \def\cK_{\nu,p}(\bfxi; m,n){{\mathcal K}_{\nu,p}({\boldsymbol{\xi}}; m,n)} \def\cK_{\nu,q}(\bfxi; m,n){{\mathcal K}_{\nu,q}({\boldsymbol{\xi}}; m,n)}
\def\cK_p(m,n){{\mathcal K}_p(m,n)} \def\psi_p(m,n){\psi_p(m,n)}
\def \xbar{\overline x} \def{\mathbf{\,e}}{{\mathbf{\,e}}} \def{\mathbf{\,e}}_p{{\mathbf{\,e}}_p} \def{\mathbf{\,e}}_q{{\mathbf{\,e}}_q}
\title[Exponential sums with sparse polynomials]{Exponential sums with sparse polynomials over finite fields} \author{Igor Shparlinski and Qiang Wang}
\address{School of Mathematics and Statistics, The University of New South Wales, Sydney NSW 2052, Australia} \email{[email protected]}
\address{School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada} \email{[email protected]}
\begin{abstract} We obtain new bounds of exponential sums modulo a prime $p$ with sparse polynomials $a_0x^{n_0} + \cdots + a_{\nu}x^{n_\nu}$. The bounds depend on various greatest common divisors of exponents $n_0, \ldots, n_\nu$ and their differences. In particular, two new bounds for binomials are obtained, improving previous results in broad ranges of parameters.
\end{abstract}
\keywords{Exponential sums, sparse polynomials, binomials} \subjclass[2010]{11L07, 11T23}
\maketitle
\section{Introduction}
For a prime $p$, positive integers $n_0, \ldots, n_\nu$ and arbitrary integer coefficients $a_0, \ldots, a_\nu$, we consider the exponential sum \[ S_{n_0, \ldots, n_\nu}(a_0, \ldots, a_\nu) = \sum_{x=0}^{p-1} {\mathbf{\,e}}_p(a_0 x^{n_0} + \cdots + a_\nu x^{n_\nu}), \] where ${\mathbf{\,e}}_p(x) = e^{2\pi i x/p}$.
For convenience, we denote $$
M_{n_0, \ldots, n_\nu} = \max_{\substack{a_0, \ldots, a_\nu \in \Z\\ \gcd\(\prod_{i=0}^\nu a_i,p\)=1}} \left|S_{n_0, \ldots, n_\nu}(a_0, \ldots, a_\nu)\right|. $$
The classical Weil bound on exponential sums with general polynomials, see, for example,~\cite[Theorem~5.38]{LN}, implies that if at least one of the coefficients does not vanish modulo $p$ then \begin{equation} \label{eq:Weil}
| S_{n_0, \ldots, n_\nu}(a_0, \ldots, a_\nu) | \le \max\{n_0, \ldots, n_\nu\} p^{1/2}. \end{equation}
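For small primes the sums $S_{n_0, \ldots, n_\nu}$ can be evaluated directly, which gives a convenient numerical sanity check on bounds such as~\eqref{eq:Weil}; the brute-force evaluator below is an illustrative sketch, not part of the argument.

```python
import cmath

def sparse_sum(p, terms):
    """Evaluate S_{n_0,...,n_nu}(a_0,...,a_nu) = sum_{x=0}^{p-1} e_p(f(x))
    by brute force, where `terms` is a list of (a_i, n_i) pairs,
    f(x) is the sum of a_i * x^{n_i}, and e_p(t) = exp(2*pi*i*t/p)."""
    total = 0j
    for x in range(p):
        f = sum(a * pow(x, n, p) for a, n in terms) % p
        total += cmath.exp(2j * cmath.pi * f / p)
    return total
```

For instance, for $n_1 = p-1$ and $p \nmid a$ one gets $S_{1, p-1}(a,b) = 1 - {\mathbf{\,e}}_p(b)$, since $x^{p-1} = 1$ for every $x \neq 0$, so the sum stays bounded while the bound~\eqref{eq:Weil} is trivial.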
Clearly, the bound~\eqref{eq:Weil} becomes trivial if $ \max\{n_0, \ldots, n_\nu\} \ge p^{1/2}$. Thus, starting from the work of Akulinichev~\cite{Aku}, there has been a chain of consistent efforts to obtain nontrivial bounds beyond this restriction, see~\cite{CCP1,CCP2, CP05, CP10, Mac, MPSS, MSS} and references therein.
The case of binomial sums (that is, $\nu=1$) has always been of special interest~\cite{CP03, CP11, Kar, SV20}. In particular, bounds for binomial sums have played a key role in resolution of {\it Goresky--Klapper conjecture}, see~\cite{GK}, and its generalisation~\cite{ACMPPRT,BCPP1,CoKo,CP11,GKMS,CMPR} and in the closely related generalised {\it Lehmer conjecture\/}~\cite{BCPP2}.
In fact, in the case of binomial sums $S_{m, n}(a,b)$, one is especially interested in $m=1$, which is important for the above applications.
For example, for $m=1$ Akulinichev~\cite[Theorem~1]{Aku} has given the bound \begin{equation} \label{eq:Aku} M_{1, n} \le p/\sqrt{\gcd(n, p-1)}, \end{equation} and then, combining~\eqref{eq:Aku} with~\eqref{eq:Weil}, has shown that for $n \mid p-1$ the following uniform bound holds: \begin{equation} \label{eq:Aku 56} M_{1, n} \le p^{5/6}, \end{equation} see~\cite[Corollary]{Aku}. The bound~\eqref{eq:Aku 56} has been improved in~\cite[Corollary~3.3]{SV20} as follows: \begin{equation} \label{eq:SV 45} M_{1, n} = O\(p^{4/5}\), \end{equation} with an absolute implied constant.
In fact, the bound~\eqref{eq:SV 45} is based on an improvement of a bound of Karatsuba~\cite[Theorem~1]{Kar} $$ M_{1, n} \le (n-1)^{1/4} p^{3/4}, $$ which by~\cite[Theorem~3.2]{SV20} can be replaced with \begin{equation} \label{eq:ShpVol} M_{1, n} \le p^{3/4} + (n-1)^{1/3} p^{2/3}. \end{equation}
We also recall the following bound of Cochrane and Pinner~\cite[Theorem~1.3]{CP11} \begin{equation}\label{eq:CP11} M_{m, n} \le d + 2.292 e^{13/46} p^{89/92}, \end{equation} where $d = \gcd(n-m, p-1)$ and $e= \gcd(m, n, p-1)$.
The bounds for sums with more monomials usually involve more parameters and conditions
and are somewhat too technical to survey and compare here. However, we believe that our bounds expand the class of general sparse polynomials which admit nontrivial bounds, and they certainly do so in the case of binomial sums.
\section{Main Results}
We recall that the notations $A=O(B)$, $A\ll B$ and $B \gg A$ are each equivalent to the statement that the inequality $A\le c\,B$ holds with a constant $c>0$ which is absolute throughout this paper.
In what follows, it is convenient to introduce notation $A\lesssim B$ and $B\gtrsim A$ as equivalents of $A \le p^{o(1)} B$ as $p\to \infty$.
\begin{theorem} \label{thm:Bound de} For $\nu\ge 2$ and positive integers $n_0, n_1, \ldots, n_\nu$, we denote $$ d = \gcd(n_1 - n_0, \ldots, n_{\nu}-n_{0}, p-1), \qquad e = \gcd(d, n_0), $$ and $$ D =\min_{0\leq i \leq \nu} \max_{j\neq i} \gcd(n_j- n_i, p-1), \qquad \Gamma = (p-1)/D, \qquad \Delta = d/e. $$ Then $$ M_{n_0, \ldots, n_\nu} \lesssim
\begin{cases}
\Delta^{-1/4}\Gamma^{-1/4\nu}p^{7/6}, &\quad \text{if $p^{29/48} \le \Delta <p^{2/3}$,}\\
\Delta^{-21/52}\Gamma^{-1/4\nu}p^{131/104}, &\quad \text{if $p^{59/112} \le \Delta <p^{29/48}$,}\\
\Delta^{-7/20}\Gamma^{-1/4\nu}p^{197/160}, &\quad \text{if $p^{1/2} \le \Delta <p^{59/112}$,}\\
\Delta^{-31/80}\Gamma^{-1/4\nu}p^{5/4}, & \quad \text{if $ \Delta< p^{1/2}$.} \end{cases} $$ \end{theorem}
The above bound is nontrivial when $$ \max\{p^{29/48}, \Gamma^{-1/\nu} p^{2/3}\}< \Delta < p^{2/3}, $$ or $$ \max\{p^{59/112}, \Gamma^{-13/21\nu} p^{27/42}\} < \Delta < p^{29/48}, $$ or $$ \max\{p^{1/2}, \Gamma^{-5/7\nu} p^{37/56}\} < \Delta < p^{59/112}, $$ or $$ \Gamma^{-20/31\nu} p^{20/31} < \Delta < p^{1/2}. $$
When $\nu=1$, it is easy to see that $D=d$ and $$ e = \gcd( \gcd(n -m, p-1), m) = \gcd(n -m, m, p-1) = \gcd(m,n, p-1) $$ and thus for binomials the following result holds.
\begin{cor} \label{cor:Bound de} For positive integers $m$ and $n$, we denote $$ d= \gcd(n-m, p-1) \qquad \mbox{and} \qquad e = \gcd(m,n, p-1). $$ Then $$ M_{m, n} \lesssim
\begin{cases} e^{1/4}p^{11/12} , &\quad \text{if $p^{29/48} \le d/e<p^{2/3}$,}\\ e^{21/52}d^{-2/13}p^{105/104}, &\quad \text{if $p^{59/112} \le d/e <p^{29/48}$,}\\ e^{7/20}d^{-1/10}p^{157/160}, &\quad \text{if $p^{1/2} \le d/e <p^{59/112}$,}\\ e^{31/80}d^{-11/80}p, & \quad \text{if $ d/e< p^{1/2}$.} \end{cases} $$ \end{cor}
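The reduction $e = \gcd(\gcd(n-m,p-1),m) = \gcd(m,n,p-1)$ used above rests on the elementary identity $\gcd(n-m,m) = \gcd(n,m)$; the following numeric spot check is illustrative only, with arbitrary test triples.

```python
from math import gcd

# e computed as in the theorem above versus the closed form gcd(m, n, p-1);
# the triples (p, m, n) are arbitrary test values, not taken from the paper.
for p, m, n in [(13, 4, 10), (31, 6, 21), (101, 15, 40)]:
    d = gcd(n - m, p - 1)
    assert gcd(d, m) == gcd(m, gcd(n, p - 1))
```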
In the case when integers $m$ and $n$ satisfy $$ n \mid p-1 \qquad \mbox{and} \qquad \gcd(m,n)=1 $$ Akulinichev~\cite[Theorem~3]{Aku} has shown that \begin{equation}\label{eq:A65} M_{m, n} \le pn^{-1} + h^{1/2} p^{3/4} \end{equation} where $h = \gcd(m,p-1)$. Using the same idea as in the proof of Theorem~\ref{thm:Bound de} we obtain another bound in terms of $h$ and $n$, which improves previous bounds in some other cases.
\begin{theorem} \label{thm:Bound ell} Let $m$ and $n$ be positive integers such that $$ n \mid p-1 \qquad \mbox{and} \qquad \gcd(m, n)=1, $$ and let $h = \gcd(m,p-1)$. Then $$ M_{m, n} \lesssim
\begin{cases} h^{1/4}n^{-1/4} p^{11/12}, &\quad \text{if $p^{29/48} \le n <p^{2/3}$,}\\ h^{1/4}n^{-21/52}p^{105/104}, &\quad \text{if $p^{59/112} \le n<p^{29/48}$,}\\ h^{1/4}n^{-7/20}p^{157/160}, &\quad \text{if $p^{1/2} \le n<p^{59/112}$,}\\ h^{1/4}n^{-31/80}p, & \quad \text{if $ n< p^{1/2}$.} \end{cases} $$ \end{theorem}
It is difficult to give an exact region in which Theorem~\ref{thm:Bound de} improves the large variety of previous results on more general sparse polynomials. However, in Section~\ref{sec:comp} we compare Corollary~\ref{cor:Bound de} and Theorem~\ref{thm:Bound ell}
with the previous bounds~\eqref{eq:CP11} and~\eqref{eq:A65} for binomial sums, which also depend on various greatest common divisors (while the bounds~\eqref{eq:Weil} and~\eqref{eq:ShpVol} depend on the size of the exponents and so are incomparable with our results).
\section{Preparations} \label{sec:prelim}
For a positive integer $t \mid p-1$ we use $T_t$ to denote the number of solutions to the equation $$ u^t + v^t \equiv x^t + y^t \pmod p, \qquad 1 \le u,v,x,y < p. $$
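For small moduli, $T_t$ can be computed exactly, which is useful for checking the regimes of Lemma~\ref{lem:Mt} below; the counter here is an illustrative sketch which replaces the naive quartic enumeration by counting representations of each residue.

```python
from collections import Counter

def count_T(p, t):
    """Number of solutions of u^t + v^t = x^t + y^t (mod p), 1 <= u,v,x,y < p.

    T_t equals the sum over residues r of c(r)^2, where c(r) is the number
    of pairs (u, v) with u^t + v^t congruent to r modulo p."""
    powers = [pow(u, t, p) for u in range(1, p)]
    reps = Counter((a + b) % p for a in powers for b in powers)
    return sum(c * c for c in reps.values())
```

For $t = p-1$ every $t$-th power of a nonzero residue equals $1$, so all $(p-1)^4$ quadruples are solutions; for $t = 1$ one recovers the additive energy of $\F_p^*$, namely $(p-1)^2 + (p-1)(p-2)^2$.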
We estimate $T_t$ via a combination of recent results of Shkredov~\cite[Theorem~8]{Shkr1} and of Murphy, Rudnev, Shkredov, and Shteinikov~\cite[Theorems~1.4 and~6.5]{MRSS} (which in turn improve the previous result of Heath-Brown and Konyagin~\cite[Lemma~3]{HBK}) on the {\it additive energy of multiplicative subgroups\/}, that is, on the number of solutions $E(\Gamma)$ to the equations $$ u_1+u_2 = v_1+v_2, \qquad u_1,u_2 , v_1, v_2 \in \Gamma, $$ where $\Gamma$ is a multiplicative subgroup of $\F_p^*$.
We also note that unfortunately there is a misprint in the formulation of~\cite[Theorem~8]{Shkr1} (some terms ought to be commuted and the symbol `$\min$' ought to stay in a different place). More specifically, in the notation of~\cite{Shkr1}, the bound given by~\cite[Theorem~8]{Shkr1} is of the form \begin{align*} E(\Gamma) & \le \(\# \Gamma\)^3 p^{-1/3} \log \# \Gamma \\ &\qquad \quad + \min\left \{ p^{1/26} \(\# \Gamma\)^{31/13} \log^{8/13}\# \Gamma, \(\# \Gamma\)^{32/13} \log^{41/65} \# \Gamma\right \}, \end{align*} where $$ E(\Gamma) = \#\left\{\(u,v,x,y\) \in \Gamma^4:~u+v= x+y \right\} $$ is the additive energy of $\Gamma$.
Taking the above into account, we derive the following bound on $T_t$, where we have suppressed some logarithmic factors via the use of the symbol `$\lesssim$'.
\begin{lemma} \label{lem:Mt} We have $$ T_t \lesssim \begin{cases}
p^{8/3} t, &\quad \text{if $p^{29/48} \le (p-1)/t <p^{2/3}$,}\\
p^{63/26} t^{21/13}, &\quad \text{if $p^{59/112} \le (p-1)/t <p^{29/48}$,}\\
p^{101/40} t^{7/5}, &\quad \text{if $p^{1/2} \le (p-1)/t <p^{59/112}$,}\\
p^{49/20} t^{31/20}, & \quad \text{if $ (p-1)/t < p^{1/2}$.} \end{cases} $$ \end{lemma}
We also need to recall the following bound given by~\cite[Lemma~7]{CFKLLS}.
\begin{lemma} \label{lem:SprEq Zeros} For $\nu +1\ge 2$ elements $a_0, a_1, \ldots\,, a_\nu \in \F_p^*$ and arbitrary integers $t_0, t_1, \ldots , t_\nu<p$, the number of solutions $Q$ to the equation $$ \sum_{i=0}^\nu a_ix^{t_i} = 0, \qquad x \in \F_p^*, $$ with $t_0 = 0$, satisfies $$ Q \le 2 p^{1 - 1/\nu} D^{1/\nu} + O(p^{1 - 2/\nu} D^{2/\nu}), $$ where $$ D = \min_{0 \le i \le \nu} \max_{j \ne i} \gcd(t_j - t_i, p-1). $$ \end{lemma}
We now derive the following estimate for points on sparse curves.
\begin{lemma} \label{lem:SprCurve Zeros} For $\nu \ge 1$ elements $a_0, a_1, \ldots\,, a_\nu \in \F_p^*$ and arbitrary integers $t_0, t_1, \ldots , t_\nu<p$, the number of solutions $R$ to the equation $$ \sum_{i=0}^\nu a_ix^{t_i} = \sum_{i=0}^\nu a_iy^{t_i} , \qquad x,y \in \F_p^*, $$ with $t_0 = 0$, satisfies $$ R \ll p^{2 - 1/\nu} D^{1/\nu} , $$ where $$ D = \min_{0 \le i \le \nu} \max_{j \ne i} \gcd(t_j - t_i, p-1). $$ \end{lemma}
\begin{proof} We now write $x = yz$ and obtain \begin{align*} R
&
= \#\{(y,z) \in\(\F_p^*\)^2:~ a_0(z^{t_0}-1) + a_1 y^{t_1-t_0} (z^{t_1} -1) +\cdots \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + a_\nu y^{t_\nu-t_0} (z^{t_\nu} -1)= 0 \}. \end{align*} If $$ z^{t_0}-1=z^{t_1} -1= \ldots = z^{t_\nu} -1= 0 $$ then obviously $z^d =1$ where $$
d = \gcd(t_1 - t_0, \ldots, t_{\nu}-t_{0}, p-1) $$
and thus there are at most $d$ such values of $z$, for each of which there are at most $p$ values of $y$. Otherwise, for $\nu \ge 1$, by Lemma~\ref{lem:SprEq Zeros}, for each $z$ there are at most $O( p^{1 - 1/\nu} D^{1/\nu})$ values of $y$. Hence $R \ll dp + p^{2 - 1/\nu} D^{1/\nu}$. Because obviously $d \mid t_i - t_j$ for all $1\le i < j \le \nu$, we have $d \le D$ and thus $dp\le Dp \ll p^{2 - 1/\nu} D^{1/\nu}$, which implies the desired bound. \end{proof}
\section{Proof of Theorem~\ref{thm:Bound de}}
We fix some $a_0, \ldots, a_\nu$ with $$ \gcd\(a_0\ldots a_\nu, p\)=1 $$ and consider the sum $$ S^* = \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(x^{n_0}\(a_0 + a_1 x^{e_1 d} + \cdots + a_\nu x^{e_\nu d}\)\) $$ over the multiplicative group $\F_p^*$ of the finite field of $p$ elements. Clearly it is enough to estimate the sum $S^*$.
Let $n_i -n_{0} = e_i d$ for $1\leq i \leq \nu$, thus \[ a_0 + a_1 x^{n_1-n_0} + \cdots + a_\nu x^{n_\nu-n_0} = a_0 + a_1 x^{e_1 d} + \cdots + a_\nu x^{e_\nu d}. \]
Let \[ s = \frac{p-1}{d}. \]
Now, using that for any $y\in\F_p^*$ we have $y^{sd} = 1$, we derive \begin{align*} &S^* = \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(x^{n_0}\(a_0 + a_1 x^{e_1 d} + \cdots + a_\nu x^{e_\nu d}\)\) \\ & = \frac{1}{p-1} \sum_{y\in\F_p^*} \sum_{x\in\F_p^*}
{\mathbf{\,e}}_p\(\(xy^s\)^{n_0}\(a_0 + a_1 (xy^s)^{e_1 d} + \cdots + a_\nu (xy^s)^{e_\nu d}\)\) \\ & = \frac{1}{p-1} \sum_{x\in\F_p^*} \sum_{y\in\F_p^*} {\mathbf{\,e}}_p\(x^{n_0} y^{n_0 s} \(a_0 + a_1 x^{n_1-n_0} + \cdots + a_\nu x^{n_\nu-n_0} \)\) . \end{align*}
Let $$ N(\lambda) = \#\{x\in\F_p^*:~ a_0x^{n_0} + a_1 x^{n_1} + \cdots + a_\nu x^{n_\nu}= \lambda\}, $$ thus we can write $$ S^* = \frac{1}{p-1} \sum_{\lambda \in\F_p} N(\lambda) \sum_{y\in\F_p^*} {\mathbf{\,e}}_p(\lambda y^{n_0 s}) . $$
By the H{\"o}lder inequality \begin{equation} \label{eq:Hold}
|S^*|^4 \ll p^{-4} \left(\sum_{\lambda \in\F_p} N(\lambda) \right)^2 \sum_{\lambda \in\F_p} N(\lambda)^2
\sum_{\lambda \in\F_p} \left| \sum_{y\in\F_p^*} {\mathbf{\,e}}_p(\lambda y^{n_0 s})\right|^4. \end{equation}
By the orthogonality of exponential functions \begin{align*}
\sum_{\lambda \in\F_p} & \left| \sum_{y\in\F_p^*} {\mathbf{\,e}}_p(\lambda y^{n_0 s})\right|^4\\
& \quad = \sum_{\lambda \in\F_p} \sum_{u,v, y,z\in\F_p^*} {\mathbf{\,e}}_p\(\lambda \(u^{n_0 s} + v^{n_0 s} - y^{n_0 s} -z^{n_0 s}\)\)\\ & \quad =
\sum_{u,v, y,z\in\F_p^*} \sum_{\lambda \in\F_p} {\mathbf{\,e}}_p\(\lambda \(u^{n_0 s} + v^{n_0 s} - y^{n_0 s} -z^{n_0 s}\)\) = p T_{ n_0 s},
\end{align*} where $T_t$ is defined as in Section~\ref{sec:prelim}.
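The identity $\sum_{\lambda}\bigl|\sum_{y} {\mathbf{\,e}}_p(\lambda y^{t})\bigr|^4 = pT_t$ is exact, and can be confirmed numerically; the sketch below (our own check, using floating-point arithmetic with rounding) does so for $p=7$, $t=2$.

```python
import cmath

def fourth_moment_identity(p, t):
    """Return (sum_lam |sum_{y in F_p^*} e_p(lam * y^t)|^4, p * T_t)."""
    e = lambda k: cmath.exp(2j * cmath.pi * k / p)
    lhs = sum(abs(sum(e(lam * pow(y, t, p)) for y in range(1, p))) ** 4
              for lam in range(p))
    # T_t counts solutions of u^t + v^t = y^t + z^t with u, v, y, z in F_p^*.
    pows = [pow(y, t, p) for y in range(1, p)]
    T = sum(1 for u in pows for v in pows for y in pows for z in pows
            if (u + v - y - z) % p == 0)
    return round(lhs), p * T

lhs, rhs = fourth_moment_identity(7, 2)
```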
Let $$r=\gcd(n_0 s, p-1) = s \gcd\(n_0, (p-1)/s\) = s \gcd\(n_0,d\) = es. $$ Then we have $T_{n_0 s} = T_r$. Hence we rewrite~\eqref{eq:Hold} as \begin{equation} \label{eq:S NN2Mld} \(S^*\)^4 \ll p^{-3} \left(\sum_{\lambda \in\F_p} N(\lambda) \right)^2 \sum_{\lambda \in\F_p} N(\lambda)^2 T_r. \end{equation}
Trivially, we have \begin{equation} \label{eq:sum N} \sum_{\lambda \in\F_p} N(\lambda) = p-1. \end{equation}
Furthermore, we have \begin{align*} \sum_{\lambda \in\F_p} & N(\lambda)^2 \\ & = \#\{(x,y) \in\(\F_p^*\)^2:~ a_0x^{n_0} + \cdots + a_\nu x^{n_\nu}= a_0y^{n_0} + \cdots + a_\nu y^{n_\nu}\}. \end{align*}
Therefore, by Lemma~\ref{lem:SprCurve Zeros} \begin{equation} \label{eq:sum N2} \sum_{\lambda \in\F_p} N(\lambda)^2 \ll p^{2 - 1/\nu} D^{1/\nu}. \end{equation}
Note that $$ \frac{p-1}{r} = \frac{p-1}{es} = \frac{d}{e} . $$ Hence Lemma~\ref{lem:Mt} implies \begin{equation} \label{eq:Mr} T_r \lesssim \begin{cases}
(e/d) p^{11/3}, &\quad \text{if $p^{29/48} \le d/e <p^{2/3}$,}\\ (e/d)^{21/13} p^{105/26} , &\quad \text{if $p^{59/112} \le d/e<p^{29/48}$,}\\
(e/d)^{7/5} p^{157/40}, &\quad \text{if $p^{1/2} \le d/e <p^{59/112}$,}\\ (e/d)^{31/20} p^{4} , & \quad \text{if $ d/e < p^{1/2}$.} \end{cases} \end{equation}
Hence substituting the bounds~\eqref{eq:sum N}, \eqref{eq:sum N2} and~\eqref{eq:Mr} in~\eqref{eq:S NN2Mld} we conclude the proof.
\section{Proof of Theorem~\ref{thm:Bound ell}}
We fix some integers $a$ and $b$ with $\gcd(ab,p)=1$ and denote $$ S^*= \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(ax^m + bx^n\). $$
Denoting $s = (p-1)/n$ we obtain \begin{align*} S^* & = \frac{1}{p-1} \sum_{y\in\F_p^*} \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(a\(xy^s\)^m + b\(xy^s\)^n\)\\ & = \frac{1}{p-1} \sum_{x\in\F_p^*} \sum_{y\in\F_p^*} {\mathbf{\,e}}_p\(ax^m y^{ms} + bx^n\). \end{align*} Since $\gcd(m,n)=1$ we can replace $y^{ms}$ with just $y^s$, as $y \mapsto y^{ms}$ and $y\mapsto y^s$ both map $\F_p^*$ onto its subgroup of order $n$ and take each value exactly $s$ times. Hence $$ S^* = \frac{1}{p-1} \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(bx^n\) \sum_{y\in\F_p^*} {\mathbf{\,e}}_p\(ax^my^{s}\). $$ By the Cauchy inequality we have \begin{align*}
|S^*|^2 & \ll \frac{1}{p} \sum_{x\in\F_p^*} \left| \sum_{y\in\F_p^*} {\mathbf{\,e}}_p\(ax^my^{s}\)\right|^2 \\ & = \frac{1}{p} \sum_{y,z\in\F_p^*} \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(ax^m\(y^{s}-z^s\)\) = \frac{1}{p} \sum_{\lambda\in\F_p} R(\lambda) \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(a\lambda x^m\) , \end{align*} where $$ R(\lambda) = \#\{(y,z) \in\(\F_p^*\)^2:~ y^{s}-z^s= \lambda\}. $$ Clearly $$
\sum_{\lambda\in\F_p} R(\lambda) ^2 = T_s, $$ where $T_s$ is as in Lemma~\ref{lem:Mt}. Therefore, applying the Cauchy inequality one more time, and using the orthogonality of exponential functions, we obtain \begin{align*}
|S^*|^4 & \ll \frac{1}{p^2} T_s \sum_{\lambda\in\F_p} \left| \sum_{x\in\F_p^*} {\mathbf{\,e}}_p\(a\lambda x^m\) \right|^2 \\ & = \frac{1}{p^2} T_s \, p\,\#\left\{(u,v)\in \(\F_p^*\)^2:~ u^m = v^m\right \} = \frac{1}{p^2} T_s \, p \, h(p-1) \ll h T_s . \end{align*} Using Lemma~\ref{lem:Mt} we obtain the desired result.
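Two ingredients of this proof are easy to confirm numerically: that $y\mapsto y^{ms}$ and $y\mapsto y^s$ run through the same multiset of values when $\gcd(m,n)=1$, and that $\#\{(u,v)\in(\F_p^*)^2:~u^m=v^m\}=h(p-1)$ with $h=\gcd(m,p-1)$. A Python sketch of ours (illustrative only):

```python
from math import gcd

p, m, n = 13, 3, 4          # n | p - 1 and gcd(m, n) = 1
s = (p - 1) // n
h = gcd(m, p - 1)

# y -> y^{ms} and y -> y^s produce the same multiset of n-th-power values.
ms_vals = sorted(pow(y, m * s, p) for y in range(1, p))
s_vals = sorted(pow(y, s, p) for y in range(1, p))

# Each v in F_p^* admits exactly h solutions u of u^m = v^m, giving h(p-1) pairs.
pairs = sum(1 for u in range(1, p) for v in range(1, p)
            if pow(u, m, p) == pow(v, m, p))
```

For $p=13$, $m=3$, $n=4$ this gives $h=3$ and $36 = h(p-1)$ pairs.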
\section{Comparison} \label{sec:comp}
We first note that the bounds~\eqref{eq:Weil} and~\eqref{eq:ShpVol}, Corollary~\ref{cor:Bound de} and Theorem~\ref{thm:Bound ell} all hold and are nontrivial under different conditions and thus are not directly comparable.
Hence we only compare Corollary~\ref{cor:Bound de} with the bound~\eqref{eq:CP11} and then also Theorem~\ref{thm:Bound ell} with bounds~\eqref{eq:CP11} and~\eqref{eq:A65}. Note that when we compare Theorem~\ref{thm:Bound ell} with~\eqref{eq:CP11} we always have $e =1$ (since $\gcd(m,n)=1$) and we also assume that $d \le p^{89/92}$.
For example, the bound of Corollary~\ref{cor:Bound de} improves previously known bounds when $e$ is small but $d$ is sufficiently large.
Indeed, when $p^{29/48} \le d/e<p^{2/3}$, since $p^{11/12} < p^{89/92}$ and $e$ is sufficiently small compared to $p$,
our bound is clearly better than~\eqref{eq:CP11}.
When $ p^{59/112} \le d/e < p^{29/48}$, then using $d \ge e p^{59/112}$
we obtain $$e^{21/52}d^{-2/13}p^{105/104} \le e^{1/4} p^{13/14},
$$ which is better than~\eqref{eq:CP11} for a small $e$.
When $ p^{1/2} \le d/e < p^{59/112}$, then using $d \ge e p^{1/2}$ we have $$e^{7/20} d^{-1/10} p^{157/160} \le e^{1/4}p^{149/160}, $$ which is also better than~\eqref{eq:CP11} for a small $e$.
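The exponent bookkeeping in these comparisons can be verified with exact rational arithmetic; the following check (ours, illustrative only) covers the two substitutions above.

```python
from fractions import Fraction as F

# d >= e p^{59/112} turns e^{21/52} d^{-2/13} p^{105/104} into e^{1/4} p^{13/14}.
assert F(21, 52) - F(2, 13) == F(1, 4)                    # exponent of e
assert F(105, 104) - F(2, 13) * F(59, 112) == F(13, 14)   # exponent of p

# d >= e p^{1/2} turns e^{7/20} d^{-1/10} p^{157/160} into e^{1/4} p^{149/160}.
assert F(7, 20) - F(1, 10) == F(1, 4)                     # exponent of e
assert F(157, 160) - F(1, 10) * F(1, 2) == F(149, 160)    # exponent of p
```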
Finally for $d \le p^{1/2}$ and $e = O(1)$, our bound improves~\eqref{eq:CP11} for $d > p^{60/253}$.
To demonstrate the strength of our result, we let $d=p^{\delta+o(1)}$ and $e= p^{\gamma+o(1)}$. Then Figure~\ref{fig0} ($x$-axis is $\delta$ and $y$-axis is $\gamma$) shows the regions of $(\delta, \gamma)$ where the bounds of Corollary~\ref{cor:Bound de} are better than~\eqref{eq:CP11} and nontrivial.
\begin{figure}
\caption{Polygon of Corollary~\ref{cor:Bound de} improving~\eqref{eq:CP11} with $d=p^{\delta+o(1)}$ and $e= p^{\gamma +o(1)}$}
\label{fig0}
\end{figure}
It is easy to check that one can produce an infinite series of examples with parameters arbitrarily close to any point
inside the polygon of Figure~\ref{fig0}.
Next, we note that Theorem~\ref{thm:Bound ell} gives nontrivial bounds in any one of the following cases. \begin{itemize} \item[(i)] $\max\{p^{29/48}, hp^{-1/3}\} \le n < p^{2/3}$, \item[(ii)] $\max\{p^{59/112}, h^{13/21} p^{1/42}\} \le n< p^{29/48}$, \item[(iii)] $\max\{p^{1/2}, h^{5/7} p^{-3/56}\} \le n< p^{59/112}$, \item[(iv)] $h^{20/31} \le n < p^{1/2}$. \end{itemize}
In Case~(i), if also $hn > p^{2/3}$, then $h^{1/4} n^{-1/4} p^{11/12} < h^{1/2} p^{3/4}$ and we improve~\eqref{eq:A65}. If also $n > h p^{3/23}$, then $h^{1/4}n^{-1/4} p^{11/12} < p^{89/92}$ and we also improve~\eqref{eq:CP11}.
In Case~(ii), if also
$h n^{21/13} > p^{27/26}$, then $h^{1/4} n^{-21/52} p^{105/104} $ $< h^{1/2} p^{3/4}$ and we thus improve~\eqref{eq:A65}. If also $n > h^{13/21}p^{101/966}$, then we have $h^{1/4}n^{-21/52} p^{105/104} < p^{89/92}$ and thus we improve~\eqref{eq:CP11} as well.
In Case~(iii), if also
$h n^{7/5} > p^{37/40}$, then $h^{1/4} n^{-7/20} p^{157/160} < h^{1/2} p^{3/4}$ and we thus improve~\eqref{eq:A65}. If also $n > h^{5/7}p^{51/1288}$, then we have $h^{1/4}n^{-7/20} p^{157/160} < p^{89/92}$ and thus we improve~\eqref{eq:CP11} as well.
In Case~(iv), if also $hn^{31/20} > p$, then we improve~\eqref{eq:A65}
because $h^{1/4} n^{-31/80} p < h^{1/2} p^{3/4}$. If also $n > h^{20/31} p^{60/713}$ , then $h^{1/4} n^{-31/80} p < p^{89/92}$ and thus we
also improve~\eqref{eq:CP11}.
It is easy to see that in each of the Cases~(i), (ii), (iii) and (iv) the above ranges of $h$ and $n$ overlap, so in each of them we
sometimes improve both~\eqref{eq:CP11} and~\eqref{eq:A65} simultaneously.
To demonstrate our result, we let $h=p^{\varepsilon+o(1)}$ and $n= p^{\eta+o(1)}$. Then Figure~\ref{fig1} ($x$-axis is $\varepsilon$ and $y$-axis is $\eta$) shows the regions of $(\varepsilon, \eta)$ where the bounds of Theorem~\ref{thm:Bound ell} are better than both~\eqref{eq:CP11} and~\eqref{eq:A65}. We want to emphasize that $hn < p$ because $\gcd(m,n)=1$ implies that $h=\gcd(m, p-1) = \gcd\(m, (p-1)/n\) \leq (p-1)/n$.
\begin{figure}
\caption{Polygon of Theorem~\ref{thm:Bound ell} improving~\eqref{eq:CP11} and~\eqref{eq:A65} with $h= p^{\varepsilon+o(1)}$ and $n= p^{\eta+o(1)}$}
\label{fig1}
\end{figure}
As before, we note that it is easy to check that one can produce an infinite series of examples with parameters arbitrarily close to any point
inside the polygon of Figure~\ref{fig1}.
\section{Comments}
One can certainly use higher powers of the sums $S^*$ in the proofs of both Theorems~\ref{thm:Bound de} and~\ref{thm:Bound ell}.
For example, for any integer $\nu\ge 1$ we can generalise~\eqref{eq:S NN2Mld} as $$ \(S^*\)^{2\nu} \ll p^{-2\nu +1} \(\sum_{\lambda \in\F_p} N(\lambda) \)^{2\nu-2} \sum_{\lambda \in\F_p} N(\lambda)^2 T_{\nu,r}, $$ where $T_{\nu,r}$ is the number of solutions to the congruence $$ u_1^r + \ldots + u_\nu^r \equiv v_1^r + \ldots + v_\nu^r \pmod p, \quad 1 \le u_1, v_1, \ldots , u_\nu ,v_\nu < p. $$ Hence we need analogues of Lemma~\ref{lem:Mt} for $T_{\nu,r}$. Such nontrivial bounds on $T_{\nu,r}$ are indeed available, for example they can be derived from~\cite[Lemma~4.4]{MRSS} for $\nu =3$ and~\cite[Theorems~3 and~25]{Shkr2} for larger values of $\nu$.
However, with the present knowledge of such bounds it is not clear whether one can obtain better bounds for exponential sums.
\end{document} | arXiv |
\begin{document}
\begin{abstract} A monoid $S$ is {\em right coherent} if every finitely generated subact of every finitely presented right $S$-act is finitely presented. The corresponding notion for a ring $R$ states that every finitely generated submodule of every finitely presented right $R$-module is finitely presented. For monoids (and rings) right coherency is a finitary property which determines the existence of a {\em model companion} of the class of right $S$-acts (right $R$-modules) and hence that the class of existentially closed right $S$-acts (right $R$-modules) is axiomatisable.
Choo, Lam and Luft have shown that free rings are right (and left) coherent; the authors, together with Ru\v{s}kuc, have shown that groups, and free monoids, have the same properties. We demonstrate that free inverse monoids do not.
Any free inverse monoid contains as a submonoid the free left ample monoid, and indeed the free monoid, on the same set of generators. The main objective of the paper is to show that the free left ample monoid {\em is} right coherent. Furthermore, by making use of the same techniques we show that both free inverse and free left ample monoids satisfy $({\bf R})$, $({\bf r})$, $({\bf L})$ and $({\bf l})$, conditions arising from the axiomatisability of classes of right $S$-acts and of left $S$-acts. \end{abstract}
\title{Coherency, free inverse monoids and free left ample monoids} \section{Introduction}\label{sec:intro}
Let $S$ be a monoid. A {\em right $S$-act} is a set $A$ together with a map $A\times S\rightarrow A$ where $(a,s)\mapsto as$, such that for all $a\in A$ and $s,t\in S$ we have $a1=a$ and $a(st)=(as)t$. We also have the dual notion of a {\em left $S$-act}: where handedness for $S$-acts is not specified in this article we will always mean {\em right} $S$-acts. The study of $S$-acts is, effectively, that of representations of the monoid $S$ by mappings of sets.
Clearly $S$-acts over a monoid $S$ are the non-additive analogue of $R$-modules over a (unital) ring $R$. Although the study of the two notions diverges considerably once technicalities set in, one can often begin by forming analogous notions and asking analogous questions. In this article we study coherency for monoids. A monoid $S$ is said to be {\em right coherent} if every finitely generated subact of every finitely presented right $S$-act is finitely presented. {\em Left coherency} is defined dually; $S$ is {\em coherent} if it is both left and right coherent. These notions are analogous to those for a ring $R$ (where, of course, $S$-acts are replaced by $R$-modules). Coherency is a finitary condition for rings and monoids, much weaker than, for example, the condition that says all finitely generated $R$-modules or $S$-acts be finitely presented. As demonstrated by Eklof and Sabbagh \cite{eklof:1971}, it is intimately related to the model theory of $R$-modules. The corresponding results for $S$-acts appear in \cite{gould:1987}, the latter informed by the more general approach of Wheeler \cite{wheeler:1976}.
Chase \cite{chase:1960} gave internal conditions on a ring $R$ such that $R$ be right coherent. Correspondingly, a monoid $S$ is right coherent if and only if for any finitely generated right congruence $\rho$ on $S$, and for any $a,b\in S$, the right congruence \[r(a\rho)=\{ (u,v)\in S\times S:au\,\rho\, av\}\] is finitely generated, and the subact $(a\rho)S\cap (b\rho)S$ of the right $S$-act $S/\rho$ is finitely generated \cite{gould:1992}.
Choo, Lam and Luft \cite[Corollary 2.2 and remarks]{choo:1972} have shown that free rings are coherent. The first author proved that free commutative monoids are coherent \cite{gould:1992} and recently the authors, together with Ru\v{s}kuc \cite{ghr:2013}, have shown that free monoids are coherent. The class of coherent inverse monoids contains all semilattices of groups \cite{gould:1992} and so, in particular, all groups and all semilattices. Certainly then free groups are coherent. It therefore becomes natural to ask whether free inverse monoids are coherent, since, not only are they free objects in a variety of unary algebras, they are constructed from free groups acting on semilattices. In fact, as we show at the end of this article, coherency fails for free inverse monoids. This negative result motivates us to ask whether free left ample monoids, which may be thought of as the `positive' part of free inverse monoids, being constructed from free monoids rather than free groups, are coherent. We argue that free left ample monoids are right but not left coherent. The proof of right coherency is motivated by the methods in \cite{ghr:2013}; it is, however, rather more delicate.
For the convenience of the reader we describe in Section~\ref{sec:prelims} the construction of the free inverse $\mathrm{FIM}(\Omega)$, free left ample $\mathrm{FLA}(\Omega)$ and free ample $\mathrm{FAM}(\Omega)$ monoids on a set $\Omega$ from (prefix) closed subsets of the free group $\mathrm{FG}(\Omega)$. In Section~\ref{sec:fiRr} we focus on showing that the finitary properties ($\mathbf{R}$), ($\mathbf{r}$), ($\mathbf{L}$) and ($\mathbf{l}$) (defined therein) hold for $\mathrm{FIM}(\Omega)$, $\mathrm{FAM}(\Omega)$ and $\mathrm{FLA}(\Omega)$. These properties (which arise from considerations of first order axiomatisability of the class of strongly flat right and left $S$-acts - see \cite{gould:1987b}) are similar in flavour to coherency, although easier to handle. Our main work is in Section~\ref{sec:positive}, where we make a detailed analysis of finitely generated right congruences on $\mathrm{FLA}(\Omega)$. This hard work is then put to use in Section~\ref{sec:flacoherent} where we show that $\mathrm{FLA}(\Omega)$ is right coherent for any set $\Omega$. In Section~\ref{sec:constructions} we argue that the class of right coherent monoids is closed under retract. As a consequence of this, we have an alternative (albeit rather longer) proof that free monoids are coherent. Finally, in Section~\ref{sec:negative}, we show that $\mathrm{FIM}(\Omega)$, $\mathrm{FLA}(\Omega)$ and $\mathrm{FAM}(\Omega)$ are not coherent (for $|\Omega|\geq 2$).
\section{Preliminaries}\label{sec:prelims}
Let $\Omega$ be a non-empty set and let $\Omega^*$ and $\mathrm{FG}(\Omega)$ be the free monoid and the free group on $\Omega$, respectively. We follow standard practice and denote by $l(a)$ the length of a reduced word $a\in \mathrm{FG}(\Omega)$ and so, in particular, of $a\in\Omega^*$. The empty word will be denoted by $\epsilon$. Of course, $\Omega^*$ is a submonoid of the free group $\mbox{FG}(\Omega)$, and in the sequel, if $a \in \Omega^*$, by $a^{-1}$ we mean the inverse of $a$ in $\mbox{FG}(\Omega)$. For any $a\in \mathrm{FG}(\Omega)$ we denote by $a\kern -3 pt \downarrow$ the set of prefixes of the {\em reduced word} corresponding to $a$. Thus, if $a$ is reduced and $a=x_1\hdots x_n$ where $x_i\in \Omega\cup\Omega^{-1}$, then \[a\kern -3 pt \downarrow=\{ \epsilon, x_1,x_1x_2,\hdots, x_1x_2\hdots x_n\}.\]
The free inverse monoid on $\Omega$ is denoted by $\mathrm{FIM}(\Omega)$. The structure of $\mathrm{FIM}(\Omega)$ was determined by Munn \cite{munn:1974} and Scheiblich \cite{scheiblich:1972}; the description we give below follows that of \cite{scheiblich:1972}, of which further details may be found in \cite{ho}. However, we keep the equivalent characterisation via Munn trees constantly in mind.
Let $\mathcal{P}^f_c(\Omega)$ be the set of finite prefix closed subsets of $\mathrm{FG}(\Omega)$. If $A\in \mathcal{P}^f_c(\Omega)$, then - regarding elements of $A$ as reduced words - a {\em leaf} $a$ of $A$ is a word such that $a$ is not a proper prefix of any other word in $A$. Note that $\mathrm{FG}(\Omega)$ acts in the obvious way on its semilattice of subsets under union. Using this action we define \[ \mathrm{FIM}(\Omega)=\{ (A,a):A\in \mathcal{P}^f_c(\Omega), a\in A\}. \] With binary operation given by \[ (A,a)(B,b)=(A\cup aB, ab), \] $\mathrm{FIM}(\Omega)$ is the free inverse monoid generated by $\Omega$. The identity is $(\{ \epsilon\}, \epsilon)$, the inverse $(A,a)^{-1}$ of $(A,a)$ is $(a^{-1}A,a^{-1})$ and the natural injection of $\Omega\rightarrow \mathrm{FIM}(\Omega)$ is given by \[x\mapsto (\{ \epsilon,x\},x).\] We will make use of the fact that the free inverse monoid (in fact, every inverse monoid) possesses a left-right duality, by virtue of the anti-isomorphism given by $x\mapsto x^{-1}$. For future purposes we remark that if $a\in \mathrm{FG}(\Omega)$ is reduced, then \[a^{-1}(a\kern -3 pt \downarrow)=(a^{-1})\kern -3 pt \downarrow.\]
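The Scheiblich multiplication is straightforward to implement, which can be a useful way to experiment with the identities used below. A minimal Python sketch of ours (words are tuples of (letter, $\pm 1$) pairs; this is an illustration, not part of the paper's development):

```python
def reduce_word(w):
    """Free-group reduction of a word given as a tuple of (letter, +-1) pairs."""
    out = []
    for g in w:
        if out and out[-1][0] == g[0] and out[-1][1] == -g[1]:
            out.pop()          # cancel adjacent mutually inverse letters
        else:
            out.append(g)
    return tuple(out)

def prefixes(w):
    """The prefix set of a reduced word w."""
    return frozenset(w[:i] for i in range(len(w) + 1))

def mult(s, t):
    """(A, a)(B, b) = (A u aB, ab) in FIM(Omega)."""
    (A, a), (B, b) = s, t
    return (frozenset(A) | {reduce_word(a + w) for w in B}, reduce_word(a + b))

def inv(s):
    """(A, a)^{-1} = (a^{-1}A, a^{-1})."""
    A, a = s
    ainv = tuple((l, -e) for (l, e) in reversed(a))
    return (frozenset(reduce_word(ainv + w) for w in A), ainv)

def gen(letter):
    """Natural embedding of a generator: x -> ({eps, x}, x)."""
    w = ((letter, 1),)
    return (prefixes(w), w)

x, y = gen('x'), gen('y')
assert mult(mult(x, inv(x)), x) == x            # x x^{-1} x = x
assert mult(mult(x, y), inv(x)) == mult(x, mult(y, inv(x)))   # associativity sample
```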
Throughout this article we denote elements of $\mathrm{FIM}(\Omega)$ by boldface letters, elements of $\mathcal{P}^f_c(\Omega)$ by capital letters, and elements of $\mathrm{FG}(\Omega)$ by lowercase letters. We write a typical element of $\mathrm{FIM}(\Omega)$ as ${\bf a}=(A,a)$; $A$ and $a$ will always denote the first and second coordinate of ${\bf a}$, respectively. One exception to this convention is that we denote the identity $(\{ \epsilon\}, \epsilon)$ of $\mathrm{FIM}(\Omega)$ by $\bf{1}$.
The free left ample monoid $\mathrm{FLA}(\Omega)$ on $\Omega$ is the submonoid of $\mathrm{FIM}(\Omega)$ given by \[\mathrm{FLA}(\Omega)= \{ (A,a)\in \mathrm{FIM}(\Omega): A\subseteq \Omega^*\};\] note that, perforce, $a\in \Omega^*$, and we assume from the outset, when dealing with an element ${\bf a}=(A,a)\in \mathrm{FLA}(\Omega)$, that all the words in $A$ are reduced. We remark that FLA$(\Omega)$ also possesses a unary operation $(A,a)^+=(A,\epsilon)=(A,a)(A,a)^{-1}$ and (as a unary semigroup) is the free algebra on $\Omega$ in both the variety of left restriction semigroups and the quasi-varieties of (weakly) left ample semigroups \cite{fountain:1991,gomes:2000,cornock:2011}.
Similarly, the free ample monoid on $\Omega$ is the submonoid of $\mathrm{FIM}(\Omega)$ given by \[ \mathrm{FAM}(\Omega)=\{(A,a) \in \mathrm{FIM}(\Omega): a \in \Omega^*\}. \]
The free ample monoid possesses another unary operation defined by \[(A,a)^*=(A,a)^{-1} (A,a)=(a^{-1}A,\epsilon)\] and (as a biunary semigroup) is the free algebra on $\Omega$ in both the variety of restriction semigroups and the quasi-varieties of (weakly) ample semigroups. We remark here that the set of identities and quasi-identities defining the class of ample monoids is left-right dual, so that $\mathrm{FAM}(\Omega)$ consequently also has a left-right duality.
Note that $\mathrm{FLA}(\Omega)$ is built from $\Omega^*$ (see \cite{gould:2009}), but to simplify notation we make use of the embedding of $\Omega^*$ into $\mathrm{FG}(\Omega)$. However, when dealing with $\mathrm{FLA}(\Omega)$, we will use inverses only when we know that the resulting element lies in $\Omega^*$; for example, we will write $u^{-1}v$ only if $u$ is a prefix of $v$.
Let $S$ be a semigroup, let $H \subseteq S \times S$ and let us denote by $\rho$ the right congruence generated by $H$. Then it is well known that $s \mathrel{\rho} t$ if and only if there exists a so-called $H$-sequence \[ s=c_1t_1, d_1t_1=c_2t_2, \ldots, d_nt_n=t \] connecting $s$ to $t$ where $(c_i,d_i) \in H \cup H^{-1}$ for all $1\leq i\leq n$. If $n=0$, we interpret this sequence as being $s=t$.
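On a finite monoid, this description of the right congruence generated by $H$ translates directly into a union-find computation over the pairs $(cs, ds)$. The following Python sketch (our own illustration, not used in the sequel) computes the classes for a toy example:

```python
from itertools import product

def right_congruence(elements, op, H):
    """Smallest right congruence containing H on a finite monoid (elements, op),
    via union-find over all pairs (c*s, d*s) with (c, d) in H and s in S."""
    parent = {e: e for e in elements}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for (c, d), s in product(H, elements):
        parent[find(op(c, s))] = find(op(d, s))
    classes = {}
    for e in elements:
        classes.setdefault(find(e), set()).add(e)
    return sorted(map(frozenset, classes.values()), key=sorted)

# Toy example: (Z_6, multiplication mod 6) with H = {(2, 4)} glues
# together 2 and 4 and nothing else.
rho = right_congruence(range(6), lambda a, b: a * b % 6, {(2, 4)})
```

The equivalence closure computed by union-find suffices here because the set of pairs $(cs, ds)$ is itself stable under right multiplication.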
\section{$\mathrm{FIM}(\Omega),\mathrm{FAM}(\Omega)$ and $\mathrm{FLA}(\Omega)$ satisfy ($\bf{R}$), ($\bf{r}$), ($\bf{L}$) and ($\bf{l}$).}\label{sec:fiRr}
The conditions $(\bf{R})$ and $(\bf{r})$ \big(($\bf{L}$) and ($\bf{l}$)\big) are connected to the axiomatisability of certain classes of right (left) acts, and were introduced in \cite{gould:1987b}. Connected via axiomatisability to coherency, they are somewhat easier to handle. In this section we show that the free inverse, the free ample and the free left ample monoids satisfy these conditions. In doing so we develop some facility for handling products and factorisations in these monoids.
\begin{Def} Let $S$ be a monoid. We say that $S$ satisfies Condition $(\bf{r})$ if for every $s,t \in S$ the right ideal \[ {\bf r}^S(s,t)=\{u \in S:su=tu\} \] is finitely generated.
The monoid $S$ satisfies Condition $(\bf{R})$ if for every $s,t \in S$ the $S$-subact \[ {\bf R}^S(s,t)=\{(u,v):su=tv\} \] of the right $S$-act $S \times S$ is finitely generated. (Note that we allow $\emptyset$ to be an ideal and an $S$-act.)
The conditions $(\bf{L})$ and $(\bf{l})$ are defined dually. \end{Def}
\begin{lem}\label{First} Let $A$ be a prefix closed subset of $\mathrm{FG}(\Omega)$ and let $g,h \in A$. Then \[g((g^{-1}h)\kern -3 pt \downarrow) \subseteq A.\] \end{lem}
\begin{proof} Let $x$ be the longest common prefix of the reduced words $g,h \in \mathrm{FG}(\Omega)$. That is, $g=xg'$ and $h=xh'$ where $g',h'$ do not have a common nonempty prefix. Then \[ g((g^{-1}h)\kern -3 pt \downarrow)=xg'\big((g'^{-1} h')\kern -3 pt \downarrow\big) \subseteq (xg')\kern -3 pt \downarrow \cup (xh')\kern -3 pt \downarrow=g\kern -3 pt \downarrow \cup h\kern -3 pt \downarrow \subseteq A. \] \end{proof}
\begin{lem}\label{lem:crack}
Let $S$ denote either $\mathrm{FIM}(\Omega)$, $\mathrm{FLA}(\Omega)$ or $\mathrm{FAM}(\Omega)$, let ${\bf a}{\bf u}={\bf b}{\bf v}$ in $S$ and suppose that there exists a leaf $x \in A \cup aU=B \cup bV$ such that $x \not\in A \cup B$. Then there exist ${\bf u}',{\bf v}',{\bf z} \in S$ such that $\left| A \cup aU'\right|<\left|A \cup aU\right|$, \[ {\bf a}{\bf u}'={\bf b}{\bf v}' \text{ and } ({\bf u},{\bf v})=({\bf u}',{\bf v}'){\bf z}. \] Furthermore, if ${\bf u}={\bf v}$ then ${\bf u}'={\bf v}'$. \end{lem}
\begin{proof} Clearly ${\bf u}\neq \mathbf{1}$. If $S=\mathrm{FLA}(\Omega)$ then it is easy to see that $x=ak$ where $k\in \Omega^*\setminus \{ \epsilon\}$ is a leaf of $U$. The statement for $S$ now follows from Lemma \ref{Crack}. We therefore consider the case where $S=\mathrm{FIM}(\Omega)$ or $S=\mathrm{FAM}(\Omega)$.
We can suppose that the words $x,a,b,u$ and $v$ are reduced. Note that $x \not\in A \cup B$ implies that $x\in aU \cap bV$. We have that $x \not \in A$ so in particular, $x$ is not a prefix of $a$. In this case the last letter of $x$ does not cancel in the product $a^{-1}x$. Now if $a^{-1}x$ is not a leaf of $U$ then there exists $c \in \Omega \cup \Omega^{-1}$, different from the last letter of $x$, such that $a^{-1}xc \in U$. In this case $xc \in A \cup aU$, contradicting that $x$ is a leaf of $A \cup aU$. So we have shown that $a^{-1}x$ is a leaf of $U$. Similarly $b^{-1}x$ is a leaf of $V$. There are two different cases to consider.
Case (i): $x \neq au$. Let $z=(au)^{-1}x$. Note that $u,a^{-1}x \in U$, which is prefix closed, and $z=(au)^{-1}x=u^{-1}\cdot a^{-1}x$. Lemma \ref{First} then gives that $u(z\kern -3 pt \downarrow) \subseteq U$. Since $uz=a^{-1}x$, we have that \[ (U,u)=(U \setminus \{a^{-1}x\},u)(z\kern -3 pt \downarrow,1). \] Furthermore, $z=(au)^{-1}x=(bv)^{-1}x$, so similarly we have that \[ (V,v)=(V \setminus \{b^{-1}x\},v)(z\kern -3 pt \downarrow,1). \] Also, $A \cup a(U \setminus \{a^{-1}x\})=B \cup b(V \setminus \{b^{-1}x\})=(A \cup aU) \setminus \{x\}$, so we have that \[ (A,a)(U\setminus\{a^{-1}x\},u)=(B,b)(V \setminus\{b^{-1}x\},v). \] So if we let \[ (U',u')=(U \setminus \{a^{-1}x\},u), (V',v')=(V \setminus\{b^{-1}x\},v) \text{ and }{\bf z}=(z\kern -3 pt \downarrow,z), \] then (noticing that if $(U,u)=(V,v)$ we must have that $a=b$), the statements of the lemma are satisfied.
Case (ii): $x=au=bv$. Since $x \not\in A \cup B$, but $a,b \in A \cup B$, we have that $u,v\neq\epsilon$. In case $S=\mathrm{FAM}(\Omega)$, this implies that the last letters of $x,u$ and $v$ are the same, which we denote by $z\in \Omega$. Note that $uz^{-1},vz^{-1} \in \Omega^*$ in this case.
If $S=\mathrm{FIM}(\Omega)$ then let $z$ be the last letter of the reduced word $x$. If $z$ is not the last letter of $u$ then in the product $x=au$, all letters of $u$ must cancel, so $a=xu^{-1}$ where $xu^{-1}$ is reduced. However, this contradicts the fact that $x$ is a leaf, showing that the last letter of the reduced word $u$ is $z$. Similarly the last letter of the reduced word $v$ is $z$.
In both the cases $S=\mathrm{FAM}(\Omega)$ and $S=\mathrm{FIM}(\Omega)$, $u \neq uz^{-1}$ and $u \neq \epsilon$ imply that $uz^{-1} \in U \setminus \{u\}$, and similarly $vz^{-1} \in V \setminus \{v\}$. Now let ${\bf u}'=(U \setminus\{u\},uz^{-1}), {\bf v}'=(V \setminus \{v\},vz^{-1})$ and ${\bf z}=(\{1,z\},z)$. Then \[ (U,u)=(U',u')(\{1,z\},z),\, (V,v)=(V',v')(\{1,z\},z) \] and \[ (A,a)(U',u')=\big((A\cup aU) \setminus \{au\},au'\big)=(B,b)(V',v'). \] Furthermore, if ${\bf u}={\bf v}$ then clearly ${\bf u}'={\bf v}'$, which finishes the proof.
\end{proof}
\begin{prop}\label{lem:fi} The monoids $\mathrm{FIM}(\Omega)$, $\mathrm{FAM}(\Omega)$ and $\mathrm{FLA}(\Omega)$ satisfy $(\mathbf{R})$ and $(\mathbf{r})$. \end{prop} \begin{proof} Let $S$ denote $\mathrm{FIM}(\Omega)$, $\mathrm{FAM}(\Omega)$ or $\mathrm{FLA}(\Omega)$ and let ${\bf a},{\bf b}\in S$. We claim that the finite set \[ X=\{({\bf u},{\bf v}):{\bf a}{\bf u}={\bf b}{\bf v},\, A\cup aU = A \cup B\} \]
generates $\mathbf{R}({\bf a},{\bf b})$. Let $({\bf u},{\bf v}) \in \mathbf{R}({\bf a},{\bf b})$. We prove by induction on the size of $A\cup aU$ that $({\bf u},{\bf v}) \in X \cdot S$. Note that $A \cup aU=B \cup bV$ implies $A \cup B \subseteq A \cup aU$, so that if $\left| A \cup aU\right| \leq \left|A \cup B\right|$, then necessarily $A \cup aU=B \cup bV=A \cup B$, which shows that $({\bf u},{\bf v}) \in X$.
Suppose now that there exists an $n \geq \left|A \cup B\right|$ such that whenever $\left|A \cup aU\right|\leq n$ and $({\bf u},{\bf v}) \in \mathbf{R}({\bf a},{\bf b})$, then necessarily $({\bf u},{\bf v}) \in X \cdot S$. Now let $({\bf u},{\bf v}) \in \mathbf{R}({\bf a},{\bf b})$ be such that $\left| A \cup aU\right|=n+1$. Since $({\bf u},{\bf v}) \in \mathbf{R}({\bf a},{\bf b})$ we have that $A \cup B \subseteq A \cup aU=B \cup bV$, and since $n+1>\left|A \cup B\right|$, there exists $x \in A \cup aU=B \cup bV$ such that $x \not\in A\cup B$. This implies that $x \in aU \cap bV$. We can also assume that $x$ is a leaf of $A \cup aU=B \cup bV$. Then Lemma \ref{lem:crack} implies that there exist elements ${\bf u}',{\bf v}',{\bf z} \in S$ such that $\left| A \cup aU'\right|<\left|A \cup aU\right|$ and \[ ({\bf u}',{\bf v}') \in \mathbf{R}({\bf a},{\bf b}),\, ({\bf u},{\bf v})=({\bf u}',{\bf v}'){\bf z}. \] In this case the induction hypothesis implies that $({\bf u}',{\bf v}') \in X\cdot S$, so that $({\bf u},{\bf v}) \in X \cdot S$ as required.
For $(\mathbf{r})$, the proof is entirely similar. We show that the set \[ Y=\{{\bf u} \in S: {\bf a}{\bf u}={\bf b}{\bf u},\, A\cup aU=A \cup B\} \] generates $\mathbf{r}({\bf a},{\bf b})$, making particular use of the final statement of Lemma \ref{lem:crack}. \end{proof}
The free inverse monoid and the free ample monoid are left-right dual, so from the dual of Lemma \ref{lem:crack} they satisfy $(\bf{L})$ and $(\bf{l})$. To show that $\mathrm{FLA}(\Omega)$ satisfies $(\bf{L})$ and $(\bf{l})$, we first prove a result corresponding to Lemma~\ref{lem:crack}.
\begin{Lem}\label{lem:crackleft}
Let ${\bf u}{\bf a}={\bf v}{\bf b}$ in $\mathrm{FLA}(\Omega)$ and suppose that there exists $x \in U \cup uA=V \cup vB$ such that $x$ is either a leaf, or $x=\epsilon$ and every element of $(U \cup uA) \setminus \{\epsilon\}$ has a common nonempty prefix (this corresponds to a tree having a root with degree $1$). Furthermore, suppose that $x \not\in uA \cup vB$. Then there exist ${\bf u}',{\bf v}',{\bf z} \in \mathrm{FLA}(\Omega)$ such that $\left| U' \cup u'A\right|<\left|U \cup uA\right|$, \[ {\bf u}'{\bf a}={\bf v}'{\bf b} \text{ and } ({\bf u},{\bf v})={\bf z}({\bf u}',{\bf v}'). \] Furthermore, if ${\bf u}={\bf v}$ then ${\bf u}'={\bf v}'$. \end{Lem}
\begin{proof} Note that as $x\notin uA \cup vB$, $x\neq u$ and $x\neq v$. If $x$ is a leaf, then let ${\bf z}=(x\kern -3 pt \downarrow,1), U'=U \setminus \{x\}, u'=u, V'=V \setminus \{x\}, v'=v$. In this case \[ {\bf u}'{\bf a}=\big((U \cup uA) \setminus \{x\},ua\big)=\big((V \cup vB)\setminus \{x\},vb\big)={\bf v}'{\bf b}, {\bf z}{\bf u}'={\bf u}, {\bf z}{\bf v}'={\bf v}. \] Furthermore, if ${\bf u}={\bf v}$ then of course ${\bf u}'={\bf v}'$.
If $x=\epsilon$ then $x \not \in uA \cup vB$ implies $u,v \neq \epsilon$. Let $z$ be the common first letter of elements of $(U\cup uA) \setminus \{\epsilon\}$ and let ${\bf z}=(\{\epsilon,z\},z)$. Then if we set $(U',u')=(z^{-1} (U \setminus \{\epsilon\}),z^{-1}u)$ and $(V',v')=(z^{-1} (V \setminus \{\epsilon\}),z^{-1}v)$ then \[ U' \cup u'A=z^{-1} ( U \setminus \{\epsilon\}) \cup z^{-1}uA=z^{-1}\big((U \cup uA) \setminus \{\epsilon\}\big)=z^{-1}\big((V \cup vB) \setminus \{\epsilon\}\big)=z^{-1} ( V \setminus \{\epsilon\}) \cup z^{-1}vB=V' \cup v'B, \] which shows that ${\bf u}'{\bf a}={\bf v}'{\bf b}$. Also we have \[ Z \cup zU'=\{\epsilon,z\} \cup (U\setminus \{\epsilon\})=U, \] because $z \in U$ (being the first letter of $u$). As a consequence ${\bf z}{\bf u}'={\bf u}$ and similarly ${\bf z}{\bf v}'={\bf v}$ also. Lastly, if ${\bf u}={\bf v}$ then clearly ${\bf u}'={\bf v}'$ which finishes the proof. \end{proof}
\begin{prop} The free inverse monoid $\mathrm{FIM}(\Omega)$, the free ample monoid $\mathrm{FAM}(\Omega)$ and the free left ample monoid $\mathrm{FLA}(\Omega)$ satisfy $(\bf{L})$ and $(\bf{l})$. \end{prop}
\begin{proof} We have already mentioned that $\mathrm{FIM}(\Omega)$ and $\mathrm{FAM}(\Omega)$ must satisfy $(\bf{L})$ and $(\bf{l})$. For $\mathrm{FLA}(\Omega)$, let ${\bf a},{\bf b} \in \mathrm{FLA}(\Omega)$. Then either $\mathbf{L}({\bf a},{\bf b})$ is empty or one of $a$ and $b$ is a suffix of the other. Without loss of generality we can assume that $b=ya$ for some $y \in \Omega^*$. In this case we claim that the finite set \[ X=\{({\bf u},{\bf v}): {\bf u}{\bf a}={\bf v}{\bf b}, U \cup uA = B \cup yA\} \]
generates $\bf{L}({\bf a},{\bf b})$. Note that if $({\bf u},{\bf v}) \in \bf{L}({\bf a},{\bf b})$ then necessarily $u=vy$ so from the equation $U\cup vyA=V \cup vB$ we conclude that $v (B \cup yA) \subseteq U\cup uA$. As a consequence we see that if $\left| U \cup uA\right|\leq \left| B \cup yA\right|$ then $U \cup uA=v(B \cup yA)$, which implies that $v=\epsilon$ so that $U \cup uA=B \cup yA$ and $({\bf u},{\bf v}) \in X$.
Suppose now that there exists an $n \geq \left|B \cup yA\right|$ such that whenever $\left|U\cup uA\right|\leq n$ and $({\bf u},{\bf v}) \in \bf{L}({\bf a},{\bf b})$, then necessarily $({\bf u},{\bf v}) \in \mathrm{FLA}(\Omega) \cdot X$. Now let $({\bf u},{\bf v}) \in \bf{L}({\bf a},{\bf b})$ be such that $\left|U \cup uA\right|=n+1$. Note that $ua=vya$ implies that $u=vy$. Then $U\cup vyA=V \cup vB$, so $v(B \cup yA) \subseteq U \cup vyA$. However, $\left|v(B \cup yA)\right|=\left|B \cup yA\right| < \left| U \cup vyA\right|$, so $U \cup uA \neq v(B\cup yA)=uA \cup vB$.
If there exists a leaf of $U \cup uA$ which is not contained in $uA \cup vB$ then let $x$ be one such leaf. However, if there is no such leaf then that means that every leaf of $U \cup uA$ is contained in $v(B \cup yA)$. If $v=\epsilon$ then as $y\in B$, $v(B \cup yA)$ is prefix closed so $U \cup uA=v(B \cup yA)=uA \cup vB$, which is a contradiction. So $v \neq \epsilon$, and we have that all leaves of $U \cup uA$ have $v$ as a prefix. This can only happen if $U \cup uA=v\kern -3 pt \downarrow \cup vC$ for some prefix closed set $C$, which shows that every element of $(U \cup uA) \setminus \{\epsilon\}$ has the same first letter as $v$. In this case let $x=\epsilon$. Then Lemma \ref{lem:crackleft} implies that there exists ${\bf u}',{\bf v}',{\bf z} \in \mathrm{FLA}(\Omega)$ such that $\left|U' \cup u'A\right|<\left|U \cup uA\right|$, \[ ({\bf u}',{\bf v}') \in \bf{L}({\bf a},{\bf b}) \text{ and } ({\bf u},{\bf v})={\bf z}({\bf u}',{\bf v}'). \] In this case the induction hypothesis implies that $({\bf u}',{\bf v}') \in \mathrm{FLA}(\Omega) \cdot X$ and so we have $({\bf u},{\bf v}) \in \mathrm{FLA}(\Omega) \cdot X$ as required.
For $(\bf{l})$, the proof is entirely similar, namely the finite set \[ Y=\{{\bf u} \in \mathrm{FLA}(\Omega): {\bf u}{\bf a}={\bf u}{\bf b}, U \cup uA=B \cup yA\} \] generates $\bf{l}({\bf a},{\bf b})$ if $b=ya$.
\end{proof}
\section{$\mathrm{FLA}(\Omega)$: analysis of $H$-sequences}\label{sec:positive}
In order to show that $\mathrm{FLA}(\Omega)$ is right coherent, we make a careful examination of $H$-sequences for finite sets $H\subseteq \mathrm{FLA}(\Omega) \times \mathrm{FLA}(\Omega)$.
\begin{Def} Let ${\bf a}\in \mathrm{FLA}(\Omega)$. \begin{enumerate} \item[(i)]
The \emph{weight} $w({\bf a})$ of ${\bf a}$ is defined by $w({\bf a})=\left|A\right|-1 + l(a)$. \item[(ii)] The \emph{diameter} $d({\bf a})$ of ${\bf a} $ is defined by $d({\bf a})=\text{max }\{l(u):u\in A\}$.
\end{enumerate} \end{Def}
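To make the definition concrete, here is a small worked computation; the element chosen is our own illustration, not one used later in the argument.

```latex
% Illustrative example (our own choice of element).
\[
\Omega=\{x,y\},\qquad {\bf a}=(A,a)\ \text{ with }\ A=\{\epsilon,\,x,\,xy,\,y\},\quad a=xy.
\]
\[
w({\bf a})=\left|A\right|-1+l(a)=4-1+2=5,\qquad
d({\bf a})=\mathrm{max}\,\{l(u):u\in A\}=l(xy)=2.
\]
```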
The following lemma states the most important basic properties of the weight function.
\begin{Lem}\label{Basic} Let ${\bf a},{\bf b},{\bf c},{\bf a}_1,\ldots,{\bf a}_n \in \mathrm{FLA}(\Omega)$. Then \begin{enumerate} \item[\rm{(W0)}] $w({\bf a})=0$ if and only if ${\bf a}=\bf{1}$; \item[\rm{(W1)}] $w({\bf a}),w({\bf b})\leq w({\bf a}{\bf b})\leq w({\bf a})+w({\bf b})$; \item[\rm{(W2)}] $w({\bf a}{\bf b})=w({\bf a})$ if and only if ${\bf a} {\bf b}= {\bf a}$, and this is equivalent to ${\bf b}\in E(\mathrm{FLA}(\Omega)) $ with ${\bf a}\leq_{\mathcal{L}} {\bf b}$. \begin{comment} \item[\rm{(W3)}] if $w({\bf a}_1\ldots {\bf a}_{i-1}) < w({\bf a}_1\ldots {\bf a}_i)$ for every $1<i\leq n$; then \[w({\bf a}_1\ldots{\bf a}_j)-w({\bf a}_1\ldots {\bf a}_i)\geq j-i\] for every $1\leq i<j\leq n$; \item[\rm{(W4)}] if $w({\bf a}{\bf b})-w({\bf a}) > w({\bf c})$, then $w({\bf a}{\bf c})<w({\bf a}{\bf b}{\bf c})$. \end{comment} \end{enumerate} \end{Lem} \begin{proof} The proof of (W0) is clear.
For (W1), let ${\bf a}=(A,a)$ and ${\bf b}=(B,b)$, so that ${\bf a} {\bf b} =(A\cup aB,ab)$. Then
\[w({\bf a} {\bf b})=|A\cup aB|-1+l(ab)\]
and as $|A\cup aB|\geq |A|, |aB|$ where $|aB|=|B|$ and $l(ab)\geq l(a),l(b)$, we have $w({\bf a}),w({\bf b})\leq w({\bf a}{\bf b})$.
On the other hand, the second inequality for (W1) follows from the observation that as $a\in A\cap aB$ we have \[|A\cup aB|=|A|+|aB\setminus A|\leq |A|+|aB|-1
=|A|+|B|-1.\]
Clearly $|A\cup aB|\geq |A|$ and $l(ab)\geq l(a)$, so that if $w({\bf a}{\bf b})=w({\bf a})$, we must have
$|A\cup aB|=|A|$ and $l(b)=0$. Hence $b=\epsilon$, $aB\subseteq A$ and so ${\bf a}{\bf b}={\bf a}$.
If ${\bf a}{\bf b}={\bf a}$ (equivalently, $w({\bf a}{\bf b})=w({\bf a})$), then we have shown that ${\bf b}\in E(\mathrm{FLA}(\Omega))$ and clearly ${\bf a}\leq_{\mathcal{L}} {\bf b}$. The converse is clear. Thus (W2) holds. \begin{comment} For (W3) observe that \[w({\bf a}_1\ldots{\bf a}_j)-w({\bf a}_1\ldots {\bf a}_i)=\Sigma_{i<k\leq j} w({\bf a}_1\ldots{\bf a}_k)-w({\bf a}_1\ldots {\bf a}_{k-1})\geq \Sigma_{i<k\leq j} 1=j-i.\]
Finally, using (W1) we notice that \[w({\bf a}{\bf b}{\bf c})\geq w({\bf a}{\bf b}) >w({\bf a})+w({\bf c}) \geq w({\bf a}{\bf c}),\] so that (W4) is true. \end{comment} \end{proof}
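As a sanity check, both bounds in (W1) can be attained; the elements below are our own illustrative choices.

```latex
% Both bounds in (W1) are attained (illustrative elements of our own choosing).
% Upper bound:
\[
{\bf a}=(\{\epsilon,x\},x),\ {\bf b}=(\{\epsilon,y\},y):\qquad
{\bf a}{\bf b}=(\{\epsilon,x,xy\},xy),\qquad
w({\bf a}{\bf b})=4=w({\bf a})+w({\bf b}).
\]
% Lower bound, as in (W2), with {\bf b} idempotent:
\[
{\bf a}=(\{\epsilon,x,xy\},x),\ {\bf b}=(\{\epsilon,y\},\epsilon):\qquad
{\bf a}{\bf b}=(\{\epsilon,x,xy\},x)={\bf a},\qquad w({\bf a}{\bf b})=w({\bf a})=3.
\]
```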
The proof of our main result depends heavily on the fact that certain factorisations can be carried through sequences. The following two lemmas constitute the foundations of this process.
\begin{Lem} \label{Crack} Let ${\bf d}{\bf z}={\bf b}{\bf v}$, ${\bf z} \neq {\bf 1}$ and let $x$ be a leaf of $Z$ such that $dx \not\in B$. Then there exist elements ${\bf z}',{\bf x},{\bf v}' \in \mathrm{FLA}(\Omega)$ such that \[ Z'=Z \setminus \{x\}, w({\bf z}')<w({\bf z}),\,{\bf z}={\bf z}'{\bf x},\,{\bf v}={\bf v}'{\bf x},\, {\bf d}{\bf z}'={\bf b}{\bf v}' \] and \begin{enumerate}
\item if $x\neq z$ and $dx \not\in D$ then ${\bf x}=(\tilde{x}\kern -3 pt \downarrow \cup \tilde{z}\kern -3 pt \downarrow,\tilde{z}), {\bf v}'=(V \setminus \{b^{-1}dx\},v\tilde{z}^{-1})$ where $\tilde{x},\tilde{z} \in \Omega^*$ have no common non-empty prefix, $x=z'\tilde{x}, z=z'\tilde{z}$ (so $dx=dz'\tilde{x}=bv'\tilde{x}$),
\item if $x=z$ (then necessarily $x\neq \epsilon$) and $dx \not\in D$ then ${\bf z}'=(Z',zx'^{-1}), {\bf x}=(\{\epsilon,x'\},x')$ and ${\bf v}'=(V\setminus \{v\},vx'^{-1})$, where $x'$ is the last letter of $x$,
\item if $x=z$ (then necessarily $x\neq \epsilon$) and $dx \in D$ then ${\bf z}'=(Z',zx'^{-1}), {\bf x}=(\{\epsilon,x'\},x')$ and ${\bf v}'=(V,vx'^{-1})$, where $x'$ is the last letter of $x$,
\item if $x\neq z$ and $dx \in D$ then ${\bf z}'=(Z',z'), {\bf x}=(\tilde{x}\kern -3 pt \downarrow \cup \tilde{z}\kern -3 pt \downarrow,\tilde{z}), {\bf v}'=(V,v\tilde{z}^{-1})$ where $\tilde{x},\tilde{z} \in \Omega^*$ have no common non-empty prefix.
\end{enumerate}
Furthermore, the following are true: \begin{enumerate}
\item[(A)] in cases $(1)$ and $(2)$ we have $\left|D \cup dZ'\right|<\left|D \cup dZ\right|$ and that if ${\bf z}={\bf v}$ then ${\bf z}'={\bf v}'$, \item[(B)] in cases $(1),(2)$ and $(3)$ we have $w({\bf b}{\bf v}')=w({\bf d}{\bf z}')<w({\bf d}{\bf z})=w({\bf b}{\bf v})$. \end{enumerate} \end{Lem}
\begin{proof} We investigate the four cases separately; Cases (i)--(iv) below correspond to cases $(1)$--$(4)$ of the statement.
Case (i): $dx \not\in D$ and $x \neq z$. Let $z'$ be the greatest common prefix of $z$ and $x$, that is, there exist $\tilde{z}$ and $\tilde{x}$ such that $z=z'\tilde{z}$ and $x=z'\tilde{x}$ and $\tilde{z}$ and $\tilde{x}$ have no common non-empty prefix. It is important to note that $\tilde{x}\neq \epsilon$, for $x$ is a leaf different from $z$. Now let \[ {\bf z}'=(Z \setminus \{x\},z'), {\bf x}=(\tilde{x}\kern -3 pt \downarrow \cup \tilde{z}\kern -3 pt \downarrow,\tilde{z}). \] Then it is easy to check that ${\bf z}',{\bf x} \in \mbox{FLA}(\Omega)$ and ${\bf z}={\bf z}'{\bf x}$. Note that since $dx \not\in B$, but $dx \in B \cup bV$, we have that $dx =dz'\tilde{x} \in bV$, and that $bv=dz=dz'\tilde{z} \in bV$ also. Since $\tilde{z}$ and $\tilde{x}$ have no common non-empty prefix, we conclude that $b$ is a prefix of $dz'$. As a consequence of the fact that $bv=dz'\tilde{z}$, we conclude that $\tilde{z}$ is a suffix of $v$, so $v\tilde{z}^{-1} \in V$. Furthermore, $bv=dz'\tilde{z}$ implies that $v\tilde{z}^{-1}=b^{-1}dz'\neq b^{-1}dz'\tilde{x}=b^{-1}dx$. Now let \[ {\bf v}'=(V \setminus \{b^{-1}dx\},v\tilde{z}^{-1}). \] Note that our assumption that $dx \not\in D$ implies that $dx$ is a leaf of $B\cup bV$. Then, since $dx \not \in B$, we have that $b^{-1}dx$ is a leaf of $V$, so ${\bf v}' \in \mbox{FLA}(\Omega)$. It is then easy to check that ${\bf v}={\bf v}' {\bf x}$, since the second coordinates are the same, and $b^{-1}dx=b^{-1}dz'\tilde{x}=v\tilde{z}^{-1} \tilde{x}$. Similarly ${\bf d}{\bf z}'={\bf b}{\bf v}'$, for the second coordinates are both equal $dz'$, and the first coordinates both equal $(B \cup bV) \setminus \{dx\}$. Also we have that $w({\bf b}{\bf v}') < w({\bf b}{\bf v})$, because $dx \in B \cup bV$. Furthermore, if ${\bf z}={\bf v}$ then from ${\bf d}{\bf z}={\bf b}{\bf v}$ we conclude that $d=b$ which implies that $b^{-1}dx=x$. Similarly $v\tilde{z}^{-1}=b^{-1}dz'=z'$, showing that ${\bf z}'={\bf v}'$.
Case (ii): $dx \not\in D$, and $x=z$. We have that $z \neq \epsilon$, for otherwise ${\bf z}={\bf 1}$. So let $z=z'x'$ where $x' \in \Omega$, and let \[ {\bf z}'= (Z \setminus \{z\},z'),\ {\bf x}=(\{\epsilon,x'\},x'). \] We have that ${\bf z}',{\bf x} \in \mbox{FLA}(\Omega)$, and that ${\bf z}={\bf z}'{\bf x}$. Note that $dz \not \in B$, but it is the second coordinate of ${\bf b}{\bf v}$. Thus, $v \neq \epsilon$, and we have that $x'$ is the last letter of $v$ and as a consequence, $dz'=bv'$, where $v'=v(x')^{-1}$. We see that $v$ is a leaf of $V$ and similarly to the previous case it is easy to show that if we define \[ {\bf v}'=(V \setminus \{v\},v'), \] then ${\bf v}' \in \mbox{FLA}(\Omega), w({\bf b}{\bf v}')<w({\bf b}{\bf v}), {\bf v}={\bf v}'{\bf x}$ and ${\bf d}{\bf z}'={\bf b}{\bf v}'=\big( (D \cup dZ) \setminus\{dz\},dz'\big)$. Furthermore, if ${\bf z}={\bf v}$ then of course $z=v$ and we conclude that ${\bf z}'={\bf v}'$, so the statements of the lemma are true.
Case (iii): $dx \in D$, and $x=z$. This case is similar to Case (ii), the only difference being that we have to define \[ {\bf v}'=(V,v'). \] Since the second coordinate of ${\bf b}{\bf v}'$ is one letter shorter than $bv$, we have that $w({\bf b}{\bf v}')<w({\bf b}{\bf v})$.
Case (iv): $dx \in D$ and $x \neq z$. Put \[ {\bf z}'=(Z \setminus \{x\},z'),\ {\bf x}=(\tilde{x}\kern -3 pt \downarrow \cup \tilde{z}\kern -3 pt \downarrow,\tilde{z}) \text{ and } {\bf v}'=(V,v\tilde{z}^{-1}) \] where $z',\tilde{z}$ and $\tilde{x}$ are defined as in Case (i). It is easy to check (using the same argument as in Case (i)) that $b^{-1}dx=v\tilde{z}^{-1}\tilde{x}$ is a leaf in $V$, ${\bf z}',{\bf x},{\bf v}' \in \mbox{FLA}(\Omega)$, $w({\bf z}')<w({\bf z})$ and \[ {\bf z}={\bf z}'{\bf x},\ {\bf v}={\bf v}'{\bf x} \text{ and }{\bf d}{\bf z}'={\bf b} {\bf v}', \] so that again, the statements of the lemma are true.
\end{proof}
\begin{Lem} \label{Roll} Let ${\bf a}{\bf b}={\bf c}{\bf d}$, where ${\bf b}=(x\kern -3 pt \downarrow \cup b\kern -3 pt \downarrow,b)$ for some $b,x \in \Omega^*$, $x\neq\epsilon$, having no common non-empty prefix. If $ax \not\in A \cup C$ and $A=(A\cup aB) \setminus \{ax\}$, then ${\bf d}={\bf d}'{\bf b}$ for some ${\bf d}'=(D\setminus\{ d'x\},d')$ such that ${\bf a}={\bf c}{\bf d}'$. \end{Lem}
\begin{proof} First remark that our hypotheses guarantee that $ax$ is a leaf of $A\cup aB=C\cup cD$.
Since $ab=cd$, $c$ is a prefix of $ab$. However, since $ax \in C \cup cD$, but $ax \not\in C$, we have that $c$ is also a prefix of $ax$. Since $b$ and $x$ have no common non-empty prefix, this implies that $c$ is a prefix of $a$.
Let $d' \in \Omega^*$ be such that $a=cd'$. We have that $ax=cd'x \in cD$, so $d'x \in D$. From $cd'b=ab=cd$ we deduce that $d'b=d\in D$. From $d'b,d'x \in D$, the prefix closure of $D$ gives that $d'B \subseteq D$. Observe now that $d'x$ is a leaf of $D$ and $d'x\neq d'$, so that ${\bf d}'=(D\setminus\{ d'x\},d')\in \mbox{FLA}(\Omega)$ and clearly, $cd'x \not\in C \cup cD'$. Moreover, it is easy to check that \[{\bf a}={\bf c}{\bf d}'\mbox{ and } {\bf d}={\bf d}'{\bf b}.\] \end{proof}
Let $\rho$ be a finitely generated right congruence on $\mbox{FLA}(\Omega)$. Without loss of generality we may suppose that $\rho=\langle H \rangle$ for some finite $H \subseteq \mbox{FLA}(\Omega)\times \mbox{FLA}(\Omega)$ with $H^{-1}= H$. Let us denote by $\mathcal{D}$ the maximum of the diameters of the components of the elements of $H$. In the following definition, we abuse terminology a little. The elements ${\bf a},{\bf u},{\bf b}$ and $\mathbf{v}$ play a special role, but are not distinguished from the products ${\bf a} {\bf u}$ and ${\bf b} \mathbf{v}$. We employ similar conventions in other circumstances.
\begin{Def} Suppose that we have an $H$-sequence \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,{\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \] connecting ${\bf a}{\bf u}$ and ${\bf b}{\bf v}$. Then we say that the $H$-sequence is \emph{reducible} if there exist elements ${\bf y},{\bf u}',{\bf t}_1',\ldots,{\bf t}_n',{\bf v}'$ such that \begin{itemize} \item[(Red1)] $w({\bf a}{\bf u}')<w({\bf a}{\bf u})$, $w({\bf b}{\bf v}')<w({\bf b}{\bf v})$ or $w({\bf t}_i')<w({\bf t}_i)$ for some $i$; \item[(Red2)] ${\bf u}={\bf u}'{\bf y}, {\bf t}_1={\bf t}_1'{\bf y},\ldots,{\bf t}_n={\bf t}_n'{\bf y},{\bf v}={\bf v}'{\bf y}$;
\item[(Red3)] ${\bf a}{\bf u}'={\bf c}_1{\bf t}_1',{\bf d}_1{\bf t}_1'={\bf c}_2{\bf t}_2',\ldots,{\bf d}_n{\bf t}_n'={\bf b}{\bf v}'$. \end{itemize}
If a sequence is not reducible, we call it \emph{irreducible}. \end{Def}
From the above definition, a length-0
$H$-sequence ${\bf a}{\bf u}={\bf b}{\bf v}$ is reducible if and only if there exist elements ${\bf y},{\bf u}',{\bf v}' \in \mbox{FLA}(\Omega)$ such that ${\bf u}={\bf u}'{\bf y}, {\bf v}={\bf v}'{\bf y}, {\bf a}{\bf u}'={\bf b}{\bf v}'$ and $w({\bf a}{\bf u}')=w({\bf b}{\bf v}')<w({\bf a}{\bf u})=w({\bf b}{\bf v})$.
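For instance (with elements chosen purely for illustration), taking ${\bf a}={\bf b}={\bf 1}$ and ${\bf u}={\bf v}=(\{\epsilon,x\},x)$, the length-$0$ sequence ${\bf a}{\bf u}={\bf b}{\bf v}$ is reducible:

```latex
% A reducible length-0 sequence (our own illustrative choice):
\[
{\bf y}={\bf u},\qquad {\bf u}'={\bf v}'={\bf 1}:\qquad
{\bf u}={\bf u}'{\bf y},\quad {\bf v}={\bf v}'{\bf y},\quad
{\bf a}{\bf u}'={\bf b}{\bf v}'={\bf 1},
\]
\[
w({\bf a}{\bf u}')=0<2=w({\bf a}{\bf u}).
\]
```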
Note that if (Red2) holds, then in view of (W2) in Lemma~\ref{Basic}, (Red1) is equivalent to saying that ${\bf a}{\bf u}'\neq {\bf a}{\bf u}$, ${\bf b}{\bf v}' \neq {\bf b}{\bf v}$ or ${\bf t}_i'\neq {\bf t}_i$ for some $i$; we shall make use of this fact in the sequel. We are going to show that every irreducible sequence has an element with diameter less than or equal to $2\mathrm{max}(\mathcal{D},d({\bf a}),d({\bf b}))$.
\begin{Lem}\label{Two} If the sequence ${\bf a}{\bf u}={\bf b}{\bf v}$ is irreducible then $d({\bf u})\leq \mathrm{max}(d({\bf a}),d({\bf b}))$. \end{Lem}
\begin{proof} Suppose that $d({\bf u})>d({\bf a}),d({\bf b})$. Then there exists a leaf $x \in U$ such that $l(x)>d({\bf a}),d({\bf b})$. As a consequence we have $ax \not\in A \cup B$, so by Cases $(1)$ and $(2)$ of Lemma \ref{Crack} there exist ${\bf u}',{\bf v}',{\bf x} \in \mbox{FLA}(\Omega)$ such that ${\bf a}{\bf u}'={\bf b}{\bf v}', {\bf u}={\bf u}'{\bf x},{\bf v}={\bf v}'{\bf x}$ and $w({\bf b}{\bf v}')<w({\bf b}{\bf v})$, contradicting the irreducibility of the sequence ${\bf a}{\bf u}={\bf b}{\bf v}$. \end{proof}
The following lemma shows that elements of $\mathrm{FLA}(\Omega)$ which are connected by an irreducible sequence are `lean': the length of their second component limits their diameter. In fact, much more is true, but this statement will suffice for our proof. Furthermore, it is worth noting that this lemma is one (the other one is Statement (\ref{P1}) of Lemma \ref{Crack}) which is not dualisable: it fails if we swap from right congruences to left congruences.
\begin{Lem} \label{Small} If \begin{equation}\label{Seqq} {\bf a}{\bf u}={\bf c}_1{\bf t}_1,{\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\ldots,{\bf d}_n{\bf t}_n={\bf b} {\bf v} \end{equation} is an irreducible sequence, then $d({\bf a}{\bf u}) \leq 2\mathrm{max}(l(au),d({\bf a}),d({\bf b}),\mathcal{D})$. \end{Lem}
\begin{proof} Let $\mathcal{M}=\mathrm{max}(l(au),d({\bf a}),d({\bf b}),\mathcal{D})$. For brevity let ${\bf c}_{n+1}={\bf b}$ and ${\bf t}_{n+1}={\bf v}$. Suppose that $d({\bf a}{\bf u})>2\mathcal{M}$, which clearly implies that ${\bf u}\neq {\bf 1}$. Let $y$ be a leaf of $A \cup aU$ with $l(y)=d({\bf a}{\bf u})>2\mathcal{M}$. Then clearly $y \not\in A$, so $y=ax$ for some leaf $x \in U$. Notice that since $l(a)\leq d({\bf a})$, we have that $l(x)>\mathcal{M} \geq d({\bf a}),d({\bf c}_1)$, so $ax \not\in A \cup C_1$. Also, $l(ax)>l(au)$ implies that $x \neq u$. Then if we apply Lemma \ref{Crack} to the equality ${\bf a}{\bf u}={\bf c}_1{\bf t}_1$ and the leaf $x \in U$, we obtain by Case $(1)$ that there exist elements ${\bf x},{\bf u}',{\bf t}_1' \in \mbox{FLA}(\Omega)$ such that \[ w({\bf a}{\bf u}')<w({\bf a}{\bf u}),\,{\bf u}={\bf u}'{\bf x},\,{\bf t}_1={\bf t}_1'{\bf x},{\bf a}{\bf u}'={\bf c}_1{\bf t}_1', \] \[ {\bf x}=(\tilde{x}\kern -3 pt \downarrow \cup \tilde{u}\kern -3 pt \downarrow,\tilde{u}) \text{ and } {\bf t}_1'=(T_1\setminus\{ t_1'\tilde{x}\},t_1')\]
with $\tilde{x},\tilde{u} \in \Omega^*$ having no common non-empty prefix and $x=u'\tilde{x}$. Note that $ax=au'\tilde{x}$, $l(ax)>2\mathcal{M}\geq \mathcal{M}+l(au)$ and $au'$ is a prefix of $au$, so we have that $l(\tilde{x})>\mathcal{M}$. Further, $C_1 \cup c_1T_1'=(C_1 \cup c_1T_1) \setminus \{c_1t_1'\tilde{x}\}$.
Note that if $n=0$ then we have already contradicted the irreducibility of the sequence (\ref{Seqq}), so in the sequel we suppose that $n>0$.
Suppose for induction that we have constructed elements ${\bf u}',{\bf t}_1',\ldots,{\bf t}_m'\in \mbox{FLA}(\Omega)$ satisfying ${\bf u}={\bf u}'{\bf x}$, ${\bf t}_i={\bf t}_i'{\bf x}$ for all $1\leq i\leq m$, $T_m'=T_m \setminus \{t_m'\tilde{x}\}$ and $C_m \cup c_mT_m'=(C_m \cup c_mT_m) \setminus \{c_mt_m'\tilde{x}\}$.
Since $l(\tilde{x})>\mathcal{M}$, we have that $d_mt_m'\tilde{x} \not \in (D_m \cup d_mT_m') \cup C_{m+1}$, so $D_m \cup d_mT_m'=(D_m \cup d_mT_m) \setminus \{d_mt_m'\tilde{x}\}$. We can therefore apply Lemma \ref{Roll} to the equality ${\bf d}_m{\bf t}_m' \cdot {\bf x}={\bf c}_{m+1}{\bf t}_{m+1}$ and obtain that ${\bf t}_{m+1}={\bf t}_{m+1}'{\bf x}$ for some ${\bf t}_{m+1}'$ with $T_{m+1}'=T_{m+1} \setminus \{t_{m+1}'\tilde{x}\}$ and ${\bf d}_m{\bf t}_m'={\bf c}_{m+1}{\bf t}_{m+1}'$, so that $C_{m+1} \cup c_{m+1}T_{m+1}'=(C_{m+1} \cup c_{m+1}T_{m+1}) \setminus \{c_{m+1}t_{m+1}'\tilde{x}\}$.
Applying induction (note that $\mathcal{M}\geq d({\bf b})$ is required at the last step), there exist elements ${\bf u}',{\bf t}_1',\ldots,{\bf t}_n',{\bf v}'$ such that ${\bf u}={\bf u}'{\bf x},{\bf t}_1={\bf t}_1'{\bf x},\ldots,{\bf t}_n={\bf t}_n'{\bf x},{\bf v}={\bf v}'{\bf x}, w({\bf a}{\bf u}')<w({\bf a}{\bf u})$ and
\[{\bf a}{\bf u}'={\bf c}_1{\bf t}_1',{\bf d}_1{\bf t}_1'={\bf c}_2{\bf t}_2',\ldots,{\bf d}_n{\bf t}_n'={\bf b} {\bf v}'.\] This contradicts the irreducibility of the sequence (\ref{Seqq}) and so we conclude that $d({\bf a}{\bf u}) \leq 2\mathcal{M}$. \end{proof}
\begin{Def} We say that the pair $({\bf a} {\bf u},{\bf b} {\bf v})$ is \emph{irreducible} if ${\bf a} {\bf u}$ and ${\bf b} {\bf v}$ can be connected by an irreducible $H$-sequence. \end{Def}
Note that in view of an earlier remark, we are a little cavalier above; more properly, we should write ${\bf a} \cdot {\bf u}$ and ${\bf b}\cdot {\bf v}$.
\begin{Def} Let ${\bf a}{\bf u}={\bf c}_1{\bf t}_1, {\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v}$ be an $H$-sequence $\mathcal{S}$. We define the {\em weight} $w$ of $\mathcal{S}$ to be $w({\bf a}{\bf u})+w({\bf t}_1)+\ldots+w({\bf t}_n)+w({\bf b}{\bf v})$. \end{Def}
\begin{Lem} \label{Irr} Let \[\mathcal{S}: {\bf a}{\bf u}={\bf c}_1{\bf t}_1, {\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v}\] be an $H$-sequence. Then there exist elements ${\bf y},{\bf u}',{\bf t}_1',\ldots,{\bf t}_n',{\bf v}'$ such that \[ {\bf u}={\bf u}'{\bf y}, {\bf t}_1={\bf t}_1'{\bf y},\ldots,{\bf t}_n={\bf t}_n'{\bf y}, {\bf v}={\bf v}'{\bf y}, \] and \[ {\bf a}{\bf u}'={\bf c}_1{\bf t}_1',{\bf d}_1{\bf t}_1'={\bf c}_2{\bf t}_2',\ldots,{\bf d}_n{\bf t}_n'={\bf b}{\bf v}' \] is an irreducible $H$-sequence. \end{Lem} \begin{proof} We use induction on the weight of $\mathcal{S}$. First note that by Lemma~\ref{Basic}, $w(\mathcal{S})\geq w({\bf a})+w({\bf b})$.
If $w(\mathcal{S})=w({\bf a})+w({\bf b})$, then again by Lemma~\ref{Basic} we have that ${\bf a}{\bf u}={\bf a}$, ${\bf b}{\bf v}={\bf b}$ and $w({\bf t}_1)=\ldots=w({\bf t}_n)=0$, so that ${\bf t}_1=\ldots={\bf t}_n={\bf 1}$ and our $H$-sequence is irreducible in view of (Red1).
Suppose now that $w(\mathcal{S})>w({\bf a})+w({\bf b})$ and the $H$-sequence \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1, {\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \] is reducible. Then there exist elements $\tilde{{\bf y}},\tilde{{\bf u}},\tilde{{\bf t}}_1,\ldots,\tilde{{\bf t}}_n,\tilde{{\bf v}}$ satisfying conditions (Red1)-(Red3), that is, ${\bf u}=\tilde{{\bf u}}\tilde{{\bf y}},{\bf t}_i=\tilde{{\bf t}}_i\tilde{{\bf y}}$ for all $1 \leq i \leq n$, ${\bf v}=\tilde{{\bf v}}\tilde{{\bf y}}$, \begin{equation}\label{Nyaff} {\bf a}\tilde{{\bf u}}={\bf c}_1\tilde{{\bf t}}_1,{\bf d}_1\tilde{{\bf t}}_1={\bf c}_2\tilde{{\bf t}}_2,\ldots,{\bf d}_n\tilde{{\bf t}}_n={\bf b}\tilde{{\bf v}} \end{equation} and \[ w({\bf a}\tilde{{\bf u}})+w(\tilde{{\bf t}}_1)+\ldots+w(\tilde{{\bf t}}_n)+w({\bf b}\tilde{{\bf v}})<w({\bf a}{\bf u})+w({\bf t}_1)+\ldots+w({\bf t}_n)+w({\bf b}{\bf v}). \] This inequality shows that we can apply the inductive hypothesis to the $H$-sequence (\ref{Nyaff}). Thus there exists an irreducible sequence \[ {\bf a}{\bf u}'={\bf c}_1{\bf t}_1',\ldots,{\bf d}_n{\bf t}_n'={\bf b}{\bf v}' \] and an element ${\bf y}'$ such that $\tilde{{\bf u}}={\bf u}'{\bf y}',\tilde{{\bf t}}_i={\bf t}_i'{\bf y}'$ and $\tilde{{\bf v}}={\bf v}'{\bf y}'$. In this case let ${\bf y}={\bf y}'\tilde{{\bf y}}$, and the lemma is proved.
\end{proof}
This lemma shows that if $({\bf a}{\bf u},{\bf b}{\bf v})$ is not irreducible, then it is a `direct consequence' of an irreducible pair $({\bf a} {\bf u}',{\bf b}{\bf v}')$. The following lemma will be used to `dismantle' irreducible sequences, and to show that they always contain a `small' element.
\begin{Lem}\label{New} Let \begin{equation}\label{BasicS} {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_{n-1}{\bf t}_{n-1}={\bf c}_n{\bf t}_n, {\bf d}_n{\bf t}_n={\bf b}{\bf v} \end{equation} be an irreducible sequence. Then there exist ${\bf z}, {\bf u}',{\bf t}_1',\ldots, {\bf t}_n' \in \mbox{FLA}(\Omega)$ such that \begin{equation}\label{P1} d({\bf z})\leq \mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D}), \end{equation} \begin{equation}\label{P2} {\bf u}={\bf u}'{\bf z}, {\bf t}_1={\bf t}_1'{\bf z}, \ldots, {\bf t}_n={\bf t}_n'{\bf z}, \end{equation} and such that the sequence \begin{equation}\label{P3} {\bf a}{\bf u}'={\bf c}_1{\bf t}_1', \ldots, {\bf d}_{n-1}{\bf t}_{n-1}'={\bf c}_n{\bf t}_n' \end{equation} is irreducible. Furthermore, if ${\bf z} \neq {\bf 1}$, then \begin{equation}\label{Min} \text{min}(d({\bf a}{\bf u}),d({\bf b}{\bf v})) \leq 2\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D}). \end{equation} \end{Lem}
\begin{proof} If the sequence \begin{equation}\label{Chopped} {\bf a}{\bf u}={\bf c}_1{\bf t}_1, \ldots, {\bf d}_{n-1}{\bf t}_{n-1}={\bf c}_n{\bf t}_n \end{equation} is irreducible then ${\bf z}={\bf 1}, {\bf u}={\bf u}', {\bf t}_i'={\bf t}_i$ for $1\leq i\leq n$ satisfy the requirements of the lemma. Let us therefore suppose that the sequence (\ref{Chopped}) is reducible. Then by Lemma \ref{Irr} there exist ${\bf z}\neq {\bf 1},{\bf u}',{\bf t}_1',\ldots, {\bf t}_n' \in \mbox{FLA}(\Omega)$ such that (\ref{P2}) and (\ref{P3}) are satisfied.
Let us fix ${\bf u}',{\bf t}_1',\ldots,{\bf t}_n'$, and choose a ${\bf z}$ such that its weight is minimal amongst those satisfying the equalities (\ref{P2}). We claim that this particular ${\bf z}$ satisfies (\ref{P1}) by first showing that $Z \subseteq (au')^{-1}A \cup (d_nt_n')^{-1}B$ where \[ g^{-1} X=\{y \in \Omega^*: gy \in X\}. \] Note that if $X$ is prefix closed then so is $g^{-1}X$. Therefore it is enough to show that the leaves of $Z$ are contained in $(au')^{-1}A \cup (d_nt_n')^{-1}B$. Let $x$ be a leaf of $Z$, and suppose that $d_nt_n'x \not\in B$.
Then by applying Lemma \ref{Crack} to the equation ${\bf d}_n{\bf t}_n' \cdot {\bf z}={\bf b}\cdot {\bf v}$, there exist elements ${\bf z}',{\bf v}',{\bf x} \in \mbox{FLA}(\Omega)$ such that ${\bf z}={\bf z}'{\bf x}, w({\bf z}')<w({\bf z}), {\bf v}={\bf v}'{\bf x}$ and ${\bf d}_n{\bf t}_n'{\bf z}'={\bf b}{\bf v}'$. If we multiply the sequence (\ref{P3}) by ${\bf z}'$ and combine it with the equality ${\bf d}_n{\bf t}_n'{\bf z}'={\bf b}{\bf v}'$ we obtain the $H$-sequence \begin{equation}\label{Multi} {\bf a}{\bf u}'{\bf z}'={\bf c}_1{\bf t}_1'{\bf z}', \ldots, {\bf d}_{n-1}{\bf t}_{n-1}'{\bf z}'={\bf c}_n{\bf t}_n'{\bf z}', {\bf d}_n{\bf t}_n'{\bf z}'={\bf b}{\bf v}'. \end{equation} Note that if we multiply the sequence (\ref{Multi}) by the element ${\bf x}$ we obtain the sequence (\ref{BasicS}).
If $x=z$ or $d_nt_n'x \not \in D_n \cup d_nT_n'$, then we also have that $w({\bf b}{\bf v}')<w({\bf b}{\bf v})$, contradicting the irreducibility of sequence (\ref{BasicS}).
We therefore conclude that $x\neq z$ and $d_nt_n'x \in D_n \cup d_nT_n'$. Since sequence (\ref{BasicS}) is irreducible, this can only happen if ${\bf a}{\bf u}'{\bf z}'={\bf a}{\bf u}, {\bf t}_1'{\bf z}'={\bf t}_1,\ldots {\bf t}_n'{\bf z}'={\bf t}_n$ and ${\bf b}{\bf v}'={\bf b}{\bf v}$. Note that $w({\bf z}')<w({\bf z})$, so by the minimality of $w({\bf z})$, one of the equations of (\ref{P2}) must fail for ${\bf z}'$, and since we have just shown that ${\bf t}_i={\bf t}_i'{\bf z}'$ for all $i$, we have that ${\bf u}\neq {\bf u}'{\bf z}'$. Notice that ${\bf a}{\bf u}'{\bf z}'={\bf a}{\bf u}$ implies that the second coordinates of ${\bf u}$ and ${\bf u}'{\bf z}'$ are the same and so the first coordinates of ${\bf u}$ and ${\bf u}'{\bf z}'$ are different. Since ${\bf z}'=(Z \setminus \{x\},z')$, the first coordinate of ${\bf u}'{\bf z}'$ can differ from the first coordinate of ${\bf u}={\bf u}'{\bf z}$ only in the element $u'x$. That is, $u'x \not\in U' \cup u'Z'$. However, ${\bf a}{\bf u}={\bf a}{\bf u}'{\bf z}'$ and $au'x \in A \cup aU$, so $au'x \in A \cup a(U' \cup u'Z')$, that is, $au'x \in A$.
So far we have shown that for every leaf $x$ of $Z$, if $d_nt_n'x \not \in B$, then $au'x \in A$. This shows that every leaf $x$ of $Z$ is contained in the prefix closed set $(au')^{-1}A \cup (d_nt_n')^{-1}B$, so $Z \subseteq (au')^{-1}A \cup (d_nt_n')^{-1}B$. Since $d(g^{-1}X)\leq d(X)$ for every $g \in \Omega^*$ and finite $X \subseteq \Omega^*$, we conclude that $d({\bf z})\leq \text{max}(d({\bf a}),d({\bf b}))\leq \mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$.
We have observed that ${\bf z}\neq \mathbf{1}$. Either $au'z \in A$ or $d_nt_n'z \in B$. If $d_nt_n'z \in B$ then $l(bv)=l(d_nt_n)=l(d_nt_n'z)\leq d({\bf b})$, whilst if $au'z \in A$, then $l(au)=l(au'z) \leq d({\bf a})$. Lemma \ref{Small} implies in the first case that $d({\bf b}{\bf v}) \leq 2\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$, whilst in the second case $d({\bf a}{\bf u}) \leq 2\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$.
\end{proof}
As a consequence of this lemma we can show that every irreducible sequence contains a `small' element.
\begin{Lem}\label{Sm} Let \begin{equation}\label{SeqSm} {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \end{equation} be an irreducible $H$-sequence. Then there exists an element in the sequence having diameter less than or equal to $2\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$. \end{Lem}
\begin{proof} Let $\mathcal{D}'=\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$. If $d({\bf a}{\bf u}) \leq 2 \mathcal{D}'$, then the statement is true, so let us suppose that $d({\bf a}{\bf u}) > 2\mathcal{D}'$.
Apply Lemma \ref{New} to the sequence (\ref{SeqSm}). Note that ${\bf z}$ may be taken to be $\bf{1}$ if and only if the shortened sequence \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_{m-1}{\bf t}_{m-1}={\bf c}_m{\bf t}_m \] is also irreducible. In this case we can apply Lemma \ref{New} to this shortened sequence, and repeat the procedure until we obtain ${\bf z} \neq \bf{1}$. Note that such a ${\bf z}$ exists, for otherwise we would have that the sequence ${\bf a}{\bf u}={\bf c}_1{\bf t}_1$ is irreducible, which by Lemma \ref{Two} contradicts our assumption that $d({\bf a}{\bf u})>2\mathcal{D}'$. That is, there exists $2 \leq i\leq n+1$ such that \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots, {\bf d}_{j-1}{\bf t}_{j-1}={\bf c}_j{\bf t}_j \] is irreducible for all $i\leq j\leq n+1$ (where we denote ${\bf b}$ by ${\bf c}_{n+1}$ and ${\bf v}$ by ${\bf t}_{n+1}$), but \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots, {\bf d}_{i-2}{\bf t}_{i-2}={\bf c}_{i-1}{\bf t}_{i-1} \] is reducible. In this case if we apply Lemma \ref{New} to the first sequence with $j=i$, then the acquired element ${\bf z}$ will be different from ${\bf 1}$, and as a consequence the lemma implies that $\text{min}(d({\bf a}{\bf u}),d({\bf c}_i{\bf t}_i)) \leq 2 \mathcal{D}'$. \end{proof}
Now let \begin{equation}\label{Seq1} {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_{n-1}{\bf t}_{n-1}={\bf c}_n{\bf t}_n,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \end{equation} be an irreducible $H$-sequence with $n\geq 1$ and let $\mathcal{D}'=\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$. Then by Lemma \ref{New} there exist ${\bf z},{\bf u}',{\bf t}_1',\ldots,{\bf t}_n'\in \mbox{FLA}(\Omega)$, $d({\bf z}) \leq \mathcal{D}'$ such that ${\bf u}={\bf u}'{\bf z}$ and ${\bf t}_i={\bf t}_i'{\bf z}$ for every $1\leq i\leq n$, and such that the sequence \[ {\bf a}{\bf u}'={\bf c}_1{\bf t}_1',\ldots,{\bf d}_{n-1}{\bf t}_{n-1}'={\bf c}_n{\bf t}_n' \] is irreducible. Now let us apply Lemma \ref{New} to this sequence. Thus, there exist elements ${\bf y}^{(n)},{\bf u}^{(n)},{\bf t}_1^{(n)},\ldots,{\bf t}_{n-1}^{(n)} \in \mbox{FLA}(\Omega)$, $d({\bf y}^{(n)})\leq \mathcal{D}'$ satisfying ${\bf u}'={\bf u}^{(n)}{\bf y}^{(n)}$, ${\bf t}_i'={\bf t}_i^{(n)}{\bf y}^{(n)}$ for every $1\leq i\leq n-1$ and such that the $H$-sequence \begin{equation} {\bf a}{\bf u}^{(n)}={\bf c}_1{\bf t}_1^{(n)},\ldots, {\bf d}_{n-2}{\bf t}_{n-2}^{(n)}={\bf c}_{n-1}{\bf t}_{n-1}^{(n)} \end{equation} is irreducible.
Note that ${\bf u}={\bf u}^{(n)}{\bf y}^{(n)}{\bf z}$ and ${\bf t}_i={\bf t}_i^{(n)}{\bf y}^{(n)}{\bf z}$ for every $1\leq i\leq n-1$. Inductively, for every $2\leq k \leq n$ we can define the elements ${\bf u}^{(k)},{\bf y}^{(k)}$ and ${\bf t}_i^{(k)}$ where $1 \leq i\leq k-1$ satisfying ${\bf u}^{(k+1)}={\bf u}^{(k)}{\bf y}^{(k)}$ and ${\bf t}_i^{(k+1)}={\bf t}_i^{(k)}{\bf y}^{(k)}$ for every $1\leq i\leq k-1$ such that the $H$-sequence \begin{equation}\label{KSeq} {\bf a}{\bf u}^{(k)}={\bf c}_1{\bf t}_1^{(k)},\ldots, {\bf d}_{k-2}{\bf t}_{k-2}^{(k)}={\bf c}_{k-1}{\bf t}_{k-1}^{(k)} \end{equation} is irreducible, and $d({\bf y}^{(k)})\leq \mathcal{D}'$.
The last step is to define ${\bf y}^{(1)}$: at this point we have that the $H$-sequence \begin{equation} {\bf a}{\bf u}^{(2)}={\bf c}_1{\bf t}_1^{(2)} \end{equation} is irreducible. By Lemma \ref{Two}, we have that $d({\bf u}^{(2)})\leq \mathrm{max}(d({\bf a}),d({\bf c}_1)) \leq \mathcal{D}'$. So if we define ${\bf y}^{(1)}={\bf u}^{(2)}$ then $d({\bf y}^{(1)})\leq \mathcal{D}'$. For later reference, we summarise the properties of the elements ${\bf y}^{(i)}$ and ${\bf t}^{(i)}_j$ in the following lemma.
\begin{Lem}\label{lem:thedecomp} If \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_{n-1}{\bf t}_{n-1}={\bf c}_n{\bf t}_n,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \] is an irreducible $H$-sequence with $n\geq 1$, then there exist elements ${\bf z},{\bf u}^{(i)},{\bf y}^{(i)}$ and ${\bf t}^{(i)}_j$ where $1\leq j<i\leq n$ such that \begin{itemize} \item[\rm (Y1)] ${\bf u}={\bf y}^{(1)} \ldots {\bf y}^{(n)}{\bf z}$, ${\bf u}^{(i)}={\bf y}^{(1)}\ldots {\bf y}^{(i-1)}$ for every $2\leq i\leq n$, \item[\rm (Y2)] ${\bf t}^{(j)}_i={\bf t}^{(j-1)}_i {\bf y}^{(j-1)}$, \item[\rm (Y3)] the $H$-sequence \[ {\bf a}{\bf u}^{(j)}={\bf c}_1{\bf t}^{(j)}_1,\ldots,{\bf d}_{j-2}{\bf t}^{(j)}_{j-2}={\bf c}_{j-1}{\bf t}^{(j)}_{j-1} \] is irreducible for every $2\leq j\leq n$, \item[(Y4)] $d({\bf z}), d({\bf y}^{(i)})\leq \mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$ for all $1 \leq i \leq n$. \end{itemize} \end{Lem}
Notice that for every $1\leq i\leq n$ we have that either ${\bf a}{\bf y}^{(1)}\ldots {\bf y}^{(i)}\neq {\bf a}{\bf y}^{(1)} \ldots {\bf y}^{(i+1)}$ or ${\bf y}^{(i+1)}$ is an idempotent (here we assume that ${\bf y}^{(n+1)}={\bf z}$).
\section{The free left ample monoid and right coherency}\label{sec:flacoherent}
We are now in a position to show that $\mathrm{FLA}(\Omega)$ is right coherent. Assume first that $\Omega$ is finite. Continuing from Lemma~\ref{lem:thedecomp}, let $\mathcal{W}$ be the maximal weight of elements of $\mbox{FLA}(\Omega)$ having diameter less than or equal to $\mathcal{D}'$; since $\Omega$ is finite, $\mathcal{W}$ exists. If we multiply any number of idempotents having diameter less than or equal to $\mathcal{D}'$, then the diameter of the resulting element will be less than or equal to $\mathcal{D}'$, so the weight of the product will be less than or equal to $\mathcal{W}$.
Now let us `merge' each maximal run of consecutive idempotents in the sequence ${\bf y}^{(1)},\ldots, {\bf y}^{(n)},{\bf z}$ with the succeeding non-idempotent element. That is, if ${\bf y}^{(1)}$ is not idempotent, then let ${\bf y}_1={\bf y}^{(1)}$. Otherwise, let ${\bf y}^{(1)}\ldots {\bf y}^{(i)}$ be the first maximal idempotent subsequence, and let ${\bf y}_1= {\bf y}^{(1)}\ldots {\bf y}^{(i)} {\bf y}^{(i+1)}$, and so on: if the next element is not idempotent, it will be ${\bf y}_2$; otherwise ${\bf y}_2$ will be the product of the following maximal subsequence of idempotents multiplied by the next non-idempotent. In case ${\bf z}$ is idempotent, the last element of the resulting sequence ${\bf y}_1,\ldots,{\bf y}_m$ will be idempotent, but all the others are non-idempotent. Notice that for every $1\leq i\leq m$, ${\bf y}_i$ is a product of idempotents followed by a non-idempotent, except (possibly) in the case $i=m$. All factors of ${\bf y}_i$ have diameter less than or equal to $\mathcal{D}'$, and so ${\bf y}_i$ itself has diameter less than or equal to $\mathcal{D}'$. This implies that $w({\bf y}_i)\leq \mathcal{W}$. The properties of the sequence ${\bf y}_1,\ldots,{\bf y}_m$ are summarised in the following lemma.
\begin{Lem} \label{Why} If \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \] is an irreducible $H$-sequence, then there exist elements ${\bf y}_1,\ldots,{\bf y}_m$ such that \begin{itemize} \item[\rm (C1)] ${\bf u}={\bf y}_1{\bf y}_2\ldots {\bf y}_m$, \item[\rm (C2)] $w({\bf y}_i)\leq \mathcal{W}$ for every $1\leq i\leq m$, where $\mathcal{W}$ denotes the maximal weight of elements of $\mbox{FLA}(\Omega)$ having diameter less than or equal to $\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$, \item[\rm (C3)] ${\bf y}_i$ is not an idempotent for all $1\leq i\leq m-1$, \item[\rm (C4)] For every $1\leq i\leq m-1$, there exists an irreducible $H$-sequence connecting ${\bf a}{\bf y}_1{\bf y}_2\ldots {\bf y}_i$ with an element of the form ${\bf c}_i\tilde{{\bf t}}_i$ where $({\bf c}_i,{\bf d}_i) \in H$. \end{itemize} \end{Lem}
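To illustrate the merging procedure with a schematic example (the elements below are purely illustrative, and not drawn from any particular $H$-sequence): suppose $n=4$ and, in the sequence ${\bf y}^{(1)},\ldots,{\bf y}^{(4)},{\bf z}$ provided by Lemma \ref{lem:thedecomp}, the elements ${\bf y}^{(1)},{\bf y}^{(2)}$ and ${\bf z}$ are idempotent while ${\bf y}^{(3)}$ and ${\bf y}^{(4)}$ are not. Then the merging yields \[ {\bf y}_1={\bf y}^{(1)}{\bf y}^{(2)}{\bf y}^{(3)},\qquad {\bf y}_2={\bf y}^{(4)},\qquad {\bf y}_3={\bf z}, \] so that $m=3$ and ${\bf u}={\bf y}_1{\bf y}_2{\bf y}_3$; only the last element ${\bf y}_3$ is idempotent, and each ${\bf y}_i$ has weight at most $\mathcal{W}$, since all of its factors have diameter at most $\mathcal{D}'$.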
We aim to show that the right annihilator congruence \[ r({\bf a}\rho)=\{({\bf u},{\bf v}) \in \mbox{FLA}(\Omega) \times \mbox{FLA}(\Omega):{\bf a}{\bf u} \mathrel{\rho} {\bf a}{\bf v}\} \] is finitely generated for all ${\bf a} \in \mbox{FLA}(\Omega)$. To show this, let ${\bf a} \in \mbox{FLA}(\Omega)$ be fixed. Now let \[ \mathbb{K}= \{{\bf a}{\bf u}\rho:\exists\, {\bf b}{\bf v} \in \mbox{FLA}(\Omega)\text{ with }d({\bf b})\leq \mathrm{max}(d({\bf a}),\mathcal{D}) \text{ and } ({\bf a}{\bf u},{\bf b}{\bf v}) \text{ irreducible}\}. \]
\begin{Lem}\label{K} The set $\mathbb{K}$ is finite. \end{Lem}
\begin{proof} Let ${\bf a}{\bf u}\rho \in \mathbb{K}$ and let \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \] be an irreducible $H$-sequence connecting ${\bf a}{\bf u}$ to an element ${\bf b}{\bf v} \in \mbox{FLA}(\Omega)$, witnessing that ${\bf a}{\bf u}\rho \in \mathbb{K}$. Then by Lemma \ref{Sm} there exists an element in the sequence having diameter less than or equal to $2\mathrm{max}(d({\bf a}),\mathcal{D})$. Since $\Omega$ is finite, there are only finitely many such elements of $\mbox{FLA}(\Omega)$, and hence $\mathbb{K}$ is finite. \end{proof}
Now let $\mathcal{K}=\left|\mathbb{K}\right|$, and let us define the set \[ H'=\{({\bf u},{\bf v}):{\bf a}{\bf u} \mathrel{\rho} {\bf a}{\bf v} \text{ and }w({\bf a}{\bf u}),w({\bf a}{\bf v}) \leq (\mathcal{K}+3)\mathcal{W}'\}, \] where $\mathcal{W}'$ is the maximum of the weights of elements of $\mbox{FLA}(\Omega)$ having diameter less than or equal to $2\mathrm{max}(d({\bf a}),\mathcal{D})$.
\begin{Lem}\label{A} The finite set $H'$ generates the right annihilator congruence of ${\bf a}\rho$. \end{Lem}
\begin{proof} Denote the right annihilator congruence of ${\bf a}\rho$ by $\tau$. By definition, $H'\subseteq \tau$. Now let $({\bf u},{\bf v}) \in \tau$. We are going to show that $({\bf u},{\bf v}) \in \langle H'\rangle$. Without loss of generality we can suppose that $w({\bf a}{\bf u})\geq w({\bf a}{\bf v})$. If the pair $({\bf a}{\bf u},{\bf a}{\bf v})$ is reducible, then by Lemma \ref{Irr} there exist elements ${\bf u}',{\bf v}'$ and ${\bf y}$ such that the pair $({\bf a}{\bf u}',{\bf a}{\bf v}')$ is irreducible and $({\bf u},{\bf v})=({\bf u}',{\bf v}'){\bf y}$. We may therefore suppose that the pair $({\bf a}{\bf u},{\bf a}{\bf v})$ is irreducible, and prove by induction on $l(au)+l(av)$ that $({\bf u},{\bf v}) \in \langle H'\rangle$. If $l(au)+l(av)\leq \mathrm{max}(d({\bf a}),\mathcal{D})$ then certainly $l(au)\leq \mathrm{max}(d({\bf a}),\mathcal{D})$, so by Lemma \ref{Small}, $d({\bf a}{\bf u}) \leq 2\mathrm{max}(d({\bf a}),\mathcal{D})$, thus $w({\bf a}{\bf v})\leq w({\bf a}{\bf u}) \leq \mathcal{W}'$, so $({\bf u},{\bf v}) \in H'$.
Suppose now that whenever $({\bf u}',{\bf v}') \in \tau$ is a pair such that $({\bf a}{\bf u}',{\bf a}{\bf v}')$ is irreducible and $l(au')+l(av')\leq M$ for some $M\geq \mathrm{max}(d({\bf a}),\mathcal{D})$, then $({\bf u}',{\bf v}') \in \langle H'\rangle$. Let $({\bf u},{\bf v}) \in \tau$ be a pair such that $({\bf a}{\bf u},{\bf a}{\bf v})$ is irreducible and $l(au)+l(av) = M+1$. We are going to show that $({\bf u},{\bf v}) \in \langle H'\rangle$. If $w({\bf a}{\bf u}) \leq (\mathcal{K}+3)\mathcal{W}'$, then by definition $({\bf u},{\bf v}) \in H'$, so we can suppose that $w({\bf a}{\bf u})>(\mathcal{K}+3)\mathcal{W}'$. Of course, this implies that $d({\bf a}{\bf u})>2\text{max}(d({\bf a}),\mathcal{D})$.
Now let \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_n{\bf t}_n={\bf a}{\bf v} \] be an irreducible $H$-sequence connecting ${\bf a} {\bf u}$ and ${\bf a}{\bf v}$. Note that $n\geq 1$, for otherwise ${\bf a}{\bf u}={\bf a}{\bf v}$ is an irreducible $H$-sequence such that $d({\bf a}{\bf u})>2\max(d({\bf a}),\mathcal{D})$, which contradicts Lemma \ref{Two}. By Lemma \ref{Why} we have that there exist elements ${\bf y}_1,\ldots,{\bf y}_m$ satisfying Conditions (C1)-(C4). Note that $\mathcal{W}\leq \mathcal{W}'$, for the latter corresponds to a doubled diameter bound. Furthermore, since $w({\bf a}),w({\bf y}_i)\leq \mathcal{W}'$ for every $i$, we have that $w({\bf a}{\bf y}_1\ldots{\bf y}_m)\leq (m+1)\mathcal{W}'$. However, $w({\bf a}{\bf y}_1\ldots {\bf y}_m) > (\mathcal{K}+3)\mathcal{W}'$, so that making use of Lemma~\ref{Basic}, we see that $m> \mathcal{K}+2$. By Condition (C4), $({\bf a}{\bf y}_1 \ldots {\bf y}_i)\rho \in \mathbb{K}$ for all $1 \leq i \leq m-1$, so there exist $1\leq i<j\leq \mathcal{K}+1$ such that \[ {\bf a}{\bf y}_1\ldots{\bf y}_i \mathrel{\rho} {\bf a}{\bf y}_1\ldots {\bf y}_j. \] Note that $w({\bf a}{\bf y}_1\ldots {\bf y}_i),w({\bf a}{\bf y}_1\ldots {\bf y}_j) \leq (\mathcal{K}+2) \mathcal{W}'$, so the pair \begin{equation}\label{Nyekk} ({\bf y}_1\ldots{\bf y}_i,{\bf y}_1\ldots {\bf y}_j) \end{equation} is contained in $H'$. For brevity, denote the product ${\bf y}_1\ldots {\bf y}_i {\bf y}_{j+1}\ldots {\bf y}_m$ by ${\bf t}$. If we multiply the pair (\ref{Nyekk}) by ${\bf y}_{j+1}\ldots{\bf y}_m$, we conclude that \[ ({\bf t},{\bf u}) \in \langle H'\rangle, \] so ${\bf a}{\bf t} \mathrel{\rho} {\bf a}{\bf u} \mathrel{\rho} {\bf a}{\bf v}$. Note that $l(at) < l(au)$, because ${\bf t}$ lacks at least one non-idempotent factor (namely ${\bf y}_j$). As a consequence $l(at)+l(av) < l(au)+l(av)=M+1$, so by the induction hypothesis we have that \[ ({\bf t},{\bf v}) \in \langle H'\rangle. \] That is, $({\bf t},{\bf u}),({\bf t},{\bf v}) \in \langle H' \rangle$, so by transitivity we have that $({\bf u},{\bf v}) \in \langle H'\rangle$, and the lemma is proved.
\end{proof}
\begin{Lem}\label{B} Let ${\bf a},{\bf b} \in \mbox{FLA}(\Omega)$, $H\subseteq \mbox{FLA}(\Omega) \times \mbox{FLA}(\Omega)$ be finite and let $\rho=\langle H\rangle$ be a finitely generated right congruence. Then \[ {\bf a}\rho \cdot S \cap {\bf b}\rho \cdot S=\{{\bf c}\rho: {\bf c} \mathrel{\rho} {\bf a}{\bf u} \mathrel{\rho} {\bf b}{\bf v} \mbox{ for some }{\bf u},{\bf v} \in \mbox{FLA}(\Omega)\} \] is either empty or finitely generated as a right $S$-act. \end{Lem}
\begin{proof} Suppose that ${\bf a}\rho\cdot S \cap {\bf b} \rho\cdot S \neq \emptyset$. Let \[ \mathbb{K}'= \{{\bf a}{\bf u}\rho:\text{ there exists }{\bf v} \in \mbox{FLA}(\Omega), \text{ such that } ({\bf a}{\bf u},{\bf b}{\bf v}) \text{ is irreducible}\}. \] Note that similarly to the set $\mathbb{K}$ defined before Lemma \ref{K}, $\mathbb{K}'$ is also finite, because by Lemma \ref{Sm}, if $({\bf a}{\bf u},{\bf b}{\bf v})$ is irreducible then ${\bf a}{\bf u}$ is $\rho$-related to an element of $\mbox{FLA}(\Omega)$ having diameter less than or equal to $\mathrm{max}(d({\bf a}),d({\bf b}),\mathcal{D})$. We claim that $\mathbb{K}'$ generates ${\bf a}\rho\cdot S \cap {\bf b}\rho\cdot S$. Let ${\bf a}{\bf u}\rho={\bf b}{\bf v}\rho \in {\bf a}\rho\cdot S \cap {\bf b}\rho\cdot S$. Then there exists an $H$-sequence \[ {\bf a}{\bf u}={\bf c}_1{\bf t}_1,\ldots,{\bf d}_n{\bf t}_n={\bf b}{\bf v} \] connecting ${\bf a}{\bf u}$ and ${\bf b}{\bf v}$. By Lemma \ref{Irr}, there exist an irreducible pair $({\bf a}{\bf u}',{\bf b}{\bf v}')$ and ${\bf y} \in \mbox{FLA}(\Omega)$ such that $({\bf a}{\bf u},{\bf b}{\bf v})=({\bf a}{\bf u}',{\bf b}{\bf v}'){\bf y}$. In this case ${\bf a}{\bf u}'\rho \in \mathbb{K}'$, so ${\bf a}{\bf u}\rho \in \mathbb{K}' S$, thus $\mathbb{K}'$ generates ${\bf a}\rho\cdot S\cap {\bf b}\rho \cdot S$. \end{proof}
As a consequence of Lemmas \ref{A} and \ref{B} we have our first main result.
\begin{Thm}\label{thm:main1} If $\Omega$ is finite, then the free left ample monoid $\mbox{FLA}(\Omega)$ is right coherent. \end{Thm}
To show Theorem~\ref{thm:main1} is true for arbitrary $\Omega$ we need a simple consequence of Lemma \ref{Crack}.
\begin{Lem} \label{Pi} Suppose that ${\bf d}{\bf z}={\bf b}{\bf v}$, and let $\Pi$ be a subset of $\Omega$ containing all letters appearing in $D$ and $B$. Then there exist ${\bf z}',{\bf v}' \in \mathrm{FLA}(\Pi)$ and ${\bf x} \in \mathrm{FLA}(\Omega)$ such that ${\bf d}{\bf z}'={\bf b}{\bf v}'$ and $({\bf z},{\bf v})=({\bf z}',{\bf v}'){\bf x}$. \end{Lem}
\begin{proof} Let ${\bf z}',{\bf v}'$ be minimal (with respect to $w({\bf z}')+w({\bf v}')$) in $\mathrm{FLA}(\Omega)$ such that there exists ${\bf x} \in \mathrm{FLA}(\Omega)$ with ${\bf d}{\bf z}'={\bf b}{\bf v}', {\bf z}={\bf z}'{\bf x}$ and ${\bf v}={\bf v}'{\bf x}$. We claim that ${\bf z}',{\bf v}' \in \mathrm{FLA}(\Pi)$. Suppose on the contrary that either ${\bf z}' \not \in \mathrm{FLA}(\Pi)$ or ${\bf v}' \not \in \mathrm{FLA}(\Pi)$; without loss of generality, say ${\bf z}' \not\in \mathrm{FLA}(\Pi)$. Then there exists a leaf $x \in Z'$ such that $x$ contains a letter which is not in $\Pi$. In this case clearly $dx \not \in D \cup B$, so Lemma \ref{Crack} implies that there exist elements ${\bf z}'',{\bf v}'',{\bf x}'$ such that ${\bf d}{\bf z}''={\bf b}{\bf v}'', {\bf z}'={\bf z}''{\bf x}', {\bf v}'={\bf v}''{\bf x}'$ and $w({\bf z}'')<w({\bf z}')$. However, these facts together with the observations ${\bf z}={\bf z}'' ({\bf x}'{\bf x}), {\bf v}={\bf v}'' ({\bf x}'{\bf x})$ contradict the minimality of ${\bf z}'$ and ${\bf v}'$. This shows that ${\bf z}',{\bf v}' \in \mathrm{FLA}(\Pi)$, finishing the proof. \end{proof}
\begin{Thm} For any set $\Omega$, we have that $\mathrm{FLA}(\Omega)$ is right coherent. \end{Thm} \begin{proof} Let $\rho$ be a right congruence on $\mathrm{FLA}(\Omega)$ with finite set of generators $H$, so that $\rho=\langle H\rangle_{\mathrm{FLA}(\Omega)}$, and let ${\bf b},{\bf c}\in \mathrm{FLA}(\Omega)$.
Let $\Pi$ be the finite set of letters occurring in ${\bf b},{\bf c}$ or in components of $H$, and put $\rho'=\langle H\rangle_{\mathrm{FLA}(\Pi)}$.
We claim that for any ${\bf u},{\bf v}\in \mathrm{FLA}(\Omega)$ with ${\bf b}{\bf u}\,\rho\, {\bf c}{\bf v}$ via an $H$-sequence \[{\bf b}{\bf u}={\bf c}_1{\bf t}_1, {\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\hdots, {\bf d}_n{\bf t}_n={\bf c}{\bf v}\] in $\mathrm{FLA}(\Omega)$, there exist \[{\bf u}', {\bf t}_i'\, (1\leq i\leq n), {\bf v}'\in {\mathrm{FLA}(\Pi)}, {\bf x}\in \mathrm{FLA}(\Omega)\] such that \[{\bf u}={\bf u}'{\bf x}, {\bf t}_i={\bf t}_i'{\bf x}\, (1\leq i\leq n), {\bf v}={\bf v}'{\bf x}\] and \[{\bf b}{\bf u}'={\bf c}_1{\bf t}_1', {\bf d}_1{\bf t}_1'={\bf c}_2{\bf t}_2',\hdots, {\bf d}_n{\bf t}_n'={\bf c}{\bf v}'.\]
If $n=0$, then ${\bf b}{\bf u}={\bf c}{\bf v}$ so by Lemma \ref{Pi} we have that $({\bf u},{\bf v})=({\bf u}',{\bf v}'){\bf x}$ and ${\bf b}{\bf u}'={\bf c}{\bf v}'$ for some ${\bf u}',{\bf v}' \in \mathrm{FLA}(\Pi)$ and ${\bf x}\in\mathrm{FLA}(\Omega)$ as required.
Suppose now that $n>0$ and the result holds for all sequences of length $n-1$. Consider the $H$-sequence \[{\bf b}{\bf u}={\bf c}_1{\bf t}_1, {\bf d}_1{\bf t}_1={\bf c}_2{\bf t}_2,\hdots, {\bf d}_n{\bf t}_n={\bf c}{\bf v}.\] From the first equality, and the fact that ${\bf c}_1\in {\mathrm{FLA}(\Pi)}$, we deduce that there exist ${\bf u}',{\bf t}_1'\in {\mathrm{FLA}(\Pi)}$ and ${\bf x}\in\mathrm{FLA}(\Omega)$ such that \[{\bf u}={\bf u}'{\bf x}, {\bf t}_1={\bf t}_1'{\bf x}\mbox{ and }{\bf b}{\bf u}'={\bf c}_1{\bf t}_1'.\] From the remaining part of the sequence, the fact that ${\bf d}_1\in {\mathrm{FLA}(\Pi)}$ and our inductive hypothesis, we deduce that there exist ${\bf v}'', {\bf t}_i''\, (1\leq i\leq n)\in {\mathrm{FLA}(\Pi)}$ and ${\bf z}\in\mathrm{FLA}(\Omega)$ such that \[ {\bf t}_i={\bf t}_i''{\bf z}, {\bf v}={\bf v}''{\bf z}\mbox{ and } {\bf d}_1{\bf t}_1''={\bf c}_2{\bf t}_2'',\hdots, {\bf d}_n{\bf t}_n''={\bf c}{\bf v}''.\] We now examine the equality \[{\bf t}_1={\bf t}_1'{\bf x}={\bf t}_1''{\bf z}.\] Again by Lemma~\ref{Pi} we have that $({\bf x},{\bf z})=({\bf x}',{\bf z}'){\bf w}$ for some ${\bf x}',{\bf z}'\in {\mathrm{FLA}(\Pi)}$ and ${\bf w}\in \mathrm{FLA}(\Omega)$ with ${\bf t}_1'{\bf x}'={\bf t}_1''{\bf z}'$. Now let \[\tilde{{\bf u}}={\bf u}'{\bf x}', \tilde{{\bf t}}_i={\bf t}_i''{\bf z}' \, (1\leq i\leq n)\mbox{ and }\tilde{{\bf v}}= {\bf v}''{\bf z}'.\] Then it is easy to check that
\[{\bf u}=\tilde{{\bf u}}{\bf w}, {\bf t}_i=\tilde{{\bf t}}_i{\bf w}\, (1\leq i\leq n), {\bf v}=\tilde{{\bf v}}{\bf w}\] and \[{\bf b}\tilde{{\bf u}}={\bf c}_1\tilde{{\bf t}}_1, {\bf d}_1\tilde{{\bf t}}_1={\bf c}_2\tilde{{\bf t}}_2,\hdots, {\bf d}_n\tilde{{\bf t}}_n={\bf c}\tilde{{\bf v}}.\] Hence our claim holds by induction.
Now take ${\bf b}={\bf c}={\bf a}$, so that every letter of ${\bf a}$ lies in $\Pi$. Since ${\mathrm{FLA}(\Pi)}$ is right coherent, the right congruence $r({\bf a}\rho')$ on ${\mathrm{FLA}(\Pi)}$ has a finite set of generators $K$. Clearly $K\subseteq r({\bf a}\rho)$. Conversely, if $({\bf u},{\bf v})\in r({\bf a}\rho)$, then as ${\bf a}{\bf u}$ is connected to ${\bf a}{\bf v}$ via an $H$-sequence, we can apply the above claim to obtain that ${\bf a}{\bf u}'\,\rho'\, {\bf a}{\bf v}'$ for some ${\bf u}',{\bf v}'\in {\mathrm{FLA}(\Pi)}$ such that $({\bf u},{\bf v})=({\bf u}',{\bf v}'){\bf x}$ for some ${\bf x}\in \mathrm{FLA}(\Omega)$. Thus $({\bf u}',{\bf v}')\in \langle K\rangle_{{\mathrm{FLA}(\Pi)}} \subseteq \langle K\rangle_{{\mathrm{FLA}(\Omega)}}$, and it follows that $\langle K\rangle_{{\mathrm{FLA}(\Omega)}}=r({\bf a}\rho)$.
Now take ${\bf b}={\bf a}$ and ${\bf c}={\bf a}'$ and suppose that ${\bf a}\rho\cdot \mathrm{FLA}(\Omega)\cap {\bf a}'\rho\cdot \mathrm{FLA}(\Omega)\neq\emptyset$. Then ${\bf a}{\bf u}\,\rho\, {\bf a}'{\bf v}$ for some ${\bf u},{\bf v}\in \mathrm{FLA}(\Omega)$ and we have that ${\bf a}{\bf u}'\,\rho'\, {\bf a}'{\bf v}'$ for some
${\bf u}',{\bf v}'\in {\mathrm{FLA}(\Pi)}$ such that $({\bf u},{\bf v})=({\bf u}',{\bf v}'){\bf x}$ for some ${\bf x}\in \mathrm{FLA}(\Omega)$. Since ${\bf a}\rho'\cdot {\mathrm{FLA}(\Pi)}\cap {\bf a}'\rho'\cdot {\mathrm{FLA}(\Pi)}\neq\emptyset$ and ${\mathrm{FLA}(\Pi)}$ is right coherent, we have that ${\bf a}\rho'\cdot {\mathrm{FLA}(\Pi)}\cap {\bf a}'\rho'\cdot {\mathrm{FLA}(\Pi)}= L\cdot {\mathrm{FLA}(\Pi)}$ for some finite set $L=\{ {\bf u}_i\rho': 1\leq i\leq n\}$, where the ${\bf u}_i$ are fixed representatives of their $\rho'$-classes.
For each $i\in \{ 1,\hdots,n\}$ we therefore have that \[{\bf a}{\bf w}_i\,\rho'\, {\bf u}_i{\bf x}_i\,\rho'\, {\bf a}'{\bf z}_i\] for some ${\bf w}_i,{\bf x}_i,{\bf z}_i\in {\mathrm{FLA}(\Pi)}$, so that clearly \[{\bf a}{\bf w}_i\,\rho\, {\bf u}_i{\bf x}_i\,\rho\, {\bf a}'{\bf z}_i\] and so \[L'=\{ {\bf u}_i\rho: 1\leq i\leq n\}\subseteq {\bf a}\rho\cdot \mathrm{FLA}(\Omega)\cap {\bf a}'\rho\cdot \mathrm{FLA}(\Omega).\]
Conversely, if ${\bf a}{\bf b}\,\rho\, {\bf a}'{\bf c}$ then as above we have that $({\bf b},{\bf c})=({\bf b}',{\bf c}'){\bf t}$ for some ${\bf b}',{\bf c}'\in {\mathrm{FLA}(\Pi)}$ and ${\bf t}\in \mathrm{FLA}(\Omega)$ with ${\bf a}{\bf b}'\,\rho'\,{\bf a}'{\bf c}'$. Now $({\bf a}{\bf b}')\rho' =({\bf u}_i\rho'){\bf w}$ for some $i\in \{ 1,\hdots, n\}$ and ${\bf w}\in {\mathrm{FLA}(\Pi)}$ so that $({\bf a}{\bf b}')\rho =({\bf u}_i\rho){\bf w}$ and hence $({\bf a}{\bf b})\rho=({\bf u}_i\rho){\bf w}{\bf t}\in L'\cdot \mathrm{FLA}(\Omega)$. Thus ${\bf a}\rho\cdot \mathrm{FLA}(\Omega)\cap {\bf a}'\rho\cdot \mathrm{FLA}(\Omega) =L'\cdot \mathrm{FLA}(\Omega)$ as required. \end{proof}
\section{Coherency and retracts}\label{sec:constructions}
Investigations of how coherency behaves with respect to certain constructions will be the subject of a future paper. However, to show how the coherency of the free monoid follows from our result, we show here that retracts of (right) coherent monoids are (right) coherent.
\begin{Def} Let $S$ be a monoid. Then $T\subseteq S$ is a {\em retract} of $S$ if there exists a homomorphism $\varphi \colon S \to S$ such that $\varphi^2=\varphi$ and $\text{Im }\varphi=T$.
Note that any retract is a submonoid of $S$. \end{Def}
\begin{Lem} Let $S$ be a monoid and let $T$ be a retract of $S$ via the homomorphism $\varphi$. Let $\rho$ be a right congruence on $T$, and let $\rho'$ be the right congruence on $S$ generated by $\rho$. Then the restriction of $\rho'$ to $T$ coincides with $\rho$. \end{Lem}
\begin{proof} Let $a,b \in T$ be such that $a \mathrel{\rho'} b$. Since $\rho'$ is generated by $\rho$, there exist elements $c_1,\ldots,c_n,d_1,\ldots,d_n \in T$ and $t_1,\ldots, t_n \in S$ such that $c_i\mathrel{\rho} d_i$ for every $1\leq i\leq n$, and such that \[ a=c_1t_1,\ldots, d_nt_n=b. \] Since $\varphi^2=\varphi$ and $\text{Im }\varphi=T$, the map $\varphi$ fixes every element of $T$; hence, taking the image of this sequence under $\varphi$, we obtain the sequence \[ a=c_1(t_1\varphi),\ldots,d_n(t_n\varphi)=b \] connecting $a$ and $b$ in $T$, so $a \mathrel{\rho} b$.
\end{proof}
\begin{Thm}\label{Retract} Let $S$ be a right coherent monoid and let $T$ be a retract of $S$. Then $T$ is right coherent. \end{Thm}
\begin{proof} Let $\rho$ be a finitely generated right congruence on $T$, so that $\rho=\langle H\rangle_T$ for some finite set $H \subseteq T \times T$. Denote by $\rho'$ the right congruence on $S$ generated by $\rho$. Clearly, $\rho'=\langle H\rangle_S$.
First we show that if $a,b \in S$ and $a \mathrel{\rho'} b$, then $a\varphi \mathrel{\rho} b\varphi$. For this, let \[ a=c_1t_1,\ldots, d_nt_n=b \] be an $H$-sequence connecting $a$ and $b$ in $S$. Since $H \subseteq T \times T$, if we take the image of this sequence under $\varphi$ we obtain the $H$-sequence \[ a\varphi=c_1(t_1\varphi),\ldots,d_n(t_n\varphi)=b\varphi \] connecting $a\varphi$ and $b\varphi$ in $T$, so that $a\varphi \mathrel{\rho} b\varphi$.
Now let $a \in T$ be fixed. Note that $r(a\rho')$ is a right congruence on $S$, and $r(a\rho)$ is a right congruence on $T$. Since $S$ is right coherent, we have that $r(a\rho')=\langle X\rangle_S$ for some finite $X \subseteq S \times S$. We claim that the finite set \[ X\varphi=\{(u\varphi,v\varphi): (u,v) \in X\}\subseteq T \times T \] generates $r(a\rho)$.
First note that if $(u,v) \in X$, then $au \mathrel{\rho'} av$, so we have that \[a(u\varphi)=(au)\varphi \mathrel{\rho} (av)\varphi=a(v\varphi),\] that is, $(u\varphi,v\varphi) \in r(a\rho)$. Thus we have shown that $X\varphi \subseteq r(a\rho)$.
On the other hand, if $(u,v) \in r(a\rho)$, then necessarily $(u,v) \in r(a\rho')$, so there exists an $X$-sequence \[ u=c_1t_1,\ldots,d_nt_n=v \] connecting $u$ and $v$ in $S$. If we take the image of this sequence under $\varphi$ (and remember that $u,v \in T$), then we obtain the $X\varphi$-sequence \[ u=(c_1\varphi) (t_1\varphi),\ldots,(d_n\varphi)(t_n\varphi)=v \] connecting $u$ and $v$. That is, $(u,v) \in \langle X\varphi \rangle_T$, and we have shown that $r(a\rho)$ is finitely generated.
Now suppose that $a,b \in T$ are such that $a\rho\cdot T \cap b\rho\cdot T \neq \emptyset$. Then clearly $a\rho'\cdot S \cap b\rho'\cdot S \neq \emptyset$, so there exists a finite set $Y$ of $\rho'$-classes such that $a\rho'\cdot S \cap b\rho'\cdot S=Y\cdot S$. We claim that $a\rho\cdot T \cap b\rho\cdot T = Y\varphi\cdot T$ where \[ Y\varphi=\{(x\varphi) \rho: x\rho' \in Y\}. \] Notice that $Y\varphi$ is well defined, for if $x \mathrel{\rho'} y$, then $x\varphi \mathrel{\rho} y\varphi$.
First note that if $x\rho' \in Y$, then $au \mathrel{\rho'} x \mathrel{\rho'} bv$ for some $u,v \in S$. By an earlier comment, this implies that $a(u\varphi) \mathrel{\rho} x\varphi \mathrel{\rho} b(v\varphi)$, so $(x\varphi)\rho \in a\rho\cdot T \cap b\rho\cdot T$, and so $Y \varphi \cdot T \subseteq a\rho\cdot T \cap b\rho\cdot T$.
Conversely, let $w\rho \in a\rho\cdot T \cap b\rho\cdot T$ for some $w \in T$. Then clearly $w\rho' \in a\rho'\cdot S \cap b\rho'\cdot S$, so there exist an $x\rho' \in Y$ and $s\in S$ such that $w\rho'=x\rho' \cdot s$, that is, $w \mathrel{\rho'} xs$. Applying $\varphi$ we see that $w=w\varphi \mathrel{\rho} (x\varphi) (s\varphi)$, that is, $w\rho=(x\varphi)\rho \cdot s\varphi\in
Y\varphi\cdot T$. Consequently, $a\rho\cdot T \cap b\rho\cdot T \subseteq Y\varphi\cdot T$ as required. \end{proof}
\begin{Cor} \cite{ghr:2013} The free monoid $\Omega^*$ is right coherent. \end{Cor}
\begin{proof} Note that the map \[ \varphi \colon \mbox{FLA}(\Omega) \to \mbox{FLA}(\Omega), {\bf a} \mapsto (a\kern -3 pt \downarrow,a) \] is an idempotent homomorphism whose image is (a copy of) $\Omega^*$, so $\Omega^*$ is a retract of $\mbox{FLA}(\Omega)$. Then Theorem \ref{Retract} implies that $\Omega^*$ is right coherent. \end{proof}
Note that the free monoid is (right) coherent and every monoid is a homomorphic image of a free monoid; since there exist monoids that are not (right) coherent, the class of (right) coherent monoids is not closed under homomorphic images.
\section{The negative results}\label{sec:negative}
In this section, we show that the free inverse monoid is not left coherent. By duality, neither can it be right coherent. A few simple remarks then yield that the free left ample monoid is not left coherent and that the free ample monoid is neither left nor right coherent.
Let $\Omega=\{x,y\},\ {\bf a}=(\{\epsilon,x\},x) \in \mathrm{FIM}(\Omega)$ and ${\bf b}=(\{\epsilon,y\},y) \in \mathrm{FIM}(\Omega)$. Denote by $\rho$ the left congruence generated by the pair $({\bf a},\bf{1})$, and by $\tau$ the left annihilator of ${\bf b}\rho$, that is, \[ \tau=\{({\bf u},{\bf v}): {\bf u}{\bf b} \mathrel{\rho} {\bf v}{\bf b}\} \subseteq \mathrm{FIM}(\Omega) \times \mathrm{FIM}(\Omega). \] It is easy to see that $\tau$ is a left congruence on $\mathrm{FIM}(\Omega)$. We claim that it is not finitely generated.
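Recall that, in the representation of elements of $\mathrm{FIM}(\Omega)$ as pairs consisting of a finite prefix-closed set and a word, multiplication is given by $(A,g)(B,h)=(A\cup gB,gh)$; this is the rule implicitly used in the computations below. For example, \[ {\bf a}^2=(\{\epsilon,x\},x)(\{\epsilon,x\},x)=(\{\epsilon,x,x^2\},x^2), \qquad {\bf b}{\bf a}=(\{\epsilon,y\},y)(\{\epsilon,x\},x)=(\{\epsilon,y,yx\},yx). \]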
The following lemma is effectively folklore, but we prove it here for completeness.
\begin{Lem}\label{Rho} For every ${\bf u},{\bf v} \in \mathrm{FIM}(\Omega)$, we have that ${\bf u} \mathrel{\rho} {\bf v}$ if and only if there exist $m,n \in \mathbb{N}^0$ such that ${\bf u}{\bf a}^n={\bf v}{\bf a}^m$. \end{Lem}
\begin{proof} It is straightforward that if such $n$ and $m$ exist, then ${\bf u}$ and ${\bf v}$ are $\rho$-related. For the converse part, suppose that ${\bf u} \mathrel{\rho} {\bf v}$. Thus, since $\rho$ is generated by $({\bf a},\bf{1})$, there exist elements ${\bf c}_1,\ldots,{\bf c}_p,{\bf d}_1,\ldots,{\bf d}_p,{\bf t}_1,\ldots,{\bf t}_p \in \mathrm{FIM}(\Omega)$ such that for any $1\leq i\leq p$, $({\bf c}_i,{\bf d}_i)=({\bf a},\bf{1})$ or $({\bf c}_i,{\bf d}_i)=(\bf{1},{\bf a})$, satisfying \[ {\bf u}={\bf t}_1{\bf c}_1,{\bf t}_1{\bf d}_1={\bf t}_2{\bf c}_2,\ldots,{\bf t}_{p-1}{\bf d}_{p-1}={\bf t}_p{\bf c}_p,{\bf t}_p{\bf d}_p={\bf v}. \]
Note that for all $1\leq i\leq p$, we have that either ${\bf t}_i{\bf c}_i={\bf t}_i{\bf d}_i{\bf a}$ (exactly when $({\bf c}_i,{\bf d}_i)=({\bf a},{\bf 1})$) or ${\bf t}_i{\bf c}_i{\bf a}={\bf t}_i{\bf d}_i$ (exactly when $({\bf c}_i,{\bf d}_i)=({\bf 1},{\bf a})$). Applying this argument successively to $i=1,2,\ldots,p$, we obtain the result of the lemma (in fact, we also see that $n$ and $m$ are just the numbers of pairs $(\bf{1},{\bf a})$ and $({\bf a},\bf{1})$, respectively, in the sequence $({\bf c}_1,{\bf d}_1),\ldots,({\bf c}_p,{\bf d}_p)$). \end{proof}
As a direct consequence, we have the following lemma:
\begin{Lem}\label{Tau} For every ${\bf u},{\bf v} \in \mathrm{FIM}(\Omega)$, ${\bf u}\mathrel{\tau}{\bf v}$ if and only if there exist $m,n \in \mathbb{N}^0$ such that ${\bf u}{\bf b}{\bf a}^n={\bf v}{\bf b}{\bf a}^m$. \end{Lem}
For any $i \geq 0$, let \[ U_i=\{\epsilon,y,yx,\ldots, yx^i\}. \]
\begin{Lem} \label{prevv} We have that $(U_i,\epsilon) \mathrel{\tau} (U_1,\epsilon)$ for any $i\geq 1$. \end{Lem}
\begin{proof} Since \[ (U_i,\epsilon){\bf b}{\bf a}^{i}=(U_1,\epsilon){\bf b}{\bf a}^{i}=(\{\epsilon,y,yx,yx^2,\ldots,yx^{i}\},yx^{i}) \] we have by Lemma \ref{Tau} that $(U_i,\epsilon) \mathrel{\tau} (U_1,\epsilon)$. \end{proof}
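To make this computation explicit, consider the case $i=2$, multiplying in $\mathrm{FIM}(\Omega)$ according to the rule $(A,g)(B,h)=(A\cup gB,gh)$: \[ (U_2,\epsilon){\bf b}{\bf a}^{2}=(U_2,y){\bf a}^{2}=(U_2\cup\{y,yx,yx^2\},yx^2)=(U_2,yx^2), \] and likewise $(U_1,\epsilon){\bf b}{\bf a}^{2}=(U_1\cup\{y,yx,yx^2\},yx^2)=(U_2,yx^2)$, so $(U_2,\epsilon)\mathrel{\tau}(U_1,\epsilon)$, with $m=n=2$ in Lemma \ref{Tau}.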
\begin{Lem}\label{lem:nfg} The left annihilator congruence $\tau=l({\bf b}\rho)$ is not finitely generated. \end{Lem}
\begin{proof} Suppose for contradiction that
$H$ is a finite symmetric subset of $\tau$ generating $\tau$ and let $k$ be a natural number such that for every $((S,s),(T,t)) \in H$ we have that $k > \left|S\right|$.
Now suppose that $(U_k,\epsilon)={\bf t}{\bf c}$ where $({\bf c},{\bf d}) \in H$ and ${\bf t} \in \mathrm{FIM}(\Omega)$. Then $c^{-1}=t \in U_k$ and $c^{-1}C \subseteq U_k$. Note that since $c \in C$, $c^{-1}C$ is also prefix closed. The facts that $U_k$ is a single path and $\left|C\right|<k$ imply that $c^{-1}C \subseteq \{\epsilon,y,yx,\ldots,yx^{k-1}\}$. However, $U_k=T \cup c^{-1}C$, and as a consequence we have that $yx^k \in T$, so $T=U_k$.
We also have ${\bf c} \mathrel{\tau} {\bf d}$, so there exist $i,j$ such that ${\bf c}{\bf b}{\bf a}^i={\bf d}{\bf b}{\bf a}^j$. By multiplying this equality from the right by an appropriate power of ${\bf a}$ we can ensure that $i,j>k$. Note that since $C \subseteq cU_k$, the first component of ${\bf c}{\bf b}{\bf a}^i$ is $\{c,cy,cyx,\ldots,cyx^i\}$, whereas the first component of ${\bf d}{\bf b}{\bf a}^j$ contains the vertices $\{d,dy,dyx,dyx^2,\ldots,dyx^j\}$. Given that $c^{-1}\in U_k$, a brief analysis shows this can only happen if $d=c$, and then $c^{-1}D \subseteq \{\epsilon,y,\ldots, yx^{k-1}\}$ follows from the facts that \[ c^{-1}D \subseteq c^{-1} \{c,cy,cyx,\ldots, cyx^{i}\}=\{\epsilon,y,yx,\ldots, yx^{i}\}, \]
$c^{-1}D$ is prefix closed and $\left|c^{-1}D\right|<k$. So altogether we obtain that $T=U_k$ and $tD=c^{-1}D \subseteq \{\epsilon,y,yx,\ldots,yx^{k-1}\} \subseteq U_k$, so $T \cup tD=U_k$ and as a consequence we conclude that ${\bf t}{\bf d}=(U_k,\epsilon)$. That is, applying elements of $H$ to right factors of $(U_k,\epsilon)$ does not change $(U_k,\epsilon)$, so the $\tau$-class of $(U_k,\epsilon)$ is a singleton. In particular, $\big((U_k,\epsilon),(U_{k+1},\epsilon)\big)\notin\tau$, contradicting Lemma \ref{prevv}. \end{proof}
\begin{Thm}
Let $\left|\Omega\right|>1$. Then the free inverse monoid $\mathrm{FIM}(\Omega)$ and the free ample monoid $\mathrm{FAM}(\Omega)$ are neither left nor right coherent. The free left ample monoid $\mathrm{FLA}(\Omega)$ is right coherent, but not left coherent. \end{Thm}
\begin{proof} Lemma \ref{lem:nfg} shows that $\mathrm{FIM}(\Omega)$ is not left coherent. Exactly the same argument (which in fact simplifies, since $c=t=\epsilon$) shows that $\mathrm{FLA}(\Omega)$ and $\mathrm{FAM}(\Omega)$ are not left coherent. By duality, $\mathrm{FIM}(\Omega)$ and $\mathrm{FAM}(\Omega)$ cannot be right coherent. \end{proof}
\end{document}
Detection of Permethrin pesticide using silver nano-dendrites SERS on optical fibre fabricated by laser-assisted photochemical method
Thanh Binh Pham1,
Thi Hong Cam Hoang2,
Van Hai Pham3,
Van Chuc Nguyen1,
Thuy Van Nguyen1,
Duc Chinh Vu1,
Van Hoi Pham1 &
Huy Bui1
Scientific Reports volume 9, Article number: 12590 (2019)
Permethrin, 3-phenoxybenzyl (1RS)-cis,trans-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropanecarboxylate, has a wide range of applications as an insecticide and insect repellent, and helps prevent mosquito-borne diseases, such as dengue fever and malaria, in tropical areas. In this work, we develop a prominent monitoring method for the detection of permethrin pesticide using surface-enhanced Raman scattering (SERS) optical fibre substrates. The novel SERS-active optical fibre substrates were fabricated by growing and depositing silver (Ag) nano-dendrites on the end of a multi-mode fibre core using a laser-assisted photochemical method. The morphology of the Ag nanostructures could be controlled through the experimental conditions, namely the laser illumination time: Ag nanoparticle optical fibre substrates and Ag nano-dendrite optical fibre substrates were prepared with laser illumination times of 3 min and 8 min, respectively. The resulting SERS-active optical fibre substrates were tested with Rhodamine 6G aqueous solutions. We demonstrate that the Ag nano-dendrite optical fibre substrate has the higher Raman enhancement factor, owing to the creation of many hot-spots that amplify the Raman signal. In addition, the stability and reproducibility of the Ag nano-dendrite optical fibre substrate were evaluated, with a storage time of 1000 hours and a relative standard deviation of less than 3%. The Ag nano-dendrite optical fibre substrate was selected for the detection of permethrin pesticide in the concentration range of 0.1 ppm–20 ppm, with a limit of quantification (LOQ) of 0.1 ppm and a calculated limit of detection (LOD) of 0.0035 ppm, proving its great potential for direct, rapid detection and monitoring of permethrin.
Pesticides are exploited in agricultural production to increase the yield and improve the quality of crops. The pyrethroid family of pesticides, including permethrin, cypermethrin, deltamethrin and λ-cyhalothrin, originates from a mixture of complex esters extracted from daisy flowers and has been widely used in agriculture as insecticides, acaricides, and insect repellents. Permethrin does not present any notable genotoxicity or immunotoxicity in humans and farm animals, but is classified by the US Environmental Protection Agency (EPA) as a likely human carcinogen, and is listed as a "restricted use" substance1. However, pesticide misuse leads to serious environmental pollution and public concern, since it poses potential risks to human health. Therefore, accurate, rapid and reliable detection of pesticides will help prevent harmful effects on the environment and human health. Conventionally, pesticides are detected by various laboratory-based techniques, such as chromatography, gas chromatography-mass spectrometry, high-performance liquid chromatography, atomic fluorescence spectrometry, and liquid chromatography-tandem mass spectrometry2,3,4. These techniques have high sensitivity and accuracy, but they rely on expensive instruments and require professionally trained personnel to treat samples and operate instruments in centralized laboratories. Hence, it is essential to develop various new-generation sensors, such as colorimetric, electrochemical, acoustic, and field-effect transistor (FET) devices5,6, surface-enhanced Raman scattering (SERS) sensors7,8,9, and optical fibre sensors10,11,12. Biochemical sensors based on optical fibres are being rapidly developed because of their advantages, such as low signal attenuation, compact size, small sample volume, high flexibility, immunity to interference from electromagnetic fields, and the possibility of remote sensing.
SERS is a powerful spectroscopic technique for ultrasensitive and selective bio-chemical detection13,14,15, owing to its capability of providing "fingerprint" information on molecular structures at low concentrations. The enhancement mechanism of SERS is explained by the highly enhanced local electromagnetic field on the noble metal surface, due to the excitation of localized surface plasmon resonances, together with enhanced chemical interaction between the adsorbate and the noble metal nanoparticles. Regions with a highly enhanced local electromagnetic field are often called 'hot spots' and play a decisive role in the enhancement of the Raman signal. The Raman signal enhancement depends strongly on the arrangement and morphology of the noble metal surface. SERS on roughened or aggregated metal surfaces (such as flower-like or dendritic structures) has demonstrated large enhancement factors16,17,18,19. SERS substrates with many hot-spots and a uniform distribution of nanostructures on the surface provide large Raman enhancement and high measurement repeatability. Metal dendritic nanostructures contain many hot-spots, formed at the tips and sharp edges of the trunk and branches of the nano-dendrites and in the narrow gaps between branches20,21. The Ag nanostructure is considered one of the most excellent candidates for SERS applications due to its highly desirable plasmonic properties, low cost and easy fabrication/synthesis. Dendritic Ag nanostructures have attracted interest and are usually fabricated by electrochemical deposition to achieve a high density of hot-spots for enhanced SERS activity22,23. Photochemical approaches have also been used to synthesize Ag nanostructures suspended in solution by directly exposing the growth solution to light19,24,25,26.
Recently, several studies have combined SERS activity with optical fibres to develop SERS fibre probes for compact biochemical optical fibre sensors suitable for use in outdoor fields27,28,29. Ag nanoparticles have been attached to silanized surfaces created by functionalizing SERS optical fibre probes30,31,32, or synthesized directly on a fibre taper surface by a chemical deposition method33.
In this paper, we propose novel SERS-active optical fibre substrates with a silver nano-dendrite structure synthesized by a facile and low-cost laser-assisted photochemical method. The Ag nanostructures were directly grown and immobilized on multi-mode fibre ends by irradiating a green laser beam through the fibre into a mixed silver-ion solution, with growth occurring only in the main laser-irradiated region. Rhodamine 6G aqueous solutions were employed as a probe to characterize the enhancement, stability and uniformity of the resulting SERS substrates with Ag nanostructures on the optical fibre ends. The Raman enhancement factor of the Ag nano-dendrite SERS optical fibre substrate was found to be significantly increased. This SERS optical fibre substrate also exhibited good stability and reproducibility, with an average relative standard deviation of less than 3%. The proposed Ag nano-dendrite optical fibre substrates were applied to the detection of permethrin pesticide in the concentration range of 0.1 ppm–20 ppm, with an LOQ of 0.1 ppm and an LOD of 0.0035 ppm. The goal of this study was to investigate the feasibility of the proposed SERS optical fibre substrate and prove its great potential for compact chemical and environmental optical fibre sensors.
An optical multimode fibre with a core/cladding diameter of 62.5/125 µm and NA of 0.22 (Thorlabs, USA) was used in the experiments. Silver nitrate (AgNO3, 99.5% purity), tri-sodium citrate dihydrate (C6H5Na3O7·2H2O, greater than 99% purity) and sodium borohydride (NaBH4) were purchased from Fisher Scientific UK, Merck KGaA (Germany) and Kanto Chemical Co., Inc. (Tokyo, Japan), respectively. Permethrin (C21H20Cl2O3, its purest commercially available grade) and Rhodamine 6G (R6G) were supplied by Sigma-Aldrich (Switzerland). All aqueous solutions were prepared with ultrapure water (resistivity greater than 18 MΩ·cm).
All SERS optical fibre substrates were prepared with the assistance of a diode laser with an emission wavelength of 532 nm and a power of 500 mW (Laserlands, China). Scanning electron microscope (SEM) images and energy-dispersive X-ray spectroscopy (EDX) data were obtained with a field-emission SEM (FE-SEM Hitachi S-4800) at an acceleration voltage of 5 kV. All measured samples were prepared using a BDL-pipetter 6601 (Becton Dickinson Labware, USA). Raman scattering measurements were performed with a Raman microscope system (Horiba Scientific LabRAM HR Evolution) with a confocal microscope connected to 10×, 60× and 100× objective lenses and an excitation wavelength of 532 nm.
Preparation of the Ag nanostructure SERS optical fibre substrates
The schematic diagram of the experimental setup used to prepare the Ag nanostructure SERS substrate on the end of the optical fibre with the assistance of the diode laser is illustrated in Fig. 1. First, the growth solution was prepared by mixing optimal molar concentrations of aqueous solutions: 0.2 ml of AgNO3 (0.01 M) and 0.2 ml of tri-sodium citrate (0.3 M) with 19.56 ml of ultrapure water in a clean tumbler. After rapid stirring for 3 min, 0.04 ml of a freshly prepared sodium borohydride solution (0.01 mM) was added dropwise to the mixture under vigorous stirring for 5 min, after which the solution was kept in the dark at room temperature for further use. Second, the polymer coating layer of the multi-mode optical fibre was stripped over 25 mm from the end, the fibre end was carefully cleaved so that its surface was perpendicular to the fibre axis, and the samples were cleaned with ethanol. The growth and immobilisation of Ag nanostructures on the ends of the optical fibre cores were then performed with the diode laser beam propagated through the optical fibre. The beam was focused onto one end of the multimode optical fibre (62.5/125 µm), glued to a grin-rod, via a 10× objective lens (NA = 0.22); the other end of the fibre had a standard fibre connector for coupling the laser beam into the fibre sample. The optical fibre sample was fixed on a homemade fibre holder, in which its position could be finely adjusted, and was dipped into the prepared Ag-chemical solution; the laser beam was then propagated into the solution through the optical fibre. The morphology of the Ag nanostructures formed on the end of the fibre core depended on the diode laser exposure time. During the Ag nanostructure preparation process, the reaction solution was contained in a reactor vessel held at a stable temperature of 20 °C by a Peltier thermoelectric cooler, to avoid temperature effects on the growth of the Ag nanostructures.
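As a quick cross-check, the final concentrations in the ~20 mL growth solution follow from the aliquot volumes and stock molarities quoted above. The following minimal sketch applies the simple dilution relation C_stock × V_stock / V_total; all volumes and concentrations are those stated in the text:

```python
# Final concentrations in the mixed growth solution (C_stock * V_stock / V_total).
def diluted(conc_stock_M, vol_stock_mL, vol_total_mL):
    """Concentration of one component after mixing its aliquot into the total volume."""
    return conc_stock_M * vol_stock_mL / vol_total_mL

# Total volume: AgNO3 + citrate + ultrapure water + NaBH4 aliquots (mL).
V_TOTAL = 0.2 + 0.2 + 19.56 + 0.04  # = 20.0 mL

agno3   = diluted(0.01, 0.2,  V_TOTAL)   # silver nitrate, 0.01 M stock
citrate = diluted(0.3,  0.2,  V_TOTAL)   # tri-sodium citrate, 0.3 M stock
nabh4   = diluted(1e-5, 0.04, V_TOTAL)   # sodium borohydride, 0.01 mM stock

print(f"AgNO3: {agno3:.1e} M, citrate: {citrate:.1e} M, NaBH4: {nabh4:.1e} M")
```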
When the Ag nanostructure preparation process was finished, the Ag-coated optical fibre substrate was taken out, carefully rinsed with deionized water and then dried with a stream of pure nitrogen gas. The Ag nanostructures coated on the optical fibre substrate remained stable under further exposure to the 532 nm laser beam in air and/or in liquid environments without Ag+ ions.
Schematic diagram of the preparation of the SERS optical fibre substrates.
Measured sample preparation
In general, R6G is used to test the stability of SERS substrates34,35, so in our research we first prepared the R6G test solutions in order to verify the stability of the fabricated SERS substrates before analysing the performance for permethrin. Test aqueous solutions of R6G (10−5 M, 10−7 M and 10−8 M) were prepared by dilution in ultrapure water from a stock solution of R6G (10−2 M). A 13 mL, 1000 ppm stock analyte solution was prepared by dissolving 10 mg of permethrin in 12.62 ml of high-purity methanol. A series of standard solutions with concentrations of 20 ppm, 10 ppm, 5 ppm, 1 ppm, 0.5 ppm and 0.1 ppm was then formed by diluting the stock in ultrapure water, given the limited solubility of the permethrin analyte in water.
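The serial-dilution step above can be sketched with the standard relation C1·V1 = C2·V2. Note the 1 mL working volume in this sketch is an illustrative assumption, not a value given in the text:

```python
# Volume of 1000 ppm permethrin stock needed per 1 mL of each working standard,
# from C1*V1 = C2*V2. The 1 mL final volume is an assumption for illustration.
STOCK_PPM = 1000.0

def stock_volume_uL(target_ppm, final_vol_mL=1.0):
    """Microlitres of stock required to reach target_ppm in final_vol_mL of diluent."""
    return target_ppm * final_vol_mL / STOCK_PPM * 1000.0  # convert mL -> uL

for c in (20, 10, 5, 1, 0.5, 0.1):
    print(f"{c:>5} ppm -> {stock_volume_uL(c):.1f} uL of stock per mL")
```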
Droplets of 1 µL of the diluted R6G solutions were then dropped onto the SERS optical fibre substrate and naturally dried in air before SERS detection, so that R6G molecules could be adsorbed onto the Ag nanostructures to test the stability of the SERS substrate. Similarly, the permethrin solutions were analysed with this approach to investigate the monitoring capability of the fabricated SERS optical fibre substrate.
The Raman scattering measurements were performed at room temperature with a Raman spectrometer system (Horiba Scientific LabRAM HR Evolution) using a 100× objective lens, an excitation wavelength of 532 nm and a laser power of 3 mW. The SERS optical fibre substrate was fixed perpendicular to the objective lens by a fibre holder and the exciting laser beam was focused onto it. The Raman spectra were obtained with an acquisition time of 30 s and two accumulations. All Raman spectral data were background-subtracted by the spectral acquisition software (LabSpec 6.0 from Horiba Jobin Yvon) and averaged over four randomly chosen measurement points on the optical fibre substrate.
The morphology of the fabricated silver nanostructure SERS optical fibre substrates is shown in the FE-SEM images of Figs 2 and 3. The silver nanostructures were uniformly distributed and confined within the core area of the end of the 62.5/125 µm multimode fibre, as clearly depicted in Fig. 2(a). The composition of the silver nanostructure SERS optical fibre substrate was analysed from the EDX spectrum measured in the region marked in the inset, as displayed in Fig. 2(b): the sharp O, Ge and Si peaks correspond to the components of a typical optical fibre with Ge doped into the SiO2 substrate, and the sharp Ag peak definitively demonstrates Ag nanostructure growth after laser-assisted deposition. The fundamental mechanism of silver nanostructure growth on the end of the optical fibre core can be interpreted as follows. Spherical silver seed particles were first formed through the reduction of silver nitrate by sodium borohydride in the presence of tri-sodium citrate. Under illumination by the laser beam, with an optical power density of 78 W/cm2, the suspended spherical silver seeds in the growth solution were promptly driven toward the region where the optical field was more intense and, owing to the optical gradient force near the surface of the fibre core end, were trapped and deposited onto that surface. The silver nanostructures then grew as more silver ions were strongly reduced and deposited onto the previously formed silver seeds due to the surface plasmon resonance effect of the seeds36,37,38,39. The evolution of silver nanostructure growth can be clearly observed at different exposure times, as demonstrated by the SERS spectra shown in Fig. 3 (curves A, B, C) and the corresponding SEM images depicted in Fig. 3(A1, B1, C1).
A silver nano-pebble structure, with feature sizes from 70 to 180 nm, was uniformly formed on the surface of the fibre core end under laser illumination for a short exposure time of 3 min, as depicted in Fig. 3(A1). After exposure for 7 min, silver nano-aggregates were generated and short silver branches developed, with sizes from 90 to 120 nm, as shown in Fig. 3(B1). When the exposure time was increased to 8 min, a silver nano-dendrite structure formed with higher density, and the branches enlarged to around 110–160 nm, as represented in Fig. 3(C1). To investigate the SERS capability of the silver nanostructure optical fibre substrates prepared at different laser exposure times, the SERS spectra of a 10−5 M R6G solution adsorbed onto the prepared substrates were recorded in Fig. 3 (curves A, B, C). Several strong bands of R6G adsorbed on the silver nanostructures were clearly observed, with characteristic peaks at 613.2 cm−1 assigned to the C-C-C ring in-plane mode, and at 1361.5 cm−1, 1506.1 cm−1 and 1650.7 cm−1 corresponding to symmetric C-C in-plane stretching vibrations40. We also observed that the SERS signal intensity of R6G was greatly enhanced with increasing exposure time of the silver nanostructure growth process. The characteristic Raman signal intensities of R6G adsorbed on the silver nano-dendrite substrate are much higher than those on the silver nano-pebble substrate at 613.2 cm−1, 1361.5 cm−1, 1506.1 cm−1 and 1650.7 cm−1, by factors of 7.25, 11.44, 11.87, and 13.1, respectively. To quantitatively estimate the Ag nanostructure-induced enhancement, we calculated the relative enhancement factor (REF) for the different exposure times. The REF of a SERS optical fibre substrate depends strongly on the SERS conditions, such as substrate morphology, analyte and excitation wavelength, and can be calculated by the formula17,41:
$$\mathrm{REF} = \frac{I_{\mathrm{SERS}} \times C_{R}}{I_{R} \times C_{\mathrm{SERS}}}$$
where ISERS is the SERS intensity of R6G adsorbed on an Ag nanostructure optical fibre substrate and IR is the normal (non-SERS) Raman intensity of R6G on an optical fibre probe without Ag nanostructures. CSERS is the concentration of R6G (10−5 M) in the SERS spectrum and CR is the concentration of R6G (1 M) in the normal Raman spectrum, as depicted in Fig. 3 (background line). R6G at a concentration of 1 M was tested on the optical probe without silver nanostructures, giving an intensity at 613.2 cm−1 of 321 in arbitrary units (a.u.). The enhanced intensity of 10−5 M R6G on the silver nano-dendrite optical fibre substrate was 33392.6 a.u. The calculated REF values of the characteristic Raman signals of R6G adsorbed on the silver nano-pebble, silver nano-aggregate and silver nano-dendrite optical fibre substrates at 613.2 cm−1, 1361.5 cm−1, 1506.1 cm−1 and 1650.7 cm−1 are summarized in Table 1. We can clearly observe that the REF of the characteristic Raman signals of the 10−5 M R6G molecule increased as the Ag nanostructures developed with increasing exposure time; longer exposure generates more exotic silver morphologies. The REF was highest for the silver nano-dendrite optical fibre substrate prepared at an exposure time of 8 min. The large REF of SERS signals on this Ag nano-dendrite substrate can be attributed to the two SERS enhancement mechanisms: electromagnetic enhancement and chemical enhancement. Chemical enhancement involves charge transfer between the SERS substrate and the detected molecules, but typically contributes a much smaller factor than the electromagnetic mechanism. Consequently, the high SERS signal intensity of these Ag dendrite optical fibre probes can be primarily assigned to an electromagnetic enhancement effect.
This effect is generated by light-induced localized surface plasmon resonance (LSPR). The short distances between dendritic branches, the regions of large curvature between the trunk and branches, and the fractal features of the Ag dendrite nanostructure may allow the formation of many SERS 'hot-spots', where the optical field intensity is much higher than at other sites. The Ag nano-dendrite optical fibre substrate can therefore provide a large number of highly active hot-spots that promote giant electromagnetic enhancement, thus yielding a higher enhancement factor. In the case of silver nano-pebbles, SERS 'hot spots' are located only between the nano-pebbles, while the branches of the silver nano-aggregates are short and their density is low, so their REF values are smaller.
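Plugging the quoted intensities and concentrations into the REF formula reproduces the enhancement of the nano-dendrite substrate at the 613.2 cm−1 band. This is a minimal sketch using only the numbers stated above:

```python
# Relative enhancement factor: REF = (I_SERS * C_R) / (I_R * C_SERS).
def relative_enhancement_factor(i_sers, c_sers, i_raman, c_raman):
    """REF from SERS and normal-Raman intensities at matched bands."""
    return (i_sers * c_raman) / (i_raman * c_sers)

ref = relative_enhancement_factor(
    i_sers=33392.6,  # a.u., 1e-5 M R6G on the Ag nano-dendrite fibre substrate
    c_sers=1e-5,     # M
    i_raman=321.0,   # a.u., 1 M R6G on the bare fibre probe (no Ag)
    c_raman=1.0,     # M
)
print(f"REF at 613.2 cm^-1: {ref:.2e}")  # on the order of 1e7
```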
SEM image (a) and EDX spectrum (b) of silver nanostructure SERS-active on optical fibre substrate, the inset shows EDX measured zone.
SERS spectra of R6G solution (10−5 M) recorded for SERS substrates with grown Ag nano-pebbles (curve A), Ag nano-aggregates (curve B) and Ag nano-dendrites (curve C) on the end of the optical fibre core, with the corresponding SEM images (A1), (B1) and (C1), respectively.
Table 1 Values of REF of characteristic Raman signals of R6G-adsorbed silver nanostructures optical fibre substrates.
Besides high enhancement and sensitivity, stability and reproducibility are also important parameters for evaluating the quality of a SERS substrate. The stability of the Ag nano-dendrite optical fibre substrate can be improved by limiting the oxidation of its surface, which was achieved by exposing it to the 532 nm laser beam in air immediately after preparation. Figure 4(a) shows the Raman spectra of 10−5 M R6G adsorbed on the Ag nano-dendrite optical fibre substrate before processing, after processing, and after storage for 700 h and 1000 h. Figure 4(b) shows that the intensities of the Raman signals at 613.2 cm−1, 1361.5 cm−1, 1506.1 cm−1 and 1650.7 cm−1 decreased by about 14.6%, 15%, 13.5%, and 11.1%, respectively, after processing the surface of the SERS optical fibre substrate, and then remained stable over a storage time of 700 h. This demonstrates that the silver nano-dendrite structure was firmly attached to the surface of the fibre end. After a storage time of 1000 h, the intensities at 613.2 cm−1, 1361.5 cm−1, 1506.1 cm−1 and 1650.7 cm−1 had decreased by about 29.2%, 43.9%, 44.2%, and 45.8%, respectively, owing to the oxidation of the silver. Figure 5 presents Raman spectra of R6G measured at three different zones of the Ag nano-dendrite optical fibre substrate, with the spectra of each zone analysed over four mapping runs. The Raman signal intensities in the three zones show no shift and are nearly identical, as depicted in Fig. 5 (Sample 1 with 10−8 M R6G) and Fig. 5 (Sample 2 with 10−7 M R6G). The relative standard deviation of the Raman signal at 613.2 cm−1 was calculated to be about 3%, as shown in the inset of Fig. 5, which proves the reproducibility of the measurements.
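The reproducibility metric used here is the relative standard deviation over repeated mapping runs. A minimal sketch follows, with hypothetical intensity values (the text reports only the final RSD of about 3%, not the individual run intensities):

```python
# Relative standard deviation (RSD) of repeated band intensities, in percent.
import statistics

def rsd_percent(values):
    """Sample standard deviation divided by the mean, expressed in percent."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Example: four mapping runs on one zone (illustrative numbers only).
intensities = [10250.0, 10080.0, 10410.0, 10190.0]
print(f"RSD = {rsd_percent(intensities):.2f} %")
```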
These results therefore illustrate that the SERS substrate with Ag nano-dendrite structures on the fibre core end, prepared with laser assistance at the optimum exposure time of 8 min, is a good candidate for an improved SERS optical fibre probe. The silver nano-dendrite SERS-active optical fibre substrates were selected for the detection of permethrin, with molecular formula C21H20Cl2O3 and the molecular structure shown in the inset of Fig. 6. The Raman spectrum of solid permethrin (99.5% purity) on an optical fibre substrate without Ag nanostructures and the SERS spectrum of a 10 ppm permethrin solution on the silver nano-dendrite optical fibre substrate are shown in Fig. 6. The primary characteristic Raman peaks of permethrin are marked in the spectra, with the corresponding band assignments listed in Table 242. The strong Raman bands of permethrin adsorbed on the Ag nano-dendrites are clearly observed, with characteristic peaks at around 303.2 cm−1 associated with the C-O-C scissoring vibration, the strongest peak at 998.9 cm−1 assigned to the benzene ring breathing mode, 1017.6 cm−1 and 1179.5 cm−1 corresponding to the C-O stretching vibration mode, 1065.2 cm−1 assigned to the scissoring vibration mode of C-H on the benzene ring, and 1574.1 cm−1 corresponding to the symmetric C-C in-plane stretching vibrations; these bands are enhanced and shifted by a few cm−1 relative to the characteristic peaks in the Raman spectrum of solid permethrin on the substrate without Ag nanostructures. The interaction of the permethrin solution with the silver nano-dendrite optical fibre substrate may account for these small shifts of the characteristic peaks in the SERS spectrum.
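The band assignments above can be collected into a small lookup helper for reading measured spectra. The peak positions are those quoted in the text; the matching tolerance is an assumption for illustration:

```python
# Nearest-neighbour lookup of the permethrin band assignments quoted in the text.
PERMETHRIN_BANDS = {
    303.2:  "C-O-C scissoring",
    998.9:  "benzene ring breathing (strongest band)",
    1017.6: "C-O stretching",
    1065.2: "C-H scissoring on benzene ring",
    1179.5: "C-O stretching",
    1574.1: "C-C in-plane stretching",
}

def assign_band(peak_cm1, tolerance=5.0):
    """Return the nearest tabulated assignment within tolerance (cm^-1), else None."""
    nearest = min(PERMETHRIN_BANDS, key=lambda p: abs(p - peak_cm1))
    return PERMETHRIN_BANDS[nearest] if abs(nearest - peak_cm1) <= tolerance else None

print(assign_band(1000.1))  # matches the 998.9 cm^-1 ring-breathing mode
```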
The SERS spectra of permethrin solutions with concentrations of 20 ppm, 10 ppm, 5 ppm, 4 ppm, 3 ppm, 2.5 ppm, 2 ppm, 1.5 ppm, 1 ppm, 0.5 ppm and 0.1 ppm dispersed onto silver nano-dendrite optical fibre substrates produced by the same process are shown in Fig. 7(a). The intensity of the SERS signal increases as the concentration of the permethrin solution increases. The strongest Raman peak, at 998.9 cm−1 and assigned to the benzene ring breathing mode, is clearly observed, so this peak can be used for quantitative SERS detection of permethrin in solution. Figure 7(b) shows the dependence of the intensity of the strongest SERS peak at 998.9 cm−1 on the permethrin concentration. Linear regression at this peak gives I = 823.32·C + 3487.7, as shown in the inset of Fig. 7(b), which proves the applicability of the silver nano-dendrite optical fibre substrate for quantitative analysis of permethrin pesticide. The LOD is defined as LOD = 3 × SD/slope, where SD is the standard deviation of the noise43, and the determined value is 0.0035 ppm. This LOD is far lower than that of permethrin on a silver nanofilm deposited on a glass chip44. According to the food hygiene regulation of the Vietnamese Ministry of Health (No. 50/2016/TT-BYT), the maximum residue level (MRL) of permethrin residue in green or black tea is 20 ppm. Hence, this SERS optical fibre is an excellent candidate for application in permethrin pesticide detection.
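The quoted LOD follows directly from the calibration slope via LOD = 3 × SD/slope. In the sketch below the slope is the value from the text, while the noise standard deviation is back-calculated as an assumption so that the sketch reproduces the reported 0.0035 ppm:

```python
# Limit of detection from the calibration fit I = slope*C + intercept.
SLOPE = 823.32      # a.u. per ppm at the 998.9 cm^-1 band (from the text)
SD_NOISE = 0.96     # a.u.; assumed noise standard deviation (not given in the text)

def limit_of_detection(sd_noise, slope):
    """LOD = 3 * SD / slope, the usual 3-sigma criterion."""
    return 3.0 * sd_noise / slope

lod = limit_of_detection(SD_NOISE, SLOPE)
print(f"LOD ~ {lod:.4f} ppm")  # ~0.0035 ppm, well below the 20 ppm MRL for tea
```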
(a) SERS spectra of 10−5 M R6G adsorbed on the Ag nano-dendrite optical fibre substrate before processing, after processing, and after storage for 700 h and 1000 h. (b) Decrease of the Raman signal intensities at 613.2 cm−1, 1361.5 cm−1, 1506.1 cm−1 and 1650.7 cm−1 after surface processing of the Ag nano-dendrite SERS optical fibre substrate and after storage times of 700 h and 1000 h.
SERS spectra of 10−8 M (sample 1) and 10−7 M (sample 2) R6G measured at three different zones of two Ag nano-dendrite optical fibre substrates; the insets show the relative standard deviation of the Raman signal at 613.2 cm−1.
Normal Raman spectrum of solid permethrin on an optical fibre substrate (A) and SERS spectrum of a 10 ppm permethrin solution recorded on the Ag nano-dendrite SERS optical fibre substrate (B); the inset shows the molecular structure of permethrin.
Table 2 The primary characteristic Raman peaks of permethrin corresponding to their assignments42.
SERS spectra of permethrin solutions with concentrations of 0.1 ppm, 0.5 ppm, 1 ppm, 1.5 ppm, 2 ppm, 2.5 ppm, 3 ppm, 4 ppm, 5 ppm, 10 ppm and 20 ppm measured with the Ag nano-dendrite SERS optical fibre substrate (a). The dependence of the vibrational band intensity at 998.9 cm−1 on the permethrin concentration (b); the inset shows a linear fit over the concentration range of 0.1 ppm–5 ppm.
In conclusion, we have successfully detected permethrin pesticide using a novel silver nano-dendrite SERS-active optical fibre substrate prepared by a facile and low-cost laser-assisted photochemical method. The Ag nano-dendrite SERS optical fibre substrates showed a much higher Raman enhancement factor than the nano-pebble structure and also exhibited good stability and reproducibility, with an average relative standard deviation of less than 3%. The Ag nano-dendrite optical fibre substrates were applied to the detection of permethrin pesticide in the concentration range of 0.1 ppm–20 ppm, with an LOQ of 0.1 ppm and a calculated LOD of 0.0035 ppm. SEM images confirmed that the Ag nanostructures were directly grown and immobilized on the multi-mode fibre ends by irradiation with a green laser beam from a mixed solution containing silver ions, and that growth occurred only in the main laser-irradiated region. The surface of the silver nano-dendrite SERS optical fibre substrate was prepared and treated simultaneously by the assisting 532 nm laser via the optical fibre. Rhodamine 6G aqueous solutions were employed as a probe to characterize the enhancement, stability and uniformity of the resulting SERS substrates with Ag nanostructures on the optical fibre. The developed SERS optical fibre substrates are a promising candidate for portable SERS equipment enabling direct, rapid, real-time and non-destructive detection of pesticide residues in liquid environments in outdoor fields.
Environmental Protection Agency. "Permethrin Facts (RED Fact Sheet)". Archived from the original on 28 July 2011. Retrieved 2 September 2011.
Sounderajan, S., Udas, A. C. & Venkataramani, B. Characterization of arsenic (V) and arsenic (III) in water samples using ammonium molybdate and estimation by graphite furnace atomic absorption spectroscopy. J. Hazard. Mater. 149, 238–242 (2007).
Zhang, N., Fu, N., Fang, Z. T., Feng, Y. H. & Ke, L. Simultaneous multi-channel hydride generation atomic fluorescence spectrometry determination of arsenic, bismuth, tellurium and selenium in tea leaves. Food Chem. 124, 1185–1188 (2011).
Santos, L. J. & Galceran, M. T. The application of gas chromatography to environmental analysis. TrAC Trend. Anal.Chem. 21, 672 (2002).
Cao, T. T. et al. An interdigitated ISFET-type sensor based on LPCVD grown graphene for ultrasensitive detection of carbaryl. Sens. Actuators B Chem. 260, 78–85 (2018).
Li, M., Cushing, S. K. & Wu, N. Q. Plasmon-enhanced optical sensors: a review. Analyst 140, 386–406 (2015).
Pilat, Z. et al. Detection of Chloroalkanes by Surface-Enhanced Raman Spectroscopy in Microfluidic Chips. Sensors 18, 3212–18 (2018).
Huang, S., Hu, J., Guo, P., Liu, M. & Wu, R. Rapid detection of chlorpyriphos residue in rice by surface-enhanced Raman scattering. Analytical Methods 7, 4334–4339 (2015).
Shi, G. et al. Biomimetic synthesis of Ag-coated glasswing butterfly arrays as ultra-sensitive SERS substrates for efficient trace detection of pesticides. Beilstein J. Nanotechnol. 10, 578–588 (2019).
Pham, T. B., Bui, H., Le, H. T. & Pham, V. H. Characteristics of the Fiber Laser Sensor System Based on Etched-Bragg Grating Sensing Probe for Determination of the Low Nitrate Concentration in Water. Sensors 17, 0007 (2017).
Le, V. H. et al. Measurement of temperature and concentration influence on the dispersion of fused silica glass photonic crystal fiber infiltrated with water-ethanol mixture. Optics Communications 407, 417–422 (2018).
Cennamo, N., Zeni, L., Arcadio, F., Catalano, E. & Minardo, A. A Novel Approach to Realizing Low-Cost Plasmonic Optical Fiber Sensors: Light-Diffusing Fibers Covered by Thin Metal Films. Fibers 7, 34–7 (2019).
Lin, L. et al. Rapid Determination of Thiabendazole Pesticides in Rape by Surface Enhanced Raman Spectroscopy. Sensors 18, 1082–14 (2018).
Qi, H., Chen, H., Wang, Y. & Jiang, L. Detection of ethyl carbamate in liquors using surface-enhanced Raman spectroscopy. Royal society open science 5, 181539–8 (2018).
Lee, H. et al. Gold Nanopaticle-Coated ZrO2-Nanofiber Surface as a SERS-Active Substrate for Trace Detection of Pesticide Residue. Nanomaterials 8, 402–11 (2018).
Jiang, Y., Wu, X. J., Li, Q., Li, J. & Xu, D. Facile synthesis of gold nanoflowers with high surface-enhanced Raman scattering activity. Nanotechnology 22, 385601–6 (2011).
Dies, H., Siampani, M., Escobedo, C. & Docoslis, A. Direct Detection of Toxic Contaminants in Minimally Processed Food Products Using Dendritic Surface-Enhanced Raman Scattering Substrates. Sensors 18, 2726–11 (2018).
Lim, D. K. et al. Highly uniform and reproducible surface-enhanced Raman scattering from DNA-tailorable nanoparticles with 1-nm interior gap. Nature Nanotechnology 6, 452 (2011).
Li, S., Zhang, H., Xu, L. & Chen, M. Laser-induced construction of multi-branched CuS nanodendrites with excellent surface- enhanced Raman scattering spectroscopy in repeated applications. Optics Express 25, 16204–16213 (2017).
Lee, B. S., Lin, P. C., Lin, D. Z. & Yen, T. J. Rapid Biochemical Mixture Screening by Three-Dimensional Patterned Multifunctional Substrate with Ultra-Thin Layer Chromatography (UTLC) and Surface Enhanced Raman Scattering (SERS). Scientific Reports 8, 18967–7 (2018).
Yin, H. J. et al. Ag@Au core-shell dendrites: a stable, reusable and sensitive surface enhanced Raman scattering substrate. Scientific Reports 5, 14502 (2015).
Li, D., Liu, J., Wang, H., Barrow, C. J. & Yang, W. Electrochemical synthesis of fractal bimetallic Cu/Ag nanodendrites for efficient surface enhanced Raman spectroscopy. Chem. Commun. 52, 10968–10971 (2016).
Rafailovic, L. D. et al. Functionalizing Aluminum Oxide by Ag Dendrite Deposition at the Anode during Simultaneous Electrochemical Oxidation of Al. Adv. Mater. 27, 6438–6443 (2015).
Stamplecoskie, K. G. & Scaiano, J. C. Light emitting diode irradiation can control the morphology and optical properties of silver nanoparticles. Journal of the American Chemical Society 132, 1825–1827 (2010).
Lee, S.-W. et al. Effect of temperature on the growth of silver nanoparticles using plasmon-mediated method under the irradiation of green LEDs. Materials 7, 7781–7798 (2014).
Xu, L., Li, S., Zhang, H., Wang, D. & Chen, M. Laser-induced photochemical synthesis of branched Ag@Au bimetallic nanodendrites as a prominent substrate for surface-enhanced Raman scattering spectroscopy. Optics Express 7, 7408–7417 (2017).
Markin, A. V., Markina, N. E. & Goryacheva, I. Y. Raman spectroscopy based analysis inside photonic-crystal fibers. Trends in Analytical Chemistry 88, 185–197 (2017).
Liu, C. et al. A surface-enhanced Raman scattering (SERS)-active optical fiber sensor based on a three-dimensional sensing layer. Sensing and Bio-Sensing Research 1, 8–14 (2014).
Matikainen, A., Nuutinen, T., Vahimaa, P. & Honkanen, S. A solution to the fabrication and tarnishing problems of surface-enhanced Raman spectroscopy (SERS) fiber probes. Scientific Reports 5, 8320 (2015).
Huang, Z. et al. Tapered Optical Fiber Probe Assembled with Plasmonic Nanostructures for Surface-Enhanced Raman Scattering Application. ACS Applied Materials & Interfaces 7, 17247–17254 (2015).
Xu, W. et al. SERS Taper-Fiber Nanoprobe Modified by Gold Nanoparticles Wrapped with Ultrathin Alumina Film by Atomic Layer Deposition. Sensors 17, 467–9 (2017).
Liu, T. et al. Combined taper-and-cylinder optical fiber probes for highly sensitive surface-enhanced Raman scattering. Applied Physics B 116, 799–803 (2014).
Zhang, J., Chen, S., Gong, T., Zhang, X. & Zhu, Y. Tapered Fiber Probe Modified by Ag Nanoparticles for SERS Detection. Plasmonics 11, 743–751 (2016).
Chan, Y. F., Zhang, C. X., Wu, Z. L., Zhao, D. M. & Wang, W. Ag dendritic nanostructures as ultrastable substrates for surface-enhanced Raman scattering. Appl. Phys. Lett. 102, 183118 (2013).
Qiu, H. et al. Reliable molecular trace-detection based on flexible SERS substrate of graphene/Ag-nanoflowers/PMMA. Sens. Actuators B: Chem. 249, 439–450 (2017).
Langille, M. R., Personick, M. L. & Mirkin, C. A. Plasmon-Mediated Syntheses of Metallic Nanostructures. Angew. Chem. Int. Ed. 52, 13910–13940 (2013).
Zhai, Y. et al. Polyvinylpyrrolidone-induced anisotropic growth of gold nanoprisms in plasmon-driven synthesis. Nature Materials 15, 889–895 (2016).
Maillard, M., Huang, P. & Brus, L. Silver Nanodisk Growth by Surface Plasmon Enhanced Photoreduction of Adsorbed [Ag+]. Nano Letters 11, 1611–1615 (2003).
Bjerneld, E. J., Murty, K. V. G. K., Prikulis, J. & Kall, M. Laser-induced Growth of Ag nanoparticles from Aqueous Solutions. Chemphyschem 1, 116–119 (2002).
Hildebrandt, P. & Stockburger, M. Surface-Enhanced Resonance Raman Spectroscopy of Rhodamine 6G Adsorbed on Colloidal Silver. Journal of Physical Chemistry 88, 5935–5944 (1984).
Yang, M. et al. Silver nanoparticles decorated nanoporous gold for surface-enhanced Raman scattering. Nanotechnology 28, 055301–8 (2017).
Li, W., Lu, B., Sheng, A., Yang, F. & Wang, Z. Spectroscopic and theoretical study on inclusion complexation of beta-cyclodextrin with permethrin. Journal of Molecular Structure 981, 194–203 (2010).
Nguyen, H. L., Nguyen, T. H., Ngo, D. N., Kim, Y.-H. & Joo, S.-W. Surface-Enhanced Raman Scattering detection of Fipronil pesticide absorbed on silver nanoparticles. Sensors 19, 1355 (2019).
Hao, J., Wang, Q. K., Weimer, W., Abell, J. & Wilson, M. SERS Spectra of Permethrin on Silver Nanofilm. American Journal of Nano Research and Application 3, 29–32 (2015).
This work is financially supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.03-2018.306, and Vietnam Academy of Science and Technology (VAST) under Program for Development of Physics in Vietnam to the year of 2020 under grant number KHCBVL.04/18-19.
Institute of Materials Science, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Cau Giay, 100000, Hanoi, Vietnam
Thanh Binh Pham
, Van Chuc Nguyen
, Thuy Van Nguyen
, Duc Chinh Vu
, Van Hoi Pham
& Huy Bui
University of Science and Technology of Hanoi, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Cau Giay, 100000, Hanoi, Vietnam
Thi Hong Cam Hoang
Hanoi National University of Education, 136 Xuan Thuy, Cau Giay, 100000, Hanoi, Vietnam
Van Hai Pham
Thanh Binh Pham, Van Hoi Pham and Huy Bui devised the main ideas of the project. Thanh Binh Pham and Thi Hong Cam Hoang wrote the manuscript. Thanh Binh Pham, Van Hai Pham, Van Chuc Nguyen, Thuy Van Nguyen, Duc Chinh Vu contributed to do experiments and revised the manuscript.
Correspondence to Thanh Binh Pham.
The authors declare no competing interests.
Pham, T.B., Hoang, T.H.C., Pham, V.H. et al. Detection of Permethrin pesticide using silver nano-dendrites SERS on optical fibre fabricated by laser-assisted photochemical method. Sci Rep 9, 12590 (2019) doi:10.1038/s41598-019-49077-1
Received: 04 June 2019
DOI: https://doi.org/10.1038/s41598-019-49077-1
\begin{document}
\begin{frontmatter}
\title{Capacity of an associative memory model on random graph architectures} \runtitle{Associative memory model on random graphs}
\begin{aug}
\author[a]{\inits{M.}\fnms{Matthias}~\snm{L\"owe}\thanksref{a}\ead[label=e1]{[email protected]}} \and \author[b]{\inits{F.}\fnms{Franck}~\snm{Vermet}\corref{}\thanksref{b}\ead[label=e2]{[email protected]}}
\address[a]{Fachbereich Mathematik und Informatik, Universit\"at M\" unster, Einsteinstra\ss e 62, 48149 M\"unster, Germany. \printead{e1}} \address[b]{Laboratoire de Math\'ematiques, UMR CNRS 6205, Universit\'e de Bretagne Occidentale, 6 avenue Victor Le Gorgeu, CS 93837, F-29238 Brest Cedex 3, France. \printead{e2}}
\end{aug}
\received{\smonth{1} \syear{2014}}
\begin{abstract} We analyze the storage capacity of Hopfield models on classes of random graphs. While such a setup has been analyzed for the case that the underlying random graph model is an Erd\"os--R\'enyi graph, other architectures, including those investigated in the recent neuroscience literature, have not been studied yet. We develop a notion of storage capacity that highlights the influence of the graph topology and give results on the storage capacity for not too irregular random graph models. The class of models investigated includes the popular power law graphs for some parameter values. \end{abstract}
\begin{keyword} \kwd{associative memory} \kwd{Hopfield model} \kwd{powerlaw graphs} \kwd{random graphs} \kwd{random matrix} \kwd{spectral theory} \kwd{statistical mechanics} \end{keyword} \end{frontmatter}
\section{Introduction}\label{sec1} Thirty years ago, in 1982, Hopfield introduced a toy model for a brain that renewed the interest in neural networks and has nowadays become popular under the name Hopfield model \cite{Hopfield1982}. This model in its easiest version assumes that the neurons are fully connected and have Ising-type activities, that is, they take the values $+1$, if a neuron is firing and $-1$, if it is not, and is based on the principles of statistical mechanics. Since Hopfield's ground-breaking work, it has stimulated a large number of researchers from the areas of computer science, theoretical physics and mathematics.
In the latter field, the Hopfield model is particularly challenging, since it also can be considered as a spin glass model and spin glasses are notoriously difficult to study. A survey over the mathematical results in this area can be found in either \cite{bovierbook} or \cite{talabook}. It is worth mentioning that even in the parameter region where no spin glass phase is expected, the Hopfield model still has to offer surprising phenomena such as in \cite{gentzloewe}.
When being considered as a neural network, one of the aspects that have been discussed most intensively is its so-called storage capacity. Here, one tries to store information, so-called patterns in the model, and the question is, how many patterns can be successfully retrieved by the network dynamics, that is, how much information can be stored in a model of $N$ neurons. One of the early mathematical results states that if the patterns are independent and identically distributed (i.i.d. for short) and consist of i.i.d. spins and if their number $M$ is bounded by $\frac{1}2 N/\log N$, the patterns can be recalled (see \cite{MPRV}) with probability converging to one as $N \to\infty$ and that the constant $\frac{1}2$ is optimal (see \cite{Bov97}). Similar results hold true, if one starts with a corrupted input -- if more than fifty percent of the input spins are correct, one still is able to restore the originally ``learned'' patterns. However, if one also allows for small errors in the retrieval of the patterns one obtains a storage capacity of $M=\alpha N$ for some value of $\alpha $ smaller than $0.14$ (see \cite{newman,loukianova,talagrand}). This latter result is in agreement with both, computer simulations as well as the predictions of the non-rigorous replica method from statistical physics (see \cite{AGS}).
The setup of the Hopfield model has been generalized in various aspects, for example, the condition of independence has been relaxed (see \cite{L98,LV_05}), patterns with more than two spin values have been considered (see \cite{Piccoetal,LV_BEG,LV_05}), and Hopfield models on Erd\"os--R\'enyi graphs were studied \cite{BG92,BG93,talagrand,LV10}. The present paper starts with the observation that, even though they are more general than the complete graph, Erd\"os--R\'enyi graphs do not seem to be the favorite brain architectures for scientists working in neurobiology. There, the standard paradigm currently is rather to model the brain as a small world graph (see \cite{bassett,Rubinov}). We will focus on the question of how many patterns can be stored in a Hopfield model on a random graph, if this graph is no longer necessarily an Erd\"os--R\'enyi graph. The classical notion of storage capacity requires that the patterns are fixed points of the retrieval dynamics, that is, local minima of the energy landscape of the Hopfield model (or, in \cite{newman,loukianova,talagrand}, not too far apart from such minima). It turns out that this notion is already sensitive to the architecture of the network \cite{LV10}. So it is conceivable that there is a major influence of the underlying graph structure on the model's capability to retrieve corrupted information. Associativity of a network can be described as the potential to repair corrupted information. We will therefore work with a notion of storage capacity that takes this ability into account. Moreover, the relationship between network connectivity and the performance of associative memory models has already been investigated in computer simulations (see, e.g., \cite{ChenAdams}). Therefore, the goal of the present note is to establish rigorous bounds on the storage capacity of the Hopfield model on a wide class of random graph models, where we interpret ``storage'' as the ability to retrieve corrupted information.
Similar questions have been addressed for the complete graph by Burshtein \cite{Burshtein}.
We organize the paper in the following way: Section~\ref{sec2} introduces the basic model we will be working with in the present paper. It also addresses the question of what exactly we mean when talking about the storage of patterns. Section~\ref{sec3} contains the main result of this paper. The number of patterns one is able to store, in the sense that a number of errors proportional to $N$ can be repaired by $\mathcal{O}(\log N)$ steps of the retrieval dynamics, is of order $\operatorname{const.}(\lambda_1)^2/ (m\log N)$, where $\lambda_1$ is the largest eigenvalue of the adjacency matrix of the graph and $m$ its maximal degree. A main ingredient of the proof is thus to analyze the spectrum of the adjacency matrix of the graph that serves as a model of the network architecture. This analysis is provided in Section~\ref{sec4}. Finally, Section~\ref{sec5} contains the proof of the main result. An \hyperref[app]{Appendix} will contain estimates on the minimum and maximum degree of an Erd\"os--R\'enyi graph. These are needed to apply our main result to the setting of such random graphs and may also be of independent interest.
\section{The model}\label{sec2} The Hopfield model is a spin model on $N\in\mathbb{N}$ spins. $\sigma\in \Sigma_N:= \{-1,+1\}^N$ describes the neural activities of $N$ neurons. The information to be stored in the model are patterns $\xi^1, \ldots, \xi^M \in\{-1,+1\}^N$. As usual, we will assume that these patterns are i.i.d. and consist of i.i.d. spins $(\xi_i^\mu)$ with
\[ {\mathbb{P}}\bigl(\xi_i^\mu= \pm1\bigr)= \tfrac12. \]
Note that $M$ may and in the interesting cases will be a function of $N$. The architecture of the Hopfield model is an undirected graph $G=(V,E)$, where $V=\{1, \ldots, N\}$. With the help of the patterns and the graph, one defines the sequential dynamics $S=T_N \circ T_{N-1}\circ\cdots\circ T_1$ and the parallel dynamics $T=(T_i)$ on $\Sigma_N$. By definition $T_i$ only changes the $i$th coordinate of a configuration $\sigma$ and
\begin{eqnarray*} &&S(\sigma)=T_N \circ T_{N-1}\circ\cdots\circ T_1(\sigma), \qquad T(\sigma )= \bigl(T_1(\sigma), \ldots, T_N(\sigma)\bigr), \\ &&\quad\mbox{with } T_i(\sigma)= \mathop{\operatorname{sgn}} \Biggl(\sum_{j=1}^N \sigma_j a_{ij} \sum_{\mu=1}^M \xi_i^\mu\xi_j^\mu\Biggr) \end{eqnarray*}
(with the convention that $\mathop{\operatorname{sgn}}(0)=1$, e.g.). Here, $a_{ij}=a_{ji}=1$ if the edge between $i$ and $j$ is in $E$ and $a_{ij}=a_{ji}=0$ otherwise.
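The parallel dynamics $T$ just defined can be sketched in a few lines of Python. This is an illustration and not part of the paper; the use of numpy and all variable names are our own choices:

```python
import numpy as np

def parallel_update(sigma, A, patterns):
    """One step of the parallel retrieval dynamics T on a graph.

    sigma    : (N,) array of +/-1 spins (current configuration)
    A        : (N, N) symmetric 0/1 adjacency matrix of the graph G
    patterns : (M, N) array of +/-1 stored patterns xi^1, ..., xi^M
    """
    # Hebbian couplings masked by the graph: a_ij * sum_mu xi_i^mu xi_j^mu
    J = A * (patterns.T @ patterns)
    h = J @ sigma                      # local field at each neuron
    return np.where(h >= 0, 1, -1)     # sgn with the convention sgn(0) = +1
```

On the complete graph with a single stored pattern, one step of `parallel_update` already repairs a single flipped spin, in line with the discussion of associativity in the text.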
The dynamics can be thought of as governing the evolution of the system from an input toward the nearest learned pattern. $\xi^\mu$ being a fixed point of $S$ (or~$T$) can thus be interpreted as recognizing a learned pattern. However, this is not really what one would call an associative memory. An important feature of the standard Hopfield model (the one where $G=K_N$, the complete graph on $N$ vertices) is thus also that under certain restrictions on~$M$ (and the number of corrupted neurons), with high probability, a corrupted version of $\xi^\mu$, say $\tilde\xi^\mu$, converges to $\xi^\mu$ when evolved under the dynamics. This observation is also crucial for the present paper.
We can associate Hamiltonians (or energy functions) to these dynamics by
\[ H_N^{{ S}}(\sigma)= - \operatorname{Const.}(N) \sum _{i,j=1}^N\sigma_i \sigma _j a_{ij} \sum_{\mu=1}^M \xi_i^\mu\xi_j^\mu \]
and
\[
H_N^{{ T}}(\sigma)= - \operatorname{Const.}(N) \sum _{i=1}^N \Biggl| \sum_{j=1}^N \sigma_j a_{ij} \sum_{\mu=1}^M
\xi_i^\mu\xi_j^\mu\Biggr|, \]
such that the energy will decrease along each trajectory of the dynamics:
\[ H_N^{{ S}}\bigl(S(\sigma)\bigr) \leq H_N^{{ S}}(\sigma) \quad\mbox{and}\quad H_N^{{ T}}\bigl(T(\sigma)\bigr) \leq H_N^{{ T}}(\sigma). \]
The constant is chosen in such a way that the mean free energy of the model is finite and not constantly equal to zero.
One can easily prove that the sequential dynamics will converge to a fixed point of $S$ and that every fixed point of $S$ is a local minimum of $H_N^{{ S}}$. In the parallel case, the dynamics $T$ will converge to a fixed point or a 2-cycle of $T$.
The idea of this setup is that the patterns (as well as their negatives $-(\xi^\mu), \mu=1, \ldots, M$) are hopefully possible limits of the dynamics. For instance, it is easily checked that, if $M\equiv1$ and $G$ is the complete graph, $\xi^1$ is a local minimum of $H_N^S$, since then
\[ H_N^S(\sigma)=- \operatorname{Const.}(N) \Biggl(\sum _{i=1}^N \sigma_i \xi _i^1\Biggr)^2 + \operatorname{Const.}_1(N) \]
This property is hoped to be inherited by the more general model, as long as $M$ is small enough. Indeed, for $M=1$, the stored pattern $\xi^1$ is still a local minimum of $H_N^S$ if $G$ is merely connected. In this case, one obtains that
\[ H_N^S(\sigma)=- \operatorname{Const.}(N) \sum _{i,j=1}^N\sigma_i \sigma_j a_{ij} \xi_i^1 \xi_j^1= - \operatorname{Const.}(N) X^t A X \]
with $X=(\sigma_i \xi_i^1)$ and $A=(a_{ij})$. From here, the assertion is immediate (we are grateful to an anonymous referee for this remark).
When considering the stability of a random pattern $\xi^\mu$ under $S$ or $T$ in the above setting, we need to check whether $ T_i(\xi^\mu)= \xi_i^\mu $ holds for any $i$. Now
\[ T_i\bigl(\xi^\mu\bigr)= \mathop{\operatorname{sgn}} \Biggl(\sum_{j=1}^N a_{ij} \sum _{\nu=1}^M \xi _i^\nu \xi_j^\nu\xi_j^\mu\Biggr)= \mathop{\operatorname{sgn}}\Biggl(\sum_{j=1}^N a_{ij} \xi_i^\mu +\sum _{j=1}^N a_{ij}\sum _{\nu\neq\mu} \xi_i^\nu\xi_j^\nu \xi_j^\mu\Biggr). \]
That is, we have a signal term of strength $d(i)$, the degree of vertex $i$ (given by the first summand on the right-hand side of the above equation), and a random noise term. The first observation is that the network topology enters via the degrees of the nodes. Indeed in such a simple setup -- the stability of stored information -- the minimum degree of the vertices is clearly decisive to compute the model's storage capacity: in the case where a vertex $i$ has a small degree, the noise term will exceed the signal term, except for a very small number of stored patterns. However, it seems obvious that global aspects, for example, whether or not the graph is connected, must also play a role. This is confirmed by setting up a Hopfield model on a graph $G$ consisting of a complete graph $K_m$ (on the vertices $1, \ldots, m$) and the graph $K_{N-m}$ on the vertices $m+1, \ldots, N$ with $\log N \ll m \ll N$, assuming that these two subgraphs are disconnected or just connected by one arc. Each of the vertices thus has degree at least $m-1$ and it can be computed along the lines of \cite{MPRV} or \cite{Petritis} that at least $\frac{m}{2 \log N}$ patterns can be stored as fixed points of the dynamics. However, if we try to store one pattern, for example, $\xi^1$ with $\xi_i^1=1$ for all $i=1, \ldots, N$, and start with a corrupted input $\tilde\xi^1$ with
\[ \tilde\xi_i^1=\cases{ -1, &\quad $i \le m$, \vspace*{2pt} \cr 1, &\quad $m+1 \le i \le N$,} \]
we see that
\[ T_i\bigl(\tilde\xi^1\bigr)=\tilde\xi_i^1. \]
Hence, $\tilde\xi^1$ is a fixed point implying that the retrieval dynamics is not able to correct $m \ll N$ errors, even if we just want to store one pattern. So, if we insist that a neural network should also exhibit some associative abilities (and this has always been a central argument for the use of neural networks), we have to take the graph topology into account.
This topology is encoded in the so-called adjacency matrix $A$ of $G$. Here, $A=(a_{ij})$ and $a_{ij}=1$ if $e_{i,j}\in E$ and $a_{ij}=0$ otherwise. If $G$ is sufficiently regular, the connectivity of $G$ (which played an important role in the above counterexample) can be characterized in terms of the spectral gap. To define it, let $\lambda_1 \ge\lambda_2 \ge\cdots\ge\lambda_N$ be the (necessarily real) eigenvalues of $A$ in decreasing order. Define $\kappa$ to be the second largest modulus of the eigenvalues, that is,
\[
\kappa:= \max_{i\ge2} |\lambda_i|=\max\bigl\{
\lambda_2, | \lambda _N|\bigr\}. \]
Then the spectral gap is the difference between the largest eigenvalue and $\kappa$, that is, $\lambda_1- \kappa$. However, the degrees of the vertices are also important. Hence, let $d_i = \sum_j a_{ij}$ be the degree of vertex $i$. We will denote by
\[ \delta:= \min_i d_i \quad\mbox{and}\quad m:= \max _i d_i \]
the minimum and maximum degree of $G$, respectively.
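These spectral quantities are cheap to compute for any concrete graph. The following sketch (our illustration, not part of the paper; it assumes numpy) extracts $\lambda_1$, $\kappa$, the spectral gap and the extreme degrees from an adjacency matrix:

```python
import numpy as np

def spectral_data(A):
    """Return (lambda_1, kappa, spectral gap, delta, m) for a symmetric
    0/1 adjacency matrix A, with kappa = max{lambda_2, |lambda_N|}."""
    eigs = np.linalg.eigvalsh(A)            # real eigenvalues, ascending order
    lam1 = eigs[-1]                         # largest eigenvalue lambda_1
    kappa = max(abs(eigs[0]), eigs[-2])     # second largest modulus
    deg = A.sum(axis=1)
    return lam1, kappa, lam1 - kappa, deg.min(), deg.max()
```

For the complete graph $K_N$ this returns $\lambda_1 = N-1$, $\kappa = 1$ and degree $N-1$ throughout, matching the values used for the corollaries in the next section.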
In this paper, we will concentrate on the parallel dynamics, which is easier to handle when we iterate the dynamics.
\section{Results}\label{sec3} We will now state the main result of the present paper.
In order to formulate it, let us define the usual Hamming distance on the space of configurations~$\Sigma_N$,
\[ d_H\bigl(\sigma, \sigma'\bigr) = \tfrac{1}2 \bigl[ N - \bigl(\sigma, \sigma'\bigr)\bigr], \]
where $(\sigma, \sigma')$ is the standard inner product of $\sigma$ and $\sigma'$. In other words, $d_H$ counts the number of indices where $\sigma$ and $\sigma'$ disagree. For any $\sigma\in\Sigma_N$ and $\varrho\in[0,1]$, let $\mathcal{S}(\sigma, \varrho N)$ be the sphere of radius $\varrho N$ centered at $\sigma$, that is,
\[ \mathcal{S}(\sigma, \varrho N) = \bigl\{ \sigma' \dvt d_H\bigl( \sigma, \sigma'\bigr) = [\varrho N ]\bigr\}, \]
where $[\varrho N]$ denotes the integer part of $\varrho N$.
For the rest of the paper, we will suppose that the following hypothesis is true:
\begin{enumerate}[(H1)] \item[(H1)] There exists $c_1\in\,]0,1[$, such that $\delta> c_1 \lambda_1$ (recall that $\delta$ is the minimum degree of the graph $G$, and $\lambda_1$ is the largest eigenvalue of its adjacency matrix). \end{enumerate}
\begin{rem} Condition (H1) seems to be new. To understand it, recall that for a regular graph with degree $d$ the largest eigenvalue of $A$ equals $d$ and so does its minimum degree $\delta$. Condition~(H1) can thus be interpreted as the requirement that $G$ is sufficiently regular. Indeed, it turns out that, for example, for an Erd\"os--R\'enyi graph $G(N,p)$ it is fulfilled if and only if $p \gg\frac{\log N}{N}$, that is, when the graph is connected asymptotically almost surely. Hence, for Erd\"os--R\'enyi graphs condition (H1) rules out the sparse case, when the graph is not only disconnected asymptotically almost surely, but also very irregular, in the sense that the degree distribution is a Poisson distribution and the relative fluctuations of the degrees are large. Moreover, it will turn out that certain power law graphs also satisfy condition (H1). \end{rem}
We will need a second condition that keeps track on how well the graph is connected.
\begin{enumerate}[(H2)] \item[(H2)] We say that a graph satisfies (H2), if the following relation holds between the largest eigenvalue $\lambda_1$ of the adjacency matrix and the modulus of its second largest eigenvalue~$\kappa$:
\begin{equation} \label{2ndcond} {\lambda_1}\ge c \log(N) {\kappa} \end{equation}
for some $c>0$ large enough. \end{enumerate}
\begin{rem} Roughly speaking, condition (\ref{2ndcond}) reveals connectivity properties of the underlying graph. Clearly, it holds for the complete graph $K_N$, where $\lambda_1=N-1$ and all the other eigenvalues are equal to $-1$. Also, as pointed out below, condition (\ref{2ndcond}) is fulfilled for an Erd\"os--R\'enyi random graph, if $p$ is large enough, since the spectral gap, that is, the difference between the largest and the second largest modulus of an eigenvalue is of order $Np(1-1/\sqrt{Np})$.
To understand that indeed (\ref{2ndcond}) can be interpreted as a measure for the connectivity of the graph, assume for a moment that the graph were $d$-regular. Then $\lambda_1=d$. If the graph is disconnected, $d$ is an eigenvalue of multiplicity at least two, so that $\kappa=\lambda_1$, and hence (\ref{2ndcond}) cannot hold. More generally, for a regular graph, the spectrum of the adjacency matrix can be computed from the spectrum of the Laplacian. On the other hand, the spectral gap of the Laplacian can be estimated by Poincar\'e or Cheeger type inequalities (see \cite{diaconisstroock}), which roughly state that the spectral gap of the Laplacian is small if there are vertex sets of large volume but small surface, or if the graph has small bottlenecks. Both quantities are a measure for how well the graph is connected. \end{rem}
Under the above conditions, we will prove, that we can store a number $M$ of patterns depending on $\lambda_1$ and the spectral gap of $A$ -- even in the sense that the dynamics $T$ repairs a corrupted input. Mathematically speaking, we show the following.
\begin{theorem}\label{synchronous} With the notation introduced in Section~\ref{sec2}, if \textup{(H1)} and \textup{(H2)} are satisfied, then there exists $\alpha_c>0$ and $\varrho_1\in\,]0,1/2[$ such that if
\[ M= \alpha\frac{\lambda^2_1}{m \log N}- \frac{\kappa\lambda_1}{m} , \]
for some $\alpha<\alpha_c$, then for all $\varrho\in\,]0, \varrho_1]$ we obtain
\[ P\bigl[\forall\mu=1,\ldots,M, \forall x \ \mathit{s.t.}\ d_H \bigl(x,\xi^\mu\bigr) \leq\varrho N \dvt T^k(x)= \xi^\mu\bigr] \rightarrow 1\qquad \mbox{as } N \rightarrow\infty, \]
for any $k \ge{C} (\max\{\log\log N, \frac{\log(N)}{\log ({\lambda_1}/{(\kappa\log(N))})}\})$ for a sufficiently large constant $C$.
Here, $T^k$ is defined as the $k$th iterate of the map $T$. \end{theorem}
In other words, Theorem \ref{synchronous} states that we are able to store the given number of patterns in such a way that a number of errors that is proportional to $N$ can be repaired by a modest (at most $\mathcal{O}(\log N)$) number of iterations of the retrieval dynamics. The number of patterns depends on the largest eigenvalue and the spectral gap of the adjacency matrix and is larger for large spectral gaps.
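As a quick sanity check of the scaling (not part of the paper), the bound of the theorem can be evaluated numerically. The value $\alpha=0.01$ below is an arbitrary placeholder for some admissible $\alpha<\alpha_c$, which the theorem does not specify:

```python
import math

def storage_capacity(lam1, kappa, m, N, alpha=0.01):
    """Pattern count from the theorem:
    M = alpha * lam1^2 / (m * log N) - kappa * lam1 / m.
    alpha is a placeholder below the (unspecified) critical value alpha_c."""
    return alpha * lam1 ** 2 / (m * math.log(N)) - kappa * lam1 / m

# Complete graph K_N: lam1 = N - 1, kappa = 1, maximum degree m = N - 1,
# so the bound reduces to alpha * (N - 1) / log N - 1, of order N / log N.
N = 10_000
M_complete = storage_capacity(N - 1, 1.0, N - 1, N)
```

This recovers the classical $N/\log N$ rate of the complete-graph corollary below.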
Before advancing to the proof, we will apply this result to some classical models of random and non-random graphs.
\begin{corollary} If $G=K_N$, that is, in the case of the classical Hopfield model, the storage capacity in the sense of Theorem \ref{synchronous} is $M=\alpha\frac{N}{\log N}$ for some constant $\alpha$. The number of steps needed to repair a corrupted input is of order $\mathcal{O}(\log \log N)$. \end{corollary}
\begin{pf} The complete graph is regular, hence condition (H1) is satisfied. From Theorem \ref{synchronous}, we obtain the numerical values for $M$ and the number of steps by observing that in the case of the complete graph the eigenvalues of $A$ are $N-1$ and $-1$ (the latter being an $N-1$-fold eigenvalue). \end{pf}
\begin{rem} It should be remarked that similar results were obtained by Komlos and Paturi~\cite{KomlosPaturi1988}. In \cite{Komlos1993}, even the case of regular graphs is treated. The results of these two authors were probably inspired by the results in \cite{MPRV}, where the maximum number of patterns that are (with high probability) fixed points of the retrieval dynamics is determined. A similar result to \cite{KomlosPaturi1988} for the Hopfield model on the complete graph is due to Burshtein \cite{Burshtein}, who shows that the capacity of the Hopfield model obtained in \cite{MPRV} does not change, if one starts with corrupted patterns and allows for several reconstruction steps. Also, a bound for the number of necessary steps is given. These results are closely related to our result, and actually Burshtein is able to determine our $\alpha$ in the case of the Hopfield model on the complete graph. However, while he is working only with a random corrupted input, we consider a worst case scenario since we require that all vectors at distance $[\varrho N ]$ from the originally stored pattern are attracted to this pattern by the retrieval dynamics. A similar result for a Hopfield model with $q>2$ different states was proven in \cite{LoweVermetqpotts}.
These results are to be contrasted to the findings in \cite{newman,loukianova} or \cite{talagrand}. There, one is satisfied with a corrupted input being attracted to some point ``close'' to the stored pattern. Naturally, the resulting capacities are larger. Also, a bound on the number of iterations until this point is reached is not given. \end{rem}
We mainly want to apply our results to some random architectures, that is, $G$ will be the realization of some random graph. The most popular model of a random graph is the Erd\"os--R\'enyi graph $G(N,p)$. Here, all the possible ${ N\choose2}$ edges occur with equal probability $p=p(N)$ independently of each other. Hopfield models on $G(N,p)$ have already been discussed in \cite{BG92,talagrand} or \cite{LV10}.
Here, we obtain the following corollary.
\begin{corollary}\label{GNp} If $G$ is chosen randomly according to the model $G(N,p)$, then if $p \ge c_0 \frac{(\log N)^2}N$ for some $c_0>0$, for a set of realizations of $G$ the probability of which converges to one as $N \to\infty$, the capacity (in the above sense) of the Hopfield model is $c p N / \log (N)$ for some constant $c>0$. \end{corollary}
\begin{pf} For the eigenvalues of an Erd\"os--R\'enyi graph, it is well known that with probability converging to 1, as $N \to\infty$ (such a statement in random graph theory is said to hold asymptotically almost surely), $\lambda_1=(1+\mathrm{o}(1))Np$ and $\kappa\le c\sqrt{Np}$ (see, e.g., \cite{furedikomlos,Feige2005,Krivelevich2003} and these facts were also used in \cite{LV10}). Moreover, we can control the minimum and maximum degree in $G(N,p)$. Indeed, for our values of $p$ we have $m=(1+\mathrm{o}(1))Np$ and $\delta=(1+\mathrm{o}(1))Np$ asymptotically almost surely. Surprisingly, we could not find this result in the literature, and thus proved it in the \hyperref[app]{Appendix}.
Hence, (H1) is satisfied. \end{pf}
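The spectral facts quoted in the proof are easy to observe in simulation. The sketch below (our illustration; the parameter values are arbitrary) samples $G(N,p)$ with $p$ above $(\log N)^2/N$ and checks that $\lambda_1 \approx Np$, that $\kappa$ is of order $\sqrt{Np}$, and that all degrees concentrate around $Np$:

```python
import numpy as np

def erdos_renyi_adjacency(N, p, rng):
    """Sample the adjacency matrix of an Erdos-Renyi graph G(N, p)."""
    upper = np.triu(rng.random((N, N)) < p, k=1)    # independent edges
    return (upper + upper.T).astype(float)          # symmetric, zero diagonal

rng = np.random.default_rng(0)
N, p = 400, 0.2                        # here (log N)^2 / N is about 0.09 < p
A = erdos_renyi_adjacency(N, p, rng)
eigs = np.linalg.eigvalsh(A)
lam1 = eigs[-1]                        # close to Np = 80
kappa = max(abs(eigs[0]), eigs[-2])    # of order 2 * sqrt(Np(1-p)), about 16
deg = A.sum(axis=1)                    # minimum and maximum degree near Np
```

With these values both (H1) and (H2) hold comfortably for the sampled graph, illustrating the corollary at a small scale.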
\begin{rem} As mentioned above, the Hopfield model on an Erd\"os--R\'enyi graph has already been discussed in \cite{BG92,talagrand} or \cite{LV10}. The first two of these papers treat the case of rather dense graphs, more precisely the regime of $p \ge\operatorname{const.} \sqrt{\frac{\log N}N }$. This regime seems to be a bit artificial, since a realization of $G(N,p)$ is already connected, once $p$ is larger than $\frac{\log N}N $. The regime of $ \operatorname{const.}_1 \frac{\log N} N \le p \le\operatorname{const.}_2 \sqrt {\frac{\log N} N}$ was analyzed in \cite{LV10}. However, in all of these papers the notion of storage capacity is the one where we merely require stored patterns to be close to minima of the energy function, that is, fixed points of the retrieval dynamics. As motivated above, this notion is unable to reflect the different reconstruction abilities of various network architectures. Corollary \ref{GNp} deals with the notion of storage capacity introduced in Section~\ref{sec2}; one might naturally wonder whether the restriction $p \ge c_0 \frac{(\log N)^2}N$ could be weakened or whether this is the optimal condition when we consider this notion of storage capacity. However, so far we do not have an answer to this question, especially since the reverse bound on the storage capacity is usually much harder to obtain. \end{rem}
The next example is one of the central results of the present paper: We analyze the Hopfield model on an architecture that comes closer to the models used in neuroscience, the so-called power law graphs. To introduce them, let us give a general construction of random graph models, which is standard in graph theory (see, e.g., \cite{ChungLu2002AC} or \cite{Chung-Lu02}) and nowadays referred to as the Inhomogeneous Random Graph (see, e.g., the very recommendable lecture notes \cite{vdH}). To this end, let $i_0$ and $N$ be positive integers and $L=\{i_0, i_0+1, \ldots, i_0+N-1\}$. For a sequence $w=(w_i)_{i\in L}$, we consider random graphs $G(w)$ in which edges are assigned independently to each pair of vertices $(i,j)$ with probability
\[ p_{ij}=\varrho w_i w_j, \]
where $\varrho=1/\sum_{k\in L} w_k$. We assume that
\[ \max_i w_i^2 < \sum _{k\in L} w_k \]
so that $p_{ij} \leq1$ for all $i$ and $j$. It is easy to see that the expected degree of $i$ is $w_i$. This allows for a very general construction of random graphs. Note in particular that for $w_i = pN$ for all $i= 1, \ldots, N$, one recovers the Erd\"os--R\'enyi graph.
For notational convenience, let
\[ { d}= \sum_{i\in L} w_i /N \]
be the expected average degree, ${\overline m}$ the expected maximum degree and
\[ \tilde{d}= \sum_{i\in L} w_i^2 \Big/ \sum_{i\in L} w_i \]
be the so-called second-order average degree of the graph $G(w)$. From these definitions, the advantage of this kind of construction of a random graph becomes transparent: We are able to construct random graphs with an expected degree sequence of our own choosing.
We now turn to a subclass of random graphs that have recently become very popular, power law graphs \cite{durrett_graphs}. Power law random graphs are random graphs in which the number of vertices of degree $k$ is proportional to $1/k^\beta$ for some fixed exponent $\beta$. It has been realized that this ``power law'' behavior is prevalent in realistic graphs arising in various areas. Graphs with power law degree distribution are ubiquitously encountered, for example, in the internet, in telecommunication graphs, in neural networks and in many biological applications \cite{jeongetal,Schreiber,ZhouLipowsky}. The common feature of such networks is that they are large and have small diameter, but small average degree. This behavior can be achieved by hubs, a few vertices with a much larger degree than the others. Keeping in mind that the $G(w)$ model allows one to build a graph model with a given expected degree sequence, it is plausible that this model can be used to model the networks of the given examples. Indeed, using the $G(w)$ model, we can build random power law graphs in the following way. Given a power law exponent $\beta$, a maximum expected degree ${\overline m}$, and an average degree $d$, we take $ w_i=c i^{-{1}/{(\beta-1)}}$ for each $i\in\{i_0, i_0+1, \ldots, i_0+N-1\}$, with
\[
c= \frac{\beta-2}{\beta-1} d N^{{1}/{(\beta-1)}} \]
and
\[
i_0= N \biggl(\frac{d(\beta-2)}{{\overline m}(\beta-1)} \biggr)^{\beta-1}. \]
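The weight sequence just defined can be written down directly. The sketch below (our illustration, not part of the paper; the numerical values of $\beta$, $d$ and ${\overline m}$ are arbitrary) verifies that the largest expected degree equals ${\overline m}$ and the average expected degree is close to $d$:

```python
import numpy as np

def power_law_weights(N, beta, d, m_bar):
    """Expected degrees w_i = c * i^(-1/(beta-1)) for i = i0, ..., i0 + N - 1,
    with c and i0 chosen as in the text so that the maximum expected degree
    is m_bar and the average expected degree is approximately d."""
    c = (beta - 2) / (beta - 1) * d * N ** (1.0 / (beta - 1))
    i0 = N * (d * (beta - 2) / (m_bar * (beta - 1))) ** (beta - 1)
    i = i0 + np.arange(N)              # vertex labels i0, ..., i0 + N - 1
    return c * i ** (-1.0 / (beta - 1))

# beta > 3 as required by the corollary below; d, m_bar illustrative only.
w = power_law_weights(N=100_000, beta=4.0, d=10.0, m_bar=500.0)
```

With these weights one also checks directly that $\max_i w_i^2 < \sum_{k\in L} w_k$, so all the edge probabilities $p_{ij}=\varrho w_i w_j$ stay below one.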
For such power law graphs, we obtain the following.
\begin{corollary}\label{powerlawcor} If $G$ is chosen randomly according to a power law graph with $\beta >3$, then if
\[ {\overline m}\gg d > c \sqrt{{\overline m}}\bigl(\log(N)\bigr)^{3/2} \]
or
\[ {\overline m} \gg d> c \sqrt{{\overline m}} \bigl(\log(N)\bigr) \quad\mbox{and}\quad { \overline m}\gg(\log N)^4,\vadjust{\goodbreak} \]
for some constant $c>0$ for a set of realizations of $G$ the probability of which converges to one as $N \to\infty$, the capacity (in the above sense) of the Hopfield model is at least $C(\beta) \frac {d^2}{{\overline m} \log(N)}$, where the constant $C(\beta)$ depends only on $\beta$. \end{corollary}
\begin{rem}
\begin{itemize}
\item One might indeed wonder whether the restriction to $\beta>3$ is an artefact of our proof below or whether there is some intrinsic reason why storage becomes much more difficult for \mbox{$\beta<3$}. A recent paper by Jacob and M\"orters \cite{JacobMorters} may shed some light on this question. There a spatial preferential attachment graph is constructed (for details, see the construction in \cite{JacobMorters}). It turns out that these graphs have power law behavior for the degree distribution, where the exponent $\beta$ depends on the parameters of the model. For their model, the authors are able to show that for $\beta>3$ the model exhibits clustering, that is, many triangles occur, while for $\beta<3$ there is no clustering. On the other hand, the storage of patterns in a Hopfield model is basically a collective phenomenon for which a strong interaction of the neurons is necessary. Clustering is a measure for such a strong interaction.
\item The second condition in Corollary \ref{powerlawcor} basically states that the graph may contain so-called hubs, that is, vertices with a much larger degree than the average one, but must not be too irregular. For example, for a star graph (one vertex connected to all other vertices, which are not connected otherwise) this condition would be violated, and indeed we would not be able to repair corrupted patterns on such a graph. \end{itemize}
\end{rem}
\begin{pf*}{Proof of Corollary \ref{powerlawcor}} By definition, if $d \ll{\overline m}$, then the minimum expected degree $w_{\mathrm{min}}= c (i_0+N-1)^{-{1}/{(\beta-1)}}$ satisfies $w_{\mathrm{min}}= \frac{\beta-2}{\beta-1} d (1+\mathrm{o}(1))$.
From \cite{Chung-Lu02}, we learn about the second-order average degree that
\[ \tilde{d}= \bigl(1+\mathrm{o}(1)\bigr) \frac{(\beta-2)^2}{(\beta-1)(\beta-3)} d, \]
if $\beta>3$.
On the other hand, Chung and Radcliffe prove in \cite{Chung-Radcliffe11} the following: if the maximum expected degree ${\overline m}$ satisfies ${\overline m}> \frac{8}9 \log(\sqrt{2} N)$, then with probability at least $1- \frac{1}N$, we have
\[ \lambda_1\bigl(G(w)\bigr) =\bigl(1+\mathrm{o}(1)\bigr) \tilde{d}\quad \mbox{and}\quad\kappa\bigl(G(w)\bigr)\le \sqrt{8 {\overline m} \log(\sqrt{2}N)}. \]
We will now use an exponential bound due to Chung and Lu. As shown in \cite{ChungLu2002AC} via Chernoff inequalities, for all $c>0$ there exist two constants $c_0, c_1>0$ such that
\begin{equation}
\label{ChungLubound} P\bigl[ \exists i\in L \dvt | d_i - w_i | > c w_i \bigr] \le\sum_{i\in L} \exp(- c_0 w_i) \le\exp( -c_1 d + \log N), \end{equation}
since $w_{\mathrm{min}} = \mathcal{O}(d)$. Applying this with, for example, $c=1/2$, we see (applying the Borel--Cantelli lemma) that for almost all realizations of the random graphs, we have that for all $i\in L$,
\[ d_i > \frac{1}2 w_i \ge\frac{1}2 w_{\mathrm{min}} = \frac{1}2\frac{\beta -2}{\beta -1} d \bigl(1+\mathrm{o}(1) \bigr)= \frac{1}2\frac{\beta-3}{\beta-2} \lambda_1 \bigl(1+ \mathrm{o}(1)\bigr), \]
and thus
\[ \delta> \frac{1}2\frac{\beta-3}{\beta-2} \bigl(1+\mathrm{o}(1)\bigr) \lambda_1, \]
which is (H1).
To apply Theorem \ref{synchronous}, we also need to compare ${\overline m}$ to the maximum degree $m$ of a graph $G$, chosen randomly according to a power law graph with $\beta>3$. We again use \eqref{ChungLubound}.
Under our assumption that $d\gg\log N$, we deduce from this estimate that $m= C {\overline m}(1+\mathrm{o}(1))$, for some $C>0$, and we finally obtain that the capacity of the Hopfield model on power law graphs (for a sequence of sets of graphs with probability converging to one) is at least
\[ \operatorname{const.} \frac{\lambda^2_1}{m \log(N)}- \frac{\kappa\lambda _1}{m}= C(\beta) \frac{d^2}{{\overline m} \log(N)}, \]
if $\beta>3$ and $\kappa<c_2 \frac{ \lambda_1}{\log(N)}$ for some $c_2>0$ small enough. This is true in particular, if
\[ \sqrt{8 {\overline m} \log(\sqrt{2}N)} < c_3 \frac{d}{ \log(N)} \]
for some $c_3$ small enough, that is,
\[ d> c \sqrt{{\overline m}} \bigl(\log(N)\bigr)^{3/2}. \]
In fact, this condition on $d$ can be slightly weakened, if we consider the slightly stronger condition on the maximum expected degree: ${\overline m}\gg(\log N)^4$. Indeed, in a recent paper \cite{LuPeng}, Lu and Peng prove that under this condition on ${\overline m}$, we have
\[ \lambda_1\bigl(G(w)\bigr)= \bigl(1+\mathrm{o}(1)\bigr)\tilde{d}\quad \mbox{and}\quad \kappa \bigl(G(w)\bigr)\le2 \sqrt{ {\overline m}} \bigl(1+\mathrm{o}(1) \bigr), \]
a.s., if $\tilde{d} \gg\sqrt{ {\overline m}}$. Finally, we get as previously a capacity of order
\[ C(\beta) \frac{d^2}{{\overline m} \log(N)}, \]
if ${\overline m} \gg d> c \sqrt{{\overline m}} (\log(N))$ and ${\overline m}\gg(\log N)^4$. \end{pf*}
\section{Technical preparations on random graphs}\label{sec4} We first present the results we will use in the proof of our theorem. Let $G$ be a simple graph with $N$ vertices and $l$ edges. Recall that for such a graph
\[ \lambda_1\ge\cdots\ge\lambda_N \]
are the (real) eigenvalues of its adjacency matrix and $\kappa= \max\{
\lambda_2, | \lambda_N|\}$.
We begin with an estimate of the moment generating function of a sum of i.i.d. random variables, related to $G$. We assign i.i.d. random variables $X_i$ to the vertices of $G$, taking values $\pm1$ with equal probability. Let us define the ``quadratic form'' over $G$
\[ S= \sum_{\{i,j\} \in E} X_i X_j. \]
The following theorem due to Koml\'os and Paturi \cite{Komlos1993} gives an upper bound on the moment generating function of $S$, which appears naturally when we use an exponential Markov inequality for an upper bound.
\begin{theorem}[(\cite{Komlos1993})]\label{Ee(tS)} The moment generating function of $S$ can be bounded as
\[ E \bigl[ \mathrm{e}^{-tS}\bigr]\leq E\bigl[\mathrm{e}^{tS} \bigr] \le\exp\biggl(\frac{l t^2}{2(1-\lambda_1 t)}\biggr), \]
for $0\leq t < 1/\lambda_1$. \end{theorem}
\begin{rem} The attentive reader may wonder whether the above theorem is really difficult to prove, as the random variables $X_i X_j$ are Bernoulli random variables. However, note that they are not independent, which is the basic difficulty in this estimate. \end{rem}
Not unexpectedly, a bound on the moment generating function implies a concentration of measure result.
\begin{corollary}\label{ExpMarkov} For any $y>0$, we have
\[ P[S> y] \leq\exp\biggl(-\frac{y^2}{2(l+ \lambda_1 y)}\biggr). \]
\end{corollary}
\begin{pf} Apply the exponential Markov inequality together with Theorem \ref {Ee(tS)} to see that
\[ P[S> y] \leq \mathrm{e}^{-t y} E\bigl[ \mathrm{e}^{t S}\bigr] \leq\exp\biggl(-ty +\frac{l t^2}{2(1-\lambda_1 t)}\biggr), \]
for $0\leq t < 1/\lambda_1$. The desired estimate is obtained by the choice of $t=\frac{y}{l+ \lambda _1 y}$ which is smaller than $1/\lambda_1$.
\end{pf}
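To see the corollary at work, one can compare the bound with a Monte Carlo estimate on a concrete graph. On the cycle $C_n$ we have $l=n$ edges and $\lambda_1=2$ exactly, so everything in the bound is explicit. The following sketch (our own illustration, not part of the proof) checks that the empirical tail stays below the bound:

```python
import math
import random

rng = random.Random(1)
n = 20                                     # cycle C_n: l = n edges, lambda_1 = 2
edges = [(i, (i + 1) % n) for i in range(n)]
l, lam1 = float(n), 2.0

def S(x):
    # quadratic form over the graph: sum over edges of X_i X_j
    return sum(x[i] * x[j] for i, j in edges)

y, trials = 8.0, 20000
hits = sum(S([rng.choice((-1, 1)) for _ in range(n)]) > y for _ in range(trials))
empirical = hits / trials
bound = math.exp(-y * y / (2 * (l + lam1 * y)))
print(empirical, round(bound, 3))
```

On the cycle the bound is quite loose (the empirical tail is two orders of magnitude smaller), which is expected: the estimate is tuned to graphs with large $\lambda_1$.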
As we will apply this result for subgraphs in the proof of our main result, we also need an estimate of the largest eigenvalue $\lambda _1(H)$ of particular subgraphs $H$ of $G$. To this end, we quote another result by Koml\'os and Paturi \cite{Komlos1993}.
\begin{lemma}[(\cite{Komlos1993})]\label{subgraph}
Let $G$ be a simple graph with $N$ vertices. If $I$ and $J$ are two subsets of the vertex set of $G$ with $|I|=\varrho N$ and $|J|=\varrho' N$, where $\varrho, \varrho' \in(0,1)$, the number of edges $e(J,I)$ going from $J$ to $I$ is at most
\[ e(J,I) \le\bigl[ \varrho\varrho' \lambda_1(G) + \sqrt{ \varrho\varrho' } \kappa(G) \bigr] N. \]
Moreover, the largest eigenvalue (of the adjacency matrix) of the graph $H$ determined by the edges from $I$ to $J$ is bounded as
\[ \lambda_1(H) \le2\bigl[\sqrt{\varrho\varrho'} \lambda_1(G) + \bigl(1-\sqrt {\varrho\varrho'} \bigr) \kappa(G)\bigr]. \]
\end{lemma}
The proof of this lemma basically involves estimating quadratic forms by their eigenvalues together with Cauchy's interlacing theorem for eigenvalues of matrices. However, it is not trivial (see the proof in \cite{Komlos1993}).
\section{Proof of the main result}\label{sec5}
We are now ready to begin with the proof of Theorem \ref{synchronous}.
We first present an important lemma that determines the behavior of the system for one step of the synchronous dynamics; more precisely, it controls how many errors are corrected by one step of the dynamics.
\begin{lemma}\label{mainlemma}
Recall that $m$ denotes the maximum degree of the random graph $G$ in question and let
\[
\varrho_0=\exp\biggl(-c_2\frac{\lambda_1}{\kappa+{Mm}/{\lambda_1}} \biggr), \]
for some constant $c_2>0$. If $M\le c \lambda_1$ for some constant $c>0$, there exists $\varrho _1\in(0,\frac{1}2)$ and a constant $c_1>0$, such that for all $\varrho \in[\varrho_0,\varrho_1]$ we have
\[ P\bigl[\forall\mu\in\{1,\ldots,M\}, \forall x\in\mathcal{S}\bigl(\xi^\mu, \varrho N\bigr) \dvt d_H\bigl(T(x), \xi^\mu\bigr)\leq f( \varrho) N\bigr] \ge1-\varepsilon_N, \]
where
\[
f(\varrho)= \max\biggl\{c_1 \varrho \biggl( \frac{\kappa}{\lambda_1}\biggr)^2, c_1 \varrho h( \varrho),c_1 \frac{\kappa}{\lambda_1} h(\varrho),c_1 \varrho \biggl(\frac{M\kappa}{(\lambda_1)^2} \log\biggl(\frac{1}\varrho\biggr) \biggr)^{2/3}, \varrho _0\biggr\}\leq\varrho, \]
$\varepsilon_N \ge0$, $\varepsilon_N\rightarrow0$ as $N\rightarrow +\infty$ and
\[ h(\varrho)= -\varrho\log\varrho- (1-\varrho) \log(1-\varrho) \]
is the entropy function. \end{lemma}
\begin{pf}
This lemma is of central importance for our main result. However, its proof is rather technical. Let us therefore first describe its basic idea.
To this end, recall that it suffices to prove that
\begin{equation} \label{prob1} \sum_{\mu=1}^M P\bigl[\exists x\in\mathcal{S}\bigl(\xi^\mu, \varrho N\bigr) \dvt d_H\bigl(T(x), \xi^\mu \bigr)> f(\varrho)N\bigr] \le\varepsilon_N. \end{equation}
To simplify notation, we can assume that the fundamental memory in question is $\xi^1$.
Now assume that we start with a corrupted input (i.e., a corrupted pattern) $x\in\{-1,1\}^N$ such that $d_H(\xi^1,x)= \varrho N$. Let $I$ be the set of coordinates in which $x$ and $\xi^1$ differ. Let $T(x)$ be the vector resulting after one step of the parallel dynamics, and $J$ be the set of coordinates in which $T(x)$ and $\xi ^1$ differ. Now define the weight matrix $W$ as $W=(w_{ij})$ and
\[ w_{ij}=a_{ij}\sum_{\nu=1}^M \xi^\nu_i \xi^\nu_j. \]
Then, since $\xi^1$ is not properly reconstructed for the coordinates $j \in J$, for all $j\in J$, we have $ \xi_j^1 (W x)_j \leq0$, which implies $ \sum_{j\in J} \xi_j^1 (W x)_j \leq0$.
The idea is now to analyze the contributions to $\sum_{j\in J} \xi_j^1 (W x)_j$. Similar to what we said in the analysis of the dynamics $T_i$ in Section~\ref{sec2}, there is a ``signal term''
stemming from the closeness of $x$ to $\xi^1$, and there are noise terms from the influence of the other patterns. We will first show that the signal term grows at least linearly in $|J|$. On the other hand, we are also able to give an upper bound on the influence of the random noise terms, which are also controlled by the sizes of $I$ and $J$. While all these computations are relatively straightforward in the Hopfield model on the complete graph, the estimates become much more involved on a general graph. The key observation is that we are able to control the probability of finding sets $I$ and $J$ with the above properties with the help of the spectrum of the adjacency matrix (using the results of the previous section). Technically, to this end, we have to split up the noise terms according to where the vertices $i$ in $\sum_{j\in J} \sum_I \xi_j^1 a_{ij}\xi^\mu_i \xi^\mu_j$
come from. The bottom line is that if $|J|$ is too large, the probability of finding sets $I$, with $|I|=\varrho N$, and $J$ (such that $\xi^1$ is not reconstructed correctly on $J$ when starting with an $x$ differing from $\xi^1$ in the coordinates $I$) converges to 0 -- even when multiplied by the number of patterns $M$, if $M$ is of the given size (cf. equation \eqref{centest} below).
Let us now carry out this idea.
For later use, set
\[ S^\mu(J,I) = \sum_{j\in J}\sum _{k=1}^N a_{jk} \xi_j^1 \xi_j^\mu \xi _k^\mu x_k \]
and
\[ S(J,I)= \sum_{\mu=1}^M S^\mu(J,I)=: \sum_{j\in J} \xi_j^1 (W x)_j. \]
Observe that, if the patterns are chosen i.i.d. with i.i.d. coordinates, their typical distance is $N/2 \pm\operatorname{const.} \sqrt N$. This in turn implies that, if $\varrho<1/2$ and $d_H(x, \xi^1)=\varrho N$, then $x$ tends to be closer to $\xi^1$ than to any other pattern, and $S^1(J,I)$ will be the dominating term in $S(J,I)$. We will first give a lower bound for $S^1(J,I)$. We can rewrite $S^1(J,I)$ as
\[ S^1(J,I) = \sum_{j\in J}\sum _{k=1}^N a_{jk} \xi_k^1 x_k = \sum_{j\in J} \bigl(e(j,\bar I) - e(j,I) \bigr)= e(J,V) - 2 e(J,I), \]
where again we use the notation $e(J,I)$ and $e(j,I)$, to denote the number edges going from the set $J$ to the set $I$, or, respectively, from the vertex $j$ to the set $I$. Moreover, $\bar I$ denotes the complement of the set $I$ in $V$.
Under the assumption of hypothesis (H1) and with the help of Lemma \ref{subgraph}, we have for all $I$ and $J$,
\begin{eqnarray*}
S^1(J,I)& \ge& c_1
\lambda_1 |J| - 2 \biggl(|I| |J| \frac
{\lambda _1}N + \sqrt{|I| |J|} \kappa\biggr) \\
& = &\lambda_1 |J| \biggl( c_1 - 2 \varrho- 2 \sqrt{ \frac {\varrho }{\varrho'}} \frac{\kappa}{\lambda_1}\biggr), \end{eqnarray*}
where $\varrho' = \frac{|J|}N$. If we assume that $ \varrho'\ge c_2 \varrho (\frac {\kappa }{\lambda_1})^2$ for some $c_2>0$ large enough, and $\varrho<\varrho_1$ for some $\varrho_1\in(0,1/2)$ small enough, we get
\begin{equation}
\label{lowerboundS1} S^1(J,I)\ge C_1 \lambda_1 |J|, \end{equation}
for some constant $C_1\in(0,1)$.
For $\mu\ge2$, we compute
\begin{eqnarray*}
S^\mu(J,I)&=& \sum _{(j,k)\in E(J,\bar I)} u^\mu_ju_k^\mu- \sum_{(j,k)\in E(J,I)} u^\mu_ju_k^\mu \\ &=& \sum_{(j,k)\in E(J,V)} u^\mu_ju_k^\mu-2 \sum_{(j,k)\in E(J,I)} u^\mu_ju_k^\mu, \end{eqnarray*}
where $u_i^\mu=\xi^1_i\xi_i^\mu$, for all $i=1,\ldots,N$ and $\mu =1,\ldots, M$. To apply the results for the moment generating function of quadratic forms introduced in Theorem \ref{Ee(tS)} and Corollary \ref{ExpMarkov}, we need to rewrite these sums over ordered pairs of vertices as sums over unordered pairs. We have
\[ E(J,V) = E(J,J) + E(J, \bar J) = 2 E\{J,J\} + E\{J, \bar J\}= E\{J,V\} + E\{J, J \}, \]
where, for $K,L \subset V$, $E(K,L)$ denotes the edge set of the directed graph between the sets $K$ and $L$ induced by our original graph. Likewise, $E\{K,L\}$ denotes the corresponding set of undirected edges. In the same way, we obtain
\begin{eqnarray*} E(J,I)&=& E(J\cap\bar I, J\cap I) + E(J\cap I,J\cap I) + E(J, I\cap \bar J) \\ &= &E\{J\cap\bar I, J\cap I\} + 2 E\{J\cap I,J\cap I\} + E\{J, I\cap \bar J\} \\ &=& E\{J, I\} + E\{J\cap I,J\cap I\}. \end{eqnarray*}
Eventually,
\[ E(J,V) - 2 E(J,I)= E\{J,V\} - 2 E\{J, I\} + E\{J, J\} - 2 E\{J\cap I,J\cap I\}. \]
We want to prove that for $\varrho'$ larger than $f(\varrho)$ we have that
\begin{equation}
\label{centest} M P\bigl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, S(J,I) <0\bigr] \longrightarrow0, \end{equation}
as $N\rightarrow+\infty$.
To this end, set
\begin{eqnarray*}
S_1^\mu(J)&=& \sum _{(j,k)\in E\{J,V\}} u^\mu_ju_k^\mu,\qquad S_2^\mu (J,I)= \sum_{(j,k)\in E\{J,I\}} u^\mu_ju_k^\mu, \\ S_3^\mu(J)&=& \sum_{(j,k)\in E\{J,J\}} u^\mu_ju_k^\mu\quad \mbox{and}\quad S_4^\mu(J,I)= \sum _{(j,k)\in E\{J\cap I,J\cap I\}} u^\mu_ju_k^\mu. \end{eqnarray*}
Then
\[ S(J,I) = S^1(J,I)+ \sum_{\mu=2}^M S^\mu_1(J) -2 \sum_{\mu=2}^M S^\mu _2(J,I)+ \sum_{\mu=2}^M S^\mu_3(J)- \sum_{\mu=2}^M S^\mu_4(J,I). \]
Let $\gamma_1, \gamma_2, \gamma_3, \gamma_4 \ge0$, such that $\gamma _1+2\gamma_2+\gamma_3+\gamma_4=1$.
We will consider the four sums separately. First, using (\ref{lowerboundS1}), we have
\begin{eqnarray*}
&&P\Biggl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, \sum_{\mu =2}^M S^\mu_1(J) <-\gamma_1 S^1(J,I) \Biggr] \\ &&\quad \leq
\sum_{J \dvt |J|= \varrho' N} P\Biggl[\sum _{\mu=2}^M S^\mu_1(J) <-
\gamma_1 C_1 \lambda_1 |J|\Biggr]. \end{eqnarray*}
Given the vector $\xi^1=(\xi_i^1)_{i=1,\ldots, N}$, the random variables $(u_i^\mu)_{i=1,\ldots, N}^{\mu=2,\ldots, M}$ are conditionally independent and uniformly distributed on $\{-1,+1\}$. As the estimates we will get for the conditional probabilities and the moment generating function will not depend on the choice of $\xi^1$, they will be true also for the unconditional probabilities.
Given the vector $\xi^1$, the random variables $S^\mu_1(J), \mu =2,\ldots , M$, are independent. Similar to the estimate of Corollary \ref {ExpMarkov}, we obtain
\[ P\Biggl[\sum_{\mu=2}^M S^\mu_1(J)
<-\gamma_1 C_1 \lambda_1 |J|\Biggr] \leq \exp
\biggl(-\frac{1}2\frac{\gamma_1 C_1 \lambda_1 |J|}{\lambda_J+
{M e\{J,V\}}/{(\gamma_1 C_1 \lambda_1 |J|)}} \biggr), \]
where $\lambda_J=\lambda_1(E\{J,V\})$ is the largest eigenvalue of the graph determined by the undirected edges in $E\{J,V\}$. Using Lemma \ref{subgraph}, we have
\[ \lambda_J \le2\bigl[\sqrt{\varrho'} \lambda_1 + \bigl(1-\sqrt{\varrho'} \bigr) \kappa\bigr], \]
and
moreover, $e\{J,V\}\leq e(J,V) \le m |J|$ is trivially true. We deduce that
\[ P\Biggl[\sum_{\mu=2}^M S^\mu_1(J)
<-\gamma_1 C_1 \lambda_1 |J|\Biggr] \leq \exp \biggl(-\frac{\gamma_1 C_1}2 \frac{\varrho'N}{2\sqrt{\varrho'}+2{\kappa }/{\lambda_1}+ {Mm}/{(\gamma_1 C_1(\lambda_1)^2)}} \biggr). \]
Now there are ${N\choose|J|}$ ways to choose the set $J$, and by Stirling's formula
\[ \pmatrix{N \cr
|J|}\leq\exp\bigl(h\bigl(\varrho'\bigr)N\bigr), \]
where
\[ h(x)= -x \log x -(1-x) \log(1-x) \]
is the entropy function introduced above.
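The Stirling estimate $\binom{N}{\varrho' N}\le \exp(h(\varrho')N)$ used here is easy to verify directly for moderate $N$; the following check (an illustration in our own notation, with the entropy in nats as above) compares logarithms to avoid overflow:

```python
import math

def h(x):
    """Binary entropy in nats, as defined above."""
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

N = 500
# log binom(N, k) <= h(k/N) * N for every 1 <= k <= N - 1
ok = all(math.log(math.comb(N, k)) <= h(k / N) * N for k in range(1, N))
print(ok)
```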
Using $h(\varrho') \leq-2 \varrho' \log(\varrho')$, we obtain that
\begin{eqnarray*}
&& \sum_{J \dvt |J|= \varrho' N} P\Biggl[\sum _{\mu=2}^M S^\mu_1(J) <-
\gamma_1 C_1 \lambda_1 |J|\Biggr] \\ &&\quad\le \exp \biggl(-2 \varrho'N \biggl(\frac{\gamma_1 C_1}4 \frac {1}{2\sqrt {\varrho'}+2{\kappa}/{\lambda_1}+ {Mm}/{(\gamma_1 C_1(\lambda _1)^2)}} + \log\bigl(\varrho'\bigr) \biggr) \biggr). \end{eqnarray*}
The exponent is negative, if
\[ \frac{\gamma_1 C_1}4 \frac{1}{2\sqrt{\varrho'}+2{\kappa }/{\lambda _1}+ {Mm}/{(\gamma_1 C_1(\lambda_1)^2)}} + \log\bigl(\varrho'\bigr) >0, \]
which is true if
\begin{equation} \label{firstcond} \frac{\gamma_1 C_1}{8} \frac{1}{2\sqrt{\varrho'}} + \log\bigl(\varrho '\bigr) >0, \end{equation}
as well as
\begin{equation} \label{secondcond} \frac{\gamma_1 C_1}{8}\frac{\lambda_1}{2{\kappa}+ {Mm}/{(\gamma _1C_1\lambda_1)}}+ \log\bigl( \varrho'\bigr) >0. \end{equation}
This gives a first bound on $f(\varrho)$, in the sense that once $\varrho'$ is at least this large, the probability of finding the corresponding sets $I$ and $J$ becomes small.
Now, there exists a $\varrho_1\in(0,0.1) $, such that the first condition \eqref{firstcond} is true if $\varrho'<\varrho_1$. The second condition \eqref{secondcond} is true if
\[ \varrho' > \exp\biggl(- c \frac{\lambda_1}{2 \kappa+ {Mm}/{(\gamma _1C_1\lambda_1})}\biggr), \]
where $c= \frac{\gamma_1C_1}{8}$. This implies that, if there exists a constant $c_2>0$ such that
\[ \varrho' \ge\varrho_0:= \exp\biggl(- c_2 \frac{\lambda_1}{ \kappa+ {Mm}/{\lambda_1}}\biggr), \]
then \eqref{secondcond} is true.
For the second term, we have
\begin{eqnarray*}
&&P\Biggl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, \sum_{\mu=2}^M S^\mu_2(J,I) >\gamma_2 S^1(J,I) \Biggr] \\
&&\quad\leq \sum_{I \dvt |I|= \varrho N}\sum_{J \dvt |J|= \varrho' N} P\Biggl[\sum_{\mu=2}^M S^\mu_2(J,I)
>\gamma_2 C_1 \lambda_1 |J|\Biggr] \\
&&\quad\leq\sum_{I \dvt |I|= \varrho N}\sum_{J \dvt |J|= \varrho' N}
\exp \biggl(-\frac{1}2\frac{\gamma_2 C_1 \lambda_1 |J|}{\lambda_{\{ J,I\}
}+ {M e\{J,I\}}/{(\gamma_2 C_1 \lambda_1 |J|)}} \biggr), \end{eqnarray*}
where $\lambda_{\{J,I\}}=\lambda_1(E\{J,I\})$ is the largest eigenvalue of the graph determined by the undirected edges in $E\{J,I\}$. Using Lemma \ref{subgraph}, we get
\[ \lambda_{\{J,I\}} \le2\bigl[\sqrt{\varrho\varrho'} \lambda_1 + \kappa\bigr] \]
and
\[ e\{J,I\} \le\bigl(\varrho\varrho' \lambda_1 +\sqrt{ \varrho\varrho '}\kappa\bigr) N, \]
which implies
\begin{eqnarray*} &&P\Biggl[\sum_{\mu=2}^M S^\mu_2(J)
>\gamma_2 C_1 \lambda_1 |J|\Biggr] \\ &&\quad\leq\exp \biggl(-\frac{\gamma_2 C_1}2 \frac{\varrho' N}{2\sqrt{\varrho\varrho '}+2 {\kappa}/{\lambda_1}+{M\varrho}/{(\gamma_2 C_1\lambda_1)} + ({M\kappa}/{(\gamma_2 C_1(\lambda_1)^2)})\sqrt{{\varrho}/{\varrho '}}} \biggr). \end{eqnarray*}
There are ${N\choose|I|} {N\choose|J|} $ ways to choose the sets $I$ and $J$ and
\[ \pmatrix{N \cr
|I|} \pmatrix{N \cr
|J|} \leq\exp\bigl(\bigl(h(\varrho)+h\bigl(\varrho'\bigr)\bigr)N\bigr)\leq \exp\bigl(2 h(\varrho) N\bigr), \]
as we assume that $\varrho'\le\varrho\le1/2$. These considerations yield that
\[
P\Biggl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, \sum _{\mu=2}^M S^\mu_2(J,I) >\gamma_2 S^1(J,I)\Biggr] \]
becomes small, once the condition
\[ \frac{\gamma_2 C_1}2 \frac{\varrho'}{2\sqrt{\varrho\varrho '}+2 {\kappa}/{\lambda_1}+{M\varrho}/{(\gamma_2 C_1\lambda_1)} + ({M\kappa}/{(\gamma_2 C_1(\lambda_1)^2)})\sqrt{{\varrho}/{\varrho '}}} > 2 h(\varrho), \]
is satisfied. This is true if
\[ \gamma_2 C_1 \frac{\varrho'}{4\sqrt{\varrho\varrho'} } > 8 h(\varrho),\qquad
\gamma_2 C_1 \frac{\varrho'}{4 {\kappa}/{\lambda_1}} > 8 h(\varrho), \qquad
\frac{\gamma_2 C_1}2 \frac{\varrho'}{{M\varrho }/{(\gamma _2C_1\lambda_1)} } > 8 h( \varrho) \]
and
\[ \frac{\gamma_2 C_1}2\frac{\varrho'}{ ({M\kappa}/({\gamma_2 C_1(\lambda _1)^2}))\sqrt{{\varrho}/{\varrho'}}} > 16 \varrho\log\biggl( \frac{1}\varrho \biggr)\ge8 h(\varrho). \]
From here, we obtain the four conditions
\[ \varrho'> C^2 \varrho h(\varrho)^2,\qquad \varrho'> C \frac{\kappa }{\lambda_1} h(\varrho),\qquad \varrho'> C' \frac{M}{\lambda _1}\varrho h(\varrho) \]
and
\[ \varrho' \ge\varrho\biggl(\frac{2} {C'}\frac{M\kappa}{(\lambda_1)^2} \log \biggl(\frac{1}\varrho\biggr)\biggr)^{2/3}, \]
where $C=\frac{32}{\gamma_2 C_1 }$ and $C'=\frac{16}{(\gamma_2 C_1)^2}$.
For the third term, we have
\begin{eqnarray*} &&P\Biggl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, \sum_{\mu=2}^M S^\mu_3(J,J) <-\gamma_3 S^1(J,I) \Biggr] \\
&&\quad\leq \sum_{J \dvt |J|= \varrho' N} P\Biggl[\sum _{\mu=2}^M S^\mu _3(J,J) <-
\gamma_3 C_1 \lambda_1 |J|\Biggr] \\
&&\quad\leq\sum_{J \dvt |J|= \varrho' N} \exp\biggl(-\frac{1}2 \frac
{\gamma _3 C_1 \lambda_1 |J|}{\lambda_{\{J,J\}}+ {M e\{J,J\}}/{(\gamma_3 C_1 \lambda_1 |J|)}}\biggr), \end{eqnarray*}
where $\lambda_{\{J,J\}}=\lambda_1(E\{J,J\})$ is the largest eigenvalue of the graph determined by the undirected edges in $E\{J,J\}$. Using Lemma \ref{subgraph}, we have
\[ \lambda_{\{J,J\}} \le2 \varrho' \lambda_1 + 2 \kappa \]
and $e\{J,J\}\leq(\varrho' \lambda_1 + \kappa) \varrho' N$, and we obtain as for the previous terms
\begin{eqnarray*}
&&\exp \biggl(-\frac{1}2\frac{\gamma_3 C_1
\lambda_1 |J|}{\lambda_{\{J,J\}}+ {M e\{J,J\}}/{(\gamma_3 C_1
\lambda_1 |J|)}} \biggr) \\ &&\quad\le \exp \biggl(-\frac{\gamma_3 C_1}2 \frac{ \varrho' N}{(2+{M}/{(\gamma_3 C_1 \lambda _1)})\varrho' +({\kappa}/{\lambda_1})(2+{M}/{(\gamma_3 C_1 \lambda_1)}) } \biggr). \end{eqnarray*}
There are $ {N\choose|J|} $ ways to choose the set $J$. From this, we see that
\[
P\Biggl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, \sum _{\mu=2}^M S^\mu_3(J,J) <-\gamma_3 S^1(J,I)\Biggr] \]
becomes small, if the condition
\[ \frac{\gamma_3 C_1}{2(2+{M}/{(\gamma_3 C_1 \lambda_1)})} \frac{1}{1 + {\kappa}/{(\lambda_1\varrho')}} > h\bigl(\varrho'\bigr), \]
is fulfilled, which is true if
\begin{equation} \label{nextconds} h\bigl(\varrho'\bigr)< C \quad\mbox{and}\quad h\bigl( \varrho'\bigr)< C \varrho' \frac{\lambda_1}{\kappa}\qquad \mbox{where } C=\frac{\gamma_3 C_1}{4(2+{M}/{(\gamma_3 C_1 \lambda_1)})}. \end{equation}
As we assume that $M\leq c \lambda_1$, there exists a $\varrho_2(\gamma_3, C_1)\in(0,0.1) $, such that the first inequality in \eqref{nextconds} is true if $\varrho'<\varrho_2$.
Using the bound $h(\varrho') \leq-2 \varrho' \log(\varrho')$ again, we get that there exists $c>0$ such that the second condition in \eqref{nextconds} is true if
\[ \varrho'> \exp\biggl(-c \frac{\lambda_1}{\kappa}\biggr). \]
For the fourth term, we have
\begin{eqnarray*}
&&P\Biggl[\exists I, |I|= \varrho N,\exists J, |J|= \varrho' N, \sum_{\mu=2}^M S^\mu_4(J,I) >\gamma_4 S^1(J,I) \Biggr] \\
&&\quad\leq \sum_{I \dvt |I|= \varrho N}\sum_{J \dvt |J|= \varrho' N} P\Biggl[\sum_{\mu=2}^M S^\mu_4(J,I)
>\gamma_4 C_1 \lambda_1 |J|\Biggr] \\
&&\quad\leq\sum_{I \dvt |I|= \varrho N}\sum_{J \dvt |J|= \varrho' N}
\exp\biggl(-\frac{1}2\frac{\gamma_4 C_1 \lambda_1 |J|}{\lambda_{J\cap I}+
{M e\{J\cap I,J\cap I\}}/{(\gamma_4 C_1 \lambda_1 |J|})}\biggr), \end{eqnarray*}
where $\lambda_{J\cap I}=\lambda_1(E\{J\cap I,J\cap I\})$ is the largest eigenvalue of the graph determined by the undirected edges in $E\{J\cap I,J\cap I\}$.
Using Lemma \ref{subgraph} and assuming that $\varrho'\le\varrho$, we have $\lambda_{J\cap I} \le2 \varrho' \lambda_1 + 2 \kappa$ and $e\{
J\cap I,J\cap I\}\leq(\varrho' \lambda_1 + \kappa) \varrho' N$, which are the same bounds as for the third term. There are ${N\choose|I|}
{N\choose|J|} $ ways to choose the sets $I$ and $J$ and using again
\[ \pmatrix{N \cr
|I|} \pmatrix{N \cr
|J|} \leq\exp\bigl(\bigl(h(\varrho)+h\bigl(\varrho'\bigr)\bigr)N\bigr)\leq \exp\bigl(2 h(\varrho) N\bigr), \]
we finally arrive at the same conditions as for the third term, with possibly a different constant~$C$.
Finally, the various conditions can be summarized as
\begin{eqnarray*}
\varrho'&\ge& c_2 \varrho \biggl( \frac{\kappa}{\lambda_1}\biggr)^2,\qquad
\varrho_1\ge\varrho\ge\varrho' \ge\varrho_0,\qquad
\varrho'> C^2 \varrho h(\varrho)^2, \\ \varrho'&>& C \frac{\kappa}{\lambda_1} h(\varrho),\qquad \varrho '> C' \frac{M}{\lambda_1}\varrho h(\varrho) \quad\mbox{and}\quad \varrho' \ge\varrho\biggl(\frac{2} {C'} \frac{M\kappa}{(\lambda_1)^2} \log \biggl(\frac{1}\varrho\biggr) \biggr)^{2/3}. \end{eqnarray*}
Taking into account all these conditions, we thus get that (\ref {prob1}) is true if we choose
\[ f(\varrho)= \max\biggl\{c_1 \varrho \biggl(\frac{\kappa}{\lambda_1} \biggr)^2, c_1 \varrho h(\varrho),c_1 \frac{\kappa}{\lambda_1} h(\varrho),c_1 \varrho \biggl(\frac{M\kappa}{(\lambda_1)^2} \log\biggl(\frac{1}\varrho\biggr)\biggr)^{2/3}, \varrho _0\biggr\} \]
for some $c_1>0$ large enough and we see that $f(\varrho)\le\varrho$ if $\varrho\in(\varrho_0,\varrho_1)$ with $\varrho_1$ small enough. \end{pf}
In order to prove Theorem \ref{synchronous}, we will apply Lemma \ref{mainlemma} repeatedly until the system attains an original pattern. Using
\[ \varrho_0=\exp\biggl(-c_2\frac{\lambda_1}{\kappa+{Mm}/{\lambda_1}}\biggr), \]
we get that the system can attain an original pattern, that is, $\varrho_0 N<1$, only if
\[ \kappa+ \frac{Mm}{\lambda_1} < c_2 \lambda_1/ \log(N) \]
(which follows from the choice of $M$ made in Theorem \ref{synchronous}).
To determine the maximal number of steps the synchronous dynamics needs to converge, we analyze the following sequences.
\begin{lemma}\label{sequencelemma} Let $(w_n)_{n\in\mathbb{N}}, (x_n)_{n\in\mathbb{N}}, (y_n)_{n\in\mathbb{N}}$ and $(z_n)_{n\in\mathbb{N}}$ be sequences such that
\[ w_0=x_0= y_0 = z_0 = \varrho \in\biggl[\exp\biggl(-\frac{1}{2c}\frac{\lambda_1}{ \kappa}\biggr), 1/e\biggr] \]
and
\begin{eqnarray*} w_{n+1}&=&c w_n \biggl(\frac{\kappa}{\lambda_1} \biggr)^2,\qquad x_{n+1} = c x_n h(x_n), \\ y_{n+1}&=& c\frac{\kappa}{\lambda_1} h(y_n) \quad\mbox{and} \\ z_{n+1} &=& c z_n \biggl(\frac{M\kappa}{(\lambda_1)^2} \log\biggl( \frac{1}{z_n}\biggr)\biggr)^{2/3}, \end{eqnarray*}
for $n\in\mathbb{N}$ and $c>0$. Let us assume that $\frac{\lambda_1}{\kappa } > C_1 \log N$ for some $C_1> 1$ large enough and that $M\le C_2\lambda_1$ for some $C_2>0$. Then the sequences $(w_n), (x_n), (y_n)$ and $(z_n)$ are decreasing and there exists $C_3>0$ and
\[ n_0\ge C_3 \max\biggl\{\log\log N, \frac{\log(N)}{\log({\lambda _1}/{(\kappa\log N)})} \biggr\} \]
such that $\max\{w_{n_0}, x_{n_0}, y_{n_0}, z_{n_0}\}<1/N$. \end{lemma}
\begin{pf} Let us first consider the sequence $(w_n)$. Iterating $w_{n+1} = a w_n$, with $a=c (\frac{\kappa}{\lambda_1})^2$, we get trivially $w_n = a^n w_0$ from which we deduce that $w_n < \frac{1}N$ as soon as $n > c_1 \frac{\log(N)}{\log({\lambda_1}/{\kappa})}$ for some $c_1>0$.
For the sequence $(x_n)$, using $h(x)\leq-2 x \log(x)\leq2 \sqrt{x}$ for $x\in[0,1/2]$, we have $x_{n+1}\leq(C x_n)^{3/2}$ for some constant $C>0$. Iterating, we get $x_n \le(C^3 x_0)^{(3/2)^n}$, from which we deduce that $x_n < \frac{1}N$ if $n\ge c_2 \log\log N$ for some $c_2>0$, if $x_0$ is small enough.
For the sequence $(y_n)$, using again $h(x)\leq-2 x \log(x)$, we have to iterate the relation $y_{n+1} = a y_n \log(\frac{1} {y_n})$, with $a=2c\frac{\kappa }{\lambda_1}$. If we consider $y_0 \in[\exp(-1/a), \exp(-1)] $, the inductively defined sequence $y_{n+1} = g(y_n)$ is decreasing and converges to $\exp(-1/a)$, since the function $g(x)= - a x \log(x)$ is increasing on the interval $[\exp(-1/a), \exp(-1)]$, $y_1 \le y_0$, and $\exp(-1/a)$ is the unique fixed point of $g$ there. Moreover, we have
\begin{eqnarray*} y_{n+2}&=& a^2 y_n \log\biggl(\frac{1} {y_n}\biggr) \biggl( \log\biggl(\frac{1} {y_n}\biggr) + \log \biggl(\frac{1}a\biggr) + \log\biggl(\frac{1}{\log({1}/ {y_n})}\biggr) \biggr) \\ &\leq& a^2 y_n \log\biggl(\frac{1} {y_n}\biggr) \biggl( \log\biggl(\frac{1} {y_n}\biggr) + \log\biggl(\frac{1}a \biggr) \biggr), \end{eqnarray*}
if $y_n\le1/e$. By iteration, if we set $b=\log(\frac{1}{\min\{\varrho, a\}})$, we get similarly for all $n\in\mathbb{N}$,
\begin{eqnarray*}y_{n}& \leq &a^n y_0 \prod_{i=0}^{n-1} \biggl[ \log\biggl(\frac{1}{y_0}\biggr) + i \log\biggl(\frac{1}a \biggr) \biggr] \\ & \leq&(ab)^n y_0 n! \\ & \leq& c_3 \biggl(ab \frac{n}e\biggr)^n \sqrt{n} \\ & =& c_3 \exp\biggl( n\bigl(\log(a) +\log(b)+ \log(n)-1\bigr) + \frac{1}2\log( n)\biggr) \\ & \le& c_3 \exp\bigl(-c_4 \log N \bigl(1+ \mathrm{o}(1) \bigr)\bigr), \end{eqnarray*}
for some $c_3>0$, if $n=c_4 \log(N)/(-\log a - \log\log N)$. In particular, this justifies the hypothesis $\frac{\lambda_1}{\kappa} > c_5 \log N$ for some $c_5> 1$ large enough. We therefore see that there exists some $c_6>0$ such that $\mathrm{e}^{-1/a} \le y_n <\frac{1}N$ for $ n = c_6 \frac{\log(N)}{\log( {\lambda_1}/{(\kappa\log N)})}$.
The last sequence $(z_n)$ can be rewritten as $z_{n+1} = a z_n (\log\frac{1}{z_n})^{2/3}$, with $a=c (\frac{M}{\lambda_1} \frac{\kappa }{\lambda _1})^{2/3}$. With the same technique as for $(y_n)$, we get that the sequence $(z_n)$ converges to $\exp(-1/a^{3/2})$ and $z_n <1/N$ if
\[ n \ge c_7 {\log(N)}\Big/\log\biggl({\frac{(\lambda_1)^2} {M\kappa\log(N)}}\biggr). \]
This proves the lemma. \end{pf}
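The four recursions of the lemma can also be iterated numerically. With illustrative (hypothetical) parameter values satisfying $\lambda_1/\kappa \gg \log N$ and $M\le C_2\lambda_1$, all four sequences fall below $1/N$ after only a handful of steps, well within the bound on $n_0$:

```python
import math

def h(x):
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

# Illustrative parameter choice (ours): lambda_1/kappa = 100, log N ~ 13.8.
N, lam1, kappa, M, c = 10**6, 1000.0, 10.0, 50.0, 0.5
rho = 0.05

recursions = [
    lambda t: c * t * (kappa / lam1) ** 2,                                   # w_n
    lambda t: c * t * h(t),                                                  # x_n
    lambda t: c * (kappa / lam1) * h(t),                                     # y_n
    lambda t: c * t * (M * kappa / lam1 ** 2 * math.log(1 / t)) ** (2 / 3),  # z_n
]

steps = []
for f in recursions:
    t, n = rho, 0
    while t >= 1.0 / N and n < 100:      # safety cap on the iteration count
        t, n = f(t), n + 1
    steps.append(n)
print(steps)
```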
The combination of the previous considerations and Lemma \ref {sequencelemma} then yields Theorem \ref{synchronous}.
\begin{appendix}\label{app}
\section*{Appendix: On the degrees of the Erd\H{o}s--R\'enyi graph}\label{appA} To prove Corollary \ref{GNp}, we need to estimate the minimum and the maximum degree of a typical Erd\H{o}s--R\'enyi graph $G(N,p)$. As we could not find the following result in the literature, we prove it in this appendix.
\begin{lemmas} If $G$ is chosen randomly according to the model $G(N,p)$, then if $p \gg\frac{\log N}N$,
for a set of realizations of $G$ the probability of which converges to one as $N \to\infty$, we have $ m= (1+\mathrm{o}(1)) Np$ and $ \delta= (1+\mathrm{o}(1)) Np$. \end{lemmas}
\begin{pf} Let $G$ be chosen randomly according to the model $G(N,p)$. The law of the degree $d_i$ of an arbitrary vertex $i$ of $G$ is the binomial distribution $B(N,p)$. Hence, using the exponential Markov inequality, we arrive at the following bound: for $p<a<1$ and $N\ge1$,
\[ P[d_i \ge a N] \leq\exp\bigl(-N H(a,p)\bigr), \]
where $H$ is the relative entropy or Kullback--Leibler information
\[ H(a,p) = a \log\biggl(\frac{a}p\biggr) + (1-a) \log\biggl( \frac{1-a}{1-p}\biggr). \]
If we now set $m=\max_i d_i$ as above, we obtain
\[ P[m \ge a N] \leq\sum_{i=1}^N P[d_i \ge a N] \le N \exp\bigl(-N H(a,p)\bigr). \]
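The exponential Markov (Chernoff) bound above can be compared directly against the exact binomial tail. The following Python sketch is editorial, with illustrative parameter values of ours:

```python
import math

def H(a, p):
    """Relative entropy (Kullback-Leibler information) of Bernoulli(a) from Bernoulli(p)."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def binom_tail(N, p, k):
    """Exact P[Binomial(N, p) >= k]."""
    return sum(math.comb(N, i) * p**i * (1 - p)**(N - i) for i in range(k, N + 1))

N, p, a = 50, 0.3, 0.5             # illustrative values with p < a < 1
tail = binom_tail(N, p, math.ceil(a * N))
bound = math.exp(-N * H(a, p))
# The Chernoff bound always dominates the exact tail probability.
print(tail, bound)
```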
If we choose $a=(1+\varepsilon)p$, for some $\varepsilon>0$ such that $a<1$, we therefore get
\[ P\bigl[m \ge(1+\varepsilon)p N\bigr] \leq N \exp \biggl(-N \biggl(p(1+ \varepsilon) \log (1+\varepsilon)+\bigl(1-(1+\varepsilon)p\bigr)\log\biggl( \frac{1-(1+\varepsilon )p}{1-p}\biggr)\biggr) \biggr). \]
Moreover, we have $(1-(1+\varepsilon)p)\log(\frac{1-(1+\varepsilon )p}{1-p}) \ge- p \varepsilon$.
Indeed, if we set $q=1-p$ and $u= 1- (1+\varepsilon)p = q - \varepsilon p$, the last inequality is equivalent to $\log( q/u) \le q/u -1$ which is true since $ q/u >1$. Thus,
\begin{eqnarray*} P\bigl[m \ge(1+\varepsilon)p N\bigr] &\leq& N \exp\bigl(-Np \bigl((1+\varepsilon) \log (1+\varepsilon)-\varepsilon\bigr)\bigr)\\ & \leq& N \exp\biggl(-Np \frac{\varepsilon ^2}2 \bigl(1+\mathrm{o}(1)\bigr)\biggr), \end{eqnarray*}
if we suppose that $\varepsilon=\mathrm{o}(1)$ as $N\rightarrow\infty$. Choosing $ \varepsilon= 2 \sqrt{\frac{\log N}{pN}}$, we have $\varepsilon=\mathrm{o}(1)$ for $p \gg\frac{\log N}N$, and
\[ P\bigl[m \ge(1+\varepsilon)p N\bigr] \rightarrow0 \qquad\mbox{as }N\rightarrow \infty. \]
Moreover, we have $m\ge\lambda_1$ and $\lambda_1 = (1+\mathrm{o}(1))p N$ (with probability converging to 1 as $N\rightarrow\infty$), which gives the result for $m$.
Now, if we set $\delta:=\min_i d_i$, we want to prove that $P[\delta \ge(1+\varepsilon') pN] \rightarrow1$, as $N\rightarrow\infty$, for some $\varepsilon'=\mathrm{o}(1)$.
We consider the complementary graph, that is, the random graph $\overline G$ in which exactly those edges are missing that occur in the corresponding realization of the original random graph $G$. The maximum degree $\overline m$ of $\overline G$ and the minimum degree $\delta$ of $G$ are then linked via the relation $\delta= N-1- \overline m$.
As $\overline G$ is chosen randomly according to the model $G(N,1-p)$, we have
\[ P\bigl[\overline m \ge(1+\varepsilon) (1-p) N\bigr] \le N \exp\bigl(-N H \bigl((1+\varepsilon ) (1-p),1-p\bigr)\bigr), \]
for all $\varepsilon>0$ such that $(1+\varepsilon)(1-p)<1$. Now
\[ H\bigl((1+\varepsilon) (1-p),1-p\bigr) = (1+\varepsilon) (1-p) \log (1+ \varepsilon) +(p-\varepsilon+p\varepsilon)\log\biggl(1-\frac{\varepsilon(1-p)}p\biggr). \]
If we suppose that $\varepsilon= \mathrm{o}(1)$ and $\varepsilon\ll p$, using the inequality $\log(1-x) \ge-x - x^2/2-x^3$ for $x\in(0,1/2)$ to bound the last term, we obtain the estimate
\begin{eqnarray*} H\bigl((1+\varepsilon) (1-p),1-p\bigr) \ge\frac{\varepsilon^2}{2p} \biggl((1-p) - C \biggl( \varepsilon+ \frac{\varepsilon}p\biggr) + \mathcal{O}\bigl(p \varepsilon^2\bigr) \biggr), \end{eqnarray*}
for some $C>0$ and
\[ P\bigl[\overline m \ge(1+\varepsilon) (1-p) N\bigr] \le\exp\biggl( -N \frac {\varepsilon ^2}{2p} \biggl((1-p) - C \biggl(\varepsilon+ \frac{\varepsilon}p\biggr) + \mathcal {O}\bigl(p\varepsilon^2\bigr) \biggr) + \log(N)\biggr). \]
There exists some $c>0$ such that if we choose $\varepsilon= \sqrt{ \frac{4p}{c(1-p)}\frac{\log(N)}N }$, we get
\[ P\bigl[\overline m \ge(1+\varepsilon) (1-p) N\bigr] \le\exp\biggl( -c N \frac {\varepsilon^2}{2p} (1-p)+ \log(N)\biggr) \rightarrow0, \]
under the conditions $p \gg\frac{\log N}N$ and $1-p \gg(\frac{\log N}N)^{1/3}$.
Finally, we get $\delta\ge N-1 - (1+ \varepsilon) (1-p)N = (1+\mathrm{o}(1)) Np$, which is the result under these two conditions.
It remains to extend this result to all $p$ such that $p\rightarrow1$ as $N\rightarrow+\infty$. As previously, using the exponential Markov inequality, we get the following bound: for $0<b<p<1$ and $N\ge1$,
\[ P[d_i \le b N] \leq\exp\bigl(-N H(b,p)\bigr). \]
We set $p=1-a_N$ and $ b= 1- b_N$, for some strictly positive sequences $(a_N)$ and $(b_N)$ such that $a_N + b_N \rightarrow0$, as $N\rightarrow\infty$, $a_N \ll b_N$, and we can restrict to the case $a_N < (c\frac{\log N}N)^{1/3}$ for some $c>0$. We get
\begin{eqnarray*} P[\delta\le b N] &\leq &N \exp\biggl(-N \biggl( (1-b_N) \log\biggl( \frac {1-b_N}{1-a_N}\biggr) + b_N \log\biggl(\frac{b_N}{a_N}\biggr) \biggr)\biggr) \\ &\leq&\exp\biggl(-N \biggl( b_N \log\biggl(\frac{b_N}{a_N} \biggr)-2b_N\biggr)+\log(N)\biggr). \end{eqnarray*}
So, we need to choose $b_N$ such that
\[ b_N \log\biggl(\frac{b_N}{a_N}\biggr) > \frac{\log(N)}{N}. \]
We have
\[ b_N \log\biggl(\frac{b_N}{a_N}\biggr) > b_N \log \biggl({b_N}\biggl(\frac{N}{c \log(N)}\biggr)^{1/3}\biggr)> \frac{\log(N)}{N}, \]
if we choose for instance $b_N =(\frac{\log N}N)^{\gamma}$ with $\gamma \in(0,1/3)$.
Finally, we get for all $p\rightarrow1$ that $\delta\ge(1-b_N) N = (1+ \mathrm{o}(1)) Np$, with probability converging to 1 as $N\rightarrow\infty$. \end{pf} \end{appendix}
\printhistory
\end{document}
\begin{document}
\title{On the Normality of Numbers in Different Bases}
\author{
\begin{tabular}{ccc} Ver\'onica Becher&\ \ \ \ \ \ \ \ & Theodore A.~Slaman\\ \small Universidad de Buenos Aires&&\small University of California Berkeley\\ \small [email protected]&&\small [email protected]
\end{tabular} }
\date{March 22, 2013}
\maketitle
\section{Introduction}
We ask whether normality to one base is related to normality to another. Maxfield in 1953 proved that a real number is normal to a base exactly when it is normal to every base multiplicatively dependent to that base (two numbers are multiplicatively dependent when one is a rational power of the other). \cite{Sch61} showed that this is the only restriction on the set of bases to which a real number can be normal. He proved that for any given set of bases, closed under multiplicative dependence, there are real numbers normal to every base from the given set and not normal to any base in its complement. This result, however, does not settle the question of whether the discrepancy functions for different bases for which a real number is normal are pairwise independent. Nor does it answer whether the set of bases for which a real number is normal plays a distinguished role among its other arithmetical properties.
We pose these problems by means of mathematical logic and descriptive set theory. The set of real numbers that are normal to at least one base is located in the fourth level of the Borel hierarchy. Similarly, the set of indices for computable real numbers that are normal to at least one base is located at the fourth level of the arithmetic hierarchy. In Theorem~\ref{1} we show that from both points of view, the property that a real number is normal to at least one base is complete at the fourth level (${\mathbf\Sigma}_4^0$ and $\Sigma^0_4,$ respectively). This result settles a question in \cite{Bug12} and confirms a conjecture of A.~Ditzen~\cite[see][]{KiLin94}. We obtain the result by first establishing in Theorem~\ref{2}, that for any set at the third level of the arithmetic hierarchy ($\Pi^0_3$), there is a computable real number which is normal exactly to the bases multiplicatively dependent to elements of that set. Theorem~\ref{3} exhibits a fixed point: for any property of bases expressed at the third level of the arithmetic hierarchy ($\Pi^0_3$) and closed under multiplicative dependence, there is a real number $\xi$ such that the bases which satisfy the property relative to $\xi$ are exactly those for which $\xi$ is normal.
Theorem \ref{4} shows that the discrepancy functions for different bases can go to zero independently. We construct absolutely normal real numbers such that their discrepancy functions for a given base $s$ converge to zero arbitrarily slowly and such that their discrepancies for all the bases multiplicatively independent to $s$ are eventually dominated by a single computable bound. In contrast, the real numbers constructed by \cite{Sch61} are not normal to a given base $s$ and the discrepancy functions for all bases multiplicatively independent to $s$ converge to zero at a prescribed rate. With a different proof, \cite{BMP85} extended Schmidt's result and then \cite{MorPea88} gave explicit bounds for the rate obtained with their method.
In our construction the nonconforming behavior of the constructed real number
with respect to base $s$ appears even though it is normal to base $s$.
Theorem~\ref{5} sharpens Theorem~1 in \cite{Sch61}. We construct a real number that is normal for all elements in a given set and denies even simple normality to all other elements, addressing an issue raised in \cite{BMP85}.
Normality is an almost-everywhere property of the real numbers: the set of normal numbers has Lebesgue measure one. Normality in some bases and not all of them is also an almost-everywhere property, albeit not in the sense of Lebesgue. Consider the Cantor set $C_s$ obtained by omitting the last digit (or two) in the base $s$ expansions of real numbers ($s$ greater than $2$). Clearly, no element of $C_s$ is simply normal to base $s$. However, viewed from the perspective of the uniform measure on this Cantor set, \cite{Sch60} shows that the subset of $C_s$ whose elements are normal to every base $r$ multiplicatively independent to $s$ has measure one.
Our focus is on constructing real numbers and maintaining independent control over their discrepancy functions for multiplicatively independent bases. Since almost every element of $C_s$ is normal to base $r$, almost every sufficiently long finite initial segment of a real in $C_s$ has small discrepancy from normal in base $r$. It is our task to convert this observation into methods of constructing real numbers by iteratively extending their expansions in various bases. The first part of our task is to give computable bounds on discrepancy and estimates on how quickly discrepancy for base $r$ decreases almost everywhere in $C_s$. The second part is to convert these finitary bounds into modules for constructions. The typical module lowers discrepancy in bases $r$ from a finite set $R$ and increases discrepancy in a multiplicatively independent base $s$. It is important that the estimates on discrepancy be applicable in any basic open neighborhood in $C_s$ so that the modules can be used as any finite point in the construction.
\section{Theorems}
\begin{notation}
A~{\em base} is an integer greater than or equal to $2$. For a real number $\xi$, we use
$\expa{\xi}$ to denote its fractional part. We write $\vec{\xi}$ to denote a sequence and $\xi_j$
to denote the $j$th element of $\vec{\xi}$. If~$\vec{\xi}$ is finite with $N$ elements we write
it as $(\xi_1,\ldots, \xi_N)$. For a subinterval $I$ of $[0,1]$, $\measure{I}$ is its measure,
equivalently its length. For a finite set $S$, $\card S$ is its cardinality. We often drop the
word {\em number} and just say {\em a real} or {\em a rational } or {\em an integer}.
\end{notation}
We recall the needed definitions and then state our results. The usual presentation of the property of normality to a given base for a real number is in terms of counting occurrences of blocks of digits in its expansion in that base (\cite{Bug12,KuiNie74}). Absolute normality is normality to all bases. We define normality in terms of discrepancy. See either of the above references for a proof of Wall's Theorem, which establishes the equivalence.
\begin{definition}\label{2.1} The \emph{discrepancy} of a sequence $\vec{\xi}=(\xi_1,\dots, \xi_N)$ of real numbers in the unit interval is
\[
D(\vec{\xi})=\sup_{0\leq u< v\leq 1}
\Bigl|
\frac{\card\{n:1 \leq n \leq N, u \leq \xi_n < v\}}{N}-(v-u)
\Bigr|.
\] \end{definition}
When we refer to a sequence by specifying its elements, we will write $D(\xi_1,\dots, \xi_N)$, rather than $D((\xi_1,\dots, \xi_N))$.
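As an editorial illustration of the definition (not part of the paper): for a sorted finite sequence, the supremum over intervals can be evaluated in closed form as $D = 1/N + \max_i(\xi_i - i/N) - \min_i(\xi_i - i/N)$, a standard identity (see, e.g., \cite{KuiNie74}). A minimal Python sketch:

```python
def discrepancy(xs):
    """Extreme discrepancy D(x_1,...,x_N) over intervals [u, v), computed
    via the standard closed form for sorted points:
    D = 1/N + max_i(x_i - i/N) - min_i(x_i - i/N)."""
    xs = sorted(xs)
    N = len(xs)
    diffs = [x - (i + 1) / N for i, x in enumerate(xs)]
    return 1.0 / N + max(diffs) - min(diffs)

# Equidistant points achieve the minimum possible value 1/N ...
print(discrepancy([0.0, 0.25, 0.5, 0.75]))   # 0.25
# ... while clustering the points raises the discrepancy.
print(discrepancy([0.25, 0.75]))             # 0.5
```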
\begin{definition}\label{2.2}
Let $r$ be a base. A real number $\xi$ is \emph{normal to base $r$} if and only if
$\lim_{N\to\infty}D(\expa{r^j\xi}:0\leq j< N)=0$. \emph{Absolute normality} is
normality to every base. \end{definition}
A formula in the language of arithmetic
is $\Pi^0_0$ and $\Sigma^0_0$ if
all of its quantifiers are bounded. It is $\Sigma^0_{n+1}$ if it has the form $\exists x\, \theta$
where $\theta$ is $\Pi^0_n$ and it is $\Pi^0_{n+1}$ if it has the form $\forall x\, \theta$ where
$\theta$ is $\Sigma^0_n$. A subset $A$ of ${\mathbb{N}}$ is $\Sigma^0_n$ (respectively, $\Pi^0_n$) if there
is a $\Sigma^0_n$ (respectively, $\Pi^0_n$) formula $\varphi$ such that for all $n$, $n\in A$ if and
only if $\varphi(n)$ is true.
A $\Sigma^0_n$ subset $A$ of the natural numbers is $\Sigma^0_n$-complete if there is a computable
function $f$ mapping $\Sigma^0_n$ formulas to natural numbers such that for all $\varphi$, $\varphi$ is
true in the natural numbers if and only if $f(\varphi)\in A.$
The Borel hierarchy for subsets of ${\mathbb{R}}$ with the usual topology
states that a set $A$ is ${\mathbf\Sigma}^0_1$ if and only if $A$ is open
and $A$ is ${\mathbf\Pi}^0_1$ if and only if $A$ is closed.
$A$ is ${\mathbf\Sigma}^0_{n+1}$ if and only if it is a
countable union of ${\mathbf\Pi}^0_{n}$ sets and $A$ is ${\mathbf\Pi}^0_{n+1}$ if and only if it is a countable
intersection of $\mathbf\Sigma^0_n$ sets.
By an important theorem, a $\mathbf\Sigma^0_n$ subset of ${\mathbb{R}}$ is $\mathbf\Sigma^0_n$-complete if
and only if it is not $\mathbf\Pi^0_n$.
\begin{theorem}\label{1} (1) The set of indices for computable real numbers which are normal to at least one base is $\Sigma^0_4$-complete. (2) The set of real numbers that are normal to at least one base is $\mathbf \Sigma^0_4$-complete. \end{theorem}
\begin{remark} A routine extension of the proof shows that the set of real numbers which are normal to infinitely many bases is ${\mathbf\Pi}^0_5$-complete. Expressed in terms of the complement, the set of real numbers which are normal to only finitely many bases is ${\mathbf\Sigma}^0_5$-complete. \end{remark}
Let $M$ be the set of minimal representatives of the multiplicative dependence equivalence classes. Our proof of Theorem~\ref{1} relies on the following.
\begin{theorem}\label{2}
For any $\Pi^0_3$ subset $R$ of $M$ there is a computable real number $\xi$ such that for all $r$
in $M$, $r\in R$ if and only if $\xi$ is normal to base $r$. Furthermore, $\xi$ is computable
uniformly in the $\Pi^0_3$ formula that defines $R$. \end{theorem}
Theorem~\ref{3} exhibits a fixed point: the real $\xi$ appears in the $\Pi^0_3$ definition of its input set. It asserts that the set of bases for which $\xi$ is normal can coincide with any other property of elements of $M$ definable by a $\Pi^0_3$ formula relative to $\xi$. Thus, the set of bases for normality can be arbitrary, nothing distinguishes it from other $\Pi^0_3$ predicates on $M$. As a subset of ${\mathbb{N}}$ its only distinguishing feature is that it is closed under multiplicative dependence.
\begin{theorem}\label{3} For any $\Pi^0_3$ formula $\varphi$ there is a computable real number $\xi$ such that for any base $r\in M$, $\varphi(\xi,r)$ is true if and only if $\xi$ is normal to base $r$. \end{theorem}
Theorem~\ref{4} illustrates the independence between the discrepancy functions for multiplicatively independent bases by exhibiting an extreme case, that all but one of the bases behave predictably and the other is arbitrarily slow.
\begin{theorem}\label{4}
Fix a base $s$. There is a computable function $f:{\mathbb{N}}\to{\mathbb{Q}}$ monotonically decreasing to
$0$ such that for any function $g:{\mathbb{N}}\to{\mathbb{Q}}$ monotonically decreasing to $0$ there is an
absolutely normal real number $\xi$ whose discrepancy for base $s$ eventually dominates $g$ and
whose discrepancy for each base multiplicatively independent to $s$ is eventually dominated by $f$.
Furthermore, $\xi$ is computable from $g$. \end{theorem}
\begin{remark} The proof of Theorem~\ref{4} can be adapted to produce other contrasts in behavior between multiplicatively independent bases. We give two examples.
(1) Let $s$ be a base. There is a computable function $f:{\mathbb{N}}\to{\mathbb{Q}}$ monotonically decreasing to $0$ such that for any function $g:{\mathbb{N}}\to{\mathbb{N}}$, there is an absolutely normal real number $\xi$ such that its discrepancy for base $s$ satisfies the following: for all $n$ there is an $N>g(n)$ such that $D(\expa{s^j\xi}:0\leq j< N)>1/n$; and its discrepancies for bases multiplicatively independent to $s$ are eventually bounded by $f$. Furthermore, $\xi$ is computable from any real number $\rho$ which can computably approximate $g$.
(2) Let $s$ and $r$ be multiplicatively independent bases. There is a computable absolutely normal number $\xi$ such that \[ \limsup_{N\to\infty}\frac{D(\expa{s^j\xi}:0\leq j< N)}{D(\expa{r^j\xi}:0\leq j< N)} = \limsup_{N\to\infty}\frac{D(\expa{r^j\xi}:0\leq j< N)}{D(\expa{s^j\xi}:0\leq j< N)} =\infty. \] \end{remark} \begin{remark}\label{2.5}
There is a computable function $f:{\mathbb{N}}\to{\mathbb{Q}}$ monotonically decreasing to $0$ such that the
discrepancy of almost every real number is eventually dominated by $f$. In contrast,
there is no computable function which dominates the discrepancy of all the computable
absolutely normal numbers. \end{remark}
Finally, we state the improvement of Theorem 1 of \cite{Sch61}, asserting simple normality in the conclusion.
\begin{definition}\label{2.6}
Let $N$ be a positive integer. Let $\xi_1,\dots, \xi_N$ be real numbers in $[0,1]$. Let $F$ be a
family of subintervals. The discrepancy of $\xi_1,\dots,\xi_N$ for $F$ is \[
D(F,(\xi_1,\dots,\xi_N))=\sup_{I\in F}
\Bigl|
\frac{{\card}\{n:\xi_n \in I\}}{N}-\measure{I}
\Bigr|.
\] \end{definition}
\begin{definition}\label{2.7}
Let $r$ be a base and let $\xi$ be a real number. Let $F$ be the set of intervals of the form
$[a/r,(a+1)/r)$, where $a$ is an integer $0\leq a<r$. $\xi$ is \emph{simply normal to base $r$} if
$\lim_{N\to\infty}D(F,(\expa{r^j\xi}:0\leq j< N))=0.$ \end{definition}
\begin{theorem}\label{5}
Let $R$ be a set of bases closed under multiplicative dependence. There are real numbers normal
to every base from $R$ and not simply normal to any base in its complement. Furthermore, such a real number
can be obtained computably from $R$. \end{theorem}
\section{Lemmas}
\subsection{On Uniform Distribution of Sequences}
\begin{lemma}\label{3.1}
Let $\epsilon$ be a real number strictly between $0$ and $1$. Let $F_\epsilon$ be the
family of semi-open intervals $B_a=[a/\ceil{3/\epsilon}, (a+1)/\ceil{3/\epsilon})$, where $a$ is
an integer $0\leq a<\ceil{3/\epsilon}$. For any sequence $\vec{\xi}$ and any $N$, if
$D(F_\epsilon,\vec{\xi})<(\epsilon/3)^2$ then $D(\vec{\xi})<\epsilon$. \end{lemma}
\begin{proof}
Let $\vec{\xi}$ be a sequence of real numbers of length $N$ such that $D(F_\epsilon,\vec{\xi})$ is
less than $(\epsilon/3)^2$. Let $I$ be any semi-open subinterval of $[0,1]$. Denote
$\ceil{3/\epsilon}$ by $n$.
The number of $B_a$ with nonempty intersection with $I$
is less than or equal to $\ceil{n\measure{I}}$. For each $B_a\in F_\epsilon$, $\card\{\xi_n:\xi_n\in
B_a\}$ is less than or equal to $(1/n+\epsilon^2/9)N$. Thus, by the definition of $n$,
\begin{align*}
\frac1N\card\{\xi_n:\xi_n\in I\}&\leq \frac1N\ceil{n\measure{I}} (1/n+\epsilon^2/9)N \leq \measure{I}+\epsilon.
\end{align*}
Similarly, $\frac1N\card\{\xi_n:\xi_n\in I\}\geq \measure{I}-\epsilon.$ \end{proof}
\begin{remark}\label{3.2} In Lemma~\ref{3.1}, $F_\epsilon$ can be replaced by any partition of $[0,1]$ into subintervals of equal length, each of length at most $\epsilon/3$. \end{remark}
We record the next three observations without proof.
\begin{lemma}\label{3.3}
Suppose that $\epsilon$ is a positive real, $\vec{\xi}$ is a sequence of length $N$ and that
$D(\vec{\xi})<\epsilon$. For any sequence $\vec{\nu}$ of length $n$ with $n<\epsilon N$, for all
$k\leq n$, $D(\nu_1\dots, \nu_k, \xi_1,\dots, \xi_N)<2\epsilon$ and $D(\xi_1,\dots,
\xi_N,\nu_1\dots, \nu_k)<2\epsilon$. \end{lemma}
\begin{lemma}\label{3.4}
Let $\vec{\xi}$ be a sequence of real numbers, $\epsilon$ a positive real and $(b_m:0\leq
m<\infty)$ an increasing sequence of positive integers. Suppose that there is an $m_0$ such that
for all $m>m_0$, $b_{m+1}-b_m\leq \epsilon b_m$ and $D(\xi_j:b_m< j\leq
b_{m+1})<\epsilon$. Then $\lim_{N\to\infty}D(\vec{\xi})\leq 2\epsilon.$ \end{lemma}
\begin{lemma}\label{3.5}
Let $m$ be a positive integer and $I$ a semi-open interval. Suppose $\vec{\xi}$ is a sequence
of real numbers of length $N$ such that $N\geq\ceil{2m/\measure{I}}$ and for all $j$ with $m\leq j\leq N$, $\xi_j\not\in I$. Then, $D(I,\vec{\xi})\geq\measure{I}/2$. \end{lemma}
\begin{notation}
We let $e(x)$ denote $e^{2\pi i x}$. \end{notation}
\begin{externaltheorem}[Weyl's Criterion \protect{\cite[see][]{Bug12}}] A sequence $(\xi_n:n\geq 1)$ of real numbers is uniformly distributed modulo one if and only if for every non-zero integer $t,$ $\displaystyle{\lim_{N\to\infty}\frac1N\sum_{j=1}^N e(t\xi_j)=0.}$ \end{externaltheorem}
\begin{externaltheorem}[\protect{\textbf{LeVeque's Inequality} \cite[see][Theorem~2.4]{KuiNie74}}]\label{3.7} Let $\vec{\xi}=(\xi_1,\dots,\xi_N)$ be a finite sequence. Then, $\displaystyle{
D(\vec{\xi})\leq \Bigl(\frac6{\pi^2}\;\sum_{h=1}^\infty \frac1{h^2} \Bigl|\frac1N\; \sum_{j=1}^N
e(h\xi_j)\Bigr|^2\Bigr)^{\frac13}.}$ \end{externaltheorem}
\begin{lemma}\label{3.8}
For any positive real $\epsilon$ there is a finite set $T$ of integers and a positive real~$\delta$
such that for any $\vec{\xi}=(\xi_1,\dots,\xi_N)$, if for all $t\in T$,
$\displaystyle{\frac1{N^2}\; \Bigl|\sum_{j=1}^N e(t\xi_j)\Bigr|^2<\delta}$ then
$D(\vec{\xi})<\epsilon.$ Furthermore, such $T$ and $\delta$ can be computed from $\epsilon$. \end{lemma}
\begin{proof} By LeVeque's Inequality, $\displaystyle{
D(\vec{\xi})\leq \Bigl(\frac6{\pi^2}\;\sum_{h=1}^\infty \frac1{h^2} \Bigl|\frac1N\; \sum_{j=1}^N
e(h\xi_j)\Bigr|^2\Bigr)^{\frac13}.}$
Note that for each $h$,
$ \Bigl|\frac1N\; \sum_{j=1}^N
e(h\xi_j)\Bigr|^2\leq 1.$ Hence,
\begin{align*}
\sum_{h=m+1}^\infty \frac1{h^2} \Bigl|\frac1N\; \sum_{j=1}^N e(h\xi_j)\Bigr|^2&\leq \sum_{h=m+1}^\infty \frac1{h^2} \leq \int_{m}^\infty x^{-2}\, dx = \frac1{m}.
\end{align*}
Assume $\displaystyle{\frac1{N^2}\; \Bigl|\sum_{j=1}^N e(t\xi_j)\Bigr|^2<\delta}$ for all positive integers $t$ less than or equal to $m$. Then,
\begin{align*}
\sum_{h=1}^m \frac1{h^2} \Bigl|\frac1N\; \sum_{j=1}^N e(h\xi_j)\Bigr|^2
+\sum_{h=m+1}^\infty \frac1{h^2} \Bigl|\frac1N\; \sum_{j=1}^N e(h\xi_j)\Bigr|^2\leq \sum_{h=1}^m \frac{1}{h^2} \delta +\frac1{m}
\leq\delta m+\frac1{m}.
\end{align*}
To ensure $D(\vec{\xi})<\epsilon$ it is sufficient that $\bigl(\frac6{\pi^2}\bigl(\delta m+\frac1{m}\bigr)\bigr)^{1/3}<\epsilon$.
This is obtained by setting $\delta m< (1/2) ( \epsilon^3\pi^2/ 6)$ and $1/m< (1/2) ( \epsilon^3\pi^2/ 6)$. Let $m = \ceil{12/(\epsilon^3\pi^2)}+1$, $T=\{ 1, 2, \ldots, m\}$ and $\delta= (\epsilon^3\pi^2) / (24 m)$. \end{proof}
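The recipe at the end of the proof is effective. The following editorial Python sketch computes $m$, $T=\{1,\dots,m\}$ and $\delta$ from $\epsilon$ (the exact integer rounding of $m$ below is ours, chosen to err on the safe side) and verifies the sufficiency condition from LeVeque's Inequality numerically:

```python
import math

def leveque_parameters(eps):
    """Return a cutoff m (so T = {1,...,m}) and a threshold delta such that
    Weyl sums below delta for all t in T force discrepancy below eps.
    The rounding of m is ours and errs on the safe side."""
    m = math.ceil(12.0 / (eps**3 * math.pi**2)) + 1
    delta = eps**3 * math.pi**2 / (24.0 * m)
    return m, delta

eps = 0.1
m, delta = leveque_parameters(eps)
# Sufficiency check: truncated series bound delta*m plus tail bound 1/m,
# fed through LeVeque's inequality, must stay below eps.
print((6.0 / math.pi**2 * (delta * m + 1.0 / m)) ** (1.0 / 3.0), eps)
```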
\subsection{On Normal Numbers}
\begin{notation}
We use $\base{b}{r}$ to denote $\ceil{b/\log r}$, where $\log$ refers to natural logarithm. We
say that a rational number $\eta\in[0,1]$ is $s$-adic when $\eta=\sum_{j=1}^ad_js^{-j}$ for digits
$d_j$ in $\{0,\dots,s-1\}$. In this case, we say that $\eta$ has precision $a$. We use
$\digits{s}{k}$ to denote sequences in the alphabet $\{0,\dots,s-1\}$ of length $k$. For a
sequence $w$, we write $|w|$ to denote its length. When $1\leq i\leq j\leq |w|$, we call
$(w_i,\dots,w_j)$ a block of $w$. The number of occurrences of the block $u$ in $w$ is
$occ(w,u)=\#\{ i: (w_i,\dots,w_{i+|u|-1})=u \}$. \end{notation}
\begin{lemma}\label{3.9}
Let $s$ and $r$ be bases, $a$ be a positive integer and $\epsilon$ be a real between $0$ and~$1$.
There is a finite set of intervals $F$ and a positive integer $\ell_0$ such that for all $\ell\geq\ell_0$
and all $\xi_0$, if $\xi\in[\xi_0,\xi_0+s^{-\base{a+\ell}{s}})$ and $D(F,(\expa{r^j \xi_0}:\base{a}{r} < j\leq \base{a+\ell}{r}))<(\epsilon/10)^4$ then
$D(\expa{r^j \xi}:\base{a}{r}< j\leq \base{a+\ell}{r})<\epsilon$.
Furthermore, $\ell_0$ and $F$ can be taken as computable functions of $r$ and $\epsilon$. \end{lemma}
\begin{proof}
Let $F_\epsilon$ be as in Lemma~\ref{3.1} and let $I$ be an interval in $F_\epsilon$. Let $n$
denote $\ceil{100/\epsilon^2}.$ Let $F$ be the set of semi-open intervals $B_c=[c/n, (c+1)/n)$,
where $0\leq c<n$. For the sake of computing $\ell_0$, consider $b>a$, $\xi$ and $\xi_0$ such
that $\xi\in[\xi_0,\xi_0+s^{-\base{b}{s}})$. Assume $D(F,(\expa{r^j
\xi_0}:\base{a}{r}< j\leq \base{b}{r}))<(\epsilon/10)^4$.
Note that for all $j$ less than $\base{b}{r}-\log n/\log r-1$, we have $r^js^{-\base{b}{s}}< 1/n$.
Hence,
for all but the last $\ceil{\log n/\log r}+2$ values of $j$, $|r^j\xi_0 - r^j\xi|<1/n$.
Let $C$ be the set of intervals $B_c$ such that either $B_c$ or $B_{c+1}$ has non-empty
intersection with $I$. If $j$ is less than $\base{b}{r}-\log n/\log r-1$ then
$\expa{r^j\xi}\in I$ implies that $\expa{r^j\xi_0}\in \cup C$.
Observe that $\#C\leq \ceil{n \measure{I}}+2$. The fraction \[ \frac1{\base{b}{r}-\base{a}{r}} \#\{j : \base{a}{r}< j <\base{b}{r} -\log n/\log r-1 \text{ and } \expa{r^j\xi}\in \cup C\} \] is at most $\bigl(\ceil{n\measure{I}}+2\bigr) (1/n+(\epsilon/10)^4)$. And by definition of $n$, $$
(n\measure{I}+3) (1/n+(\epsilon/10)^4) \leq
\measure{I}+\ceil{100/\epsilon^2}(\epsilon/10)^4 +
3/\ceil{100/\epsilon^2}+3(\epsilon/10)^4 \leq
\measure{I}+ (1/2)(\epsilon/3)^2. $$ There are at most $\ceil{\log n/\log r}+2$ remaining $j$, those for which $j\geq \base{b}{r}-\log n/\log r-1$. Suppose that for each such $j$, $r^j\xi\in I$. Then,
\begin{align*} \frac{\ceil{\log n/\log r} + 2}{\base{b}{r}-\base{a}{r}} &\leq \frac{\log n/\log r + 3}{\base{b}{r}-\base{a}{r}} \leq \frac{\log \ceil{100/\epsilon^2} +3\log r}{b-a-\log r}.
\end{align*}
Let $\ell_0$ be $\bigceil{\log r+\frac{18}{\epsilon^2}\ceil{\log \ceil{100/\epsilon^2} +3\log
r}}$. For $b\geq a+\ell_0$, $\frac{\log \ceil{100/\epsilon^2} +3\log r}{b-a-\log
r}<(1/2)(\epsilon/3)^2$. A similar argument yields the same estimates for the needed lower
bound. Then, for $F_\epsilon$ and any $b\geq a+\ell_0$, $\displaystyle{
D(F_\epsilon,(\expa{r^j\xi}:\base{a}{r}< j\leq\base{b}{r}))<(\epsilon/3)^2.
}$
By applying Lemma~\ref{3.1}, for any $\ell\geq \ell_0$,
$D(\expa{r^j \xi}:\base{a}{r}< j\leq
\base{a+\ell}{r})<\epsilon$. \end{proof}
\begin{definition}
Fix a base $s$. The \emph{discrete discrepancy} of $w\in\digits{s}{N}$ for a block of size $\ell$ is
\[
C(\ell,w)=\max\left\{\left|\frac{occ(w,u)}{N}-\frac{1}{s^\ell}\right|:u\in\digits{s}{\ell}\right\}. \] \end{definition}
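An editorial Python sketch (ours, not part of the paper) of $occ$ and of the discrete discrepancy $C(\ell,w)$; note that, following the definition, the count of occurrences is divided by $N$ rather than by the number $N-\ell+1$ of block positions:

```python
from itertools import product

def occ(w, u):
    """Number of (overlapping) occurrences of the block u in w."""
    u = tuple(u)
    return sum(1 for i in range(len(w) - len(u) + 1) if tuple(w[i:i + len(u)]) == u)

def discrete_discrepancy(s, ell, w):
    """C(ell, w): worst deviation of block frequencies in w (alphabet
    {0,...,s-1}) from the ideal frequency s**(-ell); the division is by
    N = len(w), as in the definition."""
    N = len(w)
    return max(abs(occ(w, u) / N - s**(-ell)) for u in product(range(s), repeat=ell))

w = [0, 1, 0, 1, 0, 1, 0, 1]
print(occ(w, (0, 1)))                 # 4 overlapping occurrences
print(discrete_discrepancy(2, 1, w))  # 0.0: single digits are balanced
print(discrete_discrepancy(2, 2, w))  # 0.25: the blocks 00 and 11 never occur
```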
The next lemmas relate the discrete discrepancy of sequences in $w\in\digits{s}{N}$ to the discrepancy of their associated sequences of real numbers.
\begin{lemma}\label{3.11}
Let $\epsilon$ be a positive real, $s$ a base, $\ell$ and $N$ positive integers such that
$s^\ell>3/\epsilon$ and $N>2\ell(3/\epsilon)^2$, and $w\in\digits{s}{N}$ such that
$C(\ell,w)<\epsilon^2/18$. Then, $D(\expa{s^j\eta_w}:0\leq j<N)<\epsilon$, where
$\eta_w=\sum_{j=1}^{|w|} w_j s^{-j}$. \end{lemma}
\begin{proof}
Let $\ell$ be such that $s^{-\ell}<\epsilon/3$. Let $F$ be the set of $s$-adic intervals of
length $s^{-\ell}$. Any $I$ in $F$ has the form $[\eta_u,\eta_u+s^{-\ell})$, for some
$u\in\digits{s}{\ell}$, and further, $\expa{s^j\eta_w}\in I$ if and only if the block $u$ occurs
in $w$ at position $j+1$. Thus, we can count instances of $\expa{s^j\eta_w}\in I$ by counting
instances of $u$ in $w$. Let $N$ and $w$ be given so that $N>2\ell(3/\epsilon)^2$,
$w\in\digits{s}{N}$ and $C(\ell,w)<(\epsilon/3)^2/2$. Then, for any $u\in\digits{s}{\ell}$,
$\left| occ(w,u)/N-s^{-\ell} \right|<(\epsilon/3)^2/2$. For any $I\in F$, $\displaystyle{
\frac1N\card\left\{j:\expa{s^j\eta_w}\in I \mbox{ and } 0\leq j<N\right\}
<s^{-\ell}+(\epsilon/3)^2/2+(\ell-1)/N< s^{-\ell}+(\epsilon/3)^2.}$ A similar count gives the
analogous lower bound. Hence, $D(F,(\expa{s^j\eta_w}:0\leq j<N))<(\epsilon/3)^2$ and so
$D(\expa{s^j\eta_w}:0\leq j<N)<\epsilon$, by application of Lemma~\ref{3.1} and Remark~\ref{3.2}. \end{proof}
\begin{lemma}[see Theorem~148, \protect{\cite{hardy}}]\label{3.12}
For any base $s$, for any positive integer $\ell$ and for any positive real numbers $\epsilon$ and $\delta$,
there is an $N_0$ such that for all $N\geq N_0$,
\[
{\card}\Bigl\{v\in \digits{s}{N}: C(\ell,v)\geq\epsilon\Bigr\} < \delta s^N.
\]
Furthermore, $N_0$ is a computable function of $s$, $\epsilon$ and $\delta$. \end{lemma}
The next lemma is specific to base $2$ and will be applied in the proof of Theorem~\ref{5}.
\begin{lemma}\label{3.13}
Given a positive real number $\epsilon$, there is an $N_0$ such that for all $N\geq N_0$,
\[
{\card}\Bigl\{v\in \digits{2}{N}:
\frac{1}{2N}\card\bigl\{m:\expa{2^m\eta_v}\in[0,1/2)\bigr\}\geq 5/8 \Bigr\}> (1-\epsilon)2^{N} \] where for $v=(v_1,\dots,v_N)\in \digits{2}{N}$, $\eta_v=\sum_{j=1}^N v_j4^{-j}$. Furthermore, $N_0$ is a computable function of $\epsilon$. \end{lemma}
\begin{proof} By Lemma~\ref{3.12}, for any positive $\delta$ there is an $N_0$ such that for all $N\geq N_0$, \[ {\card}\Bigl\{v\in \digits{2}{N}: C(1,v)\leq\delta\Bigr\} \geq (1-\epsilon) 2^{N}. \] Thus, for $(1-\epsilon)2^N$ many $v$, $\displaystyle{
\left|\frac{\card\{n:v_n=0\}}{N}-\frac12\right|<\delta \quad\mbox{and}\quad
\left|\frac{\card\{n:v_n=1\}}{N}-\frac12\right|<\delta. }$ Consider the natural bijection $V$ between $\digits{2}{N}$ and the set ${\mathcal{L}}$ of sequences of length $N$ of symbols from $\{(00),(01)\}$. Then for $(1-\epsilon)2^N$ many $v\in\digits{2}{N}$, \[
\left|\frac{\card\{n:V(v)_n=(00)\}}{N}-\frac12\right|<\delta
\quad\mbox{and}\quad \left|\frac{\card\{n:V(v)_n=(01)\}}{N}-\frac12\right|<\delta. \] We can construe each length $N$ sequence $V(v)$ from ${\mathcal{L}}$ as a length $2N$ binary sequence $V^*(v)$. Under this identification, $\eta_v=\sum_{j=1}^{2N}V^*(v)_j2^{-j}=\sum_{j=1}^N v_j4^{-j}.$ For any $v\in\digits{2}{N}$, \[
\card\{m:V^*(v)_m=0\}=2\card\{n:V(v)_n=(00)\}+\card\{n:V(v)_n=(01)\}. \] So, for $(1-\epsilon)2^N$ many $v\in\digits{2}{N}$, \[ \card\{m:V^*(v)_m=0\}\geq 2 (1/2-\delta)N+(1/2-\delta)N=3/2N-3\delta N. \] Thus, $\card\{m:\expa{2^m\eta_v}\in[0,1/2) \mbox{ and }0\leq m< 2N\}\geq (3/2)N-3\delta N$. Hence, \[ \frac{1}{2N}\card\{m:\expa{2^m\eta_v}\in[0,1/2)\}\geq 3/4-3\delta/2. \] For $\delta=1/12$, the lemma follows. \end{proof}
\begin{lemma}\label{3.14}
Let $\epsilon$ be a positive real and let $s$ be a base. There is a $k_0$ such that for every
$k\geq k_0$ there is an $N_0$ such that for all $N\geq N_0$,
\[
\card\Bigl\{w\in \digits{{\tilde{s}}}{N}:
D(\expa{s^j\eta_w}: 0\leq j< kN)<\epsilon\Bigr\}
> (1/2) {\tilde{s}}^{\, N},
\] where ${\tilde{s}}$ is either of $s^k-1$ or $s^k-2$, and for $w=(w_1,\dots,w_N)\in \digits{{\tilde{s}}}{N}$, $\eta_w=\sum_{j=1}^N w_j(s^{k})^{-j}$. Furthermore, $k_0$ is a computable function of $s$ and $\epsilon$ and $N_0$ is a computable function of $s$, $\epsilon$ and $k$. \end{lemma}
\begin{proof}
Fix the real $\epsilon$ (to be used only at the end of the proof) and fix the base $s$.
By Lemma~\ref{3.12}, for each real $\delta>0$ and integer $\ell>0$ there is
$k_0$ such that $\ell/k_0<\delta$ and for all $k\geq k_0$
\[
\card\Bigl\{v\in \digits{s}{k}: C(\ell,v)<\delta \Bigr\}>(1-\delta)s^k.
\]
Consider such a $k$. The elements $v\in\digits{s}{k}$ are of two types: those good-for-$\ell$
with $C(\ell,v)<\delta$ and the others. By choice of $k$, at least $(1-\delta)s^k$ blocks of length
$k$ are good-for-$\ell$. Let ${\tilde{s}}$ be either $s^k-1$ or $s^k-2$. Now view
$\digits{{\tilde{s}}}{1}$ in base $s$. If ${\tilde{s}}$ is $s^k-1$, then $\digits{{\tilde{s}}}{1}$ lacks
the not-good-for-$\ell$ block of $k$ digits all equal to $s-1$. If ${\tilde{s}}$ is $s^k-2$, then
$\digits{{\tilde{s}}}{1}$ also lacks the not-good-for-$\ell$ block of $k-1$ digits equal to $s-1$ followed by
the final digit $s-2$. So, at least a $(1-\delta)$ fraction of the elements in $\digits{{\tilde{s}}}{1}$ are
good-for-$\ell$ in that they correspond to good blocks of length $k$.
Let $N_0$ be such that for all $N\geq N_0$,
\[
\card\Bigl\{w\in \digits{{\tilde{s}}}{N}: C(1,w)<\delta \Bigr\}>(1-\delta){\tilde{s}}^{\, N}.
\]
Take $N\geq N_0$ and consider a sequence $w$ in $\digits{{\tilde{s}}}{N}$. If $C(1,w)<\delta$,
then each element in $\digits{{\tilde{s}}}{1}$ occurs in $w$ at least $N(1/{\tilde{s}}-\delta)$ times.
Let $w\mapsto w^*$ denote the map that takes
$w\in\digits{{\tilde{s}}}{N}$ to $w^*\in\digits{s}{kN}$ such that $\displaystyle{
\sum_{n=1}^{N}w_{n}(s^k)^{-n}=\sum_{n=1}^{kN} w^*_{n}\,s^{-n}}$. Let $u\in\digits{s}{\ell}$. We obtain the following bounds for $occ(u,w^*)$:
\begin{align*}
occ(u,w^*)
\leq\ & N(1/s^{\ell}+\delta)k+2\ell N+\delta Nk
\\
\leq\ &Nk(1/s^{\ell}+2\delta+2\ell/k). \\
occ(u,w^*)
\geq\ & \sum_{i=0}^{N-1}occ(u,(w^*_{ik+1},\dots,w^*_{ik+k})) \\
\geq\ & {\tilde{s}}(1-\delta) N(1/{\tilde{s}} - \delta) k (1/s^\ell-\delta)
= Nk (1-\delta) (1 - {\tilde{s}} \delta) (1/s^\ell-\delta) \\
\geq\ & Nk (1/s^\ell -\delta - s^k\delta/s^\ell-{\tilde{s}}\delta^3) \\
\geq\ & Nk(1/s^\ell-\delta s^k). \quad\text{(We can assume that $\delta<1/2$.)} \end{align*}
So $C(\ell,w^*)< \delta s^k$. Hence,
$ \card\Bigl\{w\in \digits{{\tilde{s}}}{N}: C(\ell, w^*)< \delta s^k \Bigr\} \geq
(1-\delta) {\tilde{s}}^{\, N}.$
Let $\delta= s^{-k}(\epsilon^2/18)$. Then,
\begin{flalign*}
\card\Bigl\{w\in \digits{{\tilde{s}}}{N}: C(\ell, w^*)< \epsilon^2/18 \Bigr\}
\geq & (1- s^{-k}(\epsilon^2/18)){\tilde{s}}^{\, N}.
\end{flalign*}
In particular, this inequality holds for the minimal $\ell$ satisfying $s^\ell>3/\epsilon.$
Since we may assume that $\epsilon\leq 1$, the quantity $(1- s^{-k}(\epsilon^2/18))$ is at least $1/2$, so we can
apply Lemma~\ref{3.11} to conclude the desired result:
$\displaystyle{
\card\Bigl\{w\in \digits{{\tilde{s}}}{N}: D(\expa{s^j\eta_w}: 0\leq j< kN)<\epsilon\Bigr\} >
(1/2) {\tilde{s}}^{\, N}.}$ \end{proof}
\subsection{Schmidt's Lemmas}
Lemma~\ref{3.17} is our analytic tool to control discrepancy for multiplicatively independent bases. It originates in \cite{Sch61}. Our proof adapts the version given in \cite{Pol81}.
\addtocounter{footnote}{1}
\begin{lemma}[Hilfssatz~5, \cite{Sch61}]\label{3.15}
Suppose that $r$ and $s$ are multiplicatively independent bases.
There is a constant $c$, with $0<c<1/2$, depending only on $r$ and $s,$ such that for all natural
numbers $N$, $K$ and $\ell$ with $\ell\geq s^K$, \[
\sum_{n=0}^{N-1}\prod_{k=K+1}^\infty |\cos(\pi r^n\ell/s^k)|\leq 2 N^{1-c}.
\]
Furthermore, $c$ is a computable function of $r$ and $s$.\footnote{Actually, Schmidt asserts the computability of $c$ in separate paragraph (page 309 in the same article): ``Wir stellen zun\"achst fest, da\ss man mit etwas mehr M\"uhe Konstanten $a_{20}(r, s)$ aus Hilfssatz~5 explizit berechnen k\"onnte, und da\ss\ dann $\xi$ eine eindeutig definierte Zahl ist.''} \end{lemma}
\begin{definition}\label{3.16} $ \displaystyle{ A(\xi,R,T,a,\ell)=\sum_{t\in T}\;\sum_{r\in R}\;
\Bigl|\sum_{j=\base{a}{r}+1}^{\base{a+\ell}{r}} e(r^j t \xi)\Bigr|^2.}$ \end{definition}
\begin{lemma}\label{3.17}
Let $R$ be a finite set of bases, $T$ be a finite set of non-zero integers and $a$ be a
non-negative integer. Let $s$ be a base multiplicatively independent to the elements of $R$ and
let $c(R,s)$ be the minimum of the constants $c$ in Lemma~\ref{3.15} for pairs $r,s$ with $r\in
R$. Let ${\tilde{s}}$ be $s-1$ if $s$ is odd and be $s-2$ if $s$ is even. Let $\eta$ be $s$-adic
with precision $\base{a}{s}$. For $v\in \digits{{\tilde{s}}}{N}$ let $\eta_v$ denote the rational
number $\eta+s^{-\base{a}{s}}\sum_{j=1}^N v_js^{-j}$. There is a length $\ell_0$ such that for
all $\ell\geq \ell_0$, there are at least $(1/2) {\tilde{s}}^{\base{a+\ell}{s}-\base{a}{s}}$ numbers
$\eta_v$ such that $A(\eta_v,R,T,a,\ell)\leq \ell\,^{2-c(R,s)/4}$. Furthermore, $\ell_0$ is a
computable function of $R$, $T$ and $s$. \end{lemma}
\begin{proof}
We abbreviate $A(x,R,T,a,\ell)$ by $A(x)$, abbreviate $(a+\ell)$ by $b$
and $\digits{{\tilde{s}}}{\base{b}{s}-\base{a}{s}}$ by~${\mathcal{L}}$. To provide the needed $\ell_0$ we will estimate the mean value of $A(x)$ on the set of numbers~$\eta_v$. We need an upper bound for \[
\sum_{v\in {\mathcal{L}}}A(\eta_v)=\sum_{v\in {\mathcal{L}}}\;\sum_{t\in T}\;\sum_{r\in R}\;
\Bigl|\sum_{j=\base{a}{r}+1}^{\base{b}{r}} e(r^j t \eta_v)\,\Bigr|^2 =\sum_{v\in {\mathcal{L}}}\;\sum_{t\in T}\;\sum_{r\in R}\;
\sum_{g=\base{a}{r}+1}^{\base{b}{r}}
\sum_{j=\base{a}{r}+1}^{\base{b}{r}}
e((r^j-r^g)t\eta_v). \] Our main tool is Lemma~\ref{3.15}, but it does not apply to all of the terms in this sum. So we will split $A(x)$ into two smaller sums, $B(x)$ and $C(x)$,
so that a straightforward analysis applies to the first, and Lemma~\ref{3.15} applies to the other. Let $p$ be the least integer satisfying the following conditions: for each $t\in T$ and each $r\in R$, $r^{p-1} \geq 2|t|$,
and for each $r\in R$, $r^p\geq s^2+1$. {\everymath={\displaystyle} \begin{align*}
B(x)=\sum_{t\in T}\sum_{r\in R}
\left(
\begin{array}{ll} \sum_{g=\base{b}{r}-p+1}^{\base{b}{r}} \sum_{j=\base{a}{r}+1}^{\base{b}{r}} e((r^j-r^g)t x)&+\\ \sum_{g=\base{a}{r}+1}^{\base{b}{r}} \sum_{j=\base{b}{r}-p+1}^{\base{b}{r}} e((r^j-r^g)t x)&+\\ \sum_{g=\base{a}{r}+1}^{\base{b}{r}}
\sum_{\substack{j=\base{a}{r}+1\\ |g-j|<p}}^{\base{b}{r}} e((r^j-r^g)t x).
\end{array} \right) \end{align*} } Assume for each $r\in R$, $\ell\geq \log r$ and $\ell\geq (8p\log s)^2$
(and recall, $b=a+\ell$.) We obtain the following bounds. The first inequality uses that each term in the explicit definition of $B(x)$ has norm less than or equal to $1$. The second uses the assumed conditions on $\ell$ and the last inequality uses that $c(R,s)< 1/2$ as ensured by Lemma~\ref{3.15}. \begin{align}
|B(x)|&\leq \sum_{t\in T} \sum_{r\in R}4p(\base{b}{r}-\base{a}{r})
\leq {\card}T\, {\card}R \;4\, p\, (2\log s/\log r) (\base{b}{s}-\base{a}{s})\nonumber\\
&\leq {\card}T\,{\card}R\; (\base{b}{s}-\base{a}{s})^{3/2} \nonumber\\
&\leq {\card}T\,{\card}R\; (\base{b}{s}-\base{a}{s})^{2-c(R,s)/2}.\nonumber \end{align} Thus, $\displaystyle{\sum_{v\in {\mathcal{L}}}B(\eta_v)\leq {\card}T\,{\card}R\; (\base{b}{s}-\base{a}{s})^{2-c(R,s)/2}\,{\tilde{s}}^{\base{b}{s}-\base{a}{s}}}$. We estimate $\sum_{v\in {\mathcal{L}}}C(\eta_v)$, where \[ C(x) =\sum_{t\in T}\sum_{r\in R} \sum_{g=\base{a}{r}+1}^{\base{b}{r}-p}\;\; \sum_{\substack{j=\base{a}{r}+1\\
|j-g|\geq p}}^{\base{b}{r}-p} e((r^j-r^g)t x). \] We will rewrite $C(x)$ conveniently. We start by rewriting $\sum_{v\in {\mathcal{L}}}A(\eta_v)$.
\begin{align*}
\sum_{v\in {\mathcal{L}}}A(\eta_v)
=&\sum_{t\in T}\;\sum_{r\in R}\;\sum_{v\in {\mathcal{L}}}\;
\sum_{g=\base{a}{r}+1}^{\base{b}{r}}
\sum_{j=\base{a}{r}+1}^{\base{b}{r}}
e((r^j-r^g)t\eta_v)
\\ =&
\sum_{t\in T}\;\sum_{r\in R}\;
\sum_{j=\base{a}{r}+1}^{\base{b}{r}}
\sum_{g=\base{a}{r}+1}^{\base{b}{r}}\;
\sum_{v\in {\mathcal{L}}}\;
e((r^j-r^g)t\eta_v).
\end{align*} For fixed $j$ and $g$, we have the following identity. \[ \sum_{v\in {\mathcal{L}}}\;\; e((r^j-r^g)t\eta_v)= e((r^j-r^g)t\eta)\prod_{k=\base{a}{s}+1}^{\base{b}{s}}\Bigl(1+e\Bigl(\frac{t(r^j-r^g)}{s^k}\Bigr)+\dots+e\Bigl(\frac{({\tilde{s}}-1) t(r^j-r^g)}{s^k}\Bigr) \Bigr). \] Since $v\in{\mathcal{L}}= \digits{{\tilde{s}}}{\base{b}{s}-\base{a}{s}}$, the digits in $v$ are in $\{0,\dots,{\tilde{s}}-1\}$. Thus, \[
\Bigl|\sum_{v\in {\mathcal{L}}} A(\eta_v)\Bigr|\leq \sum_{t\in T}\sum_{r\in R} \sum_{j=\base{a}{r}+1}^{\base{b}{r}}\;\; \sum_{g=\base{a}{r}+1}^{\base{b}{r}}\;\;
\prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;\Bigl|\; \sum_{d=0}^{{\tilde{s}}-1} e\Bigl(\frac{d t(r^j-r^g)}{s^k}\Bigr)
\;\Bigr| \] and \[
\Bigl|\sum_{v\in {\mathcal{L}}} C(\eta_v)\Bigr|\leq\sum_{t\in T}\sum_{r\in R} \sum_{j=\base{a}{r}+1}^{\base{b}{r}-p} \sum_{\substack{g=\base{a}{r}+1\\
|j-g|\geq p}}^{\base{b}{r}-p }\;\; \prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;
\Bigl|\; \sum_{d=0}^{{\tilde{s}}-1} e\Bigl(\frac{d t(r^j-r^g)}{s^k}\Bigr)
\;\Bigr|. \]
\noindent Since $|\sum_x e(x)|=|\sum_x e(-x)|$, we can bound the sums over $g$ and $j$ as follows. \begin{equation}
\nonumber
\Bigl|\sum_{v\in {\mathcal{L}}} C(\eta_v)\Bigr|\leq 2 \sum_{t\in T}\sum_{r\in R} \sum_{j=p}^{\base{b}{r}-\base{a}{r}-p}\;\; \sum_{g=1}^{\base{b}{r}-\base{a}{r}-p-j}\;\; \prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;
\Bigl|\; \sum_{d=0}^{{\tilde{s}}-1} e\Bigl(\frac{d tr^{\base{a}{r}}r^g(r^{j}-1)}{s^k}\Bigr)
\;\Bigr|. \end{equation} Let $L=(r^j-1) r^{\base{a}{r}}t$. The following bounds related to $L$ are ensured by the choice of $p$. Let $T_{\mbox{max}}$ be the maximum of the absolute values of the elements of $T$. \begin{align*}
|L|r^gs^{-\base{b}{s}}&= (r^j -1)r^{\base{a}{r}} |t|\, r^g s^{-\base{b}{s}}\\
&\leq r^j r^{\base{a}{r}} |t|\, r^{\base{b}{r} - \base{a}{r} - p - j} s^{-\base{b}{s}} =
|t|\, r^{\base{b}{r}-p} s^{-\base{b}{s}}\\
&\leq T_{\mbox{max}} \; r^{\ceil{b/\log r}} \; s^{-\ceil{b/\log s}} r^{-p}\\
&\leq T_{\mbox{max}} \; r^{1-p}\\
&\leq 1/2 \qquad\mbox{(an ensured condition on $p$)}. \end{align*} We give a lower bound on the absolute value of $L$. \begin{align*}
|L|&\geq (r^p-1)r^{\base{a}{r}} = (r^p-1)r^{\ceil{a/\log r}}\\
&\geq (r^p-1) s^{a/\log s}\\
&\geq s^{2+a/\log s} \qquad\mbox{(an ensured condition on $p$)}\\ \phantom{ Lr^gs^{-\base{b}{s}}}
&\geq s^{\base{a}{s} +1}. \end{align*} Below, we use
$\Bigl|\sum_{d=0}^{{\tilde{s}}-1} e({dx}) \Bigr|\leq ({\tilde{s}}/2) \ |1+e(x)|$; notice that the coefficient ${\tilde{s}}/2$ is an integer (note to the curious reader: this is the only reason that ${\tilde{s}}$ is required to be even). \begin{align*} \sum_{g=1}^{\base{b}{r}-\base{a}{r}-p-j} \prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;
\Bigl|\; \sum_{d=0}^{{\tilde{s}}-1} e(d L r^gs^{-k})
\;\Bigr| &\leq \sum_{g=1}^{\base{b}{r}-\base{a}{r}-p-j} \prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;
\frac{{\tilde{s}}}{2}\, \Bigl|1+e\Bigl(r^g L s^{-k}\Bigr) \Bigr| \end{align*} which, by the double angle identities, is at most $\displaystyle{ {\tilde{s}}^{\base{b}{s} - \base{a}{s}} \sum_{g=1}^{\base{b}{r}-\base{a}{r}-p-j} \prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;
|\cos(\pi L r^g s^{-k})|.}$ \newline If $k= \base{b}{s}+i$ with $i\geq 1$, then $|L| r^g s^{-k}\leq 2^{-(i+1)}$. Therefore, $\displaystyle{
\prod_{k=\base{b}{s}+1}^\infty |\cos(\pi L r^g s^{-k})|\geq \prod_{i=1}^\infty |\cos(\pi 2^{-(i+1)})|}$, where the right hand side is a positive constant. Then, \begin{align*}
\prod_{k=\base{a}{s}+1}^{\base{b}{s}} |\cos(\pi L r^g s^{-k})| &=
\prod_{k=\base{a}{s}+1}^\infty |\cos(\pi L r^g s^{-k})|
\;\;\left(\prod_{k=\base{b}{s}+1}^\infty |\cos(\pi L r^g s^{-k})|\right)^{-1} \end{align*} which, for the appropriate constant $\mbox{\it\~c}$, is at most
$\displaystyle{ \mbox{\it\~c} \;\prod_{k=\base{a}{s}+1}^\infty |\cos(\pi L r^g s^{-k})|}$. \newline We can apply Lemma~\ref{3.15}: \begin{align*} \sum_{g=1}^{\base{b}{r}-\base{a}{r}-p-j} \prod_{k=\base{a}{s}+1}^{\base{b}{s}}\;
\Bigl|\; \sum_{d=0}^{{\tilde{s}}-1} e(d L r^gs^{-k})
\;\Bigr| &\leq \sum_{g=1}^{\base{b}{r}-\base{a}{r}-p-j}
\mbox{\it\~c} \;\prod_{k=\base{a}{s}+1}^\infty |\cos(\pi L r^g s^{-k})|\\ &\leq 2\mbox{\it\~c}({\base{b}{r}-\base{a}{r}})^{1-c(R,s)}. \end{align*} \begin{align*}
\Bigl|\sum_{v\in {\mathcal{L}}} C(\eta_v)\Bigr|&\leq 2 \sum_{t\in T}\sum_{r\in R} \sum_{j=p}^{\base{b}{r}-\base{a}{r}-p}\;\; {\tilde{s}}^{\base{b}{s}-\base{a}{s}}\; 2\mbox{\it\~c} ({\base{b}{s}-\base{a}{s}})^{1-c(R,s)}\\ &\leq 4\mbox{\it\~c}\; {\card} T\;{\card} R\;({\base{b}{s}-\base{a}{s}})^{2-c(R,s)}\; {\tilde{s}}^{\base{b}{s}-\base{a}{s}}. \end{align*}
Combining this with the estimate for $|\sum_{v\in{\mathcal{L}}} B(\eta_v)|$, we have \[
|\sum_{v\in{\mathcal{L}}} A(\eta_v)|\leq 4\mbox{\it\~c}\; {\card}T {\card}R (\base{b}{s}-\base{a}{s})^{2-c(R,s)}{\tilde{s}}^{\base{b}{s}-\base{a}{s}}. \] Therefore, the number of $v\in{\mathcal{L}}$ such that $\displaystyle{A(\eta_v)> 4\mbox{\it\~c}\; {\card}T\; {\card}R (\base{b}{s}-\base{a}{s})^{2-c(R,s)/2}}$ is at most $(\base{b}{s}-\base{a}{s})^{-c(R,s)/2}\; {\tilde{s}}^{\base{b}{s}-\base{a}{s}}.$ If $\ell>(2^{2/c(R,s)}+1)\log s$ and $\ell>(16\mbox{\it\~c}\card{T}\card{R})^{4/c(R,s)}$ then $(\base{b}{s}-\base{a}{s})^{-c(R,s)/2}<1/2$. So, there are at least $(1/2){\tilde{s}}^{(\base{b}{s}-\base{a}{s})}$ members $v\in{\mathcal{L}}$ for which \begin{align*}
A(\eta_v)&\leq 4\mbox{\it\~c}\; {\card}T\, {\card}R (\base{b}{s}-\base{a}{s})^{2-c(R,s)/2}\\
&\leq 4\mbox{\it\~c}\; {\card}T\, {\card}R (2\ell)^{2-c(R,s)/2}\\
&\leq \ell\,^{2-c(R,s)/4}. \end{align*} This proves the lemma for $\ell_0$ equal to the least integer greater than $(2^{2/c(R,s)}+1)\log s$, $(16\mbox{\it\~c}\card{T}\card{R})^{4/c(R,s)}$, $(8p\log s)^2$ and $\max\{\log r : r\in R\}$. \end{proof}
\subsection{On Changing Bases}
\begin{lemma}\label{3.18}
For any interval $I$ and base $s$, there is an $s$-adic subinterval $I_s$ such that
$\measure{I_s}\geq \measure{I}/(2s)$ . \end{lemma}
\begin{proof}
Let $m$ be least such that $1/s^m<\measure{I}$.
Note that $1/{s^m}\geq \measure{I}/ {s}$, since $1/s^{m-1}\geq\measure{I}$.
If there is an $s$-adic interval of length $1/ {s^m}$ strictly contained in $I$,
then let $I_s$ be such an interval, and note that $I_s$ has
length greater than or equal to $\measure{I}/{s}$.
Otherwise, there must be an $a$ such that
$a/s^m$ is in $I$ and neither $(a-1)/s^m$ nor $(a+1)/s^m$ belongs to $I$.
Thus, $2/s^m>\measure{I}$.
However, since $1/s^m < \measure{I}$ and $s\geq 2$, we have
$2/s^{m+1}\leq 1/s^m<\measure{I}$.
So, at least one of the two intervals
$\displaystyle{\left[\frac{sa-1}{s^{m+1}}, \frac{sa}{s^{m+1} }\right)}$ or
$\displaystyle{\left[\frac{sa}{s^{m+1} }, \frac{sa+1}{s^{m+1} }\right)}$ must be contained in $I$.
Let $I_s$ be such.
Then, $\measure{I_s}$ is $\displaystyle{\frac{1}{s^{m+1}}=\frac{1}{2s}\frac{2}{s^m}\ >\ \measure{I}/(2s).}$
In either case, the length of $I_s$ is greater than $\measure{I}/(2s)$. \end{proof}
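To illustrate the second case of the proof (an added example): let $s=2$ and $I=[1/10,2/5)$, so that $\measure{I}=3/10$. The least $m$ with $1/2^m<\measure{I}$ is $m=2$, and no dyadic interval of length $1/4$ is strictly contained in $I$. Here $a=1$: we have $1/4\in I$, while $0\notin I$ and $1/2\notin I$. Both
\[
\left[\tfrac18,\tfrac14\right)\quad\mbox{and}\quad\left[\tfrac14,\tfrac38\right)
\]
are contained in $I$, and either can serve as $I_2$, with $\measure{I_2}=1/8>\measure{I}/(2s)=3/40$.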
\begin{lemma}\label{3.19}
Let $s_0$ and $s_1$ be bases and suppose that $I$ is an $s_0$-adic interval of length
$s_0^{-\base{b}{s_0}}$. For $a=b+\ceil{\log s_0 + 3\log s_1}$, there is an $s_1$-adic subinterval of
$I$ of length $s_1^{-\base{a}{s_1}}$.
\end{lemma}
\begin{proof} By the proof of Lemma~\ref{3.18}, there is an $s_1$-adic subinterval of $I$ of length $s_1^{-(\ceil{-\log_{s_1}(\measure{I})}+1)}$: \begin{align*}
\ceil{-\log_{s_1}(\measure{I})}+1&= \ceil{-\log_{s_1}(s_0^{-\base{b}{s_0}})}+1 =\ceil{
{\base{b}{s_0}\log s_0 }/{\log s_1}
}+1\\
&\leq \ceil{ b/\log s_1+ \log s_0/\log s_1}+1\\
&\leq \base{b}{s_1} +\ceil{\log s_0/\log s_1} +1. \end{align*}
Thus, there is an $s_1$-adic subinterval of $I$ of length $s_1^{-(\base{b}{s_1} +\ceil{\log s_0/\log s_1} +1)}$. Consider $a=b+\ceil{\log s_0 + 3\log s_1}$. Then \begin{align*}
\base{a}{s_1} &= \ceil{a/\log s_1} = \ceil{{b+\ceil{\log s_0 + 3\log s_1}}/{\log s_1}}\\
&\geq b/\log s_1 +(\log s_0+3\log s_1)/\log s_1\\
&\geq \base{b}{s_1} +\ceil{\log s_0/\log s_1} +1. \end{align*} This inequality is sufficient to prove the lemma. \end{proof}
The next observation is by direct substitution. We will use it in the proofs of the theorems.
\begin{remark}\label{3.20}
Suppose that $r$, $s_0$ and $s_1$ are bases. Let $b$ be a positive integer and let
$a=b+\ceil{\log s_0 + 3\log s_1}$. Then,
$\base{a}{r}-\base{b}{r}\leq \ceil{\log s_0 + 3\log s_1}/\log r+1.$ Hence,\linebreak
$\base{a}{r}-\base{b}{r}\leq 2\ceil{\log s_0 + 3\log s_1}.$
\end{remark}
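The direct substitution behind Remark~\ref{3.20} can be spelled out (note that the second inequality of the remark uses $\log r\geq 1$, that is, $\log$ is the base-$2$ logarithm and $r\geq 2$). Since $\ceil{x}-\ceil{y}\leq\ceil{x-y}$ and $a-b=\ceil{\log s_0+3\log s_1}$,
\[
\base{a}{r}-\base{b}{r}\leq\ceil{(a-b)/\log r}\leq \ceil{\log s_0 + 3\log s_1}/\log r+1,
\]
and since $\log r\geq 1$ and $\ceil{\log s_0+3\log s_1}\geq 1$, the right hand side is at most $2\ceil{\log s_0 + 3\log s_1}$.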
\section{Proofs of Theorems}
\subsection{Tools}
\begin{notation} Let $M$ be the set of minimal representatives of the multiplicative dependence equivalence classes. Let $p(s_0,s_1)=2\ceil{\log s_0+3\log s_1}$. \end{notation}
\begin{definition}\label{4.1}
Let $T$ and $\delta$ be as defined in Lemma~\ref{3.8} for input $(\epsilon/10)^4$. Let $\ell$ be the function with inputs $R$, $s$, $k$, $\epsilon$ and value the least integer greater than all of the following: \begin{itemize} \item The maximum of $\ell_0$ as defined in Lemma~\ref{3.9} over all inputs $r$ in $R$
and $\epsilon$ as given. \item $N_0$ as defined in Lemma~\ref{3.14} for inputs $s$, $k$ and $\epsilon$. \item $\ell_0$ as defined in Lemma~\ref{3.17} for inputs $R$, $T$ and $s^k$. \item $\left((\log r)^2/\delta\right)^{4/c(R,s^k)}$ for $c(R,s^k)$ the minimum of the constants of Lemma~\ref{3.15} for pairs $r,s$ with $r\in R$. \end{itemize} \end{definition}
\subsection{Proof of Theorem \ref{2}}
\begin{theoremD}
For any $\Pi^0_3$ subset $R$ of $M$ there is a computable real number $\xi$ such that for all
$r\in M$, $r\in R$ if and only if $\xi$ is normal to base $r$. Furthermore, $\xi$ is computable
uniformly in the $\Pi^0_3$ formula that defines $R$. \end{theoremD}
Note that $m\in M$ if and only if there is no $n$ less than $m$ such that $m$ is an integer power of~$n$, an arithmetic condition expressed using only bounded quantification. Let $\varphi=\forall x\exists y\forall z\theta$ be a $\Pi^0_3$ formula with one free variable. We will construct a real number $\xi$ so that for every base $r$, $\xi$ is normal to base $r$ if and only if $\varphi(r)$ is true. The normality of $\xi$ to base $r$ is naturally expressed using three quantifiers: $\forall\epsilon\exists n\forall N\geq n\ D(\expa{r^k\xi}: 0\leq k<N)<\epsilon$. Lemma \ref{3.1} shows that the discrepancy $D$
admits computable approximations (using finite partitions of the unit interval). Thus, the normality of $\xi$ to base $r$ is a $\Pi^0_3$ formula. In our construction, we will bind the quantified variables in $\varphi(r)$ to those in the formula for normality. The variable $x$ will correspond to $\epsilon$, $y$ to $n$ and $z$ to $N$.
We define a sequence $\xi_m$, $b_m$, $s_m$, $k_m$, $\epsilon_m$, $\ell_m$, $x_m$, $R_m$ and $c_m$ by stages. $\xi_m$ is an $s_m^{k_m}$-adic rational number of precision $\base{b_m}{s_m^{k_m}}$. $b_m$ and $k_m$ are positive integers. $s_m$ is a base. $R_m$ is a finite set of bases. The real $\xi$ will be an element of $[\xi_m,\xi_m+(s_m^{k_m})^{-\base{b_m}{s_m^{k_m}}})$. Stage $m+1$ is devoted to extending $\xi_{m}$ so that the discrepancy of the extended part in base $s_{m+1}$ is below $1/x_m$ and above $1/(2s_{m+1}^{k_{m+1}})$, and so that the discrepancy of the extension for the other bases under consideration is below $\epsilon_{m+1}$. $\ell_{m+1}$ is used to determine the length of the extension and $c_{m+1}$ is an integer used to monitor $\varphi$ and set bounds on discrepancy. Fix an enumeration of $M$ such that every element of $M$ appears infinitely often.
\noindent\emph{Initial stage.} Let $\xi_0=0$, $b_0=1$, $s_0=3$, $k_0=1$, $\epsilon_0=1$, $\ell_0=1$, $x_0=1$ and $c_0=1$.
\noindent{\em Stage $m+1$.\ } Given $\xi_{m}$ of the form $\sum_{j=1}^{\base{b_{m}}{s_m^{k_{m}}}} v_j(s_m^{k_{m}})^{-j}$, $b_{m}$, $s_{m}$, $k_{m}$, $\epsilon_{m}$, $\ell_m$, $x_m$, $R_m$ and $c_m$.
(1) Let $F$ be the canonical partition of $[0,1]$ into intervals of length $(1/3)(1/4)s_m^{-k_m}$. If~$D(F,(\expa{s_m^j\xi_m}:0\leq j< \base{b_m}{s_m}))<((1/3)(1/4)s_m^{-k_m})^2$, then let $s_{m+1}$ be $s_m$, $k_{m+1}$ be $k_m$, $\epsilon_{m+1}$ be $\epsilon_m$, $\ell_{m+1}$ be $\ell_m$, $x_{m+1}$ be $x_m$, $R_{m+1}$ be $R_m$ and $c_{m+1}$ be $c_m$.
(2) Otherwise, let $c$ be $c_m+1.$ Let $s$ be the $c$th element in the enumeration of $M$. Let $n$ be maximal less than $c$ such that $s$ is also the $n$th element in the enumeration of $M$, or be 0 if $s$ appears for the first time at $c$. Take $x$ to be minimal such that there is a $y$ less than $n$ satisfying $\forall z<n\,\varphi(s,x,y,z)$ and $\exists z<c\,\neg\varphi(s,x,y,z)$. If there is no such $x$, then set $x$ equal to $c$. Let $k$ and $N$ be as defined in Lemma~\ref{3.14} for input $\epsilon=1/x$ and base $s$. Let $R$ be the set of bases not equal to $s$ which appear in the enumeration of $M$ at positions less than $c$. Let $L$ be the least integer greater than $\max\{x, c , 2 s^{k}\} \log(\max(R\cup\{s\})) p(s_m,s)$, $N$, and $\ell(R,s,k,1/c)$. If for some $r\in R$, $(1/c) \base{b_m}{r}\leq L+p(s_m,s)$ or $(1/x) \base{b_m}{s}\leq L+p(s_m,s)$ then let $s_{m+1}$ be $s_m$,
$k_{m+1}$ be $k_m$,
$\epsilon_{m+1}$ be $\epsilon_m$,
$\ell_{m+1}$ be $\ell_m$,
$x_{m+1}$ be $x_m$,
$R_{m+1}$ be $R_m$ and
$c_{m+1}$ be $c_m$.
(3) Otherwise, let $s_{m+1}$ be $s$, $k_{m+1}$ be $k$, $\epsilon_{m+1}$ be $1/c$, $\ell_{m+1}$ be $L$, $x_{m+1}$ be $x$, $R_{m+1}$ be $R$ and $c_{m+1}$ be $c$.
Let $a_{m+1}$ be minimal such that there is an $s_{m+1}^{k_{m+1}}$-adic subinterval of $[\xi_{m},\xi_{m}+(s_{m}^{k_m})^{-\base{b_{m}}{s_{m}^{k_m}}})$ of length $(s_{m+1}^{k_{m+1}})^{-\base{a_{m+1}}{s_{m+1}^{k_{m+1}}}}$ and let $[\eta_{m+1},\eta_{m+1}+(s_{m+1}^{k_{m+1}})^{-\base{a_{m+1}}{s_{m+1}^{k_{m+1}}}})$ be the leftmost such. Let ${\tilde{s}}$ be $s_{m+1}^{k_{m+1}}-1$ if $s_{m+1}$ is odd and be $s_{m+1}^{k_{m+1}}-2$ otherwise. Let $T$ and $\delta$ be as defined in Lemma~\ref{3.8} for input $\epsilon=(\epsilon_{m+1}/10)^4$. Let~$b_{m+1}$ be $a_{m+1}+\ell_{m+1}$. Let $\nu$ be such that \begin{itemize} \item $\displaystyle{
\nu=\eta_{m+1}+\sum_{j=\base{a_{m+1}}{s_{m+1}^{k_{m+1}}}+1}^{\base{b_{m+1}}{s_{m+1}^{k_{m+1}}}} w_j(s_{m+1}^{k_{m+1}})^{-j}}$,
for some $(w_1,\dots,w_{\ell_{m+1}})$ in $\digits{{\tilde{s}}}{\ell_{m+1}}$.
\item $A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1})/\base{\ell_{m+1}}{\max(R_{m+1})}^2<\delta$.
\item $\nu$ minimizes
$D(F,(\expa{s_{m+1}^j\nu}:\base{a_{m+1}}{s_{m+1}}<
j\leq \base{b_{m+1}}{s_{m+1}}))$ among the $\nu$ satisfying the first two conditions, for $F$ as
defined in clause (1). If there is more than one minimizer, take the least such for $\nu$. \end{itemize} \nopagebreak[4] We define $\xi_{{m+1}}$ to be $\nu$. This ends the description of stage $m+1$.
We verify that the construction succeeds. Let $m+1$ be a stage. If clause (1) or (2) applies, let $m_0$ be the greatest stage less than or equal to $m+1$ such that $c_{m_0}=c_{m_0+1}=\cdots=c_{m+1}$. During stage $m_0$, $k_{m_0}$ and $\ell_{m_0}$ were chosen to satisfy the conditions to reach clause (3). Note that since $b_m>b_{m_0}$ the conditions in clause (2) apply to $b_m$ in place of $b_{m_0}$:
$(1/c_{m+1}) b_m > \ell_{m+1} +p(s_{m_0-1},s_{m+1})$ and $(1/x_{m+1}) b_m > \ell_{m+1} +p(s_{m_0-1},s_{m+1})$. Then, $\ell_{m+1}$ is greater than $\max\{x_{m+1}, c_{m+1}, 2 s_{m+1}^{k_{m+1}}\} \log(\max(R_{m+1}\cup\{s_{m+1}\})) p(s_{m_0-1},s_{m+1})$, $N$, and $\ell(R_{m+1},s_{m+1},k_{m+1},1/c_{m+1})$, where $N$ is determined during stage $m_0$. If clause (3) applies, then the analogous conditions hold by construction.
Stage $m+1$ determines the $s_{m+1}^{k_{m+1}}$-adic subinterval $[\eta_{m+1},\eta_{m+1}+(s_{m+1}^{k_{m+1}})^{-\base{a_{m+1}}{s_{m+1}^{k_{m+1}}}})$ of the interval provided at the end of stage $m$. The existence of this subinterval is ensured by Lemma~\ref{3.19}. The stage ends by selecting the rational number $\nu$. The existence of an appropriate $\nu$ is ensured by Lemma~\ref{3.17} with the inputs given by the construction. It follows that $\xi$ is well defined as the limit of the $\xi_{m}$.
Let $s$ be a base that appears in the enumeration of $M$ at or before $c_{m+1}$. There are two possibilities for $s$ during stage $m+1$: either it is an element of $R_{m+1}$ or it is equal to $s_{m+1}$. Suppose first that $s\in R_{m+1}$. $\xi_{m+1} = \nu$ was chosen so that $A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1})/\base{\ell_{m+1}}{s}^2<\delta$. By~Definition~\ref{3.16},
$\displaystyle{ A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1}) }$ is equal to $\displaystyle{\sum_{t\in T}\;\sum_{r\in R_{m+1}}\; \Bigl|\sum_{j=\base{a_{m+1}}{r}+1}^{\base{b_{m+1}}{r}}
e(r^j t \nu)\Bigr|^2 }$. Hence,
$\displaystyle{(1/\base{\ell_{m+1}}{s}^2) \sum_{t\in T}\,\Bigl|\sum_{j=\base{a_{m+1}}{s}+1}^{\base{b_{m+1}}{s}} e(s^j t \nu)\Bigr|^2<\delta. }$ By choice of $T$ and $\delta$, Lemma~\ref{3.8} ensures \[ D(s^j\nu:\base{a_{m+1}}{s}<j\leq \base{b_{m+1}}{s})<(\epsilon_{m+1}/10)^4. \] By definition of $\xi$, $\xi \in[\nu,\nu+(s_{m+1}^{k_{m+1}})^{-\base{b_{m+1}}{s_{m+1}^{k_{m+1}}}})$. By Lemma~\ref{3.9}, we conclude that \[ D(s^j\xi:\base{a_{m+1}}{s}< j\leq \base{b_{m+1}}{s})<\epsilon_{m+1}. \] By Remark~\ref{3.20}, $\base{a_{m+1}}{s}-\base{b_m}{s}$ is less than or equal to $p(s_m,s_{m+1})$. By construction, $\epsilon_{m+1}\,\base{\ell_{m+1}}{s}$ is greater than $p(s_m,s_{m+1})$. By Lemma~\ref{3.3}, \[ D(s^j\xi:\base{b_{m}}{s}< j\leq \base{b_{m+1}}{s})<2\epsilon_{m+1}. \]
The second possibility is that $s$ is equal to $s_{m+1}$. Again, consider the selection of the rational number $\nu$ during stage $m+1$. By Lemma~\ref{3.17}, more than half of the eligible candidates satisfy the inequality $A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1})/\base{\ell_{m+1}}{\max(R_{m+1})}^2<\delta$. By Lemma~\ref{3.14}, more than half the candidates satisfy \[ D(\expa{s^j\nu}:\base{a_{m+1}}{s}< j\leq \base{b_{m+1}}{s})<1/x_{m+1}. \] By choice of $\xi_{m+1}$, \[ D(F,(\expa{s^j\xi_{m+1}}:\base{a_{m+1}}{s}< j\leq \base{b_{m+1}}{s}))<1/x_{m+1}. \] By Lemma~\ref{3.1}, \[ D(\expa{s^j\xi_{m+1}}:\base{a_{m+1}}{s}< j\leq \base{b_{m+1}}{s})<3 x_{m+1}^{-1/2}. \] By construction, $(1/x_{m+1}) \base{\ell_{m+1}}{s}$ is greater than $p(s_m,s)$ and so, as above, \[ D(\expa{s^j\xi_{m+1}}:\base{b_{m}}{s}< j\leq \base{b_{m+1}}{s})<6 x_{m+1}^{-1/2}. \] By Lemma~\ref{3.9}, \[ D(\expa{s^j\xi}:\base{b_{m}}{s}< j\leq \base{b_{m+1}}{s})<10(6x_{m+1}^{-1/2})^{1/4}. \] Further, $\xi_{{m+1}}$ is chosen so that in $(\expa{(s^{k_{m+1}})^j\xi_{m+1}}:\base{a_{m+1}}{s^{k_{m+1}}}<j\leq\base{b_{m+1}}{s^{k_{m+1}}})$ no element belongs to $[1-s^{-k_{m+1}},1]$. As $\xi\in[\xi_{m+1},\xi_{m+1}+s^{-\base{b_{m+1}}{s}}),$ the same holds for $\xi$. Since $\base{b_{m+1}}{s^{k_{m+1}}}-\base{b_{m}}{s^{k_{m+1}}}>2 s^{k_{m+1}} p(s_m,s)$, Lemma~\ref{3.5} applies and so \[ D(\expa{s^j\xi}:\base{b_{m}}{s}<j\leq\base{b_{m+1}}{s})\geq 1/(2s^{k_{m+1}}). \] Stages subsequent to $m+1$ will satisfy the same inequality until the first stage for which $D(\expa{s_m^j\xi_m}:0\leq j< \base{b_m}{s_m})\geq 1/(4s_m^{k_m})$. By a direct counting argument, there will be such a stage and during that stage clause~(1) cannot apply. Similarly, clause (2) cannot apply for indefinitely many stages, as the values of $b_m$ are unbounded. It follows that $\lim_{m\to\infty}c_{m+1}=\infty$.
If $\varphi(s)$ is true, then for each $x$, there are only finitely many stages during which $s_m=s$ and $x_m=x$. Let $\epsilon$ be greater than $0$. There will be a stage $m_0$ such that for all $m$ greater than $m_0$, $\epsilon>2\epsilon_m$ and, if $s=s_m$ then $\epsilon>10(6x_{m+1}^{-1/2})^{1/4}$. By construction, Lemma~\ref{3.4} applies to conclude $\lim_{N\to\infty}D( \expa{s^j\xi_m}:0\leq j< N)\leq 2\epsilon$. By applying Lemma~\ref{3.9}, we conclude that $\xi$ is normal to base $s$.
If $\varphi(s)$ is not true, then let $x$ be minimal such that $\forall y \exists z\neg\varphi(s,x,y,z)$. There will be infinitely many $m+1$ such that $s=s_{m+1},$ $x=x_{m+1}$ and $k_{m+1}$ is the $k$ associated with $s$ and~$x$. As already discussed, each of these stages will be followed by a later stage $m_1$ such that \[ D(\expa{s^j\xi}:0\leq j< \base{b_{m_1}}{s})\geq 1/(4s^{k}). \] Hence $\xi$ is not normal to base $s$.
\subsection{Proof of Theorem \ref{1}}
\begin{theoremU} (1) The set of indices for computable real numbers which are normal to at least one base is $\Sigma^0_4$-complete. (2) The set of real numbers that are normal to at least one base is $\mathbf \Sigma^0_4$-complete. \end{theoremU}
To prove item (1) we must exhibit a computable function $f$, taking $\Sigma^0_4$ sentences (no free variables) to indices for computable real numbers, such that for any $\Sigma^0_4$ sentence $\psi$, $\psi$ is true in the natural numbers if and only if the computable real number named by $f(\psi)$ is normal to at least one base. Let $\psi$ be a $\Sigma^0_4$ sentence and let $\varphi$ be the $\Pi^0_3$ formula such that $\psi=\exists w\varphi(w)$.
Let $M$ be the set of minimal representatives of the multiplicative dependence equivalence classes and fix the computable enumeration of $M=\{s_1,s_2,\dots\}$ (as in the proof of Theorem~\ref{2}). Consider the $\Pi^0_3$ formula $\varphi^*$ such that $\varphi^*(s_w)$ is equivalent to $\varphi(w)$. By Theorem~\ref{2}, there is a computable real $\xi$ such that for all $s_w$, $\xi$ is normal to base $s_w$ if and only if $\varphi^*(s_w)$ is true, if and only if $\varphi(w)$ is true. Thus, $\xi$ is normal to at least one base if and only if there is a $w$ such that $\varphi(w)$ is true, if and only if $\psi=\exists w\varphi(w)$ is true. In Theorem~\ref{2}, $\xi$ is obtained uniformly from $\varphi^*$, which was obtained uniformly from $\varphi$. The result follows.
For item (2) recall that a subset in ${\mathbb{R}}$ is ${\mathbf\Sigma}^0_4$-complete if it is ${\mathbf\Sigma}^0_4$ and it is hard for ${\mathbf\Sigma}^0_4$. To prove hardness of subsets of ${\mathbb{R}}$ at levels in the Borel hierarchy it is sufficient to consider subsets of Baire space, ${\mathbb{N}}^{\mathbb{N}}$, because there is a continuous function from ${\mathbb{R}}$ to ${\mathbb{N}}^{\mathbb{N}}$ that preserves the levels. The Baire space admits a syntactic representation of the levels of the Borel hierarchy in arithmetical terms. A subset $A$ of ${\mathbb{N}}^{\mathbb{N}}$ is ${\mathbf\Sigma}^0_4$ if and only if there is a parameter $p$ in ${\mathbb{N}}^{\mathbb{N}}$ and a $\Sigma^{p}_4$ formula $\psi(x,p)$, where $x$ is a free variable, such that for all $x\in{\mathbb{N}}^{\mathbb{N}}$, $x\in A$ if and only if $\psi(x,p)$ is true. A subset $B$ of ${\mathbb{R}}$ is hard for ${\mathbf\Sigma}^0_4$ if for every ${\mathbf\Sigma}^0_4$ subset $A$ of ${\mathbb{N}}^{\mathbb{N}}$ there is a continuous function $f$ such that for all $x\in{\mathbb{N}}^{\mathbb{N}}$, $x\in A$ if and only if $f(x)\in B$. Consider a ${\mathbf\Sigma}^0_4$ subset $A$ of the Baire space defined by a $\Sigma^{p}_4$ formula $\psi(x,p)$, where $x$ is a free variable. The same function given for item (1) but now relativized to $x$ and $p$ yields a real number $\xi$ such that $\psi(x,p)$ is true if and only if $\xi$ is normal to at least one base. This gives the required continuous function $f$ satisfying $x\in A$ if and only if $f(x)$ is normal to at least one base.
\subsection{Proof of Theorem \ref{3}}
\begin{theoremCi}
For any $\Pi^0_3$ formula $\varphi$ there is a computable real number $\xi$ such that for any base $r\in M$,
$\varphi(\xi,r)$ is true if and only if $\xi$ is normal to base $r$. \end{theoremCi}
The proof follows from Theorem~\ref{2} by an application of the Kleene Fixed Point Theorem~\cite[see][Chapter~11]{Rog87}. Let $\varphi$ be a $\Pi^0_3$ formula with two free variables, one ranging over ${\mathbb{N}}^{\mathbb{N}}$ and the other ranging over ${\mathbb{N}}$. Let $\Psi_e$ be a computable enumeration of the partial computable functions from ${\mathbb{N}}$ to ${\mathbb{N}}$. The condition ``$\Psi_e$ is a total function and $\varphi(\Psi_e,r)$'' is a $\Pi^0_3$ property of $e$ and $r$. By Theorem~\ref{2}, there is a computable function which on input a $\Pi^0_3$ formula $\theta$ produces a (total) computation of a real $\xi_\theta$ which is normal to base $r\in M$ if and only if $\theta(r)$ is true. In particular, there is a computable function $f$ such that for every $e$, for all $r\in M$, \[ \Psi_e \text{ is a total function and } \varphi(\Psi_e,r) \text{ if and only if } \Psi_{f(e)} \text{ is normal to base $r$.} \] Furthermore, for every $e$, $\Psi_{f(e)}$ is total. By the Kleene Fixed Point Theorem, there is an $e$ such that $\Psi_e$ is equal to $\Psi_{f(e)}.$ For this $e$, for all $r\in M$, \[ \varphi(\Psi_e,r) \text{ if and only if } \Psi_e \text{ is normal to base $r$.} \] Then, $\xi=\Psi_e$ satisfies the condition of the Theorem.
\subsection{Proof of Theorem \ref{4}}
\begin{theoremT}
Fix a base $s$. There is a computable function $f:{\mathbb{N}}\to{\mathbb{Q}}$ monotonically decreasing to $0$ such
that for any function $g:{\mathbb{N}}\to{\mathbb{Q}}$ monotonically decreasing to $0$ there is an absolutely normal
real number $\xi$ whose discrepancy for base $s$ eventually dominates $g$ and whose discrepancy
for each base multiplicatively independent to $s$ is eventually dominated by $f$. Furthermore,
$\xi$ is computable from $g$. \end{theoremT}
Let $s$ be a base. We define sequences $\xi_m$, $b_m$, $k_m$, $\epsilon_m$, $\ell_m$, $R_m$ and ${\overline{k}}_m$ by stages. $b_m$, $k_m$ and ${\overline{k}}_m$ are positive integers, $\epsilon_m$ is a positive rational number and $R_m$ is a finite set of bases multiplicatively independent to $s$. $\xi_m$ is an $s^{k_m}$-adic rational number of precision $\base{b_m}{s^{k_m}}$. The real $\xi$ will be an element of $[\xi_m,\xi_m+(s^{k_m})^{-\base{b_m}{s^{k_m}}})$. Stage $m+1$ is devoted to extending $\xi_{m}$ so that the discrepancy of the extension is below $\epsilon_{m+1}$ for the bases in $R_{m+1}$ and so that the discrepancy of the extension in base $s$ is in a controlled interval above $g$. We use $k_{m+1}$ to enforce the endpoints of this interval. $\ell_m$ determines the length of the extension.
At each stage $m$ the determination of $\xi_{m+1}$ is done so that the discrepancy functions for $\xi$ relative to bases independent to $s$ converge to $0$ uniformly, without reference to the function~$g$. We obtain $f$ as the function bounding these discrepancies by virtue of the construction. The variable ${\overline{k}}_m$ acts as a worst-case surrogate for the exponent of $s$ used in the construction relative to~$g$.
\noindent{\em Initial stage.\ } Let $\xi_0=0$, $b_0=1$, $k_0=1$, $\epsilon_0=1$, $\ell_0=0$, $R_0=\{r_0\}$ where $r_0$ is the least base which is multiplicatively independent to $s$, and ${\overline{k}}_0=1$.
\noindent{\em Stage $m+1$.\ } Given $b_{m}$, $R_{m}$, $\epsilon_{m}$, ${\overline{k}}_{m}$, $k_{m}$ and $\xi_{m}$ of the form $\sum_{j=1}^{\base{b_{m}}{s^{k_{m}}}}
v_j(s^{k_{m}})^{-j}$.
(1) Let $r$ be the least number greater than the maximum element of $R_{m}$ which is
multiplicatively independent to $s$. If
$(\epsilon_m/2)\base{b_{m}}{r}\geq\ell(R_{m}\cup\{r\},s,{\overline{k}}_{m}+1,\epsilon_{m}/2)$ then let
$\epsilon_{m+1}$ be $\epsilon_{m}/2$, let $R_{m+1}$ be $R_{m}\cup\{r\}$ and ${\overline{k}}_{m+1}$ be
${\overline{k}}_{m}+1$. Otherwise, let $\epsilon_{m+1}$ be $\epsilon_{m}$, $R_{m+1}$ be $R_{m}$ and
${\overline{k}}_{m+1}$ be ${\overline{k}}_{m}$. Let $\ell_{m+1}=\ell(R_{m+1},s,{\overline{k}}_{m+1},\epsilon_{m+1})$ and let
$b_{m+1}=b_{m}+\ell_{m+1}$.
(2) Let $k$ and $N$ be as determined by Lemma~\ref{3.14} for the input value $\epsilon=1/(4s^{k_{m}})$. If $(k\leq {\overline{k}}_{m+1})$, $(N\leq\base{\ell_{m+1}}{s})$ and $(1/(2s^k)>g(\base{b_{m}}{s}))$, then let $k_{m+1}$ be $k$. Otherwise, let $k_{m+1}$ be $k_{m}$.
We define $\xi_{{m+1}}$ to be $\nu$, where $\nu$ is determined as follows. Let ${\tilde{s}}$ be $s^{k_{m+1}}-1$ if $s$ is odd and be $s^{k_{m+1}}-2$ if $s$ is even. Let $T$ and $\delta$ be as determined in Lemma~\ref{3.8} with input $\epsilon=(\epsilon_{m+1}/10)^4$. Let $\nu$ be such that
\begin{itemize}
\item $\displaystyle{
\nu=\xi_{m}+\sum_{j=1}^{\base{\ell_{m+1}}{s^{k_{m+1}}}} w_j(s^{k_{m+1}})^{-(\base{b_{m}}{s^{k_{m+1}}}+j)}}$
for some $(w_1,\dots,w_{\base{\ell_{m+1}}{s^{k_{m+1}}}})$ in\\
$\digits{{\tilde{s}}}{\base{\ell_{m+1}}{s^{k_{m+1}}}}$.
\item $A(\nu,R_{m+1},T,b_{m},\ell_{m+1})/\base{\ell_{m+1}}{\max(R_{m+1})}^2<\delta$.
\item $\nu$ minimizes $D(F,(\expa{s^j\nu}:0\leq j<\base{\ell_{m+1}}{s}))$ among the $\nu$
satisfying the first two conditions, where $F$ is the canonical partition of $[0,1]$ into
intervals of length $(1/3)(1/4s^{k_{m+1}})$. If there is more than one minimizer, take the
least such for $\nu$.
\end{itemize}
We define the function $f:{\mathbb{N}}\to{\mathbb{Q}}$ as follows. Given a positive integer $n$, let $m_n$ be such
that $b_{m_n}\leq n<b_{m_n+1}$. Let $m_0$ be maximal such that
$\epsilon_{m_0}\base{b_{m_n}}{\max(R_{m_0})}>b_{m_0}$. Define $f(n)$ to be $4\epsilon_{m_0}$. By
construction, $\epsilon_m$ is monotonically decreasing and so $f$ is also. Note, for all $m$,
$\ell_m>0$ and $\lim_{m\to\infty}b_m=\infty$. For every stage $m+1$, clause (1) sets
$\epsilon_{m+1}$ to be $\epsilon_m/2$, unless $b_m$ is not sufficiently large. The value of
$\epsilon_{m+1}$ will be reduced at a later sufficiently large stage. Thus, $\epsilon_m$ goes to
$0$ and so does $f$.
The function $f$ is defined in terms of the sequences of values $b_m$, $R_m$ and $\epsilon_m$, which are determined by clause (1). The conditions and functions appearing in clause (1) are computable, as was verified in each of the relevant lemmas. Thus, $f$ is a computable function.
Suppose that $r$ and $s$ are multiplicatively independent. Fix $n_0$ and $n_1$ so that $r\in R_{n_0}$ and $\epsilon_{n_0}\base{b_{n_1}}{\max(R_{n_0})}>b_{n_0}$. Let $n$ be any integer greater than $b_{n_1}$ and let $m_n$ be such that $b_{m_n}\leq n<b_{m_n+1}$. By definition of $f$, there is an $m_0$ such that $f(n)=4\epsilon_{m_0}$ and $\epsilon_{m_0}\base{b_{m_n}}{\max(R_{m_0})}>b_{m_0}$. Since $n>b_{n_1}$, this $m_0$ is greater than or equal to $n_0$. By Lemma~\ref{3.8}, for each $m+1\geq m_0$, $\xi_{m+1}$ is chosen so that $A(\nu,R_{m+1},T,b_{m},\ell_{m+1})/\base{\ell_{m+1}}{r}^2$ is sufficiently small to ensure \[ D(\expa{r^j\xi_m}:\base{b_m}{r}<j\leq \base{b_{m+1}}{r})<(\epsilon_{m+1}/10)^4. \] By Lemma~\ref{3.9}, for each $m$ greater than or equal to $m_0$, \[ D(\expa{r^j\xi}:\base{b_m}{r}<j\leq \base{b_{m+1}}{r})<\epsilon_m\leq\epsilon_{m_0}. \] Fix $m$ so that $\base{b_m}{r}\leq \base{n}{r}<\base{b_{m+1}}{r}$. By a direct count, \[ D(\expa{r^j\xi}:\base{b_{m_0}}{r}< j\leq\base{b_m}{r})<\epsilon_{m_0}. \] By Lemma~\ref{3.3}, \[ D(\expa{r^j\xi}:0\leq j< \base{b_m}{r})<2\epsilon_{m_0}. \] And again by Lemma~\ref{3.3}, \[ D(\expa{r^j\xi}:0\leq j< \base{n}{r})<4\epsilon_{m_0}=f(n). \] Furthermore, since $\lim_{n\to\infty}f(n)=0$ we have $\lim_{n\to\infty} D(\expa{r^j\xi}:0\leq j<\base{n}{r})=0$. Consequently, $\xi$ is normal to base $r$.
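As a numerical aside (not part of the proof): the star discrepancy, one standard variant of the discrepancy $D$ used above, of an orbit $(\expa{2^j\xi})_{j<N}$ can be read off from the binary digits of $\xi$. Exact integer arithmetic is needed, since floating point retains no information about $2^j\xi \bmod 1$ beyond $j\approx 50$. The sketch below, with illustrative parameters, contrasts $\xi=\sqrt2$, whose binary orbit looks equidistributed, with $\xi=1/3$, whose orbit only visits neighborhoods of $1/3$ and $2/3$.

```python
from math import isqrt

def sqrt2_binary_digits(k):
    # First k binary digits of the fractional part of sqrt(2),
    # via floor(sqrt(2) * 2^k) computed with exact integer arithmetic.
    n = isqrt(2 * 4 ** k)          # floor(2^k * sqrt(2))
    return bin(n)[2:][1:]          # drop '0b' and the integer part '1'

def orbit(digits, N, prec=50):
    # frac(2^j * xi) is read off as the digit string of xi shifted j places.
    return [int(digits[j:j + prec], 2) / 2 ** prec for j in range(N)]

def star_discrepancy(pts):
    # Star discrepancy of a finite point set in [0,1).
    xs = sorted(pts)
    N = len(xs)
    return max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(xs))

N = 2000
d_sqrt2 = star_discrepancy(orbit(sqrt2_binary_digits(N + 60), N))
d_third = star_discrepancy(orbit("01" * (N + 60), N))  # 1/3 = 0.0101..._2
# d_sqrt2 is small, while d_third stays near 1/3: the orbit of 1/3 only
# accumulates at 1/3 and 2/3, so 1/3 is not normal to base 2.
```

The decay of the discrepancy along the $\sqrt2$ orbit is exactly the kind of behavior the construction enforces, block by block, for the bases in $R_m$.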
Consider the base $s$. During each stage $m$, the value of $ D(\expa{s^j\xi_m}:\base{b_m}{s}<j\leq \base{b_{m+1}}{s}) $ is controlled from above and from below. First, we discuss the lower bound on the discrepancy function for $\xi$ in base $s$. By construction, $\xi_{m+1}$ is obtained from $\xi_m$ by adding a rational number whose $s^{k_m}$-adic expansion omits at least the digit $s^{k_m}-1$. Further, the same digit $s^{k_m}-1$ in base $s^{k_m}$ was omitted at every previous stage (omitting $s^{k}-1$ in base $s^k$ precludes a length $k$ sequence of digits $s-1$ in base $s$). Then, for any $n$ such that ${\base{b_m}{s}}\leq n<{\base{b_{m+1}}{s}}$, \[ D(\expa{s^j\xi}:0\leq j< \base{n}{s})\geq 1/(2s^{k_m}). \] By construction, $k_m$ is defined so that $1/(2s^{k_m})>g({\base{b_m}{s}})\geq g(n)$. Hence, \[ D(\expa{s^j\xi}:0\leq j< \base{n}{s})> g(n). \] Now, we treat the upper bound. Let $m$ be a stage. Let $m_0$ be the greatest stage less than or equal to $m$ such that $k_{m_0}\neq k_{m_0-1}$. By construction, $k_{m_0}$ and $\ell_{m_0}$ satisfy the conditions of Lemma~\ref{3.14} with input $\epsilon$ equal to $1/(4s^{k_{m_0}})$. Since $\ell_m\geq\ell_{m_0}$, the same holds during stage $m$. Consider the selection of $\nu$ during stage $m$. By Lemma~\ref{3.17}, more than half of the eligible candidates satisfy the inequality $A(\nu,R_m,T,b_{m-1},\ell_m)/\base{\ell_m}{\max(R_m)}^2<\delta$. By Lemma~\ref{3.14}, more than half the candidates satisfy \[ D(\expa{s^j\nu}:1\leq j\leq \base{\ell_m}{s})<1/(4s^{k_{m_0}}). \] Consequently, $\xi_m$ will be defined so that \[ D(F,(\expa{s^j\xi_m}:\base{b_{m-1}}{s}< j\leq \base{b_m}{s})) \] is less than $1/(4 s^{k_{m_0}})$, where $F$ is as indicated in the construction. By Lemma~\ref{3.1}, \[ D(\expa{s^j\xi_m}:\base{b_{m-1}}{s}< j\leq \base{b_m}{s})<3(4 s^{k_{m_0}})^{-1/2}. \] As already argued, $\lim_{m\to\infty}\epsilon_m=0$. 
Similarly, the values of ${\overline{k}}_m$ and the maximum element of $R_m$ become arbitrarily large as $m$ increases. It follows that $\lim_{m\to\infty}\ell_m=\infty$. Since $g$ is a monotonically decreasing function and $\lim_{n\to\infty}g(n)=0$, for every stage $m$ there will be a later stage $m_1$ such that $k_{m_1}>k_{m}$. Thus, $\lim_{m\to\infty} D(\expa{s^j\xi_m}:\base{b_{m-1}}{s}< j\leq \base{b_m}{s})=0$. It follows from Lemma~\ref{3.9} that $\lim_{m\to\infty} D(\expa{s^j\xi}:\base{b_{m-1}}{s}< j\leq \base{b_m}{s})=0$, and from Lemma~\ref{3.4} that $\lim_{N\to\infty} D(\expa{s^j\xi}:0\leq j< N)=0$. Hence $\xi$ is normal to base $s$. By Maxfield's Theorem, $\xi$ is normal to every base multiplicatively dependent to $s$. Thus, $\xi$ is absolutely normal.
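For reference (an illustration, not part of the paper): two bases $r,s\geq 2$ are multiplicatively dependent exactly when $r^p=s^q$ for some positive integers $p,q$, equivalently when their prime factorizations have the same support and proportional exponents. This criterion is decidable, as the following small Python check shows.

```python
from math import gcd

def factorize(n):
    """Prime factorization of n >= 2 as a dict {prime: exponent}."""
    f = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def multiplicatively_dependent(r, s):
    """True iff r^p = s^q for some positive integers p, q."""
    fr, fs = factorize(r), factorize(s)
    if set(fr) != set(fs):
        return False  # different prime supports: independent
    # dependence iff all exponent pairs reduce to the same ratio
    reduced = {(a // gcd(a, b), b // gcd(a, b))
               for a, b in ((fr[p], fs[p]) for p in fr)}
    return len(reduced) == 1
```

For instance, $4$ and $8$ are dependent ($4^3=8^2$), while $6$ and $12$ are independent even though they share the same prime factors.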
\subsection{Proof of Theorem~\ref{5}}
\begin{theoremC}
Let $R$ be a set of bases closed under multiplicative dependence. There are real numbers normal
to every base from $R$ and not simply normal to any base in its complement. Furthermore, such a real
number can be obtained computably from $R$. \end{theoremC}
Let $S$ denote the set of bases in the complement of $R$. Fix an enumeration of $S$ such that every element of $S$ appears infinitely often. The case in which $2$ is an element of $S$ requires special attention and we treat it separately.
\emph{The case $2\not\in S$.\ } Assume that $2$ is not an element of $S$.
We define a sequence $\xi_m$, $b_m$, $s_m$, $\epsilon_m$, $\ell_m$, $R_m$ and $c_m$. $b_m$ is a positive
integer, $\epsilon_m$ a positive rational number and $R_m$ a finite set of bases multiplicatively
independent to $s_m$. $\xi_m$ is an $s_m$-adic rational number of precision $\base{b_m}{s_m}$.
The real $\xi$ will be an element of $[\xi_m,\xi_m+s_m^{-\base{b_m}{s_m}})$. Stage $m+1$ is
devoted to extending $\xi_{m}$ so that the discrepancy of the extension is below $\epsilon_{m+1}$
for the bases in $R_{m+1}$ and so that the extension in base $s_{m+1}$ omits the digit
$s_{m+1}-1$. $\ell_m$ determines the length of the extension. $c_m$ is a counter to track
progress through the enumeration of $S$ with repetitions.
\noindent{\em Initial stage.\ } Let $\xi_0=0$, $b_0=0$, $s_0$ be the least element of $S$,
$\epsilon_0=1$, $\ell_0=0$, $R_0=\{r_0\}$ where $r_0$ is the least element of $R$ and $c_0=1$.
\noindent{\em Stage $m+1$.\ } Given $\xi_{m}$ of the form $\sum_{j=1}^{\base{b_{m}}{s_{m}}} v_j\,s_{m}^{-j}$, $b_{m}$, $s_{m}$, $\epsilon_{m}$, $\ell_m$, $R_m$ and $c_m$.
(1) If $D(\{[1-1/s_m,1]\},(\expa{s_m^j\xi_m}:0\leq j<\base{b_m}{s_m}))<(1/4)(1/s_m)$, then let $s_{m+1}=s_{m}$, $\epsilon_{m+1}=\epsilon_{m}$, $\ell_{m+1}=\ell_m$, and $R_{m+1}=R_m$.
(2) Otherwise, let $c=c_m+1$. Let $s$ be the $c$th element in the enumeration of $S$. Let $r$ be the least element of $R$ not in $R_m$. Let $L$ be the least integer greater than $\max(c\, p(s_m,s)\log(\max(R_m)), \ell(R_m\cup\{r\},s,1,1/c))$. If~$(1/c) \base{b_m}{\max(R_m)}\leq L+p(s_m,s)$ then let $s_{m+1}$ be $s_m$, let $\epsilon_{m+1}$ be $\epsilon_m$, let $\ell_{m+1}$ be $\ell_m$, $R_{m+1}$ be $R_m$ and $c_{m+1}$ be $c_m$.
(3) Otherwise, let $s_{m+1}$ be $s$, $\epsilon_{m+1}$ be $1/c$, $\ell_{m+1}$ be $L$, $R_{m+1}$ be $R_m\cup\{r\}$ and $c_{m+1}$ be $c$.
Let $a_{m+1}$ be minimal such that there is an $s_{m+1}$-adic subinterval of $[\xi_{m},\xi_{m}+s_{m}^{-\base{b_{m}}{s_{m}}})$ with measure $s_{m+1}^{-\base{a_{m+1}}{s_{m+1}}}$ and the leftmost such subinterval be $[\eta_{m+1},\eta_{m+1}+s_{m+1}^{-\base{a_{m+1}}{s_{m+1}}})$. Let ${\tilde{s}}$ be $s_{m+1}-1$ if $s_{m+1}$ is odd and be $s_{m+1}-2$ otherwise. Let $T$ and $\delta$ be as determined in Lemma~\ref{3.8} for input $\epsilon=(\epsilon_{m+1}/10)^4$. Let $\nu$ be in $[\eta_{m+1},\eta_{m+1}+s_{m+1}^{-\base{a_{m+1}}{s_{m+1}}})$ such that
\begin{itemize} \item $\displaystyle{
\nu=\eta_{m+1}+\sum_{j=1}^{\base{\ell_{m+1}}{s_{m+1}}} w_js_{m+1}^{-(\base{a_{m+1}}{s_{m+1}}+j)}}$,
for some $(w_1,\dots,w_{\base{\ell_{m+1}}{s_{m+1}}})$ in\\
$\digits{{\tilde{s}}}{\base{\ell_{m+1}}{s_{m+1}}}$
\item $A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1})/\base{\ell_{m+1}}{\max(R_{m+1})}^2<\delta$ \end{itemize} We define $\xi_{{m+1}}$ to be $\nu$ and $b_{m+1}$ to be $a_{m+1}+\ell_{m+1}$. This ends the description of stage $m+1$.
We verify that the construction succeeds. Let $m+1$ be a stage. If clause (1) or (2) applies during stage $m+1$, let $m_0$ be the greatest stage less than or equal to $m+1$ such that $c_{m_0}=c_{m_0+1}=\cdots=c_{m+1}$. During stage $m_0$, $\ell_{m_0}$ was chosen to satisfy the conditions to reach clause (3). Note that since $b_m>b_{m_0}$ these conditions apply to $b_m$ in place of $b_{m_0}$: $(1/c_{m+1}) \base{b_m}{\max(R_{m_0})}>\ell_{m+1}+p(s_{m_0-1},s_{m+1})$ and $\ell_{m+1}$ is the maximum of $c_{m+1} p(s_{m_0-1},s_{m+1})$ and $\ell(R_{m+1},s_{m+1},1,1/c_{m+1})$. If clause (3) applies during stage $m+1$, then the analogous conditions hold by construction. Then, stage $m+1$ determines the subinterval $[\eta_{m+1},\eta_{m+1}+(s_{m+1})^{-\base{a_{m+1}}{s_{m+1}}})$ of the interval provided at the end of stage $m$. Following that, it selects $\nu$ and finishes the stage. The existence of an appropriate $\nu$ is ensured by Lemma~\ref{3.17} applied to the parameters of the construction, as anticipated in the definition of the $\ell$ function. It follows that $\xi$ is well defined as the limit of the $\xi_{m}$. Further, since $\ell$ takes only positive values, $b_m$ is an increasing function of $m$.
We show that $c_m$ goes to infinity and $\epsilon_m=1/c_m$ goes to $0$. Consider a stage $m+1$. By construction, no element of $(\expa{s_{m+1}^j\xi_{m+1}}:\base{a_{m+1}}{s_{m+1}}< j\leq \base{b_{m+1}}{s_{m+1}})$ is in $[1-1/s_{m+1},1]$. Further, during every subsequent stage $m_1+1$ with $c_{m_1+1}=c_{m+1}$, we have $a_{m_1+1}=b_{m_1}$, so no element of $(\expa{s_{m+1}^j\xi_{m_1+1}}:\base{b_{m_1}}{s_{m+1}}< j\leq \base{b_{m_1+1}}{s_{m+1}})$ is in $[1-1/s_{m+1},1]$. By Lemma~\ref{3.5}, there will be a stage $n+1$ after $m+1$ such that $c_{n+1}=c_{m+1}$ and \[ D(\{[1-1/s_{m+1},1]\},(\expa{s_{m+1}^j\xi_n}:0\leq j< \base{b_n}{s_{m+1}}))\geq (1/4)(1/s_{m+1}). \] Thus, clauses (1) and (2) cannot maintain the value $c_{m+1}$ indefinitely.
Suppose that $s\in S$. There will be infinitely many stages $m$ such that $s=s_m$. By the above, there will be infinitely many $m$ such that $s_m=s$ and \[ D(\{[1-1/s_{m},1]\},(\expa{s_{m}^j\xi_m}:0\leq j<\base{b_m}{s_{m}}))\geq (1/4)(1/s_{m}). \]
Since $\xi\in[\xi_{m},\xi_{m}+s_{m}^{-\base{b_{m}}{s_{m}}})$, the same is true for $\xi$ in place of $\xi_m$. It follows that $\xi$ is not simply normal to base $s$.
Suppose that $r\in R$ and $\epsilon>0$. For all sufficiently large stages, $r\in R_{m+1}$ and $\epsilon_{m+1}<\epsilon$. Consider a sufficiently large stage $m+1$. $\xi_{m+1}$ was defined to be $\nu$, which was chosen so that $A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1})/\base{\ell_{m+1}}{\max(R_{m+1})}^2<\delta$. By Definition~\ref{3.16}, \[A(\nu,R_{m+1},T,a_{m+1},\ell_{m+1}) =\displaystyle{\sum_{t\in T}\;\sum_{r\in R_{m+1}}\;
\Bigl|\sum_{j=\base{a_{m+1}+1}{r}}^{\base{b_{m+1}}{r}} e(r^j t \nu)\Bigr|^2 } \]
and so
$\displaystyle{\base{\ell_{m+1}}{r}^{-2} \sum_{t\in T}\Bigl|\sum_{j=\base{a_{m+1}+1}{r}}^{\base{b_{m+1}}{r}} e(r^j t
\nu)\Bigr|^2<\delta.}$ By choice of $T$ and $\delta$, Lemma~\ref{3.8} ensures that \[ D(r^j\nu:\base{a_{m+1}}{r}<j\leq \base{b_{m+1}}{r})<(\epsilon_{m+1}/10)^4. \] By definition of $\xi$, $\xi \in[\nu,\nu+s_{m+1}^{-\base{b_{m+1}}{s_{m+1}}})$. By Lemma~\ref{3.9}, we conclude that \[ D(r^j\xi:\base{a_{m+1}}{r}< j\leq \base{b_{m+1}}{r})<\epsilon_{m+1}. \] By construction, $\epsilon_{m+1} \ell_{m+1}$ is greater than $\log(r)\,p(s_m,s_{m+1})$. By Lemma~\ref{3.3}, \[ D(r^j\xi:\base{b_{m}}{r}< j\leq \base{b_{m+1}}{r})<2\epsilon_{m+1}<2\epsilon. \] It follows that $\xi$ is normal to base $r$.
\emph{The case $2\in S$.\ } Removing $2$ from $S$ and retaining all of the other powers of $2$ in $S$ maintains the condition of multiplicative independence between elements of $R$ and $S$. A small alteration in our construction during the stages that ensure that $\xi$ is not simply normal to base $4$ will also ensure that $\xi$ is not simply normal to base $2$, by application of Lemma~\ref{3.13}:
We change clause (2) to require that $\ell_m$ be sufficiently large so that Lemma~\ref{3.13} applies to conclude that more than half of the base $4$ sequences of length $\ell_m$ have simple discrepancy greater than $1/8$ in base $2$. This requirement is added to the others that determine $\ell_m$ in the general construction. Then, while the value $s_m=4$ is maintained, we choose $\nu$ from among these sequences and so that the condition on the value of $A$ on $\nu$ from the general construction is also satisfied. Finally, clause (1) should be changed so that in addition to the existing condition on discrepancy in base~$4$ there is another condition that the simple discrepancy in base $2$ is less than $1/16$.
Even with these changes, $\xi$ is well-defined. Lemma~\ref{3.13} shows that more than half of the sequences $\nu$ have simple discrepancy greater than $1/8$ in base $2$. Lemma~\ref{3.17} shows that at least half of them satisfy the condition on the value of $A$. Thus, there is an appropriate $\nu$ available. Arguing as previously, $\xi$ is not simply normal to base~$2$.
\noindent {\bf Acknowledgements.} Becher's research was supported by CONICET and Agencia Nacional de Promoci\'{o}n Cient\'{i}fica y Tecnol\'{o}gica, Argentina. Slaman's research was partially supported by the National Science Foundation, USA, under Grant No. DMS-1001551 and by the Simons Foundation. This research was done while the authors participated in the Buenos Aires Semester in Computability, Complexity and Randomness, 2013.
\end{document}
\begin{document}
\title{A Harnack's inequality for mixed type evolution equations}
\begin{abstract} We define a homogeneous parabolic De Giorgi class of order 2 which suits a class of mixed type evolution equations whose simplest example is $\mu (x) \frac{\partial u}{\partial t} - \Delta u = 0$, where $\mu$ can be positive, null or negative, so that in particular elliptic-parabolic and forward-backward parabolic equations are included. For functions belonging to this class we prove local boundedness and show a Harnack inequality which, as by-products, gives H\"older-continuity, in particular on the interface $I$ where $\mu$ changes sign, and a maximum principle. \end{abstract}
\ \\
\noindent Mathematics subject classification: 35B65, 35M10, 35B50, 35B45, 35J62, 35K65, 35J70 \\ Keywords: parabolic equations, elliptic equations, elliptic-parabolic equations, forward-backward parabolic equations, mixed type equations, weighted Sobolev spaces, Harnack's inequality, H\"older-continuity
\section{Introduction}
The purpose of this paper is to study problems related to equations of mixed type whose simplest example may be \begin{equation} \label{lequazione} \mu (x) \frac{\partial u}{\partial t} - \Delta u = 0 \qquad \qquad \text{in } \Omega \times (0,T) \end{equation} where $\mu$ is a function changing sign and possibly taking also the value zero in some region of positive measure, $\Omega$ an open subset of ${\bf R}^n$ and $T > 0$. This means that the equation can be forward parabolic in a subregion $\Omega_+ \times (0,T)$, backward parabolic in another subregion $\Omega_- \times (0,T)$ and also a family of elliptic equations depending on the parameter $t$ in a third subregion $\Omega_0 \times (0,T)$ of $\Omega \times (0,T)$. For the existence of solutions to such equations we refer to \cite{fabio4} and the forthcoming paper \cite{fabio9}. In those papers the coefficient $\mu$ is allowed to depend also on time, but here we confine ourselves to $\mu$ depending only on the spatial variable. \\ We recall that equations of this kind have already been considered, but always in some simple cases. Among the many papers about mixed type equations, we recall \cite{baogris} and \cite{pag-tal}, where the two following equations are considered \begin{gather*} x \frac{\partial u}{\partial t} - \frac{\partial^{2m} u}{\partial x^{2m}} = 0 \quad (m \geqslant 1 , \, m \in {\bf N}) \, , \qquad \text{sgn}(x) \frac{\partial u}{\partial t} - \frac{\partial^{2} u}{\partial x^{2}} + k u = f \, , \end{gather*} and one of the many cases considered by Beals, \cite{beals}, where the following equation is studied $$ x \frac{\partial u}{\partial t} - \frac{\partial}{\partial x} \left( (1-x^2) \frac{\partial u}{\partial x} \right) \ = 0 $$ and, for a general situation like the one we consider in \cite{fabio4} and \cite{fabio9}, but confined to $\mu \geqslant 0$, we recall \cite{show1}. \\ For the many applications we refer to \cite{show1} and \cite{fabio9} and the references therein. 
\\ [0.4em] Coming back to the content of the present paper, we give a Harnack type inequality (see Theorem \ref{Harnack1} and Theorem \ref{Harnack2}) for a wide class of functions belonging to a proper De Giorgi class. By this result, on one side we give a generalized Harnack inequality which includes the classical ones for elliptic equations and for parabolic equations; on the other we study regularity and maximum principles for solutions of equations like \eqref{lequazione}; in particular we get local H\"older continuity on the interfaces where $\mu$ changes sign (see the examples at the end of the paper). \\ We recall that a regularity result in a very general setting was already obtained in \cite{fabio7} and \cite{fabio9}, but it concerns only regularity in time. \\ [0.4em] To avoid restricting attention to equations with $\mu : \Omega \to \{ -1, 0 , 1 \}$ and, on the contrary, to allow also, for instance, continuous $\mu$, one is forced to consider weighted spaces. For this reason we consider a more general De Giorgi class, suitably defined to contain quasi-minima (see Section \ref{De Giorgi classes and Q-minima}) for the equation \begin{equation} \label{lequazione2} \mu \frac{\partial u}{\partial t} - \text{div} (\lambda D u) = 0 \qquad \qquad \text{in } \Omega \times (0,T) \end{equation} with $\mu$ and $\lambda$ functions in $L^1_{\rm loc} (\Omega)$, $\lambda > 0$ while $\mu$ is real-valued. Indeed the De Giorgi class we consider contains also solutions of more general equations, like \begin{equation} \label{equazionegeneralissima} \mu (x) \frac{\partial u}{\partial t} - \text{div} \, A (x,t,u,Du) = B(x,t,u,Du) \end{equation} with ($L \geqslant 1, M > 0$) \begin{align} \label{condizionissime}
& \big(A (x,t,u,Du) , Du \big) \geqslant \lambda (x) |Du|^2 \, , \nonumber \\
& | A (x,t,u,Du) | \leqslant L \, \lambda (x) |Du| \, , \\
& | B (x,t,u,Du) | \leqslant M \, \lambda (x) |Du| \, . \nonumber \end{align} To obtain our main result we follow \cite{dib1} and \cite{gianazza-vespri}, but we stress that the De Giorgi class we consider is different from the one considered in those papers, even when $\mu \equiv 1$, and not only because of the more complicated nature of the equations we consider (the reason lies in Lemma \ref{stimettaDG}). \\ [0.4em] Since our class contains parabolic quasi-minima, we recall that quasi-minima or quasi-minimizers (briefly $Q$-minima) were introduced by Giaquinta and Giusti in \cite{giagiu}, where they prove local H\"older continuity, extending the result due to De Giorgi for the solutions of elliptic equations, while the Harnack inequality for quasi-minima was proved by DiBenedetto and Trudinger in \cite{dib-tru}. In the parabolic setting the definition of quasi-minima is due to Wieser (see \cite{wieser}), who proves H\"older continuity for a suitable parabolic De Giorgi class. \\ [0.5em] Going back to degenerate elliptic and parabolic equations, where by ``degenerate'' we mean that some weights are involved, as in \eqref{lequazione2}, we specify that we consider $\mu$ and $\lambda$ such that $$
|\mu|_{\lambda} := \left\{ \begin{array}{ll}
|\mu| & \text{ if } \mu \not = 0, \\ \lambda & \text{ if } \mu = 0 \end{array} \right. \quad \text{and} \quad \lambda \qquad \text{are Muckenhoupt weights} , $$
a class of weights we introduce in Section \ref{paragrafo2}. Precisely $|\mu|_{\lambda} \in A_{\infty}$ and $\lambda \in A_2$. Moreover we assume a condition relating $|\mu|_{\lambda}$ and $\lambda$, assumption (H.2), which is the existence of two constants $q > 2$ and $K > 0$ such that ($x \in {\bf R}^n$, $\rho > r > 0$) \begin{equation} \label{siamoinritardo}
\left(\frac{|B_r(x)|}{|B_\rho(x)|}\right)^{1/n}
\left( \frac{|\mu|_{\lambda} (B_r(x))}{|\mu|_{\lambda} (B_\rho(x))} \right)^{1/q} \left( \frac{\lambda (B_r(x))}{\lambda (B_\rho(x))} \right)^{-1/2} \leqslant K \, . \end{equation}
We stress that we are forced to introduce the weight $|\mu|_{\lambda}$ extending $|\mu|$ because the weight $|\mu|$ could be zero in some region with positive measure and in that case the measure associated to $|\mu|$, even if non-negative, would not be {\em doubling}. We recall that $\omega \in L^1_{\rm loc}(\Omega)$, $\omega : \Omega \to [0, +\infty]$, satisfies a doubling condition if there is a positive constant $c$ such that $$ \omega (B_{2r} (x_0)) \leqslant c \, \omega (B_{r} (x_0)) $$
for every $x_0 \in \Omega$ and $r > 0$ such that $B_{2r} (x_0) \subset \Omega$ (and where $\omega (A)$ denotes $\int_A \omega (x) dx$). The assumptions we need for the weights $|\mu|_{\lambda}$ and $\lambda$ are summarized in (H.1), (H.2), (H.3), (H.4) in Section \ref{De Giorgi classes and Q-minima}. In particular (H.4) also gives a condition about the geometry of the interface separating the regions $\Omega_+ = \{ \mu > 0 \}, \Omega_0 = \{ \mu = 0 \}, \Omega_- = \{ \mu < 0 \}$, a condition which turns out to be sufficient to get the Harnack inequality. We do not know if this is sharp and are not able to give a counterexample to this condition. \\ [0.4em] Harnack's inequality for parabolic equations was first proved separately by Hadamard and Pini in 1954, just for the heat equation; later Moser, Aronson, Serrin and Trudinger gave generalizations of this result. Among the many papers studying Harnack's inequality and regularity of partial differential equations, both parabolic and elliptic, we confine ourselves to mentioning some papers regarding degenerate cases similar to the one we are considering, referring also to the references contained in them for the more classical results. \\ First we recall \cite{fa-ke-se} where for the first time, to the best of our knowledge, a Muckenhoupt condition on $\lambda$, and precisely $\lambda \in A_2$, was considered in the study of regularity of the solutions of equations like $$ \text{div} (\lambda \, D u) = 0 $$ or more generally $\text{div} (a \cdot D u) = 0$ where $a$ satisfies $$
\lambda (x) |\xi|^2 \leqslant \big( a(x) \cdot \xi, \xi \big) \leqslant L \, \lambda(x) |\xi|^2 \, . $$ In this regard we also recall \cite{tru1} and \cite{tru2}, where some summability conditions, rather than local conditions, on the weight were required. \\ As regards the parabolic case, we recall that equations like \eqref{lequazione2} are considered in \cite{chia-se1}, where $\mu \equiv 1$ is considered, and in \cite{chia-se2}, where $\mu = \lambda$ is considered. In both these papers an $A_2$ condition on the weight $\lambda$ is assumed: in the first, where $\mu \equiv 1$, to show that $L^{\infty}$ bounds and a Harnack inequality are impossible; in the second, where $\mu = \lambda$, to show $L^{\infty}$ bounds and a Harnack inequality. To get the Harnack inequality with $\mu \equiv 1$ a stronger assumption has to be made, i.e. $\lambda$ has to belong to $A_{1+2/n}$, which is a proper subclass of $A_2$ (see \cite{chia-se3}). \\ A more recent paper we mention is \cite{mohammed}, about linear elliptic equations with principal part in divergence form, where the matrix $a$ defining the principal part satisfies \begin{equation} \label{veronastation}
\lambda_1 (x) |\xi|^2 \leqslant \big( a(x,t) \cdot \xi, \xi \big) \leqslant \lambda_2 (x) |\xi|^2 \, , \end{equation}
but satisfying \eqref{siamoinritardo} with $\lambda_1$ in the place of $\lambda$ and $\lambda_2$ in the place of $|\mu|_{\lambda}$; this implies the Sobolev-Poincar\'e inequality $$
\Big[ \frac{1}{\nu(B_\rho)} {\int_{B_\rho}} |u(x)|^q \lambda_2 (x) dx \Big]^{1/q} \leqslant C \, \rho \,
\Big[ \frac{1}{\omega(B_\rho)} {\int_{B_\rho}} |Du(x)|^2 \lambda_1 (x) dx \Big]^{1/2} $$ for every Lipschitz function with either support contained in $B_{\rho}$ or with null mean value. About parabolic equations with some $\mu$ in front of $\partial_t$ we also mention \cite{ishige}, where an equation with $\mu = \lambda$ is considered, \cite{fernandes}, where the author considers $\mu \, \partial_t u - \text{div} \big( a(x,t) Du \big) = 0$ with $a$ satisfying \eqref{veronastation}, and \cite{gut-wheeden2}, where $\lambda_1$ and $\lambda_2$ depend also on time. Finally we quote the recent paper \cite{surnachev}, where the technique used is the one developed by DiBenedetto, Gianazza and Vespri in \cite{dib1} and \cite{gianazza-vespri} (see also \cite{articolo} for a result regarding non-linear equations) and the result is analogous to that in \cite{chia-se3}, but it concerns monotone operators with $(p-1)$-growth and the condition about $\lambda$ is $A_{1+p/n}$. \\ [0.4em]
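As a numerical aside (not from the paper): the $A_2$ quantity $\big(\frac{1}{|I|}\int_I \omega\big)\big(\frac{1}{|I|}\int_I \omega^{-1}\big)$ can be explored for the model weight $\omega(x)=|x|^\alpha$ in dimension one. For intervals $(0,L)$ a direct computation gives the value $1/(1-\alpha^2)$ independently of $L$, which is finite precisely for $|\alpha|<1$, matching the membership $|x|^\alpha\in A_2$ for $|\alpha|<1$. A sketch with illustrative parameters:

```python
def avg(fn, a, b, n=40000):
    # Midpoint-rule average of fn over (a, b).
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)

def a2_product(alpha, a, b):
    """A_2 quantity (avg of w)(avg of 1/w) for w(x) = |x|^alpha on (a, b)."""
    w = lambda x: abs(x) ** alpha
    winv = lambda x: abs(x) ** (-alpha)
    return avg(w, a, b) * avg(winv, a, b)

# Scale invariance: for alpha = 0.5 the product over (0, L) stays near
# 1/(1 - 0.25) = 4/3 for every L, so intervals touching the singularity
# do not make the A_2 quantity blow up as long as |alpha| < 1.
vals = [a2_product(0.5, 0.0, L) for L in (0.1, 1.0, 10.0)]
# As alpha -> 1 the product grows (the numerical accuracy also degrades
# for strong singularities, but the trend is visible).
grow = a2_product(0.9, 0.0, 1.0)
```

This is only a one-dimensional toy computation; the paper's condition (H.2) couples two such weights across all balls and radii.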
Coming back to our result, we stress that our condition (H.2) on the pair $(|\mu|_{\lambda}, \lambda)$ simply reduces to requiring $\lambda \in A_2$ when $\mu \equiv \lambda$, while it is sharp to get, among the Muckenhoupt weights, $\lambda \in A_{1+2/n}$ when $\mu \equiv 1$ (for this see Remark \ref{notaimportante}, point $\mathpzc{D}$, and Remark \ref{cortona}), so our result covers the results obtained in \cite{chia-se2} and \cite{chia-se3} and, confining to $\mu > 0$ almost everywhere, generalizes them to doubly weighted parabolic equations. \\ [0.4em] As regards results concerning mixed type equations, we recall a recent result contained in \cite{alk-liske}, where the authors prove H\"older continuity for the limit of a family of functions, the solutions $(u_{\epsilon})_{\epsilon > 0}$ of a class of parabolic equations like $$ \partial_t (\omega_{\epsilon} u) - \text{div} \, ( \omega_{\epsilon} a (x,t) Du) = 0 \, , $$ with $a$ satisfying \eqref{veronastation} with $\lambda_1$ and $\lambda_2$ positive constants and $\omega_{\epsilon} = 1$ in one region, $\omega_{\epsilon} = \epsilon$ in another. \\ [0.4em] Before concluding the introduction we want to stress some difficulties and some interesting features of the main results (Theorem \ref{Harnack1} and Theorem \ref{Harnack2}). A first comment is the following: given a ball $B_{\rho}(x_o) \subset \Omega$ and once defined $B_{\rho}^+(x_o) := B_{\rho}(x_o) \cap \{ \mu > 0 \}$, $B_{\rho}^-(x_o) := B_{\rho}(x_o) \cap \{ \mu < 0 \}$, $B_{\rho}^0(x_o) := B_{\rho}(x_o) \cap \{ \mu = 0 \}$, we (in particular) show that there is a positive constant $c$ such that for every $u$ in a proper class $$ u(x_o, t_o)\leqslant c \inf_{B_{\rho} (x_o)}
\left\{
\begin{array}{ll}
u \left(x, t_o + \vartheta \, \rho^2 \, \frac{|\mu|_{\lambda} (B_{\rho}(x_o))}{\lambda (B_{\rho}(x_o))} \right)
& \text{ if } x \in \overline{B_{\rho}^+(x_o)} \\ [0.5em]
u \left(x, t_o - \vartheta \, \rho^2 \, \frac{|\mu|_{\lambda} (B_{\rho}(x_o))}{\lambda (B_{\rho}(x_o))} \right)
& \text{ if } x \in \overline{B_{\rho}^-(x_o)} \\ [0.5em]
u(x, t_o) & \text{ if } x \in \overline{B_{\rho}^0(x_o)} .
\end{array}
\right. $$ Notice that the temporal interval where $\mu \not= 0$ is proportional to $$
\rho^2 \, \frac{|\mu|_{\lambda} (B_{\rho}(x_o))}{\lambda (B_{\rho}(x_o))} $$ where ($\mu_+$ and $\mu_-$ being the positive and negative parts of $\mu$) $$
|\mu|_{\lambda} (B_{\rho}(x_o)) = \mu_+ (B_{\rho}^+(x_o)) + \mu_- (B_{\rho}^-(x_o)) + \lambda (B_{\rho}^0(x_o)) \, ; $$ what we want to stress is then that the natural temporal delay, for instance where $\mu > 0$, depends also on the measure of the regions where $\mu < 0$ and $\mu = 0$. \\ This causes a difficulty in proving our result, in particular Theorem \ref{Harnack1}, because the natural cylinders are of the form $$
B_{\rho}(x) \times \left(t, t +\rho^2 \, \frac{|\mu|_{\lambda} (B_{\rho}(x))}{\lambda (B_{\rho}(x))} \right) $$ and so (in general) it is not true that $$
B_{r}(x) \times \left(t, t + r^2 \, \frac{|\mu|_{\lambda} (B_{r}(x))}{\lambda (B_{r}(x))} \right)
\subset B_{R}(x) \times \left(t, t + R^2 \, \frac{|\mu|_{\lambda} (B_{R}(x))}{\lambda (B_{R}(x))} \right) , \quad \text{with } \quad 0 < r < R \, . $$ Other natural difficulties are due to the equation, which can change its nature around an interface, and so every result used by DiBenedetto, Gianazza and Vespri has to be suitably modified and adapted. \\ [0.4em] The paper is organized as follows: in Section \ref{paragrafo2} we introduce the class of Muckenhoupt weights and prove some results needed in the following; in particular a simple, but fundamental, extension of a classical lemma will be needed (see Lemma \ref{lemmuzzofurbo-quinquies}). Section \ref{paragrafo3} is devoted to a brief comment about mixed type equations, needed to explain a requirement we make in the De Giorgi class. In Section \ref{De Giorgi classes and Q-minima} we introduce a degenerate mixed type evolution equation, the Q-minima for that equation, assumptions about the weights involved in that equation and the De Giorgi class suited to those equations which, as already mentioned, turns out to be different from the one introduced in \cite{gianazza-vespri} or in \cite{wieser} also when $\mu \equiv 1$; we also show that Q-minima (and then a large class of solutions) are contained in the De Giorgi class we define. In the following three sections we prove local boundedness, the fundamental step, the so-called {\em expansion of positivity} (see Section \ref{secPositivity}), and a Harnack type inequality stated in Theorem \ref{Harnack1} and Theorem \ref{Harnack2}. Finally, we give some natural consequences of the inequality we obtain and, due to the particular nature of the equation, some examples in the hope of helping comprehension. \\ [0.4em] We also want to mention that a short version of the paper, without proofs and in a simpler situation where we consider $\mu$ bounded and $\lambda \equiv 1$, can be found in \cite{fabio11}. \\ [0.4em] {\sc Acknowledgments - } The author is pleased to thank R.
Serapioni and V. Recupero for some nice and interesting discussions on the subject. \\
\section{Preliminaries on weights} \label{paragrafo2}
In this section we recall and introduce some definitions and results about $A_p$ weights needed in what follows.
For most of the results we refer to \cite{gc-rdf}. \\ By $B_{\rho}(x_0)$ we will denote the open ball
$\{x \in {\bf R}^n \, | \, |x-x_0| < \rho \}$, and sometimes we will simply write $B_{\rho}$ or $B$ if there is no need to specify further. With the word {\it weight} we will mean a function $\eta$ such that $$ \eta \text{ weight if: } \quad \eta > 0 \text{ a.e. in } {\bf R}^n \quad \text{ and } \quad \eta \in L^1_{\rm loc}({\bf R}^n). $$ Given a weight $\eta$ and a function $u \in L^p(\Omega, \eta)$ with $\Omega$ an open set of ${\bf R}^n$ we will write $$ \eta(B) := \int_B \eta \, dx \, , \hskip20pt
{{\int\!\!\!\!\!\!-}_{\!\!{B}}} |u|^p \eta \, dx := \frac{1}{\eta(B)} \int_B |u|^p \eta \, dx \, . $$
\begin{definition} \label{ap} Let $p >1$, $K > 0$ be constants, $\omega$ a weight. We say that $\omega$ belongs to the class $A_p(K)$ if \begin{equation} \label{adueduealfa} \bigg( {{\int\!\!\!\!\!\!-}_{\!\!{B}}} \omega \, dx \bigg)^{1/p}
\bigg( {{\int\!\!\!\!\!\!-}_{\!\!{B}}} \omega^{-1/(p-1)} dx \bigg)^{(p-1)/p} \leqslant K \hskip10pt {\it for\ every\ ball\ }B \subset {\bf R}^n \end{equation} We say that $\omega$ belongs to the class $A_{\infty} (K, \varsigma)$ if \begin{equation} \label{Ainfinito}
\frac{\omega (S)}{\omega (B)} \leqslant K \left( \frac{|S|}{|B|} \right)^{\varsigma} \end{equation} for every ball $B$ and every measurable set $S \subset B$. \\ We set $A_p = \bigcup_{K \geqslant 1} A_p(K)$. It turns out $($see, e.g., \cite{gc-rdf}$)$ that $A_{\infty} = \bigcup_{p > 1} A_p$. \\ Given a positive weight $\eta$, the class $A_{p}(K;\eta)$ and all the previous classes may be defined in an analogous way, simply replacing the measure $dx$ with $\eta \, dx$. \\ More generally a pair $(\nu , \omega)$ of weights belongs to $A_{p,q}^{\alpha} (B_0, K)$, $\alpha \in [0,n)$, $B_0$ ball $($possibly ${\bf R}^n)$ if \begin{equation} \label{adueduealfa2}
|B|^{\alpha/n} \bigg( {{\int\!\!\!\!\!\!-}_{\!\!{B}}} \nu \, dx \bigg)^{1/q}
\bigg( {{\int\!\!\!\!\!\!-}_{\!\!{B}}} \omega^{-1/(p-1)} dx \bigg)^{(p-1)/p} \leqslant K \hskip10pt {\it for\ every\ ball\ }B \subset B_0 \, . \end{equation} We simply write $A_{p,q}^{\alpha} (K)$ if $B_0 = {\bf R}^n$. For $\alpha = 0$ we get the classical Muckenhoupt class of pairs $($for more details we refer to \cite{gc-rdf}$)$; for $\alpha = 0$, $q = p$, $\nu = \omega$, $B_0 = {\bf R}^n$ we get the class $A_p$. \end{definition}
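\noindent To illustrate Definition \ref{ap} with the model example of a power weight, we sketch a standard computation, carried out here only for balls centered at the origin (the full characterization of power weights is recalled in Remark \ref{cortona}):

```latex
For $\omega(x) = |x|^{\beta}$ and $B = B_{\rho}(0)$, writing $\sigma_{n-1}$
for the measure of $\partial B_1$, one computes
$$
\int_{B_{\rho}(0)} |x|^{\beta} \, dx
  = \sigma_{n-1} \int_0^{\rho} r^{\beta + n - 1} \, dr
  = \frac{\sigma_{n-1}}{n+\beta} \, \rho^{\, n+\beta}
  \qquad (\beta > -n)
$$
and, replacing $\beta$ with $-\beta/(p-1)$,
$$
\int_{B_{\rho}(0)} |x|^{-\beta/(p-1)} \, dx
  = \frac{\sigma_{n-1}}{n - \beta/(p-1)} \, \rho^{\, n - \beta/(p-1)}
  \qquad (\beta < (p-1)n) \, .
$$
The powers of $\rho$ cancel in \eqref{adueduealfa}, which on $B_{\rho}(0)$ equals
$$
\bigg( \frac{n}{n+\beta} \bigg)^{1/p}
\bigg( \frac{n}{n - \beta/(p-1)} \bigg)^{(p-1)/p} \, ,
$$
finite and independent of $\rho$ exactly when $-n < \beta < (p-1)n$.
```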
\noindent We recall some properties of $A_p$ weights (the same properties hold for $A_p(\eta)$ weights), for which we refer to \cite{gc-rdf}. $A_p$ weights verify the {\it doubling property}, which is the following: for a fixed $t > 1$ there exists a constant $c_d > 1$, which we denote by $c_d(\omega)$, such that \begin{equation} \label{doubling_property} \int_{tB}\omega \, dx \leqslant c_d (\omega) \int_B \omega \, dx \end{equation} for every ball $B$ of ${\bf R}^n$, where by $t B$ we mean the ball concentric to $B$ whose radius is $t$ times the radius of $B$. If $\omega \in A_p(K)$ one has that for every $t > 1$ the constant $c_d$ depends (only) on $t, n, p, K$. \\
Moreover $\omega \in A_p(K)$ satisfies the following {\it reverse H\"older's inequality}: there is $\delta = \delta (n,p,K) > 0$ and a constant $c_{\textit{rh}} = c_{\textit{rh}}(p,K) \geqslant 1$ such that \begin{equation} \label{maggiore_sommabilita`} \begin{array}{c} \bigg( \displaystyle\media{B} \omega^{1+\delta}dx \bigg)^{1/{(1+\delta)}} \leqslant
c_{\textit{rh}} \ \bigg( \displaystyle\media{B} \omega \, dx \bigg)\ , \\ \bigg(\displaystyle\media{B} \omega^{ {-{\frac{1}{p-1}}} (1+\delta)}dx
\bigg)^{1/{(1+\delta)}} \leqslant
c_{\textit{rh}} \ \bigg( \displaystyle\media{B} \omega ^{ {-{\frac{1}{p-1}}} }dx \bigg), \end{array} \end{equation} for every ball $B$. A consequence of the definition of $A_p$ weights and of \eqref{maggiore_sommabilita`} are the following two inequalities. If $\omega \in A_p(K)$ then, denoting by $\varsigma$ the quantity $\delta/(1+\delta)$, one has \begin{align} \label{ecomeservono!}
\left(\frac{|S|}{|B|}\right)^p \leqslant K \, \frac{\omega(S)}{\omega(B)} \, , \hskip20pt
\frac{\omega(S)}{\omega(B)} \leqslant c_{\textit{rh}} \left(\frac{|S|}{|B|}\right)^{\varsigma} \end{align} for every measurable $S \subset B$, for every $B$ ball of ${\bf R}^n$.
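\noindent For the reader's convenience we sketch how the first inequality in \eqref{ecomeservono!} follows from H\"older's inequality and \eqref{adueduealfa} (we do not track the precise power of the constant $K$ here):

```latex
$$
|S| = \int_S \omega^{1/p} \, \omega^{-1/p} \, dx
  \leqslant \bigg( \int_S \omega \, dx \bigg)^{1/p}
    \bigg( \int_B \omega^{-1/(p-1)} \, dx \bigg)^{(p-1)/p}
  \leqslant \omega(S)^{1/p} \, K \, |B| \, \omega(B)^{-1/p} \, ,
$$
where in the last step \eqref{adueduealfa} has been used in the equivalent form
$\big( \int_B \omega^{-1/(p-1)} dx \big)^{(p-1)/p} \leqslant K \, |B| \, \omega(B)^{-1/p}$;
raising to the power $p$ one gets $(|S|/|B|)^p \leqslant K^p \, \omega(S)/\omega(B)$.
```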
\begin{oss} \rm -\ \label{minoreq} Another interesting property of $A_p$ weights is the following. \\ If $\omega \in A_p(K)$ then there is $p' < p$, $p' = p'(n,p,K)$, and $K' = K' (n,p,K)$ such that $\omega \in A_{p'}(K')$. To prove this fact take $\omega \in A_p(K)$, $\delta, c_{\textit{rh}}$ considered in \eqref{maggiore_sommabilita`}, choose $\bar{p}$ in such a way that $$ \frac{1}{\bar{p}-1} = \frac{1}{p-1} (1 + \delta) $$ (precisely $\bar{p} = (p+\delta)(1+\delta)^{-1} < p$) and using \eqref{maggiore_sommabilita`} we get \begin{align*} \media{B} \omega \, dx \ \bigg( \media{B} \omega^{-\frac{1}{p'-1}} dx \bigg)^{p'-1} & \leqslant \media{B} \omega \, dx \ \bigg( \media{B} \omega^{-\frac{1}{p-1}(1+\delta)} dx \bigg)^{\frac{p-1}{1+\delta}} \leqslant \\ & \leqslant c_{\textit{rh}}^{p-1} \, \media{B} \omega \, dx \ \bigg( \media{B} \omega^{-\frac{1}{p-1}} dx \bigg)^{p-1} \leqslant
c_{\textit{rh}}^{p-1} K^p \end{align*} for every $p' \in [\bar{p}, p]$. \end{oss}
\begin{oss} \rm -\ \label{remarcuccia} Suppose that $\nu, \omega \in A_{\infty}$, i.e. there are $s_1, s_2, K_1, K_2 > 1$ such that $\omega \in A_{s_1}(K_1)$ and $\nu \in A_{s_2}(K_2)$. \\ Then the weight $\omega/\nu$ belongs to $A_{\infty}(\nu)$, i.e. there is $r > 1$ such that $\omega/\nu \in A_r(c; \nu)$, that is \begin{equation} \label{stimazza} \frac{\displaystyle \int_B \omega \, dx}{\displaystyle \int_B \nu \, dx} \left( \frac{\displaystyle \int_B \Big(\frac{\omega}{\nu}\Big)^{-1/(r-1)} \nu \, dx}{\displaystyle \int_B \nu \, dx} \right)^{r-1} \leqslant c \hskip20pt \textrm{for every }B \textrm{ ball in }{\bf R}^n.
\end{equation} Indeed multiplying and dividing by $|B|^r$ we get that the above inequality is equivalent to $$
\frac{1}{|B|^r} \, \displaystyle \int_B \omega dx \left(\int_B \omega^{-1/(r-1)} \nu^{r/(r-1)} dx \right)^{r-1} \leqslant c \, \frac{1}{|B|^r} \left( \int_B \nu \, dx \right)^r \, . $$ Now by H\"older's inequality, if $a^{-1} + b^{-1} = 1$, $a, b > 1$, we get that $$ \begin{array}{l}
{\displaystyle \frac{1}{|B|^r}} \displaystyle \int_B \omega \, dx \left( \displaystyle \int_B \omega^{-1/(r-1)} \nu^{r/(r-1)} dx \right)^{r-1} \leqslant \\ [0.5em] \hskip40pt \leqslant \media{B} \omega \, dx \left(\media{B} \omega^{-a/(r-1)} dx \right)^{(r-1)/a}
\left(\media{B} \nu^{rb/(r-1)} dx \right)^{(r-1)/b} \, . \end{array} $$ Since $a$ and $r$ are arbitrary we can choose $1 + (r-1)/a = s_1$, so that
$\omega \in A_{1+(r-1)/a}(K_1)$ and consequently $$ \media{B} \omega \, dx \left(\media{B} \omega^{-a/(r-1)} dx \right)^{(r-1)/a} \leqslant K_1 \, . $$ Moreover, since $\nu \in A_{\infty}$, by the higher summability property of $A_{\infty}$ weights there is $\delta = \delta (s_2, n, K_2) > 0$ such that \eqref{maggiore_sommabilita`} holds. Notice that it is possible to choose $a, b, r > 1$ in such a way that $$ \frac{1}{a} + \frac{1}{b} = 1\, , \hskip20pt \frac{r-1}{a} = s_1 - 1 \, , \hskip20pt \frac{r b}{r-1} = 1 + \delta \, . $$ With these choices
there is $c_1 = c_1(s_2, n, K_2)$ such that $$ \left(\media{B} \nu^{rb/(r-1)} dx \right)^{(r-1)/b} \leqslant c_1 \left( \media{B} \nu \, dx \right)^r \, . $$ Then \eqref{stimazza} holds with $c = K_1 c_1$, $c = c(s_2, n, K_1, K_2)$, $r = r(s_1, s_2, n, K_1, K_2)$. \end{oss}
We recall now some classical results about weighted inequalities. The following one in particular can be found in \cite{chanillo-wheeden} and is the weighted version of the standard Sobolev-Poincar\'e inequality. Given two weights $\nu, \omega$ in ${\bf R}^n$ and $p,q$ with $1 < p < q$ the following condition: \\ \begin{center} \begin{minipage}{11cm} {there is a constant $K > 0$ such that} \end{minipage} \begin{equation} \label{sob-poin-cond}
\left(\frac{|B_r(\bar{x})|}{|B_\rho(\bar{x})|}\right)^{\alpha/n} \left( \frac{\nu(B_r(\bar{x}))}{\nu(B_\rho(\bar{x}))} \right)^{1/q} \left( \frac{\omega(B_r(\bar{x}))}{\omega(B_\rho(\bar{x}))} \right)^{-1/p} \leqslant K \end{equation} \begin{minipage}{11cm} for every pair of concentric balls $B_r$ and $B_\rho$ with $0 < r < \rho$, \end{minipage} \end{center} \ \\ with $\alpha = 1$ is essentially necessary and sufficient to have the Sobolev-Poincar\'e inequality. Below we confine ourselves to stating only the result we need. For more details we refer to \cite{chanillo-wheeden}.
\begin{definition} \label{bipiqualfa} For a pair of weights $\nu,\omega$ and $\alpha \in [0,n)$ we will write $($this is not a standard notation$)$ $$ (\nu,\omega) \in B_{p,q}^{\alpha} (K) $$ if it satisfies \eqref{sob-poin-cond} for every pair of balls $B_r (\bar{x}), B_\rho(\bar{x})$ with $r < \rho$ and $\bar{x} \in {\bf R}^n$. \end{definition}
\begin{teorema}
\label{chanillo-wheeden}
Consider $p,q$ such that $1 < p < q$, $\rho > 0$, $x_0 \in {\bf R}^n$, two weights $\nu ,\omega$ in ${\bf R}^n$ such that $\omega \in A_p(K_1)$, $(\nu,\omega) \in B_{p,q}^1(K_2)$ and $\nu$ satisfies \eqref{doubling_property}. Then there is a constant $\gamma_1$ depending $($only$)$ on $n, p, q, K_1, K_2, c_{d}(\nu)$ $($the doubling constant of the weight $\nu)$ such that \begin{equation} \label{disuguaglianzadisobolev}
\Big[ \frac{1}{\nu(B_\rho)} {\int_{B_\rho}} |u(x)|^q \nu (x) dx \Big]^{1/q} \leqslant \gamma_1 \, \rho \,
\Big[ \frac{1}{\omega(B_\rho)} {\int_{B_\rho}} |Du(x)|^p \omega (x) dx \Big]^{1/p} \end{equation} for every $u$ Lipschitz continuous function defined in $B_{\rho} = B_{\rho}(x_0)$, with either support contained in $B_{\rho}(x_0)$ or with null mean value. \end{teorema}
\begin{oss} \rm -\ \label{rmkipotesi} Notice that the previous theorem holds also for every $q' \in [1, q]$ in the place of $q$ with the same constant $\gamma_1$. Indeed condition \eqref{sob-poin-cond} holds with the same constant $K$ for every $q' \in [1, q]$. \\ Moreover, using \eqref{ecomeservono!}, one gets that in particular Theorem \ref{chanillo-wheeden} holds when $\nu = \omega \in A_p$ with $q = np/(n-1) > p$ (and in fact also with some greater value thanks to Remark \ref{minoreq}). \end{oss}
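\noindent For the reader's convenience, a sketch of the verification of condition \eqref{sob-poin-cond} in the particular case $\nu = \omega \in A_p(K)$ and $q = np/(n-1)$ mentioned in the previous remark:

```latex
Since $1/p - 1/q = 1/(np)$ when $q = np/(n-1)$, for $0 < r < \rho$ one has
$$
\left( \frac{|B_r|}{|B_\rho|} \right)^{1/n}
\left( \frac{\omega(B_r)}{\omega(B_\rho)} \right)^{1/q - 1/p}
\leqslant
\frac{r}{\rho} \,
\left[ \frac{1}{K} \left( \frac{r}{\rho} \right)^{np} \right]^{-1/(np)}
= K^{1/(np)} \, ,
$$
using $(|B_r|/|B_\rho|)^{1/n} = r/\rho$ and the lower bound
$\omega(B_r)/\omega(B_\rho) \geqslant K^{-1} (r/\rho)^{np}$
coming from the first inequality in \eqref{ecomeservono!}.
```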
\begin{oss} \rm -\ \label{notaimportante} Here we want to stress some important facts we will need later. \\ $\mathpzc{A}$ {\em - If $(\nu,\omega) \in A_{p,q}^1(K, B_0)$ with $1 < p < q$, $\nu \in A_{\infty}$, then there are $\alpha \in (0,1)$, $\tilde{q} \in (p,q)$, $\tilde{K} \geqslant K$ such that $(\nu,\omega) \in A_{p,\tilde{q}}^{\alpha}(\tilde{K}, B_0)$}. \\ By \eqref{ecomeservono!}, since $\nu \in A_{\infty}$, we get the existence of $\varsigma > 0$ such that for every $\delta > 0$ $$
\left(\frac{\nu(B_r)}{\nu(B_R)}\right)^{\delta} \leqslant (c_{\textit{rh}}(\nu))^{\delta} \left(\frac{|B_r|}{|B_R|}\right)^{\varsigma \delta}. $$ Now we choose $\delta$ and consequently define $\tilde{q}$ in such a way that $$ \frac{1}{q} + \delta < \frac{1}{p} \hskip20pt \textrm{and} \hskip20pt \frac{1}{\tilde{q}} := \frac{1}{q} + \delta \, . $$ Now we can fix $\alpha \in (0,1)$ and we do that in such a way that $\varsigma \delta = (1-\alpha)/n$. Then we have for $r < R$ \begin{align*} K \left( \frac{\omega(B_r(\bar{x}))}{\omega(B_R(\bar{x}))} \right)^{1/p} & \geqslant
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{1/n}
\left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/q} = \\
& = \left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{\alpha/n}
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{(1-\alpha)/n}
\left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/q} \geqslant \\
& \geqslant \frac{1}{(c_{\textit{rh}}(\nu))^{\delta}}\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{\alpha/n}
\left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/\tilde{q}} \, . \end{align*}
$\mathpzc{B}$ {\em - If $(\nu,\omega) \in A_{p,q}^1(K_2, B_0)$ with $1 < p < q$, $\nu \in A_{\infty}$, $\omega \in A_p(K_1)$, then there are $p' \in (1,p)$, $q' \in (p,q)$, $K_2' \geqslant K_2$ such that $(\nu,\omega) \in A_{p',q'}^{1}(K_2', B_0)$}. \\ Consider the values $\alpha$, $q' := \tilde{q}$, $K' := \tilde{K}$ ($K' \geqslant K_2$) given by point $\mathpzc{A}$: then we know that $(\nu,\omega) \in A_{p,q'}^{\alpha}(K', B_0)$. If we choose $p'$ in such a way that $$ \frac{p-p'}{p'} = \frac{1-\alpha}{n} $$ we get, using the assumptions, the fact $\omega \in A_p(K_1)$ and \eqref{ecomeservono!}, for $r < R$ \begin{align*} K' \left( \frac{\omega(B_r(\bar{x}))}{\omega(B_R(\bar{x}))} \right)^{1/p'} & \geqslant
\left( \frac{\omega(B_r(\bar{x}))}{\omega(B_R(\bar{x}))} \right)^{1/p' - 1/p}
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{\alpha/n}
\left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/q'} \geqslant \\ & \geqslant \left(\frac{1}{K_1}\right)^{\frac{p-p'}{p'p}}
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{\frac{p-p'}{p'}}
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{\alpha/n}
\left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/q'} = \\ & = \left(\frac{1}{K_1}\right)^{\frac{p-p'}{p'p}}
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{1/n}
\left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/q'} \, . \end{align*} Taking $K_2' = K' K_1^{\frac{p-p'}{p'p}}$ (which depends on $K_1, K_2, c_{\textit{rh}}(\nu), p, q, n$) we conclude. \\ Actually one can require simply $\omega \in A_{\infty}$ and get not only $(\nu,\omega) \in A_{p',q'}^{1}(K_2', B_0)$, but in fact $(\nu,\omega) \in A_{p',q'}^{\alpha'}(K_2', B_0)$ with $p' \in (1,p)$, $q' \in (p,q)$, $\alpha' \in (\alpha,1)$. \\ [0.3em] $\mathpzc{C}$ {\em - If $(\nu,\omega) \in A_{2,q}^1(K, B_0)$ with $q > 2$ then, by point $\mathpzc{A}$ and Remark $\ref{rmkipotesi}$, there are $\alpha \in (0,1)$ and $\tilde{K} = \tilde{K} (K, c_{\textit{rh}}(\nu))$ such that the function $f(\bar{x},r) = r^{2 \alpha} \frac{\nu (B_r(\bar{x}))}{\omega (B_r(\bar{x}))}$ satisfies $$ f(\bar{x},r) \leqslant {\tilde{K}}^2 f(\bar{x},R) $$ for every $\bar{x} \in B_0$ and $r,R$ satisfying $0 < r < R$.} \\ Indeed by the assumptions we derive \begin{gather*}
\left(\frac{|B_r(\bar{x})|}{|B_R(\bar{x})|}\right)^{\alpha/n} \left( \frac{\nu(B_r(\bar{x}))}{\nu(B_R(\bar{x}))} \right)^{1/2} \left( \frac{\omega(B_r(\bar{x}))}{\omega(B_R(\bar{x}))} \right)^{-1/2} \leqslant \tilde{K} \, . \end{gather*} Taking the power $2$ we immediately get the thesis. \\ [0.3em] $\mathpzc{D}$ {\em - Consider $\nu \equiv 1$. Then there are $q > p$ and $\hat{K}$ depending on $n, p, K$ such that} $$ \omega \in \left\{ \begin{array}{ll} A_{1+ p/n} (K) & \text{ for } n \geqslant {\displaystyle \frac{p}{p-1}} \, , \\ [0.5em] A_p (K) & \text{ for } n \leqslant {\displaystyle \frac{p}{p-1}} \, . \end{array} \right. \quad \Longrightarrow \quad (1, \omega) \in B_{p, q}^{1} ( \hat{K}) \, . $$ First of all notice that for every $n$ we have indeed $\omega \in A_{1+ p/n} (K)$. In particular, by Remark \ref{minoreq}, there are $K'$ and $\varepsilon$ such that $\omega \in A_{1+ p/n - \varepsilon} (K')$. Using the first inequality in \eqref{ecomeservono!} with $S = B_r(x)$ and $B = B_{\rho}(x)$ ($x \in {\bf R}^n$ and $\rho > r > 0$) we get $$
\left(\frac{| B_r |}{| B_{\rho} |}\right)^{1+ \frac{p}{n} - \varepsilon} \leqslant K' \, \frac{\omega (B_r)}{\omega(B_{\rho})} . $$ What we want to prove, since $\nu \equiv 1$, is $$
\left(\frac{| B_r |}{| B_{\rho} |}\right)^{\frac{1}{n} + \frac{1}{q}} \leqslant \hat{K} \, \left( \frac{\omega (B_r)}{\omega(B_{\rho})} \right)^{1/p}. $$ Taking the power $p$ and choosing some $q \in \left( p, \frac{p}{1 - \varepsilon}\right)$, we get the thesis with $\hat{K} = (K')^{1/p}$. \end{oss}
\begin{oss} \rm -\ \label{cortona} In this remark we want to stress that the request $\omega \in A_{1+p/n}$ is optimal among the Muckenhoupt classes to get that $(1, \omega) \in B_{p, q}^{1}$ for some $q > p$. \\ Indeed consider
$\omega (x) = |x|^{\beta}$, which is $A_r$ if and only if $- n < \beta < (r - 1)n$. If we consider $r > 1 + p/n$ then it is possible to choose $\beta > p$, and in this case, to get $(1, \omega) \in B_{p, q}^{1}$ for some $q > p$, we would need $p < \beta < p + (p/q - 1)n$, but this is clearly impossible. \end{oss}
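\noindent A sketch of the computation behind the previous remark, for balls centered at the origin, where $\omega(B_r(0))$ is a constant multiple of $r^{n+\beta}$:

```latex
With $\nu \equiv 1$ and $\omega(x) = |x|^{\beta}$, condition \eqref{sob-poin-cond}
(with $\alpha = 1$) on balls centered at the origin becomes
$$
\left( \frac{r}{\rho} \right)^{1 + n/q}
\leqslant \hat{K} \left( \frac{r}{\rho} \right)^{(n+\beta)/p}
\qquad \text{for every } 0 < r < \rho \, ,
$$
which, letting $r/\rho \to 0$, forces
$$
1 + \frac{n}{q} \geqslant \frac{n+\beta}{p} \, ,
\qquad \text{i.e.} \qquad
\beta \leqslant p + \Big( \frac{p}{q} - 1 \Big) n < p
\qquad \text{since } q > p \, .
$$
```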
\noindent We now state a slight generalization of a result about Muckenhoupt type weights (see \cite{gut-wheeden3} and \cite{gut-wheeden2}).
\begin{teorema} \label{gut-whee} Consider $B_{\rho} = B_{\rho}(x_0)$ a ball of ${\bf R}^n$ whose radius is $\rho$, $\omega \in A_2(K_1)$, $(\nu,\omega) \in B_{2,q}^1(K_2)$ for some $q > 2$, $\nu \in A_{\infty}$. Then there is $\upsigma_1 \in (1,q)$ $($see also the remark below$)$ such that for every $A \subset B_{\rho}(x_0)$, for every Lipschitz continuous function $u$ defined in $B_{\rho}(x_0)$, with either support contained in $B_{\rho}(x_0)$ or with null mean value and for every $\kappa \in (1, \upsigma_1]$ $$
\frac{1}{\upsilon (B_{\rho})} \int_A |u|^{2\kappa} \upsilon \, dx \leqslant \gamma_1^{2} \, \rho^2 \,
\Big( \frac{1}{\nu (B_{\rho})} \int_A |u|^{2} \nu \, dx \Big)^{\kappa -1}
\Big( \frac{1}{\omega (B_{\rho})} \int_{B_{\rho}} |D u|^2 \omega \, dx \Big) $$ where the inequality holds both with $\upsilon = \nu$ and $\upsilon = \omega$ $($and in fact with every weight for which Theorem $\ref{chanillo-wheeden}$ holds$)$. \end{teorema} \begin{oss} \rm -\ \label{dipendenza} The assumption $\nu \in A_{\infty}$ means that there are $s > 1$ and $K_3 \geqslant 1$ such that $\nu \in A_s(K_3)$. Following the proof of Theorem \ref{gut-whee} (and thanks to Remark \ref{remarcuccia}) one can see that the constant $\upsigma_1$ depends $($only$)$ on $n, q, s, K_1, K_3$. \end{oss}
\noindent {\it Proof}\ \ -\ \ Consider $\kappa > 1$ (to be chosen) and consider $h_0, r > 1$ in such a way that $$ (\kappa -1) + \frac{1}{h_0} + \frac{1}{r} = 1 \, . $$
Writing $|u|^{2\kappa} \upsilon$ as $|u|^{2(\kappa-1)} \nu^{\kappa-1} u^2 \upsilon ^{1/h_0} \upsilon^{1-1/h_0} \nu^{1-\kappa}$ we get $$
\int_{A} |u|^{2\kappa} \upsilon \, dx \leqslant
\left( \int_{A} u^2 \nu \, dx \right)^{\kappa-1} \left( \int_{B_{\rho}}|u|^{2h_0} \upsilon \, dx \right)^{\frac{1}{h_0}} \left(\int_{B_{\rho}} \upsilon^{(1-1/h_0)r} \nu^{(1-\kappa)r} dx \right)^{\frac{1}{r}} \, . $$ Now we choose $h_0 = q/2$ in such a way that Theorem \ref{chanillo-wheeden} holds both with $\upsilon = \nu$ and $\upsilon = \omega$ on the left hand side of the inequality. For such a $h_0$ we get (we have not chosen $\kappa$ and $r$ yet) $$
\left( {{\int\!\!\!\!\!\!-}_{\!\!B_{\rho}}} |u|^{2h_0} \, \upsilon \, dx \right)^{\frac{1}{2h_0}} \leqslant
\gamma_1 \, \rho \,
\left( {{\int\!\!\!\!\!\!-}_{\!\!B_{\rho}}} |D u|^2 \omega \, dx \right)^{1/2} \, . $$ Now consider $\upsilon = \omega$. The previous inequality becomes $$
\left( \int_{B_{\rho}} |u|^{2h_0} \, \upsilon \, dx \right)^{\frac{1}{h_0}} \leqslant
\gamma_1^2 \, \rho^2 \, \frac{1}{(\omega(B_{\rho}))^{1-\frac{1}{h_0}}}
\left( \int_{B_{\rho}} |D u|^2 \omega \, dx \right) \, . $$ Since $(1-h_0^{-1})r = r (\kappa-1) + 1$ we may write $$ \int_{B_{\rho}} \omega^{(1-1/h_0)r} \nu^{(1-\kappa)r} \, dx = \int_{B_{\rho}} \left(\frac{\omega}{\nu}\right)^{(\kappa-1)r + 1} \nu \, dx \, . $$ Since $\omega/{\nu} \in A_{\infty}({\nu})$ (see Remark \ref{remarcuccia}) the function $\omega/{\nu}$ satisfies a reverse H\"older inequality. Then there are two positive constants $\delta, c_{\textit{rh}}$ such that, for every ball $B$, $$ \frac{1}{\nu (B)} \int_B \left(\frac{\omega}{\nu}\right)^{1 + \delta} \nu \, dx \leqslant c_{\textit{rh}} \left[ \frac{1}{{\nu} (B)} \int_B \frac{\omega}{\nu} \, \nu \, dx \right]^{1 + \delta} = c_{\textit{rh}} \left[ \frac{\omega (B)}{{\nu} (B)} \right]^{1 + \delta} $$ (the constants $c_{\textit{rh}}, \delta$ depend on $n, s, K_1, K_3$ if $\nu \in A_s(K_3)$). Then we will choose $\kappa , r$ in such a way that $(\kappa -1)r = \delta$ and consequently, by what we remarked above, we get \begin{align*} \int_{B_{\rho}} \omega^{(1-1/h_0)r} \nu^{(1-\kappa)r} \, dx \leqslant c_{\textit{rh}} \, {\nu}(B_{\rho}) \left[ \frac{\omega (B_{\rho})}{{\nu} (B_{\rho})} \right]^{1 + (\kappa-1)r} \leqslant c_{\textit{rh}} \, \frac{\big(\omega (B_{\rho})\big)^{1 + (\kappa-1)r}}{\big({\nu} (B_{\rho})\big)^{(\kappa-1)r}} \, . \end{align*} Then we get the thesis when $\upsilon = \omega$. If $\upsilon = \nu$ the proof is easier since the quantity $\upsilon^{(1-1/h_0)r} \nu^{(1-\kappa)r}$ reduces to $\nu$.
$\square$ \\
\noindent We briefly recall the definition (a possible definition, in our case equivalent to the other possible ones) of weighted Sobolev spaces for $\nu \in A_{\infty}$ and $\omega \in A_p$. Given an open and bounded set $\Omega \subset {\bf R}^n$ by $L^p(\Omega, \nu)$
we denote the set of measurable functions $u : \Omega \to {\bf R}$ such that $\int_{\Omega} |u|^p \nu \, dx$ is finite. By $W^{1,p}(\Omega, \nu, \omega)$ we denote the space
$\{ u \in L^p(\Omega, \nu) \cap W^{1,1}_{\text{loc}}(\Omega) \, | \, D_i u \in L^p(\Omega, \omega) \}$ endowed with the obvious norm; by $W^{1,p}_0(\Omega, \nu, \omega)$ we denote the closure of $C^1_c(\Omega)$ in $W^{1,p}(\Omega, \nu, \omega)$. Finally, we will write $H^1(\Omega, \nu, \omega)$ for $W^{1,2}(\Omega, \nu, \omega)$. \\
\noindent Coming back to the result stated in Theorem \ref{gut-whee}, integrating in time one immediately gets what follows.
\begin{cor} \label{cor-gut-whee} Under the same assumptions of Theorem $\ref{gut-whee}$, consider moreover $s_1, s_2 \in (0,T)$. Consider a family of open sets $A(t)$, $t \in (s_1,s_2)$, in such a way that $E = \cup_{t \in (s_1,s_2)} A(t)$ is an open subset of $B_{\rho} \times (s_1, s_2)$. For every $u \in C^0([s_1,s_2]; L^2(B_{\rho},\nu)) \cap L^2(s_1,s_2; W^{1,2}_0(B_{\rho}, \nu, \omega))$ it holds \begin{align*} \frac{1}{\upsilon (B_{\rho})}
\int \!\!\! & \int_{E} |u|^{2\kappa} (x,t) \upsilon (x) \, dx dt \leqslant
\gamma_1^{2} \, \rho^2 \, \left( \frac{1}{\nu (B_{\rho})} \right)^{\kappa-1} \cdot \\
& \cdot \Big( \sup_{s_1 < t < s_2} \int_{A(t)} |u|^{2}(x,t) \nu (x) \, dx \Big)^{\kappa-1}
\frac{1}{\omega (B_{\rho})} \int_{s_1}^{s_2}\!\! \int_{B_{\rho}} |D u|^2 (x,t) \, \omega (x) \, dx dt \end{align*} where the inequality holds both with $\upsilon = \nu$ and $\upsilon = \omega$. \end{cor}
\begin{lemma} \label{lemma2.2} Consider $B_{\rho} = B_{\rho}(x_0)$ a ball, $p \in (1,+\infty)$, $q \in [1,+\infty)$, $\nu$, $\omega$ and $v \in W^{1,p}(B_{\rho}, \nu, \omega)$ for which the assumptions of Theorem $\ref{chanillo-wheeden}$ hold, $k, l \in {\bf R}$ with $k < l$. Consider also a subset $Z$ of $B_{\rho}$ and denote by $\bar\nu$ the function taking value $0$ in $Z$ and $\bar\nu \equiv \nu$ in $B_{\rho} \setminus Z$. Then $$ (l-k)^q \, \bar\nu( \{ v < k \} ) \, \bar\nu( \{ v > l \}) \leqslant
\, 2^q \, \gamma_1^q \, \rho^q \, \bar\nu(B_{\rho}) \, \nu(B_{\rho}) \, \omega(B_{\rho})^{-\frac{q}{p}} \,
\left( \int_{B_{\rho} \cap \{ k < v < l \}} |Dv|^p \, \omega \, dx \right)^{q/p} \, . $$ \end{lemma} \begin{oss} \rm -\ The previous result holds in every open set $\Omega$, provided that Theorem \ref{chanillo-wheeden} holds with $\Omega$ in the place of $B_{\rho}$. \end{oss} \noindent
{\it Proof}\ \ -\ \ Denote by $A$ the set $\{x \in B_{\rho} \setminus Z \, | \, v(x) < k \}$ and suppose $\bar{\nu}(A) > 0$, otherwise there is nothing to prove. Following the proof of Theorem 3.16 in \cite{giusti} we have that for every $u$ which takes the value zero in $A$ \begin{equation} \label{misuradiA}
\int_{B_{\rho}} |u - u_{B_{\rho}}|^q \bar\nu \, dx =
\int_{B_{\rho}\setminus A} |u - u_{B_{\rho}}|^q \bar\nu \, dx + \int_{A} |u_{B_{\rho}}|^q \bar\nu \, dx
\geqslant |u_{B_{\rho}}|^q \int_{A} \bar\nu \, dx \, , \end{equation}
where $u_{B_{\rho}} = |B_{\rho}|^{-1} \int_{B_{\rho}} u(x) \, dx$. Consider the function $$ u:= \left\{
\begin{array}{ll}
\min \{v , l \} - k & \textrm{if } v > k \\
0 & \textrm{if } v \leqslant k \, .
\end{array}
\right. $$ and estimate, first from below $$
\int_{B_{\rho}} |u|^q \bar\nu \, dx =
\int_{\{ v > l \}} (l-k)^q \bar\nu \, dx + \int_{\{ k < v < l \}} (v-k)^q \bar\nu \, dx
\geqslant (l-k)^q \int_{\{ v > l \}} \bar\nu\, dx \, , $$ and then, using \eqref{misuradiA}, from above \begin{align*}
\left( \int_{B_{\rho}} |u|^q \bar\nu \, dx \right)^{\frac{1}{q}} & \leqslant
\left( \int_{B_{\rho}} \big[ |u - u_{B_{\rho}}| + |u_{B_{\rho}}| \big]^q \bar\nu \, dx \right)^{\frac{1}{q}}\\
& \leqslant \left( \int_{B_{\rho}} |u - u_{B_{\rho}}|^q \bar\nu \, dx \right)^{\frac{1}{q}} +
\left( |u_{B_{\rho}}|^q \int_{B_{\rho}} \bar\nu \, dx \right)^{\frac{1}{q}} \\
& \leqslant 2 \, \left( \frac{\bar\nu(B_{\rho})}{\bar\nu(A)} \int_{B_{\rho}} |u - u_{B_{\rho}}|^q \bar\nu \, dx \right)^{\frac{1}{q}} \, . \end{align*} Now, if $q > p$ we can apply Theorem \ref{chanillo-wheeden}; if $q \leqslant p$, notice that
$( \nu (B_{\rho})^{-1} \int_{B_{\rho}} |u - u_{B_{\rho}}|^q \bar\nu \, dx )^{1/q} \leqslant
( \nu (B_{\rho})^{-1} \int_{B_{\rho}} |u - u_{B_{\rho}}|^{q'} \bar\nu \, dx )^{1/q'}$ for $q' > q$. Then, by Theorem \ref{chanillo-wheeden} used if necessary with $q' > p$, we finally get $$ \displaylines{
(l-k)^q \int_{\{ v > l \}} \bar\nu \, dx \leqslant
2^q \, \gamma_1^q \, \rho^q \, \frac{\bar\nu(B_{\rho})}{\bar\nu(A)} \, \frac{\nu(B_{\rho})}{\omega(B_{\rho})^{q/p}}
\, \left( \int_{\{k < v < l\}} |Dv|^p \omega \, dx \right)^{q/p} \, .
\llap{$\square$}} $$ \\
\begin{lemma} \label{lemmaMisVar} Consider $x_0 \in \Omega$ and $\rho > 0$ such that $B_{2\rho}(x_0) \subset \Omega$, $\sigma \in (0,\rho)$, $\omega \in A_2(K_1)$, $(\nu,\omega) \in B_{2,q}^1(K_2)$, $\nu \in A_{\infty}$, $q > 2$, $\alpha, \beta > 0$.
Consider $\mathcal{B}$ an open and non-empty subset of $B_{\rho}(x_0)$ such that $\mathcal{B}^{\sigma} = \{ x \in \Omega \, | \, \text{\rm dist}(x, \mathcal{B}) < \sigma \}$ is a subset of $B_{\rho}(x_0)$. Then, for every $\varepsilon , \delta\in (0,1)$ there exists $\eta \in (0,1)$ such that for every $u\in W^{1,2}_{\rm loc}(\Omega, \nu, \omega)$ satisfying $$
\int_{\mathcal{B}^{\sigma}} |Du|^2 \, \omega \, dx \leqslant \beta \, \frac{\omega(B_{\rho}(x_0))}{\rho^2}, $$ and $$ \nu(\{ u > 1\} \cap \mathcal{B} )\geqslant \alpha \, \nu (B_\rho(x_0)), $$ there exists $x^*\in \mathcal{B}$ with $B_{\eta\rho}(x^*) \subset \mathcal{B}$ such that $$ \nu(\{ u > \varepsilon \}\cap B_{\eta\rho}(x^*)) > (1-\delta) \, \nu (B_{\eta\rho}(x^*)). $$ \end{lemma} \noindent {\it Proof}\ \ -\ \
For any positive $\eta$ satisfying $\eta \rho < \sigma/2$, we can consider a finite disjoint family of balls $(B_{\eta\rho}(x_i))_{i\in I}$ with the property that $$ \bigcup _{i\in I} B_{\eta\rho}(x_i) \subset \mathcal{B} \subset \bigcup_{i\in I} B_{2\eta\rho}(x_i) \subset \mathcal{B}^{\sigma} \, . $$ For simplicity, we denote by $B_i$ the ball $B_{\eta\rho}(x_i)$ and by $B_{ii}$ the ball $B_{2\eta\rho}(x_i)$. We denote by $I^+$ and $I^-$ the sets $$ I^+=\{i\in I: \nu(\{u>1\} \cap B_{ii}) > \frac{\alpha}{2 \, c_d(\nu)} \, \nu(B_{ii}) \}, $$ $$ I^-=\{i\in I: \nu(\{u>1\} \cap B_{ii}) \leqslant \frac{\alpha}{2 \, c_d(\nu)} \, \nu (B_{ii}) \} $$ where $c_d(\nu)$ is the doubling constant of the weight $\nu$, which, from now on, we will simply denote by $c_d$. By assumption we then get
\begin{align*} \alpha \, \nu(B_{\rho}(x_0)) & \leqslant \nu (\{u>1\}\cap \mathcal{B}) \leqslant \sum_{i \in I^+} \nu (\{u >1\}\cap B_{ii})
+\frac{\alpha}{2\, c_d} \sum_{i\in I^-}\nu (B_{ii}) \leqslant \\ &\leqslant \sum_{i \in I^+}\nu (\{u>1\}\cap B_{ii})
+\frac{\alpha}{2} \sum_{i\in I^-}\nu (B_{i}) \leqslant \\ &\leqslant \sum_{i \in I^+} \nu(\{u>1\}\cap B_{ii}) + \frac{\alpha}{2} \, \nu (\mathcal{B}) \leqslant \\ &\leqslant \sum_{i \in I^+} \nu(\{u>1\}\cap B_{ii}) + \frac{\alpha}{2} \, \nu (B_{\rho}(x_0)) \, . \end{align*} By this we get that \begin{equation} \label{esti1} \frac{\alpha}{2} \, \nu (B_{\rho}(x_0)) \leqslant \sum_{i\in I^+}\nu (\{u>1\}\cap B_{ii}) \, . \end{equation} Now fix $\varepsilon , \delta\in (0,1)$ and assume by contradiction that \begin{equation} \label{estDelta} \nu(\{u > \varepsilon \} \cap B_i) \leqslant (1-\delta) \, \nu (B_i),\qquad
\textrm{ for every } i \in I := I^+ \cup I^- \, . \end{equation} This clearly would imply in particular that $$ \frac{\nu(\{u\leqslant \varepsilon\}\cap B_{ii})}{\nu (B_{ii})}\geqslant \frac{\delta}{c_d} =: \delta'
\qquad \textrm{ for every } i \in I^+ \, . $$ By this last inequality, Lemma \ref{lemma2.2} with $p = q= 2$, $k = \varepsilon$ and $l = 1$, $\bar{\nu} = \nu$ we would obtain that \begin{align} \label{estOmega} \nonumber \delta' \, \nu(\{u>1\}\cap B_{ii}) \leqslant & \,
\frac{\nu(\{u\leqslant \varepsilon\}\cap B_{ii})}{\nu(B_{ii})} \, \nu(\{u>1\}\cap B_{ii}) \leqslant \\ \leqslant & \, \frac{4 \gamma_1^2}{(1-\varepsilon)^2} \, (\eta \rho)^2 \, \frac{\nu(B_{ii})}{\omega(B_{ii})} \,
\int_{\{\varepsilon < u < 1\}\cap B_{ii}} |Du|^2 \, \omega \, dx \, . \end{align} By Remark \ref{notaimportante}, point $\mathpzc{A}$, we get the existence of $a \in (0,1)$, $K_2'$, such that (see also Remark \ref{rmkipotesi}) $$
\left(\frac{|B_{2\eta\rho}(x_i)|}{|B_{2\rho}(x_i)|}\right)^{a/n} \left( \frac{\nu (B_{2\eta\rho}(x_i))}{\nu (B_{2\rho}(x_i))} \right)^{1/2} \left( \frac{\omega (B_{2\eta\rho}(x_i))}{\omega (B_{2\rho}(x_i))} \right)^{-1/2} \leqslant K_2' \, , $$ i.e. $$ \eta^{2a} \, \frac{\nu (B_{2\eta\rho}(x_i))}{\omega (B_{2 \eta \rho}(x_i))} \leqslant
(K_2')^2 \, \frac{\nu (B_{2\rho}(x_i))}{\omega (B_{2\rho}(x_i))} \, . $$ Notice that \begin{gather*} \nu (B_{2\rho}(x_i)) \leqslant c_d (\nu) \, \nu (B_{\rho}(x_i)) \leqslant c_d (\nu) \, \nu (B_{2\rho}(x_0)) \leqslant (c_d (\nu))^2 \, \nu (B_{\rho}(x_0)) \\ \omega (B_{2\rho}(x_0)) \leqslant \omega (B_{4\rho}(x_i)) \leqslant (c_d (\omega))^2 \omega (B_{\rho}(x_i)) \leqslant (c_d (\omega))^2 \omega (B_{2\rho}(x_i)) \end{gather*} by which we get $$ \frac{\nu (B_{2\rho}(x_i))}{\omega (B_{2\rho}(x_i))} \leqslant (c_d (\nu))^2 (c_d (\omega))^2 \, \frac{\nu (B_{\rho}(x_0))}{\omega (B_{\rho}(x_0))} \, . $$
Summing up on $I^+$, from \eqref{esti1} and \eqref{estOmega} we get \begin{align*} \frac{\alpha}{2} \, \delta' \, \nu (B_{\rho}(x_0)) \leqslant & \, \sum_{i\in I^+} \frac{4 \gamma_1^2}{(1-\varepsilon)^2} \, (\eta \rho)^2 \, \frac{\nu(B_{ii})}{\omega(B_{ii})} \,
\int_{\{\varepsilon < u < 1\}\cap B_{ii}} |Du|^2 \, \omega \, dx \leqslant \\ \leqslant & \, \frac{4 \gamma_1^2}{(1-\varepsilon)^2} \, \eta^{2(1-a)} \rho^2 (K_2')^2 \frac{(c_d (\nu))^2}{(c_d (\omega))^2} \, \frac{\nu (B_{\rho}(x_0))}{\omega (B_{\rho}(x_0))} \,
\sum_{i\in I^+} \int_{\{\varepsilon < u < 1\}\cap B_{ii}} |Du|^2 \, \omega \, dx \leqslant \\ \leqslant & \, \frac{4 \gamma_1^2}{(1-\varepsilon)^2} \, \eta^{2(1-a)} (K_2')^2 \frac{(c_d (\nu))^2}{(c_d (\omega))^2} \, \beta \, \nu (B_{\rho}(x_0)) \, . \end{align*} The conclusion follows by taking the limit $\eta\to 0$.
$\square$ \\
\noindent Here we state three results, which are corollaries respectively of Theorem \ref{chanillo-wheeden}, Lemma \ref{lemma2.2} and Lemma \ref{lemmaMisVar}.
\begin{cor} \label{corollario1} Under the same assumptions as in Theorem $\ref{chanillo-wheeden}$, suppose moreover that $a, b \in {\bf R}$, $a < b$. Then \begin{equation} \label{disuguaglianzadisobolevcontempo}
\Big[ \frac{1}{\nu(B_\rho)} {\int_a^b \!\!\! \int_{B_\rho}} |u(x,t)|^p \nu (x) dx dt \Big]^{1/p} \leqslant \gamma_1 \, \rho \,
\Big[ \frac{1}{\omega(B_\rho)} {\int_a^b \!\!\! \int_{B_\rho}} |Du(x,t)|^p \omega (x) dx dt \Big]^{1/p} \end{equation} for every Lipschitz continuous function $u$ in $B_{\rho}(x_0) \times (a,b)$ such that for every $t \in (a,b)$ the function $u(\cdot, t)$ has either support contained in $B_{\rho}(x_0)$ or null mean value $($with respect to the variable $x)$. \end{cor} \noindent {\it Proof}\ \ -\ \ It is sufficient first to observe that
$\big( (\nu (B_{\rho}))^{-1} \int_{B_{\rho}} |u - u_{B_{\rho}}|^p \nu \, dx \big)^{1/p} \leqslant
\big( (\nu (B_{\rho}))^{-1} \int_{B_{\rho}} |u - u_{B_{\rho}}|^{q'} \nu \, dx \big)^{1/q'}$ for $q' > p$ $($by H\"older's inequality with respect to the normalized measure $\nu \, dx / \nu (B_{\rho}))$, then to take the power $p$ and integrate in time.
$\square$ \\
\begin{cor} \label{corollario2} Consider $B_{\rho} = B_{\rho}(x_0)$ a ball, $a, b \in {\bf R}$, $a < b$, $p \in (1,+\infty)$, $\nu$, $\omega$ and $v \in L^p (a, b; W^{1,p}(B_{\rho}, \nu, \omega))$ for which the assumptions of Theorem $\ref{chanillo-wheeden}$ hold, $k, l \in {\bf R}$ with $k < l$. Consider also a subset $Z$ of $B_{\rho}$ and denote by $\bar\nu$ the function which vanishes in $Z$ and coincides with $\nu$ in $B_{\rho} \setminus Z$. Then \begin{align*} (l - k)^p \, & \bar\nu \otimes \mathcal{L}^1 \, ( \{ v < k \} ) \, \bar\nu \otimes \mathcal{L}^1 \, ( \{ v > l \}) \leqslant \\ & \leqslant \, 2^p \, \gamma_1^p \, \rho^p \, \bar\nu \otimes \mathcal{L}^1 \, \big(B_{\rho} \times (a,b) \big) \,
\frac{\nu (B_{\rho})}{\omega (B_{\rho})} \,
\iint_{( B_{\rho} \times (a,b)) \cap \{ k < v < l \}} |Dv|^p \, \omega \, dx dt \, . \end{align*} \end{cor} \noindent {\it Proof}\ \ -\ \ One can follow the proof of Lemma \ref{lemma2.2} integrating in space and time and finally applying Corollary \ref{corollario1}.
$\square$ \\
\begin{cor} \label{corollario3} Consider $x_0 \in \Omega$ and $\rho > 0$ such that $B_{2\rho}(x_0) \subset \Omega$, $a, b \in {\bf R}$, $a < b$, $\sigma \in (0,\rho)$, $\omega \in A_2(K_1)$, $(\nu,\omega) \in B_{2,q}^1(K_2)$, $\nu \in A_{\infty}$, $q > 2$, $\alpha, \beta > 0$.
Consider $\mathcal{B}$ an open and non-empty subset of $B_{\rho}(x_0)$ such that also $\mathcal{B}^{\sigma} = \{ x \in \Omega \, | \, \text{\rm dist}(x, \mathcal{B}) < \sigma \}$ is a subset of $B_{\rho}(x_0)$. Then, for every $\varepsilon , \delta\in (0,1)$ there exists $\eta \in (0,1)$ such that for every $u\in L^2(a,b; W^{1,2}_{\rm loc}(\Omega, \nu, \omega))$ satisfying $$
\int_a^b \!\!\! \int_{\mathcal{B}^{\sigma}} |Du|^2 \, \omega \, dx dt \leqslant \beta \, (b - a) \, \frac{\omega(B_{\rho}(x_0))}{\rho^2} $$ and $$ \nu \otimes {\mathcal L}^1 \big(\{ u > 1\} \cap (\mathcal{B} \times (a,b) ) \big) \geqslant \alpha \, (b - a) \, \nu (B_\rho(x_0)), $$ there exists $x^*\in \mathcal{B}$ with $B_{\eta\rho}(x^*) \subset \mathcal{B}$ such that $$ \nu \otimes {\mathcal L}^1 \big( \{ u > \varepsilon \}\cap (B_{\eta\rho}(x^*) \times (a,b) ) \big) > (1-\delta) \, (b - a) \, \nu (B_{\eta\rho}(x^*)). $$ \end{cor} \noindent {\it Proof}\ \ -\ \ One can repeat the proof of Lemma \ref{lemmaMisVar} using a family of disjoint cylinders $(B_{\eta\rho}(x_i) \times (a,b))_{i\in I}$ with the property that $$ \bigcup _{i\in I} B_{\eta\rho}(x_i) \subset \mathcal{B} \subset \bigcup_{i\in I} B_{2\eta\rho}(x_i) \subset \mathcal{B}^{\sigma} \, , $$ taking the measure $\nu \otimes {\mathcal L}^1$ instead of $\nu$ and finally using Corollary \ref{corollario2} to conclude.
$\square$ \\
\noindent We conclude stating a standard lemma (see, for instance, Lemma 7.1 in \cite{giusti}) and one of its possible generalizations which will be needed later.
\begin{lemma} \label{giusti} Let $(y_h)_h$ be a sequence of positive real numbers such that $$ y_{h+1} \leqslant c \, b^h \, y_h^{1+\alpha} $$ with $c, \alpha > 0$, $b > 1$. If $y_0 \leqslant c^{-1/\alpha} b^{-1/\alpha^2}$ then $$ \lim_{h\to +\infty} y_h = 0 \, . $$ \end{lemma}
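\noindent For the reader's convenience we sketch the standard argument behind this lemma (see \cite{giusti}): one shows by induction that $y_h \leqslant b^{-h/\alpha} \, y_0$ for every $h$. Indeed, assuming this bound for $h$, $$ y_{h+1} \leqslant c \, b^h \big( b^{-h/\alpha} y_0 \big)^{1+\alpha} = b^{-(h+1)/\alpha} \, y_0 \, \big( c \, b^{1/\alpha} y_0^{\alpha} \big) \leqslant b^{-(h+1)/\alpha} \, y_0 \, , $$ since $y_0 \leqslant c^{-1/\alpha} b^{-1/\alpha^2}$ gives $c \, b^{1/\alpha} y_0^{\alpha} \leqslant 1$; as $b > 1$, the conclusion follows. \\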
\begin{lemma} \label{lemmuzzofurbo-quinquies} Let $(y_h)_h$ and $(\epsilon_h)_h$ be two sequences of non-negative real numbers such that \begin{equation} \label{ipotesi} y_{h+1} \leqslant c \, b^h \, (y_h + \epsilon_h) \, y_h^{\alpha} \, , \hskip20pt y_{h+1} \leqslant y_h \, , \hskip20pt \lim_{h \to + \infty} \epsilon_h = 0 \, , \end{equation} with $c, \alpha > 0$, $b > 1$. If $y_0 < c^{-1/\alpha} b^{-1/\alpha^2}$ then $$ \lim_{h\to +\infty} y_h = 0 \, . $$ \end{lemma} \noindent {\it Proof}\ \ -\ \ If $\epsilon_h = 0$ for every $h$ we are reduced to Lemma \ref{giusti}. Otherwise, denote by $\bar{y}$ the limit $\lim_h y_h$, which exists by the monotonicity of $(y_h)_h$, and suppose that $$ y_0 < c^{-1/\alpha} b^{-1/\alpha^2} \, . $$ Now, by contradiction, assume that $$ \bar{y} > 0 \, . $$ By assumption we have that for each $\varepsilon > 0$ there is $\bar{h} = \bar{h}(\varepsilon)$ such that \begin{equation} \label{assurdo} \epsilon_h \leqslant \varepsilon \hskip30pt \text{for every } h \geqslant \bar{h} \, . \end{equation} Now for each $\delta > 0$ we choose $\varepsilon$ such that $\varepsilon < \delta \, \bar{y}$ so that, since $y_h \geqslant \bar{y}$ for every $h$, we get $\delta \, y_h \geqslant \varepsilon$ for every $h$. In particular for $h \geqslant \bar{h}$ we get \begin{align*} y_{h+1} & \leqslant \, c \, b^h \, (y_h + \epsilon_h) \, y_h^{\alpha} \leqslant \\
& \leqslant \, c \, b^h \, (y_h + \varepsilon) \, y_h^{\alpha} \leqslant \\
& \leqslant \, c \, b^h \, (y_h + \delta \, y_h) \, y_h^{\alpha} = \\
& = (1+\delta) \, c \, b^h \, y_h^{1 + \alpha} \, . \end{align*} Using the lemma above we have that if $y_{\bar{h}} \leqslant (1+\delta)^{-1/\alpha} \, c^{-1/\alpha} b^{-1/\alpha^2}$ then $\lim_{h} y_h = \bar{y} = 0$, where $\bar{h}$ depends on $\varepsilon$, which in turn depends on the choice of $\delta$. By the monotonicity of $(y_h)_h$, if $y_0 \leqslant (1+\delta)^{-1/\alpha} \, c^{-1/\alpha} b^{-1/\alpha^2}$ the condition on $y_{\bar{h}}$ is guaranteed whatever the value of $\bar{h}$. Since $y_0 < c^{-1/\alpha} b^{-1/\alpha^2}$ there is $\delta > 0$ such that $y_0 \leqslant (1+\delta)^{-1/\alpha} \, c^{-1/\alpha} b^{-1/\alpha^2}$, and so we would derive that $\bar{y} = 0$, which contradicts the assumption $\bar{y} > 0$.
$\square$ \\
\section{Preliminaries about mixed type equations} \label{paragrafo3}
This brief section is devoted to a remark about equations of mixed type, like, for example, \begin{equation} \label{equazionegenerale} \mu (x) \frac{\partial u}{\partial t} - \textrm{div} (a (x,t,Du)) = 0 , \end{equation} where $a$ is a Carath\'eodory function such that \begin{align} & a(x,t,0) = 0 \, , \nonumber \\ \label{proprieta`}
& (a(x,t,\xi) - a(x,t,\eta), \xi - \eta) \geqslant \lambda (x) |\xi - \eta|^2 \, , \\
& |a(x,t, \xi) - a(x,t,\eta) | \leqslant L \, \lambda (x) |\xi - \eta| \, , \nonumber
\end{align}
for every $\xi, \eta \in {\bf R}^n$, where $L$ is a positive constant and $\mu = \mu(x)$, $\lambda = \lambda(x)$ are functions, $\lambda$ positive, while $\mu$ may change sign (and also vanish on regions of positive measure). \\
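\noindent A minimal illustration (our own, not needed in the sequel): the linear model case $a(x,t,\xi) := \lambda (x) \, \xi$ satisfies \eqref{proprieta`} with $L = 1$, since \begin{align*} (a(x,t,\xi) - a(x,t,\eta), \xi - \eta) & = \lambda (x) \, |\xi - \eta|^2 \, , \\ |a(x,t,\xi) - a(x,t,\eta)| & = \lambda (x) \, |\xi - \eta| \, , \end{align*} and with this choice \eqref{equazionegenerale} reduces to the model equation $\mu \, \partial_t u - \textrm{div} (\lambda \, Du) = 0$ considered in the next section. \\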
\noindent Before talking about mixed type equations we want to recall that a weighted Sobolev space $H^1 (\Omega, |\mu|, \lambda)$ endowed with the norm $$
\| u \|^2 := \int_{\Omega} u^2 |\mu| dx + \int_{\Omega} |D u|^2 \lambda dx $$
can be defined even if the function $|\mu|$ takes the value zero in a subset whose measure is positive
(we refer to \cite{fabio3} for the definition and the completeness of this space). If we denote the space $L^2(0,T; H^1_0(\Omega, |\mu|, \lambda)) $ by $\mathcal{V}$ and the space
$\left\{ u \in \mathcal{V} \, | \, \mu u' \in \mathcal{V}' \right\}$ by $\mathcal{W}$ ($u'$ denotes the time derivative of $u$, $\mathcal{V}'$ the dual space of $\mathcal{V}$) one has that a solution of \eqref{equazionegenerale} belongs to $\mathcal{W}$ and (see \cite{fabio4}) $$ u \in \mathcal{W} \hskip10pt \Longrightarrow \hskip10pt t \mapsto \int_{\Omega} u^2(x,t) \mu(x) dx \hskip10pt \text{is continuous in } [0,T] \, , $$ and \begin{equation} \label{finitezzamu}
\int_{\Omega} u^2(x,t) |\mu|(x) dx \hskip10pt \text{is finite for every } t \in [0,T] \, . \end{equation} On the other hand, for $u$ solution of \eqref{equazionegenerale}, the function
\begin{equation} \label{finitezzalambda} t \mapsto \int_{\Omega} u^2(x,t) \lambda (x) dx \hskip20pt \text{is not necessarily } L^{\infty}_{\rm loc}(0,T) \end{equation} even if it is finite for almost every $t$ since ${\displaystyle \int_0^T \!\! \int_{\Omega} u^2(x,t) \lambda (x) dx \, dt}$ is finite. In the next section we will define a De Giorgi type class of functions requiring \begin{equation} \label{ipotesipoconaturale}
t \mapsto \int_{A} u^2(x,t) |\mu|_{\lambda} (x) dx \hskip10pt \text{belongs to } L^{\infty}_{\rm loc}(0,T) \hskip10pt \text{for every } A \subset\subset \Omega \, . \end{equation}
This is somewhat stronger than the natural requirement \eqref{finitezzamu};
a priori it is not guaranteed by the equation in a general situation, but in many cases it does hold, as we mention below.
This condition will be needed only if there is a region in which the equation reduces to a family of elliptic equations, i.e. if there is an open set in which $\mu = 0$. \\ More generally, using a corollary of Theorem 2.1 in \cite{fabio7} one can prove that, if $u$ is the solution of the problem \begin{equation} \label{aaa} \left\{ \arst \begin{array}{l} {\displaystyle \mu \frac{\partial u}{\partial t}}
- \textrm{div} (a(x,t)\cdot Du) = 0 \hskip30pt
\text{in } \Omega \times (0,T) \\ u = \phi
\text{in } \partial\Omega \times (0,T) \\
u (x,0) = \varphi (x)
\text{in } \{ x \in \Omega \, | \, \mu(x) > 0 \} \\
u (x,0) = \psi (x)
\text{in } \{ x \in \Omega \, | \, \mu(x) < 0 \} \end{array} \right. \end{equation} for some $\phi \in \mathcal{W}$, $\varphi, \psi \in L^2(\Omega)$, if $$ \phi_t \in \mathcal{W} \hskip15pt \text{and} \hskip15pt a \hskip5pt \text{is regular in time } $$ (we refer to \cite{fabio7} for the precise requirement about regularity of $a$) we derive that the function
$w = \eta (u-\phi) \in H^1(0,T; H^1(\Omega, |\mu|, \lambda))$, and then in particular \begin{equation} \label{cont}
u \in C^0((0,T); H^1(\Omega, |\mu|, \lambda)) \end{equation}
and as a by-product one gets that $u$ satisfies \eqref{ipotesipoconaturale} since $H^1(\Omega, |\mu|, \lambda) \subset L^2(\Omega, \lambda)$. \\ [0.3em] Analogous considerations hold for Neumann boundary conditions. \\ [0.3em] We observe that in general a solution of a family of elliptic equations will not be regular in time (if, e.g., $a$ is not regular in time), as we will show with an example in the last section. \\
\section{De Giorgi classes and Q-minima} \label{De Giorgi classes and Q-minima}
From this section on we will focus our attention on a class of functions which contains the solutions of some forward-backward evolution equations, possibly also a family of elliptic equations, whose simplest example is the following ($\lambda$ is positive, but $\mu$ is valued in ${\bf R}$; both may be unbounded) \begin{equation} \label{equazione} \mu \frac{\partial u}{\partial t} - \text{div} (\lambda D u) = 0 \hskip30pt \textrm{in } \Omega \times (0,T) \, , \end{equation} but one can also think of \eqref{equazionegenerale} or of \eqref{equazionegeneralissima}. The connection between the class we are going to define and this equation will be clarified below. We will show that solutions of such a homogeneous equation, of equation \eqref{equazionegenerale} and also of a wider class of homogeneous equations are {\it quasi-minimizers} (from now on we will call them more simply, and according to the original definition, $Q$-minima, see Definition \ref{quasi-min}) for equation \eqref{equazione}, and that $Q$-minima are contained in the De Giorgi class we are going to define. \\ \ \\ {\bf Assumptions about $\mu$ and $\lambda$ - } Given $\mu$ and $\lambda$ defined in ${\bf R}^n$, with $\lambda$ positive almost everywhere, while $\mu$ may be positive, zero or negative, we define $$ \mu_{\lambda} := \left\{ \begin{array}{ll} \mu & \text{ if } \mu \not = 0, \\ [0.5em] \lambda & \text{ if } \mu = 0 . \end{array} \right. $$ Once we have fixed $\Omega$, an open subset of ${\bf R}^n$, and $T > 0$, we require $\mu$ and $\lambda$ to satisfy the following: there is $q > 2$ such that \begin{align*} \text{(H.1) - } & \lambda \in A_2(K_1) \, , \\
\text{(H.2) - } & (|\mu|_{\lambda},\lambda) \in B_{2,q}^1(K_2) \, , \\
\text{(H.3) - } & |\mu|_{\lambda} \in A_{\infty} (K_3, \varsigma) \, . \end{align*} These conditions (see Theorem \ref{chanillo-wheeden}) guarantee the validity of the Sobolev-Poincar\'e type inequality $$
\Big[ \frac{1}{|\mu|_{\lambda} (B_\rho)} {\int_{B_\rho}} |u(x)|^q |\mu|_{\lambda} (x) dx \Big]^{1/q} \leqslant \gamma_1 \, \rho \,
\Big[ \frac{1}{\lambda (B_\rho)} {\int_{B_\rho}} |Du(x)|^2 \lambda (x) dx \Big]^{1/2} $$ and of all the results which follow (in particular Theorem \ref{gut-whee} and Corollary \ref{cor-gut-whee}). \ \\ The condition (H.2) (see Remark \ref{notaimportante}, point $\mathpzc{A}$) guarantees the existence of $\alpha \in (0,1)$, $\tilde{K}_2 > K_2$
depending on $K_2$ and $c_{\textit{rh}}(|\mu|_{\lambda})$ and $\tilde{q} \in (2, q)$ such that, thanks also to Remark \ref{rmkipotesi}, \begin{align*}
\text{(H.2)}' \text{ - } & (|\mu|_{\lambda},\lambda) \in B_{2,\tilde{q}}^{\alpha}(\tilde{K}_2) \subset B_{2,2}^{\alpha}(\tilde{K}_2) \, . \end{align*} We will suppose that the sets $$
\Omega_+ := \{ x \in \Omega \, | \, \mu(x) > 0 \} , \hskip10pt
\Omega_- := \{ x \in \Omega \, | \, \mu(x) < 0 \} \hskip10pt \textrm{and} \hskip10pt \Omega_0 := \Omega \setminus \big( \Omega_+ \cup \Omega_- \big) $$ are the union of a finite number of open and connected subsets of $\Omega$. This means, for instance, that $\mu$ cannot change sign on a Cantor-type set of positive measure. \\ Besides $\mu_+$ and $\mu_-$, which denote respectively the positive and negative parts of $\mu$, we define \begin{equation} \label{lambda} \lambda_+ := \left\{ \begin{array}{ll} \lambda & \text{ in } \Omega_+ \\ [0.5em] 0 & \text{ in } \Omega \setminus \Omega_+ \end{array} \right. , \quad \lambda_- := \left\{ \begin{array}{ll} \lambda & \text{ in } \Omega_- \\ [0.5em] 0 & \text{ in } \Omega \setminus \Omega_- \end{array} \right. , \quad \lambda_0 := \left\{ \begin{array}{ll} \lambda & \text{ in } \Omega_0 \\ [0.5em] 0 & \text{ in } \Omega \setminus \Omega_0 \end{array} \right. \, . \end{equation}
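\noindent To fix ideas about assumption (H.1), we recall a classical example of Muckenhoupt weights (stated only as an illustration): the power weights satisfy $$ \lambda (x) = |x|^{\beta} \in A_2({\bf R}^n) \qquad \Longleftrightarrow \qquad -n < \beta < n \, , $$ and more generally $|x|^{\beta} \in A_p({\bf R}^n)$ if and only if $-n < \beta < n(p-1)$; in particular such weights may vanish or blow up at the origin, but are doubling. \\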
In this way notice that $$|\mu|_{\lambda} = |\mu| + \lambda_0 = \mu_+ + \mu_- + \lambda_0 .$$ Notice that hypotheses (H.1) and (H.3) (see \eqref{doubling_property})
imply that $\lambda$ and $|\mu|_{\lambda}$ are doubling, i.e. there is a constant $\mathfrak q$ such that \begin{equation} \label{doublingmula} \begin{array}{c}
|\mu|_{\lambda} \big( B_{2\rho}(x) \big) \leqslant \mathfrak q \, |\mu|_{\lambda} \big( B_{\rho}(x) \big) , \\ [0.5em] {\lambda} \big( B_{2\rho}(x) \big) \leqslant \mathfrak q \, {\lambda} \big( B_{\rho}(x) \big) \end{array} \end{equation} for every $x \in \Omega$ and $\rho > 0$ for which $B_{2\rho}(x) \subset \Omega$. \\ Moreover, by \eqref{ecomeservono!}, if we denote by $c_{\textit{rh}}(\lambda)$ the constant satisfying \eqref{maggiore_sommabilita`} with the weight $\lambda$, by $\varsigma (\lambda)$ the constant appearing in \eqref{ecomeservono!} with $\omega = \lambda$, and by
$c_{\textit{rh}}(|\mu|_{\lambda})$ and $\varsigma (|\mu|_{\lambda})$ the analogous constants with $\omega = |\mu|_{\lambda}$, we get that \begin{equation} \label{carlettomio}
\frac{\lambda(S)}{\lambda(Q)} \leqslant \upkappa \, \left(\frac{|\mu|_{\lambda}(S)}{|\mu|_{\lambda}(Q)}\right)^{\uptau} \, , \qquad
\frac{|\mu|_{\lambda}(S)}{|\mu|_{\lambda}(Q)} \leqslant \upkappa \, \left( \frac{\lambda(S)}{\lambda(Q)} \right)^{\uptau} \end{equation}
where $\uptau = \min \{ {\varsigma (\lambda)/r}, {\varsigma (|\mu|_{\lambda})/2} \}$ and
$\upkappa = \max \{ c_{\textit{rh}} (\lambda) \, K_3^{\varsigma (\lambda)/r} , c_{\textit{rh}} (|\mu|_{\lambda}) \, K_1^{\varsigma (|\mu|_{\lambda})/2} \}$. \\ Having defined $I$, the set of ``interfaces'', as follows: $$ I_+ = \partial\Omega_+ \cap \Omega \, , \hskip10pt I_- = \partial\Omega_- \cap \Omega \, , \hskip10pt I_0 = \partial\Omega_0 \cap \Omega \, , \hskip20pt I := I_+ \cup I_- \cup I_0 \, , $$ we will moreover make the following additional assumptions, where for simplicity we assume the first holds with the same constant $\mathfrak q$ as before: \begin{align*} \text{(H.4) - } &
\left| \begin{array}{ll} \mu_+ \big( B_{2\rho}(x) \big) \leqslant \mathfrak q \, \mu_+ \big( B_{\rho}(x) \big)
& \quad \qquad \text{for every } x \in {\Omega}_+ \cup I_+ , \\ [0.5em] \mu_- \big( B_{2\rho}(y) \big) \leqslant \mathfrak q \, \mu_- \big( B_{\rho}(y) \big)
& \quad \qquad \text{for every } y \in {\Omega}_- \cup I_- , \\ [0.5em] \lambda_0 \big( B_{2\rho}(z) \big) \leqslant \mathfrak q \, \lambda_0 \big( B_{\rho}(z) \big)
& \quad \qquad \text{for every } z \in {\Omega}_0 \cup I_0 , \end{array} \right. \\ \ & \ \\ \text{(H.5) - } &
I \text{ is such that } \lim_{\varepsilon \to 0^+} |I^{\varepsilon}| = 0 , \end{align*} where (H.4) holds for every $\rho > 0$ for which $B_{2\rho}(x) \subset \Omega$ and $I^{\varepsilon}$, the open $\varepsilon$-neighbourhood of $I$, is defined in \eqref{ingr.dimagr}. \\ [0.3em]
Some comments about (H.4) and (H.5) are in order. First notice that since $|\mu|_{\lambda}$ satisfies \eqref{doublingmula}, at least one of the three requirements in (H.4) holds for every $x \in \Omega$. \\ Notice moreover that assumption (H.4) is deeply connected to a geometric requirement about the set $I$ of interfaces: indeed, (H.4) has to hold in particular for points belonging to $I$. Finally, about the set $I$, notice that (H.5) is weaker than the requirement that $I$ be a $\mathcal{H}^{n-1}$-rectifiable set, since $I$ may also fail to be rectifiable. For all these comments we refer to the last section, in which some examples are shown. \\ \ \\ {\bf Some notations - } By $u_+(y)$ we denote the function $\max \{ u(y), 0 \}$ and by $u_-(y)$ the function $ \max \{ -u(y), 0 \}$. We will write $u_+^2$ or $u_-^2$ to denote $$ u_+^2(y) := ( u_+(y))^2 \, , \hskip20pt u_-^2(y) := ( u_-(y))^2 \, . $$ Given $A \subset \Omega$ we will denote,
for a given $\varepsilon > 0$, \begin{equation} \label{ingr.dimagr} \begin{array}{c}
A^{\varepsilon} := \big\{ x \in \Omega \, \big| \, \textrm{dist} (x,A) < \varepsilon \big\} \, , \hskip20pt
A_{\varepsilon} := \big\{ x \in \Omega \, \big| \, \textrm{dist} (x,A^c) < \varepsilon \big\} \, , \\ [0.5em] \text{while for } \varepsilon = 0 \hskip10pt A^{\varepsilon} = A_{\varepsilon} := A \, . \end{array} \end{equation} Fix, beyond $x_0$, $t_0 \in (0,T)$. For a given $\varepsilon > 0$ and a ball $B_{\rho}(x_0)$ we define the sets \begin{align*} I_{\rho,\varepsilon}(x_0) := (I \cap B_{\rho}(x_0))^{\varepsilon} \, , & \hskip 20pt
B_{\rho}^0(x_0) := B_{\rho}(x_0) \cap \Omega_0 \\ B_{\rho}^+(x_0) := B_{\rho}(x_0) \cap \Omega_+ \, , & \hskip 20pt
B_{\rho}^-(x_0) := B_{\rho}(x_0) \cap \Omega_- \, , \\ I_{\rho}^+ (x_0) := I \, \cap \, \overline{B_{\rho}^+}(x_0) \, , \hskip 20pt
I_{\rho}^- (x_0) := I \, \cap & \, \overline{B_{\rho}^-}(x_0) \, , \hskip 20pt
I_{\rho}^0 (x_0) := I \, \cap \, \overline{B_{\rho}^0}(x_0) \, , \\ I_{\rho,\varepsilon}^+(x_0) := (I_{\rho}^+(x_0))^{\varepsilon} \cap B_{\rho}^+(x_0)\, ,
& \hskip 20pt I^{\rho,\varepsilon}_+(x_0) := (I_{\rho}^+(x_0))^{\varepsilon} \setminus I_{\rho,\varepsilon}^+(x_0) \, , \\ I_{\rho,\varepsilon}^-(x_0) := (I_{\rho}^-(x_0))^{\varepsilon} \cap B_{\rho}^-(x_0)\, ,
& \hskip 20pt I^{\rho,\varepsilon}_-(x_0) := (I_{\rho}^-(x_0))^{\varepsilon} \setminus I_{\rho,\varepsilon}^-(x_0) \, , \\ I_{\rho,\varepsilon}^0(x_0) := (I_{\rho}^0(x_0))^{\varepsilon} \cap B_{\rho}^0(x_0)\, ,
& \hskip 20pt I^{\rho,\varepsilon}_0(x_0) := (I_{\rho}^0(x_0))^{\varepsilon} \setminus I_{\rho,\varepsilon}^0(x_0) \, . \end{align*} We define the following functions \begin{align} \label{funzioneacca}
& h(x_0, \rho) := \frac{|\mu|_{\lambda} \left(B_{\rho}(x_0)\right)}{\lambda \left(B_{\rho}(x_0)\right)} \, ,
\hskip20pt f (x_0, \rho) := h(x_0,\rho) \rho^2 \, . \end{align} These functions depend a priori on $x_0$, but for simplicity we will not specify this dependence, writing only $h(\rho)$ and $f(\rho)$ when not strictly necessary. \\ Notice that the function $h$ satisfies, if $\mu \not= 0$ almost everywhere, the following inequalities
\begin{equation} \label{stimeacca}
h(x_0, \rho) \leqslant \mathfrak q \, h(x_0, 2\rho) \, , \hskip20pt h(x_0, 2\rho) \leqslant \mathfrak q \, h(x_0, \rho) \, . \end{equation} Other sets we define are the following: fix $x_0 \in \Omega$ and $t_0 \in (0,T)$, $R > 0$, $\upbeta > 0$ and $s_1, s_2 \in (0,T)$ with $s_1 < t_0 < s_2$ and satisfying \begin{equation} \label{esseunoeessedue} \left. \begin{array}{lll} i \, ) & s_2 - t_0 = t_0 - s_1 = \upbeta \, h(x_0, R) R^2 & \qquad \text{ when we consider } B_R^+(x_0) \text{ or } B_R^-(x_0) \, , \\ [0.5em] ii \, ) & s_1, s_2 \quad \text{arbitrary} & \qquad \text{ when we consider } B_R^0(x_0) \, . \end{array} \right. \end{equation} Inside the cylinder $B_R(x_0) \times (s_1, s_2)$ for
$$ \theta \in [0,1) $$ we define \begin{equation} \label{sigmateta}
\sigma_\theta := \theta \, \upbeta \, h(x_0,R) \, R^2 \end{equation} in such a way that $\sigma_\theta \in [0, \upbeta \, h(x_0,R) R^2)$; then for $\rho \in (0, R)$ and $\varepsilon > 0$ and taking $s_1, s_2$ as in \eqref{esseunoeessedue}, point $i \, )$, we define the sets \begin{equation} \label{notazione1} \begin{array}{l}
Q_R^{\upbeta,\texttt{\,>}}(x_0,t_0) := B_R (x_0) \times (t_0, s_2) \, , \qquad
Q_R^{\upbeta,\texttt{\,<}}(x_0,t_0) := B_R (x_0) \times (s_1,t_0) \, , \\ [0.5em] Q_R^{\upbeta,+} (x_0,t_0) := B_R^+ (x_0) \times (t_0, s_2) \, , \qquad
Q_R^{\upbeta,-} (x_0,t_0) := B_R^- (x_0) \times (s_1,t_0) \, , \\ [0.5em] Q_{R;\rho, \theta}^{\upbeta,+} (x_0,t_0) := B_{\rho}^+ (x_0) \times (t_0 + \sigma_\theta, s_2) \, , \\ [0.5em] Q_{R;\rho, \theta}^{\upbeta,-} (x_0,t_0) := B_{\rho}^- (x_0) \times (s_1, t_0 - \sigma_{\theta}) \, , \\ [0.5em] Q_{R;\rho, \theta}^{\upbeta,+,\varepsilon} (x_0,t_0) :=
\left\{
\begin{array}{ll}
B_{\rho + \varepsilon} (x_0) \times (t_0 + \sigma_\theta, s_2)
& \hspace{-2cm} \text{ if } B_{\rho + \varepsilon}^+ (x_0) = B_{\rho + \varepsilon} (x_0) , \\ [0.2em]
\left( (B_{\rho}^+ (x_0))^{\varepsilon} \times (t_0 + \sigma_\theta, s_2) \right) \cup
\big( (I_{\rho}^+(x_0))^{\varepsilon} \times (t_0, s_2) \big)
& \text{ otherwise} ,
\end{array}
\right. \\ [1em] Q_{R;\rho, \theta}^{\upbeta,-,\varepsilon} (x_0,t_0) :=
\left\{
\begin{array}{ll}
B_{\rho + \varepsilon} (x_0) \times (s_1, t_0 - \sigma_\theta)
& \hspace{-2cm} \text{ if } B_{\rho + \varepsilon}^- (x_0) = B_{\rho + \varepsilon} (x_0) , \\ [0.2em]
\left( (B_{\rho}^- (x_0))^{\varepsilon} \times (s_1, t_0 - \sigma_\theta) \right) \cup
\big( (I_{\rho}^-(x_0))^{\varepsilon} \times (s_1, t_0) \big)
& \text{ otherwise} ,
\end{array}
\right. \\ [1em] \end{array} \end{equation} and with $s_1, s_2$ arbitrary (see \eqref{esseunoeessedue}) we define \begin{equation} \label{notazione2} \begin{array}{l} Q_{R;\rho; s_1, s_2}^{0} (x_0) := B_{\rho}^0 (x_0) \times (s_1, s_2) \quad \text{for } \rho \leqslant R \, , \\ [0.5em]
Q_{R;\rho; s_1, s_2}^{0,\varepsilon} (x_0) := ( B_{\rho}^0(x_0))^{\varepsilon} \times (s_1, s_2) \, . \end{array} \end{equation} The first subscript $R$ below $Q$ denotes that $s_2 - t_0$ and $t_0 - s_1$ are proportional to $R^2$ and that we consider subsets of $B_R \times (0,T)$. \\ \ \\ \noindent We now introduce the De Giorgi class for equation \eqref{equazionegenerale}. \\ In the following definition we will use the measures $\mu_+$ and $\mu_-$ rescaled by the factor $h(x_0,R)$. We will make the implicit assumption that the support of these measures (or functions) is the same as that of $\mu_+$ and $\mu_-$, i.e. $$ \frac{\mu_+}{h(x_0, {R})} (x) :=
\left\{
\begin{array}{ll}
{\displaystyle \frac{\mu_+}{h(x_0, {R})} } & \text{if } \mu_+ (x) > 0 \, , \\ [1em]
0 & \text{if } \mu_+ (x) = 0 \, ,
\end{array}
\right. \quad \frac{\mu_-}{h(x_0, {R})} (x) :=
\left\{
\begin{array}{ll}
{\displaystyle \frac{\mu_-}{h(x_0, {R})} } & \text{if } \mu_- (x) > 0 \, , \\ [1em]
0 & \text{if } \mu_- (x) = 0 \, .
\end{array}
\right. $$
Moreover in the definition which follows we require that $u \in L^{\infty}_{\rm loc} ((0,T); L^2_{\rm{loc}} (\Omega, |\mu|_{\lambda}))$ even if only the terms $$ \int_{B_{\rho}} u^2 (x,t) \mu_+ (x) dx \quad \text{and} \quad \int_{B_{\rho}} u^2 (x,t) \mu_- (x) dx $$ are, a priori, bounded (see Section \ref{paragrafo3}). The requirement that also ${\displaystyle \int_{B_{\rho}^0} u^2 (x,t) \lambda (x) dx}$ be finite will be needed, for instance, to prove point $iii \,)$ of Theorem \ref{Linfinity}.
\begin{definition}[De Giorgi classes] \label{classiDG} Consider $\Omega$ an open subset of ${\bf R}^n$ and $T > 0$ and a point $(x_0, t_0) \in \Omega \times (0,T)$. Consider $R, r, \tilde{r} > 0$, $r < \tilde{r} \leqslant R$, $\upbeta > 0$, $\theta, \tilde\theta$ such that $0 \leqslant \tilde\theta < \theta < 1$, $s_1, s_2, t_0 \in (0,T)$, $s_1 < t_0 < s_2$ satisfying \eqref{esseunoeessedue}. We say that a function $$
u \in L^2_{\rm{loc}}(0,T; H^1_{\rm{loc}} (\Omega, |\mu|, \lambda)) \cap L^{\infty}_{\rm loc} ((0,T); L^2_{\rm{loc}} (\Omega, |\mu|_{\lambda})) $$ belongs to the De Giorgi class $DG_+(\Omega, T, \mu, \lambda, \gamma)$, where $\gamma$ is a positive constant, if for every $\varepsilon \in [ 0, R - \tilde{r}]$, for $\theta - \tilde\theta = (\tilde{r} - r)^2/R^2$ and for every $k \in {\bf R}$
the following inequalities hold $(\sigma_\theta$ is defined in \eqref{sigmateta}$)$: \\ [0.5em] $i \, )$ for $s_2 = t_0 + \upbeta \, h(x_0, {R}) R^2$ and $B_R(x_0) \times [t_0, s_2] \subset \Omega \times (0,T)$ \begin{align} \label{DGgamma+} \sup_{t \in (t_0 + \sigma_\theta, s_2)} & \int_{B_{r + \varepsilon}^+} (u-k)_+^2 (x,t) \mu_+ (x) dx + \sup_{t \in (t_0, t_0 + \sigma_{\tilde\theta})} \int_{I^{r, \varepsilon}_+} (u-k)_+^2 (x,t) \mu_-(x) \, dx \nonumber \\
& \hskip130pt + \iint_{Q_{R;r, \theta}^{\upbeta, +,\varepsilon}} |D(u-k)_+|^2\, \lambda \, dx ds \leqslant \nonumber \\ \leqslant & \, \gamma \Bigg[
\sup_{t \in (t_0,t_0 + \sigma_{\tilde\theta})} \int_{I_{r, \tilde{r}-r + \varepsilon}^+} (u-k)_+^2 (x,t) \mu_+ (x) \, dx + \\ & \hskip30pt + \sup_{t \in (t_0 + \sigma_\theta, s_2)} \int_{I^{r, \tilde{r}-r + \varepsilon}_+} (u-k)_+^2 (x,t) \mu_- (x) \, dx + \nonumber \\ & \hskip30pt + \frac{1}{(\tilde{r} - r)^2}
\iint_{Q_{R;r , {\tilde\theta}}^{\upbeta, +,\tilde{r}-r + \varepsilon}}
(u-k)_+^2\, \left( \frac{\mu_+}{\upbeta \, h(x_0, R)} + \lambda \right) \, dx dt \Bigg] ; \nonumber \end{align} $ii \, )$ for $s_1 = t_0 - \upbeta \, h(x_0, R) R^2$ and $B_R(x_0) \times [s_1, t_0] \subset \Omega \times (0,T)$ \begin{align} \label{DGgamma-} \sup_{t \in (s_1, t_0 - \sigma_\theta)} & \int_{B_{r + \varepsilon}^-} (u-k)_+^2 (x,t) \mu_- (x) dx + \sup_{t \in (t_0 - \sigma_{\tilde\theta}, t_0)} \int_{I^{r, \varepsilon}_-} (u-k)_+^2 (x,t) \mu_+(x) \, dx \nonumber \\
& \hskip130pt + \iint_{Q_{R;r, \theta}^{\upbeta,-,\varepsilon}} |D(u-k)_+|^2\, \lambda \, dx ds \leqslant \nonumber \\ \leqslant & \, \gamma \Bigg[
\sup_{t \in (t_0 - \sigma_{\tilde\theta}, t_0)} \int_{I_{r, \tilde{r}-r + \varepsilon}^-} (u-k)_+^2 (x,t) \mu_- (x) \, dx + \\ & \hskip30pt + \sup_{t \in (s_1, t_0 - \sigma_\theta)} \int_{I^{r, \tilde{r}-r + \varepsilon}_-} (u-k)_+^2 (x,t) \mu_+ (x) \, dx + \nonumber \\ & \hskip30pt + \frac{1}{(\tilde{r} - r)^2}
\iint_{Q_{R;r, {\tilde\theta}}^{\upbeta, -,\tilde{r}-r + \varepsilon}}
(u-k)_+^2\, \left( \frac{\mu_-}{\upbeta \, h(x_0, R)} + \lambda \right) \, dx dt \Bigg] ; \nonumber \end{align} $iii \, )$ for $s_1$ and $s_2$ arbitrary and $B_R(x_0) \times [s_1, s_2] \subset \Omega \times (0,T)$ \begin{align} \label{DGgamma0}
\iint_{Q_{R;r; s_1, s_2}^{0,\varepsilon} (x_0)} & |D(u-k)_+|^2 \lambda \, dx dt \leqslant \nonumber \\ & \leqslant \gamma \Bigg[ \sup_{t \in (s_1, s_2)} \int_{ I_0^{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,t) \mu_- (x) \, dx + \nonumber \\ & \hskip50pt + \sup_{t \in (s_1, s_2)}\int_{ I_0^{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,t) \mu_+ (x) \, dx \, + \\ & \hskip50pt + \frac{1}{(\tilde{r} - r)^2}
\iint_{Q_{R; r; s_1, s_2}^{0,\tilde{r}-r + \varepsilon} } (u-k)_+^2 \, \lambda \, dx dt \Bigg] \, ; \nonumber \end{align} $iv \, )$ for every $s_2 > t_0$ such that $B_R(x_0) \times [t_0, s_2] \subset \Omega \times (0,T)$ \begin{align} \label{DGgamma+_1} \sup_{t \in (t_0, s_2)} \int_{B_r^+} & (u-k)_+^2 (x,t) \mu_+ (x) dx \leqslant \int_{B_{\tilde{r}}^+} (u-k)_+^2 (x,t_0) \mu_+ (x) dx \, + \nonumber \\ + & \sup_{t \in (t_0, s_2)} \int_{I^{r, \tilde{r}-r}_+} (u-k)_+^2 (x,t) \mu_-(x) \, dx + \\ & \hskip50pt + \, \gamma \, \frac{1}{(\tilde{r} - r)^2}
\int_{t_0}^{s_2} \!\!\!\! \int_{B_{\tilde{r}}^+ \cup I^{r, \tilde{r}-r}_+} (u-k)_+^2\, \lambda \, dx dt ; \nonumber \end{align} $v \, )$ for every $s_1 < t_0$ such that $B_R(x_0) \times [s_1, t_0] \subset \Omega \times (0,T)$ \begin{align} \label{DGgamma+_2} \sup_{t \in (s_1, t_0)} \int_{B_r^-} & (u-k)_+^2 (x,t) \mu_- (x) dx \leqslant \int_{B_{\tilde{r}}^-} (u-k)_+^2 (x,t_0) \mu_- (x) dx \, + \nonumber \\ + & \sup_{t \in (s_1, t_0)} \int_{I^{r, \tilde{r}-r}_-} (u-k)_+^2 (x,t) \mu_+(x) \, dx + \\ & \hskip50pt + \, \gamma \, \frac{1}{(\tilde{r} - r)^2}
\int_{s_1}^{t_0} \!\!\!\! \int_{B_{\tilde{r}}^- \cup I^{r, \tilde{r}-r}_-} (u-k)_+^2\, \lambda \, dx dt \, . \nonumber \end{align} We will say that $u$ belongs to $DG_-(\Omega, T, \mu, \lambda, \gamma)$ if the estimates above hold with $(u-k)_-$ in the place of $(u-k)_+$. We will say that $u$ belongs to $DG(\Omega, T, \mu, \lambda, \gamma)$ if $u \in DG_+(\Omega, T, \mu, \lambda, \gamma) \cap DG_-(\Omega, T, \mu, \lambda, \gamma)$. \end{definition}
\begin{oss} \rm -\ \label{notachesegueladefinizione}
Notice that if $|\mu|(B_R(x_0)) = 0$, that is $B_R (x_0) \subset \Omega_0$, \eqref{DGgamma+}, \eqref{DGgamma-} and \eqref{DGgamma0} coincide and reduce to \begin{align*}
\int_{s_1}^{s_2} \!\!\! \int_{B_{r}} |D(u-k)_+|^2\, \lambda \, dx dt \leqslant \, \gamma \, \frac{1}{(\tilde{r} - r)^2}
\int_{s_1}^{s_2} \!\!\! \int_{B_{r}} (u-k)_+^2\,\lambda \, dx dt \end{align*} by which we can derive \begin{align} \label{tempofissato}
\int_{B_{r}(x_0)} |D(u-k)_+|^2 (x,t)\, \lambda(x) \, dx \leqslant
\gamma \, \frac{1}{(\tilde{r} - r)^2} \int_{B_{\tilde{r}}(x_0)} (u-k)_+^2 (x,t)\, \lambda (x) \, dx \end{align}
for {\em almost} every $t \in [s_1, s_2]$. Since by assumption $u \in L^{\infty}_{\rm loc} ((0,T); L^2_{\rm{loc}} (\Omega, |\mu|_{\lambda}))$ we get as a by-product that $u \in L^{\infty}_{\rm loc} ((0,T); H^1_{\rm{loc}} (\Omega_0, \lambda, \lambda))$. \\
In some cases one can derive that \eqref{tempofissato} holds for {\em every} $t \in [s_1, s_2]$ (see the previous section). \end{oss}
\noindent The estimates given in Definition \ref{classiDG} are also known as \emph{energy estimates} or \emph{Caccioppoli's estimates} and we will often refer to them in this way. \\ \ \\ \noindent Now denote by $\mathcal{K}(\Omega \times (0,T))$ the set $\{ K \subset \Omega
\times (0,T) \, | \, K \textrm{ compact} \}$ and consider the functional $$
E: L^2(0,T; H^{1}(\Omega)) \times \mathcal{K}(\Omega \times (0,T)) \to {\bf R} \, , \hskip30pt E(w, K) = \frac{1}{2} \, \int\!\!\!\int_K |Dw|^2 \lambda \, dx dt \, . $$ We are going to define a $Q$-minimum following the definition given in \cite{wieser} (see also \cite{giagiu} for the elliptic case).
\begin{definition} \label{quasi-min} We will call a function $u:\Omega \times (0,T) \to {\bf R}$ a $Q$-minimum for the equation \eqref{equazione}
if $u \in L^2_{\rm loc}(0,T; H^{1}_{\rm loc}(\Omega,$ $|\mu|, \lambda)) \cap L^{\infty}_{\rm loc} ((0,T); L^2_{\rm{loc}} (\Omega, |\mu|_{\lambda}))$
and there is a constant $Q \geqslant 1$ such that \begin{equation} \label{Qminpar} - \int \!\!\! \int_{{\rm supp} (\phi)} u \frac{\partial \phi}{\partial t} \mu \, dx dt +
E(u, {\rm supp}(\phi)) \leqslant Q \, E (u-\phi,{\rm supp}(\phi)) \end{equation} for every $\phi \in C^1_c (\Omega \times (0,T))$. \end{definition}
\begin{oss} \rm -\ \label{notasuiQminimi}
It is easy to verify that if $u \in L^2(0,T; H^{1}(\Omega, |\mu|, \lambda))$ is a $Q$-minimum for equation \eqref{equazione} then the map $L \phi := - \int \!\! \int_{{\rm supp} (\phi)} u \frac{\partial \phi}{\partial t} \mu \, dx dt$ with $\phi \in C^1_c (\Omega \times (0,T))$ turns out to be a linear and continuous form on
$L^2(0,T; H^{1}_0(\Omega,|\mu|, \lambda))$, i.e. $L$ belongs to the dual space $L^2(0,T; (H^{1}(\Omega,|\mu|, \lambda))')$ (the proof can be obtained following the analogous one in \cite{wieser}). \end{oss}
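\noindent A possible sketch of this argument (under the integrability assumptions above, along the lines of \cite{wieser}): applying \eqref{Qminpar} with $s \phi$, $s > 0$, in place of $\phi$, and expanding the quadratic functional $E(u - s\phi, K) = E(u, K) - s \int\!\!\int_K Du \cdot D\phi \, \lambda \, dx dt + s^2 E(\phi, K)$ with $K = {\rm supp}(\phi)$, one gets $$ s \, L \phi \leqslant (Q-1) \, E(u, K) + s \, Q \left| \int\!\!\!\int_K Du \cdot D\phi \, \lambda \, dx dt \right| + s^2 \, Q \, E(\phi, K) \, . $$ By the Cauchy--Schwarz inequality $\big| \int\!\!\int_K Du \cdot D\phi \, \lambda \, dx dt \big| \leqslant 2 \sqrt{E(u,K) \, E(\phi,K)}$, so dividing by $s$, optimizing over $s > 0$ and repeating the argument with $-\phi$ in the place of $\phi$, one obtains $$ |L \phi| \leqslant c(Q) \, \sqrt{E(u, K)} \, \| D\phi \|_{L^2(K, \lambda)} \, ; $$ since $E(u, K) \leqslant E(u, \Omega \times (0,T)) < +\infty$, this gives the continuity of $L$.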
\noindent {\bf Solutions are $Q$-minima} - Following the analogous proof in \cite{wieser} one can verify that $u$ is a solution of \eqref{equazione} if and only if $u$ is a $1$-minimum for \eqref{equazione}. \\ A second interesting fact is that a solution of \eqref{equazionegenerale} is a $Q$-minimum for the equation \eqref{equazione}. Indeed using \eqref{proprieta`} it is easy to see that a solution of \eqref{equazionegenerale} satisfies \eqref{Qminpar} with $Q = 2 L M$. \\
\noindent {\bf $Q$-minima belong to the class $DG$} - We now want to show that the De Giorgi class defined above contains $Q$-minima and in particular solutions of \eqref{equazione}. In Section \ref{secHarnack} we will show a Harnack type inequality, and then H\"older continuity, for functions in the De Giorgi classes, and consequently for $Q$-minima and solutions of \eqref{equazione}. To show this, first of all notice that if $u$ satisfies \eqref{Qminpar} for every $\phi \in C^1_c (\Omega \times (0,T))$ then, by density of $C^1_c (\Omega \times (0,T))$ in $\mathcal{W}$, $u$ satisfies \eqref{Qminpar} also for $\phi \in \mathcal{W}$; in particular we can choose $\phi = (u-k)_+ \zeta^2$ with $\zeta$ a Lipschitz continuous and non-negative function such that $\zeta(\cdot, t) \in \text{Lip}_0(B_{R}(x_0))$,
$|\nabla \zeta|, \zeta_t \in L^{\infty}$, $\zeta_t \mu \geqslant 0$. \\ To show this fact it is sufficient to consider a point $(x_0, t_0) \in \Omega \times (0,T)$, a function $(u-k)_+ \zeta^2$ with $\zeta$ defined in $[s_1, s_2] \times B_{R}(x_0)$ with $0 < s_1 < t_0 < s_2 < T$ and $s_2 - t_0 = \upbeta \, h(x_0, R) R^2$ if $\mu_+ (B_R(x_0)) > 0$, $t_0 - s_1 = \upbeta \, h(x_0, R) R^2$ if $\mu_- (B_R(x_0)) > 0$, while if $B_R(x_0) \subset \Omega_0$ we take $s_1$ and $s_2$ arbitrary; then for arbitrary $\sigma_1, \sigma_2$ satisfying $s_1 \leqslant \sigma_1 < \sigma_2 \leqslant s_2$ choose $\phi_\epsilon = (u-k)_+ \zeta^2 \tau_\epsilon$ where $$ \tau_\epsilon(t) = \left\{ \begin{array}{ll} 1 & t \in [\sigma_1, \sigma_2] \\ \epsilon^{-1}(t - \sigma_1 + \epsilon) & t \in [\sigma_1 - \epsilon, \sigma_1] \\ - \epsilon^{-1}(t - \sigma_2 - \epsilon) & t \in [\sigma_2, \sigma_2 + \epsilon] \\ 0 & t \not\in [\sigma_1 - \epsilon, \sigma_2 + \epsilon] \end{array} \right. $$ for a suitable $\epsilon > 0$. Taking such a $\phi_\epsilon$ in \eqref{Qminpar} and letting $\epsilon$ go to zero one gets that
\begin{equation} \label{prelim_DG} \begin{array}{l} {\displaystyle \frac{1}{2} \int_{B_{R}} (u-k)_+^2(x,\sigma_2) \zeta^2(x,\sigma_2) \, \mu(x) \, dx + E(u, K) \leqslant Q \, E (u-\phi,K) + } \\ [1.5em] {\displaystyle \hskip20pt + \frac{1}{2} \int_{B_{R}} (u-k)_+^2(x,\sigma_1) \zeta^2(x,\sigma_1) \, \mu (x) \, dx + \int_{\sigma_1}^{\sigma_2} \!\!\! \int_{B_{R}} (u-k)_+^2 \zeta \zeta_t \, \mu \, dx dt } \end{array} \end{equation} where we simply denote $B_{R}$ instead of $B_{R}(x_0)$ and $K$ denotes the part of the support of $\zeta$ contained in $B_R \times [\sigma_1, \sigma_2]$. \\ \ \\ \noindent $1^{\circ}$ - First suppose $\mu_+(B_{R}(x_0)) > 0$ and show \eqref{DGgamma+} and \eqref{DGgamma+_1}. We proceed as follows: consider $\phi = (u-k)_+ \zeta^2$ with $\zeta$ a Lipschitz continuous function to be chosen later. Since $$ u - \phi = \left\{ \begin{array}{ll} u & u \leqslant k \\ (u-k)(1-\zeta^2) + k & u > k \, . \end{array} \right. $$ and ${\rm supp}(\phi) \subset \{u > k\}$ we have that
\begin{equation} \label{prelim_DG_2} \begin{array}{l} {\displaystyle E ( u-\phi, {\rm supp}(\phi)) = \frac{1}{2}
\int \!\!\! \int_{{\rm supp}(\phi)} \left| D \left[ (u-k)_ + (1-\zeta^2) \right] \right|^2 \lambda \, dx dt \leqslant } \\ [1.5em]
\hskip20pt {\displaystyle \leqslant\int \!\!\! \int_{{\rm supp}(\phi)} \left[ (1-\zeta^2)^2 |D(u-k)_+|^2
+ 4 (u-k)_+^2 \zeta^2 |D \zeta|^2 \right] \lambda \, dx dt \, . } \end{array} \end{equation} We first prove \eqref{DGgamma+}. We consider $r, \tilde{r} > 0$ with $r < \tilde{r} < R$, $t_0, s_2 \in (0,T)$ with $s_2 - t_0 = \upbeta \, h(x_0,R) R^2$, $\theta, \tilde\theta$ such that $0 \leqslant \tilde\theta < \theta < 1$. By assuming in addition that for $\varepsilon \geqslant 0$ (and sufficiently small, say $\varepsilon < R - \tilde{r}$) $$ K := \text{supp}(\zeta) \cap \big( B_R (x_0) \times [s_1, s_2] \big) \subset Q_{R;{r}, \tilde\theta}^{\upbeta, +,\tilde{r} - r + \varepsilon} (x_0,t_0) $$
and that $|\zeta| \leqslant 1$, on the right hand side we estimate $(1-\zeta^2)^2$ by $1-\zeta^2$ and the second term by $4 (u-k)_+^2 |D \zeta|^2$. Moreover using the assumption that $u$ is a $Q$-minimum and since $E ( u, K) = E ( (u-k)_+, K)$ we get that for every $\tau_1, \tau_2 \in [t_0, s_2]$ with $\tau_1 < \tau_2$ \begin{align}
\int_{B_{\tilde{r} + \varepsilon}} & (u-k)_+^2(x,\tau_2) \zeta^2(x,\tau_2) \mu (x) \, dx - \int_{B_{\tilde{r} + \varepsilon}} (u-k)_+^2(x,\tau_1) \zeta^2(x,\tau_1) \mu (x) \, dx + \nonumber \\
& \hskip180pt + 2Q \int_{\tau_1}^{\tau_2} \!\!\! \int_{B_{\tilde{r} + \varepsilon}} |D(u-k)_+|^2 \zeta^2 \lambda \, dx dt \leqslant \nonumber \\ & \leqslant 2 \int_{\tau_1}^{\tau_2} \!\!\! \int_{B_{\tilde{r} + \varepsilon}} (u-k)_+^2 \, \zeta \zeta_t \, \mu \, dx dt +
8Q \int_{\tau_1}^{\tau_2} \!\!\! \int_{B_{\tilde{r} + \varepsilon}} (u-k)_+^2 |D \zeta|^2 \lambda \, dx dt + \nonumber \\ & \hskip20pt + (2Q-1) \iint_{Q_{R; r, \tilde\theta}^{\upbeta,+,\tilde{r} - r + \varepsilon} \cap (B_{R} \times [\tau_1, \tau_2])}
|D(u-k)_+|^2 \lambda \, dx dt \, . \nonumber \end{align} We then choose a Lipschitz continuous function $\zeta$ (see also Figure A below where we show an example where $\mu > 0$ and $\mu < 0$) satisfying also \begin{equation} \begin{array}{c} \zeta = 1 \hskip10pt \text{ in } Q_{R; r, \theta}^{\upbeta,+,\varepsilon} (x_0,t_0) \, , \hskip20pt \zeta = 0 \hskip10pt \text{ in } Q_R^{\upbeta,\texttt{\,>}}(x_0,t_0) \setminus
Q_{R; r, \tilde\theta}^{\upbeta,+,\tilde{r} - r + \varepsilon} (x_0,t_0) \, , \\ [1em]
{\displaystyle \hskip8pt |D \zeta| \leqslant \frac{1}{\tilde{r} - r} } \, ,
\hskip20pt \theta - \tilde\theta = {\displaystyle \frac{(\tilde{r} - r)^2}{R^2} } \, , \\ [1em] \label{puredifave}
{\displaystyle |\zeta_t| \leqslant \, \frac{1}{\sigma_{\theta} - \sigma_{\tilde\theta}} =
\frac{1}{\upbeta \, h(x_0,R) (\tilde{r} - r)^2 } }\, , {\displaystyle \hskip15pt \zeta_t \mu \geqslant 0 } \, , \hskip12pt \zeta_t \mu_- = 0 \, . \end{array} \end{equation} \ \\ \ \\ \ \\ \begin{picture}(150,200)(-180,0) \put (-105,200){\linethickness{1pt}\line(1,0){210}} \put (-105,50){\linethickness{1pt}\line(1,0){210}} \put (-105,50){\linethickness{1pt}\line(0,1){150}} \put (105,50){\linethickness{1pt}\line(0,1){150}}
\put (30,40){\line(0,1){170}}
\put (-180,125){\line(1,0){320}} \put (-170,40){\line(0,1){170}}
\put (-95,200){\line(0,-1){4}} \put (-95,195){\line(0,-1){4}} \put (-95,190){\line(0,-1){4}} \put (-95,185){\line(0,-1){4}} \put (-95,180){\line(0,-1){4}}
\put (-95,175){\line(1,0){4}} \put (-90,175){\line(1,0){4}} \put (-85,175){\line(1,0){4}} \put (-80,175){\line(1,0){4}} \put (-75,175){\line(1,0){4}} \put (-70,175){\line(1,0){4}} \put (-65,175){\line(1,0){4}} \put (-60,175){\line(1,0){4}} \put (-55,175){\line(1,0){4}} \put (-50,175){\line(1,0){4}} \put (-45,175){\line(1,0){4}} \put (-40,175){\line(1,0){4}} \put (-35,175){\line(1,0){4}} \put (-30,175){\line(1,0){4}} \put (-25,175){\line(1,0){4}} \put (-20,175){\line(1,0){4}} \put (-15,175){\line(1,0){4}} \put (-10,175){\line(1,0){4}} \put (-5,175){\line(1,0){4}} \put (0,175){\line(1,0){4}} \put (5,175){\line(1,0){4}} \put (10,175){\line(1,0){4}} \put (15,175){\line(1,0){4}}
\put (20,175){\line(0,-1){4}} \put (20,170){\line(0,-1){4}} \put (20,165){\line(0,-1){4}} \put (20,160){\line(0,-1){4}} \put (20,155){\line(0,-1){4}} \put (20,150){\line(0,-1){4}} \put (20,145){\line(0,-1){4}} \put (20,140){\line(0,-1){4}} \put (20,135){\line(0,-1){4}} \put (20,130){\line(0,-1){4}}
\put (40,175){\line(0,1){4}} \put (40,180){\line(0,1){4}} \put (40,185){\line(0,1){4}} \put (40,190){\line(0,1){4}} \put (40,195){\line(0,1){4}} \put (40,170){\line(0,1){4}} \put (40,165){\line(0,1){4}} \put (40,160){\line(0,1){4}} \put (40,155){\line(0,1){4}} \put (40,150){\line(0,1){4}} \put (40,145){\line(0,1){4}} \put (40,140){\line(0,1){4}} \put (40,135){\line(0,1){4}} \put (40,130){\line(0,1){4}} \put (40,125){\line(0,1){4}}
\put (50,191){\line(0,1){1}} \put (50,194){\line(0,1){1}} \put (50,197){\line(0,1){1}} \put (50,188){\line(0,1){1}} \put (50,185){\line(0,1){1}} \put (50,182){\line(0,1){1}} \put (50,179){\line(0,1){1}} \put (50,176){\line(0,1){1}} \put (50,173){\line(0,1){1}} \put (50,170){\line(0,1){1}} \put (50,167){\line(0,1){1}} \put (50,164){\line(0,1){1}} \put (50,161){\line(0,1){1}} \put (50,158){\line(0,1){1}} \put (50,155){\line(0,1){1}} \put (50,152){\line(0,1){1}} \put (50,149){\line(0,1){1}} \put (50,146){\line(0,1){1}} \put (50,143){\line(0,1){1}} \put (50,140){\line(0,1){1}} \put (50,137){\line(0,1){1}} \put (50,134){\line(0,1){1}} \put (50,131){\line(0,1){1}} \put (50,128){\line(0,1){1}} \put (50,125){\line(0,1){1}}
\put (10,200){\linethickness{2pt}\line(1,0){40}} \put (35,210){$I_{R,\tilde{r} - r + \varepsilon}(x_0) \times \{ s_2 \}$} \put (20,50){\linethickness{2pt}\line(1,0){20}} \put (20,30){$I_{R,\varepsilon}(x_0) \times \{ s_1 \}$}
\put (-40,140){\tiny$\mu > 0$} \put (40,110){\tiny$\mu < 0$ \text{ or } $\mu = 0$}
\put (0,125){\linethickness{2pt}\line(1,0){1}} \put (-10,130){\tiny$(x_0,t_0)$}
\put (-185,198){$s_2$} \put (-169,200){\line(1,0){1}} \put (-166,200){\line(1,0){1}} \put (-163,200){\line(1,0){1}} \put (-160,200){\line(1,0){1}} \put (-157,200){\line(1,0){1}} \put (-154,200){\line(1,0){1}} \put (-151,200){\line(1,0){1}} \put (-148,200){\line(1,0){1}} \put (-145,200){\line(1,0){1}} \put (-141,200){\line(1,0){1}} \put (-138,200){\line(1,0){1}} \put (-135,200){\line(1,0){1}} \put (-132,200){\line(1,0){1}} \put (-129,200){\line(1,0){1}} \put (-126,200){\line(1,0){1}} \put (-123,200){\line(1,0){1}} \put (-120,200){\line(1,0){1}} \put (-117,200){\line(1,0){1}} \put (-117,200){\line(1,0){1}} \put (-114,200){\line(1,0){1}} \put (-111,200){\line(1,0){1}}
\put (10,158){\line(0,-1){1}} \put (10,155){\line(0,-1){1}} \put (10,152){\line(0,-1){1}} \put (10,149){\line(0,-1){1}} \put (10,146){\line(0,-1){1}} \put (10,143){\line(0,-1){1}} \put (10,140){\line(0,-1){1}} \put (10,137){\line(0,-1){1}}
\put (-185,48){$s_1$} \put (-169,50){\line(1,0){1}} \put (-166,50){\line(1,0){1}} \put (-163,50){\line(1,0){1}} \put (-160,50){\line(1,0){1}} \put (-157,50){\line(1,0){1}} \put (-154,50){\line(1,0){1}} \put (-151,50){\line(1,0){1}} \put (-148,50){\line(1,0){1}} \put (-145,50){\line(1,0){1}} \put (-141,50){\line(1,0){1}} \put (-138,50){\line(1,0){1}} \put (-135,50){\line(1,0){1}} \put (-132,50){\line(1,0){1}} \put (-129,50){\line(1,0){1}} \put (-126,50){\line(1,0){1}} \put (-123,50){\line(1,0){1}} \put (-120,50){\line(1,0){1}} \put (-117,50){\line(1,0){1}} \put (-114,50){\line(1,0){1}} \put (-111,50){\line(1,0){1}}
\put (-105,160){\line(1,0){1}} \put (-102,160){\line(1,0){1}} \put (-99,160){\line(1,0){1}} \put (-96,160){\line(1,0){1}} \put (-93,160){\line(1,0){1}} \put (-90,160){\line(1,0){1}} \put (-87,160){\line(1,0){1}} \put (-84,160){\line(1,0){1}} \put (-81,160){\line(1,0){1}} \put (-78,160){\line(1,0){1}} \put (-75,160){\line(1,0){1}} \put (-72,160){\line(1,0){1}} \put (-69,160){\line(1,0){1}} \put (-66,160){\line(1,0){1}} \put (-63,160){\line(1,0){1}} \put (-60,160){\line(1,0){1}} \put (-57,160){\line(1,0){1}} \put (-54,160){\line(1,0){1}} \put (-51,160){\line(1,0){1}} \put (-48,160){\line(1,0){1}} \put (-45,160){\line(1,0){1}} \put (-42,160){\line(1,0){1}} \put (-39,160){\line(1,0){1}} \put (-36,160){\line(1,0){1}} \put (-33,160){\line(1,0){1}} \put (-30,160){\line(1,0){1}} \put (-27,160){\line(1,0){1}} \put (-24,160){\line(1,0){1}} \put (-21,160){\line(1,0){1}} \put (-18,160){\line(1,0){1}} \put (-15,160){\line(1,0){1}} \put (-12,160){\line(1,0){1}} \put (-9,160){\line(1,0){1}} \put (-6,160){\line(1,0){1}} \put (-3,160){\line(1,0){1}} \put (0,160){\line(1,0){1}} \put (3,160){\line(1,0){1}} \put (6,160){\line(1,0){1}} \put (9,160){\line(1,0){1}}
\put (-30,10){Figure A}
\end{picture}
\ \\ \ \\ \ \\ \noindent Plugging such a $\zeta$ into the last inequality and dividing by $2Q$ we get that \begin{align} \label{stimaprimadellemma} \frac{1}{2Q} & \int_{B_{r + \varepsilon}^+ \cup I^{r, \tilde{r} - r + \varepsilon}_+} (u-k)_+^2(x,\tau_2) \mu (x) \, dx - \frac{1}{2Q} \int_{ I^+_{r, \tilde{r} - r + \varepsilon} \cup I^{r, \varepsilon}_+} (u-k)_+^2(x,\tau_1) \mu (x) \, dx + \nonumber \\ & \hskip80pt
+ \iint_{Q_{R; r, \theta}^{\upbeta, +,\varepsilon} \cap (B_{R} \times [\tau_1, \tau_2])} |D(u-k)_+|^2 \lambda \, dx dt \leqslant \nonumber \\ & \hskip20pt \leqslant \ \frac{1}{2Q} \,
\frac{1}{(\tilde{r} - r)^2} \iint_{Q_{R; r, \tilde\theta}^{\upbeta, +,\tilde{r} - r + \varepsilon} \cap (B_{R} \times [\tau_1, \tau_2])}
(u-k)_+^2 \, \left( 8Q \lambda + \frac{2}{\upbeta \, h(x_0, R)} \mu_+ \right) dx dt + \\ & \hskip40pt + \frac{2Q-1}{2Q} \iint_{Q_{R; r, \tilde\theta}^{\upbeta, +,\tilde{r} - r + \varepsilon} \cap (B_{R} \times [\tau_1, \tau_2]) }
|D(u-k)_+|^2 \lambda \, dx dt \nonumber \end{align} with $$ \tau_1 \in [t_0, t_0 + \sigma_{\tilde\theta}(R)] \quad \text{and} \quad \tau_2 \in [t_0 + \sigma_{\theta}(R), s_2] \, .
$$
\noindent Before going on with the proof we state two lemmas; the first is a slight generalization of Lemma 5.1 in \cite{giaquinta} (see also Section 4 in \cite{wieser}).
\begin{lemma} \label{giaq} Consider some non-negative functions
$f, g_1, g_2 : [t_0, s_2] \times (0,R] \times [0,R] \to [0,M]$, $F , G : [t_0, s_2]^2 \times (0,R] \times [0, 1) \times [0,R] \to (0, M]$, with $M$ a positive constant, satisfying
\begin{align} \label{iterLemma} f(\tau_2, \rho, \varepsilon) \, + & \, g_2(\tau_1, \rho, \varepsilon) + F(\tau_1, \tau_2; \rho, \vartheta, \varepsilon) \leqslant g_1(\tau_1, \rho, \tilde\varepsilon) + g_2(\tau_2, \rho, \tilde\varepsilon) \, + \nonumber \\ & + \ {\displaystyle \frac{1}{(\tilde\varepsilon - \varepsilon)^2}} \, G(\tau_1, \tau_2; \rho, \tilde\vartheta, \tilde\varepsilon) +
\delta \,F (\tau_1, \tau_2; \rho, \tilde\vartheta, \tilde\varepsilon) \end{align} and \begin{align*} g_1(\tau_1, \rho, \varepsilon) & \leqslant g_1(\tau_1, \tilde\rho, \tilde\varepsilon) \, , \hskip10pt
g_2(\tau_2, \rho, \varepsilon) \leqslant g_2(\tau_2, \tilde\rho, \tilde\varepsilon) \, , \\ & F (\tau_1, \tau_2; \rho, \vartheta, \varepsilon) \leqslant F (\tau_1, \tau_2; \tilde\rho, \tilde\vartheta, \tilde\varepsilon) \end{align*} for every $\tau_1, \tau_2 \in [t_0, s_2]$, $\tau_1 < \tau_2$,
for every $\rho \leqslant \tilde\rho, \tilde\vartheta \leqslant \vartheta , \varepsilon \leqslant \tilde\varepsilon$ and $\delta \in (0,1)$. Then there is a constant $c > 1$ depending only on $\delta$ such that \begin{align*} f(\tau_2, \rho, \varepsilon) & + g_2(\tau_1, \rho, \varepsilon) + F(\tau_1, \tau_2; \rho, \vartheta, \varepsilon) \leqslant \\ & \ \leqslant \frac{1}{1 - \delta} \big[ g_1(\tau_1, \rho, \tilde\varepsilon) + g_2(\tau_2, \rho, \tilde\varepsilon) \big] + {\displaystyle \frac{c}{(\tilde\varepsilon - \varepsilon)^2}} \, G(\tau_1, \tau_2; \rho, \tilde\vartheta, \tilde\varepsilon) \, . \end{align*} \end{lemma} \noindent {\it Proof}\ \ -\ \ We take the sequences $\vartheta_n$ and $\varepsilon_n$ defined by ($\eta$ to be chosen) \begin{align*}
& \vartheta_0 = \vartheta \, , \hskip10pt \vartheta_{n+1} = \vartheta_n + (1-\eta) (\tilde\vartheta - \vartheta) \eta^n ,\qquad \eta \in (0,1) \, , \\ & \varepsilon_0 = \varepsilon \, , \hskip10pt \varepsilon_{n+1} = \varepsilon_n + (1-\eta) (\tilde\varepsilon - \varepsilon) \eta^n ,\qquad \eta \in (0,1) \, . \end{align*} Notice that \begin{align*} \varepsilon_{n+1} - \varepsilon_0 & = \varepsilon_{n+1} - \varepsilon = (\tilde\varepsilon - \varepsilon) (1 - \eta^{n+1}) \, , \\ \varepsilon_0 + \sum_{n=0}^{\infty} (\varepsilon_{n+1} & - \varepsilon_n) = \tilde\varepsilon \, , \quad \quad \vartheta_0 + \sum_{n=0}^{\infty} (\vartheta_{n+1} - \vartheta_n) = \tilde\vartheta \, . \end{align*} By \eqref{iterLemma} we have \begin{align*} f(\tau_2, \rho, \varepsilon_0) & + g_2(\tau_1, \rho, \varepsilon_0) + F (\tau_1, \tau_2; \rho, \vartheta_0, \varepsilon_0) \leqslant \\ \leqslant & \ g_1(\tau_1, \rho, \varepsilon_1) + g_2(\tau_2, \rho, \varepsilon_1) \ + \\ & + \ {\displaystyle \frac{1}{(\varepsilon_1 - \varepsilon_0)^2}} \, G(\tau_1, \tau_2; \rho, \vartheta_{1}, \varepsilon_1) +
\delta \,F (\tau_1, \tau_2; \rho, \vartheta_{1}, \varepsilon_1) \leqslant \\ \leqslant & \ g_1(\tau_1, \rho, \varepsilon_1) + g_2(\tau_2, \rho, \varepsilon_1) \ +
{\displaystyle \frac{1}{(\varepsilon_1 - \varepsilon_0)^2}} \, G(\tau_1, \tau_2; \rho, \vartheta_{1}, \varepsilon_1) + \\ & + \ \delta \Bigg[ g_1(\tau_1, \rho, \varepsilon_2) + g_2(\tau_2, \rho, \varepsilon_2) + \\ & + {\displaystyle \frac{1}{(\varepsilon_2 - \varepsilon_1)^2}} \, G(\tau_1, \tau_2; \rho, \vartheta_{2}, \varepsilon_2) +
\delta F (\tau_1, \tau_2; \rho, \vartheta_{2}, \varepsilon_2) \Bigg] \, . \end{align*} By the monotonicity property of the functions we have in fact \begin{align*} f(\tau_2, \rho, \varepsilon_0) & + g_2(\tau_1, \rho, \varepsilon_0) + F (\tau_1, \tau_2; \rho, \vartheta_0, \varepsilon_0) \leqslant \\ \leqslant & \ (1 + \delta) \Big[ g_1(\tau_1, \rho, \varepsilon_2 ) + g_2(\tau_2, \rho, \varepsilon_2) \Big]\ + \\ & \ + \left(\frac{1}{(\varepsilon_1 - \varepsilon_0)^2} + \frac{\delta}{(\varepsilon_2 - \varepsilon_1)^2} \right)
G(\tau_1, \tau_2; \rho, \vartheta_{2}, \varepsilon_2) + \\ & \ + \delta^2 F (\tau_1, \tau_2; \rho, \vartheta_{2}, \varepsilon_2) \, . \end{align*} Iterating $N$ times these inequalities we first get \begin{align*} f(\tau_2, \rho, \varepsilon_0) & + g_2(\tau_1, \rho, \varepsilon_0) + F (\tau_1, \tau_2; \rho, \vartheta_0, \varepsilon_0) \leqslant
\dots \, \leqslant \\ \leqslant & \ \Big[ g_1(\tau_1, \rho, \varepsilon_{N+1}) +
g_2(\tau_2, \rho, \varepsilon_{N+1}) \Big] \sum_{n=0}^N \delta^n + \\ & \ + G(\tau_1, \tau_2; \rho, \vartheta_{N+1}, \varepsilon_{N+1}) \sum_{n=0}^N \frac{\delta^n}{(\varepsilon_{n+1} - \varepsilon_n)^2}+ \\ & \ + \delta^{N+1} F (\tau_1, \tau_2; \rho, \vartheta_{N+1}, \varepsilon_{N+1}) \, ; \end{align*} then taking the limit as $N \to +\infty$ we finally obtain \begin{align*} f(\tau_2, \rho, \varepsilon) & + g_2(\tau_1, \rho, \varepsilon) + F (\tau_1, \tau_2; \rho, \vartheta, \varepsilon) \leqslant \\ & \leqslant \ \frac{1}{1 - \delta} \ \Big[ g_1(\tau_1, \rho, \tilde\varepsilon) +
g_2(\tau_2, \rho, \tilde\varepsilon) \Big] + \\ & \quad \quad \quad + G(\tau_1, \tau_2; \rho, \tilde\vartheta, \tilde\varepsilon)
\frac{1}{(\tilde\varepsilon - \varepsilon)^2}\, \frac{1}{(1-\eta)^2} \, \sum_{n=0}^{\infty} \left( \frac{\delta}{\eta^2}\right)^n \, . \end{align*} Taking $\eta \in (\sqrt{\delta}, 1)$ we are done. Taking for instance $\eta = \sqrt{(1+\delta)/2}$ one could have $c = (1 + \delta)/(1 - \delta)$.
$\square$ \\
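\noindent For the reader's convenience we make the computation of the constant explicit: with $\eta = \sqrt{(1+\delta)/2}$ we have $\eta^2 = (1+\delta)/2 > \delta$ and $$ \sum_{n=0}^{\infty} \left( \frac{\delta}{\eta^2}\right)^n = \sum_{n=0}^{\infty} \left( \frac{2\delta}{1+\delta}\right)^n = \frac{1}{1 - \frac{2\delta}{1+\delta}} = \frac{1+\delta}{1-\delta} \, ; $$ in particular for $\delta = (2Q-1)/2Q$ $$ \frac{1+\delta}{1-\delta} = \frac{(4Q-1)/2Q}{1/2Q} = 4Q-1 \, , $$ which is the value $c_Q = 4Q-1$ used below.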
\noindent Call \begin{align} \label{funzioncine} f (\tau, \rho, \varepsilon) & := \frac{1}{2Q} \int_{B_{\rho + \varepsilon}^+} (u-k)_+^2(x,\tau) \mu_+ (x) \, dx \, , \nonumber \\ g_2 (\tau, \rho, \varepsilon) & := \frac{1}{2Q} \int_{I^{\rho, \varepsilon}_+} (u-k)_+^2(x,\tau) \mu_- (x) \, dx \, , \nonumber \\ g_1 (\tau, \rho, \varepsilon) & := \frac{1}{2Q} \int_{I^+_{\rho, \varepsilon}} (u-k)_+^2(x,\tau) \mu_+ (x) \, dx \, , \\ F (\tau_1, \tau_2; \rho, \vartheta, \varepsilon)
& := \iint_{Q_{R; \rho, \vartheta}^{\upbeta, +,\varepsilon} \cap (B_{R} \times [\tau_1, \tau_2])} |D(u-k)_+|^2 \lambda \, dx dt \, , \nonumber \\ G (\tau_1, \tau_2; \rho, \vartheta, \varepsilon)
& := \frac{1}{2Q} \, \iint_{Q_{R;\rho, \vartheta}^{\upbeta, +,\varepsilon} \cap (B_{R} \times [\tau_1, \tau_2])}
(u-k)_+^2 \, \left( 8Q \lambda + \frac{2}{\upbeta \, h(x_0, R)} \mu_+ \right) dx dt \, , \nonumber \end{align} for $ \rho, \vartheta, \varepsilon \geqslant 0$; now we apply the previous lemma in \eqref{stimaprimadellemma} with $\delta = \frac{2Q - 1}{2Q}$, $\rho = r$, $\tilde{\varepsilon} = \tilde{r} - r + \varepsilon$ and since $(1 - \delta)^{-1} = 2Q$ we derive the existence of a positive constant $c_Q$ depending only on $Q$ (for instance, as shown at the end of the proof, one could consider $c_Q = 4Q - 1$) such that \begin{align} \label{devoproseguire} \frac{1}{2Q} \int_{B_{r + \varepsilon}^+} & (u-k)_+^2(x,\tau_2) \mu_+ (x) \, dx + \frac{1}{2Q} \int_{I^{r, \varepsilon}_+} (u-k)_+^2(x,\tau_1) \mu_- (x) \, dx + \nonumber \\ & \hskip80pt
+ \iint_{Q_{R; r, \theta}^{\upbeta, +,\varepsilon} \cap (B_{R} \times [\tau_1, \tau_2])} |D(u-k)_+|^2 \lambda \, dx dt \leqslant \nonumber \\ \leqslant & \ \int_{I^{r, \tilde{r} - r + \varepsilon}_+} (u-k)_+^2(x,\tau_2) \mu_- (x) \, dx +
\int_{I^+_{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,\tau_1) \mu_+ (x) \, dx + \\ & + \ \frac{c_Q}{2Q} \, \frac{1}{(\tilde{r} - r)^2}
\iint_{Q_{R;r, \tilde\theta}^{\upbeta, +,\tilde{r} - r + \varepsilon} \cap (B_R \times [\tau_1, \tau_2])}
(u-k)_+^2 \, \left( 8Q \lambda + \frac{2}{\upbeta \, h(x_0, R)} \, \mu_+ \right) dx dt \, . \nonumber \end{align}
\noindent We now state the second lemma, which is simple but important.
\begin{lemma} \label{stimettaDG} Consider some non-negative functions $f, g_1, g_2, g_3: [t_0, s_2] \to [0,M]$, $F , G : [t_0, s_2] \to (0, M]$, with $M$ a positive constant, satisfying $$ f(\tau_2) + g_3(\tau_1) + \int_{\tau_1}^{\tau_2} F(t) dt \leqslant g_2(\tau_2) + g_1(\tau_1) + \int_{\tau_1}^{\tau_2} G(t) dt $$
for every $\tau_1, \tau_2 \in [t_0, s_2]$ with $\tau_1 < \tau_2$. Let $\theta$ and $\tilde\theta$ be the values considered in \eqref{puredifave}, $\sigma_{\theta} = \theta \, \upbeta \, h(x_0, R) R^2$, $\sigma_{\tilde\theta} = \tilde\theta \, \upbeta \, h(x_0, R) R^2$ for some positive $\upbeta$. Then \begin{align*} \sup_{t \in (t_0 + \sigma_{\theta}, s_2)} & f(t) + \sup_{t \in (t_0, t_0 + \sigma_{\tilde\theta})} g_3(t) + \int_{t_0}^{s_2} F(t) dt \leqslant \\ & \leqslant \ 2 \left[ \sup_{t \in (t_0 + \sigma_{\theta}, s_2)} g_2 (t) + \sup_{t \in (t_0, t_0 + \sigma_{\tilde\theta})} g_1 (t) + \int_{t_0}^{s_2} G(t) dt \right] \, . \end{align*} \end{lemma} \noindent {\it Proof}\ \ -\ \ By the assumptions in particular we have \begin{align*} f(\tau_2) + g_3(\tau_1) \leqslant g_2(\tau_2) + g_1(\tau_1) + \int_{\tau_1}^{\tau_2} G(t) dt \, , \\ \int_{\tau_1}^{\tau_2} F(t) dt \leqslant g_2(\tau_2) + g_1(\tau_1) + \int_{\tau_1}^{\tau_2} G(t) dt \, . \end{align*} Taking the supremum in both inequalities we get \begin{align*} \sup_{\tiny \begin{array}{c} \tau_1 \in (t_0, t_0 + \sigma_{\tilde\theta}) \\
\tau_2 \in (t_0 + \sigma_{\theta}, s_2)
\end{array}} &
\big[ f(\tau_2) + g_3(\tau_1) \big] =
\sup_{\tau_2 \in (t_0 + \sigma_{\theta}, s_2)} f(\tau_2) + \sup_{\tau_1 \in (t_0, t_0 + \sigma_{\tilde\theta})} g_3(\tau_1) \leqslant \\ & \leqslant \ \sup_{\tiny \begin{array}{c} \tau_1 \in (t_0, t_0 + \sigma_{\tilde\theta}) \\
\tau_2 \in (t_0 + \sigma_{\theta}, s_2)
\end{array}}
\left[ g_2(\tau_2) + g_1(\tau_1) + \int_{\tau_1}^{\tau_2} G(t) dt \right] \leqslant \\ & \leqslant \ \sup_{\tau_2 \in (t_0 + \sigma_{\theta}, s_2)} g_2(\tau_2) +
\sup_{\tau_1 \in (t_0, t_0 + \sigma_{\tilde\theta})} g_1(\tau_1) + \int_{t_0}^{s_2} G(t) dt \end{align*} and \begin{align*} \int_{t_0}^{s_2} F(t) dt \leqslant \ \sup_{\tau_2 \in (t_0 + \sigma_{\theta}, s_2)} g_2(\tau_2) +
\sup_{\tau_1 \in (t_0, t_0 + \sigma_{\tilde\theta})} g_1(\tau_1) + \int_{t_0}^{s_2} G(t) dt \, . \end{align*} Summing the two inequalities we obtain the claim.
$\square$ \\
\noindent Now we multiply inequality \eqref{devoproseguire} by $2Q$ and apply the previous lemma. We get \begin{align*} \sup_{t \in (t_0 + \sigma_{\theta}, s_2)} & \int_{B_{r + \varepsilon}^+} (u-k)_+^2(x,t) \mu_+ (x) \, dx +
\sup_{t \in (t_0, t_0 + \sigma_{\tilde\theta})} \int_{I^{r, \varepsilon}_+} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\
& \hskip80pt + 2 Q \iint_{Q_{R; r, \theta}^{\upbeta, +,\varepsilon}} |D(u-k)_+|^2 \lambda \, dx dt \leqslant \\ \leqslant & \ 4Q \, \sup_{t \in (t_0 + \sigma_{\theta}, s_2)} \int_{I^{r, \tilde{r} - r + \varepsilon}_+} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\ & \hskip10pt + 4Q \, \sup_{t \in (t_0, t_0 + \sigma_{\tilde\theta})} \int_{I^+_{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,t) \mu_+ (x) \, dx + \\ & \hskip10pt + \, \frac{2 \, c_Q}{(\tilde{r} - r)^2} \iint_{Q_{R;r, \tilde\theta}^{\upbeta, +,\tilde{r} - r + \varepsilon}}
(u-k)_+^2 \, \left( 8Q \lambda + \frac{2}{\upbeta \, h(x_0, R)} \mu_+ \right) dx dt \, . \end{align*} Finally, calling $\gamma$ the quantity $16 \, c_Q \, Q$ (which turns out to be greater than $1$) we get \eqref{DGgamma+} \begin{align*} \sup_{t \in (t_0 + \sigma_{\theta}, s_2)} & \int_{B_{r + \varepsilon}^+} (u-k)_+^2(x,t) \mu_+ (x) \, dx +
\sup_{t \in (t_0, t_0 + \sigma_{\tilde\theta})} \int_{I^{r, \varepsilon}_+} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\
& \hskip150pt + \, \iint_{Q_{R; r, \theta}^{\upbeta, +,\varepsilon}} |D(u-k)_+|^2 \lambda \, dx dt \leqslant \\ \leqslant & \ \gamma \Bigg[ \sup_{t \in (t_0,t_0 + \sigma_{\tilde\theta})} \int_{I^+_{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,t) \mu_+ (x) \, dx + \\ & \hskip25pt + \sup_{t \in (t_0 + \sigma_{\theta}, s_2)} \int_{I^{r, \tilde{r} - r + \varepsilon}_+} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\ & \hskip25pt + \, \frac{1}{(\tilde{r} - r)^2} \iint_{Q_{R;r, \tilde\theta}^{\upbeta, +,\tilde{r} - r + \varepsilon}}
(u-k)_+^2 \, \left( \lambda + \frac{1}{\upbeta \, h(x_0, R)} \mu_+ \right) dx dt \Bigg] \, . \end{align*} \ \\ \noindent Now we prove \eqref{DGgamma+_1}. We integrate in $B_R (x_0) \times [\tau_1, \tau_2]$ with $[\tau_1, \tau_2] \subset [t_0, s_2]$ for an arbitrary $s_2$ (we mean that it is not necessary to consider $s_2 = t_0 + \upbeta \, h(x_0, R) R^2$) and, as done before to obtain \eqref{prelim_DG},
we get for every $[\tau_1, \tau_2] \subset [t_0, s_2]$ \begin{align*} \frac{1}{2} \int_{B_{R}} & (u-k)_+^2(x,\tau_2) \zeta^2(x,\tau_2) \, \mu(x) \, dx + E (u,K)
\leqslant Q \, E (u-\phi,K) + \\ & + \frac{1}{2} \int_{B_{R}} (u-k)_+^2(x,\tau_1) \zeta^2(x,\tau_1) \, \mu (x) \, dx + \int_{\tau_1}^{\tau_2} \!\!\! \int_{B_{R}} (u-k)_+^2 \zeta \zeta_t \, \mu \, dx dt . \end{align*}
Now choosing $\zeta$ (whose support depends on $\tau$) such that \begin{align*} \zeta = 1 \hskip10pt \text{ in } B_{r}^+(x_0) \times [t_0, \tau] \, , & \hskip20pt \zeta = \, 0 \hskip10pt \text{ in } B_R (x_0) \setminus \big( B_{\tilde{r}}^+ (x_0) \cup I^{r, \tilde{r}-r}_+ \big) \times [t_0, \tau] \, , \nonumber \\
\zeta_t \equiv 0 , & \hskip50pt |D \zeta| \leqslant \frac{1}{\tilde{r} - r} \, , \end{align*} using the estimate \eqref{prelim_DG_2} and the inequality which follows it and taking $\tau_1 = t_0$, we get that for every $\tau \in [t_0, s_2]$
\begin{align*}
\frac{1}{2Q} \int_{B_{r}^+} & (u-k)_+^2(x,\tau) \mu_+ (x) \, dx + \int_{t_0}^{\tau} \!\! \int_{B_{r}^+} |D(u-k)_+|^2 \lambda \, dx dt \leqslant \\ \leqslant & \ \frac{1}{2Q} \int_{B_{\tilde{r}}^+} (u-k)_+^2(x,t_0) \mu_+ (x) \, dx +
\frac{1}{2Q} \int_{I^{r, \tilde{r}-r}_+} (u-k)_+^2(x,\tau) \mu_- (x) \, dx + \\ & + \frac{4}{(\tilde{r} - r)^2} \int_{t_0}^{\tau} \!\! \int_{B_{\tilde{r}}^+ \cup I^{r, \tilde{r}-r}_+} (u-k)_+^2 \lambda \, dx dt
+ \frac{2Q - 1}{2Q} \int_{t_0}^{\tau} \!\! \int_{B_{\tilde{r}}^+ \cup I^{r, \tilde{r}-r}_+} |D(u-k)_+|^2 \lambda \, dx dt \, . \end{align*} As done to obtain \eqref{DGgamma+}, we first use Lemma \ref{giaq} with the analogous functions considered in \eqref{funzioncine} (notice that with $\varepsilon = 0$ we get $g_2(t_0,r,0) = 0$), then we use Lemma \ref{stimettaDG} to conclude and get \eqref{DGgamma+_1}. \\ \ \\ In an analogous way one can prove \eqref{DGgamma-} and \eqref{DGgamma+_2}, provided that $\mu_- (B_{R}(x_0)) > 0$. \\ \ \\ \noindent $2^{\circ}$ - We now drop the assumptions $\mu_+ (B_{R}(x_0)) > 0$ and $\mu_- (B_{R}(x_0)) > 0$ and prove \eqref{DGgamma0}. We recall that in this case we consider $K = B_R (x_0) \times [s_1, s_2]$ with $s_1$ and $s_2$ arbitrary (but belonging to $[0,T]$). Now proceeding similarly as before, taking $\phi = (u-k)_+ \zeta^2$ with $\zeta$ independent of $t$ and satisfying $$ \begin{array}{c} \zeta \equiv 1 \hskip10pt \text{in } (B_{r}^0 (x_0))^{\varepsilon} \, , \hskip20pt
\zeta \equiv 0 \hskip10pt \text{in } B_{R} (x_0) \setminus (B_r^0 (x_0))^{\tilde{r} - r + \varepsilon} \, , \\ [1em]
0 \leqslant \zeta \leqslant 1 \, , \hskip20pt 0 \leqslant {\displaystyle |D \zeta| \leqslant \frac{1}{\tilde{r}-r} } \, , \end{array} $$
from \eqref{prelim_DG}, integrating over $(B_r^0)^{\tilde{r}-r + \varepsilon} \times (\tau_1, \tau_2)$, we derive for every $\tau_1, \tau_2 \in [s_1, s_2]$, $\tau_1 < \tau_2$, \begin{align} \frac{1}{2Q} & \int_{I_0^{r, \varepsilon}} (u-k)_+^2(x,\tau_2) \mu_+ (x) \, dx +
\frac{1}{2Q} \int_{I_0^{r, \varepsilon}} (u-k)_+^2(x,\tau_1) \mu_- (x) \, dx + \nonumber \\ & \hskip80pt
+ \iint_{Q_{R;r; \tau_1, \tau_2}^{0,\varepsilon} } |D(u-k)_+|^2 \lambda \, dx dt \leqslant \nonumber \\ & \hskip20pt \leqslant \ \frac{1}{2Q} \int_{ I_0^{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,\tau_2) \mu_- (x) \, dx +
\frac{1}{2Q} \int_{ I_0^{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,\tau_1) \mu_+ (x) \, dx \, + \nonumber \\ & \hskip40pt + \frac{4}{(\tilde{r} - r)^2} \iint_{Q_{R; r; \tau_1, \tau_2}^{0,\tilde{r}-r + \varepsilon} } (u-k)_+^2 \, \lambda \, dx dt +
\frac{2Q-1}{2Q} \iint_{Q_{R; r ; \tau_1, \tau_2}^{0,\tilde{r}-r + \varepsilon} } |D(u-k)_+|^2 \lambda \, dx dt \, . \nonumber \end{align}
We can apply Lemma \ref{giaq} with $\vartheta = \tilde\vartheta = 0$, $\rho = r$, $\tilde{\rho} = \tilde{r}$, $\varepsilon \geqslant 0$, $\tilde\varepsilon = \tilde{r} - r$,
$\delta = (2Q-1)/2Q$ and \begin{align*}
g_2 (\tau, \rho, \epsilon) & := \frac{1}{2Q} \int_{I_0^{\rho, \epsilon}} (u-k)_+^2(x,\tau) \mu_- (x) \, dx \, , \\ f (\tau, \rho, \epsilon) = g_1 (\tau, \rho, \epsilon) & := \frac{1}{2Q} \int_{I_0^{\rho, \epsilon}} (u-k)_+^2(x,\tau) \mu_+ (x) \, dx \, , \\ F (\tau_1, \tau_2; \rho, \vartheta, \epsilon)
& := \iint_{Q_{R;\rho; \tau_1, \tau_2}^{0,\epsilon} } |D(u-k)_+|^2 \lambda \, dx dt \, , \\ G (\tau_1, \tau_2; \rho, \vartheta, \epsilon)
& := 4 \iint_{Q_{R;\rho; \tau_1, \tau_2}^{0,\epsilon} } (u-k)_+^2 \, \lambda \, dx dt \, , \end{align*} and get the existence of $c_Q$ such that \begin{align} & \frac{1}{2Q} \int_{I_0^{r, \varepsilon}} (u-k)_+^2(x,\tau_2) \mu_+ (x) \, dx +
\frac{1}{2Q} \int_{I_0^{r, \varepsilon}} (u-k)_+^2(x,\tau_1) \mu_- (x) \, dx + \nonumber \\ & \hskip150pt
+ \iint_{Q_{R;r; \tau_1, \tau_2}^{0,\varepsilon} } |D(u-k)_+|^2 \lambda \, dx dt \leqslant \nonumber \\ & \hskip20pt \leqslant \ \int_{ I_0^{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,\tau_2) \mu_- (x) \, dx +
\int_{ I_0^{r, \tilde{r} - r + \varepsilon}} (u-k)_+^2(x,\tau_1) \mu_+ (x) \, dx \, + \nonumber \\ & \hskip100pt + \frac{4 c_Q}{(\tilde{r} - r)^2}
\iint_{Q_{R; r ; \tau_1, \tau_2}^{0,\tilde{r}-r + \varepsilon} } (u-k)_+^2 \, \lambda \, dx dt \, . \nonumber \end{align} Taking the supremum for $\tau_1, \tau_2 \in (s_1, s_2)$ we get that $u$ satisfies \eqref{DGgamma0} with $\gamma = 4 c_Q$. \\
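The role of Lemma \ref{giaq} here is to absorb the gradient term carrying the coefficient $\delta = (2Q-1)/2Q < 1$ on the right-hand side. In the simplest scalar setting the mechanism can be sketched as follows (a standard iteration argument with generic constants; this is only a sketch of the underlying idea, not the precise statement of Lemma \ref{giaq}):

```latex
% Hedged sketch: scalar absorption with generic constants A, B and delta in (0,1).
\begin{align*}
&\text{Assume } F(\rho) \leqslant \delta \, F(\tilde\rho)
  + \frac{A}{(\tilde\rho - \rho)^2} + B
  \quad \text{for all } r \leqslant \rho < \tilde\rho \leqslant R . \\
&\text{With } \rho_i := r + (1 - \sigma^i)(R - r), \ \sigma \in (\delta^{1/2}, 1),
  \text{ iterating } n \text{ times along } \rho_0 < \rho_1 < \dots \text{ gives} \\
&F(r) \leqslant \delta^n F(\rho_n)
  + \sum_{i=0}^{n-1} \delta^i
    \left[ \frac{A}{(\rho_{i+1} - \rho_i)^2} + B \right]
  \leqslant \delta^n F(R)
  + \frac{c(\delta, \sigma)}{(R-r)^2} \, A + \frac{B}{1 - \delta} \, ,
\end{align*}
since $\rho_{i+1} - \rho_i = \sigma^i (1 - \sigma)(R - r)$ and
$\sum_i (\delta/\sigma^2)^i < +\infty$ for $\sigma^2 > \delta$;
letting $n \to \infty$ removes the $\delta$-term.
```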
\section{Local boundedness for functions in $DG$} \label{paragrafo5}
In this section we prove that functions belonging to the De Giorgi class are locally bounded in $\Omega \times (0,T)$. \\ We begin by proving that a generic function $u \in DG (\Omega, T, \mu, \lambda, \gamma)$ is bounded in $(B_{\rho} \times (a,b) )\cap (\Omega_+ \times (0,T))$ for some set $B_{\rho} \times (a,b) \subset \subset \Omega \times (0,T)$. \\ Fix $x_0 \in \Omega$, $t_0 \in (0,T)$, $R > 0$ and in what follows assume $$ \mu_+ (B_{R}(x_0)) > 0 \, . $$ Then consider $\upbeta > 0$ and $s_2 \in (0,T)$ with $$ s_2 - t_0 = \upbeta \, h(x_0, R) R^2 \, , \hskip20pt B_R(x_0) \times (t_0, s_2) \subset \Omega \times (0,T) \, . $$ Consider now $r , \tilde{r} , \hat{r} \in (0,R]$ such that $$ \frac{R}{2} \leqslant r < \tilde{r} < \hat{r} \leqslant R \hskip10pt \text{and} \hskip10pt \tilde{r} - r = \frac{\hat{r} - \tilde{r}}{2} $$ and $\theta, \tilde\theta, \hat\theta$ such that $$ 0 \leqslant \hat\theta < \tilde\theta < \theta < 1\hskip12pt \text{and} \hskip12pt
\tilde\theta - \hat\theta = \frac{(\hat{r} - \tilde{r})^2}{R^2} \, , \hskip5pt
\theta - \tilde\theta = \frac{(\tilde{r} - r)^2}{R^2} $$ and define analogously as done in \eqref{sigmateta} (but here we simplify the notation) $$ \sigma := \theta \, \upbeta \, h(x_0,R) \, R^2 \, , \hskip20pt \tilde\sigma := \tilde\theta \, \upbeta \, h(x_0,R) \, R^2 \, , \hskip20pt \hat\sigma := \hat\theta \, \upbeta \, h(x_0,R) \, R^2 \, , $$ in such a way that $$ 0 \leqslant \hat\sigma < \tilde\sigma < \sigma < s_2 - t_0 \, . $$ Since $t_0, x_0$ will remain fixed we will often use the following simplified notations: we will write $$ h(\rho), \, B_{\rho}, \, Q_R^{\upbeta,+}, \, Q_R^{\upbeta,\texttt{\,>}}, Q_{R; \rho, \theta}^{\upbeta,+,\delta}, \, Q_{R; \rho, \theta}^{\upbeta,+} $$ instead of, respectively, $$ h(x_0, \rho), \, B_{\rho}(x_0), \, Q_R^{\upbeta,+}(x_0, t_0), \, Q_R^{\upbeta,\texttt{\,>}}(x_0, t_0), \, Q_{R; \rho, \theta}^{\upbeta,+,\delta}(x_0, t_0), \, Q_{R; \rho, \theta}^{\upbeta,+} (x_0,t_0). $$ In fact, to further simplify the notations, we will suppose that (this is always possible, up to a translation) $$ t_0 = 0 \, . $$ Finally, from now on, we will use these shorthand notations for the following measures $$ \begin{array}{c} M := \mu \otimes {\mathcal L}^1 \, , \hskip20pt \Lambda := \lambda \otimes {\mathcal L}^1 \, ,
\hskip20pt |M|_{\Lambda} := |\mu|_{\lambda} \otimes {\mathcal L}^1 \, , \\ [0.5em] M_+ := \mu_+ \otimes {\mathcal L}^1 \, , \hskip20pt M_- := \mu_- \otimes {\mathcal L}^1 \, , \\ [0.5em] \Lambda_+ := \lambda_+ \otimes {\mathcal L}^1 \, , \hskip20pt
\Lambda_- := \lambda_- \otimes {\mathcal L}^1 \, , \hskip20pt
\Lambda_0 := \lambda_0 \otimes {\mathcal L}^1 \end{array} $$ where we recall that $\lambda_+, \lambda_-, \lambda_0$ have been defined in \eqref{lambda}. \\ \ \\ Now fix a function $u \in DG (\Omega, T, \mu, \lambda, \gamma)$ and define (since $\upbeta$ will remain fixed we omit it in the definition of the following set) $$ \begin{array}{c}
A_R^{+,\delta}(k; \rho, \theta) = \{ (x,t) \in Q_{R; \rho, \theta}^{\upbeta,+,\delta} \, | \, u(x,t) > k \} \, . \end{array} $$ Consider a function $\zeta \in \text{Lip} (B_{\tilde{r}} (x_0) \times [t_0, s_2])$, with $\zeta (\cdot, t) \in \text{Lip}_c (B_{\tilde{r}} (x_0))$ for every $t$, such that (notice that $\tilde{r} - \frac{R}{2} = r - \frac{R}{2} + (\tilde{r} - r)$ and $\hat{r} - \frac{R}{2} = \tilde{r} - \frac{R}{2} + (\hat{r} - \tilde{r})$) $$ \begin{array}{c} \zeta \equiv 1 \hskip10pt \text{in } Q_{R; \frac{R}{2}, \theta}^{\upbeta,+, r - \frac{R}{2}} (x_0,t_0) \, , \hskip20pt
\zeta \equiv 0 \hskip10pt \text{in } Q_R^{\upbeta,\texttt{\,>}} (x_0,t_0) \setminus
Q_{R; \frac{R}{2}, \tilde\theta}^{\upbeta,+,\tilde{r} - \frac{R}{2}} (x_0,t_0) \, , \\ [1em]
0 \leqslant \zeta \leqslant 1 \, , \hskip15pt {\displaystyle |D \zeta| \leqslant \frac{1}{\tilde{r}-r} \, ,
\hskip15pt 0 \leqslant \zeta_t \mu \, , \hskip15pt \zeta_t \mu_- = 0 \, , \hskip15pt
|\zeta_t| \leqslant \frac{1}{\upbeta \, h(x_0,R)} \frac{1}{(\tilde{r}-r)^2} } \, . \end{array} $$ In what follows we will denote by $Q_{R; R/2, \tilde\theta}^{\upbeta,+,\tilde{r} - R/2}(s)$ the set
$\{ (x,t) \in Q_{R; R/2, \tilde\theta}^{\upbeta,+,\tilde{r} - R/2} \, | \, t = s \}$. \\ First using H\"older's inequality, then applying Corollary \ref{cor-gut-whee} to the function $(u-k)_+\zeta$
with $\upsilon = \nu = |\mu|_{\lambda}$ and $\omega = \lambda$, $E = Q_{R; R/2, \tilde\theta}^{\upbeta,+,\tilde{r} - R/2} \cap \Omega_+$ (we integrate first in $Q_{R; R/2, \theta}^{\upbeta, +,r - R/2}$, then in $Q_{R; R/2, \tilde\theta}^{\upbeta,+,\tilde{r} - R/2}$, with respect to the measure $\mu_+ dx dt$ which is supported in $E$), we estimate \begin{align*}
\frac{1}{|\mu|_{\lambda} (B_R)} & \int\!\!\!\int_{Q_{R; R/2, \theta}^{\upbeta,+,r - R/2}} (u-k)_+^2 \mu_+ \, dx dt
\leqslant \frac{1}{|\mu|_{\lambda} (B_R)}
\iint_{Q_{R; R/2, \tilde\theta}^{\upbeta,+,\tilde{r} - R/2}} (u-k)_+^2 \zeta^2 \mu_+ \, dx dt \leqslant \\ \leqslant & \, \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}}
\left[\frac{1}{|\mu|_{\lambda} (B_R)} \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}}
(u-k)_+^{2\kappa} \zeta^{2\kappa} \mu_+ \, dx dt \right]^{\frac{1}{\kappa}} \leqslant \\ \leqslant & \, \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}}
\, \gamma_1^{2/\kappa} R^{2/\kappa}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \frac{1}{(\lambda(B_R))^{1/\kappa}} \cdot \\ & \hskip40pt \cdot \Bigg( \sup_{0 < t < s_2} \int_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}(t)} (u-k)_+^{2}(x,t) \zeta^2(x,t) \mu_+ (x) \, dx
\Bigg)^{\frac{\kappa-1}{\kappa}} \!\!\!\!\!\!\! \cdot \\
& \hskip60pt \cdot \Bigg( \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} |D ((u-k)_+\zeta)|^2 (x,t) \, \lambda (x) \, dx dt
\Bigg)^{\frac{1}{\kappa}} \leqslant \\ \leqslant & \, \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}} \hskip5pt \gamma_1^{2/\kappa} \frac{R^{2/\kappa}}{(\lambda(B_R))^{1/\kappa}}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \cdot \\ & \hskip40pt \cdot \Bigg( \sup_{0 < t < s_2} \int_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}(t)} (u-k)_+^{2}(x,t) \zeta^2(x,t) \mu_+ (x) \, dx + \\
& \hskip80pt + \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} |D ((u-k)_+\zeta)|^2 (x,t) \, \lambda (x) \, dx dt \Bigg) \leqslant \\ \leqslant & \, \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}} \hskip5pt \gamma_1^{2/\kappa} \frac{R^{2/\kappa}}{(\lambda(B_R))^{1/\kappa}}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \cdot \\ & \cdot \Bigg( \sup_{0 < t < s_2} \int_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}(t)} (u-k)_+^{2}(x,t) \mu_+ (x) \, dx + \\
& \quad + 2 \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} |D (u-k)_+|^2 (x,t) \, \lambda (x) \, dx dt +
\frac{2}{(\tilde{r} - r)^2} \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} (u-k)_+^2 (x,t) \, \lambda (x) \, dx dt \Bigg) \\ \leqslant & \, \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}} \hskip5pt \gamma_1^{2/\kappa} \frac{R^{2/\kappa}}{(\lambda(B_R))^{1/\kappa}}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \cdot \\ & \cdot \Bigg( \sup_{t \in (\tilde\sigma, s_2)} \int_{B_{\tilde{r}}^+} (u-k)_+^2 (x,t) \mu_+ (x) dx +
\sup_{t \in (0, \tilde\sigma)} \int_{I_{R/2, \tilde{r} - R/2}^+} (u-k)_+^2 (x,t) \mu_+ (x) dx \, + \\
& \hskip10pt + 2 \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} |D (u-k)_+|^2 (x,t) \, \lambda (x) \, dx dt +
\frac{8}{(\hat{r} - \tilde{r})^2} \iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} (u-k)_+^2 (x,t) \, \lambda (x) \, dx dt \Bigg) \end{align*} where in the last inequality we have used the fact that $2 (\tilde{r} - r) = \hat{r} - \tilde{r}$. \\ Now we can continue using the energy estimates \eqref{DGgamma+} (with $\varepsilon = \tilde{r} - R/2$)
\begin{align*}
& \frac{1}{|\mu|_{\lambda} (B_R)} \int\!\!\!\int_{Q_{R;R/2, \theta}^{\upbeta, +,r - R/2}} (u-k)_+^2 \mu_+ \, dx dt \leqslant
\frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}}
\hskip5pt \gamma_1^{2/\kappa} \frac{R^{2/\kappa}}{(\lambda(B_R))^{1/\kappa}}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \cdot \\ & \hskip25pt \cdot \Bigg[ 2\gamma \sup_{t \in (0, \hat\sigma)} \int_{I_{R/2, \hat{r} - R/2}^+} (u-k)_+^2 (x,t) \mu_+(x) \, dx + 2 \gamma \sup_{t \in (\tilde\sigma, s_2)} \int_{I_{R/2, \hat{r} - R/2}^+} (u-k)_+^2 (x,t) \mu_-(x) \, dx + \\ & \hskip35pt + \frac{2 \gamma}{(\hat{r} - \tilde{r})^2} \iint_{Q_{R; R/2, {\hat\theta}}^{\upbeta, +,\hat{r} - R/2}}
(u-k)_+^2\, \left( \frac{\mu_+}{\upbeta \, h(R)} + \lambda \right) \, dx ds +
\sup_{t \in (\hat\sigma, \tilde\sigma)} \int_{I_{R/2,\tilde{r} - R/2}^+} (u-k)_+^2 (x,t) \mu_+ (x) dx + \\ & \hskip35pt + \frac{8}{(\hat{r} - \tilde{r})^2}
\iint_{Q_{R; R/2, \tilde\theta}^{\upbeta, +,\tilde{r} - R/2}} (u-k)_+^2 (x,t) \, \lambda (x) \, dx dt \Bigg] \leqslant \\ & \hskip10pt \leqslant
\frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}}
\hskip5pt \gamma_1^{2/\kappa} \frac{R^{2/\kappa}}{(\lambda(B_R))^{1/\kappa}}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \cdot \\ & \hskip20pt \cdot \left[ \frac{2 \gamma + 8}{(\hat{r} - \tilde{r})^2} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}}
(u-k)_+^2 \left( \frac{\mu_+}{\upbeta \, h(R)} + \lambda \right) dx dt +
(2 \gamma+1) \sup_{t \in (0, s_2)} \int_{(I_{R/2}^+)^{\hat{r} - R/2}} (u-k)_+^2 (x,t) |\mu| (x) dx \right] = \\
& \hskip10pt = \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}{(|\mu|_{\lambda} (B_R))^{\frac{\kappa - 1}{\kappa}}}
\hskip5pt \gamma_1^{2/\kappa} \frac{R^{2/\kappa}}{(\lambda(B_R))^{1/\kappa}}
\left(\frac{1}{|\mu|_{\lambda} (B_R)}\right)^{\frac{\kappa -1}{\kappa}} \frac{2 \gamma + 8}{(\hat{r} - \tilde{r})^2} \, \cdot \\ & \hskip20pt \cdot {\lambda (B_{R}) }
\Bigg[ \frac{1}{\upbeta \, |\mu|_{\lambda} (B_{R})} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 \mu_+ \, dx dt
+ \frac{1}{\lambda (B_{R})} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 \lambda \, dx dt \, + \\ & \hskip60pt + \frac{2\gamma + 1}{2\gamma +8} \, (\hat{r} - \tilde{r})^2 \, \frac{1}{\lambda (B_{R})}
\sup_{t \in (0, s_2)} \int_{(I_{R/2}^+)^{\hat{r} - R/2}} (u-k)_+^2 (x,t) |\mu| (x) dx \Bigg] \, . \end{align*} Now we divide by $s_2 - t_0 = \upbeta \, h({R})R^2$, estimate $\frac{2\gamma + 1}{2\gamma +8}$ by $1$
and finally multiply and divide the right-hand side by $(\upbeta \, h({R}) R^2)^{\frac{\kappa - 1}{\kappa}}$. We get \begin{align}
\frac{1}{|M|_{\Lambda} (Q_R^{\upbeta, \texttt{\,>}})} & \int\!\!\!\int_{Q_{R; R/2, \theta}^{\upbeta, +,r - R/2}} (u-k)_+^2 \mu_+ \, dx dt \leqslant \nonumber \\ \leqslant & \hskip5pt \gamma_1^{2/\kappa} \,
R^2 \, \upbeta^{\frac{\kappa - 1}{\kappa}} \, \, \frac{2 \gamma + 8}{(\hat{r} - \tilde{r})^2} \,
\frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|M|_{\Lambda} (Q_R^{\upbeta, \texttt{\,>}}))^{\frac{\kappa - 1}{\kappa}} } \, \left( \frac{1}{\upbeta} + 1 \right) \cdot \nonumber \\ \label{mitoccanum1}
& \cdot \Bigg[ \frac{1}{|M|_{\Lambda} (Q_R^{\upbeta, \texttt{\,>}})} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 \mu_+ \, dx dt
+ \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 \lambda_+ \, dx dt \, + \\ & \hskip50pt + \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 (\lambda_0 + \lambda_-) \, dx dt \, + \nonumber \\ & \hskip50pt + (\hat{r} - \tilde{r})^2 \, \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\sup_{t \in (0, s_2)} \int_{(I_{R/2}^+)^{\hat{r} - R/2}} (u-k)_+^2 (x,t) |\mu| (x) dx \Bigg] \, . \nonumber \end{align}
\noindent Notice that $$ \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 (\lambda_0 + \lambda_-) \, dx dt \quad \text{is in fact} \quad \int_{0}^{s_2} \!\!\!\! \int_{(I_{R/2}^+)^{\hat{r} - R/2}} (u-k)_+^2 (\lambda_0 + \lambda_-) \, dx dt . $$
\noindent In a similar way one can estimate $\int\!\!\!\int_{Q_{R; R/2, \theta}^{\upbeta, +,r - R/2}} (u-k)_+^2 \lambda_+ \, dx dt$. The main difference is that we use Corollary \ref{cor-gut-whee} with $\nu = |\mu|_{\lambda} $ and $\upsilon = \omega = \lambda$. We get \begin{align} \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})} & \int\!\!\!\int_{Q_{R; R/2, \theta}^{\upbeta, +,r - R/2}} (u-k)_+^2 \lambda_+ \, dx dt \leqslant \nonumber \\ \leqslant & \hskip5pt \gamma_1^{2/\kappa} \,
R^2 \, \frac{1 + \upbeta}{\upbeta^{\frac{1}{\kappa}}} \, \frac{2 \gamma + 8}{(\hat{r} - \tilde{r})^2} \,
\frac{\big( \Lambda_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(\Lambda (Q_R^{\upbeta, \texttt{\,>}}))^{\frac{\kappa - 1}{\kappa}}}\, \cdot \nonumber \\ \label{mitoccanum2}
& \cdot \Bigg[ \frac{1}{|M|_{\Lambda} (Q_R^{\upbeta, \texttt{\,>}})} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 \mu_+ \, dx dt
+ \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})} \iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 \lambda_+ \, dx dt \, + \\ & \hskip50pt + \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 (\lambda_0 + \lambda_-) \, dx dt \, + \nonumber \\ & \hskip50pt + (\hat{r} - \tilde{r})^2 \, \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\sup_{t \in (0, s_2)} \int_{(I_{R/2}^+)^{\hat{r} - R/2}} (u-k)_+^2 (x,t) |\mu| (x) dx \Bigg] \, . \nonumber \end{align} Having defined (for $\rho \in [R/2, R]$) \begin{align*}
\tilde{u}_{\mu_+} (l;\rho, \vartheta; \varepsilon) & := \left( \frac{1}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,>}})}
\int\!\!\!\int_{Q_{R; \rho, \vartheta}^{\upbeta, +,\varepsilon}} (u-l)_+^2 \mu_+ \, dx dt \right)^{1/2} \, , \\ \tilde{u}_{\lambda_+} (l;\rho, \vartheta; \varepsilon) & := \left( \frac{1}{\Lambda (Q_{R}^{\upbeta, \texttt{\,>}})}
\int\!\!\!\int_{Q_{R; \rho, \vartheta}^{\upbeta, +,\varepsilon}} (u-l)_+^2 \lambda_+ \, dx dt \right)^{1/2} \, , \\ \big( \tilde{u}_+ (l;\rho, \vartheta, \varepsilon) \big)^2 & := \big( \tilde{u}_{\mu_+} (l;\rho, \vartheta, \varepsilon) \big)^2 +
\big( \tilde{u}_{\lambda_+} (l;\rho, \vartheta, \varepsilon) \big)^2 \, , \end{align*}
we sum the two inequalities and get \begin{align*} \big( \tilde{u}_+ (k; & \, {\textstyle \frac{R}{2}}, \theta; \, r - {\textstyle \frac{R}{2}}) \big)^2 \leqslant \frac{C_1}{(\hat{r} - \tilde{r})^2} \, \left[ \frac{\big( M_+ (A_R^{+,\tilde{r} - R/2}(k; \frac{R}{2}, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(|M|_{\Lambda} (Q_R^{\upbeta, \texttt{\,>}}))^{\frac{\kappa - 1}{\kappa}}} \, + \right. \\ & \left. + \, \frac{\big( \Lambda_+ (A_R^{+,\tilde{r} - R/2}(k; \frac{R}{2}, \tilde\theta)) \big)^{\frac{\kappa - 1}{\kappa}}}
{(\Lambda (Q_R^{\upbeta, \texttt{\,>}}))^{\frac{\kappa - 1}{\kappa}}} \right] \cdot
\left[ \big( \tilde{u}_+ (k; {\textstyle \frac{R}{2}}, \hat\theta; \hat{r} - {\textstyle \frac{R}{2}}) \big)^2 +
\big( \omega^{\hat{r} - \tilde{r}} (u; k; \hat{r} ; \hat\theta) \big)^2 \right] \end{align*} where $C_1 = \gamma_1^{2/\kappa} \, R^2 \, {\displaystyle \frac{1 + \upbeta}{\upbeta^{\frac{1}{\kappa}}}} \, (2 \gamma + 8)$ and \begin{align*} \big( \omega^{\hat{r} - \tilde{r}} (u; k; \hat{r} ; \hat\theta) \big)^2:= &
\, \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\iint_{Q^{\upbeta, +,\hat{r} - R/2}_{R; R/2, \hat\theta}} (u-k)_+^2 (\lambda_0 + \lambda_-) \, dx dt \, + \\
& + (\hat{r} - \tilde{r})^2 \, \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\sup_{t \in (0, s_2)} \int_{(I_{R/2}^+)^{\hat{r} - R/2}} (u-k)_+^2 (x,t) |\mu| (x) dx \, . \end{align*} Notice that for $h < k$ we have \begin{align*} (k-h)^2 & M_+ (A_R^{+,\tilde{r} - R/2}(k; {\textstyle \frac{R}{2}}, \tilde\theta)) \leqslant
\iint_{A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta)} (u-h)_+^2 \mu_+ \, dx dt \leqslant \\ & \leqslant \, \iint_{A_R^{+,\tilde{r} - R/2}(h; R/2 ,\tilde\theta)} (u-h)_+^2 \mu_+ \, dx dt \, , \end{align*} that is $$ M_+ (A_R^{+,\tilde{r} - R/2}(k; {\textstyle \frac{R}{2}}, \tilde\theta)) \leqslant
\frac{M_+ (Q_{R}^{\upbeta, \texttt{\,>}})}{(k-h)^2} \, \,
\big( \tilde{u}_{\mu_+} (h; {\textstyle \frac{R}{2}},\tilde\theta ; \tilde{r} - {\textstyle \frac{R}{2}}) \big)^2 \, . $$ From that (and the analogous estimate for $\Lambda_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta))$) we derive \begin{align*}
\frac{M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta))}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,>}})} \leqslant \frac{M_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta))}{M_+ (Q_{R}^{\upbeta, \texttt{\,>}})} \leqslant
\frac{1}{(k-h)^2} \, \, \big( \tilde{u}_{\mu_+} (h; {\textstyle \frac{R}{2}}, \tilde\theta; \tilde{r} - {\textstyle \frac{R}{2}} ) \big)^2 \, , \\ \frac{\Lambda_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta))}{\Lambda (Q_{R}^{\upbeta, \texttt{\,>}})} \leqslant \frac{\Lambda_+ (A_R^{+,\tilde{r} - R/2}(k; R/2, \tilde\theta))}{\Lambda_+ (Q_{R}^{\upbeta, \texttt{\,>}})} \leqslant
\frac{1}{(k-h)^2} \, \, \big( \tilde{u}_{\lambda_+} (h; {\textstyle \frac{R}{2}}, \tilde\theta; \tilde{r} - {\textstyle \frac{R}{2}}) \big)^2 \, . \end{align*} Then, applying these inequalities
we get \begin{align} \label{daiterare} \tilde{u}_+ (k; {\textstyle \frac{R}{2}}, \theta; r - {\textstyle \frac{R}{2}})
\leqslant & \, \frac{C_1^{1/2}}{\hat{r} - \tilde{r}} \, \frac{1}{(k-h)^{\frac{\kappa - 1}{\kappa}}} \,
\tilde{u}_+ (h; {\textstyle \frac{R}{2}}, \tilde\theta ; \tilde{r} - {\textstyle \frac{R}{2}})^{\frac{\kappa - 1}{\kappa}}\,
\left[ \tilde{u}_+ (k; {\textstyle \frac{R}{2}}, \hat\theta; \hat{r} - {\textstyle \frac{R}{2}}) +
\omega^{\hat{r} - \tilde{r}} (u; k; \hat{r} ; \hat\theta) \right] \leqslant \nonumber \\ \leqslant & \, \frac{C_1^{1/2}}{\hat{r} - \tilde{r}} \, \frac{1}{(k-h)^{\frac{\kappa - 1}{\kappa}}} \,
\tilde{u}_+ (h; {\textstyle \frac{R}{2}}, \tilde\theta ; \tilde{r} - {\textstyle \frac{R}{2}})^{\frac{\kappa - 1}{\kappa}}\,
\left[ \tilde{u}_+ (h; {\textstyle \frac{R}{2}}, \hat\theta; \hat{r} - {\textstyle \frac{R}{2}}) +
\omega^{\hat{r} - \tilde{r}} (u; h; \hat{r} ; \hat\theta) \right] \leqslant \nonumber \\ \leqslant & \, \frac{C_1^{1/2}}{\hat{r} - \tilde{r}} \, \frac{1}{(k-h)^{\frac{\kappa - 1}{\kappa}}} \,
\tilde{u}_+ (h; {\textstyle \frac{R}{2}}, \hat\theta; \hat{r} - {\textstyle \frac{R}{2}})^{\frac{\kappa - 1}{\kappa}} \,
\left[ \tilde{u}_+ (h; {\textstyle \frac{R}{2}}, \hat\theta; \hat{r} - {\textstyle \frac{R}{2}}) +
\omega^{\hat{r} - \tilde{r}} (u; h; \hat{r} ; \hat\theta) \right] . \end{align}
Consider the following choices: for $n \in {\bf N}$, $k_0\in {\bf R}$ and a fixed $d$ we define \begin{align*} & k_n := k_0 + d \left( 1 - \frac{1}{2^n} \right) \nearrow k_0 + d \, , \\ & r_n := \frac{R}{2} + \frac{R}{2^{n+1}} \searrow \frac{R}{2}\, , \\ & \theta_n := \frac{1}{2} \left( 1 - \frac{1}{4^{n}} \right)\nearrow \frac{1}{2} \, , \\ & \sigma_n := \theta_n \, \upbeta \, h(x_0, R) \, R^2 \nearrow \frac{1}{2} \, \upbeta \, h(x_0, R) \, R^2 \, . \end{align*}
Notice that (for these choices) $$ 2 \, (r_n - r_{n+1} ) = r_{n-1} - r_n \, . $$
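The identity above is a direct computation from the definition of $r_n$:

```latex
r_n - r_{n+1} = \frac{R}{2^{n+1}} - \frac{R}{2^{n+2}} = \frac{R}{2^{n+2}} \, ,
\qquad
r_{n-1} - r_n = \frac{R}{2^{n}} - \frac{R}{2^{n+1}} = \frac{R}{2^{n+1}}
= 2 \, (r_n - r_{n+1}) \, .
```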
With this choice of $\theta_n$ (and since $\upbeta \, h(x_0, R) R^2 = s_2 - t_0 = s_2$, as we are assuming $t_0 = 0$) we have that $$ \sigma_n = \theta_n \, \upbeta \, h(x_0, R) \, R^2 = \theta_n \, s_2 \nearrow \frac{s_2}{2} \, . $$ With these choices
we define the sequences $$ u_n^+ := \tilde{u}_+ (k_n; {\textstyle \frac{R}{2}}, \theta_n; r_n - {\textstyle \frac{R}{2}})
\, , \hskip20pt \omega_n^+ := \omega^{r_n - r_{n+1}} (u; k_n; r_n; \theta_n) $$ and show that, with the particular choices just made above, the sequence $(u_n^+)_n$ is infinitesimal. To this end it is sufficient to observe that from \eqref{daiterare} and using \begin{align*} & r_{n+1} & \text{in the place of} & \hskip10pt r \, , & \theta_{n+1} & \hskip10pt \text{in the place of} & \theta \, , \\ & r_{n} & \text{in the place of} & \hskip10pt \tilde{r} \, , & \theta_{n} & \hskip10pt \text{in the place of} &\tilde\theta \, , \\ & r_{n-1} & \text{in the place of} & \hskip10pt \hat{r} \, , & \theta_{n-1} & \hskip10pt \text{in the place of} & \hat\theta \, , \\ & k_{n+1} & \text{in the place of} & \hskip10pt k \, , & k_{n-1} & \hskip10pt \text{in the place of} & h \, , \end{align*} we derive \begin{equation} \label{enumeriamola} u_{n+1}^+ \leqslant \, C_+ \, 2^{n+1} \, \frac{2^{(n+1) \frac{\kappa - 1}{\kappa}}}{(3d)^{\frac{\kappa - 1}{\kappa}}} \,
\left( u_{n-1}^+ + \omega_{n-1}^+ \right) (u_{n-1}^+)^{\frac{\kappa - 1}{\kappa}} \, , \qquad n \geqslant 1 \, , \end{equation} where $C_+ = \sqrt{C_1}/R = \gamma_1^{1/\kappa} \, (1 + \upbeta)^{1/2} \upbeta^{-\frac{1}{2\kappa}} \, \, (2 \gamma + 8)^{1/2}$. Setting $$ \alpha = \frac{\kappa - 1}{\kappa} \, , \hskip10pt c = C_+ \, \frac{4^{1 + \alpha}}{3^{\alpha}d^{\alpha}} \, , \hskip10pt b = 2^{1+\alpha} \, , \hskip10pt y_n = u_n^+ \, , \hskip10pt \epsilon_n = \omega_n^+ \, , $$ inequality \eqref{enumeriamola} becomes $$ u_{n+1}^+ \leqslant c \, b^{n-1} \left( u_{n-1}^+ + \omega_{n-1}^+ \right) (u_{n-1}^+)^{\alpha} \, , \qquad n \geqslant 1 . $$ In particular we get $$ u_{2(n+1)}^+ \leqslant c \, b^{2n} \left( u_{2n}^+ + \omega_{2n}^+ \right) (u_{2n}^+)^{\alpha} \, , \qquad n \geqslant 0 . $$ Now notice that $(u_n^+)_n$ is decreasing. Then, using Lemma \ref{lemmuzzofurbo-quinquies}, provided that \begin{equation} \label{costanted} u_0^+ <
\left( C_+ \, \frac{4^{1+\alpha}}{3^{\alpha}d^{\alpha}} \right)^{-1/\alpha} \, 2^{ -\frac{2}{\alpha} - \frac{2}{\alpha^2}} =
3d\left( C_+ \right)^{-\frac{1}{\alpha}} \, 4^{ -\frac{2}{\alpha} - \frac{1}{\alpha^2} - 1} \, , \end{equation} that is \begin{align*}
& \left( \frac{1}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,>}})}
\int\!\!\!\int_{Q_{R; R/2, 0}^{\upbeta, +,R/2}} (u-k_0)_+^2 \mu_+ \, dx dt + \frac{1}{\Lambda (Q_{R}^{\upbeta, \texttt{\,>}})}
\int\!\!\!\int_{Q_{R; R/2, 0}^{\upbeta, +,R/2}} (u-k_0)_+^2 \lambda_+ \, dx dt \right)^{1/2} < \\ & \hskip200pt < \, 3d\left( C_+ \right)^{-\frac{1}{\alpha}} \, 4^{ -\frac{2}{\alpha} - \frac{1}{\alpha^2} - 1} \, , \end{align*} we get that the subsequence $(u_{2n})_n$ is infinitesimal and since $(u_n)_n$ is decreasing we finally derive \begin{equation} \label{limitezero!!!} \lim_{n \to +\infty} u_n^+ = \tilde{u}_+ \left(k_0 + d; \frac{R}{2}, \frac{1}{2} \right) = 0 \end{equation} where \begin{align*} \big( \tilde{u}_+ (l; \varrho , \vartheta) \big)^2 := & \, \big( \tilde{u}_+ (l; \varrho ,\vartheta ; 0) \big)^2 = \\
= & \, \frac{1}{|M|_{\Lambda} (Q_R^{\upbeta, \texttt{\,>}})}
\int\!\!\!\int_{Q_{R; \varrho, \vartheta}^{\upbeta, +}} (u-l)_+^2 \mu_+ \, dx dt
+ \frac{1}{\Lambda (Q_R^{\upbeta, \texttt{\,>}})}
\int\!\!\!\int_{Q_{R; \varrho, \vartheta}^{\upbeta, +}} (u-l)_+^2 \lambda_+ \, dx dt \, . \end{align*} In a completely analogous way, if $\mu_- (B_R) > 0$ and taking $s_1 = t_0 - \upbeta \, h(x_0, R) R^2$, one can prove that \begin{equation} \label{limitezero!!!-} \begin{array}{l} {\displaystyle \int\!\!\!\int_{Q_{R; R/2, 1/2}^{\upbeta, -} (x_0,t_0)} (u-k_0-d)_+^2 \mu_- \, dx dt = 0 \, , } \\ [1em] {\displaystyle \int\!\!\!\int_{Q_{R; R/2, 1/2}^{\upbeta, -}(x_0,t_0)} (u-k_0-d)_+^2 \lambda_- \, dx dt = 0 \, , } \end{array} \end{equation} where $Q_{R; R/2, 1/2}^{\upbeta, -} (x_0,t_0) = B_R^- (x_0) \times (t_0 - \upbeta \, h(x_0, R) R^2, t_0 - \frac{1}{2} \upbeta \, h(x_0, R) R^2)$, provided that \begin{align*}
& \left( \frac{1}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,<}})}
\int\!\!\!\int_{Q_{R; R, 0}^{\upbeta, -,R/2}} (u-k_0)_+^2 \mu_- \, dx dt + \frac{1}{\Lambda (Q_{R}^{\upbeta, \texttt{\,<}})}
\int\!\!\!\int_{Q_{R; R, 0}^{\upbeta, -,R/2}} (u-k_0)_+^2 \lambda_- \, dx dt \right)^{1/2} < \\ & \hskip200pt < \, 3d\left( C_- \right)^{-\frac{1}{\alpha}} \, 4^{ -\frac{2}{\alpha} - \frac{1}{\alpha^2} - 1} \, , \end{align*} where $C_- = C_+ = \gamma_1^{1/\kappa} \, (1 + \upbeta)^{1/2} \, \upbeta^{-\frac{1}{2\kappa}} \, (2 \gamma + 8)^{1/2}$. \\ [0.5em] The proof regarding the part in which $\mu \equiv 0$ is slightly different, so we present it.
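The convergence mechanism invoked above through Lemma \ref{lemmuzzofurbo-quinquies} reduces, in the unperturbed case $\epsilon_n \equiv 0$, to the classical fast geometric convergence. A sketch with generic constants (not the exact hypotheses of the lemma, which also handles the perturbation $(\omega_n)_n$) may help fix ideas:

```latex
% Hedged sketch: unperturbed fast geometric convergence, generic constants.
\begin{align*}
&\text{Assume } y_{n+1} \leqslant c \, b^{n} \, y_n^{1+\alpha} \, ,
  \qquad c, \alpha > 0 , \ b > 1 . \\
&\text{Claim: if } y_0 \leqslant c^{-1/\alpha} \, b^{-1/\alpha^2}
  \text{ then } y_n \leqslant y_0 \, b^{-n/\alpha} \to 0 . \\
&\text{Induction step: } y_{n+1}
  \leqslant c \, b^{n} \big( y_0 \, b^{-n/\alpha} \big)^{1+\alpha}
  = \big( c \, y_0^{\alpha} \, b^{1/\alpha} \big) \, y_0 \, b^{-(n+1)/\alpha}
  \leqslant y_0 \, b^{-(n+1)/\alpha} \, ,
\end{align*}
since $c \, y_0^{\alpha} \, b^{1/\alpha} \leqslant 1$ by the smallness
assumption on $y_0$.
```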
We define $$ \sigma_1 := t_0 - \upbeta \, \frac{R^2}{2}, \quad \sigma_2 := t_0 + \upbeta \, \frac{R^2}{2} \qquad \text{ so that } \ \sigma_2 - \sigma_1 = \upbeta \, R^2 . $$ Moreover we suppose that $$ \lambda_0 (B_{R}) > 0 , $$ otherwise there is nothing to prove. We consider $r , \tilde{r} , \hat{r} \in (R/2,R)$ as before. Consider a function $\zeta \in \text{Lip}_c (B_{\tilde{r}} (x_0))$ (independent of $t$!) such that $$ \begin{array}{c} \zeta \equiv 1 \hskip10pt \text{in } Q_{R; \frac{R}{2}; \sigma_1, \sigma_2}^{0, r - \frac{R}{2}} (x_0) \, , \hskip20pt
\zeta \equiv 0 \hskip10pt \text{in }
\big( B_R (x_0) \times (\sigma_1, \sigma_2) \big) \setminus Q_{R; \frac{R}{2}; \sigma_1, \sigma_2}^{0,\tilde{r} - \frac{R}{2}} (x_0) \, , \\ [1em]
0 \leqslant \zeta \leqslant 1 \, , \hskip15pt {\displaystyle |D \zeta| \leqslant \frac{1}{\tilde{r}-r} } \, . \end{array} $$ We moreover define \begin{gather*}
A^{0,\delta}_{R} (k;\rho;\sigma_1, \sigma_2) := \{ (x,t) \in Q_{R;\rho; \sigma_1, \sigma_2}^{0, \delta} (x_0) \, | \, u(x,t) > k \} .
\end{gather*} Then we proceed in a way similar to that above and estimate $(\lambda (B_R))^{-1} \int\!\!\!\int_{Q_{R; R/2; \sigma_1, \sigma_2}^{0,r - R/2}} (u-k)_+^2 \lambda \, dx dt$ using first Corollary \ref{cor-gut-whee} with $\nu = \upsilon = \omega = \lambda$. One has (we write $Q_{R;\rho; s_1, s_2}^{0,\varepsilon}$ to mean $Q_{R;\rho; s_1, s_2}^{0,\varepsilon} (x_0)$) \begin{align*} & \frac{1}{\lambda (B_R)} \iint_{Q_{R; R/2; \sigma_1, \sigma_2}^{0,r-R/2}} (u-k)_+^2 \lambda_0 \, dx dt \leqslant
\frac{1}{\lambda (B_R)} \iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}} (u-k)_+^2 \zeta^2 \lambda_0 \, dx dt \leqslant \\ & \hskip30pt \leqslant \frac{\big( \Lambda_0 (A^{0,\tilde{r} - R/2}_{R} (k; R/2 ;\sigma_1, \sigma_2) ) \big)^{\frac{\kappa - 1}{\kappa}}}
{(\lambda (B_R))^{\frac{\kappa - 1}{\kappa}}}
\left[\frac{1}{\lambda (B_{R})} \iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}}
(u-k)_+^{2\kappa} \zeta^{2\kappa} \lambda_0 \, dx dt \right]^{\frac{1}{\kappa}} \leqslant \\ & \hskip30pt \leqslant \frac{\big( \Lambda_0 (A^{0,\tilde{r} - R/2}_{R} (k; R/2;\sigma_1, \sigma_2) ) \big)^{\frac{\kappa - 1}{\kappa}}}
{(\lambda (B_R))^{\frac{\kappa - 1}{\kappa}}} \, \gamma_1^{2/\kappa} \, R^{2/\kappa} \frac{1}{\lambda (B_R)} \cdot \\ & \hskip40pt \cdot \left[ \sup_{t \in (\sigma_1, \sigma_2)}
\int_{(B_{R/2}^0)^{\tilde{r} - R/2}} (u-k)_+^2 (x,t) \lambda_0 (x) dx \right]^{\frac{\kappa - 1}{\kappa}}
\left[\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}} | D \big( (u-k)_+ \zeta \big) |^2 \lambda \, dx dt
\right]^{\frac{1}{\kappa}} . \end{align*} Then using the energy estimate \eqref{DGgamma0} we get \begin{align*}
\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}} & | D \big( (u-k)_+ \zeta \big) |^2 \lambda \, dx dt \leqslant \\ & \leqslant 2 \iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}}
\Big[ | D (u-k)_+ |^2 \zeta^2 + | D \zeta |^2 (u-k)_+^2 \Big] \lambda \, dx dt \leqslant \\ & \leqslant 2 \iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}}
\Big[ | D (u-k)_+ |^2 + \frac{1}{(\tilde{r} - r)^2} (u-k)_+^2 \Big] \lambda \, dx dt \leqslant \\ & \leqslant 2 \gamma \Bigg[ \sup_{t \in (\sigma_1, \sigma_2)} \int_{ I_0^{R/2, \hat{r} - R/2}} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\ & \hskip50pt + \sup_{t \in (\sigma_1, \sigma_2)}\int_{ I_0^{R/2, \hat{r} - R/2}} (u-k)_+^2(x,t) \mu_+ (x) \, dx \, + \\ & \hskip50pt + \frac{1}{(\hat{r} - \tilde{r})^2}
\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \hat{r} - R/2}} (u-k)_+^2 \, \lambda \, dx dt \Bigg] + \\ & \hskip80pt + \frac{2}{(\tilde{r} - r)^2} \iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, \tilde{r} - R/2}} (u-k)_+^2 \lambda \, dx dt \, . \end{align*} Then we have, dividing by $\sigma_2 - \sigma_1$ on both sides, \begin{align} \label{mitoccanum3} \frac{1}{(\sigma_2 - \sigma_1) \lambda (B_R)} &
\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, {r} - R/2}} (u-k)_+^2 \lambda_0 \, dx dt \leqslant \nonumber \\ & \hskip30pt \leqslant \frac{\big( \Lambda_0 (A^{0,\tilde{r} - R/2}_{R} (k; R/2 ;\sigma_1, \sigma_2) ) \big)^{\frac{\kappa - 1}{\kappa}}}
{(\sigma_2 - \sigma_1)^{\frac{\kappa - 1}{\kappa}}(\lambda (B_R))^{\frac{\kappa - 1}{\kappa}}} \,
\gamma_1^{2/\kappa} \, \frac{R^{2/\kappa}}{(\sigma_2 - \sigma_1)^{\frac{1}{\kappa}}}
\frac{(\sigma_2 - \sigma_1)}{(\sigma_2 - \sigma_1)\lambda (B_R)} \cdot \nonumber \\ & \hskip50pt \cdot \Bigg[ \sup_{t \in (\sigma_1, \sigma_2)}
\int_{(B_{R/2}^0)^{\tilde{r} - R/2}} (u-k)_+^2 (x,t) \lambda_0 (x) dx + \nonumber \\ & \hskip70pt + 2 \gamma \sup_{t \in (\sigma_1, \sigma_2)} \int_{ I_0^{R/2, \hat{r} - R/2}} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\ & \hskip85pt + 2 \gamma \sup_{t \in (\sigma_1, \sigma_2)}\int_{ I_0^{R/2, \hat{r} - R/2}} (u-k)_+^2(x,t) \mu_+ (x) \, dx \, + \nonumber \\ & \hskip100pt + \frac{2 \gamma + 8}{(\hat{r} - \tilde{r})^2}
\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, {\hat r} - R/2}} (u-k)_+^2 (\lambda_+ + \lambda_-) \, dx dt \, + \nonumber \\ & \hskip115pt + \frac{2 \gamma + 8}{(\hat{r} - \tilde{r})^2}
\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, {\hat r} - R/2}} (u-k)_+^2 \lambda_0 \, dx dt \Bigg] \, . \nonumber \end{align} Now defining $$ \big( \tilde{u}_0(l; \rho ; \varepsilon ; \sigma_1, \sigma_2) \big)^2 =
\frac{1}{(\sigma_2 - \sigma_1) \lambda (B_R)}\iint_{Q_{R; \rho ; \sigma_1, \sigma_2}^{0, \varepsilon}} (u - l)_+^2 \lambda_0 \, dx dt
$$
for $\varepsilon \in [0, R/2)$, \begin{align*} \big( \omega^{\hat{r} - \tilde{r}} (u; k; \hat{r}) \big)^2:= &
\, \frac{(\hat{r} - \tilde{r})^2}{(\sigma_2 - \sigma_1)\lambda (B_R)} \cdot \Bigg[ \sup_{t \in (\sigma_1, \sigma_2)}
\int_{(B_{R/2}^0)^{\hat{r} - R/2}} (u-k)_+^2 (x,t) \lambda_0 (x) dx + \\ & \hskip65pt + \sup_{t \in (\sigma_1, \sigma_2)} \int_{ I_0^{R/2, \hat{r} - R/2}} (u-k)_+^2(x,t) \mu_- (x) \, dx + \\ & \hskip85pt + \sup_{t \in (\sigma_1, \sigma_2)}\int_{ I_0^{R/2, \hat{r} - R/2}} (u-k)_+^2(x,t) \mu_+ (x) \, dx \, \Bigg] + \\ & \hskip50pt + \frac{1}{(\sigma_2 - \sigma_1)\lambda (B_R)}
\iint_{Q_{R; R/2 ; \sigma_1, \sigma_2}^{0, {\hat r} - R/2}} (u-k)_+^2 (\lambda_+ + \lambda_-) \, dx dt \end{align*} and for $k > h$ $$ \frac{\Lambda_0 (A^{0,\tilde{r} - R/2}_{R} (k; R/2 ;\sigma_1, \sigma_2) )}{(\sigma_2 - \sigma_1)\lambda (B_R)} \leqslant
\frac{1}{(k-h)^2} \big( \tilde{u}_0(h; {\textstyle \frac{R}{2}}; \tilde{r} - {\textstyle \frac{R}{2}} ;\sigma_1, \sigma_2) \big)^2 $$ and since $\sigma_2 - \sigma_1 = \upbeta \, R^2$ we reach \begin{align} \tilde{u}_0(k; {\textstyle \frac{R}{2}}; r - {\textstyle \frac{R}{2}} ;\sigma_1, \sigma_2) & \leqslant \frac{\gamma_1^{1/\kappa} \, \upbeta^{\frac{\kappa - 1}{2\kappa}} R}{(k - h)^{\frac{\kappa - 1}{\kappa}}}
\frac{(2 \gamma + 8)^{1/2}}{\hat{r} - \tilde{r}} \cdot \nonumber \\ & \cdot \Big[ \omega^{\hat{r} - \tilde{r}} (u; k; \hat{r}) +
\tilde{u}_0 (k; {\textstyle \frac{R}{2}}; \hat{r} - {\textstyle \frac{R}{2}};\sigma_1, \sigma_2) \Big]
\big( \tilde{u}_0 (h; {\textstyle \frac{R}{2}}; \tilde{r} - {\textstyle \frac{R}{2}} ; \sigma_1, \sigma_2) \big)^{\frac{\kappa - 1}{\kappa}} \leqslant \nonumber \\ \label{noncasca} & \leqslant \frac{\gamma_1^{1/\kappa} \, \upbeta^{\frac{\kappa - 1}{2\kappa}} R}{(k - h)^{\frac{\kappa - 1}{\kappa}}} \frac{(2 \gamma + 8)^{1/2}}{\hat{r} - \tilde{r}} \cdot \\ & \cdot \Big[ \omega^{\hat{r} - \tilde{r}} (u; k; \hat{r}) + \tilde{u}_0 (h; {\textstyle \frac{R}{2}}; \hat{r} - {\textstyle \frac{R}{2}}; \sigma_1, \sigma_2) \Big]
\big( \tilde{u}_0 (h; {\textstyle \frac{R}{2}}; \hat{r} - {\textstyle \frac{R}{2}};\sigma_1, \sigma_2) \big)^{\frac{\kappa - 1}{\kappa}} . \nonumber \end{align} As done before, consider the following choices for $n \in {\bf N}$, $k_0\in {\bf R}$ and a fixed $d$: \begin{align*} & k_n := k_0 + d \left( 1 - \frac{1}{2^n} \right) \nearrow k_0 + d \, , \qquad r_n := \frac{R}{2} + \frac{R}{2^{n+1}} \searrow \frac{R}{2} \, , \end{align*} and define the sequences $$ u_n^0 := \tilde{u}_0 (k_n; {\textstyle \frac{R}{2}}; r_n - {\textstyle \frac{R}{2}}; \sigma_1, \sigma_2) \, ,
\hskip20pt \omega_n^0 := \omega^{r_n - r_{n+1}} (u; k_n; r_n) \, . $$ Making the following choices in \eqref{noncasca} \begin{align*} & r_{n+1} & \text{in the place of} & \hskip10pt r \, , & r_{n} & \hskip10pt \text{in the place of } \ \ \tilde{r} \, , \\ & r_{n-1} & \text{in the place of} & \hskip10pt \hat{r} \, , & \, & \, \\ & k_{n+1} & \text{in the place of} & \hskip10pt k \, , & k_{n-1} & \hskip10pt \text{in the place of } \ \ h \, , \end{align*} we get
\begin{equation} \label{enumeriamola_bis} u_{n+1}^0 \leqslant \, \frac{\gamma_1^{1/\kappa} \upbeta^{\frac{\kappa - 1}{2\kappa}} (2 \gamma + 8)^{1/2}}{(3d)^{\frac{\kappa - 1}{\kappa}}} \, (2^{\frac{2\kappa - 1}{\kappa}})^{n-1} \left( u_{n-1}^0 + \omega_{n-1}^0 \right) (u_{n-1}^0)^{\frac{\kappa - 1}{\kappa}} \, , \qquad n \geqslant 1 \end{equation} and then, similarly as before, we derive that \begin{align*} \lim_{n \to +\infty} u_n^0 = & \, \tilde{u}_0 \left(k_0 + d; \frac{R}{2}; \sigma_1, \sigma_2 \right) :=
\tilde{u}_0 \left(k_0 + d; \frac{R}{2}; 0; \sigma_1, \sigma_2 \right) = \\ = & \, \left( \frac{1}{(\sigma_2 - \sigma_1)\lambda (B_R)}
\int_{\sigma_1}^{\sigma_2} \!\!\!\! \int_{B_{R/2}^0} (u - k_0 - d)_+^2 \lambda_0 \, dx dt \right)^{1/2} = 0 \end{align*} provided that $$
u_0^0 < 3 \, d \, \gamma_1^{- \frac{1}{\kappa - 1}} \, \upbeta^{-1/2} (2 \gamma + 8)^{- \frac{\kappa}{2(\kappa - 1)}}
\, 2^{- \frac{6 \kappa^2 - 7 \kappa + 2}{(\kappa - 1)^2}} \, . $$
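The smallness condition above is precisely what drives the fast geometric convergence: once $u_0^0$ is below the threshold, the recursion forces $u_n^0 \to 0$. A minimal numerical sketch of this mechanism, with illustrative constants $c$, $b$, $\alpha$ standing in for the combinations of $\gamma_1$, $\upbeta$, $\gamma$, $\kappa$ above (not the actual ones):

```python
# Numerical illustration (not part of the proof) of the fast geometric
# convergence scheme: if y_{n+1} <= c * b^n * y_n^(1+alpha) and
# y_0 <= c^(-1/alpha) * b^(-1/alpha^2), then y_n -> 0.
# c, b, alpha below are illustrative values, not the constants of the text.

def iterate(y0, c, b, alpha, steps=60):
    y = y0
    for n in range(steps):
        y = c * (b ** n) * y ** (1.0 + alpha)
    return y

c, b, alpha = 10.0, 4.0, 0.5        # alpha plays the role of (kappa-1)/kappa
threshold = c ** (-1.0 / alpha) * b ** (-1.0 / alpha ** 2)

# starting below the threshold, the sequence collapses to zero
assert iterate(0.5 * threshold, c, b, alpha) < 1e-10
```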
\ \\ \noindent We now conclude this section by showing that $u$ is locally bounded in $\Omega$. In Figure B we show, in the regions where $\mu > 0$ and where $\mu < 0$, the sets involved in the estimates of points $i \,)$ and $ii \, )$.
\begin{theorem} \label{Linfinity} Suppose $u \in DG(\Omega,T, \mu, \lambda, \gamma)$ and consider $(x_0, t_0) \in \Omega \times (0,T)$, $\upbeta > 0$. Then there is a constant $c_\infty$ depending only on $\gamma, \gamma_1, \kappa , \upbeta$
such that: \\ [0.5em] $i \, )$ for every $B_{R} (x_0) \times (t_0, t_0 + \upbeta \, h(x_0,R) R^2) \subset \Omega \times (0,T)$ if $\mu_+ (B_R (x_0)) > 0$ we have \begin{align*}
\mathop{\rm ess\hspace{0.1 cm}sup}\limits_{Q_{R; R/2, 1/2}^{\upbeta,+}} & |u| \leqslant
c_{\infty} \Bigg[ \frac{1}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,>}})} \iint_{Q_{R; R, 0}^{\upbeta,+,R/2}} u^2 \mu_+ \, dx dt +
\frac{1}{\Lambda (Q_{R}^{\upbeta, \texttt{\,>}})} \iint_{Q_{R; R, 0}^{\upbeta,+,R/2}} u^2 \lambda_+ \, dx dt \Bigg]^{1/2} ; \end{align*} $ii \, )$ for every $B_{R} (x_0) \times (t_0 - \upbeta \, h(x_0,R) R^2, t_0) \subset \Omega \times (0,T)$ if $\mu_- (B_R (x_0)) > 0$ we have \begin{align*}
\mathop{\rm ess\hspace{0.1 cm}sup}\limits_{Q_{R; R/2, 1/2}^{\upbeta,-}} & |u| \leqslant
c_{\infty} \Bigg[ \frac{1}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,<}})} \iint_{Q_{R; R, 0}^{\upbeta,-,R/2}} u^2 \mu_- \, dx dt +
\frac{1}{\Lambda (Q_{R}^{\upbeta, \texttt{\,<}})} \iint_{Q_{R; R, 0}^{\upbeta,-,R/2}} u^2 \lambda_- \, dx dt \Bigg]^{1/2} ; \end{align*} $iii \, )$ for every $B_{R} (x_0) \times (\sigma_1, \sigma_2) \subset \Omega \times (0,T)$, $\sigma_2 - \sigma_1 = \upbeta \, R^2$, if $\lambda_0 (B_R (x_0)) > 0$ \begin{align*}
\mathop{\rm ess\hspace{0.1 cm}sup}\limits_{B_{R/2}^0 \times (\sigma_1, \sigma_2)} & |u| \leqslant c_{\infty}
\left( \frac{1}{\Lambda (B_{R} \times (\sigma_1, \sigma_2))} \iint_{Q^{0,R/2}_{R;R;\sigma_1,\sigma_2}} u^2 \lambda_0 \, dx dt \right)^{1/2} . \end{align*} \end{theorem} \noindent {\it Proof}\ \ -\ \ We prove the first point, the others being very similar. By \eqref{limitezero!!!} we derive that $$ \mathop{\rm ess\hspace{0.1 cm}sup}\limits_{Q_{R; R/2, 1/2}^+} u \leqslant k_0 + d $$ and $d$ has to satisfy \eqref{costanted}. For example we can choose $$ d = 2 \left( C_+ \right)^{\frac{1}{\alpha}} \, 3^{-1} 4^{ \frac{2}{\alpha} + \frac{1}{\alpha^2} + 1} \, u_0^+ . $$ By definition of $u_0^+$, defining the quantity $$ c_{\infty} := \frac{d}{u_0^+} = \frac{2}{3} \left( C_+ \right)^{\frac{1}{\alpha}} \, 4^{ \frac{2}{\alpha} + \frac{1}{\alpha^2} + 1} = \frac{2}{3} \gamma_1^{\frac{1}{\kappa - 1}} \frac{(1+\upbeta)^{\frac{\kappa}{2(\kappa - 1)}}}{\upbeta^{\frac{1}{2(\kappa - 1)}}} \, (2 \gamma + 8)^{\frac{\kappa}{2(\kappa - 1)}} 4^{\frac{3 \kappa^2 - 3 \kappa + 1}{(\kappa - 1)^2}} \, , $$ choosing $k_0 = 0$ and estimating $u_+^2$ by $u^2$ we finally get \begin{align*} \mathop{\rm ess\hspace{0.1 cm}sup}\limits_{Q_{R; R/2, 1/2}^{\upbeta,+}} & u \leqslant
c_{\infty} \left( \frac{1}{|M|_{\Lambda} (Q_{R}^{\upbeta, \texttt{\,>}})} \iint_{Q_{R; R, 0}^{\upbeta,+,R/2}} u^2 \mu_+ \, dx dt +
\frac{1}{\Lambda (Q_{R}^{\upbeta, \texttt{\,>}})} \iint_{Q_{R; R, 0}^{\upbeta,+,R/2}} u^2 \lambda_+ \, dx dt \right)^{1/2} \, . \end{align*} Since the analogous argument can be applied to $-u$ we have the first claim. The points $ii \, )$ and $iii \, )$ are completely analogous: the only difference is that the constant $c_{\infty}$ in point $ii \, )$ is the same as in point $i \, )$, while in point $iii \, )$ it is $3^{-1} \, \gamma_1^{\frac{1}{\kappa - 1}} \, \upbeta^{1/2} (2 \gamma + 8)^{\frac{\kappa}{2(\kappa - 1)}} \, 2^{\frac{6 \kappa^2 - 7 \kappa + 2}{(\kappa - 1)^2}-1}$.
$\square$ \\
\begin{oss} \rm -\ Notice that from points $i \, )$ and $ii \, )$ it is not possible to derive a pointwise (in time) estimate: indeed letting $\upbeta$ go to zero the constant $c_{\infty}$ goes to $+\infty$. \\ Also in point $iii \, )$ we cannot obtain a pointwise estimate because $\sigma_2 - \sigma_1 = \upbeta R^2$ and the constant $c_{\infty}$ depends on $\upbeta$. \\ Nevertheless one could obtain a pointwise estimate if $B_R \subset \Omega_0$ using \eqref{tempofissato} and Theorem \ref{gut-whee}.
\end{oss} \ \\ \noindent The local boundedness of a function in the class $DG$ will be needed immediately in the following section.
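The blow-up of $c_\infty$ as $\upbeta \to 0$, noted in the remark above, can be read off directly from the explicit formula for $c_\infty$ obtained in the proof of Theorem \ref{Linfinity}. A quick numerical sketch, where the parameter values $\gamma = \gamma_1 = 1$, $\kappa = 2$ are assumptions made only for this illustration:

```python
# Numerical sketch of the remark: the constant c_infty of points i) and ii)
# of Theorem \ref{Linfinity} blows up as beta -> 0+.
# Sample values gamma = gamma_1 = 1, kappa = 2 are assumed for illustration.

def c_infty(beta, gamma=1.0, gamma_1=1.0, kappa=2.0):
    e1 = kappa / (2 * (kappa - 1))
    e2 = 1 / (2 * (kappa - 1))
    e3 = (3 * kappa ** 2 - 3 * kappa + 1) / (kappa - 1) ** 2
    return (2 / 3) * gamma_1 ** (1 / (kappa - 1)) \
        * (1 + beta) ** e1 / beta ** e2 \
        * (2 * gamma + 8) ** e1 * 4 ** e3

# the smaller beta is, the larger the constant: no pointwise-in-time estimate
assert c_infty(1e-3) > c_infty(1e-2) > c_infty(1e-1)
```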
\ \\ \ \\ \begin{picture}(150,200)(-180,0) \put (-105,200){\linethickness{1pt}\line(1,0){200}} \put (-105,50){\linethickness{1pt}\line(1,0){200}} \put (-105,50){\linethickness{1pt}\line(0,1){150}} \put (95,50){\linethickness{1pt}\line(0,1){150}}
\put (-5,40){\line(0,1){170}}
\put (-104,125){\line(1,0){1}} \put (-101,125){\line(1,0){1}} \put (-98,125){\line(1,0){1}} \put (-95,125){\line(1,0){1}} \put (-92,125){\line(1,0){1}} \put (-89,125){\line(1,0){1}} \put (-86,125){\line(1,0){1}} \put (-83,125){\line(1,0){1}} \put (-80,125){\line(1,0){1}} \put (-77,125){\line(1,0){1}} \put (-74,125){\line(1,0){1}} \put (-71,125){\line(1,0){1}} \put (-68,125){\line(1,0){1}} \put (-65,125){\line(1,0){1}} \put (-62,125){\line(1,0){1}} \put (-59,125){\line(1,0){1}} \put (-56,125){\line(1,0){1}}
\put (-55,125){\line(0, -1){1}} \put (-55,122){\line(0, -1){1}} \put (-55,119){\line(0, -1){1}} \put (-55,116){\line(0, -1){1}} \put (-55,113){\line(0, -1){1}} \put (-55,110){\line(0, -1){1}} \put (-55,107){\line(0, -1){1}} \put (-55,104){\line(0, -1){1}} \put (-55,101){\line(0, -1){1}} \put (-55,98){\line(0, -1){1}} \put (-55,95){\line(0, -1){1}} \put (-55,92){\line(0, -1){1}} \put (-55,89){\line(0, -1){1}} \put (-55,86){\line(0, -1){1}} \put (-55,83){\line(0, -1){1}} \put (-55,80){\line(0, -1){1}} \put (-55,77){\line(0, -1){1}} \put (-55,74){\line(0, -1){1}} \put (-55,71){\line(0, -1){1}} \put (-55,68){\line(0, -1){1}} \put (-55,65){\line(0, -1){1}} \put (-55,62){\line(0, -1){1}} \put (-55,57){\line(0, -1){1}} \put (-55,54){\line(0, -1){1}} \put (-55,51){\line(0, -1){1}}
\put (-55,200){\line(0,-1){4}} \put (-55,195){\line(0,-1){4}} \put (-55,190){\line(0,-1){4}} \put (-55,185){\line(0,-1){4}} \put (-55,180){\line(0,-1){4}} \put (-55,175){\line(0,-1){4}} \put (-55,170){\line(0,-1){4}} \put (-55,165){\line(0,-1){2}} \put (-55,163){\line(1,0){4}} \put (-50,163){\line(1,0){4}} \put (-45,163){\line(1,0){4}} \put (-40,163){\line(1,0){4}} \put (-35,163){\line(1,0){4}} \put (-30,163){\line(1,0){4}} \put (-25,163){\line(1,0){4}} \put (-20,163){\line(1,0){4}} \put (-15,163){\line(1,0){4}} \put (-10,163){\line(1,0){4}}
\put (-55,187){\line(1,1){12}} \put (-55,177){\line(1,1){22}} \put (-55,167){\line(1,1){32}} \put (-49,163){\line(1,1){37}} \put (-39,163){\line(1,1){33}} \put (-29,163){\line(1,1){23}} \put (-19,163){\line(1,1){13}}
\put (45,199){\line(0,-1){1}} \put (45,196){\line(0,-1){1}} \put (45,193){\line(0,-1){1}} \put (45,190){\line(0,-1){1}} \put (45,187){\line(0,-1){1}} \put (45,184){\line(0,-1){1}} \put (45,181){\line(0,-1){1}} \put (45,178){\line(0,-1){1}} \put (45,175){\line(0,-1){1}} \put (45,172){\line(0,-1){1}} \put (45,169){\line(0,-1){1}} \put (45,166){\line(0,-1){1}} \put (45,163){\line(0,-1){1}} \put (45,160){\line(0,-1){1}} \put (45,157){\line(0,-1){1}} \put (45,154){\line(0,-1){1}} \put (45,151){\line(0,-1){1}} \put (45,148){\line(0,-1){1}} \put (45,145){\line(0,-1){1}} \put (45,142){\line(0,-1){1}} \put (45,139){\line(0,-1){1}} \put (45,136){\line(0,-1){1}} \put (45,133){\line(0,-1){1}} \put (45,130){\line(0,-1){1}} \put (45,127){\line(0,-1){1}}
\put (45,125){\line(1,0){1}} \put (48,125){\line(1,0){1}} \put (51,125){\line(1,0){1}} \put (54,125){\line(1,0){1}} \put (57,125){\line(1,0){1}} \put (60,125){\line(1,0){1}} \put (63,125){\line(1,0){1}} \put (66,125){\line(1,0){1}} \put (69,125){\line(1,0){1}} \put (72,125){\line(1,0){1}} \put (75,125){\line(1,0){1}} \put (78,125){\line(1,0){1}} \put (81,125){\line(1,0){1}} \put (84,125){\line(1,0){1}} \put (87,125){\line(1,0){1}} \put (90,125){\line(1,0){1}} \put (93,125){\line(1,0){1}}
\put (45,50){\line(0,1){4}} \put (45,55){\line(0,1){4}} \put (45,60){\line(0,1){4}} \put (45,65){\line(0,1){4}} \put (45,70){\line(0,1){4}} \put (45,75){\line(0,1){4}} \put (45,80){\line(0,1){4}} \put (45,85){\line(0,1){2}} \put (45,87){\line(-1,0){4}} \put (40,87){\line(-1,0){4}} \put (35,87){\line(-1,0){4}} \put (30,87){\line(-1,0){4}} \put (25,87){\line(-1,0){4}} \put (20,87){\line(-1,0){4}} \put (15,87){\line(-1,0){4}} \put (10,87){\line(-1,0){4}} \put (5,87){\line(-1,0){4}} \put (0,87){\line(-1,0){4}}
\put (-5,77){\line(1,1){10}} \put (-5,67){\line(1,1){20}} \put (-5,57){\line(1,1){30}} \put (-2,50){\line(1,1){37}} \put (8,50){\line(1,1){37}} \put (18,50){\line(1,1){27}} \put (28,50){\line(1,1){17}}
\put (-90,140){\tiny$\mu > 0$}
\put (60,110){\tiny$\mu < 0$}
\put (-5,125){\linethickness{2pt}\line(1,0){1}} \put (-10,130){\tiny$(x_0,t_0)$}
\put (-30,10){Figure B}
\end{picture}
\section{Expansion of positivity} \label{secPositivity}
In this section we present several preliminary results needed to prove Harnack's inequality. \\ In what follows we fix the following points and sets: we take three points $(x^{\diamond}\!, t^{\diamond}), (x^{\circ}\!, t^{\circ}), (x^{\star}\!, t^{\star}) \in \Omega \times (0,T)$ in such a way that \begin{align*} & Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond}) =
B_R(x^{\diamond}) \times (t^{\diamond}, s_2^{\diamond}) \subset \Omega \times (0,T)
\quad & \text{where } s_2^{\diamond} = t^{\diamond} + \upbeta^{\diamond} \, h \! \left(x^{\diamond} \!, R \right) R^2 \, , \\ & Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ}) =
B_R(x^{\circ}) \times (s_1^{\circ}, t^{\circ}) \subset \Omega \times (0,T)
\quad & \text{where } s_1^{\circ} = t^{\circ} - \upbeta^{\circ} \, h \! \left(x^{\circ} \!, R \right) R^2 \, , \\ & Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star}) :=
B_R(x^{\star}) \times (s_1^{\star} , s_2^{\star}) \subset \Omega \times (0,T) \quad & \text{where } s_1^{\star} = t^{\star} - \frac{\upbeta^{\star}}{2} R^2 , \ s_2^{\star} = t^{\star} + \frac{\upbeta^{\star}}{2} R^2 \, , \end{align*} with $\upbeta^{\diamond}, \upbeta^{\circ} , \upbeta^{\star} > 0$. \\ [0.3em] \noindent We recall that, thanks to the results of the previous section, a function belonging to the De Giorgi class $DG$ is locally bounded.
\begin{prop} \label{prop-DeGiorgi1} Consider three points $(x^{\diamond}\!, t^{\diamond}), (x^{\circ}\!, t^{\circ}), (x^{\star}\!, t^{\star}) \in \Omega \times (0,T)$ and $\rho \in (0,R)$. Suppose $Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond})$, $Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ})$, $Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})$ are contained in $\Omega \times (0,T)$. Then for every choice of $\theta^{\diamond} , \theta^{\circ} \in (0,1)$ and $a, \sigma \in (0,1)$ there are \\ $\overline{\nu}^{\diamond} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a$, $\theta^{\diamond}$, $\upbeta^{\diamond}$,\\ $\overline{\nu}^{\circ} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a$, $\theta^{\circ}$, $\upbeta^{\circ}$, \\ $\overline{\nu}^{\star} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a, (R - \rho)/R$, $\max\{ 1, 1/\upbeta^{\star} \}$, \\ $\overline{\nu} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a, (R - \rho)/R$, \\ such that for every $u \in DG_+(\Omega, T, \mu, \lambda, \gamma)$ and fixed $\overline{m}, \omega$ satisfying \\ [0.5em] $i \, )$ $\overline{m} \geqslant \sup_{Q_{R; R, 0}^{\upbeta^{\diamond},+} (x^{\diamond} \!, t^{\diamond})} u, \hskip10pt \omega \geqslant \mathop{\rm osc}\limits_{Q_{R; R, 0}^{\upbeta^{\diamond}+} (x^{\diamond} \!, t^{\diamond})} u$ we have that if $\mu_+ (B_{\rho}) > 0$ and
\begin{align*}
\frac{ M_+ (A_{0}^+)}{|M|_{\Lambda} (Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond}))} + \frac{ \Lambda_+ (A_{0}^+)}{\Lambda (Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond}))} \leqslant \overline{\nu}^{\diamond} , \end{align*}
$\hskip8pt$ where $A_0^+ = \{ (x,t) \in Q_{R; R, 0}^{\upbeta^{\diamond}, +} (x^{\diamond}\!, t^{\diamond}) \, | \, u(x,t) > \overline{m} - \sigma \omega \}$, then $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt
\text{for a.e. } (x,t) \in Q_{R; \rho, \theta^{\diamond}}^{\upbeta^{\diamond}, +} (x^{\diamond}\!, t^{\diamond}) \, ; $$ $ii \, )$ $\overline{m} \geqslant \sup_{Q_{R; R, 0}^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ})} u,
\hskip10pt \omega \geqslant \mathop{\rm osc}\limits_{Q_{R; R, 0}^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ})} u$ we have that if $\mu_- (B_{\rho}) > 0$ and \begin{align*}
\frac{ M_- (A_{0}^-)}{|M|_{\Lambda} (Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ}))} + \frac{ \Lambda_- (A_{0}^-)}{\Lambda (Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ}))} \leqslant \overline{\nu}^{\circ} , \end{align*}
$\hskip8pt$ where $A_0^- = \{ (x,t) \in Q_{R; R, 0}^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ}) \, | \, u(x,t) > \overline{m} - \sigma \omega \}$, then $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt \text{for a.e. } (x,t) \in Q_{R; \rho, \theta^{\circ}}^{\upbeta^{\circ}, -} (x^{\circ}\!, t^{\circ}) \, ; $$ $iii \, )$ $\overline{m} \geqslant \sup_{Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})} u, \hskip10pt
\omega \geqslant \mathop{\rm osc}\limits_{B_R(x^{\star}) \times (s_1^{\star} , s_2^{\star})} u$ we have that if $\lambda_0 (B_{\rho}) > 0$ and \begin{align*} \Lambda_0 (A_{0}^0) \leqslant \overline{\nu}^{\star} \, \Lambda (Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})) \end{align*}
$\hskip8pt$ where $A_0^0 = \{ (x,t) \in Q_{R; R, s_1^{\star}, s_2^{\star}}^{0} (x^{\star} \!, t^{\star}) \, | \, u(x,t) > \overline{m} - \sigma \omega \}$, then $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt \text{for a.e. } (x,t) \in Q_{R; \rho, s_1^{\star}, s_2^{\star}}^{0} (x^{\star} \!, t^{\star}) \, ; $$ $iv \, )$ $\overline{m} \geqslant \sup_{B_R(x^{\star})} u (\cdot, t), \hskip10pt \omega \geqslant \mathop{\rm osc}\limits_{B_R(x^{\star})} u (\cdot, t)$ we have that if $B_R(x^{\star}) \subset \Omega_0$ and \begin{align*}
\lambda \big(\{ x \in B_{R} (x^{\star}) \, | \, u(x,t) > \overline{m} - \sigma \omega \} \big) \leqslant
\overline{\nu} \ \lambda (B_R (x^{\star}) ) \end{align*} then $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt \text{for a.e. } x \in B_{\rho} (x^{\star}) $$ for a.e. $t \in (s_1^{\star} , s_2^{\star})$. \end{prop}
\begin{oss} \rm -\ \label{pluto} The requirement $\mu_+(B_{\rho}) > 0$ in point {\em i }\!) (and analogously $\mu_-(B_{\rho}) > 0$ in point {\em ii }\!) and $\lambda_0 (B_{\rho}) > 0$ in point {\em iii }\!)) is not technically needed: for the proof it would be sufficient to have $\mu_+(B_{R}) > 0$. We require it just to give a meaning to the thesis of the theorem. \end{oss}
\noindent {\it Proof}\ \ -\ \
We prove only the first claim, the others being similar. We will often omit the point $(x^{\diamond} \!, t^{\diamond})$, just to simplify the notation. First of all fix $a , \sigma \in (0,1)$, which will remain fixed throughout the proof. Choose $\theta^{\diamond} \in (0,1)$ and $\rho \in (0,R)$, assume that $\mu_+(B_{\rho}) > 0$ and consider the following sequences ($h \in {\bf N}$)
$$ \displaystyle \rho_h = \rho + \varepsilon^{h} (R - \rho), \quad \quad \theta_h = \theta^{\diamond} - \varepsilon^{2h} \theta^{\diamond} \, , $$
where $\varepsilon \in (0,1)$. We require $(\theta_{h+1} - \theta_h) R^2$ to be equal to $(\rho_{h} - \rho_{h+1})^2$ (as required in Definition \ref{classiDG} and in the proof that a $Q$-minimum belongs to the De Giorgi class, see \eqref{puredifave}): we derive that $\theta^{\diamond}$ has to satisfy \begin{equation} \label{teta} \theta^{\diamond} = \frac{1 - \varepsilon}{1 + \varepsilon} \, \frac{(R-\rho)^2}{R^2} \, . \end{equation} Referring to definitions \eqref{notazione1} we will consider $$ x_0 = x^{\diamond}\!, \qquad t_0 = t^{\diamond}\!, \qquad s_2 = s_2^{\diamond} := t^{\diamond} + \upbeta^{\diamond} \, h(x^{\diamond}\!, R) R^2 \, , $$ but we will often omit them to simplify the notation. We moreover define, for $h \in {\bf N}$ and $a, \sigma \in (0,1)$, \begin{equation} \label{tutticonacca} \begin{array}{c} \displaystyle B_h = B_{\rho_h}(x^{\diamond}) \, , \\ \displaystyle \delta_h : = \sum_{j = h}^{\infty} (\rho_j - \rho_{j+1}) = \rho_h - \rho = \varepsilon^{h} (R - \rho) \searrow 0 \, , \\ [4mm] Q_h^+ := Q_{R; \rho, \theta_h}^{\upbeta^{\diamond}, +, \rho_h-\rho} (x^{\diamond} \!, t^{\diamond}) \\ [4mm] I_h^+ := (I_{\rho}^+(x^{\diamond}))^{\delta_h} \, , \\ [2mm] \displaystyle \sigma_h = a \, \sigma + \varepsilon^h (1 - a)\, \sigma \searrow a\sigma\, ,
\quad\quad k_h = \overline{m} - \sigma_h \omega \nearrow \overline{m} - a \sigma \omega \, , \\ [4mm]
A_h^+ = \{ (x,t) \in Q_h^+ \, | \, u(x,t) > k_h \} \, . \end{array} \end{equation} Notice that $$ \begin{array}{c} Q_{h+1}^+ \subset Q_h^+ \, , \hskip20pt A_{h+1}^+ \subset A_h^+ \, , \\ [0.5em] \rho_h - \rho_{h+1} = (1 - \varepsilon)\varepsilon^{h} (R - \rho) \, , \\ [0.5em]
\theta_{h+1} \, \upbeta^{\diamond} \, h(x^{\diamond}\!,R) \, R^2 - \theta_h \, \upbeta^{\diamond} \, h(x^{\diamond}\!,R) \, R^2 = \theta^{\diamond} (1 - \varepsilon^2) \, \varepsilon^{2h} \, \upbeta^{\diamond} \, h(x^{\diamond}\!,R) \, R^2 . \end{array} $$ In the next picture we show some possible $Q_h^+$ marked by dashed lines, while the one marked by longer lines is the limit set (for $h \to +\infty$). \ \\ \ \\ \ \\ \begin{picture}(150,200)(-180,0) \put (-105,200){\linethickness{1pt}\line(1,0){210}} \put (-105,50){\linethickness{1pt}\line(1,0){210}} \put (-105,50){\linethickness{1pt}\line(0,1){150}} \put (105,50){\linethickness{1pt}\line(0,1){150}}
\put (-40,220){\tiny$\mu > 0$} \put (40,220){\tiny$\mu < 0 \text{ or }\mu = 0$}
\put (30,40){\line(0,1){170}}
\put (-180,125){\line(1,0){320}}
\put (-83,198){\line(0,-1){1}} \put (-83,195){\line(0,-1){1}} \put (-83,192){\line(0,-1){1}} \put (-83,189){\line(0,-1){1}} \put (-83,186){\line(0,-1){1}} \put (-83,183){\line(0,-1){1}} \put (-83,180){\line(0,-1){1}} \put (-83,177){\line(0,-1){1}} \put (-83,174){\line(0,-1){1}} \put (-83,171){\line(0,-1){1}} \put (-83,168){\line(0,-1){1}} \put (-83,165){\line(0,-1){1}} \put (-83,162){\line(0,-1){1}} \put (-83,159){\line(0,-1){1}} \put (-83,156){\line(0,-1){1}} \put (-83,153){\line(0,-1){1}} \put (-83,150){\line(0,-1){1}} \put (-83,147){\line(0,-1){1}} \put (-83,144){\line(0,-1){1}} \put (-83,141){\line(0,-1){1}} \put (-83,138){\line(0,-1){1}} \put (-83,135){\line(0,-1){1}} \put (-83,132){\line(0,-1){1}}
\put (-83,129){\line(1,0){1}} \put (-80,129){\line(1,0){1}} \put (-77,129){\line(1,0){1}} \put (-74,129){\line(1,0){1}} \put (-71,129){\line(1,0){1}} \put (-68,129){\line(1,0){1}} \put (-65,129){\line(1,0){1}} \put (-62,129){\line(1,0){1}} \put (-59,129){\line(1,0){1}} \put (-56,129){\line(1,0){1}} \put (-53,129){\line(1,0){1}} \put (-50,129){\line(1,0){1}} \put (-47,129){\line(1,0){1}} \put (-44,129){\line(1,0){1}} \put (-41,129){\line(1,0){1}} \put (-38,129){\line(1,0){1}} \put (-35,129){\line(1,0){1}} \put (-32,129){\line(1,0){1}} \put (-29,129){\line(1,0){1}} \put (-26,129){\line(1,0){1}} \put (-23,129){\line(1,0){1}} \put (-20,129){\line(1,0){1}} \put (-17,129){\line(1,0){1}} \put (-14,129){\line(1,0){1}} \put (-11,129){\line(1,0){1}} \put (-8,129){\line(1,0){1}} \put (-5,129){\line(1,0){1}} \put (-2,129){\line(1,0){1}} \put (1,129){\line(1,0){1}} \put (4,129){\line(1,0){1}} \put (7,129){\line(1,0){1}} \put (10,129){\line(1,0){1}} \put (13,129){\line(1,0){1}} \put (16,129){\line(1,0){1}}
\put (17,129){\line(0,-1){1}} \put (17,126){\line(0,-1){1}}
\put (-76,199){\line(0,-1){1}} \put (-76,196){\line(0,-1){1}} \put (-76,193){\line(0,-1){1}} \put (-76,190){\line(0,-1){1}} \put (-76,187){\line(0,-1){1}} \put (-76,184){\line(0,-1){1}} \put (-76,181){\line(0,-1){1}} \put (-76,178){\line(0,-1){1}} \put (-76,175){\line(0,-1){1}} \put (-76,172){\line(0,-1){1}} \put (-76,169){\line(0,-1){1}} \put (-76,166){\line(0,-1){1}} \put (-76,163){\line(0,-1){1}} \put (-76,160){\line(0,-1){1}} \put (-76,157){\line(0,-1){1}} \put (-76,154){\line(0,-1){1}} \put (-76,151){\line(0,-1){1}} \put (-76,148){\line(0,-1){1}} \put (-76,145){\line(0,-1){1}} \put (-76,142){\line(0,-1){1}}
\put (-75,141){\line(1,0){1}} \put (-72,141){\line(1,0){1}} \put (-69,141){\line(1,0){1}} \put (-66,141){\line(1,0){1}} \put (-63,141){\line(1,0){1}} \put (-60,141){\line(1,0){1}} \put (-57,141){\line(1,0){1}} \put (-54,141){\line(1,0){1}} \put (-51,141){\line(1,0){1}} \put (-48,141){\line(1,0){1}} \put (-45,141){\line(1,0){1}} \put (-42,141){\line(1,0){1}} \put (-39,141){\line(1,0){1}} \put (-36,141){\line(1,0){1}} \put (-33,141){\line(1,0){1}} \put (-30,141){\line(1,0){1}} \put (-27,141){\line(1,0){1}} \put (-24,141){\line(1,0){1}} \put (-21,141){\line(1,0){1}} \put (-18,141){\line(1,0){1}} \put (-15,141){\line(1,0){1}} \put (-12,141){\line(1,0){1}} \put (-9,141){\line(1,0){1}} \put (-6,141){\line(1,0){1}} \put (-3,141){\line(1,0){1}} \put (0,141){\line(1,0){1}} \put (3,141){\line(1,0){1}} \put (6,141){\line(1,0){1}} \put (9,141){\line(1,0){1}} \put (12,141){\line(1,0){1}} \put (15,141){\line(1,0){1}} \put (18,141){\line(1,0){1}} \put (21,141){\line(1,0){1}}
\put (24,141){\line(0,-1){1}} \put (24,138){\line(0,-1){1}} \put (24,135){\line(0,-1){1}} \put (24,132){\line(0,-1){1}} \put (24,129){\line(0,-1){1}} \put (24,126){\line(0,-1){1}}
\put (-75,210){$- \rho$} \put (-70,200){\linethickness{2pt}\line(1,0){1}}
\put (65,210){$\rho$} \put (70,200){\linethickness{2pt}\line(1,0){1}}
\put (-70,200){\line(0,-1){4}} \put (-70,195){\line(0,-1){4}} \put (-70,190){\line(0,-1){4}} \put (-70,185){\line(0,-1){4}} \put (-70,180){\line(0,-1){4}} \put (-70,175){\line(0,-1){4}} \put (-70,170){\line(0,-1){4}} \put (-70,165){\line(0,-1){4}} \put (-70,160){\line(0,-1){4}} \put (-70,155){\line(0,-1){4}} \put (-70,150){\line(1,0){4}} \put (-65,150){\line(1,0){4}} \put (-60,150){\line(1,0){4}} \put (-55,150){\line(1,0){4}} \put (-50,150){\line(1,0){4}} \put (-45,150){\line(1,0){4}} \put (-40,150){\line(1,0){4}} \put (-35,150){\line(1,0){4}} \put (-30,150){\line(1,0){4}} \put (-25,150){\line(1,0){4}} \put (-20,150){\line(1,0){4}} \put (-15,150){\line(1,0){4}} \put (-10,150){\line(1,0){4}} \put (-5,150){\line(1,0){4}} \put (0,150){\line(1,0){4}} \put (5,150){\line(1,0){4}} \put (10,150){\line(1,0){4}} \put (15,150){\line(1,0){4}} \put (20,150){\line(1,0){4}} \put (25,150){\line(1,0){4}}
\put (-20,30){Figure C} \end{picture}
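The compatibility condition \eqref{teta} linking the sequences $(\rho_h)_h$ and $(\theta_h)_h$ can also be verified numerically. A short sketch, with assumed sample values for $R$, $\rho$, $\varepsilon$:

```python
# Numerical check of the compatibility condition (teta): with
#   rho_h   = rho + eps^h (R - rho),
#   theta_h = theta_d (1 - eps^(2h)),
#   theta_d = (1 - eps)/(1 + eps) * (R - rho)^2 / R^2,
# one has (theta_{h+1} - theta_h) R^2 = (rho_h - rho_{h+1})^2 for every h.
# R, rho, eps below are assumed sample values, 0 < rho < R and eps in (0,1).

R, rho, eps = 2.0, 1.0, 0.5
theta_d = (1 - eps) / (1 + eps) * (R - rho) ** 2 / R ** 2

for h in range(10):
    rho_h = rho + eps ** h * (R - rho)
    rho_h1 = rho + eps ** (h + 1) * (R - rho)
    th_h = theta_d * (1 - eps ** (2 * h))
    th_h1 = theta_d * (1 - eps ** (2 * h + 2))
    assert abs((th_h1 - th_h) * R ** 2 - (rho_h - rho_h1) ** 2) < 1e-12
```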
\noindent First of all notice that since \begin{align*} (k_{h+1} - k_h)^2 & M_+ (A_{h+1}^+) \leqslant \iint_{A_{h+1}^+} (u - k_h)_+^2 \mu_+ \, dx dt \leqslant \iint_{Q_{h+1}^+} (u - k_h)_+^2 \mu_+ \, dx dt \end{align*} and $k_{h+1} - k_h = (1-a) \, \sigma \, \omega \, \varepsilon^{h+1}$ we can estimate \begin{align} \label{first}
\varepsilon^{2h+2} \, (1-a)^2 \sigma^2 \omega^2 \, \frac{ M_+ (A_{h+1}^+)}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \leqslant
\frac{1}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \iint_{Q_{h+1}^+} (u - k_h)_+^2 \mu_+ \, dx dt \, . \end{align} Similarly \begin{equation} \label{second} {\displaystyle \varepsilon^{2h+2} \, (1-a)^2 \sigma^2 \omega^2 \, \frac{\Lambda_+ (A_{h+1}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}} \leqslant \frac{1}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \iint_{Q_{h+1}^+} (u-k_h)_+^2 \lambda_+ \, dx dt \, . \end{equation}
\noindent Then we can argue exactly as in the derivation of \eqref{mitoccanum1} and \eqref{mitoccanum2}. Taking in \eqref{mitoccanum1} \begin{equation} \label{valoriacchesimi} \begin{array}{l} \rho_{h+1} = r \, , \hskip20pt \rho_h = \tilde{r} \, , \hskip20pt \rho_{h-1} = \hat{r} \, , \hskip20pt \rho \text{ in place of } R/2 \, , \\ [2mm] \theta_{h+1} = \theta \, , \hskip20pt \theta_h = \tilde\theta \, , \hskip20pt \theta_{h-1} = \hat\theta \, , \hskip20pt k_h = k \, , \end{array} \end{equation} we get (the only difference with \eqref{mitoccanum1} is that $2(\rho_h - \rho_{h+1}) \not= \rho_{h-1} - \rho_{h}$ unless $\varepsilon = 1/2$) \begin{align*}
\frac{1}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} & \iint_{Q_{h+1}^+} (u-k_h)_+^2 \mu_+ \, dx dt \leqslant \nonumber \\ \leqslant & \hskip5pt \gamma_1^{2/\kappa} \, R^2 \, \frac{1 + \upbeta^{\diamond}}{({\upbeta^{\diamond}})^{\frac{1}{\kappa}}} \,
\frac{2 \gamma + 2}{(\rho_{h} - \rho_{h+1})^2} \, \cdot \nonumber \\
& \cdot \Bigg[ \frac{1}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \iint_{Q_{h-1}^+} (u - k_h)_+^2 \mu_+ \, dx dt
+ \frac{1}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \iint_{Q_{h-1}^+} (u - k_h)_+^2 \lambda_+ \, dx dt \, + \nonumber \\ & \hskip50pt + \frac{1}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \iint_{I_{h-1}^+ \times (t^{\diamond} \!, s_2^{\diamond})}
(u - k_h)_+^2 (\lambda_0 + \lambda_-) \, dx dt \, + \nonumber \\ & \hskip50pt + (\rho_{h} - \rho_{h+1})^2 \, \frac{1}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
\sup_{t \in (t^{\diamond}\!, s_2^{\diamond})} \int_{I_{h-1}^+} (u - k_h)_+^2 (x,t) |\mu| (x) dx \Bigg] \, .
\end{align*} Now since (here we use $\sigma_h \leqslant \sigma$) \begin{align*} \iint_{Q_{h-1}^+} (u-k_h)_+^2 \mu_+ \, dx dt \leqslant M_+ (A_{h-1}^+) \ \sup_{Q_{h-1}^+} (u - k_h)^2 \leqslant M_+ (A_{h-1}^+) (\sigma \omega)^2 \, , \\ \iint_{Q_{h-1}^+} (u-k_h)_+^2 \lambda_+ \, dx dt \leqslant
\Lambda_+ (A_{h-1}^+) \ \sup_{Q_{h-1}^+} (u - k_h)^2 \leqslant \Lambda_+ (A_{h-1}^+) (\sigma \omega)^2 \, , \end{align*} by the above inequality and by \eqref{first} we get \begin{align*}
\frac{ M_+ (A_{h+1}^+)}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \leqslant & \,
\frac{\gamma_1^{2/\kappa} \, R^2}{\varepsilon^{2h+2} (1-a)^2 \sigma^2 \omega^2} \,
\frac{1 + \upbeta^{\diamond}}{(\upbeta^{\diamond})^{\frac{1}{\kappa}}}
\frac{2 \gamma + 2}{(1-\varepsilon)^2 \varepsilon^{2h}(R - \rho)^2} \, \\
& \cdot \left( \frac{ M_+ (A_{h}^+)} {|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \right)^{\frac{\kappa - 1}{\kappa}}\, \cdot
\Bigg[ \frac{M_+ (A_{h-1}^+)}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} (\sigma \omega)^2 +
\frac{\Lambda_+ (A_{h-1}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} (\sigma \omega)^2 + \\ & \hskip30pt + \frac{1}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \iint_{I_{h-1}^+ \times (t^{\diamond} \!, s_2^{\diamond})}
(u - k_h)_+^2 (\lambda_0 + \lambda_-) \, dx dt \, + \\ & \hskip30pt + \, \frac{(R - \rho)^2 (1-\varepsilon)^2 \varepsilon^{2h}}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
\sup_{t \in (t^{\diamond}\!, s_2^{\diamond})} \int_{I_{h-1}^+} (u - k_h)_+^2 (x,t) |\mu| (x) dx \Bigg] \, . \end{align*} Now defining first \begin{align*} y_h := & \ y^{M}_h + y^{\Lambda}_h \, , \hskip15pt \text{ where }
y^{M}_h := \frac{M_+ (A_{h}^+)}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
\hskip10pt \text{and} \hskip10pt y^{\Lambda}_h := \frac{\Lambda_+ (A_{h}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \, , \end{align*} and, since $(u - k_h)_+^2$ is bounded by $(\sigma \omega)^2$ and estimating \begin{align*} \frac{1}{\sigma^2 \omega^2 \Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} & \iint_{I_{h}^+ \times (t^{\diamond} \!, s_2^{\diamond})}
(u - k_h)_+^2 (\lambda_0 + \lambda_-) \, dx dt \, + \\ & \hskip10pt + \, \frac{(R - \rho)^2 (1-\varepsilon)^2 \varepsilon^{2(h-1)}}{\sigma^2 \omega^2\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
\sup_{t \in (t^{\diamond}\!, s_2^{\diamond})} \int_{I_{h}^+} (u - k_h)_+^2 (x,t) |\mu| (x) dx \leqslant \\ \leqslant & \ \frac{(\Lambda_0 + \Lambda_-)(I_{h}^+ \times (t^{\diamond} \!, s_2^{\diamond}))}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
+ \, \frac{(R - \rho)^2 (1-\varepsilon)^2 \varepsilon^{2(h-1)} |\mu| (I_{h}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \end{align*} defining also \begin{align*} \epsilon_h := \frac{(\Lambda_0 + \Lambda_-)(I_{h}^+ \times (t^{\diamond} \!, s_2^{\diamond}))}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
+ \, \frac{(R - \rho)^2 (1-\varepsilon)^2 \varepsilon^{2(h-1)} |\mu| (I_{h}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \end{align*} we first get \begin{align*} y^M_{h+1} \leqslant \frac{\gamma_1^{2/\kappa} R^2 \, (2\gamma + 2)}{(1-a)^2 (1-\varepsilon)^2 \varepsilon^2 (R - \rho)^2}
\, \frac{1 + \upbeta^{\diamond}}{(\upbeta^{\diamond})^{\frac{1}{\kappa}}} \, \frac{1}{\varepsilon^{4h}} \
(y^M_{h})^{\frac{\kappa - 1}{\kappa}} \left[ y^{M}_{h-1} + y^{\Lambda}_{h-1} + \epsilon_{h-1} \right] \, . \end{align*} Taking \eqref{valoriacchesimi} in \eqref{mitoccanum2} we can argue in a similar way to estimate $y^{\Lambda}_{h+1}$ and get \begin{align*} y^{\Lambda}_{h+1} \leqslant \frac{\gamma_1^{2/\kappa} R^2\, (2\gamma + 2)}{(1-a)^2 (1 - \varepsilon)^2 \varepsilon^2 (R - \rho)^2}
\, \frac{1 + \upbeta^{\diamond}}{(\upbeta^{\diamond})^{\frac{1}{\kappa}}} \,
\frac{1}{\varepsilon^{4h}} \
(y^{\Lambda}_{h})^{\frac{\kappa - 1}{\kappa}} \left[ y^{M}_{h-1} + y^{\Lambda}_{h-1} + \epsilon_{h-1} \right] \, . \end{align*} Summing the two inequalities and since the sequences $(y^{M}_h)_h$, $(y^{\Lambda}_h)_h$ are decreasing we finally get \begin{align*} y_{h+1} \leqslant \ \frac{\gamma_1^{2/\kappa} R^2 \, (2\gamma + 2)}{(1-a)^2 (1-\varepsilon)^2 \varepsilon^2 (R - \rho)^2} \,
\frac{1 + \upbeta^{\diamond}}{(\upbeta^{\diamond})^{\frac{1}{\kappa}}} \,
\frac{1}{\varepsilon^{4h}} \ y_{h-1}^{\frac{\kappa - 1}{\kappa}} \left( y_{h-1} + \epsilon_{h-1} \right) \end{align*} for every $h \geqslant 1$; then, for instance, \begin{align*} y_{2(h+1)} \leqslant \ \frac{\gamma_1^{2/\kappa} R^2 \, (2\gamma + 2)}{(1-a)^2 (1-\varepsilon)^2 (R - \rho)^2 \varepsilon^{6}} \,
\frac{1 + \upbeta^{\diamond}}{(\upbeta^{\diamond})^{\frac{1}{\kappa}}} \,
\frac{1}{\varepsilon^{8h}} \ y_{2h}^{\frac{\kappa - 1}{\kappa}} \left( y_{2h} + \epsilon_{2h} \right) \, . \end{align*} Using \eqref{teta} to write $R^2/(R - \rho)^2$ and Lemma \ref{lemmuzzofurbo-quinquies} with \begin{gather*} c = \frac{\gamma_1^{2/\kappa} \, (2\gamma + 2)}{(1 - a)^2 (1 - \varepsilon)^2 \, \varepsilon^{6} \, \theta^{\diamond}} \,
\frac{1 + \upbeta^{\diamond}}{(\upbeta^{\diamond})^{\frac{1}{\kappa}}} \, ,
\hskip10pt \alpha = \frac{\kappa - 1}{\kappa} \, , \hskip10pt b = \frac{1}{\varepsilon^8} \, , \end{gather*} we derive that the subsequence $(y_{2h})_h$ of even indices, and in fact the whole sequence $(y_h)_h$ since it is decreasing, converges to zero provided that \begin{align*}
\frac{ M_+ (A_{0}^+)}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} +
\frac{ \Lambda_+ (A_{0}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} \leqslant \left(\frac{(1-a)^2 (1-\varepsilon)^2 \, \varepsilon^{6} \, \theta^{\diamond} (\upbeta^{\diamond})^{\frac{1}{\kappa}}}
{\gamma_1^{2/\kappa} \, (1 + \upbeta^{\diamond}) \, (2\gamma + 2)} \right)^{\frac{\kappa}{\kappa-1}} \,
\, \varepsilon^{\frac{8\kappa^2}{(\kappa - 1)^2}} \, . \end{align*} By the definition of $A_h$ we have that \begin{gather*} Q_0^+ = Q_{R; \rho, 0}^{\upbeta^{\diamond}, +,R - \rho} (x^{\diamond} \!, t^{\diamond}) \qquad \text{and} \qquad
A_0^+ = \big\{ (x,t) \in Q_0^+ \, \big| \, u(x,t) > \overline{m} - \sigma \omega \big\} \end{gather*} but we can consider \begin{align*}
A_0^+ & = \big\{ (x,t) \in Q_{R; R, 0}^{\upbeta^{\diamond}, +} (x^{\diamond} \!, t^{\diamond}) \, \big| \, u(x,t) > \overline{m} - \sigma \omega \big\} = \\
& = \big\{ (x,t) \in B_R^+ (x^{\diamond}) \times (t^{\diamond} \!, t^{\diamond} + h(x^{\diamond} \!, R) R^2) \, \big| \, u(x,t) > \overline{m} - \sigma \omega \big\} \end{align*} since we will consider the measures $M_+$ and $\Lambda_+$ of this set.
Then we have derived that $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt \text{for a.e. }
(x,t) \in Q_{R; \rho, \theta^{\diamond} }^{\upbeta^{\diamond},+} (x^{\diamond} \!, t^{\diamond}) $$
provided that $$
\frac{ M_+ (A_{0}^+)}{|M|_{\Lambda} (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})} +
\frac{ \Lambda_+ (A_{0}^+)}{\Lambda (Q_R^{\upbeta^{\diamond}, \texttt{\,>}})}
\leqslant \overline{\nu}^{\diamond} $$ where $$ \overline{\nu}^{\diamond} = \left(\frac{(1-a)^2 (1-\varepsilon)^2 \, \varepsilon^{6} \, \theta^{\diamond} \, (\upbeta^{\diamond})^{\frac{1}{\kappa}}}
{\gamma_1^{2/\kappa} \, (1 + \upbeta^{\diamond}) \, (2\gamma + 2)} \right)^{\frac{\kappa}{\kappa-1}} \,
\, \varepsilon^{\frac{8\kappa^2}{(\kappa - 1)^2}} \, . $$
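For the reader's convenience let us make the role of Lemma \ref{lemmuzzofurbo-quinquies} explicit, using the quantities $c$, $\alpha$, $b$ introduced above: since
$$ b^{-\frac{1}{\alpha^2}} = \varepsilon^{\frac{8\kappa^2}{(\kappa - 1)^2}} \hskip20pt \text{and} \hskip20pt
c^{-\frac{1}{\alpha}} = c^{-\frac{\kappa}{\kappa - 1}} \, , $$
the threshold $\overline{\nu}^{\diamond}$ has precisely the form $c^{-1/\alpha} \, b^{-1/\alpha^2}$, the usual smallness condition on $y_0$ ensuring fast geometric convergence.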
In a completely analogous way, fix a point $(x^{\circ} \!, t^{\circ})$ such that $\mu_- (B_R(x^{\circ})) > 0$. Taking the same values as before for $\rho, a, \sigma$ and $\theta^{\circ} \in (0,1)$, one gets that there is $\overline{\nu}^{\circ} > 0$ such that if $$
\frac{ M_- (A_{0}^-)}{|M|_{\Lambda} (Q_R^{\upbeta^{\circ}, \texttt{\,<}})} + \frac{ \Lambda_- (A_{0}^-)}{\Lambda (Q_R^{\upbeta^{\circ}, \texttt{\,<}})} \leqslant \overline{\nu}^{\circ} \, , $$ where the ball $B_R$ is centred in $x^{\circ}$ and \begin{gather*}
A_0^- = \left\{ (x,t) \in Q_{R; \rho, 0}^{\upbeta^{\circ}, -,R-\rho} (x^{\circ} \!, t^{\circ}) \, | \, u(x,t) > \overline{m} - \sigma \omega \right\} \, , \\ \overline{\nu}^{\circ} = \left(\frac{(1-a)^2 (1-\varepsilon)^2 \, \varepsilon^{6} \, \theta^{\circ} \, (\upbeta^{\circ})^{\frac{1}{\kappa}}}
{\gamma_1^{2/\kappa} \, (1 + \upbeta^{\circ}) \, (2\gamma + 2)} \right)^{\frac{\kappa}{\kappa-1}} \,
\, \varepsilon^{\frac{8\kappa^2}{(\kappa - 1)^2}} \, , \end{gather*} then $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt \text{for a.e. }
(x,t) \in Q_{R; \rho, \theta^{\circ} }^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ}) \, . $$ Finally we analyse the part in which $\mu \equiv 0$, which is slightly different. Fix a point $(x^{\star} \!, t^{\star})$ such that $\lambda_0 (B_R(x^{\star})) > 0$, consider $k_h$ and $\sigma_h$ as in \eqref{tutticonacca}. Arguing as done to obtain \eqref{mitoccanum3} and taking in \eqref{mitoccanum3} for $k, r, \tilde{r}, \hat{r}$ the same values as in \eqref{valoriacchesimi} and for $\sigma_1, \sigma_2$ respectively $s_1^{\star}$ and $s_2^{\star}$ we get \begin{align*} & \frac{1}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))} \int\!\!\!\int_{Q_{h+1}^0} (u - k_h)_+^2 \lambda_0 \, dx dt \leqslant \\
& \hskip40pt \leqslant \gamma_1^{2/\kappa} \, (\upbeta^{\star})^{\frac{\kappa - 1}{\kappa}}
R^2 \, \frac{2 \gamma + 2}{(1 - \varepsilon)^2 \varepsilon^{2h}(R - \rho)^2} \,
\frac{\big( \Lambda_0 (A_h^0) \big)^{\frac{\kappa - 1}{\kappa}}}
{(\Lambda (B_R \times (s_1^{\star}, s_2^{\star})))^{\frac{\kappa - 1}{\kappa}}}\, \cdot \\ & \hskip50pt \cdot \Bigg[
\frac{1}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))}
\iint_{Q_{h-1}^0} (u - k_h)_+^2 \lambda_0 \, dx dt + \\ & \hskip60pt + \frac{1}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))}
\iint_{I_{h-1}^0 \times (s_1^{\star} \!, s_2^{\star})} (u - k_h)_+^2 (\lambda_+ + \lambda_-) \, dx dt + \\ & \hskip60pt + \frac{(1 - \varepsilon)^2 \varepsilon^{2h}(R - \rho)^2}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))}
\sup_{t \in (s_1^{\star}, s_2^{\star})} \int_{(B_{\rho_h}^0)^{\rho_h - \rho}} (u - k_h)_+^2 (x,t) \lambda_0 (x) dx \\ & \hskip60pt + \frac{(1 - \varepsilon)^2 \varepsilon^{2h}(R - \rho)^2}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))}
\sup_{t \in (s_1^{\star}, s_2^{\star})} \int_{I_{h-1}^0} (u - k_h)_+^2 (x,t) \mu_+ (x) dx + \\ & \hskip60pt + \frac{(1 - \varepsilon)^2 \varepsilon^{2h}(R - \rho)^2}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))}
\sup_{t \in (s_1^{\star}, s_2^{\star})} \int_{I_{h-1}^0} (u - k_h)_+^2 (x,t) \mu_- (x) dx \Bigg] \end{align*} where \begin{gather*} I_h^0 := (I_{\rho}^0(x^{\star}))^{\rho_h - \rho} \setminus I_{\rho,\rho_h - \rho}^0(x^{\star}) \\ Q_h^0 := Q_{R; \rho, s_1^{\star}, s_2^{\star}}^{0, \rho_h - \rho} (x^{\star} \!, t^{\star}) \, , \\
A_h^0 = \{ (x,t) \in Q_h^0 \, | \, u(x,t) > k_h \} \, . \end{gather*} Since, as for \eqref{first}, we have \begin{gather*} \varepsilon^{2h+2} \, (1-a)^2 \sigma^2 {\omega}^2 \, \frac{\Lambda_0 (A_{h+1}^0)}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))} \leqslant
\frac{1}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))} \int\!\!\!\int_{Q_{h+1}^0} (u - k_h)_+^2 \lambda_0 \, dx dt \, , \\ \iint_{Q_{h-1}^0} (u-k_h)_+^2 \lambda_0 \, dx dt \leqslant
\Lambda_0 (A_{h-1}^0) \ \sup_{Q_{h-1}^0} (u - k_h)^2 \leqslant \Lambda_0 (A_{h-1}^0) (\sigma {\omega})^2 \, , \end{gather*} we derive \begin{align*} y_{h+1} \leqslant \frac{\gamma_1^{2/\kappa} \, (\upbeta^{\star})^{\frac{\kappa - 1}{\kappa}} \, R^2 \, (2\gamma + 2)}{(1-a)^2 (1-\varepsilon)^2
\varepsilon^2 (R - \rho)^2} \, \frac{1}{\varepsilon^{4h}} \ y_{h-1}^{\frac{\kappa - 1}{\kappa}} \left( y_{h-1} + \epsilon_{h-1} \right) \end{align*} where here we have defined \begin{align*} y_h := & \ \frac{\Lambda_0 (A_{h}^0)}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))} \\ \epsilon_h := & \frac{1}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))}
\Bigg[ (\Lambda_+ + \Lambda_-) (I_{h-1}^0 \times (s_1^{\star}, s_2^{\star})) + \\ & \hskip40pt + (R - \rho)^2 (1-\varepsilon)^2 \varepsilon^{2h} \Big( \Lambda_0 ((B_{\rho_h}^0)^{\rho_h - \rho}) +
|\mu| (I_{h-1}^0) \Big) \Bigg] . \end{align*} Arguing as before we get that $y_h$ tends to zero, that is $$ u(x,t) \leqslant \overline{m} - a \, \sigma \, \omega \hskip30pt \text{for a.e. }
(x,t) \in Q_{R; \rho, s_1^{\star}, s_2^{\star}}^{0} (x^{\star} \!, t^{\star}) \, , $$ provided that $$ \frac{ \Lambda_0 (A_{0}^0)}{\Lambda (B_R \times (s_1^{\star}, s_2^{\star}))} \leqslant \overline{\nu}^{\star} $$ where
$$ \overline{\nu}^{\star} = \left[ \frac{(1-a)^2 \, (1 - \varepsilon)^2 \, \varepsilon^6 \, (R - \rho)^2}
{\gamma_1^{2/\kappa} R^2 \, (2\gamma + 2)} \right]^{\frac{\kappa}{\kappa-1}}\, \frac{1}{\upbeta^{\star}} \,
\varepsilon^{\frac{8 \kappa^2}{(\kappa - 1)^2}} \, . $$ Notice that ($\gamma_1 > 1$) $$ \left[ \frac{(1-a)^2 \, (1 - \varepsilon)^2 \, \varepsilon^6 \, (R - \rho)^2}
{\gamma_1^{2/\kappa} R^2 \, (2\gamma + 2)} \right]^{\frac{\kappa}{\kappa-1}} \,
\, \varepsilon^{\frac{8 \kappa^2}{(\kappa - 1)^2}} \leqslant 1 $$ and to guarantee $\overline{\nu}^{\star} \leqslant 1$ for every choice of $\upbeta^{\star}$ (say less than $1$) we can choose $\varepsilon$ in a suitable way. For example taking $\varepsilon$ in such a way that $\varepsilon^{\frac{8 \kappa^2}{(\kappa - 1)^2}} / \upbeta^{\star} = 1/2$, i.e. $$ \varepsilon = \left( \frac{\upbeta^{\star}}{2} \right)^{\frac{(\kappa - 1)^2}{8 \kappa^2}} $$ we have $\overline{\nu}^{\star} < 1$ and we get rid of the dependence on $1/\upbeta^{\star}$ for $\upbeta^{\star}$ small. \\ [0.3em]
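For the reader's convenience, the arithmetic behind this choice of $\varepsilon$ is immediate:
$$ \varepsilon^{\frac{8 \kappa^2}{(\kappa - 1)^2}} = \left[ \left( \frac{\upbeta^{\star}}{2} \right)^{\frac{(\kappa - 1)^2}{8 \kappa^2}} \right]^{\frac{8 \kappa^2}{(\kappa - 1)^2}} = \frac{\upbeta^{\star}}{2}
\hskip20pt \Longrightarrow \hskip20pt
\frac{1}{\upbeta^{\star}} \, \varepsilon^{\frac{8 \kappa^2}{(\kappa - 1)^2}} = \frac{1}{2} \, . $$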
For the last point we can proceed as follows: first notice that $B_R := B_R (x^{\star}) \subset \Omega_0$. With the same $k_h$ and $\rho_h$ as before we consider $B_h := B_{\rho_h} (x^{\star})$, define the sequence of test functions $$ \zeta_h : B_R \to [0,1] \, , \qquad \zeta_h (x) = \left\{
\begin{array}{ll}
1 & \text{ in } B_{h+1} \\
0 & \text{ in } B_R \setminus B_{h}
\end{array}
\right.
\qquad | D \zeta_h | \leqslant \frac{1}{\rho_{h} - \rho_{h+1}} $$
and for almost every $t \in (0,T)$ we define $A_h = \{ x \in B_{\rho_h} (x^{\star}) \, | \, u(x,t) > k_h \}$. Using Theorem \ref{chanillo-wheeden} with $2 \kappa$ in the place of $q$ (see also Remark \ref{rmkipotesi}) we have \begin{align*} & (1 - a)^2 \sigma^2 \omega^2 \varepsilon^{2(h+1)} \frac{\lambda (A_{h+1})}{\lambda (B_R)} \leqslant
\frac{1}{\lambda (B_R)} \int_{B_{{h+1}}} (u - k_h)_+^2 (x,t) \lambda (x) \, dx \leqslant \\ & \hskip30pt \leqslant \frac{1}{\lambda (B_R)} \int_{B_{{h}}} (u - k_h)_+^2 (x,t) \zeta_h^2 (x) \lambda (x) \, dx \leqslant \\ & \hskip30pt \leqslant \left(\frac{\lambda (A_h)}{\lambda (B_R)}\right)^{\frac{\kappa - 1}{\kappa}}
\left[\frac{1}{\lambda (B_{R})} \int_{B_{{h}}}
(u - k_h)_+^{2\kappa} (x,t) \zeta_h^{2\kappa} (x) \lambda(x) \, dx \right]^{\frac{1}{\kappa}} \leqslant \\ & \hskip30pt \leqslant \left(\frac{\lambda (A_h)}{\lambda (B_R)}\right)^{\frac{\kappa - 1}{\kappa}}
\frac{\gamma_1^{2} \, R^{2}}{\lambda (B_R)} \int_{B_{{h}}} | D \big( (u - k_h)_+ \zeta_h \big) |^2 \lambda \, dx \leqslant \\ & \hskip30pt \leqslant \left(\frac{\lambda (A_h)}{\lambda (B_R)}\right)^{\frac{\kappa - 1}{\kappa}}
\frac{2 \, \gamma_1^{2} \, R^{2}}{\lambda (B_R)} \int_{B_{{h}}}
\left[ | D (u - k_h)_+ |^2 + \frac{1}{(\rho_h - \rho_{h+1})^2} (u - k_h)_+^2 \right] \lambda \, dx \leqslant \\ & \hskip30pt \leqslant \left(\frac{\lambda (A_h)}{\lambda (B_R)}\right)^{\frac{\kappa - 1}{\kappa}}
\frac{2 \, \gamma_1^{2} \, R^{2}}{\lambda (B_R)}
\Bigg[ \frac{\gamma}{(\rho_{h-1} - \rho_h)^2} \int_{B_{{h-1}}} (u - k_h)_+^2 (x,t)\, \lambda (x) \, dx + \\ & \hskip200pt + \frac{1}{(\rho_h - \rho_{h+1})^2} \int_{B_{h}} (u - k_h)_+^2 \lambda \, dx \Bigg] \leqslant \\ & \hskip30pt \leqslant \left(\frac{\lambda (A_h)}{\lambda (B_R)}\right)^{\frac{\kappa - 1}{\kappa}}
\frac{2 \, \gamma_1^{2} \, R^{2}}{\lambda (B_R)} \,
\frac{\gamma + 1}{\varepsilon^{2h} (R - \rho)^2 (1 - \varepsilon)^2} \int_{B_{{h-1}}} (u - k_h)_+^2 (x,t)\, \lambda (x) \, dx \leqslant \\ & \hskip30pt \leqslant
{2 \, \gamma_1^{2} \, R^{2}} \,
\frac{\gamma + 1}{\varepsilon^{2h} (R - \rho)^2 (1 - \varepsilon)^2} \, \sigma^2 \omega^2 \,
\left(\frac{\lambda (A_{h-1})}{\lambda (B_R)}\right)^{1 + \frac{\kappa - 1}{\kappa}}\, . \end{align*} We can conclude as before, using Lemma \ref{giusti}, provided that $$ \displaylines{
\frac{\lambda (A_0)}{\lambda (B_R)} \leqslant \overline{\nu} = \left[ \frac{(1-a)^2 \, (1 - \varepsilon)^2 \, \varepsilon^6 \, (R - \rho)^2}
{\gamma_1^2 R^2 \, (2 \gamma + 2)} \right]^{\frac{\kappa}{\kappa-1}} \,
\, \varepsilon^{\frac{8 \kappa^2}{(\kappa - 1)^2}} \, .
\llap{$\square$}} $$ \\
\begin{prop} \label{prop-DeGiorgi2} Consider three points $(x^{\diamond}\!, t^{\diamond}), (x^{\circ}\!, t^{\circ}), (x^{\star}\!, t^{\star}) \in \Omega \times (0,T)$ and $r \in (0,R)$. Suppose $Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond})$, $Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ})$, $Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})$ are contained in $\Omega \times (0,T)$. Then for every choice of $\theta^{\diamond}, \theta^{\circ} \in (0,1)$ and $a, \sigma \in (0,1)$ there are \\ $\underline{\nu}^{\diamond} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a$, $\theta^{\diamond}$, $\upbeta^{\diamond}$, \\ $\underline{\nu}^{\circ} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a$, $\theta^{\circ}$, $\upbeta^{\circ}$,\\ $\underline{\nu}^{\star} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a, (R - r)/R$, $\max\{ 1, 1/\upbeta^{\star} \}$, \\ $\underline{\nu} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, a, (R - r)/R$, \\ such that for every $u \in DG_-(\Omega, T, \mu, \lambda, \gamma)$ and fixed $\underline{m}, \omega$ satisfying \\ [0.5em] $i \, )$ $\underline{m} \leqslant \inf_{Q_{R; R, 0}^{\upbeta^{\diamond},+} (x^{\diamond} \!, t^{\diamond})} u, \hskip10pt \omega \geqslant \mathop{\rm osc}\limits_{Q_{R; R, 0}^{\upbeta^{\diamond}, +} (x^{\diamond} \!, t^{\diamond})} u$ we have that if $\mu_+ (B_{r}) > 0$ and \begin{align*}
\frac{ M_+ (A_{0}^+)}{|M|_{\Lambda} (Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond}))} + \frac{ \Lambda_+ (A_{0}^+)}{\Lambda (Q_{R}^{\upbeta^{\diamond}, \texttt{\,>}} (x^{\diamond}\!, t^{\diamond}))} \leqslant \underline{\nu}^{\diamond} , \end{align*}
$\hskip8pt$ where $A_0^+ = \{ (x,t) \in Q_{R; R, 0}^{\upbeta^{\diamond}, +} (x^{\diamond}\!, t^{\diamond}) \, | \, u(x,t) < \underline{m} + \sigma \omega \}$, then $$ u(x,t) \geqslant \underline{m} + a \, \sigma \, \omega \hskip30pt
\text{for a.e. } (x,t) \in Q_{R; r, \theta^{\diamond}}^{\upbeta^{\diamond}, +} (x^{\diamond}\!, t^{\diamond}) \, ; $$ $ii \, )$ $\underline{m} \leqslant \inf_{Q_{R; R, 0}^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ})} u,
\hskip10pt \omega \geqslant \mathop{\rm osc}\limits_{Q_{R; R, 0}^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ})} u$ we have that if $\mu_- (B_{r}) > 0$ and \begin{align*}
\frac{ M_- (A_{0}^-)}{|M|_{\Lambda} (Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ}))} + \frac{ \Lambda_- (A_{0}^-)}{\Lambda (Q_{R}^{\upbeta^{\circ}, \texttt{\,<}} (x^{\circ}\!, t^{\circ}))} \leqslant \underline{\nu}^{\circ} , \end{align*}
$\hskip8pt$ where $A_0^- = \{ (x,t) \in Q_{R; R, 0}^{\upbeta^{\circ}, -} (x^{\circ} \!, t^{\circ}) \, | \, u(x,t) < \underline{m} + \sigma \omega \}$, then $$ u(x,t) \geqslant \underline{m} + a \, \sigma \, \omega \hskip30pt \text{for a.e. } (x,t) \in Q_{R; r, \theta^{\circ}}^{\upbeta^{\circ}, -} (x^{\circ}\!, t^{\circ}) \, ; $$ $iii \, )$ $\underline{m} \leqslant \inf_{Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})} u, \hskip10pt
\omega \geqslant \mathop{\rm osc}\limits_{Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})} u$ we have that if $\lambda_0 (B_{r}) > 0$ and \begin{align*} \Lambda_0 (A_{0}^0) \leqslant \underline{\nu}^{\star} \, \Lambda (Q_{R}^{s_1^{\star} , s_2^{\star}} (x^{\star}\!, t^{\star})) \end{align*}
$\hskip8pt$ where $A_0^0 = \{ (x,t) \in Q_{R; R, s_1^{\star}, s_2^{\star}}^{0} (x^{\star} \!, t^{\star}) \, | \, u(x,t) < \underline{m} + \sigma \omega \}$, then $$ u(x,t) \geqslant \underline{m} + a \, \sigma \, \omega \hskip30pt \text{for a.e. } (x,t) \in Q_{R; r, s_1^{\star}, s_2^{\star}}^{0} (x^{\star} \!, t^{\star}) \, ; $$ $iv \, )$ $\underline{m} \leqslant \inf_{B_R(x^{\star})} u (\cdot, t), \hskip10pt \omega \geqslant \mathop{\rm osc}\limits_{B_R(x^{\star})} u (\cdot, t)$ we have that if $B_R(x^{\star}) \subset \Omega_0$ and \begin{align*}
\lambda \big(\{ x \in B_{R} (x^{\star}) \, | \, u(x,t) < \underline{m} + \sigma \omega \} \big) \leqslant \underline{\nu} \ \lambda (B_R (x^{\star}) ) \end{align*} then $$ u(x,t) \geqslant \underline{m} + a \, \sigma \, \omega \hskip30pt \text{for a.e. } x \in B_{r} (x^{\star}) $$ for a.e. $t \in (0,T)$. \end{prop}
\ \\ \noindent We now need some preparatory results for one fundamental step in the proof of the Harnack inequality, namely Lemma \ref{esp_positivita}, usually referred to as {\em expansion of positivity}. \\ \ \\ We define, for a fixed point $(\bar{y}, \bar{s}) \in \Omega \times (0,T)$ and a fixed $h > 0$, the sets
\begin{align} \label{Aacca_ro2}
& A_{h,\rho}^+ (\bar{y}, \bar{s}) = \{ x \in B_{\rho}^+(\bar{y}) \, | \, u(x,\bar{s}) < h \} \, , \nonumber \\
& A_{h,\rho}^- (\bar{y}, \bar{s}) = \{ x \in B_{\rho}^-(\bar{y}) \, | \, u(x,\bar{s}) < h \} \, , \\
& A_{h,\rho}^0 (\bar{y}, \bar{s}) = \{ x \in B_{\rho}^0(\bar{y}) \, | \, u(x,\bar{s}) < h \} \, . \nonumber \end{align}
\begin{oss} \rm -\ \label{Aacca_ro} Observe that the condition $u(x,\bar{s}) \geqslant h$ for every $x \in B_{\rho}(\bar{y})$ implies that $A_{h,4 \rho}(\bar{y}, \bar{s}) \subset B_{4\rho}(\bar{y}) \setminus B_{\rho}(\bar{y})$; in particular, if $\omega$ is a doubling weight ($c_{\omega}$ denoting the doubling constant of $\omega$), one has $$ \omega \big(A_{h,4\rho}(\bar{y}, \bar{s})\big) \leqslant \, \omega \big(B_{4\rho}(\bar{y}) \setminus B_{\rho}(\bar{y}) \big)
\leqslant \left(1 - c_{\omega}^{-2} \right) \, \omega \big(B_{4\rho}(\bar{y}) \big)\, . $$
In our situation this holds for $|\mu|_{\lambda}$, thanks to \eqref{doublingmula}, but also for $\mu_+$, $\mu_-$, $\lambda_0$ thanks to the assumption (H.4). \end{oss}
\begin{lemma} \label{lemma1} Given $(x^{\ast} \!, t^{\ast})$ such that $B_{4\rho}(x^{\ast}) \subset \Omega$ then \\ [0.3em] $i \, )$ if $\lambda_0 \big(B_{4\rho}(x^{\ast}) \big) > \lambda_0 \big(B_{\rho}(x^{\ast}) \big) > 0$ there exists $\eta \in (0,1)$, depending only on $\mathfrak q$, such that for every $\bar{t} \in (0,T)$
we have that, given $h > 0$ and $u \geqslant 0$ belonging to $DG(\Omega, T, \mu, \lambda, \gamma)$ for which the following holds $$ u(x,\bar{t}) \geqslant h \hskip20pt \text{a.e. in } B_{\rho}^0(x^{\ast}) , $$ then \begin{align*} \lambda_0 (A_{\eta h,4\rho}^0 (x^{\ast} \!, \bar{t})) & \leqslant
\left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, \lambda_0 \big(B_{4\rho}^0(x^{\ast}) \big) . \end{align*} If $B_{4\rho}(x^{\ast}) \times [t^{\ast} - \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2] \subset \Omega \times (0,T)$ with $\upbeta \in (0,16]$ then: \\ [0.3em] $ii \, )$ if $\mu_+ \big(B_{4\rho}(x^{\ast}) \big) > \mu_+ \big(B_{\rho}(x^{\ast}) \big) > 0$ there exists $\eta \in (0,1)$, depending only on $\gamma, \mathfrak q$, and there exists $\tilde\upbeta \in (0, \upbeta]$, depending only on $\gamma$ and $\upbeta$, such that, given $h > 0$ and $u \geqslant 0$ belonging to $DG(\Omega, T, \mu, \lambda, \gamma)$ for which the following holds $$ u(x,t^{\ast}) \geqslant h \hskip20pt \text{a.e. in } B_{\rho}^+(x^{\ast}) , $$ then for every $t \in [t^{\ast}, t^{\ast} + \tilde\upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2]$ \begin{align*} \mu_+ (A_{\eta h,4\rho}^+ (x^{\ast} \!, t)) & \leqslant
\left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, \mu_+ \big(B_{4\rho}^+(x^{\ast}) \big) ; \end{align*} $iii \, )$ if $\mu_- \big(B_{4\rho}(x^{\ast}) \big) > \mu_- \big(B_{\rho}(x^{\ast}) \big) > 0$ there exists $\eta \in (0,1)$, depending only on $\gamma, \mathfrak q$, and there exists $\tilde\upbeta \in (0, \upbeta]$, depending only on $\gamma$ and $\upbeta$, such that, given $h > 0$ and $u \geqslant 0$ belonging to $DG(\Omega, T, \mu, \lambda, \gamma)$ for which the following holds $$ u(x,t^{\ast}) \geqslant h \hskip20pt \text{a.e. in } B_{\rho}^-(x^{\ast}) , $$ then for every $t \in [t^{\ast} - \tilde\upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast}]$ \begin{align*} \mu_- (A_{\eta h,4\rho}^- (x^{\ast} \!, t)) & \leqslant
\left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, \mu_- \big(B_{4\rho}^-(x^{\ast}) \big) ; \end{align*}
$iv \, )$ there exist $\eta \in (0,1)$, depending only on $\gamma$ and $\mathfrak q$, and there exists $\tilde\upbeta \in (0, \upbeta]$, depending only on $\gamma$ and $\upbeta$, such that, given $h > 0$ and $u \geqslant 0$ belonging to $DG(\Omega, T, \mu, \lambda, \gamma)$ for which the following holds $$ u(x,t^{\ast}) \geqslant h \hskip20pt \text{a.e. in } B_{\rho}(x^{\ast}) , $$ then \begin{align*}
|\mu|_{\lambda} \big( A_{\eta h,4\rho}^+ (x^{\ast} \!, t) \cup A_{\eta h,4\rho}^- (x^{\ast} \!, s) \cup A_{\eta h,4\rho}^0 (x^{\ast} \!, t^{\ast}) \big)
\leqslant \left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, |\mu|_{\lambda} \big(B_{4\rho}(x^{\ast}) \big) \end{align*} for every $t \in [t^{\ast}, t^{\ast} + \tilde\upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2]$ and $s \in [t^{\ast} - \tilde\upbeta \, h(x^{\ast} \!, 4\rho)\, \rho^2, t^{\ast}]$. \end{lemma}
\noindent {\it Proof}\ \ -\ \ First we prove point $ii \, )$. Consider $s_1 = t^{\ast} - \upbeta h(x^{\ast}\!, 4 \rho) \rho^2$, $s_2 = t^{\ast} + \upbeta h(x^{\ast}\!, 4 \rho) \rho^2$. Apply the energy estimate \eqref{DGgamma+_1} to the function $(u - h)_-$ with $x_0 = x^{\ast}$, $t_0 = t^{\ast}$, $r = 4 \rho (1-\sigma)$ for an arbitrary $\sigma \in (0,1)$, $R = \tilde{r} = 4 \rho$, $\varepsilon = 0$. With this choice we have $\tilde{r} - r = 4 \rho \sigma$. Then we get \begin{align*} \sup_{t \in (t^{\ast} \!, s_2)} & \int_{B_{4 \rho (1-\sigma)}^+(x^{\ast})} (u - h)_-^2 (x,t) \mu_+ (x) dx \leqslant \\ \leqslant & \int_{B_{4\rho}^+(x^{\ast})} (u - h)_-^2 (x,t^{\ast}) \mu_+ (x) dx \ +
\sup_{t \in (t^{\ast}\!, s_2)} \int_{I^{4\rho, 4\rho\sigma}_+} (u-h)_-^2 (x,t) \mu_-(x) \, dx + \\ & + \, \frac{\gamma}{(4 \rho \sigma)^2} \int_{t^{\ast}}^{s_2} \!\!\!\! \int_{B_{4\rho}^+(x^{\ast}) \cup I^{4\rho, 4\rho\sigma}_+} (u - h)_-^2\, \lambda \, dx ds . \end{align*} Now, in addition to this inequality, we use the following two inequalities: first that in a set $A_{\eta h, r}$ we have that $(u-h)_- \geqslant (1-\eta) h$; moreover, since $u \geqslant 0$, $(u - h)_- \leqslant h$. Then, using also Remark \ref{Aacca_ro}, we get for every $t \in [t^{\ast}, s_2]$ \begin{align*} (1-\eta)^2 h^2 & \mu_+\big( A_{\eta h,4\rho (1-\sigma)}^+ (x^{\ast} \!, t) \big) \leqslant \\ \leqslant & \int_{A_{\eta h,4\rho (1-\sigma)}^+ (x^{\ast} \!, t)} (u - h)_-^2 (x,t) \mu_+ (x) dx \leqslant \\ \leqslant & \int_{B_{4 \rho (1-\sigma)}^+(x^{\ast})} (u - h)_-^2 (x,t) \mu_+ (x) dx \leqslant \\ \leqslant & \ h^2 \, \mu_+ \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) +
h^2 \mu_-({I^{4\rho, 4\rho\sigma}_+}) +
\frac{\gamma h^2}{(4\rho\sigma)^2} \, \Lambda \big( (B_{4\rho}^+(x^{\ast}) \cup I^{4\rho, 4\rho\sigma}_+) \times (t^{\ast}, s_2) \big) . \end{align*} Using the following decomposition \begin{align*} A_{\eta h, 4\rho}^+(x^{\ast} \!, t) & =
A_{\eta h,4\rho (1-\sigma)}^+ (x^{\ast} \!, t) \cup \big\{x \in B_{4\rho}^+ (x^{\ast}) \setminus B_{4\rho(1-\sigma)}^+(x^{\ast}) \, \big| \, u(x,t) < \eta h \big \} , \end{align*} and then the last estimate we get \begin{align} (1-\eta)^2 & \mu_+ \big( A_{\eta h,4\rho}^+ (x^{\ast} \!, t) \big) \leqslant \nonumber \\ \leqslant & \ (1-\eta)^2 \Big[ \mu_+\big( A_{\eta h,4\rho (1-\sigma)}^+ (x^{\ast} \!, t) \big)
+ \mu_+ \big( B_{4\rho} (x^{\ast}) \setminus B_{4\rho(1-\sigma)}(x^{\ast}) \big) \Big] \leqslant \nonumber \\ \label{oralareplichiamo} \leqslant & \ \mu_+ \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) +
\mu_-({I^{4\rho, 4\rho\sigma}_+}) +
\frac{\gamma}{(4\rho\sigma)^2} \, \Lambda \big( (B_{4\rho}^+(x^{\ast}) \cup I^{4\rho, 4\rho\sigma}_+) \times (t^{\ast}, s_2) \big) + \\
& + \, (1-\eta)^2 \mu_+ \big( B_{4\rho} (x^{\ast}) \setminus B_{4\rho(1-\sigma)}(x^{\ast}) \big) . \nonumber \end{align} If the thesis were false we would have that for every $\tilde{\upbeta} \in (0, \upbeta]$ and $\eta \in (0,1)$ there would be $\bar{t} \in [t^{\ast}, t^{\ast} + \tilde\upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2]$ such that
$$ \left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, \mu_+ \big(B_{4\rho}^+(x^{\ast}) \big)
< \mu_+ (A_{\eta h,4\rho}^+ (x^{\ast} \!, \bar{t})) $$ and then \begin{align*} (1-\eta)^2 & \left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, \mu_+ \big(B_{4\rho}^+(x^{\ast}) \big) < \\
< & \ \mu_+ \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) +
\mu_- ({I^{4\rho, 4\rho\sigma}_+}) +
\frac{\gamma}{(4\rho\sigma)^2} \, \Lambda \big( (B_{4\rho}^+(x^{\ast}) \cup I^{4\rho, 4\rho\sigma}_+) \times (t^{\ast}, s_2) \big) + \\
& + \, (1-\eta)^2 \mu_+ \big( B_{4\rho} (x^{\ast}) \setminus B_{4\rho(1-\sigma)}(x^{\ast}) \big) . \end{align*} Then taking, for instance, $\tilde\upbeta = \sigma^3$, and letting $\sigma$ and $\eta$ go to zero, we would find the contradiction (and here it is needed that $\mu_+ \big(B_{4\rho}(x^{\ast}) \big) > \mu_+ \big(B_{\rho}(x^{\ast}) \big) > 0$) \begin{gather*} \left(1 - \frac{1}{2} \frac{1}{\mathfrak q^2} \right) \, \mu_+ \big(B_{4\rho}^+(x^{\ast}) \big)
\leqslant \mu_+ \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) \\ \Downarrow \\ 2 \mu_+ \big( B_{\rho}(x^{\ast}) \big) \leqslant \mu_+ \big( B_{\rho}(x^{\ast}) \big) \, . \end{gather*} In a way analogous to \eqref{oralareplichiamo} we can derive for every $s \in [s_1, t^{\ast}]$ \begin{align} (1-\eta)^2 & \mu_- \big( A_{\eta h,4\rho}^- (x^{\ast} \!, s) \big) \leqslant \nonumber \\ \leqslant & \ (1-\eta)^2 \Big[ \mu_- \big( A_{\eta h,4\rho (1-\sigma)}^- (x^{\ast} \!, s) \big)
+ \mu_- \big( B_{4\rho} (x^{\ast}) \setminus B_{4\rho(1-\sigma)}(x^{\ast}) \big) \Big] \leqslant \nonumber \\ \label{replica1} \leqslant & \ \mu_- \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) +
\mu_+ ({I^{4\rho, 4\rho\sigma}_-}) +
\frac{\gamma}{(4\rho\sigma)^2} \, \Lambda \big( (B_{4\rho}^-(x^{\ast}) \cup I^{4\rho, 4\rho\sigma}_-) \times (s_1, t^{\ast}) \big) + \\
& + \, (1-\eta)^2 \mu_- \big( B_{4\rho} (x^{\ast}) \setminus B_{4\rho(1-\sigma)}(x^{\ast}) \big) \nonumber \end{align} by which, again by contradiction, we prove point $iii \, )$. \\ [0.3em] Point $i\, )$ is quite immediate.
Since $(u - h)_-(x,\bar{t}) \geqslant (1 - \eta) h$ in $A_{\eta h,4\rho}^0(x^{\ast} \!, \bar{t})$ we immediately get \begin{align*} (1-\eta)^2 h^2 & \lambda_0\big( A_{\eta h,4\rho}^0 (x^{\ast} \!, \bar{t}) \big) \leqslant \\ \leqslant & \int_{A_{\eta h,4\rho}^0 (x^{\ast} \!, \bar{t})} (u - h)_-^2 (x,\bar{t}) \lambda_0 (x) dx \leqslant \\ \leqslant & \int_{B_{4 \rho}^0(x^{\ast})} (u - h)_-^2 (x, \bar{t}) \lambda_0 (x) dx \leqslant
h^2 \, \lambda_0 \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) \end{align*} that is $$ (1-\eta)^2 \lambda_0\big( A_{\eta h,4\rho}^0 (x^{\ast} \!, \bar{t}) \big) \leqslant
\lambda_0 \big(B_{4\rho}(x^{\ast}) \setminus B_{\rho}(x^{\ast}) \big) \leqslant
\left(1 - \frac{1}{\mathfrak q^2} \right) \lambda_0 \big(B_{4\rho}(x^{\ast}) \big) $$ and then $\eta$ is easily found: it suffices to choose $\eta$ so small that $(1-\eta)^2 \geqslant \big(1 - \mathfrak q^{-2}\big) \big/ \big(1 - \tfrac{1}{2} \mathfrak q^{-2}\big)$. \\ [0.3em] Point $iv \, )$ is obtained simply by summing and rearranging the previous inequalities.
$\square$ \\
\begin{lemma} \label{lemma2} Consider $\upbeta \in (0,16]$ and $(x^{\ast} \!, t^{\ast})$ such that $B_{5\rho}(x^{\ast}) \times [t^{\ast} - \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2] \subset \Omega \times (0,T)$, and let $\eta$ and $\tilde\upbeta$ be the values determined in Lemma $\ref{lemma1}$. Let $\upkappa$ and $\uptau$ be the constants appearing in \eqref{carlettomio}. Consider $u \geqslant 0$ in $DG(\Omega, T, \mu, \lambda, \gamma)$ and $h > 0$. \\ [0.3em]
$i \, )$ If $\mu_+ \big(B_{4\rho}(x^{\ast}) \big) > \mu_+ \big(B_{\rho}(x^{\ast}) \big) > 0$ and $u(\cdot,t^{\ast}) \geqslant h$ a.e. in $B_{\rho}^+(x^{\ast})$ \\ then for every $\epsilon > 0$ there exists $\eta_1 \in (0,\eta)$, $\eta_1$ depending only on $\gamma_1 , \gamma , \mathfrak q , \epsilon , \eta, \tilde\upbeta$ such that \begin{align*} M_+ \Big( \{u < \eta_1 h \} \, \cap \, &
\Big[B_{4\rho}^+ (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \Big] \Big) \leqslant \\
& \leqslant \epsilon \ |M|_{\Lambda} \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big) , \\ \Lambda_+ \Big( \{u < \eta_1 h \} \, \cap \, &
\Big[B_{4\rho}^+ (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \Big] \Big) \leqslant \\ & \leqslant \upkappa \, \epsilon^{\uptau} \ \Lambda \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big) ; \end{align*} $ii \, )$ if $\mu_- \big(B_{4\rho}(x^{\ast}) \big) > \mu_- \big(B_{\rho}(x^{\ast}) \big) > 0$ and $u(\cdot,t^{\ast}) \geqslant h$ a.e. in $B_{\rho}^-(x^{\ast})$ \\ then for every $\epsilon > 0$ there exists $\eta_1 \in (0,\eta)$, $\eta_1$ depending only on $\gamma_1 , \gamma , \mathfrak q , \epsilon , \eta, \tilde\upbeta$ such that \begin{align*} M_- \Big( \{u < \eta_1 h \} \, \cap \, &
\Big[B_{4\rho}^- (x^{\ast}) \times (t^{\ast} - \tilde\upbeta \, \rho^2 h(x^{\ast}\!, 4\rho), t^{\ast} ) \Big] \Big) \leqslant \\
& \leqslant \epsilon \ |M|_{\Lambda} \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast} - \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) , t^{\ast} ) \Big) , \\ \Lambda_- \Big( \{u < \eta_1 h \} \, \cap \, &
\Big[B_{4\rho}^- (x^{\ast}) \times (t^{\ast} - \tilde\upbeta \, \rho^2 h(x^{\ast}\!, 4\rho), t^{\ast} ) \Big] \Big) \leqslant \\ & \leqslant \upkappa \, \epsilon^{\uptau} \ \Lambda \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast} - \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) , t^{\ast} ) \Big) ; \end{align*} $iii \, )$ consider $\upbeta > 0$ such that $B_{5\rho}(x^{\ast}) \times [t^{\ast} - \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2] \subset \Omega \times (0,T)$: if $\lambda_0 \big(B_{4 \rho}(x^{\ast}) \big) > \lambda_0 \big(B_{\rho}(x^{\ast}) \big) > 0$ and $u \geqslant h$ a.e. in $\big( B_{\rho}^0(x^{\ast}) \times (t^{\ast} - \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho), t^{\ast} + \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \big)$ then for every $\epsilon > 0$ there exists $\eta_1 \in (0,\eta)$, $\eta_1$ depending only on $\gamma_1 , \gamma , \mathfrak q , \epsilon , \eta, \upbeta$ such that \begin{align*} \Lambda_0 \Big( \{u < \eta_1 h \} \cap
\Big[B_{4 \rho}^0 (x^{\ast}) \times & (t^{\ast} - \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho), t^{\ast} + \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \Big] \Big)
\leqslant \\ & \leqslant \epsilon \ \Lambda
\Big( B_{4\rho} (x^{\ast}) \times (t^{\ast} - \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho), t^{\ast} + \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \Big) ; \end{align*} $iv \, )$ if $B_{5 \rho} (x^{\ast}) \subset \Omega_0$ and $u (\cdot, t) \geqslant h$ a.e. in $B_{\rho}(x^{\ast})$ then for every $\epsilon > 0$ there exists $\eta_1 \in (0,\eta)$, $\eta_1$ depending only on $\gamma_1 , \gamma , \mathfrak q , \epsilon , \eta$ such that for almost every $t \in [t^{\ast} - \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2]$ \begin{align*} \lambda \Big( \{u < \eta_1 h \} \cap \big( B_{4\rho} (x^{\ast}) \times \{ t \} \big) \Big) \leqslant \epsilon \ \lambda \Big( B_{4\rho} (x^{\ast}) \Big) . \end{align*} \end{lemma} \noindent {\it Proof}\ \ -\ \ We first show point $i\, $). Consider $\tilde\upbeta$ and $\eta$ to be the values determined in Lemma \ref{lemma1}, point $ii\, $). For simplicity, by $f$ we will denote the quantity $$ f (x^{\ast}\!, 4\rho) = h(x^{\ast}\!, 4\rho) \, \rho^2 \, . $$
Now we consider $m \in {\bf N}$, $\tau \in [t^{\ast}\!, t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho) ]$ and $\sigma \in [t^{\ast} - \tilde\upbeta \, f(x^{\ast}\!,4\rho), t^{\ast} ]$. First of all notice that for every $t^{\ast}$, for every $\tau \in [t^{\ast}\!, t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho) ]$ and $\sigma \in [t^{\ast} - \tilde\upbeta \, f(x^{\ast}\!,4\rho), t^{\ast} ]$ and every $t \in (t^{\ast} - \upalpha \, \rho^2 h(x^{\ast}\!, 4\rho), t^{\ast} + \upalpha \, \rho^2 h(x^{\ast}\!, 4\rho) )$ we derive, using Lemma \ref{lemma1} and since for $m \in {\bf N}$ it holds $A_{\eta h 2^{-m},4\rho}^+ (x^{\ast}\!,\tau) \subset A_{\eta h,4\rho}^+ (x^{\ast}\!,\tau)$, $A_{\eta h 2^{-m},4\rho}^- (x^{\ast}\!,\sigma) \subset A_{\eta h,4\rho}^- (x^{\ast}\!,\sigma)$, $A_{\eta h 2^{-m},4\rho}^0 (x^{\ast}\!,t^{\ast}) \subset A_{\eta h,4\rho}^0 (x^{\ast}\!,t^{\ast})$, that if $\mu_+ \big( B_{\rho} (x^{\ast}) \big) > 0$, $\mu_- \big( B_{\rho} (x^{\ast}) \big) > 0$, $\lambda_0 \big( B_{\rho} (x^{\ast}) \big) > 0$ \begin{equation} \label{nuovoarrivo} \begin{array}{l} {\displaystyle \frac{1}{2 \mathfrak q^2} } \, \mu_+ \big(B_{4\rho}^+(x^{\ast}) \big) \leqslant
\mu_+ \big(B_{4\rho}^+(x^{\ast}) \setminus A_{\eta h,4\rho}^+ (x^{\ast} \!, \tau)\big)
\leqslant
\mu_+ \big(B_{4\rho}^+(x^{\ast}) \setminus A_{\eta h 2^{-m},4\rho}^+ (x^{\ast} \!, \tau)\big) , \\ [1em] {\displaystyle \frac{1}{2 \mathfrak q^2} } \, \mu_- \big(B_{4\rho}^-(x^{\ast}) \big) \leqslant
\mu_- \big(B_{4\rho}^-(x^{\ast}) \setminus A_{\eta h,4\rho}^- (x^{\ast} \!, \sigma)\big)
\leqslant
\mu_- \big(B_{4\rho}^-(x^{\ast}) \setminus A_{\eta h 2^{-m},4\rho}^- (x^{\ast} \!, \sigma)\big), \\ [1em] {\displaystyle \frac{1}{2 \mathfrak q^2} } \, \lambda_0 \big(B_{4\rho}^0(x^{\ast}) \big) \leqslant
\lambda_0 \big(B_{4\rho}^0(x^{\ast}) \setminus A_{\eta h,4\rho}^0 (x^{\ast} \!, t)\big)
\leqslant
\lambda_0 \big(B_{4\rho}^0(x^{\ast}) \setminus A_{\eta h 2^{-m},4\rho}^0 (x^{\ast} \!, t)\big). \end{array} \end{equation} Again for simplicity, we define (since $x^{\ast}$ is fixed we omit it) \begin{align*} A_m^+(\tau) := & A_{\eta h 2^{-m},4\rho}^+ (x^{\ast} \!, \tau) , \hskip30pt
a_m^{+} := \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \mu_+(A_m^+(\tau)) \, d \tau , \\ A_m^-(\sigma) := & A_{\eta h 2^{-m},4\rho}^- (x^{\ast} \!, \sigma) , \hskip30pt
a_m^- := \int_{t^{\ast} - \tilde\upbeta \, f(x^{\ast}\!,4\rho)}^{t^{\ast}} \mu_-(A_m^-(\sigma)) \, d \sigma , \\ A_m^0(t) := & A_{\eta h 2^{-m},4 \rho}^0 (x^{\ast} \!, t) , \hskip28pt
a_m^0 := \int_{t^{\ast} - \upalpha \, f(x^{\ast}\!,4\rho)}^{t^{\ast} + \upalpha \, f(x^{\ast}\!,4\rho)}
\lambda_0 (A_m^0(t)) dt , \\ A_m (t) := & A_{\eta h 2^{-m},4\rho} (x^{\ast} \!, t) , \hskip32pt
B_{4\rho} := B_{4\rho} (x^{\ast}) , \\ d_m^{\texttt{\,>}} := & \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \lambda (A_m(t)) d t ,
\hskip10pt
d_m^{\texttt{\,<}} := \int_{t^{\ast} - \tilde\upbeta \, f(x^{\ast}\!,4\rho)}^{t^{\ast}} \lambda (A_m(t)) d t , \\ & \hskip20pt d_m := \int_{t^{\ast} - \upalpha \, f(x^{\ast}\!,4\rho)}^{t^{\ast} + \upalpha \, f(x^{\ast}\!,4\rho)} \lambda (A_m(t)) d t . \end{align*}
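Note, for later use, that since $A_{m+1}^+(\tau) \subset A_m^+(\tau)$, $A_{m+1}^-(\sigma) \subset A_m^-(\sigma)$, $A_{m+1}^0(t) \subset A_m^0(t)$ and $A_{m+1}(t) \subset A_m(t)$ for every choice of the time variable, all the quantities just defined are nonincreasing with respect to $m$:
\begin{align*}
a_{m+1}^+ \leqslant a_m^+ \, , \qquad
a_{m+1}^- \leqslant a_m^- \, , \qquad
a_{m+1}^0 \leqslant a_m^0 \, , \qquad
d_{m+1}^{\texttt{\,>}} \leqslant d_m^{\texttt{\,>}} \, , \qquad
d_{m+1}^{\texttt{\,<}} \leqslant d_m^{\texttt{\,<}} \, , \qquad
d_{m+1} \leqslant d_m \, .
\end{align*}
This elementary remark will be used when summing over $m$.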
Now we estimate from above and from below the quantity \begin{align*} \mu_+ \big( B_{4\rho}^+ \setminus A_{m-1}^+ (\tau) \big) \int_{B_{4\rho}} \Big(u - \frac{\eta h}{ 2^m}\Big)_- (x, \tau) \mu_+ (x) dx \, . \end{align*} Using also \eqref{nuovoarrivo} we get that for every $\tau \in [t^{\ast}\!, t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho) ]$ \begin{align*} & \frac{1}{2 \mathfrak q^2} \, \mu_+ \big(B_{4\rho}^+ \big) \frac{\eta h}{2^{m+1}} \mu_+ \big( A_{m+1}^+ (\tau) \big) \leqslant \\ & \ \leqslant \, \mu_+ \big( B_{4\rho}^+ \setminus A_{m-1}^+ (\tau) \big) \frac{\eta h}{2^{m+1}} \mu_+ \big( A_{m+1}^+ (\tau) \big) \leqslant \\ & \ \leqslant \, \mu_+ \big( B_{4\rho}^+ \setminus A_{m-1}^+ (\tau) \big) \int_{B_{4\rho}} \Big(u - \frac{\eta h}{ 2^m}\Big)_- (x, \tau) \mu_+ (x) dx \leqslant \\ & \ \leqslant \, \mu_+ \big( B_{4\rho}^+ \setminus A_{m-1}^+ (\tau) \big) \frac{\eta h}{2^{m}} \mu_+ \big( A_{m}^+ (\tau) \big) \end{align*}
that is we get that for every $\tau \in [t^{\ast}\!, t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho) ]$
\begin{equation} \label{saraclara} \begin{array}{l} {\displaystyle \frac{1}{2 \mathfrak q^2} \, \mu_+ \big(B_{4\rho}^+ \big) \frac{\eta h}{2^{m+1}} \mu_+ \big( A_{m+1}^+ (\tau) \big) \leqslant } {\displaystyle \ \mu_+ \big( B_{4\rho}^+ \setminus A_{m-1}^+ (\tau) \big) \frac{\eta h}{2^{m}} \mu_+ \big( A_{m}^+ (\tau) \big) } \, . \end{array} \end{equation} Now to estimate the right hand side of \eqref{saraclara} we use Lemma \ref{lemma2.2} in the ball $B_{4\rho}(x^{\ast})$ with $k = \eta h /2^{m}$, $l = \eta h /2^{m-1}$, $q=1$, $p \in (1,2)$ arbitrary,
$\omega = \lambda$, $\nu = |\mu|_{\lambda}$ ($\bar\nu = \mu_+$); we get for every $\tau \in (t^{\ast}\!, t^{\ast} + \tilde\upbeta f(x^{\ast}\!,4\rho))$ \begin{align*} & {\displaystyle \mu_+ \big( B_{4\rho}^+ \setminus A_{m-1}^+ (\tau) \big) \frac{\eta h}{2^{m}} \mu_+ \big( A_{m}^+ (\tau) \big) } \leqslant \\ & {\displaystyle \qquad \leqslant 8 \, \gamma_1 \, \rho
\ \frac{\mu_+ (B_{4\rho}^+) \, |\mu|_{\lambda} (B_{4\rho})}{(\lambda (B_{4\rho}))^{1/p}} \cdot \left(\int_{{A_{m-1}(\tau) \setminus A_m(\tau)}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! |D u|^p (x,\tau) \, \lambda \, dx \right)^{1/p} } \end{align*} By this last inequality and \eqref{saraclara} and integrating in time between $t^{\ast}$ and $t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)$ we get \begin{align} \label{saraclara2}
\frac{1}{2 \mathfrak q^2} \, & \frac{\eta h}{2^{m+1}} a_{m+1}^+ \leqslant \nonumber \\ & {\displaystyle \leqslant 8 \, \gamma_1 \, \rho
\ \frac{|\mu|_{\lambda} (B_{4\rho})}{(\lambda (B_{4\rho}))^{1/p}} \cdot
\int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \left(\int_{{A_{m-1}(\tau) \setminus A_m(\tau)}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! |D u|^p (x,\tau) \, \lambda \, dx \right)^{1/p} } d\tau \leqslant \nonumber \\ & {\displaystyle \leqslant 8 \, \gamma_1 \, \rho
\ \frac{|\mu|_{\lambda} (B_{4\rho})}{(\lambda (B_{4\rho}))^{1/p}} \cdot
\left( \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \int_{{A_{m-1}(\tau) \setminus A_m(\tau)}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! |D u|^p (x,\tau) \, \lambda \, dx d\tau \right)^{1/p} }
\big( \tilde\upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{p-1}{p}} \leqslant \nonumber \\ & {\displaystyle \leqslant 8 \, \gamma_1 \, \rho
\ \frac{|\mu|_{\lambda} (B_{4\rho})}{(\lambda (B_{4\rho}))^{1/p}} \big( \tilde\upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{p-1}{p}} \cdot
\left( \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)}
\big[ \lambda (A_{m-1} (\tau)) - \lambda (A_m (\tau)) \big]\, d \tau \right)^{\frac{2-p}{2p}} } \cdot \\ & {\displaystyle \hskip100pt \cdot \left( \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \!\!\! \int_{B_{4\rho}}
\Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,\tau) \, \lambda \, dx d\tau \right)^{1/2} }. \nonumber \end{align} Now we want to estimate the term in the right hand side involving the gradient of $\big(u - \frac{\eta h}{ 2^{m-1}}\big)_-$ and to do this we apply the energy estimates \eqref{DGgamma+}, \eqref{DGgamma-}, \eqref{DGgamma0} in some suitable subsets of $$ B_{5\rho} (x^{\ast}) \times (t^{\ast}\! - \tilde\upbeta \, f(x^{\ast}\!, 4\rho), t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!, 4\rho) ) $$ to estimate the quantity $ {\displaystyle
\int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \!\!\! \int_{B_{4\rho}} \Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,\tau) \, \lambda \, dx d\tau . } $ First we estimate, taking in \eqref{DGgamma+} $t_0 = t^{\ast} - \tilde\upbeta f(x^{\ast} \!, 4 \rho)$, $s_2 = t^{\ast} + \tilde\upbeta f(x^{\ast} \!, 4 \rho)$, $R = \tilde{r} = 5\rho$, $r = 4\rho$, $\varepsilon = 0$, $\tilde\theta = 0$ and $\theta = \tilde\upbeta \, \frac{16}{25} \frac{h(x^{\ast}\!\!,4\rho)}{h(x^{\ast}\!\!,5\rho)}$
we get \begin{align*} \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} & \!\!\! \int_{B_{4\rho}^+}
\Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,\tau) \, \lambda \, dx d\tau \leqslant \\ \leqslant \gamma & \Bigg[
\int_{I_{5\rho, \rho}^+} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 \big(x,t^{\ast} - \tilde\upbeta f(x^{\ast} \!, 4 \rho)\big) \mu_+(x) \, dx + \\ & \hskip25pt + \sup_{t \in ( t^{\ast}\!, \, t^{\ast}\! + \tilde\upbeta \, f(x^{\ast}\!, 4\rho))}
\int_{I^{5\rho, \rho}_+} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 (x,t) \mu_-(x) \, dx + \\ & \hskip50pt + \, \frac{1}{\rho^2}
\iint_{Q_{5\rho;5\rho, 0}^{+,\rho}}
\Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2(x,\tau)\, \left( \frac{\mu_+}{h(x^{\ast}\!,5\rho)} + \lambda \right) \, dx d\tau \Bigg] \leqslant \\ \leqslant \gamma & \Bigg[ \left( \frac{\eta h}{ 2^{m-1}} \right)^2 \, \mu_+ (I_{5\rho, \rho}^+) +
\left( \frac{\eta h}{ 2^{m-1}}\right)^2 \, \mu_- (I_+^{5\rho, \rho}) + \\ & \hskip50pt + \left( \frac{\eta h}{ 2^{m-1}} \right)^2 \, \frac{1}{\rho^2} \,
\left( \frac{M_+}{h(x^{\ast}\!,5\rho)} + \Lambda \right) ( Q_{5\rho;5\rho, 0}^{+,\rho} ) \Bigg] \leqslant \\
\leqslant \gamma & \left( \frac{\eta h}{ 2^{m-1}}\right)^2 \frac{1}{\rho^2} \, \Bigg[ \rho^2 \, |\mu| (B_{5\rho}) +
2 \, \lambda (B_{5\rho}) \, 2 \, \tilde\upbeta \, f(x^{\ast}\!,4\rho) \Bigg] . \end{align*} Then taking in \eqref{DGgamma-} $t_0 = t^{\ast} +2 \, \tilde\upbeta f(x^{\ast} \!, 4 \rho)$, $s_1 = t^{\ast}$, $R = \tilde{r} = 5\rho$, $r = 4\rho$, $\varepsilon = 0$, $\tilde\theta = 0$ and $\theta = \tilde\upbeta \, \frac{16}{25} \frac{h(x^{\ast}\!\!,4\rho)}{h(x^{\ast}\!\!,5\rho)}$
we get \begin{align*} \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} & \!\!\! \int_{B_{4\rho}^-}
\Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,\tau) \, \lambda \, dx d\tau \leqslant \\ \leqslant \gamma & \Bigg[
\int_{I_{5\rho, \rho}^-} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 \big(x,t^{\ast} + 2 \tilde\upbeta f(x^{\ast} \!, 4 \rho)\big) \mu_-(x) \, dx + \\ & \hskip25pt + \sup_{t \in ( t^{\ast}\!, \, t^{\ast}\! + \tilde\upbeta \, f(x^{\ast}\!, 4\rho))}
\int_{I^{5\rho, \rho}_-} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 (x,t) \mu_+(x) \, dx + \\ & \hskip50pt + \, \frac{1}{\rho^2}
\iint_{Q_{5\rho;5\rho, 0}^{-,\rho}}
\Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2(x,\tau)\, \left( \frac{\mu_-}{h(x^{\ast}\!,5\rho)} + \lambda \right) \, dx d\tau \Bigg] \leqslant \\ \leqslant \gamma & \Bigg[ \left( \frac{\eta h}{ 2^{m-1}} \right)^2 \, \mu_- (I_{5\rho, \rho}^-) +
\left( \frac{\eta h}{ 2^{m-1}}\right)^2 \, \mu_+ (I^{5\rho, \rho}_-) + \\ & \hskip50pt + \left( \frac{\eta h}{ 2^{m-1}} \right)^2 \, \frac{1}{\rho^2} \,
\left( \frac{M_-}{h(x^{\ast}\!,5\rho)} + \Lambda \right) ( Q_{5\rho;5\rho, 0}^{-,\rho} ) \Bigg] \leqslant \\
\leqslant \gamma & \left( \frac{\eta h}{ 2^{m-1}}\right)^2 \frac{1}{\rho^2} \, \Bigg[ \rho^2 \, |\mu| (B_{5\rho}) +
4 \, \tilde\upbeta \, \lambda (B_{5\rho}) \, f(x^{\ast}\!,4\rho) \Bigg] .
\end{align*} Finally taking in \eqref{DGgamma0} $s_1 = t^{\ast}$, $s_2 = t^{\ast} + \tilde\upbeta f(x^{\ast} \!, 4 \rho)$, $R = \tilde{r} = 5\rho$, $r = 4\rho$ and $\varepsilon = 0$, we get \begin{align*} \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} & \!\!\! \int_{B_{4\rho}^0}
\Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,\tau) \, \lambda \, dx d\tau \leqslant \\ \leqslant \gamma & \Bigg[ \sup_{t \in ( t^{\ast}\!, \, t^{\ast}\! + \tilde\upbeta \, f(x^{\ast}\!, 4\rho))}
\int_{I^{5\rho, \rho}_0} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 (x,t) \mu_-(x) \, dx + \\ & \hskip25pt + \sup_{t \in ( t^{\ast}\!, \, t^{\ast}\! + \tilde\upbeta \, f(x^{\ast}\!, 4\rho))}
\int_{I^{5\rho, \rho}_0} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 (x,t) \mu_+(x) \, dx + \\ & \hskip50pt + \, \frac{1}{\rho^2}
\int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)} \!\!\! \int_{(B_{4\rho}^0)^{\rho}}
\Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2(x,\tau)\, \lambda \, dx d\tau \Bigg] \leqslant \\ \leqslant \gamma & \Bigg[
\left( \frac{\eta h}{ 2^{m-1}}\right)^2 \, |\mu| (I^{5\rho, \rho}_0) +
\left( \frac{\eta h}{ 2^{m-1}} \right)^2 \, \frac{1}{\rho^2} \,
\lambda \big( (B_{4\rho}^0)^{\rho} \big) \, \tilde\upbeta \, f(x^{\ast}\!,4\rho) \Bigg] \leqslant \\
\leqslant \gamma & \left( \frac{\eta h}{ 2^{m-1}}\right)^2 \frac{1}{\rho^2} \, \Bigg[ \rho^2 \, |\mu| (B_{5\rho}) +
\tilde\upbeta \, \lambda \big(B_{5\rho}\big) \, f(x^{\ast}\!,4\rho) \Bigg] . \end{align*} Summing up we get \begin{equation} \label{mo'lanumeriamo} \begin{array}{l} {\displaystyle \int_{t^{\ast}}^{t^{\ast} + \tilde\upbeta \, f(x^{\ast}\!,4\rho)}
\!\!\! \int_{B_{4\rho}} \Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,\tau) \, \lambda \, dx d\tau \leqslant } \\ [1em] \hskip50pt \leqslant \gamma
{\displaystyle \left( \frac{\eta h}{ 2^{m-1}}\right)^2 \frac{1}{\rho^2} \, \Bigg[ 3 \rho^2 \, |\mu| (B_{5\rho}) +
9 \, \tilde\upbeta \, f(x^{\ast}\!,4\rho)\, \lambda \big(B_{5\rho}\big) \Bigg] } \end{array} \end{equation} and so we can conclude, from \eqref{saraclara2}, that \begin{align*} a_{m+1}^+ \leqslant 64 \, \gamma_1 \gamma^{1/2} \mathfrak q^2 \,
\frac{|\mu|_{\lambda} (B_{4\rho})}{(\lambda (B_{4\rho}))^{1/p}} & \, \big( \tilde\upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{p-1}{p}}
{\displaystyle \cdot \left( d_{m-1}^{\texttt{\,>}} - d_m^{\texttt{\,>}} \right)^{\frac{2-p}{2p}} } \cdot \\
& {\displaystyle \cdot \Bigg[ 3 \rho^2 \, |\mu| (B_{5\rho}) +
9 \, \tilde\upbeta \, f(x^{\ast}\!,4\rho)\, \lambda \big(B_{5\rho}\big) \Bigg]^{1/2} }. \end{align*} Taking the power $\frac{2p}{2-p}$ and summing between $m = 1$ and $m = {m}^{\ast}$ we have \begin{align*} \sum_{m = 1}^{m^{\ast}} (a_{m+1}^+)^{\frac{2p}{2-p}}
\leqslant (64 \, \gamma_1 \gamma^{1/2} \mathfrak q^2)^{\frac{2p}{2-p}} \, &
\frac{\left( |\mu|_{\lambda} (B_{4\rho}) \right)^{\frac{2p}{2-p}}}
{(\lambda (B_{4\rho}))^{\frac{2}{2-p}}} \, \big( \tilde\upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{2(p-1)}{2-p}} \cdot \\
& {\displaystyle \cdot \Bigg[ 3 \rho^2 \, |\mu| (B_{5\rho}) +
9 \, \tilde\upbeta \, f(x^{\ast}\!,4\rho)\, \lambda \big(B_{5\rho}\big) \Bigg]^{\frac{p}{2-p}} }
{\displaystyle \left( d_{0}^{\texttt{\,>}} - d_{m^{\ast}}^{\texttt{\,>}} \right) } \, . \end{align*} Since the sequences $(a_{m}^+)_{m \in {\bf N}}$ and $(d_{m}^{\texttt{\,>}})_{m \in {\bf N}}$ are decreasing we can estimate $\sum_{m = 1}^{m^{\ast}} (a_{m+1}^+)^{\frac{2p}{2-p}}$ from below by $m^{\ast} (a_{m^{\ast}+1}^+)^{\frac{2p}{2-p}}$ and $d_{0}^{\texttt{\,>}} - d_{m^{\ast}}^{\texttt{\,>}}$ from above by $d_{0}^{\texttt{\,>}}$ and $d_{0}^{\texttt{\,>}}$ by $\tilde\upbeta \, f(x^{\ast}\!,4\rho)\, \lambda ( B_{4\rho} )$ and get
\begin{align*} (a_{m^{\ast}+1}^+)^{\frac{2p}{2-p}} & \leqslant \frac{1}{m^{\ast}} \, (64 \, \gamma_1 \gamma^{1/2} \mathfrak q^2)^{\frac{2p}{2-p}} \,
\frac{\left( |\mu|_{\lambda} (B_{4\rho}) \right)^{\frac{2p}{2-p}}}
{(\lambda (B_{4\rho}))^{\frac{2}{2-p}}} \, \big( \tilde\upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{2(p-1)}{2-p}} \cdot \\
& \qquad {\displaystyle \cdot \Bigg[ 3 \rho^2 \, |\mu| (B_{5\rho}) +
9 \, \tilde\upbeta \, f(x^{\ast}\!,4\rho)\, \lambda \big(B_{5\rho}\big) \Bigg]^{\frac{p}{2-p}} }
\tilde\upbeta \, f(x^{\ast}\!,4\rho)\, \lambda ( B_{4\rho} ) \leqslant \\ & \leqslant \frac{C^{\frac{2p}{2-p}}}{m^{\ast}} \,
\big( \tilde\upbeta f(x^{\ast}\!,4\rho) \big)^{\frac{2p}{2-p}} \, \big( |\mu|_{\lambda} (B_{4\rho}) \big)^{\frac{2p}{2-p}} = \\ & = \frac{C^{\frac{2p}{2-p}}}{m^{\ast}} \,
\big( |M|_{\Lambda} \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}\!, t^{\ast} + \tilde \upbeta f(x^{\ast}\!,4\rho) ) \big) \big)^{\frac{2p}{2-p}} \, ,
\end{align*} where $C = 64 \, \gamma_1 \gamma^{1/2} \mathfrak q^{5/2} (3 + 9 \, \tilde\upbeta)^{1/2} \tilde\upbeta^{-1/2}$, by which finally \begin{align*} a_{m^{\ast}+1}^+ & \leqslant C \, \left( \frac{1}{m^{\ast}} \right)^{\frac{2-p}{2p}} \,
|M|_{\Lambda} \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}\!, t^{\ast} + \tilde \upbeta f(x^{\ast}\!,4\rho) ) \big) \, . \end{align*} Then for every $\epsilon > 0$ one can find $m^{\ast}$ such that $C / {m^{\ast}}^{\frac{2-p}{2p}} \leqslant \epsilon$. It is then sufficient to take $$ m^{\ast} \geqslant \left( \frac{64 \, \gamma_1 \gamma^{1/2} \mathfrak q^{5/2} (3 + 9 \, \tilde\upbeta)^{1/2}}{\tilde\upbeta^{1/2} \epsilon} \right)^{\frac{2p}{2-p}} \qquad \text{and} \qquad \eta_1 = \frac{\eta}{2^{m^{\ast}}} $$ which depends on $\gamma_1 , \gamma , \mathfrak q , \epsilon , \tilde\upbeta , \eta$. \\ Now by \eqref{carlettomio} we immediately get that \begin{align*} & \frac{\Lambda_+ \Big( \{u < \eta_1 h \} \, \cap \, \Big[B_{4\rho}^+ (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \Big] \Big)}
{\Lambda \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big)} \leqslant \\ & \hskip30pt \leqslant \upkappa \left( \frac{M_+ \Big( \{u < \eta_1 h \} \, \cap \, \Big[B_{4\rho}^+ (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast}\!, 4\rho) ) \Big] \Big)}
{|M|_{\Lambda} \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big)}
\right)^{\uptau} \leqslant \upkappa \, \epsilon^{\uptau} \, . \end{align*} \ \\ In a completely analogous way one can prove point $ii$). \\ \ \\
Point $iii$): the case where $\mu \equiv 0$ is, as usual, slightly different. Let us sketch the proof.
Integrating between $t^{\ast} - \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho)$ and $t^{\ast} + \upbeta \, \rho^2 h(x^{\ast}\!, 4\rho)$ and
using Lemma \ref{lemma2.2} as before but with $\nu = \lambda$ and $\bar{\nu} = \lambda_0|_{B_r (x^{\ast})}$, we get \begin{align*} \frac{1}{2 \mathfrak q^2} \, \frac{\eta h}{2^{m+1}} a_{m+1}^0 & {\displaystyle \leqslant 8 \, \gamma_1 \, \rho
\ (\lambda (B_{4\rho}))^{\frac{p-1}{p}} \cdot
\left( \int_{t^{\ast} - \upbeta \, f(x^{\ast}\!, 4\rho)}^{t^{\ast} + \upbeta \, f(x^{\ast}\!,4\rho)}
\big[ \lambda (A_{m-1} (t)) - \lambda (A_m (t)) \big]\, d t \right)^{\frac{2-p}{2p}} } \cdot \\ & {\displaystyle \hskip40pt \cdot \, \big( 2 \upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{p-1}{p}}
\left( \int_{t^{\ast} - \upbeta \, f(x^{\ast}\!, 4\rho) }^{t^{\ast} + \upbeta \, f(x^{\ast}\!,4\rho)} \!\!\! \int_{B_{4\rho}}
\Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,t) \, \lambda \, dx dt \right)^{1/2} }. \end{align*} Now estimating the part involving the gradient of $\big(u - \frac{\eta h}{ 2^{m-1}}\big)_-$ similarly to \eqref{mo'lanumeriamo} we get \begin{align*} \frac{1}{2 \mathfrak q^2} \, \frac{\eta h}{2^{m+1}} a_{m+1}^0 & {\displaystyle \leqslant 8 \, \gamma_1 \, \gamma^{\frac{1}{2}}
\ (\lambda (B_{4\rho}))^{\frac{p-1}{p}} \cdot
\left( \int_{t^{\ast} - \upbeta \, f(x^{\ast}\!, 4\rho)}^{t^{\ast} + \upbeta \, f(x^{\ast}\!,4\rho)}
\big[ \lambda (A_{m-1} (t)) - \lambda (A_m (t)) \big]\, d t \right)^{\frac{2-p}{2p}} } \cdot \\ & \hskip20pt \cdot \, \big( 2 \upbeta \, f(x^{\ast}\!,4\rho) \big)^{\frac{p-1}{p}}
\left( \frac{\eta h}{ 2^{m-1}}\right) \, \Bigg[ 3 \rho^2 \, |\mu| (B_{5\rho}) +
18 \, \upbeta \, f(x^{\ast}\!,4\rho)\, \lambda \big(B_{5\rho}\big) \Bigg]^{\frac{1}{2}} \end{align*} and proceeding as before we reach \begin{align*} a_{m^{\ast}+1}^0 & \leqslant C' \, \left( \frac{1}{m^{\ast}} \right)^{\frac{2-p}{2p}} \,
\Lambda \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}\! - \upbeta f(x^{\ast}\!,4\rho) , t^{\ast} + \upbeta f(x^{\ast}\!,4\rho) ) \big) \end{align*} with $C' = 64 \, \gamma_1 \gamma^{1/2} \mathfrak q^{5/2} (3 + 18 \, \upbeta)^{1/2} (2 \upbeta)^{-1/2}$. The conclusion is as before. \\ \ \\ Finally let us see point $iv \, )$.
If $B_{4\rho} (x^{\ast}) \subset \Omega_0$ we have \begin{align*} \frac{1}{2 \mathfrak q^2} \, \frac{\eta h}{2^{m+1}} & \, \lambda \big(A_{m+1} (t) \big) \leqslant \\ & \leqslant 8 \, \gamma_1 \, \rho \ (\lambda (B_{4\rho}))^{\frac{p-1}{p}} \cdot \left(\int_{{A_{m-1} (t) \setminus A_m (t)}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! |D u|^p (x,t) \, \lambda \, dx \right)^{1/p} \leqslant \\ & \leqslant 8 \, \gamma_1 \, \rho \ (\lambda (B_{4\rho}))^{\frac{p-1}{p}} \, \big( \lambda (A_{m-1} (t)) - \lambda (A_m (t)) \big)^{\frac{2-p}{2p}} \left( \int_{B_{4\rho}} \Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,t) \, \lambda (x) \, dx \right)^{1/2} \, . \end{align*}
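The last step is simply H\"older's inequality with exponents $\frac{2}{p}$ and $\frac{2}{2-p}$ applied on the set $A_{m-1}(t) \setminus A_m(t)$, where $|Du| = \big| D \big(u - \frac{\eta h}{2^{m-1}}\big)_- \big|$ a.e.; explicitly,
\begin{align*}
\int_{A_{m-1}(t) \setminus A_m(t)} \!\!\! |D u|^p (x,t) \, \lambda \, dx
\leqslant \big( \lambda (A_{m-1} (t)) - \lambda (A_m (t)) \big)^{\frac{2-p}{2}}
\left( \int_{B_{4\rho}} \Big|D \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-\Big|^2 (x,t) \, \lambda \, dx \right)^{\frac{p}{2}} .
\end{align*}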
Since $B_{5\rho} (x^{\ast}) \subset \Omega_0$, taking $\tilde{r} = 5\rho$, $r = 4 \rho$ and $\varepsilon = 0$ in \eqref{tempofissato}, we get for almost every $t$ that \begin{align*}
\int_{B_{4\rho}} \Big|D \Big(u - \frac{\eta h}{ 2^{m-1}} & \Big)_-\Big|^2 (x,t) \, \lambda (x) \, dx \leqslant \\ & \leqslant \gamma \, \frac{1}{\rho^2} \int_{B_{5 \rho}} \Big(u - \frac{\eta h}{ 2^{m-1}}\Big)_-^2 (x,t) \, \lambda (x) \, dx \leqslant
\gamma \left( \frac{\eta h}{ 2^{m-1}}\right)^2 \frac{1}{\rho^2} \, \lambda (B_{5\rho}) \end{align*} and then $\lambda \big(A_{m+1} (t) \big) \leqslant 64 \, \mathfrak q^2 \, \gamma_1 \, \gamma^{1/2} \, (\lambda (B_{4\rho}))^{\frac{p-1}{p}} \, \big( \lambda (A_{m-1} (t)) - \lambda (A_m (t)) \big)^{\frac{2-p}{2p}} \, \big( \lambda (B_{5\rho}) \big)^{1/2}$. By that we can conclude as above.
$\square$ \\
\noindent Now we state a result known as {\em expansion of positivity}. It will be a fundamental step to prove the Harnack inequality.
\begin{lemma} \label{esp_positivita} Consider $(x^{\ast} \!, t^{\ast})$ such that $B_{5\rho}(x^{\ast}) \times [t^{\ast} - 16 \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast} + 16 \, h(x^{\ast} \!, 4\rho) \, \rho^2] \subset \Omega \times (0,T)$. \\
Consider the value $\tilde\upbeta$
determined in Lemma \ref{lemma1} and used in Lemma \ref{lemma2}. Then for every $\hat\theta \in (0, 1)$ there is $\uplambda > 0$ depending only on $\gamma_1 , \gamma , \mathfrak q , \kappa, \tilde\upbeta , \hat\theta$ such that for every $h > 0$ and $u \geqslant 0$ in $DG(\Omega, T, \mu, \lambda, \gamma)$ points $i\, )$ and $ii\, )$ are true: \\ [0.3em] $i\, )$ if $\mu_+ (B_{\rho}(x^{\ast})) > 0$ and $$ u(\cdot , t^{\ast}) \geqslant h \qquad \text{a.e. in } B_{\rho}^+(x^{\ast}) $$ then \begin{align*} u \geqslant \uplambda h \qquad \text{a.e. in }
& \, B_{2\rho}^+(x^{\ast}) \times
\big(t^{\ast} + \hat\theta \, \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2, t^{\ast} + \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2 \big) ; \end{align*} $ii\, )$ if $\mu_- (B_{\rho}^-(x^{\ast})) > 0$ and $$ u(\cdot , t^{\ast}) \geqslant h \qquad \text{a.e. in } B_{\rho}^-(x^{\ast}) $$ then \begin{align*} u \geqslant \uplambda h \qquad \text{a.e. in }
& \, B_{2\rho}^-(x^{\ast}) \times
\big(t^{\ast} + \hat\theta \, \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2, t^{\ast} + \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2 \big) . \end{align*}
Moreover for every $\upbeta > 0$ for which $B_{5\rho}(x^{\ast}) \times [t^{\ast} - \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast} \!, 4\rho) \, \rho^2] \subset \Omega \times (0,T)$ there is $\uplambda > 0$ depending only on $\gamma_1 , \gamma , \mathfrak q , \kappa, \upbeta$ such that for every $h > 0$ and $u \geqslant 0$ in $DG(\Omega, T, \mu, \lambda, \gamma)$ point $iii\, )$ is true: \\ [0.3em] $iii\, )$ if $\lambda_0 (B_{\rho}(x^{\ast})) > 0$ and $$ u \geqslant h \qquad \text{a.e. in } B^0_{\rho}(x^{\ast}) \times
\big( t^{\ast} - \upbeta \, h(x^{\ast}, 4\rho) \rho^2, t^{\ast} + \upbeta \, h(x^{\ast}, 4\rho) \rho^2 \big) $$ then \begin{align*} u \geqslant \uplambda h \qquad \text{a.e. in }
B^0_{2\rho}(x^{\ast}) \times \big( t^{\ast} - \upbeta \, h(x^{\ast}, 4\rho) \rho^2, t^{\ast} + \upbeta \, h(x^{\ast}, 4\rho) \rho^2 \big) . \end{align*} If $B_{5 \rho} (x^{\ast}) \subset \Omega_0$ there is $\uplambda > 0$ depending only on $\gamma_1 , \gamma , \mathfrak q , \kappa$ such that for every $h > 0$ and $u \geqslant 0$ in $DG(\Omega, T, \mu, \lambda, \gamma)$ point $iv\, )$ is true: \\ [0.3em] $iv\, )$ for almost every $t \in (0,T)$ if
$$ u (\cdot , t) \geqslant h \qquad \text{a.e. in } B_{\rho} (x^{\ast}) $$ then \begin{align*} u (\cdot , t) \geqslant \uplambda h \qquad \text{a.e. in } \, B_{2\rho}(x^{\ast}) . \end{align*} \end{lemma} \noindent {\it Proof}\ \ -\ \ The proof is a consequence of Proposition \ref{prop-DeGiorgi2} and Lemma \ref{lemma2}. We start from point $i\, )$: in Proposition \ref{prop-DeGiorgi2} we consider $\underline{m} = 0$, $R = 4 \rho$, $r = 2 \rho$, $\upbeta^{\diamond} = \tilde{\upbeta}$ (the value determined in Lemma \ref{lemma1} and used in Lemma \ref{lemma2} and belonging to $(0,16]$), $\theta^{\diamond}$ and $a \in (0,1)$ arbitrary; from Proposition \ref{prop-DeGiorgi2} we derive the existence of $\underline{\nu}^{\diamond} \in (0,1)$ such that if, for $c > 0$ an arbitrary constant, the following holds \begin{align*} & \frac{M_+ \Big( \big\{ u < c \big\} \cap \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde{\upbeta} \rho^2 h(x^{\ast}, 4\rho) ) \big) \Big)}
{|M|_{\Lambda} \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big)} + \\ & \qquad \qquad \qquad + \frac{\Lambda_+ \Big( \big\{ u < c \big\} \cap \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde{\upbeta} \rho^2 h(x^{\ast}, 4\rho) ) \big) \Big)} {\Lambda \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big)} \leqslant \underline{\nu}^{\diamond} \end{align*} then $$ u \geqslant a \, c \qquad \text{ in } B^+_{2\rho}(x^{\ast}) \times
\left(t^{\ast} + \theta^{\diamond} \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2, t^{\ast} + \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2 \right) \, . $$ Now we use Lemma \ref{lemma2}: consider $\eta$, the value determined in Lemma \ref{lemma1} and used in Lemma \ref{lemma2}, take $\upbeta = 16$ and $\epsilon$ such that $\epsilon + \upkappa \, \epsilon^{\uptau} = \underline{\nu}^{\diamond}$ and conclude that there is $\eta_1$ (depending on $\gamma_1 , \gamma , \mathfrak q , \tilde\upbeta , \eta, \underline{\nu}^{\diamond}$ and then on $\gamma_1 , \gamma , \mathfrak q , \tilde\upbeta , \eta, \kappa, a , \theta^{\diamond}$, but $\eta$ depends only on $\gamma$ and $\mathfrak q$) such that \begin{align*} & \frac{M_+ \Big( \big\{ u < \eta_1 h \big\} \cap \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde{\upbeta} \rho^2 h(x^{\ast}, 4\rho) ) \big) \Big)}
{|M|_{\Lambda} \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big)} + \\ & \qquad \qquad \qquad + \frac{\Lambda_+ \Big( \big\{ u < \eta_1 h \big\} \cap \big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde{\upbeta} \rho^2 h(x^{\ast}, 4\rho) ) \big) \Big)} {\Lambda \Big( B_{4\rho} (x^{\ast}) \times (t^{\ast}, t^{\ast} + \tilde\upbeta \, \rho^2 h(x^{\ast} \!, 4\rho) ) \Big)} \leqslant \underline{\nu}^{\diamond} \, . \end{align*} Then $$ u \geqslant a \, \eta_1 h \qquad \text{ in } B^+_{2\rho}(x^{\ast}) \times
\left(t^{\ast} + \theta^{\diamond} \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2, t^{\ast} + \tilde\upbeta \, h(x^{\ast}, 4\rho) \rho^2 \right) \, . $$ Taking $\hat{\theta} = \theta^{\diamond}$, $a = 1/2$ for simplicity and $\uplambda = \eta_1/2$ we conclude the proof of point $i \, )$. In the same way one can prove point $ii \, )$. \\ [0.3em] Let us see point $iii \, )$. In Proposition \ref{prop-DeGiorgi2} we consider again $\underline{m} = 0$, $R = 4 \rho$, $r = 2 \rho$, $\upbeta^{\star} = \upbeta \, h(x^{\ast}, 4\rho) / 8$ and $a \in (0,1)$ arbitrary. We derive the existence of $\underline{\nu}^{\star} \in (0,1)$ such that if, for $c > 0$ an arbitrary constant, the following holds \begin{align*} \Lambda_0 \bigg( \big\{ u < c \big\} \, \cap \, \Big( & B_{4\rho} (x^{\ast}) \times
\big(t^{\ast} - \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2 \big) \Big) \bigg) \leqslant \\
& \leqslant \, \underline{\nu}^{\star} \,
\Lambda \Big( B_{4\rho} (x^{\ast}) \times
\big(t^{\ast} - \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2 \big) \Big) \, , \end{align*} then $$ u \geqslant a \, c \qquad \text{ in }
B^0_{2\rho}(x^{\ast}) \times \big(t^{\ast} - \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2 \big) \, . $$ Now in Lemma \ref{lemma2} take $\epsilon = \underline{\nu}^{\star}$ and conclude that there is $\eta_1$ (depending on $\gamma_1 , \gamma , \mathfrak q , \kappa, a, \upbeta$) such that $$ u \geqslant a \, \eta_1 h \qquad \text{ in }
B^0_{2\rho}(x^{\ast}) \times \big(t^{\ast} - \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2, t^{\ast} + \upbeta \, h(x^{\ast}, 4\rho) \, \rho^2 \big) \, . $$ Taking, e.g., $a = 1/2$ we conclude. \\ [0.3em] To prove point $iv$) we consider $\underline{m}$, $R$, $r$ and $a \in (0,1)$ as above and use point $iv$) of Proposition \ref{prop-DeGiorgi2}. Then we get the existence of $\underline{\nu} \in (0,1)$ such that, for $c > 0$, if \begin{align*}
\lambda \big(\{ x \in B_{4\rho} (x^{\ast}) \, | \, u(x,t) < c \} \big) \leqslant \underline{\nu} \ \lambda (B_{4\rho} (x^{\ast}) ) \end{align*} then
$u(x,t) \geqslant a \, c$ for a.e. $x \in B_{2\rho} (x^{\ast})$. Using Lemma \ref{lemma2} we conclude as above.
$\square$ \\
\section{The Harnack type inequality} \label{secHarnack}
The following theorems (Theorem \ref{Harnack1} and Theorem \ref{Harnack2}) are the main results of the paper. \\
\begin{theorem} \label{Harnack1} Assume $u\in DG(\Omega, T, \mu, \lambda, \gamma)$, $u\geqslant 0$, $(x_o, t_o) \in \Omega \times (0,T)$ and fix $\rho > 0$. \begin{itemize} \item[$i\, $)] Suppose $x_o \in \Omega_+ \cup I_+$.
For every $\vartheta_+ \in (0,1]$ for which $B_{5\rho}(x_o) \times [t_o - h(x_o, \rho) \rho^2, t_o + 16 \, h(x_o, 4\rho) \rho^2 + \vartheta_+ h(x_o, \rho) \rho^2] \subset \Omega \times (0,T)$ there exists $c_+ > 0$ depending $($only$)$ on $\gamma_1, \gamma, \mathfrak q, \kappa, \alpha, \upkappa, \uptau, K_1, K_2, K_3, q, \varsigma , \vartheta_+$ such that $$u(x_o, t_o) \leqslant c_+ \, \inf_{B_{\rho}^+ (x_o)} u(x, t_o + \vartheta_+ \, \rho^2 h(x_o, \rho)) .$$ \item[$ii\, $)] Suppose $x_o \in \Omega_- \cup I_-$.
For every $\vartheta_- \in (0,1]$ for which $B_{5\rho}(x_o) \times [t_o - 16 \, h(x_o, 4\rho) \rho^2 + \vartheta_- h(x_o, \rho) \rho^2, t_o + h(x_o, \rho) \rho^2] \subset \Omega \times (0,T)$ there exists $c_- > 0$ depending $($only$)$ on $\gamma_1, \gamma, \mathfrak q, \kappa, \alpha, \upkappa, \uptau, K_1, K_2, K_3, q, \varsigma , \vartheta_-$ such that $$u(x_o, t_o) \leqslant c_- \, \inf_{B_{\rho}^- (x_o)} u(x, t_o - \vartheta_- \, \rho^2 h(x_o, \rho)) .$$ \item[$iii\, $)] Suppose $x_o \in \Omega_0 \cup I_0$.
Suppose $B_{5\rho}(x_o) \times [t_o - 16 \, h(x_o, 4\rho) \rho^2, t_o + 16 \, h(x_o, 4\rho) \rho^2] \subset \Omega \times (0,T)$. For every $s_1, s_2$ with $s_2 - t_o = t_o - s_1 \leqslant 16 \, h(x_o, 4\rho) \rho^2$, say $s_2 - t_o = t_o - s_1 = \upomega \, h(x_o, 4\rho) \rho^2$ for some $\upomega \in (0,16]$, there is $c_0$ depending $($only$)$ on $K_1, K_2, K_3, q, \varsigma, \kappa, \gamma_1, \gamma, \upomega, h(x_o, 4\rho), \mathfrak q$ such that $$\sup_{B_{\rho}^0 (x_o) \times [s_1, s_2]} u \leqslant c_0 \, \inf_{B_{\rho}^0 (x_o) \times [s_1, s_2]} u.$$ \item[$iv\, $)] Suppose $B_{5\rho}(x_o) \subset \Omega_0$. Then there is $c$ depending $($only$)$ on $K_1, K_2, K_3, q, \varsigma, \kappa, \gamma_1, \gamma, \mathfrak q$ such that for almost every $t \in (0,T)$ $$\sup_{B_{\rho} (x_o)} u (\cdot, t) \leqslant c \, \inf_{B_{\rho} (x_o)} u (\cdot, t).$$ \end{itemize} \end{theorem} \noindent {\it Proof}\ \ -\ \ We start by proving the first inequality under the assumption that $B_{\rho}^+(x_o) \not= \emptyset$. For some $r_1, r_2 > 0$ and $(\bar{x}, \bar{t}) \in B_{5\rho}(x_o) \times [t_o - h(x_o, \rho) \rho^2, t_o + 16 \, h(x_o, 4\rho) \rho^2 + \vartheta_+ h(x_o, \rho) \rho^2] \subset \Omega \times (0,T)$ we define the sets \begin{gather*} Q^{+, \texttt{\,<}}_{r_1, h(\bar{y}, r_2)} (\bar{x}, \bar{t}) := \Big( B_{r_1}^+ (\bar{x}) \times [\bar{t} - h(\bar{y}, r_2) r_1^2, \bar{t}] \Big) \, , \quad Q^{+, \texttt{\,>}}_{r_1, h(\bar{y}, r_2)} (\bar{x}, \bar{t}) := \Big( B_{r_1}^+ (\bar{x}) \times [\bar{t}, \bar{t} + h(\bar{y}, r_2) r_1^2] \Big) \, , \\ Q^{\texttt{\,<}}_{r_1, h(\bar{y}, r_2)} (\bar{x}, \bar{t}) := \Big( B_{r_1} (\bar{x}) \times [\bar{t} - h(\bar{y}, r_2) r_1^2, \bar{t}] \Big) \, , \quad Q^{\texttt{\,>}}_{r_1, h(\bar{y}, r_2)} (\bar{x}, \bar{t}) := \Big( B_{r_1} (\bar{x}) \times [\bar{t}, \bar{t} + h(\bar{y}, r_2) r_1^2] \Big) \, . \end{gather*} We may write $u(x_o, t_o) = b \, \rho^{-\xi}$ for some $b, \xi > 0$ to be fixed later.
Define the functions $$ \mathpzc{M} (r) = \sup_{Q^{+, \texttt{\,<}}_{r, h (x_o, \rho)} (x_o, t_o)} u, \qquad \mathpzc{N} (r) = b (\rho - r)^{-\xi}, \qquad r \in [0,\rho) . $$ Let us denote by $r_o \in [0,\rho)$ the largest solution of $\mathpzc{M} (r) = \mathpzc{N} (r)$. Define $$ N := \mathpzc{N} (r_o) = b (\rho - r_o)^{-\xi} \, . $$ We can find $(y_o, \tau_o) \in Q^{+, \texttt{\,<}}_{r_o, \, h(x_o,\rho)} (x_o, t_o)$ such that \begin{equation} \label{choicey0t0} \frac{3N}{4} < \sup_{Q^{+, \texttt{\,<}}_{\frac{\rho_o}{4}\!, \, h (y_o, \rho_o)} (y_o, \tau_o)} u \leqslant N \end{equation} where $\rho_o \in (0, (\rho - r_o) / 2 ]$. Since $\rho_o \leqslant (\rho - r_o) / 2$ we have $B^+_{\rho_o} (y_o) \subset B_{\frac{\rho + r_o}{2}} (x_o)$. We want the value of $\rho_o$ to be chosen in such a way that $$
Q^{+, \texttt{\,<}}_{\rho_o, \, h(y_o,\rho_o)} (y_o, \tau_o) \subset Q^{+, \texttt{\,<}}_{\frac{\rho + r_o}{2}, \, h(x_o,\rho)}(x_o,t_o) $$ and the request $\rho_o \leqslant (\rho - r_o) / 2$ alone may not be sufficient. We also need $\tau_o - h(y_o, \rho_o) \rho_o^2 \geqslant t_o - h(x_o,\rho) (\rho+r_o)^2/4$ and this is guaranteed if \begin{equation} \label{enumeriamopurequesta!} h(y_o,\rho_o) \rho_o^2 \leqslant h(x_o, \rho) \left[ \frac{(\rho + r_o)^2}{4} - r_o^2 \right] , \end{equation} which in turn is true, since $r_o^2 \leqslant \rho \, r_o$, if $$ h(y_o,\rho_o) \rho_o^2 \leqslant h(x_o, \rho) \, \frac{(\rho - r_o)^2}{4} \, . $$ We will therefore choose $\rho_o$ satisfying these two requests. Notice that this last request can be satisfied by writing $h(y_o, \rho_o) \, \rho_o^2 = h(y_o, \rho_o) \rho_o^{2\alpha} \rho_o^{2(1-\alpha)}$ because, thanks to Remark \ref{notaimportante}, point $\mathpzc{C}$, and (H.2)$'$ we have \begin{align*} h(y_o, \rho_o) \rho_o^{2\alpha} & \leqslant \tilde{K}_2^2 \, h(y_o, 2 \rho) (2\rho)^{2 \alpha} \leqslant \\
& \leqslant \tilde{K}_2^2 \, \frac{|\mu|_{\lambda} (B_{4\rho} (x_o))}{\lambda (B_{2\rho} (y_o))} (2\rho)^{2 \alpha} \leqslant \\
& \leqslant 4^{\alpha} \tilde{K}_2^2 \, \mathfrak q^2 \, h (x_o, \rho) \rho^{2 \alpha} \end{align*} and then we have \begin{align*} h(y_o, \rho_o) \rho_o^2 \leqslant 4^{\alpha} \tilde{K}_2^2 \, \mathfrak q^2 \, h (x_o, \rho) \rho^{2 \alpha} \rho_o^{2(1-\alpha)}. \end{align*} Then \eqref{enumeriamopurequesta!} holds if in particular \begin{align*} 4^{\alpha} \tilde{K}_2^2 \, \mathfrak q^2 \, h (x_o, \rho) \rho^{2 \alpha} \rho_o^{2(1-\alpha)} \leqslant
h(x_o, \rho) \, \frac{(\rho - r_o)^2}{4} \end{align*} that is \begin{align} \label{rozero2} \rho_o^{1-\alpha} \leqslant \frac{1}{2^{\alpha} \tilde{K}_2 \, \mathfrak q} \, \frac{1}{\rho^{\alpha}} \, \frac{\rho - r_o}{2} \end{align} and it is always possible to choose $\rho_o$ small enough such that \eqref{rozero2} is satisfied. Therefore $\rho_o$ will be chosen satisfying \begin{align} \label{rozero} \rho_o = \min \left\{ \frac{\rho - r_o}{2},
\left[ \frac{1}{2^{\alpha} \tilde{K}_2 \, \mathfrak q} \, \frac{1}{\rho^{\alpha}} \, \frac{\rho - r_o}{2} \right]^{\frac{1}{1-\alpha}} \right\} . \end{align}
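Notice that with this choice the spatial inclusion follows at once from the triangle inequality: since $y_o \in B^+_{r_o} (x_o)$ and $\rho_o \leqslant (\rho - r_o)/2$, for every $x \in B_{\rho_o}^+ (y_o)$ we have $$ |x - x_o| \leqslant |x - y_o| + |y_o - x_o| < \rho_o + r_o \leqslant \frac{\rho - r_o}{2} + r_o = \frac{\rho + r_o}{2} \, , $$ that is $B_{\rho_o}^+ (y_o) \subset B_{\frac{\rho + r_o}{2}} (x_o)$.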
By this choice of $\rho_o$ and by the choice of $r_o$ we have \begin{equation} \label{oscillazione} \sup_{Q^{+, \texttt{\,<}}_{\rho_o, h(y_o,\rho_o)} (y_o, \tau_o)} u \leqslant \sup_{Q^{+, \texttt{\,<}}_{\frac{\rho + r_o}{2}, h(x_o,\rho)}(x_o,t_o)} u <
\mathpzc{N} \left( \frac{\rho + r_o}{2}\right) = 2^{\xi} N . \end{equation} We now divide the proof into six steps. \\ [0.3em] \textsl{Step 1 - } In this step we want to show that there is $\overline{\nu} \in (0,1)$, depending only on $\kappa, \gamma_1, \gamma, \xi, \mathfrak q$, such that \begin{equation} \label{mimancanogliamorimiei} \begin{array}{c} {\displaystyle \frac{M_+ \left( \left\{ u > \frac{N}{2} \right\} \cap
Q^{+, \texttt{\,<}}_{\rho_o/2,h (y_o, \rho_o)} (y_o, \tau_o) \right)}
{|M|_{\Lambda} \left( Q^{\texttt{\,<}}_{\rho_o/2,h (y_o, \rho_o)} (y_o, \tau_o) \right)} } > \overline{\nu} \, , \\ [2em] {\displaystyle \frac{\Lambda_+ \left( \left\{ u > \frac{N}{2} \right\} \cap
Q^{+, \texttt{\,<}}_{\rho_o/2,h (y_o, \rho_o)} (y_o, \tau_o) \right)}
{\Lambda \left( Q^{\texttt{\,<}}_{\rho_o/2,h (y_o, \rho_o)} (y_o, \tau_o) \right)} > \overline{\nu} } \end{array} \end{equation} and that \begin{equation} \label{mimancanogliamorimiei-1}
\iint_{Q^{+, \texttt{\,<}}_{\frac{\rho_o}{2},h(y_o, \rho_o)} (y_o, \tau_o)} |Du|^2 \, \lambda \, dx dt \leqslant
9 \, \gamma \, (2^{\xi} N)^2 \, h(y_o, \rho_o) \, \lambda \big( B_{\rho_o}(y_o) \big) \, . \end{equation} To prove \eqref{mimancanogliamorimiei} first we show that there is $\nu \in (0,1)$ such that \begin{equation} \label{mimancanogliamorimiei-2} {\displaystyle \frac{M_+ \left( \left\{ u > \frac{N}{2} \right\} \cap
Q^{+, \texttt{\,<}}_{\rho_o/2,h(y_o, \rho_o)} (y_o, \tau_o) \right)}
{|M|_{\Lambda} \left( Q^{\texttt{\,<}}_{\rho_o/2,h(y_o, \rho_o)} (y_o, \tau_o) \right)} \, + } \, {\displaystyle \frac{\Lambda_+ \left( \left\{ u > \frac{N}{2} \right\} \cap
Q^{+, \texttt{\,<}}_{\rho_o/2,h(y_o, \rho_o)} (y_o, \tau_o) \right)}
{\Lambda \left( Q^{\texttt{\,<}}_{\rho_o/2,h(y_o, \rho_o)} (y_o, \tau_o) \right)} > \nu \, . } \end{equation} Argue by contradiction and suppose that \eqref{mimancanogliamorimiei-2} is false. Since \begin{align*} & Q^{+, \texttt{\,<}}_{\frac{\rho_o}{2},h(y_o, \rho_o)} (y_o, \tau_o) = \left( B_{\frac{\rho_o}{2}}^+ (y_o) \times \left[\tau_o - h(y_o, {\textstyle \frac{\rho_o}{2}}) {\textstyle \frac{h (y_o, \rho_o)}{h(y_o, \rho_o/2)}{\textstyle \frac{\rho_o^2}{4}} }, \tau_o \right] \right) \, ,\\ & Q^{+, \texttt{\,<}}_{\frac{\rho_o}{4},h(y_o, \rho_o)} (y_o, \tau_o) = \left( B_{\frac{\rho_o}{4}}^+ (y_o) \times \left[\tau_o - h(y_o, {\textstyle \frac{\rho_o}{2}}) {\textstyle \frac{h (y_o, \rho_o)}{h(y_o, \rho_o/2)}{\textstyle \frac{\rho_o^2}{16}} }, \tau_o \right] \right) \, , \end{align*} setting in Proposition \ref{prop-DeGiorgi1} \begin{gather*} \overline{m} = \omega = 2^\xi N, \quad R = \frac{\rho_o}{2}, \quad \rho = \frac{\rho_o}{4}, \quad
\sigma = 1 - 2^{-\xi-1}, \quad a = \sigma^{-1}\biggl(1-\frac{3}{2^{\xi+2}}\biggr) \, , \\ x^{\diamond} = y_o \, , \qquad t^{\diamond} = \tau_o - h(y_o, \rho_o) \frac{\rho_o^2}{4} \, , \qquad \upbeta^{\diamond} = \frac{h (y_o, \rho_o)}{h(y_o, \rho_o/2)} \, , \qquad {\theta}^{\diamond} = \frac{3}{4} \, , \end{gather*} we obtain from Proposition \ref{prop-DeGiorgi1} that $$ u\leqslant \frac{3N}{4} \quad \textrm{in }\, Q^{+, \texttt{\,<}}_{\frac{\rho_o}{4}\!, \, h(y_o, \rho_o)} (y_o, \tau_o) $$ which contradicts \eqref{choicey0t0}. Notice that $\upbeta^{\diamond} \in [\mathfrak q^{-1}, \mathfrak q]$. By \eqref{mimancanogliamorimiei-2} we derive that at least one of the two addends in \eqref{mimancanogliamorimiei-2} is greater than or equal to $\nu/2$. We then get \eqref{mimancanogliamorimiei} by \eqref{carlettomio} taking $$ \overline{\nu} = \frac{1}{\upkappa} \, \left( \frac{\nu}{2} \right)^{\frac{1}{\alpha}} \, . $$ To prove \eqref{mimancanogliamorimiei-1} we use \eqref{DGgamma+}. In $\eqref{DGgamma+}$ we choose $x_0 = y_o$, $t_0 = \tau_o - h (y_o, \rho_o) \rho_o^2$, $R = \rho_o$, $\tilde{r} = \rho_o$, $r = \rho_o / 2$, $\varepsilon = 0$, $\upbeta = 1$, $\theta = \frac{3}{4}$, $\tilde\theta = \frac{1}{2}$, $k = 0$ and since $u \leqslant 2^{\xi} N$ we get \begin{align} \label{est2.18}
& \iint_{Q^{+, \texttt{\,<}}_{\frac{\rho_o}{2}\!, \, h(y_o, \rho_o)} (y_o, \tau_o)} |Du|^2 \, \lambda \, dx dt \leqslant \nonumber \\ & \qquad \leqslant \, \gamma \Bigg[ (2^{\xi} N)^2 \, \mu_+ \left( I_{\frac{\rho_o}{2}, \frac{\rho_o}{2}}^+ (y_o) \right)
+ (2^{\xi} N)^2 \, \mu_- \left( I^{\frac{\rho_o}{2}, \frac{\rho_o}{2}}_+ (y_o) \right) + \nonumber \\ & \qquad \qquad + \frac{4}{\rho_o^2}
\iint_{\left(B_{\frac{\rho_o}{2}}^+(y_o)\right)^{\frac{\rho_o}{2}} \times [\tau_o - h(y_o, \rho_o) \frac{\rho_o^2}{2}, \tau_o] \cup
\left( I_{\frac{\rho_o}{2}}^+ (y_o) \right)^{\frac{\rho_o}{2}} \times [\tau_o - h(y_o, \rho_o) \rho_o^2, \tau_o]}
u^2\, \left( \frac{\mu_+}{h(y_o, \rho_o)} + \lambda \right) \, dx dt \Bigg] \leqslant \nonumber \\ & \qquad \leqslant \, \gamma \Bigg[ (2^{\xi} N)^2 \, \mu_+ \left( I_{\frac{\rho_o}{2}, \frac{\rho_o}{2}}^+ (y_o) \right)
+ (2^{\xi} N)^2 \, \mu_- \left( I^{\frac{\rho_o}{2}, \frac{\rho_o}{2}}_+ (y_o) \right)\Bigg] + \nonumber \\ & \qquad \qquad + \frac{4 \, \gamma}{\rho_o^2} (2^{\xi} N)^2 \Bigg[
\rho_o^2 \, \mu_+ \left( \left(B_{\frac{\rho_o}{2}}^+(y_o)\right)^{\frac{\rho_o}{2}} \right) +
h(y_o, \rho_o) \, \rho_o^2 \, \lambda \left( \left(B_{\frac{\rho_o}{2}}^+(y_o)\right)^{\frac{\rho_o}{2}} \right)
\Bigg] \leqslant \nonumber \\ & \qquad \leqslant \, \frac{\gamma}{\rho_o^2} \Bigg[ (2^{\xi} N)^2 \, \frac{h(y_o, \rho_o)}{h(y_o, \rho_o)} \, \rho_o^2 \,
|\mu| \left( \left(B_{\frac{\rho_o}{2}}^+(y_o)\right)^{\frac{\rho_o}{2}} \right) \Bigg] + \nonumber \\ & \qquad \qquad + \frac{4 \, \gamma}{\rho_o^2} (2^{\xi} N)^2 \Bigg[
\frac{h(y_o, \rho_o)}{h(y_o, \rho_o)} \, \rho_o^2 \, \mu_+ \left( \left(B_{\frac{\rho_o}{2}}^+(y_o)\right)^{\frac{\rho_o}{2}} \right) +
h(y_o, \rho_o) \, \rho_o^2 \, \lambda \left( \left(B_{\frac{\rho_o}{2}}^+(y_o)\right)^{\frac{\rho_o}{2}} \right)
\Bigg] \leqslant \nonumber \\ & \qquad \leqslant \frac{9 \, \gamma}{\rho_o^2} (2^{\xi} N)^2 \, h(y_o, \rho_o) \, \rho_o^2 \, \lambda \big( B_{\rho_o}(y_o) \big)\, . \nonumber \end{align} \textsl{Step 2 - } The goal of this step is to show the existence of $\bar{t} \in [\tau_o - h (y_o, \rho_o) {\textstyle \frac{\rho_o^2}{4}}, \tau_o]$ such that \begin{equation} \label{giochinipercarlettomio} \begin{array}{c} {\displaystyle
\frac{\mu_+ \left( \left\{ x \in B_{\rho_o / 2}^+ (y_o) \, \big| \, u(x,\bar{t}) > \frac{N}{2} \right\} \right)}
{{|\mu|}_{\lambda} \left( B_{\rho_o / 2} (y_o) \right)} > \frac{\overline{\nu}}{2} \, , } \\ [1em] {\displaystyle
\frac{\lambda_+ \left( \left\{ x \in B_{\rho_o / 2}^+ (y_o) \, \big| \, u(x,\bar{t}) > \frac{N}{2} \right\} \right)}
{\lambda \left( B_{\rho_o / 2} (y_o) \right)} > \frac{\overline{\nu}}{2} \, , } \\ [1em] {\displaystyle \int_{(B_{\frac{\rho_o}{2}}^+(y_o))^{\frac{\rho_o}{2}}}
|Du(x,\bar t)|^2 \lambda (x) dx \leqslant \frac{144 \gamma}{\overline{\nu}} \, (2^{\xi} N)^2 \, \frac{\lambda (B_{\rho_o}(y_o))}{\rho_o^2} \, . } \end{array} \end{equation}
To this aim we introduce the following sets (here, with a slight abuse of notation, $b$ denotes a positive number to be fixed later)
\begin{gather*}
A^+(t) = \left\{ x\in B_{\rho_o/2}^+(y_o) \, \Big| \, u(x,t) > \frac{N}{2} \right\} \, ,\qquad
t \in [\tau_o - h (y_o, \rho_o) {\textstyle \frac{\rho_o^2}{4}} , \tau_o] \\
I^+_{\mu} = \left\{ t \in [\tau_o - h (y_o, \rho_o) {\textstyle \frac{\rho_o^2}{4}} , \tau_o] \, \Big| \,
\frac{\mu_+ (A^+(t))}{|\mu|_{\lambda}(B_{\rho_o/2}(y_o))} > \frac{\overline{\nu}}{2} \right\}, \\
J_b=\displaystyle \bigg\{t\in [\tau_o - h (y_o, \rho_o) {\textstyle \frac{\rho_o^2}{4}} , \tau_o] \, \Big| \,
\int_{(B_{\frac{\rho_o}{2}}^+(y_o))^{\frac{\rho_o}{2}}} |Du(x,t)|^2 \lambda(x) dx \leqslant
b \, (2^{\xi} N)^2 \, \frac{\lambda (B_{\rho_o}(y_o))}{\rho_o^2} \bigg\} \, . \end{gather*} Using \eqref{mimancanogliamorimiei} we can write \begin{align*} \overline{\nu} \, h(y_o, \rho_o) \frac{\rho_o^2}{4} & < \int_{\tau_o - h (y_o, \rho_o) {\textstyle \frac{\rho_o^2}{4}}}^{\tau_o}
\frac{\mu_+(A^+(t))}{{{|\mu|}_{\lambda} \left( B_{\rho_o / 2} (y_o) \right)}} \, dt = \\
& = \int_{I^+_{\mu}} \frac{\mu_+(A^+(t))}{{{|\mu|}_{\lambda} \left( B_{\rho_o / 2} (y_o) \right)}} \, dt +
\int_{[\tau_o - h (y_o, \rho_o) {\textstyle \frac{\rho_o^2}{4}} , \tau_o] \setminus I^+_{\mu}}
\frac{\mu_+(A^+(t))}{{{|\mu|}_{\lambda} \left( B_{\rho_o / 2} (y_o) \right)}} \, dt \leqslant \\
& \leqslant | I^+_{\mu} | + \frac{\overline{\nu}}{2} \, h(y_o, \rho_o) \frac{\rho_o^2}{4} \end{align*}
by which \begin{align*}
| I^+_{\mu} | > \frac{\overline{\nu}}{2} \, h(y_o, \rho_o) \frac{\rho_o^2}{4} \, . \end{align*}
On the one hand we have \eqref{mimancanogliamorimiei-1}; on the other \begin{align*} \int_{[\tau_o - h (y_o, \rho_o) \frac{\rho_o^2}{4}, \tau_o ] \setminus J_{b}}
\int_{(B_{\frac{\rho_o}{2}}^+(y_o))^{\frac{\rho_o}{2}}} |Du|^2 \, \lambda \, dx dt \geqslant
b \, (2^{\xi} N)^2 \, \frac{\lambda (B_{\rho_o}(y_o))}{\rho_o^2} \,
\Big|\Big[\tau_o - h (y_o, \rho_o) \frac{\rho_o^2}{4},\tau_o\Big]\setminus J_b\Big| . \end{align*} Then we get $$
\big| J_b \big| \geqslant h(y_o, \rho_o) \frac{\rho_o^2}{4} \left( 1 - \frac{36 \gamma}{b} \right) . $$ For $b > 36 \gamma$ this inequality is nontrivial; choosing, e.g., $b = 144 \gamma / \overline{\nu}$ one gets $$
| I^+_{\mu} \cap J_b | = | I^+_{\mu} | + | J_b | - | I^+_{\mu} \cup J_b | \geqslant \frac{\overline{\nu}}{4} \, h(y_o, \rho_o) \frac{\rho_o^2}{4} \, . $$ \ \\ [0.3em] \textsl{Step 3 - } Here we show that for every $\bar{\delta} \in (0,1)$ there are $\eta \in (0,1)$ and $y^{\ast} \in B_{\rho_o / 2}^+ (y_o)$, $\eta = \eta (K_1, K_2, q, K_3, \varsigma, \bar{\delta})$, $y^{\ast} = y^{\ast} (\gamma, 2^{\xi} N, \overline{\nu}, K_1, K_2, q, K_3, \varsigma, \bar{\delta}) = y^{\ast} (\gamma, 2^{\xi} N, \kappa, \gamma_1, \mathfrak q, K_1, K_2, q, K_3, \varsigma, \bar{\delta})$, such that $B_{\eta \frac{\rho_o}{2}} (y^{\ast}) \subset B_{\frac{\rho_o}{2}}^+(y_o)$ and such that \begin{equation} \label{estMis} \mu_+ \left(\left\{u(\cdot,\bar t) \leqslant \frac{N}{4}\right\} \cap B_{\eta \frac{\rho_o}{2}}(y^{\ast}) \right) \leqslant
\bar{\delta} \, \mu_+ (B_{\eta \frac{\rho_o}{2}}(y^{\ast})). \end{equation} To see this it is sufficient to use the information from the previous step and to apply Lemma \ref{lemmaMisVar} to the function $2u/N$
with $\omega = \lambda$, $\nu = |\mu|_{\lambda}$, $\varepsilon = 1/2$, $\rho = \rho_o$, $x_0 = y_o$, $\mathcal{B} = B_{\rho_o / 2}^+ (y_o)$, $\sigma = \rho_o / 2$, $\alpha = \frac{\overline{\nu}}{2}$, $\beta = \frac{144 \gamma}{\overline{\nu}} \, (2^{\xi} N)^2$ and we get $$ \mu_+ \left(\left\{u(\cdot,\bar t) > \frac{N}{4}\right\} \cap B_{\eta \frac{\rho_o}{2}}(y^{\ast}) \right) >
(1 - \bar{\delta}) \, \mu_+ (B_{\eta \frac{\rho_o}{2}}(y^{\ast})) $$ which is equivalent to \eqref{estMis}. Notice that $\eta$ depends on $K_1, K_2, q, K_3, \varsigma$, the constants of the weights, $\bar{\delta}$ and not on the value $N$. \\ [0.3em] \textsl{Step 4 - } Here we show that an estimate like that of the third step can also be established in a cylinder. Precisely, we show that for every $\delta \in (0,1)$ there are $\bar{x} \in B_{\eta \frac{\rho_o}{4}}(y^{\ast})$, $\varepsilon \in (0,1)$, which will depend only on $\delta$ and $\mathfrak q$, and $s^{\ast} = (\varepsilon \, \eta \, \rho_o/4)^2 \, h(\bar{x}, \varepsilon \eta \frac{\rho_o}{4})$ such that ($\bar{t}, \bar{\delta}, \eta, \rho_o$ as above) \begin{align} \label{cilindro_brutto_3} M_+ \left( \left\{ u \leqslant \frac{N}{8}\right \} \cap \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big) \right)
\leqslant \delta \, M_+ \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big) \, . \end{align} Notice that $\bar{x}$ implicitly depends on $y^{\ast}$ and $\delta$ and hence $\bar{x}$ depends on $\gamma, 2^{\xi} N, \kappa, \gamma_1, \mathfrak q, K_1, K_2, q, K_3, \varsigma, \bar{\delta}, \delta$. \\ To see this we consider $\varepsilon \in (0,1) $ and a disjoint family of balls $\{B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j)\}_{j=1}^m$ such that \begin{align*} & B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \subset B_{\eta \frac{\rho_o}{4}}(y^{\ast}) \quad \text{for every } j=1,\ldots, m, \quad \text{ and } \\ & B_{\eta \frac{\rho_o}{4}} (y^{\ast}) \subset \bigcup_{j=1}^m B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j) \subset B_{\eta \frac{\rho_o}{2}} (y^{\ast}) \end{align*} and define $$ s^{\ast}_j := (\varepsilon \, \eta \, \rho_o/4)^2 \, h \left(x_j, \varepsilon \eta \frac{\rho_o}{4} \right) \, . $$ If necessary one can choose $\varepsilon$ small enough so that $\bar{t} + s^{\ast}_j < T$. We apply the energy estimate \eqref{DGgamma+_1} to the function $(u - N/4)_-$ in each of the sets $B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \times [\bar{t}, \bar{t} + s^{\ast}_j]$.
Since $B_{\eta \frac{\rho_o}{2}}(y^{\ast}) \subset \Omega_+$ we get \begin{align*} & \sup_{t \in [\bar{t}, \bar{t} + s^{\ast}_j]} \int_{B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j)} \left(u - \frac{N}{4}\right)^2_- (x,t) \mu_+ (x) dx \leqslant \\ & \hskip40pt \leqslant \int_{B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j)} \left(u-\frac{N}{4}\right)^2_- (x, \bar{t}) \mu_+ (x) dx +
\frac{16 \gamma}{\eta^2 \rho_o^2}
\int_{\bar{t}}^{\bar{t} + s^{\ast}_j} \!\!\!\! \int_{B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j)} \left(u-\frac{N}{4}\right)^2_-\, \lambda \, dx dt \end{align*} and summing over $j$ and using \eqref{estMis} \begin{align*} & \sum_{j = 1}^m
\sup_{t \in [\bar{t}, \bar{t} + s^{\ast}_j]} \int_{B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j)} \left(u - \frac{N}{4}\right)^2_- (x,t) \mu_+ (x) dx \leqslant \\ & \hskip30pt \leqslant \int_{B_{\eta \frac{\rho_o}{2}}(y^{\ast})} \left(u-\frac{N}{4}\right)^2_- (x, \bar{t}) \mu_+ (x) dx
+ \sum_{j = 1}^m \frac{16 \gamma}{\eta^2 \rho_o^2}
\int_{\bar{t}}^{\bar{t} + s^{\ast}_j} \!\!\!\! \int_{B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j)} \left(u-\frac{N}{4}\right)^2_-\, \lambda \, dx dt \leqslant \\ & \hskip30pt \leqslant
\frac{N^2}{16} \, \mu_+ \left(\left\{u(\cdot,\bar t) \leqslant \frac{N}{4}\right\} \cap B_{\eta \frac{\rho_o}{2}}(y^{\ast}) \right) + \\ & \hskip70pt + \sum_{j = 1}^m \frac{16 \gamma}{\eta^2 \rho_o^2} \, \frac{N^2}{16} \,
\varepsilon^2 \, \eta^2 \, \frac{\rho_o^2}{16} \, h \left(x_j, \varepsilon \eta \frac{\rho_o}{4}\right)
\, \lambda \big(B_{\varepsilon \eta \frac{\rho_o}{2}}(x_j) \big) \leqslant \\ & \hskip30pt \leqslant \frac{N^2}{16} \, \bar{\delta} \, \mu_+ \big(B_{\eta \frac{\rho_o}{2}}(y^{\ast}) \big) +
\sum_{j = 1}^m \frac{16 \, \gamma}{\eta^2 \rho_o^2} \, \frac{N^2}{16} \,
\varepsilon^2 \eta^2 \, \frac{\rho_o^2}{16} \, h \left(x_j, \varepsilon \eta \frac{\rho_o}{4}\right)
\, \mathfrak q \, \lambda \big(B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \big) \leqslant \\
& \hskip30pt \leqslant \mathfrak q \, \frac{N^2}{16} \, \bar{\delta} \, |\mu|_{\lambda} \big(B_{\eta \frac{\rho_o}{4}}(y^{\ast}) \big) +
\frac{\gamma \, \mathfrak q \, N^2 \varepsilon^2}{16} \, \sum_{j = 1}^m |\mu|_{\lambda} \big(B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \big) \leqslant \\ & \hskip30pt
\leqslant \mathfrak q \, \frac{N^2}{16} ( \bar{\delta} + \gamma\, \varepsilon^2 ) \, |\mu|_{\lambda} \big(B_{\eta \frac{\rho_o}{4}}(y^{\ast}) \big) \, . \end{align*} On the other side, defining \begin{gather*}
B_j(t) = \left\{ x \in B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \, \bigg| \, u(x,t) \leqslant \frac{N}{8}\right\} \, , \end{gather*} we easily get (for $t \in [\bar{t}, \bar{t} + s^{\ast}_j]$) $$ \int_{B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j)} \Big( u - \frac{N}{4} \Big)^2_- (x,t) \mu_+ (x) dx \geqslant
\int_{B_j(t)} \Big(u-\frac{N}{4}\Big)^2_-(x,t) \mu_+(x) dx \geqslant \frac{N^2}{64} |\mu|_{\lambda}( B_j(t)) \, . $$ Now putting together these inequalities we get \begin{align*}
\frac{N^2}{64} \sum_{j = 1}^m |M|_{\Lambda} & \left( \left\{ u \leqslant \frac{N}{8}\right \} \cap \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \times [\bar{t}, \bar{t} + s^{\ast}_j] \big) \right) \leqslant \\ & \hskip80pt \leqslant \mathfrak q \, \frac{N^2}{16} ( \bar{\delta} + \gamma\, \varepsilon^2 )
\sum_{j = 1}^m |M|_{\Lambda} \Big(B_{\eta \frac{\rho_o}{4}}(y^{\ast}) \times [\bar{t}, \bar{t} + s^{\ast}_j] \Big) \, . \end{align*} Once $\delta \in (0,1)$ is chosen we consider $\varepsilon$ and $\bar{\delta}$ in such a way that $$ 4 \, \mathfrak q \, ( \bar{\delta} + \gamma\, \varepsilon^2 ) \leqslant \delta $$ and then we get \begin{align*}
\sum_{j = 1}^m |M|_{\Lambda} \left( \left\{ u \leqslant \frac{N}{8}\right \} \cap \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(x_j) \times [\bar{t}, \bar{t} + s^{\ast}_j] \big) \right) \leqslant
\delta \sum_{j = 1}^m |M|_{\Lambda} \Big(B_{\eta \frac{\rho_o}{4}}(y^{\ast}) \times [\bar{t}, \bar{t} + s^{\ast}_j] \Big) \, . \end{align*} Notice that the $s^{\ast}_j$ depend on $\varepsilon$ and consequently on the choice of $\delta$. To find a cylinder, independent of $\delta$, in which the estimate above holds true, notice that, whatever the choice of $\delta$, by the last inequality at least one among the $x_j$'s has to satisfy \eqref{cilindro_brutto_3}. We call $\bar{x}$ that $x_j$ and set $s^{\ast} := s^{\ast}_j$. \\ [0.3em]
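One possible explicit choice of $\varepsilon$ and $\bar{\delta}$ (among many) is, for instance, $$ \bar{\delta} = \frac{\delta}{8 \, \mathfrak q} \, , \qquad \varepsilon = \min \left\{ \frac{1}{2} \, , \, \left( \frac{\delta}{8 \, \gamma \, \mathfrak q} \right)^{1/2} \right\} , $$ for which indeed $4 \, \mathfrak q \, ( \bar{\delta} + \gamma \, \varepsilon^2 ) \leqslant \frac{\delta}{2} + \frac{\delta}{2} = \delta$, up to reducing $\varepsilon$ further when required. \\ [0.3em]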
\textsl{Step 5 - } Here we show that \begin{equation} \label{emanuela} u \geqslant \frac{N}{16} \qquad \qquad \text{a.e. in } B_{\varepsilon \eta \frac{\rho_o}{8}}(\bar{x}) \times
\left[ \bar{t} + \frac{(\varepsilon \, \eta \, \rho_o)^2}{32} \, h \big( \bar{x}, \varepsilon \eta \frac{\rho_o}{4} \big) ,
\bar{t} + \frac{(\varepsilon \, \eta \, \rho_o)^2}{16} \, h \big( \bar{x}, \varepsilon \eta \frac{\rho_o}{4} \big) \right] \, . \end{equation} First notice that $\varepsilon$ depends only on $\delta$ and $\mathfrak q$. By \eqref{cilindro_brutto_3} and \eqref{carlettomio} we also get \begin{equation} \label{cilindro_brutto_4} \Lambda \left( \left\{ u \leqslant \frac{N}{8}\right \} \cap \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big) \right)
\leqslant \upkappa \, \delta^{\uptau} \, \Lambda \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big) \, . \end{equation} Now we want to apply Proposition \ref{prop-DeGiorgi2}, so first notice that, by the choice of $\bar{x}$ and $\rho_o$, since $u \geqslant 0$ and by \eqref{oscillazione} we have, choosing $\varepsilon$ even smaller if necessary so that $\bar{t} + s^{\ast} < \tau_o$, that $$ \mathop{\rm osc}\limits_{B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}]} u \leqslant 2^{\xi} N \, . $$ Then taking in Proposition \ref{prop-DeGiorgi2}, point $i\, $), the following values \begin{gather*} \underline{m} = 0 , \qquad \omega = 2^{\xi} N , \qquad r = \varepsilon \eta \frac{\rho_o}{8} , \qquad R = \varepsilon \eta \frac{\rho_o}{4} , \\ x^{\diamond} = \bar{x} , \qquad t^{\diamond} = \bar{t} , \qquad \upbeta^{\diamond} = 1 , \\
\sigma = \frac{1}{8} \frac{1}{2^{\xi}} , \qquad a = \frac{1}{2} , \qquad \theta^{\diamond} = \frac{1}{2} \end{gather*} we have the existence of $\underline{\nu}^{\diamond}$, which in this case depends only on $\kappa, \gamma_1, \gamma$, such that if $$ \frac{M_+ \left( \left\{ u \leqslant \frac{N}{8}\right \} \cap \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big) \right)}
{M_+ \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big)} + \frac {\Lambda \left( \left\{ u \leqslant \frac{N}{8}\right \} \cap \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big) \right)}
{\Lambda \big( B_{\varepsilon \eta \frac{\rho_o}{4}}(\bar{x}) \times [\bar{t}, \bar{t} + s^{\ast}] \big)} \leqslant \underline{\nu}^{\diamond} $$ then \eqref{emanuela} holds. Then, by \eqref{cilindro_brutto_3} and \eqref{cilindro_brutto_4}, it is sufficient to choose $\delta$ in the fourth step in such a way that $$ \delta + \upkappa \, \delta^{\uptau} = \underline{\nu}^{\diamond} $$ to get that \eqref{emanuela} holds (so $\delta$ depends only on $\kappa, \gamma_1, \gamma, \upkappa, \uptau)$. \\ [0.3em] \textsl{Step 6 - } Now, starting from \eqref{emanuela}, we are in the conditions to apply the expansion of positivity. \\ Before going on we recall the dependence of some parameters that are involved (and that we will need): \begin{align*} & \eta = \eta (K_1, K_2, q, K_3, \varsigma, \bar{\delta}) = \eta (K_1, K_2, q, K_3, \varsigma, \delta, \uptau, \mathfrak q)
= \eta (K_1, K_2, q, K_3, \varsigma, \kappa, \gamma_1, \gamma, \upkappa, \uptau, \mathfrak q) , \\ & \varepsilon = \varepsilon (\bar{\delta}) = \varepsilon (\delta, \mathfrak q) = \varepsilon (\kappa, \gamma_1, \gamma, \upkappa, \uptau, \mathfrak q) \, .
\end{align*} For simplicity we set $$ r := \frac{\varepsilon \, \eta \, \rho_o}{8} \qquad \text{and} \qquad \bar{s} := \bar{t} + 4 \, h \Big( \bar{x}, \varepsilon \eta \frac{\rho_o}{4} \Big) r^2 \, . $$ In Lemma \ref{esp_positivita} we consider \begin{gather*} x^{\ast} = \bar{x} , \qquad t^{\ast} = \bar{t} + \frac{(\varepsilon \, \eta \, \rho_o)^2}{16} \, h \Big(\bar{x}, \varepsilon \eta \frac{\rho_o}{4} \Big) = \bar{s}, \qquad \rho = r , \qquad h = \frac{N}{16} , \end{gather*} and get that there is $\tilde\upbeta$ depending on $\gamma$ and, for every $\hat\theta$, there is $\uplambda > 0$ depending on $\gamma_1 , \gamma , \mathfrak q , \kappa, \tilde\upbeta , \hat\theta$ such that $$ u \geqslant \uplambda \, \frac{N}{16} \qquad \qquad \text{a.e. in } B^+_{2 r}(\bar{x}) \times
\left[ \bar{s} + \hat\theta \, \tilde\upbeta \, h (\bar{x}, 4 r) r^2, \bar{s} + \tilde\upbeta \, h (\bar{x}, 4 r) r^2 \right] \, . $$ Since this holds for every $t \in [ \bar{s} + \hat\theta \, \tilde\upbeta \, h (\bar{x}, 4 r) r^2, \bar{s} + \tilde\upbeta \, h (\bar{x}, 4 r) r^2 ]$, applying again this lemma we reach $$ u \geqslant \uplambda^2 \, \frac{N}{16} \qquad \text{a.e. in } B^+_{4 r}(\bar{x}) \times
\left[ \bar{s} + \hat\theta \, \tilde\upbeta \, \big( h (\bar{x}, 4 r) + 4 h (\bar{x}, 8 r) \big) r^2,
\bar{s} + \tilde\upbeta \big( h (\bar{x}, 4 r) + 4 h (\bar{x}, 8 r) \big) r^2 \right] . $$ Iterating this argument $m$ times (at the $j$-th application the lemma is applied with radius $2^{j-1} r$, so the corresponding time increment is $\tilde\upbeta \, 4^{j-1} h(\bar{x}, 2^{j+1} r) \, r^2$) we get $$ u \geqslant \uplambda^m \, \frac{N}{16} \qquad \text{a.e. in } B^+_{2^m r}(\bar{x}) \times \left[ \bar{s} + \hat\theta \, \tilde\upbeta \, r^2 \sum_{j=1}^m 4^{j-1} h(\bar{x}, 2^{j+1} r), \bar{s} + \tilde\upbeta \, r^2 \sum_{j=1}^m 4^{j-1} h(\bar{x}, 2^{j+1} r) \right] . $$
Now we define the quantities ($m \in {\bf N}$ is still to be fixed) $$ \left\{ \arrst{1.5} \begin{array}{l} {\displaystyle {s}_m := \bar{s} + \hat\theta \, \tilde\upbeta \, r^2 \sum_{j=1}^m 4^{j-1} h(\bar{x}, 2^{j+1} r) } \, , \\ {\displaystyle {t}_m := \bar{s} + \tilde\upbeta \, r^2 \sum_{j=1}^m 4^{j-1} h(\bar{x}, 2^{j+1} r) }\, .
\end{array} \right. $$ Since $\bar{x} \in B_{\rho}(x_o)$, requiring that $2^m r \geqslant 2\rho$ ensures that $B_{2^m r}(\bar{x}) \supset B_{\rho}(x_o)$, so we require that $m$ is such that \begin{equation} \label{emme} 2 \rho \leqslant 2^m r < 4 \rho \, , \qquad \text{ i.e.} \quad 1 + \log_2\frac{\rho}{r} \leqslant m < 2 + \log_2\frac{\rho}{r} \, . \end{equation} It remains to fix the value of $\hat\theta$ in the time interval, as well as the values of $b$ and $\xi$. Now notice that for every $x,y \in \Omega$ and $\varrho > 0$ such that $B_{2\varrho} (x) \subset \Omega$ and $B_{2\varrho} (y) \subset \Omega$
and such that $| x - y | < \varrho$ we have \begin{gather*}
|\mu|_{\lambda} (B_{\varrho}(x)) \leqslant |\mu|_{\lambda} (B_{2\varrho}(y)) \leqslant \mathfrak q \, |\mu|_{\lambda} (B_{\varrho}(y)) \\ \lambda (B_{\varrho}(y)) \leqslant \lambda (B_{2\varrho}(x)) \leqslant \mathfrak q \, \lambda (B_{\varrho}(x)) \end{gather*} by which we derive $$ h (x, \varrho) \leqslant \mathfrak q^2 h (y, \varrho) \, . $$
Then, using this last estimate, (H.2)$'$ (see also Remark \ref{notaimportante}, point $\mathpzc{C}$) and \eqref{emme} we can estimate \begin{align*} \sum_{j=1}^m 4^{j-1} r^2 h(\bar{x}, 2^{j+1} r) & \, \leqslant \mathfrak q^2 \sum_{j=0}^{m-1} 4^{j} r^2 h(x_o, 2^{j+2} r) = \\ & \, = \frac{\mathfrak q^2}{4^2} \sum_{j=0}^{m-1} (4^{j+2} r^2)^{1 - \alpha} (4^{j+2} r^2)^{\alpha} h(x_o, 2^{j+2} r) \leqslant \\ & \, \leqslant \frac{\mathfrak q^2}{4^2} \sum_{j=0}^{m-1} (4^{j+2} r^2)^{1 - \alpha} {\tilde{K}_2}^2 (4^{m+1} r^2)^{\alpha} h(x_o, 2^{m+1} r) = \\ & \, = \frac{\mathfrak q^2}{4^2} {\tilde{K}_2}^2 (4^{m+1} r^2)^{\alpha} h(x_o, 2^{m+1} r) \sum_{j=0}^{m-1} (4^{2} r^2)^{1 - \alpha}(4^{1 - \alpha})^j \leqslant \\ & \, \leqslant \frac{\mathfrak q^2 \, {\tilde{K}_2}^2}{4 - 4^{\alpha}} \, 4^{m} r^2 h(x_o, 2^{m+1} r) \leqslant \\ & \, \leqslant \frac{4 \, \mathfrak q^6 \, {\tilde{K}_2}^2}{4 - 4^{\alpha}} \, \rho^2 h(x_o, \rho) \end{align*} by which \begin{gather*} s_m \leqslant \bar{s} + \hat\theta \, \tilde\upbeta \, \frac{4 \, \mathfrak q^6 \, {\tilde{K}_2}^2}{4 - 4^{\alpha}} \, \rho^2 h(x_o, \rho) \, . \end{gather*} Now for a fixed constant $\vartheta_+ \in (0,1]$ we can choose $$ \hat\theta \leqslant \vartheta_+ \, \frac{4 - 4^{\alpha}}{4} \, \frac{1}{\tilde\upbeta \, \mathfrak q^6 \, {\tilde{K}_2}^2} , $$ independent of $m$, and, since $\bar{s} < t_o$, we get \begin{gather*}
s_m < t_o + \vartheta_+ \, \rho^2 h(x_o, \rho) \, . \end{gather*} Notice that once $\hat\theta$ is fixed $\uplambda$ depends only on $\gamma_1 , \gamma , \mathfrak q , \kappa, \tilde\upbeta$. By the choice of $m$
and recalling the definition of $N$ we have \begin{align*} u \geqslant \uplambda^m \, \frac{b (\rho - r_o)^{-\xi}}{16} \qquad \text{a.e. in } B^+_{\rho}(x_o) \times \left[ s_m, t_m \right] . \end{align*} By the choice we made of $\rho_o$ in \eqref{rozero} we have that $$ \text{either } \frac{1}{\rho - r_o} = \frac{1}{2 \, \rho_o} \qquad \text{or } \frac{1}{\rho - r_o} = \frac{1}{2^{1+\alpha} \, \mathfrak q \, \tilde{K}_2} \, \frac{1}{\rho^{\alpha}} \, \frac{1}{\rho_o^{1 - \alpha}}\, . $$ Then, by the definition of $r$ and since $u(x_o, t_o) = b \, \rho^{-\xi}$, in the first case we get \begin{align*} u (x,t) \geqslant \frac{(2^{\xi} \uplambda)^m}{16} \, \frac{b (\varepsilon \, \eta)^{\xi}}{(2^6 \rho)^{\xi}} =
(2^{\xi} \uplambda)^m \, \frac{(\varepsilon \, \eta)^{\xi}}{2^{6\xi + 4}} \, u(x_o, t_o)
\qquad \text{a.e. in } B^+_{\rho}(x_o) \times \left[ s_m, t_m \right] . \end{align*} In the second we get \begin{align*} u (x,t) \geqslant & \, \frac{\uplambda^m}{16} b \left( \frac{1}{2^{1+\alpha} \, \mathfrak q \, \tilde{K}_2} \right)^{\xi} \, \frac{1}{\rho^{\alpha \xi}}
\left( \frac{\varepsilon \eta}{8}\right)^{(1-\alpha)\xi} \left( \frac{2^m}{4 \rho}\right)^{(1-\alpha)\xi} = \\ = & \, (2^{(1-\alpha)\xi} \uplambda)^m \, b \, \frac{(\varepsilon \, \eta)^{(1-\alpha)\xi}}{2^4 \, (2^{6 - 4\alpha}\mathfrak q \, \tilde{K}_2)^{\xi}} \, \frac{1}{\rho^{\xi}} = \\ = & \, (2^{(1-\alpha)\xi} \uplambda)^m \, \frac{(\varepsilon \, \eta)^{(1-\alpha)\xi}}{2^4 \, (2^{6 - 4\alpha}\mathfrak q \, \tilde{K}_2)^{\xi}} \, u(x_o, t_o)
\qquad \text{a.e. in } B^+_{\rho}(x_o) \times \left[ s_m, t_m \right] . \end{align*} So we can get rid of the dependence of $m$ choosing now $\xi$ in such a way that \begin{align*} 2^{\xi} \uplambda = 1 & \qquad \text{ in the first case} , \\ 2^{(1-\alpha)\xi} \uplambda = 1 & \qquad \text{ in the second case} . \end{align*} Since $r$ depends on $\rho_o$, which depends on $r_o$, which depends on $\xi$, once we have fixed $\xi$ we have also chosen the value of $r$, and consequently of $m$. Summing up, we have reached \begin{align*} u (x,t) \geqslant {c}_o \, u(x_o, t_o)
\qquad \quad \text{a.e. in } B^+_{\rho}(x_o) \times \left[ s_m, t_m \right] \end{align*} with $s_m < t_o + \vartheta_+ \, \rho^2 h(x_o, \rho)$, where $$ {c}_o = \frac{(\varepsilon \, \eta)^{\xi}}{2^{6\xi + 4}} \qquad \text{ or } \qquad {c}_o = \frac{(\varepsilon \, \eta)^{(1-\alpha)\xi}}{2^4 \, (2^{6 - 4\alpha}\mathfrak q \, \tilde{K}_2)^{\xi}} . $$ By the dependence of $\eta$, $\varepsilon$ and $\xi$ and since $\tilde{K}_2$ depends only on $K_2$ we have that $$ c_o \quad \text{ depends on } \qquad \gamma_1, \gamma, \mathfrak q, \kappa, \tilde\upbeta, \alpha, \upkappa, \uptau, K_1, K_2, K_3, q, \varsigma \, . $$ Now we are done if $t_m \geqslant t_o + \vartheta_+ \, \rho^2 h(x_o, \rho)$ and the constant $c_+$ is ${c}_o$. \\ Otherwise, if $t_m < t_o + \vartheta_+ \, \rho^2 h(x_o, \rho)$, we consider $$ \hat{t} \in [s_m , t_m] \qquad \text{such that } \quad \hat{t} + \hat\theta \, \tilde\upbeta \, h(x_o, 4\rho) \rho^2
\leqslant t_o + \vartheta_+ h(x_o, \rho) \rho^2 \, . $$ By \eqref{stimeacca} this is true, taking if necessary $\hat\theta$ smaller, if $$ \hat\theta \leqslant \frac{\vartheta_+}{\mathfrak q^2 \tilde\upbeta} \, . $$ Applying again Lemma \ref{esp_positivita} and since $u(x,t) \geqslant c_o \, u(x_o, t_o)$ a.e. in $B^+_{\rho}(x_o) \times \left[ s_m, t_m \right]$ (and then also in $B^+_{\rho/4}(x_o) \times \left[ s_m, t_m \right]$) we get, in particular, that both $$ u(x,t) \geqslant \uplambda \, c_o \, u(x_o, t_o) \qquad \text{a.e. in } B^+_{2\rho}(x_o) \times
\left[ \hat{t} + \hat\theta \, \tilde\upbeta \, h(x_o, 4\rho) \rho^2, \hat{t} + \tilde\upbeta \, h(x_o, 4\rho) \rho^2 \right] $$ and $$ u(x,t) \geqslant \uplambda \, c_o \, u(x_o, t_o) \qquad \text{a.e. in } B^+_{\rho/2}(x_o) \times
\left[ \hat{t} + \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2/ 16, \hat{t} + \tilde\upbeta \, h(x_o, \rho) \rho^2/16 \right] \, ; $$ then in particular $$ u(x,t) \geqslant \uplambda \, c_o \, u(x_o, t_o) \qquad \text{a.e. in } B^+_{\rho}(x_o) \times
\left[ \hat{t} + \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2/ 16, \hat{t} + \tilde\upbeta \, h(x_o, \rho) \rho^2/16 \right] \, . $$ Repeating this argument for every $t$ in $\left[ \hat{t} + \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2/ 16, \hat{t} + \tilde\upbeta \, h(x_o, \rho) \rho^2/16 \right]$
we get $$ u(x,t) \geqslant \uplambda^2 \, c_o \, u(x_o, t_o) \qquad \text{a.e. in } B^+_{\rho}(x_o) \times
\left[ \hat{t} + 2 \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16, \hat{t} + 2 \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16 \right] \, . $$ If necessary, we add the requirement $2\hat\theta < 1$ so that
$\left[ \hat{t} + 2 \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16, \hat{t} + 2 \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16 \right] \cap \left[ \hat{t} + \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2/ 16, \hat{t} + \tilde\upbeta \, h(x_o, \rho) \rho^2/16 \right] \not= \emptyset$. Going on, we get $$ u(x,t) \geqslant \uplambda^3 \, c_o \, u(x_o, t_o) \qquad \text{a.e. in } B^+_{\rho}(x_o) \times
\left[ \hat{t} + 3 \, \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16, \hat{t} + 3 \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16 \right] $$ requiring $3 \hat \theta < 2$, which is free since we already imposed $2\hat\theta < 1$. We iterate $k$ times, without additional assumptions about $\hat\theta$, till $\hat{t} + k \, \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16 > t_o + \vartheta \, h(x_o, \rho) \rho^2$ and get $$ u(x,t) \geqslant \uplambda^k \, c_o \, u(x_o, t_o) \qquad \text{a.e. in } B^+_{\rho}(x_o) \times
\left[ \hat{t} + k \, \hat\theta \, \tilde\upbeta \, h(x_o, \rho) \rho^2 / 16, \hat{t} + k \, \tilde\upbeta \, h(x_o, \rho) \rho^2 /16 \right] \, . $$ Since $t_o - \hat{t} < h(x_o, \rho) \rho^2$, the inequality $$ \hat{t} + \frac{k \, \tilde\upbeta}{16} \, h(x_o, \rho) \rho^2 > t_o + \vartheta_+ h(x_o, \rho) \rho^2 $$ holds if we choose $$ k > \frac{16}{\tilde\upbeta} \, (1 + \vartheta_+ ) \, . $$ For instance we can choose $[\frac{16}{\tilde\upbeta} \, (1 + \vartheta_+)] + 1$, the minimum integer greater than $\frac{16}{\tilde\upbeta} \, (1 + \vartheta_+)$, and the constant $c_+$ is $\uplambda^k c_o$, where $k$ depends only on $\tilde\upbeta$ and $\vartheta_+$. Since $\tilde\upbeta$ depends only on $\gamma$ we conclude that $c_+$ depends (only) on $$ \gamma_1, \gamma, \mathfrak q, \kappa, \alpha, \upkappa, \uptau, K_1, K_2, K_3, q, \varsigma , \vartheta_+ \, . $$ \ \\ [0.3em] In a completely analogous way one can prove point $ii \, )$. \\ [0.3em] We now consider point $iii \, )$. Since $s_1$ and $s_2$ will remain fixed in the following we will use the simplified notations, for some $r > 0$ and $\bar{x} \in \Omega$, $$ Q^{0}_{r} (\bar{x}) := B^0_r (\bar{x}) \times [s_1, s_2] \, , \qquad Q_{r} (\bar{x}) := B_r (\bar{x}) \times [s_1, s_2] \, . $$ As in point $i \, )$, we may write $u(x_o, t_o) = b \, \rho^{-\xi}$ for some $b, \xi > 0$ to be fixed later. Define the functions $$ \mathpzc{M} (r) = \sup_{Q^{0}_{r} (x_o)} u, \qquad \mathpzc{N} (r) = b (\rho - r)^{-\xi}, \qquad r \in [0,\rho) . $$ Let us denote by $r_o \in [0,\rho)$ the largest solution of $\mathpzc{M} (r) = \mathpzc{N} (r)$. Define $$ N := \mathpzc{N} (r_o) = b (\rho - r_o)^{-\xi} \, . $$ We can find $y_o \in B^{0}_{r_o} (x_o)$ such that \begin{equation} \label{choicey_o} \frac{3N}{4} < \sup_{Q^{0}_{\frac{\rho_o}{4}} (y_o)} u \leqslant N \end{equation} where $\rho_o = (\rho - r_o) / 2$, so $B^0_{\rho_o} (y_o) \subset B_{\frac{\rho + r_o}{2}} (x_o)$.
By this choice of $\rho_o$ and by the choice of $r_o$ we have \begin{equation} \label{oscillazione} \sup_{Q^{0}_{\rho_o} (y_o)} u \leqslant \sup_{Q^{0}_{\frac{\rho + r_o}{2}}(x_o)} u < \mathpzc{N} \left( \frac{\rho + r_o}{2}\right) = 2^{\xi} N . \end{equation} We now proceed dividing the proof into four steps. \\ [0.3em] \textsl{Step 1 - } In this step we want to show that there is $\overline{\nu} \in (0,1)$, depending on $\kappa, \gamma_1, \gamma$, such that \begin{align*} \Lambda_0 \left( \left\{ u > \frac{N}{2} \right\} \cap Q^{0}_{\rho_o/2} (y_o) \right) > \overline{\nu} \, \Lambda \left( Q_{\rho_o/2} (y_o) \right) \end{align*} and that \begin{equation} \label{chesonno!}
\iint_{Q^{0}_{\rho_o/2} (y_o)} |Du|^2 \, \lambda \, dx dt \leqslant \gamma \, (2^{\xi} N)^2 \left( \frac{2 \, K_2^2 \, \mathfrak q^2}{\upomega} + 4 \right) (s_2 - s_1) \, \frac{\lambda \big( B_{\rho_o} (y_o) \big)}{\rho_o^2} \, . \end{equation} Arguing by contradiction we immediately get the first inequality: indeed if that were false, setting in Proposition \ref{prop-DeGiorgi1}, point $iii \, )$, \begin{gather*} \overline{m} = \omega = 2^\xi N, \quad R = \frac{\rho_o}{2}, \quad \rho = \frac{\rho_o}{4}, \quad
\sigma = 1 - 2^{-\xi-1}, \quad a = \sigma^{-1}\biggl(1-\frac{3}{2^{\xi+2}}\biggr) \, , \\ x^{\star} = y_o \, , \qquad t^{\star} = t_o \, , \qquad \upbeta^{\star} = \frac{8 (s_2 - t_o)}{\rho_o} ,
\qquad s_1^{\star} = s_1 \, , \qquad s_2^{\star} = s_2 \, , \end{gather*} we would get that $$ u \leqslant \frac{3N}{4} \quad \textrm{in }\, B_{\rho_o/4}^0 (y_o) \times \left(s_1, s_2 \right) $$ which contradicts \eqref{choicey_o}. To prove \eqref{chesonno!} we choose in \eqref{DGgamma0} $x_0 = y_o$, $R = \rho_o$, $\tilde{r} = \rho_o$, $r = \rho_o / 2$, $\varepsilon = 0$, $k = 0$ and since $u \leqslant 2^{\xi} N$ we get \begin{align*}
& \iint_{Q^{0}_{\rho_o/2} (y_o)} |Du|^2 \, \lambda \, dx dt \leqslant \\ & \qquad \leqslant \, \gamma \Bigg[
(2^{\xi} N)^2 \, |\mu| \left( I^{\frac{\rho_o}{2}, \frac{\rho_o}{2}}_0 (y_o) \right) +
\frac{4}{\rho_o^2} \iint_{\left(B_{\frac{\rho_o}{2}}^0(y_o)\right)^{\frac{\rho_o}{2}} \times [s_1, s_2]}
u^2\, \lambda \, dx dt \Bigg] \leqslant \\ & \qquad \leqslant \, \gamma \Bigg[
(2^{\xi} N)^2 \, |\mu| \left( I^{\frac{\rho_o}{2}, \frac{\rho_o}{2}}_0 (y_o) \right) +
(2^{\xi} N)^2 \, \frac{4}{\rho_o^2} \, 2 \, \upomega \, h(x_o, 4\rho) \rho^2 \, \lambda \big( B_{\rho_o} (y_o) \big) \Bigg] \leqslant \\ & \qquad \leqslant \, \gamma \, (2^{\xi} N)^2
\left[ |\mu|_{\lambda} \big( B_{\rho_o} (y_o) \big) + \frac{8 \, \upomega}{\rho_o^2} \, h(x_o, 4 \rho) \rho^2
\, \lambda \big( B_{\rho_o} (y_o) \big) \right] = \\ & \qquad = \, \gamma \, (2^{\xi} N)^2
\left[ h(y_o, \rho_o) + \frac{8 \, \upomega}{\rho_o^2} \, h(x_o, 4 \rho) \rho^2 \right] \lambda \big( B_{\rho_o} (y_o) \big) \leqslant \\ & \qquad \leqslant \, \gamma \, (2^{\xi} N)^2
\left[ 4 \, K_2^2 \, \mathfrak q^2 \, h(x_o, 4 \rho) \frac{\rho^2}{\rho_o^2} +
\frac{8 \, \upomega}{\rho_o^2} \, h(x_o, 4 \rho) \rho^2 \right] \lambda \big( B_{\rho_o} (y_o) \big) \, . \end{align*} \\ [0.3em] \textsl{Step 2 - } Here we show that for every $\delta \in (0,1)$ there are $\eta \in (0,1)$, which will depend only on $K_1, K_2, K_3, q, \varsigma, \delta$, and $y^{\ast} \in B_{\rho_o / 2}^0 (y_o)$, which will depend only on $\gamma, 2^{\xi} N, \bar{\nu}, K_1, K_2, K_3, q, \varsigma, \upomega, \delta$ ($\delta$ will be chosen depending on $\kappa, \gamma_1, \gamma, \upomega \, h(x_o, 4\rho)$), such that $B_{\eta \rho_o/2}(y_o) \subset B_{\rho_o / 2}^0 (y_o)$ and \begin{equation} \label{cilindro_bello} \Lambda \left( \left\{ u \leqslant \frac{N}{4}\right \} \cap \big( Q_{\eta \frac{\rho_o}{2}} (y^{\ast}) \big) \right)
\leqslant \delta \, \Lambda \big( Q_{\eta \frac{\rho_o}{2}} (y^{\ast}) \big) \, . \end{equation} Indeed by \textsl{Step 1} and applying Corollary \ref{corollario3} to the function $2u/N$ with $\omega = \nu = \lambda$, $\varepsilon = 1/2$, $\rho = \rho_o$, $x_0 = y_o$, $\mathcal{B} = B_{\rho_o / 2}^0 (y_o)$, $\sigma = \rho_o / 2$, $a = s_1$, $b = s_2$, $\alpha = \overline{\nu}$, $\beta = \gamma \, (2^{\xi} N)^2 \left( 2 \, K_2^2 \, \mathfrak q^2 \, \upomega^{-1} + 4 \right)$ we get the existence of $B_{\eta \frac{\rho_o}{2}} (y^{\ast}) \subset B_{\rho_o / 2}^0 (y_o)$ such that $$ \Lambda \left( \left\{ u > \frac{N}{4}\right \} \cap \big( Q_{\eta \frac{\rho_o}{2}} (y^{\ast}) \big) \right)
> (1 - \delta) \, \Lambda \big( Q_{\eta \frac{\rho_o}{2}} (y^{\ast}) \big) $$ which is equivalent to \eqref{cilindro_bello}. \\ [0.3em] \textsl{Step 3 - } Here we show that \begin{equation} \label{emanuela} u \geqslant \frac{N}{8} \qquad \qquad \text{a.e. in } Q_{\eta \frac{\rho_o}{4}} (y^{\ast}) \, . \end{equation} We want to apply Proposition \ref{prop-DeGiorgi2}, so first notice that, since $u \geqslant 0$, by \eqref{oscillazione} we have that $$ \mathop{\rm osc}\limits_{Q_{\eta \frac{\rho_o}{2}} (y^{\ast})} u \leqslant 2^{\xi} N \, . $$ Then taking in Proposition \ref{prop-DeGiorgi2}, point $iii\, $), the following values \begin{gather*} \underline{m} = 0 , \qquad \omega = 2^{\xi} N , \qquad r = \eta \frac{\rho_o}{4} , \qquad R = \eta \frac{\rho_o}{2} , \\ x^{\star} = y^{\ast} , \qquad t^{\star} = t_o , \qquad s_1^{\star} = s_1 , \qquad s_2^{\star} = s_2 , \\ \upbeta^{\star} = 8 \, \upomega \, h(x_o, 4\rho) \, \frac{\rho^2}{\eta^2 \rho_o^2} , \qquad \sigma = \frac{1}{4} \frac{1}{2^{\xi}} , \qquad a = \frac{1}{2} \end{gather*} we have the existence of $\underline{\nu}^{\star} \in (0,1)$, which in this case depends only on $\kappa, \gamma_1, \gamma, \upomega \, h(x_o, 4\rho)$, such that if $$ \Lambda \left( \left\{ u \leqslant \frac{N}{4}\right \} \cap \big( Q_{\eta \frac{\rho_o}{2}} (y^{\ast}) \big) \right)
\leqslant \underline{\nu}^{\star} \, \Lambda \big( Q_{\eta \frac{\rho_o}{2}} (y^{\ast}) \big) $$ then \eqref{emanuela} holds. Then it is sufficient to choose $\delta = \underline{\nu}^{\star}$ (so $\delta$ depends only on $\kappa, \gamma_1, \gamma, \upomega \, h(x_o, 4\rho)$). \\ [0.3em] \textsl{Step 4 - } Now we want to apply the expansion of positivity. We set, for simplicity, $$ r := \eta \frac{\rho_o}{4} \, . $$ Taking in Lemma \ref{esp_positivita}, point $iii\, )$, $$ \rho = r \, , \qquad \upbeta = \upomega $$ we get that $$ u \geqslant \uplambda \, \frac{N}{8} \qquad \qquad \text{a.e. in } B^0_{2 r}(y^{\ast}) \times \left[ s_1, s_2 \right] $$ with $\uplambda$ depending on $\gamma_1, \gamma, \mathfrak q, \kappa, \upomega$. Now taking in Lemma \ref{esp_positivita}, point $iii\, )$, $$ \rho = 2r \, , \qquad \upbeta = \upomega $$ we get that $$ u \geqslant \uplambda^2 \, \frac{N}{8} \qquad \qquad \text{a.e. in } B^0_{4 r}(y^{\ast}) \times
\left[ s_1, s_2 \right] \, . $$ We iterate this argument $m$ times getting $$ u \geqslant \uplambda^m \, \frac{N}{8} \qquad \qquad \text{a.e. in } B^0_{2^m r}(y^{\ast}) \times
\left[ s_1, s_2 \right] $$ till $B_{2^m r}(y^{\ast}) \supset B_{\rho}(x_o)$ and this is guaranteed if $$ 2 \rho \leqslant 2^m r < 4 \rho \, .
$$ As done before, observe that \begin{align*} u (x,t) & \geqslant \uplambda^m \, \frac{N}{8} = \uplambda^m \, \frac{b \, \eta^\xi}{8^{\xi+1}} \, \frac{2^{m\xi}}{(2^m r)^{\xi}}
\geqslant \uplambda^m \, \frac{b \, \eta^\xi}{8^{\xi+1}} \, \frac{2^{m \xi}}{(4 \, \rho)^{\xi} } = \\
& = (2^{\xi} \uplambda)^m \, \frac{\eta^\xi}{2^{5\xi + 3}} \, u(x_o, t_o) \, . \end{align*} Then, as before, choosing $\xi$ in such a way that $2^{\xi} \uplambda = 1$ we get rid of the dependence on $m$ and in particular we get $$ u (x,t) \geqslant c_0 \, u(x_o, t_o) \qquad \qquad \text{a.e. in } B^0_{\rho}(x_o) \times
\left[ s_1, s_2 \right] $$ where $c_0 = \frac{\eta^\xi}{2^{5\xi + 3}}$ depends (only) on $K_1, K_2, K_3, q, \varsigma, \kappa, \gamma_1, \gamma, \upomega, h(x_o, 4\rho), \mathfrak q$, the constants on which $\uplambda$ and $\eta$ depend. \\ [0.3em] Finally the proof of point $iv \, )$ can be obtained similarly to that of point $iii \, )$, using, in order, Proposition \ref{prop-DeGiorgi1}, point $iv \, )$, Lemma \ref{lemmaMisVar}, Proposition \ref{prop-DeGiorgi2}, point $iv \, )$, Lemma \ref{esp_positivita}, point $iv \, )$.
$\square$ \\
\noindent The previous theorem has an immediate consequence, which we state below.
\begin{theorem} \label{Harnack2} Assume $u\in DG(\Omega, T, \mu, \lambda, \gamma)$, $u\geqslant 0$. Fix $\rho > 0$ and $\vartheta \in (0,1]$ for which $B_{5\rho}(x_o) \times [t_o - 16 \, h(x_o, 4\rho) \rho^2 - \vartheta h(x_o, \rho) \rho^2, t_o + 16 \, h(x_o, 4\rho) \rho^2 + \vartheta h(x_o, \rho) \rho^2] \subset \Omega \times (0,T)$. Suppose $x_o \in I$. Then there exists $c > 0$ depending on $\gamma_1, \gamma, \mathfrak q, \kappa, \alpha, \upkappa, \uptau, K_1, K_2, K_3, q, \varsigma , \vartheta$ such that $$ u(x_o, t_o)\leqslant c \inf_{B_{\rho} (x_o)} \tilde{u} (x) $$ where \begin{align*} \tilde{u} (x) =
\left\{
\begin{array}{ll}
u(x, t_o + \vartheta \, h(x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^+(x_o) \\ [0.3em]
u(x, t_o - \vartheta \, h (x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^-(x_o)
\end{array}
\right. & \qquad \text{if } \quad x_o \in I_+ \cap I_- , \\ \tilde{u} (x) =
\left\{
\begin{array}{ll}
u(x, t_o + \vartheta \, h(x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^+(x_o) \\ [0.3em]
u(x, t_o) & \text{ if } x \in B_{\rho}^0(x_o)
\end{array}
\right. & \qquad \text{if } \quad x_o \in I_+ \cap I_0 , \\ \tilde{u} (x) =
\left\{
\begin{array}{ll}
u(x, t_o - \vartheta \, h (x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^-(x_o) \\ [0.3em]
u(x, t_o) & \text{ if } x \in B_{\rho}^0(x_o)
\end{array}
\right. & \qquad \text{if } \quad x_o \in I_- \cap I_0 , \\ \tilde{u} (x) =
\left\{
\begin{array}{ll}
u(x, t_o + \vartheta \, h(x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^+(x_o) \\ [0.3em]
u(x, t_o - \vartheta \, h (x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^-(x_o) \\ [0.3em]
u(x, t_o) & \text{ if } x \in B_{\rho}^0(x_o) .
\end{array}
\right. & \qquad \text{if } \quad x_o \in I_+ \cap I_- \cap I_0 \, . \end{align*} \end{theorem} \noindent {\it Proof}\ \ -\ \ By Theorem \ref{Harnack1} we immediately get the result taking $\vartheta = \vartheta_+ = \vartheta_-$ and $c = \max\{ c_+, c_-, c_0 \}$.
$\square$ \\
\noindent One can give many different and equivalent formulations of the classical parabolic Harnack's inequality. We conclude by giving only one possible formulation, equivalent to the one given above, which can be proved by standard arguments. Under the assumptions of Theorem \ref{Harnack2} one has for $u \in DG$, $u \geqslant 0$, and for instance for $x_o \in \partial\Omega_+ \cap \partial\Omega_0 \cap \partial\Omega_-$ (and with obvious generalization in the other cases) \begin{align} \label{equivalent} & \sup_{B_{\rho} (x_o)} \tilde{u} (x) \leqslant c \inf_{B_{\rho} (x_o)} u (x, t_o) \nonumber \\ & \text{where} \quad \tilde{u} (x) =
\left\{
\begin{array}{ll}
u(x, t_o - \vartheta \, h(x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^+(x_o) \\ [0.3em]
u(x, t_o + \vartheta \, h (x_o, \rho) \rho^2) & \text{ if } x \in B_{\rho}^-(x_o) \\ [0.3em]
u(x, t_o) & \text{ if } x \in B_{\rho}^0(x_o) .
\end{array}
\right. \end{align} \ \\
\noindent {\bf Some consequences of the Harnack inequality - } An important and standard consequence for a function satisfying a Harnack inequality is H\"older-continuity. By classical computations and assuming (if necessary taking $\gamma$ bigger) $$ \frac{\gamma}{\gamma - 1} < 2 $$ one can get that if $u \in DG(\Omega, T, \mu, \lambda, \gamma)$ then $u$ is locally $\alpha$-H\"older continuous with respect to $x$ and $\alpha/2$-H\"older continuous with respect to $t$, where $\alpha = \log_2 \frac{\gamma}{\gamma - 1}$, in $\big( \Omega_+ \cup \Omega_- \cup I \big) \times (0,T)$. As regards $\Omega_0$ we can only get that for every $t \in (0,T)$, $u ( \cdot, t)$ is locally $\alpha$-H\"older continuous in $\Omega_0$. Notice that in the interface $I$ separating $\Omega_0$ and $\Omega_+ \cup \Omega_-$ the function $u$ is regular also with respect to $t$. \\ [0.3em] Another consequence is a strong maximum principle, which one can get, again by standard arguments, using \eqref{equivalent}. One can derive a ``standard'' maximum principle from Theorem \ref{Harnack1}, which we do not state, and others from Theorem \ref{Harnack2}. \\ If, for instance, we suppose $x_o \in \partial\Omega_+ \cap \partial\Omega_0 \cap \partial\Omega_-$ (and again with obvious generalization in the other cases) we could briefly state the maximum principles as follows: suppose $(x_o, t_o) \in \Omega \times (0,T)$ is a maximum point for $u$ in a set \begin{align*} & \Big( B_{\rho}^+ (x_o) \times (t_o - \vartheta \, h(x_o, \rho) \rho^2, t_o + \vartheta \, h(x_o, \rho) \rho^2) \Big)
\cup \Big( B_{\rho}^0 (x_o) \times \{ t_o \} \Big) \, \cup \\ & \quad \quad \Big( B_{\rho}^- (x_o) \times (t_o - \vartheta \, h(x_o, \rho) \rho^2, t_o + \vartheta \, h(x_o, \rho) \rho^2) \Big) \end{align*} for some $\vartheta \in (0,1]$, then $u$ is constant in the set \begin{align*} \Big( B_{\rho}^+ (x_o) \times (t_o - \vartheta \, h(x_o, \rho) \rho^2, t_o] \Big)
\cup \Big( B_{\rho}^0 (x_o) \times \{ t_o \} \Big) \, \cup
\Big( B_{\rho}^- (x_o) \times [t_o, t_o + \vartheta \, h(x_o, \rho) \rho^2) \Big) \, . \end{align*} \ \\
\section{Examples} \label{paragrafo9}
In this section we show some possible examples of $\mu$ (and consequently of $I$) and $\lambda$. In all the examples, just for simplicity, we suppose $\Omega \subset {\bf R}^2$.
\begin{itemize}[itemsep=1ex, leftmargin=0.62cm] \item[1.] In the simplest situation when $\mu \equiv \lambda \equiv 1$ we get the classical case in which the De Giorgi class contains the solutions of $$ \frac{\partial u}{\partial t} - \textrm{div} (a(x,t,u,Du)) = b(x,t,u,Du) $$ with $a, b$ satisfying \begin{align*}
\big(a (x,t,u,Du) , Du \big) \geqslant \lambda |Du|^p \, , \\
| a (x,t,u,Du) | \leqslant \Lambda |Du|^{p-1} \, , \\
| b (x,t,u,Du) | \leqslant M |Du|^{p-1} \, , \end{align*} with $\lambda, \Lambda, M$ positive numbers. Obviously if $\mu \equiv -1$ we have the analogous results for backward parabolic equations. \item[2.] If $\mu \equiv 0$ and $\lambda \equiv 1$ we have a family (in the parameter $t$) of elliptic equations for which one cannot expect regularity in time, not even for ``solutions''. The same may happen if $\Omega_0$ is a proper subset of $\Omega$. \\ For example, in dimension $1$ consider the solutions of $$ \frac{d}{dx} \left( a(x,t) \frac{du}{dx} \right) = 0 \, , \qquad u(0)=0 \, , \ u(2) = 1 \, , $$ with $$ a(x,t) = \alpha (t) \text{ in } [0,1] \qquad \text{and} \qquad a(x,t) = \beta (t) \text{ in } [1,2] $$ with $\alpha(t) \not= \beta(t)$ for every $t$ and $\alpha$ and $\beta$ discontinuous. Since the flux $a \, \frac{du}{dx}$ is constant in $x$, one computes $u(1) = \beta(t)/(\alpha(t) + \beta(t))$, so the solutions are clearly discontinuous in time for $x \in (0,2)$. \item[3.] If $\mu > 0$ and $\lambda > 0$ we have Harnack's inequality for doubly weighted equations, like for instance \begin{equation} \label{ultimavolta} \mu \frac{\partial u}{\partial t} - \textrm{div} (\lambda Du) = 0 \, . \end{equation} In the particular case $\mu \equiv 1$ we recover the result contained in \cite{chia-se3} (and also contained in \cite{surnachev}), while if $\mu \equiv \lambda$ we recover the result contained in \cite{chia-se2}. \item[4.]
Consider now an example where for simplicity $|\mu| \equiv \lambda \equiv 1$ in $\Omega$, but $\mu \not\equiv1$. Suppose, for instance, that $\mu$ changes sign around an interface like that in the first of the two following pictures where $I$ is a cross intersecting in a point $x_o$. This kind of interface clearly satisfies assumptions (H.4) and (H.5) and then also in a neighbourhood of the points $(x_o, t)$, $t \in (0,T)$, the solution, e.g., of \eqref{ultimavolta} is H\"older-continuous. \\ Also an interface like that shown in the second of the two following pictures is admitted. \ \\ \ \\ \ \\ \ \\ \begin{picture}(150,200)(-180,0) \hspace{-3.5cm} \put (-40,180){\tiny$\mu =1$} \put (60,80){\tiny$\mu =1$} \put (60,180){\tiny$\mu = -1$} \put (-40,80){\tiny$\mu = -1$}
\put (20,50){\linethickness{1pt}\line(0,1){150}}
\put (-70,125){\linethickness{1pt}\line(1,0){180}}
\end{picture} \begin{picture}(150,200)(-180,0) \hspace{-1cm} \put (10,180){\tiny$\mu =0$} \put (60,120){\tiny$\mu =1$} \put (-40,120){\tiny$\mu = -1$}
\put (30,70){\linethickness{1pt}\line(0,1){70}} \put (30,140){\linethickness{1pt}\line(3,2){70}} \put (30,140){\linethickness{1pt}\line(-3,2){70}}
\end{picture}
\item[5.] Consider $\mu \geqslant 0$ and, for simplicity, suppose that $\mu$ takes only the values $1$ and $0$. In the pictures below there are two simple examples: in the first one the interface is made of just a line, in the second one it is made of two intersecting lines. In both cases a function belonging to the De Giorgi class turns out to be H\"older-continuous in $(\Omega_+ \cup I ) \times (0,T)$. In particular it is continuous in the interface $I$ both in $x$ and $t$, even though it may fail to be continuous in $\Omega_0$, as shown in the second example. \ \\ \ \\ \ \\ \ \\ \begin{picture}(150,200)(-180,0) \hspace{-4cm} \put (-40,130){\tiny$\mu =1$} \put (60,130){\tiny$\mu = 0$}
\put (30,40){\linethickness{1pt}\line(0,1){170}}
\end{picture} \begin{picture}(150,200)(-180,0) \hspace{-2cm} \put (-40,180){\tiny$\mu =1$} \put (60,80){\tiny$\mu =1$} \put (60,180){\tiny$\mu = 0$} \put (-40,80){\tiny$\mu = 0$}
\put (30,40){\linethickness{1pt}\line(0,1){170}} \put (-70,125){\linethickness{1pt}\line(1,0){200}}
\end{picture}
\item[6.] Also some cusps like the one in the picture below can be admitted, provided that assumption (H.4) is satisfied. For example, suppose (part of) the interface is that in the picture below, the vertex is the point $(0,0)$, and $\mu \not=0$. If $\mu_+$ satisfies (H.4) then the assumptions are satisfied and the theorems of Section \ref{secHarnack} hold. \\ For instance, suppose $\lambda \equiv 1$ and consider $\mu \equiv -1$ on the left of the curve and $\mu \equiv 1$ on the other side of the curve, which is the union of the graphs of $f(x) = x^n$ and $g(x) = - x^n$ for $x \in [0, L]$, $L > 0$, and $n \in {\bf N}$, $n \geqslant 1$. We have that $$ \mu_+ \big( {B_{2\rho} (0,0)} \big) \leqslant \mathfrak q \, \mu_+ \big( {B_{\rho} (0,0)} \big) $$ for some $\mathfrak q$ depending on $n$. \\ If, instead, we consider for instance $f(x) = e^{-1/x}$ and $g(x) = - e^{-1/x}$, the above inequality does not hold any more. \ \\ \ \\ \begin{tikzpicture} \hspace{3cm} \begin{axis} [axis equal] \addplot [domain=0:1,variable=\t, samples=40,smooth,thick,black] ({t},{t^3}); \addplot [domain=0:1,variable=\t, samples=40,smooth,thick,black] ({t},{-t*t*t}); \end{axis} \end{tikzpicture} \ \\ \ \\
If we consider different $\mu$, i.e. $\mu$ which can degenerate to zero, the geometry of the interface can change depending also on how the weights $|\mu|$ and $\lambda$ degenerate near the interface.
\item[7.]
The final example is the following: again for simplicity suppose $|\mu| \equiv \lambda \equiv 1$ in ${\bf R}^2$ and suppose $\mu \equiv 1$ in the region above the graph of $f$, which we will call $\Omega_+$, and $\mu \equiv -1$ in the region below the graph of $f$, which we will call $\Omega_-$, where $$ f(y) = y \cos \frac{1}{y} \qquad (f(0) = 0) \, . $$ In spite of the fact that the length of the graph inside the ball $B := B_1(0,0)$ is infinite, the measure (the $2$-dimensional Lebesgue measure $\mathcal{L}^2$) of the $\varepsilon$-neighbourhood of $I$ is of order $\varepsilon$ and thus goes to zero when $\varepsilon \to 0^+$. Moreover, due to the symmetry of the graph of $f$ we have that $$ \mu_+ \big( B_{2\rho}(0,0) \big) = \frac{1}{2} \, \mathcal{L}^2 \big( B_{2\rho}(0,0) \big) \leqslant
\frac{1}{2} \, c \, \mathcal{L}^2 \big( B_{\rho}(0,0) \big) = \frac{1}{2} \, c \, \mu_+ \big( B_{\rho}(0,0) \big) $$ where $c$ denotes the doubling constant for $\mathcal{L}^2$. Therefore also in this case assumptions (H.4) and (H.5) are satisfied and, even though $I$ is not rectifiable, it is an admissible interface. \ \\ \ \\ \begin{tikzpicture} \hspace{3cm} \begin{axis} [axis equal] \addplot [domain=-0.001:0.6,variable=\t, samples=400,smooth,thick,black] ({t},{t*cos(deg(pi/t))}); \addplot [domain=-0.6:-0.001,variable=\t, samples=400,smooth,thick,black] ({t},{t*cos(deg(pi/t))}); \end{axis} \end{tikzpicture} \ \\ \ \\
\end{itemize}
\end{document}
De Casteljau's algorithm
In the mathematical field of numerical analysis, De Casteljau's algorithm is a recursive method to evaluate polynomials in Bernstein form or Bézier curves, named after its inventor Paul de Casteljau. De Casteljau's algorithm can also be used to split a single Bézier curve into two Bézier curves at an arbitrary parameter value.
Although the algorithm is slower for most architectures when compared with the direct approach, it is more numerically stable.
Definition
A Bézier curve $B$ (of degree $n$, with control points $\beta _{0},\ldots ,\beta _{n}$) can be written in Bernstein form as follows
$B(t)=\sum _{i=0}^{n}\beta _{i}b_{i,n}(t),$
where $b$ is a Bernstein basis polynomial
$b_{i,n}(t)={n \choose i}(1-t)^{n-i}t^{i}.$
The curve at point $t_{0}$ can be evaluated with the recurrence relation
$\beta _{i}^{(0)}:=\beta _{i},\ \ i=0,\ldots ,n$
$\beta _{i}^{(j)}:=\beta _{i}^{(j-1)}(1-t_{0})+\beta _{i+1}^{(j-1)}t_{0},\ \ i=0,\ldots ,n-j,\ \ j=1,\ldots ,n$
Then, the value of $B$ at point $t_{0}$ can be computed with ${\binom {n+1}{2}}$ interpolation operations. The result $B(t_{0})$ is given by
$B(t_{0})=\beta _{0}^{(n)}.$
Moreover, the Bézier curve $B$ can be split at point $t_{0}$ into two curves with respective control points:
$\beta _{0}^{(0)},\beta _{0}^{(1)},\ldots ,\beta _{0}^{(n)}$
$\beta _{0}^{(n)},\beta _{1}^{(n-1)},\ldots ,\beta _{n}^{(0)}$
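The evaluation and the splitting can be carried out together: the first set of control points above is the left diagonal of the recurrence, and the second is the other outer diagonal read in reverse. The following Python sketch illustrates this for one scalar coordinate (the function name `de_casteljau_split` is illustrative, not standard; for curves in the plane or in space it is applied once per coordinate):

```python
def de_casteljau_split(t0, points):
    """Evaluate and split a Bezier curve at t0.

    `points` holds one scalar coordinate of each control point.
    Returns the control points of the two halves; the last entry of
    `left` (equal to the first entry of `right`) is the curve point B(t0).
    """
    left, right = [points[0]], [points[-1]]
    beta = list(points)
    while len(beta) > 1:
        # one level of the triangle scheme
        beta = [b0 * (1 - t0) + b1 * t0 for b0, b1 in zip(beta, beta[1:])]
        left.append(beta[0])    # left diagonal: beta_0^(j)
        right.append(beta[-1])  # right diagonal: beta_{n-j}^(j)
    return left, right[::-1]
```

Both halves reproduce the original curve exactly on their parameter ranges, which is the basis of subdivision algorithms for rendering and intersection testing.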
Geometric interpretation
The geometric interpretation of De Casteljau's algorithm is straightforward.
• Consider a Bézier curve with control points $P_{0},...,P_{n}$. Connecting the consecutive points we create the control polygon of the curve.
• Subdivide now each line segment of this polygon with the ratio $t:(1-t)$ and connect the points you get. This way you arrive at the new polygon having one fewer segment.
• Repeat the process until you arrive at the single point – this is the point of the curve corresponding to the parameter $t$.
The following picture shows this process for a cubic Bézier curve:
Note that the intermediate points that were constructed are in fact the control points for two new Bézier curves, both exactly coincident with the old one. This algorithm not only evaluates the curve at $t$, but splits the curve into two pieces at $t$, and provides the equations of the two sub-curves in Bézier form.
The interpretation given above is valid for a nonrational Bézier curve. To evaluate a rational Bézier curve in $\mathbf {R} ^{n}$, we may project the point into $\mathbf {R} ^{n+1}$; for example, a curve in three dimensions may have its control points $\{(x_{i},y_{i},z_{i})\}$ and weights $\{w_{i}\}$ projected to the weighted control points $\{(w_{i}x_{i},w_{i}y_{i},w_{i}z_{i},w_{i})\}$. The algorithm then proceeds as usual, interpolating in $\mathbf {R} ^{4}$. The resulting four-dimensional points may be projected back into three-space with a perspective divide.
In general, operations on a rational curve (or surface) are equivalent to operations on a nonrational curve in a projective space. This representation as the "weighted control points" and weights is often convenient when evaluating rational curves.
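As a concrete sketch of this projection (the function names here are illustrative), a rational Bézier curve in the plane can be evaluated by lifting each control point to homogeneous coordinates, running the ordinary algorithm on each homogeneous coordinate, and finishing with the perspective divide:

```python
def de_casteljau(t, coefs):
    # ordinary scalar De Casteljau recurrence
    beta = list(coefs)
    for j in range(1, len(beta)):
        for k in range(len(beta) - j):
            beta[k] = beta[k] * (1 - t) + beta[k + 1] * t
    return beta[0]

def rational_bezier_point(t, points, weights):
    """Evaluate a rational Bezier curve via homogeneous coordinates."""
    dim = len(points[0])
    # Lift: (x, y, ...) with weight w becomes (w*x, w*y, ..., w).
    lifted = [[w * c for c in p] + [w] for p, w in zip(points, weights)]
    # Interpolate each homogeneous coordinate independently.
    homog = [de_casteljau(t, [q[i] for q in lifted]) for i in range(dim + 1)]
    # Perspective divide back to affine coordinates.
    return [c / homog[-1] for c in homog[:-1]]
```

With control points (1, 0), (1, 1), (0, 1) and weights 1, √2/2, 1 (a standard example), the curve traced is a quarter of the unit circle, which no nonrational Bézier curve can represent exactly.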
Notation
When doing the calculation by hand it is useful to write down the coefficients in a triangle scheme as
${\begin{matrix}\beta _{0}&=\beta _{0}^{(0)}&&&\\&&\beta _{0}^{(1)}&&\\\beta _{1}&=\beta _{1}^{(0)}&&&\\&&&\ddots &\\\vdots &&\vdots &&\beta _{0}^{(n)}\\&&&&\\\beta _{n-1}&=\beta _{n-1}^{(0)}&&&\\&&\beta _{n-1}^{(1)}&&\\\beta _{n}&=\beta _{n}^{(0)}&&&\\\end{matrix}}$
When choosing a point t0 to evaluate a Bernstein polynomial we can use the two diagonals of the triangle scheme to construct a division of the polynomial
$B(t)=\sum _{i=0}^{n}\beta _{i}^{(0)}b_{i,n}(t),\quad t\in [0,1]$
into
$B_{1}(t)=\sum _{i=0}^{n}\beta _{0}^{(i)}b_{i,n}\left({\frac {t}{t_{0}}}\right)\!,\quad t\in [0,t_{0}]$
and
$B_{2}(t)=\sum _{i=0}^{n}\beta _{i}^{(n-i)}b_{i,n}\left({\frac {t-t_{0}}{1-t_{0}}}\right)\!,\quad t\in [t_{0},1].$
Bézier curve
When evaluating a Bézier curve of degree n in 3-dimensional space with n + 1 control points Pi
$\mathbf {B} (t)=\sum _{i=0}^{n}\mathbf {P} _{i}b_{i,n}(t),\ t\in [0,1]$
with
$\mathbf {P} _{i}:={\begin{pmatrix}x_{i}\\y_{i}\\z_{i}\end{pmatrix}},$
we split the Bézier curve into three separate equations
$B_{1}(t)=\sum _{i=0}^{n}x_{i}b_{i,n}(t),\ t\in [0,1]$
$B_{2}(t)=\sum _{i=0}^{n}y_{i}b_{i,n}(t),\ t\in [0,1]$
$B_{3}(t)=\sum _{i=0}^{n}z_{i}b_{i,n}(t),\ t\in [0,1]$
which we evaluate individually using De Casteljau's algorithm.
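A minimal Python sketch of this coordinate-wise evaluation (the helper name `bezier_point_3d` is illustrative; the inner routine is the scalar recurrence):

```python
def de_casteljau(t, coefs):
    # scalar De Casteljau recurrence
    beta = list(coefs)
    for j in range(1, len(beta)):
        for k in range(len(beta) - j):
            beta[k] = beta[k] * (1 - t) + beta[k + 1] * t
    return beta[0]

def bezier_point_3d(t, control_points):
    """Evaluate a 3-D Bezier curve as three independent scalar problems."""
    xs, ys, zs = zip(*control_points)  # split each P_i into x_i, y_i, z_i
    return (de_casteljau(t, xs), de_casteljau(t, ys), de_casteljau(t, zs))
```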
Example
We want to evaluate the Bernstein polynomial of degree 2 with the Bernstein coefficients
$\beta _{0}^{(0)}=\beta _{0}$
$\beta _{1}^{(0)}=\beta _{1}$
$\beta _{2}^{(0)}=\beta _{2}$
at the point t0.
We start the recursion with
$\beta _{0}^{(1)}=\beta _{0}^{(0)}(1-t_{0})+\beta _{1}^{(0)}t_{0}=\beta _{0}(1-t_{0})+\beta _{1}t_{0}$
$\beta _{1}^{(1)}=\beta _{1}^{(0)}(1-t_{0})+\beta _{2}^{(0)}t_{0}=\beta _{1}(1-t_{0})+\beta _{2}t_{0}$
and with the second iteration the recursion stops with
${\begin{aligned}\beta _{0}^{(2)}&=\beta _{0}^{(1)}(1-t_{0})+\beta _{1}^{(1)}t_{0}\\\ &=\beta _{0}(1-t_{0})(1-t_{0})+\beta _{1}t_{0}(1-t_{0})+\beta _{1}(1-t_{0})t_{0}+\beta _{2}t_{0}t_{0}\\\ &=\beta _{0}(1-t_{0})^{2}+\beta _{1}2t_{0}(1-t_{0})+\beta _{2}t_{0}^{2}\end{aligned}}$
which is the expected Bernstein polynomial of degree 2.
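This identity can be checked numerically. The short sketch below (the coefficient values are arbitrary) compares the two-step recursion against the closed-form degree-2 Bernstein polynomial:

```python
def de_casteljau(t, coefs):
    # scalar De Casteljau recurrence, as in the Implementations section
    beta = list(coefs)
    for j in range(1, len(beta)):
        for k in range(len(beta) - j):
            beta[k] = beta[k] * (1 - t) + beta[k + 1] * t
    return beta[0]

b0, b1, b2, t0 = 1.0, -2.0, 4.0, 0.3
direct = b0 * (1 - t0) ** 2 + b1 * 2 * t0 * (1 - t0) + b2 * t0 ** 2
assert abs(de_casteljau(t0, [b0, b1, b2]) - direct) < 1e-12
```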
Implementations
Here are example implementations of De Casteljau's algorithm in various programming languages.
Haskell
deCasteljau :: Double -> [(Double, Double)] -> (Double, Double)
deCasteljau t [b] = b
deCasteljau t coefs = deCasteljau t reduced
where
reduced = zipWith (lerpP t) coefs (tail coefs)
lerpP t (x0, y0) (x1, y1) = (lerp t x0 x1, lerp t y0 y1)
lerp t a b = t * b + (1 - t) * a
Python
def de_casteljau(t, coefs):
beta = [c for c in coefs] # values in this list are overridden
n = len(beta)
for j in range(1, n):
for k in range(n - j):
beta[k] = beta[k] * (1 - t) + beta[k + 1] * t
return beta[0]
JavaScript
The following function applies De Casteljau's algorithm to an array of points, resolving the final midpoint with the additional properties in and out (for the midpoint's "in" and "out" tangents, respectively).
function deCasteljau(points, position = 0.5){
let a, b, midpoints = [];
while(points.length > 1){
const num = points.length - 1;
for(let i = 0; i < num; ++i){
a = points[i];
b = points[i+1];
midpoints.push([
a[0] + ((b[0] - a[0]) * position),
a[1] + ((b[1] - a[1]) * position),
]);
}
points = midpoints;
midpoints = [];
}
return Object.assign(points[0], {in: a, out: b});
}
The following example calls this function with the green points below, exactly halfway along the curve. The resulting coordinates should equal $(192,32)$, or the position of the centremost red point.
{
/* Definition of deCasteljau() function omitted for brevity */
const nodes = window.document.querySelectorAll("circle.n0-point");
const points = Array.from(nodes).map(({cx, cy}) => [cx.baseVal.value, cy.baseVal.value]);
deCasteljau(points); // Result: [192, 32]
}
See also
• Bézier curves
• De Boor's algorithm
• Horner scheme to evaluate polynomials in monomial form
• Clenshaw algorithm to evaluate polynomials in Chebyshev form
References
• Farin, Gerald & Hansford, Dianne (2000). The Essentials of CAGD. Natic, MA: A K Peters, Ltd. ISBN 1-56881-123-3
External links
• Piecewise linear approximation of Bézier curves – description of De Casteljau's algorithm, including a criterion to determine when to stop the recursion
• Bezier Curves and Picasso — Description and illustration of De Casteljau's algorithm applied to cubic Bézier curves.
• de Casteljau's algorithm - Implementation help and interactive demonstration of the algorithm.
\begin{document}
\title{Relative Density and Exact Recovery in Heterogeneous Stochastic Block Models }
\begin{abstract} The Stochastic Block Model (SBM) is a widely used random graph model for networks with communities. Despite the recent burst of interest in recovering communities in the SBM from statistical and computational points of view, there are still gaps in understanding the fundamental information theoretic and computational limits of recovery.
In this paper, we consider the SBM in its full generality, where there is no restriction on the number and sizes of communities or how they grow with the number of nodes, as well as on the connection probabilities
inside or across communities.
This generality allows us to move past the artifacts of the homogeneous SBM, and understand the right parameters (such as the relative densities of communities) that define the various recovery thresholds. We outline the implications of our generalizations via a set of illustrative examples. For instance, $\log n$ is considered to be the standard lower bound on the cluster size for exact recovery via convex methods, for the homogeneous SBM. We show that it is possible, in the right circumstances (when sizes are spread and the smaller the cluster, the denser), to recover very small clusters (up to $\sqrt{\log n}$ size), if there are just a few of them (at most polylogarithmic in $n$).
\end{abstract}
\section{Introduction}
A fundamental problem in network science and machine learning is to discover structures in large, complex real-world networks (e.g., biological, social, or information networks). Communities are one of the most basic structures to look for, and are useful in many ways including simplifying network analysis. Community or cluster detection also arises in machine learning and underlies many decision tasks, as a basic step that uses pairwise relations between data points in order to understand more global structures in the data. Applications of community detection are numerous, and include recommendation systems \cite{xu2014jointly}, image segmentation \cite{shi2000normalized, meila2001random}, learning gene network structures in bioinformatics, e.g., in protein detection \cite{CY:06} and population genetics \cite{JTZ:04}.
In spite of a long history of heuristic algorithms (see, e.g., \cite{leskovec2010empirical} for an empirical overview), as well as strong research interest in recent years on the theoretical side as reviewed in the next section, there are still gaps in understanding the fundamental information theoretic limits of recoverability (i.e., if there is enough information to reveal the communities) and computational tractability (if there are efficient algorithms to recover them). This is particularly true in the case of sparse graphs (that test the limits of recoverability),
graphs with heterogeneous communities (communities varying greatly in size and connectivity), graphs with a number of communities that grows with the number of nodes, and partially observed graphs (with various observation models).
In this paper, we
study recovery regimes and algorithms for community detection in sparse graphs generated under a heterogeneous stochastic block model, where there is no restriction on the number and sizes of communities or how they grow with the number of nodes, as well as the connection probabilities inside or across communities.
We propose key network descriptors, called relative densities (defined in \eqref{def:relative_density}), that govern the exact recoverability of the communities, and determine ranges of these parameters that lead to various regimes of difficulty of recovery.
The implications of our generalizations are outlined in Section \ref{sec:this-paper} where illustrative examples provide insight into our results in Section \ref{sec:main-results}.
\subsection{The Heterogenous Stochastic Block Model and Exact Recovery} \label{sec:GSBM-def}
The stochastic block model (SBM), first introduced and studied in mathematical sociology by Holland, Laskey and Leinhardt in 1983 \cite{holland1983stochastic}, can be described as follows. Start with $n$ vertices and partition the vertex set $\{1,2,\ldots,n\}$ into $r$ groups $V_1, V_2,\ldots, V_r\,$, of sizes $n_1, n_2,\ldots, n_r$ respectively.
Then, we draw an edge between two nodes with a probability depending on which communities they belong to; i.e., the probability of an edge between vertices $i$ and $j$ (denoted by $i \sim j$) is given by \begin{align} \label{eq:rand-graph-dist} \operatorname{\mathbb{P}}(i \sim j) = \begin{cases} p_k & \text{if there is a $k \in \{1,2,\ldots,r\}$ such that $i,j \in V_k$} \\ q & \text{otherwise} \end{cases} \end{align} where we assume $q < \min_k p_k$ in order for the idea of communities to make sense. Such inter-cluster edges are also known as ``ambient" edges. Notice that each of the $V_k$'s is endowed with an {Erd\H{o}s-R\'enyi~} graph structure $\mathcal{G}(n_k, p_k)$ (within each community $V_k\,$, the probability of an edge is given by the local probability $p_k$). This defines a distribution over random graphs known as the stochastic block model. To contrast our study of this general setting with previous works where homogenous SBM is considered (where the sizes and probabilities associated to the communities are equal, e.g., in \cite{chen2014statistical}), or other special cases of SBMs are studied (e.g., when the number of communities is fixed or grows slowly with the number of nodes such as in \cite{abbe2015community}), we sometimes refer to the above model as the {\em heterogenous stochastic block model}.
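For concreteness, the sampling procedure in \eqref{eq:rand-graph-dist} can be sketched in a few lines; this is our own illustrative snippet (names and interface are ours), not part of the model definition or of the algorithms analyzed in this paper:

```python
import random

def sample_hetero_sbm(sizes, ps, q, seed=0):
    """Sample an adjacency matrix from the heterogenous SBM.

    sizes[k], ps[k]: size and intra-cluster edge probability of community V_k;
    q: inter-cluster ("ambient") edge probability, assumed q < min(ps).
    """
    rng = random.Random(seed)
    # labels[i] = index k of the community V_k containing vertex i
    labels = [k for k, m in enumerate(sizes) for _ in range(m)]
    n = len(labels)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # edge probability p_k inside a community, q across communities
            p = ps[labels[i]] if labels[i] == labels[j] else q
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj, labels
```

Within each community this reduces to an {Erd\H{o}s-R\'enyi~} graph $\mathcal{G}(n_k, p_k)$, as noted above.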
The community detection problem studied in this paper is then stated simply: given the adjacency matrix of a graph generated by the heterogenous stochastic block model, can we recover the labels of {\em all} vertices, with high probability, whether in polynomial time or not? Note that recovery with high probability is the best one can hope for, as--with tiny probability--the model can generate graphs where the partition is unrecoverable, e.g., the complete graph. Whether this problem is solvable depends on the parameters involved, and our results characterize parts of the model space for which such recovery is possible. Moreover, based on the computational complexity of the proposed algorithm, we can be in different subregimes: hard (recovery is possible \emph{theoretically}, but not necessarily \emph{efficiently}), easy (recovery can be done efficiently; i.e., there is a polynomial-time algorithm), and simple (recovery can be done by simple counting and thresholding procedures), as explained in the next section.
In the next subsection, we mention other natural questions in community detection and review existing results in the literature. We summarize our new results in section \ref{sec:this-paper}.
\subsection{Related Work}
What we can infer about the community structure from a single draw of the random graph varies based on the regime of model parameters. Often, the following scenarios are considered.
\begin{enumerate}[1.] \item {\em Exact Recovery (Strong Consistency).} In this regime it is possible to recover all labels, with high probability. That is, an algorithm has been proved to do so, whether in polynomial time or not.
Notice that we need the nodes in all communities to be connected for the exact recovery to be possible.
\item {\em Almost Exact Recovery (Weak Consistency).} A total of $n - o(n)$ labels are
recoverable, but no more. For example, consider the case where the graph has multiple components, all but one of which are tiny; the tiny components cannot be correctly classified.
\item {\em Partial Recovery or Approximation Regime.}
Only a {\em fraction} of vertices, i.e. $(1-\epsilon)n$ for some $\epsilon >0\,$, can be guaranteed to be recovered correctly.
For example, in the case
of two symmetric communities, this fraction should be greater than $1/2$ (which
one can obtain just by random guessing).
\item {\em Detectability.} One may construct a partition of the
graph which is correlated\footnote{In this context, this means doing
better than guessing.} with the true
partition, but one cannot \emph{guarantee} any kind of quantitative improvement
over random guessing. This happens in very sparse
regimes when some $p_k$'s and $q$ are of the same, small, order; e.g. see \cite{mossel2014reconstruction}.
\end{enumerate}
It may appear at first that the differences between exact recovery with strong and weak consistencies (the first two regimes above) are small; to illustrate the
differences, consider the situation when one has a very large (sized
$n$) social network with a particular set of nodes of interest, which
may also be large but $o(n)$.
An exact
recovery algorithm with strong consistency guarantees that, with high probability, \emph{all} of
the nodes of interest will be correctly labeled.
An exact recovery algorithm with weak consistency
can guarantee that \emph{any} of the nodes will
be correctly labeled with high probability, but may yield absolutely
no guarantees about the entire set (in fact, depending on the set
size, the probability that some nodes will be mislabeled may be
$O(1)$).
In other words, in such a setting, while the probability of correct recovery for a {\em fixed} set of $n-o(n)$ vertices may be zero, the probability of correct recovery for {\em some} set of $n-o(n)$ vertices is close to one.
\paragraph{Thresholds.} Recently, there has been significant interest in determining \emph{sharp thresholds} (or phase transitions) for the various parameter regimes. Currently, the best understood case is the SBM with only two communities of equal size (which we refer to as binary SBM hereafter) for which all of the four regimes above have been identified and characterized in a series of recent papers \cite{coja2010graph,mossel2014reconstruction, mossel2013proof, massoulie_proof,mossel14belief, mossel2014consistency, abbe2014exact, hajek2014achieving}. Moreover, tractable algorithms have been proposed and they work down to the information-theoretic threshold; i.e., information-theoretic and computational thresholds coincide for the case of the binary SBM.
Aside from this case,
Abbe and Sandon \cite{abbe2015community} proved the existence of an information-theoretic threshold for exact recovery in the case when the number $r$ of communities is fixed and all community sizes are $O(n)$ (while the connectivity probabilities $p_k,q$ are $O(\log n/n)$).
In particular, in \cite{abbe2015community}, they provided an almost linear-time algorithm using the knowledge of model parameters that works down to this information-theoretic threshold. Such knowledge is shown to be unnecessary in a fully agnostic algorithm developed in \cite{abbe2015recovering}.
Outside of the settings described above, results tend to be inconclusive: not all the regimes are well understood, and the bounds incorporate large or unknown constants.
Although we do not aim to give an exhaustive review of the existing literature, we will mention the main state-of-the-art results for the regimes identified above.
\begin{enumerate}[\bf 1.] \item \textbf{Exact Recovery (Strong Consistency).} Many partial results are available for general SBM,
yielding upper bounds on the thresholds for efficient regimes, or
lower bounds for exact recoverability; for example, Chen and Xu \cite{chen2014statistical}, which served as an inspiration for this paper.
The results in \cite{chen2014statistical} cover the regime when all clusters are
equivalent, that is, all $p_k = p$ and there are $r$ clusters, each
of size $K := n/r\,$; $r$ and $p$ are allowed to vary with
$n\,$. Depending on $K$, $p$, $q$, and $n$, they characterize the conditions under which
1) exact recovery is {\em impossible},
2) exact recovery is possible \emph{theoretically}, but not necessarily \emph{efficiently}, e.g., by the Maximum Likelihood Estimator,
3) exact recovery can be done efficiently, e.g., by a semidefinite programming relaxation of the ML estimator,
4) exact recovery can be done by a simple counting and thresholding procedure.
The bounds for these regimes in \cite{chen2014statistical} are not shown to be sharp thresholds, but they work down to the limit of cluster connectivity for $p$ and $K$: with $K = O(n^{\beta})$ for some constant $0 \leq \beta \leq 1$, this yields $p = O(\log n/K)$ (lowering $p$ further results in a disconnected graph, at which point strong recovery becomes impossible). The downside of \cite{chen2014statistical} lies in the very strong assumption of equivalent clusters. The difficulty of such an assumption in the heterogenous SBM will be discussed in detail in Section \ref{sec:examples}.
\item \textbf{Almost Exact Recovery (Weak Consistency).}
This case has not been extensively treated in the literature.
Yun and Proutiere \cite{yun2014community} studied the case when there is a finite number of clusters, all of size $O(n)$, and such that all intra-cluster probabilities $p_k$ are equal to $p$. They find a characterizing condition for weakly consistent recovery in terms of $p$, $q$, and $n$; this condition was rediscovered in the case of the binary SBM by Mossel, Neeman and Sly \cite{mossel2014consistency}; for this latter case it can be stated as
\begin{equation} \begin{aligned}\label{eqn:weak_recovery}
n \frac{(p-q)^2}{p + q} \rightarrow
\infty~.
\end{aligned} \end{equation}
\cite{yun2014community} is the first to give a lower bound
on the threshold. In their studied case this lower bound coincides with the
upper bound, which they show
by providing a spectral algorithm (based on an algorithm by
Coja-Oghlan \cite{coja2010graph}) with a simpler analysis.
Prior to their results, other methods and algorithms had been used to establish weakly consistent recovery; although some of these algorithms are even simpler (e.g., the spectral algorithm of Rohe, Chatterjee and Yu \cite{rohe2011spectral}), they generally do not come close to the threshold, requiring $p,q$ to be almost $O(1)$ (up to logarithmic factors).
Recently, Zhang and Zhou \cite{zhangminimax15} obtained a similar result to (\ref{eqn:weak_recovery}) under approximately same-sized communities, with the smallest intra-cluster connectivity parameter $p$ and the largest inter-cluster connectivity parameter $q\,$, by adopting a minimax approach. They show that weak recovery is possible if
\begin{equation*} \begin{aligned}
\frac{n(p-q)^2}{pK\log K}\to \infty,
\end{aligned} \end{equation*} and is impossible if
\begin{equation*} \begin{aligned}
\frac{n(p-q)^2}{pK}=O(1)
\end{aligned} \end{equation*} where $K$ is the number of clusters, which is allowed to grow. Later, \cite{gao2015achieving} proposed a computationally feasible algorithm that provably achieves the optimal misclassification proportion given above.
\item \textbf{Partial Recovery.}
Coja-Oghlan \cite{coja2010graph} extended the asymptotic analysis of SBM to bounded degree regimes and was the first to give \emph{partial recovery} results. For the binary SBM case, his conditions amount roughly
to the following: for $p=a/n$ and $q = b/n$ for some constants $a,b$, there exists
some large constant $C$ such that, if $(a-b)^2 \geq C (a+b)
\log(a+b)$, then partial recovery is possible, and the fraction of
recovered vertices is upper bounded by a function of $C$.
Following \cite{coja2010graph}, a series of works by \cite{decelle2011asymptotic, mossel2014reconstruction, massoulie_proof, mossel2013proof} established a sharp threshold for {\em detection} in binary SBM.
Decelle et al \cite{decelle2011asymptotic} conjectured a sharp threshold at $(a-b)^2=2(a+b)\,$, based on non-rigorous ideas from statistical physics. Later, \cite{mossel2014reconstruction} showed that below this threshold it is impossible to cluster, or even to estimate the model parameters from the graph. Finally, \cite{massoulie_proof,mossel2013proof} provided an algorithm which efficiently outputs a labeling that is correlated with the true community assignment when $(a-b)^2 >2(a+b)\,$. Mossel, Neeman and Sly \cite{mossel14belief} proposed an algorithm using a variant of belief propagation that is optimal in the sense that if $(a-b)^2> C(a + b)$ for some constant $C$ then the algorithm achieves the optimal fraction of nodes labeled correctly.
For the general SBM in the bounded average degree regime, recently, Guedon and Vershynin \cite{guedon2014community} analyzed a convex optimization based approach, and Le, Levina, and Vershynin \cite{le2015sparse} analyzed a simple spectral algorithm, achieving similar upper bounds on the threshold of partial recovery. The proofs make use of the Grothendieck inequality. \cite{guedon2014community} offers a convex optimization approach for obtaining a correct labeling of a $(1-\epsilon)$ fraction of the vertices for arbitrarily small $\epsilon$.
The particular formulation of the convex problem is not crucial and can be changed without significant change to the bound itself. However, it is unclear how their results evolve when the networks have unbounded average degrees.
Le, Levina, and Vershynin \cite{le2015sparse} proposed a spectral method with degree correction when the average degree regime of the network is bounded. As a result of the degree correction, the graph Laplacian concentrates (which otherwise does not, in the bounded average degree regime) and hence the leading eigenvectors of the Laplacian can be used to approximately recover the labels. A similar degree correction trick was adopted in \cite{qin2013regularized}.
It should be noted that in \cite{rohe2011spectral}, the authors used the fact that although the Laplacian does not concentrate, the square of the Laplacian does, and obtained good partial solutions in a much denser regime (smallest degree being $O(n/\log n)$).
\item \textbf{Detectability/Impossibility.}
{ As mentioned above, for the binary SBM with $p=a/n$ and $q=b/n\,$, Decelle et al \cite{decelle2011asymptotic} conjectured that if $(a-b)^2 < 2(a + b)$
one cannot infer the community assignments with better than 50\% accuracy which can be achieved by random guessing. The conjecture was later verified by \cite{mossel2014reconstruction} as pointed out above.
For the symmetric SBM with $r$ equivalent communities (of the same size and connection probabilities), the strongly empirically-supported conjecture of Decelle et al \cite{decelle2011asymptotic} states that
{when $(a-b)^2<c(r) (a+(r-1)b)$ for some $c(r)\leq r$},
the model is indistinguishable from a general {Erd\H{o}s-R\'enyi~} model; e.g. see Conjecture 7.2 in \cite{mossel2014reconstruction} for details. } \end{enumerate}
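Two of the scalar criteria above are easy to evaluate numerically: the weak-consistency statistic in \eqref{eqn:weak_recovery} and the binary-SBM detection threshold $(a-b)^2 > 2(a+b)$ of Decelle et al. The following sketch is our own, for intuition only:

```python
import math

def weak_recovery_statistic(n, p, q):
    # n (p - q)^2 / (p + q); weak consistency requires this to diverge
    # as n grows (Yun-Proutiere / Mossel-Neeman-Sly condition).
    return n * (p - q) ** 2 / (p + q)

def binary_sbm_detectable(a, b):
    # Sharp detection threshold for the binary SBM with p = a/n, q = b/n:
    # detection is possible iff (a - b)^2 > 2 (a + b).
    return (a - b) ** 2 > 2 * (a + b)

# Along p = 2 log n / n, q = log n / n the weak-recovery statistic equals
# (log n)/3, which diverges with n, so weak consistency is achievable there.
stats = [weak_recovery_statistic(n, 2 * math.log(n) / n, math.log(n) / n)
         for n in (10**3, 10**6, 10**9)]
```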
As mentioned in the beginning of this section, it has been proven that there is no gap between the information-theoretic and computational thresholds for the binary SBM. On the other hand, while the information-theoretic threshold for partial recovery of more than 2 communities is still unknown, \cite{mossel2014reconstruction} conjectured that a gap exists for partial recovery with more than 4 communities. Similarly, sharp thresholds for exact recovery of multiple communities are still unknown (see \cite{abbe2015community}).
In addition to the papers mentioned above, the interested reader will find good surveys of current literature in \cite{chen2014statistical,abbe2015community, amini2014semidefinite, mossel14belief, mossel2014consistency}.
\subsection{This paper} \label{sec:this-paper}
In this paper we study the general setup presented in Section \ref{sec:GSBM-def}, where the communities are not constrained to have the same size and connection probabilities, and where $r$ is allowed to grow with $n$. Our work is {\em concerned with exact recovery} and is based on \cite{chen2014statistical}. We provide the following: \begin{itemize} \item An information-theoretic lower bound, describing an impossibility regime (Theorem \ref{thm:impossibility}), \item An upper bound, describing a potentially ``hard'' regime in which recovery is always possible,
though not necessarily in an efficient way (Theorem \ref{thm:hard-recovery}). Here we assume the sizes of the communities $n_k\,$, for $k=1,\ldots,r\,$, are known.
\item An upper bound for efficient recovery via a convex optimization algorithm similar to the one in \cite{chen2014statistical}, describing an ``easy'' regime (Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2}). Here we assume the quantity $\sum_k n_k^2$ is known.
\item A bound characterizing a very simple and efficiently solvable
thresholding algorithm, if model parameters $p_k,q$ are known (Theorem \ref{thm:simple-recovery}).
\item Extensions of the above bounds to the case of partial
observations, i.e., when each entry of the adjacency matrix is observed independently with
some probability $\gamma$ and the results recorded.
\end{itemize}
Our setup is general and allows for any mix of clusters of all magnitudes and densities. We illustrate the importance of considering such a model, as opposed to using summary statistics such as $n_{\min}$ and $p_{\min}\,$, by some examples later in this section. This setup allowed us to identify the crucial quantities \begin{equation} \begin{aligned}\label{def:relative_density} \rho_k = n_k (p_k - q)\,,\quad \widetilde D(p_k,q) = \frac{(p_k-q)^2}{q(1-q)} \,,\quad \widetilde D(q,p_k) = \frac{(p_k-q)^2}{p_k(1-p_k)}, \end{aligned} \end{equation}
where $\rho_k$ is called the \emph{relative cluster density} for a cluster $k\,$, and $\widetilde D$ represents the Chi-square divergence between two Bernoulli random variables with the given probabilities. We elaborate on these quantities in the beginning of Section \ref{sec:main-results}. The bounds resulting from our inequalities bear resemblance to, and appear to be generalizations of McSherry's \cite{mcsherry2001spectral}, allowing for the different $n_k$'s and $p_k$'s. It is worth mentioning that we have explored the possibility of allowing for a whole matrix of inter- and intra-cluster connectivity probabilities (in other words, we looked at the case when instead of a uniform probability $q$ of inter-cluster connection, we have different connectivity probabilities $q_{kl}$ for each pair of clusters $(k,l)$, for $k \neq l$.) The calculations can be followed through but at the cost of added notation complexity, with no clear shortcut, which we decided not to pursue.
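As a small illustrative sketch (ours), the key quantities in \eqref{def:relative_density} can be computed directly; note that both Chi-square divergences are instances of one function with the arguments swapped:

```python
def relative_density(n_k, p_k, q):
    # rho_k = n_k (p_k - q): the relative cluster density of community k.
    return n_k * (p_k - q)

def chi_square_divergence(p, q):
    # \tilde D(p, q) = (p - q)^2 / (q (1 - q)): the Chi-square divergence
    # between Bernoulli(p) and Bernoulli(q).
    # \tilde D(p_k, q) is chi_square_divergence(p_k, q);
    # \tilde D(q, p_k) is chi_square_divergence(q, p_k).
    return (p - q) ** 2 / (q * (1 - q))
```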
Our results cover a wider set of cases than present in the existing literature. We give illustrative examples in Section \ref{sec:examples} to show that the setup we consider and the results we obtain represent a clear improvement over previous work.
The examples emphasize how Theorems \ref{thm:convex_recovery}, \ref{thm:convex_recovery2} and \ref{thm:hard-recovery} (given in Section \ref{sec:main-results} with proofs and more details given in Appendices \ref{app:proof-convex}, \ref{app:proof-rec}), complement each other, and how they compare and contrast with existing literature. More details and justification for the claims made in the examples are given in Appendix \ref{app:verification}.
\subsection{Examples} \label{sec:examples} In the following, a {\em configuration} is a list of cluster sizes $n_k$, their connection probabilities $p_k$, and the inter-cluster connection probability $q\,$. A triple $(m,p,k)$ indicates $k$ clusters of size $m$ each, with connectivity parameter $p\,$. We do not worry about whether $m$ and $k$ are always integers; if they are not, one can always round up or down as needed so that the total number of vertices is $n$, without changing the asymptotics. Moreover, when the $O(\;)$ notation is used, we mean that appropriate constants can be determined.
\newcommand{$\checkmark$}{$\checkmark$} \newcommand{$\times$}{$\times$} \begin{table}[htbp] \begin{center} \small
\begin{tabular}{l | l | ccc}
& & convex recovery & convex recovery & recoverability \\
& importance & by Thm.~\ref{thm:convex_recovery}& by Thm.~\ref{thm:convex_recovery2} & by Thm.~\ref{thm:hard-recovery} \\[3pt] \hline Ex.~\ref{ex:we-can1} &counter-example for $(p_{\min},n_{\min})$ & $\times$& $\times$&$\checkmark$ \\ Ex.~\ref{ex:we-can2} &counter-example for $(p_{\min},n_{\min})$ &$\checkmark$&$\checkmark$&$\checkmark$ \\ Ex.~\ref{ex:cvx-thm1-sqrtlogn} &$n_{\min}=\sqrt{\log n}$ &$\checkmark$&$\times$&$\times$ \\ Ex.~\ref{ex:cvx-thm2-slogn} &$n_{\max}=O(n)$, many small clusters
&$\checkmark$
&$\checkmark$
&$\checkmark$ \\ Ex.~\ref{ex:cvx-thm2-logn} &$n_{\min}=O(\log n)$, spread in sizes &$\times$&$\checkmark$&$\checkmark$ \\
Ex.~\ref{ex:hard} & small $p_{\min}-q\,$, all $p_k,q$ are $O(1)$
&$\checkmark$
&$\checkmark$
&$\checkmark$ \\ \end{tabular} \end{center} \caption{A summary of examples in Section \ref{sec:this-paper}. Each row gives the important aspect of the corresponding example as well as whether, under appropriate regimes of parameters, it would satisfy the conditions of the theorems proved in this paper.} \label{tab:examples} \end{table}
\subsubsection{Counter-examples for the $(p_{\min},n_{\min})$ heuristic} \label{sec:counter-ex}
In a heterogenous setup, one might think one can plug $(p_{\min},n_{\min})$ into the results for the homogenous SBM to identify recoverability regimes. While this simplistic approach will indeed yield upper bounds on some of the ``positive'' thresholds (i.e., if you can solve it for the simplistic case, you can also solve it for the more complex one), \emph{it can completely fail to correctly identify solvable subregimes}. The first two examples show why such a heuristic, used in generalization attempts in the literature, is not useful enough.
\begin{example} \label{ex:we-can1} Suppose we have two clusters of sizes $n_1 = n -\sqrt{n}$, $n_2 = \sqrt{n}$,
with $p_1 = n^{-2/3}$ and $p_2 = 1/\log n$ while $q =
n^{-2/3-0.01}\,$. As we will see, the bounds we obtain here in
Theorem \ref{thm:hard-recovery} make it clear that this case is theoretically
solvable (in the \emph{hard} regime). By contrast, Theorem 3.1 in \cite{cai2014robust} (specialized for the case of no outliers), requiring \begin{equation}\label{eq:CL14-pmin-nmin} n_{\min}^2(p_{\min}-q)^2 \gtrsim (\sqrt{p_{\min}n_{\min}} + \sqrt{nq})^2\log n \,, \end{equation} would fail and provide no guarantee for recoverability.
\end{example}
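A back-of-the-envelope numerical check (ours; hidden constants set to 1, $n$ fixed to a large finite value) confirms that the condition \eqref{eq:CL14-pmin-nmin} fails by many orders of magnitude for this configuration:

```python
import math

n = 10**8
n_min = math.sqrt(n)        # n_2 = sqrt(n), the small cluster
p_min = n ** (-2 / 3)       # p_1, the smaller intra-cluster probability
q = n ** (-2 / 3 - 0.01)

# Condition (eq:CL14-pmin-nmin), with constants set to 1:
#   n_min^2 (p_min - q)^2  >=  (sqrt(p_min n_min) + sqrt(n q))^2 log n
lhs = n_min ** 2 * (p_min - q) ** 2
rhs = (math.sqrt(p_min * n_min) + math.sqrt(n * q)) ** 2 * math.log(n)
# lhs is tiny while rhs is in the thousands: the (p_min, n_min) heuristic
# gives no recovery guarantee here.
```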
\begin{example} \label{ex:we-can2} Consider a configuration as \begin{equation*} \begin{aligned} \tri{n - n^{2/3}}{n^{-1/3+ \epsilon}}{1} \;\;,\;\; \tri{\sqrt{n}}{O(\tfrac{1}{\log n})}{n^{1/6}} \;\;,\;\; q = n^{-2/3+ 3 \epsilon}, \end{aligned} \end{equation*} where $\epsilon$ is some small quantity, e.g. $\epsilon = 0.1\,$. Either of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} verify that this case is in the \emph{easy} regime, and the partition can be recovered efficiently by solving a convex program, with high probability. By contrast, using the $p_{\min} = n^{-1/3+\epsilon}$ and $n_{\min}= \sqrt{n}$ heuristic, neither the condition of Theorem 3.1 in \cite{cai2014robust} (given in \eqref{eq:CL14-pmin-nmin}) nor the condition of Theorem 2.5 in \cite{chen2014statistical} is fulfilled, and thus we have no means of reaching the same conclusion based on the $(p_{\min},n_{\min})$ heuristic. \end{example}
\subsubsection{Cluster sizes: small, large, and in-between}
The next three examples attempt to provide an idea of how wide the spread of cluster sizes can be, as characterized by our results.
Most algorithms for clustering the SBM run into the problem of small clusters \cite{chen2012clustering,boppana1987eigenvalues,mcsherry2001spectral}, often because the models employed do not allow for enough parameter variation to identify the key quantities involved.
{The bounds we obtain in this paper indicate that the ``correct'' parameters are not the pairs $(p_k,n_k)$, but rather the relative cluster densities $\rho_k = (p_k-q) n_k$ (which are related to the ``effective densities'' appearing in \cite{VinayakOH14}). This allows us to significantly vary the sizes of the clusters, and still be able to obtain exact recovery, as long as the relative densities are large enough. }
\begin{example}[smallest cluster size for convex recovery] \label{ex:cvx-thm1-sqrtlogn}
Consider a configuration as \begin{equation*} \begin{aligned} \tri{\sqrt{\log n}}{O(1)}{m} \;\;,\;\; \tri{n_2 }{O(\tfrac{\log n}{\sqrt{n}})}{\sqrt{n} } \;\;,\;\; q = O(\tfrac{\log n}{n}), \end{aligned} \end{equation*} where $n_2 = \sqrt{n} - m \sqrt{\log n / n}$ to ensure a total of $n$ vertices. Here, we assume $m\leq n/(2\sqrt{\log n})$ which implies $n_2 \geq \sqrt{n}/2\,$.
It is straightforward to verify the conditions of Theorem \ref{thm:convex_recovery}. Notice that, in verifying the first condition for the second group of clusters (with $p_2 = O(\tfrac{\log n}{\sqrt{n}})$), we need $p_2n_2 \gtrsim \log n_2$, which is satisfied when $m$ is a constant.
There are two important things to note in this example. First, to our knowledge, \emph{this is the first example in the literature for which SDP-based recovery works and allows the recovery of (a few) clusters of size smaller than $\log n$.} Previously, $\log n$ was considered to be the standard bound on the cluster size for exact recovery, as illustrated by Theorem 2.5 of \cite{chen2014statistical} in the case of equivalent clusters. We have thus shown that it is possible, in the right circumstances (when sizes are spread and the smaller the cluster, the denser), to recover very small clusters (up to $\sqrt{\log n}$ size), \emph{if there are just a few of them (at most polylogarithmic in $n$).} The significant improvement we made in the bound on the size of the smallest cluster is due to the fact that we were able to perform a closer analysis of the SDP machinery (which we provide in the proof of Theorem \ref{thm:convex_recovery}). For more details, see Section \ref{app:proof-convex_recovery}.
Secondly, the condition of Theorem \ref{thm:hard-recovery} is {\em not} satisfied. This is not an inconsistency (as Theorem \ref{thm:hard-recovery} gives only an upper bound for the threshold), but indicates the limitation of this theorem in characterizing all recoverable cases. \end{example}
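The key numerical condition in this example, $p_2 n_2 \gtrsim \log n_2$ for constant $m$, is easy to sanity-check; the snippet below (ours, with all hidden constants set to 1) does so at a large finite $n$:

```python
import math

n = 10**8
m = 5                                                # constant number of sqrt(log n)-sized clusters
n2 = math.sqrt(n) - m * math.sqrt(math.log(n) / n)   # size of the larger clusters
p2 = math.log(n) / math.sqrt(n)                      # p_2 = O(log n / sqrt(n)), constant 1

# For constant m, p_2 n_2 ~ log n dominates log n_2 ~ (log n)/2.
lhs = p2 * n2
rhs = math.log(n2)
```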
\paragraph{Spreading the sizes.} The previous example allows us to go lower than the standard $\log n$ bound on the cluster size for exact recovery; however, we can solve only if the number of very small clusters is finite.
On the other hand, Theorem \ref{thm:convex_recovery2} provides us with the option of having many small clusters but requires the smallest cluster to be of size $O(\log n)\,$. Since the maximum cluster size is $O(n)$, one may ask what kind of a spread can be achieved with the help of Theorem \ref{thm:convex_recovery2}.
In Example \ref{ex:cvx-thm2-slogn}, we assume a cluster of size $O(n)$ and examine how small $n_{\min}$ can be for Theorem \ref{thm:convex_recovery2} to guarantee exact recovery by the convex program. Similarly, in Example \ref{ex:cvx-thm2-logn}, we fix $n_{\min}=\log n$ and examine how large $n_{\max}$ can be.
\begin{example} \label{ex:cvx-thm2-slogn} Consider a configuration where small clusters are dense and we have a big cluster, \[ \tri{\tfrac{1}{2}n^\epsilon}{O(1)}{n^{1-\epsilon}} \;\;,\;\; \tri{\tfrac{1}{2}n}{n^{-\alpha}\log n}{1} \;\;,\;\; q = O(n^{-\beta}\log n), \]
with $0<\epsilon<1$ and $0<\alpha< \beta<1$.
Then the conditions of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} both require that
\begin{align} \label{eq-condn-ex:cvx-thm2-slogn}
\tfrac{1}{2}(1-\alpha) <\epsilon<2(1-\alpha) \quad,\quad \epsilon> 2\alpha-\beta \end{align}
and are depicted in Figure \ref{fig:spread-size}. Since we have not specified the constants in our results, we only consider strict inequalities.
\begin{figure}
\caption{The space of parameters in \eqref{eq-condn-ex:cvx-thm2-slogn}. The face defined by $\beta=\alpha$ is shown with dotted edges. The three gray faces correspond to $\beta=1\,$, $\alpha=0$ and $\epsilon=1\,$. The green plane (corresponding to the last condition in \eqref{eq-condn-ex:cvx-thm2-slogn}) comes from controlling the intra-cluster interactions uniformly (see \eqref{eq:Bkk-bound-before} and \eqref{eq:Bkk-bound}) which might be only an artifact of our proof and can possibly be improved. }
\label{fig:spread-size}
\end{figure}
Notice that the small clusters are as dense as can be, but the large one is not necessarily very dense. By picking $\epsilon$ to be just over $1/4$, we can make $\alpha$ just shy of $1/2$, and $\beta$ very close to $1$. As far as we can tell, there are no results in the literature surveyed that cover such a case, although the clever ``peeling'' strategy introduced in \cite{ailon2013breaking} would recover the largest cluster.
The strongest result in \cite{ailon2013breaking} that seems applicable here is Corollary 4 (which works for non-constant probabilities). The \cite{ailon2013breaking} algorithm works to recover a large cluster (larger than $O(\sqrt{n} \log^2n)$), subject to existence of a gap in the cluster sizes (roughly, there should be no cluster sizes between $O(\sqrt{n})$ and $O(\sqrt{n} \log^2n)$). Therefore, in this example, after a single iteration, the algorithm will stop, despite the continued existence of a gap, as there is no cluster with size above the gap. Hence the ``peeling'' strategy on this example would fail to recover all the clusters.
\end{example}
\begin{example} \label{ex:cvx-thm2-logn} Consider a configuration with many small dense clusters. We are interested in seeing how large the spread of cluster sizes can be while the convex recovery approach still works. As required by Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2}
and to control $\sigma_{\max}$ (defined in \eqref{def:sigmax}) the larger a cluster, the smaller its connectivity probability should be; therefore we choose the largest cluster at the threshold of connectivity (required for recovery). Consider the following cluster sizes and probabilities: \[ \tri{\log n}{O(1)}{\tfrac{n}{\log n}- m \sqrt{\tfrac{n}{\log n}}} \;\;,\;\; \tri{\sqrt{n\log n}}{O(\sqrt{\tfrac{\log n}{n}})}{m} \;\;,\;\; q = O(\tfrac{\log n}{n}), \] where $m$ is a constant. Again, we round up or down where necessary to make sure the sizes are integers and the total number of vertices is $n$. All the conditions of Theorem \ref{thm:convex_recovery2} are satisfied, hence we conclude that exact convex recovery is possible in this case.
Note that the last condition of Theorem \ref{thm:convex_recovery} is not satisfied since there are too many small clusters. Also note that alternate methods proposed in the literature surveyed would not be applicable; in particular, the gap condition in \cite{ailon2013breaking} is not satisfied for this case from the start. \end{example}
\subsubsection{Closeness of $p_{\min}$ and $q$} Finally, the following examples illustrate how small $p_{\min}-q$ can be while the recovery and the convex recovery algorithms, respectively, are still guaranteed to work. Note that the difference in $p_{\min} - q$ for the two types of recovery is noticeable, indicating that there is a significant difference between what we know to be recoverable and what we can recover efficiently by our convex method. We consider both dense graphs (where $p_{\min}$ is $O(1)$) and sparse ones.
\begin{example} \label{ex:hard} Consider a configuration where all of the probabilities are $O(1)$ and \begin{equation*} \begin{aligned} \tri{n_1}{p_{\min}}{1} \;\;,\;\; \tri{n_{\min}}{p_2}{1} \;\;,\;\; \tri{n_3}{p_3}{\tfrac{n - n_1-n_{\min}}{n_3}} \;\;,\;\; q = O(1), \end{aligned} \end{equation*} where $p_2-q$ and $p_3-q$ are $O(1)\,$. On the other hand, we assume $p_{\min}-q=f(n)$ is small. For recoverability by Theorem \ref{thm:hard-recovery}, we need $f(n) \gtrsim (\log n)/n_{\min}$ and $f^2(n) \gtrsim (\log n)/n_1\,$. Notice that, since $n\gtrsim n_1 \gtrsim n_{\min}\,$, we should have $f(n) \gtrsim \sqrt{{\log n}/{n}}\,$.
For the convex program to recover this configuration (by Theorem \ref{thm:convex_recovery} or \ref{thm:convex_recovery2}), we need $n_{\min}\gtrsim \sqrt{n}$ and $f^2(n) \gtrsim \max\{n/n_1^2\,,\, \log n/ n_{\min}\}\,$, while all the probabilities are $O(1)\,$.
\end{example}
For a similar configuration to Example \ref{ex:hard}, where the probabilities are not $O(1)\,$,
recoverability by Theorem \ref{thm:hard-recovery} requires $f(n) \gtrsim \max\{\sqrt{p_{\min}(\log n)/n}\,,\, n^{-c}\}$ for some appropriate $c>0\,$.
Note that if all the probabilities, as well as $p_{\min}-q\,$, are $O(1)$, then by Theorem \ref{thm:hard-recovery} all clusters down to a logarithmic size should be recoverable. However, the success of convex recovery is guaranteed by Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} when $n_{\min}\gtrsim \sqrt{n}\,$.
\section{Main Results} \label{sec:main-results} In this paper, we consider the heterogenous stochastic block model described in Section \ref{sec:GSBM-def}. Consider a partition of the $n$ nodes into $V_0,V_1,\ldots,V_r\,$, where $\abs{V_k} = n_k\,$, $k=0,1,\ldots,r\,$. Let $\bar n = \sum_{k=1}^r n_k$ and denote the number of isolated nodes by $n_0\,$; hence, $n_0+\bar n = n\,$. Ignoring $n_0\,$, we further define $n_{\min} = \min\{n_k:\; k=1,\ldots,r\}$ and $n_{\max} = \max\{n_k:\; k=1,\ldots,r\}\,$. The nodes in $V_0$ are isolated and the nodes in $V_k$ form the community $\mathcal{C}_k = V_k\times V_k\,$, for $k=1,\ldots, r\,$. The union of communities is denoted by $\mathcal{C} = \cup_{k=1}^r \mathcal{C}_k$ and $\mathcal{C}^c$ denotes the complement; i.e. $\mathcal{C}^c = \{(i,j):\; (i,j)\not\in\mathcal{C}_k \text{ for any } k = 1,\ldots, r,\text{ and } i,j = 1,\ldots,n\}$.
Denote by $\mathcal{Y}$ the set of admissible adjacency matrices according to a community assignment as above, i.e. \[ \mathcal{Y}:=\{Y\in\{0,1\}^{n\times n}:\; Y \text{ is a valid clustering matrix over the partition } V_0,V_1,\ldots, V_r \text{ where }\abs{V_k}=n_k \} \,. \] We will denote by $\mathbf{1}_C\in\mathbb{R}^{n\times n}$ a matrix which is 1 on $C\subset\{1,\ldots,n\}^2$ and zero elsewhere. $\log$ denotes the natural logarithm (base $e$), and the notation $\theta\gtrsim 1$ is equivalent to $\theta \geq O(1)\,$. A Bernoulli random variable with parameter $p$ is denoted by $\operatorname{Ber}(p)\,$, and a Binomial random variable with parameters $n$ and $p$ is denoted by $\operatorname{Bin}(n,p)\,$.
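As a concrete illustration of the definitions above (our own sketch; the function name and layout are not part of the paper), a valid clustering matrix over a partition $V_0,V_1,\ldots,V_r$ can be constructed as a block-diagonal matrix of all-ones blocks:

```python
import numpy as np

def clustering_matrix(sizes, n0=0):
    """Build Y = sum_k 1_{C_k}: block-diagonal matrix of all-ones blocks.

    `sizes` holds the community sizes n_1, ..., n_r;
    the first n0 nodes are isolated (zero rows/columns).
    """
    n = n0 + sum(sizes)
    Y = np.zeros((n, n), dtype=int)
    start = n0
    for nk in sizes:
        Y[start:start + nk, start:start + nk] = 1  # community C_k = V_k x V_k
        start += nk
    return Y

Y = clustering_matrix([2, 3], n0=1)  # n = 6, one isolated node
```

Such a matrix has rank $r$, $\sum_{i,j} Y_{ij} = \sum_k n_k^2$, and nuclear norm $\bar n = \sum_k n_k$ (its nonzero eigenvalues are exactly the $n_k$), which is the quantity used later in the convex program.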
Consider a distribution over random graphs with $V$ as their node set as defined in \eqref{eq:rand-graph-dist}. Each subset $V_k$ is endowed with an {Erd\H{o}s-R\'enyi~} graph structure $\mathcal{G}(n_k, p_k)$ for $k=1,\ldots,r\,$, and an edge is drawn between two nodes in different communities, independent of other edges, with probability $q\,$. We assume that $p_k\geq q$ for $k=1,\ldots,r\,$. The goal is to recover the underlying clustering matrix $Y^\star$ exactly given a single graph drawn from this distribution. We will need the following definitions: \begin{itemize} \item Define the {\em relative density of a community} as \begin{equation*} \begin{aligned} \rho_k=(p_k-q)n_k \end{aligned} \end{equation*} which gives $\sum_{k=1}^r \rho_k = \sum_{k=1}^r p_k n_k - qn\,$.
\item The Neyman Chi-square divergence (e.g., see \cite{cressie1984multinomial}) between two discrete random variables $\mu$ and $\pi$ (on the same support set of size $t$) is defined as \[ \fdiv{\chi_N^2}{\mu}{\pi} = \sum_{i=1}^t \frac{\mu_i^2}{\pi_i}-1 = \sum_{i=1}^t \frac{(\mu_i-\pi_i)^2}{\pi_i} \]
and is always bounded below by the KL divergence; due to $\log x \leq x-1\,$. In the case of two Bernoulli random variables $\operatorname{Ber}(p)$ and $\operatorname{Ber}(q)\,$, the Neyman Chi-square divergence is given by \begin{equation*} \widetilde D(p,q) := \frac{(p-q)^2}{q(1-q)} \end{equation*} and we have $\widetilde D(p,q)\geq D_{\mathrm{KL}}(p,q):=D_{\mathrm{KL}}(\operatorname{Ber}(p),\operatorname{Ber}(q))\,$; see \eqref{eqn:KL-ineq}. Moreover, for $q<p\,$, when both $p$ and $q/p$ are bounded away from $1\,$, we have \begin{align}\label{eq:chidivqp-approx-p} \widetilde D(q,p) = p \frac{(1-q/p)^2}{1-p} \approx p \,. \end{align} Chi-square divergence is an instance of a more general family of divergence functions called $f$-divergences or Ali-Silvey distances \cite{ali1966general}. This family also has KL-divergence, total variation distance, Hellinger distance and Chernoff distance as special cases. Moreover, the divergence used in \cite{abbe2015community} is an $f$-divergence.
\item Define the total variance $\sigma_k^2 = n_kp_k(1-p_k)$ over the $k$th community, and let $\sigma_0^2 = nq(1-q)\,$. Also, define \begin{align}\label{def:sigmax} \sigma_{\max}^2 = \max_{k=1,\ldots,r}\; \sigma_k^2 = \max_{k=1,\ldots,r}\; n_kp_k(1-p_k) \,. \end{align} \end{itemize}
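To make the divergence quantities above concrete, the following small numeric sketch (our own illustration, not part of the analysis) checks that the Neyman Chi-square divergence dominates the KL divergence and that the approximation \eqref{eq:chidivqp-approx-p} holds when $p$ and $q/p$ are bounded away from $1$:

```python
import math

def chi2_div(p, q):
    # Neyman Chi-square divergence between Ber(p) and Ber(q)
    return (p - q) ** 2 / (q * (1 - q))

def kl_div(p, q):
    # KL divergence D_KL(Ber(p) || Ber(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Chi-square dominates KL (a consequence of log x <= x - 1)
print(chi2_div(0.5, 0.2), kl_div(0.5, 0.2))
```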
\subsection{Convex Recovery} \label{sec:convex}
We consider a convex optimization program for recovering the underlying clustering matrix $Y^\star = \sum_{k=1}^r \mathbf{1}_{\mathcal{C}_k}$ and characterize the models that are exactly recoverable using this program. In the following, $\norm{\cdot}_\star$ denotes the matrix nuclear norm or trace norm, i.e., the sum of singular values of the matrix. The dual to the nuclear norm is the spectral norm, denoted by $\norm{\cdot}\,$.
\begin{equation} \begin{aligned} \label{proc:convex-recovery}
\fbox{ \begin{minipage}[c][11.5em][c]{0.42\textwidth}{ {
\bf Convex Recovery } \vskip.6em \begin{algorithmic} \STATE {\bf input:} $\sum_{k=1}^r n_k^2$ \\[.3em] \STATE {\bf output:} \begin{equation*} \begin{aligned} \begin{array}{lll} \hat Y = &\arg\underset{Y}{\max} & \sum A_{ij}Y_{ij} \\ &\mathrm{subject\; to} & \norm{Y}_\star\leq \norm{Y^\star}_\star=n \\ & & \sum_{i,j}Y_{ij}=\sum_{k}n_k^2 \\ & & 0\leq Y_{ij}\leq 1 \end{array} \end{aligned} \end{equation*} \end{algorithmic} }\end{minipage}}
\end{aligned} \end{equation}
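The program above can be handed to any off-the-shelf convex solver; as a solver-free illustration (our own sketch, with made-up parameters), the following numpy code checks that $Y^\star$ is feasible, i.e., $\norm{Y^\star}_\star = n$ and $\sum_{i,j} Y^\star_{ij} = \sum_k n_k^2$, and that on a well-separated planted instance $Y^\star$ beats a feasible perturbation in the objective:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes, q, p = [20, 20], 0.05, 0.9
n = sum(sizes)

# Planted clustering matrix Y* and one HSBM adjacency sample
Ystar = np.zeros((n, n)); s = 0
for nk in sizes:
    Ystar[s:s + nk, s:s + nk] = 1
    s += nk
P = np.where(Ystar == 1, p, q)            # edge probabilities
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric, zero diagonal

# Feasibility of Y*: nuclear norm equals n, entry sum equals sum_k n_k^2
assert np.isclose(np.linalg.norm(Ystar, 'nuc'), n)
assert Ystar.sum() == sum(nk**2 for nk in sizes)

# A competing feasible clustering matrix: swap one node across clusters
perm = np.arange(n); perm[0], perm[-1] = perm[-1], perm[0]
Yswap = Ystar[np.ix_(perm, perm)]
obj = lambda Y: (A * Y).sum()
print(obj(Ystar), obj(Yswap))
```

Since `Yswap` is also a valid clustering matrix with the same cluster sizes, it satisfies all three constraints; on this well-separated instance the planted $Y^\star$ attains the strictly larger objective.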
We prove two theorems giving conditions under which the above convex program outputs the true clustering matrix with high probability. While the theorems are similar in terms of the methodology used, they differ in terms of the conditions we must impose. As we will see, Theorem \ref{thm:convex_recovery} allows us to describe a regime in which \emph{tiny} communities of size $O(\sqrt{\log n})$ are recoverable (provided that they are very dense and that only a few tiny or small clusters exist; see Example \ref{ex:cvx-thm1-sqrtlogn}), while Theorem \ref{thm:convex_recovery2} covers a less restrictive regime in terms of cluster sizes, but allows us to recover clusters only down to size $O(\log n)\,$; see Example \ref{ex:cvx-thm2-logn}. The proofs for both theorems along with auxiliary lemmas are given in Appendix \ref{app:proof-convex}.
\begin{theorem}\label{thm:convex_recovery} Under the heterogenous stochastic block model,
the output of convex recovery program in \eqref{proc:convex-recovery} coincides with $Y^\star\,$ with high probability, provided that \begin{equation*} \begin{aligned} \rho_k^2 \gtrsim \sigma_k^2 \log n_k \;\;,\;\; \widetilde D(p_{\min},q) \gtrsim \tfrac{\log n_{\min}}{n_{\min}} \;\;,\;\; \rho_{\min}^2 \gtrsim \max\{\sigma_{\max}^2\,,\,nq(1-q)\,,\, \log n \} \;\;,\;\; \sum_{k=1}^r n_k^{-\alpha} = o(1) \end{aligned} \end{equation*} for some $\alpha>0\,$, where $\sigma_k^2 = n_kp_k(1-p_k)\,$.
\end{theorem} The assumption $\sum_{k=1}^r n_k^{-\alpha} = o(1)$ above is tantamount to saying that
the number of small or tiny communities (where by tiny we mean communities of size $O(\sqrt{\log n})$) cannot be too large (e.g., the
number of polylogarithmic-size communities cannot be a power of
$n$). In other words, one needs to have mostly large communities
(growing like $n^{\epsilon}$, for some $\epsilon>0$) for this assumption to be satisfied. Note, however, that the condition does \emph{not} restrict the number of clusters
of size $n^{\epsilon}$ for any fixed $\epsilon>0\,$.
The second theorem imposes more stringent conditions on the relative density, but relaxes the condition that only a very small number of nodes can be in small clusters. \begin{theorem}\label{thm:convex_recovery2} Under the heterogenous stochastic block model,
the output of convex recovery program in \eqref{proc:convex-recovery} coincides with $Y^\star\,$, with high probability, provided that \begin{equation*} \begin{aligned} \rho_k^2 \gtrsim \sigma_k^2 \log n \;\;,\;\; \widetilde D(p_{\min},q) \gtrsim \tfrac{\log n}{n_{\min}}
\;\;,\;\; \rho_{\min}^2 \gtrsim \max\{\sigma_{\max}^2\,,\, nq(1-q) \} \,. \end{aligned} \end{equation*} \end{theorem}
\begin{remark}\label{rem:connected-convex} For exact recovery to be possible, we need all communities (but at most one) to be connected. Therefore, each subgraph generated by $\mathcal{G}(n_k,p_k)$ must satisfy $p_k n_k > \log n_k\,$, for $k=1,\ldots,r\,$. Observe that this connectivity requirement is implicit in the first condition of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2}, which can be seen from \eqref{eq:chidivqp-approx-p}. \end{remark}
Note that any convex optimization problem that involves the nuclear norm $\norm{Y}_\star$ (or equivalently, $\operatorname{tr}(Y)$ for $Y\succeq 0$) in its objective function or constraints, will have a bottleneck similar to the specific convex problem we analyzed here. Namely, for any such program to succeed we need a subgradient of the nuclear norm at $Y^\star$ which has a component $Z$ with spectral norm bounded by 1 (see the proof of Theorem \ref{thm:convex_recovery} in Appendix \ref{app:proof-convex}).
For example, when all $p_k$ and $q$ are $O(1)$, this requires the minimum cluster size to be at least $O(\sqrt{n})\,$; also see Example \ref{ex:hard}.
It is worth mentioning that for some community configurations, a simple counting argument can provide us with the exact underlying community structure; hence there is no need to solve a semidefinite program as above. We present one such algorithm in Appendix \ref{sec:simple} and characterize its exact recovery guarantees.
In the following, we attempt to provide a better picture of the model space in terms of recoverability. Section \ref{sec:hard} considers a modified maximum likelihood estimator to identify larger parts of the model space that can be recovered exactly. Section \ref{sec:impossiblity} provides an information-theoretic argument to exclude parts of the model space that are impossible to recover exactly.
\subsection{Exactly Recoverable Models}\label{sec:hard} Next, we consider an estimator, inspired by maximum likelihood estimation, and characterize a subset of the model space which is exactly recoverable via this simple estimation method. The proposed estimation approach is not computationally tractable and is only used to examine the conditions for which exact recovery is possible. For a fixed $Y \in \mathcal{Y}$ and an observed matrix $A\,$, the likelihood function is given by \[ \operatorname{\mathbb{P}}_Y(A)=\prod_{i<j}p^{A_{ij}Y_{ij}}_{\tau(i,j)}(1-p_{\tau(i,j)})^{(1-A_{ij})Y_{ij}}q^{A_{ij}(1-Y_{ij})}(1-q)^{(1-A_{ij})(1-Y_{ij})}, \] where $\tau:\{1,\ldots,n\}^2\to \{1,\ldots,r\}$ and $\tau(i,j)=k$ if and only if $(i,j)\in \mathcal{C}_k\,$, and arbitrary in $\{1,\ldots,r\}$ otherwise. The log-likelihood function is given by \[ \log \operatorname{\mathbb{P}}_Y(A)=\sum_{i<j}\log\frac{(1-q)p_{\tau(i,j)}}{q(1-p_{\tau(i,j)})}A_{ij}Y_{ij}+\sum_{i<j}\log\frac{1-p_{\tau(i,j)}}{1-q}Y_{ij} + \textrm{ terms not involving }\{Y_{ij}\} \,. \] Maximizing the log-likelihood involves maximizing a weighted sum of $\{Y_{ij}\}$'s where the weights depend on the (usually unknown) values of $q,p_1,\ldots,p_r\,$. To be able to work with less information, we will use the following modification of maximum likelihood estimation, which only uses the knowledge of $n_0,n_1,\ldots,n_r\,$. \begin{equation} \begin{aligned}\label{proc:MLE-like} \fbox{ \begin{minipage}[c][5em][c]{0.5\textwidth}{ {
\bf Non-convex Recovery } \vskip.6em \begin{algorithmic} \STATE {\bf input:} $\{n_k\}$ \\[.3em] \STATE {\bf output:} $\hat Y = \arg\underset{Y}{\max}\;
\left\{\sum_{i,j} A_{ij}Y_{ij}:\; Y\in\mathcal{Y} \right\}$ \end{algorithmic} }\end{minipage}} \end{aligned} \end{equation}
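Since the non-convex program only maximizes $\sum_{i,j} A_{ij}Y_{ij}$ over the finite set $\mathcal{Y}$, it can be solved by exhaustive search on toy instances. The following sketch (our own illustration; exponential time, viable only for tiny $n$) makes the estimator concrete:

```python
import itertools
import numpy as np

def nonconvex_recovery(A, sizes):
    """Maximize sum_ij A_ij Y_ij over valid clustering matrices with the
    given cluster sizes, by brute-force enumeration of assignments."""
    n = A.shape[0]
    best_Y, best_val = None, -np.inf
    for perm in itertools.permutations(range(n)):
        Y = np.zeros((n, n), dtype=int)
        idx = 0
        for nk in sizes:
            block = list(perm[idx:idx + nk])
            Y[np.ix_(block, block)] = 1
            idx += nk
        val = (A * Y).sum()
        if val > best_val:
            best_val, best_Y = val, Y
    return best_Y

# Planted instance: two cliques {0,1} and {2,3,4}, no cross edges
A = np.zeros((5, 5), dtype=int)
for block in ([0, 1], [2, 3, 4]):
    A[np.ix_(block, block)] = 1
np.fill_diagonal(A, 0)
Yhat = nonconvex_recovery(A, [2, 3])
```

On this noiseless instance the maximizer is unique and coincides with the planted clustering matrix, since any other partition of the given sizes misses within-clique edges.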
\begin{theorem}\label{thm:hard-recovery} Suppose $n_{\min}\geq 2$ and $n\geq 8\,$. Under the heterogenous stochastic block model, provided that \begin{equation*} \begin{aligned} \rho_{\min} \geq 4 (17+\eta)\bigg( \tfrac{1}{3} + \frac{p_{\min}(1- p_{\min}) +q(1-q)}{p_{\min}-q} \bigg) \log n \,, \end{aligned} \end{equation*} for some choice of $\eta>0\,$, the optimal solution $\hat{Y}$ of the non-convex recovery program in \eqref{proc:MLE-like} coincides with $Y^\star$, with a probability not less than $1-5 \tfrac{p_{\max}-q}{p_{\min}-q}n^{2-\eta}\,$. \end{theorem}
Notice that $\rho_{\min} = \min_{k=1,\ldots,r}\, n_k(p_k-q)$ and $p_{\min} = \min_{k=1,\ldots,r}\, p_k$ do not necessarily correspond to the same community.
\subsection{When is Exact Recovery Impossible?}\label{sec:impossiblity} \begin{theorem} \label{thm:impossibility} If any of the following conditions holds, \begin{enumerate}[(1)] \item \label{condn:impossible-first} $2\leq n_k \leq n/e\,$, and \begin{align} 4\sum_{k=1}^r n_k^2 \widetilde D(p_k,q) \leq \tfrac{1}{2}\sum_k n_k\log \tfrac{n}{n_k}-r-2 \end{align} \item \label{condn:impossible-second} $2\leq n_k \leq n/e\,$, and \begin{align} \tfrac{1}{2}r + \log \tfrac{1- p_{\min}}{1- p_{\max}} +1 + \sum_k n_k^2p_k \leq (\tfrac{1}{4}n - \sum n_k^2p_k) \log n + \sum (n_kp_k-\tfrac{1}{4})n_k \log n_k \end{align} \item \label{condn:impossible-third} $n\geq 128\,$, $r\geq 2$ and \begin{align} \max_k \; n_k\left(\widetilde D(p_k,q)+\widetilde D(q,p_k) \right) \leq \tfrac{1}{12}\log(n-n_{\min})
\end{align} \end{enumerate} then $$\inf_{\hat{Y}}\sup_{Y^\star \in \mathcal{Y}}\operatorname{\mathbb{P}}[\hat{Y}\neq Y^\star]\geq {1 \over 2}$$ where the infimum is taken over all measurable estimators $\hat{Y}$ based on the realization $A$ generated according to the heterogenous stochastic block model (HSBM). \end{theorem}
\subsection{Partial Observations} \label{sec:partial} In the general stochastic block model, we assume that the entries of a symmetric adjacency matrix $A\in\{0,1\}^{n\times n}$ have been generated according to a combination of {Erd\H{o}s-R\'enyi~} models with parameters that depend on the true clustering matrix. In the case of partial observations, we assume that the entries of $A$ have been observed independently with probability $\gamma\,$. In fact, every entry of the input matrix falls into one of these categories: {\em observed as one} denoted by $\Omega_1$, {\em observed as zero} denoted by $\Omega_0$, and {\em unobserved} which corresponds to $\Omega^c$ where $\Omega=\Omega_0\cup\Omega_1\,$. If an estimator only takes the observed part of the matrix as the input, one can revise the underlying probabilistic model to incorporate both the stochastic block model and the observation model; i.e., the revised distribution for the entries of $A$ given by \begin{equation*} \begin{aligned} A_{ij} = \begin{cases} \operatorname{Ber}(\gamma p_k) & (i,j)\in\mathcal{C}_k \text{ for some } k \\ \operatorname{Ber}(\gamma q) & i\in\mathcal{C}_k \text{ and } j\in\mathcal{C}_l \text{ for } k\neq l \,. \end{cases} \end{aligned} \end{equation*} yields the same output from an estimator that only takes in the observed values.
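The equivalence between the two-stage model (edge generation with probability $p$, followed by observation with probability $\gamma$) and direct $\operatorname{Ber}(\gamma p)$ sampling can be checked empirically; below is a minimal sketch with made-up parameters (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, p, N = 0.7, 0.6, 200_000

# Two-stage: draw an edge with prob p, then observe it with prob gamma
edges = rng.random(N) < p
observed = edges & (rng.random(N) < gamma)

# One-stage: draw directly from Ber(gamma * p)
direct = rng.random(N) < gamma * p

print(observed.mean(), direct.mean())  # both concentrate near gamma * p
```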
Therefore, the algorithms in \eqref{proc:convex-recovery} and \eqref{proc:MLE-like}, as well as the results of Theorems \ref{thm:convex_recovery}, \ref{thm:convex_recovery2}, \ref{thm:hard-recovery}, can be easily adapted to the case of partially observed graphs.
\section{Future Directions}
We have provided a series of extensions to prior work (especially \cite{chen2014statistical,abbe2015community});
however, a number of interesting problems remain open. Future directions for research on this topic include the following.
\paragraph{Models for Partial Observation.} We considered the case where a subset of the edges in the underlying graph were observed uniformly at random. In practice, however, the observed edges are often not uniformly sampled, and care will be needed to model the effect of nonuniform sampling. Also, in many practical problems, the observed edges may be chosen by the algorithm based on some prior information (non-adaptive), or based on observations made so far (adaptive); e.g., see Yun and Proutiere \cite{yun2014community}. It will be interesting to examine what the algorithms can achieve in these scenarios.
\paragraph{Overlapping Communities.} SBMs with overlapping communities represent a more realistic model than the non-overlapping case; it has been shown that the community structure of large social and information networks is quite complex and that very large communities tend to have significant overlap. Only a few references in the literature have considered this problem (e.g., \cite{abbe2015community}), and there are many open questions on recovery regimes and algorithms.
It would be interesting to develop a convex optimization-based algorithm for recovery of models generated by SBM with overlapping communities.
\paragraph{Outlier Nodes.} A practically important extension to the SBM is to allow for adversarial outlier nodes. Cai and Li in \cite{cai2014robust} proposed a semidefinite program that can recover the clusters in an SBM in the presence of outlier nodes connected to other nodes in an arbitrary way, provided that the number of outliers is small enough. Their result is comparable to the best known results in the case of balanced clusters and equal probabilities. However, their complexity results are still parametrized by $p_{\min}$ and $n_{\min}$, which excludes useful examples, as discussed in Section \ref{sec:this-paper}. Extending our results to the setting of \cite{cai2014robust} is a direction for future work.
\appendix \section{Proofs for Convex Recovery} \label{app:proof-convex} In the following, we present the proofs of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2}.
\subsection{Proof of Theorem \ref{thm:convex_recovery}} \label{app:proof-convex_recovery} We are going to prove that under the HSBM, with high probability, the output of the convex recovery program in \eqref{proc:convex-recovery} coincides with the underlying clustering matrix $Y^\star$ provided that \begin{equation} \begin{aligned} \rho_k^2 &\gtrsim n_kp_k(1-p_k)\log n_k \\ (p_{\min}-q)^2 &\gtrsim q(1-q)\tfrac{\log n_{\min}}{n_{\min}} \\ \rho_{\min}^2 &\gtrsim \max \left\{ \max_k \; n_kp_k(1-p_k),\, nq(1-q),\, \log n \right\} \end{aligned} \end{equation} as well as $\sum_{k=1}^r n_k^{-\alpha} = o(1)$ for some $\alpha>0\,$. Notice that $p_k(1-p_k)n_k\gtrsim \log n_k\,$, for all $k=1,\ldots,r\,$, is implied by the first condition, as mentioned in Remark \ref{rem:connected-convex}.
Before proving Theorem \ref{thm:convex_recovery}, we state a crucial result from random matrix theory that allows us to bound the spectral radius of the matrix $A - \mathbb{E}(A)$ where $A$ is an instance of adjacency matrix under HSBM. This result appears, for example, as Theorem 3.4 in \cite{chatterjee2012matrix}\footnote{As a
more general result about the norms of rectangular matrices, but
with the slightly stronger growth condition $\sigma^2 \geq
\log^{6+\epsilon}n /n$.}. Although Lemma 2 from \cite{tomozei2010distributed} appears to state a weaker version of this result, the proof presented there actually supports the version we give below in Lemma \ref{randmat}. Finally, Lemma 8 from \cite{vu2014simple} states the same result and presents a very brief sketch of the proof idea, along the lines of the proof presented fully in \cite{tomozei2010distributed}.
\begin{lemma} \label{randmat}
Let $A = \{a_{ij}\}$ be an $n \times n$ symmetric random
matrix such that each $a_{ij}$ represents an independent random
Bernoulli variable with $\operatorname{\mathbb{E}}(a_{ij}) = p_{ij}\,$. Assume that there
exists a constant $C_0$ such that $\sigma^2 = \max_{i,j}
p_{ij}(1-p_{ij}) \geq C_0 \log n /n$. Then for each constant $C_1>0$ there
exists $C_2>0$ such that \[ \operatorname{\mathbb{P}} \left ( \norm{A - \operatorname{\mathbb{E}}(A)} \geq C_2 \sigma \sqrt{n} \right) ~\leq~ n^{-C_1} \,. \] \end{lemma}
As an immediate consequence of this, we have the following corollary.
\begin{corollary} \label{weakrandmat}
Let $A = \{a_{ij}\}$ be an $n \times n$ symmetric random
matrix such that each $a_{ij}$ represents an independent random
Bernoulli variable with $\operatorname{\mathbb{E}}(a_{ij}) = p_{ij}\,$. Assume that there
exists a constant $C_0$ such that $\sigma^2 = \max_{i,j}
p_{ij}(1-p_{ij}) \leq C_0 \log n /n\,$. Then for each constant $C_1>0$ there
exists $C_3>0$ such that \[ \operatorname{\mathbb{P}} \left ( \norm{A - \operatorname{\mathbb{E}}(A)} \geq C_3 \sqrt{\log n} \right) ~\leq~ n^{-C_1}\,. \] \end{corollary}
\begin{proof} The corollary follows from Lemma \ref{randmat}, by replacing the $(1,1)$ entry of $A$ with a Bernoulli variable
of probability $p_{11} = C_0 \log n /n$. Given that the old $(1,1)$ entry and the new $(1,1)$ entry are both Bernoulli variables, this can change $\norm{A -
\operatorname{\mathbb{E}}(A)}$ by at most $1$. The new maximal variance is equal to
$\max_{i,j} p_{ij}(1-p_{ij}) = C_0 \log n/n\,$. Therefore Lemma \ref{randmat}
is applicable to the new matrix and the conclusion holds. \end{proof}
We use Lemma \ref{randmat} to prove the following result. \begin{lemma}\label{lem:specnorm-convex-recovery} Let $A$ be generated according to the heterogenous stochastic block model (HSBM). Suppose \begin{enumerate}[(1)]
\item\label{condn1:specnorm-convex-recovery}
$p_k(1-p_k)n_k \gtrsim \log n_k\,$, for $k=1,\ldots, r\,$, and \item\label{condn2:specnorm-convex-recovery}
there exists an $\alpha>0$ such that $\sum_{k=1}^r n_k^{-\alpha} = o(1)\,$.
\end{enumerate} Then with probability at least $1-o(1)$ we have
\[
\norm{A-\operatorname{\mathbb{E}}(A)} \;\lesssim\; \max_i\sqrt{p_i(1-p_i)n_i}+ \sqrt{\max \{q(1-q)n\,,\, \log n\} } \,.
\] \end{lemma}
\begin{proof} We split the matrix $A$ into two matrices, $B_1$ and $B_2\,$. $B_1$ consists of the block-diagonal projection onto the clusters, and $B_2$ is the rest. Denote the blocks on the diagonal of $B_1$ by $C_1$, $C_2$, $\ldots, C_r$, where $C_i$ corresponds to the $i$th cluster. Then $\norm{B_1 - \operatorname{\mathbb{E}}(B_1)} = \max_i \norm{C_i - \operatorname{\mathbb{E}}(C_i)}$, and, by Lemma \ref{randmat}, for each $i$ the event $\norm{C_i - \operatorname{\mathbb{E}}(C_i)} \gtrsim \sqrt{p_i(1-p_i)n_i}$ occurs with probability at most $n_i^{-\alpha}\,$. By assumptions \eqref{condn1:specnorm-convex-recovery} and \eqref{condn2:specnorm-convex-recovery} of Lemma \ref{lem:specnorm-convex-recovery} and applying a union bound, we conclude that \[ \norm{B_1 - \operatorname{\mathbb{E}}(B_1)} \lesssim \max_i \sqrt{p_i(1-p_i)n_i} \] with probability at least $1 - \sum_{i=1}^{r} n_i^{-\alpha} = 1-o(1)\,$. We shall now turn our attention to $B_2\,$. Let $\sigma^2 = \max \{q(1-q), \log n/n \}\,$. By Corollary \ref{weakrandmat},
$\norm{B_2 - \operatorname{\mathbb{E}}(B_2)} \lesssim \max \{\sqrt{q(1-q)n}, \sqrt{\log n}
\}\,$, with high probability. Putting the two norm estimates together, the conclusion of Lemma \ref{lem:specnorm-convex-recovery} follows. \end{proof}
We are now in the position to prove Theorem \ref{thm:convex_recovery}.
\begin{proof}[of Theorem \ref{thm:convex_recovery}] We need to show that for any feasible $Y\neq Y^\star\,$, we have $\Delta(Y):=\langle A , Y^\star-Y \rangle>0\,$. Rewrite $\Delta(Y)$ as \begin{equation} \begin{aligned} \label{eq:DeltaYsplit} \Delta(Y)=\langle{A},{Y^\star-Y}\rangle=\langle{\operatorname{\mathbb{E}}[A]},{Y^\star-Y}\rangle+\langle{A-\operatorname{\mathbb{E}}[A]},{Y^\star-Y}\rangle \,.
\end{aligned} \end{equation}
Note that $\sum_{i,j} Y^\star_{ij} = \sum_{i,j} Y_{ij} = \sum_{k=1}^r n_k^2$, thus $\sum_{i,j} (Y^\star_{ij} - Y_{ij}) = 0\,$. Express this as \[ \sum_{k=1}^r \sum_{i,j \in V_k} (Y^\star-Y)_{ij} = - \sum_{k' \neq k''} \sum_{i \in V_{k'},\, j \in V_{k''}} (Y^\star-Y)_{ij} \,. \] Then we have \begin{equation*} \begin{aligned} \langle{\operatorname{\mathbb{E}}[A]},{Y^\star-Y}\rangle &= \sum_{k=1}^r \sum_{i,j \in V_k} p_k (Y^\star-Y)_{ij} + \sum_{k' \neq k''} \sum_{i \in V_{k'},\, j \in V_{k''}} q (Y^\star-Y)_{ij} \\ &= \sum_{k=1}^r \sum_{i,j \in V_k} (p_k - q) (Y^\star-Y)_{ij} \,. \end{aligned} \end{equation*} Finally, since $0 \leq (Y^\star-Y)_{ij} \leq 1$ for $i,j \in V_k\,$, we can write \begin{equation} \begin{aligned} \label{doi} \langle{\operatorname{\mathbb{E}}[A]},{Y^\star-Y}\rangle = \sum_{k=1}^r (p_k -q) \norm{(Y^\star-Y)_{\mathcal{C}_k}}_1 \,. \end{aligned} \end{equation}
Next, recall that the subdifferential (i.e., the set of all subgradients) of $\norm{\cdot}_\star$ at $Y^\star$ is given by \[
\partial \norm{Y^\star}_\star = \{ UU^T + Z ~ \big |~ U^TZ = ZU = 0\,,\; \norm{Z}\leq 1\} \] where $Y^\star=UKU^T$ is the singular value decomposition for $Y^\star$ with $U\in\mathbb{R}^{n\times r}\,$, $K=\operatorname{diag}(n_1,\ldots,n_r)\,$, and $U_{ik} = 1/\sqrt{n_k}$ if node $i$ is in cluster $\mathcal{C}_k$ and $U_{ik}=0$ otherwise.
Let $M := A - \operatorname{\mathbb{E}}[A]\,$. Since conditions \eqref{condn1:specnorm-convex-recovery} and \eqref{condn2:specnorm-convex-recovery} of Lemma \ref{lem:specnorm-convex-recovery} are verified, there exists $C_1>0$ such that $\norm{M}\leq \lambda\,$, with probability $1-o(1)\,$, where \begin{eqnarray} \label{lambdadef} \lambda := C_1 \left(\max_i \sqrt{p_i(1-p_i)n_i}+ \sqrt{\max \{q(1-q)n, \log n\} } \right)\,. \end{eqnarray} Furthermore, let the projection operator onto a subspace $T$ be defined by \[ \mathcal{P}_T(M):=UU^TM+MUU^T-UU^TMUU^T \,, \] and also $\mathcal{P}_{T^\bot}=\mathcal{I}-\mathcal{P}_T$, where $\mathcal{I}$ is the identity map. Since $\norm{\mathcal{P}_{T^\bot}(M)}\leq \norm{M}\leq \lambda$ with high probability, $UU^T+ \tfrac{1}{\lambda}\mathcal{P}_{T^\bot}(M)\in \partial \norm{Y^\star}_{\star}$ with high probability. Now, by the constraints of the convex program, we have \begin{equation} \begin{aligned} 0\geq \norm{Y}_{\star}-\norm{Y^\star}_{\star} \geq \langle{UU^T+\tfrac{1}{\lambda}\mathcal{P}_{T^\bot}(M)},{Y-Y^\star}\rangle = \langle{UU^T-\tfrac{1}{\lambda}\mathcal{P}_{T}(M)},{Y-Y^\star}\rangle+\tfrac{1}{\lambda}\langle{M},{Y-Y^\star}\rangle \,, \end{aligned} \end{equation} which implies $\langle{M},{Y^\star-Y}\rangle\geq \langle{\mathcal{P}_{T}(M)-\lambda UU^T},{Y^\star-Y}\rangle\,$. Combining \eqref{eq:DeltaYsplit} and \eqref{doi} we get, \begin{equation} \begin{aligned} \label{trei} \Delta(Y) &\geq \sum_{k=1}^r (p_k-q)\norm{(Y^\star-Y)_{\mathcal{C}_k}}_1+ \langle{\mathcal{P}_{T}(M)-\lambda UU^T},{Y^\star-Y}\rangle\\ &\geq \sum_{k=1}^r (p_k-q)\norm{(Y^\star-Y)_{\mathcal{C}_k}}_1\\ &\quad - \sum_{k=1}^r \underbrace{\norm{(\mathcal{P}_{T}(M)-\lambda UU^T)_{\mathcal{C}_k}}_{\infty}}_{(\mu_{kk})}\norm{(Y^\star-Y)_{\mathcal{C}_k}}_1\\ &\quad - \sum_{k'\neq k''}\underbrace{\norm{(\mathcal{P}_{T}(M)-\lambda UU^T)_{V_{k'}\times V_{k''}}
}_{\infty}}_{(\mu_{k' k''})}\norm{(Y^\star-Y)_{V_{k'}\times V_{k''}}}_1
\end{aligned} \end{equation} where we have made use of the fact that an inner product can be bounded by a product of dual norms. We now derive bounds for the quantities $\mu_{kk}$ and $\mu_{k'k''}$ marked above. Note that the former indicates sums over the clusters, while the latter indicates sums outside the clusters.
For $\mu_{kk}$, if $(i,j)\in \mathcal{C}_k$ then \begin{equation*} \begin{aligned} \left(\mathcal{P}_T(M)-\lambda UU^T\right)_{ij} &= \left(UU^TM+MUU^T-UU^TMUU^T-\lambda UU^T\right)_{ij}\\ &={1 \over n_k} \sum_{l \in \mathcal{C}_k}M_{lj}+{1 \over n_k}\sum_{l \in \mathcal{C}_k}M_{il}-{1 \over n_k^2}\sum_{l,l'\in \mathcal{C}_k}M_{ll'}-{\lambda \over n_k} \,. \end{aligned} \end{equation*} Recall Bernstein's inequality (e.g. see Theorem 1.6.1 in \cite{tropp2015introduction}): \begin{proposition} (Bernstein Inequality) Let $S_1, S_2$, $\ldots, S_n$ be independent, centered, real random variables, and assume that each one is uniformly bounded: \[ \operatorname{\mathbb{E}}[S_{k}] = 0~~\text{and}~~\abs{S_k}\leq L~~\text{for each } k=1, \ldots,
n \,. \] Introduce the sum $Z= \sum_{k=1}^n S_k$, and let $\nu(Z)$ denote the variance of the sum: \[ \nu(Z) = \operatorname{\mathbb{E}}[Z^2] = \sum_{k=1}^n \operatorname{\mathbb{E}}[S_k^2] \,. \] Then \[ \operatorname{\mathbb{P}}[~\abs{Z} \geq t~]~\leq~2 \exp\left(\frac{-t^2/2}{\nu(Z) +
Lt/3}\right)~~\text{for all } t \geq 0 \,. \] \end{proposition}
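As a quick sanity check of the inequality (our own numerical illustration, not part of the proof), the Bernstein tail bound can be compared against the empirical tail of a centered sum of Bernoulli variables, with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, t, trials = 500, 0.3, 40.0, 20_000

L = max(p, 1 - p)             # uniform bound on |S_k| = |X_k - p|
nu = n * p * (1 - p)          # variance of Z = sum_k S_k
bernstein = 2 * np.exp(-t**2 / 2 / (nu + L * t / 3))

# Empirical tail of Z = Bin(n, p) - np over many trials
Z = (rng.random((trials, n)) < p).sum(axis=1) - n * p
empirical = np.mean(np.abs(Z) >= t)
print(empirical, bernstein)   # empirical tail sits below the bound
```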
We will apply it to bound the three sums in $\mu_{kk}$, using the fact that each of the sums contains only centered, independent, and bounded variables, and that the variance of each entry in the sum is $p_k(1-p_k)\,$. For the first two sums, we can use $t \sim \sqrt{n_k p_k(1-p_k)
\log n_k}$ to obtain a combined failure probability (over the entire
cluster) of $O(n_k^{-\alpha})$. Finally, for the third sum, we may
choose $t \sim n_k \sqrt{p_k (1-p_k) \log n_k}$, again for a combined failure
probability over the whole cluster of no more than $O(n_k^{-\alpha})$.
We thus have \begin{equation*} \begin{aligned} \mu_{kk}\leq \abs{ \tfrac{1}{n_k} \sum_{l \in \mathcal{C}_k}M_{lj} } + \abs{ \tfrac{1}{n_k}\sum_{l \in \mathcal{C}_k}M_{il} } + \abs{ \tfrac{1}{n_k^2}\sum_{l,l'}M_{l,l'} } +{\lambda \over n_k} \lesssim \sqrt{\frac{p_k(1-p_k)}{n_k} \log n_k}+{\sqrt{p_k (1-p_k) \log n_k} \over n_k}+{\lambda \over n_k}~,\\ \end{aligned} \end{equation*} for all $i,j \in \mathcal{C}_k$, with probability $1-O(n_k^{-\alpha})$. Note that in the inequality above, the second term is much smaller in magnitude than the first, so we can disregard it; using \eqref{lambdadef}, we obtain \begin{equation} \begin{aligned} \label{eq:Ak-bound} \mu_{kk} \lesssim \frac{1}{n_k} \left ( \sqrt{n_k p_k(1-p_k)
\log n_k} + \max_i \sqrt{p_i (1-p_i) n_i} + \sqrt{\max \{q(1-q)n, \log n\} } \right )\,, \end{aligned} \end{equation} and by taking a union bound over $k$ we can conclude that the probability that any of these bounds fail is $o(1)\,$. Similarly, for $\mu_{k'k''}$, for $k'\neq k''\,$, we can calculate that \begin{eqnarray} \label{eq:Bkk-bound-before} \mu_{k'k''}\leq \abs{\tfrac{1}{n_{k'}} \sum_{l \in
\mathcal{C}_{k'}}M_{lj}}+\abs{\tfrac{1}{n_{k''}}\sum_{l \in
\mathcal{C}_{k''}}M_{il}}+\abs{\tfrac{1}{n_{k'}n_{k''}}\sum_{l'\in\mathcal{C}_{k'},\,l''\in\mathcal{C}_{k''}}M_{l',l''}} & \lesssim & \\ & & \hspace{-3.6cm} \sqrt{q(1-q) (\frac{\log n_{k'}}{n_{k'}} +\frac{\log
n_{k''}}{n_{k''}})}+ \frac{\sqrt{q(1-q) \log (n_{k'}
n_{k''})}}{\sqrt{n_{k'} n_{k''}}}~, \nonumber
\end{eqnarray} with failure probability over all $i \in \mathcal{C}_{k'}$, $j \in \mathcal{C}_{k''}$ of no more than $O(n_{k'}^{-\alpha} n_{k''}^{-\alpha})\,$. We do this by taking $t \sim \sqrt{n_{k'} q(1-q) \log (n_{k'} n_{k''})}$, respectively $t \sim \sqrt{n_{k''} q(1-q) \log (n_{k'} n_{k''})}$ in the first two sums. For the third, we just take $t \sim \sqrt{ n_{k'} n_{k''} q(1-q) \log (n_{k'} n_{k''})}\,$. As before, note that the second term is much smaller in magnitude than the first, and hence we can disregard it to obtain \begin{eqnarray} \label{eq:Bkk-bound} \mu_{k'k''}\lesssim \max_{k} \sqrt{\frac{q(1-q)\log
n_k}{n_k}} = \sqrt{\frac{q(1-q) \log n_{\rm{min}}}{n_{\rm{min}}}} := \mu_{\operatorname{off}}~, \end{eqnarray} as the function $\log x/x$ is strictly decreasing for $x \geq 3$, with the probability that all of the above are simultaneously true being $1-o(1)$. Since the bound on $\mu_{k'k''}$ is independent of $k'$ and $k''$, we can rewrite \eqref{trei} as \begin{equation*} \begin{aligned} \Delta(Y) &\geq \sum_{k=1}^r (p_k-q)\norm{(Y^\star-Y)_{\mathcal{C}_k}}_1
- \sum_{k=1}^r \mu_{kk} \norm{(Y^\star-Y)_{\mathcal{C}_k}}_1
- \sum_{k'\neq k''} \mu_{k'k''}\norm{(Y^\star-Y)_{V_{k'}\times V_{k''}}}_1 \\ &\geq \sum_{k=1}^r \left(p_k-q - \mu_{kk} - \mu_{\operatorname{off}} \right) \norm{(Y^\star-Y)_{\mathcal{C}_k}}_1 \end{aligned} \end{equation*} where we use the fact that $\sum_{k' \neq k''} \norm{(Y^\star-Y)_{V_{k'}\times V_{k''}}}_{1} = \sum_{k=1}^r \norm{(Y^\star-Y)_{\mathcal{C}_k}}_{1}\,$. Finally, the conditions of the theorem guarantee the nonnegativity of the right hand side, hence the optimality of $Y^\star$ as the solution to the convex recovery program in \eqref{proc:convex-recovery}.
\end{proof}
\subsection{Proof of Theorem \ref{thm:convex_recovery2}} We use a different result than Lemma \ref{lem:specnorm-convex-recovery}, which we state below. \begin{lemma}[Corollary 3.12 in \cite{bandeira2014sharp}] \label{lem:bandeira2014sharp} Let $X$ be an $n\times n$ symmetric matrix whose entries $X_{ij}$ are independent symmetric random variables. Then there exists for any $0<\epsilon \leq \tfrac{1}{2}$ a universal constant $c_\epsilon$ such that for every $t\geq 0$ \[ \norm{X}\leq 2(1+\epsilon) \tilde \sigma + t \,, \] with probability at least $1-n\exp(\tfrac{-t^2}{c_\epsilon \tilde \sigma_\star^2})\,$, where \[ \tilde \sigma = \max_i \sqrt{\sum_j \operatorname{\mathbb{E}}[X_{ij}^2]} \;,\quad \tilde \sigma_\star = \max_{i,j} \norm{X_{ij}}_\infty \,. \] \end{lemma} We specialize Lemma \ref{lem:bandeira2014sharp} to HSBM to get the following result. \begin{lemma}\label{lem:specnorm-convex-recovery2} Let $A$ be generated according to the heterogenous stochastic block model (HSBM). Then there exists for any $0<\epsilon \leq \tfrac{1}{2}$ a universal constant $c_\epsilon$ such that \[ \norm{A-\operatorname{\mathbb{E}}(A)} \leq 4(1+\epsilon) \max\{\sigma_{\max}, \sigma_0\} + \sqrt{2 c_\epsilon \log n} \] with probability at least $1-n^{-1}\,$. \end{lemma}
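The scale of Lemma \ref{lem:specnorm-convex-recovery2} can be illustrated numerically. The sketch below samples a two-community instance with hypothetical parameters and compares $\norm{A-\operatorname{\mathbb{E}}(A)}$ against $\tilde\sigma$ from Lemma \ref{lem:bandeira2014sharp}; since the universal constant $c_\epsilon$ is not explicit, only the leading-order term is checked, not the full bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-community parameters: sizes, in-cluster probabilities, q.
sizes, probs, q = [150, 100], [0.5, 0.4], 0.1
n = sum(sizes)
labels = np.repeat(np.arange(len(sizes)), sizes)

# Edge-probability matrix P = E[A] (zero diagonal, no self-loops).
P = np.where(labels[:, None] == labels[None, :],
             np.array(probs)[labels][:, None], q)
np.fill_diagonal(P, 0.0)

# Sample a symmetric adjacency matrix with independent upper-triangular entries.
U = rng.random((n, n))
A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
A = A + A.T

# sigma_tilde = max_i sqrt(sum_j Var(A_ij)), as in the lemma.
sigma_tilde = float(np.sqrt((P * (1 - P)).sum(axis=1).max()))
spec = float(np.linalg.norm(A - P, 2))   # largest singular value
ratio = spec / sigma_tilde
```

At this size the ratio lands comfortably in the constant range predicted by the lemma (roughly between $1$ and $3$).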
We can now present the proof for Theorem \ref{thm:convex_recovery2}.
\begin{proof}The proof follows the same lines as that of Theorem \ref{thm:convex_recovery}. Given the similarities, we only describe the differences between the tools employed and how they affect the conditions in Theorem \ref{thm:convex_recovery2}. The proof proceeds exactly as before up to the definition of $\lambda$, which, since we use Lemma \ref{lem:specnorm-convex-recovery2} rather than Lemma \ref{lem:specnorm-convex-recovery}, becomes \begin{eqnarray} \label{lambdadef1} \lambda := C_2 \max \{\sigma_{\max},\, \sigma_0,\, \sqrt{\log n}\} \,, \end{eqnarray} where $C_2$ is a constant chosen to upper bound the quantity in Lemma \ref{lem:specnorm-convex-recovery2}.
The other two small changes come from the fact that we need to make sure that the failure probabilities for the quantities $\mu_{kk}$ and $\mu_{k'k''}$ are polynomial in $1/n$, which leads to replacing $\log n_k$ by $\log n$ in each of them. The rest of the proof is unchanged.
\end{proof}
\section{Proofs for Recoverability and Non-recoverability} \label{app:proof-rec}
\subsection{Proofs for Recoverability}
\begin{proof}[of Theorem \ref{thm:hard-recovery}] For $\Delta(Y):=\langle{A},{Y^\star-Y}\rangle\,$, we have to show that for any feasible $Y\neq Y^\star\,$, we have $\Delta(Y)>0\,$. For simplicity we assume $Y_{ii}=Y_{ii}^\star=0$ for all $i \in\{1,\ldots,n\}\,$. Consider the splitting \begin{align} \label{eq:split-iprod} \Delta(Y)=\langle{A},{Y^\star-Y}\rangle=\langle{\operatorname{\mathbb{E}}(A)},{Y^\star-Y}\rangle+\langle{A-\operatorname{\mathbb{E}}(A)},{Y^\star-Y}\rangle\,. \end{align} Notice that $Y^\star= \sum_{k=1}^r \mathbf{1}_{\mathcal{C}_k}$ and $\operatorname{\mathbb{E}}(A) = q\mathbf{1}\mathbf{1}^T + \sum_{k=1}^r (p_k-q)\mathbf{1}_{\mathcal{C}_k}\,$. Considering $d_k(Y)=\langle{Y^\star_{\mathcal{C}_k}},{Y^\star-Y}\rangle\,$, the number of entries in $\mathcal{C}_k$ on which $Y$ and $Y^\star$ do not match, we get \begin{align} \label{eq:split-EA-part} \langle{\operatorname{\mathbb{E}}(A)},{Y^\star-Y}\rangle = \sum_{k=1}^r (p_k-q) d_k(Y) \end{align} where we used the fact that $Y,Y^\star\in\mathcal{Y}$ and have the same number of ones and zeros, hence $\sum_{i,j} Y_{ij} = \sum_{i,j} Y^\star_{ij}\,$.
On the other hand, the second term in \eqref{eq:split-iprod} can be represented as \begin{align*} T(Y):=\langle{A-\operatorname{\mathbb{E}}(A)},{Y^\star-Y}\rangle \;=\; \sum_{Y_{ij}^\star=1,Y_{ij}=0} (A-\operatorname{\mathbb{E}}(A))_{ij} \;+\; \sum_{Y_{ij}^\star=0,Y_{ij}=1} (\operatorname{\mathbb{E}}(A)-A)_{ij} \end{align*} where each term is a centered Bernoulli random variable bounded by $1\,$. Observe that the total variance for all the summands in the above is given by \begin{align*} \sigma^2=\sum_{k=1}^r d_k(Y) p_k(1- p_k) + q(1- q) \sum_{k=1}^r d_k(Y) \,. \end{align*} Then, combining \eqref{eq:split-iprod} and \eqref{eq:split-EA-part}, and applying the Bernstein inequality yields \begin{align*} \operatorname{\mathbb{P}}(\Delta(Y)\leq 0) = \operatorname{\mathbb{P}}\bigg(T(Y)\leq -\sum_k (p_k-q)d_k(Y)\bigg) \leq \exp\bigg( -\frac{ t^2}{2\sigma^2+ 2t/3} \bigg) &\leq \exp\bigg(-\frac{\sum_k (p_k-q) d_k(Y)}{2\nu(Y) + 2/3}\bigg) \end{align*} where $t = \sum_k (p_k-q)d_k(Y)$ and \begin{align*} \nu(Y) = \frac{\sigma^2}{t} = \frac{\sum_{k=1}^r ( p_k(1- p_k) +q(1-q)) d_k(Y) }{\sum_k (p_k-q)d_k(Y)} \leq \max_k\frac{p_k(1- p_k) +q(1-q)}{p_k-q} = \frac{p_{\min}(1- p_{\min}) +q(1-q)}{p_{\min}-q} :=\bar\nu_0 \,. \end{align*} Considering $\bar \nu:= 2\bar\nu_0 +2/3$ and $\theta_k:=\lfloor\frac{p_{k}-q}{p_{\min}-q} \rfloor\,$, we get \begin{align} \label{eq:bound-DeltaY} \operatorname{\mathbb{P}}(\Delta(Y)\leq 0) \leq \exp\bigg(-\tfrac{1}{\bar\nu} {\sum_k (p_k-q) d_k(Y)}\bigg) \leq \exp\bigg(-\tfrac{1}{\bar\nu} {(p_{\min}-q) \sum_k \theta_k d_k(Y)}\bigg) \end{align} which can be bounded using the next lemma which is a direct extension of Lemma 4 in \cite{chen2014statistical}.
\begin{lemma} \label{lem:thetad-bound} Given the values of $\theta_k$ and $n_k\,$, for $k=1,\ldots,r\,$, and for each integer value $\xi\in [\min\theta_k(2n_k-1),\,\sum_{k}\theta_kn_k^2]\,$, we have \begin{align}\label{eq:lem:thetad-bound}
\big| {\{[Y]\in \mathcal{Y}:\sum_{k=1}^r \theta_k d_k(Y)=\xi\}} \big| \leq \left(\frac{4\xi}{\tau}\right)^2 {n}^{16\xi/\tau} \end{align} where $\tau:= \min_k\,\theta_k n_k\,$, and $[Y] = \{Y'\in\mathcal{Y}:\; Y'_{ij} Y^\star_{ij} = Y_{ij} Y^\star_{ij} \}\,$. \end{lemma}
Now plugging in the result of Lemma \ref{lem:thetad-bound} into \eqref{eq:bound-DeltaY} yields, \begin{align} \operatorname{\mathbb{P}}\bigg(\exists Y\in \mathcal{Y}:Y\neq Y^\star,\Delta(Y)\leq 0\bigg) &\leq \sum_{\xi}\operatorname{\mathbb{P}}\big(\exists Y\in \mathcal{Y}: \sum_k \theta_k d_k(Y)=\xi\,,\,\Delta(Y)\leq 0\big) \nonumber\\ &\leq 2 \sum_\xi \left(\frac{4\xi}{\tau}\right)^2 { n}^{16\xi/\tau} \exp\bigg(-\tfrac{1}{\bar\nu} {(p_{\min}-q) \xi}\bigg) \nonumber\\ &= 32 \sum_\xi \bigg(\frac{\xi}{\tau}\bigg)^2 \exp\bigg( (16\log n - \tfrac{1}{\bar\nu} (p_{\min}-q)\tau ) \frac{\xi}{\tau} \bigg) \nonumber\\ &\leq 32 \sum_\xi \bigg(\frac{\xi}{\tau}\bigg)^2 \exp\bigg( (16\log n - \tfrac{1}{2\bar\nu} \rho_{\min} ) \frac{\xi}{\tau} \bigg) \label{eq:xi-decr-func} \end{align}
In order to have a meaningful bound for the above probability, we need the exponential term in \eqref{eq:xi-decr-func} to be decreasing. Hence, we require $\rho_{\min} \geq 64 \bar\nu \log n \,$. Moreover, the function in \eqref{eq:xi-decr-func} is a decreasing function of $\xi/\tau$ for \begin{align}\label{eq:xi-dec} \frac{\xi}{\tau} \geq \frac{4\bar\nu }{ \rho_{\min} -32 \bar\nu \log n} \,. \end{align} Since $\xi\geq \min \theta_k(2n_k-1)\geq \min\theta_kn_k=\tau\,$, requiring the following condition (for some $\eta>0$ which will be determined later), \begin{align} \label{eq:cond-rho-min-1} \rho_{\min} \geq 2(16+\eta)\bar\nu\log n +4\bar\nu \,, \end{align} implies \[ \frac{\xi}{\tau} \geq 1 \geq \frac{4}{4+2\eta \log n} \geq \frac{4\bar\nu }{ \rho_{\min} -32 \bar\nu \log n} \] and allows us to bound the summation in \eqref{eq:xi-decr-func} with the largest term (corresponding to the smallest value of $\xi/\tau\,$, or an even smaller value, namely 1) times the number of summands; i.e., \begin{align} \text{\eqref{eq:xi-decr-func}} &\leq 32\;(\sum\theta_kn_k^2)\, \exp\left(16\log n - \tfrac{1}{2\bar\nu} \rho_{\min}\right) \\
&\leq 32\sum\theta_kn_k^2 \exp(-2-\eta \log n) \\
&\leq 5\, \theta_{\max} n^{2-\eta} \\ &\leq 5\, \tfrac{p_{\max}-q}{p_{\min}-q} n^{2-\eta} \,, \end{align} or, similarly, \begin{align} \text{\eqref{eq:xi-decr-func}} \leq 32\sum\theta_kn_k^2 \exp(-2-\eta \log n) \leq 5 \frac{\sum_{k=1}^r\rho_k}{p_{\min}-q}n^{1-\eta} \,. \end{align} Hence, if the condition in \eqref{eq:cond-rho-min-1} holds we get the optimality of $Y^\star$ with a probability at least equal to the above. Finally, $n\geq 8$ implies $\log n\geq 2$ and \eqref{eq:cond-rho-min-1} can be insured by \[ \rho_{\min} \geq 4(17+\eta) \bigg( \frac{1}{3}+ \frac{p_{\min}(1-p_{\min})+q(1-q)}{p_{\min}-q}\bigg) \log n \,. \] \end{proof}
\begin{proof}[of Lemma \ref{lem:thetad-bound}] We extend the proof of Lemma 4 in \cite{chen2014statistical} to our case. Fix a $Y\in\mathcal{Y}$ with $\sum_{k=1}^r\theta_kd_k(Y)=\xi$ and consider the corresponding $r$ clusters as well as the set of isolated nodes. Notice that for any $Y'\in[Y]$ we also have $\sum_{k=1}^r\theta_kd_k(Y')=\xi\,$. In the following, we will construct an ordering for the clusters of $Y$ according to $Y^\star\,$. Denote the clusters of $Y^\star$ by $V_1^\star,\ldots,V_r^\star,$ and $V_{r+1}^\star\,$.
Consider the set of values of cluster sizes $\{n_1,\ldots,n_r\} = \{\eta_1,\ldots, \eta_s\}$ where $\eta_1,\ldots, \eta_{s}$ are distinct, and define $\mathcal{I}_\ell = \{k:\; n_k =\eta_\ell\}\subset\{1,\ldots,r\}$ for $\ell=1,\ldots, s\,$. For any $\eta_\ell$ of multiplicity 1 (i.e., $\abs{\mathcal{I}_\ell}=1$), the cluster in $Y\in\mathcal{Y}$ of size $\eta_\ell$ can be uniquely assigned to a cluster among $V_1^\star,\ldots,V_r^\star$ of the same size. We now define an ordering for the remaining clusters. Consider an $\eta_\ell$ of multiplicity larger than 1, and restrict attention to clusters $V$ of size $\eta_\ell$ and clusters $V_k^\star$ for $k\in\mathcal{I}_\ell$ (all clusters in $Y^\star$ of size $\eta_\ell$). This is similar to the case in \cite{chen2014statistical} where all sizes are equal: For each new cluster $V$ of size $\eta_\ell$, if there exists a $k\in\mathcal{I}_\ell$ such that $\abs{V\cap V_k^\star}>\tfrac{1}{2}\eta_\ell$ then we label this cluster as $V_k\,$; this label is unique. The remaining unlabeled clusters are labeled arbitrarily by a number in $\mathcal{I}_\ell\,$.
Hence, we labeled all the clusters of $Y$ according to the clusters of $Y^\star\,$. For each $(k, k') \in \{1,\ldots,r\} \times \{1,\ldots,r+1\}$, we use $\alpha_{kk'} := \abs{V_k^\star \cap V_{k'}}$ to denote the sizes of intersections of the true and new clusters. We observe that the new clusters $(V_1,\ldots,V_{r+1})$ have the following properties: \begin{enumerate}[({A}1)] \item $(V_1,\ldots,V_{r+1})$ is a partition of $\{1,\ldots,n\}$ with $\abs{V_k}=n_k$ for all $k=1,\ldots,r\,$; since $Y\in\mathcal{Y}\,$. \item For $\ell\in\{1,\ldots,s\}$ with $\abs{\mathcal{I}_\ell}=1\,$, we have $\alpha_{kk} = n_k$ for the index $k\in\mathcal{I}_\ell\,$. \item For $\ell\in\{1,\ldots,s\}$ with $\abs{\mathcal{I}_\ell}>1\,$, consider any $k\in\mathcal{I}_\ell\,$. Then, exactly one of the following is true: (1) $\alpha_{kk}>\tfrac{1}{2}n_k$; (2) $\alpha_{kk'}\leq \tfrac{1}{2}n_k$ for all $k'\in\mathcal{I}_\ell\,$.
\item For $d_k(Y) = \iprod{Y^\star_{\mathcal{C}_k}}{Y^\star - Y}\,$, where $k=1\,\ldots,r\,$, we have \begin{align*} d_k(Y) &= \abs{\{ (i,j):\; (i,j)\in\mathcal{C}_k^\star\,,\, Y_{ij} = 0 \}} \\ &= \abs{\{ (i,j):\; (i,j)\in\mathcal{C}_k^\star\,,\, (i,j)\in \mathcal{C}_{r+1} \}}
+ \sum_{k'\neq k''} \abs{\{ (i,j):\; (i,j)\in\mathcal{C}_k^\star\,,\, (i,j)\in V_{k'}\times V_{k''} \}} \\ &= \alpha_{k(r+1)}^2 + \sum_{k'\neq k''} \alpha_{kk'}\alpha_{kk''} \,, \end{align*} which implies \begin{align*} \xi = \sum_{k=1}^r \theta_k d_k(Y) = \sum_{k=1}^r \theta_k \alpha_{k(r+1)}^2 + \sum_{k=1}^r \sum_{k'\neq k''} \theta_k\alpha_{kk'}\alpha_{kk''} \,. \end{align*}
Unless specified otherwise, all the summations involving $k'$ or $k''$ are over the range $1,\ldots,r+1\,$. \end{enumerate} We showed that the ordered partition for a $Y\in\mathcal{Y}$ with $\sum_{k=1}^r\theta_kd_k(Y)=\xi$ satisfies the above properties. Therefore, \[ \abs{\{ [Y]\in\mathcal{Y}:\; \sum_{k=1}^r \theta_k d_k(Y) = \xi \}} \leq \abs{\{ (V_1,\ldots, V_{r+1}) \text{ satisfying the above conditions} \}} \,. \] Next, we upper bound the right hand side of the above.
Fix an ordered clustering $(V_1,\ldots, V_{r+1})$ which satisfies the above conditions. Define \[ m_1 := \sum_{k'\neq 1} \alpha_{1k'} \] as the number of nodes in $V_1^\star$ that are misclassified by $Y\,$; hence $m_1+\alpha_{11}=n_1\,$. Consider the following two cases: \begin{itemize} \item if $\alpha_{11}>n_1/4$ we have \[ \sum_{k'\neq k''} \alpha_{1k'}\alpha_{1k''} \geq \alpha_{11} \sum_{k''\neq 1} \alpha_{1k''} > \tfrac{1}{4}n_1m_1 \] \item if $\alpha_{11}\leq n_1/4$ we have $m_1\geq 3n_1/4\,$, and by the aforementioned properties we must have $\alpha_{1k'}\leq n_1/2$ for all $k'=1,\ldots, r\,$. Then, \[ \sum_{k'\neq k''} \alpha_{1k'}\alpha_{1k''} + \alpha_{1(r+1)}^2 \geq \sum_{1\neq k'\neq k''\neq1} \alpha_{1k'}\alpha_{1k''} + \alpha_{1(r+1)}^2 = m_1^2 - \sum_{k'=2}^r \alpha_{1k'}^2 \geq m_1^2 - \tfrac{1}{2}n_1m_1 \geq \tfrac{1}{4}n_1m_1 \] \end{itemize} Therefore, \[ d_1(Y) = \sum_{k'\neq k''} \alpha_{1k'}\alpha_{1k''} + \alpha_{1(r+1)}^2 \geq \tfrac{1}{4} n_1m_1 \] and the analogous bound holds for all other indices besides $k=1\,$. This yields \[
\xi \geq \tfrac{1}{4}\sum_{k=1}^r \theta_kn_km_k \geq \tfrac{1}{4}(\min_k\, \theta_kn_k)\sum_{k=1}^r m_k \quad \implies \quad \bar w := \sum_{k=1}^r m_k \leq \frac{4\xi}{\min_k\, \theta_kn_k} :=M \] where $\bar w$ is the number of misclassified non-isolated nodes. Since one misclassified isolated node produces one misclassified non-isolated node, we have $w_0\leq \bar w\leq M$ where $w_0$ is the number of misclassified isolated nodes.
\begin{itemize} \item The pair of numbers $(\bar w,w_0)$ can take at most $M^2$ different values. \item For each such pair of counts, there are at most ${\bar n}^{2M}$ ways to choose the identity of the misclassified nodes. \item Each misclassified non-isolated node can be assigned to one of $r-1\leq \bar n$ different clusters or be left isolated, and each misclassified isolated node can be assigned to one of $r\leq \bar n$ clusters. \end{itemize} All in all, \[ \abs{\{ [Y]\in\mathcal{Y}:\; \sum_{k=1}^r \theta_k d_k(Y) = \xi \}} \leq M^2 {\bar n}^{4M} = \left(\frac{4\xi}{\min_k\, \theta_kn_k}\right)^2 \exp\left( \frac{16\xi}{\min_k\, \theta_kn_k} \log \bar n \right) \,. \] \end{proof}
\subsection{Proofs for Non-recoverability} \begin{proof}[of cases \ref{condn:impossible-first} and \ref{condn:impossible-second} of Theorem \ref{thm:impossibility}] Let $\operatorname{\mathbb{P}}_{(Y^\star,A)}$ be the joint distribution of $Y^\star$ and $A$, where $Y^\star$ is sampled uniformly from $\mathcal{Y}$ and $A$ is generated according to the heterogenous stochastic block model conditioned on $Y^\star$. Note that \[ \inf_{\hat{Y}}\sup_{Y^\star \in \mathcal{Y}}\operatorname{\mathbb{P}}[\hat{Y}\neq Y^\star]\geq \inf_{\hat{Y}}\operatorname{\mathbb{P}}_{(Y^\star,A)}[\hat{Y}\neq Y^\star]\,. \] By Fano's inequality we have, \begin{equation} \begin{aligned}\label{eqn:Fano} \operatorname{\mathbb{P}}_{(Y^\star,A)}[\hat{Y}\neq Y^\star]\geq 1-\frac{I(Y^\star;A)+1}{\log\abs{\mathcal{Y}}}, \end{aligned} \end{equation} where $I(X;Z)$ denotes the mutual information and $H(X)$ the Shannon entropy of $X$. By a counting argument we find that $\abs{\mathcal{Y}}=\binom{n}{\bar n }\frac{\bar n !}{n_1!\ldots n_r!}$. Using $\sqrt{n}(n/e)^n\leq n!\leq e\sqrt{n}(n/e)^n$ and $\binom{n}{\bar n }\geq (n/\bar n )^{\bar n }$, it follows that \[ \abs{\mathcal{Y}}\geq \frac{n^{\bar n }\sqrt{\bar n }}{e^r\sqrt{n_1\ldots n_r}n_1^{n_1}\ldots n_r^{n_r}} \] which gives \[ \log \abs{\mathcal{Y}}\geq \sum_{i=1}^r n_i\big(\log{n \over n_i}-\frac{\log n_i}{2n_i}\big)-r\geq{1 \over 2}\sum_{i=1}^r n_i\log{n \over n_i}-r\,. \]
On the other hand, note that $H(A)\leq \binom{n}{2}H(A_{12})$ by the chain rule, the fact that $H(X|Y)\leq H(X)$, and the symmetry among the identically distributed $A_{ij}$'s. Furthermore, the $A_{ij}$'s are conditionally independent and hence $H(A|Y^\star)=\binom{n}{2}H(A_{12}|Y^\star_{12})$. It then follows that
$$I(Y^\star;A)=H(A)-H(A|Y^\star)\leq \binom{n}{2}I(Y^\star_{12};A_{12}).$$ Observe that \[ \operatorname{\mathbb{P}}(Y_{12}^\star=1,(1,2)\in \mathcal{C}_i) =\frac{\binom{n-2}{n_i-2} \binom{n-n_i}{n_1,\ldots,n_{i-1},n_{i+1},\ldots,n_r,n_0}}{\abs{\mathcal{Y}}}=\frac{n_i(n_i-1)}{n(n-1)}:=\alpha_i \,. \] By the law of total probability, $\operatorname{\mathbb{P}}(A_{12}=1)=\sum_{i=1}^r \alpha_i p_i+(1-\sum_{i}\alpha_i) q:=\beta\,$. Therefore, \begin{equation} \begin{aligned} \label{eqn:mutualinfo} I(Y^\star_{12};A_{12})=\sum_{i=1}^r\alpha_i D_{\mathrm{KL}}( p_i,\beta)+(1-\sum_{i}\alpha_i)D_{\mathrm{KL}}( q,\beta) {= H(\beta) - \sum \alpha_i H(p_i) - (1-\sum \alpha_i) H(q)} \end{aligned} \end{equation}
Since $I(Y^\star;A)\leq \binom{n}{2}I(Y_{12}^\star;A_{12})$, plugging in the following condition in Fano's inequality \eqref{eqn:Fano}, \begin{equation} \begin{aligned} \label{eq:imp-bound} \big( {\tfrac{1}{2}\sum_i n_i\log \frac{n}{n_i}-r} \big) \geq 2+2\binom{n}{2} I(Y^\star_{12};A_{12}) \,, \end{aligned} \end{equation} guarantees $\operatorname{\mathbb{P}}_{(Y^\star,A)}(\hat{Y}\neq Y^\star)\geq \tfrac{1}{2}$. In the following, we bound $I(Y^\star_{12};A_{12})$ in two different ways to derive conditions \ref{condn:impossible-first} and \ref{condn:impossible-second} of Theorem \ref{thm:impossibility}. Throughout the proof we use the following inequality from \cite{chen2014statistical} for the Kullback-Leibler divergence of Bernoulli variables, \begin{equation} \begin{aligned} \label{eqn:KL-ineq} D_{\mathrm{KL}}(p,q):=D_{\mathrm{KL}}(\operatorname{Ber}(p),\operatorname{Ber}(q))=p \log \frac{p}{q}+(1-p)\log\frac{1-p}{1-q}\leq \frac{(p-q)^2}{q(1-q)}\,, \end{aligned} \end{equation} where the inequality is established by $\log x \leq x-1\,$, for any $x\geq 0\,$.
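The inequality \eqref{eqn:KL-ineq} can be checked numerically; the sketch below (over an arbitrary grid of Bernoulli parameters) verifies that the quadratic upper bound dominates the Bernoulli KL-divergence everywhere on the grid.

```python
import numpy as np

def kl_bernoulli(p, q):
    """D_KL(Ber(p) || Ber(q))."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def quadratic_bound(p, q):
    """Upper bound (p - q)^2 / (q (1 - q)), obtained via log x <= x - 1."""
    return (p - q) ** 2 / (q * (1 - q))

# Minimum slack of the bound over a grid of (p, q) pairs in (0, 1).
grid = np.linspace(0.05, 0.95, 19)
gap = min(quadratic_bound(p, q) - kl_bernoulli(p, q)
          for p in grid for q in grid)
```

The gap is zero exactly at $p=q$ and strictly positive elsewhere.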
\begin{itemize} \item From \eqref{eqn:mutualinfo}, we have \begin{equation} \begin{aligned} I(Y^\star_{12};A_{12}) \leq \sum_{i=1}^r \frac{4\alpha_i (p_i-q)^2}{ q(1- q)} \leq \frac{ 4 \sum_{i=1}^r n_i^2(p_i-q)^2}{n(n-1) q(1- q)} \end{aligned} \end{equation} where we assumed $\sum n_i^2 \leq \tfrac{1}{2}n^2\,$. Now, the right hand side of \eqref{eq:imp-bound} can be bounded as \[ 2\binom{n}{2} I(Y^\star_{12};A_{12}) \leq \frac{4\sum_{i=1}^r n_i^2(p_i-q)^2}{q(1- q)} =4\sum_{i=1}^r n_i^2\widetilde D(p_i,q) \] and gives the sufficient condition \ref{condn:impossible-first} of Theorem \ref{thm:impossibility}.
\item
Again from (\ref{eqn:mutualinfo}), we have \begin{equation*} \begin{aligned} I(Y_{12}^\star;A_{12}) & = \sum_i \alpha_i \bigg( p_i \log \frac{ p_i}{\beta}+\big(1- p_i\big)\log\frac{1- p_i}{1-\beta}\bigg)+\big(1-\sum_i \alpha_i\big)D_{\mathrm{KL}}( q,\beta)\\ &\leq \sum \alpha_i p_i \log \frac{1}{\alpha_i} +\log c + \big(1-\sum_i \alpha_i\big)\frac{( q-\beta)^2}{\beta(1-\beta)} \end{aligned} \end{equation*} where the first term is bounded via $\beta\geq \sum_i \alpha_i p_i\geq \alpha_i p_i\,$, the second term is bounded via $\beta\leq p_{\max}$ and $c=({1- p_{\min}})/({1- p_{\max}})\,$, and we used \eqref{eqn:KL-ineq} for the last term. Since $1-\beta=1- q-\sum_i \alpha_i (p_i-q)\geq (1-\sum_i\alpha_i)(1- q)\,$, the last term can be bounded as \begin{equation*} \begin{aligned} \big(1-\sum_i \alpha_i\big)\frac{( q-\beta)^2}{\beta(1-\beta)}&\leq \big(1-\sum_i \alpha_i\big)\frac{\big(\sum_i \alpha_i (p_i-q)\big)^2}{\big(\sum_i \alpha_i p_i\big)\big(1-\sum_i\alpha_i\big)(1- q)}\leq \sum_i\alpha_i (p_i-q)\leq \sum_i\alpha_i p_i \,. \end{aligned} \end{equation*} This implies \begin{equation} \begin{aligned} I(Y_{12}^\star;A_{12}) & \leq \sum_i \alpha_i p_i\log\tfrac{1}{\alpha_i}+ \sum_i\alpha_i p_i+\log c \leq \sum_i \alpha_i p_i\log\tfrac{e}{\alpha_i}+\log c.\\ \end{aligned} \end{equation} Since $n_i\geq 2$, $\alpha_i=\frac{n_i(n_i-1)}{n(n-1)}\geq \tfrac{n_i^2}{en^2}$. Hence \begin{equation*} \begin{aligned} 2\binom{n}{2}I(Y_{12}^\star;A_{12}) \leq n(n-1) \sum_i\frac{n_i(n_i-1)}{n(n-1)}p_i\log\frac{e^2n^2}{n_i^2}+2\log c \leq 2\sum_i n_i^2p_i\log \frac{e n}{n_i} +2\log c \end{aligned} \end{equation*} which gives the sufficient condition \ref{condn:impossible-second} of Theorem \ref{thm:impossibility}. \end{itemize} \end{proof}
\begin{proof}[of case \ref{condn:impossible-third} in Theorem \ref{thm:impossibility}] Without loss of generality assume $n_1\leq n_2\leq \ldots\leq n_r\,$. Let $M:=\bar n-n_{\min}=\bar n-n_1\,$, and $\bar \mathcal{Y} := \{Y_0,Y_1,\ldots,Y_M\}\,$, where $Y_0$ is the clustering matrix with clusters $\{\mathcal{C}_\ell\}_{\ell=1}^r$ corresponding to $V_1=\{1,\ldots,n_1\}$ and $V_\ell =\{\sum_{i=1}^{\ell-1} n_i+1,\ldots,\sum_{i=1}^\ell n_i\}$ for $\ell= 2,\ldots,r\,$. The other members of $\bar\mathcal{Y}$ are obtained by swapping an element of $\cup_{\ell=2}^r V_\ell$ with an element of $V_1\,$. Let $\operatorname{\mathbb{P}}_i$ be the distributional law of the graph $A$ conditioned on $Y^\star=Y_i\,$. Since $\operatorname{\mathbb{P}}_i$ is a product of ${1 \over 2}n(n-1)$ Bernoulli random variables, we have
\begin{equation} \begin{aligned} I(Y^\star;A)&=\operatorname{\mathbb{E}}_Y\left[D_{\mathrm{KL}}\left(\operatorname{\mathbb{P}}(A|Y),\operatorname{\mathbb{P}}(A)\right)\right]\\ &=\tfrac{1}{M+1}\sum_{i=0}^M D_{\mathrm{KL}}\big(\operatorname{\mathbb{P}}_i ,\tfrac{1}{M+1}\sum_{j=0}^M \operatorname{\mathbb{P}}_{j}\big)\\ &\leq \tfrac{1}{(M+1)^2}\sum_{ i,j=0}^MD_{\mathrm{KL}}(\operatorname{\mathbb{P}}_i,\operatorname{\mathbb{P}}_{j})\\ &\leq \max_{i,j=0,\ldots,M} \;D_{\mathrm{KL}}(\operatorname{\mathbb{P}}_i,\operatorname{\mathbb{P}}_{j})\\ &\leq \max_{i_1,i_2,i_3=1,\ldots,r}\; \sum_{j=1}^3 \left( \frac{n_{i_j}(p_{i_j}-q)^2}{q(1-q)} + \frac{n_{i_j}(p_{i_j}-q)^2}{p_{i_j}(1-p_{i_j})} \right) \\
&\leq 3 \max_{i=1,\ldots,r}\; \left( \frac{n_{i}(p_{i}-q)^2}{q(1-q)} + \frac{n_{i}(p_{i}-q)^2}{p_{i}(1-p_{i})} \right) \end{aligned} \end{equation} where the third line follows from the convexity of KL-divergence, and the line before the last follows from the construction of $\bar\mathcal{Y}$ and \eqref{eqn:KL-ineq}. Now if the condition of the theorem holds, then $I(Y^\star;A)\leq {1 \over 4}\log(n-n_{\min})={1 \over 4}\log\abs{\bar\mathcal{Y}}$. Note that for $n\geq 128$ we get $\log\abs{\bar\mathcal{Y}}=\log(n-n_{\min})\geq \log(n/2)\geq 4\,$. The conclusion follows by Fano's inequality in \eqref{eqn:Fano} restricting the supremum to be taken over $\bar \mathcal{Y}\,$. \end{proof}
\section{Recovery by a Simple Counting Algorithm} \label{sec:simple} In Section \ref{sec:convex}, we considered a tractable approach for exact recovery of (partially) observed models generated according to the heterogenous stochastic block model. However, in the interest of computational efficiency, one can further characterize a subset of models that are recoverable via a much simpler method than the convex program. The following algorithm is such a proposal, and the next theorem characterizes the models for which this simple thresholding algorithm achieves exact recovery. Here, we allow for isolated nodes as described in Section \ref{sec:main-results}.
\begin{algorithm}[H] \begin{algorithmic}[1] \STATE (Find isolated nodes) For each node $v$, compute its degree $d_v$. Declare $v$ as isolated if $$d_v <\min_k \frac{(n_k-1) (p_k-q)}{2}+(n-1) q.$$ \STATE (Find all communities) For every pair of nodes $(v,u)$, compute the number of common neighbors $S_{vu}:=\sum_{w\neq v,u}A_{vw}A_{uw}$. Declare $v,u$ as in the same community if
\[ S_{vu}>nq^2 + {1 \over 2}\left(\min_k \left((n_k-2) p_k^2-n_k q^2\right)+q\cdot \max_{k\neq l}\left(\rho_k- p_k+ \rho_l-p_l\right)\right) \] where $\rho_k = n_k(p_k-q)\,$. \end{algorithmic} \caption{{\sc Simple Thresholding Algorithm}} \label{alg:simplethresholding} \end{algorithm}
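As an illustration (not part of the analysis), the sketch below runs both thresholding steps on a synthetic instance; the community sizes, edge probabilities, and number of isolated nodes are hypothetical choices for which the separation conditions of Theorem \ref{thm:simple-recovery} hold with a wide margin.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical planted instance: two communities plus isolated nodes.
sizes, probs, q = [80, 60], [0.95, 0.90], 0.02
labels = np.array([0] * 80 + [1] * 60 + [-1] * 20)   # -1 marks isolated nodes
n = len(labels)

P = np.full((n, n), q)
for k, p in enumerate(probs):
    idx = np.flatnonzero(labels == k)
    P[np.ix_(idx, idx)] = p
np.fill_diagonal(P, 0.0)
A = (np.triu(rng.random((n, n)), 1) < np.triu(P, 1)).astype(int)
A = A + A.T

# Step 1: degree threshold separating isolated nodes from cluster nodes.
deg_thresh = min((nk - 1) * (pk - q) / 2
                 for nk, pk in zip(sizes, probs)) + (n - 1) * q
isolated = A.sum(axis=1) < deg_thresh
errors_iso = int((isolated != (labels == -1)).sum())

# Step 2: common-neighbor threshold separating same-community pairs.
rho = [nk * (pk - q) for nk, pk in zip(sizes, probs)]
S_thresh = n * q**2 + 0.5 * (
    min((nk - 2) * pk**2 - nk * q**2 for nk, pk in zip(sizes, probs))
    + q * max(rho[k] - probs[k] + rho[l] - probs[l]
              for k in range(2) for l in range(2) if k != l))
S = A @ A                     # S[v, u] counts common neighbors of v and u
same = S > S_thresh

# Compare pairwise decisions with the planted labels on non-isolated nodes.
nz = np.flatnonzero(labels >= 0)
truth = labels[nz][:, None] == labels[nz][None, :]
pred = same[np.ix_(nz, nz)]
iu = np.triu_indices(len(nz), 1)
pair_errors = int((pred[iu] != truth[iu]).sum())
```

With these parameters both steps succeed: the isolated set and the pairwise community decisions match the planted structure.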
\begin{theorem} \label{thm:simple-recovery} Under the stochastic block model, with probability at least $1-2n^{-1}$, the simple counting Algorithm \ref{alg:simplethresholding} finds the isolated nodes provided \begin{equation}\label{simpiso} \min_k \, (n_k-1)^2 (p_k-q)^2\geq 19(1-q)\left(\max_k\, n_k p_k+n q \right) \log n \,. \end{equation} Furthermore, the algorithm finds the clusters if \begin{equation} \begin{aligned}\label{simpfindcluster}
&\left[\min_k \left\{(n_k-2)p_k^2+(n-n_k)q^2\right\} -q\, \max_{k\neq l}\left\{ (n_k-1) p_k+(n_l-1) p_l+(n-n_k-n_l) q\right\}\right]^2\\ \geq & \quad 26 (1- q^2) \left(\max_k\, n_k p_k^2+n q^2\right) \log n \,, \end{aligned} \end{equation} provided the term inside the bracket (which is squared) is non-negative. \end{theorem} We remark that the following condition is slightly more restrictive than \eqref{simpfindcluster} but easier to interpret: \begin{equation} \begin{aligned} \left[\min_k n_k(p_k^2-q^2) -2q \rho_{\max} \right]^2 \geq 26(1-q^2) \left[n q^2 +\max_k\, n_k p_k^2\right] \log n \,. \end{aligned} \end{equation}
\begin{proof}[of Theorem \ref{thm:simple-recovery}] For node $v$, let $d_v$ denote its degree. Let $\bar V=\cup_{i=1}^r V_i$ denote the set of nodes which belong to one of the clusters, and $V_0$ the set of isolated nodes. If $v \in V_i$ for some $i=1,\ldots,r\,$, then $d_v$ is distributed as a sum of independent binomial random variables $\operatorname{Bin}(n_i-1, p_i)$ and $\operatorname{Bin}(n-n_i, q)\,$. If $v \in V_0\,$, then $d_v$ is distributed as $\operatorname{Bin}(n-1, q)\,$. Hence we have $$\operatorname{\mathbb{E}}[d_v]= \begin{cases} (n_i-1) p_i+(n-n_i) q & v \in V_i\subset \bar V\\ (n-1) q & v \in V_0 \,,\\ \end{cases} $$ and $$\mathrm{Var}[d_v]= \begin{cases} (n_i-1) p_i(1- p_i)+(n-n_i) q(1- q) & v \in V_i\subset \bar V\\ (n-1) q(1- q) & v \in V_0 \,.\\ \end{cases} $$ Let $\kappa_0^2:=\max_i n_i p_i(1- q)+n q(1- q)$, and $t=\min_i \frac{(n_i-1) (p_i-q)}{2}\leq \frac{\kappa_0^2}{2}$. Then $\mathrm{Var}[d_v]\leq \kappa_0^2$ for any $v\in V_0\cup \bar V\,$. By Bernstein's inequality we get \begin{equation} \begin{aligned} \operatorname{\mathbb{P}}\big[\abs{d_v-\operatorname{\mathbb{E}}[d_v]} > t\big]&\leq 2\exp\bigg(-\frac{t^2}{2\kappa_0^2+2t/3}\bigg)\leq 2\exp\bigg(-\frac{3\min_i (n_i-1)^2 (p_i-q)^2}{28\kappa_0^2}\bigg)\leq 2n^{-2}, \end{aligned} \end{equation} where the last inequality follows from the condition \eqref{simpiso}. Now, by a union bound over all nodes, with probability at least $1-2n^{-1}$, for every node $v \in V_i \subset \bar V$ we have \begin{align} d_v\geq (n_i-1) p_i+(n-n_i) q-t>\min_i \frac{(n_i-1) (p_i-q)}{2}+(n-1) q\,, \end{align} and for every node $v \in V_0\,$, \begin{align} d_v\leq (n-1) q+t<\min_i \frac{(n_i-1) (p_i-q)}{2}+(n-1) q\,. \end{align} This proves the first statement of the theorem: all the isolated nodes are correctly identified. For the second statement, let $S_{vu}$ denote the number of common neighbors of nodes $v,u \in \bar V$.
Then \begin{equation*} \begin{aligned} S_{vu}\sim_d \begin{cases} \operatorname{Bin}(n_i-2, p_i^2)+\operatorname{Bin}(n-n_i, q^2) & (v,u)\in V_i\times V_i\\ \operatorname{Bin}(n_i-1, p_iq)+\operatorname{Bin}(n_j-1, p_jq)+\operatorname{Bin}(n-n_i-n_j, q^2) & (v,u)\in V_i\times V_j \,,\; i\neq j \\ \end{cases} \end{aligned} \end{equation*} where $\sim_d$ denotes equality in distribution and $+$ denotes the summation of independent random variables. Hence $$\operatorname{\mathbb{E}}[S_{vu}]= \begin{cases} (n_i-2) p_i^2+(n-n_i) q^2 & (v,u) \in V_i\times V_i \\ (n_i-1) p_iq+(n_j-1) p_jq+(n-n_i-n_j) q^2 & (v,u) \in V_i\times V_j\,,\; i\neq j \\ \end{cases} $$ and $$\mathrm{Var}[S_{vu}]= \begin{cases} (n_i-2) p_i^2(1- p_i^2)+(n-n_i) q^2(1- q^2) & (v,u) \in V_i\times V_i\\ (n_i-1) p_iq(1- p_iq)+(n_j-1) p_jq(1- p_jq)\\ \quad\quad+(n-n_i-n_j) q^2(1- q^2) & (v,u) \in V_i\times V_j\,,\; i\neq j \\ \end{cases} $$
Define \begin{align*} \Delta &=\min_i \big((n_i-2) p_i^2+(n-n_i) q^2\big)-\max_{j}\big(2(n_j-1) p_jq+(n-2n_j) q^2\big) \\ &=\min_i \big((n_i-2) p_i^2 -n_i q^2\big)-\max_{j}\big(2(n_j-1) p_jq - 2n_j q^2\big) \,, \end{align*} and let $\kappa_1^2:=2\max_i n_i p_i^2(1- q^2)+n q^2(1- q^2)\,$. Then $\mathrm{Var}[S_{vu}]\leq \kappa_1^2$ for all $v,u\,$, and $\Delta \leq \kappa_1^2/2\,$. Bernstein's inequality with $t=\Delta/2$ yields \begin{equation} \begin{aligned} \operatorname{\mathbb{P}}\big[\abs{S_{vu}-\operatorname{\mathbb{E}}[S_{vu}]}>t\big]&\leq 2\exp\bigg(-\frac{t^2}{2\kappa_1^2+2t/3}\bigg)\leq 2\exp\bigg(-\frac{3\Delta^2}{26\kappa_1^2}\bigg)\leq 2n^{-3}, \end{aligned} \end{equation} where the last inequality follows from assumption \eqref{simpfindcluster}. By a union bound over all pairs of nodes $(v,u)$, we get that with probability at least $1-2n^{-1}$, $S_{vu}> \Gamma$ for all $v,u$ in the same cluster and $S_{vu}<\Gamma$ otherwise. Here $$\Gamma:={1 \over 2}\bigg(\min_i \big((n_i-2) p_i^2+(n-n_i) q^2\big)+\max_{i\neq j}\big((n_i-1) p_iq+(n_j-1) p_jq+(n-n_i-n_j) q^2\big)\bigg).$$
\end{proof}
\section{Detailed Computations for Examples in Section \ref{sec:this-paper}} \label{app:verification}
In the following, we present the detailed computations for the examples in Section \ref{sec:this-paper}, summarized in Table \ref{tab:examples}. Quantities are approximated, denoted by $\approx\,$, whenever this has no impact on the final result.
First, we repeat the conditions of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2}. The conditions of Theorem \ref{thm:convex_recovery} can be equivalently stated as \begin{itemize} \item $\rho_k^2 \gtrsim n_kp_k(1-p_k)\log n_k = \sigma_k^2 \log n_k $ \item $(p_{\min}-q)^2 \gtrsim q(1-q)\tfrac{\log n_{\min}}{n_{\min}}$ \item $\rho_{\min}^2 \gtrsim \max \left\{ \log n, nq(1-q), \max_k \, n_kp_k(1-p_k) \right\}$ \item $\sum_{k=1}^r n_k^{-\alpha} = o(1)$ for some $\alpha>0\,$. \end{itemize} Notice that $n_kp_k(1-p_k)\gtrsim \log n_k\,$, for $k=1,\ldots,r\,$, is implied by the first condition, as mentioned in Remark \ref{rem:connected-convex}.
The conditions of Theorem \ref{thm:convex_recovery2} can be equivalently stated as \begin{itemize} \item $\rho_k^2 \gtrsim n_kp_k(1-p_k)\log n$ \item $(p_{\min}-q)^2 \gtrsim q(1-q)\tfrac{\log n}{n_{\min}}$ \item $\rho_{\min}^2 \gtrsim \max \left\{ nq(1-q), \max_k \, n_kp_k(1-p_k) \right\}$. \end{itemize}
\begin{remark}\label{rem:pq-far} Provided that both $p_k$ and $q/p_k$ are bounded away from $1\,$, we have \begin{align} \widetilde D(q,p_k) = p_k \frac{(1-q/p_k)^2}{1-p_k} \approx p_k \quad,\quad \frac{\rho_k^2}{\sigma_k^2} = \frac{(1-q/p_k)^2}{1-p_k}\, n_kp_k \approx n_kp_k \,. \end{align} This simplifies the first condition of Theorem \ref{thm:convex_recovery} to a simple connectivity requirement. Hence, we can rewrite the conditions of Theorems \ref{thm:convex_recovery}, \ref{thm:convex_recovery2} as \begin{align*} \ref{thm:convex_recovery}& :
n_kp_k\gtrsim \log n_k\,,
\widetilde D(p_{\min},q)\gtrsim \tfrac{\log n_{\min}}{n_{\min}}\,,
\rho_{\min}^2 \gtrsim \max \left\{ \sigma_{\max}^2, nq(1-q),\log n \right\},
\sum_{k=1}^r n_k^{-\alpha} = o(1) \text{ for some } \alpha>0 \\ \ref{thm:convex_recovery2} &:
n_kp_k\gtrsim \log n\,,
\widetilde D(p_{\min},q)\gtrsim \tfrac{\log n}{n_{\min}}\,,
\rho_{\min}^2 \gtrsim \max \left\{ \sigma_{\max}^2, nq(1-q)\right\}\,. \end{align*} \end{remark}
\paragraph{Example \ref{ex:we-can1}:} In a configuration with two communities $\tri{n-\sqrt{n}}{n^{-2/3}}{1}$ and $\tri{\sqrt{n}}{\tfrac{1}{\log n}}{1}$ with $q=n^{-2/3-0.01}\,$, we have $n_{\min} = \sqrt{n}$ and $p_{\min} = n^{-2/3}\,$. We have, \[ \widetilde D(p_{\min},q) \approx n^{-2/3+0.01} \] which does not exceed either $\tfrac{\log n_{\min}}{n_{\min}} \approx \tfrac{\log n}{\sqrt{n}}$ or $\tfrac{\log n}{n_{\min}} \approx \tfrac{\log n}{\sqrt{n}}\,$, and we get no recovery guarantee from Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} respectively. However, as $p_{\min}-q$ is not much smaller than $q\,$, while $\rho_{\min}\approx n^{1/3}$ grows much faster than $\log n\,$, the condition of Theorem \ref{thm:hard-recovery} trivially holds.
Here are the related quantities for this configuration: \[ \rho_1 = n_1(p_1-q) = (n-\sqrt{n})(n^{-2/3} - n^{-2/3-0.01}) \approx n^{1/3} \quad , \quad \rho_2 = n_2(p_2-q) = \sqrt{n}(\tfrac{1}{\log n} - n^{-2/3-0.01}) \approx \tfrac{\sqrt{n}}{\log n} \] which gives $\rho_{\min}\approx n^{1/3}\,$. Furthermore, \[ \sigma_1^2 = n_1p_1(1-p_1) \approx n^{1/3} \quad , \quad \sigma_2^2 = n_2p_2(1-p_2) = \tfrac{\sqrt{n}}{\log n}\,, \] which gives $\sigma_{\max} = \tfrac{\sqrt{n}}{\log n}\,$. On the other hand $nq(1-q) \approx n^{1/3-0.01}$ which is smaller than $\sigma_{\max}^2\,$.
\paragraph{Example \ref{ex:we-can2}:} Consider a configuration with $\tri{n-n^{2/3}}{n^{-1/3+\epsilon}}{1}$ and $\tri{\sqrt{n}}{\tfrac{c}{\log n}}{n^{1/6}}$ and $q=n^{-2/3+3\epsilon}\,$. Since all $p_k$'s and $q/p_k$'s are much less than $1\,$, the first condition of both Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} can be verified by Remark \ref{rem:pq-far}. Moreover, $n_{\min} = \sqrt{n}$ and $p_{\min} =n^{-1/3+\epsilon}$ which gives \[ \widetilde D(p_{\min},q) = n^{-\epsilon} \] and verifies $\widetilde D(p_{\min},q) \gtrsim \tfrac{\log n_{\min}}{n_{\min}}$ for \ref{thm:convex_recovery}, as well as $\widetilde D(p_{\min},q) \gtrsim \tfrac{\log n}{n_{\min}}$ for \ref{thm:convex_recovery2}. Moreover, $\rho_1 \approx n^{2/3+\epsilon}$ and $\rho_2 \approx \tfrac{\sqrt{n}}{\log n}$ which gives $\rho_{\min}\approx \tfrac{\sqrt{n}}{\log n} \gtrsim \sqrt{\log n}\,$. On the other hand, $\sigma_1^2 \approx n^{2/3+\epsilon}$ and $\sigma_2^2 \approx \sqrt{n}/\log n$ which gives \[ \max\{ \sigma_{\max}^2\,, \, nq(1-q)\} \approx n^{2/3+\epsilon}\,. \] Thus all conditions of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} are satisfied. Moreover, as $p_{\min}-q$ is not much smaller than $q\,$, while $\rho_{\min}\approx \tfrac{\sqrt{n}}{\log n}$ is growing much faster than $\log n\,$, the condition of Theorem \ref{thm:hard-recovery} trivially holds.
\paragraph{Example \ref{ex:cvx-thm1-sqrtlogn}:}
Consider a configuration with $\tri{\sqrt{\log n}}{O(1)}{m}$ and $\tri{n_2}{O(\tfrac{\log n}{\sqrt{n}})}{\sqrt{n}}$ and $q=O(\log n/n)\,$, where $n_2 = \sqrt{n} - m \sqrt{\log n / n}\,$. Here, we assume $m\leq n/(2\sqrt{\log n})$ which implies $n_2 \geq \sqrt{n}/2\,$. Since all $p_k$'s and $q/p_k$'s are much less than $1\,$, we can use Remark \ref{rem:pq-far}: the first condition of Theorem \ref{thm:convex_recovery} holds as $n_1p_1 \approx \sqrt{\log n} \gtrsim \log n_1\approx \log\log n$ and $n_2p_2\approx \log n \gtrsim \log n_2\,$. However, $n_1p_1\approx \sqrt{\log n} \not\gtrsim \log n$ and Theorem \ref{thm:convex_recovery2} does not offer a guarantee for this configuration.
Moreover, $n_{\min} = \sqrt{\log n}$ and $p_{\min} =O(\tfrac{\log n}{\sqrt{n}})$ which gives \[ \widetilde D(p_{\min},q) = \log n \] and verifies $\widetilde D(p_{\min},q) \gtrsim \tfrac{\log n_{\min}}{n_{\min}} \approx \tfrac{\log\log n}{\sqrt{\log n}}$ for \ref{thm:convex_recovery}, as well as $\widetilde D(p_{\min},q) \gtrsim \tfrac{\log n}{n_{\min}}= \sqrt{\log n}$ for \ref{thm:convex_recovery2}. Moreover, $\sigma_1^2 = \sqrt{\log n}$ (also $\rho_1$) and $\sigma_2^2 = \log n$ (also $\rho_2$) which gives \[ \max\{ \sigma_{\max}^2\,, \, nq(1-q)\} \approx \log n \] and $\rho_{\min}^2\approx \log n\,$. For the last condition of Theorem \ref{thm:convex_recovery} we need \[ m (\log n)^{-\alpha/2} + \sqrt{n} (\sqrt{n} - m\sqrt{\tfrac{\log n}{n}})^{-\alpha} = o(1) \] for some $\alpha>0$ which can be guaranteed provided that $m$ grows at most polylogarithmically in $n\,$. All in all, we verified the conditions of Theorem \ref{thm:convex_recovery} while the first condition of \ref{thm:convex_recovery2} fails. Observe that $\rho_{\min}$ fails the condition of Theorem \ref{thm:hard-recovery}.
Alternatively, consider a configuration with $\tri{\sqrt{\log n}}{O(1)}{m}$ and $\tri{\sqrt{n}}{O(\tfrac{\log n}{\sqrt{n}})}{m' }$ and $q = O(\tfrac{\log n}{n}) \,$, where $m' = \sqrt{n} - m \sqrt{\log n / n}$ to ensure a total of $n$ vertices. Here, we assume $m\leq n/(2\sqrt{\log n})$ which implies $m' \geq \sqrt{n}/2\,$. Similarly, all conditions of Theorem \ref{thm:convex_recovery} can be verified provided that $m$ grows at most polylogarithmically in $n\,$. Moreover, the conditions of Theorems \ref{thm:convex_recovery2} and \ref{thm:hard-recovery} fail to be satisfied.
\paragraph{Example \ref{ex:cvx-thm2-slogn}:} Consider a configuration with $\tri{\tfrac{1}{2}n^\epsilon}{O(1)}{n^{1-\epsilon}}$ and $\tri{\tfrac{1}{2}n}{n^{-\alpha}\log n}{1}$ and $q = n^{-\beta}\log n\,$, where $0<\alpha<\beta<1$ and $0<\epsilon<1\,$.
We have $\rho_1 \approx n^\epsilon$ and $\rho_2 \approx n^{1-\alpha} \log n\,$. Since $\rho_{\min}^2\gtrsim \log n\,$, the last condition of Theorem \ref{thm:convex_recovery} holds, and $\log n_{\min} \approx \log n\,$, we need to check for similar conditions to be able to use Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2}. Using Remark \ref{rem:pq-far}, the first condition of both Theorems holds because of $n_1p_1 \approx n^\epsilon \gtrsim \log n$ and $n_2p_2 \approx n^{1-\alpha}\log n \gtrsim \log n\,$. Moreover, the condition \[ \widetilde D(p_{\min}, q) \approx n^{\beta-2\alpha} \log n \gtrsim \tfrac{\log n}{n_{\min}}\approx \tfrac{\log n}{n^\epsilon} \] is equivalent to $\beta+\epsilon > 2\alpha\,$. Furthermore, $\sigma_1^2 = n^\epsilon$ and $\sigma_2^2 = n^{1-\alpha} \log n\,$, and for the last condition we need \[ \min\{ n^{2\epsilon}\,,\, n^{2-2\alpha} \log^2 n \} \gtrsim \max\{ n^{\epsilon} \,,\, n^{1-\alpha}\log n\,,\, n^{1-\beta}\log n \} \] which is equivalent to $2\epsilon + \alpha >1$ and $\epsilon+2\alpha <2\,$. Notice that $\beta+1>2\alpha$ is automatically satisfied when we have $\beta+\epsilon > 2\alpha$ from the previous part.
\paragraph{Example \ref{ex:cvx-thm2-logn}:} Consider a configuration with $\tri{\log n}{O(1)}{\tfrac{n}{\log n}- m \sqrt{\tfrac{n}{\log n}}}$ and $\tri{\sqrt{n\log n}}{O(\sqrt{\tfrac{\log n}{n}})}{m}$ and $q = O(\tfrac{\log n}{n})\,$. All of $\rho_1\,$, $\rho_2\,$, $\sigma_1^2\,$, $\sigma_2^2\,$, and $nq(1-q)\,$, are approximately equal to $\log n\,$. Thus, the first and third conditions of Theorems \ref{thm:convex_recovery} and \ref{thm:convex_recovery2} are satisfied. Moreover, \[ \widetilde D(p_{\min},q)\approx 1 \gtrsim \tfrac{\log n_{\min}}{n_{\min}} \approx \tfrac{\log \log n}{\log n} \] which establishes the conditions of Theorem \ref{thm:convex_recovery2}. On the other hand, the last condition of Theorem \ref{thm:convex_recovery} is not satisfied as one cannot find a constant value $\alpha>0$ for which \[ \sum_{k=1}^r n_k^{-\alpha} = \left( \tfrac{n}{\log n}- m \sqrt{\tfrac{n}{\log n}} \right) \log^{-\alpha}n + m (n\log n)^{-\alpha/2} \] is $o(1)$ while $n$ grows.
\paragraph{Example \ref{ex:hard}:} For the first configuration, Theorem \ref{thm:convex_recovery} requires $ f^2(n) \gtrsim \max\{ \tfrac{\log n_1}{n_1} \,,\, \tfrac{\log n_{\min}}{n_{\min}} \,,\, \tfrac{n}{n_1^2} \} $ while Theorem \ref{thm:convex_recovery2} requires $ f^2(n) \gtrsim \max\{ \tfrac{\log n_1}{n_1} \,,\, \tfrac{\log n}{n_{\min}} \,,\, \tfrac{n}{n_1^2} \} $ and both require $n_{\min}\gtrsim \sqrt{n}\,$. Therefore, both sets of requirements can be written as \[ f^2(n) \gtrsim \max\{ \tfrac{\log n}{n_{\min}} \,,\, \tfrac{n}{n_1^2} \}\quad,\quad n_{\min}\gtrsim \sqrt{n}\,. \]
\end{document}
# Basic image processing techniques
Image processing is a fundamental technique in computer vision that involves manipulating and analyzing digital images. It is a crucial step in image recognition tasks, as it helps preprocess the images to extract relevant features that can be used by machine learning algorithms.
There are several basic image processing techniques that are commonly used in computer vision:
- **Grayscale conversion**: Converting an image from its original color representation to a grayscale version. This simplifies the image and reduces the amount of data to process.
- **Image resizing**: Resizing an image to a specific size, which can be useful for standardizing the input size for machine learning algorithms.
- **Image normalization**: Normalizing the pixel values of an image to a specific range, such as [0, 1] or [0, 255]. This helps ensure that the pixel values are consistent across different images.
- **Image filtering**: Applying filters to an image to enhance or suppress certain features, such as edges or textures. Common filters include Gaussian blur, Sobel operator, and Laplacian.
- **Image thresholding**: Converting an image to binary (black and white) by setting a threshold for pixel values. This is useful for extracting specific features from an image.
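Several of these operations reduce to a few lines of array arithmetic. The sketch below (assuming NumPy is available; the function names are our own) normalizes an image to the [0, 1] range and then thresholds it into a binary mask:

```python
import numpy as np

def normalize(img):
    """Scale pixel values linearly into [0, 1]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def threshold(img, t):
    """Binarize: pixels >= t become 1, all others 0."""
    return (img >= t).astype(np.uint8)

img = np.array([[10, 200], [50, 130]], dtype=np.uint8)
norm = normalize(img)         # values now span exactly [0, 1]
binary = threshold(img, 128)  # [[0, 1], [0, 1]]
```

Resizing, by contrast, requires interpolation between pixels and is usually delegated to a library rather than written by hand.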
Given an RGB image, convert it to grayscale using the following formula:
$$Grayscale(Image) = 0.299 \times Red(Image) + 0.587 \times Green(Image) + 0.114 \times Blue(Image)$$
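This formula is a weighted sum over the three color channels. A minimal NumPy sketch, assuming the image is an array of shape height x width x 3:

```python
import numpy as np

def to_grayscale(rgb):
    """Apply the luminance formula channel-wise (rgb: H x W x 3)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the last (channel) axis

# A 1x2 RGB image: one pure red pixel, one pure white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.float64)
gray = to_grayscale(img)  # approximately [[76.245, 255.0]]
```

The red pixel maps to 255 x 0.299, and the white pixel keeps its full intensity because the three weights sum to 1.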
## Exercise
Resize the following image to a 100x100 pixel size:
```
[Image]
```
# Feature extraction and selection
Feature extraction and selection are crucial steps in image recognition. They involve extracting relevant features from images that can be used as input to machine learning algorithms. The goal is to select the most informative and discriminative features that can help classify or detect objects in the images.
There are several feature extraction techniques that can be used in image recognition:
- **Edge detection**: Detecting edges in an image using techniques like Sobel operator or Canny algorithm. Edges are important features as they represent the boundaries of objects in the image.
- **Texture analysis**: Analyzing the texture of an image using techniques like Gabor filters or local binary patterns. Texture is a useful feature for object recognition as it can help distinguish between different objects with similar shapes.
- **Shape descriptors**: Extracting shape descriptors like contours or bounding boxes from an image. Shape descriptors are useful for object detection and segmentation tasks.
- **Color features**: Extracting color features from an image using techniques like color histograms or color moments. Color is an important feature for object recognition, as it can help distinguish between objects with similar shapes or textures.
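To make edge detection concrete, here is a minimal NumPy sketch of the Sobel operator, using a hand-rolled "valid" cross-correlation rather than a production implementation:

```python
import numpy as np

def correlate2d(img, kernel):
    """'Valid' 2-D cross-correlation (what most vision libraries call convolution)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the horizontal and vertical Sobel kernels."""
    gx_k = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    gy_k = gx_k.T
    return np.hypot(correlate2d(img, gx_k), correlate2d(img, gy_k))

# A vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)  # strongest response at the step, zero elsewhere
```

The response is zero in the flat regions and peaks on the columns straddling the step, which is exactly the boundary the operator is designed to find.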
Use the Sobel operator to detect edges in the following image:
```
[Image]
```
## Exercise
Extract local binary patterns (LBP) from the following image:
```
[Image]
```
# Convolutional neural networks for image recognition
Convolutional neural networks (CNNs) are a type of deep learning model that have proven to be highly effective in image recognition tasks. They are designed specifically for processing grid-like data, such as images.
The key components of a CNN are:
- **Convolutional layers**: These layers apply convolutional filters to the input image, which helps learn local patterns and features.
- **Pooling layers**: These layers downsample the output of the convolutional layers, reducing the spatial dimensions of the feature maps.
- **Fully connected layers**: These layers connect the output of the previous layers to produce the final classification or detection output.
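To make the three layer types concrete, here is a toy NumPy forward pass through one convolutional layer, a ReLU nonlinearity, and a 2x2 max-pool (single channel, with a fixed hand-picked filter instead of learned weights):

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' cross-correlation, the core of a convolutional layer."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Elementwise rectifier: negative activations are zeroed."""
    return np.maximum(x, 0)

def maxpool2x2(x):
    """Downsample by taking the max over non-overlapping 2x2 windows."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16.0).reshape(4, 4)  # toy "image"
k = np.array([[-1.0, 1.0]])        # horizontal-difference filter
feature = relu(conv2d_valid(x, k)) # every horizontal step in x is +1
pooled = maxpool2x2(feature)
```

In a real CNN the filter values are learned by backpropagation and many filters run in parallel, but the data flow is exactly this: convolve, apply a nonlinearity, pool.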
CNNs have several advantages over traditional machine learning algorithms for image recognition:
- **Hierarchical feature learning**: CNNs learn hierarchical representations of the image, where lower layers detect simple patterns like edges, and higher layers combine these patterns to detect more complex objects.
- **Translation invariance**: CNNs are largely invariant to small translations of the input image, a common source of variation in real-world images.
- **Scale invariance**: CNNs can learn to recognize objects at different scales, which is important for object detection and segmentation tasks.
Here is a simple CNN architecture for image classification:
```
[Image of CNN architecture]
```
## Exercise
Describe the architecture of a CNN that can be used for object detection. Include the number of convolutional, pooling, and fully connected layers, as well as the size of the filters and the pooling windows.
# Deep learning and its role in image recognition
Deep learning is a subset of machine learning that involves using deep neural networks to solve complex problems. In image recognition, deep learning has revolutionized the field by enabling the development of highly accurate and powerful models.
The key components of deep learning for image recognition are:
- **Deep convolutional networks**: These are a type of deep neural network that are specifically designed for processing grid-like data, like images. They consist of multiple convolutional and pooling layers, followed by fully connected layers.
- **Pre-trained models**: These are deep learning models that have been trained on large datasets, like ImageNet, and can be used as the starting point for fine-tuning on smaller datasets or specific tasks.
- **Transfer learning**: This involves using a pre-trained model as the starting point for training a new model on a smaller dataset or a specific task. It leverages the knowledge learned by the pre-trained model to save time and computational resources.
Deep learning has several advantages over traditional machine learning methods in image recognition:
- **Large-scale learning**: Deep learning models can learn complex patterns and features from large datasets, which is crucial for achieving high accuracy in image recognition tasks.
- **End-to-end learning**: Deep learning models can learn the entire pipeline of image processing, feature extraction, and classification, which simplifies the process and reduces the need for manual intervention.
- **Robustness to noise**: Deep learning models can learn to recognize objects even in the presence of noise or variations in the input images, which is important for real-world applications.
Here is an example of a deep learning model for image classification:
```
[Image of deep learning model architecture]
```
## Exercise
Compare and contrast the advantages of deep learning for image recognition with traditional machine learning methods.
# Image classification with CNNs
Image classification is a common task in computer vision that involves classifying images into predefined categories. Convolutional neural networks (CNNs) have proven to be highly effective in image classification tasks.
The process of image classification with CNNs involves the following steps:
1. **Preprocessing**: Preprocess the input images by converting them to grayscale, resizing them to a fixed size, and normalizing their pixel values.
2. **Feature extraction**: Extract relevant features from the preprocessed images using techniques like edge detection, texture analysis, or color features.
3. **Feature representation**: Represent the extracted features using a fixed-size vector, which can be used as input to the CNN.
4. **Training**: Train the CNN on a labeled dataset using supervised learning, where the input is the feature representation and the output is the target class.
5. **Testing**: Test the trained CNN on a test dataset to evaluate its performance in classifying new images.
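The five steps can be sketched end to end. In this deliberately simplified stand-in, the "features" are synthetic vectors and the "training" is a nearest-centroid rule rather than a real CNN, but the pipeline shape (featurize, train, predict) is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-3: pretend each "image" is already a fixed-size feature vector.
class0 = rng.normal(loc=0.0, scale=0.5, size=(20, 8))
class1 = rng.normal(loc=3.0, scale=0.5, size=(20, 8))
X = np.vstack([class0, class1])
y = np.array([0] * 20 + [1] * 20)

# Step 4: "train" by storing the per-class mean feature vector.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Step 5: classify new samples by the nearest centroid.
def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

test0 = rng.normal(0.0, 0.5, size=8)  # drawn from class 0's distribution
test1 = rng.normal(3.0, 0.5, size=8)  # drawn from class 1's distribution
```

Swapping the centroid rule for a trained CNN changes step 4 and 5 internally but leaves the surrounding pipeline untouched.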
Here is an example of a CNN architecture for image classification:
```
[Image of CNN architecture]
```
## Exercise
Describe the process of image classification using a CNN, including the steps for preprocessing, feature extraction, feature representation, training, and testing.
# Object detection and segmentation
Object detection and segmentation are important tasks in computer vision that involve detecting and segmenting objects in images. Deep learning techniques, particularly convolutional neural networks (CNNs), have shown great success in these tasks.
Object detection involves identifying the presence and location of objects in an image. It can be achieved using techniques like sliding window detection or region proposal-based methods, combined with deep learning models like CNNs.
Object segmentation involves dividing an image into multiple segments, where each segment corresponds to a different object. This can be achieved using techniques like semantic segmentation or instance segmentation, combined with deep learning models like CNNs.
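A building block shared by most detection pipelines is intersection-over-union (IoU) between boxes, and the greedy non-maximum suppression (NMS) built on top of it to remove duplicate detections. A minimal pure-Python sketch (the (x1, y1, x2, y2) box format is an assumption):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the second box overlaps the first and is suppressed
```

Here the first two boxes overlap heavily (IoU of 81/119), so only the higher-scoring one survives, while the distant third box is kept.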
Here is an example of a CNN architecture for object detection:
```
[Image of CNN architecture]
```
## Exercise
Describe the process of object detection and segmentation using deep learning techniques, including the steps for detecting and segmenting objects in images.
# Advanced techniques for image recognition
Advanced techniques have been developed in recent years to improve the performance of image recognition tasks. Some of these techniques include:
- **Attention mechanisms**: These are mechanisms that allow deep learning models to focus on specific regions of the input image, which can improve the accuracy of the model.
- **Deformable convolutions**: These are convolutional layers that can learn to deform the convolutional filters, which can help the model better capture the shapes of objects in the image.
- **Domain adaptation**: This involves training a deep learning model on a source domain and then adapting it to a target domain with different characteristics, which can be useful for transferring knowledge between different tasks or datasets.
- **Multi-task learning**: This involves training a deep learning model on multiple tasks simultaneously, which can help the model learn more general representations that can be used for different tasks.
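As a concrete illustration of the attention idea, here is a minimal NumPy sketch of scaled dot-product attention, the building block behind most attention mechanisms (the shapes and variable names are our own):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: weight the values v by query-key similarity."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v, weights

q = np.array([[1.0, 0.0]])               # one query
k = np.array([[1.0, 0.0], [0.0, 1.0]])   # two keys
v = np.array([[10.0, 0.0], [0.0, 10.0]]) # two values
out, w = attention(q, k, v)
```

The query is more similar to the first key, so the output is pulled toward the first value; in a vision model this is how the network "focuses" on one region over another.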
Here is an example of a CNN architecture that incorporates attention mechanisms:
```
[Image of CNN architecture]
```
## Exercise
Describe the process of using attention mechanisms, deformable convolutions, domain adaptation, and multi-task learning in image recognition tasks.
# Applications of image recognition in computer vision
Image recognition has a wide range of applications in computer vision, including:
- **Scene understanding**: Recognizing the objects and scenes present in an image, which can be useful for tasks like visual question answering or object recognition in images.
- **Object detection and tracking**: Detecting and tracking the presence and motion of objects in a sequence of images, which is important for tasks like video analysis or autonomous vehicles.
- **Image synthesis**: Generating new images based on input images, which can be useful for tasks like image inpainting or image-to-image translation.
- **Medical image analysis**: Analyzing medical images, such as X-rays or MRI scans, to detect and classify diseases or abnormalities.
- **Autonomous robots**: Enabling robots to perceive and understand their environment by recognizing objects and obstacles, which is crucial for tasks like navigation and manipulation.
Here is an example of a CNN architecture that can be used for scene understanding:
```
[Image of CNN architecture]
```
## Exercise
Describe the applications of image recognition in computer vision, including scene understanding, object detection and tracking, image synthesis, medical image analysis, and autonomous robots.
# Challenges and future directions in image recognition
Image recognition remains a challenging task in computer vision, with several open challenges and areas for future research:
- **Viewpoint variation**: Image recognition models often struggle to recognize objects from different viewpoints or under varying lighting conditions, which is important for real-world applications.
- **Occlusion and clutter**: Image recognition models often fail to recognize objects when they are partially occluded or in cluttered environments, which is a common issue in real-world images.
- **Domain adaptation**: Image recognition models often fail to generalize well to new domains or tasks, which is important for transferring knowledge between different datasets or tasks.
- **Interpretability**: Image recognition models often lack interpretability, which makes it difficult to understand why the model made a particular prediction and how it arrived at that decision.
Future directions in image recognition include:
- **Advancements in deep learning**: Continued advancements in deep learning techniques, such as attention mechanisms, deformable convolutions, and domain adaptation, can help improve the performance of image recognition models.
- **Multimodal learning**: Integrating multiple modalities, such as images, text, and audio, can help improve the performance of image recognition models and enable more powerful and flexible models.
- **Robustness to adversarial attacks**: Developing models that are robust to adversarial attacks, which are attempts to manipulate input data to deceive a machine learning model, can help improve the security and reliability of image recognition models.
Here is an example of a CNN architecture that incorporates attention mechanisms and deformable convolutions:
```
[Image of CNN architecture]
```
## Exercise
Describe the challenges and future directions in image recognition, including viewpoint variation, occlusion and clutter, domain adaptation, interpretability, advancements in deep learning, multimodal learning, and robustness to adversarial attacks.
# Evaluation metrics for image recognition
Evaluating the performance of image recognition models is crucial for assessing their effectiveness and comparing different models. Several evaluation metrics are commonly used in image recognition:
- **Accuracy**: The proportion of correct predictions out of the total number of predictions.
- **Precision**: The proportion of true positive predictions out of the total number of positive predictions.
- **Recall**: The proportion of true positive predictions out of the total number of actual positive instances.
- **F1-score**: The harmonic mean of precision and recall, which balances the trade-off between precision and recall.
- **Intersection over Union (IoU)**: The ratio of the intersection of the predicted object and the ground truth object to the union of the two objects.
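The first four metrics are straightforward to compute directly from the confusion counts. A minimal sketch for binary labels, taking class 1 as the positive class:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (positive = 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
```

With one false positive and one false negative out of six predictions, all four metrics come out to 2/3, which illustrates that F1 equals precision and recall whenever the two agree.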
Here is an example of calculating the accuracy of an image recognition model:
```
[Image of accuracy calculation]
```
## Exercise
Describe the evaluation metrics for image recognition, including accuracy, precision, recall, F1-score, and IoU.
# Real-world examples of image recognition
Image recognition has a wide range of real-world applications, including:
- **Face recognition**: Recognizing the identity of individuals from their facial features, which is important for tasks like access control, facial authentication, and social media platforms.
- **Image search**: Searching for images in a database based on a query image, which is useful for tasks like reverse image search or image-based product recommendations.
- **Autonomous vehicles**: Enabling autonomous vehicles to perceive and understand their environment by recognizing objects and obstacles, which is crucial for tasks like navigation and collision avoidance.
- **Medical imaging**: Diagnosing and treating diseases and medical conditions based on medical images, such as X-rays or MRI scans.
- **Surveillance and security**: Monitoring and analyzing images in real-time to detect and track objects or events of interest, which is important for tasks like video surveillance or intrusion detection.
Here is an example of a real-world application of image recognition for face recognition:
```
[Image of face recognition application]
```
## Exercise
Describe real-world examples of image recognition, including face recognition, image search, autonomous vehicles, medical imaging, and surveillance and security.
The interval notation calculator expresses the inequality based on the chosen topology and determines the distance between any two values.
The number line for the interval input is displayed by the interval notation calculator. Our online calculator for interval notation does calculations more quickly and displays the number line in a split second.
What Is an Interval Notation Calculator?
The Interval Notation Calculator is an online tool that aids in displaying the given interval on a number line, shows the inequality by the chosen topology, and determines the distance between the two given integers.
It is the method of writing subsets of the real number line, according to the mathematical definition. An example of interval notation includes the intervals expressed according to specified conditions.
For instance, if we have the set $\{x \mid 1 \leq x \leq 2\}$, it will be expressed as $[1, 2]$ by definition.
The general form of interval (set builder) notation is $[n_1, n_2]$, where:
$n_1$ represents the first (left) endpoint
$n_2$ represents the second (right) endpoint
To solve the notation and find the interval values, use an online interval notation solver.
When a number is expressed as [a,x], it means that both "a" and "x" are part of a set. On the other hand, (a,x) denotes the omission of "a" and "x" from the collection.
The half-closed symbol "[b,y)" denotes that b is included but y is not. Similar to (b,y], which indicates that b is excluded and y is included in the collection, (b,y] will be recognized as half-open.
How To Use an Interval Notation Calculator
You can use the Interval Notation Calculator by following the detailed guidelines below, and the calculator will provide you with the desired results.
Fill in the provided input boxes with the interval (closed or open interval).
Click on the "SUBMIT" button to get the interval notation; the whole step-by-step solution will also be displayed.
Finally, in the new window, the number line for the specified period will be displayed.
How Does Interval Notation Calculator Work?
The Interval Notation Calculator works by expressing a subset of the real numbers in interval notation, using the numbers that bound it. Inequalities can also be represented using this notation.
Notations For Different Types of Intervals
To represent the interval notation for various sorts of intervals, we can adhere to a set of rules and symbols. Let's examine the various symbols that can be used to represent a specific kind of interval.
Symbols Used for Interval Notation
We use the following notations for various intervals:
[ ]: When both endpoints are part of the set, this square bracket is used.
( ): When both endpoints are not included in the set, this round bracket is used.
( ]: When the right endpoint is included in the set but the left endpoint is excluded, a semi-open bracket is used.
[ ): When the set's left endpoint is included and its right endpoint is excluded, this semi-open bracket is likewise used.
What Is Interval?
The group of real numbers that lie between any two given real numbers is called Interval and is represented using interval notation. Intervals can be used to depict inequalities. Intervals can be divided into four categories.
If x and y are two endpoints and $x < y$, the intervals can be classified into the following categories:
Open Interval
In this type of interval, the two ends are not included in this. The inequality is written as x < z < y if z is a number that falls between x and y. Round brackets are used to denote an open interval, i.e. (x, y).
Closed Interval
This type of interval includes both of the endpoints. The inequality can be expressed as $x \leq z \leq y$. Closed intervals are written using square brackets, such as [x, y].
Half Closed Right Interval
Only the left endpoint is included in this kind of interval; the right endpoint is excluded. The inequality is $x \leq z < y$. The left side of the interval is enclosed in a square bracket, and the right side is enclosed in a round bracket, as in [x, y).
Half Closed Left Interval
The left endpoint is excluded and only the right endpoint is included while in this interval. In line with this, x < z ≤ y will be the inequality. The left side uses a round bracket and the right side will have a square bracket, i.e., (x, y].
The Length of the interval between the endpoints x and y can be calculated as follows:
Length = y – x
Convert Inequality To Interval Notation
To convert an inequality to interval notation, follow the steps shown below.
Graph the interval's solution set on a number line.
Write the numbers in interval notation with the smaller number on the left, just as they appear on the number line.
Use the sign $-\infty$ if the set is unbounded on the left, and $\infty$ if it is unbounded on the right.
Let's look at a few examples of inequality and convert them to interval notation.
An Inequality $x \leq 3$ has interval notation $(-\infty, 3]$
An Inequality $x < 5$ has interval notation $(-\infty, 5)$
An Inequality $x \geq 2$ has interval notation $[2, \infty)$
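The conversion rules above can be sketched as a small function. This is an illustrative sketch only; the function name and the plain-text "inf" rendering of $\infty$ are our own choices:

```python
def interval_notation(op, bound):
    """Interval notation for a one-sided inequality 'x <op> bound'."""
    if op == "<":
        return f"(-inf, {bound})"   # strict: bound excluded
    if op == "<=":
        return f"(-inf, {bound}]"   # bound included
    if op == ">":
        return f"({bound}, inf)"
    if op == ">=":
        return f"[{bound}, inf)"
    raise ValueError(f"unknown operator: {op}")

print(interval_notation("<=", 3))  # (-inf, 3]
print(interval_notation(">=", 2))  # [2, inf)
```

Note how the bracket always points the same way as the inequality: a strict comparison gets a round bracket, a non-strict one gets a square bracket, and infinity is always paired with a round bracket.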
Represent Inequalities on a Number Line
A mathematical statement known as an inequality compares two expressions using the concepts of greater than and less than. These statements employ unique symbols. Inequality should be read from left to right, much like the text on a page.
Large sets of solutions are described by inequalities in algebra. We have created some techniques to succinctly represent very big lists of numbers since there are occasionally an endless number of numbers that will fulfill an inequality.
You are presumably already familiar with basic inequalities. For instance:
The set of numbers less than or equal to 9 is shown by the expression $x \leq 9$.
The symbol $-5 \leq t$ denotes all numbers greater than or equal to -5.
Keep in mind that whether you are searching for larger than or less than depends on whether the variable is placed to the left or right of the inequality sign.
Important Notes on Interval Notation
The set of inequalities is expressed using interval notation.
Open interval, closed interval, and half-open interval are the three different variants of interval notation.
A bounded interval lacks the sign for infinity.
An unbounded interval is the range that includes the infinity symbol.
Let's explore some examples to better understand the working of the Interval Notation Calculator.
Check the solution to \[ x -10 \leq -12\]
Solving the inequality gives x $\leq$ -2, so -2 is the endpoint. Substitute the endpoint -2 into the related equation:
x -10 = -12
-2 -10 = -12
-12 = -12
The equality holds. Next, pick a value less than -2, such as -5, and check it in the given inequality:
-5 -10 $\leq$ -12
-15 $\leq$ -12
Both checks pass, so the solution to the given inequality is x $\leq$ -2, which is written in interval notation as $(-\infty, -2]$.
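The endpoint and test-value checks can be reproduced in a short script. This is a minimal sketch; the function name is ours.

```python
def satisfies(x):
    """True when x satisfies the inequality x - 10 <= -12,
    i.e. when x <= -2."""
    return x - 10 <= -12

# The endpoint -2 turns the related equation into a true equality,
# and any value below -2 (such as -5) satisfies the inequality.
```

Evaluating `satisfies(-2)` and `satisfies(-5)` both return `True`, while a value outside the solution set such as `satisfies(0)` returns `False`.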
Find the domain of the following function:
\[f(x)=\frac{1}{x^2 - 1}\]
The only concern is the denominator being 0. The denominator $x^2 - 1$ therefore cannot equal zero, which means $x^2$ cannot equal 1.
Taking the square root of both sides, x cannot equal 1 or -1. Therefore, when we write the domain in interval notation, it runs from $-\infty$ to $\infty$ with the points -1 and 1 excluded:
\[ (- \infty, – 1) \cup (-1, 1) \cup (1, \infty) \]
As a result, this is our domain.
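The domain restriction can be checked numerically. A sketch, with function names of our own choosing:

```python
def f(x):
    """f(x) = 1 / (x**2 - 1); undefined where the denominator is zero."""
    return 1 / (x ** 2 - 1)

def in_domain(x):
    """x is in the domain exactly when x**2 != 1, i.e. x is neither 1 nor -1."""
    return x ** 2 != 1
```

Here `in_domain(1)` and `in_domain(-1)` are `False`, every other real input is accepted, and `f` evaluates normally there (for instance `f(2)` gives `1/3`).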
What is the interval notation for the domain of the function \[f(x)=\frac{2}{\sqrt{3x+5}}\,?\]
Since the expression contains a square root, the radicand $3x + 5$ cannot be negative.
Additionally, because the radical sits in a denominator, $3x + 5$ cannot be zero either, so it must be strictly positive. Solving $3x + 5 > 0$ for x, we observe that $3x$ must be greater than -5.
Dividing both sides by 3, we find that x must be greater than $-\frac{5}{3}$. This means that, to describe the domain using interval notation, you start at $-\frac{5}{3}$ (excluded) and work your way up to infinity.
Infinity is always enclosed by a parenthesis. The only question is whether we want to include $-\frac{5}{3}$, which we do not.
\[(-\frac{5}{3}, \infty)\]
So, that gets a parenthesis as well, and there we have our domain.
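The same reasoning can be verified in code. A sketch with our own function names:

```python
import math

def g(x):
    """g(x) = 2 / sqrt(3x + 5); defined only when 3x + 5 > 0."""
    return 2 / math.sqrt(3 * x + 5)

def g_domain(x):
    """x is in the domain exactly when x > -5/3."""
    return 3 * x + 5 > 0
```

Values above $-\frac{5}{3}$ are accepted (e.g. `g_domain(0)` is `True`), while values at or below it are rejected (`g_domain(-2)` is `False`).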
\begin{document}
\title{
CRISP: Curriculum inducing Primitive Informed Subgoal Prediction for Hierarchical Reinforcement Learning}
\begin{abstract} Hierarchical reinforcement learning is a promising approach that uses temporal abstraction to solve complex long horizon problems. However, simultaneously learning a hierarchy of policies is unstable, as it is challenging to train the higher-level policy when the lower-level primitive is non-stationary. In this paper, we propose a novel hierarchical algorithm that generates a curriculum of achievable subgoals for evolving lower-level primitives using reinforcement learning and imitation learning. The lower-level primitive periodically performs data relabeling on a handful of expert demonstrations using our primitive informed parsing approach. We provide expressions to bound the sub-optimality of our method and develop a practical algorithm for hierarchical reinforcement learning. Since our approach uses a handful of expert demonstrations, it is suitable for most robotic control tasks. Experimental evaluation on complex maze navigation and robotic manipulation environments shows that inducing hierarchical curriculum learning significantly improves sample efficiency, and results in efficient goal conditioned policies for solving temporally extended tasks.
\end{abstract} \section{Introduction} \label{sec:introduction}
Reinforcement learning (RL) algorithms have made significant progress in solving continuous control tasks like performing robotic arm manipulation~\citep{DBLP:journals/corr/LevineFDA15, DBLP:journals/corr/VecerikHSWPPHRL17} and learning dexterous manipulation~\citep{DBLP:journals/corr/abs-1709-10087}. However, the success of RL algorithms on complex long horizon continuous tasks has been limited by issues like long term credit assignment and inefficient exploration~\citep{nachum2019does, DBLP:journals/corr/KulkarniNST16}, especially in sparse reward scenarios ~\citep{Andrychowicz2017HindsightER}. Hierarchical reinforcement learning (HRL) ~\citep{Dayan:1992:FRL:645753.668239, SUTTON1999181, NIPS1997_5ca3e9b1} promises the benefits of temporal abstraction and efficient exploration for solving tasks that require long term planning. In \textit{goal-conditioned} hierarchical framework, the high-level policy predicts subgoals for lower primitive, which in turn performs primitive actions directly on the environment~\citep{DBLP:journals/corr/abs-1805-08296,DBLP:journals/corr/VezhnevetsOSHJS17, DBLP:journals/corr/abs-1712-00948}. However, simultaneously learning multi-level policies is challenging in practice due to non-stationary higher level state transition and reward functions.
\par Prior works have leveraged expert demonstrations to bootstrap learning~\citep{DBLP:journals/corr/abs-1709-10089,DBLP:journals/corr/abs-1709-10087,DBLP:journals/corr/HesterVPLSPSDOA17}. Some approaches rely on leveraging expert demonstrations via \textit{fixed parsing}, and consequently bootstrapping multi-level hierarchical RL policy using imitation learning~\citep{DBLP:journals/corr/abs-1910-11956}. However, generating an efficient subgoal transition dataset is crucial in such tasks. In this work, we propose an \textit{adaptive parsing} technique for leveraging expert demonstrations and show that it outperforms fixed parsing based approaches on tasks that require long term planning. Ideally, a good subgoal should properly balance the task split between the hierarchical levels according to current goal reaching ability of the lower primitive, thus avoiding degenerate solutions. As the lower primitive improves, the subgoals provided to lower primitive should become progressively more difficult, such that \textit{(i)} the subgoals are always achievable by the current lower level primitive, \textit{(ii)} task split is properly balanced between hierarchical levels, and \textit{(iii)} reasonable progress is made towards achieving the final goal. We build upon these ideas and propose a generally applicable HRL approach: \textit{Curriculum inducing primitive informed subgoal prediction} (CRISP). Our approach introduces hierarchical curriculum learning to deal with the issue of non-stationarity.
\par CRISP parses a handful of expert demonstrations using our novel subgoal relabeling method: primitive informed parsing (PIP). In PIP, current lower primitive is used to perform data relabeling on expert demonstrations dataset to generate efficent subgoal supervision for the higher level policy. Since the lower primitive performs data relabeling, this approach does not require explicit labeling or segmentation of demonstrations by an expert. The periodically generated higher level subgoal dataset is used with additional imitation learning (IL) objective to provide curriculum based regularization for the higher policy. For imitation learning, we devise inverse reinforcement learning regularizer~\citep{ghasemipour2020divergence, kostrikov2018discriminator, DBLP:journals/corr/HoE16}, which constraints the state marginal of the learned policy to be similar to that of the expert demonstrations. The details of CRISP, PIP, and IRL objective are mentioned in Section~\ref{sec:method}. We also derive sub-optimality bounds in Section~\ref{suboptimality} to theoretically justify the benefits of curriculum learning in hierarchical framework. We thus provide a theoretically justified practical approach to perform hierarchical reinforcement learning.
\par Since our approach uses a handful of expert demonstrations, it is generally applicable on most complex long horizon tasks. We perform experiments on complex random maze navigation, robotic pick and place and robotic rope manipulation environments, and empirically verify that the proposed approach clearly outperforms the baseline approaches on long horizon tasks.
\section{Background} \label{sec:background}
\input{figures_tex/explain_method.tex}
We consider \textit{Universal Markov Decision Process} (UMDP)~\citep{pmlr-v37-schaul15} setting, where Markov Decision processes (MDP) are augmented with the goal space $G$. UMDPs are represented as a 6-tuple $(S,A,P,R,\gamma,G)$, where $S$ is the state space, $A$ is the action space, $P(s^{'}|s,a)=\mathbb{P}(s_{t+1}=s^{'}|s_{t}=s,a_{t}=a)$ is the transition function that describes the probability of reaching state $s^{'}$ when the agent takes action $a$ in the current state $s$. The reward function $R$ generates rewards $r$ at every timestep, $\gamma$ is the discount factor, and $G$ is the goal space. In the UMDP setting, a fixed goal $g$ is selected for an episode, and $\pi(a|s,g)$ denotes the goal-conditioned policy. $d^{\pi}(s)=(1-\gamma)\sum_{t=0}^{T}\gamma^{t}P(s_t=s|\pi)$ represents the discounted future state distribution, and $d^{\pi}_{c}(s)=(1-\gamma^c)\sum_{t=0}^{T}\gamma^{tc}P(s_{tc}=s|\pi)$ represents the c-step future state distribution for policy $\pi$. The overall objective is to learn policy $\pi(a|s,g)$ which maximizes the expected future discounted reward objective $ J = (1-\gamma)^{-1}\mathbb{E}_{s \sim d^{\pi}, a \sim \pi(a|s,g), g \sim G}\left[r(s_t,a_t,g)\right]$
\par Let $s$ be the current state and $g$ be the final goal for the current episode. In our goal-conditioned hierarchical RL setup, the overall policy $\pi$ is divided into multi-level policies. The higher level policy $\pi^{H}(s_g|s, g)$ predicts subgoals~\citep{Dayan:1992:FRL:645753.668239} $s_g$ for the lower level primitive $\pi^{L}(a | s, s_g)$, which in turn executes primitive actions $a$ directly on the environment. The lower primitive $\pi^{L}$ tries to achieve subgoal $s_g$ within $c$ timesteps by maximizing intrinsic rewards $r_{in}$ provided by the higher level policy. The higher level policy $\pi^{H}$ gets extrinsic reward $r_{ex}$ from the environment, and predicts the next subgoal $s_g$ for the lower primitive. The process is continued until either the final goal $g$ is achieved, or the episode terminates. We consider sparse reward setting where the lower primitive is sparsely rewarded intrinsic reward $r_{in}$ if the agent reaches within $\delta^{L}$ distance of the predicted subgoal $s_g$: $r_{in}=1(\|s_t-s_g\|_2 \leq \delta^{L})$, and the higher level policy is sparsely rewarded extrinsic reward $r_{ex}$ if the achieved goal is within $\delta^{H}$ distance of the final goal $g$: $r_{ex}=1(\|s_t-g\|_2 \leq \delta^{H})$. We assume access to expert demonstrations $D=\{e^i\}_{i=1}^N$, where $e^i=(s^e_0, s^e_1, \ldots, s^e_{T-1})$. We only assume access to demonstration states $s^e_i$ (and not demonstration actions) which can be obtained in most robotic control tasks.
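The sparse intrinsic and extrinsic rewards defined above reduce to a single indicator function, sketched here (the function name and the example values are ours, not from the paper's code):

```python
import math

def sparse_reward(achieved, desired, threshold):
    """Sparse indicator reward: 1 if the achieved state is within
    `threshold` (L2 distance) of the desired goal, else 0.

    This mirrors r_in (goal = subgoal s_g, threshold = delta^L) and
    r_ex (goal = final goal g, threshold = delta^H).
    """
    dist = math.sqrt(sum((a - d) ** 2 for a, d in zip(achieved, desired)))
    return 1.0 if dist <= threshold else 0.0
```

The same function serves both hierarchy levels; only the goal and threshold arguments differ.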
\section{Methodology} \label{sec:method}
\input{figures_tex/env_curriculum.tex}
In this section, we explain our hierarchical curriculum learning based approach CRISP. An overview of the method is depicted in Figure~\ref{fig:explain_method}. First, we formulate our primitive informed parsing method PIP, which periodically performs data relabeling on expert demonstrations to populate the subgoal transition dataset. Then, we explain how we use this dataset to learn the high level policy using reinforcement learning and an additional inverse reinforcement learning (IRL) based regularization objective.
\subsection{Primitive Informed Parsing: PIP} \label{subsec:parsing} The primitive informed parsing approach uses the current lower primitive $\pi_{L}$ to parse the expert state demonstrations dataset $\mathcal{D}$. The underlying idea is: PIP should select sequences of maximally temporally separated states from demonstration trajectory $e$. These maximally temporally separated state sequences constitute the higher level subgoal dataset $\mathcal{D}_g$. We explain below how PIP adaptively parses expert demonstration trajectories from $D$. \par We start with the current lower primitive $\pi_{L}$ and an expert state demonstration trajectory $e = (s^e_0, s^e_1, \ldots, s^e_{T-1})$. The environment is reset to state $s^e_0$. Starting from $i=1$ up to $T-1$, we incrementally provide states $s^e_i$ as subgoals to the lower primitive $\pi_{L}$. From $s^e_0$, $\pi_{L}$ tries to achieve $s^e_i$ within $c$ timesteps. If $\pi_{L}$ fails to achieve the subgoal $s^e_i$ from the initial state, we add $s^e_{i-1}$ to the list of subgoals. The underlying idea is that since $s^e_{i-1}$ was the last subgoal achieved by the lower-level primitive, it makes a good candidate for a maximally reachable subgoal. Once we have added $s^e_{i-1}$ to the list of subgoals, we continue the process after setting $s^e_{i-1}$ as the new initial state until we reach the end of demonstration trajectory $e$. This subgoal transition sequence is added to $D_g$. Thus, the subgoal transition dataset is populated with maximally temporally separated achievable subgoals. The pseudocode for PIP is given in Algorithm~\ref{alg:algo_pip}. Note however that PIP assumes the ability to reset the environment to any state in $\mathcal{D}$ while collecting the subgoal dataset. We discuss different ways to relax this assumption in Section \ref{sec:discussion}.
\begin{algorithm}[tb] \caption{PIP: Primitive Informed Parsing} \label{alg:algo_pip}
\begin{algorithmic}
\STATE Initialize $D_g = \{\}$
\FOR{each trajectory $e=(s^e_0, s^e_1, \ldots, s^e_{T-1})$ in $\mathcal{D}$}
\STATE initial state $\leftarrow s^e_0$
\STATE final goal $\leftarrow g $
\STATE Initialize list of subgoals $D^e_g = \{\}$
\FOR{i = $1$ \textbf{to} $T-1$}
\STATE Reset to initial state
\STATE Pass $s^e_i$ as the current goal to $\pi_{L}$
\IF{$s^e_i$ is not achieved by $\pi_{L}$ in $c$ time-steps}
\STATE Add $(\text{initial state}, s^e_{i-1}, \text{final goal})$ to $D^e_g$
\STATE initial state $\leftarrow s^e_{i-1}$
\ENDIF
\ENDFOR
\STATE $D_g \leftarrow D_g \cup D^e_g$
\ENDFOR \end{algorithmic} \end{algorithm}
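Treating the lower primitive's $c$-step goal-reaching ability as a black-box predicate, the parsing loop of Algorithm~\ref{alg:algo_pip} can be sketched as follows (function names are ours; simulator resets and the final-goal bookkeeping are abstracted away):

```python
def primitive_informed_parse(trajectory, can_reach):
    """Sketch of PIP. `trajectory` is an expert state sequence
    (s_0, ..., s_{T-1}); `can_reach(start, goal)` abstracts resetting the
    simulator to `start`, rolling out the current lower primitive for c
    steps, and reporting whether `goal` was achieved."""
    subgoals = []
    start = trajectory[0]
    for i in range(1, len(trajectory)):
        if not can_reach(start, trajectory[i]):
            # s_{i-1} was the last state the primitive could still reach,
            # so it is the maximally temporally separated achievable subgoal.
            subgoals.append(trajectory[i - 1])
            start = trajectory[i - 1]
    return subgoals
```

As a toy check, a primitive on a line that can only cover a distance of 2 per rollout splits the trajectory `[0, 1, 2, 3, 4, 5]` into subgoals at states 2 and 4.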
\subsection{Suboptimality analysis} \label{suboptimality} In this section, we analyze the suboptimality of our method, and examine how the performance benefits from curriculum learning and imitation learning objective. Let $\pi^{*}$ and $\pi^{**}$ be the unknown higher level and lower level optimal policies respectively, $\pi_{\theta_{H}}^{H}$ be our high level CRISP policy, and $\pi_{\theta_{L}}^{L}$ be our lower CRISP primitive policy, where $\theta_{H}$ and $\theta_{L}$ are trainable parameters of higher and lower level policies respectively. $D_{TV}(\pi_{1}, \pi_{2})$ denotes total variation divergence between probability distributions $\pi_1$ and $\pi_2$. $s$ is the current state, $g$ is the final episodic goal, $s_g$ is the subgoal provided by upper level policy and $\tau$ are $c$ length sub-trajectories. Let $\Pi_{D}^{H}$ and $\Pi_{D}^{L}$ be the upper level probability distributions which generate datasets $D_H$ and $D_L$ respectively, $\kappa$ is some distribution over states and actions, and $G$ is the goal space. Firstly, we extend the definition from~\citep{ajay2020opal} to goal-conditioned policies: \theoremstyle{definition} \newtheorem{defn}[]{Definition} \begin{defn}
$\pi^{*}$ is $\phi_{D}$-common in $\Pi_{D}^{H}$, if $\mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G}[D_{TV}(\pi^{*}(\tau | s,g)||\pi_{D}^{H}(\tau | s,g))] \leq \phi_{D}$ \end{defn}
We define the suboptimality of policy $\pi$ with respect to optimal policy $\pi^{*}$ as: \begin{equation} \label{eqn:eqn_1}
\begin{split}
Subopt(\theta) = |J(\pi^{*})-J(\pi)|
\end{split} \end{equation} \newtheorem{thm}{Theorem} \begin{thm} Assuming the optimal policy $\pi^{*}$ is $\phi_D$ common in $\Pi_{D}^{H}$, the suboptimality of upper policy $\pi_{\theta_{H}}^{H}$, over $c$ length sub-trajectories $\tau$ sampled from $d_{c}^{\pi^{*}}$ can be bounded as: \begin{equation} \label{eqn:eqn_2}
\begin{split}
& |J(\pi^{*})-J(\pi_{\theta_{H}}^{H})| \leq \lambda_{H} * \phi_{D} + \\
& \lambda_{H} * \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G} [D_{TV}(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation}
where $\lambda_{H}=\frac{2}{(1-\gamma)(1-\gamma^{c})}R_{max} \| \frac{d_c^{\pi^{*}}}{\kappa} \|_{\infty}$ \end{thm} Furthermore, the suboptimality of lower primitive $\pi_{\theta_{L}}^{L}$ can be bounded as: \begin{equation} \label{eqn:eqn_3}
\begin{split}
& |J(\pi^{**})-J(\pi_{\theta_{L}}^{L})| \leq \lambda_{L} * \phi_{D} + \lambda_{L} * \\ & \mathbb{E}_{s \sim \kappa, \pi_{D}^{L} \sim \Pi_{D}^{L}, s_g \sim \pi_{\theta_{L}}^{L}} [D_{TV}(\pi_{D}^{L}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))]
\end{split} \end{equation}
where $\lambda_{L}=\frac{2}{(1-\gamma)^2}R_{max} \| \frac{d_c^{\pi^{**}}}{\kappa} \|_{\infty}$
\par The proofs for Equations~\ref{eqn:eqn_2} and~\ref{eqn:eqn_3} are provided in Appendix~\ref{appendix_higher} and~\ref{appendix_lower} respectively. Equation ~\ref{eqn:eqn_2} can be rearranged to yield the following form: \begin{equation} \label{eqn:eqn_4}
\begin{split}
& J(\pi^{*}) \geq J(\pi_{\theta_{H}}^{H}) - \lambda_{H} * \phi_{D} - \\ & \lambda_{H} * \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G} [d(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation}
where, representing $\pi_{D}^{H}(\tau|s,g)$ as $\pi_A$ and $\pi_{\theta_{H}}^{H}(\tau|s,g)$ as $\pi_B$, \begin{equation}
d(\pi_A || \pi_B) = D_{TV}(\pi_A || \pi_B) \end{equation}
This can be solved as a minorize-maximize algorithm, which intuitively means the overall objective can be optimized by $(i)$ maximizing the objective $J(\pi_{\theta_{H}}^{H})$ via RL, and $(ii)$ minimizing the TV divergence between $\pi_{D}^{H}$ and $\pi_{\theta_{H}}^{H}$. We use the entropy regularized RL technique Soft Actor Critic~\citep{haarnoja2018latent} to maximize $J(\pi_{\theta_{H}}^{H})$.
\subsection{Hierarchical curriculum learning} In Equation~\ref{eqn:eqn_2}, the suboptimality bound depends on $\phi_{D_g}$, which measures how well the subgoal dataset $D_g$ populated by PIP represents the optimal policy. A lower value of $\phi_{D_g}$ implies that the optimal policy $\pi^{*}$ is closely represented by the dataset $D_g$. Since we use the lower primitive to parse expert demonstrations, as the lower primitive gets better, $\pi_{D_g}$ gets closer to $\pi^{*}$. Hence $D_g$ improves and the value of the parameter $\phi_{D_g}$ decreases, which implies that the suboptimality bound in Equation~\ref{eqn:eqn_2} gets tighter. To implement curriculum learning while generating $D_g$, $D_g$ is cleared after every $u$ timesteps and re-populated using PIP, as explained in Algorithm~\ref{alg:algo_crisp}. This periodic re-population after $u$ timesteps generates a natural curriculum for the lower primitive, as shown in Figure~\ref{fig:env_curriculum}.
\subsection{Imitation learning regularization}
Different approximations to the distance function $d$ in Equation~\ref{eqn:eqn_4} yield different imitation learning regularizers. If $d$ is replaced by the Kullback–Leibler divergence, the imitation learning regularizer becomes the behavior cloning (BC) objective~\citep{DBLP:journals/corr/abs-1709-10089}. If we replace $d$ with the Jensen-Shannon divergence, the imitation learning objective takes the form of an inverse reinforcement learning (IRL) objective. Henceforth, CRISP-IRL will denote our method CRISP with the IRL regularizer, and CRISP-BC will denote CRISP with the BC regularizer. Our motivation for using IRL is that IRL outperforms BC in tasks that require long term planning. We perform experiments comparing CRISP-IRL and CRISP-BC with baselines in Section~\ref{sec:experiment}. We devise the IRL objective as a GAIL~\citep{DBLP:journals/corr/HoE16}-like objective implemented using LSGAN~\citep{DBLP:journals/corr/MaoLXLW16}. Let $(s^e, g^e, s^e_g) \sim D_g$ be a subgoal transition where $s^e$ is a state in an expert trajectory, $g^e$ is the corresponding final goal and $s^e_g$ is the subgoal. Let $s_g$ be the subgoal predicted by the high level policy $\pi_{\theta}^{H}(\cdot|s^e, g^e)$ and $\mathbb{D}_{\epsilon}^H$ be the higher level discriminator with parameters $\epsilon$. We bootstrap the learning of the higher level policy by optimizing:
\begin{equation} \label{eqn:irl_update}
\begin{split}
& \max_{\pi_\theta^{H}}\min_\epsilon \frac{1}{2}\mathbb{E}_{(s^e, g^e, s^e_g) \sim D_g} [\mathbb{D}_{\epsilon}^H(s^e_g) - 1]^2
+ \\ & \frac{1}{2}\mathbb{E}_{(s^e, g^e)\sim D_g, s_g \sim \pi_{\theta}^{H}(\cdot|s^e, g^e)} [\mathbb{D}_{\epsilon}^H(s_g) - 0]^2
\end{split} \end{equation} This objective forces the higher policy subgoal predictions to be close to subgoal predictions of the dataset $\mathcal{D}_g$. For brevity, let $J_D^{H}$ and $J_D^{L}$ represent upper and lower IRL objectives, which depend on parameters $(\theta_{H},\epsilon_{H})$ and $(\theta_{L},\epsilon_{L})$ respectively. The discriminator $\mathbb{D}_{\epsilon}^H$ creates a natural curriculum for regularizing higher level policy by assigning the value $1$ to the predicted subgoals that are closer to the subgoals from dataset $D_g$, and $0$ otherwise. The discriminator improves with training, and regularizes the higher policy to predict achievable subgoals.
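As a concrete sketch of the least-squares objective in Equation~\ref{eqn:irl_update}, the discriminator and policy losses can be written on scalar scores (names are ours; the actual implementation differentiates through the discriminator and policy networks rather than operating on fixed score lists):

```python
def lsgan_losses(expert_scores, policy_scores):
    """Least-squares GAN objectives. `expert_scores` are discriminator
    outputs D(s_g^e) on expert subgoals; `policy_scores` are D(s_g) on
    subgoals sampled from the higher policy."""
    n_e, n_p = len(expert_scores), len(policy_scores)
    # Discriminator: push expert subgoals toward 1 and policy subgoals toward 0.
    d_loss = (sum((d - 1.0) ** 2 for d in expert_scores) / (2 * n_e)
              + sum(d ** 2 for d in policy_scores) / (2 * n_p))
    # Policy (generator): regularized toward subgoals the discriminator scores as 1.
    g_loss = sum((d - 1.0) ** 2 for d in policy_scores) / (2 * n_p)
    return d_loss, g_loss
```

When the discriminator perfectly separates the two sets (expert scores at 1, policy scores at 0), its own loss is zero while the policy incurs the maximal regularization penalty, which is the curriculum pressure described above.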
\subsection{Policy optimization} The higher level policy is trained to produce subgoals, which when fed into the lower level primitive, maximize the sum of future discounted rewards for our task using off-policy reinforcement learning.
Here, $\pi_\theta^{L}$ is the current lower primitive, $s_t$ is the state at time $t$, $T$ is the task horizon and $g$ is the sampled goal for the current episode. For brevity, we refer to these objective functions as $J^{H}_{\theta_{H}}$ and $J^{L}_{\theta_{L}}$ for the upper and lower policies respectively. We use the IRL objective to leverage the primitive-parsed dataset $\mathcal{D}_g$. Therefore, the high level policy is trained by optimizing \begin{equation} \label{eqn:joint_update_upper}
\max_{\theta_{H}} J^{H}_{\theta_{H}} + \psi * (\min_{\epsilon_{H}} J_D^{H}(\theta_{H}, \epsilon_{H})) \end{equation} Whereas, the lower level primitive is trained by optimizing, \begin{equation} \label{eqn:joint_update_lower}
\max_{\theta_{L}} J^{L}_{\theta_{L}} + \psi * (\min_{\epsilon_{L}} J_D^{L}(\theta_{L}, \epsilon_{L})) \end{equation}
\begin{algorithm}[tb] \caption{CRISP}\label{alg:algo_crisp}
\begin{algorithmic}
\REQUIRE $D$ (expert demonstrations)
\STATE Initialize higher level subgoal transition dataset $D_g = \{\}$
\FOR{epoch $i = 1 \ldots N $}
\IF{$i \% u==0$}
\STATE Clear $D_g$
\STATE Populate $D_g$ by relabeling $D$ using PIP
\ENDIF
\FOR{$j$ = $1$ \textbf{to} $T-1$}
\STATE Collect off policy experience using $\pi_{H}$ and $\pi_{L}$
\ENDFOR
\STATE Update lower primitive via Soft Actor Critic (SAC) and IRL(Eq \ref{eqn:joint_update_lower})
\STATE Sample transitions from $D_g$
\STATE Update higher policy via SAC and IRL (Eq \ref{eqn:joint_update_upper})
\ENDFOR \end{algorithmic} \end{algorithm}
The lower policy is regularized using the primitive expert demonstration dataset, and the upper level is optimized using the subgoal transition dataset populated using PIP. $\psi$ is the regularization weight hyper-parameter for the IRL objective. When $\psi=0$, the method reduces to an HRL policy with no higher level policy regularization. When $\psi$ is too high, the method might overfit to the expert demonstration dataset. We perform ablation analysis to choose $\psi$ in our experiments in Appendix~\ref{appendix_ablation}. The CRISP algorithm is shown in Algorithm~\ref{alg:algo_crisp}.
\section{Related Work} \label{sec:related_work} Learning effective hierarchies of policies has garnered substantial research interest in RL~\citep{Barto03recentadvances, SUTTON1999181, NIPS1997_5ca3e9b1,DBLP:journals/corr/cs-LG-9905014}. Options framework~\citep{SUTTON1999181,DBLP:journals/corr/BaconHP16,DBLP:journals/corr/abs-1711-03817,DBLP:journals/corr/abs-1709-04571,DBLP:journals/corr/abs-1902-09996, DBLP:journals/corr/abs-1712-00004} learns temporally extended macro actions, and a termination function for solving long horizon tasks. However, these approaches run into degenerate solutions in absence of proper regularization, where a sub-policy either terminates after each step, or runs for the entire episode. In goal-conditioned learning, some previous approaches restrict the search space by greedily solving for specific goals~\citep{Kaelbling93learningto, fd-ssvf-02}. This approach has also been extended to hierarchical RL~\citep{DBLP:journals/corr/abs-1906-11228, DBLP:journals/corr/abs-2007-15588, NEURIPS2019_c8d3a760}. HIRO~\citep{DBLP:journals/corr/abs-1805-08296} and HRL with hindsight~\citep{DBLP:journals/corr/abs-1712-00948} approaches deal with the non-stationarity issue in hierarchical learning by relabeling transition data for training goal-conditioned policies, where the higher level predicts subgoals for the lower primitive. In contrast, our method deals with non-stationarity by regularizing the higher policy with imitation learning to provide a curriculum of achievable subgoals to the lower primitive. Our approach is inspired from curriculum learning~\citep{10.1145/1553374.1553380}, where the task difficulty gradually increases in complexity, thereby amortizing non-stationarity.
\par Previous approaches that leverage expert demonstrations have shown impressive results~\citep{DBLP:journals/corr/abs-1709-10089,DBLP:journals/corr/abs-1709-10087,DBLP:journals/corr/HesterVPLSPSDOA17}. Expert demonstrations have also been used to bootstrap option learning~\citep{krishnan2017ddco, fox2017multilevel, Shankar2020LearningRS, kipf2019compile}. Other approaches use imitation learning to bootstrap hierarchical approaches in complex task domains~\citep{pmlr-v80-shiarlis18a,DBLP:journals/corr/abs-1710-05421,doi:10.1177/0278364918784350,kipf2019compile}. Relay Policy Learning (RPL)~\citep{DBLP:journals/corr/abs-1910-11956} uses a simple fixed-window approach for parsing expert demonstrations to generate a subgoal transition dataset for training the higher level policy. However, fixed parsing based approaches might predict subgoals that are either too hard for the lower level primitive, in which case the higher level is cursed with an ambiguous extrinsic reward signal, or too easy, in which case the higher level is forced to do most of the heavy-lifting for solving the task. In contrast, our data relabeling technique PIP segments expert demonstration trajectories into \textit{meaningful} subtasks, without requiring an external expert. Our adaptive parsing approach considers the limited goal reaching ability of the lower primitive, and is therefore able to produce much better subgoals.
\section{Experiments} \label{sec:experiment} \par For experimental analysis, we consider complex tasks with continuous state and action spaces that require long term planning. We perform experiments on three robotic Mujoco~\citep{todorov2012mujoco} environments: $(i)$ maze navigation, $(ii)$ pick and place, and $(iii)$ rope manipulation environment. These environments employ a 7-DoF robotic arm to perform complex robotic tasks. These environments become progressively more difficult and demonstrate the efficacy of our method in long horizon tasks. We empirically compare our approach with various baselines in Table~\ref{tbl:success_rate_performance}. For qualitative results, please refer to the supplementary video.
\subsection{Comparative analysis} \label{sec:baselines} We enlist the baseline methods and explain the rationale. \begin{itemize} \item \textbf{Relay Policy Learning (RPL)}~\citep{DBLP:journals/corr/abs-1910-11956} parses subgoals from expert state demonstrations using a fixed window approach. We use this baseline to highlight the advantage of adaptive parsing of subgoals compared to fixed window parsing. We perform extensive search for the window size hyper-parameters in RPL for each environment, which we provide in appendix \ref{appendix_ablation}. \item \textbf{Hierarchical policy (Hier)} denotes a $2$ level hierarchical policy where the high level policy is trained using only reinforcement learning, and the lower level policy is trained using RL and IRL using primitive expert demonstrations. We use it to show the importance of curriculum based subgoal generation and consequent IRL based regularization on the higher level policy. The hierarchical $2$-level policy where both upper and lower levels are trained using only RL failed to show good performance. Hence, we do not include it in the baseline comparisons. \item \textbf{Discriminator Actor Critic (DAC)}~\citep{kostrikov2018discriminator} uses IRL to learn a single-level policy using low level expert demonstrations $\mathcal{D}$. Using this baseline, we demonstrate the advantage of using hierarchy, curriculum based subgoal generation and consequent IRL based regularization in our approach. \end{itemize} Note that we do not include 1-level RL baseline in the results as it failed to provide good performance.
\subsection{Robotic Maze Navigation Environment} Here we provide details about the maze navigation environment, its implementation and results. \subsubsection{Environment Setup} In this environment, a $7$-DOF robotic arm gripper navigates across random four room mazes. The gripper arm is kept closed and the positions of walls and gates are randomly generated. The table is discretized into a rectangular $W*H$ grid, and the vertical and horizontal wall positions $W_{P}$ and $H_{P}$ are randomly picked from $(1,W-2)$ and $(1,H-2)$ respectively. In the four room environment thus constructed, the four gate positions are randomly picked from $(1,W_{P}-1)$, $(W_{P}+1,W-2)$, $(1,H_{P}-1)$ and $(H_{P}+1,H-2)$. The height of the gripper is kept fixed at table height, and it has to navigate across the maze to the goal position (shown as a red sphere). The maximum task horizon $T$ is kept at $225$ timesteps, and the lower primitive is allowed to execute for $c=15$ timesteps.
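The random four-room maze construction described above can be sketched as follows. This is a simplified illustration: we shrink the wall-position ranges slightly so that every wall segment is long enough to hold a gate, and we omit the outer boundary walls and the paper's exact index conventions.

```python
import random

def random_four_room_maze(W, H, rng=random):
    """Return a W x H one-hot grid (1 = wall block, 0 = free space) with
    one vertical wall, one horizontal wall, and four random gates."""
    wp = rng.randint(2, W - 3)  # vertical wall column
    hp = rng.randint(2, H - 3)  # horizontal wall row
    grid = [[0] * H for _ in range(W)]
    for y in range(H):
        grid[wp][y] = 1
    for x in range(W):
        grid[x][hp] = 1
    # One gate per wall segment, never at the wall intersection.
    grid[rng.randint(1, wp - 1)][hp] = 0
    grid[rng.randint(wp + 1, W - 2)][hp] = 0
    grid[wp][rng.randint(1, hp - 1)] = 0
    grid[wp][rng.randint(hp + 1, H - 2)] = 0
    return grid
```

Since the two walls overlap in one cell and four gates are opened, the resulting grid always contains exactly $W + H - 5$ wall blocks.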
\subsubsection{Implementation details}
The following implementation details refer to both the higher and lower level policies, unless otherwise explicitly stated. The state and action spaces in the environment are continuous. The actor, critic and discriminator networks are formulated as $3$ layer fully connected neural networks with $512$ neurons in each layer. The state is represented as the vector $[p,\mathcal{M}]$, where $p$ is the current gripper position and $\mathcal{M}$ is the sparse maze array. The higher level policy input is thus a concatenated vector $[p,\mathcal{M},g]$, where $g$ is the target goal position, whereas the lower level policy input is the concatenated vector $[p,\mathcal{M},s_g]$, where $s_g$ is the sub-goal provided by the higher level policy. The current position of the gripper is the current achieved goal. The sparse maze array $\mathcal{M}$ is a discrete $2D$ one-hot vector array, where $1$ represents presence of a wall block, and $0$ absence. In our experiments, the sizes of $p$ and $\mathcal{M}$ are kept to be $3$ and $110$ respectively. The upper level predicts subgoal $s_g$, hence the higher level policy action space dimension is the same as the dimension of goal space. The lower primitive action $a$, which is directly executed on the environment, is a $4$ dimensional vector with every dimension $a_i \in [0,1]$. The first $3$ dimensions provide offsets to be scaled and added to the gripper position for moving it to the intended position. The last dimension provides gripper control ($0$ implies a fully closed gripper, $0.5$ implies a half closed gripper and $1$ implies a fully open gripper). We select $100$ randomly generated mazes each for training, testing and validation. For selecting train, test and validation mazes, we first randomly generate $300$ distinct mazes, and then randomly divide them into $100$ train, test and validation mazes each. Each experiment is run on $4$ parallel workers.
We use the off-policy Soft Actor Critic~\citep{DBLP:journals/corr/abs-1801-01290} algorithm for optimizing the RL objective in our experiments. We keep the regularization weight hyperparameter at $\Psi=0.0078$ and use the Adam~\citep{kingma2014method} optimizer. The hyperparameter $u$, the number of training iterations after which the replay buffer is flushed and re-populated, is set to $10$. The experiments are run for $2.93 \times 10^{6}$ timesteps. The method for generating expert demonstrations is provided in Appendix~\ref{appendix_maze_expert}.
\subsubsection{Results} In Table \ref{tbl:success_rate_performance}, we report the success rate performance of CRISP and the baselines, averaged over $3$ seeds. During training and testing, we evaluate success rates over $N=100$ random episodic rollouts. Since the test mazes are randomly generated, the performance metric also measures the generalization capability of our proposed approach. It is evident from the experiments that CRISP-IRL and CRISP-BC outperform the baselines and demonstrate impressive generalization. Note that CRISP-IRL outperforms CRISP-BC in this environment. This is expected, since IRL tends to outperform BC in tasks that require long-term planning.
\subsection{Robotic Pick and Place Environment} Here we provide details about the pick and place environment, its implementation and results. \input{figures_tex/tbl_performance.tex}
\subsubsection{Environment Setup} In this environment, a $7$-DOF robotic arm gripper has to pick up a square block and place it at a goal position. We set the goal position slightly higher than table height. The maximum task horizon $T$ is kept at $225$ timesteps, and the lower primitive is allowed to execute for $c=15$ timesteps. In this complex task, the gripper has to navigate to the block, close the gripper to hold the block, and then bring the block to the desired goal position. We provide the success rate results and baseline comparisons in Section~\ref{app:pick_results}.
\subsubsection{Implementation details}
In this environment, the actor, critic, and discriminator networks are formulated as $3$-layer fully connected networks with $512$ neurons in each layer. The state is represented as the vector $[p,o,q,e]$, where $p$ is the current gripper position, $o$ is the position of the block object placed on the table, $q$ is the relative position of the block with respect to the gripper, and $e$ consists of the linear and angular velocities of the gripper and the block object. The higher level policy input is thus the concatenated vector $[p,o,q,e,g]$, where $g$ is the target goal position. The lower level policy input is the concatenated vector $[p,o,q,e,s_g]$, where $s_g$ is the subgoal provided by the higher level policy. The current position of the block object is the current achieved goal. In our experiments, the sizes of $p$, $o$, $q$ and $e$ are kept at $3$, $3$, $3$ and $11$, respectively. The higher level predicts the subgoal $s_g$; hence, the higher level policy action space and the goal space have the same dimension. The lower primitive action $a$ is a $4$-dimensional vector with every dimension $a_i \in [0,1]$. The first $3$ dimensions provide gripper position offsets, and the last dimension provides gripper control ($0$ means a closed gripper and $1$ an open gripper). While training, the positions of the block object and the goal are randomly generated (the block is always initialized on the table, and the goal is always above the table at a fixed height). We select $100$ random environments each for training, testing and validation: we first randomly generate $300$ distinct environments with different block and target goal positions, and then randomly divide them into $100$ train, test and validation environments each. Each experiment is run on $4$ parallel workers. We use the off-policy Soft Actor Critic~\citep{DBLP:journals/corr/abs-1801-01290} algorithm for the RL objective in our experiments. 
We keep the regularization weight hyperparameter at $\Psi=0.005$ and use the Adam~\citep{kingma2014method} optimizer. The hyperparameter $u$ is set to $5$. The experiments are run for $6.75 \times 10^{6}$ timesteps. The method for generating expert demonstrations is provided in Appendix~\ref{appendix_pick_and_place_expert}.
\subsubsection{Results} \label{app:pick_results} In Table \ref{tbl:success_rate_performance}, we report the success rate performance averaged over $3$ seeds. During training and testing, we evaluate success rates over $N=100$ random episodic rollouts. From Table \ref{tbl:success_rate_performance}, it is apparent that CRISP-BC and CRISP-IRL clearly outperform the baselines by a large margin. Note that CRISP-IRL outperforms CRISP-BC. This provides convincing evidence that stable hierarchical learning indeed yields better performance on complex long horizon tasks.
\subsection{Robotic Rope Manipulation Environment}
Here we provide details about the rope manipulation environment, its implementation and results.
\subsubsection{Environment Setup} \par In the robotic rope manipulation task, a deformable rope is kept on the table and the robotic arm performs pokes to nudge the rope towards the desired goal rope configuration. The task horizon is fixed at $25$ pokes. The deformable rope is formed from $15$ constituent cylinders joined together.
\subsubsection{Pretraining lower level primitive} In this environment, we first pretrain the lower level primitive using simpler goal rope configurations that can be achieved within a few pokes (the simple goal configurations are chosen based on the L2 distance between the initial and goal rope configurations). We pre-train the lower primitive on this simpler task because, without pre-training, none of the methods produced significant results. To ensure fair comparisons, we keep this pre-training requirement consistent across all baselines.
\subsubsection{Generating expert demonstrations} In complex environments, we generally do not have access to lower level expert demonstrations. Moreover, hard-coding an expert policy may generate sub-optimal expert demonstrations. In the rope environment, we did not have access to lower level expert demonstrations. However, recall that our method requires only expert state demonstrations, not expert action demonstrations. For generating expert state demonstrations, we used an interpolation-based approach, where we obtain subgoals $s_{g_i}$ by linearly interpolating between the starting rope configuration $s$ and the final rope configuration $g$ as: \begin{equation} s_{g_i} = \frac{i}{N}g + \frac{N-i}{N}s, \;\; i \in \{1,2,\ldots,N-1\} \end{equation} We found $N=15$ to perform well empirically. After generating these interpolations, we performed simple transformations to ensure that the interpolations are valid rope configurations. Note that this approach may generate sub-optimal expert trajectories (in contrast with the maze navigation and pick and place environments, where it is easier to generate expert trajectories owing to the simplicity of the task). For this reason, the rope environment is a particularly hard environment. Also note that although in general CRISP-IRL works better than CRISP-BC in the maze navigation task (environment: simple, generating expert demonstrations: easy) and the pick and place task (environment: complex, generating expert demonstrations: easy), CRISP-IRL is hard to train in scenarios like the rope manipulation environment (environment: complex, generating expert demonstrations: hard).
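The interpolation above can be sketched as follows (a minimal illustration assuming rope configurations are flat lists of joint coordinates; the follow-up transformations that project each interpolation onto a valid rope configuration are omitted):

```python
def interpolate_subgoals(s, g, N=15):
    # s_{g_i} = (i/N) * g + ((N - i)/N) * s, for i = 1, ..., N-1
    return [[(i / N) * gj + ((N - i) / N) * sj for sj, gj in zip(s, g)]
            for i in range(1, N)]

# e.g. a rope of 16 joints with (x, y) each, flattened to 32 numbers
s = [0.0] * 32
g = [1.0] * 32
subgoals = interpolate_subgoals(s, g)
assert len(subgoals) == 14
assert abs(subgoals[0][0] - 1 / 15) < 1e-12
```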
\subsubsection{Implementation details} The following implementation details refer to both the higher and lower level policies, unless explicitly stated otherwise. The state and action spaces in the environment are continuous. The actor, critic and discriminator networks are formulated as $3$-layer fully connected neural networks with $512$ neurons in each layer. The state space for the rope manipulation environment is a vector formed by concatenating the intermediate joint positions. The higher level predicts the subgoal $s_g$ for the lower primitive. The action space of the poke is $(x, y, \eta)$, where $(x, y)$ is the initial position of the poke, and $\eta$ is the angle describing the direction of the poke. We fix the poke length to be $0.08$. While training our hierarchical approach, we select $100$ randomly generated initial and final rope configurations each for training, testing and validation: we first randomly generate $300$ distinct configurations, and then randomly divide them into $100$ train, test and validation configurations each. Each experiment is run on $4$ parallel workers. We use the off-policy Soft Actor Critic~\citep{DBLP:journals/corr/abs-1801-01290} algorithm for optimizing the RL objective in our experiments. We keep $\Psi=0.0078$ and $u=5$ in the experiments, which are run for $2.625 \times 10^{5}$ timesteps.
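As an illustration of the poke parameterization, the sketch below maps an action $(x, y, \eta)$ to the poke's end point; the start-plus-length displacement model is our assumption, since only the action variables and the fixed poke length of $0.08$ are specified above.

```python
import math

POKE_LENGTH = 0.08  # fixed poke length used in the environment

def poke_endpoint(x, y, eta, length=POKE_LENGTH):
    # (x, y) is the initial poke position and eta the poke direction;
    # we return the point a straight poke of the given length would reach
    # (this mapping is an assumption for illustration).
    return (x + length * math.cos(eta), y + length * math.sin(eta))

end_x, end_y = poke_endpoint(0.5, 0.5, 0.0)
assert abs(end_x - 0.58) < 1e-12 and abs(end_y - 0.5) < 1e-12
```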
\subsubsection{Results} In Table \ref{tbl:success_rate_performance}, we report the success rate performance of the proposed methods and the baselines, averaged over $3$ seeds. During training and testing, we evaluate success rates over $N=100$ random episodic rollouts. Since lower level expert action demonstrations are not available in this environment, we cannot compute the DAC baseline. From Table \ref{tbl:success_rate_performance}, it is apparent that CRISP-BC and CRISP-IRL clearly outperform the baselines by a large margin. In this complex task, CRISP-BC slightly outperforms CRISP-IRL. We believe this is because the advantages of IRL are sometimes counterbalanced by the difficulty of training it on hard tasks. Thus, although in general CRISP-IRL outperforms CRISP-BC in long horizon tasks, CRISP-BC may work better in cases where training the IRL objective is hard.
\subsection{Ablative studies} \label{sec:ablation} \input{figures_tex/success_rate_comparison.tex}
To elucidate the importance of the various constituent design choices in our proposed approach, we show success rate comparison plots in Figure \ref{fig:success_rate_comparison} Row 1. We compare our proposed approach with \textit{RPL} to demonstrate the advantage of adaptive parsing over fixed window parsing, i.e., of segmenting the task into meaningful subtasks using the lower primitive while using a curriculum of subgoals to evolve the lower primitive. The comparison with the \textit{Hier} method shows the advantage of curriculum-based subgoal regularization using imitation learning. Finally, the comparison with \textit{DAC} highlights the importance of using both hierarchy and curriculum-based subgoal regularization using imitation learning. CRISP shows faster convergence and stable learning when compared to the other approaches. In Figure \ref{fig:success_rate_comparison} Row 2, we compare the methods in terms of the distance between the final achieved goal and the desired goal, averaged over $100$ episodic rollouts. This metric indicates how accurately an approach solves the task. As can be seen, CRISP clearly demonstrates better accuracy. \par The ablation experiments are provided in Appendix~\ref{appendix_ablation}. We perform experiments to find the optimal number of expert demonstrations for each environment. The optimal value is chosen to be $100$ after performing the ablation experiments shown in Figure~\ref{fig:demos_ablation} in Appendix~\ref{appendix_ablation}. If the number of demonstrations is too small, the policy might overfit to the demonstrations, thus decreasing overall performance. Although the number of available expert demonstrations is subject to availability, we increase the number until there is no significant performance improvement in our experiments. Notably, since CRISP uses PIP to select good subgoals from the lower level expert dataset, we require \textit{good} lower level expert demonstration trajectories. 
If the expert trajectory is \textit{bad}, PIP is unable to select good subgoals, leading to poor performance. This is depicted in Figure~\ref{fig:bad_demos_ablation} in Appendix~\ref{appendix_ablation}, where we show that the overall performance drops significantly when bad demonstration trajectories are added to the expert demonstration dataset. For choosing the optimal value of the hyperparameter $u$, we perform the ablation experiments provided in Figure~\ref{fig:u_ablation} in Appendix~\ref{appendix_ablation}. For the RPL experiments, we choose the window size hyperparameter $k$ by running RPL for different values of $k$. The experiments are shown in Figure~\ref{fig:rpl_ablation} in Appendix~\ref{appendix_ablation}. After performing experiments in the maze navigation, pick and place, and rope manipulation environments, $k$ is set to $4$, $8$ and $3$, respectively. We also performed an ablation analysis for choosing the imitation learning weight hyperparameter $\Psi$. The experiments are shown in Figure~\ref{fig:lambda_ablation} in Appendix~\ref{appendix_ablation}.
\section{Discussion and future work} \label{sec:discussion} \par We introduce CRISP, a general-purpose, lower-primitive-informed method for efficient hierarchical reinforcement learning. CRISP leverages primitive-parsed expert demonstrations and performs data relabeling on them to populate a subgoal transition dataset for regularizing the higher level policy. CRISP employs hierarchical curriculum learning for solving complex tasks that require long term planning. We evaluate our method on complex robotic maze navigation, pick and place, and rope manipulation tasks, and demonstrate that it makes substantial gains over its baselines.
\par However, CRISP assumes the ability to reset the environment to any state from the expert demonstration dataset while collecting the subgoal dataset using PIP. A possible way to relax this assumption is to combine CRISP with the approach of~\citep{eysenbach2017leave}, which learns a backward controller that tries to reset the environment. We believe this is an interesting avenue for future work.
\appendix
\section{Appendix} \label{sec:appendix}
\subsection{Sub-optimality analysis proof for higher level policy} \label{appendix_higher}
The sub-optimality of the upper policy $\pi_{\theta_{H}}^{H}$, over $c$-length sub-trajectories $\tau$ sampled from $d_{c}^{\pi^{*}}$, can be bounded as: \begin{equation} \label{eqn:eqn_8}
\begin{split}
|J(\pi^{*})-J(\pi_{\theta_{H}}^{H})| \leq \lambda_{H} * \phi_{D} + \lambda_{H} * \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G} [D_{TV}(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation}
where $\lambda_{H}=\frac{2}{(1-\gamma)(1-\gamma^{c})}R_{max} \| \frac{d_c^{\pi^{*}}}{\kappa} \|_{\infty}$ \begin{proof}
We extend the suboptimality bound from~\citep{ajay2020opal} between goal conditioned policies $\pi^{*}$ and $\pi_{\theta_{H}}^{H}$ as follows: \begin{equation} \label{eqn:eqn_9}
\begin{split}
|J(\pi^{*})-J(\pi_{\theta_{H}}^{H})| \leq \frac{2}{(1-\gamma)(1-\gamma^{c})}R_{max}\mathbb{E}_{s \sim d_{c}^{\pi^{*}},g \sim G}[D_{TV}(\pi^{*}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation} By applying the triangle inequality: \begin{equation} \label{eqn:eqn_10}
\begin{split}
D_{TV}(\pi^{*}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g)) \leq D_{TV}(\pi^{*}(\tau|s,g)||\pi_{D}^{H}(\tau|s,g)) + D_{TV}(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))
\end{split} \end{equation} Taking the expectation with respect to $s \sim \kappa$, $g \sim G$ and $\pi_{D}^{H} \sim \Pi_{D}^{H}$, \begin{equation} \label{eqn:eqn_11}
\begin{split}
\mathbb{E}_{s \sim \kappa, g \sim G} [D_{TV}(\pi^{*}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))] \leq \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G}[D_{TV}(\pi^{*}(\tau|s,g)||\pi_{D}^{H}(\tau|s,g))] + \\ \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G}[D_{TV}(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation} Since $\pi^{*}$ is $\phi_D$-common in $\Pi_{D}^{H}$, we can rewrite Equation~\ref{eqn:eqn_11} as: \begin{equation} \label{eqn:eqn_12}
\begin{split}
\mathbb{E}_{s \sim \kappa,g \sim G} [D_{TV}(\pi^{*}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))] \leq \phi_D + \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G}[D_{TV}(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation} Substituting the result from Equation~\ref{eqn:eqn_12} into Equation~\ref{eqn:eqn_9}, we get \begin{equation} \label{eqn:eqn_13}
\begin{split}
|J(\pi^{*})-J(\pi_{\theta_{H}}^{H})| \leq \lambda_{H} * \phi_{D} + \lambda_{H} * \mathbb{E}_{s \sim \kappa, \pi_{D}^{H} \sim \Pi_{D}^{H}, g \sim G} [D_{TV}(\pi_{D}^{H}(\tau|s,g)||\pi_{\theta_{H}}^{H}(\tau|s,g))]
\end{split} \end{equation}
where $\lambda_{H}=\frac{2}{(1-\gamma)(1-\gamma^{c})}R_{max} \| \frac{d_c^{\pi^{*}}}{\kappa} \|_{\infty}$ \end{proof}
\subsection{Sub-optimality analysis proof for lower level policy} \label{appendix_lower} Let the optimal lower level policy be $\pi^{**}$. The sub-optimality of the lower primitive $\pi_{\theta_{L}}^{L}$ can be bounded as follows: \begin{equation} \label{eqn:eqn_14}
\begin{split}
|J(\pi^{**})-J(\pi_{\theta_{L}}^{L})| \leq \lambda_{L} * \phi_{D} + \lambda_{L} * \mathbb{E}_{s \sim \kappa, \pi_{D}^{L} \sim \Pi_{D}^{L}, s_g \sim \pi_{\theta_{L}}^{L}} [D_{TV}(\pi_{D}^{L}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))]
\end{split} \end{equation}
where $\lambda_{L}=\frac{2}{(1-\gamma)^2}R_{max} \| \frac{d_c^{\pi^{**}}}{\kappa} \|_{\infty}$ \begin{proof}
We extend the suboptimality bound from~\citep{ajay2020opal} between goal conditioned policies $\pi^{**}$ and $\pi_{\theta_{L}}^{L}$ as follows: \begin{equation} \label{eqn:eqn_15}
\begin{split}
|J(\pi^{**})-J(\pi_{\theta_{L}}^{L})| \leq \frac{2}{(1-\gamma)^2}R_{max}\mathbb{E}_{s \sim d_{c}^{\pi^{**}},s_g \sim \pi_{\theta_{L}}^{L}}[D_{TV}(\pi^{**}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))]
\end{split} \end{equation} By applying the triangle inequality: \begin{equation} \label{eqn:eqn_16}
\begin{split}
D_{TV}(\pi^{**}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g)) \leq D_{TV}(\pi^{**}(\tau|s,s_g)||\pi_{D}^{L}(\tau|s,s_g)) + D_{TV}(\pi_{D}^{L}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))
\end{split} \end{equation} Taking the expectation with respect to $s \sim \kappa$, $s_g \sim \pi_{\theta_{L}}^{L}$ and $\pi_{D}^{L} \sim \Pi_{D}^{L}$, \begin{equation} \label{eqn:eqn_17}
\begin{split}
\mathbb{E}_{s \sim \kappa,s_g \sim \pi_{\theta_{L}}^{L}} [D_{TV}(\pi^{**}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))] \leq \mathbb{E}_{s \sim \kappa, \pi_{D}^{L} \sim \Pi_{D}^{L}, s_g \sim \pi_{\theta_{L}}^{L}}[D_{TV}(\pi^{**}(\tau|s,s_g)||\pi_{D}^{L}(\tau|s,s_g))] + \\ \mathbb{E}_{s \sim \kappa, \pi_{D}^{L} \sim \Pi_{D}^{L}, s_g \sim \pi_{\theta_{L}}^{L}}[D_{TV}(\pi_{D}^{L}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))]
\end{split} \end{equation} Since $\pi^{**}$ is $\phi_D$-common in $\Pi_{D}^{L}$, we can rewrite Equation~\ref{eqn:eqn_17} as: \begin{equation} \label{eqn:eqn_18}
\begin{split}
\mathbb{E}_{s \sim \kappa,s_g \sim \pi_{\theta_{L}}^{L}} [D_{TV}(\pi^{**}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))] \leq \phi_D + \mathbb{E}_{s \sim \kappa, \pi_{D}^{L} \sim \Pi_{D}^{L}, s_g \sim \pi_{\theta_{L}}^{L}}[D_{TV}(\pi_{D}^{L}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))]
\end{split} \end{equation} Substituting the result from Equation~\ref{eqn:eqn_18} into Equation~\ref{eqn:eqn_15}, we get \begin{equation} \label{eqn:eqn_19}
\begin{split}
|J(\pi^{**})-J(\pi_{\theta_{L}}^{L})| \leq \lambda_{L} * \phi_{D} + \lambda_{L} * \mathbb{E}_{s \sim \kappa, \pi_{D}^{L} \sim \Pi_{D}^{L}, s_g \sim \pi_{\theta_{L}}^{L}} [D_{TV}(\pi_{D}^{L}(\tau|s,s_g)||\pi_{\theta_{L}}^{L}(\tau|s,s_g))]
\end{split} \end{equation}
where $\lambda_{L}=\frac{2}{(1-\gamma)^2}R_{max} \| \frac{d_c^{\pi^{**}}}{\kappa} \|_{\infty}$ \end{proof}
\subsection{Generating expert demonstrations for maze navigation task} \label{appendix_maze_expert} We use the path planning RRT \cite{Lavalle98rapidly-exploringrandom} algorithm to generate feasible paths $P=(p_t, p_{t+1}, p_{t+2},\ldots,p_{n})$ from the current state to the goal state. RRT has privileged information about the obstacle positions, which is provided to all methods through the state, which contains the sparse maze array. Using these expert paths, we generate a state-action expert demonstration dataset for the lower level policy.
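For reference, a bare-bones 2D RRT looks like the following (our own sketch, not the exact planner or parameters used here; the free-space predicate stands in for the maze's wall layout):

```python
import math
import random

def rrt(start, goal, is_free, bounds=(10.0, 10.0),
        step=0.5, iters=4000, goal_tol=0.5, seed=0):
    # Rapidly-exploring Random Tree: repeatedly sample a point, extend the
    # nearest tree node a fixed step toward it, and stop near the goal.
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        q = (rng.uniform(0, bounds[0]), rng.uniform(0, bounds[1]))
        k = min(range(len(nodes)),
                key=lambda i: (nodes[i][0] - q[0]) ** 2 + (nodes[i][1] - q[1]) ** 2)
        nx, ny = nodes[k]
        d = math.hypot(q[0] - nx, q[1] - ny) or 1e-9
        new = (nx + step * (q[0] - nx) / d, ny + step * (q[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = k
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            path, i = [], len(nodes) - 1   # backtrack: p_t, ..., p_n
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), lambda p: True)
assert path is not None and path[0] == (1.0, 1.0)
```

In the actual environment, `is_free` would test the sampled point against the wall cells of the sparse maze array.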
\subsection{Generating expert demonstrations for pick and place task} \label{appendix_pick_and_place_expert} In order to generate expert demonstrations, we used a human expert to perform the pick and place task in a virtual-reality-based MuJoCo simulation. In this task, the expert first picks up the block using the robotic gripper, and then takes it to the target goal position. Using these expert trajectories, we generate an expert demonstration dataset for the lower level policy.
\subsection{Ablation experiments} \label{appendix_ablation}
\input{figures_tex/demos_ablation.tex} \input{figures_tex/u_ablation.tex} \input{figures_tex/lambda_ablation.tex} \input{figures_tex/rpl_ablation.tex} \input{figures_tex/bad_demos_ablation.tex}
\end{document}
Delay allocation between source buffering and interleaving for wireless video
Yushi Shen1,
Kartikeya Mehrotra2,
Pamela C. Cosman3,
Laurence B. Milstein3 &
Xin Wang4
One fundamental tradeoff in the cross-layer design of a communications system is delay allocation. We study delay budget partitioning in a wireless multimedia system between two of the main components of delay: the queuing delay in the source encoder output buffer and the delay caused by the interleaver. In particular, we discuss how to apportion the fixed delay budget between the source encoder and the interleaver given the channel characteristics, the video motion, the delay constraint, and the channel bit rate.
Delay partitioning is a fundamental tradeoff problem in the cross-layer design of a communications system. This problem is especially important in real-time video communications such as video conferencing or video telephony, in which there exists a tight end-to-end delay constraint. For example, interactive video telephony should have a maximum end-to-end delay of no more than around 300 ms. Once the receiver begins displaying the received video, the display process must continue without stalling. In other words, in order to be useful, frame data entering the source encoder at time t must be displayed at the decoder by time (t+T), where T is the delay constraint, that is, an upper bound for end-to-end delay of the system. In addition, the available data rate on the channel is constrained by the available bandwidth.
In [1] and [2], the design of rate-control schemes for low-delay video transmissions was studied for a noiseless channel. In [3] and [4], the efficient design of an interleaver for a fading channel was investigated. In [5], specific tandem and joint source-channel coding strategies with complexity and delay constraints were analyzed and compared. In [6–8], delay-constrained wireless video transmission schemes were proposed for different application scenarios. In [9] and [10], tradeoffs between delay and video compression efficiency were discussed for a motion-compensated temporal filtering (MCTF) video codec and for hierarchical bi-directional (B-frames) schemes, respectively. In [11], the tradeoff between the long-term average transmission power and the average buffer delay incurred by the traffic was analyzed mathematically over a block-fading channel with delay constraints. And in [12], the tradeoff between the network capacity and the end-to-end queueing delay was studied for a mobile ad hoc network.
In the above works, design strategies under delay constraints were either investigated without addressing any tradeoff, or tradeoff problems under delay constraints were discussed for purposes and contexts different from those of this paper. In this paper, we study delay partitioning for video communications over a Rayleigh fading channel. In particular, we focus on the delay allocation between the source encoder buffer and the interleaver as we vary parameters such as the motion of the video content, the rate of variation of the channel, the end-to-end delay constraint, the channel bit rate, and the channel code rate.
The system model we study is shown in Fig. 1. Typically, video frames arrive at the video encoder at a constant frame rate. The frames are compressed into a variable-rate bit stream and passed on to the video encoder output buffer, from which bits are drained at a constant rate. To protect against channel errors, forward error correction (FEC) is applied to the compressed bitstream coming out of the encoder buffer. This is followed by interleaving to provide robustness to channel fading. Finally, the bit stream coming out of the interleaver is modulated and sent over the wireless channel. At the receiver, the bitstream is demodulated, de-interleaved, decoded, and then passed on to the video decoder input buffer (henceforth called the decoder buffer). The video decoder extracts bits from the decoder buffer at a variable rate to display each frame at its correct time and at the same constant frame rate at which the frames were available to the video encoder. A rate-control mechanism is used at the video encoder to control the number of bits allotted to each frame so that the encoder buffer and the decoder buffer never overflow or underflow, while maintaining acceptable video quality at all times. Note that we assume there is no video encoder input buffer and no video decoder output buffer; hence, the video encoder output buffer and the video decoder input buffer are called the encoder buffer and decoder buffer, respectively, throughout this paper.
The paper is organized as follows. In Section 2, the system model is introduced in detail. In Section 3, we formulate the delay partitioning problem mathematically and end up with a relationship among source encoding buffer delay, interleaving delay, and channel decoding delay, under a delay constraint. Simulation results of the tradeoff between the source encoder buffer and the interleaver are shown and analyzed in Section 4, for different video sequences over Rayleigh fading channels. In particular, we study how the tradeoff will be affected by the motion of the video content, the rate of variation of the channel, the delay constraint, and the channel bit rate. Lastly, Section 5 concludes the paper.
System model with delay constraint
In this section, we will discuss the components in Fig. 1 in detail.
In real-time video communications, the end-to-end delay for transmitting video data needs to be very small, particularly for interactive two-way applications such as video conferencing and gaming. Video data enters the source encoder at a constant rate of $f$ frames per second (fps), where it first undergoes block-based motion-compensated (MC) prediction, followed by DCT transformation of the residual block. The DCT coefficients are quantized by appropriately choosing the quantization parameter, and the quantized values are then run-length and Huffman coded. Assume the transmission bit rate is $R_{\mathrm{B}}$ bits per second (bps), and the source-coded bit stream leaves the encoder buffer at $r_{\mathrm{s}}$ bps.
Whenever a frame occupies more than $r_{\mathrm{s}}/f$ bits, bits accumulate in the source encoder buffer and increase the encoder buffer delay experienced by the incoming bits. If this trend continues for several frames, the buffer may fill up, because the buffer size is limited. When the number of bits in the buffer exceeds a predetermined threshold, frames are skipped, as will be discussed later. On the other hand, whenever a frame occupies less than $r_{\mathrm{s}}/f$ bits, the encoder buffer fullness level decreases. If this trend continues for several frames, the encoder buffer may run empty, thereby wasting channel bandwidth.
By sensing the buffer fullness and keeping an estimate of the available bit budget, the rate control chooses the quantization step size and seeks to prevent buffer overflow and underflow while maintaining acceptable video quality. If either the remaining bit budget is small or the buffer is getting full, the rate control resorts to coarse quantization. If either the remaining bit budget is large or the buffer is getting empty, the quantization step size is reduced (i.e., fine quantization). A large delay budget for the source encoder allows the use of a large encoder buffer, which tends to result in higher-quality video because the rate control has more freedom. Typically, the increased number of bits resulting from finely encoding a complex scene can be easily accommodated in the large buffer. However, when tight delay constraints exist, the system must operate with a small encoder delay budget, or equivalently a small encoder buffer, which tends to reduce the quality of the video, as the functioning of the rate control is more constrained. In extreme cases, the encoder buffer may fill up several times, leading to loss of data through repeated frame skipping.
On the decoder side, the incoming stream of video data is buffered in the source decoder buffer. Once the source decoder starts displaying the frames, the delay constraint becomes operational. If $T$ denotes the upper bound on the end-to-end delay of the system, a frame entering the encoder at time $t$ must be displayed at the decoder at time $(t+T)$, and all the video data corresponding to this particular frame must be available at the decoder accordingly. A video frame that is not able to meet its delay constraint is useless and is considered lost. We assume that the source decoder has knowledge of the frame numbers skipped by the source encoder, and that it holds over the immediately preceding displayed frame and displays it in place of the skipped frame.
In H.263, the rate control performs the bit allocation by selecting the encoder's quantization parameter for each block of $16 \times 16$ pixels. We choose the test model number 8 (TMN-8) rate control [1, 2] recommended for low-delay applications. The TMN-8 rate control is a two-step approach: a frame-layer control first selects a target bit count for the current frame, followed by a macroblock (MB) layer rate control that selects the quantization step size for each MB in the frame. The TMN-8 rate control has a threshold for frame skipping. Whenever the number of bits in the encoder buffer increases beyond this threshold, typically one frame is skipped so that the number of bits in the buffer falls below the threshold. For each skipped frame, the buffer fullness is reduced by $r_{\mathrm{s}}/f$ bits. We assume the first frame in the video sequence is coded as an I frame, and all subsequent frames as P frames, since this is a common strategy for video communications with a tight delay budget. We also assume the I frame is transmitted error free to the decoder, and that the decoder does not start the display until the first I frame is completely buffered. The rate control starts with the first P frame. Once the I frame is displayed, the delay constraint becomes operational and all subsequent frames must meet their delay constraint.
From the point of view of the system engineer, the parameter of interest is the threshold for frame skipping (denoted by $S_{\mathrm{t}}$). However, for the hardware engineer, the buffer size (denoted by $S_{\mathrm{b}}$) is more important. These two quantities are closely related, as explained now. As shown in Fig. 2, we modify the rate control such that, while encoding the frame, if the buffer fullness level exceeds $S_{\mathrm{t}}$, the remaining MBs of the frame are all skipped. If a particular sequence comprising $N_{\mathrm{skip}}$ bits is used to inform the decoder of this situation, the buffer size required is $S_{\mathrm{b}}=S_{\mathrm{t}}+N_{\mathrm{skip}}$. Because $N_{\mathrm{skip}}$ is usually much smaller than $S_{\mathrm{t}}$ (for example, $N_{\mathrm{skip}}=24$ in our system, while $S_{\mathrm{t}}$ is at least several thousand), for simplicity we assume the threshold for frame skipping and the buffer size are the same and equal to $S$, i.e., $S_{\mathrm{b}}=S_{\mathrm{t}}=S$.
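The buffer/threshold interaction can be illustrated with a toy trace (our own sketch, not TMN-8 itself; the frame sizes and parameters below are made up): each coded frame deposits its bits, the channel drains $r_{\mathrm{s}}/f$ bits per frame interval, and a frame is skipped whenever coding it would push the fullness past the threshold $S$.

```python
def simulate_encoder_buffer(frame_bits, r_s=64000.0, f=30.0, S=16000.0):
    # Toy encoder-buffer trace: returns the final fullness and the indices
    # of skipped frames. 'drain' is the r_s / f bits removed per interval.
    drain = r_s / f
    fullness, skipped = 0.0, []
    for k, b in enumerate(frame_bits):
        if fullness + b - drain > S:
            skipped.append(k)                  # frame skipped: no bits added
            fullness = max(0.0, fullness - drain)
        else:
            fullness = max(0.0, fullness + b - drain)
    return fullness, skipped

# a burst of large frames after a quiet stretch forces frame skips
fullness, skipped = simulate_encoder_buffer([2000] * 5 + [12000] * 3 + [2000] * 5)
assert skipped == [6, 7] and fullness < 16000
```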
Source encoder buffer
The information bitstream coming out of the source encoder buffer is channel coded using a rate-compatible punctured convolutional (RCPC) code with rate $r_{\mathrm{c}}$ and constraint length $\nu$ [13]. At the receiver, the Viterbi algorithm is used to find the best candidate in the trellis for the received bitstream. The delays encountered in the channel encoder and decoder are called the channel encoding delay and the channel decoding delay. Together, these make up the delay budget of channel coding. When using a convolutional code with constraint length $\nu$, the channel decoding delay is approximately the decision depth of the Viterbi decoder, which is about $5\nu$ bits. The decision depth for punctured convolutional codes is generally longer. If the puncturing period of the RCPC code is $P$, the decision depth can be bounded by $5P\nu$ [14]. We note, however, that when using channel encoding schemes such as turbo coding that require iterative decoding at the receiver, the channel coding delay may use up a significant portion of the overall delay budget.
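To see how little of the budget this typically costs, the helper below converts the $5P\nu$-bit decision-depth bound into time at the channel bit rate (the numeric values are illustrative, not taken from this paper):

```python
def viterbi_decoding_delay_ms(nu, P, R_B):
    # Decision-depth bound of about 5 * P * nu bits for an RCPC code with
    # constraint length nu and puncturing period P, expressed as time at R_B bps.
    return 5 * P * nu / R_B * 1000.0

# e.g. nu = 7, P = 8 over a 64 kbps channel: 280 bits, about 4.4 ms
delay = viterbi_decoding_delay_ms(7, 8, 64000)
assert abs(delay - 4.375) < 1e-9
```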
Bandwidth is a major resource shared between source coding and channel coding. A bandwidth constraint limits the available rate on the channel. Allocating more bandwidth to the source encoder allows more information from the source to be transmitted, resulting in better-quality video. However, the bandwidth available for channel coding is reduced, leading to increased errors on the channel and thus a reduced probability of achieving high video quality. Let R B bps be the total available rate on the channel, and r s and r c be the average source coding rate and the channel code rate, respectively. Then the bandwidth constraint is expressed as [15, 16]
$$ \frac{r_{\mathrm{s}}}{r_{\mathrm{c}}} = R_{\mathrm{B}}. $$
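For instance, with a total channel rate of R B=144 kbps (the value used in our simulations), each candidate channel code rate fixes the source coding rate via Eq. (1):

```python
R_B = 144_000  # total channel rate (bps)

# Candidate RCPC code rates from the family used later in the paper
for r_c in (1/3, 4/11, 2/5, 4/9):
    r_s = r_c * R_B  # bandwidth constraint (1): r_s / r_c = R_B
    print(f"r_c = {r_c:.3f}  ->  r_s = {r_s / 1000:.1f} kbps")
```

This yields source rates of 48, 52.4, 57.6, and 64 kbps respectively, the values that appear again in Section 4.3.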
Interleaving and fading channel model
We consider coherent BPSK over a flat fading channel, where flat fading means that there is a constant gain across the bandwidth of the received signal. Therefore, the effect of the channel is a multiplicative gain term on the received signal level. We use the channel model suggested by Jakes [17], in which the envelope of the fading process is assumed to be Rayleigh distributed. The Doppler spectrum is given by
$$ S(f) = \frac{1}{\pi f_{D}\sqrt{1-(f/f_{D})^{2}}}, \qquad |f| < f_{D}, $$
where f D is the Doppler frequency and is given by f D=f c v/c, where f c is the carrier frequency, v is the mobile velocity, and c is the speed of light. The covariance function of the fading process for this channel model can be shown to be given by the zeroth-order Bessel function of the first kind, namely
$$ R_{\alpha}(\tau) = J_{0}(2 \pi f_{D} |\tau|), $$
where τ is the time separation between the two instances when the channel is sampled. Thus, the correlation between two consecutive symbols with separation T s is J 0(2π f D T s), where T s is the symbol time. The product f D T s is usually called the normalized Doppler frequency.
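Since J 0 admits the integral representation J 0(x)=(1/π)∫ 0 π cos(x sin θ)dθ, the adjacent-symbol correlation can be evaluated with plain quadrature; a sketch (the step count is an arbitrary choice of ours):

```python
import math

def bessel_j0(x, n=2000):
    """Zeroth-order Bessel function of the first kind, J0(x),
    via its integral representation and the trapezoidal rule."""
    h = math.pi / n
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    for k in range(1, n):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

f_D_Ts = 0.005                         # normalized Doppler frequency
rho = bessel_j0(2 * math.pi * f_D_Ts)  # correlation of adjacent symbols
```

For f D T s=0.005 this gives ρ≈0.9998, i.e., adjacent symbols see nearly identical fading, which is precisely why interleaving is needed.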
Error control coding works well when the code symbols used in the decoding process are affected by independent channel conditions. Correlated fading is one of the sources of channel memory on the land mobile channel. Interleaving is used to break up channel memory, and it is an essential element in the design of error control coding techniques for the land mobile channel. A block interleaver formats the encoded data in a rectangular array of N 1 rows and N 2 columns. The code symbols are written in row-by-row and read out column-by-column. On the decoder side, the received symbols are first de-interleaved before they enter the decoder. As a result of this reordering, the fading samples of two consecutive symbols entering the decoder are actually N 1 T s apart in time, and the correlation between two consecutive channel instances is now given as J 0(2π f D N 1 T s). The parameter N 1 is often referred to as the depth of the interleaver.
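The write/read pattern of such a block interleaver is only a few lines of code (a sketch, list-based for clarity):

```python
def interleave(symbols, N1, N2):
    """Write an N1 x N2 array row-by-row, read it column-by-column."""
    assert len(symbols) == N1 * N2
    return [symbols[r * N2 + c] for c in range(N2) for r in range(N1)]

def deinterleave(symbols, N1, N2):
    """Inverse operation: restores the original ordering."""
    assert len(symbols) == N1 * N2
    out = [None] * (N1 * N2)
    i = 0
    for c in range(N2):
        for r in range(N1):
            out[r * N2 + c] = symbols[i]
            i += 1
    return out
```

Note that two symbols adjacent in the coded stream occupy transmit positions N 1 apart, so after de-interleaving their fading samples are N 1 T s apart in time, as stated above.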
The inverse of the normalized Doppler frequency roughly equals the coherence time, N coh=1/(f D T s), of the channel in bits, and is a measure of the number of consecutive bits over which the channel remains correlated. The amount of interleaving required depends on the channel. If the channel is slower, the coherence time is larger and consequently a larger interleaver is required. When there is no limit on the size of the interleaver, perfect interleaving can be achieved for mobile channels, which ensures that the fading envelopes are uncorrelated. However, both interleaving and de-interleaving introduce delay in the system, called the interleaving delay. Both of these delays are equal to N 1 N 2 T s seconds. In a practical system, the interleaving delay budget is constrained not only by the overall delay budget but also by the delay budget necessary for the robust functioning of the source coding and the channel coding.
For convolutionally coded systems, the dimensions of the interleaver are chosen to maximize the interleaving depth N 1, which should ideally be N coh to ensure nearly independent fading conditions for consecutive symbols. More important, N 2 should be chosen at least large enough to avoid the wrap around effect [18–20]. The wrap around effect means that the length of an error event exceeds the number of columns in the interleaver. This results in more than one symbol being affected by virtually the same channel conditions and thus degrades performance. As a rule of thumb, the number of columns is chosen slightly larger than the length of the shortest error event of the code.
Interleaving, in conjunction with FEC, is a mechanism to achieve time diversity, where, by transmitting consecutive symbols sufficiently separated in time, nearly independent fading is ensured. As with any diversity technique, the performance improvement shows diminishing returns with increased diversity order. Note that the effective order of diversity is a nondecreasing function of N 1. Various rules of thumb are available in the literature to determine the interleaver depth sufficient to extract nearly independent fading case performance [3, 4].
In [3], simulations were used to demonstrate that fully interleaved performance is approximately achieved for BPSK over exponentially correlated channels when the interleaver depth is chosen to satisfy f D T s N 1>0.1. This rule, however, does not apply to correlated fading channels with other auto-correlations, such as Jakes' model. In [4], a simple figure of merit for evaluating the depth of the interleaver was obtained for Rician channels and a variety of channel auto-correlation functions. However, as shown in our simulations, this figure of merit does not hold for Jakes' fading model on Rician channels with a low κ factor (κ being the ratio of signal energy in the direct and diffuse components), including the limiting Rayleigh fading case.
The delay constraint formulation
The end-to-end delay constraint of each frame, T, is the upper bound to the delay that a frame may experience and still be able to be displayed on time, where by delay, we mean the time difference between when the video frame is captured for encoding and when it reaches the video decoder. Consider frame i captured at time t. Without loss of generality, we assume t to be zero. Further, we assume that each frame has the same number of MBs, and denote this number by M (e.g., for video with QCIF format, M=99). We also denote the MB index by k (k=0,1,2,⋯,M−1), and we let b i (k) be the number of bits in the kth MB of the ith frame.
Frames arrive at the video encoder at some constant frame rate, and thus, the processor has to process each frame in the same amount of time because we assume there is no video encoder input buffer. Each frame has the same number of MBs, and we assume each MB has to be processed in the same amount of time. At the video decoder, frames are displayed at a constant frame rate, and we assume there is no video decoder output buffer.
For frame i to meet its delay constraint, the kth MB's decoding must begin at time T−(M−k)T d, where T d is the time required to decode a MB (source decoding only, i.e., excluding the FEC decoding) and is assumed to be the same for all MBs. Also, the kth MB becomes available for encoding only after time k T e, where T e is the encoding time of a MB (source encoding only, i.e., excluding the FEC encoding) assumed to be the same for all MBs. Thus, if the kth MB is to meet its decoding deadline, the following must be true:
$$\begin{array}{@{}rcl@{}} T_{\text{eb}}(k) &+& T_{\text{enc}}(k) + T_{\text{int}}(k) + T_{\mathrm{c}}(k) + T_{\text{CH}}\\ &+& T_{\text{dein}}(k) + T_{\text{dec}}(k) + T_{\text{db}}(k) \\ &&= T - (k+1) T_{\mathrm{e}} - (M-k) T_{\mathrm{d}}, \end{array} $$
where T eb(k) is the encoder buffer delay, i.e., the time the kth MB waits in the encoder buffer before it starts moving out to the channel encoder, T enc(k) is the FEC encoding delay for the kth MB, T int(k) is the delay caused by interleaving for the kth MB, T c(k) is the transmission time for the kth MB, T CH is the channel propagation delay, assumed to be a known constant, T dein(k) is the delay caused by de-interleaving for the kth MB, T dec(k) is the channel decoding delay, and finally T db(k) is the decoder buffer delay for the kth MB, i.e., the time it waits in the decoder buffer before its decoding begins for display.
A few simplifications can be made. We have earlier explained the logic for assuming that the video encoding time, T e, and the video decoding time, T d, are the same for all MBs. We also assume they are equal to each other, which is essentially the same as saying that the MBs arrive at the encoder buffer and depart from the decoder buffer as a stream with each MB spaced T MB seconds apart, where T MB=1/(M f) and f is the frame rate. As a consequence of this assumption, the right-hand side of Eq. (4) becomes independent of k. We ignore the delay caused by channel encoding (i.e., T enc(k)≈0), because it is negligible compared to the delay caused by channel decoding and the delay caused by source encoding. For Viterbi decoding of RCPC codes with constraint length ν and period P, the decoder has a latency of approximately T dec(k)=5P ν/R B [13, 14]. Also, since we are assuming a rate r c channel code and a fixed channel rate of R B bps on the channel, the transmission time for the kth MB can be expressed as T c(k)=b i(k)/r s. We assume that each MB has enough bits to span the width of the interleaver at least once, i.e., b i(k)≥N 2. The sum of the interleaving and the de-interleaving delays is then approximately given as T int(k)+T dein(k)≈2N/R B, where N=N 1 N 2 is the interleaver size in symbols. Incorporating all these simplifications, Eq. (4) can be written as:
$$\begin{array}{@{}rcl@{}} T_{\text{eb}}(k) + \frac{2N}{R_{\mathrm{B}}} + \frac{b_{\mathrm{i}}(k)}{r_{\mathrm{s}}} + T_{\text{CH}} + \frac{5P\nu}{R_{\mathrm{B}}} + T_{\text{db}}(k) \\ = T - (M+1) T_{\text{MB}}. \end{array} $$
Furthermore, the term b i(k)/r s is typically on the order of a few milliseconds. For example, with r s=48 kbps, f=10 fps, and M=99, the average number of bits per MB is b i(k)≈50, and thus, b i(k)/r s≈1 ms. Because the delay budgets in the multimedia applications we study are typically equal to or greater than 100 ms, the term b i(k)/r s can be neglected. Assuming a constant channel propagation delay T CH, and noting that we need T db(k)>0 to guarantee the source decoder buffer does not run empty, Eq. (5) can be rewritten as
$$\begin{array}{@{}rcl@{}} T_{\text{eb}}(k) + \frac{2N}{R_{B}} + \frac{5P\nu}{R_{B}} \leq C, \end{array} $$
where C=T−(M+1)T MB−T CH, is a constant.
The encoder buffer delay experienced by each MB in each frame must satisfy the above inequality in order for the corresponding frame to meet its display deadline. As explained previously, the maximal number of source coded bits in the source encoder buffer is equal to S, and they leave the buffer at a rate r s bps; thus, T eb(k)≤S/r s. As a result, Eq. (6) is always true whenever the following is true:
$$\begin{array}{@{}rcl@{}} \frac{S}{r_{\mathrm{s}}} + \frac{2N}{R_{\mathrm{B}}} + \frac{5P\nu}{R_{\mathrm{B}}} = C, \end{array} $$
where S/r s can be viewed as the delay budget for source coding, 2N/R B as the delay budget for interleaving, and 5P ν/R B as the delay budget for channel decoding. As a result, the delay partitioning problem is to allocate the delay budget among these three components under the constraint (7), such that the overall distortion of the video transmission is minimized. In the following section, simulation results are presented to study the three main components in the delay budget. A possible direction for future research is to apply analytical models with a suitable utility function computed from the three delay budget components, so that the delay budget tradeoffs can be resolved analytically under specific conditions; that, however, is beyond the scope of this paper.
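As a numeric check of (7), consider the operating point found optimal in Section 4 for C=150 ms: N 1=151, S=5500 bits, r s=48 kbps, R B=144 kbps, N 2=16, and ν=6. The puncturing period P=8 is our assumption (a common choice for RCPC families), since it is not restated here:

```python
r_s, R_B = 48_000, 144_000   # source and channel rates (bps)
S, N1, N2 = 5500, 151, 16    # encoder buffer threshold (bits), interleaver dims
P, nu = 8, 6                 # RCPC puncturing period (assumed) and constraint length

t_src = S / r_s              # source coding delay budget     (~114.6 ms)
t_int = 2 * N1 * N2 / R_B    # interleaving + de-interleaving  (~33.6 ms)
t_dec = 5 * P * nu / R_B     # Viterbi decoding delay           (~1.7 ms)
C = t_src + t_int + t_dec    # total: close to the 150 ms budget
```

Under these assumptions the source encoder buffer dominates the budget, while the channel decoding delay of the convolutional code is nearly negligible, consistent with the discussion above.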
Simulation results and discussion
The effect of interleaver depth on system performance
An interleaver is important to remove the channel memory when error control codes designed for memoryless channels are applied to channels with memory. Before we consider the tradeoff in delay allocation in wireless multimedia, we first study the effect of interleaver design without a delay budget restriction.
The performance of an interleaver is governed by its interleaving depth N 1. As mentioned in Subsection 2.3, simulation results in [3] demonstrated that fully interleaved performance is approximately achieved for BPSK over exponentially correlated channels when N 1≥0.1N coh is satisfied, and [4] further extended this result to Rician channels and a variety of channel auto-correlation functions by proposing a simple figure of merit for evaluating interleaver depth. Our simulations confirm this result for Jakes' fading model with high κ factor Rician channels. In Fig. 3, we show simulation results for a system with a channel code of rate r c=1/2 and minimal distance d min=10, an interleaver with N 2=100 columns, and Jakes' fading spectrum with f D T s=0.01. The two bottom dashed lines are drawn for the Rician channel with κ=5 (or 7 dB), with interleaver depth N 1=14 and ideal interleaving (i.e., N 1=∞). These results match the results of Fig. 4 in [4], which illustrates that N 1=14, which is slightly larger than 0.1N coh=10, gives performance close to ideal (infinite) interleaving.
Performance comparison for evaluating the effect of interleaver depth: bit error rate (BER) versus the signal-to-noise ratio (E s/N 0), channel code with rate r c=1/2 and d min=10, interleaver with N 2=100, and Jakes' fading spectrum with f D T s=0.01
However, further simulations illustrate that this figure of merit does not hold for Jakes' fading model on Rician channels with a low κ factor. Lowering the κ factor of the Rician channel makes the fading more severe, and the channel is Rayleigh when κ=0 (or −∞ dB), where the direct signal component is totally absent. Clearly with decreasing κ, the performance degrades and a larger interleaver depth may be required. In Fig. 3, the performance when κ=0 is shown, by utilizing interleavers with depth N 1=14, N 1=0.7N coh=70, N 1=N coh=100, and infinite interleaving. As seen from the four top plots, substantial gains in performance are achieved over N 1=14, with an improvement by an order of magnitude, especially at middle and high SNR. On the other hand, although the performance improves significantly from N 1=14 to N 1=70, there is little further gain for N 1≥70. This is the typical characteristic of any diversity system, where with increasing diversity order, the improvement in performance shows diminishing returns.
Figure 4 further illustrates this point, by showing bit error performance versus interleaver depth, with a convolutional channel code having r c=1/3, constraint length ν=6, and d min=14 [13, 21], over a Rayleigh fading channel with f D T s=0.005 (i.e., N coh=200). N 2 is fixed to be 16, which is slightly greater than d min [18, 19]. We again note the sharp fall in bit error rate (BER) as N 1 increases from 0 to 80, and that the performance begins to flatten out around N 1=140 onwards, which is again the depth corresponding to 0.7N coh.
Bit error rate (BER) versus the interleaver depth (N 1), E s/N 0=3 dB, channel code with rate r c=1/3 and d min=14, interleaver with N 2=16, and Jakes' fading spectrum with f D T s=0.005
As a result, our simulation results suggest the following: for Rician channels with high κ factor, fully interleaved performance is approximately achieved when the interleaver depth N 1≥0.1N coh; while for Rician channels with low κ factor, in particular for a Rayleigh fading channel, fully interleaved performance is approximately achieved when N 1≥0.7N coh. Also, the number of columns (N 2) should be greater than the minimal distance (d min) of the channel code to avoid the wrap around effect.
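These rules of thumb translate directly into interleaver dimensions; a sketch for the Rayleigh case (the function name and the margin of 2 on N 2 are our choices; the text only requires N 2 slightly larger than d min):

```python
import math

def interleaver_dims(fD_Ts, d_min, depth_frac=0.7):
    """Suggested (N1, N2) for Rayleigh fading: depth about 0.7*Ncoh,
    width slightly larger than the code's minimal distance."""
    N_coh = 1.0 / fD_Ts               # coherence time in symbols
    N1 = math.ceil(depth_frac * N_coh)
    N2 = d_min + 2                    # > d_min to avoid wrap-around
    return N1, N2
```

For f D T s=0.005 and d min=14 this yields (N 1, N 2)=(140, 16), matching the depth at which the BER curve of Fig. 4 flattens out.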
Delay allocation between the source encoder buffer and the interleaver, for fixed delay budget, channel bit rate, and FEC code
We will discuss the delay allocation between the source encoder buffer and the interleaver in this and the next subsections. In all our simulations, we encoded QCIF size video sequences at f=10 fps. Also, for all comparisons, we kept the ratio of the energy-per-coded bit to the noise power spectral density, E s/N 0, constant at 3 dB. For each set of system and channel parameters, we ran 10,000 realizations of the time-correlated Rayleigh fading channel, which were generated using Jakes' model [17]. We computed the cumulative distribution function (CDF) of the average peak signal-to-noise ratio (PSNR), where PSNR is calculated by first averaging the mean square error (MSE) for the entire decoded video sequence, and then converting to PSNR. The system performance can be gauged once the CDF curves for each possible set of parameters in the set of interest are available. For example, Fig. 5 illustrates what the CDF curves could look like. Whenever two CDF curves do not intersect (e.g., curves C 1 and C 3 in Fig. 5), the lower curve is superior because it always has a higher probability of achieving any given average PSNR. When there are crossovers between two curves (e.g., curves C 1 and C 2 in Fig. 5), then one curve may be superior for one application but not for another. Comparison between the curves may then involve criteria such as minimizing the area under the curve, perhaps with some weighting. In this paper, as shown in Fig. 5, to evaluate the system performance, we adopted the criterion from [22] of minimum area under the CDF curve to the left of a certain threshold x h defined later in the paper, i.e., the value \(\int _{0}^{x_{h}} F_{\mathrm {c}}(x) dx\).
Comparison of PSNR CDF curves
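Given the per-realization average PSNRs, the area criterion is straightforward to compute from the empirical CDF (a sketch; smaller areas indicate better performance):

```python
def cdf_area(psnrs, x_h):
    """Area under the empirical CDF of `psnrs` to the left of x_h."""
    xs = sorted(psnrs)
    n = len(xs)
    area, prev_x, F = 0.0, 0.0, 0.0   # empirical CDF starts at 0
    for i, x in enumerate(xs):
        if x >= x_h:
            break
        area += F * (x - prev_x)      # flat CDF segment before this sample
        prev_x, F = x, (i + 1) / n    # CDF steps up by 1/n at each sample
    return area + F * (x_h - prev_x)  # final segment up to the threshold
```

Applied to the 10,000 realizations of each configuration, this scalar is what is plotted on the y-axis of the area-comparison figures below.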
In this subsection, we analyze the delay partition between the source encoder buffer and the interleaver, for a fixed delay budget C, a given channel bit rate R B and a fixed RCPC code with rate r c. As explained in Section 2, the delay budget of the source encoder is determined by the threshold for frame skipping S. Given R B and a RCPC code with rate r c, the source coding rate, r s, is determined by (1), and the channel decoding delay, which is roughly equal to (5P ν/R B), is also fixed. Under this scenario, increasing the delay budget of the source encoder comes at the cost of reducing the interleaver delay budget, i.e., using a smaller interleaver. In general, given the total delay budget C and channel bit rate R B, the choice of S is affected by the source encoding rate r s and the video content, and the choice of interleaver depth N 1 is related to the channel fading characteristics (N coh and channel model) and the video content. Therefore, we will focus on how this tradeoff will be affected by the motion of the video content, the rate of variation of the channel, the delay constraint and the channel bit rate.
In the following simulations, we used the rate r c=1/3 RCPC code with ν=6 and d min=14 [13, 21] for channel coding, and N 2 was fixed at 16. We ran the simulations with different parameters, for example, video sequences with high, medium, or low motion, channels with fast, medium, or slow fading, delay constraints that are tight, medium, or loose, and different channel bit rates.
First, we assume a delay constraint C=150 ms and a channel bit rate R B=144 kbps (thus r s=48 kbps). We simulated the system for a medium motion sequence "Foreman" QCIF over a medium fading channel with normalized Doppler frequency f D T s=0.005 (N coh=200 bits). The candidate delay allocations we tested, calculated based on Eq. (7), are summarized in Table 1. Figure 6 shows the CDF curves of the PSNRs for these delay allocations, and the areas under the CDF curves are plotted as the solid line in Fig. 7, where the x-axis is the interleaver delay budget expressed as a fraction of the total delay budget. It is seen that, as the interleaver delay budget increases from N 1=67, the system performance initially improves because of the increased diversity gain. However, the diversity gain shows diminishing returns, and at some point the reduction in source encoder delay budget starts having more of an effect, and the system performance degrades. It is seen that (N 1=151,S=5500) is the optimal delay allocation for this case, where N 1 is about 0.75N coh.
CDF curves of the PSNRs for the various delay allocations, for Foreman QCIF, Rayleigh fading channel with f D T s=0.005, delay budget C=150 ms, and channel bit rate R B=144 kbps
System performance, as measured by the areas under the CDF curves, versus the fraction of the interleaver delay budget, for different video sequences, Rayleigh fading channel with f D T s=0.005, delay budget C=150 ms, and channel bit rate R B=144 kbps. The curve for Foreman QCIF is derived from Fig. 6 with x h=33.01
Table 1 Delay allocations for tradeoff between S and N 1, used in the simulations for Figs. 6 and 7
To see the effect of the motion of the video content, we also simulated a very high motion sequence "Mobile" QCIF and a very low motion sequence "Akiyo" QCIF, with the other parameters the same (C=150 ms, R B=144 kbps and N coh=200). The system performances measured by the areas under the CDF curves are plotted and compared in Fig. 7, where the threshold value x h was set to be the maximal PSNR value observed among all the realizations in the test for that individual video sequence. For example, in Fig. 6, the largest PSNR achieved by any of the systems is 33.01 dB, so for the purposes of generating the curve corresponding to Foreman QCIF in Fig. 7, we compute the areas under the CDF curves and to the left of x h=33.01 for the curves in Fig. 6. Because different x h values were used for the three curves corresponding to the three different video sequences, the performance comparison (i.e., y-axis values) is only meaningful within a curve, but not between different curves. It is observed that, given the above parameters, a higher motion video sequence requires a higher source encoder buffer size S, at the cost of a smaller interleaver depth. For example, Fig. 7 shows the optimal choices of N 1 are 170, 151, and 140, for Akiyo, Foreman, and Mobile, respectively. In compressing video, some frames may need more bits than other frames because of the presence of fine detail. In addition, for a high motion video, some frames may need a significantly larger number of bits than others to well represent the occurrence of high motion, and the performance may degrade more seriously during concealment for frame skipping. As a result, a larger source encoder buffer is needed. To further illustrate this point, in Fig. 8, we assumed an unconstrained encoder buffer size, and recorded the number of bits accumulated in the buffer for the three video sequences when the source rate was r s=48 kbps. 
Note that, although the buffer size is unlimited here, the number of bits accumulated is not infinite because the system is still subject to rate control. As expected, Fig. 8 illustrates that a higher motion sequence usually needs a larger buffer size than a lower motion sequence.
The number of bits accumulated in a source encoder buffer with unlimited size versus the frame number, for different video sequences at the source coding rate r s=48 kbps
We also simulated the system for different channel variation rates, with the same C=150 ms and R B=144 kbps. Figures 9 and 10 show the performance results for a slowly fading channel with f D T s=0.0035 (N coh=286 bits), and Figs. 11 and 12 are for a fast fading channel with f D T s=0.01 (N coh=100 bits). Also, Figs. 9 and 11 show the CDF curves of the PSNRs for Foreman QCIF, and Figs. 10 and 12 compare the areas under the CDF curves of all three video sequences. Again, the x h values were set to the maximal PSNRs observed for the corresponding video sequences. It is seen that, given the same set of system parameters, a larger N 1 is preferable for a slowly fading channel, in order to break the channel memory, whereas a smaller N 1 is preferable for a fast fading channel to free up more of the delay budget for the source encoder buffer.
CDF curves of the PSNRs for various delay allocations, for Foreman QCIF, Rayleigh fading channel with f D T s=0.0035, delay budget C=150 ms, and channel bit rate R B=144 kbps
System performance, as measured by the areas under the CDF curves, versus the fraction of the interleaver delay budget, for different video sequences, Rayleigh fading channel with f D T s=0.0035, delay budget C=150 ms, and channel bit rate R B=144 kbps. The curve for Foreman QCIF is derived from Fig. 9 with x h=32.82
CDF curves of the PSNRs for various delay allocations, for Foreman QCIF, Rayleigh fading channel with f D T s=0.01, delay budget C=150 ms, and channel bit rate R B=144 kbps
System performance, as measured by the areas under the CDF curves, versus the fraction of the interleaver delay budget, for different video sequences, Rayleigh fading channel with f D T s=0.01, delay budget C=150 ms, and channel bit rate R B=144 kbps. The curve for Foreman QCIF is derived from Fig. 11 with x h=34.81
Next, we simulated the system for different delay budgets and different channel bit rates. Figure 13 shows the system performance for Foreman QCIF at R B =144 kbps and f D T s=0.005, with a tight delay constraint C=100 ms, a medium constraint C=150 ms and a very loose constraint C=250 ms. In order to compare the performance not only along each curve in Fig. 13, but also across curves, the same x h value, set to be the maximal observed PSNR value in all the simulations for Fig. 13, was applied for the area calculations. It is seen that, for the three constraints, the optimal choices of N 1 are 135, 151, and 180, respectively, while the corresponding optimal ratios of the interleaver delay budget to the total delay budget are 30.0, 22.4, and 16.0 %, respectively. In other words, as the delay budget C increases, the optimal interleaver depth N 1 increases, because of more available resources, while the corresponding ratios of the interleaver delay to the total delay budget decrease, because of the diminishing returns of the diversity gain. Also, it is seen that the system performance with the best (N 1, S) choice improves, i.e., has a smaller area (y-axis value), as C increases. Similar trends occur when the channel bit rate R B increases, holding other system parameters constant. As shown in Fig. 14, which plots the system performance for Foreman QCIF at C=150 ms and f D T s=0.005, with different channel bit rates, the optimal choices of N 1 are 135, 151, and 170, for R B=96 kbps, R B=144 kbps, and R B=168 kbps, respectively, and the corresponding ratios of the interleaver delay budget to the total delay budget are 30.0, 22.4, and 21.6 %, respectively. Also, the system performance with best (N 1, S) choice improves when R B increases.
System performance, as measured by the areas under the CDF curves, versus the fraction of the interleaver delay budget, for delay budgets C=100 ms, C=150 ms, and C=250 ms, Foreman QCIF, Rayleigh fading channel with f D T s=0.005, and channel bit rate R B=144 kbps. All the areas are calculated with x h=34.16, and the curve for C=150 ms is derived from Fig. 6
System performance, as measured by the areas under the CDF curves, versus the fraction of the interleaver delay budget, for channel bit rates R B=96 kbps, R B=144 kbps, and R B=168 kbps, Foreman QCIF, Rayleigh fading channel with f D T s=0.005, and delay budget C=150 ms. All the areas are calculated with x h=34.22, and the curve for R B=144 ms is derived from Fig. 6
Examining the results in Figs. 6 to 12, as well as our other simulation results, we see the following trends. First, the normalized Doppler frequency is the key parameter in the delay partitioning, and a system operating over a fast fading channel prefers a smaller interleaver depth N 1. As all the above simulation results show, about 0.7N coh (more precisely, from 0.6N coh to 0.9N coh) is a safe choice for N 1. This result is consistent with our conclusion in Subsection 4.1, which illustrates that the maximum gain from the interleaver is approximately achieved when N 1≥0.7N coh in a Rayleigh fading channel. Second, the video content also affects the delay partitioning; a sequence with higher motion content usually prefers a larger source encoder buffer, and thus a smaller N 1. Third, either fast fading, or a larger total delay budget C, or a larger channel bit rate R B, improves the system performance on the average, holding other parameters the same. For example, Figs. 6, 9, and 11 show that, for a given set of system parameters, the highest PSNR achieved improves from about 32 dB to about 34 dB when the channel varies more rapidly. Note that the performance improvement for a larger C or a larger R B is due to the system having additional available resources, while the performance improvement for fast fading is due to additional channel diversity. However, the last conclusion is valid only for accurate channel estimation. Lastly, the gaps between the performances of the optimal delay allocation and various sub-optimal delay allocations decrease when the channel varies faster. For example, in Fig. 10 (a slowly fading channel), the performances of the optimal allocation and of the other allocations differ by up to a factor of 10, while in Fig. 12 (a fast fading channel), the differences are limited to a factor of 1.2. This implies that the delay allocation issue is more important when the channel varies slowly. When the channel varies fast enough, different allocations may not affect the performance as much.
Bandwidth allocation and delay allocation
In this subsection, we vary the channel coding rate, r c, to analyze the bandwidth partition between source coding and channel coding, together with the delay partition between the source encoder buffer and the interleaver, for a fixed delay budget, C, and a given channel bit rate, R B.
Again, the rates r s and r c must satisfy bandwidth constraint (1). Also, we note from delay constraint (7) that, for a fixed R B, the interleaver delay, which is equal to 2N/R B, and the channel decoding delay, which is equal to 5P ν/R B, do not change by changing r s. This implies that increasing S proportionately with r s will ensure that the same delay allocation is maintained. However, maintaining the same delay allocation is not necessarily desirable. With a change in r s and r c, the optimal delay allocation may change.
Assume there are N c candidate channel codes with rates { r c}. The optimal bandwidth partition and delay partition, i.e., the best (r c,r s,N 1,S) 4-tuple, can be determined by a two-step optimization method: Step I: For each channel code candidate with rate r c, calculate the corresponding r s from Eq. (1). For each (r c,r s) pair, among the candidate delay partition pairs (N 1,S), find the one for this bandwidth allocation that minimizes the area under the CDF curve, as illustrated in Section 4.2. This yields N c 4-tuples, with corresponding PSNR CDF curves. Step II: Among the N c 4-tuples, find the one with the smallest area under its CDF curve, using a common threshold value, x h. This (r c, r s, N 1, S) 4-tuple is the one with the best bandwidth and delay allocations.
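The two-step method can be organized as a nested search (a structural sketch; `simulate_area(r_c, r_s, N1, S, x_h)`, standing in for the Monte-Carlo CDF-area evaluation, is hypothetical, and the candidate partitions are assumed to satisfy Eq. (7) for the given rates):

```python
def best_allocation(codes, partitions, simulate_area, R_B, x_h):
    """codes: candidate channel code rates {r_c};
    partitions: candidate delay partitions (N1, S).
    Returns the best (r_c, r_s, N1, S) 4-tuple."""
    finalists = []
    for r_c in codes:
        r_s = r_c * R_B                       # bandwidth constraint (1)
        # Step I: best delay partition for this bandwidth allocation
        N1, S = min(partitions,
                    key=lambda p: simulate_area(r_c, r_s, p[0], p[1], x_h))
        finalists.append((r_c, r_s, N1, S))
    # Step II: best 4-tuple under the common threshold x_h
    return min(finalists,
               key=lambda t: simulate_area(t[0], t[1], t[2], t[3], x_h))
```

The cost is N c Step-I searches plus one Step-II comparison, each step dominated by the Monte-Carlo evaluations.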
To illustrate this procedure, we simulated the system for different channel codes in the same RCPC family, with rates equal to 1/3, 4/11, 2/5, and 4/9 [13], for Foreman QCIF at f D T s=0.005, R B=144 kbps, and C=150 ms. From Eq. (1), the corresponding source coding rates are 48, 52.4, 57.6, and 64 kbps, respectively. For each (r c, r s) pair, different (N 1, S) pairs that satisfied Eq. (7) were simulated, and the (N 1, S) pair that minimized the area under the CDF curve was selected. For example, the pair (N 1=151,S=5500) was selected for the bandwidth allocation (r c=1/3,r s=48 k), where the areas of CDF curves were derived from Fig. 7. In Fig. 15, we show, for four possible (r c, r s) allocations, the CDF curve for the corresponding best (N 1, S) pair. They are (N 1=151,S=5500), (N 1=170,S=5780), (N 1=190,S=6110), and (N 1=217,S=6400), for r c=1/3, r c=4/11, r c=2/5, and r c=4/9, respectively. Then, the best bandwidth and delay partition 4-tuple was selected among the four candidates shown in Fig. 15. In Fig. 16, we plot the areas under all the CDF curves, wherein all curves were calculated with the same threshold x h=35.78. It is seen that the (r c=1/3,r s=48 k, N 1=151,S=5500) 4-tuple yields the best overall performance.
CDF curves of the PSNRs for the best (N 1, S) choices of different channel coding rates { r c}, Foreman QCIF, Rayleigh fading channel with f D T s=0.005, delay budget C=150 ms, and channel bit rate R B=144 kbps
System performance, as measured by the areas under the CDF curves, versus the fraction of the interleaver delay budget, for different channel coding rates { r c}, Foreman QCIF, Rayleigh fading channel with f D T s=0.005, delay budget C=150 ms, and channel bit rate R B=144 kbps. All the areas are calculated with x h=35.78, and the optimal performance points, corresponding to the minimal areas on the respective curves, are derived from Fig. 15
In Fig. 16, we show that, with all other parameters held the same, increasing r c (and thus r s, in accordance with Eq. (1)) increases the optimal ratio of the interleaver delay to the total delay budget, and both the optimal interleaver depth N 1 and the optimal source buffer S increase. This is because, first, both channel coding and interleaving are used to combat the channel fading and to protect the information sequence, so when a channel code with a higher r c is used, a larger N 1 is preferred to compensate for the loss from a less powerful channel code. Second, as r s increases, the source encoder needs a larger buffer. As shown in Eq. (7), for a fixed R B, the channel decoding delay, 5P ν/R B, is fixed for all the RCPC codes in an RCPC family, since all the codes are formed from the same mother code with the same period P and constraint length ν. When r c increases, the source encoding delay, S/r s, becomes smaller for the same S, because r s increases with r c according to Eq. (1). This freed-up delay resource is shared by the source encoder and the interleaver, both of which benefit from a larger delay budget. It turns out that the best selection is one that results in a larger S and a larger N 1. Further, the optimal ratio of the interleaver delay to the total delay budget, which is equal to (2N 1 N 2)/(R B C), also increases, because N 1 increases while C, R B, and N 2 are kept constant.
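As a back-of-envelope check of this delay bookkeeping: Eq. (7) is not reproduced here, so the budget form below is an assumed reading of the components named in the text, and the values of N 2, P, and ν are invented for illustration; only C, R B, r s, N 1, and S come from the example above.

```python
# Assumed delay budget (a reading of Eq. (7), not the equation itself):
#   C >= S/r_s + 2*N1*N2/R_B + 5*P*nu/R_B
C   = 0.150          # total delay budget, s
R_B = 144_000        # channel bit rate, bps
r_s = 48_000         # source rate, bps (the r_c = 1/3 case)
N1, S = 151, 5500    # optimal interleaver depth and source buffer
N2, P, nu = 8, 8, 6  # hypothetical interleaver width / RCPC parameters

t_source  = S / r_s            # source-buffer (queuing) delay, ~115 ms
t_interlv = 2 * N1 * N2 / R_B  # interleaving + de-interleaving delay
t_decode  = 5 * P * nu / R_B   # channel (Viterbi) decoding delay
ratio = t_interlv / C          # interleaver share of the total budget
```

With these (partly assumed) numbers the three components fit inside the 150 ms budget, and the interleaver share matches the "(2N 1 N 2)/(R B C)" ratio discussed above.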
Lastly, Fig. 16 shows that, when increasing r c, the system performance with the best delay partition degrades. For example, the performance gaps between that of the optimal delay allocation for r c=1/3 and those for r c=4/11, r c=2/5, and r c=4/9, are about a factor of 0.34, 2.50, and 8.81, respectively. It is seen that, under the scenario we studied here, the system always prefers to use the strongest channel code. This is probably because the E s/N 0 value is 3 dB, which is relatively low. Under better channel conditions, a higher rate RCPC code would most likely be preferred.
We analyzed the performance of a wireless video communication system operating over a fading channel, under both an end-to-end delay constraint and a bandwidth constraint. We showed that the main delay components in the system include the queuing delay in the source encoder output buffer, the delay caused by interleaving and de-interleaving, and the delay caused by channel decoding. The relationship among these three components, restricted by the delay constraint, was derived mathematically. We then focused on the delay partitioning between the source encoding and the interleaving.
Simulation results for the tradeoff between the source encoder buffer delay and the interleaver delay were compared. In particular, we studied how this tradeoff is affected by parameters such as the Doppler frequency of the fading channel, the motion content of the video, the delay constraint, the channel bit rate, and the channel code rate.
It was shown that the normalized Doppler frequency of the fading channel (i.e., N coh) is the key parameter in the delay partitioning. With other parameters held constant, a system operating over a fast fading channel prefers a smaller interleaver depth N 1, and thus a smaller ratio of the interleaver delay to the total delay budget. From our results for various QCIF sequences over a Rayleigh fading channel with different bandwidth and delay constraints, we found that optimal values for the interleaver depth N 1 ranged from the integer part of 0.6N coh to the integer part of 0.9N coh, and that, in general, the integer part of 0.7N coh is a safe choice for N 1. Also, we showed that the system performance is more sensitive to the delay partitioning when the system operates over a slow fading channel.
Other system parameters also affect the delay partitioning between the source encoding and interleaving. In general, for a sequence with higher motion content, because of a larger variation in the number of bits used to describe each frame, a larger source encoder buffer size S and a smaller interleaver depth N 1 are preferable, and thus a smaller ratio of the interleaver delay to the total delay budget. For a system with a larger total delay budget C, or a larger channel bit rate R B, because of the additional resources, both a larger S and a larger N 1 are preferable, and our results indicate that the corresponding ratio of the interleaver delay to the total delay budget becomes smaller. Lastly, for a system with a higher channel code rate (i.e., a weaker channel code), because of the increase of source rate and the loss of error correction capability, both a larger S and a larger N 1 are again preferable, but now our results indicate that the corresponding ratio of the interleaver delay to the total delay budget becomes larger.
We also showed that either a larger total delay budget C, or a larger channel bit rate R B, or fast fading (i.e., a smaller N coh), improves the system performance on average, holding other parameters the same. Notice that the conclusion for fast fading is valid only for accurate channel estimation. Also, a two-step procedure was proposed to determine the optimal bandwidth partition and delay partition, from a finite set of possible RCPC codes. The best allocation depends on both the channel conditions and the video content.
In conclusion, we mention several possible directions in which this work can be extended. We used a video encoder with single-frame prediction; one could instead use more sophisticated source encoding strategies, such as hierarchical bi-directional prediction (B-pictures) and long-term frame prediction with pulsed quality, which are more efficient but introduce additional source coding delay. Also, the channel codes we studied are from a family of RCPC codes; one could instead use codes based upon iterative decoding, such as turbo codes and low-density parity check (LDPC) codes, which are more powerful but can result in a larger delay. Additionally, our analysis assumed perfect channel estimation; one could relax this assumption and study the effect on the delay allocation when noisy channel estimates are used. Finally, we studied the tradeoffs of the delay partitioning problem based on simulation results; one could adopt analytical models appropriate for specific scenarios to study the influence of the different delay components, so that the optimization problem can be solved by suitable algorithms under some restricted conditions.
BER:
Bit error rate
BPSK:
Binary phase shift keying
bps:
Bits per second
CDF:
Cumulative distribution function
DCT:
Discrete cosine transform
FEC:
Forward error correction
fps:
Frames per second
LDPC:
Low-density parity check
MB:
Macroblock
MC:
Motion compensated
MCTF:
Motion-compensated temporal filtering
MSE:
Mean squared error
PSNR:
Peak signal-to-noise ratio
QCIF:
Quarter common intermediate format
RCPC:
Rate-compatible punctured convolutional code
TMN:
Test model number
JR Corbera, S Lei, Rate control in DCT video coding for low-delay communications. IEEE Trans. Circuits Syst. Video Technol. 9(1), 172–182 (1999).
CY Hsu, A Ortega, M Khansari, Rate control for robust video transmission over burst-error wireless channels. IEEE J. Selected Areas Commun. 17(5), 172–185 (1999).
JW Modestino, SY Mui, Convolutional code performance in the Rician fading channel. IEEE Trans. Commun. 24(6), 592–606 (1976).
M Rice, E Perrins, A simple figure of merit for evaluating interleaver depth for the land-mobile satellite channel. IEEE Trans. Commun. 49(8), 1343–1353 (2001).
J Lim, DL Neuhoff, Joint and tandem source-channel coding with complexity and delay constraints. IEEE Trans. Commun. 51(5), 757–766 (2003).
S Aramvith, C Lin, S Roy, M Sun, Wireless video transport using conditional retransmission and low-delay interleaving. IEEE Trans. Circuits Syst. Video Technol. 12(6), 558–565 (2002).
A Scaglione, M Schaar, Cross-layer resource allocation for delay constrained wireless video transmission, Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP, (2005).
G Su, M Wu, Efficient bandwidth resource allocation for low-delay multiuser video streaming. IEEE Trans. Circuits Syst. Video Technol. 15(9), 1124–1137 (2005).
A Leontaris, PC Cosman, End-to-end delay for hierarchical B-pictures and pulsed quality dual frame video coders, Proceedings of IEEE International Conference on Image Processing, ICIP, (2006).
G Pau, B Pesquet-Popescu, M Schaar, J Vieron, Delay-performance trade-offs in motion-compensated scalable subband video compression, Proceedings Advanced Concepts for Intelligent Vision Systems, (2004). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.6331%26rep=rep1%26type=pdf.
RA Berry, RG Gallager, Communication over fading channels with delay constraints. IEEE Trans. Inform. Theory. 48(5), 1135–1149 (2002).
MJ Neely, E Modiano, Capacity and delay tradeoffs for ad hoc mobile networks. IEEE Trans. Inform. Theory. 51(6), 1917–1937 (2005).
J Hagenauer, Rate-compatible punctured convolutional codes (RCPC codes) and their applications. IEEE Trans. Commun. 36(4), 389–400 (1988).
Y Yasuda, K Kashiki, Y Hirata, High rate punctured convolutional codes for soft decision Viterbi decoding. IEEE Trans. Commun. 32, 315–319 (1984).
Q Zhao, PC Cosman, LB Milstein, Tradeoffs of source coding, channel coding and spreading in frequency selective Rayleigh fading channels. J. VLSI Signal Process. 30, 7–20 (2002).
Y Shen, PC Cosman, LB Milstein, Error resilient video communications over CDMA networks with a bandwidth constraint. IEEE Transactions on Image Processing. 15(11), 3241–3252 (2006).
P Dent, GE Bottomley, T Croft, Jakes' model revisited. Electron. Lett. 29(13), 1162–1163 (1993).
L Wilhelmsson, LB Milstein, On the effect of imperfect interleaving for the Gilbert-Elliott Channel. IEEE Trans. Commun. 47(5), 681–688 (1999).
K Tang, PH Siegel, LB Milstein, in Proceedings of 33rd Asilomar Conference. On the performance of turbo coding for the land mobile channel with delay constraints (Pacific Grove, CA, 1999), pp. 1659–1665.
L Wilhelmsson, On using the Gilbert-Elliott channel to evaluate the performance of block coded transmission over the land mobile channel. Ph.D. Dissertation (Lund University, Sweden, 1998).
Y Shen, PC Cosman, LB Milstein, Weight distribution of a class of binary linear block codes formed from RCPC codes. IEEE Commun. Lett. 9(9), 811–813 (2005).
PC Cosman, JK Rogers, PG Sherwood, K Zeger, Combined forward error control and packetized zerotree wavelet encoding for transmission of images over varying channels. IEEE Trans. Image Process. 9(6) (2000).
This research was supported by the Center for Wireless Communications (CWC) at University of California at San Diego,by Ericsson, Inc., by the State of California under the UC Discovery Grant program, and by the Office of Naval Research under Grant N00014-03-1-0280.
Digital Media Division, Microsoft Inc., Redmond, 98007, WA, USA
Yushi Shen
Broadcom Inc., San Jose, 95134, CA, USA
Kartikeya Mehrotra
Department of Electrical and Computer Engineering, University of California at San Diego, La Jolla, 92093-0407, CA, USA
Pamela C. Cosman
& Laurence B. Milstein
VMware Inc., Palo Alto, 94304, CA, USA
Xin Wang
Correspondence to Xin Wang.
Shen, Y., Mehrotra, K., Cosman, P.C. et al. Delay allocation between source buffering and interleaving for wireless video. J Wireless Com Network 2016, 209 (2016) doi:10.1186/s13638-016-0703-4
Delay budget partitioning
Delay constraint
Cross-layer optimization
Video communications | CommonCrawl |
Category: Linear Algebra
Given All Eigenvalues and Eigenspaces, Compute a Matrix Product
Let $C$ be a $4 \times 4$ matrix with all eigenvalues $\lambda=2, -1$ and eigenspaces
\[E_2=\Span\left \{\quad \begin{bmatrix}
1 \\
\end{bmatrix} \quad\right \} \text{ and } E_{-1}=\Span\left \{ \quad\begin{bmatrix}
\end{bmatrix},\quad \begin{bmatrix}
\end{bmatrix} \quad\right\}.\]
Calculate $C^4 \mathbf{u}$ for $\mathbf{u}=\begin{bmatrix}
\end{bmatrix}$ if possible. Explain why if it is not possible!
(The Ohio State University Linear Algebra Exam Problem)
Two Eigenvectors Corresponding to Distinct Eigenvalues are Linearly Independent
Let $A$ be an $n\times n$ matrix. Suppose that $\lambda_1, \lambda_2$ are distinct eigenvalues of the matrix $A$ and let $\mathbf{v}_1, \mathbf{v}_2$ be eigenvectors corresponding to $\lambda_1, \lambda_2$, respectively.
Show that the vectors $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent.
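A standard proof sketch (not necessarily the solution posted with the problem): suppose a linear relation holds and apply $A$ to it.

```latex
% Suppose c_1 v_1 + c_2 v_2 = 0; applying A gives a second relation:
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0}
\;\Longrightarrow\;
c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 = \mathbf{0}.
% Subtracting \lambda_2 times the first relation eliminates v_2:
c_1(\lambda_1 - \lambda_2)\mathbf{v}_1 = \mathbf{0}
\;\Longrightarrow\; c_1 = 0
\quad (\text{since } \lambda_1 \neq \lambda_2 \text{ and } \mathbf{v}_1 \neq \mathbf{0}).
```

Then $c_2\mathbf{v}_2=\mathbf{0}$ with $\mathbf{v}_2\neq\mathbf{0}$ forces $c_2=0$, so $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent.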
Is the Determinant of a Matrix Additive?
Let $A$ and $B$ be $n\times n$ matrices, where $n$ is an integer greater than $1$.
Is it true that
\[\det(A+B)=\det(A)+\det(B)?\] If so, then give a proof. If not, then give a counterexample.
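The answer is no in general; a minimal numerical counterexample (one of many), using NumPy with $A=B=I_2$:

```python
# With A = B = I_2: det(A+B) = det(2I) = 4, while det(A) + det(B) = 2.
import numpy as np

A = np.eye(2)
B = np.eye(2)
lhs = np.linalg.det(A + B)                  # determinant of the sum
rhs = np.linalg.det(A) + np.linalg.det(B)   # sum of the determinants
```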
Eigenvalues of a Stochastic Matrix is Always Less than or Equal to 1
Let $A=(a_{ij})$ be an $n \times n$ matrix.
We say that $A=(a_{ij})$ is a right stochastic matrix if each entry $a_{ij}$ is nonnegative and the sum of the entries of each row is $1$. That is, we have
\[a_{ij}\geq 0 \quad \text{ and } \quad a_{i1}+a_{i2}+\cdots+a_{in}=1\] for $1 \leq i, j \leq n$.
Let $A=(a_{ij})$ be an $n\times n$ right stochastic matrix. Then show the following statements.
(a)The stochastic matrix $A$ has an eigenvalue $1$.
(b) The absolute value of any eigenvalue of the stochastic matrix $A$ is less than or equal to $1$.
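A quick numerical sanity check of both claims, using NumPy on a randomly generated right stochastic matrix:

```python
# Build a random 5x5 right stochastic matrix: nonnegative rows summing to 1.
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((5, 5))
M /= M.sum(axis=1, keepdims=True)           # normalize each row to sum 1

eigvals = np.linalg.eigvals(M)
has_one = np.any(np.isclose(eigvals, 1.0))  # (a): 1 is an eigenvalue
max_abs = np.max(np.abs(eigvals))           # (b): spectral radius <= 1
```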
Eigenvalues of Squared Matrix and Upper Triangular Matrix
Suppose that $A$ and $P$ are $3 \times 3$ matrices and $P$ is an invertible matrix.
\[P^{-1}AP=\begin{bmatrix}
0 &4 &5 \\
\end{bmatrix},\] then find all the eigenvalues of the matrix $A^2$.
Eigenvalues of a Matrix and Its Squared Matrix
Let $A$ be an $n \times n$ matrix. Suppose that the matrix $A^2$ has a real eigenvalue $\lambda>0$. Then show that either $\sqrt{\lambda}$ or $-\sqrt{\lambda}$ is an eigenvalue of the matrix $A$.
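One standard way to see this (a sketch, not the posted solution): since $\lambda > 0$, $\sqrt{\lambda}$ is real, so $A^2 - \lambda I$ factors over the reals and its determinant splits.

```latex
% Factor the characteristic determinant of A^2 at \lambda:
\det(A^2 - \lambda I)
  = \det\!\bigl((A - \sqrt{\lambda}\,I)(A + \sqrt{\lambda}\,I)\bigr)
  = \det(A - \sqrt{\lambda}\,I)\,\det(A + \sqrt{\lambda}\,I) = 0.
```

Hence at least one of the two factors vanishes, i.e. $\sqrt{\lambda}$ or $-\sqrt{\lambda}$ is an eigenvalue of $A$.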
Linear Transformation and a Basis of the Vector Space $\R^3$
Let $T$ be a linear transformation from the vector space $\R^3$ to $\R^3$.
Suppose that $k=3$ is the smallest positive integer such that $T^k=\mathbf{0}$ (the zero linear transformation) and suppose that we have $\mathbf{x}\in \R^3$ such that $T^2\mathbf{x}\neq \mathbf{0}$.
Show that the vectors $\mathbf{x}, T\mathbf{x}, T^2\mathbf{x}$ form a basis for $\R^3$.
Given Eigenvectors and Eigenvalues, Compute a Matrix Product (Stanford University Exam)
Suppose that $\begin{bmatrix}
\end{bmatrix}$ is an eigenvector of a matrix $A$ corresponding to the eigenvalue $3$ and that $\begin{bmatrix}
\end{bmatrix}$ is an eigenvector of $A$ corresponding to the eigenvalue $-2$.
Compute $A^2\begin{bmatrix}
(Stanford University Linear Algebra Exam Problem)
Determine Eigenvalues, Eigenvectors, Diagonalizable From a Partial Information of a Matrix
Suppose the following information is known about a $3\times 3$ matrix $A$.
\[A\begin{bmatrix}
\end{bmatrix}=6\begin{bmatrix}
\end{bmatrix},
\quad
A\begin{bmatrix}
-1 \\
\end{bmatrix}, \quad
(a) Find the eigenvalues of $A$.
(b) Find the corresponding eigenspaces.
(c) In each of the following questions, you must give a correct reason (based on the theory of eigenvalues and eigenvectors) to get full credit.
Is $A$ a diagonalizable matrix?
Is $A$ an invertible matrix?
Is $A$ an idempotent matrix?
(Johns Hopkins University Linear Algebra Exam)
Characteristic Polynomial, Eigenvalues, Diagonalization Problem (Princeton University Exam)
\[\begin{bmatrix}
(a) Find the characteristic polynomial and all the eigenvalues (real and complex) of $A$. Is $A$ diagonalizable over the complex numbers?
(b) Calculate $A^{2009}$.
(Princeton University, Linear Algebra Exam)
Idempotent Matrix and its Eigenvalues
Let $A$ be an $n \times n$ matrix. We say that $A$ is idempotent if $A^2=A$.
(a) Find a nonzero, nonidentity idempotent matrix.
(b) Show that eigenvalues of an idempotent matrix $A$ is either $0$ or $1$.
(The Ohio State University, Linear Algebra Final Exam Problem)
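A minimal numerical illustration of both parts, using a coordinate projection as the nonzero, nonidentity idempotent matrix:

```python
# P projects onto the x-axis: P^2 = P, P != 0, P != I, eigenvalues {0, 1}.
import numpy as np

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
idempotent = np.allclose(P @ P, P)            # part (a): P^2 = P
eigvals = np.sort(np.linalg.eigvals(P).real)  # part (b): only 0 and 1
```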
Find All the Values of $x$ so that a Given $3\times 3$ Matrix is Singular
Find all the values of $x$ so that the following matrix $A$ is a singular matrix.
x & x^2 & 1 \\
0 & -1 & 1
Find All Values of $x$ so that a Matrix is Singular
1 & -x & 0 & 0 \\
0 &1 & -x & 0 \\
0 & 0 & 1 & -x \\
0 & 1 & 0 & -1
\end{bmatrix}\] be a $4\times 4$ matrix. Find all values of $x$ so that the matrix $A$ is singular.
Subspace of Skew-Symmetric Matrices and Its Dimension
Let $V$ be the vector space of all $2\times 2$ matrices. Let $W$ be a subset of $V$ consisting of all $2\times 2$ skew-symmetric matrices. (Recall that a matrix $A$ is skew-symmetric if $A^{\trans}=-A$.)
(a) Prove that the subset $W$ is a subspace of $V$.
(b) Find the dimension of $W$.
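For part (b), writing $A^{\trans} = -A$ entrywise forces $a_{11}=a_{22}=0$ and $a_{21}=-a_{12}$, so every $2\times 2$ skew-symmetric matrix is a scalar multiple of one fixed matrix $J$ and $\dim W = 1$. A small NumPy check of this observation:

```python
# Every 2x2 skew-symmetric matrix is a multiple of J = [[0,1],[-1,0]].
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
A = 3.7 * J                                 # an arbitrary element of W
is_skew = np.allclose(A.T, -A)              # A is indeed skew-symmetric
is_multiple_of_J = np.allclose(A, A[0, 1] * J)   # determined by one entry
```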
Vector Space of Polynomials and a Basis of Its Subspace
Let $P_2$ be the vector space of all polynomials of degree two or less.
Consider the subset in $P_2$
\[Q=\{ p_1(x), p_2(x), p_3(x), p_4(x)\},\] where
\begin{align*}
&p_1(x)=1, &p_2(x)=x^2+x+1, \\
&p_3(x)=2x^2, &p_4(x)=x^2-x+1.
\end{align*}
(a) Using the basis $B=\{1, x, x^2\}$ of $P_2$, give the coordinate vectors of the vectors in $Q$.
(b) Find a basis of the span $\Span(Q)$ consisting of vectors in $Q$.
(c) For each vector in $Q$ which is not a basis vector you obtained in (b), express the vector as a linear combination of basis vectors.
A Matrix Representation of a Linear Transformation and Related Subspaces
Let $T:\R^4 \to \R^3$ be a linear transformation defined by
\[ T\left (\, \begin{bmatrix}
x_1 \\
x_4
\end{bmatrix} \,\right) = \begin{bmatrix}
x_1+2x_2+3x_3-x_4 \\
3x_1+5x_2+8x_3-2x_4 \\
x_1+x_2+2x_3
(a) Find a matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$.
(b) Find a basis for the null space of $T$.
(c) Find the rank of the linear transformation $T$.
Inner Product, Norm, and Orthogonal Vectors
Let $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3$ be vectors in $\R^n$. Suppose that the vectors $\mathbf{u}_1$ and $\mathbf{u}_2$ are orthogonal, the norm of $\mathbf{u}_2$ is $4$, and $\mathbf{u}_2^{\trans}\mathbf{u}_3=7$. Find the value of the real number $a$ in $\mathbf{u_1}=\mathbf{u_2}+a\mathbf{u}_3$.
(The Ohio State University, Linear Algebra Exam Problem)
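One way to find $a$ (a sketch of a solution, not necessarily the posted one): take the inner product of both sides of $\mathbf{u}_1=\mathbf{u}_2+a\mathbf{u}_3$ with $\mathbf{u}_2$, giving $0 = \|\mathbf{u}_2\|^2 + a\,\mathbf{u}_2^{\trans}\mathbf{u}_3 = 16 + 7a$, so $a = -16/7$. A numerical check with one concrete choice of vectors satisfying the constraints:

```python
# u2 has norm 4 and u2 . u3 = 7; with a = -16/7, u1 = u2 + a*u3 is
# orthogonal to u2, as the derivation above predicts.
import numpy as np

u2 = np.array([4.0, 0.0, 0.0])          # |u2| = 4
u3 = np.array([7.0 / 4.0, 1.0, 0.0])    # u2 . u3 = 7
a = -16.0 / 7.0
u1 = u2 + a * u3
orthogonal = abs(np.dot(u1, u2)) < 1e-9
```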
Give a Formula for a Linear Transformation if the Values on Basis Vectors are Known
Let $T: \R^2 \to \R^2$ be a linear transformation.
\mathbf{u}=\begin{bmatrix}
\end{bmatrix}, \mathbf{v}=\begin{bmatrix}
\end{bmatrix}\] be 2-dimensional vectors.
Suppose that
T(\mathbf{u})&=T\left( \begin{bmatrix}
\end{bmatrix} \right)=\begin{bmatrix}
\end{bmatrix},\\
T(\mathbf{v})&=T\left(\begin{bmatrix}
\end{bmatrix}\right)=\begin{bmatrix}
\end{bmatrix}.
Let $\mathbf{w}=\begin{bmatrix}
x \\
\end{bmatrix}\in \R^2$.
Find the formula for $T(\mathbf{w})$ in terms of $x$ and $y$.
Linear Independent Continuous Functions
Let $C[3, 10]$ be the vector space consisting of all continuous functions defined on the interval $[3, 10]$. Consider the set
\[S=\{ \sqrt{x}, x^2 \}\] in $C[3,10]$.
Show that the set $S$ is linearly independent in $C[3,10]$.
Vector Space of Polynomials and Coordinate Vectors
&p_1(x)=x^2+2x+1, &p_2(x)=2x^2+3x+1, \\
&p_3(x)=2x^2, &p_4(x)=2x^2+x+1.
Elementary Row Operations
Gaussian-Jordan Elimination
Solutions of Systems of Linear Equations
Linear Combination and Linear Independence
Nonsingular Matrices
Inverse Matrices
Subspaces in $\R^n$
Bases and Dimension of Subspaces in $\R^n$
General Vector Spaces
Subspaces in General Vector Spaces
Linearly Independency of General Vectors
Bases and Coordinate Vectors
Dimensions of General Vector Spaces
Linear Transformation from $\R^n$ to $\R^m$
Linear Transformation Between Vector Spaces
Orthogonal Bases
Determinants of Matrices
Computations of Determinants
Introduction to Eigenvalues and Eigenvectors
Eigenvectors and Eigenspaces
Diagonalization of Matrices
The Cayley-Hamilton Theorem
Dot Products and Length of Vectors
Eigenvalues and Eigenvectors of Linear Transformations
Jordan Canonical Form
Find All Eigenvalues and Corresponding Eigenvectors for the $3\times 3$ matrix
Compute $A^5\mathbf{u}$ Using Linear Combination
Sum of Squares of Hermitian Matrices is Zero, then Hermitian Matrices Are All Zero
Is an Eigenvector of a Matrix an Eigenvector of its Inverse? | CommonCrawl |
Nano Express
Nucleation mechanism of nano-sized NaZn13-type and α-(Fe,Si) phases in La-Fe-Si alloys during rapid solidification
Xue-Ling Hou1,2,
Yun Xue1,2,
Chun-Yu Liu1,2,
Hui Xu1,2,
Ning Han3,
Chun-Wei Ma3 &
Manh-Huong Phan4
Nanoscale Research Letters volume 10, Article number: 143 (2015)
The nucleation mechanism involving rapid solidification of undercooled La-Fe-Si melts has been studied experimentally and theoretically. The classical nucleation theory-based simulations show a competitive nucleation process between the α-(Fe,Si) phase (size approximately 10 to 30 nm) and the cubic NaZn13-type phase (hereinafter 1:13 phase, size approximately 200 to 400 nm) during rapid solidification, and that the undercooled temperature change ∆T plays an important factor in this process. The simulated results about the nucleation rates of the α-(Fe,Si) and 1:13 phases in La-Fe-Si ribbons fabricated by a melt-spinner using a copper wheel with a surface speed of 35 m/s agree well with the XRD, SEM, and TEM studies of the phase structure and microstructure of the ribbons. Our study paves the way for designing novel La-Fe-Si materials for a wide range of technological applications.
La-Fe-Si alloys exhibiting a giant magnetocaloric effect (GMCE) near room temperature are one of the most promising candidate materials for advanced magnetic refrigeration technology [1-4]. In La-Fe-Si alloys, the NaZn13-type phase (1:13 phase), which undergoes a first-order magneto-structural transition accompanied by a typical itinerant electron metamagnetic transition and a large volume change in the vicinity of its Curie temperature T C, has been reported to be a driving force for achieving the GMCE [5-8]. From a materials perspective, the 1:13 phase with a cubic NaZn13-type (Fm \( \overline{3} \) c) structure is very difficult to form directly from equilibrium solidification conditions. It has been shown that during an equilibrium solidification, α-(Fe,Si) phase (A2: Im \( \overline{3} \) m) dendrites firstly grow from the liquid as the primary phase and then a peritectic reaction with the surrounding liquid occurs to form the 1:13 phase (α-(Fe,Si) + L → 1:13 phase). Trace amounts of a La-rich or a LaFeSi phase are also found in the interdendritic region [9,10]. It is a major difficulty to produce the 1:13 phase because of its low phase stability at elevated temperatures and low atomic diffusivity [11,12]. Due to the incompleteness of the peritectic reaction, a large number of α-(Fe,Si) dendrites are preserved at room temperature. In as-cast conditions attained by conventional arc-melting techniques, La-Fe-Si alloys show a two-phase structure composed of α-(Fe,Si) and La-Fe-Si (Cu2Sb-type: P4/nmm) phases. It is therefore essential to anneal the as-cast alloys in vacuum at a high temperature for a long time (approximately 1,323 K, 30 days) to gain the desired 1:13 phase. Recently, the melt spinning technique has emerged as a more efficient approach for producing La(Fe,Si)13 materials, since the desired 1:13 phase could be obtained subject to a much shorter time annealing (approximately 1,273 K, 20 to 120 min) [11,12]. 
The primary contents of α-(Fe,Si) and 1:13 phases obtained from the melt spinning technique are entirely different from those obtained using conventional equilibrium solidification techniques [13]. However, the origin of this difference has remained an open question. While the nucleation mechanism of 1:13 and α-(Fe,Si) phases in La-Fe-Si alloys during rapid solidification has been yet investigated, knowledge of which is key to exploiting their desirable properties for a wide range of technological applications.
To address these emerging and important issues in the present work, we have investigated theoretically and experimentally the nucleation mechanism of α-(Fe,Si) and 1:13 phases in melt-spun La-Fe-Si ribbons. Detailed microstructural studies of the wheel-side and free-side surfaces of the melt-spun ribbons are reported. Our simulated and experimental results consistently show that there exists a competitive nucleation process between the nano-sized α-(Fe,Si) and 1:13 phases during rapid solidification, and that the undercooled temperature change, ∆T, plays a crucial factor in this process. A similar trend has also been reported in other peritectic alloys [14-17].
Button ingots with a nominal composition of LaFe11.5Si1.5 were prepared by arc-melting 99% La, 99.9% Fe, and 99.5% Si crystals in an argon gas atmosphere. The ingots were remelted four times and each time the button was turned over to obtain a homogeneous composition. The button was broken into pieces, and these pieces were then put into a quartz tube with a nozzle. The chamber of the quartz tube was evacuated to a vacuum of 3 to 5 × 10−3 Pa and then filled with high-purity Ar. The samples were melted by electromagnetic induction and then ejected through the nozzle using a pressure difference into a turning cooper wheel. The surface speed of the Cu wheel was approximately 35 m/s to get ribbon samples with a thickness about 25 μm. Here, we denote the surface of the ribbon far from the copper wheel as the free surface, while the surface of the ribbon in direct contact with the copper wheel is referred to as the cooled surface. The phases and crystal structures of the ribbons were characterized by powder X-ray diffraction (XRD) using Cu-Kα radiation. The microstructure analysis was carried out by a scanning electron microscope (SEM) with an energy dispersive spectrometer (EDS) (model JSM-6700 F, JEOL Ltd., Tokyo, Japan) and a transmission electron microscope (TEM, model JEM-2010 F, JEOL Ltd., Tokyo, Japan). The TEM specimen was prepared by a dual-beam focused ion beam (FIB, model 600 i, FEI Company, Oregon, USA).
The room temperature XRD patterns, SEM, and simulated results using the classical nucleation theory, as shown in Figure 1a and c, reveal the change in composition and volume fraction of the α-(Fe,Si) and 1:13 phases on the cooled surface and free surfaces of a melt-spun ribbon. As one can see clearly in Figure 1a, the XRD patterns show that the majority of the 1:13 phase is on the cooled surface of the ribbon, while this phase diminishes, even disappears, when crossing toward the other surface of the ribbon. The majority of the α-(Fe,Si) phase is found on the free surface. The as-cast microstructure appears to be very different between the cooled and free surfaces of the ribbon (see regions A and B of Figure 1c). These results indicate that the rapid solidification process favors a direct formation of the 1:13 phase from the liquid melt of La-Fe-Si. By contrast, under an equilibrium solidification condition, the 1:13 phase is formed via a peritectic reaction process between the nascent α-(Fe,Si) and liquid (L) phase (1:13 → α-(Fe,Si) + L). It is worth noting that there is a distinct difference in the formed phase structure and microstructure between the cooled and free surfaces of the melt-spun ribbon. This can be attributed to the difference in the nucleation rates of the α-(Fe,Si) and 1:13 phases. According to the classical nucleation theory (CNT) [18,19], the heterogeneous nucleation rate can be determined by
$$ I=\frac{k_{\mathrm{B}}T{N}_{\mathrm{n}}}{3\pi \eta (T)a_0^3}\exp\left[-\frac{\varDelta G^{*}}{k_{\mathrm{B}}T}\right], $$
where k B, η(T), N n, a 0, and ∆G* are the Boltzmann constant, the temperature-dependent viscosity of the undercooled melt, the number of potential nucleation sites, the average atomic distance, and the activation energy for forming a critical nucleus, respectively. ∆G* can be expressed as
$$ \varDelta {G}^{*}=\frac{16\pi }{3}\frac{\sigma^3}{\varDelta {G}_{\mathrm{V}}^2}f\left(\theta \right)=\frac{16}{3}\frac{\sigma^3\varDelta {S}_{\mathrm{f}}{T}_{\mathrm{l}}^3}{{\left({T}_{\mathrm{l}}-T\right)}^2}f\left(\theta \right), $$
where σ, ∆G v, ∆S f, T l, T, and f(θ) are the interfacial energy, the Gibbs free energy difference between liquid and solid, the entropy of fusion, the liquid temperature, the temperature, and the catalytic factor for nucleation, respectively. The interfacial energy, σ, can be estimated by the model developed by Spaepen [19,20]:
$$ \sigma =\alpha \frac{\varDelta {S}_{\mathrm{f}}T}{{\left({N}_{\mathrm{l}}{V}_{\mathrm{m}}^2\right)}^{1/3}}, $$
where α is a structure-dependent factor, N l is the Avogadro constant, and V m is the molar volume. By inserting the essential parameters listed in Table 1 into Eqs. 1 to 3, the heterogeneous nucleation rates for the α-(Fe,Si) and 1:13 phases can be simulated. The calculated results (Figure 1b) show a competing nucleation between the α-(Fe,Si) and 1:13 phases that occurred during rapid solidification. As long as the undercooled temperature change, ∆T, is less than 707 K, the α-(Fe,Si) phase has a higher nucleation rate than the 1:13 phase on the free surface of the ribbon, and it is therefore the primary phase in a slow solidification process. For ∆T > 707 K, however, the reverse situation is observed. The nucleation rate of the 1:13 phase is faster than that of the α-(Fe,Si) phase on the cooled surface of the ribbon, thus resulting in the 1:13 phase as the primary solidification phase. These calculated results are consistent with the obtained XRD data (Figure 1a). When ∆T = 707 K, the nucleation rate of the 1:13 phase is equal to that of the α-(Fe,Si) phase. This is seen as an intersection of the two curves of Figure 1b, where the microstructure is found to be an obvious watershed between the cooled and free surfaces of the ribbon (see Figure 1c and Figure 2a); this watershed matches the green lines of Figure 2b and c. It can also be seen in Figure 1c that small-sized grains in region A are on the cooled surface, and its microstructure is very different from that of region B of the free surface. The 'transition' region from region A (the cooled surface) to region B (the free surface) can be seen in cross-sectional SEM images with higher magnifications (Figure 2b and c), where the small-sized grains appear as rectangular black dots in region A (Figure 2c). The chemical compositions were observed to change between regions A and B during the rapid solidification process of La-Fe-Si.
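The competing-nucleation picture of Eqs. (1) to (3) can be sketched numerically. All material constants below (the per-phase liquidus temperatures and the lumped barrier constants B) are invented for illustration, and each phase is given its own liquidus rather than the common ∆T axis of Figure 1b; only the functional form I ∝ T·exp(−∆G*/k B T) with ∆G* ∝ 1/(T l − T)² follows the text.

```python
import math

# Hypothetical per-phase liquidus T_l (K) and lumped barrier constant B
# (standing in for the sigma^3 * DeltaS_f * f(theta) terms of Eq. (2)).
PHASES = {"alpha(Fe,Si)": (1750.0, 5.0e9), "1:13": (1550.0, 1.0e9)}

def log_rate(phase, T):
    """Log of Eq. (1), up to additive constants, at melt temperature T."""
    T_l, B = PHASES[phase]
    if T >= T_l:
        return -math.inf                 # no driving force above liquidus
    return math.log(T) - B / ((T_l - T) ** 2 * T)  # -DeltaG*/(k_B T) term

# Scan the melt temperature downward: alpha(Fe,Si) nucleates faster at
# shallow undercooling, while the 1:13 phase takes over at deep undercooling.
T_cross = next(T for T in range(1549, 300, -1)
               if log_rate("1:13", T) > log_rate("alpha(Fe,Si)", T))
```

With these invented constants the crossover lands near 1388 K; the mechanism (the phase with the smaller barrier constant winning only at deep undercooling) mirrors the ∆T = 707 K crossover of Figure 1b, though the numbers do not.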
EDS analysis showed that the content of La and Si in region A was higher than those of region B and of the nominal composition (see Table 2). The content of Fe was lower than those of region B and of the nominal composition. The variations in the chemical compositions in regions A and B are likely associated with the varying contents of α-(Fe,Si) phase.
The room temperature XRD patterns, SEM, and simulated results using the classical nucleation theory. (a) XRD patterns of both the cooled and free surfaces of a melt-spun La-Fe-Si ribbon; (b) nucleation rates of the α-(Fe,Si) and La(Fe,Si)13 phases versus undercooling for the La-Fe-Si alloy; and (c) a cross-sectional SEM image of the ribbon indicating the microstructural difference between the cooled surface region (region A) and the free surface region (region B).
Table 1 Physical parameters of the La-Fe-Si alloy [21] used in our calculations
Cross-sectional SEM images. Cross-sectional SEM images of a melt-spun La-Fe-Si ribbon (a) with different magnifications (b,c). There exist three different regions: region A (the cooled surface), region B (the free surface), and a transitional region between A and B.
Table 2 The chemical compositions determined by EDS for regions A and B of the melt-spun La-Fe-Si ribbon (Figure 2 ) relative to its nominal composition
Figure 3a shows the global microstructural morphology of the cooled surface of the ribbon (region A). Expanded views of the white circle and square areas of Figure 3a are shown in Figure 3b and d, respectively. The HRTEM images and corresponding Fourier transforms for 'C' of Figure 3b, a large dark spherical precipitate and its adjacent matrix, are displayed in Figure 3c, where the matrix is indexed to the structure of the 1:13 phase, while the spherical precipitate 'C' is indexed to the α-(Fe,Si) phase with approximately 97.26 at.% Fe and 2.74 at.% Si as determined by EDS. By the same analysis, the spherical precipitates 'E', 'F', and 'G' in Figure 3d are determined to be the α-(Fe,Si) phase with chemical compositions of approximately 96 to 98 at.% Fe and 2 to 4 at.% Si. The Moiré fringes in Figure 3g arise from two adjacent spherical α-(Fe,Si) precipitates in 'E' of Figure 3d. These spherical α-(Fe,Si) precipitates are embedded in the 1:13 matrix. The sheet labeled 'D', with an adjacent 1:13 matrix, can be indexed to the α-(Fe,Si) phase by the HRTEM images and corresponding Fourier transforms in Figure 3e. Thus two shapes of α-(Fe,Si), spheres and sheets, existed on the cooled surface of the ribbon during rapid solidification. The majority 1:13 phase, a matrix with equiaxed crystals of approximately 200 to 400 nm, is observed on the cooled surface with some spherical α-(Fe,Si) precipitates (size approximately 20 to 100 nm) as a minor phase, as seen in the upper half of Figure 3a. The shape and density of the α-(Fe,Si) phase evolve along the white long arrows on the cooled surface from near to far from the copper wheel: the volume fraction of the fine spherical α-(Fe,Si) phase decreases while that of the sheet-like α-(Fe,Si) phase increases.
The spherical α-(Fe,Si) phase is replaced by a coarse irregular α-(Fe,Si) phase, and a higher density of the latter is precipitated where the ribbon surface is far from the copper wheel (see short arrows and white triangles in Figure 3a). The corresponding selected area diffraction (SAED) pattern for the white triangle of Figure 3a is further identified as α-(Fe,Si) (Figure 3f). A higher degree of supercooling thus gave rise to a higher nucleation rate of the 1:13 phase and governed the density and shape of the primary α-(Fe,Si) phase on the cooled surface of the ribbon.
The global microstructural morphology. A cross-sectional TEM micrograph of the cooled surface region (region A) (a), with higher magnifications of the regions of the sample indicated by the dashed circle (b) and box (d). HRTEM micrographs of regions C, D, and E and their corresponding FFT patterns in (c), (e), and (g); the SAED pattern of the 'dashed triangle' of (a) is shown in (f).
Figure 4a shows a TEM micrograph of the free surface of the ribbon (region B), with a magnification shown in Figure 4b. The microstructure consists of grain clusters, which were formed during the rapid solidification stage. The cluster boundaries are well defined (see the white dashes in Figure 4a). Some nano-sized worm-like morphology was observed in the internal region of the clusters (Figure 4b). EDS revealed that the chemical composition of the black matrix was consistent with the α-(Fe,Si) phase (2.09 at.% La, 92.90 at.% Fe, and 5.00 at.% Si). The HRTEM images and corresponding Fourier transforms for the white circle of Figure 4c (regions 'G' and 'H'), shown in Figure 4d and e, can be indexed to the α-(Fe,Si) phase. The TEM analyses further confirm that during the rapid (melt-spinning) solidification of La-Fe-Si alloys the α-(Fe,Si) phase is the majority in region B of Figure 2b, while the 1:13 phase is the majority in region A of Figure 2b and c. These results are in good agreement with the XRD data and the results simulated using classical nucleation theory. It is important to point out that, owing to the enhanced ∆T at the cooled surface of a melt-spun La-Fe-Si ribbon, the desired 1:13 phase can be formed directly from the melt during melt-spinning. This clear understanding of the competitive nucleation mechanism between the 1:13 and α-(Fe,Si) phases allows us to address the emerging and important question of why the 1:13 phase does not form directly from the melt under equilibrium solidification conditions or under arc-melting, but does under rapid (melt-spinning) solidification. It provides good guidance for the development of La-Fe-Si materials with desirable magnetic properties for a wide range of technological applications, such as magnetic refrigerants for use in active magnetic refrigerators.
The TEM micrographs results. TEM micrographs of the free surface region for region B (a), with higher magnifications of the 'dashed region' (b) and the 'dashed box' (c). HRTEM images and corresponding FFT patterns of region G (d) and region H (e).
The nucleation mechanism involved in the rapid solidification of undercooled La-Fe-Si melts has been studied theoretically and experimentally. We find that for ∆T < 707 K, the α-(Fe,Si) phase has a higher nucleation rate than the 1:13 phase, and it is the primary phase in a slow solidification process. For ∆T > 707 K, the nucleation rate of the 1:13 phase exceeds that of the α-(Fe,Si) phase, resulting in primary solidification of the 1:13 phase. At ∆T = 707 K, the 1:13 and α-(Fe,Si) phases have equal nucleation rates, but the microstructural morphology is distinctly different on the cooled and free surfaces of the ribbon. The desired nano-sized 1:13 phase can be formed directly from the melt during melt-spinning owing to the enhanced ∆T.
EDS:
energy dispersive spectrometer
GMCE:
giant magnetocaloric effect
HRTEM:
high resolution transmission electron microscope
SAED:
selected area diffraction
SEM:
scanning electron microscope
TEM:
transmission electron microscope
XRD:
X-ray diffraction
Shen BG, Sun JR, Hu FX, Zhang HW, Cheng ZH. Recent progress in exploring magnetocaloric materials. Adv Mater. 2009;21:4545.
Lyubina J, Schäfer R, Martin N, Schultz L, Gutfleisch O. Novel design of La (Fe, Si)13 alloys towards high magnetic refrigeration performance. Adv Mater. 2010;22:3735.
Cheng X, Chen YG, Tang YB. High-temperature phase transition and magnetic property of LaFe11.6Si1.4 compound. J Alloys Compd. 2011;509:8534.
Gutfleisch O, Yan A, Muller KH. Large magnetocaloric effect in melt-spun LaFe13-xSix. J Appl Phys. 2005;97:10M305.
Lyubina J, Gutfleisch O, Kuz'min MD, Richter M. La(Fe, Si)13-based magnetic refrigerants obtained by novel processing routes. J Magn Magn Mater. 2008;320:2252.
Yamada H. Metamagnetic transition and susceptibility maximum in an itinerant-electron system. Phys Rev B. 1993;47:11211.
Liu T, Chen YG, Tang YB, Xiao SF, Zhang EY, Wang JW. Structure and magnetic properties of shortly high temperature annealing LaFe11.6Si1.4 compound. J Alloys Compd. 2009;475:672.
Fujita A, Akamatsu Y, Fukamichi KJ. Itinerant electron metamagnetic transition in La(FexSi1−x)13 intermetallic compounds. J Appl Phys. 1999;85:4756.
Raghavan V. Fe-La-Si (Iron-Lanthanum-Silicon). J Phase Equilib. 2001;22:158.
Niitsu K, Kainuma R. Phase equilibria in the Fe-La-Si ternary system. Intermetallics. 2012;20:160.
Fujita A, Koiwai S, Fujieda S, Fukamichi K, Kobayashi T, Tsuji H. Magnetocaloric effect in spherical La(FexSi1-x)13 and their hydrides for active magnetic regenerator. J Appl Phys. 2009;105:07A936.
Liu J, Krautz M, Skokov K, Woodcock TG, Gutfleisch O. Systematic study of the microstructure. Entropy change and adiabatic temperature change in optimized La-Fe-Si alloys. Acta Materialia. 2011;59:3602.
Liu XB, Altounian Z, Tu GH. The structure and large magnetocaloric effect in rapidly quenched LaFe11.4Si1.6 compound. J Phys Condens Matter. 2004;16:8043.
Boettinger WJ. The structure of directionally solidified two-phase Sn-Cd peritectic alloys. Metall Trans. 1974;5:2023.
Umeda T, Okane T, Kurz W. Phase selection during solidification of peritectic alloys. Acta Mater. 1996;44:4209.
Trivedi R. Theory of layered-structure formation in peritectic systems. Metall Trans. 1995;A26:1583.
St John DH, Hogan LM. A simple prediction of the rate of the peritectic transformation. Acta Metall. 1987;35:171.
Christian JW. The Theory of Transformations in Metals and Alloys. Oxford: Pergamon Press; 1981. p. 12.
Spaepen F. The temperature dependence of the crystal-melt interfacial tension: a simple model. Mater Sci Eng A. 1994;178:15.
Chen YZ, Liu F, Yang GC, Zhou YH. Nucleation mechanisms involving in rapid solidification of undercooled Ni80.3B19.7 melts. Intermetallics. 2011;19:221.
Gong XM. Nucleation Kinetics of Crystalline Phases in Undercooled La-Fe-Si Melts. Master Thesis. 2008;41–4.
The authors gratefully acknowledge the support from the Instrumental Analysis & Research Center, Shanghai University. This work was partially supported by Shanghai Education Commission Project (Grant No. 12ZZ085), Shanghai Natural Science Foundation of China (Grant No. 13ZR1415300), and Shanghai leading Academic Discipline Project (Grant No. S30107). M-HP acknowledges support from The Florida Cluster for Advanced Smart Sensor Technologies (FCASST).
Laboratory for Microstructures, Shanghai University, Shanghai, 200444, China
Xue-Ling Hou, Yun Xue, Chun-Yu Liu & Hui Xu
School of Materials Science and Engineering, Shanghai University, Shanghai, 200072, China
Shanghai University of Engineering Science, Shanghai, 201620, China
Ning Han & Chun-Wei Ma
Department of Physics, University of South Florida, Tampa, FL, 33620, USA
Manh-Huong Phan
Xue-Ling Hou
Yun Xue
Chun-Yu Liu
Hui Xu
Ning Han
Chun-Wei Ma
Correspondence to Xue-Ling Hou or Manh-Huong Phan.
X-LH conceived of the study and participated in its design and coordination. YX performed TEM measurements, C-YL and HX performed SEM and the theoretical simulations. NH and C-WM fabricated the ribbons. X-LH and M-HP analyzed the data and wrote the paper. All authors read and approved the final manuscript.
Hou, XL., Xue, Y., Liu, CY. et al. Nucleation mechanism of nano-sized NaZn13-type and α-(Fe,Si) phases in La-Fe-Si alloys during rapid solidification. Nanoscale Res Lett 10, 143 (2015). https://doi.org/10.1186/s11671-015-0843-1
Accepted: 27 February 2015
La(Fe,Si)13 ribbons
Nucleation mechanism
Rapid solidification
EMN Meeting | CommonCrawl |
BMC Chemistry
Green and simple production of graphite intercalation compound using sodium bicarbonate as the intercalation agent
Xin Wang1,
Guogang Wang2 &
Long Zhang3
BMC Chemistry volume 16, Article number: 13 (2022)
The preparation of graphite intercalation compounds (GICs) faces technical difficulties such as complex processes, the need for strong acid reagents, and products containing corrosive elements. A novel, efficient, and simple method using sodium bicarbonate as the intercalation agent, which combines mechanical force with a chemical route, was therefore developed for the green production of GIC. The production parameters were optimized by single-factor experiments; the optimal conditions were a ball mill speed of 500 r/min for 4 h (stainless-steel beads 6 mm in diameter as the ball milling media), a decomposition temperature of 200 ℃ for 4 h, and a 1:1 mass ratio of flake graphite to sodium bicarbonate. SEM results revealed that the prepared product shows the lamellar separation, pores, and semi-open morphology characteristic of GIC. FT-IR results indicated that the preparation method does not change the carbon-based structure and that the sodium bicarbonate intercalant has entered the interlayers of the graphite flakes to form GIC. XRD results further showed that the GIC products maintain the arrangement of the carbon atoms while the sodium bicarbonate intercalation agent has entered the graphite interlayers and increased the interlayer distance of the layered graphite. The expandability of the GIC products was studied; the results show that they are expandable, and the expansion volume of the GIC product prepared under the optimal conditions reached 142 mL/g. A theoretical basis for large-scale production was provided by studying the mechanism of the preparation method and designing a process flow chart. The method has the advantages of a simple process, products free of impurities, no use of aggressive reagents, and a stable, non-polluting process, making it favorable for mass production, and it provides a new preparation route and idea for two-dimensional nanomaterials whose preparation presents technical difficulties.
Carbon is abundant on earth, and it can form many carbon materials with special properties. Graphite is an allotrope of carbon with excellent properties such as corrosion resistance, good heat resistance, and chemical stability [1]. These properties give it broad application prospects in many fields [2]. In recent years, it was found that graphite intercalation compounds (GICs) can be obtained by appropriate treatment of graphite. GIC maintains the planar hexagonal layered structure while the intercalation material interacts with the carbon layers, changing some of the structural parameters within and between the layers. GIC therefore retains the excellent properties of graphite, such as high conductivity, light weight, and high specific surface area, while also showing many special properties such as resistance to corrosion and oxidation and tolerance of high and low temperatures [3]. Studies have shown that expandability is one of the important indicators of GIC products in practical applications. Expandable GIC can quickly decompose and generate a large amount of gas at a suitable temperature, expanding the graphite dozens or even hundreds of times along the c axis, which gives it important industrial value and application prospects [4]. So far, the most popular methods for the preparation of GIC include chemical oxidation [5], electrochemical oxidation [6], the vapor diffusion method [7], and ultrasonic oxidation [8]. These methods often use aggressive reagents and are restricted by relatively high energy consumption, complex operation, environmental pollution, and sometimes low yield and poor product quality. The liquid phase method has been extensively studied because it is easy to operate and can give higher-quality products [9].
However, the use of excessive organic solvents often leads to product instability and environmental pollution and increases production costs. Therefore, it is necessary to develop a novel, greener production method to resolve the problems mentioned above.
To achieve these goals, we designed a new method for the simple and green production of GIC from flake graphite. The effects of the production parameters (ball milling media, ball milling media size, ball milling time, ball mill speed, decomposition temperature, decomposition time, and mass ratio of flake graphite to sodium bicarbonate) were investigated systematically. At the same time, the expandability of GIC under different production parameters was also studied. The morphology and structure of the obtained GIC samples were characterized by SEM, XRD, and FT-IR, and a reaction mechanism was proposed. The process flow was also designed. This work has academic and industrial reference value for the preparation of GIC.
Materials and instruments
The flake graphite (0.5 mm) and the sodium bicarbonate (AR) were purchased from Sinopharm Chemical reagent Co. (Shanghai, China).
Ball mills (QM-3SP04, YXQM-2 L, KEQ-2 L and QM3SP2) were purchased from Tianchuang Powder Technology Co. (Changsha, China), Miqi Instrument Equipment Co. (Changsha, China), Ru Rui Technology Co. (Guangzhou, China) and Ru Rui Technology Co. (Guangzhou, China), respectively. The analytical balance (TG328A) was purchased from Balance instrument factory (Shanghai, China). The pumping equipment was purchased from Guohua Electric Co. (Shanghai, China). The vacuum drying oven was purchased from Anteing Electronic Instrument Factory (Shanghai, China). The muffle furnace (TDL-1800 A) was purchased from Keda Instrument Co (Nanyang, China).
Production procedures
The flake graphite powder and sodium bicarbonate (NaHCO3) solid were mixed and loaded into the reaction tank containing steel balls according to the experimental design. The ball mill was started after adjusting it to a suitable rotating speed. After the designed ball milling time, the mixture was taken out and placed in the muffle furnace, whose temperature was adjusted between 150 and 300 °C to suit the decomposition of NaHCO3. After the designed reaction time, the mixture was cooled, washed, and dried to obtain the expandable GIC. Expanded graphite was obtained by high-temperature expansion of the expandable GIC at 950 ℃.
We also investigated the effects of different preparation parameters on the quality of the expandable GIC products. Eight process factors (ball milling media, ball milling media size, ball milling time, ball mill speed, decomposition temperature, decomposition time, mass ratio of flake graphite to NaHCO3, and ball mill model) were designed and adjusted in the production process.
Morphological elucidation
Morphological information of samples was obtained by SU8020 Hitachi scanning electron microscopy (Tokyo, Japan).
Structural investigation
The crystal structure of the obtained GIC product was identified by X-ray diffraction. The samples were scanned and recorded using an X-ray diffractometer (Rigaku, Japan) from 15° to 60° 2θ (Bragg angle), using Cu Kα radiation at 55 mA and 60 kV. Structural information on the product was obtained by FT-IR (IS50) over a wavenumber range of 4000−400 cm−1. After washing and drying, the powders were mixed with KBr, compacted into disks, and analyzed.
Determination of expansion volume
Expansion volume refers to the volume per unit mass of GIC after expansion at a certain temperature; the unit is mL/g. The expansion volume was determined according to national standard GB10698-89 as follows [10]: first, a certain amount of the sample prepared by the experimental method described in 2.2 was weighed on an analytical balance, and a quartz beaker (with a scale) was placed for 5 min in the muffle furnace (adjustable from 100 to 1500 °C) that had been heated to 950 °C. The sample was then added to the quartz beaker with the furnace door left open and taken out as soon as it no longer expanded. The average of the highest and lowest points on the top surface of the expanded sample was read as the expanded volume of the sample (V). The expansion volume Z is calculated using the following formula:
$${Z}=\frac{{V}}{{m}}$$
V- Volume of sample after expansion (mL),
m- Mass of the sample (g).
Two parallel tests were performed for each measurement, and the allowable error of the results conformed to the requirements of the GB10698-89 standard.
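The determination above reduces to a simple calculation. In this sketch, the 5 % relative-difference tolerance used to compare the two parallel tests is an assumed check, since the GB10698-89 limit is not quoted in the text.

```python
def expansion_volume(v_ml, m_g):
    """Expansion volume Z = V / m in mL/g (GB10698-89)."""
    if m_g <= 0:
        raise ValueError("sample mass must be positive")
    return v_ml / m_g


def duplicate_mean(z1, z2, max_rel_diff=0.05):
    """Average two parallel determinations, flagging disagreement.

    The 5 % tolerance is an assumption for illustration, not the
    standard's stated limit.
    """
    mean = (z1 + z2) / 2.0
    if mean > 0 and abs(z1 - z2) / mean > max_rel_diff:
        raise ValueError("parallel tests disagree beyond tolerance")
    return mean


# e.g. 0.5 g of GIC expanding to 71 mL gives the reported optimum of 142 mL/g
assert expansion_volume(71.0, 0.5) == 142.0
```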
Optimization of process parameters
Effect of ball milling time
Figure 1 shows the XRD patterns of GIC products obtained with different ball milling times (2−10 h); the other experimental conditions were set as follows: ball mill speed 500 r/min, decomposition temperature 150 ℃, decomposition time 2 h, and a 1:1 mass ratio of flake graphite to NaHCO3. It can be seen from Fig. 1 that the samples prepared with different ball milling times all show the characteristic absorption peaks of GIC. As the time was extended, the intensity of the characteristic peak of GIC decreased first and then increased, reaching a minimum at 4 h. According to literature reports, in XRD analysis of GIC, a weaker characteristic split peak and a larger peak width indicate a better intercalation effect. Generally speaking, ball milling becomes more thorough as the milling time increases, and the graphite and intercalation agent mix more uniformly under the action of mechanical force, which causes the characteristic absorption peak to weaken. However, when the ball milling time is too long, restacking of the graphite becomes more pronounced, which is not conducive to the preparation of GIC and causes the intensity of the characteristic peak to rise again. Thus, the XRD characterization results show that the intercalation effect was best after ball milling for 4 h. Since studies have shown that expandability is one of the important indicators of GIC products in practical applications, the thermal expansion performance of the GIC products obtained at different ball milling times was also studied, and in the subsequent single-factor experiments the expansion volume was used as the index for optimizing the production parameters. Figure 2 shows the expansion volumes of GIC products obtained with different ball milling times.
It can be seen that the expansion volume of GIC increased first and then decreased as the ball milling time was extended, reaching a maximum at 4 h. Thus, the appropriate ball milling time was 4 h, which was adopted in the subsequent experimental runs.
XRD patterns of GIC products after thermal treatment obtained by different ball milling times
Expansion volumes of GIC products after thermal treatment obtained by different ball milling times
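The single-factor optimization used throughout this section amounts to picking the factor level with the largest expansion volume. The volumes below are placeholders shaped like the trend of Figure 2 (rising to a maximum at 4 h, then falling); the paper reports a numeric volume only for the overall optimum, 142 mL/g.

```python
# Illustrative (not measured) expansion volumes, mL/g, for one single-factor
# sweep; only the peak value (142 mL/g) is taken from the paper.
sweep_ball_milling_time_h = {2: 105, 4: 142, 6: 128, 8: 118, 10: 110}


def best_level(sweep):
    """Return the factor level with the largest expansion volume."""
    return max(sweep, key=sweep.get)


assert best_level(sweep_ball_milling_time_h) == 4
```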
Effect of decomposition time
The effect of decomposition time on the production process was studied with the conditions set as follows: ball milling time 4 h, ball mill speed 500 r/min, decomposition temperature 150 ℃, a 1:1 mass ratio of flake graphite to NaHCO3, and decomposition times ranging between 1 and 15 h. Figure 3 shows the expansion volumes of GIC products obtained with different decomposition times. As the decomposition time was extended, the expansion volume of GIC increased first and then remained basically unchanged, reaching its maximum at 4 h. At the beginning, more carbon dioxide was produced by decomposition of the intercalant NaHCO3 as the decomposition time increased, which effectively increased the distance between the graphite flakes and raised the expansion volume of the GIC. However, once the decomposition time was prolonged further, the intercalant NaHCO3 had decomposed completely, and the decomposition product Na2CO3 could not decompose further (the decomposition temperature of Na2CO3 is above 850 ℃), so the expansion effect remained basically unchanged. As can be inferred from the results, a decomposition time of 4 h was found to be suitable.
Expansion volumes of GIC products after thermal treatment obtained from different decomposition times
Effect of decomposition temperature
The decomposition temperature is a key parameter: it directly affects the rate at which gas is generated by decomposition of the intercalation agent, which in turn affects the expansion effect [11]. Figure 4 shows the effect of the decomposition temperature on the production of GIC at a ball mill speed of 500 r/min, a 1:1 mass ratio of flake graphite to NaHCO3, and decomposition temperatures from 150 to 300 ℃. The expansion volume of the GIC products increased first and then decreased with decomposition temperature, reaching a maximum at 200 ℃. This is because a higher decomposition temperature gives a better decomposition effect, which increases the expansion volume of the GIC. Solid NaHCO3 starts to decompose at 50 ℃ and decomposes completely when the temperature reaches about 200 ℃. Therefore, when the decomposition temperature is too high, the decomposition rate is too fast, so the carbon dioxide escapes without increasing the distance between the graphite layers and the expansion effect deteriorates. From the results, a suitable decomposition temperature is 200 ℃.
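The gas budget behind this discussion follows from the decomposition stoichiometry 2 NaHCO3 → Na2CO3 + H2O + CO2. A minimal sketch, assuming ideal-gas behavior at ambient pressure:

```python
R = 8.314462       # gas constant, J/(mol K)
M_NAHCO3 = 84.007  # molar mass of NaHCO3, g/mol


def gas_volume_per_gram(temp_c, pressure_pa=101325.0):
    """Ideal-gas volume (mL) of CO2 + H2O vapor released per gram of NaHCO3.

    2 NaHCO3 -> Na2CO3 + H2O + CO2: each mole of NaHCO3 yields 0.5 mol CO2
    and 0.5 mol H2O vapor, i.e. 1 mol of gas per 2 mol of NaHCO3.
    """
    n_gas = 1.0 / M_NAHCO3  # mol of gas (0.5 CO2 + 0.5 H2O) per gram of NaHCO3
    v_m3 = n_gas * R * (temp_c + 273.15) / pressure_pa
    return v_m3 * 1e6       # m^3 -> mL


# At the optimal 200 C, roughly 0.46 L of gas per gram of intercalant is
# available to pry the graphite layers apart.
v_200 = gas_volume_per_gram(200.0)
```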
Expansion volumes of GIC products after thermal treatment obtained from different decomposition temperatures
Effect of ball mill speed
The effect of the ball mill speed on the preparation of GIC was examined from 300 r/min to 600 r/min at a 1:1 mass ratio of flake graphite to NaHCO3. Figure 5 shows that the expansion volume of the GIC products increased first and then decreased with ball mill speed, reaching a maximum at 500 r/min. This is because the graphite and intercalant mix more uniformly under the influence of mechanical force at a higher ball mill speed. However, when the ball mill speed is too fast, the graphite flakes restack more and disperse unevenly, which lowers the expansion volume of the GIC. It is therefore reasonable to expect that a suitable ball mill speed exists. To ensure the best preparation process, 500 r/min was selected as the optimal ball mill speed.
Expansion volumes of GIC products after thermal treatment obtained from different ball mill speeds
Effect of mass ratio of graphite to intercalant
Figure 6 shows that the expansion volume of GIC products increased first and then decreased with the mass ratio of flake graphite to NaHCO3, reaching a maximum at 1:1. The mass ratio of flake graphite to NaHCO3 was adjusted from 1:0.5 to 1:2. As the amount of NaHCO3 increases, more carbon dioxide is produced during decomposition to increase the distance between the graphite layers, which raises the expansion volume of the GIC. But when the amount of NaHCO3 is too large, most of it, except for a small part mixed with the graphite, is wrapped around the outside of the graphite under the action of mechanical force and decomposes rapidly during thermal decomposition, giving a poor intercalation effect and decreasing the expansion volume of the GIC. From the results, a suitable mass ratio of flake graphite to NaHCO3 is 1:1.
Expansion volumes of GIC products after thermal treatment obtained from different mass ratio of flake graphite to NaHCO3
Effect of ball milling media
The ball milling media affect the production process because different media exert different squeezing, impact, and shear forces and slide differently inside the mill during ball milling. Under the above process parameters, experiments were carried out with different ball milling media: zirconia ceramic beads, stainless-steel beads, and cemented carbide beads (diameter 8 mm, 10 beads of each). Table 1 shows the expansion volumes of GIC products obtained with the different ball milling media. The expansion volume of the GIC obtained with zirconia ceramic beads as the milling medium was the smallest. This is because the ceramic beads have a low specific gravity, so the impact, extrusion, and shear forces on the milled material were small and the milling efficiency was low, resulting in uneven mixing of graphite and NaHCO3. The milling effects of stainless-steel beads and cemented carbide beads were relatively good, because their specific gravities are relatively large, the kinetic energy imparted by the mill is large, and the extrusion, impact, and shear forces on the milled material are larger. In addition, since the internal sliding of stainless-steel beads is greater than that of cemented carbide beads, they give a better grinding effect. Therefore, stainless-steel beads were used as the ball milling media.
Table 1 Expansion volumes of GIC products after thermal treatment obtained under different ball milling medias
Effect of ball mill media size
The size of the ball milling media directly affects the grinding effect through the impact force, extrusion force, and grinding action on the material during ball milling. To keep the total mass of milling media loaded into the ball mill the same, stainless-steel beads with diameters of 4 mm (0.26 g/piece), 6 mm (0.89 g/piece), 8 mm (2.1 g/piece), and 10 mm (4.16 g/piece) were used in quantities of 80, 24, 10, and 5, respectively. Figure 7 shows the expansion volumes of GIC products obtained with different ball milling media sizes. Small-diameter stainless-steel beads are numerous; each ball strikes with a small force, but the number of strikes is large and the grinding area is large. Large-diameter stainless-steel beads are few; each ball strikes with a large force, but the number of strikes is small and the grinding area is small. Therefore, a good grinding effect can be achieved by choosing a suitable milling media size. It can be seen from Fig. 7 that the expansion volume of the GIC was largest when the diameter of the stainless-steel beads was 6 mm, because this size not only ensures sufficient impact force but also provides more strikes and a strong grinding effect. Hence, a diameter of 6 mm was selected.
Expansion volumes of GIC products after thermal treatment obtained from different ball milling media sizes
Effect of ball mill model
To study the influence of different ball mill models on the preparation of GIC, experiments were carried out in four different models of ball mill under the optimal conditions above. The experimental results are shown in Table 2. The ball mill manufacturers and models in the table are 1# (Changsha Tianchuang Powder Technology Co., Ltd. QM-3SP04), 2# (Changsha Miqi Instrument Equipment Co., Ltd. YXQM-2 L), 3# (Guangzhou Rurui Technology Co., Ltd. KEQ-2 L), and 4# (Guangzhou Rurui Technology Co., Ltd. QM3SP2). Table 2 shows that the expansion volumes of GIC prepared with different types of ball mill under the same experimental conditions were basically the same. Thus, the production of GIC by the combined mechanical-force and chemical method described in this paper is stable and is not affected by the ball mill model.
Table 2 Expansion volumes of GIC products after thermal treatment obtained under different ball mill models
Mechanism discussion
Scanning electron microscope (SEM) analysis
Scanning electron microscopy was used to observe the morphology of the GIC products obtained under the optimum production conditions. It can be seen from Fig. 8 that the GIC was composed of many bonded and superimposed graphite flakes [12]. The densely arranged graphite flakes were divided into flakes several hundred nanometers thick, with obvious signs of bulging and swelling. This is due to changes in the carbon layer structure caused by the intercalation agent entering between the graphite layers. Owing to the intercalation, many honeycomb-like fine pores, fusiform in shape, appear between the graphite flakes. The layered structure still exists, but fractures and voids appear between the lamellae, because the van der Waals forces between the layers were disrupted and the distance between the lamellae increased significantly under the effect of the intercalation.
X-ray diffraction (XRD) analysis
In order to study the crystal structure change of the product before and after intercalation, the XRD patterns of the samples were measured. Figure 9 shows the XRD patterns of the graphite raw material, GIC and expanded graphite, respectively. The expandable GIC product was obtained at the optimum production conditions. Expanded graphite was obtained by high-temperature expansion of GIC at 950 ℃.
SEM images of GIC sample produced at the optimum conditions
It can be seen from Fig. 9 that natural flake graphite has two sharp characteristic peaks, at 2θ = 26.60° and 2θ = 54.76° [13], with high diffraction peak intensity, which is due to the regular arrangement of internal particles and high crystallinity. The intensity of the diffraction peaks of GIC was greatly weakened, and the peak widths were broadened. The d002 diffraction peak (2θ = 26.60°) was split into two diffraction peaks at 2θ = 24.68° and 2θ = 28.32°, and the d004 diffraction peak (2θ = 54.76°) was split into two diffraction peaks at 2θ = 51.98° and 2θ = 56.02°. This is because, after the flake graphite was intercalated, the distance between the graphite flakes increased and the crystal structure was damaged, which resulted in the splitting and left shift of the diffraction peaks and the weakening of the diffraction intensity. The interlayer spacing can be calculated according to the Bragg equation 2d sin θ = nλ [14, 15]. Under the test conditions λ = 0.154 nm, so the interlayer spacings of GIC, calculated by substituting 2θ = 24.68° and 2θ = 51.98° from Fig. 9 into the Bragg equation, were 0.366 and 0.175 nm respectively, which are larger than the corresponding interlayer spacings of flake graphite, 0.335 and 0.167 nm. This is due to the destruction of the graphite structure along the c-axis direction, and indicated that the NaHCO3 intercalation agent had entered the graphite interlayers and increased the interlayer distance of the layered graphite. These results indicated that the intercalation agent had entered between the graphite layers and GIC was prepared. The interlayer structure of the expanded graphite obtained after the expansion of GIC was partially destroyed due to the effect of the intercalator. The remaining undestroyed graphite crystallites still retain the original graphite structure, so the characteristic diffraction peaks of expanded graphite were basically the same as those of flake graphite.
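The spacing values quoted here follow from the Bragg equation. The short sketch below recomputes d for the four reported peak positions; the Cu Kα wavelength of 0.154 nm and the helper name `bragg_spacing` are assumptions for illustration, and small rounding differences from the quoted values are to be expected.

```python
import math

# Bragg's law: n*lambda = 2*d*sin(theta)  =>  d = n*lambda / (2*sin(theta))
# Assumed Cu K-alpha wavelength in nm (a standard XRD source choice).
WAVELENGTH_NM = 0.154

def bragg_spacing(two_theta_deg, n=1, wavelength=WAVELENGTH_NM):
    """Interlayer spacing d (in nm) for a reflection observed at 2-theta degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# 2-theta peak positions reported in the text
for label, two_theta in [("GIC d002", 24.68), ("GIC d004", 51.98),
                         ("graphite d002", 26.60), ("graphite d004", 54.76)]:
    print(f"{label}: 2-theta = {two_theta:5.2f} deg -> d = {bragg_spacing(two_theta):.3f} nm")
```

The flake graphite d002 value comes out near the standard 0.335 nm, and the GIC spacings come out larger, consistent with intercalation widening the layers.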
Compared with flake graphite, the diffraction peak intensity of expanded graphite was significantly weakened while the peak shape remained sharp, which indicated that the crystallites in the expanded graphite were further reduced, although graphite crystallites were still present.
XRD patterns of different samples
Fourier transform infrared spectra (FT-IR) analysis
As a relatively easy method, FT-IR spectroscopy has been widely used in GIC research, from which direct structural information and the changes occurring during various chemical treatments can be obtained. Figure 10 shows the FT-IR patterns of the graphite raw material, the expandable GIC obtained at the optimum production conditions and expanded graphite, respectively. It can be seen that all three samples have characteristic absorption peaks at 1582 cm-1 and 3428 cm-1. The absorption peak at 1582 cm-1 belongs to the C=C stretching vibration of the sp2 structure of the graphite crystal [16], indicating that the internal structure of the GIC and expanded graphite layers has not changed and that the preparation method does not alter the carbon-based structure. The absorption peak at 3428 cm-1 is attributed to the O–H stretching vibration, arising from trace moisture contained in the sample itself or in the KBr used for pressing. In the infrared spectrum of GIC, there are strong characteristic peaks at 880 cm-1 and 1360 cm-1, both attributed to internal vibration modes of carbonate [17]. These results indicate the presence of carbonate in GIC. It can be seen from the infrared spectrum of expanded graphite that the characteristic peaks of carbonate were significantly weakened, indicating that the acid radical ions had decomposed to gas and escaped, although a small amount of residue remained. These results further indicate that the preparation method does not change the carbon-based structure, and that the NaHCO3 intercalation agent entered the graphite interlayers and increased the interlayer distance of the layered graphite.
FT-IR patterns of different samples
X-ray photoelectron spectrometer (XPS)
To further analyse the elemental composition of the GIC product, an XPS test was used. The experimental results are shown in Fig. 11. From Fig. 11, the binding energies at 282.55 eV and 530.33 eV are GIC's characteristic peaks, attributed to C1s and O1s respectively. C1s is mainly due to the carbon structure of GIC, and O1s is mainly due to the intercalator. Further quantitative calculation found that the carbon content of the GIC product was 88.98% and the oxygen content was 11.02%. The experimental results show that our preparation method produces good GIC products with no impurities. This XPS result is in good agreement with those of FT-IR and XRD.
XPS patterns of GIC sample produced at the optimum conditions
Production mechanism
The schematic representation of the production mechanism can be seen in Fig. 12. The flake graphite is intercalated with NaHCO3 as an intercalant under the action of mechanical ball milling, and the NaHCO3 is then decomposed at a suitable temperature under the protection of inert gas. The gas generated during the decomposition of NaHCO3 increases the interlayer spacing of the layered graphite, and the GIC product is obtained after washing and drying. Further research showed that the GIC prepared by this method has good thermal expansion properties. The method has the advantages of a simple process, mild preparation conditions, no use of aggressive reagents, and process stability. Thus, it could be an alternative green and efficient method for GIC production in industry.
Schematic representation of the production mechanism
Process flow design of the preparation method
According to the experimental results in this paper, we designed the process flow for green GIC production by the combined mechanical force and chemical method. The specific process flow chart is shown in Fig. 13. After mixing the graphite and the intercalant in a certain ratio, mechanical ball milling is performed at a set speed. After the set ball milling time is reached, the mixture is taken out and then thermally decomposed at a suitable temperature. After the decomposition time is reached, expandable GIC can be obtained after cooling, washing and drying.
Process flow diagram of preparation method
This paper investigated a new method, combining mechanical force with a chemical method, to produce GIC from graphite. The effects of production conditions on the thermal expansion performance of the GIC products were investigated by single-factor experiments. The optimal conditions were: 6 mm diameter stainless-steel beads as the ball milling media, a ball mill speed of 500 r/min for 4 h, a decomposition temperature of 200 ℃ for 4 h, and a mass ratio of flake graphite to NaHCO3 of 1:1. Under the optimized conditions, the expansion volume of the GIC product was 142 mL/g. At the same time, the mechanism of the preparation method was studied, and the preparation process was designed. The method has the advantages of a simple process, products free of impurities, no use of aggressive reagents, and process stability. In general, the new method could be a green and promising method for GIC production in industry.
All data generated or analysed during this study are included in this published article.
Novoselov KS, Geim AK. Electric field effect in atomically thin carbon films. Science. 2004;306:666–9.
Stoller MD, Park SJ, Zhu Y, An J, Ruoff RS. Graphene-based ultra-capacitors. Nano Lett. 2008;8:3498–502.
Zhu JP. Study on physical properties of graphite sulfate intercalation compound. J Hefei Univ Technol Nat Sci. 2001;24(6):1158–62.
Yin W. Study on preparation and properties of graphite intercalation composite. Jilin: Jilin University, 2003.
Chen YP, Li SY, et al. Optimization of initial redox potential in the preparation of expandable graphite by chemical oxidation. New Carbon Mater. 2013;28(6):435–41.
Yang YQ, Wang JD, et al. Preparation and study of expandable graphite electrochemical method. Fiber Composites. 1998;2:22–4.
Shornikova O, Sorokina N, et al. The effect of graphite nature on the properties of exfoliated graphite doped with nickel oxide. J Phys Chem Solids. 2008;69(6):1168–70.
Guo XQ, Huang J, et al. Preparation of graphene nanosheet functional material by ultrasonic stripping of secondary expanded graphite. Funct Mater. 2013;12(44):1800–3.
Gao Y, Gu JL, et al. Bromine-graphite intercalation compound. Carbon Technol. 2000;4(109):21–5.
Qiu T, Chen ZG. Study on the advanced treatment of oilfield wastewater by expanded graphite. J Jiangsu Polytech Univ. 2006;18(4):11–13.
Zhang Y, Xu BZ. Study on the process parameters of CrO3-graphite intercalation compound prepared by vacuum heat treatment. In: Proceedings of the tenth national heat treatment conference. 2011;9:842–845.
Liu QQ, Zhang Y, et al. Study on the preparation of expanded graphite and its oil absorption performance. Non Metallic Mines. 2004;27(6):39–41.
Wang J, Han ZD. The combustion behavior of polyacrylate ester/graphite oxide composites. Polym Adv Technol. 2006;17(4):335–340.
Chan WM, Wang JJ, et al. Preparation and characterization of graphene nanoplatelets by ultrasonic stripping. Dev Appl Mater. 2017;5(10):77–85.
Shi DK. Material science foundation. 2nd ed. Beijing: Machinery Industry Press; 2003. p. 78–80.
Ferrari AC, Robertson J. Raman spectroscopy in carbons: from nanotubes to diamond. Beijing: Chemical Industry Press; 2007. p. 193.
Ferrari AC, Robertson J. Interpretation of Raman spectra of disordered and amorphous carbon. Phys Rev B. 2000;61(20):14095–107.
XRD data, FT-IR data and XPS data were obtained using equipment maintained by the Jilin Institute of Chemical Technology Center of Characterization and Analysis. The authors acknowledge the assistance of the JLICT Center of Characterization and Analysis. We would also like to thank Professor Zhang Long and the Complex Utilization of Petro-resources and Biomass Laboratory for providing experimental instruments and equipment for some of the experiments in the early stage of the article.
School of Petrochemical Technology, Jilin Institute of Chemical Technology, Jilin, 132022, China
Xin Wang
School of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
Guogang Wang
Jilin Provincial Engineering Laboratory for the Complex Utilization of Petro-resources and Biomass, School of Chemical Engineering, Changchun University of Technology, Changchun, 130012, Jilin, People's Republic of China
Long Zhang
XW conceived and designed the experiments. XW and GW conducted the experiments and interpreted the results. XW participated in analyze the data. XW wrote the paper, and was a major contributor in writing the manuscript. Long Zhang provided experimental instruments and equipment for some of the experiments in the early stage of the article. All authors read and approved the final manuscript.
Correspondence to Xin Wang.
All the authors have approved to submit the manuscript.
Wang, X., Wang, G. & Zhang, L. Green and simple production of graphite intercalation compound used sodium bicarbonate as intercalation agent. BMC Chemistry 16, 13 (2022). https://doi.org/10.1186/s13065-022-00808-y
Graphite intercalation compound
Green production
Mechanical force and chemical method
If $3x + 2(1 + x) = 17$, what is the value of $6x + 5$?
Expanding and collecting terms on the left hand side of the first equation gives $5x+2=17$. Subtracting 2 from each side gives $5x=15$, then dividing each side by 5 gives $x=3$. Now that we know what $x$ is, we can substitute it into $6x+5$ and get $6(3)+5=18+5=\boxed{23}$. | Math Dataset |
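The arithmetic can be sanity-checked with a few lines of Python (purely illustrative):

```python
# 3x + 2(1 + x) = 17  ->  5x + 2 = 17  ->  x = (17 - 2) / 5
x = (17 - 2) / 5
assert 3 * x + 2 * (1 + x) == 17  # x really satisfies the original equation
print(6 * x + 5)                  # the target expression, 6x + 5
```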
math exams: August 2011
math exams
This site prepares students for math exams: SAT, GRE, GMAT, CLEP practice tests, CLEP College Algebra, CLEP Precalculus, CLEP Mathematics
PCAT Quantitative Practice Questions -8
Pharmacy College Admission Test
Evaluate the expression: $1000(2^{-1.5})$
$2828,427$
$2000.00$
$353.55$
Evaluate the expression: $\log_{49}7$
$\frac{1}{4}$
$\frac{1}{49}$
Place into standard form: $(5+i)-(7-7i)$
$-2+8i$
$2+8i$
$12+8i$
Find the domain of the function: $f(x)=\sqrt{-6x+12}$
$x \geq 3$
$x \leq -2$
$x \leq 2$
What is the value of: $3\ln e^{6}$
$6$
$18$
What is the value of: $\csc(150^{\circ})$
$-1$
Solve the equation: $x^{2}-10x+50=0$
$5+5i$ or $5-5i$
What is the value of $x$: $\log_{10}x=-3$
$0.01$
$0.001$
$0.1$
Factor the expression: $x^{2}-3ix-2$
$(x+i)(x+2i)$
$(x+i)(x-2i)$
$(x-i)(x-2i)$
$(-x-i)(x-2i)$
Identify the horizontal and vertical asymptotes for: $\frac{5x^{2}}{x^{2}-9}$
$y=5$, $x=-3$, $x=3$
$y=-5$, $x=-3$
$y=5$, $x=3$
$y=5$, $x=-3$
For how many positive integers, $n$, is true that $n^{2} \leq 3n$
If $a^{4}=16$, then $3^{a}$
$\sqrt{20}\sqrt{5}=$
$2\sqrt{5}$
$5\sqrt{10}$
The sum of three positive consecutive even integers is x.
What is the value of the middle of the three integers?
$\frac{x}{3}+2$
$3x$
$\frac{x-2}{3}$
$\frac{x}{3}$
What is the average of $5^{10}$, $5^{20}$, $5^{30}$, $5^{40}$ and $5^{50}$?
$5^{9}+5^{19}+5^{29}+5^{39}+5^{49}$
$5^{30}$
$5^{149}$
$150$
Which of the following is equal to $(5^{6} \times 5^{9})^{10}$?
$25^{150}$
What is the value of $3^{\frac{1}{3}} \times 3^{\frac{2}{3}} \times 3^{\frac{3}{3}}$?
How many integers satisfy the inequality $|x| < 2 \pi$.
More than $7$
What is the average of $5^{a} \times 5^{b}=5^{300}$
If $5^{a}5^{b}=\frac{5^{c}}{5^{d}}$, what is d in terms of $a$, $b$ and $c$?
$\frac{c}{a+b}$
$c+ab$
$c-a-b$
$c+a-b$
Which of the following is equivalent to $5^{9}$
$5^{4}+5^{4}+5^{1}$
$5^{2} \times 5^{4} \times 5^{3}$
$\frac{10^{9}}{2^{10}}$
$(5^{4})^{5}$
Which of the following is equivalent to $\sqrt{289}$
Which of the following is a perfect square?
Which of the following is equivalent to $3\sqrt{10}$
$3\sqrt{5} \times \sqrt{5}$
$\sqrt{90}$
$3\sqrt{5} + 3\sqrt{2}$
$3\sqrt{5}+3\sqrt{5}$
Which of the following is equivalent to $10^{\frac{2}{5}}$
$\sqrt[5]{5}$
$\sqrt[5]{10}$
$\sqrt[5]{100}$
Which of the following fractions is equivalent to $\frac{3}{6} \times \frac{2}{5}$?
$\frac{15}{12}$
Which of the following expressions is equivalent to $\frac{7}{6} \div \frac{5}{2}$?
$\frac{9}{6}+\frac{9}{5}$
$\frac{1}{7}+\frac{2}{35}$
If $3^{x}=729$, what is $x^{3}$?
What is the value of $||4|-|-7||$
$-11$
What is the value of $(\sqrt{3}+\sqrt{5})^{2}-(\sqrt{8})^{2}$
Solve $15x-32=18-10x$
Solve $\frac{x}{8}=\frac{x-2}{4}$
$-\frac{1}{2}$
Which of the following are the factors of $t^{2}+8t+16$
$(t-8)(t-2)$
$(t+8)(t+2)$
$(t+1)(t+16)$
Solve for a in term of b, if $6a+12b=24$
$24-12b$
$2-\frac{1}{2}b$
$4-2b$
If $ax+2b=5c-dx$, what does x equal in terms of a, b, c, and d?
$a-d$
$(5c-2b)(a-d)$
$\frac{5c-d-2b}{a}$
$\frac{5c-2b}{a-d}$
If $(z-9)(z+3)=0$, what are the two possible values of z?
$z=-9$ and $z=3$
$z=9$ and $z=0$
$z=0$ and $z=-3$
If $z^{2}-6z=16$, which of the following could be a value of $z^{2}+6z$?
If $3\sqrt{ a}-10=2$, what is the value of a?
Given $\frac{(x+6)(x^{2}-2x-3)}{x^{2}+3x-18}=10$, find the value of x.
Solve the equation $\frac{5x}{8}-\frac{3x}{5}=2$.
$2(5x-5)+5(2x+2)=$
$20x$
$20x-10$
$20x+10$
If $x=a+2$, and $y=-8-a$ then $x+y=$
$2a-6$
If $x \ne -5$, then $\frac{x^{2}+3x-10}{x+5}-(x-2)=$
If $(a-\frac{1}{a})^{2}=8$, then $a^{2}+\frac{1}{a^{2}}=$
$(x+y)^{2}=16$, and $x^{2}+y^{2}=6$ then $xy=$
$(x+y)=2$, and $x^{2}-y^{2}=6$ then $x-y=$
$\frac{15y+3}{3}-5y=$
$10y+1$
if $b^{2}-a^{2}=9$ then $5(a-b)(a+b)=$
When $c \ne 3$, then $\frac{c^{2}-9}{c-3}=$
$c-3$
$c+3$
$3-c$
If $b>0$, and $b^{2}-1=10 \times 12$, then $b=$
If $3x+7=5x+1$
$3.5 $
$4 $
What is the next term in the sequence: 6, 3, 10, 7, 14, 11, ...?
$15 $
$ 17$
The area of the base of Cylinder A is 8 times the area of the base of Cylinder B. What is the radius of Cylinder A in terms of the radius of Cylinder B?
$r_{A}=\frac{r_{B}}{8}$
$r_{A}=8r_{B} $
$r_{A}=2\sqrt{2}r_{B} $
If $x^{2}-2xy+y^{2}=121$, $x-y=$
If c is equal to the sum b and twice of a, which of the following is the average of b and c?
$a$
$b $
$c $
$a+b $
$f(x)=4x+8$, $f(c+3)=8$, $f(c)=$
$5^{n}.125^{m}=78,125$, $n+3m=$
$ 6$
$\frac{3b^{2}}{a^{3}}=27a^{2}$
$9a^{3} $
$\frac{1}{9a^{3}} $
$\frac{1}{a^{3}} $
Which of the following statements must be true about the x and y coordinates that satisfy the equation $ay-ax=0$, $a \ne 0$, $x \ne 0$ ,$y \ne 0$
$xy=1 $
$x=-y $
$y>x $
$x=y $
What is the length of the side of a cube whose volume is 125 cubic units?
If $\frac{1}{2}$ of a number is 3, what is $\frac{1}{3}$ of the number?
If $x=-1$, then $x^{5}+x^{4}+x^{3}+x^{2}-5=$
$-6 $
If $f(x)=2^{x}+7x$, then $f(4)=$
If $x-3=y$, then $(y-x)^{3}=$
$ -54$
If $a>b$, and $\frac{a}{b}>0$, which of the following is true?
$a>0$
$b>0$
$ab>0$
I only
II only
III only
I and II only
Which of the following is equal to $(\frac{x^{-7}y^{-5}}{x^{-3}y^{3}})^{-2}$
$x^{8}y^{16}$
$\frac{x^{8}}{y^{16}} $
$\frac{y^{16}}{x^{8}} $
$x^{4}y^{8}$
What is the slope of the line passing through the points (-1,7) and (3,5)?
$-\frac{1}{2} $
The symbol $\otimes$ represents a binary operation defined as $a \otimes b=3^{a}+2^{b}$, what is the value of $(-2)\otimes (-3)$
$\frac{72}{17} $
$-72 $
$ \frac{17}{72}$
If $\sqrt{\frac{49}{x}}=\frac{7}{3}$
A bike that originally sold for $150 \$$ was on sale for $120 \$$. What was the rate of discount?
$15 \%$
$25 \% $
If $ 0.10 < x < 0.12$, which of the following could be a value of $x$?
$9 \%$
If $\frac{xyz}{t}=w$ and $x$ and $t$ are doubled, what happens to the value of w
The value of $w$ is halved.
The value of $w$ is four times greater.
The value of $w$ is doubled
The value of $w$ remains the same.
What is the tenth term of the pattern below?
$\frac{3}{2}$, $\frac{9}{4}$, $\frac{27}{8}$, $\frac{81}{16}$,...
$\frac{3}{2^{10}}$
$(\frac{3}{2})^{10}$
$\frac{3^{10}}{2}$
If $a > 0$ and $b < 0$, which of the following is always negative?
$-ab$
$a+b$
$|a|-|b|$
$\frac{a}{b}$
Which of the following number pairs is in the ratio $3:7$?
$\frac{1}{3}$,$\frac{1}{7}$
$7$,$\frac{1}{3}$
If $x=-\frac{1}{4}$, then $(-x)^{-3}+(\frac{1}{x})^{2}=$
For which of the following values of $x$ is the relationship $x < x^{2} < x^{3}$ true?
$\frac{2}{3} $
$x^{2}+2xy+y^{2}=169$, $-|-(x+y)|=$
How many distincts factors does 900 have?
If $x=-\frac{1}{7}$, then which of the following is always positive for $n > 0$?
$x^{n}$
$n^{x}$
$nx$
$n-x$
CLEP Precalculus Practice Questions - Algebra review
Factor $3a^{2}+3ab-6b^{2}$
Factor $x^{3}-4x^{2}+2x-8$
Factor $25a^{2}-36b^{2}$
Resolve into factors $x^{2}-ax+bx-ab$
Resolve into factors $6x^{2}-9ax+4bx-6ab$
Resolve into factors $x^{2}+11x+24$
Resolve into factors $x^{2}-10x+24$
Resolve into factors $x^{2}-10ax+10a^{2}$
CLEP College Algebra Practice Questions - 12 & Answer Key
$\frac{5^{5}}{5^{4}}$
$\frac{3\sqrt{2}}{\sqrt{5}}$
$\sqrt[5]{1000}$
$\frac{7}{30}+\frac{2}{30}$
7 E
$2b-4$
$5c-d-2b-a$
$z=-12$ and $z=12$
$10x^{2}+20+x+20$
$3y+1$
$o$
CLEP College Algebra Practice Questions - 9 & Answer Key
$ 4.5$
$r_{A}=\frac{r_{B}}{4} $
$b+c $
$3a^{3}$
$x>y$
I and III only
$-\frac{17}{72}$
The value of $w$ is two times smaller.
$\frac{300}{200}$
$b^{a}$
$\frac{x}{n} $
$10\sqrt{5}$
$\frac{x}{3}-1$
$\frac{b}{ac}$
GRE Practice Questions - 10 & Answer Key
GRE Practice Questions - 9 & Answer Key
GMAT Practice Questions - 12 & Answer Key
GMAT Practice Questions - 9 & Answer Key
GMAT Practice Questions -6 & Answer Key
CLEP Precalculus Practice Questions - Algebra revi...
CLEP College Algebra Practice Questions - 12 & Ans...
CLEP College Algebra Practice Questions - 9 & Answ... | CommonCrawl |
\begin{definition}[Definition:Field Extension/Degree/Infinite]
Let $E / F$ be a field extension.
$E / F$ is an '''infinite field extension''' {{iff}} its degree $\index E F$ is not finite.
Category:Definitions/Field Extensions
\end{definition} | ProofWiki |
A copula-based bivariate integer-valued autoregressive process with application
Volume 6, Issue 2 (2019), pp. 227–249
Andrius Buteikis Remigijus Leipus
Pub. online: 12 March 2019 Type: Research Article Open Access
A bivariate integer-valued autoregressive process of order 1 (BINAR(1)) with copula-joint innovations is studied. Different parameter estimation methods are analyzed and compared via Monte Carlo simulations with emphasis on estimation of the copula dependence parameter. An empirical application on defaulted and non-defaulted loan data is carried out using different combinations of copula functions and marginal distribution functions covering the cases where both marginal distributions are from the same family, as well as the case where they are from different distribution families.
Different financial institutions that issue loans do so following company-specific (and/or country-defined) rules which act as a safeguard against issuing loans to people who are known to be insolvent. However, striving for higher profits might motivate some companies to issue loans to higher-risk clients. Usually, a company's methods for evaluating loan risk are not publicly available. However, one way to evaluate whether there are not too many knowingly very high-risk loans issued, and whether insolvent clients are adequately separated from responsible clients, would be to look at the quantity of defaulted and non-defaulted loans issued each day. The adequacy of a company's rules for issuing loans can be analysed by modelling via copulas the dependence between the number of defaulted loans and the number of non-defaulted loans. The advantage of such an approach is that copulas allow one to model the marginal distributions (possibly from different distribution families) and their dependence structure (which is described via a copula) separately. Because of this feature, copulas have been applied in many different fields, including survival analysis, hydrology, insurance risk analysis as well as finance (for examples of copula applications, see [3] or [4]), which also includes the analysis of loans and their default rates.
The dependence of the default rate of loans on different credit risk categories was analysed in [5]. To model the dependence, copulas from ten different families were applied and three model selection tests were carried out. Because of the small sample size (24 observations per risk category) most of the copula families were not rejected and a single best copula model was not selected. To analyse whether dependence is affected by time, Fenech et al. [6] estimated the dependence among four different loan default indexes before the global financial crisis and after. They have found that the dependence was different in these periods. Four copula families were used to estimate the dependence between the default index pairs. While these studies were carried out for continuous data, discrete models created with copulas are less investigated: Genest and Nešlehová [8] discussed the differences and challenges of using copulas for discrete data compared to continuous data. Note that the previously mentioned studies assumed that the data does not depend on its own previous values. By using bivariate integer-valued autoregressive models (BINAR) it is possible to account for both the discreteness and autocorrelation of the data. Furthermore, copulas can be used to model the dependence of innovations in the BINAR(1) models: Karlis and Pedeli [10] used the Frank copula and the normal copula to model the dependence of the innovations of the BINAR(1) model.
In this paper we expand on using copulas in BINAR models by analysing additional copula families for the innovations of the BINAR(1) model and analyse different methods for BINAR(1) model parameter estimation. We also present a two-step method for the parameter estimation of the BINAR(1) model, where we estimate the model parameters separately from the dependence parameter of the copula. These estimation methods (including the one used in [10]) are compared via Monte Carlo simulations. Finally, in order to analyse the presence of autocorrelation and copula dependence in loan data, an empirical application is carried out for empirical weekly loan data.
The paper is organized as follows. Section 2 presents the BINAR(1) process and its main properties, Section 3 presents the main properties of copulas as well as some copula functions. Section 4 compares different estimation methods for the BINAR(1) model and the dependence parameter of copulas via Monte Carlo simulations. In Section 5 an empirical application is carried out using different combinations of copula functions and marginal distribution functions. Conclusions are presented in Section 6.
2 The bivariate INAR(1) process
The BINAR(1) process was introduced in [18]. In this section we will provide the definition of the BINAR(1) model and will formulate its properties.
Definition 1.
Let ${\mathbf{R}_{t}}={[{R_{1,t}},{R_{2,t}}]^{\prime }}$, $t\in \mathbb{Z}$, be a sequence of independent identically distributed (i.i.d.) nonnegative integer-valued bivariate random variables. A bivariate integer-valued autoregressive process of order 1 (BINAR(1)), ${\mathbf{X}_{t}}={[{X_{1,t}},{X_{2,t}}]^{\prime }}$, $t\in \mathbb{Z}$, is defined as:
\[ {\mathbf{X}_{t}}=\mathbf{A}\circ {\mathbf{X}_{t-1}}+{\mathbf{R}_{t}}=\left[\begin{array}{c@{\hskip10.0pt}c}{\alpha _{1}}& 0\\ {} 0& {\alpha _{2}}\end{array}\right]\circ \left[\begin{array}{c}{X_{1,t-1}}\\ {} {X_{2,t-1}}\end{array}\right]+\left[\begin{array}{c}{R_{1,t}}\\ {} {R_{2,t}}\end{array}\right],\hspace{1em}t\in \mathbb{Z},\]
where ${\alpha _{j}}\in [0,1)$, $j=1,2$, and the symbol '∘' is the thinning operator which also acts as the matrix multiplication. So the jth ($j=1,2$) element is defined as an INAR process of order 1 (INAR(1)):
\[ {X_{j,t}}={\alpha _{j}}\circ {X_{j,t-1}}+{R_{j,t}},\hspace{1em}t\in \mathbb{Z},\]
where ${\alpha _{j}}\circ {X_{j,t-1}}:={\sum _{i=1}^{{X_{j,t-1}}}}{Y_{j,t,i}}$ and ${Y_{j,t,1}},{Y_{j,t,2}},\dots \hspace{0.1667em}$ is a sequence of i.i.d. Bernoulli random variables with $\mathbb{P}({Y_{j,t,i}}=1)={\alpha _{j}}=1-\mathbb{P}({Y_{j,t,i}}=0)$, ${\alpha _{j}}\in [0,1)$, such that these sequences are mutually independent and independent of the sequence ${\mathbf{R}_{t}}$, $t\in \mathbb{Z}$. For each t, ${\mathbf{R}_{t}}$ is independent of ${\mathbf{X}_{s}}$, $s\mathrm{<}t$.
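The recursion in this definition translates directly into simulation code. The sketch below draws a single INAR(1) component via binomial thinning; the Poisson innovations and the stdlib-only Poisson sampler are illustrative assumptions rather than part of the definition:

```python
import math
import random

def poisson(lam, rng):
    """Draw from Poisson(lam) by Knuth's multiplication method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def thin(alpha, x, rng):
    """Binomial thinning alpha o x: count of x Bernoulli(alpha) survivors."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def simulate_inar1(alpha, lam, n, seed=1):
    """X_t = alpha o X_{t-1} + R_t with i.i.d. Poisson(lam) innovations R_t."""
    rng = random.Random(seed)
    x = [poisson(lam, rng)]
    for _ in range(n - 1):
        x.append(thin(alpha, x[-1], rng) + poisson(lam, rng))
    return x

path = simulate_inar1(alpha=0.4, lam=2.0, n=20000)
print(sum(path) / len(path))  # near the stationary mean lam / (1 - alpha) = 10/3
```

A BINAR(1) path is obtained by running two such components whose innovation pairs $({R_{1,t}},{R_{2,t}})$ are drawn jointly rather than independently.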
Properties of the thinning operator are provided in [17] and [19], with proofs for a selected few. We present the main properties of the thinning operator which will be used later on in the case of the BINAR(1) model. Denote by '$\stackrel{d}{=}$' the equality of distributions.
Theorem 1 (Thinning operator properties).
Let $X,{X_{1}},{X_{2}}$ be nonnegative integer-valued random variables, such that $\mathbb{E}{Z^{2}}\mathrm{<}\infty $, $Z\in \{X,{X_{1}},{X_{2}}\}$, $\alpha ,{\alpha _{1}},{\alpha _{2}}\in [0,1)$ and let '∘' be the thinning operator. Then the following properties hold:
(a) ${\alpha _{1}}\circ ({\alpha _{2}}\circ X)\stackrel{d}{=}({\alpha _{1}}{\alpha _{2}})\circ X$;
(b) $\alpha \circ ({X_{1}}+{X_{2}})\stackrel{d}{=}\alpha \circ {X_{1}}+\alpha \circ {X_{2}}$;
(c) $\mathbb{E}(\alpha \circ X)=\alpha \mathbb{E}(X)$;
(d) $\mathbb{V}\mathrm{ar}(\alpha \circ X)={\alpha ^{2}}\mathbb{V}\mathrm{ar}(X)+\alpha (1-\alpha )\mathbb{E}(X)$;
(e) $\mathbb{E}((\alpha \circ {X_{1}}){X_{2}})=\alpha \mathbb{E}({X_{1}}{X_{2}})$;
(f) $\mathbb{C}\mathrm{ov}(\alpha \circ {X_{1}},{X_{2}})=\alpha \mathbb{C}\mathrm{ov}({X_{1}},{X_{2}})$;
(g) $\mathbb{E}(({\alpha _{1}}\circ {X_{1}})({\alpha _{2}}\circ {X_{2}}))={\alpha _{1}}{\alpha _{2}}\mathbb{E}({X_{1}}{X_{2}})$.
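Properties (c) and (d) can be verified by Monte Carlo. In the sketch below X is uniform on {0, …, 10} — an arbitrary illustrative choice with E(X) = 5 and Var(X) = 10:

```python
import random
import statistics

rng = random.Random(0)
alpha = 0.3

def thin(alpha, x):
    """Binomial thinning alpha o x."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

xs = [rng.randint(0, 10) for _ in range(200_000)]   # E X = 5, Var X = 10
ys = [thin(alpha, x) for x in xs]

print(statistics.mean(ys))      # property (c): ~ alpha * E X = 1.5
print(statistics.variance(ys))  # property (d): ~ alpha^2 Var X + alpha(1 - alpha) E X = 1.95
```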
${X_{j,t}}$, defined in eq. (2), has two random components: the survivors of the elements of the process at time $t-1$, each with survival probability ${\alpha _{j}}$, denoted by ${\alpha _{j}}\circ {X_{j,t-1}}$, and the elements which enter the system in the interval $(t-1,t]$, which are called arrival elements and denoted by ${R_{j,t}}$. We can obtain a moving average representation by repeated substitution and the properties of the thinning operator, as in [1] or [11, p. 180]:
\[\begin{aligned}{}{X_{j,t}}& ={\alpha _{j}}\circ {X_{j,t-1}}+{R_{j,t}}\stackrel{d}{=}{\sum \limits_{k=0}^{\infty }}{\alpha _{j}^{k}}\circ {R_{j,t-k}},\hspace{1em}j=1,2,t\in \mathbb{Z},\end{aligned}\]
where convergence on the right-hand side holds a.s.
Now we present some properties of the BINAR(1) model. They will be used when analysing some of parameter estimation methods. The proofs for these properties can be easily derived and some of them are provided in [17].
Theorem 2 (Properties of the BINAR(1) process).
Let ${\textbf{X}_{t}}={({X_{1,t}},{X_{2,t}})^{\prime }}$ be a nonnegative integer-valued time series given in Def. 1 and ${\alpha _{j}}\in [0,1)$, $j=1,2$. Let ${\textbf{R}_{t}}={({R_{1,t}},{R_{2,t}})^{\prime }}$, $t\in \mathbb{Z}$, be nonnegative integer-valued random variables with $\mathbb{E}({R_{j,t}})={\lambda _{j}}$ and $\mathbb{V}\mathrm{ar}({R_{j,t}})={\sigma _{j}^{2}}\mathrm{<}\infty $, $j=1,2$. Then the following properties hold:
(a) $\mathbb{E}{X_{j,t}}={\mu _{{X_{j}}}}=\frac{{\lambda _{j}}}{1-{\alpha _{j}}}$;
(b) $\mathbb{E}({X_{j,t}}|{X_{j,t-1}})={\alpha _{j}}{X_{j,t-1}}+{\lambda _{j}}$;
(c) $\mathbb{V}\mathrm{ar}({X_{j,t}})={\sigma _{{X_{j}}}^{2}}=\frac{{\sigma _{j}^{2}}+{\alpha _{j}}{\lambda _{j}}}{1-{\alpha _{j}^{2}}}$;
(d) $\mathbb{C}\mathrm{ov}({X_{i,t}},{R_{j,t}})=\mathbb{C}\mathrm{ov}({R_{i,t}},{R_{j,t}})$, $i\ne j$;
(e) $\mathbb{C}\mathrm{ov}({X_{j,t}},{X_{j,t+h}})={\alpha _{j}^{h}}{\sigma _{{X_{j}}}^{2}},\hspace{2.5pt}h\ge 0$;
(f) $\mathbb{C}\mathrm{orr}({X_{j,t}},{X_{j,t+h}})={\alpha _{j}^{h}}$, $h\ge 0$;
(g) $\displaystyle \mathbb{C}\mathrm{ov}({X_{i,t}},{X_{j,t+h}})=\frac{{\alpha _{j}^{h}}}{1-{\alpha _{i}}{\alpha _{j}}}\hspace{0.1667em}\mathbb{C}\mathrm{ov}({R_{i,t}},{R_{j,t}})$, $i\ne j$, $h\ge 0$;
(h) $\displaystyle \mathbb{C}\mathrm{orr}({X_{i,t+h}},{X_{j,t}})=\frac{{\alpha _{i}^{h}}\sqrt{(1-{\alpha _{i}^{2}})(1-{\alpha _{j}^{2}}})}{(1-{\alpha _{i}}{\alpha _{j}})\sqrt{({\sigma _{i}^{2}}+{\alpha _{i}}{\lambda _{i}})({\sigma _{j}^{2}}+{\alpha _{j}}{\lambda _{j}})}}\hspace{0.1667em}\mathbb{C}\mathrm{ov}({R_{i,t}},{R_{j,t}})$, $i\ne j$, $h\ge 0$;
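Property (f) — the geometric decay of the autocorrelation — can be checked on a simulated path. The Poisson(λ) innovations and the helper functions below are illustrative assumptions:

```python
import math
import random

rng = random.Random(7)
alpha, lam, n = 0.5, 3.0, 50_000

def poisson(lam):
    """Knuth's Poisson sampler, adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def thin(a, x):
    """Binomial thinning a o x."""
    return sum(1 for _ in range(x) if rng.random() < a)

# simulate X_t = alpha o X_{t-1} + R_t
x = [poisson(lam)]
for _ in range(n - 1):
    x.append(thin(alpha, x[-1]) + poisson(lam))

def autocorr(series, h):
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t + h] - m) for t in range(len(series) - h))
    den = sum((s - m) ** 2 for s in series)
    return num / den

for h in (1, 2, 3):
    print(h, round(autocorr(x, h), 3))  # theory: alpha**h = 0.5, 0.25, 0.125
```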
Similarly to (3), we have that
\[ {\mathbf{X}_{t}}\stackrel{d}{=}{\sum \limits_{k=0}^{\infty }}{\mathbf{A}^{k}}\circ {\mathbf{R}_{t-k}},\]
Hence, the distributional properties of the BINAR(1) process can be studied in terms of ${\textbf{R}_{t}}$ values. Note also, that according to [12], if ${\alpha _{j}}\in [0,1)$, $j=1,2$, then there exists a unique stationary nonnegative integer-valued sequence ${\mathbf{X}_{t}}$, $t\in \mathbb{Z}$, satisfying (1).
From the covariance and correlation (see (g) and (h) in Theorem 2) of the BINAR(1) process we see that the dependence between ${X_{1,t}}$ and ${X_{2,t}}$ depends on the joint distribution of the innovations ${R_{1,t}}$, ${R_{2,t}}$. Pedeli and Karlis [18] analysed BINAR(1) models when the innovations were linked by either a bivariate Poisson or a bivariate negative binomial distribution, where the covariance of the innovations can be easily expressed in terms of their joint distribution parameters. Karlis and Pedeli [10] analysed two cases when the distributions of innovations of a BINAR(1) model are linked by either the Frank copula or a normal copula with either Poisson or negative binomial marginal distributions. We will expand their work by analysing additional copulas for the BINAR(1) model innovation distribution as well as estimation methods for the distribution parameters.
3 Copulas
In this section we recall the definition and main properties of bivariate copulas, mainly following [8, 15] and [21] for the continuous and discrete settings.
3.1 Copula definition and properties
Copulas are used for modelling the dependence between several random variables. The main advantage of using copulas is that they allow to model the marginal distributions separately from their joint distribution. In this paper we are using two-dimensional copulas which are defined as follows:
A 2-dimensional copula $C:{[0,1]^{2}}\to [0,1]$ is a function with the following properties:
(i) for every $u,v\in [0,1]$:
\[ C(u,0)=C(0,v)=0;\]
(ii) for every $u,v\in [0,1]$:
\[ C(u,1)=u,\hspace{1em}C(1,v)=v;\]
(iii) for any ${u_{1}},{u_{2}},{v_{1}},{v_{2}}\in [0,1]$ such that ${u_{1}}\le {u_{2}}$ and ${v_{1}}\le {v_{2}}$:
\[ C({u_{2}},{v_{2}})-C({u_{2}},{v_{1}})-C({u_{1}},{v_{2}})+C({u_{1}},{v_{1}})\ge 0\]
(this is also called the rectangle inequality).
The theoretical foundation of copulas is given by Sklar's theorem:
Theorem 3 ([20]).
Let H be a joint cumulative distribution function (cdf) with marginal distributions ${F_{1}},{F_{2}}$. Then there exists a copula C such that for all $({x_{1}},{x_{2}})\in {[-\infty ,\infty ]^{2}}$:
\[ H({x_{1}},{x_{2}})=C\big({F_{1}}({x_{1}}),{F_{2}}({x_{2}})\big).\]
If ${F_{i}}$ is continuous for $i=1,2$ then C is unique; otherwise C is uniquely determined only on $\mathrm{Ran}({F_{1}})\times \mathrm{Ran}({F_{2}})$, where $\mathrm{Ran}(F)$ denotes the range of the cdf F. Conversely, if C is a copula and ${F_{1}},{F_{2}}$ are distribution functions, then the function H, defined by equation (7) is a joint cdf with marginal distributions ${F_{1}},{F_{2}}$.
If a pair of random variables $({X_{1}},{X_{2}})$ has continuous marginal cdfs ${F_{i}}(x),i=1,2$, then by applying the probability integral transformation one can transform them into random variables $({U_{1}},{U_{2}})=({F_{1}}({X_{1}}),{F_{2}}({X_{2}}))$ with uniformly distributed marginals, which can then be used when modelling their dependence via a copula. More about copula theory, its properties and applications can be found in [15] and [9].
3.2 Copulas with discrete marginal distributions
Since innovations of a BINAR(1) model are nonnegative integer-valued random variables, one needs to consider copulas linking discrete distributions. In this section we will mention some of the key differences when copula marginals are discrete rather than continuous.
Firstly, as mentioned in Theorem 3, if ${F_{1}}$ and ${F_{2}}$ are discrete marginals then a unique copula representation exists only on $\mathrm{Ran}({F_{1}})\times \mathrm{Ran}({F_{2}})$. However, the lack of uniqueness does not pose a problem in empirical applications: it merely implies that more than one copula may describe the distribution of the empirical data. Secondly, regarding concordance and discordance, the discrete case has to allow for ties (i.e. when two variables have the same value), so the concordance measures (Spearman's rho and Kendall's tau) are margin-dependent, see [21]. Several modifications of Spearman's rho have been proposed; however, none of them are margin-free. Furthermore, Genest and Nešlehová [8] state that estimators of the dependence parameter θ based on Kendall's tau or its modified versions are biased, and estimation techniques based on maximum likelihood are recommended instead. As such, we will not examine estimation methods based on concordance measures. Another difference from the continuous case is the use of the probability mass function (pmf) instead of the probability density function when estimating the model parameters, as will be seen in Section 4.
3.3 Some concrete copulas
In this section we will present several bivariate copulas, which will be used later when constructing and evaluating the BINAR(1) model. For all the copulas discussed, the following notation is used: ${u_{1}}:={F_{1}}({x_{1}})$, ${u_{2}}:={F_{2}}({x_{2}})$, where ${F_{1}},{F_{2}}$ are marginal cumulative distribution functions (cdfs) of discrete random variables, and θ is the dependence parameter.
Farlie–Gumbel–Morgenstern copula
The Farlie–Gumbel–Morgenstern (FGM) copula has the following form:
\[\begin{aligned}{}C({u_{1}},{u_{2}};\theta )& ={u_{1}}{u_{2}}\big(1+\theta (1-{u_{1}})(1-{u_{2}})\big).\end{aligned}\]
The dependence parameter θ can take values from the interval $[-1,1]$. If $\theta =0$, then the FGM copula collapses to independence. Note that the FGM copula can only model weak dependence between two marginals (see [15]). The copula obtained when $\theta =0$ is called the product (or independence) copula:
\[\begin{aligned}{}C({u_{1}},{u_{2}})& ={u_{1}}{u_{2}}.\end{aligned}\]
Since the product copula corresponds to independence, it is important as a benchmark.
Frank copula
The Frank copula has the following form:
\[\begin{aligned}{}C({u_{1}},{u_{2}};\theta )& =-\frac{1}{\theta }\log \bigg(1+\frac{(\exp (-\theta {u_{1}})-1)(\exp (-\theta {u_{2}})-1)}{\exp (-\theta )-1}\bigg).\end{aligned}\]
The dependence parameter θ can take values from $(-\infty ,\infty )\setminus \{0\}$. The Frank copula allows for both positive and negative dependence between the marginals.
Clayton copula
The Clayton copula has the following form:
\[\begin{aligned}{}C({u_{1}},{u_{2}};\theta )& =\max {\big\{{u_{1}^{-\theta }}+{u_{2}^{-\theta }}-1,0\big\}^{-\frac{1}{\theta }}},\end{aligned}\]
with the dependence parameter $\theta \in [-1,\infty )\setminus \{0\}$. The marginals become independent when $\theta \to 0$. The Clayton copula can be used when the dependence between two random variables exhibits strong left-tail dependence, i.e. when smaller values are strongly correlated and larger values are less correlated. It can also account for negative dependence when $\theta \in [-1,0)$. For more properties of this copula, see the recent paper by Manstavičius and Leipus [14].
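For concreteness, the three copula families above are easy to implement directly. The following Python sketch (purely illustrative; the computations in this paper are carried out in R) evaluates each copula and checks the boundary conditions (i) and (ii) of the definition:

```python
import math

def fgm(u, v, theta):
    # Farlie-Gumbel-Morgenstern copula, theta in [-1, 1]
    return u * v * (1 + theta * (1 - u) * (1 - v))

def frank(u, v, theta):
    # Frank copula, theta in (-inf, inf) \ {0}
    num = (math.exp(-theta * u) - 1) * (math.exp(-theta * v) - 1)
    return -math.log(1 + num / (math.exp(-theta) - 1)) / theta

def clayton(u, v, theta):
    # Clayton copula, theta in [-1, inf) \ {0}
    return max(u ** (-theta) + v ** (-theta) - 1, 0.0) ** (-1 / theta)

# boundary conditions C(u, 1) = u and C(1, v) = v hold for every family
for cop, th in [(fgm, 0.5), (frank, 2.0), (clayton, 1.5)]:
    assert abs(cop(0.3, 1.0, th) - 0.3) < 1e-9
    assert abs(cop(1.0, 0.7, th) - 0.7) < 1e-9

# theta = 0 reduces the FGM copula to the product copula u * v
assert fgm(0.4, 0.6, 0.0) == 0.4 * 0.6
```

The product copula thus serves as the independence benchmark within the FGM family.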
4 Parameter estimation of the copula-based BINAR(1) model
In this section we examine different BINAR(1) model parameter estimation methods and provide a two-step method for separate estimation of the copula dependence parameter. Estimation methods are compared via Monte Carlo simulations. Let ${\textbf{X}_{t}}={({X_{1,t}},{X_{2,t}})^{\prime }}$ be a non-negative integer-valued time series given in Def. 1, where the joint distribution of ${({R_{1,t}},{R_{2,t}})^{\prime }}$, with marginals ${F_{1}},{F_{2}}$, is linked by a copula $C(\cdot ,\cdot )$:
\[\begin{aligned}{}\mathbb{P}({R_{1,t}}\le {x_{1}},{R_{2,t}}\le {x_{2}})& =C\big({F_{1}}({x_{1}}),{F_{2}}({x_{2}})\big)\end{aligned}\]
and let $C({u_{1}},{u_{2}})=C({u_{1}},{u_{2}};\theta )$, where θ is a dependence parameter.
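Before turning to estimation, it may help to see how such a process can be simulated. The Python sketch below (illustrative only; all function names are ours) draws an innovation pair from the FGM copula by conditional inversion, maps it to Poisson marginals through the inverse cdf, and applies binomial thinning to the previous observation:

```python
import math
import random

def poisson_ppf(u, lam):
    # smallest k with Poisson cdf F(k) >= u (inverse cdf by summation)
    k = 0
    p = math.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def fgm_pair(theta, rng):
    # one (U1, U2) draw from the FGM copula via conditional inversion:
    # solve dC/du (v | u) = w for v, a quadratic in v
    u, w = rng.random(), rng.random()
    a = theta * (1 - 2 * u)
    if abs(a) < 1e-12:
        return u, w
    v = ((1 + a) - math.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a)
    return u, v

def thin(alpha, x, rng):
    # binomial thinning alpha o x: sum of x Bernoulli(alpha) trials
    return sum(rng.random() < alpha for _ in range(x))

def simulate_binar1(n, a1, a2, lam1, lam2, theta, seed=1):
    rng = random.Random(seed)
    x1 = x2 = 0
    path = []
    for _ in range(n):
        u, v = fgm_pair(theta, rng)
        x1 = thin(a1, x1, rng) + poisson_ppf(u, lam1)
        x2 = thin(a2, x2, rng) + poisson_ppf(v, lam2)
        path.append((x1, x2))
    return path
```

In line with property (a) of Theorem 2, the sample mean of the j-th component should settle near ${\lambda _{j}}/(1-{\alpha _{j}})$ in long simulations.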
4.1 Conditional least squares estimation
The conditional least squares (CLS) estimator minimizes the squared distance between ${\textbf{X}_{t}}$ and its conditional expectation. Similarly to the method in [19] for the INAR(1) model, we construct the CLS estimator for the BINAR(1) model.
Using Theorem 1 we can write the vector of conditional means as
\[ {\boldsymbol{\mu }_{t|t-1}}:=\left[\begin{array}{c}\mathbb{E}({X_{1,t}}|{X_{1,t-1}})\\ {} \mathbb{E}({X_{2,t}}|{X_{2,t-1}})\end{array}\right]=\left[\begin{array}{c}{\alpha _{1}}{X_{1,t-1}}+{\lambda _{1}}\\ {} {\alpha _{2}}{X_{2,t-1}}+{\lambda _{2}}\end{array}\right],\]
where ${\lambda _{j}}\hspace{0.1667em}:=\hspace{0.1667em}\mathbb{E}{R_{j,t}}$, $j\hspace{0.1667em}=\hspace{0.1667em}1,2$. In order to calculate the CLS estimators of $({\alpha _{1}},{\alpha _{2}},{\lambda _{1}},{\lambda _{2}})$ we define the vector of residuals as the difference between the observations and their conditional expectation:
\[\begin{aligned}{}{\textbf{X}_{t}}-{\boldsymbol{\mu }_{t|t-1}}& =\left[\begin{array}{c}{X_{1,t}}-{\alpha _{1}}{X_{1,t-1}}-{\lambda _{1}}\\ {} {X_{2,t}}-{\alpha _{2}}{X_{2,t-1}}-{\lambda _{2}}\end{array}\right].\end{aligned}\]
Then, given a sample of N observations, ${\textbf{X}_{1}},\dots ,{\textbf{X}_{N}}$, the CLS estimators of ${\alpha _{j}},{\lambda _{j}}$, $j=1,2$, are found by minimizing the sum
\[\begin{aligned}{}{Q_{j}}({\alpha _{j}},{\lambda _{j}})& :={\sum \limits_{t=2}^{N}}{({X_{j,t}}-{\alpha _{j}}{X_{j,t-1}}-{\lambda _{j}})^{2}}\hspace{2.5pt}\longrightarrow \hspace{2.5pt}\underset{{\alpha _{j}},{\lambda _{j}}}{\min },\hspace{1em}j=1,2.\end{aligned}\]
By taking the derivatives with respect to ${\alpha _{j}}$ and ${\lambda _{j}}$, $j=1,2$, and equating them to zero we get:
\[ {\hat{\alpha }_{j}^{\mathrm{CLS}}}=\frac{{\textstyle\sum _{t=2}^{N}}({X_{j,t}}-{\bar{X}_{j}})({X_{j,t-1}}-{\bar{X}_{j}})}{{\textstyle\sum _{t=2}^{N}}{({X_{j,t-1}}-{\bar{X}_{j}})^{2}}}\]
\[\begin{aligned}{}{\hat{\lambda }_{j}^{\mathrm{CLS}}}& =\frac{1}{N-1}\Bigg({\sum \limits_{t=2}^{N}}{X_{j,t}}-{\hat{\alpha }_{j}^{\mathrm{CLS}}}{\sum \limits_{t=2}^{N}}{X_{j,t-1}}\Bigg).\end{aligned}\]
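In code, the estimators (12)–(13) amount to a least-squares regression of ${X_{j,t}}$ on ${X_{j,t-1}}$. A Python sketch (illustrative; note that, following the displayed formula, both factors in the numerator are centred by the same sample mean of ${X_{j,t}}$):

```python
def cls_estimates(x):
    # CLS estimators of (alpha, lambda) for one component of the
    # BINAR(1) process, following equations (12)-(13)
    n = len(x)
    y, z = x[1:], x[:-1]                  # X_t and X_{t-1}, t = 2..N
    ybar = sum(y) / (n - 1)               # sample mean used for centring
    num = sum((yt - ybar) * (zt - ybar) for yt, zt in zip(y, z))
    den = sum((zt - ybar) ** 2 for zt in z)
    alpha = num / den
    lam = (sum(y) - alpha * sum(z)) / (n - 1)
    return alpha, lam
```

Applied to a long simulated INAR(1) path, the returned pair should be close to the true $({\alpha _{j}},{\lambda _{j}})$.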
The asymptotic properties of the CLS estimators for the INAR(1) model are provided in [13, 19, 2] and can be applied to the BINAR(1) parameter estimates specified via equations (12) and (13). Since the j-th component of the BINAR(1) process is itself an INAR(1) process, we can formulate the following theorem for the marginal parameter vector distributions (see [2]):
Theorem 4.
Let ${\mathbf{X}_{t}}={({X_{1,t}},{X_{2,t}})^{\prime }}$ be defined in Def. 1 and let the parameter vector of (2) be ${({\alpha _{j}},{\lambda _{j}})^{\prime }}$. Assume that ${\widehat{\alpha }_{j}^{\mathrm{CLS}}}$ and ${\widehat{\lambda }_{j}^{\mathrm{CLS}}}$ are the CLS estimators of ${\alpha _{j}}$ and ${\lambda _{j}}$, $j=1,2$. Then:
\[ \sqrt{N}\left(\begin{array}{c}{\widehat{\alpha }_{j}^{\mathrm{CLS}}}-{\alpha _{j}}\\ {} {\widehat{\lambda }_{j}^{\mathrm{CLS}}}-{\lambda _{j}}\end{array}\right)\stackrel{d}{\longrightarrow }\mathcal{N}({\mathbf{0}_{2}},{\mathbf{B}_{j}}),\]
where
\[\begin{aligned}{}{\mathbf{B}_{j}}& ={\left[\begin{array}{c@{\hskip10.0pt}c}\mathbb{E}{X_{j,t}^{2}}& \mathbb{E}{X_{j,t}}\\ {} \mathbb{E}{X_{j,t}}& 1\end{array}\right]^{-1}}{\mathbf{A}_{j}}{\left[\begin{array}{c@{\hskip10.0pt}c}\mathbb{E}{X_{j,t}^{2}}& \mathbb{E}{X_{j,t}}\\ {} \mathbb{E}{X_{j,t}}& 1\end{array}\right]^{-1}},\\ {} {\mathbf{A}_{j}}& ={\alpha _{j}}(1-{\alpha _{j}})\left[\begin{array}{c@{\hskip10.0pt}c}\mathbb{E}{X_{j,t}^{3}}& \mathbb{E}{X_{j,t}^{2}}\\ {} \mathbb{E}{X_{j,t}^{2}}& \mathbb{E}{X_{j,t}}\end{array}\right]+{\sigma _{j}^{2}}\left[\begin{array}{c@{\hskip10.0pt}c}\mathbb{E}{X_{j,t}^{2}}& \mathbb{E}{X_{j,t}}\\ {} \mathbb{E}{X_{j,t}}& 1\end{array}\right],\hspace{1em}j=1,2.\end{aligned}\]
Here, according to BINAR(1) properties in Theorem 2,
\[\begin{aligned}{}\mathbb{E}{X_{j,t}}=& \frac{{\lambda _{j}}}{1-{\alpha _{j}}},\hspace{2.5pt}\hspace{2.5pt}\mathbb{E}{X_{j,t}^{2}}=\frac{{\sigma _{j}^{2}}+{\alpha _{j}}{\lambda _{j}}}{1-{\alpha _{j}^{2}}}+\frac{{\lambda _{j}^{2}}}{{(1-{\alpha _{j}})^{2}}},\\ {} \mathbb{E}{X_{j,t}^{3}}=& \frac{\mathbb{E}{R_{j,t}^{3}}-3{\sigma _{j}^{2}}(1+{\lambda _{j}})-{\lambda _{j}^{3}}+2{\lambda _{j}}}{1-{\alpha _{j}^{3}}}+3\frac{{\sigma _{j}^{2}}+{\alpha _{j}}{\lambda _{j}}}{1-{\alpha _{j}^{2}}}-2\frac{{\lambda _{j}}}{1-{\alpha _{j}}}\\ {} & +3\frac{{\lambda _{j}}({\sigma _{j}^{2}}+{\alpha _{j}}{\lambda _{j}})}{(1-{\alpha _{j}})(1-{\alpha _{j}^{2}})}+\frac{{\lambda _{j}^{3}}}{{(1-{\alpha _{j}})^{3}}}.\end{aligned}\]
For the Poisson marginal distribution case the asymptotic variance matrix can be expressed as (see [7])
\[ {\mathbf{B}_{j}}=\left[\begin{array}{c@{\hskip10.0pt}c}\frac{{\alpha _{j}}{(1-{\alpha _{j}})^{2}}}{{\lambda _{j}}}+1-{\alpha _{j}^{2}}& -(1+{\alpha _{j}}){\lambda _{j}}\\ {} -(1+{\alpha _{j}}){\lambda _{j}}& {\lambda _{j}}+\frac{1+{\alpha _{j}}}{1-{\alpha _{j}}}{\lambda _{j}^{2}}\end{array}\right],\hspace{1em}j=1,2.\]
Furthermore, for a more general case, [12] proved that the CLS estimators of a multivariate generalized integer-valued autoregressive process (GINAR) are asymptotically normally distributed.
Note that
\[\begin{aligned}{}\mathbb{E}({X_{1,t}}-{\alpha _{1}}{X_{1,t-1}}-{\lambda _{1}})({X_{2,t}}-{\alpha _{2}}{X_{2,t-1}}-{\lambda _{2}})& =\mathbb{C}\mathrm{ov}({R_{1,t}},{R_{2,t}}),\end{aligned}\]
which follows from
\[\begin{aligned}{}& \mathbb{E}({X_{1,t}}-{\alpha _{1}}{X_{1,t-1}}-{\lambda _{1}})({X_{2,t}}-{\alpha _{2}}{X_{2,t-1}}-{\lambda _{2}})\\ {} & \hspace{1em}=\mathbb{E}({\alpha _{1}}\circ {X_{1,t-1}}-{\alpha _{1}}{X_{1,t-1}})({\alpha _{2}}\circ {X_{2,t-1}}-{\alpha _{2}}{X_{2,t-1}})\\ {} & \hspace{2em}+\mathbb{E}({\alpha _{1}}\circ {X_{1,t-1}}-{\alpha _{1}}{X_{1,t-1}})({R_{2,t}}-{\lambda _{2}})\\ {} & \hspace{2em}+\mathbb{E}({\alpha _{2}}\circ {X_{2,t-1}}-{\alpha _{2}}{X_{2,t-1}})({R_{1,t}}-{\lambda _{1}})\\ {} & \hspace{2em}+\mathbb{E}({R_{1,t}}-{\lambda _{1}})({R_{2,t}}-{\lambda _{2}})\end{aligned}\]
since the first three summands are zeros.
Example 4.1.
Assume that the joint pmf of $({R_{1,t}},{R_{2,t}})$ is given by the bivariate Poisson distribution:
\[\begin{aligned}{}\mathbb{P}({R_{1,t}}=k,{R_{2,t}}=l)& ={\sum \limits_{i=0}^{\min \{k,l\}}}\frac{{({\lambda _{1}}-\lambda )^{k-i}}{({\lambda _{2}}-\lambda )^{l-i}}{\lambda ^{i}}}{(k-i)!(l-i)!i!}\hspace{0.1667em}{\mathrm{e}^{-({\lambda _{1}}+{\lambda _{2}}-\lambda )}},\end{aligned}\]
where $k,l=0,1,...$, ${\lambda _{j}}\mathrm{>}0$, $j=1,2$, $0\le \lambda \mathrm{<}\min \{{\lambda _{1}},{\lambda _{2}}\}$. Then, for each $j=1,2$, the marginal distribution of ${R_{j,t}}$ is Poisson with parameter ${\lambda _{j}}$ and $\mathbb{C}\mathrm{ov}({R_{1,t}},{R_{2,t}})=\lambda $. If $\lambda =0$ then the two variables are independent.
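The bivariate Poisson pmf above can be evaluated directly. The following Python sketch (illustrative only) also verifies numerically that the marginals are Poisson and that the covariance equals λ:

```python
import math

def bivariate_poisson_pmf(k, l, lam1, lam2, lam):
    # joint pmf of the bivariate Poisson distribution with marginal
    # means lam1, lam2 and covariance lam, 0 <= lam < min(lam1, lam2)
    s = 0.0
    for i in range(min(k, l) + 1):
        s += ((lam1 - lam) ** (k - i) * (lam2 - lam) ** (l - i) * lam ** i
              / (math.factorial(k - i) * math.factorial(l - i) * math.factorial(i)))
    return s * math.exp(-(lam1 + lam2 - lam))

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# marginal check: summing out l recovers the Poisson(lam1) pmf
marginal = sum(bivariate_poisson_pmf(2, l, 1.2, 0.8, 0.5) for l in range(40))
assert abs(marginal - poisson_pmf(2, 1.2)) < 1e-9

# covariance check: E[KL] - lam1 * lam2 = lam (grid truncated at 30)
ekl = sum(k * l * bivariate_poisson_pmf(k, l, 1.2, 0.8, 0.5)
          for k in range(30) for l in range(30))
assert abs(ekl - 1.2 * 0.8 - 0.5) < 1e-6
```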
Assume that the joint pmf of $({R_{1,t}},{R_{2,t}})$ is the bivariate negative binomial distribution given by
\[\begin{aligned}{}\mathbb{P}({R_{1,t}}=k,{R_{2,t}}=l)=& \frac{\varGamma (\beta +k+l)}{\varGamma (\beta )k!l!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+{\lambda _{2}}+\beta }\bigg)^{k}}{\bigg(\frac{{\lambda _{2}}}{{\lambda _{1}}+{\lambda _{2}}+\beta }\bigg)^{l}}\\ {} & \times {\bigg(\frac{\beta }{{\lambda _{1}}+{\lambda _{2}}+\beta }\bigg)^{\beta }},\end{aligned}\]
where $k,l=0,1,...$, ${\lambda _{j}}\mathrm{>}0$, $j=1,2$, $\beta \mathrm{>}0$. Then, for each $j=1,2$, the marginal distribution of ${R_{j,t}}$ is negative binomial with parameters β and ${p_{j}}=\beta /({\lambda _{j}}+\beta )$ and $\mathbb{E}{R_{j,t}}={\lambda _{j}}$, $\mathbb{V}\mathrm{ar}({R_{j,t}})={\lambda _{j}}(1+{\beta ^{-1}}{\lambda _{j}})$, $\mathbb{C}\mathrm{ov}({R_{1,t}},{R_{2,t}})={\beta ^{-1}}{\lambda _{1}}{\lambda _{2}}$. Thus, the bivariate negative binomial distribution is more flexible than the bivariate Poisson due to the overdispersion parameter β.
Assume now that the Poisson innovations ${R_{1,t}}$ and ${R_{2,t}}$ with parameters ${\lambda _{1}}$ and ${\lambda _{2}}$, respectively, are linked by a copula with the dependence parameter θ. Taking into account equality (14), we can estimate θ by minimizing the sum of squared differences
\[\begin{aligned}{}S& ={\sum \limits_{t=2}^{N}}{\big({R_{1,t}^{\mathrm{CLS}}}{R_{2,t}^{\mathrm{CLS}}}-\gamma \big({\hat{\lambda }_{1}^{\mathrm{CLS}}},{\hat{\lambda }_{2}^{\mathrm{CLS}}};\theta \big)\big)^{2}},\end{aligned}\]
\[\begin{aligned}{}{R_{j,t}^{\mathrm{CLS}}}& :={X_{j,t}}-{\hat{\alpha }_{j}^{\mathrm{CLS}}}{X_{j,t-1}}-{\hat{\lambda }_{j}^{\mathrm{CLS}}},\hspace{1em}j=1,2,\\ {} \gamma ({\lambda _{1}},{\lambda _{2}};\theta )& :=\mathbb{C}\mathrm{ov}({R_{1,t}},{R_{2,t}})\hspace{2.5pt}={\sum \limits_{k,l=1}^{\infty }}kl\hspace{0.1667em}c\big({F_{1}}(k;{\lambda _{1}}),{F_{2}}(l;{\lambda _{2}});\theta \big)-{\lambda _{1}}{\lambda _{2}}.\end{aligned}\]
Here, $c({F_{1}}(k;{\lambda _{1}}),{F_{2}}(l;{\lambda _{2}});\theta )$ is the joint pmf:
\[\begin{aligned}{}c\big({F_{1}}(k;{\lambda _{1}}),{F_{2}}(l;{\lambda _{2}});\theta \big)=& \mathbb{P}({R_{1,t}}=k,{R_{2,t}}=l)\\ {} =& C\big({F_{1}}(k;{\lambda _{1}}),{F_{2}}(l;{\lambda _{2}});\theta \big)\\ {} & -C\big({F_{1}}(k-1;{\lambda _{1}}),{F_{2}}(l;{\lambda _{2}});\theta \big)\\ {} & -\hspace{2.5pt}C\big({F_{1}}(k;{\lambda _{1}}),{F_{2}}(l-1;{\lambda _{2}});\theta \big)\\ {} & +\hspace{2.5pt}C\big({F_{1}}(k-1;{\lambda _{1}}),{F_{2}}(l-1;{\lambda _{2}});\theta \big),\hspace{1em}k\ge 1,l\ge 1.\end{aligned}\]
Our estimation method is based on the approximation of covariance $\gamma ({\hat{\lambda }_{1}^{\mathrm{CLS}}},{\hat{\lambda }_{2}^{\mathrm{CLS}}};\theta )$ by
\[\begin{aligned}{}{\gamma ^{({M_{1}},{M_{2}})}}\big({\hat{\lambda }_{1}^{\mathrm{CLS}}},{\hat{\lambda }_{2}^{\mathrm{CLS}}};\theta \big)& ={\sum \limits_{k=1}^{{M_{1}}}}{\sum \limits_{l=1}^{{M_{2}}}}kl\hspace{0.1667em}c\big({F_{1}}\big(k;{\hat{\lambda }_{1}^{\mathrm{CLS}}}\big),{F_{2}}\big(l;{\hat{\lambda }_{2}^{\mathrm{CLS}}}\big);\theta \big)-{\hat{\lambda }_{1}^{\mathrm{CLS}}}{\hat{\lambda }_{2}^{\mathrm{CLS}}}.\end{aligned}\]
For example, if the marginals are Poisson with parameters ${\lambda _{1}}={\lambda _{2}}=1$ and their joint distribution is given by the FGM copula in (8), then the covariance ${\gamma ^{({M_{1}},{M_{2}})}}(1,1;\theta )$ stops changing significantly after setting ${M_{1}}={M_{2}}=M=8$, regardless of the selected dependence parameter θ. We used this approximation methodology when carrying out a Monte Carlo simulation in Section 4.4.
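The rectangle-difference pmf and the truncated covariance ${\gamma ^{({M_{1}},{M_{2}})}}$ can be coded in a few lines. A Python sketch (illustrative; the FGM copula and Poisson marginals are used as a concrete example):

```python
import math

def poisson_cdf(k, lam):
    # Poisson cdf F(k), with F(-1) = 0 by convention
    if k < 0:
        return 0.0
    p = math.exp(-lam)
    cdf = p
    for i in range(1, k + 1):
        p *= lam / i
        cdf += p
    return cdf

def fgm(u, v, theta):
    return u * v * (1 + theta * (1 - u) * (1 - v))

def joint_pmf(k, l, lam1, lam2, theta, copula=fgm):
    # rectangle-difference pmf c(F1(k), F2(l); theta) from the text
    f = poisson_cdf
    return (copula(f(k, lam1), f(l, lam2), theta)
            - copula(f(k - 1, lam1), f(l, lam2), theta)
            - copula(f(k, lam1), f(l - 1, lam2), theta)
            + copula(f(k - 1, lam1), f(l - 1, lam2), theta))

def gamma_trunc(lam1, lam2, theta, m1=8, m2=8, copula=fgm):
    # truncated innovation covariance gamma^(M1, M2)(lam1, lam2; theta)
    total = sum(k * l * joint_pmf(k, l, lam1, lam2, theta, copula)
                for k in range(1, m1 + 1) for l in range(1, m2 + 1))
    return total - lam1 * lam2
```

Consistent with the observation above, for ${\lambda _{1}}={\lambda _{2}}=1$ the value of `gamma_trunc` is essentially unchanged between $M=8$ and much larger truncation points, and it vanishes at $\theta =0$ up to truncation error.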
For the FGM copula, if we take the derivative of the sum
\[\begin{aligned}{}{S^{({M_{1}},{M_{2}})}}& ={\sum \limits_{t=2}^{N}}{\big({R_{1,t}^{\mathrm{CLS}}}{R_{2,t}^{\mathrm{CLS}}}-{\gamma ^{({M_{1}},{M_{2}})}}\big({\hat{\lambda }_{1}^{\mathrm{CLS}}},{\hat{\lambda }_{2}^{\mathrm{CLS}}};\theta \big)\big)^{2}},\end{aligned}\]
equate it to zero and use equation (17), we get
\[ {\hat{\theta }^{\mathrm{FGM}}}\hspace{0.1667em}=\hspace{0.1667em}\frac{{\textstyle\sum _{t=2}^{N}}({X_{1,t}}-{\hat{\alpha }_{1}^{\mathrm{CLS}}}{X_{1,t-1}}-{\hat{\lambda }_{1}^{\mathrm{CLS}}})({X_{2,t}}-{\hat{\alpha }_{2}^{\mathrm{CLS}}}{X_{2,t-1}}-{\hat{\lambda }_{2}^{\mathrm{CLS}}})}{(N\hspace{-0.1667em}-\hspace{-0.1667em}1){\textstyle\sum _{k=1}^{{M_{1}}}}k({F_{1,k}}{\overline{F}_{1,k}}\hspace{0.1667em}-\hspace{0.1667em}{F_{1,k-1}}{\overline{F}_{1,k-1}}){\textstyle\sum _{l=1}^{{M_{2}}}}l({F_{2,l}}{\overline{F}_{2,l}}-{F_{2,l-1}}{\overline{F}_{2,l-1}})},\]
where ${F_{j,k}}:={F_{j}}(k;{\hat{\lambda }_{j}^{\mathrm{CLS}}})$, ${\overline{F}_{j,k}}:=1-{F_{j,k}}$, $j=1,2$. The derivation of equation (19) is straightforward and thus omitted.
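Estimator (19) can be written out directly. A Python sketch (illustrative; `a1, l1, a2, l2` stand for the first-step CLS estimates, and the Poisson cdf is computed by summation):

```python
import math

def poisson_cdf(k, lam):
    if k < 0:
        return 0.0
    p = math.exp(-lam)
    cdf = p
    for i in range(1, k + 1):
        p *= lam / i
        cdf += p
    return cdf

def fgm_theta_cls(x1, x2, a1, l1, a2, l2, m1=8, m2=8):
    # closed-form CLS estimator of the FGM dependence parameter, eq. (19)
    num = sum((x1[t] - a1 * x1[t - 1] - l1) * (x2[t] - a2 * x2[t - 1] - l2)
              for t in range(1, len(x1)))

    def s(lam, m):
        # sum_{k=1}^{M} k (F_k (1 - F_k) - F_{k-1} (1 - F_{k-1}))
        tot = 0.0
        for k in range(1, m + 1):
            fk, fk1 = poisson_cdf(k, lam), poisson_cdf(k - 1, lam)
            tot += k * (fk * (1 - fk) - fk1 * (1 - fk1))
        return tot

    return num / ((len(x1) - 1) * s(l1, m1) * s(l2, m2))
```

Note that this raw estimate is not constrained to the FGM range $[-1,1]$: strongly dependent residuals can push it outside, in which case likelihood-based estimation of θ is preferable.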
Depending on the selected copula family, calculating (16) to obtain an analytical expression for the estimator $\hat{\theta }$ may be difficult. However, we can use the function optim in the R statistical software to minimize (15) numerically. In other cases, where the marginal distribution has parameters other than the expected value ${\lambda _{j}}$, equation (15) would need to be minimized over those additional parameters as well. For example, in the case of negative binomial marginals with corresponding mean ${\lambda _{j}}$ and variance ${\sigma _{j}^{2}}$, i.e. when
\[\begin{aligned}{}\mathbb{P}({R_{j,t}}=k)& =\frac{\varGamma (k+\frac{{\lambda _{j}^{2}}}{{\sigma _{j}^{2}}-{\lambda _{j}}})}{\varGamma (\frac{{\lambda _{j}^{2}}}{{\sigma _{j}^{2}}-{\lambda _{j}}})k!}{\bigg(\frac{{\lambda _{j}}}{{\sigma _{j}^{2}}}\bigg)^{\frac{{\lambda _{j}^{2}}}{{\sigma _{j}^{2}}-{\lambda _{j}}}}}{\bigg(\frac{{\sigma _{j}^{2}}-{\lambda _{j}}}{{\sigma _{j}^{2}}}\bigg)^{k}},\hspace{1em}k=0,1,\dots ,\hspace{2.5pt}j=1,2,\end{aligned}\]
the additional parameters are ${\sigma _{1}^{2}},{\sigma _{2}^{2}}$, and the minimization problem becomes
\[\begin{aligned}{}{S^{({M_{1}},{M_{2}})}}& \longrightarrow \underset{{\sigma _{1}^{2}},{\sigma _{2}^{2}},\theta }{\min }.\end{aligned}\]
4.2 Conditional maximum likelihood estimation
BINAR(1) models can be estimated via conditional maximum likelihood (CML) (see [18] and [10]). The conditional distribution of the BINAR(1) process is:
\[\begin{aligned}{}\mathbb{P}& ({X_{1,t}}={x_{1,t}},{X_{2,t}}={x_{2,t}}|{X_{1,t-1}}={x_{1,t-1}},{X_{2,t-1}}={x_{2,t-1}})\\ {} & =\mathbb{P}({\alpha _{1}}\circ {x_{1,t-1}}+{R_{1,t}}={x_{1,t}},{\alpha _{2}}\circ {x_{2,t-1}}+{R_{2,t}}={x_{2,t}})\\ {} & ={\sum \limits_{k=0}^{{x_{1,t}}}}{\sum \limits_{l=0}^{{x_{2,t}}}}\mathbb{P}({\alpha _{1}}\circ {x_{1,t-1}}\hspace{0.1667em}=\hspace{0.1667em}k)\mathbb{P}({\alpha _{2}}\circ {x_{2,t-1}}\hspace{0.1667em}=\hspace{0.1667em}l)\mathbb{P}({R_{1,t}}\hspace{0.1667em}=\hspace{0.1667em}{x_{1,t}}-k,{R_{2,t}}\hspace{0.1667em}=\hspace{0.1667em}{x_{2,t}}-l).\end{aligned}\]
Here, ${\alpha _{j}}\circ x$ is the sum of x independent Bernoulli trials, each with success probability ${\alpha _{j}}$. Hence,
\[ \mathbb{P}({\alpha _{j}}\circ {x_{j,t-1}}=k)=\left(\genfrac{}{}{0pt}{}{{x_{j,t-1}}}{k}\right){\alpha _{j}^{k}}{(1-{\alpha _{j}})^{{x_{j,t-1}}-k}},\hspace{2.5pt}\hspace{2.5pt}k=0,\dots ,{x_{j,t-1}},\hspace{2.5pt}j=1,2.\]
In the case of copula-based BINAR(1) model with Poisson marginals,
\[\begin{aligned}{}\mathbb{P}({R_{1,t}}={x_{1,t}}-k,{R_{2,t}}={x_{2,t}}-l)& =c\big({F_{1}}({x_{1,t}}-k,{\lambda _{1}}),{F_{2}}({x_{2,t}}-l,{\lambda _{2}});\theta \big).\end{aligned}\]
Thus, we obtain
\[\begin{aligned}{}\mathbb{P}& ({X_{1,t}}={x_{1,t}},{X_{2,t}}={x_{2,t}}|{X_{1,t-1}}={x_{1,t-1}},{X_{2,t-1}}={x_{2,t-1}})\\ {} & ={\sum \limits_{k=0}^{{x_{1,t}}}}{\sum \limits_{l=0}^{{x_{2,t}}}}\left(\genfrac{}{}{0pt}{}{{x_{1,t-1}}}{k}\right){\alpha _{1}^{k}}{(1-{\alpha _{1}})^{{x_{1,t-1}}-k}}\left(\genfrac{}{}{0pt}{}{{x_{2,t-1}}}{l}\right){\alpha _{2}^{l}}{(1-{\alpha _{2}})^{{x_{2,t-1}}-l}}\\ {} & \hspace{2.5pt}\hspace{2.5pt}\times c\big({F_{1}}({x_{1,t}}-k,{\lambda _{1}}),{F_{2}}({x_{2,t}}-l,{\lambda _{2}});\theta \big)\end{aligned}\]
and the log conditional likelihood function, for estimating the marginal distribution parameters ${\lambda _{1}},{\lambda _{2}}$, the probabilities of the Bernoulli trial successes ${\alpha _{1}},{\alpha _{2}}$ and the dependence parameter θ, is
\[\begin{aligned}{}\ell ({\alpha _{1}},{\alpha _{2}},{\lambda _{1}},{\lambda _{2}},\theta )={\sum \limits_{t=2}^{N}}\log \mathbb{P}(& {X_{1,t}}={x_{1,t}},{X_{2,t}}={x_{2,t}}|{X_{1,t-1}}={x_{1,t-1}},\\ {} & {X_{2,t-1}}={x_{2,t-1}})\end{aligned}\]
for some initial values ${x_{1,1}}$ and ${x_{2,1}}$. In order to estimate the unknown parameters we maximize the log conditional likelihood:
\[ \ell ({\alpha _{1}},{\alpha _{2}},{\lambda _{1}},{\lambda _{2}},\theta )\longrightarrow \underset{{\alpha _{1}},{\alpha _{2}},{\lambda _{1}},{\lambda _{2}},\theta }{\max }.\]
Numerical maximization is straightforward with the optim function in the R statistical software.
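To make the likelihood concrete, the conditional pmf and the log-likelihood (20) can be coded as follows. This Python sketch (illustrative; the paper itself works with R's optim) uses Poisson marginals linked by the FGM copula:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_cdf(k, lam):
    if k < 0:
        return 0.0
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def fgm(u, v, theta):
    return u * v * (1 + theta * (1 - u) * (1 - v))

def innov_pmf(k, l, lam1, lam2, theta):
    # copula-based joint pmf of the innovations (rectangle differences)
    f = poisson_cdf
    return (fgm(f(k, lam1), f(l, lam2), theta)
            - fgm(f(k - 1, lam1), f(l, lam2), theta)
            - fgm(f(k, lam1), f(l - 1, lam2), theta)
            + fgm(f(k - 1, lam1), f(l - 1, lam2), theta))

def cond_pmf(x1, x2, x1p, x2p, a1, a2, lam1, lam2, theta):
    # P(X_{1,t} = x1, X_{2,t} = x2 | X_{1,t-1} = x1p, X_{2,t-1} = x2p):
    # convolution of two binomial thinnings with the innovation pmf
    return sum(binom_pmf(k, x1p, a1) * binom_pmf(l, x2p, a2)
               * innov_pmf(x1 - k, x2 - l, lam1, lam2, theta)
               for k in range(min(x1, x1p) + 1)
               for l in range(min(x2, x2p) + 1))

def loglik(series, a1, a2, lam1, lam2, theta):
    # log conditional likelihood (20), conditioning on the first observation
    return sum(math.log(cond_pmf(x1, x2, x1p, x2p, a1, a2, lam1, lam2, theta))
               for (x1p, x2p), (x1, x2) in zip(series, series[1:]))
```

In the two-step approach of Section 4.3, `loglik` would be maximized over θ alone, with the remaining arguments fixed at their first-step CLS estimates.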
As with the CLS estimator, in other cases where the marginal distribution has parameters other than ${\lambda _{j}}$, equation (20) would need to be maximized over those additional parameters. The CML estimator is asymptotically normally distributed under standard regularity conditions and its variance matrix is the inverse of the Fisher information matrix [18].
4.3 Two-step estimation based on CLS and CML
Depending on the range of attainable parameter values and the sample size, CML maximization might take some time to compute. On the other hand, since the CLS estimators of ${\alpha _{j}}$ and ${\lambda _{j}}$ are easily derived (compared to the CLS estimator of θ, which depends on the form of the copula pmf and, in general, must be found numerically), we can substitute the parameters of the marginal distributions in eq. (20) with the CLS estimates from equations (12) and (13). Then we only need to maximize ℓ with respect to the single dependence parameter θ in the Poisson marginal distribution case.
Summarizing, the two-step approach to estimating unknown parameters is to find
\[\begin{aligned}{}\big({\hat{\alpha }_{j}^{\mathrm{CLS}}},{\hat{\lambda }_{j}^{\mathrm{CLS}}}\big)& =\arg \min {Q_{j}}({\alpha _{j}},{\lambda _{j}}),\hspace{1em}j=1,2,\end{aligned}\]
and to take these values as given in the second step:
\[\begin{aligned}{}{\hat{\theta }^{\mathrm{CML}}}& =\arg \max \ell \big({\hat{\alpha }_{1}^{\mathrm{CLS}}},{\hat{\alpha }_{2}^{\mathrm{CLS}}},{\hat{\lambda }_{1}^{\mathrm{CLS}}},{\hat{\lambda }_{2}^{\mathrm{CLS}}},\theta \big).\end{aligned}\]
For other cases of marginal distribution, any additional parameters, other than ${\alpha _{j}}$ and ${\lambda _{j}}$ would be estimated in the second step.
4.4 Comparison of estimation methods via Monte Carlo simulation
We carried out 1000 Monte Carlo replications to compare the estimation methods for sample sizes of 50 and 500. The generated model was a BINAR(1) with innovations joined by either the FGM, Frank or Clayton copula, with Poisson marginal distributions as well as with marginal distributions from different families: one Poisson and the other negative binomial. Note that for the two-step method only the estimates of θ and ${\sigma _{2}^{2}}$ are included, because the estimated values of ${\alpha _{1}^{\mathrm{CLS}}},{\alpha _{2}^{\mathrm{CLS}}},{\lambda _{1}^{\mathrm{CLS}}},{\lambda _{2}^{\mathrm{CLS}}}$ are used in order to estimate the remaining parameters via CML.
Monte Carlo simulation results for a BINAR(1) model with Poisson innovations linked by the FGM, Frank or Clayton copula
Copula Sample size Parameter True value CLS (MSE, Bias) CML (MSE, Bias) Two-Step (MSE, Bias)
FGM $N=50$ ${\alpha _{1}}$ 0.6 0.01874 −0.05823 0.00887 −0.01789 – –
${\alpha _{2}}$ 0.4 0.02033 −0.05223 0.01639 −0.02751 – –
${\lambda _{1}}$ 1 0.12983 0.13325 0.06514 0.03366 – –
θ −0.5 0.29789 0.12568 0.33840 0.07568 0.3311 0.0876
$N=500$ ${\alpha _{1}}$ 0.6 0.00147 −0.00432 0.00073 −0.00122 – –
θ −0.5 0.04679 0.00668 0.04271 −0.00700 0.04265 −0.00443
Frank $N=50$ ${\alpha _{1}}$ 0.6 0.02023 −0.06039 0.00950 −0.01965 – –
θ −1 1.83454 0.12394 2.05786 0.00860 1.97515 0.04216
θ −1 0.22084 0.01746 0.20138 −0.01779 0.20070 −0.01342
Clayton $N=50$ ${\alpha _{1}}$ 0.6 0.01826 −0.05489 0.00799 −0.013295 – –
θ 1 0.71845 0.02621 0.72581 0.22628 0.62372 0.13283
$N=500$ ${\alpha _{1}}$ 0.6 0.00146 −0.00518 0.00070 0.00016 – –
${\lambda _{1}}$ 1 0.00973 0.01137 0.00513 −0.00150 – –
θ 1 0.11578 0.03556 0.05864 0.04250 0.03199 −0.01342
The results for the Poisson marginal distribution case are provided in Table 1. The results for the case when one innovation follows a Poisson distribution and the other follows a negative binomial one are provided in Table 2. The lowest MSE values of $\widehat{\theta }$ are highlighted in bold. It is worth noting that CML estimation via numerical maximization depends heavily on the initial parameter values: if they are selected too far from the actual values, the global maximum may not be found. In order to overcome this, we have set the starting values equal to the CLS parameter estimates.
As can be seen in Table 1, the estimated values of ${\alpha _{j}}$ and ${\lambda _{j}}$, $j=1,2$, have a smaller bias and MSE when the parameters are estimated via CML. On the other hand, estimation of θ via CLS exhibits a smaller MSE in the Frank copula case for smaller samples. For larger samples, the estimates of θ via the Two-step estimation method are very close to the CML estimates in terms of MSE and bias, and are closer to the true parameter values than the CLS estimates. Furthermore, since in the Two-step estimation numerical maximization is carried out over a single parameter θ only, the initial parameter values have less effect on the numerical maximization.
Monte Carlo simulation results for a BINAR(1) model with one innovation following a Poisson distribution and the other – a negative binomial one, where both innovations are linked by the FGM, Frank or Clayton copula
θ −0.5 0.31467 0.14070 0.29415 0.06674 0.29949 0.09693
${\sigma _{2}^{2}}$ 9 27.87327 1.15731 15.12863 −0.14888 21.68229 0.72326
${\alpha _{2}}$ 0.4 0.00194 −0.00373 0.00053 0.00016 – –
θ −0.5 0.06670 −0.02014 0.04298 −0.00268 0.04313 0.00562
${\sigma _{2}^{2}}$ 9 6.24237 −1.99232 1.81265 0.00611 1.85222 −0.03506
θ −1 1.81788 0.12516 1.75638 −0.01239 1.68019 0.06211
θ −1 0.31942 −0.05593 0.18960 −0.01481 0.1902 −0.0079
Clayton $N=50$ ${\alpha _{1}}$ 0.6 0.01987 −0.06159 0.00903 −0.01671 – –
Table 2 presents the estimation results when one innovation has a Poisson distribution and the other has a negative binomial one. With the inclusion of an additional variance parameter, the CLS estimation method exhibits larger MSE and bias than the CML and Two-step estimation methods, for both the dependence and variance parameter estimates. Furthermore, the MSE of ${\hat{\sigma }_{2}^{2}}$ is smallest when the CML estimation method is used. On the other hand, both the Two-step and CML estimation methods produce similar estimates of θ in terms of MSE, regardless of the sample size and copula function.
We can conclude that it is possible to accurately estimate the dependence parameter via CML using the CLS estimates of ${\hat{\alpha }_{j}}$ and ${\hat{\lambda }_{j}}$. The resulting $\hat{\theta }$ will be closer to the actual value of θ than ${\hat{\theta }^{\mathrm{CLS}}}$ and will not differ much from ${\hat{\theta }^{\mathrm{CML}}}$. Additional details on the bias of the estimates can be found in Appendix A.
5 Application to loan default data
In this section we estimate a BINAR(1) model with the joint innovation distribution modelled by a copula cdf for empirical data. The data set consists of loan data which includes loans that have defaulted and loans that were repaid without missing any payments (non-defaulted loans). We will analyse and model the dependence between defaulted and non-defaulted loans as well as the presence of autocorrelation.
5.1 Loan default data
The data sample used is from Bondora, an Estonian peer-to-peer lending company. In November 2014 Bondora introduced a loan rating system which assigns loans to different groups based on their risk level. There are 8 groups, ranging from the lowest risk group, 'AA', to the highest risk group, 'HR'. However, the rating system could not be applied to most older loans due to a lack of data needed for Bondora's rating model. Although Bondora issues loans in four countries (Estonia, Finland, Slovakia and Spain), we will only focus on the loans issued in Spain. Since a new rating model implies new rules for accepting or rejecting loans, we have selected the data sample from 21 October 2013 (the date from which all loans had a rating assigned to them) to 1 January 2016. The time series are displayed in Figure 1. We analyse 115 weekly records of two time series:
• 'CompletedLoans' – the number of loans issued per week which were repaid without ever defaulting (a loan that is 60 or more days overdue is considered defaulted);
• 'DefaultedLoans' – the number of defaulted loans issued per week.
The loan statistics are provided in Table 3:
Summary statistics of the weekly data of defaulted and non-defaulted loans issued in Spain
min max mean variance
DefaultedLoans 1.00 60.00 22.60 158.66
CompletedLoans 0.00 15.00 5.30 11.67
Fig. 1. Bondora loan data: non-defaulted and defaulted loans by their issue date
Fig. 2. AC function and PAC function plots of Bondora loan data
The mean, minimum, maximum and variance are higher for defaulted loans than for non-defaulted loans. As can be seen from Figure 1, the numbers of defaulted and non-defaulted loans might be correlated, since they exhibit periods of increase and decrease at the same times.
The correlation between the two time series is 0.6684. We also note that the mean and variance are lower at the beginning of the time series. This feature could be due to various reasons: the effect of the new loan rating system, which was officially implemented in December 2014, the effect of advertising, or the fact that the number of loans issued to people living outside of Estonia increased. The analysis of the significance of these effects is left for future research.
The sample autocorrelation (AC) function and the partial autocorrelation (PAC) function are displayed in Figure 2. We can see that the AC function is decaying over time and the PAC function has a significant first lag which indicates that the non-negative integer-valued time series could be autocorrelated.
In order to analyse whether the number of defaulted loans depends on the number of non-defaulted loans in the same week, we will consider a BINAR(1) model with different copulas for the innovations. For the marginal distributions of the innovations we will consider the Poisson distribution as well as the negative binomial one. Our focus is the estimation of the dependence parameter, and, based on the Monte Carlo simulation results presented in Section 4, we will use the Two-step estimation method.
5.2 Estimated models
We estimated a number of BINAR(1) models with different distributions of innovations which include combinations of:
• different copula functions: FGM, Frank or Clayton;
• different combinations of the Poisson and negative binomial distributions: both marginals are Poisson, both marginals are negative binomial, or a mix of both.
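As an illustrative sketch of the model being fitted (not the authors' code), the following simulates a BINAR(1) process whose innovations have Poisson marginals joined by an FGM copula: the copula is sampled by inverting its conditional CDF in closed form, and the autoregression uses binomial thinning. The parameter values are chosen close to the first-step estimates reported in Table 4.

```python
import numpy as np
from scipy import stats

def sample_fgm(theta, size, rng):
    """Draw (U, V) from the FGM copula C(u,v) = u*v*(1 + theta*(1-u)*(1-v))
    by inverting the conditional CDF C(v|u) = v*(1 + theta*(1-2u)*(1-v))."""
    u = rng.uniform(size=size)
    w = rng.uniform(size=size)
    a = theta * (1.0 - 2.0 * u)
    a_safe = np.where(np.abs(a) < 1e-12, 1.0, a)
    # Root of a*v^2 - (1+a)*v + w = 0 lying in [0, 1].
    v_root = ((1.0 + a_safe)
              - np.sqrt((1.0 + a_safe) ** 2 - 4.0 * a_safe * w)) / (2.0 * a_safe)
    v = np.where(np.abs(a) < 1e-12, w, v_root)
    return u, v

def simulate_binar1(n, alpha, lam, theta, seed=0):
    """Simulate X_t = alpha o X_{t-1} + R_t componentwise, where 'o' is
    binomial thinning and (R_1, R_2) have Poisson(lam[j]) marginals linked
    by an FGM copula with dependence parameter theta."""
    rng = np.random.default_rng(seed)
    u, v = sample_fgm(theta, n, rng)
    r = np.column_stack([stats.poisson.ppf(u, lam[0]),
                         stats.poisson.ppf(v, lam[1])]).astype(int)
    x = np.zeros((n, 2), dtype=int)
    x[0] = r[0]
    for t in range(1, n):
        x[t, 0] = rng.binomial(x[t - 1, 0], alpha[0]) + r[t, 0]
        x[t, 1] = rng.binomial(x[t - 1, 1], alpha[1]) + r[t, 1]
    return x

# Values close to the first-step estimates of Table 4.
x = simulate_binar1(2000, alpha=(0.53, 0.76), lam=(2.52, 5.59), theta=0.89)
```

The stationary mean of each component is lambda_j / (1 - alpha_j), so the second component should fluctuate around 5.59/0.24, roughly 23, close to the observed mean of defaulted loans.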
In the first step of the Two-step method, we estimated ${\hat{\alpha }_{1}}$ and ${\hat{\lambda }_{1}}$ for non-defaulted loans, and ${\hat{\alpha }_{2}}$ and ${\hat{\lambda }_{2}}$ for defaulted loans via CLS. The results are provided in Table 4, with standard errors for the Poisson case in parentheses:
Table 4. Parameter estimates for the BINAR(1) model via the Two-step estimation method: CLS parameter estimates from the first step, with standard errors for the Poisson marginal distribution case in parentheses
${\hat{\alpha }_{1}}$ ${\hat{\alpha }_{2}}$ ${\hat{\lambda }_{1}}$ ${\hat{\lambda }_{2}}$
0.53134 0.75581 2.52174 5.58940
(0.08151) (0.06163) (0.45012) (1.41490)
Because the CLS estimation of the parameters ${\alpha _{j}}$ and ${\lambda _{j}}$, $j=1,2$, does not depend on the selected copula and marginal distribution family, these estimates remain the same for each of the different distribution combinations for the innovations. We can see that defaulted loans exhibit a higher degree of autocorrelation than non-defaulted loans do, due to the larger value of ${\hat{\alpha }_{2}}$. The innovation mean parameter for defaulted loans is also higher, which indicates that random shocks have a larger effect on the number of defaulted loans.
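For reference, the first-step CLS estimates have a closed form: minimising the sum of $(X_t - \alpha X_{t-1} - \lambda)^2$ over $t$ is ordinary least squares of $X_t$ on $X_{t-1}$. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def cls_inar1(x):
    """Conditional least squares for one INAR(1) component: regress X_t on
    X_{t-1}; the slope estimates alpha and the intercept estimates lambda."""
    y, z = np.asarray(x[1:], float), np.asarray(x[:-1], float)
    alpha = np.cov(z, y, bias=True)[0, 1] / z.var()
    lam = y.mean() - alpha * z.mean()
    return alpha, lam
```

Applied componentwise to the two count series, this yields entries like those of Table 4.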
The parameter estimation results from the second step are provided in Table 5 with standard errors in parentheses. ${\hat{\sigma }_{1}^{2}}$ is the innovation variance estimate for non-defaulted loans and ${\hat{\sigma }_{2}^{2}}$ is the innovation variance estimate for defaulted loans. According to [16], the observed Fisher information is the negative Hessian matrix evaluated at the maximum likelihood estimator (MLE). The asymptotic standard errors reported in Table 5 are derived under the assumption that ${\alpha _{j}}$ and ${\lambda _{j}}$, $j=1,2$, are known, ignoring the fact that in the second step the true values are replaced by their CLS estimates.
From the results in Table 5 we see that, according to the Akaike information criterion (AIC) and log-likelihood values, in most cases the FGM copula most accurately describes the relationship between the innovations of defaulted and non-defaulted loans, with the Frank copula being very close in terms of the AIC value. The Clayton copula is the least accurate in describing the innovation joint distribution, when compared to the FGM and Frank copula cases, which indicates that defaulted and non-defaulted loans do not exhibit strong left tail dependence.
Since the summary statistics of the data sample showed that the variance of the data is larger than the mean, a negative binomial marginal distribution may provide a better fit. Additionally, because copulas can link different marginal distributions, it is interesting to see if copulas with different discrete marginal distributions would also improve the model fit. BINAR(1) models where non-defaulted loan innovations are modelled with negative binomial distributions and defaulted loan innovations are modelled with Poisson marginal distributions, and vice versa, were estimated. In general, changing one of the marginal distributions to a negative binomial provides a better fit in terms of AIC than the Poisson marginal distribution case. However, the smallest AIC value is achieved when both marginal distributions are modelled with negative binomial distributions, linked via the FGM copula. Furthermore, the estimated innovation variance, ${\hat{\sigma }_{2}^{2}}$, is much larger for defaulted loans, and this is similar to what we observed from the defaulted loan data summary statistics.
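The second-step likelihood requires the joint pmf of the innovations, which for any copula and discrete marginals follows from the rectangle (inclusion-exclusion) formula. The sketch below covers the FGM/Poisson case and is illustrative only — the paper's full CML also involves the convolution with the thinning operator, which is omitted here:

```python
import numpy as np
from scipy import stats

def fgm(u, v, theta):
    """FGM copula C(u, v) = u*v*(1 + theta*(1-u)*(1-v))."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def joint_pmf(k, l, theta, lam1, lam2):
    """P(R1 = k, R2 = l) for Poisson marginals joined by an FGM copula:
    C(F1(k),F2(l)) - C(F1(k-1),F2(l)) - C(F1(k),F2(l-1)) + C(F1(k-1),F2(l-1)).
    scipy's Poisson cdf returns 0 at -1, so the boundary terms work out."""
    F1 = lambda t: stats.poisson.cdf(t, lam1)
    F2 = lambda t: stats.poisson.cdf(t, lam2)
    return (fgm(F1(k), F2(l), theta) - fgm(F1(k - 1), F2(l), theta)
            - fgm(F1(k), F2(l - 1), theta) + fgm(F1(k - 1), F2(l - 1), theta))

def aic(loglik, n_params):
    """Akaike information criterion, counting second-step parameters only:
    AIC = 2k - 2*log-likelihood."""
    return 2.0 * n_params - 2.0 * loglik
```

This AIC convention reproduces Table 5: for the both-Poisson FGM model, the only second-step parameter is theta, and 2*1 - 2*(-880.74048) = 1763.48096; for the both-negative-binomial FGM model there are three second-step parameters, and 2*3 - 2*(-730.07709) = 1466.15418.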
Table 5. Parameter estimates for the BINAR(1) model via the Two-step estimation method: CML parameter estimates from the second step for different innovation marginal and joint distribution combinations, with standard errors in parentheses, derived under the assumption that the first-step values ${\hat{\lambda }_{j}}$ and ${\hat{\alpha }_{j}}$, $j=1,2$, are true

Marginals                      Copula   $\hat{\theta }$     ${\hat{\sigma }_{1}^{2}}$  ${\hat{\sigma }_{2}^{2}}$  AIC         Log-likelihood
Both Poisson                   FGM      0.89270 (0.18671)   –                          –                          1763.48096  −880.74048
                               Frank    2.38484             –                          –                          1760.15692  −879.07846
                               Clayton  0.39357             –                          –                          1761.12369  −879.56185
Negative binomial and Poisson  FGM      1.00000 (0.22914)   6.46907 (1.01114)          –                          1731.57339  −863.78670
                               Frank    2.14329             6.10242                    –                          1731.95241  −863.97620
                               Clayton  0.34540             5.73731                    –                          1736.47641  −866.23821
Poisson and negative binomial  FGM      1.00000             –                          44.83107                   1498.29563  −747.14782
                               Frank    2.01486             –                          44.10555                   1498.81039  −747.40519
                               Clayton  0.38310             –                          43.42739                   1503.55388  −749.77694
Both negative binomial         FGM      1.00000 (0.31675)   6.55810 (1.24032)          45.36834 (7.55217)         1466.15418  −730.07709
                               Frank    2.21356             6.58754                    45.42601                   1466.97947  −730.48973
                               Clayton  0.55939             6.64478                    45.78307                   1470.73515  −732.36758
Overall, both the Frank and FGM copulas provide a similar fit in terms of log-likelihood, regardless of the selected marginal distributions. We note, however, that in some FGM copula cases the estimated value of the parameter θ is equal to the maximal attainable value 1. As described in Section 3, the FGM copula is used to model weak dependence. Given a larger sample size, the Frank copula might be more appropriate, because it can capture stronger dependence than the FGM copula can. In the case where both marginals are negative binomial, the Frank copula estimate $\hat{\theta }\approx 2.21356$ indicates a positive dependence between defaulted and non-defaulted loans, just as in the FGM copula case.
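To quantify the weak-dependence remark numerically: Spearman's rho of a copula is $\rho = 12{\textstyle\int _{0}^{1}}{\textstyle\int _{0}^{1}}C(u,v)\,du\,dv-3$, which for the FGM family equals θ/3, so even at θ = 1 the rank correlation cannot exceed 1/3, while the Frank family has no such ceiling. A small numerical check (illustrative, not from the paper):

```python
import numpy as np

def spearman_rho(C, n=1000):
    """Approximate rho = 12 * (integral of C over the unit square) - 3
    with a midpoint rule on an n x n grid."""
    g = (np.arange(n) + 0.5) / n
    u, v = np.meshgrid(g, g)
    return 12.0 * C(u, v).mean() - 3.0

def fgm(u, v, theta):
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def frank(u, v, theta):
    return -np.log1p(np.expm1(-theta * u) * np.expm1(-theta * v)
                     / np.expm1(-theta)) / theta
```

Here spearman_rho with the FGM copula at θ = 1 returns approximately 0.333, the FGM ceiling, whereas the Frank copula at, say, θ = 5 already exceeds that value.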
The analysis of the different estimation methods via Monte Carlo simulations shows that, although the CML estimates of the BINAR(1) parameters have the smallest MSE and bias, for the dependence parameter the differences in MSE and bias across methods are small, indicating that estimates of the dependence parameter obtained via the different methods do not differ greatly. While CML estimates exhibit the smallest MSE, their calculation via numerical optimization relies on the selection of initial parameter values; these values can be selected via CLS estimation.
An empirical application of BINAR models for loan data shows that, regardless of the selected marginal distributions, the FGM copula provides the best model fit in almost all cases. Models with the Frank copula are similar to FGM copula models in terms of AIC values. For some of these cases, the estimated FGM copula dependence parameter value was equal to the maximum that can be attained by an FGM copula. In such cases, a larger sample size could help to determine whether the FGM or Frank copula is more appropriate to model the dependence between amounts of defaulted and non-defaulted loans.
Although selecting marginal distributions from different families (Poisson or negative binomial) provided better models than those with only Poisson marginal distributions, the models with both marginal distributions modelled via negative binomial distributions provide the smallest AIC values which reflects overdispersion in amounts of both defaulted and non-defaulted loans. The FGM copula, which provides the best model fit, models variables which exhibit weak dependence. Furthermore, the estimated copula dependence parameter indicates that the dependence between amounts of defaulted and non-defaulted loans is positive.
Finally, one could apply other copulas in order to analyse whether the loan data exhibits forms of dependence different from the ones discussed in this paper. The approach could also be extended by analysing the presence of structural changes within the data, checking for seasonality, or extending the BINAR(1) model with copula-joined innovations to account for the past values of other time series rather than only its own.
A Appendix
Table 6. Standard errors of the bias of the estimated parameters from the Monte Carlo simulation

                                                CLS                  CML                  Two-step
                                                P-P      P-NB        P-P      P-NB        P-P      P-NB
FGM      $N=50$   ${\alpha _{1}}$      0.6      0.12396  0.12465     0.09252  0.09073     –        –
                  ${\alpha _{2}}$      0.4      0.13274  0.13029     0.12510  0.08541     –        –
                  ${\sigma _{2}^{2}}$  9        –        5.15368     –        3.88865     –        4.60221
         $N=500$  ${\alpha _{1}}$      0.6      0.03813  0.03893     0.02706  0.02745     –        –
Frank    $N=50$   ${\alpha _{1}}$      0.6      0.12882  0.12975     0.09552  0.09420     –        –
Clayton  $N=50$   ${\alpha _{1}}$      0.6      0.12352  0.12684     0.08846  0.09360     –        –
Let our Monte Carlo simulation data be ${X_{j,1}^{(i)}},\dots ,{X_{j,N}^{(i)}}$ for simulated sample $i=1,\dots ,M$ and $j=1,2$. Let $\eta \in \{{\alpha _{1}},{\alpha _{2}},{\lambda _{1}},{\lambda _{2}},\theta ,{\sigma _{2}^{2}}\}$ and let ${\widehat{\eta }^{(i)}}$ be either a CLS, CML or Two-step estimate of the true parameter value η for the simulated sample i.
The mean squared error and the bias are calculated as follows:
\[\begin{aligned}{}\text{MSE}(\widehat{\eta })& =\frac{1}{M}{\sum \limits_{i=1}^{M}}{\big({\widehat{\eta }^{(i)}}-\eta \big)^{2}},\\ {} \text{Bias}(\widehat{\eta })& =\frac{1}{M}{\sum \limits_{i=1}^{M}}\big({\widehat{\eta }^{(i)}}-\eta \big).\end{aligned}\]
Calculating the per-sample bias for each simulated sample i would also allow us to calculate the sample variance of biases $\text{Bias}({\widehat{\eta }^{(i)}})={\widehat{\eta }^{(i)}}-\eta $, $i=1,\dots ,M$:
\[\begin{aligned}{}\widehat{\mathbb{V}\mathrm{ar}}\big(\text{Bias}(\widehat{\eta })\big)& =\frac{1}{M-1}{\sum \limits_{i=1}^{M}}{\big[\text{Bias}\big({\widehat{\eta }^{(i)}}\big)-\text{Bias}(\widehat{\eta })\big]^{2}},\end{aligned}\]
which we can use to calculate the standard error of the bias.
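The quantities above translate directly into code; a minimal sketch (the function name is ours, and we take the reported standard error of the bias to be the square root of the sample variance of the per-sample biases, which is consistent with the magnitudes in Table 6):

```python
import numpy as np

def mc_summary(estimates, eta):
    """MSE, bias, and standard error of the bias for Monte Carlo estimates
    eta_hat^(1), ..., eta_hat^(M) of a true parameter eta, following the
    formulas above. The standard error is sqrt of the (M-1)-denominator
    sample variance of the per-sample biases."""
    err = np.asarray(estimates, dtype=float) - eta
    mse = np.mean(err ** 2)
    bias = np.mean(err)
    var_bias = np.sum((err - bias) ** 2) / (err.size - 1)
    return mse, bias, np.sqrt(var_bias)
```

For example, estimates 11, 9, 12, 8 of a true value 10 give MSE 2.5, bias 0 and standard error sqrt(10/3).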
Figure 3: Kernel density estimate for the bias of the dependence parameter estimates in the Monte Carlo simulation
The standard errors of the parameter bias of the Monte Carlo simulation are presented in Table 6. The columns labelled 'P-P' indicate the cases where both innovations have Poisson marginal distributions, while the columns labelled 'P-NB' correspond to the cases where one innovation component follows the Poisson distribution and the other follows a negative binomial one. The kernel density estimate for the bias of the dependence parameter estimate, $\widehat{\theta }$, is presented in Figure 3 for the Monte Carlo simulation cases where the sample size was 500.
The results in Table 6 are in line with the conclusions presented in Section 4.4 – for ${\hat{\alpha }_{j}}$, ${\hat{\lambda }_{j}}$, $j=1,2$, and ${\hat{\sigma }_{2}^{2}}$ the standard error of the bias is smaller for CML than for CLS. On the other hand, $\hat{\theta }$ has a similar standard error of the bias for the CML and Two-step estimation methods. From Figure 3 we see that the CML and Two-step estimates of the dependence parameter θ are similar to each other and have a lower standard error of the bias than the CLS estimate.
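A kernel density estimate like the one in Figure 3 can be produced directly from the per-sample biases; a hedged sketch using SciPy's Gaussian KDE (the bandwidth is SciPy's default rule, not necessarily the one used for the figure):

```python
import numpy as np
from scipy.stats import gaussian_kde

def bias_density(estimates, eta, grid):
    """Gaussian kernel density estimate of the per-sample biases
    eta_hat^(i) - eta, evaluated on the given grid of points."""
    biases = np.asarray(estimates, dtype=float) - eta
    return gaussian_kde(biases)(grid)
```

Plotting the returned densities for the CLS, CML and Two-step estimates on a common grid reproduces the kind of comparison shown in Figure 3.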
Acknowledgements
The authors would like to thank the anonymous referee for his/her feedback and constructive insights, which helped to improve this paper.
References
Al-Osh, M., Alzaid, A.: First-order integer-valued autoregressive (INAR(1)) process. J. Time Ser. Anal. 8, 261–275 (1987) MR0903755. https://doi.org/10.1111/j.1467-9892.1987.tb00438.x
Barczy, M., Ispány, M., Pap, G., Scotto, M., Silva, M.E.: Innovational outliers in INAR(1) models. Commun. Stat., Theory Methods 39(18), 3343–3362 (2010) MR2747588. https://doi.org/10.1080/03610920903259831
Brigo, D., Pallavicini, A., Torresetti, R.: Credit Models and the Crisis: A Journey Into CDOs, Copulas, Correlations and Dynamic Models. Wiley, United Kingdom (2010)
Cherubini, U., Mulinacci, S., Gobbi, F., Romagnoli, S.: Dynamic Copula Methods in Finance. Wiley, United Kingdom (2011)
Crook, J., Moreira, F.: Checking for asymmetric default dependence in a credit card portfolio: A copula approach. J. Empir. Finance 18, 728–742 (2011)
Fenech, J.P., Vosgha, H., Shafik, S.: Loan default correlation using an Archimedean copula approach: A case for recalibration. Econ. Model. 47, 340–354 (2015)
Freeland, R.K., McCabe, B.: Asymptotic properties of CLS estimators in the Poisson AR(1) model. Stat. Probab. Lett. 73(2), 147–153 (2005) MR2159250. https://doi.org/10.1016/j.spl.2005.03.006
Genest, C., Nešlehová, J.: A primer on copulas for count data. ASTIN Bull. 37(2), 475–515 (2007) MR2422797. https://doi.org/10.2143/AST.37.2.2024077
Joe, H.: Dependence Modeling with Copulas. Chapman & Hall/CRC Monographs on Statistics and Applied Probability 134 (2015) MR3328438
Karlis, D., Pedeli, X.: Flexible bivariate INAR(1) processes using copulas. Commun. Stat., Theory Methods 42, 723–740 (2013) MR3211946. https://doi.org/10.1080/03610926.2012.754466
Kedem, B., Fokianos, K.: Regression Models for Time Series Analysis. Wiley-Interscience, New Jersey (2002) MR1933755. https://doi.org/10.1002/0471266981
Latour, A.: The multivariate GINAR(p) process. Adv. Appl. Probab. 29(1), 228–248 (1997) MR1432938. https://doi.org/10.2307/1427868
Latour, A.: Existence and stochastic structure of a non-negative integer-valued autoregressive process. J. Time Ser. Anal. 19(4), 439–455 (1998) MR1652193. https://doi.org/10.1111/1467-9892.00102
Manstavičius, M., Leipus, R.: Bounds for the Clayton copula. Nonlinear Anal., Model. Control 22, 248–260 (2017) MR3608075. https://doi.org/10.15388/na.2017.2.7
Nelsen, R.: An Introduction to Copulas, 2nd Edition. Springer (2006) MR2197664. https://doi.org/10.1007/s11229-005-3715-x
Pawitan, Y.: In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford University Press, New York (2001) MR3668697. https://doi.org/10.1080/00031305.2016.1202140
Pedeli, X.: Modelling multivariate time series for count data. PhD thesis, Athens University of Economics and Business (2011)
Pedeli, X., Karlis, D.: A bivariate INAR(1) process with application. Stat. Model., Int. J. 11(4), 325–349 (2011) MR2906704. https://doi.org/10.1177/1471082X1001100403
Silva, I.M.M.: Contributions to the analysis of discrete-valued time series. PhD thesis, University of Porto (2005)
Sklar, M.: Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Stat. Univ. Paris 8, 229–231 (1959) MR0125600
Trivedi, P.K., Zimmer, D.M.: Copula modelling: An introduction for practitioners. Found. Trends Econom. 1(1), 1–111 (2007)
Keywords: Count data, BINAR, Poisson, negative binomial distribution, copula, FGM copula, Frank copula, Clayton copula
MSC: 60G10, 62M10, 62H12
\begin{definition}[Definition:Metric System/Scaling Prefixes/yocto-]
'''yocto-''' is the Système Internationale d'Unités metric scaling prefix denoting a multiplier of $10^{-24}$.
\end{definition}
\begin{definition}[Definition:Open Ball/P-adic Numbers/Center]
Let $p$ be a prime number.
Let $\struct {\Q_p, \norm {\,\cdot\,}_p}$ be the $p$-adic numbers.
Let $a \in \Q_p$.
Let $\epsilon \in \R_{>0}$ be a strictly positive real number.
Let $\map {B_\epsilon} a$ be the open $\epsilon$-ball of $a$.
In $\map {B_\epsilon} a$, the value $a$ is referred to as the '''center''' of the open $\epsilon$-ball.
\end{definition}
\begin{document}
\begin{abstract} We construct an analogue of the normaliser decomposition for $p$--local finite groups $(S,\mathcal{F},\mathcal{L})$ with respect to collections of $\mathcal{F}$--centric subgroups and collections of elementary abelian subgroups of $S$. This enables us to describe the classifying space of a $p$--local finite group, before $p$--completion, as the homotopy colimit of a diagram of classifying spaces of finite groups whose shape is a poset and all maps are induced by group monomorphisms. \end{abstract}
\title{The normaliser decomposition for $p$--local finite groups}
\section{The main results}
For finite groups Dwyer \cite{Dwyer-decomp} defined three types of homology decompositions of classifying spaces of finite groups known as the ``subgroup'', ``centraliser'' and ``normaliser'' decompositions. These decompositions are functors $F\co D \to \textbf{Spaces}$, where $D$ is a small category which is constructed using collections $\ensuremath{\mathcal{H}}$ of carefully chosen subgroups of $G$. The essential property of these functors is that, given a finite group $G$, the spaces $F(d)$ have the homotopy type of classifying spaces of subgroups of $G$. Moreover, the category $D$ is constructed using information about the conjugation in $G$ of the subgroups in $\ensuremath{\mathcal{H}}$. We say that $D$ depends on the fusion of the collection $\ensuremath{\mathcal{H}}$ in $G$.
The purpose of this note is to construct an analogue of the normaliser decomposition for $p$--local finite groups in certain important cases. Throughout this note we will freely use the terminology and notation that by now has become standard in the theory of $p$--local finite groups. The reader who is not familiar with the jargon is advised to read \fullref{sec plfg} prior to this section, and is also referred to \cite{BLO2} where $p$--local finite groups were initially defined.
It should be noted that the analogues of the ``subgroup'' and the ``centraliser'' decompositions for $p$--local finite groups were already known to Broto, Levi and Oliver \cite[Section~2]{BLO2}.
The normaliser decomposition introduced in this note enabled the author, together with Antonio Viruel, to analyze the nerve $|\ensuremath{\mathcal{L}}|$ of $p$--local finite groups $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ with small Sylow subgroups $S$. We prove that these are classifying spaces of, generally infinite, discrete groups \cite{LV}.
The author also used normaliser decompositions to give an analysis of the spectra associated with the nerve, $|\ensuremath{\mathcal{L}}|$, of the linking systems due to Ruiz and Viruel in \cite{RV-extraspecial} and other ``exotic'' examples, see \cite{Li}. These results will appear separately as they involve techniques that have little to do with the actual construction of the normaliser decomposition.
We now describe the main results of this paper. Throughout we work simplicially, thus a space means a simplicial set. The category of simplicial sets is denoted by $\textbf{Spaces}$. The nerve of a small category $\mathbf{D}$ is denoted
$\text{Nr}(\mathbf{D})$ or $|\mathbf{D}|$. We obtain a functor $|-|\co\Cat\to\textbf{Spaces}$ where $\Cat$ is the category of small categories. A more detailed discussion can be found in \fullref{sec homotopy colimits}.
\begin{defn} Let $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ be a $p$--local finite group. A collection is a set ${\mathcal{C}}$ of subgroups of $S$ which is closed under conjugacy in $\ensuremath{\mathcal{F}}$. That is if $P\leq S$ belongs to ${\mathcal{C}}$ then so do all the $\ensuremath{\mathcal{F}}$--conjugates of $P$. A collection ${\mathcal{C}}$ is called $\ensuremath{\mathcal{F}}$--centric if it consists of $\ensuremath{\mathcal{F}}$--centric subgroups of $S$. \end{defn}
\begin{defn} \label{def k-simplices} A $k$--simplex in a collection ${\mathcal{C}}$ is a sequence ${\mathbf{P}}$ of proper inclusions $P_0<P_1<\cdots <P_k$ of elements of ${\mathcal{C}}$. Two $k$--simplices ${\mathbf{P}}$ and ${\mathbf{P}}'$ are called conjugate if there exists an isomorphism $f\in \Iso_\ensuremath{\mathcal{F}}(P_k,P_k')$ such that $f(P_i)=P_i'$ for all $i=0,\ldots,k$. The conjugacy class of ${\mathbf{P}}$ is denoted $[{\mathbf{P}}]$. \end{defn}
\begin{defn} \label{def bsdc} The category ${\mathrm{\bar{s}d}}{\mathcal{C}}$ is a poset whose objects are the conjugacy classes $[{\mathbf{P}}]$ of all the $k$--simplices in ${\mathcal{C}}$ where $k=0,1,2,\ldots$. A morphism $[{\mathbf{P}}] \to [{\mathbf{P}}']$ in ${\mathrm{\bar{s}d}}{\mathcal{C}}$ exists if ${\mathbf{P}}'$ is conjugate to a subsimplex of ${\mathbf{P}}$. \end{defn}
Recall from \fullref{iota morphisms} that in every $p$--local finite group it is possible to choose morphisms $\iota_P^Q$ in the linking system $\ensuremath{\mathcal{L}}$ which are lifts of inclusions $P\leq Q$ of $\ensuremath{\mathcal{F}}$--centric subgroups. The choice can be made in such a way that $\iota_Q^R\circ\iota_P^Q=\iota_P^R$ for inclusions $P\leq Q\leq R$.
\begin{defn} Let ${\mathcal{C}}$ be an $\ensuremath{\mathcal{F}}$--centric collection in $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ and let ${\mathbf{P}}$ be a $k$--simplex in ${\mathcal{C}}$. Define $\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$ as the subgroup of $\prod_{i=0}^k\Aut_\ensuremath{\mathcal{L}}(P_i)$ whose elements are the $(k{+}1)$--tuples $(\varphi_i)_{i=0}^k$ which render the following ladder commutative in $\ensuremath{\mathcal{L}}$ $$ \begin{CD} P_0 @>{\iota_{P_0}^{P_1}}>> P_1 @>{\iota_{P_1}^{P_2}}>> \cdots @>{\iota_{P_{k-1}}^{P_k}}>> P_k \\ @V{\varphi_0}VV @V{\varphi_1}VV @. @VV{\varphi_k}V \\ P_0 @>>{\iota_{P_0}^{P_1}}> P_1 @>>{\iota_{P_1}^{P_2}}> \cdots @>>{\iota_{P_{k-1}}^{P_k}}> P_k \end{CD} $$ \end{defn}
\begin{prop} \label{prop resL}
The assignment $(\varphi_i)_{i=0}^k \mapsto \varphi_0$ gives rise to a canonical isomorphism of $\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$ with a subgroup of $\Aut_\ensuremath{\mathcal{L}}(P_0)$. More generally, if ${\mathbf{P}}'$ is a subsimplex of ${\mathbf{P}}$ in ${\mathcal{C}}$ then restriction induces a monomorphism of groups $\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \to \Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}')$. \end{prop}
\begin{proof} The second assertion follows immediately from \fullref{restn in L}. The first follows from the second by letting ${\mathbf{P}}'$ be the $1$--simplex $P_0$. \end{proof}
\begin{notationx} $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$ denotes the subcategory of $\ensuremath{\mathcal{L}}$ whose only object is $P_0$ and whose morphism set is $\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$. \end{notationx}
\begin{defn} Given an $\ensuremath{\mathcal{F}}$--centric collection ${\mathcal{C}}$ in a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$, let $\ensuremath{\mathcal{L}}^{\mathcal{C}}$ denote the full subcategory of $\ensuremath{\mathcal{L}}$ generated by the object set ${\mathcal{C}}$. \end{defn}
Frequently, the inclusion $\ensuremath{\mathcal{L}}^{\mathcal{C}} \subseteq \ensuremath{\mathcal{L}}$ induces a weak homotopy equivalence on nerves. For example, this happens when ${\mathcal{C}}$ contains all the $\ensuremath{\mathcal{F}}$--centric $\ensuremath{\mathcal{F}}$--radical subgroups of $S$. This fact is proved by Broto, Castellana, Grodal, Levi and Oliver \cite[Theorem 3.5]{BCGLO1}.
The following theorem applies to all $\ensuremath{\mathcal{F}}$--centric collections. The decomposition approximates $\ensuremath{\mathcal{L}}$ if the inclusion $\ensuremath{\mathcal{L}}^{\mathcal{C}} \subseteq \ensuremath{\mathcal{L}}$ induces an equivalence as explained above.
\begin{thma} \label{thmA} Fix an $\ensuremath{\mathcal{F}}$--centric collection ${\mathcal{C}}$ in a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$. Then there exists a functor $\delta_{\mathcal{C}}\co{\mathrm{\bar{s}d}}{\mathcal{C}}\to\textbf{Spaces}$ such that \begin{enumerate} \item \label{thmA:target} There is a natural weak homotopy equivalence $$\hhocolim{{\mathrm{\bar{s}d}}{\mathcal{C}}}\, \delta_{\mathcal{C}} \xto{~~\simeq~~}
|\ensuremath{\mathcal{L}}^{\mathcal{C}}|.$$ \item There is a natural weak homotopy equivalence $B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \xto{\simeq} \delta_{\mathcal{C}}([{\mathbf{P}}])$ for every $k$--simplex ${\mathbf{P}}$. \label{thmA:terms}
\item The natural maps $\delta_{\mathcal{C}}([{\mathbf{P}}]) \to |\ensuremath{\mathcal{L}}^{\mathcal{C}}|$ are induced by the inclusion of categories $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \subseteq \ensuremath{\mathcal{L}}^{\mathcal{C}}$. \label{thmA:augmentation}
\item If ${\mathbf{P}}'$ is a subsimplex of ${\mathbf{P}}$ then the equivalence \eqref{thmA:terms} renders the following square commutative $$ \disablesubscriptcorrection\xysavmatrix{ B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \ar[r]^\simeq \ar[d]_{B\res^{\mathbf{P}}_{{\mathbf{P}}'}} & \delta_{\mathcal{C}}([{\mathbf{P}}]) \ar[d] \\ B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}') \ar[r]^\simeq & \delta_{\mathcal{C}}([{\mathbf{P}}']) } $$ Moreover if ${\mathbf{P}}$ and ${\mathbf{P}}'$ are conjugate $k$--simplices and $\psi\in\Iso_\ensuremath{\mathcal{L}}(P_0,P_0')$ maps $\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}')$ onto $\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$ by conjugation then the following square commutes $$ \disablesubscriptcorrection\xysavmatrix{ B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}') \ar[r]^{\simeq} \ar[d]_{Bc_\psi} & \delta_{\mathcal{C}}([{\mathbf{P}}']) \ar@{=}[d] \\ B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \ar[r]_{\simeq} & \delta_{\mathcal{C}}([{\mathbf{P}}]) } $$ \label{thmA:maps} \end{enumerate} \end{thma}
\begin{remarkx} When $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ is associated with a finite group $G$ one may consider the $G$--collection $\ensuremath{\mathcal{H}}$ consisting of all the subgroups of $G$ which are conjugate to elements of the $\ensuremath{\mathcal{F}}$--collection ${\mathcal{C}}$. Dwyer \cite[Section~3]{Dwyer-decomp} constructs a poset ${\mathrm{\bar{s}d}}\ensuremath{\mathcal{H}}$ and a functor $\delta^{\text{Dwyer}}_\ensuremath{\mathcal{H}}\co{\mathrm{\bar{s}d}}\ensuremath{\mathcal{H}} \to \textbf{Spaces}$ which he calls the normaliser decomposition. We will show in \fullref{compare with Dwyer} that ${\mathrm{\bar{s}d}}\ensuremath{\mathcal{H}}={\mathrm{\bar{s}d}}{\mathcal{C}}$ and that $\delta_{\mathcal{C}}$ and $\delta^{\text{Dwyer}}_\ensuremath{\mathcal{H}}$ can be connected by a natural zigzag of mod--$p$ equivalences. That is, a zigzag of natural transformations which at every object of ${\mathrm{\bar{s}d}}{\mathcal{C}}$ give rise to an $H_*(-;\mathbb{Z}/p)$--isomorphism. \end{remarkx}
We now describe the second type of normaliser decomposition that we shall construct in this note. It is based on collections $\ensuremath{\mathcal{E}}$ of elementary abelian subgroups of $S$.
\begin{defn} \label{def autF} For a $k$--simplex ${\mathbf{E}}$ in $\ensuremath{\mathcal{E}}$ define $\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})$ as the subgroup of $\Aut_\ensuremath{\mathcal{F}}(E_k)$ consisting of the automorphisms $f$ such that $f(E_i)=E_i$ for all $i=0,\ldots,k$. \end{defn}
Consider an $\ensuremath{\mathcal{F}}$--centric collection ${\mathcal{C}}$ in $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$.
\begin{defn} \label{def barcl} Fix an elementary abelian subgroup $E$ of $S$. The objects of the category $\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E)$ are pairs $(P,f)$ where $P\in {\mathcal{C}}$ and $f\co E \to Z(P)$ is a morphism in $\ensuremath{\mathcal{F}}$. Morphisms $(P,f) \to (Q,g)$ in $\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E)$ are morphisms $\psi \in \ensuremath{\mathcal{L}}(P,Q)$ such that $g=\pi(\psi)\circ f$ where $\pi\co\ensuremath{\mathcal{L}} \to \ensuremath{\mathcal{F}}$ is the projection functor. \end{defn}
Observe that $\Aut_\ensuremath{\mathcal{F}}(E)$ acts on $\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E)$ by pre-composition. That is, every $h\in\Aut_\ensuremath{\mathcal{F}}(E)$ induces the assignment $(P,f)\mapsto (P,f\circ h)$.
\begin{defn} \label{def NoL} For a $k$--simplex ${\mathbf{E}}$ in $\ensuremath{\mathcal{E}}$ let $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ denote the subcategory of $\ensuremath{\mathcal{L}}$
whose objects are $P\in{\mathcal{C}}$ for which $E_k \leq Z(P)$. A morphism $\varphi\in\ensuremath{\mathcal{L}}(P,Q)$ belongs to $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ if $\pi(\varphi)|_{E_k}$ is an element of $\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})$. \end{defn}
Recall that the homotopy orbit space of a $G$--space $X$, ie the Borel construction $EG\times_G X$, is denoted by $X_{hG}$.
\begin{prop} \label{No equiv orbit} Let ${\mathbf{E}}$ be a $k$--simplex in $\ensuremath{\mathcal{E}}$. There is a map $$
\epsilon_{\mathbf{E}} \co | \breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})| \to |\bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};E_k)|_{h\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})} $$ which is a homotopy equivalence if $E_k$ is fully $\ensuremath{\mathcal{F}}$--centralised. The map is natural with respect to inclusion of simplices. \end{prop}
\begin{proof} This is immediate from \fullref{NoTrCbar}. \end{proof}
A comment on the categories $\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E)$ is in order. If ${\mathcal{C}}$ is the collection of all the $\ensuremath{\mathcal{F}}$--centric subgroups of
$S$ and $E$ is fully $\ensuremath{\mathcal{F}}$--centralised, then it is shown by Broto, Levi and Oliver \cite[Theorem 2.6]{BLO2} that $|\bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};E)|$ has the homotopy type of the nerve of the centraliser linking system
$|C_\ensuremath{\mathcal{L}}(E)|$. The categories $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ are more mysterious. Even when ${\mathbf{E}}$ is a $1$--simplex $E$, the category $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};E)$ is in general only a subcategory of the normaliser linking system $N_\ensuremath{\mathcal{L}}(E)$ because the largest subgroup which appears as an object of $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};E)$ is $C_S(E)$ which in general is smaller than $N_S(E)$. When $C_S(E)=N_S(E)$ these categories are equal.
The next decomposition result, \fullref{thmB}, depends on a collection of elementary abelian groups $\ensuremath{\mathcal{E}}$ and a collection ${\mathcal{C}}$ of $\ensuremath{\mathcal{F}}$--centric subgroups of $S$. It approximates $\ensuremath{\mathcal{L}}$ if ${\mathcal{C}}$ contains, for example, all the $\ensuremath{\mathcal{F}}$--centric $\ensuremath{\mathcal{F}}$--radical subgroups of $S$. The collection $\ensuremath{\mathcal{E}}$ must be large enough as explicitly stated in the theorem. For example the collection of all the non-trivial elementary abelian subgroups will always be a valid choice.
\begin{defn} \label{def omega_p} Given a group $H$ and a prime $p$ let $\Omega_p(H)$ denote the subgroup of $H$ generated by all the elements of order $p$ in $H$. \end{defn}
\begin{thmb} \label{thmB} Consider a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$, an $\ensuremath{\mathcal{F}}$--centric collection ${\mathcal{C}}$ and a collection $\ensuremath{\mathcal{E}}$ of elementary abelian subgroups of $S$ which contains the subgroups $\Omega_pZ(P)$ for all $P\in {\mathcal{C}}$. Then there exists a functor $\delta_\ensuremath{\mathcal{E}}\co{\mathrm{\bar{s}d}}\ensuremath{\mathcal{E}} \to \textbf{Spaces}$ with the following properties. \begin{enumerate} \item There is a natural weak homotopy equivalence $
\hhocolim{{\mathrm{\bar{s}d}}\ensuremath{\mathcal{E}}} \, \delta_\ensuremath{\mathcal{E}} \xto{\ \ \simeq \ \ } |\ensuremath{\mathcal{L}}^{\mathcal{C}}|. $ \label{thmB:target}
\item For a $k$--simplex ${\mathbf{E}}$ in $\ensuremath{\mathcal{E}}$ there is a weak homotopy equivalence \label{thmB:terms} $$
|\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E_k)|_{h\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})} \xto{ \ \ \simeq \ \ } \delta_\ensuremath{\mathcal{E}}([{\mathbf{E}}]). $$ \item Fix a $k$--simplex ${\mathbf{E}}$ where $E_k$ is fully $\ensuremath{\mathcal{F}}$--centralised. The equivalences \eqref{thmB:target} and \eqref{thmB:terms} give a natural map
$\delta_\ensuremath{\mathcal{E}}([{\mathbf{E}}]) \to |\ensuremath{\mathcal{L}}^{\mathcal{C}}|$ whose precomposition with $\epsilon_{\mathbf{E}}$ of \fullref{No equiv orbit} is induced by the realization of the inclusion of $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ in $\ensuremath{\mathcal{L}}^{\mathcal{C}}$. \label{thmB:augment}
\item If ${\mathbf{E}}'$ is a $k$--subsimplex of an $n$--simplex ${\mathbf{E}}$ then the following square commutes up to homotopy $$ \disablesubscriptcorrection\xysavmatrix{
|\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E_n)|_{h\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})} \ar[d] \ar[rr]^\simeq & & \delta_\ensuremath{\mathcal{E}}([{\mathbf{E}}]) \ar[d] \\
|\bar{C}_{\ensuremath{\mathcal{L}}}({\mathcal{C}};E'_k)|_{h\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}}')} \ar[rr]^\simeq & & \delta_\ensuremath{\mathcal{E}}([{\mathbf{E}}']) } $$ The homotopy is natural with respect to inclusion of simplices. In addition, the square commutes on the nose if $E_k'=E_n$. \label{thmB:maps} \end{enumerate} \end{thmb}
\subsubsection*{Acknowledgments} The author was supported by grant NAL/00735/G from the Nuf\-field Foundation. Part of this work was supported by Institute Mittag-Leffler (Djursholm, Sweden).
\section{On $p$--local finite groups} \label{sec plfg}
The term $p$--local finite group was coined by Broto, Levi and Oliver \cite{BLO2}. It cropped up naturally in their attempt \cite{BLO1} to describe the space of self-equivalences of a $p$--completed classifying space of a finite group $G$. They discovered that the relevant information needed to solve this problem lies in the fusion system of the $p$--subgroups of $G$ and certain categories which they later called ``linking systems''. Historically, fusion systems were first introduced by Lluis Puig \cite{Puig}.
\begin{defn} Fix a prime $p$ and let $S$ be a finite $p$--group. A \emph{fusion system} over $S$ is a subcategory $\ensuremath{\mathcal{F}}$ of the category of groups whose objects are the subgroups of $S$ and whose morphisms are group monomorphisms such that \begin{itemize} \item[(1)] All the monomorphisms that are induced by conjugation by elements of $S$ are in $\ensuremath{\mathcal{F}}$.
\item[(2)] Every morphism in $\ensuremath{\mathcal{F}}$ factors as an isomorphism in $\ensuremath{\mathcal{F}}$ followed by an inclusion of subgroups. \end{itemize} We say that two subgroups $P,Q$ of $S$ are $\ensuremath{\mathcal{F}}$--\emph{conjugate} if they are isomorphic as objects of $\ensuremath{\mathcal{F}}$. \end{defn}
When $g$ is an element of $S$ and $P,Q$ are subgroups of $S$ such that $g P g^{-1}\leq Q$, we let $c_g$ denote the morphism $P \to Q$ defined by conjugation, namely $c_g(x)=gxg^{-1}$ for every $x\in P$.
We let $\Hom_S(P,Q)$ denote the set of all the morphisms $P \to Q$ in $\ensuremath{\mathcal{F}}$ that are induced by conjugation in $S$. Also notice that the factorization axiom (2) implies that all the $\ensuremath{\mathcal{F}}$--endomorphisms of a subgroup $P$ are in fact automorphisms in $\ensuremath{\mathcal{F}}$. Thus we write $\Aut_\ensuremath{\mathcal{F}}(P)$ for the set of morphisms $\ensuremath{\mathcal{F}}(P,P)$.
\begin{defn} A subgroup $P$ of $S$ is called \emph{fully $\ensuremath{\mathcal{F}}$--centralised} (resp. \emph{fully $\ensuremath{\mathcal{F}}$--normalised}) if its $S$--centraliser $C_S(P)$ (resp. $S$--normaliser $N_S(P)$) has the maximal possible order in the
$\ensuremath{\mathcal{F}}$--conjugacy class of $P$. That is, $|C_S(P)|\geq |C_S(P')|$ (resp. $|N_S(P)|\geq |N_S(P')|$) for every $P'$ which is $\ensuremath{\mathcal{F}}$--conjugate to $P$. \end{defn}
\begin{defn} \label{def sat fus} A fusion system $\ensuremath{\mathcal{F}}$ over a finite $p$--group $S$ is called \emph{saturated} if \begin{itemize} \item[I] Every fully $\ensuremath{\mathcal{F}}$--normalised subgroup $P$ of $S$ is fully $\ensuremath{\mathcal{F}}$--centralised and moreover $\Aut_S(P)=N_S(P)/C_S(P)$ is a Sylow $p$--subgroup of $\Aut_\ensuremath{\mathcal{F}}(P)$.
\item[II] Every morphism $\func{\varphi}{P}{S}$ in $\ensuremath{\mathcal{F}}$ whose image $\varphi(P)$ is fully $\ensuremath{\mathcal{F}}$--centralised can be extended to a morphism $\func{\psi}{N_\varphi}{S}$ in $\ensuremath{\mathcal{F}}$ where $$ N_{\varphi}=\{ g\in N_S(P) : \varphi c_g\varphi^{-1} \in\Aut_S(P)\}. $$ \end{itemize} \end{defn}
\begin{defn} A subgroup $P$ of $S$ is called $\ensuremath{\mathcal{F}}$--\emph{centric} if $P$ and all of its $\ensuremath{\mathcal{F}}$--conjugates contain their $S$--centralisers, that is $C_S(P')=Z(P')$ for every subgroup $P'$ of $S$ which is $\ensuremath{\mathcal{F}}$--conjugate to $P$. \end{defn}
\begin{defn} A \emph{centric linking system} associated to a saturated fusion system $\ensuremath{\mathcal{F}}$ over $S$ consists of \begin{enumerate} \item A small category $\ensuremath{\mathcal{L}}$ whose objects are the $\ensuremath{\mathcal{F}}$--centric subgroups of $S$,
\item a functor $\pi\co\ensuremath{\mathcal{L}} \to \ensuremath{\mathcal{F}}$ and
\item group monomorphisms $\delta_P\co P \to \Aut_\ensuremath{\mathcal{L}}(P)$ for every $\ensuremath{\mathcal{F}}$--centric subgroup $P$ of $S$, \end{enumerate} such that the following axioms hold: \begin{itemize} \item[(A)] The functor $\pi$ acts as the inclusion on object sets, that is $\pi(P)=P$ for every $\ensuremath{\mathcal{F}}$--centric subgroup $P$ of $S$. For any two objects $P,Q$ of $\ensuremath{\mathcal{L}}$, the group $Z(P)$ acts freely on the morphism set $\ensuremath{\mathcal{L}}(P,Q)$ via the restriction of $\delta_P\co P \to \Aut_\ensuremath{\mathcal{L}}(P)$ to $Z(P)$. The induced map on morphism sets $$ \pi\co\ensuremath{\mathcal{L}}(P,Q) \to \ensuremath{\mathcal{F}}(P,Q) $$ identifies $\ensuremath{\mathcal{F}}(P,Q)$ with the quotient of $\ensuremath{\mathcal{L}}(P,Q)$ by the free action of $Z(P)$.
\item[(B)] For every $\ensuremath{\mathcal{F}}$--centric subgroup $P$ of $S$ the map $\pi\co\Aut_\ensuremath{\mathcal{L}}(P) \to \Aut_\ensuremath{\mathcal{F}}(P)$ sends $\delta_P(g)$, where $g\in P$, to $c_g$.
\item[(C)] For every $f\in\ensuremath{\mathcal{L}}(P,Q)$ and every $g\in P$ there is a commutative square in $\ensuremath{\mathcal{L}}$ $$ \begin{CD} P @>{f}>> Q \\ @V{\delta_P(g)}VV @VV{\delta_Q(\pi(f)(g))}V \\ P @>>{f}> Q. \end{CD} $$ \end{itemize} \end{defn}
\begin{remarkx}
A morphism $f\in\ensuremath{\mathcal{L}}(P,Q)$ is called a \emph{lift} of a morphism $\varphi\in\ensuremath{\mathcal{F}}(P,Q)$ if $\varphi=\pi(f)$. \end{remarkx}
\begin{defn} A $p$--\emph{local finite group} is a triple $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ where $\ensuremath{\mathcal{F}}$ is a saturated fusion system over the finite $p$--group $S$ and $\ensuremath{\mathcal{L}}$ is a centric linking system associated to $\ensuremath{\mathcal{F}}$. The \emph{classifying space} of $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ is the space
$\pcomp{|\ensuremath{\mathcal{L}}|}$, that is, the $p$--completion in the sense of Bousfield and Kan \cite{BK} of the realization of the small category $\ensuremath{\mathcal{L}}$. \end{defn}
\begin{void} \label{non exotic examples} When $S$ is a Sylow $p$--subgroup of a finite group $G$, there is an associated $p$--local finite group denoted $(S,\ensuremath{\mathcal{F}}_S(G),\ensuremath{\mathcal{L}}_S(G))$. See \cite[Proposition~1.3, remarks after Definition~1.8]{BLO2}. We shall write $\ensuremath{\mathcal{F}}$ for $\ensuremath{\mathcal{F}}_S(G)$ and $\ensuremath{\mathcal{L}}$ for $\ensuremath{\mathcal{L}}_S(G)$.
Morphism sets between $P,Q\leq S$ are $$ \ensuremath{\mathcal{F}}(P,Q)=\Hom_G(P,Q)=N_G(P,Q)/C_G(P) $$ where $N_G(P,Q)=\{g\in G : gPg^{-1}\leq Q\}$ and $C_G(P)$ acts on $N_G(P,Q)$ by right translation.
A subgroup $P$ of $S$ is, by \cite[Proposition~1.3]{BLO2}, $\ensuremath{\mathcal{F}}$--centric precisely when it is $p$--centric in the sense of \cite[Section~1.19]{Dwyer-decomp}, that is, $Z(P)$ is a Sylow $p$--subgroup of $C_G(P)$. In this case $C_G(P)=Z(P)\times C'_G(P)$ where $C_G'(P)$ is the maximal subgroup of $C_G(P)$ of order prime to $p$. Morphism sets of $\ensuremath{\mathcal{L}}=\ensuremath{\mathcal{L}}_S(G)$ have, by definition, the form $$ \ensuremath{\mathcal{L}}(P,Q) = N_G(P,Q)/C_G'(P). $$ The functor $\pi\co\ensuremath{\mathcal{L}}_S(G)\to\ensuremath{\mathcal{F}}_S(G)$ is the obvious projection functor. The monomorphism $\delta_P\co P\to \Aut_\ensuremath{\mathcal{L}}(P)$ is induced by the inclusion of $P$ in $N_G(P)$.
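As a sanity check on these formulas, here is a small worked example; it is standard and not taken from the text above.

```latex
% Worked example (standard; illustration only):
% G = \Sigma_3, p = 3, S = \langle (1\,2\,3) \rangle \cong C_3.
\begin{align*}
  N_G(S) &= \Sigma_3, & C_G(S) &= S,\\
  \Aut_{\mathcal{F}}(S) &= N_G(S)/C_G(S) \cong C_2, &
  \Aut_{\mathcal{L}}(S) &= N_G(S)/C'_G(S) = \Sigma_3,
\end{align*}
% the last equality because C_G(S) = S is a 3-group, so C'_G(S) = 1.
% Here S is p-centric: Z(S) = S is a Sylow 3-subgroup of C_G(S) = S.
% The free action of Z(S) = C_3 on \Aut_{\mathcal{L}}(S) = \Sigma_3
% has quotient of order 2, recovering \Aut_{\mathcal{F}}(S), as
% axiom (A) predicts.
```

Axiom I of saturation also checks out in this example: $\Aut_S(S)=S/Z(S)$ is trivial, which is indeed a Sylow $3$--subgroup of $\Aut_\ensuremath{\mathcal{F}}(S)\cong C_2$.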
It is shown by Broto, Levi and Oliver \cite[after Definition~1.8]{BLO2} that $(S,\ensuremath{\mathcal{F}}_S(G),\ensuremath{\mathcal{L}}_S(G))$ is a
$p$--local finite group and that $\pcomp{|\ensuremath{\mathcal{L}}_S(G)|}\simeq \pcomp{BG}$. It should also be remarked that there are examples of $p$--local finite groups that cannot be
associated with any finite group. These are usually referred to as ``exotic examples''. \end{void}
\begin{void} \label{iota morphisms} In every $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ one can choose morphisms $\iota_P^Q\in\ensuremath{\mathcal{L}}(P,Q)$ for every inclusion of $\ensuremath{\mathcal{F}}$--centric subgroups $P\leq Q$, in such a way that \begin{enumerate} \item $\pi(\iota_P^Q)$ is the inclusion $P\leq Q$, \item $\iota_Q^R\circ \iota_P^Q=\iota_P^R$ for all $\ensuremath{\mathcal{F}}$--centric subgroups $P\leq
Q\leq R$ of $S$, and \item $\iota_P^P=\textrm{id}$ for every $\ensuremath{\mathcal{F}}$--centric subgroup $P$ of $S$. \end{enumerate} This follows from \cite[Proposition 1.11]{BLO2}. Using the notation there, one chooses $\iota^Q_P=\delta_{P,Q}(e)$ where $e$ is the identity element in $S$. Whenever possible, in order to avoid cumbersome notation, we shall write $\iota$ for $\iota_P^Q$. \end{void}
\begin{void} \label{unique factor in L} From \cite[Lemma 1.10(a)]{BLO2} it also follows that every morphism $\varphi\co P \to Q$ in $\ensuremath{\mathcal{L}}$ factors uniquely as an isomorphism $\varphi'\co P \to P'$ in $\ensuremath{\mathcal{L}}$ followed by the morphism $\iota\co P' \to Q$. In fact $P'=\pi(\varphi)(P)$. \end{void}
\begin{void} \label{morphisms in L are mono and epi} It was observed by Broto, Levi and Oliver \cite[remarks after Lemma~1.10]{BLO2} that every morphism in $\ensuremath{\mathcal{L}}$ is a monomorphism in the categorical sense. It was later observed by Broto, Castellana, Grodal, Levi and Oliver \cite[Corollary 3.10]{BCGLO1} and independently by others, that every morphism in $\ensuremath{\mathcal{L}}$ is also an epimorphism. As an easy consequence we record for further use: \end{void}
\begin{prop} \label{restn in L} Consider a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ and a commutative square in $\ensuremath{\mathcal{F}}$ on the left of the display below $$ \begin{CD} P @>{f}>> P' \\ @V{\text{incl}}VV @VV{\text{incl}}V \\ Q @>>{g}> Q' \end{CD} \qquad \qquad \qquad \qquad \qquad \begin{CD} P @>{\tilde{f}}>> P' \\ @V{\iota_P^Q}VV @VV{\iota_{P'}^{Q'}}V \\ Q @>>{\tilde{g}}> Q' \end{CD} $$ where $P,P',Q$ and $Q'$ are $\ensuremath{\mathcal{F}}$--centric subgroups of $S$. Then for every lift $\tilde{g}$ of $g$ in $\ensuremath{\mathcal{L}}$ there exists a unique lift $\tilde{f}$
of $f$ in $\ensuremath{\mathcal{L}}$ which renders the square on the right commutative in $\ensuremath{\mathcal{L}}$. We denote $\tilde{f}$ by $\tilde{g}|_P$.
Given a lift $\tilde{f}$ for $f$, if there exists a lift $\tilde{g}$ for $g$ rendering the square on the right commutative, then it is unique. \end{prop}
\begin{proof} The first assertion follows immediately from \cite[Lemma 1.10(a)]{BLO2} by setting $\psi=\text{incl}_{P'}^{Q'}, \tilde{\psi}=\iota_{P'}^{Q'}$ and $\tilde{\psi\varphi}=\tilde{g}\iota_P^Q$. The second assertion follows immediately from the fact that $\iota_P^Q$ is an epimorphism. \end{proof}
\begin{void} \label{centraliser and normaliser systems} Fix a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$. Given a subgroup $P$ of $S$, there are two important $p$--local finite groups associated with it: the centraliser of $P$ when $P$ is fully $\ensuremath{\mathcal{F}}$--centralised and the normaliser of $P$ when $P$ is fully $\ensuremath{\mathcal{F}}$--normalised. Both were defined by Broto, Levi and Oliver in \cite{BLO2}.
The centraliser fusion system $C_\ensuremath{\mathcal{F}}(P)$, where $P$ is fully $\ensuremath{\mathcal{F}}$--centralised, is a subcategory of $\ensuremath{\mathcal{F}}$. As a fusion system it is defined over the $S$--centraliser of $P$ denoted $C_S(P)$. Morphisms $Q \to Q'$ in $C_\ensuremath{\mathcal{F}}(P)$ are those morphisms $\varphi\co Q \to Q'$ in $\ensuremath{\mathcal{F}}$ that can be extended to a morphism $\bar{\varphi}\co PQ \to PQ'$ in $\ensuremath{\mathcal{F}}$ which induces the identity on $P$. The objects of the centric linking system $C_\ensuremath{\mathcal{L}}(P)$ associated to $C_\ensuremath{\mathcal{F}}(P)$ are the $C_\ensuremath{\mathcal{F}}(P)$--centric subgroups of $C_S(P)$. The set of morphisms $Q \to Q'$ in $C_\ensuremath{\mathcal{L}}(P)$ is the subset of $\ensuremath{\mathcal{L}}(PQ,PQ')$ consisting of those morphisms $f\co PQ \to PQ'$ such that $\pi(f)$ induces the identity on $P$ and carries $Q$ to $Q'$. It is shown in \cite{BLO2} that $(C_S(P),C_\ensuremath{\mathcal{F}}(P),C_\ensuremath{\mathcal{L}}(P))$ is a $p$--local finite group.
Now fix a subgroup $K \leq \Aut_\ensuremath{\mathcal{F}}(P)$ where $P$ is fully normalised in $\ensuremath{\mathcal{F}}$. The $K$--normaliser fusion system $N^K_\ensuremath{\mathcal{F}}(P)$ is a subcategory of $\ensuremath{\mathcal{F}}$ defined over $N_S(P)$. The objects of $N_\ensuremath{\mathcal{F}}^K(P)$ are the subgroups of $N_S(P)$. A morphism $\varphi\in\ensuremath{\mathcal{F}}(Q,Q')$ belongs to $N^K_\ensuremath{\mathcal{F}}(P)$ if it can be extended to a morphism $\bar{\varphi}\co PQ \to PQ'$ in $\ensuremath{\mathcal{F}}$ which induces an automorphism of $P$ belonging to $K$. The fusion system $N_\ensuremath{\mathcal{F}}^K(P)$ is saturated. When $K=\Aut_\ensuremath{\mathcal{F}}(P)$ we denote this category by $N_\ensuremath{\mathcal{F}}(P)$ and call it the normaliser fusion system of $P$. The centric linking system $N_\ensuremath{\mathcal{L}}(P)$ associated to $N_\ensuremath{\mathcal{F}}(P)$ has the $N_\ensuremath{\mathcal{F}}(P)$--centric subgroups of $N_S(P)$ as its object set. The set of morphisms $Q \to Q'$ is the subset of $\ensuremath{\mathcal{L}}(PQ,PQ')$ consisting of those $f\co PQ \to PQ'$ such that $\pi(f)$ carries $Q$ to $Q'$ and induces an automorphism on $P$. \end{void}
\section{The Grothendieck construction} \label{sec homotopy colimits}
Throughout this paper we work simplicially, namely a ``space'' means a simplicial set. For further details, the reader is referred to Bousfield and Kan \cite{BK}, May \cite{May-ss}, Goerss and Jardine \cite{Goerss-sht} and many other sources. In this section we collect several results from general simplicial homotopy theory that we shall use repeatedly in the rest of this note.
\textbf{Homotopy colimits}\qua Fix a small category ${\mathbf{K}}$ and a functor $U\co{\mathbf{K}}\to\textbf{Spaces}$. The simplicial replacement of $U$ is the simplicial space $\coprod_*U$ which has in simplicial dimension $n$ the disjoint union of the spaces $U(K_0)$ for every chain $$K_0 \to K_1 \to \cdots \to K_n$$ of $n$ composable arrows in ${\mathbf{K}}$. The homotopy colimit of $U$ denoted $\hhocolim{{\mathbf{K}}}U$ is the diagonal of $\coprod_*U$ regarded as a bisimplicial set. See Bousfield and Kan \cite[Section~XII.5]{BK}.
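Two standard special cases may help calibrate this construction; they are classical facts, not claims made in the text above.

```latex
% (1) If U is the constant functor at a point, the simplicial
%     replacement is precisely the nerve of K, so
%         \hocolim_{\mathbf{K}} \, * \;\simeq\; |{\mathbf{K}}|.
% (2) If K is the pushout category  b \longleftarrow a \longrightarrow c,
%     then \hocolim_{K} U is the double mapping cylinder (the
%     homotopy pushout) of
%         U(b) \longleftarrow U(a) \longrightarrow U(c).
```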
Consider a functor $F\co{\mathbf{K}} \to {\mathbf{L}}$ between small categories. For a functor $U\co{\mathbf{L}}\to \textbf{Spaces}$ there is an obvious natural map, cf \cite[Section~XI.9]{BK}. $$ \hhocolim{{\mathbf{K}}} F^*U \to \hhocolim{{\mathbf{L}}} U. $$
For an object $L\in {\mathbf{L}}$, the comma category $(L \downarrow F)$ has the pairs $\smash{(K, L \xto{k\in{\mathbf{K}}} FK)}$ as its objects. Morphisms $(K,L \smash{\stackrel{\raisebox{-1pt}{\scriptsize$k$}}{\longrightarrow}} FK) \to (K',L \smash{\stackrel{\raisebox{-1pt}{\scriptsize$k'$}}{\longrightarrow}} FK')$ are the morphisms $x\co K \to K'$ such that $Fx \circ k = k'$. Similarly one defines the category $(F \downarrow L)$ whose object set consists of the pairs $(K,k\co FK\to L)$. Compare MacLane \cite{MacLane-working}.
\begin{defn} \label{def right cofinal} The functor $F\co{\mathbf{K}} \to {\mathbf{L}}$ is called right cofinal if for every object $L\in {\mathbf{L}}$ the category $(L \downarrow F)$ has a contractible nerve. \end{defn}
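A standard family of examples, not spelled out in the text: any functor admitting a left adjoint is right cofinal.

```latex
% If G is left adjoint to F, with unit \eta, then for each object L
% the pair
%     ( G(L),\ \eta_L \co L \to FG(L) )
% is an initial object of the comma category (L \downarrow F); its
% nerve is therefore contractible, so F is right cofinal.
% \fullref{adj cof lem} below is a relative of this observation.
```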
The following theorem was probably first proved by Quillen \cite[Theorem A]{Quillen}. See also Hollender and Vogt \cite[Section~4.4]{Ho-Vo} and Bousfield and Kan \cite[Section~XI.9]{BK}.
\begin{cofthm} \label{cofinal thm} Let $F\co{\mathbf{K}} \to {\mathbf{L}}$ be a right cofinal functor between small categories. Then for every functor $U\co{\mathbf{L}}\to\textbf{Spaces}$ the natural map $$\hhocolim{{\mathbf{K}}}F^*U\to\hhocolim{{\mathbf{L}}}U$$ is a weak homotopy equivalence. \end{cofthm}
Associated with a functor $U\co{\mathbf{K}}\to\textbf{Spaces}$ there is a functor $F_*U\co{\mathbf{L}}\to\textbf{Spaces}$ called the homotopy left Kan extension of $U$ along $F$. It is defined on every object $L\in{\mathbf{L}}$ by $$ F_*U(L) = \hocolim \Big( (F\downarrow L) \xto{\text{proj}} {\mathbf{K}} \xto{F} \textbf{Spaces} \Big). $$ See \cite[Section~5]{Ho-Vo}, \cite[Section~6]{Dw-Ka}. The following theorem is originally due to Segal. See eg \cite[Theorem 5.5]{Ho-Vo}.
\begin{pdthm} \label{pushdown thm} Fix a functor $F\co{\mathbf{K}}\to{\mathbf{L}}$ of small categories. Then for every functor $U\co{\mathbf{K}}\to\textbf{Spaces}$ there is a natural weak homotopy equivalence $$ \hhocolim{{\mathbf{L}}} F_*U \xto{ \ \ \simeq \ \ } \hhocolim{{\mathbf{K}}}U. $$ \end{pdthm}
\textbf{The Grothendieck construction}\qua Recall that a small category ${\mathbf{K}}$ gives rise to a simplicial set $\text{Nr}({\mathbf{K}})$ called the \emph{nerve} of ${\mathbf{K}}$. Its $n$--simplices are the chains of $n$ composable arrows
$K_0\to K_1\to\cdots\to K_n$ in ${\mathbf{K}}$. See, for example, Goerss and Jardine \cite[Example 1.4]{Goerss-sht} or Bousfield and Kan \cite[Section~XI.2]{BK}. We shall also use the notation $|{\mathbf{K}}|$ for the nerve of ${\mathbf{K}}$.
Given a functor $U\co{\mathbf{K}}\to\mathbf{Cat}$, Thomason \cite{Thomason} defined the translation category ${\mathbf{K}}\int U$ associated to $U$ as follows. The object set consists of pairs $(K,X)$ where $K$ is an object of ${\mathbf{K}}$ and $X$ is an object of $U(K)$. Morphisms $(K_0,X_0)\to(K_1,X_1)$ are pairs $(k,x)$ where $k\co K_0\to K_1$ is a morphism in ${\mathbf{K}}$ and $x\co U(k)(X_0) \to X_1$ is a morphism in $U(K_1)$. Composition of $(K_0,X_0) \smash{\raisebox{-2pt}{$\xto{(k_0,x_0)}$}} (K_1,X_1)$ and $(K_1,X_1) \smash{\raisebox{-2pt}{$\xto{(k_1,x_1)}$}} (K_2,X_2)$ is given by $$ (k_1,x_1)\circ(k_0,x_0) = (k_1\circ k_0, x_1\circ U(k_1)(x_0)). $$ This category is also called the Grothendieck construction of $U$ and the notation $\text{Tr}_{\mathbf{K}} U$ is also used. Thomason \cite{Thomason} shows that there is a natural weak homotopy equivalence \begin{equation} \label{Thomason map}
\eta\co \hhocolim{{\mathbf{K}}}\, |U| \xto{\ \ \simeq \ \ } |\text{Tr}_{{\mathbf{K}}}U| \end{equation}
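A familiar instance, which also motivates the homotopy orbit notation $|{-}|_{h\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})}$ appearing in \fullref{thmB}, is the following standard specialisation (not spelled out in the text).

```latex
% Let K = \mathcal{B}G be the category with one object and morphism
% group G, and let U send the object to a G-set X, viewed as a
% discrete category.  Then Tr_{\mathcal{B}G}(U) is the action
% groupoid of X: its objects are the elements of X, and the
% morphisms x \to gx are the pairs (g,\mathrm{id}).  Thomason's
% equivalence specialises to
%     \hocolim_{\mathcal{B}G} X \;\simeq\; |Tr_{\mathcal{B}G}(U)|
%                              \;\simeq\; X_{hG},
% the Borel construction EG \times_G X on X.
```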
A natural transformation $U\Rightarrow U'$ gives rise to a canonical functor $\text{Tr}_{\mathbf{K}} U \to \text{Tr}_{{\mathbf{K}}}U'$. The induced map $|\text{Tr}_{\mathbf{K}}(U)|\to|\text{Tr}_{\mathbf{K}}(U')|$ corresponds via
$\eta$ \eqref{Thomason map} to the induced map $\hhocolim{{\mathbf{K}}}\, |U| \to \hhocolim{{\mathbf{K}}}\, |U'|$. Furthermore, for every object $K$ in ${\mathbf{K}}$ the natural map $$
|U(K)| \to \hhocolim{{\mathbf{K}}}\, |U| $$ corresponds under \eqref{Thomason map} to the inclusion of categories \begin{equation} \label{translation cone} U(K) \to \text{Tr}_{{\mathbf{K}}}\, U, \qquad \text{ where } X \mapsto (K,X) \text{ and } x \mapsto (1_K,x). \end{equation}
Consider now a functor $F\co{\mathbf{K}}\to{\mathbf{L}}$ of small categories. Given $U\co {\mathbf{L}}\to\Cat$ there is a naturally defined functor \begin{equation} \label{def F shriek} F_! \co \text{Tr}_{\mathbf{K}} F^*U \to \text{Tr}_{\mathbf{L}} U, \qquad \text{where} \qquad \left\{ \begin{array}{l} F_!(K,X\in F^*U(K)) = (FK,X) \\ F_!(k,x) = (Fk,x). \end{array} \right. \end{equation}
The functor $F_!$ is a model for the map $\hocolim\,F^*|U| \to \hocolim\,|U|$ in the sense that the following square commutes $$ \begin{CD}
|\text{Tr}_{\mathbf{K}} F^*U| @>{\eta}>>
\hhocolim{{\mathbf{K}}} F^*|U| \\
@V{|F_!|}VV @VVV \\
|\text{Tr}_{\mathbf{L}} U| @>{\eta}>>
\hhocolim{{\mathbf{L}}} |U| \end{CD} $$
\begin{defn} \label{def catF star} For a functor $U\co {\mathbf{K}}\to\Cat$ define $F_*U\co {\mathbf{L}}\to\Cat$ by $$ F_*U(L) = \text{Tr} \bigl( (F\downarrow L) \xto{\text{proj}} {\mathbf{K}} \xto{U} \mathbf{Cat} \bigr). $$ \end{defn}
The maps $\eta$ \eqref{Thomason map} provide a natural weak homotopy equivalence
$F_*|U| \xto{\simeq} |F_*U|$. The equivalence in the pushdown theorem can be realized as the nerve of a functor between the translation categories as follows.
\begin{prop} \label{def F sharp} The functor $F_\# \co \text{Tr}_{\mathbf{L}} F_*U \to \text{Tr}_{\mathbf{K}} U$ defined by $$ \begin{array}{l} F_\# \co \big( L,(K,FK \to L),X\in UK \big) \mapsto (K,X) \\ F_\# \co \big( L \xto{\ell} L', K \xto{k} K', U(k)(X) \xto{x} X' \big) \mapsto (k,x). \end{array} $$ renders the following diagram commutative where the arrow at the top of the square is an equivalence by the pushdown theorem.
\begin{equation}
\disablesubscriptcorrection\xysavmatrix{
{\hhocolim{{\mathbf{L}}}}\, |F_*U| \ar[dr]^\eta_\simeq &
\hhocolim{{\mathbf{L}}}\, F_*|U| \ar[r]^{\hbox{\footnotesize$\sim$}} \ar[d]_{\hbox{\footnotesize$\sim$}} \ar[l]_{\hbox{\footnotesize$\sim$}} &
\hhocolim{{\mathbf{K}}}\, |U| \ar[d]_{\hbox{\footnotesize$\sim$}}^\eta \\ &
|\text{Tr}_{{\mathbf{L}}}(F_*U)|
\ar[r]_{|F_\#|} &
|\text{Tr}_{{\mathbf{K}}}(U)| } \end{equation} \end{prop}
It is useful to point out that if $\star\co {\mathbf{K}} \to \Cat$ is the constant functor on the trivial category with one object and an identity morphism, then $\text{Tr}_{\mathbf{K}}(\star)\cong{\mathbf{K}}$, so that $|\text{Tr}_{\mathbf{K}}(\star)|=\text{Nr}({\mathbf{K}})$.
\section{EI categories} \label{sec EI}
Fix an EI category $\ensuremath{\mathcal{A}}$, namely a category all of whose endomorphisms are isomorphisms. We shall assume that the category $\ensuremath{\mathcal{A}}$ is finite. We shall also assume that $\ensuremath{\mathcal{A}}$ is equipped with a height function, namely a function $h\co \Obj(\ensuremath{\mathcal{A}}) \to \mathbb{N}$ such that $h(A) \leq h(A')$ if there exists a morphism $A\to A'$ in $\ensuremath{\mathcal{A}}$ and equality holds if and only if $A\to A'$ is an isomorphism. Clearly, if $\ensuremath{\mathcal{A}}$ is an EI-category then so is $\ensuremath{\mathcal{A}}^\ensuremath{\mathrm{op}}$. The finiteness condition also implies that if $\ensuremath{\mathcal{A}}$ is heighted then so is $\ensuremath{\mathcal{A}}^\ensuremath{\mathrm{op}}$.
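Presumably the motivating example of such a category is a centric linking system; the following facts are standard consequences of \fullref{morphisms in L are mono and epi} and \fullref{unique factor in L}, although the text does not spell them out.

```latex
% Example: for a p-local finite group (S,F,L) the category L is a
% finite EI category.  Indeed, an endomorphism f \in L(P,P) projects
% to \pi(f) \in Aut_F(P), so the unique factorisation of f as an
% isomorphism followed by \iota_{P'}^{P} has
%     P' = \pi(f)(P) = P  and  \iota_P^P = id,
% whence f is itself an isomorphism.  A height function is
%     h(P) = \log_p |P|,
% since a morphism P \to Q covers a group monomorphism (so
% h(P) \le h(Q)), with equality exactly when the morphism is
% invertible.
```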
We can always choose a full subcategory $\ensuremath{\mathcal{A}}_{\sk}$ of $\ensuremath{\mathcal{A}}$ which contains one representative from each isomorphism class of objects in $\ensuremath{\mathcal{A}}$. We say that $\ensuremath{\mathcal{A}}_{\sk}$ is skeletal in $\ensuremath{\mathcal{A}}$. Clearly the inclusion $\ensuremath{\mathcal{A}}_{\sk}\subseteq \ensuremath{\mathcal{A}}$ is an equivalence of categories. In the language of S\l omi\'nska \cite{Slominska} $\ensuremath{\mathcal{A}}_{\sk}$ is an EIA category.
Throughout we let $\mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu$ denote the poset $\{0\to 1\to\cdots \to k\}$ considered as a small category.
\begin{defn} \label{def sA} The subdivision category $s(\ensuremath{\mathcal{A}})$ is the category whose objects are height increasing functors ${\mathbf{A}}\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu\to \ensuremath{\mathcal{A}}$, namely $h({\mathbf{A}}(i))<h({\mathbf{A}}(i+1))$ for all $i<k$. Morphisms ${\mathbf{A}} \to {\mathbf{A}}'$ in $s(\ensuremath{\mathcal{A}})$ are pairs $(\epsilon,\varphi)$ where $\epsilon\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu'\to\mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu$ is a strictly increasing function and $\varphi\co \epsilon^*({\mathbf{A}}) \to {\mathbf{A}}'$ is a natural isomorphism of functors $\mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu' \to \ensuremath{\mathcal{A}}$. Composition of $(\epsilon,\varphi)\co {\mathbf{A}}\to {\mathbf{A}}'$ and $(\epsilon',\varphi')\co {\mathbf{A}}' \to {\mathbf{A}}''$ is given by $(\epsilon\circ\epsilon',\varphi'\circ{\epsilon'}^*(\varphi))$. \end{defn}
Note that $\epsilon$ is determined by the heights of the values of ${\mathbf{A}}$ namely $\epsilon(i)=j$ if and only if $h({\mathbf{A}}'(i))=h({\mathbf{A}}(j))$.
We shall further assume that $\ensuremath{\mathcal{A}}$ contains a subcategory $\ensuremath{\mathcal{I}}$ which is a poset with the property that every morphism $\varphi\co A \to A'$ in $\ensuremath{\mathcal{A}}$ can be factored uniquely as $\varphi = \iota \varphi'$ where $\varphi'$ is an isomorphism in $\ensuremath{\mathcal{A}}$ and $\iota$ is a morphism in $\ensuremath{\mathcal{I}}$. The ladder $$ \disablesubscriptcorrection\xysavmatrix{ {\cdots} \ar[r] & {\mathbf{A}}(n-2) \ar[r]^{\varphi_{n-1}} \ar[d]_{(\varphi_n'\circ\varphi_{n-1})'}^\cong & {\mathbf{A}}(n-1) \ar[r]^{\varphi_n} \ar[d]_{\varphi_n'}^\cong & {\mathbf{A}}(n) \ar@{=}[d] \\ {\cdots} \ar[r] & {\mathbf{A}}'(n-2) \ar[r]_{\iota} & {\mathbf{A}}'(n-1) \ar[r]_\iota & {\mathbf{A}}'(n) } $$ shows that the full subcategory $s_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{A}})$ of $s(\ensuremath{\mathcal{A}})$ consisting of the objects ${\mathbf{A}}$ in which all the arrows ${\mathbf{A}}(i) \to {\mathbf{A}}(i+1)$ belong to $\ensuremath{\mathcal{I}}$ is a skeletal subcategory of $s(\ensuremath{\mathcal{A}})$. We obtain two skeletal subcategories of $s(\ensuremath{\mathcal{A}})$
\begin{equation} \label{skel sA} s(\ensuremath{\mathcal{A}}_{\sk}) \subseteq s(\ensuremath{\mathcal{A}}) \qquad \text{ and } \qquad s_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{A}}) \subseteq s(\ensuremath{\mathcal{A}}). \end{equation}
We observe that $\Hom_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}},{\mathbf{A}}')$ has a free action of $\Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}}')$ with a single orbit. Also every $(\epsilon,\varphi)\co {\mathbf{A}}\to {\mathbf{A}}'$ in $s(\ensuremath{\mathcal{A}})$ gives rise to a natural group homomorphism upon restriction and conjugation with the isomorphism $\varphi\co \epsilon^*{\mathbf{A}}\approx{\mathbf{A}}'$ \begin{equation} \label{sA auto maps} \varphi_*\co \Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}}) \to \Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}}'). \end{equation}
\begin{prop} \label{p cofinal} There is a right cofinal functor $p\co s(\ensuremath{\mathcal{A}}) \to \ensuremath{\mathcal{A}}$ defined by $$ p({\mathbf{A}}) = {\mathbf{A}}(0), \qquad \qquad ({\mathbf{A}}\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu \to \ensuremath{\mathcal{A}}). $$ \end{prop}
\begin{proof} S{\l}omi\'nska \cite[Proposition 1.5]{Slominska} shows that the functor $p\co s(\ensuremath{\mathcal{A}}_{\sk}) \to \ensuremath{\mathcal{A}}_{\sk}$ is right cofinal (\fullref{def right cofinal}) hence so is $p\co s(\ensuremath{\mathcal{A}}) \to \ensuremath{\mathcal{A}}$. \end{proof}
\begin{defn} \label{def barsA} The category $\bar{s}(\ensuremath{\mathcal{A}})$ has the isomorphism classes $[{\mathbf{A}}]$ of the objects of $s(\ensuremath{\mathcal{A}})$ as its object set. There is a unique morphism $[{\mathbf{A}}] \to [{\mathbf{A}}']$ if there exists a morphism ${\mathbf{A}} \to {\mathbf{A}}'$ in $s(\ensuremath{\mathcal{A}})$. There is an obvious projection functor $$ \pi\co s(\ensuremath{\mathcal{A}}) \to \bar{s}(\ensuremath{\mathcal{A}}), \qquad {\mathbf{A}} \mapsto [{\mathbf{A}}]. $$ When $\ensuremath{\mathcal{D}}$ is a full subcategory of $s(\ensuremath{\mathcal{A}})$ one obtains a sub-poset $\bar{\ensuremath{\mathcal{D}}}$ of $\bar{s}(\ensuremath{\mathcal{A}})$ whose objects are the isomorphism classes of the objects of $\ensuremath{\mathcal{D}}$. \end{defn}
Clearly $\bar{s}(\ensuremath{\mathcal{A}})$ is a poset and it should be compared with S\l omi\'nska's construction of $s_0(\ensuremath{\mathcal{A}})$ in \cite[Section~1]{Slominska}. Also note that $\bar{s}_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{A}})=\bar{s}(\ensuremath{\mathcal{A}})$ because $s_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{A}})$ is skeletal in $s(\ensuremath{\mathcal{A}})$. Similarly $\bar{s}(\ensuremath{\mathcal{A}}_{\sk})=\bar{s}(\ensuremath{\mathcal{A}})$.
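A toy example may justify the terminology; it is an illustration only, not taken from the text.

```latex
% Let A be the poset a_0 < a_1, with height h(a_i) = i.  The objects
% of s(A) are the chains (a_0), (a_1) and (a_0 \to a_1); the only
% non-identity morphisms go from the long chain to its subchains,
%     (a_0 \to a_1) \to (a_0)  and  (a_0 \to a_1) \to (a_1).
% All automorphism groups are trivial, so \bar{s}(A) = s(A), and the
% nerve |\bar{s}(A)| is the barycentric subdivision of the
% 1-simplex |A|; hence the name ``subdivision category''.
```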
\begin{lem} \label{adj cof lem} Let $J\co {\mathcal{C}} \to \ensuremath{\mathcal{D}}$ be a functor of small categories with a left adjoint $L\co \ensuremath{\mathcal{D}} \to {\mathcal{C}}$ such that $L \circ J=\textrm{Id}$. Then $J$ is right cofinal. \end{lem}
\begin{proof} Fix an object $d\in \ensuremath{\mathcal{D}}$. We have to prove that the category $(d \downarrow J)$ has a contractible nerve. Let $(d \downarrow \ensuremath{\mathcal{D}})$ denote the category $(d \downarrow 1_\ensuremath{\mathcal{D}})$. It clearly has a contractible nerve because it has an initial object. The functors $J$ and $L$ induce obvious functors \begin{eqnarray*} && J_* \co (d\downarrow J) \to (d\downarrow \ensuremath{\mathcal{D}}), \qquad (c, d\xto{f} Jc) \mapsto (Jc, d \xto{f} Jc) \\ && L_*\co (d\downarrow\ensuremath{\mathcal{D}}) \to (d\downarrow J), \qquad (d', d \xto{f} d') \mapsto (Ld', d \xto{f} d' \xto{\eta} JLd'). \end{eqnarray*} It is obvious that $L_*\circ J_*=\textrm{Id}$. Furthermore the unit $\eta\co \textrm{Id} \to JL$ gives rise to a natural transformation $\textrm{Id} \to J_*\circ L_*$. Therefore $J_*$ induces a homotopy equivalence on nerves so
$|(d\downarrow J)| \simeq |(d\downarrow \ensuremath{\mathcal{D}})|\simeq *$. \end{proof}
\begin{prop} \label{commapi} For every functor $F\co s(\ensuremath{\mathcal{A}}) \to\Cat$ and every ${\mathbf{A}}\in s(\ensuremath{\mathcal{A}})$ there is a functor $$ \text{Tr}_{\ensuremath{\mathcal{B}}\Aut({\mathbf{A}})} F({\mathbf{A}}) \to (\pi_* F)([{\mathbf{A}}]) $$ which induces a weak homotopy equivalence on nerves. It is natural in the sense that every morphism $(\epsilon,\varphi)\co {\mathbf{A}} \to {\mathbf{A}}'$ in $s(\ensuremath{\mathcal{A}})$ gives rise to a square $$ \disablesubscriptcorrection\xysavmatrix{ {\text{Tr}_{\ensuremath{\mathcal{B}}\Aut({\mathbf{A}})} F({\mathbf{A}})} \ar[r] \ar[d]_{\text{Tr}_{\varphi_*} F(\varphi)} & (\pi_*F)([{\mathbf{A}}]) \ar[d]^{\pi_*([\varphi])}
\ar@{}|-{\overset{\tau}{\Leftarrow}}[dl] \\ {\text{Tr}_{\ensuremath{\mathcal{B}}\Aut({\mathbf{A}}')} F({\mathbf{A}}')} \ar[r] & (\pi_*F)([{\mathbf{A}}']) } $$ which commutes up to a natural transformation $\tau$ which is functorial in $(\epsilon,\varphi)$. Here $\varphi_*\co \Aut({\mathbf{A}}) \to \Aut({\mathbf{A}}')$ is the homomorphism induced by restriction and conjugation by $\varphi\co \epsilon^*{\mathbf{A}} \approx {\mathbf{A}}'$ as described in \eqref{sA auto maps}. The square commutes on the nose if $F(\varphi)\co F({\mathbf{A}}) \longrightarrow F({\mathbf{A}}')$ is the identity. \end{prop}
\begin{proof} Fix an object ${\mathbf{A}}\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu \to \ensuremath{\mathcal{A}}$ in $s(\ensuremath{\mathcal{A}})$. Let $\Pi_{\mathbf{A}}$ be the full subcategory of $(\pi\downarrow [{\mathbf{A}}])$ consisting of the objects $({\mathbf{A}}',[{\mathbf{A}}']\xto{=} [{\mathbf{A}}])$. It is isomorphic to the full subcategory of $s(\ensuremath{\mathcal{A}})$ consisting of the isomorphism class of ${\mathbf{A}}$. The inclusion $J\co \Pi_{\mathbf{A}} \to (\pi\downarrow [{\mathbf{A}}])$ has a left adjoint $L$ where $$ L \co ({\mathbf{B}},[{\mathbf{B}}] \to [{\mathbf{A}}]) \mapsto \epsilon^*{\mathbf{B}}, \qquad \text{
where } [\epsilon^*{\mathbf{B}}]=[{\mathbf{A}}] \text{ for } \epsilon\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu' \hookrightarrow \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu. $$ Clearly $\epsilon$ is unique if it exists. There is a natural map ${\mathbf{B}} \to \epsilon^*{\mathbf{B}}$ induced by the identity on $\epsilon^*{\mathbf{B}}$ under the bijection $s(\ensuremath{\mathcal{A}})({\mathbf{B}},\epsilon^*{\mathbf{B}}) \approx s(\ensuremath{\mathcal{A}})(\epsilon^*{\mathbf{B}},\epsilon^*{\mathbf{B}})$. We obtain a natural transformation $\textrm{Id} \to JL$ which gives rise to bijections for every object $({\mathbf{A}}',[{\mathbf{A}}']\to [{\mathbf{A}}])$ in $(\pi \downarrow [{\mathbf{A}}])$
$$\Hom_{(\pi \downarrow [{\mathbf{A}}])}({\mathbf{B}},J{\mathbf{A}}') = \Hom_{s(\ensuremath{\mathcal{A}})}({\mathbf{B}},{\mathbf{A}}') \approx \Hom_{s(\ensuremath{\mathcal{A}})}(\epsilon^*{\mathbf{B}},{\mathbf{A}}') = \Hom_{\Pi_{\mathbf{A}}}(L{\mathbf{B}},{\mathbf{A}}').$$
Thus $L$ is left adjoint to $J$ and we apply \fullref{adj cof lem}. By definition $\Pi_{\mathbf{A}}$ is a connected groupoid with automorphism group $\Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}})$. Therefore upon realization, the functor $$ \text{Tr}\bigl( \ensuremath{\mathcal{B}}\Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}}){\to}s(\ensuremath{\mathcal{A}}) \xto{F} \mathbf{Cat} \bigr) \xto{\text{restriction}} \text{Tr}\bigl((\pi{\downarrow}[{\mathbf{A}}]){\to}s(\ensuremath{\mathcal{A}}) \xto{F} \mathbf{Cat}\bigr) = (\pi_*F)([{\mathbf{A}}]) $$ induces a weak homotopy equivalence. Also, for a morphism $\varphi\co {\mathbf{A}}\to {\mathbf{A}}'$ we get an obvious $\varphi_*\co \Pi_{\mathbf{A}} \to \Pi_{{\mathbf{A}}'}$ by restriction and conjugation by the isomorphism $\varphi\co \epsilon^*{\mathbf{A}} \to {\mathbf{A}}'$. It gives rise to the following diagram $$ \disablesubscriptcorrection\xysavmatrix{ {\ensuremath{\mathcal{B}}\Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}})} \ar[r]^{\text{incl}} \ar[d]^{\varphi_*} & \Pi_{\mathbf{A}} \ar[d]_{\varphi_*} \ar[r]^{J} & (\pi\downarrow [{\mathbf{A}}]) \ar[d]^{[\varphi]_*} \\ {\ensuremath{\mathcal{B}}\Aut_{s(\ensuremath{\mathcal{A}})}({\mathbf{A}}')} \ar[r]^{\text{incl}} & \Pi_{{\mathbf{A}}'} \ar[r]^J & (\pi\downarrow [{\mathbf{A}}']) } $$ The morphism $\varphi$ provides a canonical natural transformation $[\varphi]_*\circ J \circ \text{incl} \to J \circ \text{incl} \circ \varphi_*$. This provides the natural transformation $\tau$ in the statement of the proposition and its naturality in $\varphi$. If $F(\varphi)\co F({\mathbf{A}}) \longrightarrow F({\mathbf{A}}')$ is the identity, then $F(\tau)$ becomes the identity and the square in the statement of the proposition commutes. \end{proof}
\section{Proof of the main results}
Fix a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ and an $\ensuremath{\mathcal{F}}$--centric collection ${\mathcal{C}}$. Choose a subcategory $\ensuremath{\mathcal{I}} \subseteq \ensuremath{\mathcal{L}}^{\mathcal{C}}$ of distinguished inclusions, cf \fullref{iota morphisms}. Note that $\ensuremath{\mathcal{L}}^{\mathcal{C}}$ possesses a height function, see \fullref{sec EI}, by assigning to a subgroup $P$ in ${\mathcal{C}}$ its order. Also every morphism in $\ensuremath{\mathcal{L}}^{\mathcal{C}}$ factors uniquely as an isomorphism followed by a morphism in $\ensuremath{\mathcal{I}}$.
We claim that (Definitions \ref{def bsdc} and \ref{def barsA}) $$ {\mathrm{\bar{s}d}}{\mathcal{C}} = \bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}}). $$ To see this recall that $s_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{L}}^{\mathcal{C}})$ is a skeletal subcategory of $s(\ensuremath{\mathcal{L}}^{\mathcal{C}})$, see \eqref{skel sA}, hence $\bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}}) = \bar{s}_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{L}}^{\mathcal{C}})$. The functor $\pi\co \ensuremath{\mathcal{L}} \to \ensuremath{\mathcal{F}}$ gives a functor $\bar{s}_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{L}}^{\mathcal{C}}) \to {\mathrm{\bar{s}d}}{\mathcal{C}}$ because it maps the morphisms $\iota\in \ensuremath{\mathcal{I}}$ to inclusion of subgroups of $S$. It is an isomorphism of categories because conjugation \eqref{def k-simplices} of two $k$--simplices $P_0<\cdots<P_k$ and $P'_0<\cdots< P'_k$ induced by an isomorphism $\varphi_k\in\Iso_\ensuremath{\mathcal{F}}(P_k,P_k')$ can be lifted to an isomorphism of the corresponding objects in $s_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{L}}^{\mathcal{C}})$ by lifting the isomorphism $\varphi_k\co P_k \to P_k'$ to $\ensuremath{\mathcal{L}}$ and using \fullref{restn in L} to obtain the commutative ladder in $\ensuremath{\mathcal{L}}^{\mathcal{C}}$: $$ \disablesubscriptcorrection\xysavmatrix{ P_0 \ar[r]^{\iota} \ar[d]_\cong & P_1 \ar[r]^\iota \ar[d]_\cong & \cdots \ar[r]^{\iota} & P_k \ar[d]^{\tilde{\varphi_k}} \\ P_0' \ar[r]_\iota & P_1' \ar[r]^\iota & \cdots \ar[r]^{\iota} & P_k' } $$
We remark that a $k$--simplex $P_0<\cdots<P_k$ in ${\mathcal{C}}$ can be identified with the object $P_0 \xto{\iota} \cdots \xto{\iota} P_k$ of $s(\ensuremath{\mathcal{L}}^{\mathcal{C}})$. Under this identification we clearly have $$ \Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) = \Aut_{s(\ensuremath{\mathcal{L}}^{\mathcal{C}})}({\mathbf{P}}). $$
When $G$ is a discrete group we let $\ensuremath{\mathcal{B}} G$ denote the category with one object and $G$ as its set of morphisms.
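To fix ideas we record a standard observation (not needed for the proofs below, and stated here only for orientation): the nerve of $\ensuremath{\mathcal{B}} G$ is the bar construction on $G$, with $n$--simplices $G^{\times n}$, so its realization is the classifying space of $G$, $$ |\ensuremath{\mathcal{B}} G| \simeq BG. $$ In particular the realization of $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$ below is $B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$.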
For every $k$--simplex ${\mathbf{P}}$ in ${\mathcal{C}}$ we identify $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})$ with the obvious subcategory of $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}(P_0)$.
\begin{thm} \label{thmC} Let ${\mathcal{C}}$ be an $\ensuremath{\mathcal{F}}$--centric collection in a $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$. Then there exists a functor $\tilde{\delta}_{\mathcal{C}}\co {\mathrm{\bar{s}d}}{\mathcal{C}} \to \mathbf{Cat}$ with the following properties \begin{enumerate} \item There is a naturally defined functor $\text{Tr}_{{\mathrm{\bar{s}d}}{\mathcal{C}}}(\tilde{\delta}_{\mathcal{C}}) \to \ensuremath{\mathcal{L}}^{\mathcal{C}}$ which induces a weak homotopy equivalence on nerves. \label{thmC:target}
\item For every $k$--simplex ${\mathbf{P}}$ there is a canonical functor $ \ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \to \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}]) $ which induces a weak homotopy equivalence on nerves. If ${\mathbf{P}}'$ is a subsimplex of ${\mathbf{P}}$ then the following square commutes $$ \disablesubscriptcorrection\xysavmatrix{ {\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})} \ar[r] \ar[d]_{\res^{\mathbf{P}}_{{\mathbf{P}}'}} & \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}]) \ar[d] \\ {\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}')} \ar[r] & \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}']) } $$ \label{thmC:BPtype}
\item The natural inclusion $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \subseteq \ensuremath{\mathcal{L}}^{\mathcal{C}}$ is equal to the composition $$ \ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \to \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}]) \subseteq \text{Tr}_{{\mathrm{\bar{s}d}}{\mathcal{C}}}(\tilde{\delta}_{\mathcal{C}}) \to \ensuremath{\mathcal{L}}^{\mathcal{C}} $$ \label{thmC:augment}
\item An isomorphism of $k$--simplices $\psi\co {\mathbf{P}}'\xto{\approx} {\mathbf{P}}$ in $s(\ensuremath{\mathcal{L}}^{\mathcal{C}})$ induces a commutative square $$ \disablesubscriptcorrection\xysavmatrix{ {\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})} \ar[r] \ar[d]_{c_\psi} & \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}]) \ar[d] \\ {\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}')} \ar[r] & \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}']) } $$ \label{thmC:morphisms} \end{enumerate} \end{thm}
\begin{proof} We have seen that ${\mathrm{\bar{s}d}}{\mathcal{C}} = \bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}})$. Let $\star\co \bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}}) \to \Cat$ denote the constant functor on the trivial small category with one object and identity morphism. Use the projection functor $\pi\co s(\ensuremath{\mathcal{L}}^{\mathcal{C}}) \to \bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}})$ to define $$ \tilde{\delta}_{\mathcal{C}} = \pi_*(\star) $$ According to \fullref{commapi} we have a canonical functor $$ \ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) = \text{Tr}_{\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}})}(\star) \to \pi_*(\star)([{\mathbf{P}}]) = \tilde{\delta}_{\mathcal{C}}([{\mathbf{P}}]) $$ which induces a weak homotopy equivalence. Since $\star$ is constant, the square in the statement of \fullref{commapi} commutes and we obtain the naturality assertions in point \eqref{thmC:BPtype} and \eqref{thmC:morphisms}. The natural functor of \eqref{thmC:target} is defined using \fullref{def F sharp} and \fullref{p cofinal} by $$ \text{Tr}_{{\mathrm{\bar{s}d}}{\mathcal{C}}} (\tilde{\delta}_{\mathcal{C}}) = \text{Tr}_{\bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}})} (\pi_*(\star)) \xto{ \ \pi_\# \ } \text{Tr}_{s(\ensuremath{\mathcal{L}}^{\mathcal{C}})}(\star) = s(\ensuremath{\mathcal{L}}^{\mathcal{C}}) \xto{p} \ensuremath{\mathcal{L}}^{\mathcal{C}}. $$ It induces a weak homotopy equivalence by \fullref{pushdown thm}, \fullref{def F sharp} and \fullref{cofinal thm}. Whence point \eqref{thmC:target}. Inspection of the functor $\pi_\#$, the inclusion $\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}) \subseteq (\pi\downarrow [{\mathbf{P}}]) = \pi_*(\star)([{\mathbf{P}}])$ and \fullref{translation cone} yield point \eqref{thmC:augment} \end{proof}
\begin{proof}[Proof of \fullref{thmA}]
Apply \fullref{thmC} above and define $\delta_{\mathcal{C}} = |\tilde{\delta}_{\mathcal{C}}|$. \end{proof}
\begin{void} \label{compare with Dwyer} We now relate the construction in \fullref{thmA} to Dwyer's normaliser decomposition \cite[Section~3]{Dwyer-decomp}. We will show that the two functors are related by a zigzag of natural transformations which induce a mod--$p$ equivalence.
Fix a finite group $G$ and the $p$--local finite group $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$ associated with it. A collection ${\mathcal{C}}$ of $\ensuremath{\mathcal{F}}$--centric subgroups of $S$ gives rise to a $G$--collection $\ensuremath{\mathcal{H}}$ of $p$--centric subgroups of $G$ (cf \cite[Section~1.19]{Dwyer-decomp}, \fullref{non exotic examples}) by taking all the $G$--conjugates of the elements of ${\mathcal{C}}$. We let $\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}$ denote the transporter category of $\ensuremath{\mathcal{H}}$. That is, the object set of $\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}$ is $\ensuremath{\mathcal{H}}$ and the morphism set $\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}(H,K)$ is the set $N_G(H,K) = \{ g\in G : g^{-1}Hg\leq K\}$. We also let $\ensuremath{\mathcal{T}}^{\mathcal{C}}$ denote the full subcategory of $\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}$ having ${\mathcal{C}}$ as its object set. Almost by definition $\ensuremath{\mathcal{T}}^{\mathcal{C}}$ is skeletal in $\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}$. We also obtain a zigzag of functors (see \fullref{non exotic examples}) $$ \ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}} \leftarrow \ensuremath{\mathcal{T}}^{\mathcal{C}} \to \ensuremath{\mathcal{L}}^{\mathcal{C}}. $$ Dwyer \cite[Section~3]{Dwyer-decomp} defines a category ${\mathrm{\bar{s}d}}\ensuremath{\mathcal{H}}$ whose objects are the $G$--conjugacy classes $[{\mathbf{H}}]$ of the $k$--simplices $H_0<\cdots<H_k$ in $\ensuremath{\mathcal{H}}$. There is a unique morphism $[{\mathbf{H}}] \to [{\mathbf{H}}']$ in ${\mathrm{\bar{s}d}}\ensuremath{\mathcal{H}}$ if and only if ${\mathbf{H}}'$ is conjugate in $G$ to a subsimplex of ${\mathbf{H}}$. 
It follows directly from the definition of $\ensuremath{\mathcal{H}}$ as the smallest $G$--collection containing ${\mathcal{C}}$ and from the definition of $\ensuremath{\mathcal{F}}=\ensuremath{\mathcal{F}}_S(G)$ that ${\mathrm{\bar{s}d}}\ensuremath{\mathcal{H}}={\mathrm{\bar{s}d}}{\mathcal{C}}$. We obtain a commutative diagram (see \fullref{def sA}) $$ \disablesubscriptcorrection\xysavmatrix{ s(\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}) \ar[d]_{\pi_2} & s(\ensuremath{\mathcal{T}}^{\mathcal{C}}) \ar[r] \ar[d]_{\pi_1} \ar[l]_{\supseteq} & s(\ensuremath{\mathcal{L}}^{\mathcal{C}}) \ar[d]^\pi \\ {\bar{s}}(\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}}) & \bar{s}(\ensuremath{\mathcal{T}}^{\mathcal{C}}) \ar@{=}[l] \ar@{=}[r] & \bar{s}(\ensuremath{\mathcal{L}}^{\mathcal{C}}). } $$ Fix a $k$--simplex ${\mathbf{P}}=P_0<\cdots<P_k$ in ${\mathcal{C}}$. Note that $(\pi_2\downarrow [{\mathbf{P}}])$ is isomorphic to the subcategory of $s(\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}})$ of the objects ${\mathbf{P}}'$ which admit a morphism to ${\mathbf{P}}$. It contains a full subcategory $\Pi_{\mathbf{P}}$ of the objects of $s(\ensuremath{\mathcal{T}}^\ensuremath{\mathcal{H}})$ that are isomorphic to ${\mathbf{P}}$; cf the proof of \fullref{commapi}. By inspection $\Pi_{\mathbf{P}}$ is the translation category of the action of $G$ on the orbit of ${\mathbf{P}}$, that is, it is the translation category of the $G$--set $[{\mathbf{P}}]$ in $\ensuremath{\mathcal{H}}$ thought of as a functor $\ensuremath{\mathcal{B}} G\to \mathbf{Sets}$, cf \cite[Section~3.3]{Dwyer-decomp}. Thus $$ \delta^{\text{Dwyer}}_\ensuremath{\mathcal{H}}([{\mathbf{P}}]) = EG \times_G [{\mathbf{P}}] = \text{Nr}(\Pi_{\mathbf{P}}).
$$ The inclusion $J\co \Pi_{\mathbf{P}}\hookrightarrow (\pi_2 \downarrow [{\mathbf{P}}])$ has a left adjoint $L\co ({\mathbf{P}}', [{\mathbf{P}}'] \to [{\mathbf{P}}]) \mapsto \epsilon^*{\mathbf{P}}'$ where $(\epsilon,\varphi)\co {\mathbf{P}}' \to {\mathbf{P}}$ is a morphism in $s(\ensuremath{\mathcal{L}}^{{\mathcal{C}}})$, see \fullref{def sA}. Compare the proof of \fullref{commapi}. \fullref{adj cof lem} implies that $J$ is right cofinal. We obtain a zigzag of functors $$ \delta^{\text{Dwyer}}_\ensuremath{\mathcal{H}} \xrightarrow[\simeq]{\phantom{-----}}
|(\pi_2)_*(\star)|
\xleftarrow[\simeq]{\ \text{incl} \ } |(\pi_1)_*(\star)| \xto{\text{ mod-}p \ }
|\pi_*(\star)| = |\tilde{\delta}_{\mathcal{C}}| = \delta_{\mathcal{C}}. $$ The third map induces a mod--$p$ equivalence by the following argument. For any object $[{\mathbf{P}}]$ we obtain a map $(\pi_1)_*(\star)([{\mathbf{P}}]) \to \pi_*(\star)([{\mathbf{P}}])$ which by \fullref{commapi} is equivalent to the map $$ B\Aut_G({\mathbf{P}}) \to B\Aut_\ensuremath{\mathcal{L}}({\mathbf{P}}). $$ Since $P_0$ is $p$--centric, $C_G(P_0)=Z(P_0)\times C_G'(P_0)$, where $C_G'(P_0)$ is a characteristic $p'$--subgroup of $C_G(P_0)$ and $\Aut_\ensuremath{\mathcal{L}}(P_0) = N_G(P_0)/C_G'(P_0)$. Therefore $\Aut_G({\mathbf{P}}) \to \Aut_G({\mathbf{P}})/C_G'(P_0)$ induces a mod--$p$ equivalence as needed. \end{void}
We shall now prove \fullref{thmB}. Fix an $\ensuremath{\mathcal{F}}$--centric collection ${\mathcal{C}}$ and a collection $\ensuremath{\mathcal{E}}$ of elementary abelian subgroups in $(S,\ensuremath{\mathcal{F}},\ensuremath{\mathcal{L}})$. Recall from \fullref{def barcl} and \fullref{def NoL} the definitions of $\bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};E_k)$ and $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ where ${\mathbf{E}}$ is a $k$--simplex in $\ensuremath{\mathcal{E}}$.
\begin{prop} \label{NoTrCbar} Fix a $k$--simplex ${\mathbf{E}}$ in $\ensuremath{\mathcal{E}}$, namely a functor ${\mathbf{E}}\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu \to \ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}$. There is a functor $$ \epsilon\co \breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}}) \to
\text{Tr}_{\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})} \Big(\bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}}) \Big) $$ which is fully faithful and natural with respect to inclusion of simplices. If $E_k$ is fully $\ensuremath{\mathcal{F}}$--centralised, its image is a skeletal subcategory and in particular $\epsilon$ induces a homotopy equivalence $
|\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})| \xto{ \ \ \simeq \ \ }
|\bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};E_k)|_{h\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})}. $ \end{prop}
\begin{proof} The objects of $\ensuremath{\mathcal{H}}:=\ensuremath{\mathcal{B}}\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})\int \bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ are pairs $(P,f)$ where $P\in {\mathcal{C}}$ and $f \in \ensuremath{\mathcal{F}}(E,Z(P))$. Morphisms are pairs $(\varphi,g)$ where $\varphi\in \ensuremath{\mathcal{L}}(P,P')$ and $g\in\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}})$ such that $f'=\pi(\varphi)\circ f \circ g$ (see \fullref{sec homotopy colimits}). Since $f,f'$ are monomorphisms, $g$ is determined by $\varphi$. Define $\epsilon\co \breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}}) \to \ensuremath{\mathcal{H}}$ by
\begin{eqnarray}
&&\epsilon(P) = (P,E \xto{\text{incl}} Z(P) \leq P) \\ \nonumber
&&\epsilon( P\xto{\varphi} P') = (P\xto{\varphi} P', \pi(\varphi)|_{E_k}^{-1}). \end{eqnarray} It is well defined and fully faithful by the definition of $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$. Naturality with respect to inclusion of simplices is readily verified. Consider an object $(P,f) \in \ensuremath{\mathcal{H}}$. Note that $f(E_k)\leq Z(P)$, hence for $g:=f^{-1}\in \Iso_\ensuremath{\mathcal{F}}(f(E_k),E_k)$ we must have (see \fullref{def sat fus}) $N_g \supseteq C_S(f(E_k)) \supseteq P$. By axiom II of \fullref{def sat fus} we can extend $g$ to an isomorphism $h\co P \to P'$ in $\ensuremath{\mathcal{F}}$. Clearly $P'$ is in ${\mathcal{C}}$ because the latter is a collection. Fix a lift $\tilde{h}\in\ensuremath{\mathcal{L}}(P,P')$ for $h$. We have $\pi(\tilde{h})\circ f = h \circ \smash{\text{incl}^P_{f(E_k)}} \circ g^{-1} = \smash{\text{incl}_{E_k}^{P'}}$. Therefore $(\textrm{id}_{E_k},\tilde{h})$ is an isomorphism $(P,f) \cong \bigl(P',\smash{\text{incl}_{E_k}^{P'}}\bigr)$ in $\ensuremath{\mathcal{H}}$. This shows that $\epsilon$ embeds $\breve{N}_\ensuremath{\mathcal{L}}({\mathcal{C}};{\mathbf{E}})$ into a skeletal subcategory of $\ensuremath{\mathcal{H}}$ and the result follows. \end{proof}
\begin{proof}[Proof of \fullref{thmB}] For every $P\in{\mathcal{C}}$ define $\zeta(P)=\Omega_pZ(P)$, see \fullref{def omega_p}. Note that if $f\co P\to P'$ is an isomorphism in $\ensuremath{\mathcal{F}}$ then $f^{-1}\co \zeta(P') \to \zeta(P)$ is an isomorphism in $\ensuremath{\mathcal{F}}$. Also if $P \leq P'$ in ${\mathcal{C}}$ then $Z(P') \leq Z(P)$ because $P$ and $P'$ are $\ensuremath{\mathcal{F}}$--centric so their centres are equal to their $S$--centralisers. It easily follows that this assignment forms a functor $$ \zeta\co \ensuremath{\mathcal{L}}^{\mathcal{C}} \to {\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}}. $$ Fix $E\in\ensuremath{\mathcal{E}}$. Since every homomorphism $f\co E \to Z(P)$ factors through $\zeta(P)$ we see that $\bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};E) =(\zeta \downarrow E)$. In particular \eqref{def catF star} \begin{equation} \label{kan zeta} \zeta_*(\star) \co E \mapsto \bar{C}_\ensuremath{\mathcal{L}}({\mathcal{C}};E). \end{equation}
We now observe that $\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}$ is an EI-category. The assignment $E \mapsto |E|$ gives rise to a height function in the sense of \fullref{sec EI}. Furthermore the set $\ensuremath{\mathcal{I}}$ of inclusions in $\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}$ forms a poset where every morphism in $\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}$ factors uniquely as an isomorphism followed by an element in $\ensuremath{\mathcal{I}}$. We conclude that $$ \bar{s}\ensuremath{\mathcal{E}} = \bar{s}_\ensuremath{\mathcal{I}}(\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}) \approx \bar{s}(\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}). $$ There is an isomorphism $\tau\co s({\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}}) \to s(\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}})$ which is the identity on objects. On the morphism set between ${\mathbf{E}}\co \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu\to\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}$ and ${\mathbf{E}}'\co \mskip0mu\underline{\mskip-0mu{n}\mskip-2mu}\mskip2mu\to\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}$ such that $\epsilon^*{\mathbf{E}}\approx {\mathbf{E}}'$ for some injective $\epsilon\co \mskip0mu\underline{\mskip-0mu{n}\mskip-2mu}\mskip2mu \to \mskip0mu\underline{\mskip-0mu{k}\mskip-3mu}\mskip3mu$ (see \fullref{def sA}), $\tau$ has the effect \begin{multline*} \Hom_{s({\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}})}({\mathbf{E}},{\mathbf{E}}') = \Iso_{{\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}}}(\epsilon^*{\mathbf{E}},{\mathbf{E}}') = \\ \Iso_{\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}({\mathbf{E}}',\epsilon^*{\mathbf{E}}) \xrightarrow[\approx]{\ \ \ f \mapsto f^{-1} \ \ } \Iso_{\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}(\epsilon^*{\mathbf{E}},{\mathbf{E}}') = 
\Hom_{s(\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}})}({\mathbf{E}},{\mathbf{E}}'). \end{multline*} The functor $p\co s({\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}}) \to {\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}}$ of \fullref{p cofinal} fits into a commutative diagram $$ \disablesubscriptcorrection\xysavmatrix{ s({\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}}) \ar[r]^p \ar[d]_\tau^\approx & {\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}} \ar@{=}[d] \\ s(\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}) \ar[r]^\mu & {\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}} } $$ where $\mu({\mathbf{E}})=E_k$. Since $\tau$ is an isomorphism, $\mu$ is right cofinal. We obtain a zigzag of functors $$ {\mathrm{\bar{s}d}}\ensuremath{\mathcal{E}} \xleftarrow{\ \ \pi \ \ } s(\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}) \xto{\ \ \mu \ \ } {\ensuremath{\mathcal{F}}^\ensuremath{\mathcal{E}}}^\ensuremath{\mathrm{op}} \xleftarrow{ \ \ \zeta \ \ } \ensuremath{\mathcal{L}}^{\mathcal{C}} \xto{\ \ \star \ \ } \mathbf{Cat}. $$ Define $$ \tilde{\delta}_\ensuremath{\mathcal{E}}= \pi_*\circ \mu^* \circ \zeta_*(\star), \qquad \text{ and } \qquad
\delta_\ensuremath{\mathcal{E}}=|\tilde{\delta}_\ensuremath{\mathcal{E}}|. $$ Since $\mu$ is right cofinal, the \fullref{cofinal thm} and \fullref{pushdown thm} imply a weak homotopy equivalence $$
\hhocolim{{\mathrm{\bar{s}d}}\ensuremath{\mathcal{E}}} \, \delta_\ensuremath{\mathcal{E}} \xto{\simeq} |\ensuremath{\mathcal{L}}^{\mathcal{C}}|. $$ This is point \eqref{thmB:target} of the theorem. Inspection of \eqref{translation cone}, \fullref{def F sharp} and \eqref{def F shriek} shows that this equivalence is given as the realization of a functor $\text{Tr}_{{\mathrm{\bar{s}d}}\ensuremath{\mathcal{E}}}\tilde{\delta}_\ensuremath{\mathcal{E}} \to \ensuremath{\mathcal{L}}^{\mathcal{C}}$ where \begin{eqnarray*}
({\mathbf{E}}, E_k \xto{f} P) &\mapsto& P \\
({\mathbf{E}} \to {\mathbf{E}}', g\in\Aut_\ensuremath{\mathcal{F}}({\mathbf{E}}'), P \xto{\varphi\in\ensuremath{\mathcal{L}}} P') &\mapsto& \varphi. \end{eqnarray*} Point \eqref{thmB:terms} follows from \eqref{kan zeta} and \fullref{commapi}. Point \eqref{thmB:augment} follows from \fullref{No equiv orbit}, inspection of $\epsilon$ in \fullref{NoTrCbar}, of \eqref{kan zeta} and the functor $\text{Tr}_{{\mathrm{\bar{s}d}}\ensuremath{\mathcal{E}}}(\tilde{\delta}_\ensuremath{\mathcal{E}}) \to \ensuremath{\mathcal{L}}^{\mathcal{C}}$. Point \eqref{thmB:maps} is a consequence of \fullref{commapi}.
\end{proof}
\end{document}
Nurse egg consumption and intracapsular development in the common whelk Buccinum undatum (Linnaeus 1758)
Kathryn E. Smith & Sven Thatje
Helgoland Marine Research volume 67, pages 109–120 (2013)
Intracapsular development is common in marine gastropods. In many species, embryos develop alongside nurse eggs, which provide nutrition during ontogeny. The common whelk Buccinum undatum is a commercially important North Atlantic shallow-water gastropod. Development is intracapsular in this species, with individuals hatching as crawling juveniles. While its reproductive cycle has been well documented, further work is necessary to provide a complete description of encapsulated development. Here, using B. undatum egg masses from the south coast of England, intracapsular development at 6 °C is described. Numbers of eggs, veligers and juveniles per capsule are compared, and nurse egg partitioning, timing of nurse egg consumption and intracapsular size differences through development are discussed. Total development took between 133 and 140 days, over which 7 ontogenetic stages were identified. The numbers of both eggs and veligers were significantly related to capsule volume, with approximately 1 % of eggs developing per capsule. Each early veliger consumed nurse eggs rapidly over just 3–7 days. Within each capsule, initial development was asynchronous, but it became synchronous during the veliger stage. No evidence for cannibalism was found during development, but large size differences between embryos developing within each capsule were observed, and occasionally 'empty' veligers were seen, which had not successfully consumed any nurse eggs. These results indicate a high level of competition for nurse eggs within each capsule during development in the common whelk. The initial differences observed in nurse egg uptake may affect individual predisposition in later life.
Many marine gastropods undergo intracapsular development inside egg capsules (Thorson 1950; Natarajan 1957; D'Asaro 1970; Fretter and Graham 1985). Embryos develop within the protective walls of a capsule that safeguards against factors such as physical stress, predation, infection and salinity changes (Thorson 1950; Pechenik 1983, 1999; Strathmann 1985; Rawlings 1995, 1999). Periods of encapsulation vary; some species are released as veligers and undergo a planktonic stage before reaching adult life (mixed development), while others display direct development, hatching from capsules as crawling juveniles (Natarajan 1957; D'Asaro 1970; Pechenik 1979). When direct development occurs, embryos are often accompanied in a capsule by nurse eggs, non-developing food eggs, which provide nutrition during development (Thorson 1950; Spight 1976b; Rivest 1983; Lahbib et al. 2010). These are usually indistinguishable from embryos in the very early stage of ontogeny and are consumed during development, potentially increasing size of juveniles at hatching (Thorson 1950). In some species, nutrition may also be provided by intracapsular fluid or protein from capsule walls (Bayne 1968; Stöckmann-Bosbach 1988; Moran 1999; Ojeda and Chaparro 2004).
Generally speaking, nurse egg consumption occurs over a period of several weeks or months. It commences some weeks into development as embryos form, and nurse eggs are then slowly consumed throughout much of development (Chaparro and Paschke 1990; Ilano et al. 2004; Lahbib et al. 2010). The number of nurse eggs consumed during this period varies across species. Ratios range from 1.7 nurse eggs per embryo in the Pacific shallow-water muricid Acanthinucella spirata (Spight 1976a), to between 50,000 and 100,000 nurse eggs per embryo in the North Atlantic deep-sea buccinid Volutopsius norwegicus (Thorson 1950). Often, within a species, the nurse egg to embryo ratio varies from capsule to capsule within one clutch (Thorson 1950; Spight 1976a). For example, Rivest (1983) found this ratio in the buccinid Lirabuccinum dirum to vary from 11 to 46 across capsules. Similar differences have been reported for other gastropods (Natarajan 1957; Spight 1976a). Within a capsule however, there is usually little variation in the number of nurse eggs ingested by each embryo, with all embryos generally being equal in their ability to consume. Any differences observed are minimal, and juveniles hatching from each capsule are normally of a very similar size (Natarajan 1957; Spight 1976a; Rivest 1983; Chaparro and Paschke 1990; Chaparro et al. 1999; Lloyd and Gosselin 2007). Large size differences amongst capsulemates are unusual, but have been reported in some species of muricid gastropod (Gallardo 1979; González and Gallardo 1999; Cumplido et al. 2011). In gastropods, the number of eggs inside a capsule is usually positively related to capsule size. Within a species, larger capsules hold more eggs and more developing embryos (Gallardo 1979; Pechenik et al. 1984; Miloslavich and Dufresne 1994). 
The relationship between capsule size and number of eggs (including nurse eggs) has, however, previously been shown to be stronger than the relationship between capsule size and number of developing embryos (Spight 1976b). In some cases, the number of developing embryos within a capsule has been found to be independent of capsule volume. This suggests that embryos are distributed at random, while nurse eggs are regularly placed amongst capsules (Rivest 1983; Chaparro et al. 1999).
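The nurse-egg ratios quoted above translate directly into the fraction of eggs in a capsule that actually develop. The short sketch below is purely illustrative and not part of the studies cited; the function name and the assumption of a fixed ratio per capsule are ours:

```python
def developing_fraction(nurse_eggs_per_embryo):
    """Fraction of the eggs deposited in a capsule that develop,
    assuming each developing embryo is accompanied by a fixed
    number of nurse eggs."""
    return 1.0 / (1.0 + nurse_eggs_per_embryo)

# Ratios of 11 and 46 nurse eggs per embryo, the per-capsule range
# reported for Lirabuccinum dirum (Rivest 1983), imply roughly
# 8 % and 2 % of eggs developing, respectively.
print(round(developing_fraction(11) * 100, 1))  # 8.3
print(round(developing_fraction(46) * 100, 1))  # 2.1
```

On the same reading, the approximately 1 % of eggs found to develop per capsule in B. undatum (see the abstract above) would correspond to on the order of a hundred nurse eggs per developing embryo.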
Intracapsular development and nurse egg and embryo partitioning have been investigated in several species of marine gastropod (Natarajan 1957; D'Asaro 1970; Spight 1976a; Rivest 1983; Cumplido et al. 2011). While some attempts have been made to examine encapsulated development in the common whelk Buccinum undatum (Portmann 1925; Fretter and Graham 1985; Nasution 2003), it has not yet been fully described. Nasution (2003) gives the most in-depth account of development to date, but his descriptions are incomplete and his reports of nurse egg consumption do not match our observations. Descriptions from Portmann (1925) better fit our observations but lack detail. There are also gaps in the current literature, and very limited knowledge exists on nurse egg partitioning and intracapsular embryo size ranges through development. The common whelk is a scavenger found widespread in coastal areas in the North Atlantic. It is generally found from the shallow subtidal down to a few hundred metres of water depth (Valentinsson et al. 1999; Valentinsson 2002; Rosenberg 2009), with a latitudinal range from 38°N to 79°N spanning the North Atlantic and Arctic Oceans (OBIS http://iobis.org/mapper/?taxon=Buccinumundatum). Buccinum undatum is an important commercial species, providing locally valuable fisheries in several areas around the North Atlantic including the UK, the USA and Canada (Hancock 1967; Morel and Bossy 2004). It has been suggested as a good candidate for aquaculture (Nasution and Roberts 2004) and globally, demand for it is continuously increasing (Department of Marine Resources www.maine.gov/dmr/rm/whelks.html). Its reproductive cycle has been well documented across its range (Hancock 1967; Martel et al. 1986a, b; Kideys et al. 1993; Valentinsson 2002). Females group to deposit small creamy coloured spherical egg capsules (Martel et al. 1986a). 
Each lays approximately 80–150, which collectively can create large egg masses of hundreds to thousands of capsules (Fretter and Graham 1985; Valentinsson 2002). The time of year for spawning varies in this species across its distribution. In coastal waters of the UK, egg capsules are laid during the autumn and winter months (predominantly late November–January) as annual water temperatures drop below 9 °C (Hancock 1967; Kideys et al. 1993). In the northwest Atlantic, egg laying instead takes place in spring (late May to mid July) as water temperatures warm (approximately 2–3 °C) (Martel et al. 1986a). Intracapsular development takes between 2.5 and 9 months across the species range (Fretter and Graham 1985; Martel et al. 1986a; Kideys et al. 1993; Nasution 2003). Given the widespread distribution of B. undatum, its current commercial importance and its potential as a future candidate for aquaculture, it is important to understand fully the development in this species.
Here, we examine intracapsular development in B. undatum using a population from the south coast of England, at the southern end of the species distribution. Number of eggs and number of developing veligers and juveniles are examined through development. Ontogenetic stages are described in detail including nurse egg partitioning, nurse egg consumption and intracapsular ranges in embryo sizes.
In order to study the intracapsular development in B. undatum, 150 adults were collected from Viviers UK in late November 2009 (www.fishmarketportsmouth.co.uk). Adults were originally gathered from the Solent, UK (50°39′ N, 001°37′ W) from approximately 15 m water depth by Viviers using whelk traps. They were taken to the aquarium at the National Oceanography Centre, Southampton, and placed in a large outdoor tank with continuous seawater flow-through. Whelks were fed scrap fish ad libitum 3 times a week, and the tank was checked daily for laying activity. Egg laying occurred between early December 2009 and early February 2010, predominantly when water temperatures fell below 8 °C. All egg masses were laid on aquarium walls within a few centimetres of the water line.
Three egg masses laid in early January were removed for examination through development. Each was left undisturbed for 24 h after egg laying had ceased before being removed from the aquarium walls and maintained in 1 μm filtered seawater at 6 °C. This was close to local water temperatures, which ranged from 4.0 to 8.3 °C between January and March 2010 (local temperature data obtained from the Bramblemet (www.bramblemet.co.uk/) and CEFAS (www.cefas.defra.gov.uk/our-science/observing-and-modelling/monitoring-programmes/sea-temperature-and-salinity-trends/presentation-of-results/station-22-fawley-ps.aspx) databases). Each week, 3 capsules were randomly selected and dissected from each egg mass (Fig. 1a). For each egg mass, the outer layer of egg capsules was removed prior to any examination, as these are often empty or hold a very small number of eggs. The contents of each capsule were examined, the ontogenetic stage was described and eggs or embryos were measured along their longest axis using an eyepiece graticule. When a capsule contained loose eggs, approximately 20 were measured per capsule. When embryos of any age were present, all were measured (on average 9–11). From the trochophore stage and for the duration of nurse egg feeding, 3 capsules per egg mass were examined daily to determine the duration of short ontogenetic stages and the time taken to consume nurse eggs. Each egg mass was also examined non-invasively each week. The transparency of the capsule wall allowed the approximate ontogenetic stage to be determined, and the percentage of the mass at each developmental stage was estimated (Fig. 1b). From this, embryonic development was described, including ontogenetic stages, developmental timing, change in embryo size, nurse egg partitioning and intracapsular size differences during development. Ontogenetic stages were defined as egg, trochophore, early veliger, veliger, pediveliger, pre-hatching juvenile and hatching juvenile (see below for descriptions).
a Egg mass of B. undatum showing individual capsules. b A large individual egg capsule showing many developed embryos inside, post nurse egg consumption. Scale bars represent 5 mm. c capsule, e embryo
Intracapsular contents through development
In order to investigate the intracapsular contents, B. undatum egg masses were collected from Southampton Water (Southampton, UK, 50°50′ N, 001°19′ W) from approximately 10 m water depth between January and March, 2009 and 2010. Seawater temperatures ranged from 4 to 10 °C during these periods. Collection took place using beam trawls deployed by the University of Southampton research vessel RV Callista. In total, 35 egg masses were collected, all of which were fixed in 4 % formalin for later investigation.
Capsules were selected at random from all 35 egg masses. As above, the outer layer of each egg mass was removed prior to this. Buccinum undatum egg capsules are relatively ellipsoid in shape, with a convex/concave face (Fig. 1a, b). Each capsule was measured in three dimensions (length, width, depth; ±0.01 mm) using digital calipers (Absolute Digimatic caliper, Mitutoyo (UK) Ltd, Andover, UK). From these measurements, the volume of each egg capsule was estimated using an adaptation of the equations used by Pechenik (1983) and Rawlings (1990). The following equation was used.
$$ V = \left( {\pi \, ab} \right) \times c $$
where a = length/2, b = width/2 and c = depth.
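As a worked sketch of this estimate (assuming all measurements are in mm; the function name `capsule_volume` is ours, not from the original study):

```python
import math

def capsule_volume(length, width, depth):
    """Estimate egg capsule volume (mm^3) from three linear
    measurements (mm): V = (pi * a * b) * c, where a = length / 2,
    b = width / 2 and c = depth."""
    a = length / 2.0
    b = width / 2.0
    c = depth
    return (math.pi * a * b) * c

# A hypothetical capsule 6 mm long, 4 mm wide and 2 mm deep:
# capsule_volume(6, 4, 2) = pi * 3 * 2 * 2, roughly 37.7 mm^3
```

Note that this treats the capsule as an elliptic-cylinder-like solid rather than a full ellipsoid, matching the adapted equation above.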
Each capsule was then dissected, the number of embryos counted (using a Bogorov counting chamber) and the ontogenetic stage determined under a compound microscope. To investigate the relationship between capsule volume and number of eggs or veligers within a capsule, approximately 160 capsules at the egg stage (i.e. prior to any development occurring; 15 egg masses; 10–11 capsules from each) and 160 capsules at the veliger stage were examined (18 egg masses; 8–9 capsules from each). Capsules ranging from 5.15 to 10.49 mm in length (39.0–287.5 mm³ in volume) were compared. Regression analyses were carried out to examine the relationships between capsule volume and number of eggs, and between capsule volume and number of veligers.
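The regression fit can be sketched from first principles as follows; the volumes and egg counts below are invented for illustration only and are not data from this study:

```python
def r_squared(x, y):
    """Coefficient of determination (r^2) for a least-squares linear
    regression of y on x, as used to relate capsule volume to egg or
    veliger counts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # variance terms
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy * sxy / (sxx * syy)

# Illustrative (invented) capsule volumes (mm^3) and egg counts:
volumes = [40, 80, 120, 160, 200, 240, 280]
eggs = [450, 700, 1000, 1250, 1600, 1900, 2300]
r2 = r_squared(volumes, eggs)  # close to 1 for these near-linear data
```

In practice a statistics package would report the same r² together with a p value; the hand computation is shown only to make the quantity being reported explicit.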
Change in number of embryos per capsule during development was investigated by examining 100 capsules at the veliger stage (12 egg masses; 8–9 capsules from each) and 100 capsules at the pre-hatching juvenile stage (9 egg masses; 11–12 capsules from each). Since the number of eggs and embryos per capsule is related to capsule size, capsules of a narrower size range (length 6–8 mm, volume 52.4–146.2 mm³) were used for this comparison. This eliminated the possibility of any change in number of embryos per capsule being influenced by capsule size. Only veligers containing nurse eggs were counted; it was presumed that veligers with no nurse eggs would not develop successfully. An unpaired t test was carried out to compare the number of veligers per capsule with the number of pre-hatching juveniles per capsule.
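The unpaired comparison can be sketched as a generic Student's t statistic (equal-variance form); this illustrates the kind of test used, not the authors' actual code:

```python
import math

def two_sample_t(a, b):
    """Unpaired (Student's) t statistic for two independent samples,
    assuming equal variances, e.g. comparing veliger counts with
    pre-hatching juvenile counts per capsule."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Two groups with equal means give t = 0; a t near zero corresponds
# to a large p value, i.e. no detectable difference between groups.
```

The p value then follows from the t distribution with na + nb − 2 degrees of freedom, which a statistics package would supply.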
Ontogenetic stages
Seven ontogenetic stages were identified. These are described below.
Egg

Each capsule contains 475–2,639 (mean 1,094) small spherical eggs with no visible differentiation. Eggs are cream or yellow in colour and have an average diameter of 234 μm. Within a capsule, egg diameter varies on average by 36 μm. Approximately 1 % of these eggs are developing embryos; the remainder are nurse eggs. At this stage, developing eggs and nurse eggs are identical in appearance (Fig. 2a; Table 1). Egg capsules remain at this stage for an average of 49 days.
Intracapsular developmental stages of B. undatum. (a) Egg, (b) trochophore, (c) early veliger, (d) veliger, (e) pediveliger and (f) pre-hatching juvenile. n nurse egg or undeveloped embryo, om outer membrane, c cilia, vl velar lobe, m mouth, mg midgut, me mantle edge, mc mantle cavity, vm visceral mass, lh larval heart, lk larval kidney, s shell, si, siphon, sg siphonal groove, t tentacle, e eye, f foot, o operculum, sa shell apex, sr spiral ribs, ar axial ribs
Table 1 Developmental periods for intracapsular development in B. undatum from the south coast of England at 6 °C
Trochophore
After 42–56 days, developing embryos become globular in shape, with a non-circular translucent membrane around the darker embryo. A ciliary band (prototroch) is present around approximately one-third to one-half of the outer circumference of the membrane (Fig. 2b). Each trochophore is slightly larger than an egg, with an average length of 321 μm. Each embryo remains at the trochophore stage for just 2–3 days (Table 1).
Early veliger
As the early veliger stage is reached, the prototroch extends laterally to form paired velar lobes with marginal cilia around a central simple mouth. The velar lobes are used for collecting eggs and for locomotion. Each early veliger is mobile but shows no obvious directed movement. Behind each lobe, just in front of the main body of the early veliger, paired larval kidneys develop, slightly opaque in colour. Whole (generally nurse) eggs are manipulated into the mouth using the cilia. These are engulfed and stored in the midgut (Portmann 1925), which forms a circular ball directly behind the mouth, surrounded by a thin outer membrane. There is some asynchrony in the early development of embryos within individual capsules. In total, between 2 and 35 veligers develop per capsule (average 11). Each embryo consumes nurse eggs for 3–7 days (at 6 °C). Total consumption by all embryos within a capsule occurs during the early veliger stage, over 4–10 days. Eggs are not damaged during consumption but are stored in the midgut, conserved for later nutritional use, and whole, undamaged nurse eggs can be seen inside each early veliger. Early veligers average 1.46 mm across their longest axis. Within one capsule, embryo size may vary by as much as 0.85 mm, and these size differences continue to be observed throughout development. Once all nurse eggs are consumed, early veligers, veligers and even pediveligers that have consumed no nurse eggs at all are occasionally found in a capsule (Figs. 1b, 2c, 3a, b; Table 1).
Early development in B. undatum. (a) Early pediveliger stage with empty midgut indicating few or no nurse eggs were consumed. (b) Veligers of varying sizes developing alongside each other; within one capsule and following nurse egg consumption. (c) Early pediveliger stage with individual nurse eggs still clearly discernible under the shell. (d) Well-developed mid pediveliger stage with velar lobes still present. Growth lines can be observed on shell. n nurse egg, vl velar lobe, mg midgut, vm visceral mass, s shell, sg siphonal groove, t tentacle, e eye, f foot, gl growth lines. Scale bars represent 500 μm
Veliger
In the veliger, the mantle edge thickens and a thin larval shell becomes visible around the midgut, creating a transparent layer; the midgut appears important in dictating the dimensions of this shell. The velar lobes become more separated and distinct, and the larval kidneys continue to be seen, often with a central yellow spot. The central mouth section becomes more opaque, early foot development begins and no further nurse egg consumption is possible. The mantle edge and the visceral mass (white in colour) beneath it become obvious. A transparent pulsating membrane located dorsolaterally in front of the mantle edge becomes evident; this is often termed the larval heart (Hughes 1990; Khanna and Yadav 2004). Nurse eggs stored beneath the mantle are still clearly individually discernible at this stage and even into the pediveliger stage (Figs. 2d, 3b, c; Table 1). If the mantle or shell on the back of the veliger or pediveliger is broken, nurse eggs can still be found inside, undegraded and not yet digested. Embryos remain at the veliger stage for approximately 14–21 days, during which development within a capsule becomes synchronised.
Pediveliger
At the pediveliger stage, the shell thickens and becomes increasingly apparent. The mantle cavity is initially visible beneath the mantle edge and the siphonal groove begins to form. The foot, eyes, tentacles and siphon appear. The velum and cilia, which are large at the beginning of this stage, begin to shrink back. They disappear by the end of the pediveliger stage. The larval kidneys and larval heart also disappear. Embryos remain at this stage for approximately 14–21 days (Figs. 2e, 3c, d; Table 1).
Pre-hatching juvenile
Shell growth continues, and spiral and axial ribs begin to develop in the shell as the pre-hatching juvenile stage is reached. The shell thickens and turns brown (becomes pigmented). The first whorl becomes obvious and the shell elongates. Head, foot, tentacle and siphon features become more prominent and the operculum appears. The feeding proboscis also develops internally during this time. Pre-hatching juveniles complete development over a further 35–49 days before hatching commences. Pre-hatching juvenile size ranges from 1.57 to 3.06 mm (Fig. 2f; Table 1).
Hatching juvenile
The features described for pre-hatching juveniles become more prominent. The juvenile emerges from the egg capsule through an opening created by radular scraping. Hatchlings remain on the egg mass for a few days before moving off to feed. Overall, hatching size ranged from 1.70 to 3.45 mm (Table 1).
Each egg mass took between 9 and 11 days to be laid, with complete intracapsular development taking 133–140 days (19–20 weeks) at 6 °C. Within each egg mass, development was asynchronous by up to 14 days throughout the developmental period. Within each capsule, development was initially asynchronous; both trochophore and early veliger stages, and early veliger and veliger stages, were observed together in capsules. By the late veliger stage, development within a capsule was synchronous. Following an initial increase in embryo size as nurse egg consumption occurred, individual size (measured as length) increased at a steady rate throughout the remainder of the encapsulated period (Figs. 4, 5; Table 1). Within each capsule, large size differences were observed between embryos at all stages of development. Whole, undamaged nurse eggs were visible inside embryos throughout the veliger and pediveliger stages. Occasional early veligers, veligers and pediveligers were found that had not consumed any nurse eggs. Apart from the absence of nurse eggs, these embryos were completely normal in their development (Fig. 3a–c; Table 1).
Developmental time (days) for B. undatum from Southampton Water (UK) at 6 °C. Times shown represent development across whole egg masses
Change in size of individuals (measured as length along longest axis) during intracapsular development. Size displayed is average length of individual at each stage in μm. Nurse egg consumption occurs between trochophore and early veliger stages. The average size displayed for early veliger is taken post nurse egg consumption. Error bars indicate ±1 SD
Relationship between capsule volume and number of embryos per capsule
Egg capsule volume ranged from 39.0 to 287.5 mm³ (capsule length 5.15–10.49 mm). Overall, the number of eggs per capsule averaged 1,094 and the number of veligers per capsule averaged 11. Regression analysis showed a significant relationship between capsule volume and number of eggs (r² = 0.7646; p < 0.001), and between capsule volume and number of veligers (r² = 0.5615; p < 0.001). On average, 1 % of total eggs developed into veligers (Fig. 6a, b).
Relationship between capsule volume and (a) number of eggs, (b) number of veligers in egg masses of B. undatum. Both relationships are significant to p < 0.001. The r² values are displayed
Change in number of embryos per capsule through development
When examining capsules ranging from 6 to 8 mm in length (volume 52.4–146.2 mm³), the number of developing veligers per capsule ranged from 3 to 21 (average 9) and the number of pre-hatching juveniles per capsule ranged from 2 to 20 (average 9). An unpaired t test showed no significant difference between the two groups (p = 0.772).
Embryonic development and intracapsular contents data
The distribution of B. undatum extends from the southern coast of the UK, northwards up into the North Atlantic and Arctic oceans, across a temperature range of −1.5 to 22 °C (Bramblemet; CEFAS; Martel et al. 1986a). For the population used in the present study, annual temperatures vary seasonally from approximately 4–22 °C, and egg laying and development normally occur in water temperatures ranging 4–10 °C. With temperatures maintained at 6 °C, the duration of intracapsular development (4.5–5 months) was similar to previous estimates of B. undatum development in British waters (Kideys et al. 1993; Valentinsson 2002). Longer and shorter periods have been reported across the species distribution (e.g. Martel et al. 1986a; Nasution 2003). The observed differences in duration of development can be attributed to the known effects of temperature on metabolic rates in ectotherms.
In the present study, the number of eggs per capsule averaged 1,094 and the number of developing veligers averaged 11. While egg numbers were similar to those indicated in previous studies (Table 2), veliger numbers were similar to figures reported by Hancock (1967), but lower than other estimates (Portmann 1925; Martel et al. 1986a). Since number of veligers is often significantly related to capsule volume (Gallardo 1979; Pechenik et al. 1984; Valentinsson 2002), it is likely that larger capsules were examined in the latter studies. Results indicate approximately 1 % of eggs developed, giving a ratio of 109 nurse eggs per embryo, almost identical to the 110 eggs per embryo reported by Portmann (1925). The percentage of eggs developing was also comparable to previous estimates for B. undatum (Martel et al. 1986a; Valentinsson 2002; Nasution 2003). Similar results have been reported for other buccinids, including 1.1–2 % for Buccinum isaotakii (Ilano et al. 2004), 0.2–1.8 % for Buccinum cyaneum (Miloslavich and Dufresne 1994) and 1 % for Colus stimpsoni (West 1979).
Past studies provide conflicting views on the occurrence of intracapsular cannibalism in B. undatum (Table 2). Portmann (1925) indicated a reduction in number of individuals per capsule during development (from early veligers to veligers and pre-hatching juveniles), which was suggested to be due to cannibalism (Fretter and Graham 1985). Contrary to this, other studies have shown the number of developing embryos per capsule to remain constant during development, indicating no cannibalism (Hancock 1967; Martel et al. 1986a). Our results were in agreement with these latter studies. Similarly, no cannibalism during development was reported in the buccinids B. cyaneum (Miloslavich and Dufresne 1994) and B. isaotakii (Ilano et al. 2004), and only very rarely was it observed in the buccinid L. dirum (Rivest 1983). It has, however, been reported in some other gastropods, including Crucibulum quiriquinae (Véliz et al. 2001), Crepidula coquimbensis (Véliz et al. 2003; Brante et al. 2009), Trophon geversianus (Cumplido et al. 2011) and a vermetid gastropod (Strathmann and Strathmann 2006).
Table 2 Reproductive biology of B. undatum from present and previous studies
Capsule size or volume has previously been shown to be a good indicator of number of eggs and veligers within a capsule. In the current study, these figures were both significantly related to capsule volume. Number of eggs was more closely related to volume than number of veligers, suggesting eggs are more regularly distributed amongst capsules than are developing embryos. This pattern has been reported before for both B. undatum (Valentinsson 2002; Nasution et al. 2010) and other gastropods, including B. cyaneum (Miloslavich and Dufresne 1994), B. isaotakki (Ilano et al. 2004), Hexaplex (Trunculariopsis) trunculus (Lahbib et al. 2010), Acanthina monodon (Gallardo 1979), Nucella lapillus (Pechenik et al. 1984) and Nucella lamellosa (Spight 1976b). Contrary to this, number of eggs has been found to be related to, but number of veligers to be independent of capsule size in the buccinid L. dirum (Rivest 1983), the calyptraeid Crepipatella dilatata (Chaparro et al. 1999) and the muricid Nucella ostrina (Lloyd and Gosselin 2007).
An initial rapid increase in embryo size was observed at the early veliger stage in the present investigation. This was followed by a relatively linear increase in size for the remainder of intracapsular development. Similar changes in size during development have been reported for B. cyaneum (Miloslavich and Dufresne 1994) and B. isaotakii (Ilano et al. 2004). For both, however, the initial increase was slower than observed in this investigation. In B. isaotakii, this likely reflects the slower nurse egg consumption rate previously observed in this species (Ilano et al. 2004); nurse eggs are probably also taken up at a slower rate in B. cyaneum.
Previous hatching sizes for B. undatum have been reported ranging from 1.0 to 3.1 mm (e.g. Fretter and Graham 1985; Nasution et al. 2010). These are similar to hatching sizes observed in the present investigation, which averaged just below 2.5 mm in length.
Nurse egg partitioning
Life history theories suggest parental fitness is maximised by investing equally into all offspring (Smith and Fretwell 1974). Typically, resource partitioning (in the form of nurse eggs) during intracapsular development follows this trend. Embryos compete for nurse eggs, but within a capsule competitiveness is normally equal. As a result, nurse eggs are consumed quite evenly by all embryos. This does not mean hatchlings are always of a similar size; within one species, or even one clutch, the ratio of nurse eggs to developing embryo may vary greatly between capsules, resulting in large differences in offspring size. This is usually believed to be due to irregular distribution of embryos amongst capsules (Thorson 1950; Rivest 1983; Spight 1976a; Miloslavich and Dufresne 1994). Within a capsule, however, generally only small differences in offspring size are reported. For example, Spight (1976a) examined 2 species of muricid gastropod (Nucella emarginata and A. spirata) and found that although embryo size varied considerably between capsules, within a capsule large differences were rare. Previous studies examining development in B. undatum have indicated similar results, and comparable observations have also been reported for the gastropods L. dirum (Rivest 1983) and C. dilatata (Chaparro and Paschke 1990; Chaparro et al. 1999). In contrast, the present study found nurse egg partitioning to be quite different to that previously described for B. undatum or other buccinids. Large size differences were continually observed between embryos from any one capsule, and individuals were regularly found alongside a capsulemate four times their size (Fig. 3b). Although, to our knowledge, variations in nurse egg consumption have not previously been reported in other buccinids, such intracapsular differences have been described for a small number of gastropods, predominantly from the family Muricidae. These include A. monodon (Gallardo 1979), Chorus giganteus (González and Gallardo 1999) and T. geversianus (Cumplido et al. 2011). In A. monodon and C. giganteus, intracapsular size differences continue to be evident at hatching, presumed to be related to earlier nurse egg consumption (Gallardo 1979; González and Gallardo 1999). In T. geversianus, sibling cannibalism (which can also affect offspring size) occurs during later developmental stages, and it is not clear whether hatching sizes vary (Cumplido et al. 2011).
It is widely assumed that offspring quality increases with size (e.g. Thorson 1950; Spight 1976a; Rivest 1983; Gosselin and Rehak 2007; Lloyd and Gosselin 2007; Przeslawski 2011). Larger hatchlings are less likely to be affected by factors such as physical stress, predation and starvation. While intracapsular size differences are generally believed to be due to competition (Gallardo 1979; González and Gallardo 1999), in the present investigation, they are probably enhanced by a combination of asynchrony in development and short nurse egg consumption periods. We found nurse egg feeding to be very rapid, with each early veliger consuming eggs for just 3–7 days. This relates to 2–5 % of the developmental period. In comparison, in most gastropods, nurse egg consumption occurs over a large proportion of intracapsular development (Table 3). Even the shortest uptake periods previously reported (8–20 % of the developmental period) (Rivest 1983) are still more than double the length of the consumption period observed by us. Within a capsule, the potential to take up nurse eggs is limited by the amount already consumed by earlier developers. Thus, while intracapsular asynchrony in early development is not uncommon (e.g. Vasconcelos et al. 2004; Fernández et al. 2006; Lahbib et al. 2010), when it is combined with the short nurse egg consumption period seen in B. undatum, it follows that even a 24-h lag in initial embryonic development will put individuals at a distinct disadvantage. Rapid nurse egg consumption in B. undatum is consistent with findings by Portmann (1925), but contradictory to those of Nasution (2003). Additionally, 6 °C is towards the lower end of the temperature range that southern populations of B. undatum naturally develop in. Nurse egg consumption is even faster at warmer temperatures (Authors, unpublished data). 
This may lead to larger intracapsular size differences during development, and with predicted sea temperature elevations, intracapsular size ranges may increase.
Table 3 Periods of development and nurse egg consumption times for different species of gastropods
Normal veligers and pediveligers that had not successfully consumed any nurse eggs were occasionally found within a capsule in the present investigation (Fig. 3a). It is likely that these individuals reached the feeding stage after all resources had been consumed. Since no further feeding occurs between nurse egg consumption and hatching, these embryos had no nutrition available for development, and we assumed they did not survive. This in itself is very unusual; even in the few reported cases of large intracapsular size differences between embryos (Gallardo 1979; González and Gallardo 1999; Cumplido et al. 2011), to our knowledge, completely 'empty' embryos have not been observed.
In the current study, it was noted that for several weeks following consumption, individual nurse eggs could still be observed through the thin veliger mantle and early shell (Fig. 3c). Throughout this period, if the mantle or shell was broken, whole eggs would spill out. This indicates that although eggs were rapidly consumed, they were not immediately utilised but were instead stored for later nutritional use. This phenomenon was also noted by Portmann (1925), who recognised that nurse eggs stayed intact inside B. undatum veligers for long periods of time; in comparison, he found they disintegrated directly after consumption in N. lapillus. Nurse eggs have also been shown to be visible internally throughout the feeding period in A. monodon (Gallardo 1979), L. dirum (Rivest 1983) and C. dilatata (Chaparro and Paschke 1990). In each case, however, the literature suggests nurse eggs begin to be assimilated shortly after consumption. In other species, such as T. geversianus, nurse eggs break down prior to consumption by embryos (Cumplido et al. 2011).
The range in size of embryos within a capsule and the occurrence of 'empty' embryos observed in this investigation indicates that a higher level of competition is occurring in B. undatum than is normally observed during intracapsular development in gastropods. While large intracapsular size differences have been observed in some muricid gastropods, to our knowledge, competition for nurse eggs to the degree that some embryos are left with no nutrition for development has never previously been reported.
Bayne CJ (1968) Histochemical studies on the egg capsules of eight gastropod molluscs. Proc Malacol Soc Lond 38:199–212
Brante A, Fernández M, Viard F (2009) Limiting factors to encapsulation: the combined effects of dissolved protein and oxygen availability on embryonic growth and survival of species with contrasting feeding strategies. J Exp Biol 212:2287–2295
Chaparro OR, Paschke KA (1990) Nurse egg feeding and energy balance in embryos of Crepidula dilatata (Gastropoda: Calyptraeidae) during intracapsular development. Mar Ecol Prog Ser 65:183–191
Chaparro OR, Oyarzun RF, Vergara AM, Thompson RJ (1999) Energy investment in nurse eggs and egg capsules in Crepidula dilatata Lamarck (Gastropoda, Calyptraeidae) and its influence on the hatching size of the juvenile. J Exp Mar Biol Ecol 232:261–274
Cumplido M, Pappalardo P, Fernández M, Averbuj A, Bigatti G (2011) Embryonic development, feeding and intracapsular oxygen availability in Trophon geversianus (Gastropoda: Muricidae). J Moll Stud 77:429–436
D'Asaro CN (1970) Egg capsules of prosobranch mollusks from south Florida and the Bahamas and notes on spawning in the laboratory. Bull Mar Sci 20:414–440
Fernández M, Pappalardo P, Jeno K (2006) The effects of temperature and oxygen availability on intracapsular development of Acanthina monodon (Gastropoda: Muricidae). Rev Chile Hist Nat 79:155–167
Fretter V, Graham A (1985) The prosobranch molluscs of Britain and Denmark. Part 8. Neogastropoda. J Moll Stud [Suppl] 15:435–556
Gallardo CS (1979) Developmental pattern and adaptations for reproduction in Nucella crassilabrum and other muricacean gastropods. Biol Bull 157:453–463
González KA, Gallardo CS (1999) Embryonic and larval development of the muricid snail Chorus giganteus (Lesson 1829) with an assessment of the developmental nutrition source. Ophelia 51:77–92
Gosselin LA, Rehak R (2007) Initial juvenile size and environmental severity: the influence of predation and wave exposure on hatching size in Nucella ostrina. Mar Ecol Prog Ser 339:143–155
Hancock DA (1967) Whelks. MAFF Laboratory Leaflet No. 15. Fisheries Laboratory, Burnham-upon-Crouch, Essex
Hughes RN (1990) Larval development of Morum oniscus (L.) (Gastropoda: Harpidae). J Moll Stud 56:1–8
Ilano AS, Fujinaga K, Nakao S (2004) Mating, development and effects of female size on offspring number and size in the neogastropod Buccinum isaotakii (Kira, 1959). J Moll Stud 70:277–282
Khanna DR, Yadav PR (2004) Biology of mollusca. Discovery Publishing House, Delhi
Kideys AE, Nash RDM, Hartnoll RG (1993) Reproductive cycle and energetic cost of reproduction of the neogastropod Buccinum undatum in the Irish Sea. J Mar Biol Assoc UK 73:391–403
Lahbib Y, Abidli S, Trigui El, Menif N (2010) Laboratory studies of the intracapsular development and juvenile growth of the banded murex, Hexaplex trunculus. J World Aquac Soc 41:18–34
\begin{document}
\title{Geometry of hyperfields} \author{Jaiung Jun} \address{Department of Mathematics, State University of New York at New Paltz, New Paltz, NY 12561, USA} \curraddr{} \email{[email protected]}
\subjclass[2010]{14A99(primary), 16Y99 (secondary).} \keywords{hyperfield, Berkovich analytification, real spectrum, real scheme, locally hyperringed space, rational points, fine topology, representable functor.}
\dedicatory{}
\maketitle
\begin{abstract} Given a scheme $X$ over $\mathbb{Z}$ and a hyperfield $H$ which is equipped with a topology satisfying certain conditions, we endow the set $X(H)$ of $H$-rational points with a natural topology. We then prove that: (1) when $H$ is the \emph{Krasner hyperfield}, $X(H)$ is homeomorphic to the underlying space of $X$; (2) when $H$ is the \emph{tropical hyperfield} and $X$ is of finite type over a complete non-Archimedean valued field $k$, $X(H)$ is homeomorphic to the underlying space of the Berkovich analytification $X^{\textrm{an}}$ of $X$; and (3) when $H$ is the \emph{hyperfield of signs}, $X(H)$ is homeomorphic to the underlying space of the real scheme $X_r$ associated with $X$. \end{abstract}
\tableofcontents
\section{Introduction}
A \emph{hypergroup} satisfies axioms similar to those of an abelian group, except that addition is allowed to be \emph{`multi-valued'} (hyperaddition). A \emph{hyperring} $R$ is a nonempty set with two binary operations (multiplication $\cdot$, hyperaddition $+$) such that $(R,\cdot)$ is a commutative monoid, $(R,+)$ is a hypergroup, and hyperaddition is distributive over multiplication. When all nonzero elements are multiplicatively invertible, a hyperring is called a \emph{hyperfield}.
An early incarnation of \emph{hyperstructures} goes back to F.~Marty \cite{marty1935role}, who introduced the notion of \emph{hypergroups}. About twenty years later, M.~Krasner adapted Marty's idea to generalize commutative rings to \emph{hyperrings} in \cite{krasner1956approximation}. Krasner's goal was to approximate (in a precise sense) Galois theory of local fields of positive characteristic by means of Galois theory of local fields of characteristic zero. Since their first appearance, hyperrings have not been studied much from an algebro-geometric or combinatorial perspective. However, lately there has been considerable attention drawn to the theory of hyperrings under various motivations, mainly thanks to the following pioneering work: (1) A.~Connes and C.~Consani provide several pieces of evidence showing that hyperrings are algebraic structures which naturally appear in relation to algebraic geometry and number theory \cite{con5,con4,con3,con6}; (2) M.~Marshall, P.~G\l{}adki, and K.~Worytkiewicz show that hyperstructures simplify (or generalize) certain aspects of quadratic form theory and real algebraic geometry \cite{mars1,mar2,gladki2017witt}; (3) O.~Viro implements hyperfields in tropical geometry to provide an algebraic foundation \cite{viro}; and (4) M.~Baker and N.~Bowler unify various generalizations of matroids by means of one unified framework, namely matroids over partial hyperstructures \cite{baker2016matroids}. Baker and Bowler's work additionally supports Viro's philosophy from the perspective of valuated matroids. For more details about this viewpoint in connection with O.~Lorscheid's blueprints, see \cite{lorscheid2019tropical}.\\
In this paper, for a scheme $X$, we investigate the set $X(H)$ of rational points over a hyperfield $H$ by appropriately generalizing the notion of locally ringed spaces to locally hyperringed spaces, and $X(H)$ is defined to be a set of morphisms from ``$\Spec H$'' to $X$ in this category. In general, $X(H)$ is merely a set; however, when $H$ is equipped with a topology which satisfies certain conditions, we impose the \emph{fine topology} on $X(H)$, introduced by O.~Lorscheid and C.~Salgado in \cite{lorscheid2016remark} (and also in \cite{baker2018moduli}).
A hyperfield $H$ is said to be a \emph{topological hyperfield} if $H$ is equipped with a topology satisfying the following conditions: \begin{itemize} \item The multiplication map $H \times H \to H$, where $H \times H$ is equipped with the product topology, is continuous. \item $H^\times=H-\{0\}$ is open, and the inversion map $i:H^\times \to H^\times$ is continuous. \end{itemize}
Note that in the definition of a topological hyperfield, we do not assume the compatibility of hyperaddition with the topology. This may look odd; however, the above conditions suffice for our purposes in this paper. We further note that in their work \cite{anderson2019hyperfield} on hyperfield Grassmannians, L.~Anderson and J.~Davis also assume the same conditions (as above) for topological hyperfields (see \cite[Remark 2.13]{anderson2019hyperfield}). In what follows, we will always assume that any hyperfield which is equipped with a topology is a topological hyperfield.\\
Now, let us briefly recall the definition of the fine topology. Let $X=\Spec A$ be an affine scheme and $H$ a topological hyperfield. Then, one imposes a canonical topology (the \emph{affine topology}, see \cite{lorscheid2016remark}) on $\Hom(A,H)$, the set of hyperring homomorphisms from $A$ to $H$; we first consider the set-inclusion \[ \Hom(A,H) \lhook\joinrel\longrightarrow \prod_{a \in A}H^{(a)} \] and give $\Hom(A,H)$ the subspace topology, where $\prod_{a \in A}H^{(a)}$ is equipped with the product topology induced by the topology $\mathcal{T}$ of $H$. For the general case, when $X$ is a scheme and $H$ is a topological hyperfield, the \emph{fine topology} on the set $X(H)$ of $H$-rational points of $X$ is the finest topology such that any morphism $f:Y \to X$ from an affine scheme $Y$ to $X$ induces a continuous map $f(H):Y(H) \to X(H)$, where $Y(H)$ is equipped with the affine topology. One can easily check that the fine topology agrees with the affine topology when $X$ is affine (see \S \ref{finetopology}).
With this setting, the main question which we want to address is the following:
\begin{question}\label{question} Which topological spaces arise as the set $X(H)$ of rational points of an algebraic variety $X$, where $H$ is a topological hyperfield? Do we recover interesting known examples as $X(H)$? \end{question}
In this paper, we will prove that several familiar topological spaces arise in this way. To this end, the topological hyperfields of particular interest are the following (see \S \ref{definitions} for details): \begin{itemize} \item (Krasner hyperfield) Let $\mathbb{K}:=\{0,1\}$ be a commutative monoid with the multiplication $1\cdot 0=0$ and $1\cdot 1=1$. The hyperaddition is given by $0+1=1+0=1$, $0+0=0$, and $1+1=\{0,1\}$. Then $\mathbb{K}$ is a hyperfield called the \emph{Krasner hyperfield}. We impose a topology on $\mathbb{K}$ in such a way that the set of open subsets is $\{\emptyset, \{1\}, \mathbb{K}\}$. \item (Tropical hyperfield) Let $\mathbb{T}:=\mathbb{R}\cup \{-\infty\}$, where $\mathbb{R}$ is the set of real numbers. The multiplication $\odot$ is the usual addition of $\mathbb{R}$ such that $a\odot (-\infty)=(-\infty)$ for all $a \in \mathbb{T}$. The hyperaddition $\oplus$ takes the maximum of two distinct elements, i.e., $a \oplus b =\max\{a,b\}$ if $a\neq b$. When $a=b$, $a\oplus b=\{c \in \mathbb{T} \mid c \leq a\}$, where $\leq$ is the usual order of $\mathbb{R}$ with $-\infty$ the smallest element. Then $\mathbb{T}$ is a hyperfield called the \emph{tropical hyperfield}. We simply impose the Euclidean topology on $\mathbb{T}$. \item (Hyperfield of signs) Let $\mathbb{S}:=\{-1,0,1\}$ be a commutative monoid with the multiplication $1\cdot 1 =1$, $(-1)\cdot (-1)=1$, $(-1)\cdot 1=(-1)$, and $1\cdot 0 =(-1)\cdot 0 =0\cdot 0 =0$. The hyperaddition follows the rule of signs, i.e., $1+1=1$, $(-1)+(-1)=(-1)$, $1+0=1$, $(-1)+0=(-1)$, and $1+(-1)=\{-1,0,1\}$. Then $\mathbb{S}$ is a hyperfield called the \emph{hyperfield of signs}. We impose a topology on $\mathbb{S}$ in such a way that the set of open subsets is $\{\emptyset, \{1\},\{-1\}, \{-1,1\}, \mathbb{S}\}$. \end{itemize}
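As an aside for readers who wish to experiment, the multi-valued addition of the tropical hyperfield $\mathbb{T}$ can be modelled in a few lines of Python. This is only an illustrative sketch, not part of the constructions of this paper; the function names \texttt{t\_mul} and \texttt{t\_add} are ad hoc, and the set $a\oplus b$ is represented by a membership predicate, since $a\oplus a$ is an infinite interval.

```python
import math

NEG_INF = -math.inf  # plays the role of -infinity, the additive identity of T

def t_mul(a, b):
    """Multiplication in T is the usual addition of reals."""
    return NEG_INF if NEG_INF in (a, b) else a + b

def t_add(a, b):
    """Hyperaddition in T, returned as a membership predicate for a (+) b."""
    if a != b:
        m = max(a, b)          # a (+) b = {max(a, b)} when a != b
        return lambda c: c == m
    return lambda c: c <= a    # a (+) a = [-inf, a]

# Sanity checks mirroring the definition above:
assert t_add(3.0, 5.0)(5.0) and not t_add(3.0, 5.0)(3.0)
assert t_add(2.0, 2.0)(NEG_INF) and t_add(2.0, 2.0)(1.5)
```

The predicate encodes the ultrametric inequality in the max-convention: a non-Archimedean valuation $v$ satisfies $v(a+b)\leq \max\{v(a),v(b)\}$, with equality whenever $v(a)\neq v(b)$, which is exactly the statement $v(a+b)\in v(a)\oplus v(b)$.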
\begin{rmk} \begin{enumerate}
\item The inspiration for the current paper stems from the recent paper \cite{baker2016matroids} of Baker and Bowler, where they develop a very elegant framework which unifies the notions of many enrichments of matroids (matroids, oriented matroids, valuated matroids, and phase matroids) as well as linear spaces at the same time. A key idea is to implement hyperfields in matroid theory to define matroids with coefficients in hyperfields. \item In \cite{anderson2019vectors}, Anderson provides another equivalent definition for (strong) matroids over hyperfields in terms of vectors. Furthermore, Anderson and Davis investigate realization spaces for matroids over hyperfields in \cite{anderson2019hyperfield}. Also, recently, Baker and Lorscheid explore the moduli space of matroids in \cite{baker2018moduli} as representable functors on certain categories. \item After I put the current paper on arXiv, P.~Jell, C.~Scheiderer, and J.~Yu introduced in \cite{jell2018real} the notion of real tropicalization and analytification, adding another example of topological spaces which are homeomorphic to the set of $H$-rational points for some hyperfield $H$. \end{enumerate} \end{rmk}
From our point of view, Baker and Bowler's framework could be seen as an investigation of a Grassmannian over various hyperfields (see the cryptomorphic axiomatization of matroids over hyperfields via Grassmann-Pl\"{u}cker functions in \cite{baker2016matroids}). Hence, one is led to wonder what we can say when we replace a Grassmannian with other algebraic varieties, or schemes in general (Question \ref{question}).
\begin{rmk} One partial answer for Question \ref{question} is initially given by Connes and Consani. In \cite{con3}, Connes and Consani show that there exists a set-bijection between any scheme $X$ over $\mathbb{Z}$ and the set $X(\mathbb{K})$ of $\mathbb{K}$-rational points of $X$, where $\mathbb{K}$ is the Krasner hyperfield. In this paper, we refine this theorem into a homeomorphism. \end{rmk}
In this paper, we consider three particular topological spaces which appear in algebraic geometry, namely schemes, Berkovich analytifications of schemes, and real schemes. Very roughly, the underlying set of the Berkovich analytification $X^{\textrm{an}}$ of an algebraic variety $X$ over a field $k$ with a complete non-Archimedean valuation $\nu$ consists of pairs of a point $x \in X$ and a valuation $\tilde{\nu}$ on the residue field $k(x)$ at $x$ which extends $\nu$ (see \S \ref{berkovich} for some details).
The real spectrum $\Sper A$ of a commutative ring $A$ is an enrichment of the prime spectrum $\Spec A$ consisting of pairs $(\mathfrak{p},P)$ of a prime ideal $\mathfrak{p}$ of $A$ and an ordering $P$ of the residue field $k(\mathfrak{p})$. Once we globalize this construction, we obtain the real scheme $X_r$ associated to a scheme $X$ (see \S\ref{spaceoforderings} for some details).
Our main result is that the aforementioned spaces arise as sets of rational points over certain hyperfields in a functorial way. To be precise, we prove the following theorem.
\begin{nothm} Let $X$ be a scheme over $\mathbb{Z}$, $k$ be a field with a complete non-Archimedean valuation, $\mathfrak{Schm}$ be the category of schemes over $\mathbb{Z}$, and $\mathfrak{Top}$ be the category of topological spaces. \begin{enumerate} \item
The set $X(\mathbb{K})$ of $\mathbb{K}$-rational points of $X$ (equipped with the fine topology) is homeomorphic to $|X|$ (the underlying topological space of $X$). In particular, the functor $\mathcal{F}$ from $\mathfrak{Schm}$ to $\mathfrak{Top}$, sending any scheme $X$ to $|X|$, is isomorphic to the functor $\Hom(\Spec \mathbb{K},-)$.
\item Let $X$ be a scheme of finite type over $k$. Then the Berkovich analytification $X^{\textrm{an}}$ of $X$ is homeomorphic to $X(\mathbb{T})$ (equipped with the fine topology). In particular, the functor $\mathcal{A}$ from the category $\mathfrak{Schm}_{k,fin}$ of schemes of finite type over $k$ to $\mathfrak{Top}$, sending any scheme $X$ to the underlying topological space $|X^{\textrm{an}}|$ of the analytification $X^{\textrm{an}}$, is isomorphic to the functor $\Hom(\Spec \mathbb{T},-)$.
\item The set $X(\mathbb{S})$ of $\mathbb{S}$-rational points of $X$ (equipped with the fine topology) is homeomorphic to the underlying topological space of the real scheme $X_r$ associated to $X$. In particular, the functor $\mathcal{R}$ from $\mathfrak{Schm}$ to $\mathfrak{Top}$, sending any scheme $X$ to the underlying topological space $|X_r|$ of the associated real scheme $X_r$, is isomorphic to the functor $\Hom(\Spec \mathbb{S},-)$. \end{enumerate} \end{nothm}
When $X$ is a group scheme over a field $k$, the underlying topological space $|X|$ itself is not a group; we only have a group structure for the set $X(K)$ of $K$-rational points for each field extension $K$ of $k$. Therefore, the identification $|X|=X(\mathbb{K})$ naturally leads one to the following question:
\begin{question}
Let $X$ be a group scheme over a field $k$. Is the underlying space $|X|$ itself a hypergroup by viewing it as the set of $\mathbb{K}$-rational points, where $\mathbb{K}$ is the Krasner hyperfield? More generally, if $H$ is a hyperfield, is $X(H)$ a hypergroup? \end{question}
In \cite{con4}, Connes and Consani prove that the answer is affirmative when $X$ is an affine line or an algebraic torus and $H=\mathbb{K}$ by explicitly describing the hypergroup structures. In \cite{jun2016hyperstructures}, the author also proves that $X(\mathbb{K})$ is ``almost'' a hypergroup (in a precise sense) when $X$ is of finite type. In $\S \ref{hyperstructures of analytic groups}$, we study the case when $H=\mathbb{T}$ and $X$ is of finite type, following the idea of Berkovich in \cite[\S 5]{berkovich2012spectral}.
Throughout the paper, we assume that all algebraic structures (rings, $k$-algebras, hyperrings, etc) are commutative unless otherwise stated. \\
\textbf{Acknowledgment}
The author would like to thank Vladimir Berkovich and Youngsu Kim for answering various questions concerning $\S \ref{hyperstructures of analytic groups}$. He also thanks Oliver Lorscheid for pointing out minor mistakes which had escaped the author's notice, and Farbod Shokrieh for answering various questions on Berkovich analytifications. The author is grateful to Jeffrey Giansiracusa for providing helpful feedback on the first draft of the paper. Finally, the author thanks an anonymous referee for many helpful suggestions.
\section{Review: Basic definitions and examples}\label{definitions}
In this section, we recall basic definitions and examples for hyperrings which will be used in the sequel. For more details, we refer the readers to \cite{con3}, \cite{jun2015algebraic}, or \cite{viro}.
\begin{mydef} Let $H$ be a nonempty set and $P^*(H)$ be the set of nonempty subsets of $H$. \begin{itemize} \item By a \emph{hyperoperation} on $H$, we mean a function $*:H\times H \to P^*(H)$. For notational convenience, we let $a*b:=*(a,b)$ for all $a,b \in H$. \item For $x,y,z \in H$, we define the following two subsets of $H$: \[ (x*y)*z:=\bigcup_{w\in x*y}w*z, \quad \textrm{and} \quad x*(y*z):=\bigcup_{w\in y*z} x*w. \] When $(x*y)*z=x*(y*z)$ for all $x,y,z \in H$, we say that the hyperoperation $*$ is \emph{associative}. \item When $x*y=y*x$ for all $x,y \in H$, we say that the hyperoperation $*$ is \emph{commutative}. \item When the set $x*y$ consists of a single element $z$, we write $x*y=z$ instead of $x*y=\{z\}$. In general, we will identify an element $x \in H$ with the subset $\{x\}$ of $H$. \end{itemize} \end{mydef}
\begin{rmk} Let $H$ be a nonempty set with a hyperoperation $*$ and $A$, $B$ be nonempty subsets of $H$. Then we will use the following notation: \[ A*B:=\bigcup_{a \in A, b\in B} a*b. \] \end{rmk}
\begin{mydef} A \emph{hypergroup} is a nonempty set $H$ equipped with an associative hyperoperation $*$ such that \begin{enumerate} \item (Unique identity) $\exists ! e\in H$ such that $e*x=x*e=x$ for all $x \in H$. \item (Unique inverse) For each $x \in H$, $\exists ! y \in H$ such that $e \in (x*y) \cap (y*x)$. We denote $y$ by $x^{-1}$. \item (Reversibility) For each $x,y,z \in H$, if $x \in y*z$ then $y \in x*z^{-1}$ and $z \in y^{-1}*x$. \item When the hyperoperation is commutative, we call $(H,*)$ a \emph{canonical hypergroup}. In this case, we will use additive notations such as $+, \oplus, \boxplus$. \end{enumerate} \end{mydef}
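For a finite hypergroup, the axioms above are finitely checkable, so they can be verified by brute force. The following Python sketch (the helper names are ad hoc illustrations, not from the paper) checks commutativity, the identity axiom, unique inverses, associativity of the set-valued operation, and reversibility, and confirms them for the additive hypergroup of the Krasner hyperfield $\mathbb{K}=\{0,1\}$.

```python
from itertools import product

def is_canonical_hypergroup(H, add, e):
    """Brute-force check of the canonical hypergroup axioms on a finite
    set H with set-valued hyperaddition add(x, y) and identity e."""
    # commutativity and identity: e + x = {x}
    for x, y in product(H, repeat=2):
        if add(x, y) != add(y, x):
            return False
    if any(add(e, x) != {x} for x in H):
        return False
    # unique inverse: exactly one y with e in x + y
    inv = {}
    for x in H:
        cands = [y for y in H if e in add(x, y)]
        if len(cands) != 1:
            return False
        inv[x] = cands[0]
    # associativity: (x + y) + z == x + (y + z) as subsets of H
    for x, y, z in product(H, repeat=3):
        left = set().union(*(add(w, z) for w in add(x, y)))
        right = set().union(*(add(x, w) for w in add(y, z)))
        if left != right:
            return False
    # reversibility: x in y + z implies y in x + (-z)
    for y, z in product(H, repeat=2):
        for x in add(y, z):
            if y not in add(x, inv[z]):
                return False
    return True

def k_add(x, y):
    """Hyperaddition of the Krasner hyperfield: 1 + 1 = {0, 1}."""
    return {0, 1} if x == y == 1 else {x | y}

assert is_canonical_hypergroup({0, 1}, k_add, 0)
```

The same checker can be run on any finite hyperaddition table, e.g. that of the hyperfield of signs.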
\begin{rmk} In \cite{jun2016hyperstructures}, we did not include the reversibility $(3)$ as a part of the definition for hypergroups. \end{rmk}
\begin{mydef} A \emph{hyperring} is a nonempty set $R$ with two binary operations $+$ and $\cdot$ such that $(R,+)$ is a canonical hypergroup and $(R,\cdot)$ is a commutative monoid satisfying the following conditions: \begin{enumerate} \item (Distributivity) $x\cdot(y+z)=x\cdot y +x \cdot z$ for all $x,y,z \in R$. \item (Absorbing element) If $0$ is the identity element with respect to the hyperaddition, then $x\cdot 0 =0$ for all $x \in R$. We further assume that $0 \neq 1$. \end{enumerate} When each $x \neq 0 \in R$ has a multiplicative inverse, we call $(R,+,\cdot)$ a \emph{hyperfield}. \end{mydef}
\begin{myeg}\label{mainexample} We introduce some examples of hyperfields which yield interesting results in matroid theory. \begin{itemize} \item Let $\mathbb{K}:=\{0,1\}$. The multiplicative structure of $\mathbb{K}$ is the same as that of $\mathbb{F}_2$, the field with two elements. The commutative hyperaddition is defined as follows: \[ 0+0=0, \quad 0+1=1, \quad 1+1=\{0,1\}. \] $\mathbb{K}$ is called the \emph{Krasner hyperfield}. \item Let $\mathbb{S}:=\{-1,0,1\}$. We impose the multiplication following the rule of signs: \[ 1 \cdot 1=1, \quad 1\cdot (-1) =(-1), \quad 1\cdot 0=0\cdot (-1)=0\cdot 0=0. \] The hyperaddition also follows the rule of signs: \[ 1+1=1+0=1, \quad (-1)+(-1)=(-1)+0=(-1), \quad 1+(-1)=\mathbb{S}, \quad 0+0=0. \] $\mathbb{S}$ is called the \emph{hyperfield of signs}. \item Let $\mathbb{T}:=\mathbb{R}\cup \{-\infty\}$. The multiplication $\odot$ of $\mathbb{T}$ is the same as the usual addition of real numbers and $-\infty \odot a =-\infty$ for all $a \in \mathbb{T}$. The hyperaddition is given as follows: \[ x\oplus y =\left\{ \begin{array}{ll} \max\{x,y\} & \textrm{if $x\neq y$}\\ \left[-\infty,x\right]& \textrm{if $x=y$}, \end{array} \right. \] where $\left[-\infty,x\right]:=\{t\in \mathbb{T} \mid t\leq x\}$. $\mathbb{T}$ is called the \emph{tropical hyperfield}. \item Let $(\Gamma,+)$ be a totally ordered abelian group and $\Gamma_{hyp}:=\Gamma \cup \{-\infty\}$. One can impose two binary operations $\odot$ and $\oplus$ on $\Gamma_{hyp}$ as follows: \[ a \odot b:=a+b \textrm{ and } a\odot (-\infty)=-\infty, \textrm{ for all }a,b \in \Gamma, \] \[ a\oplus b =\left\{ \begin{array}{ll} \max\{a,b\} & \textrm{if $a\neq b$}\\ \left[-\infty,a\right]& \textrm{if $a=b$}, \end{array} \right. \textrm{ where } \left[-\infty,a\right]:=\{t\in \Gamma_{hyp} \mid t\leq a\}. \] Then $(\Gamma_{hyp},\odot,\oplus)$ is a hyperfield. In particular, if $\Gamma=\mathbb{R}$, then $\Gamma_{hyp}=\mathbb{T}$.
\item Let $\mathbb{P}:=S^1 \cup \{0\}$, where $S^1$ is the unit circle in the complex plane. The multiplication of $\mathbb{P}$ is induced from the multiplication of complex numbers, and we define the hyperaddition as follows: \[ x\oplus y =\left\{ \begin{array}{ll} \{-x,0,x\} & \textrm{if $x= -y$}\\ \textrm{ the shorter open arc connecting $x$ and $y$}& \textrm{if $x \neq -y$}, \end{array} \right. \] $\mathbb{P}$ is called the \emph{phase hyperfield}. \end{itemize} \end{myeg}
There is a recipe to produce a hyperring from a commutative ring $A$; let $A$ be a commutative ring and $G$ be a subgroup of the multiplicative group $A^\times$ of units in $A$. Then $G$ acts (by multiplication) on $A$. Let $A/G$ be the set of equivalence classes under the action of $G$ and $[a]$ be the equivalence class of $a \in A$. One defines the following binary operations: \[ [a]\cdot[b]=[ab], \quad [a]+[b]=\{[c] \mid c=g_1a+g_2b \textrm{ for some } g_1,g_2 \in G\}. \] Then $(A/G,+,\cdot)$ is a hyperring (a \emph{quotient hyperring}).
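For finite rings this quotient recipe can be carried out mechanically. The following Python sketch (with ad hoc names, purely as an illustration) builds $A/G$ from the formulas above and checks, for $A=\mathbb{F}_3$ and $G=\mathbb{F}_3^\times$, that the resulting quotient hyperring has the hyperaddition of the Krasner hyperfield $\mathbb{K}$, with classes $[0]=\{0\}$ and $[1]=\{1,2\}$.

```python
from itertools import product

def quotient_hyperring(A, G, add, mul):
    """Construct the quotient hyperring A/G for a finite commutative
    ring A and a subgroup G of its units, acting by multiplication.
    Classes are orbits; [a] + [b] = { [c] : c = g1*a + g2*b, gi in G }."""
    cls = {a: frozenset(mul(g, a) for g in G) for a in A}  # orbit of a
    classes = set(cls.values())
    def h_add(X, Y):
        return {cls[add(x, y)] for x, y in product(X, Y)}
    def h_mul(X, Y):
        # well defined: any choice of representatives gives the same orbit
        return cls[mul(next(iter(X)), next(iter(Y)))]
    return classes, h_add, h_mul

# Example: F_3 / F_3^x has two classes and the hyperaddition of K.
p = 3
classes, h_add, h_mul = quotient_hyperring(range(p), [1, 2],
                                           lambda x, y: (x + y) % p,
                                           lambda x, y: (x * y) % p)
zero, one = frozenset({0}), frozenset({1, 2})
assert classes == {zero, one}
assert h_add(one, one) == {zero, one}   # [1] + [1] = {[0], [1]}, as in K
assert h_add(zero, one) == {one}
```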
\begin{mydef} (\emph{Homomorphisms of hyperrings}) \begin{enumerate} \item Let $H_1$ and $H_2$ be hypergroups with the identities $e_1$ and $e_2$ respectively. A \emph{homomorphism} of hypergroups is a function $f:H_1\to H_2$ such that $f(e_1)=e_2$ and $f(a*b)\subseteq f(a)*f(b)$ for all $a,b \in H_1$. When a homomorphism $f$ satisfies the stronger condition $f(a*b)=f(a)*f(b)$ for all $a,b \in H_1$, we call $f$ \emph{strict}. \item Let $R_1$ and $R_2$ be hyperrings. A \emph{homomorphism} of hyperrings is a function $f:R_1 \to R_2$ such that $f:(R_1,+)\to (R_2,+)$ is a homomorphism of hypergroups and $f:(R_1,\cdot)\to (R_2,\cdot)$ is a homomorphism of monoids. When $f:(R_1,+)\to (R_2,+)$ is a strict homomorphism of hypergroups, we call $f$ \emph{strict}. \end{enumerate} \end{mydef}
\begin{myeg} Let $R$ be a hyperfield. Then there exists a \emph{unique homomorphism} from $R$ to the Krasner hyperfield; $\pi:R \to \mathbb{K}$ sending $a\neq 0$ to $1$ and $0$ to $0$. In other words, $\mathbb{K}$ is the final object in the category of hyperfields. \end{myeg}
\begin{rmk} There are other algebraic structures which generalize commutative rings. In fact, A.~Dress and W.~Wenzel introduce in \cite{dress1991grassmann} the notion of \emph{fuzzy rings} to unify various generalizations of matroids (as in \cite{baker2016matroids}) and also utilize fuzzy rings to recast basic definitions of tropical varieties in \cite{dress2011algebraic}. In \cite{giansiracusa2016relation}, together with J.~Giansiracusa and O.~Lorscheid, we clarify the relation between hyperrings and fuzzy rings to link Baker-Bowler theory and Dress-Wenzel theory. Also, in \cite{rowen2016algebras}, L.~Rowen introduces the notion of \emph{systems} and \emph{triples} to generalize both fuzzy rings and hyperrings along with other algebraic structures. \end{rmk}
\section{Algebraic geometry over hyperrings} \subsection{Locally hyperringed spaces and hyperring schemes} We first review some basic definitions and properties for \emph{locally hyperringed spaces} and \emph{hyperring schemes} studied in \cite{jun2015algebraic}. We also slightly generalize some of important results in \cite{jun2015algebraic} which serve as main technical tools in this paper.
Baker and Bowler made a remark in \cite{baker2016matroids} that a (semi)valuation (resp. an ordering) on a commutative ring $A$ can be interpreted as a homomorphism from $A$ to the tropical hyperfield $\mathbb{T}$ (resp. the hyperfield $\mathbb{S}$ of signs). The main reason to introduce locally hyperringed spaces and hyperring schemes is to deal with the general case when a scheme is not affine. In other words, a homomorphism from a commutative ring $A$ to a hyperfield $H$ should be replaced by a morphism $\Spec H \to \Spec A$ of locally hyperringed spaces to extend Baker and Bowler's observation to the non-affine case. All necessary details, which we omit in this section, can be found in \cite{jun2015algebraic}.
\begin{mydef}\label{primeideal} Let $R$ be a hyperring. \begin{enumerate} \item An \emph{ideal} $I$ of $R$ is a sub-hypergroup (a subset which is a hypergroup itself with the induced hyperoperation and the identity) of $R$ such that $RI\subseteq I$. \item One defines a \emph{prime ideal} to be the kernel of a homomorphism $\varphi:R\to\mathbb{K}$, where $\mathbb{K}$ is the Krasner hyperfield. \item A \emph{maximal ideal} is a proper ideal $\mathfrak{m}$ of $R$, that is $\mathfrak{m}\neq R$, which is not properly contained in any other proper ideal of $R$. \end{enumerate}
\end{mydef}
\begin{rmk} One may define a prime ideal of $R$ as in the classical definition; an ideal $\mathfrak{p}$ of $R$ such that $(R\backslash \mathfrak{p},\cdot)$ is a multiplicative monoid. But, one can easily show that this definition is equivalent to Definition \ref{primeideal}. Also, as in the classical case, any maximal ideal of a hyperring $R$ is prime. \end{rmk}
\begin{rmk} When $R$ is a hyperfield, $R$ has a unique prime ideal, namely $\{0_R\}$. \end{rmk}
Let $R$ be a hyperring and $\mathfrak{p}$ be a prime ideal of $R$. The localization $R_\mathfrak{p}$ of $R$ at $\mathfrak{p}$ is a hyperring with the underlying set \[ R_\mathfrak{p}:=(R \times S)/ \sim, \] where $S:=R-\mathfrak{p}$ and $\sim$ is the equivalence relation on $R \times S$ such that $(r,a) \sim (r_1,a_1)$ if and only if there exists $c \in S$ such that $cra_1=cr_1a$. We write $\frac{r}{a}$ for the equivalence class of $(r,a)$. Now, one imposes the following hyperaddition and multiplication: \[ \frac{r}{a}+\frac{r'}{a'}:=\{\frac{c}{aa'} \mid c \in ar'+a'r\}, \quad \frac{r}{a}\cdot\frac{r'}{a'}:=\frac{rr'}{aa'}. \] Equipped with these two operations, $R_\mathfrak{p}$ becomes a hyperring. Furthermore, we have a canonical map $S^{-1}:R \to R_\mathfrak{p}$ sending $r$ to $\frac{r}{1}$, and this homomorphism is injective if $R$ does not have any zero-divisors. Note that in fact the localization procedure can be carried out with respect to any multiplicative subset $S$ of $R$ and also satisfies the universal property as in the case of commutative rings.
\begin{mydef} Let $R$ be a hyperring and $X=\Spec R$ be the set of all prime ideals of $R$. Then one imposes Zariski topology on $X$ by declaring that closed sets are of the form $V(I):=\{\mathfrak{p} \in X \mid I\subseteq \mathfrak{p}\}$ for some ideal $I$ of $R$. \end{mydef}
\begin{rmk} In \cite{jun2015algebraic}, it is proven that, when $A$ is a $k$-algebra and $R=A/k^\times$, the prime spectra $\Spec A$ and $\Spec R$ are homeomorphic. One can easily confirm the set bijection. In fact, we have that $\Spec A=\Hom(A,\mathbb{K})$, $\Spec R=\Hom(A/k^\times,\mathbb{K})$, and $\Hom(A,\mathbb{K})=\Hom(A/k^\times,\mathbb{K})$. \end{rmk}
Let $R$ be a \emph{hyperdomain}, i.e., $R$ is a hyperring without multiplicative zero-divisors. In this case, we can construct a structure sheaf (of hyperrings) for the topological space $X=\Spec R$. Indeed, we define the hyperring of sections over an open subset $U\subseteq X$ as follows: \[ \mathcal{O}_X(U):=\{s:U\to \bigsqcup_{\mathfrak{p} \in U}R_\mathfrak{p} \mid s(\mathfrak{p}) \in R_\mathfrak{p} \textrm{ for each } \mathfrak{p} \in U, \textrm{ and $s$ is locally representable by some element } \tfrac{a}{f}\}. \]
Then one has the following:
\begin{mytheorem}\cite{jun2015algebraic}\label{equivalence} Let $R$ be a hyperdomain and $X=\Spec R$, then the hyperring of global sections $\Gamma(X,\mathcal{O}_X)$ is isomorphic to $R$. Furthermore, for each $x=\mathfrak{p} \in X$, the stalk $\mathcal{O}_{X,x}$ exists and is isomorphic to $R_\mathfrak{p}$. \end{mytheorem}
One can directly generalize the definition of locally ringed spaces to define locally hyperringed spaces as follows:
\begin{mydef}\label{locallyhyperringed} \begin{enumerate} \item A \emph{locally hyperringed space} $(X,\mathcal{O}_X)$ is a topological space $X$ together with a sheaf $\mathcal{O}_X$ of hyperrings (the structure sheaf of $X$) such that the stalk $\mathcal{O}_{X,x}$ exists for each $x \in X$ and contains a unique maximal ideal. \item Let $(X,\mathcal{O}_X)$ and $(Y,\mathcal{O}_Y)$ be locally hyperringed spaces. A \emph{morphism} \[ (f,f^\#):(X,\mathcal{O}_X) \to (Y,\mathcal{O}_Y) \] of locally hyperringed spaces is a pair of a continuous map $f:X \to Y$ and a morphism $f^\#:\mathcal{O}_Y \to f_*\mathcal{O}_X$ of sheaves of hyperrings such that for each $x \in X$, the induced map $f^\#_x:\mathcal{O}_{Y,f(x)} \to \mathcal{O}_{X,x}$ is local. In other words, the inverse image of a unique maximal ideal of $\mathcal{O}_{X,x}$ is a unique maximal ideal of $\mathcal{O}_{Y,f(x)}$. \item An \emph{integral hyperring scheme} is a locally hyperringed space which is locally isomorphic to the spectrum of a hyperdomain. \end{enumerate} \end{mydef}
\begin{rmk} We remark that for a sheaf $\mathcal{F}$ of hyperrings on a topological space $X$, the stalk $\mathcal{F}_x$ at a point $x \in X$ may not exist. In Definition \ref{locallyhyperringed}, a locally hyperringed space $(X,\mathcal{O}_X)$ assumes the existence of the stalk $\mathcal{O}_{X,x}$ at each $x \in X$. \end{rmk}
\begin{pro}\label{cominclusion} The inclusion functor $i$, from the category $\mathcal{C}$ of commutative rings to the category $\mathcal{D}$ of hyperrings, is fully faithful. \end{pro} \begin{proof} Let $A$, $B$ be commutative rings. We have to show that $\Hom_\mathcal{C}(A,B)=\Hom_\mathcal{D}(i(A),i(B))$. Suppose that $i(f)=i(g)$ for $f,g \in \Hom_\mathcal{C}(A,B)$. In particular, this means that $f=g$ as functions and hence $f=g \in \Hom_\mathcal{C}(A,B)$. Also, for any $h\in \Hom_\mathcal{D}(i(A),i(B))$, since $i(A)$ and $i(B)$ are commutative rings, the condition $h(a+b)\subseteq h(a)+h(b)$ for all $a,b \in i(A)$ simply means that $h(a+b)=h(a)+h(b)$ and hence $h \in \Hom_\mathcal{C}(A,B)$. \end{proof}
\begin{pro}\label{stalklemma} Let $X$ be a scheme considered as an object in the category of locally hyperringed spaces. For each $x \in X$, the stalk $\mathcal{O}_{X,x}$ exists and is the same as the stalk computed by considering $X$ in the category of locally ringed spaces. \end{pro} \begin{proof} We may assume that $X$ is affine, i.e., $X=\Spec A$ for some commutative ring $A$. Let $x=\mathfrak{p} \in X$. One can easily check that in this case $A_\mathfrak{p}$ satisfies the universal property of the stalk of $\mathcal{O}_X$ at $x$. \end{proof}
\begin{pro}\label{inclusionf} The inclusion functor $i$, from the category $\mathfrak{Schm}$ of schemes to the category $\mathfrak{Lhs}$ of locally hyperringed spaces, is fully faithful. \end{pro} \begin{proof} Let $X$ and $Y$ be schemes. We have to show that $\Hom_\mathfrak{Schm}(X,Y)=\Hom_\mathfrak{Lhs}(i(X),i(Y))$. Since any morphism of locally ringed spaces is indeed a morphism of locally hyperringed spaces, we have that \[ \Hom_\mathfrak{Schm}(X,Y)\subseteq\Hom_\mathfrak{Lhs}(i(X),i(Y)) \] Let $(h,h^\#):i(X) \to i(Y)$ be a morphism of locally hyperringed spaces. Similar to Proposition \ref{cominclusion}, one can easily see that $(h,h^\#)$ is indeed a morphism of locally ringed spaces. This completes the proof. \end{proof}
Note that Theorem \ref{equivalence} cannot be generalized to the category of hyperrings. For instance, the following example shows that one does not have an equivalence between the opposite category of hyperrings and the category of affine hyperring schemes.
\begin{myeg}\cite[Example 4.24]{jun2015algebraic} Let $R$ be the quotient hyperring $(\mathbb{Q}\oplus \mathbb{Q})/G$, where $G$ is the subgroup of ($\mathbb{Q}\oplus \mathbb{Q})^\times$ consisting of $(1,1)$ and $(-1,-1)$. Let $X=\Spec R$. Then we have \[ \Gamma(X,\mathcal{O}_X)=(\mathbb{Q}/N) \oplus (\mathbb{Q}/N), \] where $N=\{1,-1\}$ is a subgroup of $\mathbb{Q}^\times$. One can easily check that in this case $R$ is not isomorphic to $\Gamma(X,\mathcal{O}_X)$. \end{myeg}
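\begin{rmk} One elementary way to see directly that $R$ is not isomorphic to $\Gamma(X,\mathcal{O}_X)$ in the above example (using that multiplication in the quotient hyperring is induced componentwise, so that $[(a,b)]\cdot[(c,d)]=[(ac,bd)]$) is to count square roots of the identity. In $R$, the equation $u^2=[(1,1)]$ has the two distinct solutions $[(1,1)]$ and $[(1,-1)]$; indeed, $(a^2,b^2) \in \{(1,1),(-1,-1)\}$ forces $(a^2,b^2)=(1,1)$ and hence $a,b \in \{1,-1\}$. On the other hand, in $(\mathbb{Q}/N)\oplus(\mathbb{Q}/N)$, the equation $u^2=([1],[1])$ has the unique solution $([1],[1])$, since $[x]^2=[1]$ in $\mathbb{Q}/N$ implies $x^2 \in \{1,-1\}$ and hence $x \in \{1,-1\}$, i.e., $[x]=[1]$. \end{rmk}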
At the moment, we only have a theory of integral hyperring schemes, which generalizes only integral schemes. Nonetheless, we have the following theorem, which will be used in the sequel. For the general terminology for hyperring schemes generalizing the classical concepts, we refer the reader to \cite{jun2015algebraic} or \cite[\S 4]{jaiungthesis}.
\begin{mythm}\label{mainlemma} Let $H$ be a hyperfield and $k$ be a field. Suppose that we have a fixed homomorphism $s:k \to H$ of hyperfields, and let $X$ be a scheme over $k$. Then giving a morphism $f:\Spec H \to X$ over $k$ of locally hyperringed spaces is equivalent to giving a point $x \in X$ together with a homomorphism $\tilde{s}:k(x) \to H$ of hyperrings such that $s=\tilde{s}\circ \rho$, where $k(x)$ is the residue field at $x$ and $\rho:k \to k(x)$ is the canonical homomorphism. \end{mythm} \begin{proof} The proof is similar to the classical one; however, we include it for the sake of completeness. Let $\Spec H=\{y\}$. First, suppose that $(f,f^\#):\Spec H \to X$ sending $y$ to $x$ is a morphism of locally hyperringed spaces. By taking stalks, we obtain \[ f^\#_x:\mathcal{O}_{X,x} \to H. \] Note that even though the stalk $\mathcal{O}_{X,x}$ is taken in the category of locally hyperringed spaces, it is the same as the stalk taken in the category of locally ringed spaces thanks to Proposition \ref{stalklemma}. Now, since $f_x^\#$ is local, we have $(f_x^\#)^{-1}(y)=(f_x^\#)^{-1}(\{0\})=\mathfrak{m}_x$, where $\mathfrak{m}_x$ is the unique maximal ideal of $\mathcal{O}_{X,x}$. In other words, $\mathfrak{m}_x = \ker f_x^\#$ and hence $f^\#_x$ induces the following homomorphism of hyperrings: \[ \varphi_x:\mathcal{O}_{X,x}/\mathfrak{m}_x=k(x) \to H, \quad [a] \mapsto f_x^\#(a), \] where $[a]$ is the equivalence class of $a \in \mathcal{O}_{X,x}$ in $k(x)$. This shows that $(f,f^\#)$ gives a point $x \in X$ and a homomorphism of hyperrings $\varphi_x:k(x) \to H$, which is clearly compatible with $s:k \to H$ and $\rho:k\to k(x)$.
Conversely, suppose that $x \in X$ and $\varphi_x:k(x) \to H$ such that $\varphi_x\circ \rho=s$ are given. We define $f:\Spec H=\{y\} \to X$ sending $y$ to $x$. Then clearly $f$ is continuous. Next, we define a morphism $f^\#:\mathcal{O}_X \to f_*\mathcal{O}_{\Spec H}$ of sheaves of hyperrings. Notice that \[ \mathcal{O}_{\Spec H}(f^{-1}(U)) =\left\{ \begin{array}{ll} H & \textrm{if $x \in U$}\\ 0& \textrm{if $x \not \in U$}, \end{array} \right. \] Hence, for each open subset $U$ of $X$, we define the following: \[ f^\#(U):=\left\{ \begin{array}{ll} \varphi_x\circ \pi \circ \pi_{U,x} & \textrm{if $x \in U$}\\ 0& \textrm{if $x \not \in U$}, \end{array} \right. \] where $\varphi_x$ is given, $\pi:\mathcal{O}_{X,x} \to k(x)$ is a canonical projection, and $\pi_{U,x}$ is a canonical homomorphism from $\mathcal{O}_X(U)$ to $\mathcal{O}_{X,x}$. Then $f^\#$ is a morphism of sheaves of hyperrings. Indeed, clearly $f^\#(U)$ is a homomorphism of hyperrings for each open subset $U$ of $X$ and hence we only have to check the compatibility condition. Suppose that $V \subseteq U \subseteq X$. There are three cases; the first case is when $x \not \in U$. In this case, both $f^\#(U)$ and $f^\#(V)$ are zero and hence there is nothing to prove. The second case is when $x \in U \cap V^c$. Since $\mathcal{O}_{\Spec H}(f^{-1}(V))=0$ and $f^\#(V)=0$, in this case the compatibility is clear. The only nontrivial case is when $x \in V$. In this case, we have $\mathcal{O}_{\Spec H}(f^{-1}(U))=\mathcal{O}_{\Spec H}(f^{-1}(V))=H$ and the restriction map $\pi_{f^{-1}(U),f^{-1}(V)}:\mathcal{O}_{\Spec H}(f^{-1}(U)) \to \mathcal{O}_{\Spec H}(f^{-1}(V))$ is just an identity map. We first claim that the following diagram commutes. \begin{equation} \begin{gathered} \xymatrix{\ar @{} [dr] \mathcal{O}_X(U) \ar[r]^{\pi_{U,V}} \ar[d]_{f^\#(U)} & \mathcal{O}_X(V) \ar[d]^{f^\#(V)} \\
H \ar[r]^{id} & H } \end{gathered} \end{equation} In fact, we have $f^\#(U)=\varphi_x\circ \pi \circ \pi_{U,x}$. But, since $x \in V \subseteq U$, we have $\pi_{U,x}=\pi_{V,x}\circ \pi_{U,V}$ and hence $f^\#(U)=\varphi_x\circ \pi\circ \pi_{V,x}\circ \pi_{U,V}=f^\#(V)\circ \pi_{U,V}$. This proves that the diagram commutes. It only remains to show that $f^\#_{x}:\mathcal{O}_{X,x} \to \mathcal{O}_{\Spec H,y}$ is a local homomorphism of local hyperrings, i.e., $(f^\#_x)^{-1}(y)=\mathfrak{m}_x$. For this, we may assume that $X$ is affine. Let $X=\Spec A$ and $x$ be a prime ideal $\mathfrak{p}$. Since $\mathcal{O}_{\Spec H,y}=\Frac(H)=H$ (thanks to Theorem \ref{equivalence}), by taking global sections and stalks, we have the following commutative diagram: \begin{equation}\label{equation} \begin{gathered} \xymatrix{\ar @{} [dr] A \ar[r]^{\pi_{X,x}} \ar[d]_{f^\#(X)} & A_\mathfrak{p} \ar[d]^{f^\#_x} \\
H \ar[r]^{id} & H }
\end{gathered} \end{equation} Notice that $y=\{0_H\} \subseteq H$ and $(f^\#(X))^{-1}(y)=\mathfrak{p}$ (from the assumption). Furthermore, we have $\pi_{X,x}(\mathfrak{p})=\mathfrak{m}_x$. It follows from the commutative diagram \eqref{equation} that $(f^\#_x)^{-1}(y)=\mathfrak{m}_x$.
One can easily check that the above two constructions are inverse to each other and hence induce the desired one-to-one correspondence. \end{proof}
\begin{rmk} In Theorem \ref{mainlemma}, we considered a scheme over a field $k$; however, one can easily observe that the same argument remains valid when one replaces $k$ with $\mathbb{Z}$, the ring of integers. \end{rmk}
\begin{rmk} One may notice that the above theorem is a generalization of a classical result; when $H$ is a field, the structure morphism $\varphi:k \to H$ is an inclusion (or the canonical map when $k=\mathbb{Z}$), and to give a morphism from $\Spec H$ to $X$ is equivalent to give a point $x \in X$ and an inclusion map $k(x) \to H$ which is compatible with the given structure morphism. \end{rmk}
When we specialize the hyperfield $H$ to the Krasner hyperfield $\mathbb{K}$, the tropical hyperfield $\mathbb{T}$, or the hyperfield of signs $\mathbb{S}$, we obtain the following.
\begin{cor}\label{krasner} Let $\mathbb{K}$ be the Krasner hyperfield. Suppose that $X$ is a scheme over $\mathbb{Z}$. To give a morphism $f:\Spec \mathbb{K}=\{y\} \to X$ is the same thing as to give a point $f(y)=x \in X$ and a homomorphism $k(x) \to \mathbb{K}$. \end{cor}
\begin{rmk}\label{krasnerrmk}
For any field $K$, since $\Spec K$ is a one-point set and $\Spec K =\Hom(K,\mathbb{K})$, there is only one homomorphism from $K$ to $\mathbb{K}$. In particular, there exists only one homomorphism from $k(x)$ to $\mathbb{K}$ at each $x \in X$. Therefore, a morphism $f:\Spec \mathbb{K}=\{y\} \to X$ is uniquely determined by a point $f(y)=x \in X$. In other words, the underlying set $|X|$ of a scheme $X$ is in one-to-one correspondence with the set $\Hom_{\mathfrak{Lhs}}(\Spec \mathbb{K},X)$. This perspective will be investigated in \S \ref{schemetheory}. \end{rmk}
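\begin{myeg} For concreteness, let us write down the unique homomorphism of Remark \ref{krasnerrmk}; we use the description of the Krasner hyperfield $\mathbb{K}=\{0,1\}$ with the usual multiplication and the hyperaddition $0+x=\{x\}$ and $1+1=\{0,1\}$. For any field $K$, the unique homomorphism is \[ f:K \to \mathbb{K}, \qquad f(x)=\left\{ \begin{array}{ll} 0 & \textrm{if $x=0$}\\ 1 & \textrm{if $x \neq 0$.} \end{array} \right. \] Indeed, $f$ is clearly multiplicative, and $f(a+b) \in f(a)+f(b)$ holds in all cases: if $a=0$ or $b=0$ this is immediate, and if $a,b \neq 0$ then $f(a)+f(b)=1+1=\{0,1\}$ contains both possible values of $f(a+b)$. Conversely, the kernel of any homomorphism $K \to \mathbb{K}$ is a prime ideal of the field $K$, hence $\{0\}$, which forces the formula above. \end{myeg}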
\begin{cor}\label{analytification} Let $\mathbb{T}$ be the tropical hyperfield. Let $k$ be a non-Archimedean valued field with a valuation $\nu:k\to \mathbb{T}$ and $X$ be a scheme over $k$. Then to give a morphism $f:\Spec \mathbb{T}=\{y\} \to X$ is the same thing as to give a point $f(y)=x \in X$ and an extension of $\nu$ from $k$ to $k(x)$. \end{cor}
\begin{rmk} This perspective will be studied in \S \ref{berkovich} in connection to the Berkovich analytification $X^{\textrm{an}}$ of an algebraic variety $X$ over a complete non-Archimedean valued field $k$. \end{rmk}
\begin{cor} Let $\mathbb{S}$ be the hyperfield of signs. Let $k$ be a field together with a homomorphism $s:k \to \mathbb{S}$ and $X$ be a scheme over $k$. Then to give a morphism $f:\Spec \mathbb{S}=\{y\} \to X$ is the same thing as to give a point $f(y)=x \in X$ and a homomorphism from $k(x)$ to $\mathbb{S}$ whose restriction to $k$ is $s$, that is, an ordering of $k(x)$ which extends that of $k$. \end{cor}
\begin{rmk} This perspective will be investigated in \S \ref{spaceoforderings} in connection to the real scheme $X_r$ associated to a scheme $X$. \end{rmk}
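\begin{myeg} As an illustration of the last corollary, let $X=\mathbb{A}^1_{\mathbb{R}}=\Spec \mathbb{R}[t]$ with $s:\mathbb{R} \to \mathbb{S}$ the sign map; the following is a classical computation from real algebraic geometry. A closed point with residue field $\mathbb{C}$ admits no morphism to $\mathbb{S}$, since $\mathbb{C}$ carries no ordering. A closed point $(t-a)$ with $a \in \mathbb{R}$ has residue field $\mathbb{R}$, which carries a unique ordering. Finally, the residue field at the generic point is $\mathbb{R}(t)$, whose orderings are classified by the possible cuts of $t$ in $\mathbb{R}$: for each $a \in \mathbb{R}$, there are two orderings in which $t$ is infinitesimally close to $a$ from above or from below, together with the two orderings in which $t$ is larger (resp.~smaller) than every real number. Hence, as a set, $X(\mathbb{S})$ is the real spectrum of $\mathbb{R}[t]$. \end{myeg}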
We also have the following proposition which shows that our notion of locally hyperringed spaces is an appropriate generalization from affine schemes to non-affine schemes. In what follows, by $\mathfrak{Lhs}$, we always mean the category of locally hyperringed spaces.
\begin{pro}\label{equivaelnceofcategories} Let $H$ be a hyperfield and $X=\Spec A$ be an affine scheme. Then, we have the following identification of sets: \[ \Hom(A,H)=\Hom_{\mathfrak{Lhs}}(\Spec H, X). \] \end{pro} \begin{proof} Let $f:A \to H$ be a homomorphism of hyperrings. Then $f$ determines the point $\mathfrak{p}:=\ker(f) \in \Spec A$; moreover, $f$ factors through $\tilde{f}:A/\mathfrak{p} \to H$ and hence induces a homomorphism \[ \Frac(A/\mathfrak{p})=k(\mathfrak{p}) \longrightarrow H. \] It follows from Theorem \ref{mainlemma} that this determines a unique element in $\Hom_{\mathfrak{Lhs}}(\Spec H, X)$. Conversely, any given morphism $g:\Spec H \to X$ induces a homomorphism $\Gamma(X)=A \to \Gamma(\Spec H)=H$ (thanks to Theorem \ref{equivalence}). These constructions are clearly inverse to each other. \end{proof}
\begin{mydef} Let $X$ be a scheme and $H$ be a hyperfield. \begin{enumerate} \item We let $X(H)$ be the set of $H$-rational points of $X$, i.e., \[ X(H):=\Hom_{\mathfrak{Lhs}}(\Spec H, X). \] Whenever there is no possible confusion, we will simply write $X(H)=\Hom(\Spec H, X)$. \item Let $X$ be a scheme over a field $k$ and $H$ be a hyperfield with a fixed homomorphism $\varphi:k \to H$. We let $X(H):=\Hom_k(\Spec H, X)$, i.e., the set of morphisms of locally hyperringed spaces from $\Spec H$ to $X$ which are compatible with $\varphi^\#:\Spec H \to \Spec k$. \end{enumerate} \end{mydef}
\begin{rmk} It follows from Proposition \ref{equivaelnceofcategories} that, when $X=\Spec A$ and $H$ is a hyperfield, we have $X(H)=\Hom(A,H)$ (as sets). \end{rmk}
\subsection{Fine topology on sets of rational points}\label{finetopology} Let $X$ be a scheme over a scheme $S$. Then, in general, $X(S)$ is not equipped with any topology. In this section, we follow the idea of Lorscheid and Salgado in \cite{lorscheid2016remark} (or Lorscheid in \cite{lorscheid2015scheme}) to impose the \emph{fine topology} on sets of rational points of schemes over hyperfields equipped with an arbitrary topology. In what follows, a hyperfield with a topology is always assumed to be a topological hyperfield, as defined below.
\begin{mydef} A hyperfield $H$ is said to be a \emph{topological hyperfield} if $H$ is equipped with a topology satisfying the following conditions: \begin{enumerate}
\item
The multiplication map $H \times H \to H$, where $H \times H$ is equipped with the product topology, is continuous.
\item
$H^\times=H-\{0\}$ is open, and the inversion map $i:H^\times \to H^\times$ is continuous. \end{enumerate} \end{mydef}
\begin{myeg} One can easily check the following. \begin{enumerate}
\item The Krasner hyperfield $\mathbb{K}$ with the topology $\{\emptyset, \{1\}, \{0,1\}\}$ is clearly a topological hyperfield. \item The tropical hyperfield $\mathbb{T}$ with the Euclidean topology is a topological hyperfield. \item The hyperfield of signs $\mathbb{S}$ with the topology $\{\emptyset, \{1\},\{-1\}, \{-1,1\}, \mathbb{S}\}$ is a topological hyperfield. \end{enumerate} \end{myeg}
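\begin{rmk} To illustrate the first example, note that for the multiplication map $m:\mathbb{K} \times \mathbb{K} \to \mathbb{K}$ we have $m^{-1}(\{1\})=\{(1,1)\}=\{1\} \times \{1\}$, which is open in the product topology, and hence $m$ is continuous; moreover, $\mathbb{K}^\times=\{1\}$ is open and the inversion map on $\{1\}$ is the identity. Observe also that the definition of a topological hyperfield imposes no continuity condition on the hyperaddition. \end{rmk}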
\begin{rmk} We remark that, in \cite{lorscheid2015scheme}, Lorscheid implements the fine topology to ordered blue schemes to recast Berkovich analytification topologically. Also, a similar idea is considered by J.~Giansiracusa and N.~Giansiracusa in \cite{giansiracusa2014universal} based on their previous work \cite{noah} for the same purpose as Lorscheid, but with tropical schemes. To be a bit more precise, Giansiracusa-Giansiracusa show in \cite{giansiracusa2014universal} that the Berkovich analytification $X^{\textrm{an}}$ is homeomorphic to the space $\Trop(X)(\mathbb{R}_{max})$ of rational points of the universal scheme-theoretic tropicalization $\Trop(X)$ over the tropical semifield $\mathbb{R}_{max}$ endowed with the strong Zariski topology (\cite[Definition 3.4.1]{giansiracusa2014universal}). \end{rmk}
The recipe is as follows. Let $X$ be a scheme over a field $k$ (or over $\mathbb{Z}$) and $H$ be a hyperfield which is equipped with a topology $\mathcal{T}$. Suppose that a structural map $\varphi:k \to H$ is fixed. First, we consider the case when $X$ is affine, i.e., $X=\Spec A$. In this case, $X(H)=\Hom_k(\Spec H, X)=\Hom_k(A,H)$ and hence we consider the following identification: \begin{equation} X(H)=\Hom_k(A,H) \subseteq \prod_{a \in A}H^{(a)}. \end{equation} We give the product topology on $\prod_{a \in A}H^{(a)}$ by using the topology $\mathcal{T}$ on $H$ and then impose the subspace topology $\mathcal{T}_p$ on $X(H)$. This is called the \emph{affine topology}. We note that the topology $\mathcal{T}_p$ is the coarsest topology on $\Hom_k(A,H)$ such that for each $a \in A$, the evaluation map \[ ev_a:\Hom_k(A,H) \to H,\quad f \mapsto f(a) \] is continuous. One can easily check that this topology is functorial in both $A$ and $H$.
Next, consider the case when $X$ is a scheme over a field $k$ (or over $\mathbb{Z}$), with a structural map $\varphi:k\to H$. The \emph{fine topology} $\mathcal{T}_F$ on $X(H)$ is the finest topology such that for any $k$-morphism $f_Y:Y \to X$ from an affine $k$-scheme $Y$ to $X$, the induced map \[ f_Y(H):Y(H) \to X(H) \] is continuous, where $Y(H)$ is equipped with the affine topology.
The classical counterpart of the following statement is proven in \cite{lorscheid2016remark}, and one may apply an argument similar to that of \cite[Theorem A]{lorscheid2016remark} to prove it:
\begin{pro}\label{affinetopology} With the above notations, if $X$ is an affine scheme and $H$ is a hyperfield equipped with a topology, then the affine topology and the fine topology agree on $X(H)$. \end{pro}
The fine topology is functorial in the following sense.
\begin{pro}\label{inducedlemma} Let $f:Y \to X$ be a morphism of schemes and $H$ be a topological hyperfield. Then the induced map, $f(H):Y(H) \to X(H)$ is continuous, where $Y(H)$ and $X(H)$ are equipped with the fine topology. \end{pro} \begin{proof} The same argument as in \cite[Proposition 2.1]{lorscheid2016remark} shows the result. \end{proof}
\begin{pro}\label{openembedding} Let $X$ be a scheme and $H$ be a topological hyperfield. If $\{U_i\}$ is an affine open covering of $X$, then $\{U_i(H)\}$ is an open covering of $X(H)$. \end{pro} \begin{proof} The standard argument, as in the proof of \cite[Theorem B]{lorscheid2016remark}, reduces our proposition to the case when $X$ is affine, say $X=\Spec A$. We may also assume that $U_i$ is a basic open subset $D(f_i)$ of $X$ for some $f_i \in A$. In this case, since the affine topology agrees with the fine topology, we may further assume that $X(H)$ and $U_i(H)$ are equipped with the affine topology.
Now, for each $f \in A$, let $\alpha:A \to A_f$ be the localization. One can easily see that the following is an injection: \begin{equation}\label{eq: loc} \tilde{\alpha}: D(f)(H)=\Hom(A_f, H) \to X(H)=\Hom(A,H), \quad \varphi \mapsto \varphi \circ \alpha. \end{equation} Furthermore, we have \begin{equation}\label{eq: open cover} \textrm{Img}(\tilde{\alpha}) = \{\psi \in \Hom(A,H) \mid \psi(f) \neq 0\}. \end{equation} From Proposition \ref{inducedlemma}, $\tilde{\alpha}$ is continuous. We claim that $\tilde{\alpha}$ is an open map. In fact, let $Z$ be an open subset of $\Hom(A_f,H)$ (with the affine topology). We may assume that \[ Z=\{\varphi:A_f \to H \mid \varphi(\frac{a}{f^n}) \in W\} \] for some fixed $\frac{a}{f^n} \in A_f$ and a fixed open subset $W$ of $H$. We have the following from \eqref{eq: loc} and \eqref{eq: open cover}: \[ \tilde{\alpha}(Z)= \{\psi:A \to H \mid \psi(f) \neq 0, \psi(a)\psi(f)^{-n} \in W\}. \] Let us consider $T:=\prod_{a \in A}H^{(a)}$ as the set of functions from $A$ to $H$ (not necessarily homomorphisms). Then, the following is an open subset of $T$: \[ T_W=\{g \in T \mid g(f) \neq 0, ~~g(a)g(f)^{-n} \in W\} \] since the multiplication and inversion of $H$ are continuous and $H^\times$ is an open subset of $H$. Now, we have that \[ \tilde{\alpha}(Z)=T_W \cap \Hom(A,H), \] and hence $\tilde{\alpha}(Z)$ is an open subset of $\Hom(A,H)$, showing that $\tilde{\alpha}$ is an open embedding.
Finally, if $\{D(f_i)\}$ is an open cover of $\Spec A$, then by quasi-compactness we may assume that the cover is finite, say $D(f_1),\dots,D(f_n)$, and we can find $g_1,\dots,g_n \in A$ such that $\sum_{i=1}^n g_if_i =1$. It follows that for any $\psi \in \Hom(A,H)$, one has: \[ 1=\psi(1)=\psi(\sum_{i=1}^ng_if_i) \in \sum_{i=1}^n \psi(g_i)\psi(f_i). \] In particular, for each $\psi \in \Hom(A,H)$, $\psi(f_i) \neq 0$ for some $i=1,\dots,n$. It follows from the description \eqref{eq: open cover} that $\{D(f_i)(H)\}$ is an open cover of $X(H)$. \end{proof}
\section{Geometry of hyperfields in a view of classical scheme theory}\label{schemetheory}
Let $X$ be a scheme over $\mathbb{Z}$ and $\mathbb{K}$ be the Krasner hyperfield. In \cite{con3}, Connes and Consani showed that \begin{equation}\label{sets} X=X(\mathbb{K}) \textrm{ (as sets)}. \end{equation} One can see that the bijection in \eqref{sets} easily follows from Theorem \ref{mainlemma}. We enrich the bijection \eqref{sets} to a homeomorphism in a natural way. To this end, we impose the topology $\mathcal{T}$ on $\mathbb{K}=\{0,1\}$ in such a way that the open sets are $\emptyset$, $\{1\}$, and $\{0,1\}$. Then we have the following:
\begin{pro}\label{schemehomeo} Let $X=\Spec A$ be an affine scheme over $\mathbb{Z}$. Then $X(\mathbb{K})$ (equipped with the fine topology) is homeomorphic to $X$ (equipped with the Zariski topology). \end{pro} \begin{proof} Let $\varphi:X \to X(\mathbb{K})=\Hom(A,\mathbb{K})$ be the set bijection from Theorem \ref{mainlemma}, i.e., \[ \varphi:X \to X(\mathbb{K})=\Hom(A,\mathbb{K}), \quad \mathfrak{p} \mapsto \varphi(\mathfrak{p}):=a_\mathfrak{p}, \] where $a_\mathfrak{p}(x)=0 $ if and only if $x \in \mathfrak{p}$. For $a \in A$, let $D(a)$ be the basic open subset of $X$, i.e., $D(a)=\{\mathfrak{p} \in X \mid a \not \in \mathfrak{p}\}$. Then, we have \[ \varphi(D(a))=\{f \in \Hom(A,\mathbb{K}) \mid a \notin \ker(f)\}=\Hom(A,\mathbb{K})\bigcap \left(\prod_{r \in A} U_r\right), \] where $U_a=\{1\}$ and $U_r=\mathbb{K}$ for all $r \neq a \in A$. Clearly $\prod_{r \in A} U_r$ is an open subset of $\prod_{r \in A}\mathbb{K}^{(r)}$ and hence $\Hom(A,\mathbb{K})\bigcap \left(\prod_{r \in A} U_r\right)$ is an open subset of $\Hom(A,\mathbb{K})$.
Conversely, suppose that $U$ is an open subset of $\Hom(A,\mathbb{K})$. We may assume that $U=\Hom(A,\mathbb{K}) \bigcap \left(\prod_{a \in A}U_a\right)$, where $U_a=\mathbb{K}$ for all but finitely many elements $a_1,\dots,a_n \in A$ and $U_{a_i}=\{1\}$ for $i=1,\dots,n$. One can easily check that \[ \varphi^{-1}(U)=\bigcap_{i=1}^n D(a_i). \] This proves that $\varphi$ is a homeomorphism. \end{proof}
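\begin{myeg} For instance, take $X=\Spec \mathbb{Z}$. Under the homeomorphism of Proposition \ref{schemehomeo}, the generic point $(0)$ corresponds to the homomorphism sending $0$ to $0$ and every nonzero integer to $1$, while a closed point $(p)$ corresponds to \[ a_{(p)}:\mathbb{Z} \to \mathbb{K}, \qquad a_{(p)}(n)=\left\{ \begin{array}{ll} 0 & \textrm{if $p \mid n$}\\ 1 & \textrm{otherwise.} \end{array} \right. \] One easily checks that $a_{(p)}$ is a homomorphism of hyperrings, and the basic open subset $D(n)$ corresponds to $\{\psi \in \Hom(\mathbb{Z},\mathbb{K}) \mid \psi(n)=1\}$, recovering the Zariski topology on $\Spec \mathbb{Z}$. \end{myeg}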
Indeed, Proposition \ref{schemehomeo} can be generalized to any scheme over $\mathbb{Z}$ as follows.
\begin{pro}\label{SPECrepresentable} Let $X$ be a scheme over $\mathbb{Z}$. Then $X(\mathbb{K})$, equipped with the fine topology, is homeomorphic to $X$. \end{pro} \begin{proof}
Let $\beta:X\to X(\mathbb{K})$ be the set bijection described in Corollary \ref{krasner}. Let us fix an affine open covering $\{U_i\}$ of $X$, where $U_i=\Spec A_i$. Consider the restriction $r_i:=\beta|_{U_i}:U_i \to U_i(\mathbb{K})$ of $\beta$ to each $U_i$. It follows from Proposition \ref{openembedding} that each $U_i(\mathbb{K})$ is an open subset of $X(\mathbb{K})$ and $\{U_i(\mathbb{K})\}$ is an open covering of $X(\mathbb{K})$. Now, it follows from Proposition \ref{schemehomeo} that each $r_i$ is a homeomorphism, and the desired result follows. \end{proof}
\begin{cor}
Let $\mathcal{F}$ be the functor from the category $\mathfrak{Schm}$ of schemes to the category $\mathfrak{Top}$ of topological spaces sending a scheme $X$ to its underlying topological space $|X|$. Then $\mathcal{F}$ is isomorphic to the functor $\Hom(\Spec \mathbb{K},-)$. In particular, by considering the affine case, the functor $\Spec$, from the category of commutative rings to $\mathfrak{Top}$, is isomorphic to $\Hom(-,\mathbb{K})$. \end{cor} \begin{proof}
For the notational convenience, we let $\mathcal{G}:=\Hom(\Spec \mathbb{K},-)$. For each scheme $X$, we let $\eta_X:\mathcal{F}(X)=|X| \to \mathcal{G}(X)=X(\mathbb{K})$ be the homeomorphism in Proposition \ref{SPECrepresentable}. Then, for each $\mathfrak{p} \in |X|$, we have $\eta_X(\mathfrak{p}):\Spec \mathbb{K} \to X$ such that the image of $\eta_X(\mathfrak{p})$ is $\mathfrak{p} \in X$. In particular, for a morphism of schemes $f:X\to Y$, one can easily see that the following diagram commutes: \[ \begin{tikzcd}[row sep=large, column sep=1.5cm] \mathcal{F}(X)\arrow{r}{\eta_X}\arrow{d}[swap]{\mathcal{F}(f)} & \mathcal{G}(X) \arrow{d}{\mathcal{G}(f)} \\ \mathcal{F}(Y) \arrow{r}[swap]{\eta_Y} & \mathcal{G}(Y) \end{tikzcd} \] This proves that the functors $\mathcal{F}$ and $\mathcal{G}$ are isomorphic. \end{proof}
\section{Geometry of hyperfields in a view of Berkovich theory}\label{berkovich}
In this section, we study the Berkovich analytification of an algebraic variety in terms of the tropical hyperfield $\mathbb{T}$. We also consider a possible connection to tropical geometry and the Berkovich analytification of affine algebraic group schemes. We note that Berkovich used the multiplicative notation (with $\mathbb{R}_{\geq 0}$); we will instead use the additive notation, to be compatible with the additive notation of $\mathbb{T}$, and this makes no difference.
\subsection{Analytification is representable} We generalize and prove the remark that Baker and Bowler made (for the affine case) in \cite[Example 5.4]{baker2016matroids}: the Berkovich analytification functor is isomorphic to the functor $\Hom(\Spec \mathbb{T},-)$. In what follows, we always assume that $k$ is a complete non-Archimedean valued field and let $\mathbb{T}$ be the tropical hyperfield. As we mentioned earlier, we will use the additive notation, and all valuations will be assumed to be non-Archimedean (and all valued fields complete) unless otherwise stated. We also use the terms \emph{multiplicative seminorm} and \emph{semivaluation} interchangeably.
Let us first consider the affine case. For a normed algebra $(\mathcal{A},|-|_\mathcal{A})$ over a field $k$ with a valuation $\nu$, we define the following notation: \begin{equation}\label{bounded}
\Hom_{b,k}(\mathcal{A},\mathbb{T}):=\{f:\mathcal{A} \to \mathbb{T} \mid \exists C_f\in \mathbb{R}\textrm{ such that } f(x) \leq C_f+|x|_\mathcal{A}\textrm{ for all }x \in \mathcal{A} \textrm{ and } f\mid_{k}=\nu\}. \end{equation}
\begin{pro}\label{extension them} Let $(k,\nu)$ be a valued field, $\mathcal{A}$ be a normed algebra over $k$, and $\hat{\mathcal{A}}$ be the completion of $\mathcal{A}$. Then we have the following bijection of sets: \[ \Hom_{b,k}(\mathcal{A},\mathbb{T})=\Hom_{b,k}(\hat{\mathcal{A}},\mathbb{T}). \] \end{pro} \begin{proof} Note that $f \in \Hom_{b,k}(\mathcal{A},\mathbb{T})$ is nothing but a bounded multiplicative seminorm on $\mathcal{A}$ extending the valuation on $k$, and it is well known that any bounded multiplicative seminorm extends uniquely to a bounded multiplicative seminorm on the completion $\hat{\mathcal{A}}$. \end{proof}
For a normed ring $A$, we let $\Hom_b(A,\mathbb{T})$ be the set of bounded homomorphisms from $A$ to $\mathbb{T}$ as in \eqref{bounded}.
Recall that a semivaluation on a commutative ring $A$ satisfies the same axioms as a valuation except that a semivaluation is allowed to have a nontrivial kernel (this is a multiplicative seminorm in the terminology of Berkovich in \cite{berkovich2012spectral}).
\begin{lem}\label{valuationmorphism} Let $(\Gamma,+)$ be a totally ordered abelian group and $\Gamma_{hyp}$ be the associated hyperfield (as in Example \ref{mainexample}). Let $A$ be a commutative ring. Then a semivaluation on $A$, with the value group $\Gamma$, is equivalent to a hyperring homomorphism from $A$ to $\Gamma_{hyp}$. In particular, a real semivaluation on $A$ is the same thing as a hyperring homomorphism from $A$ to $\mathbb{T}$. \end{lem} \begin{proof} The definition of a homomorphism from a commutative ring $A$ to $\Gamma_{hyp}$ is precisely the definition of a semivaluation with the value group $\Gamma$. \end{proof}
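\begin{myeg} For example, consider the $p$-adic valuation on $\mathbb{Q}$; here we use the additive description of $\mathbb{T}=\mathbb{R}\cup\{-\infty\}$, in which the multiplication is the usual addition of real numbers and the hyperaddition is $a+b=\{\max\{a,b\}\}$ for $a \neq b$ and $a+a=\{c \in \mathbb{T} \mid c \leq a\}$. Writing $v_p(x)$ for the exponent of $p$ in $x \neq 0$, the map \[ \nu_p:\mathbb{Q} \to \mathbb{T}, \qquad \nu_p(x)=-v_p(x) \textrm{ for } x \neq 0, \quad \nu_p(0)=-\infty, \] is a hyperring homomorphism: multiplicativity is the identity $v_p(xy)=v_p(x)+v_p(y)$, and the containment $\nu_p(x+y) \in \nu_p(x)+\nu_p(y)$ is exactly the ultrametric inequality, since $\nu_p(x+y)=\max\{\nu_p(x),\nu_p(y)\}$ when $\nu_p(x) \neq \nu_p(y)$, while $\nu_p(x+y) \leq \nu_p(x)$ when $\nu_p(x)=\nu_p(y)$. \end{myeg}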
Next, we prove that the Berkovich analytification $X^{\textrm{an}}$ of a scheme $X$ of finite type over $k$ is homeomorphic to $X(\mathbb{T})$ equipped with the fine topology.
\begin{lem}\label{valuationashomomorphism} Let $A$ be a commutative ring and $\mathbb{T}$ be the tropical hyperfield. Then giving a hyperring morphism from $A$ to $\mathbb{T}$ is equivalent to giving a prime ideal $\mathfrak{p}$ of $A$ and a real valuation on the residue field $k(\mathfrak{p})$ at $\mathfrak{p}$. \end{lem} \begin{proof} Let $\varphi:A \to \mathbb{T}$ be a homomorphism of hyperrings. One can easily see that $\mathfrak{p}:=\ker (\varphi)$ is a prime ideal of $A$. Furthermore, $\varphi$ factors through $A/\mathfrak{p}$ and induces a hyperring homomorphism $\bar{\varphi}:A/\mathfrak{p} \to \mathbb{T}$. This, in turn, induces a hyperring homomorphism $\Frac(A/\mathfrak{p})=k(\mathfrak{p}) \to \mathbb{T}$ which is a real valuation on $k(\mathfrak{p})$ by Lemma \ref{valuationmorphism}.
Conversely, suppose that we have a prime ideal $\mathfrak{q}$ and a hyperring homomorphism $f:\Frac(A/\mathfrak{q}) \to \mathbb{T}$. One can easily check that this can be lifted to define a hyperring homomorphism $\hat{f}:A \to \mathbb{T}$ such that $\ker(\hat{f})=\mathfrak{q}$. \end{proof}
Let $k$ be a valued field. By Lemma \ref{valuationmorphism}, this is equivalent to a field $k$ with a fixed hyperring homomorphism $\nu:k \to \mathbb{T}$ such that $\ker(\nu)=\{0\}$.
\begin{lem}\label{affinelem}
Let $k$ be a field with a valuation $\nu:k \to \mathbb{T}$. Let $A$ be a commutative $k$-algebra. Then a semivaluation on $A$ which extends $\nu$ is the same thing as a hyperring homomorphism $f:A \to \mathbb{T}$ such that $f|_k=\nu$. \end{lem} \begin{proof} This is straightforward. \end{proof}
Let $k$ be a valued field with a valuation $\nu:k\to \mathbb{T}$ and $A$ be a commutative $k$-algebra. We define the following set: \[
\Hom_k(A,\mathbb{T}):=\{f \in \Hom(A,\mathbb{T}) \mid f|_k=\nu\}. \] Then we have the following.
\begin{pro}\label{affinecase} Let $X=\Spec A$ be an affine scheme of finite type over a field $k$ with a valuation $\nu$. Then we have the following bijection of sets: \begin{equation}\label{affinehomeo} X^{an}=X(\mathbb{T}) (:=\Hom_k(A,\mathbb{T})). \end{equation} Furthermore, the bijection \eqref{affinehomeo} is a homeomorphism when $X(\mathbb{T})$ is equipped with the fine topology. \end{pro} \begin{proof} By definition, $X^{an}$ is the set of multiplicative seminorms (or semivaluations in our terminology) on $A$ which extend $\nu$. Therefore, we have $X^{an}=\Hom_k(A,\mathbb{T})$ (as sets) from Lemma \ref{affinelem}. But, it follows from Proposition \ref{equivaelnceofcategories} that $\Hom_k(A,\mathbb{T})=\Hom_k(\Spec \mathbb{T}, X)=X(\mathbb{T})$, where $\nu:k\to \mathbb{T}$ is the fixed structural morphism. It only remains to show that this set bijection is a homeomorphism, which directly follows from Proposition \ref{affinetopology} and the definition of the topology on $X^{an}$. \end{proof}
Let $X$ be a scheme of finite type over a complete non-Archimedean valued field $(k,\nu)$. Recall that the points of the Berkovich analytification $X^{an}$ are in one-to-one correspondence with the equivalence classes of morphisms $\Spec L \to X$, where $L$ ranges over all valued extensions of $k$, and where two morphisms $\Spec L \to X$ and $\Spec L' \to X$ are equivalent if and only if there exist a common valued extension $L''$ of $L$ and $L'$ and a morphism $\Spec L'' \to X$ such that the following diagram commutes: \[ \begin{tikzcd} \Spec L \arrow{rd}[swap]{} &\Spec L'' \arrow{d}{} \arrow{l} \arrow{r} &\Spec L' \arrow{ld}{} \\ &X \end{tikzcd} \]
The set of points of $X^{an}$ is also in one-to-one correspondence with the set of triples $(x,k(x),\mu)$, where $x$ is a point in $X$, $k(x)$ is the residue field at $x$, and $\mu$ is a valuation on $k(x)$ which extends $\nu$. With these interpretations, we have the following:
\begin{pro}\label{Berkovichsetbijection} Let $X$ be a scheme of finite type over a complete non-Archimedean valued field $(k,\nu)$. Then there is a bijection (of sets) as follows: \begin{equation}\label{setbijection} X^{an}=X(\mathbb{T}). \end{equation} \end{pro} \begin{proof} It follows from Corollary \ref{analytification} that there is a one-to-one correspondence between the points of $X(\mathbb{T})=\Hom_k(\Spec\mathbb{T},X)$ and triples $(x,k(x),\tilde{\nu})$, where $x \in X$, $k(x)$ is the residue field at $x$, and $\tilde{\nu}$ is a homomorphism from $k(x)$ to $\mathbb{T}$ extending $\nu:k \to \mathbb{T}$. This is, in turn, in one-to-one correspondence with the points of $X^{\textrm{an}}$ as we explained above. \end{proof}
\begin{cor}\label{analytificationhomeo} Let $X$ be a scheme of finite type over a field $k$ with a complete non-Archimedean valuation $\nu:k \to \mathbb{T}$. Then the analytification $X^{\textrm{an}}$ is homeomorphic to $X(\mathbb{T})$ equipped with the fine topology. \end{cor} \begin{proof} We claim that the set bijection in Proposition \ref{Berkovichsetbijection} is a homeomorphism. Since $\mathbb{T}$ is a topological hyperfield, we may apply Proposition \ref{openembedding}, which reduces the claim to the affine case. The result now follows from Proposition \ref{affinecase}. \end{proof}
\begin{cor}\label{representable} Let $k$ be a complete non-Archimedean valued field.
Let $\mathcal{A}$ be the functor from the category of schemes of finite type over $k$ to the category of topological spaces sending $X$ to the underlying topological space $|X^{\textrm{an}}|$ of the Berkovich analytification $X^{\textrm{an}}$. Then $\mathcal{A}$ is isomorphic to the functor $\Hom(\Spec \mathbb{T},-)$. \end{cor} \begin{proof}
For notational convenience, we let $\mathcal{G}:=\Hom(\Spec \mathbb{T},-)$. For each scheme $X$ of finite type over $k$, we let $\eta_X:\mathcal{A}(X)=|X^{\textrm{an}}| \to \mathcal{G}(X)=X(\mathbb{T})$ be the homeomorphism in Corollary \ref{analytificationhomeo}. Then, for each $\mathfrak{p} \in |X^{\textrm{an}}|$, we have $\eta_X(\mathfrak{p}):\Spec \mathbb{T} \to X$ corresponding to a triple $(x,k(x),\tilde{\nu})$.
Now, let $f:X \to Y$ be a morphism of schemes of finite type over $k$. Then, $\mathcal{G}(f)(\eta_X(\mathfrak{p}))$ corresponds to the triple $(f(x),k(f(x)),\nu')$, where $\nu'$ is obtained by composing $k(f(x)) \to k(x)$ and $\tilde{\nu}$. In particular, one can easily see that the following diagram commutes: \[ \begin{tikzcd}[row sep=large, column sep=1.5cm] \mathcal{A}(X)\arrow{r}{\eta_X}\arrow{d}[swap]{\mathcal{A}(f)} & \mathcal{G}(X) \arrow{d}{\mathcal{G}(f)} \\ \mathcal{A}(Y) \arrow{r}[swap]{\eta_Y} & \mathcal{G}(Y) \end{tikzcd} \] This proves that the functors $\mathcal{A}$ and $\mathcal{G}$ are isomorphic.
\end{proof}
\begin{rmk} Let $(\Gamma,+)$ be a totally ordered abelian group, $\Gamma_{hyp}$ the associated hyperfield, and $A$ a commutative ring. A homomorphism $f:A \to \Gamma_{hyp}$ is the same thing as a pair consisting of a prime ideal $\mathfrak{p} \in \Spec A$ together with a homomorphism $\tilde{f}:A/\mathfrak{p} \to \Gamma_{hyp}$. The latter data determines the Hahn analytification of $X=\Spec A$ as in \cite{foster2015hahn}, provided that $\Gamma_{hyp}$ is equipped with a suitable topology. This suggests that analytification with respect to a higher rank valuation can be treated in the same way as Berkovich analytification; however, we do not pursue this case in this paper. \end{rmk}
\subsection{Hyperstructures of analytic groups}\label{hyperstructures of analytic groups} In this section, we interpret several basic definitions and results in \cite[\S 5]{berkovich2012spectral} in terms of hyperstructures. To this end, we will mostly focus on the affine case. In what follows, we let $k$ be a complete non-Archimedean valued field. \subsubsection{Hyperstructure of $G^{\textrm{an}}$}
Let $G$ be a group scheme of finite type over $k$. The analytification $G^{\textrm{an}}$ of $G$ is a group object in the category of $k$-analytic spaces (see, \cite[\S 5]{berkovich2012spectral}).
Let $p_i$ be the projection of $G^{\textrm{an}}\times_kG^{\textrm{an}}$ to the $i$-th factor for $i=1,2$ and $m:G^{\textrm{an}} \times_k G^{\textrm{an}} \to G^{\textrm{an}}$ be the multiplication of $G^{\textrm{an}}$. In general, the underlying space $|G^{\textrm{an}}|$ of $G^{\textrm{an}}$ is not itself a group; however, Berkovich introduced a `group-like' operation on $|G^{\textrm{an}}|$ as follows:
\begin{mydef}(\cite[\S 5]{berkovich2012spectral})\label{berkovichhypergroup}
Let $G$ be a group scheme of finite type over a field $k$ and $G^{\textrm{an}}$ be the analytification of $G$. One imposes a hyperoperation $\odot$ on $|G^{\textrm{an}}|$ as follows: for $g,h \in |G^{\textrm{an}}|$, \[ g\odot h:=\{f \in G^{\textrm{an}} \mid \exists w \in G^{\textrm{an}} \times_k G^{\textrm{an}}\textrm{ such that } p_1(w)=g,\textrm{ }p_2(w)=h,\textrm{ and } m(w)=f \}. \] \end{mydef}
Since $G^{\textrm{an}}$ is a group object, we have the inversion $i:G^{\textrm{an}} \to G^{\textrm{an}}$. For each $g \in G^{\textrm{an}}$, we let $g^{-1}:=i(g)$. Then we have the following.
\begin{lem}\label{reverselem} Let $(G^{\textrm{an}},\odot)$ be as above. If $x \in y\odot z$, then $x^{-1} \in z^{-1}\odot y^{-1}$. \end{lem} \begin{proof} If $x \in y \odot z$, then there exists $w \in G^{\textrm{an}}\times_k G^{\textrm{an}}$ such that $p_1(w)=y$, $p_2(w)=z$, and $m(w)=x$. Let $i:G^{\textrm{an}} \to G^{\textrm{an}}$ be the inversion and $\sigma: G^{\textrm{an}} \times_k G^{\textrm{an}} \to G^{\textrm{an}} \times_k G^{\textrm{an}}$ be the switch morphism. Let $w':=(i\times_k i)\circ \sigma(w)$. Then clearly one can see that $p_1(w')=z^{-1}$, $p_2(w')=y^{-1}$, and $m(w')=x^{-1}$ since $i \circ m=m\circ (i\times_k i)\circ \sigma$. \end{proof}
In \cite{berkovich2012spectral}, Berkovich in effect proves the following proposition, restated here in our terminology.
\begin{pro}
Let $G$ be a group scheme of finite type over $k$. Then $(|G^{\textrm{an}}|,\odot)$ is a hypergroup. \end{pro} \begin{proof} It is proven in \cite[Proposition 5.1.1]{berkovich2012spectral} that $\odot$ is associative and there exists $e \in G^{\textrm{an}}$ such that $e\odot x=x\odot e=x$. Furthermore, it is also proven in \cite{berkovich2012spectral} that if $y \in g\odot x$ then $x \in g^{-1}\odot y$ (reversible condition). Therefore, we only have to show the following: \begin{enumerate} \item $e$ is the unique identity. \item For each $f \in G^{\textrm{an}}$, $f^{-1}$ is the unique inverse. \item If $x \in y\odot z$ then $y \in x\odot z^{-1}$. \end{enumerate} One can clearly see that $e$ is the unique identity element since if we have $e' \in G^{\textrm{an}}$ such that $e'\odot x=x$ for all $x \in G^{\textrm{an}}$, then we should have $e=e'\odot e=e'$. The uniqueness of inverses follows from the reversible condition: if $e \in g\odot h$, then $h \in g^{-1}\odot e=g^{-1}$. This implies that $h=g^{-1}$. Finally, if $x \in y \odot z$, then it follows from Lemma \ref{reverselem} that $x^{-1} \in z^{-1}\odot y^{-1}$. From \cite[Proposition 5.1.1]{berkovich2012spectral}, we have $y^{-1} \in z \odot x^{-1}$ and we derive $y \in x\odot z^{-1}$ by applying Lemma \ref{reverselem} again. \end{proof}
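As an illustration (not part of the original argument), the hypergroup axioms above can be verified by brute force on the simplest example: the additive hypergroup of the Krasner hyperfield $\mathbb{K}=\{0,1\}$, where $1+1=\{0,1\}$ and $x+0=\{x\}$. The following Python sketch (our own encoding) performs this check.

```python
# Brute-force verification of the hypergroup axioms for the additive
# hypergroup of the Krasner hyperfield K = {0, 1}, where 1 + 1 = {0, 1}.

K = (0, 1)

def hsum(x, y):
    """Hyperaddition of the Krasner hyperfield, returned as a set."""
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    return {0, 1}          # 1 + 1 = {0, 1}

def hsum_set(S, y):
    """Extend hyperaddition to a set in the first argument."""
    return set().union(*(hsum(x, y) for x in S))

inv = {0: 0, 1: 1}         # every element is its own additive inverse

# Associativity: (x + y) + z == x + (y + z) as sets.
assert all(hsum_set(hsum(x, y), z) ==
           set().union(*(hsum(x, w) for w in hsum(y, z)))
           for x in K for y in K for z in K)

# 0 is the identity: 0 + x = x + 0 = {x}.
assert all(hsum(0, x) == {x} == hsum(x, 0) for x in K)

# Reversibility: y in x + z  implies  z in inv[x] + y.
assert all(z in hsum(inv[x], y)
           for x in K for z in K for y in hsum(x, z))
```

The same brute-force pattern applies to any finite hypergroup given by a multivalued addition table.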
\begin{pro} Let $G$ be a group scheme of finite type over $k$ and $H$ be a closed analytic subgroup of $G^{\textrm{an}}$. Then, for any $x,y \in H$, $x\odot y \subseteq H$. In particular, with the induced hyperoperation, $(H,\odot)$ is a sub-hypergroup of $(G^{\textrm{an}},\odot)$. \end{pro} \begin{proof} This is clear from the definition. \end{proof}
Let $G$ be a group scheme of finite type over $k$. Then we have the following canonical inclusions (of sets): \begin{equation}\label{inclusion} G(k) \xhookrightarrow{i} G \xhookrightarrow{j} G^{\textrm{an}}. \end{equation}
Berkovich's hyperoperation generalizes the classical group structure in the following sense.
\begin{pro} Let $G$ be a group scheme of finite type over $k$ and let $G^{\textrm{an}}$ be the analytification of $G$. Let $i:G(k) \xhookrightarrow{} G$ and $j:G \xhookrightarrow{} G^{\textrm{an}}$ be the inclusions as in \eqref{inclusion}. Then for any $a,b \in G(k)$, we have \[ j(i(a*b)) \in j(i(a)) \odot j(i(b)), \] where $*$ is the group operation of $G(k)$. \end{pro} \begin{proof} This is clear as $a*b \in m(p^{-1}(a,b))$ for all $a,b\in G(k)$, where $p=(p_1,p_2)$. \end{proof}
\subsubsection{Berkovich's hyperstructure versus Connes and Consani's hyperstructure}\label{ccsection}
Let $G=\Spec A$ be an affine group scheme of finite type over $k$. In this case, we may use the identification $|G^{\textrm{an}}|=\Hom_k(A,\mathbb{T})$ to give an algebraic description of Berkovich's hyperoperation, which is defined geometrically. We will also compare Berkovich's hyperoperation with the hyperoperation introduced by Connes and Consani in \cite{con4}. Let us first recall Connes and Consani's hyperstructure.
\begin{mydef}\label{cchyper} Let $A$ be a Hopf algebra over a field $k$ and $\Delta$ be the coproduct of $A$. By identifying $X=\Spec A$ with $\Hom_k(A,\mathbb{K})$, one imposes the following hyperoperation $\boxdot$ on $X$: for $f,g \in \Hom(A,\mathbb{K})$, \begin{equation}\label{hopgcc} f \boxdot g:=\{h \in \Hom(A,\mathbb{K}) \mid h(a) \in \sum f(a_{(1)})g(a_{(2)}) \textrm{ for all } \Delta(a)=\sum a_{(1)}\otimes a_{(2)}\}. \end{equation} \end{mydef}
\begin{rmk} The hyperoperation \eqref{hopgcc} makes sense for any hyperfield $H$ and $\Hom_k(A,H)$. We will consider later in this section the case when $H=\mathbb{T}$. \end{rmk}
In \cite{con4}, Connes and Consani compute the hyperoperation as in Definition \ref{cchyper} explicitly for an affine line and an algebraic torus. To be more precise, Connes and Consani prove the following:
\begin{mythm}(\cite{con4})\label{ccaffinetorus} Let $X=\Spec k[T]$ be the affine line over $k=\mathbb{Q}$ and $\delta$ be the generic point of $X$. Let $G_H:=X -\{\delta\}$. Then $(G_H,\boxdot)$, with the hyperoperation $\boxdot$ as in Definition \ref{cchyper}, is a hypergroup. More precisely, we have the following isomorphism of hypergroups: \[ G_H \simeq \overline{k}/\Aut_k(\overline{k}), \] where $\overline{k}$ is considered as an additive group. One has a similar result for an algebraic torus $X=\Spec k[T,\frac{1}{T}]$ with $\overline{k}^\times$ as a multiplicative group and $k=\mathbb{F}_p$, the field with $p$ elements. \end{mythm}
Inspired by Theorem \ref{ccaffinetorus}, in \cite{jun2016hyperstructures}, the author proves the following theorem.
\begin{mythm}\label{mythmhyp} Let $X=\Spec A$ be an affine group scheme of finite type over a field $k$. The hyperoperation $\boxdot$ on $X=\Hom_k(A,\mathbb{K})$ always satisfies the following properties: \begin{enumerate} \item $\exists !$ $e \in X$ such that $e\boxdot a=a\boxdot e$ for all $a \in X$. \item For each $a \in X$, there exists a canonical element $a^{-1} \in X$ (not necessarily unique) such that $e \in (a\boxdot a^{-1}) \bigcap (a^{-1} \boxdot a)$. \item For $a,b,c \in X$, we have $((a\boxdot b) \boxdot c) \bigcap (a\boxdot (b \boxdot c)) \neq \emptyset$. \item For $a,b,c \in X$, $a \in b \boxdot c$ if and only if $a^{-1} \in c^{-1} \boxdot b^{-1}$. \end{enumerate} \end{mythm}
Now, we consider the identification $|G^{\textrm{an}}|=\Hom_k(A,\mathbb{T})$, where $G=\Spec A$ is an affine group scheme of finite type over $k$. In other words, $A$ is a finitely generated Hopf algebra over $k$. In the remaining part of the subsection, we describe Berkovich's hyperoperation by means of the Hopf algebra structure of $A$. To this end, we first recall the following fact: Let $X=\Spec A$ and $Y=\Spec B$ be affine schemes of finite type over $k$. Then the product $X^{\textrm{an}} \times_k Y^{\textrm{an}}$ exists in the category of $k$-analytic spaces and in fact one has \begin{equation}\label{fiberproduct} X^{\textrm{an}} \times_k Y^{\textrm{an}}=(\Spec A\hat{\otimes}_k B)^{\textrm{an}}, \end{equation} where $A\hat{\otimes}_k B$ is the completed tensor product of $A$ and $B$ over $k$ (see, \cite[\S 1]{berkovich2012spectral} or \cite[Appendix B]{bosch2014lectures} for the definition of complete tensor products). Since taking a fibered product commutes with the analytification, one may also write \eqref{fiberproduct} as follows: \begin{equation}\label{fiber2} X^{\textrm{an}} \times_k Y^{\textrm{an}}=(\Spec A\hat{\otimes}_k B)^{\textrm{an}}=(\Spec A \otimes_k B)^{\textrm{an}}. \end{equation}
\begin{rmk} In the affine case, the fact that taking a fibered product commutes with the analytification directly follows from Proposition \ref{extension them}. \end{rmk}
Let $j_1:A \to A\otimes_k A$ be the homomorphism defined by $a \mapsto a\otimes 1$ and $j_2:A \to A\otimes_k A$ the homomorphism defined by $a \mapsto 1\otimes a$. The projection maps $p_i:|G^{\textrm{an}}\times_k G^{\textrm{an}}| \to |G^{\textrm{an}}|$ for $i=1,2$ can be rewritten as follows: \begin{equation}\label{projection} p_i:\Hom_k(A\otimes_kA, \mathbb{T}) \to \Hom_k(A,\mathbb{T}),\quad \nu \mapsto \nu\circ j_i, \quad \textrm{for }i=1,2. \end{equation}
Let $\Delta: A \to A\otimes_k A$ be the coproduct of $A$. Then the induced multiplication $m:|G^{\textrm{an}} \times_k G^{\textrm{an}}| \to |G^{\textrm{an}}|$ can be rephrased as follows: \begin{equation}\label{delta} m:\Hom_k(A\otimes_kA, \mathbb{T}) \to \Hom_k(A,\mathbb{T}), \quad \nu \mapsto \nu\circ \Delta. \end{equation}
We define a hyperoperation $\star$ on $G^{\textrm{an}}=\Hom_k(A,\mathbb{T})$ as follows.
\begin{mydef}\label{BerkovichhypHopf} Let $A$ be a finitely generated Hopf algebra over $k$. For $g,h \in \Hom_k(A,\mathbb{T})$, we define the following set: \[ g\star h:=\{f \in \Hom_k(A,\mathbb{T})\mid \exists \beta_f \in \Hom_k(A\otimes_k A,\mathbb{T})\textrm{ such that } \beta_f\circ j_1=g,\ \beta_f\circ j_2=h, \textrm{ and } f=\beta_f\circ \Delta\}. \] \end{mydef}
\begin{pro} \label{HopfBerk} Let $G=\Spec A$ be an affine group scheme of finite type over $k$ and $G^{\textrm{an}}$ be the analytification of $G$. Then, under identification of $G^{\textrm{an}}=\Hom_k(A,\mathbb{T})$, the hyperoperation defined by Berkovich agrees with the hyperoperation in Definition \ref{BerkovichhypHopf}. \end{pro} \begin{proof} This is clear. \end{proof}
Let $G=\Spec A$ be an affine group scheme of finite type over $k$. We denote by $*$ the hyperoperation on $G^{\textrm{an}}=\Hom_k(A,\mathbb{T})$ defined in Definition \ref{cchyper} by Connes and Consani. One may consider the hyperoperation of Berkovich as a refinement of Connes and Consani's hyperoperation in the following sense:
\begin{pro}\label{refinement} We have the following inclusion: for any $g,h \in G^{\textrm{an}}$, \[ (g \star h) \subseteq (g*h). \] \end{pro} \begin{proof} Let $f \in g\star h$. Then there exists $\beta_f \in \Hom_k(A\otimes_kA,\mathbb{T})$ such that $\beta_f\circ j_1=g$, $\beta_f\circ j_2=h$, and $f=\beta_f \circ \Delta$. Let $a \in A$ and $\Delta(a)=\sum a_{(1)}\otimes a_{(2)}$. We have to show that $f(a) \in \sum g(a_{(1)})h(a_{(2)})$. But, since $f=\beta_f \circ \Delta$, we have \begin{equation}\label{incl} f(a) \in \sum \beta_f (a_{(1)}\otimes a_{(2)}). \end{equation} But, we have $g(a_{(1)})=\beta_f(j_1(a_{(1)}))=\beta_f(a_{(1)}\otimes 1)$ and $h(a_{(2)})=\beta_f(j_2(a_{(2)}))=\beta_f(1\otimes a_{(2)})$. Since $a_{(1)}\otimes a_{(2)}=(a_{(1)}\otimes 1)(1\otimes a_{(2)})$, \eqref{incl} becomes the following: \begin{equation} f(a) \in \sum \beta_f (a_{(1)}\otimes a_{(2)}) =\sum g(a_{(1)})h(a_{(2)}). \end{equation} This proves the desired result. \end{proof}
Recall that, since $\mathbb{K}$ is the final object in the category of hyperrings, we have a canonical projection $\pi:\mathbb{T} \to \mathbb{K}$ and the following reduction map: \begin{equation}\label{reductionmap} \pi:\Hom_k(A,\mathbb{T}) \longrightarrow \Hom_k(A,\mathbb{K}), \quad \varphi \mapsto \pi\circ \varphi. \end{equation}
\begin{pro} Let $G=\Spec A$ be an affine group scheme of finite type over $k$. Let $*$ be Connes and Consani's hyperoperation on $G^{\textrm{an}}$ and $\star$ be Berkovich's hyperoperation on $G^{\textrm{an}}$, translated in terms of Hopf algebras as in Proposition \ref{HopfBerk}. Let $\pi:G^{\textrm{an}} \to G$ be the reduction map in \eqref{reductionmap}. Then we have the following inclusion: for $f,g \in \Hom_k(A,\mathbb{T})$, \[ \pi (f\star g) \subseteq \pi(f)* \pi(g). \] \end{pro} \begin{proof} Let $h \in f\star g$. We have to show that, for any $a \in A$ with $\Delta(a)=\sum a_{(1)} \otimes a_{(2)}$, \begin{equation}\label{ref2} \pi\circ h (a) \in \sum (\pi\circ f(a_{(1)}))(\pi\circ g(a_{(2)}))=\sum \pi (f(a_{(1)})g(a_{(2)})). \end{equation} However, it follows from Proposition \ref{refinement} that $h(a) \in \sum f(a_{(1)})g(a_{(2)})$ and hence \[ \pi\circ h (a) \in \pi(\sum f(a_{(1)})g(a_{(2)})) \subseteq \sum \pi (f(a_{(1)})g(a_{(2)})). \] \end{proof}
\begin{rmk} It is unclear whether the two hyperoperations $\star$ and $*$ actually differ. It would be interesting to prove or disprove this. \end{rmk}
\subsubsection{Example: Affine line with a trivially valued ground field}
We explicitly compute the case of the affine line, assuming that the ground field $k=\overline{\mathbb{Q}}$ is trivially valued. Note that the computation here is not new; Berkovich already computed this case in \cite[Example 5.1.4]{berkovich2012spectral}. However, we will use the Hopf algebra structure to recompute this example.
In what follows, let $\mathbb{A}^1_k=\Spec k[T]$ and $i:\mathbb{A}^1_k(k) \hookrightarrow \mathbb{A}^{1,\textrm{an}}_k$. Then we can identify $\mathbb{A}^1_k(k)=k$. We can further identify each $q \in \mathbb{A}^1_k(k)$ with the homomorphism $f_q:k[T] \to k \to \mathbb{T}$ sending $f(T)$ to $f(q)$ and then to $\nu(f(q))$, where $\nu$ is the trivial valuation on $k$. Let $(\mathbb{A}^{1,\textrm{an}}_k,\odot)$ be the hypergroup which we defined in the previous section (with $\odot$ defined by Berkovich). The following is well known; valuations of this form are called monomial valuations.
\begin{lem}\label{monomialvaluation} Let $k$ be a field and $a, b \in \mathbb{R}$. Then the following map, \begin{equation}\label{monomialexampleequation} \beta:k[X,Y] \to \mathbb{T}, \quad \beta(\sum_{i,j} a_{ij}X^iY^j) =\left\{ \begin{array}{ll} \max_{i,j}\{ia+jb\mid a_{ij}\neq 0\} & \textrm{if $\sum_{i,j} a_{ij}X^iY^j \neq 0$}\\ -\infty& \textrm{if $\sum_{i,j} a_{ij}X^iY^j =0$}, \end{array} \right. \end{equation} is an element of $\Hom_k(k[X,Y],\mathbb{T})$. \end{lem}
\begin{rmk} One can observe that if $a<b <0$, then $\beta(X+Y)=b$. Also if $a=b$, then $\beta(X+Y)=a$. \end{rmk}
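The values of $\beta$ can be checked mechanically. Below is a small sketch (our own encoding, for illustration only), in which a polynomial in $k[X,Y]$ is represented as a dictionary mapping exponent pairs $(i,j)$ to nonzero coefficients.

```python
# The monomial valuation beta of the lemma: beta(p) is the maximum of
# i*a + j*b over the nonzero terms of p, and beta(0) = -infinity.

NEG_INF = float("-inf")    # plays the role of -infinity in T

def beta(poly, a, b):
    """Monomial valuation on k[X, Y] with beta(X) = a, beta(Y) = b."""
    if not poly:
        return NEG_INF     # beta(0) = -infinity
    return max(i * a + j * b for (i, j) in poly)

X = {(1, 0): 1}
Y = {(0, 1): 1}
X_plus_Y = {(1, 0): 1, (0, 1): 1}

# As in the remark: if a < b < 0 then beta(X + Y) = b,
# and if a = b then beta(X + Y) = a.
assert beta(X_plus_Y, -2, -1) == -1
assert beta(X_plus_Y, -3, -3) == -3
```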
\begin{lem}\label{monomiallemma2} Let $c\leq a \in \mathbb{R}$. Then there exists $\nu \in \Hom_k(k[X,Y],\mathbb{T})$ such that $\nu(X)=\nu(Y)=a$ and $\nu(X+Y)=c$. \end{lem} \begin{proof} Let $f:k[X',Y'] \to k[X,Y]$ be the isomorphism such that $f(X')=X$ and $f(Y')=Y+X$. It follows from Lemma \ref{monomialvaluation} that we have a semivaluation $\beta$ on $k[X',Y']$ such that $\beta(X')=a$ and $\beta(Y')=c$. Let $g:=f^{-1}$, so that $g(X)=X'$ and $g(Y)=Y'-X'$. This defines a homomorphism $\nu:=\beta\circ g:k[X,Y] \to \mathbb{T}$. In particular, we obtain $\nu(X)=\beta(X')=a$, $\nu(Y)=\beta(Y'-X')=\max\{a,c\}=a$, and $\nu(X+Y)=\beta(Y')=c$, as desired. \end{proof}
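The change of variables in the proof can be made explicit. The following sketch (our own encoding, for illustration) substitutes $X \mapsto X'$, $Y \mapsto Y'-X'$, combines like terms, and then applies the monomial valuation with weights $\beta(X')=a$, $\beta(Y')=c$; the cancellation of $X'$ in $g(X+Y)=Y'$ is what produces the value $c$.

```python
from math import comb
from collections import defaultdict

NEG_INF = float("-inf")

def beta(poly, a, b):
    """Monomial valuation on k[X', Y'] with beta(X') = a, beta(Y') = b."""
    if not poly:
        return NEG_INF
    return max(i * a + j * b for (i, j), c in poly.items() if c != 0)

def substitute(poly):
    """Apply X -> X', Y -> Y' - X' and combine like terms."""
    out = defaultdict(int)
    for (i, j), c in poly.items():
        # X^i * Y^j  ->  X'^i * (Y' - X')^j, expanded binomially
        for m in range(j + 1):
            out[(i + j - m, m)] += c * comb(j, m) * (-1) ** (j - m)
    return {e: c for e, c in out.items() if c != 0}

def nu(poly, a, c):
    """The semivaluation nu = beta o g of the lemma."""
    return beta(substitute(poly), a, c)

a, c = -1, -5                                  # any c <= a works
assert nu({(1, 0): 1}, a, c) == a              # nu(X) = a
assert nu({(0, 1): 1}, a, c) == a              # nu(Y) = beta(Y' - X') = a
assert nu({(1, 0): 1, (0, 1): 1}, a, c) == c   # nu(X + Y) = beta(Y') = c
```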
\begin{lem}\label{classocallem} There are exactly three types of homomorphisms $\varphi:k[T] \to \mathbb{T}$ which extend the trivial valuation $\nu:k \to \mathbb{T}$: \begin{enumerate} \item $\varphi(f(T))=0$ $\forall f(T) \neq 0$. \item $\varphi(T) >0$ and $\varphi(f(T))=\deg(f) \cdot \varphi(T)$. \item $\varphi (T) \leq 0$ and there exists an irreducible polynomial $f(T) \in k[T]$ such that $\varphi(f(T))<0$, \end{enumerate} where $0=1_\mathbb{T}$. \end{lem} \begin{proof} This is an elementary result; see, for instance, \cite[\S 3.1]{payne2015topology}. \end{proof}
\begin{rmk}\label{equivalencermk} The first case of Lemma \ref{classocallem} is the trivial valuation. In the second case, $\varphi$ is uniquely determined by the value $\varphi(T) \in (0,\infty)$; $\varphi(f)=\deg(f)\varphi(T)$. In particular, if $\varphi_1$ and $\varphi_2$ belong to the second case, then there exists $q \in (0,\infty)$ such that $\varphi_1=q\varphi_2$. \end{rmk}
Let $\delta$ be the trivial valuation. One can visualize the Berkovich analytification of the affine line as a tree rooted at $\delta$: the second case of Lemma \ref{classocallem} contributes a single unbounded branch, while the third case contributes one branch for each closed point of $\mathbb{A}^1_k$.
Let $\varphi:k[T] \to \mathbb{T}$ be a homomorphism as in the third case of Lemma \ref{classocallem}. Let $\mathfrak{p}_\varphi:=\{f \in k[T] \mid \varphi(f) <0\}$. Then one can easily show that $\mathfrak{p}_\varphi \in \Spec(k[T])$. Since $k$ is algebraically closed, we can find a unique linear polynomial (up to multiplication by a constant) $g_b:=b+T$ which generates $\mathfrak{p}_\varphi$ for some $b \in k$. In particular, if $\varphi_1$, $\varphi_2 \in \Hom_k(k[T],\mathbb{T})$ belong to the third case and $\mathfrak{p}_{\varphi_1}=\mathfrak{p}_{\varphi_2}$, then there exists $q \in (0,\infty)$ such that $\varphi_1=q\varphi_2$. Furthermore, let $\varphi_b$ belong to the third case with $\mathfrak{p}_{\varphi_b}=<g_b>$, where $g_b:=b+T$. Then $\varphi_b$ is uniquely determined by $\varphi_b(g_b)$. In fact, we have the following.
\begin{lem} Let $\varphi:k[T] \to \mathbb{T}$ be a homomorphism such that $\varphi(T) \leq 0$ and $\mathfrak{p}_\varphi=<g_b>$, where $g_b:=b+T$. Then for any $f \in k[T]$, $\varphi(f)=r_b\varphi(g_b)$, where $r_b$ is the largest natural number such that $g_b^{r_b}$ divides $f$. \end{lem} \begin{proof} We let $\oplus$ be the hyperaddition of $\mathbb{T}$. Let $f \in k[T]$ be such that $g_b$ does not divide $f$. Since $k$ is algebraically closed, $f=l_1\cdots l_n$ for some linear polynomials $l_i$, none of which is a constant multiple of $g_b$. Let $l_i=c+T$. As $l_i \not \in \mathfrak{p}_\varphi$, we have $\varphi(l_i) \geq 0$. However, $\varphi(l_i)=\varphi(c+T) \in \varphi(c) \oplus \varphi(T)$. If $c=0$, then we have $\varphi(l_i) =\varphi(T) \leq 0$ and hence $\varphi(l_i)=0$. If $c \neq 0$, then $\varphi(l_i) \in \varphi(c)\oplus \varphi(T)=0 \oplus \varphi(T)$. But, this still implies that $\varphi(l_i) \leq 0$ and hence $\varphi(l_i)=0$. Therefore $\varphi(f)=0$ whenever $g_b$ does not divide $f$. Now, writing an arbitrary $f$ as $f=g_b^{r_b}u$ with $g_b \nmid u$, the multiplicativity of $\varphi$ yields $\varphi(f)=r_b\varphi(g_b)$ since $\varphi(u)=0=1_\mathbb{T}$ (recall that the multiplication of $\mathbb{T}$ is the usual addition of real numbers). In particular, $\varphi(g_b)$ uniquely determines $\varphi$. \end{proof}
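As a hypothetical illustration, the case $b=0$, $g_0=T$, can be encoded directly: $r_0$ is the order of vanishing of $f$ at $0$, i.e., the index of its first nonzero coefficient, so $\varphi(f)=r_0\cdot\varphi(T)$.

```python
# Sketch (our own encoding) of the lemma for g_0 = T: a semivaluation phi
# with phi(T) = v < 0 that is trivial on k sends f = sum a_n T^n to r * v,
# where r is the order of vanishing of f at 0.

NEG_INF = float("-inf")

def phi(coeffs, v):
    """coeffs[n] is the coefficient of T^n; v = phi(T) < 0."""
    nonzero = [n for n, c in enumerate(coeffs) if c != 0]
    if not nonzero:
        return NEG_INF                # phi(0) = -infinity
    return min(nonzero) * v           # r_0 = smallest nonzero exponent

# f = T^2 * (1 + T): order of vanishing 2, so phi(f) = 2v
assert phi([0, 0, 1, 1], -3) == -6
# g = 1 + T is coprime to T, so phi(g) = 0 = 1_T
assert phi([1, 1], -3) == 0
```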
We explicitly compute the case $g_0=T$, where $0=1_\mathbb{T}$.
\begin{lem}\label{onebranch} Let $x, y \in (-\infty, 0)$. Let $f_x,f_y$ be points of $\mathbb{A}^{1,\textrm{an}}_k$ such that $\mathfrak{p}_{f_x}=\mathfrak{p}_{f_y}=<g_0>$ and $f_x(g_0)=x$, $f_y(g_0)=y$. If $h \in f_x \odot f_y$, where $\odot$ is Berkovich's hyperoperation, then $\mathfrak{p}_{h}=<g_0>$ and \begin{equation}\label{hypersum} f_x\odot f_y =\left\{ \begin{array}{ll} \{f_{\max\{x,y\}}\} & \textrm{if $x\neq y$}\\ \{f_t \mid t \in \left[-\infty,x\right]\}& \textrm{if $x=y$}, \end{array} \right. \end{equation} where $\left[-\infty,x\right]:=\{t\in \mathbb{T} \mid t\leq x\}$. \end{lem} \begin{proof} We let $\oplus$ be the hyperaddition of $\mathbb{T}$. We first show that if $h \in f_x \odot f_y$ then $\mathfrak{p}_h=<g_0>$. In fact, there exists $\beta_h \in \Hom_k(A\otimes_k A, \mathbb{T})$ such that \begin{equation}\label{conditions} \beta_h \circ j_1=f_x,\quad \beta_h \circ j_2 =f_y, \quad \textrm{and}\quad h=\beta_h \circ \Delta. \end{equation} It follows from \eqref{conditions} that \[ \beta_h\circ j_1(g_0)=x, \quad \beta_h \circ j_2(g_0)=y,\quad \textrm{and}\quad h(g_0)=\beta_h \circ \Delta(g_0).\] Hence we have: \[ h(T)=\beta_h\circ \Delta(T)=\beta_h (T \otimes1+1\otimes T) \in \beta_h (T \otimes1)\oplus \beta_h (1\otimes T) =x\oplus y. \] Since $x,y <0$, this implies that $h(T)=h(g_0) <0$ and this forces $\mathfrak{p}_h$ to be generated by $g_0$. This proves that if $h \in f_x\odot f_y$ then $h(g_0) \in x\oplus y$ and $\mathfrak{p}_h=<g_0>$.
Conversely, we have to show that if $z \in x\oplus y$ and $h \in \Hom_k(A,\mathbb{T})$ is such that $h(g_0)=z$, then $h \in f_x \odot f_y$, i.e., we have to find $\beta_h :A \otimes_k A \to \mathbb{T}$ which satisfies the conditions \eqref{conditions}. This is equivalent to finding a homomorphism $\beta:k[X,Y] \to \mathbb{T}$ such that $\beta(X)=x$, $\beta(Y)=y$, and $\beta(X+Y)=z \leq \max\{x,y\}$. But, this has been proven in Lemma \ref{monomiallemma2} (for $x=y$) and Lemma \ref{monomialvaluation} (for $x \neq y$). \end{proof}
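The dichotomy in \eqref{hypersum} is exactly the hyperaddition of $\mathbb{T}$ restricted to negative reals. A small membership-test sketch (our own encoding):

```python
# Hyperaddition of the tropical hyperfield T on the branch of g_0:
# x (+) y = {max(x, y)} if x != y, and x (+) x = [-infinity, x].

NEG_INF = float("-inf")

def in_hsum(z, x, y):
    """Membership test: is z an element of x (+) y in T?"""
    if x != y:
        return z == max(x, y)
    return z <= x          # x (+) x = the segment [-infinity, x]

assert in_hsum(-1.0, -1.0, -2.0)          # distinct values: only the max
assert not in_hsum(-2.0, -1.0, -2.0)
assert in_hsum(-7.5, -3.0, -3.0)          # equal values: the whole segment
assert in_hsum(NEG_INF, -3.0, -3.0)
```

The same test, applied to the values $h(g_0)$, decides membership $h \in f_x \odot f_y$ by the lemma above.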
The following proposition shows that $(\mathbb{A}^{1,\textrm{an}}_k,\odot)$ contains a hypergroup which is isomorphic to a certain sub-hypergroup of $(\mathbb{T},\oplus)$. We define the following subset of $\mathbb{T}$: \[ \mathbb{T}_{<0}:=\{a \in \mathbb{T} \mid a<0\}. \] One can easily see that $\mathbb{T}_{<0}$ is a sub-hypergroup of $\mathbb{T}$, i.e., $\mathbb{T}_{<0}$ itself is a hypergroup with the induced hyperoperation.
\begin{pro} Consider the following set: \[ H:=\{h \in \mathbb{A}^{1,\textrm{an}}_k \mid h(T) < 0 \}. \] Then $(H,\odot)$ is a hypergroup which is isomorphic to $\mathbb{T}_{<0}$. \end{pro} \begin{proof} Define $\varphi:H \to \mathbb{T}_{<0}$ by sending $f$ to $f(T)$. It follows from Lemma \ref{onebranch} that $\varphi$ is an isomorphism of hypergroups. \end{proof}
\begin{rmk} Although we assume that $k$ is algebraically closed, one may use \cite[Lemma 5.1.2]{berkovich2012spectral} to treat the case of a non-algebraically closed field. \end{rmk}
\begin{rmk} Let $\delta$ be the trivial valuation and consider the following subset of $\mathbb{A}^{1,\textrm{an}}_k$: \[ B:=\{h \in \mathbb{A}^{1,\textrm{an}}_k \mid h(T) \neq 0\}-\{\delta\}. \] We may consider a new hyperoperation $\diamond$ on $B$ as follows: \[ f \diamond g =\left\{ \begin{array}{lll} f\odot g & \textrm{if $f(T)\neq g(T)$}\\ f\odot g& \textrm{if $f(T)=g(T) <0$}\\ (f\odot g) \bigcap B & \textrm{if $f(T)=g(T) >0$}, \end{array} \right. \] Then one can easily see that $(B,\diamond)$ is isomorphic to the hypergroup $(\mathbb{T}-\{0\},\oplus)$. To be precise, the isomorphism $\varphi$ is given by $\varphi(f)=f(T)$. It seems that the computation involving the trivial valuation $\delta$ is rather subtle as Connes and Consani already observed in \cite[\S7, \S8]{con4}. \end{rmk}
\section{Geometry of hyperfields from the viewpoint of real algebraic geometry} \label{spaceoforderings}
In this section, we prove that the functor $\Sper$ (a real algebraic analogue of the functor $\Spec$) is isomorphic to the functor $\Hom(-,\mathbb{S})$, where $\mathbb{S}$ is the hyperfield of signs. We first recall the definitions of \emph{real spectra} and \emph{real schemes}.
\begin{mydef} Let $A$ be a commutative ring. By an \emph{ordering} on $A$, we mean a subset $P$ of $A$ such that \begin{enumerate} \item $P+P\subseteq P$ and $P\cdot P \subseteq P$. \item $a^2 \in P$ $\forall a \in A$ and $-1 \not \in P$. \item $P\cup (-P) =A$, where $-P:=\{-a \mid a\in P\}$. \item $P \cap (-P) \in \Spec A$ (the support of $P$). \end{enumerate} \end{mydef}
\begin{lem}\label{orderinglemma} Let $A$ be a commutative ring. There is a one-to-one correspondence between orderings on $A$ and elements of $\Hom(A,\mathbb{S})$. \end{lem} \begin{proof} Let $P$ be an ordering on $A$. Define a function $\varphi_P:A \to \mathbb{S}$ as follows: \begin{equation} \varphi_P(a) =\left\{ \begin{array}{lll} 1 & \textrm{if $a \in P\cap (-P)^c$},\\ 0 & \textrm{if $a \in P \cap (-P)$}, \\ -1 & \textrm{if $a \in (-P)\cap P^c$}. \end{array} \right. \end{equation} One can easily see that $\varphi_P \in \Hom(A,\mathbb{S})$.
Conversely, for any $\varphi \in \Hom(A,\mathbb{S})$, let $P:=\varphi^{-1}(\{0,1\})$. Then $P$ is an ordering. Indeed, we obviously have $P+P \subseteq P$ and $P\cdot P \subseteq P$. For any $a \in A$, we have $\varphi(a^2)=(\varphi(a))^2 \in \{0,1\}$ and hence $a^2 \in P$. Furthermore, we have \[ \varphi(0)=\varphi(1+(-1))=0 \in \varphi(1)+\varphi(-1)=1+\varphi(-1). \] This implies that $\varphi(-1)=-1$ and hence $-1 \not\in P$. The condition that $P \cup (-P) =A$ is clear. Finally $P\cap (-P)=\ker(\varphi)$ and hence $P \cap (-P) \in \Spec A$. Clearly, these two constructions are inverses to each other. This shows the desired one-to-one correspondence. \end{proof}
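For a concrete illustration (not needed for the proof), take $A=\mathbb{Z}$ with its usual ordering $P=\{n \geq 0\}$: then $\varphi_P$ is the sign map, and the homomorphism conditions can be checked by brute force over a finite range, as in the following sketch (our own encoding).

```python
# The usual ordering on Z gives phi_P = sign: Z -> S.  We check that sign
# is multiplicative and that sign(a + b) lies in the hypersum
# sign(a) (+) sign(b), where 1 (+) (-1) = {-1, 0, 1} in the hyperfield of
# signs S = {-1, 0, 1}.

def sign(n):
    return (n > 0) - (n < 0)

def hsum_S(s, t):
    """Hyperaddition of the hyperfield of signs S."""
    if s == 0:
        return {t}
    if t == 0:
        return {s}
    if s == t:
        return {s}
    return {-1, 0, 1}      # 1 (+) (-1) = S

R = range(-20, 21)
assert all(sign(a * b) == sign(a) * sign(b) for a in R for b in R)
assert all(sign(a + b) in hsum_S(sign(a), sign(b)) for a in R for b in R)
```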
Recall that a formally real field is a field $F$ equipped with an ordering. In other words, from Lemma \ref{orderinglemma}, a field $F$ is formally real if and only if the set $\Hom(F,\mathbb{S})$ is nonempty.
\begin{lem} Let $A$ be a commutative algebra over a formally real field $F$. We fix an ordering $P_F$ of $F$, i.e., we fix a homomorphism $\varphi_{P_F}:F \to \mathbb{S}$. Then there is a one-to-one correspondence between orderings on $A$ which contain $P_F$ and elements of $\Hom_F(A,\mathbb{S})$. \end{lem} \begin{proof} The proof is essentially the same as that of Lemma \ref{orderinglemma}. \end{proof}
Now, we are ready to introduce a real spectrum. First, we define a real spectrum as a topological space and then review a \emph{Nash structure sheaf} which makes a real spectrum into a locally ringed space. For the details, we refer the readers to \cite{coste2001uniform} or \cite{fujita2003real}.
\begin{mydef}\label{spectraltopology} Let $A$ be a commutative ring. The \emph{real spectrum} $\Sper A$ of $A$ is the set of orderings on $A$ with the topology, called the spectral topology, generated by the basis of open subsets of the form $\{U(f)\}_{f \in A}$, where \[ U(f):=\{P \in \Sper A \mid f \not \in -P\}. \] \end{mydef}
\begin{rmk} It follows from Definition \ref{spectraltopology} that, for $f \in A$, \[ U(-f):=\{P \in \Sper A \mid -f \not \in -P\}=\{P \in \Sper A \mid f \not \in P\}. \] \end{rmk}
Next, we introduce a structure sheaf on $X=\Sper A$. For details, we refer the readers to \cite{coste2001uniform}.
Let $A$ be a commutative ring and $B$ be an \'{e}tale $A$-algebra. Then the structure homomorphism $f:A\to B$ induces a local homeomorphism $f^*:\Sper B \to \Sper A$ (see, \cite[Proposition 1.8]{scheiderer2006real}).
One first defines a presheaf $\mathcal{A}$ on $X=\Sper A$ as follows: for an open subset $U$ of $X$, we let $\mathcal{A}(U)$ be the set of equivalence classes of triples $(B,s,b)$, where $B$ is an \'{e}tale $A$-algebra, $s:U \to \Sper B$ is a continuous section of the local homeomorphism $\Sper B \to \Sper A$, and $b \in B$. Two triples $(B,s,b)$ and $(C,t,c)$ are equivalent if and only if there exist a triple $(D,u,d)$ and $A$-algebra homomorphisms $f:B \to D$, $g:C \to D$ such that $f(b)=d=g(c)$ and the following diagram commutes: \[ \begin{tikzcd} \Sper B &U \arrow{d}{u} \arrow{l}[swap]{s} \arrow{r}{t} &\Sper C \\ &\Sper D \arrow{lu}{f^*} \arrow{ru}[swap]{g^*} \end{tikzcd} \] The structure sheaf $\mathcal{O}_X$ of $X=\Sper A$ is the sheafification of $\mathcal{A}$. Then $(X,\mathcal{O}_X)$ is a locally ringed space and a real scheme is a locally ringed space which is locally isomorphic to a real spectrum. \begin{rmk} In some special cases, the structure sheaf $\mathcal{O}_X$ of a real spectrum $X=\Sper A$ can be defined as in the case of schemes by means of \emph{real strict localizations}. See, \cite{fujita2003real} for details. \end{rmk}
In contrast with the case of schemes, the functors $\Sper$ and $\Gamma$ are in general not inverse to each other. One only has the `\emph{idempotency property}'.
\begin{pro}\cite[Theorem 23]{coste2001uniform}\label{idempotency} Let $A$ be a commutative ring. Then \[ \Gamma (\Sper \Gamma (\Sper A)) \simeq \Gamma (\Sper A). \] In other words, $(\Gamma \circ \Sper)$ is an idempotent endofunctor on the category of commutative rings. \end{pro}
\begin{rmk} We note that blue schemes, introduced by Lorscheid in \cite{lorscheid2012geometry}, satisfy a similar idempotency property as in Proposition \ref{idempotency}. Therefore, one may employ the idea of \emph{globalizations} in \cite{lorscheid2012geometry} in this setting. But, we do not pursue this perspective in the current paper. \end{rmk}
The following is well known.
\begin{lem}\label{reduction} Let $A$ be a commutative ring, $X_r=\Sper A$, and $X=\Spec A$. Let $\red:X_r \to X$ be a function sending any ordering $P$ to $P\cap (-P)$. Then $\red$ is a well-defined continuous map. \end{lem}
Let $k$ be a complete non-Archimedean valued field, $A$ be a finitely generated $k$-algebra, and $X=\Spec A$. Recall that the Berkovich analytification $X^{\textrm{an}}$ of $X$ has the following decomposition: \[ X^{\textrm{an}}=\bigsqcup_{\mathfrak{p} \in X} \nu_\mathbb{R}(\mathfrak{p}), \] where $\nu_\mathbb{R}(\mathfrak{p})$ is the set of valuations on the residue field $k(\mathfrak{p})$ at $\mathfrak{p}$ extending that of $k$. Lemma \ref{reduction} provides a similar description for $\Sper A$ as follows: \[ \Sper A=\bigsqcup_{\mathfrak{p} \in X} \nu_\mathbb{S}(\mathfrak{p}), \] where $\nu_\mathbb{S}(\mathfrak{p})$ is the space of orderings of $k(\mathfrak{p})$ as in \cite{mars1} (see, Remark \ref{finalyrmk}).
Now, we impose a topology on $\mathbb{S}$ by letting $\mathcal{T}:=\{\emptyset,\{-1\},\{1\},\{1,-1\},\mathbb{S}\}$ be the set of open subsets.
\begin{lem}\label{sgnaffinecase} Let $X=\Spec A$ be an affine scheme over $\mathbb{Z}$ and $X_r=\Sper A$. Then $X_r$ is homeomorphic to $X(\mathbb{S})$, equipped with the fine topology. \end{lem} \begin{proof} Let $X=\Spec A$. Then, we have a set-bijection, $X_r=\Hom(A,\mathbb{S})=X(\mathbb{S})$. One can easily see that, with the topology $\mathcal{T}$, the fine topology on $X(\mathbb{S})$ is exactly the spectral topology of $X_r$ under the bijection of Lemma \ref{orderinglemma}. The result now simply follows from Proposition \ref{affinetopology}. \end{proof}
\begin{pro}
The functor $\Sper$, from the category of rings to the category of topological spaces, is isomorphic to the functor $\Hom(-,\mathbb{S})$. \end{pro} \begin{proof}
One may apply a similar argument as in Corollary \ref{representable}.
\end{proof}
Let $X$ be a scheme over $\mathbb{Z}$. Then one can canonically associate a real scheme $X_r$ to $X$. Indeed, one may choose any affine open covering $\{U_i=\Spec A_i\}$ of $X$ and associate $(U_i)_r=\Sper A_i$ to each $i$, and then glue $\{(U_i)_r\}$ to obtain $X_r$ (see, \cite{scheiderer2006real}). As in the case of schemes and the Berkovich analytification, the underlying set of $X_r$ also has a nice description as a functor of points in the following sense: Recall that a real closed field is a field $k$ which is not algebraically closed, but the field extension $k(\sqrt{-1})$ is algebraically closed. Now, the underlying set of $X_r$ is the set of equivalence classes of rational points of $X$ over all real closed fields. Two rational points $f:\Spec k \to X$ and $g:\Spec k' \to X$ are equivalent if and only if there exists a real closed field extension $k''$ of $k$ and $k'$, together with a morphism $h:\Spec k'' \to X$, such that the following diagram commutes (see, \cite[\S 0.4]{scheiderer2006real}): \[ \begin{tikzcd} \Spec k \arrow{rd}[swap]{f} &\Spec k'' \arrow{d}{h} \arrow{l} \arrow{r} &\Spec k' \arrow{ld}{g} \\ &X \end{tikzcd} \]
\begin{lem}\label{reallemma} Let $X$ be a scheme over $\mathbb{Z}$ and $X_r$ be the real scheme associated to $X$. Then there exists a one-to-one correspondence between the points of $X_r$ and the set of couples $(x,P_{k(x)})$, where $x \in X$, and $P_{k(x)}$ is an ordering on the residue field $k(x)$. \end{lem} \begin{proof} We may assume that $X$ is affine. Let $X=\Spec A$. Then we have $X_r=\Sper A=\Hom(A,\mathbb{S})$. Now, each $f \in \Hom(A,\mathbb{S})$ determines a prime ideal $\mathfrak{p}=\ker(f)$, which induces a homomorphism $\tilde{f}:\Frac(A/\ker(f)) \to \mathbb{S}$; this is just an ordering on the residue field $k(\mathfrak{p})$. Conversely, for any prime ideal $\mathfrak{p}$ and an ordering $P_{k(\mathfrak{p})}$ on the residue field $k(\mathfrak{p})$ at $\mathfrak{p}$, we have homomorphisms $\pi:A\to k(\mathfrak{p})$ and $f:k(\mathfrak{p}) \to \mathbb{S}$. By composing these two, we obtain a homomorphism $f\circ \pi :A \to \mathbb{S}$. Clearly, these two constructions are inverses to each other. \end{proof}
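To illustrate Lemma \ref{reallemma} concretely, consider the following standard worked example (added here for illustration; it is easily checked from the definitions). Let $X=\Spec \mathbb{Z}$. At the generic point $(0)$, the residue field is $\mathbb{Q}$, which carries a unique ordering, so $(0)$ contributes exactly one couple. At a closed point $(p)$, the residue field $\mathbb{F}_p$ is not formally real ($-1$ is a sum of squares), so it admits no ordering and contributes nothing. Hence $\Sper \mathbb{Z}=\Hom(\mathbb{Z},\mathbb{S})$ is a single point, given by the sign homomorphism
\[
\mathrm{sgn}:\mathbb{Z}\to\mathbb{S},\qquad
\mathrm{sgn}(n)=
\begin{cases}
1 & \text{if } n>0,\\
0 & \text{if } n=0,\\
-1 & \text{if } n<0.
\end{cases}
\]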
\begin{pro}\label{realanaly} Let $X$ be a scheme over $\mathbb{Z}$ and $X_r$ be the associated real scheme. Then $X_r$ and $X(\mathbb{S})$ are homeomorphic. \end{pro} \begin{proof} Let $i:X_r \to X(\mathbb{S})$ be the set-bijection as in Lemma \ref{reallemma}. From the definition of $X_r$, we can choose an affine open covering $\{U_j\}$ of $X$, where $U_j=\Spec A_j$, such that $X_r$ can be covered by $\{V_j=\Sper A_j\}$. Consider the restriction $i_{V_j}$ of $i$ to $V_j$. In this case, we have $i_{V_j}:V_j \to X(\mathbb{S})$ and, in fact, the image of $i_{V_j}$ is $U_j(\mathbb{S})$. From Proposition \ref{openembedding}, we may assume that $X$ is affine and the result follows from Lemma \ref{sgnaffinecase}. \end{proof}
\begin{cor}
Let $\mathcal{R}$ be the functor from the category of schemes to the category of topological spaces sending a scheme $X$ to the underlying topological space $|X_r|$ of the associated real scheme $X_r$. Then $\mathcal{R}$ is isomorphic to the functor $\Hom(\Spec \mathbb{S},-)$. \end{cor} \begin{proof} One may apply a similar argument as in Corollary \ref{representable}. \end{proof}
Conversely, given a real scheme $X_\mathfrak{R}$, one can associate a scheme $X_\mathfrak{R}^{red}$: fixing an affine open covering $\{V_i=\Sper A_i\}$ of $X_\mathfrak{R}$, we associate $U_i=\Spec A_i$ to each $i$ and glue these to obtain a scheme $X_\mathfrak{R}^{red}$ over $\mathbb{Z}$.
\begin{pro}\label{reduct} Let $X_\mathfrak{R}$ be a real scheme. Then the reduction map $\mathbf{red}:X_\mathfrak{R}\to X_\mathfrak{R}^{red}$ is continuous. Moreover, the real scheme $(X_\mathfrak{R}^{red})_r$ associated to the scheme $X_\mathfrak{R}^{red}$ (as in Proposition \ref{realanaly}) is homeomorphic to $X_\mathfrak{R}$. \end{pro} \begin{proof} We may assume that $X_\mathfrak{R}$ is affine and the first statement follows from Lemma \ref{reduction}. The second statement is clear from the definition of $(X_\mathfrak{R}^{red})_r$. \end{proof}
\begin{cor} Let $X_\mathfrak{R}$ be a real scheme. Then $(X_\mathfrak{R}^{red})_r(\mathbb{S})$ is homeomorphic to $X_\mathfrak{R}$. \end{cor} \begin{proof} This directly follows from Propositions \ref{realanaly} and \ref{reduct}. \end{proof}
\begin{rmk} Although we only state the case when a scheme is over $\mathbb{Z}$, one can easily prove similar results for a scheme over a formally real field $F$. Note that $F$ should be a formally real field since otherwise, there is no homomorphism from $F$ to $\mathbb{S}$. \end{rmk}
\begin{rmk}\label{finalyrmk} In \cite{hochster1969prime}, Hochster characterized topologically the essential image of the functor $\Spec$ by introducing the notion of spectral spaces. One has a similar result in real algebraic geometry. To be precise, let $F$ be a formally real field. Then the real spectrum $X_F=\Sper F$ is called the space of orderings on $F$, and $X_F$ is a Stone space (or Boolean space), i.e., a compact, Hausdorff, and totally disconnected space. Conversely, it is proved by T.~Craven in \cite{craven1975boolean} that for any Stone space $X$, there exists a formally real field $F$ such that $X$ is homeomorphic to $X_F$. Finally, we remark that if the characteristic of $F$ is not equal to $2$, then $X_F$ can be realized as the set of minimal prime ideals of $W(F)$ (the Witt ring of quadratic forms of $F$) and furthermore $X_F$ is homeomorphic to the set of minimal prime ideals of $W(F)$ equipped with the Zariski topology. For more details, we refer the readers to \cite{lam2005introduction}. It would be interesting to investigate these perspectives in terms of the geometry of $\mathbb{S}$ following the ideas in \cite{mars1}, \cite{mar2}, and \cite{gladki2017witt}. \end{rmk}
\begin{rmk} In \cite{con3}, Connes and Consani introduce the notion of tensor products for $\mathbb{K}$ and $\mathbb{S}$ in a certain restricted case. For instance, if $X=\Spec A$ is an affine scheme over a field $k$, then `a scalar extension' $X_\mathbb{K}$ is defined to be $\Spec (A/k^\times)$. When $k$ is a formally real field with a fixed homomorphism $\varphi:k \to \mathbb{S}$, $X_\mathbb{S}=\Spec (A/P)$, where $P$ is the ordering corresponding to $\varphi$. One may develop this approach further to incorporate the notion of tensor products with hyperfields into the approach taken in the current paper. \end{rmk}
\end{document}
\begin{document}
\title{Infinitary Lambda Calculi from a Linear Perspective \\ (Long Version)} \begin{abstract} We introduce a linear infinitary $\lambda$-calculus, called $\ell\Lambda_{\infty}$, in which two exponential modalities are available, the first one being the usual, finitary one, the other being the only construct interpreted coinductively. The obtained calculus embeds the infinitary applicative $\lambda$-calculus and is universal for computations over infinite strings. What is particularly interesting about $\ell\Lambda_{\infty}$ is that the refinement induced by linear logic allows us to restrict both modalities so as to get calculi which are \emph{terminating} inductively and \emph{productive} coinductively. We exemplify this idea by analysing a fragment of $\ell\Lambda_{\infty}$ built around the principles of \textsf{SLL}\ and \textsf{4LL}. Interestingly, it enjoys confluence, contrary to what happens in ordinary infinitary $\lambda$-calculi. \end{abstract}
\section{Introduction}
The $\lambda$-calculus is a widely accepted model of higher-order functional programs---it faithfully captures functions and their evaluation. Through the many extensions introduced in the last forty years, the $\lambda$-calculus has also been shown to be a model of imperative features, control~\cite{Parigot92LPAR}, etc. Usually, this requires extending the class of terms with new operators, then adapting type systems in such a way that ``good'' properties (e.g., confluence, termination, etc.), and possibly new ones, hold.
This also happened when potentially infinite structures came into play. Streams and, more generally, coinductive data have found their place in the $\lambda$-calculus, following the advent of lazy data structures in functional programming languages like \textsf{Haskell}\ and \textsf{ML}. By adopting lazy data structures, the programmer has a way to represent infinity by finite means, relying on the fact that data are not completely evaluated, but accessed ``on-demand''. In presence of infinite structures, the usual termination property takes the form of \emph{productivity}~\cite{Dijkstra80}: even if evaluating a stream expression globally takes infinite time, looking for the next symbol in it takes finite time.
All this has indeed been modelled in a variety of ways by enriching $\lambda$-calculi and related type theories. One can cite, among the many different works in this area, the ones by Parigot~\cite{Parigot92TCS}, Raffalli~\cite{Raffalli93CSL}, Hughes et al.~\cite{Hughes96POPL}, or Abel~\cite{Abel07APLAS}. Terms are \emph{finite} objects, and the infinite nature of streams is modelled through a staged inspection of so-called thunks. The key ingredient in obtaining productivity and termination is usually represented by sophisticated type systems.
There is also another way of modelling infinite structures in $\lambda$-calculi, namely \emph{infinitary rewriting}, where both terms and reduction sequences are not necessarily finite, and as a consequence infinity is somehow internalised into the calculus. Infinitary rewriting has been studied in the context of various concrete rewrite systems, including first-order term rewriting~\cite{Kennaway91RTA}, but also systems of higher-order rewriting~\cite{Ketema11IC}. In the case of $\lambda$-calculus~\cite{Kennaway97TCS}, the obtained model is very powerful, but does not satisfy many of the nice properties its finite siblings enjoy, including confluence and finite developments, let alone termination and productivity (which are anyway unexpected in the absence of types).
In this paper, we take a fresh look at infinitary rewriting, through the lenses of linear logic~\cite{Girard87TCS}. More specifically, we define an untyped, linear, infinitary $\lambda$-calculus called $\ell\Lambda_{\infty}$, and study its basic properties, how it relates to $\Lambda_{\infty}$~\cite{Kennaway97TCS}, and its expressive power. As expected, incepting linearity does not \emph{by itself} solve any of the issues described above: $\ell\Lambda_{\infty}$ embeds $\Lwotp{0}{0}{1}$, a subsystem of $\Lambda_{\infty}$, and as such does not enjoy confluence. On the other hand, linearity provides the right tools to \emph{control} infinity: we delineate a simple fragment of $\ell\Lambda_{\infty}$ which has all the good properties one can expect, including productivity and confluence. Remarkably, this is not achieved through types, but rather by purely structural constraints on the way copying is managed, which is responsible for its bad behaviour. Expressivity, on the other hand, is not sacrificed.
\subsection{Linearity vs. Infinity}
The crucial r\^ole linearity has in this paper can be understood without any explicit reference to linear logic as a proof system, but through a series of observations about the usual, finitary, $\lambda$-calculus. In any $\lambda$-term $M$, the variable $x$ can occur free more than once, or not at all. If the variable $x$ occurs free exactly once in $M$ \emph{whenever} we form an abstraction $\la{x}{M}$, the $\lambda$-calculus becomes strongly normalising: at any rewrite step the size of the term strictly decreases. On the other hand, the obtained calculus has a poor expressive power and, \emph{in this
form}, is not the model of any reasonable programming language. Let duplication reappear, then, but in controlled form: besides a \emph{linear} abstraction $\la{x}{M}$, (which is well formed only if $x$ occurs free exactly once in $M$) there is also a \emph{nonlinear} abstraction $\na{x}{M}$, which poses no constraints on the number of times $x$ occurs in $M$. Moreover, and this is the crucial step, interaction with a nonlinear abstraction is restricted: the argument to a nonlinear application must itself be marked as \emph{duplicable} (and \emph{erasable}) for $\beta$ to fire. This is again implemented by enriching the category of terms with a new construct: given a term $M$, the \emph{box} $\nm{M}$ is a duplicable version of $M$. Summing up, the language at hand, call it $\ell\Lambda$, is built around applications, linear abstractions, nonlinear abstractions, and boxes. Moreover, $\beta$-reduction comes in two flavours: $$ \ap{(\la{x}{M})}{N}\rightarrow\sbst{M}{x}{N};\qquad\qquad \ap{(\na{x}{M})}{\nm{N}}\rightarrow\sbst{M}{x}{N}. $$ What did we gain by rendering duplicating and erasing operations explicit? Not much, apparently, since pure $\lambda$-calculus can be embedded into $\ell\Lambda$ in a very simple way following the so-called Girard's translation~\cite{Girard87TCS}: abstractions become nonlinear abstractions, and any application $\ap{M}{N}$ is translated into $\ap{M}{\nm{N}}$. In other words, all arguments can be erased and copied, and we are back to the world of wild, universal, computation. Not everything is lost, however: in any nonlinear abstraction $\na{x}{M}$, $x$ can occur any number of times in $M$, but at different \emph{depths}: there could be linear occurrences of $x$, which do not lie in the scope of any box, i.e., at depth $0$, but also occurrences of $x$ at greater depths. Restricting the depths at which the bound variable can occur in nonlinear abstractions gives rise to strongly normalising fragments of $\ell\Lambda$. 
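A minimal worked instance (added for illustration; it is a direct application of the rules just given) shows how boxes mediate duplication. In the body below, $x$ occurs once linearly and once inside a box, so firing the nonlinear redex both consumes the argument and keeps a copy available for further duplication:
\[
\ap{(\na{x}{\ap{x}{\nm{x}}})}{\nm{N}} \;\rightarrow\; \ap{N}{\nm{N}}.
\]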
Some of them can be seen, through the Curry-Howard correspondence, as subsystems of linear logic characterising complexity classes. This includes light linear logic~\cite{Girard98IC}, elementary linear logic~\cite{Danos03IC}, or soft linear logic~\cite{Lafont04TCS}. As an example, the exponential discipline of elementary linear logic can be formulated as follows in $\ell\Lambda$: for every nonlinear abstraction $\na{x}{M}$, all occurrences of $x$ in $M$ are at depth $1$. Noticeably, this gives rise to a characterisation of elementary functions. Similarly, soft linear logic can be seen as the fragment of $\ell\Lambda$ in which all occurrences of $x$ (if more than one) occur at depth $0$ in any nonlinear abstraction $\na{x}{M}$.
But why could all this be useful when rewriting becomes infinitary? Infinitary $\lambda$-terms~\cite{Kennaway97TCS} are nothing more than infinite terms defined according to the grammar of the $\lambda$-calculus. In other words, one can have a term $M$ such that $M$ is syntactically equal to $\la{x}{\ap{x}{M}}$. Evaluation can still be modelled through $\beta$-reduction. Now, however, reduction sequences of infinite length (sometimes) make sense: take $\ap{Y}{(\la{x}{\la{y}{\ap{y}{x}}})}$, where $Y$ is a fixed point combinator. It rewrites in infinitely many steps to the infinite term $N$ such that $N=\la{y}{\ap{y}{N}}$. Are all infinite reduction sequences acceptable? Not really: an infinite reduction sequence is said to \emph{converge} to the term $M$ only if, informally, any finite approximation to $M$ can be reached along a finite prefix of the sequence. In other words, reduction should be applied deeper and deeper, i.e., the \emph{depth} of the fired redex should tend to infinity. But how could one define the depth of a subterm's occurrence in a given term? There are many alternatives here, since, informally, one can choose to let the depth increase (or not) when entering the body of an abstraction or either of the two arguments of an application. Indeed, \emph{eight} different calculi can be formed, each with different properties. For example, $\Lwotp{0}{0}{1}$ is the calculus in which the depth only increases while entering the argument position in applications, while in $\Lwotp{1}{0}{0}$ the same happens when crossing abstractions. The choice of where the depth increases is crucial not only when defining infinite reduction sequences, but also when defining \emph{terms}, whose category is obtained by completing the set of finite terms with respect to a certain metric. So, not all infinite terms are well-formed. In $\Lwotp{0}{0}{1}$, as an example, the term $M=\ap{x}{M}$ is well-formed, while $M=\la{x}{M}$ is not. In $\Lwotp{1}{0}{0}$, the opposite holds.
In all the obtained calculi, however, many of the properties one expects are not true: the Complete Developments Theorem (i.e. the infinitary analogue of the Finite Developments Theorem~\cite{Barendregt}) does not hold and, moreover, confluence fails except if formulated in terms of so-called B\"ohm reduction~\cite{Kennaway97TCS}. The reason is that $\Lambda_{\infty}$ is even wilder than $\Lambda$: various forms of infinite computations can happen, but only some of them are benign. Could linearity help in taming all this complexity, similarly to what happens in the finitary case? This paper gives a first positive answer to this question.
\subsection{Contributions}
The system we will study in the rest of this paper, called $\ell\Lambda_{\infty}$, is obtained by incepting ideas from infinitary calculi into $\ell\Lambda$. Not one but \emph{two} kinds of boxes are available in $\ell\Lambda_{\infty}$, and the depth increases only while crossing boxes of the second kind (called \emph{coinductive} boxes), while boxes of the first kind (dubbed \emph{inductive}) leave the depth unchanged. As a consequence, boxes are as usual the only terms which can be duplicated and erased, but they are also responsible for the infinitary nature of the calculus: any term not containing (coinductive) boxes is necessarily finite. Somehow, the depths in the sense of $\Lambda_{\infty}$ and of $\ell\Lambda$ coincide.
Besides introducing $\ell\Lambda_{\infty}$ and proving its basic properties, this paper explores the expressive power of the obtained computational model, showing that it suffers from the same problems affecting $\Lambda_{\infty}$, but also that it has precisely the same expressive power as that of Type-2 Turing machines, the reference computational model of so-called computable analysis~\cite{Weihrauch00}.
The most interesting result among those we give in this paper consists in showing that, indeed, a simple fragment of $\ell\Lambda_{\infty}$, called $\ell\Lambda_{\infty}^{\mathsf{4S}}$, is on the one hand flexible enough to encode streams and guarded recursion on them, and on the other guarantees productivity. Remarkably, confluence holds, contrary to what happens for $\ell\Lambda_{\infty}$. Actually, $\ell\Lambda_{\infty}^{\mathsf{4S}}$ is defined around the same principles which lead to the definition of light logics. Each kind of box, however, follows a distinct discipline: inductive boxes are handled as in Lafont's \textsf{SLL}~\cite{Lafont04TCS}, while coinductive boxes follow the principles of \textsf{4LL}~\cite{Danos03IC}, hence the name of the calculus. So far, the \textsf{4LL}'s exponential discipline has not been shown to have any computational meaning: now we know that beyond it there is a form of guarded corecursion.
\section{$\ell\Lambda_{\infty}$ and its Basic Properties}\label{sect:basicproperties}
In this section, we introduce a linear infinitary $\lambda$-calculus called $\ell\Lambda_{\infty}$, which is the main object of study of this paper. Some of the dynamical properties of $\ell\Lambda_{\infty}$ will be investigated. Before defining $\ell\Lambda_{\infty}$, some preliminaries about formal systems with \emph{both} inductive and coinductive rules will be given.
\subsection{Mixing Induction and Coinduction in Formal Systems}
A formal system over a set of judgments $\mathcal{S}$ is given
by a finite set of rules, all of them having one conclusion and an
arbitrary finite number of premises. The rules of a \emph{mixed} formal system $\mathsf{S}$ are of two kinds: those which are to be interpreted inductively and those which are to be interpreted \emph{co}inductively. To distinguish between the two, inductive rules will be denoted as usual, with a single line, while coinductive rules will be indicated as follows: $ \infer= {\sjudg{B}} {\sjudg{A_1}& \ldots & \sjudg{A_n}} $. Intuitively, any correct derivation in such a system is a possibly \emph{infinite} tree built following inductive and coinductive rules where, however, any \emph{infinite} branch crosses coinductive rule instances \emph{infinitely} often. In other words, there cannot be any infinite branch where, from a certain point on, only inductive rules occur.
Formally, the set $\cd{\mathsf{S}}$ of derivable assertions of a mixed formal system $\mathsf{S}$ over the set of judgments $\mathcal{S}$ can be seen as the greatest fixpoint of the function $\indrul{\mathsf{S}}\circ\coindrul{\mathsf{S}}$ where $\coindrul{\mathsf{S}}:\powset{\mathcal{S}}\rightarrow\powset{\mathcal{S}}$ is the monotone function induced by the application of a \emph{single} coinductive rule, while $\indrul{\mathsf{S}}$ is the function induced by the application of inductive rules an arbitrary \emph{but finite} number of times. $\indrul{\mathsf{S}}$ can itself be obtained as the least fixpoint of a monotone functional on the space of monotone functions on $\powset{\mathcal{S}}$. The existence of $\cd{\mathsf{S}}$ can be formally justified by the Knaster-Tarski theorem, since all the involved spaces are complete lattices, and the involved functionals are monotone. This is, by the way, very close to the approach from~\cite{EndrullisPolonsky}. Let $\powset{\mathcal{S}}\leadsto\powset{\mathcal{S}}$ be the set of monotone functions on the powerset $\powset{\mathcal{S}}$. A relation $\sqsubseteq$ on $\powset{\mathcal{S}}\leadsto\powset{\mathcal{S}}$ can be easily defined by stipulating that $F\sqsubseteq G$ iff for every $X\subseteq\mathcal{S}$ it holds that $F(X)\subseteq G(X)$. The structure $(\powset{\mathcal{S}}\leadsto\powset{\mathcal{S}},\sqsubseteq)$ is actually a complete lattice, because: \begin{varitemize} \item
The relation $\sqsubseteq$ is a partial order. In particular, antisymmetry is a consequence of
function extensionality: if for every $X$, both $F(X)\subseteq G(X)$
and $G(X)\subseteq F(X)$, then $F$ and $G$ are the same
function. \item
Given a set of monotone functions $\mathcal{X}$, its inf and sup exist and are the functions
$F,G:\powset{\mathcal{S}}\leadsto\powset{\mathcal{S}}$ such that for every
$X\subseteq\mathcal{S}$,
$$
F(X)=\bigcap_{H\in\mathcal{X}}H(X);\qquad\qquad
G(X)=\bigcup_{H\in\mathcal{X}}H(X).
$$
It is easy to verify that $F$ is monotone, that it minorises
$\mathcal{X}$, and that it majorises any minoriser of $\mathcal{X}$. Similarly
for $G$. \end{varitemize} The function $\indrul{\mathsf{S}}$ is the least fixpoint of the monotone functional $\mathcal{F}$ on $\powset{\mathcal{S}}\leadsto\powset{\mathcal{S}}$ defined as follows: to every function $F$, $\mathcal{F}$ associates the function $G$ obtained by feeding $F$ with the argument set $X$, and then applying one additional instance of inductive rules from $\mathsf{S}$. Since $(\powset{\mathcal{S}}\leadsto\powset{\mathcal{S}},\sqsubseteq)$ is a complete lattice, $\indrul{\mathsf{S}}$ is guaranteed to exist.
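The fixpoint construction above can be run on a finite toy instance. The sketch below is a hypothetical miniature (the judgment set, the rules, and all names are ours, not taken from the text): judgments are the integers $0,\dots,5$, the single coinductive rule derives a judgment from its unique premise given by a partial successor map, and the only inductive rule is a premise-free axiom for $0$. The greatest fixpoint of $\indrul{\mathsf{S}}\circ\coindrul{\mathsf{S}}$ is then computed by downward Knaster-Tarski iteration from the top of the (finite) powerset lattice:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- Greatest fixpoint of a monotone function on a finite powerset
-- lattice: iterate downwards from the top element until stable.
gfp :: Ord a => Set a -> (Set a -> Set a) -> Set a
gfp top f = go top
  where go x = let y = f x in if y == x then x else go y

-- Toy judgments {0..5}; each judgment has at most one premise:
-- 0 -> 1 -> 2 (dead end),   3 -> 4 -> 5 -> 3 (a cycle).
univ :: Set Int
univ = Set.fromList [0 .. 5]

premise :: Int -> Maybe Int
premise 2 = Nothing
premise 5 = Just 3
premise j = Just (j + 1)

-- One application of the coinductive rule: j is derivable in one
-- step if its premise already is.
coind :: Set Int -> Set Int
coind xs = Set.filter ok univ
  where ok j = maybe False (`Set.member` xs) (premise j)

-- Inductive closure: finitely many applications of the axiom for 0.
ind :: Set Int -> Set Int
ind = Set.insert 0

-- Derivable judgments: 0 has a finite inductive derivation, while
-- 3, 4, 5 have infinite derivations crossing the coinductive rule
-- infinitely often; 1 and 2 only admit finite dead-end branches.
derivable :: Set Int
derivable = gfp univ (ind . coind)

main :: IO ()
main = print (Set.toList derivable)   -- [0,3,4,5]
```

Note how the result matches the informal reading: a judgment is derivable exactly when it has a finite inductive proof or sits on an infinite branch that crosses the coinductive rule infinitely often.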
Formal systems in which \emph{all} rules are either coinductively or inductively interpreted have been studied extensively (see, e.g. \cite{Leroy09IC}). Our constructions, although relatively simple, do not seem to have appeared before, at least in this form. The conceptually closest work is the one by Endrullis and coauthors~\cite{EndrullisPolonsky}.
How could we prove anything about $\cd{\mathsf{S}}$? How should we proceed, as an example, when trying to prove that a given subset $X$ of $\mathcal{S}$ is included in $\cd{\mathsf{S}}$? Fixed-point theory tells us that the correct way to proceed consists in showing that $X$ is $(\indrul{\mathsf{S}}\circ\coindrul{\mathsf{S}})$-consistent, namely that $X\subseteq \indrul{\mathsf{S}}(\coindrul{\mathsf{S}}(X))$. We will frequently apply this proof strategy in the following.
\subsection{An Infinitary Linear Lambda Calculus}
\newcommand{\triangleright}{\triangleright} \emph{Preterms} are potentially infinite terms built from the following grammar: $$ M,N::=\;x\; \; \mbox{\Large{$\mid$}}\;\;\ap{M}{M}\; \; \mbox{\Large{$\mid$}}\;\;\la{x}{M}\; \; \mbox{\Large{$\mid$}}\;\;\ia{x}{M}\; \; \mbox{\Large{$\mid$}}\;\;
\ca{x}{M}\; \; \mbox{\Large{$\mid$}}\;\;\im{M}\; \; \mbox{\Large{$\mid$}}\;\;\cm{M}, $$ where $x$ ranges over a denumerable set $\mathcal{V}$ of variables. $\mathbb{T}$ is the set of preterms.
The notion of capture-avoiding substitution of a preterm $M$ for a variable $x$ in another preterm $N$, denoted $\sbst{N}{x}{M}$, can be defined, this time by \emph{coinduction}, on the structure of $N$: \begin{align*} \sbst{(x)}{x}{M}&=M;\\ \sbst{(y)}{x}{M}&=y;\\ \sbst{(\la{x}{L})}{y}{M}&=\la{x}{\sbst{L}{y}{M}};
\qquad\qquad\mbox{if $x\not\in\FV{M}$}\\ \sbst{(\ia{x}{L})}{y}{M}&=\ia{x}{\sbst{L}{y}{M}};
\qquad\qquad\mbox{if $x\not\in\FV{M}$}\\ \sbst{(\ca{x}{L})}{y}{M}&=\ca{x}{\sbst{L}{y}{M}};
\qquad\qquad\mbox{if $x\not\in\FV{M}$}\\ \sbst{(\ap{L}{P})}{y}{M}&=
\ap{(\sbst{L}{y}{M})}{(\sbst{P}{y}{M})};\\ \sbst{(\im{L})}{y}{M}&=\im{(\sbst{L}{y}{M})};\\ \sbst{(\cm{L})}{y}{M}&=\cm{(\sbst{L}{y}{M})}. \end{align*} Observe that all the equations above are guarded, so this is a well-posed definition. An \emph{inductive} (respectively, \emph{coinductive}) \emph{box} is any preterm in the form $\im{M}$ (respectively, $\cm{M}$).
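Operationally, guardedness means that a lazy language can implement these clauses directly as a productive function on possibly infinite trees. The sketch below is only an illustration under simplifying assumptions (named variables, a naive shadowing rule instead of the side conditions $x\not\in\FV{M}$, and just the constructors needed for the running example $M=\ap{y}{\cm{M}}$):

```haskell
-- A fragment of the preterm grammar; Haskell's laziness lets the same
-- datatype describe infinite (coinductively read) trees.
data Term = Var String
          | App Term Term
          | CBox Term          -- coinductive box
          | CLam String Term   -- coinductive abstraction
  deriving (Eq, Show)

-- Substitution, clause by clause as in the text: every recursive call
-- is guarded by a constructor, hence the function is productive even
-- on infinite inputs.
subst :: Term -> String -> Term -> Term
subst (Var x)    y m | x == y    = m
                     | otherwise = Var x
subst (App l p)  y m = App (subst l y m) (subst p y m)
subst (CBox l)   y m = CBox (subst l y m)
subst (CLam x l) y m
  | x == y    = CLam x l               -- naive: y shadowed, no renaming
  | otherwise = CLam x (subst l y m)

-- Finite approximation: cut a (possibly infinite) term at depth n, so
-- that infinite terms can be inspected and compared on a prefix.
approx :: Int -> Term -> Term
approx 0 _ = Var "_"
approx n t = case t of
  Var x    -> Var x
  App l p  -> App  (approx (n-1) l) (approx (n-1) p)
  CBox l   -> CBox (approx (n-1) l)
  CLam x l -> CLam x (approx (n-1) l)

-- The unique solution of the guarded equation M = y [M] (coinductive box).
m :: Term
m = App (Var "y") (CBox m)

main :: IO ()
main = print (approx 3 (subst m "y" (Var "z")))
```

Substituting $z$ for $y$ in the infinite term $m$ terminates on every finite prefix: `approx` only forces as much of the corecursive result as it needs.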
Please notice that any (guarded) equation has a unique solution over preterms. As an example, $M=\la{x}{M}$, $N=\ap{N}{(\la{x}{x})}$, and $M=\ap{y}{\cm{M}}$ all have unique solutions. In other words, infinity is everywhere. Only certain preterms, however, will be the objects of this study. To define the class of ``good'' preterms, simply called \emph{terms}, we now introduce a mixed formal system. An \emph{environment} $\Gamma$ is simply a set of expressions (called \emph{patterns}) in one the following three forms: $$ p::=x\; \; \mbox{\Large{$\mid$}}\;\;\im{x}\; \; \mbox{\Large{$\mid$}}\;\;\cm{x}, $$ where any variable occurs in at most \emph{one} pattern in $\Gamma$. If $\Gamma$ and $\Delta$ are two disjoint environments, then $\Gamma,\Delta$ is their union. An environment is \emph{linear} if it only contains variables. Linear environments are indicated with metavariables like $\Theta$ or $\Xi$. A \emph{term judgment} is an expression in the form $\tjudg{\Gamma}{M}$ where $\Gamma$ is an environment and $M$ is a term. A \emph{term} is any preterm $M$ for which a judgment $\tjudg{\Gamma}{M}$ can be derived by the formal system $\ell\Lambda_{\infty}$, whose rules are in Figure~\ref{fig:llwotwfr}. \begin{figure*}
\caption{$\ell\Lambda_{\infty}$: Well-Formation Rules.}
\label{fig:llwotwfr}
\end{figure*} Please notice that $(\mathsf{mc})$ is coinductive, while all the other rules are inductive. This means that on terms, differently from preterms, not all recursive equations have a solution anymore. As an example, $M=\ap{y}{\cm{M}}$ is a term: a derivation $\pi$ for $\tjudg{\im{y}}{M}$ can be found in Figure~\ref{fig:ed}. \begin{figure*}
\caption{$\ell\Lambda_{\infty}$: Example Derivation Trees $\pi$ (left) and $\rho$ (right).}
\label{fig:ed}
\end{figure*} $\pi$ is indeed a well-defined derivation because, although infinite, any infinite path in it contains infinitely many occurrences of $(\mathsf{mc})$. If we try to proceed in the same way with the preterm $N=\ap{N}{(\la{x}{x})}$, we immediately fail: the only candidate derivation $\rho$ looks like the one in Figure~\ref{fig:ed} and is not well-defined (it contains a ``loop'' of inductive rule instances). We write $\tms{\ell\Lambda_{\infty}}(\Gamma)$ for the set of all preterms $M$ such that $\tjudg{\Gamma}{M}$. The union of $\tms{\ell\Lambda_{\infty}}(\Gamma)$ over all environments $\Gamma$ is denoted simply as $\tms{\ell\Lambda_{\infty}}$.
Some observations about the r\^ole of environments are now in order: If $\tjudg{x,\Gamma}{M}$, then $x$ necessarily occurs free in $M$, but exactly once and in \emph{linear position} (i.e. not in the scope of any box). If, on the other hand, $\tjudg{\im{x},\Gamma}{M}$, then $x$ can occur free any number of times, even infinitely often, in $M$. Similarly when $\tjudg{\cm{x},\Gamma}{M}$. Observe, in this respect, that inductive and coinductive boxes are actually very permissive: if $\tjudg{\im{x},\Gamma}{M}$, $x$ can even occur in the scope of coinductive boxes, while $x$ can occur in the scope of inductive boxes if $\tjudg{\cm{x},\Gamma}{M}$. We claim that this is the source of the great expressive power of the calculus, but also of its main defects (e.g. the absence of confluence).
\newcommand{\bsone}{s} \newcommand{\bstwo}{t} \newcommand{\varepsilon}{\varepsilon} \newcommand{\strnat}[1]{#1^{\bullet}} \newcommand{\NFs}[1]{\mathsf{NFs}(#1)} Sometimes it is useful to denote symbols $\downarrow$ and $\uparrow$ in a unified way. To that purpose, let $\mathbb{B}$ be the set $\{0,1\}$ of binary digits, which is ranged over by metavariables like $a$ or $b$. $\psym{0}$ stands for $\downarrow$, while $\psym{1}$ is $\uparrow$. For every $\bsone\in\mathbb{B}^*$, we can define \emph{$\bsone$-contexts}, ranged over by metavariables like $\pctxone{\bsone}$ and $\pctxtwo{\bsone}$, as follows, by induction on $\bsone$: \begin{align*} \pctxone{\varepsilon}::=[\cdot];\qquad\qquad\pctxone{0\cdot\bsone}::=\im{\pctxone{\bsone}};\qquad\qquad \pctxone{1\cdot\bsone}::=\cm{\pctxone{\bsone}};\\ \pctxone{\bsone}::=\ap{\pctxone{\bsone}}{M}\; \; \mbox{\Large{$\mid$}}\;\;\ap{M}{\pctxone{\bsone}}
\; \; \mbox{\Large{$\mid$}}\;\;\la{x}{\pctxone{\bsone}}\; \; \mbox{\Large{$\mid$}}\;\;\ia{x}{\pctxone{\bsone}}\; \; \mbox{\Large{$\mid$}}\;\;\ca{x}{\pctxone{\bsone}}. \end{align*} Given any subset $X$ of $\mathbb{B}^*$, an $X$-context, sometimes denoted as $\pctxone{X}$, is an $\bsone$-context where $\bsone\in X$. A \emph{context} $C$ is simply any $\bsone$-context $\pctxone{\bsone}$. For every natural number $n\in\mathbb{N}$, $\strnat{n}$ is the set of those strings in $\mathbb{B}^*$ in which $1$ occurs precisely $n$ times. For every $n$, the language $\strnat{n}$ is regular.
\subsection{Finitary and Infinitary Dynamics}\label{sect:fininfdyn}
In this section, notions of finitary and infinitary reduction for $\ell\Lambda_{\infty}$ are given. \emph{Basic reduction} is a binary relation $\mapsto\subseteq\mathbb{T}\times\mathbb{T}$ defined by the following three rules (where, as usual, $M\redLLbas N$ stands for $(M,N)\in\;\mapsto$): $$ \ap{(\la{x}{M})}{N}\mapsto\sbst{M}{x}{N};\quad \ap{(\ia{x}{M})}{\im{N}}\mapsto\sbst{M}{x}{N};\quad \ap{(\ca{x}{M})}{\cm{N}}\mapsto\sbst{M}{x}{N}. $$ Basic reduction can be applied in any $\bsone$-context, giving rise to a \emph{ternary} relation $\rightarrow\subseteq\mathbb{T}\times\mathbb{B}^*\times\mathbb{T}$, simply called \emph{reduction}. It is defined by stipulating that $(M,\bsone,N)\in\rightarrow$ iff there are a $\bsone$-context $\pctxone{\bsone}$ and two terms $L$ and $P$ such that $L\redLLbas P$, $M=\actx{\pctxone{\bsone}}{L}$, and $N=\actx{\pctxone{\bsone}}{P}$. In this case, the reduction step is said to occur \emph{at level} $\bsone$ and we write $M\redLL{\bsone}N$ and $\level{M\redLL{\bsone}N}=\bsone$. We often employ the notation $\redLL{X}$, i.e., $\redLL{X}$ is the union of the relations $\redLL{\bsone}$ for all $\bsone\in X$. If $M\redLL{\bsone}N$ but we are not interested in the specific $\bsone$, we simply write $M\redLLwod N$. If $M\redLL{\strnat{n}}N$, then reduction is said to occur at depth $n$.
Given $X\subseteq\mathbb{B}^*$, an \emph{$X$-normal form} is any term $M$ such that whenever $(M,\bsone,N)\in\rightarrow$, it holds that $\bsone\not\in X$. The set of all $X$-normal forms is denoted as $\NFs{X}$. In the notations just introduced, $\bsone$ is often used in place of the singleton $\{\bsone\}$ if this does not cause any ambiguity. A \emph{normal form} is simply a $\mathbb{B}^*$-normal form.
Depths and levels have a different nature: while the depth increases only when entering a coinductive box, the level changes while entering any kind of box, and this is the reason why levels are binary strings rather than natural numbers.
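Concretely, the level of a reduction step is a binary string, while its depth merely counts the occurrences of $1$ in that string; a minimal sketch of this bookkeeping (the function names are ours, not the paper's):

```python
# Levels are binary strings over {0, 1}: a 0 records the crossing of an
# inductive box, a 1 the crossing of a coinductive one.  The depth only
# counts the 1s, matching the regular language denoted n-bullet.

def depth(level: str) -> int:
    """Depth of a reduction step occurring at the given level."""
    assert set(level) <= {"0", "1"}
    return level.count("1")

def in_strnat(level: str, n: int) -> bool:
    """Membership in n-bullet: strings in which 1 occurs exactly n times."""
    return depth(level) == n

# Crossing inductive boxes never increases the depth...
assert depth("000") == 0
# ...while each coinductive box does:
assert depth("0101") == 2 and in_strnat("0101", 2)
```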
Since $\sbst{M}{x}{N}$ is well-defined whenever $M$ and $N$ are preterms, reduction is certainly closed as a relation on the space of preterms. That it is also closed on terms is not trivial. First of all, substitution lemmas need to be proved for the three kinds of patterns which possibly appear in environments. The first of these lemmas concerns linear variables: \begin{lemma}[Substitution Lemma, Linear Case]\label{lem:substlinllwot}
If $\tjudg{\Gamma,x,\im{\Theta},\cm{\Xi}}{M}$
and $\tjudg{\Delta,\im{\Theta},\cm{\Xi}}{N}$, then it holds that
$\tjudg{\Gamma,\Delta,\im{\Theta},\cm{\Xi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{proof} We can prove that the following subset $X$ of judgments is consistent with $\ell\Lambda_{\infty}$: $$ \left\{ \tjudg{\Gamma,\Delta,\im{\Theta},\cm{\Xi}}{\sbst{M}{x}{N}}\; \; \mbox{\Large{$\mid$}}\;\; \tjudg{\Gamma,x,\im{\Theta},\cm{\Xi}}{M}\in\cd{\ell\Lambda_{\infty}}\wedge \tjudg{\Delta,\im{\Theta},\cm{\Xi}}{N}\in\cd{\ell\Lambda_{\infty}} \right\}\cup\cd{\ell\Lambda_{\infty}}. $$ Suppose that $J=\tjudg{\Gamma,\Delta,\im{\Theta},\cm{\Xi}}{\sbst{M}{x}{N}}$ is in $X$. If $J\in\cd{\ell\Lambda_{\infty}}$, then of course $$ J\in\indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(\cd{\ell\Lambda_{\infty}}))\subseteq \indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(X)). $$ Otherwise, we know that $H=\tjudg{\Gamma,x,\im{\Theta},\cm{\Xi}}{M}\in\cd{\ell\Lambda_{\infty}}$ and, by the fact $\cd{\ell\Lambda_{\infty}}=\indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(\cd{\ell\Lambda_{\infty}}))$ we can infer that $H\in\indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(\cd{\ell\Lambda_{\infty}}))$, namely that $H$ can be obtained by judgments in $\cd{\ell\Lambda_{\infty}}$ by applying coinductive rules once, followed by $n$ inductive rules. We prove that $J\in\cd{\ell\Lambda_{\infty}}$ by induction on $n$: \begin{varitemize} \item
If $n=0$, then $H$ can be obtained by means of the rule $(\mathsf{mc})$, but this
is impossible since the environment in any judgment obtained this way cannot contain
any variable, and $H$ actually contains one. \item
If $n>0$, then we distinguish a number of cases, depending on the last inductive
rule applied to derive $H$:
\begin{varitemize}
\item
If it is $(\mathsf{vl})$, then $M=x$, $\sbst{M}{x}{N}$
is simply $N$ and $\Gamma$ does not contain any variable. By the fact that
$$
\tjudg{\Delta,\im{\Theta},\cm{\Xi}}{N}\in\cd{\ell\Lambda_{\infty}}
$$
it follows, by a Weakening Lemma, that $J\in\cd{\ell\Lambda_{\infty}}$.
\item
It cannot be either $(\mathsf{vi})$ or $(\mathsf{vc})$ or $(\mathsf{mi})$: in all these
cases the underlying environment cannot contain variables;
\item
If it is either $(\mathsf{ll})$ or $(\mathsf{a})$, then the induction hypothesis yields the thesis
immediately;
\item
If it is either $(\mathsf{li})$ or $(\mathsf{lc})$, then a Weakening Lemma
applied to the induction hypothesis leads to the thesis.
\end{varitemize} \end{varitemize} From $J\in\cd{\ell\Lambda_{\infty}}$, it follows that $$ J\in\indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(\cd{\ell\Lambda_{\infty}})) \subseteq\indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(X)), $$ which is the thesis. This concludes the proof. \end{proof} A similar result can be given when the substituted variable occurs in the scope of an inductive box: \begin{lemma}[Substitution Lemma, Inductive Case]\label{lem:substindllwot}
If $\tjudg{\Gamma,\im{x},\im{\Theta},\cm{\Xi}}{M}$
and $\tjudg{\im{\Theta},\cm{\Xi}}{N}$, then it holds that
$\tjudg{\Gamma,\im{\Theta},\cm{\Xi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{proof} The structure of this proof is identical to the one of Lemma~\ref{lem:substlinllwot}. \end{proof} When the variable is in the scope of a coinductive box, almost nothing changes: \begin{lemma}[Substitution Lemma, Coinductive Case]\label{lem:substcoillwot}
If $\tjudg{\Gamma,\im{\Theta},\cm{x},\cm{\Xi}}{M}$
and $\tjudg{\cm{\Theta},\cm{\Xi}}{N}$, then it holds that
$\tjudg{\Gamma,\im{\Theta},\cm{\Xi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{proof} The structure of this proof is identical to the one of Lemma~\ref{lem:substlinllwot}. \end{proof} The following is an analogue of the so-called Subject Reduction Theorem, and is an easy consequence of the substitution lemmas: \begin{proposition}[Well-Formedness is Preserved by Reduction]
If $\tjudg{\Gamma}{M}$ and $M\redLLwod N$, then $\tjudg{\Gamma}{N}$. \end{proposition} \begin{proof} Let us first of all prove that if $M\redLLbas N$ and $\tjudg{\Gamma}{M}$, then $\tjudg{\Gamma}{N}$. We distinguish three cases: \begin{varitemize} \item
If $M$ is $\ap{(\la{x}{L})}{P}$, then
$\tjudg{\Delta,x,\im{\Theta},\cm{\Xi}}{L}$ and
$\tjudg{\Sigma,\im{\Theta},\cm{\Xi}}{P}$, where
$\Gamma=\Delta,\Sigma,\im{\Theta},\cm{\Xi}$.
By Lemma \ref{lem:substlinllwot}, one gets that
$\tjudg{\Gamma}{\sbst{L}{x}{P}}$, which is the thesis. \item
If $M$ is $\ap{(\ia{x}{L})}{\im{P}}$, then
$\tjudg{\Delta,\im{x},\im{\Theta},\cm{\Xi}}{L}$ and
$\tjudg{\im{\Theta},\cm{\Xi}}{P}$, where
$\Gamma=\Delta,\im{\Theta},\cm{\Xi}$. By Lemma \ref{lem:substindllwot}, one gets that
$\tjudg{\Gamma}{\sbst{L}{x}{P}}$, which is the thesis. \item
If $M$ is $\ap{(\ca{x}{L})}{\cm{P}}$, then
$\tjudg{\Delta,\im{\Theta},\cm{x},\cm{\Xi}}{L}$ and
$\tjudg{\im{\Theta},\cm{\Xi}}{P}$, where
$\Gamma=\Delta,\im{\Theta},\cm{\Xi}$. By Lemma \ref{lem:substcoillwot}, one gets that
$\tjudg{\Gamma}{\sbst{L}{x}{P}}$, which is the thesis. \end{varitemize} One can then prove that for every context $C$ and for every pair of terms $M$ and $N$ such that $M\redLLbas N$, if $\tjudg{\Gamma}{\actx{C}{M}}$ then $\tjudg{\Gamma}{\actx{C}{N}}$. This is proved by induction on the structure of $C$. \end{proof} Finitary reduction, as a consequence, is well-defined not only on preterms, but also on terms.
What about \emph{infinite} reduction? Actually, even defining what an infinite reduction sequence \emph{is} requires some care. In this paper, following~\cite{EndrullisPolonsky}, we define infinitary reduction by way of a mixed formal system (see Section~\ref{sect:basicproperties}). The judgments of this formal system have two forms, namely $\sjudg{M\redLLinf N}$ and $\sjudg{M\redLLnext N}$, and its rules are in Figure~\ref{fig:llwotid}. \begin{figure*}
\caption{$\ell\Lambda_{\infty}$: Infinitary Dynamics.}
\label{fig:llwotid}
\end{figure*} The relation $\Rightarrow$ is the infinitary, coinductively defined notion of reduction we are looking for. Informally, $\sjudg{M\redLLinf N}$ is provable (and we write, simply, $M\redLLinf N$) iff there is a third term $L$ such that $M$ reduces to $L$ in a finite number of steps, and $L$ itself reduces infinitarily to $N$ where, however, infinitary reduction is only applied at higher depths. The latter constraint is taken care of by $\leadsto$.
An infinite reduction sequence, then, can be seen as being decomposed into a finite prefix and finitely many infinite suffixes, each involving subterm occurrences at higher depths. We claim that this corresponds to strongly convergent reduction sequences as defined in~\cite{Kennaway97TCS}, although a formal comparison is outside the scope of this paper (see, however,~\cite{EndrullisPolonsky}).
What are the main properties of $\Rightarrow$? Is it a confluent notion of reduction? Is it that $N$ is a normal form whenever $M\redLLinf N$? Actually, the latter question can be easily given a negative answer: take the unique preterm $M$ such that $M=\;\cm{(\ap{M}{(\ap{I}{I})})}$, where $I=\la{x}{x}$ is the identity combinator. Of course, $\tjudg{\emptyset}{M}$. We can prove that both $M\redLLinf N$ and that $M\redLLinf L$, where $$ N=\;\cm{(\ap{\cm{(\ap{N}{(\ap{I}{I})})}}{I})};\qquad L=\;\cm{(\ap{\cm{(\ap{L}{I})}}{(\ap{I}{I})})}. $$ (In fact, $\Rightarrow$ is reflexive, see Lemma~\ref{lemma:reflextrans} below.) Neither $N$ nor $L$ is a normal form. It is easy to realise that there is $P$ to which both $N$ and $L$ reduce, namely $P=\cm{(\ap{\cm{(\ap{P}{I})}}{I})}$. Confluence, however, does not hold in general, as can be easily shown by considering the following two terms $M$ and $N$: $$ M=K\cm{N}\cm{K};\qquad N=K\cm{M}\cm{I}; $$ where $K=\ca{x}{\ca{y}{x}}$ and $I=\ca{x}{x}$. If we reduce $M$ at even and at odd depths, we end up with two terms $L$ and $P$ which cannot be joined by $\Rightarrow$, namely the following: $$ L=K\cm{L}\cm{I};\qquad P=K\cm{P}\cm{K}. $$ The deep reason why this phenomenon happens is an interference between $\rightarrow$ and $\leadsto$: there are $Q$ and $R$ such that $M\redLLwod Q$ and $M\redLLnext R$, but there is no $S$ such that $Q\redLLnext S$ and $R\redLLwod S$.
\subsection{Level-by-Level Reduction}\label{sect:levelbylevel}
One restriction of $\Rightarrow$ that will be useful in the following is the so-called \emph{level-by-level} reduction, which is obtained by constraining reduction to occur at deeper levels only if no redex occurs at outer levels. Formally, let $\rightarrow_{\mathit{lbl}}$ be the restriction of $\rightarrow$ obtained by stipulating that $(M,\bsone,N)\in\rightarrow_{\mathit{lbl}}$ iff there are a $\bsone$-context $\pctxone{\bsone}$ and two terms $L$ and $P$ such that $L\redLLbas P$, $M=\actx{\pctxone{\bsone}}{L}$, and $N=\actx{\pctxone{\bsone}}{P}$, \emph{and moreover}, $M$ is $\bstwo$-normal for every proper prefix $\bstwo$ of $\bsone$. Then one can obtain $\leadsto_{\mathit{lbl}}$ and $\Rightarrow_{\mathit{lbl}}$ from $\rightarrow_{\mathit{lbl}}$ as we did for $\leadsto$ and $\Rightarrow$ in Section~\ref{sect:fininfdyn} (see Figure~\ref{fig:llwotidlbl}). \begin{figure*}
\caption{$\ell\Lambda_{\infty}$: Level-by-Level Infinitary Dynamics.}
\label{fig:llwotidlbl}
\end{figure*}
Clearly, if $M\redLLinflbl N$, then $M\redLLinf N$. Moreover, $\Rightarrow_{\mathit{lbl}}$, in contrast to $\Rightarrow$, is confluent, simply because $\rightarrow_{\mathit{lbl}}$ satisfies a diamond property. This is not surprising, and has already been observed in the realm of finitary rewriting~\cite{Simpson05}. Moreover, level-by-level reduction is effective: only a finite portion of $M$ needs to be inspected in order to check whether a given redex occurring in $M$ can be fired (or to find one if $M$ contains one). Indeed, it will be used to define what it means for a term in $\ell\Lambda_{\infty}$ to compute a function, which is the main topic of the following section.
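The side condition on $\rightarrow_{\mathit{lbl}}$ can be phrased purely in terms of the set of levels at which redexes occur: a redex may be fired only if no redex sits at a proper prefix of its level. A minimal sketch of this selection (the name \texttt{firable} is ours):

```python
def firable(levels: set) -> set:
    """Levels at which level-by-level reduction may fire a redex: those
    having no *proper* prefix that is itself a redex level."""
    return {
        s for s in levels
        if not any(t != s and s.startswith(t) for t in levels)
    }

# With redexes at the outermost level and inside two nested boxes,
# only the outermost one may be fired first:
assert firable({"", "0", "01"}) == {""}
# Incomparable levels may be fired independently, whence the diamond
# property underlying confluence of the level-by-level strategy:
assert firable({"0", "1"}) == {"0", "1"}
```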
\section{On the Expressive Power of $\ell\Lambda_{\infty}$}
The just introduced calculus $\ell\Lambda_{\infty}$ can be seen as a refinement of $\Lambda_{\infty}$ obtained by giving a first-order status to depths, i.e., by introducing a specific construct which makes the depth increase when it is crossed. In this section, we will give an interesting result about the absolute expressive power of the introduced calculus: not only functions on finite strings can be expressed, but also functions on \emph{infinite} strings. Before doing that, we will investigate the possibility of embedding existing infinitary $\lambda$-calculi from the literature.
\subsection{Embedding $\Lambda_{\infty}$}
Some introductory words about $\Lambda_{\infty}$ are now in order (see~\cite{Kennaway97TCS} or \cite{Kennaway03} for more details). Originally, $\Lambda_{\infty}$ was defined by completing the space of $\lambda$-terms with respect to a metric. Here we reformulate the calculus differently, based on coinduction.
In $\Lambda_{\infty}$, there are many choices as to where the underlying depth can increase. Indeed, \emph{eight} different calculi can be defined. More specifically, for every $a,b,c\in\mathbb{B}$, $\Lwotp{a}{b}{c}$ is obtained by stipulating that: \begin{varitemize} \item
the depth increases while crossing abstractions iff $a=1$; \item
the depth increases when going through the first argument of an application iff $b=1$; \item
the depth increases when entering the second argument of an application iff $c=1$. \end{varitemize} Formally, one can define terms of $\Lwotp{a}{b}{c}$ as those (finite or infinite) $\lambda$-terms $M$ such that $\ptjudg{\Gamma}{abc}{M}$ is derivable through the rules in Figure~\ref{fig:lwotwfr}. \begin{figure*}
\caption{$\Lambda_{\infty}$: Well-Formation Rules.}
\label{fig:lwotwfr}
\end{figure*} Finite and infinite reduction sequences can be defined exactly as we have just done for $\ell\Lambda_{\infty}$. The obtained calculi have a very rich and elegant mathematical theory. Not much is known, however, about whether $\Lambda_{\infty}$ can be tailored so as to guarantee key properties of programs working on streams, like productivity.
Let us now show how $\Lwotp{0}{0}{0}$ and $\Lwotp{0}{0}{1}$ can indeed be embedded into $\ell\Lambda_{\infty}$. For every binary digit $a$, the map $\peremb{\cdot}{a}$ from the space of terms of $\Lwotp{0}{0}{a}$ into the space of preterms is defined as follows: \begin{align*}
\peremb{x}{a}&=x;\\
\peremb{\ap{M}{N}}{a}&=\ap{\peremb{M}{a}}{\pam{a}{\peremb{N}{a}}};\\
\peremb{\ab{x}{M}}{a}&=\pa{a}{x}{\peremb{M}{a}}; \end{align*} where the expression $\pam{a}{M}$ is defined to be $\im{M}$ if $a=0$ and $\cm{M}$ if $a=1$. Please observe that $\peremb{\cdot}{a}$ is defined by coinduction on the space of terms of $\Lwotp{0}{0}{a}$, which contains possibly infinite objects. Incidentally, $\peremb{\cdot}{a}$ can be seen as Girard's embedding of intuitionistic logic into linear logic where, however, the kinds of boxes and abstractions we use depend on $a$: we go inductive if $a=0$ and coinductive otherwise.
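On finite terms, the clauses above are a plain structural recursion; as an illustration, a minimal sketch over a toy AST (all constructor names and the function \texttt{embed} are ours; boxes are modelled by an explicit node whose bit records the inductive/coinductive kind):

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class App: fun: "Term"; arg: "Term"
@dataclass(frozen=True)
class Lam: var: str; body: "Term"                 # source-calculus abstraction
@dataclass(frozen=True)
class BoxedLam: bit: int; var: str; body: "Term"  # target abstraction, kind = bit
@dataclass(frozen=True)
class Box: bit: int; body: "Term"                 # 0 = inductive, 1 = coinductive

Term = Union[Var, App, Lam, BoxedLam, Box]

def embed(m: Term, a: int) -> Term:
    """Girard-style embedding: every argument gets boxed, the kind of box
    (and of abstraction) being chosen uniformly by the bit a."""
    if isinstance(m, Var):
        return m
    if isinstance(m, App):
        return App(embed(m.fun, a), Box(a, embed(m.arg, a)))
    if isinstance(m, Lam):
        return BoxedLam(a, m.var, embed(m.body, a))
    raise ValueError("not a source term")

# The identity applied to a variable, embedded with inductive boxes:
t = embed(App(Lam("x", Var("x")), Var("y")), 0)
assert t == App(BoxedLam(0, "x", Var("x")), Box(0, Var("y")))
```

Note that the translated application is again a redex of the target calculus: a bit-$a$ abstraction facing a bit-$a$ box, which is what makes the simulation step-by-step.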
First of all, preterms obtained via the embedding are actually terms, whose free variables appear in the environment wrapped (depending on $a$) in the appropriate kind of box: \begin{lemma}
For every $M\in\Lambda_{00a}$, it holds that $\tjudg{\pam{a}{\FV{M}}}{\peremb{M}{a}}$. \end{lemma} \begin{proof}
We proceed by showing that the following set of judgments $X$
is consistent with $\ell\Lambda_{\infty}$: $$ \left\{\tjudg{\pam{a}{\FV{M}}}{\peremb{M}{a}}\; \; \mbox{\Large{$\mid$}}\;\;M\in\Lwotp{0}{0}{a}\right\}. $$ Suppose that $\tjudg{\pam{a}{\FV{M}}}{\peremb{M}{a}}\in X$, where $M\in\Lwotp{0}{0}{a}$. This implies that $M=\actx{G}{N_1,\ldots,N_n}$, where $N_1,\ldots,N_n\in\Lwotp{0}{0}{a}$. The fact that $\tjudg{\pam{a}{\FV{M}}}{\peremb{M}{a}}\in\indrul{\ell\Lambda_{\infty}}(\coindrul{\ell\Lambda_{\infty}}(X))$ can be proved by induction on $G$, with different cases depending on the value of $a$. \end{proof} From a dynamical point of view, this embedding is \emph{perfect}: not only can basic reduction in $\Lambda_{\infty}$ be simulated in $\ell\Lambda_{\infty}$, but any reduction we perform in $\peremb{M}{a}$ can be traced back to a reduction happening in $M$. \begin{lemma}[Perfect Simulation]\label{lemma:perfsim}
For every $M\in\Lwotp{0}{0}{a}$, if $M\redlambda{n}N$, then
$\peremb{M}{a}\redLL{\strnat{n}}\peremb{N}{a}$.
Moreover, for every $M\in\Lambda_{00a}$, if
$\peremb{M}{a}\redLL{\strnat{n}}N$, then there
is $L$ such that $M\redlambda{n}L$ and
$\peremb{L}{a}\syneq N$. \end{lemma} \begin{proof} Just consider how a redex $\ap{(\la{x}{M})}{N}$ in $\Lwotp{0}{0}{a}$ is translated: it becomes $\ap{(\pa{a}{x}{\peremb{M}{a}})}{\pam{a}{\peremb{N}{a}}}$. As can be easily proved, for every $a$ and for every $M,N\in\Lwotp{0}{0}{a}$, $$ \sbst{(\peremb{M}{a})}{x}{\peremb{N}{a}}= \peremb{\sbst{M}{x}{N}}{a}. $$ This means $\ell\Lambda_{\infty}$ correctly simulates $\Lwotp{0}{0}{a}$. For the converse, just observe that the only redexes in $\peremb{M}{a}$ are those corresponding to redexes from $M$. \end{proof}
One may wonder whether $\Lwotp{0}{0}{1}$ is the only (non-degenerate) dialect of $\Lambda_{\infty}$ which can be simulated in $\ell\Lambda_{\infty}$. Actually, besides the perfect embedding we have just given, there is also an \emph{imperfect} embedding of systems in the form $\Lwotp{a}{0}{b}$ (where $a$ and $b$ are binary digits) into $\ell\Lambda_{\infty}$: \begin{align*}
\imperemb{x}{a}{b}&=x;\\
\imperemb{\ap{M}{N}}{a}{b}&=\ap{(\pa{a}{x}{x})}
{(\ap{\imperemb{M}{a}{b}}{\pam{b}{\imperemb{N}{a}{b}}})};\\
\imperemb{\ab{x}{M}}{a}{b}&=\pa{b}{x}{\pam{b}{\imperemb{M}{a}{b}}}. \end{align*} This is a variation on the so-called call-by-value embedding of intuitionistic logic into linear logic (i.e. the embedding induced by the map $(A\rightarrow B)^\bullet=!(A^\bullet)\multimap!(B^\bullet)$, see~\cite{Maraist95ENTCS}). Please notice, however, that variables occur nonlinearly in the environment, while the term itself is never a box, in contrast to the usual call-by-value embedding (where at least values are translated into boxes). As expected: \begin{lemma}
For every $M\in\Lambda_{\bitone0b}$, it holds that
$\tjudg{\pam{b}{\FV{M}}}{\imperemb{M}{a}{b}}$. \end{lemma} As can be easily realised, any $\beta$ step in $\Lambda_{\infty}$ can be simulated by \emph{two} reduction steps in $\ell\Lambda_{\infty}$. This makes the simulation imperfect: \begin{lemma}[Imperfect Simulation]
For every $M\in\Lambda_{\bitone0b}$, if $M\redlambda{n}N$, then
$\imperemb{M}{a}{b}\redLL{\strnat{n}}^2\imperemb{N}{a}{b}$. \end{lemma} \begin{lemma}
For every $M\in\Lambda_{\bitone0b}$, if
$\imperemb{M}{a}{b}\redLL{\strnat{n}}N$, then
there is $L$ such that
$M\redlambda{n}L$ and
$\imperemb{L}{a}{b}\syneq N$. \end{lemma}
\subsection{$\ell\Lambda_{\infty}$ as a Stream Programming Language}\label{sect:llwotspl}
One of the challenges which led to the introduction of infinitary rewriting systems is, as we argued in the Introduction, the possibility of injecting infinity (as arising in lazy data structures such as streams) into formalisms like the $\lambda$-calculus. In this section, we show that, indeed, $\ell\Lambda_{\infty}$ can express not only terms of free (co)algebras, but also any effective function on them. Moreover, anything $\ell\Lambda_{\infty}$ can compute can also be computed by Type-2 Turing machines. To the author's knowledge, this is the first time this is done for any system of infinitary rewriting (some partial results, however, can be found in~\cite{Barendregt09}).
\subsection{Signatures and Free (Co)algebras} A \emph{signature} $\Phi$ is a set of function symbols, each with an associated \emph{arity}. Function symbols will be denoted with metavariables like $\mathtt{f}$ or $\mathtt{g}$. In this paper, we are concerned with \emph{finite} signatures, only. Sometimes, a signature has a very simple structure: an \emph{alphabet signature} is a signature whose function symbols all have arity $1$, except for a single nullary symbol, denoted $\varepsilon$. Given an alphabet $\Sigma$, $\fins{\Sigma}$ ($\infs{\Sigma}$, respectively) denotes the set of finite (infinite, respectively) words over $\Sigma$. $\fininfs{\Sigma}$ is simply $\fins{\Sigma}\cup\infs{\Sigma}$. For every alphabet $\Sigma$, there is a corresponding alphabet signature $\Phi_\Sigma$. Given a signature (or an alphabet), one usually needs to define the set of terms built according to the algebra itself. Indeed, the \emph{free
algebra} $\fa{\Phi}$ induced by a signature $\Phi$ is the set of all finite terms built from function symbols in $\Phi$, i.e., all terms \emph{inductively} defined from the following production (where $n$ is the arity of $\mathtt{f}$): \begin{equation}\label{equ:algcoalg} t::=\mathtt{f}(t_1,\ldots,t_n). \end{equation} There is another canonical way of building terms from signatures, however. One can interpret the production above \emph{coinductively}, getting a space of finite \emph{and infinite} terms: the \emph{free
coalgebra} $\fc{\Phi}$ induced by a signature $\Phi$ is the set of all finite \emph{and infinite} terms built from function symbols in $\Phi$, following~(\ref{equ:algcoalg}). Notice that $\fins{\Sigma}$ is isomorphic to $\fa{\Phi_\Sigma}$, while $\fininfs{\Sigma}$ is isomorphic to $\fc{\Phi_\Sigma}$. We often elide the underlying isomorphisms, confusing strings and terms.
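The isomorphism between words and terms of the alphabet signature can be made concrete; a minimal sketch (the representations and names below are ours), modelling free-coalgebra terms lazily with thunks so that infinite words fit too:

```python
# A word over an alphabet corresponds to a term of the alphabet
# signature: each character is a unary function symbol, and the empty
# word is the nullary symbol eps.  Infinite words live in the free
# *coalgebra*, which we model with thunked tails (a design choice of
# this sketch, not of the paper).

def word_to_term(w: str):
    """Finite word -> free-algebra term, e.g. "ab" -> ('a', ('b', 'eps'))."""
    t = "eps"
    for c in reversed(w):
        t = (c, t)
    return t

def stream_to_term(head: str, rest):
    """One unfolding of the free-coalgebra term of an infinite word:
    the tail is a thunk, forced only on demand."""
    return (head, rest)

def alternating():
    """The infinite word 0101... as a lazy free-coalgebra term."""
    return stream_to_term("0", lambda: stream_to_term("1", alternating))

assert word_to_term("ab") == ("a", ("b", "eps"))
t = alternating()
assert t[0] == "0" and t[1]()[0] == "1" and t[1]()[1]()[0] == "0"
```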
\subsection{Representing (Infinitary) Terms in $\ell\Lambda_{\infty}$} There are many number systems which work well in the finitary $\lambda$-calculus. One of them is the well-known system of Church numerals, in which $n\in\mathbb{N}$ is represented by $\la{x}{\la{y}{x^ny}}$. We here adopt another scheme, attributed to Scott~\cite{Wadsworth80}: this makes the relation between depths and computation more explicit. Let $\Phi=\{\mathtt{f}_1,\ldots,\mathtt{f}_n\}$ and suppose that symbols in $\Phi$ can be totally ordered in such a way that $\mathtt{f}_n\leq\mathtt{f}_m$ iff $n\leq m$. Terms of the free algebra $\fa{\Phi}$ can be encoded as terms of $\ell\Lambda_{\infty}$ as follows: $$ \faemb{\Phi}{\mathtt{f}_m(t_1,\ldots,t_p)}= \ia{x_1}{\cdots.\ia{x_n}{x_m\im{\faemb{\Phi}{t_1}}\cdots\im{\faemb{\Phi}{t_p}}}}. $$ Similarly for terms in the free coalgebra $\fc{\Phi}$: $$ \fcemb{\Phi}{\mathtt{f}_m(t_1,\ldots,t_p)}= \ia{x_1}{\cdots.\ia{x_n}{x_m\cm{\fcemb{\Phi}{t_1}}\cdots\cm{\fcemb{\Phi}{t_p}}}}. $$ Given a string $s\in\fins{\Sigma}$, the term $\faemb{\Phi_\Sigma}{s}$ is denoted simply as $\fsemb{s}$. Similarly, if $s\in\infs{\Sigma}$, $\isemb{s}$ indicates $\fcemb{\Phi_\Sigma}{s}$. Please observe how $\faemb{\Phi}{\cdot}$ differs from $\fcemb{\Phi}{\cdot}$: in the first case the encodings of subterms are wrapped in an inductive box, while in the second case the enclosing box is coinductive. This very much reflects the spirit of our calculus: in $\fcemb{\Phi}{t}$ the depth increases whenever entering a subterm, while in $\faemb{\Phi}{s}$, the depth \emph{never} increases.
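The Scott-style encoding has a direct functional reading: a term becomes a function expecting one handler per symbol, and case analysis is mere application. A minimal sketch for the alphabet $\{0,1\}$ (names are ours; boxes are ignored here, as they have no computational counterpart in plain Python):

```python
# Scott-style encoding for the alphabet signature of {0,1}: the term
# f_m(t) becomes a function that receives one handler per symbol and
# hands the (encoded) argument to the m-th one; eps takes no argument.

def enc_eps():
    return lambda h0: lambda h1: lambda heps: heps

def enc0(t):  # the word 0·t
    return lambda h0: lambda h1: lambda heps: h0(t)

def enc1(t):  # the word 1·t
    return lambda h0: lambda h1: lambda heps: h1(t)

def enc(w: str):
    """Encode a finite word, innermost symbol first."""
    t = enc_eps()
    for c in reversed(w):
        t = enc0(t) if c == "0" else enc1(t)
    return t

def head(t):
    """Case analysis by mere application: no recursion needed."""
    return t(lambda _: "0")(lambda _: "1")("eps")

assert head(enc("01")) == "0"
assert head(enc("")) == "eps"
```

Unlike Church numerals, this scheme exposes one constructor per reduction step, which is what makes the correspondence with depths explicit.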
\subsection{Universality}\label{sect:univers}
The question now is: given the encoding in the last paragraph, which \emph{functions} can we represent in $\ell\Lambda_{\infty}$? If domain and codomain are \emph{free algebras}, a satisfactory answer easily comes from the universality of the ordinary $\lambda$-calculus with respect to computability on finite structures: the class of functions at hand coincides with the effectively computable ones. If, on the other hand, functions handling or returning terms from free coalgebras are of interest, the question becomes much more challenging.
The expressive power of $\ell\Lambda_{\infty}$ actually coincides with that of Type-2 Turing machines: these are mild generalisations of ordinary Turing machines obtained by allowing inputs and outputs to be not-necessarily-finite strings. Such a machine consists of finitely many initially blank work tapes, finitely many one-way input tapes and a single one-way output tape. Notably, input tapes initially contain not-necessarily-finite strings, while the output tape is sometimes supposed to be filled with an infinite string. See~\cite{Weihrauch00} for more details and Figure~\ref{fig:twotm} for a graphical representation of the structure of any Type-2 Turing machine: black arrows represent the data flow, whereas grey arrows represent the possible direction of the head in the various tapes. \begin{figure}
\caption{The Structure of a Type-2 Turing Machine}
\label{fig:twotm}
\end{figure}
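To fix intuitions, here is a minimal sketch of a Type-2 computation (bitwise complement), with the one-way input and output tapes modelled as Python iterators; the modelling choices are ours, not part of the formal definition:

```python
from itertools import islice, cycle
from typing import Iterable, Iterator

def complement_machine(tape: Iterable[str]) -> Iterator[str]:
    """A toy Type-2 computation: both the input head and the output
    head move one-way, so each input cell is read at most once and each
    output cell, once written, is never revisited.  Hence every finite
    prefix of the output is produced after finitely many steps."""
    for bit in tape:                        # one-way input head
        yield "1" if bit == "0" else "0"    # one-way output head

# On a finite input the machine behaves like an ordinary one...
assert "".join(complement_machine("0110")) == "1001"
# ...and on an infinite input, every finite prefix of the (infinite)
# output is still computable:
assert "".join(islice(complement_machine(cycle("01")), 6)) == "101010"
```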
We now need to properly formalise \emph{when} a given function on possibly infinite strings can be represented by a term in $\ell\Lambda_{\infty}$. To that purpose, let $\mathbb{S}$ be the set $\{*,\omega\}$, where the two elements of $\mathbb{S}$ are considered merely as \emph{symbols}, with no internal structure. Objects in $\mathbb{S}$ are indicated with metavariables like $\mathfrak{a}$ or $\mathfrak{b}$. A partial function $f$ from $\pfininfs{\Sigma}{\mathfrak{a}_1}\times\cdots\times\pfininfs{\Sigma}{\mathfrak{a}_n}$ to $\pfininfs{\Sigma}{\mathfrak{b}}$ is said to be \emph{representable} in $\ell\Lambda_{\infty}$ iff there is a finite term $M_f$ such that for every $s_1\in\pfininfs{\Sigma}{\mathfrak{a}_1},\dots, s_n\in\pfininfs{\Sigma}{\mathfrak{a}_n}$ it holds that $$ \ap{M_f}{\psemb{s_1}{\mathfrak{a}_1}\cdots\psemb{s_n}{\mathfrak{a}_n}}\Rightarrow_{\mathit{lbl}}\psemb{f(s_1,\ldots,s_n)}{\mathfrak{b}} $$ if $f(s_1,\ldots,s_n)$ is defined, while $\ap{M_f}{\psemb{s_1}{\mathfrak{a}_1}\cdots\psemb{s_n}{\mathfrak{a}_n}}$ has no normal form otherwise. Notice the use of level-by-level reduction. Notably: \begin{theorem}[Universality]
The class of functions which are representable in $\ell\Lambda_{\infty}$ coincides with the class of functions computable by
Type-2 Turing machines. \end{theorem} \begin{proof} This proof relies on a standard encoding of Turing machines into $\ell\Lambda_{\infty}$. Rather than describing the encoding in detail, we now give some observations that, taken together, should convince the reader that the encoding is indeed possible: \begin{varitemize} \item
First of all, inductive and coinductive fixed point combinators are both available in $\ell\Lambda_{\infty}$. Indeed, let
$M_a$ be the following term:
$$
M_a=\ia{x}{\ia{y}{\ap{y}{\pam{a}{(\ap{(\ap{x}{\im{x}})}{\im{y}})}}}}
$$
Then $Y_a$ is just $\ap{M_a}{\im{M_a}}$. Observe that
$Y_a\im{M}\redLLinf M\pam{a}{(\ap{Y_a}{\im{M}})}$. \item
Moreover, observe that the encoding of free (co)algebras described above not only provides an elegant way to represent
\emph{terms}, but also allows one to define efficient selection combinators very easily. For example, given the alphabet
$\Sigma=\{0,1\}$, the selector for the free algebra $\fa{\Sigma}$ of binary strings is the term
$$
M=\la{x}{\ia{y_0}{\ia{y_1}{\ia{y_\varepsilon}
{\ap{x}{\im{y_0}\im{y_1}\im{y_\varepsilon}}}}}}.
$$
Please observe that
\begin{align*}
M\faemb{\Sigma}{0(s)}\im{N_0}\im{N_1}\im{N_\varepsilon}&\Rightarrow
N_0\faemb{\Sigma}{s};\\
M\faemb{\Sigma}{1(s)}\im{N_0}\im{N_1}\im{N_\varepsilon}&\Rightarrow
N_1\faemb{\Sigma}{s};\\
M\faemb{\Sigma}{\varepsilon}\im{N_0}\im{N_1}\im{N_\varepsilon}&\Rightarrow
N_\varepsilon.
\end{align*}
This can be generalised to an arbitrary (co)algebra. \item
Tuples can be represented easily as follows:
$$
\la{x}{\ap{x}{\im{M_1}\ldots\im{M_n}}}.
$$ \item
A configuration of a Type-2 Turing machine working on the alphabet $\Sigma$, with states in the
set $Q$, and having $n$ input tapes and $m$ working tapes can be seen
as the following $(n+3m+1)$-tuple:
$$
(s_1,\ldots,s_n,t_1^l,a_1,t_1^r,\ldots,t_m^l,a_m,t_m^r,q)
$$
where $s_i$ is the not-yet-read portion of the $i$-th input tape,
$t_i^l$ (respectively, $t_i^r$) is the portion of the $i$-th working tape to the left (respectively, right)
of the head, $a_i$ is the symbol currently under the $i$-th head, and $q$ is the current state.
All these $n+3m+1$ objects can be seen as elements of appropriate (co)algebras:
\begin{varitemize}
\item
$s_1,\ldots,s_n$ are (finite or infinite, depending on the underlying machine) strings;
\item
$a_1,\ldots,a_m,q$ are all elements of finite sets;
\item
$t_1^l,t_1^r,\ldots,t_m^l,t_m^r$ are finite strings;
\end{varitemize}
As a consequence, all of them can be encoded in Scott-style. Moreover, the availability of selectors
and tuples makes it easy to write a term encoding the transition function of the encoded machine. Please notice that the
output tape is \emph{not} part of the configuration above.
If a character is produced in output (i.e. whenever the head of the output tape moves right), the rest
of the computation will take place ``inside'' the encoding of a (possibly infinite) string. \item
One needs to be careful when handling final states: if the machine reaches a final state \emph{even though}
it is meant to produce an infinite string in output, the encoding $\lambda$-term should diverge anyway~\cite{Weihrauch00}. \end{varitemize} Putting the ingredients above together, one gets for every machine $\mathcal{M}$ a term $M_\mathcal{M}$ which computes the same function as $\mathcal{M}$. The fact that anything computable by a finite term $M$ is also computable by a Type-2 Turing machine is quite easy to establish, since level-by-level reduction is effective and, moreover, a normal form in the sense of level-by-level reduction is reached iff it is reached by applying surface reduction (in the sense of inductive boxes) \emph{at each depth}.
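The fixed-point combinators $Y_a$ from the proof above have a familiar analogue in an eager language, where a thunk plays the role the (co)inductive box plays in guarding the recursive call; a minimal sketch (names are ours):

```python
def fix(f):
    """Call-by-value fixed-point combinator: the recursive occurrence is
    wrapped in a thunk, so the unfolding stops until it is demanded,
    just as the box freezes the recursive call of Y_a."""
    return f(lambda *args: fix(f)(*args))

# A corecursive definition of the stream n, n+1, n+2, ...: each
# unfolding of the fixed point produces one more cell, the tail of the
# stream staying frozen behind a thunk.
nats_from = fix(lambda rec: lambda n: (n, lambda: rec(n + 1)))

s = nats_from(0)
assert s[0] == 0
assert s[1]()[0] == 1
assert s[1]()[1]()[0] == 2
```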
\section{Taming Infinity: $\ell\Lambda_{\infty}^{\mathsf{4S}}$}
As we have seen in the last sections, $\ell\Lambda_{\infty}$ is a very powerful model: not only is it universal as a way to compute over streams, but it also comes with an extremely liberal infinitary dynamics for which, unfortunately, confluence does not hold. If the paper ended here, this work would be rather inconclusive: $\ell\Lambda_{\infty}$ suffers from the same kind of defects which its nonlinear sibling $\Lambda_{\infty}$ has.
In this section, however, we will define a restriction of $\ell\Lambda_{\infty}$, called $\ell\Lambda_{\infty}^{\mathsf{4S}}$, which, thanks to a careful management of boxes in the style of light logics~\cite{Girard98IC,Danos03IC,Lafont04TCS}, keeps infinity under control and achieves results which are impossible in $\Lambda_{\infty}$.
Actually, the notion of a preterm remains unaltered. What changes is how \emph{terms} are defined. First of all, patterns are generalised by two new productions $p::=\dm{x}\; \; \mbox{\Large{$\mid$}}\;\;\am{x}$, which make the notion of an environment slightly more general: it can now contain variables in \emph{five} different forms. Judgments have the usual shape, namely $\tjudg{\Gamma}{M}$ where $\Gamma$ is an environment and $M$ is a term. Metavariables like $\Upsilon$ or $\Pi$ stand for environments where the only allowed patterns are either variables or the ones in the form $\im{x}$. Rules of $\ell\Lambda_{\infty}^{\mathsf{4S}}$ are quite different from those of $\ell\Lambda_{\infty}$, and can be found in Figure~\ref{fig:lllwotwfr}. \begin{figure*}
\caption{$\ell\Lambda_{\infty}^{\mathsf{4S}}$: Well-Formation Rules.}\label{fig:lllwotwfr}
\end{figure*} The meaning well-formation rules induce on variables occurring in environments is more complicated than for $\ell\Lambda_{\infty}$. Suppose that $\tjudg{\Gamma}{M}$. Then: \begin{varenumerate} \item\label{enum:first}
If $x\in\Gamma$ then, as usual, $x$ occurs once in
$M$, and outside of any box; \item\label{enum:second}
If $\dm{x}\in\Gamma$ then $x$
can occur any number of times in $M$, but all these
occurrences are in \emph{linear} position, i.e., outside the scope
of \emph{any} box; \item\label{enum:third}
If $\im{x}\in\Gamma$ then $x$ occurs exactly once in
$M$, and in the scope of exactly one (inductive) box; \item\label{enum:fourth}
If $\cm{x}\in\Gamma$, then $x$ occurs any number of
times in $M$, with the only proviso that any such occurrence
of $x$ must be in the scope of \emph{at least} one coinductive
box. \item\label{enum:fifth}
Finally, if $\am{x}\in\Gamma$, then $x$ occurs any
number of times in $M$, in any possible position. \end{varenumerate} Conditions \ref{enum:first}. to \ref{enum:third}. are reminiscent of those of Lafont's soft linear logic. Analogously, Condition~\ref{enum:fourth}. is very much in the style of \textsf{4LL}\ as described by Danos and Joinet~\cite{Danos03IC}: $\uparrow$ is morally a functor for which contraction, weakening and digging are available, but which does not support dereliction. We will come back to the consequences of this exponential discipline later in this section. Please observe that any variable $x$ marked as $\dm{x}$ or $\im{x}$ cannot occur in the scope of coinductive boxes. The pattern $\am{x}$ plays a merely technical role.
If $\Gamma$ and $\Delta$ are environments, we write $\Gamma\prec\Delta$ iff $\Delta$ can be obtained from $\Gamma$ by replacing \emph{some} patterns in the form $\im{x}$ with $\dm{x}$. Well-formation is preserved by reduction and, as for $\ell\Lambda_{\infty}$, a proof of this fact requires a number of substitution lemmas: \begin{lemma}[Substitution Lemma, Linear Case]\label{lemma:lllsllinear} If $\tjudg{\Upsilon,x,\dm{\Theta},\cm{\Xi},\am{\Psi}}{M}$ and $\tjudg{\Pi,\dm{\Theta},\cm{\Xi},\am{\Psi}}{N}$, then $\tjudg{\Upsilon,\Pi,\dm{\Theta},\cm{\Xi},\am{\Psi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{lemma}[Substitution Lemma, First Inductive Case]\label{lemma:lllslinductiveI} If $\tjudg{\Upsilon,\dm{\Theta},\dm{x},\cm{\Xi},\am{\Psi}}{M}$ and $\tjudg{\Phi,\cm{\Xi},\am{\Psi}}{N}$, then $\tjudg{\Upsilon,\dm{\Theta},\dm{\Phi},\cm{\Xi},\am{\Psi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{lemma}[Substitution Lemma, Second Inductive Case]\label{lemma:lllslinductiveII} If $\tjudg{\Upsilon,\dm{\Theta},\im{x},\cm{\Xi},\am{\Psi}}{M}$ and $\tjudg{\Phi,\cm{\Xi},\am{\Psi}}{N}$, then $\tjudg{\Upsilon,\dm{\Theta},\im{\Phi},\cm{\Xi},\am{\Psi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{lemma}[Substitution Lemma, Coinductive Case]\label{lemma:lllslcoinductive} If $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\cm{x},\am{\Psi}}{M}$ and $\tjudg{\am{\Xi},\am{\Psi}}{N}$, then $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\am{\Psi}}{\sbst{M}{x}{N}}$. \end{lemma} \begin{lemma}[Substitution Lemma, Arbitrary Case]\label{lemma:lllslarbitrary} If $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\am{\Psi},\am{x}}{M}$ and $\tjudg{\am{\Psi}}{N}$, then $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\am{\Psi}}{\sbst{M}{x}{N}}$. \end{lemma} Altogether, the lemmas above imply \begin{proposition}[Well-Formedness is Preserved by Reduction]
If $\tjudg{\Gamma}{M}$ and $M\redLLwodN$, then $\tjudg{\Delta}{N}$
where $\Gamma\prec\Delta$. \end{proposition} Please observe that the underlying environment can indeed change during reduction, but in a very peculiar way: variables occurring in a $\downarrow$-pattern can later move to a $\#$-pattern.
Classes $\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ and $\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\Gamma)$ (where $\Gamma$ is an environment) are defined in the natural way, as in $\ell\Lambda_{\infty}$.
\subsection{The Fundamental Lemma}
It is now time to show \emph{why} $\ell\Lambda_{\infty}^{\mathsf{4S}}$ is a computationally well-behaved object. In this section we will prove a crucial result, namely that reduction is strongly normalising \emph{at each
depth}.
Before embarking on the proof of this result, let us spend some time to understand why this is the case, giving some necessary definitions along the way.
For any term $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ let us define the \emph{size} $\psize{M}{n}$ of $M$ \emph{at depth} $n$ as the number of occurrences of any symbol at depth $n$ inside $M$. Observe that $\psize{M}{n}$ is well-defined \emph{only} because $M$ is assumed to be a term and not just a preterm. Formally, $\psize{M}{n}$ is any
natural number satisfying the equations in
Figure~\ref{fig:psizetms}, and the following result holds:
\begin{figure}
\caption{Parametrised Sizes of Preterms: Equations.}
\label{fig:psizetms}
\end{figure} \begin{lemma}\label{lemma:psize} For every term $M$ and for every natural number $m\in\mathbb{N}$ there is a unique natural number $n$ such that $\psize{M}{m}=n$. \end{lemma} \begin{proof} The fact that for each $M$ and for each $m$ there is \emph{one} natural number satisfying the equations in Figure~\ref{fig:psizetms} can be proved by induction on $m$: \begin{varitemize} \item
If $m=0$, then since $M$ is a term, $\tjudg{\Gamma}{M}$ is an element of the
set $\indrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\coindrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}}))$. Then, let us perform another induction
on the (finite) number of inductive rules used to obtain $\tjudg{\Gamma}{M}$
from something in $\coindrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}})$:
\begin{varitemize}
\item
If the last rule is $(\mathsf{vl})$, $(\mathsf{vd})$ or $(\mathsf{va})$, then $\psize{M}{0}=1$ by
definition;
\item
If the last rule is $(\mathsf{a})$, then $M=\ap{N}{L}$,
there are $\psize{N}{0}$ and $\psize{L}{0}$,
and $\psize{M}{0}=\psize{N}{0}+\psize{L}{0}+1$;
\item
If the last rule is either $(\mathsf{ll})$, $(\mathsf{li})_1$, $(\mathsf{li})_2$, $(\mathsf{lc})$ or
$(\mathsf{mi})$, then we can proceed as in the previous case, by the inductive hypothesis;
\item
If the last rule is $(\mathsf{mc})$, then $\psize{M}{0}=0$ by definition.
\end{varitemize} \item
If $m\geq 1$, then again, since $M$ is a term, $\tjudg{\Gamma}{M}$ is an element of the
set $\indrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\coindrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}}))$. Then, let us perform an induction
on the (finite) number of inductive rules used to get $\tjudg{\Gamma}{M}$
from something in $\coindrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}})$. The only interesting
case is $M=\cm{N}$, since in all the other cases we can proceed exactly
as in the case $m=0$. From the fact that $\tjudg{\Gamma}{M}\in\indrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\coindrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}}))$,
it follows that there is $\Delta$ such that $\tjudg{\Delta}{N}\in\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}}$, i.e. $N$
itself is a term. But then we can apply the inductive hypothesis and obtain that
$\psize{N}{m-1}$ exists. It is now clear that $\psize{M}{m}=\psize{N}{m-1}$
exists. \end{varitemize} As for uniqueness, it can be proved by observing that the equations from Figure~\ref{fig:psizetms} can be oriented so as to obtain a confluent rewrite system, for which the Church-Rosser property then holds. This concludes the proof. \end{proof}
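To make the definition concrete, the following sketch computes the size at depth on a hypothetical miniature of the calculus, with terms as nested tuples; the constructor names are illustrative, not the paper's syntax, and only coinductive boxes increase the depth, as in the equations above.

```python
# A hypothetical miniature of the calculus: terms as nested tuples
#   ('var', x) | ('app', M, N) | ('lam', x, M) | ('cbox', M)
# where 'cbox' stands for a coinductive box, the only constructor
# that increases the depth.

def size_at_depth(term, n):
    """Sketch of the size of `term` at depth n."""
    tag = term[0]
    if tag == 'cbox':
        # a coinductive box contributes nothing at its own depth;
        # its body lives one level further down
        return size_at_depth(term[1], n - 1) if n >= 1 else 0
    if n > 0:
        # symbols of the other constructors sit at the current depth,
        # so only their subterms can contribute at depth n > 0
        return sum(size_at_depth(t, n) for t in term[1:] if isinstance(t, tuple))
    if tag == 'var':
        return 1
    if tag == 'lam':
        return 1 + size_at_depth(term[2], 0)
    if tag == 'app':
        return 1 + size_at_depth(term[1], 0) + size_at_depth(term[2], 0)

M = ('app', ('lam', 'x', ('var', 'x')), ('cbox', ('var', 'y')))
print(size_at_depth(M, 0))  # 3: application, abstraction, bound variable
print(size_at_depth(M, 1))  # 1: the variable inside the coinductive box
```

Note how, in this toy setting, existence and uniqueness of the size are immediate because recursion only descends into strict subterms.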
Now, suppose that a term $M$ is such that $M\redLL{\strnat{n}}P$. The term $M$, then, must be in the form $\actx{\pctxone{\strnat{n}}}{N}$ where $N$ is a redex whose reduct is $L$, and $P$ is just $\actx{\pctxone{\strnat{n}}}{L}$. The question is: how does any $\psize{\actx{\pctxone{\strnat{n}}}{N}}{m}$ relate to the corresponding $\psize{\actx{\pctxone{\strnat{n}}}{L}}{m}$? Some interesting observations follow: \begin{varitemize} \item
If $m<n$ then
$\psize{\actx{\pctxone{\strnat{n}}}{L}}{m}$
equals
$\psize{\actx{\pctxone{\strnat{n}}}{N}}{m}$,
since reduction does not affect the size at lower levels; \item
If $m>n$ then of course
$\psize{\actx{\pctxone{\strnat{n}}}{L}}{m}$ can
be much bigger than
$\psize{\actx{\pctxone{\strnat{n}}}{N}}{m}$,
simply because symbol occurrences at depth $m$ can be
duplicated as an effect of substitution; \item
Finally, if $m=n$, then
$p=\psize{\actx{\pctxone{\strnat{n}}}{L}}{m}$
can again be bigger than
$r=\psize{\actx{\pctxone{\strnat{n}}}{N}}{m}$,
but in a very controlled way. More specifically,
\begin{varitemize}
\item
if $N$ is a \emph{linear} redex, then $p<r$
because the function body has exactly one free occurrence of the
bound variable;
\item
if $N$ is an \emph{inductive} redex, then $p$ can indeed
be bigger than $r$, but in that case the involved
inductive box has disappeared;
\item
if $N$ is a \emph{coinductive} redex, then $p<r$
because the involved coinductive box can actually be copied many
times, but all the various copies will be found at depths
strictly bigger than $n=m$.
\end{varitemize} \end{varitemize} The informal argument above can be formalised by way of an appropriate notion of weight, generalising the argument by Lafont~\cite{Lafont04TCS} to the broader setting we work in here.
Given $n,m\in\mathbb{N}$ and a term $M$, the \emph{$n$-weight} $\wei{n}{m}{M}$ of $M$ at depth $m$ is any natural number satisfying the rules from Figure~\ref{fig:pwtms}. \begin{figure}
\caption{Parametrised Weights of Preterms: Equations.}
\label{fig:pwtms}
\end{figure} \begin{lemma}
For every term $M$ and for every natural numbers
$n,m\in\mathbb{N}$, there is a unique natural number $p$
such that $\wei{n}{m}{M}=p$. \end{lemma} \begin{proof}
This can be proved in exactly the same way as for the
size in Lemma~\ref{lemma:psize}. \end{proof} Similarly, one can define the duplicability factor of $M$ at depth $m$, $\df{m}{M}$: take the rules in Figure~\ref{fig:dftms} and prove that they uniquely define a natural number for every term, in the same way as we have just done for the weight ($\nfo{x}{M}$ is the number of free occurrences of $x$ in the term $M$, itself a well-defined concept when $M$ is a term). \begin{figure}
\caption{Parametrised Duplicability Factor of Preterms: Equations.}
\label{fig:dftms}
\end{figure} Given a term $M$, the weight of $M$ at depth $n$ is simply $\twei{n}{M}=\wei{\df{n}{M}}{n}{M}$.
The calculus $\ell\Lambda_{\infty}^{\mathsf{4S}}$ is designed in such a way that the duplicability factor never increases: \begin{lemma}\label{lemma:dfdni} If $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ and $M\redLLwodN$, then $\df{m}{M}\geq\df{m}{N}$ for every $m$. \end{lemma} \begin{proof} A formal proof could be given. We prefer, however, to give a more intuitive one here. Observe that: \begin{varitemize} \item
If $\tjudg{\Gamma,x}{M}$ or $\tjudg{\Gamma,\im{x}}{M}$, then the variable $x$ occurs
free exactly once in $M$, in the first case outside the scope of any box, in the second case in the scope of
exactly one inductive box. \item
If $\tjudg{\Gamma,\dm{x}}{M}$, then $x$ occurs free more than once in $M$, all the occurrences
being outside the scope of any box. \item
The duplicability factor at level $n$ of $M$ is nothing more than the maximum, over all abstractions $\ia{x}{N}$
at level $n$ in $M$, of the number of free occurrences of $x$ in $N$. Observe that by the well-formation rules
in Figure~\ref{fig:lllwotwfr}, the variable $x$ must be marked as $\dm{x}$ or as $\im{x}$ for any such $N$.
If it is marked as $\im{x}$, however, it occurs once in $N$. \item
Now, consider the substitution lemmas~\ref{lemma:lllsllinear}, \ref{lemma:lllslinductiveI}, \ref{lemma:lllslinductiveII},
\ref{lemma:lllslcoinductive}, and \ref{lemma:lllslarbitrary}. In all the five cases, one realises that:
\begin{varenumerate}
\item
for every $n$, $\df{n}{\sbst{M}{x}{N}}\leq\max\{\df{n}{M},\df{n}{N}\}$,
because every abstraction occurring in $\sbst{M}{x}{N}$ also occurs in either $M$ or $N$, and
substitution is capture-avoiding.
\item
If the judgment $\tjudg{\Gamma}{\sbst{M}{x}{N}}$ obtained as a result of the substitution lemma
contains some $\dm{y}\in\Gamma$, then $\nfo{y}{\sbst{M}{x}{N}}$ cannot be too big: there must be
some $z$ such that $z$ is marked as $\dm{z}$ in one (or both) of the provable judgments existing by hypothesis,
and such that $\nfo{z}{M}+\nfo{z}{N}$ majorises $\nfo{y}{\sbst{M}{x}{N}}$. Why?
Simply because the only case in which $\nfo{y}{\sbst{M}{x}{N}}$ could potentially grow is the one in
which $x$ is marked as $\dm{x}$ in the judgment for $M$. In that case, however, the variables which are
free in $N$ are all linear (or marked as $\cm{z}$ or $\am{z}$).
\end{varenumerate} \end{varitemize} This concludes the proof. \end{proof} Moreover, and this is the crucial point, $\twei{n}{M}$ is guaranteed to strictly decrease whenever $M\redLL{\strnat{n}}N$: \begin{lemma}\label{lemma:weightdecr} Suppose that $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ and that $M\redLL{\strnat{n}}N$. Then $\twei{n}{M}>\twei{n}{N}$. Moreover, $\twei{m}{M}=\twei{m}{N}$ whenever $m<n$. \end{lemma} \begin{proof}
We first of all need to prove the following variations on the substitution lemmas:
\begin{varenumerate}
\item\label{point:sublemmaI}
If $\tjudg{\Upsilon,x,\dm{\Theta},\cm{\Xi},\am{\Psi}}{M}$ and
$\tjudg{\Pi,\dm{\Theta},\cm{\Xi},\am{\Psi}}{N}$, then
for every $n\geq\max\{\df{0}{M},\df{0}{N}\}$ it holds
that $\wei{n}{0}{\sbst{M}{x}{N}}\leq
\wei{n}{0}{M}+\wei{n}{0}{N}$.
\item\label{point:sublemmaII}
If $\tjudg{\Upsilon,\dm{\Theta},\dm{x},\cm{\Xi},\am{\Psi}}{M}$ and
$\tjudg{\Phi,\cm{\Xi},\am{\Psi}}{N}$, then
for every $n\geq\max\{\df{0}{M},\df{0}{N}\}$ it holds
that $\wei{n}{0}{\sbst{M}{x}{N}}\leq
\wei{n}{0}{M}+\nfo{x}{M}\cdot\wei{n}{0}{N}$.
\item\label{point:sublemmaIII}
If $\tjudg{\Upsilon,\dm{\Theta},\im{x},\cm{\Xi},\am{\Psi}}{M}$ and
$\tjudg{\Phi,\cm{\Xi},\am{\Psi}}{N}$, then
for every $n\geq\max\{\df{0}{M},\df{0}{N}\}$ it holds
that $\wei{n}{0}{\sbst{M}{x}{N}}\leq
\wei{n}{0}{M}+n\cdot\wei{n}{0}{N}$.
\item\label{point:sublemmaIV}
If $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\am{\Psi},\am{x}}{M}$ and
$\tjudg{\am{\Psi}}{N}$, then
for every $n\geq\max\{\df{0}{M},\df{0}{N}\}$ it holds
that $\wei{n}{0}{\sbst{M}{x}{N}}\leq
\wei{n}{0}{M}$.
\item\label{point:sublemmaV}
If $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\cm{x},\am{\Psi}}{M}$ and
$\tjudg{\am{\Xi},\am{\Psi}}{N}$, then
for every $n\geq\max\{\df{0}{M},\df{0}{N}\}$ it holds
that $\wei{n}{0}{\sbst{M}{x}{N}}\leq
\wei{n}{0}{M}$.
\end{varenumerate}
All the statements above can be proved, as usual, by induction on the (finite) number of
inductive well-formation rules which are necessary to prove the judgment about $M$ from
something in $\coindrul{\ell\Lambda_{\infty}^{\mathsf{4S}}}(\cd{\ell\Lambda_{\infty}^{\mathsf{4S}}})$. As an example, let us consider some
inductive cases on Point~\ref{point:sublemmaII}., which is one of the most interesting:
\begin{varitemize}
\item
If $M$ is proved well-formed by
$$
\infer[(\mathsf{vd})]
{\tjudg{\dm{\Theta},\cm{\Xi},\am{\Psi},\dm{x}}{x}}
{}
$$
then $\sbst{M}{x}{N}=N$ and
\begin{align*}
\wei{n}{0}{\sbst{M}{x}{N}}&=\wei{n}{0}{N}\\
&=1\cdot\wei{n}{0}{N}=\nfo{x}{M}\cdot\wei{n}{0}{N}\\
&\leq\wei{n}{0}{M}+\nfo{x}{M}\cdot\wei{n}{0}{N}.
\end{align*}
\item
If $M$ is proved well-formed by
$$
\infer[(\mathsf{a})]
{\tjudg{\Upsilon,\Pi,\dm{\Theta},\dm{x},\cm{\Xi},\am{\Psi}}{\ap{L}{P}}}
{
\tjudg{\Upsilon,\dm{\Theta},\dm{x},\cm{\Xi},\am{\Psi}}{L}
&
\tjudg{\Pi,\dm{\Theta},\dm{x},\cm{\Xi},\am{\Psi}}{P}
}
$$
then $\sbst{M}{x}{N}=\ap{(\sbst{L}{x}{N})}{(\sbst{P}{x}{N})}$
and, by inductive hypothesis, we have
\begin{align*}
\wei{n}{0}{\sbst{L}{x}{N}}&\leq\wei{n}{0}{L}+\nfo{x}{L}\cdot\wei{n}{0}{N};\\
\wei{n}{0}{\sbst{P}{x}{N}}&\leq\wei{n}{0}{P}+\nfo{x}{P}\cdot\wei{n}{0}{N}.
\end{align*}
But then:
\begin{align*}
\wei{n}{0}{\sbst{M}{x}{N}}&=\wei{n}{0}{\sbst{L}{x}{N}}+\wei{n}{0}{\sbst{P}{x}{N}}\\
&\leq(\wei{n}{0}{L}+\nfo{x}{L}\cdot\wei{n}{0}{N})\\
&\quad+(\wei{n}{0}{P}+\nfo{x}{P}\cdot\wei{n}{0}{N})\\
&=(\wei{n}{0}{L}+\wei{n}{0}{P})\\
&\quad+(\nfo{x}{L}+\nfo{x}{P})\cdot\wei{n}{0}{N}\\
&=\wei{n}{0}{M}+\nfo{x}{M}\cdot\wei{n}{0}{N}.
\end{align*}
\item
If $M$ is proved well-formed by
$$
\infer[(\mathsf{mi})]
{\tjudg{\dm{\Theta},\dm{x},\im{\Xi},\cm{\Psi},\am{\Phi}}{\im{M}}}
{\tjudg{\Xi,\cm{\Psi},\am{\Phi}}{M}}
$$
then $x$ does not occur free in $M$ and, as a consequence $\sbst{M}{x}{N}=M$. The thesis easily follows.
\end{varitemize} With the five lemmas above in our hands, it is possible to prove that if $M\redLLbasN$, then $\twei{0}{M}>\twei{0}{N}$. Let's proceed by cases depending on how $M\redLLbasN$ is derived: \begin{varitemize} \item
If $M=\ap{(\la{x}{L})}{P}$, then
$\tjudg{\Upsilon,x,\dm{\Theta},\cm{\Xi},\am{\Psi}}{L}$
and $\tjudg{\Pi,\dm{\Theta},\cm{\Xi},\am{\Psi}}{P}$. We can apply Point \ref{point:sublemmaI}.
(and Lemma~\ref{lemma:dfdni}) obtaining
\begin{align*}
\twei{0}{M}&=\wei{\df{0}{M}}{0}{M}=\wei{\df{0}{M}}{0}{L}+\wei{\df{0}{M}}{0}{P}+1\\
&>\wei{\df{0}{M}}{0}{L}+\wei{\df{0}{M}}{0}{P}\geq\wei{\df{0}{M}}{0}{\sbst{L}{x}{P}}\\
&=\wei{\df{0}{M}}{0}{N}\geq\wei{\df{0}{N}}{0}{N}=\twei{0}{N}.
\end{align*} \item
If $M=\ap{(\ia{x}{L})}{\im{P}}$, then we can distinguish two sub-cases:
\begin{varitemize}
\item
If $\tjudg{\Upsilon,\dm{x},\dm{\Theta},\cm{\Xi},\am{\Psi}}{L}$
and $\tjudg{\Phi,\cm{\Xi},\am{\Psi}}{P}$, then we can apply Point \ref{point:sublemmaII}.
(and Lemma~\ref{lemma:dfdni}) obtaining
\begin{align*}
\twei{0}{M}&=\wei{\df{0}{M}}{0}{M}\\
&=\wei{\df{0}{M}}{0}{L}+\df{0}{M}\cdot\wei{\df{0}{M}}{0}{P}+1\\
&>\wei{\df{0}{M}}{0}{L}+\df{0}{M}\cdot\wei{\df{0}{M}}{0}{P}\\
&\geq\wei{\df{0}{M}}{0}{L}+\nfo{x}{L}\cdot\wei{\df{0}{M}}{0}{P}\\
&\geq\wei{\df{0}{M}}{0}{\sbst{L}{x}{P}}\\
&=\wei{\df{0}{M}}{0}{N}\geq\wei{\df{0}{N}}{0}{N}\\
&=\twei{0}{N}.
\end{align*}
\item
If $\tjudg{\Upsilon,\im{x},\dm{\Theta},\cm{\Xi},\am{\Psi}}{L}$
and $\tjudg{\Phi,\cm{\Xi},\am{\Psi}}{P}$, then we can apply Point \ref{point:sublemmaIII}.
(and Lemma~\ref{lemma:dfdni}) obtaining
\begin{align*}
\twei{0}{M}&=\wei{\df{0}{M}}{0}{M}\\
&=\wei{\df{0}{M}}{0}{L}+\df{0}{M}\cdot\wei{\df{0}{M}}{0}{P}+1\\
&>\wei{\df{0}{M}}{0}{L}+\df{0}{M}\cdot\wei{\df{0}{M}}{0}{P}\\
&\geq\wei{\df{0}{M}}{0}{\sbst{L}{x}{P}}\\
&=\wei{\df{0}{M}}{0}{N}\geq\wei{\df{0}{N}}{0}{N}=\twei{0}{N}.
\end{align*}
\end{varitemize} \item
If $M=\ap{(\ca{x}{L})}{\cm{P}}$, then
$\tjudg{\Upsilon,\dm{\Theta},\cm{x},\cm{\Xi},\am{\Psi}}{L}$ and $\tjudg{\cm{\Xi},\am{\Psi}}{P}$.
We can apply Point \ref{point:sublemmaV}. (and Lemma~\ref{lemma:dfdni}) obtaining
\begin{align*}
\twei{0}{M}&=\wei{\df{0}{M}}{0}{M}=\wei{\df{0}{M}}{0}{L}+1\\
&>\wei{\df{0}{M}}{0}{L}\geq\wei{\df{0}{M}}{0}{\sbst{L}{x}{P}}\\
&=\wei{\df{0}{M}}{0}{N}\geq\wei{\df{0}{N}}{0}{N}=\twei{0}{N}.
\end{align*} \end{varitemize} Now, remember that $M\redLL{\strnat{n}}N$ iff there are a $n$-context $\pctxone{\strnat{n}}$ and two terms $L$ and $P$ such that $L\redLLbasP$, $M=\actx{\pctxone{\strnat{n}}}{L}$, and $N=\actx{\pctxone{\strnat{n}}}{P}$. The thesis can be proved by induction on $\pctxone{\strnat{n}}$. \end{proof} The Fundamental Lemma easily follows: \begin{proposition}[Fundamental Lemma] For every natural number $n\in\mathbb{N}$, the relation $\redLL{\strnat{n}}$ is strongly normalising. \end{proposition}
\subsection{Normalisation and Confluence}
The Fundamental Lemma has a number of interesting consequences, which make the dynamics of $\ell\Lambda_{\infty}^{\mathsf{4S}}$ definitely better-behaved than that of $\ell\Lambda_{\infty}$. The first such result we give is a Weak-Normalisation Theorem: \begin{theorem}[Normalisation]
For every term $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ there is a normal form $N\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$
such that $M\redLLinfN$. \end{theorem} \begin{proof} This is an immediate consequence of the Fundamental Lemma: for every term $M$, first reduce (by $\rightarrow$) all redexes at depth $0$, obtaining $N$, and then normalise all the subterms of $N$ at depth $1$ (by $\leadsto$). Conclude by observing that reduction at higher depths does not influence lower depths. \end{proof} The way the two modalities interact in $\ell\Lambda_{\infty}^{\mathsf{4S}}$ has effects which go beyond normalisation. More specifically, the two relations $\rightarrow$ and $\leadsto$ do not interfere like in $\ell\Lambda_{\infty}$, and as a consequence, we get a Confluence Theorem: \begin{theorem}[Strong Confluence]\label{theo:conf} If $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$, $M\redLLinfN$ and $M\redLLinfL$, then there is $P\in\ell\Lambda_{\infty}^{\mathsf{4S}}$ such that $N\redLLinfP$ and $L\redLLinfP$. \end{theorem} The path to confluence requires some auxiliary lemmas: \begin{lemma}\label{lemma:commam} If $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\am{x},\am{\Psi}}{M}$ and $\tjudg{\am{\Psi}}{N}$, $M\redLLinfL$, and $N\redLLinfP$, then $\sbst{M}{x}{N}\Rightarrow\sbst{L}{x}{P}$. \end{lemma} \begin{proof} This is a coinduction on $M$. \end{proof} \begin{lemma}\label{lemma:commcm} If $\tjudg{\Upsilon,\dm{\Theta},\cm{\Xi},\cm{x},\am{\Psi}}{M}$ and $\tjudg{\am{\Xi},\am{\Psi}}{N}$, $M\redLLnextL$, and $N\redLLinfP$, then $\sbst{M}{x}{N}\leadsto\sbst{L}{x}{P}$. \end{lemma} \begin{proof} This is again a coinduction on the structure of $M$, exploiting Lemma~\ref{lemma:commam} \end{proof} \begin{lemma}[Noninterference]\label{lemma:noninterference} If $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$, $M\redLL{\strnat{0}}N$ and $M\redLLnextL$, then there is $P\in\ell\Lambda_{\infty}^{\mathsf{4S}}$ such that $N\redLLnextP$ and $L\redLL{\strnat{0}}P$. \end{lemma} \begin{proof} By coinduction on the structure of $M$. Some interesting cases: \begin{varitemize} \item
If $M=QR$ and $Q\redLL{\strnat{0}}S$,
then $N=\termsevenR$. By definition,
$Q\redLLnextX$ and $R\redLLnextY$,
where $L=XY$. By the coinduction hypothesis,
there is $Z$ such that $S\redLLnextZ$
and $X\redLL{\strnat{0}}Z$. The term we are looking for,
then, is just $P=\termtenY$. Indeed,
$N=\termsevenR\leadsto\termtenY$ and,
on the other hand, $L=XY\redLL{\strnat{0}}\termtenY$. \item
If $M=\ap{(\ca{x}{Q})}{\cm{R}}$
and $N=\sbst{Q}{x}{R}$, then
$L$ is in the form $\ap{(\ca{x}{X})}{\cm{Y}}$ where
$Q\redLLnextX$ and $R\redLLinfY$, and
then we can apply Lemma~\ref{lemma:commcm} obtaining that
$N\redLLnextP=\sbst{X}{x}{Y}$.
On the other hand, $L\redLL{\strnat{0}}P$. \end{varitemize} \end{proof} But there is even more: $\redLL{\strnat{0}}$ and $\leadsto$ commute. \begin{lemma}[Postponement]\label{lemma:commut} If $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$, $M\redLLnextN\redLL{\strnat{0}}L$, then there is $P\in\ell\Lambda_{\infty}^{\mathsf{4S}}$ such that $M\redLL{\strnat{0}}P\redLLnextL$. \end{lemma} \begin{proof} Again a coinduction on the structure of $M$. Some interesting cases: \begin{varitemize} \item
We can exclude the case in which $M=\cm{Q}$, because
in that case $N$ would also be a coinductive box, and
coinductive boxes are $\redLL{\strnat{0}}$-normal forms. \item
If $M=\ap{(\ca{x}{Q})}{\cm{R}}$,
$N=\ap{(\ca{x}{S})}{\cm{X}}$
(where $Q\redLLnextS$ and $R\redLLinfX$)
and $L=\sbst{S}{x}{X}$,
then Lemma~\ref{lemma:commcm} ensures that
$P=\sbst{Q}{x}{R}$ is such
that $P\redLLnextL$. \end{varitemize} \end{proof} One-step reduction is not in general confluent in infinitary $\lambda$-calculi. However, $\redLL{\strnat{0}}$ indeed is: \begin{proposition}\label{prop:confluenceone} If $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$, $M\redLL{\strnat{0}}^*N$, and $M\redLL{\strnat{0}}^*L$, then there is $P$ such that $N\redLL{\strnat{0}}^*P$ and $L\redLL{\strnat{0}}^*P$. \end{proposition} \begin{proof} This can be proved with standard techniques, keeping in mind that in an inductive abstraction $\ia{x}{M}$, the variable $x$ occurs finitely many times in $M$. \end{proof} The last two lemmas are of a technical nature, but can be proved by relatively simple arguments: \begin{lemma}\label{lemma:reflextrans} Both $\leadsto$ and $\Rightarrow$ are reflexive. \end{lemma} \begin{proof} Easy. \end{proof} \begin{lemma}\label{lemma:onenext} If $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ and $M\redLL{\strnat{n}}N$ (where $n\geq 1$), then $M\redLLnextN$. \end{lemma} \begin{proof} Easy, given Lemma~\ref{lemma:reflextrans}. \end{proof} We are finally able to prove the Confluence Theorem: \begin{proof}[Proof of Theorem~\ref{theo:conf}] \newcommand{\pairone}{\alpha} \newcommand{\pairtwo}{\beta} We will show how to associate a term $P=f(\pairone)$ to any pair in the form $\pairone=(M\redLLinfN,M\redLLinfL)$ or in the form $\pairone=(M\redLLnextN,M\redLLnextL)$. The function $f$ is defined by coinduction on the structure of the two proofs in $\pairone$. This will be done in such a way that in the first case $N\redLLinff(\pairone)$ and $L\redLLinff(\pairone)$, while in the second case $N\redLLnextf(\pairone)$ and $L\redLLnextf(\pairone)$. If $\pairone$ is $(M\redLLinfN,M\redLLinfL)$, then by definition, $M\rightarrow^*Q\redLLnextN$ and $M\rightarrow^*R\redLLnextL$.
Exploiting Lemma~\ref{lemma:onenext}, Lemma~\ref{lemma:reflextrans}, and Lemma~\ref{lemma:commut}, one obtains that there exist $S$ and $X$ such that $M\redLL{\strnat{0}}^*S\redLLnextN$ and $M\redLL{\strnat{0}}^*X\redLLnextL$. By Proposition~\ref{prop:confluenceone}, one obtains that there is $Y$ with $S\redLL{\strnat{0}}^*Y$ and $X\redLL{\strnat{0}}^*Y$. By repeated application of Lemma~\ref{lemma:noninterference} and Lemma~\ref{lemma:reflextrans}, one can conclude there are $Z$ and $W$ such that $N\redLL{\strnat{0}}^*Z$, $Y\redLLnextZ$, $Y\redLLnextW$ and $L\redLL{\strnat{0}}^*W$. Now, let $f(\pairone)$ be just $f(Y\redLLnextZ,Y\redLLnextW)$. If, on the other hand, $\pairone$ is $(M\redLLnextN,M\redLLnextL)$, we can define $f$ by induction on the proof of the two statements where, however, we are only interested in the last chunk of inductive rule instances. This is done in a natural way. As an example, if $M$ is an application $\ap{P}{Q}$, then clearly $N$ is $\ap{R}{S}$ and $L$ is $\ap{X}{Y}$, where $P\redLLnextR$, $P\redLLnextX$, $Q\redLLnextS$, and $Q\redLLnextY$; moreover, $f(\pairone)$ is the term $\ap{f(P\redLLnextR,P\redLLnextX)}
{f(Q\redLLnextS,Q\redLLnextY)}$. Notice how the function $f$ is well defined, being a guarded recursive function on sets defined as greatest fixed points. \end{proof} Confluence and Weak Normalisation together imply that normal forms are unique: \begin{corollary}[Uniqueness of Normal Forms]
Every term $M\in\tms{\ell\Lambda_{\infty}^{\mathsf{4S}}}$ has a unique normal form. \end{corollary} Strangely enough, even if every term $M$ has a normal form $N$, it is not guaranteed to reduce to it in every reduction order, simply because one can certainly choose to ``skip'' certain depths while normalising. In this sense, level-by-level sequences are normalising: they are not allowed to go to depth $n>m$ if there is a redex at depth $m$.
\subsection{Expressive Power}\label{sect:ll4sexppow}
At this point, one may wonder whether $\ell\Lambda_{\infty}^{\mathsf{4S}}$ is well-behaved simply because its expressive power is too low. Although at present we are not able to characterise the class of functions which can be represented in it, we can already make some interesting observations about its expressive power.
First of all, let us observe that the inductive fragment of $\ell\Lambda_{\infty}^{\mathsf{4S}}$ (i.e. the subsystem obtained by dropping coinductive boxes) is complete for polynomial time computable functions on finite strings, although inputs need to be represented as Church numerals for the result to hold: this is a consequence of polytime completeness for \textsf{SLL}~\cite{Lafont04TCS}.
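As a reminder of the input representation mentioned above, Church numerals are pure iterators. The following Python sketch is an untyped illustration of this representation, not the soft-linear-logic encoding itself.

```python
# Church numerals as iterators: church(n) is the function  f, x -> f^n(x).
# An untyped illustration of how a numeral drives an n-fold iteration.

def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(c):
    # recover the ordinary integer by iterating the successor on 0
    return c(lambda k: k + 1)(0)

print(unchurch(church(3)))            # 3
print(church(3)(lambda k: 2 * k)(1))  # 8, i.e. 2^3 by three-fold doubling
```

In soft linear logic, it is precisely this iterator discipline (iteration without unrestricted duplication) that keeps computation within polynomial time.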
About the ``coinductive'' expressive power of $\ell\Lambda_{\infty}^{\mathsf{4S}}$, we note that a form of guarded recursion can indeed be expressed, thanks to the very liberal exponential discipline of $\textsf{4LL}$. Consider the term $M=\ca{x}{\la{y}{\ca{z}{y\cm{(xxz\cm{z})}}}}$ and define $X$ to be $\ap{M}{\cm{M}}$. One can easily verify that for any (closed) term $N$, the term $XN\cm{N}$ reduces in three steps to $N\cm{(XN\cm{N})}$. In other words, $X$ is indeed a fixed point combinator, which however requires the argument functional to be applied to it twice.
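Disregarding the box discipline, the combinator above behaves like Turing's fixed point combinator. The following Python analogue is a loose, untyped sketch in which coinductive boxes become thunks (zero-argument lambdas) and, unlike in the calculus, the functional is supplied once rather than twice.

```python
# Turing-style fixed point combinator with guarded (thunked) recursion:
# turing(f) unfolds to f(<thunk of turing(f)>), so each recursive call
# is hidden behind a thunk, mimicking a coinductive box.

M = lambda x: lambda y: y(lambda: x(x)(y))
turing = M(M)

# Example functional: the stream of natural numbers as nested pairs.
def nats_from(rec):
    return lambda n: (n, lambda: rec()(n + 1))

head, tail = turing(nats_from)(0)
print(head)       # 0
print(tail()[0])  # 1
```

The thunk plays the role of the guard: forcing it performs exactly one unfolding, just as opening a coinductive box does in the calculus.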
The two observations above, taken together, mean that $\ell\Lambda_{\infty}^{\mathsf{4S}}$ is, at least, capable of expressing all functions from $(\mathbb{B}^*)^\infty$ to $(\mathbb{B}^*)^\infty$ such that for each $n$, the string at position $n$ in the output stream can be computed in polynomial time from the string at position $n$ in the input stream. Whether one could go (substantially) beyond this is an interesting problem that we leave for further work. One cannot, however, go too far, since the \textsf{4LL}\ exponential discipline imposes that all typable stream functions are causal, i.e., for each $n$, the value of the output at position $n$ only depends on the input positions \emph{up to} $n$, at least if one encodes streams as in Section~\ref{sect:llwotspl}.
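For illustration, a causal stream transformer in the above sense can be sketched as a generator whose $n$-th output depends only on inputs $0$ to $n$. The running maximum below is a hypothetical example of such a function, not an encoding in $\ell\Lambda_{\infty}^{\mathsf{4S}}$.

```python
# A causal stream transformer: the n-th output depends only on inputs 0..n,
# and each output item is computed in time polynomial in its inputs.

def running_max(stream):
    best = None
    for x in stream:
        best = x if best is None or x > best else best
        yield best  # emitted before any later input is inspected

print(list(running_max(iter([3, 1, 4, 1, 5, 9, 2]))))  # [3, 3, 4, 4, 5, 9, 9]
```

A non-causal function, such as one emitting at position $n$ the maximum of the whole stream, could not be written in this incremental style.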
\section{Further Developments}
We see this work only as a first step towards understanding how linear logic can be useful in taming the complexity of infinitary rewriting in the context of the $\lambda$-calculus. There are at least three different promising research directions that the results in this paper implicitly suggest. All of them are left for future work, and are outside the scope of this paper. \paragraph*{Semantics} It would be interesting to generalise those semantic frameworks which work well for ordinary linear logic and $\lambda$-calculi to $\ell\Lambda_{\infty}$. One example is the so-called relational model of linear logic, in which formulas are interpreted as sets and morphisms are interpreted as binary relations. Notably, the exponential modality is interpreted by forming power multisets. Since the only kind of infinite regression we have in $\ell\Lambda_{\infty}$ is the one induced by coinductive boxes, it seems that the relational model should be adaptable to the calculus described here. Similarly, game semantics~\cite{AJM00IC} and the geometry of interaction~\cite{Girard88} seem to be well-suited to model infinitary rewriting.
\paragraph*{Types} The calculus $\ell\Lambda_{\infty}$ is untyped. Introducing types into it would first of all be a way to ensure the absence of deadlocks (consider, as an example, the term $\ap{(\ia{x}{x})}{(\cm{M})}$). The natural candidate for a framework in which to develop a theory of types for $\ell\Lambda_{\infty}$ is that of recursive types, given their inherent relation with infinite computations. Another challenge could be adapting linear dependent types~\cite{DalLago11LICS} to an infinitary setting. \paragraph*{Implicit Complexity} One of the most interesting applications of the linearisation of $\Lambda_{\infty}$ as described here could come from implicit complexity, whose aim is characterising complexity classes by logical systems and programming languages without any reference to machine models or to combinatorial concepts (e.g. polynomials). We think, in particular, that subsystems of $\ell\Lambda_{\infty}$ would be ideal candidates for characterising, e.g. type-2 polynomial time operators. This, however, would require a finer exponential discipline, e.g. an inductive-coinductive generalisation of the bounded exponential modality~\cite{GSS92}.
\section{Related Work}
Although this is arguably the first paper explicitly combining ideas coming from infinitary rewriting with resource-consciousness in the sense of linear logic, some works which are closely related to ours, though with different goals, have recently appeared.
First of all, one should mention Terui's work on computational ludics~\cite{Terui11TCS}: there, designs (i.e. the ludics' counterpart to proofs) are meant to both capture syntax (proofs) and semantics (functions), and are thus infinitary in nature. However, the overall objective in~\cite{Terui11TCS} is different from ours: while we want to stay as close as possible to the $\lambda$-calculus so as to inspire the design of techniques guaranteeing termination of programs dealing with infinite data structures, Terui's aim is to better understand usual, finitary, computational complexity. Technically, the main difference is that we focus on the exponentials and let them be the core of our approach, while computational ludics is strongly based on focalisation: time passes whenever polarity changes.
Another closely related work is a recent one by Mazza~\cite{Mazza12LICS}, that shows how the ordinary, finitary, $\lambda$-calculus can be seen as the metric completion of a much weaker system, namely the affine $\lambda$-calculus. Again, the main commonalities with this paper are on the one hand the presence of infinite terms, and on the other a common technical background, namely that of linear logic. Again, the emphasis is different: we, following~\cite{Kennaway97TCS}, somehow aim at going \emph{beyond} finitary $\lambda$-calculus, while Mazza's focus is on the subrecursive, finite world: he is not even concerned with reduction of infinite length.
If one forgets about infinitary rewriting, linear logic has already been shown to be a formidable tool to support the process of isolating classes of $\lambda$-terms having good, quantitative normalisation properties. One can, for example, cite the work by Baillot and Terui~\cite{Baillot09IC} or the one by Gaboardi and Ronchi Della Rocca~\cite{Gaboardi07CSL}. This paper can be seen as a natural step towards transferring these techniques to the realm of infinitary rewriting.
Finally, among the many works on type-theoretical approaches to termination and productivity, the closest to ours is certainly the recent contribution by Cave et al.~\cite{Cave14}: our treatment of the coinductive modality is very reminiscent of their way of handling LTL operators.
\section*{Acknowledgment}
The author would like to thank Patrick Baillot, Marco Gaboardi and Olivier Laurent for useful discussions about the topics of this paper. The author is partially supported by the ANR project 12IS02001 PACE and the ANR project 14CE250005 ELICA.
\end{document}
Wallpaper group
A wallpaper is a mathematical object covering the whole Euclidean plane by repeating a motif indefinitely, in such a manner that certain isometries leave the drawing unchanged. To each wallpaper there corresponds a group of congruent transformations (isometries), with function composition as the group operation. Thus, a wallpaper group (or plane symmetry group or plane crystallographic group) is a mathematical classification of a two‑dimensional repetitive pattern, based on the symmetries in the pattern. Such patterns occur frequently in architecture and decorative art, especially in textiles, tessellations, tiles and physical wallpaper.
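As a concrete illustration of "a group of transformations with composition as the operation", a plane isometry can be represented as an affine map $x \mapsto Ax + t$ stored as a pair $(A, t)$. The code below is illustrative and not part of the article; it checks the classical fact that composing two half-turns yields a translation.

```python
# An isometry of the plane as an affine map x -> A x + t, stored as (A, t).
# Composition of two such maps is the group operation of a wallpaper group.

def compose(f, g):
    """Return the pair (A, t) of the isometry x -> f(g(x))."""
    (A, t), (B, u) = f, g
    AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    Au = [sum(A[i][k] * u[k] for k in range(2)) for i in range(2)]
    return (AB, [Au[i] + t[i] for i in range(2)])

R = [[-1, 0], [0, -1]]        # rotation by 180 degrees
half_turn_0 = (R, [0, 0])     # half-turn about the origin
half_turn_p = (R, [2, 4])     # half-turn about (1, 2): x -> -x + 2*(1, 2)

comp = compose(half_turn_p, half_turn_0)
print(comp)  # ([[1, 0], [0, 1]], [2, 4]): a pure translation by (2, 4)
```

This is why point symmetries of a wallpaper always generate translations of the pattern as well.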
What this page calls a pattern
Image 1. Examples of repetitive surfaces on a Pythagorean tiling.
Image 2. The minimal area of any possible repetitive surface is either $a$, by disregarding the colors, or otherwise $4\,a$.
Such Pythagorean tilings can be seen as wallpapers because they are periodic.
Any periodic tiling can be seen as a wallpaper. In particular, we can consider a tiling by identical tiles placed edge to edge, which is necessarily periodic, and conceive a wallpaper from it by decorating every tile in the same manner, possibly erasing part or all of the boundaries between the tiles. Conversely, from every wallpaper we can construct such a tiling by identical edge‑to‑edge tiles, each bearing identical ornaments, even though the identical outlines of these tiles are not necessarily visible on the original wallpaper. Such repeated boundaries delineate a repetitive surface, added here in dashed lines.
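The idea that a wallpaper repeats under two independent translations can be sketched as a plane-colouring function that depends only on the position modulo the translation lattice. The motif and the translation vectors below are hypothetical; the code only illustrates the invariance.

```python
# A wallpaper as a plane-colouring function invariant under two
# independent translations t1, t2 (the lattice of a periodic tiling).
import math

t1 = (4.0, 0.0)
t2 = (1.0, 3.0)   # the two translations need not be orthogonal

def motif(x, y):
    # express (x, y) in the (t1, t2) basis, then keep only the
    # fractional parts: this reduces the point to one fundamental cell
    det = t1[0] * t2[1] - t1[1] * t2[0]
    a = (x * t2[1] - y * t2[0]) / det
    b = (y * t1[0] - x * t1[1]) / det
    a, b = a - math.floor(a), b - math.floor(b)
    return 'dark' if a + b < 1 else 'light'

# invariance under the two lattice translations:
print(motif(0.3, 0.2) == motif(0.3 + t1[0], 0.2 + t1[1]))  # True
print(motif(0.3, 0.2) == motif(0.3 + t2[0], 0.2 + t2[1]))  # True
```

Any integer combination of $t_1$ and $t_2$ leaves the colouring unchanged, which is exactly the translational part of a wallpaper group.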
Infinitely many such pseudo‑tilings are connected to a given wallpaper. For example, image 1 shows two models of repetitive squares in two different positions, each of area $a$. Another repetitive square has an area of five times $a$, and we could conceive such repetitive squares larger and larger, indefinitely. An infinity of shapes of repetitive zones are possible for this Pythagorean tiling, in an infinity of positions on the wallpaper. For example, the red repetitive parallelogram in the bottom right‑hand corner of image 1 could be slid into one position or another. Common to the first two images: a repetitive square concentric with each small square tile, their common center being a point of symmetry of the wallpaper.
Between identical tiles laid edge‑to‑edge, an edge is not necessarily a segment of a straight line. In the top left‑hand corner of image 3, point C is a vertex of a repetitive pseudo‑rhombus with thick stripes over its whole surface, called a pseudo‑rhombus because a concentric repetitive rhombus of the same area $a$ can be constructed from it, by taking out a bit of surface somewhere and appending it elsewhere, keeping the area unchanged. By the same process on image 4, a repetitive regular hexagon filled with vertical stripes is constructed from a rhombic repetitive zone of area $a$. Conversely, from elementary geometric tiles laid edge‑to‑edge, an artist like M. C. Escher created attractive surfaces repeated many times. On image 2, $a$ represents the minimal area of a repetitive surface when colors are disregarded, each repetitive zone in dashed lines consisting of five pieces in a certain arrangement, forming either a square or a hexagon, as in a proof of the Pythagorean theorem.
In the present article, a pattern is a repetitive parallelogram of minimal area in a determined position on the wallpaper. Image 1 shows two parallelogram‑shaped patterns (a square is a particular parallelogram). Image 3 shows rhombic patterns (a rhombus is a particular parallelogram).
On this page, all repetitive patterns (of minimal area) are constructed from two translations that generate the group of all translations under which the wallpaper is invariant. Writing ∘ for function composition, a pair like $\{T,U\}$ or $\{\,U,\;T\circ U^{\,-1}\}$ generates the group of all translations that transform the Pythagorean tiling into itself.
Image 3. In one or the other orientation, every rhombus in dark dashed lines is an instance of the same pattern, because the rotation of center S through a −120° angle leaves the wallpaper unchanged.
Image 4. The same wallpaper as previously, its colors being disregarded. If the colors are considered, there is no longer a center of rotation that leaves the wallpaper unchanged, whether at point S, C or H. The image of a pattern under an isometry that keeps the wallpaper unchanged is considered to be the same pattern.
Possible groups linked to a pattern
A wallpaper remains unchanged as a whole under certain isometries, starting with certain translations that confer on the wallpaper its repetitive nature. A precondition for being unchanged under translations is that it covers the whole plane: no mathematical object in our minds is stuck onto a motionless wall. On the contrary, an observer or his eye is motionless in front of a transformation, which glides or rotates or flips the wallpaper; a transformation could possibly distort it, but that would be outside our subject.
If an isometry leaves a given wallpaper unchanged, then the inverse isometry also keeps it unchanged, like translation $T$ or $T^{\,-1}$ on image 1, 3 or 4, or a ±120° rotation around a point like S on image 3 or 4. If two isometries each have this property of leaving a wallpaper unchanged, then their composition, in one order or the other, has the same property. To be exhaustive about the concepts of group and subgroup under function composition, denoted by ∘, here is a traditional truism of mathematics: everything remains itself under the identity transformation. This identity function can be called the translation by the zero vector, or the rotation through 360°.
A glide can be represented by one or several arrows, provided they are parallel and of the same length and sense; in the same way a wallpaper can be represented either by a few patterns or by only one pattern, considered as a pseudo‑tile imagined repeated edge‑to‑edge in an infinite number of replicas. Image 3 shows two patterns with two different contents, and the one in dark dashed lines, or one of its images under rotation $R$ or $R^{\,-1}$, represents the same wallpaper on the following image 4, colors being disregarded. Certainly a color is perceived subjectively whereas a wallpaper is an ideal object; however, any color can be seen as a label that characterizes certain surfaces: we might think of a hexadecimal color code as a label specific to certain zones. It may be added that a well‑known theorem deals with colors.
Groups are registered in the catalog by examining the properties of a parallelogram laid edge‑to‑edge with its replicas. For example, its diagonals intersect at their common midpoint, the center and point of symmetry of any parallelogram, though not necessarily a point of symmetry of its content. As another example, the midpoint of a full side shared by two patterns is the center of a new repetitive parallelogram formed by the two together, a center which is not necessarily a point of symmetry of the content of this double parallelogram. As another possible point of symmetry, two patterns symmetric to one another with respect to their common vertex form together a new repetitive surface, the center of which is not necessarily a point of symmetry of its content.
Certain rotational symmetries are possible only for certain shapes of pattern. For example, on image 2 a Pythagorean tiling is sometimes called a pinwheel tiling because of its rotational symmetry of 90 degrees about the center of any tile, small or large, or of any of its replicas. Also, when two equilateral triangles form edge‑to‑edge a rhombic pattern, like on image 4, a rotation of 120 degrees about a vertex of a 120° angle formed by two sides of the pattern is not always a symmetry of the content of the regular hexagon formed by the three patterns sharing that vertex, because the hexagon does not always contain the same motif three times.
First examples of groups
The simplest wallpaper group, Group p1, applies when there is no symmetry other than the fact that a pattern repeats over regular intervals in two dimensions, as shown in the section on p1 below.
The following examples are patterns with more forms of symmetry:
• Example A: Cloth, Tahiti
• Example B: Ornamental painting, Nineveh, Assyria
• Example C: Painted porcelain, China
Examples A and B have the same wallpaper group; it is called p4m in the IUCr notation and *442 in the orbifold notation. Example C has a different wallpaper group, called p4g or 4*2. The fact that A and B have the same wallpaper group means that they have the same symmetries, regardless of details of the designs, whereas C has a different set of symmetries despite any superficial similarities.
The number of symmetry groups depends on the number of dimensions in the patterns. Wallpaper groups apply to the two-dimensional case, intermediate in complexity between the simpler frieze groups and the three-dimensional space groups. Subtle differences may place similar patterns in different groups, while patterns that are very different in style, color, scale or orientation may belong to the same group.
A proof that there are only 17 distinct groups of such planar symmetries was first carried out by Evgraf Fedorov in 1891[1] and then derived independently by George Pólya in 1924.[2] The proof that the list of wallpaper groups is complete only came after the much harder case of space groups had been done. The seventeen possible wallpaper groups are listed below in § The seventeen groups.
Symmetries of patterns
A symmetry of a pattern is, loosely speaking, a way of transforming the pattern so that it looks exactly the same after the transformation. For example, translational symmetry is present when the pattern can be translated (in other words, shifted) some finite distance and appear unchanged. Think of shifting a set of vertical stripes horizontally by one stripe. The pattern is unchanged. Strictly speaking, a true symmetry only exists in patterns that repeat exactly and continue indefinitely. A set of only, say, five stripes does not have translational symmetry—when shifted, the stripe on one end "disappears" and a new stripe is "added" at the other end. In practice, however, classification is applied to finite patterns, and small imperfections may be ignored.
The types of transformations that are relevant here are called Euclidean plane isometries. For example:
• If one shifts example B one unit to the right, so that each square covers the square that was originally adjacent to it, then the resulting pattern is exactly the same as the starting pattern. This type of symmetry is called a translation. Examples A and C are similar, except that the smallest possible shifts are in diagonal directions.
• If one turns example B clockwise by 90°, around the centre of one of the squares, again one obtains exactly the same pattern. This is called a rotation. Examples A and C also have 90° rotations, although it requires a little more ingenuity to find the correct centre of rotation for C.
• One can also flip example B across a horizontal axis that runs across the middle of the image. This is called a reflection. Example C also has reflections across a vertical axis, and across two diagonal axes. The same can be said for A.
However, example C is different. It only has reflections in horizontal and vertical directions, not across diagonal axes. If one flips across a diagonal line, one does not get the same pattern back, but the original pattern shifted across by a certain distance. This is part of the reason that the wallpaper group of A and B is different from the wallpaper group of C.
Another transformation is a glide reflection, a combination of a reflection and a translation parallel to the line of reflection.
Formal definition and discussion
Mathematically, a wallpaper group or plane crystallographic group is a type of topologically discrete group of isometries of the Euclidean plane that contains two linearly independent translations.
Two such isometry groups are of the same type (of the same wallpaper group) if they are the same up to an affine transformation of the plane. Thus e.g. a translation of the plane (hence a translation of the mirrors and centres of rotation) does not affect the wallpaper group. The same applies for a change of angle between translation vectors, provided that it does not add or remove any symmetry (this is only the case if there are no mirrors and no glide reflections, and rotational symmetry is at most of order 2).
Unlike in the three-dimensional case, one can equivalently restrict the affine transformations to those that preserve orientation.
It follows from the Bieberbach theorem that all wallpaper groups are different even as abstract groups (as opposed to e.g. frieze groups, of which two are isomorphic with Z).
2D patterns with double translational symmetry can be categorized according to their symmetry group type.
Isometries of the Euclidean plane
Isometries of the Euclidean plane fall into four categories (see the article Euclidean plane isometry for more information).
• Translations, denoted by Tv, where v is a vector in R2. This has the effect of shifting the plane by the displacement vector v.
• Rotations, denoted by Rc,θ, where c is a point in the plane (the centre of rotation), and θ is the angle of rotation.
• Reflections, or mirror isometries, denoted by FL, where L is a line in R2. (F is for "flip"). This has the effect of reflecting the plane in the line L, called the reflection axis or the associated mirror.
• Glide reflections, denoted by GL,d, where L is a line in R2 and d is a distance. This is a combination of a reflection in the line L and a translation along L by a distance d.
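As a small illustration of how these isometries combine, the following sketch (plain Python, with points modelled as complex numbers; the helper name `glide` is ours, not standard notation) shows that applying a glide reflection twice yields a pure translation along its axis by twice the glide distance — which is why a pattern invariant under GL,d is automatically invariant under a translation along L by 2d:

```python
# A glide reflection along the real axis: reflect across the axis
# (complex conjugation), then translate along it by a distance d.
def glide(z: complex, d: float) -> complex:
    return z.conjugate() + d

# Applying the same glide twice gives a pure translation by 2*d.
p = 1.5 + 2.0j
assert glide(glide(p, 3.0), 3.0) == p + 6.0
```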
The independent translations condition
The condition on linearly independent translations means that there exist linearly independent vectors v and w (in R2) such that the group contains both Tv and Tw.
The purpose of this condition is to distinguish wallpaper groups from frieze groups, which possess a translation but not two linearly independent ones, and from two-dimensional discrete point groups, which have no translations at all. In other words, wallpaper groups represent patterns that repeat themselves in two distinct directions, in contrast to frieze groups, which only repeat along a single axis.
(It is possible to generalise this situation. One could for example study discrete groups of isometries of Rn with m linearly independent translations, where m is any integer in the range 0 ≤ m ≤ n.)
The discreteness condition
The discreteness condition means that there is some positive real number ε, such that for every translation Tv in the group, the vector v has length at least ε (except of course in the case that v is the zero vector, but the independent translations condition prevents this, since any set that contains the zero vector is linearly dependent by definition and thus disallowed).
The purpose of this condition is to ensure that the group has a compact fundamental domain, or in other words, a "cell" of nonzero, finite area, which is repeated through the plane. Without this condition, one might have for example a group containing the translation Tx for every rational number x, which would not correspond to any reasonable wallpaper pattern.
One important and nontrivial consequence of the discreteness condition in combination with the independent translations condition is that the group can only contain rotations of order 2, 3, 4, or 6; that is, every rotation in the group must be a rotation by 180°, 120°, 90°, or 60°. This fact is known as the crystallographic restriction theorem,[3] and can be generalised to higher-dimensional cases.
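The standard argument behind this restriction can be checked numerically: in a lattice basis, a rotation mapping the lattice to itself is represented by an integer matrix, so its trace 2 cos θ must be an integer. The sketch below (an illustration of the constraint, not a proof) scans rotation orders and keeps those whose trace is integral:

```python
from math import cos, pi

# Keep the rotation orders n for which 2*cos(2*pi/n) is an integer,
# as required of a rotation that maps a lattice to itself.
allowed = [n for n in range(1, 25)
           if abs(2 * cos(2 * pi / n) - round(2 * cos(2 * pi / n))) < 1e-9]
# Only the orders 1, 2, 3, 4 and 6 survive.
```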
Crystallographic notation
Crystallography has 230 space groups to distinguish, far more than the 17 wallpaper groups, but many of the symmetries in the groups are the same. Thus one can use a similar notation for both kinds of groups, that of Carl Hermann and Charles-Victor Mauguin. An example of a full wallpaper name in Hermann-Mauguin style (also called IUCr notation) is p31m, with four letters or digits; more usual is a shortened name like cmm or pg.
For wallpaper groups the full notation begins with either p or c, for a primitive cell or a face-centred cell; these are explained below. This is followed by a digit, n, indicating the highest order of rotational symmetry: 1-fold (none), 2-fold, 3-fold, 4-fold, or 6-fold. The next two symbols indicate symmetries relative to one translation axis of the pattern, referred to as the "main" one; if there is a mirror perpendicular to a translation axis, that axis is the main one (or if there are two, one of them). The symbols are either m, g, or 1, for mirror, glide reflection, or none. The axis of the mirror or glide reflection is perpendicular to the main axis for the first letter, and either parallel or tilted 180°/n (when n > 2) for the second letter. Many groups include other symmetries implied by the given ones. The short notation drops digits or an m that can be deduced, so long as that leaves no confusion with another group.
A primitive cell is a minimal region repeated by lattice translations. All but two wallpaper symmetry groups are described with respect to primitive cell axes, a coordinate basis using the translation vectors of the lattice. In the remaining two cases symmetry description is with respect to centred cells that are larger than the primitive cell, and hence have internal repetition; the directions of their sides are different from those of the translation vectors spanning a primitive cell. Hermann-Mauguin notation for crystal space groups uses additional cell types.
Examples
• p2 (p2): Primitive cell, 2-fold rotation symmetry, no mirrors or glide reflections.
• p4gm (p4g): Primitive cell, 4-fold rotation, glide reflection perpendicular to main axis, mirror axis at 45°.
• c2mm (cmm): Centred cell, 2-fold rotation, mirror axes both perpendicular and parallel to main axis.
• p31m (p31m): Primitive cell, 3-fold rotation, mirror axis at 60°.
Here are all the names that differ in short and full notation.
Crystallographic short and full names
Short  pm    pg    cm    pmm   pmg   pgg   cmm   p4m   p4g   p6m
Full   p1m1  p1g1  c1m1  p2mm  p2mg  p2gg  c2mm  p4mm  p4gm  p6mm
The remaining names are p1, p2, p3, p3m1, p31m, p4, and p6.
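The short/full correspondence above is small enough to encode directly. The sketch below is a plain lookup table; names absent from the dictionary are already their own full form:

```python
# Short crystallographic names mapped to their full Hermann-Mauguin forms,
# exactly as in the table above.
FULL_NAME = {
    'pm': 'p1m1', 'pg': 'p1g1', 'cm': 'c1m1',
    'pmm': 'p2mm', 'pmg': 'p2mg', 'pgg': 'p2gg', 'cmm': 'c2mm',
    'p4m': 'p4mm', 'p4g': 'p4gm', 'p6m': 'p6mm',
}

def full_name(short: str) -> str:
    # p1, p2, p3, p3m1, p31m, p4 and p6 coincide with their full names.
    return FULL_NAME.get(short, short)
```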
Orbifold notation
Orbifold notation for wallpaper groups, advocated by John Horton Conway (Conway 1992; Conway 2008), is based not on crystallography, but on topology. One can fold the infinite periodic tiling of the plane into its essence, an orbifold, then describe that with a few symbols.
• A digit, n, indicates a centre of n-fold rotation corresponding to a cone point on the orbifold. By the crystallographic restriction theorem, n must be 2, 3, 4, or 6.
• An asterisk, *, indicates a mirror symmetry corresponding to a boundary of the orbifold. It interacts with the digits as follows:
1. Digits before * denote centres of pure rotation (cyclic).
2. Digits after * denote centres of rotation with mirrors through them, corresponding to "corners" on the boundary of the orbifold (dihedral).
• A cross, ×, occurs when a glide reflection is present and indicates a crosscap on the orbifold. Pure mirrors combine with lattice translation to produce glides, but those are already accounted for so need no notation.
• The "no symmetry" symbol, o, stands alone, and indicates there are only lattice translations with no other symmetry. The orbifold with this symbol is a torus; in general the symbol o denotes a handle on the orbifold.
The group denoted in crystallographic notation by cmm will, in Conway's notation, be 2*22. The 2 before the * says there is a 2-fold rotation centre with no mirror through it. The * itself says there is a mirror. The first 2 after the * says there is a 2-fold rotation centre on a mirror. The final 2 says there is an independent second 2-fold rotation centre on a mirror, one that is not a duplicate of the first one under symmetries.
The group denoted by pgg will be 22×. There are two pure 2-fold rotation centres, and a glide reflection axis. Contrast this with pmg, Conway 22*, where crystallographic notation mentions a glide, but one that is implicit in the other symmetries of the orbifold.
Coxeter's bracket notation is also included, based on reflectional Coxeter groups, and modified with plus superscripts accounting for rotations, improper rotations and translations.
Conway, Coxeter and crystallographic correspondence
Conway            o            ××            *×           **          632     *632
Coxeter           [∞+,2,∞+]    [(∞,2)+,∞+]   [∞,2+,∞+]    [∞,2,∞+]    [6,3]+  [6,3]
Crystallographic  p1           pg            cm           pm          p6      p6m

Conway            333       *333     3*3     442      *442    4*2
Coxeter           [3[3]]+   [3[3]]   [3+,6]  [4,4]+   [4,4]   [4+,4]
Crystallographic  p3        p3m1     p31m    p4       p4m     p4g

Conway            2222       22×                  22*          *2222      2*22
Coxeter           [∞,2,∞]+   [((∞,2)+,(∞,2)+)]    [(∞,2)+,∞]   [∞,2,∞]    [∞,2+,∞]
Crystallographic  p2         pgg                  pmg          pmm        cmm
Why there are exactly seventeen groups
An orbifold can be viewed as a polygon with faces, edges, and vertices, which can be unfolded to form a possibly infinite set of polygons which tile either the sphere, the plane or the hyperbolic plane. When it tiles the plane it will give a wallpaper group and when it tiles the sphere or hyperbolic plane it gives either a spherical symmetry group or a hyperbolic symmetry group. The type of space the polygons tile can be found by calculating the Euler characteristic, χ = V − E + F, where V is the number of corners (vertices), E is the number of edges and F is the number of faces. If the Euler characteristic is positive then the orbifold has an elliptic (spherical) structure; if it is zero then it has a parabolic structure, i.e. a wallpaper group; and if it is negative it will have a hyperbolic structure. When the full set of possible orbifolds is enumerated it is found that only 17 have Euler characteristic 0.
When an orbifold replicates by symmetry to fill the plane, its features create a structure of vertices, edges, and polygon faces, which must be consistent with the Euler characteristic. Reversing the process, one can assign numbers to the features of the orbifold, but as fractions rather than whole numbers. Because the orbifold itself is a quotient of the full surface by the symmetry group, the orbifold Euler characteristic is the surface Euler characteristic divided by the order of the symmetry group.
The orbifold Euler characteristic is 2 minus the sum of the feature values, assigned as follows:
• A digit n without or before a * counts as (n − 1)/n.
• A digit n after a * counts as (n − 1)/(2n).
• Both * and × count as 1.
• The "no symmetry" o counts as 2.
For a wallpaper group, the sum for the characteristic must be zero; thus the feature sum must be 2.
Examples
• 632: 5/6 + 2/3 + 1/2 = 2
• 3*3: 2/3 + 1 + 2/6 = 2
• 4*2: 3/4 + 1 + 1/4 = 2
• 22×: 1/2 + 1/2 + 1 = 2
Now enumeration of all wallpaper groups becomes a matter of arithmetic, of listing all feature strings with values summing to 2.
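This arithmetic can be carried out mechanically. The sketch below enumerates candidate orbifold signatures, writing 'x' for × and 'o' for a handle, with the feature costs given above computed as exact fractions. Digits are allowed up to 12, yet only 2, 3, 4 and 6 ever appear among the survivors, and exactly 17 signatures have total cost 2:

```python
from fractions import Fraction as F
from itertools import combinations_with_replacement as multisets

def wallpaper_signatures(max_digit=12):
    """Enumerate orbifold signatures whose feature costs sum to 2."""
    digits = range(2, max_digit + 1)
    found = set()
    for h in (0, 1):                 # handles 'o', costing 2 each
        for c in (0, 1, 2):          # crosscaps 'x', costing 1 each
            for s in (0, 1, 2):      # mirror boundaries '*', costing 1 each
                fixed = 2 * h + c + s
                if fixed > 2:
                    continue
                for k in range(5):   # gyration digits cost (n-1)/n >= 1/2
                    for gy in multisets(digits, k):
                        left = F(2) - fixed - sum(F(n - 1, n) for n in gy)
                        if left < 0:
                            continue
                        # corner digits cost (n-1)/(2n) each; within a
                        # budget of 2 they can only occur on a single '*'
                        for m in range((4 if s == 1 else 0) + 1):
                            for co in multisets(digits, m):
                                if sum(F(n - 1, 2 * n) for n in co) != left:
                                    continue
                                found.add(
                                    ''.join(map(str, sorted(gy, reverse=True)))
                                    + '*' * s
                                    + ''.join(map(str, sorted(co, reverse=True)))
                                    + 'x' * c + 'o' * h)
    return found

assert len(wallpaper_signatures()) == 17
```

The enumeration recovers the familiar list, from o and ×× to *632; the crystallographic restriction on the digits emerges on its own from the requirement that the costs sum to exactly 2.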
Feature strings with other sums are not nonsense; they imply non-planar tilings, not discussed here. (When the orbifold Euler characteristic is negative, the tiling is hyperbolic; when positive, spherical or bad).
Guide to recognizing wallpaper groups
To work out which wallpaper group corresponds to a given design, one may use the following table.[4]
Size of smallest rotation:
• 360° / 6:
  Has reflection? Yes: p6m (*632). No: p6 (632).
• 360° / 4:
  Has reflection? Yes: has mirrors at 45°? Yes: p4m (*442); no: p4g (4*2). No: p4 (442).
• 360° / 3:
  Has reflection? Yes: has rotation centre off mirrors? Yes: p31m (3*3); no: p3m1 (*333). No: p3 (333).
• 360° / 2:
  Has reflection? Yes: has perpendicular reflections? Yes: has rotation centre off mirrors? Yes: cmm (2*22); no: pmm (*2222). No: pmg (22*).
  No reflection: has glide reflection? Yes: pgg (22×); no: p2 (2222).
• none:
  Has reflection? Yes: has glide axis off mirrors? Yes: cm (*×); no: pm (**).
  No reflection: has glide reflection? Yes: pg (××); no: p1 (o).
See also this overview with diagrams.
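The table can also be read as a decision procedure. The sketch below encodes it as a function; the boolean parameter names are our own shorthand for what one observes in the pattern, and `rotation` is the order of the smallest rotation (1 meaning none):

```python
def wallpaper_group(rotation, reflection=False, glide=False,
                    mirrors_at_45=False, rot_centre_off_mirrors=False,
                    perpendicular_reflections=False,
                    glide_axis_off_mirrors=False):
    """Return the crystallographic short name from observed symmetries."""
    if rotation == 6:
        return 'p6m' if reflection else 'p6'
    if rotation == 4:
        if not reflection:
            return 'p4'
        return 'p4m' if mirrors_at_45 else 'p4g'
    if rotation == 3:
        if not reflection:
            return 'p3'
        return 'p31m' if rot_centre_off_mirrors else 'p3m1'
    if rotation == 2:
        if reflection:
            if perpendicular_reflections:
                return 'cmm' if rot_centre_off_mirrors else 'pmm'
            return 'pmg'
        return 'pgg' if glide else 'p2'
    # no rotation at all
    if reflection:
        return 'cm' if glide_axis_off_mirrors else 'pm'
    return 'pg' if glide else 'p1'
```

For instance, `wallpaper_group(4, reflection=True, mirrors_at_45=True)` returns 'p4m', matching the second row of the table.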
The seventeen groups
Each of the groups in this section has two cell structure diagrams, which are to be interpreted as follows (it is the shape that is significant, not the colour):
• a centre of rotation of order two (180°);
• a centre of rotation of order three (120°);
• a centre of rotation of order four (90°);
• a centre of rotation of order six (60°);
• an axis of reflection;
• an axis of glide reflection.
On the right-hand side diagrams, different equivalence classes of symmetry elements are colored (and rotated) differently.
The brown or yellow area indicates a fundamental domain, i.e. the smallest part of the pattern that is repeated.
The diagrams on the right show the cell of the lattice corresponding to the smallest translations; those on the left sometimes show a larger area.
Group p1 (o)
Cell structures for p1 by lattice type
Oblique
Hexagonal
Rectangular
Rhombic
Square
• Orbifold signature: o
• Coxeter notation (rectangular): [∞+,2,∞+] or [∞]+×[∞]+
• Lattice: oblique
• Point group: C1
• The group p1 contains only translations; there are no rotations, reflections, or glide reflections.
Examples of group p1
• Computer generated
• Medieval wall diapering
The two translations (cell sides) can each have different lengths, and can form any angle.
Group p2 (2222)
Cell structures for p2 by lattice type
Oblique
Hexagonal
Rectangular
Rhombic
Square
• Orbifold signature: 2222
• Coxeter notation (rectangular): [∞,2,∞]+
• Lattice: oblique
• Point group: C2
• The group p2 contains four rotation centres of order two (180°), but no reflections or glide reflections.
Examples of group p2
• Computer generated
• Cloth, Sandwich Islands (Hawaii)
• Mat on which an Egyptian king stood
• Egyptian mat (detail)
• Ceiling of an Egyptian tomb
• Wire fence, U.S.
Group pm (**)
Cell structure for pm
Horizontal mirrors
Vertical mirrors
• Orbifold signature: **
• Coxeter notation: [∞,2,∞+] or [∞+,2,∞]
• Lattice: rectangular
• Point group: D1
• The group pm has no rotations. It has reflection axes; they are all parallel.
Examples of group pm
(The first three have a vertical symmetry axis, and the last two each have a different diagonal one.)
• Computer generated
• Dress of a figure in a tomb at Biban el Moluk, Egypt
• Egyptian tomb, Thebes
• Ceiling of a tomb at Gourna, Egypt. Reflection axis is diagonal
• Indian metalwork at the Great Exhibition in 1851. This is almost pm (ignoring short diagonal lines between oval motifs, which make it p1)
Group pg (××)
Cell structures for pg
Horizontal glides
Vertical glides
Rectangular
• Orbifold signature: ××
• Coxeter notation: [(∞,2)+,∞+] or [∞+,(2,∞)+]
• Lattice: rectangular
• Point group: D1
• The group pg contains glide reflections only, and their axes are all parallel. There are no rotations or reflections.
Examples of group pg
• Computer generated
• Mat with herringbone pattern on which Egyptian king stood
• Egyptian mat (detail)
• Pavement with herringbone pattern in Salzburg. Glide reflection axis runs northeast–southwest
• One of the colorings of the snub square tiling; the glide reflection lines are in the direction upper left / lower right; ignoring colors there is much more symmetry than just pg, and it is then p4g (see there for this image with equally colored triangles)[5]
Without the details inside the zigzag bands the mat is pmg; with the details but without the distinction between brown and black it is pgg.
Ignoring the wavy borders of the tiles, the pavement is pgg.
Group cm (*×)
Cell structure for cm
Horizontal mirrors
Vertical mirrors
Rhombic
• Orbifold signature: *×
• Coxeter notation: [∞+,2+,∞] or [∞,2+,∞+]
• Lattice: rhombic
• Point group: D1
• The group cm contains no rotations. It has reflection axes, all parallel. There is at least one glide reflection whose axis is not a reflection axis; it is halfway between two adjacent parallel reflection axes.
• This group applies for symmetrically staggered rows (i.e. there is a shift per row of half the translation distance inside the rows) of identical objects, which have a symmetry axis perpendicular to the rows.
Examples of group cm
• Computer generated
• Dress of Amun, from Abu Simbel, Egypt
• Dado from Biban el Moluk, Egypt
• Bronze vessel in Nimroud, Assyria
• Spandrels of arches, the Alhambra, Spain
• Soffitt of arch, the Alhambra, Spain
• Persian tapestry
• Indian metalwork at the Great Exhibition in 1851
• Dress of a figure in a tomb at Biban el Moluk, Egypt
Group pmm (*2222)
Cell structure for pmm
rectangular
square
• Orbifold signature: *2222
• Coxeter notation (rectangular): [∞,2,∞] or [∞]×[∞]
• Coxeter notation (square): [4,1+,4] or [1+,4,4,1+]
• Lattice: rectangular
• Point group: D2
• The group pmm has reflections in two perpendicular directions, and four rotation centres of order two (180°) located at the intersections of the reflection axes.
Examples of group pmm
• 2D image of lattice fence, U.S. (in 3D there is additional symmetry)
• Mummy case stored in The Louvre
• Mummy case stored in The Louvre. Would be type p4m except for the mismatched coloring
Group pmg (22*)
Cell structures for pmg
Horizontal mirrors
Vertical mirrors
• Orbifold signature: 22*
• Coxeter notation: [(∞,2)+,∞] or [∞,(2,∞)+]
• Lattice: rectangular
• Point group: D2
• The group pmg has two rotation centres of order two (180°), and reflections in only one direction. It has glide reflections whose axes are perpendicular to the reflection axes. The centres of rotation all lie on glide reflection axes.
Examples of group pmg
• Computer generated
• Cloth, Sandwich Islands (Hawaii)
• Ceiling of Egyptian tomb
• Floor tiling in Prague, the Czech Republic
• Bowl from Kerma
• Pentagon packing
Group pgg (22×)
Cell structures for pgg by lattice type
Rectangular
Square
• Orbifold signature: 22×
• Coxeter notation (rectangular): [((∞,2)+,(∞,2)+)]
• Coxeter notation (square): [4+,4+]
• Lattice: rectangular
• Point group: D2
• The group pgg contains two rotation centres of order two (180°), and glide reflections in two perpendicular directions. The centres of rotation are not located on the glide reflection axes. There are no reflections.
Examples of group pgg
• Computer generated
• Bronze vessel in Nimroud, Assyria
• Pavement in Budapest, Hungary
Group cmm (2*22)
Cell structures for cmm by lattice type
Rhombic
Square
• Orbifold signature: 2*22
• Coxeter notation (rhombic): [∞,2+,∞]
• Coxeter notation (square): [(4,4,2+)]
• Lattice: rhombic
• Point group: D2
• The group cmm has reflections in two perpendicular directions, and a rotation of order two (180°) whose centre is not on a reflection axis. It also has two rotations whose centres are on a reflection axis.
• This group is frequently seen in everyday life, since the most common arrangement of bricks in a brick building (running bond) utilises this group (see example below).
The rotational symmetry of order 2 with centres of rotation at the centres of the sides of the rhombus is a consequence of the other properties.
The pattern corresponds to each of the following:
• symmetrically staggered rows of identical doubly symmetric objects
• a checkerboard pattern of two alternating rectangular tiles, of which each, by itself, is doubly symmetric
• a checkerboard pattern of alternatingly a 2-fold rotationally symmetric rectangular tile and its mirror image
Examples of group cmm
• Computer generated
• Elongated triangular tiling
• Suburban brick wall using running bond arrangement, U.S.
• Ceiling of Egyptian tomb. Ignoring colors, this would be p4g
• Egyptian
• Persian tapestry
• Egyptian tomb
• Turkish dish
• A compact packing of two sizes of circle
• Another compact packing of two sizes of circle
• Another compact packing of two sizes of circle
Group p4 (442)
• Orbifold signature: 442
• Coxeter notation: [4,4]+
• Lattice: square
• Point group: C4
• The group p4 has two rotation centres of order four (90°), and one rotation centre of order two (180°). It has no reflections or glide reflections.
Examples of group p4
A p4 pattern can be looked upon as a repetition in rows and columns of equal square tiles with 4-fold rotational symmetry. Also it can be looked upon as a checkerboard pattern of two such tiles, a factor √2 smaller and rotated 45°.
• Computer generated
• Ceiling of Egyptian tomb; ignoring colors this is p4, otherwise p2
• Ceiling of Egyptian tomb
• Overlaid patterns
• Frieze, the Alhambra, Spain. Requires close inspection to see why there are no reflections
• Viennese cane
• Renaissance earthenware
• Pythagorean tiling
• Generated from a photograph
Group p4m (*442)
• Orbifold signature: *442
• Coxeter notation: [4,4]
• Lattice: square
• Point group: D4
• The group p4m has two rotation centres of order four (90°), and reflections in four distinct directions (horizontal, vertical, and diagonals). It has additional glide reflections whose axes are not reflection axes; rotations of order two (180°) are centred at the intersection of the glide reflection axes. All rotation centres lie on reflection axes.
This corresponds to a straightforward grid of rows and columns of equal squares with the four reflection axes. Also it corresponds to a checkerboard pattern of two of such squares.
Examples of group p4m
Examples displayed with the smallest translations horizontal and vertical (like in the diagram):
• Computer generated
• Square tiling
• Tetrakis square tiling; ignoring colors, this is p4m, otherwise c2m
• Truncated square tiling (ignoring color also, with smaller translations)
• Ornamental painting, Nineveh, Assyria
• Storm drain, U.S.
• Egyptian mummy case
• Persian glazed tile
• Compact packing of two sizes of circle
Examples displayed with the smallest translations diagonal:
• checkerboard
• Cloth, Otaheite (Tahiti)
• Egyptian tomb
• Cathedral of Bourges
• Dish from Turkey, Ottoman period
Group p4g (4*2)
• Orbifold signature: 4*2
• Coxeter notation: [4+,4]
• Lattice: square
• Point group: D4
• The group p4g has two centres of rotation of order four (90°), which are each other's mirror image, but it has reflections in only two directions, which are perpendicular. There are rotations of order two (180°) whose centres are located at the intersections of reflection axes. It has glide reflections axes parallel to the reflection axes, in between them, and also at an angle of 45° with these.
A p4g pattern can be looked upon as a checkerboard pattern of copies of a square tile with 4-fold rotational symmetry, and its mirror image. Alternatively it can be looked upon (by shifting half a tile) as a checkerboard pattern of copies of a horizontally and vertically symmetric tile and its 90° rotated version. Note that neither applies for a plain checkerboard pattern of black and white tiles; that is group p4m (with diagonal translation cells).
Examples of group p4g
• Bathroom linoleum, U.S.
• Painted porcelain, China
• Fly screen, U.S.
• Painting, China
• one of the colorings of the snub square tiling (see also at pg)
Group p3 (333)
• Orbifold signature: 333
• Coxeter notation: [(3,3,3)]+ or [3[3]]+
• Lattice: hexagonal
• Point group: C3
• The group p3 has three different rotation centres of order three (120°), but no reflections or glide reflections.
Imagine a tessellation of the plane with equilateral triangles of equal size, with the sides corresponding to the smallest translations. Then half of the triangles are in one orientation, and the other half upside down. This wallpaper group corresponds to the case that all triangles of the same orientation are equal, while both types have rotational symmetry of order three, but the two are not equal, not each other's mirror image, and not both symmetric (if the two are equal it is p6, if they are each other's mirror image it is p31m, if they are both symmetric it is p3m1; if two of the three apply then the third also, and it is p6m). For a given image, three of these tessellations are possible, each with rotation centres as vertices, i.e. for any tessellation two shifts are possible. In terms of the image: the vertices can be the red, the blue or the green triangles.
Equivalently, imagine a tessellation of the plane with regular hexagons, with sides equal to the smallest translation distance divided by √3. Then this wallpaper group corresponds to the case that all hexagons are equal (and in the same orientation) and have rotational symmetry of order three, while they have no mirror image symmetry (if they have rotational symmetry of order six it is p6, if they are symmetric with respect to the main diagonals it is p31m, if they are symmetric with respect to lines perpendicular to the sides it is p3m1; if two of the three apply then the third also, it is p6m). For a given image, three of these tessellations are possible, each with one third of the rotation centres as centres of the hexagons. In terms of the image: the centres of the hexagons can be the red, the blue or the green triangles.
Examples of group p3
• Computer generated
• Snub trihexagonal tiling (ignoring the colors: p6); the translation vectors are rotated a little to the right compared with the directions in the underlying hexagonal lattice of the image
• Street pavement in Zakopane, Poland
• Wall tiling in the Alhambra, Spain (and the whole wall); ignoring all colors this is p3 (ignoring only star colors it is p1)
Group p3m1 (*333)
• Orbifold signature: *333
• Coxeter notation: [(3,3,3)] or [3[3]]
• Lattice: hexagonal
• Point group: D3
• The group p3m1 has three different rotation centres of order three (120°). It has reflections in the three sides of an equilateral triangle. The centre of every rotation lies on a reflection axis. There are additional glide reflections in three distinct directions, whose axes are located halfway between adjacent parallel reflection axes.
Like for p3, imagine a tessellation of the plane with equilateral triangles of equal size, with the sides corresponding to the smallest translations. Then half of the triangles are in one orientation, and the other half upside down. This wallpaper group corresponds to the case that all triangles of the same orientation are equal, while both types have rotational symmetry of order three, and both are symmetric, but the two are not equal, and not each other's mirror image. For a given image, three of these tessellations are possible, each with rotation centres as vertices. In terms of the image: the vertices can be the red, the blue or the green triangles.
Examples of group p3m1
• Triangular tiling (ignoring colors: p6m)
• Hexagonal tiling (ignoring colors: p6m)
• Truncated hexagonal tiling (ignoring colors: p6m)
• Persian glazed tile (ignoring colors: p6m)
• Persian ornament
• Painting, China (see detailed image)
Group p31m (3*3)
• Orbifold signature: 3*3
• Coxeter notation: [6,3+]
• Lattice: hexagonal
• Point group: D3
• The group p31m has three different rotation centres of order three (120°), of which two are each other's mirror image. It has reflections in three distinct directions. It has at least one rotation whose centre does not lie on a reflection axis. There are additional glide reflections in three distinct directions, whose axes are located halfway between adjacent parallel reflection axes.
Like for p3 and p3m1, imagine a tessellation of the plane with equilateral triangles of equal size, with the sides corresponding to the smallest translations. Then half of the triangles are in one orientation, and the other half upside down. This wallpaper group corresponds to the case that all triangles of the same orientation are equal, while both types have rotational symmetry of order three and are each other's mirror image, but not symmetric themselves, and not equal. For a given image, only one such tessellation is possible. In terms of the image: the vertices must be the red triangles, not the blue triangles.
Examples of group p31m
• Persian glazed tile
• Painted porcelain, China
• Painting, China
• Compact packing of two sizes of circle
Group p6 (632)
• Orbifold signature: 632
• Coxeter notation: [6,3]+
• Lattice: hexagonal
• Point group: C6
• The group p6 has one rotation centre of order six (60°); two rotation centres of order three (120°), which are each other's images under a rotation of 60°; and three rotation centres of order two (180°) which are also each other's images under a rotation of 60°. It has no reflections or glide reflections.
A pattern with this symmetry can be looked upon as a tessellation of the plane with equal triangular tiles with C3 symmetry, or equivalently, a tessellation of the plane with equal hexagonal tiles with C6 symmetry (with the edges of the tiles not necessarily part of the pattern).
Examples of group p6
• Computer generated
• Regular polygons
• Wall panelling, the Alhambra, Spain
• Persian ornament
Group p6m (*632)
• Orbifold signature: *632
• Coxeter notation: [6,3]
• Lattice: hexagonal
• Point group: D6
• The group p6m has one rotation centre of order six (60°); it has two rotation centres of order three, which only differ by a rotation of 60° (or, equivalently, 180°), and three of order two, which only differ by a rotation of 60°. It has also reflections in six distinct directions. There are additional glide reflections in six distinct directions, whose axes are located halfway between adjacent parallel reflection axes.
A pattern with this symmetry can be looked upon as a tessellation of the plane with equal triangular tiles with D3 symmetry, or equivalently, a tessellation of the plane with equal hexagonal tiles with D6 symmetry (with the edges of the tiles not necessarily part of the pattern). Thus the simplest examples are a triangular lattice with or without connecting lines, and a hexagonal tiling with one color for outlining the hexagons and one for the background.
Examples of group p6m
• Computer generated
• Trihexagonal tiling
• Small rhombitrihexagonal tiling
• Great rhombitrihexagonal tiling
• Persian glazed tile
• King's dress, Khorsabad, Assyria; this is almost p6m (ignoring inner parts of flowers, which make it cmm)
• Bronze vessel in Nimroud, Assyria
• Byzantine marble pavement, Rome
• Painted porcelain, China
• Painted porcelain, China
• Compact packing of two sizes of circle
• Another compact packing of two sizes of circle
Lattice types
There are five lattice types or Bravais lattices, corresponding to the five possible wallpaper groups of the lattice itself. The wallpaper group of a pattern with this lattice of translational symmetry cannot have more, but may have less symmetry than the lattice itself.
• In the 5 cases of rotational symmetry of order 3 or 6, the unit cell consists of two equilateral triangles (hexagonal lattice, itself p6m). They form a rhombus with angles 60° and 120°.
• In the 3 cases of rotational symmetry of order 4, the cell is a square (square lattice, itself p4m).
• In the 5 cases of reflection or glide reflection, but not both, the cell is a rectangle (rectangular lattice, itself pmm). It may also be interpreted as a centered rhombic lattice. Special cases: square.
• In the 2 cases of reflection combined with glide reflection, the cell is a rhombus (rhombic lattice, itself cmm). It may also be interpreted as a centered rectangular lattice. Special cases: square, hexagonal unit cell.
• In the case of only rotational symmetry of order 2, and the case of no other symmetry than translational, the cell is in general a parallelogram (parallelogrammatic or oblique lattice, itself p2). Special cases: rectangle, square, rhombus, hexagonal unit cell.
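The five cases above amount to a classification of the unit cell by its edge lengths and angle. A minimal sketch of that decision tree, assuming a conventional primitive cell with edges a, b and angle γ (the tolerance handling is deliberately simplified):

```python
import math

def lattice_type(a: float, b: float, gamma_deg: float) -> str:
    """Classify a 2D Bravais lattice from primitive-cell edges a, b and angle gamma."""
    equal_edges = math.isclose(a, b, rel_tol=1e-9)
    gamma = gamma_deg % 180.0
    if equal_edges and math.isclose(gamma, 90.0, abs_tol=1e-6):
        return "square"
    if equal_edges and (math.isclose(gamma, 60.0, abs_tol=1e-6)
                        or math.isclose(gamma, 120.0, abs_tol=1e-6)):
        return "hexagonal"
    if math.isclose(gamma, 90.0, abs_tol=1e-6):
        return "rectangular"
    if equal_edges:
        return "rhombic"  # i.e. centred rectangular
    return "oblique"
```

For example, `lattice_type(1, 1, 120)` returns `"hexagonal"`, matching the rhombus with 60° and 120° angles described above.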
Symmetry groups
The actual symmetry group should be distinguished from the wallpaper group. Wallpaper groups are collections of symmetry groups. There are 17 of these collections, but for each collection there are infinitely many symmetry groups, in the sense of actual groups of isometries. These depend, apart from the wallpaper group, on a number of parameters for the translation vectors, the orientation and position of the reflection axes and rotation centers.
The numbers of degrees of freedom are:
• 6 for p2
• 5 for pmm, pmg, pgg, and cmm
• 4 for the rest.
However, within each wallpaper group, all symmetry groups are algebraically isomorphic.
Some symmetry group isomorphisms:
• p1: Z²
• pm: Z × D∞
• pmm: D∞ × D∞.
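The orbifold signatures quoted throughout this article can be cross-checked with Conway's "magic theorem" (see The Symmetries of Things in the references): the feature costs of a signature sum to exactly 2 precisely for the wallpaper groups. A minimal checker, assuming single-digit rotation orders and only the symbols o, *, ×:

```python
from fractions import Fraction

def orbifold_cost(signature: str) -> Fraction:
    """Sum of Conway's feature costs; equals exactly 2 for a wallpaper group."""
    cost = Fraction(0)
    seen_star = False
    for ch in signature:
        if ch == "o":
            cost += 2                      # "wonder-ring" (pure translation)
        elif ch == "*":
            cost += 1                      # kaleidoscope
            seen_star = True
        elif ch in ("x", "×"):
            cost += 1                      # glide ("miracle")
        elif ch.isdigit():
            n = int(ch)
            # gyration point (n-1)/n before *, kaleidoscopic corner (n-1)/(2n) after
            cost += Fraction(n - 1, n) if not seen_star else Fraction(n - 1, 2 * n)
    return cost

for sig in ["442", "*442", "4*2", "333", "*333", "3*3", "632", "*632"]:
    assert orbifold_cost(sig) == 2
```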
Dependence of wallpaper groups on transformations
• The wallpaper group of a pattern is invariant under isometries and uniform scaling (similarity transformations).
• Translational symmetry is preserved under arbitrary bijective affine transformations.
• Rotational symmetry of order two ditto; this means also that 4- and 6-fold rotation centres at least keep 2-fold rotational symmetry.
• Reflection in a line and glide reflection are preserved on expansion/contraction along, or perpendicular to, the axis of reflection and glide reflection. It changes p6m, p4g, and p3m1 into cmm, p3m1 into cm, and p4m, depending on direction of expansion/contraction, into pmm or cmm. A pattern of symmetrically staggered rows of points is special in that it can convert by expansion/contraction from p6m to p4m.
Note that when a transformation decreases symmetry, a transformation of the same kind (the inverse) obviously for some patterns increases the symmetry. Such a special property of a pattern (e.g. expansion in one direction produces a pattern with 4-fold symmetry) is not counted as a form of extra symmetry.
Change of colors does not affect the wallpaper group if any two points that have the same color before the change, also have the same color after the change, and any two points that have different colors before the change, also have different colors after the change.
If the former applies, but not the latter, such as when converting a color image to one in black and white, then symmetries are preserved, but they may increase, so that the wallpaper group can change.
Web demo and software
Several graphics software tools let you create 2D patterns using wallpaper symmetry groups. Usually you can edit the original tile, and its copies across the pattern update automatically.
• MadPattern, a free set of Adobe Illustrator templates that support the 17 wallpaper groups
• Tess, a shareware tessellation program for multiple platforms, supports all wallpaper, frieze, and rosette groups, as well as Heesch tilings.
• Wallpaper Symmetry is a free online JavaScript drawing tool supporting the 17 groups. The main page has an explanation of the wallpaper groups, as well as drawing tools and explanations for the other planar symmetry groups as well.
• TALES GAME, a free software designed for educational purposes which includes the tessellation function.
• Kali, an online graphical symmetry editor Java applet (not supported by default in browsers).
• Kali, a free downloadable version for Windows and Mac Classic.
• Inkscape, a free vector graphics editor, supports all 17 groups plus arbitrary scales, shifts, rotates, and color changes per row or per column, optionally randomized to a given degree.
• SymmetryWorks is a commercial plugin for Adobe Illustrator, supports all 17 groups.
• EscherSketch is a free online JavaScript drawing tool supporting the 17 groups.
• Repper is a commercial online drawing tool supporting the 17 groups plus a number of non-periodic tilings
See also
• List of planar symmetry groups (summary of this page)
• Aperiodic tiling
• Crystallography
• Layer group
• Mathematics and art
• M. C. Escher
• Point group
• Symmetry groups in one dimension
• Tessellation
Notes
1. E. Fedorov (1891) "Симметрія на плоскости" (Simmetrija na ploskosti, Symmetry in the plane), Записки Императорского С.-Петербургского минералогического общества (Zapiski Imperatorskogo Sant-Petersburgskogo Mineralogicheskogo Obshchestva, Proceedings of the Imperial St. Petersburg Mineralogical Society), series 2, 28 : 345–390 (in Russian).
2. Pólya, George (November 1924). "Über die Analogie der Kristallsymmetrie in der Ebene" [On the analog of crystal symmetry in the plane]. Zeitschrift für Kristallographie (in German). 60 (1–6): 278–282. doi:10.1524/zkri.1924.60.1.278. S2CID 102174323.
3. Klarreich, Erica (5 March 2013). "How to Make Impossible Wallpaper". Quanta Magazine. Retrieved 2021-04-07.
4. Radaelli, Paulo G. Symmetry in Crystallography. Oxford University Press.
5. If one thinks of the squares as the background, then one can see a simple pattern of rows of rhombuses.
References
• The Grammar of Ornament (1856), by Owen Jones. Many of the images in this article are from this book; it contains many more.
• John H. Conway (1992). "The Orbifold Notation for Surface Groups". In: M. W. Liebeck and J. Saxl (eds.), Groups, Combinatorics and Geometry, Proceedings of the L.M.S. Durham Symposium, July 5–15, Durham, UK, 1990; London Math. Soc. Lecture Notes Series 165. Cambridge University Press, Cambridge. pp. 438–447
• John H. Conway, Heidi Burgiel and Chaim Goodman-Strauss (2008): The Symmetries of Things. Worcester MA: A.K. Peters. ISBN 1-56881-220-5.
• Branko Grünbaum and G. C. Shephard (1987): Tilings and Patterns. New York: Freeman. ISBN 0-7167-1193-1.
• Pattern Design, Lewis F. Day
External links
• International Tables for Crystallography Volume A: Space-group symmetry by the International Union of Crystallography
• The 17 plane symmetry groups by David E. Joyce
• Introduction to wallpaper patterns by Chaim Goodman-Strauss and Heidi Burgiel
• Description by Silvio Levy
• Example tiling for each group, with dynamic demos of properties
• Overview with example tiling for each group, by Brian Sanderson
• Escher Web Sketch, a java applet with interactive tools for drawing in all 17 plane symmetry groups
• Burak, a Java applet for drawing symmetry groups.
• A JavaScript app for drawing wallpaper patterns
• Circle-Pattern on Roman Mosaics in Greece
• Seventeen Kinds of Wallpaper Patterns, the 17 symmetries found in traditional Japanese patterns.
• Baloglou, George (2002). "An elementary, purely geometrical classification of the 17 planar crystallographic groups (wallpaper patterns)". Archived from the original on 2018-08-07. Retrieved 2018-07-22.
What is the CFT dual to pure gravity on AdS$_3$?
Pure $2+1$-dimensional gravity in $AdS_3$ (parametrized as $S= \int d^3 x \frac{1}{16 \pi G} \sqrt{-g} (R+\frac{2}{l^2})$) is a topological field theory closely related to Chern-Simons theory, and at least naively seems like it may be renormalizable on-shell for certain values of $l/G$. This is a theory which has been studied by many authors, but I can't seem to find a consensus as to what the CFT dual is. Here's what I've gathered from a cursory literature search:
Witten (2007) suggests that the dual is the monster theory of Frenkel, Lepowsky, and Meurman for a certain value of $l/G$; his argument applies when the central charges $c_L$ and $c_R$ are both multiples of $24$. In his argument, he assumes holomorphic factorization of the boundary CFT, which seems to be fairly controversial. His argument does produce approximately correct entropy for BTZ black holes, but a case can be made that black hole states shouldn't exist at all if the CFT is holomorphically factorized. He also gave a PiTP talk on the subject. Witten himself is unsure if this work is correct.
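The "approximately correct entropy" can be checked on the back of an envelope, following the comparison made in Witten's 2007 paper: at $c=24$ the lightest BTZ black hole has Bekenstein–Hawking entropy $4\pi$, while the FLM monster module supplies 196883 primary states at the corresponding level. The snippet below just reproduces that published comparison numerically:

```python
import math

# log of the number of lowest-level primaries in the FLM monster module
microstate_entropy = math.log(196883)   # ~12.19

# semiclassical Bekenstein-Hawking entropy of the lightest BTZ black hole at c = 24
bh_entropy = 4 * math.pi                # ~12.57

# agreement to within a few percent -- Witten's "approximately correct" entropy
ratio = microstate_entropy / bh_entropy
```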
In a recent 2013 paper, McGough and H. Verlinde claim that "The edge states of 2+1-D gravity are described by Liouville theory", citing 5 papers to justify this claim. All of those are before Witten's 2007 work. Witten's work does mention Liouville theory, and has some discussion, but he doesn't seem to believe that this is the correct boundary theory, and Liouville theory is at any rate not compatible with holomorphic factorization. This paper also claims that "pure quantum gravity...is unlikely to give rise to a complete theory." Similar assertions are made in a few other papers.
Another proposal was made in Castro et al. (2011), relating this to minimal models such as the Ising model. Specifically, they claim that the partition function of the Ising model equals that of pure gravity at $l=3G$, and make certain claims about higher-spin cases.
It doesn't seem to me that all of these can simultaneously be true. There could be some way to mitigate the differences between the proposals, but my scan of the literature didn't point to anything. It seems to me that no one agrees on the correct theory. I'm not even sure if these are the only proposals, but they're the ones that I'm aware of.
First, are my above statements regarding the three proposals accurate? Also, is there any consensus in the majority of the HET community as to whether pure quantum gravity theories in $AdS_3$ exist, and if so what their CFT duals are? Finally, if there is no consensus, what are the necessary conditions for each of the proposals to be correct?
quantum-field-theory research-level quantum-gravity conformal-field-theory ads-cft
Colin McFaul
asked Nov 1 '13 at 0:59
Without reading your whole question and just answering the title:
I think that still is an (very interesting) open problem.
See e.g., Five Problems in Quantum Gravity - Andrew Strominger http://arxiv.org/abs/arXiv:0906.1313
On very general grounds [15], we expect that 3D AdS gravity should be dual to a 2D CFT with central charge c = 3l/2G. Solving the theory is equivalent to specifying this CFT. It was suggested in [23] that, rather than directly quantizing the Einstein-Hilbert action, this CFT might simply be deduced by various consistency requirements. Namely, the central charge must be c = 3l/2G, Z must be modular invariant (since these are large diffeomorphisms) and its pole structure must reflect the fact that there are no perturbative excitations. Adding the additional assumption of holomorphic factorization (i.e. decoupling of the left and right movers in the CFT), it was shown [23] that Z is uniquely determined to be a certain modular form Zext. Unfortunately Zext does not agree with the Euclidean sum-over-geometries [25], which indicates that the assumption is not valid for pure gravity. Modular invariance and the restriction on the pole structure are still strong, if not uniquely determining, hints on the form of Z for pure gravity. Determining Z for pure 3D quantum Einstein gravity - if it exists - is an important open problem.
ungerade
Be aware that I could easily not be up to date, so take that with a grain of salt. Hopefully a more knowledgeable user will add something. – ungerade Nov 1 '13 at 1:22
Thanks. Two of the three papers I cited are more recent than 2009, but assuming it hasn't changed, this at least negatively answers the question "is there a consensus?". – user32020 Nov 1 '13 at 1:23
Association of air pollution with outpatient visits for respiratory diseases of children in an ex-heavily polluted Northwestern city, China
Yueling Ma1 na1,
Li Yue2 na1,
Jiangtao Liu1,
Xiaotao He1,
Lanyu Li1,
Jingping Niu1 &
Bin Luo ORCID: orcid.org/0000-0001-9324-89421,3,4
A great number of studies have confirmed that children are a particularly vulnerable population to air pollution.
In the present study, 332,337 outpatient visits of 15 hospitals for respiratory diseases among children (0–13 years), as well as the simultaneous meteorological and air pollution data, were obtained from 2014 to 2016 in Lanzhou, China. The generalized additive model was used to examine the effects of air pollutants on children's respiratory outpatient visits, including the stratified analysis of age, gender and season.
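The percentage increases reported from such log-link time-series models are a standard transformation of the fitted regression coefficient. A minimal sketch of that conversion — the coefficient below is illustrative only, not a value taken from this study:

```python
import math

def percent_increase(beta: float, delta: float = 10.0) -> float:
    """Percent change in expected visit counts per `delta` ug/m3 rise in a pollutant,
    given a log-link (e.g. quasi-Poisson) regression coefficient `beta` per ug/m3."""
    return (math.exp(beta * delta) - 1.0) * 100.0

# A coefficient of ~0.00247 per ug/m3 corresponds to ~2.5% per 10 ug/m3.
beta_example = math.log(1.025) / 10.0
```

Confidence intervals for the percent increase are obtained by applying the same transformation to the coefficient's interval endpoints.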
We found that PM2.5, NO2 and SO2 were significantly associated with increased total respiratory outpatient visits. The increments in total respiratory outpatient visits were highest at lag 05 for NO2 and SO2: a 10 μg/m3 increase in NO2 and SO2 was associated with a 2.50% (95% CI: 1.54, 3.48%) and a 3.50% (95% CI: 1.51, 5.53%) increase in total respiratory outpatient visits, respectively. These associations remained stable in two-pollutant models. In stratified analyses, all air pollutants other than PM10 were significantly and positively associated with outpatient visits for bronchitis and upper respiratory tract infection. In addition, both NO2 and SO2 were positively related to pneumonia outpatient visits. PM2.5 and SO2 were significantly related to outpatient visits for other respiratory diseases, while only NO2 was positively associated with asthma outpatient visits. These associations were stronger in girls than in boys, particularly in younger (0–3 years) children. Interestingly, season-stratified analysis indicated that the associations for PM10, PM2.5 and SO2 were stronger in the cold season than in the transition or hot seasons.
Our results indicate that the air pollution exposure may account for the increased risk of outpatient visits for respiratory diseases among children in Lanzhou, particularly for younger children and in the cold season.
Air pollution is one of the greatest environmental risks to public health. The World Health Organization (WHO) reported that outdoor air pollution was responsible for 4.2 million deaths worldwide in 2016 [1]. A growing body of literature has investigated the association between air pollution and the respiratory tract, the main organ affected by air pollution. For instance, a panel study from Korea suggested that air pollution may cause respiratory symptoms [2]. In addition, a considerable number of papers have focused on the associations between air pollution and respiratory diseases/mortality in Europe [3, 4], the United States [5, 6], and some Asian countries [7, 8]. In Taiwan, two main air pollutants (NO and NO2) were positively associated with respiratory diseases, followed by PM10, PM2.5, O3, CO and SO2 [9]. A study of an urban Chinese population found that each 10 μg/m3 increase in PM2.5 and PM10 concentration on the current day of exposure was associated with a 0.36 and 0.33% increase in respiratory system disease, respectively [10]. In Hangzhou, outpatient visits of adults with respiratory disease increased by 0.67, 3.50 and 2.10% with each 10 μg/m3 increase in PM2.5, SO2 and NO2, respectively; for children, outpatient visits increased by 1.47, 5.70 and 4.04%, respectively, indicating that children are more susceptible to air pollutants [11]. Besides, a study in Taiwan showed significant relationships between NO2, PM10 and asthma outpatient visits, especially for children [12]. Therefore, air pollution may affect respiratory outpatient visits.
Children have relatively immature lungs and immune systems, and inhale a larger volume of air per unit body weight [13], so they are more susceptible to the adverse respiratory effects of air pollution. Exposure to air pollution at an early stage may affect children's normal growth and lung development [14, 15]. The increased prevalence of young children's respiratory diseases was also related to air pollution exposure time and dose in Jinan [16]. In particular, air pollution was positively related to pneumonia among children [17, 18]. Besides, better air quality has been shown to reduce respiratory symptoms among children [19]. However, research comprehensively comparing respiratory health changes in children from different subgroups is still limited, especially in cities that suffer from heavy air pollution.
Air pollution is a global problem: about 91% of the world population was estimated to breathe air exceeding the WHO air quality guideline levels in 2016 [20]. Lanzhou, an industrial city located in a typical valley basin, is particularly known as a dry city with scarce rainfall, high evaporation and low wind speeds [21]. Moreover, it is frequently affected by dust storms because of its location close to the arid and semi-arid region of Northwest China [22]. These factors combine to make Lanzhou one of China's most chronically air-polluted cities. A study with very limited data has reported the effect of PM2.5 on respiratory disease in Lanzhou, but it did not focus on children [21]. Children are commonly divided into the young-child period (0–3 years), the preschool period (4–6 years) and the school period (7–13 years); immunity matures across these stages, so the groups may show different effects when exposed to air pollution [23]. Therefore, we aimed to assess the effects of air pollutants on children's outpatient visits for respiratory diseases across different subgroups, using data from 15 hospitals in a poor area of China, Lanzhou city.
Study area and data collection
As the capital city of Gansu province, Lanzhou is located in Northwest China with a population of over 3.7 million in 2017 [24]. Lanzhou is one of the most air-polluted cities in China, because it is heavily industrialized, lies in a valley basin, and has a typical semi-arid continental climate with scarce precipitation [21, 25]. Even though the authorities have taken significant measures to improve air quality in Lanzhou, the concentrations of air pollutants (the average annual PM2.5, PM10, SO2 and NO2 concentrations during 2007–2016 in Lanzhou were 61.23 μg/m3, 136.14 μg/m3, 42.93 μg/m3 and 45.37 μg/m3, respectively [21]) exceeded the national level II standards (annual averages of 35 μg/m3 for PM2.5, 70 μg/m3 for PM10, 60 μg/m3 for SO2 and 40 μg/m3 for NO2).
The daily numbers of outpatient visits for respiratory diseases between 2014 and 2016 were obtained from 15 hospitals in the four central urban districts of Lanzhou (Chengguan, Qilihe, Xigu and Anning) (Fig. 1), with the confirmation and permission of the Lanzhou Center for Disease Control and Prevention. This study protocol was approved by the ethics committee of Lanzhou University (project identification code: IRB190612–1). We screened the outpatient visit data using the 10th Revision of the International Classification of Diseases (ICD-10) codes for respiratory diseases (J00-J99). We excluded patients who did not live in the four central urban districts of Lanzhou and children aged ≥14 years. Finally, all outpatient data were classified into four specific disease groups [pneumonia, J12-J18; asthma, J45-J46; bronchitis and upper respiratory tract infection, J00-J06, J20-J21, J30-J39, J40-J42; and other respiratory diseases, J22, J43-J44, J47, J60-J99].
Spatial distribution of air quality monitoring stations, studied hospitals, and the four central urban districts in Lanzhou, China. Source: The map was created by the authors with ArcGIS 10.2.2 software (ESRI, Redlands, California, USA). ArcGIS is the intellectual property of ESRI and is used herein under license
The simultaneous daily meteorological variables and air pollutant data were obtained from the open-access websites of the Lanzhou Meteorological Administration and the Lanzhou air quality monitoring stations (Institute of Biology, Railway Design Institute, Hospital of Staff and LanLian Hotel) (Fig. 1), respectively. The air quality monitoring stations were located in the four central urban districts of Lanzhou. Meteorological variables included daily average temperature and relative humidity, and air pollutant data included particulate matter with aerodynamic diameter ≤ 10 μm (PM10), particulate matter with aerodynamic diameter ≤ 2.5 μm (PM2.5), nitrogen dioxide (NO2) and sulfur dioxide (SO2).
Descriptive analyses were performed for all data. Quasi-Poisson regression with a generalized additive model (GAM) was used to examine the associations between air pollutants (PM10, PM2.5, NO2 and SO2) and daily children's outpatient visits for respiratory diseases. The quasi-Poisson distribution was applied to account for overdispersion in the outpatient visit data. A generalized additive model allows highly flexible fitting, as the outcome is assumed to depend on a sum of smoothed and linear functions of the predictor variables [26]. Based on previous studies, penalized smoothing splines were used to adjust for long-term time trends, day of the week, holidays and meteorological factors [27, 28]. The basic GAM equation is:
$$ \log E(Y_t) = \alpha + \beta X_t + s(\mathrm{Time}, k = df + 1) + s(\mathrm{Temperature}_l, k = df + 1) + s(\mathrm{Humidity}_l, k = df + 1) + \mathrm{DOW} + \mathrm{Holiday} $$
where t is the day of observation; E(Yt) is the expected number of daily outpatient visits for respiratory diseases on day t; α is the intercept; β is the regression coefficient; Xt is the daily concentration of the air pollutant on day t; and s() denotes a smoother based on the penalized smoothing spline. Temperature and relative humidity are adjusted with the same lag structure as the pollutants, so Temperaturel and Humidityl are the six-day moving averages (lag 05) of temperature and relative humidity, respectively [27, 29]. Based on Akaike's information criterion (AIC), 7 degrees of freedom (df) per year were used for the long-term time trend and 3 df for Temperaturel and Humidityl. DOW is a categorical variable indicating the day of the week, and Holiday is a binary variable for national holidays in China.
After constructing the basic model, single-pollutant models were used to examine the lagged effects, i.e., single-day lags (lag 0 to lag 5) and multiple-day moving-average lags (lag 01 to lag 05). A spline function of the GAM was applied to plot the exposure-response curves between air pollution and outpatient visits for respiratory diseases. Moreover, two-pollutant models were fitted to evaluate the robustness of our results after adjusting for the other pollutants. In the stratified analysis, all outpatients were classified by sex (boys and girls), age (0–3 years, 4–6 years and 7–13 years) and season [cold season (November to March), hot season (June to August) and transition season (April, May, September and October)] [23, 30]. According to the AIC and previous studies [23, 31], the df of time was 3, 2 and 3 per year for the cold, hot and transition season, respectively. We also conducted a sensitivity analysis by changing the df from 5 to 9 per year for calendar time and from 3 to 8 for temperature and relative humidity.
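The two lag structures used here can be illustrated with a short sketch. This is not the study's R/"mgcv" code; it is a hedged Python illustration with hypothetical concentrations, showing how a single-day lag (lag 0 to lag 5) and a multiple-day moving-average lag (lag 01 to lag 05) would be derived from a daily pollutant series:

```python
# Illustrative construction of lagged exposures for a daily pollutant series.
# The PM2.5 values below are hypothetical, not study data.

def single_day_lag(series, lag):
    """Concentration observed `lag` days before each day (None where unavailable)."""
    return [series[i - lag] if i - lag >= 0 else None for i in range(len(series))]

def moving_average_lag(series, max_lag):
    """Mean of the current day and the preceding `max_lag` days (e.g. lag 05 = 6-day mean)."""
    out = []
    for i in range(len(series)):
        if i - max_lag < 0:
            out.append(None)  # not enough history yet
        else:
            window = series[i - max_lag : i + 1]
            out.append(sum(window) / len(window))
    return out

pm25 = [40.0, 55.0, 61.0, 48.0, 70.0, 52.0, 66.0]  # hypothetical daily PM2.5 (ug/m3)

lag1 = single_day_lag(pm25, 1)       # exposure on the previous day (lag 1)
lag05 = moving_average_lag(pm25, 5)  # six-day moving average (lag 05)
print(lag1)
print(lag05)
```

In the actual analysis these lagged series would enter the GAM as the pollutant term Xt, one lag at a time.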
All statistical tests were two-sided at a 5% level of significance. All analyses were conducted using R software (version 3.5.2), with the GAM fitted by the "mgcv" package (version 1.8–26). The effect estimates are expressed as percentage changes, with 95% confidence intervals (CIs), in daily children's outpatient visits for respiratory diseases per 10 μg/m3 increase in air pollutant concentrations. ArcGIS 10.2.2 software (ESRI, Redlands, California, USA) and GraphPad Prism 7.00 software were used to plot the figures.
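Because the model is log-linear, a regression coefficient β is converted to a percentage change per 10 μg/m3 as (e^(10β) − 1) × 100%, with the CI obtained the same way from β ± 1.96·SE. A minimal Python sketch of this standard conversion (the coefficient and standard error below are hypothetical, chosen only to be of the same scale as the reported estimates, not the study's actual model output):

```python
import math

def percent_change(beta, se, delta=10.0, z=1.96):
    """Percentage change (point estimate, lower, upper 95% CI) in visits per
    `delta` ug/m3 increase, given a log-linear coefficient `beta` and its SE."""
    point = (math.exp(beta * delta) - 1.0) * 100.0
    lower = (math.exp((beta - z * se) * delta) - 1.0) * 100.0
    upper = (math.exp((beta + z * se) * delta) - 1.0) * 100.0
    return point, lower, upper

# Hypothetical values: beta = 0.00247 per ug/m3, SE = 0.00048
pt, lo, hi = percent_change(0.00247, 0.00048)
print(f"{pt:.2f}% (95% CI: {lo:.2f}, {hi:.2f}%)")
```

The same conversion applies to every percentage change quoted in the Results.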
Descriptive statistics of air pollutants, meteorological variables and respiratory disease outpatient data
There were 332,337 children's outpatient visits for respiratory diseases from January 1st, 2014 through December 31st, 2016 in the 15 major hospitals of Lanzhou. The mean concentrations of PM2.5, PM10, SO2 and NO2 during 2014–2016 were 54.52 μg/m3, 123.35 μg/m3, 22.97 μg/m3 and 51.80 μg/m3, respectively. In addition, the medians of temperature and relative humidity were 12.9 °C and 50%, respectively (Table 1). On average, there were approximately 303 respiratory disease outpatient visits per day in our study area; bronchitis and upper respiratory tract infection, boys, children aged 4–6 and 7–13 years, and the cold season accounted for more visits than the other groups (Table 2).
Table 1 Descriptive statistics on daily air pollutants and meteorological parameters
Table 2 Descriptive statistics on daily outpatient visits in Lanzhou, China, during 2014–2016
Figure 2 shows that daily air pollutant concentrations were higher in the cold season than in the hot season; for example, the interquartile ranges of PM10, PM2.5, NO2 and SO2 concentrations in the cold season were 70.20 μg/m3, 41.00 μg/m3, 32.20 μg/m3 and 20.00 μg/m3, respectively, compared with 37.10 μg/m3, 15.20 μg/m3, 22.90 μg/m3 and 9.50 μg/m3 in the hot season. Moreover, the seasonal trend of total respiratory outpatient visits was similar to that of the daily air pollutant concentrations.
Box plots of air pollutants and total outpatients with respiratory diseases in the cold, transition and hot season. Boxes indicate the interquartile range (25th percentile-75th percentile); lines within boxes indicate medians; whiskers below boxes represent minimum values; whiskers and dots above boxes indicate maximum values. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
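The interquartile ranges reported above and summarized in the box plots are the spread between the 25th and 75th percentiles. A minimal sketch using Python's standard library, with hypothetical daily values rather than study data:

```python
import statistics

def interquartile_range(values):
    """IQR = 75th percentile - 25th percentile (exclusive quartile method)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    return q3 - q1

# Hypothetical cold-season daily PM10 concentrations (ug/m3)
cold_pm10 = [90.0, 110.0, 125.0, 140.0, 160.2, 180.0]
print(interquartile_range(cold_pm10))
```

Note that `statistics.quantiles` defaults to the exclusive method; other quantile conventions give slightly different IQRs, so the exact figures depend on the method used.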
Associations between air pollutants and outpatient visits for respiratory diseases
In Fig. 3, we observed significantly positive associations between respiratory disease outpatient visits and the concentrations of NO2 and SO2. In the single-pollutant models, PM2.5, NO2 and SO2 were significantly associated with increased respiratory outpatient visits (Fig. 4). Each 10 μg/m3 increase in PM2.5 was significantly associated with total respiratory outpatient visits only at lag 0, lag 01 and lag 02. The increments in respiratory outpatient visits were highest at lag 05 for NO2 and SO2: visits at lag 05 increased by 2.50% (95% CI: 1.54, 3.48%) and 3.50% (95% CI: 1.51, 5.53%) per 10 μg/m3 increase in NO2 and SO2, respectively. In the cause-specific analysis, PM2.5 showed significant effects on outpatient visits for bronchitis and upper respiratory tract infection and for other respiratory diseases, but no significant effect of PM10 was observed for any type of respiratory disease (Fig. 5). For NO2, significantly positive associations were found for pneumonia, asthma, and bronchitis and upper respiratory tract infection, with the greatest increases in outpatient visits of 1.73% (95% CI: 0.37, 3.11%) at lag 04, and 3.28% (95% CI: 0.71, 5.91%) and 2.60% (95% CI: 1.59, 3.63%) at lag 05, respectively. Moreover, for SO2, we found significantly positive associations for pneumonia, bronchitis and upper respiratory tract infection, and other respiratory diseases at lag 05.
The exposure-response curves of air pollutant concentrations and total outpatient visits for respiratory diseases in Lanzhou, China, during 2014–2016. The X-axis is the concurrent-day air pollutant concentration (μg/m3); the Y-axis is the predicted log relative risk (RR), shown by the solid line, and the dotted lines represent the 95% confidence interval (CI). PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of children's outpatient visits for total respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of children's outpatient visits for cause-specific respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
After stratification by sex, the effects of PM10 on respiratory outpatient visits were not statistically significant for either boys or girls (Fig. 6). Each 10 μg/m3 increase in PM2.5 was significantly associated with respiratory outpatient visits only at lag 0 for boys, but at lag 0, lag 01, lag 02 and lag 03 for girls. Each 10 μg/m3 increment in NO2 and SO2 was positively associated with respiratory outpatient visits for boys, with the greatest increases at lag 05 [2.46% (95% CI: 1.46, 3.46%) and 3.25% (95% CI: 1.20, 5.34%), respectively], and for girls, with the greatest increases at lag 05 [2.58% (95% CI: 1.50, 3.67%) and 3.89% (95% CI: 1.66, 6.16%), respectively]. Across age groups, NO2 and SO2 were positively related to respiratory outpatient visits at all ages, but PM2.5 only in children aged 0–3 and 7–13 years (Fig. 7). The effect of NO2 was highest among children aged 0–3 years at lag 05 [3.45% (95% CI: 2.37, 4.54%)]. Likewise, the maximum increase in respiratory outpatient visits per 10 μg/m3 increase in SO2 occurred at lag 05 in children aged 0–3 years [4.67% (95% CI: 1.22, 8.24%)]. In addition, the greatest increments in respiratory outpatient visits per 10 μg/m3 increase occurred at lag 05 for PM10 [0.60% (95% CI: 0.21, 0.99%)], PM2.5 [2.52% (95% CI: 1.45, 3.60%)] and SO2 [7.95% (95% CI: 5.40, 10.55%)] in the cold season, but for NO2 [4.02% (95% CI: 2.08, 5.99%)] in the transition season (Fig. 8). Positive correlations were observed among air pollutants: PM2.5 with PM10 (r = 0.73), SO2 (r = 0.60) and NO2 (r = 0.57); PM10 with SO2 (r = 0.33) and NO2 (r = 0.39); and SO2 with NO2 (r = 0.53) (Table 3).
Percentage change (95% confidence interval) of daily children's outpatient visits for respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants stratified by sex for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of daily children's outpatient visits for respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants stratified by age for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of daily children's outpatient visits for respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants stratified by season for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Table 3 Pearson correlation analysis of pollutants
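The pairwise pollutant correlations in Table 3 are ordinary Pearson coefficients. A minimal, self-contained sketch of the computation (the two short series are hypothetical, not the study's daily data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pm25 = [40.0, 55.0, 61.0, 48.0, 70.0]       # hypothetical daily PM2.5 (ug/m3)
pm10 = [100.0, 120.0, 135.0, 110.0, 150.0]  # hypothetical daily PM10 (ug/m3)
print(round(pearson_r(pm25, pm10), 2))
```

High correlations between co-pollutants (e.g. r = 0.73 for PM2.5 and PM10) are the reason the two-pollutant models below are needed to check the robustness of each single-pollutant estimate.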
After the optimal lag day for each pollutant was determined in the single-pollutant models, two-pollutant models were used to adjust for the other pollutants. Table 4 compares the results of the single-pollutant models with those of the two-pollutant models using exposure at lag 05. After adjusting for PM10 or PM2.5 in the two-pollutant models, the percentage increases in total respiratory disease outpatient visits for NO2 and SO2 remained statistically significant, with a slight increase. However, after controlling for NO2 and SO2, the percentage changes for PM2.5 and PM10 were not statistically associated with total respiratory disease outpatient visits, similar to the results of the single-pollutant models.
Table 4 Percentage change (95% confidence interval) of total children respiratory outpatients per 10 μg/m3 increase in concentrations of pollutants in the single and two-pollutant models
Lanzhou has a population of over 3.7 million, with children accounting for 14% in 2016 [32]. In this study, we observed 332,337 children's outpatient visits for respiratory diseases within 3 years, suggesting that respiratory diseases are a major health problem among children in Lanzhou. Many studies on air pollution and children's respiratory diseases have been conducted in Chinese cities with humid climates, such as Shenzhen [33] and Hefei [34]. However, research comprehensively comparing the effects of air pollution on respiratory diseases across different groups (sex, age, season and cause-specific diseases) is still limited, especially in cities with arid climates. Therefore, our results may add to the limited scientific evidence that air pollution affects the incidence of respiratory diseases among children from different subgroups in an arid-climate city.
The results showed that PM2.5, NO2 and SO2 were significantly associated with increased total respiratory outpatient visits among children. A study in Shanghai during 2013–2015 found that an interquartile range (IQR) increase in PM2.5, SO2 and NO2 was associated with an 8.81, 17.26 and 17.02% increase in daily pediatric respiratory emergency visits at lag 03, respectively [35], higher than in our study. A possible explanation is that the air pollution level in Shanghai was rising during 2013–2015, whereas it has been persistently declining in Lanzhou since 2013 [36]. In contrast, a study conducted in Yichang, China, during 2014–2015 observed that each IQR increase in PM2.5 and NO2 concentrations corresponded to a 1.91 and 1.88% increase in pediatric respiratory outpatient visits on the current day, respectively [37], higher than in our study for PM2.5 but lower for NO2. This may be because the daily average concentration of PM2.5 in Yichang was higher than in Lanzhou (84.9 μg/m3 vs 54.52 μg/m3), while that of NO2 was lower (37.4 μg/m3 vs 51.80 μg/m3) [37]. However, the associations between PM10 and total respiratory outpatient visits were not significant, inconsistent with the findings of other studies [35, 37]. Shanghai is characterized by a higher degree of urbanization and industrialization than Lanzhou, so its PM10 mainly comes from traffic and industrial pollution sources, similar to Yichang [36, 38]. In contrast, PM10 in Lanzhou is mainly contributed by raised dust containing higher levels of crustal elements, which is not as toxic as the PM10 in Shanghai and Yichang [39]. Even so, our results indicate that air pollution is positively related to respiratory diseases among children in Lanzhou.
It is well known that air pollutants are risk factors for many respiratory diseases in children. An eight-year time-series study in Hanoi showed that all air pollutants (PM10, PM2.5, NO2 and SO2) were positively associated with pneumonia, bronchitis and asthma hospitalizations among children [18], as also reported in Shijiazhuang [23] and Taiwan [9]. Consistent with these studies, we found that all air pollutants except PM10 were positively related to outpatient visits for bronchitis and upper respiratory tract infection. Given that bronchitis and upper respiratory tract infection were the major types of respiratory disease (87.45% of all respiratory outpatient visits) in Lanzhou, the effect of air pollution may explain part of this burden. For asthma, a gaseous pollutant such as NO2 is a well-established major risk factor, as confirmed in a study covering a broad range of exposures and diverse pediatric populations published in the Lancet [40]. A similar result was found in our study in an arid climate. Therefore, although the Lanzhou government has worked actively, and received international recognition, for reducing air pollution [41], more effort is needed to reduce air pollution from vehicle exhaust.
In the stratified analysis, the impact of air pollution was more pronounced in girls than in boys, consistent with a study of children's respiratory outpatient visits in Taiwan [9]. A review showed that girls have smaller lungs and shorter, wider airways, and exhibit higher forced expiratory flow rates than boys [42]; the airways of girls may therefore be less able to keep out air pollutants. However, results on sex differences in the health effects of air pollutants are inconsistent. Similar studies conducted in Beijing on children with asthma [43], in Ningbo on children with respiratory infections [44], in Jinan on respiratory disease outpatients [45] and in Hanoi on children with lower respiratory infections [18] found no obvious difference between boys and girls. Thus, additional studies are needed to clarify whether sex differences exist in the associations between air pollutants and respiratory diseases among children. Regarding age, we found that younger children (0–3 years) were more vulnerable to air pollution. A study of pneumonia in Ningbo observed stronger associations between air pollutants and children under 5 years [17], and a study in Hanoi also showed a positive relationship between airborne particles and daily hospital admissions for respiratory diseases among children aged < 5 years [46]. It is generally recognized that this high vulnerability among younger children can be attributed to their immature lungs, higher breathing rate [47] and predominantly oral breathing [48], which increase their exposure and susceptibility to respiratory infections. These factors, combined with an underdeveloped immune function, may together make infants and younger children more susceptible to air pollutants.
In the present study, the descriptive results showed that the concentrations of air pollutants in Lanzhou were higher in the cold season, consistent with a study in Shennongjia [49]. A previous study suggested that winter is the most polluted season [50]. In Northeastern and Northwestern China, owing to the cold winter climate and regional living habits, air pollution mainly comes from coal burning, motor vehicles and industrial production [51, 52]. Lanzhou is located in Northwestern China in a long, narrow valley basin with low wind speeds and stable stratification, especially temperature inversions, which block air flow and make pollutants difficult to disperse [53]. In addition, coal use in winter further increases the level of air pollution [36]. These factors may make air pollution in Lanzhou most severe in the cold season, which may explain why we found the greatest effects of PM10, PM2.5 and SO2 on children's respiratory outpatient visits in that season. The result for NO2 agrees with a similar study in Shijiazhuang [23] but is inconsistent with the study in Yichang [37]. This may also be explained by the different sources of air pollutants among these cities: NO2 was a major air pollutant in Yichang but not in Lanzhou or Shijiazhuang.
Our study has several limitations. First, owing to limited data accessibility and availability, the study covered only 3 years, which may not be sufficient to fully evaluate the effects of air pollution on children's respiratory outpatient visits; nevertheless, we provide preliminary evidence from a city with a distinctive topography, an arid climate and a large sample. Second, the air pollution data were collected from only four monitoring stations, and their average values may not fully represent the true air quality across Lanzhou; data from more monitoring stations are needed. Third, unknown or unmeasured confounders, such as indoor air pollution and second-hand smoke exposure, may exist and affect the associations. All these limitations should be addressed in future studies.
Our results indicate that air pollution exposure may account for an increased risk of outpatient visits for respiratory diseases among children in Lanzhou, particularly for younger children and in the cold season. To our knowledge, this is the first study to investigate the short-term effects of air pollution on children's respiratory morbidity based on a large population in Northwestern China. The estimated percentage changes may help in monitoring the disease burden caused by air pollution among children in Lanzhou and strengthen the urgency of controlling air pollution there. Since children are much more susceptible to air pollution, more targeted strategies are needed to address the high burden of respiratory diseases among children, such as promoting the use of personal protective equipment (e.g., respirators, air purifiers) and avoiding outdoor activities on heavily polluted days in Lanzhou.
The datasets used and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.
PM10 :
Particulate matter with aerodynamic diameter ≤ 10 μm
PM2.5 :
Particulate matter with aerodynamic diameter ≤ 2.5 μm
NO2 :
Nitrogen dioxide
SO2 :
Sulfur dioxide
GAM:
Generalized additive model
RR:
Relative risk
WHO. World Health Statistics 2018: Monitoring health for the SDGs; 2018.
Nakao M, Ishihara Y, Kim C, Hyun I. The impact of air pollution, including Asian sand dust, on respiratory symptoms and health-related quality of life in outpatients with chronic respiratory disease in Korea: a panel study. J Prev Med Public Health. 2018;51(3):130–9.
Wanka ER, Bayerstadler A, Heumann C, Nowak D, Jörres RA, Fischer R. Weather and air pollutants have an impact on patients with respiratory diseases and breathing difficulties in Munich, Germany. Int J Biometeorol. 2014;58(2):249–62.
Slama A, Śliwczyński A, Woźnica J, Zdrolik M, Wiśnicki B, Kubajek J, Turżańska-Wieczorek O, Gozdowski D, Wierzba W, Franek E. Impact of air pollution on hospital admissions with a focus on respiratory diseases: a time-series multi-city analysis. Environ Sci Pollut R. 2019;26(17):16998–7009.
Kim S, Peel JL, Hannigan MP, Dutton SJ, Sheppard L, Clark ML, Vedal S. The temporal lag structure of short-term associations of fine particulate matter chemical constituents and cardiovascular and respiratory hospitalizations. Environ Health Persp. 2012;120(8):1094–9.
Sinclair AH, Edgerton ES, Wyzga R, Tolsma D. A two-time-period comparison of the effects of ambient air pollution on outpatient visits for acute respiratory illnesses. J Air Waste Manag Assoc. 2010;60(2):163–75.
Greenberg N, Carel R, Portnov BA. Air pollution and respiratory morbidity in Israel: a review of accumulated empiric evidence. Isr Med Assoc J. 2015;17(7):445–50.
Vodonos A, Friger M, Katra I, Avnon L, Krasnov H, Koutrakis P, Schwartz J, Lior O, Novack V. The impact of desert dust exposures on hospitalizations due to exacerbation of chronic obstructive pulmonary disease. Air Quality, Atmosphere & Health. 2014;7(4):433–9.
Wang K, Chau T. An association between air pollution and daily outpatient visits for respiratory disease in a heavy industry area. PLoS One. 2013;8(10):e75220.
Wang C, Feng L, Chen K. The impact of ambient particulate matter on hospital outpatient visits for respiratory and circulatory system disease in an urban Chinese population. Sci Total Environ. 2019;666:672–9.
Mo Z, Fu Q, Zhang L, Lyu D, Mao G, Wu L, Xu P, Wang Z, Pan X, Chen Z, et al. Acute effects of air pollution on respiratory disease mortalities and outpatients in southeastern China. Sci Rep-Uk. 2018;8(1):1–9.
Pan H, Chen C, Sun H, Ku M, Liao P, Lu K, Sheu J, Huang J, Pai J, Lue K. Comparison of the effects of air pollution on outpatient and inpatient visits for asthma: a population-based study in Taiwan. PLoS One. 2014;9(5):1–19.
Sunyer J. The neurological effects of air pollution in children. Eur Respir J. 2008;32(3):535–7.
Alderete TL, Habre R, Toledo-Corral CM, Berhane K, Chen Z, Lurmann FW, Weigensberg MJ, Goran MI, Gilliland FD. Longitudinal associations between ambient air pollution with insulin sensitivity, beta-cell function, and adiposity in Los Angeles Latino children. Diabetes. 2017;66(7):1789–96.
Chen C, Chan C, Chen B, Cheng T, Leon GY. Effects of particulate air pollution and ozone on lung function in non-asthmatic children. Environ Res. 2015;137:40–8.
Chen Z, Cui L, Cui X, Li X, Yu K, Yue K, Dai Z, Zhou J, Jia G, Zhang J. The association between high ambient air pollution exposure and respiratory health of young children: a cross sectional study in Jinan, China. Sci Total Environ. 2019;656:740–9.
Li D, Wang J, Zhang Z, Shen P, Zheng P, Jin M, Lu H, Lin H, Chen K. Effects of air pollution on hospital visits for pneumonia in children: a two-year analysis from China. Environ Sci Pollut R. 2018;25(10):10049–57.
Nhung NTT, Schindler C, Dien TM, Probst-Hensch N, Perez L, Künzli N. Acute effects of ambient air pollution on lower respiratory infections in Hanoi children: an eight-year time series study. Environ Int. 2018;110:139–48.
Wise J. Better air quality reduces respiratory symptoms among children in southern California. BMJ. 2016;353:i2083.
WHO. Ambient (outdoor) air quality and health. https://www.who.int/en/news-room/fact-sheets/detail/ambient-(outdoor)-air-quality-and-health; 2018.
Chai G, He H, Sha Y, Zhai G, Zong S. Effect of PM2.5 on daily outpatient visits for respiratory diseases in Lanzhou, China. Sci Total Environ. 2019;649:1563–72.
Guan Q, Liu Z, Yang L, Luo H, Yang Y, Zhao R, Wang F. Variation in PM2.5 source over megacities on the ancient silk road, northwestern China. J Clean Prod. 2019;208:897–903.
Song J, Lu M, Zheng L, Liu Y, Xu P, Li Y, Xu D, Wu W. Acute effects of ambient air pollution on outpatient children with respiratory diseases in Shijiazhuang, China. Bmc Pulm Med. 2018;18(1):1–10.
National Bureau of Statistics. Gansu Development Yearbook for 2018; 2018.
Zhang Y, Kang S. Characteristics of carbonaceous aerosols analyzed using a multiwavelength thermal/optical carbon analyzer: a case study in Lanzhou City. Science China Earth Sciences. 2019;62(2):389–402.
Dominici F, McDermott A, Zeger SL, Samet JM. On the use of generalized additive models in time-series studies of air pollution and health. Am J Epidemiol. 2002;156(3):193–203.
Zhao Y, Hu J, Tan Z, Liu T, Zeng W, Li X, Huang C, Wang S, Huang Z, Ma W. Ambient carbon monoxide and increased risk of daily hospital outpatient visits for respiratory diseases in Dongguan, China. Sci Total Environ. 2019;668:254–60.
Li Q, Yang Y, Chen R, Kan H, Song W, Tan J, Xu F, Xu J. Ambient air pollution, meteorological factors and outpatient visits for eczema in Shanghai, China: a time-series analysis. Int J Env Res Pub He. 2016;13(11):1–10.
Guo Q, Liang F, Tian L, Schikowski T, Liu W, Pan X. Ambient air pollution and the hospital outpatient visits for eczema and dermatitis in Beijing: a time-stratified case-crossover analysis. Environ Sci. 2019;21(1):163–73.
Liu Z, Jin Y, Jin H. The effects of different space forms in residential areas on outdoor thermal comfort in severe cold regions of China. Int J Env Res Pub He. 2019;16(20):3960.
Duan Y, Liao Y, Li H, Yan S, Zhao Z, Yu S, Fu Y, Wang Z, Yin P, Cheng J, et al. Effect of changes in season and temperature on cardiovascular mortality associated with nitrogen dioxide air pollution in Shenzhen, China. Sci Total Environ. 2019;697:134051.
Lanzhou Municipal Bureau of Statistics. Analysis of population development of Lanzhou since the 13th five-year plan period; 2019.
Xia X, Zhang A, Liang S, Qi Q, Jiang L, Ye Y. The association between air pollution and population health risk for respiratory infection: a case study of Shenzhen, China. Int J Env Res Pub He. 2017;14(9):950.
Li YR, Xiao CC, Li J, Tang J, Geng XY, Cui LJ, Zhai JX. Association between air pollution and upper respiratory tract infection in hospital outpatients aged 0–14 years in Hefei, China: a time series study. Public Health. 2018;156:92–100.
Zhang H, Niu Y, Yao Y, Chen R, Zhou X, Kan H. The impact of ambient air pollution on daily hospital visits for various respiratory diseases and the relevant medical expenditures in Shanghai, China. Int J Env Res Pub He. 2018;15(3):1–10.
Su Y, Sha Y, Zhai G, Zong S, Jia J. Comparison of air pollution in Shanghai and Lanzhou based on wavelet transform. Environ Sci Pollut R. 2019;26(17):16825–34.
Liu Y, Xie S, Yu Q, Huo X, Ming X, Wang J, Zhou Y, Peng Z, Zhang H, Cui X, et al. Short-term effects of ambient air pollution on pediatric outpatient visits for respiratory diseases in Yichang city, China. Environ Pollut. 2017;227:116–24.
Yang Z, Li X, Deng J, Wang H. Stable sulfur isotope ratios and water-soluble inorganic compositions of PM10 in Yichang City, Central China. Environ Sci Pollut R. 2015;22(17):13564–72.
Jiang Y, Shi L, Guang A, Mu Z, Zhan H, Wu Y. Contamination levels and human health risk assessment of toxic heavy metals in street dust in an industrial city in Northwest China. Environ Geochem Hlth. 2018;40(5SI):2007–20.
Guarnieri M, Balmes JR. Outdoor air pollution and asthma. Lancet. 2014;383(9928):1581–92.
Liu J, Ruan Y, Wu Q, Ma Y, He X, Li L, Li S, Niu J, Luo B. Has the mortality risk declined after the improvement of air quality in an ex-heavily polluted Chinese city-Lanzhou? Chemosphere. 2020;242:125196.
Becklake MR, Kauffmann F. Gender differences in airway behaviour over the human life span. Thorax. 1999;54(12):1119–38.
Hua J, Yin Y, Peng L, Du L, Geng F, Zhu L. Acute effects of black carbon and PM2.5 on children asthma admissions: a time-series study in a Chinese city. Sci Total Environ. 2014;481:433–8.
Zheng P, Wang J, Zhang Z, Shen P, Chai P, Li D, Jin M, Tang M, Lu H, Lin H, et al. Air pollution and hospital visits for acute upper and lower respiratory infections among children in Ningbo, China: a time-series analysis. Environ Sci Pollut R. 2017;24(23):18860–9.
Wang S, Li Y, Niu A, Liu Y, Su L, Song W, Liu J, Liu Y, Li H. The impact of outdoor air pollutants on outpatient visits for respiratory diseases during 2012–2016 in Jinan. China Resp Res. 2018;19(1):1–8.
Luong LMT, Phung D, Sly PD, Morawska L, Thai PK. The association between particulate air pollution and respiratory admissions among young children in Hanoi, Vietnam. Sci Total Environ. 2017;578:249–55.
Sigmund E, De Ste CM, Miklankova L, Fromel K. Physical activity patterns of kindergarten children in comparison to teenagers and young adults. Eur J Public Health. 2007;17(6):646–51.
Esposito S, Tenconi R, Lelii M, Preti V, Nazzari E, Consolo S, Patria MF. Possible molecular mechanisms linking air pollution and asthma in children. Bmc Pulm Med. 2014;14:1–8.
Liu C, Liu Y, Zhou Y, Feng A, Wang C, Shi T. Short-term effect of relatively low level air pollution on outpatient visit in Shennongjia, China. Environ Pollut. 2019;245:419–26.
Chen W, Yan L, Zhao H. Seasonal variations of atmospheric pollution and air quality in Beijing. Atmosphere-Basel. 2015;6(11):1753–70.
Xiao Q, Ma Z, Li S, Liu Y. The impact of winter heating on air pollution in China. PLoS One. 2015;10(1):e117311.
He J, Lu S, Yu Y, Gong S, Zhao S, Zhou C. Numerical simulation study of winter pollutant transport characteristics over Lanzhou City, Northwest China. Atmosphere-Basel. 2018;9(10):1–8.
Chu PC, Chen Y, Lu S, Li Z, Lu Y. Particulate air pollution in Lanzhou China. Environ Int. 2008;34(5):698–713.
This work was supported by the National Natural Science Foundation of China (4187050043), the Foundation of the Ministry of Education Key Laboratory of Cell Activities and Stress Adaptations, Lanzhou University, China (lzujbky-2020-sp21), and the Chengguan Science and Technology Planning Project, Lanzhou, China (2017SHFZ0043).
Yueling Ma and Li Yue contributed equally to this work.
Institute of Occupational Health and Environmental Health, School of Public Health, Lanzhou University, Lanzhou, Gansu, 730000, People's Republic of China
Yueling Ma, Jiangtao Liu, Xiaotao He, Lanyu Li, Jingping Niu & Bin Luo
Gansu Provincial Maternity and Child Health Care Hospital, Lanzhou, Gansu, 730000, People's Republic of China
Li Yue
Shanghai Typhoon Institute, China Meteorological Administration, Shanghai, 200030, China
Bin Luo
Shanghai Key Laboratory of Meteorology and Health, Shanghai Meteorological Bureau, Shanghai, 200030, China
Yueling Ma
Jiangtao Liu
Xiaotao He
Lanyu Li
Jingping Niu
BL, JPN and YLM contributed to idea formulation, study design, data preparation, data analysis, reporting results, data interpretation, and writing of the manuscript. LY and JTL contributed to data preparation and data analysis. XTH and LYL contributed to study design and interpretation of the data. All authors have seen and approved the final version.
Correspondence to Bin Luo.
The environmental data were collected from open-access websites, so consent to participate was not applicable. The hospital admission data were obtained from the Lanzhou Center for Disease Control and Prevention with official permission. The study protocol, including data use, was approved by the ethics committee of Lanzhou University (project identification code: IRB190612–1).
Ma, Y., Yue, L., Liu, J. et al. Association of air pollution with outpatient visits for respiratory diseases of children in an ex-heavily polluted Northwestern city, China. BMC Public Health 20, 816 (2020). https://doi.org/10.1186/s12889-020-08933-w
Random geometry
Seminars (RGM)
RGMW01 12th January 2015
10:00 to 11:00 JP Miller Gaussian Free Field 1
11:30 to 12:30 Random Planar Maps 1
13:30 to 14:30 Schramm-Loewner Evolution 1
09:00 to 10:00 Discrete Lattice Models 1
10:00 to 11:00 Gaussian Multiplicative Chaos 1
RGMW01 21st January 2015
15:00 to 16:00 J Miller Gaussian Multiplicative Chaos 6
RGMW01 22nd January 2015
RGMW01 23rd January 2015
10:00 to 11:00 W Werner Renormalization via merging trees
11:30 to 12:30 The extremal process in nested conformal loops
13:30 to 14:30 Ising Model, Conformal Field Theory, etc
15:00 to 16:00 Aperiodic hierarchical conformal tilings: random at the ends?
09:00 to 10:00 Parafermionic observables and order of the phase transition in planar random-cluster models
10:00 to 11:00 Liouville Quantum Gravity on the Riemann sphere
11:30 to 12:30 tba
13:30 to 14:30 A conformally invariant metric on CLE(4)
15:00 to 16:00 Conformal invariance of boundary touching loops of FK Ising model
09:00 to 10:00 SLE correlations and singular vectors
10:00 to 11:00 Almost sure multifractal spectrum of SLE
11:30 to 12:30 From the critical Ising model to spanning trees
13:30 to 14:30 GFF with SLE and KPZ
15:00 to 16:00 Generalized Multifractality of Whole-Plane SLE
09:00 to 10:00 Liouville quantum gravity as a mating of trees
10:00 to 11:00 Geodesics in Brownian surfaces
11:30 to 12:30 V Beffara Drawing maps
13:30 to 14:30 Some scaling limit results for critical Fortuin-Kastelyn random planar map model
09:00 to 10:00 D Chelkak Scaling limits of critical Ising correlation functions in planar domains
10:00 to 11:00 Renormalization Approach to the 2-Dimensional Uniform Spanning Tree
11:30 to 12:30 The Z-invariant massive Laplacian on isoradial graphs
13:30 to 14:30 G Pete On near-critical SLE(6) and on the tail in Cardy's formula
RGM 2nd February 2015
16:00 to 17:00 Rothschild Distinguished Visiting Fellow Lecture: Random maps and random 2-dimensional geometries
RGM 5th February 2015
16:00 to 17:00 J-C Mourrat The dynamic phi^4 model in the plane
RGM 11th March 2015
12:30 to 13:30 On random Hamilton-Jacobi equation and "other" KPZ
RGMW03 16th March 2015
10:00 to 11:00 Compensated Fragmentations
11:30 to 12:30 Random trees constructed by aggregation
14:00 to 15:00 W Kendall Google maps and improper Poisson line processes
15:30 to 16:30 A line-breaking construction of the stable trees
10:00 to 11:00 The Compulsive Gambler process
11:30 to 12:30 Delocalization of two-dimensional random surfaces with hard-core constraints
14:00 to 15:00 Small-particle limits in a regularized Laplacian random growth model
15:30 to 16:30 Local graph coloring
10:00 to 11:00 Branching Brownian motion, the Brownian net and selection in spatially structured populations
11:30 to 12:30 Rigorous results for a population model with selection
14:00 to 15:00 A Veber Genealogies with recombination in spatial population genetics
15:30 to 16:30 Branching processes with competition by pruning of Levy trees
10:00 to 11:00 A multi-scale refinement of the second moment method Co-authors: L.P
11:30 to 12:30 Log-correlated Gaussian fields: study of the Gibbs measure
14:00 to 15:00 On the complex cascade and the complex branching Brownian motion
15:30 to 16:30 L-P Arguin Maxima of log-correlated Gaussian fields and of the Riemann Zeta function on the critical line
10:00 to 11:00 Scale-free percolation
11:30 to 12:30 Rate of convergence of the mean of sub-additive ergodic processes
14:00 to 15:00 Uniformity of the late points of random walk on $\mathbb{Z}_d^n$ for $d \geq 3$
15:30 to 16:30 Maxima of logarithmically correlated fields
RGMW04 20th April 2015
10:00 to 11:00 Compact Brownian surfaces
11:30 to 12:30 Blossoming trees and the scaling limit of maps
14:00 to 15:00 T Budd Scaling constants and the lazy peeling of infinite Boltzmann planar maps
15:30 to 16:30 Critical exponents in FK-weighted planar maps
RGMW04 21st April 2015
10:00 to 11:00 Scaling limits of the uniform spanning tree
11:30 to 12:30 A hidden quantum group for pure partition functions of multiple SLEs
14:00 to 15:00 Self-avoiding Walk and Connective Constant
15:30 to 16:30 I Kortchemski Looptrees
RGMW04 22nd April 2015
09:00 to 10:00 On the geometry of discrete and continuous random planar maps
10:00 to 11:00 G Schaeffer On classes of planar maps with $\alpha$-orientations having geometric interpretations
11:30 to 12:30 G Borot Nesting statistics in the O(n) loop model on random lattices
14:00 to 15:00 O Bernardi Differential equations for colored maps
15:30 to 16:30 O Gurel Gurevich Recurrence of planar graph limits
RGMW04 23rd April 2015
09:00 to 10:00 The uniform spanning forest of planar graphs
10:00 to 11:00 Characteristic polynomials of random matrices and logarithmically correlated processes
11:30 to 12:30 Pinning and disorder relevance for the lattice Gaussian free field
14:00 to 15:00 Planar lattices do not recover from forest fires
15:30 to 16:30 The exact $k$-SAT threshold for large $k$
10:00 to 11:00 Squarings of rectangles
11:30 to 12:30 Parabolic and Hyperbolic Unimodular maps
14:00 to 15:00 Liouville Quantum gravity on Riemann surfaces
15:30 to 16:30 Scaling limits of random planar maps and growth-fragmentations
RGMW05 15th June 2015
10:00 to 11:00 W Werner Some news from the loop-soup front
11:30 to 12:30 Conformal representations of Random Maps and Surfaces
14:00 to 15:00 J Miller Liouville quantum gravity and the Brownian map
15:30 to 16:30 Essential spanning forests on periodic planar graphs
09:00 to 10:00 Competitive erosion is conformally invariant
10:10 to 11:10 A-S Sznitman On Disconnection, random walks, random interlacements, and the Gaussian free field
14:00 to 15:00 B Werness Convergence of discrete holomorphic functions on non-uniform lattices
15:30 to 16:30 Conformal restriction: the chordal and the radial
09:00 to 10:00 Scaling window of Bernoulli percolation on Z^d
10:10 to 11:10 A (slightly) new look at the backbone
11:30 to 12:30 A random walk proof of Kirchhoff's matrix tree theorem
14:00 to 15:00 Return of the Multiplicative Coalescent
15:30 to 16:30 Where Planar Simple Random Walk Loses its Rotational Symmetry
10:00 to 11:00 SLE Quantum Multifractality
11:30 to 12:30 Radial SLE martingale-observables
14:00 to 15:00 Scaling limit of the probability that loop-erased random walk uses a given edge
15:20 to 16:20 Welding of the Backward SLE and Tip of the Forward SLE
16:30 to 17:30 Boundary Measures and Natural Time Parameterization for SLE
10:00 to 11:00 C Burdzy Twin peaks
11:30 to 12:30 Loewner curvature
14:00 to 15:00 From Internal DLA to self-interacting walks
RGMW06 9th July 2018
10:00 to 11:00 Sourav Chatterjee An introduction to gauge theories for probabilists: Part I
11:15 to 12:15 Jason Miller Random walk on random planar maps I
13:45 to 14:30 Jian Ding Percolation for level-sets of Gaussian free fields on metric graphs
14:35 to 15:20 Vincent Tassion The phase transition for Boolean percolation
RGMW06 10th July 2018
09:10 to 09:55 Nina Holden Cardy embedding of uniform triangulations
10:00 to 11:00 Gregory Miermont Exploring random maps: slicing, peeling and layering - 1
11:15 to 12:15 Sourav Chatterjee An introduction to gauge theories for probabilists: Part II
13:45 to 14:30 Ofer Zeitouni On the Liouville heat kernel and Liouville graph distance (joint with Ding and Zhang)
14:35 to 15:20 Wei Qian Uniqueness of the welding problem for SLE and LQG
09:10 to 09:55 Gilles Schaeffer The combinatorics of Hurwitz numbers and increasing quadrangulations
10:00 to 11:00 Jason Miller Random walk on random planar maps II
11:15 to 12:15 Antti Kupiainen Quantum Liouville Theory
09:10 to 09:55 Tom Hutchcroft An operator-theoretic approach to nonamenable percolation
10:00 to 11:00 Sourav Chatterjee An introduction to gauge theories for probabilists: Part III
13:45 to 14:30 Julien Dubedat Stochastic Ricci Flow
14:35 to 15:20 Igor Kortchemski Condensation in critical Cauchy Bienaymé-Galton-Watson trees
09:10 to 09:30 Cyril Marzouk Geometry of large random planar maps with a prescribed degree sequence
09:35 to 09:55 Joonas Turunen Critical Ising model on random triangulations of the disk: enumeration and limits
11:15 to 12:15 Jason Miller Random walk on random planar maps III
13:45 to 14:30 Eveliina Peltola Multiple SLEs, discrete interfaces, and crossing probabilities
14:35 to 15:20 Jason Schweinsberg Yaglom-type limit theorems for branching Brownian motion with absorption
10:00 to 11:00 Wendelin Werner Conformal loop ensembles on Liouville quantum gravity 1
11:15 to 12:15 Allan Sly Phase transitions of Random Constraint Satisfaction Problems - 1
13:45 to 14:30 Ellen Powell A characterisation of the Gaussian free field
14:35 to 14:55 Thomas Budzinski Simple random walk on supercritical causal maps
15:00 to 15:20 Lukas Schoug A multifractal SLE_kappa(rho) boundary spectrum
09:10 to 09:55 Tim Budd Lattice walks & peeling of planar maps
13:45 to 14:30 Yilin Wang Geometric descriptions of the Loewner energy
14:35 to 15:20 Marcin Lis Circle patterns and critical Ising models
09:10 to 09:30 Danny Nam Cutoff for the Swendsen-Wang dynamics
09:35 to 09:55 Antoine Jego Thick Points of Random Walk and the Gaussian Free Field
09:10 to 09:55 Perla Sousi Capacity of random walk and Wiener sausage in 4 dimensions
10:00 to 11:00 Vincent Vargas The semiclassical limit of Liouville conformal field theory
11:15 to 12:15 Beatrice de Tiliere The Z-Dirac and massive Laplacian operators in the Z-invariant Ising model
13:45 to 14:30 Ewain Gwynne The fractal dimension of Liouville quantum gravity: monotonicity, universality, and bounds
14:35 to 15:20 Adrien Kassel Quantum spanning forests
09:10 to 09:55 Guillaume Remy Exact formulas on Gaussian multiplicative chaos and Liouville theory
10:00 to 11:00 Nicolas Curien Random stable maps : geometry and percolation
11:15 to 12:15 Remi Rhodes Towards quantum Kähler geometry
13:45 to 14:05 Joshua Pfeffer External DLA on a spanning-tree-weighted random planar map
14:10 to 14:30 Tunan Zhu Distribution of gaussian multiplicative chaos on the unit interval | CommonCrawl |
\begin{document}
{ \begin{center} {\Large\bf The Nevanlinna-type formula for the matrix Hamburger moment problem in a general case.} \end{center} \begin{center} {\bf S.M. Zagorodnyuk} \end{center}
\section{Introduction.} Recall that the matrix Hamburger moment problem consists of finding a left-continuous non-decreasing matrix function $M(x) = ( m_{k,l}(x) )_{k,l=0}^{N-1}$ on $\mathbb{R}$, $M(-\infty)=0$, such that \begin{equation} \label{f1_1} \int_\mathbb{R} x^n dM(x) = S_n,\qquad n\in \mathbb{Z}_+, \end{equation} where $\{ S_n \}_{n=0}^\infty$ is a prescribed sequence of Hermitian $(N\times N)$ complex matrices (moments), $N\in \mathbb{N}$. The moment problem~(\ref{f1_1}) is said to be {\it determinate} if it has a unique solution and {\it indeterminate} in the opposite case.
\noindent This problem was introduced in~1949 by Krein~\cite{cit_100_K}, who described all solutions in the case when the corresponding J-matrix defines a symmetric operator with maximal defect numbers. This result appeared without proof in~\cite{cit_200_K} (in 1965 Berezansky proved the main fact of Krein's theory, the convergence of the series of polynomials of the first kind, even for the operator moment problem~\cite[Ch.7, Section 2]{cit_700_Ber}). Under similar conditions, descriptions of solutions were obtained by Kovalishina~\cite{cit_300_K}, by Lopez-Rodriguez~\cite{cit_400_L} and by Dyukarev~\cite{cit_500_D}.
In the scalar case, a description of all solutions of the moment problem~(\ref{f1_1}) can be found, e.g., in~\cite{cit_600_Akh},\cite{cit_700_Ber} for the nondegenerate case, and in~\cite{cit_800_AK} for the degenerate case.
Set \begin{equation} \label{f1_3} \Gamma_n = \left( \begin{array}{cccc} S_0 & S_1 & \ldots & S_n\\ S_1 & S_2 & \ldots & S_{n+1}\\ \vdots & \vdots & \ddots & \vdots\\ S_n & S_{n+1} & \ldots & S_{2n}\end{array} \right),\qquad n\in \mathbb{Z}_+. \end{equation} It is well known that the following condition \begin{equation} \label{f1_4} \Gamma_n \geq 0,\qquad n\in \mathbb{Z}_+, \end{equation} is necessary and sufficient for the solvability of the moment problem~(\ref{f1_1}).
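To make the criterion concrete: condition~(\ref{f1_4}) can be tested numerically for any finite stock of moments. The sketch below is an illustration only (the support points $t_i$ and the positive semidefinite weights $W_i$ are made-up data for $N=2$): it builds $\Gamma_3$ as a block Hankel matrix and checks $\Gamma_3\geq 0$, which must hold here since the moments come from an actual matrix measure.

```python
import numpy as np

# Illustration only: moments S_n of a discrete matrix measure
# dM = sum_i W_i * delta_{t_i} with W_i >= 0 (made-up data, N = 2).
t = [-1.0, 0.0, 2.0]
Ws = [np.array([[2.0, 1.0], [1.0, 1.0]]),
      np.array([[1.0, 0.0], [0.0, 3.0]]),
      np.array([[1.0, -1.0], [-1.0, 2.0]])]
S = [sum(ti**n * Wi for ti, Wi in zip(t, Ws)) for n in range(7)]  # S_0..S_6

# Gamma_3 = (S_{k+l})_{k,l=0}^3 as in (f1_3): a (4N x 4N) block Hankel matrix.
Gamma3 = np.block([[S[k + l] for l in range(4)] for k in range(4)])
eig = np.linalg.eigvalsh(Gamma3)       # Gamma3 is real symmetric
assert eig.min() > -1e-9               # Gamma_3 >= 0, consistent with (f1_4)
```

Here $\Gamma_3 = \sum_i (v_i v_i^T)\otimes W_i$ with $v_i = (1, t_i, t_i^2, t_i^3)^T$, so positivity is automatic; for moments that do not come from a measure, a negative eigenvalue would certify unsolvability.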
For a recent discussion of the truncated matrix Hamburger moment problems we refer to the paper~\cite{cit_820_DFKMT} and the references therein. It is worth mentioning that for the truncated moment problems much has been done for the degenerate case, as well. The case of the full moment problem~(\ref{f1_1}) has not been investigated to the same extent. In~\cite{cit_850_Z} we presented an analytic description of all solutions of the matrix Hamburger moment problem~(\ref{f1_1}) under condition~(\ref{f1_4}). The main aim of our present investigation is to obtain a Nevanlinna-type formula for the moment problem~(\ref{f1_1}) in the general case. We only assume that condition~(\ref{f1_4}) holds and the moment problem~(\ref{f1_1}) is indeterminate (but not necessarily completely indeterminate). We express the matrix coefficients of the corresponding linear fractional transformation in terms of the given moments. Some necessary and sufficient conditions for the determinacy of the moment problem~(\ref{f1_1}), in terms of the prescribed moments, are given.
\noindent {\bf Notations.} As usual, we denote by $\mathbb{R}, \mathbb{C}, \mathbb{N}, \mathbb{Z}, \mathbb{Z}_+$ the sets of real numbers, complex numbers, positive integers, integers, non-negative integers, respectively; $\mathbb{C}_+ = \{ z\in \mathbb{C}:\ \mathop{\rm Im}\nolimits z > 0\}$,
$\mathbb{D} = \{ z\in \mathbb{C}:\ |z|<1 \}$, $\mathbb{T} = \{ z\in \mathbb{C}:\ |z|=1 \}$. The notation $k\in\overline{0,\rho}$ means that $k\in \mathbb{Z}_+$, $k\leq\rho$, if $\rho <\infty$; or $k\in \mathbb{Z}_+$, if $\rho = \infty$. The set of all complex matrices of size $(m\times n)$ we denote by $\mathbb{C}_{m\times n}$, $m,n\in \mathbb{N}$. If $M\in \mathbb{C}_{m\times n}$ then $M^T$ denotes the transpose of $M$, and $M^*$ denotes the complex conjugate of $M$. The identity matrix from $\mathbb{C}_{n\times n}$ we denote by $I_n$, $n\in \mathbb{N}$; $I_\infty = (\delta_{k,l})_{k,l=0}^\infty$, $\delta_{k,l}$ is Kronecker's delta. If a set $S$ has a finite number of elements, then its number of elements we denote by $\mathop{\rm card}\nolimits(S)$. If a set $S$ has an infinite number of elements, then $\mathop{\rm card}\nolimits(S) := \infty$.
For a separable Hilbert space $H$ we denote by $(\cdot,\cdot)_H$ and $\| \cdot \|_H$ the scalar product and the norm in $H$, respectively. The indices may be omitted in obvious cases.
\noindent For a linear operator $A$ in $H$ we denote by $D(A)$ its domain, by $R(A)$ its range, and by
$A^*$ we denote its adjoint if it exists. If $A$ is invertible, then $A^{-1}$ means its inverse. If $A$ is closable, then $\overline{A}$ means its closure. If $A$ is bounded, then $\| A \|$ stands for its operator norm. For a set of elements $\{ x_n \}_{n\in K}$ in $H$, we denote by $\mathop{\rm Lin}\nolimits\{ x_n \}_{n\in K}$ and $\mathop{\rm \overline{span}}\nolimits\{ x_n \}_{n\in K}$ the linear span and the closed linear span in the norm of $H$, respectively. Here $K$ is an arbitrary set of indices. For a set $M\subseteq H$, we denote by $\overline{M}$ the closure of $M$ in the norm of $H$. By $E_H$ we denote the identity operator in $H$, i.e. $E_H x = x$, $x\in H$. Let $H_1$ be a subspace of $H$. By $P_{H_1} = P_{H_1}^{H}$ we denote the operator of the orthogonal projection on $H_1$ in $H$.
\noindent If $A$ is symmetric, we set $R_z = R_z(A) = (A-zE_H)^{-1}$, $z\in \mathbb{C}\backslash \mathbb{R}$. If $V$ is isometric, we set $\mathcal{R}_\zeta = \mathcal{R}_\zeta(V) = (E_H-\zeta V)^{-1}$, $\zeta\in \mathbb{C}\backslash \mathbb{T}$.
\section{The matrix Hamburger moment problem: the determinacy and a Nevanlinna-type formula.} Let the matrix Hamburger moment problem~(\ref{f1_1}) be given and condition~(\ref{f1_4}) hold. Set \begin{equation} \label{f2_6} \Gamma = (S_{k+l})_{k,l=0}^\infty = \left( \begin{array}{ccccc} S_0 & S_1 & \ldots & S_n & \ldots\\ S_1 & S_2 & \ldots & S_{n+1} & \ldots\\ \vdots & \vdots & \ddots & \vdots & \ldots\\ S_n & S_{n+1} & \ldots & S_{2n} & \ldots\\ \vdots & \vdots & \vdots & \vdots & \ddots\end{array} \right). \end{equation} The matrix $\Gamma$ is a semi-infinite block matrix. It may be viewed as a usual semi-infinite matrix, as well. Let \begin{equation} \label{f2_6_1} \Gamma = (\Gamma_{n,m})_{n,m=0}^\infty,\qquad \Gamma_{n,m}\in \mathbb{C}, \end{equation} and $$ S_n = (s_n^{k,l})_{k,l=0}^{N-1},\qquad s_n^{k,l}\in \mathbb{C},\ n\in \mathbb{Z}_+. $$ Notice that \begin{equation} \label{f2_7} \Gamma_{rN+j,tN+n} = s_{r+t}^{j,n},\qquad 0\leq j,n \leq N-1;\quad r,t\in \mathbb{Z}_+. \end{equation} We need here some constructions from~\cite{cit_850_Z}. By Theorem~1 in~\cite{cit_850_Z} (and this construction is well known), there exist a Hilbert space $H$, and a sequence $\{ x_n \}_{n=0}^\infty$ in $H$, such that $\mathop{\rm \overline{span}}\nolimits\{ x_n \}_{n=0}^\infty = H$, and \begin{equation} \label{f2_9} (x_n,x_m)_H = \Gamma_{n,m},\qquad n,m\in \mathbb{Z}_+. \end{equation} We choose an arbitrary such space $H$ and a sequence $\{ x_n \}_{n=0}^\infty$ in $H$, and {\it fix them in the rest of the paper}.
Set $L := \mathop{\rm Lin}\nolimits\{ x_n \}_{n=0}^\infty$, and consider the following operator with the domain $L$: \begin{equation} \label{f2_11} A x = \sum_{k=0}^\infty \alpha_k x_{k+N},\qquad x\in L,\ x = \sum_{k=0}^\infty \alpha_k x_{k},\ \alpha_k\in \mathbb{C}. \end{equation} This operator is correctly defined and symmetric.
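The symmetry can be seen directly on the generating vectors: writing an index as $rN+j$ with $0\leq j,n\leq N-1$ and using~(\ref{f2_7}),(\ref{f2_9}),

```latex
(A x_{rN+j}, x_{tN+n})_H = \Gamma_{(r+1)N+j,\, tN+n} = s_{r+t+1}^{j,n}
  = \Gamma_{rN+j,\, (t+1)N+n} = (x_{rN+j}, A x_{tN+n})_H ,
```

so the form of $A$ is Hermitian on the generating vectors, and hence on all of $L$ by linearity.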
Let $\widehat A$ be an arbitrary self-adjoint extension of $A$ in a Hilbert space $\widehat H\supseteq H$. Let $R_z(\widehat A) = (\widehat A - z E_{\widehat H})^{-1}$ be the resolvent of $\widehat A$ and $\{ \widehat E_\lambda\}_{\lambda\in \mathbb{R}}$ be the orthogonal left-continuous resolution of unity of $\widehat A$. Recall that the operator-valued function $\mathbf R_z = P_H^{\widehat H} R_z(\widehat A)$ is called a { \it generalized resolvent} of $A$, $z\in \mathbb{C}\backslash \mathbb{R}$. The function $\mathbf E_\lambda = P_H^{\widehat H} \widehat E_\lambda$, $\lambda\in \mathbb{R}$, is a {\it spectral function} of a symmetric operator $A$. There exists a one-to-one correspondence between generalized resolvents and spectral functions. It is given by the following relation~(\cite{cit_2000_AG}): \begin{equation} \label{f2_14} (\mathbf R_z f,g)_H = \int_\mathbb{R} \frac{1}{\lambda - z} d( \mathbf E_\lambda f,g)_H,\qquad f,g\in H,\ z\in \mathbb{C}\backslash \mathbb{R}. \end{equation} By Theorem~2 in~\cite{cit_850_Z}, all solutions of the moment problem~(\ref{f1_1}) have the following form: \begin{equation} \label{f2_30} M(\lambda) = (m_{k,j} (\lambda))_{k,j=0}^{N-1},\quad m_{k,j} (\lambda) = ( \mathbf E_\lambda x_k, x_j)_H, \end{equation} where $\mathbf E_\lambda$ is a spectral function of the operator $A$. Moreover, the correspondence between all spectral functions of $A$ and all solutions of the moment problem is bijective.
\noindent By~(\ref{f2_14}) and~(\ref{f2_30}) we conclude that the formula \begin{equation} \label{f2_30_1} \int_\mathbb{R} \frac{1}{\lambda - z} dm_{k,j} (\lambda) = (\mathbf R_z x_k, x_j)_H,\quad 0\leq k,j\leq N-1,\quad z\in \mathbb{C}\backslash \mathbb{R}, \end{equation} establishes a one-to-one correspondence between all generalized resolvents of $A$ and all solutions of the moment problem~(\ref{f1_1}).
Let $B$ be a closed symmetric operator in the Hilbert space $H$, with the domain $D(B)$, $\overline{D(B)} = H$. Set $\Delta_B(\lambda) = (B- \lambda E_H) D(B)$, and $N_\lambda = N_\lambda(B) = H\ominus \Delta_B(\lambda)$, $\lambda\in \mathbb{C}\backslash \mathbb{R}$. Consider an arbitrary bounded linear operator $C$, which maps $N_i$ into $N_{-i}$. For \begin{equation} \label{f2_41} g = f + C\psi - \psi,\qquad f\in D(B),\ \psi\in N_i, \end{equation} we set \begin{equation} \label{f2_42} B_C g = Bf + i C \psi + i \psi. \end{equation} The operator $B_C$ is said to be a {\it quasiself-adjoint extension of the operator $B$, defined by the operator $C$}. By Theorem~4 in~\cite{cit_850_Z}, the following relation: \begin{equation} \label{f2_45} \int_\mathbb{R} \frac{1}{x- \lambda } d m_{k,j} (x) = ( (A_{F(\lambda)} - \lambda E_H)^{-1} x_k, x_j)_H,\qquad \lambda\in \mathbb{C}_+, \end{equation} establishes a bijective correspondence between all solutions of the moment problem~(\ref{f1_1}) and all operator-valued functions $F(\lambda)$, analytic in $\mathbb{C}_+$, whose values are contractions mapping $N_i(\overline{A})$ into $N_{-i}(\overline{A})$. Here $A_{F(\lambda)}$ is the quasiself-adjoint extension of $\overline{A}$ defined by $F(\lambda)$.
Set $$ y_k^- := (A-iE_H) x_k = x_{k+N} - i x_k, $$ $$ y_k^+ := (A+iE_H) x_k = x_{k+N} + i x_k,\qquad k\in \mathbb{Z}_+; $$ \begin{equation} \label{f2_45_1} L^- := \mathop{\rm Lin}\nolimits\{ y_k^- \}_{k=0}^\infty = (A-iE_H)D(A),\quad L^+ := \mathop{\rm Lin}\nolimits\{ y_k^+ \}_{k=0}^\infty = (A+iE_H)D(A), \end{equation} $$ H^- := \overline{L^-} = (\overline{A}-iE_H)D(\overline{A}),\quad H^+ := \overline{L^+} = (\overline{A}+iE_H)D(\overline{A}). $$
Let us apply the Gram-Schmidt orthogonalization procedure to the sequence $\{ y_k^- \}_{k=0}^\infty$, removing linearly dependent elements if they appear. We obtain a sequence $\mathfrak{A}^- = \{ u_k^- \}_{k=0}^{\tau^- -1}$, $0\leq\tau^-\leq +\infty$. The case $\tau^- = 0$ means that $y_k^- = 0$, $\forall k\in \mathbb{Z}_+$, and $\mathfrak{A}^-$ is an empty set.
\noindent In a similar manner, we apply the Gram-Schmidt orthogonalization procedure to the sequence $\{ y_k^+ \}_{k=0}^\infty$, and obtain a sequence $\mathfrak{A}^+ = \{ u_k^+ \}_{k=0}^{\tau^+ -1}$, $0\leq\tau^+\leq +\infty$. The case $\tau^+ = 0$ means that $y_k^+ = 0$, $\forall k\in \mathbb{Z}_+$, and $\mathfrak{A}^+ = \emptyset$.
If not empty, the set $\mathfrak{A}^\pm$ forms an orthonormal basis in $H^\pm$, respectively. Notice that, by the construction, each element $u_k^\pm$, $k\in \overline{0,\tau^\pm -1}$, is a linear combination of $y_j^\pm$, $0\leq j\leq k$, respectively. Let \begin{equation} \label{f2_45_2} u_k^\pm = \sum_{j=0}^k \xi_{k;j}^\pm y_j^\pm,\qquad \xi_{k;j}^\pm\in \mathbb{C},\quad k\in \overline{0,\tau^\pm -1}. \end{equation} Observe that by~(\ref{f2_9}) we may write $$ (x_n,u_k^\pm)_H = \sum_{j=0}^k \overline{\xi_{k;j}^\pm} (x_n, y_j^\pm)_H = \sum_{j=0}^k \overline{\xi_{k;j}^\pm} (x_n, x_{j+N} \pm i x_j)_H $$ \begin{equation} \label{f2_45_3} = \sum_{j=0}^k \overline{\xi_{k;j}^\pm} (\Gamma_{n,j+N} \pm i \Gamma_{n,j}),\quad n\in \mathbb{Z}_+,\ k\in\overline{0,\tau^\pm -1}. \end{equation} By representation~(\ref{f2_45_1}), the condition $\tau^- = 0$ ($\tau^+ = 0$) is equivalent to the condition $D(A) = \{ 0 \}$, and therefore to the condition $H=\{ 0 \}$. By~(\ref{f2_7}),(\ref{f2_9}), the condition $H=\{ 0 \}$ is equivalent to the condition $S_n = 0$, $\forall n\in \mathbb{Z}_+$.
We emphasize that the numbers $\xi_{k;j}^\pm$ in~(\ref{f2_45_2}) can be computed explicitly by using relations~(\ref{f2_7}),(\ref{f2_9}). Moreover, the orthogonalization processes which appear in this paper are based on the use of relations~(\ref{f2_7}),(\ref{f2_9}). In fact, any norm or scalar product which appears during the orthogonalization is expressed in terms of the prescribed moments.
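This remark can be illustrated numerically: since $(y_j^-, y_k^-)_H = \Gamma_{j+N,k+N} - i\Gamma_{j,k+N} + i\Gamma_{j+N,k} + \Gamma_{j,k}$, the coefficients $\xi_{k;j}^-$ are computable from the moments alone. The sketch below is an illustration only (scalar case $N=1$, with moments of a made-up five-point measure); it obtains the coefficients from a Cholesky factorization of the Gram matrix of the $y_j^-$, which is equivalent to Gram-Schmidt.

```python
import numpy as np

# Illustration only (scalar case N = 1): moments of a made-up five-point measure.
pts = np.array([-1.0, 0.5, 2.0, 3.5, 5.0])
w = np.full(5, 0.2)
N, kmax = 1, 4                                   # orthogonalize y_0^-,...,y_3^-
size = kmax + N
s = [np.sum(w * pts**n) for n in range(2*size - 1)]
Gx = np.array([[s[n + m] for m in range(size)] for n in range(size)])  # Gamma

# Gram matrix of y_j^- = x_{j+N} - i x_j, expressed through Gamma only:
G = np.array([[Gx[j+N, k+N] - 1j*Gx[j, k+N] + 1j*Gx[j+N, k] + Gx[j, k]
               for k in range(kmax)] for j in range(kmax)])
L = np.linalg.cholesky(G)      # G = L L^H
Xi = np.linalg.inv(L)          # row k holds xi_{k;0},...,xi_{k;k}
assert np.allclose(Xi @ G @ Xi.conj().T, np.eye(kmax))   # u_k are orthonormal
```

The Cholesky factor plays the role of the Gram-Schmidt process here: $\Xi = L^{-1}$ is lower triangular, matching the triangular form of~(\ref{f2_45_2}).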
\begin{thm} \label{t2_1} Let the matrix Hamburger moment problem~(\ref{f1_1}) be given and condition~(\ref{f1_4}), with $\Gamma_n$ from~(\ref{f1_3}), be satisfied. Let the operator $A$ in the Hilbert space $H$ be constructed as in~(\ref{f2_11}). The following conditions are equivalent:
\begin{itemize}
\item[{\rm (A)}] The moment problem~(\ref{f1_1}) is determinate;
\item[{\rm (B)}] One of the defect numbers of $A$ is equal to zero (or both of them are);
\item[{\rm (C)}] $S_r = 0$, $\forall r\in \mathbb{Z}_+$, or, $\exists S_l\not=0$, $l\in \mathbb{Z}_+$, and one of the following conditions holds (or both of them hold): { \begin{itemize}
\item[(a)] For each $n$, $0 \leq n\leq N-1$, the following equality holds: \begin{equation} \label{f2_45_4} \Gamma_{n,n} = \sum_{k=0}^{\tau^- -1}
\left| \sum_{j=0}^k \overline{\xi_{k;j}^-} (\Gamma_{n,j+N} - i \Gamma_{n,j})
\right|^2; \end{equation}
\item[(b)] For each $n$, $0 \leq n\leq N-1$, the following equality holds: \begin{equation} \label{f2_45_5} \Gamma_{n,n} = \sum_{k=0}^{\tau^+ -1}
\left| \sum_{j=0}^k \overline{\xi_{k;j}^+} (\Gamma_{n,j+N} + i \Gamma_{n,j})
\right|^2. \end{equation} Here $\Gamma_{\cdot,\cdot}$ are from~(\ref{f2_6_1}), and $\xi_{\cdot,\cdot}^\pm$ are from~(\ref{f2_45_2}).
\end{itemize} }
\end{itemize}
If the above conditions are satisfied then the unique solution of the moment problem~(\ref{f1_1}) is given by the following relation: \begin{equation} \label{f2_45_6} M(t) = (m_{k,j} (t))_{k,j=0}^{N-1},\quad m_{k,j} (t) = (E_t x_k, x_j)_H, \end{equation} where $E_t$ is the left-continuous orthogonal resolution of unity of the self-adjoint operator $A$. \end{thm} {\bf Proof. } (A)$\Rightarrow$(B). If both defect numbers are greater than zero, then we can choose unit vectors $u_1\in N_i(\overline{A})$ and $u_2\in N_{-i}(\overline{A})$. We set $$ F(\lambda) (c u_1 + u) = c u_2,\qquad c\in \mathbb{C},\ u\in \Delta_{\overline A}(i). $$ On the other hand, we set $\widetilde F(\lambda) \equiv 0$. The functions $F(\lambda)$ and $\widetilde F(\lambda)$ produce different solutions of the moment problem~(\ref{f1_1}) by relation~(\ref{f2_45}).
\noindent (B)$\Rightarrow$(A). If one of the defect numbers is zero, then the only admissible function $F(\lambda)$ in relation~(\ref{f2_45}) is $F(\lambda)\equiv 0$.
\noindent (B)$\Rightarrow$(C). If $H= \{ 0 \}$ then condition (C) holds. Let $H\not= \{ 0 \}$.
Notice that by~(\ref{f2_9}) and~(\ref{f2_45_3}), condition (C),(a) may be written as
$$ \| x_n \|^2 = \sum_{k=0}^{\tau^- -1} \left| (x_n,u^-_k)_H
\right|^2,\qquad n=0,1,\ldots, N-1; $$ while condition (C),(b) is equivalent to
$$ \| x_n \|^2 = \sum_{k=0}^{\tau^+ -1} \left| (x_n,u^+_k)_H
\right|^2,\qquad n=0,1,\ldots, N-1. $$ Therefore condition (C),(a) is equivalent to relations: \begin{equation} \label{f2_45_7} x_n \in H^-,\qquad n=0,1,\ldots, N-1; \end{equation} and condition (C),(b) is equivalent to condition: \begin{equation} \label{f2_45_8} x_n \in H^+,\qquad n=0,1,\ldots, N-1. \end{equation} By the formula~(37) in~\cite[p. 278]{cit_850_Z}, each element of $L$ belongs to the linear span of elements $\{ x_n \}_{n=0}^\infty$, $\{ y_k^- \}_{k=0}^\infty$, as well as to the linear span of elements $\{ x_n \}_{n=0}^\infty$, $\{ y_k^+ \}_{k=0}^\infty$. Consequently, condition~(\ref{f2_45_7}) is equivalent to the condition \begin{equation} \label{f2_45_9} H = H^-, \end{equation} and condition~(\ref{f2_45_8}) is equivalent to the condition \begin{equation} \label{f2_45_10} H = H^+. \end{equation} Since one of the defect numbers is equal to zero then either~(\ref{f2_45_9}), or~(\ref{f2_45_10}) holds.
\noindent (C)$\Rightarrow$(B). If $H= \{ 0 \}$ then condition (B) holds. Let $H\not= \{ 0 \}$. If condition~(C),(a) (condition~(C),(b)) holds, then by the above considerations before~(\ref{f2_45_9}) we obtain $H=H^-$ (respectively $H=H^+$). Therefore one of the defect numbers of $A$ is equal to zero.
The last assertion of the theorem follows from formula~(\ref{f2_30}). $\Box$
We shall continue our considerations started before the statement of Theorem~\ref{t2_1}. In what follows we assume that {\it the moment problem~(\ref{f1_1}) is indeterminate}. Let the defect numbers of $A$ be equal to $\delta = \delta(A) = \dim H\ominus H^-$, and $\omega = \omega(A) = \dim H\ominus H^+$, $\delta,\omega\geq 1$.
For simplicity of notations we set $\tau := \tau^-$, and $$ u_k := u_k^-,\qquad k\in\overline{0,\tau -1}. $$ Let us apply the Gram-Schmidt orthogonalization procedure to the vectors $$ \{ u_k \}_{k=0}^{\tau -1}, \{ x_n \}_{n=0}^{N-1}. $$ Notice that the elements $\{ u_k \}_{k=0}^{\tau -1}$ are already orthonormal. Then we get an orthonormal set in $H$: $$ \mathfrak{A}_u := \{ u_k \}_{k=0}^{\tau -1} \cup \{ u_l' \}_{l=0}^{\delta-1}. $$ Notice that $\mathfrak{A}' := \{ u_l' \}_{l=0}^{\delta-1}$ is an orthonormal basis in $H\ominus H^-$.
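The construction of $\mathfrak{A}_u$ is an ordinary Gram-Schmidt extension of an already orthonormal set by further vectors, discarding directions that are already spanned. The following Python sketch illustrates this step in a finite-dimensional real setting (the function name extend_orthonormal and the random data are illustrative assumptions, not part of the paper):

```python
import numpy as np

def extend_orthonormal(U, X, tol=1e-10):
    # U: columns already orthonormal; X: candidate vectors.
    # Orthogonalize each column of X against everything kept so far,
    # discarding directions already spanned (norm below tol).
    basis = [U[:, j] for j in range(U.shape[1])]
    n_old = len(basis)
    for i in range(X.shape[1]):
        v = X[:, i].astype(float)
        for b in basis:
            v = v - (b @ v) * b
        nrm = np.linalg.norm(v)
        if nrm > tol:
            basis.append(v / nrm)
    return np.column_stack(basis), n_old

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((6, 2)))   # orthonormal start
X = rng.standard_normal((6, 6))
B, n_old = extend_orthonormal(U, X)
```

The first n_old columns of B are the original orthonormal vectors, left untouched exactly as in the text, and the remaining columns play the role of $\{ u_l' \}$.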
Set \begin{equation} \label{f2_45_11} V = V_{\overline{A}} = (\overline{A} + iE_H)(\overline{A} - iE_H)^{-1} = E_H + 2i (\overline{A} - iE_H)^{-1}. \end{equation} The operator $V$ is a closed isometric operator with the domain $H^-$ and the range $H^+$. Set $$ v_k := V u_k,\qquad k\in \overline{0,\tau -1}. $$ Observe that by~(\ref{f2_45_2}) we may write $$ v_k = \sum_{j=0}^k \xi_{k;j}^- V y_j^- = \sum_{j=0}^k \xi_{k;j}^- y_j^+,\qquad k\in \overline{0,\tau -1}. $$ Notice that $$ \mathfrak{A}_v^- := \{ v_k \}_{k=0}^{\tau -1}, $$ is an orthonormal basis in $H^+$.
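The Cayley transform in~(\ref{f2_45_11}) can be checked numerically in the finite-dimensional (bounded, self-adjoint) case, where $V$ is a genuine unitary matrix. A Python sketch, with a random Hermitian matrix standing in for $\overline{A}$ (an illustrative assumption only; in the paper $A$ is an unbounded symmetric operator):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2          # Hermitian stand-in for A
I = np.eye(4)

# V = (A + iE)(A - iE)^{-1};  equivalently  V = E + 2i (A - iE)^{-1}
V = (A + 1j * I) @ np.linalg.inv(A - 1j * I)
V_alt = I + 2j * np.linalg.inv(A - 1j * I)
```

Both expressions agree, and V is unitary — the finite-dimensional analogue of the isometry mapping $H^-$ onto $H^+$.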
\noindent Let us apply the Gram-Schmidt orthogonalization procedure to the vectors $$ \{ v_k \}_{k=0}^{\tau -1}, \{ x_n \}_{n=0}^{N-1}. $$ The elements $\{ v_k \}_{k=0}^{\tau -1}$ are already orthonormal. Then we get another orthonormal basis in $H$: $$ \mathfrak{A}_v := \{ v_k \}_{k=0}^{\tau -1} \cup \{ v_l' \}_{l=0}^{\omega-1}. $$ Observe that $\mathfrak{A}_v' := \{ v_l' \}_{l=0}^{\omega-1}$ is an orthonormal basis in $H\ominus H^+$.
Let $\mathbf{R}_\lambda$ be an arbitrary generalized resolvent of the operator $A$. Let us check that $$ (\mathbf{R}_z x_k, x_j)_H $$ $$ = \frac{1}{z^2 +1} (\mathbf{R}_z y_k^-, y_j^-)_H - \frac{1}{z^2 +1} (x_{k+N},x_j)_H - \frac{z}{z^2 +1} (x_k,x_j)_H, $$ \begin{equation} \label{f2_45_12}
z\in \mathbb{C}_+\backslash\{ i \} ,\quad 0\leq k,j\leq N-1. \end{equation} In fact, let $\widetilde A\supseteq A$ be a self-adjoint operator in a Hilbert space $\widetilde H\supseteq H$, such that $P^{\widetilde{H}}_H R_z(\widetilde A) = \mathbf{R}_z$, $z\in \mathbb{C}\backslash \mathbb{R}$. Then $$ (\mathbf{R}_z x_k, x_j)_H = (R_z(\widetilde A) (A-iE_H)^{-1} (A-iE_H) x_k, x_j)_{\widetilde H} $$ $$ = (R_z(\widetilde A) R_i(\widetilde A) y_k^-, x_j)_{\widetilde H} = \frac{1}{z - i} ((R_z(\widetilde A) - R_i(\widetilde A)) y_k^-, x_j)_{\widetilde H} $$ \begin{equation} \label{f2_45_13}
\frac{1}{z - i} (R_z(\widetilde A) y_k^-, x_j)_{\widetilde H} - \frac{1}{z - i} (x_k, x_j)_{\widetilde H}; \end{equation} $$ (R_z(\widetilde A) y_k^-, x_j)_{\widetilde H} = (R_z(\widetilde A) y_k^-, R_i(\widetilde A) y_j^-)_{\widetilde H} $$ $$ = (R_{-i}(\widetilde A) R_z(\widetilde A) y_k^-, y_j^-)_{\widetilde H} = -\frac{1}{i+z} ((R_{-i}(\widetilde A) - R_z(\widetilde A)) y_k^-, y_j^-)_{\widetilde H} $$ \begin{equation} \label{f2_45_14}
= -\frac{1}{i+z} (y_k^-, x_j)_{\widetilde H} + \frac{1}{i+z} (\mathbf{R}_z y_k^-, y_j^-)_{\widetilde H}. \end{equation} By substitution~(\ref{f2_45_14}) into~(\ref{f2_45_13}), we get~(\ref{f2_45_12}).
Let $\widehat U\supseteq V$ be an arbitrary unitary extension of $V$ in a Hilbert space $\widehat H\supseteq H$. Recall~\cite{cit_2100_Ch} that the following function: \begin{equation} \label{f2_45_15} \mathbf{R}_\zeta(V) = P^{\widehat H}_H (E_{\widehat H} - \zeta \widehat U)^{-1},\qquad \zeta\in \mathbb{C}\backslash \mathbb{T}, \end{equation} is said to be a {\it generalized resolvent} of $V$.
Observe that the generalized resolvents of $V$ and $\overline {A}$ are connected by the following relation~\cite[pp. 370-371]{cit_2200_Ch}: \begin{equation} \label{f2_45_16} (1-\zeta)\mathbf{R}_\zeta(V) = E_H + (z -i) \mathbf{R}_z(\overline{A}),\qquad z\in \mathbb{C}_+,\ \zeta = \frac{z-i}{z+i} \in \mathbb{D}. \end{equation} (The latter relation follows from the fact that the usual resolvents of $V$ and $\overline{A}$ are related by a similar relation, and then one applies the projection operator $P^{\widehat H}_H$ to the both sides of that relation.) Correspondence~(\ref{f2_45_16}) between all generalized resolvents of $V$ and all generalized resolvents of $\overline {A}$ is bijective. Then \begin{equation} \label{f2_45_17} \mathbf{R}_z(\overline{A}) = \frac{2i}{z^2+1}\mathbf{R}_{\frac{z-i}{z+i}} (V) - \frac{1}{z-i} E_H,\qquad z\in \mathbb{C}_+\backslash\{ i \}. \end{equation} By~(\ref{f2_45_17}),(\ref{f2_45_12}) and~(\ref{f2_9}) we get $$ (\mathbf{R}_z x_k, x_j)_H = \frac{2i}{(z^2 +1)^2} (\mathbf{R}_{\frac{z-i}{z+i}}(V_{ \overline{A} }) y_k^-, y_j^-)_H - \frac{1}{(z^2+1)(z-i)} \varphi_{j,k}(z), $$ \begin{equation} \label{f2_45_18} z\in \mathbb{C}_+\backslash\{ i \},\quad 0\leq k,j\leq N-1, \end{equation} where $$ \varphi_{j,k}(z) := \Gamma_{k+N,j+N}-i\Gamma_{k+N,j}-i\Gamma_{k,j+N}+\Gamma_{k,j} + (z-i)\Gamma_{k+N,j} + z(z-i)\Gamma_{k,j} $$ \begin{equation} \label{f2_45_19} = \Gamma_{k+N,j+N} - i\Gamma_{k,j+N} + (z-2i)\Gamma_{k+N,j} + (z^2-iz+1)\Gamma_{k,j},\ z\in \mathbb{C}_+. \end{equation} Observe that an arbitrary generalized resolvent $\mathbf{R}_\zeta$ of the closed isometric operator $V_{\overline A}$ has the following representation~\cite[Theorem 3]{cit_2100_Ch}: \begin{equation} \label{f2_45_20} \mathbf R_{\zeta} = \left[ E - \zeta ( V \oplus \Phi_\zeta ) \right]^{-1},\qquad \zeta\in \mathbb{D}. 
\end{equation} Here $\Phi_\zeta$ is an operator-valued function, analytic in $\mathbb{D}$, whose values are linear contractions from $H\ominus H^-$ into $H\ominus H^+$. The correspondence between all such functions $\Phi_\zeta$ and all generalized resolvents of $V$ is bijective.
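Relation~(\ref{f2_45_16}) can be verified directly in the finite-dimensional self-adjoint case, where the generalized resolvents reduce to ordinary ones and the projection is trivial. A Python sketch (the matrix and the point $z$ are arbitrary illustrative choices), with $R_z(A)=(A-zE)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                      # self-adjoint stand-in
I = np.eye(4)
U = (A + 1j * I) @ np.linalg.inv(A - 1j * I)  # Cayley transform of A

z = 0.7 + 2.0j                                # any z in C_+ \ {i}
zeta = (z - 1j) / (z + 1j)                    # |zeta| < 1

lhs = (1 - zeta) * np.linalg.inv(I - zeta * U)    # (1-zeta) R_zeta(U)
rhs = I + (z - 1j) * np.linalg.inv(A - z * I)     # E + (z-i) R_z(A)
```

On each eigenvalue $a$ of $A$ both sides reduce to $(a-i)/(a-z)$, so the two matrices coincide exactly.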
\noindent By~(\ref{f2_45_18}),(\ref{f2_45_20}) we get $$ (\mathbf{R}_z x_k, x_j)_H = \frac{2i}{(z^2 +1)^2} \left(\left[ E - \frac{z-i}{z+i} ( V \oplus \Phi_{\frac{z-i}{z+i}} ) \right]^{-1} y_k^-, y_j^- \right)_H $$ \begin{equation} \label{f2_45_21} - \frac{1}{(z^2+1)(z-i)} \varphi_{j,k}(z),\qquad z\in \mathbb{C}_+\backslash\{ i \},\quad 0\leq k,j\leq N-1, \end{equation} where $\Phi_\cdot$ is an analytic in $\mathbb{D}$ operator-valued function which values are linear contractions from $H\ominus H^-$ into $H\ominus H^+$.
\noindent By~(\ref{f2_30_1}) and~(\ref{f2_45_21}) we conclude that the formula $$ \int_\mathbb{R} \frac{1}{\lambda - z} dm_{k,j} (\lambda) $$ $$ = \frac{2i}{(z^2 +1)^2} \left(\left[ E - \frac{z-i}{z+i} ( V \oplus \Phi_{\frac{z-i}{z+i}} ) \right]^{-1} y_k^-, y_j^- \right)_H - \frac{1}{(z^2+1)(z-i)} \varphi_{j,k}(z), $$ \begin{equation} \label{f2_45_22} 0\leq k,j\leq N-1,\quad z\in \mathbb{C}_+\backslash \{ i \}, \end{equation} establishes a one-to-one correspondence between all analytic in $\mathbb{D}$ operator-valued functions $\Phi_\cdot$, which values are linear contractions from $H\ominus H^-$ into $H\ominus H^+$, and all solutions $M(\lambda)=(m_{k,j}(\lambda))_{k,j=0}^{N-1}$ of the moment problem~(\ref{f1_1}).
It turns out that formula~(\ref{f2_45_22}) is more convenient than formula~(\ref{f2_45}) for obtaining a Nevanlinna-type formula for the moment problem~(\ref{f1_1}).
Denote by $\mathcal{M}_{1,\zeta}(\Phi)$ the matrix of the operator $E_H - \zeta ( V \oplus \Phi_\zeta )$ in the basis $\mathfrak{A}_u$, $\zeta\in \mathbb{D}$. Here $\Phi_\zeta$ is an analytic in $\mathbb{D}$ operator-valued function, which values are linear contractions from $H\ominus H^-$ into $H\ominus H^+$. Then $$ \mathcal{M}_{1,\zeta}(\Phi) = \left( \begin{array}{cc} A_{0,\zeta} & B_{0,\zeta}(\Phi) \\ C_{0,\zeta} & D_{0,\zeta}(\Phi) \end{array} \right), $$ where $$ A_{0,\zeta} = \left( \left( \left[ E_H - \zeta ( V \oplus \Phi_\zeta ) \right] u_k, u_j \right)_H \right)_{j,k=0}^{\tau-1} = \left( \left( u_k - \zeta V u_k, u_j \right)_H \right)_{j,k=0}^{\tau-1} $$ \begin{equation} \label{f2_50} = I_\tau - \zeta \left( \left( v_k, u_j \right)_H \right)_{j,k=0}^{\tau-1}, \end{equation} $$ B_{0,\zeta}(\Phi) = \left( \left( \left[ E_H - \zeta ( V \oplus \Phi_\zeta ) \right] u_k', u_j \right)_H \right)_{0\leq j\leq \tau-1,\ 0\leq k\leq \delta-1} $$ $$ = \left( \left( u_k' - \zeta \Phi_\zeta u_k', u_j \right)_H \right)_{0\leq j\leq \tau-1,\ 0\leq k\leq \delta-1} $$ $$ = -\zeta \left( \left( \Phi_\zeta u_k', u_j \right)_H \right)_{0\leq j\leq \tau-1,\ 0\leq k\leq \delta-1}, $$ $$ C_{0,\zeta} = \left( \left( \left[ E_H - \zeta ( V \oplus \Phi_\zeta ) \right] u_k, u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq \tau-1} $$ $$ = \left( \left( u_k - \zeta V u_k, u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq \tau-1} $$ \begin{equation} \label{f2_51} = - \zeta \left( \left( v_k, u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq \tau-1}, \end{equation} $$ D_{0,\zeta}(\Phi) = \left( \left( \left[ E_H - \zeta ( V \oplus \Phi_\zeta ) \right] u_k', u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq \delta-1} $$ $$ = \left( \left( u_k' - \zeta \Phi_\zeta u_k', u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq \delta-1} $$ $$ = I_\delta - \zeta \left( \left( \Phi_\zeta u_k', u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq 
\delta-1},\ \zeta\in \mathbb{D}. $$ Notice that the matrices $A_{0,\zeta},C_{0,\zeta}$, $\zeta\in \mathbb{D}$, can be calculated explicitly using relations~(\ref{f2_9}) and~(\ref{f2_7}).
Denote by $F_\zeta$, $\zeta\in \mathbb{D}$, the matrix of the operator $\Phi_\zeta$, acting from $H\ominus H^-$ into $H\ominus H^+$, with respect to the bases $\mathfrak{A}'$ and $\mathfrak{A}_v'$: $$ F_\zeta = (f_\zeta(j,k))_{0\leq j\leq \omega - 1,\ 0\leq k\leq \delta-1},\qquad $$ $$ f_\zeta(j,k) := (\Phi_\zeta u_k', v_j')_H. $$ Then $$ \Phi_\zeta u_k' = \sum_{l=0}^{\omega-1} f_\zeta(l,k) v_l',\quad 0\leq k\leq \delta-1, $$ and $$ B_{0,\zeta}(\Phi) = -\zeta \left( \left( \sum_{l=0}^{\omega-1} f_\zeta(l,k) v_l', u_j \right)_H \right)_{0\leq j\leq \tau-1,\ 0\leq k\leq \delta-1} $$ $$ = -\zeta \left( \sum_{l=0}^{\omega-1} \left(
v_l', u_j \right)_H f_\zeta(l,k) \right)_{0\leq j\leq \tau-1,\ 0\leq k\leq \delta-1},\quad \zeta\in \mathbb{D}. $$ Set \begin{equation} \label{f2_52} W := \left( \left( v_l', u_j \right)_H \right)_{0\leq j\leq \tau-1,\ 0\leq l\leq \omega-1}. \end{equation} Then $$ B_{0,\zeta}(\Phi) = -\zeta W F_\zeta,\qquad \zeta\in \mathbb{D}. $$ We may write $$ D_{0,\zeta}(\Phi) = I_\delta - \zeta \left( \left( \sum_{l=0}^{\omega-1} f_\zeta(l,k) v_l', u_j' \right)_H \right)_{0\leq j\leq\delta-1,\ 0\leq k\leq\delta-1} $$ $$ = I_\delta - \zeta \left( \sum_{l=0}^{\omega-1} \left( v_l', u_j' \right)_H f_\zeta(l,k) \right)_{0\leq j\leq \delta-1,\ 0\leq k\leq \delta-1},\quad \zeta\in \mathbb{D}. $$ Set \begin{equation} \label{f2_53} T := \left( \left( v_l', u_j' \right)_H \right)_{0\leq j\leq \delta-1,\ 0\leq l\leq \omega-1}. \end{equation} Then $$ D_{0,\zeta}(\Phi) = I_\delta - \zeta T F_\zeta,\qquad \zeta\in \mathbb{D}. $$ Thus, we may write $$ \mathcal{M}_{1,\zeta}(\Phi) = \left( \begin{array}{cc} A_{0,\zeta} & -\zeta W F_\zeta \\ C_{0,\zeta} & I_\delta - \zeta T F_\zeta \end{array} \right),\quad \zeta\in \mathbb{D}, $$ where $A_{0,\zeta}$, $C_{0,\zeta}$ are given by~(\ref{f2_50}),(\ref{f2_51}), and $W,T$ are given by~(\ref{f2_52}),(\ref{f2_53}).
Consider the block representation of the operator $E_H - \zeta ( V \oplus \Phi_\zeta )$ with respect to the decomposition $H^- \oplus (H\ominus H^-)$: \begin{equation} \label{f2_53_1} E_H - \zeta ( V \oplus \Phi_\zeta ) = \left( \begin{array}{cc} \mathcal{A}_{0,\zeta} & \mathcal{B}_{0,\zeta}(\Phi) \\ \mathcal{C}_{0,\zeta} & \mathcal{D}_{0,\zeta}(\Phi) \end{array} \right),\qquad \zeta\in \mathbb{D}. \end{equation} Of course, the matrices of operators $\mathcal{A}_{0,\zeta}$, $\mathcal{B}_{0,\zeta}$, $\mathcal{C}_{0,\zeta}$, $\mathcal{D}_{0,\zeta}$ are matrices $A_{0,\zeta}$, $B_{0,\zeta}$, $C_{0,\zeta}$, $D_{0,\zeta}$, respectively. Observe that the matrix $A_{0,\zeta}$ is invertible, since $\mathcal{A}_{0,\zeta} = P_{H^-} (E_H - \zeta V) P_{H^-} = E_{H^-} - \zeta P_{H^-} V P_{H^-}$, is invertible, $\zeta\in \mathbb{D}$. Set \begin{equation} \label{f2_53_2} V_0 := P_{H^-} V P_{H^-}. \end{equation} The matrix of $V_0$ in the basis $\mathfrak{A}^-$ we denote by $\mathfrak{V}$: \begin{equation} \label{f2_53_3} \mathfrak{V} = \left( \left( v_k, u_j \right)_H \right)_{j,k=0}^{\tau-1}. \end{equation} Observe that using definitions of $v_k$,$u_j$, the elements of the matrix $\mathfrak{V}$ can be calculated explicitly by the prescribed moments.
For the resolvent function of $V_0$ we may write: \begin{equation} \label{f2_53_4} \mathcal{R}_\zeta(V_0) = \mathcal{A}_{0,\zeta}^{-1} = E_{H^-} + \sum_{k=1}^\infty V_0^k \zeta^k,\qquad \zeta\in \mathbb{D}. \end{equation} Then for the corresponding matrices we may write: \begin{equation} \label{f2_53_5} A_{0,\zeta}^{-1} = I_{\infty} + \sum_{k=1}^\infty \mathfrak{V}_k \zeta^k,\qquad \zeta\in \mathbb{D}, \end{equation} where \begin{equation} \label{f2_53_6} \mathfrak{V}_k := \mathfrak{V}^k,\qquad k\in \mathbb{Z}^+. \end{equation} By the convergence in~(\ref{f2_53_5}) we mean the entrywise convergence of the corresponding matrices.
Observe that the Frobenius formula for the inverse of the block matrix (\cite[p. 59]{cit_7000_G}) is still valid for the block representations of bounded operators as in~(\ref{f2_53_1}), if the following operator $$ \mathcal{H}_\zeta := \mathcal{D}_{0,\zeta} - \mathcal{C}_{0,\zeta} \mathcal{A}_{0,\zeta}^{-1} \mathcal{B}_{0,\zeta}, $$ has a bounded inverse. This can be verified by the direct multiplication of the corresponding block representations. Notice that in our case $\mathcal{H}_\zeta$ has a bounded inverse. In fact, we may write $$ \left( \begin{array}{cc} E_{H^-} & 0 \\ -\mathcal{C}_{0,\zeta} \mathcal{A}_{0,\zeta}^{-1} & E_{H\ominus H^-} \end{array} \right) \left( \begin{array}{cc} \mathcal{A}_{0,\zeta} & \mathcal{B}_{0,\zeta}(\Phi) \\ \mathcal{C}_{0,\zeta} & \mathcal{D}_{0,\zeta}(\Phi) \end{array} \right) = \left( \begin{array}{cc} \mathcal{A}_{0,\zeta} & \mathcal{B}_{0,\zeta}(\Phi) \\ 0 & \mathcal{H}_{\zeta} \end{array} \right). $$ Observe that $$ \left( \begin{array}{cc} E_{H^-} & 0 \\ -\mathcal{C}_{0,\zeta} \mathcal{A}_{0,\zeta}^{-1} & E_{H\ominus H^-} \end{array} \right)^{-1} = \left( \begin{array}{cc} E_{H^-} & 0 \\ \mathcal{C}_{0,\zeta} \mathcal{A}_{0,\zeta}^{-1} & E_{H\ominus H^-} \end{array} \right). $$ Therefore the operator $\mathcal{Q} := \left( \begin{array}{cc} \mathcal{A}_{0,\zeta} & \mathcal{B}_{0,\zeta}(\Phi) \\ 0 & \mathcal{H}_{\zeta} \end{array} \right)$ is invertible.
\noindent Suppose that there exists $y\in H\ominus H^-$, $y\not= 0$, such that $\mathcal{H}_\zeta y = 0$. Set $u := - \mathcal{A}_{0,\zeta}^{-1} \mathcal{B}_{0,\zeta} y$. Then $$ \left( \begin{array}{cc} \mathcal{A}_{0,\zeta} & \mathcal{B}_{0,\zeta}(\Phi) \\ 0 & \mathcal{H}_{\zeta} \end{array} \right) \left( \begin{array}{c} u \\ y \end{array} \right) = 0. $$ This contradicts the invertibility of $\mathcal{Q}$. Thus $\mathcal{H}_\zeta$ is invertible, and since $\mathcal{H}_\zeta^{-1}$ acts in the finite-dimensional space $H\ominus H^-$, it is bounded.
Applying the Frobenius formula we get \begin{equation} \label{f2_53_7} (E_H - \zeta ( V \oplus \Phi_\zeta ))^{-1} = \left( \begin{array}{cc} \mathcal{A}_{0,\zeta}^{-1} + \mathcal{A}_{0,\zeta}^{-1} \mathcal{B}_{0,\zeta} \mathcal{H}_{\zeta}^{-1} \mathcal{C}_{0,\zeta} \mathcal{A}_{0,\zeta}^{-1} & \ast \\ \ast & \ast \end{array} \right),\qquad \zeta\in \mathbb{D}, \end{equation} where by stars $\ast$ we denote the blocks which are not of interest for us.
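The Frobenius (Schur-complement) formula for the top-left block of the inverse, as used in~(\ref{f2_53_7}), can be checked numerically. A Python sketch with random blocks (purely illustrative) compares it with direct inversion:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 3, 2
M = (rng.standard_normal((n1 + n2, n1 + n2))
     + 1j * rng.standard_normal((n1 + n2, n1 + n2)))
A, B = M[:n1, :n1], M[:n1, n1:]
C, D = M[n1:, :n1], M[n1:, n1:]

Ainv = np.linalg.inv(A)
H = D - C @ Ainv @ B                 # the operator H_zeta of the text
top_left = Ainv + Ainv @ B @ np.linalg.inv(H) @ C @ Ainv
```

Whenever A and the Schur complement H are invertible, top_left coincides with the corresponding block of the full inverse of M.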
\noindent Denote by $\mathcal{M}_{2,\zeta}(\Phi)$ the matrix of the operator $(E_H - \zeta ( V \oplus \Phi_\zeta ))^{-1}$ in the basis $\mathfrak{A}_u$, $\zeta\in \mathbb{D}$. Then $$ \mathcal{M}_{2,\zeta}(\Phi) = \left( \begin{array}{cc} A_{0,\zeta}^{-1} + A_{0,\zeta}^{-1} B_{0,\zeta} (D_{0,\zeta} - C_{0,\zeta} A_{0,\zeta}^{-1} B_{0,\zeta})^{-1} C_{0,\zeta} A_{0,\zeta}^{-1} & \ast \\ \ast & \ast \end{array} \right) $$ \begin{equation} \label{f2_53_8} = \left( \begin{array}{cc} A_{0,\zeta}^{-1} -\zeta A_{0,\zeta}^{-1} W F_\zeta ( I_\delta - \zeta T F_\zeta +\zeta C_{0,\zeta} A_{0,\zeta}^{-1} W F_\zeta)^{-1} C_{0,\zeta} A_{0,\zeta}^{-1} & \ast \\ \ast & \ast \end{array} \right),\ \zeta\in \mathbb{D}. \end{equation} Let $\{ u_j \}_{j=0}^{\rho-1}$ be a set of elements which were obtained by the Gram-Schmidt orthogonalization of $\{ y_k^- \}_{k=0}^{N-1}$. Observe that $\rho \geq 1$. In the opposite case we have $y_k^- = 0$, $0\leq k\leq N-1$. By~(\ref{f2_45_22}) we obtain that the moment problem~(\ref{f1_1}) is determinate, what contradicts to our assumptions. Set $$ H_\rho^- := \mathop{\rm Lin}\nolimits\{ y_k^- \}_{k=0}^{N-1} = \mathop{\rm Lin}\nolimits\{ u_j \}_{j=0}^{\rho-1}. $$ Consider the following operator: $$ \mathcal{J}_\zeta := P^H_{H_\rho^-} (E_H - \zeta ( V \oplus \Phi_\zeta ))^{-1} P^H_{H_\rho^-},\quad \zeta\in \mathbb{D}, $$ as an operator in the (finite-dimensional) Hilbert space $H_\rho^-$. Its matrix in the basis $\{ u_j \}_{j=0}^{\rho-1}$ we denote by $J_\zeta$. It is given by $$ J_\zeta = A_{1,\zeta} -\zeta A_{2,\zeta} W F_\zeta ( I_\delta - \zeta T F_\zeta +\zeta C_{0,\zeta} A_{0,\zeta}^{-1} W F_\zeta)^{-1} C_{0,\zeta} A_{3,\zeta},\quad \zeta\in \mathbb{D}. 
$$ Here $A_{1,\zeta}$ is the submatrix formed by the first $\rho$ rows and the first $\rho$ columns of $A_{0,\zeta}^{-1}$; $A_{2,\zeta}$ is the submatrix formed by the first $\rho$ rows of $A_{0,\zeta}^{-1}$; $A_{3,\zeta}$ is the submatrix formed by the first $\rho$ columns of $A_{0,\zeta}^{-1}$.
Consider the following operator from $\mathbb{C}^N$ to $H_\rho^-$: $$ \mathcal{K} \sum_{n=0}^{N-1} c_n \vec e_n = \sum_{n=0}^{N-1} c_n y_n^-,\qquad c_n\in \mathbb{C}, $$ where $\vec e_n = (\delta_{n,0},\delta_{n,1},\ldots,\delta_{n,N-1})\in \mathbb{C}^N$. Let $K$ be the matrix of $\mathcal{K}$ with respect to the orthonormal bases $\{ \vec e_n \}_{n=0}^{N-1}$ and $\{ u_j \}_{j=0}^{\rho-1}$: \begin{equation} \label{f2_55_1} K = \left( \left( \mathcal{K} \vec e_k, u_j \right)_H \right)_{0\leq j\leq \rho-1,\ 0\leq k\leq N-1} = \left( \left(y_k^-, u_j \right)_H \right)_{0\leq j\leq \rho-1,\ 0\leq k\leq N-1}. \end{equation} By~(\ref{f2_45_22}) we may write that $$ \int_\mathbb{R} \frac{1}{\lambda - z} dm_{k,j} (\lambda) $$ $$ = \frac{2i}{(z^2 +1)^2} \left( P^H_{H^-_\rho}\left[ E - \zeta ( V \oplus \Phi_\zeta ) \right]^{-1} P^H_{H^-_\rho} \mathcal{K} \vec e_k, \mathcal{K} \vec e_j \right)_H - \frac{1}{(z^2+1)(z-i)} \varphi_{j,k}(z) $$ $$ = \frac{2i}{(z^2 +1)^2} \left( \mathcal{K}^* \mathcal{J}_\zeta \mathcal{K} \vec e_k, \vec e_j \right)_{\mathbb{C}^N} - \frac{1}{(z^2+1)(z-i)} \varphi_{j,k}(z), $$ \begin{equation} \label{f2_55_2} 0\leq k,j\leq N-1,\quad z\in \mathbb{C}_+\backslash\{ i \},\quad \zeta = \frac{z-i}{z+i}, \end{equation} establishes a one-to-one correspondence between all analytic in $\mathbb{D}$ operator-valued functions $\Phi_\cdot$, which values are linear contractions from $H\ominus H^-$ into $H\ominus H^+$, and all solutions $M(\lambda)=(m_{k,j}(\lambda))_{k,j=0}^{N-1}$ of the moment problem~(\ref{f1_1}).
\noindent Observe that $\left( \mathcal{K}^* \mathcal{J}_\zeta \mathcal{K} \vec e_k, \vec e_j \right)_{\mathbb{C}^N}$ is the element in the $j$-th row and $k$-th column of the matrix $\mathcal{M}_{3,\zeta}$ of the operator $\mathcal{J}_{1,\zeta} := \mathcal{K}^* \mathcal{J}_\zeta \mathcal{K}$ in the basis $\{ e_n \}_{n=0}^{N-1}$. We may write $$ \mathcal{M}_{3,\zeta} = K^* J_\zeta K $$ $$ = K^* A_{1,\zeta} K - \zeta K^* A_{2,\zeta} W F_\zeta ( I_\delta + \zeta (C_{0,\zeta} A_{0,\zeta}^{-1} W - T) F_\zeta)^{-1} C_{0,\zeta} A_{3,\zeta} K,\quad \zeta\in \mathbb{D}. $$ Set $$ \Delta(z) := (\varphi_{j,k}(z))_{j,k=0}^{N-1},\qquad z\in \mathbb{C}_+. $$ Then the following relation $$ \int_\mathbb{R} \frac{1}{\lambda - z} dM^T (\lambda) = \frac{2i}{(z^2 +1)^2} K^* A_{1,\zeta} K
- \frac{1}{(z^2+1)(z-i)} \Delta(z) $$ $$ - \frac{2i}{(z^2 +1)^2} \zeta K^* A_{2,\zeta} W F_\zeta ( I_\delta + \zeta (C_{0,\zeta} A_{0,\zeta}^{-1} W - T) F_\zeta)^{-1} C_{0,\zeta} A_{3,\zeta} K, $$ \begin{equation} \label{f2_55_3} z\in \mathbb{C}_+\backslash\{ i \},\quad \zeta = \frac{z-i}{z+i}, \end{equation} establishes a one-to-one correspondence between all analytic in $\mathbb{D}$, $\mathbb{C}_{\omega\times\delta}$-valued functions $F_\zeta$, which values are such that $F_\zeta^* F_\zeta\leq I_\delta$, and all solutions $M(\lambda)$ of the moment problem~(\ref{f1_1}). Set $$ \mathbf{A}(z) = 2i K^* A_{1,\zeta} K - (z+i) \Delta(z), $$ $$ \mathbf{B}(z) = - 2i \zeta K^* A_{2,\zeta} W, $$ $$ \mathbf{C}(z) = \zeta (C_{0,\zeta} A_{0,\zeta}^{-1} W - T), $$ \begin{equation} \label{f2_55_4} \mathbf{D}(z) = C_{0,\zeta} A_{3,\zeta} K,\quad z\in \mathbb{C}_+\backslash\{ i \},\quad \zeta = \frac{z-i}{z+i}. \end{equation} Then the right-hand side of~(\ref{f2_55_3}) becomes $$ \frac{1}{(z^2+1)^2} \mathbf{A}(z) + \frac{1}{(z^2+1)^2} \mathbf{B}(z) F_\zeta ( I_\delta + \mathbf{C}(z) F_\zeta)^{-1} \mathbf{D}(z). $$
\begin{thm} \label{t2_2} Let the matrix Hamburger moment problem~(\ref{f1_1}) be given and condition~(\ref{f1_4}), with $\Gamma_n$ from~(\ref{f1_3}), be satisfied. Suppose that the moment problem is indeterminate. All solutions of the moment problem~(\ref{f1_1}) can be obtained from the following relation: $$ \int_\mathbb{R} \frac{1}{\lambda - z} dM^T (\lambda) $$ \begin{equation} \label{f2_59} = \frac{1}{(z^2+1)^2} \mathbf{A}(z) + \frac{1}{(z^2+1)^2} \mathbf{B}(z) \mathbf{F}(z) ( I_\delta + \mathbf{C}(z) \mathbf{F}(z))^{-1} \mathbf{D}(z),\quad z\in \mathbb{C}_+\backslash\{ i \}, \end{equation} where $\mathbf{A}(z)$, $\mathbf{B}(z)$, $\mathbf{C}(z)$, $\mathbf{D}(z)$ are analytic in $\mathbb{C}_+$, matrix-valued functions defined by~(\ref{f2_55_4}), with values in $\mathbb{C}_{N\times N}$, $\mathbb{C}_{N\times \delta}$, $\mathbb{C}_{\delta\times \omega}$, $\mathbb{C}_{\delta\times N}$, respectively. Here $\mathbf{F}(z)$ is an analytic in $\mathbb{C}_+$, $\mathbb{C}_{\omega\times\delta}$-valued function which values are such that $\mathbf{F}(z)^* \mathbf{F}(z) \leq I_\delta$, $\forall z\in \mathbb{C}_+$. Conversely, each analytic in $\mathbb{C}_+$, $\mathbb{C}_{\omega\times\delta}$-valued function such that $\mathbf{F}(z)^* \mathbf{F}(z) \leq I_\delta$, $\forall z\in \mathbb{C}_+$, generates by relation~(\ref{f2_59}) a solution of the moment problem~(\ref{f1_1}). The correspondence between all analytic in $\mathbb{C}_+$, $\mathbb{C}_{\omega\times\delta}$-valued functions such that $\mathbf{F}(z)^* \mathbf{F}(z) \leq I_\delta$, $\forall z\in \mathbb{C}_+$, and all solutions of the moment problem~(\ref{f1_1}) is bijective. \end{thm} {\bf Proof. } The proof follows from the preceding considerations. $\Box$
\begin{center} {\large\bf The Nevanlinna-type formula for the matrix Hamburger moment problem in a general case.} \end{center} \begin{center} {\bf S.M. Zagorodnyuk} \end{center}
In this paper we obtain a Nevanlinna-type formula for the matrix Hamburger moment problem in a general case. We only assume that the problem is solvable and has more than one solution. We express the matrix coefficients of the corresponding linear fractional transformation in terms of the prescribed moments. Necessary and sufficient conditions for the determinacy of the moment problem are given.
}
\end{document}
Optimality condition and iterative thresholding algorithm for \(l_p\)-regularization problems
Hongwei Jiao, Yongqiang Chen & Jingben Yin
SpringerPlus volume 5, Article number: 1873 (2016)
This paper investigates the \(l_p\)-regularization problems, which have broad applications in compressive sensing, variable selection problems and sparse least squares fitting for high dimensional data. We derive the exact lower bounds for the absolute value of nonzero entries in each global optimal solution of the model, which clearly demonstrate the relation between the sparsity of the optimum solution and the choice of the regularization parameter and norm. We also establish a necessary condition for global optimum solutions of \(l_p\)-regularization problems: the global optimum solutions are fixed points of a vector thresholding operator. In addition, by selecting the parameters carefully, a global minimizer with a desired level of sparsity can be obtained. Finally, an iterative thresholding algorithm is designed for solving the \(l_p\)-regularization problems, and any accumulation point of the sequence generated by the designed algorithm is a fixed point of the vector thresholding operator.
In this paper, we investigate the following \(l_p\)-regularization problems
$$\begin{aligned} \min _{s\in {R}^n} f_\lambda (s):= \Vert As-b \Vert _2^2+ \lambda \Vert s \Vert _p^p \end{aligned}$$
where \(A\in {R}^{m\times n},~b\in {R}^m,~\lambda \in (0, \infty ), \Vert s \Vert _p^p=\sum \nolimits _{i=1}^{n} |s_i |^p, ~p\in (0, 1)\). The problem (1) has broad applications in compressive sensing, variable selection problems and sparse least squares fitting for high dimensional data (see Chartrand and Staneva 2008; Fan and Li 2001; Foucart and Lai 2009; Frank and Freidman 1993; Ge et al. 2011; Huang et al. 2008; Knight and Wu 2000; Lai and Wang 2011; Natarajan 1995). The objective function of the problem (1) consists of a data-fitting term \(\Vert As-b \Vert _2^2\) and a regularization term \(\lambda \Vert s \Vert _p^p\). Chen et al. (2014) point out that the \(l_2\)-\(l_p\) minimization problem (1) is strongly NP-hard. Compared with the \(l_1\) norm, using the \(l_p\) quasi-norm in the regularization term yields sparser solutions, which has been extensively discussed in Candès et al. (2008), Chartrand (2007a, b), Chartrand and Yin (2008), Chen et al. (2010), Tian and Huang (2013), Tian and Jiao (2015), Xu et al. (2010, 2012), Shehu et al. (2013, 2015), Bredies et al. (2015), Fan et al. (2016). Chen et al. (2010) derived lower bounds for the absolute value of nonzero entries in each local optimum solution of the model. Xu et al. (2012) presented an analytical expression, in a thresholding form, for the resolvent of the gradient of \(\Vert s\Vert _{1/2}^{1/2}\), developed an alternative feature theorem on optimum solutions of the \(L_{1/2}\) regularization problem, and proposed an iterative half thresholding algorithm for fast solving of the problem. However, there is no corresponding result on the characteristics of the global optimum solutions of problem (1).
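To make the role of the two terms concrete, the following Python sketch (the data, the parameter choices and the helper name f_lambda are illustrative assumptions, not taken from the paper) evaluates the objective of (1) at a sparse interpolant and at the dense minimum-\(l_2\)-norm interpolant of the same data:

```python
import numpy as np

def f_lambda(s, A, b, lam, p):
    # objective of (1): ||A s - b||_2^2 + lam * ||s||_p^p
    return np.sum((A @ s - b) ** 2) + lam * np.sum(np.abs(s) ** p)

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 20))        # underdetermined system, m < n
s_true = np.zeros(20)
s_true[[3, 11]] = [1.5, -2.0]            # sparse signal
b = A @ s_true

s_dense = np.linalg.pinv(A) @ b          # minimum-l2-norm interpolant
lam, p = 0.5, 0.5
val_sparse = f_lambda(s_true, A, b, lam, p)
val_dense = f_lambda(s_dense, A, b, lam, p)
```

Both vectors fit the data essentially exactly, but the \(l_p\) quasi-norm of the dense solution is much larger, so the sparse vector attains the smaller objective — this is the sparsity-promoting effect of the penalty.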
In this article, inspired by Xu et al. (2012), we focus on deriving the characteristics of the global optimum solutions of problem (1). The remaining sections of the paper are organized as follows. In "Technical preliminaries" section, we present some important technical results. "Lower bound and optimality conditions" section first develops the proximal operator associated with the non-convex \(l_p\) quasi-norm, which can be viewed as an extension of the well-known proximal operator associated with convex functions. Next, an exact lower bound for the absolute value of nonzero entries in every global optimum solution of (1) is derived, which clearly demonstrates the relation between the sparsity of the optimum solution and the choice of the regularization parameter and norm. We also establish a necessary condition for global optimum solutions of the \(l_p\)-regularization problems: the global optimum solutions are fixed points of a vector thresholding operator. In "Choosing the parameter λ for sparsity" section, we propose a sufficient condition on the selection of \(\lambda\) to meet a sparsity requirement on the global minimizers of the \(l_p\)-regularization problems. "Iterative thresholding algorithm and its convergence" section proposes an iterative thresholding algorithm for the \(l_p\)-regularization problems; any accumulation point of the sequence produced by the algorithm is a fixed point of the vector thresholding operator. Finally, some conclusions are drawn in the last section.
Technical preliminaries
By utilizing the separability of the objective function and the operator splitting technique, the \(l_p\)-regularization problem (1) can be converted into n corresponding single-variable minimization problems defined on \((-\infty , +\infty )\). Therefore, we first investigate the corresponding single-variable minimization problem
$$\begin{aligned} \min _{s\in {R}}~g_r(s):=s^2-2rs+\lambda |s|^{p}, \end{aligned}$$
where \(\lambda >0\) and \(p\in (0,1)\) are arbitrary given real numbers, \(s\in {R}\) is a variable and \(r\in {R}\) is a parameter. Moreover, we only need to consider the following two subproblems
$$\begin{aligned} \min _{s\ge 0}~g_r(s)& = s^2-2rs+\lambda s^{p}, \end{aligned}$$
$$\begin{aligned} \min _{s\le 0}~g_r(s)& = s^2-2rs+\lambda (-s)^{p}. \end{aligned}$$
Chen et al. (2014) investigated the subproblem (3) and presented some results, which can be used to derive our conclusions. Let
$$\begin{aligned} {\bar{r}}\, {:=} \,\frac{2-p}{1-p}[ \lambda p(1-p)/2 ]^{1/(2-p)}>0, \end{aligned}$$
$$\begin{aligned} {\bar{s}} \;{:=} \;[ \lambda p(1-p)/2 ]^{1/(2-p)}>0. \end{aligned}$$
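The quantities \({\bar{r}}\) and \({\bar{s}}\) in (5) and (6) mark exactly the point where the derivative \(G(s,r)=2s-2r+\lambda p s^{p-1}\) of \(g_r\) first acquires a positive root: at \(s={\bar{s}}\), \(r={\bar{r}}\), both \(G\) and \(\partial G/\partial s\) vanish. A quick numerical check in Python (the particular values of \(\lambda\) and \(p\) are arbitrary illustrative choices):

```python
lam, p = 2.0, 0.4                     # any lam > 0 and 0 < p < 1

s_bar = (lam * p * (1 - p) / 2) ** (1 / (2 - p))
r_bar = (2 - p) / (1 - p) * s_bar

def G(s, r):
    # G(s, r) = d/ds g_r(s) = 2 s - 2 r + lam * p * s^(p-1)
    return 2 * s - 2 * r + lam * p * s ** (p - 1)

def dG_ds(s):
    # d/ds G(s, r) = 2 + lam * p * (p - 1) * s^(p-2)
    return 2 + lam * p * (p - 1) * s ** (p - 2)
```

Both G(s_bar, r_bar) and dG_ds(s_bar) evaluate to zero, confirming that for \(r>{\bar{r}}\) the equation \(G(s,r)=0\) has a root \(s>{\bar{s}}\), as asserted in Lemma 1 below.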
Lemma 1
(Lemma 2.2, Chen et al. 2014) For any \(s>0\), denote \(G(s,r) := [g_r(s)]^{'}=2s-2r+\lambda ps^{p-1}\). For any given \(r_0>{\bar{r}}\), let \(s_0\) (\(s_0>{\bar{s}}\)) be the positive root of the equation \(G(s,r_0)=0\), where \({\bar{r}}\) and \({\bar{s}}\) are given in (5) and (6). Then there is a unique implicit function \(s=h_{\lambda , p}(r)\) defined on \(({\bar{r}}, +\infty )\) which satisfies \(s_0=h_{\lambda , p}(r_0)\), \(h_{\lambda , p}(r)>{\bar{s}}\) and \(G(h_{\lambda , p}(r), r)\equiv 0\) for all \(r\in ({\bar{r}}, +\infty )\). Furthermore, for the function \(s=h_{\lambda , p}(r)\), the following conclusions hold:
\(s=h_{\lambda , p}(r)\) is a continuous function defined on \(({\bar{r}}, +\infty )\).
\(s=h_{\lambda , p}(r)\) is a differentiable function over \(({\bar{r}}, +\infty )\) and \(h_{\lambda , p}^{'}(r)=\frac{2}{2+\lambda p(p-1)h_{\lambda , p}^{p-2}(r)}\).
\(s=h_{\lambda , p}(r)\) is a strictly increasing function over \(({\bar{r}}, +\infty )\).
Moreover, if \(r>{\bar{r}}\), then \(s=h_{\lambda , p}(r)\) is the sole local minimizer of \(g_r(s)\) over \((0, +\infty )\).
Lemma 2
(Prop. 2.4, Chen et al. 2014) Let \(s^*\) be a global optimum solution for the problem (3). Then we have
$$\begin{aligned} s^*=h_\lambda (r):=\left\{ \begin{array}{lll} h_{\lambda , p}(r) , & r>r^*\\ (\lambda (1-p) )^{1/(2-p)} \quad \mathrm{or}\; 0, & r=r^* \\ 0, & r<r^* \end{array} \right. \end{aligned}$$
where \(r^*:=\frac{2-p}{2(1-p)} [\lambda (1-p) ]^{1/(2-p)}\), \(h_{\lambda , p}(r)\) is defined by Lemma 1.
Proposition 1
Let \(s^*\) be a global optimum solution for the problem (2). Then we have
$$\begin{aligned} s^*=h_\lambda (r):=\left\{ \begin{array}{lllll} h_{\lambda , p}(r) , & \quad r>r^* \\ L \quad \mathrm{or}\; 0, & \quad r=r^*\\ 0, & \quad -r^*<r<r^* \\ -L \quad \mathrm{or}\; 0, & \quad r=-r^*\\ -h_{\lambda , p}(-r) , & \quad r<-r^* \end{array} \right. \end{aligned}$$
where \(r^*:=\frac{2-p}{2(1-p)} [\lambda (1-p) ]^{1/(2-p)}\), \(h_{\lambda , p}(r)\) is defined in Lemma 1 and \(L:=(\lambda (1-p) )^{1/(2-p)}\).
If \(s\ge 0\), then \(g_r(s)=s^2-2rs+\lambda s^{p}\). Let \(s^*_1\) be a global optimum solution for the problem (3); then from Lemma 2, we have
$$\begin{aligned} s^*_1=h_\lambda (r)=\left\{ \begin{array}{lll} h_{\lambda , p}(r) , & \quad r>r^*\\ (\lambda (1-p) )^{1/(2-p)} \quad \mathrm{or} \; 0, & \quad r=r^*\\ 0. & \quad r<r^* \end{array} \right. \end{aligned}$$
If \(s\le 0\), then \(g_r(s)=s^2-2rs+\lambda (-s)^{p}=(-s)^2+2r(-s)+\lambda (-s)^{p}\). Letting \(y=-s\), we have \(y\ge 0\) and \(g_{(-r)}(y)=y^2+2ry+\lambda y^p\), which reduces to the first case. If \(y^*\) is a global optimum solution for the problem \(g_{(-r)}(y)\) over \([0, +\infty )\), then from Lemma 2, we have
$$\begin{aligned} y^*=h_\lambda (-r)=\left\{ \begin{array}{lll} h_{\lambda , p}(-r) , & \quad -r>r^* \\ (\lambda (1-p) )^{1/(2-p)} \quad \mathrm{or}\; 0, & \quad -r=r^*\\ 0. & \quad -r<r^* \end{array} \right. \end{aligned}$$
Therefore, if \(s\le 0\) and \(s^*_2\) is a global optimum solution for the problem \(\min _{s\in {R}_-}~g_r(s)=s^2-2rs+\lambda (-s)^{p}\), then we have
$$\begin{aligned} s^*_2=-y^*=\left\{ \begin{array}{lll} -h_{\lambda , p}(-r) , & \quad r<-r^* \\ -(\lambda (1-p) )^{1/(2-p)} \quad \mathrm{or} \; 0, & \quad r=-r^*\\ 0. & \quad r>-r^* \end{array} \right. \end{aligned}$$
Combining (8) and (9) yields (7). This completes the proof. \(\square\)
Assume that \(s^*\) is a global optimal solution of problem (2). When \(|r|=r^*\), with \(r^*\) given in Proposition 1, let \(s^*=h_{\lambda }(r)\) take either the zero or the nonzero value consistently. Then the following conclusions hold:
The function \(h_{\lambda }(r)\) is an odd function over \((-\infty , +\infty )\).
The function \(h_{\lambda }(r)\) is continuous over \((r^*, +\infty )\), furthermore, \(\lim _{r\downarrow r^*} h_{\lambda }(r)=L\).
The function \(h_{\lambda }(r)\) is differentiable over \((r^*, +\infty )\).
The function \(h_{\lambda }(r)\) is strictly increasing over \((r^*, +\infty )\).
The proposition follows directly from Proposition 1 and Lemma 1. \(\square\)
When \(p=1/2\), by Xu et al. (2012), the thresholding function \(h_{\lambda , p}(r)\) in (7) admits the following analytic form.
Corollary 1
(Theo. 1, Lemm. 1 and 2, Xu et al. 2012) When \(p=1/2\), the global optimal solution \(s^*\) of problem (2) admits the closed form:
$$\begin{aligned} s^*=h_\lambda (r):=\left\{ \begin{array}{lllll} h_{\lambda , 1/2}(r), & \quad r>r^*\\ (\lambda /2)^{2/3} \quad \mathrm{or} \; 0, & \quad r=r^*\\ 0, & \quad -r^*<r<r^* \\ -(\lambda /2)^{2/3} \quad \mathrm{or}\; 0, & \quad r=-r^*\\ -h_{\lambda , 1/2}(-r) , & \quad r<-r^* \end{array} \right. \end{aligned}$$
where \(h_{\lambda , 1/2}(r) =\frac{2}{3}r(1+\mathrm {cos}(\frac{2\pi }{3}-\frac{2\varphi (r)}{3}))\), \(\varphi (r)=\mathrm {arccos}(\frac{\lambda }{8}(\frac{|r|}{3} )^{-3/2} )\) and \(r^* =\frac{\root 3 \of {54}}{4}\lambda ^{2/3}\).
A brief proof is presented here for completeness. When \(p=1/2\), we have \(r^* =\frac{\root 3 \of {54}}{4}\lambda ^{2/3}\). When \(|r|>r^*\), we have \(s^*=h_{\lambda }(r)\ne 0\); by Proposition 2, \(h_{\lambda }(r)\) is then the root of the equation
$$\begin{aligned} s-r+\frac{\lambda \mathrm{sign(s)}}{4 \sqrt{|s|}}=0, \end{aligned}$$
which follows from the first-order optimality condition of (2). By Theorem 1 of Xu et al. (2012), we have \(h_{\lambda , 1/2}(r) =\frac{2}{3}r(1+\mathrm {cos}(\frac{2\pi }{3}-\frac{2\varphi (r)}{3})),~ \varphi (r)=\mathrm {arccos}(\frac{\lambda }{8}(\frac{|r|}{3} )^{-3/2} )\). This completes the proof. \(\square\)
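As an illustration (not from the paper), the closed form of Corollary 1 can be implemented directly and checked against the first-order condition \(s-r+\lambda \,\mathrm{sign}(s)/(4\sqrt{|s|})=0\); a minimal numpy sketch:

```python
import numpy as np

def half_threshold(r, lam):
    """p = 1/2 thresholding function h_lambda(r) of Corollary 1.

    Returns the nonzero minimizer for |r| > r* and 0 for |r| <= r*
    (at |r| = r* we pick 0, matching the convention used by the
    iterative algorithm later in the paper)."""
    r_star = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    if abs(r) <= r_star:
        return 0.0
    phi = np.arccos((lam / 8.0) * (abs(r) / 3.0) ** (-1.5))
    s = (2.0 / 3.0) * abs(r) * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return np.sign(r) * s  # odd symmetry: h(-r) = -h(r)
```

For any \(|r|>r^*\) the returned value satisfies the stationarity equation to machine precision, which is a quick way to validate the trigonometric formula.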
Lower bound and optimality conditions
In this section, by exploiting the separability of the objective function and the operator splitting technique, we derive the proximal operator associated with the \(l_p\) quasi-norm. We then present properties of the global optimal solutions of the \(l_p\)-regularization problem (1). For convenience, we first define the following thresholding function and thresholding operator.
(\(\mathrm {p}\) thresholding function) Assume that \(r\in {R}\); for any \(\lambda > 0\), the function \(h_\lambda (r)\) defined in (7) is called the \(\mathrm {p}\) thresholding function.
(Vector \(\mathrm {p}\) thresholding operator) Assume that \(s\in {R}^n\), for any \(\lambda > 0\), the vector \(\mathrm {p}\) thresholding operator \(H_\lambda (s)\) is defined as
$$\begin{aligned} H_\lambda (s):=(h_\lambda (s_1),h_\lambda (s_2),\ldots ,h_\lambda (s_n))^T. \end{aligned}$$
One of the main results of this section is the proximal operator associated with the non-convex \(l_p~(0<p<1)\) quasi-norm, which can also be viewed as an extension of the well-known proximal operator associated with convex functions.
Theorem 1
Given a vector \(y\in {R}^n\) and constants \(\lambda >0, ~0<p<1\), let \(s^*\) be the global optimal solution of the following problem
$$\begin{aligned} \min _{s\in {R}^n}~f(s):=\Vert s-y \Vert _2^2+\lambda \Vert s \Vert _{p}^{p}, \end{aligned}$$
then \(s^*\) can be expressed as
$$\begin{aligned} s^*=H_{\lambda }(y). \end{aligned}$$
Furthermore, the exact number of global optimal solutions of the problem can be determined.
$$\begin{aligned} f(s)&=\Vert s-y \Vert _2^2+\lambda \Vert s \Vert _{p}^{p}=\Vert s \Vert _2^2-2\langle s, y \rangle +\Vert y \Vert _2^2 +\lambda \Vert s \Vert _{p}^{p} \\&=\sum \limits _{i=1}^{n} \left( s^2_i- 2y_i s_i+\lambda {|s_i|}^p \right) +\Vert y \Vert _2^2. \end{aligned}$$
Let \(g_{y_i}(s_i)=s^2_i- 2y_i s_i+\lambda {|s_i|}^p\), then
$$\begin{aligned} f(s)=\sum \limits _{i=1}^{n} g_{y_i}(s_i)+\Vert y \Vert _2^2. \end{aligned}$$
Therefore, solving problem (11) is equivalent to solving the following n one-dimensional problems: for each \(i=1,2,\ldots ,n\),
$$\begin{aligned} \min _{s_i \in {R}}~ g_{y_i}(s_i). \end{aligned}$$
By Proposition 1, for each \(i=1,2,\ldots ,n\), we obtain
$$\begin{aligned} s_i^*=\arg \min_{s_i \in {R}}~ g_{y_i}(s_i)=h_{\lambda }(y_i), \end{aligned}$$
and if \(|y_i |=r^*:=\frac{2-p}{2(1-p)} [\lambda (1-p) ]^{1/(2-p)}\), problem (12) has two solutions; otherwise it has a unique solution. Hence the exact number of global optimal solutions of (11) is known. The proof is thus complete. \(\square\)
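To illustrate the separability argument numerically (a sketch, not from the paper; \(\lambda\) and y below are arbitrary choices, and p = 1/2 is used because of its closed form from Corollary 1), the coordinatewise minimizer can be compared against a brute-force per-coordinate grid search:

```python
import numpy as np

def half_threshold(r, lam):
    # p = 1/2 closed-form scalar minimizer (Corollary 1);
    # returns 0 inside the threshold region |r| <= r*
    r_star = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    if abs(r) <= r_star:
        return 0.0
    phi = np.arccos((lam / 8.0) * (abs(r) / 3.0) ** (-1.5))
    return np.sign(r) * (2.0 / 3.0) * abs(r) * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))

def objective(s, y, lam, p=0.5):
    # f(s) = ||s - y||_2^2 + lam * ||s||_p^p  (the proximal problem)
    return np.sum((s - y) ** 2) + lam * np.sum(np.abs(s) ** p)

rng = np.random.default_rng(0)
lam = 0.8
y = 2.0 * rng.normal(size=5)
s_star = np.array([half_threshold(yi, lam) for yi in y])  # H_lam(y)

# brute force: minimize each coordinate over a dense grid (0 included)
grid = np.linspace(-5.0, 5.0, 200001)
s_grid = np.array([grid[np.argmin((grid - yi) ** 2 + lam * np.abs(grid) ** 0.5)]
                   for yi in y])
```

The closed-form solution never does worse than the grid search, coordinate by coordinate, which is exactly the separability used in the proof.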
For any \(\lambda ,~ \mu > 0,~ 0<p<1,\) and \(z\in {R}^n\), let
$$\begin{aligned} f_{\mu }(s,z):=\mu (f_{\lambda }(s)-\Vert As-Az \Vert _2^2)+ \Vert s-z \Vert _2^2, \end{aligned}$$
For simplicity, let
$$\begin{aligned} B_{\mu }(z):=z+{\mu } A^T(b-Az). \end{aligned}$$
Let \(s^*\in {R}^n\) be the global minimizer of \(f_\mu (s,z)\) for any fixed \(\lambda>0, \mu >0\) and \(z\in {R}^n\); then we have
$$\begin{aligned} s^*=H_{\lambda \mu }(B_{\mu }(z)). \end{aligned}$$
The function \(f_\mu (s,z)\) can be rewritten as
$$\begin{aligned} \begin{array}{lll} f_\mu (s,z)&=&\mu (\Vert As-b \Vert _2^2+\lambda \Vert s \Vert _p^p -\Vert As-Az \Vert _2^2)+ \Vert s-z \Vert _2^2\\ &=&\lambda \mu \Vert s \Vert _p^p+\Vert s \Vert _2^2-2\langle s, z+\mu A^T (b-Az) \rangle +\Vert z \Vert _2^2+\mu \Vert b \Vert _2^2-\mu \Vert Az \Vert _2^2\\ &=&\Vert s -B_\mu (z)\Vert _2^2+\lambda \mu \Vert s \Vert _p^p+ \Vert z \Vert _2^2+\mu \Vert b \Vert _2^2-\mu \Vert Az \Vert _2^2-\Vert B_\mu (z) \Vert _2^2. \end{array} \end{aligned}$$
Therefore, solving \(\min _{s\in {R}^n} f_\mu (s,z)\) for any fixed \(\lambda , \mu\) and \(z\) is equivalent to solving
$$\begin{aligned} \min _{s\in {R}^n}\large \{\Vert s-B_\mu (z) \Vert _2^2+\lambda \mu \Vert s \Vert _p^p \}. \end{aligned}$$
By Theorem 1, the proof is complete. \(\square\)
If \(s^*\in {R}^n\) is a global minimizer of the problem (1) for any fixed \(\lambda >0\) and any fixed \(\mu\) satisfying \(0 <\mu \le \Vert A \Vert ^{-2}\), then \(s^*\) is also a global minimizer of \(f_\mu (s, s^*)\), that is,
$$\begin{aligned} f_{\mu }(s^*, s^*)\le f_{\mu }(s, s^*) \quad \mathrm{for\;all}\;s\;\in {R}^{n}. \end{aligned}$$
For any \(s\in {R}^{n}\), since \(0 <\mu \le \Vert A \Vert ^{-2}\), we have
$$\begin{aligned} \Vert s-s^* \Vert _2^2-\mu \Vert As-As^* \Vert _2^2\ge \Vert s-s^* \Vert _2^2-\mu \Vert A \Vert ^{2} \Vert s-s^* \Vert _2^2\ge 0. \end{aligned}$$
$$\begin{aligned} \begin{array}{lll} f_{\mu }(s, s^*)&=&\mu (f_\lambda (s)-\Vert As-As^* \Vert _2^2)+ \Vert s-s^* \Vert _2^2\\ &=&\mu (\Vert As-b \Vert _2^2+\lambda \Vert s \Vert _p^p)+ (\Vert s-s^* \Vert _2^2-\mu \Vert As-As^* \Vert _2^2)\\ &\ge & \mu (\Vert As-b \Vert _2^2+\lambda \Vert s \Vert _p^p) \\ &=& \mu f_\lambda (s)\ge \mu f_\lambda (s^*) \\ &=& f_{\mu }(s^*, s^*) \\ \end{array} \end{aligned}$$
Thus the proof is complete. \(\square\)
For any given \(\lambda >0,~0<\mu \le \Vert A \Vert ^{-2}\), if \(s^*\) is the global optimal solution of problem (1), then \(s^*\) satisfies
$$\begin{aligned} s^*=H_{\lambda \mu }(B_\mu (s^*)). \end{aligned}$$
In particular, we have
$$\begin{aligned} \begin{array}{ll} s^*_i & =h_{\lambda \mu }([B_\mu (s^*)]_i)\\ & =\left\{ \begin{array}{llll} h_{\lambda \mu , p}([B_\mu (s^*)]_i), & \quad \mathrm{if} \quad {[B_\mu (s^*)]}_i> r^* \\ L~\mathrm{or}~0, & \quad \mathrm{if} \quad {[B_\mu (s^*)]}_i= r^* \\ 0, & \quad \mathrm{if} \quad -r^*<{[B_\mu (s^*)]}_i< r^* \\ -L~\mathrm{or}~0, & \quad \mathrm{if} \quad {[B_\mu (s^*)]}_i= -r^* \\ -h_{\lambda \mu , p}(-[B_\mu (s^*)]_i), & \quad \mathrm{if} \quad {[B_\mu (s^*)]}_i< -r^* \end{array} \right. \end{array} \end{aligned}$$
where \(r^*:=\frac{2-p}{2(1-p)} [\lambda \mu (1-p) ]^{1/(2-p)}\) and \(L:=(\lambda \mu (1-p) )^{1/(2-p)}\).
Furthermore, we have: if \(s^*_i\in (-L, L)\), then \(s^*_i=0\).
Since \(s^*\) is a global minimizer of \(f_\mu (s, z)\) for given \(z=s^*\), by Theorem 2 and Lemma 3 we directly obtain (16) and (17). By Proposition 2, we have
$$\begin{aligned} \lim _{r\downarrow r^*} h_{\lambda \mu }(r)=[\lambda \mu (1-p)]^{\frac{1}{2-p}} \;{=:}\;L. \end{aligned}$$
By Proposition 2, combined with the strict monotonicity of \(h_{\lambda \mu }(\cdot )\) on \(({\bar{r}}, +\infty )\) and \((-\infty , -{\bar{r}})\), it follows that \(s^*_i> L\) when \({[B_\mu (s^*)]}_i> r^*\), \(s^*_i< -L\) when \({[B_\mu (s^*)]}_i< -r^*\), and \(|s^*_i|=L\) when \(| {[B_\mu (s^*)]}_i| = r^*\). This completes the proof. \(\square\)
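The jump behavior behind this lower bound — as the argument crosses \(r^*\), the scalar minimizer jumps from 0 to a magnitude of at least L — can be checked numerically for a general p by grid search (a sketch, not from the paper; \(\lambda =1\), \(p=0.7\), and the grid bounds are illustrative choices):

```python
import numpy as np

def scalar_min(r, lam, p):
    # global minimizer of g_r(s) = s^2 - 2*r*s + lam*s^p over s >= 0,
    # located by a dense grid search (r >= 0 suffices, by odd symmetry)
    grid = np.linspace(0.0, 5.0, 500001)
    vals = grid ** 2 - 2.0 * r * grid + lam * grid ** p
    return grid[np.argmin(vals)]

lam, p = 1.0, 0.7
# threshold r* and lower bound L, as in Proposition 1
r_star = (2 - p) / (2 * (1 - p)) * (lam * (1 - p)) ** (1 / (2 - p))
L = (lam * (1 - p)) ** (1 / (2 - p))

below = scalar_min(0.99 * r_star, lam, p)  # just below the threshold -> 0
above = scalar_min(1.01 * r_star, lam, p)  # just above -> magnitude >= L
```

The gap between `below` and `above` mirrors the statement that no global minimizer has a nonzero entry of magnitude below L.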
In Theorem 3, a necessary condition for global optimal solutions of the \(l_p\)-regularization problem is established, namely a thresholding expression satisfied by every global optimal solution. In particular, the global optimal solutions of problem (1) are fixed points of a vector-valued thresholding operator. The converse does not hold in general, i.e., a point satisfying (16) is not in general a global optimal solution of the \(l_p\)-regularization problem (1). This depends on the nature of the matrix A; for instance, when \(A\equiv I\) and \(\mu =1\), a fixed point of (16) is a global optimal solution of the \(l_p\)-regularization problem (1) (cf. Theorem 1).
Theorem 3 also provides the exact lower bound for the absolute value of the nonzero entries of every global optimal solution of the model, which can be used to identify the zero entries of any global optimal solution precisely. These lower bounds clearly demonstrate the relationship between the sparsity of the global optimal solution and the choice of the regularization parameter and norm; therefore, the theorem can be used to select the desired model parameters and norms.
Choosing the parameter \(\lambda\) for sparsity
In many applications, such as sparse solution reconstruction and variable selection, one needs to seek least squares estimators with no more than k nonzero entries. Chen et al. (2014) presented a sufficient condition on \(\lambda\) under which global minimizers of the \(l_p\)-regularization problem have the desired sparsity, based on the lower bound theory for local optimal solutions. In this paper, we present a sufficient condition on \(\lambda\) with the same sparsity guarantee, but based on the lower bound theory for global optimal solutions.
$$\begin{aligned} \beta (k)=k^{(p-2)/2} [\mu (1-p)]^{-p/2}\Vert b \Vert ^{2-p}, \quad 1\le k \le n. \end{aligned}$$
The following conclusions hold.
If \(\lambda \ge \beta (k)\), then any global minimizer \(s^*\) of the \(l_p\)-regularization problems (1) satisfies \(\Vert s^* \Vert _0 <k\) for \(1\le k \le n\).
If \(\lambda \ge \beta (1)\), then \(s^*=0\) is the unique global minimizer of the \(l_p\)-regularization problems (1).
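As a quick numerical illustration of this rule (a sketch; the values of p and b below are arbitrary, and \(\mu\) is chosen as \(\Vert A\Vert ^{-2}\) for a hypothetical \(\Vert A\Vert =3\)), \(\beta (k)\) can be computed directly. Since \((p-2)/2<0\), \(\beta (k)\) decreases in k, so enforcing a smaller sparsity level k requires a larger regularization parameter \(\lambda\):

```python
import numpy as np

def beta(k, p, mu, b):
    # beta(k) = k^((p-2)/2) * [mu*(1-p)]^(-p/2) * ||b||^(2-p)
    return k ** ((p - 2.0) / 2.0) * (mu * (1.0 - p)) ** (-p / 2.0) \
        * np.linalg.norm(b) ** (2.0 - p)

p = 0.5
b = np.array([1.0, -2.0, 0.5])
mu = 1.0 / 9.0   # mu <= ||A||^{-2} with a hypothetical ||A|| = 3
lams = [beta(k, p, mu, b) for k in range(1, 6)]
```

Choosing any \(\lambda \ge\) `lams[0]` forces the zero solution, while \(\lambda \ge\) `lams[k-1]` caps the support size below k.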
Assume that \(s^*\ne 0\) is a global minimizer of the \(l_p\)-regularization problem (1). Let \(B=A_T\in {R}^{m\times |T |}\), where \(T=\mathrm{support}(s^*)\) and \(|T |=\Vert s^* \Vert _0\) is the cardinality of the set T. Then, by the first-order necessary condition, \(s^*\) must satisfy
$$\begin{aligned} 2B^T(Bs^*_T-b)+\lambda p(|s^*_T |^{p-2}\cdot (s^*_T))=0, \end{aligned}$$
which shows \(As^*-b=Bs^*_T-b\ne 0\). Hence, we have
$$\begin{aligned} f_\lambda (s^*)=\Vert As^*-b \Vert ^2+\lambda \Vert s^* \Vert _p^p> \lambda \sum \limits _{i\in T} |s^*_i |^p. \end{aligned}$$
By Theorem 3, it follows that
$$\begin{aligned} |s^*_i |\ge (\lambda \mu (1-p))^{1/(2-p)}, ~ i \in T. \end{aligned}$$
Therefore, we have
$$\begin{aligned} f_\lambda (s^*)> \lambda |T | (\lambda \mu (1-p))^{p/(2-p)}. \end{aligned}$$
In the following, we will discuss different cases:
Assume that \(\lambda \ge \beta (k)\); we prove the claim by contradiction. If \(\Vert s^* \Vert _0\ge k \ge 1\), then by the lower bound on \(f_\lambda (s^*)\) derived above and the definition of \(\beta (k)\), we have
$$\begin{aligned}\begin{array}{ll} f_\lambda (s^*) &> \lambda |T | (\lambda \mu (1-p))^{p/(2-p)} = k \lambda ^{2/(2-p)} (\mu (1-p))^{p/(2-p)} \\ &\ge k k^{-1} \Vert b \Vert ^2 \\ &= \Vert b \Vert ^2=f_\lambda (0).\\ \end{array} \end{aligned}$$
This contradicts the fact that \(s^*\) is a global minimizer of (1). Therefore, \(\Vert s^* \Vert _0< k\).
Assume that \(\lambda \ge \beta (1)\); we again argue by contradiction. If \(s^* \ne 0\), then there exists \(i_0\) such that \(s^*_{i_0}\ne 0\) and
$$\begin{aligned} f_\lambda (s^*)=\Vert As^*-b \Vert ^2+\lambda \Vert s^* \Vert ^p_p> \lambda |s^*_{i_0} |^p \ge \lambda (\lambda \mu (1-p))^{p/(2-p)}\ge \Vert b \Vert ^2=f_\lambda (0). \end{aligned}$$
This contradicts the fact that \(s^*\) is a global minimizer of (1). Therefore, \(s^*=0\) must be the unique global minimizer of (1). \(\square\)
Iterative thresholding algorithm and its convergence
By the thresholding representation formula (16), an iterative thresholding scheme for problem (1) can be stated as follows: initialized at \(s^0\in {R}^n\),
$$\begin{aligned} s^{k+1}=H_{\lambda \mu }(s^k+\mu A^T (b-As^k)), \end{aligned}$$
$$\begin{aligned} h_{\lambda \mu }(r):=\left\{ \begin{array}{lll} h_{\lambda \mu , p}(r) , &\quad r>r^*\\ 0, &\quad -r^* \le r\le r^*\\ -h_{\lambda \mu , p}(-r) , & \quad r<-r^*\\ \end{array} \right. \end{aligned}$$
When \(|r| =r^*\), we adopt the convention of selecting only \(h_{\lambda \mu }(r)= 0\).
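A minimal numpy sketch of iteration (22) for p = 1/2, using the closed-form threshold of Corollary 1 (the problem sizes, \(\lambda\), the random data, and the step-size safety factor 0.99 are illustrative choices, not from the paper):

```python
import numpy as np

def half_threshold_vec(r, lam):
    # vector p = 1/2 thresholding operator H_lam, with h_lam(r) = 0
    # on the whole region |r| <= r*, as in (23)
    r_star = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(r)
    big = np.abs(r) > r_star
    phi = np.arccos((lam / 8.0) * (np.abs(r[big]) / 3.0) ** (-1.5))
    out[big] = np.sign(r[big]) * (2.0 / 3.0) * np.abs(r[big]) * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def ita(A, b, lam, n_iter=500):
    # iteration (22): s^{k+1} = H_{lam*mu}(s^k + mu * A^T (b - A s^k))
    mu = 0.99 / np.linalg.norm(A, 2) ** 2  # ensures 0 < mu < ||A||^{-2}
    s = np.zeros(A.shape[1])
    objs = []
    for _ in range(n_iter):
        objs.append(np.sum((A @ s - b) ** 2) + lam * np.sum(np.abs(s) ** 0.5))
        s = half_threshold_vec(s + mu * A.T @ (b - A @ s), lam * mu)
    return s, objs

rng = np.random.default_rng(1)
m, n, T = 30, 80, 4                             # illustrative sizes
A = np.linalg.qr(rng.normal(size=(n, m)))[0].T  # orthonormal rows
x_true = np.zeros(n)
x_true[rng.permutation(n)[:T]] = 2.0 * rng.normal(size=T)
b = A @ x_true
s_hat, objs = ita(A, b, lam=0.01)
```

Consistent with Lemma 4, the recorded objective values are non-increasing, and every nonzero entry of the output respects the lower bound L of Theorem 3.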
Firstly, some important lemmas are given in the following.
Let \(0<\mu <\Vert A \Vert ^{-2}\) and let \(\{s^k \}\) be the sequence produced by algorithm (22); then the sequences \(\{ f_\lambda (s^k) \}_k\) and \(\{ f_\mu (s^{k+1}, s^k) \}_k\) are non-increasing.
For \(0<\mu <\Vert A \Vert ^{-2}\), we have
$$\begin{aligned} \Vert s^{k+1}-s^k \Vert _2^2-\mu \Vert As^{k+1}-As^{k} \Vert _2^2\ge 0. \end{aligned}$$
$$\begin{aligned} \begin{array}{ll} f_{\lambda }(s^{k+1})&\le \mu ^{-1}(\mu f_{\lambda }(s^{k+1})+\Vert s^{k+1}-s^k \Vert _2^2-\mu \Vert As^{k+1}-As^{k} \Vert _2^2) \\ &=\mu ^{-1} f_{\mu }(s^{k+1}, s^k) \\ &\le \mu ^{-1} f_{\mu }(s^{k}, s^k) \\ &=f_{\lambda }(s^{k}) \\ &\le \mu ^{-1}(\mu f_{\lambda }(s^{k})+\Vert s^{k}-s^{k-1} \Vert _2^2-\mu \Vert As^{k}-As^{k-1} \Vert _2^2) \\ &=\mu ^{-1} f_{\mu }(s^{k}, s^{k-1}). \\ \end{array} \end{aligned}$$
The first equality follows from the definition of \(f_\mu (s, z)\). The second inequality holds because \(s^{k+1}\) is the minimizer of \(f_\mu (s, s^k)\). \(\square\)
This lemma demonstrates that the objective function \(f_\lambda (s)\) does not increase from iteration to iteration; in other words, each iterate is at least as good as the previous one. Algorithm (22) does not have a unique fixed point, so it is important to analyze its fixed points in detail.
Let \(\Gamma _0=\{i:~s^*_i=0 \}\) and \(\Gamma _1=\{i:~|s^*_i|>(\lambda \mu (1-p) )^{1/(2-p)} \}\). The point \(s^*\) is a fixed point of algorithm (22) if and only if
$$\begin{aligned} |A_i^T (b-As^*)|&\le \frac{2-p}{2} \lambda ^{1/(2-p)} [\mu (1-p) ]^{(p-1)/(2-p)}, \quad \mathrm{if} \,\, i\in \Gamma _0, \\ s^*_i&=h_{\lambda \mu , p}(s^*_i+\mu A_i^T (b-As^*)), \quad \mathrm{if} \,\, i\in \Gamma _1. \end{aligned}$$
A fixed point of algorithm (22) is any \(s^*\) satisfying \(s^{*}=H_{\lambda \mu }(s^*+\mu A^T (b-As^*))\), i.e., \(s^{*}_i=h_{\lambda \mu }(s^*_i+\mu A_i^T (b-As^*))\). If \(i\in \Gamma _0\), the equality holds if and only if \(|\mu A_i^T (b-As^*)|\le \frac{2-p}{2(1-p)} [\lambda \mu (1-p) ]^{1/(2-p)}\), i.e., \(|A_i^T (b-As^*)|\le \frac{2-p}{2} \lambda ^{1/(2-p)} [\mu (1-p) ]^{(p-1)/(2-p)}\). Similarly, \(i\in \Gamma _1\) if and only if \(s^*_i=h_{\lambda \mu , p}(s^*_i+\mu A_i^T (b-As^*))\). \(\square\)
The following lemma demonstrates that the sequence \(\{s^k \}\) produced by algorithm (22) is asymptotically regular, i.e., \(\lim _{k\rightarrow \infty } \Vert s^{k+1} -s^k \Vert _2=0\).
If \(f_\lambda (s^0)<\infty\) and \(0<\mu < \Vert A \Vert ^{-2}\), and \(\{s^k \}\) is the sequence produced by algorithm (22), then for every \(\epsilon >0\) there exists K such that \(\Vert s^{k+1}-s^k \Vert _2^2\le \epsilon\) for all \(k>K\).
We prove that \(\sum \limits _{k=0}^{K} \Vert s^{k+1}-s^k \Vert _2^2\) converges as \(K\rightarrow \infty\), which implies the lemma. First, the partial sums are monotonically increasing in K, since
$$\begin{aligned} \begin{array}{lll} \sum \limits _{k=0}^{K} \Vert s^{k+1}-s^k \Vert _2^2&=&\sum \limits _{k=0}^{K-1} \Vert s^{k+1}-s^k \Vert _2^2+\Vert s^{K+1}-s^K \Vert _2^2\\ &\ge & \sum \limits _{k=0}^{K-1} \Vert s^{k+1}-s^k \Vert _2^2. \end{array} \end{aligned}$$
Next, we show the boundedness of \(\sum\nolimits _{k=0}^{K} \Vert s^{k+1}-s^k \Vert _2^2\). For \(0<\mu < \Vert A \Vert ^{-2}\), we have \(0<\delta :=1-\mu \Vert A \Vert ^{2} <1\) and
$$\begin{aligned} \Vert s^{k+1}-s^k \Vert _2^2\le \delta ^{-1}(\Vert s^{k+1}-s^k \Vert _2^2-\mu \Vert As^{k+1}-As^{k} \Vert _2^2). \end{aligned}$$
$$\begin{aligned} \begin{array}{ll} \sum \limits _{k=0}^{K} \Vert s^{k+1}-s^k \Vert _2^2&\le \delta ^{-1} \sum \limits _{k=0}^{K} (\Vert s^{k+1}-s^k \Vert _2^2-\mu \Vert As^{k+1}-As^k \Vert _2^2) \\ &\le \delta ^{-1} \sum \limits _{k=0}^{K} \mu (f_\lambda (s^k)-f_\lambda (s^{k+1})) \\ &=\mu \delta ^{-1} (f_\lambda (s^0)-f_\lambda (s^{K+1})) \\ &\le \mu \delta ^{-1} f_\lambda (s^0)< \infty . \\ \end{array} \end{aligned}$$
The second inequality follows from the proof of Lemma 4, and the last inequality follows from \(f_\lambda (s^0)<\infty\). \(\square\)
In the following, we present a key property of the algorithm: any accumulation point of the sequence \(\{s^k \}\) is a fixed point of algorithm (22). This is the content of the following theorem.
If \(f_\lambda (s^0)<\infty\) and \(0<\mu < \Vert A \Vert ^{-2}\), then any accumulation point of the sequence \(\{s^k \}\) produced by algorithm (22) is a fixed point of (22).
In Lemma 6, take \(\epsilon <\lambda\). If \(|s^k_i |>(\lambda \mu (1-p) )^{1/(2-p)}\) and \(s^{k+1}_i=0\), then \(\Vert s^{k+1}-s^k \Vert _2^2\ge \lambda\), which, by Lemma 6, is impossible for all \(k>K\) with K sufficiently large. Therefore, for large K, the sets of zero and non-zero coefficients no longer change, and \(|s^k_i|>(\lambda \mu (1-p) )^{1/(2-p)}, ~\forall i \in \Gamma _1, ~k>K\). Let \(\{s^{k_j} \}\) be a convergent subsequence and \(s^*\) its limit point, i.e.,
$$\begin{aligned} s^{k_j}\rightarrow s^*, \quad ~\mathrm{~as~}~k_j\rightarrow +\infty . \end{aligned}$$
By the limit (24) and Lemma 6, we have
$$\begin{aligned} \Vert s^{k_j+1}-s^* \Vert _2\le \Vert s^{k_j+1}-s^{k_j} \Vert _2+\Vert s^{k_j}-s^* \Vert _2\rightarrow 0, \quad \mathrm{as} \,\, k_j\rightarrow +\infty , \end{aligned}$$
which implies that the sequence \(\{s^{k_j+1} \}\) is also convergent to \(s^*\). Note that \(s^{k_j+1}=H_{\lambda \mu }(B_\mu (s^{k_j}))\), i.e., \(s^{k_j+1}_i=h_{\lambda \mu }(s^{k_j}_i+\mu A_i^T(b-As^{k_j})), \mathrm{~for~all}~i=1, 2, \ldots , n\).
Let \(\Gamma _0=\{i:~s^*_i=0 \}\) and \(\Gamma _1=\{i:~s^*_i\ne 0 \}\). For \(s^{k_j},~k_j>K\) for some K, if \(i\in \Gamma _0\), then by (23) and (7) we have
$$\begin{aligned} |A^T_i(b-As^{k_j})|\le \frac{2-p}{2} \lambda ^{1/(2-p)} [\mu (1-p) ]^{(p-1)/(2-p)}, \end{aligned}$$
therefore, \(|A^T_i(b-As^*)|\le \frac{2-p}{2} \lambda ^{1/(2-p)} [\mu (1-p) ]^{(p-1)/(2-p)}\). Similarly, if \(i\in \Gamma _1\), then by (23) and (7) we have
$$\begin{aligned} s^{k_j+1}_i=h_{\lambda \mu }(s^{k_j}_i+\mu A_i^T(b-As^{k_j})), ~|s^{k_j}_i+\mu A_i^T(b-As^{k_j}) |> r^*, \end{aligned}$$
where \(r^*:=\frac{2-p}{2(1-p)} [\lambda \mu (1-p) ]^{1/(2-p)}\). By Proposition 2, the function \(h_{\lambda \mu , p}(\cdot )\) is continuous over \((r^*, +\infty )\) and \((-\infty , -r^*)\). Therefore, \(s^*_i=h_{\lambda \mu , p}(s^*_i+\mu A^T_i(b-As^*))\). By Lemma 5, \(s^*\) is a fixed point of (22). \(\square\)
Numerical experiments
Now we report numerical results comparing the performance of the iterative thresholding algorithm (ITA) (\(p=0.5\)) for solving (1) (signal reconstruction) with LASSO in finding sparse solutions. The computational tests were conducted on a Dell desktop computer with an Intel(R) Core(TM)2 Duo CPU E8400 @3.00 GHz and 2.0 GB of memory, using Matlab R2010a.
Consider a real-valued, finite-length signal \(x \in R^n\). Suppose x is T-sparse, that is, only T of the signal coefficients are nonzero and the others are zero. We use the following Matlab code to generate the original signal, a matrix A and a vector b.
$$\begin{aligned} x_{or}& = zeros(n,1); q = randperm(n); x_{or}(q(1:T)) = 2*randn(T,1);\\ A& = randn(m,n); A = orth(A')'; b= A*x_{or} ; \end{aligned}$$
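A numpy equivalent of this generator (a sketch; the sizes m, n, T below are illustrative, not the ones used in Table 1) might look as follows:

```python
import numpy as np

def make_problem(m, n, T, seed=0):
    """Generate a T-sparse signal x_or, a matrix A with orthonormal
    rows (mirroring Matlab's orth(A')'), and the observation b."""
    rng = np.random.default_rng(seed)
    x_or = np.zeros(n)
    q = rng.permutation(n)
    x_or[q[:T]] = 2.0 * rng.normal(size=T)  # 2*randn(T,1)
    A = rng.normal(size=(m, n))
    A = np.linalg.qr(A.T)[0].T              # orthonormalize the rows
    b = A @ x_or
    return A, b, x_or

A, b, x_or = make_problem(m=64, n=256, T=8)
```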
The computational results for this experiment are displayed in Table 1.
Table 1 Comparison of ITA and LASSO algorithm
From Table 1, we find that ITA achieves smaller prediction error than LASSO in a shorter time.
In this paper, an exact lower bound for the absolute value of the nonzero entries of each global optimal solution of problem (1) is established. A necessary condition for global optimal solutions of the \(l_p\)-regularization problem is derived, namely that the global optimal solutions are fixed points of a vector thresholding operator. In addition, a sufficient condition on the selection of \(\lambda\) is derived for the desired sparsity of global minimizers of problem (1) with given (A, b, p). Finally, an iterative thresholding algorithm is designed for solving the \(l_p\)-regularization problem, and its convergence is proved.
Bredies K, Lorenz DA, Reiterer S (2015) Minimization of non-smooth, non-convex functionals by iterative thresholding. J Optim Theory Appl 165:78–112
Candès EJ, Wakin MB, Boyd SP (2008) Enhancing sparsity by reweighted \(l_1\) minimization. J Fourier Anal Appl 14:877–905
Chartrand R (2007a) Nonconvex regularization for shape preservation. In: Proceedings of IEEE international conference on image processing
Chartrand R (2007b) Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process Lett 14:707–710
Chartrand R, Staneva V (2008) Restricted isometry properties and nonconvex compressive sensing. Inverse Probl 24:1–14
Chartrand R, Yin W (2008) Iteratively reweighted algorithms for compressive sensing. In: Proceedings of international conference on acoustics, speech, signal processing (ICASSP)
Chen X, Xu F, Ye Y (2010) Lower bound theory of nonzero entries in solutions of \(l_2\)-\(l_p\) minimization. SIAM J Sci Comput 32:2832–2852
Chen X, Ge D, Wang Z, Ye Y (2014) Complexity of unconstrained \(l_2 -l_p\) minimization. Math Program 143:371–383
Chen YQ, Xiu NH, Peng DT (2014) Global solutions of non-Lipschitz \(s_2-s_p\) minimization over the semidefinite cones. Optim Lett 8(7):2053–2064
Fan Q, Wu W, Zurada JM (2016) Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks. SpringerPlus 2016(5):1–17
Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Ass 96:1348–1360
Foucart S, Lai MJ (2009) Sparsest solutions of under-determined linear systems via \(l_q\) minimization for \(0 < q \le 1\). Appl Comput Harmonic Anal 26:395–407
Frank IE, Freidman JH (1993) A statistical view of some chemometrics regression tools (with discussion). Technometrics 35:109–148
Ge D, Jiang X, Ye Y (2011) A note on the complexity of \(l_p\) minimization. Math Program 129:285–299
Huang J, Horowitz JL, Ma S (2008) Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann Stat 36:587–613
Knight K, Wu JF (2000) Asymptotics for lasso-type estimators. Ann Stat 28:1356–1378
Lai M, Wang Y (2011) An unconstrained \(l_q\) minimization with \(0 < q < 1\) for sparse solution of under-determined linear systems. SIAM J Optim 21:82–101
Natarajan BK (1995) Sparse approximate solutions to linear systems. SIAM J Comput 24:227–234
Shehu Y, Iyiola OS, Enyi CD (2013) Iterative approximation of solutions for constrained convex minimization problem. Arab J Math 2:393–402
Shehu Y, Cai G, Iyiola OS (2015) Iterative approximation of solutions for proximal split feasibility problems. Fixed Point Theory Appl 2015(123):1–18
Tian M, Huang LH (2013) Iterative methods for constrained convex minimization problem in hilbert spaces. Fixed Point Theory Appl 2013(105):1–18
Tian M, Jiao S-W (2015) Regularized gradient-projection methods for the constrained convex minimization problem and the zero points of maximal monotone operator. Fixed Point Theory Appl 11:1–23
Xu Z, Zhang H, Wang Y, Chang X (2010) \(l_{1/2}\) regularizer. Sci China Inf Sci 53:1159–1169
Xu Z, Chang X, Xu F, Zhang H (2012) \(l_{1/2}\) regularization: a thresholding representation theory and a fast solver. IEEE Trans Neural Netw Learn Syst 23:1013–1027
All authors are common first author, all authors contribute equally to the manuscript. All authors have a good contribution to derive the exact lower bounds, to establish the global optimum condition and to design the iterative thresholding algorithm, and to perform the numerical experiments of this research work. All authors read and approved the final manuscript.
This paper is supported by the National Natural Science Foundation of China under Grant (11171094), the Natural Science Foundation of of Henan Province (152300410097), the Key Scientific Research Project of Universities in Henan Province (14A110024), (16A110014) and (15A110022), the Major Scientific Research Projects of Henan Institute of Science and Technology (2015ZD07), the High-level Scientific Research Personnel Project for Henan Institute of Science and Technology (2015037), the Science and Technology Innovation Project for Henan Institute of Science and Technology.
School of Mathematical Sciences, Henan Institute of Science and Technology, Xinxiang, 453003, China
Hongwei Jiao & Jingben Yin
College of Mathematics and Information Science, Henan Normal University, Xinxiang, 453007, China
Yongqiang Chen
Hongwei Jiao
Jingben Yin
Correspondence to Hongwei Jiao.
Jiao, H., Chen, Y. & Yin, J. Optimality condition and iterative thresholding algorithm for \(l_p\)-regularization problems. SpringerPlus 5, 1873 (2016). https://doi.org/10.1186/s40064-016-3516-3
Optimality condition
\(l_p\)-regularization problems
Iterative thresholding algorithm
Global optimum solution
Fixed point
Mathematics Subject Classification | CommonCrawl |
Bio-fertilizer Affects Structural Dynamics, Function, and Network Patterns of the Sugarcane Rhizospheric Microbiota
Qiang Liu1,2,
Ziqin Pang1,2,3,4,
Zuli Yang6,
Fallah Nyumah1,2,3,4,
Chaohua Hu1,
Wenxiong Lin3,4 &
Zhaonian Yuan1,2,5
Microbial Ecology (2021)
Fertilizers and microbial communities that determine fertilizer efficiency are key to sustainable agricultural development. Sugarcane is an important sugar cash crop in China, and using bio-fertilizers is important for the sustainable development of China's sugar industry. However, the effects of bio-fertilizers on the sugarcane soil microbiota have rarely been studied. In this study, the effects of bio-fertilizer application on rhizosphere soil physicochemical indicators, microbial community composition, function, and network patterns of sugarcane were examined using a high-throughput sequencing approach. The experimental design is as follows: CK: urea application (57 kg/ha), CF: compound fertilizer (450 kg/ha), BF1: bio-fertilizer (1500 kg/ha of bio-fertilizer + 57 kg/ha of urea), and BF2: bio-fertilizer (2250 kg/ha of bio-fertilizer + 57 kg/ha of urea). The results showed that the bio-fertilizer was effective in increasing sugarcane yield by 3–12% compared to the CF treatment group, while reducing soil acidification, changing the diversity of fungi and bacteria, and greatly altering the composition and structure of the rhizosphere microbial community. Variation partitioning analysis (VPA) showed that soil physicochemical variables explained 80.09% and 73.31% of the variation in bacteria and fungi, respectively. Redundancy analysis and correlation heatmaps showed that soil pH, total nitrogen, and available potassium were the main factors influencing bacterial community composition, while total soil phosphorus, available phosphorus, pH, and available nitrogen were the main drivers of fungal communities. Volcano plots showed that using bio-fertilizers contributed to the accumulation of more beneficial bacteria in the sugarcane rhizosphere and the decline of pathogenic bacteria (e.g., Leifsonia), which may slow down or suppress the occurrence of diseases.
Linear discriminant analysis (LDA) and effect size analysis (LEfSe) identified biomarkers under the different fertilizer treatments. Meanwhile, a support vector machine (SVM) assessed the importance of the microbial genera contributing to the variability between fertilizers; of particular interest, relative to the other genera contributing to the variability between CF and BF2, were the bacteria Anaerolineace, Vulgatibacter, and Paenibacillus and the fungi Cochliobolus, Sordariales, and Dothideomycetes. Network analysis (co-occurrence network) showed that the network structure under bio-fertilizer was closer to the network characteristics of healthy soils, indicating that bio-fertilizers can improve soil health to some extent; therefore, if bio-fertilizers can be used as an alternative to chemical fertilizers in the future, it will be important for achieving green soil development and improving the climate.
Increasing population numbers are putting tremendous pressure on global food demand and land productivity and posing major challenges [1, 2]. Soil fertility degradation has been a key agricultural concern [3, 4]. Overuse of chemical fertilizers in some growing agricultural areas, especially over-reliance on nitrogen fertilizers, has led to an imbalance in the nutrient structure of fertilizer supply and a decrease in fertilizer utilization [5, 6]. Such unreasonable agronomic measures lead to soil nutrient imbalance, gradual decline of crop growth, reduction of the content of soil organic matter, destruction of soil agglomeration structure, and a reduction of the activity of soil microorganisms that are closely related to plant growth [7,8,9]. In addition, intensive agricultural practices characterized by using high levels of chemical fertilizers and pesticides can alter soil biology by disrupting biological interactions. Such measures may lead to the rapid development of soil-borne diseases with imbalances in the subsurface microbiosphere caused by the proliferation of harmful soil microorganisms, including plant pathogenic fungi and bacteria. In this context, the development of new bio-fertilizers offers a fresh direction for agricultural production. Modern agriculture has increasingly focused on the use of bio-fertilizers as alternatives to chemical fertilizers. Numerous studies have shown that the application of bio-fertilizers can inhibit the development of related soil-borne diseases by reshaping the plant rhizosphere microbiota and promoting the secretion of related chemicals such as carbohydrates, amino acids, organic acids, proteins, and enzymes [10, 11]. Indoor cultivation trials by Dong et al. showed that soil and microorganisms under bio-fertilizer treatment conditions were significantly more resistant to pathogenic bacteria than those treated with chemical fertilizers after inoculation with Ralstonia solanacearum [12]. The study by Zhang et al. 
also showed that Trichoderma bio-fertilizer can increase soil antifungal compounds, which may suppress pathogens and is speculated to be an important reason for the increased grass biomass [13]. It has also been shown that the application of bio-fertilizers improves soil organic matter content, pH, and soil microbial activity and diversity more than the application of chemical fertilizers alone [14]. However, most of these studies have focused on model crops or indoor cultivation conditions, and the response of rhizosphere microorganisms to bio-fertilizer under real production and field conditions remains elusive.
Soil is a very complex ecosystem in which different microorganisms play different roles [15, 16]. Plants are immersed in a sea of microorganisms from the moment they are planted, and evolution has equipped them to find partner microorganisms that work with them under adversity [17]. Plant growth-promoting bacteria (PGPB) and plant growth-promoting fungi (PGPF) can work hand in hand with plants [18]. Meanwhile, soil microbes are sensitive to environmental stresses, and they play an important role in fertilizer nutrient conversion. The importance of rhizosphere microbes, as neighbors of plant roots, for plant health and growth cannot be overstated [15, 20]. Rhizosphere microbial communities can promote the growth of above-ground plant tissues by enhancing adaptation to environmental stresses, improving nutrient acquisition, and improving plant metabolic functions. A study by Singh et al. demonstrated the defense response of a rhizosphere microbial community consisting of Pseudomonas (PHU094), Trichoderma (THU0816), and Rhizobium (RL091) strains to specific biotic stresses in chickpea [21]. In addition, Yi et al. showed that plants can defend themselves against herbivore attack through self-protection mechanisms that recruit beneficial plant growth-promoting rhizobacteria/fungi [22]. Furthermore, Solanki et al. reported that in intercropping systems, abundant beneficial diazotrophs in the plant rhizosphere can promote plant growth and act as effective biological inoculants to sustain sugarcane production, and that exploring rhizosphere microbes can provide an excellent way to reduce the overuse of chemical fertilizers [5]. Breakthroughs in the study of rhizosphere microbial communities will open the door to microbial regulation of plant growth and metabolism. 
With the increasing exploration of soil microbial potential and the deepening of the concept of sustainable development, green and healthy bio-fertilizer will become the preferred choice for agricultural production. The objectives of our study were (a) to investigate the relationship between changes in the rhizosphere microbial community of sugarcane and different fertilizer application regimes and to reveal the correlation between soil microbial composition and soil chemical properties, (b) to determine the network characteristics of microorganisms under different fertilizers, and (c) to determine the contribution of bio-fertilizer application to sustainable agriculture.
Plant Materials and Fertilizers
FN41 sugarcane was obtained from the sugarcane experiment site of Fujian Agriculture and Forestry University. Chemical fertilizer was bought from Meishan Xindu Chemical Compound Fertilizer Co., Ltd.; its total nutrient content (N-P2O5-K2O: 15-15-15) was ≥ 45%. The bio-fertilizer is a compound microbial fertilizer provided by Jiangyin Lianye Biology Co., Ltd. and developed by Nanjing Agricultural University. It was produced by inoculating Bacillus amyloliquefaciens T-5 [23] into a mixture of rapeseed meal and chicken manure composts for solid fermentation. The properties of the bio-fertilizer were (N + P2O5 + K2O) = 8%, effective living bacteria ≥ 20 million/g, and organic matter ≥ 20%. The fertilizer application calculation tool (version 1.1) was used to determine the amount of fertilizer to apply to the experimental plots.
Experimental Description and Soil Samples
A field experiment was conducted at the Sugarcane Research Station in Xingbin District, Guangxi Province of China, from March 7, 2017 to December 20, 2017. The climate is mainly subtropical monsoon; the annual average temperature and annual precipitation lie in the ranges of 20–22℃ and 1300–1350 mm, respectively. Pre-test soil samples were collected on March 1, 2017, stored on ice, and transported back to the laboratory, where the determination of physicochemical properties began immediately; the properties were as follows: pH (4.82), soil organic matter (SOC, 17.50 g·kg–1), total nitrogen (TN, 1.29 g·kg–1), available potassium (AK, 54.16 mg·kg–1), and available phosphorus (AP, 45.19 mg·kg–1). The treatments were as follows: CK: urea application (57 kg/ha); CF: compound fertilizer (450 kg/ha); BF1: bio-fertilizer (1500 kg/ha of bio-fertilizer + 57 kg/ha of urea); and BF2: bio-fertilizer (2250 kg/ha of bio-fertilizer + 57 kg/ha of urea). Fertilizer was applied in two periods: the first application was made at the seedling stage (March 10, 2017), accounting for 40% of the total fertilizer application, and the second at the elongation stage (July 10, 2017), accounting for 60%. Each plot contained 5 rows. The field experiment was laid out in a randomized block design, with a row spacing of 1.2 m and a row length of 25 m. Sugarcane yield and sugar content were evaluated and soil samples were collected during the maturity period. Nine soil cores from one field plot were pooled into one sample [24], and a total of 12 field plot samples were collected, covering four fertilization treatments × three replications. All samples were placed individually in sterile bags, sent to the laboratory, and stored at − 20 °C; after each sample collection, the tools used were disinfected with an alcohol wipe. The samples were sieved using a 2-mm mesh, thoroughly homogenized, and divided into two parts. 
One portion was stored at 4 °C, from which a sufficient amount of soil was taken and air-dried naturally for the determination of soil physical and chemical properties, while the other portion was stored at − 20 °C for DNA extraction.
Determination of Soil Physicochemical and Sugarcane Yield Indicators
Soil pH was estimated with a glass electrode using a soil-to-water ratio of 1:2.5, and soil total nitrogen (TN) in the extract was determined with an Element Analyzer (Thermo Scientific™, Waltham, MA, USA). Soil available phosphorus (AP) was extracted with sodium bicarbonate and determined by the molybdenum blue method. Available nitrogen (AN) and available potassium (AK) were determined by the alkaline hydrolysis diffusion method and the flame photometric method, respectively. In addition, the soil organic carbon content (SOC) was determined by 0.8 mol/L K2Cr2O7 redox titration. All soil physical–chemical properties were determined according to Bao [25]. The stem height and diameter of sugarcane were measured on 30 randomly selected plants in each plot using a tape measure and a Vernier caliper. The number of effective stems was extrapolated from counts in a 1.2 × 2.5 m area to the total plot area. To measure the sucrose content, an Extech Portable Sucrose Brix Refractometer (Mid-State Instruments, San Luis Obispo, CA, USA) was used, and the calculation was performed using the following formula: sucrose (%) = Brix (%) × 1.0825 − 7.703 [26]. The theoretical yield of sugarcane was estimated using the following equations:
$$\text{(a) Single stalk weight (kg)} = \left(\text{stalk diameter (cm)}\right)^{2} \times \left(\text{stalk height (cm)} - 30\right) \times 1\ \mathrm{g/cm^{3}} \times 0.7854 / 1000$$

$$\text{(b) Theoretical production (kg/hm}^{2}\text{)} = \text{single stalk weight (kg)} \times \text{productive stem number (stems/hm}^{2}\text{)}$$
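Formulas (a) and (b), together with the sucrose conversion above, can be transcribed directly into code. The following is a minimal Python sketch; the function names are ours, not from the study:

```python
def single_stalk_weight_kg(stalk_diameter_cm, stalk_height_cm):
    # Formula (a): squared diameter times (height - 30 cm), times a density of
    # 1 g/cm^3 and the cylinder factor pi/4 = 0.7854, converted from g to kg.
    return (stalk_diameter_cm ** 2) * (stalk_height_cm - 30) * 1 * 0.7854 / 1000

def theoretical_production_kg_per_hm2(stalk_diameter_cm, stalk_height_cm,
                                      productive_stems_per_hm2):
    # Formula (b): single stalk weight times productive stems per hectare.
    return single_stalk_weight_kg(stalk_diameter_cm, stalk_height_cm) * productive_stems_per_hm2

def sucrose_percent(brix_percent):
    # Refractometer conversion from the text: sucrose (%) = Brix (%) x 1.0825 - 7.703.
    return brix_percent * 1.0825 - 7.703
```

For instance, a stalk of 3 cm diameter and 230 cm height weighs about 1.41 kg by formula (a).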
Soil DNA Extraction, PCR Amplification, and Sequencing
Deoxyribonucleic acid (DNA) was extracted from the experimental soil using the Power Soil DNA Isolation Kit (MoBio Laboratories Inc., Carlsbad, USA) according to the manufacturer's instructions. The quantity and quality of the DNA extracts were analyzed using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA, USA), and the DNA was stored at − 80℃ for further analysis [12]. 16S rRNA and 18S rRNA gene fragments were amplified using the primer pairs 338F (5′-ACTCCTACGGGAGGCAGCAG-3′)/806R (5′-GGACTACHVGGGTWTCTAAT-3′) [27] and SSU0817F (5′-TTAGCATGGAATAATRRAATAGGA-3′)/SSU1196R (5′-TCTGGACCTGGTGAGTTTCC-3′) [28], respectively. The amplification conditions were 95℃ for 3 min, followed by 35 cycles of 95℃ for 30 s, 55℃ for 30 s, and 72℃ for 45 s, with a final extension at 72℃ for 10 min (GeneAmp 9700, ABI, CA, USA). The PCR reaction was performed in triplicate in a 20-μL mixture containing 2 μL of 2.5 mM deoxyribonucleoside triphosphates (dNTPs), 4 μL of 5 × Fast Pfu buffer, 0.4 μL of Fast Pfu polymerase, 0.4 μL of each primer (5 μM), and template DNA (10 ng) [29]. Amplicons were extracted using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) and quantified with QuantiFluor™-ST (Promega, Madison, WI, USA). Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 250) on an Illumina MiSeq platform (Majorbio, Shanghai) according to standard protocols. The UPARSE standard pipeline was used to analyze the sequence data [30]. Briefly, short reads (< 250 bp) were filtered out prior to downstream analysis [31]. Sequences with ≥ 97% similarity were clustered into OTUs, and taxonomic assignment was performed against the RDP database (http://rdp.cme.msu.edu/). All sequences were deposited in the NCBI Sequence Read Archive under accession number PRJNA682545.
For subsequent analyses, a minimum number of sequences was extracted at random from each sample to calculate alpha diversity indices. The significance of soil nutrients and sugarcane yield indicators was calculated using DPS, based on the LSD test (P < 0.05). Box plots of α-diversity indices, species composition, Venn diagrams, and correlation heatmaps (Spearman correlation) were produced using R (3.5.2). Differential abundance analysis (DESeq2) and variance partitioning analysis (VPA) were also calculated and visualized using R [32, 33]. Bray–Curtis distance was calculated with the "vegdist" function of the vegan package in R (3.4.0). Non-parametric multivariate analysis of variance (PERMANOVA) was performed with the Adonis function in the vegan package of R based on the Bray–Curtis distance. Support vector machine (SVM) analysis was performed as follows: the relative abundance data were first log-transformed and then corrected within the matrix, and the analysis was completed on the Wekemo Bioinformatics cloud platform (https://bioincloud.tech) [34]. Co-occurrence networks were built using R (version 4.0.3) and Cytoscape (3.6.1), and network structure analysis was performed with UCINET (version 6.186) to calculate mean degree, clustering coefficient, and other parameters [35]. Bacterial functions were predicted with PICRUSt based on the KEGG functional database, and fungi were annotated using the FUNGuild database [36, 37].
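As an illustration of the distance step, the Bray–Curtis dissimilarity that the study computes with vegan's `vegdist` can be reproduced in Python with SciPy; the OTU count table below is hypothetical:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical OTU count table: rows = samples, columns = OTUs.
counts = np.array([
    [10, 20, 0, 5],
    [12, 18, 2, 4],
    [0, 5, 30, 10],
])

# Pairwise Bray-Curtis dissimilarity matrix, analogous to vegan's vegdist():
# d(u, v) = sum(|u_i - v_i|) / sum(u_i + v_i).
dist = squareform(pdist(counts, metric="braycurtis"))
```

The resulting symmetric matrix is what PERMANOVA and NMDS then operate on.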
Sugarcane Yield Index and Soil Nutrient Variability
Compared to the CF treatment, the yield per hectare of FN41 sugarcane increased by 3–12% in the bio-fertilizer-amended soils (BF1 and BF2). Furthermore, compared to CK, the BF1, BF2, and CF treatments significantly increased (P < 0.05) plant height, stem weight, and effective stems. However, sugarcane stem diameter under the CF, BF1, and BF2 treatments showed no significant difference from the CK treatment (Table 1). Compared with the CK and CF treatments, soil pH was significantly higher (P < 0.05) in both BF1 and BF2 treatments, whereas the CF treatment significantly reduced soil pH compared with CK. Soil organic carbon and available phosphorus were not affected in any treatment compared to CK. Compared to the CK treatment, soil total nitrogen was significantly higher (P < 0.05) in both BF1 and BF2 treatments, whereas soil available nitrogen did not change considerably among the treatments. The contents of total nitrogen, available nitrogen, total phosphorus, and available potassium increased significantly by about 13.8–33.8%, 12.6–25.0%, 43.8–56.3%, and 97.4–169.5%, respectively, with the increase in the BF1 treatment group being the most pronounced (Table 2).
Table 1 Effects of different treatments on yield indexes of sugarcane
Table 2 Effects of different treatments on soil nutrient content of sugarcane
Effect of Different Fertilizers on Rhizosphere Microbial Community and Diversity
In order to assess the effects of the different treatments on microbial alpha diversity in sugarcane rhizosphere soil, we plotted box plots (Fig. 1). The rarefaction curves of observed OTU richness showed that the sequencing depth was sufficient to capture microbial alpha diversity (Fig. S1). Rhizosphere bacterial α-diversity indices (Shannon, Sobs, Chao, and Ace) were significantly (P ≤ 0.05) affected by fertilizer, although the degree of influence differed between bacteria and fungi (Table S1). For bacteria, treatments BF1 and BF2 produced significantly higher Shannon indices than CK and CF, and the highest Sobs, Ace, and Chao indices were recorded in treatment BF2 (Table S1). For fungi, apart from Shannon and Chao, which were not significantly affected by fertilizer treatment, treatment BF2 registered the highest Sobs and Ace indices compared with the other treatments (Table S1).
Box plots of rhizosphere microbial alpha diversity index under different fertilizer treatments, Tukey method. CK: urea application (57 kg/ha), CF: compound fertilizer (450 kg/ha), BF1: bio-fertilizer (1500 kg/ha of bio-fertilizer + 57 kg/ha of urea), BF2: bio-fertilizer (2250 kg/ha of bio-fertilizer + 57 kg/ha of urea)
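The alpha diversity indices discussed above follow standard formulas; a minimal NumPy sketch (ours, not the study's pipeline) for Shannon, observed richness (Sobs), and Chao1 from one sample's OTU counts:

```python
import numpy as np

def shannon(counts):
    # Shannon index H' = -sum(p_i * ln p_i) over non-zero OTU proportions.
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def sobs(counts):
    # Observed richness: number of OTUs with at least one read.
    return int(np.sum(np.asarray(counts) > 0))

def chao1(counts):
    # Chao1 = S_obs + F1^2 / (2 * F2), with F1/F2 the numbers of singleton
    # and doubleton OTUs; bias-corrected form F1(F1-1)/2 when F2 == 0.
    counts = np.asarray(counts)
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    s = sobs(counts)
    if f2 == 0:
        return s + f1 * (f1 - 1) / 2.0
    return s + f1 ** 2 / (2.0 * f2)
```

The Ace estimator follows the same pattern but splits OTUs into rare and abundant groups, so it is omitted here.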
The dominant bacterial phyla in all fertilizer treatment soils were Actinobacteria, Proteobacteria, Acidobacteria, Cyanobacteria, Firmicutes, Planctomycetes, Bacteroidetes, Chloroflexi, Gemmatimonadetes, and Nitrospirae (Fig. 2A), and the dominant fungal phyla were Ascomycota, Basidiomycota, Zygomycota, Ciliophora, Ochrophyta, Chytridiomycota, Choanomonada, Glomeromycota, Schizoplasmodiida, and Blastocladiomycota (Fig. 2B). Although the dominant phyla of rhizosphere microorganisms were consistent across all soils, changes in the relative abundance of the dominant taxa were observed across treatments (Table S2). Among bacteria, there was a lower abundance of Actinobacteria and a higher abundance of Acidobacteria and Chloroflexi in soils with BF addition compared with CK and CF (Fig. 2A), and at the OTU level, the addition of fertilizer reduced the number of unique bacterial OTUs in soil, although the degree of decrease depended on the type of fertilizer (Fig. 2C). In addition, Ascomycota held an absolute abundance advantage among rhizosphere fungi. Compared to CF, the BF treatments had more Ciliophora, Ochrophyta, and Zygomycota (Fig. 2B). At the OTU level, the addition of bio-fertilizer yielded more unique fungal OTUs, whereas CF reduced the number of unique OTUs (Fig. 2D).
Relative abundance histograms of the top 10 rhizosphere microbial phyla in each sample (A and B). Comparison of bacterial and fungal OTU using Venn diagram among different fertilizer treatments (C and D)
The Spearman heatmap showed the relationship between microbial diversity and soil traits (Fig. 3A), and the Spearman correlation between the major microbial genera and physicochemical soil variables is illustrated in Fig. 3B. Among bacteria, TP significantly affected the bacterial diversity indices, showing significant positive correlations with Shannon, Ace, Sobs, and Chao (Fig. 3A). In addition, pH, AK, and TN were significantly correlated with most of the top 30 bacterial genera. Among them, the genera Acidobacteria, Anaerolineaceae, and Nitrospira had significant positive correlations with soil pH, while Bacillus, Rhizomicrobium, Frankiales, Saccharibacteria, and Bradyrhizobium showed significant negative correlations with pH. Furthermore, Haliangium, Nitrospira, and Nitrosomonadaceae had strongly significant positive correlations with TN, but Bradyrhizobium registered a significant negative correlation with TN (Fig. 3B). Among fungi, TN and AK had significant positive correlations with Sobs (Fig. 3A). Meanwhile, Fusarium showed significant negative correlations with AP and AK, and Ascomycota showed significant negative correlations with TP and AK. It is noteworthy that Chalazion showed significant positive correlations with SOC and TN, and some of these observations were also confirmed in the RDA analysis with the top 10 genera.
The heatmap of Spearman correlation between microbial alpha diversity index and soil traits (A), and a Spearman correlation heatmap of soil environmental variables and the top 30 dominant bacterial and fungal genera, and the correlation coefficient was greater than 0.4, marking the significance level (B). * significance at P < 0.05, ** significance at P < 0.01, and *** significance at P < 0.001
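The statistic behind such heatmaps is the Spearman rank correlation between each genus's abundance and each soil variable. A sketch with SciPy on hypothetical data (sample sizes and values are ours):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical data: 12 samples, relative abundances of 3 genera.
genera = rng.random((12, 3))
# Two hypothetical soil variables; the first tracks genus 0 almost perfectly.
soil = np.column_stack([
    genera[:, 0] * 2 + rng.normal(0, 0.01, 12),
    rng.random(12),
])

# spearmanr on two 2-D arrays stacks their columns and returns a joint
# (3+2) x (3+2) correlation matrix; the genus-vs-soil block is rho[:3, 3:].
rho, pval = spearmanr(genera, soil)
genus_soil_rho = rho[:3, 3:]
```

Cells of `genus_soil_rho` with p below the chosen threshold are the starred entries of such a heatmap.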
A non-metric multidimensional scaling (NMDS) analysis showed a clear distinction in the bacterial and fungal community composition of CK, CF, and BF (Fig. 4A and D). Across treatments, the bacterial communities were distinguished along the NMDS1 axis, whereas the fungal communities showed distinct variation along the NMDS2 axis. Redundancy analysis (RDA) revealed that soil variables (pH, AN, AK, TN, TP, SOC) affected the soil microbial community in the different treatments. The X and Y canonical axes explained 40.71% and 17.12% of the observed bacterial species dynamics and 30.55% and 17.86% of the fungal dynamics, respectively. It is worth noting that, of all the soil variables investigated, pH (r2 = 0.8070, p-value = 0.0005) and AK (r2 = 0.7988, p-value = 0.001) for bacteria, and SOC (r2 = 0.6974, p-value = 0.0025), TN (r2 = 0.7558, p-value = 0.0020), pH (r2 = 0.6640, p-value = 0.0045), and AK (r2 = 0.6303, p-value = 0.0085) for fungi, were identified as important drivers shaping and controlling the microbial community (Fig. 4C and F; Table S3). Meanwhile, the Adonis test indicated significant differences between the fertilizer treatment groups (Table 3), and VPA showed that soil physicochemical factors explained 80.09% and 73.31% of the variance for bacteria and fungi, respectively, with pH explaining a higher percentage of the variance for the fungal (23.88%) than for the bacterial (9.91%) group (Fig. S2).
A non-metric multidimensional scaling (NMDS) of rhizosphere microbial community composition among different fertilizer treatments (A and D). Redundancy analysis (RDA) illustrating association between samples and soil properties among treatments (B and E), and RDA also indicate association between microbial (top 10 genera) and environmental variables (C and F). Points with different colors depict sample groups under different fertilizer treatments; gray and black points represent different microbial genera, red arrows represent environmental factors, and the arrow length represents the degree of influence on different genera or samples. Bacteria (A-C) and fungi (D-F)
Table 3 Analysis of bacteria and fungi Adonis
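The Adonis (PERMANOVA) test behind Table 3 can be sketched as a permutation test on a distance matrix. The following NumPy implementation of Anderson's pseudo-F is a simplified stand-in for vegan's `adonis`, run here on a toy two-group example:

```python
import numpy as np

def permanova_f(dist, labels):
    # Pseudo-F (Anderson 2001): partition the total sum of squared distances
    # into among- and within-group components, directly from the distance matrix.
    n = len(labels)
    groups = np.unique(labels)
    ss_total = np.sum(np.square(dist[np.triu_indices(n, 1)])) / n
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = dist[np.ix_(idx, idx)]
        ss_within += np.sum(np.square(sub[np.triu_indices(len(idx), 1)])) / len(idx)
    a = len(groups)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, labels, n_perm=999, seed=0):
    # Permutation p-value: shuffle sample labels and recompute the pseudo-F.
    rng = np.random.default_rng(seed)
    f_obs = permanova_f(dist, labels)
    hits = sum(permanova_f(dist, rng.permutation(labels)) >= f_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Toy example: two well-separated groups of three samples each (1-D positions).
pts = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
dist = np.abs(pts[:, None] - pts[None, :])
labels = np.array(["A"] * 3 + ["B"] * 3)
f_obs = permanova_f(dist, labels)
p_val = permanova_p(dist, labels)
```

With only six samples the smallest attainable p-value is limited by the number of distinct label permutations, which is why field designs with three replicates per treatment report p-values near the permutation floor.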
Differential Microorganisms Under Different Fertilizer Treatments
According to the DESeq2 results, we identified 220 differential genera, comprising 98 upregulated and 122 downregulated genera, in the bacterial comparison between CK and BF2; 86 genera (up = 40, down = 46) between CK and CF; and 29 genera (up = 19, down = 10) between CF and BF2 (Table S4). Latescibacteria, Actinobacteria, Acidobacteria, and Nordella were significantly enriched in the comparison of CF and BF2, whereas Actinospica, Jatrophihabitans, Leifsonia, and Sinomonas were significantly reduced (Fig. 5C). In the fungal community, 4 (CK vs. CF), 29 (CK vs. BF2), and 28 (CF vs. BF2) differential genera were identified in the respective comparison groups (Fig. 5D-F). Mrakia, Saccharomycetales, Obertrumia, and Galactomyces were significantly enriched after BF2 treatment compared to the control group, while Phallus, Ascomycota, and Thysanophora were significantly reduced (Fig. 5E). The identified differential genera are shown in volcano plots (Fig. 5), in which p < 0.05 was set as the cut-off criterion for significance.
Volcano plots depicting bacterial (A-C) and fungal (D-F) genera. The X coordinate is |log2 (fold change)| and the Y coordinate is − log10 (p adj); P < 0.05, log2 (fold change) > 2. Each point represents a genus. Points in the brown area are significantly changed genera, with markers for the dominant genera; the other dots are genera with no significant difference
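The volcano-plot cut-offs (|log2 fold change| > 2 and adjusted p < 0.05) can be sketched as follows. The Benjamini–Hochberg adjustment and the pseudocount are our assumptions, standing in for DESeq2's more elaborate shrinkage model:

```python
import numpy as np

def bh_adjust(pvals):
    # Benjamini-Hochberg adjusted p-values (the "p adj" axis of the volcano plot).
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    # Enforce monotonicity from the largest p-value downwards.
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.minimum(ranked, 1.0)
    return adj

def volcano_flags(mean_a, mean_b, pvals, lfc_cut=2.0, alpha=0.05):
    # A genus is flagged when |log2 fold change| > lfc_cut and BH-adjusted
    # p < alpha; a pseudocount of 1 avoids taking the log of zero.
    lfc = np.log2((np.asarray(mean_b) + 1) / (np.asarray(mean_a) + 1))
    padj = bh_adjust(pvals)
    return lfc, padj, (np.abs(lfc) > lfc_cut) & (padj < alpha)

# Hypothetical mean abundances of three genera in the two compared groups.
mean_a = np.array([1.0, 100.0, 3.0])
mean_b = np.array([100.0, 1.0, 3.0])
pvals = [0.001, 0.001, 0.8]
lfc, padj, flags = volcano_flags(mean_a, mean_b, pvals)
```

Here the first two genera are flagged (one enriched, one depleted) and the unchanged third is not.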
Effects of Fertilizer Treatments on Rhizosphere Microbial Biomarkers and Functions
Linear discriminant effect size (LEfSe) analysis was conducted to identify unique microbial taxa significantly associated with each fertilizer treatment. Biomarker bacterial and fungal communities are depicted in cladograms, using thresholds of bacterial linear discriminant analysis (LDA) scores ≥ 3.5 and fungal LDA scores ≥ 3 (Fig. 6A and C). Biomarkers associated with the treatments varied across fertilizers. The bacterial and fungal LDA analyses detected 66 (CK = 24, CF = 16, BF1 = 26, BF2 = 0) and 98 (CK = 20, CF = 15, BF1 = 21, BF2 = 42) biomarkers for the different fertilizers, respectively (Fig. 6A and C). The higher-scoring bacterial biomarkers of the BF1 treatment belonged to the phyla Acidobacteria and Anaerolineaceae; those of CF belonged to Alphaproteobacteria, Gaiellales, and Frankiales. Meanwhile, among fungi, the higher-scoring biomarkers of BF2 belonged to Cystofilobasidiaceae, Mrakia, Pinnularia, and Tremellomycetes; those of CF belonged to unclassified Dothideomycetes and Tremellales (Fig. 6C). In addition, regarding KEGG, 44 third-level pathways differed significantly (LDA > 2.5, P < 0.05, Fig. 6B), including 29 pathways significantly different in BF1, such as genetic information processing, global and overview maps, and energy metabolism. Seven pathways were significantly different in CF, such as environmental information processing, lipid metabolism, and xenobiotic biodegradation and metabolism (Fig. S4). The BF1 treatment group had the most differential pathways. Meanwhile, there were 14 differential fungal FUNGuild categories (CK = 4, CF = 6, BF1 = 0, BF2 = 4); BF2 mainly included pathotroph and animal pathogen, whereas pathotroph-saprotroph and fungal parasite-undefined saprotroph were found in CF (LDA > 2.0, P < 0.05, Fig. 6D and Fig. S5).
Cladograms illustrating the phylogenetic dynamics of the rhizosphere microorganisms associated with the different fertilizers (A and C). Bacterial biomarkers with LDA scores ≥ 3.5 and fungal biomarkers with LDA scores ≥ 3 in each treatment are listed. Different colors depict different treatments, while circles show phylogenetic levels from phylum to OTU. KEGG functional pathways differentially abundant under the different fertilizers: differentially abundant KEGG pathways in the sugarcane PICRUSt-predicted metagenome and differences in the FUNGuild functional classification of fungi, shown using LEfSe (B and D). Nodes of a given color represent microbes that play a crucial role in the group shown in that color, and yellow nodes denote non-significance
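LEfSe's first screening step is a Kruskal–Wallis test of each taxon's abundance across all treatment groups; only taxa passing this screen are then scored with LDA (omitted here). A SciPy sketch on hypothetical abundances:

```python
from scipy.stats import kruskal

# Hypothetical relative abundances of one genus in three replicate plots
# of each fertilizer treatment (CK, CF, BF1, BF2).
abund = {
    "CK":  [0.010, 0.012, 0.011],
    "CF":  [0.013, 0.011, 0.012],
    "BF1": [0.050, 0.055, 0.048],  # clearly enriched under bio-fertilizer
    "BF2": [0.049, 0.052, 0.051],
}

# Kruskal-Wallis H test across all four treatment groups; a small p-value
# sends this genus on to the LDA-scoring stage of LEfSe.
stat, p = kruskal(*abund.values())
```

With three replicates per group the test has limited power, which is one reason LEfSe combines it with an effect-size (LDA score) threshold rather than relying on p-values alone.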
Among bacteria, of the top 30 genera identified by the support vector machine (Fig. S3), Woodsholea, norank_Latescibacter, Bauldia, Myxococcales, and Oryzihumus were identified as important variables contributing significantly to the class separation between CK and CF; Anaerolinea, Vicinamibacter, Syntrophobacter, and Anaerolineaceae were the more important genera for the difference between CK and BF2; and particular attention should be paid to the important roles of norank_Anaerolineace, Vulgatibacter, Paenibacillus, Achromobacter, and Roseiarcus in the differentiation between CF and BF2 (Fig. 7A). Among fungi, on the other hand, Hydnodontaceae, norank_Agaricomyce, Saccharomycetales, Ascomycota, and Glomeromycota between CK and CF; Ascomycota, Obertrumia, Salpingoeca, Monosiga, and Discicristoidea between CK and BF2; and Cochliobolus, Sordariales, Dothideomycetes, Pleosporales, and Acrospermum between CF and BF2 contributed more to the variability between groupings than the other genera (Fig. 7B).
A support vector machine (SVM) approach was used to select the bacterial genera (A) and fungal genera (B) with the highest contribution to the variance between the different fertilizer treatment groups. The horizontal coordinate is the average importance and the vertical coordinate is the microbial genus; the heatmap shows the relative abundance difference of each genus between the two compared groups. The top 30 bacterial and top 15 fungal genera by importance are shown. Order of comparison: CK vs. CF, CK vs. BF2, and CF vs. BF2
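As a sketch of how a linear SVM ranks genera by importance, the following self-contained NumPy implementation trains with the Pegasos sub-gradient method and ranks features by |w|. This is our simplified stand-in (on hypothetical data) for the SVM run on the Wekemo cloud platform:

```python
import numpy as np

def linear_svm_importance(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with the Pegasos sub-gradient method and return
    |w| as a per-feature (per-genus) importance score; y must be +1/-1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying learning rate
            if y[i] * (X[i] @ w) < 1:
                # Margin violated: shrink w and step toward the violating sample.
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                # Margin satisfied: only the regularization shrinkage applies.
                w = (1 - eta * lam) * w
    return np.abs(w)

# Hypothetical log-abundances of 2 genera in 6 samples; genus 0 separates
# the two treatment classes, genus 1 is noise.
X = np.array([[1.0, 0.1], [1.2, -0.2], [0.9, 0.3],
              [-1.0, 0.2], [-1.1, -0.1], [-0.8, 0.0]])
y = np.array([1, 1, 1, -1, -1, -1])
importance = linear_svm_importance(X, y)
```

The discriminative genus receives a much larger weight magnitude, which is the quantity plotted as "average importance" in such figures.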
Network Analysis of Soil Microbial Communities (Co-occurrence Network)
Co-occurrence network analysis was used to assess interactions across dominant populations, and only significant correlations (r2 > 0.4, p < 0.05) are shown in the networks. The results revealed a lower number of links in BF2 for the bacteria, while for the fungi, the BF1 networks had the fewest links (Table S5). Further insight into the bacterial and fungal genus networks showed the lowest mean degree, centralization closeness, network centralization, and clustering coefficient values in BF2 compared with the other treatments (Table S6). Some genera, such as norank_Acidobacteria, norank_Anaerolineaceae, Bacillus, and Roseiflexus, had a higher relative abundance and clustering coefficient in the bacterial network of BF1. The genera Candidatus_Solibacter, norank_Nitrosomonadaceae, Nitrospira, and norank_Acidimicrobiales had the largest clustering coefficients in the CF bacterial network compared with the other treatments (Fig. 8C and Table S7). In the fungal network, Fusarium had the highest clustering coefficient in CF compared to the other treatments, whereas BF2 had the lowest (Fig. 8D and F, Table S8).
Co-occurrence networks of rhizosphere microbial features. The maps show the bacterial and fungal networks at the genus level, restricted to the top 40 genera. CK: urea application (A and B), CF: compound fertilizer (C and D), BF1: bio-fertilizer + urea (E and F), and BF2: bio-fertilizer + urea (G and H). Lines represent significant Pearson correlations (r2 > 0.4, p < 0.05): light red lines indicate significant positive correlations and blue lines significant negative correlations. Red nodes mark the top 6 node values in each network, and the size of each circle represents the relative abundance of that genus
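The network parameters reported here (number of links, mean degree, clustering coefficient) can be computed from a genus-genus correlation matrix with the stated r² > 0.4 edge threshold. This NumPy version is a simplified stand-in for the UCINET calculations:

```python
import numpy as np

def network_stats(corr, r2_cut=0.4):
    """Build an unweighted co-occurrence network from a genus-genus
    correlation matrix, keeping edges with r^2 > r2_cut, then report the
    number of links, mean degree, and global clustering coefficient."""
    adj = (corr ** 2 > r2_cut).astype(int)
    np.fill_diagonal(adj, 0)           # no self-loops
    n_links = int(adj.sum() // 2)      # each undirected edge counted twice
    degree = adj.sum(axis=0)
    mean_degree = degree.mean()
    # Global clustering: 3 * triangles / number of connected triples.
    triangles = np.trace(np.linalg.matrix_power(adj, 3)) / 6
    triples = (degree * (degree - 1)).sum() / 2
    clustering = 3 * triangles / triples if triples > 0 else 0.0
    return n_links, mean_degree, clustering

# Toy example: three genera, all pairwise correlations r = 0.9 (r^2 = 0.81),
# giving a fully connected triangle.
corr = np.full((3, 3), 0.9)
links, mean_deg, cc = network_stats(corr)
```

On this toy triangle the network has 3 links, mean degree 2, and clustering coefficient 1, the maximum possible; sparser, less clustered networks (as reported for BF2) yield lower values on all three measures.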
Fertilizer application is one of the most common agricultural practices used to increase crop yields [38, 39]. Although nutrient use efficiency in China's farming activities has gradually improved over the past decade [40], large amounts of inorganic fertilizers (nitrogen, phosphorus, and potassium) have been applied to farmland to increase crop yields, causing serious ecological problems such as soil organic matter loss [41], low soil fertility, nutrient inefficiency, and soil quality degradation [43]. In a situation where further chemical fertilizer input can no longer improve yields, the development of new fertilizers is a very important milestone. At the same time, an in-depth understanding of the activity patterns of rhizosphere soil microorganisms after bio-fertilizer application can play a crucial role in better developing and utilizing new fertilizers to improve soil productivity. Therefore, we conducted this study.
Impact of Different Fertilizers on Sugarcane Yield Index and Soil Nutrients
There is now ample evidence that soil physicochemical factors such as SOC, TP, TN, AP, AN, and AK are enhanced by different fertilizers, and that some fertilizers can mitigate soil acidification to some extent [41, 43]. However, these studies are based on chemical or other fertilizers, and rhizosphere microbial studies related to bio-fertilizers are still relatively scarce. Our findings suggest that sugarcane sugar content and soil pH varied noticeably among the different fertilizers, which may be attributable to the microorganisms added with the bio-fertilizer promoting an increase in sugarcane root secretion, or to the rhizosphere community under bio-fertilizer recruiting more functional microbes from the soil that facilitate soil acidity reduction and nutrient uptake by the roots [13]. Although the addition of bio-fertilizer did not produce significant differences in yield indicators compared with the CF treatment group, the yield increase with bio-fertilizer was greater than with chemical fertilizer alone. Similarly, the organic matter input from the bio-fertilizer can improve the water-soluble and exchangeable forms of soil micronutrients, further enhancing their uptake by the sugarcane root system [45].
Effect of Fertilizers on Microbial Species Composition and Diversity
Fertilizer addition significantly affected the diversity and species composition of the sugarcane rhizosphere microbial community. The results showed that both compound fertilizer and bio-fertilizer increased bacterial diversity and abundance to different degrees, but had no significant effect on the rhizosphere fungal community, a phenomenon similar to the findings of Bello et al. [46]. Non-metric multidimensional scaling (NMDS) and redundancy analysis (RDA) were used to explore changes in the composition of the rhizosphere microbial community and the correlation between environmental factors and the rhizosphere community, respectively. Samples from the different treatment groups in the NMDS (Fig. 4A and D) were clearly separated and clustered by group, and the Adonis test (Table 3) again demonstrated significant differences between the fertilizer treatments (p < 0.05). Many studies have demonstrated that soil physicochemical factors are important drivers of soil microbial communities [47, 48]. Likewise, our RDA and Spearman correlation heatmap analyses revealed that pH, AN, TN, AK, and SOC significantly affected rhizosphere bacterial and fungal structure and diversity (Fig. 3). The VPA analysis likewise revealed that soil physicochemical variables explained a large proportion of the microbial variation (Fig. S2). These results support previous findings; Cao et al. reported that soil pH, SOC, TN, and TP were all significantly correlated with bacteria, fungi, and total microorganisms [49]. 
These observations may be due to the fact that the properties of different fertilizers can have specific effects on rhizosphere environment, and that functional bacteria in bio-fertilizers may increase the availability of nutrients or promote the secretion of certain chemicals from sugarcane while influencing rhizosphere community interactions, thus affecting the entire root-soil-microbial system. In addition, the bacterial genera that showed significant positive correlation with TN, TP, and AK in this study were Acidimicrobiales, Haliangium, Nitrospira, and Nitrosomonadaceae; and the major fungal genera were Pseudallescheria, Mrakia, Chalazion, and Chytridiomycota. These microbial genera are likely to act as coordinators or transformers of nutrients in the soil [50, 51].
Fertilizer's Effect on Differential Microbes
There was large variability in the differential microbial genera between comparison groups (Fig. 5 and Table S4). The bacterial genera Microbacterium, Leifsonia, and Sinomonas, which were significantly reduced in BF2 compared to CK and CF, have been reported as a group of Gram-positive bacteria that may be associated with disease [52]; in particular, the reduction of Leifsonia is likely to suppress or slow down the occurrence of ratoon stunting (growth-hindering) disease of sugarcane [53]. Meanwhile, the significantly enriched Geobacter, Nitrosomonadaceae, and Pedomicrobium have been associated with environmental remediation [54], nitrification, and the utilization of trace elements in the soil [55, 56], and their interactions may have promoted the activity of rhizosphere-related enzymes in sugarcane, thus facilitating the uptake and utilization of trace elements. In addition, in the fungal volcano plot (Fig. 5B), Saccharomycetales were enriched relative to the CK and CF treatment groups; members of this order can synthesize active chemical substances that promote root growth and cell division and provide substrates required for the proliferation of other beneficial microorganisms [57]. These observations deepen our understanding of the ways in which bio-fertilizers promote soil ecosystems and plant health.
Impact of Fertilizers on Biomarkers and Functions
To further explore the effects of the bio-fertilizer on the rhizosphere community, LEfSe analysis and a machine learning algorithm (support vector machine, SVM) were used to identify biomarkers and to rank the differential contributions of microbial genera in the different treatment groups, respectively. According to the LEfSe analysis, microbial indicators differed significantly among fertilizer treatments. This suggests that the different fertilizer treatments accelerated the selection of the rhizosphere microbial community: by modifying the rhizosphere soil microenvironment and releasing chemical secretions (recruitment or expulsion), sugarcane builds a rhizosphere environment suitable for its own growth [58, 59]. Most of the significant biomarkers belonged to Acidobacteria, Actinobacteria, and Proteobacteria in the bacterial community and to Ascomycota and Basidiomycota in the fungal community. These results again corroborate the observation of Zhang et al., who reported the phylum Ascomycota to be the most pronounced biomarker under different carbon assimilation conditions [60]. Meanwhile, the SVM evaluated the importance of the microbial genera responsible for the variability between fertilizers; genera of relatively high average importance may underlie functional differences in sugarcane under the fertilizer regimes [61]. Between BF2 and CF, the top-ranked bacterial genera in terms of relative importance were Anaerolineaceae, Vulgatibacter, and Paenibacillus, and the top-ranked fungi were Cochliobolus, Sordariales, and Dothideomycetes. Genera of high importance may be associated with the biological processes significantly marked in the LEfSe analysis (Fig. 6B and D).
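LEfSe begins by screening taxa with a non-parametric Kruskal–Wallis test before estimating LDA effect sizes. A minimal sketch of that first screening step (illustrative only — the genus names, abundance values, and α cutoff below are invented, not the study's data):

```python
import numpy as np
from scipy.stats import kruskal

def screen_genera(abund, groups, alpha=0.05):
    """Return genera whose abundances differ across treatment groups,
    using a Kruskal-Wallis test (the first filtering step of LEfSe).

    abund:  dict mapping genus name -> abundance per sample
    groups: array of treatment labels, one per sample
    """
    flagged = {}
    for genus, values in abund.items():
        values = np.asarray(values, dtype=float)
        samples = [values[groups == g] for g in np.unique(groups)]
        _, p = kruskal(*samples)
        if p < alpha:
            flagged[genus] = p
    return flagged
```

Genera passing this filter would then be ranked by effect size; a full LEfSe run adds the subclass consistency check and the LDA step.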
Furthermore, among the LEfSe results for bacterial functional pathways, BF1 had the most tagged pathways, such as genetic information processing, global and overview maps, energy metabolism, translation, and the citrate (TCA) cycle, which suggests that the addition of bio-fertilizer may affect numerous biological processes by altering the community structure and composition of rhizosphere microorganisms. In a previous study, Zhang et al. reported that the application of a Trichoderma bio-fertilizer changed the microbial environment of grassland and that Trichoderma abundance became the most important contributor to grassland biomass, indirectly suggesting that the addition of bio-fertilizer alters a series of biological processes at the rhizosphere level [13]. In fungi, by contrast, the CF treatment seemed to have a stronger effect on the biological processes of rhizosphere fungi; this may be due to competition between the fertilizer effect and the microbial effect, and needs to be explored more deeply [46].
Fertilizer's Effects on Soil Microbial Communities and Network Patterns
Co-occurrence analysis showed that the relative abundances of the bacteria Acidobacteria and Anaerolineaceae were significantly higher with the addition of bio-fertilizer to the soil compared to the CK and CF treatment groups (Table S9), and that these taxa played a more important role in the network (Table S7). We hypothesize that this increase in abundance was closely related to the increase in the rhizosphere soil pH of sugarcane. Soil pH has been reported to be one of the major soil factors determining microbial community structure under controlled conditions of different fertilizers [46, 62]. Intracellular acidification can inhibit most enzymatic metabolism, so many microorganisms are sensitive to pH changes [63]; an increase in soil pH is therefore partly suggestive of a healthier soil environment. We also identified some potentially beneficial bacteria among the microbes with higher relative abundance and more central positions in the BF1 and BF2 co-occurrence networks; for instance, Nitrosomonadaceae has been reported to be closely associated with nitrification in soil and the bioremediation of toxic chemicals [64,65,66]. In addition, the centralization of the bacterial networks differed among fertilizer treatments, with BF2 having the smallest network centralization (15.52%) (Table S6); this may be because the functional bacteria added with the fertilizer disrupted the equilibrium of interactions among the original soil microorganisms, making the network more extensive, with more keystone microbes acting as hubs. In the fungal networks, Talaromyces had absolute numerical and positional dominance in each treatment (Tables S8 and S10). However, the addition of different fertilizers resulted in more negative relationships among genera, with the greatest increase in the rate of negative relationships observed in the BF2 network (Table S5).
Meanwhile, the fungal network with bio-fertilizer treatment possessed fewer interactions, which was similar to the network characteristics of healthy soil proposed by Yun et al. [67]. Interestingly, among the fungal networks, CF possessed the highest network centralization, which may be due to the specific effects of chemical fertilizers on fungi.
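Co-occurrence networks of this kind are typically drawn by thresholding pairwise correlations between taxa, and the network centralization figures quoted above correspond to Freeman's degree centralization (as computed by UCINET). A simplified sketch — the Spearman ρ and p-value cutoffs are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np
from scipy.stats import spearmanr

def cooccurrence_adjacency(abund, rho_min=0.6, p_max=0.05):
    """Boolean adjacency matrix of a co-occurrence network: an edge joins
    two taxa whose abundance profiles (columns of a samples x taxa table,
    at least 3 taxa) are strongly and significantly Spearman-correlated."""
    rho, p = spearmanr(abund)
    adj = (np.abs(rho) >= rho_min) & (p <= p_max)
    np.fill_diagonal(adj, False)
    return adj

def degree_centralization(adj):
    """Freeman degree centralization of an undirected network, in percent."""
    deg = adj.sum(axis=0)
    n = len(deg)
    return 100.0 * (deg.max() - deg).sum() / ((n - 1) * (n - 2))
```

A star-shaped network (one hub) scores 100% and a fully connected network scores 0%, so a lower value such as the 15.52% reported for BF2 indicates influence spread over many genera rather than concentrated in one hub.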
In this study, we determined the rhizosphere microbial community composition and function in sugarcane, and their responses to changes in soil physicochemical parameters, after the application of different fertilizers. The main drivers of these changes are likely the combined effects of soil pH, the nutrients in the fertilizers, and the added functional bacteria; the VPA analysis showed a high degree of explanation of the microbial community by soil physicochemical factors. Compared with CK and CF, bio-fertilizer greatly reduced soil acidification and improved soil microbial community composition and structure, thereby improving soil quality and productivity. In addition, bio-fertilizer application induced more beneficial microorganisms to accumulate in the rhizosphere soil of sugarcane; meanwhile, the reduction of pathogenic bacteria such as Leifsonia likely inhibited or slowed the occurrence of ratoon stunting disease, promoting plant health. Among the co-occurrence networks under the different fertilizer regimes, the bio-fertilizer networks were closest to the characteristics of healthy soil networks, indicating that bio-fertilizer application can improve soil health to some extent and support green, stable, and sustainable development. Overall, this study provides new insights into the future replacement of overused chemical fertilizers by bio-fertilizers and is important for exploring plant–soil–microbe interactions.
Lenaerts B, Collard BCY, Demont M (2019). Review: improving global food security through accelerated plant breeding. Plant Science, 287:110207.
Iizumi T, Kotoku M, Kim W, West PC, Gerber JS, Brown ME (2018). Uncertainties of potentials and recent changes in global yields of major crops resulting from census- and satellite-based yield datasets at multiple resolutions. Plos One, 13:e203809.
Bel J, Legout A, Saint-André L, Hall SJ, Löfgren S, Laclau J, et al. (2020). Conventional analysis methods underestimate the plant-available pools of calcium, magnesium and potassium in forest soils. Scientific Reports, 10.
Gkarmiri K, Finlay RD, Alström S, Thomas E, Cubeta MA, Högberg N (2015). Transcriptomic changes in the plant pathogenic fungus Rhizoctonia solani AG-3 in response to the antagonistic bacteria Serratia proteamaculans and Serratia plymuthica. BMC Genomics, 16.
Solanki MK, Wang F, Wang Z, Li C, Lan T, Singh RK et al (2019) Rhizospheric and endospheric diazotrophs mediated soil fertility intensification in sugarcane-legume intercropping systems. J Soils Sediments 19:1911–1927
Wang J, Xue C, Song Y, Wang L, Huang Q, Shen Q (2016). Wheat and rice growth stages and fertilization regimes alter soil bacterial community structure, but not diversity. Frontiers in Microbiology, 7.
Ramirez KS, Craine JM, Fierer N (2012) Consistent effects of nitrogen amendments on soil microbial communities and processes across biomes. Glob Change Biol 18:1918–1927
Hamza MA, Anderson WK (2005) Soil compaction in cropping systems. Soil and Tillage Research 82:121–145
Guo JH, Liu XJ, Zhang Y, Shen JL, Han WX, Zhang WF et al (2010) Significant acidification in major Chinese croplands. Science 327:1008–1010
Gu Y, Wang X, Yang T, Friman V, Geisen S, Wei Z, et al. (2020). Chemical structure predicts the effect of plant-derived low molecular weight compounds on soil microbiome structure and pathogen suppression. Functional Ecology.
Badri DV, Vivanco JM (2009) Regulation and function of root exudates. Plant, Cell Environ 32:666–681
Dong M, Zhao M, Shen Z, Deng X, Ou Y, Tao C et al (2020) Biofertilizer application triggered microbial assembly in microaggregates associated with tomato bacterial wilt suppression. Biol Fertil Soils 56:551–563
Zhang F, Huo Y, Cobb AB, Luo G, Zhou J, Yang G, et al. (2018). Trichoderma biofertilizer links to altered soil chemistry, altered microbial communities, and improved grassland biomass. Frontiers in Microbiology, 9.
Zhong W, Gu T, Wang W, Zhang B, Lin X, Huang Q et al (2010) The effects of mineral fertilizer and organic manure on soil microbial community and diversity. Plant Soil 326:523
Gunarto L (2000). Rhizosphere microbes: their roles and potential. Jurnal Penelitian Dan Pengembangan Pertanian.
Gyaneshwar P, Kumar GN, Parekh LJ, Poole PS (2002) Role of soil microorganisms in improving P nutrition of plants. System Sciences & Comprehensive Studies in Agriculture 245:133–143
Malik AA, Swenson T, Weihe C, Morrison EW, Martiny JBH, Brodie EL et al (2020) Drought and plant litter chemistry alter microbial gene expression and metabolite production. ISME J 14:2236–2247
Lugtenberg B, Kamilova F (2009) Plant-growth-promoting rhizobacteria. Annu Rev Microbiol 63:541–556
Zhang Q, Zhou W, Liang G, Wang X, Sun J, He P, et al. (2015). Effects of different organic manures on the biochemical and microbial characteristics of albic paddy soil in a short-term experiment. Plos One, 10:e124096.
Pang Z, Dong F, Liu Q, Lin W, Hu C, Yuan Z (2021). Soil metagenomics reveals effects of continuous sugarcane cropping on the structure and functional pathway of rhizospheric microbial community. Frontiers in Microbiology, 12.
Singh A, Sarma BK, Upadhyay RS, Singh HB (2013) Compatible rhizosphere microbes mediated alleviation of biotic stress in chickpea through enhanced antioxidant and phenylpropanoid activities. Microbiol Res 168:33–40
Yi H, Heil M, Adame-Álvarez RM, Ballhorn DJ, Ryu C (2009) Airborne induction and priming of plant defenses against a bacterial pathogen. Plant Physiol 151:2152–2161
Tan S (2013) The effect of organic acids from tomato root exudates on rhizosphere colonization of Bacillus amyloliquefaciens T-5. Appl Soil Ecol 64:15–22
Vestergaard G, Schulz S, Schöler A, Schloter M (2017) Making big data smart—how to use metagenomics to understand soil quality. Biol Fertil Soils 53:1–6
Bao SD (2000). Soil and agricultural chemistry analysis.
Lin W, Wu L, Lin S, Zhang A, Zhou M, Lin R et al (2013) Metaproteomic analysis of ratoon sugarcane rhizospheric soil. BMC Microbiol 13:1–13
Sun L, Han X, Li J, Zhao Z, Liu Y, Xi Q, et al. (2020). Microbial community and its association with physicochemical factors during compost bedding for dairy cows. Frontiers in Microbiology, 11.
Wang W, Yi Y, Yang Y, Zhou Y, Jia W, Zhang S, et al. (2019). Response mechanisms of sediment microbial communities in different habitat types in a shallow lake. Ecosphere, 10.
Pang Z, Tayyab M, Kong C, Hu C, Zhu Z, Wei X et al (2019) Liming positively modulates microbial community composition and function of sugarcane fields. Agronomy 9:808
Edgar RC (2013) UPARSE: highly accurate OTU sequences from microbial amplicon reads. Nat Methods 10:996–998
Caporaso JG, Kuczynski J, Stombaugh J, Bittinger K, Bushman FD, Costello EK, et al. (2010). QIIME allows analysis of high-throughput community sequencing data. Nature Methods.
Ma B, Lv X, Cai Y, Chang SX, Dyck MF (2018) Liming does not counteract the influence of long-term fertilization on soil bacterial community structure and its co-occurrence pattern. Soil Biol Biochem 123:45–53
Love M, Anders S, Huber W (2014). Differential analysis of count data–the deseq2 package.
Suykens J, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9:293–300
Borgatti SP, Everett MG, Freeman LC (2002). UCINET VI for windows: software for social network analysis.
Douglas GM, Maffei VJ, Zaneveld JR, Yurgel SN, Brown JR, Taylor CM, et al. (2020). PICRUSt2 for prediction of metagenome functions. Nature Biotechnology.
Nguyen NH, Song Z, Bates ST, Branco S, Tedersoo L, Menke J et al (2016) FUNGuild: an open annotation tool for parsing fungal community datasets by ecological guild. Fungal Ecol 20:241–248
Mącik M, Gryta A, Sas-Paszt L, Frąc M (2020) The status of soil microbiome as affected by the application of phosphorus biofertilizer: fertilizer enriched with beneficial bacterial strains. Int J Mol Sci 21:8003
Yin H, Zhao W, Li T, Cheng X, Liu Q (2018) Balancing straw returning and chemical fertilizers in China: role of straw nutrient resources. Renew Sustain Energy Rev 81:2695–2702
Huang Y, Huang X, Xie M, Cheng W, Shu Q (2021). A study on the effects of regional differences on agricultural water resource utilization efficiency using super-efficiency SBM model. Scientific Reports, 11.
Qian L, Chen B, Chen M (2016). Novel alleviation mechanisms of aluminum phytotoxicity via released biosilicon from rice straw-derived biochars. Scientific Reports, 6.
Lima Neto AJD, Deus JALD, Rodrigues Filho VA, Natale W, Parent LE (2020). Nutrient diagnosis of fertigated "Prata" and "Cavendish" banana (Musa spp.) at Plot-Scale. Plants, 9:1467.
Sánchez-Montesinos B, Diánez F, Moreno-Gavira A, Gea FJ, Santos M (2019) Plant growth promotion and biocontrol of Pythium ultimum by saline tolerant trichoderma isolates under salinity stress. Int J Environ Res Public Health 16:2053
Wang R, Shi X, Wei Y, Yang X, Uoti J (2006) Yield and quality responses of citrus (Citrus reticulate) and tea (Podocarpus fleuryi Hickel.) to compound fertilizers. Journal of Zhejiang University B Science 7B:696–701
Dhaliwal SS, Naresh RK, Mandal A, Singh R, Dhaliwal MK (2019). Dynamics and transformations of micronutrients in agricultural soils as influenced by organic matter build-up: a review. Environmental and Sustainability Indicators, 1–2:100007.
Bello A, Wang B, Zhao Y, Yang W, Ogundeji A, Deng L, et al. (2021). Composted biochar affects structural dynamics, function and co-occurrence network patterns of fungi community. Science of the Total Environment, 775:145672.
Shao JL, Lai B, Jiang W, Wang JT, Hong YH, Chen FB, et al. (2019). Diversity and co-occurrence patterns of soil bacterial and fungal communities of Chinese cordyceps habitats at Shergyla Mountain, Tibet: implications for the occurrence. Microorganisms, 7.
Jia T, Cao M, Wang R (2018) Effects of restoration time on microbial diversity in rhizosphere and non-rhizosphere soil of Bothriochloa ischaemum. Int J Environ Res Public Health 15:2155
Cao H, Chen R, Wang L, Jiang L, Yang F, Zheng S, et al. (2016). Soil pH, total phosphorus, climate and distance are the major factors influencing microbial activity at a regional spatial scale. Scientific Reports, 6.
LeBlanc N, Kinkel LL, Kistler HC (2015) Soil fungal communities respond to grassland plant community richness and soil edaphics. Microb Ecol 70:188–195
Hatam I, Petticrew EL, French TD, Owens PN, Laval B, Baldwin SA (2019). The bacterial community of Quesnel Lake sediments impacted by a catastrophic mine tailings spill differ in composition from those at undisturbed locations – two years post-spill. Scientific Reports, 9.
Zhou Y, Wei W, Wang X, Lai R (2009) Proposal of Sinomonas flava gen. nov., sp. nov., and description of Sinomonas atrocyanea comb. nov. to accommodate Arthrobacter atrocyaneus. Int J Syst Evol Microbiol 59:259–263
Brumbley SM, Petrasovits LA, Birch RG, Taylor PWJ (2002). Transformation and transposon mutagenesis of Leifsonia xyli subsp.xyli, causal organism of ratoon stunting disease of sugarcane. Molecular Plant-Microbe Interactions®, 15:262–268.
Lovley DR, Ueki T, Zhang T, Malvankar NS, Shrestha PM, Flanagan KA et al (2011) Geobacter: the microbe electric's physiology, ecology, and practical applications. Adv Microb Physiol 59:1
Prosser JI, Head IM, Stein LY (2014) The family Nitrosomonadaceae. Springer, Berlin Heidelberg
Ridge JP, Lin M, Larsen EI, Fegan M, Sly LI (2007) A multicopper oxidase is essential for manganese oxidation and laccase-like activity in Pedomicrobium sp. ACM 3067. Environ Microbiol 9:944–953
Galitskaya P, Biktasheva L, Saveliev A, Grigoryeva T, Boulygina E, Selivanovskaya S (2017). Fungal and bacterial successions in the process of co-composting of organic wastes as revealed by 454 pyrosequencing. Plos One, 12:e186051.
Zhao X, Jiang Y, Liu Q, Yang H, Wang Z, Zhang M (2020). Effects of drought-tolerant Ea-DREB2B transgenic sugarcane on bacterial communities in soil. Frontiers in Microbiology, 11.
Liu Y, Yang H, Liu Q, Zhao X, Xie S, Wang Z, et al. (2021). Effect of two different sugarcane cultivars on rhizosphere bacterial communities of sugarcane and soybean upon intercropping. Frontiers in Microbiology, 11.
Zhang Q, Guo T, Li H, Wang Y, Zhou W (2020). Identification of fungal populations assimilating rice root residue-derived carbon by DNA stable-isotope probing. Applied Soil Ecology, 147:103374.
Ammons MCB, Morrissey K, Tripet BP, Van Leuven JT, Han A, Lazarus GS, et al. (2015). Biochemical association of metabolic profile and microbiome in chronic pressure ulcer wounds. Plos One, 10:e126735.
Zhalnina K, Dias R, de Quadros PD, Davis-Richardson A, Camargo FAO, Clark IM et al (2015) Soil pH determines microbial diversity and composition in the park grass experiment. Microb Ecol 69:395–406
Colla LM, Primaz AL, Benedetti S, Loss RA, de Lima M, Reinehr CO et al (2016) Surface response methodology for the optimization of lipase production under submerged fermentation by filamentous fungi. Braz J Microbiol 47:461–467
Zhang B, Xu X, Zhu L (2018). Activated sludge bacterial communities of typical wastewater treatment plants: distinct genera identification and metabolic potential differential analysis. AMB Express, 8.
Jiang J, Song Z, Yang X, Mao Z, Nie X, Guo H, et al. (2017). Microbial community analysis of apple rhizosphere around Bohai Gulf. Scientific Reports, 7.
Li M, Chen Z, Qian J, Wei F, Zhang G, Wang Y, et al. (2020). Composition and function of rhizosphere microbiome of Panax notoginseng with discrepant yields. Chinese Medicine, 15.
Yuan J, Wen T, Zhang H, Zhao M, Penton CR, Thomashow LS et al (2020) Predicting disease occurrence with high accuracy based on soil macroecological patterns of Fusarium wilt. ISME J 14:2936–2950
We thank the Majorbio Cloud platform (www.majorbio.com) for providing free online data analysis.
This research was funded by the Modern Agricultural Industry Technology System of China (CARS-170208), the Natural Science Foundation of Fujian Province (2017J01456), the Special Foundation for Scientific and Technological Innovation of Fujian Agriculture and Forestry University (KFA17172A, KFA17528A), and the National Natural Science Foundation of China (31771723), supported by the China Agriculture Research System of MOF and MARA.
Key Laboratory of Sugarcane Biology and Genetic Breeding, Ministry of Agriculture, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Qiang Liu, Ziqin Pang, Fallah Nyumah, Chaohua Hu & Zhaonian Yuan
College of Agricultural, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Qiang Liu, Ziqin Pang, Fallah Nyumah & Zhaonian Yuan
Fujian Provincial Key Laboratory of Agro-Ecological Processing and Safety Monitoring, College of Life Sciences, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Ziqin Pang, Fallah Nyumah & Wenxiong Lin
Key Laboratory of Crop Ecology and Molecular Physiology, Fujian Agriculture and Forestry University, Fuzhou, 35002, China
Province and Ministry Co-Sponsored Collaborative Innovation Center of Sugar Industry, Nanning, 530000, China
Zhaonian Yuan
Guangxi Laibin Xinbin Commercial Crop Technology Extension Station, Laibin, 546100, Guangxi, China
Zuli Yang
Qiang Liu
Ziqin Pang
Fallah Nyumah
Chaohua Hu
Wenxiong Lin
All authors contributed to the intellectual input and provided assistance to this study and manuscript preparation; Z.Y. and Z.P. designed the research and conducted the experiments; Q.L. analyzed the data and wrote the manuscript; Fallah N., W.L., and C.H. reviewed the manuscript; Z.Y. supervised the work and approved the manuscript for publication.
Correspondence to Zhaonian Yuan.
Below is the link to the electronic supplementary material.
Supplementary file1 (DOCX 2013 kb)
Liu, Q., Pang, Z., Yang, Z. et al. Bio-fertilizer Affects Structural Dynamics, Function, and Network Patterns of the Sugarcane Rhizospheric Microbiota. Microb Ecol (2021). https://doi.org/10.1007/s00248-021-01932-3
Keywords: Bio-fertilizer · Physicochemical property · Rhizosphere microbes
\begin{definition}[Definition:Spanning Tree/Building-Up Method]
Start with the edgeless graph $N$ whose vertices correspond with those of $G$.
Select edges of $G$ one by one, such that no cycles are created, and add them to $N$.
Continue till all vertices are included.
\end{definition} | ProofWiki |
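In algorithmic terms, the building-up method is cycle-free edge selection — the skeleton of Kruskal's algorithm without edge weights. A Python sketch using a union-find forest to detect cycles (the graph representation and function names are illustrative, not part of the definition):

```python
def spanning_tree_edges(vertices, edges):
    """Build a spanning tree by the building-up method: start from the
    edgeless graph N on the vertices of G and add edges of G one by one,
    skipping any edge that would create a cycle, until all vertices
    are connected."""
    parent = {v: v for v in vertices}      # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) creates no cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

For a connected graph on $n$ vertices this returns exactly $n - 1$ edges, as a spanning tree must have.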
\begin{document}
\title{The number of cliques in graphs of given order and size} \author{V. Nikiforov\\{\small Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152}\\{\small e-mail:} {\small [email protected]}} \maketitle
\begin{abstract} Let $k_{r}\left( n,m\right) $ denote the minimum number of $r$-cliques in graphs with $n$ vertices and $m$ edges. For $r=3,4$ we give a lower bound on $k_{r}\left( n,m\right) $ that approximates $k_{r}\left( n,m\right) $ with an error smaller than $n^{r}/\left( n^{2}-2m\right) .$
The solution is based on a constraint minimization of certain multilinear forms. Our proof combines a combinatorial strategy with extensive analytical arguments.
\textbf{AMS classification: }
\textbf{Keywords: }\textit{number of cliques; multilinear forms; Tur\'{a}n graph.}
\end{abstract}
\section*{Introduction}
Our graph-theoretic notation follows \cite{Bol98}; in particular,\ an $r$-clique is a complete subgraph on $r$ vertices.
What is the minimum number $k_{r}\left( n,m\right) $ of $r$-cliques in graphs with $n$ vertices and $m$ edges? This problem originated with the famous graph-theoretical theorem of Tur\'{a}n more than sixty years ago, but despite numerous attempts, never got a satisfactory solution, see \cite{Bol76}, \cite{Erd62}, \cite{Erd69}, \cite{Fis92}, \cite{LoSi83}, and \cite{Raz06} for some highlights of its long history. Most recently, the problem was discussed in detail in \cite{BCLSV06}.
The best result so far is due to Razborov \cite{Raz06}. Applying tools developed in \cite{Raz05}, he achieved a remarkable progress for $r=3.$ But this method failed for $r>3,$ and Razborov challenged the mathematical community to extend his result.
The aim of this paper is to answer this challenge. We introduce a class of multilinear forms and find their minima subject to certain constraints. As a consequence, for $r=3,4$ we obtain a lower bound on $k_{r}\left( n,m\right) $, approximating $k_{r}\left( n,m\right) $ with an error smaller than $n^{r}/\left( n^{2}-2m\right) .$
In our proof, a combinatorial main strategy cooperates with analytical arguments using Taylor's expansion, Lagrange's multipliers, compactness, continuity, and connectedness. We believe that such cooperation can be developed further and applied to other problems in extremal combinatorics.
It seems likely that these methods will enable the solution of the problem for $r>4$ as well. With this idea in mind we state our results in as general a form as possible.
\section{Main results}
Suppose $1\leq r\leq n,$ let $\left[ n\right] =\left\{ 1,\ldots,n\right\} ,$ and write $\binom{\left[ n\right] }{r}$ for the set of $r$-subsets of $\left[ n\right] .$ For a symmetric $n\times n$ matrix $A=\left( a_{ij}\right) $ and a vector $\mathbf{x}=\left( x_{1},\ldots,x_{n}\right) ,$ set \begin{equation} L_{r}\left( A,\mathbf{x}\right) =
{\displaystyle\sum_{X\in\binom{\left[ n\right] }{r}}}
\text{ }
{\textstyle\prod\limits_{i,j\in X,\text{ }i<j}}
a_{ij}
{\textstyle\prod\limits_{i\in X}}
x_{i}. \label{defl1} \end{equation} Define the set $\mathcal{A}\left( n\right) $ of symmetric $n\times n$ matrices $A=\left( a_{ij}\right) $ by \[ \mathcal{A}\left( n\right) =\left\{ A:\text{ }a_{ii}=0\text{ and }0\leq a_{ij}=a_{ji}\leq1\text{ for all }i,j\in\left[ n\right] \right\} . \] Our main goal is to find $\min L_{r}\left( A,\mathbf{x}\right) $ subject to the constraints \[ A\in\mathcal{A}\left( n\right) ,\text{ }\mathbf{x}\geq0,\text{ }L_{1}\left( A,\mathbf{x}\right) =b,\text{\ and }L_{2}\left( A,\mathbf{x}\right) =c, \] where $b$ and $c$ are fixed positive numbers. Since every $L_{s}\left( A,\mathbf{x}\right) $ is homogeneous of first degree in each $x_{i},$ for simplicity we assume that $b=1$ and study \begin{equation} \min\left\{ L_{r}\left( A,\mathbf{x}\right) :\left( A,\mathbf{x}\right) \in\mathcal{S}_{n}\left( c\right) \right\} , \label{mainp} \end{equation} where $\mathcal{S}_{n}\left( c\right) $ is the set of pairs $\left( A,\mathbf{x}\right) $ defined as \[ \mathcal{S}_{n}\left( c\right) =\{\left( A,\mathbf{x}\right) :\text{ } A\in\mathcal{A}\left( n\right) ,\text{\ }\mathbf{x}\geq0,\text{ } L_{1}\left( A,\mathbf{x}\right) =1,\text{ and }L_{2}\left( A,\mathbf{x} \right) =c\}. \] Note that $\mathcal{S}_{n}\left( c\right) $ is compact since the functions $L_{s}\left( A,\mathbf{x}\right) $ are continuous; hence (\ref{mainp}) is defined whenever $\mathcal{S}_{n}\left( c\right) $ is nonempty. The following proposition, proved in \ref{pp0}, describes when $\mathcal{S} _{n}\left( c\right) \neq\varnothing$.
\begin{proposition} \label{pro0}$\mathcal{S}_{n}\left( c\right) $ is nonempty if and only if $c<1/2$ and $n\geq\left\lceil 1/\left( 1-2c\right) \right\rceil .$ \end{proposition}
Hereafter we assume that $0<c<1/2$ and set $\xi\left( c\right) =\left\lceil 1/\left( 1-2c\right) \right\rceil .$
To find (\ref{mainp}), we solve a seemingly more general problem: for all $c\in\left( 0,1/2\right) ,$ $n\geq\xi\left( c\right) ,$ and $3\leq r\leq n,$ find \[ \varphi_{r}\left( n,c\right) =\min\left\{ L_{r}\left( A,\mathbf{x}\right) :\text{ }r\leq k\leq n,\text{ }\left( A,\mathbf{x}\right) \in\mathcal{S} _{k}\left( c\right) \right\} . \] We obtain the solution of (\ref{mainp}) by showing that, in fact, $\varphi _{r}\left( n,c\right) $ is independent of $n$.
To state $\varphi_{r}\left( n,c\right) $ precisely, we need some preparation. Set $s=\xi\left( c\right) $ and note that the system \begin{align} \binom{s-1}{2}x^{2}+\left( s-1\right) xy & =c,\label{cond1}\\ \left( s-1\right) x+y & =1,\label{cond2}\\ x & \geq y\nonumber \end{align} has a unique solution \begin{equation} x=\frac{1}{s}+\frac{1}{s}\sqrt{1-\frac{2s}{s-1}c},\text{ \ \ \ }y=\frac{1} {s}-\frac{s-1}{s}\sqrt{1-\frac{2s}{s-1}c}. \label{sol} \end{equation} Write $\mathbf{x}_{c}$ for the $s$-vector $\left( x,\ldots,x,y\right) $ and let $A_{s}\in\mathcal{A}\left( s\right) $ be the matrix with all off-diagonal entries equal to $1.$ Note that equations (\ref{cond1}) and (\ref{cond2}) give $\left( A_{s},\mathbf{x}_{c}\right) \in\mathcal{S} _{s}\left( c\right) .$
Setting $\varphi_{r}\left( c\right) =L_{r}\left( A_{s},\mathbf{x} _{c}\right) ,$ we arrive at the main result in this section.
\begin{theorem} \label{mainTh}If $c\in\left( 0,1/2\right) ,$ $r\in\left\{ 3,4\right\} ,$ and $r\leq\xi\left( c\right) \leq n,$ then $\varphi_{r}\left( n,c\right) =\varphi_{r}\left( c\right) .$ \end{theorem}
Note first that the premise $r\leq\xi\left( c\right) $ is not restrictive, for, $\varphi_{r}\left( n,c\right) =0$ whenever $r>\xi\left( c\right) .$ Indeed, assume that $r>\xi\left( c\right) $ and write $\mathbf{y}$ for the $r$-vector $\left( x,\ldots,x,y,0,\ldots,0\right) $ whose last $r-s$ entries are zero. Writing $B$ for the $r\times r$ matrix with $A_{s}$ as a principal submatrix in the first $s$ rows and with all other entries being zero, we see that $\left( B,\mathbf{y}\right) \in\mathcal{S}_{r}\left( c\right) $ and $L_{r}\left( B,\mathbf{y}\right) =0;$ hence $\varphi_{r}\left( n,c\right) =0,$ as claimed.
Next, note an explicit form of $\varphi_{r}\left( c\right) :$ \begin{align*} \varphi_{r}\left( c\right) & =\binom{s-1}{r}x^{r}+\binom{s-1}{r-1} x^{r-1}y\\ & =\binom{s}{r}\frac{1}{s^{r}}\left( 1-\left( r-1\right) \sqrt{1-\frac {2s}{s-1}c}\right) \left( 1+\sqrt{1-\frac{2s}{s-1}c}\right) ^{r-1}. \end{align*}
Since $\varphi_{r}\left( c\right) $ is defined via the discontinuous step function $\xi\left( c\right) ,$ the following properties of $\varphi _{r}\left( c\right) $ are worth stating:
- $\varphi_{r}\left( c\right) $ is continuous for $c\in\left( 0,1/2\right) ;$
- $\varphi_{r}\left( c\right) =0$ for $c\in\left( 0,1/4\right] $ and is increasing for $c\in\left( 1/4,1/2\right) ;$
- $\varphi_{r}\left( c\right) $ is differentiable and concave in any interval $\left( \left( s-1\right) /2s,s/2\left( s+1\right) \right) .$
\subsection{The number of cliques}
Write $k_{r}\left( G\right) $ for the number of $r$-cliques of a graph $G$ and let us outline the connection of Theorem \ref{mainTh} to $k_{r}\left( G\right) $. Let \[ k_{r}\left( n,m\right) =\min\left\{ k_{r}\left( G\right) :\text{ }G\text{ has }n\text{ vertices and }m\text{ edges}\right\} , \] and suppose that $k_{r}\left( n,m\right) $ is attained on a graph $G$ with adjacency matrix $A=\left( a_{ij}\right) .$ Clearly, for every $X\in \binom{\left[ n\right] }{r},$ \[
{\textstyle\prod\limits_{i,j\in X,\text{ }i<j}}
a_{ij}=\left\{ \begin{array} [c]{ll} 1, & \text{if }X\text{ induces an }r\text{-clique in }G,\\ 0, & \text{otherwise.} \end{array} \right. . \] Hence, letting $\mathbf{x}=\left( 1/n,\ldots,1/n\right) ,$ we see that \[ L_{1}\left( A,\mathbf{x}\right) =1,\text{ }L_{2}\left( A,\mathbf{x}\right) =m/n^{2},\text{\ and }L_{r}\left( A,\mathbf{x}\right) =k_{r}\left( G\right) /n^{r}; \] thus Theorem \ref{mainTh} gives \[ k_{r}\left( n,m\right) \geq\varphi_{r}\left( n,m/n^{2}\right) n^{r}=\varphi_{r}\left( m/n^{2}\right) n^{r}. \] Setting $s=\xi\left( m/n^{2}\right) =\left\lceil 1/\left( 1-2m/n^{2} \right) \right\rceil ,$ we obtain an explicit form of this inequality \begin{equation} k_{r}\left( n,m\right) \geq\binom{s}{r}\frac{1}{s^{r}}\left( n-\left( r-1\right) \sqrt{n^{2}-\frac{2sm}{s-1}}\right) \left( n+\sqrt{n^{2} -\frac{2sm}{s-1}}\right) ^{r-1}. \label{minc} \end{equation}
Inequality (\ref{minc}) turns out to be rather tight, as stated below and proved in Section \ref{apx}.
\begin{theorem} \label{thmeq} \[ k_{r}\left( n,m\right) <\varphi_{r}\left( \frac{m}{n^{2}}\right) n^{r}+\frac{n^{r}}{n^{2}-2m}. \]
\end{theorem}
Note, in particular, that if $m<\left( 1/2-\varepsilon\right) n^{2},$ then \[ k_{r}\left( n,m\right) <\varphi_{r}\left( m/n^{2}\right) n^{r} +n^{r-2}/2\varepsilon, \] so the error term is of order $n^{r-2},$ lower than one might expect.
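For small $n,$ inequality (\ref{minc}) and the upper bound of Theorem \ref{thmeq} can be confirmed against the true value of $k_{3}\left( n,m\right) $ by exhaustive search. A brute-force Python sketch for $r=3$ (the helper names are ours, not from the paper):

```python
import math
from itertools import combinations

def k3_lower(n, m):
    # Right-hand side of inequality (minc) for r = 3;
    # s = ceil(n^2/(n^2 - 2m)) mirrors s = xi(m/n^2), computed exactly in integers.
    den = n * n - 2 * m
    s = (n * n + den - 1) // den
    root = math.sqrt(n * n - 2 * s * m / (s - 1))
    return math.comb(s, 3) / s**3 * (n - 2 * root) * (n + root) ** 2

def k3_min(n, m):
    # Exhaustive minimum of the triangle count over all graphs
    # with n vertices and m edges (feasible only for small n).
    pairs = list(combinations(range(n), 2))
    best = math.inf
    for edge_set in map(set, combinations(pairs, m)):
        t = sum(1 for trip in combinations(range(n), 3)
                if all(p in edge_set for p in combinations(trip, 2)))
        best = min(best, t)
    return best
```

For $n=6$ and $m=10$ the exhaustive minimum is $3,$ squeezed between the lower bound $\approx2.91$ of (\ref{minc}) and the upper bound of Theorem \ref{thmeq}.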
\subsubsection*{Known previous results}
For $n^{2}/4\leq m\leq n^{2}/3$ inequality (\ref{minc}) was first proved by Fisher \cite{Fis92}. He showed that \[ k_{3}\left( n,m\right) \geq\frac{9nm-2n^{3}-2\left( n^{2}-3m\right) ^{3/2}}{27}=\varphi_{3}\left( m/n^{2}\right) n^{3}, \] but did not discuss how close the two sides of this inequality are.
Recently Razborov \cite{Raz06} showed that for every fixed $c\in\left( 0,1/2\right) ,$ \[ k_{3}\left( n,\left\lceil cn^{2}\right\rceil \right) =\varphi_{3}\left( c\right) n^{3}+o\left( n^{3}\right) . \] Unfortunately, his approach, based on \cite{Raz05}, gives no indication of how large the $o\left( n^{3}\right) $ term is; in particular, in his approach this term is not uniformly bounded as $c$ approaches $1/2.$ In \cite{Raz06} Razborov challenged the mathematical community to prove that $k_{r}\left( n,\left\lceil cn^{2}\right\rceil \right) =\varphi_{r}\left( c\right) n^{r}+o\left( n^{r}\right) $ for $r>3$. Our Theorem \ref{mainTh} proves this equality for $r=4.$
\section{Proof of Theorem \ref{mainTh}}
The following simple lemma will be used in the proof of Theorem \ref{mainTh}.
\begin{lemma} \label{le1}Let $0\leq c\leq a$ and $0\leq d\leq b.$ If $0\leq x\leq\min\left( a,b\right) $ and $0\leq y\leq\min\left( c,d\right) ,$ then \[ \left( a-c\right) \left( b-d\right) +x\left( c+d\right) +y\left( a+b\right) -\left( x+y\right) ^{2}\geq0 \]
\end{lemma}
\begin{proof} Set $P=x\left( c+d\right) +y\left( a+b\right) -\left( x+y\right) ^{2}.$ Since $\left( a-c\right) \left( b-d\right) \geq0,$ we may and shall suppose that $P<0.$ By symmetry, we also suppose that $a\geq b.$ If $x+y\leq b,$ by $c+d\leq a+b$ we have \[ P\geq\left( x+y\right) \left( c+d\right) +y\left( a+b-c-d\right) -\left( x+y\right) ^{2}\geq\left( x+y\right) \left( c+d\right) -\left( x+y\right) ^{2}; \] hence, $P<0$ implies that $b>c+d$ and $P\geq b\left( c+d\right) -b^{2}$. Now the proof is completed by \[ \left( a-c\right) \left( b-d\right) +b\left( c+d\right) -b^{2}=\left( a-b\right) \left( b-d\right) +cd\geq0. \] If $x+y>b,$ by $c+d\leq a+b,$ we have \[ P\geq b\left( c+d\right) +y\left( a+b\right) -\left( b+y\right) ^{2}=b\left( c+d\right) +y\left( a-b\right) -b^{2}-y^{2}; \] hence, $P<0$ implies that $\min\left( c,d\right) >a-b$ and \[ P\geq b\left( c+d\right) +\min\left( c,d\right) \left( a-b\right) -b^{2}-\left( \min\left( c,d\right) \right) ^{2}. \] If $d\geq c,$ we get \begin{align*} \left( a-c\right) \left( b-d\right) +P & \geq\left( a-c\right) \left( b-d\right) -b\left( b-d\right) +c\left( a-c\right) \\ & \geq\left( a-c\right) \left( b-d\right) -b\left( b-d\right) +c\left( b-d\right) =\left( a-b\right) \left( b-d\right) \geq0. \end{align*} If $c\geq d,$ we get \begin{align*} \left( a-c\right) \left( b-d\right) +P & \geq\left( a-c\right) \left( b-d\right) +b\left( c+d\right) +d\left( a-b\right) -b^{2}-d^{2}\\ & =b\left( a-b\right) +d\left( c-d\right) \geq0, \end{align*} completing the proof of Lemma \ref{le1}. \end{proof}
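The inequality of Lemma \ref{le1} can also be stress-tested numerically over randomly drawn admissible parameters; a quick randomized check (illustrative only, not part of the proof):

```python
import random

def lemma_expr(a, b, c, d, x, y):
    # Left-hand side of the inequality of Lemma le1.
    return (a - c) * (b - d) + x * (c + d) + y * (a + b) - (x + y) ** 2

random.seed(1)
for _ in range(100_000):
    a, b = random.random(), random.random()
    c = random.uniform(0, a)            # 0 <= c <= a
    d = random.uniform(0, b)            # 0 <= d <= b
    x = random.uniform(0, min(a, b))    # 0 <= x <= min(a, b)
    y = random.uniform(0, min(c, d))    # 0 <= y <= min(c, d)
    assert lemma_expr(a, b, c, d, x, y) >= -1e-12
```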
Next we show that $\varphi_{r}\left( n,c\right) $ increases in $c$ whenever $\varphi_{r}\left( n,c\right) >0.$
\begin{proposition} \label{pro1}Let $c\in\left( 0,1/2\right) $ and $3\leq r\leq\xi\left( c\right) \leq n.$ If $\varphi_{r}\left( n,c\right) >0$ and $0<c_{0}<c,$ then $\varphi_{r}\left( n,c\right) >\varphi_{r}\left( n,c_{0}\right) .$ \end{proposition}
\begin{proof} Suppose that \[ \xi\left( c\right) \leq k\leq n,\text{\ }\left( A,\mathbf{x}\right) \in\mathcal{S}_{k}\left( c\right) ,\text{ and \ }\varphi_{r}\left( n,c\right) =L_{r}\left( A,\mathbf{x}\right) . \] Setting $\alpha=c_{0}/c,$ we see that $\alpha A\in\mathcal{A}\left( k\right) $ and \[ L_{2}\left( \alpha A,\mathbf{x}\right) =\alpha L_{2}\left( A,\mathbf{x}\right) =c_{0}; \] thus $\left( \alpha A,\mathbf{x}\right) \in\mathcal{S}_{k}\left( c_{0}\right) .$ Hence we obtain \[ \varphi_{r}\left( n,c\right) =L_{r}\left( A,\mathbf{x}\right) =\alpha^{-\binom{r}{2}}L_{r}\left( \alpha A,\mathbf{x}\right) >L_{r}\left( \alpha A,\mathbf{x}\right) \geq\varphi_{r}\left( n,c_{0}\right) , \] completing the proof of Proposition \ref{pro1}. \end{proof}
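The scaling identity $L_{r}\left( \alpha A,\mathbf{x}\right) =\alpha^{\binom{r}{2}}L_{r}\left( A,\mathbf{x}\right) $ used above holds because every summand of $L_{r}$ contains exactly $\binom{r}{2}$ entries of $A$. A small numerical confirmation (the helper `L_weighted` is ours, not from the paper):

```python
import random
from itertools import combinations

def L_weighted(r, A, x):
    # L_r(A, x) = sum over r-subsets S of (prod_{i<j in S} A[i][j]) * (prod_{i in S} x[i]).
    n = len(x)
    total = 0.0
    for S in combinations(range(n), r):
        w = 1.0
        for i, j in combinations(S, 2):
            w *= A[i][j]
        for i in S:
            w *= x[i]
        total += w
    return total

random.seed(0)
n, r, alpha = 6, 4, 0.5
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.random()
x = [1.0 / n] * n
aA = [[alpha * e for e in row] for row in A]
# Each 4-clique term contains binom(4,2) = 6 matrix entries,
# so scaling A by alpha scales L_4 by alpha**6.
assert abs(L_weighted(r, aA, x) - alpha**6 * L_weighted(r, A, x)) < 1e-12
```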
\subsection*{Proof of Theorem \ref{mainTh}}
Let us first define a set of $n$-vectors $\mathcal{X}\left( n\right) $ by \[ \mathcal{X}\left( n\right) =\left\{ \left( x_{1},\ldots,x_{n}\right) :x_{1}+\cdots+x_{n}=1\text{ and }x_{i}\geq0,\text{ }1\leq i\leq n\right\} . \] Note that the condition $\mathbf{x}\in\mathcal{X}\left( n\right) $ is equivalent to $\mathbf{x}\geq0$ and $L_{1}\left( A,\mathbf{x}\right) =1.$
Assume for a contradiction that the theorem fails: let \begin{equation} c\in\left( 0,1/2\right) ,\text{ }3\leq r\leq\xi\left( c\right) \leq n,\text{\ }A=\left( a_{ij}\right) ,\text{\ }\mathbf{x}=\left( x_{1} ,\ldots,x_{n}\right) ,\text{\ and\ }\left( A,\mathbf{x}\right) \in\mathcal{S}_{n}\left( c\right) \label{eq6} \end{equation} be such that \begin{equation} \varphi_{r}\left( n,c\right) =L_{r}\left( A,\mathbf{x}\right) <\varphi _{r}\left( c\right) . \label{eq2} \end{equation} Assume that $n$ is the minimum integer with this property for all $c\in\left( 0,1/2\right) ,$ and that, among all pairs $\left( A,\mathbf{x}\right) \in\mathcal{S}_{n}\left( c\right) ,$ $A$ has the maximum number of zero entries. Hereafter we shall refer to this assumption as the \textquotedblleft main assumption\textquotedblright. The most important consequence of the main assumption is the following
\begin{claim} \label{cl0}If $\left( A,\mathbf{y}\right) \in\mathcal{S}_{n}\left( c\right) $ and $\varphi_{r}\left( n,c\right) =L_{r}\left( A,\mathbf{y}\right) ,$ then $\mathbf{y}$ has no zero entries. $\square$ \end{claim}
Next we introduce some notation and conventions to simplify the presentation. For short, for all indices $i,j$ and $i_{1},\ldots,i_{s}\in\left[ n\right] ,$ set \[ C_{i}=\frac{\partial L_{2}\left( A,\mathbf{x}\right) }{\partial x_{i}},\text{ \ \ }C_{ij}=\frac{\partial^{2}L_{2}\left( A,\mathbf{x}\right) }{\partial x_{i}\partial x_{j}},\text{ \ \ }D_{i_{1}\ldots i_{s}}=\frac{\partial^{s}L_{r}\left( A,\mathbf{x}\right) }{\partial x_{i_{1}}\cdots\partial x_{i_{s}}}, \] and note that \begin{equation} C_{ij}=a_{ij},\text{ \ \ and \ \ }\frac{\partial L_{r}\left( A,\mathbf{x}\right) }{\partial a_{ij}}a_{ij}=D_{ij}x_{i}x_{j}. \label{deq} \end{equation}
Letting $\mathbf{y}=\left( x_{1}+\Delta_{1},\ldots,x_{n}+\Delta_{n}\right) ,$ Taylor's formula gives \begin{equation} L_{2}\left( A,\mathbf{y}\right) -L_{2}\left( A,\mathbf{x}\right) =
{\displaystyle\sum\limits_{i=1}^{n}}
C_{i}\Delta_{i}+\sum\limits_{1\leq i<j\leq n}C_{ij}\Delta_{i}\Delta_{j} \label{Tay2} \end{equation} and \begin{equation} L_{r}\left( A,\mathbf{y}\right) -L_{r}\left( A,\mathbf{x}\right) =
{\displaystyle\sum\limits_{s=1}^{r}}
\text{ }
{\displaystyle\sum\limits_{1\leq i_{1}<\cdots<i_{s}\leq n}}
D_{i_{1}\ldots i_{s}}\Delta_{i_{1}}\cdots\text{ }\Delta_{i_{s}}. \label{Tayr} \end{equation}
We shall make extensive use of Lagrange multipliers. Since $\mathbf{x}>0,$ by Lagrange's method, there exist $\lambda$ and $\mu$ such that \begin{equation} D_{i}=\lambda C_{i}+\mu\label{lag1} \end{equation} for all $i\in\left[ n\right] $. Likewise, if $0<a_{ij}<1,$ we have \[ \frac{\partial L_{r}\left( A,\mathbf{x}\right) }{\partial a_{ij}}=\lambda\frac{\partial L_{2}\left( A,\mathbf{x}\right) }{\partial a_{ij}}=\lambda x_{i}x_{j}, \] and so, in view of (\ref{deq}), \begin{equation} D_{ij}=\lambda a_{ij}\ \ \text{whenever \ }0<a_{ij}<1. \label{lag2} \end{equation}
The rest of the proof is presented in a sequence of formal claims. First we show that $\varphi_{r}\left( n,c\right) $ is attained on a $\left( 0,1\right) $-matrix $A$.
\begin{claim} \label{cl00}Let $\left( A,\mathbf{x}\right) \in\mathcal{S}_{n}\left( c\right) $ satisfy (\ref{eq6}) and (\ref{eq2}), and suppose that $A$ has the smallest number of entries $a_{ij}$ such that $0<a_{ij}<1$. Then $A$ is a $\left( 0,1\right) $-matrix. \end{claim}
\begin{proof} Assume for a contradiction that $i,j\in\left[ n\right] $ and $0<a_{ij}<1.$ By symmetry we suppose that $C_{i}\geq C_{j}.$ Let \begin{equation} f\left( \alpha\right) =\frac{a_{ij}\alpha^{2}-\left( C_{i}-C_{j}\right) \alpha}{\left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) },\label{eq} \end{equation} and suppose that $\alpha$ satisfies \begin{equation} 0<\alpha<x_{j}\text{ and \ }0\leq a_{ij}+f\left( \alpha\right) \leq1.\label{cond0} \end{equation}
Let $\mathbf{y}_{\alpha}=\left( x_{1}+\Delta_{1},\ldots,x_{n}+\Delta _{n}\right) ,$ where \begin{equation} \Delta_{i}=\alpha,\text{ \ \ }\Delta_{j}=-\alpha,\text{ \ \ and \ \ } \Delta_{l}=0\text{\ for }l\in\left[ n\right] \backslash\left\{ i,j\right\} , \label{defy} \end{equation} and define the $n\times n$ matrix $B_{\alpha}=\left( b_{ij}\right) $ by \begin{equation} b_{ij}=b_{ji}=a_{ij}+f\left( \alpha\right) \text{ \ \ and \ \ }b_{pq} =a_{pq}\text{ for }\left\{ p,q\right\} \neq\left\{ i,j\right\} . \label{defB} \end{equation}
Note that $B_{\alpha}\in\mathcal{A}\left( n\right) ,$ $\mathbf{y}_{\alpha }\in\mathcal{X}\left( n\right) ,$ and \[ L_{2}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{2}\left( A,\mathbf{y}_{\alpha}\right) =f\left( \alpha\right) \frac{\partial L_{2}\left( A,\mathbf{y}_{\alpha}\right) }{\partial a_{ij}}=f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) . \] Hence, Taylor's expansion (\ref{Tay2}) and equation (\ref{eq}) give \begin{align*} L_{2}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{2}\left( A,\mathbf{x}\right) & =L_{2}\left( A,\mathbf{y}_{\alpha}\right) -L_{2}\left( A,\mathbf{x}\right) +f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) \\ & =\left( C_{i}-C_{j}\right) \alpha-a_{ij}\alpha^{2}+f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) =0; \end{align*} thus $\left( B_{\alpha},\mathbf{y}_{\alpha}\right) \in\mathcal{S}_{n}\left( c\right) .$
Note also that, in view of (\ref{deq}), \[ L_{r}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{r}\left( A,\mathbf{y}_{\alpha}\right) =\frac{\partial L_{r}\left( A,\mathbf{y} _{\alpha}\right) }{\partial a_{ij}}f\left( \alpha\right) =f\left( \alpha\right) y_{i}y_{j}\frac{D_{ij}}{a_{ij}}=f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) \frac{D_{ij}}{a_{ij}}. \] Hence Taylor's expansion (\ref{Tayr}), Lagrange's conditions (\ref{lag1}) and (\ref{lag2}), and equation (\ref{eq}) give \begin{align*} L_{r}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{r}\left( A,\mathbf{x}\right) & =L_{r}\left( A,\mathbf{y}_{\alpha}\right) -L_{r}\left( A,\mathbf{x}\right) +f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) \frac{D_{ij}}{a_{ij}}\\ & =\left( D_{i}-D_{j}\right) \alpha-D_{ij}\alpha^{2}+f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) \frac{D_{ij}}{a_{ij}}\\ & =\lambda\left( C_{i}-C_{j}\right) \alpha-D_{ij}\alpha^{2}+f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) \frac{D_{ij}}{a_{ij}}\\ & =\frac{D_{ij}}{a_{ij}}\left( C_{i}-C_{j}\right) \alpha-D_{ij}\alpha ^{2}+f\left( \alpha\right) \left( x_{i}+\alpha\right) \left( x_{j} -\alpha\right) \frac{D_{ij}}{a_{ij}}\\ & =\frac{D_{ij}}{a_{ij}}\left( \left( C_{i}-C_{j}\right) \alpha -a_{ij}\alpha^{2}+a_{ij}\alpha^{2}-\left( C_{i}-C_{j}\right) \alpha\right) =0. \end{align*}
If there exists $\alpha\in\left( 0,x_{j}\right) $ such that $a_{ij}+f\left( \alpha\right) =0$ or $a_{ij}+f\left( \alpha\right) =1,$ we see that the matrix $B_{\alpha}$ has fewer entries belonging to $\left( 0,1\right) $ than $A$, contradicting the hypothesis and completing the proof. Assume therefore that $0<a_{ij}+f\left( \alpha\right) <1$ for all $\alpha\in\left( 0,x_{j}\right) .$ This condition implies that \[ a_{ij}x_{j}=C_{i}-C_{j}, \] for otherwise $\lim_{\alpha\rightarrow x_{j}}\left\vert f\left( \alpha\right) \right\vert =\infty,$ and so either $a_{ij}+f\left( \alpha\right) =0$ or $a_{ij}+f\left( \alpha\right) =1$ for some $\alpha\in\left( 0,x_{j}\right) $.
Now, extending $f\left( \alpha\right) $ continuously for $\alpha=x_{j}$ by \[ f\left( x_{j}\right) =\lim_{\alpha\rightarrow x_{j}}f\left( \alpha\right) =\lim_{\alpha\rightarrow x_{j}}\frac{a_{ij}\alpha\left( \alpha-x_{j}\right) }{\left( x_{i}+\alpha\right) \left( x_{j}-\alpha\right) }=-\frac{a_{ij}x_{j}}{x_{i}+x_{j}}, \] and defining $\mathbf{y}_{x_{j}}$ by (\ref{defy}) and $B_{x_{j}}$ by (\ref{defB}), we obtain \[ L_{r}\left( B_{x_{j}},\mathbf{y}_{x_{j}}\right) -\varphi_{r}\left( n,c\right) =L_{r}\left( B_{x_{j}},\mathbf{y}_{x_{j}}\right) -L_{r}\left( A,\mathbf{x}\right) =0, \] contradicting Claim \ref{cl0} since the $j$th entry of $\mathbf{y}_{x_{j}}$ is zero. This completes the proof of Claim \ref{cl00}. \end{proof}
Since $A$ is a $\left( 0,1\right) $-matrix with a zero main diagonal, it is the adjacency matrix of some graph $G$ with vertex set $\left[ n\right] .$ Write $E\left( G\right) $ for the edge set of $G$ and let us restate the functions $L_{r}\left( A,\mathbf{x}\right) $ in terms of $G$. We have \[ L_{2}\left( A,\mathbf{x}\right) =
{\displaystyle\sum_{ij\in E\left( G\right) }}
x_{i}x_{j} \] and more generally, \[ L_{r}\left( A,\mathbf{x}\right) =
{\displaystyle\sum}
\left\{ x_{i_{1}}\cdots\text{ }x_{i_{r}}:\text{ the set }\left\{ i_{1},\ldots,i_{r}\right\} \text{ induces an }r\text{-clique in }G\right\} . \]
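In this graph form, $L_{r}$ is just a weighted clique count, and for the complete graph $K_{s}$ with uniform weights $x_{i}=1/s$ one gets $L_{2}=\left( s-1\right) /2s$ and $L_{r}=\binom{s}{r}s^{-r},$ the values at which the bound $\varphi_{r}$ is attained. A minimal Python sketch (the function name `L` is ours, not from the paper):

```python
from itertools import combinations

def L(r, adj, x):
    # L_r(A, x) = sum of x_{i_1}*...*x_{i_r} over r-cliques of the graph
    # with (0,1) adjacency matrix adj.
    n = len(x)
    total = 0.0
    for S in combinations(range(n), r):
        if all(adj[i][j] for i, j in combinations(S, 2)):
            prod = 1.0
            for i in S:
                prod *= x[i]
            total += prod
    return total

# K_5 with uniform weights: L_2 = 4/10 = 0.4 and L_3 = C(5,3)/5^3 = 0.08.
adj = [[int(i != j) for j in range(5)] for i in range(5)]
x = [0.2] * 5
assert abs(L(2, adj, x) - 0.4) < 1e-12
assert abs(L(3, adj, x) - 0.08) < 1e-12
```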
To finish the proof of Theorem \ref{mainTh} we show that $G$ is a complete graph and $L_{r}\left( A,\mathbf{x}\right) =\varphi_{r}\left( c\right) .$
\subsection*{Proof that $G$ is a complete graph}
For convenience we first outline this part of the proof. Write $\overline{G}$ for the complement of $G$ and $E\left( \overline{G}\right) $ for the edge set of $\overline{G}.$ We assume that $G$ is not complete and reach a contradiction by the following major steps:
- if $ij\in E\left( \overline{G}\right) ,$ then $C_{i}\neq C_{j}$ (Claim \ref{cl1});
- if $ij\in E\left( G\right) ,$ then $D_{ij}<\lambda$ (Claim \ref{cl2});
- $\overline{G}$ is triangle-free (Claim \ref{cl4});
- $\overline{G}$ is bipartite (Claims \ref{cl5} and \ref{cl6});
- $G$ contains induced $4$-cycles (Claim \ref{cl7});
- $G$ contains no induced $4$-cycles (Claim \ref{cl7.1}).
Now the details.
\begin{claim} \label{cl1}If $ij\in E\left( \overline{G}\right) ,$ then $C_{i}\neq C_{j}$. \end{claim}
\begin{proof} Assume that $ij\in E\left( \overline{G}\right) $ and $C_{i}=C_{j}.$ Let $\mathbf{y}=\left( x_{1}+\Delta_{1},\ldots,x_{n}+\Delta_{n}\right) ,$ where \[ \Delta_{i}=-x_{i},\text{ \ \ }\Delta_{j}=x_{i},\text{ \ \ and \ \ }\Delta _{l}=0\text{\ for }l\in\left[ n\right] \backslash\left\{ i,j\right\} . \] Clearly, $\mathbf{y}\in\mathcal{X}\left( n\right) ;$ Taylor's expansion (\ref{Tay2}) gives \[ L_{2}\left( A,\mathbf{y}\right) -L_{2}\left( A,\mathbf{x}\right) =C_{j}x_{i}-C_{i}x_{i}=0; \] thus, $\left( A,\mathbf{y}\right) \in\mathcal{S}_{n}\left( c\right) .$ Taylor's expansion (\ref{Tayr}) and Lagrange's condition (\ref{lag1}) give\ \[ L_{r}\left( A,\mathbf{y}\right) -L_{r}\left( A,\mathbf{x}\right) =D_{j}x_{i}-D_{i}x_{i}=\mu\left( x_{i}-x_{i}\right) +\lambda\left( C_{j}-C_{i}\right) x_{i}=0, \] contradicting Claim \ref{cl0} as the $i$th entry of $\mathbf{y}$ is zero. The proof of Claim \ref{cl1} is completed. \end{proof}
\begin{claim} \label{cl2}If $ij\in E\left( G\right) ,$ then $D_{ij}<\lambda.$ \end{claim}
\begin{proof} Assume that $ij\in E\left( G\right) $ and $D_{ij}\geq\lambda.$ Select $pq\in E\left( \overline{G}\right) ;$ by Claim \ref{cl1} suppose that $C_{p}>C_{q}$. For every $\alpha\in\left( 0,x_{q}\right) ,$ let $\mathbf{y}_{\alpha}=\left( y_{1},\ldots,y_{n}\right) ,$ where \[ y_{p}=x_{p}+\alpha,\text{ \ \ }y_{q}=x_{q}-\alpha,\text{ \ \ and \ \ }y_{l}=x_{l}\text{ for all }l\in\left[ n\right] \backslash\left\{ p,q\right\} . \] Let \begin{equation} f\left( \alpha\right) =\frac{\left( C_{q}-C_{p}\right) \alpha}{y_{i}y_{j}} \label{eq3} \end{equation} and define the $n\times n$ matrix $B_{\alpha}=\left( b_{rs}\right) $ by \[ b_{ij}=b_{ji}=1+f\left( \alpha\right) ,\text{ \ \ and \ \ }b_{rs}=a_{rs}\text{ for }\left\{ r,s\right\} \neq\left\{ i,j\right\} . \]
For $\alpha$ sufficiently small, $-1<f\left( \alpha\right) <0,$ and so $B_{\alpha}\in\mathcal{A}\left( n\right) $ and $\mathbf{y}_{\alpha} \in\mathcal{X}\left( n\right) .$ Taylor's expansion (\ref{Tay2}) and equation (\ref{eq3}) give \begin{align*} L_{2}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{2}\left( A,\mathbf{x}\right) & =L_{2}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{2}\left( A,\mathbf{y}_{\alpha}\right) +L_{2}\left( A,\mathbf{y} _{\alpha}\right) -L_{2}\left( A,\mathbf{x}\right) \\ & =f\left( \alpha\right) y_{i}y_{j}+\alpha\left( C_{p}-C_{q}\right) =0; \end{align*} thus, $\left( B_{\alpha},\mathbf{y}_{\alpha}\right) \in\mathcal{S} _{n}\left( c\right) .$
Taylor's expansion (\ref{Tayr}), Lagrange's condition (\ref{lag1}), and equation (\ref{eq3}) give \begin{align*} L_{r}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{r}\left( A,\mathbf{x}\right) & =L_{r}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) -L_{r}\left( A,\mathbf{y}_{\alpha}\right) +L_{r}\left( A,\mathbf{y} _{\alpha}\right) -L_{r}\left( A,\mathbf{x}\right) \\ & =D_{p}\alpha-D_{q}\alpha+D_{ij}f\left( \alpha\right) y_{i}y_{j} =\lambda\left( C_{p}-C_{q}\right) \alpha-D_{ij}\left( C_{p}-C_{q}\right) \alpha\\ & =\alpha\left( C_{p}-C_{q}\right) \left( \lambda-D_{ij}\right) . \end{align*} Since $L_{r}\left( B_{\alpha},\mathbf{y}_{\alpha}\right) \geq L_{r}\left( A,\mathbf{x}\right) ,$ $\alpha\left( C_{p}-C_{q}\right) >0,$ and $D_{ij}\geq\lambda,$ we see that $L_{r}\left( B_{\alpha},\mathbf{y}_{\alpha }\right) =L_{r}\left( A,\mathbf{x}\right) .$
If there exists $\alpha\in\left( 0,x_{q}\right) $ such that $a_{ij}+f\left( \alpha\right) =0,$ then the $\left( 0,1\right) $-matrix $B_{\alpha}$ has more zero entries than $A,$ contradicting the main assumption. On the other hand, if $a_{ij}+f\left( \alpha\right) >0$ for all $\alpha\in\left( 0,x_{q}\right) ,$ then $q\notin\left\{ i,j\right\} $ and the definitions of $f\left( \alpha\right) ,$ $B_{\alpha},$ and $\mathbf{y}_{\alpha}$ make sense for $\alpha=x_{q}$ as well. Letting $\alpha=x_{q},$ we obtain $y_{q}=0,$ contradicting Claim \ref{cl0} and completing the proof of Claim \ref{cl2}. \end{proof}
\begin{claim} \label{cl4}The graph $\overline{G}$ is triangle-free. \end{claim}
\begin{proof} Assume the assertion false and let $i,j,k\in\left[ n\right] $ be such that $ij,ik,jk\in E\left( \overline{G}\right) .$ Let the line given by \begin{equation} \left( C_{i}-C_{k}\right) x+\left( C_{j}-C_{k}\right) y=0 \label{eq4} \end{equation} intersect the triangle formed by the lines $x=-x_{i},$ $y=-x_{j},$ $x+y=x_{k}$ at some point $\left( \alpha,\beta\right) .$ Let $\mathbf{y}=\left( x_{1}+\Delta_{1},\ldots,x_{n}+\Delta_{n}\right) ,$ where \[ \Delta_{i}=\alpha,\text{ \ \ }\Delta_{j}=\beta,\text{ \ \ }\Delta_{k}=-\alpha-\beta,\text{ \ \ and \ \ }\Delta_{l}=0\text{ for }l\in\left[ n\right] \backslash\left\{ i,j,k\right\} . \] Clearly, $\mathbf{y}\in\mathcal{X}\left( n\right) ;$ Taylor's expansion (\ref{Tay2}) and equation (\ref{eq4}) give \[ L_{2}\left( A,\mathbf{y}\right) -L_{2}\left( A,\mathbf{x}\right) =C_{i}\alpha+C_{j}\beta-C_{k}\left( \alpha+\beta\right) =0; \] thus $\left( A,\mathbf{y}\right) \in\mathcal{S}_{n}\left( c\right) .$ Taylor's expansion (\ref{Tayr}), Lagrange's condition (\ref{lag1}), and equation (\ref{eq4}) give \begin{align*} L_{r}\left( A,\mathbf{y}\right) -L_{r}\left( A,\mathbf{x}\right) & =D_{i}\alpha+D_{j}\beta-D_{k}\left( \alpha+\beta\right) \\ & =\mu\left( \alpha+\beta-\alpha-\beta\right) +\lambda\left( \left( C_{i}-C_{k}\right) \alpha+\left( C_{j}-C_{k}\right) \beta\right) =0, \end{align*} contradicting Claim \ref{cl0} as $\mathbf{y}$ has a zero entry. The proof of Claim \ref{cl4} is completed. \end{proof}
Using the following claim, we shall prove that $\overline{G}$ is a specific bipartite graph.
\begin{claim} \label{cl5}Let the vertices $i,j,k$ satisfy $ij\in E\left( G\right) ,$ $ik\in E\left( \overline{G}\right) ,$ $jk\in E\left( \overline{G}\right) .$ Then \[ \left( C_{i}-C_{k}\right) \left( C_{j}-C_{k}\right) >0. \]
\end{claim}
\begin{proof} Note first that by Claim \ref{cl1} we have $C_{i}\neq C_{k}$ and $C_{j}\neq C_{k}.$ Consider the hyperbola defined by \begin{equation} \left( C_{i}-C_{k}\right) x+\left( C_{j}-C_{k}\right) y+xy=0, \label{hyp} \end{equation} and write $H$ for its branch containing the origin. Obviously $\left( C_{i}-C_{k}\right) \left( C_{j}-C_{k}\right) <0$ implies that $\alpha \beta>0$ for all $\left( \alpha,\beta\right) \in H$.
Suppose $\left( \alpha,\beta\right) \in H$ is sufficiently close to the origin and let $\mathbf{y}=\left( x_{1}+\Delta_{1},\ldots,x_{n}+\Delta_{n}\right) ,$ where \[ \Delta_{i}=\alpha,\text{ \ \ }\Delta_{j}=\beta,\text{ \ \ }\Delta_{k}=-\alpha-\beta,\text{ \ \ and \ \ }\Delta_{l}=0\text{ for }l\in\left[ n\right] \backslash\left\{ i,j,k\right\} . \] Clearly, $\mathbf{y}\in\mathcal{X}\left( n\right) ;$ Taylor's expansion (\ref{Tay2}) and equation (\ref{hyp}) give \[ L_{2}\left( A,\mathbf{y}\right) -L_{2}\left( A,\mathbf{x}\right) =C_{i}\alpha+C_{j}\beta-C_{k}\left( \alpha+\beta\right) +\alpha\beta=0; \] thus $\left( A,\mathbf{y}\right) \in\mathcal{S}_{n}\left( c\right) .$ Taylor's expansion (\ref{Tayr}), Lagrange's condition (\ref{lag1}), and equation (\ref{hyp}) give \begin{align*} L_{r}\left( A,\mathbf{y}\right) -L_{r}\left( A,\mathbf{x}\right) & =D_{i}\alpha+D_{j}\beta-D_{k}\left( \alpha+\beta\right) +D_{ij}\alpha\beta\\ & =\lambda\left( C_{i}\alpha+C_{j}\beta-C_{k}\left( \alpha+\beta\right) \right) +D_{ij}\alpha\beta=\left( D_{ij}-\lambda\right) \alpha\beta. \end{align*} Since $D_{ij}<\lambda$ and $L_{r}\left( A,\mathbf{y}\right) \geq L_{r}\left( A,\mathbf{x}\right) ,$ we see that $\alpha\beta<0.$ Thus, $\left( C_{i}-C_{k}\right) \left( C_{j}-C_{k}\right) >0,$ completing the proof of Claim \ref{cl5}. \end{proof}
\begin{claim} \label{cl6} $\overline{G}$ is a bipartite graph and its vertex classes $U^{+}$ and $U^{-}$ can be selected so that $C_{u}>C_{w}$ for all $u\in U^{+}$ and $w\in U^{-}$ such that $uw\in E\left( \overline{G}\right) .$ \end{claim}
\begin{proof} Since $C_{i}\neq C_{j}$ for every $ij\in E\left( \overline{G}\right) ,$ if $\overline{G}$ has an odd cycle, there exist three consecutive vertices $i,k,j$ along the cycle such that $\left( C_{i}-C_{k}\right) \left( C_{j}-C_{k}\right) <0.$ Since $\overline{G}$ is triangle-free, $ij\in E\left( G\right) ;$ hence the existence of the vertices $i,j,k$ contradicts Claim \ref{cl5}. Thus, $\overline{G}$ is bipartite.
Claim \ref{cl5} implies that for every $u\in\left[ n\right] ,$ the value $C_{u}-C_{v}$ has the same sign for every $v$ such that $uv\in E\left( \overline{G}\right) .$ Let $U^{+}$ be the set of vertices for which this sign is positive, and let $U^{-}=\left[ n\right] \backslash U^{+}.$ Clearly, for every $uv\in E\left( \overline{G}\right) ,$ if $u\in U^{+},$ then $v\in U^{-},$ and if $u\in U^{-},$ then $v\in U^{+}$. Hence, $U^{+}$ and $U^{-}$ properly partition the vertices of $\overline{G},$ completing the proof of Claim \ref{cl6}. \end{proof}
Hereafter we suppose that the vertex classes $U^{+}$ and $U^{-}$ of $\overline{G}$ are selected to satisfy the condition of Claim \ref{cl6}. Note that $U^{+}$ and $U^{-}$ induce complete graphs in $G.$
\begin{claim} \label{cl7} $G$ contains an induced $4$-cycle. \end{claim}
\begin{proof} Assume the assertion false. For every vertex $u,$ write $N\left( u\right) $ for the set of its neighbors in the vertex class opposite to its own class.
If there exist $u,v\in U^{+}$ such that $\ N\left( u\right) \backslash N\left( v\right) \neq\varnothing$ and $N\left( v\right) \backslash N\left( u\right) \neq\varnothing,$ taking $x\in N\left( u\right) \backslash N\left( v\right) $ and $y\in N\left( v\right) \backslash N\left( u\right) ,$ we see that $\left\{ x,y,u,v\right\} $ induces a $4$-cycle in $G;$ thus we will assume that $N\left( u\right) \subset N\left( v\right) $ or $N\left( v\right) \subset N\left( u\right) $ for every $u,v\in U^{+}.$ This condition implies that there is a vertex $u_{1}\in U^{+}$ such that $N\left( v\right) \subset N\left( u_{1}\right) $ for every $v\in U^{+}.$ By symmetry, there is a vertex $u_{2}\in U^{-}$ such that $N\left( v\right) \subset N\left( u_{2}\right) $ for every $v\in U^{-}.$
If $N\left( u_{1}\right) \neq U^{-}$ and $N\left( u_{2}\right) \neq U^{+},$ take $x\in U^{-}\backslash N\left( u_{1}\right) $ and $y\in U^{+}\backslash N\left( u_{2}\right) ,$ and note that $N\left( x\right) =\varnothing$ and $N\left( y\right) =\varnothing.$ Hence, adding the edge $xy$ to $E\left( G\right) ,$ we see that $L_{r}\left( A,\mathbf{x}\right) $ remains the same, while $L_{2}\left( A,\mathbf{x}\right) $ increases, contradicting that $\varphi_{r}\left( n,c\right) $ is increasing in $c$ (Proposition \ref{pro1}). Thus, either $N\left( u_{1}\right) =U^{-}$ or $N\left( u_{2}\right) =U^{+},$ so one of the vertices $u_{1}$ or $u_{2}$ is connected to every vertex other than itself.
By symmetry, suppose that the vertex $n$ is connected to every vertex of $G$ other than itself. Set $\mathbf{y}=\left( x_{1},\ldots,x_{n-1}\right) $ and let $B$ be the principal submatrix of $A$ in the first $n-1$ rows and columns. Since \begin{align} x_{1}+\cdots+x_{n-1} & =1-x_{n},\label{con1}\\ L_{2}\left( B,\mathbf{y}\right) & =c-x_{n}\left( 1-x_{n}\right) , \label{con2} \end{align} and \[ L_{r}\left( A,\mathbf{x}\right) =x_{n}L_{r-1}\left( B,\mathbf{y}\right) +L_{r}\left( B,\mathbf{y}\right) , \] we see that $x_{n}L_{r-1}\left( B,\mathbf{y}\right) +L_{r}\left( B,\mathbf{y}\right) $ is minimum subject to (\ref{con1}) and (\ref{con2}). Since $B\in\mathcal{A}\left( n-1\right) $, by the main assumption, both $L_{r-1}\left( B,\mathbf{z}\right) $ and $L_{r}\left( B,\mathbf{z}\right) $ attain a minimum on a complete graph $H$ and for the same vector $\mathbf{z}$. Since $n$ is joined to every vertex of $H,$ the minimum $\varphi_{r}\left( n,c\right) $ is attained on a complete graph too, a contradiction completing the proof of Claim \ref{cl7}. \end{proof}
For convenience, an induced $4$-cycle in $G$ will be denoted by a quadruple $\left( i,j,k,l\right) ,$ where $i,j,k,l$ are the vertices of the cycle, arranged so that $i,j\in U^{+},$ $k,l\in U^{-},$ $ik\notin E\left( G\right) ,$ and $jl\notin E\left( G\right) .$
\begin{claim} \label{cl7.1} If $\left( i,j,k,l\right) $ is an induced $4$-cycle in $G,$ then $D_{ij}+D_{kl}<D_{jk}+D_{li}.$ \end{claim}
\begin{proof} Indeed, let $L$ be the line defined by \begin{equation} \left( C_{i}-C_{k}\right) x+\left( C_{j}-C_{l}\right) y=0.\label{lin} \end{equation} Since $i,j\in U^{+}$ and $k,l\in U^{-},$ we have $C_{i}>C_{k}$ and $C_{j}>C_{l};$ thus $xy<0$ for all $\left( x,y\right) \in L$. Suppose that $\alpha\in\left( 0,x_{k}\right) ,$ $\beta\in\left( -x_{j},0\right) ,$ and $\left( \alpha,\beta\right) \in L.$ Let $\mathbf{y}_{\alpha}=\left( x_{1}+\Delta_{1},\ldots,x_{n}+\Delta_{n}\right) ,$ where \[ \Delta_{i}=\alpha,\text{ \ \ }\Delta_{j}=\beta,\text{ \ \ }\Delta_{k} =-\alpha,\text{ \ \ }\Delta_{l}=-\beta,\text{ \ \ and \ \ }\Delta_{h}=0\text{ for }h\in\left[ n\right] \backslash\left\{ i,j,k,l\right\} . \] Clearly, $\mathbf{y}_{\alpha}\in\mathcal{X}\left( n\right) ;$ Taylor's expansion (\ref{Tay2}) and equation (\ref{lin}) give \[ L_{2}\left( A,\mathbf{y}_{\alpha}\right) -L_{2}\left( A,\mathbf{x}\right) =\left( C_{i}-C_{k}\right) \alpha+\left( C_{j}-C_{l}\right) \beta +\alpha\beta-\alpha\beta+\alpha\beta-\alpha\beta=0; \] thus $\left( A,\mathbf{y}_{\alpha}\right) \in\mathcal{S}_{n}\left( c\right) .$ Taylor's expansion (\ref{Tayr}), Lagrange's condition (\ref{lag1}), and equation (\ref{lin}) give \begin{align*} L_{r}\left( A,\mathbf{y}_{\alpha}\right) -L_{r}\left( A,\mathbf{x}\right) & =D_{i}\alpha+D_{j}\beta-D_{k}\alpha-D_{l}\beta+\left( D_{ij}-D_{jk} +D_{kl}-D_{li}\right) \alpha\beta\\ & =\lambda\left( C_{i}\alpha+C_{j}\beta-C_{k}\alpha-C_{l}\beta\right) +\left( D_{ij}-D_{jk}+D_{kl}-D_{li}\right) \alpha\beta\\ & =\left( D_{ij}-D_{jk}+D_{kl}-D_{li}\right) \alpha\beta. 
\end{align*} Since $L_{r}\left( A,\mathbf{y}_{\alpha}\right) \geq L_{r}\left( A,\mathbf{x}\right) $ and $\alpha\beta<0,$ we find that $D_{ij}+D_{kl}\leq D_{jk}+D_{li}.$ If $D_{ij}+D_{kl}=D_{jk}+D_{li}$, setting \[ \alpha=\min\left\{ x_{k},\frac{C_{j}-C_{l}}{C_{i}-C_{k}}x_{j}\right\} , \] we see that $L_{r}\left( A,\mathbf{y}_{\alpha}\right) =L_{r}\left( A,\mathbf{x}\right) $ and either the $k$th or the $j$th entry of $\mathbf{y}_{\alpha}$ is zero, contradicting Claim \ref{cl0}. Hence, $D_{ij}+D_{kl}<D_{jk}+D_{li},$ completing the proof of Claim \ref{cl7.1}. \end{proof}
Select an induced $4$-cycle $\left( i,j,k,l\right) $ and let us investigate $D_{ij},D_{kl},D_{jk},$ and $D_{li}$ in the light of Claim \ref{cl7.1}. We have \begin{align*} D_{ij} & =\sum\left\{ x_{i_{1}}\cdots x_{i_{r-2}}:\left\{ i,j,i_{1},\ldots,i_{r-2}\right\} \text{ induces an }r\text{-clique}\right\} ,\\ D_{kl} & =\sum\left\{ x_{i_{1}}\cdots x_{i_{r-2}}:\left\{ k,l,i_{1},\ldots,i_{r-2}\right\} \text{ induces an }r\text{-clique}\right\} ,\\ D_{jk} & =\sum\left\{ x_{i_{1}}\cdots x_{i_{r-2}}:\left\{ j,k,i_{1},\ldots,i_{r-2}\right\} \text{ induces an }r\text{-clique}\right\} ,\\ D_{li} & =\sum\left\{ x_{i_{1}}\cdots x_{i_{r-2}}:\left\{ l,i,i_{1},\ldots,i_{r-2}\right\} \text{ induces an }r\text{-clique}\right\} . \end{align*} First note that if a product $x_{i_{1}}\cdots x_{i_{r-2}}$ is present in any of the above sums, then $\left\{ i_{1},\ldots,i_{r-2}\right\} \cap\left\{ i,j,k,l\right\} =\varnothing.$ Also, a product $x_{i_{1}}\cdots x_{i_{r-2}}$ is present in both $D_{ij}$ and $D_{kl}$ exactly when it is present in both $D_{jk}$ and $D_{li}.$ Hence, Claim \ref{cl7.1} implies that there exists a set $\left\{ i_{1},\ldots,i_{r-2}\right\} $ such that either $\left\{ j,k,i_{1},\ldots,i_{r-2}\right\} $ or $\left\{ i,l,i_{1},\ldots,i_{r-2}\right\} $ induces an $r$-clique, but neither $\left\{ i,j,i_{1},\ldots,i_{r-2}\right\} $ nor $\left\{ k,l,i_{1},\ldots,i_{r-2}\right\} $ induces an $r$-clique. This is a contradiction for $r=3,$ as either $\left\{ p,i,j\right\} $ or $\left\{ p,k,l\right\} $ induces a triangle for every vertex $p\notin\left\{ i,j,k,l\right\} .$
Let now $r=4$. We shall reach a contradiction by proving that $D_{ij} +D_{kl}\geq D_{jk}+D_{li}.$ Let $D_{ij}^{\ast}$ be the sum of all products $x_{p}x_{q}$ present in $D_{ij}$ but not present in any of $D_{jk} ,D_{kl},D_{il}.$ Defining the sums $D_{jk}^{\ast},$ $D_{kl}^{\ast},$ and $D_{il}^{\ast}$ likewise, we see that \[ D_{ij}+D_{kl}-D_{jk}-D_{li}=D_{ij}^{\ast}+D_{kl}^{\ast}-D_{jk}^{\ast} -D_{li}^{\ast}, \] so it suffices to prove $D_{ij}^{\ast}+D_{kl}^{\ast}-D_{jk}^{\ast} -D_{li}^{\ast}\geq0.$ To this end, write $\Gamma\left( u\right) $ for the set of neighbors of a vertex $u$ and set \begin{align*} A & =\Gamma\left( i\right) \backslash\Gamma\left( k\right) ,\text{ \ \ }B=\Gamma\left( j\right) \backslash\Gamma\left( l\right) ,\text{ \ \ }X=A\cap B,\\ C & =\Gamma\left( k\right) \backslash\Gamma\left( i\right) ,\text{ \ \ }D=\Gamma\left( l\right) \backslash\Gamma\left( j\right) ,\text{ \ \ }Y=C\cap D,\\ a & =\sum_{p\in A}x_{p},\text{ \ \ }b=\sum_{p\in B}x_{p},\text{ \ \ } c=\sum_{p\in C}x_{p},\text{ \ \ }d=\sum_{p\in D}x_{p},\text{ \ \ }x=\sum_{p\in X}x_{p},\text{ \ \ }y=\sum_{p\in Y}x_{p}. \end{align*}
Observe that $A,B$ and $X$ are subsets of $U^{+}\backslash\left\{ i,j\right\} ,$ while $C,D$ and $Y$ are subsets of $U^{-}\backslash\left\{ k,l\right\} .$ For reader's sake, here is an alternative view on $A,B,C,D,X,$ and $Y$: \begin{align*} A\backslash X & =\Gamma\left( i\right) \cap\Gamma\left( j\right) \cap\Gamma\left( l\right) \backslash\Gamma\left( k\right) ,\text{ \ \ }B\backslash X=\Gamma\left( i\right) \cap\Gamma\left( j\right) \cap\Gamma\left( k\right) \backslash\Gamma\left( l\right) ,\\ C\backslash Y & =\Gamma\left( k\right) \cap\Gamma\left( l\right) \cap\Gamma\left( j\right) \backslash\Gamma\left( i\right) ,\text{ \ \ }D\backslash Y=\Gamma\left( k\right) \cap\Gamma\left( l\right) \cap\Gamma\left( i\right) \backslash\Gamma\left( j\right) ,\\ X & =\Gamma\left( i\right) \cap\Gamma\left( j\right) \backslash\left( \Gamma\left( k\right) \cup\Gamma\left( l\right) \right) ,\text{ \ \ \ }Y=\Gamma\left( k\right) \cap\Gamma\left( l\right) \backslash\left( \Gamma\left( i\right) \cup\Gamma\left( j\right) \right) , \end{align*}
Let the product $x_{p}x_{q}$ be present in $D_{jk}^{\ast};$ thus $\left\{ j,k,p,q\right\} $ induces a $4$-clique, but neither $\left\{ i,j,p,q\right\} $ nor $\left\{ k,l,p,q\right\} $ induces a $4$-clique. Clearly, $p$ and $q$ belong to different vertex classes of $\overline{G},$ say $p\in U^{+}$ and $q\in U^{-}.$ Since $i,j,$ and $k$ are joined to $p,$ we must have $pl\notin E\left( G\right) ,$ and so $p\in B\backslash X;$ likewise we find that $q\in C\backslash Y$. Thus \begin{equation} D_{jk}^{\ast}\leq\sum_{u\in B\backslash X}x_{u}\sum_{u\in C\backslash Y} x_{u}=\left( b-x\right) \left( c-y\right) ,\label{mi1} \end{equation} and by symmetry, \begin{equation} D_{il}^{\ast}\leq\sum_{u\in A\backslash X}x_{u}\sum_{u\in D\backslash Y} x_{u}=\left( a-x\right) \left( d-y\right) .\label{mi2} \end{equation}
For every pair $\left( p,q\right) $ satisfying \[ p\in X,\text{ }q\in B\backslash X,\text{ \ \ or \ \ }p\in A\backslash X,\text{ }q\in X,\text{ \ \ or \ \ }p\in A\backslash X,\text{ }q\in B\backslash X, \] we see that $\left\{ i,j,p,q\right\} $ induces a $4$-clique, but $p$ is not joined to $k$ and $q$ is not joined to $l;$ thus $x_{p}x_{q}$ is present in $D_{ij}^{\ast}.$ Therefore, \begin{equation} D_{ij}^{\ast}\geq\sum_{u\in X}x_{u}\sum_{u\in B\backslash X}x_{u}+\sum_{u\in A\backslash X}x_{u}\sum_{u\in X}x_{u}+\sum_{u\in A\backslash X}x_{u}\sum_{u\in B\backslash X}x_{u}=ab-x^{2},\label{mi3} \end{equation} and by symmetry, \begin{equation} D_{kl}^{\ast}\geq\sum_{u\in Y}x_{u}\sum_{u\in D\backslash Y}x_{u}+\sum_{u\in C\backslash Y}x_{u}\sum_{u\in Y}x_{u}+\sum_{u\in C\backslash Y}x_{u}\sum_{u\in D\backslash Y}x_{u}=cd-y^{2}.\label{mi4} \end{equation} Now adding (\ref{mi3}) and (\ref{mi4}), and subtracting (\ref{mi1}) and (\ref{mi2}), we obtain \begin{align*} D_{ij}^{\ast}+D_{kl}^{\ast}-D_{jk}^{\ast}-D_{li}^{\ast} & \geq ab-x^{2} +cd-y^{2}-\left( b-x\right) \left( c-y\right) -\left( a-x\right) \left( d-y\right) \\ & =\left( a-c\right) \left( b-d\right) +x\left( c+d\right) +y\left( a+b\right) -\left( x+y\right) ^{2}. 
\end{align*} Hence, using $x\leq\min\left( a,b\right) ,$ $y\leq\min\left( c,d\right) ,$ and the inequalities \begin{align*} a-c & =\sum_{u\in\Gamma\left( i\right) \backslash\Gamma\left( k\right) }x_{u}+\sum_{u\in\Gamma\left( i\right) \cap\Gamma\left( k\right) } x_{u}-\sum_{u\in\Gamma\left( k\right) \backslash\Gamma\left( i\right) }x_{u}-\sum_{u\in\Gamma\left( i\right) \cap\Gamma\left( k\right) } x_{u}=C_{i}-C_{k}>0,\\ b-d & =\sum_{u\in\Gamma\left( j\right) \backslash\Gamma\left( l\right) }x_{u}+\sum_{u\in\Gamma\left( j\right) \cap\Gamma\left( l\right) } x_{u}-\sum_{u\in\Gamma\left( l\right) \backslash\Gamma\left( j\right) }x_{u}-\sum_{u\in\Gamma\left( j\right) \cap\Gamma\left( l\right) } x_{u}=C_{j}-C_{l}>0, \end{align*} Lemma \ref{le1} implies that $D_{ij}^{\ast}+D_{kl}^{\ast}-D_{jk}^{\ast} -D_{li}^{\ast}\geq0,$ as required.
This finishes the proof that $G$ is a complete graph for $r=3,4$.
\subsection*{Proof of $L_{r}\left( A,\mathbf{x}\right) =\varphi_{r}\left( c\right) $}
We know now that $G$ is a complete graph. We have to show that $n=\xi\left( c\right) $ and $\left( x_{1},\ldots,x_{n}\right) =\left( x,\ldots ,x,y\right) ,$ where $x$ and $y$ are given by (\ref{sol}). Our proof is based on the following assertion.
\begin{claim} \label{cl8} Let $x_{3}\geq x_{2}\geq x_{1}>0$ be real numbers satisfying \begin{align} x_{1}+x_{2}+x_{3} & =a,\label{cons1}\\ x_{1}x_{2}+x_{2}x_{3}+x_{3}x_{1} & =b, \label{cons1.1} \end{align} and let $x_{1}x_{2}x_{3}$ be minimum subject to (\ref{cons1}) and (\ref{cons1.1}). Then $x_{2}=x_{3}$. \end{claim}
\begin{proof} First note that the hypothesis implies that \begin{equation} a^{2}/4<b\leq a^{2}/3. \label{condb} \end{equation} Indeed, the second of these inequalities follows from Maclaurin's inequality; assume for a contradiction that the first one fails. Then, selecting a sufficiently small $\varepsilon>0$ and setting \[ y_{1}=\varepsilon,\text{ }y_{2}=\frac{a-\varepsilon-\sqrt{\left( a+\varepsilon\right) ^{2}-4\left( b+\varepsilon^{2}\right) }}{2},\text{ }y_{3}=\frac{a-\varepsilon+\sqrt{\left( a+\varepsilon\right) ^{2}-4\left( b+\varepsilon^{2}\right) }}{2}, \] we see that $y_{1}$, $y_{2}$, $y_{3}$ satisfy (\ref{cons1}), (\ref{cons1.1}), and \[ y_{1}y_{2}y_{3}=\varepsilon\left( b-a\varepsilon+\varepsilon^{2}\right) <\varepsilon b. \] Thus, $\min x_{1}x_{2}x_{3}$, subject to (\ref{cons1}) and (\ref{cons1.1}), cannot be attained for positive $x_{1}$, $x_{2}$, $x_{3},$ a contradiction, completing the proof of (\ref{condb}).
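For the reader's convenience, here is the short verification that $y_{1},y_{2},y_{3}$ satisfy (\ref{cons1}) and (\ref{cons1.1}): clearly $y_{1}+y_{2}+y_{3}=\varepsilon+\left( a-\varepsilon\right) =a,$ while
\[
y_{2}y_{3}=\frac{\left( a-\varepsilon\right) ^{2}-\left( a+\varepsilon\right) ^{2}+4\left( b+\varepsilon^{2}\right) }{4}=b-a\varepsilon+\varepsilon^{2},
\]
and so
\[
y_{1}y_{2}+y_{2}y_{3}+y_{3}y_{1}=\varepsilon\left( a-\varepsilon\right) +\left( b-a\varepsilon+\varepsilon^{2}\right) =b.
\]
This also explains the displayed value of $y_{1}y_{2}y_{3}$.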
By Lagrange's method there exist $\eta$ and $\theta$ such that \begin{align*} x_{1}x_{2} & =\eta+\theta\left( x_{1}+x_{2}\right) =\eta+\theta\left( a-x_{3}\right) \\ x_{1}x_{3} & =\eta+\theta\left( x_{1}+x_{3}\right) =\eta+\theta\left( a-x_{2}\right) \\ x_{2}x_{3} & =\eta+\theta\left( x_{2}+x_{3}\right) =\eta+\theta\left( a-x_{1}\right) . \end{align*}
If $\theta=0$ we see that $x_{1}=x_{2}=x_{3},$ completing the proof. Suppose $\theta\neq0$ and assume for a contradiction that $x_{2}<x_{3}.$ We find that \begin{align*} x_{1}\left( x_{3}-x_{2}\right) & =\theta\left( x_{3}-x_{2}\right) ,\\ x_{2}\left( x_{3}-x_{1}\right) & =\theta\left( x_{3}-x_{1}\right) , \end{align*} and so, $x_{1}=x_{2}.$ Solving the system (\ref{cons1},\ref{cons1.1}) with $x_{1}=x_{2},$ we obtain \[ x_{3}=\frac{a}{3}+\frac{2}{3}\sqrt{a^{2}-3b}\text{, \ }x_{1}=x_{2}=\frac{a} {3}-\frac{1}{3}\sqrt{a^{2}-3b}, \] implying that \begin{equation} x_{1}x_{2}x_{3}=\left( \frac{a}{3}+\frac{2}{3}\sqrt{a^{2}-3b}\right) \left( \frac{a}{3}-\frac{1}{3}\sqrt{a^{2}-3b}\right) ^{2}. \label{eq1} \end{equation} If $b=a^{2}/3,$ we see that $x_{1}=x_{2}=x_{3},$ completing the proof, so suppose that $b<a^{2}/3.$ We shall show that $\min x_{1}x_{2}x_{3},$ subject to (\ref{cons1}) and (\ref{cons1.1}), is smaller than the right-hand side of (\ref{eq1}). Indeed, setting \[ y_{1}=\frac{a}{3}-\frac{2}{3}\sqrt{a^{2}-3b},\text{ \ }y_{2}=y_{3}=\frac{a} {3}+\frac{1}{3}\sqrt{a^{2}-3b}, \] in view of (\ref{condb}), we see that $y_{1},y_{2},y_{3}$ satisfy (\ref{cons1}) and (\ref{cons1.1}). After some algebra we obtain \[ y_{1}y_{2}y_{3}-x_{1}x_{2}x_{3}=-\frac{4}{27}\left( a^{2}-3b\right) ^{3/2}<0. \] This contradiction completes the proof of Claim \ref{cl8}. \end{proof}
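For completeness, the algebra behind the last identity in the proof can be carried out by setting $u=a/3$ and $v=\frac{1}{3}\sqrt{a^{2}-3b}$:
\begin{align*}
y_{1}y_{2}y_{3}-x_{1}x_{2}x_{3} & =\left( u-2v\right) \left( u+v\right) ^{2}-\left( u+2v\right) \left( u-v\right) ^{2}\\
& =\left( u^{3}-3uv^{2}-2v^{3}\right) -\left( u^{3}-3uv^{2}+2v^{3}\right) \\
& =-4v^{3}=-\frac{4}{27}\left( a^{2}-3b\right) ^{3/2}.
\end{align*}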
Claim \ref{cl8} implies that, out of every three entries of $\mathbf{x},$ the two largest ones are equal; hence all but the smallest entry of $\mathbf{x}$ are equal. Writing $y$ and $x$ for the smallest and largest entries of $\mathbf{x},$ we see that $x$ and $y$ satisfy \begin{align*} \binom{n-1}{2}x^{2}+\left( n-1\right) xy & =c,\\ \left( n-1\right) x+y & =1,\\ y & \leq x, \end{align*} and so, \[ y=\frac{1}{n}-\frac{n-1}{n}\sqrt{1-2\frac{n}{n-1}c},\ \ \ x=\frac{1}{n}+\frac{1}{n} \sqrt{1-2\frac{n}{n-1}c}. \] Since the condition $1-2nc/\left( n-1\right) \geq0$ gives \[ n\geq\frac{1}{1-2c}, \] and $y>0$ gives \[ 1-2c<\frac{1}{n}+\frac{1}{n\left( n-1\right) }=\frac{1}{n-1}, \] we find that $n=\xi\left( c\right) ,$ completing the proof of Theorem \ref{mainTh}.$
\square$
\subsection{\label{pp0}Proof of Proposition \ref{pro0}}
Suppose that $\mathcal{S}_{n}\left( c\right) $ is nonempty and that
\[ A\in\mathcal{A}\left( n\right) ,\ \ \ \mathbf{x}\geq0,\text{ \ \ } L_{1}\left( A,\mathbf{x}\right) =1,\text{ \ \ and \ \ \ }L_{2}\left( A,\mathbf{x}\right) =c. \] Then \[ c=\sum_{1\leq i<j\leq n}a_{ij}x_{i}x_{j}\leq\sum_{1\leq i<j\leq n}x_{i} x_{j}=\frac{1}{2}\left( \sum_{i}x_{i}\right) ^{2}-\frac{1}{2}\sum_{i} x_{i}^{2}\leq\frac{n-1}{2n}<\frac{1}{2}, \] and so, $c<1/2$ and $n\geq1/\left( 1-2c\right) ;$ thus $n\geq\left\lceil 1/\left( 1-2c\right) \right\rceil .$
On the other hand, if $c<1/2$ and $n\geq\left\lceil 1/\left( 1-2c\right) \right\rceil ,$ let $A\in\mathcal{A}\left( n\right) $ be the matrix with all off-diagonal entries equal to $1,$ and let $x,y$ satisfy \begin{align*} \binom{n-1}{2}x^{2}+\left( n-1\right) xy & =c,\\ \left( n-1\right) x+y & =1. \end{align*} Writing $\mathbf{x}$ for the $n$-vector $\left( x,\ldots,x,y\right) ,$ we see that $L_{1}\left( A,\mathbf{x}\right) =1$ and $L_{2}\left( A,\mathbf{x}\right) =c;$ thus $\mathcal{S}_{n}\left( c\right) $ is nonempty, completing the proof.$
\square$
\section{\label{apx}Upper bounds on $k_{r}\left( n,m\right) $}
In this section we prove Theorem \ref{thmeq}. We start with some facts about Tur\'{a}n graphs.
The $s$-partite Tur\'{a}n graph $T_{s}\left( n\right) $ is a complete $s$-partite graph on $n$ vertices with each vertex class of size $\left\lfloor n/s\right\rfloor $ or $\left\lceil n/s\right\rceil .$ Setting $t_{s}\left( n\right) =e\left( T_{s}\left( n\right) \right) ,$ after some algebra we obtain \[ t_{s}\left( n\right) =\frac{s-1}{2s}n^{2}-\frac{t\left( s-t\right) }{2s}, \] where $t$ is the remainder of $n$ $\operatorname{mod}$ $s;$ hence, \begin{equation} \frac{s-1}{2s}n^{2}-\frac{s}{8}\leq t_{s}\left( n\right) \leq\frac{s-1} {2s}n^{2}. \label{turest} \end{equation} It is known that the second of these inequalities extends to all $2\leq r\leq s$:
\begin{equation} k_{r}\left( T_{s}\left( n\right) \right) \leq\binom{s}{r}\left( \frac {n}{s}\right) ^{r}. \label{turestr} \end{equation}
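For completeness, we indicate the algebra behind the formula for $t_{s}\left( n\right) $ given above. Writing $n=qs+t$ with $0\leq t<s,$ the graph $T_{s}\left( n\right) $ has $t$ vertex classes of size $q+1$ and $s-t$ classes of size $q,$ and so
\begin{align*}
2t_{s}\left( n\right) & =n^{2}-t\left( q+1\right) ^{2}-\left( s-t\right) q^{2}=n^{2}-q\left( sq+2t\right) -t\\
& =n^{2}-\frac{\left( n-t\right) \left( n+t\right) }{s}-t=\frac{s-1}{s}n^{2}-\frac{t\left( s-t\right) }{s}.
\end{align*}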
The Tur\'{a}n graphs play an exceptional role for the function $k_{r}\left( n,m\right) :$ indeed, a result of Bollob\'{a}s \cite{Bol76} implies that if $G$ is a graph with $n$ vertices and $t_{s}\left( n\right) $ edges, then $k_{r}\left( G\right) \geq k_{r}\left( T_{s}\left( n\right) \right) ;$ hence,
\begin{fact} \label{f1}$k_{r}\left( n,t_{s}\left( n\right) \right) =k_{r}\left( T_{s}\left( n\right) \right) .
\square$ \end{fact}
Thus to simplify our presentation, we assume that $n\geq s\geq r\geq3$ are fixed integers and $m$ is an integer satisfying $t_{s-1}\left( n\right) <m\leq t_{s}\left( n\right) $.
First we define a class of graphs giving upper bounds on $k_{r}\left( n,m\right) .$
\subsection*{The graphs $H\left( n,m\right) $}
We shall construct a graph $H\left( n,m\right) $ with $n$ vertices and $m$ edges, where $n,s,$ and $m$ satisfy $n\geq s\geq3$ and $t_{s-1}\left( n\right) <m\leq t_{s}\left( n\right) .$ Note that the construction of $H\left( n,m\right) $ is independent of $r.$
First we define a sequence of graphs $H_{0},\ldots,H_{\left\lfloor n/s\right\rfloor }$ satisfying \begin{equation} t_{s-1}\left( n\right) =e\left( H_{0}\right) <e\left( H_{1}\right) <\cdots<e\left( H_{\left\lfloor n/s\right\rfloor }\right) =t_{s}\left( n\right) , \label{ehi} \end{equation} and then we construct $H\left( n,m\right) $ using $H_{0},\ldots ,H_{\left\lfloor n/s\right\rfloor }.$
\subsection*{The graphs $H_{0},\ldots,H_{\left\lfloor n/s\right\rfloor }$}
For every $0\leq i\leq\left\lfloor n/s\right\rfloor ,$ let $H_{i}$ be the complete $s$-partite graph with vertex classes $I,V_{1},\ldots,V_{s-1}$ such that $\left\vert I\right\vert =i$ and \[ \left\lfloor \left( n-i\right) /\left( s-1\right) \right\rfloor =\left\vert V_{1}\right\vert \leq\cdots\leq\left\vert V_{s-1}\right\vert =\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil . \]
Note that $H_{0}$ is the $\left( s-1\right) $-partite Tur\'{a}n graph $T_{s-1}\left( n\right) ,$ but it is convenient to consider it $s$-partite with an empty vertex class $I$. Note also that $H_{\left\lfloor n/s\right\rfloor }=T_{s}\left( n\right) .$
The transition from $H_{i}$ to $H_{i+1}$ can be briefly summarized as follows: select $V_{j}$ with $\left\vert V_{j}\right\vert =\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil $ and move a vertex $u$ from $V_{j}$ to $I$.
In particular, since in the transition from $H_{i}$ to $H_{i+1}$ the class $I$ grows from $i$ to $i+1$ vertices while a largest class $V_{j}$ shrinks from $\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil $ to $\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil -1$ vertices, we see that \[ e\left( H_{i+1}\right) -e\left( H_{i}\right) =\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil -i-1>0, \] implying in turn (\ref{ehi}).
\subsection{Constructing $H\left( n,m\right) $}
Let $I,V_{1},\ldots,V_{s-1}$ be the vertex classes of $H_{i}.$ Select $V_{j}$ with $\left\vert V_{j}\right\vert =\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil $, select a vertex $u\in V_{j},$ let $l=\left\lceil \left( n-i\right) /\left( s-1\right) \right\rceil -1,$ and suppose that $V_{j}\backslash\left\{ u\right\} =\left\{ v_{1},\ldots,v_{l}\right\} .$ Do the following steps:
\qquad(a) remove all edges joining $u$ to vertices in $I;$
\qquad(b) move $u$ from $V_{j}$ to $I,$ keeping all edges incident to $u;$
\qquad(c) for $m=e\left( H_{i}\right) +1,\ldots,e\left( H_{i+1}\right) $ join $u$ to $v_{m-e\left( H_{i}\right) }$ and write $H\left( n,m\right) $ for the resulting graph.
Two observations are in order: first, $e\left( H\left( n,m\right) \right) =m,$ and second, $H\left( n,e\left( H_{i}\right) \right) =H_{i}$ for every $i=1,\ldots,\left\lfloor n/s\right\rfloor .$
Note also that every additional edge in step (c) increases the number of $r$-cliques by $k_{r-2}\left( H^{\prime}\right) ,$ where $H^{^{\prime}}$ is the fixed graph induced by the set $\left[ n\right] \backslash\left( I\cup V_{j}\right) .$ We thus make the following
\begin{claim} \label{cl9}The function $k_{r}\left( H\left( n,m\right) \right) $ increases linearly in $m$ for $e\left( H_{i-1}\right) \leq m\leq e\left( H_{i}\right) .$ \end{claim}
We also need the following upper bound on $k_{r}\left( H_{i}\right) .$
\begin{claim} \label{cl12} \[ k_{r}\left( H_{i}\right) \leq\binom{s-1}{r-1}\left( \frac{n-i}{s-1}\right) ^{r-1}i+\binom{s-1}{r}\left( \frac{n-i}{s-1}\right) ^{r} \]
\end{claim}
\begin{proof} Let $I,V_{1},\ldots,V_{s-1}$ be the vertex classes of $H_{i}.$ Since the sizes of the sets $V_{1},\ldots,V_{s-1}$ differ by at most $1,$ we see that the set $V_{1}\cup\ldots\cup V_{s-1}$ induces the Tur\'{a}n graph $T_{s-1}\left( n-i\right) .$ Hence a straightforward counting gives \[ k_{r}\left( H_{i}\right) \leq k_{r-1}\left( T_{s-1}\left( n-i\right) \right) i+k_{r}\left( T_{s-1}\left( n-i\right) \right) , \] and the claim follows from inequality (\ref{turestr}). \end{proof}
\subsection{Proof of Theorem \ref{thmeq}}
Assume that $x$ is a real number satisfying \[ \frac{s-2}{2\left( s-1\right) }n^{2}<x\leq\frac{s-1}{2s}n^{2}, \] and define the functions $p=p\left( x\right) $ and $q=q\left( x\right) $ by \begin{align} p & \geq q,\label{c1}\\ \left( s-1\right) p+q & =n,\label{c2}\\ \binom{s-1}{2}p^{2}+\left( s-1\right) pq & =x. \label{c3} \end{align}
We note that \[ p\left( x\right) =\frac{1}{s}\left( n+\sqrt{n^{2}-\frac{2s}{s-1}x}\right) ,\text{ \ \ }q\left( x\right) =\frac{1}{s}\left( n-\left( s-1\right) \sqrt{n^{2}-\frac{2s}{s-1}x}\right) . \] Set \begin{equation} f\left( x\right) =\binom{s-1}{r}p^{r}+\binom{s-1}{r-1}p^{r-1}q, \label{feq} \end{equation} and note that $f\left( x\right) =\varphi_{r}\left( x/n^{2}\right) n^{r};$ hence, to prove Theorem \ref{thmeq}, it is enough to show that if \[ \frac{s-2}{2\left( s-1\right) }n^{2}<m\leq\frac{s-1}{2s}n^{2}, \] then \begin{equation} k_{r}\left( n,m\right) \leq f\left( m\right) +\frac{n^{r}}{n^{2}-2m}. \label{meq} \end{equation}
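Let us justify the formulas for $p\left( x\right) $ and $q\left( x\right) .$ Substituting $q=n-\left( s-1\right) p$ from (\ref{c2}) into (\ref{c3}), we obtain
\[
\left( s-1\right) p\left( n-\frac{s}{2}p\right) =x,
\]
a quadratic equation in $p$ whose larger root is the stated value of $p\left( x\right) $; this root is the one compatible with (\ref{c1}), since $p\geq q$ is equivalent to $p\geq n/s.$ The value of $q\left( x\right) $ then follows from (\ref{c2}).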
We first introduce the auxiliary function $\widehat{f}\left( x\right) ,$ defined for $x\in\left[ t_{s-1}\left( n\right) ,t_{s}\left( n\right) \right] $ by \[ \widehat{f}\left( x\right) =
\begin{cases}
f\left( x+\frac{s-1}{8}\right) , & \text{if }t_{s-1}\left( n\right) <x\leq\frac{s-1}{2s}n^{2}-\frac{s-1}{8};\\
f\left( \frac{s-1}{2s}n^{2}\right) , & \text{if }\frac{s-1}{2s}n^{2}-\frac{s-1}{8}<x\leq t_{s}\left( n\right) .
\end{cases}
\]
To finish the proof of Theorem \ref{thmeq} we first show that \begin{equation} k_{r}\left( H\left( n,m\right) \right) \leq\widehat{f}\left( m\right) , \label{meq1} \end{equation} and then derive (\ref{meq}) using Taylor's expansion and the fact that $k_{r}\left( n,m\right) \leq k_{r}\left( H\left( n,m\right) \right) .$
\begin{claim} \label{cl13}If $m=e\left( H_{i}\right) ,$ then \[ k_{r}\left( H_{i}\right) \leq f\left( m-t_{s-1}\left( n-i\right) +\frac{s-2}{2\left( s-1\right) }\left( n-i\right) ^{2}\right) . \]
\end{claim}
\begin{proof} Indeed, as mentioned above, the set $V_{1}\cup\cdots\cup V_{s-1}$ induces a $T_{s-1}\left( n-i\right) ;$ hence, \[ i\left( n-i\right) +t_{s-1}\left( n-i\right) =m, \] and so, \[ i\left( n-i\right) +\frac{s-2}{2\left( s-1\right) }\left( n-i\right) ^{2}=m-t_{s-1}\left( n-i\right) +\frac{s-2}{2\left( s-1\right) }\left( n-i\right) ^{2}. \] Set \[ m^{\prime}=m-t_{s-1}\left( n-i\right) +\frac{s-2}{2\left( s-1\right) }\left( n-i\right) ^{2} \] and note that $i=q\left( m^{\prime}\right) .$ In view of Claim \ref{cl12}, we obtain \[ k_{r}\left( H_{i}\right) \leq\binom{s-1}{r-1}\left( \frac{n-i}{s-1}\right) ^{r-1}i+\binom{s-1}{r}\left( \frac{n-i}{s-1}\right) ^{r}=f\left( m^{\prime }\right) , \] completing the proof. \end{proof}
\begin{claim} \label{cl10}$f^{^{\prime}}\left( x\right) =\binom{s-2}{r-2}p^{r-2}.$ \end{claim}
\begin{proof} From (\ref{feq}) we have \[ f\left( x\right) =\binom{s-1}{r-1}\left( \frac{s-r}{r}p^{r}+p^{r-1} q\right) , \] and so, \[ f^{\prime}\left( x\right) =\binom{s-1}{r-1}\left( \left( s-r\right) p^{r-1}p^{\prime}+\left( r-1\right) p^{r-2}qp^{\prime}+p^{r-1}q^{\prime }\right) . \] From (\ref{c2}) and (\ref{c3}) we have \[ \left( s-1\right) p^{\prime}+q^{\prime}=0 \] and \[ \left( s-1\right) \left( \left( s-2\right) pp^{\prime}+p^{\prime }q+pq^{\prime}\right) =\left( s-1\right) p^{\prime}\left( q-p\right) =x^{\prime}=1. \] Now the claim follows after simple algebra. \end{proof}
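For completeness, the ``simple algebra'' concluding the proof reads as follows: the last two displayed identities give $p^{\prime}=\frac{1}{\left( s-1\right) \left( q-p\right) }$ and $q^{\prime}=-\left( s-1\right) p^{\prime},$ and so
\begin{align*}
f^{\prime}\left( x\right) & =\binom{s-1}{r-1}p^{r-2}p^{\prime}\left( \left( s-r\right) p+\left( r-1\right) q-\left( s-1\right) p\right) \\
& =\binom{s-1}{r-1}\left( r-1\right) \left( q-p\right) p^{r-2}p^{\prime}=\binom{s-1}{r-1}\frac{r-1}{s-1}p^{r-2}=\binom{s-2}{r-2}p^{r-2}.
\end{align*}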
We immediately see that $f\left( x\right) $ is increasing. Also, since $p\left( x\right) $ is decreasing, $f^{^{\prime}}\left( x\right) $ is decreasing too, implying that $f\left( x\right) $ is concave. This, in turn, implies that $\widehat{f}\left( x\right) $ is concave.
For every $i=1,\ldots,\left\lfloor n/s\right\rfloor ,$ by Claim \ref{cl13}, we have \[ k_{r}\left( H_{i}\right) \leq f\left( m^{\prime}\right) \leq\widehat {f}\left( m\right) , \] and since, by Claim \ref{cl9}, $k_{r}\left( H\left( n,m\right) \right) $ is linear for $m\in\left[ e\left( H_{i}\right) ,e\left( H_{i+1}\right) \right] ,$ inequality (\ref{meq1}) follows.
To finish the proof of (\ref{meq}), note that by Taylor's formula, in view of the concavity of $f\left( x\right) ,$ we have \begin{align*} \widehat{f}\left( m\right) & \leq f\left( m+\frac{s-1}{8}\right) \leq f\left( m\right) +\frac{s-1}{8}f^{\prime}\left( m\right) =f\left( m\right) +\frac{s-1}{8}\binom{s-2}{r-2}p^{r-2}\\ & \leq f\left( m\right) +\frac{s-1}{8}\binom{s-2}{r-2}\left( \frac{n} {s-1}\right) ^{r-2}<f\left( m\right) +sn^{r-2}\leq f\left( m\right) +\frac{n^{r}}{n^{2}-2m}, \end{align*} completing the proof of Theorem \ref{thmeq}.$
\square$
\textbf{Acknowledgement }Thanks to Cecil Rousseau for helpful discussions and to Alex Razborov for pointing out some mistakes in the initial version of the manuscript.
\end{document}
\begin{document}
\maketitle \begin{abstract} We study the $2D$ full Ginzburg-Landau energy with a periodic, rapidly oscillating, discontinuous and [strongly] diluted pinning term using a perturbative argument. This energy models the state of a heterogeneous type II superconductor subjected to a magnetic field. We calculate the value of the first critical field, which links the presence of vorticity defects with the intensity of the applied magnetic field. Then we prove a standard dependence of the quantized vorticity defects on the intensity of the applied field. Our study includes the case of a London solution having several {\it minima}. The macroscopic location of the vorticity defects is understood through the famous Bethuel-Brezis-Hélein renormalized energy. The mesoscopic location, {\it i.e.}, the arrangement of the vorticity defects around the {\it minima} of the London solution, is the same as in the homogeneous case. The microscopic location is exactly the same as in the heterogeneous case without magnetic field. We also compute the value of secondary critical fields that increment the quantized vorticity. \end{abstract}
\section{Introduction} This article studies the pinning phenomenon in type-II superconducting composites.
Superconductivity is a property that appears in certain materials cooled below a critical temperature. These materials are called superconductors. Superconductivity is characterized by a total absence of electrical resistance and perfect diamagnetism. Unfortunately, when the imposed conditions are too intense, superconductivity is destroyed in certain areas of the material called {\it vorticity defects}.
We are interested in type II superconductors, which are characterized by the fact that the vorticity defects first appear in small areas. Their number increases with the intensity of the imposed conditions until they fill the material. For example, when the intensity $h_{\rm ex}$ of an applied magnetic field exceeds a first threshold, the first vorticity defects appear: the magnetic field begins to penetrate the superconductor. The penetration occurs along thin wires which may move, resulting in energy dissipation. These motions may be limited by trapping the vorticity defects in small areas.
The behavior of a superconductor is modeled by minimizers of a Ginzburg-Landau type energy. In order to study the presence of traps for the vorticity defects we consider an energy including a pinning term that models impurities in the superconductor. These impurities would play the role of traps for the vorticity defects. We are thus led to the subject of this article: type-II superconducting composites with impurities.\\
The case of an infinitely long homogeneous type II superconducting cylinder has been intensively studied in mathematics by various authors since the 90's [see \cite{SS1} for a guide to the literature]. Namely, the present work deals with a cylindrical superconductor $\mathcal{S}=\Omega\times\mathbb{R}$ [whose section is $\Omega\subset\mathbb{R}^2$] subjected to a vertical magnetic field $(0,0,h_{\rm ex})$. Under these considerations, the vorticity defects are thin vertical cylinders. Thus their study may be done {\it via} a 2D problem formulated on $\Omega\subset\mathbb{R}^2$. Following the works of various authors [see \cite{Rub}, \cite{ASS1}, \cite{K1}], for a small parameter $\varepsilon>0$ [$\varepsilon\to0$ in this article] and $h_{\rm ex}=h_{\rm ex}(\varepsilon)\geq0$, we are interested in the description of the [global] minimizers of the functional \[ \begin{array}{cccc}
\mathcal{E}_{\varepsilon,h_{\rm ex}}:&\mathscr{H}&\to&\mathbb{R}^+\\&(u,A)&\mapsto&\displaystyle\dfrac{1}{2}\int_\Omega|\nabla u-\imath Au|^2+\dfrac{1}{2\varepsilon^2}(a_\varepsilon^2-|u|^2)^2+|{\rm curl}(A)-h_{\rm ex}|^2 \end{array}, \] where [see Section \ref{SecNotation} for more detailed notation] \begin{itemize} \item $\Omega\subset\mathbb{R}^2$ is a smooth bounded simply connected open set, \item $\mathscr{H}:=H^1(\Omega,\mathbb{C})\times H^1(\Omega,\mathbb{R}^2)$, \item $a_\varepsilon:\Omega\to\{1,b\}$ [$b\in(0,1)$ is independent of $\varepsilon$] is a periodic diluted pinning term [see Figure \ref{Intro.FigureTermeChevillage} and Section \ref{SecConstructionPinningTerm} for a construction of $a_\varepsilon$]. The impurities are the connected components of $\omega_\varepsilon:=a_\varepsilon^{-1}(\{b\})$. In the definition of $a_\varepsilon$, $\delta=\delta(\varepsilon)\underset{\varepsilon\to0}{\to}0$ is the parameter of period, $\lambda=\lambda(\varepsilon)\underset{\varepsilon\to0}{\to}0$ is the parameter of dilution and $0\in\omega\subset\mathbb{R}^2$ is a smooth bounded simply connected open set which gives the form of the impurities. \end{itemize}
\begin{figure}
\caption{The periodic pinning term}
\label{Intro.FigureTermeChevillage}
\end{figure}
We focus on a strongly diluted case [$\lambda^{1/4}|\ln\varepsilon|\to0$] with not too small connected components of $\omega_\varepsilon$ [$|\ln(\lambda\delta)|=\mathcal{O}(\ln|\ln\varepsilon|)$], but with a sufficiently small period parameter [see \eqref{PutaindHypTech}]. \\
Under these considerations, if $(u_\varepsilon,A_\varepsilon)$ minimizes $\mathcal{E}_{\varepsilon,h_{\rm ex}}$, then the vorticity defects may be interpreted as the set $\{|u_\varepsilon|< b/2\}$.
As said above, our study takes place in the extreme type II case $\varepsilon\to0$, and we allow $h_{\rm ex}$ to diverge under the upper bound \eqref{BorneKMagn} below.
Vorticity defects appear for minimizers above a critical value $H_{c_1}=[{b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|}]/({2\|\xi_0\|_{L^\infty(\Omega)}})+\mathcal{O}(1)$ [see Corollary \ref{Cor.ExactEnergyExpPreCritField} and \eqref{DefH0c1}]. Here $\xi_0\in H^1_0\cap H^2$ is called the {\it London solution} and is the unique solution of the {\it London equation} \begin{equation}\label{LondonEq1} \begin{cases} -\Delta^2\xi_0+\Delta\xi_0=0&\text{ in }\Omega \\ \Delta\xi_0=1&\text{ on }\partial\Omega \\ \xi_0=0&\text{ on }\partial\Omega \end{cases}. \end{equation}
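Let us note in passing a standard equivalent formulation of \eqref{LondonEq1}: if $\xi_0\in H^1_0(\Omega)$ solves the second order problem
\[
-\Delta\xi_0+\xi_0=-1\text{ in }\Omega,
\]
then $\Delta\xi_0=\xi_0+1$, hence $\Delta\xi_0=1$ on $\partial\Omega$ and $\Delta^2\xi_0=\Delta\xi_0$ in $\Omega$, {\it i.e.}, $\xi_0$ solves \eqref{LondonEq1}. In particular, the bound $-1<\xi_0<0$ in $\Omega$ recalled below follows from the maximum principle.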
The value $H_{c_1}$ is calculated by a standard comparison of the energetic cost of a configuration without vorticity defects [$|u|\geq b/2$] with that of well-prepared competitors having an arbitrary number of quantized vorticity defects. Here quantization is to be understood via the degree of $u$ around a vorticity defect; it is an observable quantity related to the circulation of the superconducting current.
In order to carry out the study, the set $\Lambda:=\{z\in\Omega\,|\,\xi_0(z)=\min\xi_0\}\subset\Omega$ is of major interest [it is standard to prove that, in $\Omega$, $-1<\xi_0<0$]. From Lemma 4.4 in \cite{S1} and Lemma 4 in \cite{SS2} we have the following: \begin{lem}\label{Lem.DescriptionLambda} The set $\Lambda$ is finite. Moreover there exist $\eta>0$ and $M\geq1$ s.t. for $a\in\Omega$ we have $\xi_0(a)\geq \min\xi_0+\eta{\rm dist}(a,\Lambda)^M$ \footnote{In Lemma 4 in \cite{SS2}, $M$ is just a positive number, but $\xi_0\in C^0(\overline\Omega)$, and then, up to taking $\eta>0$ sufficiently small, we may assume $M\geq 1$.}. \end{lem} We write $N_0:={\rm Card}(\Lambda)$ and $\Lambda=\{p_1,...,p_{N_0}\}$.\\
We may give a simple picture of the emergence of the vorticity defects. The first vorticity defects appear close to $H_{c_1}$. If $N_0=1$ then there is first a unique vorticity defect and it is close to $\Lambda$. If $N_0\geq2$ the situation is less clear: we first have $d_1^\star\in\{1,...,N_0\}$ vorticity defects, located close to $d_1^\star$ distinct elements of $\Lambda$. By increasing the intensity of the applied field $h_{\rm ex}$ by a bounded quantity we increment the number of vorticity defects until filling $\Lambda$.
Once each element of $\Lambda$ is close to a vorticity defect, by increasing $h_{\rm ex}$ by a $\mathcal{O}(\ln|\ln\varepsilon|)$, additional defects appear one by one.\\
We may now state the main theorems of the present work. For simplicity of presentation, the theorems are not stated in their most general form [see Theorem \ref{THM}].
These main results are obtained assuming that $\lambda,\delta$ and $h_{\rm ex}$ satisfy \begin{equation}\label{CondOnLambdaDelta}
\lambda^{1/4}|\ln\varepsilon|\to0\text{ and }|\ln(\lambda\delta)|=\mathcal{O}(\ln|\ln\varepsilon|), \end{equation} \begin{equation}\label{BorneKMagn}
\text{There is $K\geq 1$ s.t. $h_{\rm ex}\leq\dfrac{b^2|\ln\varepsilon|}{2\|\xi_0\|_{L^\infty(\Omega)}}+K\ln|\ln\varepsilon|$} \end{equation} and when $h_{\rm ex}\to\infty$ we need \begin{equation}\label{PutaindHypTech} \dfrac{\ln(\delta\sqrt{h_{\rm ex}})}{\ln(\ln h_{\rm ex})}\to-\infty. \end{equation}
Namely, in order to meet Hypotheses \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}, we may think of $\lambda\simeq|\ln\varepsilon|^{-s},\delta\simeq|\ln\varepsilon|^{-t}$ with $s>4$ and $t>1/2$.
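Indeed, with, say, $\lambda=|\ln\varepsilon|^{-s}$ and $\delta=|\ln\varepsilon|^{-t}$: first $\lambda^{1/4}|\ln\varepsilon|=|\ln\varepsilon|^{1-s/4}\to0$ since $s>4$; next $|\ln(\lambda\delta)|=(s+t)\ln|\ln\varepsilon|=\mathcal{O}(\ln|\ln\varepsilon|)$; finally, when $h_{\rm ex}\to\infty$, since \eqref{BorneKMagn} gives $h_{\rm ex}=\mathcal{O}(|\ln\varepsilon|)$, we get
\[
\ln(\delta\sqrt{h_{\rm ex}})\leq\left(\frac{1}{2}-t\right)\ln|\ln\varepsilon|+\mathcal{O}(1)\to-\infty
\]
while $\ln(\ln h_{\rm ex})=\mathcal{O}(\ln\ln|\ln\varepsilon|)$, so that \eqref{PutaindHypTech} holds since $t>1/2$.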
We also need to assume that \begin{equation}\label{NonDegHyp}\text{the minimal points of $\xi_0$, $\Lambda=\{p_1,...,p_{N_0}\}$, are non degenerate critical points} \end{equation} in the sense that for $p\in\Lambda$, letting ${\rm Hess}_{\xi_0}(p)$ be the Hessian matrix of $\xi_0$ at $p$, the quadratic form $Q_p(z)=z\cdot {\rm Hess}_{\xi_0}(p)z$ is positive definite. Note that if \eqref{NonDegHyp} holds then we may take $M=2$ in Lemma \ref{Lem.DescriptionLambda}.\\
The strategy of this work is based on a perturbative argument. This argument applies to families of {\it quasi-minimizers} of the energy satisfying some regularity assumptions [see Theorem \ref{THM}]. In particular, we cannot prescribe a sharp profile near a zero of a quasi-minimizer, since such a profile does not make sense for quasi-minimizers. Therefore we cannot rely on an {\it ad hoc} notion of {\it vortices} such as ``isolated zeros''. However, with a natural $L^\infty$-bound on the gradient of quasi-minimizers, the notion of vorticity defects is sufficiently robust to allow a nice description of them.\\
For simplicity of the presentation we first state the main results for a family $\{(u_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ s.t. \begin{equation}\label{MinimalityAssumption(u,A)} \text{$(u_\varepsilon,A_\varepsilon)$ minimizes $\mathcal{E}_{\varepsilon,h_{\rm ex}}$ in $\mathscr{H}$.} \end{equation} \begin{thm}\label{THM-A}
Assume that \eqref{NonDegHyp} holds and $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}. There exists $\mathcal{D}_{K,b}>1$ s.t. for $\{(u_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ satisfying \eqref{MinimalityAssumption(u,A)}, for sufficiently small $\varepsilon$, there exists $d_\varepsilon\in\mathbb{N}$ s.t. if $d_\varepsilon=0$ then $|u_\varepsilon|\geq b/2$ in $\Omega$, and if $d_\varepsilon\in\mathbb{N}^*$ then there exists a set of $d_\varepsilon$ points, $\mathcal{Z}_\varepsilon=\{z^\varepsilon_1,...,z^\varepsilon_{d_\varepsilon}\}\subset\Omega$, s.t. for $\mu>0$ sufficiently small and independent of $\varepsilon$ we have: \begin{enumerate} \item $d_\varepsilon\leq\mathcal{D}_{K,b}$
\item $\{|u_\varepsilon|< b/2\}\subset\cup B(z^\varepsilon_i,\varepsilon^{\mu})\subset\Omega$,
\item $|z^\varepsilon_i-z^\varepsilon_j|\geq h_{\rm ex}^{-1}\ln h_{\rm ex}$ for $i\neq j$, \item ${\rm dist}( z^\varepsilon_i,\Lambda)\leq h_{\rm ex}^{-1/2}\ln h_{\rm ex}$ for all $i$, \item $ {\rm deg}_{\partial B(z^\varepsilon_i,\varepsilon^{\mu})}(u_\varepsilon)=1$ for all $i$. \end{enumerate} Moreover: \begin{enumerate} \item There is $\eta_{\omega,b}>0$ depending only on $\omega$ and $b$ s.t. for all $i$ we have $B(z^\varepsilon_i,\eta_{\omega,b}\lambda\delta)\subset\omega_\varepsilon$. \item If for a sequence $\varepsilon=\varepsilon_n\downarrow0$ we have $h_{\rm ex}=\mathcal{O}(1)$ then $d_{\varepsilon}=0$ for small $\varepsilon$. \end{enumerate} \end{thm}
From Theorem \ref{THM-A} we know that, for small $\varepsilon$, if $\{|u_\varepsilon|< b/2\}\neq\emptyset$, then the vorticity defects are contained in small disks which are well separated, trapped by the impurities and located near $\Lambda$. The second theorem gives sharper information on the location of these disks. We divide the second theorem into three parts: \begin{itemize} \item Macroscopic location: We know that the disks are near $\Lambda$; for a given $p\in\Lambda$, how many disks are near $p$? \item Mesoscopic location: For $p\in\Lambda$, how are the disks near $p$ organized? What is their inter-distance? \item Microscopic location: We know that the disks are trapped by the inclusions $\omega_\varepsilon$; what is their location inside $\omega_\varepsilon$? \end{itemize} These questions are related to the crucial notion of {\it renormalized energy} [see Section \ref{Sec.RenEn}]. \begin{thm}\label{THM-B} {\bf [Direct part]}\\ Assume that \eqref{NonDegHyp} holds and $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}. Assume also $h_{\rm ex}\to\infty$.
Let $\{(u_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ satisfy \eqref{MinimalityAssumption(u,A)} and let $\varepsilon=\varepsilon_n\downarrow0$ be a sequence. Since $d=d_\varepsilon\leq\mathcal{D}_{K,b}$, up to passing to a subsequence, we may assume that $d$ is independent of $\varepsilon$. Assume $d>0$.
\noindent{\bf Macroscopic location.} Recall that $\Lambda=\{p_1,...,p_{N_0}\}$ and for $k\in\{1,...,N_0\}$ we let $D_k:= {\rm deg}_{\partial B(p_k,2\ln(h_{\rm ex})/\sqrt{h_{\rm ex}})}(u_\varepsilon)$. Write ${\bf D}=(D_1,...,D_{N_0})$. Up to passing to a subsequence we may assume that ${\bf D}$ is independent of $\varepsilon$. We then have: \begin{itemize} \item The distribution of the disks $B(z^\varepsilon_i,\varepsilon^{\mu})$ around the elements of $\Lambda$ is as homogeneous as possible: \[
{\bf D}\in \Lambda_{d}:=\left.\left\{{\bf D}'\in\left\{\left\lceil\dfrac{d}{{N_0}}\right\rceil;\left\lfloor\dfrac{d}{{N_0}}\right\rfloor\right\}^{N_0}\,\right|\,\sum_{k=1}^{N_0} D_k'=d\right\}. \] Here, for $x\in\mathbb{R}$, we wrote $\lceil x\rceil$ for the ceiling of $x$ and $\lfloor x\rfloor$ for the floor of $x$. \item There exists a renormalized energy $\mathcal{W}_d:\Lambda_{d}\to\mathbb{R}$ [see \eqref{DefWdOpD}] s.t. ${\bf D}$ minimizes $\mathcal{W}_d$.
\end{itemize} \noindent{\bf Mesoscopic location.} The mesoscopic location is the same as in the homogeneous case. Namely, for $p\in\Lambda$ s.t. $ {\rm deg}_{\partial B(p,2\ln(h_{\rm ex})/\sqrt{h_{\rm ex}})}(u_\varepsilon)=D>0$, there exists a renormalized energy [see Section \ref{SecMesoRenEn}] \[
W^{\rm meso}_{p,D}:\{(a_1,...,a_D)\in (\mathbb{R}^2)^D\,|\,a_i\neq a_j\text{ for }i\neq j\}\to\mathbb{R} \] s.t., denoting $\ell:=\sqrt{\dfrac{D}{h_{\rm ex}}}$ and for $z_i^\varepsilon\in B(p,2\ln(h_{\rm ex})/\sqrt{h_{\rm ex}})$ letting $\breve z_i^\varepsilon:=\dfrac{z_i^\varepsilon-p}{\ell}$, the vector $\breve{\bf z}^\varepsilon=(\breve z_1^\varepsilon,...,\breve z_D^\varepsilon)$ [assuming $z_i^\varepsilon\in B(p,2\ln(h_{\rm ex})/\sqrt{h_{\rm ex}})\Leftrightarrow i\in\{1,...,D\}$] converges to a minimizer of $W^{\rm meso}_{p,D}$. In particular $\ell$ is the typical inter-distance between two close $z^\varepsilon_i,z^\varepsilon_j$.
\noindent{\bf Microscopic location.} We know that, for $i\in\{1,...,d\}$, $B(z^\varepsilon_i,\eta_{\omega,b}\lambda\delta)\subset\omega_\varepsilon$. Moreover for $i\neq j$ we have $|z^\varepsilon_i-z^\varepsilon_j|\geq \ln(h_{\rm ex})h_{\rm ex}^{-1}\gg\lambda\delta$. Then each connected component of $\omega_\varepsilon$ contains at most one disk $B(z^\varepsilon_i,\varepsilon^{\mu})$.
There exists a renormalized energy $W^{\rm micro}:\omega\to\mathbb{R}$ [see Section \ref{SecMicrRenEn}] s.t. for $i\in\{1,...,d\}$, letting $y^\varepsilon_i\in\delta\cdot\mathbb{Z}^2$ be s.t. $B(z^\varepsilon_i,\eta_{\omega,b}\lambda\delta)\subset y^\varepsilon_i+\lambda\delta\omega$ and $\hat z^\varepsilon_i:=\dfrac{z^\varepsilon_i-y^\varepsilon_i}{\lambda\delta}\in\omega$ we have \begin{itemize} \item $W^{\rm micro}(\hat z^\varepsilon_i)\to\displaystyle\min_\omega W^{\rm micro}$, \item Up to pass to a subsequence, there is $a_i\in\omega$ s.t. $\hat z^\varepsilon_i\to a_i$ and $a_i$ minimizes $W^{\rm micro}$.\footnote{For example if $\omega$ is a disk then $a_i$ is the center of the disk \cite{dos2015microscopic} .} \end{itemize}
{\bf [Optimality of the renormalized energies]}\\ Consider a sequence $\varepsilon=\varepsilon_n\downarrow0$ previously fixed [in order to have ${\bf D}$ independent of $\varepsilon$] and assume $d\neq0$. We let \begin{itemize} \item ${\bf D}'\in \Lambda_{d}$ be a minimizer of $\mathcal{W}_d$, \item for $k\in\{1,...,N_0\}$ s.t. $D_k'\geq1$, ${\bf a}_k'$ be a minimizer of $W^{\rm meso}_{p_k,D_k'}$, \item $a_0$ be a minimizer of $W^{\rm micro}$. \end{itemize} Then, for $\varepsilon=\varepsilon_n$, there exist $(u_\varepsilon',A_\varepsilon')\in\mathscr{H}$ and $d$ distinct points of $\Omega$, $\{z_1',...,z_d'\}=\{{z^\varepsilon_1}',...,{z^\varepsilon_d}'\}\subset\omega_{\varepsilon}$, s.t. \begin{itemize} \item $\mathcal{E}_{\varepsilon,h_{\rm ex}}(u_\varepsilon',A_\varepsilon')\leq\inf_\mathscr{H}\mathcal{E}_{\varepsilon,h_{\rm ex}}+o(1)$,
\item $\{|u_\varepsilon'|\leq b/2\}\subset \cup B(z_i',\sqrt\varepsilon)\subset\cup_{p\in\Lambda}B(p,{\ln(h_{\rm ex})}/\sqrt{h_{\rm ex}})$, \item for $k\in\{1,...,N_0\}$, $D_k'= {\rm deg}_{\partial B(p_k,2\ln(h_{\rm ex})/\sqrt{h_{\rm ex}})}(u_\varepsilon')$, \item $ {\rm deg}_{\partial B(z_i',\sqrt\varepsilon)}(u_\varepsilon')=1$ for all $i$,
\item writing for $p_k\in\Lambda$ [s.t. $D'_k\geq1$] and $z'_i\in B(p_k,\ln(h_{\rm ex})/\sqrt{h_{\rm ex}})$, $\breve z_i':=(z_i'-p_k)/\sqrt{D_k'/h_{\rm ex}}$ and $\breve{\bf z}_{p_k}':=\{\breve z'_i\,|\,z_i'\to p_k\}$\footnote{We use a slight abuse of notation to simplify the presentation.}, we have $\breve{\bf z}_{p_k}'\to{\bf a}_k'$, \item For $i\in\{1,...,d\}$, letting $y^\varepsilon_i\in\delta\cdot\mathbb{Z}^2$ be s.t. $z'_i\in y^\varepsilon_i+\lambda\delta\cdot\omega$ and $\hat z'_i:=\dfrac{z'_i-y^\varepsilon_i}{\lambda\delta}\in\omega$ we have $\hat z_i'\to a_0$. \end{itemize} \end{thm}
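To illustrate the combinatorial constraint ${\bf D}\in\Lambda_d$ of the macroscopic part, the set $\Lambda_d$ may be enumerated by brute force for small parameters. The following sketch is purely illustrative [the function name \texttt{lambda\_d} is ours and is not part of the analysis]:

```python
from itertools import product
from math import ceil, floor

def lambda_d(d, n0):
    """Enumerate Lambda_d: the tuples D' of length n0 whose entries lie in
    {ceil(d/n0), floor(d/n0)} and whose components sum to d."""
    vals = sorted({ceil(d / n0), floor(d / n0)})
    return [D for D in product(vals, repeat=n0) if sum(D) == d]

# When n0 divides d the admissible distribution is unique and homogeneous:
print(lambda_d(6, 3))  # [(2, 2, 2)]
# Otherwise every admissible D' mixes the two neighbouring integers:
print(lambda_d(5, 3))  # [(1, 2, 2), (2, 1, 2), (2, 2, 1)]
```

Here \texttt{d} plays the role of the total degree and \texttt{n0} of ${\rm Card}(\Lambda)=N_0$; among these tuples the theorem selects a minimizer of $\mathcal{W}_d$.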
The third theorem underlines the link between the number $d$ and $h_{\rm ex}$. In this theorem we write, for $x\in\mathbb{R}$, $[x]^+=\max(x,0)$ and $[x]^-=\min(x,0)$. \begin{thm}\label{THM-C} Assume that $\Omega$ satisfies \eqref{NonDegHyp}, $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}.
There are integers $L\in\{1,...,N_0\}$, $0=d_0^\star<d_1^\star<\cdots<d_L^\star=N_0$ [$d_k^\star\in\mathbb{N}$ is independent of $\varepsilon$] and critical fields [depending on $\varepsilon$] ${\tt K}^{\tt(I)}_1<\cdots<{\tt K}^{\tt(I)}_L<{\tt K}^{\tt(II)}_1<{\tt K}^{\tt(II)}_2<\cdots$ [see \eqref{TheExprionnI} and \eqref{TheExprionnII} for the expressions of ${\tt K}^{\tt(I)}_k$ and ${\tt K}^{\tt(II)}_k$] s.t. for $\{(u_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ a family satisfying \eqref{MinimalityAssumption(u,A)} and for a sequence $\varepsilon=\varepsilon_n\downarrow0$:
\begin{itemize} \item If $d_\varepsilon=0$ for small $\varepsilon$, then $[h_{\rm ex}-{\tt K}^{\tt(I)}_1]^+\to0$. \item If $d_\varepsilon>0$ for small $\varepsilon$, then $[h_{\rm ex}-{\tt K}^{\tt(I)}_1]^-\to0$.
\item Assume $L\geq2$. For $k\in\{1,...,L-1\}$, if for small $\varepsilon$ we have $d_{k-1}^\star<d_\varepsilon\leq d_k^\star$, then \[ \left[h_{\rm ex}-{\tt K}^{\tt(I)}_k\right]^-\to0\text{ and }\left[h_{\rm ex}-{\tt K}^{\tt(I)}_{k+1}\right]^+\to0. \] \item For $L\geq1$, if for small $\varepsilon$ we have $d_{L-1}^\star< d_\varepsilon\leq d_L^\star=N_0$, then \[ \left[h_{\rm ex}-{\tt K}^{\tt(I)}_L\right]^-\to0\text{ and }\left[h_{\rm ex}-{\tt K}^{\tt(II)}_1\right]^+\to0. \] \item Let $l\in\mathbb{N}^*$. If for small $\varepsilon$ we have $d_\varepsilon=N_0+l$, then \[ \left[h_{\rm ex}-{\tt K}^{\tt(II)}_{l}\right]^-\to0\text{ and }\left[h_{\rm ex}-{\tt K}^{\tt(II)}_{l+1}\right]^+\to0. \] \end{itemize} \begin{remark} A more complete statement for $d_\varepsilon\in\{0,...,N_0\}$ may be found in Proposition \ref{Prop.SHarperdescriptionNonSatured}. \end{remark} \end{thm}
\section{Notation}\label{SecNotation} \subsection{Sets, vectors and numbers} \begin{itemize} \item We identify the real plane $\mathbb{R}^2$ with $\mathbb{C}$ and we denote by $\mathbb{S}^1$ the unit circle in $\mathbb{C}$.
\item For $\mathscr{U} \subset\mathbb{R}^2$, $N\in\mathbb{N}\setminus\{0;1\}$, $(\mathscr{U} ^N)^*:=\{(z_1,...,z_N)\in \mathscr{U} ^N\,|\,z_i\neq z_j\text{ for }i\neq j \}$. \item For $k\in\{1;2\}$, $\mathcal{H}^k$ is the $k$-dimensional Hausdorff measure.
\item If $(a_1,a_2),(b_1,b_2)\in\mathbb{R}^2$, then $|(a_{1},a_{2})|=\sqrt{a_{1}^2+a_{2}^2}$, $(a_1,a_2)^\bot=(-a_2,a_1)$, $(a_1,a_2)\cdot(b_1,b_2)=a_1b_1+a_2b_2$ and $(a_1,a_2)\wedge(b_1,b_2)=a_1b_2-a_2b_1$.
\item For $\mathscr{U} \subset\mathbb{R}^2$, $\overline{\mathscr{U}}$ is the closure of $\mathscr{U}$ w.r.t. $|\cdot|$.
\item For $\emptyset\neq\mathscr{U},\mathscr{V}\subset\mathbb{R}^2$ and $x_0\in\mathbb{R}^2$ we write ${\rm dist}(\mathscr{U},\mathscr{V}):=\inf\{|x-y|\,|\,x\in\mathscr{U},\,y\in\mathscr{V}\}$ and ${\rm dist}(x_0,\mathscr{V}):={\rm dist}(\{x_0\},\mathscr{V})$. \item For $\Gamma\subset\mathbb{R}^2$ a Jordan curve we let: \begin{itemize} \item ${\rm int}(\Gamma)$, the interior of $\Gamma$, be the bounded open set $\mathscr{U} \subset\mathbb{R}^2$ s.t. $\Gamma=\partial \mathscr{U} $ where $\partial \mathscr{U} $ is the boundary of $\mathscr{U} $. \item $\nu$ be the outward normal unit vector of ${\rm int}(\Gamma)$. \item $\tau$ be the direct unit tangent vector of $\Gamma$ ($\tau=\nu^\bot$). \end{itemize} \item If $S$ is a finite set then ${\rm Card}(S)$ is the cardinality of $S$. \end{itemize}
\begin{itemize}
\item If $x\in\mathbb{R}$, then we write $\lceil x\rceil:=\min\{m\in\mathbb{Z}\,|\,m\geq x\}$, the ceiling of $x$, and $\lfloor x\rfloor:=\max\{m\in\mathbb{Z}\,|\,m\leq x\}$, the floor of $x$. \item If $x\in\mathbb{R}$, then we write $[x]^+=\max(x,0)$ and $[x]^-=\min(x,0)$. \end{itemize} \subsection{Functions} \begin{itemize}
\item For $\mathscr{U}\subset\mathbb{R}^2$ a smooth open set and $K\subset\mathbb{C}$, $H^1(\mathscr{U},K)=\{u\in H^1(\mathscr{U},\mathbb{C})\,|\,u(x)\in K\text{ for a.e. }x\in \mathscr{U}\}$ where $H^1(\mathscr{U},\mathbb{C})$ is the classical first order Sobolev space modeled on the Lebesgue space $L^2$.
For $k\in\mathbb{N}^*$ and $p\in [1;\infty]$ we use the standard notation for the higher order Sobolev space $H^k(\mathscr{U},K)$ modeled on $L^2$ and $W^{k,p}(\mathscr{U},K)$ for the Sobolev space of order $k$ modeled on $L^p$. \item We use the standard notation for the differential operators: ``$\nabla$'' for the gradient, ``${\rm curl}$'' for the curl, ``${\rm div}$'' for the divergence, ``$\partial_\tau=\tau\cdot\nabla$'' for the tangential derivative, ``$\partial_\nu=\nu\cdot\nabla$'' for the normal derivative... \item For $\mathscr{U}\subset\mathbb{R}^2$ a smooth bounded open set we let ${\rm tr}_{\partial\mathscr{U}}:H^1(\mathscr{U},\mathbb{C})\to H^{1/2}(\partial \mathscr{U},\mathbb{C})$ be the [surjective] trace operator. For $\Gamma$ a connected component of $\partial \mathscr{U}$ and $u\in H^1(\mathscr{U},\mathbb{C})$, we let ${\rm tr}_\Gamma(u)$ be the restriction of ${\rm tr}_{\partial \mathscr{U}}(u)$ to $\Gamma$.
We write $H_0^1(\mathscr{U},\mathbb{C}):=\{u\in H^1(\mathscr{U},\mathbb{C})\,|\,{\rm tr}_{\partial\mathscr{U}}(u)=0\}$.
\item For $u:\Omega\to\mathbb{C}$ a function we let $\underline u:=\begin{cases}u&\text{if }|u|\leq1\\u/|u|&\text{if }|u|>1\end{cases}$. \item For $\Gamma\subset\mathbb{R}^2$ a Jordan curve and $g\in H^{1/2}(\Gamma,\mathbb{S}^1)$, the degree of $g$ is defined as \begin{equation}\nonumber {\rm deg}_{\Gamma}(g):=\frac{1}{2\pi}\int_{\Gamma}g\wedge\partial_\tau g\in\mathbb{Z}. \end{equation}
For a smooth and bounded open set $\mathscr{U}\subset\mathbb{R}^2$, $\Gamma$ a connected component of $\partial \mathscr{U}$ and $u\in H^1(\mathscr{U},\mathbb{C})$, if there exists $\eta>0$ s.t. $g:={\rm tr}_\Gamma(u)$ satisfies $|g|\geq\eta$, then $g/|g|\in H^{1/2}(\Gamma,\mathbb{S}^1)$ and we write $ {\rm deg}_\Gamma(u):= {\rm deg}_\Gamma(g/|g|)$.
When $\mathscr{U},\mathscr{V}\subset\mathbb{R}^2$ are smooth bounded simply connected open sets s.t. $\overline{\mathscr{V}}\subset \mathscr{U}$ and $u\in H^1(\mathscr{U}\setminus\overline{\mathscr{V}},\mathbb{S}^1)$, we write [without ambiguity] $ {\rm deg}(u)$ instead of $ {\rm deg}_\Gamma(u)$ for any Jordan curve $\Gamma\subset \overline{\mathscr{U}}\setminus\mathscr{V}$ s.t. $\mathscr{V}\subset{\rm int}(\Gamma)$.
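As a basic illustration of the degree formula [a standard computation, independent of the present setting]: for $\Gamma=\mathbb{S}^1$ and $g(\e^{\imath\theta})=\e^{\imath n\theta}$ with $n\in\mathbb{Z}$, parametrizing $\mathbb{S}^1$ by arclength we get

```latex
g\wedge\partial_\tau g
={\rm Im}\big(\overline{g}\,\partial_\tau g\big)
={\rm Im}\big(\e^{-\imath n\theta}\,\imath n\,\e^{\imath n\theta}\big)=n,
\qquad\text{hence}\qquad
{\rm deg}_{\mathbb{S}^1}(g)=\frac{1}{2\pi}\int_{\mathbb{S}^1}n\,{\rm d}\mathcal{H}^1=n.
```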
\end{itemize} \subsection{Construction of the pinning term}\label{SecConstructionPinningTerm} Let \begin{enumerate}[$\bullet$]\item $\delta=\delta(\varepsilon)\in(0,1),\,\lambda=\lambda(\varepsilon)\in(0,1)$;
\item $\omega\subset\mathbb{R}^2$ be a smooth bounded and simply connected open set s.t. $(0,0)\in\omega$ and $\overline{\omega}\subset Y:=(-1/2,1/2)^2$. \end{enumerate} For $m\in\mathbb{Z}^2$ we denote $Y_{m}^\delta:=\delta m+\delta\cdot Y$ and
$\displaystyle\omega_\varepsilon=\bigcup_{\substack{m\in\mathbb{Z}^2\text{ s.t.}\\Y_{m}^\delta\subset\Omega}}[\delta m+\lambda\delta\cdot\omega]$. For $b\in(0,1)$ we define \[
\begin{array}{cccc} a_\varepsilon:&\mathbb{R}^2&\to&\{b,1\}\\&x&\mapsto&\begin{cases}b&\text{if }x\in\omega_\varepsilon\\1&\text{otherwise}\end{cases} \end{array}. \] \subsection{Asymptotic}
\begin{itemize} \item[$\bullet$] In this article $\varepsilon\in(0;1)$ is a small number. We are essentially interested in the asymptotic $\varepsilon\to0$. \item[$\bullet$] The notation $o(1)$ means a quantity depending on $\varepsilon$ which tends to $0$ when $\varepsilon\to0$. \item[$\bullet$] The notation $o[f(\varepsilon)]$ means a quantity $g(\varepsilon)$ s.t. $\dfrac{g(\varepsilon)}{f(\varepsilon)}=o(1)$. \item[$\bullet$] The notation $\mathcal{O}[f(\varepsilon)]$ means a quantity $g(\varepsilon)$ s.t. $\dfrac{g(\varepsilon)}{f(\varepsilon)}$ is bounded for small $\varepsilon$. \end{itemize}
\section{Classical facts and the strongest theorem} {\bf Gauge invariance and Coulomb Gauge}
It is standard to note the {\it gauge invariance} of the energy $\mathcal{E}_{\varepsilon,h_{\rm ex}}$. Namely, two configurations $(u,A),(u',A')\in\mathscr{H}$ are gauge equivalent, denoted by $(u,A)\stackrel{{\rm gauge}}{\sim}(u',A')$, if there exists a gauge transformation from $(u,A)$ to $(u',A')$: \[ (u,A)\stackrel{{\rm gauge}}{\sim}(u',A')\Longleftrightarrow\begin{cases} \exists\,\varphi\in H^2(\Omega,\mathbb{R})\text{ s.t.}\\u'=u\e^{\imath\varphi}\text{ and }A'=A+\nabla\varphi\end{cases}. \]
Two gauge equivalent configurations describe the same physical state. Hence, physical quantities are those which are gauge invariant. For example, if $(u,A)\in\mathscr{H}$, then $|u|$, $|\nabla u-\imath Au|$, ${\rm curl}(A)$, and consequently $ \mathcal{E}_{\varepsilon,h_{\rm ex}}(u,A)$ and $\{|u|<b/2\}$, are gauge invariant.
In the context of the Ginzburg-Landau energy, a classical choice of gauge is the {\it Coulomb gauge}. We say that $(u,A)$ is in the Coulomb gauge if \begin{equation}\label{JaugeCoulomb} \begin{cases}{\rm div}(A)=0&\text{in }\Omega\\ A\cdot\nu=0&\text{on }\partial\Omega \end{cases}. \end{equation} One may prove [see Proposition 3.2 in \cite{SS1}] that, for $(u,A)\in\mathscr{H}$, there exists $\varphi\in H^2(\Omega,\mathbb{R})$ s.t. $A':=A+\nabla\varphi$ satisfies \eqref{JaugeCoulomb}. Then, letting $u'=u\e^{\imath\varphi}$, the configuration $(u',A')$ is in the Coulomb gauge and $(u,A)\stackrel{{\rm gauge}}{\sim}(u',A')$.
One of the main motivations in using the Coulomb gauge comes from the fact that $\|{\rm curl}(A)\|_{L^2}$ controls $\|A\|_{H^1}$. Namely there exists $C\geq 1$ [which depends only on $\Omega$] s.t. if $A$ satisfies \eqref{JaugeCoulomb} then [see Proposition 3.3 in \cite{SS1}] \begin{equation}\label{CoulombH1}
\|A\|_{H^1(\Omega,\mathbb{R}^2)}\leq C\|{\rm curl}(A)\|_{L^2(\Omega)} \end{equation} and \begin{equation}\label{CoulombH2}
\|A\|_{H^2(\Omega,\mathbb{R}^2)}\leq C\|{\rm curl}(A)\|_{H^1(\Omega)}. \end{equation} Moreover we have an easy representation of $A\in H^1(\Omega,\mathbb{R}^2)$ satisfying \eqref{JaugeCoulomb} \begin{equation}\label{RepresentCoulomGauge} \text{$A\in H^1(\Omega,\mathbb{R}^2)$ is a solution of \eqref{JaugeCoulomb}}\Longleftrightarrow\,\exists\, \xi\in H^1_0\cap H^2(\Omega,\mathbb{R})\text{ s.t. }A=\nabla^\bot\xi. \end{equation}
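The direct implication in \eqref{RepresentCoulomGauge} may be checked in one line [a standard computation]: if $\xi\in H^1_0\cap H^2(\Omega,\mathbb{R})$ and $A=\nabla^\bot\xi=(-\partial_2\xi,\partial_1\xi)$, then

```latex
{\rm div}(A)=-\partial_1\partial_2\xi+\partial_2\partial_1\xi=0\ \text{in }\Omega,
\qquad
A\cdot\nu=-\partial_\tau\xi=0\ \text{on }\partial\Omega,
\qquad
{\rm curl}(A)=\Delta\xi.
```

The boundary identity uses ${\rm tr}_{\partial\Omega}(\xi)=0$; the last identity, combined with elliptic regularity for the Dirichlet Laplacian, explains why \eqref{CoulombH1} and \eqref{CoulombH2} hold.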
{\bf Basic description of a minimizer}
We first note that, by direct minimization, for all $a_\varepsilon\in L^\infty(\Omega,[b;1])$, $\varepsilon,h_{\rm ex}>0$, the minimization problem of $ \mathcal{E}_{\varepsilon,h_{\rm ex}}$ in $\mathscr{H}$ admits [at least] a solution $(u_\varepsilon,A_\varepsilon)\in\mathscr{H}$.
Writing $h_\varepsilon:={\rm curl}(A_\varepsilon)$, it is standard to check that such a minimizer solves: \begin{equation}\label{FullGLuAEq} \begin{cases}
-(\nabla-\imath A_\varepsilon)^2u_\varepsilon=\dfrac{u_\varepsilon}{\varepsilon^2}(a_\varepsilon^2-|u_\varepsilon|^2)&\text{in }\Omega \\(\nabla-\imath A_\varepsilon)u_\varepsilon\cdot\nu=0&\text{on }\partial\Omega \\ -\nabla^\bot h_\varepsilon=u_\varepsilon\wedge(\nabla-\imath A_\varepsilon)u_\varepsilon&\text{in }\Omega \\ h_\varepsilon=h_{\rm ex}&\text{on }\partial\Omega \end{cases}. \end{equation} Using a maximum principle, we may get the following proposition: \begin{prop}\label{Prop.ModuLeq1}
Let $\varepsilon,h_{\rm ex}>0$ and $a\in L^\infty(\Omega,[b,1])$. If $(u_\varepsilon,A_\varepsilon)$ is a minimizer of $ \mathcal{E}(u,A)=\displaystyle\dfrac{1}{2}\int_\Omega|\nabla u-\imath Au|^2+\dfrac{1}{2\varepsilon^2}(a^2-|u|^2)^2+|{\rm curl}(A)-h_{\rm ex}|^2$ in $\mathscr{H}$ then $|u_\varepsilon|\leq1$ in $\Omega$. \end{prop} On the other hand, if $(u_\varepsilon,A_\varepsilon)$ is a minimizer of $\mathcal{E}_{\varepsilon,h_{\rm ex}}$ in the Coulomb gauge, then it solves \begin{equation}\label{FullGLuEq} \begin{cases}
-\Delta u_\varepsilon=\dfrac{u_\varepsilon}{\varepsilon^2}(a_\varepsilon^2-|u_\varepsilon|^2)-2\imath A_\varepsilon\cdot\nabla u_\varepsilon-|A_\varepsilon|^2u_\varepsilon&\text{in }\Omega \\\partial_\nu u_\varepsilon=0&\text{on }\partial\Omega \end{cases}. \end{equation}
A fundamental bound in the study concerns $\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}$. We have the following lemma which is a Gagliardo-Nirenberg type inequality with homogeneous Neumann boundary condition.
\begin{lem}\label{LemGNEst}\footnote{The proof of Lemma \ref{LemGNEst} is done by first using $\Phi:\mathbb{D}\to\Omega$, a conformal representation of $\Omega$ on the unit disk $\mathbb{D}$. Then we extend $\tilde u:=u\circ\Phi$ in the disk $B(0,2)$ by letting $u'(x)=\tilde u(x/|x|)$ for $x\in B(0,2)\setminus\mathbb{D}$. By using the boundary condition we have $u'\in H^2(B(0,2),\mathbb{C})$. And finally one may conclude by using an interior version of Lemma \ref{LemGNEst} [Lemma A.1 in \cite{BBH1}].}\label{NumFootNoteConformal} Let $\Omega\subset\mathbb{R}^2$ be a smooth bounded simply connected open set. There exists $C_\Omega\geq1$ s.t. if $u\in H^2(\Omega)$ is s.t. $\partial_\nu u=0$ on $\partial\Omega$ then \[
\|\nabla u\|_{L^\infty(\Omega)}^2\leq C_\Omega\left(\|\Delta u\|_{L^\infty(\Omega)}+\|u\|_{L^\infty(\Omega)}\right)\|u\|_{L^\infty(\Omega)}. \] \end{lem}
Consequently, with Lemma \ref{LemGNEst} [up to increasing the value of $C_\Omega$], for $\varepsilon,h_{\rm ex}>0$ and $a_\varepsilon\in L^\infty(\Omega,[b^2,1])$, if $(u_{\varepsilon},A_{\varepsilon})\in\mathscr{H}$ minimizes $\mathcal{E}_{\varepsilon,h_{\rm ex}}$, is in the Coulomb gauge and satisfies $\|A_\varepsilon\|_{L^\infty(\Omega)}\leq 1/\varepsilon$ [which is the case in the present work] then \begin{equation}\label{CrucialLipEst}
\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}\leq \dfrac{C_\Omega}{\varepsilon}. \end{equation}
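For the reader's convenience, here is a sketch of how \eqref{CrucialLipEst} follows from Lemma \ref{LemGNEst} [under the stated hypotheses $|u_\varepsilon|\leq1$, $a_\varepsilon\leq1$ and $\|A_\varepsilon\|_{L^\infty(\Omega)}\leq1/\varepsilon$]: from \eqref{FullGLuEq},

```latex
\|\Delta u_\varepsilon\|_{L^\infty(\Omega)}
\leq\frac{1}{\varepsilon^2}
+\frac{2}{\varepsilon}\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}
+\frac{1}{\varepsilon^2},
```

so that, with $X:=\varepsilon\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}$, Lemma \ref{LemGNEst} gives $X^2\leq C_\Omega(2+2X+\varepsilon^2)$, whence $X$ is bounded by a constant depending only on $\Omega$.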
In the homogeneous case as well as in the case without magnetic field, Estimate \eqref{CrucialLipEst} is crucial to describe the vorticity defects. The same holds in the present work. More precisely, the main result [Theorem \ref{THM}] states that the three above theorems remain true when the minimizer $(u_\varepsilon,A_\varepsilon)$ of $\mathcal{E}_{\varepsilon,h_{\rm ex}}$ in $\mathscr{H}$ is replaced by any configuration $(\tilde u_\varepsilon,\tilde A_\varepsilon)$ s.t. $\mathcal{E}_{\varepsilon,h_{\rm ex}}(\tilde u_\varepsilon,\tilde A_\varepsilon)=\inf_\mathscr{H}\mathcal{E}_{\varepsilon,h_{\rm ex}}+o(1)$ with two extra hypotheses on $|\tilde u_\varepsilon|$: $\|\nabla |\tilde u_\varepsilon|\|_{L^\infty(\Omega)}=\mathcal{O}(\varepsilon^{-1})$ and $|\tilde u_\varepsilon|\in W^{2,1}(\Omega)$ [see \eqref{HypGlobalSurQuasiMin}].\\
{\bf Lassoued-Mironescu decoupling}
In order to study pinned Ginzburg-Landau type energies, a nice trick was initiated by Lassoued and Mironescu in \cite{LM1}. Before explaining this trick we have to do a direct calculation for $(u,A)\in \mathscr{H}$: \begin{equation}\label{DevCarre}
\mathcal{E}_{\varepsilon,h_{\rm ex}}(u,A)
= E_\varepsilon(u)+\dfrac{1}{2}\int_\Omega-2(u\wedge\nabla u)\cdot A+|u|^2|A|^2+|{\rm curl}(A)-h_{\rm ex}|^2 \end{equation} with \[
E_\varepsilon(u)=\dfrac{1}{2}\int_\Omega|\nabla u|^2+\dfrac{1}{2\varepsilon^2}(a_\varepsilon^2-|u|^2)^2. \]
The Lassoued-Mironescu decoupling is obtained by first minimizing $ E_\varepsilon$ in $H^1(\Omega,\mathbb{C})$. It is clear that $E_\varepsilon$ admits minimizers and if $U$ minimizes $E_\varepsilon$ then it satisfies \begin{equation}\label{EqForU} \begin{cases}
-\Delta U=\dfrac{U}{\varepsilon^2}(a_\varepsilon^2-|U|^2)&\text{in }\Omega\\\partial_\nu U=0&\text{on }\partial\Omega \end{cases}. \end{equation}
By an energetic argument it is easy to prove that, if $U$ minimizes $ E_\varepsilon$ in $H^1(\Omega,\mathbb{C})$, then $b\leq|U|\leq1$. Moreover from \eqref{EqForU}, $U\wedge\nabla U=0$, {\it i.e.} $U=|U|\e^{\imath\theta}$ with $\theta\in\mathbb{R}$.
Then one may consider a scalar minimizer $U_\varepsilon:\Omega\to[b,1]$. This scalar minimizer may be seen as a regularization of $a_\varepsilon$ [see Proposition \ref{Prop.RegularizationLMSol}].
Using this scalar minimizer one may get the well known Lassoued-Mironescu decoupling: for $v\in H^1(\Omega,\mathbb{C})$ we have \begin{equation}\label{DecouplageLM}
E_\varepsilon(U_\varepsilon v)= E_\varepsilon(U_\varepsilon)+F_\varepsilon(v) \end{equation} with \[
F_\varepsilon(v):=\dfrac{1}{2}\int_\Omega U_\varepsilon^2|\nabla v|^2+\dfrac{U_\varepsilon^4}{2\varepsilon^2}(1-|v|^2)^2. \] Using this decoupling, one may prove that, for $\varepsilon>0$, there exists a unique positive minimizer $U_\varepsilon:\Omega\to[b,1]$ of $E_\varepsilon$ in $H^1(\Omega,\mathbb{R})$.
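For the reader's convenience we sketch the proof of \eqref{DecouplageLM} [a standard computation going back to \cite{LM1}]. Writing $U=U_\varepsilon$, we expand $|\nabla(Uv)|^2=|v|^2|\nabla U|^2+U^2|\nabla v|^2+U\nabla U\cdot\nabla|v|^2$; integrating the last term by parts [the boundary term vanishes since $\partial_\nu U=0$] and using \eqref{EqForU} we get

```latex
\frac12\int_\Omega|\nabla(Uv)|^2
=\frac12\int_\Omega U^2|\nabla v|^2
+\frac12\int_\Omega\frac{U^2}{\varepsilon^2}(a_\varepsilon^2-U^2)|v|^2.
```

On the other hand $(a_\varepsilon^2-U^2|v|^2)^2=(a_\varepsilon^2-U^2)^2+2U^2(a_\varepsilon^2-U^2)(1-|v|^2)+U^4(1-|v|^2)^2$. Summing the two contributions, the terms in $|v|^2$ and $1-|v|^2$ recombine into $\frac{1}{2\varepsilon^2}\int_\Omega U^2(a_\varepsilon^2-U^2)=\frac12\int_\Omega|\nabla U|^2$ [multiply \eqref{EqForU} by $U$ and integrate], which yields $E_\varepsilon(Uv)=E_\varepsilon(U)+F_\varepsilon(v)$.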
On the other hand, from \eqref{DevCarre} and \eqref{DecouplageLM}, for $(u,A)\in \mathscr{H}$ and $v=u/U_\varepsilon$ we have: \begin{eqnarray*} \mathcal{F}_{\varepsilon,h_{\rm ex}}(v,A)&:=&\mathcal{E}_{\varepsilon,h_{\rm ex}}(U_\varepsilon v,A)- E_\varepsilon(U_\varepsilon)
\\&=&\dfrac{1}{2}\int_\Omega U_\varepsilon^2|\nabla v-\imath A v|^2+\dfrac{U_\varepsilon^4}{2\varepsilon^2}(1-|v|^2)^2+|{\rm curl}(A)-h_{\rm ex}|^2.
\end{eqnarray*}
It is easy to check that $\mathcal{F}_{\varepsilon,h_{\rm ex}}(v,A)$ is gauge invariant. This functional is of major interest in the study since $(v,A)$ minimizes $\mathcal{F}_{\varepsilon,h_{\rm ex}}$ in $\mathscr{H}$ if and only if $(U_\varepsilon v,A)$ minimizes $\mathcal{E}_{\varepsilon,h_{\rm ex}}$ in $\mathscr{H}$.
An easy comparison argument implies that if $(v_\varepsilon,A_\varepsilon)$ minimizes $\mathcal{F}_{\varepsilon,h_{\rm ex}}$ then $\|v_\varepsilon\|_{L^\infty(\Omega)}\leq1$.
From now on we focus on the study of minimizers of $\mathcal{F}_{\varepsilon,h_{\rm ex}}$. Namely we have the following theorem.
\begin{thm}\label{THM} Assume that \eqref{NonDegHyp} holds and $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}.
Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be s.t. $\mathcal{F}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}+o(1)$. Assume also that \begin{equation}\label{HypGlobalSurQuasiMin}
\begin{cases}|v_\varepsilon|\in W^{2,1}(\Omega)
\\\|\nabla| v_\varepsilon|\|_{L^\infty(\Omega)}=\mathcal{O}(\varepsilon^{-1})
\end{cases}. \end{equation}
Then Theorems \ref{THM-A}, \ref{THM-B} and \ref{THM-C} hold for $u_\varepsilon=U_\varepsilon v_\varepsilon$. \end{thm} \begin{remark}\label{THMRmark}
Theorem \ref{THM} may be rephrased in terms of $U_\varepsilon$. Let $(h_{\rm ex})_{0<\varepsilon<1}\subset(0,\infty)$, $\{(u_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ and let $v_\varepsilon:=u_\varepsilon/U_\varepsilon\in H^1(\Omega,\mathbb{C})$. On the one hand, from the decoupling \eqref{DecouplageLM}, the family $\{(u_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ is s.t. $\mathcal{E}_{\varepsilon,h_{\rm ex}}(u_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{E}_{\varepsilon,h_{\rm ex}}+o(1)$ if and only if $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ is s.t. $\mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}_{\varepsilon,h_{\rm ex}}+o(1)$. On the other hand $v_\varepsilon$ satisfies \eqref{HypGlobalSurQuasiMin} if and only if we have $|u_\varepsilon|\in W^{2,1}(\Omega)$ and $\|\nabla| u_\varepsilon|\|_{L^\infty(\Omega)}=\mathcal{O}(\varepsilon^{-1})$.
\end{remark}
\section{Plan of the article and proof of Theorem \ref{THM}}
The proof of Theorem \ref{THM} is done in several steps. It is based on a perturbative argument: the energy $\mathcal{F}_{\varepsilon,h_{\rm ex}}$ is replaced with an energy $\tilde\mathcal{F}_{\varepsilon,h_{\rm ex}}$. This step is called the energetic cleaning [Section \ref{Sec.Clean}]. The functional $\tilde\mathcal{F}_{\varepsilon,h_{\rm ex}}$ is a perturbation of $\mathcal{F}_{\varepsilon,h_{\rm ex}}$: for $(v_\varepsilon,A_\varepsilon)\in\mathscr{H}$ which is in the Coulomb gauge and s.t. $\mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)=\mathcal{O}(h_{\rm ex}^2)$ we have $\tilde\mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)-\mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)=o(1)$ [see Proposition \ref{Prop.Nettoyage}]. In particular we have $\mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}_{\varepsilon,h_{\rm ex}}+o(1)$ if and only if $\tilde\mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\tilde\mathcal{F}_{\varepsilon,h_{\rm ex}}+o(1)$. \\
In Section \ref{SectionBoundVorticity} we apply a vortex ball construction of Sandier-Serfaty [Proposition \ref{Prop.BorneInfLocaliseeSandSerf}] and we follow the strategy of Sandier-Serfaty developed in \cite{SS2} to prove that the vorticity of a reasonable configuration is bounded [see Theorem \ref{ThmBorneDegréMinGlob}].
Once the bound on the vorticity is established, we adapt a result of Serfaty \cite{S1} which gives a decomposition of $\tilde{\mathcal{F}}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)$ in terms of $F_\varepsilon(v_\varepsilon)$ and the location of the vorticity defects [Proposition \ref{Docmpen}]. \\
The decomposition obtained in Proposition \ref{Docmpen} allows us to focus the study on the energy $F_\varepsilon$, which ignores the magnetic field. From this point, the study of a configuration $(v_\varepsilon,A_\varepsilon)$ is done in large part {\it via} classical results based on the case without magnetic field [as in \cite{BBH}]. To this end we adapt to our case some standard estimates ignoring the magnetic field; in particular the crucial notion of renormalized energy is presented in Section \ref{Sec.RenEn}.\\
With these preliminary results, in Section \ref{SecUpperBound}, for $d\in\mathbb{N}^*$, we construct competitors $(v_\varepsilon,A_\varepsilon)\in\mathscr{H}$ with $d$ quantized vorticity defects and then we get a sharp upper bound [see Proposition \ref{Prop.BorneSupSimple}]: \[ \inf_\mathscr{H}\mathcal{F}_{\varepsilon,h_{\rm ex}}\leq h_{\rm ex}^2 {\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathscr{L}_2(d)+o(1). \] Here ${\bf J_0}$ and $M_\O$ are independent of $\varepsilon$ and $d$, $\mathscr{L}_1(d)$ and $\mathscr{L}_2(d)$ are independent of $\varepsilon$, and $H^0_{c_1}$ is the leading term in the expression of the first critical field.\\
With the above upper bound for the minimal energy, the heart of the work consists in getting lower bounds for quasi-minimizers. Before getting such lower bounds we adapt to our case some tools in Section \ref{Sec.ToolBox}: an $\eta$-ellipticity result is proved [Proposition \ref{Prop.EtaEllpProp}], a construction of {\it ad-hoc} bad-discs is done [Proposition \ref{Prop.ConstrEpsMauvDisk}] and the strong effect of the dilution is expressed by various results in Section \ref{Sec.StrongEffectDilution}. \\
In Section \ref{Sect.ShapInfo} we begin the proof of the theorems. The part of Theorem \ref{THM} related to Theorem \ref{THM-A} is a direct consequence of Propositions \ref{PropToutLesDegEg1}, \ref{PropVortexProcheLambda}, \ref{Prop.BonEcartement} and \ref{Prop.PinningComplet} [and also Corollary \ref{CorDefPremierChampsCrit}].
The part of Theorem \ref{THM} related to Theorem \ref{THM-B} is given by Corollary \ref{Cor.ExactEnergyExp} and Proposition \ref{Prop.BorneSupSimple}.
The part of Theorem \ref{THM} related to Theorem \ref{THM-C} is a direct consequence of Corollary \ref{CorDefPremierChampsCrit} and Propositions \ref{Prop.SHarperdescriptionNonSatured}$\&$\ref{Prop.SHarperdescriptionNonSaturedII}.
\section{Some preliminaries}
\subsection{Energetic cleaning}\label{Sec.Clean}
In order to do the cleaning step, we have to get some estimates. Our goal is to study {\it quasi-minimizers} of $\mathcal{F}_{\varepsilon,h_{\rm ex}}$. To keep a simple presentation, we write $\mathcal{F}$ instead of $\mathcal{F}_{\varepsilon,h_{\rm ex}}$ and $F$ instead of $F_\varepsilon$ when there is no ambiguity.\\
From \eqref{CoulombH1}, \eqref{CoulombH2} and classical elliptic regularity arguments we have the following proposition.
\begin{prop}\label{Prop.BornesSups1}Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family of configurations in the Coulomb gauge. Then there is $\xi_\varepsilon\in H^1_0\cap H^2(\Omega,\mathbb{R})$ s.t. $A_\varepsilon=\nabla^\bot\xi_\varepsilon$. Moreover, if for some $h_{\rm ex}=h_{\rm ex}(\varepsilon)$ we have \begin{equation}\label{BorneFh^2} \text{$\mathcal{F}(v_\varepsilon,A_\varepsilon)=\mathcal{O}(h_{\rm ex}^2)$,} \end{equation} then there exists ${C}$ [independent of $\varepsilon$] s.t.
\begin{eqnarray}\label{FirstUpBoundXi}
\|\xi_\varepsilon\|_{H^2(\Omega)}&\leq& {C}h_{\rm ex}.
\end{eqnarray} Consequently, for $p\in[1,\infty)$, there exists $C_p>1$ [independent of $\varepsilon$] s.t. \begin{equation}\label{EstLpA}
\|\nabla \xi_\varepsilon\|_{L^p(\Omega)}=\|A_\varepsilon\|_{L^p(\Omega)}\leq C_ph_{\rm ex}. \end{equation} Moreover, up to increase the value of $C>1$ [independently of $\varepsilon$], we have \begin{equation}\label{EstGradMinL2}
\|\nabla v_\varepsilon\|_{L^2(\Omega)}\leq Ch_{\rm ex}. \end{equation} And if ${\rm curl}(A_\varepsilon)\in H^1(\Omega)$ then \begin{equation}\label{EstH3}
\|\xi_\varepsilon\|_{H^3(\Omega)}\leq C\|{\rm curl}(A_\varepsilon)\|_{H^1(\Omega)}. \end{equation} In particular, for further use, note that if ${\rm curl}(A_\varepsilon)\in H^1(\Omega)$ then $\xi_\varepsilon\in H^1_0\cap H^2\cap W^{1,\infty}(\Omega)$ and \begin{equation}\label{EstH4}
\|\nabla\xi_\varepsilon\|_{L^{\infty}(\Omega)}\leq C\|{\rm curl}(A_\varepsilon)\|_{H^1(\Omega)}. \end{equation} \end{prop}
In order to do the cleaning step we need to underline the fact that $U_\varepsilon$ may be seen as a regularization of $a_\varepsilon$ in $W^{1,\infty}$, with estimates that deteriorate when approaching $\partial\omega_\varepsilon$. \begin{prop}\label{Prop.RegularizationLMSol} There exist $C_b,s_b>0$ depending only on $b$ and $\Omega$ s.t. for $\varepsilon,r>0$ we have: \begin{equation}\label{EstGlobGradU}
\|\nabla U_\varepsilon\|_{L^\infty(\Omega)}\leq\dfrac{C_b}{\varepsilon}, \end{equation} \begin{equation}\label{EstLoinInterfaceU}
|U_\varepsilon-a_\varepsilon|\leq{C_b}\e^{-\frac{s_b r}{\varepsilon}}\text{ in }\{x\in\Omega\,|\,{\rm dist}(x,\partial\omega_\varepsilon)\geq r\}, \end{equation} \begin{equation}\label{EstLoinInterfaceGradU}
|\nabla U_\varepsilon|\leq\dfrac{C_b\e^{-\frac{s_b r}{\varepsilon}}}{\varepsilon}\text{ in }\{x\in\Omega\,|\,{\rm dist}(x,\partial\omega_\varepsilon)\geq r\}. \end{equation} \end{prop} \begin{proof} Estimate \eqref{EstGlobGradU} is a consequence of Lemma \ref{LemGNEst}. The proof of \eqref{EstLoinInterfaceU} is the same as for Proposition 2 in \cite{Publi3}. Estimate \eqref{EstLoinInterfaceGradU} is proved in Appendix \ref{ProofVotation}. \end{proof} Since the 2-dimensional Hausdorff measure of $\omega_\varepsilon$ satisfies $\mathcal{H}^2(\omega_\varepsilon)= \mathcal{O}(\lambda^2)$, from \eqref{EstLoinInterfaceU}, for $p\in[1,\infty)$, we have the following crucial estimate \begin{equation}\label{LpEstmU}
\|U_\varepsilon^2-1\|_{L^p(\Omega)}=\mathcal{O}(\lambda^{2/p}). \end{equation}
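Let us sketch \eqref{LpEstmU} [we assume here, consistently with the dilution regime, that an intermediate radius $r$ with $\varepsilon|\ln\varepsilon|\lesssim r\leq\lambda\delta$ is available]. Let $V_r:=\omega_\varepsilon\cup\{x\in\Omega\,|\,{\rm dist}(x,\partial\omega_\varepsilon)<r\}$. On $\Omega\setminus V_r$ we have $a_\varepsilon=1$, so \eqref{EstLoinInterfaceU} gives $|U_\varepsilon^2-1|\leq 2C_b\e^{-\frac{s_br}{\varepsilon}}$ there, while $|U_\varepsilon^2-1|\leq1$ on $V_r$ and $\mathcal{H}^2(V_r)=\mathcal{O}(\lambda^2+\lambda r/\delta)=\mathcal{O}(\lambda^2)$. Hence

```latex
\|U_\varepsilon^2-1\|_{L^p(\Omega)}^p
\leq\mathcal{H}^2(V_r)
+\big(2C_b\big)^p\e^{-\frac{p s_b r}{\varepsilon}}\mathcal{H}^2(\Omega)
=\mathcal{O}(\lambda^2),
```

which is \eqref{LpEstmU} after taking the $p$-th root.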
We are now in position to do the cleaning step. We assume that $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ is a family of configurations in the Coulomb gauge which satisfies \eqref{BorneFh^2}.
We denote $\alpha_\varepsilon=U_\varepsilon^2$ and $\rho_\varepsilon=|v_\varepsilon|$. From direct computations, by splitting the integrals with the identity $\alpha_\varepsilon=(\alpha_\varepsilon-1)+1$ and using $(1-\rho_\varepsilon)^4\leq(1-\rho_\varepsilon^2)^2$, we have the existence of $C\geq1$ [independent of $\varepsilon$] s.t.
\begin{equation}\label{Lem.Nettoyage1}
\left|\int_\Omega\alpha_\varepsilon(v_\varepsilon\wedge\nabla v_\varepsilon)\cdot A_\varepsilon-\int_\Omega(v_\varepsilon\wedge\nabla v_\varepsilon)\cdot A_\varepsilon\right|\leq\dfrac{C}{2}\left[\sqrt\lambda\,h_{\rm ex}^2+\lambda^{1/4}h_{\rm ex}^3\varepsilon\right]\leq C\sqrt\lambda\,h_{\rm ex}^2 \end{equation} and \begin{equation}\label{Lem.Nettoyage2}
\left|\int_\Omega \alpha_\varepsilon \rho_\varepsilon^2|A_\varepsilon|^2-\int_\Omega |A_\varepsilon|^2\right|\leq C h_{\rm ex}^2(\varepsilon h_{\rm ex}+\lambda).
\end{equation}
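\begin{remark}
Let us sketch, formally, how an estimate of the type \eqref{Lem.Nettoyage2} may be obtained; the bounds $\|A_\varepsilon\|_{H^1(\Omega)}\leq C h_{\rm ex}$ and $\|1-\rho_\varepsilon^2\|_{L^2(\Omega)}\leq C\varepsilon h_{\rm ex}$ used below are assumptions consistent with \eqref{BorneFh^2} and the Coulomb gauge. By the Sobolev embedding we then have $\|A_\varepsilon\|_{L^4(\Omega)}\leq C h_{\rm ex}$. Writing $\alpha_\varepsilon\rho_\varepsilon^2-1=\alpha_\varepsilon(\rho_\varepsilon^2-1)+(\alpha_\varepsilon-1)$, assuming $b^2\leq\alpha_\varepsilon\leq1$ and using \eqref{LpEstmU} with $p=2$, the Cauchy-Schwarz inequality gives
\[
\left|\int_\Omega(\alpha_\varepsilon\rho_\varepsilon^2-1)|A_\varepsilon|^2\right|\leq\left[\|\rho_\varepsilon^2-1\|_{L^2(\Omega)}+\|\alpha_\varepsilon-1\|_{L^2(\Omega)}\right]\|A_\varepsilon\|_{L^4(\Omega)}^2\leq C h_{\rm ex}^2(\varepsilon h_{\rm ex}+\lambda),
\]
which is the right hand side of \eqref{Lem.Nettoyage2}.
\end{remark}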
By combining \eqref{Lem.Nettoyage1} and \eqref{Lem.Nettoyage2} we immediately get the following proposition. \begin{prop}\label{Prop.Nettoyage} If $(v_\varepsilon,A_\varepsilon)$ is in the Coulomb gauge and satisfies \eqref{BorneFh^2} then \[
|\tilde\mathcal{F}(v_\varepsilon,A_\varepsilon)-\mathcal{F}(v_\varepsilon,A_\varepsilon)|\leq C h_{\rm ex}^2(\varepsilon h_{\rm ex}+\sqrt\lambda) \] with $C$ independent of $\varepsilon$ and \begin{equation}\label{EGALITEdenettoyga}
\tilde\mathcal{F}( v, A)=\tilde\mathcal{F}_{\varepsilon,h_{\rm ex}}( v, A):=F( v)+\dfrac{1}{2}\int_\Omega-2( v\wedge\nabla v)\cdot A+| A|^2+|{\rm curl}( A)-h_{\rm ex}|^2. \end{equation}
\end{prop} \begin{remark} \begin{enumerate} \item One may check that $\tilde\mathcal{F}$ is not gauge invariant if $\alpha_\varepsilon\not\equiv1$.
\item Note that if $\lambda^{1/4}|\ln\varepsilon|\to0$ and if $h_{\rm ex}=\mathcal{O}(|\ln\varepsilon|)$ then for $(v_\varepsilon,A_\varepsilon)\in\mathscr{H}$ which is in the Coulomb gauge and satisfies \eqref{BorneFh^2} we have $\tilde\mathcal{F}(v_\varepsilon,A_\varepsilon)-\mathcal{F}(v_\varepsilon,A_\varepsilon)=o(1)$ without hypothesis on $\delta\in(0;1)$. \end{enumerate} \end{remark}
\subsection{Bound on the vorticity and energetic decomposition}\label{SectionBoundVorticity} By applying Proposition 1 in \cite{SS2} with $U_\varepsilon\geq b$ we immediately get the following proposition, which requires no assumption on $\lambda,\delta\in(0;1)$. \begin{prop}\label{Prop.BorneInfLocaliseeSandSerf}
Assume $h_{\rm ex}\leq C_0 |\ln\varepsilon|$ with $C_0 \geq1$ which is independent of $\varepsilon$. Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ be a family s.t. $\mathcal{F}(v_\varepsilon,A_\varepsilon)\leq C_0 |\ln\varepsilon|^2$.
Then there exist $C,\varepsilon_0>0$ {[depending only on $\Omega$, $b$ and $C_0$]} s.t. for $\varepsilon<\varepsilon_0$ we have either $|v_\varepsilon|\geq1-|\ln\varepsilon|^{-2}$ in $\Omega$ or there exists a finite family of disjoint disks $\{B_i\,|\,i\in \mathcal{J}\}$ with $\mathcal{J}\subset\mathbb{N}^*$ [$\mathcal{J}$ depends on $\varepsilon$] and $B_i:=B(a_i,r_i)$ satisfying: \begin{enumerate}
\item $\{|v_\varepsilon|<1-|\ln\varepsilon|^{-2}\}\subset\cup B_i$
\item $\sum r_i<|\ln\varepsilon|^{-10}$,
\item writing $h_\varepsilon={\rm curl}(A_\varepsilon)$, $\rho_\varepsilon=|v_\varepsilon|$ and $v_\varepsilon=\rho_\varepsilon\e^{\imath\varphi_\varepsilon}$ [$\varphi_\varepsilon$ is locally defined] we have \begin{equation}\label{EstimateSS3ball}
\dfrac{1}{2}\int_{B_i}\rho_\varepsilon^2|\nabla \varphi_\varepsilon-A_\varepsilon|^2+| h_\varepsilon-h_{\rm ex}|^2\geq\pi|d_i|(|\ln\varepsilon|-C\ln|\ln\varepsilon|), \end{equation} with $d_i= {\rm deg}_{\partial B_i}(v_\varepsilon)$ if $B_i\subset\Omega$ and $0$ otherwise. \end{enumerate} \end{prop}
By following the argument of Sandier and Serfaty \cite{SS2}, we get the main result of this section.
\begin{thm}\label{ThmBorneDegréMinGlob}
Assume that $\lambda,\delta$ satisfy \eqref{CondOnLambdaDelta} and $\delta^2|\ln\varepsilon|\leq1$. Assume also Hypothesis \eqref{BorneKMagn} holds for $h_{\rm ex}$ with some $K\geq1$.
Then there exist $\varepsilon_K>0$ and $\mathcal{M}_K\geq1$ [independent of $\varepsilon$] s.t. if $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ is a family in the Coulomb gauge satisfying $\mathcal{F}(v_\varepsilon,A_\varepsilon)\leq\inf_{\mathscr{H}}\mathcal{F}+K\ln|\ln\varepsilon|$ then for $0<\varepsilon<\varepsilon_K$ we have \begin{equation}\label{CrucialBoundedkjqbsdfbn}
\dfrac{1}{2}\int_\Omega|\nabla v_\varepsilon|^2+\dfrac{1}{2\varepsilon^2}(1-|v_\varepsilon|^2)^2\leq\mathcal{M}_K|\ln\varepsilon|. \end{equation}
Moreover, if $|v_\varepsilon|\not>1-|\ln\varepsilon|^{-2}$ in $\Omega$, then, letting $\{B_i\,|\,i\in\mathcal{J}\}$ be a family of disks given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf}, for $0<\varepsilon<\varepsilon_K$ we have $d_i\geq0$ for all $i\in\mathcal{J}$, and there is $s_0>0$ [depending only on $\Omega$] s.t. for every $i\in\mathcal{J}$ with $d_i\neq0$ we have ${\rm dist}(B_i,\Lambda)\leq \mathcal{M}_K|\ln\varepsilon|^{-s_0}$.
\end{thm} The proof of this theorem is postponed to Appendix \ref{SectionProofAppSSBound}.
We let \begin{equation}\label{DefJ0} {\bf J_0}:=\tilde{\mathcal{F}}_{1,1}(1,\nabla^\bot\xi_0)=\dfrac{\tilde{\mathcal{F}}_{\varepsilon,h_{\rm ex}}(1,h_{\rm ex}\nabla^\bot\xi_0)}{h_{\rm ex}^2}. \end{equation}
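\begin{remark}
A direct computation makes \eqref{DefJ0} explicit. Taking $v\equiv1$ [so that $F(1)=0$ and $v\wedge\nabla v=0$], $A=\nabla^\bot\xi_0$ and $h_{\rm ex}=1$ in \eqref{EGALITEdenettoyga}, and using ${\rm curl}(\nabla^\bot\xi_0)=\Delta\xi_0$, we get
\[
{\bf J_0}=\dfrac{1}{2}\int_\Omega|\nabla\xi_0|^2+|\Delta\xi_0-1|^2,
\]
and the scaling identity $\tilde{\mathcal{F}}_{\varepsilon,h_{\rm ex}}(1,h_{\rm ex}\nabla^\bot\xi_0)=h_{\rm ex}^2{\bf J_0}$ follows by homogeneity.
\end{remark}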
Note that if $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ is a family of quasi-minimizers then \[ \mathcal{F}_{\varepsilon,h_{\rm ex}}(v_\varepsilon,A_\varepsilon)\leq\mathcal{F}_{\varepsilon,h_{\rm ex}}(1,\nabla^\bot\xi_0)+o(1)=h_{\rm ex}^2{\bf J_0}+o(1)=\mathcal{O}(h_{\rm ex}^2). \]
The discs given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf} are "too large" for our strategy. Indeed, one of the main arguments is a construction of {\it bad discs} in the spirit of \cite{BBH} which links $x_\varepsilon\in\{|v_\varepsilon|\leq1/2\}$ with the energetic cost in a ball $B(x_\varepsilon,\varepsilon^\mu)$ with small $\mu>0$. Namely, if $x_\varepsilon\in\{|v_\varepsilon|<1-|\ln\varepsilon|^{-2}\}\subset\cup B_i$ then the energetic cost in a ball $B(x_\varepsilon,\varepsilon^\mu)$ is not sufficiently large compared to our error term.
In the next proposition we present the appropriate framework of vortex balls required in our study. The first step is an energetic decomposition valid under some assumptions [no assumption on $\delta\in(0;1)$ is required]. \begin{prop}\label{Docmpen} Let $C_0>1$, $(v_\varepsilon)_{0<\varepsilon<1}\subset H^1(\Omega,\mathbb{C})$ and $h_{\rm ex}>0$ be s.t.
\begin{equation}\label{AbsNatBorneuh}
F(v_\varepsilon)\leq C_0|\ln\varepsilon|^2,\,h_{\rm ex}\leq C_0|\ln\varepsilon|. \end{equation}
Assume furthermore that $\lambda^{1/4}|\ln\varepsilon|\to0$ and, for $\varepsilon\in(0;1)$, either $|v_\varepsilon|>1/2$ in $\Omega$ or $v_\varepsilon$ admits a family of valued disks $\{(B(a_i,r_i),d_i)\,|\,i\in \mathcal{J}\}$ [$ \mathcal{J}$ is finite] s.t. : \begin{itemize} \item[$\bullet$] the disks $B_i=B(a_i,r_i)$ are pairwise disjoint
\item[$\bullet$] $\{|v_\varepsilon|\leq1/2\}\subset\cup_{i\in \mathcal{J}} B_i$
\item[$\bullet$] $\sum_{i\in \mathcal{J}} r_i<|\ln\varepsilon|^{-10}$
\item[$\bullet$] For $i\in \mathcal{J}$, letting $d_i=\begin{cases} {\rm deg}_{\partial B_i}(v_\varepsilon)&\text{if }B_i\subset\Omega\\0&\text{otherwise}\end{cases}$, we assume $\sum_{i\in\mathcal{J}}|d_i|\leq C_0$. \end{itemize} Then, if $(\xi_\varepsilon)_\varepsilon\subset H^1_0\cap H^2\cap W^{1,\infty}(\Omega,\mathbb{R})$ is s.t. \begin{equation}\label{BorneXiPourLaDec}
\|\nabla\xi_\varepsilon\|_{L^{\infty}(\Omega)}\leq C_0|\ln\varepsilon|, \end{equation}
writing $\zeta_\varepsilon:=\xi_\varepsilon-h_{\rm ex}\xi_0$ we have in the case $|v_\varepsilon|\not>1/2$ in $\Omega$:
\begin{equation}\label{FullDecDiscqVal0} \mathcal{F}(v_\varepsilon,\nabla^\bot\xi_\varepsilon)-h_{\rm ex}^2{\bf J_0}=F(v_\varepsilon)+2\pi h_{\rm ex}\sum_{i\in \mathcal{J}} d_i\xi_0(a_i)+\tilde{V}_{({\bf a},{\bf d})}(\zeta_\varepsilon)+o(1) \end{equation} where for $\zeta\in H^1_0\cap H^2(\Omega)$ we denote
\begin{equation}\label{FullDecDiscqValAlt}
\tilde{V}_{({\bf a},{\bf d})}(\zeta):=2\pi\sum_{i\in \mathcal{J}}d_i\zeta(a_i)+\dfrac{1}{2}\int_\Omega(\Delta\zeta)^2+|\nabla\zeta|^2. \end{equation}
And if $|v_\varepsilon|>1/2$ in $\Omega$ then
\begin{equation}\label{FullDecDiscqVal0Bisso}
\mathcal{F}(v_\varepsilon,\nabla^\bot\xi_\varepsilon)-h_{\rm ex}^2{\bf J_0}=F(v_\varepsilon)+\dfrac{1}{2}\int_\Omega(\Delta\zeta_\varepsilon)^2+|\nabla\zeta_\varepsilon|^2+o(1) \end{equation}
\end{prop} The proof of Proposition \ref{Docmpen} is an adaptation of an argument of Serfaty \cite{S1} [Section 4]. The proof is presented in Appendix \ref{Sec.PreuveDocmpen}.
Before going further, we state a result which will be useful in this article and whose proof is left to the reader. \begin{lem}\label{LemAuxConstructMagnPot} For $v\in H^1(\Omega,\mathbb{C})$, $0<\varepsilon<1$ and $h_{\rm ex}>0$, there exists a unique potential $A_{v,\varepsilon,h_{\rm ex}}=A_v\in H^1(\Omega,\mathbb{R}^2)$ s.t. $(v,A_v)$ is in the Coulomb gauge and satisfies \begin{equation}\label{MagnetiqueEq} \begin{cases}-{\nabla^\bot {\rm curl}(A_v)}{}=\alpha(\imath v)\cdot(\nabla v-\imath A_v v)&\text{in }\Omega\\{\rm curl}(A_v)=h_{\rm ex}&\text{on }\partial\Omega\end{cases}. \end{equation} Moreover $A_v$ is the unique solution of the minimization problem
\begin{equation}\label{Eq.MinPb.Pot}
\inf_{A\text{ satisfies \eqref{JaugeCoulomb}}}\mathcal{F}_{\varepsilon,h_{\rm ex}}(v,A) \end{equation} and from \eqref{CoulombH2} and \eqref{RepresentCoulomGauge} we have $A_v=\nabla^\bot\xi_v$ with $\xi_v\in H^1_0\cap H^2\cap W^{1,\infty}(\Omega,\mathbb{R})$. \end{lem} \begin{remark}\label{RemCOntrolTrainRER}
Assume $\lambda,\delta$ satisfy \eqref{CondOnLambdaDelta}, $\delta^2|\ln\varepsilon|\leq1$ and Hypothesis \eqref{BorneKMagn} holds. Consider $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ a family in the Coulomb gauge satisfying $\mathcal{F}(v_\varepsilon,A_\varepsilon)\leq\inf_{\mathscr{H}}\mathcal{F}+\mathcal{O}(\ln|\ln\varepsilon|)$. \begin{itemize}
\item From Theorem \ref{ThmBorneDegréMinGlob}, either $|v_\varepsilon|>1-|\ln\varepsilon|^{-2}$ in $\Omega$ or the family of disjoint disks given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf} satisfies the properties of the family of discs used in Proposition \ref{Docmpen}.
\item Let $A_{v_\varepsilon}=\nabla^\bot\xi_{v_\varepsilon}\in H^1(\Omega,\mathbb{R}^2)$ be given by Lemma \ref{LemAuxConstructMagnPot}. Then with \eqref{CoulombH2}$\&$\eqref{MagnetiqueEq} we have $A_{v_\varepsilon}\in L^\infty(\Omega)$ and $\|A_{v_\varepsilon}\|_{L^\infty(\Omega)}\leq C |\ln\varepsilon|$ where $C$ depends only on $\Omega$. \end{itemize}
\end{remark}
As noted by Serfaty \cite{S1}, with the help of the decomposition given by Proposition \ref{Docmpen}, we may prove that $h_{\rm ex}^2 {\bf J_0}$ is almost the minimal energy of a vortex-free configuration. \begin{cor}\label{CorEtudeSansVortex}
Let $\mathscr{H}^0:=\left\{(\rho\e^{\imath\varphi},A)\,|\,\rho\in H^1(\Omega,[0,\infty)),\,\varphi\in H^1(\Omega,\mathbb{R})\text{ and }A\in H^1(\Omega,\mathbb{R}^2)\right\}$. Note that $\mathscr{H}^0$ is gauge invariant. Assume $\lambda^{1/4}|\ln\varepsilon|\to0$. \begin{enumerate}
\item\label{CorEtudeSansVortex1} Let $\varepsilon=\varepsilon_n\downarrow0$. Assume $h_{\rm ex}=\mathcal{O}(|\ln\varepsilon|)$ and for each $\varepsilon$ let $(v_\varepsilon,\nabla^\bot\xi_\varepsilon)\in\mathscr{H}^0$ be s.t. $\xi_\varepsilon\in H^1_0\cap H^2\cap W^{1,\infty}(\Omega,\mathbb{R})$ with $\|\nabla\xi_\varepsilon\|_{L^\infty(\Omega)}=\mathcal{O}(|\ln\varepsilon|)$. Writing $\zeta_\varepsilon:=\xi_\varepsilon-h_{\rm ex}\xi_0$ we have:
\begin{equation}\label{FullDecDiscqValH0}
\mathcal{F}(v_\varepsilon,\nabla^\bot\xi_\varepsilon)=h_{\rm ex}^2{\bf J_0}+F(v_\varepsilon)+\dfrac{1}{2}\int_\Omega(\Delta\zeta_\varepsilon)^2+|\nabla\zeta_\varepsilon|^2+o(1). \end{equation}
Thus, if $\mathcal{F}(v_\varepsilon,\nabla^\bot\xi_\varepsilon)\leq h_{\rm ex}^2{\bf J_0}+o(1)$ then $\zeta_\varepsilon\to0$ in $H^2(\Omega)$, $|v_\varepsilon|\to1$ in $H^1(\Omega)$ and, up to extraction of a subsequence, there exists ${\tt v}\in\mathbb{S}^1$ s.t. $v_\varepsilon\to{\tt v}$ in $H^1(\Omega)$. \item\label{CorEtudeSansVortex2} We have $\inf_{\mathscr{H}^0}\mathcal{F}=h_{\rm ex}^2{\bf J_0}+o(1)$. \end{enumerate} \end{cor} \begin{proof} We prove the first assertion. Estimate \eqref{FullDecDiscqValH0} is a direct consequence of Proposition \ref{Docmpen}.
For the sake of simplicity we drop the subscript $\varepsilon$. If $\mathcal{F}(v,\nabla^\bot\xi)\leq h_{\rm ex}^2{\bf J_0}+o(1)$, then $F(v)+\|\zeta\|_{H^2(\Omega)}=o(1)$, hence $\zeta\to0$ in $H^2(\Omega)$ and $|v|\to1$ in $H^1(\Omega)$. Moreover $\|\nabla v\|_{L^2(\Omega)}=o(1)$ and $\|v\|_{L^2(\Omega)}=\mathcal{O}(1)$. This clearly implies the remaining part of the assertion.\\
We prove the second assertion. We first claim, by the definition of ${\bf J_0}$, that using the configuration $(1,h_{\rm ex}\nabla^\bot\xi_0)\in\mathscr{H}^0$ we have $ \inf_{\mathscr{H}^0}\mathcal{F}\leq h_{\rm ex}^2{\bf J_0}+o(1)$.
By the gauge invariance of $\mathscr{H}^0$ we may consider a family of quasi-minimizers $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}^0$ which is in the Coulomb gauge. We write $(v_\varepsilon,A_\varepsilon)=(v,A)$. Let $(\tilde v,\tilde A)\in\mathscr{H}^0$ be defined by $\tilde{v}=\underline{v}$ and by letting $\tilde{A}$ be the unique solution of \eqref{Eq.MinPb.Pot} associated to $\tilde v$.
By direct calculations we have: $\mathcal{F}(\tilde v,\tilde A)\leq\mathcal{F}(\tilde v, A)\leq \mathcal{F}( v, A)\leq h_{\rm ex}^2 {\bf J_0}+o(1)$.
Moreover, by denoting $h:={\rm curl}(\tilde A)$, we have $\nabla h=\alpha\tilde v\wedge(\nabla^\bot\tilde v-\tilde A^\bot\tilde v)$ in $\Omega$ and $h=h_{\rm ex}$ on $\partial\Omega$. Then $\|h\|_{H^1(\Omega)}=\mathcal{O}(|\ln\varepsilon|)$ and using \eqref{EstH3} we get $\|\tilde A\|_{H^2(\Omega)}=\mathcal{O}(|\ln\varepsilon|)$.
We are then able to apply the first assertion to get $\mathcal{F}(\tilde v,\tilde A)\geq h_{\rm ex}^2{\bf J_0}+o(1)$.
\end{proof} \subsection{Pseudo vortex structure}
We assume $\lambda^{1/4}|\ln\varepsilon|\to0$. Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family of configurations in the Coulomb gauge satisfying \eqref{AbsNatBorneuh}. We assume that $|v_\varepsilon|\not>1/2$ in $\Omega$ and that there exists $\{(B(a_i,r_i),d_i)\,|\,i\in \mathcal{J}\}$ as in Proposition \ref{Docmpen}. Then Proposition \ref{Docmpen} gives a decomposition of $\mathcal{F}(v,A)$. Except for the crucial hypothesis $\sum r_i<|\ln\varepsilon|^{-10}$, the radii $r_i$ play no role, nor do the disks $B(a_i,r_i)$ associated to a zero degree. We thus introduce an {\it ad-hoc} notion of {\it pseudo vortex}.
\begin{defi}\label{def.PseudoVortex} We assume that we have either $\varepsilon=\varepsilon_n\downarrow0$ or $0<\varepsilon<1$. We consider $(v_\varepsilon)_\varepsilon\subset H^1(\Omega,\mathbb{C})$, $(h_{\rm ex})_\varepsilon\subset(1,\infty)$ satisfying \eqref{AbsNatBorneuh}.
Let $\{B_i=B(a_i,r_i)\,|\,i\in \mathcal{J}\}$ be a family of disks as in Proposition \ref{Docmpen} and let $d_i=d^{(\varepsilon)}_i\in\mathbb{Z}$ be the associated "degrees" defined in Proposition \ref{Docmpen}. We denote $\mathcal{J}'=\mathcal{J}'_\varepsilon:=\{i\in\mathcal{J}\,|\,d_i\neq0\}$ [note that we have ${\rm Card}(\mathcal{J}'_\varepsilon)\leq\sum|d_i|=\mathcal{O}(1)$].
If $\mathcal{J}'\neq\emptyset$, then we say that $\{{({\bf a},{\bf d})}\}=\{(a_i,d_i)\,|\,i\in\mathcal{J}'\}$ is a set of {\it pseudo vortices} of $v_\varepsilon$.
\end{defi}
For a fixed configuration ${({\bf a},{\bf d})}$ of pseudo vortices, Serfaty studied in \cite{S1} the minimization problem of $\tilde V_{({\bf a},{\bf d})}$ [defined in \eqref{FullDecDiscqValAlt}]. We have the following result [Proposition 4.2 in \cite{S1}]. \begin{prop}\label{PropPartieMinimalSandH0}
Let ${({\bf a},{\bf d})}=\{(a_i,d_i)\,|\,i\in\mathcal{J}'\}\subset\Omega\times\mathbb{Z}^*$ be a configuration s.t. $1\leq{\rm Card}(\mathcal{J}')<\infty$ and $a_i\neq a_j$ for $i\neq j$. Then $\tilde V_{({\bf a},{\bf d})}(\zeta)$ is minimal for $\zeta=\zeta_{({\bf a},{\bf d})}$ which satisfies \begin{equation}\label{LondonEqModifie} \begin{cases} -\Delta^2\zeta_{({\bf a},{\bf d})}+\Delta\zeta_{({\bf a},{\bf d})}=2\pi\sum_{i\in\mathcal{J}'}d_i\delta_{a_i}&\text{in }\Omega \\ \zeta_{({\bf a},{\bf d})}=\Delta\zeta_{({\bf a},{\bf d})}=0&\text{on }\partial\Omega \end{cases}. \end{equation} [Here $\delta_a$ is the Dirac mass at $a\in\mathbb{R}^2$]
And we have
$\tilde{V}[\zeta_{({\bf a},{\bf d})}]=\pi\sum_{i\in\mathcal{J}'}d_i\zeta_{({\bf a},{\bf d})}(a_i)$.
\end{prop} In order to prove the above proposition, Serfaty introduced for $a\in\Omega$ the function $\zeta^a\in H^1_0\cap H^2(\Omega)$ which is the unique solution of \begin{equation}\nonumber \begin{cases} -\Delta^2\zeta^a+\Delta\zeta^a=2\pi\delta_{a}&\text{in }\Omega \\ \zeta^a=\Delta\zeta^a=0&\text{on }\partial\Omega \end{cases}. \end{equation} In particular we have $\zeta^a\leq0$ in $\Omega$. It is easy to see that
$\zeta_{({\bf a},{\bf d})}=\sum_{i\in\mathcal{J}'}d_i\zeta^{a_i}$ is the unique solution of \eqref{LondonEqModifie}.
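\begin{remark}
The minimality stated in Proposition \ref{PropPartieMinimalSandH0} may be recovered by the following [formal] computation. Write $\zeta_\star:=\zeta_{({\bf a},{\bf d})}$ and, for $\zeta\in H^1_0\cap H^2(\Omega)$, $\zeta=\zeta_\star+\eta$ with $\eta\in H^1_0\cap H^2(\Omega)$. Two integrations by parts [the boundary terms vanish since $\eta=0$ and $\Delta\zeta_\star=0$ on $\partial\Omega$] and \eqref{LondonEqModifie} give
\[
\int_\Omega\Delta\zeta_\star\Delta\eta+\nabla\zeta_\star\cdot\nabla\eta=\int_\Omega(\Delta^2\zeta_\star-\Delta\zeta_\star)\eta=-2\pi\sum_{i\in\mathcal{J}'}d_i\eta(a_i),
\]
hence, from \eqref{FullDecDiscqValAlt},
\[
\tilde{V}_{({\bf a},{\bf d})}(\zeta)=\tilde{V}_{({\bf a},{\bf d})}(\zeta_\star)+\dfrac{1}{2}\int_\Omega(\Delta\eta)^2+|\nabla\eta|^2\geq\tilde{V}_{({\bf a},{\bf d})}(\zeta_\star).
\]
The same computation with $\eta=\zeta_\star$ yields $\int_\Omega(\Delta\zeta_\star)^2+|\nabla\zeta_\star|^2=-2\pi\sum_{i\in\mathcal{J}'}d_i\zeta_\star(a_i)$ and thus $\tilde{V}[\zeta_\star]=\pi\sum_{i\in\mathcal{J}'}d_i\zeta_\star(a_i)$.
\end{remark}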
Lemma 4.6 in \cite{S1} gives important properties related with $\zeta^a$ and $\zeta_{({\bf a},{\bf d})}$: \begin{prop}\label{Prop.Information.Zeta-a} For $s\in(0,1)$, there exists $C_s>0$ s.t. for $a,b\in\Omega$ \[
\|\zeta^a\|_{L^\infty(\Omega)}\leq C_s{\rm dist}(a,\partial\Omega)^s \] and \[
\|\zeta^a-\zeta^b\|_{H^2(\Omega)}\leq C_s|a-b|^s. \]
Consequently there exists $C>0$ depending only on $\Omega$ s.t., if $\zeta_{({\bf a},{\bf d})}$ is the unique solution of \eqref{LondonEqModifie}, then \[
\tilde{V}[\zeta_{({\bf a},{\bf d})}]=\pi\sum_{i,j\in\mathcal{J}'}d_id_j\zeta^{a_i}(a_j)\leq C\left(\sum_{i\in\mathcal{J}'}|d_i|\right)^2. \]
\end{prop} For a further use we need the following lemma. \begin{lem}\label{Rk.RegularityLondonModified} Let ${({\bf a},{\bf d})}$ as in Proposition \ref{PropPartieMinimalSandH0} then $\zeta_{({\bf a},{\bf d})}\in H^1_0\cap H^2\cap W^{1,\infty}(\Omega,\mathbb{R})$ and there is $C\geq1$ depending only on $\Omega$ s.t. \[
\|\nabla\zeta_{({\bf a},{\bf d})}\|_{L^\infty(\Omega)}\leq\dfrac{C\sum|d_i|}{\min {\rm dist}(a_i,\partial\Omega)}. \] \end{lem} \begin{proof}
Let ${({\bf a},{\bf d})}$ be as in Proposition \ref{PropPartieMinimalSandH0}, with Proposition \ref{Prop.Information.Zeta-a} we have $\zeta_{({\bf a},{\bf d})}=\sum d_i\zeta^{a_i}\in H^1_0\cap H^2$ and $\|\zeta_{({\bf a},{\bf d})}\|_{H^2(\Omega)}\leq C\sum_i|d_i|$ where $C$ depends only on $\Omega$.
Moreover, for $a\in\Omega$, from \eqref{LondonEqModifie}, we have $\Delta\zeta_{({\bf a},{\bf d})}=\zeta_{({\bf a},{\bf d})}-\sum d_i\ln|x-a_i|-R_{({\bf a},{\bf d})}$ where $R_{({\bf a},{\bf d})}$ is the harmonic extension of ${\rm tr}_{\partial\Omega}(-\sum d_i\ln|x-a_i|)$ in $\Omega$.
Consequently there exists $C\geq1$ depending only on $\Omega$ s.t. \[
\|\Delta\zeta_{({\bf a},{\bf d})}\|_{L^3(\Omega)}\leq\dfrac{C\sum|d_i|}{\min {\rm dist}(a_i,\partial\Omega)} \]
and therefore by elliptic regularity and a Sobolev embedding we get the result. \end{proof} Until now, the only way to get a suitable magnetic potential associated to a function $v$ was to consider $A_v=A_{v,\varepsilon,\alpha}\in H^2(\Omega,\mathbb{R}^2)$, the unique solution of \eqref{Eq.MinPb.Pot}. The previous results show that, after the cleaning step, we may do asymptotically as well by using a magnetic potential depending only on a pseudo vortex structure of $v$ instead of on $v$ itself [see Remark \ref{Prop.BornéPourPotenteifjfjf}].
\begin{defi}\label{DefA_ad} Let $N\geq1$ and ${({\bf a},{\bf d})}\in(\O^N)^*\times(\mathbb{Z}^*)^N$, $h_{\rm ex}>0$. Then we define $A_{({\bf a},{\bf d})}:=h_{\rm ex}\nabla^\bot\xi_0+\nabla^\bot\zeta_{({\bf a},{\bf d})}$ where $\zeta_{({\bf a},{\bf d})}$ is the unique solution of $\eqref{LondonEqModifie}$, {\it the potential associated to ${({\bf a},{\bf d})}$}. \end{defi} \begin{remark}\label{Prop.BornéPourPotenteifjfjf}
Let $C_0>1$ and $(v_\varepsilon)_{0<\varepsilon<1}\subset H^1(\Omega,\mathbb{C})$, $h_{\rm ex}>0$ satisfying \eqref{AbsNatBorneuh} be s.t. $(v_\varepsilon)_{0<\varepsilon<1}$ admits a set of pseudo vortices $({({\bf a},{\bf d})}_\varepsilon)_{0<\varepsilon<1}$ with $\sum |d_i|\leq C_0$. We write $v\&{({\bf a},{\bf d})}$ instead of $v_\varepsilon\&{({\bf a},{\bf d})}_\varepsilon$.
Assume $\min{\rm dist}(a_i,\partial\Omega)>|\ln\varepsilon|^{-1}$ [in order to have $\|\nabla\zeta_{{({\bf a},{\bf d})}}\|_{L^\infty(\Omega)}=\mathcal{O}(|\ln\varepsilon|)$ with Lemma \ref{Rk.RegularityLondonModified}] and assume $\lambda^{1/4}|\ln\varepsilon|\to0$.
For $0<\varepsilon<1$, let $A_{v}\in H^1(\Omega,\mathbb{R}^2)$ be the unique solution of \eqref{Eq.MinPb.Pot} and $A_{{({\bf a},{\bf d})}}$ be defined in Definition \ref{DefA_ad}. Then we have $A_{{({\bf a},{\bf d})}}=\nabla^\bot\xi_{{({\bf a},{\bf d})}}$ and $A_{v}=\nabla^\bot\xi_{v}$ where $\xi_{{({\bf a},{\bf d})}},\xi_{v}\in H^1_0\cap H^2\cap W^{1,\infty}(\Omega,\mathbb{R})$ satisfy the hypotheses of Proposition \ref{Docmpen} [here we used \eqref{CoulombH2}$\&$\eqref{MagnetiqueEq}]. Therefore we have the following inequalities \[ \mathcal{F}(v,0)\geq\mathcal{F}(v,A_{v})=\tilde{\mathcal{F}}(v,A_{v})+o(1)\geq\tilde{\mathcal{F}}(v,A_{{({\bf a},{\bf d})}})+o(1), \] \[ \mathcal{F}(v,A_{v})\leq{\mathcal{F}}(v,A_{({\bf a},{\bf d})})=\tilde{\mathcal{F}}(v,A_{({\bf a},{\bf d})})+o(1). \]
In particular we have $\mathcal{F}(v,A_{v})=\mathcal{O}(|\ln\varepsilon|^2)$ and $\mathcal{F}(v,A_{{({\bf a},{\bf d})}})=\mathcal{O}(|\ln\varepsilon|^2)$. \end{remark} \subsection{Cluster of pseudo vortices} From a standard result for the homogenous case, it is expected that, for a reasonable magnetic field, the asymptotic location of pseudo vortices of a studied configuration is a subset of $\Lambda$. This problem is related to the {\it macroscopic location} of the pseudo vortices. To treat this problem we use an {\it ad-hoc} notion of {\it cluster of pseudo vortices}. \begin{defi} Let $N,\tilde N_0\in\mathbb{N}^*$, $\tilde N_0\leq N$, ${({\bf p},{\bf D})}\in(\overline{\Omega}^{\tilde N_0})^*\times\mathbb{Z}^{\tilde N_0}$, $\varepsilon=\varepsilon_n\downarrow0$ and ${({\bf a},{\bf d})}_\varepsilon\in(\O^N)^*\times\mathbb{Z}^N$ s.t. ${\bf d}$ is independent of $\varepsilon$. We say that $({({\bf a},{\bf d})}_\varepsilon)_\varepsilon$ admits a {\it cluster structure} on ${({\bf p},{\bf D})}$ if \begin{itemize}
\item for $i\in\{1,...,N\}$, $\lim a_i$ exists, $\lim a_i\in\{p_1,...,p_{\tilde N_0}\}$, and we write, for $k\in\{1,...,\tilde N_0\}$, $S_k:=\{i\in\{1,...,N\}\,|\,a_i\to p_k\}$, \item for $k\in\{1,...,\tilde N_0\}$, $S_k\neq\emptyset$,
\item for $k\in\{1,...,\tilde N_0\}$, $D_k=\sum_{i\in S_k}d_i$. \end{itemize} \end{defi} \begin{remark} In this article we will use the notion of cluster structure with ${({\bf a},{\bf d})}$ as in Proposition \ref{Docmpen} and ${\bf p}\subset\Lambda$. \end{remark}
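\begin{remark}
As a toy example [with hypothetical data], take $N=2$, $d_1=d_2=1$ and $a_1^\varepsilon\neq a_2^\varepsilon$ with $a_1^\varepsilon,a_2^\varepsilon\to p_1\in\Lambda$. Then $({({\bf a},{\bf d})}_\varepsilon)_\varepsilon$ admits a cluster structure on ${({\bf p},{\bf D})}=(p_1,2)$ with $\tilde N_0=1$ and $S_1=\{1,2\}$.
\end{remark}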
\begin{prop}\label{PropClusterI}Let $ N\geq1$, $\varepsilon=\varepsilon_n\downarrow0$, ${({\bf a},{\bf d})}_\varepsilon\in(\O^N)^*\times\mathbb{Z}^N$ s.t. $\sum|d_i|$ is bounded independently of $\varepsilon$. \begin{enumerate} \item\label{PropClusterI1} If $({({\bf a},{\bf d})}_\varepsilon)_\varepsilon$ admits a cluster structure on ${({\bf p},{\bf D})}$ [in which case ${\bf d}$ is independent of $\varepsilon$] then ${({\bf p},{\bf D})}$ is unique [up to reordering]. We say that ${({\bf p},{\bf D})}$ is the cluster of $({({\bf a},{\bf d})}_\varepsilon)_\varepsilon$. \item\label{PropClusterI2} Up to extraction of a subsequence, there exist $1\leq \tilde N_0\leq N$ and ${({\bf p},{\bf D})}\in(\overline{\Omega}^{\tilde N_0})^*\times\mathbb{Z}^{\tilde N_0}$ s.t. ${({\bf p},{\bf D})}$ is the cluster of $({({\bf a},{\bf d})}_\varepsilon)_\varepsilon$.
\item\label{PropClusterI3} If ${({\bf p},{\bf D})}$ is the cluster of $({({\bf a},{\bf d})}_\varepsilon)_\varepsilon$ then, denoting $\chi:=\max_k\max_{i\in S_k}|a^\varepsilon_i-p_k|$, we have \begin{equation}\label{SplitClustXi0}
\left|\sum_{k=1}^{\tilde N_0}\sum_{i\in S_k}|d_i||\xi_0(a^\varepsilon_i)-\xi_0(p_k)|\right|\leq C\chi \end{equation} and \begin{equation}\label{SplitClustTildeV}
\left|\tilde V[\zeta_{{({\bf a},{\bf d})}_\varepsilon}]-\tilde V[\zeta_{({\bf p},{\bf D})}]\right|\leq C\sqrt\chi \end{equation}
where $C$ depends only on $N$, $\sum|d_i|$ and $\Omega$. \end{enumerate} \end{prop} \begin{proof} The first two assertions are obvious. Estimate \eqref{SplitClustXi0} is direct by noting that $\xi_0$ is a Lipschitz function in $\Omega$. Estimate \eqref{SplitClustTildeV} is a direct consequence of Proposition \ref{Prop.Information.Zeta-a}.
\end{proof}
We then have:
\begin{cor}\label{Cor.DecompPourCluster}Assume that $\lambda,\delta,h_{\rm ex}$ satisfy \eqref{CondOnLambdaDelta} and \eqref{BorneKMagn} for some $K\geq0$ independent of $\varepsilon$. Assume also $\delta^2|\ln\varepsilon|\leq1$.
Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family s.t. $\mathcal{F}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}+K\ln|\ln\varepsilon|$ which is in the Coulomb gauge and let $\{({\bf a}_\varepsilon,{\bf d}_\varepsilon)={({\bf a},{\bf d})}\,|\,0<\varepsilon<1\}$ be a family of pseudo vortices associated to $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ [indexed on $\mathcal{J}=\mathcal{J}_\varepsilon$ possibly empty].
\begin{enumerate} \item Letting $A_{v_\varepsilon}\in H^1(\Omega,\mathbb{R}^2)$ be defined by Lemma \ref{LemAuxConstructMagnPot} we have \begin{equation}\label{NiceDecSharp} \mathcal{F}(v_\varepsilon,A_\varepsilon)\geq\mathcal{F}(v_\varepsilon,A_{v_\varepsilon})\geq h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{i\in\mathcal{J}}d_i\xi_0(a_i)+F(v_\varepsilon)+\tilde{V}[\zeta_{({\bf a},{\bf d})}]+o(1). \end{equation} Consequently \begin{equation}\label{NiceDec} \mathcal{F}(v_\varepsilon,A_\varepsilon)\geq h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{i\in\mathcal{J}}d_i\xi_0(a_i)+F(v_\varepsilon)+\mathcal{O}(1). \end{equation} \item Assume furthermore that ${({\bf a},{\bf d})}$ admits a cluster structure on ${({\bf p},{\bf D})}$. Then we have \begin{equation}\label{NiceDecSharpSplitTildeV} \mathcal{F}(v_\varepsilon,A_\varepsilon)\geq h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{i\in\mathcal{J}}d_i\xi_0(a_i)+F(v_\varepsilon)+\tilde{V}[\zeta_{({\bf p},{\bf D})}]+o(1). \end{equation}
\end{enumerate}
\end{cor} \begin{proof} The lower bounds \eqref{NiceDecSharp} and \eqref{NiceDec} are direct consequences of Theorem \ref{ThmBorneDegréMinGlob}, Lemma \ref{LemAuxConstructMagnPot}, Remark \ref{RemCOntrolTrainRER} and Propositions \ref{Prop.BornesSups1}$\&$\ref{Docmpen}$\&$\ref{PropPartieMinimalSandH0}.
Estimate \eqref{NiceDecSharpSplitTildeV} is a direct consequence of Proposition \ref{PropClusterI} and \eqref{NiceDecSharp}. \end{proof} We then have the following corollary.
\begin{cor}\label{CorGonzo}Assume that $\lambda,\delta,h_{\rm ex}$ satisfy \eqref{CondOnLambdaDelta} and \eqref{BorneKMagn}. Assume also $\delta^2|\ln\varepsilon|\leq1$.
Let $(v_\varepsilon)_{0<\varepsilon<1}\subset H^1(\Omega,\mathbb{C})$ be s.t. $|v_\varepsilon|\not>1/2$ in $\Omega$ and assume the existence of $(B_\varepsilon)_{0<\varepsilon<1}\subset H^1(\Omega,\mathbb{R}^2)$ s.t. $(v_\varepsilon,B_\varepsilon)$ is in the Coulomb gauge and $\mathcal{F}(v_\varepsilon,B_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}+\mathcal{O}(\ln|\ln\varepsilon|)$. Assume also that $({\bf a}_\varepsilon,{\bf d}_\varepsilon)=({\bf a},{\bf d})$ are pseudo-vortices as in Definition \ref{def.PseudoVortex} for $v_\varepsilon$ [note that we thus have $\sum |d_i|=\mathcal{O}(1)$], then \begin{equation}\label{NiceDecSharpBorneSup} \mathcal{F}(v_\varepsilon,A_{({\bf a},{\bf d})})=h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{}d_i\xi_0(a_i)+F(v_\varepsilon)+\tilde{V}[\zeta_{({\bf a},{\bf d})}]+o(1). \end{equation} where $A_{({\bf a},{\bf d})}:=h_{\rm ex}\nabla^\bot\xi_0+\nabla^\bot\zeta_{({\bf a},{\bf d})}$.
Consequently we get \begin{equation}\label{NiceDecSharpBorneSupBisso}
F(v_\varepsilon)\leq2\pi h_{\rm ex}\sum_{}d_i|\xi_0(a_i)|+\mathcal{O}(\ln|\ln\varepsilon|)\leq \pi b^2\sum_{}|d_i||\ln\varepsilon|+\mathcal{O}(\ln|\ln\varepsilon|). \end{equation} \end{cor} \begin{proof} Corollary \ref{CorGonzo} is a direct consequence of $\inf_\mathscr{H}\mathcal{F}\leq h_{\rm ex}^2{\bf J_0}$, Corollary \ref{Cor.DecompPourCluster} and Propositions \ref{Docmpen}$\&$\ref{Prop.Information.Zeta-a}. \end{proof} \begin{remark} We may state an analog of Corollary \ref{CorGonzo} if ${({\bf a},{\bf d})}$ admits a cluster structure. \end{remark}
\section{Renormalized energies}\label{Sec.RenEn}
\subsection{Macroscopic renormalized energy [at scale $1$]}\label{SecMacroRenEn} We consider in this section: \begin{itemize}
\item[$\bullet$] $N\in\mathbb{N}^*$, ${\bf z}={\bf z}^{(n)}\in(\O^N)^*:=\{(z_1,...,z_N)\subset\Omega\,|\,z_i\neq z_j\text{ for }i\neq j \}$, \item[$\bullet$] ${\bf d}=(d_1,...,d_N)\in\mathbb{Z}^N$, \item[$\bullet$] $\hbar=\hbar({\bf z}):=\min_i{\rm dist}(z_i,\partial\Omega)$. \end{itemize} We are going to deal with functions defined in the set $\Omega$ perforated by disks with radius ${\tilde r}={\tilde r}_n\downarrow0$: \[ \Omega_{\tilde r}=\Omega_{\tilde r}({\bf z}):=\Omega\setminus\cup_i\overline{B(z_i,{\tilde r})}. \]
We assume \begin{equation}\label{HypRayClass}
{\tilde r}<\dfrac{1}{8}\min\left\{\min_{i\neq j} |z_i-z_j|\,;\,\hbar\right\}. \end{equation} For a radius ${\tilde r}>0$ s.t. \eqref{HypRayClass} is satisfied, we consider the set of functions \[
\mathcal{I}^{\rm deg}_{\tilde r}:=\left\{w\in H^1(\Omega_{\tilde r},\mathbb{S}^1)\,|\, {\rm deg}_{\partial B(z_i,{\tilde r})}(w)=d_i\text{ for }i\in\{1,...,N\}\right\} \] and \[
\mathcal{I}^{\rm Dir}_{\tilde r}:=\left\{w\in H^1(\Omega_{\tilde r},\mathbb{S}^1)\,\left|\,\begin{array}{c}w(z_i+{\tilde r}\e^{\imath\theta})={ C}_i\e^{\imath d_i\theta}\text{ for }i\in\{1,...,N\},\\\,({C}_1,...,{ C}_N)\in(\mathbb{S}^1)^N\end{array}\right.\right\}. \] In this section we are interested in the minimization of the Dirichlet functional in $\mathcal{I}^{\rm deg}_{\tilde r}$ and $\mathcal{I}^{\rm Dir}_{\tilde r}$.
Before beginning we state an easy result proved by direct minimization [the proof is left to the reader, see \cite{BBH}]. \begin{prop} For $N\geq 1$, ${\bf (z,d)}\in(\O^N)^*\times\mathbb{Z}^N$ and ${\tilde r}>0$ s.t. \eqref{HypRayClass} is satisfied, the following minimization problems admit solutions: \begin{equation}\label{MinPropDeg}
I_{\tilde r}^{\rm deg}=I_{\tilde r}^{\rm deg}{\bf (z,d)}:=\inf_{w\in\mathcal{I}_{\tilde r}^{\rm deg}}\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w|^2 \end{equation} and \begin{equation}\label{MinPropDir}
I_{\tilde r}^{\rm Dir}=I_{\tilde r}^{\rm Dir}{\bf (z,d)}:=\inf_{w\in\mathcal{I}_{\tilde r}^{\rm Dir}}\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w|^2. \end{equation} Moreover, these solutions are unique up to multiplication by a constant in $\mathbb{S}^1$. \end{prop} \subsubsection{Study of $I^{\rm deg}_{\tilde r}$ and $I^{\rm Dir}_{\tilde r}$}\label{UnimSection} Following \cite{BBH}, it is standard to define the {\it canonical harmonic map associated to ${\bf (z,d)}$}. \begin{defi}\label{DefApplican}Let $N\in\mathbb{N}^*$ and ${\bf (z,d)}\in(\O^N)^*\times\mathbb{Z}^N$. A function $w^\zd_\star\in \cap_{0<p<2}W^{1,p}(\Omega,\mathbb{S}^1)\cap C^\infty(\Omega\setminus\{z_1,...,z_N\},\mathbb{S}^1)$ is the {\it canonical harmonic map associated to the singularities ${\bf (z,d)}$} if \begin{equation}\label{ApplicanAssocieSingDeg}
w^\zd_\star(z)={\rm e}^{\imath\varphi_\star(z)}\prod_{i=1}^N\left(\dfrac{z-z_i}{|z-z_i|}\right)^{d_i}\text{ with }\begin{cases}\text{$\varphi_\star$ is harmonic in $\Omega$}\\\partial_\nuw^\zd_\star=0\text{ on }\partial\Omega,\,\displaystyle\int_{\partial\Omega}\varphi_\star=0\end{cases}. \end{equation} \end{defi} \begin{remark}\label{Remark.DefConjuHarmPhase} In this framework, it is classic to define $\Phi^\zd_\star$ [with the notation of Definition \ref{DefApplican}], the unique solution of \[ \begin{cases} \Delta\Phi^\zd_\star=2\pi\sum_{i=1}^Nd_i\delta_{z_i}&\text{in }\Omega \\ \Phi^\zd_\star=0&\text{on }\partial\Omega \end{cases}. \] This function satisfies $\nabla^\bot\Phi^\zd_\star=w^\zd_\star\wedge\nablaw^\zd_\star$. Moreover, by denoting $R_{\bf (z,d)}$ the unique solution of \[ \begin{cases} \Delta R_{\bf (z,d)}=0&\text{in }\Omega \\
R_{\bf (z,d)}(z)=-\sum_i d_i\ln|z-z_i|&\text{on }\partial\Omega \end{cases}, \]
we have $\Phi^\zd_\star(z)=\sum_i d_i\ln|z-z_i|+R_{\bf (z,d)}(z)$. \end{remark} We first study the asymptotic behavior of minimizers of $I^{\rm deg}_{\tilde r}{\bf (z,d)}$ when ${\tilde r}\to0$. \begin{prop}\label{MinimalMapHomo}
Let $N\in\mathbb{N}^*$, ${\bf (z,d)}={\bf (z,d)}^{(n)}\subset(\O^N)^*\times\mathbb{Z}^N$ and $\hbar:=\min_i{\rm dist}(z_i,\partial\Omega)$. We assume that $\sum_i|d_i|=\mathcal{O}(1)$.
For ${\tilde r}>0$ s.t. \eqref{HypRayClass} is satisfied, we may consider $w^\zd_\Rad$, the unique solution of the problem \begin{equation}\label{ShriHoleBIDeg1}
I^{\rm deg}_{\tilde r}{\bf (z,d)}:=\inf_{w\in\mathcal{I}_{\tilde r}^{\rm deg}}\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w|^2, \end{equation} of the form \begin{equation}\label{ExprMinShrSol}
w^\zd_\Rad(z)={\rm e}^{\imath\varphi_{\tilde r}(z)}\prod_{i=1}^N\left(\dfrac{z-z_i}{|z-z_i|}\right)^{d_i}\text{ with }\begin{cases}\varphi_{\tilde r}\in H^1\cap C^\infty(\Omega_{\tilde r},\mathbb{R})\\\displaystyle\int_{\partial\Omega}\varphi_{\tilde r}=0\end{cases}. \end{equation}
Then there exists $C>0$ [depending only on $\Omega$, $N$ and the bound on $\sum_i|d_i|$] s.t. \begin{equation}\label{BorneGradWstar}
\|\nabla w^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\leq\dfrac{C(1+|\ln{\tilde r}|)}{{\tilde r}}. \end{equation} We denote \begin{equation}\label{DefX} X:=\begin{cases}
\dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}\left(1+ \dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}\right)&\text{if }N=1 \\
\left(\dfrac{{\tilde r}}{\min_{i\neq{j}}|z_{i}-z_j|}+\dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}\right)\left(1+ \dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}\right)&\text{if }N\geq2 \end{cases} \end{equation} and we have \begin{equation}\label{ConvergenceH1ShrHolklgBorne}
\|\varphi_{\tilde r}-\varphi_\star\|^2_{H^1(\Omega_{\tilde r})}\leq C X, \end{equation} \begin{equation}\label{ConvergenceShrHolklgBorne}
0\leq \dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2-\inf_{w\in\mathcal{I}_{\tilde r}^{\rm deg}}\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w|^2\leq CX. \end{equation} Moreover, if there exists $\eta>0$ [independent of $n$] s.t. $\hbar>\eta$ then \eqref{BorneGradWstar} may be refined into \begin{equation}\label{BorneGradWstarSpeciale}
\|\nabla w^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\leq\dfrac{C}{{\tilde r}}. \end{equation} \end{prop} The proof of Proposition \ref{MinimalMapHomo} is in Appendix \ref{PreuvePropUniModComp}.
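\begin{remark}
As a direct consequence of \eqref{DefX}, in the non-degenerate case where the singularities stay away from $\partial\Omega$ and from each other [{\it i.e.}, $\hbar\geq\eta$ and, if $N\geq2$, $\min_{i\neq j}|z_i-z_j|\geq\eta$ with $\eta>0$ independent of $n$], we have $X=\mathcal{O}({\tilde r})$. In this case \eqref{ConvergenceH1ShrHolklgBorne} and \eqref{ConvergenceShrHolklgBorne} give errors of order ${\tilde r}$.
\end{remark}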
By adapting the proof of Proposition 5.1 in \cite{S1} we have \begin{prop}\label{Prop.EnergieRenDef}
For $N\geq 1$, there exists a map $W^{\tt macro}_N=W^{\tt macro}:(\O^N)^*\times\mathbb{Z}^N\to\mathbb{R}$ s.t. for sequences ${\bf (z,d)}={\bf (z,d)}^{(n)}\in(\O^N)^*\times\mathbb{Z}^N$ and ${\tilde r}={\tilde r}_n\to0$ satisfying \eqref{HypRayClass} and s.t. ${\bf d}$ is independent of $n$, there exists $C\geq1$ [depending only on $N$, $\sum|d_i|$ and $\Omega$] s.t. \[
\left|\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2-\pi\sum_id_i^2|\ln{\tilde r}|-W^{\tt macro}{\bf (z,d)}\right|\leq CX \] with \[
W^{\tt macro}{\bf (z,d)}=-\pi\sum_{i\neq j}d_id_j\ln|z_i-z_j|-\pi\sum_id_iR_{\bf (z,d)}(z_i), \] \[
R_{\bf (z,d)}\in C^\infty(\Omega,\mathbb{R})\text{ satisfies }\|R_{\bf (z,d)}\|_{L^\infty(\Omega)}\leq C(1+|\ln\hbar|). \] \end{prop} Proposition \ref{Prop.EnergieRenDef} is proved in Appendix \ref{PreuvelammeShrinkSerfaty}. We immediately obtain from Proposition \ref{Prop.EnergieRenDef} the following corollary. \begin{cor}\label{CorBorneGrossEneStar}
Under the hypotheses of Proposition \ref{Prop.EnergieRenDef} and assuming that there exists $C_1>0$ [independent of ${\tilde r}$] s.t. $\dfrac{{\tilde r}(1+|\ln\hbar|)}{\hbar}\leq C_1$, there is $C>1$ [depending only on $\Omega$, $N$, $\sum_i |d_i|$ and $C_1$] s.t. $\displaystyle\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2\leq C|\ln{\tilde r}|$.
\end{cor} We end this section by linking $I^{\rm deg}_{\tilde r}$ and $I^{\rm Dir}_{\tilde r}$. \begin{prop}\label{Prop.ConditionDirEnergieRen}
Let $N\geq1$, ${\bf z}\in(\O^N)^*$ and ${\tilde r}={\tilde r}_n\downarrow0$ satisfying \eqref{HypRayClass}. Assume $\dfrac{{\tilde r}}{\hbar}\to0$ and if $N\geq2$, we also assume $\dfrac{{\tilde r}}{\min_{i\neq j}|z_i-z_j|}\to0$.
Let \[
\eta:=\begin{cases}10^{-1}\hbar&\text{if }N=1\\10^{-1}\min\{\hbar\,;\,\min_{i\neq j}|z_i-z_j|\}&\text{if }N\geq2\end{cases}. \] Assume furthermore \[
Z:=\dfrac{1}{\ln(\eta/{\tilde r})}\left[\dfrac{\eta(1+|\ln(\hbar)|)}{\hbar}+1\right]^2\to0. \]
Then for ${\bf d}\in\mathbb{Z}^N$ [independent of $n$], there exists $C>1$ [depending only on $\Omega$, $N$ and $\sum|d_i|$] s.t. \[
0\leq\inf_{w\in\mathcal{I}_{\tilde r}^{\rm Dir}}\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w|^2-\inf_{w\in\mathcal{I}_{\tilde r}^{\rm deg}}\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w|^2\leq C(X+Z). \] \end{prop} Proposition \ref{Prop.ConditionDirEnergieRen} is proved in Appendix \ref{Sec.PreuvelammeShrinkSerfatyDir}. \subsubsection{Macroscopic renormalized energy and cluster of vortices} We first state an easy lemma. \begin{lem}\label{LemPseudoCondRzd} \begin{enumerate}
\item Let $N\in\mathbb{N}^*$ and ${\bf d}\in\mathbb{Z}^N$. Let $\chi>0$ and ${\bf z},{\bf z}'\in(\O^N)^*$ be s.t. for $i\in\{1,...,N\}$ we have $|z_i-z_i'|\leq\chi$. Then we have \[
\|R_{\bf (z,d)}-R_{({\bf z}',{\bf d})}\|_{L^\infty(\Omega)}\leq \sum_i|d_i|\dfrac{\chi}{\max\{\hbar({\bf z}),\hbar({\bf z'})\}}. \] \item Let $1\leq \tilde N_0\leq N$, ${\bf p}\in(\Omega^{\tilde N_0})^*$, ${\bf (z,d)}={\bf (z,d)}^{(n)}\in(\O^N)^*\times\mathbb{Z}^N$ be s.t. ${\bf d}$ is independent of $n$ and for $i\in\{1,...,N\}$ there exists $k\in\{1,...,\tilde N_0\}$ s.t. $z_i\to p_k$. We let $\chi:=\max_i{\rm dist}(z_i,\{p_1,...,p_{\tilde N_0}\})$.
For $k\in\{1,...,\tilde N_0\}$ we let $D_k:=\displaystyle\sum_{z_i\to p_k}d_i$ and ${\bf D}=(D_1,...,D_{\tilde N_0})$. Then we have \[
\|R_{\bf (z,d)}-R_{{({\bf p},{\bf D})}}\|_{L^\infty(\Omega)}\leq \sum_i|d_i|\dfrac{\chi}{\hbar({\bf p})}. \] \end{enumerate} \end{lem}
\begin{proof}The first assertion is obtained with the help of the maximum principle and the bound $|R_{\bf (z,d)}-R_{({\bf z}',{\bf d})}|\leq\sum_i|d_i|\dfrac{\chi}{\max\{\hbar({\bf z}),\hbar({\bf z}')\}}$ on $\partial\Omega$. The second assertion follows in the same way. \end{proof} With Lemma \ref{LemPseudoCondRzd} we may exploit a cluster structure for $W^{\rm macro}$. \begin{prop}\label{Prop.RenEnergieCluster} Let $1\leq \tilde N_0\leq N$, ${\bf p}\in(\Omega^{\tilde N_0})^*$ [independent of $n$] and write \[
\gamma_{\bf p}:=\begin{cases}1&\text{if }\tilde N_0=1\\{\min_{k\neq l}|p_k-p_l|}&\text{otherwise}\end{cases}. \] Let ${\bf (z,d)}={\bf (z,d)}^{(n)}\in(\O^N)^*\times\mathbb{Z}^N$ be s.t. ${\bf d}$ is independent of $n$ and for $i\in\{1,...,N\}$ there exists $k\in\{1,...,\tilde N_0\}$ s.t. $z_i\to p_k$. We denote $\chi:=\max_i{\rm dist}(z_i,\{p_1,...,p_{\tilde N_0}\})$.
For $k\in\{1,...,\tilde N_0\}$ we denote $D_k:=\displaystyle\sum_{z_i\to p_k}d_i$ and ${\bf D}=(D_1,...,D_{\tilde N_0})$. Then there exists $C\geq1$ [depending only on $\Omega,N$ and $\sum|d_i|$] s.t. \begin{eqnarray*}
&&\left|W_N^{\rm macro}{\bf (z,d)}-\left(W_{\tilde N_0}^{\rm macro}{({\bf p},{\bf D})}-\pi\sum_{k=1}^{\tilde N_0}\sum_{\substack{z_i,z_j\to p_k\\i\neq j}}d_id_j\ln|z_i-z_j|\right)\right|
\\&\leq& C\chi\left(\dfrac{1+|\ln[\hbar({\bf p})]|}{\hbar({\bf p})}+\dfrac{1}{\gamma_{\bf p}}\right). \end{eqnarray*} \end{prop} \begin{proof} We have \begin{equation}\nonumber W^{\tt macro}{\bf (z,d)}
=-\pi\sum_{k=1}^{\tilde N_0}\sum_{\substack{z_i,z_j\to p_k\\i\neq j}}d_id_j\ln|z_i-z_j|-\pi\sum_{\substack{z_i\to p_k\\ z_j\to p_l\\ k\neq l}}d_id_j\ln|z_i-z_j|-\pi\sum_id_iR_{\bf (z,d)}(z_i). \end{equation}
It is easy to check that \begin{eqnarray}\label{ClusterEst1}
\sum_{\substack{z_i\to p_k\\ z_j\to p_l\\ k\neq l}}d_id_j\ln|z_i-z_j|
&=&\sum_{k\neq l}D_kD_l\ln|p_k-p_l|+H \end{eqnarray}
with $|H|\leq 4\left(\sum_i|d_i|\right)^2\dfrac{\chi}{\gamma_{\bf p}}$ for sufficiently large $n$.
On the other hand, from Lemma \ref{LemPseudoCondRzd} [second assertion], we have $\|R_{\bf (z,d)}-R_{{({\bf p},{\bf D})}}\|_{L^\infty(\Omega)}\leq \sum_i|d_i|\dfrac{\chi}{\hbar({\bf p})}$. From standard pointwise estimates for the gradient of harmonic functions [see \eqref{lkjbblkjn0}] there exists $C\geq1$ depending only on $\Omega$, $\sum|D_k|$ and $N$ [here we used $1\leq \tilde N_0\leq N$] s.t. for $z_i\to p_k$ we have $\left|R_{{({\bf p},{\bf D})}}(z_i)-R_{{({\bf p},{\bf D})}}(p_k)\right|\leq C\chi\dfrac{1+|\ln[\hbar({\bf p})]|}{\hbar({\bf p})}$.
Then, up to changing the value of $C$, we have \begin{equation}\label{ClusterEst2}
\left|\sum_id_iR_{\bf (z,d)}(z_i)-\sum_kD_kR_{{({\bf p},{\bf D})}}(p_k)\right|\leq C\chi\dfrac{1+|\ln[\hbar({\bf p})]|}{\hbar({\bf p})}. \end{equation} By combining \eqref{ClusterEst1} and \eqref{ClusterEst2} we get the result. \end{proof}
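\begin{remark}
A simple illustration of Proposition \ref{Prop.RenEnergieCluster}: take $N=2$, $\tilde N_0=1$, $d_1=d_2=1$ and $z_1,z_2\to p_1$, so that $D_1=2$. The proposition then reads
\[
W_2^{\rm macro}{\bf (z,d)}=W_1^{\rm macro}(p_1,2)-2\pi\ln|z_1-z_2|+\mathcal{O}\left(\chi\left[\dfrac{1+|\ln[\hbar({\bf p})]|}{\hbar({\bf p})}+1\right]\right):
\]
at leading order, the energy of a cluster of two degree-one singularities is the energy of a single singularity of degree $2$ corrected by the intra-cluster interaction $-2\pi\ln|z_1-z_2|$.
\end{remark}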
\subsection{Mesoscopic renormalized energy [at scale $h_{\rm ex}^{-1/2}$]}\label{SecMesoRenEn} From the work of Sandier and Serfaty we may obtain mesoscopic information. To this end we need a non-degeneracy assumption on the minimal points of $\xi_0$: we assume in this section that Hypothesis \eqref{NonDegHyp} holds.
Let \begin{equation}\label{DefEtaO}
\eta_\Omega:=\begin{cases}10^{-3}\min\{1;{\rm dist}(\Lambda,\partial\Omega)\}&\text{if }N_0=1\\10^{-3}\min\{1;{\rm dist}(\Lambda,\partial\Omega);\min_{k\neq l}|p_k-p_l|\}&\text{if }N_0\geq 2\end{cases}. \end{equation} For $p\in\Lambda$, by applying Lemma 11.1 in \cite{SS1} in the disk $B(p,\eta_\Omega)$, we get the following proposition. \begin{prop}\label{EnergieRenMeso} Assume that Hypothesis \eqref{NonDegHyp} holds. Let $D\in\mathbb{N}^*$ and $h_{\rm ex}\uparrow\infty$ when $\varepsilon\to0$. Then for $p\in\Lambda$ and $R=R(\varepsilon)\to0$ s.t. $R\sqrt{h_{\rm ex}}\to\infty$ we have \begin{eqnarray}\nonumber
&&\inf_{{\bf z}\in[B(p,R)^D]^*}\left\{-\pi\sum_{i\neq j}\ln|z_i-z_j|+2\pi h_{\rm ex}\sum_i[\xi_0(z_i)-\xi_0(p)]\right\} \\\label{DevMesoscopicDef}&=&\dfrac{\pi}{2}(D^2-D)\ln\left(\dfrac{h_{\rm ex}}{D}\right)+C_{p,D}+o(1) \end{eqnarray} with
\begin{equation}\label{DefCpD} C_{p,D}:=\min_{[(\mathbb{R}^2)^D]^*}W^{\rm meso}_{p,D} \end{equation} and \begin{equation}\label{DefEnergyRenMeso} \begin{array}{cccc}
W^{\rm meso}_{p,D}:&[(\mathbb{R}^2)^D]^*&\to&\mathbb{R}\\&{\bf x}=(x_1,...,x_D)&\mapsto&\displaystyle-\pi\sum_{i\neq j}\ln|x_i-x_j|+\pi D\sum_{i=1}^D Q_p(x_i) \end{array} \end{equation} where $Q_p(x):=x\cdot {\rm Hess}_{\xi_0}(p)x$ and ${\rm Hess}_{\xi_0}(p)$ is the Hessian matrix of $\xi_0$ at $p$.
Moreover the infimum in \eqref{DevMesoscopicDef} is reached and if ${\bf z}^\varepsilon\in [B(p,R)^D]^*$ is s.t. \[
-\pi\sum_{i\neq j}\ln|z^\varepsilon_i-z^\varepsilon_j|+2\pi h_{\rm ex}\sum_i[\xi_0(z^\varepsilon_i)-\xi_0(p)]=\dfrac{\pi}{2}(D^2-D)\ln\left(\dfrac{h_{\rm ex}}{D}\right)+C_{p,D}+o(1)
then for every sequence $\varepsilon=\varepsilon_n\downarrow0$, up to passing to a subsequence, denoting $\ell=\sqrt{\dfrac{D}{h_{\rm ex}}}$ and $\breve z_i^\varepsilon=\dfrac{z_i^\varepsilon-p}{\ell}$, the rescaled configuration $\breve{\bf z}^\varepsilon=(\breve z_1^\varepsilon,...,\breve z_D^\varepsilon)$ converges to a minimizer of $W^{\rm meso}_{p,D}$. In particular $|\breve z_i^\varepsilon|\leq C_{\Omega,D}$ where $C_{\Omega,D}>0$ depends only on $\Omega$ and $D$. \end{prop}
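\begin{remark}
In the simplest case $D=1$ the interaction term in \eqref{DefEnergyRenMeso} is empty and $W^{\rm meso}_{p,1}(x)=\pi Q_p(x)$. Under Hypothesis \eqref{NonDegHyp} the matrix ${\rm Hess}_{\xi_0}(p)$ is positive definite, hence $C_{p,1}=0$ with unique minimizer $x=0$: after rescaling, a single vortex sits at the critical point $p$ itself, in agreement with \eqref{DevMesoscopicDef} whose right-hand side reduces to $o(1)$ since $D^2-D=0$.
\end{remark}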
\subsection{Microscopic renormalized energy [at scale $\lambda\delta$]}\label{SecMicrRenEn} The location of the vorticity defects at scale $\lambda\delta$ [inside a connected component of $\omega_\varepsilon$] is given by the microscopic renormalized energy exactly as in the case without magnetic field. In order to define the microscopic renormalized energy we need some notation. Recall that the pinning term $a_\varepsilon:\Omega\to\{b,1\}$ is obtained [see Section \ref{SecConstructionPinningTerm}] from a smooth bounded simply connected set $\omega$ s.t. $0\in\omega\subset\overline{\omega}\subset Y:=(-1/2,1/2)^2$. The construction of the pinning term uses two parameters $\delta=\delta(\varepsilon)$ [the period parameter] and $\lambda=\lambda(\varepsilon)$ [the dilution parameter]. For $x_0\in\omega$ and a sequence $\varepsilon=\varepsilon_n\downarrow0$, we consider $\hat{x}_\varepsilon\in\omega$ s.t. $\hat x_\varepsilon\to x_0\in\omega$.
Let $m_\varepsilon\in\mathbb{Z}^2$ be s.t. the cell $Y_\varepsilon=\delta(m_\varepsilon+Y)$ satisfies $\overline{Y}_\varepsilon\subset\Omega$. We then denote $z_\varepsilon=\delta[m_\varepsilon+\lambda\hat{x}_\varepsilon]$. It is proved in \cite{dos2015microscopic} [see Estimates (9) and (10)] that for $ R= R_\varepsilon\gg\lambda\delta$ and $r=r_\varepsilon\ll\lambda\delta$, denoting $\hat R=R/(\lambda\delta)$, $\hat r=r/(\lambda\delta)$, $\mathcal{D}_\varepsilon=B(\delta m_\varepsilon,R)\setminus\overline{B(z_\varepsilon,r)}$, $\hat\mathcal{D}_\varepsilon=B(0,\hat R)\setminus\overline{B(\hat x_\varepsilon,\hat r)}$ and $\hat\mathcal{D}=B(0,\hat R)\setminus\overline{B(x_0,\hat r)}$: \begin{eqnarray}\label{MicroRenoExpressionNonRescalDir}
\inf_{\substack{w\in H^1(\mathcal{D}_\varepsilon,\mathbb{S}^1)\\ {\rm deg}(w)=1}}\frac{1}{2}\int_{\mathcal{D}_\varepsilon} U_\varepsilon^2|\nabla w|^2&=&\inf_{\substack{w\in H^1(\mathcal{D}_\varepsilon,\mathbb{S}^1)\\w(\delta m_\varepsilon+R{\rm e}^{\imath\theta})={\rm e}^{\imath\theta}\\w(z_\varepsilon+r{\rm e}^{\imath\theta})={\rm Cst}\,{\rm e}^{\imath\theta}}}\frac{1}{2}\int_{\mathcal{D}_\varepsilon} U_\varepsilon^2|\nabla w|^2+o_\varepsilon(1)
\\\label{DefRenMicroEn1}&=&\inf_{\substack{ \hat w\in H^1(\hat\mathcal{D}_\varepsilon,\mathbb{S}^1)\\ {\rm deg}(\hat w)=1}}\frac{1}{2}\int_{\hat\mathcal{D}_\varepsilon} a^2|\nabla\hat w|^2+o_\varepsilon(1). \end{eqnarray}
Moreover from the main result in \cite{Dos-MicroRenoEN}, we have the existence of a map $\tilde W^{\rm micro}:\omega\to\mathbb{R}$ [depending only on $\omega$ and $b$] s.t. \begin{equation}\label{DefRenMicroEn2}
\inf_{\substack{ \hat w\in H^1(\hat\mathcal{D}_\varepsilon,\mathbb{S}^1)\\ {\rm deg}(\hat w)=1}}\frac{1}{2}\int_{\hat\mathcal{D}_\varepsilon} a^2|\nabla\hat w|^2=f_\omega(\hat R)+b^2\pi|\ln(\hat r)|+\tilde W^{\rm micro}(x_0)+o(1), \end{equation}
where $\displaystyle f_\omega(\hat R):=\inf_{\substack{w\in H^1[B(0,{\hat R})\setminus \overline{\omega},\mathbb{S}^1]\\ {\rm deg}(w)=1}}\frac{1}{2}\int_{B(0,\hat R)\setminus \overline{\omega}}|\nabla w|^2$.
It is clear that there exists $C_\omega\in\mathbb{R}$ [depending only on $\omega$] s.t. when $\hat R\to\infty$ we have $f_\omega(\hat R)=\pi\ln(\hat R)+C_\omega+o(1)$.
Then, denoting $W^{\rm micro}(x_0):=\tilde W^{\rm micro}(x_0)+C_\omega$, we get from \eqref{DefRenMicroEn2}: \begin{equation}\label{DefRenMicroEn3}
\inf_{\substack{ \hat w\in H^1(\hat\mathcal{D},\mathbb{S}^1)\\ {\rm deg}(\hat w)=1}}\frac{1}{2}\int_{\hat\mathcal{D}} a^2|\nabla\hat w|^2=\pi\ln(\hat R)+b^2\pi|\ln(\hat r)|+ W^{\rm micro}(x_0)+o(1). \end{equation} Moreover, from \cite{Publi3} we know that $W^{\rm micro}$ admits minimizers in $\omega$. \section{Sharp upper bound: construction of a test function}\label{SecUpperBound} From now on we assume that Hypothesis \eqref{NonDegHyp} holds. We may thus use, for $p\in\Lambda$ and $D\in\mathbb{N}^*$, the constant $C_{p,D}$ defined in \eqref{DefCpD}. We also set $C_{p,0}:=0$.\\
For $d\in\mathbb{N}^*$ we let \begin{equation}\label{DefEnsCouplageEnergieRen}
\Lambda_{d}:=\left.\left\{{\bf D}\in\left\{\left\lceil\dfrac{d}{{N_0}}\right\rceil;\left\lfloor\dfrac{d}{{N_0}}\right\rfloor\right\}^{N_0}\,\right|\,\sum_{k=1}^{N_0} D_k=d\right\}, \end{equation} \begin{equation}\label{CouplageEnergieRen} \overline{\W}_{d,\Omega}=\overline{\W}_{d}:=\min_{{\bf D}\in\Lambda_{d}}\left\{W^{\rm macro}{({\bf p},{\bf D})}+\sum_{k=1}^{N_0}C_{p_k,D_k}+\tilde{V}[\zeta_{({\bf p},{\bf D})}]\right\} \end{equation} where, for $x\in\mathbb{R}$, $\lceil x\rceil$ is the ceiling of $x$, $\lfloor x\rfloor$ is the floor of $x$, $W^{\rm macro}(\cdot)$ is defined in Proposition \ref{Prop.EnergieRenDef} and $\tilde{V}[\zeta_{({\bf p},{\bf D})}]$ is defined in Proposition \ref{Prop.Information.Zeta-a}.
We now state an easy lemma whose proof is left to the reader.
\begin{lem}\label{LemLaisseLectTrucSimple}
Let $d\in\mathbb{N}^*$ and ${\bf D}\in\Lambda_{d}$. Then the following quantities are independent of ${\bf D}$: \[ \mathscr{L}_1(d):=\dfrac{\pi}{2}\left[\left(\sum_{k=1}^{N_0} D_k^2\right)-d\right], \]
\[ \mathscr{L}_2(d):=\overline{\W}_{d}+\dfrac{\pi}{2}\sum_{\substack{k=1\\\text{s.t. }D_k\geq1}}^{N_0}(D_k-D_k^2)\ln\left({D_k}\right). \]
Moreover: $d\leq N_0\Longleftrightarrow\mathscr{L}_1(d)=0\Longleftrightarrow\mathscr{L}_2(d)=\overline{\W}_{d}$. \end{lem} \begin{notation}\label{NotL0} We let $\mathscr{L}_1(0)=\mathscr{L}_2(0)=0$. \end{notation} The main result of this section is the following proposition.
\begin{prop}\label{Prop.BorneSupSimple}Assume that $h_{\rm ex}=\mathcal{O}(|\ln\varepsilon|)$, $h_{\rm ex}\to+\infty$, \begin{equation}\label{HypLambdaDeltaConstrFoncTest}
\lambda^{1/4}|\ln\varepsilon|\to0\text{ and }\delta\sqrt{h_{\rm ex}}\to0
Let $d\in\mathbb{N}^*$ and let ${\bf D}\in\Lambda_{d}$ be a minimizer of the minimizing problem \eqref{CouplageEnergieRen}.
For $0<\varepsilon<1$, there exists $(v_\varepsilon,A_\varepsilon)\in\mathscr{H}$ which is in the Coulomb gauge with $d$ vortices of degree $1$ s.t. \begin{eqnarray}\label{ExactExpEnerg} \mathcal{F}(v_\varepsilon,A_\varepsilon)=h_{\rm ex}^2 {\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathscr{L}_2(d)+o(1)
\end{eqnarray}
with $M_\O:=2\pi\|\xi_0\|_{L^\infty(\Omega)}$ and \begin{equation}\label{DefH0c1}
H^0_{c_1} :=\dfrac{b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|}{2\|\xi_0\|_{L^\infty(\Omega)}}+\tilde\gamma_{b,\omega} \end{equation} where \begin{equation}\label{DefGammaBo}
\tilde\gamma_{b,\omega}:=\dfrac{\displaystyle\min_\omega W^{\rm micro}+b^2[\gamma+\pi \ln b ]}{2\pi\|\xi_0\|_{L^\infty(\Omega)}}, \end{equation} $\gamma$ is a universal constant defined in Lemma IX.1 in \cite{BBH} and $W^{\rm micro}$ is defined in Section \ref{SecMicrRenEn}.
\end{prop} Proposition \ref{Prop.BorneSupSimple} is proved in Appendix \ref{AppProofUpBound}.
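\begin{remark}
When $d\leq N_0$, Lemma \ref{LemLaisseLectTrucSimple} gives $\mathscr{L}_1(d)=0$ and $\mathscr{L}_2(d)=\overline{\W}_{d}$: any ${\bf D}\in\Lambda_d$ has $D_k\in\{0,1\}$ and \eqref{ExactExpEnerg} contains no $\ln h_{\rm ex}$ term, reducing to
\[
\mathcal{F}(v_\varepsilon,A_\varepsilon)=h_{\rm ex}^2 {\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\overline{\W}_{d}+o(1).
\]
\end{remark}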
\section{Tool box}\label{Sec.ToolBox} The proofs of the main theorems of this article follow a classical strategy: matching upper and lower bounds. A [sharp] upper bound is given by Proposition \ref{Prop.BorneSupSimple}. Obtaining a sharp lower bound is the most challenging part of the proof: it requires establishing several facts about the vorticity defects of a family of quasi-minimizers [quantization, localization, size, ...].
In this section we present some technical and quite classical results adapted to our situation. \subsection{An $\eta$-ellipticity property}\label{Sec.EtaElli} In this section we focus on quasi-minimizers.
We let $h_{\rm ex}=\mathcal{O}(|\ln\varepsilon|)$ and we let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ be a family of quasi-minimizers of $\mathcal{F}$, {\it i.e.}, \begin{equation}\label{QuasiMinDef} \mathcal{F}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}+o(1). \end{equation}
We assume that for all $\varepsilon\in(0;1)$, $(v_\varepsilon,A_\varepsilon)$ is in the Coulomb gauge and that $v_\varepsilon\in H^1(\Omega,\mathbb{C})$ is s.t. \begin{equation}\label{BoundGrpaLin}
\|\nabla |v_\varepsilon|\|_{L^\infty(\Omega)}=\mathcal{O}(\varepsilon^{-1}). \end{equation} The major result of this section is a key tool in this article: an $\eta$-ellipticity property. \begin{prop}\label{Prop.EtaEllpProp}
Let $h_{\rm ex}=\mathcal{O}(|\ln\varepsilon|)$ and let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family in the Coulomb gauge satisfying \eqref{QuasiMinDef} and \eqref{BoundGrpaLin}.
For $\eta\in(0,1)$ there exist $\varepsilon_\eta>0$ and $C_\eta>0$ [depending on the bound of $\varepsilon\|\nabla |v_\varepsilon|\|_{L^\infty(\Omega)}$] s.t. for $0<\varepsilon<\varepsilon_\eta$, if $z\in\Omega$ is s.t. \[
b^2\int_{B(z,\sqrt{\varepsilon})\cap\Omega}|\nabla v_\varepsilon|^2+\dfrac{b^2}{\varepsilon^2}(1-|v_\varepsilon|^2)^2\leq C_\eta|\ln\varepsilon|,
then $|v_\varepsilon(z)|>\eta$. \end{prop} Proposition \ref{Prop.EtaEllpProp} is proved in Appendix \ref{AppendixPruveEtaEllipt}.
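\begin{remark}
Proposition \ref{Prop.EtaEllpProp} will be used below in its contrapositive form: if $|v_\varepsilon(z)|\leq\eta$ then
\[
b^2\int_{B(z,\sqrt{\varepsilon})\cap\Omega}|\nabla v_\varepsilon|^2+\dfrac{b^2}{\varepsilon^2}(1-|v_\varepsilon|^2)^2> C_\eta|\ln\varepsilon|,
\]
{\it i.e.}, each point where $|v_\varepsilon|$ is small carries an energy of order $|\ln\varepsilon|$ on a ball of radius $\sqrt\varepsilon$.
\end{remark}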
By combining Proposition \ref{Prop.EtaEllpProp} with Theorem \ref{ThmBorneDegréMinGlob} we immediately get a first step in the [macroscopic] localization of the vorticity defects. In order to apply Theorem \ref{ThmBorneDegréMinGlob} we need to assume \begin{equation}\label{MagneticIntenHyp}
\begin{cases}\text{$\lambda,\delta$ satisfy \eqref{CondOnLambdaDelta}, $\delta^2|\ln\varepsilon|\to0$, $h_{\rm ex}\to\infty$} \\\text{\eqref{BorneKMagn} holds for $h_{\rm ex}$ with some $K\geq0$ independent of $\varepsilon$} \end{cases}. \end{equation} \begin{cor}\label{Cor.DegNonNul}
Assume that $\lambda,\delta$ and $h_{\rm ex}$ satisfy \eqref{MagneticIntenHyp} and let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be s.t. \eqref{QuasiMinDef} and \eqref{BoundGrpaLin} hold. There exist $0<\varepsilon_0\leq\varepsilon_K$ and $M\geq1$ s.t. for $0<\varepsilon<\varepsilon_0$, letting $\tilde\Lambda_\varepsilon:=\Lambda\cap\cup_{d_i\neq0}B(a_i,2\mathcal{M}_K|\ln\varepsilon|^{-s_0})$ where the $(a_i,d_i)$'s [depend on $\varepsilon$] are given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf} and $\varepsilon_K$, $\mathcal{M}_K$, $s_0$ are given by Theorem \ref{ThmBorneDegréMinGlob}, we have \[
\displaystyle\{|v_\varepsilon|\leq1/2\}\subset\bigcup_{p\in\tilde\Lambda_\varepsilon}B(p,M|\ln\varepsilon|^{-\tilde s_0})\text{ where }\tilde s_0:=\min\{s_0,10\}.
z_0=z_0^{n}\in\{|v|\leq1/2\}\setminus\bigcup_{p\in\tilde\Lambda_\varepsilon}B(p,n|\ln\varepsilon|^{-\tilde s_0}). \] Since \eqref{QuasiMinDef} and \eqref{BoundGrpaLin} are gauge invariant we may assume that, for all $\varepsilon$, $(v_\varepsilon,A_\varepsilon)$ is in the Coulomb gauge.
Let $\mathcal{B}:=\{(B(a_i,r_i),d_i)\,|\,i\in\mathcal{J}\}$ be given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf}.
Write $B_i:=B(a_i,r_i)$ for $i\in\mathcal{J}$. Note that by Theorem \ref{ThmBorneDegréMinGlob}, from the quasi-minimality of $(v_\varepsilon,A_\varepsilon)$, for $\varepsilon$ sufficiently small, we have $d_i\geq0$ for all $i$ and $d:=\sum |d_i|=\sum d_i=\mathcal{O}(1)$. Up to passing to a subsequence, we may thus assume that $d$ is independent of $\varepsilon$.
From the definition of $\tilde\Lambda_\varepsilon$, we have
\[
\bigcup_{d_i>0}B_i\subset\bigcup_{p\in\tilde\Lambda_\varepsilon}B(p,2\mathcal{M}_K|\ln\varepsilon|^{-s_0}).
\]
Note that from Theorem \ref{ThmBorneDegréMinGlob} we have $\mathcal{F}(v_\varepsilon,0)=\mathcal{O}(|\ln\varepsilon|^2)$. Then we may use Proposition \ref{Prop.BorneInfLocaliseeSandSerf} for the configuration $(v_\varepsilon,0)\in\mathscr{H}$ to get a covering $\cup_{i\in\tilde\mathcal{J}} \tilde B_i$ of $\{|v_\varepsilon|<1-|\ln\varepsilon|^{-2}\}$ with disjoint disks $\tilde B_i=B(\tilde a_i,\tilde r_i)$, $\sum\tilde r_i<|\ln\varepsilon|^{-10}$.
Therefore there is $\rho\in[2\mathcal{M}_K|\ln\varepsilon|^{-\tilde s_0};(2\mathcal{M}_K+6)|\ln\varepsilon|^{-\tilde s_0}]$ s.t.
\[ \left[\bigcup_{p\in\tilde\Lambda_\varepsilon}\partial B(p,\rho)\right]\cap\left[\bigcup_{i\in\mathcal{J}} B_i\cup\bigcup_{i\in\tilde\mathcal{J}} \tilde B_i\right]=\emptyset.
\]
In particular $|v_\varepsilon|\geq 1-|\ln\varepsilon|^{-2}$ on $\bigcup_{p\in\tilde\Lambda_\varepsilon}\partial B(p,\rho)$. Thus, writing $\tilde d_i:= {\rm deg}_{\partial \tilde B_i}(v_\varepsilon)$ when $\tilde B_i\subset\Omega$, we get for $p\in\tilde\Lambda_\varepsilon$
\[
\sum_{\tilde B_i\subset B(p,\rho)}|\tilde d_i|\geq\left|\sum_{\tilde B_i\subset B(p,\rho)}\tilde d_i\right|= {\rm deg}_{\partial B(p,\rho)}(v_\varepsilon)=\sum_{ B_i\subset B(p,\rho)}d_i.
\]
Note that for sufficiently large $n$ we have $B(z_0,\sqrt\varepsilon)\cap\bigcup_{p\in\tilde\Lambda_\varepsilon} B(p,\rho)=\emptyset$.
On the other hand, since $\sum \tilde r_i<|\ln\varepsilon|^{-10}$, we have for $\tilde B_i\subset\Omega$
\begin{equation}\nonumber F(v,\tilde B_i)\geq \pi b^2|\tilde d_i|(|\ln\varepsilon|-C\ln|\ln\varepsilon|). \end{equation} Using Proposition \ref{Prop.EtaEllpProp} we obtain \begin{equation}\label{ContraRER}
F(v)\geq (\pi b^2 d+C_{1/2})|\ln\varepsilon|-\mathcal{O}(\ln|\ln\varepsilon|) \end{equation} where $C_{1/2}>0$ is given by Proposition \ref{Prop.EtaEllpProp} with $\eta=1/2$. Estimate \eqref{ContraRER} is in contradiction with \eqref{NiceDecSharpBorneSupBisso}.
\end{proof}
\subsection{Construction of the $\varepsilon^s$-bad discs}\label{ConstructionEpsBadDis}
As in the previous section we assume that $\lambda,\delta$ and $h_{\rm ex}$ satisfy \eqref{MagneticIntenHyp}. In this section we establish the existence of {\it $\varepsilon^s$-bad discs associated to a quasi-minimizing sequence}. The construction of the bad discs requires the hypothesis $|v_\varepsilon|\in W^{2,1}(\Omega)$. \\
An $\varepsilon^s$-bad disc family associated to a family $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ consists of sets of discs with small diameters [fractional powers of $\varepsilon$] s.t., for fixed $\varepsilon$, the discs are "well separated", the union of the discs covers $\{|v|\leq1/2\}$ and the "heart" of each disc intersects $\{|v|\leq1/2\}$. Such sets of discs thus give a convenient visualization of $\{|v|\leq1/2\}$.
In the next section [Section \ref{Sect.ShapInfo}], under an extra hypothesis on $\lambda,\delta$ and $h_{\rm ex}$, we obtain information on the location and quantization of the $\varepsilon^s$-bad discs.
\begin{prop}\label{Prop.ConstrEpsMauvDisk}
Assume that $\lambda,\delta$ and $h_{\rm ex}$ satisfy \eqref{MagneticIntenHyp}. There exists $M_0\in\mathbb{N}^*$ s.t. for $\mu\in(0,1/2)$, if $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ is in the Coulomb gauge and satisfies \eqref{HypGlobalSurQuasiMin} and \eqref{QuasiMinDef}, then there exist $\varepsilon_\mu>0$ and $C_\mu\geq1$ [independent of $\varepsilon$] s.t. for $0<\varepsilon<\varepsilon_\mu$, there is $J_\mu=J_{\mu,\varepsilon}\subset\{1,...,M_0\}$ [possibly empty] s.t. if $J_\mu=\emptyset$ then $|v|>1/2$ in $\Omega$ and if $J_\mu\neq\emptyset$ then there are $\{z_i\,|\,i\in J_\mu\}\subset\Omega$, a set of mutually distinct points, and $r\in[\varepsilon^\mu,\varepsilon^{\mu_*}]$ with $\mu_*:=2^{-L_0^2}\mu$ satisfying: \begin{enumerate}
\item\label{Prop.ConstrEpsMauvDisk1} $|z_i-z_j|\geq r^{3/4}$ for $i,j\in J_\mu$, $i\neq j$,
\item\label{Prop.ConstrEpsMauvDisk2} $\{|v_\varepsilon|\leq1/2\}\subset\cup_{ J_\mu}B(z_i,r)\subset\Omega$ and, for $i\in J_\mu$, $B(z_i,r/4)\cap\{|v_\varepsilon|\leq1/2\}\neq\emptyset$,
\item\label{Prop.ConstrEpsMauvDisk3} For $i\in J_\mu$ we have $\displaystyle r\int_{\partial B(z_i,r)}|\nabla v_\varepsilon|^2+\dfrac{1}{2\varepsilon^2}(1-|v_\varepsilon|^2)^2\leq C_\mu$ and $|v|\geq1-|\ln\varepsilon|^{-2}$ on $\partial B(z_i,r)$. \end{enumerate} \end{prop} Proposition \ref{Prop.ConstrEpsMauvDisk} is proved in Appendix \ref{SectAppenPreuveConstructionPetitDisque}. We have the following standard estimate.
\begin{prop}\label{Prop.ProprieteEpsMauvDisk}
Assume \eqref{MagneticIntenHyp} and let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ be as in Proposition \ref{Prop.ConstrEpsMauvDisk}. Fix $\mu\in(0,1/2)$ and let $\varepsilon_\mu$, $C_\mu$ be given by Proposition \ref{Prop.ConstrEpsMauvDisk}. For $0<\varepsilon<\varepsilon_\mu$ we consider $J_\mu$, $\{z_i\,|\,i\in J_\mu\}\subset\Omega$ and $r$ obtained in Proposition \ref{Prop.ConstrEpsMauvDisk}. We denote $ d_i:= {\rm deg}_{\partial B(z_i,r)}(v_\varepsilon)$.
There exists $c_{\mu,b}\geq1$ independent of $\varepsilon$ s.t. for $\varepsilon<\varepsilon_\mu$ we have \begin{equation}\label{BorneDegré}
| d_i|\leq 4\sqrt {C_\mu}, \end{equation} \begin{equation}\label{BorneInfEn}
\dfrac{1}{2}\int_{B(z_i,r)}|\nabla v_\varepsilon|^2+\dfrac{b^2}{2\varepsilon^2}(1-|v_\varepsilon|^2)^2\geq\pi| d_i|\ln\left(\dfrac{r}{\varepsilon}\right)-c_{\mu,b} \end{equation} and then \begin{equation}\label{BorneInfEnWeight}
F(v_\varepsilon,B(z_i,r))\geq\pi| d_i|\inf_{B(z_i,r)}\alpha\left[\ln\left(\dfrac{r}{\varepsilon}\right)-c_{\mu,b}\right]\geq\pi\inf_{B(z_i,r)}\alpha\,| d_i|[(1-\mu)|\ln\varepsilon| -c_{\mu,b}].
\sum_{i\in J_\mu}|d_i|\leq\mathcal{D}_{K,b}:=\dfrac{3\mathcal{M}_K}{b^2}. \end{equation} \end{prop} \begin{proof} It is classical to get \eqref{BorneDegré} from Proposition \ref{Prop.ConstrEpsMauvDisk}.\ref{Prop.ConstrEpsMauvDisk3} and the Cauchy-Schwarz inequality. Estimate \eqref{BorneInfEn} follows from Proposition \ref{Prop.ConstrEpsMauvDisk} $\&$ Lemma VI.1 in \cite{AB1} and \eqref{BorneInfEnWeight} is a consequence of \eqref{BorneInfEn}.
The proof of \eqref{DegNonNulEpsS} is done by contradiction, constructing a comparison function $\tilde v:=\begin{cases}v&\text{in }\Omega\setminus B(z_{i_0},r)\\\tilde\rho{\rm e}^{\imath\tilde\phi}&\text{in }B(z_{i_0},r)\end{cases}$ s.t. $\tilde v\in H^1(\Omega,\mathbb{C})$ and $F(\tilde v,B(z_{i_0},r))=\mathcal{O}(1)$, where we assumed $d_{i_0}=0$.
Since $(v,A)$ is a quasi-minimizer of $\mathcal{F}$ we have $\mathcal{F}(v,A)\leq\mathcal{F}(\tilde v,A)+o(1)$.
On the other hand, by direct calculations $\mathcal{F}(v,A)-\mathcal{F}(\tilde v,A)=F(v,B(z_{i_0},r))-F(\tilde v,B(z_{i_0},r))+o(1)$. Consequently $F(v,B(z_{i_0},r))=\mathcal{O}(1)$ which is in contradiction with $F(v,B(z_{i_0},r))\geq C_{1/2}|\ln\varepsilon|$ [given by Proposition \ref{Prop.EtaEllpProp}] for small $\varepsilon$.
We now prove \eqref{BorneTotaleSommeDeg}. From \eqref{BorneInfEnWeight} we have $\sum_{J_\mu}|d_i|\left[\pi (1-\mu)|\ln\varepsilon|-c_{\mu,b}\right]\leq\dfrac{\mathcal{M}_K|\ln\varepsilon|}{b^2}$. Since $\mu\in(0,1/2)$, the last estimate gives the result for $\varepsilon>0$ sufficiently small. \end{proof} \subsection{Lower bounds in perforated disks}\label{Sec.StrongEffectDilution}
The goal of this section is to get lower bounds for $\frac{1}{2}\int_{\mathcal{D}}\alpha|\nabla v|^2$ where $\mathcal{D}$ is a perforated disk s.t. $\mathcal{D}\subset\Omega$ and $|v|\geq1/2$ in $\mathcal{D}$.
The starting point of the argument is an estimate on circles. Let $\tilde b\in(0,1)$, $\beta\in L^\infty((0,2\pi),[\tilde b,1])$. With Lemma D.7 in \cite{Publi4}, for $\varphi\in H^1((0,2\pi),\mathbb{R})$ s.t. $\varphi(2\pi)-\varphi(0)=2\pi$, we have the following lower bound: \begin{equation}\label{EstimaBasiCercle}
\dfrac{1}{2}\int_0^{2\pi}\beta|\partial_\theta\varphi|^2\geq\dfrac{2\pi^2}{\displaystyle\int_0^{2\pi}\dfrac{1}{\beta}}. \end{equation} In order to use \eqref{EstimaBasiCercle} we need to do a preliminary analysis.\\
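\begin{remark}
As an illustration of \eqref{EstimaBasiCercle}, for a constant weight $\beta\equiv c\in[\tilde b,1]$ we have $\int_0^{2\pi}\beta^{-1}=2\pi/c$, so that the right-hand side of \eqref{EstimaBasiCercle} equals $\pi c$. Combining this with $|\nabla v|^2\geq s^{-2}|\partial_\theta v|^2$ on the circle of radius $s$ and integrating for $s\in(r,R)$ gives the classical lower bound $\pi c\ln(R/r)$ for the weighted Dirichlet energy of a degree-one $\mathbb{S}^1$-valued map in the annulus $B(x,R)\setminus\overline{B(x,r)}$.
\end{remark}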
For $\alpha=U_\varepsilon^2\in L^\infty(\Omega,[b^2,1])$, using Lemma E.1 in \cite{Publi4}, we have the existence of $C\geq1$ [independent of $\varepsilon$] s.t. \begin{equation}\label{CqFondDilution} \left\{\begin{array}{l}\text{For almost all $s\geq\delta/3$, letting $\mathscr{C}_s$ be a circle with radius $s$,}\\ \text{we have $\int_{\mathscr{C}_s\cap\Omega}{(1-\alpha)}\leq C\lambda s$.}\end{array}\right. \end{equation} From now on, throughout this section, we consider a sequence $\varepsilon=\varepsilon_n\downarrow0$, $\lambda,\delta,h_{\rm ex}$ and $((v_\varepsilon,A_\varepsilon))_\varepsilon\subset\mathscr{H}$ satisfying the hypotheses of Proposition \ref{Prop.ConstrEpsMauvDisk} [namely \eqref{HypGlobalSurQuasiMin}, \eqref{QuasiMinDef} and \eqref{MagneticIntenHyp}]. We drop the subscript $\varepsilon$, writing $(v,A)$ instead of $(v_\varepsilon,A_\varepsilon)$.
Recall that $\eta_\Omega$ is defined in \eqref{DefEtaO} and consider \begin{equation}\label{HypOxRr} \text{$x_\varepsilon\in\Omega$ and $0<r=r_\varepsilon<R=R_\varepsilon<\eta_\Omega$ s.t. ${\rm dist}(x_\varepsilon,\partial\Omega)>\eta_\Omega>0$.} \end{equation} We then denote $\mathscr{R}:=B(x_\varepsilon,R)\setminus\overline{B(x_\varepsilon,r)}\subset\Omega$.\\
Assume $|v|\geq1/2$ in $\mathscr{R}$ and let $d:= {\rm deg}_{\mathscr{R}}(v)$. From the proof of Proposition \ref{Prop.ConstrEpsMauvDisk} [see \eqref{BorneValeurMesure} in Appendix \ref{SectAppenPreuveConstructionPetitDisque}], there exists $1/2<t_\varepsilon<1$, $t_\varepsilon=1+o(1)$ s.t. $t_\varepsilon\in{\rm Im}(|v|)\cap[1-2/|\ln\varepsilon|;1-1/|\ln\varepsilon|]$ and \begin{equation}\label{DefTeps}
\left\{\begin{array}{c}\text{$V(t_\varepsilon):=\{|v|=t_\varepsilon\}$ is a finite union of Jordan curves included in $\Omega$ and } \\ \text{ of simple curves whose endpoints are on $\partial\Omega$ and $\mathcal{H}^1[V(t_\varepsilon)]=o(1)$.} \end{array}\right. \end{equation}
and since $\mathcal{H}^2(\{|v|\leq t_\varepsilon\})=o(1)$ we then have \begin{equation}\label{PropIntDefTeps} \left\{\begin{array}{c}
\text{if $U$ is a connected component of $\{|v|\leq t_\varepsilon\}$ s.t. $\overline U\subset\Omega$ then there is $\Gamma$,} \\\text{a connected component of $V(t_\varepsilon)$, which is a Jordan curve s.t. $U\subset{\rm int }(\Gamma)$.} \end{array}\right. \end{equation} \begin{remark}\label{RmMontargis}
Since $\mathcal{H}^1[V(t_\varepsilon)]=o(1)$, for sufficiently small $\varepsilon$, if $\Gamma$ [resp. $U$] is a connected component of $V(t_\varepsilon)$ [resp. $\{|v|\leq t_\varepsilon\}$] which intersects $\mathscr{R}$ then $\Gamma$ is a Jordan curve [resp. $\partial U$ is a union of connected components of $V(t_\varepsilon)$]. \end{remark} We have the following lemma: \begin{lem}\label{Lem.BorneLongueEnradian}
Assume $x_\varepsilon,r,R$ satisfy \eqref{HypOxRr} and $|v|\geq1/2$ in $\mathscr{R}$. Then, for $s\in(r,R)$, letting \[
K_{s}:=\{\theta\in[0,2\pi)\,|\,|v(x_\varepsilon+s\e^{\imath\theta})|\leq t_\varepsilon\} \] we have \begin{equation}\nonumber \mathcal{H}^1(K_s)\leq \pi\dfrac{\mathcal{H}^1[V(t_\varepsilon)]}{s}. \end{equation} \end{lem} \begin{proof}
Let $s\in(r,R)$ be s.t. $\mathcal{H}^1(K_s)>0$ and denote $ \wideparen{\mathcal{K}}_s:=\{x_\varepsilon+s\e^{\imath\theta}\,|\,\theta\in K_s\}\subset\partial B(x_\varepsilon,s)$. Then $\mathcal{H}^1( \wideparen{\mathcal{K}}_s)=s\mathcal{H}^1(K_s)$.
On the one hand, letting $\mathcal{V}_\mathscr{R}(t_\varepsilon)$ be the union of the connected components of $\{|v|\leq t_\varepsilon\}$ which intersect $\mathscr{R}$, we have $ \wideparen{\mathcal{K}}_s=\mathcal{V}_\mathscr{R}(t_\varepsilon)\cap\partial B(x_\varepsilon,s)$.
On the other hand, by Remark \ref{RmMontargis}, $\partial\mathcal{V}_\mathscr{R}(t_\varepsilon)$ is a union of connected components of $V(t_\varepsilon)$ which are Jordan curves. Among these Jordan curves, we may select the maximal curves w.r.t. the inclusion of their interior. We denote these maximal curves by $\Gamma_1,...,\Gamma_N$ and we let for $i\in\{1,...,N\}$, $\mathcal{V}_i:=\overline{{\rm int}(\Gamma_i)}$. We then obtain $\mathcal{V}_\mathscr{R}(t_\varepsilon)\subset\cup_{i=1}^N\mathcal{V}_i$ and thus $ \wideparen{\mathcal{K}}_s\subset\cup_{i=1}^N[\partial B(x_\varepsilon,s)\cap\mathcal{V}_i]$.
For $i\in\{1,...,N\}$, we fix $x_i\in\mathcal{V}_i$ and we define the disk $B_i:=\overline{B(x_i,{\rm diam}(\mathcal{V}_i))}$. It is clear that $\mathcal{V}_i\subset B_i$. Consequently \[ \mathcal{H}^1[\partial B(x_\varepsilon,s)\cap\mathcal{V}_i]\leq\mathcal{H}^1[\partial B(x_\varepsilon,s)\cap B_i]\leq2\pi\,{\rm diam}(\mathcal{V}_i). \] We claim that $2{\rm diam}(\mathcal{V}_i)\leq\mathcal{H}^1(\Gamma_i)$ [indeed ${\rm diam}(\mathcal{V}_i)={\rm diam}(\Gamma_i)$, and two points of $\Gamma_i$ realizing this diameter split $\Gamma_i$ into two arcs, each of length at least ${\rm diam}(\Gamma_i)$]. Since the curves $\Gamma_i$ are pairwise disjoint, we have $\sum_{i=1}^N\mathcal{H}^1(\Gamma_i)\leq\mathcal{H}^1[V(t_\varepsilon)]$.
We may now conclude: \[ s\mathcal{H}^1(K_s)=\mathcal{H}^1( \wideparen{\mathcal{K}}_s)\leq\sum_{i=1}^N\mathcal{H}^1[\partial B(x_\varepsilon,s)\cap\mathcal{V}_i]\leq\pi\sum_{i=1}^N2{\rm diam}(\mathcal{V}_i)\leq\pi\mathcal{H}^1[V(t_\varepsilon)]. \]
\end{proof} The next proposition is one of the major uses of the dilution [$\lambda\to0$]. \begin{prop}\label{Prop.ComparaisonAnneau}
Let $x_\varepsilon,r,R$ satisfy \eqref{HypOxRr} and assume $|v|\geq1/2$ in $\mathscr{R}$. We write $d:= {\rm deg}_{\mathscr{R}}(v)$ and, in $\mathscr{R}$, we let $w:=v/|v|\,\&\,\rho:=|v|$. \begin{enumerate} \item\label{Prop.ComparaisonAnneau1} If $r\geq\delta/3$ and if $\mathcal{H}^1[V(t_\varepsilon)]/r+(1-t^2_\varepsilon)+\lambda=o[\ln(R/r)]$ then \[
\dfrac{1}{2}\int_\mathscr{R}\alpha|\nabla v|^2\geq\dfrac{1}{2}\int_\mathscr{R}\alpha\rho^2|\nabla w|^2\geq \pi d^2\left[\ln\left(\dfrac{R}{r}\right)-o(1)\right]. \] \item\label{Prop.ComparaisonAnneau2} If $r=o(1)$ and if $\mathcal{H}^1[V(t_\varepsilon)]/r+(1-t^2_\varepsilon)=o[\ln(R/r)]$ then
\[
\dfrac{1}{2}\int_\mathscr{R}|\nabla v|^2\geq\dfrac{1}{2}\int_\mathscr{R}\rho^2|\nabla w|^2\geq \pi d^2\left[\ln\left(\dfrac{R}{r}\right)-o(1)\right]. \] \end{enumerate} \end{prop}
\begin{proof}We prove the first assertion. We claim that, up to replacing $v$ with $\underline{v}$, we may assume $|v|\leq1$ in $\Omega$. Moreover, if $d=0$ then there is nothing to prove. We then assume $d\neq0$.
We write $v=\rho\e^{\imath d\varphi}$ where $\varphi$ is locally defined and its gradient is globally defined. Letting $x_\varepsilon+\mathbb{R}^+:=\{x_\varepsilon+s\,|\,s\geq0\}$, we may assume $\varphi\in H^1(\mathscr{R}\setminus(x_\varepsilon+\mathbb{R}^+),\mathbb{R})$. For $s\in(r,R)$, we let $\varphi_s(\theta)=\varphi(x_\varepsilon+s\e^{\imath\theta})$, $\rho_s(\theta)=|v(x_\varepsilon+s\e^{\imath\theta})|$ and $\alpha_s(\theta)=\alpha(x_\varepsilon+s\e^{\imath\theta})$. Then $\varphi_s\in H^1((0,2\pi),\mathbb{R})$ is s.t. $\varphi_s(2\pi)-\varphi_s(0)=2\pi$ and we immediately get \[
\dfrac{1}{2}\int_\mathscr{R}\alpha\rho^2|\nabla w|^2\geq \dfrac{d^2}{2}\int_r^R\dfrac{{\rm d}s}{s}\int_0^{2\pi}\alpha_s\rho_s^2|\partial_\theta\varphi_s|^2{\rm d}\theta. \] From \eqref{EstimaBasiCercle} with $\beta:=\alpha_s\rho_s^2$ we get \[ \displaystyle
\dfrac{1}{2}\int_0^{2\pi}\alpha_s\rho_s^2|\partial_\theta\varphi_s|^2\geq\dfrac{2\pi^2}{\displaystyle\int_0^{2\pi}\dfrac{1}{\alpha_s\rho_s^2}}. \] Since $b^2/4\leq \alpha_s\rho_s^2\leq 1$ we have \[ 0\leq\left(\int_0^{2\pi}\dfrac{1}{\alpha_s\rho_s^2}\right)-2\pi=\int_0^{2\pi}\dfrac{1-\alpha_s\rho_s^2}{\alpha_s\rho_s^2}
\leq\dfrac{4}{b^2}\left(\int_0^{2\pi}{1-\rho_s^2}+\int_0^{2\pi}{1-\alpha_s}\right). \]
On the one hand, from Lemma \ref{Lem.BorneLongueEnradian} we have \[ \int_0^{2\pi}{1-\rho_s^2}\leq\mathcal{H}^1(K_s)+\left[2\pi-\mathcal{H}^1(K_s)\right](1-t_\varepsilon^2)\leq\dfrac{\pi\mathcal{H}^1[V(t_\varepsilon)]}{s}+2\pi(1-t^2_\varepsilon). \] On the other hand, using \eqref{CqFondDilution}, there is $C\geq1$ [independent of $\varepsilon$] s.t.$\displaystyle \int_0^{2\pi}{1-\alpha_s}\leq C\lambda$. Then \[ \int_0^{2\pi}\dfrac{1}{\alpha_s\rho_s^2}\leq2\pi+\dfrac{4}{b^2}\left[\dfrac{\pi\mathcal{H}^1[V(t_\varepsilon)]}{s}+2\pi(1-t^2_\varepsilon)+C\lambda\right]. \] We thus get \begin{eqnarray*}
\dfrac{1}{2}\int_\mathscr{R}\alpha\rho^2|\nabla w|^2&\geq& {d^2}\int_r^R\dfrac{{\rm d}s}{s}\dfrac{2\pi^2}{2\pi+\dfrac{4}{b^2}\left[\pi\mathcal{H}^1[V(t_\varepsilon)]/s+2\pi(1-t^2_\varepsilon)+C\lambda\right]}
\\&\geq&\pi d^2\left[\ln\left(\dfrac{R}{r}\right)-o(1)\right]. \end{eqnarray*} The second assertion is obtained in exactly the same way as the first one. Indeed, since $\alpha$ plays no role in the statement, we may use the same argument with $\lambda=0$ and $\delta>0$ an arbitrarily small number. \end{proof} We now state the reformulation of Proposition \ref{Prop.ComparaisonAnneau} obtained by replacing the annulus $\mathscr{R}$ with a perforated disk. \begin{cor}\label{Cor.BorneInfProcheIncl}
Let $D_0\in\mathbb{N}^*$ be independent of $\varepsilon$, $0<r=r_\varepsilon<R=R_\varepsilon$ be s.t. $r=o(R)$, $N=N_\varepsilon\in\mathbb{N}^*$ be s.t. $ N\leq D_0$ and $z_1=z_1^\varepsilon,...,z_N=z_N^\varepsilon$ be s.t. $|z_i-z_j|\geq8 r$ for $i\neq j$.
Let $y=y_\varepsilon\in\Omega$ and assume $z_1,...,z_N\in B(y,R)\subset B(y,4R)\subset B(y,\eta_\Omega)\subset\Omega$. We let $\mathcal{D}:=B(y,2R)\setminus\cup_{i=1}^N\overline{B(z_i,r)}$.
Assume $\rho=|v|\geq1/2$ in $\mathcal{D}$. For $i\in\{1,...,N\}$, we let $d_i:= {\rm deg}_{\partial B(z_i,r)}(v)$. We also assume $d_i>0$ for all $i\in\{1,...,N\}$ and $\sum_{i=1}^Nd_i\leq D_0$. Write $v=\rho w$ in $\mathcal{D}$.
Then there exists $C_0>0$ depending only on $D_0$ s.t. : \begin{enumerate} \item\label{Cor.BorneInfProcheIncl1} If $r\geq \delta/3$ and $\mathcal{H}^1[V(t_\varepsilon)]/r+(1-t^2_\varepsilon)+\lambda=o[\ln(R/r)]$ then, for sufficiently small $\varepsilon$, we have \[
\dfrac{1}{2}\int_\mathcal{D}\alpha|\nabla v|^2\geq \dfrac{1}{2}\int_\mathcal{D}\alpha\rho^2|\nabla w|^2\geq\pi\sum_{i=1}^N d_i^2\ln(R/r)-C_0. \] \item\label{Cor.BorneInfProcheIncl2} If $\mathcal{H}^1[V(t_\varepsilon)]/r+(1-t^2_\varepsilon)=o[\ln(R/r)]$ then, for sufficiently small $\varepsilon$, we have \[
\dfrac{1}{2}\int_\mathcal{D}|\nabla v|^2\geq\dfrac{1}{2}\int_\mathcal{D}\rho^2|\nabla w|^2\geq\pi\sum_{i=1}^N d_i^2\ln(R/r)-C_0. \] \end{enumerate} \end{cor}
\begin{proof}We claim that, up to replacing $v$ with $\underline{v}$, we may assume $|v|\leq1$ in $\Omega$.
We first rescale with the conformal mapping: \[ \begin{array}{cccc} \Phi:&B(y,4R)&\to&B(0,4)\\&x&\mapsto&\dfrac{x-y}{R} \end{array}. \] We then let $\hat z_i:=\Phi(z_i)$, $\hat r:= r/R$, $\hat{\mathcal{D}}:=\Phi[\mathcal{D}]=B(0,2)\setminus\cup_{i=1}^N\overline{B(\hat z_i,\hat r)}$, $\hat\alpha:=\alpha\circ\Phi^{-1}$ and $\hat v:=v\circ\Phi^{-1}$.
If $N=1$, or if $N\geq2$ and $|\hat z_i-\hat z_j|\geq 4\times10^{-2D_0}$ for $i\neq j$, then, letting $\tilde\Omega:=B(0,4)$ and $\eta_{\tilde\Omega}=10^{-1}$, we may apply Proposition \ref{Prop.ComparaisonAnneau}.\ref{Prop.ComparaisonAnneau1}: \begin{eqnarray*}
\dfrac{1}{2}\int_\mathcal{D}\alpha|\nabla v|^2=\dfrac{1}{2}\int_{\hat{\mathcal{D}}}\hat\alpha|\nabla \hat v|^2&\geq&\sum_{i=1}^N\dfrac{1}{2}\int_{B(\hat z_i,2\times10^{-2D_0})\setminus\overline{B(\hat z_i,\hat r)}}\hat\alpha|\nabla\hat v|^2
\\&\geq&\pi\sum_{i=1}^N d_i^2\left(|\ln( R/ r)|-|\ln(2\times10^{-2D_0})|\right)-o(1). \end{eqnarray*}
This estimate is the desired result with $C_0=\pi D_0^2|\ln(2\times10^{-2D_0})|+1$.
If we are not in the previous case, {\it i.e.} $N\geq 2$ and there exists $i\neq j$ s.t. $|\hat z_i-\hat z_j|< 4\times10^{-2D_0}$, then we apply the separation process presented in Appendix C [Section C.3.1] in \cite{Publi4} to the domain $\hat{\mathcal{D}}$ with $\eta_{\rm stop}:=10^{-2D_0}$.
The key ingredient in the separation process is a variant of Theorem IV.1 in \cite{BBH} [stated with $P=9$, the general case $P\in\mathbb{N}\setminus\{0,1\}$ is left to the reader]: \begin{lem}\label{Lem.Separation} Let $N\geq2$, $P\in\mathbb{N}\setminus\{0,1\}$, $x_1,...,x_N\in\mathbb{R}^2$ and $\eta>0$. There are $\kappa\in\{P^0,...,P^{N-1}\}$ and $\emptyset\neq J\subset\{1,...,N\}$ s.t. \[
\cup_{i=1}^N B(x_i,\eta)\subset\cup_{i\in J}B(x_i,\kappa\eta)\text{ and }|x_i-x_j|\geq(P-1)\kappa\eta\text{ for }i,j\in J,\,i\neq j. \] \end{lem} The separation process is an iterative selection of points in $\{\hat z_1,...,\hat z_N\}$ associated to the construction of a good radius.
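To fix ideas, here is a minimal Python sketch of the merging argument behind Lemma \ref{Lem.Separation} [the function name \texttt{separate} and the greedy dropping rule are ours; this is only one possible implementation, not the construction of \cite{BBH}]:

```python
import itertools
import math

def separate(points, eta, P=9):
    """One possible constructive reading of the lemma: return kappa in
    {P^0, ..., P^(N-1)} and indices J such that the balls B(x_i, kappa*eta),
    i in J, cover every B(x_l, eta), while the kept centers satisfy
    |x_i - x_j| >= (P-1)*kappa*eta for i != j in J."""
    def dist(i, j):
        return math.hypot(points[i][0] - points[j][0],
                          points[i][1] - points[j][1])
    J, kappa = list(range(len(points))), 1
    while any(dist(i, j) < (P - 1) * kappa * eta
              for i, j in itertools.combinations(J, 2)):
        kappa *= P  # enlarge the scale when two kept centers are too close
        kept = []
        for i in J:
            # greedy rule: drop i once a kept center absorbs B(x_i, kappa*eta/P)
            if all(dist(i, j) > (P - 1) * kappa * eta / P for j in kept):
                kept.append(i)
        J = kept
    return kappa, J

# two clustered points and one far point: the cluster is merged at scale kappa = 9
kappa, J = separate([(0.0, 0.0), (0.1, 0.0), (10.0, 0.0)], eta=0.05)
```

Each pass through the loop strictly decreases ${\rm Card}(J)$, which is why $\kappa$ stays in $\{P^0,...,P^{N-1}\}$; with the data above one obtains $\kappa=9$ and $J=\{0,2\}$.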
We initialize the process by letting $\eta_0:=\hat r$, $M_0:=N$ and $J_0=\{1,...,M_0\}$.
For $k\geq1$ [where $k$ is the index in the iterative process] we construct a set $\emptyset\neq J_k\subsetneq J_{k-1}$, $M_k:={\rm Card}(J_k)$ and 3 numbers \begin{center}
$\kappa_k\in\{9^1,...,9^{M_{k-1}-1}\}$, $\eta_k':=\dfrac{1}{4}\displaystyle\min_{\substack{i,j\in J_{k-1}\\i\neq j}}|\hat z_i-\hat z_j|$ and $\eta_k:=2\kappa_k\eta_k'$. \end{center}
These objects are obtained with Lemma \ref{Lem.Separation} with $P=9$, $N=M_{k-1}={\rm Card}(J_{k-1})$, $\{x_1,...,x_N\}=\{\hat z_i\,|\,i\in J_{k-1}\}$, $J=J_k$, $\eta=\eta_k$, $\kappa=\kappa_k$.
The process stops at the end of Step $K_0\geq1$ if $M_{K_0}=1$ or $M_{K_0}\geq 2$ and $\displaystyle\min_{\substack{i,j\in J_{K_0}\\i\neq j}}|\hat z_i-\hat z_j|>4\eta_{\rm stop}$.
By construction, we have for $1\leq k\leq K_0$, $\emptyset\neq J_k\subsetneq J_{k-1}$ and $\eta_{k-1}\leq\eta_k'<\eta_k$. In particular, since ${\rm Card}( J_0)\leq D_0$, we get $K_0\leq D_0-1$.
By definition, for $k\in\{1,...,K_0\}$ we have $2\cdot 9\eta_k'\leq\eta_k\leq9^{D_0}\eta_k'$. We let \[
\eta_{\rm fin}:=\begin{cases}9^{D_0}\cdot\eta_{\rm stop}&\text{if }M_{K_0}=1\\\min\{9^{D_0}\cdot\eta_{\rm stop},\dfrac{1}{4}\displaystyle\min_{\substack{i,j\in J_{K_0}\\i\neq j}}|\hat z_i-\hat z_j|\}&\text{if }M_{K_0}\geq2\end{cases} \]
[we write $\eta_{\rm fin}$ rather than $\eta_0$ since $\eta_0=\hat r$ is already defined]
and then $\eta_{\rm fin}\geq\eta_{\rm stop}=10^{-2D_0}$. For $k\in\{0,...,{K_0}-1\}$ and $i\in J_k$ we denote $\mathscr{R}_{i,k}:=B(\hat z_i,\eta_{k+1}')\setminus\overline{B(\hat z_i,\eta_{k})}$, and, for $i\in J_{K_0}$, $\mathscr{R}_i:=B(\hat z_i,\eta_{\rm fin})\setminus\overline{B(\hat z_i,\eta_{K_0})}$. By construction, the previous rings are pairwise disjoint. From Proposition \ref{Prop.ComparaisonAnneau}.\ref{Prop.ComparaisonAnneau1} we have for $k\in\{0,...,{K_0}-1\}$ and $i\in J_k$: \begin{eqnarray*}
\dfrac{1}{2}\int_{\mathscr{R}_{i,k}}\hat\alpha|\nabla\hat v|^2&\geq&\pi {\rm deg}_{\mathscr{R}_{i,k}}(\hat v)^2\left[\ln(\eta_{k+1}/\eta_{k})-\ln(9^{D_0})\right]-o(1) \\&\geq&\pi \sum_{\hat z_j \in B(\hat z_i,\eta_{k+1}')}d_j^2\ln(\eta_{k+1}/\eta_{k})-\pi D_0^2\ln(9^{D_0})-o(1). \end{eqnarray*} And for $i\in J_{K_0}$: \begin{eqnarray*}
\dfrac{1}{2}\int_{\mathscr{R}_{i}}\hat\alpha|\nabla\hat v|^2&\geq&\pi {\rm deg}_{\mathscr{R}_{i}}(\hat v)^2\ln(\eta_{\rm fin}/\eta_{K_0})-o(1) \\&\geq&\pi \sum_{\hat z_j \in B(\hat z_i,\eta_{\rm fin})}d_j^2\ln(\eta_{\rm fin}/\eta_{K_0})-o(1). \end{eqnarray*} By summing the previous lower bounds we get the result. As for Proposition \ref{Prop.ComparaisonAnneau}, the second assertion is obtained in a similar way to the first one. \end{proof} \subsection{Lower bounds in a perforated domain} In this section we state a lower bound for a weighted Dirichlet energy in the domain $\Omega$ perforated by small [but not too small] disks. The philosophy of this lower bound is that, in the case which interests us, we may ignore the weight if the perforations are not too small; it is an effect of the dilution $\lambda\to0$. \begin{prop}\label{VeryNiceCor} Let $\beta\in(0,1)$, $(\tilde\alpha_n)_n\subset L^\infty(\Omega,[\beta^2,1])$ be s.t. \[ K_n:=\sqrt{\int_\Omega(1-\tilde\alpha_n)^2}\to0. \] Let $N\in\mathbb{N}^*$ and ${\bf (z,d)}={\bf (z,d)}^{(n)}\subset(\O^N)^*\times\mathbb{Z}^N$ be s.t. ${\bf d}$ is independent of $n$. We denote $\hbar:=\min_i{\rm dist}(z_i,\partial\Omega)$.
Assume the existence of ${\tilde r}>0$ s.t. ${\tilde r}=o(1)$, \eqref{HypRayClass} holds and s.t. there is $C_1>0$ [independent of $n$] satisfying $\dfrac{{\tilde r}|\ln{\tilde r}|}{\hbar}\leq C_1$. Write $\Omega_{{\tilde r}}:=\Omega\setminus\cup\overline{B(z_i,{\tilde r})}$.
Let $(u_n)_n\subset H^1(\Omega,\mathbb{C})$ satisfying $|u_n|\geq\dfrac{1}{2}$ in $\Omega_{\tilde r}$ and $ {\rm deg}_{\partial B(z_i,{\tilde r})}(u_n)=d_i$ for all $i$.
Assume also \[
L_n:=\sqrt{\int_{\Omega_{\tilde r}}(1-|u_n|^2)^2}\to0. \]
Then \[
\int_{\Omega_{\tilde r}}\tilde\alpha_n|\nabla u_n|^2\geq\int_{\Omega_{\tilde r}}|\nabla \Phi^\zd_\star|^2-(4\beta^{-1}+3)\|\nabla\Phi^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\|\nabla\Phi^\zd_\star\|_{L^2(\Omega_{\tilde r})}\left(K_n+L_n\right)-\mathcal{O}(X) \] where $\Phi^\zd_\star$ is defined in Remark \ref{Remark.DefConjuHarmPhase} and $X$ is defined in \eqref{DefX}. \end{prop} Proposition \ref{VeryNiceCor} is proved in Appendix \ref{Sec.PreuveVeryNiceCor}. \section{Study of the $\varepsilon^s$-bad discs}\label{Sect.ShapInfo}
In this section, in addition to the assumption \eqref{MagneticIntenHyp} on $\lambda,\delta$ and $h_{\rm ex}$, we assume that \eqref{PutaindHypTech} holds. This [technical] hypothesis \eqref{PutaindHypTech} is a little bit more restrictive than \eqref{HypLambdaDeltaConstrFoncTest} [$\delta\sqrt{h_{\rm ex}}\to0$] used to get a nice upper bound.\\
Let $\varepsilon=\varepsilon_n\downarrow0$ and let $((v,A))_\varepsilon=((v_\varepsilon,A_\varepsilon))_\varepsilon$ be a sequence that satisfies \eqref{HypGlobalSurQuasiMin} and \eqref{QuasiMinDef}. Let also $\mu\in(0,1/2)$.
Since \eqref{HypGlobalSurQuasiMin} and \eqref{QuasiMinDef} are gauge invariant we may assume that $(v,A)$ is in the Coulomb gauge.
The goal of this section is to prove that, for sufficiently small $\varepsilon\&\mu$, if $J_\mu\neq\emptyset$ then $d_i=1$ $\&$ ${\rm dist}(z_i,\Lambda)\leq\ln(h_{\rm ex})/\sqrt{h_{\rm ex}}$ $\&$ $z_i\in\omega_\varepsilon$ for all $i\in J_\mu$ and for $i\neq j$, $|z_i-z_j|\geq\ln(h_{\rm ex})/h_{\rm ex}$ with a ``uniform'' distribution of the $z_i$'s around $\Lambda$.
With the notation of Proposition \ref{Prop.ConstrEpsMauvDisk} we let $\Omega_r:=\Omega\setminus\cup_{i\in J_\mu}\overline{B(z_i,r)}$ and $d:=\sum_{i\in J_\mu}|d_i|$.
In view of the goal of this section we may argue on subsequences. First note that from \eqref{DegNonNulEpsS} we have $d_i\neq0$ for all $i$. Up to passing to a subsequence, from \eqref{BorneTotaleSommeDeg}, we may assume that $J_\mu\neq\emptyset$ and is independent of $\varepsilon$, as are the $d_i$'s.
Since we are interested here only in information related to $|v|$ and the $d_i$'s, we may consider that $(v,A)$ is in the Coulomb gauge and we may also change the vector potential. Namely, we may assume that $A=\nabla^\bot\xi$ where $\xi=\xi_\varepsilon\in H^1_0\cap H^2(\Omega,\mathbb{R})$ is the unique solution of \eqref{Eq.MinPb.Pot}. Note that \eqref{QuasiMinDef} still holds.
Consequently, ${\rm curl}(A)\in H^1$ and then with \eqref{FullGLuAEq}$\&$\eqref{EstH3}: $\|\xi\|_{H^3(\Omega)}\leq C\|{\rm curl}(A_\varepsilon)\|_{H^1(\Omega)}\leq C|\ln\varepsilon|$.
From Proposition \ref{Docmpen}, letting $\zeta=\zeta_\varepsilon:=\xi-h_{\rm ex}\xi_0$, we have
\begin{equation}\nonumber \mathcal{F}(v,\nabla^\bot\xi)=h_{\rm ex}^2{\bf J_0}+F(v)+2\pi h_{\rm ex}\sum d_i\xi_0(z_i)+\tilde{V}_{\bf (z,d)}(\zeta)+o(1). \end{equation}
Proposition \ref{Prop.Information.Zeta-a} gives $\tilde{V}_{\bf (z,d)}(\zeta)=\mathcal{O}(1)$. Consequently
\begin{equation}\label{FullDecDiscqVal0-ApplicGrossiere} \mathcal{F}(v,\nabla^\bot\xi)=h_{\rm ex}^2{\bf J_0}+F(v)+2\pi h_{\rm ex}\sum d_i\xi_0(z_i)+\mathcal{O}(1). \end{equation} In particular we have $\mathcal{F}(v,\nabla^\bot\xi)\leq h_{\rm ex}^2{\bf J_0}+o(1)$, thus with \eqref{FullDecDiscqVal0-ApplicGrossiere} we get \begin{equation}\label{FullDecDiscqVal0-PetitChamp} F(v)\leq-2\pi h_{\rm ex}\sum d_i\xi_0(z_i)+\mathcal{O}(1). \end{equation}
From Corollary \ref{Cor.DegNonNul} and Propositions \ref{Prop.ConstrEpsMauvDisk}$\&$\ref{Prop.ProprieteEpsMauvDisk} we deduce $-\sum d_i\xi_0(z_i)=\|\xi_0\|_{L^\infty(\Omega)}\sum d_i+o(1)$ and we immediately obtain \begin{equation}\label{SumDegPos} \sum d_i\geq0. \end{equation} On the other hand, from Proposition \ref{Prop.BorneSupSimple}, we have
\begin{equation}\label{BrneSup-Applic} \mathcal{F}(v,\nabla^\bot\xi)\leq h_{\rm ex}^2 {\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathcal{O}(1). \end{equation} By combining \eqref{FullDecDiscqVal0-ApplicGrossiere} and \eqref{BrneSup-Applic} we get
\begin{equation}\label{BrneSup-ApplicF(v)}
F(v)\leq d\pi\left[b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|\right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathcal{O}(1).
\end{equation}
In conclusion, from \eqref{BorneInfEn} in conjunction with \eqref{BrneSup-ApplicF(v)} we obtain
\begin{equation}\label{BrneSup-ApplicDirEn}
\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2\leq d\pi\left[b^2|\ln r|+(1-b^2)|\ln(\lambda\delta)|\right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathcal{O}(1).
\end{equation}
We first have the following proposition.
\begin{prop}\label{PropDegPinning1} Assume
\begin{equation}\label{HypSurMu}
0<\mu<\min\left\{\dfrac{1}{\mathcal{D}_{K,b}+1},\dfrac{1-b^2}{2(\mathcal{D}_{K,b}+1)}\right\}
\end{equation}
where $\mathcal{D}_{K,b}=\dfrac{3\mathcal{M}_K}{b^2}$ and $\mathcal{M}_K$ is as in Theorem \ref{ThmBorneDegréMinGlob}.
Then there exists $\tilde\varepsilon_\mu'>0$ s.t. for $0<\varepsilon< \tilde\varepsilon_\mu'$ if $J_\mu\neq\emptyset$ then \begin{enumerate} \item $d_i>0$ for all $i$, \item\label{PropDegPinning1.2} ${\rm dist}(z_i,\omega_\varepsilon)<\sqrt\varepsilon$.
\end{enumerate} \end{prop} \begin{proof}{\bf Step 1. We prove that $d_i>0$ for all $i$}\\
We argue by contradiction and we assume the existence of an extraction still denoted by $\varepsilon=\varepsilon_n\downarrow0$ s.t. $J_-:=\{i\in J_\mu\,|\,d_{i}<0\}\neq\emptyset$ [from \eqref{DegNonNulEpsS}, for $0<\varepsilon<\tilde\varepsilon_\mu$, we have $d_i\neq0$ for all $i\in J_\mu$].
From \eqref{SumDegPos} we thus obtain: $\sum_{i\in J_\mu\setminus J_-}d_i\geq d+1$. Then, with the help of \eqref{BorneInfEnWeight}, we obtain \[
F(v)\geq b^2(1-\mu)\pi|\ln\varepsilon|\left(\sum_{i\in J_-}|d_i|+\sum_{i\in J_\mu\setminus J_-}d_i\right)\geq (d+2)\pi(1-\mu)b^2|\ln\varepsilon|+\mathcal{O}(1). \] Consequently \eqref{BrneSup-ApplicF(v)} implies $d(1+o(1))\geq(d+2)(1-\mu)-o(1)$. This inequality gives $\mu\geq\dfrac{2}{d+2}-o(1)$ which is in contradiction with $0<\mu<(\mathcal{D}_{K,b}+1)^{-1}$ for sufficiently small $\varepsilon>0$ [here we used $\mathcal{D}_{K,b}\geq\mathcal{M}_K\geq d$].\\
{\bf Step 2. We prove that ${\rm dist}(z_i,\omega_\varepsilon)<\sqrt\varepsilon$ for all $i$}\\
We argue by contradiction and we assume the existence of a subsequence still denoted by $\varepsilon=\varepsilon_n\downarrow0$ and $i_0\in J_\mu$ s.t. ${\rm dist}(z_{i_0},\omega_\varepsilon)\geq\sqrt\varepsilon$. From \eqref{EstLoinInterfaceU} we have $\inf_{B(z_{i_0},r)}\alpha\geq1-o(|\ln\varepsilon|^{-2})$. Consequently using \eqref{BorneInfEnWeight} we get $F(v,B(z_{i_0},r))\geq d_{i_0}\pi(1-\mu)|\ln\varepsilon|-\mathcal{O}(1)$. Then $F(v)\geq \pi b^2(1-\mu)d|\ln\varepsilon|+\pi(1-b^2)(1-\mu)d_{i_0}|\ln\varepsilon|-\mathcal{O}(1)$.
From \eqref{BrneSup-ApplicF(v)} we obtain \[
db^2|\ln\varepsilon|+\mathcal{O}(\ln|\ln\varepsilon|)\geq b^2(1-\mu)d|\ln\varepsilon|+(1-b^2)(1-\mu)|\ln\varepsilon|-\mathcal{O}(1). \] The last estimate implies $\mu\geq\dfrac{1-b^2}{b^2d+1-b^2}+o(1)$, which is in contradiction with $\mu\leq \dfrac{1-b^2}{2(\mathcal{D}_{K,b}+1)}$ for $\varepsilon>0$ sufficiently small. \end{proof} \begin{defi}\label{DefiSousEnsJ} \begin{itemize} \item For $i\in J_\mu$ we let $y_i\in\delta\cdot\mathbb{Z}^2$ be the unique point s.t. $z_i\in B(y_i,\delta/2)$. Since ${\rm dist}(z_i,\omega_\varepsilon)<\sqrt\varepsilon$ for all $i$, $y_i$ is well defined.
\item We also denote by $\tilde J\subseteq J_\mu$ a set of indices s.t. $\cup_{i\in J_\mu} B(z_i,r)\subset\cup_{k\in\tilde J}B(y_k,2\lambda\delta)$ and for $k,l\in\tilde J$ s.t. $k\neq l$ we have $y_k\neq y_l$. We then let for $k\in\tilde J$, $\tilde{J}_k:=\{i\in J_\mu\,|\,z_i\in B(y_k,2\lambda\delta)\}$. \item We may also select ``good indices'' in order to get well-separated centers $y_k$. Using Lemma \ref{Lem.Separation} with $P=17,\eta=\delta$, there exists a set $\emptyset\neq J^{(y)}\subset J_\mu$ and a number $\kappa\in\{1,17,...,17^{{\rm Card}(J_\mu)-1}\}$ [dependent on $\varepsilon$] s.t. \[
\cup_{k\in\tilde J} B(y_k,\delta)\subset\cup_{k\in J^{(y)}}B(y_k,\kappa\delta)\text{ and for $k,l\in J^{(y)}$ with $k\neq l$ we have }\,|y_k-y_l|\geq16\kappa\delta. \] We denote, for $k\in J^{(y)}$, $\tilde d_k:= {\rm deg}_{\partial B(y_k,\kappa\delta)}(v)$.
\item There exists also $\{J_k\,|\,k\in J^{(y)}\}$, a partition of $J_\mu$ into nonempty sets [dependent on $\varepsilon$], s.t. \[ B(z_i,\delta/2)\subset B(y_k,\kappa\delta)\Longleftrightarrow i\in J_k\text{ for }k\in J^{(y)}. \]
\end{itemize} \end{defi} We are going to prove that $\tilde J= J_\mu$ and for all $k\in J^{(y)}$ we have ${J}_k=\tilde{J}_k$. \begin{prop}\label{PropToutLesDegEg1} Assume \eqref{HypSurMu}. Then, for $\varepsilon>0$ sufficiently small, if $J_\mu\neq\emptyset$ then $d_i=1$ for all $i\in J_\mu$. \end{prop} \begin{proof}
We argue by contradiction and we assume the existence of a subsequence [still denoted by $\varepsilon=\varepsilon_n\downarrow0$] s.t. for all $\varepsilon$ there exists $i_0\in J_\mu$ s.t. $ d_{i_0}\geq2$.
From Corollary \ref{Cor.BorneInfProcheIncl}.\ref{Cor.BorneInfProcheIncl2} applied in $B(y_k,2\lambda\delta)\setminus\cup_{i\in \tilde{J}_k}\overline{B(z_i,r)}$ : \begin{eqnarray*}
\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2
&\geq&\sum_{k\in \tilde J}\dfrac{b^2}{2}\int_{B(y_k,2\lambda\delta)\setminus\cup_{i\in \tilde{J}_k}\overline{B(z_i,r)}}|\nabla v|^2 \\&\geq&\pi b^2\sum_{k\in \tilde J}\sum_{i\in J_k}d_i^2\ln\left(\dfrac{\lambda\delta}{r}\right)-\mathcal{O}(1) \\&\geq&\pi b^2\left(1+\sum_{i\in J_\mu}d_i\right)\ln\left(\dfrac{\lambda\delta}{r}\right)-\mathcal{O}(1). \end{eqnarray*}
We then get $F(v)\geq\pi b^2(d|\ln\varepsilon|+|\ln r|)+\mathcal{O}(|\ln(\lambda\delta)|)$. Since $|\ln\varepsilon|=\mathcal{O}(|\ln r|)$ and $|\ln(\lambda\delta)|+\ln h_{\rm ex}=o(|\ln\varepsilon|)$, this estimate is in contradiction with \eqref{BrneSup-ApplicF(v)} for sufficiently small $\varepsilon$. \end{proof} \begin{prop}\label{PropVortexProcheLambda} Assume $\mu$ satisfies \eqref{HypSurMu} and $J_\mu\neq\emptyset$. Then for sufficiently small $\varepsilon>0$ we have ${\rm dist}({\bf z},\Lambda)\leq\dfrac{\ln h_{\rm ex}}{\sqrt{h_{\rm ex}}}$. \end{prop} The proof of the proposition uses the following obvious lemma whose proof is left to the reader. \begin{lem}\label{LemSommeDegCarréDec} \begin{enumerate} \item\label{LemSommeDegCarréDec1} Let $N\in\mathbb{N}^*$, ${\bf D}\in\mathbb{N}^N$ and for $k\in\{1,...,N\}$ let $N_k\in\mathbb{N}^*$ and ${\bf d}^{(k)}\in\mathbb{N}^{N_k}$ be s.t. $D_k=\sum_id_i^{(k)}$. Then we have \[ \sum_{k=1}^ND_k^2\geq\sum_{k=1}^N\sum_{i=1}^{N_k}(d_i^{(k)})^2. \] Moreover, equality holds if and only if for all $k\in\{1,...,N\}$ and for all $i\in\{1,...,N_k\}$ we have $d_i^{(k)}\in\{0,D_k\}$. \item\label{LemSommeDegCarréDec2} Let $N,d\in\mathbb{N}^*$ and denote $\displaystyle E_d:=\min_{{{\bf D}\in\mathbb{N}^N,\,\sum D_k=d}}\sum_{k=1}^N D_k^2$. Then we have for ${\bf D}\in\mathbb{N}^N$ s.t. $\sum D_k=d$: \[ \sum_{k=1}^N D_k^2=E_d\Longleftrightarrow{\bf D}\in\{\lfloor d/N\rfloor;\lceil d/N\rceil\}^N. \] \end{enumerate} \end{lem} \begin{proof}[Proof of Proposition \ref{PropVortexProcheLambda}] We argue by contradiction and we assume the existence of a subsequence [still denoted by $\varepsilon=\varepsilon_n\downarrow0$] and $i_0\in J_\mu$ s.t. ${\rm dist}(z_{i_0},\Lambda)>\dfrac{\ln h_{\rm ex}}{\sqrt{h_{\rm ex}}}$.
Then there exists $\eta>0$ [independent of $\varepsilon$] s.t. $h_{\rm ex}\xi_0(z_{i_0})\geq-h_{\rm ex}\|\xi_0\|_{L^\infty(\Omega)}+4\eta(\ln h_{\rm ex})^2$. Consequently: $-2\pi h_{\rm ex}\sum \xi_0(z_i)\leq2\pi d h_{\rm ex}\|\xi_0\|_{L^\infty(\Omega)}-4\eta(\ln h_{\rm ex})^2.$
From \eqref{FullDecDiscqVal0-PetitChamp} we get [for small $\varepsilon$] \begin{eqnarray*}
F(v)&\leq& 2\pi d h_{\rm ex}\|\xi_0\|_{L^\infty(\Omega)}-3\eta(\ln h_{\rm ex})^2
\\{[\text{Hyp. \eqref{BorneKMagn}}]}&\leq&\pi d|\ln\varepsilon|-2\eta(\ln h_{\rm ex})^2. \end{eqnarray*} Using \eqref{BorneInfEn} we get \begin{equation}\label{BorneSupContraDistLambda}
\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2\leq d\pi\left[b^2|\ln r|+(1-b^2)|\ln(\lambda\delta)|\right]-\eta(\ln h_{\rm ex})^2. \end{equation}
We let $\chi:=10\max_{k\in\tilde J}{\rm dist}(y_k,\Lambda)$ and for $p\in\Lambda$, $D_p:= {\rm deg}_{\partial B(p,\chi)}(v)$, $J_p:=\{k\in J^{(y)}\,|\,y_k\in B(p,\chi)\}$. For later use we claim that $\chi\geq\ln(h_{\rm ex})/\sqrt{h_{\rm ex}}$ and then \begin{equation}\label{Hyp.DistLambdaContra}
\lambda|\ln\chi|/\chi\to0. \end{equation} We have [see Definition \ref{DefiSousEnsJ} for notation] \begin{eqnarray}\nonumber
&&\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2
\\\nonumber&\geq&\sum_{k\in \tilde J}\dfrac{1}{2}\int_{B(y_k,2\lambda\delta)\setminus\cup_{i\in \tilde{J}_k}\overline{B(z_i,r)}}\alpha|\nabla v|^2+
\sum_{k\in \tilde J}\dfrac{1}{2}\int_{B(y_k,\delta/3)\setminus\overline{B(y_k,2\lambda\delta)}}\alpha|\nabla v|^2+ \\\label{BigDecomp}&&+
\sum_{p\in\Lambda}\dfrac{1}{2}\int_{B(p,\chi)\setminus\cup_{k\in J_p}\overline{B(y_k,\kappa\delta)}}\alpha|\nabla v|^2
+\dfrac{1}{2}\int_{\Omega\setminus\cup_{p\in\Lambda}\overline{B(p,\chi)}}\alpha|\nabla v|^2. \end{eqnarray} It is clear that, for $k\in \tilde J$, we may use Corollary \ref{Cor.BorneInfProcheIncl}.\ref{Cor.BorneInfProcheIncl2} in $B(y_k,2\lambda\delta)\setminus\cup_{i\in \tilde{J}_k}\overline{B(z_i,r)}$ in order to get
\begin{equation}\label{OnCompteRelFin1}
\sum_{k\in \tilde J}\dfrac{1}{2}\int_{B(y_k,2\lambda\delta)\setminus\cup_{i\in \tilde{J}_k}\overline{B(z_i,r)}}\alpha|\nabla v|^2\geq b^2d\pi\ln\left(\dfrac{\lambda\delta}{r}\right)+\mathcal{O}(1). \end{equation}
Let $k\in \tilde J$; from \eqref{EstLoinInterfaceU} and Proposition \ref{Prop.ComparaisonAnneau}.\ref{Prop.ComparaisonAnneau2} we obtain \begin{equation}\label{OnCompteRelFin2}
\dfrac{1}{2}\int_{B(y_k,\delta/3)\setminus\overline{B(y_k,2\lambda\delta)}}\alpha|\nabla v|^2\geq\pi {\rm deg}_{\partial B(y_k,2\lambda\delta)}(v)^2|\ln\lambda|+\mathcal{O}(1). \end{equation}
Let $p\in\Lambda$ be s.t. $D_p\neq0$; Corollary \ref{Cor.BorneInfProcheIncl}.\ref{Cor.BorneInfProcheIncl1} gives \[
\dfrac{1}{2}\int_{B(p,\chi)\setminus\cup_{k\in J_p}\overline{B(y_k,\kappa\delta)}}\alpha|\nabla v|^2\geq\pi\sum_{k\in J_p}\tilde d_k^2\ln \left(\dfrac{\chi}{\delta}\right)+\mathcal{O}(1). \] From Propositions \ref{MinimalMapHomo}$\&$\ref{Prop.EnergieRenDef}$\&$\ref{VeryNiceCor} [with \eqref{Hyp.DistLambdaContra}] we deduce \[
\dfrac{1}{2}\int_{\Omega\setminus\cup_{p\in\Lambda}\overline{B(p,\chi)}}\alpha|\nabla v|^2\geq\pi\sum_{p\in\Lambda}D_p^2|\ln\chi|+\mathcal{O}(1). \] From Lemma \ref{LemSommeDegCarréDec}.\ref{LemSommeDegCarréDec1} we have $d\leq\sum_{k\in\tilde J} {\rm deg}_{\partial B(y_k,2\lambda\delta)}(v)^2\leq\sum_{p\in\Lambda}\sum_{k\in J_p}\tilde d_k^2\leq\sum_{p\in\Lambda}D_p^2$. Then we get \[
\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2\geq d\pi\left[b^2|\ln r|+(1-b^2)|\ln(\lambda\delta)|\right]+\mathcal{O}(1). \] This estimate is in contradiction with \eqref{BorneSupContraDistLambda} for sufficiently small $\varepsilon$. \end{proof} \begin{prop}\label{Prop.BonEcartement} Assume $\mu$ satisfies \eqref{HypSurMu} and let $\varepsilon=\varepsilon_n\downarrow0$ be a sequence. \begin{enumerate}
\item If ${\rm Card}(J_\mu)\geq2$ then for $\varepsilon>0$ sufficiently small and for $i\neq j$, $|z_i-z_j|\geq h_{\rm ex}^{-1}\ln h_{\rm ex}$. \item For $\varepsilon>0$ sufficiently small we have for $p\in\Lambda$, $ {\rm deg}_{\partial B(p,h_{\rm ex}^{-1/2}\ln h_{\rm ex})}(v)\in\{\lfloor d/N_0\rfloor;\lceil d/N_0\rceil\}$. \end{enumerate} \end{prop} The proof of Proposition \ref{Prop.BonEcartement} is postponed to Appendix \ref{Proof.Prop.BonEcartement}.
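The balanced degrees appearing in Proposition \ref{Prop.BonEcartement} reflect Lemma \ref{LemSommeDegCarréDec}.\ref{LemSommeDegCarréDec2}, which can be checked by brute force for small values, as in the following Python sketch [the function names are ours]:

```python
from itertools import product

def sum_sq(D):
    return sum(x * x for x in D)

def balanced_minimizers(N, d):
    """Enumerate all D in N^N with sum(D) = d and return the minimum E_d of
    sum D_k^2 together with every minimizer; the lemma predicts that the
    minimizers are exactly the vectors with entries in {floor(d/N), ceil(d/N)}."""
    cands = [D for D in product(range(d + 1), repeat=N) if sum(D) == d]
    E_d = min(sum_sq(D) for D in cands)
    return E_d, sorted(D for D in cands if sum_sq(D) == E_d)

# d = 7 total degree distributed over N = 3 points of Lambda
E_d, mins = balanced_minimizers(3, 7)
```

With $N=3$ and $d=7$ one finds $E_7=17$, attained exactly at the permutations of $(2,2,3)$, i.e. at the vectors with entries in $\{\lfloor 7/3\rfloor,\lceil 7/3\rceil\}$.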
Since $\lambda\delta h_{\rm ex}\to0$, Proposition \ref{Prop.BonEcartement} implies that each periodicity cell contains at most one disc $B(z_i,r)$ with $i\in J_\mu$.
Following the argument in \cite{Publi4} [proof of the third part in Proposition 3.6, see Appendix D-Section 4.5], we may refine Proposition \ref{PropDegPinning1}.\ref{PropDegPinning1.2}. \begin{prop}\label{Prop.PinningComplet} Assume $\mu$ satisfies \eqref{HypSurMu}. Then there is $\eta_{\omega,b}>0$ depending only on $\omega$ and $b$ s.t. for $i\in J_\mu$ we have $B(z_i,2\eta_{\omega,b}\lambda\delta)\subset\omega_\varepsilon$. \end{prop} \begin{cor} Assume $\mu$ satisfies \eqref{HypSurMu}. Then we have \begin{equation}\label{Onfaitcommeonpeut}
\int_{\Omega\setminus \cup_{i\in J_\mu}B(z_i,\lambda^2\delta^2)}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2=\mathcal{O}(|\ln(\lambda\delta)|). \end{equation} Moreover \begin{equation}\label{Onfaitcommeonpeut...}
\text{$|v|=1+o(1)$ in $\Omega\setminus \cup_{i\in J_\mu}B(z_i,2\lambda^2\delta^2)$.} \end{equation} \end{cor} \begin{proof} We have \[
\dfrac{b^4}{4}\int_{\Omega\setminus \cup_{i\in J_\mu}B(z_i,\lambda^2\delta^2)}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2\leq F(v)-\sum_{i\in J_\mu}F(v,B(z_i,\lambda^2\delta^2)). \] For $i\in J_\mu$, from Corollary \ref{Prop.ComparaisonAnneau}.\ref{Prop.ComparaisonAnneau2}: \begin{eqnarray*}
F(v,B(z_i,\lambda^2\delta^2))&\geq&\dfrac{b^2}{2}\int_{B(z_i,\lambda^2\delta^2)\setminus\overline{B(z_i,r)}}|\nabla v|^2+F(v,B(z_i,r))
\\&\geq&2b^2\pi\ln(\lambda\delta)+b^2\pi|\ln\varepsilon|+\mathcal{O}(1). \end{eqnarray*} Since, by Proposition \ref{Prop.BonEcartement}, the discs $B(z_i,\lambda^2\delta^2)$ are pairwise disjoint, we obtain with \eqref{BrneSup-ApplicF(v)}: \[
\dfrac{b^4}{4}\int_{\Omega\setminus \cup_{i\in J_\mu}B(z_i,\lambda^2\delta^2)}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2\leq \mathcal{O}(|\ln(\lambda\delta)|). \]This estimate is equivalent to \eqref{Onfaitcommeonpeut}.
We are going to prove \eqref{Onfaitcommeonpeut...}. We argue by contradiction and assume the existence of an extraction [still denoted $\varepsilon=\varepsilon_n\downarrow0$], of $t\in(0,1)$ and of $(x_n)_n\subset\Omega\setminus \cup_{i\in J_\mu}B(z_i,2\lambda^2\delta^2)$ s.t. $|v_{\varepsilon_n}(x_n)|<t$.
By Proposition \ref{Prop.EtaEllpProp}, there exists $C_t>0$ s.t. for sufficiently large $n$: \begin{equation}\label{BarEqBar}
\int_{B(x_n,\sqrt{\varepsilon_n})\cap\Omega}|\nabla v_{\varepsilon_n}|^2+\dfrac{1}{\varepsilon_n^2}(1-|v_{\varepsilon_n}|^2)^2> C_t|\ln\varepsilon_n|. \end{equation} Moreover, for $n$ sufficiently large so that $\sqrt{\varepsilon_n}<\lambda^2\delta^2$, we have $[B(x_n,\sqrt{\varepsilon_n})\cap\Omega]\subset\Omega\setminus \cup_{i\in J_\mu}B(z_i,\lambda^2\delta^2)$. This inclusion is in contradiction with \eqref{Onfaitcommeonpeut} and \eqref{BarEqBar}. \end{proof}
From Proposition \ref{Prop.PinningComplet}, for $i\in J_\mu$, we have $\hat z_i:=\dfrac{z_i-y_i}{\lambda\delta}\in\omega$ where $y_i\in\delta\mathbb{Z}^2$ is s.t. $z_i\in B(y_i,\lambda\delta)$. Moreover, up to passing to a subsequence, we may assume that, for $i\in J_\mu$, there exists $\hat z^0_i\in\omega$ s.t. $\hat z_i\to\hat z_i^0$.
We start with the following proposition. \begin{prop}\label{Prop.BorneInfTrèsFine}We have the following sharp lower bound: \begin{eqnarray*} \mathcal{F}(v,A)&\geq&h_{\rm ex}^2 {\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathscr{L}_2(d)+\phantom{gsgsgsgs} \\&&\phantom{gsgsgsgs}+\sum_{i\in J_\mu}[W^{\rm micro}(\hat{z}_i^0)-\min_\omega W^{\rm micro}]+[\mathcal{W}_{d}({\bf D})-\overline{\W}_{d}]+o(1) \end{eqnarray*} where $\overline{\W}_{d}=\min_{\Lambda_{d}} \mathcal{W}_{d}$ is defined in \eqref{CouplageEnergieRen} and \begin{equation}\label{DefWdOpD} \mathcal{W}_{d}({\bf D}):=W_{N_0}^{\rm macro}{({\bf p},{\bf D})}+\sum_{p\in\Lambda}C_{p,D_p}+\tilde{V}[\zeta_{({\bf p},{\bf D})}] \end{equation} where for $p\in\Lambda$, $D\in\mathbb{N}^*$, $C_{p,D}$ is defined in \eqref{DefCpD}, $C_{p,0}:=0$ and $\tilde{V}[\zeta_{({\bf p},{\bf D})}]$ is defined in Proposition \ref{Prop.Information.Zeta-a}. \end{prop} We split the proof of Proposition \ref{Prop.BorneInfTrèsFine} into several lemmas.
The first step is the following lemma, a "macroscopic/mesoscopic" version of Proposition \ref{Prop.BorneInfTrèsFine}. \begin{lem}\label{BorneTresFineSansIncl}
Let $\rho=|v|$ and $w=v/\rho$ in $\Omega\setminus\cup_{i\in J_\mu}\overline{B(y_i,\delta/3)}$. We then have \begin{eqnarray*}
\dfrac{1}{2}\int_{\Omega\setminus\cup_{i\in J_\mu}\overline{B(y_i,\delta/3)}}\alpha\rho^2|\nabla w|^2&\geq& d\pi|\ln(\delta/3)|-\pi\sum_{\substack{p\in\Lambda\\ D_p\geq2}}\sum_{\substack{i,j\in J_p\\i\neq j}}\ln|z_i-z_j|+ \\&&\phantom{jdjdjdjdjdjdjdjd}+W^{\rm macro}_{N_0}{({\bf p},{\bf D})}+o(1). \end{eqnarray*} \end{lem} \begin{proof}
On the one hand, from Proposition \ref{PropVortexProcheLambda} and letting $\chi:=h_{\rm ex}^{-1/4}$ we have $|v|\geq1/2$ in $\Omega\setminus\cup_{p\in\Lambda}\overline{B(p,\chi)}$. Then, from Proposition \ref{VeryNiceCor}, we have \begin{equation}\label{EstimationFineLoinLambda}
\dfrac{1}{2}\int_{\Omega\setminus\cup_{p\in\Lambda}\overline{B(p,\chi)}}\alpha|\nabla v|^2\geq\pi\sum_{p\in\Lambda}D_p^2|\ln\chi|+W^{\rm macro}_{N_0}{({\bf p},{\bf D})}+o(1). \end{equation}
On the other hand, from Proposition \ref{Prop.BonEcartement}, if ${\rm Card}(J_\mu)\geq2$ then, for $i,j\in J_\mu$ with $i\neq j$, we have {$|y_i-y_j|\geq h_{\rm ex}^{-1}\ln(h_{\rm ex})-2\lambda\delta$}.
Consequently, if $D_p= {\rm deg}_{\partial B(p,\eta_\Omega)}(v)\neq0$ [$\eta_\Omega$ is defined in \eqref{DefEtaO}], letting $J_p:=\{i\in J_\mu\,|\,z_i\in B(p,\eta_\Omega)\}$, $\mathcal{D}_p:=B(p,\chi)\setminus\cup_{i\in J_p}\overline{B(y_i,h_{\rm ex}^{-1})}$, \[ \begin{array}{cccc} \Phi:&B(p,\chi)&\to&\mathbb{D}=B(0,1)\\&x&\mapsto&\dfrac{x-p}{\chi} \end{array},
\] $\hat v=v\circ\Phi^{-1}$, $\hat\alpha=\alpha\circ\Phi^{-1}$, $\hat\mathcal{D}_p:=\Phi(\mathcal{D}_p)$ and $\hat y_i:=\Phi(y_i)$ for $y_i\in B(p,\chi)$, then we may apply Proposition \ref{VeryNiceCor}. Writing $(\hat{\bf y}_p,{\bf 1}):=\{(\hat y_i,1)\,|\,i\in J_p\}$, Proposition \ref{VeryNiceCor} gives: \begin{equation}\label{EstimationAutourLambda}
\dfrac{1}{2}\int_{\mathcal{D}_p}\alpha|\nabla v|^2=\dfrac{1}{2}\int_{\hat \mathcal{D}_p}\hat\alpha|\nabla \hat v|^2\geq\pi D_p\ln(\chi h_{\rm ex})+W^{\rm macro}_{D_p,\mathbb{D}}(\hat{\bf y}_p,{\bf 1})+o(1) \end{equation} where $W^{\rm macro}_{D_p,\mathbb{D}}$ is the macroscopic renormalized energy in the unit disc $\mathbb{D}$ with $D_p$ points.
From Proposition 1 in \cite{LR1} we have \[
W^{\rm macro}_{D_p,\mathbb{D}}(\hat{\bf y}_p,{\bf 1})=-\pi\sum_{\substack{i,j\in J_p\\i\neq j}}\left[\ln|\hat y_i-\hat y_j|-\ln|1-\hat y_i\overline{\hat y}_j|\right]+\pi\sum_{i\in J_p}\ln(1-|\hat y_i|^2). \]
Using Proposition \ref{PropVortexProcheLambda}, we get for $i\in J_p$, $|\hat y_i|\leq\dfrac{h_{\rm ex}^{-1/2}\ln h_{\rm ex}}{\chi}=o(1)$ and then \begin{equation}\label{EstimationAutourLambdaBis} W^{\rm macro}_{D_p,\mathbb{D}}(\hat{\bf y}_p,{\bf 1})
=-\pi\sum_{\substack{i,j\in J_p\\i\neq j}}\ln|y_i- y_j|-\pi(D_p^2-D_p)|\ln\chi|+o(1). \end{equation} For $i\in J_\mu$, we let $\mathscr{R}_i:=B(y_i,h_{\rm ex}^{-1})\setminus\overline{B(y_i,\delta/3)}$. With Proposition \ref{Prop.ComparaisonAnneau}.\ref{Prop.ComparaisonAnneau1} we obtain \begin{equation}\label{EstimationAutourInclusion}
\dfrac{1}{2}\int_{\mathscr{R}_i}\alpha|\nabla v|^2\geq\pi|\ln\left(\delta h_{\rm ex}/3\right)|. \end{equation} By combining \eqref{EstimationFineLoinLambda}, \eqref{EstimationAutourLambda}, \eqref{EstimationAutourLambdaBis} and \eqref{EstimationAutourInclusion} the result is proved. \end{proof} The second step is a "microscopic" version of Proposition \ref{Prop.BorneInfTrèsFine}. \begin{lem}\label{Prop.EstFineAtraversIncl}
If $r\leq{\tilde r}\leq\lambda^2\delta^2$, then: \[
\sum_{i\in J_\mu}F(v,\mathscr{R}_i)\geq d\pi\left(|\ln(3\lambda)|+b^2|\ln(\lambda\delta/{\tilde r})|\right)+ \sum_{i\in J_\mu}W^{\rm micro}(\hat z_i^0)+o(1) \] where, for $i\in J_\mu$, $\mathscr{R}_i:=B(y_i,\delta/3)\setminus\overline{B(z_i,{\tilde r})}$. \end{lem} \begin{proof}
We first note that in order to prove Lemma \ref{Prop.EstFineAtraversIncl} [up to replacing $v$ by $\underline{v}$] we may assume $\rho=|v|\leq1$. We may also assume \begin{equation}\label{BorneATraversIncl1}
\sum_{i\in J_\mu}F_\varepsilon(v,\mathscr{R}_i)=\mathcal{O}(|\ln(\lambda\delta)|) \end{equation} since otherwise there is nothing to prove.
Fix $i\in J_\mu$ and let $v_\star$ be a minimizer of $\displaystyle F_\varepsilon(\cdot,\mathscr{R}_i)$ in $H^1(\mathscr{R}_i,\mathbb{C})$ with the Dirichlet boundary condition ${\rm tr}_{\partial \mathscr{R}_i}(\cdot)={\rm tr}_{\partial \mathscr{R}_i}(v)$. Note that such minimizers exist and we have $F_\varepsilon(v_\star,\mathscr{R}_i)\leq F_\varepsilon(v,\mathscr{R}_i)=\mathcal{O}(|\ln(\lambda\delta)|)$.
The key ingredient consists in noting that, since $v_\star$ is a minimizer of a weighted Ginzburg-Landau type energy, we may use a sharp interior $\eta$-ellipticity result. Namely, following the strategy of \cite{Publi3} to prove Lemma 1 [see Appendix C in \cite{Publi3}], by using the first part of the proof [the interior argument which does not require any information on ${\rm tr}_{\partial \mathscr{R}_i}(v_\star)$], we get \begin{equation}\label{Putaind'etaTruc}
\rho_\star:=|v_\star|\geq1-\mathcal{O}(\sqrt{|\ln(\lambda\delta)|/|\ln\varepsilon|})\text{ in }\tilde\mathscr{R}_i:=B(y_i,\delta/3-\varepsilon^{1/4})\setminus\overline{B(z_i,{\tilde r}+\varepsilon^{1/4})}. \end{equation} Write in $\tilde\mathscr{R}_i$: $v_\star=\rho_\star w_\star$ where $w_\star\in H^1(\tilde\mathscr{R}_i,\mathbb{S}^1)$.
Note that by \eqref{CondOnLambdaDelta} [namely $|\ln(\lambda\delta)|=\mathcal{O}(\ln|\ln\varepsilon|)$] we have $|\ln(\lambda\delta)|^3/|\ln\varepsilon|=o(1)$ and then from \eqref{BorneATraversIncl1} $\&$ \eqref{Putaind'etaTruc} [and also $\rho_\star\leq1$] we have \[
\int_{\tilde\mathscr{R}_i}\alpha\rho_\star^2|\nabla w_\star|^2=\int_{\tilde\mathscr{R}_i}\alpha|\nabla w_\star|^2+o(1). \] We then immediately get: \[
F(v,\mathscr{R}_i)\geq F(v_\star,\mathscr{R}_i)\geq \dfrac{1}{2}\int_{\tilde\mathscr{R}_i}\alpha|\nabla w_\star|^2+o(1)\geq\inf_{\substack{\tilde w\in H^1(\tilde\mathscr{R}_i,\mathbb{S}^1)\\ {\rm deg}(\tilde w)=1}}\frac{1}{2}\int_{\tilde\mathscr{R}_i}\alpha|\nabla\tilde w|^2+o(1). \] It suffices now to claim that from \eqref{DefRenMicroEn3} we have \[
\inf_{\substack{\tilde w\in H^1(\tilde\mathscr{R}_i,\mathbb{S}^1)\\ {\rm deg}(\tilde w)=1}}\frac{1}{2}\int_{\tilde\mathscr{R}_i}\alpha|\nabla\tilde w|^2=\pi\left(|\ln(3\lambda)|+b^2|\ln(\lambda\delta/{\tilde r})|\right)+ W^{\rm micro}(\hat z_i^0)+o(1) \]
in order to get $F(v,\mathscr{R}_i)\geq \pi\left(|\ln(3\lambda)|+b^2|\ln(\lambda\delta/{\tilde r})|\right)+ W^{\rm micro}(\hat z_i^0)+o(1)$. By summing these lower bounds we get the result. \end{proof} \begin{lem} There exists $r\leq{\tilde r}=o(\lambda^2\delta^2)$ s.t. for $i\in J_\mu$ we have \[ F[v,B(z_i,{\tilde r})]\geq b^2[\pi\ln({\tilde r}/\varepsilon)+\pi\ln b +\gamma]+o(1). \] \end{lem} \begin{proof} We first note that we have \begin{equation}\label{BorneDansAnneaPourConstruBonneTrace} \sum_{i\in J_\mu}F[v,B(z_i,\lambda^2\delta^2)\setminus\overline{B(z_i,r)}]\leq db^2 \pi\ln(\lambda^2\delta^2/r)+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathcal{O}(1). \end{equation} The above estimate is proved by contradiction, assuming the existence of an extraction [still denoted by $\varepsilon=\varepsilon_n\downarrow0$] and of a sequence $R_n\uparrow\infty$ s.t. \[ \sum_{i\in J_\mu}F[v,B(z_i,\lambda^2\delta^2)\setminus\overline{B(z_i,r)}]\geq db^2\pi\ln(\lambda^2\delta^2/r)+\mathscr{L}_1(d)\ln h_{\rm ex}+R_n. \] From \eqref{BorneInfEnWeight} we get \[ \sum_{i\in J_\mu}F[v,B(z_i,\lambda^2\delta^2)]\geq db^2\pi\ln(\lambda^2\delta^2/\varepsilon)+\mathscr{L}_1(d)\ln h_{\rm ex}+R_n+\mathcal{O}(1). \] Using Lemmas \ref{BorneTresFineSansIncl} and \ref{Prop.EstFineAtraversIncl} we get an estimate which contradicts \eqref{BrneSup-ApplicF(v)}.
By a classical argument, for sufficiently small $\varepsilon$, there exists $\sqrt{r}\leq{\tilde r}\leq r^{1/4}$ s.t. for $i\in J_\mu$ \[
\dfrac{{\tilde r}}{2}\int_{\partial B(z_i,{\tilde r})}|\nabla v|^2+\dfrac{b^2}{2\varepsilon^2}(1-|v|^2)^2\leq \pi+\dfrac{4\mathscr{L}_1(d)\ln h_{\rm ex}+\mathcal{O}(1)}{|\ln r|}. \]
Arguing as in the proof of Proposition \ref{Prop.ConstrEpsMauvDisk} [Step 3 in Appendix \ref{SectAppenPreuveConstructionPetitDisque}] it is clear that we may assume $|v|\geq 1-|\ln\varepsilon|^{-2}$ on $ \partial B(z_i,{\tilde r})$ for $i\in J_\mu$.
We now define for $i\in J_\mu$, $\rho_i:={\rm tr}_{\partial B(z_i,{\tilde r})}(|v|)$, $w_i:={\rm tr}_{\partial B(z_i,{\tilde r})}(v/|v|)$. We immediately get \[
\dfrac{{\tilde r}}{2}\int_{\partial B(z_i,{\tilde r})}|\nabla w_i|^2=\pi+o(1),\,\dfrac{{\tilde r}}{2}\int_{\partial B(z_i,{\tilde r})}|\nabla \rho_i|^2+\dfrac{b^2}{2\varepsilon^2}(1-\rho_i^2)^2=o(1). \] On the other hand, since $ {\rm deg}(w_i)=1$, there exists $\phi_i=\phi_{i,\varepsilon}\in H^1((0,2\pi),\mathbb{R})$ s.t. $\phi_i(0)=\phi_i(2\pi)\in[0,2\pi)$ and $w_i\left(z_i+{\tilde r} \e^{\imath\theta}\right)=\e^{\imath(\theta+\phi_i(\theta))}$. A direct calculation gives: \[
2\pi+o(1)={\tilde r}\int_{\partial B(z_i,{\tilde r})}|\partial_\tau w_i|^2=\int_0^{2\pi}\left|(\phi_i+\theta)'\right|^2=2\pi+\int_0^{2\pi}\left|\phi'_i\right|^2. \] The last equalities imply $\phi_i'\to0$ in $L^2(0,2\pi)$ and then $\phi_i-\phi_i(0)\to0$ in $L^2(0,2\pi)$. Hence, up to passing to a subsequence, we get the existence of $\theta_i\in[0,2\pi]$ s.t. $\phi_i\to\theta_i$ in $H^1(0,2\pi)$.
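For the reader's convenience, the last equality in the display above only uses that $\phi_i(2\pi)=\phi_i(0)$:
\[
\int_0^{2\pi}\left|(\phi_i+\theta)'\right|^2=\int_0^{2\pi}(1+\phi_i')^2=2\pi+2[\phi_i(2\pi)-\phi_i(0)]+\int_0^{2\pi}\left|\phi_i'\right|^2=2\pi+\int_0^{2\pi}\left|\phi_i'\right|^2.
\]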
We now define $\tilde w_i\in H^1(B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})},\mathbb{S}^1)$ by \[ \tilde w_i(z_i+s\e^{\imath\theta})=\e^{\imath[\theta+\tilde\phi_i(z_i+s\e^{\imath\theta})]} \text{ with } \tilde\phi_i(z_i+s\e^{\imath\theta})=\left[\phi_i(\theta)-\theta_i\right]\dfrac{2{\tilde r}-s}{{\tilde r}}+\theta_i. \]
A direct calculation gives $\int_{B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}}|\nabla\tilde\phi_i|^2=o(1)$ and then \[
\dfrac{1}{2}\int_{B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}}|\nabla\tilde w_i|^2=\dfrac{1}{2}\int_{B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}}|\nabla[\theta+\tilde\phi_i(z_i+s\e^{\imath\theta})]|^2+o(1)=\pi\ln(2)+o(1). \] Let $\tilde\rho_i\in H^1[B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})},\mathbb{R}^+]$ be s.t.
$\tilde\rho_i(z_i+s\e^{\imath\theta}):=\rho_i(z_i+{\tilde r}\e^{\imath\theta})\dfrac{2{\tilde r}-s}{{\tilde r}}+\dfrac{s-{\tilde r}}{{\tilde r}}$.
We then have $F[\tilde\rho_i,B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}]=o(1)$. Consequently, letting $v_i:=\tilde\rho_i\tilde w_i\in H^1[B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})},\mathbb{C}]$ we have \[
F[v_i,B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}]=\dfrac{b^2}{2}\int_{B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}}|\nabla \tilde w_i|^2+o(1). \]
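As a sanity check, the value $\pi\ln(2)$ obtained above is the standard Dirichlet energy of the degree-one phase $\theta$ on the annulus:
\[
\dfrac{1}{2}\int_{B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}}|\nabla\theta|^2=\dfrac{1}{2}\int_0^{2\pi}\int_{{\tilde r}}^{2{\tilde r}}\dfrac{1}{s^2}\,s\,{\rm d}s\,{\rm d}\theta=\pi\ln(2).
\]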
In order to conclude we let $u_i:=\begin{cases}v_i&\text{in }B(z_i,2{\tilde r})\setminus\overline{B(z_i,{\tilde r})}\\ v&\text{in }B(z_i,{\tilde r})\end{cases}$.
It is clear that $u_i(z_i+2{\tilde r}\e^{\imath\theta})=\e^{\imath\theta_i}\e^{\imath\theta}$ and then, using Lemma IX.1 in \cite{BBH}, we get \[ F[u_i,B(z_i,2{\tilde r})]\geq b^2[\pi\ln(2{\tilde r}/\varepsilon)+\gamma+\pi\ln b ]+o(1). \] The last estimate ends the proof of the lemma. \end{proof} \begin{proof}[Proof of Proposition \ref{Prop.BorneInfTrèsFine}] From the three previous lemmas we have
\begin{eqnarray}\nonumber F(v)&\geq& d\pi\left[b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|\right]-\pi\sum_{\substack{p\in\Lambda\\ D_p\geq2}}\sum_{\substack{i,j\in J_p\\i\neq j}}\ln|z_i-z_j|+ \\\label{BigBig1}&&+W^{\rm macro}_{N_0}{({\bf p},{\bf D})}+\sum_{i\in J_\mu}W^{\rm micro}(\hat z_i^0)+db^2[\pi\ln b +\gamma]+o(1). \end{eqnarray} On the other hand, with Corollary \ref{Cor.DecompPourCluster} [estimate \eqref{NiceDecSharpSplitTildeV}] we get \begin{equation}\label{BigBig2} \mathcal{F}(v,A)\geq h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{i\in J_\mu}\xi_0(z_i)+F(v)+\tilde{V}[\zeta_{({\bf p},{\bf D})}]+o(1) \end{equation} where $\zeta_{({\bf p},{\bf D})}$ is defined in Proposition \ref{PropPartieMinimalSandH0}.
From Proposition \ref{EnergieRenMeso} [estimate \eqref{DevMesoscopicDef}], for $p\in\Lambda$ s.t. $D_p\geq2$, we have: \begin{equation}\label{BigBig3}
-\pi\sum_{\substack{i,j\in J_p\\i\neq j}}\ln|z_i-z_j|+2\pi h_{\rm ex}\sum_{i\in J_p}[\xi_0(z_i)-\xi_0(p)]\geq\dfrac{\pi}{2}(D_p^2-D_p)\ln\left(\dfrac{h_{\rm ex}}{D_p}\right)+C_{p,D_p}+o(1). \end{equation} By combining \eqref{BigBig1}, \eqref{BigBig2} and \eqref{BigBig3} [and also $\xi_0\leq0$] we obtain \begin{eqnarray}\nonumber
\mathcal{F}(v,A)&\geq&h_{\rm ex}^2{\bf J_0}+d\pi\left[b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|\right]-2\pi dh_{\rm ex}\|\xi_0\|_{L^\infty(\Omega)}+ \\\nonumber&&+\dfrac{\pi}{2}\sum_{\substack{p\in\Lambda\\ D_p\geq2}}\left[(D_p^2-D_p)\ln\left(\dfrac{h_{\rm ex}}{D_p}\right)+C_{p,D_p}\right]+W^{\rm macro}_{N_0}{({\bf p},{\bf D})}+ \\\label{BigBig4}&&+\sum_{i\in J_\mu}W^{\rm micro}(\hat z_i^0)+\tilde{V}[\zeta_{({\bf p},{\bf D})}]+db^2[\pi\ln b +\gamma]+o(1). \end{eqnarray} It suffices to see that, since ${\bf D}\in\Lambda_{d}$, from the definition of $\mathscr{L}_1(d)$ we have \[ \dfrac{\pi}{2}\sum_{\substack{p\in\Lambda\\ D_p\geq2}}(D_p^2-D_p)\ln\left(\dfrac{h_{\rm ex}}{D_p}\right)=\mathscr{L}_1(d)\ln{h_{\rm ex}}+\dfrac{\pi}{2}\sum_{{p\in\Lambda}}(D_p-D_p^2)\ln\left({D_p}\right) \] in order to deduce from \eqref{BigBig4} that \begin{eqnarray*}
\mathcal{F}(v,A)&\geq&h_{\rm ex}^2{\bf J_0}+d\pi\left[-2 h_{\rm ex}\|\xi_0\|_{L^\infty(\Omega)}+b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|\right]+ \\&&+\mathscr{L}_1(d)\ln{h_{\rm ex}}+\sum_{i\in J_\mu}W^{\rm micro}(\hat z_i^0)+\mathcal{W}_{d}({\bf D})+ \\&&+\dfrac{\pi}{2}\sum_{{p\in\Lambda}}(D_p-D_p^2)\ln\left({D_p}\right)+db^2[\pi\ln b +\gamma]+o(1) \end{eqnarray*}
where $\mathcal{W}_{d}({\bf D})$ is defined in \eqref{DefWdOpD}. This estimate with the definition of $H^0_{c_1} $ and $\overline{\W}_d$ [see \eqref{CouplageEnergieRen}$\&$\eqref{DefH0c1}$\&$\eqref{DefGammaBo}] ends the proof of the proposition. \end{proof} \section{The first critical field and the location of the vorticity defects}\label{Sec.LocationVorticity} We assume that $\lambda,\delta,h_{\rm ex}$ satisfy \eqref{CondOnLambdaDelta} and \eqref{BorneKMagn} for some $K\geq0$ independent of $\varepsilon$. We assume also \eqref{PutaindHypTech}. We consider a sequence $\varepsilon=\varepsilon_n\downarrow0$.
As in the previous section we focus on sequences of quasi-minimizers of $\mathcal{F}$. For simplicity we write $(v,A)$ instead of $(v_\varepsilon,A_\varepsilon)$. We assume that \eqref{HypGlobalSurQuasiMin}$\&$\eqref{QuasiMinDef} hold and, since \eqref{HypGlobalSurQuasiMin}$\&$\eqref{QuasiMinDef} are gauge invariant, we may also assume that $(v,A)$ is in the Coulomb gauge.
From the above results, for a fixed $\mu>0$ sufficiently small [satisfying \eqref{HypSurMu}] and for $\varepsilon>0$ sufficiently small, there exists a [finite] set $\mathcal{Z}\subset\Omega$, depending on $\varepsilon$ and possibly empty, s.t., letting $d:={\rm Card}(\mathcal{Z})$ [we write $\mathcal{Z}=\{z_1,...,z_d\}$]: \begin{itemize}
\item If $d=0$, then $|v|\geq1/2$ in $\Omega$.
\item If $d>0$, then $|z_i-z_j|\gtrsim h_{\rm ex}^{-1}\ln h_{\rm ex}$ if $i\neq j$, $|v|\geq1/2$ in $\Omega\setminus\cup_{i=1}^d\overline{B(z_i,\varepsilon^\mu)}$ and $ {\rm deg}_{\partial B(z,\varepsilon^\mu)}(v)=1$ for $z\in\mathcal{Z}$. \end{itemize} Moreover $d=\mathcal{O}(1)$. Then, if needed, up to passing to a subsequence, we may assume that $d$ is independent of $\varepsilon$.
By combining Corollary \ref{CorEtudeSansVortex}, Propositions \ref{EnergieRenMeso}, \ref{Prop.BorneSupSimple}, \ref{Prop.BonEcartement} and \ref{Prop.BorneInfTrèsFine} we get the following corollary. \begin{cor}\label{Cor.ExactEnergyExp} Assume $\lambda,\delta,h_{\rm ex}$ satisfy \eqref{CondOnLambdaDelta} and \eqref{BorneKMagn} for some $K\geq0$ independent of $\varepsilon$. Let $\varepsilon=\varepsilon_n\downarrow0$ and let $((v_\varepsilon,A_\varepsilon))_\varepsilon\subset\mathscr{H}$ be a sequence satisfying \eqref{HypGlobalSurQuasiMin}$\&$\eqref{QuasiMinDef}. Assume that $d$ is independent of $\varepsilon$. Without loss of generality we may assume that $(v_\varepsilon,A_\varepsilon)$ is in the Coulomb gauge. We have \begin{equation}\label{Exact...Expending} \mathcal{F}(v_\varepsilon,A_\varepsilon)=h_{\rm ex}^2 {\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\mathscr{L}_1(d)\ln h_{\rm ex}+\mathscr{L}_2(d)+o(1).
\end{equation}
Moreover, if $d\neq0$ then: \begin{itemize} \item We have ${\bf D}\in\Lambda_{d}$ [see \eqref{DefEnsCouplageEnergieRen}] and ${\bf D}$ minimises $\mathcal{W}_{d}$ in $\Lambda_{d}$ where $\mathcal{W}_{d}$ is defined in \eqref{DefWdOpD}.
\item For $p\in\Lambda$ s.t. $D_p>0$ and $i\in J_p$, we denote $\breve z_i:=(z_i-p)\sqrt{D_p/h_{\rm ex}}$ and $\breve{\bf z}_p:=\{\breve z_i\,|\,i\in J_p\}$. Then, up to passing to a subsequence, $\breve{\bf z}_p$ converges to a minimizer of $W^{\rm meso}_{p,D_p}$ defined in \eqref{DefEnergyRenMeso}. \item For $i\in \{1,...,d\}$, we write $\hat z_i:=(z_i-y_i)/(\lambda\delta)\in\omega$ where $y_i\in\delta\mathbb{Z}^2$ is s.t. $z_i\in B(y_i,\lambda\delta)$. Then, up to passing to a subsequence, $\hat z_i$ converges to a minimizer of $W^{\rm micro}$. \end{itemize} \end{cor}
For further use, we claim that for $d_{0}\geq0$, from Proposition \ref{Prop.BorneSupSimple}, there exists a configuration $(v^{0},A^{0})\in\mathscr{H}$ which is in the Coulomb gauge s.t. \begin{equation}\label{ExactExpEnergTest} \mathcal{F}(v^{0},A^{0})-h_{\rm ex}^2 {\bf J_0}=d_{0}M_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\mathscr{L}_1(d_{0})\ln h_{\rm ex}+\mathscr{L}_2(d_{0})+o(1). \end{equation}
Recall that, from Lemma \ref{LemLaisseLectTrucSimple}, for $d\neq0$, we have $d\in\{1,...,N_0\}$ if and only if $\mathscr{L}_1(d)=0$ and $\mathscr{L}_2(d)=\overline{\W}_d$. For further use we state another lemma whose proof is left to the reader: \begin{lem}\label{TechLemmaDefDelta}For $0\leq d<d'$ we let : \begin{enumerate} \item $\displaystyle\Delta^{(1)}_d:=\dfrac{\mathscr{L}_1(d+1)-\mathscr{L}_1(d)}{M_\O}=\dfrac{\pi}{M_\O}\left\lfloor \dfrac{d}{N_0}\right\rfloor$. \item $\displaystyle\Delta^{(1)}_{d',d}:=\dfrac{\mathscr{L}_1(d')-\mathscr{L}_1(d)}{M_\O(d'-d)}=\dfrac{\pi}{M_\O(d'-d)}\sum_{k=d}^{d'-1}\left\lfloor \dfrac{k}{N_0}\right\rfloor$. \item $\displaystyle\Delta^{(2)}_d:=\dfrac{\mathscr{L}_2(d+1)-\mathscr{L}_2(d)}{M_\O}$ and $\displaystyle\Delta^{(2)}_d-\dfrac{\overline{\W}_{d+1}-\overline{\W}_d}{M_\O}=$ \[
=\left|\begin{array}{l}0\text{ if }d\leq N_0-1 \\ -\dfrac{\pi}{2M_\O}\left\lfloor \dfrac{d}{N_0}\right\rfloor\left[\left(1+\left\lfloor \dfrac{d}{N_0}\right\rfloor\right)\ln\left(1+\left\lfloor \dfrac{d}{N_0}\right\rfloor\right)+\left(1-\left\lfloor \dfrac{d}{N_0}\right\rfloor\right)\ln\left\lfloor \dfrac{d}{N_0}\right\rfloor\right]\text{ if }d\geq N_0 \end{array}\right.. \] \item {$\displaystyle\Delta^{(2)}_{d',d}:= \dfrac{\mathscr{L}_2(d')-\mathscr{L}_2(d)}{M_\O(d'-d)}$ thus, if $d'\leq N_0$, then $\Delta^{(2)}_{d',d}=\dfrac{\overline{\W}_{d'}-\overline{\W}_d}{M_\O(d'-d)}$}. \end{enumerate} \end{lem}
By using \eqref{Exact...Expending} and \eqref{ExactExpEnergTest} we easily get the following corollary. \begin{cor}\label{Cor.ExactEnergyExpPreCritField} Let $\varepsilon=\varepsilon_n\downarrow0$, $\lambda$, $\delta$, $h_{\rm ex}$ and $((v_\varepsilon,A_\varepsilon))_\varepsilon\subset\mathscr{H}$ be as in Corollary \ref{Cor.ExactEnergyExp}.
Assume that $d$ is independent of $\varepsilon$. Then we have for $ d'>d$ \begin{eqnarray}\nonumber
h_{\rm ex}\leq H^0_{c_1} +\Delta^{(1)}_{d',d}\times\ln h_{\rm ex}+\Delta^{(2)}_{d',d}+o(1).
\end{eqnarray} Then, letting $\chi$ be s.t. $h_{\rm ex} =H^0_{c_1} (1+\chi)$ [$\chi=o(1)$ from \eqref{BorneKMagn}], we have thus
\begin{eqnarray}\label{BorneSUPUNDEGENnplusPrecise} h_{\rm ex}\leq H^0_{c_1} +\Delta^{(1)}_{d',d}\times\ln H^0_{c_1} +\Delta^{(2)}_{d',d}+o(1). \end{eqnarray} If $d>d'\geq0$ then \begin{eqnarray}\label{BorneINFUNDEGENnplusPrecise} h_{\rm ex}\geq H^0_{c_1} +\Delta^{(1)}_{d,d'}\times\ln H^0_{c_1} +\Delta^{(2)}_{d,d'}+o(1). \end{eqnarray} \end{cor}
We are now in a position to give an asymptotic value for the first critical field; it is obtained with Corollary \ref{Cor.ExactEnergyExpPreCritField} [\eqref{BorneSUPUNDEGENnplusPrecise} with $d=0\&d'\in\{1,...,N_0\}$ and \eqref{BorneINFUNDEGENnplusPrecise} with $d\geq1\&d'=0$].
Recall that we write, for $x\in\mathbb{R}$, $[x]^+=\max(x,0)$ and $[x]^-=\min(x,0)$. \begin{cor}\label{CorDefPremierChampsCrit}
Denote $H_{c_1}:=H^0_{c_1} +\min_{d\in\{1,...,N_0\}}{\dfrac{\overline{\W}_{d}}{dM_\O}}$. Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family of quasi-minimizers satisfying \eqref{HypGlobalSurQuasiMin}. \begin{enumerate} \item\label{CorDefPremierChampsCrit1} If for sufficiently small $\varepsilon$ we have $d=0$ then $[h_{\rm ex}-H_{c_1}]^+\to0$. \item\label{CorDefPremierChampsCrit2} If for sufficiently small $\varepsilon$ we have $d>0$ then $[h_{\rm ex}-H_{c_1}]^-\to0$. \end{enumerate} \end{cor} \begin{proof} The corollary is a direct consequence of Corollary \ref{Cor.ExactEnergyExpPreCritField} taking $d'\in\{1,...,N_0\}$ which minimizes $\Delta^{(2)}_{d',0}=\overline{\W}_{d'}/(M_\O d')$ in \eqref{BorneSUPUNDEGENnplusPrecise} for the first assertion and $d'=0$ in \eqref{BorneINFUNDEGENnplusPrecise} for the second. \end{proof} \subsection{Secondary critical fields for $d\in\{1,...,N_0\}$}\label{Sec.SecondaryCriticFields} If $N_0=1$, if $h_{\rm ex}$ is near $H_{c_1}$ and if $d>0$, then it is standard to prove that $d=1$. If $N_0\geq2$ and $d\in\{1,...,N_0\}$, then the situation is more involved: we have no {\it a priori} sharp information about the number of vorticity defects and their [macroscopic] location. The goal of this section is to get such information.
\subsubsection{Preliminaries} Note that for $0\leq d<d'\leq N_0$ we have $\Delta^{(1)}_{d',d}=0$ and $\Delta^{(2)}_{d',d}=\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{M_\O (d'-d)}$.
Rephrasing Corollary \ref{Cor.ExactEnergyExpPreCritField} for $d,d'\in\{0,...,N_0\}$ we have the following key lemma. \begin{lem}\label{Lem.PremEtapChamSec} Let $\varepsilon=\varepsilon_n\downarrow0$, $\lambda$, $\delta$, $h_{\rm ex}$ and $((v_\varepsilon,A_\varepsilon))_\varepsilon\subset\mathscr{H}$ be as in Corollary \ref{Cor.ExactEnergyExp}.
Assume ${\rm Card}(\mathcal{Z})=d$ is independent of $\varepsilon$. Then the following properties hold: \begin{enumerate}
\item\label{Lem.PremEtapChamSec2} If $0\leq d'<d$ then, letting $\overline{\W}_0:=0$, we have $h_{\rm ex}\geq H^0_{c_1} +\dfrac{\overline{\W}_{d}-\overline{\W}_{d'}}{M_\O (d-d')}+o(1)$.
In particular taking $d'=0$ we get $h_{\rm ex}\geq H^0_{c_1} +\dfrac{\overline{\W}_{d}}{M_\O d}+o(1)$. \item\label{Lem.PremEtapChamSec3} If $d<N_0$ and $ d<d'\leq N_0$ then $h_{\rm ex}\leq H^0_{c_1} +\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{M_\O (d'-d)}+o(1)$.
\item\label{lemQqePropQuot1} If $N_0\geq 2$, $N_0\geq d'>d\geq1$ then \[ \dfrac{\overline{\W}_{d'}}{d'}<\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{d'-d}\Longleftrightarrow\dfrac{\overline{\W}_{d}}{d}<\dfrac{\overline{\W}_{d'}}{d'}\text{ and }\dfrac{\overline{\W}_{d'}}{d'}>\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{d'-d}\Longleftrightarrow\dfrac{\overline{\W}_{d}}{d}>\dfrac{\overline{\W}_{d'}}{d'}. \] \item\label{lemQqePropQuot2} If $N_0\geq 2$ and $N_0\geq d'>d\geq1$ then \[ \dfrac{\overline{\W}_{d'}}{d'}=\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{d'-d}\Longleftrightarrow\dfrac{\overline{\W}_{d}}{d}=\dfrac{\overline{\W}_{d'}}{d'}. \]
\item\label{lemQqePropQuot3} If $N_0\geq 2$ and $0\leq d<d'<d''\leq N_0$ then we have the following convex combination \[ \dfrac{\overline{\W}_{d''}-\overline{\W}_{d}}{d''-d}=\dfrac{d''-d'}{d''-d}\dfrac{\overline{\W}_{d''}-\overline{\W}_{d'}}{d''-d'}+\dfrac{d'-d}{d''-d}\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{d'-d}. \] Consequently $\dfrac{\overline{\W}_{d''}-\overline{\W}_{d}}{d''-d}$ is between $\dfrac{\overline{\W}_{d''}-\overline{\W}_{d'}}{d''-d'}$ and $\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{d'-d}$. \end{enumerate} \end{lem} \begin{proof} The first two assertions are obtained with Corollary \ref{Cor.ExactEnergyExpPreCritField}. The remaining part of the lemma consists in basic calculations. \end{proof}
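For the reader's convenience, the convex combination in the last assertion is a simple telescoping identity [both coefficients are nonnegative and sum to $1$]:
\[
\dfrac{\overline{\W}_{d''}-\overline{\W}_{d}}{d''-d}=\dfrac{(\overline{\W}_{d''}-\overline{\W}_{d'})+(\overline{\W}_{d'}-\overline{\W}_{d})}{d''-d}=\dfrac{d''-d'}{d''-d}\cdot\dfrac{\overline{\W}_{d''}-\overline{\W}_{d'}}{d''-d'}+\dfrac{d'-d}{d''-d}\cdot\dfrac{\overline{\W}_{d'}-\overline{\W}_{d}}{d'-d}.
\]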
\subsubsection{First step in the definition of the critical fields}\label{DefFirstCriticalFields} Assume $N_0\geq2$. We are going to define some energetic levels [in terms of $\overline{\W}_{d}$] related to the number of vorticity defects and their [macroscopic] location.
We denote $d^\star_0:=0$, $\mathscr{S}_1:=\{1,...,N_0\}$, $\mathscr{K}_1^\star:=\min_{d\in\mathscr{S}_1}\dfrac{\overline{\W}_{d}}{d}=\min_{d\in\mathscr{S}_1}\dfrac{\overline{\W}_{d}-\overline{\W}_{d_0^\star}}{d-d_0^\star}$, $\mathscr{S}^\star_1:=\{d\in\mathscr{S}_1\,|\,{\overline{\W}_{d}}/{d}=\mathscr{K}_1^\star\}$ and $\mathscr D_1:=\{\,{\bf D}\in \Lambda_d\,|\,d\in \mathscr{S}^\star_1\text{ and }{\bf D}\text{ minimizes } \mathcal{W}_{d}\}$. We also let $d_1^{\star}:=\max\mathscr{S}^\star_1$ and ${\mathscr{D}}_1^\star:={\mathscr{D}}_1\cap \Lambda_{d_1^\star}$.
If $d_1^\star=N_0$ we are going to prove that for $h_{\rm ex}\geq H_{c_1}+o(1)$ [but $h_{\rm ex}$ not too large], there is exactly one vorticity defect close to each point of $\Lambda$. In the contrary case [$1\leq d_1^\star<N_0$], there are other critical fields which govern the number of vorticity defects.
If $d_1^{\star}<N_0$, then $\mathscr{S}_2:=\{d_1^{\star}+1,...,N_0\}\neq\emptyset$. For $d\in\mathscr{S}_2$ we let $\mathscr{K}_2(d):=\dfrac{\overline{\W}_d-\overline{\W}_{d^\star_1}}{d-d^\star_1}$, $\mathscr{S}_2^\star:=\left\{d\in\mathscr{S}_2\,|\,\mathscr{K}_2(d)=\min_{\mathscr{S}_2}\mathscr{K}_2\right\}$, $d_2^\star:=\max\mathscr{S}_2^\star$ and $\mathscr{K}_2^\star:=\mathscr{K}_2(d_2^\star)$.
We denote $\mathscr D_2:=\{{\bf D}\in \Lambda_d\,|\,d\in \mathscr{S}^\star_2\text{ and }{\bf D}\text{ minimizes } \mathcal{W}_{d}\}$ and ${\mathscr{D}}_2^\star:=\mathscr D_2\cap \Lambda_{d_2^\star}$.
We claim that for $d\in\mathscr{S}_2$ we have $\overline{\W}_d/d>\overline{\W}_{d_1^\star}/d_1^\star$. Then, with Lemma \ref{Lem.PremEtapChamSec}.\ref{lemQqePropQuot1}, we get $\mathscr{K}_2(d)>\overline{\W}_{d_1^\star}/d_1^\star$. In particular \begin{equation}\label{K_1<K_2} \mathscr{K}_2^\star=\dfrac{\overline{\W}_{d_2^\star}-\overline{\W}_{d^\star_1}}{d_2^\star-d^\star_1}>\dfrac{\overline{\W}_{d_1^\star}}{d_1^\star}=\mathscr{K}_1^\star. \end{equation} If $d_2^\star=N_0$ then we stop the construction. In the contrary case, for $d\in\mathscr{S}_3:=\{d_2^\star+1,...,N_0\}\neq\emptyset$ we have $\mathscr{K}_2(d)>\mathscr{K}_2(d_2^\star)$.
We continue the iterative construction. For $k\geq2$, assume that $1\leq d_{k-1}^\star<d_k^\star<N_0$; we let $\mathscr{S}_{k+1}:=\{d_k^\star+1,...,N_0\}\neq\emptyset$ and we assume that for $d\in\mathscr{S}_{k+1}$: \begin{equation}\label{RecurrencePropertyGrangemeon} \mathscr{K}_{k}(d):=\dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k-1}}}{d-d^\star_{k-1}}>\dfrac{\overline{\W}_{d_k^\star}-\overline{\W}_{d^\star_{k-1}}}{d_k^\star-d^\star_{k-1}}=\mathscr{K}_{k}^\star. \end{equation} For $d\in\mathscr{S}_{k+1}$ we let $\mathscr{K}_{k+1}(d):=\dfrac{\overline{\W}_d-\overline{\W}_{d^\star_k}}{d-d^\star_k}$, \[
\mathscr{S}_{k+1}^\star:=\left\{d\in\mathscr{S}_{k+1}\,|\,\mathscr{K}_{k+1}(d)=\min_{\mathscr{S}_{k+1}}\mathscr{K}_{k+1}\right\}, \] $d_{k+1}^\star:=\max\mathscr{S}_{k+1}^\star$ and $\mathscr{K}_{k+1}^\star:=\mathscr{K}_{k+1}(d_{k+1}^\star)$.
We define also \[
\mathscr D_{k+1}:=\{{\bf D}\,|\,{\bf D}\in \Lambda_d,\,d\in \mathscr{S}^\star_{k+1}\text{ and }{\bf D}\text{ minimizes } \mathcal{W}_{d}\} \text{ and } {\mathscr{D}}_{k+1}^\star
:=\mathscr D_{k+1}\cap\Lambda_{d_{k+1}^\star}. \]
From \eqref{RecurrencePropertyGrangemeon} we have \begin{equation}\label{RecurrencePropertyGrangemeonbis} \mathscr{K}_{k}(d_{k+1}^\star)=\dfrac{\overline{\W}_{d_{k+1}^\star}-\overline{\W}_{d^\star_{k-1}}}{d_{k+1}^\star-d^\star_{k-1}}>\dfrac{\overline{\W}_{d_{k}^\star}-\overline{\W}_{d^\star_{k-1}}}{d_{k}^\star-d^\star_{k-1}}=\mathscr{K}_{k}^\star. \end{equation} Then, from Lemma \ref{Lem.PremEtapChamSec}.\ref{lemQqePropQuot3} with $d=d_{k-1}^\star$, $d'=d_k^\star$ and $d''=d_{k+1}^\star$, we get that $\mathscr{K}_{k}(d_{k+1}^\star)$ is between $\mathscr{K}_{k}^\star$ and $\mathscr{K}_{k+1}^\star$. Consequently, with \eqref{RecurrencePropertyGrangemeonbis} we get \begin{eqnarray}\label{VeryVeryGoodCondGrnag} &&\mathscr{K}_{k+1}^\star>\mathscr{K}_{k}^\star.
\end{eqnarray} We stop the construction at Step $L$ s.t. $d_L^\star=N_0$. Since $1\leq d_k^\star<d_{k+1}^\star\leq N_0$, it is clear that such an $L$ exists and $1\leq L\leq N_0$.
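Viewed algorithmically, the iterative construction above is the greedy extraction of the slopes of the lower convex hull of the points $(d,\overline{\W}_d)$, $0\leq d\leq N_0$ [with $\overline{\W}_0:=0$]; the monotonicity \eqref{VeryVeryGoodCondGrnag} expresses the convexity of the extracted slopes. A minimal sketch, in which the function name and the numerical values of $\overline{\W}_d$ are ours and purely illustrative:

```python
def critical_slopes(W):
    """Greedy construction of the pairs (d_k^star, K_k^star).

    W is the list [W_1, ..., W_N0] of the values of \\overline{W}_d
    (purely illustrative numbers here), with W_0 := 0.
    At each step we minimize the slope from the previous pivot
    d_{k-1}^star and, among the minimizers, keep the LARGEST d.
    """
    N0 = len(W)
    W_full = [0.0] + list(W)            # index d -> W_d, with W_0 = 0
    d_prev, steps = 0, []
    while d_prev < N0:
        # slopes K_k(d) measured from the previous pivot d_{k-1}^star
        slopes = {d: (W_full[d] - W_full[d_prev]) / (d - d_prev)
                  for d in range(d_prev + 1, N0 + 1)}
        K_star = min(slopes.values())
        d_star = max(d for d, s in slopes.items() if s == K_star)
        steps.append((d_star, K_star))
        d_prev = d_star
    return steps

# Illustrative run with N0 = 3 and (W_1, W_2, W_3) = (1.0, 1.5, 3.0):
# slopes from d_0^star = 0 are 1.0, 0.75, 1.0, hence d_1^star = 2;
# then d_2^star = 3 = N_0 and the construction stops with L = 2.
print(critical_slopes([1.0, 1.5, 3.0]))  # -> [(2, 0.75), (3, 1.5)]
```

In particular the extracted slopes are increasing, which mirrors $\mathscr{K}_{k+1}^\star>\mathscr{K}_{k}^\star$.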
We then have two possibilities: $L=1$ or $L\in\{2,...,N_0\}$. If $L\geq2$ then, for $k\in\{1,...,L-1\}$, \eqref{VeryVeryGoodCondGrnag} holds.
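The construction of the $d_k^\star$ above is a greedy minimal-slope selection: it computes the vertices of the lower convex hull of the points $(d,\overline{\W}_d)$, breaking ties by the largest admissible $d$. A minimal Python sketch, assuming a hypothetical table of values $\overline{\W}_1,...,\overline{\W}_{N_0}$ and a starting pair $(d_0^\star,\overline{\W}_{d_0^\star})$ [the base step of the construction lies before this excerpt]:

```python
def critical_steps(W, d0=0, W0=0.0):
    """Greedy construction of the d_k^* [a sketch, not the paper's code].

    W is a list with W[d-1] playing the role of W-bar_d for d in {1,...,N0}.
    Starting from the assumed base pair (d0, W0), each step minimizes the
    slope K_k(d) = (W_d - W_{d_{k-1}^*}) / (d - d_{k-1}^*) over
    d in {d_{k-1}^*+1, ..., N0}, ties broken by the LARGEST minimizer,
    and stops once d_k^* = N0.  Returns [d_1^*,...,d_L^*], [K_1^*,...,K_L^*].
    """
    N0 = len(W)

    def Wbar(d):
        return W0 if d == d0 else W[d - 1]

    ds, Ks = [], []
    d_prev = d0
    while d_prev != N0:
        slopes = {d: (Wbar(d) - Wbar(d_prev)) / (d - d_prev)
                  for d in range(d_prev + 1, N0 + 1)}
        K_star = min(slopes.values())
        d_star = max(d for d, s in slopes.items() if s == K_star)
        ds.append(d_star)
        Ks.append(K_star)
        d_prev = d_star
    return ds, Ks
```

By construction the returned slopes $\mathscr{K}_1^\star<\mathscr{K}_2^\star<...$ are strictly increasing, which is exactly the content of \eqref{VeryVeryGoodCondGrnag}.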
We also claim that $(1,...,1)\in\mathscr D_{L}$. \begin{lem}\label{Lem.ComparaisonEntreD_kd_k+1} Let $k\in\{1,...,L\}$, assume that $d_{k}^\star-d_{k-1}^\star\geq2$ and fix $d_{k-1}^\star<d<d_{k}^\star$. We have \[ \dfrac{\overline{\W}_{d_{k}^\star}-\overline{\W}_{d}}{d_{k}^\star-d}\leq \mathscr{K}_k^\star\leq \dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k-1}}}{d-d^\star_{k-1}}. \] Moreover, if $d\notin\mathscr S_k^\star$, then \[ \dfrac{\overline{\W}_{d_{k}^\star}-\overline{\W}_{d}}{d_{k}^\star-d}\leq \mathscr{K}_k^\star< \dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k-1}}}{d-d^\star_{k-1}}. \] \end{lem} \begin{proof} From Lemma \ref{Lem.PremEtapChamSec}.\ref{lemQqePropQuot3}, $\mathscr{K}_k^\star$ is between $\dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k-1}}}{d-d^\star_{k-1}}$ and $\dfrac{\overline{\W}_{d_{k}^\star}-\overline{\W}_{d}}{d_{k}^\star-d}$. On the other hand, from the definition of $d_k^\star$, $\mathscr{K}_k^\star\leq \dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k-1}}}{d-d^\star_{k-1}}$. Clearly the first part of the lemma holds. If $d\notin\mathscr S_k^\star$ then, by definition, $\mathscr{K}_k^\star< \dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k-1}}}{d-d^\star_{k-1}}$. \end{proof} \subsubsection{Main result} For $k\in\{1,...,L\}$ we let \begin{equation}\label{TheExprionnI} {\tt K}^{\tt(I)}_k:=H_{c_1}^0+\dfrac{\mathscr{K}_k^\star}{M_\O} \end{equation} and we also let \begin{equation}\label{TheExprionnII} {\tt K}^{\tt(II)}_1:=H_{c_1}^0+\Delta^{(1)}_{N_0}\times\ln H^0_{c_1}+\Delta^{(2)}_{N_0}. \end{equation} Recall that the $\mathscr{K}_k^\star$'s are defined in Section \ref{DefFirstCriticalFields} and $\Delta^{(1)}_{N_0}\&\Delta^{(2)}_{N_0}$ in Lemma \ref{TechLemmaDefDelta}. Note that $H_{c_1}={\tt K}^{\tt(I)}_1$. \begin{prop}\label{Prop.SHarperdescriptionNonSatured}
Assume that \eqref{NonDegHyp} holds and $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}.
Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family satisfying \eqref{HypGlobalSurQuasiMin}$\&$\eqref{QuasiMinDef} which is in the Coulomb gauge. Assume $d_\varepsilon={\rm Card}(\mathcal{Z}_\varepsilon)\in\{1,...,N_0\}$.
We denote ${\bf D}=(D_1,...,D_{N_0})$ with $D_l= {\rm deg}_{\partial B(p_l,\eta_\Omega)}(v)$ [$\eta_\Omega$ is defined in \eqref{DefEtaO}].
\begin{enumerate} \item Assume $L=1$. For sufficiently small $\varepsilon>0$ we have ${\bf D}\in\mathscr{D}_1$.
Moreover, if $\varepsilon=\varepsilon_n\downarrow0$ is a sequence s.t. $d_\varepsilon$ is independent of $\varepsilon$ and $ d_\varepsilon\neq N_0$ [{\it i.e.} ${\bf D}\neq(1,...,1)$] then $\left[h_{\rm ex}- {\tt K}^{\tt(I)}_1\right]^+\to0$. \item Assume $L\geq2$. For $k\in\{1,...,L-1\}$, if $d^\star_{k-1}<d_\varepsilon\leq d_k^\star$ for small $\varepsilon$ or for a sequence indexed by $\varepsilon=\varepsilon_n\downarrow0$, then \begin{equation}\label{ChapsCritSec3} \left[h_{\rm ex}-{\tt K}^{\tt(I)}_k\right]^-\to0\text{ and }\left[h_{\rm ex}-{\tt K}^{\tt(I)}_{k+1}\right]^+\to0. \end{equation}
Moreover, for sufficiently small $\varepsilon$, ${\bf D}\in\mathscr{D}_{k}$. And if ${\bf D}\in\mathscr{D}_{k}\setminus\mathscr{D}^\star_{k}$ [{\it i.e.} $d^\star_{k-1}<d_\varepsilon<d_k^\star$] then \begin{equation}\label{ChapsCritSec4} \left[h_{\rm ex}-{\tt K}^{\tt(I)}_k\right]^+\to0. \end{equation}
\item If $d^\star_{L-1}< d_\varepsilon\leq d_L^\star=N_0$ for small $\varepsilon$ or for a sequence indexed by $\varepsilon=\varepsilon_n\downarrow0$, then \begin{equation}\label{ChapsCritSec5} \left[h_{\rm ex}-{\tt K}^{\tt(I)}_L\right]^-\to0\text{ and }\left[h_{\rm ex}-{\tt K}^{\tt(II)}_1\right]^+\to0.
\end{equation} Moreover, for sufficiently small $\varepsilon$, ${\bf D}\in\mathscr{D}_{L}$. And if $d_\varepsilon<N_0$ [{\it i.e} ${\bf D}\neq(1,...,1)$] then \begin{equation}\label{ChapsCritSec6} \left[h_{\rm ex}-{\tt K}^{\tt(I)}_L\right]^+\to0. \end{equation}
\end{enumerate} In particular, for sufficiently small $\varepsilon$, we have ${\bf D}\in \cup_{l=1}^{L}\mathscr{D}_l$. \end{prop} \begin{proof}
We prove the first item arguing by contradiction. First note that if $N_0=1$ then there is nothing to prove. Thus assume $N_0\geq2\,\&\,L=1$ and let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ be as in the proposition. Assume there exists $\varepsilon=\varepsilon_n\downarrow0$ s.t. ${\bf D}\notin\mathscr{D}_1$. Up to passing to a subsequence, we may assume that ${\bf D}$ is independent of $\varepsilon$.
From Corollary \ref{Cor.ExactEnergyExp}, for sufficiently small $\varepsilon$, ${\bf D}$ minimizes $\mathcal{W}_d$ and then, from the definition of $\mathscr{D}_1$, we get $d\notin\mathscr{S}^\star_1$. Consequently $\overline{\W}_{N_0}/N_0<\overline{\W}_{d}/d$ and thus, from Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec3}$\&$\ref{Lem.PremEtapChamSec}.\ref{lemQqePropQuot1} [with $d'=N_0$], we get the existence of $t>0$ s.t. $h_{\rm ex}\leq H_{c_1}-t$. This last estimate is in contradiction with Corollary \ref{CorDefPremierChampsCrit}.\ref{CorDefPremierChampsCrit2}. Thus ${\bf D}\in\mathscr{D}_1$ for sufficiently small $\varepsilon$. The rest of the first assertion is a direct consequence of $d\in\mathscr{S}^\star_1\setminus\{N_0\}$ and Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec3}$\&$\ref{Lem.PremEtapChamSec}.\ref{lemQqePropQuot2} [with $d'=N_0$].\\%\eqref{ExactExpEnergTest} [with $d_{0}=N_0$] and Corollary \ref{Cor.ExactEnergyExp}.\ref{CorDefPremierChampsCrit2}.\\
We now prove the second assertion. Assume $L\geq2$. For $k\in\{1,...,L-1\}$, if $d^\star_{k-1}<d\leq d_k^\star$, then, from Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec2} [with $d'=d_{k-1}^\star$] and Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec3} [with $d'=d_{k+1}^\star$], we get \begin{equation}\label{RERCONtrzBis} \dfrac{\overline{\W}_{d}-\overline{\W}_{d_{k-1}^\star}}{M_\O(d-d_{k-1}^\star)}+o(1)\leq h_{\rm ex}-H_{c_1}^0\leq\dfrac{\overline{\W}_{d_{k+1}^\star}-\overline{\W}_{d}}{M_\O(d_{k+1}^\star-d)}+o(1). \end{equation} From the definition of $d_{k}^\star$ we have $\mathscr{K}_k^\star\leq\dfrac{\overline{\W}_{d}-\overline{\W}_{d_{k-1}^\star}}{d-d_{k-1}^\star}$ and then the lower bound in \eqref{RERCONtrzBis} gives the first convergence in \eqref{ChapsCritSec3}.
On the other hand, if $d=d_k^\star$ then, from the definition of $\mathscr{K}_{k+1}^\star$, the upper bound in \eqref{RERCONtrzBis} gives the second convergence in \eqref{ChapsCritSec3}.
If $d\neq d^\star_k$, using Lemma \ref{Lem.PremEtapChamSec}.\ref{lemQqePropQuot3} [with $d<d_{k}^\star<d_{k+1}^\star$] we obtain that $\dfrac{\overline{\W}_{d_{k+1}^\star}-\overline{\W}_{d}}{d_{k+1}^\star-d}$ is between $\dfrac{\overline{\W}_{d_k^\star}-\overline{\W}_{d}}{d_{k}^\star-d}$ and $\mathscr{K}_{k+1}^\star$. But, from Lemma \ref{Lem.ComparaisonEntreD_kd_k+1}, we get $\dfrac{\overline{\W}_{d_k^\star}-\overline{\W}_{d}}{d_{k}^\star-d}\leq \mathscr{K}_k^\star$. Since from \eqref{VeryVeryGoodCondGrnag} we have $\mathscr{K}_{k+1}^\star>\mathscr{K}_{k}^\star$, we obtain $\dfrac{\overline{\W}_{d_{k+1}^\star}-\overline{\W}_{d}}{d_{k+1}^\star-d}\leq \mathscr{K}^\star_{k+1}$. Therefore the upper bound of \eqref{RERCONtrzBis} gives the second convergence in \eqref{ChapsCritSec3}.
We now prove, arguing by contradiction, that ${\bf D}\in\mathscr{D}_{k}$ for sufficiently small $\varepsilon$. We assume the existence of a sequence $\varepsilon=\varepsilon_n\downarrow0$ s.t. $d_{k-1}^\star<d\leq d_{k}^\star$ with $k\in\{1,...,L-1\}$, ${\bf D}$ is independent of $\varepsilon$ and ${\bf D}\notin\mathscr{D}_{k}$. From Corollary \ref{Cor.ExactEnergyExp}, ${\bf D}$ minimizes $\mathcal{W}_d$ and then, from the definition of $\mathscr{D}_k$, we get $d\notin\mathscr{S}^\star_k$ [and then $d<d_k^\star$].
On the one hand, with Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec2} [with $d'=d_{k-1}^\star$] and Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec3} [with $d'=d_{k}^\star$] we have \[ \dfrac{\overline{\W}_{d}-\overline{\W}_{d_{k-1}^\star}}{M_\O(d-d_{k-1}^\star)}+o(1)\leq h_{\rm ex}-H^0_{c_1} \leq \dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k}}}{M_\O(d-d^\star_{k})}+o(1). \] On the other hand, with Lemma \ref{Lem.ComparaisonEntreD_kd_k+1}, we have $\dfrac{\overline{\W}_{d}-\overline{\W}_{d^\star_{k}}}{d-d^\star_{k}}<\dfrac{\overline{\W}_{d}-\overline{\W}_{d_{k-1}^\star}}{d-d_{k-1}^\star}$. This inequality gives a contradiction.
Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec3} [with $d'=d_k^\star$] and Lemma \ref{Lem.ComparaisonEntreD_kd_k+1} give immediately \eqref{ChapsCritSec4}.\\
We now treat the last item of the proposition and we assume $d_{L-1}^\star<d\leq d_L^\star=N_0$. From \eqref{BorneINFUNDEGENnplusPrecise} [with $d'=d_{L-1}^\star$] we get $h_{\rm ex}-H_{c_1}^0\geq\Delta^{(2)}_{d,d_{L-1}^\star}+o(1)$. On the other hand, from the definition of $\mathscr{K}_L^\star$, we get \begin{equation}\label{PremierEtapePergola} h_{\rm ex}-H_{c_1}^0\geq\dfrac{\mathscr{K}_L^\star}{M_\O}+o(1). \end{equation} Before ending the proof of \eqref{ChapsCritSec5} we prove that \eqref{ChapsCritSec6} holds and, for sufficiently small $\varepsilon$, ${\bf D}\in\mathscr{D}_{L}$. Assume that there exists $\varepsilon=\varepsilon_n\downarrow0$ s.t. ${\bf D}$ is independent of $\varepsilon$ and $d_{L-1}^\star<d<N_0$.
From Lemma \ref{Lem.PremEtapChamSec}.\ref{Lem.PremEtapChamSec3} [with $d'=N_0$] we have \begin{equation}\label{PremierEtapePergolabis} h_{\rm ex}- H^0_{c_1} \leq\dfrac{\overline{\W}_{N_0}-\overline{\W}_{d}}{M_\O (N_0-d)}+o(1). \end{equation} Using \eqref{PremierEtapePergola} with \eqref{PremierEtapePergolabis} we get ${\mathscr{K}_L^\star}{}\leq{(\overline{\W}_{N_0}-\overline{\W}_{d})}/( N_0-d)$. Lemma \ref{Lem.ComparaisonEntreD_kd_k+1} [with $d_{L-1}^\star<d<N_0$] gives $(\overline{\W}_{N_0}-\overline{\W}_{d})/(N_0-d)\leq \mathscr{K}_L^\star$. Therefore, $(\overline{\W}_{N_0}-\overline{\W}_{d})/(N_0-d)= \mathscr{K}_L^\star$ and then by combining \eqref{PremierEtapePergola} and \eqref{PremierEtapePergolabis} we deduce that, if for some sequence $\varepsilon=\varepsilon_n\downarrow0$ we have $d_{L-1}^\star< d<N_0$, then \eqref{ChapsCritSec6} holds.
Arguing as above, [using \eqref{ExactExpEnergTest} with $d_{0}=N_0$],
one may prove that for sufficiently small $\varepsilon$ we have $d\in \mathscr{S}_L^\star$ and thus ${\bf D}\in\mathscr{D}_L$.
We complete the proof of \eqref{ChapsCritSec5}. Assume that $h_{\rm ex}$ is sufficiently large in order to have $d=N_0$ [here we used \eqref{ChapsCritSec6}]. It suffices to use \eqref{BorneSUPUNDEGENnplusPrecise} [with $d=N_0$ and $d'=N_0+1$] in order to get the remaining part of \eqref{ChapsCritSec5}.
\end{proof} \subsection{Secondary critical fields for $d\geq N_0+1$}\label{Sec.SecondaryCriticFieldsBis} The case $d\geq N_0+1$ is easier to handle than the case $1\leq d\leq N_0$.
For $k\in\mathbb{N}^*$, we let \[ {\tt K}^{\tt(II)}_k:=H^0_{c_1} +\Delta^{(1)}_{N_0+k}\times\ln H^0_{c_1}+\Delta^{(2)}_{N_0+k} \] where $\Delta^{(1)}_{N_0+k}\&\Delta^{(2)}_{N_0+k}$ are defined in Lemma \ref{TechLemmaDefDelta}. We have the following proposition. \begin{prop}\label{Prop.SHarperdescriptionNonSaturedII} Assume that \eqref{NonDegHyp} holds and $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and \eqref{PutaindHypTech}.
Let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ be a family satisfying \eqref{HypGlobalSurQuasiMin}$\&$\eqref{QuasiMinDef} which is in the Coulomb gauge.
Let $k\in\mathbb{N}^*$. If for a sequence $\varepsilon=\varepsilon_n\downarrow0$ we have $d_\varepsilon=N_0+k$ then \[ \left[h_{\rm ex}-{\tt K}^{\tt(II)}_k\right]^-\to0\text{ and }\left[h_{\rm ex}-{\tt K}^{\tt(II)}_{k+1}\right]^+\to0. \] \end{prop} \begin{proof} The proposition is a direct consequence of \eqref{BorneSUPUNDEGENnplusPrecise} [with $d=N_0+k$ and $d'=N_0+k+1$] and \eqref{BorneINFUNDEGENnplusPrecise} [with $d=N_0+k$ and $d'=N_0+k-1$].
\end{proof}
\appendix \section{Proof of Estimate \eqref{EstLoinInterfaceGradU}}\label{ProofVotation} Consider a conformal mapping $\Phi:\mathbb{D}\to\Omega$. From a result of Painlevé [see Footnote \ref{NumFootNoteConformal} page \pageref{NumFootNoteConformal}], the maps $\Phi$ and $\Phi^{-1}$ may be extended in $\overline\Omega$ and $\overline{\mathbb{D}}$ by smooth maps. Then there exists $C_\star\geq1$ s.t. \begin{equation}\label{BorneGradConfMapPainleve}
\|\nabla \Phi\|_{L^\infty(\mathbb{D})},\|\nabla \Phi^{-1}\|_{L^\infty(\Omega)}\leq C_\star. \end{equation} Write $\tilde a_\varepsilon:=a_\varepsilon\circ \Phi$ and $\tilde U_\varepsilon:=U_\varepsilon\circ \Phi$. Since the function $\tilde U_\varepsilon$ is a minimizer of $\tilde E_\varepsilon$, the analog of $E_\varepsilon$ in $\mathbb{D}$, $\tilde U_\varepsilon$ is a solution of \[ \begin{cases}
-\Delta \tilde U=w\dfrac{\tilde U}{\varepsilon^2}(\tilde a_\varepsilon^2-|\tilde U|^2)&\text{in }\mathbb{D}\\\partial_\nu\tilde U=0&\text{on }\mathbb{S}^1 \end{cases} \] where $w={\rm Jac}\, \Phi$ is the Jacobian of $\Phi$.
Define $V_\varepsilon:B(0,2)\to[b^2,1]$ by \[
V_\varepsilon(x)=\begin{cases}\tilde U_\varepsilon(x)&\text{ if }x\in\mathbb{D}\\\tilde U_\varepsilon({x}/{|x|^2})&\text{ if }x\in B(0,2)\setminus\mathbb{D}\end{cases}. \]
Then $-\Delta V_\varepsilon=-\Delta \tilde U_\varepsilon$ in $\mathbb{D}$ and $-\Delta V_\varepsilon(x)=-|x|^{-4}\Delta \tilde U_\varepsilon({x}/{|x|^2})$ in $B(0,2)\setminus\mathbb{D}$. Thus $V_\varepsilon\in H^2(B(0,2),\mathbb{C})$.
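The identity $\Delta V_\varepsilon(x)=|x|^{-4}\Delta \tilde U_\varepsilon(x/|x|^2)$ expresses how the Laplacian transforms under the [anticonformal] inversion $x\mapsto x/|x|^2$ in dimension two. A quick numerical sanity check of this transformation rule, using an arbitrary smooth test function in place of $\tilde U_\varepsilon$ [an illustrative choice only, not the actual minimizer]:

```python
def laplacian_fd(f, x, y, h=1e-4):
    """5-point finite-difference Laplacian of f at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

def U(x, y):                      # smooth test function (illustrative)
    return x**3 + y**2            # so that Delta U = 6x + 2

def lap_U(x, y):
    return 6.0 * x + 2.0

def V(x, y):
    """Reflected extension V(x) = U(x/|x|^2), evaluated for |x| > 1."""
    r2 = x * x + y * y
    return U(x / r2, y / r2)

x0, y0 = 1.3, 0.4                 # a point of B(0,2) outside the closed unit disk
r2 = x0 * x0 + y0 * y0
lhs = laplacian_fd(V, x0, y0)
rhs = lap_U(x0 / r2, y0 / r2) / r2**2   # |x|^{-4} * (Delta U)(x/|x|^2)
assert abs(lhs - rhs) < 1e-3      # the 2D inversion (Kelvin-type) rule
```

The check succeeds because the inversion is the conjugate of the holomorphic map $z\mapsto1/z$, whose derivative has modulus $|z|^{-2}$; the Laplacian of a composition with such a map picks up exactly the factor $|z|^{-4}$.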
First note that if $r\leq\varepsilon$, then \eqref{EstLoinInterfaceGradU} is given by \eqref{EstGlobGradU}.
Let $r>\varepsilon$ and $x_0\in\Omega$ be s.t. ${\rm dist}(x_0,\partial\omega_\varepsilon)>r$. Let $\eta:=a_\varepsilon(x_0)-V_\varepsilon$ in $B(x_0,r/2)$. From Lemma A.1 in \cite{BBH1} and \eqref{EstLoinInterfaceU} we get for $x\in B(x_0,r/4)$ : \begin{eqnarray*}
|\nabla V_\varepsilon(x)|^2=|\nabla\eta(x)|^2&\leq& C\left(\|\Delta \eta\|_{L^\infty(B(x_0,r/2))}+\dfrac{4}{r^2}\| \eta\|_{L^\infty(B(x_0,r/2))}\right)\| \eta\|_{L^\infty(B(x_0,r/2))} \\&\leq&\frac{C\e^{-\frac{s_b r}{2\varepsilon}}}{\varepsilon^2}. \end{eqnarray*} In the previous estimate the constants are independent of $\varepsilon,r$ and $x_0$. From \eqref{BorneGradConfMapPainleve} we then get \eqref{EstLoinInterfaceGradU}.
\section{Proof of Theorem \ref{ThmBorneDegréMinGlob}}\label{SectionProofAppSSBound}
Assume that \eqref{NonDegHyp} holds and $\lambda,\delta,h_{\rm ex},K$ satisfy \eqref{CondOnLambdaDelta}, \eqref{BorneKMagn} and $\delta^2|\ln\varepsilon|\leq1$.
Consider a family of configurations $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}\subset\mathscr{H}$ which is in the Coulomb gauge and s.t. \[
\mathcal{F}(v_\varepsilon,A_\varepsilon)\leq\inf_\mathscr{H}\mathcal{F}+\mathcal{O}(\ln|\ln\varepsilon|). \] We drop the subscript $\varepsilon$. From Lemma \ref{LemAuxConstructMagnPot}, we may consider $ A_v\in H^1(\Omega,\mathbb{R}^2)$ s.t. $(v,A_v)$ is in the Coulomb gauge and \eqref{MagnetiqueEq} holds.
We then have \begin{equation}\label{Borne RERDAEI}
\mathcal{F}(v, A_v)\leq\mathcal{F}(v,A)\leq\inf_\mathscr{H}\mathcal{F}+\mathcal{O}(\ln|\ln\varepsilon|)=\mathcal{O}(h_{\rm ex}^2). \end{equation}
Proposition \ref{Prop.BorneInfLocaliseeSandSerf} gives the existence of $C,\varepsilon_0>0$ [independent of $\varepsilon$] s.t., for $\varepsilon<\varepsilon_0$, there exists a family of disjoint disks $\{B_i\,|\,i\in \mathcal{J}\}$ with $B_i=B(a_i,r_i)$ satisfying : \begin{enumerate}
\item $\{|v|<1-|\ln\varepsilon|^{-2}\}\subset\cup B_i$
\item $\sum r_i<|\ln\varepsilon|^{-10}$,
\item writing $\rho=|v|$ and $v=\rho\e^{\imath\varphi}$ we have \begin{equation}\tag{\ref{EstimateSS3ball}}
\dfrac{1}{2}\int_{B_i}\rho^2|\nabla \varphi-A|^2+| {\rm curl}(A)-h_{\rm ex}|^2\geq\pi|d_i|(|\ln\varepsilon|-C\ln|\ln\varepsilon|), \end{equation} where $d_i= {\rm deg}_{\partial B_i}(v)$ if $B_i\subset\Omega$ and $0$ otherwise. \end{enumerate} From now on, the notation $C$ stands for a positive constant independent of $\varepsilon$ whose value may change from one line to another.
\subsection{A substitution lemma} As in \cite{SS2}, we first state a substitution lemma.
\begin{lem}\label{Lem.SubsSandSerf}
There exists $(\tilde v,\tilde A)\in \mathscr{H}$ which is in the Coulomb gauge and s.t., writing, $\rho=|v|,\, v=\rho\e^{\imath\varphi}$ and $\tilde\rho=|\tilde v|,\, \tilde v=\tilde \rho\e^{\imath\tilde \varphi}$ we have \begin{enumerate} \item $(\tilde v,\tilde A)$ satisfies \eqref{MagnetiqueEq} and $\tilde\rho\leq1$, \item $\tilde\rho=1$ and $\varphi=\tilde{\varphi}$ in $\Omega\setminus\cup B_i$,
\item\label{Assertkjh2} $\|\rho(\nabla\varphi-A_v)-\tilde \rho(\nabla\tilde\varphi-\tilde A)\|_{L^2(\Omega)}^2=o(1)$,
\item\label{Assertkjh3} $\|{\rm curl}(A_v)-{\rm curl}(\tilde A)\|_{L^2(\Omega)}^2\leq C|\ln\varepsilon|^{-2}$, \item $\mathcal{F}(\tilde v,\tilde A)\leq \mathcal{F}( v, A_v)+o(1)$. \end{enumerate}
\end{lem} Lemma \ref{Lem.SubsSandSerf} is proved in \cite{SS2} [Lemma 1] for $\alpha\equiv1$. The adaptation to our case is presented below.
\begin{proof}[Proof of Lemma \ref{Lem.SubsSandSerf}] The proof of the lemma follows the same lines as in \cite{SS2}.
We define a continuous function $\chi_\varepsilon=\chi:[0,1]\to[0,1]$ by letting \[
\begin{cases}\chi(x)=x&\text{if }0\leq x\leq1/2\\\chi(x)=1&\text{if }x\geq1-|\ln\varepsilon|^{-2}\\\chi\text{ is affine}&\text{if }1/2\leq x\leq 1-|\ln\varepsilon|^{-2}\end{cases}. \] We then let $\tilde v:=\dfrac{\chi(\rho)}{\rho} v\in H^1(\Omega,\mathbb{C})$ and we let $\tilde A=A_{\tilde v}$ given by Lemma \ref{LemAuxConstructMagnPot}.
Letting $\tilde h={\rm curl}(\tilde A)$ we then get \begin{equation}\label{MagneticGLEquationTilde} -\nabla^\bot \tilde h=\alpha(\imath \tilde v)\cdot(\nabla \tilde v-\imath\tilde A \tilde v). \end{equation} Exactly as in \cite{SS2} we have \begin{eqnarray}
\label{DiferenceCourantsubstitution}
\|v\wedge\nabla v-\tilde v\wedge\nabla \tilde v\|^2_{L^2(\Omega)}\leq C|\ln\varepsilon|^{-2}. \end{eqnarray} As in \cite{SS2}, from \eqref{JaugeCoulomb}, \eqref{MagnetiqueEq} and \eqref{MagneticGLEquationTilde} we obtain second-order PDEs satisfied by $A$ and $\tilde A$.
By considering the difference of these PDEs we get \begin{equation}\label{EquationDiffMagnPotSubst} -\Delta(\tilde A-A)+\alpha(\tilde A-A)=\alpha(\tilde v\wedge\nabla \tilde v- v\wedge\nabla v)+\alpha(1-\rho^2)A+\alpha(1-\tilde\rho^2)\tilde A. \end{equation}
From \eqref{EstLpA}, \eqref{Borne RERDAEI} and \eqref{DiferenceCourantsubstitution}, the RHS of \eqref{EquationDiffMagnPotSubst} is bounded in $L^2(\Omega)$ by $\dfrac{C}{|\ln\varepsilon|}$.
Since $(\tilde A-A)\cdot \nu=0$ on $\partial\Omega$, by elliptic regularity, we deduce
Assertions \ref{Assertkjh2}$\&$\ref{Assertkjh3} of the lemma.
The end of the proof is exactly as in \cite{SS2}. \end{proof} From now on we replace $(v,A_v)$ with $(\tilde v,\tilde A)$. We claim that the family of disks given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf} is valid for both $(v,A_v)$ and $(\tilde v,\tilde A)$, and that obtaining the conclusions of Theorem \ref{ThmBorneDegréMinGlob} for $(\tilde v,\tilde A)$ implies the same conclusions for $(v,A)$.
In order to simplify the presentation we write $(v,A)$ instead of $(\tilde v,\tilde A)$. \subsection{Energetic Decomposition} We have the following lower bound: \begin{prop}\label{PropBorneInfSS3}
Let $h:={\rm curl}(A)$, $h_0:=\Delta\xi_0=1+\xi_0$, $f:=h-h_{\rm ex} h_0$ and let $\{B_i=B(a_i,r_i)\,|\,i\in\mathcal{J}\}$ be the disks given by Proposition \ref{Prop.BorneInfLocaliseeSandSerf}. We have: \begin{equation}
\mathcal{F}(v,A)\geq h_{\rm ex}^2 {\bf J_0}+\sum \mathcal{F}[(v,A),B_i]+2\pi h_{\rm ex}\sum d_i\xi_0(a_i)+\dfrac{1}{2}\int_{\Omega\setminus\cup B_i}|\nabla f|^2+\dfrac{1}{2}\int_\Omega f^2-o(1) \end{equation} where \begin{equation}\label{Est.EnConcentrationlocal}
\mathcal{F}[(v,A),B_i]\geq\pi b^2|d_i|(|\ln\varepsilon|-C\ln|\ln\varepsilon|). \end{equation} \end{prop} This estimate is the starting point of the main argument of \cite{SS2}. \begin{proof}[Proof of Proposition \ref{PropBorneInfSS3}] Let $\tilde\Omega:=\Omega\setminus\cup \overline{B_i}$. With \eqref{Est.EnConcentrationlocal} we get \begin{equation}\nonumber \mathcal{F}_{}[(v,A),\cup B_i]
\geq\pi b^2\sum_i|d_i|[|\ln\varepsilon|-C\ln|\ln\varepsilon|]. \end{equation}
On the other hand, letting $f:=h-h_{\rm ex} h_0$ and since $\alpha|\nabla v-\imath Av|^2\geq |\nabla h|^2$, we get \begin{eqnarray*}
&&\dfrac{1}{2}\int_{\tilde\Omega}\alpha|\nabla v-\imath Av|^2+| h-h_{\rm ex}|^2
\\&\geq&h_{\rm ex}^2{{\bf J_0}}+\dfrac{1}{2}\|f\|^2_{H^1(\tilde\Omega)}+h_{\rm ex}\int_{\tilde\Omega}\nabla f\cdot\nabla(h_0-1)+f(h_0-1)+{o(1)}. \end{eqnarray*}
Before refining the above lower bound we make some preliminary claims. We first note that from \eqref{MagneticGLEquationTilde} we have $\|h-h_{\rm ex}\|_{H^1(\Omega)}^2\leq C\|\nabla v-\imath Av\|_{L^2(\Omega)}^2=\mathcal{O}(h_{\rm ex}^2)$. Then $
\|f\|_{H^1(\Omega)}^2=\mathcal{O}(h_{\rm ex}^2)$. Consequently for $g\in\{f,h\}$ we have \begin{equation}\label{GrosseGalereEst1}
h_{\rm ex}\int_{\cup B_i\cap\Omega}|\nabla g\cdot\nabla(h_0-1)|+|g(h_0-1)|\leq C\|g\|_{H^1(\Omega)}h_{\rm ex}\sum r_i=o(1). \end{equation} We also observe that
\begin{equation}\label{GrosseGalereEst3} \int_\Omega -A^\bot\cdot\nabla(h_0-1)+h(h_0-1)=0. \end{equation}
With \eqref{EstH4} we get $\|A\|_{L^\infty(\Omega)}\leq Ch_{\rm ex}$ and then [with \eqref{MagneticGLEquationTilde}] \begin{eqnarray*}
\sum_{B_i\subset\Omega}\left|\int_{\partial B_i}\partial_\tau \varphi(h_0-h_0(a_i))\right|&=&\sum_{B_i\subset\Omega}\left|\int_{\partial B_i}(h_0-h_0(a_i))(\alpha^{-1}\nabla^\bot h+ A)\cdot\tau\right|
\\&\leq&\sum_{B_i\subset\Omega}\left[\left|\int_{\partial B_i}\alpha^{-1}(h_0-h_0(a_i)) \partial_\nu h\right|+Ch_{\rm ex} r_i\right]. \end{eqnarray*} If $B_i\subset\Omega$ we have \begin{eqnarray*}
&&\left|\int_{\partial B_i}\alpha^{-1}(h_0-h_0(a_i)) \partial_\nu h\right|
\\&=&\left|\int_{B_i}\alpha^{-1}\nabla h_0\cdot\nabla h+(h_0-h_0(a_i)){\rm div}(\alpha^{-1}\nabla h)\right|
\\&\leq&\left|\int_{B_i}(h_0-h_0(a_i)){\rm div}[v\wedge(\nabla^\bot v-\imath A^\bot v)]\right|+\mathcal{O}(h_{\rm ex} r_i)
\\&\leq&\int_{B_i}|h_0-h_0(a_i)|[2|\partial_1 v\wedge\partial_2v|+4|\nabla(|v|)|| A|+|v|^2|h|]+\mathcal{O}(h_{\rm ex} r_i) \\&\leq& Cr_ih_{\rm ex}^2. \end{eqnarray*} And then \begin{equation}\label{GrosseGalereEst4-bis}
\sum_{B_i\subset\Omega}\left|\int_{\partial B_i}\partial_\tau \varphi(h_0-h_0(a_i))\right|\leq C\sum_{B_i\subset\Omega} r_ih_{\rm ex}^2. \end{equation}
If $B_i\not\subset\Omega$, then $\|h_0-1\|_{L^\infty(B_i\cap\Omega)}\leq C r_i$ and \begin{eqnarray}\nonumber
&&\left|\int_{\partial( B_i\cap\Omega)}(h_0-1)\partial_\tau \varphi \right|
\\\nonumber&\leq&\int_{B_i\cap\Omega}\left|\nabla h_0\cdot \nabla h\right|+|h_0-1|\left[2|\partial_1 v\wedge\partial_2v|+4|\nabla(|v|)|| A|+|v|^2|h|\right] \\\label{GrosseGalereEst4-bisbis}&\leq&C r_ih_{\rm ex}^2. \end{eqnarray} By combining \eqref{GrosseGalereEst4-bis} with \eqref{GrosseGalereEst4-bisbis} we deduce: \begin{equation}\label{GrosseGalereEst4} \sum\int_{\partial B_i\cap\Omega}(h_0-1)\partial_\tau \varphi=2\pi\sum d_i (h_0(a_i)-1)+o(1). \end{equation} We used that if $B_i\not\subset\Omega$ then $d_i=0$.
We end the preliminary claims by noting that \begin{equation}\label{GrosseGalereEst5}
\int_{\Omega}|\alpha^{-1}-1||\nabla h\cdot\nabla(h_0-1)|\leq Ch_{\rm ex}\|\alpha^{-1}-1\|_{L^2(\Omega)}=o(h_{\rm ex}^{-1}). \end{equation} On the one hand, since $-\Delta f+f=-\Delta h+h$, we have with \eqref{GrosseGalereEst1}, \eqref{GrosseGalereEst3}, \eqref{GrosseGalereEst4}, \eqref{GrosseGalereEst5} and integrations by parts: \begin{eqnarray*} \int_{\tilde\Omega}\nabla f\cdot\nabla(h_0-1)+f(h_0-1)
&=&\int_{\Omega}\alpha^{-1}\nabla h\cdot\nabla(h_0-1)+h(h_0-1)+o(h_{\rm ex}^{-1})
\\%\text{[with \eqref{GrosseGalereEst3}]} &=&o(h_{\rm ex}^{-1})+\sum_i\int_{\partial B_i}\partial_\tau \varphi(h_0-1)
\\%\text{[with \eqref{GrosseGalereEst4}]} &=&o(h_{\rm ex}^{-1})+2\pi\sum_{B_i\subset\Omega}d_i [h_0(a_i)-1] \\&=&o(h_{\rm ex}^{-1})+2\pi\sum_{B_i\subset\Omega} d_i \xi_0(a_i). \end{eqnarray*}
On the other hand, since $\|f\|_{L^4(\Omega)}\leq Ch_{\rm ex}$, we get $\displaystyle\int_{\cup B_i} f^2=o(h_{\rm ex}^{-4})$, and this estimate ends the proof.
\end{proof}
\subsection{Estimate related to the signs of the $d_i$'s}
By Proposition \ref{PropBorneInfSS3} we have:
\begin{eqnarray}\nonumber 0&\geq&\pi b^2\sum_i|d_i|(|\ln\varepsilon|-C\ln|\ln\varepsilon|)+2\pi h_{\rm ex}\sum_i d_i\xi_0(a_i)+
\\\label{FirstEstSSBo}&&\phantom{kqjsdffnkdkdkdkd}+\dfrac{1}{2}\int_{\Omega\setminus\cup B_i}|\nabla f|^2+\dfrac{1}{2}\int_\Omega f^2-o(1). \end{eqnarray}
Denote $I_+:=\{i\in \mathcal{J}\,|\,d_i>0\}$, $I_-:=\{i\in \mathcal{J}\,|\,d_i<0\}$, $D:=\sum_\mathcal{J} |d_i|$, $D_+:=\sum_{i\in I_+}d_i$ and $D_-:=\sum_{i\in I_-}|d_i|$.
With \eqref{FirstEstSSBo} we obtain $2h_{\rm ex} D_+\|\xi_0\|_{L^\infty(\Omega)}\geq b^2 D|\ln\varepsilon|\left(1-\frac{C\ln|\ln\varepsilon|}{|\ln\varepsilon|}\right)+o(1)$ and then:
\begin{equation}\label{SecEstSSBo}
D_-\leq D_+\times\frac{C\ln|\ln\varepsilon|}{|\ln\varepsilon|}+o(1). \end{equation}
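For the reader's convenience, here is a sketch of how \eqref{SecEstSSBo} follows from \eqref{FirstEstSSBo}. We use that $\xi_0\leq0$ [so the terms with $d_i<0$ contribute nonnegatively to $\sum d_i\xi_0(a_i)$], and we assume the upper bound $2h_{\rm ex}\|\xi_0\|_{L^\infty(\Omega)}\leq b^2|\ln\varepsilon|\left(1+C\frac{\ln|\ln\varepsilon|}{|\ln\varepsilon|}\right)$, which should come from the regime of $h_{\rm ex}$ considered here [this bound is not restated in this section]:

```latex
% Drop the nonnegative integral terms in \eqref{FirstEstSSBo}, bound
% \sum_{i\in I_+} d_i\xi_0(a_i) \geq -D_+\|\xi_0\|_{L^\infty(\Omega)}
% and discard the nonnegative terms with i\in I_-:
\begin{aligned}
0&\geq\pi b^2(D_++D_-)\big(|\ln\varepsilon|-C\ln|\ln\varepsilon|\big)
   -2\pi h_{\rm ex} D_+\|\xi_0\|_{L^\infty(\Omega)}-o(1)\\
 &\geq\pi b^2(D_++D_-)\big(|\ln\varepsilon|-C\ln|\ln\varepsilon|\big)
   -\pi b^2 D_+\big(|\ln\varepsilon|+C\ln|\ln\varepsilon|\big)-o(1)\\
 &\geq\pi b^2\Big[D_-|\ln\varepsilon|-C(2D_++D_-)\ln|\ln\varepsilon|\Big]-o(1),
\end{aligned}
```

so that $D_-\left(1-C\frac{\ln|\ln\varepsilon|}{|\ln\varepsilon|}\right)\leq 2CD_+\frac{\ln|\ln\varepsilon|}{|\ln\varepsilon|}+o(1)$, which is \eqref{SecEstSSBo} up to the value of the constant $C$.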
\subsection{Estimate related to ${\rm dist}(a_i,\Lambda)$}
From Lemma \ref{Lem.DescriptionLambda}, there exist $\eta>0$ and $M\geq 1$ s.t., for $a\in\Omega$, $\xi_0(a)\geq \min\xi_0+\eta{\rm dist}(a,\Lambda)^M$.
We let $I_0:=\{i\in I\,|\,{\rm dist}(a_i,\Lambda)<|\ln\varepsilon|^{-\frac{1}{2M}}\}$ and $D_0:=\sum_{i\in I_0}|d_i|$.
If $i\notin I_0$, then $|\xi_0(a_i)|\leq\|\xi_0\|_{L^\infty(\Omega)}-\dfrac{\eta}{\sqrt{|\ln\varepsilon|}}$. We thus have \begin{eqnarray*}
\left|\sum_i d_i\xi_0(a_i)\right|&\leq&\left|\sum_{i\in I_0} d_i\xi_0(a_i)\right|+\left|\sum_{i\notin I_0} d_i\xi_0(a_i)\right|
\\&\leq&D_0\|\xi_0\|_{L^\infty(\Omega)}+(D-D_0)\left(\|\xi_0\|_{L^\infty(\Omega)}-\dfrac{\eta}{\sqrt{|\ln\varepsilon|}}\right)
\\&\leq&D\|\xi_0\|_{L^\infty(\Omega)}-(D-D_0)\dfrac{\eta}{\sqrt{|\ln\varepsilon|}}. \end{eqnarray*} From \eqref{FirstEstSSBo} we may deduce
\begin{equation}\nonumber 2h_{\rm ex}\left(D\|\xi_0\|_{L^\infty(\Omega)}-(D-D_0)\dfrac{\eta}{\sqrt{|\ln\varepsilon|}}\right)\geq b^2D(|\ln\varepsilon|-C\ln|\ln\varepsilon|)-o(1) \end{equation} and consequently \begin{equation}\label{3EstSSBo}
D-D_0\leq CD\dfrac{\ln|\ln\varepsilon|}{\sqrt{|\ln\varepsilon|}}+o(1). \end{equation}
\subsection{Estimate of the last two terms in \eqref{FirstEstSSBo}}
We let $t\geq |\ln\varepsilon|^{-\frac{1}{2M}}\geq{|\ln\varepsilon|}^{-1/2}$ and then $t\geq\delta$ since $\delta|\ln\varepsilon|^{1/2}\leq 1$.
On the one hand, from Lemma E.1 in \cite{Publi4}, by denoting $\mathscr{C}_t$ a circle with radius $t$ we get: \begin{equation}\label{EstmSur"grand"cercle}
\int_{\mathscr{C}_t\cap\Omega}(1-\alpha^{-1})=\int_{\mathscr{C}_t\cap\Omega}|1-\alpha^{-1}|\leq C_b\lambda t. \end{equation}
We now assume that the center of $\mathscr{C}_t$ is in $\Lambda$ and that $t$ is s.t. $\mathscr{C}_t\subset\tilde\Omega=\Omega\setminus\cup \overline{B_i}$. We also denote by $B_t$ the disk bounded by $\mathscr{C}_t$. On $\mathscr{C}_t$ we have $|v|=1$ and then $v=\e^{\imath\varphi}$ with $\varphi$ locally defined.
By direct calculations, we have [with $f=h-h_{\rm ex} h_0$, $\nu$ the outward normal unit vector to $\mathscr{C}_t$ and $\tau=\nu^\bot$ ]: \[ \int_{\mathscr{C}_t}\alpha^{-1}\partial_\nu h=-\int_{\mathscr{C}_t}[\partial_\tau\varphi-A\cdot\tau]=-2\pi d_t+\int_{B_t}h\text{ with $d_t:= {\rm deg}_{\mathscr{C}_t}(v)$.} \]
On the other hand $\displaystyle\int_{\mathscr{C}_t}\alpha^{-1}\partial_\nu h_0=\int_{B_t} h_0+\int_{\mathscr{C}_t}(\alpha^{-1}-1)\partial_\nu h_0$. Note that \[
\left|\int_{\mathscr{C}_t}(\alpha^{-1}-1)\partial_\nu h_0\right|\leq\|\nabla h_0\|_{L^\infty(\Omega)}\int_{\mathscr{C}_t}|1-\alpha^{-1}|\leq C_b\lambda t\|\nabla h_0\|_{L^\infty(\Omega)}. \] Then for $\varepsilon>0$ sufficiently small:
$\displaystyle-\int_{\mathscr{C}_t}\alpha^{-1}\partial_\nu f+\int_{B_t}f\geq 2\pi d_t-C\lambda h_{\rm ex} t$.
Consequently we obtain \begin{equation}\nonumber
\displaystyle2\int_{\mathscr{C}_t}\alpha^{-2}\int_{\mathscr{C}_t}|\partial_\nu f|^2+2\pi t^2\int_{B_t}f^2\geq4\pi^2 d_t^2-Ct\lambda h_{\rm ex} |d_t| \end{equation}
and thus, by denoting $m_t:=\displaystyle\int_{\mathscr{C}_t}\alpha^{-2}$, we get
\begin{equation}\nonumber
\dfrac{1}{2}\int_{\mathscr{C}_t}|\partial_\nu f|^2+\dfrac{\pi t^2}{2m_t}\int_{B_t}f^2\geq\dfrac{\pi^2 d_t^2}{m_t}-\dfrac{Ct\lambda h_{\rm ex} |d_t|}{m_t}. \end{equation} Since $2\pi t\leq m_t\leq b^{-4}2\pi t$, for sufficiently small $\varepsilon>0$ we obtain
\begin{equation}\label{MajEstBorneVorticite}
\dfrac{1}{2}\int_{\mathscr{C}_t}|\partial_\nu f|^2+\dfrac{t}{4}\int_{B_t}f^2\geq b^4 \dfrac{\pi d_t^2}{2t}-C\lambda h_{\rm ex} |d_t|\geq b^4 \dfrac{\pi d_t^2}{4t}. \end{equation} Following exactly the argument in \cite{SS2} we get \[
\dfrac{1}{2}\int_{\Omega\setminus\cup B_i}|\nabla f|^2+\dfrac{1}{2}\int_\Omega f^2\geq C'D^2\ln|\ln\varepsilon|+o(1). \]
With \eqref{FirstEstSSBo} and $\xi_0(a_i)\geq-\|\xi_0\|_{L^\infty(\Omega)}$ there are $C_1,C_2>0$ [independent of $\varepsilon$] s.t. \[
(C_1 D^2-C_2D)\ln|\ln\varepsilon|\leq g(\varepsilon)\text{ with }g(\varepsilon)\to0\text{ for }\varepsilon\to0. \]
This estimate implies $D\leq \dfrac{C_2}{C_1}+o(1)$; in particular $D$ is bounded independently of $\varepsilon$. Therefore with \eqref{SecEstSSBo} and \eqref{3EstSSBo} we get the first three assertions of the theorem.
It remains to get \eqref{CrucialBoundedkjqbsdfbn} whose proof follows the same lines as in \cite{SS2} [Section 4]. \section{Proof of Proposition \ref{Docmpen}}\label{Sec.PreuveDocmpen} Let $C_0>1$, $(v_\varepsilon)_{0<\varepsilon<1}\subset H^1(\Omega,\mathbb{C})$, $(h_{\rm ex})_{0<\varepsilon<1}\subset(0,\infty)$ and $(\xi_\varepsilon)_{0<\varepsilon<1}\subset H^1_0\cap H^2\cap W^{1,\infty}(\Omega,\mathbb{R})$ be s.t. \eqref{AbsNatBorneuh} and \eqref{BorneXiPourLaDec} hold. For simplicity of the presentation we omit the index $\varepsilon$.
Let $\{(B(a_i,r_i),d_i)\,|\,i\in \mathcal{J}\}$ be as in the proposition and write $B_i:=B(a_i,r_i)$.
In this proof the letter "$C$" stands for a quantity bounded by a power of $C_0$ whose value may differ from one line to another.
We let $A=\nabla^{\bot }\xi$ and $\tilde\Omega:=\begin{cases}\Omega\setminus\cup_{}\overline{B_i}&\text{if }|v|\not>1/2\text{ in }\Omega\\\Omega&\text{if }|v|>1/2\text{ in }\Omega\end{cases}$.
The heart of the proof consists in estimating the quantity $\int_\Omega(v\wedge\nabla v)\cdot A$ in \eqref{EGALITEdenettoyga}.
We first get with the help of \eqref{AbsNatBorneuh} and \eqref{BorneXiPourLaDec} that if $|v|\not>1/2$ in $\Omega$ then $\int_{\cup B_i} v\wedge\nabla v\cdot A=o(1)$.
We also claim that, letting $w:=v/|v|$ in $\tilde\Omega$: $\int_{\tilde\Omega}(v\wedge\nabla v-w\wedge\nabla w)\cdot A=o(1)$.
In particular, if $|v|>1/2$ in $\Omega$ then we have $\int_\Omega(v\wedge\nabla v)\cdot A=o(1)$. We thus assume that $|v|\not>1/2$ in $\Omega$.
Then, with an integration by parts we get \begin{eqnarray}\nonumber &&-\int_\Omega v\wedge\nabla v\cdot A \\\nonumber&
=&-\sum_{B_i\subset\Omega}\left\{\xi(a_i)\int_{\partial B_i} (w\wedge\nabla^\bot w)\cdot \nu+\int_{\partial B_i} {(\xi-\xi(a_i))}(w\wedge\nabla^\bot w)\cdot \nu\right\}+ \\\label{DevEnergyChaton1}&&\phantom{ughfhffffhflfllflflfgjgdvjdjdn}+\sum_{B_i\not\subset\Omega}\int_{\partial (B_i\cap\Omega)}\xi(w\wedge\nabla^\bot w)\cdot \nu. \end{eqnarray}
For $B_i\subset\Omega$ we immediately have : \begin{equation}\label{DevEnergyChaton2} \int_{\partial B_i} (w\wedge\nabla^\bot w)\cdot \nu=-2\pi d_i. \end{equation}
We now define $u:=\begin{cases}v&\text{in }\tilde\Omega\\u_i&\text{in }B_i\cap\Omega\end{cases}$ where $u_i$ is the harmonic extension of ${\rm tr}_{\partial (B_i\cap\Omega)}(v)$ in $B_i\cap\Omega$. By the Dirichlet principle we have for all $i$: \begin{equation}\label{DirPrincui}
\|\nabla u\|_{L^2(B_i\cap\Omega)}\leq\|\nabla v\|_{L^2(B_i\cap\Omega)}=\mathcal{O}(|\ln \varepsilon |). \end{equation}
It is easy to check that $(w\wedge\nabla^\bot w)\cdot \nu=|u|^{-2}(u\wedge\nabla^\bot u)\cdot \nu$ on $\cup_i\partial B_i$. For $i\in\mathcal{J}$, we let \[ f_i= \begin{cases} \xi-\xi(a_i)&\text{if }B_i\subset\Omega\\\xi&\text{if }B_i\not\subset\Omega \end{cases}\in H^2\cap W^{1,\infty}(B_i\cap\Omega). \] From \eqref{BorneXiPourLaDec} we get
\begin{equation}\label{Grad-LinftyBiEstXi}
\|\nabla f_i\|_{L^\infty(B_i\cap\Omega)}\leq C|\ln\varepsilon|. \end{equation}
Our goal is now to estimate $\int_{\partial (B_i\cap\Omega)} f_i(w\wedge\nabla^\bot w)\cdot \nu$. We first consider the case where $i\in\mathcal{J}$ is s.t. $|u|\geq1/2$ in $B_i\cap\Omega$. In this case we may write in $B_i$: $u=|u|\e^{\imath\phi}$ with $\phi\in H^1(B_i,\mathbb{R})$, $\|\phi\|_{H^1(B_i)}\leq C|\ln\varepsilon|$. We then have with \eqref{Grad-LinftyBiEstXi} and an integration by parts \begin{equation}\label{DevEnergyChaton3}
\left|\int_{\partial (B_i\cap\Omega)} f_i(w\wedge\nabla^\bot w)\cdot \nu\right|
\leq\|\nabla f_i\|_{L^2(B_i\cap\Omega)}\|\nabla\phi\|_{L^2(B_i\cap\Omega)}\leq C|\ln\varepsilon|^2 r_i. \end{equation}
We now assume $i\in\mathcal{J}$ is s.t. $|u|\not\geq1/2$ in $B_i\cap\Omega$. By smoothness of $|u_i|^2\in C^\infty(B_i\cap\Omega,\mathbb{R})$, there exists $t_i\in]1/5,1/4[$, a regular value of $|u_i|^2$, s.t. $\omega_i:=\{|u_i|^2<t_i\}\neq\emptyset$. We denote $D_i:=\Omega\cap[B_i\setminus\overline{\omega_i}]$. Since $|u|^2\geq1/4$ on $\partial B_i\cap\Omega$ we have $\partial D_i=(\partial B_i\cap\Omega)\cup\partial\omega_i\cup(\partial\Omega\cap \overline{D_i})$.
Letting $W:=\dfrac{u}{|u|}\wedge\nabla^\bot\left(\dfrac{u}{|u|}\right)$ we then get \begin{eqnarray}\label{DevEnergyChaton4} \int_{\partial D_i} f_iW\cdot \nu =\int_{D_i} f_i{\rm div} (W)+\nabla f_i\cdot W. \end{eqnarray} It is standard to check that ${\rm div}\left(W\right)=0$ in $D_i$ [writing locally $u/|u|=\e^{\imath\varphi}$ we have $W=\nabla^\bot\varphi$ which is divergence free]. Moreover: \begin{equation}\nonumber
\left|\int_{D_i}\nabla f_i\cdot W\right|\leq2\|\nabla\xi\|_{L^2(B_i\cap\Omega)}\|\nabla u\|_{L^2(B_i\cap\Omega)}\leq C|\ln\varepsilon|^2 r_i. \end{equation} Consequently using \eqref{DevEnergyChaton4} we may deduce \begin{equation}\label{DevEnergyChaton5}
\left|\int_{\partial D_i} f_i\,W\cdot \nu\right|\leq C|\ln\varepsilon|^2 r_i. \end{equation} On the other hand, from \eqref{Grad-LinftyBiEstXi}, $\xi=0$ on $\partial\Omega$ and ${\rm div}\left(u\wedge\nabla^\bot u\right)=-2\partial_1 u\wedge\partial_2 u$ in $B_i\cap\Omega$, we get \begin{eqnarray}\nonumber
&&\left|\int_{\partial D_i} f_i\,W\cdot \nu-\int_{\partial B_i\cap\Omega} f_i(w\wedge\nabla^\bot w)\cdot \nu\right|
\\\nonumber&=&\left|\int_{\partial \omega_i} f_i\,W\cdot \nu\right|
\\\label{DevEnergyChaton6}&=&\dfrac{1}{t_i}\left|\int_{\omega_i} -2f_i\partial_1 u\wedge\partial_2 u+\nabla f_i\cdot\left(u\wedge\nabla^\bot u\right)\right|\leq C|\ln\varepsilon|^3 r_i. \end{eqnarray}
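For the reader's convenience we sketch the last estimate. From \eqref{Grad-LinftyBiEstXi} and the mean value inequality we have $\|f_i\|_{L^\infty(B_i\cap\Omega)}\leq C|\ln\varepsilon|\,r_i$. Thus, with $|2\partial_1 u\wedge\partial_2 u|\leq|\nabla u|^2$ and \eqref{DirPrincui},
\[
\left|\int_{\omega_i} -2f_i\partial_1 u\wedge\partial_2 u\right|\leq C|\ln\varepsilon|\,r_i\|\nabla u\|^2_{L^2(B_i\cap\Omega)}\leq C|\ln\varepsilon|^3 r_i,
\]
while, with $|\omega_i|^{1/2}\leq Cr_i$, $|u|\leq1$ in $\omega_i$ and the Cauchy-Schwarz inequality,
\[
\left|\int_{\omega_i}\nabla f_i\cdot\left(u\wedge\nabla^\bot u\right)\right|\leq C|\ln\varepsilon|\,r_i\|\nabla u\|_{L^2(B_i\cap\Omega)}\leq C|\ln\varepsilon|^2 r_i.
\]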
We may conclude by using \eqref{DevEnergyChaton1}, \eqref{DevEnergyChaton2}, \eqref{DevEnergyChaton3}, \eqref{DevEnergyChaton5} and \eqref{DevEnergyChaton6}: \begin{equation}\nonumber -\int_\Omega v\wedge\nabla v\cdot A=2\pi\sum_{B_i\subset\Omega}d_i\xi(a_i)+o(1). \end{equation} The rest of the proof is exactly the same as in \cite{S1}.
\section{Proof of some results of Section \ref{UnimSection}} \subsection{Proof of Proposition \ref{MinimalMapHomo}}\label{PreuvePropUniModComp}
We use the same notation as in Proposition \ref{MinimalMapHomo}. In this proof, the letter $C$ stands for a quantity which depends only on $\Omega$, $N$ and $\sum_i|d_i|$; its value may change from one line to another.
We argue as in \cite{LR1}. We let $\Phi^\zd_\star\in \cap_{0<p<2}W^{1,p}(\Omega,\mathbb{R})\cap H_{\rm loc}^1(\Omega\setminus\{z_1,...,z_N\},\mathbb{R})$ be the unique solution of \[ \begin{cases} \Delta\Phi^\zd_\star=2\pi\sum_{i=1}^Nd_i\delta_{z_i}&\text{in }\Omega \\ \Phi^\zd_\star=0&\text{on }\partial\Omega \end{cases}. \] and let $\Phi_{\tilde r}\in H^1(\Omega_{\tilde r},\mathbb{R})$ be the unique solution of \begin{equation}\label{ConjaHarmmojqsdhfhfh} \begin{cases} \Delta\Phi_{\tilde r}=0&\text{in }\Omega_{\tilde r} \\ \Phi_{\tilde r}=0&\text{on }\partial\Omega \\ \Phi_{\tilde r}={\rm Cte}_i&\text{on }\partial B(z_i,{\tilde r}) \\ \displaystyle\int_{\partial B(z_i,{\tilde r})}\partial_\nu\Phi_{\tilde r}=2\pi d_i&\text{for all }i\in\{1,...,N\} \end{cases}. \end{equation}
We then have $\nabla^\bot\Phi^\zd_\star=w^\zd_\star\wedge\nabla w^\zd_\star$ and $\nabla^\bot\Phi^{\bf (z,d)}_{\tilde r}=w^{\bf (z,d)}_{\tilde r}\wedge\nabla w^{\bf (z,d)}_{\tilde r}$. It is important to note that if $w\in H^1(\Omega_{\tilde r},\mathbb{S}^1)$, then $|\nabla w|=|w\wedge\nabla w|$.
We may decompose $\Phi^\zd_\star$ as $\Phi^\zd_\star=\sum_i d_i\Phi_{z_i}$ where, for $z\in\Omega$, $\Phi_{z}$ is the unique solution of \[ \begin{cases} \Delta\Phi_{z}=2\pi\delta_{z}&\text{in }\Omega \\ \Phi_{z}=0&\text{on }\partial\Omega \end{cases}. \] With a standard pointwise bound for the gradient of a harmonic function [see (2.31) in \cite{GT}] we have
$\|\nabla\Phi_{z_i}\|_{L^\infty(\Omega\setminus\overline{B(z_i,{\tilde r})})}\leq C\dfrac{\|\Phi_{z_i}\|_{L^\infty(\Omega\setminus\overline{B(z_i,{\tilde r}/4)})}}{{\tilde r}}$.
Thus \begin{equation}\label{PremereBornePutaindeGrad}
\|\nabla\Phi^\zd_\star\|_{L^\infty(\Omega_{{\tilde r}})}\leq C\dfrac{\sum_i|d_i|\|\Phi_{z_i}\|_{L^\infty(\Omega_{{\tilde r}/4})}}{{\tilde r}}. \end{equation}
Moreover, it is easy to check that $\Phi_{z_i}=\ln|x-z_i|+R_{z_i}$ where $R_{z_i}$ is the harmonic extension of $-\ln|x-z_i|_{|\partial\Omega}$. From \eqref{PremereBornePutaindeGrad} and by the maximum principle we get
for ${\tilde r}<\min\left\{[{\rm diam}(\Omega)]^{-1};1/4\right\}$
\begin{equation}\label{AjoutNUMA}
|\nabla\Phi^\zd_\star|\leq \dfrac{C(1+|\ln{\tilde r}|)}{{\tilde r}}\text{ in }{\Omega_{\tilde r}}
\end{equation} which proves \eqref{BorneGradWstar}.
If there is $\eta>0$ s.t. $\hbar>\eta$, then $\|R_{z_i}\|_{C^1(\Omega)}\leq C_\eta$ where $C_\eta$ depends only on $\eta$ and $\Omega$. We thus get $\|\nabla \Phi^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\leq\dfrac{\tilde{C}_\eta}{{\tilde r}}$ [where $\tilde{C}_\eta$ depends only on $\eta$, $N$, $\sum|d_i|$ and $\Omega$] and this estimate implies \eqref{BorneGradWstarSpeciale}.
We now define
$R_{\bf (z,d)}:=\sum_i d_iR_{z_i}$ in order to have $\Phi^\zd_\star=\sum_id_i\ln|x-z_i|+R_{\bf (z,d)}$.
From Lemma I.4 in \cite{BBH} we have \begin{eqnarray}\nonumber
\|\Phi_{\tilde r}-\Phi^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}
\leq\sum_i\left[\sup_{\partial B(z_i,{\tilde r})}\sum_j\ln|x-z_j|-\inf_{\partial B(z_i,{\tilde r})}\sum_j\ln|x-z_j|\right]+\\\label{lkjbblkjn}\phantom{luljhsdgslkjdfghndfg}+\sum_i\left[\sup_{\partial B(z_i,{\tilde r})}R_{\bf (z,d)}-\inf_{\partial B(z_i,{\tilde r})}R_{\bf (z,d)}\right]. \end{eqnarray} If $N=1$, then the first term of the RHS in \eqref{lkjbblkjn} is $0$. Otherwise, as in \cite{S1} [Proposition 5.1], we have \begin{equation}\label{lkjbblkjn1}
\sum_i\left[\sup_{\partial B(z_i,{\tilde r})}\sum_j\ln|x-z_j|-\inf_{\partial B(z_i,{\tilde r})}\sum_j\ln|x-z_j|\right]\leq\dfrac{C{\tilde r}}{\min_{i\neq j}|z_i-z_j|}. \end{equation} And for $i\in\{1,...,N\}$, by harmonicity of $R_{\bf (z,d)}$, for $0<\rho<\dfrac{\hbar}{2}$ we get \begin{equation}\label{lkjbblkjn0}
\|\nabla R_{\bf (z,d)}\|_{L^\infty(B(z_i,\rho))}\leq\dfrac{C\|R_{\bf (z,d)}\|_{L^\infty(\Omega)}}{{\rm dist}(z_i,\partial\Omega)-\rho}\leq C\dfrac{1+|\ln(\hbar)|}{\hbar}. \end{equation} Then \begin{equation}\label{lkjbblkjn2}
\sum_i\left[\sup_{\partial B(z_i,{\tilde r})}R_{\bf (z,d)}-\inf_{\partial B(z_i,{\tilde r})}R_{\bf (z,d)}\right]\leq C\dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}. \end{equation} We let \begin{equation}\label{DefY} Y:=\begin{cases}
\dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}&\text{if $N=1$}\\\dfrac{{\tilde r}}{\min_{i\neq j}|z_i-z_j|}+\dfrac{{\tilde r}(1+|\ln(\hbar)|)}{\hbar}&\text{if }N\geq2 \end{cases}. \end{equation} By combining \eqref{lkjbblkjn}, \eqref{lkjbblkjn1} and \eqref{lkjbblkjn2} we get \begin{equation}\label{lkjbblkjn3}
\|\Phi_{\tilde r}-\Phi^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\leq CY. \end{equation}
From \eqref{AjoutNUMA} and \eqref{lkjbblkjn3} we immediately get
\begin{eqnarray}\nonumber 0&\leq&\int_{\Omega_{\tilde r}}|\nabla\Phi^\zd_\star|^2-|\nabla\Phi_{\tilde r}|^2+|\nabla(\Phi^\zd_\star-\Phi_{\tilde r})|^2
\\\label{Borjhpresquefinimlh}&\leq& C\,Y{\tilde r} \max_i\|\partial_\nu\Phi^\zd_\star\|_{L^\infty(\partial B(z_i,{\tilde r}))}. \end{eqnarray} On the other hand, for $i\in\{1,...,N\}$, we have [with \eqref{lkjbblkjn0}] \begin{equation}\label{BorneDerNormPstarCercle}
\|\partial_\nu\Phi^\zd_\star\|_{L^\infty(\partial B(z_i,{\tilde r}))}
\leq C\left(\dfrac{1}{{\tilde r}}+ \dfrac{1+|\ln(\hbar)|}{\hbar}\right). \end{equation} Using $X$ defined in \eqref{DefX}, from \eqref{Borjhpresquefinimlh} and \eqref{BorneDerNormPstarCercle}, we get \begin{equation}\label{ConjaHarmmojqsdhfhfh222}
0\leq\int_{\Omega_{\tilde r}}|\nabla\Phi^\zd_\star|^2-|\nabla\Phi_{\tilde r}|^2+|\nabla(\Phi^\zd_\star-\Phi_{\tilde r})|^2\leq CX. \end{equation} From \eqref{ConjaHarmmojqsdhfhfh222} we deduce \eqref{ConvergenceShrHolklgBorne} and since $\int_{\partial\Omega}(\varphi_\star-\varphi_{\tilde r})=0$, with a Poincaré inequality we obtain \eqref{ConvergenceH1ShrHolklgBorne}. \subsection{Proof of Proposition \ref{Prop.EnergieRenDef}}\label{PreuvelammeShrinkSerfaty}
Let ${\bf (z,d)}={\bf (z,d)}^{(n)}\in(\O^N)^*\times\mathbb{Z}^N$ and denote $\hbar:=\min_i{\rm dist}(z_i,\partial\Omega)>0$. Assume that $d_1,...,d_N$ are independent of $n$. Let ${\tilde r}={\tilde r}_n\to0$ be s.t. \eqref{HypRayClass} holds.
In this proof the letter $C$ stands for a quantity which depends only on $\Omega$, $N$, $C_1$ and $\sum_i|d_i|$, its value may change from one line to another.
By Remark \ref{Remark.DefConjuHarmPhase} and an integration by parts we have \begin{equation}\label{PremiereEstPourlkjhkluklu1}
\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2=\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla \Phi^\zd_\star|^2=-\dfrac{1}{2}\sum_i\int_{\partial B(z_i,{\tilde r})}\Phi^\zd_\star\partial_\nu\Phi^\zd_\star. \end{equation} For $i_0\in\{1,...,N\}$, we fix $x_{i_0}\in\partial B(z_{i_0},{\tilde r})$. Then [with $\nabla^\bot\Phi^\zd_\star=w^\zd_\star\wedge\nabla w^\zd_\star$] \begin{eqnarray}\nonumber &&\int_{\partial B(z_{i_0},{\tilde r})}\Phi^\zd_\star\partial_\nu\Phi^\zd_\star
\\\label{DecompParInciohj}&=&\int_{\partial B(z_{i_0},{\tilde r})}\left[\Phi^\zd_\star-\Phi^\zd_\star(x_{i_0})\right]\partial_\nu\Phi^\zd_\star+2\pi d_{i_0}\Phi^\zd_\star(x_{i_0}). \end{eqnarray} On the one hand, arguing as in the proof of \eqref{lkjbblkjn3}, we get for $z\in \partial B(z_{i_0},{\tilde r})$ : \[
|\Phi^\zd_\star(z)-\Phi^\zd_\star(x_{i_0})|\leq\sup_{\partial B(z_{i_0},{\tilde r})}\Phi^\zd_\star-\inf_{\partial B(z_{i_0},{\tilde r})}\Phi^\zd_\star\leq CY. \]
Then, using \eqref{BorneDerNormPstarCercle}, we obtain
\begin{equation}\label{Premirelkjhqsdjdjdjd1}
\sum_i\left|\int_{\partial B(z_i,{\tilde r})}\left[\Phi^\zd_\star-\Phi^\zd_\star(x_i)\right]\partial_\nu\Phi^\zd_\star\right|\leq CX. \end{equation} On the other hand, for $i_0\in\{1,...,N\}$ \begin{eqnarray*} \Phi^\zd_\star(x_{i_0})-R_{\bf (z,d)}(z_{i_0})
=-d_{i_0}|\ln{\tilde r}|+\sum_{j\neq i_0} d_j\ln|x_{i_0}-z_j|+\left[R_{\bf (z,d)}(x_{i_0})-R_{\bf (z,d)}(z_{i_0})\right], \end{eqnarray*}
and with \eqref{lkjbblkjn0} we get $\displaystyle\left|R_{\bf (z,d)}(x_{i_0})-R_{\bf (z,d)}(z_{i_0})\right|\leq\dfrac{C(1+|\ln\hbar|){\tilde r}}{\hbar}$. We then immediately get:
\begin{equation}\label{NUMBBBBGBGBGB}
\Phi^\zd_\star(x_{i_0})=R_{\bf (z,d)}(z_{i_0})-d_{i_0}|\ln{\tilde r}|+\sum_{j\neq i_0} d_j\ln|z_{i_0}-z_j|+\mathcal{O}(X). \end{equation}
With \eqref{DecompParInciohj}, \eqref{Premirelkjhqsdjdjdjd1} and \eqref{NUMBBBBGBGBGB}, \eqref{PremiereEstPourlkjhkluklu1} may be rewritten as \begin{equation*}
\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2
=\pi\sum_i\left[d_{i}^2|\ln{\tilde r}|-d_iR_{\bf (z,d)}(z_{i})\right]-\pi\sum_{i\neq j}d_id_j\ln|z_{i}-z_j|+\mathcal{O}(X) \end{equation*}
where "$\mathcal{O}(X)$" is a quantity bounded by $CX$ with $C$ depending only on $N,\Omega$ and $\sum|d_i|$.
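For the convenience of the reader we detail how the estimates combine. From \eqref{PremiereEstPourlkjhkluklu1}, \eqref{DecompParInciohj} and \eqref{Premirelkjhqsdjdjdjd1} we first get
\[
\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2=-\pi\sum_i d_i\Phi^\zd_\star(x_i)+\mathcal{O}(X),
\]
and it then suffices to replace $\Phi^\zd_\star(x_i)$ by the right hand side of \eqref{NUMBBBBGBGBGB}.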
\subsection{Proof of Proposition \ref{Prop.ConditionDirEnergieRen}}\label{Sec.PreuvelammeShrinkSerfatyDir}
Let ${\bf (z,d)}={\bf (z,d)}^{(n)}\in(\O^N)^*\times\mathbb{Z}^N$, ${\tilde r}\downarrow0$ and $\eta>0$ be as in the proposition.
In this proof the letter $C$ stands for a quantity which depends only on $\Omega$, $N$ and $\sum_i|d_i|$, its value may change from one line to another.
We first note that, for $i\neq j$, $B(z_i,\eta)\cap B(z_j,\eta)=\emptyset$, $B(z_i,\eta)\subset\Omega$ and $\eta=\chi{\tilde r}$ with $\chi\to\infty$. In particular we assume $n$ sufficiently large to have $\eta>{\tilde r}$.
Since $\nabla^\bot\Phi^\zd_\star=w^\zd_\star\wedge\nabla w^\zd_\star$, for $i_0\in\{1,...,N\}$ and $z\in\Omega\setminus\{z_1,...,z_N\}$, we have \[
w^\zd_\star\wedge\nabla w^\zd_\star(z)=d_{i_0}\nabla^\bot(\ln|z-z_{i_0}|)+\nabla^\bot\left[R_{\bf (z,d)}(z)+\sum_{j\neq i_0} d_j\ln|z-z_j|\right]. \]
For $j\in\{1,...,N\}$, let $\theta_j$ be the main determination of the argument of $\dfrac{z-z_j}{|z-z_j|}$ and let $\mathcal{R}$ be a harmonic conjugate of $R_{\bf (z,d)}$. In $\Omega\setminus\{z_1,...,z_N\}$ we have \[ w^\zd_\star\wedge\nabla w^\zd_\star-d_{i_0}\nabla \theta_{i_0}=\nabla\left[\sum_{j\neq i_0} d_j\theta_j+\mathcal{R}\right]. \]
Then for $z\in B(z_{i_0},\eta)\setminus\{z_{i_0}\}$ we have $w^\zd_\star(z)=\left(\dfrac{z-z_{i_0}}{|z-z_{i_0}|}\right)^{d_{i_0}}\e^{\imath \varphi_{i_0}(z)}$ with $\varphi_{i_0}=\sum_{j\neq i_0} d_j\tilde\theta_j+\mathcal{R}+{\rm Cte}_{i_0}$ where, for $j\neq i_0$, $\tilde{\theta}_j$ is a determination of the argument of $\dfrac{z-z_j}{|z-z_j|}$ which is globally defined in $B(z_{i_0},\eta)$. Note that $\varphi_{i_0}\in H^1(B(z_{i_0},\eta),\mathbb{R})$.
On the other hand, by direct calculations, we have
$\left\|\sum_{j\neq i_0} d_j\nabla \tilde\theta_j\right\|_{L^\infty(B(z_{i_0},\eta))}\leq \dfrac{C}{\eta}$
and, since $R_{\bf (z,d)}$ is harmonic, we also have from the definition of $\mathcal{R}$ \[
\|\nabla\mathcal{R}\|_{L^\infty(B(z_{i_0},\eta))}=\|\nabla R_{\bf (z,d)}\|_{L^\infty(B(z_{i_0},\eta))}\leq C\dfrac{\| R_{{\bf (z,d)}}\|_{L^\infty(\Omega)}}{{\rm dist}(B(z_{i_0},\eta),\partial\Omega)}\leq C\dfrac{|\ln(\hbar)|+1}{\hbar}. \] We thus deduce \begin{equation}\label{BorneDephWstarBouleEta}
\|\nabla\varphi_{i_0}\|_{L^\infty(B(z_{i_0},\eta))}\leq C\left(\dfrac{1+|\ln(\hbar)|}{\hbar}+\dfrac{1}{\eta}\right). \end{equation} We switch to polar coordinates by letting for $i\in\{1,...,N\}$ and $\rho\in]{\tilde r},\eta[$, $\tilde{\varphi}_{i}(\rho,\theta):=\varphi_{i}(z_i+\rho\e^{\imath\theta})$. We then get, by \eqref{BorneDephWstarBouleEta} and a mean value argument,
the existence of $\rho_n\in]\sqrt{\chi}{\tilde r},\eta[$ s.t. \[
\sum_i\int_0^{2\pi}|\partial_\theta\tilde\varphi_i(\rho_n,\theta)|^2\,{\rm d}\theta\leq\dfrac{C}{\ln\chi}\left[\dfrac{\eta(|\ln(\hbar)|+1)}{\hbar}+1\right]^2. \]
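The mean value argument is standard: since $|\partial_\theta\tilde\varphi_i(\rho,\theta)|^2=\rho^2|\partial_\tau\varphi_i(z_i+\rho\e^{\imath\theta})|^2$, we have
\[
\int_{\sqrt{\chi}{\tilde r}}^{\eta}\dfrac{{\rm d}\rho}{\rho}\int_0^{2\pi}|\partial_\theta\tilde\varphi_i(\rho,\theta)|^2\,{\rm d}\theta\leq\int_{B(z_i,\eta)}|\nabla\varphi_i|^2\leq\pi\eta^2\|\nabla\varphi_i\|^2_{L^\infty(B(z_i,\eta))}
\]
and $\displaystyle\int_{\sqrt{\chi}{\tilde r}}^{\eta}\dfrac{{\rm d}\rho}{\rho}=\dfrac{\ln\chi}{2}$; the existence of $\rho_n$ then follows from \eqref{BorneDephWstarBouleEta}.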
We let $Z:=\dfrac{1}{\ln\chi}\left[\dfrac{\eta(|\ln(\hbar)|+1)}{\hbar}+1\right]^2$ and by assumption we have $Z\to0$.
We denote, for $i\in\{1,...,N\}$, $m_i=\displaystyle\dfrac{1}{2\pi}\int_0^{2\pi}\tilde\varphi_i(\rho_n,\theta)\,{\rm d}\theta$ in order to have \[
\int_0^{2\pi}|\tilde\varphi_i(\rho_n,\theta)-m_i|^2\,{\rm d}\theta\leq CZ. \] We then define $\phi_i\in H^1(B(z_i,\rho_n)\setminus\overline{B(z_i,{\tilde r})},\mathbb{R})$ using polar coordinates: \[ \tilde\phi_i(s,\theta)=\dfrac{s-\rho_n}{{\tilde r}-\rho_n}m_i+\dfrac{s-{\tilde r}}{\rho_n-{\tilde r}}\tilde\varphi_i(\rho_n,\theta)\text{ with }s\in({\tilde r},\rho_n). \]
For $z_i+s\e^{\imath\theta}\in B(z_i,\rho_n)\setminus\overline{B(z_i,{\tilde r})}$, we let $\phi_i(z_i+s\e^{\imath\theta}):=\tilde\phi_i(s,\theta)$. By standard calculations we get $\displaystyle\int_{B(z_i,\rho_n)\setminus\overline{B(z_i,{\tilde r})}}|\nabla \phi_i|^2\leq CZ$.
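For the reader's convenience we sketch these calculations. In polar coordinates we have $|\nabla\phi_i|^2=|\partial_s\tilde\phi_i|^2+s^{-2}|\partial_\theta\tilde\phi_i|^2$ with
\[
\partial_s\tilde\phi_i=\dfrac{\tilde\varphi_i(\rho_n,\theta)-m_i}{\rho_n-{\tilde r}}\quad\text{and}\quad\partial_\theta\tilde\phi_i=\dfrac{s-{\tilde r}}{\rho_n-{\tilde r}}\,\partial_\theta\tilde\varphi_i(\rho_n,\theta).
\]
Since $\rho_n\geq\sqrt{\chi}{\tilde r}$ with $\chi\to\infty$ we have $\dfrac{\rho_n}{\rho_n-{\tilde r}}\leq C$, and since $\dfrac{(s-{\tilde r})^2}{s}\leq s-{\tilde r}$ for $s\geq{\tilde r}$, integrating in $s\,{\rm d}s\,{\rm d}\theta$ yields
\[
\int_{B(z_i,\rho_n)\setminus\overline{B(z_i,{\tilde r})}}|\nabla\phi_i|^2\leq C\int_0^{2\pi}\left[|\tilde\varphi_i(\rho_n,\theta)-m_i|^2+|\partial_\theta\tilde\varphi_i(\rho_n,\theta)|^2\right]{\rm d}\theta\leq CZ.
\]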
We conclude by defining $v=\begin{cases} w^\zd_\star&\text{in }\Omega\setminus\cup\overline{B(z_i,\rho_n)}\\u_i\e^{\imath\phi_i}&\text{in }B(z_i,\rho_n)\setminus\overline{B(z_i,{\tilde r})}
\end{cases}$ with $u_i(z)=\left(\dfrac{z-z_i}{|z-z_i|}\right)^{d_i}$. It is clear that $v\in H^1(\Omega_{\tilde r},\mathbb{S}^1)$ and that for $i\in\{1,...,N\}$ we have $v(z_i+{\tilde r}\e^{\imath\theta})={\rm Cte}_iu_i$ [with ${\rm Cte}_i=\e^{\imath m_i}$]. Note that since $ {\rm deg}_{\partial B(z_i,{\tilde r})}(w^\zd_\star)=d_i$ we have \[
\dfrac{1}{2} \int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla u_i|^2\leq \dfrac{1}{2}\int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla w^\zd_\star|^2 \] and \[
\dfrac{1}{2} \int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla (u_i\e^{\imath\phi_i})|^2= \dfrac{1}{2}\int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla u_i|^2+\dfrac{1}{2}\int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla \phi_i|^2. \] Consequently using \eqref{BorneDephWstarBouleEta} and $\rho_n<\eta$ we obtain \begin{eqnarray*}
\sum_i \dfrac{1}{2}\int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla v|^2
\leq\sum_i \dfrac{1}{2}\int_{B(z_i,\rho_n)\setminus B(z_i,{\tilde r})}|\nabla w^\zd_\star|^2+CZ. \end{eqnarray*}
Thus $\displaystyle\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla v|^2\leq\dfrac{1}{2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2+CZ$. The last estimate and \eqref{ConvergenceShrHolklgBorne} end the proof. \section{Proof of Proposition \ref{Prop.BorneSupSimple} }\label{AppProofUpBound} \begin{proof} {\bf Step 1. Selection of "good" points}
Let $d\in\mathbb{N}^*$ and consider ${\bf D}\in\Lambda_{d}$ which minimizes \eqref{CouplageEnergieRen}.
For $k\in\{1,...,N_0\}$, if $D_k\geq1$ we let $(\tilde{z}^{(k)}_1,...,\tilde{z}^{(k)}_{D_k})\in [B(p_k,h_{\rm ex}^{-1/4})^{D_k}]^*$ which minimizes the infimum in the left hand side of \eqref{DevMesoscopicDef} with $R=h_{\rm ex}^{-1/4}$, $p=p_k$ and $D=D_k$.
We then have the existence of $C$ [depending only on $\Omega$ and $d$] s.t. $|p_k-\tilde z^{(k)}_i|\leq C h_{\rm ex}^{-1/2}$ and if $D_k\geq2$ then $|\tilde z^{(k)}_i-\tilde z^{(k)}_j|\geq h_{\rm ex}^{-1/2}/C$ for $i\neq j$.
We may choose [in an arbitrary way] $z_i^{(k)}\in B(\tilde z_i^{(k)},\delta)\cap[\delta(\mathbb{Z}\times\mathbb{Z})]$. Since $\delta\sqrt{h_{\rm ex}}\to0$, we still have [up to changing the value of $C$] $|p_k- z^{(k)}_i|\leq C h_{\rm ex}^{-1/2}$ and if $D_k\geq2$ then $|z^{(k)}_i-z^{(k)}_j|\geq h_{\rm ex}^{-1/2}/C$ for $i\neq j$.
For $i\in\{1,...,D_k\}$ we let $x_i^{(k)}:=z_i^{(k)}+\lambda\delta x_0$ where $x_0\in\omega$ is an arbitrary point of minimum of $W^{\rm micro}$ [defined in \eqref{DefRenMicroEn3}].\\ {\bf Step 2. Construction of the test function}
We construct test functions in subdomains of $\Omega$ and then glue them together. \begin{itemize}
\item We let $w^{\rm macro}_{h_{\rm ex}}\in H^1(\Omega_{h_{\rm ex}^{-1}}({\bf z}),\mathbb{S}^1)$ be a minimizer of $I_{h_{\rm ex}^{-1}}^{\rm Dir}{\bf (z,d)}$ [defined in \eqref{MinPropDir}] with ${\bf d}=(1,...,1)\in\mathbb{Z}^d$ and ${\bf z}\in(\Omega^d)^*$ is a $d$-tuple s.t. $\{z_1,...,z_d\}=\{z_i^{(k)}\,|\,k\in\{1,...,N_0\} \text{ s.t. }D_k\geq1\text{ and }i\in\{1,...,D_k\}\}$. \item For $k\in\{1,...,N_0\}$ s.t. $D_k\geq1$ and $i\in\{1,...,D_k\}$, we let $w_{k,i}^{\rm micro}\in H^1[B(z_i^{(k)},h_{\rm ex}^{-1})\setminus\overline{B(x_i^{(k)},\lambda\delta^2)},\mathbb{S}^1]$ be a minimizer of the right hand side of \eqref{MicroRenoExpressionNonRescalDir} with $z_\varepsilon=z_i^{(k)}$, $x_\varepsilon=x_i^{(k)}$, $R=h_{\rm ex}^{-1}$ and $r=\lambda\delta^2$ [from \eqref{HypLambdaDeltaConstrFoncTest} we have $R/r\to\infty$].
We let also $u_{k,i}\in H^1[B(x_i^{(k)},\lambda\delta^2),\mathbb{C}]$ be a minimizer of
\[
u\mapsto\dfrac{1}{2}\int_{B(x_i^{(k)},\lambda\delta^2)}|\nabla u|^2+\dfrac{1}{2\varepsilon^2}(1-|u|^2)^2 \] with the Dirichlet boundary condition $u(x_i^{(k)}+\lambda\delta^2\e^{\imath\theta})=\e^{\imath\theta}$. \end{itemize} By considering well chosen constants ${\rm Cte}_{k,i}^{(1)}$, ${\rm Cte}_{k,i}^{(2)}$ and ${\rm Cte}_{k}$, we may glue the above test functions and we define $v\in H^1(\Omega,\mathbb{C})$ : \[ v= \begin{cases} w^{\rm macro}_{h_{\rm ex}}&\text{in }\Omega_{h_{\rm ex}^{-1}}(\bf z) \\ {\rm Cte}_{k}&\text{in } B(z_i^{(k)},h_{\rm ex}^{-1})\text{ if }D_k=0 \\
{\rm Cte}_{k,i}^{(1)}w_{k,i}^{\rm micro}&\text{in }B(z_i^{(k)},h_{\rm ex}^{-1})\setminus\overline{B(x_i^{(k)},\lambda\delta^2)}\,\left|\begin{array}{c}k\in\{1,...,N_0\}\text{ s.t. }D_k\geq1\\\text{and }i\in\{1,...,D_k\}\end{array}\right. \\
{\rm Cte}_{k,i}^{(2)}u_{k,i}&\text{in }B(x_i^{(k)},\lambda\delta^2)\left|\begin{array}{c}k\in\{1,...,N_0\}\text{ s.t. }D_k\geq1\\\text{and }i\in\{1,...,D_k\}\end{array}\right. \end{cases}. \] {\bf Step 3. The energy of the test function}
We first note that the configuration ${\bf (z,d)}$ is s.t. $\hbar({\bf z})>\dfrac{1}{2}{\rm dist}(\Lambda,\partial\Omega)$ and, for $i\neq j$, $\dfrac{h_{\rm ex}^{-1}}{|z_i-z_j|}\to0$; we may thus apply Propositions \ref{MinimalMapHomo}, \ref{Prop.EnergieRenDef} and \ref{Prop.ConditionDirEnergieRen}. We may also use Proposition \ref{Prop.RenEnergieCluster}. From these propositions we get \begin{eqnarray}\nonumber
&&\dfrac{1}{2}\int_{\Omega_{h_{\rm ex}^{-1}}(\bf z)}|\nabla v|^2
\\\label{BorneSupFoncTestShaarp1}&=&\pi d\ln h_{\rm ex}+W_{N_0}^{\rm macro}{({\bf p},{\bf D})}-\pi\sum_{\substack{k=1\\\text{s.t. }D_k\geq2}}^{N_0}\sum_{i\neq j}\ln|z^{(k)}_i-z^{(k)}_j|+o(1). \end{eqnarray} For $k\in\{1,...,N_0\}$ s.t. $D_k\geq1$ and $i\in\{1,...,D_k\}$, with \eqref{MicroRenoExpressionNonRescalDir}, \eqref{DefRenMicroEn1} and \eqref{DefRenMicroEn2} we get: \begin{eqnarray}\nonumber
&&\dfrac{1}{2}\int_{B(z_i^{(k)},h_{\rm ex}^{-1})\setminus\overline{B(x_i^{(k)},\lambda\delta^2)}}\alpha|\nabla v|^2
\\\label{BorneSupFoncTestShaarp2}&=&\pi|\ln(\lambda\delta h_{\rm ex})|+b^2\pi|\ln(\delta)|+ W^{\rm micro}(x_0)+o(1). \end{eqnarray}
From Lemma IX.1 in \cite{BBH} and \eqref{EstLoinInterfaceU} [with $|\nabla v|\leq C\varepsilon^{-1}$], for $k\in\{1,...,N_0\}$ s.t. $D_k\geq1$ we have \begin{equation}\label{BorneSupFoncTestShaarp3}
\dfrac{1}{2}\int_{{B(x_i^{(k)},\lambda\delta^2)}}\alpha|\nabla v|^2+\dfrac{\alpha^2}{2\varepsilon^2}(1-|v|^2)^2=b^2\pi\ln(b\lambda\delta^2/\varepsilon)+b^2\gamma+o(1) \end{equation} where $\gamma\in\mathbb{R}$ is a universal constant.
In conclusion, by combining \eqref{BorneSupFoncTestShaarp1}, \eqref{BorneSupFoncTestShaarp2} and \eqref{BorneSupFoncTestShaarp3} [note $\lambda\delta h_{\rm ex}\to0$]: \begin{eqnarray}\nonumber F(v)&\leq&
d\pi\left[b^2|\ln\varepsilon|+(1-b^2)|\ln(\lambda\delta)|\right]+d\left[W^{\rm micro}(x_0)+b^2\gamma+b^2\pi\ln b\right]+
\\\label{BorneSupFoncTestShaarpGlobal}&&\phantom{lglkhgklhjjkh}+W_{N_0}^{\rm macro}{({\bf p},{\bf D})}-\pi\sum_{\substack{k=1\\\text{s.t. }D_k\geq2}}^{N_0}\sum_{i\neq j}\ln|z^{(k)}_i-z^{(k)}_j|+o(1) . \end{eqnarray} {\bf Step 4. Definition of the magnetic potential and conclusion}
Let $A_{({\bf z},{\bf 1})}$ be given by Definition \ref{DefA_ad} with ${({\bf a},{\bf d})}=({\bf z},{\bf 1})$. It is clear that we have \[
-\pi\sum_{\substack{k=1\\\text{s.t. }D_k\geq2}}^{N_0}\sum_{i\neq j}\ln|z^{(k)}_i-z^{(k)}_j|\leq C|\ln\delta| \] where $C$ depends only on $d$ and $\Omega$.
Consequently, for $\varepsilon>0$ sufficiently small and $C_0>\pi d$ we have $F(v)\leq C_0|\ln\varepsilon|$. Therefore, with Remark \ref{Prop.BornéPourPotenteifjfjf}, the configuration $(v,A_{({\bf z},{\bf 1})})\in\mathscr{H}$ is s.t. $\mathcal{F}(v,A_{({\bf z},{\bf 1})})\leq\mathcal{F}(v,0)+o(1)\leq C_0|\ln\varepsilon|^2+\mathcal{H}^2(\Omega)h_{\rm ex}^2$.
Using Proposition \ref{Docmpen} and Lemma \ref{Rk.RegularityLondonModified} we get \begin{eqnarray*} \mathcal{F}(v,A_{({\bf z},{\bf 1})})&=&h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{i=1}^d\xi_0(z_i)+F(v)+\tilde{V}[\zeta_{({\bf z},{\bf 1})}]+o(1)
\end{eqnarray*} where $\zeta_{({\bf z},{\bf 1})}$ is the unique solution of \eqref{LondonEqModifie} with ${({\bf a},{\bf d})}={({\bf z},{\bf 1})}$.
We now use Assertion \ref{PropClusterI3} of Proposition \ref{PropClusterI} in order to get $\tilde{V}[\zeta_{({\bf z},{\bf 1})}]=\tilde{V}[\zeta_{({\bf p},{\bf D})}]+o(1)$ and then \begin{equation}\label{EstEnConstructSplitBorneSup} \mathcal{F}(v,A_{({\bf z},{\bf 1})})=h_{\rm ex}^2{\bf J_0}+2\pi h_{\rm ex}\sum_{i=1}^d\xi_0(z_i)+F(v)+\tilde{V}_{({\bf z},{\bf 1})}[\zeta_{({\bf p},{\bf D})}]+o(1). \end{equation}
We claim that, from the choice of the points $z_i^{(k)},\tilde z_i^{(k)}$ we have $\xi_0(z_i^{(k)})-\xi_0(\tilde z_i^{(k)})=\mathcal{O}(\delta/\sqrt{h_{\rm ex}})$. Thus with Proposition \ref{EnergieRenMeso} we have \begin{eqnarray*}
&&-\pi\sum_{\substack{k=1\\\text{s.t. }D_k\geq2}}^{N_0}\sum_{i\neq j}\ln|z^{(k)}_i-z^{(k)}_j|+2\pi h_{\rm ex}\sum_{\substack{k=1}}^{N_0}\sum_{i}\xi_0(z_i^{(k)})-2\pi d\,h_{\rm ex}\min_\Omega\xi_0
\\&=&\sum_{\substack{k=1\\\text{s.t. }D_k\geq1}}^{N_0}\left[-\pi\sum_{\substack{i,j\in\{1,...,D_k\}\\i\neq j}}\ln|\tilde z^{(k)}_i-\tilde z^{(k)}_j|+2\pi h_{\rm ex}\sum_{i=1}^{D_k}\left[\xi_0(\tilde z^{(k)}_i)-\min_\Omega\xi_0\right]\right]+o(1) \\&=&\sum_{\substack{k=1\\\text{s.t. }D_k\geq1}}^{N_0}\left[\dfrac{\pi}{2}(D_k^2-D_k)\ln\left(\dfrac{h_{\rm ex}}{D_k}\right)+C_{p_k,D_k}\right]+o(1).
\end{eqnarray*} We may now conclude: \begin{eqnarray*} \mathcal{F}(v,A_{({\bf z},{\bf 1})})&=&h_{\rm ex}^2{\bf J_0}+dM_\O\left[-h_{\rm ex}+H^0_{c_1} \right]+\dfrac{\pi}{2}\ln h_{\rm ex}\sum_{\substack{k=1\\\text{s.t. }D_k\geq1}}^{N_0}(D_k^2-D_k)+ \\&&\phantom{jdjdjdjdjdkdkdkdkd}+\overline{\W}_{d}+\dfrac{\pi}{2}\sum_{\substack{k=1\\\text{s.t. }D_k\geq1}}^{N_0}(D_k-D_k^2)\ln D_k+o(1). \end{eqnarray*} This estimate ends the proof of the proposition.
\end{proof} \section{Proof of Proposition \ref{Prop.EtaEllpProp}}\label{AppendixPruveEtaEllipt}
Let $h_{\rm ex}$ and $(v_\varepsilon,A_\varepsilon)$ be as in Proposition \ref{Prop.EtaEllpProp}. Note that we may assume that $A_\varepsilon=A_{v_\varepsilon}$ given by Lemma \ref{LemAuxConstructMagnPot} and then $\|A_\varepsilon\|_{L^\infty(\Omega)}=\mathcal{O}(h_{\rm ex})$. We drop the subscript $\varepsilon$. We first note that, by smoothness of $\Omega$, there is $t_0>0$, s.t. letting $\Omega_{t_0}:=\{x\in\mathbb{R}^2\,|\,{\rm dist}(x,\Omega)<t_0\}$, we may extend by reflection $v\in H^1(\Omega,\mathbb{C})$ into $u\in H^1(\Omega_{t_0},\mathbb{C})$ letting $u=v$ in $\Omega$ and $u=v\circ\mathcal{S}_\O$ in $\Omega_{t_0}\setminus\overline{\Omega}$ where \[ \begin{array}{cccc} \mathcal{S}_\O:&\Omega_{t_0}\setminus\overline{\Omega}&\to&\Omega\\&x&\mapsto&\Pi(x)-{\rm dist}(x,\partial\Omega)\nu_{\Pi(x)} \end{array}. \] Here $\Pi:\Omega_{t_0}\setminus\overline{\Omega}\to\partial\Omega$ is the orthogonal projection on $\partial\Omega$ and, for $\sigma\in\partial\Omega$, $\nu_\sigma$ is the outward unit normal at $\sigma$. \begin{lem}\label{EtaEllpProp}
Let $C_0\geq1$ and let $\{(v_\varepsilon,A_\varepsilon)\,|\,0<\varepsilon<1\}$ be a family in the Coulomb gauge of quasi-minimizers of $\mathcal{F}$ in $\mathscr{H}$ for an intensity of the applied field $h_{\rm ex}=h_{\rm ex}(\varepsilon)\geq0$ s.t. $\|\nabla |v|\|_{L^\infty(\Omega)}\leqC_0\varepsilon^{-1}$.
Under these hypotheses, for $\eta\in(0,1)$ there exist $\varepsilon_\eta,C_\eta>0$ [depending on $C_0$] s.t. for $0<\varepsilon<\varepsilon_\eta$, if $z\in\Omega$ is s.t. \[
b^2\int_{B(z,\sqrt{\varepsilon}/2)}|\nabla u|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2\leq \dfrac{C_\eta}{3}|\ln\varepsilon| \]
with $u=\begin{cases}v&\text{in }\Omega\\ v\circ\mathcal{S}_\O&\text{in }\Omega_{t_0}\setminus\overline{\Omega}\end{cases}$, then $|v(z)|>\eta$. \end{lem}
In order to prove Proposition \ref{Prop.EtaEllpProp} we need the following lemma.
\begin{lem}\label{lem.Doublerecouvrement} There exists $\varepsilon_\Omega>0$ depending only on $\Omega$ s.t. for $0<\varepsilon<\varepsilon_\Omega$, $z\in\Omega$ and $v\in H^1(\Omega,\mathbb{C})$, by defining $u$ as in Lemma \ref{EtaEllpProp}, the following inequality holds: \[
\int_{B(z,\sqrt{\varepsilon}/2)}|\nabla u|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2\leq 3\int_{B(z,\sqrt\varepsilon)\cap\Omega}|\nabla v|^2+\dfrac{b^2}{\varepsilon^2}(1-|v|^2)^2. \] \end{lem}
\begin{proof}[Proof of Lemma \ref{lem.Doublerecouvrement}]
In order to prove the lemma it suffices to check that by smoothness of $\Omega$ we have $\|\nabla(\mathcal{S}_\O^{-1})\|_{L^\infty(\Omega)},\|{\rm jac}\,(\mathcal{S}_\O^{-1})\|_{L^\infty(\Omega)}=1+o(1)$. We then immediately obtain \[
\int_{B(z,\sqrt{\varepsilon}/2)\setminus\Omega}|\nabla u|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2\leq[1+o(1)]\int_{\mathcal{S}_\O[B(z,\sqrt{\varepsilon}/2)\setminus\Omega]}|\nabla v|^2+\dfrac{b^2}{\varepsilon^2}(1-|v|^2)^2. \]
On the other hand, if $x\in B(z,\sqrt{\varepsilon}/2)\setminus\Omega$ then $|\mathcal{S}_\O(x)-z|\leq [1+o(1)]\sqrt\varepsilon/2\leq\sqrt\varepsilon$ for sufficiently small $\varepsilon>0$ [depending only on $\Omega$]. Then $\mathcal{S}_\O[B(z,\sqrt{\varepsilon}/2)\setminus\Omega]\subset B(z,\sqrt\varepsilon)\cap\Omega$. The lemma follows from the monotonicity of the integral. \end{proof}
By combining both lemmas we get Proposition \ref{Prop.EtaEllpProp}.
\begin{proof}[Proof of Lemma \ref{EtaEllpProp}]
We argue by contradiction and we assume the existence of $\eta\in(0,1)$, $\varepsilon=\varepsilon_n\downarrow0$ s.t. for all $n\geq1$ there are $(v,A)=(v_n,A_n)\in\mathscr{H}$, $z=z_n\in\Omega$ and $h_{\rm ex}=h_{\rm ex}^{(n)}\geq0$ s.t. $(v,A)$ is a quasi-minimizer of $\mathcal{F}$ in $\mathscr{H}$ satisfying: \begin{equation}\label{ContraTardRERTard}
\int_{B(z,\sqrt{\varepsilon}/2)}|\nabla u|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2\leq \dfrac{|\ln\varepsilon|}{n} \end{equation}
with $u=u_n=\begin{cases}v&\text{in }\Omega\\ v\circ\mathcal{S}_\O&\text{in }\Omega_{t_0}\setminus\overline{\Omega}\end{cases}$ and $|v(z)|\leq\eta$. Up to replacing $v$ by $\underline{v}$ we may assume $|v|\leq1$ in $\Omega$.
We are going to prove that \eqref{ContraTardRERTard} implies \begin{equation}\label{ContraIpadKlldl}
\displaystyle\dfrac{1}{\varepsilon^2} \int_{B(z,\varepsilon^{3/4})\cap\Omega}(1-|v|^2)^2=o(1). \end{equation}
On the other hand, $\|\nabla |v|\|_{L^\infty(\Omega)}=\mathcal{O}(\varepsilon^{-1})$ and then, from an argument in \cite{BBH} [Theorem III.3], we will get, for sufficiently large $n$, $|v(z)|>\eta$. Clearly this contradiction will end the proof.
Since for $n\geq1$ we have
$\displaystyle\int_{\varepsilon^{3/4}/2}^{\sqrt{\varepsilon}/2}\dfrac{{\rm d}\rho}{\rho}\,\rho\int_{\partial B(z,\rho)}|\nabla u|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2
\leq \dfrac{|\ln\varepsilon|}{n}$,
there exists $\displaystyle\rho_n\in(\varepsilon^{3/4},\sqrt{\varepsilon}/2)$ s.t. $\displaystyle\rho_n\int_{\partial B(z,\rho_n)}|\nabla u|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2\leq\dfrac{4}{n}$. Then we get: \begin{equation}\label{Eq.BorneEnCoordPolbis}
\rho_n\int_{\partial B(z,\rho_n)}|\partial_\tau {u}|^2+\dfrac{b^2}{\varepsilon^2}(1-|u|^2)^2\leq\dfrac{4}{n}. \end{equation} We switch to polar coordinates and denote $\tilde u(\theta):=u(z+\rho_n\e^{\imath\theta})$. Estimate \eqref{Eq.BorneEnCoordPolbis} becomes \begin{equation}\label{Eq.BorneEnCoordPol}
\int_0^{2\pi}|\partial_\theta \tilde{u}|^2+\dfrac{b^2\rho_n^2}{\varepsilon^2}(1-|\tilde u|^2)^2\leq\dfrac{4}{n}. \end{equation}
On the one hand, $|\partial_\theta |\tilde {u}||^2\leq|\partial_\theta \tilde {u}|^2$ and then $\displaystyle\int_0^{2\pi}|\partial_\theta|\tilde {u}||\leq\dfrac{2\sqrt{2\pi}}{\sqrt n}$. Consequently in $[0,2\pi]$ we get $(1-|\tilde {u}|^2)^2\geq\max_{[0,2\pi]}(1-|\tilde {u}|^2)^2-\dfrac{2\sqrt{2\pi}}{\sqrt n}$. From \eqref{Eq.BorneEnCoordPol} we deduce \[
\dfrac{4\varepsilon^2}{nb^2\rho_n^2}\geq\int_0^{2\pi}(1-|\tilde u|^2)^2\geq 2\pi\left[\max_{[0,2\pi]}(1-|\tilde {u}|^2)^2-\dfrac{2\sqrt{2\pi}}{\sqrt n}\right] \]
and thus for sufficiently large $n$ we get $0\leq\max_{[0,2\pi]}(1- |\tilde {u}|^2)^2\leq\dfrac{100}{\sqrt n}$.
For a further use we define \[ \begin{array}{cccc}{\chi}_n:&B(z,\rho_n)&\to&[0,1]
\\& z+\rho\e^{\imath\theta}&\mapsto&\left(|\tilde u(\theta)|-1\right)\dfrac{\rho}{\rho_n}+1 \end{array}. \] By direct calculations we have \begin{equation}\label{EstModulAlADemande}
\int_{B(z,\rho_n)}|\nabla{\chi}_n|^2+\dfrac{1}{2\varepsilon^2}(1-{\chi}_n^2)^2
= \mathcal{O}\left(\dfrac{1}{ n}\right). \end{equation}
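For the reader's convenience we sketch the computation behind \eqref{EstModulAlADemande}, detailing only the gradient term [the potential term is handled similarly]. Since $\partial_\rho{\chi}_n=(|\tilde u(\theta)|-1)/\rho_n$ and $\partial_\theta{\chi}_n=(\rho/\rho_n)\,\partial_\theta|\tilde u(\theta)|$, in polar coordinates we get \[
\int_{B(z,\rho_n)}|\nabla{\chi}_n|^2=\int_0^{2\pi}\!\!\int_0^{\rho_n}\dfrac{(|\tilde u(\theta)|-1)^2+|\partial_\theta|\tilde u(\theta)||^2}{\rho_n^2}\,\rho\,{\rm d}\rho\,{\rm d}\theta=\dfrac{1}{2}\int_0^{2\pi}(1-|\tilde u|)^2+|\partial_\theta|\tilde u||^2,
\] and both terms are $\mathcal{O}(1/n)$ by \eqref{Eq.BorneEnCoordPol}, using $(1-|\tilde u|)^2\leq(1-|\tilde u|^2)^2$, $|\partial_\theta|\tilde u||\leq|\partial_\theta\tilde u|$ and $\varepsilon^2/\rho_n^2\leq\sqrt\varepsilon$.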
On the other hand, for $n$ sufficiently large, $|{u}|^2\geq\dfrac{1}{2}$ in $\partial B(z,\rho_n)$. We thus may compute the degree of $u$ on $\partial B(z,\rho_n)$ and we find
$\left| {\rm deg}_{\partial B(z,\rho_n)}(u)\right|
= \mathcal{O}\left(\dfrac{1}{ n}\right)$ which implies, for sufficiently large $n$, $ {\rm deg}_{\partial B(z,\rho_n)}(u)=0$. Consequently, we may write $u=|u|\e^{\imath\varphi}$ with $\varphi=\varphi_n\in H^1(\partial B(z,\rho_n),\mathbb{R})$. Moreover, up to multiplying $u$ by a constant in $\mathbb{S}^1$, we may assume $\int_{\partial B(z,\rho_n)}\varphi=0$.
We then consider $\tilde\varphi:[0,2\pi]\to\mathbb{R}$ defined by $\tilde\varphi(\theta)=\varphi(z+\rho_n\e^{\imath\theta})$, and thus \[
\mathcal{O}\left(\dfrac{1}{ n}\right)=\rho_n\int_{\partial B(z,\rho_n)}|\nabla \varphi|^2\geq\int_0^{2\pi}|\partial_\theta\tilde\varphi|^2. \] Since $\displaystyle\int_0^{2\pi}\tilde\varphi=0$, this estimate implies $\displaystyle\int_0^{2\pi}\tilde\varphi^2=\mathcal{O}\left(\dfrac{1}{ n}\right)$.
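The last implication is the Poincaré–Wirtinger inequality for zero-average $2\pi$-periodic functions: \[
\int_0^{2\pi}\tilde\varphi^2\leq\int_0^{2\pi}|\partial_\theta\tilde\varphi|^2\quad\text{when }\int_0^{2\pi}\tilde\varphi=0,
\] which follows, e.g., by expanding $\tilde\varphi$ in Fourier series.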
Letting $\psi=\psi_n:B(z,{\rho_n})\to\mathbb{R}$, $z+\rho\e^{\imath\theta}\mapsto\dfrac{\rho}{\rho_n}\tilde\varphi(\theta)$, it is direct to check $\displaystyle\int_{B(z,\rho_n)}|\nabla \psi|^2=\mathcal{O}\left(\dfrac{1}{ n}\right)$.
We are now in position to end the proof by considering $V=V_n={\chi}_n\e^{\imath\psi}\in H^1(B(z,\rho_n),\mathbb{C})$ in order to have $V=v$ on $\partial B(z,\rho_n)\cap\Omega$,
\[
\dfrac{1}{2}\int_{\Omega\cap B(z,\rho_n)}|\nabla V|^2+\dfrac{1}{2\varepsilon^2}(1-|V|^2)^2=\mathcal{O}\left(\dfrac{1}{ n}\right),
\]
and [with $\|A\|_{L^\infty(\Omega)}=\mathcal{O}(h_{\rm ex})$]
\[
\left| \int_{\Omega\cap B(z,\rho_n)}\alpha(V\wedge\nabla V)\cdot A\right|\leq C\dfrac{h_{\rm ex}^{}\rho_n}{\sqrt n}=o(1).
\]
Since $V=v$ on $\partial B(z,\rho_n)\cap\Omega$ we have $w:=\begin{cases}v&\text{in }\Omega\setminus B(z,\rho_n)\\ V&\text{in }B(z,\rho_n)\cap\Omega\end{cases}\in H^1(\Omega,\mathbb{C})$. Considering the comparison configuration $(w,A)$, from the quasi-minimality of $(v,A)$ and the above estimates we get
\[
\int_{\Omega\cap B(z,\rho_n)}|\nabla v|^2+\dfrac{1}{2\varepsilon^2}(1-|v|^2)^2\leq b^{-4}\int_{\Omega\cap B(z,\rho_n)}|\nabla V|^2+\dfrac{1}{2\varepsilon^2}(1-|V|^2)^2+o(1)=o(1).
\] Since $\rho_n>\varepsilon^{3/4}$ we get \eqref{ContraIpadKlldl} and thus this estimate ends the proof.
\end{proof} \section{Proof of Proposition \ref{Prop.ConstrEpsMauvDisk}}\label{SectAppenPreuveConstructionPetitDisque} The proof of the proposition is an adaptation of the arguments presented in \cite{AB1} [Section V] and also used in \cite{S1} [Proposition 3.2]. It is also inspired by the bad disk construction in \cite{BBH}. Let $\mu$, $\lambda$, $\delta$, $(v,A)$ and $h_{\rm ex}$ be as in the proposition.
{\bf Step 1. A first covering of $ \{|v|\leq1/2\}$}\\
For $0<\varepsilon<\varepsilon_{1/2}$ [$\varepsilon_{1/2}>0$ is given by Proposition \ref{Prop.EtaEllpProp} with $\eta=1/2$] we consider a covering of $\Omega$ by disks $\{B(x^\varepsilon_1,4\sqrt\varepsilon),...,B(x^\varepsilon_{N_\varepsilon},4\sqrt\varepsilon)\}$ s.t., for $i\neq j$, $B(x^\varepsilon_i,\sqrt\varepsilon)\cap B(x^\varepsilon_j,\sqrt\varepsilon)=\emptyset$ and $x^\varepsilon_i\in\Omega$.
For the simplicity of the presentation we omit the dependence on $\varepsilon$.\\
We say that $B(x_i,4\sqrt\varepsilon)$ is a {\it bad disk} if $\tilde E_\varepsilon[v,B(x_i,8\sqrt\varepsilon)\cap\Omega]> C_{1/2}|\ln\varepsilon|$ where for a disk $B$ we denote \[
\tilde E_\varepsilon(v,B\cap\Omega):=\int_{B\cap\Omega}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2 \] and $C_{1/2}>0$ is given by Proposition \ref{Prop.EtaEllpProp} with $\eta=1/2$. Let \[
J'=J'_\varepsilon:=\{i\in\{1,...,N_\varepsilon\}\,|\, B(x_i,4\sqrt\varepsilon)\text{ is a bad disk}\}. \] We make two fundamental claims: \begin{enumerate} \item There exists $M_0\geq1$ [independent of $\varepsilon$] s.t. ${\rm Card}(J')\leq M_0$.
\item If $B(x_i,4\sqrt\varepsilon)$ is not a bad disk then $|v|\geq1/2$ in $B(x_i,4\sqrt\varepsilon)$. \end{enumerate} The first claim is a direct consequence of \eqref{CrucialBoundedkjqbsdfbn} and $B(x^\varepsilon_i,\sqrt\varepsilon)\cap B(x^\varepsilon_j,\sqrt\varepsilon)=\emptyset$ for $i\neq j$.
The second claim is given by Proposition \ref{Prop.EtaEllpProp}.
Then $\cup_{i\in J'} B(x_i,4\sqrt\varepsilon)$ is a covering of $\{|v|\leq1/2\}$ and ${\rm Card}(J')\leq M_0$.
Up to drop some disks, we may always assume that for $i\in J'$ we have $B(x_i,4\sqrt\varepsilon)\cap\{|v|\leq1/2\}\neq\emptyset$. Consequently using Corollary \ref{Cor.DegNonNul}, for $i\in J'$ and $0<\varepsilon<\min\{\varepsilon_0,\varepsilon_{1/2}\}$ [$\varepsilon_0$ given by Corollary \ref{Cor.DegNonNul}] we have
${\rm dist}(x_i,\Lambda)= \mathcal{O}(|\ln\varepsilon|^{-s_0})$.\\
If $|v|>1/2$ in $\Omega$ then there is nothing to prove. We then assume $J'\neq \emptyset$.\\
{\bf Step 2. Separation process}\\
We replace the above bad disks with disks having the same centers and radius $\varepsilon^\mu$. Let $\varepsilon_\mu^{(1)}>0$ be s.t. $\varepsilon_\mu^{(1)}\leq\min\{\varepsilon_0,\varepsilon_{1/2}\}$ and s.t., for $0<\varepsilon<\varepsilon_\mu^{(1)}$, we have $4\sqrt\varepsilon<\varepsilon^\mu$ and \begin{equation}\nonumber
\max_{i\in J'}\,{\rm dist}(B(x_i,\varepsilon^\mu),\Lambda)\leq \dfrac{1}{\ln|\ln\varepsilon|}.
\end{equation} In particular $\cup_{i\in J'} B(x_i,\varepsilon^\mu)$ is a covering of $\{|v|\leq1/2\}$.\\
The goal of this step is to get a covering of $\{|v|\leq1/2\}$ with disks $B(x_i,\varepsilon^s)$ where $i\in\tilde {J}=\tilde J_\varepsilon\subset J'$, $s=s_\varepsilon=2^{-K}\mu$, $K=K_\varepsilon\in\{0,...,M_0-1\}$ and s.t. for $i,j\in\tilde J$, $i\neq j$, we have \begin{equation}\label{ConditiondeSepartionMauvDisk}
|x_i-x_j|\geq\varepsilon^{s/2}. \end{equation}
If ${\rm Card}(J')=1$ or \eqref{ConditiondeSepartionMauvDisk} is satisfied with $s=\mu$ [i.e. $K=0$] then we let $\tilde{J}=J'$ and we obtain the desired result of this step. Otherwise, there are $i_0,j_0\in J'$ [with $i_0<j_0$] s.t. $|x_{i_0}-x_{j_0}|<\varepsilon^{\mu/2}$. In this case we let $J^{(1)}:=J'\setminus\{i_0\}$ and we note that ${\rm Card}(J^{(1)})={\rm Card}(J')-1$.
If ${\rm Card}(J^{(1)})=1$, or if ${\rm Card}(J^{(1)})>1$ and \eqref{ConditiondeSepartionMauvDisk} holds with $s=2^{-1}\mu$ [i.e. $K=1$] for all $i,j\in J^{(1)}$ [$i\neq j$], then the goal of this step is achieved with $\tilde J= J^{(1)}$ and $s=2^{-1}\mu$.
Otherwise, there exist $i_0,j_0\in J^{(1)}$ [with $i_0<j_0$] s.t. $|x_{i_0}-x_{j_0}|<\varepsilon^{s/2}$. We then let $J^{(2)}:=J^{(1)}\setminus\{i_0\}$ and thus ${\rm Card}(J^{(2)})={\rm Card}(J^{(1)})-1$.
By noting that ${\rm Card}(J')\leq M_0$, the above process stops after at most $M_0-1$ iterations. We thus get the existence of $K=K_\varepsilon\in\{0,...,M_0-1\}$ and $\emptyset\neq J^{(K)}=J^{(K)}_\varepsilon\subset J'$ s.t. ${\rm Card} (J^{(K)})=1$ or \eqref{ConditiondeSepartionMauvDisk} is satisfied with $s=s_\varepsilon=2^{-K}\mu$ for $i,j\in J^{(K)}$ [$i\neq j$].
We then denote $\tilde J:=J^{(K)}$, $s=2^{-K}\mu$ and we fix $0<\varepsilon_\mu^{(2)}\leq\varepsilon_\mu^{(1)}$ s.t. for $0<\varepsilon<\varepsilon_\mu^{(2)}$ we have \begin{equation}\nonumber
\max_{i\in \tilde J}\,{\rm dist}(B(x_i,\varepsilon^{s/4}),\Lambda)\leq \dfrac{1}{\ln|\ln\varepsilon|}<10^{-1}{\rm dist}(\Lambda,\partial\Omega). \end{equation} In particular $B(x_i,\varepsilon^{s/4})\subset\Omega$ for $i\in\tilde J$. \\
{\bf Step 3. Definition of $r$}
With Corollary 5.2 in \cite{bourgain2010morse}, for a.e. $t\in{\rm Image}(|v|)$ the set $V(t):=\{x\in\Omega\,|\,|v(x)|=t\}$ is a finite union of curves. Moreover if such a curve is included in $\Omega$ then it is a Jordan curve.
Following the same strategy as in \cite{AB1} [Lemma V.1], we have the existence of $t_\varepsilon\in[1-2|\ln\varepsilon|^{-2},1-|\ln\varepsilon|^{-2}]$ s.t. $V(t_\varepsilon)$ is a finite union of Jordan curves s.t. \begin{equation}\label{BorneValeurMesure}
\mathcal{H}^1[V(t_\varepsilon)]\leq C\varepsilon|\ln\varepsilon|^5\text{ where $C$ is independent of $\varepsilon$.} \end{equation}
We fix $0<\varepsilon_\mu^{(3)}\leq\varepsilon_\mu^{(2)}$ s.t. for $0<\varepsilon<\varepsilon_\mu^{(3)}$ we have $C\varepsilon|\ln\varepsilon|^5\leq10^{-2}\varepsilon^{s}$.
We denote for $i\in\tilde{J}$ \begin{equation}\label{DefAiEps}
\mathcal{A}_i=\mathcal{A}_i^\varepsilon:=\{\rho\in[\varepsilon^s,\varepsilon^{2s/3}]\,|\,|v|\geq t_\varepsilon\text{ on }\partial B(x_i,\rho)\}. \end{equation}
From the continuity of $|v|$, it is clear that $[\varepsilon^s,\varepsilon^{2s/3}]=\mathcal{A}_i\cup\mathcal{B}_i\cup\mathcal{C}_i$ where \[
\mathcal{B}_i=\mathcal{B}_i^\varepsilon:=\{\rho\in[\varepsilon^s,\varepsilon^{2s/3}]\,|\,\exists\,x\in\partial B(x_i,\rho)\text{ s.t. }|v(x)|=t_\varepsilon\} \] and \[
\mathcal{C}_i=\mathcal{C}_i^\varepsilon:=\{\rho\in[\varepsilon^s,\varepsilon^{2s/3}]\,|\,|v|< t_\varepsilon\text{ on }\partial B(x_i,\rho)\}. \] We first claim that, since the function $\rho\mapsto\rho$ is increasing, we have \begin{eqnarray*}
\mathcal{O}(\varepsilon^2|\ln\varepsilon|)&=&\int_{\mathcal{C}_i}{\rm d}\rho\int_{\partial B(x_i,\rho)}(1-|v|^2)^2 \\&\geq&2\pi(1-t_\varepsilon^2)^2\int_{\mathcal{C}_i}\rho{\rm d}\rho \\&\geq&2\pi(1-t_\varepsilon^2)^2\int_0^{\mathcal{H}^1(\mathcal{C}_i)}\rho{\rm d}\rho=\pi(1-t_\varepsilon^2)^2\mathcal{H}^1(\mathcal{C}_i)^2. \end{eqnarray*}
Then, since $1-t_\varepsilon^2\geq1-t_\varepsilon\geq|\ln\varepsilon|^{-2}$, we get $\mathcal{H}^1(\mathcal{C}_i)=\mathcal{O}(\varepsilon|\ln\varepsilon|^{5/2})$.
On the other hand one may prove that if $I$ is a connected component of $\mathcal{B}_i$, then there are $\rho_1,\rho_2$ s.t. $ I=[\rho_1,\rho_2]$. Since straight lines are geodesics, we obviously get \[ \mathcal{H}^1(I)=\rho_2-\rho_1\leq\mathcal{H}^1[V(t_\varepsilon)\cap\overline{B(x_i,\rho_2)}\setminus B(x_i,\rho_1)]. \] Moreover one may prove
that if $[\rho_1,\rho_2]$ and $[\rho_1',\rho_2']$ are distinct connected components of $\mathcal{B}_i$ and if $\Gamma$ is a connected component of $V(t_\varepsilon)$ s.t. $ \Gamma\cap\overline{B(x_i,\rho_2)}\setminus B(x_i,\rho_1)\neq \emptyset$ then $ \Gamma\cap\overline{B(x_i,\rho_2')}\setminus B(x_i,\rho_1')= \emptyset$ [here we used \eqref{BorneValeurMesure}]. One may conclude: $\mathcal{H}^1(\mathcal{B}_i)\leq\mathcal{H}^1(V(t_\varepsilon))\leq C\varepsilon|\ln\varepsilon|^5$.
Consequently \[
\mathcal{H}^1(\mathcal{A}_i)\geq\mathcal{H}^1([\varepsilon^s,\varepsilon^{2s/3}])-\mathcal{H}^1(\mathcal{B}_i)-\mathcal{H}^1(\mathcal{C}_i)\geq\varepsilon^{2s/3}-\varepsilon^s-\mathcal{H}^1(V(t_\varepsilon))-\mathcal{O}(\varepsilon|\ln\varepsilon|^{5/2}). \] Fix $0<\varepsilon_\mu^{(4)}\leq\varepsilon_\mu^{(3)}$ s.t. for $0<\varepsilon<\varepsilon_\mu^{(4)}$ we have $\mathcal{H}^1(\mathcal{A}_i)\geq\varepsilon^{2s/3}-\varepsilon^s-\sqrt\varepsilon$.
Define \begin{equation}\label{DefAEps} \mathcal{A}=\mathcal{A}_{\mu,\varepsilon}:=\cap_{i\in \tilde J}\mathcal{A}_i. \end{equation} It is clear that $\mathcal{H}^1(\mathcal{A})\geq\varepsilon^{2s/3}-\varepsilon^s-M_0\sqrt\varepsilon$.
Since $\rho\mapsto1/\rho$ is decreasing we have \begin{eqnarray*}
\mathcal{O}(|\ln\varepsilon|)
&\geq&\int_\mathcal{A}\dfrac{{\rm d}\rho}{\rho}\,\sum_{i\in\tilde J}\rho\int_{\partial B(x_i,\rho)}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2
\\&\geq&\int_{\varepsilon^{2s/3}-\mathcal{H}^1(\mathcal{A})}^{\varepsilon^{2s/3}}\dfrac{{\rm d}\rho}{\rho}\,\times\,\inf_{\rho\in\mathcal{A}}\sum_{i\in\tilde J}\rho\int_{\partial B(x_i,\rho)}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2. \end{eqnarray*} Consequently, there exist $r=r_{\mu,\varepsilon}\in\mathcal{A}$, $C_\mu\geq1$ [$C_\mu$ is independent of $\varepsilon$] and $0<\varepsilon_\mu^{(5)}\leq\varepsilon_\mu^{(4)}$ s.t. for $0<\varepsilon<\varepsilon_\mu^{(5)}$ we have
\begin{equation}\label{BorneBordConditiondeSepartionMauvDisk}
\sum_{i\in\tilde J}r\int_{\partial B(x_i,r)}|\nabla v|^2+\dfrac{1}{\varepsilon^2}(1-|v|^2)^2\leq C_\mu. \end{equation} We finally let $J_\mu:=\tilde J$; with \eqref{ConditiondeSepartionMauvDisk} and \eqref{BorneBordConditiondeSepartionMauvDisk} the result is proved.
\section{Proof of Proposition \ref{VeryNiceCor}}\label{Sec.PreuveVeryNiceCor}
The proof is an adaptation of the proof of (VI.21) in \cite{AB1}.
Let $\tilde\alpha=\tilde\alpha_n\in L^\infty(\Omega,[\beta^2;1])$, ${\bf (z,d)}={\bf (z,d)}^{(n)}\in(\O^N)^*\times\mathbb{Z}^N$ and $u=u_n\in H^1(\Omega,\mathbb{C})$ be as in the proposition.
We first claim that, up to considering $\underline{u}$ instead of $u$, we may assume $|u|\leq1$ in $\Omega$. Note also that if $\int_{\Omega_{\tilde r}}|\nabla u|^2\geq \beta^{-2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2$, then there is nothing to prove. We thus may assume \begin{equation}\nonumber
\int_{\Omega_{\tilde r}}|\nabla u|^2< \beta^{-2}\int_{\Omega_{\tilde r}}|\nabla w^\zd_\star|^2. \end{equation}
Let $w:={u}/{|u|}\in H^1(\Omega_{\tilde r},\mathbb{S}^1)$. From Lemma I.1 in \cite{BBH} we have $w\wedge\nabla w=\nabla^{\bot}\Phi^\zd_\star+\nabla H$ with $H=H_\varepsilon\in H^1(\Omega_{\tilde r},\mathbb{R})$ and \begin{equation}\label{NaturalHyp}
\int_{\Omega_{\tilde r}}|\nabla H|^2\leq (\beta^{-1}+1)^2\int_{\Omega_{\tilde r}}|\nabla \Phi^\zd_\star|^2. \end{equation} Let $\Phi_{\tilde r}$ be the unique solution of \eqref{ConjaHarmmojqsdhfhfh}.
We have $\displaystyle\int_{\Omega_{\tilde r}}\nabla H\cdot\nabla^\bot\Phi_{\tilde r}=0$. Then letting $\rho=|u|$: \begin{eqnarray*} \int_{\Omega_{\tilde r}}\tilde \alpha\rho^2\nabla H\cdot\nabla^\bot\Phi^\zd_\star=\int_{\Omega_{\tilde r}}(\tilde \alpha\rho^2-1)\nabla H\cdot\nabla^\bot\Phi^\zd_\star+\int_{\Omega_{\tilde r}}\nabla H\cdot(\nabla^\bot\Phi^\zd_\star-\nabla^\bot\Phi_{\tilde r}). \end{eqnarray*} But, from \eqref{ConjaHarmmojqsdhfhfh222}, there exists $C\geq1$ s.t. $\displaystyle
\left|\int_{\Omega_{\tilde r}}\nabla H\cdot(\nabla^\bot\Phi^\zd_\star-\nabla^\bot\Phi_{\tilde r})\right|\leq C\|\nabla H\|_{L^2(\Omega_{\tilde r})}\sqrt X$ where $X$ is defined in \eqref{DefX}.
Consequently, letting $\tilde C:= 4C^2/\beta^2$ we get \begin{eqnarray*}
2\int_{\Omega_{\tilde r}}\nabla H\cdot\nabla^\bot\Phi^\zd_\star+\int_{\Omega_{\tilde r}}\tilde \alpha\rho^2|\nabla H|^2&=&2\int_{\Omega_{\tilde r}}\nabla H\cdot(\nabla^\bot\Phi^\zd_\star-\nabla^\bot\Phi_{\tilde r})+\int_{\Omega_{\tilde r}}\tilde \alpha\rho^2|\nabla H|^2
\\&\geq& \|\nabla H\|_{L^2(\Omega_{\tilde r})}\left(\dfrac{\beta^2}{4}\|\nabla H\|_{L^2(\Omega_{\tilde r})}-2C\sqrt X\right)\\&\geq&- \tilde CX. \end{eqnarray*} Therefore \begin{equation}\nonumber
\int_{\Omega_{\tilde r}}\tilde \alpha\rho^2|\nabla w|^2
\geq\int_{\Omega_{\tilde r}}|\nabla \Phi^\zd_\star|^2-\int_{\Omega_{\tilde r}}(1-\tilde \alpha\rho^2)|\nabla\Phi^\zd_\star|^2-2\int_{\Omega_{\tilde r}}(1-\tilde \alpha\rho^2)|\nabla H||\nabla\Phi^\zd_\star|-\mathcal{O}(X). \end{equation} On the other hand, using \eqref{BorneGradWstar} and Corollary \ref{CorBorneGrossEneStar}, we get \begin{eqnarray*}
\left|\int_{\Omega_{\tilde r}}(1-\tilde \alpha\rho^2)|\nabla\Phi^\zd_\star|^2\right|&\leq&\left|\int_{\Omega_{\tilde r}}(1-\rho^2)|\nabla\Phi^\zd_\star|^2\right|+\left|\int_{\Omega_{\tilde r}}(1-\tilde \alpha)|\nabla\Phi^\zd_\star|^2\right|
\\&\leq&\|\nabla\Phi^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\|\nabla\Phi^\zd_\star\|_{L^2(\Omega_{\tilde r})}\left(K+L\right) \end{eqnarray*} and with \eqref{NaturalHyp}: \begin{eqnarray*}
\left|\int_{\Omega_{\tilde r}}(1-\tilde \alpha\rho^2)|\nabla H||\nabla\Phi^\zd_\star|\right|&\leq&\left|\int_{\Omega_{\tilde r}}(1-\rho^2)|\nabla H||\nabla\Phi^\zd_\star|\right|+\left|\int_{\Omega_{\tilde r}}(1-\tilde \alpha)|\nabla H||\nabla\Phi^\zd_\star|\right|
\\&\leq&\|\nabla\Phi^\zd_\star\|_{L^\infty(\Omega_{\tilde r})}\|\nabla\Phi^\zd_\star\|_{L^2(\Omega_{\tilde r})}\left(K+L\right)(2\beta^{-1}+1). \end{eqnarray*} The proposition is thus proved.
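For completeness, note that the inequality $\geq-\tilde CX$ above is elementary: it follows from minimizing the quadratic $t\mapsto \dfrac{\beta^2}{4}t^2-2C\sqrt X\,t$ over $t\geq0$, whose minimal value is \[
-\dfrac{(2C\sqrt X)^2}{4\cdot\beta^2/4}=-\dfrac{4C^2}{\beta^2}X=-\tilde CX,
\] which is consistent with the choice $\tilde C=4C^2/\beta^2$.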
\section{Proof of Proposition \ref{Prop.BonEcartement}}\label{Proof.Prop.BonEcartement}
We prove the first assertion and we assume ${\rm Card}(J_\mu)\geq2$. We let $\chi_1:=2h_{\rm ex}^{-1}\ln h_{\rm ex}$, $\chi_2:=2h_{\rm ex}^{-1/2}\ln h_{\rm ex}$ and $\Omega_{\chi_2}=\Omega\setminus\cup_{p\in\Lambda}\overline{B(p,\chi_2)}$.
In order to get sufficiently sharp estimates to prove the proposition, we decompose $\Omega_r$ into several subdomains. To this aim, we distinguish two cases for $p\in\Lambda$: either ${\rm Card}(J_p^{(y)})\geq 2$ or ${\rm Card}(J_p^{(y)})\in\{0,1\}$, where $J_p^{(y)}:=\{k\in J^{(y)}\,|\,y_k\in B(p,\chi_2)\}$ [the $y_k$'s are introduced in Definition \ref{DefiSousEnsJ}].
If $p\in\Lambda$ is s.t. ${\rm Card}(J_p^{(y)})\geq2$, then with Lemma \ref{Lem.Separation} [with $P=17$ and $\eta=\chi_1/2$], there are $\kappa_p=\kappa_{p,\varepsilon}\in\{17^0,...,17^{N_0-1}\}$ and $\tilde J_p^{(y)}\subset J_p^{(y)}$ s.t. \[
\bigcup_{k\in J_p^{(y)}} B( y_k,\chi_1/2)\subset\bigcup_{k\in \tilde J_p^{(y)}} B( y_k,\kappa_p\chi_1/2)\text{ and }| y_k- y_l|\geq8\kappa_p\chi_1\text{ for }k,l\in\tilde J_p^{(y)},\,k\neq l. \] We then let $\mathcal{D}_p:=B(p,\chi_2)\setminus\cup_{k\in \tilde J_p^{(y)}}\overline{B(y_k,\kappa_p\chi_1)}$ and, for $k\in \tilde J_p^{(y)}$, we write $\underline d_k:= {\rm deg}_{\partial B(y_k,\kappa_p\chi_1)}( v)$. We denote also $ D_p:=\sum_{k\in \tilde J_p^{(y)}}\underline d_k$.
If $p\in\Lambda$ is s.t. $J_p^{(y)}=\{k\}$, then we let $\mathcal{D}_p=B(p,\chi_2)\setminus\overline{B(y_k,\kappa\delta)}$ with $\kappa$ given by Definition \ref{DefiSousEnsJ}. We let also $ D_p:=\underline d_k:= {\rm deg}_{\partial B(y_k,\kappa\delta)}(v)$.
Recall that we denoted (see Definition \ref{DefiSousEnsJ}), for $k\in J^{(y)}$, $\tilde d_k:= {\rm deg}_{\partial B(y_k,\kappa\delta)}(v)$. Consequently, if $J_p^{(y)}=\{k\}$, then $ D_p=\underline d_k=\tilde d_k$.
If $J_p^{(y)}=\emptyset$ then we denote $D_p=0$ and $\mathcal{D}_p=B(p,\chi_2)$.
The heart of the proof consists in proving that $\underline d_k=1$ for all $k$. Indeed, we know that if $i\in J_\mu$ then $ {\rm deg}_{\partial B(z_i,r)}(v)=1$. Consequently $\underline d_k$ is the number of points $z_i$ contained in a disk of radius at least $\chi_1$.
We let: \begin{itemize} \item $\mathcal{R}:=\bigcup_{k\in J^{(y)}}B(y_k,\kappa\delta)\setminus\bigcup_{i\in J_\mu}\overline{B(z_i,r)}$, $\kappa$ given in Definition \ref{DefiSousEnsJ}. \item For $p\in\Lambda$ s.t. ${\rm Card}(J_p^{(y)})\geq2$ and for $k\in \tilde J_p^{(y)}$ we denote \[ \mathcal{Q}_{k,p}:=B(y_k,\kappa_p\chi_1)\setminus\bigcup_{\substack{l\in J^{(y)}\\y_l\in B(y_k,\kappa_p\chi_1)}}\overline{B(y_l,\kappa\delta)}. \] Moreover, by construction, we have [for sufficiently small $\varepsilon$] \begin{equation}\label{VoilaPourquoiYaDesSur2} \bigcup_{\substack{l\in J^{(y)}\\y_l\in B(y_k,\kappa_p\chi_1)}}{B(y_l,\kappa\delta)}\subset\bigcup_{\substack{l\in J^{(y)}\\y_l\in B(y_k,\kappa_p\chi_1)}}{B(y_l,\chi_1/2)}\subset B(y_k,\kappa_p\chi_1/2). \end{equation}
\end{itemize} Thus \begin{eqnarray}\nonumber
\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2&\geq&\dfrac{1}{2}\int_{\mathcal{R}}\alpha|\nabla v|^2+
\sum_{\substack{p\in\Lambda}}\dfrac{1}{2}\int_{\mathcal{D}_{p}}\alpha|\nabla v|^2+
\\\label{EstVendre0}&&+\sum_{\substack{p\in\Lambda\\{\rm Card}(J_p^{(y)})\geq2}}\sum_{k\in\tilde J_p^{(y)}}\dfrac{1}{2}\int_{\mathcal{Q}_{k,p}}\alpha|\nabla v|^2+\dfrac{1}{2}\int_{\Omega_{\chi_2}}\alpha|\nabla v|^2. \end{eqnarray} From \eqref{OnCompteRelFin1} and \eqref{OnCompteRelFin2} we have \begin{equation}\label{EstVendre1}
\dfrac{1}{2}\int_{\mathcal{R}}\alpha|\nabla v|^2\geq d\pi\left[b^2|\ln r|+(1-b^2)|\ln\lambda|-b^2|\ln\delta|\right]+\mathcal{O}(1). \end{equation} If $J_p^{(y)}=\{k\}$, then with Corollary \ref{Cor.BorneInfProcheIncl}.\ref{Cor.BorneInfProcheIncl1} we get \begin{equation}\label{EstVendre2}
\dfrac{1}{2}\int_{\mathcal{D}_{p}}\alpha|\nabla v|^2\geq\pi\underline{d}_k^2\ln\left(\dfrac{\chi_2}{\delta}\right)+\mathcal{O}(1). \end{equation} And if ${\rm Card}(J_p^{(y)})\geq2$, still with Corollary \ref{Cor.BorneInfProcheIncl}.\ref{Cor.BorneInfProcheIncl1}: \begin{equation}\label{EstVendre4}
\dfrac{1}{2}\int_{\mathcal{D}_p}\alpha|\nabla v|^2\geq
\pi\sum_{k\in \tilde J_p^{(y)}}\underline d_k^2\ln \left(\dfrac{\chi_2}{\chi_1}\right)+\mathcal{O}(1). \end{equation}
We continue by dealing with the case ${\rm Card}(J_p^{(y)})\geq2$. From Corollary \ref{Cor.BorneInfProcheIncl}.\ref{Cor.BorneInfProcheIncl1} applied in $\mathcal{Q}_{k,p}$ for $k\in \tilde J_p^{(y)}$ [with \eqref{VoilaPourquoiYaDesSur2}] we get \begin{equation}\label{EstVendre3}
\sum_{k\in\tilde J_p^{(y)}}\dfrac{1}{2}\int_{\mathcal{Q}_{k,p}}\alpha|\nabla v|^2 \geq\pi\sum_{k\in \tilde J_p^{(y)}}\sum_{\substack{l\in J^{(y)}\\y_l\in B(y_k,\kappa_p\chi_1)}}\tilde d_l^2\ln\left(\dfrac{\chi_1}{\delta}\right)+\mathcal{O}(1). \end{equation} In order to end the proof, using Propositions \ref{MinimalMapHomo} $\&$ \ref{Prop.EnergieRenDef} $\&$ \ref{VeryNiceCor}, we get \begin{equation}\label{EstVendre5}
\dfrac{1}{2}\int_{\Omega_{\chi_2}}\alpha|\nabla v|^2\geq\pi\sum_{p\in\Lambda} D_p^2|\ln\chi_2|+\mathcal{O}(1). \end{equation} We let \[ \Delta:=\sum_{\substack{p\in\Lambda\text{ s.t.}\\{\rm Card} (J_p^{(y)})\geq2}}\sum_{k\in \tilde J_p^{(y)}}\underline d_k^2+\sum_{\substack{p\in\Lambda\text{ s.t.}\\ J_p^{(y)}=\{k\}}}\underline{d}_k^2\text{ and }\tilde\Delta:=\sum_{k\in J^{(y)}}\tilde d_k^2. \]
From \eqref{EstVendre0}, \eqref{EstVendre1}, \eqref{EstVendre2}, \eqref{EstVendre4}, \eqref{EstVendre3} and \eqref{EstVendre5} we get \begin{eqnarray}\nonumber
&&\dfrac{1}{2}\int_{\Omega_r}\alpha|\nabla v|^2
\\\nonumber&&\geq \mathcal{O}(1)+d\pi\left[b^2|\ln r|+(1-b^2)|\ln\lambda|-b^2|\ln\delta|\right]+\pi\sum_{\substack{p\in\Lambda\text{ s.t.}\\ J_p^{(y)}=\{k\}}}\underline{d}_k^2\ln\left(\dfrac{\chi_2}{\delta}\right)+
\\\nonumber&&+\pi\sum_{\substack{p\in\Lambda\\{\rm Card} (J_p^{(y)})\geq2}}\left[\sum_{k\in \tilde J_p^{(y)}}\underline d_k^2\ln \left(\dfrac{\chi_2}{\chi_1}\right)+\sum_{\substack{l\in J^{(y)}\\y_l\in B(p,\chi_2+\lambda\delta)}}\tilde d_l^2\ln\left(\dfrac{\chi_1}{\delta}\right)\right] +\pi\sum_{p\in\Lambda} D_p^2|\ln\chi_2|
\\\nonumber&&\geq d\pi\left[b^2|\ln r|+(1-b^2)|\ln(\lambda\delta)|\right]+\pi|\ln\chi_2|\left(\sum_{p\in\Lambda} D_p^2-\Delta\right)+\pi|\ln\delta|(\tilde\Delta-d)+
\\\nonumber&&+\pi|\ln\chi_1|\sum_{\substack{p\in\Lambda\\{\rm Card} (J_p^{(y)})\geq2}}\left[\sum_{k\in \tilde J_p^{(y)}}\underline d_k^2-\sum_{\substack{l\in J^{(y)}\\y_l\in B(p,\chi_2+\lambda\delta)}}\tilde d_l^2\right]+\mathcal{O}(1).
\end{eqnarray}
Since $\underline d_k,\tilde d_l\geq1$ for all $k,l$, from Lemma \ref{LemSommeDegCarréDec}.\ref{LemSommeDegCarréDec1} we have $\sum_{p\in\Lambda} D_p^2\geq \Delta\geq\tilde\Delta\geq d$ and moreover \[ \Delta=d\Leftrightarrow (\text{$\underline d_k=1$ for all $k$}) \] and \[ \tilde\Delta=d\Leftrightarrow (\text{$\tilde d_l=1$ for all $l$}). \] On the other hand since for $p\in\Lambda$ s.t. $J_p^{(y)}=\{k\}$ we have $\underline d_k=\tilde d_k$, we get \[ \Delta-\tilde\Delta=\sum_{\substack{p\in\Lambda\\{\rm Card} (J_p^{(y)})\geq2}}\left[\sum_{k\in \tilde J_p^{(y)}}\underline d_k^2-\sum_{\substack{l\in J^{(y)}\\y_l\in B(p,\chi_2+\lambda\delta)}}\tilde d_l^2\right]. \]
Then \eqref{BrneSup-ApplicDirEn} gives \[
\dfrac{\mathscr{L}_1(d)}{\pi}\ln h_{\rm ex}\geq \left(\sum_{p\in\Lambda} D_p^2-\Delta\right)|\ln\chi_2|+(\tilde\Delta-d)|\ln\delta|+(\Delta-\tilde\Delta)|\ln\chi_1|+\mathcal{O}(1). \]
Since $|\ln\chi_1|=\ln({h_{\rm ex}})+\mathcal{O}[\ln(\ln h_{\rm ex})]$ and $|\ln\chi_2|=\ln\sqrt{h_{\rm ex}}+\mathcal{O}[\ln(\ln h_{\rm ex})]$ we obtain
\begin{eqnarray}\nonumber &&\left(\dfrac{\mathscr{L}_1(d)}{\pi}+\dfrac{d-\sum_{p\in\Lambda} D_p^2}{2}\right)\ln{h_{\rm ex}}
\\\label{topContra}&\geq&(\Delta-\tilde\Delta)\ln\sqrt{h_{\rm ex}}+(\tilde\Delta-d)|\ln(\delta\sqrt{h_{\rm ex}})|+\mathcal{O}[\ln(\ln h_{\rm ex})]. \end{eqnarray}
From Lemma \ref{LemSommeDegCarréDec}.\ref{LemSommeDegCarréDec2} and the definition of $\mathscr{L}_1(d)$ [see Lemma \ref{LemLaisseLectTrucSimple}], we have \begin{equation}\label{topBissosososo} \dfrac{\mathscr{L}_1(d)}{\pi}+\dfrac{d-\sum_{p\in\Lambda} D_p^2}{2}\leq0. \end{equation}
Using \eqref{topBissosososo} in \eqref{topContra}, together with \eqref{PutaindHypTech} and the facts $\tilde\Delta-d\geq0$ and $\Delta-\tilde\Delta\geq0$, we get $\tilde\Delta-d=\Delta-\tilde\Delta=0$ and then $\Delta=d$, {\it i.e.} $\underline d_k=1$ for all $k$.
On the other hand, with the help of \eqref{topContra} we may write \[ 0\geq\left(\dfrac{\mathscr{L}_1(d)}{\pi}+\dfrac{d-\sum_{p\in\Lambda} D_p^2}{2}\right)\ln{h_{\rm ex}}\geq\mathcal{O}[\ln(\ln h_{\rm ex})]. \]
We may thus deduce $\dfrac{\mathscr{L}_1(d)}{\pi}+\dfrac{d-\sum_{p\in\Lambda} D_p^2}{2}=0$ and then, with Lemma \ref{LemSommeDegCarréDec}.\ref{LemSommeDegCarréDec2}, for $p\in\Lambda$ we have $D_p\in\{\lfloor d/N_0\rfloor;\lceil d/N_0\rceil\}$.
\end{document}
Improved user similarity computation for finding friends in your location
Georgios Tsakalakis & Polychronis Koutsakis (ORCID: 0000-0002-4168-0888)
Recommender systems are most often used to predict possible ratings that a user would assign to items, in order to find and propose items of possible interest to each user. In our work, we are interested in a system that will analyze user preferences in order to find and connect people with common interests that happen to be in the same geographical area, i.e., a "friend" recommendation system. We present and propose an algorithm, Egosimilar+, which is shown to achieve superior performance against a number of well-known similarity computation methods from the literature. The algorithm adapts ideas and techniques from the recommender systems literature and the skyline queries literature and combines them with our own ideas on the importance and utilization of item popularity.
The diversity of social networks makes the problem of correctly estimating user preferences essential for personalized applications [1]. Most recommender systems suggest items of possible interest to their users by employing collaborative filtering to predict the attractiveness of an item for a specific user, based on the user's previous rating and the ratings of "similar" users. In this work, rather than focusing on possible items of interest for a user, we are interested in designing an algorithm that will utilize user ratings in order to recommend one user to another as a possible friend.
This paper continues our recent work [2], where we presented the architectural design, the functional requirements and the user interface of eMatch [3], an Android application which was inspired by the idea of finding people with common interests in the same geographical area. Close friendship is a measure of trust between individuals [4] and friends have the tendency to share common interests and activities, as has been shown in multiple studies in the literature starting with important work on personality similarities and friendship dating in the 1970s [5, 6] and continuing until today [7]. In terms of social networks, people selectively establish social links with those who are similar to them, and their attitudes, beliefs and behavioral propensities are affected by their social ties [8, 9]. In eMatch, in order to compare people's interests, users rate as many as nine interest categories: {Movies, Music, Books, Games, Sports, Science, Shopping, Food, Travel}, while they can add and rate items to each one of them. For example a user could rate the category "Sports" with "7" on a scale of 1 to 10, and add to this category the item "football" with rating "9". Based on this type of rating, the application's algorithm computes users' matching in order to suggest potential friends.
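As a concrete illustration, such a profile could be represented as a nested mapping from categories to ratings and rated items. The names and values below are example data, not taken from the application:

```python
# Illustrative eMatch-style profile: each rated interest category holds
# a 1-10 rating plus optional rated items. Example data only.
profile = {
    "Sports": {"rating": 7, "items": {"football": 9}},
    "Music": {"rating": 9, "items": {"jazz": 8, "rock": 6}},
}

def active_categories(p):
    """Return the categories the user has actually rated."""
    return sorted(p)
```

A user may leave any of the nine categories unrated, in which case it simply does not appear in the mapping.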
The information location is used in eMatch only for practical reasons, i.e., in order to locate potential friends in the same area and not for tracking on the map and revealing the user's location as other applications do. The goal of eMatch is to facilitate potential friends to meet and introduce themselves to each other only if they so wish. The user's location is considered private and sensitive information and is treated that way. The only information that is public is the matching percentage for all pairs of "Visible" users inside the geographical area. In this way the individual's privacy is preserved. More information can be found in [2].
The work in that paper introduced EgoSimilar, an algorithm which computes the similarity between users and is implemented in eMatch. Based on a dataset of 57 users, EgoSimilar was found to outperform two of the most well-known similarity measures, the Pearson Correlation and the Cosine Similarity, in regard to the most significant metrics used in our study. Unlike other approaches in the literature, EgoSimilar takes into account the popularity of the items that have been rated in its computations.
The main contributions of the present work are as follows. We collected a much larger number of completed questionnaires (286 in total) from users of eMatch and evaluated EgoSimilar again, in order to study whether the conclusions of the work in [2] were confirmed. After confirming the excellence of EgoSimilar again in comparison to the Pearson Correlation and Cosine Similarity, however, we added into our new study several similarity measures, one of which was found to outperform EgoSimilar. For this reason, we substantially changed EgoSimilar by adapting ideas and techniques from the recommender systems literature and the skyline queries literature. The new algorithm, Egosimilar+, presented in this paper for the first time, is compared against several of the most well-known similarity computation methods from the literature and is shown to outperform all of them in regard to being able to identify existing friendships.
The rest of the paper is structured as follows. In "Related work" section we discuss related work in the field. "EgoSimilar" section briefly presents the original EgoSimilar algorithm. In "Evaluation of EgoSimilar" section we evaluate EgoSimilar versus other similarity computation methods. "EgoSimilar+" section presents EgoSimilar+ and discusses the ideas and the motivation behind the new algorithm. "Evaluation of EgoSimilar+" section presents the results with the use of EgoSimilar+. Finally, "Conclusions and future work" section presents the conclusions of our study and the next steps in our work.
The Youhoo application [10] is the closest to eMatch among all current applications in iOS and Android that are related to finding friends in an area near the user. Its goal is to create circles of people with common interests in an area. However, Youhoo profiles are created from Facebook, therefore users who do not use Facebook are excluded from using the application, and users who wish to create a Youhoo profile that differs from their Facebook profile cannot do so. Additionally, the circles of people with common interests created by the application are quite generic or one-dimensional, e.g., students in the same university, people working in the same field, fans of a specific singer. On the contrary, eMatch computes the match between users based on the whole profile that the users wish to share through the application, and of course allows users to create a profile that is independent from any other application.
Other proposals from the literature for friend recommendation focus on link prediction utilizing node proximity [13], on recommending which Twitter users to follow based on user-generated content which indicates profile similarity [14], on recommending friends according to the degree to which a friend satisfies the target user's unfulfilled informational need [15], and on selecting a community of users that can meet the specific requirements of an application [16]. The work in [16] differs from our work not only because of its different goal, but also because it computes a metric based on a binary characterization of users' interest in a specific item (interested/not interested) which does not include information on the degree of user interest for the item; it also does not consider common interest categories between users as our work does, therefore related interests are considered to be completely different.
The authors in [17] use ranking functions to propose a method that represents people's preferences in a metric space, where it is possible to define a kernel-based similarity function; they then use clustering to discover significant groups with homogeneous states. The authors point out the success of the Pearson Correlation and the cosine similarity in making comparisons between the rating vectors of different users, and they use cosine similarity in their work. As will be shown in our results, our proposed similarity computation approach outperforms both the Pearson Correlation and the Cosine Similarity. Also, the class separation technique proposed in [17], which utilizes Support Vector Machines, is computationally complex, and this leads the authors to avoid K-means clustering in order to contain the cost of combining K-means with their technique.
EgoSimilar
In this section we present our "matching" algorithm from [2], EgoSimilar, which computes the similarity between users based on their interests and preferences. We also briefly present all the other widely used similarity assessment methods that we compare to EgoSimilar. All approaches were implemented in eMatch in order to find potential friends based on user ratings. These algorithms run on the server side in order to keep the computational cost contained. Running them on the smartphone would be inefficient, as well as battery- and time-consuming, since it would constantly require data transfers via the mobile Internet and many local computations.
For the matching algorithm to run at the server, the mobile device must have Internet access and at least one location provider activated. The application should also store the user's geographical location periodically (e.g., every 10 min).
EgoSimilar takes the following rationale into account:
The matching is done in an "egocentric" way because each user should search friends based on his/her own criteria and interests. Thus, the matching percentage between two users that will appear on each user's screen will most likely be different. Hence, if for example user X has one active category of interest while user Y has five, the matching percentage (X, Y) will be based on that one category, while the matching percentage (Y, X) on all five, leading to different results showing on each user's screen.
More popular items (popular in the sense that they are rated positively or negatively by many users) should not affect matching results as much as less popular items do, if users "agree" on them. The reason is that even if users share, e.g., a favorable opinion on a very well-known band, book, movie, etc., this does not really give a substantial hint that their tastes match in general. A similar case regarding a relatively unknown band/book/movie gives a much stronger indication of common interests. This was also pointed out in [18], where it was explained that the presence of popular objects that meet the general interest of a broad spectrum of audience may introduce weak relationships between users and adversely influence the correct ranking of candidate objects. The work in [18], however, is different from ours, as it begins with the construction of a user similarity network from historical data, in order to calculate scores for candidate objects. Our work in this paper focuses on recommending people as potential friends, not items of interest, and no historical data is relevant due to the nature of our study.
The rating choices of users are on a scale from 1 to 10. Consequently the maximum rating difference will be 9 and the weight of one unit in rating difference will be 1/9 ≈ 0.11. This weight is included in the computation of the similarity between users.
The steps followed by eMatch in computing the matching between users are described below. The first three steps are followed regardless of the matching computation method, which is implemented in step 4.
Let X be the user who runs the application, therefore, the matching is done according to X's tastes.
Check if the user's location is stored. If not, inform the user, else go to the next step.
Find users that are in close geographical proximity with user X.
Find all the active interest categories of user X.
The matching in EgoSimilar is computed as follows: for each user Y found in step 2, calculate the
$$ \text{Matching}(X,Y) = \frac{1}{k_X}\sum_{c=1}^{k_X}\left[ w_1\left[1 - 0.11\,d_1(X,Y,c)\right] + \frac{w_2}{n_X^c}\sum_{i=1}^{n_X^c}\left[1 - 0.11\,d_2(X,Y,c,i)\right]\right] $$
where kX is the number of active categories of user X, kX ∊ [1, 9]; w1 is the weight attributed to the general rating of a category; w2 is the weight of the ratings of all individual items of a category. In our case, w1 should be smaller than w2, since we consider the "general" matching of users (e.g., both of them loving movies) to be of smaller importance, as their specific tastes in that category may differ significantly or even completely. The exact values of w1 and w2 are discussed in "Evaluation of EgoSimilar" section; \( n_X^c \) is the number of items user X has inserted in category c; d1(X,Y,c) is a function which computes the absolute difference in ratings between users X and Y for the cth activated category of user X. If user Y has deactivated the specific category, then we set (1 − 0.11·d1(X,Y,c)) = 0; d2(X,Y,c,i) is associated with the ith item inserted by user X in the cth activated category and denotes the distance of ratings between users X and Y for the specific item.
If user Y has not rated this item, then we set (1 − 0.11·d2(X,Y,c,i)) = 0,
Otherwise d2(X,Y,c,i) is calculated, taking into account the popularity of the specific item, as follows:
Compute d2(X,Y,c,i).
Let m be the number of users that have inserted this item, and n the number of users that have inserted items in the cth activated category of user X. Then, the popularity weight of the specific item is defined as: \( W_i^c(X) = m/n \). An item is assumed to be popular if \( W_i^c(X) > 0.5 \), which means that more than half of the users that "voted" for this category have inserted the specific item (with either a negative or positive rating).
d2(X,Y,c,i) is adapted with respect to the popularity of the item and the rationale explained above, as follows:
If (\( W_i^c(X) > 0.5 \) AND d2(X,Y,c,i) < 5), then
d2(X,Y,c,i) = d2(X,Y,c,i) + Wchange∙d2(X,Y,c,i)
else if (\( W_i^c(X) > 0.5 \) AND d2(X,Y,c,i) ≥ 5), then
d2(X,Y,c,i) = d2(X,Y,c,i)
else if (\( W_i^c(X) ≤ 0.5 \) AND d2(X,Y,c,i) < 5), then
d2(X,Y,c,i) = d2(X,Y,c,i) − Wchange∙d2(X,Y,c,i)
else if (\( W_i^c(X) ≤ 0.5 \) AND d2(X,Y,c,i) ≥ 5), then
d2(X,Y,c,i) = d2(X,Y,c,i) + Wchange∙d2(X,Y,c,i)
This states that when an item is popular and the ratings of users are close, then this item should not affect matching results as much as less popular items do. Therefore, the distance of the ratings between users X and Y must be increased in order to decrease their matching. This increase is implemented via the Wchange weight, the value of which is discussed in "Evaluation of EgoSimilar" section.
If, however, the item is not popular and the ratings of the users are close, then this item should affect the matching results more than popular items do. Accordingly, the distance of the ratings between users X and Y must be decreased in order to increase their matching. This is implemented once again via the Wchange weight.
Similarly, in the case where the item is not popular and the ratings of users are not close, we infer that this is an indication of users that do not have common interests. So, by increasing the distance of their ratings, their matching is decreased.
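The popularity adjustment and Eq. (1) can be sketched in Python as follows. This is a minimal illustration, not the eMatch implementation: the profile data structures (`{category: (category_rating, {item: (rating, popularity)})}`) and function names are our own assumptions. Missing categories and unrated items contribute 0, as specified above.

```python
def adjust_distance(d2, popularity_weight, w_change=0.3):
    """Adapt the item rating distance d2 based on item popularity W_i^c(X) = m/n."""
    if popularity_weight > 0.5 and d2 < 5:
        return d2 + w_change * d2   # popular item, close ratings: weaken its influence
    if popularity_weight <= 0.5 and d2 < 5:
        return d2 - w_change * d2   # unpopular item, close ratings: strengthen its influence
    if popularity_weight <= 0.5 and d2 >= 5:
        return d2 + w_change * d2   # unpopular item, distant ratings: decrease matching
    return d2                       # popular item, distant ratings: unchanged

def matching(x_profile, y_profile, w1=0.25, w2=0.75, w_change=0.3):
    """Egocentric matching of user Y from user X's perspective (Eq. (1) sketch)."""
    total = 0.0
    for c, (x_cat_rating, x_items) in x_profile.items():
        if c in y_profile:
            cat_term = w1 * (1 - 0.11 * abs(x_cat_rating - y_profile[c][0]))
        else:
            cat_term = 0.0  # Y deactivated the category
        item_sum = 0.0
        for item, (x_rating, popularity) in x_items.items():
            y_items = y_profile[c][1] if c in y_profile else {}
            if item in y_items:  # unrated items contribute 0
                d2 = adjust_distance(abs(x_rating - y_items[item][0]),
                                     popularity, w_change)
                item_sum += 1 - 0.11 * d2
        if x_items:
            total += cat_term + (w2 / len(x_items)) * item_sum
        else:
            total += cat_term
    return total / len(x_profile)  # average over X's k_X active categories
```

Because each user's own active categories drive the sum, `matching(x, y)` and `matching(y, x)` generally differ, reflecting the egocentric design described above.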
The complexity of the algorithm is: Ο(pqr), where p is the number of the users, q is the number of categories (in our case, nine) and r is the maximum number of items inserted in one of the categories.
Evaluation of EgoSimilar
For our preliminary evaluation we wanted to confirm whether the results presented in [2] would stand for a much larger dataset and to investigate whether EgoSimilar would also excel in comparison with several additional similarity computation measures.
We collected data from 286 users (in comparison to the 57 users in our previous work), ages 18–40. Of the 286 participants, 272 had at least one connection (i.e., were friends) with a person from our dataset in real life. The collected information consisted of the activation/deactivation of the 9 interest categories, the Ratings for all active categories and the Ratings for the individual items in all the active categories. The items rated in each category were either new insertions by the users or as many of the default items as the users wished to rate. The mean rating given by the users was 6.6 and the standard deviation 2.5. These statistics confirmed the tendency shown in [2], of users mainly rating items that they like instead of taking the time to also add several items that they dislike. The details of the dataset are presented in Table 1.
Table 1 Dataset
The reason we chose to collect data mainly from groups of friends, a choice which carries a bias in the dataset and the results, was that in this way it would be feasible to evaluate whether the similarity computation methods would be able to "discover", through higher matching values, existing friendships.
To compare the results we ran the K-means clustering algorithm, each time with a different similarity computation measure (EgoSimilar and five other measures that are presented later in this section). We derived results for a number of clusters K ranging from 5 to 20 in order to evaluate how (and if) the number of clusters influences the user matching. K-means is preferred in comparison to new efficient methods like the one proposed in [19], because we do not want to have predefined classes in our system; classes (clusters) need to change based on the users who find themselves in the same area. Also, contrary to several approaches where it is useful to have weighted information incorporated into similarity scores (e.g. [20]), in our system all users should have equal weights when computing their similarity.
The following metrics and parameters were used in our study (the abbreviations are also presented in Table 2 for ease of reading):
Table 2 Abbreviations
Average friends' placement (AFP). This is arguably the most important metric of all, in terms of evaluating the quality of a similarity metric, as it refers to the order in which "matching users" appear on the user's screen, in decreasing percentages. A user would obviously consider first the users with whom he/she has the highest matching, regardless of the actual matching percentage (unless the matching percentage is very low even for the "top matched" user, which would be discouraging). Most importantly, in this matching list we would expect existing friends to place "low", i.e., to appear among the top matching choices. Therefore, we can study whether our approach outperforms other similarity computation methods in placing existing friends higher on the list, as existing friends should have similar interests [5, 6]. The similarity computation method that performs better in finding actual friends would be expected to be able to outperform others in finding potential friends as well.
N1: the number of users in the cluster.
N2: the number of users in the cluster that have a network (i.e., they are connected with at least one other user, who may be in that cluster or in another one).
N3: the number of users in the cluster that are connected in reality as friends.
Average valid connections (AVC): for each user in a cluster we computed the percentage of their connections that are included in the specific cluster, and derived the average percentage.
Average matching (AM): This is the average matching percentage of all users of the specific cluster.
Average matching of connected users (AMC): This is the average matching percentage of all the connected users of the cluster.
Average matching of not connected users (AMnC): This is the average matching percentage of all the users of the cluster who are not connected.
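The average friends' placement (AFP) metric above can be sketched as follows; this is a minimal illustration with assumed data structures (a matching dictionary per user and a set of real-life friends per user), not the eMatch code.

```python
def average_friends_placement(matchings, friends):
    """Average rank (1 = top of the list) of existing friends in each user's
    matching list, sorted by decreasing matching percentage.

    matchings: {user: {other_user: matching_score}}
    friends:   {user: set of that user's real-life friends}
    """
    positions = []
    for u, scores in matchings.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        for f in friends.get(u, ()):
            if f in ranked:
                positions.append(ranked.index(f) + 1)
    return sum(positions) / len(positions) if positions else float('nan')
```

Dividing the result by the list length gives the percentage mark used later in the paper (e.g., position 83 of 285 is the ~29% mark).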
We have used several additional similarity measures in our study and implemented them in eMatch in order to compare them against EgoSimilar. These similarity measures include the Pearson Correlation and the Cosine Similarity [21], which were also used in [2] and were found to provide inferior results to EgoSimilar for the smaller dataset of 57 users. The other similarity measures that we used in the present work are:
The Jaccard Index [22], also known as the Jaccard similarity coefficient, which is a statistic used for comparing the similarity and diversity of two sample sets. The Jaccard coefficient measures similarity between finite sample sets and is defined as the size of the intersection divided by the size of the union of two sample sets, as depicted in Eq. (2) below
$$ J(A,B) = \frac{|A \cap B|}{|A \cup B|} $$
where A and B denote the two sample sets.
The π coefficient [23], which is calculated as:
$$ \pi = (p_o - p_e)/(1 - p_e) $$
where po is defined as the observed agreement between two raters who each classify items into a set of M mutually exclusive categories, and pe is defined as the expected agreement by chance.
The κ coefficient [24], which is similar to the π coefficient and is again defined by Eq. (3). The difference between the two coefficients lies in the way the expected agreement pe is computed. In the π coefficient, both annotators are assumed to classify items into a category via the same probability distribution, whereas the κ coefficient does not make this assumption (hence each annotator is assumed to have a different probability distribution). As explained in [25], when using the π coefficient any differences in the observed distributions of users' judgements are considered to be noise in the data. When using the κ coefficient these differences are considered to be related to the biases of the users.
By "agreement" in the case of the π and κ coefficients, and by "intersection of sets" in the case of the Jaccard index, we are referring to two users giving the same rating for a category or for an item within a category.
Finally, in order to examine the results of the above measures, users were separated into groups via the K-means clustering algorithm [26], using the matching percentages derived by each of the similarity computation approaches. The procedure will always terminate, but K-means does not necessarily find the optimal configuration. A disadvantage of K-means is its sensitivity to the random initialization of cluster centroids; generally initial centroids should be "far apart". We addressed this issue by using different centroids and computing average results over 10 independent runs. Later in the paper, in the evaluation of our new algorithm EgoSimilar+ in "Evaluation of EgoSimilar+" section, we focused on finding the appropriate number of clusters by utilizing silhouettes [27].
For space economy purposes and in order to focus on the most important contributions of this study, we will only present here a summary of the results of the new evaluation of EgoSimilar.
Our results were derived for the following sets of weights: (w1,w2) = (0.25, 0.75), (0.5, 0.5), (0.75, 0.25) and for wchange = 0.3, a value which was shown for both the larger dataset and the smaller one in [2] to provide the overall best results across all similarity computation methods. It did not provide the best results in all cases for EgoSimilar, which often had better results for values of 0.1 or 0.2, but for fairness and uniformity purposes we are showing the results for wchange = 0.3. We should emphasize again that we are interested in w1 < w2 as our work is focused on achieving a more specific (items-oriented) matching between users than a more generic (categories-based) one. However, we also experimented with the cases where w1 = w2 and w1 > w2 in order to study the behavior of the different similarity computation methods.
Our results showed that in regard to the comparison between EgoSimilar, the Pearson Correlation and the Cosine Similarity, there were no changes in the conclusions for this larger dataset when compared with the small dataset in our previous work. More specifically, EgoSimilar outperforms both methods in terms of distinguishing between already connected and not already connected users (i.e., already connected users have a higher matching percentage). In regard to the average friends' placement, EgoSimilar also continues to outperform the Cosine Similarity and the Pearson Correlation, as in [2], by placing existing friends "lower" (i.e., closer to the top) in the users' matching list. The reason that EgoSimilar excels is that both the Cosine Similarity and Pearson Correlation only examine the current ratings of each category/item, by each of the two users. EgoSimilar, however, tries to be more sophisticated by using weights to take advantage of the popularity of the rated items.
Tables 3 and 4 present the average results for all similarity computation methods for the number of clusters K taking all values between 5 and 20.
Table 3 Average matching difference between connected and not-connected users
Table 4 Average friends' placement
The results indicate that:
EgoSimilar outperforms all similarity computation methods in terms of distinguishing between already connected and not already connected users for all (w1, w2) weights.
EgoSimilar also outperforms all similarity computation methods in terms of the average friends' placement, which is the most important metric in our study, as explained above, for (w1,w2) = (0.5, 0.5) and (w1,w2) = (0.75, 0.25).
However, EgoSimilar is outperformed in terms of the average friends' placement by all similarity computation methods, except the Cosine Similarity and the Pearson Correlation, for (w1,w2) = (0.25, 0.75). This is an important negative result, given that our focus is on placing larger importance on individual items for making friends suggestions, therefore EgoSimilar should be able to find existing friend connections by placing them "lower" in each user's list.
All similarity computation methods place existing friends on average at around the 36–39% mark (positions 102 to 110 out of 285 users). This means that on average existing friends are placed close to the middle of each user's list, whereas we would expect them to place near the top. Once again, this is a negative result, which in this case applies to all similarity computation methods used in our study.
We should also note that we attempted to create groups of ratings (i.e., {1–2}, {3–5}, {6–8}, {9–10}) to avoid cases where users may like or dislike an item almost equally but a small difference in their ratings would cause the similarity computation to miss the common predilection. Therefore, we considered that users "agree" if they give ratings that belong to the same group, as defined above. We found, however, that this choice improved the results of the π and κ coefficients only slightly (by about 0.6% in the results of Table 3 and by about 3–4 positions in terms of the average friends' placement shown in Table 4). Hence, in the rest of the paper we kept the standard definition of "agreement" for the π and κ coefficients.
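The grouping experiment above amounts to a simple binning step before checking agreement; a minimal sketch (the string bin labels are our own):

```python
def rating_bin(r):
    """Map a 1-10 rating onto the groups {1-2}, {3-5}, {6-8}, {9-10}."""
    if r <= 2:
        return '1-2'
    if r <= 5:
        return '3-5'
    if r <= 8:
        return '6-8'
    return '9-10'

def binned_agreement(rx, ry):
    """Two users 'agree' on an item if their ratings fall in the same group."""
    return rating_bin(rx) == rating_bin(ry)
```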
EgoSimilar+
The results of our evaluation of EgoSimilar against all other similarity computation methods showed that the premise of EgoSimilar is promising but was not enough to help our proposed approach excel overall, and in particular in the cases which were of the most interest for our work on eMatch.
Therefore, we first focused on understanding the reasons why EgoSimilar is outperformed by the new similarity computation methods added to our study (Jaccard index, π coefficient, κ coefficient) for the case of (w1,w2) = (0.25, 0.75). All three similarity computation methods that outperformed EgoSimilar focus on computing the exact agreement between users, whereas EgoSimilar computes distance. Therefore, the results in this part of our work seem to indicate that even though exact agreement on items is rarer, it leads to better results in identifying existing (and hence also possible future) friendships, especially when the element of chance agreement is removed, as is the case for the π and κ coefficients. The improved results achieved by computing exact agreement can be, at least partially, attributed to the fact that some items are essentially categories in themselves, e.g., "football" (in the category Sports) or "pop" (in the category Music); such items can lead to exact agreement between users more often than a more specific football-related or pop-related item would.
The results in Table 4 show that the use of the κ coefficient achieves the best results among all other similarity computation methods and outperforms EgoSimilar for (w1,w2) = (0.25, 0.75). The distinctive feature of the κ and π coefficients in comparison to the Jaccard index is the removal of chance agreement from the observed agreement and the distinctive feature of the κ coefficient in comparison to the π coefficient is the "acceptance" of differences in user ratings as being related to the biases of the users, instead of noise.
Based on the above, we decided to create an improved version of EgoSimilar which we name EgoSimilar+. This new version incorporates the following differences with the original algorithm:
We added the calculation of biases into EgoSimilar+. Similarly to [28], a first-order approximation of the bias involved in a rating Rui is
$$ B_{ui} = \mu + B_u + B_i $$
where u represents the user and i the item. The bias involved in rating Rui is denoted by Bui and accounts for the user and item effects. The overall average rating is denoted by μ, while the parameters Bu and Bi indicate the observed deviations of user u and item i, respectively. An example from [28] of how biases work: suppose that we want a first-order estimate of user Joe's rating of the movie Titanic. Suppose that the average rating over all movies, μ, is 7.4/10, and that Titanic is a better than average movie that tends to be rated 1 point above the average. Joe, on the other hand, is a critical user, who tends to rate 0.6 points lower than the average. The estimate for Titanic's rating by Joe would then be (7.4 + 1 − 0.6) = 7.8/10.
We added biases in order to estimate the users' ratings for items that the user had not rated although the items belonged to the user's favorite categories (categories rated equally or higher than 7/10 by the user).
In the recent database literature, skyline query processing has received very significant attention [29,30,31]. Skyline queries find within a database the set of points that are not dominated by any other point. An n-dimensional point is not dominated by another point if it is not worse than that point in (n − 1) dimensions and is better in at least one dimension. We adapted the idea of choosing non-dominated objects into EgoSimilar+. More specifically, after adding biases as explained above, dividing users into clusters and calculating user matching, EgoSimilar+ creates, for each user, two sets of potential friends. Set A contains the non-dominated potential friends, shown in descending order of matching percentage, and Set B contains the dominated potential friends, again shown in descending order of matching percentage. In order to identify the non-dominated potential friends, we use Eq. (1) and calculate the user matching in each category over that category's items. A non-dominated potential friend of user X is one who does not have a smaller matching percentage with X than any other user in 8 interest categories and has a higher matching percentage than all other potential friends in at least one interest category.
It should be noted that a potential friend of user X in Set B may have a higher overall matching percentage than a potential friend of X in Set A. This can happen if the user in Set B has a high matching percentage with X in a specific category but is dominated in another category. Still, the fact that the users in Set A are not dominated leads us to place them "lower" (closer to the top) in user X's matching list.
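The two additions can be sketched in Python as follows. This is a minimal illustration under our own assumptions: biases are fitted as simple mean deviations from the overall average (no regularization), and each potential friend is represented by a per-category matching vector computed via Eq. (1).

```python
from collections import defaultdict

def fit_biases(ratings):
    """Fit the first-order biases of Eq. (4), B_ui = mu + B_u + B_i.

    ratings: {(user, item): rating}. Returns (mu, user_biases, item_biases),
    with each bias taken as the mean deviation from the overall average mu.
    """
    mu = sum(ratings.values()) / len(ratings)
    by_user, by_item = defaultdict(list), defaultdict(list)
    for (u, i), r in ratings.items():
        by_user[u].append(r)
        by_item[i].append(r)
    b_u = {u: sum(rs) / len(rs) - mu for u, rs in by_user.items()}
    b_i = {i: sum(rs) / len(rs) - mu for i, rs in by_item.items()}
    return mu, b_u, b_i

def estimate_rating(mu, b_u, b_i, user, item):
    """Estimate a missing rating as mu + B_u + B_i (unknown biases default to 0)."""
    return mu + b_u.get(user, 0.0) + b_i.get(item, 0.0)

def is_dominated(vec, others):
    """vec is dominated if some other vector is at least as good in every
    category and strictly better in at least one."""
    return any(all(o >= v for o, v in zip(other, vec)) and
               any(o > v for o, v in zip(other, vec))
               for other in others)

def split_skyline(match_vectors):
    """Split potential friends into Set A (non-dominated) and Set B (dominated)
    based on their per-category matching vectors with the target user."""
    set_a, set_b = [], []
    for user, vec in match_vectors.items():
        others = [v for w, v in match_vectors.items() if w != user]
        (set_b if is_dominated(vec, others) else set_a).append(user)
    return set_a, set_b  # each set is then sorted by overall matching percentage
```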
Evaluation of EgoSimilar+
As mentioned in "Evaluation of EgoSimilar" section, in order to find the appropriate number of clusters to use for K-means clustering in our dataset, we utilized silhouettes [27]. Silhouettes are a widely used graphical aid for the interpretation and validation of cluster analysis. A silhouette shows which objects lie well within their cluster and which ones are merely somewhere in between clusters. If we take any user i of a cluster A, we define a(i) as the average dissimilarity of i to all other objects of A, and b(i) as the average dissimilarity of i to all objects of the second-best cluster. The silhouette s(i) is then computed as:
$$ s(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))} $$
By "dissimilarity" in our study we are referring to the Euclidean distance between user vectors.
Equation (5) indicates that the best possible clustering (i.e., s(i) close to 1) is achieved when the "within" dissimilarity a(i) is much smaller than the smallest "between" dissimilarity b(i). In this case user i is well-clustered. When s(i) is close to zero, a(i) and b(i) are approximately equal and hence it is not clear to which of the two clusters user i should be assigned. When s(i) is close to − 1, the clustering is erroneous, as user i is closer to the second-best cluster and should have been assigned to it.
Silhouettes are especially useful because they can help identify cases where we have set k to be too low or too high; in both cases s(i) would be low, in the first due to a high a(i) and in the second due to a low b(i).
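Equation (5) can be computed directly from its definition; a minimal sketch using Euclidean distance over user vectors, as in our study (function signature is our own):

```python
import math

def silhouette(point, own_cluster, other_clusters, dist=math.dist):
    """s(i) = (b(i) - a(i)) / max(a(i), b(i)) for one object.

    point:          the user's vector.
    own_cluster:    the other members of the point's cluster.
    other_clusters: list of the remaining clusters; b(i) is the average
                    dissimilarity to the closest ('second-best') of them.
    """
    a = sum(dist(point, p) for p in own_cluster) / len(own_cluster)
    b = min(sum(dist(point, p) for p in c) / len(c) for c in other_clusters)
    return (b - a) / max(a, b)
```

Averaging s(i) over all users for each candidate K yields curves such as the one in Fig. 1.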
To find the appropriate K we studied the 286 users of our datasets and we clustered them with K ranging from 1 to 143, i.e., up to the case where we would have on average two users in each cluster.
Figure 1 shows the average silhouette for all objects (users) in the dataset, for different values of K (average over 10 independent runs for each value).
Average silhouette for EgoSimilar+
Our results are qualitatively similar to those in [32], where for a different problem it was again shown through silhouettes that increasing K up to a point raises the probability of a user being in the best possible cluster, while the general quality of the solution decreases when K grows too large. We derived the best silhouette when K is around 25, i.e., for an average of 11–12 users per cluster. The best K values when using all other similarity computation methods in eMatch were in the range [20, 23], and for each method we used its best K for the results that follow, in order to make a fair comparison.
In order to make a fair comparison between EgoSimilar+ and the other similarity computation methods, we initially implemented in them the same new ideas that we implemented in EgoSimilar+, which are presented in "EgoSimilar+" section. However, the first idea, adding biases, leads to smaller exact agreement between users, and this in turn led to worse results for the κ coefficient, π coefficient and Jaccard index. The addition of biases had a negligible effect on the Pearson Correlation and Cosine Similarity results. Therefore, for fairness reasons we implemented for all five similarity computation methods only the second idea, of finding and presenting first the non-dominated potential friends.
Figure 2 presents the matching difference between connected and not-connected users for the "best" version of each similarity computation method (best K, plus the addition of whichever new ideas improve the method's results). EgoSimilar+ is shown not only to excel once again, but also to significantly improve its results in comparison to EgoSimilar. In particular, for the case that is of the most interest to us, i.e., (w1,w2) = (0.25, 0.75), EgoSimilar+ shows a 34% improvement over EgoSimilar in distinguishing between already connected and not already connected users.
Average matching difference between connected and not-connected users
The actual matching percentages between users in each cluster vary for all similarity computation methods between 50 and 70%, with the exception of the Cosine Similarity metric which, when used in eMatch, shows an average matching percentage larger than 80% between the users in most clusters. However, the actual matching percentage is of little value. The only substantial effect that it might have, especially in the case of not connected users, is that a quantitatively higher percentage might be more intriguing for a user in order to decide to communicate with another user. What is truly substantial is the order in which "matching users" appear on the user's screen, in decreasing percentages (high to low), where, as it will be explained below, EgoSimilar+ clearly outperforms all similarity computation metrics.
Table 5 presents the average friends' placement results for EgoSimilar+ and the other similarity computation methods, again all of them in their "best" version. It is clear from the results presented in the Table that:
Table 5 Average friends' placement for the best version of all methods
EgoSimilar+ now outperforms all similarity computation methods in terms of the average friends' placement, which is the most important metric in our study, for all values of (w1,w2), including (w1,w2) = (0.25, 0.75) which is the most important case of our study, as explained earlier.
The improvement of EgoSimilar+ through the use of the two new ideas (adding biases, finding and promoting non-dominated potential friends) is very substantial, leading it to place existing friends on average at around the 29% mark (position 83/285). This improves our confidence in the quality of the friend recommendations that EgoSimilar+ can make. EgoSimilar+ also clearly excels against all other similarity computation methods for the weights (w1,w2) = (0.5, 0.5) and (w1,w2) = (0.75, 0.25); however, its improvement over EgoSimilar is not as large in these cases, since the critical factor in its improvement is the addition of biases to estimate unknown item ratings, and in the above cases item similarity carries a smaller weight than in the (w1,w2) = (0.25, 0.75) case.
Figure 3 further shows, visually, the percentage improvement provided by EgoSimilar+ in terms of the average friends' placement in comparison to all other similarity computation measures. For (w1,w2) = (0.25, 0.75) this improvement ranges between 8.5% and 25.5%, and even for (w1,w2) = (0.75, 0.25) the smallest improvement is still 4.5%.
Percentage of improvement offered by EgoSimilar+ in average friends' placement in comparison to all other similarity computation measures
Table 6 presents the same type of results as Table 5, with the difference that the same value of K was used in the K-means clustering for all experiments, instead of using the best K for each method. The conclusions derived from Fig. 2 and Table 5 hold once again for the results of Table 6, which show that EgoSimilar+ outperforms all other similarity computation methods in regard to the average friends' placement in the users' matching list.
Table 6 Average friends' placement for K = 40
Conclusions and future work
We have proposed a user similarity computation algorithm, EgoSimilar+, with the aim of using it to find and connect people with common interests in the same geographical area. The algorithm is incorporated into a mobile application that serves as a "friend" recommendation system. EgoSimilar+ adapts ideas and techniques from the recommender systems literature and the skyline queries literature and combines them with our own ideas on the importance and utilization of item popularity. Our proposed algorithm is compared against five well-known similarity computation methods from the literature and is shown to excel in comparison with all of them, improving on their results by 4.5–25.5% in terms of identifying true friends based on their interests.
The idea for eMatch, and hence the need for an algorithm like EgoSimilar+, arose from the fact that the contemporary way of life leads a large number of people to spend much time away from home, often alone among strangers. Therefore, it makes sense for them to connect right on the spot with someone close by who shares their interests. This is a decision that can be made quickly with the help of an intelligent application, as opposed to decisions regarding finding possible life partners, which would usually require much more thought and study from the user (other applications focus on this area). Even at home, however, users spend a large amount of time on their mobile devices. Therefore, even users who want to take their time evaluating possible friends will have the opportunity to do so.
One limitation of the existing work is that the extended dataset is still relatively small. In future work, we will use EgoSimilar+ on large datasets from other sources in order to provide recommendations for users/items, and we will compare it once again against benchmark similarity computation methods. We also intend to incorporate semantic similarity computation algorithms into eMatch, to further improve the clustering and the implicit (via the matching percentage) friendship recommendations. The use of such algorithms is important so that relevant concepts, names and items will be linked automatically by the application (e.g., soccer and football, or soccer and Manchester United). The incorporation of spell-check software is also important, in order to avoid spelling errors that can cause the algorithm to miss an item commonly liked or disliked by two users.
Oommen BJ, Yazidi A, Granmo O-C (2012) An adaptive approach to learning the preferences of users in a social network using weak estimators. J Inf Process Syst 8:191–212
Athanasopoulou G, Koutsakis P (2015) eMatch: an android application for finding friends in your location. Mob Inf Syst J. Article ID 463791
Athanasopoulou G (2013) https://androidappsapk.co/detail-ematch-com-tuc-ematch/. Accessed 06 Nov 2018
Farrahi K, Zia K (2017) Trust reality-mining: evidencing the role of friendship for trust diffusion. Hum Cent Comput Inf Sci 7:4
Duck SW, Craig G (1978) Personality similarity and the development of friendship: a longitudinal study. Br J Soc Clin Psychol 17:237–242
Werner C, Parmelee P (1979) Similarity of activity preferences among friends: those who play together stay together. Soc Psychol Quart 42:62–66
Han X, Wang L, Crespi N, Park S, Cuevas A (2015) Alike people, alike interests? Inferring interest similarity in online social networks. Decis Support Syst 69:92–106
Lee D (2015) Personalizing information using users' online social networks: a case study of CiteULike. J Inf Process Syst 11:1–21
Souri A, Hosseinpour S, Rahmani AM (2018) Personality classification based on profiles of social networks' users and the five-factor model of personality. Hum Cent Comput Inf Sci 8:24
Youhoo (2018) http://appcrawlr.com/android/youhoo. Accessed 06 Nov 2018
GeoSocials (2018) http://appcrawlr.com/android/geosocials. Accessed 06 Nov 2018
Jiveocity (2018) http://appcrawlr.com/android/jiveocity. Accessed 06 Nov 2018
Liben-Nowell D, Kleinberg J (2007) The link prediction problem for social networks. J Assoc Inf Sci Technol 58:1019–1031
Hannon J, Bennett M, Smyth B (2010) Recommending Twitter users to follow using content and collaborative filtering approaches. In: Paper presented at the 4th ACM conference on recommender systems (RecSys), Barcelona; 2010
Wan S et al (2013) Informational friend recommendation in social media. In: Paper presented at the 36th international ACM SIGIR conference on research and development in information retrieval (SIGIR), Dublin; 2013
Han X et al (2016) CSD: a multi-user similarity metric for community recommendation in online social networks. Expert Syst Appl 53:14–26
Diez J, del Coz JJ, Luaces O, Bahamonde A (2008) Clustering people according to their preference criteria. Expert Syst Appl 34:1274–1284
Gan M, Jiang R (2013) Constructing a user similarity network to remove adverse influence of popular objects for personalized recommendation. Expert Syst Appl 40:4044–4053
Hwang D, Kim D (2017) Nearest neighbor based prototype classification preserving class regions. J Inf Process Syst 13:1345–1357
Wu J et al (2017) Weighted local Naïve Bayes link prediction. J Inf Process Syst 13:914–927
Mekouar L, Iraqi Y, Boutaba R (2012) An analysis of peer similarity for recommendations in P2P systems. Multimedia Tools Appl 60:277–303
Jaccard P (1908) Nouvelles Recherches Sur la Distribution Florale. Bulletin de la Societe Vaudoise des Sciences Naturelles 44:223–270
Scott WA (1955) Reliability of content analysis: the case of nominal scale coding. Public Opin Quart 19:321–325
Cohen J (1960) A Coefficient of agreement for nominal scales. Educ Psychol Measur 20:37–46
Di Eugenio B, Glass M (2004) The kappa statistic: a second look. Comput Linguistics 30:95–101
Forgy EW (1965) Cluster analysis of multivariate data: efficiency versus interpretability of classifications. Biometrics 21:768–769
Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65
Koren Y, Bell R, Volinsky C (2009) Matrix factorization techniques for recommender systems. Computer 42:42–49
Borzsony S, Kossman D, Stocker K (2001) The skyline operator. In: Paper presented at the 17th international conference on data engineering (ICDE), Heidelberg; 2001
Papadias D, Tao Y, Fu G, Seeger B (2005) Progressive skyline computation in database systems. ACM Trans Database Syst 30:41–82
Zhang K et al (2017) Probabilistic skyline on incomplete data. In: Paper presented at the 26th ACM international conference on information and knowledge management (CIKM), Singapore; 2017
Thuillier E, Moalic L, Lamrous S, Caminada A (2018) Clustering weekly patterns of human mobility through mobile phone data. IEEE Trans Mob Comput 17:817–830
GS analysed the extended dataset, produced and analyzed the results of the evaluation of EgoSimilar. PK collected the extended dataset and analyzed the results of the evaluation of EgoSimilar. PK also designed and evaluated EgoSimilar+. Both authors read and approved the final manuscript.
The authors wish to sincerely thank the developer of eMatch, Georgia Athanasopoulou, for her valuable help during the time that this work was conducted.
The datasets used and analysed in this study are available from the corresponding author on reasonable request.
This was not a funded research project.
School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece
Georgios Tsakalakis
School of Engineering and Information Technology, Murdoch University, Science and Computing Building 245, SC1.012, 90 South Street, Murdoch, WA, 6150, Australia
Polychronis Koutsakis
Correspondence to Polychronis Koutsakis.
Tsakalakis, G., Koutsakis, P. Improved user similarity computation for finding friends in your location. Hum. Cent. Comput. Inf. Sci. 8, 36 (2018). https://doi.org/10.1186/s13673-018-0160-7
Accepted: 26 November 2018
FluxFix: automatic isotopologue normalization for metabolic tracer analysis
Sophie Trefely ORCID: orcid.org/0000-0003-3816-68691,2,
Peter Ashwell1 &
Nathaniel W. Snyder1
Isotopic tracer analysis by mass spectrometry is a core technique for the study of metabolism. Isotopically labeled atoms from substrates, such as [13C]-labeled glucose, can be traced by their incorporation over time into specific metabolic products. Mass spectrometry is often used for the detection and differentiation of the isotopologues of each metabolite of interest. For meaningful interpretation, mass spectrometry data from metabolic tracer experiments must be corrected to account for the naturally occurring isotopologue distribution. The calculations required for this correction are time-consuming and error-prone, and existing programs are often platform-specific, non-intuitive, commercially licensed and/or limited in accuracy by using theoretical isotopologue distributions, which are prone to artifacts from noise or unresolved interfering signals.
Here we present FluxFix (http://fluxfix.science), an application freely available on the internet that quickly and reliably transforms signal intensity values into percent mole enrichment for each isotopologue measured. 'Unlabeled' data, representing the measured natural isotopologue distribution for a chosen analyte, is entered by the user. This data is used to generate a correction matrix according to a well-established algorithm. The correction matrix is applied to labeled data, also entered by the user, thus generating the corrected output data. FluxFix is compatible with direct copy and paste from spreadsheet applications including Excel (Microsoft) and Google sheets and automatically adjusts to account for input data dimensions. The program is simple, easy to use, agnostic to the mass spectrometry platform, generalizable to known or unknown metabolites, and can take input data from either a theoretical natural isotopologue distribution or an experimentally measured one.
Our freely available web-based calculator, FluxFix (http://fluxfix.science), quickly and reliably corrects metabolic tracer data for natural isotopologue abundance enabling faster, more robust and easily accessible data analysis.
Isotopic tracer analysis is a technique indispensable to the study of metabolic flux. A variety of stable isotopes are used for metabolic tracing depending on the purpose of the study. Stable isotopes are non-radioactive atoms with additional neutrons, and include 13C, 15N, 18O, and 2H. These 'heavy' isotopes possess chemical properties nearly identical to their lighter counterparts but differ in mass. The fate of isotope labeled atoms from substrates can be traced through their incorporation over time into specific metabolic products. The detection and differentiation of the isotopologues of each metabolite of interest is accomplished through mass spectrometry. Atoms from a labeled substrate can be incorporated singly or multiple times depending on the substrate and product being measured and the time frame considered, resulting in a distribution of isotopologues (molecules that differ only by their number of isotopic substitutions). The incorporation of [13C6]-glucose into acetyl-CoA and HMG-CoA is shown as an example (Fig. 1). The relative abundance of different combinations of 13C and 12C atoms (isotopologues) reflects the incorporation of the labeled substrate in competition with the other potential substrates.
Incorporation of 13C-labeled substrate can be measured by mass changes in product metabolites. U-[13C6]-glucose incorporation into acetyl-CoA and subsequently into HMG-CoA is shown here as an example. Carbons derived from glucose can be incorporated into acetyl-CoA, and subsequently into HMG-CoA in units of 2. Thus, 2, 4 or 6 labeled carbons can be added to a HMG-CoA molecule, producing the M2, M4 or M6 isotopologues, respectively
Stable isotopes occur naturally on earth at varying rates. 13C carbon is an abundant naturally occurring isotope in biological systems. It is found on average at a rate of ~1.1% on the earth's surface and in biological systems, although the carbon pool varies depending on its origin [1, 2]. The probability that an isotope of any atom will be incorporated into a molecule is determined by a number of factors including the elemental composition and the number of atoms present in the molecule. Incorporation of naturally occurring isotopes can make a significant contribution to molecular weight.
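To make the scale of this contribution concrete, the carbon-only part of the natural isotopologue distribution can be approximated with a binomial model. The sketch below is an illustration only: it ignores the isotopes of the other elements (e.g. 2H, 15N, 18O), and the function name is ours, not part of FluxFix.

```python
from math import comb

def natural_13c_distribution(n_carbons, p_13c=0.011):
    """Binomial approximation of the natural isotopologue distribution
    (M0, M1, ..., Mn) for a molecule's carbon skeleton alone, assuming
    ~1.1% natural 13C abundance. Other elements' isotopes are ignored."""
    return [comb(n_carbons, k) * p_13c ** k * (1 - p_13c) ** (n_carbons - k)
            for k in range(n_carbons + 1)]

# For a 6-carbon fragment, roughly 6% of the total signal already sits
# at M1 before any artificial tracer is added.
dist = natural_13c_distribution(6)
```

Even this simplified model shows why the correction matters: for larger molecules such as acyl-CoA fragments, an appreciable fraction of the raw signal appears at M1 and above purely from natural abundance.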
Relative quantitation of the different isotopologues of a metabolite must be adjusted for the natural background abundance of each isotopologue in order to make an accurate determination of artificial label incorporation. The normalization algorithm estimates the natural background isotopologue distribution either from unlabeled samples or from theoretically predicted isotopologue distributions. This background distribution is then used to perform a transformation according to a well-established algorithm [3]. The output values indicate the enrichment of isotopologues derived from the artificially labeled substrate.
In practice this transformation is often performed as a series of calculations using software such as Excel (Microsoft), or via platform specific software. This method is prone to error due to the many steps involved and formulas requiring constant adjustment as data dimensions change for different metabolites. Programs capable of performing this calculation have been developed previously [3, 4] but they are implemented on software platforms that often suffer from compatibility, dependency and usability problems.
As HRMS technology improves to allow the acquisition of more metabolic tracer data, a bottleneck in experimental workflow is accentuated at the point of data analysis. In order to address this bottleneck and help streamline data analysis for metabolic tracer studies, we have developed FluxFix, an application freely available on the internet at http://fluxfix.science. FluxFix automatically performs the calculation from raw signal intensity values and converts them to percent molar enrichment values in one step. This program automatically adjusts to dataset dimension. FluxFix can be accessed at any time from any computer, overcoming the limitations of existing programs that are often platform specific, non-intuitive, commercially licensed and/or limited to using theoretical isotopologue distributions that can be prone to artifacts. Thus, it is robust, reduces error, is intuitive to the underlying data structure, more directly helps in interpretation, and saves time.
The application consists of a backend server running Ubuntu and an API written in Python 3.4.2 using numpy (https://github.com/numpy/numpy). The frontend was written in HTML, CSS, and makes use of Javascript. Altogether the program performs three functions as follows:
'Unlabeled' data, copied and pasted directly from spreadsheet applications including Excel (Microsoft) and Google Sheets, is read in as tab-separated values (TSV) and used to generate a correction matrix (MCor). The website also includes an option to upload data in .CSV file format. Data must be formatted such that each row is a different sample and each column a different isotopologue. If more than one row of data is entered, the 'unlabeled' data is averaged over each column before MCor is generated.
Labeled sample data is read in as a data matrix (DRaw) of several rows of TSV, copied and pasted directly from a spreadsheet application or uploaded as a .CSV file. Data must be in the same format as the unlabeled data (i.e. each row is a different sample and each column a different isotopologue) and have the same column dimension. The corrected data (DCor) is generated by multiplying the inverse of the correction matrix by the labeled data matrix, as below:
$$ D_{\mathrm{Cor}} = \left(M_{\mathrm{Cor}}\right)^{-1} \cdot D_{\mathrm{Raw}} $$
The percent molar enrichment for each isotopologue (column) is calculated for each individual sample (row). The output data is presented as a matrix of percent molar enrichment values in the same format as the input data matrix (DRaw). The output appears in the results box as TSV and can be directly copied and pasted into a spreadsheet. The output can also be downloaded as a .CSV file.
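The three steps above can be sketched with numpy as follows. This is a minimal re-implementation for illustration, not the production FluxFix code; the function names are ours, and details (e.g. the treatment of small negative corrected values) may differ.

```python
import numpy as np

def correction_matrix(unlabeled_dist):
    """Lower-triangular correction matrix MCor: column j holds the natural
    isotopologue distribution of material truly labeled at M{j}."""
    dist = np.asarray(unlabeled_dist, dtype=float)
    dist = dist / dist.sum()            # normalize to fractional abundance
    n = dist.size
    m = np.zeros((n, n))
    for j in range(n):
        m[j:, j] = dist[: n - j]        # shift distribution down by j masses
    return m

def correct(labeled_rows, unlabeled_rows):
    """Percent molar enrichment per sample: average the unlabeled rows to
    estimate the background distribution, invert MCor, apply it to each
    labeled row, and normalize each corrected row to 100%.
    No penalty is applied to negative corrected values."""
    dist = np.mean(np.atleast_2d(unlabeled_rows), axis=0)
    m_inv = np.linalg.inv(correction_matrix(dist))
    out = []
    for row in np.atleast_2d(labeled_rows):
        corrected = m_inv @ np.asarray(row, dtype=float)
        out.append(100.0 * corrected / corrected.sum())
    return np.array(out)
```

As a sanity check, a known mixture is recovered: with a two-isotopologue natural distribution of 90%/10%, raw intensities of (0.45, 0.5) correct back to 50%/50% molar enrichment.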
The web interface has two boxes for data entry (unlabeled and labeled data) and another box for presenting computed results. The 'Compute Percentages' button runs the python program and generates the output data in the results box. The calculator can be instantly reset for new data entry by refreshing the page.
User experience optimization
FluxFix was tested by release to a selected group of 20 users. These test users represented a range of levels of experience with isotopologue analysis and used a variety of differently structured datasets. After consultation with these users and acquiring feedback, we included several features that significantly improved user experience. These adaptations included:
The dimensions of the data (x, y) are shown to the user upon input. For example "Data is 'x' columns by 'y' rows". This helps the user to identify errors in data selection.
Common errors in data input include the inadvertent entry of row/column headers and malformed data matrices (data can be malformed by the absence of a cell or the presence of extra cells as trailing tabs). These errors are caught by the client-side code and reported to the user as a pop-up prompt before sending. The pop-up prompt specifically describes the problem, either the presence of non-numeric values (row/column headers) or matrix malformation; it also gives the exact row and column coordinates of the error that triggered the report, making it easy for the user to identify and rectify.
If there is an error in processing the data on the server side, it is reported to the user by a pop-up prompt, which encourages them to contact us in the event of a persistent problem.
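A validator implementing checks of this kind might look like the following Python sketch. It is hypothetical: the actual FluxFix client-side checks are written in Javascript, and the error wording here is ours.

```python
def parse_tsv(text):
    """Parse pasted TSV into a list of float rows. Raises ValueError with
    row/column coordinates on the first problem found: a non-numeric cell
    (e.g. an inadvertent header) or a malformed matrix (missing/extra cells)."""
    rows = [line.split('\t') for line in text.strip().splitlines()]
    width = len(rows[0])
    data = []
    for r, cells in enumerate(rows, start=1):
        if len(cells) != width:
            raise ValueError(
                f"malformed matrix at row {r}: expected {width} columns, "
                f"got {len(cells)}")
        parsed = []
        for c, cell in enumerate(cells, start=1):
            try:
                parsed.append(float(cell))
            except ValueError:
                raise ValueError(
                    f"non-numeric value at row {r}, column {c}: {cell!r}") from None
        data.append(parsed)
    return data
```

Reporting the exact (row, column) coordinates, rather than a generic "invalid input" message, is what lets the user find a stray header cell or trailing tab quickly.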
With ongoing user input and reporting, the usability of the application can be further improved. For programmers who might wish to implement the FluxFix calculation into an automated data analysis pipeline, a link is included on the webpage. This links to the project GitHub repository and instructions on how to implement the backend code in Python3 accompanied by example code.
Here we use example liquid chromatography mass spectrometry (LC-MS) data sets to demonstrate the application of FluxFix. We also outline recommendations for its application, and show the advantage of FluxFix through direct comparison to previously published isotope correction software.
Example data set analysis
Here we demonstrate the application of FluxFix in the analysis of two different example datasets. The first example data set was generated as follows; HeLa cells were incubated in DMEM containing either 25 mM [13C6]-glucose or unlabeled glucose (for unlabeled control samples) for 4 h. MS data were acquired on a Thermo Q Exactive instrument in positive ESI mode as described elsewhere [5]. Quantitation was based on the relative abundance of MS2 fragments (Fig. 2). Processing of raw data and peak integration was performed using Xcalibur and TraceFinder (Thermo).
Molecular structure of acetyl-CoA and HMG-CoA. Carbon from glucose can be incorporated into the R-groups. The MS2 fragment measured experimentally incorporates the R-groups, as well as 11 other carbon molecules. Carbon atoms are highlighted as red circles
The raw data and FluxFix output correction for the first example dataset are displayed in Table 1. This table includes a comparison of corrections derived from experimental unlabeled data and from theoretical unlabeled values. Theoretical values were generated for the MS2 fragments of acetyl-CoA and HMG-CoA (see Fig. 2) using the simulation function in XCalibur (Thermo). Figure 3 illustrates the correction using experimental data. It displays significant enrichment of the isotopologues (M0, M2, M4, M6) that can be derived from glucose, while the odd-numbered isotopologues are not present. The metabolic pathways by which glucose is incorporated into acetyl-CoA and HMG-CoA require that it be added in two-carbon units. Thus, the exclusion of odd-numbered isotopologues in the molar enrichment confirms that the transformation was successful. The correction using simulated data allocates more of the percent molar enrichment to the odd-numbered isotopologues, especially M1 (see Table 1), indicating that simulated unlabeled data may introduce more error.
Table 1 FluxFix correction for acetyl-CoA and HMG-CoA from [13C]-glucose treated cells. Output was generated with both simulated and experimental unlabeled data
Data correction for acetyl-CoA and HMG-CoA using FluxFix. Input data as signal intensity (left y-axis) are in black and grey and output percent molar enrichment data (right y-axis) are in red. Molar enrichment from [13C]-glucose occurs in the M2 for acetyl-CoA and M2, M4 and M6 isotopologues for HMG-CoA. This incorporation of glucose is consistent with the known metabolic pathways by which glucose carbon is incorporated in pairs and to a maximum of two atoms for acetyl-CoA and six atoms for HMG-CoA. Data is from three replicate samples, error bars are standard deviation
The potential for isotope tracer analysis in metabolite discovery has attracted attention elsewhere [6]. Table 2 presents an example dataset that highlights the potential uses of FluxFix in metabolite discovery and characterization using mass isotopologue analysis. We make use of data from a previously published experiment of isotopologue analysis of an unknown product of propionate metabolism. This data was generated in human hepatocellular carcinoma HepG2 cells incubated in [2H2]-propionate or unlabeled propionate and was analyzed by MS/MS using an API-4000 triple quadrupole mass spectrometer, as described elsewhere [7]. Since, at the time of the experiment, the chemical formula of the putative metabolite was unknown, no generation of simulated spectra was possible. Therefore, an isotopic correction matrix was generated by treating a control group of cells with unlabeled sodium propionate. In Table 2, this data was used as input into FluxFix to calculate the percent molar enrichment of several isotopologues of the unknown compound.
Table 2 Isotopologue analysis of an unknown product of propionate metabolism. FluxFix generated percent molar enrichment output values from raw MS/MS data from cells treated with [2H2]-labeled or unlabeled propionate
Recommendations for use
The FluxFix calculator is flexible and can process input data derived from any type of isotope labeling strategy that can be analyzed by mass spectrometry and potentially from NMR spectra as well. We have tested FluxFix with a range of different datasets including glycolytic intermediates, acyl-CoA thioesters, lipids and novel metabolites. Furthermore, this program is not limited to 13C-labeled metabolites. Although we did not directly test this, FluxFix is compatible for use in conjunction with inductively coupled plasma-MS to measure incorporation of stable isotopes of elements as diverse as lead, calcium, iron, chromium, magnesium and zinc. FluxFix may also be used to analyze reverse labeling, or pulse-chase experiments, since the input data is label-neutral.
The principal recommendation we make is that experimentally derived data from unlabeled samples be used in preference to simulated background distribution data wherever possible. Relative isotopologue detection ([M + 1]/M) frequently diverges from theoretical values and this divergence is affected by numerous factors including instrument resolution [8, 9]. Simulated data is limited by its inability to account for matrix effects on resolution or to accurately represent background isotopic distributions unique to different biological systems.
In order to model isotopologue signal intensity values, one must model the resolution of the signal for every isotopologue included in the calculation. Theoretical isotopologue distribution is limited because there is no precise way to model matrix effects on resolution. Resolution is determined by a number of important factors. Firstly, the resolution of the instrument: triple quadrupole and linear ion-trap instruments are often operated at unit resolution, but many have the ability to increase or decrease it, and high-resolution mass analyzers operate with different constraints based on the underlying physics of ion detection and separation. Secondly, the resolution of an ion in some mass analyzers is inversely dependent on the m/z of that ion. This dependency is not equivalent across platforms. For example, the decay in resolution with increasing m/z is not equivalent on an Orbitrap versus an ion cyclotron resonance or time-of-flight instrument [10, 11]. Thirdly, resolution is dependent upon the sample matrix. Analytes are embedded in a matrix of ions, which varies according to the sample source and preparation. The proximity of neighboring ions (close in m/z) during acquisition of an analyte will directly influence the resolution of that analyte. These unique matrix effects cannot be consistently accounted for by theoretical predictions.
Different biological systems acquire unique isotopic signatures. C3 and C4 plants, for example, incorporate 13C at different rates during carbon fixation [1]. Isotopes are propagated through the food chain such that species accumulate unique isotope signatures. This principle has been exploited in niche ecology, where variations in isotope profiles between organisms can be used to define food webs, diet, animal migration and nutrient flow [2, 12, 13].
Isotope tracer studies can be performed on samples from varied sources with unique background isotopologue distribution. Therefore, experimentally derived isotopologue distribution data, from unlabeled samples extracted in the same way as labeled samples, produce a more accurate representation of the 'background' isotopologue distribution than theoretical isotopologue distribution values. In light of this and the inability of simulations to account for matrix effects on resolution, we recommend that users of FluxFix use unlabeled sample data generated at least in triplicate from the matching matrix with the most experimentally relevant control conditions for normalization.
Advantages of FluxFix over existing software
There are a number of available software platforms capable of performing isotopologue normalization. These include ICT [14], Pynac [15], (MS/)MS-X-Corr [16, 17], iMS2Flux [18], 13CFLUX2 [19], OpenFLUX [20], FiatFlux [21] and IsoCor [4]. With the exception of IsoCor, these are command-line tools, which require an understanding of various programming languages (including Python, MatLab and Perl) and data structures to be used effectively. Many are designed to perform analysis on large 'omics'-level data sets but are restricted to a single data acquisition platform or capable of detecting only a single type of label incorporation, e.g., 13C. A direct comparison of the features of several of these platforms can be found elsewhere [18].
FluxFix is unique as a web-based isotopologue normalization calculator. It performs a quick one-step calculation and does not require programming skills to use. The function of FluxFix is most similar to that performed by IsoCor [4] – a popular existing software platform. The major limitations of IsoCor are its use of theoretical isotope distribution, user-side software dependency, and inflexible data input requirements, upon which FluxFix improves. As the function of FluxFix is most similar to that of IsoCor, a detailed comparison of the features of these tools has been performed below.
IsoCor is only available as a desktop software application with Python(x, y) dependencies and is only compatible with windows and Linux operating systems. FluxFix is available as a web application compatible with any modern web browser, eliminating the need for any software installation, configuration or compatibility issues. All that is required is an internet connection.
IsoCor uses theoretical calculations to determine natural background isotopologue profiles. We do not encourage users to use simulations for background normalization. There are a variety of platforms, both free and proprietary, that simulate isotopologue distribution. For example, ChemCalc [22] is easily accessible and freely available as a web tool specifically designed for this purpose. If one chooses to use simulated values routinely, we suggest saving them and reusing them in FluxFix; they need not be regenerated with every analysis, as with IsoCor.
IsoCor requires input with stringent data dimensions based on the theoretical length of the isotopologue series. This data, although theoretically possible, is in practice rarely achieved owing to a requirement for extremely high sensitivity in acquisition. As a result, the user must add a series of zeros to the end of their detectable data peaks in order to satisfy the input data dimension requirements. The inflexible input data requirements can also lead to misleading results as any isotopologues that might be invalid for acquisition reasons (e.g. the resolution was bad and contaminated with interfering peaks) cannot be omitted from the data set. FluxFix adapts to the input data dimensions chosen by the user.
IsoCor takes input as an exported .txt file that the user must generate. This extra step is not required in FluxFix, which streamlines direct copy and paste from spreadsheet applications including Excel (Microsoft), saving time and processing effort.
Additionally, IsoCor output data for batch analyses, is as a separate data .txt file. FluxFix presents the output in the results window in the same format as the input data matrix, facilitating direct copy and paste into a spreadsheet and a faster workflow.
IsoCor incorporates features that are not included in FluxFix. These are the residuum score, the derivatization feature and the isotope purity correction. We argue that these features are superfluous to an effective data workflow and could lead to data overcorrection.
IsoCor relies on a user editable file that details the isotope percent enrichment for each of the atoms being analyzed. This data is used to simulate the isotopologue distribution. FluxFix does not perform these simulations as we encourage the use of real unlabeled sample data for normalization. However, as described above, simulated isotopologue distributions can be generated for any specific chemical structures using a variety of existing software options.
IsoCor has a function that adds a derivatization group to the calculation for isotopologue distribution. This can lead to confusion because there are many different chemical structures that can be produced from a derivatized parent molecule. In isotopologue analyses, one must be specific about the chemical structure being analyzed. FluxFix relies on the user defining the isotopologue masses detected, making it clearer and more flexible.
Negative values are theoretically impossible but often occur in small values owing to variability and error. IsoCor incorporates an algorithm that penalizes negative values upon normalization such that the error involved in this penalty is reflected, instead, in a residuum score. FluxFix does not perform a penalty or give residuum scores. We argue that this penalty can be misleading, as it masks the error evident in negative values making it less likely that the user is alerted to inconsistencies in their data.
IsoCor has an option to correct for the purity of the isotope tracer used in an experiment. FluxFix does not perform this calculation, as it is not required. The purpose of FluxFix is to calculate the enrichment of label incorporation above naturally occurring isotopes. For most isotope labeling experiments a substrate is used with an approximate purity measure designated by the manufacturer (usually ~98%). The exact purity is not actually known, therefore correcting for this factor is not useful and could actually over-correct the data.
The simplicity and ease of use of FluxFix separate it from previous software for metabolic tracer correction. FluxFix is flexible both in its availability (online at any time) and in its input parameters: it can process data in the most easily accessible format (TSV pasted from a spreadsheet) and for any metabolite for which isotopologue data have been generated, with no limitations on the dimensions of the data. It is not restricted to particular isotopes or limited by preset isotope ranges, so it can be applied to an unlimited range of metabolites. Finally, the program integrates nearly seamlessly with spreadsheet applications, including Microsoft Excel and Google Sheets, which helps the user organize data. In addition, the built-in checks for data compatibility and the notation of the dimensions of the data as they are pasted in assist in error proofing.
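The spreadsheet round trip described above amounts to parsing tab-separated text and verifying that every row has the same width. A minimal sketch of such a check (illustrative only, not FluxFix's actual code):

```python
import csv
import io

def parse_tsv(pasted_text):
    """Parse TSV text pasted from a spreadsheet into a matrix of floats,
    reporting its dimensions and rejecting ragged rows."""
    rows = [row for row in csv.reader(io.StringIO(pasted_text), delimiter="\t")
            if any(cell.strip() for cell in row)]      # skip blank lines
    widths = {len(row) for row in rows}
    if len(widths) != 1:
        raise ValueError(f"inconsistent row widths: {sorted(widths)}")
    matrix = [[float(cell) for cell in row] for row in rows]
    print(f"parsed {len(matrix)} rows x {widths.pop()} columns")
    return matrix

data = "0.90\t0.08\t0.02\n0.45\t0.49\t0.05\n"
m = parse_tsv(data)   # parsed 2 rows x 3 columns
```

Echoing the parsed dimensions back to the user is a cheap but effective guard against copy-paste mistakes.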
Our freely available web-based calculator, FluxFix (http://fluxfix.science), quickly and reliably corrects metabolic tracer data for natural isotopologue abundance, enabling faster, more robust data analysis. It is flexible, accurate, and can be used for any tracer and any metabolite on any computer with an Internet connection. Thus, it is a simple, convenient and flexible solution to the data bottleneck problem in metabolic tracer analysis.
HMG-CoA:
3-hydroxy-3-methyl-glutaryl-Coenzyme A
TSV:
Tab-separated values
O'Leary MH. Carbon Isotopes in Photosynthesis. Bioscience. 1988;38:328–36. Available from: http://www.jstor.org/stable/info/10.2307/1310735. cited 15 Aug 2016.
Markow TA, Anwar S, Pfeiler E. Stable isotope ratios of carbon and nitrogen in natural populations of Drosophila species and their hosts. Funct Ecol. 2000;14:261–6. Available from: http://doi.wiley.com/10.1046/j.1365-2435.2000.00408.x. cited 27 Jun 2016.
Fernandez CA, Des Rosiers C, Previs SF, David F, Brunengraber H. Correction of 13C mass isotopomer distributions for natural stable isotope abundance. J Mass Spectrom. 1996;31:255–62. Available from: http://www.ncbi.nlm.nih.gov/pubmed/8799277. cited 21 Mar 2016.
Millard P, Letisse F, Sokol S, Portais J-C. IsoCor: correcting MS data in isotope labeling experiments. Bioinformatics. 2012;28:1294–6. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22419781. cited 17 Apr 2016.
Frey AJ, Feldman DR, Trefely S, Worth AJ, Basu SS, Snyder NW. LC-quadrupole/Orbitrap high-resolution mass spectrometry enables stable isotope-resolved simultaneous quantification and (13)C-isotopic labeling of acyl-coenzyme A thioesters. Anal Bioanal Chem. 2016;408:3651–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26968563. cited 5 Jul 2016.
Sadhukhan S, Han Y, Zhang G-F, Brunengraber H, Tochtrop GP. Using Isotopic Tools to Dissect and Quantitate Parallel Metabolic Pathways. J Am Chem Soc. 2010;132:6309–11. Available from: http://pubs.acs.org/doi/abs/10.1021/ja100399m. cited 17 Oct 2016.
Snyder NW, Basu SS. Metabolism of propionic acid to a novel acyl-coenzyme A thioester by mammalian cell lines and platelets. J Lipid Res. 2015;56:142–50. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25424005.
González-Antuña A, Rodríguez-González P, García Alonso JI. Determination of the enrichment of isotopically labelled molecules by mass spectrometry. J Mass Spectrom. 2014;49:681–91. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25044895. cited 13 Oct 2016.
Erve JCL, Gu M, Wang Y, DeMaio W, Talaat RE. Spectral accuracy of molecular ions in an LTQ/Orbitrap mass spectrometer and implications for elemental composition determination. J Am Soc Mass Spectrom. 2009;20:2058–69. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19716315. cited 18 Oct 2016.
Hu Q, Noll RJ, Li H, Makarov A, Hardman M, Graham Cooks R. The Orbitrap: a new mass spectrometer. J Mass Spectrom. 2005;40:430–43. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15838939. cited 8 Jul 2016.
McLuckey SA, Wells JM. Mass analysis at the advent of the 21st century. Chem Rev. 2001;101:571–606. Available from: http://www.ncbi.nlm.nih.gov/pubmed/11712257. cited 7 Jul 2016.
Layman CA, Araujo MS, Boucek R, Hammerschlag-Peyer CM, Harrison E, Jud ZR, et al. Applying stable isotopes to examine food-web structure: an overview of analytical tools. Biol Rev Camb Philos Soc. 2012;87:545–62. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22051097. cited 6 Jul 2016.
Brind'Amour A, Dubois SF, Flaherty E, Ben-David M, Newsome S, del CM R, et al. Isotopic Diversity Indices: How Sensitive to Food Web Structure? Pond DW, editor. PLoS One. 2013;8:e84198. Public Library of Science. Available from: http://dx.plos.org/10.1371/journal.pone.0084198. cited 5 Jul 2016.
Jungreuthmayer C, Neubauer S, Mairinger T, Zanghellini J, Hann S. ICT: isotope correction toolbox. Bioinformatics. 2016;32:154–6. Oxford University Press. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26382193. cited 18 Oct 2016.
Carreer WJ, Flight RM, Moseley HNB. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets. Metabolites. 2013;3:853. Multidisciplinary Digital Publishing Institute (MDPI). Available from: http://www.ncbi.nlm.nih.gov/pubmed/24404440. cited 18 Oct 2016.
Wahl SA, Dauner M, Wiechert W. New tools for mass isotopomer data evaluation in 13C flux analysis: Mass isotope correction, data consistency checking, and precursor relationships. Biotechnol Bioeng. 2004;85:259–68. Wiley Subscription Services, Inc., A Wiley Company. Available from: http://doi.wiley.com/10.1002/bit.10909. cited 18 Oct 2016.
Niedenführ S, ten Pierick A, van Dam PTN, Suarez-Mendez CA, Nöh K, Wahl SA. Natural isotope correction of MS/MS measurements for metabolomics and 13 C fluxomics. Biotechnol Bioeng. 2016;113:1137–47. Available from: http://doi.wiley.com/10.1002/bit.25859. cited 18 Oct 2016.
Poskar CH, Huege J, Krach C, Franke M, Shachar-Hill Y, Junker BH, et al. iMS2Flux – a high–throughput processing tool for stable isotope labeled mass spectrometric data used for metabolic flux analysis. BMC Bioinformatics. 2012;13:295. BioMed Central. Available from: http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-13-295. cited 18 Oct 2016.
Weitzel M, Nöh K, Dalman T, Niedenführ S, Stute B, Wiechert W. 13CFLUX2—high-performance software suite for (13)C-metabolic flux analysis. Bioinformatics. 2013;29:143–5. Oxford University Press. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23110970. cited 18 Oct 2016.
Quek L-E, Wittmann C, Nielsen LK, Krömer JO. OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis. Microb Cell Fact. 2009;8:25. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19409084. cited 18 Oct 2016.
Zamboni N, Fischer E, Sauer U. FiatFlux—a software for metabolic flux analysis from 13C-glucose experiments. BMC Bioinformatics. 2005;6:209. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16122385. cited 18 Oct 2016.
Patiny L, Borel A. ChemCalc: A Building Block for Tomorrow's Chemical Infrastructure. J Chem Inf Model. 2013;53:1223–8. American Chemical Society. Available from: http://pubs.acs.org/doi/abs/10.1021/ci300563h. cited 15 Aug 2016.
We thank all members of the laboratory of Prof. Ian Blair for enthusiastic testing and helpful feedback, in particular, Dr Clementina Mesaros.
This work was supported by a Pennsylvania Department of Health Commonwealth Universal Research Enhancement (CURE) grant and a NIH grant K22ES26235 to NWS.
Project name: FluxFix
Project home page: http://fluxfix.science
Operating system: browser-based client. Backend uses an API running in a Python VM on Ubuntu
Programming language: Python, Javascript
License: MIT.
The project was conceptualized and designed by ST and NWS. ST and PA designed the software. ST wrote the backend in Python. PA performed the web configuration. ST prepared figures, analyzed data, and wrote the manuscript. All authors read and provided editorial feedback on the manuscript and figures.
AJ Drexel Autism Institute, Drexel University, Philadelphia, PA, 19104, USA
Sophie Trefely, Peter Ashwell & Nathaniel W. Snyder
Department of Cancer Biology, Abramson Family Cancer Research Institute, University of Pennsylvania, Philadelphia, PA, 19104, USA
Sophie Trefely
Peter Ashwell
Nathaniel W. Snyder
Correspondence to Sophie Trefely.
Trefely, S., Ashwell, P. & Snyder, N.W. FluxFix: automatic isotopologue normalization for metabolic tracer analysis. BMC Bioinformatics 17, 485 (2016). https://doi.org/10.1186/s12859-016-1360-7
Metabolite
Isotopologue
Find the magnitude of the complex number $5-12i$.
The magnitude is $$
|5-12i| = \sqrt{5^2 + (-12)^2} = \sqrt{169} = \boxed{13}.
$$
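A quick numeric check of this magnitude in Python:

```python
# |5 - 12i| via the built-in complex type
print(abs(complex(5, -12)))   # 13.0
```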
The number 74 can be factored as 2(37), so 74 is said to have two distinct prime factors. How many distinct prime factors does 210 have?
We know that $210 = 10 \cdot 21$. Breaking down these factors even further, we have that $10 = 2 \cdot 5$ and $21 = 3 \cdot 7$, so $210 = 2 \cdot 3 \cdot 5 \cdot 7$. Since these factors are all prime, $210$ has $\boxed{4}$ distinct prime factors.
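The same factorization can be verified by trial division, which also confirms the two distinct prime factors of 74:

```python
def prime_factors(n):
    """Return the set of distinct prime factors of n by trial division."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is a prime factor
        factors.add(n)
    return factors

print(sorted(prime_factors(210)))   # [2, 3, 5, 7]
print(len(prime_factors(74)))       # 2
```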
5.2: Joint Distributions of Continuous Random Variables
Expectations of Functions of Jointly Distributed Continuous Random Variables
Theorem \(\PageIndex{1}\)
Independent Random Variables
Having considered the discrete case, we now look at joint distributions for continuous random variables.
If continuous random variables \(X\) and \(Y\) are defined on the same sample space \(S\), then their joint probability density function (joint pdf) is a piecewise continuous function, denoted \(f(x,y)\), that satisfies the following.
\(f(x,y)\geq0\), for all \((x,y)\in\mathbb{R}^2\)
\(\displaystyle{\iint\limits_{\mathbb{R}^2}\! f(x,y)\, dx\, dy = 1}\)
\(\displaystyle{P((X,Y)\in A) = \iint\limits_A\! f(x,y)\, dx\, dy}\), for any \(A\subseteq\mathbb{R}^2\)
The first two conditions in Definition 5.2.1 provide the requirements for a function to be a valid joint pdf. The third condition indicates how to use a joint pdf to calculate probabilities. As an example of applying the third condition in Definition 5.2.1, the joint cdf for continuous random variables \(X\) and \(Y\) is obtained by integrating the joint density function over a set \(A\) of the form
$$A = \{(x,y)\in\mathbb{R}^2\ |\ X\leq a\ \text{and}\ Y\leq b\},\notag$$
where \(a\) and \(b\) are constants. Specifically, if \(A\) is given as above, then the joint cdf of \(X\) and \(Y\), at the point \((a,b)\), is given by
$$F(a,b) = P(X\leq a\ \text{and}\ Y\leq b) = \int\limits^b_{-\infty}\int\limits^a_{-\infty}\! f(x,y)\, dx\, dy.\notag$$
Note that probabilities for continuous jointly distributed random variables are now volumes instead of areas as in the case of a single continuous random variable.
As in the discrete case, we can also obtain the individual, marginal pdf's of \(X\) and \(Y\) from the joint pdf.
Suppose that continuous random variables \(X\) and \(Y\) have joint density function \(f(x,y)\). The marginal pdf's of \(X\) and \(Y\) are respectively given by the following.
\begin{align*}
f_X(x) &= \int\limits^{\infty}_{-\infty}\! f(x, y)\,dy \quad(\text{fix a value of}\ X,\ \text{and integrate over all possible values of}\ Y) \\
f_Y(y) &= \int\limits^{\infty}_{-\infty}\! f(x, y)\,dx \quad(\text{fix a value of}\ Y,\ \text{and integrate over all possible values of}\ X)
\end{align*}
Suppose a radioactive particle is contained in a unit square. We can define random variables \(X\) and \(Y\) to denote the \(x\)- and \(y\)-coordinates of the particle's location in the unit square, with the bottom left corner placed at the origin. Radioactive particles follow completely random behavior, meaning that the particle's location should be uniformly distributed over the unit square. This implies that the joint density function of \(X\) and \(Y\) should be constant over the unit square, which we can write as
$$f(x,y) = \left\{\begin{array}{l l}
c, & \text{if}\ 0\leq x\leq 1\ \text{and}\ 0\leq y\leq 1 \\
0, & \text{otherwise},
\end{array}\right.\notag$$
where \(c\) is some unknown constant. We can find the value of \(c\) by using the first condition in Definition 5.2.1 and solving the following:
$$\iint\limits_{\mathbb{R}^2}\! f(x,y)\, dx\, dy = 1 \quad\Rightarrow\quad \int\limits^1_0\!\int\limits^1_0\! c\, dx\, dy = 1 \quad\Rightarrow\quad c \int\limits^1_0\!\int\limits^1_0\! 1\, dx\, dy = 1 \quad\Rightarrow\quad c=1\notag$$
We can now use the joint pdf of \(X\) and \(Y\) to compute probabilities that the particle is in some specific region of the unit square. For example, consider the region
$$A = \{(x,y)\ |\ x-y > 0.5\},\notag$$
which is graphed in Figure 1 below.
If we want the probability that the particle's location is in the lower right corner of the unit square that intersects with the region \(A\), then we integrate the joint density function over that portion of \(A\) in the unit square, which gives the following probability:
$$P(X-Y>0.5) = \iint\limits_A\! f(x,y)\, dx\, dy = \int^{0.5}_0\! \int^{1}_{y+0.5}\! 1\, dx\, dy = 0.125\notag$$
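This probability is easy to sanity-check by simulation, since the region \(A\) is just the triangle with vertices \((0.5,0)\), \((1,0)\) and \((1,0.5)\), which has area \(1/8\). A quick Monte Carlo sketch:

```python
import random

random.seed(1)
n = 1_000_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()   # uniform point in the unit square
    if x - y > 0.5:
        hits += 1
print(hits / n)   # close to 0.125, the area of the triangle
```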
Lastly, we apply Definition 5.2.2 to find the marginal pdf's of \(X\) and \(Y\).
f_X(x) &= \int\limits^1_0\! 1\, dy = 1, \quad\text{for}\ 0\leq x\leq 1 \\
f_Y(y) &= \int\limits^1_0\! 1\, dx = 1, \quad\text{for}\ 0\leq y\leq 1
Note that both \(X\) and \(Y\) are individually uniform random variables, each over the interval \([0,1]\). This should not be too surprising. Given that the particle's location was uniformly distributed over the unit square, we should expect that the individual coordinates would also be uniformly distributed over the unit intervals.
At a particular gas station, gasoline is stocked in a bulk tank each week. Let random variable \(X\) denote the proportion of the tank's capacity that is stocked in a given week, and let \(Y\) denote the proportion of the tank's capacity that is sold in the same week. Note that the gas station cannot sell more than what was stocked in a given week, which implies that the value of \(Y\) cannot exceed the value of \(X\). A possible joint pdf of \(X\) and \(Y\) is given by
$$f(x,y) = \left\{\begin{array}{l l}
3x, & \text{if}\ 0\leq y \leq x\leq 1 \\
0, & \text{otherwise.}
\end{array}\right.\notag$$
Note that this function is only nonzero over the triangular region given by \(\{(x,y)\ |\ 0\leq y\leq x \leq 1\}\), which is graphed in Figure 2 below:
Figure 2: Region over which joint pdf \(f(x,y)\) is nonzero.
We find the joint cdf of \(X\) and \(Y\) at the point \((x,y) = (1/2, 1/3)\):
F\left(\frac{1}{2},\frac{1}{3}\right) = P\left(X\leq\frac{1}{2} \text{ and } Y\leq\frac{1}{3}\right) &= \int^{1/3}_0\int^{0.5}_y\! 3x\, dxdy\\
&=\int^{1/3}_0\! \left(\frac{3}{2}x^2\Big|^{0.5}_y\right)\,dy = \int^{1/3}_0 \!\left(\frac{3}{8} - \frac{3}{2}y^2\right)\,dy\\
&=\frac{3}{8}y-\frac{1}{2}y^3\Big|^{1/3}_0 \approx 0.1065
Thus, there is a 10.65% chance that less than half the tank is stocked and less than a third of the tank is sold in a given week. Note that in finding the above integral, we look at where the region given by \(\{(x,y)\ |\ x\leq1/2, y\leq1/3\}\) intersects the region over which the joint pdf is nonzero, i.e., the region graphed in Figure 2. This tells us what the limits of integration are in the double integral.
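The decimal 0.1065 is a rounded value; exact rational arithmetic shows the probability is \(23/216\). A quick check of \(\frac{3}{8}y-\frac{1}{2}y^3\) at \(y=1/3\) using Python's fractions module:

```python
from fractions import Fraction

y = Fraction(1, 3)
F = Fraction(3, 8) * y - Fraction(1, 2) * y**3   # evaluate (3/8)y - (1/2)y^3 at y = 1/3
print(F, float(F))   # 23/216 ≈ 0.10648
```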
Next, we find the probability that the amount of gas sold is less than half the amount that is stocked in a given week. In other words, we find \(P(Y < 0.5X)\). In order to find this probability, we need to find the region over which we will integrate the joint pdf. To do this, look for the intersection of the region given by \(\{(x,y)\ |\ y < 0.5x\}\) with the region in Figure 2. The calculation is as follows:
P(Y<0.5X) &= \int^1_0\int^{0.5x}_0\! 3x\, dydx\\
&= \int^1_0 \!\left(3xy\Big|^{0.5x}_0\right) \,dx\\
&= \int^1_0 \!\left(\frac{3}{2}x^2-0\right) \,dx = \frac{1}{2}x^3\Big|^1_0\\
&=\frac{1}{2}
Thus, there is a 50% chance that the amount of gas sold in a given week is less than half of the gas stocked.
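Both results can be sanity-checked by simulation. Integrating the joint pdf over \(y\) gives the marginal \(f_X(x)=3x^2\) on \([0,1]\), whose cdf is \(x^3\), so we can sample \(X=U^{1/3}\) by inverse transform; given \(X=x\), the conditional density of \(Y\) is \(3x/(3x^2)=1/x\), i.e., uniform on \([0,x]\):

```python
import random

random.seed(7)
n = 500_000
joint_hits = half_hits = 0
for _ in range(n):
    x = random.random() ** (1 / 3)   # X has cdf x^3, so X = U^(1/3)
    y = random.uniform(0, x)         # Y | X = x is uniform on [0, x]
    if x <= 0.5 and y <= 1 / 3:
        joint_hits += 1
    if y < 0.5 * x:
        half_hits += 1
print(joint_hits / n)   # close to 0.1065 (= F(1/2, 1/3))
print(half_hits / n)    # close to 0.5    (= P(Y < 0.5X))
```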
As we did in the discrete case of jointly distributed random variables, we can also look at the expected value of jointly distributed continuous random variables. Again we focus on the expected value of functions applied to the pair \((X, Y)\), since expected value is defined for a single quantity. At this point, it should not surprise you that the following theorem is similar to Theorem 5.1.1, the result in the discrete setting, except the sums have been replaced by integrals.
Suppose that \(X\) and \(Y\) are jointly distributed continuous random variables with joint pdf \(f(x,y)\).
If \(g(X,Y)\) is a function of these two random variables, then its expected value is given by the following:
$$\text{E}[g(X,Y)] = \iint\limits_{\mathbb{R}^2}\!g(x,y)f(x,y)\,dxdy\notag$$
We will give an example applying Theorem 5.2.1 in an example below.
We can also define independent random variables in the continuous case, just as we did for discrete random variables.
Continuous random variables \(X_1, X_2, \ldots, X_n\) are independent if the joint pdf factors into a product of the marginal pdf's:
$$f(x_1, x_2, \ldots, x_n) = f_{X_1}(x_1)\cdot f_{X_2}(x_2) \cdots f_{X_n}(x_n).\notag$$
It is equivalent to check that this condition holds for the cumulative distribution functions.
Consider the continuous random variables defined in Example 5.2.1, where \(X\) and \(Y\) give the location of a radioactive particle. We will show that \(X\) and \(Y\) are independent and then verify that Theorem 5.1.2 also applies in the continuous setting.
Recall that we found the marginal pdf's to be the following:
f_X(x) &= 1,\ \text{for}\ 0\leq x\leq1 \\
f_Y(y) &= 1,\ \text{for}\ 0\leq y\leq 1
So, for \((x,y)\) in the unit square, i.e., \(0\leq x\leq1\) and \(0\leq y\leq 1\), we have
$$f(x,y) = 1 = 1\cdot1 =f_X(x)f_Y(y),\notag$$
and outside the unit square, at least one of marginal pdf's will be \(0\), so
$$f(x,y) = 0 = f_X(x)f_Y(y).\notag$$
We have thus shown that \(f(x,y)=f_X(x)\ f_Y(y)\), for all \((x,y)\in \mathbb{R}^2\), and so by Definition 5.2.3, \(X\) and \(Y\) are independent.
Now let's look at the expected value of the product of \(X\) and \(Y\). To compute this we apply Theorem 5.2.1:
$$\text{E}[XY] = \iint_{\mathbb{R}^2} \!xy\cdot f(x,y)\, dxdy = \int^1_0\int^1_0 \!xy\cdot1\, dxdy = \int^1_0 \!\left(\frac{x^2}{2}y\Big|^1_0\right)\, dy = \int^1_0 \frac{y}{2}\, dy = \frac{1}{4}\notag$$

Note that both \(X\) and \(Y\) are uniform on the interval \([0,1]\). Therefore, their expected values are both 1/2, the midpoint of \([0,1]\). Putting this all together, we have
$$\text{E}[XY] = \frac{1}{4} = \frac{1}{2}\cdot\frac{1}{2} = \text{E}[X]\ \text{E}[Y],\notag$$
which is the conclusion to Theorem 5.1.2.
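For this example the factorization \(\text{E}[XY]=\text{E}[X]\,\text{E}[Y]\) is easy to verify numerically; a short simulation sketch:

```python
import random

random.seed(0)
n = 1_000_000
sx = sy = sxy = 0.0
for _ in range(n):
    x, y = random.random(), random.random()   # independent uniforms on [0,1]
    sx += x
    sy += y
    sxy += x * y
ex, ey, exy = sx / n, sy / n, sxy / n
print(round(exy, 3), round(ex * ey, 3))   # both close to 0.25
```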
\begin{document}
\newcommand{\mathrm{Hom}}{\mathrm{Hom}} \newcommand{\mathrm{RHom}^*}{\mathrm{RHom}^*} \newcommand{\mathrm{HOM}}{\mathrm{HOM}} \newcommand{\underline{\mathrm{Hom}}}{\underline{\mathrm{Hom}}} \newcommand{\mathrm{Ext}}{\mathrm{Ext}} \newcommand{\mathrm{Tor}}{\mathrm{Tor}} \newcommand{\mathrm{HH}}{\mathrm{HH}} \newcommand{\mathrm{End}}{\mathrm{End}} \newcommand{\mathrm{END}}{\mathrm{END}} \newcommand{\mathrm{\underline{End}}}{\mathrm{\underline{End}}} \newcommand{\mathrm{Tr}}{\mathrm{Tr}}
\newcommand{\mathrm{coker}}{\mathrm{coker}} \newcommand{\mathrm{Aut}}{\mathrm{Aut}} \newcommand{\mathrm{op}}{\mathrm{op}} \newcommand{\mathrm{add}}{\mathrm{add}} \newcommand{\mathrm{ADD}}{\mathrm{ADD}} \newcommand{\mathrm{ind}}{\mathrm{ind}} \newcommand{\mathrm{rad}}{\mathrm{rad}} \newcommand{\mathrm{soc}}{\mathrm{soc}} \newcommand{\mathrm{ann}}{\mathrm{ann}} \newcommand{\mathrm{im}}{\mathrm{im}} \newcommand{\mathrm{char}}{\mathrm{char}} \newcommand{\mathrm{p.dim}}{\mathrm{p.dim}} \newcommand{\mathrm{gl.dim}}{\mathrm{gl.dim}}
\newcommand{\mbox{mod-}}{\mbox{mod-}} \newcommand{\mbox{Mod-}}{\mbox{Mod-}} \newcommand{\mbox{-mod}}{\mbox{-mod}} \newcommand{\mbox{-Mod}}{\mbox{-Mod}} \newcommand{\mbox{\underline{mod}-}}{\mbox{\underline{mod}-}} \newcommand{\mbox{-\underline{mod}}}{\mbox{-\underline{mod}}}
\newcommand{\gmod}[1]{\mbox{mod}_{#1}\mbox{-}} \newcommand{\gMod}[1]{\mbox{Mod}_{#1}\mbox{-}} \newcommand{\Bimod}[1]{\mathrm{Bimod}_{#1}\mbox{-}}
\newcommand{\mbox{proj-}}{\mbox{proj-}} \newcommand{\mbox{-proj}}{\mbox{-proj}} \newcommand{\mbox{Proj-}}{\mbox{Proj-}} \newcommand{\mbox{inj-}}{\mbox{inj-}}
\newcommand{\mbox{coh-}}{\mbox{coh-}} \newcommand{\mbox{CM}}{\mbox{CM}} \newcommand{\mbox{\underline{CM}}}{\mbox{\underline{CM}}}
\newcommand{\und}[1]{\underline{#1}} \newcommand{\gen}[1]{\langle #1 \rangle} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \newcommand{\ceil}[1]{\lceil #1 \rceil} \newcommand{\bnc}[2]{\left(\scriptsize \begin{array}{c} #1 \\ #2 \end{array} \right)} \newcommand{\bimo}[1]{{}_{#1}#1_{#1}} \newcommand{\ses}[5]{\ensuremath{0 \rightarrow #1 \stackrel{#4}{\longrightarrow}
#2 \stackrel{#5}{\longrightarrow} #3 \rightarrow 0}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\underline{\mathcal{B}}}{\underline{\mathcal{B}}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\underline{\mathcal{C}}}{\underline{\mathcal{C}}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\ul}[1]{\underline{#1}}
\newtheorem{therm}{Theorem}[section] \newtheorem{defin}[therm]{Definition} \newtheorem{propos}[therm]{Proposition} \newtheorem{lemma}[therm]{Lemma} \newtheorem{coro}[therm]{Corollary}
\title{Periodicity of $d$-cluster-tilted algebras} \author{Alex Dugas} \address{Department of Mathematics, University of the Pacific, 3601 Pacific Ave, Stockton, CA 95211, USA} \email{[email protected]}
\subjclass[2010]{16G10, 16E05, 18E30, 18A25, 16G50} \keywords{periodic algebra, maximal orthogonal subcategory, cluster-tilting object, higher Auslander algebra}
\begin{abstract} It is well-known that any maximal Cohen-Macaulay module over a hypersurface has a periodic free resolution of period $2$. Auslander, Reiten \cite{DTrPer} and Buchweitz \cite{Buch} have used this periodicity to explain the existence of periodic projective resolutions over certain finite-dimensional algebras which arise as stable endomorphism rings of Cohen-Macaulay modules. These algebras are in fact periodic, meaning that they have periodic projective resolutions as bimodules and thus periodic Hochschild cohomology as well. The goal of this article is to generalize this construction of periodic algebras to the context of Iyama's higher AR-theory. We let $\mathcal{C}$ be a maximal $(d-1)$-orthogonal subcategory of an exact Frobenius category $\mathcal{B}$, and start by studying the projective resolutions of finitely presented functors on the stable category $\underline{\mathcal{C}}$, over both $\underline{\mathcal{C}}$ and $\mathcal{C}$. Under the assumption that $\underline{\mathcal{C}}$ is fixed by $\Omega^{d}$, we show that $\Omega^{d}$ induces the $(2+d)^{th}$ syzygy on $\mbox{mod-} \underline{\mathcal{C}}$. If $\mathcal{C}$ has finite type, i.e., if $\mathcal{C} = \mathrm{add}(T)$ for a $d$-cluster tilting object $T$, then we show that the stable endomorphism ring of $T$ has a quasi-periodic resolution over its enveloping algebra. Moreover, this resolution will be periodic if some power of $\Omega^{d}$ is isomorphic to the identity on $\underline{\mathcal{C}}$. It follows, in particular, that $2$-C.Y.-tilted algebras arising as stable endomorphism rings of Cohen-Macaulay modules over curve singularities, as in the work of Burban, Iyama, Keller and Reiten \cite{BIKR}, have periodic bimodule resolutions of period $4$. \end{abstract}
\maketitle
\section{Introduction} \setcounter{equation}{0}
In this article we describe a new way of constructing finite-dimensional endomorphism algebras with periodic Hochschild (co)homology. In fact, we show that the endomorphism rings we consider are {\it periodic} in the sense that they have periodic projective resolutions over their enveloping algebras; i.e., $\Omega_{A^e}^n(A) \cong A$ as bimodules for some $n >0$. Among the most notable examples of finite-dimensional algebras with this property are the preprojective algebras of Dynkin graphs, which all have period $6$. This interesting fact was first proved by Ringel and Schofield through a calculation of the minimal projective bimodule resolutions of such algebras. Later, Auslander and Reiten \cite{DTrPer} gave an elegant functorial argument for this periodicity, making use of the fact that these preprojective algebras can be realized as stable endomorphism rings of Cohen-Macaulay modules (in fact, as stable Auslander algebras) over $2$-dimensional simple hypersurface singularities. Actually, their arguments establish a slightly weaker version of this periodicity, showing only that the sixth power of the syzygy functor is the identity. Motivated by these results, Buchweitz \cite{Buch} develops the functor category arguments of Auslander and Reiten to deduce the (full) periodicity of the preprojective algebras of Dynkin graphs from the isomorphisms $\Omega^2 \cong Id$ in the corresponding stable categories of CM-modules. More generally, his work shows how periodic algebras can arise as stable Auslander algebras of finite-type categories, and in particular as stable endomorphism rings of $\Omega$-periodic modules.
Iyama has recently developed higher-dimensional analogues of much of the classical Auslander-Reiten theory, including a theory of higher Auslander algebras \cite{Iyama1, Iyama2}. Thus it is natural to look for generalizations of Auslander, Reiten and Buchweitz's work on periodicity to this setting. One clue is already provided by recent work of Burban, Iyama, Keller and Reiten \cite{BIKR}, showing that symmetric algebras with $\tau$-period $2$ can be obtained as endomorphism rings of certain Cohen-Macaulay modules over $1$-dimensional hypersurface singularities. Among the algebras they realize in this way are several algebras of quaternion type, which Erdmann and Skowronski have shown are periodic of period $4$ \cite{ErdSko}. As Erdmann and Skowronski's result is obtained by computing minimal projective resolutions over enveloping algebras, our motivation is parallel to Buchweitz's in \cite{Buch}. That is, we aim to generalize Buchweitz's results to explain how the $2$-periodicity of the syzygy functor in the category of CM-modules implies the $4$-periodicity of the bimodule resolutions for the appropriate endomorphism rings.
It turns out that we can obtain periodic algebras more generally as endomorphism rings of periodic $d$-cluster-tilting objects in a triangulated category. These $d$-cluster-tilting objects are in fact the objects $T$ for which $\mathrm{add}(T)$ satisfies Iyama's definition of a maximal $(d-1)$-orthogonal subcategory. Hence our results are indeed analogues of Buchweitz's for Iyama's higher Auslander-Reiten theory. We summarize our main results (see Corollary 3.1 and Theorem 3.2) in the theorem below, where $\mathcal{B}$ denotes an exact Frobenius category with a Hom-finite stable category $\underline{\mathcal{B}}$.
\begin{therm} Let $T$ be a $d$-cluster tilting object in $\mathcal{B}$ (with $d \geq 1$) such that $\Omega^{d}T \cong T$ in $\underline{\mathcal{B}}$, and set $\Lambda = \mathrm{End}_{\mathcal{B}}(T)$ and $\Gamma = \mathrm{\underline{End}}_{\mathcal{B}}(T)$. If $\Gamma$ has no semisimple blocks, then
\begin{enumerate}
\item $\mathrm{Tor}_{i}^{\Lambda}(-,\Gamma) = 0$ on $\mbox{mod-} \Gamma$ for all $i \neq 0, d+1$.
\item $\Omega^{d+2}_{\Gamma^e}(\Gamma) \cong \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma) \cong \underline{\mathcal{B}}(T,\Omega^{d}T)$ is an invertible $(\Gamma,\Gamma)$-bimodule. Hence $\Gamma$ has a quasi-periodic projective resolution over its enveloping algebra $\Gamma^e$.
\item If $\Omega^{d}$ has order $r$ as a functor on $\ul{\mathrm{add}(T)}$, then $\Gamma$ is periodic with period dividing $(d+2)r$.
\end{enumerate}
\end{therm}
For $d=1$, the same conclusions were obtained by Buchweitz \cite{Buch} under the assumption (needed for (2) and (3)) that $\Lambda$ has Hochschild dimension $d+1=2$. He then applies it to an additive generator $T$ of the finite-type category $\mathcal{B} = \mbox{CM}(R)$ for a simple hypersurface singularity $R$ of dimension $2$ in order to deduce the periodicity of the preprojective algebras of Dynkin type. For $d=2$, we can again take $\mathcal{B} = \mbox{CM}(R)$ for an odd-dimensional isolated Gorenstein hypersurface (see \cite{Yosh} for instance). Since Eisenbud's matrix factorization theorem \cite{Eis} implies that $\Omega^2 \cong Id$ on $\underline{\mathcal{B}}$ in this case, any $2$-cluster-tilting object in $\underline{\mathcal{B}}$ is automatically $2$-periodic and thus has a stable endomorphism algebra which is periodic of period $4$. Existence of $2$-cluster-tilting objects in this setting has been studied by Burban, Iyama, Keller and Reiten \cite{BIKR}. We will discuss this and other potential applications further in the final section.
We typically work with right modules, unless noted otherwise. In this case morphisms are written on the left and composed from right to left. We also follow this convention for morphisms in abstract categories. For a category $\mathcal{A}$, we shall write $\mathcal{A}(X,Y)$ for the set of morphisms from $X$ to $Y$ in $\mathcal{A}$, and we shall write $\mathrm{Hom}_{\mathcal{A}}(-,-)$ for the morphism sets in categories of functors on $\mathcal{A}$, such as $\mbox{mod-} \mathcal{A}$. Likewise $\mathrm{Tor}^{\mathcal{A}}(-,-)$ and $\mathrm{Ext}_{\mathcal{A}}(-,-)$ will be reserved for $\mathcal{A}$-modules. We also follow the convention of writing $\Omega_{\mathcal{A}} M$ for the syzygy of an $\mathcal{A}$-module $M$ in order to distinguish it from the syzygy operator on $\mathcal{A}$ (provided this makes sense), which we write simply as $\Omega$.
\section{Functors on maximal orthogonal subcategories} \setcounter{equation}{0}
Throughout this article, we let $k$ be a field and assume that $\mathcal{B}$ is an exact Krull-Schmidt, Frobenius $k$-category, which arises as a full, extension-closed subcategory of an abelian category.
In particular, $\mathcal{B}$ has enough projectives and enough injectives and these coincide. We denote the stable category by $\underline{\mathcal{B}}$, which is a triangulated category with the cosyzygy functor $\Omega^{-1}$ as its suspension \cite{TCRTA}. In $\underline{\mathcal{B}}$ we will often write $X[i]$ for the $i^{th}$ suspension $\Omega^{-i}X$ of $X$. We write $\ul{f}$ for the residue class in $\underline{\mathcal{B}}$ of a map $f$ in $\mathcal{B}$. We further assume that all the Hom-spaces $\underline{\mathcal{B}}(X,Y)$ are finite-dimensional over $k$. Typically, we have in mind for $\mathcal{B}$ either (an exact subcategory of) $\mbox{mod-} A$ for a finite-dimensional self-injective $k$-algebra $A$ or else the category $\mbox{CM}(R)$ of maximal Cohen-Macaulay modules over an isolated Gorenstein singularity $R$ (containing $k$).
For a subcategory $\mathcal{C}$ of $\mathcal{B}$, recall that a {\it right $\mathcal{C}$-approximation} of $X \in \mathcal{B}$ consists of a map $f: C_0 \rightarrow X$ with $C_0 \in \mathcal{C}$ such that any map $h : C \rightarrow X$ with $C \in \mathcal{C}$ can be factored through $f$. The notion of a {\it left $\mathcal{C}$-approximation} $g: X \rightarrow C_0$ is defined dually. The subcategory $\mathcal{C}$ is said to be {\it functorially finite} in $\mathcal{B}$ if each object of $\mathcal{B}$ has both right and left $\mathcal{C}$-approximations. Note that this condition is equivalent to requiring that the functors $\mathcal{B}(-,X)|_{\mathcal{C}}$ and $\mathcal{B}(X,-)|_{\mathcal{C}}$ are finitely generated (as functors from $\mathcal{C}$ to $\mbox{mod-} k$) for each $X \in \mathcal{B}$. Following Iyama \cite{Iyama1}, we say that a functorially finite subcategory $\mathcal{C}$ of $\mathcal{B}$ is {\it maximal $(d-1)$-orthogonal} if
\begin{eqnarray} \mathcal{C} & = & \{X \in \mathcal{B}\ |\ \underline{\mathcal{B}}(X,\mathcal{C}[i]) = 0, \forall\ 1\leq i < d\} = \{Y \in \mathcal{B}\ |\ \underline{\mathcal{B}}(\mathcal{C},Y[i]) = 0, \forall\ 1\leq i < d\}. \end{eqnarray}
We shall henceforth assume that $\mathcal{C}$ is a functorially finite, maximal $(d-1)$-orthogonal subcategory of $\mathcal{B}$ for some $d \geq 1$. In particular, $\mathcal{C}$ must contain all the projectives in $\mathcal{B}$ and we have $\underline{\mathcal{B}}(\mathcal{C},\mathcal{C}[i]) = 0$ for all $1 \leq i < d$. It is also easy to see that the induced subcategory $\underline{\mathcal{C}}$ of $\underline{\mathcal{B}}$ remains functorially finite and maximal orthogonal, and thus we may also view $\underline{\mathcal{C}}$ as a maximal $(d-1)$-orthogonal subcategory of $\underline{\mathcal{B}}$. If $\mathcal{C} = \mathrm{add}(T)$ for an object $T \in \mathcal{B}$, then we say that $T$ is a {\it $d$-cluster tilting object} (in $\mathcal{B}$ or in $\underline{\mathcal{B}}$). Notice that in this case $\mathcal{C}$ will automatically be functorially finite. Indeed, $\underline{\mathcal{C}}$ will be a finite type subcategory of $\underline{\mathcal{B}}$, whose Hom-spaces we have assumed to be finite-dimensional over $k$. Thus, any $X \in \mathcal{B}$ has a right $\underline{\mathcal{C}}$-approximation $\ul{f} : C_0 \rightarrow X$ in $\underline{\mathcal{B}}$. Then the map $(f\ p): C_0 \oplus P \rightarrow X$, where $p : P \rightarrow X$ is a projective cover of $X$ in $\mathcal{B}$, gives a right $\mathcal{C}$-approximation of $X$. The existence of left $\mathcal{C}$-approximations is established dually.
We point out that for $d=1$ this definition forces $\mathcal{C} = \mathcal{B}$, which brings us back essentially to the setting considered by Auslander and Reiten in \cite{SEAA} and Buchweitz in \cite{Buch}. With $\mathcal{C}$ and $d$ fixed we also define subcategories \begin{eqnarray}
\mathcal{E}_j & = & \{ X \in \mathcal{B}\ |\ \underline{\mathcal{B}}(\mathcal{C},X[i]) = 0\ \mbox{for}\ 1 \leq i \leq d-1\ \mbox{and}\ i \neq j\} \end{eqnarray} for each $1 \leq j \leq d$. Notice that $\mathcal{E}_{d} = \mathcal{C}$ and $\mathcal{C} \cup \mathcal{C}[1] \subseteq \mathcal{E}_{d-1}$. If $d=2$, then the defining condition for $\mathcal{E}_1$ becomes vacuous, and so in this case we set $\mathcal{E}_1 = \mathcal{B}$.
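For instance, unwinding the definition for $d=3$ gives $$\mathcal{E}_1 = \{ X \in \mathcal{B}\ |\ \underline{\mathcal{B}}(\mathcal{C},X[2]) = 0\}, \ \ \ \mathcal{E}_2 = \{ X \in \mathcal{B}\ |\ \underline{\mathcal{B}}(\mathcal{C},X[1]) = 0\} \ \ \ \mbox{and} \ \ \ \mathcal{E}_3 = \mathcal{C},$$ the last equality holding by (2.1). Thus, for $1 \leq j < d$, each $\mathcal{E}_j$ is obtained by dropping exactly one of the orthogonality conditions from the second description of $\mathcal{C}$ in (2.1).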
Our main results require an additional stronger vanishing condition on $\mathcal{C}$. Fortunately, it turns out to be equivalent to a more natural (and more easily checked) periodicity condition, as we now verify.
\begin{lemma} For $\mathcal{C}$ and $\mathcal{B}$ as above, the following are equivalent. \begin{enumerate} \item $\underline{\mathcal{B}}(\mathcal{C},\mathcal{C}[i]) = 0$ for all $i$ with $-d < i \leq -1$. \item $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$; that is, $\Omega^{d}C \in \mathcal{C}$ for each $C \in \mathcal{C}$. \end{enumerate} \end{lemma}
\noindent {\it Proof.} For $X \in \mathcal{C}$, notice that $X[d] \in \mathcal{C}$ if and only if $\underline{\mathcal{B}}(X[d],\mathcal{C}[i]) = 0$ for $1 \leq i < d$, which is equivalent to $\underline{\mathcal{B}}(X,\mathcal{C}[j]) = 0$ for $-d < j \leq -1$. $\Box$ \\
We will often assume that $\mathcal{C}$ satisfies the two equivalent conditions of the above lemma. Note that these are automatic for $d=1$ and $\mathcal{C} = \mathcal{B}$. In case $\underline{\mathcal{B}}$ has Serre duality $\underline{\mathcal{B}}(X,SY) \cong D\underline{\mathcal{B}}(Y,X)$ for an auto-equivalence $S$ of $\underline{\mathcal{B}}$, with $D$ denoting the duality $\mathrm{Hom}_k(-,k)$, then the above conditions are easily seen to be equivalent to $S(\mathcal{C}) = \mathcal{C}$.
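For the reader's convenience, we sketch the last equivalence. Since $S$ commutes with the suspension, Serre duality gives, for all $C, C' \in \mathcal{C}$ and $1 \leq i < d$, $$\underline{\mathcal{B}}(C,C'[-i]) \cong D\underline{\mathcal{B}}(C',(SC)[i]) \ \ \ \mbox{and} \ \ \ \underline{\mathcal{B}}(C,C'[-i]) \cong D\underline{\mathcal{B}}(S^{-1}C',C[i]).$$ Hence condition (1) of the lemma holds if and only if $SC$ lies in the second set of (2.1) for each $C \in \mathcal{C}$, and likewise if and only if $S^{-1}C'$ lies in the first set of (2.1) for each $C' \in \mathcal{C}$. Together, the resulting inclusions $S(\mathcal{C}) \subseteq \mathcal{C}$ and $S^{-1}(\mathcal{C}) \subseteq \mathcal{C}$ amount to $S(\mathcal{C}) = \mathcal{C}$.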
The following lemma is useful for obtaining exact sequences in $\mathcal{B}$, which may fail to be an abelian category. It implies, in particular, that $\mathcal{B}$ has {\it plenty of projectives} in the terminology of \cite{Buch}.
\begin{lemma} For any map $f : X \rightarrow Y$ in $\mathcal{B}$, there exists an object $Z$ and a projective $P$ in $\mathcal{B}$ such that $\ses{Z}{X\oplus P}{Y}{\bnc{g}{i}}{(f\ p)}$ is exact in $\mathcal{B}$. Moreover, there is a distinguished triangle $Z \stackrel{\ul{g}}{\rightarrow} X \stackrel{\ul{f}}{\rightarrow} Y \rightarrow$ in $\underline{\mathcal{B}}$, which determines $Z$ and $\ul{g}$ up to isomorphism in $\underline{\mathcal{B}}$.
\end{lemma}
\noindent {\it Proof.} Forming the pull-back of the exact sequence $\ses{\Omega Y}{P}{Y}{}{}$, where $P$ is projective, with respect to the map $f : X \rightarrow Y$ yields a commutative diagram in which the rows are exact sequences in $\mathcal{B}$: $$\xymatrix{0 \ar[r] & \Omega Y \ar@{=}[d] \ar[r] & Z \ar[d] \ar[r] & X \ar[d]^f \ar[r] &0 \\ 0 \ar[r] & \Omega Y \ar[r] & P \ar[r]^p & Y \ar[r] & 0}.$$ Thus the sequence $\ses{Z}{X \oplus P}{Y}{}{(f\ p)}$ from the pull-back square is the one we want. The second claim now follows from Lemma 2.7 in \cite{TCRTA} and the axioms for triangulated categories. $\Box$\\
We use the standard notation $\mbox{mod-} \mathcal{C}$ and $\mbox{mod-} \underline{\mathcal{C}}$ for the categories of finitely presented contravariant $k$-linear functors from $\mathcal{C}$ and $\underline{\mathcal{C}}$, respectively, to $\mbox{mod-} k$. We also write $\mbox{\underline{mod}-} \underline{\mathcal{C}}$ for the stable category obtained from $\mbox{mod-} \underline{\mathcal{C}}$ by factoring out the ideal of morphisms that factor through a projective. As we only consider functors on $\mathcal{C}$ or $\underline{\mathcal{C}}$, and never on $\mathcal{B}$, all representable functors $\mathcal{B}(-,X)$ or $\underline{\mathcal{B}}(-,X)$ are to be interpreted as restricted to $\mathcal{C}$, and we forgo writing $\mathcal{B}(-,X)|_{\mathcal{C}}$ for the restriction. We observe that our assumptions guarantee that all such representable functors belong to $\mbox{mod-} \mathcal{C}$ and $\mbox{mod-} \underline{\mathcal{C}}$, respectively. Indeed, we may complete a right $\underline{\mathcal{C}}$-approximation $\ul{f} : C_0 \rightarrow X$ to a triangle $Y \stackrel{\ul{g}}{\longrightarrow} C_0 \stackrel{\ul{f}}{\longrightarrow} X \rightarrow$, and then take a right $\underline{\mathcal{C}}$-approximation $\ul{h} : C_1 \rightarrow Y$. This construction yields a projective presentation \begin{eqnarray} \underline{\mathcal{B}}(-,C_1) \stackrel{\underline{\mathcal{B}}(-,\ul{gh})}{\longrightarrow} \underline{\mathcal{B}}(-,C_0) \stackrel{\underline{\mathcal{B}}(-,\ul{f})}{\longrightarrow} \underline{\mathcal{B}}(-,X) \rightarrow 0. \end{eqnarray}
Moreover, by the preceding lemma, these triangles may be lifted to short exact sequences $$\ses{Y \oplus Q}{C_0\oplus P_0}{X}{\bnc{g\ *}{*\ *}}{(f\ p)} \ \ \ \mbox{and}\ \ \ \ses{Z}{C_1 \oplus P_1}{Y \oplus Q}{}{\bnc{h\ *}{*\ *}}$$ with $P_0, P_1$ and $Q$ projective, which also yield right $\mathcal{C}$-approximations of $X$ and $Y \oplus Q$ respectively. Splicing together the induced exact sequences of representable functors yields a projective presentation
\begin{eqnarray} \mathcal{B}(-,C_1\oplus P_1) \stackrel{\mathcal{B}(-,\varphi)}{\longrightarrow} \mathcal{B}(-,C_0 \oplus P_0) \stackrel{\mathcal{B}(-,(f\ p))}{\longrightarrow} \mathcal{B}(-,X) \rightarrow 0\end{eqnarray}
where $\varphi$ has the form $\scriptsize \left( \begin{array}{cc} gh & * \\ * & * \end{array}\right)$. Furthermore, we can now see that the representable functor $\underline{\mathcal{B}}(-,X)$ is also in $\mbox{mod-} \mathcal{C}$ since it arises as the cokernel of the map $\mathcal{B}(-,P_X) \stackrel{\mathcal{B}(-,\pi_X)}{\longrightarrow} \mathcal{B}(-,X)$ induced by the projective cover $\pi_X : P_X \rightarrow X$.
Our current goal is to describe the projective resolutions of finitely presented $\underline{\mathcal{C}}$-modules in both $\mbox{mod-} \mathcal{C}$ and $\mbox{mod-} \underline{\mathcal{C}}$. We start with a simple but important observation that generalizes a theorem of Buan, Marsh and Reiten for $2$-cluster tilting objects in cluster categories \cite{CTA} (see also Corollary 6.4 in \cite{IY}). For a subcategory $\mathcal{A}$ of $\mathcal{B}$ we write $\gen{\mathcal{A}}$ for the ideal of $\mathcal{B}$ generated by the identity morphisms of the objects of $\mathcal{A}$.
\begin{lemma} Let $\mathcal{B}$ and $\mathcal{C}$ be as above, and assume $d \geq 2$. \begin{enumerate} \item For any $M \in \mbox{mod-} \underline{\mathcal{C}}$, we have $M \cong \underline{\mathcal{B}}(-,X)$ for some $X \in \mathcal{E}_{d-1}$ (without projective summands). \item The functor $\eta: \underline{\mathcal{B}} \longrightarrow \mbox{mod-} \underline{\mathcal{C}}$ given by $\eta(X) = \underline{\mathcal{B}}(-,X)$ is full and dense. Moreover, the restriction of $\eta$ to $\ul{\mathcal{E}_{d-1}}$ induces a category equivalence $$\eta: \ul{\mathcal{E}_{d-1}}/\gen{\underline{\mathcal{C}}[1]} \stackrel{\approx}{\longrightarrow} \mbox{mod-} \underline{\mathcal{C}}.$$ \end{enumerate}
In particular, if $\underline{\mathcal{B}}$ has finite type, then so does $\mbox{mod-} \underline{\mathcal{C}}$. \end{lemma}
\noindent {\it Proof.} A minimal projective presentation of $M$ in $\mbox{mod-} \underline{\mathcal{C}}$ has the form \begin{eqnarray} \underline{\mathcal{B}}(-,C_1) \stackrel{\underline{\mathcal{B}}(-,f)}{\longrightarrow} \underline{\mathcal{B}}(-,C_0) \longrightarrow M \rightarrow 0 \end{eqnarray} for a map $f : C_1 \rightarrow C_0$ in $\mathcal{C}$. We can complete $\ul{f}$ to a triangle $C_1 \stackrel{\ul{f}}{\longrightarrow} C_0 \stackrel{\ul{g}}{\longrightarrow} X \longrightarrow$ in $\underline{\mathcal{B}}$. The long-exact Hom-sequence now yields the exact sequence (using $d \geq 2$) \begin{eqnarray} \underline{\mathcal{B}}(-,C_1) \stackrel{\underline{\mathcal{B}}(-,f)}{\longrightarrow} \underline{\mathcal{B}}(-,C_0) \stackrel{\underline{\mathcal{B}}(-,g)}{\longrightarrow} \underline{\mathcal{B}}(-,X) \longrightarrow \underline{\mathcal{B}}(-,C_1[1]) = 0, \end{eqnarray}
whence $M \cong \underline{\mathcal{B}}(-,X)$. Furthermore, the exact sequences $$0=\underline{\mathcal{B}}(-,C_0[i]) \longrightarrow \underline{\mathcal{B}}(-,X[i]) \longrightarrow \underline{\mathcal{B}}(-,C_1[i+1])=0$$ for $1 \leq i \leq d-2$ show that $X \in \mathcal{E}_{d-1}$.
It follows easily that $\eta$ (even restricted to $\ul{\mathcal{E}_{d-1}}$) is full and dense, so we need only compute its kernel on $\ul{\mathcal{E}_{d-1}}$. Clearly the kernel contains the ideal $\gen{\underline{\mathcal{C}}[1]}$ since $\underline{\mathcal{B}}(-,C[1]) = 0$ for all $C \in \mathcal{C}$. Now let $f : X \rightarrow Y$ be a map between two objects of $\ul{\mathcal{E}_{d-1}}$ such that $\underline{\mathcal{B}}(C,f) = 0$ for all $C \in \mathcal{C}$. If we complete a right $\underline{\mathcal{C}}$-approximation $g: C_0 \rightarrow X$ to a triangle $Z \longrightarrow C_0 \longrightarrow X \rightarrow$ in $\underline{\mathcal{B}}$, then the induced long exact sequence of representable functors on $\mathcal{C}$ shows that $Z \in \mathcal{E}_{d} = \mathcal{C}$. As $fg = 0$ by assumption, we know that $f$ must factor through the connecting morphism $X \rightarrow Z[1]$, whence $f$ is in the ideal generated by $\underline{\mathcal{C}}[1]$. $\Box$\\
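In more detail, the final factoring step in the above proof runs as follows. Writing $\ul{w} : X \rightarrow Z[1]$ for the connecting morphism of the triangle $Z \longrightarrow C_0 \stackrel{\ul{g}}{\longrightarrow} X \stackrel{\ul{w}}{\longrightarrow} Z[1]$ and applying $\underline{\mathcal{B}}(-,Y)$ yields the exact sequence $$\underline{\mathcal{B}}(Z[1],Y) \longrightarrow \underline{\mathcal{B}}(X,Y) \longrightarrow \underline{\mathcal{B}}(C_0,Y).$$ Since $\ul{f}$ is sent to $\ul{f}\ul{g} = 0$, we may write $\ul{f} = \ul{h}\ul{w}$ for some $\ul{h} : Z[1] \rightarrow Y$, and $Z[1] \in \underline{\mathcal{C}}[1]$ then shows that $\ul{f} \in \gen{\underline{\mathcal{C}}[1]}$.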
\noindent {\bf Remark.} Of course, the final statement fails for $d=1$ as it is well known that the stable Auslander algebra of a self-injective algebra of finite representation type usually has infinite representation type.\\
Before going on, we pause briefly to review some basics about finitely-presented functors and to explain some of our notation. These facts are essentially due to Auslander and Reiten \cite{SEDRV}, but we shall follow the notation of \S 3 of \cite{Buch}. Corresponding to the natural functor $p : \mathcal{C} \rightarrow \underline{\mathcal{C}}$, we have a restriction functor $p_* : \mbox{mod-} \underline{\mathcal{C}} \rightarrow \mbox{mod-} \mathcal{C}$, which is full and faithful and identifies $\mbox{mod-} \underline{\mathcal{C}}$ with the full subcategory of $\mbox{mod-} \mathcal{C}$ consisting of functors that vanish on projectives. Moreover, $p_*$ has a right-exact left adjoint $p^*$ that is determined by $p^*\mathcal{B}(-,C) = \underline{\mathcal{B}}(-,C)$ for each $C \in \mathcal{C}$. We interpret this functor, which takes $\mathcal{C}$-modules to $\underline{\mathcal{C}}$-modules, as tensoring with $\underline{\mathcal{C}}$ over $\mathcal{C}$, and we write $\mathrm{Tor}^{\mathcal{C}}_*(-,\underline{\mathcal{C}})$ for its left derived functors. Furthermore, by considering the projective presentations (2.4) and (2.3), we see that in fact $p^*\mathcal{B}(-,X) \cong \underline{\mathcal{B}}(-,X)$ for all $X \in \mathcal{B}$.
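For instance, the last isomorphism may be verified as follows. Applying the right-exact functor $p^*$ to the presentation (2.4) and using $p^*\mathcal{B}(-,C) = \underline{\mathcal{B}}(-,C)$ yields the exact sequence $$\underline{\mathcal{B}}(-,C_1 \oplus P_1) \stackrel{\underline{\mathcal{B}}(-,\ul{\varphi})}{\longrightarrow} \underline{\mathcal{B}}(-,C_0 \oplus P_0) \longrightarrow p^*\mathcal{B}(-,X) \rightarrow 0.$$ As $\underline{\mathcal{B}}(-,P) = 0$ for every projective $P$, the summands coming from $P_0$ and $P_1$ vanish and $\underline{\mathcal{B}}(-,\ul{\varphi})$ reduces to $\underline{\mathcal{B}}(-,\ul{gh})$, so this sequence has the same cokernel as (2.3); hence $p^*\mathcal{B}(-,X) \cong \underline{\mathcal{B}}(-,X)$.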
\begin{propos} Let $M \in \mbox{mod-} \underline{\mathcal{C}}$, and assume that $d \geq 2$ and $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$. \begin{enumerate} \item There is a projective presentation of $M$ in $\mbox{mod-} \mathcal{C}$ of the form $$\ses{\mathcal{B}(-,\Omega X) \longrightarrow \mathcal{B}(-,C_1)}{\mathcal{B}(-,C_0)}{M}{\mathcal{B}(-,f)}{}$$ for $C_0, C_1 \in \mathcal{C}$ and some $X \in \mathcal{B}$ with $M \cong \underline{\mathcal{B}}(-,X)$.
\item Via $p^*$, the above sequence induces the following projective presentation of $M$ in $\mbox{mod-} \underline{\mathcal{C}}$ $$\ses{\underline{\mathcal{B}}(-,\Omega X) \longrightarrow \underline{\mathcal{B}}(-,C_1)}{\underline{\mathcal{B}}(-,C_0)}{M}{\underline{\mathcal{B}}(-,f)}{}.$$ \item For any $X \in \ul{\mathcal{E}_{d-1}}$ we have a natural isomorphism $\Omega^2_{\underline{\mathcal{C}}}[\underline{\mathcal{B}}(-,X)] \cong \underline{\mathcal{B}}(-,\Omega X)$ in $\mbox{\underline{mod}-} \underline{\mathcal{C}}$.
\end{enumerate} \end{propos}
\noindent {\it Proof.} As in the preceding proof we can find $X \in \mathcal{E}_{d-1}$ with $M \cong \underline{\mathcal{B}}(-,X)$. For simplicity, we assume that $X$ has no projective summands. Keeping the notation introduced above and continuing the sequence (2.5) to the left, we obtain the exact sequence $$\ses{\underline{\mathcal{B}}(-,\Omega X) \longrightarrow \underline{\mathcal{B}}(-,C_1)}{\underline{\mathcal{B}}(-,C_0)}{M}{\underline{\mathcal{B}}(-,f)}{}$$ as $\underline{\mathcal{B}}(-,C_0[-1])=0$. This sequence establishes (2) and also induces the isomorphism in (3), which can be seen to be natural in $X \in \ul{\mathcal{E}_{d-1}}$. Using Lemma 2.2 we now lift the triangle $C_1 \stackrel{\ul{f}}{\longrightarrow} C_0 \stackrel{\ul{g}}{\longrightarrow} X \longrightarrow$ to a short exact sequence $\ses{C_1 \oplus P_1}{C_0 \oplus P_0}{X}{}{(g\ p)}$ in $\mathcal{B}$ with $P_0, P_1$ projective. Notice that $(g\ p)$ is a right $\mathcal{C}$-approximation, since $\ul{g}$ is a right $\underline{\mathcal{C}}$-approximation by (2.6). It follows that \begin{eqnarray} \ses{\mathcal{B}(-,C_1 \oplus P_1)}{\mathcal{B}(-,C_0 \oplus P_0)}{\mathcal{B}(-,X)}{}{} \end{eqnarray}
is a projective resolution of $\mathcal{B}(-,X)$ in $\mbox{mod-} \mathcal{C}$. Taking a projective cover $\pi_X$ of $X$, the short exact sequence $\ses{\Omega X}{P_X}{X}{}{\pi_X}$ yields the exact sequence
\begin{eqnarray}
\ses{\mathcal{B}(-,\Omega X) \longrightarrow \mathcal{B}(-,P_X)}{\mathcal{B}(-,X)}{\underline{\mathcal{B}}(-,X)}{\mathcal{B}(-,\pi_X)}{}
\end{eqnarray} in $\mbox{mod-} \mathcal{C}$. Writing $\mathcal{P}(-,X)$ for the image of $\mathcal{B}(-,\pi_X)$, we can obtain the projective presentation of $M \cong \underline{\mathcal{B}}(-,X)$ as the mapping cone of the map from the sequence $$\ses{\mathcal{B}(-,\Omega X)}{\mathcal{B}(-,P_X)}{\mathcal{P}(-,X)}{}{}$$ to the sequence (2.7) which is induced by the inclusion $\mathcal{P}(-,X) \rightarrow \mathcal{B}(-,X)$. Renaming $C_0 := C_0 \oplus P_0$ and $C_1 := C_1 \oplus P_1 \oplus P_X$ we see that this mapping cone has the desired form as in (1). $\Box$\\
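Spelled out, the mapping cone appearing at the end of this proof is the complex $$0 \rightarrow \mathcal{B}(-,\Omega X) \rightarrow \mathcal{B}(-,P_X) \oplus \mathcal{B}(-,C_1 \oplus P_1) \rightarrow \mathcal{B}(-,C_0 \oplus P_0) \rightarrow 0.$$ Its homology is concentrated in degree zero, where it equals the cokernel $\underline{\mathcal{B}}(-,X)$ of the inclusion $\mathcal{P}(-,X) \rightarrow \mathcal{B}(-,X)$, precisely because this inclusion, which is the induced map on degree-zero homology, is injective. After the renaming above, this is the projective presentation displayed in (1).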
\noindent {\bf Remark.} If $d=1$ and $\mathcal{C} = \mathcal{B}$, then the entire projective resolution of any $M = \underline{\mathcal{B}}(-,X)$ in $\mbox{mod-} \mathcal{C}$ has the form (2.8) (cf. \cite{SEAA, SEDRV}), which is an instance of the presentation in part (1) of the proposition. Thus, part (1) remains true in case $d=1$. On the other hand, parts (2) and (3) of the proposition do not have interesting analogues in this case, since $M = \underline{\mathcal{B}}(-,X)$ will be projective in $\mbox{mod-} \underline{\mathcal{C}}$. Part (1), however, would yield a natural isomorphism $\Omega^2_{\mathcal{C}}[\underline{\mathcal{B}}(-,X)] \cong \mathcal{B}(-,\Omega X)$ in $\mbox{\underline{mod}-} \mathcal{C}$ for any $X \in \underline{\mathcal{B}}$, which resembles the isomorphism in (3).\\
We now describe the remaining terms of these projective resolutions for arbitrary $d \geq 2$. Unfortunately, $X \in \mathcal{E}_{d-1}$ usually does not imply $\Omega X \in \mathcal{E}_{d-1}$, and hence we cannot simply repeat the above construction to build a projective resolution in $\mbox{mod-} \underline{\mathcal{C}}$. However, we shall see that the construction can be iterated once the first $d+2$ terms of the resolution have been found.
\begin{therm} Let $\mathcal{C}$ be a maximal $(d-1)$-orthogonal subcategory of $\mathcal{B}$ with $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$ and $d \geq 2$, and let $M \in \mbox{mod-} \underline{\mathcal{C}}$. \begin{enumerate} \item $M$ has a projective resolution in $\mbox{mod-} \mathcal{C}$ of the form $$\ses{\mathcal{B}(-,C_{d+1}) \rightarrow \cdots \rightarrow \mathcal{B}(-,C_1)}{\mathcal{B}(-,C_0)}{M}{}{}$$ with each $C_i \in \mathcal{C}$. \item The induced sequence of functors on $\underline{\mathcal{C}}$ $$\ses{\mathrm{Tor}^{\mathcal{C}}_{d+1}(M,\underline{\mathcal{C}})}{\underline{\mathcal{B}}(-,C_{d+1}) \rightarrow \cdots \rightarrow \underline{\mathcal{B}}(-,C_0)}{M}{}{}$$ is exact, and hence yields the first $d+2$ terms of a projective resolution for $M$ in $\mbox{mod-} \underline{\mathcal{C}}$. \item $\mathrm{Tor}_i^{\mathcal{C}}(M,\underline{\mathcal{C}}) = 0$ for all $i \neq 0, d+1$. \item We have isomorphisms $\mathrm{Tor}^{\mathcal{C}}_{d+1}(M,\underline{\mathcal{C}}) \cong \Omega^{d+2}_{\underline{\mathcal{C}}} (M)$ in $\mbox{\underline{mod}-} \underline{\mathcal{C}}$ which are natural in $M$. \item For any $X \in \ul{\mathcal{E}_{d-1}}$, we have a natural isomorphism $\Omega^{d+2}_{\underline{\mathcal{C}}}[\underline{\mathcal{B}}(-,X)] \cong \underline{\mathcal{B}}(-,\Omega^{d}X)$ in $\mbox{\underline{mod}-} \underline{\mathcal{C}}$. \end{enumerate} \end{therm}
\noindent {\it Proof.} As in Proposition 2.3, there is a triangle $C_1 \rightarrow C_0 \rightarrow X \rightarrow$ in $\underline{\mathcal{B}}$ with $M \cong \underline{\mathcal{B}}(-,X)$ and $X \in \mathcal{E}_{d-1}$. Thus $\Omega X = X[-1] \in \mathcal{E}_1$. We set $L_1 : = \Omega X$, and recursively define $L_j$ for $j \geq 2$ as follows: Take a right $\underline{\mathcal{C}}$-approximation $f_j : C_j \rightarrow L_{j-1}$ and complete it to a triangle $L_j \longrightarrow C_j \stackrel{f_j}{\longrightarrow} L_{j-1} \rightarrow$ in $\underline{\mathcal{B}}$.
We prove by induction that
\begin{itemize}
\item[(i)] $L_j \in \mathcal{E}_j$ for each $1 \leq j \leq d$; and
\item[(ii)] $\underline{\mathcal{B}}(-,L_j[j-d]) \cong \underline{\mathcal{B}}(-,X[-d])$ for $1 \leq j \leq d-1$.
\end{itemize} For $j=1$, we have already noted that (i) holds, and (ii) is trivial. Now assume that both statements hold for some $j$ with $1 \leq j < d$. We consider the exact sequences in $\mbox{mod-} \underline{\mathcal{C}}$ for various $i$ $$\underline{\mathcal{B}}(-,L_j[i-1]) \longrightarrow \underline{\mathcal{B}}(-,L_{j+1}[i]) \longrightarrow \underline{\mathcal{B}}(-,C_{j+1}[i]).$$ By hypothesis, the first term vanishes for all $i$ with $2 \leq i \leq d$ and $i \neq j+1$; while the third term vanishes for all $i$ with $1 \leq i \leq d-1$. We thus see that the middle term vanishes for all $i \neq j+1$ with $2 \leq i \leq d-1$. It vanishes for $i=1$ since $f_j$ is a right $\underline{\mathcal{C}}$-approximation, making $\underline{\mathcal{B}}(-,f_j)$ surjective. This establishes $L_{j+1} \in \mathcal{E}_{j+1}$. In particular, observe that $C_{d+1} := L_{d} \in \mathcal{E}_{d} = \mathcal{C}$. To see (ii), assume $j < d-1$ and notice that $\underline{\mathcal{B}}(-,L_{j+1}[j+1-d]) \cong \underline{\mathcal{B}}(-,L_j[j-d]) \cong \underline{\mathcal{B}}(-,X[-d])$ since $\underline{\mathcal{B}}(\mathcal{C},C_{j+1}[i]) = 0$ for $i = j+1-d, j-d$.
For each $j$ with $1 \leq j \leq d-2$ we now have a short exact sequence \begin{eqnarray} \ses{\underline{\mathcal{B}}(-,L_{j+1})}{\underline{\mathcal{B}}(-,C_{j+1})}{\underline{\mathcal{B}}(-,L_{j})}{}{}\end{eqnarray} in $\mbox{mod-} \underline{\mathcal{C}}$ since $\underline{\mathcal{B}}(\mathcal{C},L_j[-1]) \cong \underline{\mathcal{B}}(\mathcal{C}[d],L_j[d-1]) \cong \underline{\mathcal{B}}(\mathcal{C},L_j[d-1])=0$ and $f_{j+1}$ is a right $\underline{\mathcal{C}}$-approximation. Similarly, for $j = d-1$, the triangle $C_{d+1} \longrightarrow C_{d} \longrightarrow L_{d-1} \rightarrow$ induces an exact sequence $$\ses{\underline{\mathcal{B}}(-,L_{d-1}[-1]) \longrightarrow \underline{\mathcal{B}}(-,C_{d+1})}{\underline{\mathcal{B}}(-,C_d)}{\underline{\mathcal{B}}(-,L_{d-1})}{}{}.$$ Splicing these sequences together and using the isomorphism $\underline{\mathcal{B}}(-,L_{d-1}[-1]) \cong \underline{\mathcal{B}}(-,\Omega^{d}X)$ from (ii) yields an exact sequence \begin{eqnarray} 0 \rightarrow \underline{\mathcal{B}}(-,\Omega^d X) \rightarrow \underline{\mathcal{B}}(-,C_{d+1}) \rightarrow \cdots \rightarrow \underline{\mathcal{B}}(-,C_2) \rightarrow \underline{\mathcal{B}}(-,\Omega X) \rightarrow 0 \end{eqnarray} in $\mbox{mod-} \underline{\mathcal{C}}$, which can be viewed as the beginning of a projective resolution for $\underline{\mathcal{B}}(-,\Omega X)$. Now splicing (2.10) with the projective presentation from Proposition 2.4(2) gives the first $d+2$ terms of a projective resolution for $M$ in $\mbox{mod-} \underline{\mathcal{C}}$. The isomorphism in (5) follows, and its naturality is a routine verification.
At the same time, applying Lemma 2.2 to each triangle $L_j \longrightarrow C_j \stackrel{f_j}{\longrightarrow} L_{j-1} \rightarrow$ we obtain exact sequences $\ses{L_j}{C_j \oplus P_j}{L_{j-1}}{}{}$ in $\mathcal{B}$ and exact sequences $\ses{\mathcal{B}(-,L_j)}{\mathcal{B}(-,C_j \oplus P_j)}{\mathcal{B}(-,L_{j-1})}{}{}$ in $\mbox{mod-} \mathcal{C}$. Splicing these together, we obtain a projective resolution for $\mathcal{B}(-,\Omega X)$ in $\mbox{mod-} \mathcal{C}$ \begin{eqnarray} \ses{\mathcal{B}(-,C_{d+1})}{\mathcal{B}(-,C_{d} \oplus P_{d}) \rightarrow \cdots \rightarrow \mathcal{B}(-,C_2 \oplus P_2)}{\mathcal{B}(-,\Omega X)}{}{}. \end{eqnarray} Combining this with the projective presentation in Proposition 2.4, yields the desired resolution of $M$. If we now apply $-\otimes_{\mathcal{C}} \underline{\mathcal{C}}$ to this resolution, the exactness of (2.10) and of $\ses{\underline{\mathcal{B}}(-,\Omega X)}{\underline{\mathcal{B}}(-,C_1) \longrightarrow \underline{\mathcal{B}}(-,C_0)}{M}{}{}$ shows that $\mathrm{Tor}_i^{\mathcal{C}}(M,\underline{\mathcal{C}}) = 0$ for all $i \neq 0, d+1$, and $\mathrm{Tor}_{d+1}^{\mathcal{C}}(M,\underline{\mathcal{C}}) \cong \Omega^{d+2}_{\underline{\mathcal{C}}}(M)$. Moreover, this last isomorphism is clearly natural in $M$. $\Box$\\
If $M = \underline{\mathcal{B}}(-,C)$ for a nonprojective $C \in \mathcal{C}$, then the projective resolution in $\mbox{mod-} \mathcal{C}$ from the above theorem takes on an even simpler form. As in Proposition 2.4, the second syzygy of $M$ is isomorphic to $\mathcal{B}(-,\Omega C)$. Since $\underline{\mathcal{B}}(\mathcal{C},\Omega C) = 0$ the projective cover of $\Omega C$ will be a right $\mathcal{C}$-approximation. We thus obtain an exact sequence $\ses{\mathcal{B}(-,\Omega^2C)}{\mathcal{B}(-,P_2)}{\mathcal{B}(-,\Omega C)}{}{}$ in $\mbox{mod-} \mathcal{C}$ with $P_2$ projective. Repeating this construction, using $\underline{\mathcal{B}}(\mathcal{C},\Omega^i C)=0$ for $1 \leq i \leq d-1$, we obtain the projective resolution: $$\ses{\mathcal{B}(-,\Omega^{d}C)}{\mathcal{B}(-,P_{d}) \rightarrow \cdots \rightarrow \mathcal{B}(-,P_2)}{\mathcal{B}(-,P_C) \longrightarrow \mathcal{B}(-,C) \longrightarrow \underline{\mathcal{B}}(-,C)}{}{}$$ with $\Omega^{d}C \in \mathcal{C}$ by assumption. Passing to $\underline{\mathcal{B}}$ by factoring out the maps that factor through projectives, all terms of this projective resolution vanish except for the $0^{th}$ and $(d+1)^{th}$ terms. In particular, we recover the following isomorphisms \begin{eqnarray} \mathrm{Tor}_{d+1}^{\mathcal{C}}(\underline{\mathcal{B}}(-,C),\underline{\mathcal{C}}) \cong \underline{\mathcal{B}}(-,\Omega^{d}C) \end{eqnarray} of functors on $\underline{\mathcal{C}}$, which are natural in $C \in \underline{\mathcal{C}}$ (note that they also follow from combining parts (4) and (5)). Thus we have isomorphisms of bifunctors on $\underline{\mathcal{C}}$
\begin{eqnarray} \mathrm{Tor}_{d+1}^{\mathcal{C}}(\underline{\mathcal{B}}(-,-),\underline{\mathcal{C}}) \cong \underline{\mathcal{B}}(-,\Omega^{d}(-)). \end{eqnarray}
We also point out that the remaining terms of the projective resolution of $\underline{\mathcal{B}}(-,X)$ in $\mbox{mod-} \underline{\mathcal{C}}$ can now be obtained by essentially shifting the terms described in part (2) of the theorem, and in this way we obtain a {\it quasi-periodic} projective resolution for $\underline{\mathcal{B}}(-,X)$. This is due to the assumption that $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$, which guarantees that $\mathcal{E}_{i}[d] = \mathcal{E}_{i}$ for each $i$. Hence $\Omega^{d+2}_{\underline{\mathcal{C}}}[\underline{\mathcal{B}}(-,X)] \cong \underline{\mathcal{B}}(-,\Omega^d X)$ with $\Omega^d X = X[-d] \in \mathcal{E}_{d-1}$. Then the construction from the proof can clearly be shifted by the $-d^{th}$ power of the suspension functor to obtain the next $d+2$ terms of the projective resolution: $$\ses{\underline{\mathcal{B}}(-,X[-2d])}{\underline{\mathcal{B}}(-,C_{d+1}[-d]) \rightarrow \cdots \rightarrow \underline{\mathcal{B}}(-,C_0[-d])}{\underline{\mathcal{B}}(-,X[-d])}{}{},$$ and so on. We also easily see that iterating the isomorphism from part (5) of the theorem yields isomorphisms $\Omega_{\underline{\mathcal{C}}}^{s(d+2)}[\underline{\mathcal{B}}(-,X)] \cong \underline{\mathcal{B}}(-,\Omega^{sd}X)$ in $\mbox{\underline{mod}-} \underline{\mathcal{C}}$ for each $s \geq 1$, which are natural in $X \in \ul{\mathcal{E}_{d-1}}$.
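To illustrate, when $d = 2$ the full resolution of $\underline{\mathcal{B}}(-,X)$ in $\mbox{mod-} \underline{\mathcal{C}}$ takes the quasi-periodic form $$\cdots \rightarrow \underline{\mathcal{B}}(-,C_0[-2]) \rightarrow \underline{\mathcal{B}}(-,C_3) \rightarrow \underline{\mathcal{B}}(-,C_2) \rightarrow \underline{\mathcal{B}}(-,C_1) \rightarrow \underline{\mathcal{B}}(-,C_0) \rightarrow \underline{\mathcal{B}}(-,X) \rightarrow 0,$$ where each successive block of four projective terms $\underline{\mathcal{B}}(-,C_i[-2s])$, $0 \leq i \leq 3$, is obtained from the first block by applying $[-2s]$.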
\section{Bimodule resolutions of stable Auslander algebras} \setcounter{equation}{0}
In this section we specialize to the case where $\mathcal{C} = \mathrm{add}(T)$ for a $d$-cluster tilting object $T \in \mathcal{B}$ with $d \geq 1$. The evaluation functor $ev_T : M \mapsto M(T)$ gives category equivalences $\mbox{mod-} \mathcal{C} \rightarrow \mbox{mod-} \Lambda$ and $\mbox{mod-} \underline{\mathcal{C}} \rightarrow \mbox{mod-} \Gamma$, where $\Lambda = \mathrm{End}_{\mathcal{B}}(T)$ and $\Gamma = \mathrm{\underline{End}}_{\mathcal{B}}(T)$. Our $\mathrm{Hom}$-finiteness assumption on $\underline{\mathcal{B}}$ guarantees that $\Gamma$ is finite-dimensional, although $\Lambda$ need not be. We also note that $\Gamma$ may be decomposable as an algebra, and may even have semisimple blocks, which we typically want to ignore. As we deal with bimodules, we assume for convenience that $k$ is perfect (although it suffices to know that $\Gamma$ splits over a separable extension of $k$). Under this assumption, the projective bimodule summands of $\Gamma$ correspond precisely to semisimple blocks.
We now translate some of our above results (parts (3) and (4) of Theorem 2.5 and (2.13)) to this setting in the corollary below. These statements are also true for $d=1$ by Theorem 1.1 and Proposition 6.5 of \cite{Buch}.
\begin{coro} Let $T \in \mathcal{B}$ be a $d$-cluster tilting object with $d \geq 1$ such that $\Omega^{d}T \cong T$ in $\underline{\mathcal{B}}$, and set $\Lambda = \mathrm{End}_{\mathcal{B}}(T)$ and $\Gamma = \mathrm{\underline{End}}_{\mathcal{B}}(T)$. Then \begin{enumerate} \item $\mathrm{Tor}_i^{\Lambda}(-,\Gamma) = 0$ on $\mbox{mod-} \Gamma$ for all $i \neq 0, d+1$; \item $\mathrm{Tor}_{d+1}^{\Lambda}(-,\Gamma) \cong \Omega^{d+2}$ as functors on $\mbox{\underline{mod}-} \Gamma$. \item $\mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma) \cong \underline{\mathcal{B}}(T,\Omega^{d}T)$ as $(\Gamma,\Gamma)$-bimodules. \end{enumerate} \end{coro}
The assumption that $\Omega^{d}T \cong T$ implies that $\underline{\mathcal{B}}(T,\Omega^{d}T)$ is isomorphic to a twisted bimodule ${}_{\sigma}\Gamma_1$ for some $k$-algebra automorphism $\sigma$ of $\Gamma$, which corresponds to an isomorphism $\eta: \Omega^{d}T \stackrel{\cong}{\longrightarrow} T$. If $\Omega^{d} \cong Id$ as functors on $\mathrm{add}(T)$, then $\underline{\mathcal{B}}(T, \Omega^{d}T) \cong \Gamma$ as bimodules.
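Concretely, ${}_{\sigma}\Gamma_1$ denotes the vector space $\Gamma$ with bimodule action $a \cdot x \cdot b = \sigma(a)xb$. With the standard bimodule structure on $\underline{\mathcal{B}}(T,\Omega^{d}T)$, in which $a \in \Gamma$ acts on the left by composition with $\Omega^{d}(a)$, one may take $\sigma(a) = \eta \circ \Omega^{d}(a) \circ \eta^{-1}$; the map $u \mapsto \eta \circ u$ is then an isomorphism of bimodules $$\underline{\mathcal{B}}(T,\Omega^{d}T) \stackrel{\cong}{\longrightarrow} {}_{\sigma}\Gamma_1.$$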
We now delve deeper to obtain information about the projective resolution of $\Gamma$ over its enveloping algebra $\Gamma^e$. Recall that $\Gamma$ is {\it periodic} if this resolution is periodic. We will also say that $\Gamma$ is {\it quasi-periodic} (or, equivalently, that this resolution is quasi-periodic) if $\Omega^n_{\Gamma^e}(\Gamma)$ is isomorphic to a twisted bimodule ${}_{\sigma}\Gamma_1$ as above. In this case, it easily follows that each finitely generated $\Gamma$-module has bounded Betti numbers.
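Indeed, suppose $\Omega^n_{\Gamma^e}(\Gamma) \cong {}_{\sigma}\Gamma_1$, and truncate a projective bimodule resolution of $\Gamma$ to an exact sequence $0 \rightarrow {}_{\sigma}\Gamma_1 \rightarrow P_{n-1} \rightarrow \cdots \rightarrow P_0 \rightarrow \Gamma \rightarrow 0$. As each $P_i$ and ${}_{\sigma}\Gamma_1$ are free as left $\Gamma$-modules, applying $M \otimes_{\Gamma} -$ for $M \in \mbox{mod-} \Gamma$ preserves exactness and shows that, up to projective summands, $$\Omega^n_{\Gamma}(M) \cong M \otimes_{\Gamma} {}_{\sigma}\Gamma_1,$$ which is just $M$ with its action twisted by an automorphism of $\Gamma$. Thus the syzygies of $M$ recur up to twist with period $n$, and since twisting preserves dimensions, the Betti numbers of $M$ are bounded.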
\begin{therm} Let $T \in \mathcal{B}$ be a $d$-cluster tilting object such that $\Omega^{d}T \cong T$ in $\underline{\mathcal{B}}$, and set $\Lambda = \mathrm{End}_{\mathcal{B}}(T)$ and $\Gamma = \mathrm{\underline{End}}_{\mathcal{B}}(T)$. Then \begin{enumerate} \item $\mathrm{Tor}_{d+1}^{\Lambda}(-,\Gamma) \cong -\otimes_{\Gamma} \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)$ as functors on $\mbox{mod-} \Gamma$. \item $\Omega^{d+2}_{\Gamma^e}(\Gamma) \cong \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma, \Gamma) \cong \underline{\mathcal{B}}(T,\Omega^{d}T)$ as $(\Gamma,\Gamma)$-bimodules (up to projective summands). \end{enumerate}
In particular, $\Gamma$ is self-injective. Moreover, writing $\Gamma = \Gamma_0 \times \Gamma_{s}$ where $\Gamma_s$ is the largest semisimple direct factor of $\Gamma$, we see that $\Gamma_0$ is quasi-periodic of quasi-period $d+2$. If $\Omega^{dr}|_{\mathrm{add}(T)} \cong Id_{\mathrm{add}(T)}$ as functors for some $r \geq 1$, then $\Gamma_0$ is periodic with period dividing $r(d+2)$. \end{therm}
\noindent {\bf Remarks.} (1) Part (2) and its consequences can be viewed as an extension of Theorem 1.5 in \cite{Buch}. Notice that we can avoid assuming that $\Lambda$ has Hochschild dimension $d+1$, even when $d=1$, since our broader assumptions on $\mathcal{B}$ and $T$ guarantee that $\Gamma$ is finite-dimensional and self-injective, and we will see that these conditions suffice. In particular, this simplifies certain issues arising in applications of Buchweitz's results (cf. 1.6, 1.12 in \cite{Buch}).
(2) While quasi-periodicity appears weaker than periodicity, we are unaware of any finite-dimensional algebras that are quasi-periodic but not periodic. This theorem could potentially be used to produce such examples: one would need a $d$-cluster tilting object $T$ with $\Omega^d T \cong T$ in $\underline{\mathcal{B}}$, but with no positive power of $\Omega^d$ isomorphic to the identity functor on $\mathrm{add}(T)$.\\
\noindent {\it Proof.} For (1), notice that $\mathrm{Tor}_{d+1}^{\Lambda}(-,\Gamma)$ is an exact functor on $\mbox{mod-} \Gamma$ as $\mathrm{Tor}_{d}^{\Lambda}(-,\Gamma)=\mathrm{Tor}_{d+2}^{\Lambda}(-,\Gamma) = 0$. Thus $\mathrm{Tor}_{d+1}^{\Lambda}(-,\Gamma) \cong -\otimes_{\Gamma} \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)$ by the Eilenberg-Watts theorem. Observe that $\mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma) \cong {}_{\sigma}\Gamma_1$ is a projective $\Gamma$-module on either side. Furthermore, since we have an invertible bimodule $\mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)$ inducing $\Omega^{d+2}$ on $\mbox{\underline{mod}-} \Gamma$, we see that $\Omega$ must be an equivalence and $\Gamma$ is self-injective.
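The Eilenberg-Watts step here takes its usual form: the canonical natural transformation $$- \otimes_{\Gamma} \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma) \longrightarrow \mathrm{Tor}_{d+1}^{\Lambda}(-,\Gamma)$$ between right-exact functors is an isomorphism on $\Gamma$, hence on all finitely generated projective $\Gamma$-modules, and evaluating both functors on a projective presentation of an arbitrary $M \in \mbox{mod-} \Gamma$ shows that it is an isomorphism everywhere.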
For (2), let $\cdots \rightarrow P_1 \stackrel{f_1}{\longrightarrow} P_0 \longrightarrow \Lambda \rightarrow 0$ be a projective resolution of $\Lambda$ over $\Lambda^e$. Applying $-\otimes_{\Lambda^e} \Gamma^e$ yields a complex $Q_{\bullet} := \Gamma \otimes_{\Lambda} P_{\bullet} \otimes_{\Lambda} \Gamma$ of projective $\Gamma^e$-modules with homology given by $$\mathrm{Tor}_*^{\Lambda^e}(\Lambda, \Gamma^e) \cong \mathrm{Tor}_*^{\Lambda}(\Gamma, \Gamma).$$ As Corollary 3.1 tells us that this homology vanishes in all degrees except $0$ and $d+1$, the beginning of a projective resolution of $\Gamma$ over $\Gamma^e$ has the form $$\ses{\Omega^{d+2}(\Gamma) \oplus Q}{Q_{d+1} \rightarrow \cdots \rightarrow Q_0}{\Gamma}{}{}$$ for some projective bimodule $Q$. Furthermore, from the definition of $\mathrm{Tor}$ we have an epimorphism\footnote{It is an isomorphism if $\Lambda$ has Hochschild dimension $d+1$. This holds for instance if $\mathcal{B} = \mbox{mod-} A$ for a finite-dimensional self-injective algebra $A$, as then $\Lambda$ is a finite-dimensional algebra of global dimension $d+1$ \cite{Iyama1}.} $\Omega^{d+2}(\Gamma) \oplus Q \rightarrow \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)$. Let $K$ be the kernel and observe that $K$ is projective on either side since $\mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)$ and $\Omega^{d+2}(\Gamma) \oplus Q$ both are. Also observe that by definition $K = \mathrm{im} (1 \otimes f_{d+2} \otimes 1)$ consists of the $(d+1)$-boundaries of $Q_{\bullet}$. We claim that $K$ is a projective $(\Gamma,\Gamma)$-bimodule; since $\Gamma$ is self-injective it will then follow that the short exact sequence $\ses{K}{\Omega^{d+2}(\Gamma) \oplus Q}{\mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)}{}{}$ splits, yielding $\Omega^{d+2}(\Gamma) \cong \mathrm{Tor}_{d+1}^{\Lambda}(\Gamma,\Gamma)$ as bimodules (up to projective summands).
To see that $K$ is projective, we go back a step and apply $ \Gamma \otimes_{\Lambda} -$ to $P_{\bullet}$ to get a projective $(\Gamma,\Lambda)$-bimodule resolution $\Gamma \otimes _{\Lambda} P_{\bullet}$ of ${}_{\Gamma}\Gamma \otimes_{\Lambda} \Lambda_{\Lambda} \cong {}_{\Gamma} \Gamma_{\Lambda}$. Set $L = \ker (1 \otimes f_{d+1}) \cong \mathrm{coker} (1 \otimes f_{d+3})$. Since $-\otimes_{\Lambda} \Gamma$ is right-exact, we have $L \otimes_{\Lambda} \Gamma \cong \mathrm{coker} (1 \otimes f_{d+3} \otimes 1) \cong \mathrm{im} (1 \otimes f_{d+2} \otimes 1) = K$ as $(\Gamma,\Gamma)$-bimodules. For any finitely-presented right $\Gamma$-module $M$, $M \otimes_{\Gamma} \Gamma \otimes_{\Lambda} P_{\bullet} \cong M \otimes_{\Lambda} P_{\bullet}$ is a projective resolution of $M_{\Lambda}$. Since $\mathrm{p.dim}\ M_{\Lambda} \leq d+1$, $M \otimes_{\Gamma} L \cong \mathrm{coker} (1_M \otimes f_{d+3}) \cong \ker (1_M \otimes f_{d+1})$ is a projective right $\Lambda$-module. In particular, $M \otimes_{\Gamma} K \cong M \otimes_{\Gamma} (L \otimes_{\Lambda} \Gamma) \cong (M \otimes_{\Gamma} L) \otimes_{\Lambda} \Gamma$ is a projective right $\Gamma$-module for any $M$. Since $K$ is projective on either side, Theorem 3.1 of \cite{TEG} implies that $K$ is a projective bimodule.
For the final statement, we may assume that $\Gamma$ has no semisimple blocks by working with $\Gamma_0$ and an appropriate summand $T_0$ of $T$ instead. Observe that for any $r \geq 1$, $\Omega^{r(d+2)}(\Gamma) \cong \Omega^{d+2}(\Gamma)^{\otimes r} \cong \underline{\mathcal{B}}(T,\Omega^{d}T)^{\otimes r}$ up to projective summands by (2) and Corollary 3.1(3). Using part (1), Corollary 3.1(3) and (2.10) we now obtain $\underline{\mathcal{B}}(T,\Omega^{d}T)^{\otimes r} \cong \underline{\mathcal{B}}(T,\Omega^{rd}T)$ by induction on $r \geq 1$ (cf. Prop. 6.5 in \cite{Buch}). Furthermore, the latter bimodule is isomorphic to $\Gamma = \underline{\mathcal{B}}(T,T)$ as a bimodule if and only if $\Omega^{rd}$ is isomorphic to the identity functor on $\mathrm{add}(T)$. $\Box$\\
Many examples of cluster-tilting objects appear inside Calabi-Yau triangulated categories, such as the cluster categories of \cite{BMRRT} or categories of the form $\ul{\mbox{CM}}(R)$ for an isolated Gorenstein hypersurface singularity $R$ \cite{BIKR}. Recall that an auto-equivalence $S$ of $\underline{\mathcal{B}}$ is called a {\it Serre functor} if there exist natural isomorphisms $D\underline{\mathcal{B}}(X,Y) \cong \underline{\mathcal{B}}(Y,SX)$ for all $X, Y \in \underline{\mathcal{B}}$, where $D = \mathrm{Hom}_k(-,k)$ is the duality with respect to the ground field. In this case, there is a canonical enhancement of $S$ into a triangulated functor, and if $S \cong -[s]$ as triangulated functors on $\underline{\mathcal{B}}$, then we say that $\underline{\mathcal{B}}$ is {\it Calabi-Yau of dimension $s$}. Here we will consider the weaker requirement that $S \cong -[s]$ only as $k$-linear functors, in which case we say that $\underline{\mathcal{B}}$ is {\it weakly Calabi-Yau of dimension $s$}, in the sense of \cite{CYTC}. This amounts to the existence of natural isomorphisms $$D \underline{\mathcal{B}}(X,Y) \cong \underline{\mathcal{B}}(Y,X[s])$$ for all $X, Y \in \underline{\mathcal{B}}$ (In order for $\underline{\mathcal{B}}$ to be Calabi-Yau of dimension $s$, one additionally requires that these natural isomorphisms are compatible with the suspension functor as in Proposition 2.2 of \cite{CYTC}).
In case $\underline{\mathcal{B}}$ is weakly $s$-Calabi-Yau, the injective objects in $\mbox{mod-} \underline{\mathcal{B}}$ have the form $D\underline{\mathcal{B}}(X,-) \cong \underline{\mathcal{B}}(-,X[s]) \cong \underline{\mathcal{B}}(-[-s],X)$ for $X \in \underline{\mathcal{B}}$, which shows that $\mbox{mod-} \underline{\mathcal{B}}$ is a Frobenius category with Nakayama equivalence given by $\nu : F \mapsto F \circ [-s]$. Thus $\mbox{\underline{mod}-} \underline{\mathcal{B}}$ is a Hom-finite triangulated category. Moreover, Serre duality in $\underline{\mathcal{B}}$ guarantees that $\underline{\mathcal{B}}$ is a dualizing $k$-variety in the sense of \cite{SEDRV}, and hence the Auslander-Reiten formula implies $$D\underline{\mathrm{Hom}}_{\underline{\mathcal{B}}}(F,G) \cong \mathrm{Ext}^1_{\underline{\mathcal{B}}}(G, D\mathrm{Tr} F) \cong \underline{\mathrm{Hom}}_{\underline{\mathcal{B}}}(G, \Omega_{\underline{\mathcal{B}}} \nu F)$$ for all $F, G \in \mbox{\underline{mod}-} \underline{\mathcal{B}}$; that is, $\Omega_{\underline{\mathcal{B}}} \nu : F \mapsto \Omega_{\underline{\mathcal{B}}}( F \circ [-s])$ is a Serre functor for $\mbox{\underline{mod}-} \underline{\mathcal{B}}$. Moreover, knowledge of the projective resolution for $F \in \mbox{mod-} \underline{\mathcal{B}}$ (from \cite{SEDRV} or \cite{SEAA}, for example) implies that $\Omega_{\underline{\mathcal{B}}}^3(F) \cong F \circ [1]$. Hence $\nu \cong \Omega_{\underline{\mathcal{B}}}^{-3s}$ on $\mbox{\underline{mod}-} \underline{\mathcal{B}}$, and the Serre functor for $\mbox{\underline{mod}-} \underline{\mathcal{B}}$ satisfies $S = \Omega_{\underline{\mathcal{B}}} \nu \cong \Omega_{\underline{\mathcal{B}}}^{-(3s-1)}$, showing that $\mbox{\underline{mod}-} \underline{\mathcal{B}}$ is weakly $(3s-1)$-Calabi-Yau when $\underline{\mathcal{B}}$ is weakly $s$-Calabi-Yau (this has been observed elsewhere: see \cite{TOC}, for instance). 
This result can in fact be viewed as the $d=1$ case of the following more general statement regarding maximal $(d-1)$-orthogonal subcategories of Calabi-Yau triangulated categories. In the second part, we apply Theorem 3.2 to obtain a partial generalization of Proposition 2.1 in \cite{SCY}.
\begin{propos}[Cf. 5.4 in \cite{GKO}] Let $\mathcal{C}$ be a maximal $(d-1)$-orthogonal subcategory of $\mathcal{B}$ with $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$, and assume that $\underline{\mathcal{B}}$ is weakly $sd$-Calabi-Yau for some integer $s$. \begin{enumerate} \item $\mbox{\underline{mod}-} \underline{\mathcal{C}}$ is a weakly Calabi-Yau triangulated category of dimension $s(d+2)-1$. \item If $\mathcal{C} = \mathrm{add}(T)$ for a $d$-cluster tilting object $T \in \mathcal{B}$ and $\Gamma = \mathrm{\underline{End}}_{\mathcal{B}}(T)$ has no semisimple blocks, then $\Omega^{-s(d+2)}_{\Gamma^e}(\Gamma) \cong D\Gamma$ as bimodules. \end{enumerate} \end{propos}
\noindent {\it Proof.} (1) As remarked after Lemma 2.1, the assumption $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$ ensures that $\underline{\mathcal{C}}$ is invariant under the Serre functor $S$ of $\underline{\mathcal{B}}$. Hence the same argument given above for $\underline{\mathcal{B}}$ shows that $\mbox{mod-} \underline{\mathcal{C}}$ is a Frobenius category with Nakayama equivalence $\nu$ given by $F \mapsto F \circ [-sd]$. If $F = \underline{\mathcal{B}}(-,X) \in \mbox{mod-} \underline{\mathcal{C}}$ for $X \in \mathcal{E}_{d-1}$, then $\nu(F) \cong \underline{\mathcal{B}}(-,X[sd]) \cong \Omega_{\underline{\mathcal{C}}}^{-s(d+2)}(F)$ by Theorem 2.5(5). Since $\underline{\mathcal{C}}$ is also a dualizing $k$-variety (one again uses the Serre duality to check that the duality $D$ preserves finitely presented functors on $\underline{\mathcal{C}}$ and $\underline{\mathcal{C}}^{\mathrm{op}}$), the above argument also shows that a Serre functor for $\mbox{\underline{mod}-} \underline{\mathcal{C}}$ is given by $S = \Omega_{\underline{\mathcal{C}}} \nu \cong \Omega_{\underline{\mathcal{C}}}^{1-s(d+2)}$, and the claim follows.
(2) By Theorem 3.2, we have $\Omega^{-s(d+2)}(\Gamma) \cong \underline{\mathcal{B}}(T,\Omega^{-sd}T) \cong \underline{\mathcal{B}}(T,T[sd]) \cong D\underline{\mathcal{B}}(T,T) \cong D\Gamma$ as bimodules. $\Box$\\
\noindent {\bf Remarks.} (1) We point out that the curious requirement that the weak Calabi-Yau dimension of $\underline{\mathcal{B}}$ is $sd$ does not impose an unnecessary restriction in light of the assumption $\underline{\mathcal{C}}[d] = \underline{\mathcal{C}}$. Indeed, if $\underline{\mathcal{B}}$ is weakly $n$-C.Y. then $\underline{\mathcal{B}}(C,C[n]) \cong D\underline{\mathcal{B}}(C,C) \neq 0$ for any $C \in \mathcal{C}$ implies that $d \mid n$.
(2) In fact, the full Calabi-Yau property is shown to hold for $\mbox{\underline{mod}-} \underline{\mathcal{C}}$ in \S 5 of \cite{GKO}, since $\underline{\mathcal{C}}$ with suspension $-[d]$ is a $(d+2)$-angulated category.
\section{Examples and concluding remarks} \setcounter{equation}{0}
As remarked in the introduction, this work is motivated by the recent discovery of symmetric algebras with $D\mathrm{Tr}$-periodic module categories arising as stable endomorphism rings of $2$-cluster tilting objects in the Cohen-Macaulay module categories of $1$-dimensional hypersurface singularities \cite{BIKR}. We briefly recall the construction introduced there, as we now know that it provides a powerful tool for producing periodic symmetric algebras of period $4$.
Set $S = k[[x,y]]$ and $\mathfrak{m} = (x,y)$. Choose irreducible power series $f_i \in \mathfrak{m} \setminus \mathfrak{m}^2$ for $1 \leq i \leq n$ with $(f_i) \neq (f_j)$ for $i \neq j$, and set $f = f_1 f_2 \cdots f_n$. Then $R = S/(f)$ is an isolated hypersurface singularity of dimension $1$, and $T = \oplus_{i=1}^n S/(f_1 \cdots f_i)$ is a $2$-cluster tilting object in $\mbox{CM} (R)$. Moreover, Eisenbud's matrix factorization theorem implies that $\Omega^2 \cong Id$ on $\mbox{\underline{CM}}(R)$, and thus on $\mathrm{add}(T)$ as well. Hence Theorem 3.2 implies that $\Gamma = \mathrm{\underline{End}}_R(T)$ is periodic of period $4$. The quiver of $\Gamma$ (but not the relations) is described in Proposition 4.10 of \cite{BIKR}: $$\xymatrix{1 \ar[r]<0.5ex> & 2 \ar[r]<0.5ex> \ar[l]<0.5ex> & \cdots \ar[l]<0.5ex> \ar[r]<0.5ex> & n-2 \ar[l]<0.5ex> \ar[r]<0.5ex> & n-1 \ar[l]<0.5ex>}$$ with a loop at vertex $i$ if and only if $(f_i, f_{i+1}) \neq \mathfrak{m}$. Furthermore, it is shown that two families of algebras of quaternion type are explicitly realized in this way. These algebras are known to have tame representation type, but starting with a hypersurface $R$ of wild CM-type should produce an algebra $\Gamma$ of wild type and period $4$.
Our results also yield new information in the classical case where $d=1$. For example, if $R$ is a simple curve singularity of finite CM-type (in arbitrary characteristic) and $\Gamma$ is the stable Auslander algebra of $\mbox{CM} (R)$, it follows from Theorem 1.1(3) that $\Gamma$ is periodic of period dividing $6$. Moreover, since $\mbox{\underline{CM}}(R)$ is $2$-Calabi-Yau, $\mbox{\underline{mod}-} \Gamma$ will be (weakly) $5$-Calabi-Yau by Proposition 3.3. The algebras $\Gamma$ that arise in this way are (a proper subset of the) deformed preprojective algebras of generalized Dynkin type, as studied in \cite{ErdSko2}. We have previously applied this information about the periods and stable Calabi-Yau dimensions of these algebras in the study of the same properties for the representation-finite self-injective algebras \cite{SCY}. Similarly, if $R$ is a two-dimensional simple surface singularity (in arbitrary characteristic), the stable Auslander algebra $\Gamma$ of $\mbox{CM} (R)$ is periodic of period dividing $6$ and stably $2$-Calabi-Yau. The algebras $\Gamma$ arising in this way are necessarily deformed preprojective algebras of Dynkin type by \cite{BES}, and it is an interesting problem whether every such deformed preprojective algebra is isomorphic to the stable Auslander algebra of $\mbox{CM} (R)$ for some simple surface singularity $R$ in arbitrary characteristic, as classified in \cite{GK}.
Unfortunately, it is still a challenging problem to find additional examples of maximal $(d-1)$-orthogonal subcategories where our results can be applied. For instance, Erdmann and Holm \cite{ErdHolm} have shown that maximal $(d-1)$-orthogonal subcategories rarely exist in $\mathcal{B} = \mbox{mod-} A$ for a self-injective $k$-algebra $A$. Specifically, they show that they can only exist if every finite-dimensional $A$-module has complexity at most $1$. Such algebras do exist -- periodic algebras, for example -- but even here the examples are limited. Known examples of periodic algebras include all self-injective algebras of finite representation type \cite{Per}, but any periodic algebra constructed as the stable endomorphism ring of a maximal $(d-1)$-orthogonal subcategory in this context will again have finite representation type by Lemma 2.3. Still, it would be interesting to see which self-injective algebras of finite representation type are $d$-cluster tilted in this sense. One could also look for maximal $(d-1)$-orthogonal subcategories of modules over tame and wild periodic algebras, which include the algebras of quaternion type, the preprojective algebras of Dynkin type and the $m$-fold mesh algebras \cite{ErdSko2}.
Nevertheless, it may still be possible to find interesting examples of $d$-cluster tilting objects in {\it subcategories} of stable module categories. In particular, our main results can be applied to a (finite type) maximal $(d-1)$-orthogonal subcategory inside some exact Frobenius subcategory $\mathcal{B}$ of $\mbox{mod-} A$. Namely, in light of Erdmann and Holm's result, one should take $\mathcal{B}$ to be the full subcategory of $\mbox{mod-} A$ consisting of modules of complexity at most $1$, which is an exact subcategory with $\underline{\mathcal{B}}$ a triangulated subcategory of $\mbox{\underline{mod}-} A$. Even here, however, it is not clear whether one will be able to find a module satisfying the restrictive self-orthogonality and Ext-configuration conditions required of a cluster-tilting object.
Another source of applications can be found in the exciting work of Iyama and Oppermann on {\it higher preprojective algebras} \cite{IO}. If $A$ is a finite-dimensional algebra with $\mathrm{gl.dim} A \leq n$ for which $\mbox{mod-} A$ contains an $n$-cluster-tilting object, then the $(n+1)$-preprojective algebra of $A$ can be defined as $\tilde{A} = T_A \mathrm{Ext}^n_A(DA,A)$, the tensor algebra over $A$ of the bimodule $\mathrm{Ext}^n_A(DA,A)$. Moreover, Iyama and Oppermann show that $\tilde{A}$ can be realized as the endomorphism ring of an $n$-periodic $n$-cluster-tilting object in a certain Hom-finite triangulated category (namely, the $n$-Amiot cluster category $\mathcal{C}_{A}^n$ associated to $A$). It follows immediately from Theorem 3.2 that $\tilde{A}$ has at least a quasi-periodic projective resolution over its enveloping algebra. However, it appears to be a nontrivial problem to determine the order of the $n^{th}$ shift functor $[n]$ on the relevant maximal $(n-1)$-orthogonal subcategory of $\mathcal{C}_A^n$, and thus to determine whether or not this resolution is indeed periodic.
For example, if $n=1$ and $A$ is a hereditary algebra of finite representation type, then the corresponding $2$-preprojective algebra will be the usual preprojective algebra associated to the (Dynkin) quiver of $A$. Here $\tilde{A}$ is the endomorphism ring of a $1$-periodic $1$-cluster tilting object $T$, but has period $6$ (with some exceptions in characteristic $2$ where the period is $3$). This means that for the $T$ in question, one has $T[1] \cong T$ but $-[1] : \mathrm{add}(T) \rightarrow \mathrm{add}(T)$ is not isomorphic to the identity functor, although its square $-[2]$ is.
A more interesting example with $n=2$ can be found in \cite{IO}, Example 4.18. Here we have a $3$-preprojective algebra $\tilde{A}$ for which $\Omega^{12}$ fixes each simple module up to isomorphism. Since $\tilde{A}$ is the endomorphism ring of a $2$-periodic $2$-cluster-tilting object $T$, with $-[2]|_{\mathrm{add}(T)}$ inducing $\Omega^4$ on $\mbox{\underline{mod}-} \tilde{A}$, we see that the order of $-[2]$ on $\mathrm{add}(T)$ must be a multiple of $3$ (if it is finite).
Finally, we point out that Proposition 3.3 applies to all of the $(n+1)$-preprojective algebras $\tilde{A}$, since the relevant $n$-Amiot cluster category is $n$-Calabi-Yau by construction. Thus part (2) of the proposition shows that $\Omega^{-n-2}_{\tilde{A}^e}(\tilde{A}) \cong D\tilde{A}$ as bimodules. Since $D\tilde{A} \cong {}_1 \tilde{A}_{\nu}$ for the Nakayama automorphism $\nu$ of $\tilde{A}$, we can see that $\tilde{A}$ is periodic if and only if $\nu$ has finite order in the group of outer automorphisms of $\tilde{A}$. However, even this latter condition remains difficult to verify.
\end{document}
Expander graph
In graph theory, an expander graph is a sparse graph that has strong connectivity properties, quantified using vertex, edge or spectral expansion. Expander constructions have spawned research in pure and applied mathematics, with several applications to complexity theory, design of robust computer networks, and the theory of error-correcting codes.[1]
Definitions
Intuitively, an expander graph is a finite, undirected multigraph in which every subset of the vertices that is not "too large" has a "large" boundary. Different formalisations of these notions give rise to different notions of expanders: edge expanders, vertex expanders, and spectral expanders, as defined below.
A disconnected graph is not an expander, since the boundary of a connected component is empty. Every connected graph is an expander; however, different connected graphs have different expansion parameters. The complete graph has the best expansion property, but it has largest possible degree. Informally, a graph is a good expander if it has low degree and high expansion parameters.
Edge expansion
The edge expansion (also isoperimetric number or Cheeger constant) h(G) of a graph G on n vertices is defined as
$h(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial S|}{|S|}},$
where $\partial S:=\{\{u,v\}\in E(G)\ :\ u\in S,v\notin S\},$
which can also be written as ∂S = E(S, S) with S := V(G) \ S the complement of S and
$E(A,B)=\{\{u,v\}\in E(G)\ :\ u\in A,v\in B\}$
the edges between the subsets of vertices A,B ⊆ V(G).
In the equation, the minimum is over all nonempty sets S of at most n⁄2 vertices and ∂S is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S.[2]
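On small graphs this definition can be checked directly by brute force. The sketch below (the helper name `edge_expansion` is ours, not from any library) enumerates all nonempty subsets of size at most n⁄2:

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Brute-force h(G): minimise |dS| / |S| over nonempty S with |S| <= n/2."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for c in combinations(range(n), k):
            S = set(c)
            # edge boundary: edges with exactly one endpoint in S
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# Complete graph K4: every 2-subset has 4 outgoing edges, so h(K4) = 4/2 = 2.
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
# Cycle C6: an arc of 3 consecutive vertices has boundary 2, so h(C6) = 2/3.
C6 = [(i, (i + 1) % 6) for i in range(6)]
```

Exhaustive search is exponential in n, so this is only a sanity check on small examples, not an algorithm for large graphs.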
Intuitively,
$\min {|\partial S|}=\min E({S},{\overline {S}})$
is the minimum number of edges that need to be cut in order to split the graph in two. The edge expansion normalizes this concept by dividing by the smaller of the two parts' sizes. To see how the normalization can drastically change the value, consider the following example. Take two complete graphs with the same number of vertices n and add n edges between the two graphs by connecting their vertices one-to-one. The minimum cut will be n but the edge expansion will be 1.
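This example is easy to verify computationally. The sketch below (with n = 5, our arbitrary choice) builds two complete graphs joined by a perfect matching and checks both quantities exhaustively:

```python
from itertools import combinations

n = 5
# Two copies of K_n on {0..n-1} and {n..2n-1}, joined by a perfect matching.
edges  = [(i, j) for i in range(n) for j in range(i + 1, n)]
edges += [(n + i, n + j) for i in range(n) for j in range(i + 1, n)]
edges += [(i, n + i) for i in range(n)]

def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))

all_S = [set(c) for k in range(1, 2 * n) for c in combinations(range(2 * n), k)]
min_cut = min(cut(S) for S in all_S)                    # cutting the matching: n edges
h = min(cut(S) / len(S) for S in all_S if len(S) <= n)  # S = one clique: n edges / n vertices
assert min_cut == n and h == 1.0
```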
Notice that in min |∂S|, the optimization can be equivalently done either over 0 ≤ |S| ≤ n⁄2 or over any non-empty subset, since $E(S,{\overline {S}})=E({\overline {S}},S)$. The same is not true for h(G) because of the normalization by |S|. If we want to write h(G) with an optimization over all non-empty subsets, we can rewrite it as
$h(G)=\min _{\emptyset \subsetneq S\subsetneq V(G)}{\frac {E({S},{\overline {S}})}{\min\{|S|,|{\overline {S}}|\}}}.$
Vertex expansion
The vertex isoperimetric number hout(G) (also vertex expansion or magnification) of a graph G is defined as
$h_{\text{out}}(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial _{\text{out}}(S)|}{|S|}},$
where ∂out(S) is the outer boundary of S, i.e., the set of vertices in V(G) \ S with at least one neighbor in S.[3] In a variant of this definition (called unique neighbor expansion) ∂out(S) is replaced by the set of vertices in V with exactly one neighbor in S.[4]
The vertex isoperimetric number hin(G) of a graph G is defined as
$h_{\text{in}}(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial _{\text{in}}(S)|}{|S|}},$
where $\partial _{\text{in}}(S)$ is the inner boundary of S, i.e., the set of vertices in S with at least one neighbor in V(G) \ S.[3]
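Both vertex boundaries can likewise be computed by brute force on small graphs. In this sketch (helper names are ours) the 6-cycle serves as the test case; an arc of 3 consecutive vertices has 2 outer and 2 inner boundary vertices, giving 2⁄3 for both quantities:

```python
from itertools import combinations

def vertex_expansion(n, adj, inner=False):
    """Brute-force h_out / h_in: minimise |boundary| / |S| over nonempty S, |S| <= n/2."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for c in combinations(range(n), k):
            S = set(c)
            if inner:   # inner boundary: vertices of S with a neighbour outside S
                b = {v for v in S if adj[v] - S}
            else:       # outer boundary: vertices outside S with a neighbour in S
                b = {v for v in range(n) if v not in S and adj[v] & S}
            best = min(best, len(b) / len(S))
    return best

# 6-cycle as an adjacency-set dictionary.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```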
Spectral expansion
When G is d-regular, a linear algebraic definition of expansion is possible based on the eigenvalues of the adjacency matrix A = A(G) of G, where Aij is the number of edges between vertices i and j.[5] Because A is symmetric, the spectral theorem implies that A has n real-valued eigenvalues λ1 ≥ λ2 ≥ … ≥ λn. It is known that all these eigenvalues are in [−d, d] and more specifically, it is known that λn = −d if and only if G is bipartite.
More formally, we refer to an n-vertex, d-regular graph with
$\max _{i\neq 1}|\lambda _{i}|\leq \lambda $
as an (n, d, λ)-graph. The bound given by an (n, d, λ)-graph on λi for i ≠ 1 is useful in many contexts, including the expander mixing lemma.
Because G is regular, the uniform distribution $u\in \mathbb {R} ^{n}$ with ui = 1⁄n for all i = 1, …, n is the stationary distribution of G. That is, we have Au = du, and u is an eigenvector of A with eigenvalue λ1 = d, where d is the degree of the vertices of G. The spectral gap of G is defined to be d − λ2, and it measures the spectral expansion of the graph G.[6]
If we set
$\lambda =\max\{|\lambda _{2}|,|\lambda _{n}|\}$
as this is the largest eigenvalue corresponding to an eigenvector orthogonal to u, it can be equivalently defined using the Rayleigh quotient:
$\lambda =\max _{v\perp u,v\neq 0}{\frac {\|Av\|_{2}}{\|v\|_{2}}},$
where
$\|v\|_{2}=\left(\sum _{i=1}^{n}v_{i}^{2}\right)^{1/2}$
is the 2-norm of the vector $v\in \mathbb {R} ^{n}$.
The normalized versions of these definitions are also widely used and more convenient in stating some results. Here one considers the matrix 1/dA, which is the Markov transition matrix of the graph G. Its eigenvalues are between −1 and 1. For not necessarily regular graphs, the spectrum of a graph can be defined similarly using the eigenvalues of the Laplacian matrix. For directed graphs, one considers the singular values of the adjacency matrix A, which are equal to the square roots of the eigenvalues of the symmetric matrix AᵀA.
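For small adjacency matrices these spectral quantities can be computed without any external library (in practice one would use, e.g., `numpy.linalg.eigvalsh`). The Jacobi rotation routine below is a minimal pure-Python stand-in, and the two test graphs illustrate the facts quoted above: K4 has spectrum {3, −1, −1, −1} (spectral gap 4, λ = 1), while the bipartite 6-cycle attains λn = −d:

```python
import math

def jacobi_eigenvalues(M, tol=1e-10, max_iter=500):
    """Eigenvalues of a real symmetric matrix via classical Jacobi rotations."""
    n = len(M)
    A = [row[:] for row in M]
    for _ in range(max_iter):
        # pick the largest off-diagonal entry and rotate it to zero
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break
        theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):          # B = A.J  (rotate columns p, q)
            akp, akq = A[k][p], A[k][q]
            A[k][p] = c * akp - s * akq
            A[k][q] = s * akp + c * akq
        for k in range(n):          # A' = J^T.B  (rotate rows p, q)
            apk, aqk = A[p][k], A[q][k]
            A[p][k] = c * apk - s * aqk
            A[q][k] = s * apk + c * aqk
    return sorted((A[i][i] for i in range(n)), reverse=True)

K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
C6 = [[1 if (i - j) % 6 in (1, 5) else 0 for j in range(6)] for i in range(6)]
eig_K4 = jacobi_eigenvalues(K4)   # approx [3, -1, -1, -1]; spectral gap d - λ2 = 4
eig_C6 = jacobi_eigenvalues(C6)   # approx [2, 1, 1, -1, -1, -2]; λ6 = -d since C6 is bipartite
```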
Relationships between different expansion properties
The expansion parameters defined above are related to each other. In particular, for any d-regular graph G,
$h_{\text{out}}(G)\leq h(G)\leq d\cdot h_{\text{out}}(G).$
Consequently, for constant degree graphs, vertex and edge expansion are qualitatively the same.
Cheeger inequalities
When G is d-regular, meaning each vertex is of degree d, there is a relationship between the isoperimetric constant h(G) and the gap d − λ2 in the spectrum of the adjacency operator of G. By standard spectral graph theory, the trivial eigenvalue of the adjacency operator of a d-regular graph is λ1 = d and the first non-trivial eigenvalue is λ2. If G is connected, then λ2 < d. An inequality due to Dodziuk[7] and independently Alon and Milman[8] states that[9]
${\tfrac {1}{2}}(d-\lambda _{2})\leq h(G)\leq {\sqrt {2d(d-\lambda _{2})}}.$
In fact, the lower bound is tight. The lower bound is achieved in the limit for the hypercube Qn, where h(G) = 1 and d – λ = 2. The upper bound is (asymptotically) achieved for a cycle, where h(Cn) = 4/n = Θ(1/n) and d – λ = 2 – 2cos(2π/n) ≈ (2π/n)² = Θ(1/n²).[1] A better bound is given in [10] as
$h(G)\leq {\sqrt {d^{2}-\lambda _{2}^{2}}}.$
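As a numerical illustration, both inequalities can be checked by brute force on the 10-cycle, using the standard fact that the n-cycle has adjacency eigenvalues 2cos(2πk/n) (so λ2 = 2cos(2π/n)):

```python
import math
from itertools import combinations

n, d = 10, 2
edges = [(i, (i + 1) % n) for i in range(n)]

def h(n, edges):
    """Brute-force edge expansion over all nonempty S with |S| <= n/2."""
    return min(sum(1 for u, v in edges if (u in S) != (v in S)) / len(S)
               for k in range(1, n // 2 + 1)
               for S in map(set, combinations(range(n), k)))

hG   = h(n, edges)                    # an arc of n/2 vertices gives 2 / (n/2) = 0.4
lam2 = 2 * math.cos(2 * math.pi / n)  # second-largest eigenvalue of the n-cycle
# (1/2)(d - λ2)  <=  h(G)  <=  sqrt(d^2 - λ2^2)  <=  sqrt(2d(d - λ2))
assert 0.5 * (d - lam2) <= hG <= math.sqrt(d * d - lam2 * lam2) \
       <= math.sqrt(2 * d * (d - lam2))
```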
These inequalities are closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry.
Similar connections between vertex isoperimetric numbers and the spectral gap have also been studied:[11]
$h_{\text{out}}(G)\leq \left({\sqrt {4(d-\lambda _{2})}}+1\right)^{2}-1$
$h_{\text{in}}(G)\leq {\sqrt {8(d-\lambda _{2})}}.$
Asymptotically speaking, the quantities h²⁄d, hout, and hin² are all bounded above by the spectral gap O(d – λ2).
Constructions
There are three general strategies for explicitly constructing families of expander graphs.[12] The first strategy is algebraic and group-theoretic, the second strategy is analytic and uses additive combinatorics, and the third strategy is combinatorial and uses the zig-zag and related graph products. Noga Alon showed that certain graphs constructed from finite geometries are the sparsest examples of highly expanding graphs.[13]
Margulis–Gabber–Galil
Algebraic constructions based on Cayley graphs are known for various variants of expander graphs. The following construction is due to Margulis and has been analysed by Gabber and Galil.[14] For every natural number n, one considers the graph Gn with the vertex set $\mathbb {Z} _{n}\times \mathbb {Z} _{n}$, where $\mathbb {Z} _{n}=\mathbb {Z} /n\mathbb {Z} $: For every vertex $(x,y)\in \mathbb {Z} _{n}\times \mathbb {Z} _{n}$, its eight adjacent vertices are
$(x\pm 2y,y),(x\pm (2y+1),y),(x,y\pm 2x),(x,y\pm (2x+1)).$
Then the following holds:
Theorem. For all n, the graph Gn has second-largest eigenvalue $\lambda (G)\leq 5{\sqrt {2}}$.
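The construction is short to write down in code. The sketch below builds Gn as a multigraph over Z_n × Z_n; as a lightweight illustration it verifies only 8-regularity and connectivity (checking the eigenvalue bound 5√2 would additionally require numerical linear algebra):

```python
from collections import deque

def margulis_graph(n):
    """Neighbour lists (with multiplicity) of the Margulis-Gabber-Galil graph on Z_n x Z_n."""
    g = {}
    for x in range(n):
        for y in range(n):
            g[(x, y)] = [((x + 2 * y) % n, y), ((x - 2 * y) % n, y),
                         ((x + 2 * y + 1) % n, y), ((x - 2 * y - 1) % n, y),
                         (x, (y + 2 * x) % n), (x, (y - 2 * x) % n),
                         (x, (y + 2 * x + 1) % n), (x, (y - 2 * x - 1) % n)]
    return g

def is_connected(g):
    """Breadth-first search from an arbitrary start vertex."""
    start = next(iter(g))
    seen, queue = {start}, deque([start])
    while queue:
        for w in g[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(g)

G5 = margulis_graph(5)
```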
Ramanujan graphs
Main article: Ramanujan graph
By a theorem of Alon and Boppana, all sufficiently large d-regular graphs satisfy $\lambda _{2}\geq 2{\sqrt {d-1}}-o(1)$, where λ2 is the second largest eigenvalue in absolute value.[15] As a direct consequence, we know that for every fixed d and $\lambda <2{\sqrt {d-1}}$ , there are only finitely many (n, d, λ)-graphs. Ramanujan graphs are d-regular graphs for which this bound is tight, satisfying [16]
$\lambda =\max _{|\lambda _{i}|<d}|\lambda _{i}|\leq 2{\sqrt {d-1}}.$
Hence Ramanujan graphs have an asymptotically smallest possible value of λ2. This makes them excellent spectral expanders.
Lubotzky, Phillips, and Sarnak (1988), Margulis (1988), and Morgenstern (1994) show how Ramanujan graphs can be constructed explicitly.[17]
In 1985, Alon conjectured that most d-regular graphs on n vertices, for sufficiently large n, are almost Ramanujan.[18] That is, for ε > 0, they satisfy
$\lambda \leq 2{\sqrt {d-1}}+\varepsilon $.
In 2003, Joel Friedman both proved the conjecture and specified what is meant by "most d-regular graphs" by showing that random d-regular graphs have $\lambda \leq 2{\sqrt {d-1}}+\varepsilon $ for every ε > 0 with probability $1-O(n^{-\tau })$, where[19][20]
$\tau =\left\lceil {\frac {{\sqrt {d-1}}+1}{2}}\right\rceil .$
Zig-Zag product
Main article: Zig-zag product
Reingold, Vadhan, and Wigderson introduced the zig-zag product in 2003.[21] Roughly speaking, the zig-zag product of two expander graphs produces a graph with only slightly worse expansion. Therefore, a zig-zag product can also be used to construct families of expander graphs. If G is an (n, m, λ1)-graph and H is an (m, d, λ2)-graph, then the zig-zag product G ◦ H is an (nm, d², φ(λ1, λ2))-graph where φ has the following properties.
1. If λ1 < 1 and λ2 < 1, then φ(λ1, λ2) < 1;
2. φ(λ1, λ2) ≤ λ1 + λ2.
Specifically,[21]
$\phi (\lambda _{1},\lambda _{2})={\frac {1}{2}}(1-\lambda _{2}^{2})\lambda _{1}+{\frac {1}{2}}{\sqrt {(1-\lambda _{2}^{2})^{2}\lambda _{1}^{2}+4\lambda _{2}^{2}}}.$
Note that property (1) implies that the zig-zag product of two expander graphs is also an expander graph, thus zig-zag products can be used inductively to create a family of expander graphs.
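The function φ is easy to check numerically. The sketch below uses the Reingold–Vadhan–Wigderson expression with first term ½(1 − λ2²)λ1 and verifies properties (1) and (2) on a grid of values in (0, 1):

```python
import math

def phi(l1, l2):
    """RVW bound on the spectral parameter of the zig-zag product G . H."""
    a = 1 - l2 * l2
    return 0.5 * a * l1 + 0.5 * math.sqrt(a * a * l1 * l1 + 4 * l2 * l2)

grid = [i / 10 for i in range(1, 10)]
for l1 in grid:
    for l2 in grid:
        v = phi(l1, l2)
        assert v < 1                   # property (1): both inputs < 1 forces phi < 1
        assert v <= l1 + l2 + 1e-12    # property (2)
```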
Intuitively, the construction of the zig-zag product can be thought of in the following way. Each vertex of G is blown up to a "cloud" of m vertices, each associated to a different edge connected to the vertex. Each vertex is now labeled as (v, k) where v refers to an original vertex of G and k refers to the kth edge of v. Two vertices, (v, k) and (w,l) are connected if it is possible to get from (v, k) to (w, l) through the following sequence of moves.
1. Zig - Move from (v, k) to (v, k' ), using an edge of H.
2. Jump across clouds using edge k' in G to get to (w, l' ).
3. Zag - Move from (w, l' ) to (w, l) using an edge of H.[21]
Randomized constructions
There are many results that show the existence of graphs with good expansion properties through probabilistic arguments. In fact, the existence of expanders was first proved by Pinsker[22] who showed that for a randomly chosen n-vertex left d-regular bipartite graph, |N(S)| ≥ (d – 2)|S| holds for all subsets of vertices S with |S| ≤ cd·n with high probability, where cd is a constant depending on d that is O(d⁻⁴). Alon and Roichman[23] showed that for every 1 > ε > 0, there is some c(ε) > 0 such that the following holds: For a group G of order n, consider the Cayley graph on G with c(ε) log2 n randomly chosen elements from G. Then, as n tends to infinity, the resulting graph is almost surely an ε-expander.
Applications and useful properties
The original motivation for expanders is to build economical robust networks (phone or computer): an expander with bounded degree is precisely an asymptotically robust graph whose number of edges grows linearly with size (number of vertices), for all subsets.
Expander graphs have found extensive applications in computer science, in designing algorithms, error correcting codes, extractors, pseudorandom generators, sorting networks (Ajtai, Komlós & Szemerédi (1983)) and robust computer networks. They have also been used in proofs of many important results in computational complexity theory, such as SL = L (Reingold (2008)) and the PCP theorem (Dinur (2007)). In cryptography, expander graphs are used to construct hash functions.
In a 2006 survey of expander graphs, Hoory, Linial, and Wigderson split the study of expander graphs into four categories: extremal problems, typical behavior, explicit constructions, and algorithms. Extremal problems focus on the bounding of expansion parameters, while typical behavior problems characterize how the expansion parameters are distributed over random graphs. Explicit constructions focus on constructing graphs that optimize certain parameters, and algorithmic questions study the evaluation and estimation of parameters.
Expander mixing lemma
Main article: Expander mixing lemma
The expander mixing lemma states that for an (n, d, λ)-graph, for any two subsets of the vertices S, T ⊆ V, the number of edges between S and T is approximately what you would expect in a random d-regular graph. The approximation is better the smaller λ is. In a random d-regular graph, as well as in an Erdős–Rényi random graph with edge probability d⁄n, we expect d⁄n • |S| • |T| edges between S and T.
More formally, let E(S, T) denote the number of edges between S and T. If the two sets are not disjoint, edges in their intersection are counted twice, that is,
$E(S,T)=2|E(G[S\cap T])|+E(S\setminus T,T)+E(S\cap T,T\setminus S).$
Then the expander mixing lemma says that the following inequality holds:
$\left|E(S,T)-{\frac {d\cdot |S|\cdot |T|}{n}}\right|\leq \lambda {\sqrt {|S|\cdot |T|}}.$
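The lemma can be confirmed exhaustively on a complete graph, which is an (n, n−1, 1)-graph (its adjacency spectrum is n−1, −1, …, −1). Here E(S, T) is computed directly as the ordered-pair count Σ A[u][v] over u ∈ S, v ∈ T, so edges inside S ∩ T contribute twice, matching the convention above:

```python
import math
from itertools import combinations

n, d, lam = 5, 4, 1   # K5 has eigenvalues 4, -1, -1, -1, -1, so it is a (5, 4, 1)-graph
A = [[0 if i == j else 1 for j in range(n)] for i in range(n)]

def E(S, T):
    """Ordered-pair edge count; edges inside S n T are counted twice."""
    return sum(A[u][v] for u in S for v in T)

subsets = [set(c) for k in range(n + 1) for c in combinations(range(n), k)]
for S in subsets:
    for T in subsets:
        dev = abs(E(S, T) - d * len(S) * len(T) / n)
        assert dev <= lam * math.sqrt(len(S) * len(T)) + 1e-9   # expander mixing lemma
```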
Many properties of (n, d, λ)-graphs are corollaries of the expander mixing lemmas, including the following.[1]
• An independent set of a graph is a subset of vertices with no two vertices adjacent. In an (n, d, λ)-graph, an independent set has size at most λn⁄d.
• The chromatic number of a graph G, χ(G), is the minimum number of colors needed such that adjacent vertices have different colors. Hoffman showed that d⁄λ ≤ χ(G),[24] while Alon, Krivelevich, and Sudakov showed that if d < 2n⁄3, then[25]
$\chi (G)\leq O\left({\frac {d}{\log(1+d/\lambda )}}\right).$
• The diameter of a graph is the maximum distance between two vertices, where the distance between two vertices is defined to be the shortest path between them. Chung showed that the diameter of an (n, d, λ)-graph is at most[26]
$\left\lceil {\frac {\log n}{\log(d/\lambda )}}\right\rceil .$
Expander walk sampling
Main article: Expander walk sampling
The Chernoff bound states that, when sampling many independent samples from a random variables in the range [−1, 1], with high probability the average of our samples is close to the expectation of the random variable. The expander walk sampling lemma, due to Ajtai, Komlós & Szemerédi (1987) and Gillman (1998), states that this also holds true when sampling from a walk on an expander graph. This is particularly useful in the theory of derandomization, since sampling according to an expander walk uses many fewer random bits than sampling independently.
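The savings in randomness are simple to quantify with back-of-the-envelope bit counting (this ceil(log2 ·) accounting is the standard rough estimate, not a statement about any particular extractor or walk length):

```python
import math

def bits_independent(k, n):
    """Random bits for k independent uniform samples from n vertices."""
    return k * math.ceil(math.log2(n))

def bits_walk(k, n, d):
    """Bits for a length-k walk on a d-regular expander: one start vertex + k edge choices."""
    return math.ceil(math.log2(n)) + k * math.ceil(math.log2(d))

# 100 samples from a degree-8 graph on 2^20 vertices:
# 100 * 20 = 2000 bits independently, versus 20 + 100 * 3 = 320 bits along a walk.
```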
AKS sorting network and approximate halvers
Sorting networks take a set of inputs and perform a series of parallel steps to sort the inputs. A parallel step consists of performing any number of disjoint comparisons and potentially swapping pairs of compared inputs. The depth of a network is given by the number of parallel steps it takes. Expander graphs play an important role in the AKS sorting network, which achieves depth O(log n). While this is asymptotically the best known depth for a sorting network, the reliance on expanders makes the constant bound too large for practical use.
Within the AKS sorting network, expander graphs are used to construct bounded depth ε-halvers. An ε-halver takes as input a length n permutation of (1, …, n) and halves the inputs into two disjoint sets A and B such that for each integer k ≤ n⁄2 at most εk of the k smallest inputs are in B and at most εk of the k largest inputs are in A. The sets A and B are an ε-halving.
Following Ajtai, Komlós & Szemerédi (1983), a depth d ε-halver can be constructed as follows. Take an n-vertex, degree d bipartite expander with parts X and Y of equal size such that every subset of vertices of size at most εn has at least (1 − ε)⁄ε times as many neighbors.
The vertices of the graph can be thought of as registers that contain inputs and the edges can be thought of as wires that compare the inputs of two registers. At the start, arbitrarily place half of the inputs in X and half of the inputs in Y and decompose the edges into d perfect matchings. The goal is to end with X roughly containing the smaller half of the inputs and Y containing roughly the larger half of the inputs. To achieve this, sequentially process each matching by comparing the registers paired up by the edges of this matching and correct any inputs that are out of order. Specifically, for each edge of the matching, if the larger input is in the register in X and the smaller input is in the register in Y, then swap the two inputs so that the smaller one is in X and the larger one is in Y. It is clear that this process consists of d parallel steps.
After all d rounds, take A to be the set of inputs in registers in X and B to be the set of inputs in registers in Y to obtain an ε-halving. To see this, notice that if a register u in X and a register v in Y are connected by an edge uv, then after the matching with this edge is processed, the input in u is less than that of v. Furthermore, this property remains true throughout the rest of the process. Now, suppose for some k ≤ n⁄2 that more than εk of the inputs (1, …, k) are in B. Then by the expansion property of the graph, the registers of these inputs in Y are connected to at least ((1 − ε)⁄ε)·εk = (1 − ε)k registers in X. Altogether, this accounts for more than k registers, so there must be some register u′ in X connected to some register v′ in Y such that the final input of u′ is not in (1, …, k), while the final input of v′ is. This violates the property above, however, and thus the output sets A and B must be an ε-halving.
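The matching-by-matching comparator process can be simulated directly. The sketch below (Python) replaces a certified bipartite expander with d random perfect matchings, which expand well with high probability but carry no guarantee, so it demonstrates the mechanics rather than a certified ε-halver. The assertions that the minimum input ends in A and the maximum in B follow deterministically from the comparator rule.

```python
import random

random.seed(1)
m, d = 100, 10                # |X| = |Y| = m registers, degree d
n = 2 * m

reg = list(range(1, n + 1))   # the inputs: a permutation of 1..n
random.shuffle(reg)           # registers 0..m-1 form X, m..2m-1 form Y

for _ in range(d):            # one random perfect matching per parallel step
    sigma = list(range(m))
    random.shuffle(sigma)
    for i in range(m):
        x, y = i, m + sigma[i]
        if reg[x] > reg[y]:   # keep the smaller input on the X side
            reg[x], reg[y] = reg[y], reg[x]

A = sorted(reg[:m])           # final contents of X
B = sorted(reg[m:])           # final contents of Y
# How many of the m smallest inputs ended up on the wrong side:
misplaced = sum(1 for value in B if value <= m)
```

Note that once the smallest input reaches X it can never be swapped out (it loses every comparison), and symmetrically for the largest input in Y; the count of misplaced small inputs is non-increasing from round to round.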
See also
• Algebraic connectivity
• Zig-zag product
• Superstrong approximation
• Spectral graph theory
Notes
1. Hoory, Linial & Wigderson (2006)
2. Definition 2.1 in Hoory, Linial & Wigderson (2006)
3. Bobkov, Houdré & Tetali (2000)
4. Alon & Capalbo (2002)
5. cf. Section 2.3 in Hoory, Linial & Wigderson (2006)
6. This definition of the spectral gap is from Section 2.3 in Hoory, Linial & Wigderson (2006)
7. Dodziuk 1984.
8. Alon & Spencer 2011.
9. Theorem 2.4 in Hoory, Linial & Wigderson (2006)
10. B. Mohar. Isoperimetric numbers of graphs. J. Combin. Theory Ser. B, 47(3):274–291, 1989.
11. See Theorem 1 and p.156, l.1 in Bobkov, Houdré & Tetali (2000). Note that λ2 there corresponds to 2(d − λ2) of the current article (see p.153, l.5)
12. see, e.g., Yehudayoff (2012)
13. Alon, Noga (1986). "Eigenvalues, geometric expanders, sorting in rounds, and ramsey theory". Combinatorica. 6 (3): 207–219. CiteSeerX 10.1.1.300.5945. doi:10.1007/BF02579382. S2CID 8666466.
14. see, e.g., p.9 of Goldreich (2011)
15. Theorem 2.7 of Hoory, Linial & Wigderson (2006)
16. Definition 5.11 of Hoory, Linial & Wigderson (2006)
17. Theorem 5.12 of Hoory, Linial & Wigderson (2006)
18. Alon, Noga (1986-06-01). "Eigenvalues and expanders". Combinatorica. 6 (2): 83–96. doi:10.1007/BF02579166. ISSN 1439-6912. S2CID 41083612.
19. Friedman, Joel (2004-05-05). "A proof of Alon's second eigenvalue conjecture and related problems". arXiv:cs/0405020.
20. Theorem 7.10 of Hoory, Linial & Wigderson (2006)
21. Reingold, O.; Vadhan, S.; Wigderson, A. (2000). "Entropy waves, the zig-zag graph product, and new constant-degree expanders and extractors". Proceedings 41st Annual Symposium on Foundations of Computer Science. IEEE Comput. Soc. pp. 3–13. doi:10.1109/sfcs.2000.892006. ISBN 0-7695-0850-2. S2CID 420651.
22. Pinsker, M. (1973). "On the Complexity of a Concentrator". SIAM Journal on Computing. SIAM. CiteSeerX 10.1.1.393.1430.
23. Alon, N.; Roichman, Y. (1994). "Random Cayley graphs and Expanders". Random Structures and Algorithms. Wiley Online Library. 5 (2): 271–284. doi:10.1002/rsa.3240050203.
24. Hoffman, A. J.; Howes, Leonard (1970). "On Eigenvalues and Colorings of Graphs, Ii". Annals of the New York Academy of Sciences. 175 (1): 238–242. Bibcode:1970NYASA.175..238H. doi:10.1111/j.1749-6632.1970.tb56474.x. ISSN 1749-6632. S2CID 85243045.
25. Alon, Noga; Krivelevich, Michael; Sudakov, Benny (1999-09-01). "Coloring Graphs with Sparse Neighborhoods". Journal of Combinatorial Theory. Series B. 77 (1): 73–82. doi:10.1006/jctb.1999.1910. ISSN 0095-8956.
26. Chung, F. R. K. (1989). "Diameters and eigenvalues". Journal of the American Mathematical Society. 2 (2): 187–196. doi:10.1090/S0894-0347-1989-0965008-X. ISSN 0894-0347.
References
Textbooks and surveys
• Alon, N.; Spencer, Joel H. (2011). "9.2. Eigenvalues and Expanders". The Probabilistic Method (3rd ed.). John Wiley & Sons.
• Chung, Fan R. K. (1997), Spectral Graph Theory, CBMS Regional Conference Series in Mathematics, vol. 92, American Mathematical Society, ISBN 978-0-8218-0315-8
• Davidoff, Giuliana; Sarnak, Peter; Valette, Alain (2003), Elementary number theory, group theory and Ramanujan graphs, LMS student texts, vol. 55, Cambridge University Press, ISBN 978-0-521-53143-6
• Hoory, Shlomo; Linial, Nathan; Wigderson, Avi (2006), "Expander graphs and their applications" (PDF), Bulletin of the American Mathematical Society, New Series, 43 (4): 439–561, doi:10.1090/S0273-0979-06-01126-8
• Krebs, Mike; Shaheen, Anthony (2011), Expander families and Cayley graphs: A beginner's guide, Oxford University Press, ISBN 978-0-19-976711-3
Research articles
• Ajtai, M.; Komlós, J.; Szemerédi, E. (1983), "An O(n log n) sorting network", Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pp. 1–9, doi:10.1145/800061.808726, ISBN 978-0-89791-099-6, S2CID 15311122
• Ajtai, M.; Komlós, J.; Szemerédi, E. (1987), "Deterministic simulation in LOGSPACE", Proceedings of the 19th Annual ACM Symposium on Theory of Computing, ACM, pp. 132–140, doi:10.1145/28395.28410, ISBN 978-0-89791-221-1, S2CID 15323404
• Alon, N.; Capalbo, M. (2002), "Explicit unique-neighbor expanders", The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings, p. 73, CiteSeerX 10.1.1.103.967, doi:10.1109/SFCS.2002.1181884, ISBN 978-0-7695-1822-0, S2CID 6364755
• Bobkov, S.; Houdré, C.; Tetali, P. (2000), "λ∞, vertex isoperimetry and concentration", Combinatorica, 20 (2): 153–172, doi:10.1007/s004930070018, S2CID 1173532.
• Dinur, Irit (2007), "The PCP theorem by gap amplification" (PDF), Journal of the ACM, 54 (3): 12–es, CiteSeerX 10.1.1.103.2644, doi:10.1145/1236457.1236459, S2CID 53244523.
• Dodziuk, Jozef (1984), "Difference equations, isoperimetric inequality and transience of certain random walks", Trans. Amer. Math. Soc., 284 (2): 787–794, doi:10.2307/1999107, JSTOR 1999107.
• Gillman, D. (1998), "A Chernoff Bound for Random Walks on Expander Graphs", SIAM Journal on Computing, 27 (4): 1203–1220, doi:10.1137/S0097539794268765
• Goldreich, Oded (2011), "Basic Facts about Expander Graphs" (PDF), Studies in Complexity and Cryptography, Lecture Notes in Computer Science, 6650: 451–464, CiteSeerX 10.1.1.231.1388, doi:10.1007/978-3-642-22670-0_30, ISBN 978-3-642-22669-4
• Reingold, Omer (2008), "Undirected connectivity in log-space", Journal of the ACM, 55 (4): 1–24, doi:10.1145/1391289.1391291, S2CID 207168478
• Yehudayoff, Amir (2012), "Proving expansion in three steps", ACM SIGACT News, 43 (3): 67–84, doi:10.1145/2421096.2421115, S2CID 18098370
Recent Applications
• Hartnett, Kevin (2018), "Universal Method to Sort Complex Information Found", Quanta Magazine (published 13 August 2018)
External links
• Brief introduction in Notices of the American Mathematical Society
• Introductory paper by Michael Nielsen
• Lecture notes from a course on expanders (by Nati Linial and Avi Wigderson)
• Lecture notes from a course on expanders (by Prahladh Harsha)
• Definition and application of spectral gap
September 2019, 39(9): 5017-5083. doi: 10.3934/dcds.2019205
Nonlinear stability of pulse solutions for the discrete FitzHugh-Nagumo equation with infinite-range interactions
Willem M. Schouten-Straatman , and Hermen Jan Hupkes
Mathematisch Instituut - Universiteit Leiden, P.O. Box 9512, 2300 RA Leiden, The Netherlands
* Corresponding author: [email protected]
Received May 2018 Revised February 2019 Published May 2019
Fund Project: Both authors acknowledge support from the Netherlands Organization for Scientific Research (NWO) (grant 639.032.612).
We establish the existence and nonlinear stability of travelling pulse solutions for the discrete FitzHugh-Nagumo equation with infinite-range interactions close to the continuum limit. For the verification of the spectral properties, we need to study a functional differential equation of mixed type (MFDE) with unbounded shifts. We avoid the use of exponential dichotomies and phase spaces, by building on a technique developed by Bates, Chen and Chmaj for the discrete Nagumo equation. This allows us to transfer several crucial Fredholm properties from the PDE setting to our discrete setting.
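For orientation only, here is a minimal explicit-Euler simulation of the nearest-neighbour (finite-range) discrete FitzHugh-Nagumo lattice, not the infinite-range system analysed in the paper; the parameter values and initial condition are illustrative assumptions rather than choices from the text.

```python
import numpy as np

# u_j' = u_{j+1} - 2 u_j + u_{j-1} + u_j (1 - u_j)(u_j - a) - w_j
# w_j' = eps (u_j - gamma w_j)       (periodic boundary conditions)
N, T, dt = 200, 400.0, 0.02
a, eps, gamma = 0.1, 0.01, 5.0      # illustrative bistable/slow parameters

u = np.zeros(N)
w = np.zeros(N)
u[:10] = 1.0                        # localized excitation

for _ in range(int(T / dt)):
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)   # discrete Laplacian
    du = lap + u * (1.0 - u) * (u - a) - w
    dw = eps * (u - gamma * w)
    u += dt * du
    w += dt * dw

bounded = bool(np.all(np.isfinite(u)) and np.all(np.isfinite(w))
               and np.abs(u).max() < 2.0 and np.abs(w).max() < 1.0)
```

The bistable cubic keeps the fast variable u trapped between its stable rest states while the slow recovery variable w trails behind, which is the mechanism behind the pulse solutions the paper studies.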
Keywords: Lattice differential equations, FitzHugh-Nagumo system, infinite-range interactions, nonlinear stability, non-standard implicit function theorem.
Mathematics Subject Classification: 34A33, 34D35, 34K08, 34K26, 34K31.
Citation: Willem M. Schouten-Straatman, Hermen Jan Hupkes. Nonlinear stability of pulse solutions for the discrete FitzHugh-Nagumo equation with infinite-range interactions. Discrete & Continuous Dynamical Systems, 2019, 39 (9) : 5017-5083. doi: 10.3934/dcds.2019205
P. W. Bates, X. Chen and A. Chmaj, Traveling Waves of Bistable Dynamics on a Lattice, SIAM J. Math. Anal., 35 (2003), 520-546. doi: 10.1137/S0036141000374002. Google Scholar
M. Beck, H. J. Hupkes, B. Sandstede and K. Zumbrun, Nonlinear Stability of Semidiscrete Shocks for Two-Sided Schemes, SIAM J. Math. Anal., 42 (2010), 857-903. doi: 10.1137/090775634. Google Scholar
M. Beck, B. Sandstede and K. Zumbrun, Nonlinear stability of time-periodic viscous shocks, Archive for Rational Mechanics and Analysis, 196 (2010), 1011-1076. doi: 10.1007/s00205-009-0274-1. Google Scholar
M. Beck, G. Cox, C. Jones, Y. Latushkin, K. McQuighan and A. Sukhtayev, Instability of pulses in gradient reaction–diffusion systems: A symplectic approach, Phil. Trans. R. Soc. A, 376 (2018), 20170187, 20pp. doi: 10.1098/rsta.2017.0187. Google Scholar
S. Benzoni-Gavage, P. Huot and F. Rousset, Nonlinear Stability of Semidiscrete Shock Waves, SIAM J. Math. Anal., 35 (2003), 639-707. doi: 10.1137/S0036141002418054. Google Scholar
J. Bos, Fredholm Eigenschappen van Systemen met Interactie Over een Oneindig Bereik, Bachelor Thesis, Leiden University, 2015. Google Scholar
P. C. Bressloff, Spatiotemporal dynamics of continuum neural fields, Journal of Physics A: Mathematical and Theoretical, 45 (2012), 033001,109 pp. doi: 10.1088/1751-8113/45/3/033001. Google Scholar
P. C. Bressloff, Waves in Neural Media: From single Neurons to Neural Fields, Lecture notes on mathematical modeling in the life sciences., Springer, 2014. doi: 10.1007/978-1-4614-8866-8. Google Scholar
J. W. Cahn, Theory of Crystal Growth and Interface Motion in Crystalline Materials, Acta Met., 8 (1960), 554-562. Google Scholar
G. Carpenter, A Geometric Approach to Singular Perturbation Problems with Applications to Nerve Impulse Equations, J. Diff. Eq., 23 (1977), 335-367. doi: 10.1016/0022-0396(77)90116-4. Google Scholar
P. Carter, B. de Rijk and B. Sandstede, Stability of traveling pulses with oscillatory tails in the FitzHugh–Nagumo system, Journal of Nonlinear Science, 26 (2016), 1369-1444. doi: 10.1007/s00332-016-9308-7. Google Scholar
P. Carter and B. Sandstede, Fast pulses with oscillatory tails in the FitzHugh–Nagumo system, SIAM Journal on Mathematical Analysis, 47 (2015), 3393-3441. doi: 10.1137/140999177. Google Scholar
C.-N. Chen and X. Hu, Stability analysis for standing pulse solutions to FitzHugh–Nagumo equations, Calculus of Variations and Partial Differential Equations, 49 (2014), 827-845. doi: 10.1007/s00526-013-0601-0. Google Scholar
X. Chen, Existence, Uniqueness and Asymptotic Stability of Traveling Waves in Nonlocal Evolution Equations, Adv. Diff. Eq., 2 (1997), 125-160. Google Scholar
S. N. Chow, J. Mallet-Paret and W. Shen, Traveling Waves in Lattice Dynamical Systems, J. Diff. Eq., 149 (1998), 248-291. doi: 10.1006/jdeq.1998.3478. Google Scholar
O. Ciaurri, L. Roncal, P. Stinga, J. Torrea and J. Varona, Fractional discrete Laplacian versus discretized fractional Laplacian, preprint, arXiv: 1507.04986. Google Scholar
F. Ciuchi, A. Mazzulla, N. Scaramuzza, E. Lenzi and L. Evangelista, Fractional diffusion equation and the electrical impedance: Experimental evidence in liquid-crystalline cells, The Journal of Physical Chemistry C, 116 (2012), 8773-8777. doi: 10.1021/jp211097m. Google Scholar
P. Cornwell, Opening the maslov box for traveling waves in skew-gradient systems, preprint, arXiv: 1709.01908. Google Scholar
P. Cornwell and C. K. Jones, On the existence and stability of fast traveling waves in a doubly-diffusive FitzHugh-Nagumo system, SIAM J. Appl. Dyn. Syst., 17 (2018), 754–787, arXiv: 1709.09132. doi: 10.1137/17M1149432. Google Scholar
J. Evans, Nerve axon equations: Ⅲ. stability of the nerve impulse, Indiana Univ. Math. J., 22 (1972), 577-593. doi: 10.1512/iumj.1973.22.22048. Google Scholar
G. Faye and A. Scheel, Fredholm properties of nonlocal differential operators via spectral flow, Indiana University Mathematics Journal, 63 (2014), 1311-1348. doi: 10.1512/iumj.2014.63.5383. Google Scholar
G. Faye and A. Scheel, Existence of pulses in excitable media with nonlocal coupling, Advances in Mathematics, 270 (2015), 400-456. doi: 10.1016/j.aim.2014.11.005. Google Scholar
G. Faye and A. Scheel, Center manifolds without a phase space, Trans. Amer. Math. Soc., 370 (2018), 5843-5885. doi: 10.1090/tran/7190. Google Scholar
P. C. Fife and J. B. McLeod, The approach of solutions of nonlinear diffusion equations to travelling front solutions, Arch. Ration. Mech. Anal., 65 (1977), 335-361. doi: 10.1007/BF00250432. Google Scholar
R. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane, Biophysical J., 1 (1966), 445-466. doi: 10.1016/S0006-3495(61)86902-6. Google Scholar
R. FitzHugh, Mathematical Models of Excitation and Propagation in Nerve, Publisher Unknown, 1966. Google Scholar
R. Fitzhugh, Motion picture of nerve impulse propagation using computer animation, Journal of Applied Physiology, 25 (1968), 628-630. doi: 10.1152/jappl.1968.25.5.628. Google Scholar
T. Gallay and E. Risler, A variational proof of global stability for bistable travelling waves, Differential and Integral Equations, 20 (2007), 901-926. Google Scholar
Q. Gu, E. Schiff, S. Grebner, F. Wang and R. Schwarz, Non-Gaussian transport measurements and the Einstein relation in amorphous silicon, Physical Review Letters, 76 (1996), 3196. doi: 10.1103/PhysRevLett.76.3196. Google Scholar
C. H. S. Hamster and H. J. Hupkes, Stability of travelling waves for reaction-diffusion equations with multiplicative noise, SIAM J. Appl. Dyn. Syst., 18 (2019), 205-278. doi: 10.1137/17M1159518. Google Scholar
S. Hastings, On Travelling Wave Solutions of the Hodgkin-Huxley Equations, Arch. Rat. Mech. Anal., 60 (1976), 229-257. doi: 10.1007/BF01789258. Google Scholar
A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiology, 117. Google Scholar
P. Howard and A. Sukhtayev, The Maslov and Morse indices for Schrödinger operators on [0, 1], Journal of Differential Equations, 260 (2016), 4499-4549. doi: 10.1016/j.jde.2015.11.020. Google Scholar
H. J. Hupkes and E. Augeraud-Véron, Well-posedness of initial value problems on Hilbert spaces, In preparation. Google Scholar
H. J. Hupkes and B. Sandstede, Travelling Pulse Solutions for the Discrete FitzHugh-Nagumo System, SIAM J. Appl. Dyn. Sys., 9 (2010), 827-882. doi: 10.1137/090771740. Google Scholar
H. J. Hupkes and B. Sandstede, Stability of Pulse Solutions for the Discrete FitzHugh-Nagumo System, Transactions of the AMS, 365 (2013), 251-301. doi: 10.1090/S0002-9947-2012-05567-X. Google Scholar
H. J. Hupkes and E. S. Van Vleck, Negative diffusion and traveling waves in high dimensional lattice systems, SIAM J. Math. Anal., 45 (2013), 1068-1135. doi: 10.1137/120880628. Google Scholar
H. J. Hupkes and E. S. Van Vleck, Travelling Waves for Complete Discretizations of Reaction Diffusion Systems, J. Dyn. Diff. Eqns, 28 (2016), 955-1006. doi: 10.1007/s10884-014-9423-9. Google Scholar
H. J. Hupkes and S. M. Verduyn-Lunel, Center Manifold Theory for Functional Differential Equations of Mixed Type, J. Dyn. Diff. Eq., 19 (2007), 497-560. doi: 10.1007/s10884-006-9055-9. Google Scholar
C. K. R. T. Jones, Stability of the Travelling Wave Solutions of the FitzHugh-Nagumo System, Trans. AMS, 286 (1984), 431-469. doi: 10.1090/S0002-9947-1984-0760971-6. Google Scholar
C. K. R. T. Jones, N. Kopell and R. Langer, Construction of the FitzHugh-Nagumo Pulse using Differential Forms, in Patterns and Dynamics in Reactive Media (eds. H. Swinney, G. Aris and D. G. Aronson), vol. 37 of IMA Volumes in Mathematics and its Applications, Springer, New York, 1991,101–115. doi: 10.1007/978-1-4612-3206-3_7. Google Scholar
C. Jones, Geometric singular perturbation theory, Dynamical Systems (Montecatini Terme, 1994), 44–118, Lecture Notes in Math., 1609, Springer, Berlin, 1995. doi: 10.1007/BFb0095239. Google Scholar
A. Kaminaga, V. K. Vanag and I. R. Epstein, A Reaction–Diffusion Memory Device, Angewandte Chemie International Edition, 45 (2006), 3087-3089. Google Scholar
T. Kapitula and K. Promislow, Spectral and Dynamical Stability of Nonlinear Waves, vol. 457, Springer, 2013. doi: 10.1007/978-1-4614-6995-7. Google Scholar
J. Keener and J. Sneed, Mathematical Physiology, Springer–Verlag, New York, 1998. Google Scholar
M. Krupa, B. Sandstede and P. Szmolyan, Fast and Slow Waves in the FitzHugh-Nagumo Equation, J. Diff. Eq., 133 (1997), 49-97. doi: 10.1006/jdeq.1996.3198. Google Scholar
R. S. Lillie, Factors Affecting Transmission and Recovery in the Passive Iron Nerve Model, J. of General Physiology, 7 (1925), 473-507. doi: 10.1085/jgp.7.4.473. Google Scholar
J. Mallet-Paret, The Fredholm Alternative for Functional Differential Equations of Mixed Type, J. Dyn. Diff. Eq., 11 (1999), 1-47. doi: 10.1023/A:1021889401235. Google Scholar
M. Or-Guil, M. Bode, C. P. Schenk and H. G. Purwins, Spot Bifurcations in Three-Component Reaction-Diffusion Systems: The Onset of Propagation, Physical Review E, 57 (1998), 6432. Google Scholar
D. Pinto and G. Ermentrout, Spatially structured activity in synaptically coupled neuronal networks: 1. traveling fronts and pulses, SIAM J. of Appl. Math., 62 (2001), 206-225. doi: 10.1137/S0036139900346453. Google Scholar
L. A. Ranvier, Lećons sur l'Histologie du Système Nerveux, par M. L. Ranvier, Recueillies par M. Ed. Weber, F. Savy, Paris, 1878. Google Scholar
A. Rustichini, Functional Differential Equations of Mixed Type: the Linear Autonomous Case, J. Dyn. Diff. Eq., 1 (1989), 121-143. doi: 10.1007/BF01047828. Google Scholar
N. Sabourova, Real and Complex Operator Norms, Licentiate Thesis, Luleå University of Technology, 2007. Google Scholar
C. P. Schenk, M. Or-Guil, M. Bode and H. G. Purwins, Interacting pulses in three-component reaction-diffusion systems on two-dimensional domains, Physical Review Letters, 78 (1997), 3781. doi: 10.1103/PhysRevLett.78.3781. Google Scholar
W. M. Schouten-Straatman and H. J. Hupkes, Travelling waves for spatially discrete systems of FitzHugh-Nagumo type with periodic coefficients, preprint, arXiv: 1808.00761. Google Scholar
J. Sneyd, Tutorials in Mathematical Biosciences II., vol. 187 of Lecture Notes in Mathematics, chapter Mathematical Modeling of Calcium Dynamics and Signal Transduction., New York: Springer, 2005. doi: 10.1007/b107088. Google Scholar
A. Vainchtein and E. S. Van Vleck, Nucleation and propagation of phase mixtures in a bistable chain, Phys. Rev. B, 79 (2009), 144123. doi: 10.1103/PhysRevB.79.144123. Google Scholar
P. van Heijster and B. Sandstede, Bifurcations to Travelling Planar Spots in a Three-Component FitzHugh–Nagumo system, Physica D, 275 (2014), 19-34. doi: 10.1016/j.physd.2014.02.001. Google Scholar
E. Yanagida, Stability of Fast Travelling Wave Solutions of the FitzHugh-Nagumo Equations, J. Math. Biol., 22 (1985), 81-104. doi: 10.1007/BF00276548. Google Scholar
K. Zumbrun, Instantaneous Shock Location and One-Dimensional Nonlinear Stability of Viscous Shock Waves, Quarterly of applied mathematics, 69 (2011), 177-202. doi: 10.1090/S0033-569X-2011-01221-6. Google Scholar
K. Zumbrun and P. Howard, Pointwise Semigroup Methods and Stability of Viscous Shock Waves, Indiana Univ. Math. J., 47 (1998), 741-871. doi: 10.1512/iumj.1998.47.1604. Google Scholar
Figure 1. Illustration of the regions $ R_1,R_2,R_3 $ and $ R_4 $. Note that the regions $ R_2 $ and $ R_3 $ grow when $ h $ decreases, while the regions $ R_1 $ and $ R_4 $ are independent of $ h $
Egyptian geometry
Egyptian geometry refers to geometry as it was developed and used in Ancient Egypt. Their geometry was a necessary outgrowth of surveying to preserve the layout and ownership of farmland, which was flooded annually by the Nile river.[1]
We only have a limited number of problems from ancient Egypt that concern geometry. Geometric problems appear in both the Moscow Mathematical Papyrus (MMP) and in the Rhind Mathematical Papyrus (RMP). The examples demonstrate that the ancient Egyptians knew how to compute areas of several geometric shapes and the volumes of cylinders and pyramids.
Area
The ancient Egyptians wrote out their problems in multiple parts. They gave the title and the data for the given problem, in some of the texts they would show how to solve the problem, and as the last step they verified that the problem was correct. The scribes did not use any variables and the problems were written in prose form. The solutions were written out in steps, outlining the process.
Egyptian units of length are attested from the Early Dynastic Period. Although it dates to the 5th Dynasty, the Palermo stone recorded the level of the Nile River during the reign of the Early Dynastic pharaoh Djer, when the height of the Nile was recorded as 6 cubits and 1 palm (about 3.217 m or 10 ft 6.7 in).[2] A Third Dynasty diagram shows how to construct a circular vault using body measures along an arc: if the area of the square is 434 units, the area of the circle is 433.7.
The ostracon depicting this diagram was found near the Step Pyramid of Saqqara. A curve is divided into five sections and the height of the curve is given in cubits, palms, and digits in each of the sections.[3] [4]
At some point, lengths were standardized by cubit rods. Examples have been found in the tombs of officials, noting lengths up to remen. Royal cubits were used for land measures such as roads and fields. Fourteen rods, including one double-cubit rod, were described and compared by Lepsius.[5] Two examples are known from the Saqqara tomb of Maya, the treasurer of Tutankhamun.
Another was found in the tomb of Kha (TT8) in Thebes. These cubits are 52.5 cm (20.7 in) long and are divided into palms and hands: each palm is divided into four fingers from left to right and the fingers are further subdivided into ro from right to left. The rules are also divided into hands[6] so that for example one foot is given as three hands and fifteen fingers and also as four palms and sixteen fingers.[2][4][7][8][9][6]
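The subdivisions described above translate directly into code. The sketch below (Python) assumes the round values given in the text: 1 royal cubit = 7 palms = 28 fingers, with the 52.5 cm rods of Maya and Kha as the metric anchor; historical cubits varied slightly, so the centimetre figures are approximate.

```python
CM_PER_CUBIT = 52.5       # length of the Maya/Kha cubit rods
PALMS_PER_CUBIT = 7
FINGERS_PER_PALM = 4

def cubits_to_fingers(cubits):
    # A royal cubit of 7 palms, each of 4 fingers, gives 28 fingers.
    return cubits * PALMS_PER_CUBIT * FINGERS_PER_PALM

def cubits_to_cm(cubits):
    return cubits * CM_PER_CUBIT

# The Palermo stone's Nile level of 6 cubits 1 palm:
nile_cm = cubits_to_cm(6 + 1 / PALMS_PER_CUBIT)   # about 322.5 cm
```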
Surveying and itinerant measurement were undertaken using rods, poles, and knotted cords of rope. A scene in the tomb of Menna in Thebes shows surveyors measuring a plot of land using rope with knots tied at regular intervals. Similar scenes can be found in the tombs of Amenhotep-Sesi, Khaemhat and Djeserkareseneb. The balls of rope are also shown in New Kingdom statues of officials such as Senenmut, Amenemhet-Surer, and Penanhor.[3]
Areas
• Triangle (problem 51 in RMP; problems 4, 7 and 17 in MMP): $A={\frac {1}{2}}bh$, where b = base and h = height.
• Rectangle (problem 49 in RMP; problem 6 in MMP; Lahun LV.4, problem 1): $A=bh$, where b = base and h = height.
• Circle (problem 50 in RMP): $A={\frac {1}{4}}\left({\frac {256}{81}}\right)d^{2}$, where d = diameter. This uses the value 256/81 = 3.16049... for $\pi =3.14159...$
• Hemisphere (problem 10 in MMP).
Triangles:
The ancient Egyptians knew that the area of a triangle is $A={\frac {1}{2}}bh$ where b = base and h = height. Calculations of the area of a triangle appear in both the RMP and the MMP.[10]
Rectangles:
Problem 49 from the RMP finds the area of a rectangular plot of land.[10] Problem 6 of the MMP finds the lengths of the sides of a rectangular area given the ratio of the lengths of the sides. This problem seems to be identical to one of the Lahun Mathematical Papyri in London. The problem also demonstrates that the Egyptians were familiar with square roots. They even had a special hieroglyph for finding a square root: it looks like a corner and appears in the fifth line of the problem. Scholars suspect that they had tables giving the square roots of some often-used numbers, but no such tables have been found.[11] Problem 18 of the MMP computes the area of a length of garment-cloth.[10]
The Lahun Papyrus Problem 1 in LV.4 is given as: An area of 40 "mH" by 3 "mH" shall be divided in 10 areas, each of which shall have a width that is 1/2 1/4 of their length.[12] A translation of the problem and its solution as it appears on the fragment is given on the website maintained by University College London.[13]
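In modern notation, the Lahun problem reduces to a single square-root extraction, matching the discussion of square roots above. A short check (Python):

```python
import math

# Lahun LV.4 problem 1: divide a 40 x 3 strip into 10 equal plots,
# each with width 1/2 + 1/4 = 3/4 of its length.
total_area = 40 * 3
plot_area = total_area / 10            # 12 per plot
ratio = 1 / 2 + 1 / 4                  # the Egyptian fraction 1/2 1/4

# width = ratio * length and length * width = plot_area,
# so length = sqrt(plot_area / ratio).
length = math.sqrt(plot_area / ratio)
width = ratio * length
```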
Circles:
Problem 48 of the RMP compares the area of a circle (approximated by an octagon) and its circumscribing square. This problem's result is used in problem 50.
Trisect each side. Remove the corner triangles. The resulting octagonal figure approximates the circle. The area of the octagonal figure is:
$9^{2}-4\cdot {\frac {1}{2}}(3)(3)=63.$ Next we approximate 63 by 64 and note that $64=8^{2}.$
Thus the number $4({\frac {8}{9}})^{2}=3.16049...$ plays the role of π = 3.14159....
That this octagonal figure, whose area is easily calculated, so accurately approximates the area of the circle is just plain good luck. Obtaining a better approximation to the area using finer divisions of a square and a similar argument is not simple. [10]
Problem 50 of the RMP finds the area of a round field of diameter 9 khet.[10] This is solved by using the approximation that a circular field of diameter 9 has the same area as a square of side 8. Problem 52 finds the area of a trapezium with (apparently) equally slanting sides, the lengths of the parallel sides and the distance between them being given.[11]
Hemisphere:
Problem 10 of the MMP computes the area of a hemisphere.[11]
Volumes
Several problems compute the volume of cylindrical granaries (41, 42, and 43 of the RMP), while problem 60 of the RMP seems to concern a pillar or a cone instead of a pyramid. It is rather small and steep, with a seked (slope) of four palms (per cubit).[10]
A problem appearing in section IV.3 of the Lahun Mathematical Papyri computes the volume of a granary with a circular base. A similar problem and procedure can be found in the Rhind papyrus (problem 43). Several problems in the Moscow Mathematical Papyrus (problem 14) and in the Rhind Mathematical Papyrus (numbers 44, 45, 46) compute the volume of a rectangular granary.[10][11]
Problem 14 of the Moscow Mathematical Papyrus computes the volume of a truncated pyramid, also known as a frustum.
Volumes

Object | Source | Formula (using modern notation)
Cylindrical granaries | RMP 41 | $V={\frac {256}{81}}r^{2}h$ (measured in cubic cubits)
Cylindrical granaries | RMP 42, Lahun IV.3 | $V={\frac {32}{27}}d^{2}h={\frac {128}{27}}r^{2}h$ (measured in khar)
Rectangular granaries | RMP 44–46, MMP 14 | $V=wlh$ (w = width, l = length, h = height)
Truncated pyramid (frustum) | MMP 14 | $V={\frac {1}{3}}(a^{2}+ab+b^{2})h$
Seked
Problem 56 of the RMP indicates an understanding of the idea of geometric similarity. This problem discusses the ratio run/rise, also known as the seked. Such a formula would be needed for building pyramids. In the next problem (Problem 57), the height of a pyramid is calculated from the base length and the seked (Egyptian for slope), while problem 58 gives the length of the base and the height and uses these measurements to compute the seked.
In Problem 59, part 1 computes the seked, while the second part may be a computation to check the answer: If you construct a pyramid with base side 12 [cubits] and with a seked of 5 palms 1 finger; what is its altitude?[10]
References
1. Erlikh, Ḥaggai; Gershoni, I. (2000). The Nile: Histories, Cultures, Myths. Lynne Rienner Publishers. pp. 80–81. ISBN 978-1-55587-672-2. Retrieved 9 January 2020. The Nile occupied an important position in Egyptian culture; it influenced the development of mathematics, geography, and the calendar; Egyptian geometry advanced due to the practice of land measurement "because the overflow of the Nile caused the boundary of each person's land to disappear."
2. Clagett (1999).
3. Corinna Rossi, Architecture and Mathematics in Ancient Egypt, Cambridge University Press, 2007
4. Englebach, Clarke (1990). Ancient Egyptian Construction and Architecture. New York: Dover. ISBN 0486264858.
5. Lepsius (1865), pp. 57 ff.
6. Loprieno, Antonio (1996). Ancient Egyptian. New York: CUP. ISBN 0521448492.
7. Gardiner, Allen (1994). Egyptian Grammar 3rd Edition. Oxford: Griffith Institute. ISBN 0900416351.
8. Faulkner, Raymond (1991). A Concise Dictionary of Middle Egyptian. Griffith Institute, Ashmolean Museum, Oxford. ISBN 0900416327.
9. Gillings, Richard (1972). Mathematics in the Time of the Pharaohs. MIT. ISBN 0262070456.
10. Clagett, Marshall Ancient Egyptian Science, A Source Book. Volume Three: Ancient Egyptian Mathematics (Memoirs of the American Philosophical Society) American Philosophical Society. 1999 ISBN 978-0-87169-232-0
11. R.C. Archibald Mathematics before the Greeks. Science, New Series, Vol. 71, No. 1831 (Jan. 31, 1930), pp. 109–121
12. Annette Imhausen Digitalegypt website: Lahun Papyrus IV.3
13. Annette Imhausen Digitalegypt website: Lahun Papyrus LV.4
Bibliography
• Clagett, Marshall (1999). Ancient Egyptian Science: A Source Book, Vol. III: Ancient Egyptian Mathematics. Memoirs of the APS, Vol. 232. Philadelphia: American Philosophical Society. ISBN 978-0-87169-232-0.
• Lepsius, Karl Richard (1865). Die Alt-Aegyptische Elle und Ihre Eintheilung (in German). Berlin: Dümmler.
Optimizing continuous cover management of boreal forest when timber prices and tree growth are stochastic
Timo Pukkala
Forest Ecosystems volume 2, Article number: 6 (2015)
Decisions on forest management are made under risk and uncertainty because the stand development cannot be predicted exactly and future timber prices are unknown. Deterministic calculations may lead to biased advice on optimal forest management. The study optimized continuous cover management of boreal forest in a situation where tree growth, regeneration, and timber prices include uncertainty.
Both anticipatory and adaptive optimization approaches were used. The adaptive approach optimized the reservation price function instead of fixed cutting years. The future prices of different timber assortments were described by cross-correlated auto-regressive models. The high variation around the ingrowth model was simulated using a model that describes the cross- and autocorrelations of the regeneration results of different species and years. Tree growth was predicted with individual tree models, the predictions of which were adjusted on the basis of a climate-induced growth trend, which was stochastic. Residuals of the deterministic diameter growth model were also simulated. They consisted of random tree factors and cross- and autocorrelated temporal terms.
Of the analyzed factors, timber price caused most uncertainty in the calculation of the net present value of a certain management schedule. Ingrowth and climate trend were less significant sources of risk and uncertainty than tree growth. Stochastic anticipatory optimization led to more diverse post-cutting stand structures than obtained in deterministic optimization. Cutting interval was shorter when risk and uncertainty were included in the analyses.
Adaptive optimization and management led to 6%–14% higher net present values than obtained in management that was based on anticipatory optimization. Increasing risk aversion of the forest landowner led to earlier cuttings in a mature stand. The effect of risk attitude on optimization results was small.
Maximizing the economic benefits from timber production is equivalent to maximizing the net present value of future net incomes. Unfortunately, the future net incomes are unknown at the moment when the management decision must be made. Future net incomes depend on future timber prices, which show substantial temporal variation (Leskinen and Kangas 1998).
Also the growth and development of trees and stands are poorly known. Deterministic models explain only a part of the growth variation between years, stands and trees. Measurements of past growth show that there are periods of good growth while in other years or during longer periods trees grow less than the long-term average (e.g. Pasanen 1998). In addition to these weather-related seasonal variations in annual growth, there are also between-tree growth differences which cannot be explained by deterministic models. Another factor causing uncertainty in growth prediction is climate change. It is usually assumed that the growth rate will increase in the boreal forests of North Europe (e.g. Pukkala and Kellomäki 2012), but the estimated growth trends represent very uncertain knowledge.
Flowering, pollination, seed production and germination are sub-processes of the regeneration process of trees and stands. All these sub-processes are very sensitive to weather conditions such as temperature and rainfall. In addition, the eventual size of the seed crop depends on the fluctuations of seed predators and seed diseases. Since many sub-processes critical to regeneration success depend on weather conditions, it is impossible to predict the exact amount of regeneration in a certain year in the future, even when there are plenty of empirical regeneration data to fit models. The best that can be done is to predict the distribution of regeneration results or the probability of successful regeneration. Mortality is also hard to predict exactly. However, the so-called regular mortality (competition-related mortality) is very low in regularly thinned managed boreal forest. Therefore, if catastrophic events are excluded from the analysis (like in this study) uncertainty in mortality does not add much to the total degree of uncertainty in the prediction of stand development. For an attempt to include catastrophic events see Zhou and Buongiorno (2006).
The above discussion shows that decisions on future forest management must be made under risk and uncertainty. Risk is usually understood to be a situation in which the probabilities of different states of nature are known, which makes it possible to calculate the distribution of outcomes for a certain decision alternative. Uncertainty refers to situations in which the probabilities are unknown. The prevailing situation is uncertainty. However, to make analyses easier, the situation is transformed from uncertainty to risk, by assuming some distributions for the uncertain factors. This allows the analyst to calculate the probabilities of different outcomes of decision alternatives.
Forest landowners have different attitudes toward risk and uncertainty. Most people are risk avoiders, especially in "big" decisions with a major potential impact on their livelihood. A risk-averse person seeks decision alternatives, which are at least reasonable when the states of nature develop in an unfavorable way. Risk avoiders tend to select decision alternatives for which the lower end of the distribution of outcomes is as good as possible (Pukkala and Kangas 1996). They may also minimize the "regret", i.e. the maximum loss compared to the best decision alternative under certain states on nature. On the contrary, risk takers are optimistic and favor decision alternatives that are good under favorable states of nature, even though the probability of such an outcome may be low.
There are two basic approaches to the optimization of stand management in a risk situation: anticipatory and adaptive optimization. Anticipatory optimization seeks a single management schedule, which produces the most favorable distribution of net present values or some other objective function (Valsta 1992). Risk neutral decision makers select management schedules which produce high average net present values. Risk takers often select management schedules for which the best outcomes are good whereas risk avoiders tend to maximize the worst outcomes of alternative management schedules.
Adaptive optimization does not try to find a single management prescription for the stand. Instead, it aims at finding rules that help the landowner to make right decisions in changing environment (Lohmander 2007). A well-known rule is the reservation price function indicating the minimum price that the seller should obtain from timber (Brazee and Mendelsohn 1988; Lohmander 1995; Gong and Yin 2004). A more general approach is the Markov decision process model (Lembersky and Johnson 1975; Kaya and Buongiorno 1987).
It can be assumed that reservation price decreases with increasing financial maturity of the stand: the lower the relative value increment of the stand, the lower is the minimum selling price of a certain timber assortment. Since the relative value increment decreases with increasing stand density and mean tree size, it can be assumed that reservation price is negatively correlated with stand basal area and mean tree diameter (Lohmander 1995; Gong 1998; Lu and Gong 2003).
The aim of this study was to describe a system for stochastic optimization of the management of boreal forests in a situation where future timber prices, tree growth and regeneration are not known exactly. The developed simulation–optimization system was used to compare deterministic and stochastic optima, as well as the results of anticipatory and adaptive optimization approaches. Pukkala and Kellomäki (2012) compared anticipatory and adaptive management in even-aged forestry and Zhou et al. (2008) compared adaptive and anticipatory policies in uneven-aged forests. In this study, continuous cover management of both even- and uneven-aged initial stands was optimized. Continuous cover management refers to any sequence of cuttings that keeps a minimum post-cutting residual stand basal area. Regeneration by planting or sowing is not used.
Based on previous studies, it was hypothesized that in a risk situation it is optimal to grow more diverse stands than under certainty (Rollin et al. 2005). Risk avoiders were assumed to maintain more diverse stand structures than risk seekers (Pukkala and Kellomäki 2012). The third hypothesis was that adaptive optimization and management results in higher average net present value than anticipatory optimization (Gong 1998; Lu and Gong 2003).
Growth and yield model
The set of models that was used to simulate stand development (Pukkala et al. 2013) consists of an individual-tree model for diameter increment, an individual-tree survival function, and an ingrowth model (Vanclay 1994). To calculate the assortment volumes of removed trees, the height model of Pukkala et al. (2009) and the taper models of Laasasenaho (1982) were used. The article of Pukkala et al. (2013) also reports methods for simulating the residual variation around the diameter increment and ingrowth models. The deviation of diameter increment from the deterministic model prediction was modelled as follows (Miina 1993):
$$ dev_{it}=a_i+v_{it} $$
$$ v_{it}=\rho v_{i,t-1}+e_{it} $$
where $dev_{it}$ is the deviation from model prediction for tree i and 5-year period t, $a_i$ is a normally distributed random tree factor for tree i, $v_{it}$ is a random autocorrelated residual for tree i and period t, $\rho$ is the correlation coefficient between the residuals of consecutive 5-year periods and $e_{it}$ is a normally distributed random number with var[$e_{it}$] = var[$v_{it}$](1 − $\rho^2$). It was assumed that 1/3 of the variance of $dev_{it}$ is accounted for by the tree factors ($a_i$) and the rest by the autocorrelated residuals ($v_{it}$). It has been found that the correlation between the residuals of consecutive 1-year periods is 0.4–0.7 and the correlation between 5-year residuals is about half of that (Henttonen 1990; Miina 1993; Kangas 1997; Pasanen 1998). In this study, the autocorrelation coefficient of the residuals ($\rho$) was assumed to be 0.300 for all species. The total residual variance was 0.254 for pine, 0.283 for spruce and 0.228 for birch (Pukkala et al. 2013). The random numbers ($e_{it}$) generated for the trees in a particular 5-year growth period were assumed to be correlated (Pasanen 1998), resulting in both auto- and cross-correlated time series of growth residuals (Figure 1). In simulation, the stochastic residuals were added to the predicted diameter increment. As a result, the simulated differentiation of tree size was faster than it would be in a deterministic simulation.
A diameter increment scenario. Sequences of stochastic deviations from deterministic model prediction for five trees and fifty 5-year periods. Each sequence consists of a tree factor and cross- and autocorrelated stochastic temporal components. Tree 2 is a fast-growing individual and Tree 1 is a slow-growing individual.
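The residual model of Equations 1–2 can be sketched in a few lines of Python. This is an illustration, not the paper's implementation: the within-period cross-correlation between trees is omitted for brevity, and the variance values are the spruce values quoted above.

```python
import random

def growth_residual_series(n_periods, total_var=0.283, rho=0.300, rng=random):
    """Simulate Eq. 1-2 for one tree: a constant tree factor a_i plus an
    AR(1) temporal residual v_it. total_var is the total residual variance
    (0.283 for spruce); 1/3 is assigned to the tree factor and 2/3 to the
    autocorrelated part, as assumed in the paper."""
    var_a = total_var / 3.0             # variance of the tree factor a_i
    var_v = total_var * 2.0 / 3.0       # stationary variance of v_it
    var_e = var_v * (1.0 - rho ** 2)    # innovation variance: var[e] = var[v](1 - rho^2)
    a_i = rng.gauss(0.0, var_a ** 0.5)
    v = rng.gauss(0.0, var_v ** 0.5)    # start from the stationary distribution
    series = []
    for _ in range(n_periods):
        v = rho * v + rng.gauss(0.0, var_e ** 0.5)
        series.append(a_i + v)          # dev_it = a_i + v_it
    return series

random.seed(1)
devs = growth_residual_series(50)       # fifty 5-year periods, as in Figure 1
```

In the simulator these deviations would be added to the deterministic diameter increment prediction of each representative tree.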
The diameter increments obtained from the diameter increment model were multiplied with a multiplier that describes the effect of climate change on tree growth (Pukkala and Kellomäki 2012). The climate-induced growth trend is based on a process-based model (Kellomäki and Väisänen 1997; Ge et al. 2010) and corresponds to climate change scenario A1B. The effect of changing climate on diameter increment depends on tree species and growing site. The trends are linear and growth will improve approximately 20% in 50 years. In this study it was assumed that the influence of climate change on diameter increment is not known with certainty. Therefore, the slope of the trend line was assumed to be stochastic, with standard deviation equal to 0.1 times the slope coefficient.
Ingrowth was defined as the number of new trees per hectare that reach the 1.3 m height during a 5-year period. Pukkala et al. (2013) modelled the residuals of the ingrowth model as follows
$$ dev_{s,t}=\rho_s\, dev_{s,t-1}+se_s\, e_{s,t} $$
where $dev_{s,t}$ is the deviation from the deterministic logarithmic model for species s and 5-year period t, $\rho_s$ is the autocorrelation coefficient of successive 5-year periods for species s, $se_s$ is the standard deviation of the stochastic annual component and $e_{s,t}$ are multi-normally distributed correlated random numbers (N(0,1)) for pine, spruce, birch and hardwood other than birch. Correlated random numbers ($e_{s,t}$) were obtained by using the Cholesky decomposition of the covariance matrix of the residuals of the species-specific ingrowth models (Pukkala et al. 2013). Correlations between the residuals of different species are not high, but, for instance, the unexplained variation in the ingrowth of spruce is positively correlated with the residual for birch. On the contrary, the correlations between successive 5-year periods are rather high: 0.670 for pine, 0.577 for spruce, 0.657 for birch and 0.637 for hardwood other than birch. The most probable reason for the positive autocorrelation is that a single good regeneration year (good seed crop with low seed predation and high germination rate) increases the ingrowth of several coming years. The standard deviation of the stochastic annual component ($se_s$) is 0.526 for pine, 0.990 for spruce, 1.027 for birch and 0.938 for hardwood other than birch. Stochastic ingrowth scenarios are produced by adding the simulated residuals to the deterministic logarithmic ingrowth model and converting the result to a non-logarithmic value. Figure 2 shows examples of ingrowth scenarios when the model prediction is 10 new trees per hectare. It can be seen that the resulting ingrowth scenarios are very erratic, reflecting what happens in reality.
A stochastic ingrowth scenario. Cross- and autocorrelated logarithmic residuals are generated (top) and added to the logarithmic ingrowth prediction, which is then converted to non-logarithmic value (bottom).
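A sketch of the ingrowth residual generator of Equation 3, with a hand-written Cholesky factorization so it runs on the standard library alone. The autocorrelation coefficients and standard deviations are the values quoted above; the cross-correlation matrix of $e_{s,t}$ is NOT reported in the paper, so the one below is purely illustrative.

```python
import math, random

def cholesky(a):
    """Lower-triangular Cholesky factor L of a symmetric positive-definite
    matrix a, so that L L^T = a."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# Species order: pine, spruce, birch, other hardwood.
rho = [0.670, 0.577, 0.657, 0.637]   # 5-year autocorrelation (from the paper)
se  = [0.526, 0.990, 1.027, 0.938]   # sd of the annual component (from the paper)
# Illustrative correlation matrix of e_{s,t} -- not the paper's fitted matrix.
corr = [[1.0, 0.1, 0.1, 0.1],
        [0.1, 1.0, 0.3, 0.1],
        [0.1, 0.3, 1.0, 0.2],
        [0.1, 0.1, 0.2, 1.0]]
L = cholesky(corr)

def ingrowth_deviations(n_periods, rng=random):
    """Simulate Eq. 3: cross- and autocorrelated logarithmic residuals
    for the four species groups."""
    dev = [0.0] * 4
    out = []
    for _ in range(n_periods):
        z = [rng.gauss(0.0, 1.0) for _ in range(4)]
        e = [sum(L[s][k] * z[k] for k in range(4)) for s in range(4)]  # correlated N(0,1)
        dev = [rho[s] * dev[s] + se[s] * e[s] for s in range(4)]
        out.append(dev[:])
    return out

random.seed(7)
scen = ingrowth_deviations(20)
```

Adding these logarithmic deviations to the logarithmic ingrowth prediction and exponentiating gives erratic scenarios of the kind shown in Figure 2.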
Leskinen and Kangas (1998) described the annual variation in timber prices with a set of models where the logarithmic price of a certain timber assortment depends on the price of the previous year plus a stochastic annual component
$$ {p}_t-\overline{p}=\alpha \left({p}_{t-1}-\overline{p}\right)+{e}_t $$
where $p_t$ is the logarithmic price in year t, $\alpha$ is a parameter ranging from 0.45 to 0.89 for different timber assortments and $e_t$ is a normally distributed random number. Correlated random numbers for different assortments were produced with the help of Cholesky decomposition. The model has been estimated from the historical timber price statistics of Finland. Figure 3 shows an example timber price scenario for six assortments. It can be seen that the prices of successive years are positively correlated, and the prices of different assortments are also correlated.
A stochastic timber price scenario for 100 years. The prices of different assortments are auto- and cross-correlated.
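For a single assortment, the mean-reverting log-price process of Equation 4 can be sketched as follows. The $\alpha$ range is from the paper; the mean price and innovation standard deviation are illustrative, and the cross-correlation between assortments (handled via Cholesky decomposition in the paper) is omitted here.

```python
import math, random

def price_scenario(n_years, mean_price, alpha=0.6, sd_e=0.1, rng=random):
    """Simulate Eq. 4 for one assortment: AR(1) on the log price around
    its long-term mean. alpha is within the paper's 0.45-0.89 range;
    mean_price (EUR/m3) and sd_e are illustrative values."""
    p_bar = math.log(mean_price)
    p = p_bar
    prices = []
    for _ in range(n_years):
        p = p_bar + alpha * (p - p_bar) + rng.gauss(0.0, sd_e)
        prices.append(math.exp(p))   # back-transform to a non-logarithmic price
    return prices

random.seed(3)
saw_log = price_scenario(100, mean_price=55.0)   # a 100-year scenario, as in Figure 3
```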
Case study stands
Calculations were done for an uneven-aged spruce stand, mixed stands of pine, spruce and birch, and pure pine and spruce stands (Table 1). Each stand was assumed to grow on a typical growing site for the species. The stands represent typical and common stand structures in the managed forests of Finland. The stands were assumed to grow in Central Finland.
Table 1 Case study stands
Each species and canopy layer was initially described by basal area, mean diameter, mean height and minimum and maximum of the diameter distribution. Stand basal area and the three diameters were used to predict the diameter distribution of each stratum (species or canopy layer) present in the stand. The predicted diameter distribution was divided into 10 classes of equal width, and 5 trees were taken to represent each class. The random tree factors of the residuals of the diameter increment model were generated at this point (a i of Equation 1). As a result, each stratum of the stand was represented by 50 "representative trees" varying in size and inherent growth potential.
Growth, survival and ingrowth were simulated using 5-year time steps. If there was ingrowth, a new representative tree was generated for every 10 new conifers or 50 hardwoods (each new tree represented 10 or 50 trees per hectare). The random tree factors of the residuals of diameter growth models were drawn from normal distribution for each new representative tree. Mortality was simulated by multiplying the frequency of the representative tree by its survival probability.
The objective variable was the net present value of all future net incomes, calculated with 3% discount rate. The next three cuttings were optimized for all stands. The net present value of the remaining growing stock (after the 3rd cutting) was calculated with species-specific models using stand basal area, mean dbh, discount rate, site variables and timber prices as predictors (Pukkala 2005). These models explain 90%–95% of the variation of the NPV of the optimal management schedule, depending on tree species. Because of discounting, the value of the ending growing stock, i.e. the discounted value of predicted net present value of all cuttings conducted later than the third cutting, had only a small effect on the total NPV. Preliminary tests indicated that optimizing three first cuttings was enough to have a reliable estimate of the total NPV and to know how the stand should be managed in the near future (Figure 4). For example, when optimizing one to five next cuttings and using model prediction to calculate the NPV of the residual stand, the following total NPVs (calculated with 3% discount rate) were obtained for an uneven-aged spruce stand: 11426 € · ha−1 (1 cutting optimized), 11897 € · ha−1 (2 cuttings optimized), 11917 € · ha−1 (3 cuttings optimized), 11884 € · ha−1 (4 cuttings optimized), 11879 € · ha−1 (5 cuttings optimized).
Effect of optimizing 1 to 5 next cuttings on the optimal thinning intensity curve of the first cutting in an uneven-aged spruce stand.
In anticipatory optimization the decision variables for each cutting were as follows:
Cutting year (exactly: number of years since the start or since previous cutting)
Parameters of the thinning intensity curve, which was defined separately for each species present in the initial stand
Thinning intensity was first described with the following logistic function (Pukkala et al. 2014):
$$ h(d)=\frac{1}{{\left\{1+{a}_3 \exp \left[{a}_1\left({a}_2-d\right)\right]\right\}}^{1/{a}_3}} $$
where h(d) is the proportion of harvested trees at dbh d and $a_1$, $a_2$ and $a_3$ are parameters to be optimized. This simple function has been found to result in almost as good solutions (in terms of NPV) as optimizing the harvest intensities of different diameter classes separately (Pukkala et al. 2014). Moreover, preliminary analyses showed that parameter $a_3$ could be fixed to $a_3$ = 1 (i.e. $a_3$ could be removed) without any notable deterioration of the NPV of the optimal solution. Therefore, the following simplified thinning intensity model was used in this study:
$$ h(d)=\frac{1}{1+ \exp \left[{a}_1\left({a}_2-d\right)\right]} $$
Parameter $a_2$ gives the diameter at which thinning intensity is 0.5, and $a_1$ defines the type of thinning. If $a_1$ is negative, small trees are thinned more than large ones, resulting in low thinning. When $a_1$ is positive, the thinning represents high thinning, while $a_1$ equal to 0 results in uniform thinning. As a result, the number of optimized variables was 3 × (1 + 2) = 9 for a one-species stand and 3 × (1 + 3 × 2) = 21 for a mixture of pine, spruce and birch.
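The behaviour of Equation 6 is easy to see numerically; a minimal sketch with illustrative parameter values:

```python
import math

def thinning_intensity(d, a1, a2):
    """Eq. 6: proportion of trees harvested at dbh d (cm).
    a1 > 0 gives high thinning (large trees removed), a1 < 0 low thinning,
    a1 = 0 uniform thinning; a2 is the dbh at which half the trees are removed."""
    return 1.0 / (1.0 + math.exp(a1 * (a2 - d)))

# Illustrative values: a high thinning centred at 20 cm.
h_half = thinning_intensity(20.0, a1=0.5, a2=20.0)    # exactly 0.5 at d = a2
h_big = thinning_intensity(30.0, a1=0.5, a2=20.0)     # near 1: large trees removed
h_small = thinning_intensity(10.0, a1=0.5, a2=20.0)   # near 0: small trees retained
```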
In adaptive optimization, cutting years were replaced by a reservation price function. The following form was assumed, based on previous research (e.g. Pukkala and Kellomäki 2012), preliminary analyses and known relationships between stand basal area, mean tree diameter and financial maturity:
$$ RP = \exp \left({b}_1+{b}_2\sqrt{D}+{b}_3\sqrt{G}\right) $$
where RP is the price of saw log (roadside price) that activates a cutting treatment and $b_1$, $b_2$ and $b_3$ are optimized parameters that define how the reservation price depends on stand basal area and mean tree diameter. The same reservation price was used in all cuttings. In a mixed stand the current timber price, which was compared to the reservation price, was computed as the weighted average of the saw log prices of all species present in the stand, using basal area as the weight variable.
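Equation 7 can be illustrated with hypothetical coefficients (the paper optimizes $b_1$–$b_3$ per stand and does not report fixed values; the negative signs below encode the expected behaviour that a denser stand with larger mean trees accepts a lower selling price):

```python
import math

def reservation_price(D, G, b1, b2, b3):
    """Eq. 7: minimum saw-log roadside price (EUR/m3) that triggers a cutting.
    D = basal-area-weighted mean dbh (cm), G = stand basal area (m2/ha)."""
    return math.exp(b1 + b2 * math.sqrt(D) + b3 * math.sqrt(G))

# Hypothetical coefficients, chosen only to show the direction of the effect.
b1, b2, b3 = 5.0, -0.15, -0.10
rp_young = reservation_price(D=15.0, G=18.0, b1=b1, b2=b2, b3=b3)
rp_mature = reservation_price(D=30.0, G=28.0, b1=b1, b2=b2, b3=b3)
```

With these signs, the financially mature, dense stand has a lower reservation price than the young, sparse one, i.e. its owner cuts at a lower timber price.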
The intensity and type of cutting were defined with the same logistic function that was used in anticipatory optimization. However, in adaptive optimization a cutting may be postponed if the timber price is not good enough. Using the same thinning intensity curve with varying cutting years may lead to situations in which the thinning is too heavy or too light, depending on how much and in which direction the cutting year is moved. To prevent this, the problem formulation was changed so that parameter $a_2$ (location of the thinning intensity curve) was calculated with a model, and only parameter $a_1$ (thinning type) was optimized. This resulted in problem formulations containing 3 + 3 × 1 = 6 decision variables in one-species stands, and 3 + 3 × 3 × 1 = 12 decision variables in the mixture of pine, spruce and birch (the type of thinning was optimized separately for each species).
Several deterministic optimizations were conducted for different species on different growing sites to find the relationship between parameter $a_2$ (location) of the thinning intensity curve and the stand characteristics (Figure 5). On the basis of these optimizations, the following model was fitted to the diameter at which thinning intensity is 50%:
Dependence of parameter $a_2$ of the thinning intensity curve (Equation 5) on stand basal area and mean tree diameter. Open circles represent xeric (dry) growing sites.
$$ {a}_2 = 8.738-0.156G+0.771D-1.906CT $$
where D is the basal-area-weighted mean diameter of the trees (cm), G is stand basal area (m2 · ha−1) and CT is an indicator variable for xeric growing sites (CT = 1 for Calluna type and poorer sites, and 0 otherwise). The model explained 82% of the variation of $a_2$.
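Equation 8 is a plain linear model and can be evaluated directly; for example, for a stand with G = 20 m2 · ha−1 and D = 25 cm on a non-xeric site:

```python
def a2_location(G, D, xeric=False):
    """Eq. 8: dbh (cm) at which thinning intensity is 50%.
    G = stand basal area (m2/ha), D = basal-area-weighted mean dbh (cm),
    xeric = True for Calluna type and poorer sites (CT = 1)."""
    return 8.738 - 0.156 * G + 0.771 * D - 1.906 * (1 if xeric else 0)

a2 = a2_location(G=20.0, D=25.0)          # 8.738 - 3.120 + 19.275 = 24.893 cm
a2_xeric = a2_location(G=20.0, D=25.0, xeric=True)   # 1.906 cm lower on xeric sites
```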
The current forestry legislation of Finland does not allow the landowner to thin the stand below a certain minimum residual basal area (typically around 10 m2 · ha−1). If the minimum basal area requirement is not met, the landowner is obliged to regenerate the stand within a certain time frame. In this study, any solution in which the minimum basal area was not met was penalized with the consequence that the selected schedules were better in line with the current forestry legislation.
Each management schedule evaluated during an optimization run was simulated 600 times, and the mean NPV of the 600 stochastic outcomes was passed to the optimization algorithm. The results therefore represent the optimal management for risk neutral decision makers. When the effect of risk attitude was analysed, the 10% accumulation point of the distribution of outcomes was used as the objective variable for a risk avoider, leading to the selection of such a management schedule for which the worst outcomes are as good as possible (Pukkala and Kangas 1996). The corresponding accumulation point for a risk seeker was 90%. The used optimization method was the direct search algorithm of Hooke and Jeeves (1961). Afterwards, all optimal solutions – also the deterministic ones – were simulated 1000 times with stochastic variation in tree growth, growth trend, ingrowth and timber price. The reported results on NPV, removals etc. are based on these simulations.
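The risk-attitude-dependent objective can be sketched as below. The percentile is computed with a simple nearest-rank rule, which is an approximation of the accumulation points used in the paper; the NPV values are invented for illustration.

```python
def percentile(values, p):
    """Nearest-rank percentile of a list (p in [0, 100])."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1)))))
    return s[k]

def objective(npvs, attitude="neutral"):
    """Objective value for one management schedule given its simulated NPVs:
    mean for a risk neutral owner, 10% accumulation point for a risk avoider,
    90% accumulation point for a risk seeker."""
    if attitude == "neutral":
        return sum(npvs) / len(npvs)
    return percentile(npvs, 10 if attitude == "avoider" else 90)

# Illustrative NPV outcomes (EUR/ha) of one schedule from repeated simulation.
npvs = [9000, 10500, 11000, 11500, 12000, 12500, 13000, 13500, 14000, 16000]
obj_neutral = objective(npvs)             # mean of the distribution
obj_avoider = objective(npvs, "avoider")  # lower tail: pessimistic criterion
obj_seeker = objective(npvs, "seeker")    # upper tail: optimistic criterion
```

In the actual optimization this objective would be evaluated from 600 stochastic simulations of each candidate schedule and maximized with the Hooke and Jeeves direct search.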
Effect of risk factors
The effect of adding different stochastic components to simulation and anticipatory optimization was inspected in the uneven-aged spruce stand. Management was optimized without any stochasticity and with stochasticity in growth, ingrowth or timber price. The distributions of net present values produced by the optimal anticipatory solution are shown in Figure 6. It can be seen that when only ingrowth or only climate-induced growth trend is stochastic the distribution of outcomes is very narrow, indicating that these factors do not bring much uncertainty to decision-making. Stochastic variation in tree growth brought much more uncertainty in NPV than stochasticity in ingrowth or climate-induced growth trend. When timber price was stochastic the distribution of outcomes was much wider indicating that timber price is a more significant source of uncertainty than the biological growth process of trees.
Distributions of net present value in the optimal solution when one factor at a time is stochastic (growth, ingrowth, growth trend, or timber price).
Deterministic optimization and simulation resulted in an NPV of 11775 € · ha−1. The average NPVs of the outcomes of the stochastic anticipatory optima were almost the same. Also the optimal cutting years of the uneven-aged spruce stand were the same in all optimizations: the first cutting immediately, the second after 15 years and the third 10 years later. However, the way in which cuttings were conducted depended on the degree of stochasticity. The deterministic optimum advised the landowner to remove all trees larger than 20.2 cm in dbh. When stochastic factors were added to simulation and optimization, more and more trees larger than 20 cm were retained, and more and more trees less than 20 cm in dbh were removed, which means that stochastic optimization leads to higher dbh variation in the post-cutting stand (Figure 7). The same trend was observed also in pure even-aged conifer stands, except the mature pine stand (Figure 8). However, also in this stand the second and third thinnings showed similar differences between deterministic and stochastic optima as obtained for the other stands. Similar differences between deterministic and stochastic optima were obtained also for the mixed stands (results not shown). Rollin et al. (2005) found that accounting for risk leads to clearly more diverse stand structures than suggested by deterministic solutions.
Optimal first cutting in the deterministic solution and in stochastic anticipatory optima with different sources of stochasticity (Gro = growth, Ingro = ingrowth, Trend = climate-induced growth trend, Price = timber price).
Thinning intensity curves in the first cutting of pure conifer stands in deterministic (dashed lines) and stochastic anticipatory optimization (solid lines).
The total removal of the three cuttings was 6%–23% lower in the stochastic anticipatory optima than in the deterministic optima. The interval between the 1st and the 3rd cutting was 5–20 years shorter in the stochastic optima. These are indications of risk sharing behavior: in a risky situation it is optimal to cut more often but remove a smaller volume at a time.
Figure 7 shows that the cutting intensity curve is located at larger diameters in mature stands. Because the mature stands are to be cut immediately, the result suggests that the cutting may already be late. Another partial explanation for the difference between young and mature stands is that the basal area of the young stands would increase too much without cutting, decreasing the relative value increment of the stand (see Figure 5). Table 2 shows that the stand basal area at cutting is larger for the young initial stands but the mean tree diameter is smaller, suggesting that high stand densities call for earlier cuttings, which is a logical result. In general, the larger the mean tree size, the lower the pre- and post-cutting stand basal area.
Table 2 Results calculated from 1000 stochastic simulations with the optimal values of decision variables for a risk neutral decision maker in different problem formulations when tree growth, ingrowth and timber price are stochastic (Det = deterministic optimization, Anti = stochastic anticipatory optimization, Ada = stochastic adaptive optimization)
Effect of risk attitude
The effect of risk attitude on optimal management was analyzed in the mixed stands with the hypothesis that a risk avoider maintains a more diverse stand structure than a risk seeker. However, the thinning intensity curves were very similar for both risk attitudes suggesting that the post-cutting diameter distributions were also similar for both attitudes. The same difference as in pure stands was observed between deterministic and stochastic anticipatory optima: the deterministic optima proposed diameter-limit cutting with a narrower post-cutting diameter distribution than obtained in stochastic anticipatory optimization.
The proportions of different species after the first cutting were more uniform for the risk avoider than for the risk seeker (Figure 9), i.e. risk aversion led to a more mixed stand. In the young mixed stand the risk seeker removed all pines in the first cutting whereas the risk avoider left all species in the residual stand. In the mature stand the post-cutting stand was more spruce-dominated for the risk seeker, the risk avoider maintaining more birch and slightly more pines than the risk seeker. The differences were in line with the hypothesis, but they were small. One reason for the small differences may be that the legal limits force the landowner to keep more than one species in the first cutting, since otherwise the stand basal area would be too low. Another reason is that pines and birches were clearly larger than spruces, so it was optimal to gradually remove them irrespective of risk attitude. In addition, since the prices of different tree species correlate (Figure 3), increasing species diversity does not decrease the financial risk very much.
Proportions of different tree species in the post-cutting stands according to stochastic anticipatory optima for risk avoider (A) and risk seeker (S). The number after A or S is the number of the cutting.
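The point about correlated prices limiting the benefit of diversification can be made quantitative with a simple portfolio-variance sketch (the standard deviations and the correlation below are assumed values, not estimates from the price series of the study):

```python
def revenue_variance(weights, sd, rho):
    """Variance of total revenue for species value shares `weights`, per-species
    price standard deviations `sd`, and a common pairwise correlation `rho`."""
    n = len(weights)
    return sum(weights[i] * weights[j] * sd[i] * sd[j] * (1.0 if i == j else rho)
               for i in range(n) for j in range(n))

w = [1 / 3, 1 / 3, 1 / 3]   # equal value shares of pine, spruce and birch
sd = [10.0, 10.0, 10.0]     # assumed price standard deviations, all species equal

v_single = revenue_variance([1.0], [10.0], 0.0)  # one species only
v_indep = revenue_variance(w, sd, 0.0)           # three species, independent prices
v_corr = revenue_variance(w, sd, 0.8)            # three species, correlated prices
```

Diversifying over three species would cut the revenue variance by two thirds if prices were independent, but with a pairwise correlation of 0.8 the reduction is only about 13%, in line with the observation above.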
In the mature mixed stand, the more risk-averse the decision-maker was, the earlier the cuttings were conducted (Table 3). This is in line with Gong (1998), who concluded that risk avoiders should have the final felling earlier than risk-neutral forest landowners. The removed volume increased with increasing risk tolerance (Gong 1998; Lu and Gong 2003). In the young mixed stand the removal was larger for the risk-neutral decision-maker than for the risk avoider, but the risk seeker cut less, most probably because the third cutting was 10 years earlier for the risk seeker than for the other risk attitudes.
Table 3 Optimal cutting years in anticipatory stochastic optima for different risk attitudes
Adaptive optima
In adaptive optimization, cutting years were replaced by the reservation price function, resulting in cutting years that may be different in repeated stochastic simulations, depending on the realized stand development and timber price. To make the thinning intensity curve sensitive to changes in cutting year, the "location" parameter of the curve (a2, the dbh at which thinning intensity is 0.5) was calculated with a model (Equation 8), and only the type of thinning (low, uniform or high, depending on parameter a1 of Equation 6) was optimized.
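The mechanics of the adaptive rule (fixed decision parameters, realization-specific cutting years) can be sketched as follows. The random-walk price process, the reservation price values and the 5-year step are all illustrative; the paper's actual reservation price function is not reproduced in this section:

```python
import random

def adaptive_cutting_years(seed, reserve, horizon=60, step=5):
    """Simulate one sawlog price realization and return the years in which
    the realized price reaches the reservation price, i.e. the cutting years."""
    rng = random.Random(seed)
    price, years = 55.0, []
    for t in range(0, horizon + 1, step):
        price = max(20.0, price + rng.gauss(0.0, 6.0))  # random-walk price
        dt = t - (years[-1] if years else 0)            # years since last cut
        if price >= reserve(dt):
            years.append(t)
    return years

# Assumed declining reservation price: high right after a cutting, lower later
reserve = lambda dt: 68.0 - 0.4 * dt

schedules = [adaptive_cutting_years(s, reserve) for s in range(5)]
```

The same reservation-price parameters produce different cutting years in different realizations, which is why repeated stochastic simulation is needed to evaluate a candidate parameter set.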
The optimal reservation price functions were very similar for all initial stands (examples shown in Figure 10). As expected, the mean net present values of several repeated stochastic simulations with the optimal parameters were clearly better for the adaptive optima (Figure 11), the advantage of adaptive optimization and management being 6%–14%. There were no systematic differences in the average cutting years or removals between anticipatory and adaptive optima (Table 2). In mature stands, the average cutting year suggested by the adaptive optima was about 5 years later than in the anticipatory optima. However, the reason for this difference is most probably technical: it was only possible to postpone the first cutting from year zero, not to have it earlier.
Examples of optimal reservation price functions obtained in adaptive optimization. Reservation price is the minimum price (in this case the minimum roadside price of saw log) which must be obtained to sell timber.
Average net present values of 1000 stochastic simulations with the optimal values of decision variables obtained in different problem formulations. "Adaptive a1 = 1" is a simulation in which the optimized value of parameter a1 of the thinning intensity curve was replaced by a constant value (a1 = 1).
The solutions of the adaptive optimization problems were also simulated so that the optimized value of parameter a1 (thinning type) of the thinning intensity curve (Equation 6) was replaced by 1, corresponding to high thinning. The average NPVs of 1000 simulations were nearly the same as obtained with the optimized values of parameter a1, except for the mature mixed stand. The result indicates that nearly optimal adaptive management can be found by optimizing only the reservation price function, calculating the thinning intensity curve with a model and fixing parameter a1 to 1. The whole management schedule can then be defined and optimized with only three decision variables, namely the parameters of the reservation price function. In the anticipatory optima for mixed stands there are 21 decision variables, and yet the expected NPV is clearly better for the adaptive solution defined by only 3 decision variables.
The average roadside price obtained from saw log was about 20% higher in adaptive optima than in deterministic or stochastic anticipatory optima (Table 2). The difference was smaller in the first cutting of the mature stands, due to the high opportunity cost of the growing stock (high financial maturity of the initial stand). The results are in agreement with the assumptions made about the shape of the reservation price function.
All the hypotheses of the study were supported by the results. However, the effect of risk attitude on optimal management was very small, which may be related to the current forestry legislation, which ruled out a part of the management options. Another reason may be the size differences between the species of the mixed stands, which had a greater impact on the results than risk attitude. Positive correlation between the timber prices of different tree species (Figure 3) also decreases the possibilities to reduce financial risk through increased species diversity. Roessiger et al. (2011) concluded that the optimal management for a cautious risk-avoiding forest landowner uses tree species diversification, avoiding clear-cutting and mono-species forest composition.
All thinnings of all solutions were high thinnings. The very high stochastic variation of ingrowth did not affect the expected NPV of the management schedule and it did not bring much uncertainty in decision-making. This is because the removals and incomes of the first three cuttings were obtained from trees that already existed in the initial stands. Ingrowth affects the incomes of distant cuttings whose effect on NPV is very small when the discount rate is 3% or higher. In addition, infrequent regeneration and ingrowth, combined with uneven growth rate of the ingrowth trees may provide a continuous enough supply of trees to larger diameter classes. Timber price was by far the most significant source of risk and uncertainty.
By looking at the average NPVs of 1000 stochastic simulations conducted with different optimal solutions (Figure 11), it can be concluded that there is also some uncertainty related to the optimality of the found solutions. Theoretically, stochastic anticipatory optima should produce better results than 1000 stochastic simulations with the deterministic optima, but this was not always the case. Correspondingly, fixing parameter a1 to 1 should decrease the simulated NPVs compared to adaptive solutions where a1 was optimized, but this did not always happen. The results suggest that stochastic problems are more difficult to solve than deterministic ones. Simulating each schedule clearly more than 600 times in optimization (600 realizations were used in the optimization runs) would most probably partially solve the problem, but at a high computational cost. An alternative approach, namely the Markovian decision process model, would be better from the computational point of view (Kaya and Buongiorno 1987).
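The noise described above is ordinary Monte Carlo error: with N realizations the standard error of the estimated expected NPV shrinks only as 1/√N, so a candidate solution can look better or worse than it really is. A small sketch, in which the NPV level and the 15% coefficient of variation are assumed values:

```python
import random
import statistics

def simulate_npv(seed, mean_npv=11775.0, cv=0.15):
    """One stochastic simulation of a schedule, reduced to a noisy NPV draw."""
    return mean_npv * (1.0 + random.Random(seed).gauss(0.0, cv))

def estimate(n):
    """Estimate the expected NPV and its standard error from n realizations."""
    vals = [simulate_npv(s) for s in range(n)]
    return statistics.mean(vals), statistics.stdev(vals) / n ** 0.5

mean_600, se_600 = estimate(600)      # 600 realizations, as used in the study
mean_5000, se_5000 = estimate(5000)   # many more realizations
```

Going from 600 to 5000 realizations shrinks the standard error by a factor of about √(5000/600) ≈ 2.9, at roughly an eight-fold computational cost, which is the trade-off noted above.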
Corresponding to the hypotheses and previous studies (Gong 1998; Lu and Gong 2003; Pukkala and Kellomäki 2012), adaptive optimization led to higher NPVs than anticipatory optimization. However, the differences were smaller than could be expected on the basis of some earlier studies (Gong and Yin 2004; Pukkala and Kellomäki 2012). This was partly because the growth interval was always 5 years, although the truly optimal cutting year might be one of the years within the 5-year time step used in simulation. This most probably decreased the NPVs more in adaptive optimization, since it was not possible to pick the year of the 5-year period that had the highest timber price. Therefore, the results of this study can be interpreted as showing that the benefit of adaptive optimization is at least 6%–14%, but it can also be higher. Zhou et al. (2008) found a 17% higher NPV for an adaptive strategy compared to a fixed strategy, with little difference in the length of the cutting cycle.
The adaptive approach facilitates very simple management rules. The optimal future management can be described with only three parameters, namely the coefficients of the reservation price function. A thinning treatment should be conducted when the actual price is higher than the reservation price. The thinning intensity of different diameter classes is calculated with Equation 6. Parameter a1 of the equation can be taken as 1, and parameter a2 is calculated with Equation 8. If the use of equations is difficult for the forest manager, the equations can be converted to diagrams that show the optimal management in a changing environment.
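The rule described above can be written out as a short procedure. The functional forms below are stand-ins (a declining reservation price and a logistic intensity curve with a hypothetical a2 model), since Equations 6 and 8 are not reproduced in this section:

```python
import math

def reservation_price(dt, b0=40.0, b1=30.0, b2=0.08):
    """Hypothetical 3-parameter reservation price, declining with the number
    of years since the previous cutting (dt)."""
    return b0 + b1 * math.exp(-b2 * dt)

def a2_from_stand(mean_dbh_cm):
    """Stand-in for Equation 8: locate the intensity curve near the mean dbh."""
    return 0.9 * mean_dbh_cm + 3.0

def removal_fractions(dbh_classes, mean_dbh_cm, a1=1.0):
    """Thinning intensity per dbh class with a1 fixed to 1 (high thinning);
    logistic shape assumed, with intensity 0.5 at dbh = a2."""
    a2 = a2_from_stand(mean_dbh_cm)
    return {d: 1.0 / (1.0 + math.exp(-a1 * (d - a2))) for d in dbh_classes}

def manage(offered_price, dt, dbh_classes, mean_dbh_cm):
    """Cut only when the offered sawlog price reaches the reservation price."""
    if offered_price < reservation_price(dt):
        return None   # wait: price below the reservation price
    return removal_fractions(dbh_classes, mean_dbh_cm)
```

With these assumed coefficients, an offer of 45 €/m3 five years after the previous cutting is declined (the reservation price is about 60), while an offer of 72 €/m3 triggers a cutting whose removal fractions increase with dbh, i.e. a high thinning.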
Brazee R, Mendelsohn R (1988) Timber harvesting with fluctuating prices. For Sci 34(2):359–372
Ge Z-M, Zhou X, Kellomäki S, Wang K-Y, Peltola H, Väisänen H, Strandman H (2010) Effects of changing climate on water and nitrogen availability with implications on the productivity of Norway spruce stands in southern Finland. Ecol Modell 221(13–14):1731–1743
Gong P (1998) Risk preferences and adaptive harvest policies for even-aged stands management. For Sci 44(4):496–506
Gong P, Yin R (2004) Optimal harvest strategy for slash pine plantations: the impact of autocorrelated prices for multiple products. For Sci 50(1):10–19
Henttonen H (1990) Kuusen rinnankorkeusläpimitan kasvun vaihtelu Etelä-Suomessa. Summary: Variation in the diameter growth of Norway spruce in Southern Finland. University of Helsinki, Department of Forest Mensuration and Management. Res Notes 25:1–88
Hooke R, Jeeves TA (1961) "Direct search" solution of numerical and statistical problems. J Assoc Comput Mach 8:212–229
Kangas A (1997) On the prediction bias and variance in long-term growth projections. For Ecol Manage 96:207–216
Kaya I, Buongiorno J (1987) Economic harvesting of uneven-aged northern hardwood stands under risk: a Markovian decision model. For Sci 33(4):889–907
Kellomäki S, Väisänen H (1997) Modelling the dynamics of the forest ecosystem for climate change studies in the boreal conditions. Ecol Model 97:121–140
Laasasenaho J (1982) Taper curve and volume equations for pine, spruce and birch. Communicationes Instituti Forestalis Fenniae 108:1–74
Lembersky MR, Johnson KN (1975) Optimal policies for managed stands: an infinite horizon Markov decision process approach. For Sci 21(2):109–122
Leskinen P, Kangas J (1998) Modelling and simulation of timber prices for forest planning calculations. Scand J For Res 13:469–476
Lohmander P (1995) Reservation price models in forest management: errors in the estimation of probability density function parameters and optimal adjustment of the bias free point estimates. Management systems for a global forest economy with global resource concerns. Society of American Foresters, USA, pp 439–456
Lohmander P (2007) Adaptive optimization of forest management in a stochastic world. In: Weintraub A, Romero C, Bjørndal T, Epstein R, Miranda J (eds) Handbook of operations research in natural resources. International Series in Operations Research and Management Science, Vol. 99. Springer Science + Business Media B.V., pp 525–543.
Lu F, Gong P (2003) Optimal stocking level and final harvest age with stochastic prices. J For Econ 9:119–136
Miina J (1993) Residual variation in diameter growth in a stand of Scots pine and Norway spruce. For Ecol Manage 58:111–128
Pasanen K (1998) Integrating variation in tree growth into forest planning. Silva Fennica 32(1):11–25
Pukkala T (2005) Metsikön tuottoarvon ennustemallit kivennäismaan männiköille, kuusikoille ja rauduskoivikoille. Metsätieteen aikakauskirja 3(2005):311–322
Pukkala T, Kangas J (1996) A method for integrating risk and attitude toward risk into forest planning. For Sci 42(2):198–205
Pukkala T, Kellomäki S (2012) Anticipatory vs. adaptive optimization of stand management when tree growth and timber prices are stochastic. Forestry 85(4):463–472
Pukkala T, Lähde E, Laiho O (2009) Growth and yield models for uneven-sized forest stands in Finland. For Ecol Manage 258:207–216
Pukkala T, Lähde E, Laiho O (2013) Species interactions in the dynamics of even- and uneven-aged boreal forests. J Sustainable For 32:1–33
Pukkala T, Lähde E, Laiho O (2014) Optimizing any-aged management of mixed boreal forest under residual basal area constraint. J For Res 25(3):627–636
Roessiger J, Griess VC, Knoke T (2011) May risk aversion lead to near-natural forestry? A simulation study. Forestry 84(5):527–537
Rollin F, Buongiorno J, Zhou M, Peyron J-L (2005) Management of mixed-species, uneven-aged forests in the French Jura: from stochastic growth and price models to decision tables. For Sci 51(1):64–75
Valsta L (1992) A scenario approach to stochastic anticipatory optimization in stand management. For Sci 38:430–447
Vanclay JK (1994) Modelling forest growth and yield: applications to mixed tropical forests. CAB International, United Kingdom
Zhou M, Buongiorno J (2006) Forest landscape management in a stochastic environment, with an application to mixed loblolly pine–hardwood forests. For Ecol Manage 223:170–182
Zhou M, Liang J, Buongiorno J (2008) Adaptive versus fixed policies for economic or ecological objectives in forest management. For Ecol Manage 254:178–187
University of Eastern Finland, PO Box 111, 80101, Joensuu, Finland
Timo Pukkala
Correspondence to Timo Pukkala.
TP conducted the analyses and wrote the manuscript. The author read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Pukkala, T. Optimizing continuous cover management of boreal forest when timber prices and tree growth are stochastic. For. Ecosyst. 2, 6 (2015). https://doi.org/10.1186/s40663-015-0028-5
Adaptive optimization
Anticipatory optimization
Stochastic optimization
Risk preferences
Reservation price
Uncertainty and Risk Analysis in Forest Ecosystem Dynamics
The Clifford-cyclotomic group and Euler–Poincaré characteristics
Linear algebraic groups and related topics
Colin Ingalls, Bruce W. Jordan, Allan Keeton, Adam Logan, Yevgeny Zaytman
Journal: Canadian Mathematical Bulletin , First View
Published online by Cambridge University Press: 02 September 2020, pp. 1-16
For an integer $n\geq 8$ divisible by $4$ , let $R_n={\mathbb Z}[\zeta _n,1/2]$ and let $\operatorname {\mathrm {U_{2}}}(R_n)$ be the group of $2\times 2$ unitary matrices with entries in $R_n$ . Set $\operatorname {\mathrm {U_2^\zeta }}(R_n)=\{\gamma \in \operatorname {\mathrm {U_{2}}}(R_n)\mid \det \gamma \in \langle \zeta _n\rangle \}$ . Let $\mathcal {G}_n\subseteq \operatorname {\mathrm {U_2^\zeta }}(R_n)$ be the Clifford-cyclotomic group generated by a Hadamard matrix $H=\frac {1}{2}[\begin {smallmatrix} 1+i & 1+i\\1+i &-1-i\end {smallmatrix}]$ and the gate $T_n=[\begin {smallmatrix}1 & 0\\0 & \zeta _n\end {smallmatrix}]$ . We prove that $\mathcal {G}_n=\operatorname {\mathrm {U_2^\zeta }}(R_n)$ if and only if $n=8, 12, 16, 24$ and that $[\operatorname {\mathrm {U_2^\zeta }}(R_n):\mathcal {G}_n]=\infty $ if $\operatorname {\mathrm {U_2^\zeta }}(R_n)\neq \mathcal {G}_n$ . We compute the Euler–Poincaré characteristic of the groups $\operatorname {\mathrm {SU_{2}}}(R_n)$ , $\operatorname {\mathrm {PSU_{2}}}(R_n)$ , $\operatorname {\mathrm {PU_{2}}}(R_n)$ , $\operatorname {\mathrm {PU_2^\zeta }}(R_n)$ , and $\operatorname {\mathrm {SO_{3}}}(R_n^+)$ .
Exoplanet host-star properties: the active environment of exoplanets
John P. Pye, David Barrado, Rafael A. García, Manuel Güdel, Jonathan Nichols, Simon Joyce, Nuria Huélamo, María Morales-Calderón, Mauro López, Enrique Solano, Pierre-Olivier Lagage, Colin P. Johnstone, Allan Sacha Brun, Antoine Strugarek, Jérémy Ahuir, On behalf of the ExoplANETS-A Consortium
Journal: Proceedings of the International Astronomical Union / Volume 14 / Issue S345 / August 2018
Published online by Cambridge University Press: 13 January 2020, pp. 202-205
Print publication: August 2018
The primary objectives of the ExoplANETS-A project are to: establish new knowledge on exoplanet atmospheres; establish new insight on the influence of the host star on the planet atmosphere; disseminate knowledge, using online, web-based platforms. The project, funded under the EU's Horizon-2020 programme, started in January 2018 and has a duration of ∼3 years. We present an overview of the project, the activities concerning the host stars and some early results on the host stars.
The role of complex magnetic topologies on stellar spin-down
Victor Réville, Allan Sacha Brun, Antoine Strugarek, Sean P. Matt, Jérôme Bouvier, Colin P. Folsom, Pascal Petit
Published online by Cambridge University Press: 09 September 2016, pp. 297-302
The rotational braking of magnetic stars through the extraction of angular momentum by stellar winds has been studied for decades, leading to several formulations. We recently demonstrated that the dependency of the braking law on the coronal magnetic field topology can be taken into account through a simple scalar parameter: the open magnetic flux. The Zeeman-Doppler Imaging technique has brought the community a reliable and precise description of the surface magnetic field of distant stars. The coronal structure can then be reconstructed using a potential field extrapolation, a technique that relies on a source surface radius beyond which all field lines are open, thus avoiding computationally expensive MHD simulations. We developed a methodology to choose the best source surface radius in order to estimate open flux and magnetic torques. We apply this methodology to five K-type stars from 25 to 584 Myr and the Sun, and compare the resulting torque to values expected from spin evolution models.
12 - Thinking inside the box
from PART 3 - IDEAS AND FUTURES
By Colin Allan, Scottish Futures Trust
Edited by Les Watson
Book: Better Library and Learning Space
Published by: Facet
Print publication: 31 October 2013, pp 159-166
Much is said about transforming education. As architects and designers it is debatable as to whether we can influence pedagogical outcome … but we are eager to try. Through placemaking and the creation of better learning environments we can influence activity and behaviour in a creative, exciting and innovative way.
Since Socrates sat under a plane tree in ancient Greece, Cistercian monks occupied their monastic cells in the 11th century, merchants discoursed in the coffee houses of 17th century London and pupils were fearful in foreboding Victorian school houses … learning has been serendipitous. We learn, despite the space or building.
This chapter will illustrate how form and space provide 'a wrap' for learning environments – intentionally or otherwise – and how it can be improved. The architecture can work much harder to encourage the activity within the space – creating the joy of space and an exciting environment in which to learn.
I believe that as architects we need to establish objectives, then work within defined parameters. This should be considered as a creative challenge rather than a restrictive convention. It is a fundamental multidimensional assessment of light, space and technology and how they can best be combined architecturally … it is about 'thinking inside the box'.
Many learning spaces now excel through designing from the inside out. We are not talking about simply interior design but true architecture where the inside of the building form and function is expressed in an intrinsic rather than superficial way – for example the Saltire Centre at Glasgow Caledonian University, where a range of learning spaces has been created, in part through orientation, circulation patterns, the permeability of the building and the environmental strategy. More recently some of the most exciting, imaginative environments have been created for technology and media-based companies such as Apple, Google, Microsoft, Pixar and Disney, embracing dynamic architectural spaces, the latest information and communication technology, smart furniture solutions, minimal storage, and bold use of colour and art to create a sense of place, which encourages inspiration and creative learning, and makes for a joyful working environment.
Land use and a low-carbon society
Colin D. Campbell, Allan Lilly, Willie Towers, Stephen J. Chapman, Alan Werritty, Nick Hanley
Journal: Earth and Environmental Science Transactions of The Royal Society of Edinburgh / Volume 103 / Issue 2 / July 2012
Published online by Cambridge University Press: 22 April 2013, pp. 165-173
Print publication: July 2012
Land use and the management of our natural resources such as soils and water offer great opportunities to sequester carbon and mitigate the effects of climate change. Actions on forestry, soil carbon and damaged peatlands each have the potential to reduce Scottish emissions in 2020 by hundreds of thousands of tonnes. Most actions to reduce emissions from land use have beneficial effects on other ecosystem services, so if we can cut emissions we can in many circumstances improve the environment. The cost of reducing emissions through land use change can be low in relation to other means of cutting emissions. The Scottish Land Use Strategy and the Ecosystem Approach it calls for, employing the concept of ecosystem services, offers a way of balancing environmental, social and economic demands on the land. Scotland's land, soils, forests and waters are all likely to be significantly altered by future climate change. Each of these components of the land-based environment offers opportunities for mitigation and adaptation to climate change. The emerging new imperatives for securing food, water and energy at a global level are equally important for Scotland, and interact with the need for environmental security and for dealing with climate change.
Exploring the Influence of Income and Geography on Access to Services for Older Adults in British Columbia: A Multivariate Analysis Using the Canadian Community Health Survey (Cycle 3.1)
Diane E. Allan, Laura M. Funk, R. Colin Reid, Denise Cloutier-Fisher
Journal: Canadian Journal on Aging / La Revue canadienne du vieillissement / Volume 30 / Issue 1 / March 2011
Published online by Cambridge University Press: 03 March 2011, pp. 69-82
Print publication: March 2011
Existing research on the health care utilization patterns of older Canadians suggests that income does not usually restrict an individual's access to care. However, the role that income plays in influencing access to health services by older adults living in rural areas is relatively unknown. This article examines the relationship between income and health service utilization among older adults in rural and urban areas of British Columbia. Data were drawn from Statistics Canada's Canadian Community Health Survey, Cycle 3.1. Multivariate regression techniques were employed to examine the influence of relative income on accessibility for 3,424 persons aged 65 and over. Results suggest that (1) relative income does not influence access to health care services; and (2) this is true for both urban and rural older adults. The most important and consistent predictors of access in all cases were those that measured health care need.
By Imran M. Ahmed, Richard P. Allen, Carl W. Bazil, Meredith Broderick, Oliviero Bruni, Christina J. Calamaro, Rosalind D. Cartwright, James Allan Cheyne, Sudhansu Chokroverty, Irshaad O. Ebrahim, Raffaele Ferri, Elena Finotti, Gina Graci, Christian Guilleminault, Divya Gupta, Shelby F. Harris, Timothy F. Hoban, Nelly Huynh, Raffaele Manni, Anissa M. Maroof, Thornton B. A. Mason, Thomas A. Mellman, Renee Monderer, Pasquale Montagna, Jacques Montplaisir, Eric A. Nofzinger, Luana Novelli, Maurice M. Ohayon, Alessandro Oldani, Rafael Pelayo, Giuseppe Plazzi, Satish C. Rao, Michael Schredl, Colin M. Shapiro, Michael H. Silber, Ravi Singareddy, Deepti Sinha, Gregory Stores, Shannon S. Sullivan, Michele Terzaghi, Michael J. Thorpy, Nikola N. Trajanovic, Thomas W. Uhde, Stefano Vandi, Roberto Vetrugno, John W. Winkelman, Antonio Zadra, Marco Zucconi
Edited by Michael J. Thorpy, Giuseppe Plazzi, Università di Bologna
Book: The Parasomnias and Other Sleep-Related Movement Disorders
Published online: 10 November 2010
Print publication: 10 June 2010, pp vii-ix
By Joëlle Adrien, M. Y. Agargun, Negar Ahmadi, Imran M. Ahmed, J. Todd Arnedt, Joseph Barbera, Simon Beaulieu-Bonneau, Marie E. Beitinger, Francesco Benedetti, Glenn Berall, Kirk J. Brower, Gregory M. Brown, Kumaraswamy Budur, Daniel P. Cardinali, Deirdre A. Conroy, Sara Dallaspezia, José Manuel de la Fuente, Paolo De Luca, Diana De Ronchi, Antonio Drago, Matthew R. Ebben, Irshaad Ebrahim, Pingfu Feng, Peter B. Fenwick, Lina Fine, Jonathan Adrian Ewing Fleming, Paul A. Fredrickson, Stephany Fulda, Lucile Garma, Roger Godbout, Reut Gruber, J. Allan Hobson, Andrea Iaboni, Anna Ivanenko, Mayumi Kimura, Milton Kramer, Christoph J. Lauer, Remy Luthringer, Luis Fernando Martínez, Sara Matteson-Rusby, Robert W. McCarley, Charles J. Meliska, Harvey Moldofsky, Charles M. Morin, Sricharan Moturi, Marie-Christine Ouellet, James F. Pagel, S. R. Pandi-Perumal, Barbara L. Parry, Timo Partonen, Wilfred R. Pigeon, Thomas Pollmächer, Nathalie Pross, Elliott Richelson, Naomi L. Rogers, Stefan Rupprecht-Mrozek, Philip Saleh, Andreas Schuld, Alessandro Serretti, Colin M. Shapiro, Christopher Michael Sinton, Marcel G. Smits, D. Warren Spence, Jürgen Staedt, Corinne Staner, Luc Staner, Axel Steiger, Deborah Suchecki, Michael J. Thorpy, Inna Voloh, Bradley G. Whitwell, Robert A. Zucker
Edited by S. R. Pandi-Perumal, Milton Kramer, University of Illinois, Chicago
Book: Sleep and Mental Illness
Print publication: 01 April 2010, pp ix-xiii
ARTEMiS (Automated Robotic Terrestrial Exoplanet Microlensing Search) – Hunting for planets of Earth mass and below
Martin Dominik, Keith Horne, Alasdair Allan, Nicholas J. Rattenbury, Yiannis Tsapras, Colin Snodgrass, Michael F. Bode, Martin J. Burgdorf, Stephen N. Fraser, Eamonn Kerins, Christopher J. Mottram, Iain A. Steele, Rachel A. Street, Peter J. Wheatley, Łukasz Wyrzykowski
Journal: Proceedings of the International Astronomical Union / Volume 3 / Issue S249 / October 2007
Published online by Cambridge University Press: 01 October 2007, pp. 35-41
Gravitational microlensing observations will lead to a census of planets that orbit stars of different populations. From 2008, ARTEMiS will provide an expert system that allows to adopt a three-step strategy of survey, follow-up and anomaly monitoring of gravitational microlensing events that is capable of detecting planets of Earth mass and below. The SIGNALMEN anomaly detector, an integral part, has already demonstrated its performance during a pilot season. Embedded into eSTAR, ARTEMiS serves as an open platform that links with existing microlensing campaigns. Real-time visualization of ongoing events along with an interpretation moreover allows to communicate "Science live to your home" to the general public.
A Theoretical Study of Ultra-Thin Films with the Wurtzite and Zinc Blende Structures
Frederik Claeyssens, Colin L. Freeman, John H. Harding, Neil L. Allan
Published online by Cambridge University Press: 01 February 2011, 1035-L09-08
Results are presented of periodic ab initio density functional theory calculations on thin films of (i) wurtzite ZnO (hexagonal) which terminate with the non-polar (10-10) surface, and with the polar (0001) and (000-1) surfaces, and (ii) zinc blende (cubic) ZnO which terminate with the non-polar (110) and with the polar (111) surfaces. Thin (fewer than 18 layers) films of wurtzite ZnO which terminate with the polar (0001) and (000-1) surfaces are found to be higher in energy than corresponding films in which these polar surfaces flatten out, forming a new planar 'graphitic'-like structure in which the Zn and O atoms are coplanar and the dipole is removed. This is the lowest-energy surface for ultra-thin films. For zinc-blende ZnO a graphitic-type solution, but with a different stacking of ZnO layers, is also comparable in energy to the non-polar (110) and polar (111) solutions. Consequences for crystal growth and the stabilization of thin films and nanostructures are discussed.
Is There a Delay in the Onset of the Antidepressant Effect of Electroconvulsive Therapy?
Colin R. Rodger, Allan I. F. Scott, Lawrence J. Whalley
Journal: The British Journal of Psychiatry / Volume 164 / Issue 1 / January 1994
Print publication: January 1994
The severity of depression in 11 drug-free unipolar patients diagnosed with definite major depressive disorder was assessed using the Hamilton Rating Scale for Depression during a course (5–10 treatments) of bilateral electroconvulsive therapy (ECT). The degree of improvement after three treatments of ECT was six times greater than the improvement that occurred over the remainder of the course. Although depressed patients who recover with ECT require repeated treatments, the treatments early in a course of ECT can have marked antidepressant effect.
Is Old-Fashioned Electroconvulsive Therapy More Efficacious?: A Randomised Comparative Study of Bilateral Brief-Pulse and Bilateral Sine-Wave Treatments
Allan I.F. Scott, Colin R. Rodger, Ruth H. Stocks, Anne P. Shering
Journal: The British Journal of Psychiatry / Volume 160 / Issue 3 / March 1992
In-patients suffering from major depressive disorder (endogenous subtype) were randomly allocated to treatment by either traditional ECT with constant-voltage modified sine-wave stimuli (n = 17) or modern, constant-current brief-pulse ECT (n = 14). All treatments were bilateral and monitored by simultaneous EEG recording. The severity of depressive illness was assessed the day before treatment, after three treatments, and seven days after the last treatment. The improvement and final depression rating scores, the likelihood of recovery, and the average number of treatments received were virtually identical in the two groups. We concluded that a policy of bilateral suprathreshold modern ECT monitored by EEG is as efficacious as traditional ECT.
The Prediction of Abnormal Evoked Potentials in Schizophrenic Patients by Means of Symptom Pattern
Harry B. Andrews, Allan O. House, John E. Cooper, Colin Barber
Journal: The British Journal of Psychiatry / Volume 149 / Issue 1 / July 1986
Published online by Cambridge University Press: 29 January 2018, pp. 46-50
The case notes of 23 schizophrenic patients who participated in a study of somato-sensory cortical evoked responses were examined for descriptions of the clinical features of their mental state at the time of first illness or most recent relapse. A comparison of syndrome scores derived from the Present State Examination with those derived from case notes showed that the latter was a reliable method of mental state description for the purposes of this study. A group of syndromes similar to that described in electrodermal research was identified from the case note review and found to be highly correlated with the presence of an abnormal response in the somato-sensory evoked potentials study.